Let's skip to the stochastic differential equation (SDE):
$$ dF=\left[\frac{\partial F}{\partial t}+\mu \frac{\partial F}{\partial x}+\frac{1}{2}\sigma^2 \frac{\partial^2 F}{\partial x^2} \right]dt + \sigma \frac{\partial F}{\partial x}dW $$
What does this equation actually represent? It suggests that a change in $F$ (represented by $\Delta F$) equals a change in $t$ times some derivatives of $F$ (the $[\cdots]\Delta t$ part) plus the change in the Brownian motion $W$ times another derivative (the $[\cdots]\Delta W$ part). This also suggests that we are looking at a differential equation, since we are equating small changes in $t$ and $W$ to changes in $F$.
However, technically speaking this does not make sense since the Brownian motion $W$ isn't differentiable! We cannot actually interpret this as a "differential" equation in the usual (deterministic) sense, because the derivative $\frac{dW}{dt}$ does not exist.
Instead, this differential equation is really just shorthand notation. By definition it is short for:
$$F(T, X(T)) - F(0, X(0)) = \int_0^T \left[\frac{\partial F}{\partial t}+\mu \frac{\partial F}{\partial x}+\frac{1}{2}\sigma^2 \frac{\partial^2 F}{\partial x^2} \right] dt + \int_0^T \sigma \frac{\partial F}{\partial x} dW_t$$
The SDE is just an integral equation in disguise! It looks like this is obtained by integrating the SDE. But again, that's just because the SDE is actually defined to be this integral equation. The reason is that we can integrate Brownian motions, but we cannot differentiate them. Keep that in mind.
The integral $\int[\cdots]dW_t$ is a tricky object to interpret. For instance, it is not defined in the usual Riemann sense. The integral itself is actually a random variable. We can generate a path $W(t)$, then perform the integral and that way obtain a sample from the distribution of this integral.
Unfortunately, in general you cannot really manipulate or solve these types of integrals like you would with an ordinary integral. So determining their distribution is in general very difficult. They do have one very important property, which I'll come back to.
Now the next step is obtained by making use of the original PDE. We are assuming that $F$ itself is defined to satisfy this equation. Now, first I'm going to assume $r=0$. I'll comment on this later on. This makes the argument a bit easier.
Therefore we have:
$$\int_0^T \left[\frac{\partial F}{\partial t}+\mu \frac{\partial F}{\partial x}+\frac{1}{2}\sigma^2 \frac{\partial^2 F}{\partial x^2} \right]dt = 0$$
and this gives:
$$ F(T, X(T)) = F(0, X(0)) + \int_0^T \sigma \frac{\partial F}{\partial x}dW_t$$
We are almost done. Next step we take the expectation value w.r.t. the generated paths $W(t)$ (on the interval $[0,T]$). Here comes the important property of stochastic integrals I mentioned earlier:
$$ E\Big[\int_0^T \sigma \frac{\partial F}{\partial x}dW_t \Big] = 0$$
The expectation value is simply zero! The mean of these types of integrals vanishes. Why? Well, heuristically the integral is just a sum over small changes $\Delta W_t$ weighted by whatever we are integrating against. The $\Delta W_t$ are all normally distributed with mean zero, so it's like we are summing a whole bunch of normal random variables with mean zero. The mean of this sum is also zero. The formal proof is of course trickier, but you get the point.
Back to our equation we get:
$$ E[F(T, X(T))] = E[F(0, X(0))] + 0 $$
where the $+0$ was the integral before. We assume that we are viewing from $t=0$, so we know the value of $F$ at this time. Therefore:
$$F(0, X(0)) = E[F(T, X(T))]$$.
And there we have our martingale property. Just as a side note, you would technically write this as:
$$F(0, X(0)) = E[F(T, X(T))|\mathcal{F}_{t=0}]$$.
where $\mathcal{F}_{t=0}$ is called a filtration. It basically means that the path $X(t)$ is completely known / fixed for $t \le 0$.
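As a sanity check of the martingale identity, here is a small Monte Carlo sketch with made-up parameters. It takes $\mu = 0$ and $r = 0$, and uses the explicit PDE solution $F(t,x) = x^2 - \sigma^2 t$ (which satisfies $\partial_t F + \tfrac{1}{2}\sigma^2 \partial_x^2 F = 0$), so $E[F(T, X(T))]$ should come out close to $F(0, X(0)) = x_0^2$:

```python
import math
import random

random.seed(0)

# Made-up parameters for illustration; mu = 0 and r = 0, so dX = sigma dW.
# F(t, x) = x**2 - sigma**2 * t solves dF/dt + (sigma**2/2) d^2F/dx^2 = 0.
sigma, T, x0, n_paths = 0.3, 1.0, 1.0, 200_000

total = 0.0
for _ in range(n_paths):
    w_T = math.sqrt(T) * random.gauss(0.0, 1.0)  # Brownian motion at time T
    x_T = x0 + sigma * w_T                       # X(T), since dX = sigma dW
    total += x_T ** 2 - sigma ** 2 * T           # F(T, X(T))

estimate = total / n_paths
print(estimate)  # should be close to F(0, X(0)) = x0**2 = 1.0
```

With 200,000 paths the Monte Carlo error is on the order of $10^{-3}$, so the printed estimate sits very close to 1.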
Now, I set $r=0$ in the derivation above. This is not really necessary. We could also set $G=e^{-rt}F$ to "absorb" the $rF$ term in the original PDE (or turn it around: define $F=e^{rt} G$ and substitute it into the PDE. You get the same PDE, but written in terms of $G$ and no $rG$ term on the right hand side). My derivation then applies also to $G$. The final expression $G(0, X(0)) = E[G(T, X(T))]$ can be transformed back to $F$ by subbing $G(t)=e^{-rt}F(t)$ giving:
$$F(0, X(0)) = e^{-rT}E[F(T, X(T))]$$
If you want to know more, then I suggest looking at a book like Shreve to study the properties of stochastic calculus and stochastic integration.
Same conclusion though.
Background/Setup
For any connected scheme $S$, let $\text{FEt}_S$ denote the category of finite etale $S$-schemes. Let $f : X\rightarrow Y$ be a morphism of connected schemes. Then for any finite etale cover $C\rightarrow Y$, we can pull it back to a finite etale cover of $X$, so after making a choice of pullbacks, we get an exact functor $f^* : \text{FEt}_Y\rightarrow\text{FEt}_X$.
Let $x,y$ be geometric points of $X,Y$ respectively, such that $f(x) = y$. Let $F_x : \text{FEt}_X\rightarrow\textbf{Sets}$ denote the fiber functor, which sends every finite etale cover $p : C\rightarrow X$ to the finite set $p^{-1}(x)$, and similarly with $F_y$. Since $f(x) = y$, the universal property of fiber products gives us a uniquely determined isomorphism of fiber functors $\eta : F_x\circ f^*\stackrel{\sim}{\longrightarrow} F_y$, and thus for any automorphism $\alpha\in\text{Aut}(F_x)$, via $\eta$ we obtain an automorphism of $F_y$. This defines a homomorphism $\pi_1(X,x) := \text{Aut}(F_x) \rightarrow \text{Aut}(F_y) =: \pi_1(Y,y)$.
My question:
This is all well and good, until I ask myself: if $X = Y$ and $f$ is an automorphism of $X$, when does $f$ induce the identity map on fundamental groups?
Specifically, fix a scheme $X$, a geometric point $x\in X$, and an automorphism $f\in\text{Aut}(X)$ such that $f(x) = x$. For any finite etale cover $p : C\rightarrow X$, we can pull it back via $f : X\rightarrow X$ to get $C\times_{X,f}X$. However, to actually define the pullback functor $f^*$, strictly speaking for any $p : C\rightarrow X$ in $\text{FEt}_X$ we really should give a concrete construction of the fiber product $C\times_{X,f}X$. In this situation, we can define this fiber product to simply be the same object $C$, where the projection map to the first component is the identity $C = C$, and the projection map to $X$ is the composition $C\stackrel{p}{\rightarrow} X\stackrel{f^{-1}}{\rightarrow} X$.
Now, given these choices of pullbacks, we should have a uniquely determined isomorphism of fiber functors $\eta : F_x\circ f^*\stackrel{\sim}{\longrightarrow} F_x$. But... for any $p : C\rightarrow X$ in $\text{FEt}_X$, because $f(x) = x$ and our choice of pullbacks, the fiber of $f^*C$ over $x$ (i.e., the geometric points of $C$ which get mapped to $x$ via $C\stackrel{p}{\rightarrow}X\stackrel{f^{-1}}{\rightarrow}X$) is literally the same (not just canonically isomorphic) as the fiber of $C$ over $x$ (i.e., the geometric points of $C$ which get mapped to $x$ via $C\stackrel{p}{\rightarrow}X$). Hence $F_x\circ f^*$ is actually equal to $F_x$, the uniquely determined isomorphism $\eta$ is just the identity on $F_x$, and the induced homomorphism $\text{Aut}(F_x)\rightarrow\text{Aut}(F_x)$ is also the identity.
This seems to show that the answer is "always".
But...I feel like this can't be right. For example, you could take $X$ to be an elliptic curve over $\mathbb{C}$, $x$ to be the point at infinity, and $f$ to be the automorphism $[-1]$. Then the induced automorphism of its topological fundamental group is nontrivial, given by inversion. Surely the same should be true of the etale fundamental group?
EDIT: To be more specific, the way that I see that $F_x\circ f^* = F_x$ is as follows. Let $pt$ denote $\text{Spec } k$ where $k$ is an algebraically closed field. Then the geometric point $x$ is given by a morphism $x : pt\rightarrow X$. For any cover $C\rightarrow X$, $F_x(C\rightarrow X)$ is defined to be the set of geometric points of $C$ over $x$. That is,
$$F_x(C\stackrel{p}{\rightarrow} X) = \text{Hom}_X(pt,C) = \{x'\in\text{Hom}(pt,C) : p\circ x' = x\}$$
Thus,
$$F_x(f^*(C\stackrel{p}{\rightarrow} X)) = F_x(C\stackrel{p}{\rightarrow}X\stackrel{f^{-1}}{\rightarrow}X) = \{x'\in\text{Hom}(pt,C) : f^{-1}\circ p\circ x' = x\}$$
But of course requiring that $f^{-1}\circ p\circ x' = x$ is the same as requiring that $p\circ x' = f\circ x$, and by assumption $f\circ x = x$, so $F_x(f^*(C\rightarrow X)) = F_x(C\rightarrow X)$. Alternatively you can also get this from the universal property of the fiber product diagram
$$\begin{array}{ccc}C & \stackrel{\text{id}}{\longrightarrow} & C \\ \downarrow & & \;\;\;\downarrow p \\ X & \stackrel{f}{\longrightarrow} & X\end{array}$$
where the vertical arrow on the left is the unique one making the diagram commute, i.e., it's $f^{-1}\circ p$.
thanks,
will
I am trying to prove soundness and completeness for S4.2 and I am considering Kripke frames which are reflexive, transitive and have the Church-Rosser property. Now, there is one thing that really puzzles me, though it might be quite obvious and due to some triviality I'm currently missing.
If I compute the corresponding Sahlqvist formula for $\Diamond\Box p \rightarrow \Box\Diamond p$ (which is an axiom of S4.2), what I obtain is $\forall x,y,z (Rxy \land Rxz \rightarrow \exists w (Ryw \land Rzw))$. So I take this axiom of S4.2 to define the class of Kripke frames which have the Church-Rosser property.
However, it seems to me that it is possible to find a counterexample: just consider the Kripke model $\mathbb{M}=(W,R,V)$ where:
$W = \{1, 2, 3, 4, 5, 6, 7\}$, $R = \{(1,2), (1,3), (2,4), (3,5), (5,7), (4,6)\}$, $V(p) = \{3, 4\}$
So we seem to have that $\mathbb{M}\vDash \Diamond\Box p \rightarrow \Box\Diamond p$. However, this frame clearly does not have the Church-Rosser property. Thus there is an evaluation $V$ s.t. the frame (W,R) validates the axioms of S4.2 but it does not have the Church-Rosser property.
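For what it's worth, claims like this can be checked mechanically. Below is a small brute-force sketch that evaluates $\Diamond\Box p \rightarrow \Box\Diamond p$ at every world. Since S4.2 frames are reflexive and transitive, I take the reflexive-transitive closure of the listed $R$ (an assumption about how the model above is meant to be read); note that under this reading $\Box p$ fails at every world, so the conditional holds only vacuously:

```python
W = {1, 2, 3, 4, 5, 6, 7}
R = {(1, 2), (1, 3), (2, 4), (3, 5), (5, 7), (4, 6)}
Vp = {3, 4}  # worlds where p is true

def refl_trans_closure(rel, worlds):
    # S4 frames are reflexive and transitive, so close the accessibility relation
    closed = set(rel) | {(w, w) for w in worlds}
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            for (c, d) in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

Rc = refl_trans_closure(R, W)
succ = {w: {v for (u, v) in Rc if u == w} for w in W}

box_p     = {w for w in W if succ[w] <= Vp}     # w satisfies Box p
dia_p     = {w for w in W if succ[w] & Vp}      # w satisfies Dia p
dia_box_p = {w for w in W if succ[w] & box_p}   # w satisfies Dia Box p
box_dia_p = {w for w in W if succ[w] <= dia_p}  # w satisfies Box Dia p

# w satisfies (Dia Box p -> Box Dia p) iff w not in dia_box_p, or w in box_dia_p
valid = all(w not in dia_box_p or w in box_dia_p for w in W)
print(valid)  # True: Box p fails at every world here, so the conditional is vacuous
```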
I guess there is some mistake in this reasoning, but I don't understand where or why. Can someone help?
A certain complicated option pricing formula results in products of Black Scholes $N$ components like this:
$-p_1N(d_1)N(d_6)+p_sN(d_2)N(d_5)\overset{?}{>}0$ where $p_s>p_1$
Trying to find a simple way to prove whether this inequality holds without having to use differential geometry on a multivariable function. Taylor series don't work because the $\ln$ terms and the $\alpha\sqrt{t}$ part can be very big. Interest is set to zero for simplicity.
$\begin{align} d_1 &= \frac{1}{\alpha\sqrt{t}}\left[\ln\left(\frac{p_1}{p_s}\right) + \left(0 + \frac{\alpha^2}{2}\right)t\right] \\ d_2 &= \frac{1}{\alpha\sqrt{t}}\left[\ln\left(\frac{p_1}{p_s}\right) + \left(0 - \frac{\alpha^2}{2}\right)t\right] \\ d_5 &= \frac{1}{\alpha\sqrt{t}}\left[ \left(0 + \frac{\alpha^2}{2}\right)t\right] \\ d_6 &= \frac{1}{\alpha\sqrt{t}}\left[\left(0 - \frac{\alpha^2}{2}\right)t\right] \\ \end{align}$
Trying to prove this with differentials requires taking the derivative of a complicated function with respect to the time/vol variable and then the $p_s$ and $p_1$ variables to determine if the manifold has a maximum above zero.
At the endpoints $\alpha\sqrt{t} \to 0$ and $\alpha\sqrt{t} \to \infty$ it equals zero, so the function is a frown or a smile with positive or negative curvature. This is an extremely hard problem because linear approximations don't work.
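This is not a proof, but a cheap numeric sanity check is easy: evaluate the expression on a grid of hypothetical parameters (with $r=0$ and $v = \alpha\sqrt{t}$ treated as a single variable) and look at the sign:

```python
from math import erf, log, sqrt

def N(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def expr(p1, ps, v):
    """-p1*N(d1)*N(d6) + ps*N(d2)*N(d5), with r = 0 and v = alpha*sqrt(t)."""
    k = log(p1 / ps) / v
    s = v / 2.0
    d1, d2, d5, d6 = k + s, k - s, s, -s
    return -p1 * N(d1) * N(d6) + ps * N(d2) * N(d5)

# crude grid over p1, the ratio ps/p1 > 1, and alpha*sqrt(t)
vals = [expr(p1, p1 * m, v)
        for p1 in (0.5, 1.0, 2.0)
        for m in (1.05, 1.5, 3.0)
        for v in (0.25, 0.5, 1.0, 2.0, 4.0)]
print(min(vals))  # positive everywhere on this grid (suggestive, not a proof)
```

On this grid the minimum stays positive, though it gets very small when $p_s/p_1$ is large and $\alpha\sqrt{t}$ is small, which is consistent with the endpoint behavior described above.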
On Monday, Celestalon kicked off the official Alpha Theorycrafting season by posting a Theorycrafting Discussion thread on the forums. And he was kind enough to toss a meaty chunk of information our way about Resolve, the replacement for Vengeance.
Resolve: Increases your healing and absorption done to yourself, based on Stamina and damage taken (before avoidance and mitigation) in the last 10 sec.
In today’s post, I want to go over the mathy details about how Resolve works, how it differs from Vengeance, and how it may (or may not) fix some of the problems we’ve discussed in previous blog posts.
Mathemagic
Celestalon broke the formula up into two components: one from stamina and one from damage taken. But for completeness, I’m going to bolt them together into one formula for Resolve $R$:
$$ R =\frac{\rm Stamina}{250~\alpha} + 0.25\sum_i \frac{D_i}{\rm MaxHealth}\left ( \frac{2 ( 10-\Delta t_i )}{10} \right )$$
where $D_i$ is an individual damage event that occurred $\Delta t_i$ seconds ago, and $\alpha$ is a level-dependent constant, with $\alpha(100)=261$. The sum is carried out over all damaging events that have happened in the last 10 seconds.
The first term in the equation is the stamina-based contribution, which is always active, even when out of combat. There’s a helpful buff in-game to alert you to this:
My premade character has 1294 character sheet stamina, which after dividing by 250 and $\alpha(90)=67$, gives me 0.07725, or about 7.725% Resolve. It’s not clear at this point whether the tooltip is misleadingly rounding down to 7% (i.e. using floor instead of round) or whether Resolve is only affected by the stamina from gear. The Alpha servers went down as I was attempting to test this, so we’ll have to revisit it later. We’ve already been told that this will update dynamically with stamina buffs, so having Power Word: Fortitude buffed on you mid-combat will raise your Resolve.
Once you’re in combat and taking damage, the second term makes a contribution:
I’ve left this term in roughly the form Celestalon gave, even though it can obviously be simplified considerably by combining all of the constants, because this form does a better job of illustrating the behavior of the mechanic. Let’s ignore the sum for now, and just consider an isolated damage event that does $D$ damage:
$$0.25\times\frac{D}{\rm MaxHealth}\left ( \frac{2 ( 10-\Delta t )}{10} \right )$$
The 0.25 just moderates the amount of Resolve you get from damaging attacks. It’s a constant multiplicative factor that they will likely tweak to achieve the desired balance between baseline (stamina-based) Resolve and dynamic (damage-based) Resolve.
The factor of $D/{\rm MaxHealth}$ means that we’re normalizing the damage by our max health. So if we have 1000 health and take an attack that deals 1000 damage (remember, this is before mitigation), this term gives us a factor of 1. Avoided auto-attacks also count here, though instead of performing an actual damage roll the game just uses the mean value of the boss’s auto-attack damage. Again, nothing particularly complicated here, it just makes Resolve depend on the percentage of your health the attack would have removed rather than the raw damage amount. Also note that we’ve been told that dynamic health effects from temporary multipliers (e.g. Last Stand) aren’t included here, so we’re not punished for using temporary health-increasing cooldowns.
The term in parentheses is the most important part, though. In the instant the attack lands, $\Delta t=0$ and the term in parentheses evaluates to $2(10-0)/10 = 2.$ So that attack dealing 1000 damage to our 1000-health tank would give $0.25\times 1 \times 2 = 0.5,$ or 50% Resolve.
However, one second later, $\Delta t = 1$, so the term in parentheses is only $2(10-1)/10 = 1.8$, and the amount of resolve it grants is reduced to 45%. The amount of Resolve granted continues to linearly decrease as time passes, and by the time ten seconds have elapsed it’s reduced to zero. Each attack is treated independently, so to get our total Resolve from all damage taken we just have to add up the Resolve granted by every attack we’ve taken, hence the sum in my equation.
You may note that the time-average of the term in parentheses is 1, which is how we get the advertised “averages to ~Damage/MaxHealth” that Celestalon mentioned in the post. In that regard, he’s specifically referring to just the part within the sum, not the constant factor of 0.25 outside of it. So in total, your average Resolve contribution from damage is 25% of Damage/MaxHealth.
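To make the per-event bookkeeping concrete, here’s a minimal sketch of the damage-based term (function and variable names are mine, not anything in-game):

```python
def resolve_damage_term(events, max_health):
    """events: list of (raw_damage, seconds_ago) pairs; returns the dynamic
    (damage-based) part of Resolve from the formula above."""
    return sum(0.25 * (dmg / max_health) * (2 * (10 - dt) / 10)
               for dmg, dt in events
               if 0 <= dt < 10)  # events older than 10 s contribute nothing

# the worked example: a 1000-damage hit on a 1000-health tank
print(resolve_damage_term([(1000, 0)], 1000))  # 0.5  -> 50% Resolve
print(resolve_damage_term([(1000, 1)], 1000))  # 0.45 -> 45% one second later
```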
Comparing to Vengeance
Mathematically speaking, there’s a world of difference between Resolve and Vengeance. First and foremost is the part we already knew: Resolve doesn’t grant any offensive benefit. We’ve talked about that a lot before, though, so it’s not territory worth re-treading.
Even in the defensive component though, there are major differences. Vengeance’s difference equation, if solved analytically, gives solutions that are exponentials. In other words, provided you were continuously taking damage (such that it didn’t fall off entirely), Vengeance would decay and adjust to your new damage intake rather smoothly. It also meant that damage taken at the very beginning of an encounter was still contributing some amount of Vengeance at the very end, again, assuming there was no interruption. And since it was only recalculated on a damage event, you could play some tricks with it, like taking a giant attack that gave you millions of Vengeance and then riding that wave for 20 seconds while your co-tank takes the boss.
Resolve does away with all of that. It flat-out says “look, the only thing that matters is the last 10 seconds.” The calculation doesn’t rely on a difference equation, meaning that when recalculating, it doesn’t care what your Resolve was at any time previously. And it forces a recalculation at fixed intervals, not just when you take damage. As a result, it’s much harder to game than Vengeance was.
Celestalon’s post also outlines a few other significant differences:
- No more ramp-up mechanism
- No taunt-transfer mechanism
- Resolve persists through shapeshifts
- Resolve only affects self-healing and self-absorbs
The lack of ramp-up and taunt-transfer mechanisms may at first seem like a problem. But in practice, I don’t think we’ll miss either of them. Both of these effects served offensive (i.e. threat) and defensive purposes, and it’s pretty clear that the offensive purposes are made irrelevant by definition here since Resolve won’t affect DPS/threat. The defensive purpose they served was to make sure you had some Vengeance to counter the boss’s first few hits, since Vengeance had a relatively slow ramp-up time but the boss’s attacks did not.
However, Resolve ramps up a lot faster than Vengeance does. Again, this is in part thanks to the fact that it isn’t governed by a difference equation. The other part is because it only cares about the last ten seconds.
To give you a visual representation of that, here’s a plot showing both Vengeance and Resolve for a player being attacked by a boss. The tank has 100 health and the boss swings for 30 raw damage every 1.5 seconds. Vengeance is shown in arbitrary units here since we’re not interested in the exact magnitude of the effect, just in its dynamic properties. I’ve also ignored the baseline (stamina-based) contribution to Resolve for the same reason.
As a final note, while the blog post says that Resolve is recalculated every second, it seemed like it was updating closer to every half-second when I fooled with it on alpha, so these plots use 0.5-second update intervals. Changing to 1-second intervals doesn’t significantly change the results (they just look a little more fragmented).
The plot very clearly shows the 50% ramp-up mechanism and slow decay-like behavior of Vengeance. Note that while the ramp-up mechanism gets you to 50% of Vengeance’s overall value at the first hit (at t=2.5 seconds), Resolve hits this mark as soon as the second hit lands (at 4.0 seconds) despite not having any ramp-up mechanism.
Resolve also hits its steady-state value much more quickly than Vengeance does. By definition, Resolve gets there after about 10 seconds of combat (t=12.5 seconds). But with Vengeance, it takes upwards of 30-40 seconds to even approach the steady-state value thanks to the decay effect (again, a result of the difference equation used to calculate Vengeance). Since most fights involve tank swaps more frequently than this, it meant that you were consistently getting stronger the longer you tanked a boss. This in turn helped encourage the sort of “solo-tank things that should not be solo-tanked” behavior we saw in Mists.
This plot assumes a boss who does exactly 30 damage per swing, but in real encounters the boss’s damage varies. Both Vengeance and Resolve adapt to mimic that change in the tank’s damage intake, but as you could guess, Resolve adapts much more quickly. If we allow the boss to hit for a random amount between 20 and 40 damage:
You can certainly see the similar changes in both curves, but Resolve reacts quickly to each change while Vengeance changes rather slowly.
One thing you’ve probably noticed by now is that the Resolve plot looks very jagged (in physics, we might call this a “sawtooth wave”). This happens because of the linear decay built into the formula. It peaks in the instant you take the attack – or more accurately, in the instant that Resolve is recalculated after that attack. But then every time it’s recalculated it linearly decreases by a fixed amount. For example, with a 1.5-second swing timer, a fresh hit’s weighting drops from 2.0 to 1.7 – that is, to 85% of its initial value – before the next swing lands, so total Resolve zig-zags in the manner shown.
The more frequently the boss attacks, the smoother that zig-zag becomes; conversely, a boss with a long swing timer will cause a larger variation in Resolve. This is apparent if we adjust the boss’s swing timer in either direction:
It’s worth noting that every plot here has a new randomly-generated sequence of attacks, so don’t be surprised that the plots don’t have the same profile as the original. The key difference is the size of the zig-zag on the Resolve curve.
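The sawtooth is easy to reproduce directly from the formula. Here’s a rough sketch using the toy numbers from the plots (30 damage per swing, 100 health, 1.5-second swing timer), sampling every 0.1 seconds – finer than the game’s actual update interval – through a steady-state stretch of the fight:

```python
def resolve_at(t, hit_times, dmg, max_health):
    # damage-based Resolve at time t: sum each hit's contribution over the last 10 s
    return sum(0.25 * (dmg / max_health) * (2 * (10 - (t - h)) / 10)
               for h in hit_times if 0 <= t - h < 10)

swing = 1.5
hits = [swing * k for k in range(20)]       # a boss swinging for ~30 seconds
samples = [resolve_at(0.1 * k, hits, 30, 100)
           for k in range(200, 290)]        # sample t in [20, 29), well past ramp-up
peak, trough = max(samples), min(samples)
print(round(peak, 4), round(trough, 4))     # peak at each swing, trough just before the next
```

The peak lands at each swing and the trough just before the next one, which is exactly the zig-zag visible on the plots.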
I’ve also run simulations where the boss’s base damage is 50 rather than 30, but apart from larger numbers on the y-axis there’s no real difference:
Note that even a raw hit of 50 – half the tank’s health per swing – is pretty conservative for a boss; heroic bosses in Siege have frequently had raw damage larger than the player’s health. But it’s not clear if that will still be the case with the new tanking and healing paradigm that’s been unveiled for Warlords.
If we make the assumption that raw damage will be lower, then these rough estimates give us an idea of how large an effect Resolve will be. If we guess at a 5%-10% baseline value from stamina, these plots suggest that Resolve will end up being anywhere from a 50% to 200% modifier on our healing. In other words, it has the potential to double or triple our healing output with the current tuning numbers. Of course, it’s anyone’s guess as to whether those numbers are even remotely close to what they’ll end up being by the end of beta.
Is It Fixed Yet?
If you look back over our old blog posts, the vast majority of our criticisms of Vengeance had to do with its tie-in to damage output. Those have obviously been addressed, which leaves me worrying that I’ll have nothing to rant about for the next two or three years.
But regarding everything else, I think Resolve stands a fair chance of addressing our concerns. One of the major issues with Vengeance was the sheer magnitude of the effect – you could go from having 50k AP to 600k AP on certain bosses, which meant your abilities got up to 10x more effective. Even though that’s an extreme case, I regularly noted having over 300k AP during progression bosses, a factor of around 6x improvement. Resolve looks like it’ll tamp down on that some. Reasonable bosses are unlikely to grant a multiplier larger than 2x, which will be easier to balance around.
It hasn’t been mentioned specifically in Celestalon’s post, but I think it’s a reasonable guess that they will continue to disable Resolve gains from damage that could be avoided through better play (i.e. intentionally “standing in the bad”). If so, there will be little (if any) incentive to take excess damage to get more Resolve. Our sheer AP scaling on certain effects created situations where this was a net survivability gain with Vengeance, but the lower multiplier should make that impossible with Resolve.
While I still don’t think it needs to affect anything other than active mitigation abilities, the fact that it’s a multiplier affecting everything equally rather than a flat AP boost should make it easier to keep talents with different AP coefficients balanced (Eternal Flame and Sacred Shield, specifically). And we already know that Eternal Flame is losing its Bastion of Glory interaction, another change which will facilitate making both talents acceptable choices.
All in all, I think it’s a really good system, if slightly less transparent. It’s too soon to tell whether we’ll see any unexpected problems, of course, but the mechanic doesn’t have any glaring issues that stand out upon first examination (unlike Vengeance). I still have a few lingering concerns about steady-state threat stability between tanks (ironically, due to the removal of Vengeance), but that is the sort of thing which will become apparent fairly quickly during beta testing, and at any rate shouldn’t reflect on the performance of Resolve.
On Monday, Celestalon kicked off the official Alpha Theorycrafting season by posting a Theorycrafting Discussion thread on the forums. And he was kind enough to toss a meaty chunk of information our way about
Resolve, the replacement for Vengeance.
Resolve: Increases your healing and absorption done to yourself, based on Stamina and damage taken (before avoidance and mitigation) in the last 10 sec.
In today’s post, I want to go over the mathy details about how Resolve works, how it differs from Vengeance, and how it may (or may not) fix some of the problems we’ve discussed in previous blog posts.
Mathemagic
Celestalon broke the formula up into two components: one from stamina and one from damage taken. But for completeness, I’m going to bolt them together into one formula for resolve $R$:
$$ R =\frac{\rm Stamina}{250~\alpha} + 0.25\sum_i \frac{D_i}{\rm MaxHealth}\left ( \frac{2 ( 10-\Delta t_i )}{10} \right )$$
where $D_i$ is an individual damage event that occurred $\Delta t_i$ seconds ago, and $\alpha$ is a level-dependent constant, with $\alpha(100)=261$. The sum is carried out over all damaging events that have happened in the last 10 seconds.
The first term in the equation is the stamina-based contribution, which is always active, even when out of combat. There’s a helpful buff in-game to alert you to this:
My premade character has 1294 character sheet stamina, which after dividing by 250 and $\alpha(90)=67$, gives me 0.07725, or about 7.725% Resolve. It’s not clear at this point whether the tooltip is misleadingly rounding down to 7% (i.e. using floor instead of round) or whether Resolve is only affected by the stamina from gear. The Alpha servers went down as I was attempting to test this, so we’ll have to revisit it later. We’ve already been told that this will update dynamically with stamina buffs, so having Power Word: Fortitude buffed on you mid-combat will raise your Resolve.
Once you’re in combat and taking damage, the second term makes a contribution:
I’ve left this term in roughly the form Celestalon gave, even though it can obviously be simplified considerably by combining all of the constants, because this form does a better job of illustrating the behavior of the mechanic. Let’s ignore the sum for now, and just consider an isolated damage event that does $D$ damage:
$$0.25\times\frac{D}{\rm MaxHealth}\left ( \frac{2 ( 10-\Delta t )}{10} \right )$$
The 0.25 just moderates the amount of Resolve you get from damaging attacks. It’s a constant multiplicative factor that they will likely tweak to achieve the desired balance between baseline (stamina-based) Resolve and dynamic (damage-based) Resolve.
The factor of $D/{\rm MaxHealth}$ means that we’re normalizing the damage by our max health. So if we have 1000 health and take an attack that deals 1000 damage (remember, this is before mitigation), this term gives us a factor of 1. Avoided auto-attacks also count here, though instead of performing an actual damage roll the game just uses the mean value of the boss’s auto-attack damage. Again, nothing particularly complicated here, it just makes Resolve depend on the percentage of your health the attack would have removed rather than the raw damage amount. Also note that we’ve been told that dynamic health effects from temporary multipliers (e.g. Last Stand) aren’t included here, so we’re not punished for using temporary health-increasing cooldowns.
The term in parentheses is the most important part, though. In the instant the attack lands, $\Delta t=0$ and the term in parentheses evaluates to $2(10-0)/10 = 2.$ So that attack dealing 1000 damage to our 1000-health tank would give $0.25\times 1 \times 2 = 0.5,$ or 50% Resolve.
However, one second later, $\Delta t = 1$, so the term in parentheses is only $2(10-1)/10 = 1.8$, and the amount of resolve it grants is reduced to 45%. The amount of Resolve granted continues to linearly decrease as time passes, and by the time ten seconds have elapsed it’s reduced to zero. Each attack is treated independently, so to get our total Resolve from all damage taken we just have to add up the Resolve granted by every attack we’ve taken, hence the sum in my equation.
You may note that the time-average of the term in parentheses is 1, which is how we get the advertised “averages to ~Damage/MaxHealth” that Celestalon mentioned in the post. In that regard, he’s specifically referring to just the part within the sum, not the constant factor of 0.25 outside of it. So in total, your average Resolve contribution from damage is 25% of Damage/MaxHealth.
Comparing to Vengeance
Mathematically speaking, there’s a world of difference between Resolve and Vengeance. First and foremost is the part we already knew: Resolve doesn’t grant any offensive benefit. We’ve talked about that a lot before, though, so it’s not territory worth re-treading.
Even in the defensive component though, there are major differences. Vengeance’s difference equation, if solved analytically, gives solutions that are exponentials. In other words, provided you were continuously taking damage (such that it didn’t fall off entirely), Vengeance would decay and adjust to your new damage intake rather smoothly. It also meant that damage taken at the very beginning of an encounter was still contributing some amount of Vengeance at the very end, again, assuming there was no interruption. And since it was only recalculated on a damage event, you could play some tricks with it, like taking a giant attack that gave you millions of Vengeance and then riding that wave for 20 seconds while your co-tank takes the boss.
Resolve does away with all of that. It flat-out says “look, the only thing that matters is the last 10 seconds.” The calculation doesn’t rely on a difference equation, meaning that when recalculating, it doesn’t care what your Resolve was at any time previously. And it forces a recalculation at fixed intervals, not just when you take damage. As a result, it’s much harder to game than Vengeance was.
Celestalon’s post also outlines a few other significant differences:
No more ramp-up mechanism No taunt-transfer mechanism Resolve persists through shapeshifts Resolve only affects self-healing and self-absorbs
The lack of ramp-up and taunt-transfer mechanisms may at first seem like a problem. But in practice, I don’t think we’ll miss either of them. Both of these effects served offensive (i.e. threat) and defensive purposes, and it’s pretty clear that the offensive purposes are made irrelevant by definition here since Resolve won’t affect DPS/threat. The defensive purpose they served was to make sure you had
some Vengeance to counter the boss’s first few hits, since Vengeance had a relatively slow ramp-up time but the boss’s attacks did not.
However, Resolve ramps up a
lot faster than Vengeance does. Again, this is in part thanks to the fact that it isn’t governed by a difference equation. The other part is because it only cares about the last ten seconds.
To give you a visual representation of that, here’s a plot showing both Vengeance and Resolve for a player being attacked by a boss. The tank has 100 health and the boss swings for 30 raw damage every 1.5 seconds. Vengeance is shown in arbitrary units here since we’re not interested in the exact magnitude of the effect, just in its dynamic properties. I’ve also ignored the baseline (stamina-based) contribution to Resolve for the same reason.
As a final note, while the blog post says that Resolve is recalculated every second, it seemed like it was updating closer to every half-second when I fooled with it on alpha, so these plots use 0.5-second update intervals. Changing to 1-second intervals doesn’t significantly change the results (they just look a little more fragmented).
The plot very clearly shows the 50% ramp-up mechanism and slow decay-like behavior of Vengeance. Note that while the ramp-up mechanism gets you to 50% of Vengeance’s overall value at the first hit (at t=2.5 seconds), Resolve hits this mark as soon as the second hit lands (at 4.0 seconds) despite not having any ramp-up mechanism.
Resolve also hits its steady-state value much more quickly than Vengeance does. By definition, Resolve gets there after about 10 seconds of combat (t=12.5 seconds). But with Vengeance, it takes upwards of 30-40 seconds to even approach the steady-state value thanks to the decay effect (again, a result of the difference equation used to calculate Vengeance). Since most fights involve tank swaps more frequently than this, it meant that you were consistently getting stronger the longer you tanked a boss. This in turn helped encourage the sort of “solo-tank things that should not be solo-tanked” behavior we saw in Mists.
This plot assumes a boss who does exactly 30 damage per swing, but in real encounters the boss’s damage varies. Both Vengeance and Resolve adapt to mimic that change in the tank’s damage intake, but as you could guess, Resolve adapts much more quickly. If we allow the boss to hit for a random amount between 20 and 40 damage:
You can certainly see the similar changes in both curves, but Resolve reacts quickly to each change while Vengeance changes rather slowly.
One thing you’ve probably noticed by now is that the Resolve plot looks very jagged (in physics, we might call this a “sawtooth wave”). This happens because of the linear decay built into the formula. It peaks in the instant you take the attack – or more accurately, in the instant that Resolve is recalculated after that attack. But then every time it’s recalculated it linearly decreases by a fixed percent. If the boss swings in 1.5-second intervals, then Resolve will zig-zag between its max value and 85% of its max value in the manner shown.
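To see where that zig-zag comes from, here’s a toy model of the calculation. To be clear, this is an illustrative sketch only, not Blizzard’s actual formula — I’m simply assuming each damage event’s contribution decays linearly to zero over the 10-second window, which reproduces the sawtooth shape (the exact min/max ratio depends on the decay details):

```python
def resolve(events, t, window=10.0):
    # Sum recent damage, weighting each event by a factor that decays
    # linearly from 1 (age zero) to 0 (age = window).
    return sum(dmg * (window - (t - s)) / window
               for s, dmg in events if 0 <= t - s < window)

# Boss swings for 30 raw damage every 1.5 s; Resolve recalculated every 0.5 s.
hits = [(i * 1.5, 30.0) for i in range(40)]
ticks = [i * 0.5 for i in range(30, 60)]        # steady-state portion only
values = [resolve(hits, t) for t in ticks]      # zig-zags between min and max
```

With these assumed numbers the toy model zig-zags between 94.5 and 115.5, i.e. dips to roughly 82% of the peak every swing cycle — the same fixed-percentage sawtooth behavior described above.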
The more frequently the boss attacks, the smoother that zig-zag becomes; conversely, a boss with a long swing timer will cause a larger variation in Resolve. This is apparent if we adjust the boss’s swing timer in either direction:
It’s worth noting that every plot here has a new randomly-generated sequence of attacks, so don’t be surprised that the plots don’t have the same profile as the original. The key difference is the size of the zig-zag on the Resolve curve.
I’ve also run simulations where the boss’ base damage is 50 rather than 30, but apart from the y-axis having large numbers there’s no real difference:
Note that even a raw damage of 50 (half the tank’s health in this example) is pretty conservative for a boss – heroic bosses in Siege have frequently had raw damages that were larger than the player’s health. But it’s not clear if that will still be the case with the new tanking and healing paradigm that’s been unveiled for Warlords.
If we make the assumption that raw damage will be lower, then these rough estimates give us an idea of how large an effect Resolve will be. If we guess at a 5%-10% baseline value from stamina, these plots suggest that Resolve will end up being anywhere from a 50% to 200% modifier on our healing. In other words, it has the potential to double or triple our healing output with the current tuning numbers. Of course, it’s anyone’s guess as to whether those numbers are even remotely close to what they’ll end up being by the end of beta.
Is It Fixed Yet?
If you look back over our old blog posts, the vast majority of our criticisms of Vengeance had to do with its tie-in to damage output. Those have obviously been addressed, which leaves me worrying that I’ll have nothing to rant about for the next two or three years.
But regarding everything else, I think Resolve stands a fair chance of addressing our concerns. One of the major issues with Vengeance was the sheer magnitude of the effect – you could go from having 50k AP to 600k AP on certain bosses, which meant your abilities got up to 10x more effective. Even though that’s an extreme case, I regularly noted having over 300k AP during progression bosses, a factor of around 6x improvement. Resolve looks like it’ll tamp down on that some. Reasonable bosses are unlikely to grant a multiplier larger than 2x, which will be easier to balance around.
It hasn’t been mentioned specifically in Celestalon’s post, but I think it’s a reasonable guess that they will continue to disable Resolve gains from damage that could be avoided through better play (i.e. intentionally “standing in the bad”). If so, there will be little (if any) incentive to take excess damage to get more Resolve. Our sheer AP scaling on certain effects created situations where this was a net survivability gain with Vengeance, but the lower multiplier should make that impossible with Resolve.
While I still don’t think it needs to affect anything other than active mitigation abilities, the fact that it’s a multiplier affecting everything equally rather than a flat AP boost should make it easier to keep talents with different AP coefficients balanced (Eternal Flame and Sacred Shield, specifically). And we already know that Eternal Flame is losing its Bastion of Glory interaction, another change which will facilitate making both talents acceptable choices.
All in all, I think it’s a really good system, if slightly less transparent. It’s too soon to tell whether we’ll see any unexpected problems, of course, but the mechanic doesn’t have any glaring issues that stand out upon first examination (unlike Vengeance). I still have a few lingering concerns about steady-state threat stability between tanks (ironically, due to the removal of Vengeance), but that is the sort of thing which will become apparent fairly quickly during beta testing, and at any rate shouldn’t reflect on the performance of Resolve.
Let $X \sim \mathcal{N}(0, \Sigma)$ be a Gaussian vector in dimension $N$. I am interested in the probability density of the random variable $\lVert X \rVert_2$.
If $\Sigma = {I}_N$, we recognize the $\chi$ distribution. In particular, its probability density is given by $$p(x) \propto x^{{N} -1} \mathrm{e}^{-\frac{x^2}{2}} 1_{x\geq 0} .$$
In the general case, we can decompose the matrix $\Sigma = P^t D P$ with $P$ orthogonal and $D=D(\lambda_1,\cdots,\lambda_N)$ diagonal and $\lVert X \lVert_2 \sim \lVert \mathcal{N} (0,D)\lVert_2$. What can we say about the probability density of $\lVert X \lVert_2$ in this general case?
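Even without a closed-form density, the distribution is easy to explore by simulation. A quick Monte Carlo sketch (the covariance matrix is an arbitrary example of mine); one useful sanity check is that $\mathbb{E}\,\lVert X \rVert_2^2 = \operatorname{tr}(\Sigma)$ regardless of the correlation structure:

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])                  # example covariance, tr = 3
L = np.linalg.cholesky(Sigma)
X = rng.standard_normal((100_000, 2)) @ L.T     # samples from N(0, Sigma)
norms = np.linalg.norm(X, axis=1)               # samples of ||X||_2
mean_sq = np.mean(norms**2)                     # should be close to tr(Sigma)
```

A histogram of `norms` then approximates the density in question.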
Thanks in advance.
If I win when I flip more consecutive heads than I've ever flipped tails, what's my probability of winning?
Note by Sumukh Bansal 1 year, 11 months ago
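Under my reading of the rules — you win the moment your current run of consecutive heads exceeds the total number of tails you have flipped so far — the probability can be estimated by direct simulation. A quick sketch (truncating at 200 flips, which biases the estimate down only negligibly):

```python
import random

def play(max_flips=200):
    tails = streak = 0
    for _ in range(max_flips):
        if random.random() < 0.5:      # heads
            streak += 1
            if streak > tails:
                return True            # more consecutive heads than total tails
        else:                          # tails: the run of heads is broken
            tails += 1
            streak = 0
    return False                       # treat unresolved games as losses

random.seed(1)
trials = 100_000
p_hat = sum(play() for _ in range(trials)) / trials
```

The estimate lands near 0.711, consistent with the answers in this thread.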
This is a tricky one. I counted the number of allowable strings of length $n$ starting at $n = 1$ and obtained the sequence $1,0,1,0,1,1,1,2,3,4,6,10,15,\ldots$, which matches this OEIS sequence. So the desired probability of winning is

$$P = \dfrac{1}{2} + \dfrac{1}{2^{3}} + \dfrac{1}{2^{5}} + \dfrac{1}{2^{6}} + \dfrac{1}{2^{7}} + \dfrac{2}{2^{8}} + \dfrac{3}{2^{9}} + \dfrac{4}{2^{10}} + \dfrac{6}{2^{11}} + \dfrac{10}{2^{12}} + \cdots$$

Now the sequence of numerators is Fibonacci-like in that the ratio of successive terms approaches the golden ratio $\phi$. This allows us to estimate that $0.798 \lt P \lt 0.799$.

Edit: As Miles Koumouris has pointed out, my initial calculation was incorrect; the estimate should actually be $P \approx 0.7112$.
Thanks Sir
Wow - I didn't expect this! Nice... Is there a way to prove the link?
While I've confirmed the match up to $n = 13$ I haven't been able to prove it yet. It involves recurrence relations, but seems quite complicated. I'll keep working at it.
@Brian Charlesworth – Also, would there be a way to calculate the exact value of $P$? It's pretty easy to do for most recurrences, but do you think it can be done for this one?
@Miles Koumouris – Also, regarding your approximation, since (from observation) $\dfrac{a(n+1)}{a(n)}<\phi \;\; \forall \;\; n\geq 12$, shouldn't $P<\sum_{k=1}^{12}\dfrac{a(k)}{2^k}+\dfrac{a(13)}{2^{13}\left(1-\frac{\phi }{2}\right)}=\dfrac{1439}{2048}+\dfrac{15}{8192\left(1-\frac{\sqrt{5}+1}{4}\right)}\approx 0.7122$? I feel like I missed some information about the sequence when I made the assumption - would it be possible for you to show how you got your approximation?
@Miles Koumouris – Sorry for the slow reply. Yes, your calculation is correct; I should have triple-checked mine. :/
I believe that
$$P=1-\left(\prod_{n=1}^{\infty }\left(1-\left(\dfrac{1}{2}\right)^n\right)\right)=0.7112119049133975787211002780707692\ldots$$
Based on the assumption that the ratio between successive terms of $a(n)$ is monotonically increasing and upper-bounded by $\phi$ for all $n\geq 13$, we can design an approximation technique for $P$:
$$\sum_{k=1}^M\left(\dfrac{a(k)}{2^k}\right)+\dfrac{a(M+1)}{2^{M+1}\left(1-\left(\frac{a(M+2)}{2a(M+1)}\right)\right)}<P<\sum_{k=1}^N\left(\dfrac{a(k)}{2^k}\right)+\dfrac{a(N+1)}{2^{N+1}\left(1-\left(\frac{\phi }{2}\right)\right)}$$
On the OEIS link, there are only $60$ terms provided, so setting $M=58$ and $N=59$ yields
$$\dfrac{3467710474555566700753916705}{4875776756717555040447889408}<P<\dfrac{71576559073\sqrt{5}+819971339679777159}{1152921504606846976}$$
which implies

$$0.7112119048\ldots <P<0.7112119051\ldots$$
and hence we can confirm $P$ rounded to $7$ decimal places as $0.7112119$. Upon searching this value, I found this link to an almost identical problem, with an answer of $0.7112119\ldots$, found by evaluating the product
$$P=1-\left(\prod_{n=1}^{\infty }\left(1-\left(\dfrac{1}{2}\right)^n\right)\right)$$
So thanks to the much smarter people who actually did the work, came up with this, and proved it, I believe it wouldn't be too hard to use their solutions and verify that this is indeed the answer.
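For anyone who wants to check the product numerically, it converges very quickly — a short sketch:

```python
# Evaluate P = 1 - prod_{n>=1} (1 - (1/2)^n), truncating at n = 60;
# the tail beyond that changes the value by far less than 1e-15.
prod = 1.0
for n in range(1, 61):
    prod *= 1.0 - 0.5 ** n
P = 1.0 - prod
```

This gives $P \approx 0.7112119049$, matching the value above.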
Great work! With several approaches to the problem yielding the same value I think that it's safe to conclude that the answer is verified. The formula you have for $P$ is surprisingly elegant. :)
Thanks! Although I didn't really do any work - the others on stack exchange solved it, and you were the one that initially found the recurrence relation. I am surprised too by how simple the formula is... do you think $P$ can be expressed in closed form? Something tells me it can't, but I'm not sure how one would go about proving it.
@Miles Koumouris – I would doubt that there is a closed form. It appears to be related to the Pentagonal Number Theorem.
@Brian Charlesworth – Thanks! I'll have a read.
Dunno if this is right but....
Define $P_n$ as the probability of winning starting right after you have flipped a total of $n$ tails.
$$P_n=\left(\dfrac{1}{2}\right)^{n+1}+\left(1-\left(\dfrac{1}{2}\right)^{n+1}\right)P_{n+1}$$
$$\Leftrightarrow (2^n-1)P_n=2^nP_{n-1}-1$$
Pretty obvious that $P(\infty)=0$.
Aaaand I dunno if this is solvable x'D
0.5
Since there is no way to lose, and it is possible to win given any previous sequence of tosses, the reasoning to solve this problem changes. The only way to 'not win' is to toss an endless sequence, since the game can only terminate in a win. There are clearly infinitely many of these endless games, but since the probability of any single one of these endless games is infinitesimally small ($\left(\dfrac{1}{2}\right)^N$ as $N\rightarrow \infty$), it would be sufficient to prove that the infinite number of these endless games is not an exponential divergence. I'm not really sure how to address this...
This question already has an answer here:
Time reversal symmetry and $T^2 = -1$ (2 answers)
I'm a mathematician interested in abstract QFT. I'm trying to undersand why, under certain (all?) circumstances, we must have $T^2 = -1$ rather than $T^2 = +1$, where $T$ is the time reversal operator. I understand from the Wikipedia article that requiring that energy stay positive forces $T$ to be represented by an anti-unitary operator. But I don't see how this forces $T^2=-1$. (Or maybe it doesn't force it, it merely allows it?)
Here's another version of my question. There are two distinct double covers of the Lie group $O(n)$ which restrict to the familiar $Spin(n)\to SO(n)$ cover on $SO(n)$; they are called $Pin_+(n)$ and $Pin_-(n)$. If $R\in O(n)$ is a reflection and $\tilde{R}\in Pin_\pm(n)$ covers $R$, then $\tilde{R}^2 = \pm 1$. So saying that $T^2=-1$ means we are in $Pin_-$ rather than $Pin_+$. (I'm assuming Euclidean signature here.) My question (version 2): Under what circumstances are we forced to use $Pin_-$ rather than $Pin_+$ here? |
We know that any particle moving at the speed of light will have infinite mass. Since even light consists of particles, what then is the mass of these particles? How should we imagine this?
The mass of a photon is anywhere between zero and $10^{-18} ~ eV/c^2$.
No need to write photon or light with capitals, by the way.
Massive objects simply cannot move at the speed of light. Light, or photons, are massless particles. They have 0 mass, which is why they can (and have to) move at the speed of light.
From the theory of special relativity you get the useful relationship
$$E^2 = \Big(m\cdot c^2 \Big)^2 + \Big(p \cdot c \Big)^2 \quad .$$
All experimental results up to now show that $m=0$ for a photon. Therefore
$$E^2 = 0 + (p\cdot c)^2 \\ E = p\cdot c = \hbar \cdot {k} \cdot c = \hbar\cdot \omega$$
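As a small numerical illustration of $E = pc = \hbar\omega$ (equivalently $E = hc/\lambda$, since $p = h/\lambda$), here is a sketch computing the energy of a visible-light photon; the constant values are standard, rounded for readability:

```python
h_eV = 4.135667696e-15     # Planck constant in eV*s
c = 2.99792458e8           # speed of light in m/s

def photon_energy_eV(wavelength_m):
    # E = h*c / lambda, valid precisely because the photon is massless (E = p*c).
    return h_eV * c / wavelength_m

E_green = photon_energy_eV(500e-9)   # 500 nm green light, about 2.48 eV
```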
BTW: In my opinion it is wrong to talk about relativistic masses as you did in your post. The mass is invariant!
The equation can be simply derived by using the expression for the relativistic momentum
$$ p = \cfrac{m\cdot v}{\sqrt{1-\frac{v^2}{c^2}}} \qquad ,$$
and the relativistic energy
$$ E = \cfrac{m\cdot c^2}{\sqrt{1-\frac{v^2}{c^2}}} \qquad .$$
Jupyter Notebook here: https://git.io/fjRjL
PDF Article here: http://dx.doi.org/10.13140/RG.2.2.17472.58886
For an explicit compressible algorithm the maximum timestep one can take is dictated by both the advective and acoustic speeds in the flow.
This is given by the famous CFL condition \begin{equation} V \frac{\delta t}{\Delta x} \leq 1 \end{equation} where $V$ is the maximum speed in the flow. In this case, \begin{equation} V = u + c \end{equation} where $u$ is the advective speed and $c$ is the speed of sound. One can turn this into useful quantities in terms of the Mach number, $M=\frac{u}{c}$.

Number of Timesteps
One such useful quantity is the number of timesteps required to advect a fluid front a single grid point, $\Delta x$.
In a single timestep, $\delta t$, the fluid moves (by advection) by a distance $\delta x = u \delta t$. Assume it takes $n$ timesteps of size $\delta t$ to move a distance equal to the grid spacing $\Delta x$. In other words, let $\Delta t = n \delta t$ such that $\Delta x = u \Delta t$. Now \begin{equation} \Delta t = n\delta t = n \frac{\Delta x}{V} \end{equation} or \begin{equation} n = V \frac{\Delta t}{\Delta x} = (u+c)\frac{\Delta t}{\Delta x} \end{equation} now divide by the speed of sound, $c$ \begin{equation} n = (M+1)\frac{c\Delta t}{\Delta x} \end{equation} but, recall that the fluid front moves at the advective velocity $u$ such that $\Delta x = u \Delta t$, this means \begin{equation} \label{eq:ndt} n = (M+1)\frac{c}{u} = \frac{M+1}{M} = 1 + \frac{1}{M} \end{equation}

Courant Number
Another useful quantity is the (advective) Courant number ($\text{Cr} = u\frac{\delta t}{\Delta x}$) as a function of the Mach number. This one is easy. Starting with
\begin{equation} \delta t = \frac{\Delta x}{V} = \frac{\Delta x}{u+c} \end{equation} multiply both sides by $\frac{u}{\Delta x}$ \begin{equation} \frac{u}{\Delta x}\delta t = \frac{u}{\Delta x} \frac{\Delta x}{u+c} \end{equation} or \begin{equation} \text{Cr} = \frac{u}{u+c} = \frac{M}{M+1} \label{eq:courant} \end{equation}

Preconditioning
To alleviate this strict limitation on the timestep size, several techniques have been introduced to manipulate the speed of sound in the governing equations without modifying much of the dynamics. One popular approach is the ASR method. The technique results in effectively scaling the speed of sound by a factor, $\alpha$, such that
\begin{equation} c \to \frac{c}{\alpha}; \quad \alpha > 1 \end{equation} This means that the Mach number gets amplified by a factor $\alpha$, i.e. $M\to\alpha M$. This changes \eqref{eq:ndt} to \begin{equation} \label{eq:nsteps-precon} n =\frac{\alpha M + 1}{\alpha M}= 1 + \frac{1}{\alpha M} \end{equation} and \eqref{eq:courant} to \begin{equation} \label{eq:courant-precon} \text{Cr} = \frac{\alpha M}{\alpha M+1} \end{equation}

Cost Analysis
The plots in Figure (\ref{fig:test}) compare pressure-based methods to preconditioned and un-preconditioned density-based methods. But they do not compare cost. The real question is: what if the cost of a single pressure-based timestep is 10 times that of an ASR-100 timestep? Where is the break-even Mach number? Here, the break-even Mach number is defined as the Mach number below which pressure-based methods cost less in terms of time-to-solution and above which density-based methods cost less.
We can cleverly estimate this cost as follows. Define the cost ratio, $r$ as the ratio of the cost per timestep of a pressure-based method to that of a density-based method, \begin{equation} r = \frac{\text{cost per timestep of pressure-based}}{\text{cost per timestep of density-based}} = \frac{p\text{-cost}}{\rho\text{-cost}} \end{equation} Now, for a density based method, it takes $n$ steps to move a fluid front by one grid point. The cost for that operation is \begin{equation} \rho\text{-cost}\times n = \rho\text{-cost}\times\frac{\alpha M + 1}{\alpha M} \end{equation} For a pressure based method, by definition, it takes a single timestep to move a fluid front by a single grid point. This operation costs $p\text{-cost}$. The breakeven Mach number is when those two are equal, i.e. \begin{equation} \rho\text{-cost}\times n = p\text{-cost} \end{equation} or, dividing by $\rho\text{-cost}$ and substituting for the value of $n$, we have \begin{equation} \frac{\alpha M_\text{breakeven} + 1}{\alpha M_\text{breakeven}} = r \end{equation} or \begin{equation} M_\text{breakeven} = \frac{1}{\alpha (r-1)} \end{equation} |
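The relations derived above can be collected into a short sketch (the function names are mine):

```python
def steps_per_cell(M, alpha=1.0):
    # Timesteps needed to advect a front one grid cell: n = 1 + 1/(alpha * M)
    return 1.0 + 1.0 / (alpha * M)

def courant(M, alpha=1.0):
    # Advective Courant number: Cr = alpha*M / (alpha*M + 1)
    aM = alpha * M
    return aM / (aM + 1.0)

def mach_breakeven(alpha, r):
    # Mach number at which pressure- and density-based time-to-solution match.
    return 1.0 / (alpha * (r - 1.0))
```

For example, at $M = 0.1$ with no preconditioning an explicit scheme needs 11 steps per cell, while with $\alpha = 100$ and a cost ratio $r = 10$ the break-even Mach number is $1/900 \approx 0.0011$.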
Statistics - Forward and Backward Stepwise (Selection|Regression)

1 - About
Stepwise methods have the same ideas as best subset selection but they look at a more restrictive set of models.
Between backward and forward stepwise selection, there's just one fundamental difference: whether you start with the null model and add predictors one at a time (forward), or start with the full model and remove them one at a time (backward).
At each step:
- we're not looking at every single possible model that contains k predictors, as in best subset selection; we're only looking at the models that contain the k minus 1 predictors we already chose in the previous step.
- we just choose the variable that gives the biggest improvement over the model we had a moment earlier.

2 - Articles Related

3 - Selection

3.1 - Forward
Forward Selection chooses a subset of the predictor variables for the final model.
We can do forward stepwise in context of linear regression whether n is less than p or n is greater than p.
Forward selection is a very attractive approach, because it's both tractable and it gives a good sequence of models.
1. Start with the null model. The null model has no predictors, just an intercept (the mean of Y).
2. Fit p simple linear regression models, each with one of the variables plus the intercept. In other words, search through all the single-variable models and pick the best one (the one that results in the lowest residual sum of squares); fix this variable in the model.
3. Now search through the remaining p minus 1 variables and find out which variable should be added to the current model to best improve the residual sum of squares.
4. Continue until some stopping rule is satisfied, for example when all remaining variables have a p-value above some threshold.

3.2 - Backward
Unlike forward stepwise selection, it begins with the full least squares model containing all p predictors, and then iteratively removes the least useful predictor, one-at-a-time.
In order to be able to perform backward selection, we need to be in a situation where we have more observations than variables because we can do least squares regression when n is greater than p. If p is greater than n, we cannot fit a least squares model. It's not even defined.
1. Start with all variables in the model.
2. Remove the variable with the largest p-value, that is, the variable that is the least statistically significant. The new (p - 1)-variable model is fit, and the variable with the largest p-value is again removed.
3. Continue until a stopping rule is reached. For instance, we may stop when all remaining variables have a significant p-value defined by some significance threshold.

4 - Characteristics

4.1 - Overfitting
Forward and backward stepwise selection is not guaranteed to give us the best model containing a particular subset of the p predictors, but that's the price to pay in order to avoid overfitting. Even if p is less than 40, looking at all possible models may not be the best thing to do. The point is that it is not always best to do a full search, even when you can, because we pay a price in variance (and thus in test error). Just because best subset has a better model on the training data doesn't mean that it's really going to be a better model overall in the context of test data, which is what we really care about.
4.2 - Correlation and RSS
Just like in best subset selection, we get p plus 1 models from M0 to Mp. The difference is that these models are nested.
M1 contains the predictor in M0, plus one more. M2 contains M1 plus one more predictor. M3 contains the predictors in M2 plus one more, and so on.
These models are nested in a way that wasn't the case for best subset selection.
Backward and forward stepwise do not search among all possible models, so for a given model size they will typically have an RSS above that of best subset. This happens only when there's correlation between the features: it's pretty easy to show that if the variables had no correlation, then the variables chosen by the two methods would be exactly the same. Because of the correlation between the features you can get a discrepancy between best subset and forward stepwise.
They still have the same RSS on two points:
- the null model, because they each contain it
- the model with all p predictors, because they each consider the model with all p predictors
But in between there is a gap.
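A minimal sketch of the greedy forward pass described in section 3.1, using the residual sum of squares directly as the selection criterion (a simplification of the p-value-based stopping rules mentioned above):

```python
import numpy as np

def forward_stepwise(X, y, k):
    # At each step, add the predictor whose inclusion yields the lowest RSS.
    n, p = X.shape
    chosen = []
    for _ in range(k):
        best = None
        for j in range(p):
            if j in chosen:
                continue
            A = np.column_stack([np.ones(n)] + [X[:, c] for c in chosen + [j]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = float(np.sum((y - A @ beta) ** 2))
            if best is None or rss < best[0]:
                best = (rss, j)
        chosen.append(best[1])
    return chosen

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 5))
y = 2 * X[:, 3] - X[:, 1]          # only predictors 3 and 1 actually matter
selected = forward_stepwise(X, y, 2)
```

Note the nesting: the model after step k always contains the model from step k minus 1, exactly as described above.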
4.3 - Computation
Instead of looking at 2 to the p models, the backward and forward selection approaches search through only around p squared models:
<MATH> \begin{array}{rcl} \text{Model space} & \approx & 1 + p + (p-1) + (p-2) + \dots + 1 \\ & = & 1 + \frac{p(p + 1)}{2} \approx p^2 \\ \end{array} </MATH>
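The count above is easy to sanity-check in code — a tiny sketch contrasting it with best subset's 2^p:

```python
def models_examined(p):
    # 1 fit for the null model, then p, p-1, ..., 1 candidate fits per step
    return 1 + p * (p + 1) // 2

# Best subset selection would instead examine 2**p models.
```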
I want to know what can happen if we multiply both sides of an equation by $\frac{1}{x}$, where $x$ is a variable.
I mean, is it possible that we get redundant equations? Or defective equations ?
You might find it useful to argue "by cases". An example would be instructive:
Find all $x$ satisfying $x^2 + x = 0$.
Seeing the common factor of $x$, it is tempting to divide both sides by $x$ (i.e., multiply both sides by $\frac{1}{x}$). Any time you divide by an unknown quantity, however, you should be concerned about the case where that quantity is zero (since division by zero is undefined). You could use cases to organize your thinking: either $x$ is zero or it is not.
Case 1: $x = 0$
Since $x = 0$, we can substitute to find $0^2 + 0$ is indeed $0$, so $x = 0$ is a solution.
Case 2: $x \neq 0$
Since $x \neq 0$, it is perfectly legal to divide both sides of the equation by $x$. We are left with the simpler equation $x + 1 = 0$, which has solution $x = -1$.
Putting the two cases together, we see that both $x = 0$ and $x = -1$ are solutions. Notice that if you had divided both sides by $x$ without thinking in cases, you would have missed the $x = 0$ solution.
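The two cases can be verified mechanically. A small sketch (the brute-force helper is mine, purely for illustration):

```python
def roots_among(f, candidates, tol=1e-12):
    # Keep the candidate values that actually satisfy f(x) = 0.
    return [x for x in candidates if abs(f(x)) < tol]

full    = roots_among(lambda x: x**2 + x, range(-5, 6))   # both solutions kept
divided = roots_among(lambda x: x + 1,    range(-5, 6))   # after dividing by x
```

The first search finds both -1 and 0, while the divided equation retains only -1: dividing by $x$ silently discarded the $x = 0$ solution.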
I don't think there can be any such situation where multiplying both sides by $\frac 1x$ gives a redundant equation (one with either no solution or infinitely many solutions) or a "defective equation," as long as $x \neq 0$. If you do so, the balance in the equation remains; this is why you must apply the same operation to both sides. As an example, take the linear equation $x + 2 = 5$. When you solve it, you get $x = 3$. But if you were to start by multiplying both sides of the equation by $\frac 1x$, you'll see something like this: $$\begin{align} x + 2 & = 5 && \text{Given} \\ \frac 1x (x + 2) & = 5\left(\frac 1x\right) && \text{Multiply both sides by $\frac 1x$} \\ \frac{x+2}{x} & = \frac 5x && \text{Result of above operation} \\ x \cdot \frac{x + 2}{x} & = \frac 5x \cdot x && \text{Multiplying both sides by $x$ to cancel the denominator out} \\ x + 2 & = 5 && \text{We end up back where we started} \\ x & = 3 && \text{Solving for $x$} \end{align} $$
It did almost nothing to the original equation! Hence, multiplying both sides by $\frac 1x$ doesn't change the solutions, as long as $x \neq 0$.
I am reading a paper using log-gabor filters for feature detection. I was thinking about the difference between Gabor filters and log-gabor filters. Can anyone tell me the difference(s), and a way to implement them? Thanks in advance.
The 1D gabor filter has the following form in the frequency domain:
$$G_{b(\sigma,\omega_0)}(\omega) = \text{exp}\left(-\frac{\sigma^2}{2}(\omega - \omega_0)^2\right)$$
The 1D log-gabor filter is:
$$G_{l(\sigma,\omega_0)}(\omega) = \text{exp}\left(-\frac{\ln^2(\omega/\omega_0)}{2\ln^2(\sigma)}\right)$$
Log-gabor filters are used because they have 0 DC component for arbitrary large bandwidth, and size distribution of features in an image is often logarithmic.
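A small sketch of the 1D log-Gabor transfer function, showing the zero-DC property explicitly (the parameter values here are arbitrary choices of mine; $\sigma$ is typically chosen below 1):

```python
import numpy as np

def log_gabor_1d(w, w0, sigma):
    # Log-Gabor transfer function; defined as 0 at w = 0, which is exactly
    # the zero-DC-component property discussed above.
    G = np.zeros_like(w, dtype=float)
    nz = w > 0
    G[nz] = np.exp(-np.log(w[nz] / w0) ** 2 / (2 * np.log(sigma) ** 2))
    return G

w = np.linspace(0.0, 1.0, 101)
G = log_gabor_1d(w, w0=0.25, sigma=0.55)   # peak response at w = w0
```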
The excellent paper On the Choice of Band-Pass Quadrature Filters by Djamal Boukerroui, J. Alison Noble and Michael Brady explains further differences between these two filters, as well as others.
Here is some MATLAB code for the 2D version:
[rows, cols] = size(I);
% FFT mesh
[ux, uy] = meshgrid(([1:cols]-(fix(cols/2)+1))/(cols-mod(cols,2)), ...
                    ([1:rows]-(fix(rows/2)+1))/(rows-mod(rows,2)));
ux = ifftshift(ux);   % Quadrant shift to put 0 frequency at the corners
uy = ifftshift(uy);
% Convert to polar coordinates
th = atan2(uy,ux);
r = sqrt(ux.^2 + uy.^2);
% Create spectrum
filterFFT = 1.0/wavelength;
filterFFT = exp((-(log(r/filterFFT)).^2) / (2 * log(sigma)^2));
filterFFT(1,1) = 0;
% Filter image
I_filtered = real(ifft2(fft2(I) .* filterFFT));
Advantage of 0 DC
The advantage of having 0 DC component is that the response of the filter doesn't depend on the mean value of the signal. It also means you don't have an infinite impulse response. The Gabor filter has a small non-zero component around DC that is usually removed.
For feature detection, say you designed a filter kernel to match a step (1D) or edge (2D). If the filter had a DC component then the response would vary with the mean image level. e.g. consider the sobel edge detector, and the same kernel with increased values.
I = double(imread('the_image.jpg'));
I = I(:,:,1);
f1 = [[-1 0 1];...
      [-2 0 2];...
      [-1 0 1]];
f2 = [[0 3 4];...
      [1 3 5];...
      [0 3 4]];
I1 = conv2(I,f1,'same'); imagesc(I1); colorbar; pause;
I2 = conv2(I,f2,'same'); imagesc(I2); colorbar; pause;
I3 = conv2(I+100,f1,'same'); imagesc(I3); colorbar; pause;
I4 = conv2(I+100,f2,'same'); imagesc(I4); colorbar; pause;
In this code f2 = f1 + 3. Only convolving with f1, which has 0 DC, gives the same result when the mean value of I is changed. The image responses 'look' the same, but only the 0 DC one is useful for feature detection.
Let $V$ be a finite-dimensional vector space over a field $\Bbbk$. Let $V^*$ denote its dual. I strongly suspect that there is a natural map $$\Lambda^k V \otimes \Lambda^k V^* \to \Bbbk$$ that looks something like $$v_1 \wedge \dotsb \wedge v_k \otimes \alpha_1 \wedge \dotsb \wedge \alpha_k \mapsto \sum_{\sigma} {\operatorname{sgn} \, \sigma}\prod_i \alpha_i(v_{\sigma(i)}).$$ What does the correct, natural formula look like? In particular, what is the correct sign convention?
While at the vector space level, the pairing might seem slightly forced, we can derive it naturally by adding structure.
Given a vector space $V$, we have a graded commutative ring $\bigwedge V = \bigoplus_i \bigwedge^i V$.
Given $\phi\in V^*$, it naturally extends to a (graded) derivation $d_{\phi}$ of degree $-1$ on $\bigwedge V$. Since $d_{\phi}^2=0$ and $d_{\phi+\psi}=d_{\phi}+d_{\psi}$, we can extend the action of $V^*$ to an action of $\bigwedge V^*$. The pairing is just the action restricted to a single degree.
Elaboration on the constructions:
First, we need to see that specifying a derivation by how it acts on generators is actually well defined. Note that $\bigwedge V = T(V)/(v\otimes v\mid v\in V)$ is a quotient of the tensor algebra. Given any $\phi \in V^*$, we can define a derivation $d_{\phi}$ of $T(V)$ extending $\phi$, and because every element of $T(V)$ can be written uniquely as a linear combination of pure tensors, such a derivation is well defined. For any degree $-1$ derivation $d$ we have $d(v^2)=(dv)v-v(dv)=0$, and so $d$ vanishes on the ideal defining $\bigwedge V$, and thus passes to a well defined map there.
To see that derivations extend to an action of $\bigwedge V^*$, we have that if $d:V\to A$ is a linear map of a vector space into an algebra such that $(d(v))^2=0$ for every $v\in V$, then there exists a unique map $\bigwedge V \to A$ extending $d$. However, care must be taken here, as we want $A$ to be a graded algebra and we want $d(V)\subset A_1$.
Unfortunately, because we wish our map to take values in $\operatorname{End}_k(\bigwedge V)$, which is not commutative, we can't just use the universal property of $\bigwedge V$ being the free graded commutative algebra generated in degree $1$, and we have to* do things at the level of the tensor algebra and show that things descend.
All these are related to various structures present in differential forms and vector fields, and the interaction between them (e.g. Lie derivatives), which can be extended further to structures in Hochschild homology and cohomology. There are also analogies to be made between cup and cap products in algebraic topology.
Other related ideas worth looking into are the variants of the Schouten bracket.
Note that most of the related structures are not entirely linear, and that the structure we have here is merely a linear approximation to them.
*No, we probably don't have to. I just can't think of a cleaner way to do it at the moment. If anybody has suggestions, please let me know.
I can construct this map abstractly, but I want to convince you that it isn't completely natural. Let's work in more generality: suppose $A \otimes B \to \mathbb{k}$ is a bilinear pairing. If I want to replace $A$ with some quotient $A/A'$, what's the natural thing to do to the pairing? If $A, B$ are finite-dimensional, then giving a bilinear pairing is the same as giving a map $A \to B^{\ast}$. If I want to replace $A$ with a quotient, then the natural thing to do is to send this map to the induced map $A/A' \to B^{\ast}/\text{im}(A')$. But dualizing the quotient map $B^{\ast} \to B^{\ast}/\text{im}(A')$ gives an inclusion $$\left( B^{\ast}/\text{im}(A') \right)^{\ast} \to B.$$
The LHS is the subspace of $B$ annihilated by every element of $A'$. So contrary to intuition, the natural thing to do is not to replace $B$ by a quotient. Note that this recipe has the desirable property that if the old pairing is nondegenerate, so is the new pairing.
Now let $A = V^{\otimes k}, B = (V^{\ast})^{\otimes k}$. These spaces are equipped with a canonical pairing $A \otimes B \to k$. If I want to replace $A$ by its quotient $\Lambda^k(V)$, then the above recipe tells me that the correct thing to do is to replace $B$ by a subspace, which turns out to be precisely the subspace of antisymmetric tensors $\text{Alt}^k(V^{\ast}) \subset (V^{\ast})^{\otimes k}$. Note that this is not abstractly the same thing as $\Lambda^k(V^{\ast})$. So the correct replacement pairing is
$$\Lambda^k(V) \otimes \text{Alt}^k(V^{\ast}) \to \mathbb{k}$$
which I believe is nondegenerate in characteristic greater than $2$. In addition, there is a natural map
$$\text{Alt}^k(V^{\ast}) \to (V^{\ast})^{\otimes k} \to \Lambda^k(V^{\ast})$$
which I believe is an isomorphism in characteristic greater than $k$ but is zero in characteristic less than or equal to $k$. The problem is that the space on the left is spanned by elements of the form
$$\sum_{\pi \in S_k} \text{sgn}(\pi) e_{\pi(1)} \otimes e_{\pi(2)} \otimes ... \otimes e_{\pi(k)}$$
where $e_1, ... e_k$ are a $k$-element subset of a basis of $V^{\ast}$, and the image of this element in $\Lambda^k(V^{\ast})$ is $k! e_1 \vee e_2 \vee ... \vee e_k$ which vanishes if $k! = 0$.
Punchline: if you use only the natural maps above, I think the pairing you want is only natural in characteristic greater than $k$ and it's given by $\frac{1}{k!}$ times what you wrote. As far as sign convention, this is all a matter of what you think the natural pairing
$$V^{\otimes k} \otimes (V^{\ast})^{\otimes k} \to \mathbb{k}$$
is. Do you think it's given by evaluating the middle two factors on each other, then the next middle two, and so forth, or do you think it's given by evaluating the first factor in $V^{\otimes k}$ on the first factor in $(V^{\ast})^{\otimes k}$, and so forth? You use the second convention in your post but to me the first convention is more natural (at least it generalizes in a less annoying way to a symmetric monoidal category with duals).
The above discussion is closely related to another confusing property of the exterior power, which is that if $V$ has an inner product then the natural space which inherits an inner product from $V$ is not $\Lambda^k(V)$ but $\text{Alt}^k(V)$, and people don't always use the canonical map between these spaces; for example people sometimes want the exterior product of orthogonal unit vectors to be a unit vector, but that is actually false if you only use natural maps, and it's necessary to normalize a map somewhere (either the identification above or, equivalently, the antisymmetrization map).
His approach is to let a general bilinear pairing $B:V\times W\to k$ yield a pairing $V^{\otimes n}\times W^{\otimes n}\to k$ given by $(\otimes v_i,\otimes l_j)\mapsto \prod_{i=1}^n B(v_i,l_i)$. Under the natural conditions (invariance under swaps, or vanishing if a sequence of inputs 'stammers'), this induces pairings on the symmetric or exterior algebras.
In particular, applied to the evaluation pairing $B:V\times V^*\to k$ this induces the desired pairing $\bigwedge^n(V)\times \bigwedge^n(V^*)\to k$ given by $(\wedge v_i,\wedge l_j)\mapsto \det(l_i(v_j))$.
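As a quick numerical sanity check (picking arbitrary coordinates, and representing covectors by their coordinate vectors in the standard dual basis), the permutation-sum formula from the question is exactly the Leibniz expansion of $\det(l_i(v_j))$:

```python
import numpy as np
from itertools import permutations

k, n = 3, 5
rng = np.random.default_rng(0)
V = rng.standard_normal((k, n))   # rows are the vectors v_1..v_k
L = rng.standard_normal((k, n))   # rows are the covectors l_1..l_k

M = L @ V.T                       # M[i, j] = l_i(v_j)

def sgn(p):
    """Sign of a permutation (given as a tuple), via inversion count."""
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

pairing = sum(sgn(p) * np.prod([M[i, p[i]] for i in range(k)])
              for p in permutations(range(k)))

print(np.isclose(pairing, np.linalg.det(M)))  # True
```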
(This is a very short summary; his explanation is much better and extensive so read it yourself :)) |
This is a survey of modern recommender systems, particularly looking at the details of Matrix Factorization methods: Weighted Regularized MF and Bayesian Personalized Ranking, with respect to explicit and implicit feedback datasets.
Ranking Function
The ranking function $r_{ui}$ gives the ranking score of user $u$ for item $i$ where $R$ is the sparse matrix of user/item interactions.
The objective is to approximate a ranking function $\hat{r}_{ui}$ which is the unobserved ranking by user $u$ for item $i$. How do we find a function $\hat{r}$ that approximates $r$?
There are two popular methods to learn $\hat{r}$:
Content-based Approach
Collaborative Filtering
This paper will exclusively focus on Collaborative Filtering techniques.
Content-based Approach
Find other items with a low distance function; i.e. find items similar to past liked items (which can leave the user stuck in a bubble).
Pros: No cold-start problem. Cons: Pigeonhole problem: fails to surface original content.
Collaborative Filtering
Find users who liked what I liked. Then recommend what they liked (collaboration). Based upon the assumption that those who agreed in the past tend to agree in the future.
Among CF methods, there are two main categories:
Memory Based - Neighborhood Models
Latent Factor Models - Matrix Factorization (state-of-the-art)
Memory Based - Neighborhood Models
Find similar users (user-based CF) or items (item-based CF) to predict missing ratings.
User-Based CF
Find the k nearest neighbors $S$ (in the user/item matrix), then generate recommendations based on items liked by the k neighbors.
Problem: Memory-based. Expensive online similarity computation — does not scale.
Item-Based CF
Estimate the rating by looking at similar items and computing a weighted sum. First we must establish a similarity measure. Using the similarity measure, we identify the k items rated by $u$ which are most similar to $i$. This set of k neighbors is denoted by $S$. The predicted value of $r_{ui}$ is taken as a weighted average of the ratings for neighboring items:
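A minimal sketch of this prediction step; the ratings matrix, the choice of cosine similarity, and the function names below are all invented for illustration:

```python
import numpy as np

# Toy ratings matrix R (users x items); 0 means "unrated". All values invented.
R = np.array([[5., 4., 0., 1.],
              [4., 5., 1., 0.],
              [1., 0., 5., 4.],
              [0., 1., 4., 5.]])

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def predict_item_based(R, u, i, k=2):
    """Predict r_ui as a similarity-weighted average over the k items
    rated by user u that are most similar to item i."""
    rated = [j for j in range(R.shape[1]) if j != i and R[u, j] > 0]
    neighbors = sorted(rated, key=lambda j: -cosine_sim(R[:, i], R[:, j]))[:k]
    num = sum(cosine_sim(R[:, i], R[:, j]) * R[u, j] for j in neighbors)
    den = sum(abs(cosine_sim(R[:, i], R[:, j])) for j in neighbors)
    return num / den if den else 0.0

# User 0's most similar rated items to item 2 carry low ratings, so the score is low
print(predict_item_based(R, u=0, i=2))
```

Note the per-query cost: every prediction recomputes similarities over item columns, which is exactly the online bottleneck discussed next.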
All item-oriented models share a disadvantage in regards to implicit feedback – they do not provide the flexibility to make a distinction between user preferences and the confidence we might have in those preferences.
Neighborhood Model Problems
Performance Implications
These methods are computationally expensive and grow w/ the number of users and items and therefore do not scale well. Similarity measure is a computational bottleneck. Time Complexity: Highly time consuming with millions of users and items in database.
The worst case complexity is $O(mn)$ (m customers and n products).
One solution is to break the neighborhood generation and production into discrete steps:
“off-line component” / “model” — similarity computation, done earlier & stored in memory. “on-line component” — prediction generation process
Clustering techniques such as k-means can also help. In other words, do some initial clustering of users and only compute the similarity of users who are initially similar.
Sparsity Problem
Typically there are large product sets while users rate only a small percentage of them. Example (Amazon): with millions of books, a user may have bought at most hundreds of books; the probability that two users who have bought 100s of books have a common book (in a catalog of 1 million books) is 0.01 (with 50 books and 10 million books, 0.0002). Standard CF must have a number of users comparable to one tenth of the size of the product catalog; in other words, if the number of users is at least 1/10 the number of items in your catalog, then neighborhood-based CF is feasible.
Sparsity Example: Netflix Challenge
If you represent the Netflix Challenge rating data as a user/item matrix you would get: 500k x 17k = 8.5B position matrix where only about 100M are not missing/0s!
Solution: Latent-Factor Models
Model Based - Latent Factor Models
In contrast to the previous, memory-based systems, which try to generate a prediction using the entire user/item matrix, model-based methods take a more sophisticated approach and try to develop a model of the user.
Matrix Factorization (MF) methods relate users and items by uncovering latent dimensions such that users have similar representations to items they rate highly, and are the basis of many state-of-the-art recommendation approaches (e.g. Rendle et al. (2009), Bell, Koren, and Volinsky (2007), Bennett and Lanning (2007)).
Ratings are deeply influenced by a set of factors that are very specific to the domain (e.g. amount of action in movies, complexity of characters). These factors are in general not obvious; we might be able to think of some of them, but it's hard to estimate their impact on ratings. The goal is to infer these so-called latent factors from the rating data by using mathematical techniques such as Singular Value Decomposition (SVD).
SVD/Matrix Factorization
The basic idea is that we want to collapse the sparse (full of 0s) user/item matrix into something much less sparse and lower dimensional. The idea behind matrix factorization is that we can do that by decomposing the original matrix into 3 components: a user/factor matrix, a factor/item matrix, and a diagonal matrix of singular values that maps factors onto themselves. There are different ways to get to this decomposition, but the idea is that we reduce the dimensionality of the item space into a lower space with a latent number of factors, then map users to those factors and items to those factors.
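Here is a minimal numpy sketch of the decomposition and the low-rank reconstruction; the matrix is a small random stand-in for the user/item matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.random((6, 5))            # toy stand-in for the user/item matrix

# Full SVD: R = U * diag(s) * Vt
U, s, Vt = np.linalg.svd(R, full_matrices=False)
err_full = np.linalg.norm(R - U @ np.diag(s) @ Vt)   # ~0 (exact up to rounding)

# Keep only k latent factors: the best rank-k approximation of R
k = 2
R_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
err_k = np.linalg.norm(R - R_k)                      # > 0, shrinks as k grows

print(err_full < 1e-10, err_k > err_full)
```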
Instead of computing this in a closed form mathematically way, you can do it iteratively using stochastic gradient descent.
All these methods look to optimize a cost function with respect to the parameters $x$ and $y$, which are the latent factors. The differences are the cost functions and optimization techniques.
Within the category of Model-based methods there can be two categories:
Point-wise methods: These typically deal with explicit feedback datasets and minimize a cost function which relates the error (difference) between two explicit ratings.
Pair-wise methods: These typically deal with implicit feedback datasets and maximize a cost function which relates positive and negative observations.
Point-wise methods
Weighted Regularized Matrix Factorization (WR-MF)
The famous SVD algorithm, as popularized by Simon Funk during the Netflix Prize: an adaptation of SVD which minimizes the squared loss. The extensions are regularization to prevent overfitting and weights in the error function to increase the impact of positive feedback.
A typical model associates each user $u$ with a user-factors vector $x_u\in{R^f}$ , and each item $i$ with an item-factors vector $y_i\in{R^f}$. The prediction is done by taking an inner product, i.e.:
The more involved part is parameter estimation for $x$ and $y$. Many of the recent works, applied to explicit feedback datasets, suggested modeling directly only the observed ratings, while avoiding overfitting through an adequate regularized model, such as:
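Assuming the standard form from the explicit-feedback MF literature (which the text appears to reference), the regularized objective is:

$$\min_{x,y}\ \sum_{(u,i)\in\mathcal{K}} \left(r_{ui} - x_u^{T}y_i\right)^{2} + \lambda\left(\lVert x_u\rVert^{2} + \lVert y_i\rVert^{2}\right)$$

where $\mathcal{K}$ is the set of $(u,i)$ pairs with observed ratings.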
where the last term is the regularization term using the square of the L2 norm of the two parameters we are interested in optimizing: $x$ and $y$.
To estimate the parameters $x$ and $y$ we minimize the regularized squared error cost function above. You can solve this w/ Alternating Least Squares (ALS) or Stochastic Gradient Descent (SGD). Below are the partial-derivative updates for SGD:
where $e_{ui} = r_{ui} - \hat{r}_{ui}$, $\eta$ is the learning rate and $\lambda$ is the regularization coefficient. These steps are performed over all the ratings of the trainset and repeated n_epochs times. Baselines are initialized to 0. User and item factors are initialized to 0.1, as recommended by Funk.
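A compact sketch of this SGD loop on a toy dataset (all ratings, sizes and hyper-parameters below are invented; no baseline terms, factors initialized to 0.1 as described above):

```python
import numpy as np

# Toy Funk-style SGD for r_ui ~ x_u . y_i (no baseline terms).
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 2, 1.0), (2, 1, 2.0)]
n_users, n_items, f = 3, 3, 2
eta, lam, n_epochs = 0.05, 0.02, 500

x = np.full((n_users, f), 0.1)   # user factors, initialized to 0.1
y = np.full((n_items, f), 0.1)   # item factors

for _ in range(n_epochs):
    for u, i, r in ratings:
        e = r - x[u] @ y[i]                    # e_ui = r_ui - x_u . y_i
        x[u] += eta * (e * y[i] - lam * x[u])  # SGD step on x_u
        y[i] += eta * (e * x[u] - lam * y[i])  # SGD step on y_i

rmse = np.sqrt(np.mean([(r - x[u] @ y[i]) ** 2 for u, i, r in ratings]))
print(rmse < 0.5)   # True: the factors fit the toy ratings closely
```

With ALS the same objective is instead minimized by solving for $x$ in closed form while holding $y$ fixed, and vice versa, which is what makes it easy to parallelize.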
Note: If we use ALS to solve the cost function we can employ parallelism to speed up training. Spark implements the ALS solver natively. Pair-wise methods
In contrast to point-wise methods, pairwise methods are based on a weaker but possibly more realistic assumption that positive feedback must only be 'more preferable' than non-observed feedback. Such methods directly optimize the ranking of the feedback and are to our knowledge state-of-the-art for implicit feedback datasets. Rendle et al. (2009) propose a generalized Bayesian Personalized Ranking (BPR) framework and experimentally show that BPR-MF (i.e., with MF as the underlying predictor) outperforms a variety of competitive baselines. [Julian VBPR paper]
Maximum Margin Matrix Factorization (MM-MF)
A pairwise MF model from Gantner et al. (2011), which is optimized for a hinge ranking loss on $x_{u,i,j}$ and trained using stochastic gradient ascent (SGA).
Bayesian Personalized Ranking - BPR-MF
An optimization criterion for personalized ranking that is based on pairs of items (i.e. the user-specific order of two items). It maximises the prediction difference between a positive example and a randomly chosen negative example. Useful when only positive interactions are present and optimising ROC AUC is desired.
A training set $D_S$ consists of triples of the form $(u, i, j)$, where $u$ denotes the user together with an item $i$ about which they expressed positive feedback, and a non-observed item $j$:
This dataset takes a different approach than the point-wise methods above, which are based on a user/item matrix. Here, for each user, we build an item/item matrix which shows how user $u$ preferred item $i$ over item $j$. See the BPR cost function:
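Assuming the standard criterion from Rendle et al. (2009), the cost function is:

$$\text{BPR-OPT} = \sum_{(u,i,j)\in D_S} \ln \sigma(\hat{r}_{uij}) - \lambda_{\Theta}\lVert\Theta\rVert^{2}$$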
where $\sigma$ is the sigmoid function, $\Theta$ is the model parameters ($x$ and $y$), and $\hat{r}_{uij} = \hat{r}_{ui} - \hat{r}_{uj}$ is the prediction difference between the positive and the negative item.
The cost function above has an interesting curve: it is concave, not convex — so this will impose a requirement upon us to use Stochastic Gradient Ascent (SGA) as opposed to gradient descent:
Figure 1: The cost function which is maximized at around $\theta=2$
Thus, in order to maximize the cost function we will have to do gradient ascent.
First a triple $(u, i, j)$ is sampled from $D_S$ and then the learning algorithm updates parameters in the following fashion:
where $\eta$ is the learning rate and the partial derivatives for the parameters $\Theta$ are:
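Putting the sampling and update steps together, a minimal BPR-MF training loop might look like this. The implicit-feedback data and hyper-parameters are invented for illustration; the key fact used is that the derivative of $\ln\sigma(\hat{r}_{uij})$ with respect to $\hat{r}_{uij}$ is $1-\sigma(\hat{r}_{uij})$:

```python
import numpy as np

# Hypothetical implicit feedback: for each user, the set of observed items.
pos = {0: [0, 1], 1: [1, 2], 2: [3], 3: [4, 5]}
n_users, n_items, f = 4, 6, 2
eta, lam = 0.05, 0.01
rng = np.random.default_rng(0)

x = 0.1 * rng.standard_normal((n_users, f))   # user factors
y = 0.1 * rng.standard_normal((n_items, f))   # item factors

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):
    u = int(rng.integers(n_users))                     # sample a triple (u, i, j)
    i = int(rng.choice(pos[u]))                        # observed (positive) item
    j = int(rng.choice([t for t in range(n_items) if t not in pos[u]]))
    r_uij = x[u] @ (y[i] - y[j])                       # prediction difference
    g = 1.0 - sigmoid(r_uij)                           # d ln(sigma(r)) / dr
    x[u] += eta * (g * (y[i] - y[j]) - lam * x[u])     # gradient ASCENT updates
    y[i] += eta * (g * x[u] - lam * y[i])
    y[j] += eta * (-g * x[u] - lam * y[j])

# Fraction of (positive, negative) pairs ranked correctly (a crude AUC)
auc = np.mean([x[u] @ y[i] > x[u] @ y[j]
               for u in pos for i in pos[u]
               for j in range(n_items) if j not in pos[u]])
print(auc)
```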
With the cost function maximized, reconstruct $\hat{R}$, the approximation of the $R$ matrix, and make a prediction:
Collaborative Filtering Limitations
Cold Start
There need to be enough other users already in the system to find a match, and new items need to get enough ratings.
New user problem: To make accurate recommendations, the system must first learn the user's preferences from the ratings. Several techniques have been proposed to address this; most use a hybrid recommendation approach (see VBPR, Ruining He, McAuley), which combines content-based and collaborative techniques. One solution is to recommend popular items until the user has enough ratings in the system for CF to give meaningful results.
New Item Problem: New items are added regularly to recommender system. Until the new item is rated by a substantial number of users, the recommender system is not able to recommend it.
Popularity Bias
Hard to recommend items to someone with unique tastes. Tends to recommend popular items (items from the tail do not get so much data)
Appendix
Datasets
Data gathered for recommendation systems is either in the form of explicit or implicit feedback. Explicit datasets are the typical situation, while implicit datasets require some extra consideration for modeling.
Explicit Feedback
Explicit feedback datasets are a collection of $(u,i,r)$ triples where $u$ is a user, $i$ is an item and $r$ is some type of quantifiable ranking/rating. A common example is star ratings from Netflix, MovieLens etc.
MovieLens sparse matrix, e.g.: 133,960x431,826 = 57.8 billion positions where only 100M are not 0’s!
Implicit Feedback
Unlike explicit feedback, the $(u,i,r)$ implicit feedback triples are not directly quantifiable, but rather indirect actions that imply some degree of preference, e.g:
watched episode of show -> loosely implies they like it
reviewed item on amazon -> implies they bought it which implies they like it
clicked link -> can imply something…
k Parameter
We can choose how many factors to include in the model by tuning the hyper-parameter $k$. $k$ specifies how many eigenvectors/components we want to keep in $x$ and $y$. As $k$ decreases, the quality of the approximation of $R$ decreases, and with it the quality of the predictions of the missing rankings. Tuning $k$ is a tradeoff between dimensionality and accuracy, which is also connected to the computational time and memory needed to train and run the model predictions.
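The tradeoff is easy to see empirically: sweep $k$ on a small random stand-in for $R$ and watch the Frobenius reconstruction error shrink as $k$ grows (the Eckart–Young property):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.random((20, 15))                       # stand-in for a ratings matrix
U, s, Vt = np.linalg.svd(R, full_matrices=False)

errs = []
for k in range(1, 16):
    R_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    errs.append(np.linalg.norm(R - R_k))       # Frobenius reconstruction error

# Error never increases as k grows, and vanishes at full rank
print(all(errs[a] >= errs[a + 1] - 1e-9 for a in range(len(errs) - 1)))
print(errs[-1] < 1e-8)
```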
Two years have gone by since I asked this question, and now after some time at uni here's my take on a "rigorous" (whatever I meant by that at the time) derivation. (I no longer find the other answer very clear: what curve is it integrated over, and what force field is the particle moving in?)
Suppose the position of a particle of mass $m > 0$ moving between time $0\in \mathbb{R}$ and time $1 \in \mathbb{R}$ is given by a smooth curve $x: [0,1]\to\mathbb{R^3}$. If the particle moves through a force field $F: \mathbb{R^3} \to \mathbb{R^3}$, then the work done on the particle is given by the line integral $$\Delta W=\int_xds\cdot F(s).$$
We want to find the work needed to accelerate the object from its initial velocity $\dot{x}(0)$ to its final velocity $\dot{x}(1)$. Suppose there are no external forces; only the inertia of the particle is "trying to counteract" the acceleration. For the sake of the argument, suppose further that $x$ is injective. Then the force field of the inertia at a point $y \in x([0,1])$ is given as follows by Newton's third law: $$F(y):=m\ddot{x}(x^{-1}(y)) \in \mathbb{R^3}.$$ The expression simply assigns to every point on the trajectory of the particle the force needed to give the particle its current acceleration.
The work integral computes as follows:$$\Delta W = \int_x ds\cdot F(s)= \int_0^1 dt \langle F(x(t)) \ | \ \dot{x}(t)\rangle = \int_0^1 dt \langle m\ddot{x}(x^{-1}(x(t))) \ | \ \dot{x}(t)\rangle = m \int_0^1 dt \langle \ddot{x}(t) \ | \ \dot{x}(t)\rangle.$$ One quickly checks that $\langle\ddot{x}(t) | \dot{x}(t)\rangle = \frac{1}{2}\frac{d}{dt} ||\dot{x}(t)||^2$. So by defining $\Delta v^2 := ||\dot{x}(1)||^2-||\dot{x}(0)||^2$, the work integral becomes$$\Delta W = \frac{1}{2}m \int_0^1 dt \frac{d}{dt}||\dot{x}(t)||^2 = \frac{1}{2}m(||\dot{x}(1)||^2-||\dot{x}(0)||^2) = \frac{1}{2}m\Delta v^2,$$which is the familiar expression of the kinetic energy. |
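The identity $\langle\ddot{x}(t)|\dot{x}(t)\rangle = \frac{1}{2}\frac{d}{dt}\lVert\dot{x}(t)\rVert^2$ and the resulting work formula can be cross-checked numerically; the curve below is an arbitrary smooth example, not one from the text:

```python
import numpy as np

# Check m * integral of <xddot, xdot> dt over [0,1] equals (1/2) m (|xdot(1)|^2 - |xdot(0)|^2)
# for the arbitrary smooth curve x(t) = (t^2, sin t, t^3), with m = 2.
m = 2.0
t = np.linspace(0.0, 1.0, 200001)

xdot = np.stack([2*t, np.cos(t), 3*t**2], axis=1)                 # velocity
xddot = np.stack([2*np.ones_like(t), -np.sin(t), 6*t], axis=1)    # acceleration

integrand = np.sum(xddot * xdot, axis=1)                          # <xddot, xdot>
work = m * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))  # trapezoid rule
dv2 = np.sum(xdot[-1]**2) - np.sum(xdot[0]**2)

print(np.isclose(work, 0.5 * m * dv2))  # True
```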
Join our Whatsapp Notifications and Newsletters touch here
COURTESY OF ATIKA SCHOOL
SIMPLIFY:
\[\large \frac{p^{2}+2pq+q^{2}}{p^{3}-pq^{2}+p^{2}q-q^{3}}\]
STEP 1: FACTORIZE THE NUMERATOR AND DENOMINATOR
\[\large \frac{(p+q)(p+q)}{p(p^{2}-q^{2})+q(p^{2}-q^{2})}\]
\[\large \frac{(p+q)(p+q)}{(p+q)(p+q)(p-q)}\]
Step 2: do the cancellation;
\[\large \frac{1}{(p-q)}\]
17/5/2019
All prime numbers less than ten are arranged in descending order to form a number. (a) Write down the number formed. (1 mark)
\[\large 7532\]
b) State the total value of the second digit in the number formed in (a) above. (1 mark)
\[\large 500\]
WITHOUT USING MATHEMATICAL TABLES OR A CALCULATOR, EVALUATE;
\[\large \frac{\sqrt[3]{675\times 135}}{\sqrt{2025}}\]
Step 1: find the prime factors of each number.
This step will help find the cube root of the numerator and the square root of the denominator.
\[\large \frac{\sqrt[3]{3^{3}\times 5^{2}\times 3^{3}\times 5}}{\sqrt{3^{4}\times 5^{2}}}\]
Step 2: Get the cube root and the square root,then compute the answer
\[\large \frac{3^{2}\times 5}{3^{2}\times 5}\]
\[\large = 1\]
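The arithmetic can be double-checked in a couple of lines of Python (just a verification aid, not part of the intended no-calculator method):

```python
# 675 * 135 = 3^6 * 5^3 = 91125, whose cube root is 3^2 * 5 = 45,
# and 2025 = 3^4 * 5^2 = 45^2, whose square root is 45.
numerator = round((675 * 135) ** (1 / 3))
denominator = round(2025 ** 0.5)

print(numerator, denominator, numerator / denominator)  # 45 45 1.0
```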
WHAT ARE NATURAL NUMBERS?
Natural numbers, also called counting numbers, are the numbers 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and so on.
Place Value
This is the position of a digit in a number.
Importance of place values
Place value helps students understand the values of numbers in a series of numbers, such as computing the total value of a digit in a number, rounding off numbers, changing numbers from figures to words and vice versa, and operations on numbers. This is very essential in the counting of numbers as applied in science, real-life situations, mathematics, business and accounting, etc.
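The procedure can be sketched as a small Python helper (the function name and place list are our own, chosen to match the exercises below):

```python
# Place names from the right-most digit upwards
PLACES = ["Ones", "Tens", "Hundreds", "Thousands", "Ten Thousands",
          "Hundred Thousands", "Millions", "Ten Millions",
          "Hundred Millions", "Billions", "Ten Billions", "Hundred Billions"]

def place_value(number, digit):
    """Place value of the left-most occurrence of `digit` in `number`."""
    s = str(number)
    position_from_right = len(s) - 1 - s.index(str(digit))
    return PLACES[position_from_right]

print(place_value(524239, 5))  # Hundred Thousands
print(place_value(721, 1))     # Ones
```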
example 1.
What is the position of 6 in the number 346789
PROCEDURE FOR FINDING A PLACE VALUE
Exercises
What is the place value of 5 in the number 524239
Hundred Thousands
What is the place value of 1 in the number 721
Ones
state the place values of digit 8 in each of the following numbers
(a) 1689
Tens
(b) 4008772
Thousands
(c) 2847246
Hundred Thousands
(d) 184392649
Ten Millions
(e) 281199300505
Ten billions
MATHEMATICS FORM 1 EXAMINATION PAPERS TERM 1 QUESTIONS AND ANSWERS
Question 1
Choose the number that is a solution to the inequality [tex]3x-2<0[/tex]?
Question 2
Which number is not a solution to the inequality [tex]1-6x>0[/tex]?
Question 3
The inequality 4x + 2 > 16 is equivalent to:
Question 4
The inequality [tex]3x-7>4x-6[/tex] is equivalent to:
Question 5
The inequality [tex]\frac{x-5}{2}<\frac{5x-4}{3}[/tex] is not equivalent to:
Question 6
The greatest integer that is a solution to the inequality [tex]5-6x>2(4-x)[/tex] is:
Question 7
Which is the smallest integer that is a solution to the inequality [tex]3.2x-2>2x+0.4[/tex]?
Question 8
Which interval is the solution to [tex]2x-7<3x+5[/tex]?
Question 9
The solution set of inequalities [tex]x-3>1[/tex] and [tex]x+1\leq -1[/tex] is:
Question 10
Find all whole positive numbers that are solutions to the inequality [tex]2x-\frac{(x-3)^{2}}{2}\leq 2-\frac{(2x-3)^{2}}{8}[/tex]:
Question 11
Find all negative whole numbers that are solutions to [tex]\frac{1}{2}-\left ( 2x-\frac{1}{2}(x-3)+\frac{x}{2} \right )\times 2<0[/tex]
Question 12
The solutions to the inequality [tex]\left ( \frac{x}{2}-1 \right )^{2}-\frac{1}{2}\left ( 2-\frac{5x-3}{3} \right )<\frac{x^{2}+1}{4}[/tex] are:
Question 13
The solutions to the inequality [tex]\left ( x-\frac{1}{3} \right )^{2}-\left (-\frac{3}{2}+ \frac{x}{3} \right )^{2}-\frac{1}{3}\left ( -\frac{2}{3}+x \right )<2\left ( \frac{2x}{3}-5 \right )\left ( \frac{2x}{3}+5 \right )-\frac{13}{3}[/tex] are:
Question 14
The solutions to the inequality [tex]3-\frac{1}{2}\left ( \frac{1}{2}-\frac{2-\frac{x}{3}}{2} \right )>x-\frac{\frac{x}{2}-\frac{3+x}{3}}{2}[/tex] are:
Question 15
The solutions to the inequality [tex]\frac{1}{5}\times\left ( 2-x \right )^{3}+\frac{x^{2}-2x+4}{3}\times\left ( -x-2 \right )\leq \frac{x(1-x)(4+x)}{2}-\frac{x^{3}-81x^{2}+32}{30}[/tex] are:
This paper will document the implementation of a handwritten digit recognition system using a Gaussian generative model called Linear Discriminant Analysis. We will use the MNIST dataset, which can be obtained from: http://yann.lecun.com/exdb/mnist/index.html
MNIST Summary
MNIST has 60k training examples and 10k test examples. For this model we have 10 classes to classify: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, which represent the digits 0-9. Each example in MNIST is a 28x28 pixel image represented by 784 greyscale intensity (0-255) features. Each class will be modeled by a multivariate 784-dimensional Gaussian. In other words, we have a 784-dimensional feature vector and 10 784-dimensional multivariate Gaussians.
Iris
Before we get started on MNIST, let's work on a smaller scale w/ the iris dataset: 4 features per example and 3 classes, so each class gets a 4-D multivariate Gaussian. This is actually a really good model for iris b/c the data are samples from nature, and natural variation is often well modeled by Gaussians.
import math
import matplotlib.pyplot as plt
%pylab inline
import numpy as np
from sklearn import datasets
iris = datasets.load_iris()
from sklearn.naive_bayes import GaussianNB

gnb = GaussianNB()
gnb.fit(iris.data, iris.target)
As you can see from the iris data, we have 150 examples with 3 total classes. Therefore the class prior $\pi_j = 1/3$.
print gnb.classes_
print gnb.class_count_
print gnb.class_prior_
print gnb.sigma_  # variance of each feature per class -- not Sigma in the sense of a covariance matrix
[0 1 2]
[ 50.  50.  50.]
[ 0.33333333  0.33333333  0.33333333]
[[ 0.121764  0.142276  0.029504  0.011264]
 [ 0.261104  0.0965    0.2164    0.038324]
 [ 0.396256  0.101924  0.298496  0.073924]]
Now, what scikit did was to fit a gaussian to each of the classes (3) in the training set examples. It does that by finding the mean and the covariance from the examples. Let the Gaussian for the jth class be: $P_j = N(\mu_j, \Sigma_j)$.
$\DeclareMathOperator*{\argmax}{arg\,max}$
Then, in order to classify an unknown flower, simply combine the class prior with the class-conditional likelihood (Bayes' rule) for all the classes and choose the one with the largest posterior probability:
y_pred = gnb.predict(iris.data)
print("Number of mislabeled points out of a total %d points : %d"
      % (iris.data.shape[0], (iris.target != y_pred).sum()))
Number of mislabeled points out of a total 150 points : 6
Easy huh? Now, the devil is in the details: how do we calculate $P_j$, i.e. how do we estimate a Gaussian for each class $j$?
We know that $p(\hat{x})$, the probability density function (PDF) of a multivariate Gaussian is this standard form:
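With $p$ features, mean $\mu$ and covariance $\Sigma$, that standard form is:

$$p(\hat{x}) = \frac{1}{(2\pi)^{p/2}\,\lvert\Sigma\rvert^{1/2}} \exp\!\left(-\frac{1}{2}(\hat{x}-\mu)^{T}\Sigma^{-1}(\hat{x}-\mu)\right)$$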
Therefore the trick is to tune $\Sigma$, the covariance matrix, to model the training data.
$\Sigma$ is a p x p matrix containing all pairwise covariances, where p is the number of features in your training set:
Then, given some training points, the way to generate this matrix using numpy:
np.cov(examples, rowvar=0)
Then, for each target/class you will get a $\mu$ and a $\Sigma$ that you can pop into the multivariate normal routine:
np.random.multivariate_normal
mean = np.array(examples.mean(0))[0]
cov = np.cov(examples, rowvar=0)
p_x = multivariate_normal(mean=mean, cov=cov)
For the iris dataset we will have a 4x4 covariance matrix b/c there are 4 features in our training set. Here's an example of finding the covariance matrix $\Sigma$ on the iris training set; we should be able to confirm our answer.
#lets split into a test and training set
from sklearn.cross_validation import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(iris.data, iris.target, test_size=0.4, random_state=4)

# lets gather all the examples from class 0
def get_examples_for_class(class_id):
    examples = []
    for i, example in enumerate(X_train):
        if Y_train[i] == class_id:
            examples.append(example)
    examples = np.matrix(examples)
    return examples

examples = get_examples_for_class(0)
examples.shape
(25, 4)
Now according to the above assertion this should map to a 4x4 covariance matrix. We can use numpy.cov to test this assertion and then look at how to implement the equivalent of numpy.cov in python.
mean = np.array(examples.mean(0))[0]
cov = np.cov(examples.T)  # np.cov treats each *row* as a variable by default, so we transpose (equivalently, pass rowvar=0)

print mean
print cov  # should be 4x4 for iris
[ 4.964  3.416  1.44   0.24 ]
[[ 0.1049      0.06976667  0.01483333  0.00566667]
 [ 0.06976667  0.10723333  0.00391667  0.00266667]
 [ 0.01483333  0.00391667  0.02583333  0.00625   ]
 [ 0.00566667  0.00266667  0.00625     0.01      ]]
Now we can take this covariance matrix and pipe it into scipy's PDF routine to get our distribution:
from scipy.stats import multivariate_normal
P_0 = np.random.multivariate_normal(mean, cov).T  # draws one random sample (not actually needed below)
P_0var = multivariate_normal(mean=mean, cov=cov)  # the distribution object whose .pdf we use
Now, according to equation 1, if we want to classify some vector $X$, let's test the probability that the following test vector is in class 0, where $X$ is the vector below:
X1 = X_test[15]
import random
X1 = random.choice(X_test)
print X1
[ 5.4 3. 4.5 1.5]
prior = pi_0 = gnb.class_prior_[0]
prob_0 = [0, P_0var.pdf(X1)]  # NB: strictly this should also be multiplied by the prior, as for the other classes below
prob_0
[0, 9.6702924830667242e-90]
$P_1 = N(\mu_1, \Sigma_1)$
#now for class 1
examples_1 = get_examples_for_class(1)
mean_1 = np.array(examples_1.mean(0))[0]
cov_1 = np.cov(examples_1.T)
p_x_1 = multivariate_normal(mean=mean_1, cov=cov_1)
prob_1 = [1, gnb.class_prior_[1] * p_x_1.pdf(X1)]
prob_1
[1, 0.074284709872233928]
#now for class 2
examples_2 = get_examples_for_class(2)
mean_2 = np.array(examples_2.mean(0))[0]
cov_2 = np.cov(examples_2.T)
p_x_2 = multivariate_normal(mean=mean_2, cov=cov_2)
prob_2 = [2, gnb.class_prior_[2] * p_x_2.pdf(X1)]
prob_2
[2, 0.0028397660536565485]
prediction = max(prob_0, prob_1, prob_2, key=lambda a: a[1])
print iris.target_names[prediction[0]]
versicolor
X = iris.data
Y = iris.target
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(X, Y)
print(iris.target_names[clf.predict([X1])][0])
versicolor
So it seems to be a functioning classifier. Now let's scale this up and test it on the digits dataset.
Digits
Digits should have 10 classes, 0-9, with each example having 784 features (representing a 28x28 pixel image). A note: scikit's version of MNIST is only 8x8 whereas the original is 28x28, so as you can see below we have 64 features instead of 784.
digits = datasets.load_digits()
X_train, X_test, Y_train, Y_test = train_test_split(digits.data, digits.target, test_size=0.4, random_state=4)
X_train.shape
(1078, 64)
First we need to calculate the prior probabilities of the 10 classes. For the sake of experimentation we just use scikit's GaussianNB classifier to generate these values for us:
helper = GaussianNB()
helper.fit(X_train, Y_train)
classes = helper.classes_
priors = helper.class_prior_
print classes
priors
[0 1 2 3 4 5 6 7 8 9]
array([ 0.09369202, 0.10482375, 0.0974026 , 0.0974026 , 0.10575139, 0.10296846, 0.0974026 , 0.10018553, 0.09925788, 0.10111317])
So from the above we can see the Bayesian prior probabilities for the 10 classes 0-9, which look like a pretty even distribution.
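These priors are nothing more than empirical class frequencies in the training labels; a tiny self-contained sketch (toy labels, not the digits data):

```python
import numpy as np

# Class priors are simply the relative frequencies of each label.
def class_priors(y):
    classes, counts = np.unique(y, return_counts=True)
    return classes, counts / counts.sum()

y = np.array([0, 1, 1, 2, 2, 2, 9, 9])
classes, priors = class_priors(y)
print(classes)   # [0 1 2 9]
print(priors)    # [0.125 0.25  0.375 0.25 ]
```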
LDA Model Training
Now let's generate the Bayesian posterior probabilities for each class using the training data.
See this note on the covariance matrix tuning: http://stackoverflow.com/questions/35273908/scipy-stats-multivariate-normal-raising-linalgerror-singular-matrix-even-thou/35293215
We will implement covariance matrix smoothing later, but for now, just set
allow_singular=True
posteriors = []
for klass in classes:
    examples = get_examples_for_class(klass)
    mean = np.array(examples.mean(0))[0]
    cov = np.cov(examples.T)
    p_x = multivariate_normal(mean=mean, cov=cov)
    posteriors.append(p_x)
Classification
Now that we have the prior and posterior probabilities for our training set, let's use them to make a prediction:
# choose a random point from the test data
x = random.choice(X_test)
print x
[ 0. 0. 3. 14. 13. 12. 14. 0. 0. 0. 11. 14. 12. 15. 9. 0. 0. 0. 16. 5. 3. 16. 2. 0. 0. 1. 9. 1. 10. 12. 0. 0. 0. 0. 0. 7. 16. 14. 6. 0. 0. 0. 4. 16. 16. 11. 1. 0. 0. 0. 0. 15. 5. 0. 0. 0. 0. 0. 6. 13. 0. 0. 0. 0.]
bayes_probs = []
for klass in classes:
    prob = [klass, priors[klass] * posteriors[klass].pdf(x)]
    bayes_probs.append(prob)
bayes_probs
[[0, 0.0], [1, 1.0140520342536575e-281], [2, 0.0], [3, 5.5444700721071972e-221], [4, 0.0], [5, 2.4639081660101053e-208], [6, 0.0], [7, 8.1408087153194087e-50], [8, 1.6042311520465275e-319], [9, 1.4456153872364315e-197]]
Notice that the probabilities are VERY small. In this limited-precision environment we may be losing a lot of accuracy, so it may be wise to explore using logpdf instead. We will explore this more later.
Now we choose the max and that is our prediction:
prediction = max(bayes_probs, key=lambda a: a[1])
print digits.target_names[prediction[0]]
7
So our routine predicted a 7; let's visually confirm:
plt.figure(1, figsize=(3, 3))
plt.imshow(x.reshape(8, 8), cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
Success!
Benchmark
Now let's scale this up and check our error rate.
# first I need an interface to batch test the test input (instead of 1 vector at a time, like above)
Y = []
for x in X_test:
    bayes_probs = []
    for klass in classes:
        prob = [klass, priors[klass] * posteriors[klass].pdf(x)]
        bayes_probs.append(prob)
    prediction = max(bayes_probs, key=lambda a: a[1])
    Y.append(prediction[0])
errors = (Y_test != Y).sum()
total = X_test.shape[0]
print("Error rate: %d/%d = %f" % (errors, total, errors / float(total)))
Error rate: 34/719 = 0.047288
Our naively implemented Gaussian classifier achieved a success rate of about 95%.
Error Analysis
Let's look at the examples our classifier failed on:
def displaychar(image):
    plt.imshow(np.reshape(image, (8, 8)), cmap=plt.cm.gray)
    plt.axis('off')
indicies = np.array(np.where((Y_test != Y) == True))[0]
index = 0
cols = 10
rows = len(indicies) // cols + 1  # enough rows of 10 to hold every error
plt.figure(figsize=(15, 7))
for i in indicies:
    index += 1
    plt.subplot(rows, cols, index)
    displaychar(X_test[i])
    plt.title('exp:%i, act:%i' % (Y[i], Y_test[i]), fontsize=10)
The above figure shows what the classifier expected versus the actual classification. From the error examples above, some of the examples would be mistaken even by a human.
Optimizations
The scikit version of MNIST is a scaled-down version. If we run this classifier on the official MNIST, we will run into some limitations. The full MNIST has 784 features where digits has only 64, and MNIST has more examples: 60,000. At this scale the model starts to have issues taking the determinant of the covariance matrix, and we also start to have issues with underflow in the probabilities.
Log Probabilities
The Bayes model multiplies very small probabilities, and while theoretically there is nothing wrong with this, in our limited-precision floating-point environment we start to experience underflow and our accuracy decreases. A common fix for this is to update our classification step to use log probabilities:
np.log(self.priors[klass]) + self.posteriors[klass].logpdf(x)
def predict(self, X):
    Y = []
    for x in X:
        bayes_probs = []
        for klass in self.classes_:
            prob = [klass, np.log(self.priors[klass]) + self.posteriors[klass].logpdf(x)]
            bayes_probs.append(prob)
        prediction = max(bayes_probs, key=lambda a: a[1])
        Y.append(prediction[0])
    return Y
Covariance Matrix Smoothing
Another common optimization is to add a smoothing factor to the covariance matrix:

$$\Sigma_{\text{smoothed}} = \Sigma + cI$$

where $I$ is the identity matrix and $c$ is a constant we must set by tuning against the test data, and then apply it to the covariance matrix in the training step:
c = 3500
for klass in classes:
    examples = get_examples_for_class(klass)
    mean = np.array(examples.mean(0))[0]
    cov = np.cov(examples, rowvar=0)
    cov_smoothed = cov + (c * np.eye(mean.shape[0]))
    p_x = multivariate_normal(mean=mean, cov=cov_smoothed)
    posteriors.append(p_x)
Through experimentation I find that a value around 3500 works well and achieves 96% on full MNIST.
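The constant $c$ can be found with a simple sweep, refitting and scoring for each candidate value. A self-contained sketch on tiny synthetic data (everything here, including the toy classes, is illustrative; the value 3500 above is specific to full-MNIST pixel scales):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
# Two toy classes with a rank-deficient feature (column 2 copies column 0),
# so the raw covariance is singular and smoothing is actually needed.
X0 = rng.normal([0.0, 0.0, 0.0], 0.1, size=(30, 3))
X0[:, 2] = X0[:, 0]
X1 = rng.normal([1.0, 1.0, 1.0], 0.1, size=(30, 3))
X1[:, 2] = X1[:, 0]

def fit(X, c):
    # Smoothed covariance: cov + c*I is positive definite for any c > 0.
    cov = np.cov(X, rowvar=False) + c * np.eye(X.shape[1])
    return multivariate_normal(mean=X.mean(axis=0), cov=cov)

def accuracy(c):
    m0, m1 = fit(X0, c), fit(X1, c)
    data = np.vstack([X0, X1])
    truth = np.array([0] * len(X0) + [1] * len(X1))
    preds = np.array([int(m1.logpdf(x) > m0.logpdf(x)) for x in data])
    return np.mean(preds == truth)

for c in [1e-3, 1e-1, 1.0]:
    print(c, accuracy(c))
```

On real data you would score each $c$ on a held-out split rather than the training points.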
Relation to Logistic Regression
LDA is often contrasted with Multinomial Logistic Regression, in that they are both simple linear multi-class classifiers. The only difference between the two approaches lies in the fact that the LR weights are estimated using maximum likelihood, whereas the LDA parameters are computed using the estimated mean and variance from the normal distribution. Although the two models differ in their fitting method, LDA is often considered superior to Logistic Regression for the following reasons:
When the classes are well-separated, the parameter estimates for the logistic regression model are surprisingly unstable; linear discriminant analysis does not suffer from this problem. If the number of observations is small and the distribution of the predictors (features) X is approximately normal in each of the classes, the linear discriminant model is again more stable than the logistic regression model.

Permalink: mnist-gaussian-classifier
Real Analysis Exchange, Volume 43, Number 2 (2018), 359-386.
Simultaneous Small Coverings by Smooth Functions Under the Covering Property Axiom
Abstract
The covering property axiom CPA is consistent with ZFC: it is satisfied in the iterated perfect set model. We show that CPA implies that for every \(\nu\in\omega\cup\{\infty\}\) there exists a family \(\mathcal{F}_\nu\subset C^\nu(\mathbb{R})\) of cardinality \(\omega_1<\mathfrak{c}\) such that for every \(g\in D^\nu(\mathbb{R})\) the set \(g\setminus \bigcup \mathcal{F}_\nu\) has cardinality \(\leq\omega_1\). Moreover, we show that this result remains true for partial functions \(g\) (i.e., \(g\in D^\nu(X)\) for some \(X\subset\mathbb{R}\)) if, and only if, \(\nu \in\{0,1\}\). The proof of this result is based on the following theorem of independent interest (which, for \(\nu\neq 0\), seems to have been previously unnoticed): for every \(X\subset\mathbb{R}\) with no isolated points, every \(\nu\)-times differentiable function \(g\colon X\to\mathbb{R}\) admits a \(\nu\)-times differentiable extension \(\bar g\colon B\to\mathbb{R}\), where \(B \supset X\) is a Borel subset of \(\mathbb{R}\). The presented arguments rely heavily on a Whitney’s Extension Theorem for the functions defined on perfect subsets of \(\mathbb{R}\), for which a short, but fully detailed, proof is included. Some open questions are also posed.
Article information
Source: Real Anal. Exchange, Volume 43, Number 2 (2018), 359-386.
Dates: First available in Project Euclid: 27 June 2018
Permanent link to this document: https://projecteuclid.org/euclid.rae/1530064967
Digital Object Identifier: doi:10.14321/realanalexch.43.2.0359
Mathematical Reviews number (MathSciNet): MR3942584
Zentralblatt MATH identifier: 06924895
Subjects: Primary: 26A24: Differentiation (functions of one variable): general theory, generalized derivatives, mean-value theorems [See also 28A15]; 26A04. Secondary: 03E35: Consistency and independence results
Citation
Ciesielski, Krzysztof C.; Seoane--Sepúlveda, Juan B. Simultaneous Small Coverings by Smooth Functions Under the Covering Property Axiom. Real Anal. Exchange 43 (2018), no. 2, 359--386. doi:10.14321/realanalexch.43.2.0359. https://projecteuclid.org/euclid.rae/1530064967 |
I am reading the following paper by Slater:
https://journals.aps.org/pr/pdf/10.1103/PhysRev.81.385
On page 5 they write above equation (12) the following:
"If we now average over all wave functions, we find that the properly weighted average of $F(\eta)$ is 3/4."
Now, $F(\eta)=1/2 + \frac{1-\eta^2}{4\eta}\ln((1+\eta)/(1-\eta))$.
I don't understand what it means to average over wave functions. I thought that they calculated $\lim_{T\to \infty} \frac{1}{T}\int_0^T F(x)dx$, but when I asked Maple to compute this limit (for the additive part, without the 1/2), it didn't give me 1/4.
So I don't understand which average of this function they calculated.
Anyone know?
Thanks!
The J-homomorphism is a well-known and classical map $\pi_n (O(k)) \to \pi_{n+k} (S^k)$, or after stabilizing with respect to $k$, a map $J_n:\pi_n (O) \to \pi_{n}^{st}$, from the stable homotopy of the orthogonal groups to the stable homotopy of spheres. The main results on $J_n$ were proven by Adams in a classic series of four papers.
The results are: the image of $J_{4r-1}$ (in which case the source group is $\mathbb{Z}$) is cyclic of order $den (\frac{B_r}{4r})$ (denominator of Bernoulli numbers). For $n\equiv 0,1 \pmod 8$, $J_n$ is injective (the source is $\mathbb{Z}/2$). Another theorem by Adams is that the unit map $\pi_{n}^{st} \to \pi_n (BO)= \pi_{n-1} (O)$ hits all $\mathbb{Z}/2$-groups in the image.
These results play an important role in differential topology (for example in the classification of exotic spheres), which is why from time to time, I struggle to understand these results. But I am rather foreign to stable homotopy theory and I am scared away by this battle with homological algebra and stable homotopy theory and never manage to get the main points from Adams' papers.
However, for the case $n=4r-1$, there is a version of the proof without leaving the mathematical terrain I am used to navigating in. There are two parts: proving that the image of $J$ has *at least* the order $den (\frac{B_r}{4r})$ requires an invariant and a device for its computation. The invariant is the $e$-invariant $e: \pi_{4r-1}^{st} \to \mathbb{Q}/\mathbb{Z}$ and the computation of $e \circ J$ is done using characteristic classes. All this is well-explained in Hatcher's book project on $K$-theory, with a hands-on definition of the $e$-invariant.
Proving that the image of $J$ has *at most* the order $den (\frac{B_r}{4r})$ requires the construction of a nullhomotopy. It follows from the Adams conjecture and some Bernoulli numerology. Besides the first proofs of the Adams conjecture by Quillen and Sullivan, there exist two proofs which I understand (by Becker-Gottlieb and a simplification of it by E. Brown, which I wrote up some years ago).
Here are my questions:
Is there an argument for the injectivity of $J_{n}$, $n \equiv 0,1 \pmod 8$, which is similarly direct as the argument in Hatcher's book?
The $J$-homomorphism gives a map of spectra $\Sigma^{-1} KO \to S$. What is the composition with the unit map $S \to KO$, and what is a low level explanation for it? EDIT: this is too naive, see Neil's comment below.
Is there a low-level description for the image of the unit map $S \to KO$ on homotopy groups?
Or do have to learn it the hard way? |
So the setup is as follows: we have n coins flipped independently, not necessarily all fair. I know that if there is at least one fair coin then the probability of getting an even number of heads is 1/2. I want to show the converse: that if the probability of getting an even number of heads is 1/2, then there is at least one fair coin.
Not as Elegant a Basic Approach as I Had Foreseen. We show this by induction.
First, for $n = 1$, a single coin. Obviously, then the probability of an even number of heads is simply the probability that this coin flips tails. If this coin is unfair, this probability is clearly not equal to $1/2$. Therefore the coin must be fair. This establishes the basis step.
Now, suppose that the proposition is true for some $n > 0$. Let us now show it for $n+1$. The antecedent is that the probability of an even number of heads in these $n+1$ flips is $1/2$. If (at least) one of the first $n$ coins is fair, then the consequent is true.
If, on the other hand, none of the first $n$ coins is fair, we already know that such a circumstance does not permit the probability of an even number of heads in the first $n$ tosses to be $1/2$. Let us say therefore that this probability is instead $P_n \not= 1/2$, and let the $n+1$th coin have a probability of heads of $q$. Then the probability that the number of heads is even after all $n+1$ tosses is
$$ P_{n+1} = P_n(1-q) + (1-P_n)q = P_n + q(1-2P_n) $$
But we know, by hypothesis, that $P_{n+1} = 1/2$, so we write
$$ \frac12 = P_n + q(1-2P_n) $$
which gives us, after some simple algebra,
$$ q = \frac{1/2-P_n}{1-2P_n} = \frac12 $$
This establishes the induction step and the proposition is shown.
This is really the same as the answer by @BrianTung but the presentation is a tad shorter. :)
Assume a set of $n$ coins has that property. Partition this set into two arbitrary non-empty subsets $X, Y$ and let $p_X = ({1 \over 2} + x), p_Y = ({1\over 2} + y)$ be the respective probabilities of each set to have an even number of heads. Then:
$$ {1 \over 2} = p_X p_Y + (1 - p_X) (1 - p_Y) = ({1 \over 2} + x) ({1 \over 2} + y) + ({1 \over 2} - x) ({1 \over 2} - y) = {1 \over 2} + 2xy$$
after you expand and realize the cross-terms cancel. Thus either $x$ or $y$ (or both) must be $0$, i.e. one (or both) of the subsets must have this property. As you recur downward you eventually reach a single coin which must be fair.
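Both arguments reduce to a closed form worth noting: since $E[(-1)^{\#\text{heads}}] = \prod_i (1-2p_i)$, the probability of an even number of heads is $\frac{1}{2}\left(1+\prod_i(1-2p_i)\right)$, which equals $1/2$ exactly when some factor vanishes, i.e. some $p_i = 1/2$. A quick numeric check of the formula against brute-force enumeration:

```python
from itertools import product

def p_even_bruteforce(ps):
    # Sum the probability of every outcome with an even number of heads.
    total = 0.0
    for heads in product([0, 1], repeat=len(ps)):
        if sum(heads) % 2 == 0:
            prob = 1.0
            for h, p in zip(heads, ps):
                prob *= p if h else (1 - p)
            total += prob
    return total

def p_even_formula(ps):
    prod = 1.0
    for p in ps:
        prod *= (1 - 2 * p)
    return 0.5 * (1 + prod)

ps = [0.3, 0.7, 0.9]        # no fair coin
ps_fair = [0.3, 0.5, 0.9]   # contains a fair coin

print(p_even_bruteforce(ps), p_even_formula(ps))            # equal, not 0.5
print(p_even_bruteforce(ps_fair), p_even_formula(ps_fair))  # both 0.5
```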
Case $n=3$.
Let: $$p(h_1)=x,p(h_2)=y,p(h_3)=z,0<x<y<z<1.$$ The probability of even number ($0$ or $2$) of heads: $$p(h_1h_2t_3)+p(h_1h_3t_2)+p(h_2h_3t_1)+p(t_1t_2t_3)=0.5\iff \\ xy(1-z)+xz(1-y)+yz(1-x)+(1-x)(1-y)(1-z)=0.5 \iff \\ x+y+z-2(xy+yz+xz)+4xyz=0.5 \iff \\ y(1-2x-2z+4xz)=0.5-x-z+2xz \Rightarrow \\ \begin{cases}0<x=\frac12<y<z<1\\ 0<x<y=\frac12<z<1\\ 0<x<y<z=\frac12\end{cases}.$$ |
Consider rational functions $F(x)=P(x)/Q(x)$ with $P(x),Q(x) \in \mathbb{Z}[x]$. I'd like to know when I can expect $F(k) \in \mathbb{Z}$ for infinitely many positive integers $k$. Of course this doesn't always happen ($P(x)=1, Q(x)=x, F(x)=1/x$). I am particulary interested in answering this for the rational function $F(x)=\frac{x^{2}+3}{x-1}$.
If $F=P/Q$ is integral infinitely often then $F$ is a polynomial.
Write $$P(x)=f(x)Q(x)+R(x)$$ for some polynomial $R$ of degree strictly less than the degree of $Q$. If you have infinitely many integral $x$ so that $P/Q$ is integral then you get infinitely many $x$ so that $NR/Q$ is integral, where $N$ is the product of all denominators of the coefficients in $f$. However $R/Q\to 0$ as $x\to \pm \infty$ so $R\equiv 0$ and so $Q(x)$ is a divisor of $P(x)$.
Now, as pointed out by Mark Sapir below, not all polynomials with rational coefficients take on integer values infinitely often (at integers), but you can check this in all practical cases by seeing if $dF$ has a root $\pmod{d}$, where $d$ is the common denominator of the coefficients in $F$.
$(x^2+3)/(x-1)=x+1+(4/(x-1))$ so this question, at least, is easy; you get an integer if and only if 4 is a multiple of $x-1$. |
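That concluding remark is easy to check numerically: $F(x)=(x^2+3)/(x-1)$ is an integer exactly when $x-1$ divides $4$, i.e. for $x \in \{-3,-1,0,2,3,5\}$. A quick sketch:

```python
def is_integral(x):
    # F(x) = (x^2 + 3) / (x - 1), defined for x != 1;
    # integrality is equivalent to (x - 1) | (x^2 + 3).
    return (x * x + 3) % (x - 1) == 0

hits = [x for x in range(-50, 51) if x != 1 and is_integral(x)]
print(hits)  # [-3, -1, 0, 2, 3, 5]
```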
Global attractor for a Klein-Gordon-Schrodinger type system
1.
Department of Mathematics, National Technical University, Zografou Campus 157 80, Athens, Greece
2.
Department of Mathematics, National Technical University, Zografou Campus 157 80, Athens, Hellas, Greece
$$i\psi_t + k\psi_{xx} + i\alpha\psi = \phi\psi + f(x),$$
$$\phi_{tt} - \phi_{xx} + \phi + \lambda\phi_t = -\mathrm{Re}\,\psi_x + g(x),$$
$$\psi(x,0)=\psi_0(x), \quad \phi(x,0)=\phi_0(x), \quad \phi_t(x,0)=\phi_1(x),$$
$$\psi(x,t)=\phi(x,t)=0, \quad x\in\partial\Omega,\ t>0,$$
where $x \in \Omega, t > 0, k > 0, \alpha > 0, \lambda > 0, f(x)$ and $g(x)$ are the driving terms and $\Omega$ (bounded) $\subset \mathbb{R}$. Also we prove the continuous dependence of solutions of the system on the initial data as well as the existence of a global attractor.
Keywords: Klein-Gordon-Schrodinger equation; global attractor; absorbing set; asymptotic compactness; uniqueness; continuity.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C3.
Citation: Marilena N. Poulou, Nikolaos M. Stavrakakis. Global attractor for a Klein-Gordon-Schrodinger type system. Conference Publications, 2007, 2007 (Special): 844-854. doi: 10.3934/proc.2007.2007.844
Question
Since the equation for torque on a current-carrying loop is $\tau = NIAB \sin(\theta)$, the units of $\textrm{N} \cdot \textrm{m}$ must equal units of $\textrm{A} \cdot \textrm{m}^2 \cdot \textrm{T}$. Verify this.
Final Answer
see solution video for dimensional analysis.
Video Transcript
This is College Physics Answers with Shaun Dychko. We're going to show that the units on the left and the right hand side of this equation are the same, which has to be true for every equation that's correct. So we have torque units; we break them down into newtons and meters, and we know it's newton meters because another formula for torque is that it's force times lever arm. Force is in newtons and lever arm is in meters, so we get newton meters there. And on the right hand side, we want to turn everything into newtons and meters if we can, and we'll show that this works out to newton meters. Now we could reduce the ampere units into coulombs per second, but I'm going to leave it as amperes for a minute, because I have a feeling that when we break down this tesla unit it's going to have an ampere up here, which is going to cancel with this one. So, now think about this formula:
F equals qvB sine theta. I'm looking for a way to turn tesla into a more basic type of unit. And so magnetic field is force over qv sine theta. And so the units of tesla are newtons per coulomb (which is more like a counting unit than anything) per meters per second times sine theta, which is dimensionless. So we have newton seconds per coulomb per meter. On the other hand, let's keep the coulombs here. Let's use those coulombs from the charge, because that coulomb per second actually makes amperes. That's helpful because this works out to newtons per amp times meters. So instead of tesla, I'll replace that with newtons per amp meter. And here is the ticket, because the amperes cancel and so does one of the meters, leaving us with newton meters. And so we have verified that the units on the right hand side here are the same as the units on the left.
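The same bookkeeping can be done mechanically by tracking exponents over the base SI units (kg, m, s, A); a small sketch:

```python
# Represent each unit as a dict of base-SI exponents over {kg, m, s, A}.
def mul(*units):
    out = {}
    for u in units:
        for k, v in u.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

N     = {'kg': 1, 'm': 1, 's': -2}     # newton = kg*m/s^2
T     = {'kg': 1, 's': -2, 'A': -1}    # tesla = kg/(s^2*A), from F = qvB
amp   = {'A': 1}
meter = {'m': 1}

lhs = mul(N, meter)                 # N*m
rhs = mul(amp, meter, meter, T)     # A*m^2*T
print(lhs == rhs)                   # True: both reduce to kg*m^2/s^2
```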
TL;DR: This is an informal summary of our recent paper Blended Matching Pursuit with Cyrille W. Combettes, showing that the blending approach that we used earlier for conditional gradients can be carried over also to the Matching Pursuit setting, resulting in a new and very fast algorithm for minimizing convex functions over linear spaces while maintaining sparsity close to full orthogonal projection approaches such as Orthogonal Matching Pursuit. What is the paper about and why you might care
We are interested in solving the following convex optimization problem. Let $f$ be a smooth convex function with potentially additional properties and $D \subseteq \RR^n$ a finite set of vectors. We want to solve:

$$\min_{x \in \operatorname{lin} D} f(x) \tag{opt}$$
The set $D$ in the context considered here is often referred to as
dictionary and its elements are called atoms. Note that this problem does also make sense for infinite dictionaries and more general Hilbert spaces but for the sake of exposition we confine ourselves here to the finite case; see the paper for more details. Sparse Signal Recovery
The problem (opt) with, e.g., $f(x) \doteq \norm{x - y}_2^2$ for a given vector $y \in \RR^n$ is of particular interest in
Signal Processing in the context of Sparse Signal Recovery, where a signal $y \in \RR^n$ is measured that is known to be the sum of a sparse linear combination of elements in $D$ and a noise term $\epsilon$, e.g., $y = x + \epsilon$ for some $x \in \operatorname{lin} D$ and $\epsilon \sim N(0,\Sigma)$; see Wikipedia for more details. Here sparsity refers to $x$ being a linear combination of few elements from $D$ and the task is to reconstruct $x$ from $y$. If the signal’s sparsity is known ahead of time, say $m$, then the optimization problem of interest is:

$$\min_{x = \sum_{d \in D} c_d d,\ \norm{c}_0 \leq m} f(x) \tag{sparseRecovery}$$
where $|D| = k$ and typically $m \ll k$. As the above problem is non-convex (and in fact NP-hard to solve), various relaxations have been used; a common one is to solve (opt) instead with an algorithm that promotes sparsity by its algorithmic design. Other variants include relaxing the $\ell_0$-norm constraint via an $\ell_1$-norm constraint and then solving the arising constrained convex optimization problem over an appropriately scaled $\ell_1$-ball with an optimization method that produces relatively sparse iterates, such as, e.g., conditional gradients and related methods.
The following graphics is taken from Wikipedia’s Matching Pursuit entry. On the bottom the actual signal is depicted in the time domain and on top the inner product of the wavelet atom with the signal is shown as a heat map, where each pixel corresponds to a time-frequency wavelet atom (this would be our dictionary). In this example, we would seek a reconstruction with $3$ elements given by the centers of the ellipsoids.
Without going into detail here, (sparseRecovery) also naturally relates to compressed sensing and our algorithm also applies to this context, as do all other algorithms that solve (sparseRecovery).
The general setup
Here we actually consider the more general problem of minimizing an arbitrary smooth convex function $f$ over the linear span of the dictionary $D$ in (opt). This more general setup has many applications including the one from above. Basically, whenever we seek to project a vector into a linear space, writing it as linear combination of basis elements we are in the setup of (opt). Moreover, sparsity is often a natural requirement as it helps explainability and interpretation etc. in many cases.
Solving the optimization problem
Apart from the broad applicability, (opt) is also algorithmically interesting. It is a constrained problem as we optimize subject to $x \in \operatorname{lin} D$, yet at the same time the feasible region is unbounded. Surely one could project into the linear space etc but this is quite costly if $D$ is large and potentially very challenging if $D$ is countably infinite; in fact it is (opt) that solves exactly this problem for a
specific vector $y$ subject to additional constraints such as, e.g., sparsity and good Normalized Mean Squared Error (NMSE). When solving (opt) we thus face some interesting challenges, such as not being able to bound the diameter of the feasible region (an often used quantity in constrained convex minimization).
There are various algorithms to solve (opt) while maintaining sparsity. One such class are Coordinate Descent, Matching Pursuit [MZ], Orthogonal Matching Pursuit [AKGT] and similar algorithms that try to achieve sparsity due to their design. Another class solves a constraint version by introducing an $\ell_1$-constraint as discussed above to induce sparsity. This includes (vanilla) Gradient Descent (not really sparse), Conditional Gradient descent [CG] (aka the Frank-Wolfe algorithm [FW]) and its variants (see e.g., [LJ]) as well as specialized algorithms such as Compressive Sampling Matching Pursuit (CoSaMP) [NT] or Conditional Gradient with Enhancement and Truncation (CoGEnT) [RSW]. Also our recent Blended Conditional Gradients (BCG) algorithm [BPTW] applies to the formulation with $\ell_1$-ball relaxation; see also the summary of the paper for more details.
For an overview of the computational as well as reconstruction advantages and disadvantages of some of those algorithms, see [AKGT].
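As a concrete baseline, here is a minimal sketch of (generalized) Matching Pursuit for the quadratic $f(z) = \frac{1}{2}\|z - y\|_2^2$ over a finite dictionary. This is a toy illustration of the MP baseline only, not the blended algorithm of the paper:

```python
import numpy as np

def matching_pursuit(y, D, iters=1000):
    # Generalized MP for f(z) = 0.5 * ||z - y||^2 over lin(D);
    # D is a (k, n) array whose rows are the atoms.
    z = np.zeros_like(y, dtype=float)
    support = set()
    for _ in range(iters):
        grad = z - y                          # gradient of f at z
        scores = D @ grad
        j = int(np.argmax(np.abs(scores)))    # atom most aligned with the gradient
        d = D[j]
        gamma = -scores[j] / (d @ d)          # exact line search along d
        if abs(gamma) < 1e-14:
            break
        z = z + gamma * d
        support.add(j)
    return z, sorted(support)

rng = np.random.default_rng(1)
D = rng.normal(size=(50, 20))
y = 2.0 * D[3] - 1.5 * D[7]                   # ground truth lies in lin(D)
z, support = matching_pursuit(y, D)
print(np.linalg.norm(z - y))                  # residual should be tiny
```

Orthogonal MP would additionally re-solve the least-squares problem over all selected atoms each iteration, which is what buys its better sparsity at higher cost.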
Our results
More recently, in [LKTJ] a unifying view of Conditional Gradients and Matching Pursuit has been established. Apart from presenting new algorithms, the authors also show that basically the Frank-Wolfe algorithm corresponds to Matching Pursuit and the Fully-Corrective Frank-Wolfe algorithm corresponds to Orthogonal Matching Pursuit; shortly after in [LRKRSSJ] an accelerated variant of Matching Pursuit has been provided. The unified view of [LKTJ] motivated us to carry over the blending idea from [BPTW] to the Matching Pursuit context, as the BCG algorithm provided very good sparsity in the constraint case in our tests. Moreover, we wanted to extend the convergence analysis to not just smooth and (strongly) convex functions but more generally smooth and sharp functions, which nicely interpolates between the convex and the strongly convex regime (see Cheat Sheet: Hölder Error Bounds (HEB) for Conditional Gradients for details on sharpness); the same can be also done for Conditional Gradients (see our recent work [KDP] or the summary).
The basic idea behind
blending is to mix together various types of steps. Here the mixing is between Matching Pursuit style steps and low-complexity Gradient Steps over the currently selected atoms. The former steps make sure that we discover new dictionary elements that we need to make progress, whereas the latter ones usually give more per-iteration progress, are cheaper in wall-clock time, and promote sparsity. Unfortunately, as straightforward as it sounds to carry over the blending to Matching Pursuit, it is not that simple. The blending that we did before in [BPTW] heavily relied on dual gap estimates (in fact variants of the Wolfe gap) to switch between the various steps, however these gaps are not available here due to the unboundedness of $\operatorname{lin} D$.
After navigating these technical challenges, what we ended up with is a
Blended Matching Pursuit (BMP) algorithm, that is basically as fast (or faster) than the standard Matching Pursuit (or its generalized variant, Generalized Matching Pursuit (MP), for arbitrary smooth convex functions), while maintaining a sparsity close to that of the much slower Orthogonal Matching Pursuit (OMP); the former only performs line search across the newly added atom, while the latter re-optimizes over the full set of selected elements in each iteration, hence offering much better sparsity at the price of much higher running times. Example Computation 1:
The following figure shows a sample computation for a sparse signal recovery instance from [RSW], which we scaled down by a factor of $10$. The actual signal has a sparsity of $s = 100$, we have $m = 500$ measurements, and the measurement happens in $n = 2000$-dimensional space. We choose $A\in\mathbb{R}^{m\times n}$ and $x^\esx \in \mathbb{R}^n$ with $\norm{x^\esx}_0=s$. The measurement is generated as $y=Ax^\esx + \mathcal{N}(0,\sigma^2I_m)$ with $\sigma = 0.05$.
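The instance generation just described can be reproduced in a few lines (a sketch; the seed and the Gaussian coefficients on the support are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, s, sigma = 500, 2000, 100, 0.05

A = rng.normal(size=(m, n))
x_star = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_star[support] = rng.normal(size=s)           # sparse ground truth, ||x*||_0 = s
y = A @ x_star + sigma * rng.normal(size=m)    # noisy measurements

print(np.count_nonzero(x_star), y.shape)
```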
We benchmarked BMP against MP and OMP (see [LKTJ] for pseudo-code). We also benchmarked against BCG (see [BPTW] for pseudo-code) and CoGEnT (see [RSW] for pseudo-code); for these algorithms we optimize subject to a scaled $\ell_1$-ball, where the radius has been empirically chosen so the signal is contained in the ball; otherwise we could not compare primal gap progress. Note that by scaling up the $\ell_1$-ball we might produce less sparse solutions; see [LKTJ] and the contained discussion for relating conditional gradient methods to matching pursuit methods. Each algorithm is run for either $300$ secs or until there is no (substantial) primal improvement anymore; whichever comes first.
In the aforementioned
Sparse Signal Recovery problem, another way to compare the quality of the actual reconstructions is via the Normalized Mean Square Error (NMSE). The next figure shows the evolution of NMSE across the optimization:
The rebound likely happens because once the actual signal is reconstructed, overfitting of the noise term starts, deteriorating the NMSE. One could clean up the reconstruction by removing all atoms in the support with small coefficients; this is beyond our scope here, however. Here is the same NMSE plot truncated after the first $30$ secs for better visibility of the initial phase:
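For reference, NMSE is the squared reconstruction error normalized by the signal energy, $\|\hat{x} - x^*\|_2^2 / \|x^*\|_2^2$ (the exact normalization or dB scaling used in the plots may differ); a sketch:

```python
import numpy as np

def nmse(x_hat, x_star):
    # Normalized Mean Squared Error of an estimate x_hat against ground truth.
    return np.linalg.norm(x_hat - x_star) ** 2 / np.linalg.norm(x_star) ** 2

x_star = np.array([0.0, 1.0, 0.0, -2.0])
print(nmse(x_star, x_star))        # 0.0 for perfect recovery
print(nmse(np.zeros(4), x_star))   # 1.0 for the all-zero estimate
```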
Example Computation 2:
Same setup as above, however this time the actual signal has a sparsity of $s = 100$, we have $m = 1500$ measurements, and the measurement happens in $n = 6000$-dimensional space. This time we run for $1200$ secs or until no (substantial) primal progress is made. Here the performance of BMP is very obvious.
The next figure shows the evolution of NMSE across the optimization:
And truncated again, here after roughly the first $300$ secs for better visibility. We can see that BMP reaches its NMSE minimum right around $100$ atoms and it is much faster than any of the other algorithms.
References
[MZ] Mallat, S. G., & Zhang, Z. (1993). Matching pursuits with time-frequency dictionaries. IEEE Transactions on signal processing, 41(12), 3397-3415. pdf
[TG] Tropp, J. A., & Gilbert, A. C. (2007). Signal recovery from random measurements via orthogonal matching pursuit. IEEE Transactions on information theory, 53(12), 4655-4666. pdf
[CG] Levitin, E. S., & Polyak, B. T. (1966). Constrained minimization methods. Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki, 6(5), 787-823. pdf
[FW] Frank, M., & Wolfe, P. (1956). An algorithm for quadratic programming. Naval research logistics quarterly, 3(1‐2), 95-110. pdf
[LJ] Lacoste-Julien, S., & Jaggi, M. (2015). On the global linear convergence of Frank-Wolfe optimization variants. In Advances in Neural Information Processing Systems (pp. 496-504). pdf
[NT] Needell, D., & Tropp, J. A. (2009). CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and computational harmonic analysis, 26(3), 301-321. pdf
[RSW] Rao, N., Shah, P., & Wright, S. (2015). Forward–backward greedy algorithms for atomic norm regularization. IEEE Transactions on Signal Processing, 63(21), 5798-5811. pdf
[BPTW] Braun, G., Pokutta, S., Tu, D., & Wright, S. (2018). Blended Conditional Gradients: the unconditioning of conditional gradients. arXiv preprint arXiv:1805.07311. pdf
[AKGT] Arjoune, Y., Kaabouch, N., El Ghazi, H., & Tamtaoui, A. (2017, January). Compressive sensing: Performance comparison of sparse recovery algorithms. In 2017 IEEE 7th annual computing and communication workshop and conference (CCWC) (pp. 1-7). IEEE. pdf
[LKTJ] Locatello, F., Khanna, R., Tschannen, M., & Jaggi, M. (2017). A unified optimization view on generalized matching pursuit and frank-wolfe. arXiv preprint arXiv:1702.06457. pdf
[LRKRSSJ] Locatello, F., Raj, A., Karimireddy, S. P., Rätsch, G., Schölkopf, B., Stich, S. U., & Jaggi, M. (2018). On matching pursuit and coordinate descent. arXiv preprint arXiv:1803.09539. pdf
[KDP] Kerdreux, T., d’Aspremont, A., & Pokutta, S. (2018). Restarting Frank-Wolfe. to appear in Proceedings of AISTATS. pdf |
Observation
When using a forward-Euler method for the time integration of the momentum equation for an inviscid flow, it appears that the kinetic energy of the flow grows unbounded in time, regardless of the timestep size.
Problem Statement
Estimate the change in total kinetic energy when using forward-Euler to integrate the Euler momentum equations in a periodic box.
Approach
To solve this problem, we need to do the following:
1. Formulate a forward-Euler (FE) semi-discrete formula for the Euler equations.
2. Construct the local kinetic energy (i.e. pointwise) by taking the dot product of the semi-discrete velocity field with itself.
3. Using this information, construct an equation for the change in total kinetic energy with time, e.g. ${\displaystyle \frac{\Delta K}{\Delta t}}$.
4. Analyze the right-hand-side (RHS) of this equation in the context of a periodic box. If the RHS > 0, then the kinetic energy will increase.

Show Me the Math
It is best to use tensor notation for this problem. We start with the Euler equations
\begin{equation}
\frac{\partial u_{i}}{\partial x_{i}}=0 \end{equation}
\begin{equation}
\frac{\partial u_{i}}{\partial t}=-u_{j}\frac{\partial u_{i}}{\partial x_{j}}-\frac{\partial p}{\partial x_{i}}\equiv F_{i} \end{equation}
where $i\in\{1,2,3\}$ denotes the $i^{\text{th}}$ spatial direction.
Using forward-Euler time integration, we construct the semi-discrete formula
\begin{equation}
\frac{u_{i}^{n+1}-u_{i}^{n}}{\Delta t}+\mathcal{O}\left(\Delta t\right)=-F_{i}^{n};\quad F_{i}^{n}\equiv u_{j}^{n}\frac{\partial u_{i}^{n}}{\partial x_{j}}+\frac{\partial p^{n}}{\partial x_{i}} \end{equation}
where $t^{n+1}=t^{n}+\Delta t$ and $\Delta t$ is the fixed timestep size. For simplicity of the exposition, we drop the “order” notation and set
\begin{equation}
u_{i}^{n+1}=u_{i}^{n}-\Delta t\, F_{i}^{n}\label{eq:ke-euler-discrete} \end{equation}
Now, in a staggered arrangement, the kinetic energy lives at cell centers while the velocity components are stored on staggered volumes. For simplicity, we assume a uniform, staggered grid. Consequently, the staggered volumes coincide with the faces of the cell-centered ones. Then, one calculates the local kinetic energy, $k$, as
\begin{equation}
k=\tfrac{1}{2}u_{i}u_{i}=\tfrac{1}{2}(u_{1}^{2}+u_{2}^{2}+u_{3}^{2}) \end{equation}
where Einstein summation is implied on repeated indices. Note that the local kinetic energy is defined as the pointwise kinetic energy – the kinetic energy in every cell. Now, we can formulate the following sub-problem: given a discrete velocity field $u_{i}^{n}$ such that $\frac{\partial u_{i}^{n}}{\partial x_{i}}=0$ (discretely), estimate the kinetic energy, $k^{n+1}$, at the next timestep. In other words, use \eqref{eq:ke-euler-discrete} to calculate $k^{n+1}$.
We start by multiplying \eqref{eq:ke-euler-discrete} through with $u_{i}^{n+1}$, we have
\begin{equation} u_{i}^{n+1}u_{i}^{n+1}=u_{i}^{n+1}\left(u_{i}^{n}-\Delta t\, F_{i}^{n}\right) \end{equation}
Or, using $k=\frac{1}{2}u_{i}u_{i}$, we have
\begin{equation} 2k^{n+1}=\left(u_{i}^{n}-\Delta t\, F_{i}^{n}\right)\left(u_{i}^{n}-\Delta t\, F_{i}^{n}\right) \end{equation}
Expanding the right-hand-side, we get
\begin{equation} 2k^{n+1}=u_{i}^{n}u_{i}^{n}-2\Delta t\, u_{i}^{n}F_{i}^{n}+\Delta t^{2}F_{i}^{n}F_{i}^{n} \end{equation}

or

\begin{equation} 2k^{n+1}=2k^{n}-2\Delta t\, u_{i}^{n}F_{i}^{n}+\Delta t^{2}F_{i}^{n}F_{i}^{n}\label{eq:kn1-0} \end{equation}

We now rearrange \eqref{eq:kn1-0} as follows

\begin{equation} \frac{k^{n+1}-k^{n}}{\Delta t}=-u_{i}^{n}F_{i}^{n}+\frac{1}{2}\Delta t\, F_{i}^{n}F_{i}^{n} \end{equation}

or

\begin{equation} \frac{\Delta k}{\Delta t}=-u_{i}^{n}F_{i}^{n}+\frac{1}{2}\Delta t\, F_{i}^{n}F_{i}^{n}\label{eq:dkdt-0} \end{equation}
This equation tells us how much the implied kinetic energy changes with time. It is an implied kinetic energy equation given that it was constructed from the velocity field not from the transported kinetic energy.
We now focus our attention on the first term in the RHS of \eqref{eq:dkdt-0}, i.e. $u_{i}^{n}F_{i}^{n}$. Substituting for $F_{i}^{n}$, we have
\begin{equation}
u_{i}^{n}F_{i}^{n}=u_{i}^{n}\left(u_{j}^{n}\frac{\partial u_{i}^{n}}{\partial x_{j}}+\frac{\partial p^{n}}{\partial x_{i}}\right)=u_{i}^{n}u_{j}^{n}\frac{\partial u_{i}^{n}}{\partial x_{j}}+u_{i}^{n}\frac{\partial p^{n}}{\partial x_{i}}\label{eq:ui-fn} \end{equation}
The purpose now is to try to convert this term into a divergence form. We will see why later. Starting with the first term $u_{i}^{n}u_{j}^{n}\frac{\partial u_{i}^{n}}{\partial x_{j}}$, we know that
\begin{equation}
\frac{\partial u_{i}^{n}u_{i}^{n}u_{j}^{n}}{\partial x_{j}}=u_{i}^{n}u_{i}^{n}\frac{\partial u_{j}^{n}}{\partial x_{j}}+u_{j}^{n}\frac{\partial u_{i}^{n}u_{i}^{n}}{\partial x_{j}}=u_{i}^{n}u_{i}^{n}\frac{\partial u_{j}^{n}}{\partial x_{j}}+2u_{i}^{n}u_{j}^{n}\frac{\partial u_{i}^{n}}{\partial x_{j}} \end{equation}
But, assuming that continuity is satisfied discretely at every cell, we can set $\frac{\partial u_{j}^{n}}{\partial x_{j}}=0$. Then, we recover
\begin{equation} u_{i}^{n}u_{j}^{n}\frac{\partial u_{i}^{n}}{\partial x_{j}}=\frac{1}{2}\frac{\partial u_{i}^{n}u_{i}^{n}u_{j}^{n}}{\partial x_{j}}=\frac{\partial k^{n}u_{j}^{n}}{\partial x_{j}} \end{equation} Lastly, the second term in \eqref{eq:ui-fn} is easily replaced by \begin{equation} u_{i}^{n}\frac{\partial p^{n}}{\partial x_{i}}=\frac{\partial u_{i}^{n}p^{n}}{\partial x_{i}}-p^{n}\frac{\partial u_{i}^{n}}{\partial x_{i}} \end{equation}
Again, by assuming that continuity is satisfied discretely, we can drop the last term in the previous equation and use
\begin{equation} u_{i}^{n}\frac{\partial p^{n}}{\partial x_{i}}=\frac{\partial u_{i}^{n}p^{n}}{\partial x_{i}} \end{equation}
Upon substitution of the modified terms back into the implied kinetic energy equation \eqref{eq:dkdt-0}, we have
\begin{equation}
\frac{\Delta k}{\Delta t}=-\frac{\partial k^{n}u_{j}^{n}}{\partial x_{j}}-\frac{\partial u_{i}^{n}p^{n}}{\partial x_{i}}+\frac{1}{2}\Delta tF_{i}^{n}F_{i}^{n} \end{equation}
In principle, for an inviscid flow in a periodic box, there should be no production or dissipation of total kinetic energy. The total kinetic energy is defined as the volumetric integral of $k$ over the domain, i.e.
\begin{equation}
\mathrm{K}=\int_{V}k\,\mathrm{d}V \end{equation}
Since there is no production or dissipation in our problem, we expect that $\text{K}=const$ or
\begin{equation}
\frac{\partial}{\partial t}\int_{V}k\,\mathrm{d}V=0 \end{equation}
Looking at our semi-discrete equation, upon integration over the periodic box, we have
\begin{equation}
\frac{\Delta}{\Delta t}\int_{V}k\mathrm{d}V=-\int_{V}\frac{\partial k^{n}u_{j}^{n}}{\partial x_{j}}\mathrm{d}V-\int_{V}\frac{\partial u_{i}^{n}p^{n}}{\partial x_{i}}\mathrm{d}V+\frac{1}{2}\Delta t\int_{V}F_{i}^{n}F_{i}^{n}\mathrm{d}V \end{equation}
Because the domain is periodic, we have, for any vector field $\mathbf{v}$,
\begin{equation}
\int_{V}\nabla\cdot\mathbf{v}\mathrm{d}V=\int_{V}\frac{\partial v_{i}}{\partial x_{i}}\mathrm{d}V=\int_{\mathcal{S}}\mathbf{v}\cdot\mathbf{n}\mathrm{d}\mathcal{S}=0 \end{equation}
Then, all terms that are written in divergence form vanish identically. Hence, one is left with,
\begin{equation}
\frac{\Delta\mathrm{K}}{\Delta t}=\frac{1}{2}\Delta t\int_{V}F_{i}^{n}F_{i}^{n}\mathrm{d}V\geq0 \end{equation}
and therefore, the total kinetic energy grows unboundedly with time when using a forward-Euler scheme on the Euler equations, in a periodic box.
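The same mechanism can be demonstrated on a toy conservative system. The sketch below (an illustration, not the staggered-grid discretization discussed here) applies forward Euler to the undamped oscillator $\dot{u} = v$, $\dot{v} = -u$, whose exact energy $k = \tfrac{1}{2}(u^2 + v^2)$ is conserved; the discrete update multiplies the energy by exactly $(1 + \Delta t^2)$ every step, the analogue of the $\tfrac{1}{2}\Delta t\, F_i F_i$ production term:

```python
def forward_euler_oscillator(u, v, dt, steps):
    """Forward Euler for du/dt = v, dv/dt = -u (energy is conserved exactly)."""
    for _ in range(steps):
        u, v = u + dt * v, v - dt * u
    return u, v

# Energy k = (u^2 + v^2)/2 grows by the exact factor (1 + dt^2) per step,
# since (u + dt*v)^2 + (v - dt*u)^2 = (1 + dt^2)(u^2 + v^2).
u, v = 1.0, 0.0
dt, steps = 0.1, 100
uf, vf = forward_euler_oscillator(u, v, dt, steps)
k0 = 0.5 * (u * u + v * v)
kf = 0.5 * (uf * uf + vf * vf)
print(kf / k0, (1.0 + dt * dt) ** steps)  # both ~2.70
```

The growth rate is independent of initial conditions, mirroring the unconditional instability derived above: no choice of $\Delta t > 0$ removes the production term, it only changes its size.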
Dr. Saad’s Notes, retrieved on June 30, 2016, http://www.tonysaad.net/notes/unbounded-kinetic-energy-in-forward-euler-inviscid-flows/. |
Mechanical Properties of Fluids

Viscosity

Viscous force is the force of friction acting between two layers of a fluid, opposing the relative motion of one layer over the other. When temperature increases, the viscosity of gases increases due to increased exchange of momentum. When temperature increases, the viscosity of liquids decreases due to a decrease in cohesive forces. Velocity gradient is the ratio of the difference in velocities to the distance between two points in the liquid. The viscosity and compressibility of an ideal fluid are zero. The viscous force acting between two adjacent layers of a liquid is directly proportional to the surface area of the layers in contact and to the velocity gradient. Coefficient of viscosity is defined as the tangential force per unit area required to maintain a unit velocity gradient. The ratio of the coefficient of viscosity to the density of the liquid is called the coefficient of kinematic viscosity. If the flow is streamline, the velocity of the liquid is proportional to the pressure difference. When capillaries are connected in series, the volume of liquid flowing per second is the same through each; when capillaries are connected in parallel, the total pressure difference remains constant. A falling body attains terminal velocity when the sum of the viscous force and the upthrust of the liquid on the body equals its weight. Terminal velocity is directly proportional to the square of the radius of the sphere and inversely proportional to the viscosity of the fluid. When $n$ droplets, each falling with terminal velocity $V_{\text{small}}$, combine into one big drop, the terminal velocity of the big drop is $V_{\text{big}} = n^{2/3}\,V_{\text{small}}$.
1. The coefficient of viscosity for a fluid is defined as the ratio of shearing stress to the strain rate: $\eta=\frac{F/A}{v/l}=\frac{Fl}{vA}$

2. Stokes' law: the viscous drag force $F$ on a sphere of radius $r$ moving with velocity $v$ through a fluid of viscosity $\eta$ is $F = 6\pi\eta r v$

3. Terminal velocity: the maximum constant velocity acquired by a body falling through a viscous fluid, $v=\frac{2}{9}\times\frac{r^{2}\left(\rho-\sigma\right)g}{\eta}$
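As a quick numerical check of the last two formulas (the fluid values below are illustrative assumptions, not from the text):

```python
def terminal_velocity(r, rho, sigma, eta, g=9.81):
    """v = (2/9) * r^2 * (rho - sigma) * g / eta  (Stokes regime)."""
    return (2.0 / 9.0) * r**2 * (rho - sigma) * g / eta

# n equal droplets merging conserve volume, so R_big = n^(1/3) * r,
# and since v is proportional to r^2, V_big = n^(2/3) * V_small.
n = 8
r_small = 1e-5  # a 10-micrometre water droplet in air (assumed values)
v_small = terminal_velocity(r_small, 1000.0, 1.2, 1.8e-5)
v_big = terminal_velocity(n ** (1.0 / 3.0) * r_small, 1000.0, 1.2, 1.8e-5)
print(v_big / v_small)  # n^(2/3) = 4.0 for n = 8
```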
$$L=\{0^m1^n \enspace | \enspace m \neq n\}$$
I saw that this exact question exists elsewhere, but I couldn't understand what was being said there. My question does not mandate the use of the Pumping Lemma as stated "elsewhere", but I am using the Pumping Lemma anyway. I want to present what I have so far, and for someone to tell me if I'm on the right track:
Assume, for contradiction, that $L$ is regular. Let $p$ be the pumping length given by the Pumping Lemma for regular languages. Let the string $w = 0^p1^{p+1} \in L$. By the Pumping Lemma, $w = xy^iz$, where $i \geq 0$, $\color{green}{\lvert y \rvert \geq 1}$, and $\color{red}{\lvert xy \rvert \lt p}$. Let:
\begin{equation} \begin{aligned} x &= 0^{p} \\ y &= 1^{p+1} \\ z &= \varepsilon \end{aligned} \end{equation}
It is at this point in the proof that I get confused. I feel as if I've set it up well, but just can't finish. Here's what I've got, though:
We see that $\lvert y \rvert= p+1 \geq 1 \enspace \color{green}{\checkmark}$ However, $\lvert xy \rvert= p+p+1 \gt p \enspace \color{red}{\textbf{X}}$
As we can see by $\textit{(7)}$, our test string $w$ violates a $\color{red}{condition}$ of the Pumping Lemma, thus is not regular.
Thumbs up, thumbs down, anyone? Did I make the appropriate inferences about my split string $w$ in order to achieve a contradiction, and did I even split the string correctly? And to boot, did I even pick a $w$ that is useful to the proof? |
Speaker
Dr Raphael M. Albuquerque (University of São Paulo)
Description
We calculate the branching ratio for the production of the meson $Y(4260)$ in the decay $B^- \to Y(4260)K^-$. We use QCD sum rules approach and we consider the $Y(4260)$ to be a mixture between charmonium and exotic tetraquark, $[\bar{c}\bar{q}][qc]$, states with $J^{PC}=1^{--}$. Using the value of the mixing angle determined previously as: $\theta=(53.0\pm0.5)^\circ$, we get the branching ratio $\mathcal{B}(B\to Y(4260)K)=(1.34\pm0.47)\times10^{-6}$, which allows us to estimate an interval on the branching fraction $3.0 \times 10^{-8} < {\mathcal B}_{_Y} < 1.8 \times 10^{-6}$ in agreement with the experimental upper limit reported by Babar Collaboration.
Primary author
Dr Raphael M. Albuquerque (University of São Paulo) |
I am reading the following paper by Slater:
https://journals.aps.org/pr/pdf/10.1103/PhysRev.81.385
On page 5 they write above equation (12) the following:
"If we now average over all wave functions, we find that the properly weighted average of $F(\eta)$ is 3/4."
Now, $F(\eta)=1/2 + \frac{1-\eta^2}{4\eta}\ln((1+\eta)/(1-\eta))$.
I don't understand what it means to average over wave functions. I thought that they calculated $\lim_{T\to \infty} \frac{1}{T}\int_0^T F(x)dx$, but when I asked Maple to calculate this limit (for the additive part, without the 1/2), it didn't give me 1/4.
So I don't understand which average of this function did they calculate?
Does anyone know?
Thanks! |
Complex Numbers
A complex number is an ordered pair of two real numbers $(x, y)$. We can think of complex numbers as points on the coordinate plane. Let $z$ be a complex number, i.e. $z = (x, y)$; then $x$ is the real part of $z$, and $y$ is the imaginary part of $z$.
The set of complex numbers is denoted by $\mathbb{C}$.
The set of real numbers is its subset. Real numbers written as complex are $(x, 0), \ \ x \in \mathbb{R}$
Each complex number $(x, y)$ corresponds to a point on the coordinate plane. Just as we cannot write $A > B$ for two points, we cannot write $(x_1, y_1) > (x_2, y_2)$ for two complex numbers: complex numbers have no ordering.
Let $z_1 = (x_1, y_1)$ and $z_2 = (x_2, y_2)$ be two complex numbers; then:
$z_1 = z_2 \Leftrightarrow x_1 = x_2$ and $y_1 = y_2$
$z_1 \pm z_2 = (x_1, y_1) \pm (x_2, y_2) = (x_1 \pm x_2, y_1 \pm y_2)$

$z_1z_2 = (x_1, y_1)\times (x_2, y_2) = (x_1x_2 - y_1y_2, x_1y_2 + y_1x_2)$

$\frac{z_1}{z_2}=\frac{(x_1, y_1)}{(x_2, y_2)}=\big(\frac{x_1x_2+y_1y_2}{x_2^2+y_2^2}, \frac{x_2y_1-x_1y_2}{x_2^2+y_2^2}\big)$

Another way to write the complex number $z = (x, y)$ is $z = a + bi$, where $a$ is the real part of $z$, $b$ is the imaginary part and $i$ is the imaginary unit: $i^2 = -1$, $i = \sqrt{-1}$.
Each complex number $z = a + bi$ has its complex conjugate $\bar{z} = a - bi$.

$z + \bar{z} = 2a$, a real number; $z - \bar{z} = 2bi$, an imaginary number; $z \cdot \bar{z} = a^2 + b^2 = |z|^2$, a real number.

Addition, multiplication and division of complex numbers
Let $(a + bi)$ and $(c + di)$ be two complex numbers.

Complex numbers addition: $(a + bi) + (c + di) = (a + c) + (b + d)i$

Complex numbers subtraction: $(a + bi) - (c + di) = (a - c) + (b - d)i$

Reals are added with reals and imaginaries with imaginaries.

Complex numbers multiplication: $(a + bi)(c + di) = (ac - bd) + (ad + bc)i$

Complex numbers division:

$\frac{a + bi}{c + di}=\frac{(ac + bd)+(bc - ad)i}{c^2+d^2}$
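These component formulas can be checked against a language's built-in complex arithmetic; a small sketch (the helper names are made up for illustration):

```python
def cmul(a, b, c, d):
    """(a + bi)(c + di) = (ac - bd) + (ad + bc)i, returned as a (real, imag) pair."""
    return (a * c - b * d, a * d + b * c)

def cdiv(a, b, c, d):
    """(a + bi)/(c + di) = ((ac + bd) + (bc - ad)i) / (c^2 + d^2)."""
    den = c * c + d * d
    return ((a * c + b * d) / den, (b * c - a * d) / den)

z1, z2 = complex(3, 4), complex(1, -2)
assert cmul(3, 4, 1, -2) == ((z1 * z2).real, (z1 * z2).imag)   # (11, -2)
assert abs(complex(*cdiv(3, 4, 1, -2)) - z1 / z2) < 1e-12      # -1 + 2i
```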
Powers of $i$ repeat in a cycle of four:

$i^1 = i$, and in general $i^{4n + 1} = i$ for exponents $\{4n + 1,\ n \in \mathbb{Z}\} = \{1, 5, 9, \ldots\}$

$i^2 = -1$, and in general $i^{4n + 2} = -1$ for exponents $\{4n + 2,\ n \in \mathbb{Z}\} = \{2, 6, 10, \ldots\}$

$i^3 = -i$, and in general $i^{4n + 3} = -i$ for exponents $\{4n + 3,\ n \in \mathbb{Z}\} = \{3, 7, 11, \ldots\}$

$i^4 = 1$, and in general $i^{4n} = 1$ for exponents $\{4n,\ n \in \mathbb{Z}\} = \{4, 8, 12, \ldots\}$
Polar form
The polar form of a complex number is:

$z = r(\cos(\theta) + i\cdot\sin(\theta)) = re^{i\theta}$

Here, $|z|$ (or $r$) is known as the complex modulus and $\theta$ is known as the complex argument or phase. (In the original figure, the dashed circle represents the complex modulus $|z|$ of $z$ and the angle $\theta$ represents its complex argument.)
Let us have two complex numbers $z_1$ and $z_2$ in polar form:

$z_1 = r_1(\cos(\theta_1) + i\cdot\sin(\theta_1))$

$z_2 = r_2(\cos(\theta_2) + i\cdot\sin(\theta_2))$

then

$z_1\cdot z_2 = r_1 r_2[\cos(\theta_1 + \theta_2) + i\cdot\sin(\theta_1 + \theta_2)]$

$\frac{z_1}{z_2}=\frac{r_1}{r_2}[\cos(\theta_1-\theta_2)+i\cdot \sin(\theta_1-\theta_2)]$
De Moivre's formulas

The powers of a complex number:

$z^n = r^n(\cos(n\theta) + i\cdot\sin(n\theta))$
Finding the nth root of a complex number:
$\sqrt[n]{z}=\sqrt[n]{r}(\cos(\frac{\theta+2k\pi}{n})+i\cdot \sin(\frac{\theta+2k\pi}{n}))$ k = 0, 1, 2,..., n-1 |
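A short numerical check of the power and root formulas, using Python's cmath (an illustrative sketch):

```python
import cmath
import math

def de_moivre_power(r, theta, n):
    """z^n = r^n (cos(n*theta) + i*sin(n*theta))."""
    return (r ** n) * complex(math.cos(n * theta), math.sin(n * theta))

def nth_roots(z, n):
    """The n distinct n-th roots: r^(1/n) * exp(i*(theta + 2*pi*k)/n), k = 0..n-1."""
    r, theta = cmath.polar(z)
    return [r ** (1.0 / n) * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

z = 1 + 1j                       # r = sqrt(2), theta = pi/4
r, theta = cmath.polar(z)
assert abs(de_moivre_power(r, theta, 3) - z ** 3) < 1e-12
assert all(abs(w ** 4 - z) < 1e-12 for w in nth_roots(z, 4))
```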
Yesterday, I read in my textbook,
We assign degree to every polynomial and even a non-zero constant is assigned a degree $0$ but $0$ itself is not assigned a degree.
Why is that? Why we don't assign degree $0$ to the zero polynomial?
Assigning a degree to the zero polynomial will cause trouble with important and useful theorems that relate the degree of a polynomial to its roots:
If $F$ is a field (examples of fields are $\mathbb{R}$, $\mathbb{C}$, $\mathbb{Q}$, $\mathbb{Z}/p\mathbb{Z}$), a polynomial $P$ with coefficients in $F$ (the set/ring of these polynomials is usually denoted by $F[x]$) of degree $n$, has at most $n$ distinct points $\alpha\in F$ such that $P(\alpha)=0$.
This theorem follows from the fact that we can repeatedly factor out terms of the form $x-\alpha$ (where $\alpha$ is a root) from $P(x)$, lowering the degree of the remaining polynomial by $1$ in each step. See also: http://en.wikipedia.org/wiki/Factor_theorem.
When we restrict to polynomials with coefficients in $\mathbb{C}$, the statement is related to the Fundamental Theorem of Algebra.
Now since the zero polynomial in $F[x]$ has a root at every point in $F$, at least for infinite fields ($\mathbb{R},\mathbb{C},\mathbb{Q}$), we cannot assign a finite value to the degree of the zero polynomial without getting into trouble.
This is to make nice rules such as $$ \text{deg }(PQ) = \text{deg }P + \text{deg }Q\\ \text{deg }(P+Q) \le \max(\text{deg }P , \text{deg }Q) $$
So the only value that makes it possible is $$\text{deg }0= -\infty$$
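With the convention $\deg 0 = -\infty$, both rules hold with no exceptions. A small sketch, representing a polynomial by its list of coefficients (the helper names are made up for illustration):

```python
import math

def degree(p):
    """Degree of a coefficient list [a0, a1, ...]; -inf for the zero polynomial."""
    d = -math.inf
    for i, a in enumerate(p):
        if a != 0:
            d = i
    return d

def multiply(p, q):
    """Polynomial product by convolution of coefficient lists."""
    if degree(p) == -math.inf or degree(q) == -math.inf:
        return [0]
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

p, q, zero = [2, 0, 5], [1, 3], [0]   # 5x^2 + 2, 3x + 1, and 0
assert degree(multiply(p, q)) == degree(p) + degree(q)        # 3 = 2 + 1
assert degree(multiply(p, zero)) == degree(p) + degree(zero)  # -inf = 2 + (-inf)
```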
The degree of a polynomial is the exponent of the term with the highest power (and non-zero coefficient).
$$5x-4x^4+2\to x^4\to 4$$ $$0x^{45}+1x^3+2x^2\to x^3\to3$$ $$21\to x^0\to0$$ $$0\to ??$$
The null polynomial contains no power of the variable. |
Kontsevich's formality theorem implies in particular that star-products on a $C^\infty$-manifold $M$,$$f\star g = fg + \sum_{k\geq1} \hbar^k B_k(f,g),\qquad f,g\in C^\infty(M),$$ where $B_k$ are bidifferential operators of degree at most $k$, are classified, up to the gauge equivalence $$\star \sim \star' \iff \exists\ T=I+\sum_{l\geq1}\hbar^lT_l\ ,\quad T(f\star g)=T(f)\star' T(g)$$where $T_l$ are differential operators of order at most $l$, by Poisson bivectors depending formally on $\hbar$$$\Pi(\hbar) = \Pi_0+\sum_{k\geq1}\hbar^{k}\Pi_k \in C^\infty(M,\wedge^2 TM)[[\hbar]],\qquad [\Pi(\hbar),\Pi(\hbar)]_{\mathrm{SN}}=0$$(where $[\cdot,\cdot]_{\mathrm{SN}}$ is the Schouten-Nijenhuis bracket) up to formal paths in the groups of diffeomorphisms of $M$ starting at the identity diffeomorphism.
The star product commutator $[f,g] := \frac{1}{\hbar} (f\star g - g\star f) = \{f,g\}+\sum_{k\geq1}\hbar^kC_k(f,g)$ starts with the Poisson bracket associated to the Poisson tensor $\Pi_0$. So a star product, which is an associative deformation $(C^\infty(M)[[\hbar]],\star)$ of the associative commutative algebra $(C^\infty(M),\cdot)$, induces in particular a Lie algebra deformation $(C^\infty(M)[[\hbar]],[\cdot,\cdot])$ of the Poisson algebra $(C^\infty(M),\{\cdot,\cdot\})$.
1) Is there a sensible notion of "deformation quantization" of the Poisson algebra $(C^\infty(M),\{\cdot,\cdot\})$ as a Lie algebra deformation $(C^\infty(M)[[\hbar]],[\cdot,\cdot])$ which does not require referring to (or the existence of) a star-product, i.e. a notion of quantum commutator without the corresponding star-product?
If yes,
2a) does any such special Lie algebra deformation come from a star-product anyway?
2b) is there a classification analogous to Kontsevich's one?
Motivation: in field theory one often faces the problem that, while commutators of local functionals can be defined as local functionals themselves, star-products (or even just classical products for that matter) of local functionals are not local functionals (they are sometimes defined only in some completion of the tensor algebra of local functionals). Can one do without star-products and consider the classification problem for commutators instead?

This post imported from StackExchange MathOverflow at 2016-05-29 11:29 (UTC), posted by SE-user issoroloap
Question
Four fair six-sided dice are rolled. The probability that the sum of the results is $22$ equals $$\frac{X}{1296}.$$ What is the value of $X$?
My Approach
I simplified it to the equation of the form:
$x_{1}+x_{2}+x_{3}+x_{4}=22,\quad 1 \leq x_{i} \leq 6,\quad 1 \leq i \leq 4$
Solving this equation results in:
$x_{1}+x_{2}+x_{3}+x_{4}=22$
I removed restriction of $x_{i} \geq 1$ first as follows-:
$\Rightarrow x_{1}^{'}+1+x_{2}^{'}+1+x_{3}^{'}+1+x_{4}^{'}+1=22$
$\Rightarrow x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$
$\Rightarrow \binom{18+4-1}{18}=1330$
Now I removed the restriction $x_{i} \leq 6$, by counting the number of bad cases and then subtracting it from $1330$: a bad combination is one with some $x_{i} \geq 7$.
$\Rightarrow x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$
We can distribute $7$ to $2$ of $x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$, i.e. $\binom{4}{2}$ ways. We can distribute $7$ to $1$ of $x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$, i.e. $\binom{4}{1}$ ways, and then distribute the rest among all of them, i.e.

$$\binom{4}{1} \binom{14}{11}$$
Therefore, the number of bad combinations equals $$\binom{4}{1} \binom{14}{11} - \binom{4}{2}$$
Therefore, the solution should be:
$$1330-\left( \binom{4}{1} \binom{14}{11} - \binom{4}{2}\right)$$
However, I am getting a negative value. What am I doing wrong?
EDIT
I am asking about my approach because, if the question were posed with a larger number of dice and a higher sum, then guessing the dice values directly would not work.
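That said, for this small instance the target count can be sanity-checked by brute force before debugging the inclusion-exclusion step (a check, not the general method the question is about):

```python
from itertools import product

# Count 4-tuples of die faces (1..6) summing to 22; 6^4 = 1296 total outcomes.
count = sum(1 for roll in product(range(1, 7), repeat=4) if sum(roll) == 22)
print(count)  # 10
```

Comparing this against the inclusion-exclusion result pinpoints exactly which correction term is off.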
You can ask Coq to show you its proof object. Before Qed, type

Show Proof.
(fun (_ : forall n : nat, evenb n = true -> oddb (S n) = true)
(H2 : oddb 3 = true) => H2)
As you can see, H1 is not used at all, and the proof of the goal is simply H2. It works thanks to the definition of oddb and evenb, the definition of negb and the definition of the natural numbers and the booleans.
Definition oddb (n:nat) : bool := negb (evenb n).
oddb 3 is equal to negb (evenb 3) by the definition of oddb above. evenb 3 is equal to evenb (S (S (S O))) by the definition of 3. This is equal to evenb (S O), which is equal to false by the definition of evenb. Therefore oddb 3 = negb false, which is in turn equal to true by the definition of negb.
evenb 4 is equal to evenb (S (S (S (S O)))) by the definition of 4. Applying the definition of evenb three times shows that this is equal to evenb (S (S O)), then to evenb O, and finally to true.
All of these equalities come from computation: if $x$ reduces to $y$ then $x$ is equal to $y$ — this is the most basic form of equality¹. The reductions above use the $\beta$ rule (applying a lambda term to an argument), the $\delta$ rule (replacing a name by its definition) and the $\iota$ rule (applying a recursive function fix … and simplifying pattern matching).
Coq has a large set of tactics to perform computations, though they're only needed in advanced cases. For example, after intros H1 H2, you can ask it to expand evenb and oddb, but do nothing else.

cbv delta.
cbv delta in H2.
Integers like 3 and 4 are syntactic sugar, not named constants. 3 is S (S (S O)), not just some term that reduces to it. If you want to see everything that's going on, turn off pretty-printing of notations.

Unset Printing Notations.

(Use Set Printing Notations. to turn them back on.)

You can watch the goal being reduced to its value.

cbn iota. cbn beta. cbn iota. cbn beta. cbn iota. cbn beta. cbn iota. cbn beta. cbn iota.
Of course you don't need to guide Coq so much, this is just if you want to see all the steps. You can simply ask Coq to compute as much as possible. Just after intros H1 H2, run

compute. compute in H2.

and you'll see that both H2 and the goal simplify to true = true.
In this particular case there's no simpler way to go from H2 to the goal than to fully calculate both. If you continue the tutorial a bit, you'll get to recursive proofs, where it's very common to have similar hypotheses and a similar goal, but with a variable instead of constants like 3. There you would typically combine H1 and H2 together (but here H1 is not particularly interesting: since you're proving something about evenb, you'd want an implication of the form oddb … -> evenb … rather than evenb … -> oddb …).
¹ In a certain sense, it's the only form of equality.
I am trying to evaluate the Exponential Integral $Ei(x)=-\int^{\infty}_{-x}\frac{e^{-t}}{t}dt$ for $x>0$ (interpreted as the Cauchy principal value) by using rational Chebyshev approximations, which can be found in a paper by Cody & Thacher "Chebyshev Approximations for the Exponential Integral $Ei(x)$". The paper is available online at ams.org. I am trying to use the equation on page 292 for the interval $0<x\le6$:
$$Ei(x)\simeq\log(x/x_0)+(x-x_0)\frac{\sum_{j=0}^{n}p_jT_j^{*}(x/6)}{\sum_{j=0}^{n}q_jT_j^{*}(x/6)}$$
Where $x_0\approx{}0.37$ is the zero of $Ei(x)$, $p_j$ and $q_j$ are the coefficients found in table II of the paper (I am using $n=9$) and $T_j^{*}(x)=T_j(2x-1)$ are the shifted Chebyshev polynomials. So I implemented the equation in a naive C program
#include <stdio.h>
#include <math.h>
#include <stdlib.h>

#define N_1 10

double chebyshev(int n, double x){
    switch(n){
        case 0:
            return 1.0;
            break;
        case 1:
            return x;
            break;
        default:
            return 2.0*x*chebyshev(n-1, x)-chebyshev(n-2, x);
            break;
    }
}

double shifted_chebyshev(int n, double x){
    return chebyshev(n, 2.0*x-1.0);
}

double interval1(double x){
    static const double pj_1[N_1] = {
        -4.1658081333604994241879E11,
        +1.2177698136199594677580E10,
        -2.5301823984599019348858E10,
        +3.1984354235237738511048E8,
        -3.5377809694431133484800E8,
        -3.1398660864247265862050E5,
        -1.4299841572091610380064E6,
        -1.4287072500197005777376E4,
        -1.2831220659262000678155E3,
        -1.2963702602474830028590E1
    };
    static const double qj_1[N_1] = {
        -1.7934749837151009723371E11,
        +9.8900934262481749439886E10,
        -2.8986272696554495342658E10,
        +5.4229617984472955011862E9,
        -7.0108568774215954065376E8,
        +6.4698830956576428587653E7,
        -4.2648434812177161405483E6,
        +1.9418469440759880361415E5,
        -5.5648470543369082846819E3,
        +7.6886718750000000000000E1
    };
    static const double x0 = 0.372507410781366634461991866580;
    int j;
    double result, numerator, denominator, sum;
    result = log(x/x0);
    numerator = 0.0;
    denominator = 0.0;
    sum = 0.0;
    for(j=0; j<N_1; j++){
        numerator += pj_1[j] * shifted_chebyshev(j, x/6.0);
        denominator += qj_1[j] * shifted_chebyshev(j, x/6.0);
    }
    sum = numerator / denominator;
    sum *= (x-x0);
    result += sum;
    return result;
}

double ei(double x){
    if( x < 0.0 ){
        printf("Argument of ei(x) must be positive\n");
        exit(1);
    }
    if( x <= 6.0 ){
        return interval1(x);
    }
    printf("Argument range not yet implemented for ei(x).\n");
    exit(1);
}

int main(void){
    double x = 5.0;
    printf("ei(%g)=%g\n", x, ei(x));
    return 0;
}
But for the interval in question the results are completely off. For example, I want to evaluate $Ei(5)\approx{}40.2$, but I am getting $\approx{}19.0654$. The paper claims a maximal relative error of $8\cdot10^{-19}$, so I guess I must be doing something wrong. I double-checked the coefficients and my implementation of the equation, but I don't see any mistake.
When I plot the relative error of the C calculated value compared to the real/expected value (as returned by the ExpIntegralEi function of Mathematica) I can see that only for the first interval ($0<x\le6$) the error is unreasonably high:
Plotted is $y(x)=\left|\frac{C\left(x\right)-M\left(x\right)}{M\left(x\right)}\right|$, where $C(x)$ is the value as returned by my C program and M(x) the same value as returned by Mathematica.
Any help would be much appreciated.
References Cody, W. J., and Henry C. Thacher. "Chebyshev approximations for the exponential integral 𝐸𝑖 (𝑥)." Mathematics of Computation 23.106 (1969): 289-303. |
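Independently of the rational Chebyshev fit, a reference value for testing can be computed from the standard power series $\mathrm{Ei}(x) = \gamma + \ln x + \sum_{k\geq1} \frac{x^k}{k\,k!}$, which converges quickly for $0 < x \leq 6$ (a verification sketch, not the paper's method):

```python
import math

def ei_series(x, tol=1e-15):
    """Ei(x) = gamma + ln(x) + sum_{k>=1} x^k / (k * k!), for x > 0."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    total = gamma + math.log(x)
    term = 1.0
    for k in range(1, 200):
        term *= x / k            # term = x^k / k!
        total += term / k
        if term / k < tol:
            break
    return total

print(ei_series(5.0))  # ~40.1853
```

Comparing such a reference against the C program at several points in $(0, 6]$ quickly narrows the bug down to either the coefficient tables or the Chebyshev evaluation.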
I want to prove the following:
Let $(M,d)$ be a metric space. Let $A\subseteq V\subseteq M$.
1) $A$ is open in $V \Leftrightarrow A = C\cap V$ (for a certain open $C$ in $M$)
2) $A$ is closed in $V \Leftrightarrow A = C\cap V$ (for a certain closed $C$ in $M$)
Questions:

1. Could someone check the proof?
2. Regarding 'for a certain open $C$ in $\color{Blue}{M}$': would this proof also work for a more specific choice of $C$, like a certain open $C$ in $\color{blue}{V}$? I don't really see the added value of choosing $M$ over $V$.
3. Could someone give me some pointers on how to prove $2, \Rightarrow$?

Proof 1)
$\Leftarrow$: Choose $a\in A$.
$$\begin{array}{rl} & a \in A = C\cap V\\ \Rightarrow & a \in C\\ \Rightarrow & (\exists r > 0)(B_M(a,r)\subseteq C)\\ \Rightarrow & (\exists r > 0)(B_M(a,r)\cap V \subseteq C\cap V)\\ \Rightarrow & (\exists r> 0) (B_V(a,r)\subseteq A) \end{array}$$
$\Rightarrow$: Choose $a\in A$.
$$\begin{array}{rl} \Rightarrow & (\exists r_a >0)(B_V(a,r_a) \subseteq A) \end{array}$$
Consider all $a\in A$ then:
$$\begin{array}{rl} & A = \bigcup_{a\in A} B_V(a,r_a)\\ \Rightarrow & A = \bigcup_{a\in A} \left[ V\cap B_M(a,r_a)\right]\\ \Rightarrow & A = V\cap\left[ \bigcup_{a\in A} B_M(a,r_a)\right] \end{array}$$
Let $$\left[ \bigcup_{a\in A} B_M(a,r_a)\right] = C$$ which is open as a union of open sets.
Proof 2)
$\Leftarrow$:
$$\begin{array}{rrl} & V\setminus A &= V\setminus(C\cap V)\\ \Rightarrow & & = (V\setminus C)\cup (V\setminus V)\\ \Rightarrow && = V\setminus C \end{array}$$
Since $C$ is closed then $V\setminus C$ is open and so is $V\setminus A$. Then $A$ is closed in $V$.
$\Rightarrow$: How? |
$\textbf{First question:}$ Let $L=\{R\}$ be a language consisting of one unary relation symbol. Show that there are exactly $\aleph_0$-many countable $L$-structures up to isomorphism.
My (attempted) solution is as follows: Let $\mathcal{M}=(M,R^{\mathcal{M}})$ be an $L$-structure. Since $R$ is a unary relation symbol, the basic relation $R^{\mathcal{M}}$ is also unary. But a unary relation is just a subset of $M$. Any countable $L$-structure is isomorphic to $\omega$ or some $n \in \omega$. So the set of all countable $L$-structures up to isomorphism is $$\{(n,E):n \in \omega,E \subseteq n\} \cup \{(\omega,F):F \subseteq \omega\}$$
We first look at the set on the left-hand side. It is of cardinality $\aleph_0$. If I can show the set on the right-hand side is also of cardinality $\aleph_0$, then we are done. But how?
$|\mathcal{P}(\omega)|=\aleph_1$ hence the cardinality of the set on the right is $\aleph_1$. $\textbf{What did I do wrong?}$
$\textbf{Second question:}$ Let $L=\{R\}$ be a language consisting of one unary relation symbol. How many $L$-structure of size $\aleph_1$ are there?
I was trying to use the similar argument as in my first question, but in the first question, we consider countable structures and we know every countable set is isomorphic to $\omega$ or some $n \in \omega$. But if we consider structures of cardinality $\aleph_1$, I don't know the particular set they are isomorphic to, so I can't use the argument as before. What should I do? |
Assume the equations are discretized on the $\tau$ grid, and introduce several column vectors, using the notation where the superscript stands for the grid point index $i \in \{1,\dots,n\}$:
$\vec{\tau}=\left[ \tau^1,\tau^2,...,\tau^{n} \right] $
$\vec{\phi_1}=\left[ \phi_1^1,\phi_1^2,...,\phi_1^{n} \right] $
$\vec{\phi_2}=\left[ \phi_2^1,\phi_2^2,...,\phi_2^{n} \right] $
$\vec{f_{11}}=\left[ f_{11}^1,f_{11}^2,...,f_{11}^{n} \right] $
$\vec{f_{12}}=\left[ f_{12}^1,f_{12}^2,...,f_{12}^{n} \right] $
$\vec{f_{21}}=\left[ f_{21}^1,f_{21}^2,...,f_{21}^{n} \right] $
Next, the second derivative operator on the grid is a matrix $\hat{\Delta}$ of size $n \times n$.
Also, use $\alpha(k) = c_2(k) / c_1(k)$, and denote $c_1(k) = \lambda$ and $c_2(k) = \alpha \lambda$.
Now, let's write both equations as a single linear system of size $2n \times 2n$ for the compound state column-vector $\vec{\phi}$ which is
\begin{equation}\vec{\phi} = [\phi_1^1,\phi_1^2,...,\phi_1^{n},\phi_2^1,\phi_2^2,...,\phi_2^{n}]\end{equation}
We'll use here several matrices of size $2n \times 2n$:
The left-hand-side matrix $$L = \begin{bmatrix}\hat{I} & \hat{0}\\ \hat{0} & \alpha \hat{I}\end{bmatrix},$$
the differential-operator matrix $$D = \begin{bmatrix}\hat{\Delta} & \hat{0} \\ \hat{0} & \hat{\Delta}\end{bmatrix},$$
the cross-coupling matrix $$C = \begin{bmatrix}\hat{0} & \operatorname{diag}(\vec{f_{12}}) \\ \operatorname{diag}(\vec{f_{21}}) & \hat{0}\end{bmatrix},$$
and the forcing-term matrix $$F = \begin{bmatrix}\operatorname{diag}(\vec{f_{11}}) & \hat{0} \\ \hat{0} & \hat{0}\end{bmatrix},$$ where $\operatorname{diag}(\vec{v})$ denotes the diagonal matrix with the entries of $\vec{v}$ on its diagonal.
Here $\hat{I}$ is the unit matrix of size $n \times n$,$\hat{0}$ is the null matrix of size $n \times n$.
Now our problem can be written as
\begin{equation}\lambda L \vec{\phi} =\left(D + C + F\right) \vec{\phi}\end{equation}
or
\begin{equation}\lambda \vec{\phi} =L^{-1} \left(D + C + F\right) \vec{\phi}\tag{*}\label{*}\end{equation}
This is a linear eigenvalue problem, where the right-hand-side matrix contains the parameter $\alpha$, so the eigenvalues are functions of $\alpha$, which in turn is a function of $k$.
For a given value of $k$ there is a spectrum of eigenvalues, $\lambda_0, \lambda_1$, etc. Let's say we are interested in the smallest eigenvalue $\lambda_0$, which we'll just call $\lambda$. Solving the eigenvalue problem $\eqref{*}$ by standard methods of linear algebra defines a function $\lambda(k)$; on the other hand, by construction $\lambda$ must be equal to $c_1(k)$. The root of the equation $\lambda(k) = c_1(k)$ (if it exists) defines the solution of the problem. Since $\lambda(k)$ and $c_1(k)$ are two nonlinear functions, the root of the equation $\lambda(k) = c_1(k)$ has to be sought by some iterative numerical technique. |
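The assembly and eigenvalue solve described above can be sketched in Python with NumPy. This is an illustrative skeleton, not the original author's code: it reads $\hat I \vec{f}$ as the diagonal matrix built from $\vec f$, uses a standard three-point second-derivative stencil with Dirichlet ends, and fills the coupling vectors with placeholder values.

```python
import numpy as np

def smallest_eigenvalue(alpha, n=50, h=1.0 / 51):
    """Assemble lambda * phi = L^{-1} (D + C + F) phi and return the
    eigenvalue with the smallest real part.

    The grid spacing h and the vectors f11, f12, f21 are placeholders;
    in the real problem they come from the tau grid and the equations.
    """
    f11 = np.zeros(n)   # placeholder forcing term
    f12 = np.ones(n)    # placeholder cross-coupling terms
    f21 = np.ones(n)
    I, Z = np.eye(n), np.zeros((n, n))
    # Three-point second-derivative operator (Dirichlet boundary values).
    Delta = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / h**2
    L = np.block([[I, Z], [Z, alpha * I]])
    D = np.block([[Delta, Z], [Z, Delta]])
    C = np.block([[Z, np.diag(f12)], [np.diag(f21), Z]])
    F = np.block([[np.diag(f11), Z], [Z, Z]])
    eigs = np.linalg.eigvals(np.linalg.solve(L, D + C + F))
    return eigs[np.argmin(eigs.real)].real
```

The root of $\lambda(k) - c_1(k) = 0$ can then be sought with any scalar root-finder (bisection, `scipy.optimize.brentq`, ...) applied to $k \mapsto$ `smallest_eigenvalue(alpha(k))` $- \; c_1(k)$.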
I want to specify what it means to give an algebra as input to an algorithm, and didn't find much literature about it. So first I want to ask if you can recommend a book or paper that deals with the complexity analysis of algebras over fields and clearly defines the decision problem.
After some digging I found
something and want to share it here and furthermore ask if the definitions make sense and are in compliance with literature (if there is any):
Definition: Let $\mathbb F$ be a field and $A$ be a finitely generated commutative $\mathbb F$-algebra with additive basis $b_1,\ldots, b_n\in A$. We now want to capture the multiplicative structure of the algebra and therefore write every product of basis elements as a linear combination of all basis elements: $$ \forall 1\leq i, j, k\leq n: \exists a_{ijk}: b_ib_j=\sum_{k=1}^n a_{ijk}b_k. $$ The $a_{ijk}$ are called structure coefficients. We directly have that: $$ A \cong \left.\mathbb{F}[b_1, \ldots, b_n] \middle/ \left<b_i b_j-\sum_{k=1}^n a_{ijk}b_k\right>_{1\leq i,j\leq n}\right..$$ Now one can define the following decision problem: $$ \{(A,B)\mid A, B \text{ commutative $\mathbb F$-algebras with basis $b_1, \ldots b_n$ and } A\cong B\}. $$ To specify an isomorphism $\phi:A\rightarrow B$ it is sufficient to write every $\phi(b_i)$ as a linear combination of the elements of a basis of $B$.
Does anything in this definition seem strange to you or do you think that one can work with it?
Motivation: My motivation behind this is to give a very clear definition of the decision problem first, in order to connect it to other problems, e.g. the problem of deciding polynomial equivalence: Given two polynomials $f,g\in\mathbb F[x_1, \ldots, x_n]$, we say that $f$ is equivalent to $g$ if there exists an invertible linear transformation $\tau$ on the variables such that $f(\tau(x_1), \ldots, \tau(x_n))=g(x_1, \ldots, x_n)$. In other words, two polynomials are equivalent if you can replace every variable by a linear combination of all variables to obtain the other polynomial.
I'm not sure if this helps as motivation, but the connection between these problems is established by constructing finitely generated commutative $\mathbb F$-algebras from the two polynomials that are isomorphic if and only if the polynomials are equivalent. For this I wanted to make sure that the decision problem is defined very clearly. |
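As a concrete sanity check on this encoding, an algebra can be represented by its tensor of structure coefficients and a candidate map by the matrix of the images $\phi(b_i)$; checking that the map respects multiplication is then a finite computation. The sketch below (function names, tolerance, and the toy example $\mathbb F[x]/(x^2)$ are my own, working over the reals for illustration) verifies $\phi(b_ib_j)=\phi(b_i)\phi(b_j)$ for all basis pairs:

```python
import numpy as np

def is_homomorphism(sc_A, sc_B, phi, tol=1e-9):
    """Check phi(b_i b_j) = phi(b_i) phi(b_j) for all basis pairs.

    sc_A[i, j, k]: structure coefficients of A, i.e. b_i b_j = sum_k sc_A[i,j,k] b_k;
    sc_B: those of B; column i of phi expresses phi(b_i) in the basis of B.
    """
    # phi(b_i b_j) = sum_k sc_A[i,j,k] * phi(b_k)
    left = np.einsum('ijk,lk->ijl', sc_A, phi)
    # phi(b_i) * phi(b_j), multiplied out with the structure coefficients of B
    right = np.einsum('li,mj,lmk->ijk', phi, phi, sc_B)
    return np.allclose(left, right, atol=tol)

# Toy example: A = B = F[x]/(x^2) with basis (1, x).
sc = np.zeros((2, 2, 2))
sc[0, 0, 0] = 1.0                  # 1 * 1 = 1
sc[0, 1, 1] = sc[1, 0, 1] = 1.0    # 1 * x = x * 1 = x; x * x = 0 stays zero
print(is_homomorphism(sc, sc, np.eye(2)))                        # True
print(is_homomorphism(sc, sc, np.array([[0., 1.], [1., 0.]])))   # False: swapping 1 and x breaks x*x = 0
```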
Invariant theory is central to the mathematical account of the fundamental notion of symmetry. Its various appearances in apparently distant mathematical fields emphasize its importance. In its most general formulation, one considers a family \(\mathcal F\) of objects with an equivalence relation on it. An invariant is a function \(f\) that associates to each object of \(\mathcal{F}\) an element of another set such that the function \(f\) is constant in each equivalence class of \({\mathcal F}\). Thus, by its very definition, if two objects \(A, B\in {\mathcal F}\) satisfy that \(f(A)\neq f(B)\), then \(A\) is not equivalent to \(B\). On the other hand, it may happen that \(f(A)=f(B)\) but \(A\) may not be equivalent to \(B\), that is, the systems of invariants \(f\) may not be complete.
A general goal of the theory is to construct a
complete system of invariants for the given classification problem. It barely needs noting that such a general goal might not be attainable, and one must settle for less complete systems of invariants or more concrete settings to formulate the classification problems at hand.
Perhaps the oldest invariant problem in algebra considers the action of a given group \(G\) on an algebra of polynomials \(k[x_1,\dots, x_n]\), over a field \(k\). More concretely, one considers actions of linear algebraic groups, that is, subgroups of the general linear group of invertible matrices \(\mathrm{GL}_n\) over the field \(k\); to avoid arithmetic complications one considers first the case when the field is algebraically closed. Thus, the actions that are of interest are actions of a linear group \(G\) on rings of coordinates \(k[X]\) of affine algebraic varieties \(X\) over the given field that come from regular actions of \(G\) over the variety \(X\). The main invariant is the subring of functions that are fixed by all elements of the group: \[ k[X]^G=\{f\in k[X]: \sigma\cdot f=f\;\text{for all \(\sigma\in G\)}\}.\]
There are several problems associated to finding the invariant algebra \(k[X]^G\), and the most important one is to find generators for this ring as a \(k\)-algebra. Of course, one must answer first the basic question of when this \(k\)-algebra is finitely generated.
From its very beginning, invariant theory had to deal with two fundamental aspects of mathematics: abstract theoretical advances and algorithmic practical developments. The first practitioners of invariant theory in the 19th century, Cayley, Sylvester, Gordan, Clebsch, Aronhold, and Cremona, were masters of the constructive approach to invariant theory, looking for algorithms to compute generators for the ring of invariants \(k[X]^G\), when \(G\) is a classical group.
This classic period of invariant theory came to an early demise when Hilbert gave a non-constructive proof of the finite generation of rings of invariants of classical groups. Gordan's well-known criticism of Hilbert's methods was answered by a constructive proof, also by Hilbert, for finding all the invariants of the special and general linear groups.
After these developments, invariant theory went to a period of hibernation, but the algebraic methods that were developed around these classical problems would find fertile ground for the algebraization of geometry in the 20th century.
Invariant theory came back, now with a strong algebraic-geometry formulation, in the hands of David Mumford and the Grothendieck school: Geometric invariant theory looks for algebraic varieties (schemes, algebraic spaces or stacks in more general settings) \(Y\) that realize the ring of invariants \(k[X]^G\) as the coordinate ring \(k[Y]\) of \(Y\).
Moreover, during the second half of the 20th century, powerful computational tools were developed for finding generators and relations between them, many of them based on Gröbner bases and the Buchberger algorithm.
Invariant theory is alive and well now. It flourishes, in fact, either in the abstract theoretical setting or in the constructive version, with a healthy interaction between both approaches.
The book under review is devoted to the constructive, algorithmic, approach to invariant theory. Although there are many monographs devoted to invariant theory, most of them take the non-constructive approach, for example Popov and Vinberg's Invariant Theory (Springer, 1994), Dolgachev's Lectures on Invariant Theory (Cambridge, 2003), the algebro-geometric approach of Mumford (and Fogarty and Kirwan) in Geometric Invariant Theory (Springer, first edition 1965, second edition 1982, third edition 1994), or Ferrer and Rittatore's Actions and Invariants of Algebraic Groups (Chapman and Hall, 2005). Perhaps the only other book focused on the algorithmic approach is Sturmfels' Algorithms in Invariant Theory (Springer, 1993).
The contents of the book under review can be divided into three parts. The first part, chapters one and two, collects some introductory material. Chapter one treats the necessary background on Gröbner bases and algorithms to build them. The second chapter summarizes the basic facts of invariant theory over an algebraically closed field, from invariant rings and reductive groups to categorical quotients and Hilbert series of invariant rings.
The second part of the book, chapters three and four, almost 200 pages, constitute the core of the monograph. Chapter three treats the invariant theory of finite groups. Again, the focus is on the computation of finite sets of generators for the invariant ring. It is important to remark that both the modular and non-modular cases are treated. Since the approach is mainly constructive, for the implementations of the invariant theory algorithms discussed in this chapter, the computer algebra systems SINGULAR and MAGMA are highly recommended, since both come with several packages and libraries devoted to invariant theory.
Most of the algorithms discussed in this chapter are due to the authors, especially for the modular case. The fourth chapter treats the case of invariants of linearly reductive groups. Again, some of the given algorithms to compute sets of generators are due to the authors. Details and background material needed to discuss these algorithms are presented in this chapter. Here we also find methods to compute the Hilbert series of the invariant ring by a variation of Molien's formula. Most of the results discussed in this chapter are for reductive groups, since, by a theorem of Nagata, this is the case when the ring of invariants is guaranteed to be finitely generated. The final sections of the chapter consider the non-reductive case and some methods that could be useful to compute generators of the invariant ring, provided that this ring is finitely generated for the given non-reductive group.
The third part of the book, chapter five, is a hodgepodge of short vistas of applications of invariant theory, from the computation of cohomology rings of finite groups and calculation of Galois groups to combinatorics and computer vision.
The book has three appendices; the first one gives a quick survey of results on linear algebraic groups that are used throughout the text. For this second edition, there are now a second and third appendices, by V. Popov. The second appendix studies algorithms to decide when an orbit is contained in the closure of another one, and the third appendix is devoted to the stratification of the nullcone, with an addendum that includes the source code for a program to compute this stratification.
Although the book under review is a monograph, now in its second edition, it is so well structured that it can be read by anyone with a basic background on algebraic groups. The subject, as was sketchily summarized in the introduction to this review, is a classic one with an almost romantic history of being reborn after periods of stagnation, as Kung and Rota so eloquently remind us in their survey article "The Invariant Theory of Binary Forms", Bull. Amer. Math. Soc. 10 (1984), 27–85.
Felipe Zaldivar is Professor of Mathematics at the Universidad Autonoma Metropolitana-I, in Mexico City. His e-mail address is fz@xanum.uam.mx. |
Your working is correct, but your method has the disadvantage that you have set the prior probabilities of your hypotheses to values that are fixed by your prior distribution for the parameter. In Bayesian hypothesis testing we usually want the freedom to vary the prior probabilities of the overall hypotheses (i.e., classes of parameter values), while keeping the form of the prior distribution when conditioning on particular hypotheses. We can do this in the present case by formulating the more general model:
$$\begin{equation} \begin{aligned}X_1,...,X_n | \lambda &\sim \text{IID Pois}(\lambda), \\[10pt]\pi(\lambda | H_0) &\propto \text{Ga}(\lambda|\alpha, \beta) \cdot \mathbb{I}(\lambda \leqslant \lambda_0), \\[10pt]\pi(\lambda | H_1) &\propto \text{Ga}(\lambda|\alpha, \beta) \cdot \mathbb{I}(\lambda > \lambda_0). \\[10pt]\end{aligned} \end{equation}$$
In this generalised model we can vary the prior probability $\phi = \pi(H_0)$ while maintaining the gamma form for the parameter value (truncated under each hypothesis so that its support is only over the class of parameters in that hypothesis). This allows us to conduct the hypothesis test for any chosen prior probability of the hypotheses.
Implementing the generalised hypothesis test: Bayesian hypothesis testing is usually done by calculating Bayes' factor. This allows you to find the posterior probability of the hypotheses under any specified prior probabilities for the two hypotheses in your test. First we will confirm your derivation of the likelihood and posterior. In this case, you have likelihood function:
$$L_{\mathbf{x}}(\lambda) \propto \prod_{i=1}^n \text{Pois}(x_i|\lambda) = \prod_{i=1}^n \frac{\lambda^{x_i}}{x_i!} \exp(-\lambda) \propto \lambda^{\sum x_i} \exp(-n\lambda).$$
Combining this with the prior gives you the (hypothesis-conditional) posterior distributions:
$$\pi_n(\lambda|H_0) \propto \text{Ga}\Big( \lambda \Big| \alpha + \sum_{i=1}^n x_i, \frac{\beta}{1 + n \beta} \Big) \cdot \mathbb{I}(\lambda \leqslant \lambda_0),$$
$$\pi_n(\lambda|H_1) \propto \text{Ga}\Big( \lambda \Big| \alpha + \sum_{i=1}^n x_i, \frac{\beta}{1 + n \beta} \Big) \cdot \mathbb{I}(\lambda > \lambda_0).$$
This confirms that your derivation of the posterior is correct, though we have generalised your model to allow variation of $\phi$ separately from $\alpha$ and $\beta$. From here we obtain the Bayes factor:
$$\begin{equation} \begin{aligned}BF(\mathbf{x}) &\equiv \frac{p(\mathbf{x}|H_1)}{p(\mathbf{x}|H_0)} \\[6pt]&= \frac{\int \pi_n(\lambda|H_1) \ d\lambda}{\int \pi_n(\lambda|H_0) \ d\lambda} \cdot \frac{\int \pi(\lambda|H_0) \ d\lambda}{\int \pi(\lambda|H_1) \ d\lambda} \\[6pt]&= \frac{\int_{\lambda_0}^\infty \pi_n(\lambda|H_1) \ d\lambda}{\int_0^{\lambda_0} \pi_n(\lambda|H_0) \ d\lambda} \cdot \frac{\int_0^{\lambda_0} \pi(\lambda|H_0) \ d\lambda}{\int_{\lambda_0}^\infty \pi(\lambda|H_1) \ d\lambda} \\[6pt]&= \frac{\Gamma \big(\alpha + \sum x_i \big)-\gamma \big(\alpha + \sum x_i, \lambda_0 (1+n\beta)/\beta \big)}{\gamma \big(\alpha + \sum x_i, \lambda_0 (1+n\beta)/\beta \big)} \cdot \frac{\gamma(\alpha, \lambda_0/\beta)}{\Gamma(\alpha)-\gamma(\alpha, \lambda_0/\beta)}, \\[6pt]\end{aligned} \end{equation}$$ where the second argument of the lower incomplete gamma function $\gamma$ is the quantile divided by the relevant scale parameter.
Using the Bayes factor you have:
$$\frac{\mathbb{P}(H_1|\mathbf{x})}{\mathbb{P}(H_0|\mathbf{x})} = \frac{\mathbb{P}(H_1)}{\mathbb{P}(H_0)} \cdot BF(\mathbf{x}) = \frac{1-\phi}{\phi} \cdot BF(\mathbf{x}).$$
In your case you have decided to reject $H_0$ if this posterior ratio exceeds one (i.e., if the alternative has higher posterior probability than the null), and you have implicitly constrained the prior probability of the null hypothesis to $\phi = \gamma(\alpha,\lambda_0/\beta)/\Gamma(\alpha)$. The advantage of expressing things in our more generalised form is that we can choose any prior probability $\phi$, irrespective of the other model parameters. |
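For concreteness, the Bayes factor can be evaluated with the regularized incomplete gamma function. The sketch below (function name and example counts are illustrative) assumes the shape/scale parameterization used above, so the prior is $\text{Ga}(\alpha, \text{scale}=\beta)$ and the posterior is $\text{Ga}(\alpha+\sum x_i, \text{scale}=\beta/(1+n\beta))$:

```python
from scipy.special import gammainc  # regularized lower incomplete gamma P(a, x)

def bayes_factor(x, alpha, beta, lam0):
    """BF(x) = p(x | H1) / p(x | H0) for the truncated-gamma prior model.

    Assumes shape/scale parameterization: prior Ga(alpha, scale=beta),
    posterior Ga(alpha + sum(x), scale=beta / (1 + n*beta)).
    """
    n, s = len(x), sum(x)
    post_shape = alpha + s
    post_scale = beta / (1 + n * beta)
    p0_post = gammainc(post_shape, lam0 / post_scale)  # P(lambda <= lam0 | data)
    p0_prior = gammainc(alpha, lam0 / beta)            # P(lambda <= lam0) a priori
    return ((1 - p0_post) / p0_post) * (p0_prior / (1 - p0_prior))

# Counts well above lam0 should favour H1 (BF > 1); counts below it favour H0.
print(bayes_factor([5, 6, 7], alpha=2, beta=1.0, lam0=3.0))
print(bayes_factor([0, 1, 0], alpha=2, beta=1.0, lam0=3.0))
```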
Why is it that the characteristic polynomial for a matrix $A$
$$\phi(\lambda) = \det(\lambda I - A)$$
when finding the roots gives the eigenvalues of $A$?
|
By definition, an eigenvalue of a matrix $A$ is a number $\lambda$ such that $Ax=\lambda x$, where $x$ is a nonzero vector, called an eigenvector corresponding to $\lambda$. This means $(A-\lambda I)x=0$ for this $x$. That is, $\lambda$ is an eigenvalue of $A$ iff $A-\lambda I$ is not invertible. That is, $\lambda$ is an eigenvalue of $A$ iff $\det (A-\lambda I)=0$. Here $\det (A-\lambda I)$ is a polynomial in $\lambda$, and its roots are exactly the eigenvalues!
$\lambda$ is an eigenvalue of $A$ iff $A-\lambda I$ has a non-trivial null space, which is true iff $\det(A-\lambda I)=0$, which is equivalent to $p(\lambda)=0$, where $p$ is the characteristic polynomial of $A$.
We need a non zero vector $V$ to satisfy $$AV=\lambda V$$
Thus the homogeneous system $$(A-\lambda I)V=0$$ must have a nontrivial solution.
Thus we must have $$\det(A-\lambda I)=0$$
We want to find all $\lambda$ satisfying $Ax= \lambda x \Rightarrow (\lambda x -Ax)=0 \Rightarrow (\lambda I -A)x=0,\; x \neq 0$. Therefore $\lambda I -A$ is singular, meaning it has determinant $0$. Expanding the determinant, we find all $\lambda$ that satisfy this condition. Was this clear/helpful? |
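The equivalence is easy to check numerically: the roots of $\det(\lambda I - A)$ agree with the eigenvalues returned by a standard solver. A small NumPy illustration (the matrix is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly(A) returns the coefficients of det(lambda*I - A),
# here lambda^2 - 4*lambda + 3.
char_poly = np.poly(A)
roots = np.sort(np.roots(char_poly))

# The roots of the characteristic polynomial are exactly the eigenvalues.
eigs = np.sort(np.linalg.eigvals(A))
print(roots)  # -> [1. 3.]
assert np.allclose(roots, eigs)
```

Note that for matrices of any real size, computing eigenvalues through polynomial roots is numerically ill-conditioned; solvers like `np.linalg.eigvals` work directly on the matrix instead.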
You get $2^\kappa$ many models for any uncountable $\kappa$. This follows immediately from the fact that $\mathsf{ZFC}$ (trivially) defines an infinite linear order, and this is all that's needed to ensure instability.
(The same argument shows that already much weaker theories, such as $\mathsf{PA}$, are unstable.)
This gives the result for $\kappa$ uncountable. For $\kappa$ countable you also have the maximum number of nonisomorphic models of $\mathsf{ZFC}$. To see this, use the incompleteness theorem to show that you can recursively label the nodes of the complete binary tree as $T_s$, $s\in 2^{<\omega}$, so that $T_\emptyset=\mathsf{ZFC}$ (or $\mathsf{PA}$, if you prefer), $T_s$ is consistent for each $s$, $T_s\subsetneq T_t$ for $s\subsetneq t$, and each $T_{s{}^\frown\langle i\rangle}$, for $i=0,1$, is obtained from $T_s$ by adding a single new axiom (that depends on $s$, of course). Now, for each $x\in 2^\omega$, let $T_x$ be any consistent complete theory extending $\bigcup_n T_{x\upharpoonright n}$. These are $2^{\aleph_0}$ pairwise incompatible theories, all extending $\mathsf{ZFC}$ (or $\mathsf{PA}$), and all having countable models. (Examples such as the theory of $(\mathbb Q,<)$ show that the argument must be different for countable models.)
The paragraph above shows that there are $2^{\aleph_0}$ incompatible extensions of $\mathsf{ZFC}$, and therefore $2^{\aleph_0}$ non-isomorphic countable models of $\mathsf{ZFC}$. The result is also true for a fixed complete extension $T$, but the argument seems harder. A proof follows from the following, but most likely there are easier approaches:
Consider first the paper
Ali Enayat.
Leibnizian models of set theory, J. Symbolic Logic, 69 (3), (2004), 775–789. MR2078921 (2005e:03076).
In it, Enayat defines a model to be
Leibnizian iff it has no pair of indiscernibles. He shows that there is a first order statement $\mathsf{LM}$ such that any (consistent) complete extension $T$ of $\mathsf{ZF}$ admits a Leibnizian model iff $T\vdash\mathsf{LM}$. He also shows that $\mathsf{LM}$ follows from $\mathrm{V}=\mathsf{OD}$, and that any (consistent) complete extension of $\mathsf{ZF}+\mathsf{LM}$ admits continuum many countable nonisomorphic Leibnizian models.
Now consider the paper
Ali Enayat.
Models of set theory with definable ordinals, Arch. Math. Logic, 44 (3), (2005), 363–385. MR2140616 (2005m:03098).
In it, Enayat defines a model to be
Paris iff all its ordinals are first order definable within the model. He shows that any (consistent) complete extension of $\mathsf{ZF}+\mathrm{V}\ne\mathsf{OD}$ admits continuum many countable nonisomorphic Paris models.
These two facts together imply the result you are after (and more, of course). An earlier result of Keisler and Morley, that started the whole area of model theory of set theory, shows that any countable model of $\mathsf{ZFC}$ admits an elementary end-extension. (This fails for uncountable models.) It may well be that an easy extension of this is all that is needed to prove the existence of continuum many non-isomorphic countable models of any fixed (consistent) $T\supset\mathsf{ZFC}$, but I do not see right now how to get there. The Keisler-Morley theorem alone does not seem to suffice, in view of Joel Hamkins's beautiful answer to this question. |
9.2 Regression with ARIMA errors in R
The R function
Arima() will fit a regression model with ARIMA errors if the argument
xreg is used. The
order argument specifies the order of the ARIMA error model. If differencing is specified, then the differencing is applied to all variables in the regression model before the model is estimated. For example, the R command
fit <- Arima(y, xreg=x, order=c(1,1,0))
will fit the model \(y_t' = \beta_1 x'_t + \eta'_t\), where \(\eta'_t = \phi_1 \eta'_{t-1} + \varepsilon_t\) is an AR(1) error. This is equivalent to the model\[ y_t = \beta_0 + \beta_1 x_t + \eta_t,\]where \(\eta_t\) is an ARIMA(1,1,0) error. Notice that the constant term disappears due to the differencing. To include a constant in the differenced model, specify
include.drift=TRUE.
The
auto.arima() function will also handle regression terms via the
xreg argument. The user must specify the predictor variables to include, but
auto.arima() will select the best ARIMA model for the errors. If differencing is required, then all variables are differenced during the estimation process, although the final model will be expressed in terms of the original variables.
The AICc is calculated for the final model, and this value can be used to determine the best predictors. That is, the procedure should be repeated for all subsets of predictors to be considered, and the model with the lowest AICc value selected.
Example: US Personal Consumption and Income
Figure 9.1 shows the quarterly changes in personal consumption expenditure and personal disposable income from 1970 to 2016 Q3. We would like to forecast changes in expenditure based on changes in income. A change in income does not necessarily translate to an instant change in consumption (e.g., after the loss of a job, it may take a few months for expenses to be reduced to allow for the new circumstances). However, we will ignore this complexity in this example and try to measure the instantaneous effect of the average change of income on the average change of consumption expenditure.
(fit <- auto.arima(uschange[,"Consumption"], xreg=uschange[,"Income"]))
#> Series: uschange[, "Consumption"]
#> Regression with ARIMA(1,0,2) errors
#>
#> Coefficients:
#>        ar1     ma1    ma2  intercept   xreg
#>      0.692  -0.576  0.198      0.599  0.203
#> s.e. 0.116   0.130  0.076      0.088  0.046
#>
#> sigma^2 estimated as 0.322:  log likelihood=-156.9
#> AIC=325.9   AICc=326.4   BIC=345.3
The data are clearly already stationary (as we are considering percentage changes rather than raw expenditure and income), so there is no need for any differencing. The fitted model is \[\begin{align*} y_t &= 0.599 + 0.203 x_t + \eta_t, \\ \eta_t &= 0.692 \eta_{t-1} + \varepsilon_t -0.576 \varepsilon_{t-1} + 0.198 \varepsilon_{t-2},\\ \varepsilon_t &\sim \text{NID}(0,0.322). \end{align*}\]
We can recover estimates of both the \(\eta_t\) and \(\varepsilon_t\) series using the
residuals() function.
It is the ARIMA errors that should resemble a white noise series. |
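To see what "regression with ARMA errors" amounts to computationally, here is a self-contained Python sketch that simulates \(y_t = \beta_0 + \beta_1 x_t + \eta_t\) with an AR(1) error and recovers the parameters by Cochrane–Orcutt iteration. All series and coefficients are made up for the simulation, and this is not what `Arima()` does internally (it uses maximum likelihood); it only illustrates the model structure:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(0.7, 1.0, n)

# Simulate y_t = 0.6 + 0.2*x_t + eta_t with AR(1) errors
# eta_t = 0.7*eta_{t-1} + eps_t.
eta = np.zeros(n)
eps = rng.normal(0.0, 0.3, n)
for t in range(1, n):
    eta[t] = 0.7 * eta[t - 1] + eps[t]
y = 0.6 + 0.2 * x + eta

# Cochrane-Orcutt: estimate beta by OLS, estimate the AR(1) coefficient
# from the residuals, then re-fit on quasi-differenced data; iterate.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
for _ in range(10):
    r = y - X @ beta
    phi = (r[:-1] @ r[1:]) / (r[:-1] @ r[:-1])  # AR(1) coefficient estimate
    ys = y[1:] - phi * y[:-1]                   # quasi-differenced response
    Xs = X[1:] - phi * X[:-1]                   # quasi-differenced regressors
    beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
print(beta, phi)  # estimates near (0.6, 0.2) and 0.7
```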
A high order numerical method for solving nonlinear fractional differential equation with non-uniform meshes
Affiliation: University of Chester; Lvliang University
Publication Date: 2019-01-18
Abstract: We introduce a high-order numerical method for solving nonlinear fractional differential equations with non-uniform meshes. We first transform the fractional nonlinear differential equation into the equivalent Volterra integral equation. Then we approximate the integral by using quadratic interpolation polynomials. On the first subinterval $[t_{0}, t_{1}]$, we approximate the integral with the quadratic interpolation polynomials defined on the nodes $t_{0}, t_{1}, t_{2}$, and on the other subintervals $[t_{j}, t_{j+1}], j=1, 2, \dots, N-1$, we approximate the integral with the quadratic interpolation polynomials defined on the nodes $t_{j-1}, t_{j}, t_{j+1}$. A high-order numerical method is obtained. Then we apply this numerical method with the non-uniform meshes with the step size $\tau_{j}= t_{j+1}- t_{j}= (j+1) \mu$ where $\mu= \frac{2T}{N (N+1)}$. Numerical results show that this method with the non-uniform meshes has a higher convergence order than the standard numerical methods obtained by using the rectangle and the trapezoid rules with the same non-uniform meshes.
Citation: Fan L., Yan Y. (2019) A High Order Numerical Method for Solving Nonlinear Fractional Differential Equation with Non-uniform Meshes. In: Nikolov G., Kolkovska N., Georgiev K. (eds) Numerical Methods and Applications. NMA 2018. Lecture Notes in Computer Science, vol 11189. Springer, Cham.
Publisher: Springer Link. Type: Book chapter. Language: en
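The non-uniform mesh described in the abstract is easy to construct: the steps $\tau_j = (j+1)\mu$ with $\mu = 2T/(N(N+1))$ sum exactly to $T$, so the mesh clusters points near $t=0$, where solutions of fractional equations are typically less regular. A short Python sketch (the function name is mine):

```python
import numpy as np

def graded_mesh(T, N):
    # Step sizes tau_j = (j+1) * mu for j = 0, ..., N-1, with mu = 2T / (N(N+1)).
    mu = 2.0 * T / (N * (N + 1))
    steps = np.arange(1, N + 1) * mu
    # The steps sum to T, so the mesh runs from t_0 = 0 to t_N = T,
    # with small steps near the origin and larger ones towards T.
    return np.concatenate(([0.0], np.cumsum(steps)))

t = graded_mesh(1.0, 10)
print(t[0], t[-1], len(t))
```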
10.1007/978-3-030-10692-8_23
|
Learning Objectives
Complex signals can be built from elemental signals, including the complex exponential, unit step, pulse, etc. Learning about elemental signals.
Elemental signals are the building blocks with which we build complicated signals. By definition, elemental signals have a simple structure. Exactly what we mean by the "structure of a signal" will unfold in this section of the course. Signals are nothing more than functions defined with respect to some independent variable, which we take to be time for the most part. Very interesting signals are not functions solely of time; one great example is an image, for which the independent variables are x and y (two-dimensional space). Video signals are functions of three variables: two spatial dimensions and time. Fortunately, most of the ideas underlying modern signal theory can be exemplified with one-dimensional signals.
Sinusoids
Perhaps the most common real-valued signal is the sinusoid.
\[s(t) = A\cos (2\pi f_{0}t+\varphi )\]
For this signal, \(A\) is its amplitude, \(f_0\) its frequency, and \(\varphi\) its phase.
Complex Exponentials
The most important signal is complex-valued, the complex exponential.
\[s(t) = Ae^{i(2\pi f_{0}t+\varphi )}\\ s(t) = Ae^{i\varphi }e^{i2\pi f_{0}t}\]
Here, \[i = \sqrt{-1}\]
Here \(Ae^{i\varphi }\) is the signal's complex amplitude.
Considering the complex amplitude as a complex number in polar form, its magnitude is the amplitude
A and its angle the signal phase. The complex amplitude is also known as a phasor. The complex exponential cannot be further decomposed into more elemental signals, and is the most important signal in electrical engineering! Mathematical manipulations at first appear to be more difficult because complex-valued numbers are introduced. In fact, early in the twentieth century, mathematicians thought engineers would not be sufficiently sophisticated to handle complex exponentials even though they greatly simplified solving circuit problems. Steinmetz introduced complex exponentials to electrical engineering, and demonstrated that "mere" engineers could use them to good effect and even obtain right answers! See Complex Numbers for a review of complex numbers and complex arithmetic.
The complex exponential defines the notion of frequency: it is the
only signal that contains only one frequency component. The sinusoid consists of two frequency components: one at the frequency \(f_0\) and the other at \(-f_0\).
Euler Relation
This decomposition of the sinusoid can be traced to Euler's relation.
\[\cos (2\pi ft) = \frac{e^{i2\pi ft}+e^{-(i2\pi ft)}}{2}\\ \sin (2\pi ft) = \frac{e^{i2\pi ft}-e^{-(i2\pi ft)}}{2i}\\ e^{i2\pi ft} = cos (2\pi ft)+i\: sin (2\pi ft)\]
Decomposition
The complex exponential signal can thus be written in terms of its real and imaginary parts using Euler's relation. Thus, sinusoidal signals can be expressed as either the real or the imaginary part of a complex exponential signal, the choice depending on whether cosine or sine phase is needed, or as the sum of two complex exponentials. These two decompositions are mathematically equivalent to each other.
\[A\cos (2\pi ft+\varphi ) = \Re (Ae^{i\varphi }e^{i2\pi ft})\\ A\sin (2\pi ft+\varphi ) = \Im (Ae^{i\varphi }e^{i2\pi ft})\]
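These decompositions are easy to verify numerically; the following sketch (arbitrary amplitude, frequency, and phase) checks both identities on a sampled time grid:

```python
import numpy as np

A, f, phi = 2.0, 5.0, np.pi / 4   # arbitrary amplitude, frequency, phase
t = np.linspace(0.0, 1.0, 1000)

# Complex exponential with complex amplitude A * exp(i*phi)
z = A * np.exp(1j * phi) * np.exp(1j * 2 * np.pi * f * t)

# Euler's relation: its real and imaginary parts are the sinusoids.
assert np.allclose(z.real, A * np.cos(2 * np.pi * f * t + phi))
assert np.allclose(z.imag, A * np.sin(2 * np.pi * f * t + phi))
```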
Fig. 2.2.1 Graphically, the complex exponential scribes a circle in the complex plane as time evolves. Its real and imaginary parts are sinusoids. The rate at which the signal goes around the circle is the frequency \(f\), and the time taken to go around is the period \(T\). A fundamental relationship is \(T = 1/f\).
Using the complex plane, we can envision the complex exponential's temporal variations as seen in the above figure. The magnitude of the complex exponential is \(A\), and the initial value of the complex exponential at \(t = 0\) has an angle of \(\varphi\). As time increases, the locus of points traced by the complex exponential is a circle (it has constant magnitude \(A\)). The number of times per second we go around the circle equals the frequency \(f\). The time taken for the complex exponential to go around the circle once is known as its period \(T\), and equals \(1/f\). The projections onto the real and imaginary axes of the rotating vector representing the complex exponential signal are the cosine and sine signals of Euler's relation as stated above.
Real Exponentials
As opposed to complex exponentials which oscillate, real exponentials decay.
\[s(t) = e^{-\frac{t}{\tau }}\]
Fig. 2.2.2 The real exponential
The quantity
τ is known as the exponential's time constant, and corresponds to the time required for the exponential to decrease by a factor of 1/e, which approximately equals 0.368. A decaying complex exponential is the product of a real and a complex exponential.
\[s(t) = Ae^{i\varphi }e^{-\frac{t}{\tau }}e^{i2\pi ft}\\ s(t) = Ae^{i\varphi }e^{\left ( -\frac{1}{\tau } +i2\pi f \right )t}\]
In the complex plane, this signal corresponds to an exponential spiral. For such signals, we can define the complex frequency as the quantity multiplying \(t\), namely \(-\frac{1}{\tau }+i2\pi f\).
Unit Step
The
unit step function is denoted by u(t), and is defined to be
\[u(t) = 0\, \, if \,\, t< 0\\ u(t) = 1\, \: if \, \, t> 0\]
Origin Warning
This signal is discontinuous at the origin. Its value at the origin need not be defined, and doesn't matter in signal theory.
This kind of signal is used to describe signals that "turn on" suddenly. For example, to mathematically represent turning on an oscillator, we can write it as the product of a sinusoid and a step:
\[s(t) = A\sin (2\pi ft)u(t)\]
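A minimal sketch of this switched-on sinusoid (the amplitude and frequency values are arbitrary illustrative choices):

```python
import math

def u(t):
    """Unit step: 0 for t < 0, 1 for t > 0 (the value at 0 is immaterial)."""
    return 0.0 if t < 0 else 1.0

def s(t, A=1.0, f=10.0):
    """Sinusoid switched on at t = 0: A sin(2 pi f t) u(t)."""
    return A * math.sin(2 * math.pi * f * t) * u(t)

assert s(-0.3) == 0.0   # the oscillator is off before t = 0
assert abs(s(0.025) - 1.0) < 1e-9   # afterwards it is a plain sinusoid (sin(pi/2) = 1 here)
```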
Pulse
The
unit pulse describes turning a unit-amplitude signal on for a duration of Δ seconds, then turning it off.
\[p_\Delta (t) = 0 \;\text{ if }\; t< 0 \\ p_\Delta (t) = 1 \;\text{ if }\; 0< t< \Delta \\ p_\Delta (t) = 0 \;\text{ if }\; t> \Delta\]
Fig. 2.2.4 The Pulse
We will find that this is the second most important signal in communications.
Square Wave
The
square wave sq(t) is a periodic signal like the sinusoid. It too has an amplitude and a period, which must be specified to characterize the signal. We find subsequently that the sine wave is a simpler signal than the square wave. Fig. 2.2.5 The Square wave
Let $x_1 := \sqrt{2}$ and $x_{n+1} :=\sqrt{2x_n} $ for all $n \in \mathbb{N}$.
By using proof by induction:
(i) Prove that $\sqrt{2} ≤ x_n ≤ 2$ for all $n \in \mathbb{N}$.
(ii) Prove that $x_n ≤ x_{n+1}$ for all $n \in \mathbb{N}$.
For (i)
Let's prove the base case: for $n=1$ we have $\sqrt{2} \leq x_1\leq 2$, which is clearly true since $x_1=\sqrt{2}$.
Now we assume true for $n=k$ $\Longrightarrow$ $\sqrt{2}\leq x_k\leq2$ and we are required to prove $\sqrt{2}\leq x_{k+1} \leq 2$.
From our hypothesis : $\sqrt{2}\leq x_k\leq2$ however we know that $x_{k+1} =\sqrt{2x_k}\Longrightarrow x_k = \frac{(x_{k+1})^2}{2}$ so we have $\sqrt{2}\leq \frac{(x_{k+1})^2}{2}\leq2 \Leftrightarrow \sqrt{2\sqrt{2}}\leq x_{k+1} \leq 2$.
How do I proceed?
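Not a substitute for the induction proof, but both claims (and the limit $x_n \to 2$ that the monotone convergence theorem then yields) can be sanity-checked numerically:

```python
import math

# Check (i) sqrt(2) <= x_n <= 2 and (ii) x_n <= x_{n+1}
# for the recursion x_1 = sqrt(2), x_{n+1} = sqrt(2 x_n).
x = math.sqrt(2)
for _ in range(60):
    x_next = math.sqrt(2 * x)
    assert math.sqrt(2) <= x <= 2        # claim (i): bounded
    assert x <= x_next                   # claim (ii): monotone increasing
    x = x_next

# Bounded and monotone implies convergent; the positive fixed point
# of L = sqrt(2 L) is L = 2.
assert abs(x - 2) < 1e-9
```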
I should know how to do this problem, but I am having trouble with it.
Let $B$ be an invertible matrix and let $A$ be a matrix with $\operatorname{rk}(A) = 1$. Then $\exists \lambda$ such that $A^2 = \lambda A$ and the problem is for which values of $\lambda$ the matrix $B + A$ is invertible?
When $B = I$ then $B + A$ is invertible iff $\lambda \neq -1$ and in the general case I suppose is $\lambda \neq -\det(B)$.
I think it'll be better if I type my conclusion for $B = I$. If $I +A$ is invertible, let $C$ be such that $$(I + A)C = C(I + A) = I.$$ Then $AC = CA$ iff $C^{-1}AC = A$, but $C^{-1} = I + A$ and therefore we have $$A = (I + A)AC = (A +A^2)C=(A +\lambda A)C = (1 + \lambda)AC.$$ If $1 + \lambda = 0$ then $A = 0$, which does not have rank $1$; therefore $\lambda \neq -1$, and if $\lambda \neq -1$ the inverse of $I+A$ is $I -\frac{1}{1+\lambda} A$.
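For what it's worth, the $B=I$ conclusion checks out on a concrete rank-1 matrix (the particular $A = uv^T$ below is an arbitrary illustration; for a rank-1 matrix $\lambda = \operatorname{tr} A$); exact rational arithmetic avoids rounding noise:

```python
from fractions import Fraction as Fr

# Illustrative rank-1 example: A = u v^T with u = (1, 2), v = (3, 1),
# so lambda = tr(A) = v . u = 5.
u, v = [Fr(1), Fr(2)], [Fr(3), Fr(1)]
A = [[ui * vj for vj in v] for ui in u]
lam = A[0][0] + A[1][1]
I = [[Fr(1), Fr(0)], [Fr(0), Fr(1)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A^2 = lambda * A for a rank-1 matrix.
assert matmul(A, A) == [[lam * A[i][j] for j in range(2)] for i in range(2)]

# Claimed inverse of I + A when lambda != -1: I - A / (1 + lambda).
C = [[I[i][j] - A[i][j] / (1 + lam) for j in range(2)] for i in range(2)]
IplusA = [[I[i][j] + A[i][j] for j in range(2)] for i in range(2)]
assert matmul(IplusA, C) == I and matmul(C, IplusA) == I
```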
I am trying to find $$\int \frac {\sqrt {x^2 - 4}}{x} dx$$
I make $x = 2 \sec\theta$
$$\int \frac {\sqrt {4(\sec^2 \theta - 1)}}{x} dx$$
$$\int \frac {\sqrt {4\tan^2 \theta}}{x} dx$$
$$\int \frac {2\tan \theta}{x} dx$$
From here I am not too sure what to do but I know I shouldn't have x.
$$\int \frac {2\tan \theta}{2 \sec\theta} dx$$
I also know I shouldn't have dx anymore.
$$dx = 2\sec \theta \tan \theta \; \mathrm d\theta$$
$$\int \frac {2\tan \theta}{2 \sec\theta} 2\sec \theta \tan \theta \; \mathrm d\theta$$
$$\int {2\tan^2 \theta} \; \mathrm d\theta$$
$$2\int {\tan^2 \theta} \; \mathrm d\theta$$
I have no idea how to find the integral of $\tan^2 \theta$
So I use Wolfram Alpha:
$$\tan \theta - \theta + c$$
Now I need to replace theta with x.
$$x = 2 \sec\theta$$
With the same mathmagics I produce
$$ \frac {x}{2} = \sec \theta$$
$$ \theta = \operatorname {arcsec} \left(\frac{x}{2}\right)$$
$$\tan \left(\operatorname {arcsec} \left(\frac{x}{2}\right)\right) - \left(\operatorname {arcsec} \left(\frac{x}{2}\right)\right) + c$$
This is wrong but I am not sure why.
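The slip is in the back-substitution: $2\int \tan^2\theta \, d\theta = 2(\tan\theta - \theta) + C$, the factor $2$ was dropped, and $\tan\theta$ simplifies to $\frac{\sqrt{x^2-4}}{2}$, so the antiderivative should be $\sqrt{x^2-4} - 2\operatorname{arcsec}(x/2) + C$. A quick finite-difference check of that corrected antiderivative (a sketch; $\operatorname{arcsec}(x/2)$ is computed as $\arccos(2/x)$ for $x > 2$):

```python
import math

def F(x):
    # Corrected antiderivative: sqrt(x^2 - 4) - 2 arcsec(x/2),
    # with arcsec(x/2) = arccos(2/x) for x > 2.
    return math.sqrt(x * x - 4) - 2 * math.acos(2 / x)

def integrand(x):
    return math.sqrt(x * x - 4) / x

# F'(x) should reproduce the integrand (central-difference check).
for x in (2.5, 3.0, 5.0, 10.0):
    h = 1e-6
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-5
```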
Short Answer:
YES you can.
Long answer:
A) Limits of continuum mechanics:
The continuum model of fluid dynamics is valid only as long as the fluid behaves as a continuous medium. This is characterized by the Knudsen number, given by $Kn = \frac{\lambda}{l_s}$, where $\lambda$ is the mean free path and $l_s$ is the characteristic dimension of the channel (the diameter in the case of a circular pipe). Non-equilibrium effects start to appear if $Kn > 10^{-3}$. Modified slip boundary conditions can be used for $10^{-3} < Kn < 10^{-1}$, and the continuum model breaks down completely if $Kn > 1$. (
Fun fact: because the distance between two vehicles on a crowded road is much smaller than straight portion of the road itself (length scale in $1d$ flow), we can model the traffic flow with a PDE! However it will not work if there is only one car on a long stretch of road)
Coming back to water: as the water molecules are not freely moving and are loosely bound, we consider the lattice spacing $\delta$ for computing $Kn$. For water, $\delta$ is about $3\ nm$. So continuum theory will hold good for a tube of diameter $300\ nm$ or larger$^*$. Now this is good news!
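The regime thresholds quoted above are easy to wrap in a small helper; the mean free path for air used below is an assumed illustrative number (roughly 68 nm at ambient conditions), not a value from this answer:

```python
def knudsen_regime(mean_free_path, channel_size):
    """Classify the flow regime by Knudsen number, using the thresholds above."""
    kn = mean_free_path / channel_size
    if kn < 1e-3:
        return kn, "continuum (Navier-Stokes valid)"
    elif kn < 1e-1:
        return kn, "slip flow (modified boundary conditions)"
    elif kn < 1:
        return kn, "transition"
    else:
        return kn, "free molecular (continuum breaks down)"

kn, regime = knudsen_regime(68e-9, 1e-3)   # air in a 1 mm pipe
assert regime.startswith("continuum")
kn, regime = knudsen_regime(68e-9, 1e-6)   # air in a 1 micron channel
assert regime.startswith("slip")
```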
$^*$ Reference: Liquid flows in microchannels
B) Applicability of Hagen Poiseuille equation:
Since your tube is in the sub-millimeter range, it is much larger than the minimum diameter (sub-micrometer) required for the continuum assumption. However,
depending on the shape of the cross section of the tube, the results will differ (Link to ref.). Liquid flows are much simpler to analyse since they are characterized by much smaller Reynolds numbers and velocities. Also, the density essentially remains constant. So there should not be a problem in considering the theory to be valid. Now, since Hagen-Poiseuille flow is derived from the Navier-Stokes equations, it carries over the continuum assumption.
If your flow is through a porous medium, you might have to consider effects like the electrokinetic effect. There might be other complications in the straightforward application of H-P equations to microfluidic flows, but I am unable to comment since I do not know much about this field.
C) Some examples
In a report on "microfluidics networking", Biral has used the continuum theory for modeling and simulation (in OpenFOAM) of the microfluidic flows.
Fillips discusses more about the Knudsen number in his paper- Limits of continuum aerodynamics.
This report clearly mentions that HP equation is applicable even to microfluidic flows
This document on PDMS Viscometer gives derivation of HP equation for microfluidic flows.
Finally, here is a YouTube video discussing a matrix formalism for solving the Hagen-Poiseuille law in microfluidic hydraulic circuits.
Based on these references, it should be safe to assume that the H-P equation can be applied to microfluidic flows. However, experts are welcome to enlighten us in this regard.
Cheers!
I am stuck with a Linear Algebra problem. I have attempted a solution but I do not know whether it's correct or not. The problem is as follows:
Let $C^1(\mathbb{R})$ be the vector space of all differentiable real-valued functions on the real line $\mathbb{R}$. Let $S\subseteq C^1(\mathbb{R})$ consist entirely of polynomial functions, and prove that $f\not\in \langle S\rangle$, where $f(x)=\sin{x}$.
I have written that:
$$ p(x)\in\langle S\rangle \mid p(x)=\sum a_n x^n \text{ and }a_n\in\mathbb{R}$$ and $$ f(x)=\sum_0^{\infty}(-1)^n\frac{x^{2n+1}}{(2n+1)!}$$
The only thing that comes to mind is that $\langle S\rangle$ permits only finite linear combinations, which would disallow $f\in\langle S\rangle$. Is this assertion true? If not, if possible, please point me in the right direction.
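That is the right idea: $\langle S\rangle$ contains only finite linear combinations, hence only polynomials of some finite degree $d$, and every such function is annihilated by $d+1$ forward differences, while $\sin$ is not. A quick check of that distinguishing property (a sketch, not the proof itself):

```python
import math

def forward_diff(f, order, x, h=1.0):
    """order-th forward difference: sum_k C(order,k) (-1)^(order-k) f(x + k h)."""
    return sum((-1) ** (order - k) * math.comb(order, k) * f(x + k * h)
               for k in range(order + 1))

p = lambda x: 3 * x**3 - 2 * x + 7      # an arbitrary cubic, degree d = 3

# d+1 = 4 forward differences annihilate every degree-3 polynomial ...
for x0 in range(-5, 6):
    assert abs(forward_diff(p, 4, x0)) < 1e-9

# ... but not sin; so sin is no polynomial of degree <= 3, and the same
# argument works for every finite degree.
assert abs(forward_diff(math.sin, 4, 0.0)) > 0.1
```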
NOTE: This is not a homework problem.
Let's take a look at a minimal example:
$$P = \frac 1 2\left[\begin{array}{ccc|ccc}2&1&0&0&0&0\\0&0&1&0&0&0\\0&1&1&\color{red} 2&0&0\\\hline0&0&0&\color{red} \uparrow&1&0\\0&0&0&0&0&1\\0&0&0&0&1&1\end{array}\right]$$
Basic "building blocks" as in the answer here sit on the "diagonal" and can easily be implemented with a Kronecker product with $\bf I$; the "storage states" are the 2s. Now let the upper leftmost block have its storage state displaced one row up, into the block above. This way each displacement will slow things down (one lower exponent than the previous part of the chain).
Any elegant way to avoid the displacement latency will be welcome!
Crazy checker (first time we get non-zero prob for 1 and 2 resp):$${\bf v} = \left[\begin{array}{cccccc}0&0&0&0&0&1\end{array}\right]^T$$$${\bf P}^2{\bf v} = \left[\begin{array}{cccccc}0&0&0&0.25&0.25&0.5\end{array}\right]^T\\{\bf P}^5{\bf v} = \left[\begin{array}{cccccc}0.0625&0.125&0.3125&0.09375&0.15625&0.25\end{array}\right]^T$$
The chance for 2 heads in a row is $1/4 = 0.25$. The chance for 4 heads in a row is $1/16 = 0.0625$.
So assuming $H{\bf H}H{\bf H}$ counts as two heads in a row twice then the solution works!
We can also calculate the probability of not having any two H in a row in a string of 2 and 5 tosses respectively, and if we do, we indeed get the sum of the two last states: $$1-\frac 1 4 = 0.25+0.5 = 0.75 \\ 1-\frac {19}{32} = 0.15625+0.25= 0.40625$$
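The "crazy checker" can be reproduced in a few lines of plain Python (all probabilities here are dyadic rationals, so exact float comparison happens to be safe):

```python
# Rebuild P (states ordered as in the post) and apply it to v.
U = [[2, 1, 0, 0, 0, 0],
     [0, 0, 1, 0, 0, 0],
     [0, 1, 1, 2, 0, 0],
     [0, 0, 0, 0, 1, 0],
     [0, 0, 0, 0, 0, 1],
     [0, 0, 0, 0, 1, 1]]
P = [[0.5 * x for x in row] for row in U]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(6)) for i in range(6)]

v = [0, 0, 0, 0, 0, 1]

w = v
for _ in range(2):
    w = matvec(P, w)
assert w == [0, 0, 0, 0.25, 0.25, 0.5]                          # P^2 v

w = v
for _ in range(5):
    w = matvec(P, w)
assert w == [0.0625, 0.125, 0.3125, 0.09375, 0.15625, 0.25]     # P^5 v

# No-two-heads probability for 5 tosses: sum of the two last states.
assert w[4] + w[5] == 1 - 19 / 32
```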
The systematic construction of matrices for arbitrary $k,l,m$ should now be obvious.
dU, dG, dH, etc. are all exact differentials, and the variables themselves are known as state functions because they only depend on the state of the system. However, dq and dw, for example, are inexact differentials. My question is: what does this actually mean? I've been told that exact differentials are not dependent on the path (what does 'path' mean?) but inexact differentials are. I have been told this is related to line integrals, but I am not sure how. Also, is this relevant?
As opposed to an exact differential, an inexact differential cannot be expressed as the differential of a function, i.e. while there exist a function $U$ such that $U = \int \mathrm{d} U$, there is no such functions for $\text{đ} q$ and $\text{đ} w$. And the same is, of course, true for any state function $a$ and any path function $b$ respectively: an infinitesimal change in a state function is represented by an exact differential $\mathrm{d} a$ and there is a function $a$ such that $a = \int \mathrm{d} a$, while an infinitesimal change in a path function $b$ is represented by an inexact differential $\text{đ} b$ and there is no function $b$ such that $b = \int \text{đ} b$.
Consequently, for a process in which a system goes from state $1$ to state $2$ a change in a state function $a$ can be evaluated simply as $$\int_{1}^{2} \mathrm{d} a = a_{2} - a_{1} \, ,$$ while a change in a path function $b$ can not be evaluated in such a simple way, $$\int_{1}^{2} \text{đ} b \neq b_{2} - b_{1} \, .$$ And for a state function $a$ in a thermodynamic cycle $$\oint \mathrm{d} a = 0 \, ,$$ while for a path function $b$ $$\oint \text{đ} b \neq 0 \, .$$ The last mathematical relations are important, for instance, for the first law of thermodynamics, because while $\oint \text{đ} q \neq 0$ and $\oint \text{đ} w \neq 0$ it was experimentally found that $\oint (\text{đ} q + \text{đ} w) = 0$ for a closed system, which implies that there exist a state function $U$ such that $\mathrm{d} U = \text{đ} q + \text{đ} w$.
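The path dependence can be made concrete with a toy line-integral computation; the one-forms below ($\text{đ}q = y\,dx$ as an inexact differential, $\mathrm{d}U = \mathrm{d}(xy)$ as an exact one) are illustrative choices, not from the answer:

```python
# Line integrals from (0,0) to (1,1) along two different paths:
# Path A goes along the x-axis first, then up; Path B goes up first, then across.
N = 1000

def integrate(path, one_form):
    total = 0.0
    for k in range(N):
        x0, y0 = path(k / N)
        x1, y1 = path((k + 1) / N)
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2      # midpoint rule per segment
        total += one_form(xm, ym, x1 - x0, y1 - y0)
    return total

path_a = lambda t: (min(2 * t, 1), max(2 * t - 1, 0))
path_b = lambda t: (max(2 * t - 1, 0), min(2 * t, 1))

dq = lambda x, y, dx, dy: y * dx                   # inexact: path-dependent
dU = lambda x, y, dx, dy: y * dx + x * dy          # exact: d(xy)

assert abs(integrate(path_a, dq) - 0.0) < 1e-3     # 0 along path A
assert abs(integrate(path_b, dq) - 1.0) < 1e-3     # but 1 along path B
assert abs(integrate(path_a, dU) - 1.0) < 1e-3     # exact form: both paths give
assert abs(integrate(path_b, dU) - 1.0) < 1e-3     # xy|_(1,1) - xy|_(0,0) = 1
```

The last two assertions are exactly the statement $\int_1^2 \mathrm{d}a = a_2 - a_1$ for a state function, while the first two show why no such formula exists for the inexact form.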
Not a complete answer, but the path is exactly what it sounds like. Say you are rolling a boulder up a hill. This increases its potential energy, which can be released by rolling the boulder down the hill. But the path you take to get to the top of the hill is irrelevant, only the height you raise the boulder matters for potential energy. So if you roll it up half way, let it fall back a quarter of the way, or any such combination of forward and back,
none of this matters to the total change in potential energy (assuming perfect conditions, no friction, etc.).
With respect to Wildcat's great answer, this means for state functions the endpoints of your definite integral are all that matter: you could parameterize any path you want between the endpoints and the resulting (line) integral is the same.
Be careful, there are lots of confusion and misleading claims in introductory textbooks, like that "
thermodynamics only applies to macroscopic objects", ignoring the whole field of nanothermodynamics or the thermodynamics of small objects; or that " thermodynamics only applies to equilibrium", ignoring that two Nobel Prizes for Chemistry were awarded to advances in the thermodynamics of non-equilibrium.
It is not true that an infinitesimal change in a path function "
is represented by an inexact differential". Heat, as any other path function, can be represented by an exact differential. Precisely, one of those Nobel laureates has a book where heat is treated as an exact differential. The book is " Modern Thermodynamics: From Heat Engines to Dissipative Structures" by Kondepudi and Prigogine and the pair of authors explain this topic very well, so I will copy and paste the relevant part:
For a closed system, the energy exchanged by a system with the exterior in a time dt may be divided into two parts: $dQ$, the amount of heat, and $dW$: the amount of mechanical energy. Unlike the total internal energy $dU$, the quantities $dQ$ and $dW$ are not independent of the manner of transformation; we cannot specify $dQ$ or $dW$ simply by knowing the initial and final states. Hence it is not possible to define a function $Q$ that depends only on the initial and final states, i.e. heat is not a state function. Although every system can be said to possess a certain amount of energy $U$, the same cannot be said of heat $Q$ or work $W$. But there is no difficulty in specifying the amount of heat exchanged in a particular transformation. If the rate process that results in the exchange of heat is specified, then $dQ$ is the heat exchanged in a time interval $dt$.
Most introductory texts on thermodynamics do not include irreversible processes but describe all transformations as idealized, infinitely slow, reversible processes. In this case, $dQ$ cannot be defined in terms of a time interval $dt$ because the transformation does not occur in finite time, and one has to use the initial and final states to specify $dQ$. This poses a problem because $Q$ is not a state function, so $dQ$ cannot be uniquely specified by the initial and final states. To overcome this difficulty, an "imperfect differential" $\text{đ} Q$ is defined to represent the heat exchanged in a transformation, a quantity that depends on the initial and final states and the manner of transformation. In our approach we will avoid the use of imperfect differentials. The heat flow is described by processes that occur in a finite time and, with the assumption that the rate of heat flow is known, the heat exchanged $dQ$ in a time $dt$ is well defined. The same is true for the work $dW$. Idealized, infinitely slow reversible processes still remain useful for some conceptual reasons and we will use them occasionally, but we will not restrict our presentation to reversible processes as many texts do.
The total change in energy $dU$ of a closed system in a time $dt$ is
$$dU = dQ + dW \>\>\>\>\>\>\>\>\>\>\>\>\>(2.2.3)$$
The quantities $dQ$ and $dW$ can be specified in terms of the rate laws for heat transfer and the forces that do the work. For example, the heat supplied in a time $dt$ by a heating coil of resistance $R$ carrying a current $I$ is given by $dQ = (I^2 R)dt = VI dt$, in which $V$ is the voltage drop across the coil.
In more advanced formulations one always works with rates
$$\frac{dU}{dt} = \frac{dQ}{dt} + \frac{dW}{dt}$$
with heat and work rates given by
$$\frac{dQ}{dt} = - \int_{B(t)} \mathbf{q} \mathbf{n} dB$$
$$\frac{dW}{dt} = - \int_{B(t)} \mathbf{T} \mathbf{v} dB + \int_{V(t)} \rho \mathbf{F} \mathbf{v} dV$$
For a system enclosed in a volume $V$ with a boundary $B$, with the work performed by the body forces per unit mass $\mathbf{F}$ and the contact forces $\mathbf{T}$, $\mathbf{v}$ being the velocity field, $\mathbf{q}$ the heat flux vector along the outward normal $\mathbf{n}$ to the boundary, and $\rho$ the mass density.
The problem with your approach is that this is a statically indeterminate system. Basically, you could for example turn one fixed bearing into a simple support and the structure would still be statically stable. Remove another reaction force and the system turns into a mechanism. Thus the degree of static indeterminacy is $n=1$.
As a consequence, the reaction forces depend on the material's elastic properties and cannot be determined simply by solving the force and moment equations. My approach was using the force method, assuming the elastic modulus $E$ and the area moment of inertia $I$ to be constant throughout the entire beam.
For this more general case of beam-lengths $a$ and $b$ I got the following solutions:
$$ A=F\frac{b^3}{a^3+b^3} \qquad B=F\frac{a^3}{a^3+b^3} $$
Thus for the case $a=L$ and $b=2L$:
$$ A=\frac{8}{9}F \qquad B=\frac{1}{9}F $$
PS: It may look like the solution does not depend on the elastic properties of the beam, but this is due to the assumption of a homogeneous beam, allowing me to cancel out $EI$ during my calculations.
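The two results above can be tied together with a quick check in exact rational arithmetic; the only physics used is the force-method formula plus vertical equilibrium $A+B=F$:

```python
from fractions import Fraction as Fr

def reactions(F, a, b):
    """Support reactions from the force-method result above."""
    A = F * b**3 / Fr(a**3 + b**3)
    B = F * a**3 / Fr(a**3 + b**3)
    return A, B

F = Fr(1)
A, B = reactions(F, 1, 2)          # a = L, b = 2L (lengths in units of L)
assert (A, B) == (Fr(8, 9), Fr(1, 9))
assert A + B == F                  # vertical equilibrium is satisfied
```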
force-method: (see attachment)
The principle behind the force method is that you remove, e.g., the moment restraint at point $B$ (replace it with a simple support, so the system becomes determinate) and then calculate the moment distribution of this determinate system. Then, by applying a unitary moment $X_1=1$, you calculate the moment $X_1=M_B$ required to satisfy the condition of a fixed restraint: no rotation at $B$ $\quad \to\delta_1=\delta_{10}+X_1\cdot\delta_{11}=0 $
note: $\delta_1$ is the rotation at $B$, which must be zero; $\delta_{10}$ is the rotation caused by $F$ in the reduced system (marked as GS); $X_1\cdot\delta_{11}$ is the rotation caused by the moment $X_1$ (marked as ÜS). Thus when you solve for $X_1$ you get $M_B$.
Preprints (rote Reihe) des Fachbereich Mathematik
211-2
211-1
284
A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L} = (L; \land, \lor)\) is generated by the identity function \(id(x)=x\) and the constant functions \(c_a (x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function in one or also in several variables is a monotone function of \(\mathcal{L}\). If every monotone function of \(\mathcal{L}\) is a polynomial function, then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length, then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.
253-2
Order-semi-primal lattices (1994)
285
On derived varieties (1996)
Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. Especially the operator \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart to solid varieties. Finally we use a property of the commutator of derived algebras in order to show that solvability and nilpotency are preserved under derivation.
220
Hyperidentities (1992)
The concept of a free algebra plays an essential role in universal algebra and in computer science. Manipulation of terms, calculations and the derivation of identities are performed in free algebras. Word problems, normal forms, systems of reductions, unification and finite bases of identities are topics in algebra and logic as well as in computer science. A very fruitful point of view is to consider structural properties of free algebras. A.I. Malcev initiated a thorough investigation of the congruences of free algebras. Henceforth congruence permutable, congruence distributive and congruence modular varieties were intensively studied. A lot of Malcev-type theorems are connected to the congruence lattice of free algebras. Here we consider free algebras as semigroups of compositions of terms, and more specifically as clones of terms. The properties of these semigroups and clones are adequately described by hyperidentities. Naturally a lot of theorems of "semigroup" or "clone" type can be derived. This topic of research is still in its beginning and therefore a lot of concepts and results cannot be presented in a final and polished form. Furthermore a lot of problems and questions are open which are of importance for the further development of the theory of hyperidentities.
Lie algebras over rings (Lie rings) are important in group theory. For instance, to every group $G$ one can associate a Lie ring
$$L(G)=\bigoplus _{i=1}^\infty \gamma _i(G)/\gamma _{i+1}(G),$$
where $\gamma _i(G)$ is the $i$-th term of the lower central series of $G$. The addition is defined by the additive structure of $\gamma _i(G)/\gamma _{i+1}(G)$, and the Lie product is defined on homogeneous elements by $[x\gamma _{i+1}(G),y\gamma _{j+1}(G)]=[x,y]\gamma _{i+j+1}(G)$, and then extended to $L(G)$ by linearity.
There are several other ways of constructing Lie rings associated to groups, and there are numerous applications of these. One of the most notable is the solution of the Restricted Burnside Problem by Zelmanov; see the book M. R. Vaughan-Lee, "The Restricted Burnside Problem". There are other books related to these rings, for example: Kostrikin, "Around Burnside"; Huppert, Blackburn, "Finite Groups II"; Dixon, du Sautoy, Mann, Segal, "Analytic pro-$p$ Groups".
Yeah, this software cannot be too easy to install. My installer is very professional looking; it's currently not tied into that code, but it directs the user how to search for their MikTeX and/or install it, and does a test LaTeX rendering.
Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for a review of the code.
he is usually on the 2nd monitor chat room. There are a lot of people on those chat rooms that help each other with projects.
i'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off tangent.
your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl.
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat f(z)=\prod_{m=1}^{\infty}(\cos...$
Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval
@AkivaWeinberger are you familiar with the theory behind Fourier series?
anyway here's a food for thought
for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost surely.
(a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?
@AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.
I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesnt't help my intuition), but I am having trouble seeing where in the setup they d...
Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions.
@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations
hence you're free to rescale the sides, and therefore the (semi)perimeter as well
so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality
that makes a lot of the formulas simpler, e.g. the inradius is identical to the area
It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane?
$q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
Let $f(x,y)$ be a binary quadratic form with co-prime integer coefficients. We say that $f$ is a
proper subform of $g(x,y)$ if there exists an integer matrix $A = \left(\begin{smallmatrix} a_1 & a_2 \\ a_3 & a_4 \end{smallmatrix}\right)$ with $|\det A| > 1$ such that
$$\displaystyle f(x,y) = g(a_1 x + a_2 y, a_3 x + a_4 y).$$
For example, the form $f(x,y) = 4x^2 + 4xy + 5y^2$ is a proper subform of $g(x,y) = x^2 + y^2$, since
$$\displaystyle 4x^2 + 4xy + 5y^2 = (2x + y)^2 + (2y)^2.$$
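The worked example is easy to verify exhaustively on a grid, together with the discriminant relation $\Delta(f) = \det(A)^2\,\Delta(g)$ under substitution, which is what forces $\Delta(f)$ to be non-squarefree:

```python
# f(x,y) = 4x^2 + 4xy + 5y^2 as a proper subform of g(x,y) = x^2 + y^2
# via (x, y) -> (2x + y, 2y), i.e. A = [[2, 1], [0, 2]].
f = lambda x, y: 4 * x * x + 4 * x * y + 5 * y * y
g = lambda x, y: x * x + y * y
a1, a2, a3, a4 = 2, 1, 0, 2

for x in range(-10, 11):
    for y in range(-10, 11):
        assert f(x, y) == g(a1 * x + a2 * y, a3 * x + a4 * y)

# Discriminant b^2 - 4ac transforms by det(A)^2 under the substitution.
disc_f, disc_g = 4 * 4 - 4 * 4 * 5, 0 * 0 - 4 * 1 * 1
detA = a1 * a4 - a2 * a3
assert disc_f == detA ** 2 * disc_g        # -64 = 16 * (-4)
```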
If $f$ is a proper subform, then the discriminant of $f$ is divisible by $\det(A)^2$, so it is not squarefree. My question is about the converse: suppose that $\Delta(f)$ is divisible by an odd square $m^2$. Is $f$ a proper subform of another form $g$?
Consider the $p$-spin spherical spin glass model with Hamiltonian $$H_{N,p}(\sigma)=\frac{1}{{N}^{\frac{(p-1)}{2}}} \sum \limits_{i_1,...i_p} J_{i_1,...i_p} \sigma_{i_1} \sigma_{i_2} .. \sigma_{i_p} $$ where $$\sigma = (\sigma_1,...,\sigma_{N}) \in S^{N-1}(\sqrt{N}) \subseteq \mathbb{R}^{N}, $$ the Euclidean sphere with radius $\sqrt{N}$.
I am a mathematician, so my physics knowledge is very limited.
I am trying to understand why the TAP functional, given by $$f_{TAP}(q)(\sigma) = 2^{-\frac{1}{2}}q^{\frac{p}{2}}\frac{1}{N}H_{N,p}(\sigma) + B(q) $$ where $$B(q)= -\frac{1}{2\beta} \log(1-q) -\frac{\beta}{4} (1+(p-1)q^{p} - pq^{p-1} ) ,$$ is important in the study of the $p$-spin spherical spin glass. A description in words or some comments would be enough for now, as a start. I have also found a few papers concerning the $f_{TAP}$ functional, but they seem too advanced for me.
Could someone give me some help with this?
Let $K$ be a number field. For any $\alpha, \beta \in \mathcal{O}_K$ such that $N_{K/\mathbb{Q}}(\alpha) | N_{K/\mathbb{Q}}(\beta)$, is there a $\gamma \in \mathcal{O}_K$ such that $N_{K/\mathbb{Q}}(\gamma) = N_{K/\mathbb{Q}}(\beta)/N_{K/\mathbb{Q}}(\alpha)$?
Obviously, we have $N_{K/\mathbb{Q}}(\beta/\alpha) = N_{K/\mathbb{Q}}(\beta)/N_{K/\mathbb{Q}}(\alpha)$, but it is not true in general that $\beta/\alpha \in \mathcal{O}_K$. For example, take $K = \mathbb{Q}(i)$ and $\alpha = 5, \beta = 6 + 8i$. Then $N_{K/\mathbb{Q}}(\alpha) = 25 | 100 = N_{K/\mathbb{Q}}(\beta)$, but $\beta/\alpha = \frac{6 + 8i}{5} \not\in \mathbb{Z}[i] = \mathcal{O}_K$. Of course we know that $N_{K/\mathbb{Q}}(2) = 4 = \frac{100}{25}$ so this is not a counterexample to the question.
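The $\mathbb{Q}(i)$ example in the question can be verified with exact integer arithmetic; this is a sketch of the specific computations only, not of the general question:

```python
# Norms in Z[i]: N(a + bi) = a^2 + b^2; represent a + bi as the pair (a, b).
def norm(z):
    a, b = z
    return a * a + b * b

alpha, beta = (5, 0), (6, 8)
assert norm(alpha) == 25 and norm(beta) == 100
assert norm(beta) % norm(alpha) == 0          # 25 | 100

# beta/alpha = (6 + 8i)/5 is not a Gaussian integer (components not integral) ...
assert (6 % 5, 8 % 5) != (0, 0)

# ... yet gamma = 2 has exactly the quotient norm 100/25 = 4.
assert norm((2, 0)) == norm(beta) // norm(alpha)

# And (3 + 4i)/5 has norm 1 without lying in Z[i]:
assert 3 * 3 + 4 * 4 == 5 * 5                 # |(3 + 4i)/5|^2 = 25/25 = 1
```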
But perhaps there is a chance that given $\beta/\alpha$, we could find some element $\mu \in K$ such that $N_{K/\mathbb{Q}}(\mu) = 1$ and such that $\mu\beta/\alpha \in \mathcal{O}_K$. If $\beta/\alpha \not\in \mathcal{O}_K$ then we cannot have $\mu \in \mathcal{O}_K$, but there exist in general plenty of elements of unit norm in $K$ that are not algebraic integers, so the limitation is not as stringent as that given by the structure of the unit group of $\mathcal{O}_K$. For an example of this, see again $K = \mathbb{Q}(i)$ and the element $\frac{3+4i}{5}$ which has norm $1$ but is not in $\mathbb{Z}[i]$ (this is also what I used to produce the example in the paragraph above).
I know that in the case of quadratic fields $K = \mathbb{Q}(\sqrt{d})$, we can at least parametrize the set $\{x + y\sqrt{d}: x^2 - dy^2 = 1 \text{ and } x, y \in \mathbb{Q}\}$ using the method of choosing a starting point like $(1, 0)$ and constructing the intersection of lines of rational slopes passing through this point with the curve defined by $x^2 - dy^2 = 1$. But I don't know if that is of much help.
One of the questions in my complex analysis book (Stein's text) is the following:
Prove that if $f$ is holomorphic in the unit disc, bounded, and not identically zero, and $z_1,z_2,\ldots,z_n,\ldots$ are its zeros, $(|z_k|<1)$, then $$ \sum_{n=1}^\infty(1-|z_n|)<\infty. $$
I proved this just fine using Jensen's formula, but I am still not able to think of an example of such a function. Obviously it will have to have infinitely many zeros, otherwise we're only adding finitely many terms and the problem becomes trivial. Since there are infinitely many, the limit point(s) have to be on the boundary (otherwise the function is identically zero). At one point, I think someone suggested a function like $\sin(\pi/z)$, but this isn't bounded (indeed, it has lots of problems around $0$).
Does anyone have an example of such a function?
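For what it's worth (this goes beyond the Stein exercise itself): the classical examples are Blaschke products, which are bounded holomorphic functions on the disc vanishing exactly at a prescribed zero set $\{z_n\}$ whenever the condition of the exercise holds. A quick check that, e.g., $z_n = 1 - 1/n^2$ satisfies the condition, since $\sum (1-|z_n|) = \sum 1/n^2 = \pi^2/6 < \infty$:

```python
import math

# Partial sums of sum_n (1 - |z_n|) with z_n = 1 - 1/n^2 on the real axis.
partial = sum(1 - (1 - 1 / n**2) for n in range(1, 100001))

assert partial < math.pi**2 / 6              # partial sums stay below the limit
assert math.pi**2 / 6 - partial < 1e-4       # ... and converge to pi^2 / 6
```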
I have some questions concerning the hyperbolic geometry side of the rigidity question for $K_3$ which asks if the natural map $K_3^{\operatorname{ind}}(\overline{\mathbb{Q}})\to K_3^{\operatorname{ind}}(\mathbb{C})$ is surjective.
Question 0, a historical aside: Recently I grew a little uncertain about the correct attribution of this question/conjecture. I have seen this conjecture attributed to Bloch somewhere, and in Dupont's book "Scissors congruences, group homology and characteristic classes" it is attributed to Sah. What's the exact history?
Now for the actual mathematical questions. Recall that one can associate classes in $K_3$ (or versions of the Bloch group) to hyperbolic $3$-manifolds. Roughly, this is done by choosing a triangulation of the manifold $M$ by ideal tetrahedra, and the ideal tetrahedra naturally give classes in the scissors congruence (or pre-Bloch) group. There are several papers by Neumann and Yang about this. My main question is now:
what, if anything, corresponds to the rigidity question for $K_3$ on the hyperbolic $3$-manifold side? More detailed questions are formulated below:
Question 1: What are possible reasons for believing or disbelieving the conjecture? I guess the rigidity of the Cheeger-Chern-Simons invariants is one reason for believing rigidity; maybe another is that we have no clue what else could be in $K_3^{\operatorname{ind}}(\mathbb{C})$? Any other reasons? Or maybe there is an argument why hyperbolic geometry cannot possibly see non-rigidity of $K_3$? I would be mainly interested in intuition from the hyperbolic geometry side, about which I know almost nothing.
Question 2: Assume, just for the fun of it, that the conjecture is false, i.e., there are classes in $K_3^{\operatorname{ind}}(\mathbb{C})$ which do not come from $K_3^{\operatorname{ind}}(\overline{\mathbb{Q}})$. Would there be a hyperbolic geometry interpretation of these classes?
Something related to Question 2 was discussed in Ian Agol's answer to this MO-question. Apparently, one would not see the "new" classes as manifolds with strange triangulations, but rather the new classes would come from deformations of $\operatorname{SL}_2(\mathbb{C})$-representations of the fundamental group of $M$. The representations would correspond to flat rank $2$ vector bundles. Is it possible to say that a failure of the rigidity conjecture would imply the existence of strange deformations of flat rank $2$ vector bundles on hyperbolic $3$-manifolds?
Are there other things/objects in hyperbolic geometry that would deform in a strange way if the rigidity conjecture was false? I guess it is called rigidity conjecture for a reason.
Question 3: Have people considered a way of going from classes in the Bloch group to hyperbolic manifolds? An element of the Bloch group can be represented as a linear combination of ideal tetrahedra, but it is not obvious to me how I could get a hyperbolic $3$-manifold from that? Is the relation defining the Bloch group ($\sum x\wedge (1-x)=0$) enough to make sure that the ideal tetrahedra can be glued to a manifold in some way?
Provided Question 3 has a positive answer, then I would have a way of interpreting elements in $K_3^{\operatorname{ind}}(\mathbb{C})$ which do not come from $\overline{\mathbb{Q}}$ (assuming these exist). For simplicity, let $C$ be a smooth projective curve over $\overline{\mathbb{Q}}$. Then I would interpret elements in $K_3^{\operatorname{ind}}(\overline{\mathbb{Q}}(C))$ as (linear combinations of) deformations of ideal tetrahedra with boundary points in $\overline{\mathbb{Q}}$ with parameter space some open subcurve of $C$. If the simplices can be glued, that would provide a hyperbolic $3$-manifold together with a deformation of a triangulation (with parameter space a subcurve of $C$). The fact that the corresponding element in $K_3$ does not come from $\overline{\mathbb{Q}}$ would say that the deformation of the triangulation can not be made constant by the obvious operations on ideal tetrahedra. Is it true that I can view non-rigid elements in $K_3$ as deformations of ideal triangulations?
Question 4: Now this is only meaningful if the questions above have a reasonably positive answer. Assuming the failure of the rigidity conjecture, and assuming that it is possible to represent non-rigid elements by deformations of hyperbolic-geometric objects (vector bundles or triangulations or some such thing), would there be invariants (other than their classes in $K_3$) that one could use to show that such objects yield non-rigid classes in $K_3$? The Cheeger-Chern-Simons invariants are rigid, but are there other suitably analytic invariants, maybe related to regulators, that could do the job?
Any help, comments and explanations are most welcome. |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
In her solution, Maile observes that the numeral 1000 can represent different numbers. It might represent
one thousand. In fact, I think it's fair to say it "usually" represents one thousand because we usually interpret numerals in base 10. But, the symbol 1000 could just as easily represent eight if we interpret it in base 2. (In fact, it could just as easily represent a lot of numbers: sixty-four, four thousand ninety-six, negative twenty-seven... but let's not get ahead of ourselves.)
Maile answers the challenge by claiming that we can stop finger counting at eight rather than counting all the way to one thousand, since 1000 represents the number eight in base 2. She's right in that we do in fact land on the same finger. "But," I wondered, "Does that always work?"
(As a quick aside, I have to appreciate the lucky randomness of the fact that I chose 1000 as the target number. That is to say, if I had asked what finger we'd be on when counting to 500 or to 2000, the connection to binary numbers would never, I suspect, have come up.)
To answer the question "does it always work?", it's helpful to watch Paul's solution to the challenge, in which he makes the connection between this kind of finger counting and modular arithmetic. If a number is congruent to either 2 or 0 modulo 8 it will end up on your index finger. Since \(1000 \equiv 0 \mod{8}\), we end up on the index finger when we count to 1000.
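Paul's rule (you land on the index finger exactly when the count is congruent to 0 or 2 modulo 8) is easy to check mechanically. Here is a small sketch; the finger names and the exact back-and-forth counting order are my own rendering of the scheme:

```python
# Count back and forth across five fingers: thumb, index, middle, ring,
# pinky, then ring, middle, index, thumb, ... The pattern repeats every 8.
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]
CYCLE = FINGERS + FINGERS[-2:0:-1]  # length 8: ends ...ring, middle, index

def finger(n):
    """Finger you land on when counting to n (n >= 1)."""
    return CYCLE[(n - 1) % 8]

print(finger(2))     # index
print(finger(8))     # index
print(finger(1000))  # index, since 1000 % 8 == 0
```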
Said the other way around: if two numbers are congruent modulo 8, then they will end up on the same finger when finger counting in this way. So here was the question I had:
Given a numeral made up only of the digits 1 and 0, are the base 2 value of this numeral and the base 10 value of this numeral always congruent modulo 8?
Let's imagine an arbitrary string of 1s and 0s that is \(n\) symbols long. We might label the symbols \(b_{n-1} b_{n-2} \ldots b_2 b_1 b_0\), where each \(b_i\) is either 1 or 0.
When we interpret the string as a base 10 number, its value is \[b_{n-1} 10^{n-1} + \ldots + b_2 10^2 + b_1 10^1 + b_0 10^0.\]
On the other hand, when we interpret the string as a base 2 number, its value is \[b_{n-1} 2^{n-1} + \ldots + b_2 2^2 + b_1 2^1 + b_0 2^0.\]
Our goal is to show that the difference of these two is a multiple of eight. We subtract and gather up like terms, giving us \[b_{n-1} (10^{n-1} - 2^{n-1}) + \ldots + b_2 (10^2 - 2^2) + b_1 (10^1 - 2^1) + b_0 (10^0 - 2^0).\]
All of the terms have the same form: \(b_i (10^i - 2^i) = b_i 2^i (5^i - 1)\). When \(i \geq 3\) the term clearly has \(2^3 = 8\) as a factor. But what about the smaller terms, when \(i < 3\)? Well, we're in luck:
\begin{array}{l}
b_2(10^2 - 2^2) = b_2(96)\\
b_1(10^1 - 2^1) = b_1(8)\\
b_0(10^0 - 2^0) = b_0(0)
\end{array}
Since each of the terms in the sum is a multiple of 8, the sum as a whole is a multiple of 8, which is what we wanted to show.
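The algebraic argument can also be brute-force checked for short strings, which is a nice sanity test. A quick sketch:

```python
# Exhaustive check for all 0/1 strings up to 12 digits: the base-2 and
# base-10 readings of the same digit string are congruent modulo 8.
from itertools import product

for n in range(1, 13):
    for bits in product("01", repeat=n):
        s = "".join(bits)
        assert int(s, 2) % 8 == int(s, 10) % 8

print("verified")
```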
I think this is a great example of what makes teaching so challenging and exciting for me. I love being surprised by students and by the way they approach mathematics, and I love thinking deeply about so-called "simple" concepts.
I hope to hear from folks in the comments, either here or over at Math Mistakes! |
I had an exam today, and in the exam a question was this: "A 50 year old tortoise jumps onto a rocket and goes to a distant planet 370 light years away; the rocket's speed was 0.7c. If a tortoise lives 450 years on average, would he live till reaching the destination or not?" A cruel question, I know, but I calculated it like this: relativity won't affect the tortoise, so it'd take him (370 light years)/(0.7c) ≈ 528 years, and he won't survive. Is my process correct? Because as I understand it, for the tortoise everything is normal. All my friends have answered using time dilation. So I am really confused. Please help.
Aging When Travelling (relativity)
Posted 26 April 2017 - 08:47 AM
The tortoise rocket clock would record an elapsed time of 377.47 years, so the tortoise would only be 427.5 (rounded) years old, giving it another 22.5 years to explore the planet. 528.57 years is the period of time an Earth based observer would record for the 370 light year journey of the craft travelling at 0.7c. The tortoise travelling in the rocket would experience time dilation of that figure, giving 377.47 years.
Posted 26 April 2017 - 11:54 AM
The tortoise rocket clock would record an elapsed time of 377.47 years...
How did you get that?
I had an exam today, and in the exam a question was this: "A 50 year old tortoise jumps onto a rocket and goes to a distant planet 370 light years away; the rocket's speed was 0.7c. If a tortoise lives 450 years on average, would he live till reaching the destination or not?" A cruel question, I know, but I calculated it like this: relativity won't affect the tortoise, so it'd take him (370 light years)/(0.7c) ≈ 528 years, and he won't survive. Is my process correct? Because as I understand it, for the tortoise everything is normal. All my friends have answered using time dilation. So I am really confused. Please help.
They got it wrong as well then. The distance traveled in space is shortened by the same amount as is traveled in time. Apply whatever length contraction is at 0.7c, then work out how long it would take to travel that distance, then apply time dilation to the journey time. The tortoise doesn't feel time dilated, so in that sense everything is normal. The tortoise should have plenty of time.
Edited by A-wal, 26 April 2017 - 04:20 PM.
Posted 26 April 2017 - 03:34 PM
How did you get that?
Special relativity means moving clocks run slow. The time dilation equation is:

∆t₀ = ∆t √(1 – v²/c²)

where:
∆t₀ = spacecraft time interval in years
∆t = Earth based time interval in years (528.57 in this case)
v = spacecraft velocity, 0.7c
c = speed of light (2.99792458×10⁸ m/s, or 3.0×10⁸ rounded; cancels out in this example)

∆t₀ = ∆t √(1 – v²/c²) = 528.57 √(1 – 0.7²) = 528.57 √(1 – 0.49) = 528.57 √0.51
∆t₀ = 528.57 × 0.7141428 = 377.47 years
Posted 26 April 2017 - 04:20 PM
The tortoise does feel time dilated so in that sense everything is normal.

I meant: The tortoise DOESN'T feel time dilated, so in that sense everything is normal.
Posted 04 May 2017 - 05:54 AM
Mathematically, the tortoise would have quite some time to explore the planet. However, time dilation is based on moving at fast speeds, which can be caused by gravity. 0.7c is hardly fast enough to escape the gravity of a black hole, and if you're going to a planet 370 light years away, you are bound to run into 100 different large black holes. In technical terms, he would be dead before he reached even 30% of his journey. To really go into depth, you'll need to know the route and the gravitational fields emitted; only then will your answer be 100% correct.
Posted 04 May 2017 - 09:46 AM
if you're going to a planet 370 light years away, you are bound to run into a 100 different large black holes.
I’m not so sure, the nearest black hole is 2,800 light years away in the faint Monoceros (Unicorn) constellation, (viewed in the night sky between Sirius and Procyon), according to the reference below.
Posted 04 May 2017 - 11:15 AM
sparten45 gave the correct answer per SR and the initial conditions, which did not include black holes.
Posted 18 May 2017 - 09:34 AM
sparten45 gave the correct answer per SR
No he didn't, he left out length contraction. To get the difference in the amount of proper time it would take to make the journey you need to work out the difference in coordinate time over the distance in space.
Length contracts by the same amount as time dilates so it should take 270 years proper time.
Posted 18 May 2017 - 09:40 AM
Hi,
Mathematically, the tortoise would have quite some time to explore the planet. However, time dilation is based on moving at fast speeds, which can be caused by gravity. 0.7c is hardly fast enough to escape the gravity of a black hole, and if you're going to a planet 370 light years away, you are bound to run into 100 different large black holes. In technical terms, he would be dead before he reached even 30% of his journey. To really go into depth, you'll need to know the route and the gravitational fields emitted; only then will your answer be 100% correct.
No, there are no black holes that close to the Earth.
Posted 18 May 2017 - 03:09 PM
No he didn't, he left out length contraction. To get the difference in the amount of proper time it would take to make the journey you need to work out out the difference in coordinate time over the distance in space.
Length contracts by the same amount as time dilates
This is interesting, as you have tackled the problem from a different perspective. Here is my reasoning looking at the problem from the point of view of length contraction.
L = L₀ √(1 – v²/c²)

where:
L₀ = Earth based distance in light years (370 in this case)
L = spacecraft measured distance in light years
v = spacecraft velocity, 0.7c
c = speed of light (2.99792458×10⁸ m/s, or 3.0×10⁸ rounded; cancels out in this example)

L = L₀ √(1 – v²/c²) = 370 √(1 – 0.7²) = 370 √(1 – 0.49) = 370 √0.51
L = 370 × 0.7141428 = 264.23 light years.

Using time = distance/speed: t = 264.23/0.7 years
t = 377.47 years.
Posted 18 May 2017 - 03:45 PM
This is interesting, as you have tackled the problem from a different perspective. Here is my reasoning looking at the problem from the point of view of length contraction.
L = L₀ √(1 – v²/c²)

where:
L₀ = Earth based distance in light years (370 in this case)
L = spacecraft measured distance in light years
v = spacecraft velocity, 0.7c
c = speed of light (2.99792458×10⁸ m/s, or 3.0×10⁸ rounded; cancels out in this example)

L = L₀ √(1 – v²/c²) = 370 √(1 – 0.7²) = 370 √(1 – 0.49) = 370 √0.51
L = 370 × 0.7141428 = 264.23 light years.

Using time = distance/speed: t = 264.23/0.7 years
t = 377.47 years.
You are much too kind! A-wal didn't tackle the problem at all; he just posted a number, 270 years, that is wrong, without showing any calculations at all.
You, on the other hand have done your calculations properly, and your solutions of 264.23 LY distance due to length contraction, and 377.47 years time, due to time dilation, are correct.
One thing to keep in mind about this sort of problem is both observers (the one on earth and the one in the rocket) will not agree on the time or the distance, but they always agree on the velocity.
That gives you a quick and easy way to check your answer.
Since velocity = distance / time
For the earth observer: 0.7c = 370 LY / 528 Y
For the space farer: 0.7C = 264.23 LY / 377.47 Y
The velocities agree, as they must, so that is a good indication your answers are correct, and of course they are.
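The numbers in the thread can be reproduced with a short script (a sketch in units where c = 1; the variable names are mine). It also shows why the gamma factor is applied once, not twice:

```python
import math

# Work in units where c = 1: distances in light years, times in years.
v = 0.7           # speed as a fraction of c
L0 = 370.0        # Earth-frame distance to the planet, light years

gamma = 1.0 / math.sqrt(1.0 - v**2)   # Lorentz factor, about 1.4

earth_time = L0 / v           # about 528.6 years in the Earth frame
ship_length = L0 / gamma      # about 264.2 ly after length contraction
ship_time = ship_length / v   # about 377.5 years of proper time

# Time dilation and length contraction are the same gamma factor seen
# from two sides, so gamma is divided out once -- not twice:
assert abs(ship_time - earth_time / gamma) < 1e-9

# Both observers agree on the velocity:
assert abs(ship_length / ship_time - L0 / earth_time) < 1e-12

print(round(earth_time, 1), round(ship_length, 1), round(ship_time, 1))
```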
But don’t expect A-wal “Mr special relativity” to admit he is wrong. He never does and he never learns
Posted 18 May 2017 - 05:06 PM
Right, length contracts by the same amount as time dilates so you get the same result. So when both are taken into account you get a proper time of 270 years for the trip, right?
Posted 19 May 2017 - 03:38 AM
Right, length contracts by the same amount as time dilates so you get the same result.
You get the same result for velocity, yes.
So when both are taken into account you get a proper time of 270 years for the trip, right?
No, Wrong. See here
For an object in a SR spacetime traveling with a velocity of v for a time Δ T the proper time interval experienced is the same as in the SR time dilation formula, 377.47 years in this case.
Why do you think the proper time is different from the dilated time?
Posted 21 May 2017 - 11:52 AM
The velocities agree, as they must, so that is a good indication your answers are correct, and of course they are.
But don’t expect A-wal “Mr special relativity” to admit he is wrong. He never does and he never learns
You get the same result for velocity, yes.
No, Wrong. See here
For an object in a SR spacetime traveling with a velocity of v for a time Δ T the proper time interval experienced is the same as in the SR time dilation formula, 377.47 years in this case.
It shouldn't be, that doesn't take into account traveling the shorter distance caused by length contraction after accelerating.
Why do you think the proper time is different from the dilated time?
Time dilation is the coordinate difference between two inertial frames; proper time is the amount of (dilated) time that it takes to move across length contracted space. This...
Special relativity means moving clocks run slow. The time dilation equation is:

∆t₀ = ∆t √(1 – v²/c²)

where:
∆t₀ = spacecraft time interval in years
∆t = Earth based time interval in years (528.57 in this case)
v = spacecraft velocity, 0.7c
c = speed of light (2.99792458×10⁸ m/s, or 3.0×10⁸ rounded; cancels out in this example)

∆t₀ = ∆t √(1 – v²/c²) = 528.57 √(1 – 0.7²) = 528.57 √(1 – 0.49) = 528.57 √0.51
∆t₀ = 528.57 × 0.7141428 = 377.47 years
...is time dilation. This...
This is interesting, as you have tackled the problem from a different perspective. Here is my reasoning looking at the problem from the point of view of length contraction.
L = L₀ √(1 – v²/c²)

where:
L₀ = Earth based distance in light years (370 in this case)
L = spacecraft measured distance in light years
v = spacecraft velocity, 0.7c
c = speed of light (2.99792458×10⁸ m/s, or 3.0×10⁸ rounded; cancels out in this example)

L = L₀ √(1 – v²/c²) = 370 √(1 – 0.7²) = 370 √(1 – 0.49) = 370 √0.51
L = 370 × 0.7141428 = 264.23 light years.

Using time = distance/speed: t = 264.23/0.7 years
t = 377.47 years.
...is length contraction. To get the proper time you need both. When the tortoise accelerates, the distance between the starting point and the destination decreases and so does the coordinate time it takes to cover that distance. The amount of proper time is how long it takes on the watch of the tortoise.
Maybe the equations are taking both into account in both examples but that's not how it reads.
Edited by A-wal, 21 May 2017 - 11:54 AM.
Posted 21 May 2017 - 12:33 PM
I gave you the link to follow. If that doesn't convince you, I won't waste any more of my time on you.
[math] \Delta \tau = {\sqrt {\Delta T^{2}-(v_{x}\Delta T/c)^{2}-(v_{y}\Delta T/c)^{2}-(v_{z}\Delta T/c)^{2}}}=\Delta T{\sqrt {1-v^{2}/c^{2}}} [/math]
As you can see, the last part of the equation for proper time, is exactly the same as the expression for dilated time in SR.
Edited by OceanBreeze, 21 May 2017 - 12:46 PM.
Posted 21 May 2017 - 01:30 PM
But this...
Special relativity means moving clocks run slow. The time dilation equation is:

∆t₀ = ∆t √(1 – v²/c²)

where:
∆t₀ = spacecraft time interval in years
∆t = Earth based time interval in years (528.57 in this case)
v = spacecraft velocity, 0.7c
c = speed of light (2.99792458×10⁸ m/s, or 3.0×10⁸ rounded; cancels out in this example)

∆t₀ = ∆t √(1 – v²/c²) = 528.57 √(1 – 0.7²) = 528.57 √(1 – 0.49) = 528.57 √0.51
∆t₀ = 528.57 × 0.7141428 = 377.47 years
... is time dilation alone, meaning that the journey in proper time would be 377.47 years if length in space weren't contracted, and this...
This is interesting, as you have tackled the problem from a different perspective. Here is my reasoning looking at the problem from the point of view of length contraction.
L = L₀ √(1 – v²/c²)

where:
L₀ = Earth based distance in light years (370 in this case)
L = spacecraft measured distance in light years
v = spacecraft velocity, 0.7c
c = speed of light (2.99792458×10⁸ m/s, or 3.0×10⁸ rounded; cancels out in this example)

L = L₀ √(1 – v²/c²) = 370 √(1 – 0.7²) = 370 √(1 – 0.49) = 370 √0.51
L = 370 × 0.7141428 = 264.23 light years.

Using time = distance/speed: t = 264.23/0.7 years
t = 377.47 years.
... is length contraction alone, meaning that the journey in proper time would be 377.47 years if length in time weren't dilated.
Are you saying that both ways of doing it take both time dilation and length contraction into account? It doesn't look like it.
What is the relationship of the following to other axioms of $\sf ZFC$?
$\sf WB$: Every set $A$ is in bijection with a set well-founded by $\in$.
Obviously, $\sf ZF$ implies $\sf WB$ (because every set is well-founded) and $\sf ZFC-Reg$ implies $\sf WB$ (because every set is equinumerous to an ordinal, which is well-founded). Is it known if $\sf ZF-Reg$ implies $\sf WB$? Another equivalent statement is:
$\sf WB'$: Every set injects into ${\rm WF}:=\bigcup_\alpha V_\alpha$.
(Since every set well-founded by $\in$ is in $\rm WF$, $\sf WB\to WB'$, and for the converse note that the range $B\subseteq {\rm WF}$ of any injection from set $A$ must also be a set so it is contained in $V_\delta$ where $\delta$ is the supremum of the ranks of elements of $B$.) When written this way it seems hard to believe that it could be false even without Regularity. |
Each month the challenge has a different topic. This month the challenge was to improve an existing graph. I decided to take a previous graph of mine, published in a paper from 2015.
The "original"
The following is the graph that we are going to start with.
This is not the exact same graph that was presented in the article in 2015, but it serves the purpose of the challenge. The data can be downloaded from here. This graph presents the fraction of energy (\(\eta\)) transmitted for helical composites with different geometrical parameters. The parameters varied were:
- Pitch angle \(\alpha\): the angle between consecutive layers;
- Composite thickness \(D\), normalized to the wavelength \(\lambda\); and
- Number of layers \(N\) in each cell of the composite.
The following schematic illustrates these parameters.
I would not say that the graph is awful, and, in comparison to what you would find in most scientific literature, it is even good. But … in the land of the blind, the one-eyed is king. So, let's enumerate the sins of the plot and see if we can correct them:

- It has two x axes.
- The legend is enclosed in a box that seems unnecessary.
- Right and top spines are not contributing to the plot.
- Annotations are stealing protagonism from the data.
- It looks cluttered with lines and markers.
- It is a spaghetti graph.
The following snippet generates this graph.
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import rcParams

rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['legend.fontsize'] = 15
rcParams['mathtext.fontset'] = 'cm'

markers = ['o', '^', 'v', 's', '<', '>', 'd', '*', 'x', 'D', '+', 'H']

data = np.loadtxt("Energy_vs_D.csv", skiprows=1, delimiter=",")
labels = np.loadtxt("Energy_vs_D.csv", skiprows=0, delimiter=",",
                    usecols=range(1, 9))
labels = labels[0, :]

fig = plt.figure()
ax = plt.subplot(111)
for cont in range(8):
    plt.plot(data[:, 0], data[:, cont + 1], marker=markers[cont],
             label=r"$D/\lambda={:.3g}$".format(labels[cont]))

# First x-axis
xticks, xlabels = plt.xticks()
plt.xlabel(r"Number of layers - $N$", size=15)
plt.ylabel(r"Fraction of Energy - $\eta$", size=15)
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))

# Second x-axis
ax2 = ax.twiny()
ax2.set_xticks(xticks[2:])
ax2.set_xticklabels(180./xticks[2:])
plt.xlabel(r"Angle - $\alpha\ (^\circ)$", size=15)

plt.tight_layout()
plt.savefig("energy_vs_D_orig.svg")
plt.savefig("energy_vs_D_orig.png", dpi=300)

Corrections
I will vindicate the graph one sin at a time; let's see how it turns out.

It has two x axes
I, originally, added two axes to show both the number of layers and the angle between them at the same time. The general recommendation is to avoid this, so let's get rid of the top one.
Legend in a box
Pretty straightforward …
Right and top spines
Let's remove them
Annotations are stealing protagonism
Let's move all the annotations to the background by changing the color to a lighter gray.
Cluttered with lines and markers
Let's just keep the lines.
And increase the linewidth.
It is a spaghetti graph
I think that a good option for this graph would be to highlight one line at a time. Doing this, we end up with.
The following snippet generates this version.
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import rcParams

# Plots setup
gray = '#757575'
plt.rcParams["mathtext.fontset"] = "cm"
plt.rcParams["text.color"] = gray
plt.rcParams["xtick.color"] = gray
plt.rcParams["ytick.color"] = gray
plt.rcParams["axes.labelcolor"] = gray
plt.rcParams["axes.edgecolor"] = gray
plt.rcParams["axes.spines.right"] = False
plt.rcParams["axes.spines.top"] = False
rcParams['font.family'] = 'serif'
rcParams['mathtext.fontset'] = 'cm'


def line_plots(data, highlight_line, title):
    plt.title(title)
    for cont, datum in enumerate(data[:, 1:].T):
        if cont == highlight_line:
            plt.plot(data[:, 0], datum, zorder=3, color="#984ea3",
                     linewidth=2)
        else:
            plt.plot(data[:, 0], datum, zorder=2, color=gray,
                     linewidth=1, alpha=0.5)


data = np.loadtxt("Energy_vs_D.csv", skiprows=1, delimiter=",")
labels = np.loadtxt("Energy_vs_D.csv", skiprows=0, delimiter=",",
                    usecols=range(1, 9))
labels = labels[0, :]

plt.close("all")
plt.figure(figsize=(8, 4))
for cont in range(8):
    ax = plt.subplot(2, 4, cont + 1)
    title = r"$D/\lambda={:.3g}$".format(labels[cont])
    line_plots(data, cont, title)
    plt.ylim(0.4, 1.0)
    if cont < 4:
        plt.xlabel("")
        ax.xaxis.set_ticks([])
        ax.spines["bottom"].set_color("none")
    else:
        plt.xlabel(r"Number of layers - $N$")
    if cont % 4 > 0:
        ax.yaxis.set_ticks([])
        ax.spines["left"].set_color("none")
    else:
        plt.ylabel(r"Fraction of Energy - $\eta$")
plt.tight_layout()
plt.savefig("energy_vs_D_7.svg")
plt.savefig("energy_vs_D_7.png", dpi=300)

Final tweaks
Using Inkscape I added some final details to get the following version.
Further reading

- Knaflic, Cole Nussbaumer. Storytelling with Data: A Data Visualization Guide for Business Professionals. John Wiley & Sons, 2015.
- Nicolás Guarín-Zapata et al. "Shear wave filtering in naturally-occurring Bouligand structures." Acta Biomaterialia 23 (2015): 11-20. Preprint: https://arxiv.org/pdf/1505.04203.pdf
Density changes when mixing liquids is not only applicable to NaOH solutions. Even when you dilute a stock solution with water you may find that the volume is not additive. The factors that matter are the starting concentration and the concentration change.
At the concentrations you're using, you can probably safely ignore density changes and assume volumes are additive, unless you need extreme precision (but then you could use volumetric glassware and do the dilutions properly, instead of just adding volumes).
You are correct: when you dilute your $100 \ \mu M$ pNP solution with $0.1 \ M$ NaOH, it will turn yellow, because you form the pNP anion, which is intensely yellow.
However, for the spectroscopy part to work correctly, you must make sure you always have at least the stoichiometric amount of NaOH, i.e. you must convert all pNP to its anion. This will be the case in all your diluted solutions, simply because the concentration of NaOH is 1000 times larger than the pNP concentration ($0.1 \ M = 100 \ mM = 100000 \ \mu M$), so whenever the volume of NaOH solution you add to the pNP solution is greater than 1/1000 times the volume of the latter, you're already in excess of NaOH.
For the rest, the formula you mentioned works correctly. For instance, to make the $70 \ \mu M$ solution of pNP anion: $V_2 = \frac {100 \cdot V_1} {70} \approx 1.43 \cdot V_1$, so you could add $0.43 \ mL$ of $0.1 \ M$ NaOH to $1 \ mL$ of $100 \ \mu M$ pNP. You can see that you are in huge excess of NaOH, so there is no risk to leave any non-ionised pNP.
You can achieve the same result by a 'quick' subtraction method, which also works when both solutions contain the substance of interest.
In this case, your stock is $100 \ \mu M$ in the substance of interest (pNP). You dilute it with a solution that contains no pNP, so it's $0 \ \mu M$. You want to obtain a $70 \ \mu M$ solution. Subtract the final desired concentration from the concentration of each solution you're mixing, take the absolute values and swap the results. So, pNP stock minus final desired solution is $|100 - 70| = 30$. NaOH minus final desired solution is $|0 - 70| = 70$. Swap the two: 30 applies to NaOH and 70 applies to pNP. This means that, to obtain the desired final solution, you need to mix 30 'parts' of NaOH with 70 'parts' of pNP. You can easily verify that indeed $100/70-1 = 30/70$.
This may sound more complicated than the other formula, but in fact if you write the concentrations of the two 'starting' solutions on the top left and bottom left corner of an ideal square, respectively, and the final concentration in the centre of the square, and then do the subtractions 'across the diagonals', you will get in the right corners of the square the 'parts' to mix for each starting solution.
E.g. if you had a $10 \ \mu M$ solution (A) and a $90 \ \mu M$ solution (B) and wanted to obtain a $40 \ \mu M$ solution by mixing them, this method would quickly tell you that you need to mix 50 'parts' of A and 30 'parts' of B. Obviously any mixing-based dilution will work only if the final desired concentration is 'bracketed' by the concentrations of the solutions being mixed. In these examples, $0<70<100$ and $10<40<90$. |
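The "square" (alligation) method described above is easy to mechanize. A sketch, where the function name and interface are mine:

```python
def mixing_parts(c_a, c_b, c_target):
    """Parts of solutions A and B (concentrations c_a, c_b) to mix to
    reach c_target, by the alligation-square rule: subtract across the
    diagonals and take absolute values (note the swap between corners)."""
    if not min(c_a, c_b) <= c_target <= max(c_a, c_b):
        raise ValueError("target must be bracketed by the two concentrations")
    parts_a = abs(c_b - c_target)   # B's difference goes to A's corner
    parts_b = abs(c_a - c_target)   # A's difference goes to B's corner
    return parts_a, parts_b

# pNP example from the text: 100 uM stock diluted with 0 uM NaOH to 70 uM
print(mixing_parts(100, 0, 70))   # (70, 30): 70 parts pNP, 30 parts NaOH
# Second example: 10 uM (A) and 90 uM (B) mixed to give 40 uM
print(mixing_parts(10, 90, 40))   # (50, 30)
```

As a sanity check, the weighted average (70·100 + 30·0)/100 = 70 µM, exactly the target concentration.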
The linear bounded automata (LBA) is defined as follows:
A linear bounded automata is a nondeterministic Turing machine $M=(Q,\Sigma,\Gamma,\delta,q_0,\square,F)$ (as in the definition of TM) with the restriction that $\Sigma$ must contain two symbols $[$ and $]$, such that $\delta(q_i,[)$ can contain only elements of the form $(q_j,[,R)$ and $\delta(q_i,])$ can contain only elements of the form $(q_j,],L)$
Informally this can be interpreted as follows:
In linear bounded automata, we allow the Turing machine to use only that part of the tape occupied by the input. The input can be envisioned as bracketed by left end marker $[$ and right end marker $]$. The end markers cannot be rewritten, and RW head cannot move to the left of $[$ or to the right of $]$.
Now I read that context sensitive grammar imitates the function of LBA and is defined as follows:
A grammar is a CSG if all of its productions take the form $$x\rightarrow y,$$ where $x,y\in(V\cup T)^+$ and $|x|\leq|y|$
Now people say that a CSG cannot contain lambda (empty) productions (of the form $x\rightarrow \lambda$), since such a production would make it impossible to meet the requirement $|x|\leq|y|$; this can be understood.
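The non-contracting condition itself is mechanical to check. A sketch, with productions represented as pairs of symbol tuples (my own encoding):

```python
def is_context_sensitive(productions):
    """Check the non-contracting condition |x| <= |y| for every
    production x -> y, where x and y are tuples of symbols."""
    return all(1 <= len(x) <= len(y) for x, y in productions)

# S -> aSb | ab satisfies the condition; a lambda production such as
# S -> () does not, since |S| = 1 > 0 = |lambda|.
print(is_context_sensitive([(("S",), ("a", "S", "b")),
                            (("S",), ("a", "b"))]))      # True
print(is_context_sensitive([(("S",), ())]))              # False
```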
However, what I don't understand is how the informal interpretation of the working of the LBA given above explains why the LBA cannot accept the empty string (which is why the CSG does not have lambda productions). Can anyone explain?
Difference between revisions of "Help:Editing"
Revision as of 14:39, 10 February 2008
This Editing Overview has a lot of examples. You may want to keep this page open in a separate browser window for reference while you edit. If you want the super-quickie help, see the Quickstart Guide.
Each of the topics covered here is covered somewhere else in more detail.
Editing basics

Start editing
To start editing a MediaWiki page, click the "Edit this page" (or just "edit") link at one of its edges. This brings you to the edit page: a page with a text box containing the wikitext: the editable source code from which the server produces the webpage. If you just want to experiment, please do so in the sandbox, not here.

Type your changes
You can just type your text. However, also using basic wiki markup (described in the next section) to make links and do simple formatting adds to the value of your contribution.

Summarize your changes
Write a short edit summary in the small field below the edit-box. You may use shorthand to describe your changes, as described in the legend.

Preview before saving
When you have finished, click "Show preview" to see how your changes will look -- before you make them permanent. Repeat the edit/preview process until you are satisfied, then click "Save page" and your changes will be immediately applied to the article.

Basic text formatting
What it looks like What you type
You can italicize text by putting 2 apostrophes on each side. 3 apostrophes will bold the text. 5 apostrophes will bold and italicize the text. (4 apostrophes don't do anything special -- there's just 'one left over'.)

You can ''italicize text'' by putting 2 apostrophes on each side. 3 apostrophes will bold '''the text'''. 5 apostrophes will bold and italicize '''''the text'''''. (4 apostrophes don't do anything special -- there's just ''''one left over''''.)
A single newline has no effect on the layout. But an empty line
starts a new paragraph.
A single newline has no effect on the layout. But an empty line starts a new paragraph.
You can break lines<br /> without a new paragraph.<br /> Please use this sparingly.
You should "sign" your comments on talk pages: <br /> - Three tildes gives your user name: ~~~ <br /> - Four tildes give your user name plus date/time: ~~~~ <br /> - Five tildes gives the date/time alone: ~~~~~ <br />
You can use <b>HTML tags</b>, too, if you want. Some useful ways to use HTML: Put text in a <tt>typewriter font</tt>. The same font is generally used for <code> computer code</code>. <strike>Strike out</strike> or <u>underline</u> text, or write it <span style= "font-variant:small-caps"> in small caps</span>. Superscripts and subscripts: X<sup>2</sup>, H<sub>2</sub>O Invisible comments to editors (<!-- -->) only appear while editing the page. <!-- Note to editors: blah blah blah. --> If you wish to make comments to the public, you should usually go on the [[talk page]], though.
For a list of HTML tags that are allowed, see HTML in wikitext. However, you should avoid HTML in favor of Wiki markup whenever possible.
Organizing your writing
What it looks like What you type
== Section headings == ''Headings'' organize your writing into sections. The Wiki software can automatically generate a table of contents from them. === Subsection === Using more equals signs creates a subsection. ==== A smaller subsection ==== Don't skip levels, like from two to four equals signs. Start with 2 equals signs not 1 because 1 creates H1 tags which should be reserved for page title.
* ''Unordered lists'' are easy to do: ** Start every line with a star. *** More stars indicate a deeper level. *: Previous item continues. ** A new line * in a list marks the end of the list. * Of course you can start again.
# ''Numbered lists'' are: ## Very organized ## Easy to follow A new line marks the end of the list. # New numbering starts with 1.
Another kind of list is a ''definition list'': ; Word : Definition of the word ; Here is a longer phrase that needs a definition : Phrase defined ; A word : Which has a definition : Also a second one : And even a third * You can even do mixed lists *# and nest them *# inside each other *#* or break lines<br />in lists. *#; definition lists *#: can be *#;; nested too
: A colon (:) indents a line or paragraph. A new line after that starts a new paragraph. <br /> This is often used for discussion on talk pages. : We use 1 colon to indent once. :: We use 2 colons to indent twice. ::: We use 3 colons to indent 3 times, and so on.
You can make horizontal dividing lines (----) to separate text. ---- But you should usually use sections instead, so that they go in the table of contents.
You can add footnotes to sentences using the ''ref'' tag -- this is especially good for citing a source. :There are over six billion people in the world.<ref>CIA World Factbook, 2006.</ref> <br /> References: <references/> For details, see [[Wikipedia:Footnotes]] and [[Help:Footnotes]].
You will often want to make clickable links to other pages.
What it looks like What you type Here's a link to a page named [[Arts and Letters Magazine]]. You can even say [[arts and letters magazine]]s and the link will show up correctly.
You can put formatting around a link. Example: ''[[Wikipedia]]''. The ''first letter'' of articles is automatically capitalized, so [[wikipedia]] goes to the same place as [[Wikipedia]]. Capitalization matters after the first letter.
[[The weather in Riga]] is a page that doesn't exist yet. You could create it by clicking on the link.
You can link to a page section by its title: *[[List of cities by country#Morocco]]. If multiple sections have the same title, add a number. [[#Example section 3]] goes to the third section named "Example section".
You can make a link point to a different place with a [[Help:Piped link|piped link]]. Put the link target first, then the pipe character "|", then the link text. *[[Help:Link|About Links]] *[[List of cities by country#Morocco| Cities in Morocco]] Or you can use the "pipe trick" so that text in brackets does not appear. *[[Spinning (textiles)|Spinning]]
You can make an external link just by typing a URL: http://www.nupedia.com You can give it a title: [http://www.nupedia.com Nupedia] Or leave the title blank: [http://www.nupedia.com] Linking to an e-mail address works the same way: mailto:someone@domain.com or [mailto:someone@domain.com someone]
You can redirect the user to another page.
#REDIRECT [[Official position]]
[[Help:Category|Category links]] do not show up in line but instead at page bottom ''and cause the page to be listed in the category.'' [[Category:English documentation]] Add an extra colon to ''link'' to a category in line without causing the page to be listed in the category: [[:Category:English documentation]]
The Wiki reformats linked dates to match the reader's date preferences. These three dates will show up the same if you choose a format in your [[Special:Preferences|]]: * [[July 20]], [[1969]] * [[20 July]] [[1969]] * [[1969]]-[[07-20]]

Just show what I typed
A few different kinds of formatting will tell the Wiki to display things as you typed them.
What it looks like What you type
<nowiki> The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing new lines and multiple spaces. It still interprets special characters: → </nowiki> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → <pre> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → </pre>
Leading spaces are another way to preserve formatting.
Putting a space at the beginning of each line stops the text from being reformatted. It still interprets Wiki Leading spaces are another way to preserve formatting. Putting a space at the beginning of each line stops the text from being reformatted. It still interprets [[Wiki]] ''markup'' and special characters: → Images, tables, video, and sounds
After uploading, just enter the filename, highlight it and press the "embedded image" button of the edit toolbar.
This will produce the syntax for embedding the file:
[[Image:filename.png]]
This is a very quick introduction. For more information, see:
Help:Images and other uploaded files for how to upload files Help:Extended image syntax for how to arrange images on the page Help:Tables for how to create a table
What it looks like What you type
A picture, including alternate text: [[Image:Wiki.png|The logo for this Wiki]] You can put the image in a frame with a caption: [[Image:Wiki.png|frame|The logo for this Wiki]]
A link to Wikipedia's page for the image: [[:Image:Wiki.png]] Or a link directly to the image itself: [[Media:Wiki.png]]
Use '''media:''' links to link directly to sounds or videos: [[media:Sg_mrob.ogg|A sound file]]
{| border="1" cellspacing="0" cellpadding="5" align="center" ! This ! is |- | a | table |- |}

Mathematical formulas
You can format mathematical formulas with TeX markup. See Help:Formula.
What it looks like What you type
<math>\sum_{n=0}^\infty \frac{x^n}{n!}</math>

Templates

Templates are segments of Wiki markup that are meant to be copied automatically ("transcluded") into a page. You add them by putting the template's name in {{double braces}}.
Some templates take parameters, as well, which you separate with the pipe character.
What it looks like What you type {{Transclusion demo}}
This template takes two parameters, and creates underlined text with a hover box for many modern browsers supporting CSS: {{H:title|This is the hover text| Hover your mouse over this text}} Go to this page to see the H:title template itself: {{tl|H:title}}

Minor edits
A logged-in user can mark an edit as "minor". Minor edits are generally spelling corrections, formatting, and minor rearrangement of text. Users may choose to
hide minor edits when viewing Recent Changes.
Marking a significant change as a minor edit is considered bad Wikiquette. If you have accidentally marked an edit as minor, make a dummy edit, verify that the "[ ] This is a minor edit" check-box is unchecked, and explain in the edit summary that the previous edit was not minor.

See also: Help:Editing FAQ, Help:Automatic conversion of wikitext, Help:Calculation, Help:Editing toolbar, Help:HTML in wikitext, Protecting pages, Wikipedia:Cheatsheet (a listing of the basic editing commands), Help:Starting a new page, Help:Variable, HTML elements, Wikipedia:Manual of Style
7.8 Exercises
Consider the pigs series — the number of pigs slaughtered in Victoria each month. Use the ses() function in R to find the optimal values of \(\alpha\) and \(\ell_0\), and generate forecasts for the next four months.
Compute a 95% prediction interval for the first forecast using \(\hat{y} \pm 1.96s\) where \(s\) is the standard deviation of the residuals. Compare your interval with the interval produced by R.
Write your own function to implement simple exponential smoothing. The function should take arguments y (the time series), alpha (the smoothing parameter \(\alpha\)) and level (the initial level \(\ell_0\)). It should return the forecast of the next observation in the series. Does it give the same forecast as ses()?
Modify your function from the previous exercise to return the sum of squared errors rather than the forecast of the next observation. Then use the optim() function to find the optimal values of \(\alpha\) and \(\ell_0\). Do you get the same values as the ses() function?
Combine your previous two functions to produce a function which both finds the optimal values of \(\alpha\) and \(\ell_0\), and produces a forecast of the next observation in the series.
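The exercises above can be sketched outside R as well. The book works with R's ses() and optim(); the Python sketch below (function names are my own) mirrors the same recursion, \(\ell_t = \alpha y_t + (1-\alpha)\ell_{t-1}\), with one-step forecast \(\hat{y}_{t+1} = \ell_t\):

```python
def ses_forecast(y, alpha, level):
    """One-step-ahead SES forecast: l_t = alpha*y_t + (1-alpha)*l_{t-1}."""
    l = level
    for obs in y:
        l = alpha * obs + (1 - alpha) * l
    return l  # forecast of the next observation

def ses_sse(params, y):
    """Sum of squared one-step-ahead errors, suitable for a numerical optimizer."""
    alpha, level = params
    l, sse = level, 0.0
    for obs in y:
        sse += (obs - l) ** 2          # the forecast of obs is the previous level
        l = alpha * obs + (1 - alpha) * l
    return sse
```

Handing ses_sse to a numerical minimizer over \((\alpha, \ell_0)\) is the analogue of the optim() step in the exercise.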
Data set books contains the daily sales of paperback and hardcover books at the same store. The task is to forecast the next four days’ sales for paperback and hardcover books.
Plot the series and discuss the main features of the data. Use the ses() function to forecast each series, and plot the forecasts. Compute the RMSE values for the training data in each case.
Now apply Holt’s linear method to the paperback and hardback series and compute four-day forecasts in each case. Compare the RMSE measures of Holt’s method for the two series to those of simple exponential smoothing in the previous question. (Remember that Holt’s method is using one more parameter than SES.) Discuss the merits of the two forecasting methods for these data sets. Compare the forecasts for the two series using both methods. Which do you think is best? Calculate a 95% prediction interval for the first forecast for each series, using the RMSE values and assuming normal errors. Compare your intervals with those produced using ses and holt.
For this exercise use data set eggs, the price of a dozen eggs in the United States from 1900–1993. Experiment with the various options in the holt() function to see how much the forecasts change with damped trend, or with a Box-Cox transformation. Try to develop an intuition of what each argument is doing to the forecasts.
[Hint: use h=100 when calling holt() so you can clearly see the differences between the various options when plotting the forecasts.]
Which model gives the best RMSE?
Recall your retail time series data (from Exercise 3 in Section 2.10).
Why is multiplicative seasonality necessary for this series? Apply Holt-Winters’ multiplicative method to the data. Experiment with making the trend damped. Compare the RMSE of the one-step forecasts from the two methods. Which do you prefer? Check that the residuals from the best method look like white noise. Now find the test set RMSE, while training the model to the end of 2010. Can you beat the seasonal naïve approach from Exercise 8 in Section 3.7?
For the same retail data, try an STL decomposition applied to the Box-Cox transformed series, followed by ETS on the seasonally adjusted data. How does that compare with your best previous forecasts on the test set?
For this exercise use data set ukcars, the quarterly UK passenger vehicle production data from 1977Q1–2005Q1.
Plot the data and describe the main features of the series. Decompose the series using STL and obtain the seasonally adjusted data. Forecast the next two years of the series using an additive damped trend method applied to the seasonally adjusted data. (This can be done in one step using stlf() with arguments etsmodel="AAN", damped=TRUE.) Forecast the next two years of the series using Holt’s linear method applied to the seasonally adjusted data (as before but with damped=FALSE). Now use ets() to choose a seasonal model for the data.
Compare the RMSE of the ETS model with the RMSE of the models you obtained using STL decompositions. Which gives the better in-sample fits? Compare the forecasts from the three approaches. Which seems most reasonable? Check the residuals of your preferred model.
For this exercise use data set visitors, the monthly Australian short-term overseas visitors data, May 1985–April 2005.
Make a time plot of your data and describe the main features of the series. Split your data into a training set and a test set comprising the last two years of available data. Forecast the test set using Holt-Winters’ multiplicative method. Why is multiplicative seasonality necessary here? Forecast the two-year test set using each of the following methods: an ETS model; an additive ETS model applied to a Box-Cox transformed series; a seasonal naïve method; an STL decomposition applied to the Box-Cox transformed data followed by an ETS model applied to the seasonally adjusted (transformed) data. Which method gives the best forecasts? Does it pass the residual tests? Compare the same four methods using time series cross-validation with the tsCV() function instead of using a training and test set. Do you come to the same conclusions?
The fets() function below returns ETS forecasts.
Apply tsCV() for a forecast horizon of \(h=4\), for both ETS and seasonal naïve methods to the qcement data. (Hint: use the newly created fets() and the existing snaive() functions as your forecast function arguments.) Compute the MSE of the resulting \(4\)-step-ahead errors. (Hint: make sure you remove missing values.) Why are there missing values? Comment on which forecasts are more accurate. Is this what you expected?
Compare ets(), snaive() and stlf() on the following six time series. For stlf(), you might need to use a Box-Cox transformation. Use a test set of three years to decide what gives the best forecasts: ausbeer, bricksq, dole, a10, h02, usmelec.
Use ets() on the following series: bicoal, chicken, dole, usdeaths, lynx, ibmclose, eggs. Does it always give good forecasts? Find an example where it does not work well. Can you figure out why?
Show that the point forecasts from an ETS(M,A,M) model are the same as those obtained using Holt-Winters’ multiplicative method.
Show that the forecast variance for an ETS(A,N,N) model is given by \[ \sigma^2\left[1+\alpha^2(h-1)\right]. \]
Write down 95% prediction intervals for an ETS(A,N,N) model as a function of \(\ell_T\), \(\alpha\), \(h\) and \(\sigma\), assuming normally distributed errors.
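Combining the last two exercises, the stated forecast variance \(\sigma^2[1+\alpha^2(h-1)]\) gives 95% intervals of the form \(\ell_T \pm 1.96\,\sigma\sqrt{1+\alpha^2(h-1)}\). A minimal Python sketch (the function name is my own):

```python
import math

def ets_ann_interval(level, alpha, h, sigma, z=1.96):
    """95% prediction interval for ETS(A,N,N):
    variance at horizon h is sigma^2 * (1 + alpha^2 * (h - 1))."""
    half_width = z * sigma * math.sqrt(1 + alpha ** 2 * (h - 1))
    return (level - half_width, level + half_width)
```

Note that at \(h=1\) the interval reduces to \(\ell_T \pm 1.96\sigma\), as it should.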
Three-dimensional Geometry: Angle between Two Planes

If $\theta$ is the angle between the two planes $\vec{r}\cdot \vec{n}_{1}=d_{1}$ and $\vec{r}\cdot \vec{n}_{2}=d_{2}$, then $$\cos \theta=\frac{\vec{n}_{1}\cdot\vec{n}_{2}}{|\vec{n}_{1}| \ |\vec{n}_{2}|}$$

If $\theta$ is the angle between the two planes $a_1x + b_1y + c_1z + d_1 = 0$ and $a_2x + b_2y + c_2z + d_2 = 0$, then $$\cos\theta=\frac{a_{1}a_{2}+b_{1}b_{2}+c_{1}c_{2}}{\sqrt{a_1^2+b_1^2+c_1^2}\ \sqrt{a_2^2+b_2^2+c_2^2}}$$

Plane $\parallel$ Plane $\Rightarrow$ Normal $\parallel$ Normal; Plane $\perp$ Plane $\Rightarrow$ Normal $\perp$ Normal.

View the Topic in this video from 06:50 to 16:00.
1. In the vector form, if $\theta$ is the angle between the two planes $\vec{r}\cdot \vec{n}_{1}=d_{1}$ and $\vec{r}\cdot \vec{n}_{2}=d_{2}$, then $\theta=\cos^{-1}\frac{|\vec{n}_{1} \cdot \vec{n}_{2}|}{|\vec{n}_{1}| \ |\vec{n}_{2}|}$.

2. The angle $\theta$ between the planes $A_1x + B_1y + C_1z + D_1 = 0$ and $A_2x + B_2y + C_2z + D_2 = 0$ is given by $\cos \theta=\left|\frac{A_{1}A_{2}+B_{1}B_{2}+C_{1}C_{2}}{\sqrt{A_1^2+B_1^2+C_1^2}\sqrt{A_2^2+B_2^2+C_2^2}}\right|$.
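As a quick sanity check of the coordinate formula, a short Python sketch (the function name is my own) computes the angle from the two normal vectors, taking the absolute value of the dot product as in formula 2:

```python
import math

def angle_between_planes(n1, n2):
    """Angle (radians) between two planes with normal vectors n1, n2,
    using cos(theta) = |n1 . n2| / (|n1| |n2|)."""
    dot = sum(a * b for a, b in zip(n1, n2))
    norm1 = math.sqrt(sum(a * a for a in n1))
    norm2 = math.sqrt(sum(b * b for b in n2))
    return math.acos(abs(dot) / (norm1 * norm2))
```

For example, the planes $x + y = 0$ and $x = 0$ have normals $(1,1,0)$ and $(1,0,0)$, giving $\cos\theta = 1/\sqrt{2}$, i.e. an angle of $45^\circ$.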
Grammars are inherently recursive objects, so the answer seems obvious: by induction. That said, the specifics are often tricky to get right. In the sequel I will describe a technique that allows to reduce many a grammar-correctness proof to mechanical steps, provided some creative preprocessing is done.$\newcommand{\lang}[1]{\mathcal{L}(#1)} \newcommand{\sent}[1]{\vartheta(#1)} \newcommand{\derive}{\mathbin{\Rightarrow}} \newcommand{\derivestar}{\mathbin{\Rightarrow^*}} \newcommand{\nats}{\mathbb{N}}$
The basic idea is to not restrict oneself to
words of grammar and language; it is hard to grasp the structure of the grammar in this way. Instead, we will argue about the set of sentences the grammar can create. Furthermore, we will split one daunting proof goal into many small goals that are more tractable.
Let $G=(N,T,\delta,S)$ be a formal grammar with non-terminals $N$, terminals $T$, rules $\delta$ and starting symbol $S \in N$. We denote by $\sent{G}$ the set of sentences that can be derived from $S$ given $\delta$, that is $\alpha \in \sent{G} \iff S \derivestar \alpha$. The language generated by $G$ is $\lang{G} = \sent{G} \cap T^*$. Suppose we want to show that $L = \lang{G}$ for some $L \subseteq T^*$.
The ansatz
Here is how we go about that. We define $M_1, \dots, M_k \subseteq (N \cup T)^*$ so that
$\displaystyle \sent{G} = \bigcup_{i=1}^k M_i$ and $\displaystyle T^* \cap \bigcup_{i=1}^k M_i = L$.
While 2. is usually clear by definition of the $M_i$, 1. requires some serious work. The two items together clearly imply $\lang{G} = L$ as desired.
For ease of notation, let's denote $M = \bigcup_{i=1}^k M_i$.
The rocky road
There are two major steps to performing such a proof.
How to find (good) $M_i$? One strategy is to investigate the phases the grammar works through. Not every grammar is amenable to this idea; in general, this is a creative step. It helps if we can define the grammar ourselves; with some experience, we will be able to define grammars that are more tractable with this approach.
How to prove 1.? As with any set equality, there are two directions. $\sent{G} \subseteq M$: (structural) induction over the productions of $G$. $M \subseteq \sent{G}$: Usually one induction by $M_i$, starting from the one that contains $S$.
This is as specific as it gets; the details depend on the grammar and language at hand.
Example
Consider the language
$\qquad \displaystyle L = \{ a^n b^n c^m \mid n,m \in \nats \}$
and the grammar $G = (\{S,A\}, \{a,b,c\}, \delta, S)$ with $\delta$ given by
$\qquad \begin{align} S &\to Sc \mid A \\ A &\to aAb \mid \varepsilon\end{align}$
for which we want to show that $L = \lang{G}$. What are the phases this grammar works through? Well, first it generates $c^m$ and then $a^n b^n$. This immediately informs our choice of $M_i$, namely
$\qquad \begin{align} M_0 &= \{Sc^m \mid m \in \nats \} \;, \\ M_1 &= \{ a^n A b^n c^m \mid m,n \in \nats \} \;, \\ M_2 &= \{ a^n b^n c^m \mid m,n \in \nats \} \;. \\\end{align}$
As $M_2 = L$ and $M_0 \cap T^* = M_1 \cap T^* = \emptyset$, item 2. is already taken care of. Towards 1., we split the proof in two parts as announced.
$\mathbf{\sent{G} \subseteq M}$
We perform structural induction along the rules of $G$.
I.A.: Since $S = Sc^0 \in M_0$ we anchor successfully.
I.H.: Assume for some set of sentences $X \subseteq \sent{G}$ that we also know $X \subseteq M$.
I.S.: Let $w \in X \subseteq \sent{G} \cap M$ be arbitrary. We have to show that whatever form $w$ has and whatever rule is applied next, we don't leave $M$. We do this by complete case distinction. By induction hypothesis, we know that (exactly) one of the following cases applies: $w \in M_0$, that is $w = Sc^m$ for some $m \in \nats$. Two rules can be applied, both of which derive a sentence in $M$: $Sc^m \derive Sc^{m+1} \in M_0$ and $Sc^m \derive Ac^m = a^0Ab^0c^m \in M_1$. $w \in M_1$, i.e. $w = a^nAb^nc^m$ for some $m,n \in \nats$: $w \derive a^{n+1}Ab^{n+1}c^m \in M_1$ and $w \derive a^nb^nc^m \in M_2$. $w \in M_2$: since $w \in T^*$, no further derivations are possible.
Since we have successfully covered all cases, the induction is complete.
$\mathbf{\sent{G} \supseteq M}$
We perform one (simple) proof per $M_i$. Note how we chain the proofs so "later" $M_i$ can anchor using the "earlier" $M_i$.
$M_0$: We perform an induction over $m$, anchoring in $Sc^0 = S$ and using $S \to Sc$ in the step. $M_1$: We fix $m$ to an arbitrary value and induce over $n$. We anchor in $Ac^m$, using that $S \derivestar Sc^m \derive Ac^m$ by the former proof. The step progresses via $A \to aAb$. $M_2$: For arbitrary $m,n \in \nats$ we use the former proof for $S \derivestar a^nAb^nc^m \derive a^nb^nc^m$.
This concludes the second direction of the proof of 1., and we are done.
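Such a proof can also be sanity-checked mechanically before it is attempted. The Python sketch below (helper names are my own, and the search is bounded) enumerates all sentential forms of the example grammar up to a fixed length and confirms that each lies in $M_0 \cup M_1 \cup M_2$:

```python
import re

# Rules of the example grammar: S -> Sc | A, A -> aAb | epsilon
RULES = {"S": ["Sc", "A"], "A": ["aAb", ""]}
MAXLEN = 8

def sentences(maxlen):
    """All sentential forms of length <= maxlen derivable from S."""
    seen, frontier = {"S"}, ["S"]
    while frontier:
        form = frontier.pop()
        for i, sym in enumerate(form):
            for rhs in RULES.get(sym, []):
                new = form[:i] + rhs + form[i + 1:]
                if len(new) <= maxlen and new not in seen:
                    seen.add(new)
                    frontier.append(new)
    return seen

def in_M(form):
    """Membership in M0 (Sc^m), M1 (a^n A b^n c^m) or M2 (a^n b^n c^m)."""
    return (re.fullmatch(r"Sc*", form) is not None
            or any(re.fullmatch(rf"a{{{n}}}Ab{{{n}}}c*", form) for n in range(MAXLEN))
            or any(re.fullmatch(rf"a{{{n}}}b{{{n}}}c*", form) for n in range(MAXLEN)))

# Every reachable sentential form lies in M, as claimed by item 1.
assert all(in_M(f) for f in sentences(MAXLEN))
```

Of course a bounded check is no proof, but it catches wrongly chosen $M_i$ quickly before one invests in the induction.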
We can see that we heavily exploit that the grammar is
linear. For non-linear grammars, we need $M_i$ with more than one variable parameter (in the proof(s)), which can become ugly. If we have control over the grammar, this teaches us to keep it simple. Consider as a deterring example this grammar, which is equivalent to $G$:
$\qquad \begin{align} S &\to aAbC \mid \varepsilon \\ A &\to aAb \mid \varepsilon \\ C &\to cC \mid \varepsilon\end{align}$
Exercise
Give a grammar for
$\qquad L = \{ b^k a^l (bc)^m a^n b^o \mid k,l,m,n,o \in \nats, k \neq o, 2l = n, m \geq 2 \}$
and prove its correctness.
If you have trouble, a grammar:
Consider $G = (\{S,B_r,B_l,A,C\}, \{a,b,c\}, \delta, S)$ with productions
$\quad \begin{align} S &\to bSb \mid B_l \mid B_r \\ B_l &\to bB_l \mid bA \\ B_r &\to B_r b \mid Ab \\ A &\to aAaa \mid C \\ C &\to bcC \mid bcbc \end{align}$
and $M_i$:
$\quad\begin{align} M_0 &= \{ b^i S b^i \mid i \in \nats \} \\ M_1 &= \{ b^i B_l b^o \mid o \in \nats, i \geq o \} \\ M_2 &= \{ b^k B_r b^i \mid k \in \nats, i \geq k \} \\ M_3 &= \{ b^k a^i A a^{2i} b^o \mid k,o,i \in \nats, k \neq o \} \\ M_4 &= \{ b^k a^l (bc)^i C a^{2l} b^o \mid k,o,l,i \in \nats, k \neq o \} \\ M_5 &= L \end{align}$
What about non-linear grammars?
The characterising feature of the class of context-free languages is the Dyck language: essentially, every context-free language can be expressed as the intersection of a Dyck language and a regular language. Unfortunately, the Dyck language is not linear, that is we can give no grammar that is inherently suited to this approach.
We can, of course, still define $M_i$ and do the proof, but it's bound to be more arduous with nested inductions and what not. There is one general way I know of that can help to some extent. We change the ansatz to showing that we generate at least all required words, and that we generate the right amount of words (per length). Formally, we show that $\displaystyle \sent{G} \supseteq L$ and $\displaystyle |\lang{G} \cap T^n| = |L \cap T^n|$ for all $n \in \nats$.
This way, we can restrict ourselves to the "easy" direction from the original ansatz and exploit structure in the language, ignoring overcomplicated features the grammar may have. Of course, there is no free lunch: we get the all new task of counting the words $G$ generates for each $n \in \nats$. Lucky for us, this is often tractable; see here and here for details¹. You can find some examples in my bachelor thesis.
For ambiguous and non-context-free grammars, I'm afraid we are back to ansatz one and thinking caps.
¹ When using that particular method for counting, we get as a bonus that the grammar is unambiguous. In turn, this also means that the technique has to fail for ambiguous grammars as we can never prove 2.
OK, I've thought about it a bit more and I'll give an alternate proof below the first horizontal line. However, I am not convinced that this proof is better, and I am not sure it is actually shorter. To my mind, the morally correct proof is to show that, if $K \subseteq L$ with $L$ algebraically closed, and a system of polynomial equations has a root in $L$, then it has a root in a finite extension of $K$.
The point, which I am sure Qiaochu understands, is that he only knows a priori that the representation is defined over $\mathbb{C}$. Once he knows that the representation is definable over an algebraic extension $K'$ of $K$, he can replace $K'$ by its normal closure, note that $\mathrm{Gal}(K', \mathbb{Q}) \to \mathrm{Gal}(K, \mathbb{Q})$ is surjective, lift any element $\sigma$ of $\mathrm{Gal}(K, \mathbb{Q})$ to some $\tilde{\sigma}$ in $\mathrm{Gal}(K', \mathbb{Q})$, and apply $\tilde{\sigma}$ to the entries of his matrices.
Part of the problem is that the representation may honestly not be defined over $K$. For example, the two-dimensional representation of the quaternion group of order eight has character with values in $\mathbb{Q}$, but can't be defined over $\mathbb{Q}$.
Fix $G$ and a representation $V$ of $G$. For $g \in G$, let $\lambda_1(g)$, $\lambda_2(g)$, ..., $\lambda_n(g)$ be the multiset of eigenvalues of $g$ acting on $V$. These are necessarily roots of unity, since $g^N=1$ for some $N$. For any symmetric polynomial $f$, with integer coefficients, define $\chi(f,g) = f(\lambda_1(g), \ldots, \lambda_n(g))$.
Lemma: With notation as above, $g \mapsto \chi(f,g)$ is a virtual character.
Proof: If $f$ is the elementary symmetric function $e_k$, then this is the character of $\bigwedge^k V$. Any symmetric function is a polynomial (with integer coefficients) in the $e_k$'s; take the corresponding tensor product and formal difference of virtual characters.
Any Galois symmetry $\sigma$ of $\mathbb{Q}(\zeta_N)$ is of the form $\zeta_N \mapsto \zeta_N^s$, for $s$ relatively prime to $N$. Consider the power sum symmetric function $p_s := \sum x_i^s$. So $\chi(p_s, \ )$ is the Galois conjugate $\chi^{\sigma}$, and we now know that it is a virtual character.
But $\langle \chi^{\sigma}, \chi^{\sigma} \rangle = \langle \chi, \chi \rangle =1$, because the inner product is built out of polynomial operations and complex conjugation, and complex conjugation is central in the Galois group. So this virtual character must correspond to $\pm W$, for some representation $W$. Since $\chi^{\sigma}(e) = \chi(e) = \dim V$, we conclude that the positive sign is correct.
It just occurred to me that actually writing this out for some specific small values of $s$ makes some nonobvious statements about representation theory. For example, if $G$ has odd order and $V$ is a $G$-irrep, then $\bigwedge^2 V$ has a $G$-equivariant injection into $\mathrm{Sym}^2 V$. Proof: The difference of their characters is the character of $V^{\sigma}$, where $\sigma: \zeta \mapsto \zeta^2$.
In the book of Hesthaven and Warburton on discontinuous Galerkin methods the authors give motivation for the differentiation matrix (page 52), referred to as $D_r(i,j)=\frac{dl_j}{dr}\big|_{r_i}$, where $l_i(r) = \prod_{j\neq i} \frac{r - \xi_j}{\xi_i - \xi_j}$ is a basis vector of the Lagrange polynomial basis.
They say that the following equation motivates this definition:
$$u_h(r)=\sum_{n=1}^{N_p}\hat{u}_n\tilde{P}_{n-1}(r)=\sum_{i=1}^{N_p}u(r_i)l_i(r)$$
where the first sum is a linear combination of the orthonormal Legendre-polynomial basis and (I suspect) the second sum gives the same polynomial with respect to the Lagrange basis. They say that in the equation above $D_r$ is the operator that transforms point values, $u(r_i)$, to derivatives at these same points (e.g. $u_h'=D_ru_h$).
Sadly, I do not see the connection there.
What follows are some manipulations in which I do not grasp the disappearance of the sum in the integral: $$(MD_r)_{ij}= \sum_{n=1}^{N_p} M_{in}D_r(n,j)= \sum_{n=1}^{N_p}\int_{-1}^1 l_i(r)\,l_n(r)\,\frac{dl_j}{dr}\Big|_{r_n}dr \\= \int_{-1}^1 l_i(r)\sum_{n=1}^{N_p}\frac{dl_j}{dr}\Big|_{r_n}l_n(r)\,dr=\int_{-1}^1 l_i(r)\,\frac{dl_j(r)}{dr}\,dr=S_{ij}$$
So the use of $\sum_{n=1}^{N_p}\frac{dl_j}{dr}\big|_{r_n}\, l_n(r) = \frac{dl_j(r)}{dr}$
is not clear to me.
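For what it's worth, the identity in question is just exactness of Lagrange interpolation: $\frac{dl_j}{dr}$ is a polynomial of degree $N_p-2$, so interpolating its nodal values with the basis $\{l_n\}$ reproduces it exactly. A quick numerical check in Python (nodes and helper names are my own choices, not from the book):

```python
# Arbitrary distinct interpolation nodes (N_p = 4 here)
nodes = [-1.0, -0.5, 0.5, 1.0]

def lagrange(i, r):
    """i-th Lagrange basis polynomial on `nodes`, evaluated at r."""
    out = 1.0
    for j, xj in enumerate(nodes):
        if j != i:
            out *= (r - xj) / (nodes[i] - xj)
    return out

def dlagrange(i, r):
    """Derivative of the i-th basis polynomial at r (product-rule sum)."""
    total = 0.0
    for k, xk in enumerate(nodes):
        if k == i:
            continue
        term = 1.0 / (nodes[i] - xk)
        for j, xj in enumerate(nodes):
            if j not in (i, k):
                term *= (r - xj) / (nodes[i] - xj)
        total += term
    return total

# Check: sum_n l_j'(r_n) l_n(r) == l_j'(r) at an arbitrary point r
j, r = 2, 0.123
interp = sum(dlagrange(j, rn) * lagrange(n, r) for n, rn in enumerate(nodes))
assert abs(interp - dlagrange(j, r)) < 1e-12
```

The assertion holds at any $r$, because a degree-$(N_p-2)$ polynomial is interpolated exactly by $N_p$ nodes.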
Let $G$ be a simply connected simple algebraic group over $\mathbb C$, $B\subset G$ a Borel subgroup, and $T\subset B$ a maximal torus. Let $\mathcal{S}=\mathcal{S}(G,T,B)$ denote the set of simple roots. For $\alpha\in \mathcal{S}$, let $P_\alpha\supset B$ denote the corresponding minimal parabolic subgroup.
Let $Y=G/H$ be a spherical homogeneous space of $G$. The word "spherical" means that the Borel subgroup $B$ has an open orbit in $Y$.
Let ${\mathcal{D}}$ denote the (finite) set of colors of $Y$, that is, of $B$-orbits of codimension one in $Y$. Let $\mathcal{X}\subset X^*(B)$ denote the weight lattice of $Y$. There is a canonical map $$\rho\colon {\mathcal{D}}\to V:={\rm Hom}_{\mathbb Z} (\mathcal{X}, \mathbb{Q}).$$
Let $\mathcal{P}(\mathcal{S})$ denote the set of subsets of $\mathcal{S}$. For a color $D\in{\mathcal{D}}$, let $\varsigma(D)$ denote the set of $\alpha\in\mathcal{S}$ such that $P_\alpha\cdot D\neq D$. We obtain a canonical map $$\varsigma\colon {\mathcal{D}}\to\mathcal{P}(\mathcal{S}).$$
We say that two colors $D,D'\in{\mathcal{D}}$ are a
pair of colors if $$\rho(D)=\rho(D')\in V\quad \text{and}\quad\varsigma(D)=\varsigma(D')\in\mathcal{P}(\mathcal{S}).$$ There cannot be three different colors $D,D',D''$ with the same images in $V$ and $\mathcal{P}(\mathcal{S})$.
Question. What is an example of a spherical homogeneous space $Y=G/H$ having a pair of colors and such that the center $Z(G)$ is not contained in $H$? Where can I find a number of such examples?
Any comments or references are welcome! |
The groups whose subgroups are totally ordered by inclusion are easy to classify; they are subgroups of $\mathbb{Z}/p^{\infty} = \text{colim } \mathbb{Z}/p^k$ for some prime $p$, and thus $\mathbb{Z}/p^{\infty}$ or finite cyclic of prime power order. What about fields?
Is it possible to classify (or at least, give some properties and examples) the field extensions $E/F$, whose intermediate fields are totally ordered by inclusion?
If $E/F$ is a Galois extension, then we may rephrase the condition: The
closed subgroups of $Gal(E/F)$ are totally ordered. It can be shown that then there is a prime $p$ such that $Gal(E/F)$ is pro-$p$-cyclic and thus isomorphic to $\mathbb{Z}/p^k$ for some $k \geq 0$ or to $\mathbb{Z}_p = \lim_k \mathbb{Z}/p^k$. Thus $E/F$ is built up out of cyclic Galois extensions $F_{i+1} / F_i$ of degree $p$ (for example $E = \mathbb{F}_{q^{p^\infty}}$, $F = \mathbb{F}_q$). In characteristic $p$, cyclic Galois extensions of degree $p$ are characterized by a theorem of Artin-Schreier. In characteristic $q \neq p$ ($q=0$ allowed), there is a characterization if $F_i$ contains a primitive $p$th root of unity. What can be said if this is not the case?
Now do not assume that $E/F$ is Galois. Here is a simple observation:
The intermediate fields of $E/F$ are totally ordered iff $E/F$ is algebraic and the finite intermediate fields of $E/F$ are totally ordered.
Proof: $\Rightarrow:$ If $t$ is a variable, then the intermediate fields of $F(t)/F$ are not totally ordered; consider $F(t^2)$ and $F(t^3)$. $\Leftarrow:$ Let $K,L$ be intermediate fields which are not comparable. Choose $a \in K - L$, $b \in L - K$. Then $F(a), F(b)$ are finite extensions which are not comparable, contradiction.
An immediate consequence is that we may first restrict to the finite case: Namely every $E/F$ as above is a directed union of finite subextensions, whose intermediate fields are totally ordered by inclusion; and vice versa.
Note that we cannot restrict the degree of $E/F$. Namely, $S_{n-1}$ is a maximal subgroup of $S_n$. Since $S_n=Gal(E/F)$ for some Galois extension $E/F$, if $K$ is the fixed field of $S_{n-1}$, the extension $K/F$ has degree $n$ and no nontrivial intermediate fields at all.
What about inseparable extensions? And what happens if we take the normal closure? Perhaps we can reduce everything to the Galois case? |
Problem 624
Let $R$ and $R'$ be commutative rings and let $f:R\to R'$ be a ring homomorphism.
Let $I$ and $I'$ be ideals of $R$ and $R'$, respectively. (a) Prove that $f(\sqrt{I}\,) \subset \sqrt{f(I)}$. (b) Prove that $\sqrt{f^{-1}(I')}=f^{-1}(\sqrt{I'})$.
(c) Suppose that $f$ is surjective and $\ker(f)\subset I$. Then prove that $f(\sqrt{I}\,) =\sqrt{f(I)}$.

Problem 526
A ring is called
local if it has a unique maximal ideal. (a) Prove that a ring $R$ with $1$ is local if and only if the set of non-unit elements of $R$ is an ideal of $R$.
(b) Let $R$ be a ring with $1$ and suppose that $M$ is a maximal ideal of $R$. Prove that if every element of $1+M$ is a unit, then $R$ is a local ring.

Problem 525
Let
\[R=\left\{\, \begin{bmatrix} a & b\\ 0& a \end{bmatrix} \quad \middle | \quad a, b\in \Q \,\right\}.\] Then the usual matrix addition and multiplication make $R$ a ring.
Let
\[J=\left\{\, \begin{bmatrix} 0 & b\\ 0& 0 \end{bmatrix} \quad \middle | \quad b \in \Q \,\right\}\] be a subset of the ring $R$. (a) Prove that the subset $J$ is an ideal of the ring $R$.
(b) Prove that the quotient ring $R/J$ is isomorphic to $\Q$.

Problem 524
Let $R$ be the ring of all $2\times 2$ matrices with integer coefficients:
\[R=\left\{\, \begin{bmatrix} a & b\\ c& d \end{bmatrix} \quad \middle| \quad a, b, c, d\in \Z \,\right\}.\]
Let $S$ be the subset of $R$ given by
\[S=\left\{\, \begin{bmatrix} s & 0\\ 0& s \end{bmatrix} \quad \middle | \quad s\in \Z \,\right\}.\] (a) True or False: $S$ is a subring of $R$.
(b) True or False: $S$ is an ideal of $R$.

Problem 432

(a) Let $R$ be an integral domain and let $M$ be a finitely generated torsion $R$-module. Prove that the module $M$ has a nonzero annihilator. In other words, show that there is a nonzero element $r\in R$ such that $rm=0$ for all $m\in M$. Here $r$ does not depend on $m$.
(b) Find an example of an integral domain $R$ and a torsion $R$-module $M$ whose annihilator is the zero ideal.

Problem 431
Let $R$ be a commutative ring and let $I$ be a nilpotent ideal of $R$.
Let $M$ and $N$ be $R$-modules and let $\phi:M\to N$ be an $R$-module homomorphism.
Prove that if the induced homomorphism $\bar{\phi}: M/IM \to N/IN$ is surjective, then $\phi$ is surjective.
Problem 417
Let $R$ be a ring with $1$ and let $M$ be an $R$-module. Let $I$ be an ideal of $R$.
Let $M'$ be the subset of elements $a$ of $M$ that are annihilated by some power $I^k$ of the ideal $I$, where the power $k$ may depend on $a$. Prove that $M'$ is a submodule of $M$.
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with the ALICE detector at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T}\rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This makes it possible to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at √sNN=5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
Suppose we have $n$ balls which are identical except for their colors, and let $S$ denote the set of all distinct permutations of the balls (i.e. swapping two balls of the same color gives the same permutation).
We now define a function from $S$ to $\mathbb{N}$ as follows: $$ f(\sigma)=\prod_{i=1}^n m_i $$ where $m_i=i$ if the $i$th ball in $\sigma$ has the same color as the ball preceding it, and $m_i=1$ otherwise.
My question is:
Does the identity $$\sum_{\sigma\in S}f(\sigma)=n!$$ hold for all coloring of the balls?
Example: We can verify that if all the balls have pairwise distinct colors, then the identity trivially holds. If $n-1$ balls have the same color and one has a different color, we will have $$\sum_{\sigma\in S}f(\sigma)=n!\left(\frac{1}{1\cdot 2}+\cdots+\frac{1}{i(i+1)}+\cdots+\frac{1}{n}\right)=n!$$ If two balls have the same color and the rest all have different colors, we will get $$\sum_{\sigma\in S}f(\sigma)=(n-2)!\left(2+3+\cdots+n+\frac{n(n-1)}{2}-n+1\right)=n!$$
I think the above evidence should not just be some kind of coincidence, but I cannot find the combinatorial intuition behind it.

So, can anyone prove the above identity, or find a counterexample to disprove it?
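In the meantime, the identity can at least be checked by brute force for small colorings; a quick Python sketch:

```python
from itertools import permutations
from math import factorial

def f(sigma):
    # m_i = i (1-based) when ball i shares a color with ball i-1, else 1
    prod = 1
    for i in range(1, len(sigma)):
        if sigma[i] == sigma[i - 1]:
            prod *= i + 1
    return prod

def total(balls):
    # sum f over the distinct permutations of the multiset of colors
    return sum(f(s) for s in set(permutations(balls)))

for balls in ["abc", "aab", "aaab", "aabb", "aabc"]:
    print(balls, total(balls), factorial(len(balls)))
```

This makes it easy to test further colorings against $n!$ before hunting for a bijective proof.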
I am just starting a course on Lie groups and I'm having some difficulty understanding some of the ideas to do with vector fields on Lie groups. Here is something that I have written out, which I know is wrong, but can't understand why:
Let $X$ be any vector field on a Lie group $G$, so that $X\colon C^\infty(G)\to C^\infty(G)$. Write $X_x$ to mean the tangent vector $X_x\in T_x G$ coming from evaluation at $x$, that is, define $X_x(-)=(X(-))(x)$ for each $-\in C^\infty(G)$. We also write $L_g$ to mean the left-translation diffeomorphism $x\mapsto gx$.
Now \begin{align} X_g(-) = (X(-))(g) &= (X(-))(L_g(e))\\ &= X(-\circ L_g)(e)\\ &= X_e(-\circ L_g) \\ &= ((DL_g)_eX_e)(-). \end{align} Using this we can show that $((L_g)_*X)_{L_g(h)}=X_{L_g(h)}$ for all $h\in G$, and thus $(L_g)_*X=X$, i.e. $X$ is left-invariant.
I'm sure that the mistake must be very obvious, but I'm really not very good at this sort of maths, so a gentle nudge to help improve my understanding would be very much appreciated! |
I was trying to clarify some questions I had about elliptic integrals using
There they define the map $$\phi\colon w\mapsto \int_0^w\frac{\mathrm{d}z}{\sqrt{1-z^2}}$$ on $\mathbb{C}\setminus[-1,1]$ to get $\phi$ well-defined up to periods of the integral. The choice of the interval $[-1,1]$ is made so that $\sqrt{1-z^2}$ admits a single-valued branch.
Now, I know that the principal branch of the square root $\sqrt{z}$ is discontinuous on the half-line $(-\infty,0)$, so to get a holomorphic map we restrict to $\mathbb{C}\setminus (-\infty,0]$. Substituting $1-z^2$ for $z$ we get that the appropriate branch cuts for the above mapping $\sqrt{1-z^2}$ would be $(-\infty,-1]$ and $[1,\infty)$, which is somewhat the opposite of the suggested interval $[-1,1]$.
From that I conclude that they didn't choose the principal branch, otherwise for e.g. $z=2$ the map would be discontinuous.
My question is: Are both choices possible? Then there must be some way to choose another branch of $\sqrt{1-z^2}$. Is there a good way to see how to choose "elegant" branch cuts and the corresponding holomorphic branches?
A thought of my own: It should be possible to instead integrate on the Riemann sphere, using $\infty$ and not $0$ as a starting point. Then the two intervals would "swap roles". But I don't see how to formalize this. |
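To make the discontinuity of the principal branch concrete, one can evaluate $\sqrt{1-z^2}$ just above and below the real axis at $z=2$; a small Python check (using the principal branch via `cmath.sqrt`):

```python
import cmath

# principal branch of sqrt(1 - z^2) just above/below the real axis at z = 2
eps = 1e-9
above = cmath.sqrt(1 - (2 + 1j * eps)**2)
below = cmath.sqrt(1 - (2 - 1j * eps)**2)

# 1 - z^2 sits near -3, on opposite sides of the negative real axis,
# so the two values come out near -i*sqrt(3) and +i*sqrt(3): a jump
print(above, below)
```

This confirms that with the principal branch the cut for $\sqrt{1-z^2}$ runs along $(-\infty,-1]\cup[1,\infty)$, so the interval $[-1,1]$ in the text must correspond to a different choice of branch.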
Help please!
Prove $\sum \sqrt{a_n b_n}$ converges if $\sum a_n$ and $\sum b_n$ converge.
I can prove that $\sum a_n b_n$ converges but couldn't for $\sum \sqrt{a_n b_n}$.
Thank you.
EDIT: $a_n, b_n \ge 0 \;\forall n$
Hint: $$\sqrt{AB}\le A+B,$$ which follows from $(\sqrt A - \sqrt B)^2 \ge 0$ (indeed, that gives the stronger bound $2\sqrt{AB}\le A+B$).
Note: A similar bound can be shown to hold for finitely many terms $A_1, \ldots, A_n$ and for sequences $A_1, A_2, \ldots$
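The hint can also be sanity-checked numerically: the termwise bound keeps the (increasing) partial sums of $\sum\sqrt{a_nb_n}$ below $\sum a_n + \sum b_n$. A throwaway Python check with two sample convergent series:

```python
import math

# sample nonnegative convergent series (chosen only for illustration)
a = [1 / n**2 for n in range(1, 10001)]
b = [1 / n**3 for n in range(1, 10001)]

# termwise: sqrt(a_n * b_n) <= a_n + b_n, so the partial sums are bounded
s = sum(math.sqrt(x * y) for x, y in zip(a, b))
print(s, sum(a) + sum(b))
```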
Profit-sharing is not the solution to inequality
Matthew Martin, 8/22/2014 04:43 PM
Writing in Fortune, sociologist Joseph Blasi thinks he has a better solution to inequality than Piketty's proposed global wealth tax. His solution is this: instead of taxing wealth as a way to give the non-rich an advantage in capital markets, we should simply promote "profit-sharing" between firms and their employees, by having firms compensate employees in stocks and dividends. But here's the problem:
You don't need a mathematical model to see why. Individual corporations are extremely risky. They may make a mistake--a flaw in the computer chip design, or a contaminant in their food products--that will put them out of business tomorrow, or at least cause their stocks to plunge shortly before it's time for you to retire. And even when they do everything right, corporations are still extremely risky--a mad cow scare in Virginia can easily put your beef packing plant in California out of business, or a new technology can come along and make your paper company obsolete. Merely by working at a company, workers are subjected to huge amounts of risk because they could lose their future wages at any moment if they are laid off. But at least in that eventuality their savings will not also be lost when they lose their future wages.
Except when they have a profit-sharing agreement with their company. These agreements require that part of workers' wages be paid not in cash but in stocks, with strings attached about when and how these stocks can be traded. The result is that profit-sharing arrangements require workers to invest not just their future wage income in their extremely risky employer, but also a large share of their savings. Now when their employer goes bankrupt, they lose both their source of income and their life savings. Profit-sharing magnifies risks to workers, and no one is harmed more by this than the poor.
I'm not saying that we shouldn't encourage the poor to invest in equities. I'm saying that they are much better off being able to diversify that investment to insure themselves against the extremely high amount of risk that any single firm represents.
We can examine this with math. Households value consumption according to [$]U=E\left[u\left(C\right)\right][$] where [$]u[$] is a continuous strictly concave increasing function--that is, households want to consume more, but experience diminishing returns. The household's wage income is [$]y[$], and there are two identical firms that earn [$]\pi_i[$] profits. Each firm faces a probability [$]p[$] that it will have a disaster (say, a recall) that will reduce profits by [$]D[$] so that [$]\pi_i=\pi[$] with probability [$]1-p[$] and [$]\pi_i=\pi-D[$] with probability [$]p[$]. The two firms are identical, but their disaster risks are independent. A household works for just one of these firms.
Under profit sharing, household utility is [$$] U=pu\left(\pi-D+y\right)+\left(1-p\right)u\left(\pi+y\right)[$$] because the individual is forced, by the profit sharing agreement, to own the firm's equity and therefore bears the firm's disaster risk. Without profit sharing, total worker's compensation is the same, but the worker is no longer required to invest in his own firm's equity. He therefore buys a diversified portfolio of both firm's equity, such that his portfolio earns [$]\pi[$] with probability [$]\left(1-p\right)^2[$], [$]\pi-\frac{1}{2}D[$] with probability [$]2\left(1-p\right)p[$], and [$]\pi-D[$] with probability [$]p^2.[$] The expected returns on this portfolio are identical to the expected returns in the profit-sharing case, but with a much smaller variance--he is less likely to earn [$]\pi[$], but a lot more likely to earn more than [$]\pi-D[$]. The utility without profit sharing is [$$]U=\left(1-p\right)^2u\left(\pi+y\right)+2\left(1-p\right)pu\left(\pi-\frac{1}{2}D+y\right)+p^2u\left(\pi-D+y\right).[$$] The only assumption needed to show that [$]U[$] is higher (and therefore the household is better off) without profit-sharing than with is that [$]u[$] is strictly concave, which is equivalent to saying that people are risk averse which is equivalent to saying people experience diminishing returns.
In fact, most economists would just say that my result follows directly from the definition of concavity, though this may be less obvious to non-math folks. Indeed, what constitutes a mathematical "proof" depends entirely on the knowledge level of the reader. If you are aware, for example, that a continuous function [$]f[$] is strictly concave on a set [$]C[$] if and only if for any distinct [$]x,y\in C[$] [$$]f\left(\frac{x+y}{2}\right)\gt\frac{f\left(x\right)+f\left(y\right)}{2}[$$] I could simply start there. But if you don't know that, then a proof that assumes that doesn't really prove anything, does it? I had planned on simply linking you to the Wikipedia page to assert that this is true--you can bug them if you remain unconvinced--but it turns out Wikipedia only asserts the weak form of concavity ([$]\geq[$]) and not the strict form I'm using here ([$]\gt[$]). So I'm just going to tell you that this result is easily shown by taking Wikipedia's definition of strict concavity and setting [$]t=\frac{1}{2}[$]. That lets us jot down the following proof:
Proof:
Recall that utility in the non-profit-sharing case was given by [$$]U=\left(1-p\right)^2u\left(\pi+y\right)+2\left(1-p\right)pu\left(\pi-\frac{1}{2}D+y\right)+p^2u\left(\pi-D+y\right).[$$] Therefore,
\begin{align}
U&=\left(1-p\right)^2u\left(\pi+y\right)+2\left(1-p\right)pu\left(\pi-\frac{1}{2}D+y\right)+p^2u\left(\pi-D+y\right)\\
&\gt\left(1-p\right)^2u\left(\pi+y\right)+2\left(1-p\right)p\frac{u\left(\pi+y\right)+u\left(\pi-D+y\right)}{2}+p^2u\left(\pi-D+y\right)\\
&=\left(1-p\right)u\left(\pi+y\right)+pu\left(\pi-D+y\right)
\end{align}
which was the utility in the profit sharing case. So, profit sharing is worse. |
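The comparison is also easy to check numerically. A sketch with $u(c)=\sqrt{c}$ and made-up parameter values (any strictly concave increasing $u$ would do):

```python
import math

def u(c):
    # any strictly concave increasing utility; sqrt is one example
    return math.sqrt(c)

# illustrative parameter values only
pi, y, D, p = 10.0, 5.0, 8.0, 0.3

# profit sharing: the worker bears one firm's full disaster risk
U_share = p * u(pi - D + y) + (1 - p) * u(pi + y)

# diversified half/half portfolio over two independent, identical firms
U_div = ((1 - p)**2 * u(pi + y)
         + 2 * (1 - p) * p * u(pi - D / 2 + y)
         + p**2 * u(pi - D + y))

print(U_div > U_share)  # diversification gives higher expected utility
```

Note that with a linear (risk-neutral) $u$ the two expressions are identical, which is the sense in which expected returns are the same; only risk aversion separates them.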
Besides the usual deterministic DFS/BFS approaches, one could also consider a randomized algorithm. I will shortly describe a randomized algorithm for deciding if two vertices $s$ and $t$ are connected. It can also be used to decide if the whole graph is connected. The main benefit is that this method requires $O(\log |V|)$ bits of space, whereas a BFS/DFS requires $\Omega(|V|)$ space.
The
cover time of an undirected graph $G=(V,E)$ is the maximum over all vertices $v \in V(G)$ of the expected time for a random walk starting from $v$ to visit all of the nodes in the graph. Using some theory of Markov chains, it is not too hard to prove that the cover time of $G$ is bounded from above by $4|V|\cdot|E|$.
The algorithm for deciding if $s$ and $t$ are connected is simple:
Input: two vertices s,t
1. Start a random walk from s.
2. If t is reached within 4|V|^3 steps, return true. Otherwise, return false.
Clearly, if there is no path between $s$ and $t$ the algorithm returns the correct answer. If there is a path, the algorithm errs if it is not found within $4|V|^3$ steps. The cover time of $G$ is bounded from above by $4|V||E| < 2|V|^3$. Using Markov's inequality, the probability that a random walk takes more than $4|V|^3$ steps to reach $t$ from $s$ is at most $1/2$. In other words, the algorithm returns the correct answer with probability at least $1/2$, and only errs by saying $s$ and $t$ are not connected when they in fact are.
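The walk is a few lines to implement; a Python sketch (representing the graph as an adjacency dict, an arbitrary choice for illustration):

```python
import random

def st_connected(adj, s, t, rng=None):
    """Randomized s-t connectivity via a random walk.

    adj: dict mapping each vertex to a list of its neighbours.
    One-sided error: True is always correct; when s and t are
    connected, False is returned with probability at most 1/2.
    """
    rng = rng or random.Random()
    n = len(adj)
    v = s
    for _ in range(4 * n**3):
        if v == t:
            return True
        if not adj[v]:          # isolated vertex: the walk is stuck
            return False
        v = rng.choice(adj[v])
    return v == t
```

Repeating the test $k$ times independently and answering "connected" if any run succeeds drives the error probability down to $2^{-k}$, still using only $O(\log |V|)$ bits of working space.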
$$\begin{aligned}&Y_{i}=\beta _{1}+\beta _{2}X_{i}+U_{i}\\&E(U|X)=0\\&Var(U|X)=Var(Y|X)=\sigma ^{2}\\&{\widehat {Y_{i}}}={\widehat {\beta _{1}}}+{\widehat {\beta _{2}}}X_{i}\\&{\widehat {U_{i}}}=Y_{i}-{\widehat {Y_{i}}}\end{aligned}$$

Use of $\beta_0$ or $\beta_1$ to denote the Y-intercept is solely discretionary.

$$\begin{aligned}&{\widehat {\beta _{2}}}={\frac {\sum {(Y_{i}-{\bar {Y}}})(X_{i}-{\bar {X}})}{\sum {(X_{i}-{\bar {X}})^{2}}}}\\&{\widehat {\beta _{1}}}={\bar {Y}}-{\widehat {\beta _{2}}}{\bar {X}}\\&Var({\widehat {\beta _{2}}})={\frac {\sigma ^{2}}{\sum {(X_{i}-{\bar {X}})^{2}}}}\\&Var({\widehat {\beta _{1}}})={\frac {\sum {X_{i}^{2}}\,\sigma ^{2}}{n\sum {(X_{i}-{\bar {X}})^{2}}}}\\&{\widehat {\sigma ^{2}}}={\frac {\sum {\widehat {U_{i}}}^{2}}{n-2}}\end{aligned}$$

$U$ and $\epsilon$ have both been used to denote the error term.

$$\begin{aligned}&S_{2}^{2}={\widehat {Var({\widehat {\beta _{2}}})}}={\frac {\widehat {\sigma ^{2}}}{\sum {(X_{i}-{\bar {X}})^{2}}}}\\&S_{1}^{2}={\widehat {Var({\widehat {\beta _{1}}})}}={\frac {\sum {X_{i}^{2}}\,{\widehat {\sigma ^{2}}}}{n\sum {(X_{i}-{\bar {X}})^{2}}}}\\&S.E.({\widehat {\beta _{2}}})={\sqrt {\widehat {Var({\widehat {\beta _{2}}})}}}\\&S.E.({\widehat {\beta _{1}}})={\sqrt {\widehat {Var({\widehat {\beta _{1}}})}}}\end{aligned}$$

$S^2$ is used to denote a sample variance, S.E. a standard error.

$$\begin{aligned}&TSS=\sum {(Y_{i}-{\bar {Y}})^{2}}\\&ESS=\sum {({\widehat {Y_{i}}}-{\bar {Y}})^{2}}\\&RSS=\sum {\widehat {U_{i}}}^{2}\end{aligned}$$
TSS may also be presented as SST for the Total Sum of Squares, ESS as SSE (error) and RSS as SSR (residuals). Depending on the text, ESS and RSS may become very confusing, as there is great variety in the terminology used.
$$R^{2}={\frac {ESS}{TSS}}=1-{\frac {RSS}{TSS}}$$
$${\widehat {\log(Y)}}={\widehat {\beta _{1}}}+{\widehat {\beta _{2}}}\log(X)$$
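The estimators above are mechanical to compute; a short numpy sketch with made-up data (the arrays below are illustrative only):

```python
import numpy as np

# toy data, for illustration only
X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(X)

Sxx = np.sum((X - X.mean())**2)
b2 = np.sum((Y - Y.mean()) * (X - X.mean())) / Sxx   # slope estimate
b1 = Y.mean() - b2 * X.mean()                        # intercept estimate

Yhat = b1 + b2 * X
U = Y - Yhat                                         # residuals
sigma2 = np.sum(U**2) / (n - 2)                      # error variance estimate

SE_b2 = np.sqrt(sigma2 / Sxx)                        # S.E. of slope
SE_b1 = np.sqrt(np.sum(X**2) * sigma2 / (n * Sxx))   # S.E. of intercept

TSS = np.sum((Y - Y.mean())**2)
RSS = np.sum(U**2)
R2 = 1 - RSS / TSS
```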
Let $\alpha$ and $\beta$ be two
distinct eigenvalues of a $2\times2$ matrix $A$. Then which of the following statements must be true.
1 - $A^n$ is not a scalar multiple of the identity matrix for any positive integer $n$.
2 - $ A^3 = \dfrac{\alpha^3-\beta^3}{\alpha-\beta}A-\alpha\beta(\alpha+\beta)I$
For statement 1 I picked a diagonal matrix with diagonal entries 1 and -1, whose square is the identity matrix. Thus the statement may be false. But for the second statement I am not able to figure out a way to start. This is probably easy, but I am not able to get it. Please post a small hint so that I may proceed further.
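For what it's worth, statement 2 can at least be sanity-checked numerically before attempting a proof; a numpy sketch with an arbitrary sample matrix having distinct eigenvalues:

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])   # sample matrix, eigenvalues 2 and 3
a, b = np.linalg.eigvals(A)              # alpha, beta (order doesn't matter:
                                         # the formula is symmetric in them)
lhs = np.linalg.matrix_power(A, 3)
rhs = (a**3 - b**3) / (a - b) * A - a * b * (a + b) * np.eye(2)
print(np.allclose(lhs, rhs))
```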
In my paper with M. Hochman "Equidistribution from fractals", we give a geometric condition on a probability measure $\mu$ on the real numbers which ensures that $\mu$ almost every point is normal to a given base $m\ge 2$. The condition is fairly technical to state precisely, but it roughly says that if the measure $\mu$ does not exhibit any almost-periodic behaviour under the process of ``magnifying by a factor $m$ around a typical point'', then $\mu$ almost all points are normal to base $m$.
Our condition is invariant under $C^1$ maps. This does not say that normal numbers are invariant under $C^1$ maps, which is trivially false. It does say that some geometric features that ensure normality are, in a global sense, invariant under $C^1$ maps.
As pointed out by Robert Israel and Christian Remling, this is of interest only when $\mu$ is a singular measure with respect to Lebesgue measure. The good news is that there are plenty of natural singular measures to which our results apply. In particular, absolute continuity is definitely not the only reasonable condition that ensures that almost all points are normal (although it is the only condition if one considers ``size'' only).
For example, suppose $\mu$ is a measure on $[0,1)$ which is invariant under multiplication by $p$ on the circle, i.e. $\mu(A)=\mu(T_p^{-1}A)$ where $T_p(x)=px\bmod 1$. Then $\mu$ almost every point is normal to base $m$, for any $m$ such that $\log p/\log m$ is irrational (if $\log p/\log m$ is rational, this is not true). This extends previous results by Cassels, B. Host, E. Lindenstrauss and others.
For $T_p$-invariant measures $\mu$, it is not true that $\mu$ almost all points are normal to all bases. We do get many examples where this is the case.
Let $B\subset\mathbb{N}$ be a finite set with at least two elements, and let $A$ be the set of all points whose continued fraction expansion has only digits in the set $B$. It is well known that $A$ has positive and finite Hausdorff measure in its dimension, let $\mu$ be the restriction of the corresponding Hausdorff measure to $A$ (alternatively, there is a geometric Gibbs measure on $A$ for the Gauss map which is equivalent to $\mu$). Then $\mu$ almost all points are normal. Related results have been obtained by Kaufman, and by T. Jordan and T. Sahlsten.
To give a final example, let $A$ be the self-similar set obtained by replacing $[0,1]$ with $[0,1/2]\cup [2/3,1]$ and continuing inductively on each interval, always keeping intervals of relative lengths $1/2$ and $1/3$. Again, $A$ supports a natural measure $\mu$, and $\mu$ almost all points are normal, even though $A$ has Hausdorff dimension less than $1$. The same holds for a much wider class of self-similar measures, as long as there are two contraction ratios $r_1,r_2$ in the construction such that $\log r_1/\log r_2\notin\mathbb{Q}$. |
I am currently going over this paper, and in fact have already tried to implement it:
The paper looks at the amplitude of the step and the time between the swings. Although I seem to understand the general idea, I am slightly confused about one of the formulas, which describes the time between steps, namely:
"To extract only one valley from a gathered group of valley candidates in a very short time range, every valley candidate is validated by checking the time distance to the recent valley using the following threshold. $Th_v = \mu_v - (\sigma_v/\beta)$ where $\mu_v$ and $\sigma_v$ represent the average and the standard deviation of the time interval between adjacent valleys in the magnitude of acceleration, respectively. These averages and the standard deviations are calculated for recent M peaks or valleys."
Besides the time threshold, the magnitude is obviously also taken into consideration.
In the figure, red dots correspond to initially discovered valleys and purple dots to points that should trigger a re-evaluation of the last valley.
So what I currently have is something like this:
1) I obtain a valley at point 20 and another at point 49. This should mean that the average distance between two valleys is 29 readings, so I set $\mu_v = 29$ and $\sigma_v = 0$, which yields $Th_v = 29$ (my $\beta = 1/3$). I also store the difference (29).

2) Then I get to point 51, which should update the $Th_v$ value. So now I calculate the mean, which I think means $(29 + (51-49))/2 = 15.5$.
I then calculate the (sample) standard deviation, which equals $19.09$. But when I plug these values into the formula above I get $Th_v = 15.5 - 3 \times 19.09 \approx -41.78$, which should not be possible, as I believe this value should always be positive to indicate the time between two valleys.
Am I doing something incorrectly, or is something wrong with my approach or the paper's?
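For completeness, here is a minimal version of my update logic (the helper names are my own, not from the paper); it reproduces the negative threshold:

```python
from collections import deque
from statistics import mean, stdev

def make_threshold_updater(M=5, beta=1/3):
    """Track the last M inter-valley intervals and return the current
    threshold Th_v = mu_v - sigma_v / beta after each new interval."""
    intervals = deque(maxlen=M)

    def update(new_interval):
        intervals.append(new_interval)
        mu_v = mean(intervals)
        # stdev needs at least two samples; use 0 for the first interval
        sigma_v = stdev(intervals) if len(intervals) > 1 else 0.0
        return mu_v - sigma_v / beta

    return update

update = make_threshold_updater(beta=1/3)
print(update(29))        # 29.0 (single interval, sigma_v = 0)
print(update(51 - 49))   # about -41.78: the negative threshold in question
```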
References:
[1]: Lee H, Choi S, Lee M. Step Detection Robust against the Dynamics of Smartphones. Wang X, ed. Sensors (Basel, Switzerland). 2015;15(10):27230-27250. doi:10.3390/s151027230. |
Correction to: “Existence, uniqueness and comparison results for BSDEs with Lévy jumps in an extended monotonic generator setting”
The proof of (Geiss and Steinicke (2018), Theorem 3.5) needs an extra step addressing the problem that our conditions on the generator are not sufficient to guarantee the existence of the considered optional projection:
The generator $f$ was defined as the optional projection of $f(\cdot, s, y, z, u)$. However, this optional projection does not always exist for generators $f$ satisfying (A1)–(A3). The extra step consists in truncating $f(\omega, s, y, z, u)$ at a level $K>0$ and working with the truncated generator $f^K$. Concerning (A3), one observes that only the cases where both factors of $(y - y')\big(f^K(s, y, z, u) - f^K(s, y', z', u')\big)$ are either positive or negative are relevant. The resulting inequality implies that $(A_\gamma)$ also holds for $f^K$. One then applies the comparison result to $f^K$ and $f'^K$ and lets $K \to \infty$, so that \( Y_{t} \le Y^{\prime }_{t} \quad \mathbb {P}\text {-a.s.} \) follows. In the proof of Proposition 4.2 it was shown that for the data $(\xi, f)$ and $(\xi, f^K)$ a corresponding estimate holds.
The factor \(\sup _{t \in [0,T]} \|Y_{t}-Y^{K}_{t}\|\) is bounded according to Proposition 4.1, and the integral goes to zero by monotone convergence. Since \({\lim }_{x\to 0} h(a,b,x) =0,\) one derives that \({\lim }_{K \to \infty }\|Y_{t} - Y^{K}_{t} \| =0,\) and in the same way it follows \({\lim }_{K \to \infty }\|Y^{\prime }_{t} - Y^{\prime {K}}_{t} \| =0.\)
Moreover, Theorem 3.4 and Lemma 5.1 in Geiss and Steinicke (2018) are only valid if the optional projection of $f$ in Definition 3.3 exists. For the proof of Theorem 3.5 this does not cause a problem, since we need these results for \(f^{K}_{n}\) only.

Authors’ contributions
Both authors read and approved the final manuscript.
Competing interests
The authors declare that they have no competing interests.
References

Geiss, C., Steinicke, A.: Existence, uniqueness and comparison results for BSDEs with Lévy jumps in an extended monotonic generator setting. Probab. Uncertain. Quant. Risk 3(9) (2018).

Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The aim of this test case is to validate the following functions:
The simulation results of SimScale were compared to the analytical results derived from [SCHAUM]. The mesh used was created using first order tetrahedralization meshing algorithm on the SimScale platform.
The square box mass has a length, width and height of 1 m, with the upper face partitioned into two halves.
Tool type: Code_Aster

Analysis type: Linear static and dynamic

Mesh and element types:
| Case | Mesh type | Number of nodes | Number of 3D elements | Element type | Analysis type | Elastic support type – face EIGJ | Elastic support type – face IFJH | Elastic support type – combined (face EFGH) |
|---|---|---|---|---|---|---|---|---|
| (A-1) | linear tetrahedrals | 21 | 26 | 3D isoparametric | Static | – | – | isotropic, total |
| (A-2) | linear tetrahedrals | 21 | 26 | 3D isoparametric | Dynamic | – | – | isotropic, total |
| (B-1) | linear tetrahedrals | 33 | 61 | 3D isoparametric | Static | isotropic, total | orthotropic, total | – |
| (B-2) | linear tetrahedrals | 33 | 61 | 3D isoparametric | Static | isotropic, distributed | orthotropic, distributed | – |
| (B-3) | linear tetrahedrals | 33 | 61 | 3D isoparametric | Static | isotropic, total and distributed | orthotropic, total and distributed | isotropic, total |
Material:
Constraints:
Case A-1/A-2:

Case B-1:

Case B-2:

Case B-3:

Case A-1/B-1/B-2/B-3: (1)
$$x = \frac{mg}{k} = \frac{10 \times 9.81}{9810} = 0.01 \textrm{ m}$$
Case A-2: (2)
$$x = \frac{v_o}{\omega} \sin{\omega t} + x_o \cos{\omega t}$$
where,
angular frequency, \(\omega = \sqrt{\frac{k}{m}} = \sqrt{\frac{9810}{10}} = 31.32\) rad/s
initial velocity, \(v_o = -0.01\) m/s

position of initial release, \(x_o = -0.01\) m

time, \(2\,\textrm{s} \le t \le 4\,\textrm{s}\)
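To double-check these numbers, a short script (not part of the original validation, values copied from above) evaluates both the static deflection (1) and the dynamic response (2):

```python
import math

k, m = 9810.0, 10.0          # spring stiffness [N/m], mass [kg]
omega = math.sqrt(k / m)     # angular frequency [rad/s], ~31.32
v0, x0 = -0.01, -0.01        # initial velocity [m/s] and release position [m]

def x(t):
    # x(t) = (v0/omega) * sin(omega t) + x0 * cos(omega t)
    return (v0 / omega) * math.sin(omega * t) + x0 * math.cos(omega * t)

# Static deflection for cases A-1/B-1/B-2/B-3: x = mg/k
x_static = m * 9.81 / k
print(x_static)   # 0.01 m
print(x(2.0))     # dynamic response at t = 2 s
```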
| Quantity | [SCHAUM] | Case A-1 | Error | Case B-1 | Error | Case B-2 | Error | Case B-3 | Error |
|---|---|---|---|---|---|---|---|---|---|
| x | 0.01 | 0.01 | 0 | 0.01 | 0 | 0.01 | 0 | 0.01 | 0 |
[SCHAUM] Nelson, N. W., Best, C. L., McLean, W. J., Potter, M. C. (2011): “Schaum’s Outlines, Engineering Mechanics: Dynamics”, McGraw-Hill, pp. 271–273.
For a 6DoF robot with all revolute joints the Jacobian is given by: $$ \mathbf{J} = \begin{bmatrix} \hat{z_0} \times (\vec{o_6}-\vec{o_0}) & \ldots & \hat{z_5} \times (\vec{o_6}-\vec{o_5})\\ \hat{z_0} & \ldots & \hat{z_5} \end{bmatrix} $$ where $z_i$ is the unit z axis of joint $i+1$(using DH params), $o_i$ is the origin of the coordinate frame connected to joint $i+1$, and $o_6$ is the origin of the end effector. The jacobian matrix is the relationship between the Cartesian velocity vector and the joint velocity vector: $$ \dot{\mathbf{X}}= \begin{bmatrix} \dot{x}\\ \dot{y}\\ \dot{z}\\ \dot{r_x}\\ \dot{r_y}\\ \dot{r_z} \end{bmatrix} = \mathbf{J} \begin{bmatrix} \dot{\theta_1}\\ \dot{\theta_2}\\ \dot{\theta_3}\\ \dot{\theta_4}\\ \dot{\theta_5}\\ \dot{\theta_6}\\ \end{bmatrix} = \mathbf{J}\dot{\mathbf{\Theta}} $$
Here is a singularity position of a Staubli TX90XL 6DoF robot:
$$ \mathbf{J} = \begin{bmatrix} -50 & -425 & -750 & 0 & -100 & 0\\ 612.92 & 0 & 0 & 0 & 0 & 0\\ 0 & -562.92 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 1 & 0\\ 1 & 0 & 0 & -1 & 0 & -1 \end{bmatrix} $$
You can easily see that the 4th row corresponding to $\dot{r_x}$ is all zeros, which is exactly the lost degree of freedom in this position.
However, other cases are not so straightforward.
$$ \mathbf{J} = \begin{bmatrix} -50 & -324.52 & -649.52 & 0 & -86.603 & 0\\ 987.92 & 0 & 0 & 0 & 0 & 0\\ 0 & -937.92 & -375 & 0 & -50 & 0\\ 0 & 0 & 0 & 0.5 & 0 & 0.5\\ 0 & 1 & 1 & 0 & 1 & 0\\ 1 & 0 & 0 & -0.866 & 0 & -0.866 \end{bmatrix} $$
Here you can clearly see that joint 4 and joint 6 are aligned because the 4th and 6th columns are the same. But it's not clear which Cartesian degree of freedom is lost (it should be a rotation about the end effector's x axis in red).
Even less straightforward are singularities at workspace limits.
$$ \mathbf{J} = \begin{bmatrix} -50 & 650 & 325 & 0 & 0 & 0\\ 1275.8 & 0 & 0 & 50 & 0 & 0\\ 0 & -1225.8 & -662.92 & 0 & -100 & 0\\ 0 & 0 & 0 & 0.86603 & 0 & 1\\ 0 & 1 & 1 & 0 & 1 & 0\\ 1 & 0 & 0 & 0.5 & 0 & 0 \end{bmatrix} $$
In this case, the robot can rotate in the $-r_y$ direction but not $+r_y$. There are no rows full of zeros, no equal columns, nor any obviously linearly dependent columns/rows.
Is there a way to determine which degrees of freedom are lost by looking at the Jacobian?
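Not from the original question, but one standard numerical probe: the left singular vectors of $\mathbf{J}$ belonging to (near-)zero singular values span exactly the Cartesian velocity directions that no joint motion can produce. A NumPy sketch using the first Jacobian above:

```python
import numpy as np

J = np.array([
    [-50, -425, -750, 0, -100, 0],
    [612.92, 0, 0, 0, 0, 0],
    [0, -562.92, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, -1, 0, -1],
])

U, s, Vt = np.linalg.svd(J)
tol = 1e-8 * s[0]
# Columns of U whose singular value is ~0 span the lost Cartesian directions.
lost = U[:, s < tol]
print(np.round(lost, 6))   # here: +/- e_4, i.e. exactly the r_x row
```

For the less obvious singular configurations, the same computation exposes the lost direction even when no row or column dependency is visible by eye.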
Summary: Wigner's friend seems to lead to certainty in two complementary contexts.
This is probably pretty dumb, but I was just thinking about Wigner's friend and wondering about the two contexts involved.
The basic set up I'm wondering about is as follows:
The friend does a spin measurement in the ##\left\{|\uparrow_z\rangle, |\downarrow_z\rangle\right\}## basis, i.e. of ##S_z## at time ##t_1##. And let's say the particle is undisturbed after that.
For experiments outside the lab, Wigner considers the lab to be in the state:
$$\frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle + |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right)$$
He then considers a measurement of the observable ##\mathcal{X}## which has eigenvectors:
$$\left\{\frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle + |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right), \frac{1}{\sqrt{2}}\left(|L_{\uparrow_z}, D_{\uparrow_z}, \uparrow_z \rangle - |L_{\downarrow_z}, D_{\downarrow_z}, \downarrow_z \rangle\right)\right\}$$
with eigenvalues ##\{1,-1\}## respectively.
At time ##t_2## the friend flips a coin: either he performs a measurement of ##S_z##, or Wigner performs a measurement of ##\mathcal{X}##.
If the friend measures ##S_z##, he knows for a fact that he will obtain whatever result he originally got. However, he also knows that Wigner will obtain the ##1## outcome with certainty.
However ##\left[S_{z},\mathcal{X}\right] \neq 0##. Thus the friend seems to be predicting with certainty observables belonging to two separate contexts. Which is not supposed to be possible in the quantum formalism.
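Restricted to the two-dimensional span of the two branch states, ##S_z## acts like ##\sigma_z## and ##\mathcal{X}## like ##\sigma_x##, so the non-commutativity is easy to verify numerically (a toy sketch of the effective two-level system, not the full lab Hilbert space):

```python
import numpy as np

# Basis: |a> = |L_up, D_up, up>, |b> = |L_down, D_down, down>
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# S_z (in units of hbar/2) distinguishes the two branches: diag(+1, -1)
Sz = np.diag([1.0, -1.0])

# X has eigenvectors (|a>+|b>)/sqrt(2), (|a>-|b>)/sqrt(2) with
# eigenvalues +1, -1: in the {|a>, |b>} basis this is sigma_x.
plus = (a + b) / np.sqrt(2)
minus = (a - b) / np.sqrt(2)
X = np.outer(plus, plus) - np.outer(minus, minus)

comm = Sz @ X - X @ Sz
print(comm)   # nonzero: [[0, 2], [-2, 0]], i.e. 2i*sigma_y up to convention
```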
What am I missing? |
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... |