An apparently elementary question that has bugged me for quite some time:

(1) Why are the integers with the cofinite topology not path-connected?
Recall that the open sets in the cofinite topology on a set are the subsets whose complement is finite or the entire space.
Obviously, the integers are connected in the cofinite topology, but proving that they are not path-connected is much more subtle. I admit that this looks like just another homework problem (and was dismissed as such in this thread), but if you think about it, it does not seem to be obvious at all.
An equivalent reformulation of (1) is:

(2) The unit interval $[0,1] \subset \mathbb{R}$ cannot be written as a countable union of pairwise disjoint non-empty closed sets.

(A path $\gamma \colon [0,1] \to \mathbb{Z}$ hitting more than one integer would partition $[0,1]$ into the countably many pairwise disjoint closed fibres $\gamma^{-1}(n)$, so (2) implies (1); conversely, such a partition defines a continuous surjection onto the cofinite integers.)
I can prove this, but I'm not really satisfied with my argument; see below.
My questions are:
Does anybody know a reference for a proof of (1), (2), or an equivalent statement, and if so, do you happen to know who proved this originally? Do you have an easier or slicker proof than mine?
Here's an outline of my rather clumsy proof of (2):
Let $[0,1] = \bigcup_{n=1}^{\infty} F_{n}$ with $F_{n}$ closed, non-empty and $F_{i} \cap F_{j} = \emptyset$ for $i \neq j$.
The idea is to construct by induction a decreasing family $I_{1} \supset I_{2} \supset \cdots$ of non-empty closed intervals such that $I_{n} \cap F_{n} = \emptyset$. Then $I = \bigcap_{n=1}^{\infty} I_{n}$ is non-empty. On the other hand, since every $x \in I$ lies in exactly one $F_{n}$, and since $x \in I \subset I_{n}$ and $I_{n} \cap F_{n} = \emptyset$, we see that $I$ must be empty, a contradiction.
In order to construct the decreasing sequence of intervals, we proceed as follows:
Since $F_{1}$ and $F_{2}$ are closed and disjoint, there are open sets $U_{1} \supset F_{1}$ and $U_{2} \supset F_{2}$ such that $U_{1} \cap U_{2} = \emptyset$. Let $I_{1} = [a,b]$ be a connected component of $[0,1] \smallsetminus U_{1}$ such that $I_{1} \cap F_{2} \neq \emptyset$. By construction, $I_{1}$ is not contained in $F_{2}$, so by connectedness of $I_{1}$ there must be infinitely many $F_{n}$'s such that $F_{n} \cap I_{1} \neq \emptyset$.
Replacing $[0,1]$ by $I_{1}$ and the $F_{n}$'s by a (monotone) enumeration of those $F_{n}$ with non-empty intersection with $I_{1}$, we can repeat the argument of the previous paragraph and get $I_{2}$.
[In case we have thrown away $F_{3}, F_{4}, \ldots, F_{m}$ in the induction step (i.e., their intersection with $I_{1}$ is empty but $F_{m+1} \cap I_{1} \neq \emptyset$), we put $I_{3}, \ldots, I_{m}$ equal to $I_{2}$, and so on.]
Added: Feb 15, 2011
I was informed that a proof of (2) appears in C. Kuratowski, Topologie II, §42, III, 6 on p. 113 of the 1950 French edition, with essentially the same argument as I gave above. There it is attributed to W. Sierpiński, Un théorème sur les continus, Tôhoku Mathematical Journal 13 (1918), pp. 300-303.
Consider the function $f$ on $[0,1]\times [0,1]$ given by
$$f(x,y) = \frac{x^2-y^2}{(x^2+y^2)^2}, \,(x,y)\neq (0,0)$$
and $f(0,0) = 0.$
Let $M$ denote the $\sigma$-algebra of Lebesgue measurable sets and $m$ the Lebesgue measure.
In my previous question, it was shown that $f(x,y)$ is $M\times M$ measurable.
I am trying to show now that $f$ is $m\times m$ summable.
Is my approach correct?
Note that when $0 < y < x$ we have $x^2+y^2\leq 2x^2$ and $x^2-y^2\geq 0$, so $(x^2+y^2)^2 \leq 4x^4$. Then, $$\int_{0}^{1} |f(x,y)|\, dm(y) \geq \int_{0}^{x} \frac{x^2-y^2}{(x^2+y^2)^2}\, dm(y)\geq \int_{0}^{x}\frac{x^2-y^2}{4x^4}\, dm(y).$$
Now, $$\int_{0}^{x} \frac{x^2-y^2}{4x^4}\, dm(y) = \frac{y}{4x^2}-\frac{y^3}{12x^4} \bigg|_{y = 0}^{y=x} = \frac{1}{6x},$$ which is not integrable near $x = 0$: by Tonelli's theorem, $$\int |f|\, d(m\times m) \geq \int_{0}^{1} \frac{1}{6x}\, dm(x) = \infty.$$ Hence our function is not $m\times m$ summable.
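As a quick sanity check (not part of the argument above), the inner integral even has a simple closed form, which dominates the $1/(6x)$ lower bound; a SymPy sketch, assuming SymPy is available:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = (x**2 - y**2) / (x**2 + y**2)**2

# Exact inner integral over y on (0, x); an antiderivative is y/(x^2 + y^2)
inner = sp.simplify(sp.integrate(f, (y, 0, x)))
print(inner)  # 1/(2*x)
```

By Tonelli, $\int_0^1 \frac{1}{2x}\, dm(x) = \infty$, confirming non-summability.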
The Convexity of Quadratic Maps and the Controllability of Coupled Systems

Citation: Sheriff, Jamin Lebbe. 2013. The Convexity of Quadratic Maps and the Controllability of Coupled Systems. Doctoral dissertation, Harvard University.

Abstract: A quadratic form on \(\mathbb{R}^n\) is a map of the form \(x \mapsto x^T M x\), where \(M\) is a symmetric \(n \times n\) matrix. A quadratic map from \(\mathbb{R}^n\) to \(\mathbb{R}^m\) is a map, all \(m\) of whose components are quadratic forms. One of the two central questions in this thesis is this: when is the image of a quadratic map \(Q: \mathbb{R}^n \rightarrow \mathbb{R}^m\) a convex subset of \(\mathbb{R}^m\)? This question has intrinsic interest; despite being only a degree removed from linear maps, quadratic maps are not well understood. However, the convexity properties of quadratic maps have practical consequences as well: underlying every semidefinite program is a quadratic map, and the convexity of the image of that map determines the nature of the solutions to the semidefinite program. Quadratic maps that map into \(\mathbb{R}^2\) and \(\mathbb{R}^3\) have been studied before (in (Dines, 1940) and (Calabi, 1964) respectively). The Roundness Theorem, the first of the two principal results in this thesis, is a sufficient and (almost) necessary condition for a quadratic map \(Q: \mathbb{R}^n \rightarrow \mathbb{R}^m\) to have a convex image when \(m \geq 4\), \(n \geq m\) and \(n \not= m + 1\). Concomitant with the Roundness Theorem is an important lemma: when \(n < m\), quadratic maps from \(\mathbb{R}^n\) to \(\mathbb{R}^m\) seldom have convex images. The second result in this thesis is a controllability condition for bilinear systems defined on direct products of the form \(\mathcal{G} \times \mathcal{G}\), where \(\mathcal{G}\) is a simple Lie group. The condition is this: a bilinear system defined on \(\mathcal{G} \times \mathcal{G}\) is not controllable if and only if the Lie algebra generated by the system's vector fields is the graph of some automorphism of \(\mathfrak{g}\), the Lie algebra of \(\mathcal{G}\).
Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:11030574
To expand on my comments, here's
one approach. Someone more used to the area you're working in (and there are many here) may have a better suggestion for solving the same problems:
1) Assume that confounding variables are of two kinds.
i) The first kind (the main one) are confounders that are always the same for a given subject. "School effects", "teacher effects", and socio-economic variables, for example, may reasonably be assumed to be the same before and after for each subject.
ii) The second kind (which may not exist for your problem) can change within subjects (these would be time-related things like 'learning effects' from having been tested before rather than from the intervention itself)
2) Assume no confounders interact with any of the effects you're interested in
A model that reflects that could be written as follows:
Let $i$ represent the subject, and let $t$ represent the time (0/1). Let $Y_{it}$ be the response for subject $i$ at time $t$. The variable $\text{Treatment}$ is $1$ for those in the treatment group and $0$ for the control.
$\alpha_i$ incorporates all the individual-level confounders above.
$\gamma$ incorporates any time confounders, including the effect of the first round of testing.
$\beta$ incorporates the treatment effect: the difference in the before/after change between the treatment and control groups.
$Y_{it} = \alpha_i + \gamma \cdot t + \beta \text{ Treatment}\cdot t +\varepsilon_{it}$
Normally with a model like this I'd be tempted to use a mixed-effects model with a random intercept, but in this case you don't have randomization. Nonetheless, because of the before/after pairing, with the assumption of no interaction of confounders with treatment you can tease out the treatment effect.
For example, if you take $D_i = Y_{i1}-Y_{i0}$, you get:
$Y_{i1} = \alpha_i + \gamma + \beta \text{ Treatment} +\varepsilon_{i1}$
$Y_{i0} = \alpha_i + 0 + 0 +\varepsilon_{i0}$
$D_i = \gamma + \beta \text{ Treatment} + \eta_i$
where $\eta_i =\varepsilon_{i1}-\varepsilon_{i0}$.
Then, assuming sample sizes are large enough, a straight two-sample test of equality of population means of the $D$'s between control and treatment should arguably pick up a treatment effect.
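To illustrate, here is a hypothetical simulation of this model (NumPy/SciPy assumed; the group sizes, confounder distributions, and effect sizes $\gamma = 1$, $\beta = 2$ are all invented): even though the groups differ at baseline through $\alpha_i$, the two-sample comparison of the $D_i$ recovers $\beta$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500                      # subjects per group (hypothetical)
gamma, beta = 1.0, 2.0       # time effect and treatment effect (assumed)

# Subject-level confounders alpha_i differ systematically between groups
# (no randomization), but they cancel in the before/after difference.
alpha_c = rng.normal(0.0, 3.0, n)
alpha_t = rng.normal(1.5, 3.0, n)   # treated group differs at baseline

def diffs(alpha, treated):
    y0 = alpha + rng.normal(0.0, 1.0, alpha.size)
    y1 = alpha + gamma + beta * treated + rng.normal(0.0, 1.0, alpha.size)
    return y1 - y0            # D_i = gamma + beta*Treatment + eta_i

d_control = diffs(alpha_c, 0)
d_treat = diffs(alpha_t, 1)

tstat, p = stats.ttest_ind(d_treat, d_control)
effect = d_treat.mean() - d_control.mean()
print(effect, p)   # effect estimate should be near beta = 2, p very small
```

A naive cross-sectional comparison of the $Y_{i1}$ alone would be biased by the 1.5 baseline gap; differencing removes $\alpha_i$ exactly.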
It's hard to say just from the sheet music, not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here: do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... If inner or outer measure of $E$ is meant by $m^\ast(E)$, then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ (assuming completeness), or the question doesn't make sense. If ordinary measure is meant by $m^\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebook site that has this book, to which (I hope) my university has a subscription? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
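Not an answer, but the set $W_p$ is easy to probe numerically. A small sketch (my own illustration: the choices $p = 3$, candidate $w = 2$, and the range of $r$ are arbitrary), using the classical fact that a primitive root mod $p^2$ remains a primitive root mod every $p^r$:

```python
from math import gcd

def mult_order(g, m):
    """Multiplicative order of g modulo m (requires gcd(g, m) == 1)."""
    assert gcd(g, m) == 1
    k, x = 1, g % m
    while x != 1:
        x = (x * g) % m
        k += 1
    return k

def euler_phi(m):
    """Euler's totient via trial-division factorization."""
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

# w = 2 truncates to a generator of (Z/3^r Z)^x for each small r checked,
# i.e. 2 behaves like an element of W_3 as far as these truncations go.
p, w = 3, 2
checks = [mult_order(w, p**r) == euler_phi(p**r) for r in range(1, 6)]
print(checks)  # all True
```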
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions I would like to get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
You did not specify what materials you are working with, nor the dimensions, so I made some assumptions.
First of all, the coolant circuit is likely driven by a pump. Therefore you need to know the flow rate $\dot{Q}$ and the pressure needed on the pressure side. You can get more information here.
For your specific setup to cool the PSU, you have pressure drops along your tubing, through curvatures, and from height differences.
To keep things simple I just assumed a height difference of 20 cm and no differences in pressure in different reservoirs where the coolant might be pumped. Then the only significant pressure drop is the result of friction pressure loss. I also did not account for any curvatures. For $\rho$ I assumed water.
$\Delta p_r = \lambda \cdot \dfrac{L}{d} \cdot \dfrac{\rho}{2} \cdot u^2$
I assumed rubber tubing with a Darcy friction factor $\lambda = 0.02$.
The plot below shows three different assumptions for tubing length and inner diameter:

- $L$ = 2 m and $d$ = 0.01 m
- $L$ = 2 m and $d$ = 0.005 m
- $L$ = 10 m and $d$ = 0.005 m
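Under the same assumptions ($\lambda = 0.02$, water) and an example flow speed $u = 1\ \mathrm{m/s}$ (my own illustrative choice, not from the question), the friction pressure drop $\Delta p_r$ for the three tubing geometries can be computed directly:

```python
def pressure_drop(L, d, u, rho=1000.0, lam=0.02):
    """Darcy-Weisbach friction pressure drop (Pa) for a straight tube.

    L: tube length (m), d: inner diameter (m), u: flow speed (m/s),
    rho: coolant density (kg/m^3, water assumed), lam: friction factor.
    """
    return lam * (L / d) * (rho / 2.0) * u**2

# The three assumed geometries from the plot, at u = 1 m/s
for L, d in [(2, 0.01), (2, 0.005), (10, 0.005)]:
    print(f"L={L} m, d={d} m: {pressure_drop(L, d, 1.0):.0f} Pa")
```

Halving the diameter at fixed $u$ doubles the drop; in practice it is worse, since a smaller diameter also raises $u$ for the same $\dot{Q}$.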
Pumps have characteristic curves as well, but there the pressure decreases with increasing $\dot{Q}$. The intersection of the two curves is the operating point.
Source: ctgclean
Now either the pump is fixed and you need to adjust your pressure drop along the tubing accordingly or vice versa. Therefore the specification of a minimum pressure instead of maximum.
I hope that sheds some light on your question.
Perturbed fractional eigenvalue problems
1. Department of Mathematics, University of Craiova, 200585 Craiova, Romania
2. "Simion Stoilow" Institute of Mathematics of the Romanian Academy, 010702 Bucharest, Romania
3. Department of Mathematics and Computer Science, University Politehnica of Bucharest, 060042 Bucharest, Romania
4. "Simion Stoilow" Institute of Mathematics of the Romanian Academy, 010702 Bucharest, Romania
Let $Ω\subset\mathbb{R}^N$ ($N≥2$) be a bounded domain with Lipschitz boundary. For each $p∈(1,∞)$ and $s∈(0,1)$ we denote by $(-Δ_p)^s$ the fractional $(s,p)$-Laplacian operator. In this paper we study the existence of nontrivial solutions for a perturbation of the eigenvalue problem $(-Δ_p)^s u=λ |u|^{p-2}u$ in $Ω$, $u=0$ in $\mathbb{R}^N\backslash Ω$, with a fractional $(t,q)$-Laplacian operator on the left-hand side of the equation, where $t∈(0,1)$ and $q∈(1,∞)$ are such that $s-N/p=t-N/q$. We show that nontrivial solutions for the perturbed eigenvalue problem exist if and only if the parameter $λ$ is strictly larger than the first eigenvalue of the $(s,p)$-Laplacian.
Keywords: Perturbed eigenvalue problem, non-local operator, variational methods, fractional Sobolev space. Mathematics Subject Classification: Primary: 35P30; Secondary: 49J35, 47J30, 46E35. Citation: Maria Fărcăşeanu, Mihai Mihăilescu, Denisa Stancu-Dumitru. Perturbed fractional eigenvalue problems. Discrete & Continuous Dynamical Systems - A, 2017, 37 (12): 6243-6255. doi: 10.3934/dcds.2017270
One way to understand solubility is to start with the Clausius-Clapeyron equation, which despite the approximations involved is a good description of the vapour pressure over various solids and liquids. For an ideal solution,

$$ \ln\left(\frac{p_2}{p_1}\right) = \frac{-\Delta H_{vap}}{R}\left( \frac{1}{T_2}-\frac{1}{T_1} \right)$$

where $p$ is the vapour pressure, $T$ the temperature, and $\Delta H$ the enthalpy change between states $1$ and $2$.
The heat of sublimation is $\Delta H_{sub}=\Delta H_{vap}+\Delta H_{fus}$, and calculating the pressure over the pure solid form of the solute gives $$ \ln\left(\frac{p_2}{p^*_1}\right) = \frac{-\Delta H_{sub}}{R}\left( \frac{1}{T_2}-\frac{1}{T_1} \right)$$
and so subtracting these two equations gives
$$ \ln\left (\frac{p^*_1}{p_1}\right) = \frac{\Delta H_{fus}}{R}\left( \frac{1}{T_2}-\frac{1}{T_1} \right )$$
If it is assumed that Raoult's Law applies, then $p_1=xp^*_1$, where $x$ is the mole fraction of the solute. Substituting into the last equation gives $$ \ln(x)= -\frac{\Delta H_{fus}}{R}\left( \frac{1}{T_2}-\frac{1}{T_M} \right)$$ where now $T_M$ is the melting temperature of the pure solute. As $\Delta H_{fus}/T_M$ is a constant, the mole fraction, and hence the solubility in solution, satisfies $$ \ln(x) \propto \frac{-\Delta H_{fus}}{R}\left( \frac{1}{T_2} \right)$$
which shows that the mole fraction at temperature $T$ varies as $$x_T \propto \exp\left(-\frac{\Delta H_{fus}}{RT}\right)$$
and so the solubility rises with increasing temperature. Different species will rise more or less quickly depending on their heat of fusion, through the factor $\Delta H_{fus}/R$.
Using data for NaCl shows that the mole fraction hardly varies between $200$ and $400$ K, whereas there is a huge increase for $\ce{NaNO3}$ under the same conditions.
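A rough numerical sketch of the final proportionality. The $\Delta H_{fus}$ and $T_M$ numbers below are approximate literature values I am supplying, not data from the answer, so treat the output as qualitative only:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def ideal_mole_fraction(dH_fus, T, T_m):
    """Ideal-solubility estimate: ln x = -(dH_fus/R) * (1/T - 1/T_m)."""
    return math.exp(-dH_fus / R * (1.0 / T - 1.0 / T_m))

# Approximate literature values (assumptions): dH_fus in J/mol, T_m in K
salts = {"NaCl": (28e3, 1074.0), "NaNO3": (15e3, 580.0)}
for name, (dH, Tm) in salts.items():
    xs = [ideal_mole_fraction(dH, T, Tm) for T in (200.0, 300.0, 400.0)]
    print(name, xs)
```

The model predicts a monotone rise with $T$ for both salts; the contrast between the two curves comes entirely from the $\Delta H_{fus}/R$ factor in the exponent.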
Basically 2 strings, $a>b$, which go into the first box; the division outputs $q, r$ such that $a = bq + r$ and $r<b$. Then you check for $r=0$, which returns $b$ if we are done; otherwise you feed $b, r$ back into the division box.
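The "division box" loop being described is the Euclidean algorithm; a minimal sketch (names are mine):

```python
def euclid(a, b):
    """gcd via the 'division box': divide, check r == 0, feed back."""
    assert a > b > 0
    while True:
        q, r = divmod(a, b)   # a = b*q + r with r < b
        if r == 0:
            return b          # done: b divides a
        a, b = b, r           # feed the pair back into the box

print(euclid(252, 105))  # 21
```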
There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university.
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j} \operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of row?
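Not a proof, but the claimed independence is easy to check numerically; a small sketch (the matrix and names are my own illustration), with the expansion row as a parameter:

```python
def det(A, row=0):
    """Cofactor (Laplace) expansion of det(A) along a chosen row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete the chosen row and column j
        minor = [r[:j] + r[j+1:] for k, r in enumerate(A) if k != row]
        total += (-1) ** (row + j) * A[row][j] * det(minor)
    return total

A = [[2, 1, 3], [0, 4, 1], [5, 2, 2]]
print([det(A, row=r) for r in range(3)])  # [-43, -43, -43]
```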
Let $M$ and $N$ be $\mathbb{Z}$-modules and let $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator.
Straight Lines: Application of Straight Lines

- The equation of a line parallel to $ax + by + c = 0$ is of the form $ax + by + k = 0$, $k \in \mathbb{R}$.
- The equation of a line perpendicular to $ax + by + c = 0$ is of the form $bx - ay + k = 0$.
- The equation of a line passing through $(x_1, y_1)$ and parallel to $ax + by + c = 0$ is $a(x - x_1) + b(y - y_1) = 0$.
- The equation of a line passing through $(x_1, y_1)$ and perpendicular to $ax + by + c = 0$ is $b(x - x_1) - a(y - y_1) = 0$.
- The image of the point $P(x_1, y_1)$ with respect to the X-axis is $Q(x_1, -y_1)$.
- The image of the point $P(x_1, y_1)$ with respect to the Y-axis is $Q(-x_1, y_1)$.
- The image of the point $P(x_1, y_1)$ with respect to the mirror $y = x$ is $Q(y_1, x_1)$.
- The image of the point $P(x_1, y_1)$ with respect to the origin is the point $(-x_1, -y_1)$.
- The point $(x_1, y_1)$ lies between the parallel lines $ax + by + c_1 = 0$ and $ax + by + c_2 = 0$, or does not lie between them, according as $\frac{ax_1+by_1+c_1}{ax_1+by_1+c_2}$ is negative or positive.
1. Let the image of a point $(x_1, y_1)$ with respect to $ax + by + c = 0$ be $(x_2, y_2)$; then $\frac{x_2-x_1}{a}=\frac{y_2-y_1}{b}=\frac{-2(ax_1+by_1+c)}{a^2+b^2}$.
2. The point of intersection of the two lines $a_1x + b_1y + c_1 = 0$ and $a_2x + b_2y + c_2 = 0$ is $\left(\frac{b_1c_2-b_2c_1}{a_1b_2-a_2b_1},\frac{c_1a_2-c_2a_1}{a_1b_2-a_2b_1}\right)$.
3. The length of the perpendicular from a point $(x_1, y_1)$ to a line $ax + by + c = 0$ is $\left|\frac{ax_1+by_1+c}{\sqrt{a^2+b^2}}\right|$.
4. The foot of the perpendicular $(h, k)$ from $(x_1, y_1)$ to the line $ax + by + c = 0$ is given by $\frac{h-x_1}{a}=\frac{k-y_1}{b}=-\frac{ax_1+by_1+c}{a^2+b^2}$.
5. (a) The foot of the perpendicular from $(a, b)$ on $x - y = 0$ is $\left(\frac{a+b}{2},\frac{a+b}{2}\right)$. (b) The foot of the perpendicular from $(a, b)$ on $x + y = 0$ is $\left(\frac{a-b}{2},-\frac{a-b}{2}\right)$.
6. The image of the line $a_1x + b_1y + c_1 = 0$ about the line $ax + by + c = 0$ is $2(aa_1 + bb_1)(ax + by + c) = (a^2 + b^2)(a_1x + b_1y + c_1)$.
7. The equation of a line parallel to, and lying midway between, the two parallel lines $ax + by + c_1 = 0$ and $ax + by + c_2 = 0$ is $ax + by + \frac{c_1+c_2}{2} = 0$.
8. If $(h, k)$ is the foot of the perpendicular from $(x_1, y_1)$ to the line $ax + by + c = 0$, then $\frac{h-x_1}{a}=\frac{k-y_1}{b}=\frac{-(ax_1+by_1+c)}{a^2+b^2}$ (as in 4).
9. If $(h, k)$ is the image (reflection) of the point $(x_1, y_1)$ w.r.t. the line $ax + by + c = 0$, then $\frac{h-x_1}{a}=\frac{k-y_1}{b}=\frac{-2(ax_1+by_1+c)}{a^2+b^2}$ (as in 1).
10. If $B$ is the image of $A$ w.r.t. $P$, then $2P = A + B$.
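As an illustration, the image formula (items 1 and 9) and the foot-of-perpendicular formula (items 4 and 8) translate directly into code (a minimal sketch; the function names are mine):

```python
def reflect(x1, y1, a, b, c):
    """Image of (x1, y1) in the line ax + by + c = 0."""
    t = -2.0 * (a * x1 + b * y1 + c) / (a * a + b * b)
    return x1 + a * t, y1 + b * t

def foot(x1, y1, a, b, c):
    """Foot of the perpendicular from (x1, y1) to ax + by + c = 0."""
    t = -1.0 * (a * x1 + b * y1 + c) / (a * a + b * b)
    return x1 + a * t, y1 + b * t

print(reflect(1, 0, 1, -1, 0))  # mirror y = x sends (1, 0) to (0.0, 1.0)
print(foot(1, 0, 1, -1, 0))     # foot is the midpoint (0.5, 0.5)
```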
Browse by Person

Article
Aaboud, M, Aad, G, Abbott, B et al. (2705 more authors) (2017)
Reconstruction of primary vertices at the ATLAS experiment in Run 1 proton-proton collisions at the LHC. European Physical Journal C, 77 (5). 332. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2017)
Performance of algorithms that reconstruct missing transverse momentum in √s = 8 TeV proton-proton collisions in the ATLAS detector. The European Physical Journal C, 77 (4). 241. ISSN 1434-6044
Aaboud, M, Aad, G, Abbott, B et al. (2858 more authors) (2017)
Measurement of the tt¯Z and tt¯W production cross sections in multilepton final states using 3.2 fb−1 of pp collisions at √s = 13 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 77. 40. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2883 more authors) (2016)
Addendum to ‘Measurement of the tˉt production cross-section using eμ events with b-tagged jets in pp collisions at √s = 7 and 8 TeV with the ATLAS detector’. European Physical Journal C: Particles and Fields, 76. 642. ISSN 1434-6044
Aaboud, M, Aad, G, Abbott, B et al. (2850 more authors) (2016)
Search for the Standard Model Higgs boson produced by vector-boson fusion and decaying to bottom quarks in √s = 8 TeV pp collisions with the ATLAS detector. Journal of High Energy Physics, 2016 (11).
Aad, G, Abbott, B, Abdallah, J et al. (2855 more authors) (2016)
Performance of pile-up mitigation techniques for jets in pp collisions at √s=8 TeV using the ATLAS detector. European Physical Journal C, 76 (11). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2851 more authors) (2016)
Search for gluinos in events with an isolated lepton, jets and missing transverse momentum at √s = 13 Te V with the ATLAS detector. European Physical Journal C, 76 (10). ISSN 1434-6044
Aaboud, M, Aad, G, Abbott, B et al. (2857 more authors) (2016)
Search for top squarks in final states with one isolated lepton, jets, and missing transverse momentum in collisions with the ATLAS detector. Physical Review D, 94 (5). ISSN 1550-7998
Aaboud, M, Aad, G, Abbott, B et al. (2852 more authors) (2016)
Search for pair production of Higgs bosons in the bb¯bb¯ final state using proton-proton collisions at s√=13 TeV with the ATLAS detector. Physical Review D, 94 (5). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2860 more authors) (2016)
Measurement of the inclusive isolated prompt photon cross section in pp collisions at root s=8 TeV with the ATLAS detector. Journal of High Energy Physics (8). ARTN 005. pp. 1-42.
Aaboud, M, Aad, G, Abbott, B et al. (2861 more authors) (2016)
Search for TeV-scale gravity signatures in high-mass final states with leptons and jets with the ATLAS detector at root s=13 TeV. Physics Letters B, 760. pp. 520-537. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (2866 more authors) (2016)
Measurement of event-shape observables in Z→ℓ+ℓ- events in pp collisions at √s=7 TeV with the ATLAS detector at the LHC. The European Physical Journal C - Particles and Fields, 76 (7). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2866 more authors) (2016)
Measurement of event-shape observables in Z→ℓ+ℓ- events in pp collisions at √s=7 TeV with the ATLAS detector at the LHC. The European Physical Journal C, 76. 375. ISSN 1434-6044
Aaboud, M, Aad, G, Abbott, B et al. (2849 more authors) (2016)
Search for metastable heavy charged particles with large ionization energy loss in pp collisions at √s=13 TeV using the ATLAS experiment. Physical Review D, 93 (11). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2867 more authors) (2016)
Identification of high transverse momentum top quarks in pp collisions at √s=8 TeV with the ATLAS detector. Journal of High Energy Physics, 2016. 93. ISSN 1029-8479
Aaboud, M, Aad, G, Abbott, B et al. (2852 more authors) (2016)
Measurement of the relative width difference of the B⁰-B¯0 system with the ATLAS detector. Journal of High Energy Physics, 2016 (6). 81. ISSN 1029-8479
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Measurement of the charged-particle multiplicity inside jets from √ s =8 TeV pp collisions with the ATLAS detector. European Physical Journal C: Particles and Fields , 76 (6). 322. ISSN 1434-6044
Aaboud, M, Aad, G, Abbott, B et al. (2852 more authors) (2016)
Search for new phenomena in events with a photon and missing transverse momentum in pp collisions at √s=13 TeV with the ATLAS detector. Journal of High Energy Physics, 2016 (6).
Aaboud, M, Aad, G, Abbott, B et al. (2852 more authors) (2016)
Search for new phenomena in events with a photon and missing transverse momentum in pp collisions at √s=13 TeV with the ATLAS detector. Journal of High Energy Physics, 2016. 59. ISSN 1029-8479
Aad, G, Abbott, B, Abdallah, J et al. (2715 more authors) (2016)
Measurements of and production in collisions at with the ATLAS detector. Physical Review D, 93 (11). ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2837 more authors) (2016)
Search for the Standard Model Higgs boson decaying into bb¯ produced in association with top quarks decaying hadronically in pp collisions at √s = 8 TeV with the ATLAS detector. Journal of High Energy Physics, 2016 (5).
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Reconstruction of hadronic decay products of tau leptons with the ATLAS experiment. The European Physical Journal C, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2867 more authors) (2016)
Measurement of the transverse momentum and φ∗η distributions of Drell–Yan lepton pairs in proton–proton collisions at √s = 8 TeV with the ATLAS detector. The European Physical Journal C - Particles and Fields, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2841 more authors) (2016)
Muon reconstruction performance of the ATLAS detector in proton–proton collision data at √s=13 TeV. The European Physical Journal C, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2862 more authors) (2016)
Measurement of the differential cross-sections of prompt and non-prompt production of J/ψ and ψ(2S) in pp collisions at √s=7 and 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (5). 283. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2866 more authors) (2016)
Search for the standard model Higgs boson produced in association with a vector boson and decaying into a tau pair in pp collisions sqrt s = 8 TeV at with the ATLAS detector. Physical Review D, 93 (9). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2860 more authors) (2016)
Measurements of production cross sections in collisions at with the ATLAS detector and limits on anomalous gauge boson self-couplings. Physical Review D, 93 (9). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2856 more authors) (2016)
Search for supersymmetry at √s = 13 TeV in final states with jets and two same-sign leptons or three leptons with the ATLAS detector. European Physical Journal C, 76 (5). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2868 more authors) (2016)
Observation of Long-Range Elliptic Azimuthal Anisotropies in √s = 13 and 2.76 TeV pp Collisions with the ATLAS Detector. Physical Review Letters, 116 (17). ARTN 172301. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Probing lepton flavour violation via neutrinoless τ⟶3μ decays with the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (5). 232. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2871 more authors) (2016)
Search for dark matter produced in association with a Higgs boson decaying to two bottom quarks in pp collisions at √s = 8 TeV with the ATLAS detector. Physical Review D, 93 (7). ARTN 072007. ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2863 more authors) (2016)
Search for new phenomena in events with at least three photons collected in pp collisions at √s = 8 TeV with the ATLAS detector. The European Physical Journal C, 76 (4). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2863 more authors) (2016)
Search for anomalous couplings in the W tb vertex from the measurement of double differential angular decay rates of single top quarks produced in the t-channel with the ATLAS detector. Journal of High Energy Physics, 2016 (4).
Aad, G, Abbott, B, Abdallah, J et al. (2782 more authors) (2016)
Search for magnetic monopoles and stable particles with high electric charges in 8 TeV pp collisions with the ATLAS detector. Physical Review D, 93 (5). 052009. ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2772 more authors) (2016)
Measurement of the ZZ Production Cross Section in pp Collisions at root s=13 TeV with the ATLAS Detector. Physical Review Letters, 116 (10). 101801. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2769 more authors) (2016)
Search for new phenomena in dijet mass and angular distributions from pp collisions at root s=13 TeV with the ATLAS detector. Physics Letters B, 754. pp. 302-322. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (2844 more authors) (2016)
Search for new phenomena with photon plus jet events in proton-proton collisions at TeV with the ATLAS detector. Journal of High Energy Physics (3). 41. ISSN 1029-8479
Aad, G, Abbott, B, Abdallah, J et al. (2835 more authors) (2016)
Search for strong gravity in multijet final states produced in pp collisions at root s=13 TeV using the ATLAS detector at the LHC. Journal of High Energy Physics. 26. ISSN 1029-8479
Aad, G, Abbott, B, Abdallah, J et al. (2865 more authors) (2016)
Search for the electroweak production of supersymmetric particles in root s=8 TeV pp collisions with the ATLAS detector. Physical Review D, 93 (5). 052002. ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2794 more authors) (2016)
Search for the electroweak production of supersymmetric particles in root s=8 TeV pp collisions with the ATLAS detector. Physical Review D, 93 (5). 052002. ISSN 2470-0010
Aad, G, Abbott, B, Abdallah, J et al. (2879 more authors) (2016)
Centrality, rapidity, and transverse momentum dependence of isolated prompt photon production in lead-lead collisions at TeV measured with the ATLAS detector. Physical Review C, 93 (3). ISSN 0556-2813
Aad, G, Abbott, B, Abdallah, J et al. (2856 more authors) (2016)
Search for invisible decays of a Higgs boson using vector-boson fusion in pp collisions at √s=8 TeV with the ATLAS detector. Journal of High Energy Physics, 2016. 172. ISSN 1126-6708
Aad, G, Abbott, B, Abdallah, J et al. (2862 more authors) (2016)
Search for a high-mass Higgs boson decaying to a W boson pair in pp collisions at √s = 8 TeV with the ATLAS detector. Journal of High Energy Physics, 2016 (1).
Aad, G, Abbott, B, Abdallah, J et al. (2854 more authors) (2016)
Measurements of fiducial cross-sections for tt¯ production with one or two additional b-jets in pp collisions at √s = 8 TeV using the ATLAS detector. European Physical Journal C: Particles and Fields, 76 (1). 11. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2824 more authors) (2016)
Measurements of the Higgs boson production and decay rates and coupling strengths using pp collision data at √s = 7 and 8 TeV in the ATLAS experiment. European Physical Journal C: Particles and Fields, 76. 6. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2854 more authors) (2015)
ATLAS Run 1 searches for direct pair production of third-generation squarks at the Large Hadron Collider. European Physical Journal C: Particles and Fields, 75 (10). 510. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2835 more authors) (2015)
Search for Higgs boson pair production in the bb¯bb¯ final state from pp collisions at √s = 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (9). 412. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2825 more authors) (2015)
Search for heavy long-lived multi-charged particles in pp collisions at root s=8 TeV using the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (8). 362. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2819 more authors) (2015)
Constraints on the off-shell Higgs boson signal strength in the high-mass ZZ and WW final states with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (7). 335. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2015)
Search for a new resonance decaying to a W or Z boson and a Higgs boson in the ℓℓ/ℓν/νν + bb¯ final states with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (6). 263. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2823 more authors) (2015)
Determination of spin and parity of the Higgs boson in the WW* → eνμν decay channel with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 231. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2815 more authors) (2015)
Observation and measurements of the production of prompt and non-prompt J/ψ mesons in association with a Z boson in pp collisions at √s = 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 229. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2821 more authors) (2015)
Search for direct pair production of a chargino and a neutralino decaying to the 125 GeV Higgs boson in √s = 8 TeV pp collisions with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 208. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2015)
Search for W′ → tb → qqbb decays in pp collisions at √s = 8 TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (4). 165. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2015)
Search for Higgs and Z Boson Decays to J/ψγ and ϒ(nS)γ with the ATLAS Detector. Physical Review Letters, 114 (12). 121801. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2015)
Search for dark matter in events with heavy quarks and missing transverse momentum in pp collisions with the ATLAS detector. European Physical Journal C, 75 (2). 92. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2467 more authors) (2015)
Search for dark matter in events with heavy quarks and missing transverse momentum in pp collisions with the ATLAS detector. European Physical Journal C , 75 (2). 92. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2896 more authors) (2015)
Measurements of Higgs boson production and couplings in the four-lepton channel in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D, 91 (1). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2014)
Search for nonpointing and delayed photons in the diphoton and missing transverse momentum final state in 8 TeV pp collisions at the LHC using the ATLAS detector. Physical Review D, 90 (11). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2886 more authors) (2014)
Measurement of the Higgs boson mass from the H→γγ and H→ZZ∗→4ℓ channels in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D, 90 (5). ISSN 1550-7998 |
Under what conditions is it possible, using a suitable change of variables, to eliminate 1st order terms in an elliptic partial differential equation, so that the equation involves the 2nd derivatives, the dependent variable, and independent terms only?
To be concrete, consider the elliptic equation $-\Delta u + \sum_i \frac{d u}{dx^i} a^i + f(x)=0$.
If the $a^i$ are constant, define $u(x) = v(x) e^{\frac{1}{2}\sum_j a^j x^j}$ and obtain
$-\Delta v + \frac{1}{4} v \sum_i a^i a^i + f(x)e^{-\frac{1}{2}\sum_j a^j x^j}=0$, an elliptic equation without 1st order terms.
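The constant-coefficient calculation can be verified symbolically; here is a quick SymPy check of the 1D case (assuming SymPy is available), showing that the substitution $u = v\,e^{ax/2}$ removes the first-order term from $-u'' + a u' + f = 0$:

```python
import sympy as sp

# Symbolic check (1D case): substituting u = v * exp(a*x/2) into
# -u'' + a*u' + f = 0 and multiplying by exp(-a*x/2) should leave
# -v'' + (a^2/4) v + f * exp(-a*x/2) = 0, with no v' term.
x, a = sp.symbols('x a')
v = sp.Function('v')
f = sp.Function('f')
u = v(x) * sp.exp(a * x / 2)
expr = sp.expand((-sp.diff(u, x, 2) + a * sp.diff(u, x) + f(x)) * sp.exp(-a * x / 2))
target = -sp.diff(v(x), x, 2) + a**2 / 4 * v(x) + f(x) * sp.exp(-a * x / 2)
assert sp.simplify(expr - target) == 0
```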
If the $a^i$ are not constant or if the equation is quasilinear, the problem is harder. It can be approached using contact transformations and Cartan's method of equivalence, but I am not aware of results. |
Define:
$$q_\alpha(F_L)=F^{\leftarrow}(\alpha)=\inf\{x\in \mathbb{R}\mid F_L(x)\geq \alpha\}=\mathrm{VaR}_\alpha(L)$$
I want to prove that:
$$ES_\alpha = \frac{1}{1-\alpha}\mathbb{E}[\mathbb{1}_{\{L\geq q_\alpha(L)\}}\cdot L] \overset{!!!}{=}\mathbb{E}[L\mid L\geq q_\alpha(L)]$$
I get stuck as:
$$\mathbb{E}[\mathbb{1}_{\{L\geq q_\alpha(L)\}}\cdot L]= \mathbb{E}[\mathbb{E}[\mathbb{1}_{\{L\geq q_\alpha(L)\}}\cdot L\mid L\geq q_\alpha(L)]] = \mathbb{E}[\mathbb{1}_{\{L\geq q_\alpha(L)\}}\cdot\mathbb{E}[L\mid L\geq q_\alpha(L)]]$$
Now I would like to use that $\Pr(L\geq q_\alpha(L) \ )=1-\alpha$, but I don't know how to proceed. |
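For intuition, the claimed equality can be checked numerically in a concrete continuous case. The sketch below takes $L \sim \mathrm{Exp}(1)$, where $q_\alpha = -\ln(1-\alpha)$ and both sides should equal $q_\alpha + 1$ (sample size and tolerances are arbitrary choices):

```python
import math
import random

# Monte Carlo check of ES_alpha for L ~ Exp(1):
# (1/(1-alpha)) E[1{L>=q} L]  vs  E[L | L >= q],  both ≈ q + 1.
random.seed(0)
alpha = 0.95
q = -math.log(1 - alpha)                       # VaR_alpha for Exp(1)
n = 200_000
samples = [random.expovariate(1.0) for _ in range(n)]
tail = [s for s in samples if s >= q]
es_indicator = sum(tail) / ((1 - alpha) * n)   # indicator form
es_conditional = sum(tail) / len(tail)         # conditional form
# both estimates should be close to q + 1 (memorylessness of Exp(1))
```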
Integrals: Integration by Partial Fractions. Tips for integrating functions of the form \int \frac{f(x)}{g(x)}\,dx, where the degree of f(x) is less than the degree of g(x). Process: decompose the integrand into partial fractions by the following procedure: \frac{1}{(x+a)(x+b)} = \frac{A}{x+a} + \frac{B}{x+b}; \frac{1}{(x+a)^{2}(x+b)} = \frac{A}{x+a} + \frac{B}{(x+a)^{2}} + \frac{C}{x+b}; \frac{1}{(x+a)(x^{2}+b)} = \frac{A}{x+a} + \frac{Bx+C}{x^{2}+b}
S.No | Form of the rational function | Form of the partial fraction
1. \frac{px+q}{(x-a)(x-b)},\ a\neq b : \frac{A}{x-a}+\frac{B}{x-b}
2. \frac{px+q}{(x-a)^2} : \frac{A}{x-a}+\frac{B}{(x-a)^2}
3. \frac{px^2+qx+r}{(x-a)(x-b)(x-c)} : \frac{A}{x-a}+\frac{B}{x-b}+\frac{C}{x-c}
4. \frac{px^2+qx+r}{(x-a)^2(x-b)} : \frac{A}{x-a}+\frac{B}{(x-a)^2}+\frac{C}{x-b}
5. \frac{px^2+qx+r}{(x-a)(x^2+bx+c)} : \frac{A}{x-a}+\frac{Bx+C}{x^2+bx+c}
where x^2 + bx + c cannot be factorised further |
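The decompositions in the table can be verified symbolically. The sketch below checks row 4 with arbitrary example coefficients, using SymPy's `apart` (assuming SymPy is available):

```python
import sympy as sp

# Illustrative check of row 4: (px^2 + qx + r) / ((x-a)^2 (x-b)) splits into
# A/(x-a) + B/(x-a)^2 + C/(x-b).  The concrete numbers are arbitrary examples.
x = sp.symbols('x')
expr = (2*x**2 + 3*x + 1) / ((x - 1)**2 * (x + 2))
parts = sp.apart(expr, x)
# recombining the partial fractions must give back the original function
assert sp.simplify(sp.together(parts) - expr) == 0
```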
The vacuum permittivity appears originally in Maxwell's equations, used to describe electric fields. The permeability of vacuum was defined using Ampère's force law (itself derived from the Biot–Savart law): if two current-carrying wires were spaced one metre apart, each carrying one ampere of current, the force per metre of length exerted on each wire would be exactly $2\times10^{-7}$ N. This also defined the ampere. Therefore, the value of the vacuum permeability was fixed to $4\pi\times10^{-7}\ \mathrm{H/m}$ by definition.
Using his laws, Maxwell was able to produce a wave equation with the speed:
$${\displaystyle c={1 \over {\sqrt {\mu _{0}\varepsilon _{0}}}}.}$$
This turned out to be the same as the speed of light (already measured using other means), so light was deduced to be an electromagnetic wave. Here $c$ was already known, $\mu_{0}$ was defined, but how was $\epsilon_{0}$ found? Was it experimentally determined?
If so, how was it found in Maxwell's times? Today, also the speed of light is defined exactly, so $\epsilon_{0}$ also now has a defined value, but clearly this was not always the case.
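As a quick numerical sanity check: once $c$ and $\mu_0$ are both fixed (as in the pre-2019 SI), $\varepsilon_0$ follows immediately from the wave-speed relation above:

```python
import math

# With mu_0 = 4*pi*1e-7 H/m (defined) and c = 299_792_458 m/s (defined),
# epsilon_0 follows from c = 1/sqrt(mu_0 * eps_0).
mu_0 = 4 * math.pi * 1e-7
c = 299_792_458
eps_0 = 1 / (mu_0 * c**2)
# eps_0 ≈ 8.854e-12 F/m
assert abs(eps_0 - 8.854e-12) / 8.854e-12 < 1e-3
```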
The reason I'm asking this is because research into this turns up a huge amount of contradicting information and circular logic, so I want to be clear on this. |
2019 Volume 60 Issue 3 Pages 374-378
The equilibrium between metallic titanium and titanium ions, 3Ti²⁺ = 2Ti³⁺ + Ti, in the MgCl₂–LiCl molten salt system was re-evaluated by means of best fitting. The measurement was also carried out for MgCl₂–LiCl melts with various compositions of LiCl at 1023 K. The results illustrate that the values of K_c correspond well with the composition of the melt as characterized by its polarizing power.
The wide application of titanium in industrial or domestic use has been limited by its high cost, caused by the complicated production process, the Kroll process.1) To explore a promising low-cost process, several methods have been proposed and widely investigated. However, the prediction that the Kroll process would be replaced by an electrochemical route has not been fulfilled;2-7) attempts involving the electro-deposition of titanium from molten salts have been hampered by the difficulties in eliminating the redox cycling of multivalent titanium ions.2) Thus, for understanding the mechanism, investigation of the equilibrium between titanium ions and metallic titanium is very important.
It has been reported that the electrode reduction from Ti³⁺ to metallic Ti takes a two-step process, Ti³⁺ → Ti²⁺ → Ti, in alkali metal chloride melts except pure CsCl.8,9) In comparison, Ti³⁺ was directly reduced to titanium in fluoride melts.10) Indeed, the stability of the titanium ions depends on the bath composition. Ti²⁺ is more stable in alkali chloride melts, whereas in fluoride melts the higher oxidation states Ti⁴⁺ and Ti³⁺ are more stable. It has been reported that an equilibrium exists among Ti²⁺, Ti³⁺, and metallic titanium in most chloride melts, which can be expressed by eq. (1):
\begin{equation} \text{3Ti$^{2+}$} = \text{2Ti$^{3+}$} + \text{Ti} \end{equation} (1)
\begin{equation} K_{c} = \frac{\alpha_{\textit{Ti}^{3+}}^{2}\alpha_{\textit{Ti}}}{\alpha_{\textit{Ti}^{2+}}^{3}} = \frac{x_{\textit{Ti}^{3+}}^{2}\gamma_{\textit{Ti}^{3+}}^{2}}{x_{\textit{Ti}^{2+}}^{3}\gamma_{\textit{Ti}^{2+}}^{3}} \end{equation} (2)
\begin{equation} x_{i} = \frac{n_{i}}{n_{\textit{Mg}^{2+}} + n_{\textit{Li}^{+}} + n_{\textit{Ti}^{2+}} + n_{\textit{Ti}^{3+}}} \end{equation} (3)
\begin{equation} K_{c} = \frac{x_{\textit{Ti}^{3+}}^{2}}{x_{\textit{Ti}^{2+}}^{3}} \end{equation} (4)
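Equation (4) is straightforward to evaluate; the sketch below uses made-up molar fractions for illustration (the paper's actual values are in its figures, not reproduced here):

```python
# Minimal illustration of eq. (4), K_c = x_Ti3+^2 / x_Ti2+^3.
# The molar fractions are made-up example numbers, not data from this study.
x_Ti3 = 0.010                  # example x_Ti3+
x_Ti2 = 0.020                  # example x_Ti2+
K_c = x_Ti3**2 / x_Ti2**3      # eq. (4)
assert abs(K_c - 12.5) < 1e-9
```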
Though the equilibrium constant of eq. (1) has been investigated by many researchers, studies in the MgCl₂–LiCl system have seldom been reported. Previous work in our group shows that the divalent species is more stable in the alkali cation series when the ionic radius is smaller. With the smallest cation radius among the alkali metals, the MgCl₂–LiCl system favors the stability of Ti²⁺, which would help suppress the disproportionation reaction. Thus, it is worth studying the compositional influence on the equilibrium constant of the reaction 3Ti²⁺ = 2Ti³⁺ + Ti in molten MgCl₂–LiCl.
Commercially available anhydrous MgCl₂ and anhydrous LiCl were used (Sinopharm Chemical Reagent Co., Ltd., analytical grade, ≥99.9% and 99.9%, respectively). Before use as solvents, both MgCl₂ and LiCl were purified by bubbling HCl gas through the melts separately to remove O²⁻ dissolved in the salts, and were filtered through a quartz filter.21) Titanium sub-chloride salts were prepared by the disproportionation of titanium tetrachloride (TiCl₄) and metallic titanium in the solvent salts.
The salts containing titanium dichloride and excess metallic titanium were held at a designed temperature to reach equilibrium. High-purity argon gas was used to stir the molten salt to speed up the reaction between titanium ions and titanium metal. The quartz sampler consists of an injector and a quartz tube; the injector at the top of the quartz tube (6 mm diameter) is sealed by a rubber plug. With the sampling point fixed at the same location, four parallel samples were taken out of the molten salt by the quartz sampler for analysis in each experiment, and the average value of the concentrations of titanium ions was taken as the result.
The quantitative analysis of the different oxidation states of titanium ions consists of three main steps. The concentrations of Ti²⁺ and Ti³⁺ in each sample were determined by H₂ volumetric analysis and titration, respectively. Finally, it was confirmed by diantipyryl methane spectrophotometry that the concentration of Ti⁴⁺ after titration was equal to the concentration of Ti²⁺ plus Ti³⁺.
In order to investigate the influence of composition on the equilibrium constant of titanium ions, various molar percentages of LiCl were added to the molten MgCl₂–LiCl used as the study electrolyte.
Figure 1(a) shows the relationship between the molar fractions (mole percent) of divalent titanium ($x_{Ti^{2+}}$) and trivalent titanium ($x_{Ti^{3+}}$) in the MgCl₂–LiCl (28 mol%, 72 mol%) melt at 973 K, 1023 K and 1073 K, respectively. The salt composition MgCl₂–LiCl (28 mol%, 72 mol%) was chosen because at this ratio the system has the lowest liquid transition point.
Figure 1: $x_{Ti^{3+}}$ and K_c against $x_{Ti^{2+}}$, (a) and (b), in the MgCl₂–LiCl eutectic melt ($x_{LiCl}$ = 72 mol%) at 973 K, 1023 K and 1073 K, respectively, in the presence of metallic titanium.
The equilibrium constant curves for K_c = 0.1, 0.5, 1.0 and 1.5, calculated from $x_{Ti^{2+}}$, are also shown in Fig. 1(a). At all temperatures, the concentration of divalent titanium, $x_{Ti^{2+}}$, was slightly higher than that of trivalent titanium, $x_{Ti^{3+}}$. Moreover, the K_c values decreased from 1.5 to 0.5 as the concentration of Ti²⁺ ($x_{Ti^{2+}}$) increased. The relationship between K_c and the analyzed concentrations is given by eq. (5):
\begin{equation} x_{\textit{Ti}^{2+}}^{\textit{anal.}} = \left\{\frac{[(x_{\textit{Ti}^{3+}}^{\textit{anal.}} - x_{O^{2-}}^{\textit{init.}})+\sqrt{(x_{\textit{Ti}^{3+}}^{\textit{anal.}} - x_{O^{2-}}^{\textit{init.}})^{2} + 4K_{\text{sp}}}]^{2}}{4K_{\text{c}}}\right\}^{\frac{1}{3}} \end{equation} (5)
In the best-fitting treatment, K_c, K_sp, and the initial $x_{O^{2-}}$ were set as the parameters, and eq. (5) was employed to fit the experimental data $x_{Ti^{2+}}^{anal.}$ and $x_{Ti^{3+}}^{anal.}$, in view of the fact that they are constants at a fixed temperature. The details of this process have been described in our previous papers.15-18)
Figures 2(a), (b) and (c) show the best-fitting curves of the titanium ions at different temperatures, which correspond well with the experimental data. In addition, for comparison of the equilibrium constant, K_c values calculated from the experimental data ($x_{Ti^{2+}}$ and $x_{Ti^{3+}}$) are also plotted in Fig. 2 as dashed lines.
The relationship between the best-fitting curves and the experimental data points (a) at 1073 K, (b) at 1023 K and (c) at 973 K.
Figures 3(a), (b) and (c) show the outcomes of the equilibrium constant K_c after best fitting. Also, the original
The relationship between the K_c of the experimental data and the best-fitting parameters (a) at 973 K, (b) 1023 K and (c) 1073 K for titanium ions in MgCl₂–LiCl.
It has been reported that the K_c value is related to the composition of the molten salt at a fixed temperature.
The relationship between the molar fraction of LiCl in molten MgCl₂–LiCl and the equilibrium constant at 1023 K.
As far as we know, molten salts are made up entirely of cations and anions, and these ions can be classified into free ions and complex ions. The interaction between cations and anions cannot be ignored when discussing their stabilization. Hence, the polarizing power (P) of the component cations in the molten salt was employed to interpret the result. The r_i and P values are shown in Table 2.
The relationship between K_c and P is shown in Fig. 5.
The relationship between polarizing power of electrolyte and equilibrium constant at 1023 K.
It shows that the K_c values decrease with increasing polarizing power of MgCl₂–LiCl.
The equilibrium among metallic titanium and titanium ions, 3Ti²⁺ = 2Ti³⁺ + Ti, in molten MgCl₂–LiCl was evaluated by the best-fitting method. The results also show that the equilibrium constant K_c increased with increasing temperature. The polarizing power was used to illustrate the influence of the composition of the molten salt on the
The authors thank the support from the Collaborative Innovation Center of Henan Resources and Materials Industry, Zhengzhou University, and the Startup Research Fund of Zhengzhou University (No. 32210804). The authors are grateful to the National Natural Science Foundation of China (No. 51322402). |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Neumann boundary conditions are implemented by introducing ghost points outside the domain and then using the boundary conditions to eliminate the ghost points. For example, see this question.
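As a concrete illustration of that standard ghost-point trick (a minimal sketch for a single diffusion equation with zero-flux Neumann conditions, not the coupled system discussed in this question):

```python
import numpy as np

# Explicit step of u_t = a u_xx with zero-flux Neumann BCs, du/dx = 0 at
# both ends.  The ghost points are eliminated with the central difference
# (u[1] - u[-1]) / (2 dx) = 0  =>  u[-1] = u[1], and u[n] = u[n-2].
def step(u, a, dx, dt):
    n = len(u)
    un = u.copy()
    for j in range(n):
        left = u[j - 1] if j > 0 else u[1]           # ghost: u[-1] = u[1]
        right = u[j + 1] if j < n - 1 else u[n - 2]  # ghost: u[n] = u[n-2]
        un[j] = u[j] + a * dt / dx**2 * (left - 2 * u[j] + right)
    return un

# Diffuse a Gaussian bump; with zero flux at both walls the discrete total
# "mass" should stay (approximately) constant while the peak decays.
u = np.exp(-((np.linspace(0, 1, 51) - 0.5) ** 2) / 0.01)
mass0 = u.sum()
for _ in range(200):
    u = step(u, a=1.0, dx=0.02, dt=1e-4)
```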
I have a set of three equations in variables $(u,v,w)$ which are one-dimensional functions of $x$, and I cannot see how to eliminate the ghost points for the equation for the $w$ variable.

Background
The problem is something like this,
$ \frac{\partial u}{\partial t} = a_u\frac{\partial^2 u}{\partial x^2} + b_u\frac{\partial u}{\partial x} + f_u(x,u,v,w) \\ \frac{\partial v}{\partial t} = a_v\frac{\partial^2 v}{\partial x^2} + b_v\frac{\partial v}{\partial x} + f_v(x,u,v,w) \\ \frac{\partial w}{\partial t} = a_u\frac{\partial u}{\partial x} +a_v\frac{\partial v}{\partial x} + f_w(x,u,v,w) \\ $
The set of equations describe a two species advection-diffusion problem where the third equation couples to the other two. I am having problems applying Neumann boundary conditions to the third equation. Notice that the third equation does not contain a differential of $w$, but it does contain differentials of both $u$ and $v$.
In discretizated form (only showing the $w$ equation, where $\Delta t$ is the time step and $s_u$ and $s_v$ are the coefficients of the equations after discretization),
$ w_j^{n+1} - s_u\beta(u_{j+1}^{n+1} - u_{j-1}^{n+1}) - s_v\beta(v_{j+1}^{n+1} - v_{j-1}^{n+1}) = s_u(1-\beta)(u_{j+1}^{n} - u_{j-1}^{n}) + s_v(1-\beta)(v_{j+1}^{n} - v_{j-1}^{n}) + w_j^n + \Delta tf_{j}^n $
The problem
The above is a Crank-Nicolson discretization, but for the boundary points let's use an implicit scheme (setting $\beta=1$),
$ w_1^{n+1} - s_u(u_{2}^{n+1} - u_{0}^{n+1}) - s_v(v_{2}^{n+1} - v_{0}^{n+1}) = w_1^n + \Delta tf_{1}^n $
The ghost points $u_0^{n+1}$ and $v_0^{n+1}$ can be eliminated by using the boundary conditions for the $u$ and $v$ variables. However, applying a Neumann boundary condition to the $w$ variable gives,
$ \frac{\partial w}{\partial x} = \sigma |_{j=1} \\ \frac{1}{2 \Delta x} (w_{2}^{n+1} - w_{0}^{n+1} ) = \sigma $
The problem is that the boundary equation cannot be substituted into the equation for $j=1$, because that equation does not contain a term for $w_{0}^{n+1}$.

Question
Do you have any suggestions on how I can apply Neumann boundary conditions to this equation? |
Can n! be a perfect square when n is an integer greater than 1? (But is it possible, to prove without Bertrand's postulate. Because bertrands postulate is quite a strong result.)
Assume $n\geq 4$. By Bertrand's postulate there is a prime, call it $p$, such that $\frac{n}{2}<p<n$. Suppose $p^2$ divides $n!$. Then there would be another number $m$ with $p<m\leq n$ such that $p$ divides $m$. But then $\frac{m}{p}\geq 2$, so $m\geq 2p > n$, a contradiction. So $p$ divides $n!$ but $p^2$ does not, and hence $n!$ is not a perfect square.
That leaves two more cases. We check directly, $2!=2$ and $3!=6$ are not perfect squares.
There is a prime between n/2 and n, if I am not mistaken.
Hopefully this is a little more intuitive (although quite a bit longer) than the other answers up here.
Let's begin by stating a simple fact : (1) when factored into its prime factorization, any perfect square will have an even number of each prime factor.
If $n$ is a prime number, then $n$ will not repeat in any of the other factors of $n!$, meaning that $n!$ cannot be a perfect square (1). Now consider composite $n$. Then $n!$ will contain at least two prime factors ($n=4$ is the smallest composite number satisfying the constraints), so let us call $p$ the largest prime factor of $n!$
The only way that $n!$ can be a perfect square is if $n!$ contains $p$ and a second multiple of $p$ (1). Obviously, this multiple must be greater than $p$ and at most $n$.
Using Bertrand's postulate, we know that there exists an additional prime number, let's say $p'$, such that $p < p' < 2p$. Because $p$ is the largest prime factor of $n!$, we know that $p' > n$ (If it were the opposite, then we would reach a contradiction).
Thus it follows that $2p > p' > n$. Because $2p$ is the smallest multiple of $p$ and $2p > n$, then $n!$ only contains one factor of $p$. Therefore it is impossible for $n!$ to be a perfect square.
If $n$ is prime, then for $n!$ to be a perfect square, one of $n-1, n-2, ... , 2$ must contain n as a factor. But this means one of $n-1, n-2, ... , 2 \geq n$, which is impossible.
If $n$ is not prime, let $p$ be the largest prime less than $n$. No number less than $p$ contains $p$ as a factor, so for $n!$ to be a perfect square there must exist a multiple of $p$, call it $bp$ with $1<b<n$, such that $p<bp\leq n$. Now, according to Chebyshev's theorem (Bertrand's postulate), for any $p$ there exists a prime $r$ with $p<r<2p$. Since $p$ is the largest prime below $n$ and $n$ is composite, we must have $r>n$, so $2p>r>n$ and no second multiple of $p$ fits below $n$. Hence such an $n!$ is never a perfect square. Hope this helps.
You can refer this.
Your statement has a generalization. There is a result by Erdős and Selfridge stating that the product of two or more consecutive natural numbers is never a perfect power. Here it is: http://ad.bolyai.hu/~p_erdos/1975-46.pdf.
Atmosphere
: <math>v_T = \sqrt{\frac{1250 \frac{\text{kg}}{\text{m}^2} \cdot GM}{r^2\, \rho}}</math>
For the pod and parachute example pictured above, the drag coefficient is 35.07, so its terminal velocity at sea level on Kerbin (which is 600 km from Kerbin's center) is:
: <math>v_T = \sqrt{\frac{250 \frac{\text{kg}}{\text{m}^2} \cdot GM}{r^2\, \rho \cdot 35.07}}</math>
Revision as of 16:01, 14 July 2014
The atmosphere of a celestial body slows the movement of any object passing through it, a force known as atmospheric drag (or simply drag). An atmosphere also allows for aerodynamic lift. The celestial bodies with atmospheres are the planets Eve, Kerbin, Duna and Jool, as well as Laythe, a moon of Jool. Only Kerbin and Laythe have atmospheres that contain oxygen and thus produce intake air for jet engines to work.
Atmospheric pressure diminishes exponentially with increasing altitude. An atmosphere's scale height is the distance over which atmospheric pressure changes by a factor of e (≈ 2.718). For example, Kerbin's atmosphere has a scale height of 5000 m, meaning the atmospheric pressure at altitude n is 2.718 times greater than the pressure at altitude n + 5000.
Atmospheres vary in temperature, though this has no bearing on gameplay.
Atmospheres allow aerobraking and easier landing. However, an atmosphere makes taking off from a planet more difficult and increases the minimum stable orbit altitude.
Drag
In the game, the force of atmospheric drag ($F_D$) is modeled as follows:[1]
: <math>F_D = \tfrac{1}{2}\, \rho\, v^2\, d\, A</math>
where ρ is the atmospheric density (kg/m³), v is the ship's velocity (m/s), d is the coefficient of drag (dimensionless), and A is the cross-sectional area (m²).
Note that the cross-sectional area is not actually calculated in the game. It is instead assumed to be directly proportional to the mass, which is an unrealistic simplification made by KSP. The parameter FlightGlobals.DragMultiplier indicates that the proportionality ratio is 0.008 m²/kg, so:
: <math>F_D = \tfrac{1}{2}\, \rho\, v^2\, d \cdot 0.008\,\tfrac{\text{m}^2}{\text{kg}} \cdot m</math>
where m is the ship's mass (kg).
The atmospheric density ρ is directly proportional to atmospheric pressure ($p$, in atm), which is a function of altitude, the atmosphere's pressure at altitude 0 ($p_0$), and scale height ($H$):
: <math>p = p_0 \, e^{-\mathrm{altitude}/H}, \qquad \rho = 1.2230948554874\,\tfrac{\text{kg}}{\text{m}^3\cdot\text{atm}} \cdot p</math>
where $p$ here is in units of atm and ρ in kg/m³. The conversion factor of 1.2230948554874 kg/(m³·atm) is given by FlightGlobals.getAtmDensity(1.0), which returns the density at 1 atmosphere (sea level on Kerbin) pressure.
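A small sketch of this pressure/density model. The exponential form of $p(\text{altitude})$ is assumed from the scale-height definition above:

```python
import math

KG_PER_M3_ATM = 1.2230948554874  # FlightGlobals.getAtmDensity(1.0)

def atm_density(alt, p0_atm, scale_height):
    """Density in kg/m^3 at a given altitude (m)."""
    p = p0_atm * math.exp(-alt / scale_height)  # pressure in atm
    return KG_PER_M3_ATM * p

# Kerbin: p0 = 1 atm, H = 5000 m; pressure drops by a factor of e per 5000 m
print(atm_density(0, 1.0, 5000.0) / atm_density(5000, 1.0, 5000.0))  # ≈ 2.718
```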
The coefficient of drag ($d$) is calculated as the mass-weighted average of the max_drag values of all parts on the ship. For most ships without deployed parachutes, $d$ will be very near 0.2, since this is the max_drag value of the vast majority of parts. Also, any group of identical parts always has the same drag coefficient.

Terminal velocity
The terminal velocity of an object falling through an atmosphere is the velocity at which the force of gravity is equal to the force of drag. Terminal velocity changes as a function of altitude. Given enough time, an object falling into the atmosphere will slow to terminal velocity and then remain at terminal velocity for the rest of its fall.
Terminal velocity is important because:
- It describes the amount of velocity which a spacecraft must burn away when it is close to the ground.
- It represents the speed at which a ship should be traveling upward during a fuel-optimal ascent.
The force of gravity ($F_G$) is:
: <math>F_G = \frac{G M m}{r^2}</math>
where $m$ is still the ship's mass, $G$ is the gravitational constant, $M$ is the mass of the planet, and $r$ is the distance from the center of the planet to the falling object.
To find terminal velocity, we set $F_G$ equal to $F_D$:
: <math>v_T = \sqrt{\frac{250 \frac{\text{kg}}{\text{m}^2} \cdot GM}{r^2\, \rho\, d}}</math>
Assuming $d$ is 0.2 (which is a good approximation, provided parachutes are not in use), this simplifies to:
: <math>v_T = \sqrt{\frac{1250 \frac{\text{kg}}{\text{m}^2} \cdot GM}{r^2\, \rho}}</math>
For the Mk1-2 pod and Mk16XL parachute example pictured above, the drag coefficient is 35.07, so its terminal velocity at sea level on Kerbin (which is 600 km from Kerbin's center) is:
: <math>v_T = \sqrt{\frac{250 \frac{\text{kg}}{\text{m}^2} \cdot GM}{r^2\, \rho \cdot 35.07}}</math>
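The relation above is easy to evaluate directly. In this sketch the Kerbin constants (GM and the sea-level density) are assumed values, not taken from this article:

```python
import math

def terminal_velocity(GM, r, rho, d):
    # v_T = sqrt(250 kg/m^2 * GM / (r^2 * rho * d))
    return math.sqrt(250.0 * GM / (r ** 2 * rho * d))

GM_KERBIN = 3.5316e12       # m^3/s^2 (assumed value)
R_SEA = 600_000.0           # m, distance from Kerbin's center at sea level
RHO_SEA = 1.2230948554874   # kg/m^3 at 1 atm

print(terminal_velocity(GM_KERBIN, R_SEA, RHO_SEA, 0.2))  # ≈ 100 m/s
```

The result matches the Kerbin sea-level entry in the examples table below.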
Examples
v_T (m/s):

Altitude (m) | Eve    | Kerbin | Duna   | Jool   | Laythe
0            | 58.385 | 100.13 | 212.41 | 23.124 | 115.62
100          | 58.783 | 101.01 | 214.21 | 23.162 | 116.32
1000         | 62.494 | 109.30 | 231.16 | 23.508 | 122.83
10000        | 115.27 | 240.52 | 495.18 | 27.272 | 211.77

On-rails physics
If a ship is "on rails" (meaning it's further than 2.25 km from the actively-controlled ship) and its orbit passes through a planet's atmosphere, one of two things will happen based on atmospheric pressure at the ship's altitude:
- below 0.01 atm: no atmospheric drag will occur — the ship will be completely unaffected
- 0.01 atm or above: the ship will disappear
The following table gives the altitude of this 0.01 atm threshold for each celestial body with an atmosphere:
Body   | Altitude (m)
Eve    | 0
Kerbin | 0
Duna   | 0
Jool   | 0
Laythe | 0

Atmospheric height
The atmospheric height depends on the scale height of the celestial body and is the altitude where 0.000001 (0.0001%) of the surface pressure remains, so the atmospheric pressure at the border isn't the same for every body. Technically a craft in Jool's orbit can get lower into the atmosphere (or rather, the atmosphere starts from a higher pressure).
Kerbin's atmosphere ends at 0.000001 atm; to calculate the atmospheric height for the other celestial bodies:
: <math>h = H \cdot \ln\!\left(10^{6}\right) \approx 13.816\, H</math>
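This cutoff rule gives a one-line formula, since $p_0\,e^{-h/H} = 10^{-6}\,p_0$ implies $h = H\ln(10^6)$. A small sketch:

```python
import math

# Atmospheric height where pressure falls to 1e-6 of the surface value:
# p0 * exp(-h / H) = 1e-6 * p0  =>  h = H * ln(1e6) ≈ 13.8155 * H
def atmosphere_height(scale_height):
    return scale_height * math.log(1e6)

print(atmosphere_height(5000.0))  # Kerbin, H = 5000 m -> about 69 078 m
```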
Besides standard math, trigonometry functions are helpful for simulating rotation and making games like Asteroids.
Key Commands
The trigonometric commands are sin, cos and tan.
- sin(θ) returns the sine of θ, which is defined as the y-value of the point of intersection of the unit circle and a line containing the origin that makes an angle θ with the positive x-axis.
- cos(θ) returns the cosine of θ, which is defined as the x-value of the point of intersection of the unit circle and a line containing the origin that makes an angle θ with the positive x-axis.
- tan(θ) returns the tangent of angle θ, which is defined as $\frac{\sin \theta}{\cos \theta}$.

tan is rarely needed, as most games that use trigonometry only require sin and cos.
Let's say you are making an object firing simulation and you are keeping track of the angle θ. When you fire, you can increment the location variables X and Y with speed A and angle θ using these formulas:
:X+Acos(θ→X
:Y+Asin(θ→Y
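For readers without a calculator handy, here is the same update written in Python (a hypothetical rendering, with θ in degrees as in the calculator's Degree mode):

```python
import math

def fire_step(x, y, speed, theta_deg):
    """One movement step: advance (x, y) by `speed` along angle theta_deg."""
    t = math.radians(theta_deg)
    return x + speed * math.cos(t), y + speed * math.sin(t)

# firing straight up (90 degrees): x stays ~0, y increases by the speed
x, y = fire_step(0.0, 0.0, 2.0, 90.0)
print(round(x, 9), round(y, 9))
```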
Examples
The following code illustrates and then calculates the area of a rectangle inscribed in a circle after the user presses [Enter].
:Input "RADIUS: ",R
:Input "ANGLE: ",θ
:AxesOff
:ZSquare
:Rcos(θ→A
:Rsin(θ→B
:Circle(0,0,R
:Line(0,0,A,B
:Line(A,B,-A,B
:Line(-A,B,-A,-B
:Line(-A,-B,A,-B
:Line(A,-B,A,B
:Pause 4abs(AB
The following code waits for the user to input an angle to fire at, using a line as a guide, and fires a projectile at that angle until the user presses [Clear].
:AxesOff
:DelVar XDelVar θ
:Degree
:ClrDraw
:94→Xmax
:62→Ymax
:0→Ymin
:Ans→Xmin
:Repeat K=21
:Line(0,0,2cos(θ),2sin(θ
:Repeat Ans
:getKey→K
:End
:Line(0,0,2cos(θ),2sin(θ),0
:max(0,min(90,θ+(Ans=25)-(Ans=34→θ
:End
:Repeat K=45
:X+2cos(θ→X
:Y+2sin(θ→Y
:Pt-On(X,Y
:getKey→K
:End
Of course, Earth has gravity, doesn't it? We can account for that pretty easily. To simplify the problem, we'll just assume that Earth is flat and that the projectile follows a parabolic path:
:AxesOff
:DelVar XDelVar θ
:Degree
:ClrDraw
:94→Xmax
:62→Ymax
:0→Ymin
:Ans→Xmin
:Repeat K=21
:Line(0,0,2cos(θ),2sin(θ
:Repeat Ans
:getKey→K
:End
:Line(0,0,2cos(θ),2sin(θ),0
:max(0,min(90,θ+(Ans=25)-(Ans=34→θ
:End
:2sin(θ→G
:Repeat K=45
:X+2cos(θ→X
:Y+G→Y
:G-.05→G
:Pt-On(X,Y
:getKey→K
:End
Now it follows a parabolic path! You can change gravity by changing the .05.
My favourite one: $[0, 1]$ is compact, i.e. every open cover of $[0, 1]$ has a finite subcover.
Proof: Suppose for a contradiction that there is an open cover $\mathcal{U}$ which does not admit any finite subcover. Then either $\left[ 0, \frac{1}{2} \right]$ or $\left[ \frac{1}{2}, 1 \right]$ cannot be covered with a finite number of sets from $\mathcal{U}$; call it $I_1$. Again, one of $I_1$'s two subintervals of length $\frac{1}{4}$ can't be covered with a finite number of sets from $\mathcal{U}$. Continuing, we get a descending sequence of intervals $I_n$ of length $\frac{1}{2^n}$, each of which cannot be finitely covered.
By the Cantor Intersection Theorem,
$$\bigcap_{n=1}^{\infty} I_n = \{ x \}$$
for some $x \in [0, 1].$ But there is some $U \in \mathcal{U}$ such that $x \in U$, and so $I_n \subseteq U$ for some sufficiently large $n$. That's a contradiction.
But given an arbitrary cover $\mathcal{U}$, I think finding a finite subcover may be a somewhat tedious task. :p
P.S. There actually comes a procedure from the proof above:
1. See if $[0, 1]$ itself is covered by one set from $\mathcal{U}$. If so, we're done.
2. If not, execute step 1 for $\left[ 0, \frac{1}{2} \right]$ and $\left[ \frac{1}{2}, 1 \right]$ to get their finite subcovers, then unite them.
The proof guarantees you will eventually find a finite subcover (i.e. you'll never end up going downwards infinitely), but you cannot tell in advance how long it will take. So it is not as constructive as one would expect.
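The bisection procedure can be sketched in code for the special case of covers by open intervals $(a,b)$. This is only an illustration: for a genuine open cover of $[0,1]$ the recursion terminates by the compactness argument above, but nothing bounds the depth in advance.

```python
# Recursively split [lo, hi] until each piece fits inside a single cover element.
def finite_subcover(cover, lo=0.0, hi=1.0):
    for a, b in cover:
        if a < lo and hi < b:      # [lo, hi] sits inside one open interval
            return [(a, b)]
    mid = (lo + hi) / 2
    return finite_subcover(cover, lo, mid) + finite_subcover(cover, mid, hi)

cover = [(-0.1, 0.3), (0.2, 0.7), (0.6, 1.1)]
sub = sorted(set(finite_subcover(cover)))
print(sub)  # all three intervals are needed for this cover
```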
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... |
Original research | Open Access
Bi-univalent properties for certain class of Bazilevič functions defined by convolution and with bounded boundary rotation
Journal of the Egyptian Mathematical Society, volume 27, Article number: 11 (2019)
Abstract
In this paper, we obtain bi-univalent properties for certain class of Bazilevič functions defined by convolution and with bounded boundary rotation. We will find coefficient bounds for $|a_{2}|$ and $|a_{3}|$ for the class \(\mathcal {M}_{\alpha,\lambda,\rho,k,\beta }(f * h)\).

Introduction
Let \(\mathcal {A}\) denote the class of analytic functions of the form:
$$f(z)=z+\sum _{n=2}^{\infty }a_{n}z^{n}.$$
For \(h(z)\in \mathcal {A}\), given by \(h(z)=z+\sum \limits _{n=2}^{\infty }h_{n}z^{n},\) the Hadamard product (or convolution) of $f(z)$ and $h(z)$ is defined by:
$$(f * h)(z)=z+\sum _{n=2}^{\infty }a_{n}h_{n}z^{n}.$$

Definition 1
([1, 2], and [3] with p = 1). Let \(\mathcal {P}_{k}^{\lambda }(\rho)\, \left (0\leq \rho <1,\ k\geq 2 \text { and } \left \vert \lambda \right \vert <\frac {\pi }{2}\right)\) denote the class of functions \(p(z)=1+\sum \limits _{n=1}^{\infty }c_{n}z^{n},\) which are analytic in \(\mathbb {U}\) and satisfy the conditions:
We note that:
(i) \(\mathcal {P}_{k}^{\lambda }(0)=\mathcal {P}_{k}^{\lambda }\ (\ k\geq 2\ \)and \(\left \vert \lambda \right \vert <\frac {\pi }{2})\ \) is the class of functions introduced by Robertson (see [4]), and he derived a variational formula for functions in this class.
(iii) \(\mathcal {P}_{k}^{0}(0)=\mathcal {P}_{k}(k\geq 2)\ \) is the class of functions having their real parts bounded in the mean on \(\mathbb {U}\), introduced by Robertson [4] and studied by Pinchuk [7].
(iv) \(\mathcal {P}_{2}^{0}(\rho)=\mathcal {P}\left (\rho \right)\ (0\leq \rho <1)\) is the class of functions with positive real part of order ρ ($0\leq \rho <1$).
(v) \(\mathcal {P}_{2}^{0}(0)=\mathcal {P}\) is the class of functions having positive real part for \(z\in \mathbb {U}\).
By the Koebe one-quarter theorem [8], we know that the image of \(\mathbb {U}\) under every univalent function \(f\in \mathcal {A}\) contains the disk with center at the origin and radius 1/4. Therefore, every univalent function $f$ has an inverse $f^{-1}$ satisfying:
It is easy to see that the inverse function has the form:
A function \(f\in \mathcal {A}\) is said to be bi-univalent in \(\mathbb {U}\) if both $f$ and its inverse map $g=f^{-1}$ are univalent in \(\mathbb {U}\).
The object of this paper is to introduce a new subclass of Bazilevič functions [10] for the class \(\sum \) with bounded boundary rotation, defined by using convolution as follows:
Definition 2
Let \(f,h\in \sum,\ \alpha \in \mathbb {C}^{\ast },\ \beta \geq 0,\ 0\leq \rho <1,\ k\geq 2\ {and}\ \left \vert \lambda \right \vert <\frac {\pi }{2},\ \)then \((f * h)(z)\in \sum \) is said to be in the class \(\mathcal {M}_{\alpha,\lambda,\rho,k,\beta }(f * h)\ \)if it satisfies the following conditions:
and
We note that by putting different values for $h$, α, β, k, λ, and ρ in the above definition, we have:
(1) \(\mathcal {M}_{1,0,\rho,k,\beta }\left (f\times \frac {z}{1-z}\right)=R_{\sum } (\rho,k,\beta)\ (f\in \sum,\ \beta \geq 0,\ 0\leq \rho <1,\ k\geq 2)\ \)(see [11], with γ=1);
(2) \(\mathcal {M}_{\alpha,0,\rho,k,1}(f * h)=\mathcal {L} _{\alpha,\rho,k}(f * h)\ \left (\ f,h\in \sum,\ \alpha \in \mathbb {C} ^{\ast },\ 0\leq \rho <1,\ k\geq 2\right)\ \)(see [12]);
(4) \(\mathcal {M}_{\eta,0,\rho,2,1}\left (f\times \frac {z}{1-z}\right)=\mathcal {L}_{\eta,\rho }(f)(z)\ \left (\ f\in \sum,\ \eta \geq 0,\ 0\leq \rho <1\right)\ \)(see [15]);
(5) \(\mathcal {M}_{1,0,\rho,2,\beta }\left (f\times \frac {z}{1-z}\right)=\mathcal {L} _{\rho,\beta }(f)(z)\left (\ f\in \sum,\ \beta \geq 0,\ 0\leq \rho <1\right)\ \)(see [16]);
(6) \(\mathcal {M}_{1,0,\rho,2,1}\left (f\times \frac {z}{1-z}\right)=\mathcal {L}_{\rho }(f)(z)\left (\ f\in \sum,\ 0\leq \rho <1\right)\ \)(see [9]);
(7) \(\mathcal {M}_{\alpha,0,\rho,2,\beta }\left (f\times \frac {z}{1-z} \right)=\mathcal {NP}_{\sum }^{\beta,\alpha }(0,\rho)\ \left (f\in \sum,\ \beta,\alpha \geq 0,\ 0\leq \rho <1\right)\ \)(see [17], with β=0);
(8) \(\mathcal {M}_{1,0,\rho,2,\beta }\left (f\times \frac {z}{1-z}\right)=\mathcal {R}_{\sum }(\beta,\rho)\ \left (\ f\in \sum,\ \beta \geq 0,\ 0\leq \rho <1\ \right)\ \)(see [18]).
Also, we can obtain the following subclasses:
(i) \(\mathcal {M}_{\alpha,\lambda,\rho,k,\beta }\left (f\times \frac {z} {1-z\ }\right)=\mathcal {\digamma }_{\alpha,\lambda,\rho,k,\beta }(f)\)
(ii) \(\mathcal {M}_{\alpha,0,\rho,k,\beta }(f\ast h)=\mathcal {F}_{\alpha,\rho,k,\beta }(f * h)\)
(iii) \(\mathcal {M}_{\alpha,0,\rho,2,\beta }(f\ast h)=\mathcal {F}_{\alpha,\rho,\beta }(f * h)\)
(iv) \(\mathcal {M}_{\alpha,\lambda,0,k,\beta }(f * h)=\mathcal {M}_{\alpha,\lambda,k,\beta }(f * h)\)
(v) \(\mathcal {M}_{\alpha,0,0,k,\beta }(f * h)=\mathcal {M}_{\alpha,k,\beta }(f * h)\)
(vi) \(\mathcal {M}_{\alpha,0,0,2,\beta }(f * h)=\mathcal {M}_{\alpha,\beta }(f * h)\)
(vii) \( \mathcal {M}_{1,\lambda,\rho,k,\beta }(f * h)=\mathbb {F}_{\lambda,\rho,k,\beta }(f * h)\)
or
(viii) \(\mathcal {M}_{1,0,\rho,2,\beta }(f * h)=\mathbb {F}_{\rho,\beta }(f * h)\)
In order to obtain our main results, we have to recall here the following lemma.
Lemma 1
([3] with p = 1). If \(p(z)=1+\sum \limits _{n=1}^{\infty }c_{n}z^{n}\in \mathcal {P}_{k}^{\lambda }(\rho),\) then
The result is sharp. Equality is attained for the odd coefficients and even coefficients respectively for the functions:
We note that for λ=0 in Lemma 1, we obtain the result obtained by Goswami et al. [19, Lemma 2.1] for the class \(\mathcal {P}_{k}(\rho).\)
In this paper, we will obtain the coefficient bounds for $|a_{2}|$ and $|a_{3}|$ for the class \(\mathcal {M}_{\alpha,\lambda,\rho,k,\beta }(f * h)\) defined in Definition 2.

Coefficient estimates for functions in the class \(\mathcal {M}_{\alpha,\lambda,\rho,k,\beta }(f * h)\)

Theorem 1
Let \(f,h\in \sum,\ \alpha \in \mathbb {C} ^{\ast }\backslash \{-1,\frac {-1}{2}\},\ \beta \geq 0,\ 0\leq \rho <1,\ k\geq 2,\ \left \vert \lambda \right \vert <\frac {\pi }{2},\) $f * h$ given by (2), and $h_{2}, h_{3}\neq 0$. If $f * h$ belongs to \(\mathcal {M}_{\alpha,\lambda,\rho,k,\beta }(f * h)\), then:
and
The result is sharp.
Proof. If \((f * h)\in \mathcal {M}_{\alpha,\lambda,\rho,k,\beta }(f * h)\), then from Definition 2, we have:
and
where $p$ and $q$ have Taylor expansions as follows:
and
Since \(p,q\in \mathcal {P}_{k}^{\lambda }(\rho)\), by applying Lemma 1 we have:
and
Also, we have:
which completes the proof of Theorem 1. The result is sharp in view of the fact that assertion (8) of Lemma 1 is sharp.
Remark 1
For \(h(z)=\frac {z}{1-z},\ \beta =\alpha =1,\ k=2,\) and λ=0 in Theorem 1, we obtain the result obtained by Srivastava et al. [9, Theorem 2].
Putting \(h(z)=\frac {z}{1-z\ }\ \)in Theorem 1, we obtain the following corollary.
Corollary 1
Let \(\ f\in \sum,\ \alpha \in \mathbb {C}^{\ast }\backslash \left \{-1,\frac {-1}{2}\right \},\ \beta \geq 0,\ 0\leq \rho <1,\ k\geq 2\ and\ \left \vert \lambda \right \vert <\frac {\pi }{2}.\ \)If \(f\in \mathcal {\digamma }_{\alpha,\lambda,\rho,k,\beta }(f)\), then:
and
The result is sharp.
Putting λ=0 in Theorem 1, we obtain the following corollary.

Corollary 2

Let \(f,h\in \sum,\ \alpha \in \mathbb {C}^{\ast }\backslash \left \{-1,\frac {-1}{2}\right \},\ \beta \geq 0,\ 0\leq \rho <1,\ k\geq 2,\) $f * h$ given by (2), and $h_{2}, h_{3}\neq 0$. If \(f * h\in \mathcal {F}_{\alpha,\rho,k,\beta }(f * h)\), then:
and
The result is sharp.
Putting λ=0 and k=2 in Theorem 1, we obtain the following corollary.

Corollary 3

Let \(f,h\in \sum,\ \alpha \in \mathbb {C}^{\ast }\backslash \left \{-1,\frac {-1}{2}\right \},\ \beta \geq 0,\ 0\leq \rho <1,\) $f * h$ given by (2), and $h_{2}, h_{3}\neq 0$. If \(f * h\in \mathcal {F}_{\alpha,\rho,\beta }(f * h)\), then:
and
The result is sharp.
Putting α=1 in Theorem 1, we obtain the following corollary.

Corollary 4

Let \(f,h\in \sum,\ \beta \geq 0,\ 0\leq \rho <1,\ k\geq 2,\ \left \vert \lambda \right \vert <\frac {\pi }{2},\) $f * h$ given by (2), and $h_{2}, h_{3}\neq 0$. If \(f * h\in \mathbb {F} _{\lambda,\rho,k,\beta }(f * h)\), then:
and
The result is sharp.
Putting α=1, k=2, and λ=0 in Theorem 1, we obtain the following corollary.

Corollary 5

Let \(f,h\in \sum,\ \beta \geq 0,\ 0\leq \rho <1,\) $f * h$ given by (2), and $h_{2}, h_{3}\neq 0$. If \(f * h\in \mathbb {F}_{\rho,\beta }(f * h)\), then:
and
The result is sharp.
Putting ρ=0 in Theorem 1, we obtain the following corollary.

Corollary 6

Let \(f,h\in \sum,\ \alpha \in \mathbb {C}^{\ast }\backslash \left \{-1,\frac {-1}{2}\right \},\ \beta \geq 0,\ \left \vert \lambda \right \vert <\frac {\pi }{2},\ k\geq 2,\) $f * h$ given by (2), and $h_{2}, h_{3}\neq 0$. If \(f * h\in \mathcal {M}_{\alpha,\lambda,k,\beta }(f * h)\), then:
and
The result is sharp.
Putting ρ=λ=0 in Theorem 1, we obtain the following corollary.

Corollary 7

Let \(f,h\in \sum,\ \alpha \in \mathbb {C}^{\ast }\backslash \left \{-1,\frac {-1}{2}\right \},\ \beta \geq 0,\ k\geq 2,\) $f * h$ given by (2), and $h_{2}, h_{3}\neq 0$. If \(f * h\in \mathcal {M}_{\alpha,k,\beta }(f * h)\), then:
and
The result is sharp.
Putting ρ=λ=0 and k=2 in Theorem 1, we obtain the following corollary.

Corollary 8

Let \(f,h\in \sum,\ \alpha \in \mathbb {C} ^{\ast }\backslash \left \{-1,\frac {-1}{2}\right \},\ \beta \geq 0,\) $f * h$ given by (2), and $h_{2}, h_{3}\neq 0$. If \(f * h\in \mathcal {M}_{\alpha,\beta }(f * h)\), then:
and
The result is sharp.
Putting λ=0, α=1, and \(h(z)=\frac {z}{1-z}\) in Theorem 1, we obtain the following corollary.

Corollary 9

Let \(f\in \sum,\ 0\leq \rho <1,\) and β≥0. If \(f\in R_{\sum }(\rho,k,\beta)\), then:
and
The result is sharp.
Remark 2
The results in Corollary 9 correct the results obtained by Orhan et al. [11, Theorem 2.11, with γ=1].

References

1. Moulis, E. J.: Generalizations of the Robertson functions. Pacific J. Math. 81(1), 167–174 (1979).
2. Noor, K., Arif, M., Muhammad, A.: Mapping properties of some classes of analytic functions under an integral operator. J. Math. Inequal. 4(4), 593–600 (2010).
3. Aouf, M. K.: A generalization of functions with real part bounded in the mean on the unit disc. Math. Japonica 33(2), 175–182 (1988).
4. Robertson, M. S.: Variational formulas for several classes of analytic functions. Math. Z. 118, 311–319 (1976).
5. Padmanabh, K. S., Paravatham, R.: Properties of a class of functions with bounded boundary rotation. Ann. Polon. Math. 31(3), 842–853 (1975).
6. Umarani, P. G., Aouf, M. K.: Linear combination of functions of bounded boundary rotation of order α. Tamkang J. Math. 20(1), 83–86 (1989).
7. Pinchuk, B.: Functions of bounded boundary rotation. Isr. J. Math. 10, 7–16 (1971).
8. Duren, P. L.: Univalent Functions. Grundlehren der Mathematischen Wissenschaften Series, vol. 259. Springer Verlag, New York (1983).
9. Srivastava, H. M., Mishra, A. K., Gochhayat, P.: Certain subclasses of analytic and bi-univalent functions. Appl. Math. Lett. 23, 1188–1192 (2010).
10. Bazilevič, I. E.: On a case of integrability in quadratures of the Lowner-Kufarev equation. Mat. Sb. 37(79), 471–476 (1955).
11. Orhan, H., Magesh, N., Balaji, V. K.: Certain classes of bi-univalent functions with bounded boundary variation. Tbilisi Math. J. 10(4), 17–27 (2017).
12. Altınkaya, S., Yalçın, S.: Coefficient problem for certain subclasses of bi-univalent functions defined by convolution. Math. Moravica 20(2), 15–21 (2016).
13. El-Ashwah, R. M.: Subclasses of bi-univalent functions defined by convolution. J. Egypt. Math. Soc. 22, 348–351 (2013).
14. Motamednezhad, A., Nosrati, S., Zaker, S.: Bounds for initial Maclaurin coefficients of a subclass of bi-univalent functions associated with subordination. Commun. Fac. Sci. Univ. Ank. Ser. A1 Math. Stat. 68(1), 125–135 (2019).
15. Frasin, B. A., Aouf, M. K.: New subclasses of bi-univalent functions. Appl. Math. Lett. 24(9), 1569–1573 (2011).
16. Prema, S., Keerthi, B. S.: Coefficient bounds for certain subclasses of analytic functions. J. Math. Anal. 4(1), 22–27 (2013).
17. Orhan, H., Magesh, N., Balaji, V. K.: Initial coefficient bounds for a general class of bi-univalent functions. Filomat 29(6), 1259–1267 (2015).
18. Magesh, N., Rosy, T., Varma, S.: Coefficient estimate problem for a new subclass of bi-univalent functions. J. Compl. Anal. 2013, Article ID 474231 (2013).
19. Goswami, P., Alkahtani, B. S., Bulboaca, T.: Estimate for initial Maclaurin coefficients of certain subclasses of bi-univalent functions (2015). arXiv:1503.04644v1 [math.CV].
Acknowledgements
The authors are grateful to the referees for their valuable suggestions.
Funding
Higher Institute for Engineering and Technology, New Damietta, Egypt
Availability of data and materials
Not applicable.
Ethics declarations Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
I'm trying to prove a property of Lebesgue measure sets that says:
If the $A_{k}$'s are measurable and $A_{1} \supset A_{2} \supset A_{3} \supset \ldots,$ and if $\lambda (A_{1}) < \infty, $ then $$\lambda ( \bigcap_{k=1}^{\infty} A_{k} ) = \lim_{k \to \infty} \lambda (A_{k}).$$
And I'm trying to emulate the proof of the previous property that says:
If the $A_{k}$'s are measurable and $A_{1} \subset A_{2} \subset A_{3} \subset \ldots,$ then $$\lambda ( \bigcup_{k=1}^{\infty} A_{k} ) = \lim_{k \to \infty} \lambda (A_{k}).$$
And they express the union $\bigcup A_{k}$ as a disjoint union like this: $$\bigcup_{k=1}^{\infty} A_{k} = A_{1} \cup \bigcup_{k=2}^{\infty} (A_{k} \sim A_{k-1})$$ So I'm trying to create an increasing sequence from my decreasing one like this:
Say $(A_{k})$ satisfies $A_{1} \supset A_{2} \supset \ldots$ and let $B_{k}=A_{1} \sim A_{k}$. I'm not quite sure whether $(B_{k})$ is an increasing sequence, and if so, whether I'm able to express its union as a disjoint union like before.
I don't know if I'm on good track or I'm completely lost, can you help me with this please?
Also, what's the point of $\lambda (A_{1}) < \infty$? How can I use that?
Thank you guys! |
More generally, if a set $S$ has cardinality $\mathfrak{m}$, how many of its subsets have cardinality $\mathfrak{n}$? Clearly there are at least $2^\mathfrak{n}$ such subsets. I don't see how many more though.
Thanks
Let's assume the axiom of choice, because it's simpler and easier and commonly done. I'll say a few words on the case without choice later.
First we use a notation: $[A]^\kappa$ is the collection of subsets of $A$ of cardinality $\kappa$. If we want to include smaller sets as well $[A]^{\leq\kappa}$. We write $A^\kappa$ for all the functions from $\kappa$ into $A$, and again with the $A^{\leq\kappa}$.
So you asked what is the cardinality of $[\mathbb R]^\omega$. For this we see that every countable set can be seen as the range of a function from $\omega$ into $\mathbb R$. Choose for every countable set an enumeration; now we have an injection from $[\mathbb R]^\omega$ into $\mathbb R^\omega$. Therefore $|[\mathbb R]^\omega|\leq|\mathbb R^\omega|$. Trivially we also have $|\mathbb R|\leq|[\mathbb R]^\omega|$. We calculate: $$2^{\aleph_0}\leq |[\mathbb R]^\omega|\leq(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0},$$ and therefore equality ensues and there are $2^{\aleph_0}$ countable subsets of $\mathbb R$.
In the general case, $[A]^\kappa$ is empty when $|A|<\kappa$. Assuming $\kappa\leq|A|$, if $|A|<2^\kappa$ then there are $2^\kappa$ many sets, this is not too hard to prove.
On the other hand, if $2^\kappa\leq|A|$ then there will be exactly $|A^\kappa|$ many sets in $[A]^\kappa$. To see this, note that there are at most $2^\kappa$ ways to enumerate every set in $[A]^\kappa$, so we have that $|A^\kappa|=2^\kappa\cdot|[A]^\kappa|=|[A]^\kappa|$.
The exact result of $|A^\kappa|$ depends a lot on the model chosen, if we assume GCH (or some other nicely behaved continuum function) then we can calculate it pretty well, and in some cases we can calculate the value in terms of $A,2^A,\kappa$ relatively well.
However consider the following case:
$|A|=\aleph_\omega$, $\kappa=\omega$ and $2^\kappa=\aleph_4$. Koenig's theorem tells us that $|A|<|A^\omega|$, and in fact one of Shelah's most celebrated results in his PCF theory is that: $$(\aleph_\omega)^\omega<\aleph_{\omega_4}\cdot2^{\aleph_0},$$ or in our case, where $2^{\aleph_0}<\aleph_\omega$, we simply have $|A^\omega|<\aleph_{\omega_4}$.
There is work being done to try and improve this bound to be $\aleph_{\omega_3}$, which is quite an improvement but this is still quite open, and as Andres points in his comment below, it is consistent (relative to the existence of a very large cardinal) that $|A^\omega|=\aleph_{\alpha+1}$ for any countable $\alpha$.
A word on the horrors without choice:
There are models of ZF in which the axiom of choice is false, and we cannot choose an enumeration for every countable subset of $\mathbb R$. In such models something quite peculiar happens:
Not to mention all sorts of strange sets which have strange properties. For example, if $A$ is an infinite set which cannot be written as a disjoint union of two infinite sets (such a set is called amorphous) then we have a very interesting property:
The set $[A]^\omega$ is actually empty (as $A$ cannot have a countably infinite subset), so let us consider $[A]^{<\omega}$ instead. It is in fact exactly half of $\mathcal P(A)$. When I say exactly, I mean that: if you add or remove even a single element you will properly change the cardinality. We have the following: $$|[A]^{<\omega}|+|[A]^{<\omega}|=2^{|A|}.$$
Yes, such wonderful horrors are consistent with ZF when the axiom of choice is absent!
Each real number can be written down (not necessarily uniquely) as a countable sequence of digits and decimal points.
Thus, you can write down a countable set of real numbers simply by interlacing the digits of its members according to some fixed rule.
Therefore there must be exactly continuum many countable subsets of $\mathbb R$.
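The interlacing idea can be illustrated with a toy, finite-precision sketch. The real argument deals with infinite expansions and the non-uniqueness of decimal digits; here we just pack a finite list of equal-length digit strings column by column and unpack them again:

```python
from itertools import zip_longest

def interlace(strings):
    """Round-robin the characters of the input strings into one string."""
    cols = zip_longest(*strings, fillvalue=' ')
    return ''.join(c for col in cols for c in col)

def deinterlace(s, n):
    """Recover n strings by reading every n-th character."""
    return [s[i::n].rstrip() for i in range(n)]

xs = ['3.14159', '2.71828', '1.41421']
packed = interlace(xs)
print(packed)
assert deinterlace(packed, 3) == xs   # lossless for equal-length inputs
```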
In the general case, as long as $\aleph_0\le B\le A\le2^B$, an analogous argument shows that there are $2^B$ subsets of $S$ with cardinality $B$. (This depends on the Axiom of Choice).
If $A>2^B$, then there are of course at least $A$ subsets. My intuition is that there will be no more, but I have no proof of that ready.
If $A<B$, then there are $0$ subsets of size $B$.
How about this map: send each countable set $A$ to $f_A$, where $f_A:\mathbb{N} \to A$ is a (chosen) bijection. This is an injection because if $f_A=f_B$, then $\operatorname{im}(f_A)= \operatorname{im}(f_B)$, so $A=B$. Thus we get an injection from the set of all countable sets into the set of all functions $f:\mathbb{N} \to \mathbb {R}$, which has cardinality $|\mathbb{R}|$. There is also an injection from $\mathbb{R} \cong \mathcal P(\mathbb{N})$ into the set of all countable subsets. So the cardinality is exactly $\mathfrak c$.
You state:
After I shifted to make their orders same:
$$
\sum_{k = -2}^\infty a_{k + 2}(k + r + 2)^2 x^{k + r + 1} + 2 \sum_{k = -1}^\infty a_{k + 1}(k + r + 2)x^{k + r + 1}+\sum_{k = 0}^\infty a_k x^{k + r + 1}
$$
What you do is the following. Given that$$\sum_{k = 0}^\infty a_k (k + r)^2 x^{k + r - 1} + \sum_{k = 0}^\infty a_k(2 k + 2 r + 1)x^{k + r} + \sum_{k = 0}^\infty a_k x^{k + r + 1}$$(note that your second sum was incorrectly calculated)
you need to separate the necessary terms of the sums in order to group the powers of $x$ correctly, i.e:
\begin{multline}\sum_{n = 0}^\infty a_n (n + r)^2 x^{n + r - 1} + \sum_{n = 0}^\infty a_n (2 n + 2 r + 1) x^{n + r} + \sum_{n = 0}^\infty a_n x^{n + r + 1} = \\a_0 r^2 x^{r-1} + a_1 (r + 1)^2 x^r + \sum_{n=2}^\infty a_n (n + r)^2 x^{n + r - 1} + (2 r + 1) a_0 x^r + \\ \sum\limits_{n = 1}^\infty a_n (2 n + 2 r+ 1)x^{n + r} +\sum_{n = 0}^\infty a_n x^{n + r + 1} = 0\end{multline}
Regrouping orders, you have
\begin{multline}a_0 r^2 x^{r-1} + [a_1 (r + 1)^2 + a_0 (2 r + 1)] x^r + \\ \sum_{k = 0}^\infty \left\{ a_{k + 2}(k + r + 2)^2 + a_{k + 1} (2 k + 2 r + 3) + a_k \right\} x^{k + r + 1} = 0\end{multline}
Each power of $x$ needs to vanish, hence $r^2 = 0$. This is the indicial polynomial (details here). This means that $r = 0$ and
\begin{align}a_1 + a_0 &= 0 \\a_{k + 2}(k + 2)^2 + a_{k + 1} (2 k + 3) + a_k &= 0\end{align}
which closes the recurrence relation. The first three terms are\begin{align}a_1 &= -a_0\\a_2 &= \frac{1}{2!}a_0\\a_3 &= -\frac{1}{3!}a_0\end{align}and it's clear that a pattern is forming. By induction, the whole solution can be computed.
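If you want to double-check the recurrence numerically, a few lines of Python (taking $a_0 = 1$) confirm that $a_{k+2}(k+2)^2 + a_{k+1}(2k+3) + a_k = 0$ generates exactly the series coefficients $(-1)^n/n!$ of $e^{-x}$:

```python
from math import factorial

# Recurrence from the Frobenius solution with r = 0:
#   a_{k+2} (k+2)^2 + a_{k+1} (2k+3) + a_k = 0,  with a_0 = 1, a_1 = -a_0
a = [1.0, -1.0]
for k in range(10):
    a.append(-(a[k + 1] * (2 * k + 3) + a[k]) / (k + 2) ** 2)

# Compare with the series coefficients of e^{-x}: a_n = (-1)^n / n!
for n, an in enumerate(a):
    assert abs(an - (-1) ** n / factorial(n)) < 1e-12
```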
Note that, assuming that $y$ is
somehow well behaved, for $x \sim 0$,
$$x y'' + (2x + 1) y' + (x + 1) y = 0 \quad \sim \quad y' + y = 0.$$
Proposing the ansatz $y(x) = e^{-x} z(x)$ and substituting into the original ODE,
$$x y'' + (2x + 1) y' + (x + 1) y = e^{-x}\left(x z'' + z'\right) = 0,$$
and it's easily verified that $z = c_1 \log x + c_2$. Hence
$$y(x) = e^{-x}\left(c_1 \log x + c_2\right)$$
Cool trick, huh?
5.8 Nonlinear regression
Although the linear relationship assumed so far in this chapter is often adequate, there are many cases in which a nonlinear functional form is more suitable. To keep things simple in this section we assume that we only have one predictor \(x\).
The simplest way of modelling a nonlinear relationship is to transform the forecast variable \(y\) and/or the predictor variable \(x\) before estimating a regression model. While this provides a non-linear functional form, the model is still linear in the parameters. The most commonly used transformation is the (natural) logarithm (see Section 3.2).
A
log-log functional form is specified as\[ \log y=\beta_0+\beta_1 \log x +\varepsilon.\]In this model, the slope \(\beta_1\) can be interpreted as an elasticity: \(\beta_1\) is the average percentage change in \(y\) resulting from a \(1\%\) increase in \(x\). Other useful forms can also be specified. The log-linear form is specified by only transforming the forecast variable and the linear-log form is obtained by transforming the predictor.
Recall that in order to perform a logarithmic transformation to a variable, all of its observed values must be greater than zero. In the case that variable \(x\) contains zeros, we use the transformation \(\log(x+1)\); i.e., we add one to the value of the variable and then take logarithms. This has a similar effect to taking logarithms but avoids the problem of zeros. It also has the neat side-effect of zeros on the original scale remaining zeros on the transformed scale.
There are cases for which simply transforming the data will not be adequate and a more general specification may be required. Then the model we use is \[ y=f(x) +\varepsilon \] where \(f\) is a nonlinear function. In standard (linear) regression, \(f(x)=\beta_{0} + \beta_{1} x\). In the specification of nonlinear regression that follows, we allow \(f\) to be a more flexible nonlinear function of \(x\), compared to simply a logarithmic or other transformation.
One of the simplest specifications is to make \(f\)
piecewise linear. That is, we introduce points where the slope of \(f\) can change. These points are called knots. This can be achieved by letting \(x_{1,t}=x\) and introducing variable \(x_{2,t}\) such that\[\begin{align*} x_{2,t} = (x-c)_+ &= \left\{ \begin{array}{ll} 0 & x < c\\ (x-c) & x \ge c \end{array}\right.\end{align*}\]The notation \((x-c)_+\) means the value \(x-c\) if it is positive and 0 otherwise. This forces the slope to bend at point \(c\). Additional bends can be included in the relationship by adding further variables of the above form.
An example of this follows by considering \(x=t\) and fitting a piecewise linear trend to a time series.
Piecewise linear relationships constructed in this way are a special case of
regression splines. In general, a linear regression spline is obtained using\[ x_{1}= x \quad x_{2} = (x-c_{1})_+ \quad\dots\quad x_{k} = (x-c_{k-1})_+\]where \(c_{1},\dots,c_{k-1}\) are the knots (the points at which the line can bend). Selecting the number of knots (\(k-1\)) and where they should be positioned can be difficult and somewhat arbitrary. Some automatic knot selection algorithms are available in some software, but are not yet widely used.
A smoother result can be obtained using piecewise cubics rather than piecewise lines. These are constrained to be continuous (they join up) and smooth (so that there are no sudden changes of direction, as we see with piecewise linear splines). In general, a cubic regression spline is written as \[ x_{1}= x \quad x_{2}=x^2 \quad x_3=x^3 \quad x_4 = (x-c_{1})_+^3 \quad\dots\quad x_{k} = (x-c_{k-3})_+^3. \] Cubic splines usually give a better fit to the data. However, forecasts of \(y\) become unreliable when \(x\) is outside the range of the historical data.
Forecasting with a nonlinear trend
In Section 5.4 fitting a linear trend to a time series by setting \(x=t\) was introduced. The simplest way of fitting a nonlinear trend is using quadratic or higher order trends obtained by specifying \[ x_{1,t} =t,\quad x_{2,t}=t^2,\quad \dots. \] However, it is not recommended that quadratic or higher order trends be used in forecasting. When they are extrapolated, the resulting forecasts are often unrealistic.
A better approach is to use the piecewise specification introduced above and fit a piecewise linear trend which bends at some point in time. We can think of this as a nonlinear trend constructed of linear pieces. If the trend bends at time \(\tau\), then it can be specified by simply replacing \(x=t\) and \(c=\tau\) above such that we include the predictors, \[\begin{align*} x_{1,t} & = t \\ x_{2,t} = (t-\tau)_+ &= \left\{ \begin{array}{ll} 0 & t < \tau\\ (t-\tau) & t \ge \tau \end{array}\right. \end{align*}\] in the model. If the associated coefficients of \(x_{1,t}\) and \(x_{2,t}\) are \(\beta_1\) and \(\beta_2\), then \(\beta_1\) gives the slope of the trend before time \(\tau\), while the slope of the line after time \(\tau\) is given by \(\beta_1+\beta_2\). Additional bends can be included in the relationship by adding further variables of the form \((t-\tau)_+\) where \(\tau\) is the “knot” or point in time at which the line should bend.
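As an illustration of the piecewise specification, the sketch below (in Python rather than the R used later in this section, on synthetic data with made-up values) builds the predictors $t$ and $(t-\tau)_+$ and fits them by ordinary least squares:

```python
# Fit a piecewise-linear trend y = b0 + b1*t + b2*(t - tau)_+ by OLS
# (pure-Python normal equations; fine for this tiny illustration).

def hinge(t, tau):
    """(t - tau)_+ : the bent predictor from the text."""
    return max(t - tau, 0.0)

tau = 50.0
ts = [float(t) for t in range(100)]
# Synthetic trend: slope 1 before the knot, slope 1 + 2 = 3 after it.
ys = [1.0 * t + 2.0 * hinge(t, tau) for t in ts]

rows = [[1.0, t, hinge(t, tau)] for t in ts]  # intercept, t, (t-tau)_+

# Solve the normal equations (X^T X) beta = X^T y by Gauss-Jordan elimination.
n = 3
A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] +
     [sum(r[i] * y for r, y in zip(rows, ys))] for i in range(n)]
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    for r in range(n):
        if r != col:
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
beta = [A[i][n] / A[i][i] for i in range(n)]

print(beta[1], beta[1] + beta[2])  # slope before / after the knot
```

The recovered coefficients give the slope before the knot ($\beta_1$) and after it ($\beta_1+\beta_2$), exactly as described above.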
Example: Boston marathon winning times
The top panel of Figure 5.20 shows the Boston marathon winning times (in minutes) since it started in 1897. The time series shows a general downward trend as the winning times have been improving over the years. The bottom panel shows the residuals from fitting a linear trend to the data. The plot shows an obvious nonlinear pattern which has not been captured by the linear trend. There is also some heteroscedasticity, with decreasing variation over time.
Fitting an exponential trend (equivalent to a log-linear regression) to the data can be achieved by transforming the \(y\) variable so that the model to be fitted is, \[ \log y_t=\beta_0+\beta_1 t +\varepsilon_t. \] This also addresses the heteroscedasticity. The fitted exponential trend and forecasts are shown in Figure 5.21. Although the exponential trend does not seem to fit the data much better than the linear trend, it gives a more sensible projection in that the winning times will decrease in the future but at a decaying rate rather than a fixed linear rate.
The plot of winning times reveals three different periods. There is a lot of volatility in the winning times up to about 1940, with the winning times decreasing overall but with significant increases during the 1920s. After 1940 there is a near-linear decrease in times, followed by a flattening out after the 1980s, with the suggestion of an upturn towards the end of the sample. To account for these changes, we specify the years 1940 and 1980 as knots. We should warn here that subjective identification of knots can lead to over-fitting, which can be detrimental to the forecast performance of a model, and should be performed with caution.
h <- 10
fit.lin <- tslm(marathon ~ trend)
fcasts.lin <- forecast(fit.lin, h = h)
fit.exp <- tslm(marathon ~ trend, lambda = 0)
fcasts.exp <- forecast(fit.exp, h = h)
t <- time(marathon)
t.break1 <- 1940
t.break2 <- 1980
tb1 <- ts(pmax(0, t - t.break1), start = 1897)
tb2 <- ts(pmax(0, t - t.break2), start = 1897)
fit.pw <- tslm(marathon ~ t + tb1 + tb2)
t.new <- t[length(t)] + seq(h)
tb1.new <- tb1[length(tb1)] + seq(h)
tb2.new <- tb2[length(tb2)] + seq(h)
newdata <- cbind(t = t.new, tb1 = tb1.new, tb2 = tb2.new) %>%
  as.data.frame()
fcasts.pw <- forecast(fit.pw, newdata = newdata)
fit.spline <- tslm(marathon ~ t + I(t^2) + I(t^3) + I(tb1^3) + I(tb2^3))
fcasts.spl <- forecast(fit.spline, newdata = newdata)
autoplot(marathon) +
  autolayer(fitted(fit.lin), series = "Linear") +
  autolayer(fitted(fit.exp), series = "Exponential") +
  autolayer(fitted(fit.pw), series = "Piecewise") +
  autolayer(fitted(fit.spline), series = "Cubic Spline") +
  autolayer(fcasts.pw, series = "Piecewise") +
  autolayer(fcasts.lin, series = "Linear", PI = FALSE) +
  autolayer(fcasts.exp, series = "Exponential", PI = FALSE) +
  autolayer(fcasts.spl, series = "Cubic Spline", PI = FALSE) +
  xlab("Year") + ylab("Winning times in minutes") +
  ggtitle("Boston Marathon") +
  guides(colour = guide_legend(title = " "))
Figure 5.21 above shows the fitted lines and forecasts from linear, exponential, piecewise linear, and cubic spline trends. The best forecasts appear to come from the piecewise linear trend, while the cubic spline gives the best fit to the historical data but poor forecasts.
There is an alternative formulation of cubic splines (called natural cubic smoothing splines) that imposes some constraints, so the spline function is linear at the end, which usually gives much better forecasts without compromising the fit. In Figure 5.22, we have used the splinef() function to produce the cubic spline forecasts. This uses many more knots than we used in Figure 5.21, but the coefficients are constrained to prevent over-fitting, and the curve is linear at both ends. This has the added advantage that knot selection is not subjective. We have also used a log transformation (lambda=0) to handle the heteroscedasticity.
The residuals plotted in Figure 5.23 show that this model has captured the trend well, although there is some heteroscedasticity remaining. The wide prediction interval associated with the forecasts reflects the volatility observed in the historical winning times.
The relationship between simple strain and true strain
Simple (or unit) strain is the change in length over the original length, so that for a pressuremeter measuring radius it can be expressed as
...[1]
$$\xi _s=\frac{r_i - r_o}{r_o}$$
where $\xi _s$ is simple strain
$r_i$ is the current radius of the cavity $r_o$ is the original radius of the cavity
From equation [1] it follows that
...[2]
$$\frac{r_i}{r_o}=1+\xi _s$$
True (natural, or logarithmic) strain is defined as the sum of each incremental increase in radius divided by the current radius, so
...[3]
$$\begin{matrix} \xi _t &=& \int _{r_o}^{r_i}(1/r)\,\mathit{dr}\\ \\ &=& \left[\ln (r)\right]_{r_o}^{r_i}\\ \\ &=& \ln (r_i)-\ln (r_o)\\ \\ &=& \ln (r_i/r_o) \end{matrix}$$
where $\xi _t$ is true strain
Substituting equation [2] into [3] gives
...[4]
$$\xi _t=\ln (1+\xi _s)$$
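A quick numerical check of equation [4]: approximating the integral in [3] by summing small increments $\Delta r / r$ reproduces $\ln(1+\xi_s)$. (A sketch only; the radii are arbitrary example values.)

```python
import math

r_o, r_i = 40.0, 46.0          # example original and current radii
xi_s = (r_i - r_o) / r_o       # simple strain, equation [1]

# True strain as the sum of incremental dr / r, equation [3]
# (midpoint rule on many small steps):
steps = 100000
dr = (r_i - r_o) / steps
xi_t = sum(dr / (r_o + (k + 0.5) * dr) for k in range(steps))

print(xi_s, xi_t, math.log(1 + xi_s))
```

The summed true strain agrees with $\ln(1+\xi_s)$ to well within the discretisation error.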
It is well known that for instruments which measure the radius of the cavity the following expression can be used to derive estimates for shear modulus from the test curve whenever the response from the ground is elastic:
...[5]
$$G=\frac 1 2\left(\frac{r_i}{r_o}\right)\left(\frac{\mathit{dP}}{d\xi _c}\right)$$
where $G$ is the shear modulus
$P$ is the pressure applied to the cavity $\xi_c$ is the cavity strain, here the same as the simple strain $\xi_s$
This is sometimes expressed in a simplified form as
...[6]
$$G=\frac 1 2\left(\frac{\mathit{dP}}{d\xi _c}\right)$$
but this approximation can only be justified for very small strains.
The multiplier $\frac{r_i}{r_o}$ has the effect of converting an expression in simple strain to one in terms of true strain, as the following argument shows:
Differentiate equation [4] with respect to $\xi _s$
...[7]
$$\begin{matrix}\frac{d\xi _t}{d\xi _s} &=& \frac 1{1+\xi _s}\\ \\ \therefore d\xi _s &=& d\xi _t(1+\xi _s)\end{matrix}$$
Substitute equations [2] and [7] into [5] to give
$$G=\frac 1 2(1+\xi _s)\left(\frac{\mathit{dP}}{d\xi _t(1+\xi _s)}\right)$$
which simplifies to
...[8]
$$G=\frac 1 2\left(\frac{\mathit{dP}}{d\xi _t}\right)$$
Hence the simplified version of the shear modulus expression shown in equation [6] is good for all strains as long as true strain is being used. Plotting true strain rather than simple strain makes it easier to compare modulus parameters taken from rebound cycles at different cavity strains, and makes it easier to compare rebound cycles between instruments which strain the soil to different magnitudes.
Tagged: determinant of a matrix
Problem 718
Let
\[ A= \begin{bmatrix} 8 & 1 & 6 \\ 3 & 5 & 7 \\ 4 & 9 & 2 \end{bmatrix} . \] Notice that $A$ contains every integer from $1$ to $9$ and that the sums of each row, column, and diagonal of $A$ are equal. Such a grid is sometimes called a magic square.
Compute the determinant of $A$.
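For readers who want to verify their answer numerically, a quick check (not part of the problem statement) by cofactor expansion along the first row:

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[8, 1, 6],
     [3, 5, 7],
     [4, 9, 2]]
print(det3(A))  # -360
```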
Problem 686
In each of the following cases, can we conclude that $A$ is invertible? If so, find an expression for $A^{-1}$ as a linear combination of positive powers of $A$. If $A$ is not invertible, explain why not.
(a) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=0$.
(b) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=-1$.
Problem 582
A square matrix $A$ is called
nilpotent if some power of $A$ is the zero matrix. Namely, $A$ is nilpotent if there exists a positive integer $k$ such that $A^k=O$, where $O$ is the zero matrix.
Suppose that $A$ is a nilpotent matrix and let $B$ be an invertible matrix of the same size as $A$.
Is the matrix $B-A$ invertible? If so prove it. Otherwise, give a counterexample.
Problem 571
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.
There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 2 and contains Problem 4, 5, and 6.
Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let \[\mathbf{a}_1=\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \mathbf{a}_2=\begin{bmatrix} 2 \\ -1 \\ 4 \end{bmatrix}, \mathbf{b}=\begin{bmatrix} 0 \\ a \\ 2 \end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5. Find the inverse matrix of \[A=\begin{bmatrix} 0 & 0 & 2 & 0 \\ 0 &1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason.
Problem 6. Consider the system of linear equations \begin{align*} 3x_1+2x_2&=1\\ 5x_1+3x_2&=2. \end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
(Linear Algebra Midterm Exam 1, the Ohio State University)
Problem 546
Let $A$ be an $n\times n$ matrix.
The $(i, j)$
cofactor $C_{ij}$ of $A$ is defined to be \[C_{ij}=(-1)^{i+j}\det(M_{ij}),\] where $M_{ij}$ is the $(i,j)$ minor matrix obtained from $A$ by removing the $i$-th row and $j$-th column.
Then consider the $n\times n$ matrix $C=(C_{ij})$, and define the $n\times n$ matrix $\operatorname{Adj}(A)=C^{\mathsf{T}}$.
The matrix $\operatorname{Adj}(A)$ is called the adjoint matrix of $A$.
When $A$ is invertible, then its inverse can be obtained by the formula \[A^{-1}=\frac{1}{\det(A)}\operatorname{Adj}(A).\]
For each of the following matrices, determine whether it is invertible, and if so, then find the inverse matrix using the above formula.
(a) $A=\begin{bmatrix} 1 & 5 & 2 \\ 0 &-1 &2 \\ 0 & 0 & 1 \end{bmatrix}$.
(b) $B=\begin{bmatrix} 1 & 0 & 2 \\ 0 &1 &4 \\ 3 & 0 & 1 \end{bmatrix}$.
Problem 509
Using the numbers appearing in
\[\pi=3.1415926535897932384626433832795028841971693993751058209749\dots\] we construct the matrix \[A=\begin{bmatrix} 3 & 14 &1592& 65358\\ 97932& 38462643& 38& 32\\ 7950& 2& 8841& 9716\\ 939937510& 5820& 974& 9 \end{bmatrix}.\]
Prove that the matrix $A$ is nonsingular.
Problem 505
Let $A$ be a singular $2\times 2$ matrix such that $\tr(A)\neq -1$ and let $I$ be the $2\times 2$ identity matrix.
Then prove that the inverse matrix of the matrix $I+A$ is given by the following formula: \[(I+A)^{-1}=I-\frac{1}{1+\tr(A)}A.\]
Using the formula, calculate the inverse matrix of $\begin{bmatrix}
2 & 1\\ 1& 2 \end{bmatrix}$.
Problem 486
Determine whether there exists a nonsingular matrix $A$ if
\[A^4=ABA^2+2A^3,\] where $B$ is the following matrix. \[B=\begin{bmatrix} -1 & 1 & -1 \\ 0 &-1 &0 \\ 2 & 1 & -4 \end{bmatrix}.\]
If such a nonsingular matrix $A$ exists, find the inverse matrix $A^{-1}$.
(The Ohio State University, Linear Algebra Final Exam Problem)
Eigenvalues of Orthogonal Matrices Have Length 1. Every $3\times 3$ Orthogonal Matrix Has 1 as an Eigenvalue
Problem 419
(a) Let $A$ be a real orthogonal $n\times n$ matrix. Prove that the length (magnitude) of each eigenvalue of $A$ is $1$.
(b) Let $A$ be a real orthogonal $3\times 3$ matrix and suppose that the determinant of $A$ is $1$. Then prove that $A$ has $1$ as an eigenvalue.
I have the following problem with
breqn package: I want to write a long equation and automatically split it with
dmath environment. It is splitting equation perfectly, but placing the equation number one line below. I want to have a result such that last part of the equation and equation number are in the same line.
Here is the code I'm trying to fix:
\documentclass{article}
\usepackage{breqn}
\begin{document}
\begin{dmath}
{{E}_{0}}\sum\limits_{t=0}^{\infty }\beta _{B}^{t}\left({{D}_{t+1}}+\left( 1+r_{f,t-1} \right)L_{f,t}+\left( 1+r_{g,t-1} \right)L_{g,t}+\left( 1+r_{i,t-1} \right)L_{i,t}-L_{f,t+1}-L_{g,t+1}-L_{i,t+1}-\left( 1+r_{p,t-1} \right){{D}_{t}} -\frac{{{\phi }_{f}}}{2} L_{f,t+1}^{2}-\frac{{{\phi }_{g}}}{2}L_{g,t+1}^{2}-\frac{\phi_{i}}{2}L_{i,t+1}^{2} \right)
\end{dmath}
\end{document}
EDIT: I was told that the space in the second line is not enough to place the equation number. How about following case where I have exactly the same equation given in 3 line with more than enough space to place equation number in the same row:
\documentclass{beamer}
\usetheme{Madrid}
\usepackage{breqn}
\begin{document}
\begin{frame}
\frametitle{Problem}
\begin{dmath}
{{E}_{0}}\sum\limits_{t=0}^{\infty }\beta _{B}^{t}\left({{D}_{t+1}}+\left( 1+r_{f,t-1} \right)L_{f,t}+\left( 1+r_{g,t-1} \right)L_{g,t}+\left( 1+r_{i,t-1} \right)L_{i,t}-L_{f,t+1}-L_{g,t+1}-L_{i,t+1}-\left( 1+r_{p,t-1} \right){{D}_{t}} -\frac{{{\phi }_{f}}}{2} L_{f,t+1}^{2}-\frac{{{\phi }_{g}}}{2}L_{g,t+1}^{2}-\frac{\phi_{i}}{2}L_{i,t+1}^{2} \right)
\end{dmath}
\end{frame}
\end{document}
When water is added to a weak acid like ethanoic acid, the number of ethanoic acid molecules that dissociate increases, however, pH increases (less acidic), too. Why is this so?
It's simply due to dilution.
If it's true that adding water increases dissociation (Gerard has done a nice effort in Does the number of H+ ions in solution go up on dilution of a weak acid?), concentration of $\ce{H3O+}$ inevitably decreases. As a consequence, $\mathrm{pH}$, defined as $-\log[\ce{H3O+}]$, increases.
Start with $\pu{1 L}$ acetic acid concentration $\pu{1 M}$, $$\ce{[H3O+]} = \sqrt{K_c} = \pu{4.22E-3M},$$ that corresponds to $\pu{4.22E-3mol}$ of $\ce{H3O+}$. If you double the volume ($\pu{2 L}$), $$\ce{[H3O+]}=\sqrt{K_c/2}=\pu{2.98E-3M},$$ that corresponds to $\pu{5.96E-3mol}$ of $\ce{H3O+}$.
The amount of substance is higher, but concentration is less: dilution does play a role here.
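The numbers above can be reproduced with a short script. It solves the exact equilibrium condition $x^2/(C-x)=K_a$ rather than the approximation $x=\sqrt{K_a C}$, so the figures differ slightly in the last digit (the $K_a$ value $1.8\times10^{-5}$ is the usual one for acetic acid):

```python
from math import sqrt, log10

Ka = 1.8e-5  # acetic acid

def h3o(C):
    """[H3O+] for a weak acid of analytical concentration C,
    from x^2 / (C - x) = Ka (positive root of the quadratic)."""
    return (-Ka + sqrt(Ka * Ka + 4 * Ka * C)) / 2

c1, v1 = h3o(1.0), 1.0   # 1 L of 1 M acid
c2, v2 = h3o(0.5), 2.0   # same acid diluted to 2 L

print(-log10(c1), -log10(c2))   # pH rises on dilution...
print(c1 * v1, c2 * v2)         # ...even though moles of H3O+ increase
```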
Let's first examine the chemical reaction between our acids and bases. Ethanoic acid, commonly referred to as acetic acid reacts with our base, water. Water is amphiprotic, in that it can act as both a base and acid.
$$\ce{CH3COOH + H2O <=> CH3COO- + H3O+} $$
However, the acetate ion undergoes hydrolysis with water, which now instead acts as an acid, reforming acetic acid as well as hydroxide ions:
$$\ce{CH3COO^- + H2O <=> CH3COOH + OH-}$$
Acetic acid is a weak acid, so its conjugate base, acetate, is a comparatively strong base. Since the acetate ion hydrolyses water to produce hydroxide ions, we can expect the pH to increase.
What many people often forget to realize is that once an acid/base reaction occurs, a salt is created along with water. However, a hydrolysis reaction between the salt and water can occur.
Taken from Chemical Principles: The Quest for Insight by Atkins et. al on page 494 in the tenth edition:
Calcium acetate, $\ce{Ca(CH3CO2)2 (aq)}$ is used in medicine to treat patients with a kidney disease that results in high levels of phosphate ions in the blood. The calcium binds with the phosphates so that they can be excreted. If you are using calcium acetate for this purpose, it is important to know the pH of the solution to avoid complications in treatment.
Say we have a $\pu{0.15 M}$ solution of $\ce{Ca(CH3CO2)2 (aq)}$.
Since the conjugate base of acetic acid is strong, we can expect the pH to increase.
Our expression can be written as this:
$$\ce{CH3COO^- + H2O <=> CH3COOH + OH-}$$
Calcium is a spectator ion and thus can be ignored.
$$K_b = \ce{\frac{[CH3COOH][OH-]}{[CH3COO-]}}$$
Constructing our RICE table:
$$\ce{CH3COO^- + H2O <=> CH3COOH + OH-}$$
\begin{array} {|c|c|c|c|c|} \hline &\ce{CH3COO^-} & \ce{H2O} & \ce{CH3COOH} & \ce{OH-}\\\hline \text{Initial conc.} & 0.30 & - & 0 & 0 \\ \hline \text{Change conc.} & -x & - & +x & +x\\ \hline \text{End conc.} & 0.30 - x & - & x & x \\ \hline \end{array}
\begin{align} K_b &= 5.6\times10^{-10} = \frac{[x][x]}{[0.30-x]}\\ x &= 1.3 \times 10^{-5}\\ \text{pOH} &= -\log([\ce{OH-}])\\ \text{pH} &= 14 - \text{pOH} = 14 - 4.89 = 9.11 \end{align}
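The arithmetic in the RICE-table calculation can be checked numerically, using the same small-$x$ approximation $x \approx \sqrt{K_b \cdot 0.30}$ as the worked example:

```python
from math import sqrt, log10

Kb = 5.6e-10          # acetate
c0 = 0.30             # mol/L acetate from 0.15 M Ca(CH3CO2)2

x = sqrt(Kb * c0)     # [OH-], assuming x << 0.30
pOH = -log10(x)
pH = 14 - pOH
print(round(pH, 2))   # 9.11
```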
On dilution the extent of ionisation increases, but the pH still rises, because the increase in volume dominates the increase in dissociation: the concentration of $\ce{H+}$ ions decreases even though more acid molecules are ionised.
I was playing with numbers on a calculator and to my amazement I saw that it could calculate $(4.5)!$, or the factorial of any real number. But the factorial is defined only for integers, so how is this done? Is there some advanced function? (Note: I am a grade $11$ student.)
In general, $~n!~=~\displaystyle\int_0^\infty\exp\Big(-\sqrt[n]x\Big)~dx,~$ which for $~n=\dfrac12~$ yields $~\Big(\tfrac12\Big)!~=~\displaystyle\int_0^\infty e^{-x^2}~dx.~$
But the value of the Gaussian integral is known to be $\sqrt\pi~,~$ implying that $~\Big(\tfrac12\Big)!~=~\dfrac{\sqrt\pi}2,~$
since the integrand is even. Now all that's left to do is to repeatedly employ the well-known
factorial property $(n+1)!=(n+1)~n!~$ for $~n+1=4+\dfrac12,~$ and the result follows.
There is a function called the Gamma function. It is similar to the factorial as the factorial could be thought of as a special case of the gamma function.
$\Gamma(n) = (n-1)!$
or rather, when you shift it by one, as shown in the above equation.
The gamma function happens to be
$\Gamma(t) = \int_0^\infty x^{t-1} e^{-x} dx$
Calculators often use the gamma function to calculate factorials of non-natural values.
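For instance, Python's math library exposes this function, and you can check both the integer case and the calculator's $(4.5)!$ (a quick illustration):

```python
import math

# For integers, gamma(n) = (n-1)!
assert math.gamma(6) == math.factorial(5)  # both are 120

# What the calculator computes for 4.5! is gamma(5.5)
print(math.gamma(5.5))  # about 52.34
```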
The generalization is useful when you need to extend the definition of the factorial beyond the natural numbers. For example, some probability distributions use the factorial, and the gamma function can be used to generalize them.
The factorial and gamma function both have some interesting properties in common.
For example, the factorial function can be defined recursively.
$0!=1$
$(n+1)! = (n+1) \times n!$
The gamma function also has this property
$\Gamma (1) = 1$
$\Gamma(x+1) = x \times \Gamma(x) $
It's possible the calculator gave you the value of $\Gamma(5.5)$.
The $\Gamma$ function is a sort of generalization of the factorial in the sense that for every $n\in\mathbb N$, you have that $\Gamma(n) = (n-1)!$. So if you ever want to calculate $m!$, that's the same as calculating $\Gamma(m+1)$.
Along with the gamma function, it is much easier to approximate to good accuracy using Stirling's approximation.
It is defined as:
$$n!\approx \sqrt{2\pi n}\left(\frac ne\right)^n$$
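A quick check of the quality of this approximation; already for $n=10$ the relative error is under $1\%$:

```python
import math

def stirling(n):
    """Stirling's approximation sqrt(2*pi*n) * (n/e)^n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

n = 10
rel_err = abs(stirling(n) - math.factorial(n)) / math.factorial(n)
print(rel_err)  # about 0.0083
```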
The Gamma function works for all real numbers except the non-positive integers. For an integer $n$ it gives the factorial of $n-1$, and it provides an analytic continuation, i.e. a smooth graph, for the in-between inputs.
There is an easy introduction here: http://www.sosmath.com/calculus/improper/gamma/gamma.html
I am looking at a problem of constrained minimization, where the function to be minimized contains the Heaviside function, and as such is not twice continuously differentiable.
My question is what effect would the use of a twice continuously differentiable approximation to the Heaviside function have on the accuracy and efficiency of the optimization?
The problem takes the form
$$\min_x \ \sum_{i=1}^{N} k_{i}\left [ H\left ( x_{i} \right ) -T\right ],$$
$$\text{s.t.} \quad \sum_{i=1}^{N} k_{i}\left [ x_{i} -R\right ] \le 0.01,$$
where $H\left(x \right ) $ is the Heaviside function
and $x,k,T,R\in\mathbb{R}$
Would the use of the approximate Heaviside
$H\left ( x \right )\approx \frac{1}{2}+\frac{1}{2}\tanh(kx)=\frac{1}{1+e^{-2kx}}$
help with the minimization and give a sufficiently accurate result
or would a Legendre polynomial expansion (similar to http://www.phys.ufl.edu/~fry/6346/legendrestep.pdf) be more successful?
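To get a feel for the sigmoid approximation, the following sketch evaluates $\frac{1}{1+e^{-2kx}}$ for increasing steepness $k$; away from $x=0$ the error decays exponentially in $k$, which governs how faithful the smoothed objective is (illustrative values only):

```python
import math

def smooth_step(x, k):
    """Logistic approximation to the Heaviside function, 1/(1 + e^{-2kx})."""
    return 1.0 / (1.0 + math.exp(-2.0 * k * x))

# At x = 0 the approximation always returns 1/2, the midpoint value.
print(smooth_step(0.0, 10.0))  # 0.5

# Away from the step, the error shrinks rapidly as k grows.
for k in (1, 10, 100):
    print(k, abs(1.0 - smooth_step(0.5, k)))
```

The trade-off for the optimizer: larger $k$ means a more accurate objective but steeper (worse-conditioned) gradients near the step.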
This approach will need to be extended to multiple dimensions before implementation; for example, in 3 dimensions the minimization becomes
$\sum_{i}\sum_{j}\sum_{l}k_{1,i}k_{2,j}k_{3,l}\left [ H\left ( x_{ijl}\right )-T \right ],$
and the constraint
$\sum_{i}\sum_{j}\sum_{l} k_{1,i }k_{2,j}k_{3,l}\left [x_{ijl} - R \right] \le 0.01.$
The minimization is covered by the question constrained minimization in N dimensions; this question is about the effect of using an approximation to the step function in the algorithm (and ultimately the choice of algorithm to use).
I have an optimization problem that has a linear objective function. The constraints are of the form: $(Ax \leq b) \wedge (Cx \nless d)$. In other words, I have:
\begin{align} \min &f^T x \notag \\ \text{s.t.} &Ax \leq b \\ &Cx \nless d\\ \end{align}
One way to solve the problem would be to decompose the constraint $Cx \nless d$ into $m$ constraints (assuming $C\in R^{m\times n}$): \begin{align} & c_1^T x &\geq &d_1 \\ \vee & c_2^T x &\geq &d_2 \\ & & \vdots & \\ \vee & c_m^T x &\geq &d_m \\ \end{align} and we end up solving $m$ linear programs that are exactly the same except for one constraint that changes from one LP to another. Global optimum is simply the best among the $m$ optimums obtained.
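The decomposition can be sketched as follows. This is a toy two-variable illustration with a naive vertex-enumeration LP "solver" written just to show the structure of the $m$ subproblems (it assumes each subproblem is bounded with its optimum at a vertex; in practice you would call a real LP library for each subproblem):

```python
import itertools

def lp2(f, A, b):
    """Naive 2-D LP: min f.x s.t. A x <= b, by enumerating constraint
    intersections. Returns (value, vertex) or None if none is feasible."""
    best = None
    for (a1, b1), (a2, b2) in itertools.combinations(list(zip(A, b)), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel constraint boundaries
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        feasible = all(ai[0] * x[0] + ai[1] * x[1] <= bi + 1e-9
                       for ai, bi in zip(A, b))
        if feasible:
            val = f[0] * x[0] + f[1] * x[1]
            if best is None or val < best[0]:
                best = (val, x)
    return best

# Example: min x1 + x2  s.t.  x >= 0  and  (x1 >= 1  or  x2 >= 2)
f = (1.0, 1.0)
A = [(-1.0, 0.0), (0.0, -1.0)]                      # x >= 0 as -x <= 0
b = [0.0, 0.0]
disjuncts = [((1.0, 0.0), 1.0), ((0.0, 1.0), 2.0)]  # c_i . x >= d_i

# One LP per disjunct: append -c_i . x <= -d_i, take the best optimum.
results = [lp2(f, A + [(-c[0], -c[1])], b + [-d]) for c, d in disjuncts]
best = min(r for r in results if r is not None)
print(best[0])  # 1.0, attained on the first disjunct at x = (1, 0)
```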
Can anyone think of a faster way to perform this optimization? What about convex relaxation? How would I relax my problem to a single linear program? How good would a convex relaxation solution be?
Short answer
Does it need to be at $25~^\circ\mathrm{C}$?
No. $\Delta_\mathrm{r} G^\circ$ can be defined at any temperature you wish to define it at, since the standard state does not prescribe a particular temperature. If you change the temperature, $\Delta_\mathrm{r} G^\circ$ will change.
Does $\Delta_\mathrm{r} G^\circ = \Delta_\mathrm{r} H^\circ - T\Delta_\mathrm{r} S^\circ$ always use $T = 298~\mathrm{K}$?
No. You use whatever temperature you are running your reaction at.
(...maths...)
Yes, at equilibrium, $\Delta_\mathrm{r} G = 0$ and $Q = K$.
However, everything after the first bullet point is wrong. You cannot conclude that $\Delta_\mathrm{r} G^\circ = 0$, nor can you conclude that $K = 1$. The equation $\Delta_\mathrm{r} G^\circ = -RT \ln K$ does
not analogously translate into $\Delta_\mathrm{r} G = -RT \ln Q$! The accurate relation is:
$$\Delta_\mathrm{r} G = \Delta_\mathrm{r} G^\circ + RT\ln Q$$
Setting $Q = K$ and $\Delta_\mathrm{r} G = 0$ in this equation does
not tell you anything about the value of $K$. In fact, if you try doing it, all you will find out is that $\Delta_\mathrm{r} G^\circ = -RT \ln K$ - no surprises there!
Long answer
Any book that writes that $\Delta_\mathrm{r} G^\circ$ is the "special case" of $\Delta_\mathrm{r} G$ at $T = 298~\mathrm{K}$ is
wrong.
The
Gibbs free energy of a system is defined as follows:
$$G = H - TS$$
Under constant temperature and pressure (from now on, I will just assume constant $T$ and $p$ without stating it), all systems will seek to minimise their Gibbs free energy. Equilibrium is reached when $G$ is minimised. When $G$ is at a minimum, any infinitesimal change in $G$, i.e. $\mathrm{d}G$, will be $0$. Therefore, this is equivalent to saying that the condition for chemical equilibrium is $\mathrm{d}G = 0$.
Clearly, we need a way to relate this quantity $\mathrm{d}G$ to the actual reactants and products that are in the system. This can be done via the total differential of $G$ (see any physical chemistry text for details):
$$\mathrm{d}G = V\,\mathrm{d}p - S\,\mathrm{d}T + \sum_i \mu_i\,\mathrm{d}n_i$$
Under constant $T$ and $p$, $\mathrm{d}p = \mathrm{d}T = 0$ and therefore
$$\mathrm{d}G = \sum_i \mu_i\,\mathrm{d}n_i$$
where $\mu_i$ is the
chemical potential of species $i$, defined as a partial derivative:
$$\mu_i = \left(\frac{\partial G}{\partial n_i}\right)_{n_{j\neq i}}$$
So, we now have a refined condition for equilibrium:
$$\mathrm{d}G = \sum_i \mu_i\,\mathrm{d}n_i = 0$$
We can go further by noting that the values of $\mathrm{d}n_i$ for different components $i$, $j$, etc. are not unrelated. For example, if we have a reaction $i + j \longrightarrow k$, then for each mole of $i$ that is consumed, we must also use up one mole of $j$; this means that $\mathrm{d}n_i = \mathrm{d}n_j$.
This can be formalised using the idea of a stoichiometric coefficient $\nu_i$, which is defined to be positive for products and negative for reactants. For example, in the reaction
$$\ce{3H2 + N2 -> 2NH3}$$
we have $\nu_{\ce{H2}} = -3$, $\nu_{\ce{N2}} = -1$, and $\nu_{\ce{NH3}} = 2$.
By stoichiometry, if $1.5~\mathrm{mol}$ of $\ce{H2}$ is consumed, then $1~\mathrm{mol}$ of $\ce{NH3}$ has to be produced. We could write $\Delta n_{\ce{H2}} = -1.5~\mathrm{mol}$ and $\Delta n_{\ce{NH3}} = 1~\mathrm{mol}$. These quantities are proportional to their stoichiometric coefficients:
$$\frac{\Delta n_{\ce{H2}}}{\nu_{\ce{H2}}} = \frac{-1.5~\mathrm{mol}}{-3} = 0.5~\mathrm{mol} = \frac{1~\mathrm{mol}}{2} = \frac{\Delta n_{\ce{NH3}}}{\nu_{\ce{NH3}}}$$
The quantity $0.5~\mathrm{mol}$ is the same for every chemical species that participates in the reaction, and it is called the "extent of reaction" and denoted $\Delta \xi$ (that is the Greek letter xi). If the reaction is going forward, then $\Delta \xi$ is positive, and if the reaction is going backwards, then $\Delta \xi$ is negative. If we generalise the above result, we can write
$$\Delta \xi = \frac{\Delta n_i}{\nu_i}$$
and if we make $\Delta n_i$ smaller and smaller until it becomes an infinitesimal, then:
$$\begin{align}\mathrm{d}\xi &= \frac{\mathrm{d}n_i}{\nu_i} \\\mathrm{d}n_i &= \nu_i\,\mathrm{d}\xi\end{align}$$
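As a quick numerical sketch of this proportionality (the amounts are the illustrative ammonia numbers from above; the $\ce{N2}$ change is inferred from stoichiometry):

```python
# Stoichiometric coefficients for 3 H2 + N2 -> 2 NH3 (negative for reactants)
nu = {"H2": -3, "N2": -1, "NH3": 2}

# Changes in amount (in mol) consistent with a single extent of reaction
dn = {"H2": -1.5, "N2": -0.5, "NH3": 1.0}

# Delta(xi) = Delta(n_i) / nu_i should come out the same for every species
extents = {s: dn[s] / nu[s] for s in nu}
print(extents)  # every value is 0.5 (mol)
```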
If we go back to our condition for equilibrium, we can substitute in the above to get:
$$\mathrm{d}G = \sum_i \mu_i\nu_i\,\mathrm{d}\xi = 0$$
Now, $\mathrm{d}\xi$ is no longer dependent on $i$, since we have established already that $\Delta \xi$ (and by extension $\mathrm{d}\xi$) is a constant for all chemical species. So, we can "divide through" by it to get:
$$\Delta_\mathrm{r} G \equiv \frac{\mathrm{d}G}{\mathrm{d}\xi} = \sum_i \mu_i\nu_i = 0$$
where $\Delta_\mathrm{r} G$ is
defined to be $\mathrm{d}G/\mathrm{d}\xi$.
Note that $\Delta_\mathrm{r} G$ is an intensive property and has units of $\mathrm{kJ~mol^{-1}}$, since $\mathrm{d}\xi$ has units of $\mathrm{mol}$. This ensures that the units we use are consistent: since we know that $\Delta_\mathrm{r}G = \Delta_\mathrm{r} G^\circ + RT\ln Q$, $\Delta_\mathrm{r}G$ must have the same units as $RT$.
How do we interpret the physical significance of $\Delta_\mathrm{r} G$, or in other words, what does it even mean? There are two ways, each based on a different mathematical expression.
We have $\Delta_\mathrm{r}G = \sum \nu_i \mu_i$. This means that $\Delta_\mathrm{r}G$ is simply the difference between the chemical potentials of the products and the reactants, weighted by their stoichiometric coefficients. For the reaction $\ce{3H2 + N2 -> 2NH3}$, we have:
$$\Delta_\mathrm{r} G = \sum_i \mu_i\nu_i = 2\mu_{\ce{NH3}} - 3\mu_{\ce{H2}} -\mu_{\ce{N2}}$$
We have $\Delta_\mathrm{r}G = \mathrm{d}G/\mathrm{d}\xi$. This means that it is the slope of a curve of $G$ against $\xi$.
Note that up to this point, we have not stipulated any particular temperature, pressure, amounts of species present, or any conditions whatsoever. We have only said that the temperature and pressure must be constant.
It is important to realise that $\Delta_\mathrm{r}G$ is a well-defined quantity at all $T$, all $p$, and all possible values of $n_i, n_j, \cdots$! The shape of the curve will change when you vary the conditions. However, no matter what the curve looks like, it is always possible to find its gradient ($= \Delta_\mathrm{r}G$) at a particular point.
What exactly, then, is $\Delta G^\circ$? It is just a special case of $\Delta G$, where all the reactants and products are prepared in a
standard state. According to IUPAC, the standard state is defined as:

* For a gas: the pure ideal gas when the pressure $p$ is equal to the standard pressure $p^\circ$.
* For a liquid or solid: the pure liquid or solid at $p = p^\circ$.
* For a solution: the ideal solution when the concentration $c$ is equal to the standard concentration $c^\circ$.
$p^\circ$ is most commonly taken to be $\pu{1 bar}$, although older texts may use the value $\pu{1 atm} = \pu{1.01325 bar}$. Since 1982, IUPAC has recommended the value $\pu{1 bar}$ for the standard pressure (
Pure Appl. Chem. 1982, 54 (6), 1239–1250; DOI: 10.1351/pac198254061239). However, depending on the context, a different value of $p^\circ$ may prove to be more convenient. Likewise, $c^\circ$ is most commonly – but not necessarily – taken to be $\pu{1 mol dm-3}$.
Note that in the above definitions, no temperature is specified. Therefore, by defining the standard Gibbs free energy, we are fixing a particular value of $p$, as well as particular values of $n_i, n_j, \cdots$. However, the value of $T$ is not fixed. Therefore, when stating a value of $\Delta_\mathrm rG^\circ$, it is also necessary to state the temperature which that value applies to.
When a reaction vessel is prepared with all its substances in the standard state, all the components of the system will have an activity of exactly $1$ by definition. Therefore, the reaction quotient $Q$ (which is a ratio of activities) will also be exactly equal to $1$. So, we could also say that $\Delta_\mathrm{r}G^\circ$ is the value of $\Delta_\mathrm{r}G$ when $Q = 1$.
Returning to the graph of $G$ against $\xi$ above, we note that at the left-most point, $Q = 0$ since there are only reactants; at the right-most point, $Q \to \infty$ as there are only products. As we move from left to right, $Q$ increases continuously, so there
must be a point where $Q = 1$. (In general, the point where $Q = 1$ will not be the same as the equilibrium point.) Since $\Delta_\mathrm{r}G$ is the gradient of the graph, $\Delta_\mathrm{r}G^\circ$ is simply the gradient of the graph at that particular point where $Q = 1$.
The gradient of the graph, i.e. $\Delta_\mathrm{r}G$, will vary as you traverse the graph from left to right. At equilibrium, the gradient is zero, i.e. $\Delta_\mathrm{r}G = 0$. However, $\Delta_\mathrm{r}G^\circ$ refers to the gradient at that
one specific point where $Q = 1$. In the example illustrated above, that specific gradient is negative, i.e. $\Delta_\mathrm{r}G^\circ < 0$.
Again, I reiterate that the temperature has nothing to do with this. If you were to change the temperature, you would get an entirely different graph of $G$ versus $\xi$. You can still find the point
on that graph where $Q = 1$, and the gradient of that graph at the point where $Q = 1$ is simply $\Delta_\mathrm{r} G^\circ$ at that temperature.
We have established the qualitative relationship between $\Delta_\mathrm{r} G$ and $\Delta_\mathrm{r} G^\circ$, but it is often useful to have an exact mathematical relation.
$\Delta_\mathrm{r} G^\circ$ is exactly the same as $\Delta_\mathrm{r} G$ except for the imposition of the standard state. It follows that if we take the equation
$$\Delta_\mathrm{r} G = \sum_i \mu_i \nu_i$$
and impose the standard state, we get
$$\Delta_\mathrm{r} G^\circ = \sum_i \mu_i^\circ \nu_i$$
Thermodynamics tells us that
$$\mu_i = \mu_i^\circ + RT\ln{a_i}$$
where $a_i$ is the thermodynamic activity of species $i$. Substituting this into the expressions for $\Delta G$ and $\Delta G^\circ$ above, we obtain the result:
$$\Delta_\mathrm{r} G = \Delta_\mathrm{r} G^\circ + RT\ln Q$$
where the reaction quotient $Q$ is defined as
$$Q = \prod_i a_i^{\nu_i}$$
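As a small sketch of this product formula (the activity values below are invented purely for illustration):

```python
def reaction_quotient(activities, nu):
    """Q = prod_i a_i ** nu_i, with nu_i negative for reactants."""
    q = 1.0
    for species, activity in activities.items():
        q *= activity ** nu[species]
    return q

# 3 H2 + N2 -> 2 NH3, with assumed (dimensionless) activities
nu = {"H2": -3, "N2": -1, "NH3": 2}
a = {"H2": 0.5, "N2": 0.25, "NH3": 0.1}
print(reaction_quotient(a, nu))  # 0.1**2 / (0.5**3 * 0.25) = 0.32
```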
When equilibrium is reached, we necessarily have $\Delta_\mathrm{r} G = 0$ (see the discussion above). The equilibrium constant $K$ is defined to be the value of $Q$ at equilibrium. Therefore, at equilibrium, $Q = K$. Plugging this into the equation above gives us the famous equation:
$$\Delta_\mathrm{r} G^\circ = -RT\ln K$$
Again,
no temperature is specified! In general, $K$ depends on the temperature as well; the relationship is given by the van 't Hoff equation.
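As a numerical sketch of $\Delta_\mathrm{r} G^\circ = -RT\ln K$ (the sample value of $\Delta_\mathrm{r}G^\circ$ below is an assumption, not a measured number):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def K_from_dG(delta_rG_std, T):
    """Invert Delta_rG_std = -R*T*ln(K) for the equilibrium constant."""
    return math.exp(-delta_rG_std / (R * T))

# Illustrative: Delta_rG_std = -10 kJ/mol at 298 K gives K of a few tens
print(K_from_dG(-10e3, 298))
```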
I have always thought that there is two solutions to the square root of a real number, one being positive and the other being negative. However, in Penrose's book, A Road to Reality, he seems to claim that $e^{1/2}$ will always give a positive answer, since $e^n$ is defined as $1+ \frac{n}{1!} +\frac{n^2}{2!} + \ldots$, so substituting $\frac{1}{2}$ into the equation will give us a positive number. And so logarithm defined with base $e$ will be unambiguous since there is only one answer for every $e^n$. However, I find it quite puzzling because $-e^{1/2}$ will also give me $e$ when squared, doesn't it?
An old chestnut (and I'm sure there are many other posts on this, but here's another short attempt):
For real numbers $x$ which are zero or positive, the square root of $x$ is defined to be the real number $y \geq 0$ such that $y^2 = x$. It's easy to see that $y$ is unique and we denote it by the symbols $\sqrt{x}$. We make the square root unique in part so that the function $f(x) = \sqrt{x}$ is well defined.
If you then ask the question: what are the solutions of $x^2 = c$, for a positive, real number $c$, then there are two: $x = \sqrt{c}$ and $x = -\sqrt{c}$.
Similarly, the reason why the formula for solving the generic quadratic equation over the reals of $ax^2 + bx + c = 0$ has a $\pm$ symbol is because the expression $\sqrt{b^2 - 4ac}$ has a unique value when it exists.
So what Penrose is doing is using the established convention for what $\sqrt{x}$ means for a positive real number $x$.
I think the confusion arises because you are reading $e^{1/2}$ as "the set of solutions to $x^2=e$", and Penrose is defining $e^{1/2}$ as $\sum_{k\ge0}\frac{(1/2)^k}{k!}$. He is not using $e^{1/2}$ the way $y^{1/2}$ is defined in complex analysis, but rather as a notation for a function given by a convergent series. It would be more clear if he defined $$ \exp(n)=\sum_{k\ge0}\frac{n^k}{k!} $$ and then said that $\exp(1/2)$ is unambiguously defined.
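A quick numerical sketch of that point: the partial sums of the series converge to the single positive number $\sqrt{e}$; the series has no way of producing $-\sqrt{e}$.

```python
import math

def exp_series(x, terms=30):
    """Partial sum of exp(x) = sum_{k>=0} x**k / k!."""
    return sum(x**k / math.factorial(k) for k in range(terms))

val = exp_series(0.5)
print(val, math.sqrt(math.e))  # both approximately 1.64872
```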
If the equation $x^2-e=0$ didn't have the two roots $\pm\sqrt e$, some fundamental theorems of algebra would have to be revised. But of course, the exponential function must also be well defined for $\displaystyle x=\frac{1}{2}$.
I guess it is not correct to claim $\displaystyle a^{\frac{1}{2}}=\pm\sqrt a$.
What is the best or most popular symbol for vector/matrix transpose? I have used simply
^T, for example
$c^T x$. I think it is ugly, mainly because it is a little too big compared with vector variables usually denoted by lower-case characters. Can you suggest a better one?
It's always difficult to answer questions for "the best" or "most popular". As is mentioned, these are typically opinions. But you did say that your objection was the fact that the "T" symbol was too big. Therefore, I would recommend the
\intercal symbol to produce a "T" which isn't so big. Also, writing the vectors and matrices in bold seems, in my opinion, to make it look a little better. Try the following code:
\documentclass{article}
\usepackage{amsmath,amsfonts,amssymb}
\begin{document}
$\mathbf{A}^\intercal$\\
$\mathbf{c}^\intercal \mathbf{x}$\\
$c^T x$\\
$\mathbf{M}^\top$
\end{document}
The Comprehensive LaTeX Symbol List says the following:
Some people use a superscripted
\intercal for matrix transpose:
A^\intercal. (See the May 2009 comp.text.tex thread, "raising math symbols", for suggestions about altering the height of the superscript.)
\top,
T, and
\mathsf{T} are other popular choices.
In order to give some reference:
(DIN) EN ISO 80000-2:2013 writes it like the following.
% arara: lualatex
\documentclass{article}
\usepackage{mathtools}
\usepackage{unicode-math}
\setmathfont{XITS Math}
\newcommand*{\matr}[1]{\mathbfit{#1}}
\newcommand*{\tran}{^{\mkern-1.5mu\mathsf{T}}}
\newcommand*{\conj}[1]{\overline{#1}}
\newcommand*{\hermconj}{^{\mathsf{H}}}
\begin{document}
\[\matr{A}\tran\]
\[\matr{A}\hermconj\coloneqq(\conj{\matr{A}})\tran\]
\end{document}
The good part here is that the 'big' T visually fits to the H of the same size which might be used for the Hermitian conjugate matrix.
I prefer
A^\mathsf{T} which looks clean and is the right size.
In my opinion, the serifs in
A^T are distracting, the T is set too low in
A^\intercal, the T in
A^\top is too thin and too big, and
A^* implies the presence of complex numbers.
In any case, it's always good to use a macro in case you change your mind later.
\documentclass{article}
\usepackage{amssymb,amsmath}
\usepackage{relsize}
\begin{document}
$A^T\ A^{\mathsmaller T}$
\end{document}
The symbol
\intercal is quite a nice symbol for transpose, but it is placed a little low. Therefore the example defines
\transpose to use a
\intercal, which is shifted to the baseline. The symbol size adapts to the current math style.
\documentclass{article}
\usepackage{amssymb}
\makeatletter
\newcommand*{\transpose}{%
  {\mathpalette\@transpose{}}%
}
\newcommand*{\@transpose}[2]{%
  % #1: math style
  % #2: unused
  \raisebox{\depth}{$\m@th#1\intercal$}%
}
\makeatother
\newcommand*{\test}[1]{%
  \[
    \mathbf{M}^{#1} \;
    \scriptstyle \mathbf{M}^{#1} \;
    \scriptscriptstyle \mathbf{M}^{#1}
  \]%
}
\begin{document}
  \test{\transpose}
  \test{\intercal}
  \test{\mathsf{T}}
  \test{\top}
\end{document}
Personally I often use the conjugate transpose instead. For real matrices this concept coincides with the transpose, for matrices over the complex field the conjugate is usually what you want anyway. The conjugate transpose of a matrix
A is denoted
A^*.
I use the pre-exponent t in upright shape, either with mathtools package, based on the code:
\prescript{\mathrm t}{}{A}, or using the
\ltrans command from
leftidx package. Here is code that tries to take into account different situations, involving different math kerning, nested transposes, and so on. I define a
\transp command, with an optional argument, the math kerning (defaults to
-3mu) between the t prescript and the ‘prescripted’ expression that follows.
\documentclass[12pt, a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{mathtools}
\newcommand*{\transp}[2][-3mu]{\ensuremath{\mskip1mu\prescript{\smash{\mathrm t\mkern#1}}{}{\mathstrut#2}}}
\begin{document}
Matrix transposition has the following properties:
\begin{align*}
\transp{(\mskip-1mu AB)} & = \transp{B}\transp{A} \\
\transp{(\transp{A})} & = A \\
\transp{(\mathrm N + \mathrm P)} & = \transp[0mu]{\mathrm N} + \transp[0mu]{\mathrm P}
\end{align*}
\end{document}
Conjugate transpose is in physics often denoted by
^\dagger because of its association with adjoint operators.
There are some good suggestions regarding which symbol to use, it is a good idea to define your own macros for indicating matrices, vectors, and transpose, so that you can write:
\MAT A \VEC b^\TRANSPOSE
This will make it easy to change the notation in the future, if you ever need to do so. In addition, the source is more readable than
\mathbf A \mathbf b^\intercal etc.
Maybe you could use
$\mathbf{C}^{^\mathrm{T}}$ to get a raised, smaller transpose T.
A^{\tau} looks best to me. I tried others, but the T was still too big.
The class $\Sigma^1$ of symbols on $\mathbb R^{2n}$ is made of $C^\infty$ functions $a$ of $X=(x,\xi)\in \mathbb R^n\times\mathbb R^n$ such that $$\vert\partial_X^\alpha a\vert\le C_\alpha(1+\vert X\vert)^{2-\vert \alpha\vert}.$$ Assuming that $a\in\Sigma^1$ is real-valued and of principal type, and denoting by $A$ its Weyl quantization, using the fact that $A$ is continuous on the Schwartz space $\mathscr S(\mathbb R^n)$ and on its dual (tempered distributions) $\mathscr S'(\mathbb R^n)$, we may define the maximal extension $H$ of $A$ with the domain $$D(H)=\{u\in L^2(\mathbb R^n),\ Au \in L^2(\mathbb R^n) \}.$$
Then I claim that $H$ is self-adjoint. I believe that it is well-known and I am looking for a reference in the literature.
A related question is the same problem for first-order pseudo-differential operators on a compact manifold without boundary $\mathcal M$ (equipped with a smooth density): let $A$ be a first-order pseudo-differential operator on $\mathcal M$ (I do not want to assume ellipticity, but I know that $A$ is continuous on $C^\infty(\mathcal M)$ and on the distributions on $\mathcal M$) and assume that $A$ is symmetric, that is such that for $\phi, \psi\in C^\infty(\mathcal M)$ $\langle A\phi,\psi \rangle=\langle \phi,A\psi \rangle_{L^2(\mathcal M)}$. Then consider the maximal extension $H$ of $A$ with $$ D(H)=\{u\in L^2(\mathcal M), Au \in L^2(\mathcal M) \}. $$ Then $H$ is self-adjoint. Is it true and well-known?
Last but not least, dropping the compactness assumption on $\mathcal M$ in the second question above, assuming that $A$ is properly supported, can I get the same result?
If you’ve worked in computational fluid dynamics, then you’re probably aware of the Taylor-Green vortex – at least the two-dimensional case. A simple google search will land you on this wikipedia page. The classic solution there is presented in the form
\begin{equation} u = \cos x \sin y F(t);\quad v = -\sin x \cos y F(t);\quad F(t) = e^{-2\nu t} \end{equation}
There are also many variants on this form, some exchanging the sine and cosine (such as McDermott and Almgren et al.), but the most interesting variation is the one presented by McDermott: a non-stationary form (in space), i.e. the vortices advect in space on an Eulerian computational grid.
There are four points that I want to address in this post:
1. It seems that the wrong paper is always cited when referring to the original work for this TG vortex solution.
2. In light of the previous point, we should call the 2D solution the Taylor vortex and keep the Taylor-Green designation for the 3D initial condition.
3. How the heck do you obtain solutions that are non-stationary in space? (Thanks to Lee Shunn for opening my mind about this.)
4. Finally, I want to actually derive the TG vortex solution and discuss it a bit further.

Point 1: We are citing the wrong paper
I am the first to admit – I have always cited the wrong paper when referring to the 2d TG vortex – namely:
Taylor, G. I., & Green, A. E. (1937). Mechanism of the production of small eddies from large ones. Proceedings of the Royal Society of London. Series A-Mathematical and Physical Sciences, 158(895), 499.
For the life of me I cannot figure out how to recover the 2D solution from that paper. The correct paper to cite for the 2D vortex is the following:
G.I. Taylor F.R.S. (1923) LXXV. On the decay of vortices in a viscous fluid, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 46:274, 671-674, DOI: 10.1080/14786442308634295
Point 2: The 2D solution should be called the Taylor vortex

In light of the previous point, we should refer to the 2D solution as the Taylor vortex and reserve the Taylor-Green designation for the 3D case.
Point 3: How to obtain “advecting” solutions on an Eulerian grid?
Simple: thanks to the mysterious art of Galilean transformations! Let's say you find a solution for the velocity field $\mathbf{u}(\mathbf{x},t)$ in an inertial reference frame $(\mathbf{x},t)$ and now you want your solution in a moving frame $(\mathbf{x}'=\mathbf{x} - \mathbf{c} t,\ t)$. Then the solution in the moving frame, $\mathbf{u}'$, is
\begin{equation} \mathbf{u}'(\mathbf{x}',t) = \mathbf{u}(\mathbf{x},t) - \mathbf{c} \end{equation} So that, in practice, where we implement our solutions on an Eulerian grid, we simply set \begin{equation} \mathbf{u}(\mathbf{x}, t) = \mathbf{c} + \mathbf{u}_0(\mathbf{x}-\mathbf{c} t,\ t) \end{equation} where $\mathbf{u}_0$ is the stationary solution. I owe it to Lee Shunn of Cascade Technologies for pointing that out to me in a private communication regarding one of his manufactured solutions for variable density flows.

Point 4: How did Taylor do it?
Taylor is like Euler, not only by virtue of their names rhyming, but also by virtue of their contributions to fluid mechanics. His approach was based on a vorticity-streamfunction formulation (heavily used in the rocket-motor stability analysis community; see Culick, Flandro, Majdalani, Saad) to eliminate the pressure. In two dimensions, the only non-zero component of vorticity is
\begin{equation} \omega_z = \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \end{equation} Now, using the streamfunction ($u=-\frac{\partial \psi}{\partial y}$, $v = \frac{\partial \psi}{\partial x}$) \begin{equation} \omega_z = \nabla^2 \psi \end{equation} Great so far. Now we write the vorticity transport equation \begin{equation} \frac{\partial\omega_z}{\partial t}+\mathbf{u}\cdot\nabla\omega_z=\nu\nabla^{2}\omega_z \end{equation} or \begin{equation} \left( \frac{\partial}{\partial t}+\mathbf{u}\cdot\nabla -\nu\nabla^{2}\right)\omega_z=0 \end{equation} Now Taylor makes a beautiful assumption: he sets $\omega_z = k \psi$. As I will explain later (maybe in the future), this is entirely possible; it only means that lines of constant vorticity are also lines of constant streamfunction. More generally, this type of flow belongs to a general class of flows known as Generalized Beltrami flows, where $\nabla\times(\boldsymbol{\Omega}\times\mathbf{u}) = 0$.
In 2d, this implies that the vorticity is an arbitrary function of $\psi$, i.e. $\omega_z = f(\psi)$. Taylor’s choice is simply $f(\psi) = k \psi$. This assumption leads to two simplifications:
We now have $\nabla ^2 \psi = k \psi$ The advection term in the vorticity equation disappears by virtue of the fact that $\mathbf{u}$ is perpendicular to $\nabla \psi$
The vorticity transport equation now reduces to
\begin{equation} \frac{\partial \psi}{\partial t} - k\nu\psi=0 \end{equation} which leads to the general solution \begin{equation} \psi = F(x,y) e^{k\nu t} \end{equation} where $F(x,y)$ is a solution to (substitute $\psi = F e^{k\nu t}$ into the equation $\nabla^2 \psi = k \psi$) \begin{equation} \nabla^2 F = k F(x,y) \label{eq:f-equation} \end{equation} Generic solutions for $F$ are therefore of the sine, cosine, or exponential type. Taylor then proceeds to set a solution of the form \begin{equation} F(x,y) = A \cos (\pi \frac{x}{d}) \cos (\pi \frac{y}{d}) \end{equation} Substitution into \eqref{eq:f-equation} leads to \begin{equation} k = -\frac{2\pi^2}{d^2} \end{equation} and the final solution is \begin{equation} \psi = A \cos (\pi \frac{x}{d}) \cos (\pi \frac{y}{d}) e^{-\frac{2\pi^2}{d^2} \nu t} \end{equation} This leads to the classical velocity field for the 2D Taylor vortex \begin{equation} u = A \frac{\pi}{d} \cos (\pi \frac{x}{d}) \sin (\pi \frac{y}{d})e^{-\frac{2\pi^2}{d^2} \nu t}; \\ v = - A \frac{\pi}{d} \sin (\pi \frac{x}{d}) \cos (\pi \frac{y}{d}) e^{-\frac{2\pi^2}{d^2} \nu t} \end{equation} or, more generally \begin{equation} \boxed{\psi = A \cos (\alpha x) \cos (\beta y) e^{-(\alpha^2 + \beta^2) \nu t}} \end{equation} \begin{equation} u = A \beta \cos (\alpha x) \sin (\beta y)e^{-(\alpha^2 + \beta^2) \nu t}; \\ v = - A \alpha\sin (\alpha x) \cos (\beta y)e^{-(\alpha^2 + \beta^2) \nu t} \end{equation} If implemented in an Eulerian computational code, this solution will just decay in time. To obtain advecting solutions, apply the Galilean transformation discussed above. This leads to solutions of the form \begin{equation} \boxed{ u = u_f + A \beta \cos [\alpha (x-u_f t)] \sin [\beta (y - v_f t)]e^{-(\alpha^2 + \beta^2) \nu t}; \\ v = v_f - A \alpha \sin [\alpha (x-u_f t)] \cos [\beta (y - v_f t)]e^{-(\alpha^2 + \beta^2) \nu t}} \end{equation}
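As a sanity check (a sketch, assuming SymPy is available), the boxed stationary solution can be verified symbolically: the streamfunction below satisfies incompressibility and the full vorticity transport equation.

```python
import sympy as sp

x, y, t, A, alpha, beta, nu = sp.symbols('x y t A alpha beta nu', real=True)

# General stationary Taylor vortex from the boxed solution above
decay = sp.exp(-(alpha**2 + beta**2) * nu * t)
psi = A * sp.cos(alpha * x) * sp.cos(beta * y) * decay
u = -sp.diff(psi, y)  # u = -dpsi/dy
v = sp.diff(psi, x)   # v =  dpsi/dx

# Incompressibility: du/dx + dv/dy = 0
div = sp.simplify(sp.diff(u, x) + sp.diff(v, y))

# Vorticity transport: d(omega)/dt + u.grad(omega) - nu*laplacian(omega) = 0
omega = sp.diff(v, x) - sp.diff(u, y)
residual = sp.simplify(
    sp.diff(omega, t)
    + u * sp.diff(omega, x) + v * sp.diff(omega, y)
    - nu * (sp.diff(omega, x, 2) + sp.diff(omega, y, 2))
)
print(div, residual)  # both 0
```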
Another interesting form (used by Almgren and McDermott) is
\begin{equation}
\boxed{\psi = -A \cos (\alpha x) \cos (\beta y) e^{-(\alpha^2 + \beta^2) \nu t}} \end{equation} \begin{equation} u = -A \beta \cos (\alpha x) \sin (\beta y)e^{-(\alpha^2 + \beta^2) \nu t}; \\ v = A \alpha\sin (\alpha x) \cos (\beta y)e^{-(\alpha^2 + \beta^2) \nu t} \end{equation}
or, for an advecting solution
\begin{equation}
u = u_f - A \beta \cos [\alpha (x-u_f t)] \sin [\beta (y - v_f t)]e^{-(\alpha^2 + \beta^2) \nu t}; \\ v = v_f + A \alpha \sin [\alpha (x-u_f t)] \cos [\beta (y - v_f t)]e^{-(\alpha^2 + \beta^2) \nu t} \end{equation}

Generalized Beltrami Flows
Taylor's vortex belongs to a class of flows known as generalized Beltrami flows. These are flows where $\nabla\times(\boldsymbol{\Omega}\times\mathbf{u}) = 0$.
To be continued…
A
food web is a representation of who eats whom.
A
qualitative food web provides for every pair of vertices just the information whether or not one feeds on the other.
This can be represented e.g. by some
directed graph with a set of vertices $V$ and a set of arrows $A$ between the vertices. An arrow has the form $$a: v_1\rightarrow v_2$$ for some vertices $v_1, v_2 \in V$, meaning $v_1$ feeds on $v_2$.
A
quantitative food web provides for every pair of vertices a quantity (e.g. in the form of a non-negative real number) defining how much one feeds on the other.
This can be represented e.g. by some
weighted directed graph with a set of vertices $V$, a set of arrows $A$ between the vertices, and additionally a weight in $\mathbb{R}^{\geq0}$ for each arrow. An arrow has the form $$a: v_1\rightarrow_x v_2$$ for some vertices $v_1, v_2 \in V$ and a non-negative real number $x\in\mathbb{R}^{\geq0}$, meaning $v_1$ feeds on $v_2$ to an extent of $x$.
The specification of the weights $x$ can vary (bio mass, energy, ...).
The
arrows in a graph representation of a food web represent a feeding relationship - the exact specification can be versatile: qualitative, quantitative (diverse units).
The
vertices in a graph representation of a food web can be species (i.e. grouping according to genetics), traits (i.e. grouping according to certain properties or combinations of properties).
Community. In ecology, a community is a group or association of populations of two or more different species occupying the same geographical area at a particular time.
See Wikipedia: Community
Thereby a
species based food web (i.e. a representation of the feeding relationship between species) that is e.g. represented as a graph with a set of vertices $V$ and arrows $A$ implicitly also represents a community: forget the representation of the feeding relations $A$ and regard just the set of the included species $V$.
A
trait based food web can also be regarded as a community in the above sense. E.g. assume the food web is represented as a graph with a set of vertices $V$ (not being species) and arrows $A$. Assume the traits are about individuals (respectively species). Then translating $V$ in the case of species based traits to the set of involved species:
$$S(V):= \{s|s \text{ is a species for which some trait }v\in V \text{ is true}\}$$
or in the case of individual based traits:
$$S(V):= \{s|s \text{ is the species of some individual for which some trait }v\in V \text{ is true}\}$$
yields again a community.
Summarized. You should always be able to obtain a community from a food web by forgetting the feeding relationship and taking all involved species.
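A minimal sketch of this "forgetting" operation (the species names and weights are invented for illustration): store the weighted arrows as a map, then project out the vertex set.

```python
# Arrows (predator, prey) -> weight x >= 0: "predator feeds on prey to extent x"
food_web = {
    ("fox", "rabbit"): 0.7,
    ("fox", "mouse"): 0.3,
    ("owl", "mouse"): 1.0,
    ("rabbit", "grass"): 1.0,
    ("mouse", "grass"): 0.8,
}

def community(web):
    """Forget the feeding relation A and keep the set of involved species V."""
    return {species for arrow in web for species in arrow}

print(sorted(community(food_web)))  # ['fox', 'grass', 'mouse', 'owl', 'rabbit']
```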
From what I understand, it sounds like the Gibbs Free Energy change of a reversible reaction at equilibrium is zero. However, since I know that Gibbs Free energy change depends on temperature, does this not imply that equilibrium can only ever be reached at one very specific temperature? This doesn't sound right, as I know that equilibrium can be established at lots of different reaction temperatures, although the position of equilibrium changes with temperature of course. I asked three chemistry teachers at my school, who all seem just as baffled as I am about this.
it sounds like the Gibbs Free Energy change of a reversible reaction at equilibrium is zero
Yes, that is true. The Gibbs free energy change is zero when the reaction has reached equilibrium, i.e. when the reaction quotient $Q$ is equal to the equilibrium constant $K$. You can express the Gibbs free energy change in terms of the standard Gibbs free energy change and the reaction quotient:
$$\Delta G = \Delta G^\circ + R T \ln(Q) = 0$$
No matter what the value of $\Delta G^\circ$, there is always a matching $Q$ that will result in a $\Delta G$ of zero. This is why at any temperature, the reaction will be able to reach equilibrium.
However, since I know that Gibbs Free energy change depends on temperature, does this not imply that equilibrium can only ever be reached at one very specific temperature?
As explained above, as long as $Q$ is able to change, there is a state of equilibrium for any value of $\Delta G^\circ$, i.e. at any temperature. However, if all of the reactants and products are pure liquids or solids, the expression for $Q$ is simply 1, and so $\Delta G$ does not change when the reaction proceeds forward or backward. In those cases, $\Delta G^\circ$ has to be zero in order to attain equilibrium, and that only happens at a specific temperature.
There are some electrochemical reactions where all reactant and product species are solids. This is great for making a battery because the voltage won't drop as you discharge the battery. Another more familiar example is the process of ice melting:
$$\ce{H2O(s) <=> H2O(l)}$$
The equilibrium constant expression and the reaction quotient for this process is simply 1. There is only one temperature at normal pressure where ice and liquid water exist side by side, and this temperature is called the normal melting point of water. At a temperature higher than that, water is all liquid, and at a temperature lower than that, it is all ice.
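A small numerical sketch: since $Q = 1$ here, equilibrium requires $\Delta G^\circ = \Delta H^\circ - T\Delta S^\circ = 0$, which pins down a single temperature. Using approximate literature values for the fusion of ice (an assumption on my part):

```python
# Approximate fusion values for H2O (assumed literature numbers)
delta_H_fus = 6010.0  # J/mol
delta_S_fus = 22.0    # J/(mol K)

# Delta_G_std = Delta_H - T*Delta_S = 0 at exactly one temperature
T_eq = delta_H_fus / delta_S_fus
print(round(T_eq, 1), "K")  # close to 273 K, the normal melting point
```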
In general, any expression for $G$ will by necessity have a dependence on $T$, if only through the explicit incorporation of temperature in $G = H - TS$.
Looking at the Calphad (Calculation of Phase Diagram) community, they go further. Referring to A.T. Dinsdale, "SGTE Data for Pure Elements", CALPHAD 15(4) 317-425 (1991), they state, for the purposes of defining $G$ for the elements:
The Gibbs energy is represented as a power series in terms of temperature $T$ in the form: $ G = a + bT + cT \ln(T) + \Sigma d_{n}T^{n}$
From the definition of $G$, $S$, and $H$ one can then get:
$S = -b-c-c\ln(T)-\Sigma nd_{n}T^{n-1}$ and
$H = a-cT-\Sigma (n-1)d_{n}T^{n}$
So, one sees, given the chosen representation for $G$, that both $H$ and $S$ should depend on $T$ unless $G$ can be represented in a very simple form (only $a$ and $b$ being non-zero). Since Dinsdale uses a more complex power series, one can rest assured that $G$, even for the elements, tends to be rich in temperature dependence.
Assume that $\Gamma$ is a group with neutral element $e$. We associate to $\Gamma$ the following groupoid $G$:
$G=\Gamma \times \Gamma,\;\;\;G^{(0)}=\Gamma \times \{e\},\;\;s(a,b)=(a,e),\;\;\; r(a,b)=(ba, e)$
If $\phi:\Gamma_{1}\to \Gamma_{2}$ is a group isomorphism, then $\tilde{\phi}:G_{1} \to G_{2}$ with $\tilde{\phi}(a,b)=(\phi(a), \phi(b))$ is a groupoid isomorphism. So isomorphic groups give us isomorphic groupoids. Now we ask about the converse:
Are there two non-isomorphic groups $\Gamma_{1}, \Gamma_{2}$ such that the corresponding groupoids $G_{1}, G_{2}$ are isomorphic?
Note that a groupoid isomorphism between $G_{1}, G_{2}$ does not necessarily come from a group isomorphism between $\Gamma_{1}, \Gamma_{2}$, as constructed above. An easy example can be provided by $\Gamma_{1}=\Gamma_{2}=\mathbb{Z}/2\mathbb{Z}$.
This situation is a motivation for the above question.
SageMath

{{start}}
{{Related|Matlab}}
{{Related|Octave}}
{{Related|Mathematica}}
{{end}}

Sage provides support for the following:
* '''Calculus''': using [[Wikipedia:Maxima (software)|Maxima]] and [[Wikipedia:SymPy|SymPy]].
* '''Linear Algebra''': using the [[Wikipedia:GNU Scientific Library|GSL]], [[Wikipedia:SciPy|SciPy]] and [[Wikipedia:NumPy|NumPy]].
== Installation ==
Install the {{ic|sagemath}} package.
== Usage ==
Sage mainly uses Python as a scripting language with a few [http://.sagemath.org//tutorial/afterword.html#section-mathannoy modifications] to make it better suited for mathematical computations.

=== Command-line ===
Sage can be started from the command-line:
 $ sage
For information on the command-line see [http://.sagemath.org///.html this page].
The command-line is based on the IPython shell so you can use all its [http://.sagemath.org//tutorial/interactive_shell.html tricks] with Sage. For an extensive tutorial on IPython see the community maintained [http://wiki.ipython.org/Cookbook IPython Cookbook].
Note, however, that it is not very comfortable for some uses such as plotting. When you try to plot something, for example:
 sage: plot(sin,(x,0,10))
Sage opens the plot in an external viewer.
=== Sage Notebook ===
=== Sage Notebook ===
−
A better suited interface for advanced usage in
+ + +
A better suited interface for advanced usage in is the Notebook .
To start the Notebook server from the command-line, execute:
To start the Notebook server from the command-line, execute:
$ sage -n
$ sage -n
Line 45: Line 52:
$ sage -c "notebook(automatic_login=True)"
$ sage -c "notebook(automatic_login=True)"
−
For a more comprehensive tutorial on the Sage Notebook see the [http://
+
For a more comprehensive tutorial on the Sage Notebook see the [http://.sagemath.org//reference/notebook.html Sage documentation]. For more information on the {{ic|notebook()}} command see [http://.sagemath.org//reference/sagenb/notebook/.html this page].
− + +
the notebook ,
+ +
[[]] .
=== Cantor ===
=== Cantor ===
−
[http://edu.kde.org/applications/mathematics/cantor/ Cantor] is an application included in the KDE Edu Project. It acts as a front-end for various mathematical applications such as Maxima,
+
[http://edu.kde.org/applications/mathematics/cantor/ Cantor] is an application included in the KDE Edu Project. It acts as a front-end for various mathematical applications such as Maxima, , Octave, Scilab, etc. See the [http://wiki.sagemath.org/Cantor Cantor page] on the Sage wiki for more information on how to use it with .
− − − − − − − − − +
of {{|-}} {{|}} .
== Optional additions ==
== Optional additions ==
=== SageTeX ===
=== SageTeX ===
−
If you have
+
If you have [[TeX Live]] on your system, you may be interested in [http://.sagemath.org//tutorial/sagetex.html using SageTeX], a package that makes the inclusion of code in LaTeX files possible. TeX Live is made aware of SageTeX automatically so you can start using it straight away.
As a simple example, here is how you include a Sage 2D plot in your TEX document (assuming you use {{ic|pdflatex}}):
As a simple example, here is how you include a Sage 2D plot in your TEX document (assuming you use {{ic|pdflatex}}):
Line 89: Line 91:
* compile your document with the following procedure:
* compile your document with the following procedure:
$ pdflatex <doc.tex>
$ pdflatex <doc.tex>
−
$ sage <doc.sage>
+
$ sage <doc.sage>
$ pdflatex <doc.tex>
$ pdflatex <doc.tex>
Line 110: Line 112:
== See also ==
== See also ==
* [http://www.sagemath.org/ Official Website]
* [http://www.sagemath.org/ Official Website]
−
* [http://
+
* [http://.sagemath.org/ Documentation]
* [http://planet.sagemath.org/ Planet Sage]
* [http://planet.sagemath.org/ Planet Sage]
−
* [http://wiki.sagemath.org/
+
* [http://wiki.sagemath.org/ Wiki]
−
* [http://www.sagemath.org/links-components.html Software Used by
+
* [http://www.sagemath.org/links-components.html Software Used by ]
Latest revision as of 17:04, 10 May 2019
SageMath (formerly Sage) is a program for numerical and symbolic mathematical computation that uses Python as its main language. It is meant to provide an alternative for commercial programs such as Maple, Matlab, and Mathematica.
SageMath provides support for the following:
* Calculus: using Maxima and SymPy.
* Linear Algebra: using the GSL, SciPy and NumPy.
* Statistics: using R (through RPy) and SciPy.
* Graphs: using matplotlib.
* An interactive shell: using IPython.
* Access to Python modules: such as PIL, SQLAlchemy, etc.

Installation

The main package contains the command-line version; additional packages provide HTML documentation, inline help from the command line, and a kernel for the Jupyter notebook interface.

Note: Most of the standard Sage packages are available as optional dependencies of the package or in the AUR, so they have to be installed additionally as normal Arch packages in order to take advantage of their features. Note that there is no need to install them with sage -i; in fact, this command will not work if you installed SageMath with pacman.
Usage
SageMath mainly uses Python as a scripting language with a few modifications to make it better suited for mathematical computations.
SageMath command-line
SageMath can be started from the command-line:
$ sage
For information on the SageMath command-line see this page.
Note, however, that it is not very comfortable for some uses such as plotting. When you try to plot something, for example:
sage: plot(sin,(x,0,10))
SageMath opens the plot in an external application.
Sage Notebook

Note: The SageMath Flask notebook is deprecated in favour of the Jupyter notebook. The Jupyter notebook is recommended for all new worksheets. You can use the application to convert your Flask notebooks to Jupyter.
A better suited interface for advanced usage in SageMath is the Notebook. To start the Notebook server from the command-line, execute:
$ sage -n
The notebook will be accessible in the browser from http://localhost:8080 and will require you to login.
However, if you only run the server for personal use, and not across the internet, the login will be an annoyance. You can instead start the Notebook without requiring login, and have it automatically pop up in a browser, with the following command:
$ sage -c "notebook(automatic_login=True)"

Jupyter Notebook
SageMath also provides a kernel for the Jupyter notebook in the package. To use it, launch the notebook with the command
$ jupyter notebook
and choose "SageMath" in the drop-down "New..." menu. The SageMath Jupyter notebook supports LaTeX output via the %display latex command, and 3D plots if the relevant optional package is installed.
Cantor
Cantor is an application included in the KDE Edu Project. It acts as a front-end for various mathematical applications such as Maxima, SageMath, Octave, Scilab, etc. See the Cantor page on the Sage wiki for more information on how to use it with SageMath.
Cantor can be installed with its own package, or as part of the corresponding groups, available in the official repositories.
Optional additions

SageTeX
If you have TeX Live installed on your system, you may be interested in using SageTeX, a package that makes the inclusion of SageMath code in LaTeX files possible. TeX Live is made aware of SageTeX automatically so you can start using it straight away.
As a simple example, here is how you include a Sage 2D plot in your TeX document (assuming you use pdflatex):
* include the sagetex package in the preamble of your document with the usual \usepackage{sagetex}
* create a sagesilent environment in which you insert your code:

\begin{sagesilent}
dob(x) = sqrt(x^2 - 1) / (x * arctan(sqrt(x^2 - 1)))
dpr(x) = sqrt(x^2 - 1) / (x * log( x + sqrt(x^2 - 1)))
p1 = plot(dob,(x, 1, 10), color='blue')
p2 = plot(dpr,(x, 1, 10), color='red')
ptot = p1 + p2
ptot.axes_labels(['$\\xi$','$\\frac{R_h}{\\max(a,b)}$'])
\end{sagesilent}

* create the plot, e.g. inside a float environment:

\begin{figure}
\begin{center}
\sageplot[width=\linewidth]{ptot}
\end{center}
\end{figure}

* compile your document with the following procedure:

$ pdflatex <doc.tex>
$ sage <doc.sagetex.sage>
$ pdflatex <doc.tex>

Then you can have a look at your output document.
The full documentation of SageTeX is available on CTAN.
Troubleshooting

TeX Live does not recognize SageTeX

If your TeX Live installation does not find the SageTeX package, you can try the following procedure (as root, or use a local folder):

Copy the files to the texmf directory:
# cp /opt/sage/local/share/texmf/tex/* /usr/share/texmf/tex/
Refresh TeX Live:
# texhash /usr/share/texmf/
texhash: Updating /usr/share/texmf/.//ls-R...
texhash: Done.
Let $T: \R^n \to \R^m$ be a linear transformation. Suppose that the nullity of $T$ is zero.
If $\{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k\}$ is a linearly independent subset of $\R^n$, then show that $\{T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k) \}$ is a linearly independent subset of $\R^m$.
Let $A$ be the matrix given by\[A=\begin{bmatrix}-2 & 0 & 1 \\-5 & 3 & a \\4 & -2 & -1\end{bmatrix}\]for some variable $a$. Find all values of $a$ which will guarantee that $A$ has eigenvalues $0$, $3$, and $-3$.
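As a quick numerical sanity check (not the requested derivation), for a $3\times 3$ matrix the eigenvalues $0$, $3$, $-3$ force the characteristic polynomial $x^3 - 9x$, i.e. trace $=0$, sum of principal $2\times 2$ minors $=-9$, and determinant $=0$. The pure-Python sketch below tests the candidate $a=1$ (this value is my own computation, not given in the problem):

```python
def det3(M):
    # Cofactor expansion of a 3x3 determinant along the first row.
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def char_poly_coeffs(M):
    # For 3x3 M: det(x I - M) = x^3 - tr(M) x^2 + m2 x - det(M),
    # where m2 is the sum of the principal 2x2 minors.
    tr = M[0][0] + M[1][1] + M[2][2]
    m2 = (M[0][0] * M[1][1] - M[0][1] * M[1][0]
          + M[0][0] * M[2][2] - M[0][2] * M[2][0]
          + M[1][1] * M[2][2] - M[1][2] * M[2][1])
    return tr, m2, det3(M)

a = 1  # candidate value to test
A = [[-2, 0, 1], [-5, 3, a], [4, -2, -1]]
```

Evaluating `char_poly_coeffs(A)` for `a = 1` gives `(0, -9, 0)`, matching the coefficients forced by the eigenvalues $0, 3, -3$.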
Let\[A=\begin{bmatrix}8 & 1 & 6 \\3 & 5 & 7 \\4 & 9 & 2\end{bmatrix}.\]Notice that $A$ contains every integer from $1$ to $9$ and that the sums of each row, column, and diagonal of $A$ are equal. Such a grid is sometimes called a magic square.
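The magic-square claim is easy to verify directly; a short script checking that every row, column, and diagonal of the given grid sums to the same value (15) and that the entries are exactly $1$ through $9$:

```python
# The matrix A from the problem statement.
A = [[8, 1, 6],
     [3, 5, 7],
     [4, 9, 2]]

row_sums = [sum(row) for row in A]
col_sums = [sum(A[i][j] for i in range(3)) for j in range(3)]
diag_sums = [sum(A[i][i] for i in range(3)),        # main diagonal
             sum(A[i][2 - i] for i in range(3))]    # anti-diagonal
```

All eight sums come out to 15, the magic constant of a $3\times 3$ magic square.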
Define two functions $T:\R^{2}\to\R^{2}$ and $S:\R^{2}\to\R^{2}$ by\[T\left(\begin{bmatrix}x \\ y\end{bmatrix}\right)=\begin{bmatrix}2x+y \\ 0\end{bmatrix},\;S\left(\begin{bmatrix}x \\ y\end{bmatrix}\right)=\begin{bmatrix}x+y \\ xy\end{bmatrix}.\]Determine whether $T$, $S$, and the composite $S\circ T$ are linear transformations.
Let\[\mathbf{v}_{1}=\begin{bmatrix}1 \\ 1\end{bmatrix},\;\mathbf{v}_{2}=\begin{bmatrix}1 \\ -1\end{bmatrix}.\]Let $V=\Span(\mathbf{v}_{1},\mathbf{v}_{2})$. Do $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$ form an orthonormal basis for $V$?
Let $A$ be an $m \times n$ matrix. Suppose that the nullspace of $A$ is a plane in $\R^3$ and the range is spanned by a nonzero vector $\mathbf{v}$ in $\R^5$. Determine $m$ and $n$. Also, find the rank and nullity of $A$.
Suppose that a set of vectors $S_1=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is a spanning set of a subspace $V$ in $\R^5$. If $\mathbf{v}_4$ is another vector in $V$, then is the set\[S_2=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}\]still a spanning set for $V$? If so, prove it. Otherwise, give a counterexample.
For a set $S$ and a vector space $V$ over a scalar field $\K$, define the set of all functions from $S$ to $V$\[ \Fun ( S , V ) = \{ f : S \rightarrow V \} . \]
For $f, g \in \Fun(S, V)$, $z \in \K$, addition and scalar multiplication can be defined by\[ (f+g)(s) = f(s) + g(s) \, \mbox{ and } (cf)(s) = c (f(s)) \, \mbox{ for all } s \in S . \]
(a) Prove that $\Fun(S, V)$ is a vector space over $\K$. What is the zero element?
(b) Let $S_1 = \{ s \}$ be a set consisting of one element. Find an isomorphism between $\Fun(S_1 , V)$ and $V$ itself. Prove that the map you find is actually a linear isomorphism.
(c) Suppose that $B = \{ e_1 , e_2 , \cdots , e_n \}$ is a basis of $V$. Use $B$ to construct a basis of $\Fun(S_1 , V)$.
(d) Let $S = \{ s_1 , s_2 , \cdots , s_m \}$. Construct a linear isomorphism between $\Fun(S, V)$ and the vector space of $m$-tuples of $V$, defined as\[ V^m = \{ (v_1 , v_2 , \cdots , v_m ) \mid v_i \in V \mbox{ for all } 1 \leq i \leq m \} . \]
(e) Use the basis $B$ of $V$ to construct a basis of $\Fun(S, V)$ for an arbitrary finite set $S$. What is the dimension of $\Fun(S, V)$?
(f) Let $W \subseteq V$ be a subspace. Prove that $\Fun(S, W)$ is a subspace of $\Fun(S, V)$.
ABC conjecture
Latest revision as of 18:18, 13 January 2013
In mathematics, the ABC conjecture relates the prime factors of two integers to those of their sum. It was proposed by David Masser and Joseph Oesterlé in 1985. It is connected with other problems of number theory: for example, the truth of the ABC conjecture would provide a new proof of Fermat's Last Theorem.

Statement
Define the radical of an integer $n$ to be the product of its distinct prime factors
$$ r(n) = \prod_{p|n} p \ . $$
Suppose now that the equation $A + B = C$ holds for coprime integers $A,B,C$. The conjecture asserts that for every $\epsilon > 0$ there exists $\kappa(\epsilon) > 0$ such that
$$ |A|, |B|, |C| < \kappa(\epsilon) r(ABC)^{1+\epsilon} \ . $$
A weaker form of the conjecture states that
If we define
then it is known that as .
Baker introduced a more refined version of the conjecture in 1998. Assume as before that holds for coprime integers . Let be the radical of and the number of distinct prime factors of . Then there is an absolute constant such that
This form of the conjecture would give very strong bounds in the method of linear forms in logarithms.
Results
It is known that there is an effectively computable such that
Statistics Learning - Discriminant analysis

1 - About
In discriminant analysis, the idea is to:
The Bayes theorem is a basis for discriminant analysis.
2 - Articles Related

3 - Bayes theorem for classification
<MATH> \begin{array}{rrl} Pr(Y = k|X = x) & = & \frac{\displaystyle Pr(X = x|Y = k) Pr(Y = k)}{\displaystyle Pr(X = x)} \\ & = & \frac{\displaystyle \pi_k f_k(x)}{\displaystyle \sum_{l=1}^K \pi_l f_l (x)} \end{array} </MATH>
where:
The marginal is obtained by summing over all the classes: <math>\displaystyle \sum_{l=1}^K \pi_l f_l (x)</math>
This approach is quite general, and other densities can be used, including non-parametric approaches. By altering the forms for <math>f_k(x)</math>, we get different classifiers (i.e. classification rules).
3.1 - Gaussian
The two popular forms of linear discriminant analysis are:
3.2 - Naive Bayes
When you have a large number of features (like 4,000), you really wouldn't want to estimate the large covariance matrices.
You will then assume that in each class the density factors into a product of densities. <MATH> f_k(x) = \prod^p_{j=1} f_{jk}(x_j) </MATH> where:
* k is the class
* p is the number of features
ie that the variables are conditionally independent in each of the classes.
If we plug it into the above Bayes formula, we get something known as the naive Bayes classifier.
For linear discriminant analysis, this means that the covariance matrices <math>\Sigma_k</math> are diagonal. Without this assumption, if you've got p variables, the full covariance matrix has on the order of p squared parameters that must be estimated.
Although the assumption seems very crude or wrong, the naive Bayes classifier is actually very useful in high-dimensional problems. We end up with maybe biased estimates for the probabilities.
In classification, we're mainly concerned about which class has the highest probability and not whether we got the probabilities exactly right.
In terms of classification, as we just need to classify to the largest probability, we can tolerate quite a lot of bias and still get good classification performance. What we get in return is much reduced variance from having to estimate far fewer parameters.
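The Bayes classification rule above can be sketched in a few lines. This is a minimal illustration with one-dimensional Gaussian class densities; the priors and per-class (mean, std) parameters are made-up example values, not taken from the text:

```python
import math

# Hypothetical two-class setup: priors pi_k and Gaussian density parameters.
priors = {"A": 0.7, "B": 0.3}
params = {"A": (0.0, 1.0), "B": (3.0, 1.0)}  # (mean, std) per class

def gaussian_pdf(x, mu, sigma):
    # f_k(x): the class-conditional density.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def posterior(x):
    # Pr(Y=k | X=x) = pi_k f_k(x) / sum_l pi_l f_l(x)
    joint = {k: priors[k] * gaussian_pdf(x, *params[k]) for k in priors}
    z = sum(joint.values())  # the marginal, summing over all classes
    return {k: v / z for k, v in joint.items()}

def classify(x):
    # Classify to the class with the highest posterior probability.
    post = posterior(x)
    return max(post, key=post.get)
```

Points near the "A" mean are assigned to "A" and points near the "B" mean to "B"; shrinking the "B" prior shifts the decision boundary toward "B", exactly the prior effect described above.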
4 - Classify to the highest density
We classify a new point according to which density is highest.
On the right, when the priors are different we favor the pink class, the decision boundary has shifted to the left.
5 - Advantage / Disadvantage

* When the classes are well-separated, the parameter estimates for the logistic regression model are surprisingly unstable. Linear discriminant analysis does not suffer from this problem.
* If you have the right population model, Bayes rule is the best you can possibly do.

6 - Example
Suppose that in Ad Clicks (a problem where you try to model if a user will click on a particular ad) it is well known that the majority of the time an ad is shown it will not be clicked. What is another way of saying that?
* Ad Clicks have a low Prior Probability (Status: correct)
* Ad Clicks have a high Prior Probability
* Ad Clicks have a low Density
* Ad Clicks have a high Density
Whether or not an ad gets clicked is a Qualitative Variable. Thus, it does not have a density. The Prior Probability of Ad Clicks is low because most ads are not clicked.
How can we prove that the inverse of an upper (lower) triangular matrix is upper (lower) triangular?
Another method is as follows. An invertible upper triangular matrix has the form $A=D(I+N)$ where $D$ is diagonal (with the same diagonal entries as $A$) and $N$ is upper triangular with zero diagonal. Then $N^n=0$ where $A$ is $n$ by $n$. Both $D$ and $I+N$ have upper triangular inverses: $D^{-1}$ is diagonal, and $(I+N)^{-1}=I-N+N^2-\cdots +(-1)^{n-1}N^{n-1}$. So $A^{-1}=(I+N)^{-1}D^{-1}$ is upper triangular.
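The finite Neumann series used in this answer is easy to verify numerically. A minimal sketch for $n=3$ with an arbitrary strictly upper triangular $N$ (so $N^3=0$ and the series stops at $N^2$):

```python
def matmul(A, B):
    # Plain 3x3 (or nxn) integer matrix product.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def madd(A, B, sign=1):
    # Entrywise A + sign*B.
    n = len(A)
    return [[A[i][j] + sign * B[i][j] for j in range(n)] for i in range(n)]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
N = [[0, 2, 5], [0, 0, 3], [0, 0, 0]]  # strictly upper triangular, so N^3 = 0

N2 = matmul(N, N)
inv = madd(madd(I, N, -1), N2)    # I - N + N^2
check = matmul(madd(I, N), inv)   # (I + N)(I - N + N^2); telescopes to I + N^3 = I
```

The product `check` comes out exactly equal to the identity, and `inv` is itself upper triangular, as the answer asserts.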
Personally, I prefer arguments which are more geometric to arguments rooted in matrix algebra. With that in mind, here is a proof.
First, two observations on the geometric meaning of an upper triangular invertible linear map.
Define $S_k = {\rm span} (e_1, \ldots, e_k)$, where the $e_i$ are the standard basis vectors. Clearly, the linear map $T$ is upper triangular if and only if $T S_k \subset S_k$.
If $T$ is in addition invertible, we must have the stronger relation $T S_k = S_k$.
Indeed, if $T S_k$ was a strict subset of $S_k$, then $Te_1, \ldots, Te_k$ are $k$ vectors in a space of dimension strictly less than $k$, so they must be dependent: $\sum_i \alpha_i Te_i=0$ for some $\alpha_i$ not all zero. This implies that $T$ sends the nonzero vector $\sum_i \alpha_i e_i$ to zero, so $T$ is not invertible.
With these two observations in place, the proof proceeds as follows. Take any $s \in S_k$. Since $TS_k=S_k$ there exists some $s' \in S_k$ with $Ts'=s$ or $T^{-1}s = s'$. In other words, $T^{-1} s$ lies in $S_k$, so $T^{-1}$ is upper triangular.
I'll add nothing to alext87 answer, or J.M. comments. Just "display" them. :-)
Remember that you can compute the inverse of a matrix by reducing it to row echelon form and solving the simultaneous systems of linear equations $ (A \vert I)$, where $A$ is the matrix you want to invert and $I$ the unit matrix. When you have finished the process, you'll get a matrix like $(I\vert A^{-1})$ and the matrix on the right, yes!, is the inverse of $A$. (Why?)
In your case, half of the work is already done:
$$ \begin{pmatrix} a^1_1 & a^1_2 & \cdots & a^1_{n-1} & a^1_n & 1 & 0 & \cdots & 0 & 0 \\\ & a^2_2 & \cdots & a^2_{n-1} & a^2_n & & 1 & \cdots & 0 & 0 \\\ & & \ddots & \vdots & \vdots & & & \ddots & \vdots & \vdots \\\ & & & a^{n-1}_{n-1} & a^{n-1}_n & & & & 1 & 0 \\\ & & & & a^n_n & & & & & 1 \end{pmatrix} $$
Now, what happens when you do back substitution starting with $a^n_n$ and then continuing with $a^{n-1}_{n-1}$...?
You can prove by induction.
Suppose $A$ is upper triangular. It is easy to show that this holds for any $2\times 2$ matrix. (In fact, $A^{-1}=\left[\begin{array}{cc} a & b\\ 0 & d \end{array}\right]^{-1} =\frac{1}{ad}\left[\begin{array}{cc} d & -b\\ 0 & a \end{array}\right]$. )
Suppose the result holds for any $n\times n$ upper triangular matrix. Let $A=\left[\begin{array}{cc} A_{1} & a_{2}\\ 0 & x \end{array}\right]$, $B=\left[\begin{array}{cc} B_{1} & b_{2}\\ b_{3}^{T} & y \end{array}\right]$ be any $(n+1)\times (n+1)$ upper triangular matrix and its inverse. (Mind that $a_2$, $b_2$, $b_3$ are $n\times 1$ vectors, $x$, $y$ are scalars.) Then $AB=BA=I_{n+1}$ implies that $$ \left[\begin{array}{cc} A_{1} & a_{2}\\ 0 & x \end{array}\right] \left[\begin{array}{cc} B_{1} & b_{2}\\ b_{3}^{T} & y \end{array}\right]= \left[\begin{array}{cc} B_{1} & b_{2}\\ b_{3}^{T} & y \end{array}\right] \left[\begin{array}{cc} A_{1} & a_{2}\\ 0 & x \end{array}\right] =I_{n+1}, $$
From the upper left corner of the second multiplication, we have $B_1 A_1 = I_n$. Hence $B_1$ is upper triangular from our hypothesis. From the lower left block of the multiplication, we know that $b_3=0$. ($x\ne 0$ since $A$ is invertible.) Therefore $B=\left[\begin{array}{cc} B_{1} & b_{2}\\ 0 & y \end{array}\right]$ is also upper triangular.
Another proof is by contradiction. Let $A = [a_{ij}]$ be an upper triangular matrix of size $N$. Assume $B = A^{-1} = [b_{ij}]$ is not upper triangular. Thus there exists an entry $b_{ij} \neq 0$ for $i > j$. Let $b_{ik}$ be the element with the smallest $k$ in row $i$ such that $b_{ik} \neq 0$ and $ i > k$. Consider the product $C = B A$. The element $c_{ik}$ of matrix C is off-diagonal ($i > k$) and computed as $$ c_{ik} = \sum b_{ij}a_{jk} = b_{i1}a_{1k} + b_{i2}a_{2k} + \dots + b_{ik}a_{kk} + \dots + b_{iN}a_{Nk} $$
Since $b_{ik}$ is the first non-zero element in its row, all the terms to the left of $b_{ik}a_{kk}$ vanish. Since A is upper triangular (given), all the terms to the right of $b_{ik}a_{kk}$ vanish. Since $A$ is invertible, all its diagonal elements are non-zero. Thus $c_{ik} = b_{ik}a_{kk} \neq 0$. However, since $C$ is the identity matrix and $c_{ik}$ is off diagonal, this is a contradiction! Thus, $B = A^{-1}$ is upper triangular.
Same applies to lower triangular matrix by noticing that $(A^T)^{-1} = (A^{-1})^T$
Suppose that $U$ is upper triangular. The $i$th column $x_i$ of the inverse is given by $Ux_i=e_i$ where $e_i$ is the $i$th unit vector. By backward substitution you can see that $(x_i)_j=0$ for $i+1\leq j\leq n$. I.e. all the entries in the $i$th column of the inverse below the diagonal are zero. This is true for all $i$ and hence the inverse $U^{-1}=[x_1|\ldots|x_n]$ is upper triangular.
The same thing works for lower triangular using forward substitution.
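The back-substitution argument in the last answer can be sketched numerically: solve $Ux_i=e_i$ column by column and observe that every entry below the diagonal of the assembled inverse is zero. The example matrix `U` here is arbitrary:

```python
def back_substitute(U, b):
    # Solve U x = b for upper triangular U by backward substitution.
    n = len(U)
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        s = sum(U[j][k] * x[k] for k in range(j + 1, n))
        x[j] = (b[j] - s) / U[j][j]
    return x

def inverse_upper(U):
    # Column i of the inverse solves U x_i = e_i.
    n = len(U)
    cols = [back_substitute(U, [1.0 if j == i else 0.0 for j in range(n)])
            for i in range(n)]
    # Reassemble the columns into a row-major matrix.
    return [[cols[j][i] for j in range(n)] for i in range(n)]

U = [[2.0, 1.0, 3.0],
     [0.0, 4.0, 5.0],
     [0.0, 0.0, 6.0]]
Uinv = inverse_upper(U)
```

Because $e_i$ has zeros in components $i+1,\dots,n$, the backward sweep produces zeros there too, which is exactly the claim that $U^{-1}$ is upper triangular.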
The C-field in 11-dimensional supergravity is an elusive object that is not the simple higher $\mathrm{U}(1)$-gauge field one would naively make this out to be. For an overview of possible models for this object, see, for instance, section 3 of "The M-theory 3-form and $E_8$ gauge theory" by Diaconescu, Freed and Moore (henceforth DFM).
However, it is always an object that carries with it a notion of "gauge transformation", and for naive higher $\mathrm{U}(1)$-gauge fields with a transformation law $$ C\mapsto C + \mathrm{d}\Lambda$$ for $C$ a $p$-form and $\Lambda$ a $(p-1)$-form, one can easily see that there are "gauge transformations" that actually do not change the gauge field at all - those with $\mathrm{d}\Lambda = 0$. Yet objects charged under this $\mathrm{U}(1)$ would transform by $\mathrm{e}^{\mathrm{i}\Lambda}$, meaning this transformation is non-trivial on the other fields. This means that there is a global symmetry group associated to this gauge transformation law, namely given by all closed $(p-1)$-forms $C^{p-1}_\mathrm{dR}(M)$ on our spacetime manifold $M$ that does not change the gauge field (hence does not need to be quotiented out in order to "fix the gauge") and therefore remains even after quantization. If we consider that the objects charged under such a symmetry are $(p-1)$-dimensional objects, it is clear that the proper symmetry operator is $\mathrm{e}^{\mathrm{i}\int_\Sigma \Lambda}$ where $\Sigma$ is the charged object, and so the final global symmetry group is in fact $H^{p-1}(M,\mathrm{U}(1))$ since exact forms just act as the identity.[1]
A similar reasoning seems to be carried out in "M-Theory Dynamics On A Manifold Of $G_2$ Holonomy" by Atiyah and Witten to obtain the total and unbroken global symmetry groups associated to the C-field. However, as I mentioned in the first paragraph of this question, the C-field is not a simple higher gauge field, and its exact "gauge group" is a subtle question.
For instance, in one of DFM's models, the proper notion of a gauge transformation is that "the C-field" is a pair of objects $(A,c)$ where $A$ is an ordinary $E_8$-gauge field and $c$ a 3-form with integral periods, and the gauge transformations are given by $$ A\mapsto A+\alpha \quad c\mapsto c - \mathrm{CS}_3(A,A+\alpha) + \omega,$$ where $\mathrm{CS}_3(A,A+\alpha)$ is the relative Chern-Simons invariant 3-form between two connections given by integrating $\mathrm{tr}(F^2)$ along the straight line between $A$ and $A+\alpha$ in connection space (which is affine, so this is possible). The $\alpha$ is simply a 1-form on $\mathrm{ad}(P)$ and $\omega$ is a 3-form with integral periods.
There seems to be no evident notion of how such a transformation acts on objects charged under the C-field, nor do there seem to be global transformations in this case or indeed a straightforward relation of this transformation to the $\Lambda$ considered earlier. Section 7 of DFM defines the proper notion of charge for the $E_8$-model of the C-field, but does not consider how objects charged thusly transform as far as I can see.
What is the proper action of such a gauge transformation in the sense of DFM on charged objects? How can this be reconciled with the analysis of global symmetries of the C-field as done by Atiyah and Witten? Is there a notion of the global symmetries associated to the gauge symmetry of the C-field in the stricter formulation where it is no longer a naive higher $\mathrm{U}(1)$-gauge field?
[1] The clear analogy in electromagnetism is the global $\mathrm{U}(1)$ symmetry that persists even after quantization, corresponding to $H^0(\mathbb{R}^4,\mathrm{U}(1)) = \mathrm{U}(1)$.
This post imported from StackExchange Physics at 2017-11-18 23:21 (UTC), posted by SE-user ACuriousMind
On Monday, Celestalon kicked off the official Alpha Theorycrafting season by posting a Theorycrafting Discussion thread on the forums. And he was kind enough to toss a meaty chunk of information our way about Resolve, the replacement for Vengeance.
Resolve: Increases your healing and absorption done to yourself, based on Stamina and damage taken (before avoidance and mitigation) in the last 10 sec.
In today’s post, I want to go over the mathy details about how Resolve works, how it differs from Vengeance, and how it may (or may not) fix some of the problems we’ve discussed in previous blog posts.
Mathemagic
Celestalon broke the formula up into two components: one from stamina and one from damage taken. But for completeness, I’m going to bolt them together into one formula for resolve $R$:
$$ R =\frac{\rm Stamina}{250~\alpha} + 0.25\sum_i \frac{D_i}{\rm MaxHealth}\left ( \frac{2 ( 10-\Delta t_i )}{10} \right )$$
where $D_i$ is an individual damage event that occurred $\Delta t_i$ seconds ago, and $\alpha$ is a level-dependent constant, with $\alpha(100)=261$. The sum is carried out over all damaging events that have happened in the last 10 seconds.
The first term in the equation is the stamina-based contribution, which is always active, even when out of combat. There’s a helpful buff in-game to alert you to this:
My premade character has 1294 character sheet stamina, which after dividing by 250 and $\alpha(90)=67$, gives me 0.07725, or about 7.725% Resolve. It’s not clear at this point whether the tooltip is misleadingly rounding down to 7% (i.e. using floor instead of round) or whether Resolve is only affected by the stamina from gear. The Alpha servers went down as I was attempting to test this, so we’ll have to revisit it later. We’ve already been told that this will update dynamically with stamina buffs, so having Power Word: Fortitude buffed on you mid-combat will raise your Resolve.
Once you’re in combat and taking damage, the second term makes a contribution:
I’ve left this term in roughly the form Celestalon gave, even though it can obviously be simplified considerably by combining all of the constants, because this form does a better job of illustrating the behavior of the mechanic. Let’s ignore the sum for now, and just consider an isolated damage event that does $D$ damage:
$$0.25\times\frac{D}{\rm MaxHealth}\left ( \frac{2 ( 10-\Delta t )}{10} \right )$$
The 0.25 just moderates the amount of Resolve you get from damaging attacks. It’s a constant multiplicative factor that they will likely tweak to achieve the desired balance between baseline (stamina-based) Resolve and dynamic (damage-based) Resolve.
The factor of $D/{\rm MaxHealth}$ means that we’re normalizing the damage by our max health. So if we have 1000 health and take an attack that deals 1000 damage (remember, this is before mitigation), this term gives us a factor of 1. Avoided auto-attacks also count here, though instead of performing an actual damage roll the game just uses the mean value of the boss’s auto-attack damage. Again, nothing particularly complicated here, it just makes Resolve depend on the percentage of your health the attack would have removed rather than the raw damage amount. Also note that we’ve been told that dynamic health effects from temporary multipliers (e.g. Last Stand) aren’t included here, so we’re not punished for using temporary health-increasing cooldowns.
The term in parentheses is the most important part, though. In the instant the attack lands, $\Delta t=0$ and the term in parentheses evaluates to $2(10-0)/10 = 2.$ So that attack dealing 1000 damage to our 1000-health tank would give $0.25\times 1 \times 2 = 0.5,$ or 50% Resolve.
However, one second later, $\Delta t = 1$, so the term in parentheses is only $2(10-1)/10 = 1.8$, and the amount of resolve it grants is reduced to 45%. The amount of Resolve granted continues to linearly decrease as time passes, and by the time ten seconds have elapsed it’s reduced to zero. Each attack is treated independently, so to get our total Resolve from all damage taken we just have to add up the Resolve granted by every attack we’ve taken, hence the sum in my equation.
You may note that the time-average of the term in parentheses is 1, which is how we get the advertised “averages to ~Damage/MaxHealth” that Celestalon mentioned in the post. In that regard, he’s specifically referring to just the part within the sum, not the constant factor of 0.25 outside of it. So in total, your average Resolve contribution from damage is 25% of Damage/MaxHealth.
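The whole formula is easy to sketch in code. This is a minimal reimplementation based solely on the description above (function and variable names are mine; the level-90 constant $\alpha(90)=67$ and the 1294-stamina example are taken from the text):

```python
ALPHA_90 = 67.0  # level-dependent constant alpha, at level 90

def resolve(stamina, events, now, max_health, alpha=ALPHA_90):
    """Resolve at time `now`.

    events: list of (timestamp, raw_damage) pairs, damage taken
    before avoidance and mitigation.
    """
    # Stamina-based term, always active.
    base = stamina / (250.0 * alpha)
    # Damage-based term: each event in the last 10 s contributes
    # 0.25 * (D / MaxHealth) * 2*(10 - dt)/10, decaying linearly to zero.
    dynamic = 0.0
    for t, d in events:
        dt = now - t
        if 0.0 <= dt < 10.0:
            dynamic += 0.25 * (d / max_health) * (2.0 * (10.0 - dt) / 10.0)
    return base + dynamic
```

With these numbers, the stamina term alone reproduces the 7.725% figure quoted earlier, and a pre-mitigation hit for 100% of max health grants 50% Resolve the instant it lands, decaying to 45% one second later and to nothing after ten seconds.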
Comparing to Vengeance
Mathematically speaking, there’s a world of difference between Resolve and Vengeance. First and foremost is the part we already knew: Resolve doesn’t grant any offensive benefit. We’ve talked about that a lot before, though, so it’s not territory worth re-treading.
Even in the defensive component though, there are major differences. Vengeance’s difference equation, if solved analytically, gives solutions that are exponentials. In other words, provided you were continuously taking damage (such that it didn’t fall off entirely), Vengeance would decay and adjust to your new damage intake rather smoothly. It also meant that damage taken at the very beginning of an encounter was still contributing some amount of Vengeance at the very end, again, assuming there was no interruption. And since it was only recalculated on a damage event, you could play some tricks with it, like taking a giant attack that gave you millions of Vengeance and then riding that wave for 20 seconds while your co-tank takes the boss.
Resolve does away with all of that. It flat-out says “look, the only thing that matters is the last 10 seconds.” The calculation doesn’t rely on a difference equation, meaning that when recalculating, it doesn’t care what your Resolve was at any time previously. And it forces a recalculation at fixed intervals, not just when you take damage. As a result, it’s much harder to game than Vengeance was.
Celestalon’s post also outlines a few other significant differences:
* No more ramp-up mechanism
* No taunt-transfer mechanism
* Resolve persists through shapeshifts
* Resolve only affects self-healing and self-absorbs
The lack of ramp-up and taunt-transfer mechanisms may at first seem like a problem. But in practice, I don’t think we’ll miss either of them. Both of these effects served offensive (i.e. threat) and defensive purposes, and it’s pretty clear that the offensive purposes are made irrelevant by definition here since Resolve won’t affect DPS/threat. The defensive purpose they served was to make sure you had some Vengeance to counter the boss’s first few hits, since Vengeance had a relatively slow ramp-up time but the boss’s attacks did not.
However, Resolve ramps up a lot faster than Vengeance does. Again, this is in part thanks to the fact that it isn’t governed by a difference equation. The other part is because it only cares about the last ten seconds.
To give you a visual representation of that, here’s a plot showing both Vengeance and Resolve for a player being attacked by a boss. The tank has 100 health and the boss swings for 30 raw damage every 1.5 seconds. Vengeance is shown in arbitrary units here since we’re not interested in the exact magnitude of the effect, just in its dynamic properties. I’ve also ignored the baseline (stamina-based) contribution to Resolve for the same reason.
As a final note, while the blog post says that Resolve is recalculated every second, it seemed like it was updating closer to every half-second when I fooled with it on alpha, so these plots use 0.5-second update intervals. Changing to 1-second intervals doesn’t significantly change the results (they just look a little more fragmented).
The plot very clearly shows the 50% ramp-up mechanism and slow decay-like behavior of Vengeance. Note that while the ramp-up mechanism gets you to 50% of Vengeance’s overall value at the first hit (at t=2.5 seconds), Resolve hits this mark as soon as the second hit lands (at 4.0 seconds) despite not having any ramp-up mechanism.
Resolve also hits its steady-state value much more quickly than Vengeance does. By definition, Resolve gets there after about 10 seconds of combat (t=12.5 seconds). But with Vengeance, it takes upwards of 30-40 seconds to even approach the steady-state value thanks to the decay effect (again, a result of the difference equation used to calculate Vengeance). Since most fights involve tank swaps more frequently than this, it meant that you were consistently getting stronger the longer you tanked a boss. This in turn helped encourage the sort of “solo-tank things that should not be solo-tanked” behavior we saw in Mists.
This plot assumes a boss who does exactly 30 damage per swing, but in real encounters the boss’s damage varies. Both Vengeance and Resolve adapt to mimic that change in the tank’s damage intake, but as you could guess, Resolve adapts much more quickly. If we allow the boss to hit for a random amount between 20 and 40 damage:
You can certainly see the similar changes in both curves, but Resolve reacts quickly to each change while Vengeance changes rather slowly.
One thing you’ve probably noticed by now is that the Resolve plot looks very jagged (in physics, we might call this a “sawtooth wave”). This happens because of the linear decay built into the formula. It peaks in the instant you take the attack – or more accurately, in the instant that Resolve is recalculated after that attack. But then every time it’s recalculated it linearly decreases by a fixed percent. If the boss swings in 1.5-second intervals, then Resolve will zig-zag between its max value and 85% of its max value in the manner shown.
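The windowed, linearly-decaying recalculation described above can be sketched in a few lines of Python. Note the exact per-event weighting is my assumption: I weight each damage event in the last 10 seconds by 2·(1 − age/10), a linear decay normalized so the weights average to 1 over the window, matching the "averages to ~Damage/MaxHealth" remark earlier. The function names are mine, not Blizzard's.

```python
# Toy model of the Resolve damage component. Weighting is an assumption:
# each event in the last 10 s gets weight 2*(1 - age/10), normalized so the
# weights average to 1 over the window.

def resolve_damage_component(events, now, max_health, window=10.0):
    """events: list of (timestamp, damage). Returns the 0.25 * sum(...) term."""
    total = 0.0
    for t, dmg in events:
        age = now - t
        if 0.0 <= age < window:
            total += (dmg / max_health) * 2.0 * (1.0 - age / window)
    return 0.25 * total

# Boss swinging for 30 raw damage every 1.5 s on a 100-health tank,
# recalculated every 0.5 s as in the plots:
hits = [(i * 1.5, 30.0) for i in range(41)]          # hits at t = 0 .. 60 s
samples = [(k * 0.5, resolve_damage_component(hits, k * 0.5, 100.0))
           for k in range(121)]                      # t = 0 .. 60 s
steady = [r for t, r in samples if t >= 20.0]
print(f"sawtooth: min={min(steady):.4f}, max={max(steady):.4f}")
```

In this toy model the steady state oscillates between about 0.47 and 0.58, a trough of roughly 82% of the peak, which is the same ballpark as the 85% figure quoted above.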
The more frequently the boss attacks, the smoother that zig-zag becomes; conversely, a boss with a long swing timer will cause a larger variation in Resolve. This is apparent if we adjust the boss’s swing timer in either direction:
It’s worth noting that every plot here has a new randomly-generated sequence of attacks, so don’t be surprised that the plots don’t have the same profile as the original. The key difference is the size of the zig-zag on the Resolve curve.
I’ve also run simulations where the boss’s base damage is 50 rather than 30, but apart from the y-axis having larger numbers there’s no real difference:
Note that even a raw damage of 50% is pretty conservative for a boss – heroic bosses in Siege have frequently had raw damages that were larger than the player’s health. But it’s not clear if that will still be the case with the new tanking and healing paradigm that’s been unveiled for Warlords.
If we make the assumption that raw damage will be lower, then these rough estimates give us an idea of how large an effect Resolve will be. If we guess at a 5%-10% baseline value from stamina, these plots suggest that Resolve will end up being anywhere from a 50% to 200% modifier on our healing. In other words, it has the potential to double or triple our healing output with the current tuning numbers. Of course, it’s anyone’s guess as to whether those numbers are even remotely close to what they’ll end up being by the end of beta.
Is It Fixed Yet?
If you look back over our old blog posts, the vast majority of our criticisms of Vengeance had to do with its tie-in to damage output. Those have obviously been addressed, which leaves me worrying that I’ll have nothing to rant about for the next two or three years.
But regarding everything else, I think Resolve stands a fair chance of addressing our concerns. One of the major issues with Vengeance was the sheer magnitude of the effect – you could go from having 50k AP to 600k AP on certain bosses, which meant your abilities got up to 10x more effective. Even though that’s an extreme case, I regularly noted having over 300k AP during progression bosses, a factor of around 6x improvement. Resolve looks like it’ll tamp down on that some. Reasonable bosses are unlikely to grant a multiplier larger than 2x, which will be easier to balance around.
It hasn’t been mentioned specifically in Celestalon’s post, but I think it’s a reasonable guess that they will continue to disable Resolve gains from damage that could be avoided through better play (i.e. intentionally “standing in the bad”). If so, there will be little (if any) incentive to take excess damage to get more Resolve. Our sheer AP scaling on certain effects created situations where this was a net survivability gain with Vengeance, but the lower multiplier should make that impossible with Resolve.
While I still don’t think it needs to affect anything other than active mitigation abilities, the fact that it’s a multiplier affecting everything equally rather than a flat AP boost should make it easier to keep talents with different AP coefficients balanced (Eternal Flame and Sacred Shield, specifically). And we already know that Eternal Flame is losing its Bastion of Glory interaction, another change which will facilitate making both talents acceptable choices.
All in all, I think it’s a really good system, if slightly less transparent. It’s too soon to tell whether we’ll see any unexpected problems, of course, but the mechanic doesn’t have any glaring issues that stand out upon first examination (unlike Vengeance). I still have a few lingering concerns about steady-state threat stability between tanks (ironically, due to the removal of Vengeance), but that is the sort of thing which will become apparent fairly quickly during beta testing, and at any rate shouldn’t reflect on the performance of Resolve.
Question: Prove that $\forall a\in \mathbb{R\smallsetminus Q}$, there exist infinitely many $n\in \mathbb N$ such that $\lfloor{an^2}\rfloor$ is even.
If $a=\sqrt2$ and $x^2-2y^2=1$ with $x,y\in\mathbb N$, then $x-\sqrt2y=\dfrac{1}{x+\sqrt2y},$ so $$9xy-9\sqrt{2}y^2=\dfrac{9y}{x+\sqrt2y}\in(3,4),$$ and hence $\lfloor{(3y)^2\sqrt2}\rfloor=9xy-4$, which is even because in every such solution $x$ is odd and $y$ is even.
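A quick numerical check of this $\sqrt2$ computation, using integer arithmetic via $\lfloor m\sqrt2\rfloor = \lfloor\sqrt{2m^2}\rfloor$ (helper names are mine):

```python
# Check: for Pell solutions x^2 - 2y^2 = 1, floor((3y)^2 * sqrt(2)) = 9xy - 4,
# which is even since y is even in every solution.
from math import isqrt

def pell_solutions(count):
    """Generate (x, y) with x^2 - 2y^2 = 1 via (x, y) -> (3x + 4y, 2x + 3y)."""
    x, y = 3, 2
    for _ in range(count):
        yield x, y
        x, y = 3 * x + 4 * y, 2 * x + 3 * y

for x, y in pell_solutions(5):
    # floor(9*y^2*sqrt(2)) computed exactly as isqrt(2*(9*y^2)^2)
    lhs = isqrt(2 * (9 * y * y) ** 2)
    assert lhs == 9 * x * y - 4 and lhs % 2 == 0
```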
If $k\in\mathbb N,\sqrt k \notin \mathbb N$ then we can prove it for $a=u+v\sqrt k,u,v\in\mathbb Q$ as above.
I know that if $a$ is a counterexample and $a=[a_0; a_1,a_2,\ldots]$ is its continued fraction expansion, then $a_{k},a_{k+2},a_{k+4},\cdots$ must all be even for some $k\in \mathbb N$.
This is a subtle question, because what you see depends on competing effects. The answer depends on exactly how you fall into the black hole, since the appearance of objects depends very strongly on your boost. If you boost away from any object, the object will redshift, spread out in your field of vision, and dim; if you boost toward it, it will blueshift, compress into your forward field of vision, and brighten. This means that what you see depends on whether you speed up toward the black hole to crash in fast, or whether you accelerate away from the black hole so that you are highly boosted outward when you cross (the typical situation for a late-falling observer).
The light-rays from the star at the moment of crossing the horizon stay on the horizon forever, so you can always (classically) collect some radiation from the star no matter how late you cross. But the image size, shape, brightness and redshift depend on your boost in such a way that you never see too much of the star at late times. If you go in moving very fast toward the center of the black hole, you see a small bright image of the star directly ahead, toward the direction of the singularity, whose size is inversely proportional to the time at which you cross (the affine parameter of your crossing location). If you go in naturally, meaning you spend a while accelerating near the horizon to keep from falling in, and then let yourself go, then you see a spread out dim redshifted image (the same image as before in a different frame), which is redshifted to oblivion if you come in at late times after spending much time near the horizon.
Near Horizon Solution
The near-horizon form of the Schwarzschild solution can be found by writing $r=2M+u^2$ in the usual $r$ coordinate, as described here: Why is spacetime near a quantum black hole approximately AdS? . You get (choosing units so that $2M=1$, and calling the Schwarzschild time $\theta$):
$$ ds^2 = - u^2 d\theta^2 + du^2 + (1 + {u^2\over 4} ) d\Omega^2 $$
This is a Rindler space cross a sphere, so you can transform it into Minkowski space cross $S_2$ by using the coordinates $t=u \sinh(\theta)$, $x=u \cosh(\theta)$:
$$ ds^2 = - dt^2 + dx^2 + (1 + {x^2 - t^2\over 4} ) d\Omega^2 $$
This is the near-horizon form, including the leading order variation in the sphere radius with distance from the horizon. The horizon is the light path $x=t$. The region $t>x$ inside the forward lightcone of the origin is the region in which the sphere radius contracts, and this is the interior of the black hole, while the region $t<x$, spacelike-separated and to the right of the origin, is the exterior of the black hole.
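Filling in the algebra of that coordinate change: with $t=u\sinh(\theta)$ and $x=u\cosh(\theta)$,

$$ dt = \sinh(\theta)\,du + u\cosh(\theta)\,d\theta, \qquad dx = \cosh(\theta)\,du + u\sinh(\theta)\,d\theta, $$

so the cross terms cancel and

$$ -dt^2 + dx^2 = \left(\cosh^2\theta - \sinh^2\theta\right)\left(du^2 - u^2\,d\theta^2\right) = du^2 - u^2\,d\theta^2, $$

while $x^2 - t^2 = u^2$ reproduces the argument of the sphere factor.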
The problem is ray-tracing in a Schwarzschild geometry, so one has to consider light rays starting at a crossing point $x=t=t_0$ on the horizon. The backward light cone from this point can be parametrized by choosing a past-pointing vector in $M_2$, and adding the appropriate length component along the sphere.
No winding
The main issue in the solution of the problem is whether you see multiple images. The light rays near the horizon going close to outward slowly travel around the black hole surface, and you might think that you can see many images of the star, due to rays that slowly crawl around the black hole to reach you from the star after a winding.
This is not so, because the time for one winding is always comparable to the time it takes the light to get away from the surface of the black hole. This is easiest to see in the product near-horizon solution.
Given a light ray coming into your eye at small angle $\theta$ from directly toward the center of the black hole, the failure of the ray to be the horizon generator is proportional to $\theta^2$, while the component along the sphere factor of the near-horizon solution is proportional to $\theta$.
But if you look at the quantity $x^2-t^2$, which is $u^2$, the squared difference of the radial Schwarzschild coordinate from $2M$, along the approximate geodesic, it is
$$ (t - s\cos(\theta))^2 - (t-s)^2 \approx s(t-s)\theta^2 $$
This quantity increases to a maximum of $(t\theta/2)^2$. When this maximum is comparable to 1, the light ray traced back escapes from the near-horizon product region. This heuristic shows that the ray escapes when $\theta\propto 1/t$, up to small factors of order unity.
Since the winding time also scales as $1/\theta$, there are no windings--- the light ray escapes the near-horizon product region before it can go once around.
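As a quick numeric sanity check of this estimate, one can write the back-traced ray as $t(s) = t_0 - s$, $x(s) = t_0 - s\cos\theta$ (my parametrization of the straight line in the Minkowski factor) and confirm the peak value:

```python
# Along the back-traced ray, (t0 - s*cos(theta))^2 - (t0 - s)^2 is
# approximately s*(t0 - s)*theta^2, which peaks near s = t0/2 at the
# value (t0*theta/2)^2.
from math import cos

t0, theta = 100.0, 0.01

def u_squared(s):
    return (t0 - s * cos(theta)) ** 2 - (t0 - s) ** 2

peak = max(u_squared(t0 * k / 1000) for k in range(1001))
print(peak)  # close to (t0 * theta / 2)**2 = 0.25
```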
Size of the stellar image
The angular size of the stellar image is determined by the affine-parameter to escape the horizon region for a back-traced ray at angle $\theta$ away from the line toward the center of the black hole.
Since the solution looks (nearly) like a product right by the horizon, the geodesics are simply straight lines in the Minkowski space, which simultaneously wind around the sphere. No-winding shows that the angular spread of the image of the star (assuming you come in in the same boost frame as the star when the star crosses the horizon) is less than the angle which will wind one full turn in the affine parameter $t$ where you cross.
This means that the angular spread of the star falls as $1/t$. This is the image in the unboosted frame, which is defined by translating the frame of the infalling star with no boosting in the Minkowski space factor of the product.
Boosting effects
The effect of boosting is a conformal transformation of the sphere of incoming light rays. This is described very nicely in Penrose's Spinors and Space-Time Vol.1. The qualitative effect is clear--- if you go very fast in a certain direction, light in your frame has additional momentum in the opposite direction, concentrating the light into your forward field of vision and blueshifting it. Light behind you is redshifted and spread into oblivion.
For a black hole, time translation along the horizon is a boost, since the external time parameter $\theta$ is a boost parameter. This means that if you wait a long time in the external coordinates, and look at the same velocity translated into the future using the time Killing vector, this velocity has been boosted away from the black hole center by an amount proportional to the time.
The visual image of the star is only undimmed and shrunk by an amount proportional to $t$ in the "rest frame" of the collapsing star (I put rest frame in quotes because it is a reference frame of the near-horizon $M_2 \times S_2$ metric). It is undimmed because the product nature of the solution does not let the light rays spread out, but it is shrunk to an angular size of $1/t$ because most of the rays miss the star.
But when you are falling in with the same velocity at late times, $t$, you are boosting by an amount proportional to $t$. A boost by an amount proportional to $t$ will spread the angular region behind by an exponentially growing amount in $t$. This leads the image to dim exponentially, so that you really won't see anything at all if you fall in at late times in a natural way.
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise inputs $b,r$ into the division box.
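Spelled out in Python, the division-box flowchart is just the Euclidean algorithm for the gcd (a sketch; the function names are mine):

```python
# Sketch of the division-box flowchart: repeatedly divide, and stop when
# the remainder is zero. This is the Euclidean algorithm for gcd(a, b).

def division_box(a, b):
    """Return (q, r) with a = b*q + r and 0 <= r < b."""
    return divmod(a, b)

def gcd_via_boxes(a, b):
    assert a > b > 0
    while True:
        q, r = division_box(a, b)
        if r == 0:
            return b          # the "we are done" branch
        a, b = b, r           # feed (b, r) back into the division box

print(gcd_via_boxes(252, 105))
```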
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
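For what it's worth, here is a quick numerical illustration that cofactor expansion along different rows agrees (a sketch of the idea, not a proof; helper names are mine):

```python
# Cofactor (Laplace) expansion along any row gives the same determinant.

def minor(A, i, j):
    """Delete row i and column j of A."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det_expand(A, row=0):
    """Cofactor expansion of det(A) along the given row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** (row + j) * A[row][j] * det_expand(minor(A, row, j))
               for j in range(n))

A = [[2, 1, 3], [0, 4, 1], [5, 2, 2]]
print([det_expand(A, r) for r in range(3)])  # same value for every row
```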
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
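That definition can be checked numerically: approximate the contour integral by a midpoint sum over a small circle and divide by $2\pi i$ (the helper name is mine):

```python
# Numerical residue: (1/2*pi*i) times the integral of f over a small
# circle around the singularity, approximated by a midpoint sum.
import cmath
from math import pi

def residue(f, z0=0j, radius=1e-2, n=4000):
    total = 0j
    for k in range(n):
        t0, t1 = 2 * pi * k / n, 2 * pi * (k + 1) / n
        z_mid = z0 + radius * cmath.exp(1j * (t0 + t1) / 2)  # midpoint
        dz = radius * (cmath.exp(1j * t1) - cmath.exp(1j * t0))  # chord
        total += f(z_mid) * dz
    return total / (2j * pi)

# e^z / z = 1/z + 1 + z/2 + ..., so the a_{-1} coefficient is 1:
print(residue(lambda z: cmath.exp(z) / z))
```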
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this I cant believe ive forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar
yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one
I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually
Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned
@TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks rather innocuous albeit surprising result in pure number theory
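The stated count can be brute-force checked for small $n$ (a sketch; $r_4(n)$ here counts ordered quadruples of integers, signs included):

```python
# Check Jacobi's four-square count: r4(n) = 8*sigma(n) for odd n and
# 24*sigma_odd(n) for even n, where r4 counts ordered signed quadruples
# (a, b, c, d) with a^2 + b^2 + c^2 + d^2 = n.
from math import isqrt

def r4(n):
    count = 0
    m = isqrt(n)
    for a in range(-m, m + 1):
        for b in range(-m, m + 1):
            for c in range(-m, m + 1):
                d2 = n - a * a - b * b - c * c
                if d2 < 0:
                    continue
                d = isqrt(d2)
                if d * d == d2:
                    count += 2 if d > 0 else 1  # d and -d, or just 0
    return count

def sigma(n, odd_only=False):
    return sum(d for d in range(1, n + 1)
               if n % d == 0 and (d % 2 == 1 or not odd_only))

for n in range(1, 30):
    expected = 8 * sigma(n) if n % 2 == 1 else 24 * sigma(n, odd_only=True)
    assert r4(n) == expected
print("verified for n = 1..29")
```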
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark The key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and $-1$ having no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1$, $\Im[z] > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre
You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all
it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate |
Motion in a Plane: Kinematics of Circular Motion

The angle swept by the radius vector is called the angular displacement; its unit is the radian. The angular velocity is the rate of change of angular displacement with time, \omega = \frac{d\theta}{dt}; its unit is the radian per second. The angular acceleration is the rate of change of angular velocity with time, \alpha = \frac{d\omega}{dt}; its unit is the radian per second squared.

The relation between linear velocity and angular velocity is \overline{v} = \overline{\omega} \times \overline{r}, where \overline{r} is the position vector of the particle with respect to the centre of the circle. Similarly, the relation between linear acceleration and angular acceleration is \overline{a} = \overline{\alpha} \times \overline{r}.

Tangential acceleration is the component of acceleration in the direction of the velocity; it is responsible for the change in the particle's speed: a_{t} = \frac{dv}{dt}. This component is tangential to the circle. Radial acceleration is the component of acceleration directed towards the centre; it is responsible for the change in direction of the velocity: a_{r} = \frac{v^{2}}{r} = r\omega^{2}.

For constant angular acceleration \alpha, the final angular velocity is \omega_{f} = \omega_{i} + \alpha t, where \omega_{i} is the initial angular velocity and t the time; also \omega_{f}^{2} = \omega_{i}^{2} + 2\alpha\theta, where \theta is the angular displacement; and the angular displacement is \theta = \omega_{i}t + \frac{1}{2}\alpha t^{2}.
1. Angular displacement (Δθ) = \frac{\Delta s}{r}
2. Angular velocity (\omega) = \frac{\Delta \theta}{\Delta t}
3. Angular acceleration (\alpha) = \frac{d \omega}{dt} = \frac{d^{2}\theta}{dt^{2}}
4. Centripetal acceleration a_{c} = \frac{v^{2}}{r} = r\omega^{2}
5. Centripetal force F = \frac{mv^{2}}{r} = mr\omega^{2}
6. Kinematical Equations in Circular Motion
(i) \omega = \omega_{0} + \alpha t (ii) \theta = \omega_{0}t + \frac{1}{2}\alpha t^{2} (iii) \omega^{2} = \omega_{0}^{2} + 2\alpha \theta
7. The coefficient of friction (\mu_{s}) between the road and tyres should satisfy \mu_{s} \geq \frac{v^{2}}{rg}, or v \leq \sqrt{\mu_{s} rg}
8. If centripetal force is obtained only by the banking of roads, then the speed (v) of the vehicle for a safe turn v = \sqrt{rg \tan \theta}
9. When centripetal force is obtained from friction force as well as banking of roads, then the maximum safe value of speed of vehicle
v_{max} = \sqrt{\frac{rg (\tan \theta + \mu)}{(1 - \mu \tan \theta)}}
10. If a cyclist inclined at an angle θ, then \tan \theta = \frac{v^{2}}{rg}
11. When a vehicle is moving over a convex bridge, then at the maximum height the reaction (N_{1}) is N_{1} = mg - \frac{mv^{2}}{r}
12. When a vehicle is moving over a concave bridge, then at the lowest point the reaction (N_{2}) is N_{2} = mg + \frac{mv^{2}}{r}
13. The radial and tangential accelerations act perpendicular to each other. The resultant acceleration is $a = \sqrt{a_{R}^{2} + a_{T}^{2}} = \sqrt{\left[\frac{v^{2}}{r}\right]^{2} + (r\alpha)^{2}}$, and $\tan \phi = \frac{a_{T}}{a_{R}} = \frac{r^{2}\alpha}{v^{2}}$.
14. Time period of a conical pendulum: $T = 2\pi\sqrt{\frac{l \cos \theta}{g}}$
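As a quick sanity check on formulas 6(i)-(iii) and 8 above, here is a short sketch in Python; the numerical values are illustrative and not taken from the text:

```python
import math

# Angular kinematics: given w0, alpha, t, compute w and theta via
# (i) and (ii), then confirm consistency with (iii).
w0, alpha, t = 2.0, 0.5, 4.0                 # illustrative values
w = w0 + alpha * t                           # (i)  -> 4.0 rad/s
theta = w0 * t + 0.5 * alpha * t**2          # (ii) -> 12.0 rad
assert math.isclose(w**2, w0**2 + 2 * alpha * theta)  # (iii)

# Banked road (formula 8): safe speed on a frictionless banked curve.
r, g, bank = 50.0, 9.8, math.radians(30)     # illustrative values
v_safe = math.sqrt(r * g * math.tan(bank))
print(round(v_safe, 2))                      # ~16.82 m/s
```

Relation (iii) holds identically for constant angular acceleration, so the assertion passes for any choice of `w0`, `alpha`, `t`.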
PRIMER:
The complex logarithm function is a multi-valued function that is defined as
$$\log(z)=\log(|z|)+i\arg(z) \tag1$$
where $\arg(z)$ is the multivalued argument of $z$.
The function $f(z)=z^c$, where $c\in \mathbb{C}$, is defined as
$$f(z)=e^{c\log(z)} \tag2$$
Therefore, $f(z)$ is also multivalued when $c$ is not an integer.
BRANCH POINT
If $z_0$ is branch point of the multivalued function $f(z)$ then there is no open neighborhood $N(z_0)$ of $z_0$ on which $f$ is continuous. Loosely speaking, we cannot encircle $z_0$ without encountering a discontinuity.
We can see from $(1)$ that $z_0=0$ is a branch point of $\log(z)$. Let $z_0=e^{i\theta_0}$ be a point on the unit circle. Then $\log(z_0)=i\theta_0$.
We travel on the unit circle from $z_0$ by increasing $\arg(z)$ from $\theta_0$ to $\theta_0+2\pi$. While we have returned to $z_0$, the value of $\log(z)$ has jumped from $i\theta_0$ to $i(\theta_0+2\pi)$. (Note that we have tacitly cut the plane along the ray $\theta=\theta_0$).
Inasmuch as $(2)$ defines $z^c$, then for non-integer $c$, $z^c$ shares the branch point singularity of $\log(z)$.
To see the reason that $z=\infty$ is also a branch point, we let $w=1/z$. Since $\log(w)$ has a branch point at $w=0$, then $\log(z)=\log(1/w)$ has a branch point at $\infty$.
INTEGRATION OVER THE KEYHOLE CONTOUR
From the previous discussion, we know that $z^{1/3}$ has logarithmic branch points at $z=0$ and $z=\infty$. We choose to cut the plane along the positive real axis.
With this choice of branch cut, if we approach a point on the positive real axis along a contour in the first quadrant, then $\arg(z)$ approaches $0$. If we approach a point on the positive real axis along a contour in the fourth quadrant, then $\arg(z)$ approaches $2\pi$.
Referring to the diagram in the OP, we can formally parameterize the green (red) segments as $z=x \pm i\epsilon$, $x\in [\sqrt{\nu^2-\epsilon^2},\sqrt{R^2-\epsilon^2})$, where $\nu>0$ is the radius of the blue-colored circular arc centered at the origin and $R$ is the radius of the gray-colored circular arc. Then, we have
\begin{align}\lim_{\epsilon\to 0}\int_{\sqrt{\nu^2-\epsilon^2}}^{\sqrt{R^2-\epsilon^2}}\frac{(x+ i\epsilon)^{1/3}}{(x+ i\epsilon+1)^2}\,dx&=\int_\nu^R \frac{x^{1/3}}{(x+1)^2}\,dx\\\\\lim_{\epsilon\to 0}\int_{\sqrt{\nu^2-\epsilon^2}}^{\sqrt{R^2-\epsilon^2}}\frac{(x- i\epsilon)^{1/3}}{(x- i\epsilon+1)^2}\,dx&=\int_\nu^R \frac{x^{1/3}e^{i2\pi/3}}{(x+1)^2}\,dx\end{align}
FINISHING IT UP
It can be shown that as $\nu\to 0$ and $R\to \infty$, the contributions from the integrals around the circular arcs vanish. This leaves
$$\begin{align}(1-e^{i2\pi/3})\int_0^\infty \frac{x^{1/3}}{(x+1)^2}\,dx&=2\pi i \text{Res}\left(\frac{z^{1/3}}{(z+1)^2},z=-1\right)\\\\&=2\pi i\lim_{z\to -1}\frac13z^{-2/3}\\\\&=\frac{2\pi i}3 e^{-i2\pi/3}\end{align}$$
Solving for the integral yields
$$\int_0^\infty\frac{x^{1/3}}{(x+1)^2}\,dx=\frac{2\pi}{3\sqrt{3}}$$
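The result is easy to cross-check numerically (a sketch, not part of the contour argument: the substitution $x = t/(1-t)$ collapses the integrand to $(t/(1-t))^{1/3}$ on $(0,1)$):

```python
import math

# Cross-check:  integral_0^inf  x^(1/3)/(1+x)^2 dx  =  2*pi/(3*sqrt(3)).
# With x = t/(1-t): dx = dt/(1-t)^2 and 1+x = 1/(1-t), so the
# integrand reduces to (t/(1-t))^(1/3) on (0, 1). The endpoint
# singularity ~ (1-t)^(-1/3) is integrable, so a midpoint rule with
# many panels is accurate enough for a sanity check.
def g(t):
    return (t / (1.0 - t)) ** (1.0 / 3.0)

n = 200_000
h = 1.0 / n
approx = h * sum(g((k + 0.5) * h) for k in range(n))

exact = 2 * math.pi / (3 * math.sqrt(3))
print(approx, exact)  # should agree to roughly 1e-4
```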
Let $A$ be a nonempty set that is bounded below, and define $B= \{b\in \mathbb{R}: b$ is a lower bound for $A\}$. Show that $\sup B = \inf A$.
So far I have: Let $A$ be nonempty and bounded below, and suppose for now that $\inf A$ exists, so $\forall a \in A$, $\inf A \leq a$. Let $B = \{b\in \mathbb{R}: b$ is a lower bound for $A\}$ and let $M=$ sup$B$, so $\forall b \in B$, $b\leq M$. By definition of $B$, we know inf$A \in B$. Since every element of $B$ is at most sup$B$, we have inf$A \leq M =$ sup$B$, so inf$A \leq$ sup$B$.
I realize I have to prove that sup$B \leq$ inf$A$ to get that sup$B$=inf$A$. I am having trouble doing so.
Part B: Use (a) to explain why there is no need to assert that greatest lower bounds exist as part of the Axiom of Completeness.
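One way to finish part (a) (a sketch, not necessarily the intended route) observes that every $a \in A$ is an upper bound for $B$, since each $b \in B$ satisfies $b \le a$. Note that this argument never assumes in advance that $\inf A$ exists, which is exactly the point of Part B:

```latex
% B is nonempty (A is bounded below) and bounded above (by any a in A),
% so s := sup B exists by the Axiom of Completeness.
% Step 1: s <= a for every a in A, since each a is an upper bound of B;
%         hence s is itself a lower bound of A, i.e. s is in B.
% Step 2: every lower bound b of A lies in B, so b <= sup B = s.
% Steps 1 and 2 together say s is the greatest lower bound of A:
\[
  s = \sup B \in B
  \qquad\text{and}\qquad
  \forall\, b \in B:\; b \le s
  \;\;\Longrightarrow\;\; s = \inf A .
\]
```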
Straight Lines: Slope of a Line and Angle between Two Lines

Tips:
- If the inclination of a non-vertical line is $\theta$, then $\tan \theta$ is called the slope of the line, usually denoted by $m$; thus $m = \tan \theta$.
- Slope of a horizontal line is $0$ (since $\theta = 0°$).
- Slope of a vertical line is not defined (since $\theta = 90°$).
- Slope of the line joining two points $A(x_1, y_1)$, $B(x_2, y_2)$ is $m=\frac{y_{2}-y_{1}}{x_{2}-x_{1}}$.
- Two non-vertical lines are parallel iff their slopes are equal.
- Two non-vertical lines are perpendicular iff the product of their slopes is $-1$.
- Three points $A$, $B$, $C$ are collinear iff slope of $AB$ = slope of $BC$ = slope of $AC$; equivalently, area of $\Delta ABC = 0$, or one of the points lies on the line joining the other two, or the sum of two of the distances $AB$, $BC$, $CA$ equals the third.
- If $\theta$ is the acute angle between lines with slopes $m_1$ and $m_2$, then $\tan \theta=\left|\frac{m_{1}-m_{2}}{1+m_{1}m_{2}}\right|$.
- If $\theta$ is the acute angle between the lines $a_1x + b_1y + c_1 = 0$ and $a_2x + b_2y + c_2 = 0$, then $\cos \theta=\left|\frac{a_{1}a_{2}+b_{1}b_{2}}{\sqrt{a_{1}^{2}+b_{1}^{2}}\,\sqrt{a_{2}^{2}+b_{2}^{2}}}\right|$ and $\tan\theta=\left|\frac{a_{1}b_{2}-a_{2}b_{1}}{a_{1}a_{2}+b_{1}b_{2}}\right|$; the other angle between the lines is $\pi - \theta$.
- If both lines are parallel to the $y$-axis, the angle between them is $0$ or $\pi$.
- If one of the lines is parallel to the $y$-axis and the other makes an angle $\theta$ with the positive direction of the $x$-axis, the angle between the lines is $|90° - \theta|$.

Tricks:
- If a line is equally inclined to the axes, it makes an angle of $45°$ or $135°$ with the positive direction of the $x$-axis, so its slope is $\tan 45°$ or $\tan 135° = \pm 1$.
- Slope of the line $ax + by + c = 0$, $b \neq 0$, is $\frac{-a}{b}$.

Part 1: view the topic in this video from 46:50 to 55:39. Part 2: view the topic in this video from 00:40 to 21:23.
Important Results on Slope of a Line
1. Slope of a line passing through $(x_1, y_1)$ and $(x_2, y_2)$: $m=\tan \theta= \frac{y_{2}-y_{1}}{x_{2}-x_{1}}$.
2. Slope of a line parallel to the Y-axis is not defined (conventionally written $m = \infty$).
3. Slope of a line parallel to the X-axis: $m = 0$.

Angle between Two Lines
The angle $\theta$ between two lines having slopes $m_1$ and $m_2$ satisfies $\tan \theta= \left|\frac{m_{2}-m_{1}}{1+m_{1}m_{2}}\right|$.
(a) Two lines are parallel iff $m_1 = m_2$.
(b) Two lines are perpendicular to each other iff $m_1 m_2 = -1$.
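A small numeric illustration of the slope and angle formulas above (a sketch; the helper names are my own):

```python
import math

def slope(p, q):
    """Slope of the line through p = (x1, y1) and q = (x2, y2)."""
    return (q[1] - p[1]) / (q[0] - p[0])

def acute_angle_deg(m1, m2):
    """Acute angle (degrees) between two non-perpendicular lines with
    slopes m1 and m2, via tan(theta) = |(m1 - m2)/(1 + m1*m2)|.
    (For perpendicular lines 1 + m1*m2 = 0 and the formula breaks down.)"""
    return math.degrees(math.atan(abs((m1 - m2) / (1 + m1 * m2))))

m1 = slope((0, 0), (1, 1))       # the 45-degree line, slope 1
m2 = slope((0, 0), (1, 0))       # the x-axis, slope 0
print(acute_angle_deg(m1, m2))   # ~45.0
print(acute_angle_deg(3, 3))     # parallel lines -> 0.0
```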
Students encountering time-varying electromagnetic fields for the first time have usually been exposed to electrostatics and magnetostatics already. These disciplines exhibit many similarities as summarized in Table \(\PageIndex{1}\). The principles of time-varying electromagnetics presented in this table are all formally introduced in other sections; the sole purpose of this table is to point out the differences. We can summarize the differences as follows:
Maxwell’s Equations in the general (time-varying) case include extra terms that do not appear in the equations describing electrostatics and magnetostatics. These terms involve time derivatives of fields and describe coupling between electric and magnetic fields.
The coupling between electric and magnetic fields in the time-varying case has one profound consequence in particular. It becomes possible for fields to continue to exist even after their sources – i.e., charges and currents – are turned off. What kind of field can continue to exist in the absence of a source? Such a field is commonly called a wave. Examples of waves include signals in transmission lines and signals propagating away from an antenna.
Table \(\PageIndex{1}\): Electrostatics / Magnetostatics vs. Time-Varying (Dynamic).

Electric and magnetic fields are independent in the static case, and possibly coupled in the time-varying case.

Maxwell’s Equations (integral form):
- Static: \(\oint_{\mathcal S}{\bf D}\cdot d{\bf s} = Q_{encl}\); Time-varying: \(\oint_{\mathcal S}{\bf D}\cdot d{\bf s} = Q_{encl}\)
- Static: \(\oint_{\mathcal C}{\bf E}\cdot d{\bf l} = 0\); Time-varying: \(\oint_{\mathcal C}{\bf E}\cdot d{\bf l} = -\frac{\partial}{\partial t}\int_{\mathcal S}{\bf B}\cdot d{\bf s}\)
- Static: \(\oint_{\mathcal S}{\bf B}\cdot d{\bf s} = 0\); Time-varying: \(\oint_{\mathcal S}{\bf B}\cdot d{\bf s} = 0\)
- Static: \(\oint_{\mathcal C}{\bf H}\cdot d{\bf l} = I_{encl}\); Time-varying: \(\oint_{\mathcal C}{\bf H}\cdot d{\bf l} = I_{encl} + \int_{\mathcal S}\frac{\partial}{\partial t}{\bf D}\cdot d{\bf s}\)

Maxwell’s Equations (differential form):
- Static: \(\nabla\cdot{\bf D}=\rho_v\); Time-varying: \(\nabla\cdot{\bf D}=\rho_v\)
- Static: \(\nabla\times{\bf E}=0\); Time-varying: \(\nabla\times{\bf E}=-\frac{\partial}{\partial t}{\bf B}\)
- Static: \(\nabla\cdot{\bf B}=0\); Time-varying: \(\nabla\cdot{\bf B}=0\)
- Static: \(\nabla\times{\bf H}={\bf J}\); Time-varying: \(\nabla\times{\bf H}={\bf J}+\frac{\partial}{\partial t}{\bf D}\)

(The terms that appear only in the time-varying column are the time-derivative coupling terms discussed above.)
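The extra time-derivative terms are exactly what make wave solutions possible. In a source-free simple medium (\(\rho_v=0\), \({\bf J}=0\), \({\bf D}=\epsilon{\bf E}\), \({\bf B}=\mu{\bf H}\)), taking the curl of Faraday's law and substituting Ampère's law gives the following standard derivation (sketched here for reference; it is not part of the table above):

```latex
% Curl of Faraday's law, then substitute Ampere's law:
\nabla\times(\nabla\times{\bf E})
  = -\frac{\partial}{\partial t}\left(\nabla\times{\bf B}\right)
  = -\mu\epsilon\,\frac{\partial^{2}{\bf E}}{\partial t^{2}} .
% Vector identity, using div E = 0 in a source-free region:
\nabla\times(\nabla\times{\bf E})
  = \nabla(\nabla\cdot{\bf E})-\nabla^{2}{\bf E}
  = -\nabla^{2}{\bf E} .
% Equating the two right-hand sides:
\nabla^{2}{\bf E} = \mu\epsilon\,\frac{\partial^{2}{\bf E}}{\partial t^{2}} .
```

This is a wave equation with propagation speed \(1/\sqrt{\mu\epsilon}\); in the static case both curl terms vanish and no such equation arises.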
Contributors
Ellingson, Steven W. (2018) Electromagnetics, Vol. 1. Blacksburg, VA: VT Publishing. https://doi.org/10.21061/electromagnetics-vol-1 Licensed with CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0. Report adoption of this book here. If you are a professor reviewing, adopting, or adapting this textbook please help us understand a little more about your use by filling out this form.
The answer is "no" for equicharacteristic cases, and probably also mixed characteristic but I don't know an appropriate mixed-characteristic Bertini theorem for such cases (e.g., recent papers on local mixed-char. Bertini theorem are not sufficient). So let me explain a general method which reaches the end for equicharacteristic cases but gets stuck on lack of Bertini in mixed characteristic.
The punchline is that the Galois-theoretic context is a red herring in the end (in that it is not needed in the solution): if $A$ is an integral closed domain with fraction field $K$ and $B$ its integral closure in a field $L$ of finite degree $d$ over $K$ then we shall prove that all residue field extensions for $B$ over $A$ have degree at most $d$, subject to a caveat for mixed characteristic. The proof will use considerations with noetherian approximation, henselization, and Bertini theorems (standard in equicharacteristic, but not at all in mixed characteristic).
By expressing $L/K$ as a tower of primitive extensions, we can assume $L = K(\alpha)$ where $\alpha$ is a root of a monic irreducible polynomial $f \in K[X]$. Let $\{A_i\}$ be the directed system of finitely generated $\mathbf{Z}$-subalgebras of $A$. The normalization of each $A_i$ is module-finite over $A_i$ (by excellence considerations) and is contained in $A$, so the $A_i$'s that are integrally closed are cofinal within this directed system. Letting $K_i = {\rm{Frac}}(A_i)$, taking $i$ large enough lets us assume $f \in K_i[X]$ for all $i$, so $L_i := K_i[X]/(f)$ is a field in which the integral closure $B_i$ of $A_i$ is module-finite (by excellence considerations) and the direct limit of the $B_i$'s is $B$.
Fix a point $s' \in {\rm{Spec}}(B)$ over $s \in {\rm{Spec}}(A)$, and let $s'_i \in {\rm{Spec}}(B_i)$ and $s_i \in {\rm{Spec}}(A_i)$ be the corresponding images. Then the natural map $k(s'_i) \otimes_{k(s_i)} k(s) \rightarrow k(s')$ is surjective for large $i$ since $k(s')$ is $k(s)$-finite and equality holds in the limit. Thus, if $[k(s'_i):k(s_i)] \le d$ for all $i$ then we will be done. Thus, we may replace $A \rightarrow B$ with $A_i \rightarrow B_i$, so we may assume $A$ is finitely generated over $\mathbf{Z}$. We can assume $A$ is local.
The henselization $A^{\rm{h}}$ is normal noetherian, and $B\otimes_A A^{\rm{h}}$ is the direct product of the (normal noetherian) henselizations of $B$ at its maximal ideals over that of $A$. The fraction fields of those factor rings have degree at most $d$ over the fraction field of $A^{\rm{h}}$, and henselization doesn't change the residue field, so it is harmless to pass to such factor rings. Hence, we may now assume $A$ and $B$ are henselian local. Note that their residue fields are finitely generated over their prime fields.
If $k \rightarrow \kappa$ is the extension of residue fields and $k'/k$ is the maximal separable subextension then by "Hensel's Lemma" for henselian local rings the local finite etale $A$-algebra $A'$ with residue field $k'$ uniquely maps to $B$ over $A$ and must be a normal domain and have fraction field of degree $[k':k]$ over ${\rm{Frac}}(A)$, so consideration of generic fibers over $A$ shows that the map $A' \rightarrow B$ is injective. Hence, we may replace $A$ with $A'$ to reduce to the case that the residual extension is purely inseparable. We may assume the residual degree is $> 1$ or there is nothing to prove, so now the residue characteristic is $p > 0$ and $k$ is imperfect. Since $k$ is finitely generated over $\mathbf{F}_p$, its "constant field" is a finite field $\mathbf{F}$, and that is also algebraically closed in $\kappa$ since $\kappa/k$ is purely inseparable.
Now the problem breaks into two cases, depending on whether the generic characteristic is 0 or $p$. Suppose we are in equicharacteristic $p$, so $A$ is uniquely an $\mathbf{F}$-algebra. In view of our earlier passage to finite type $\mathbf{Z}$-algebras prior to henselizing, we can therefore "spread out" our situation to arrive at the following geometric situation: we have a finite dominant map $f:X \rightarrow Y$ between normal affine varieties over $\mathbf{F}$ (irreducible and reduced) with generic degree $d$, and a
geometrically irreducible positive-dimensional proper closed subvariety $X_0 \subset X$ such that $X_0$ is generically purely inseparable over $Y_0 = f(X_0) \subset Y$. We claim that the generic degree of $X_0 \rightarrow Y_0$ is at most $d$. The ambient normal varieties inherit geometric irreducibility over $\mathbf{F}$ from that of $X_0$ and $Y_0$, so extending scalars to $\overline{\mathbf{F}}$ has no effect on the degrees under consideration.
Thus, now we may consider the same problem over an algebraically closed ground field $F$ of characteristic $p$ (in fact an algebraic closure of $\mathbf{F}_p$). Since $X_0$ is a proper closed subvariety of $X$ with positive dimension, the common dimension $\delta$ of $X$ and $Y$ is at least 2. If $\delta=2$ then by normality $X \rightarrow Y$ is flat in codimension 1, hence away from a finite set of closed points in $Y$, so the fiber-degree over $y$ coincides with the fiber-degree $d$ at the generic point of $Y$. Hence, the residue degree $[F(x):F(y)]$ is certainly at most $d$ in such cases. By shrinking $Y$ around $y$ we can arrange that $X_0$ and $Y_0$ are smooth with $X_0 \rightarrow Y_0$ finite flat of degree $[F(x):F(y)]$.
Suppose instead that $\delta>2$ and that the result is known in dimension $\delta-1$. Note that $X \rightarrow Y$ is flat in codimension 1 by normality. By the Bertini theorems (in the general form of Jouanolou's book, for example) a generic hyperplane slice $Y'$ of $Y$ is irreducible and smooth in codimension 1 and inherits $S_2$ from $Y$, so is normal by Serre's criterion. By finiteness of $X \rightarrow Y$ the same holds for its preimage $X'$ in $X$ if the slice is generically chosen, with $X' \rightarrow Y'$ of generic degree $d$ by the flatness in codimension 1. Genericity also ensures that the slices $X'_0 = X' \cap X_0$ and $Y'_0 = Y' \cap Y_0$ are irreducible and reduced of positive dimension when $X_0$ and $Y_0$ are of dimension at least 2, or are finite sets of reduced points when $X_0$ and $Y_0$ are curves. The finite flat $X_0 \rightarrow Y_0$ has constant fiber-degree, so applying that at the generic point of $Y'_0$ then completes the dimension induction when $X_0$ and $Y_0$ have dimension $>1$. If instead $X_0$ and $Y_0$ are curves then the reducedness of $X'_0 = f^{-1}(Y'_0)$ and the normality of $X'$ and $Y'$ implies that $X' \rightarrow Y'$ is
etale over $Y'_0$ (see Lemma 1.5 in the book by Freitag & Kiehl on etale cohomology), so that full fiber degree (which is the original residue degree of interest) coincides with the degree $d$ of $X'$ over $Y'$, so we win again. This completes the proof in equicharacteristic $p>0$.
Now consider the mixed characteristic case, so the henselizations are naturally algebras over the finite unramified extension $R$ of $\mathbf{Z}_{(p)}^{\rm{h}}$ corresponding to the finite residue field $\mathbf{F}$. By similar reasoning as at the start of the equicharacteristic case, we can extend scalars to the strict henselization $W$ of $R$, or equivalently of $\mathbf{Z}_{(p)}^{\rm{h}}$. So we can again do "spreading out", but now getting "arithmetic schemes": flat affine normal irreducible $W$-schemes $X$ and $Y$ of finite type (rather than over its residue field). Letting $\delta$ denote the common dimension of $X$ and $Y$, once again $\delta \ge 2$ and the case $\delta=2$ is easy via flatness in codimension 1, so we may assume $\delta > 2$ and that the result is known in dimension $\delta - 1$. We can also assume that the points of interest $x \in X$ and $y \in Y$ in the mod-$p$ fibers are non-generic (or else we can use flatness in codimension 1 to conclude). If there were a mixed-characteristic Bertini theorem for normal flat affine schemes of finite type over $W$ (so algebraically closed residue field) then one should be able to conclude via dimension induction as in the equicharacteristic-$p$ case; lacking a reference for such a result, this is a good place to stop.
Problem 709
Let $S=\{\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3},\mathbf{v}_{4},\mathbf{v}_{5}\}$ where
\[ \mathbf{v}_{1}= \begin{bmatrix} 1 \\ 2 \\ 2 \\ -1 \end{bmatrix} ,\;\mathbf{v}_{2}= \begin{bmatrix} 1 \\ 3 \\ 1 \\ 1 \end{bmatrix} ,\;\mathbf{v}_{3}= \begin{bmatrix} 1 \\ 5 \\ -1 \\ 5 \end{bmatrix} ,\;\mathbf{v}_{4}= \begin{bmatrix} 1 \\ 1 \\ 4 \\ -1 \end{bmatrix} ,\;\mathbf{v}_{5}= \begin{bmatrix} 2 \\ 7 \\ 0 \\ 2 \end{bmatrix} .\] Find a basis for the span $\Span(S)$.
Problem 706
Suppose that a set of vectors $S_1=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is a spanning set of a subspace $V$ in $\R^5$. If $\mathbf{v}_4$ is another vector in $V$, then is the set
\[S_2=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}\] still a spanning set for $V$? If so, prove it. Otherwise, give a counterexample.
Problem 663
Let $\R^2$ be the $x$-$y$-plane. Then $\R^2$ is a vector space. A line $\ell \subset \mathbb{R}^2$ with slope $m$ and $y$-intercept $b$ is defined by
\[ \ell = \{ (x, y) \in \mathbb{R}^2 \mid y = mx + b \} .\]
Prove that $\ell$ is a subspace of $\mathbb{R}^2$ if and only if $b = 0$.
Problem 659
Fix the row vector $\mathbf{b} = \begin{bmatrix} -1 & 3 & -1 \end{bmatrix}$, and let $\R^3$ be the vector space of $3 \times 1$ column vectors. Define
\[W = \{ \mathbf{v} \in \R^3 \mid \mathbf{b} \mathbf{v} = 0 \}.\] Prove that $W$ is a vector subspace of $\R^3$.
Problem 658
Let $V$ be the vector space of $n \times n$ matrices with real coefficients, and define
\[ W = \{ \mathbf{v} \in V \mid \mathbf{v} \mathbf{w} = \mathbf{w} \mathbf{v} \mbox{ for all } \mathbf{w} \in V \}.\] The set $W$ is called the center of $V$.
Prove that $W$ is a subspace of $V$.
Problem 612
Let $C[-2\pi, 2\pi]$ be the vector space of all real-valued continuous functions defined on the interval $[-2\pi, 2\pi]$.
Consider the subspace $W=\Span\{\sin^2(x), \cos^2(x)\}$ spanned by functions $\sin^2(x)$ and $\cos^2(x)$. (a) Prove that the set $B=\{\sin^2(x), \cos^2(x)\}$ is a basis for $W$.
(b) Prove that the set $\{\sin^2(x)-\cos^2(x), 1\}$ is a basis for $W$.
Problem 611
An $n\times n$ matrix $A$ is called orthogonal if $A^{\trans}A=I$. Let $V$ be the vector space of all real $2\times 2$ matrices.
Consider the subset
\[W:=\{A\in V \mid \text{$A$ is an orthogonal matrix}\}.\] Prove or disprove that $W$ is a subspace of $V$.
Problem 604
Let
\[A=\begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 &1 & 1 & 1 \\ 1 & -1 & 0 & 0 \\ 0 & 2 & 2 & 2\\ 0 & 0 & 0 & 0 \end{bmatrix}.\] (a) Find a basis for the null space $\calN(A)$. (b) Find a basis of the range $\calR(A)$. (c) Find a basis of the row space for $A$.
(The Ohio State University, Linear Algebra Midterm)
Problem 601
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers.
Let \[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix} a & b\\ c& -a \end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\] (a) Show that $W$ is a subspace of $V$. (b) Find a basis of $W$. (c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
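Several of the problems above (e.g. 709 and 604) come down to row reduction. Here is a pure-Python sketch of a check for Problem 709, using exact fractions; the five vectors form the columns of a matrix, and the pivot columns of its reduced row echelon form index a basis of $\Span(S)$ (the helper name `pivot_columns` is my own):

```python
from fractions import Fraction

def pivot_columns(rows):
    """Row-reduce a matrix (list of rows) over exact rationals and
    return the indices of its pivot columns."""
    A = [[Fraction(x) for x in row] for row in rows]
    m, n = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(n):
        pr = next((i for i in range(r, m) if A[i][c] != 0), None)
        if pr is None:
            continue                      # no pivot in this column
        A[r], A[pr] = A[pr], A[r]         # move pivot row into place
        piv = A[r][c]
        A[r] = [x / piv for x in A[r]]    # normalize pivot row
        for i in range(m):                # clear the rest of the column
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    return pivots

# Columns are v1, ..., v5 from Problem 709.
M = [[ 1, 1,  1,  1, 2],
     [ 2, 3,  5,  1, 7],
     [ 2, 1, -1,  4, 0],
     [-1, 1,  5, -1, 2]]
print(pivot_columns(M))  # [0, 1, 3]: {v1, v2, v4} is a basis
```

Indeed $\mathbf{v}_3 = 3\mathbf{v}_2 - 2\mathbf{v}_1$ and $\mathbf{v}_5 = \mathbf{v}_1 + 2\mathbf{v}_2 - \mathbf{v}_4$, so the span has dimension 3.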
For any odd prime $p$, let $D_p$ denote the determinant $$\det\left[\left(\frac{i^2-\left(\frac{p-1}{2}\right)!\,j}p\right)\right]_{1\le i,j\le (p-1)/2},$$ where $(\frac{\cdot}p)$ is the Legendre symbol. Then \begin{gather*}D_3=0,\ D_5=-1,\ D_7=D_{11}=0,\ D_{13}=-8, \\ D_{17}=-72,\ D_{19}=D_{23}=0,\ D_{29}=-2061248.\end{gather*}
QUESTION: Let $p$ be an odd prime. Is it true that $D_p=0$ if and only if $p\equiv 3\pmod4$?
In 2013 I formulated this problem and conjectured that the answer is yes. I have verified this for all primes $p<2300$. By a result of L. Mordell [Amer. Math. Monthly 68 (1961), 145-146], for any prime $p>3$ with $p\equiv3\pmod4$ we have $$\left(\frac{p-1}{2}\right)!\equiv(-1)^{(h(-p)+1)/2}\pmod p,$$ where $h(-p)$ is the class number of the imaginary quadratic field $\mathbb Q(\sqrt{-p})$.
Any ideas towards the solution?
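For readers who want to experiment, the data are easy to reproduce (a sketch: Legendre symbol via Euler's criterion, and a fraction-free Bareiss determinant so everything stays in exact integer arithmetic):

```python
import math

def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion, for an odd prime p."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def int_det(M):
    """Exact determinant of an integer matrix (Bareiss algorithm)."""
    A = [row[:] for row in M]
    n, sign, prev = len(A), 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:                      # pivot search with row swap
            for r in range(k + 1, n):
                if A[r][k] != 0:
                    A[k], A[r] = A[r], A[k]
                    sign = -sign
                    break
            else:
                return 0                      # zero column -> det = 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
        prev = A[k][k]
    return sign * A[-1][-1]

def D(p):
    m = (p - 1) // 2
    f = math.factorial(m)
    return int_det([[legendre(i * i - f * j, p) for j in range(1, m + 1)]
                    for i in range(1, m + 1)])

print([D(p) for p in (3, 5, 7, 11, 13, 17, 19, 23)])
# compare with the values quoted above
```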
Functiones et Approximatio Commentarii Mathematici (Funct. Approx. Comment. Math.), Volume 37, Number 1 (2007), 131-148.
On a Kakeya-type problem
Abstract
Let $A$ be a finite subset of an abelian group $G$. For every element $b_i$ of the sumset $2A = \{b_0, b_1, ...,b_{|2A|-1}\}$ we denote by $D_i = \{a-a': a, a'\in A; a + a' = b_i\}$ and $r_i = |\{(a,a'): a + a' = b_i; a, a' \in A \}|$. After an eventual reordering of $2A$, we may assume that $r_0 \geq r_1 \geq ... \geq r_{|2A|-1}.$ For every $1 \le s \le |2A|$ we define $R_s(A)=|D_0 \cup D_1 \cup ... \cup D_{s-1}|$ and $R_s(k) = \max \{R_s(A): A \subseteq G, |A| = k\}.$ Bourgain and Katz and Tao obtained an estimate of $R_s(k)$ assuming $s$ being of order $k$. In this note we find the {\it exact value } of $R_s(k)$ in cases $s = 1$, $s = 2$ and $s = 3$. The case $s = 3$ appeared to be not simple. The structure of {\it extremal sets} led us to sets isomorphic to planar sets having a rather unexpected form of a perfect hexagon. The proof suggests the way of dealing with the general case $s \ge 4$.
Article information
Source: Funct. Approx. Comment. Math., Volume 37, Number 1 (2007), 131-148.
First available in Project Euclid: 18 December 2008.
Permanent link: https://projecteuclid.org/euclid.facm/1229618746
Digital Object Identifier: doi:10.7169/facm/1229618746
Mathematical Reviews number (MathSciNet): MR2357314
Zentralblatt MATH identifier: 1210.11106
Subjects: Primary 11P70 (inverse problems of additive number theory, including sumsets); Secondary 11B75 (other combinatorial number theory).
Citation
Freiman, Gregory A.; Stanchescu, Yonutz V. On a Kakeya-type problem. Funct. Approx. Comment. Math. 37 (2007), no. 1, 131--148. doi:10.7169/facm/1229618746. https://projecteuclid.org/euclid.facm/1229618746
Usually, one defines [large gauge transformations](http://en.wikipedia.org/wiki/Large_gauge_transformation) as those elements of $SU(2)$ that can't be smoothly transformed to the identity transformation. The group $SU(2)$ is simply connected and thus I'm wondering why there are transformations that are not connected to the identity. (Another way to frame this, is to say that large gauge transformations can not be built from infinitesimal ones.)
An explicit example of a large gauge transformation is
$ U^{\left( 1\right) }\left( \vec{x}\right) =\exp\left( \frac{i\pi x^{a}\tau^{a}}{\sqrt{x^{2}+c^{2}}}\right) $
How can I see explicitly that it is impossible to transform this transformation to the identity transformation?
I can define
$U^\lambda(\vec x) = \exp\left( \lambda \frac{i\pi x^{a}\tau^{a}}{\sqrt{x^{2}+c^{2}}}\right) $
and certainly
$ U^{\lambda=0}(\vec x) = I $
$ U^{\lambda=1}(\vec x) = U^{\left( 1\right) }\left( \vec{x}\right) $
Thus I have found a smooth map $S^3 \to SU(2)$ that transforms $U^{\left( 1\right) }\left( \vec{x}\right)$ into the identity transformation. So, in what sense is it not connected to identity transformation?
Framed differently: in what sense is it true that $U^{\lambda=1}(\vec x)$ and $U^{\lambda=0}(\vec x)$ aren't homotopic, although the map $U^\lambda(\vec x)$ exists? My guess is that at as we vary $\lambda$ from $0$ to $1$, we somehow leave the target space $SU(2)$, but I'm not sure how I can see this.
In addition, if we can write the large gauge transformation as an exponential, doesn't this mean explicitly that we get a finite large gauge transformation from infinitesimal ones?
According to this paper, the defining feature of large gauge transformations is that the function in the exponent $\omega(x)$ is singular at some point. Is this singularity the reason that we can't transform large gauge transformations "everywhere" to the identity transformations? And if yes, how can we see this?
Edit: I got another idea from this paper. There, the authors state that it's not enough to find a map $U^\lambda(\vec x)$ with the properties mentioned above; additionally, this map must satisfy the limit $ U^\lambda(\vec x) \to I \quad \text{ for } x\to \infty \quad \forall \lambda. $ Obviously, this does not hold for my map $U^\lambda(\vec x)$. However, I don't understand why we have this extra condition.
Edit 2: As mentioned above, there only exists no smooth map between $U^{\lambda=1}(\vec x)$ and $U^{\lambda=0}(\vec x)$, if we restrict ourselves to those gauge transformations that satisfy
$ U(x) \to I \quad \text{ for } x\to \infty. $
The mystery therefore is, why we do this.
It seems, I'm not the only one puzzled by this, because Itzykson and Zuber write in their QFT book:
"there is actually no very convincing argument to justify this
restriction". |
The next phase of my project involved actually creating some models of the magnetic field of a magnetic dipole. This proved to require more steps and creativity than previously thought, but in the end I did get some useful plots. The three finalized Mathematica files that I refer to throughout this post can be found via the link at the bottom of this post.
The first breakthrough I had was using a different technique to convert the expression for the magnetic field from spherical to cartesian coordinates so that Mathematica can plot some version of its information. This technique worked way better than using Mathematica’s built-in TransformedField function, which produced some weird results (see Preliminary Data post). Instead, I decided, with the help of Shelly Johnson, to write out the explicit cartesian forms of $r$, $\theta$, and $\phi$ as well as the explicit spherical forms of $\hat{r}$, $\hat{\theta}$, and $\hat{\phi}$, and then let Mathematica substitute these expressions in the larger expression for the magnetic field. This worked pretty well to give the three components of the magnetic field, $B_x$, $B_y$, and $B_z$, as can be seen at the top of my Mathematica file titled “3D_vector_graphs.nb”.
When these expressions are plotted using VectorPlot3D, however, the results are pretty uninformative.
3D vector plot of the magnetic field of a magnetic dipole
Even zoomed in and viewed head-on, this field plot is not helpful.
Same plot as above, but seen zoomed in and in the xz plane
Since this approach didn’t work well, for reasons discussed more in my next Conclusion post, and my previous attempts at plotting the vector field at representative points was getting complicated as well, I decided to try to use this new transformation by substitution method to try Contour plots instead.
I began by forming one expression for the magnetic field value at points in space, with no vector information. This expression ends up being pretty manageable, and the contours of this can be made effectively using ContourPlot3D. Some different results are shown below, taken from my Mathematica file “3D_contour_graphs”.
3D contour of my expression for the magnitude of the magnetic field of a magnetic dipole. This contour is for when the magnetic field equals 1
The same plot as above, but this time zoomed in and shown “cut in half” to see what happens on the inside of the symmetric circular outer portion
These results are pretty promising, so I then added in a few more contours, to see how the different values looked relative to one another.
Similar plot to the above, but this time with three contours. From the inner to outer contours, these surfaces represent the places where the magnitude of the magnetic field equals 0.75, 0.5, and 0.28 respectively.
This is pretty good, but where are these surfaces in relation to the small current loop that is supposed to be creating these magnetic field contours? My next few contours include a small yellow ring at the origin, which indicates the placement of the loop, and also plots both positive and negative contours, which gives the whole picture both above and below the ring (which, looking at the original equation, should be symmetric).
The same three contours as above, along with their negative counterparts, and a small yellow ring representing the current loop that creates the magnetic field modeled.
A different view of the above graph, which allows one to look into the part of the contour that has been “cut open”.
Now that there is a current loop in our models, what happens when the current in this loop is increased? The above models were made with a magnetic dipole moment, $m=1$, but the below image increased the current so that $m=2$.
Similar plot to above, but with a stronger magnetic field (m=2).
Putting images with $m=1$, $m=2$, and $m=3$ side by side shows that the same contours move farther away (and change curvature a bit) for models with larger $m$ values, which is expected because larger $m$ values indicate larger magnetic fields.
From left to right, Contours with m = 1, m = 2, and m = 3.
This is all pretty good, but what are these contour surfaces actually telling us about the magnetic field? These contour surfaces represent places where the magnitude of the magnetic field has a constant value. Strictly speaking, surfaces of constant $|B|$ are not true equipotential surfaces (in current-free regions the field is the gradient of a magnetic scalar potential, and it is the level surfaces of that potential to which field lines are perpendicular), but they play a similar descriptive role here: they show how quickly the field strength falls off and how it is distributed around the loop. Therefore, these contours tell us something about the magnetic field of a dipole, albeit in an indirect manner.
To try and get a better picture of what the field lines look like around this current loop, I plotted these contour surfaces in 2D by removing any y dependence, so that the 2D contours plotted are shown in the xz plane. This doesn’t lose much information again because of the $\phi$ independence of the magnetic field of a dipole. The result is shown below, which is taken from my Mathematica file “2D_contour_graphs”.
2D contour, showing 20 different contour lines, both positive and negative.
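For readers without Mathematica, the underlying function being contoured is easy to reproduce. The sketch below is my own (in Python rather than the author’s Mathematica); it assumes the standard point-dipole far field $|B| = \frac{m}{r^3}\sqrt{1+3\cos^2\theta}$ in units with $\mu_0/4\pi = 1$, evaluated on the $xz$ plane:

```python
import numpy as np

def dipole_B_mag(x, z, m=1.0):
    # |B| for a point dipole of moment m along z, in units with mu0/(4 pi) = 1:
    # |B| = (m / r^3) * sqrt(1 + 3 cos^2(theta)), theta measured from the z axis
    r = np.hypot(x, z)
    cos_t = z / r
    return m / r**3 * np.sqrt(1.0 + 3.0 * cos_t**2)

# doubling m doubles |B| everywhere, so a contour at a fixed level moves
# outward -- consistent with the m = 1, 2, 3 comparison above
b1 = dipole_B_mag(1.0, 1.0, m=1.0)
b2 = dipole_B_mag(1.0, 1.0, m=2.0)
```

Feeding a grid of $(x,z)$ values through this function and contouring the result reproduces the qualitative shape of the plots above.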
This gives us a representation where the vector field lines may be easier to picture, and easier to compare to the lines shown in Griffiths’ “Introduction to Electrodynamics” 5.4.3 – pg. 255, which are shown in 2D. It took some squinting, but I can see that lines perpendicular to these (and more) contour lines form the loops expected of a magnetic dipole. I overlaid a few representative loops over my 2D contour image in Paint to illustrate this idea.
Same image as above, but with (approximate) magnetic field lines overlaid in thick black.
Link to final Mathematica files: 3 Final Mathematica Files
Basically two strings, $a>b$, go into the first box, which does division to output $q,r$ such that $a = bq + r$ and $r<b$; then you check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box.
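That box diagram is just the Euclidean algorithm; a minimal sketch (my own, in Python):

```python
def gcd(a, b):
    # the "division box": replace (a, b) with (b, r), where a = b*q + r
    while b != 0:
        a, b = b, a % b
    return a  # once r hits 0, the previous divisor is the answer
```

For example, `gcd(48, 18)` runs through (48, 18) → (18, 12) → (12, 6) → (6, 0) and returns 6.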
There was a guy at my university who was convinced he had proven the Collatz Conjecture even though several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university.
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of row?
Let $M$ and $N$ be $\mathbb{Z}$-modules and let $H$ be a subset of $N$. Is it possible for $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, provided $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator.
I will consider non-noisy observations, i.e. $y=f(x)$. Let's say we have the following data set of 5 training examples, with one example duplicated: inputs $(1,2,3,4,4)$ map to outputs $(2,4,6,8,8)$. Since for GPR we have to invert a kernel matrix, and a kernel matrix containing duplicate inputs will not be invertible, we should remove duplicate training examples when doing GPR with non-noisy observations. Am I right in my reasoning? Kindly comment.
The duplicate data add no additional information, and rank-deficiency in the kernel matrix is fatal to the process. Removing them has literally no inferential consequence.
That said, numerically the kernel matrix $K$ will occasionally become numerically singular if some points are too close together (but not necessarily identical). In this scenario, you can either identify and deal with the problem points (deletion, merging them, whatever) or you can add some (small) noise: $\hat{K}=K+\epsilon I$. Usually $\epsilon=10^{-6}$ is sufficient for me. Alternatively, you can perform a spectral decomposition of $K$ and, for each eigenvalue $\lambda_i$, replace it with $\hat{\lambda_i}=\max{\{\lambda_i, \epsilon\lambda_{\max}\}}$ for some small $\epsilon$. The idea here is that you've effectively pinned the smallest eigenvalue of the matrix relative to the largest, and this may be a more "minimal" intervention into the matrix. This is an area where I'm not sure there are any good solutions.
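A minimal numpy sketch of both fixes (the RBF kernel, the helper name, and the $\epsilon$ value are illustrative choices of mine, not prescriptions):

```python
import numpy as np

def stabilize_kernel(K, eps=1e-6, method="jitter"):
    """Two hedged fixes for a (near-)singular PSD kernel matrix."""
    if method == "jitter":
        # K_hat = K + eps * I -- the usual "nugget"/jitter trick
        return K + eps * np.eye(K.shape[0])
    # spectral floor: clip each eigenvalue at eps * lambda_max
    w, V = np.linalg.eigh(K)
    w = np.maximum(w, eps * w.max())
    return (V * w) @ V.T

# duplicate inputs -> identical rows/columns -> exactly singular kernel
X = np.array([1.0, 2.0, 3.0, 4.0, 4.0])
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)  # RBF kernel
K_hat = stabilize_kernel(K)
```

With the duplicated input, `K` has rank 4, while `K_hat` is full rank and safe to invert.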
The numerical component of the problem is considered in more detail on this thread:
Direct and Inverse Proportion
A self-marking exercise in three levels on solving direct and inverse variation problems.
This is level 1; Direct proportion. You can earn a trophy if you get at least 9 correct.
Instructions
Try your best to answer the questions above. Type your answers into the boxes provided leaving no spaces. As you work through the exercise regularly click the "check" button. If you have any wrong answers, do your best to do corrections but if there is anything you don't understand, please ask your teacher for help.
When you have got all of the questions correct you may want to print out this page and paste it into your exercise book. If you keep your work in an ePortfolio you could take a screen shot of your answers and paste that into your Maths file.
Mathematicians are not the people who find Maths easy; they are the people who enjoy how mystifying, puzzling and hard it is. Are you a mathematician?
Level 1 - Direct proportion
Level 2 - Inverse proportion
Level 3 - Mixed non-linear questions
Unitary Method - Test your understanding of the Unitary Method for solving real life proportion problems with this online, self-marking quiz.
Exam Style Questions - A collection of problems in the style of GCSE or IB/A-level exam paper questions (worked solutions are available for Transum subscribers).
More on this topic including lesson Starters, visual aids, investigations and self-marking exercises.
This video is from Mannel's Maths Music.
If \(a\) varies directly with \(b\) and \(a=24\) when \(b=8\) find \(a\) when \(b=9\)$$a \propto b$$ $$a = kb$$
Where \(k\) is some constant. If \(a=24\) when \(b=8\) then$$24 = 8k$$ $$k = 3$$
so the equation is$$a = 3b$$
If \(b = 9\) then$$a = 3 \times 9 = 27$$
If \(a\) is inversely proportional to \(b\) and \(a=4\) when \(b=6\) find \(a\) when \(b=8\)$$a \propto \frac{1}{b}$$ $$a = \frac{k}{b}$$
Where \(k\) is some constant. If \(a=4\) when \(b=6\) then$$4 = \frac{k}{6}$$ $$k = 24$$
so the equation is$$a = \frac{24}{b}$$
If \(b = 8\) then$$a = 24 \div 8 = 3$$
If \(a\) is directly proportional to the square of \(b\) and \(a=24\) when \(b=2\) find \(a\) when \(b=3\)$$a \propto b^2$$ $$a = kb^2$$
Where \(k\) is some constant. If \(a=24\) when \(b=2\) then$$24 = 2^2 \times k$$ $$k = 6$$
so the equation is$$a = 6b^2$$
If \(b = 3\) then$$a = 6 \times 3^2 = 54$$
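All three worked examples follow the same recipe: use the known pair to find the constant $k$, then evaluate at the new value. A sketch of that recipe (the function and its name are mine, not part of the Transum exercise):

```python
def proportion(a_known, b_known, b_new, kind="direct"):
    # direct: a = k*b;  inverse: a = k/b;  square: a = k*b^2
    if kind == "direct":
        k = a_known / b_known
        return k * b_new
    if kind == "inverse":
        k = a_known * b_known
        return k / b_new
    if kind == "square":
        k = a_known / b_known**2
        return k * b_new**2
    raise ValueError(kind)
```

Each call reproduces one worked example: `proportion(24, 8, 9)` gives 27, `proportion(4, 6, 8, kind="inverse")` gives 3, and `proportion(24, 2, 3, kind="square")` gives 54.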
Don't wait until you have finished the exercise before you click on the 'Check' button. Click it often as you work through the questions to see if you are answering them correctly. You can double-click the 'Check' button to make it float at the bottom of your screen.
(a) If $AB=B$, then $B$ is the identity matrix.
(b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions.
(c) If $A$ is invertible, then $ABA^{-1}=B$.
(d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix.
(e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get?(a) $A$(b) $C^{-1}A^{-1}BC^{-1}AC^2$(c) $B$(d) $C^2$(e) $C^{-1}BC$(f) $C$
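Multiple-choice identities like this one are easy to sanity-check with random matrices before proving them (numpy and the seed are my additions; this is a numerical check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))  # generically invertible

inv = np.linalg.inv
expr = inv(C) @ inv(A @ inv(B)) @ inv(C @ inv(A)) @ (C @ C)

# step by step: (AB^{-1})^{-1} = B A^{-1} and (CA^{-1})^{-1} = A C^{-1},
# so the product collapses to C^{-1} B C -- choice (e)
simplified = inv(C) @ B @ C
```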
Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less.Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*}
(a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$.
(b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$.
(The Ohio State University, Linear Algebra Midterm)
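A mechanical way to do part (a) is to work with coordinate vectors in the monomial basis $\{1, x, x^2, x^3\}$ and keep each polynomial that enlarges the span; part (b) is then a linear solve. A numpy sketch (the greedy rank selection is a standard technique of mine, not something stated in the exam):

```python
import numpy as np

# coordinate vectors of p1..p4 in the basis {1, x, x^2, x^3}
P = np.array([[1, 3, 2, -1],
              [0, 1, 0, 1],
              [0, 1, 1, -1],
              [3, 8, 0, 8]], dtype=float)

# (a) keep each polynomial that increases the rank of the kept set
basis = []
for row in P:
    if np.linalg.matrix_rank(np.vstack(basis + [row])) > len(basis):
        basis.append(row)

# (b) coordinates of p4 with respect to Q: solve c1*p1 + c2*p2 + c3*p3 = p4
coords, *_ = np.linalg.lstsq(np.array(basis).T, P[3], rcond=None)
```

Here `basis` ends up holding $p_1, p_2, p_3$, and `coords` gives the coordinate vector of $p_4$.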
Let $V$ be a vector space and $B$ be a basis for $V$. Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
(a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers.Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
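Both parts are easy to verify numerically (numpy is my addition; the point of part (b) is that $A\mathbf v=\lambda\mathbf v$ implies $A^3\mathbf v=\lambda^3\mathbf v$):

```python
import numpy as np

A = np.array([[-3, -4],
              [8, 9]])
v = np.array([-1, 2])

Av = A @ v            # (a) A v comes out as a scalar multiple of v
lam = Av[0] // v[0]   # the scalar lambda
A3v = lam**3 * v      # (b) A^3 v = lambda^3 v, without ever forming A^3
```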
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5. Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason.
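Problem 5 is quick to verify numerically once you have a candidate inverse (numpy is my addition; this confirms invertibility, it does not replace the row-reduction argument):

```python
import numpy as np

A = np.array([[0, 0, 2, 0],
              [0, 1, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 1]], dtype=float)

# reading the equations off A: 2*x3 = b1, x2 = b2, x1 = b3, x1 + x4 = b4,
# so x3 = b1/2, x2 = b2, x1 = b3, x4 = b4 - b3 -- i.e. A is invertible, with
A_inv = np.array([[0.0, 0.0, 1.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.5, 0.0, 0.0, 0.0],
                  [0.0, 0.0, -1.0, 1.0]])
```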
Problem 6. Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
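Problem 6's three parts map directly onto a few lines of numpy (a numerical check of the hand computation; the comment values are my own working):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [5.0, 3.0]])    # (a) the coefficient matrix
b = np.array([1.0, 2.0])

A_inv = np.linalg.inv(A)      # (b) det(A) = -1, so A^{-1} = [[-3, 2], [5, -3]]
x = A_inv @ b                 # (c) the solution x = A^{-1} b
```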
(Linear Algebra Midterm Exam 1, the Ohio State University)
This post is Part 1 and contains the first three problems. Check out Part 2 and Part 3 for the rest of the exam problems.
Problem 1. Determine all possibilities for the number of solutions of each of the systems of linear equations described below.
(a) A consistent system of $5$ equations in $3$ unknowns and the rank of the system is $1$.
(b) A homogeneous system of $5$ equations in $4$ unknowns and it has a solution $x_1=1$, $x_2=2$, $x_3=3$, $x_4=4$.
Problem 2. Consider the homogeneous system of linear equations whose coefficient matrix is given by the following matrix $A$. Find the vector form for the general solution of the system.\[A=\begin{bmatrix}1 & 0 & -1 & -2 \\2 &1 & -2 & -7 \\3 & 0 & -3 & -6 \\0 & 1 & 0 & -3\end{bmatrix}.\]
Problem 3. Let $A$ be the following invertible matrix.\[A=\begin{bmatrix}-1 & 2 & 3 & 4 & 5\\6 & -7 & 8& 9& 10\\11 & 12 & -13 & 14 & 15\\16 & 17 & 18& -19 & 20\\21 & 22 & 23 & 24 & -25\end{bmatrix}\]Let $I$ be the $5\times 5$ identity matrix and let $B$ be a $5\times 5$ matrix. Suppose that $ABA^{-1}=I$. Then determine the matrix $B$.
(Linear Algebra Midterm Exam 1, the Ohio State University)
Is there any Kähler–Ricci flow method for solving structure theorems in algebraic geometry?
In fact, if $X$ is a Calabi–Yau manifold, then we can descend the Kähler–Ricci flow to its finite étale universal cover $\tilde X$, and the classification of its solutions corresponds to a Beauville–Bogomolov type decomposition.
The Beauville-Bogomolov decomposition theorem states that given a compact Kähler manifold $X$ with zero real first Chern class, the universal cover $\tilde X$ of $X$ splits holomorphically (and isometrically, once a Ricci-flat Kähler metric is chosen) into the product of a flat factor $\mathbb C^q$ and simply connected compact Kähler manifolds with special unitary or compact symplectic holonomy i.e., Calabi-Yau or hyper-Kähler manifolds.
We have an analogue of the Beauville–Bogomolov decomposition theorem when the anti-canonical bundle $-K_X$ is nef.
We have the following conjecture. In fact, when $X$ is projective, it has been solved; see this paper.
Conjecture: Let $X$ be a compact Kähler manifold with nef anticanonical class, i.e. $-K_X$ is nef. Then the universal cover $\tilde X$ of $X$ decomposes as a product$$\tilde X \simeq \mathbb C^q \times \prod_j Y_j \times \prod_k S_k \times Z,$$where the $Y_j$ are irreducible Calabi–Yau manifolds, the $S_k$ are irreducible hyper-Kähler manifolds, and $Z$ is a rationally connected manifold.
Note that $X$ is rationally connected if and only if for every invertible subsheaf $\mathcal L\subseteq\Omega^p_X$, $1\leq p\leq n$, $\mathcal L$ is not pseudo-effective.
We have the same result when $-K_X\geq 0$ or $T_X$ is nef. See Jean-Pierre Demailly, Thomas Peternell, and Michael Schneider, "Compact complex manifolds with numerically effective tangent bundles", J. Algebraic Geom., 3(2):295–345, 1994.
Tagged: subspace
Problem 709
Let $S=\{\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3},\mathbf{v}_{4},\mathbf{v}_{5}\}$ where
\[ \mathbf{v}_{1}= \begin{bmatrix} 1 \\ 2 \\ 2 \\ -1 \end{bmatrix} ,\;\mathbf{v}_{2}= \begin{bmatrix} 1 \\ 3 \\ 1 \\ 1 \end{bmatrix} ,\;\mathbf{v}_{3}= \begin{bmatrix} 1 \\ 5 \\ -1 \\ 5 \end{bmatrix} ,\;\mathbf{v}_{4}= \begin{bmatrix} 1 \\ 1 \\ 4 \\ -1 \end{bmatrix} ,\;\mathbf{v}_{5}= \begin{bmatrix} 2 \\ 7 \\ 0 \\ 2 \end{bmatrix} .\] Find a basis for the span $\Span(S)$.
Problem 706
Suppose that a set of vectors $S_1=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is a spanning set of a subspace $V$ in $\R^5$. If $\mathbf{v}_4$ is another vector in $V$, then is the set
\[S_2=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4\}\] still a spanning set for $V$? If so, prove it. Otherwise, give a counterexample.
Problem 663
Let $\R^2$ be the $x$-$y$-plane. Then $\R^2$ is a vector space. A line $\ell \subset \mathbb{R}^2$ with slope $m$ and $y$-intercept $b$ is defined by
\[ \ell = \{ (x, y) \in \mathbb{R}^2 \mid y = mx + b \} .\]
Prove that $\ell$ is a subspace of $\mathbb{R}^2$ if and only if $b = 0$.
Problem 659
Fix the row vector $\mathbf{b} = \begin{bmatrix} -1 & 3 & -1 \end{bmatrix}$, and let $\R^3$ be the vector space of $3 \times 1$ column vectors. Define
\[W = \{ \mathbf{v} \in \R^3 \mid \mathbf{b} \mathbf{v} = 0 \}.\] Prove that $W$ is a vector subspace of $\R^3$.
Problem 658
Let $V$ be the vector space of $n \times n$ matrices with real coefficients, and define
\[ W = \{ \mathbf{v} \in V \mid \mathbf{v} \mathbf{w} = \mathbf{w} \mathbf{v} \mbox{ for all } \mathbf{w} \in V \}.\] The set $W$ is called the center of $V$.
Prove that $W$ is a subspace of $V$.
Problem 612
Let $C[-2\pi, 2\pi]$ be the vector space of all real-valued continuous functions defined on the interval $[-2\pi, 2\pi]$.
Consider the subspace $W=\Span\{\sin^2(x), \cos^2(x)\}$ spanned by functions $\sin^2(x)$ and $\cos^2(x)$.
(a) Prove that the set $B=\{\sin^2(x), \cos^2(x)\}$ is a basis for $W$.
(b) Prove that the set $\{\sin^2(x)-\cos^2(x), 1\}$ is a basis for $W$.
Problem 611
An $n\times n$ matrix $A$ is called orthogonal if $A^{\trans}A=I$. Let $V$ be the vector space of all real $2\times 2$ matrices.
Consider the subset
\[W:=\{A\in V \mid \text{$A$ is an orthogonal matrix}\}.\] Prove or disprove that $W$ is a subspace of $V$.
Problem 604
Let
\[A=\begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 &1 & 1 & 1 \\ 1 & -1 & 0 & 0 \\ 0 & 2 & 2 & 2\\ 0 & 0 & 0 & 0 \end{bmatrix}.\]
(a) Find a basis for the null space $\calN(A)$.
(b) Find a basis of the range $\calR(A)$.
(c) Find a basis of the row space for $A$.
(The Ohio State University, Linear Algebra Midterm)
Problem 601
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers.
Let \[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix} a & b\\ c& -a \end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
$$ I=\int \frac{x^9}{(x^2+4)^6}\mathrm{d}x $$ Yeah I know, I can substitute: $$t=x^2+4\text{ or }x=2\tan\theta$$ so that: $$I=\frac12\int\frac{(t-4)^4}{t^6}\mathrm{d}t\text{ or } I=2^{-2}\int\tan^9\theta\cos^{10}\theta\,\mathrm{d}\theta$$ Both of which are long, tedious ways to solve it. Is there any easier method?
Update: I am not asking to choose between these two; I am asking for any substitution other than these two which is shorter. Edit: I am very sorry, I missed the exponent $6$ in the question.
Use the Binomial Theorem for the first: expand $(t-4)^4$, divide by $t^6$, then integrate termwise.
$\require{cancel}\cancel{\text{Use Reduction formula for second or integration by parts.}}$
On Monday, Celestalon kicked off the official Alpha Theorycrafting season by posting a Theorycrafting Discussion thread on the forums. And he was kind enough to toss a meaty chunk of information our way about Resolve, the replacement for Vengeance.
Resolve: Increases your healing and absorption done to yourself, based on Stamina and damage taken (before avoidance and mitigation) in the last 10 sec.
In today’s post, I want to go over the mathy details about how Resolve works, how it differs from Vengeance, and how it may (or may not) fix some of the problems we’ve discussed in previous blog posts.
Mathemagic
Celestalon broke the formula up into two components: one from stamina and one from damage taken. But for completeness, I’m going to bolt them together into one formula for resolve $R$:
$$ R =\frac{\rm Stamina}{250~\alpha} + 0.25\sum_i \frac{D_i}{\rm MaxHealth}\left ( \frac{2 ( 10-\Delta t_i )}{10} \right )$$
where $D_i$ is an individual damage event that occurred $\Delta t_i$ seconds ago, and $\alpha$ is a level-dependent constant, with $\alpha(100)=261$. The sum is carried out over all damaging events that have happened in the last 10 seconds.
The first term in the equation is the stamina-based contribution, which is always active, even when out of combat. There’s a helpful buff in-game to alert you to this:
My premade character has 1294 character sheet stamina, which after dividing by 250 and $\alpha(90)=67$, gives me 0.07725, or about 7.725% Resolve. It’s not clear at this point whether the tooltip is misleadingly rounding down to 7% (i.e. using floor instead of round) or whether Resolve is only affected by the stamina from gear. The Alpha servers went down as I was attempting to test this, so we’ll have to revisit it later. We’ve already been told that this will update dynamically with stamina buffs, so having Power Word: Fortitude buffed on you mid-combat will raise your Resolve.
Once you’re in combat and taking damage, the second term makes a contribution:
I’ve left this term in roughly the form Celestalon gave, even though it can obviously be simplified considerably by combining all of the constants, because this form does a better job of illustrating the behavior of the mechanic. Let’s ignore the sum for now, and just consider an isolated damage event that does $D$ damage:
$$0.25\times\frac{D}{\rm MaxHealth}\left ( \frac{2 ( 10-\Delta t )}{10} \right )$$
The 0.25 just moderates the amount of Resolve you get from damaging attacks. It’s a constant multiplicative factor that they will likely tweak to achieve the desired balance between baseline (stamina-based) Resolve and dynamic (damage-based) Resolve.
The factor of $D/{\rm MaxHealth}$ means that we’re normalizing the damage by our max health. So if we have 1000 health and take an attack that deals 1000 damage (remember, this is before mitigation), this term gives us a factor of 1. Avoided auto-attacks also count here, though instead of performing an actual damage roll the game just uses the mean value of the boss’s auto-attack damage. Again, nothing particularly complicated here, it just makes Resolve depend on the percentage of your health the attack would have removed rather than the raw damage amount. Also note that we’ve been told that dynamic health effects from temporary multipliers (e.g. Last Stand) aren’t included here, so we’re not punished for using temporary health-increasing cooldowns.
The term in parentheses is the most important part, though. In the instant the attack lands, $\Delta t=0$ and the term in parentheses evaluates to $2(10-0)/10 = 2.$ So that attack dealing 1000 damage to our 1000-health tank would give $0.25\times 1 \times 2 = 0.5,$ or 50% Resolve.
However, one second later, $\Delta t = 1$, so the term in parentheses is only $2(10-1)/10 = 1.8$, and the amount of resolve it grants is reduced to 45%. The amount of Resolve granted continues to linearly decrease as time passes, and by the time ten seconds have elapsed it’s reduced to zero. Each attack is treated independently, so to get our total Resolve from all damage taken we just have to add up the Resolve granted by every attack we’ve taken, hence the sum in my equation.
You may note that the time-average of the term in parentheses is 1, which is how we get the advertised “averages to ~Damage/MaxHealth” that Celestalon mentioned in the post. In that regard, he’s specifically referring to just the part within the sum, not the constant factor of 0.25 outside of it. So in total, your average Resolve contribution from damage is 25% of Damage/MaxHealth.
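The damage term is simple enough to sanity-check in code. A sketch (my own implementation of the posted formula, not Blizzard's code; the event format is an assumption):

```python
def resolve_damage_term(events, now, max_health):
    """Dynamic Resolve from damage: events is a list of (time, raw_damage)
    pairs; only events in the last 10 seconds contribute."""
    total = 0.0
    for t, dmg in events:
        dt = now - t
        if 0 <= dt < 10:
            # 0.25 * (D / MaxHealth) * 2*(10 - dt)/10, linearly decaying
            total += 0.25 * (dmg / max_health) * (2 * (10 - dt) / 10)
    return total

# one 1000-damage hit on a 1000-health tank:
hit = [(0.0, 1000.0)]
r0 = resolve_damage_term(hit, 0.0, 1000.0)    # 50% the instant it lands
r1 = resolve_damage_term(hit, 1.0, 1000.0)    # 45% one second later
r10 = resolve_damage_term(hit, 10.0, 1000.0)  # gone after ten seconds
```

This reproduces the 50% and 45% figures from the worked example above.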
Comparing to Vengeance
Mathematically speaking, there’s a world of difference between Resolve and Vengeance. First and foremost is the part we already knew: Resolve doesn’t grant any offensive benefit. We’ve talked about that a lot before, though, so it’s not territory worth re-treading.
Even in the defensive component though, there are major differences. Vengeance’s difference equation, if solved analytically, gives solutions that are exponentials. In other words, provided you were continuously taking damage (such that it didn’t fall off entirely), Vengeance would decay and adjust to your new damage intake rather smoothly. It also meant that damage taken at the very beginning of an encounter was still contributing some amount of Vengeance at the very end, again, assuming there was no interruption. And since it was only recalculated on a damage event, you could play some tricks with it, like taking a giant attack that gave you millions of Vengeance and then riding that wave for 20 seconds while your co-tank takes the boss.
Resolve does away with all of that. It flat-out says “look, the only thing that matters is the last 10 seconds.” The calculation doesn’t rely on a difference equation, meaning that when recalculating, it doesn’t care what your Resolve was at any time previously. And it forces a recalculation at fixed intervals, not just when you take damage. As a result, it’s much harder to game than Vengeance was.
Celestalon’s post also outlines a few other significant differences:
- No more ramp-up mechanism
- No taunt-transfer mechanism
- Resolve persists through shapeshifts
- Resolve only affects self-healing and self-absorbs
The lack of ramp-up and taunt-transfer mechanisms may at first seem like a problem. But in practice, I don’t think we’ll miss either of them. Both of these effects served offensive (i.e. threat) and defensive purposes, and it’s pretty clear that the offensive purposes are made irrelevant by definition here since Resolve won’t affect DPS/threat. The defensive purpose they served was to make sure you had some Vengeance to counter the boss’s first few hits, since Vengeance had a relatively slow ramp-up time but the boss’s attacks did not.
However, Resolve ramps up a lot faster than Vengeance does. Again, this is in part thanks to the fact that it isn’t governed by a difference equation. The other part is because it only cares about the last ten seconds.
To give you a visual representation of that, here’s a plot showing both Vengeance and Resolve for a player being attacked by a boss. The tank has 100 health and the boss swings for 30 raw damage every 1.5 seconds. Vengeance is shown in arbitrary units here since we’re not interested in the exact magnitude of the effect, just in its dynamic properties. I’ve also ignored the baseline (stamina-based) contribution to Resolve for the same reason.
As a final note, while the blog post says that Resolve is recalculated every second, it seemed like it was updating closer to every half-second when I fooled with it on alpha, so these plots use 0.5-second update intervals. Changing to 1-second intervals doesn’t significantly change the results (they just look a little more fragmented).
The plot very clearly shows the 50% ramp-up mechanism and slow decay-like behavior of Vengeance. Note that while the ramp-up mechanism gets you to 50% of Vengeance’s overall value at the first hit (at t=2.5 seconds), Resolve hits this mark as soon as the second hit lands (at 4.0 seconds) despite not having any ramp-up mechanism.
Resolve also hits its steady-state value much more quickly than Vengeance does. By definition, Resolve gets there after about 10 seconds of combat (t=12.5 seconds). But with Vengeance, it takes upwards of 30-40 seconds to even approach the steady-state value thanks to the decay effect (again, a result of the difference equation used to calculate Vengeance). Since most fights involve tank swaps more frequently than this, it meant that you were consistently getting stronger the longer you tanked a boss. This in turn helped encourage the sort of “solo-tank things that should not be solo-tanked” behavior we saw in Mists.
This plot assumes a boss who does exactly 30 damage per swing, but in real encounters the boss’s damage varies. Both Vengeance and Resolve adapt to mimic that change in the tank’s damage intake, but as you could guess, Resolve adapts much more quickly. If we allow the boss to hit for a random amount between 20 and 40 damage:
You can certainly see the similar changes in both curves, but Resolve reacts quickly to each change while Vengeance changes rather slowly.
One thing you’ve probably noticed by now is that the Resolve plot looks very jagged (in physics, we might call this a “sawtooth wave”). This happens because of the linear decay built into the formula. It peaks in the instant you take the attack – or more accurately, in the instant that Resolve is recalculated after that attack. But then every time it’s recalculated it linearly decreases by a fixed percent. If the boss swings in 1.5-second intervals, then Resolve will zig-zag between its max value and 85% of its max value in the manner shown.
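Blizzard hasn't published Resolve's exact formula, but the behavior described above (a 10-second window with linear decay, recalculated on roughly half-second ticks) can be sketched with a simple model. Everything here is an assumption for illustration, not the game's actual code: each hit contributes its raw damage weighted linearly from 1 (just landed) down to 0 (ten seconds old).

```python
# Hypothetical sketch, not Blizzard's published formula: model the
# damage-based part of Resolve as a sum over hits in the last 10 seconds,
# each weighted linearly from 1 (just landed) down to 0 (10 s old).
def resolve_damage_term(hits, now, window=10.0):
    """hits: list of (timestamp, raw_damage); returns the weighted sum."""
    total = 0.0
    for t, dmg in hits:
        age = now - t
        if 0.0 <= age < window:
            total += dmg * (1.0 - age / window)
    return total

# Boss swings for 30 raw damage every 1.5 s; sample on a 0.5 s update grid
# (the update interval observed on alpha).
hits = [(1.5 * i, 30.0) for i in range(40)]
trace = [(0.5 * k, resolve_damage_term(hits, 0.5 * k)) for k in range(1, 80)]
```

At steady state this reproduces the sawtooth: the value peaks on the tick a hit lands and sags on the ticks between hits, which is exactly the zig-zag visible in the plots.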
The more frequently the boss attacks, the smoother that zig-zag becomes; conversely, a boss with a long swing timer will cause a larger variation in Resolve. This is apparent if we adjust the boss’s swing timer in either direction:
It’s worth noting that every plot here has a new randomly-generated sequence of attacks, so don’t be surprised that the plots don’t have the same profile as the original. The key difference is the size of the zig-zag on the Resolve curve.
I’ve also run simulations where the boss’s base damage is 50 rather than 30, but apart from the y-axis having larger numbers there’s no real difference:
Note that even a raw damage of 50% of the tank's health is pretty conservative for a boss – heroic bosses in Siege have frequently had raw damages that were larger than the player's health. But it's not clear whether that will still be the case with the new tanking and healing paradigm that's been unveiled for Warlords.
If we make the assumption that raw damage will be lower, then these rough estimates give us an idea of how large an effect Resolve will be. If we guess at a 5%-10% baseline value from stamina, these plots suggest that Resolve will end up being anywhere from a 50% to 200% modifier on our healing. In other words, it has the potential to double or triple our healing output with the current tuning numbers. Of course, it’s anyone’s guess as to whether those numbers are even remotely close to what they’ll end up being by the end of beta.
Is It Fixed Yet?
If you look back over our old blog posts, the vast majority of our criticisms of Vengeance had to do with its tie-in to damage output. Those have obviously been addressed, which leaves me worrying that I’ll have nothing to rant about for the next two or three years.
But regarding everything else, I think Resolve stands a fair chance of addressing our concerns. One of the major issues with Vengeance was the sheer magnitude of the effect – you could go from having 50k AP to 600k AP on certain bosses, which meant your abilities got up to 10x more effective. Even though that’s an extreme case, I regularly noted having over 300k AP during progression bosses, a factor of around 6x improvement. Resolve looks like it’ll tamp down on that some. Reasonable bosses are unlikely to grant a multiplier larger than 2x, which will be easier to balance around.
It hasn’t been mentioned specifically in Celestalon’s post, but I think it’s a reasonable guess that they will continue to disable Resolve gains from damage that could be avoided through better play (i.e. intentionally “standing in the bad”). If so, there will be little (if any) incentive to take excess damage to get more Resolve. Our sheer AP scaling on certain effects created situations where this was a net survivability gain with Vengeance, but the lower multiplier should make that impossible with Resolve.
While I still don’t think it needs to affect anything other than active mitigation abilities, the fact that it’s a multiplier affecting everything equally rather than a flat AP boost should make it easier to keep talents with different AP coefficients balanced (Eternal Flame and Sacred Shield, specifically). And we already know that Eternal Flame is losing its Bastion of Glory interaction, another change which will facilitate making both talents acceptable choices.
All in all, I think it’s a really good system, if slightly less transparent. It’s too soon to tell whether we’ll see any unexpected problems, of course, but the mechanic doesn’t have any glaring issues that stand out upon first examination (unlike Vengeance). I still have a few lingering concerns about steady-state threat stability between tanks (ironically, due to the
removal of Vengeance), but that is the sort of thing which will become apparent fairly quickly during beta testing, and at any rate shouldn’t reflect on the performance of Resolve. |
All four terms are distinct, and they represent different concepts in quantum mechanics.
Firstly, $\Psi$ is the wave function of a particle distributed in three-dimensional space. It is a function of four coordinates ($x$, $y$, $z$, and $t$) and takes complex values. A typical example (a particle in a three-dimensional box) is $$\Psi(x,y,z,t) = \sqrt{\frac{8}{abc}} \sin\left(\frac{n_x\pi x}{a}\right) \sin\left(\frac{n_y\pi y}{b}\right) \sin\left(\frac{n_z\pi z}{c}\right) e^{-2\pi iEt/h}$$
$|\Psi|^2$, on the other hand, is defined mathematically as $\Psi\cdot\Psi^*$. Max Born interpreted this real-valued function as the probability density for finding the particle at a point in three-dimensional space. For the previous example, $$|\Psi(x,y,z,t)|^2 = \frac{8}{abc} \sin^2\left(\frac{n_x\pi x}{a}\right) \sin^2\left(\frac{n_y\pi y}{b}\right) \sin^2\left(\frac{n_z\pi z}{c}\right)$$ The complex phase factor disappears because $\Psi$ is multiplied by its own complex conjugate.
The probability of finding the particle in a volume element $\mathrm dV$ is $|\Psi|^2 \mathrm dV$. In spherical polar coordinates, $\mathrm dV = r^2 \, \mathrm dr \, \sin\theta \, \mathrm d\theta \, \mathrm d\phi$. When only the radial part matters (i.e. for a spherically symmetric $\Psi$), the angular integrals contribute a factor of $4\pi$, since $\int_{0}^{2\pi} \int_{0}^{\pi} \sin\theta \, \mathrm{d}\theta \, \mathrm{d}\phi = 4\pi$. We are then left with only the radial part, which is your
radial probability distribution function: $$P(r)\,\mathrm dr = |\Psi|^2 \, 4 \pi r^2 \, \mathrm dr$$ Here $P(r)\,\mathrm dr$ is the probability of finding the particle within a thin shell of thickness $\mathrm dr$ at radius $r$. So the radial distribution $P(r)$ is a function, and the probability of finding the particle anywhere within a radius $r_0$ is obtained by integrating that function from $0$ to $r_0$.
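As an illustrative numerical check (using the hydrogen 1s orbital in atomic units, an assumed example not taken from the answer above), the radial distribution $4\pi r^2|\Psi|^2$ integrates to 1 over all radii and peaks at the Bohr radius:

```python
import math

# Illustrative check with the hydrogen 1s orbital in atomic units (a0 = 1):
# psi = exp(-r) / sqrt(pi), so P(r) = 4*pi*r^2*|psi|^2 = 4 r^2 exp(-2r).
def radial_distribution(r):
    return 4.0 * r**2 * math.exp(-2.0 * r)

# Trapezoidal integration of P(r) from 0 to 40 a0: total probability -> 1.
n, r_max = 40000, 40.0
h = r_max / n
total = (0.5 * (radial_distribution(0.0) + radial_distribution(r_max))
         + sum(radial_distribution(i * h) for i in range(1, n))) * h

# The most probable radius (the peak of P) is the Bohr radius, r = 1.
peak = max((radial_distribution(i * h), i * h) for i in range(1, n))[1]
```

This makes the distinction concrete: the density $|\Psi|^2$ is largest at the nucleus, but the shell factor $4\pi r^2$ pushes the most probable radius out to $r = a_0$.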
Let $ u_{n} = \sin \! \left( \dfrac{\pi}{n} \right) $, where $ n \in \Bbb{N} $, and consider the series $ \displaystyle \sum_{n = 1}^{\infty} u_{n} $. Which of the following is/are true?
(a) $ \displaystyle \sum_{n = 1}^{\infty} u_{n} $ is convergent.
(b) $ \displaystyle \sum_{n = 1}^{\infty} u_{n} $ is divergent.
(c) $ \displaystyle \sum_{n = 1}^{\infty} u_{n} $ is absolutely convergent.
(d) $ u_{n} \to 0 $ as $ n \to \infty $.
Now, $ n \to \infty $ implies $ \dfrac{\pi}{n} \to 0 $, so $ u_{n} = \sin \! \left( \dfrac{\pi}{n} \right) \to 0 $. Also, from the graph of $ \sin $, it looks like this sequence will tend to $ 0 $.
I am not sure about the series options — whether they are all wrong or some are right, and why so. |
1. What is the difference between the compound interests on Rs. 5000 for 1½ years at 4% per annum, compounded yearly and half-yearly?
A. Rs. 2.04
B. Rs. 4.80
C. Rs. 3.06
D. Rs. 8.30
Here is the answer and explanation
Answer : Option A
Explanation :
Amount after 1½ years when interest is compounded yearly
$$= 5000 \times \left(1 + \dfrac{4}{100}\right) \times \left(1 + \dfrac{\frac{1}{2} \times 4}{100}\right) = 5000 \times \dfrac{104}{100} \times \dfrac{102}{100} = 104 \times 51 = \text{Rs. }5304$$
Compound interest for 1½ years when compounded yearly = Rs. (5304 − 5000) = Rs. 304
Amount after 1½ years when interest is compounded half-yearly
$$= \text{P}\left(1 + \dfrac{R/2}{100}\right)^{2T} = 5000\left(1 + \dfrac{2}{100}\right)^{3} = 5000\left(\dfrac{102}{100}\right)^3 = \text{Rs. }5306.04$$
Compound interest for 1½ years when compounded half-yearly = Rs. (5306.04 − 5000) = Rs. 306.04
Difference in the compound interests = 306.04 − 304 = Rs. 2.04
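As a quick sanity check, the arithmetic above can be reproduced in a few lines (a sketch, not part of the original solution):

```python
# Verify Question 1: Rs. 5000 for 1.5 years at 4% per annum.
P = 5000

# Compounded yearly: one full year at 4%, then a half-year at 2% (half of 4%).
amount_yearly = P * (1 + 4 / 100) * (1 + 2 / 100)

# Compounded half-yearly: three half-year periods at 2% each.
amount_half_yearly = P * (1 + 2 / 100) ** 3

difference = amount_half_yearly - amount_yearly
```

The half-yearly scheme earns slightly more because interest from the first half-year itself earns interest sooner.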
2. A bank offers 5% compound interest calculated on half-yearly basis. A customer deposits Rs. 1600 each on 1st January and 1st July of a year. At the end of the year, the amount he would have gained by way of interest is:
A. Rs. 120
B. Rs. 121
C. Rs. 123
D. Rs. 122
Here is the answer and explanation
Answer : Option B
Explanation :
Amount after 1 year on Rs. 1600 (deposited on 1st Jan) at 5% when interest calculated half-yearly
$$= \text{P}\left(1 + \dfrac{R/2}{100}\right)^{2T} = 1600\left(1 + \dfrac{5/2}{100}\right)^{2 \times 1} = 1600\left(1 + \dfrac{1}{40}\right)^2$$
Amount after ½ year on Rs. 1600 (deposited on 1st Jul) at 5% when interest is calculated half-yearly
$$= \text{P}\left(1 + \dfrac{R/2}{100}\right)^{2T} = 1600\left(1 + \dfrac{5/2}{100}\right)^{2 \times \frac{1}{2}} = 1600\left(1 + \dfrac{1}{40}\right)$$
Total amount after 1 year
$$= 1600\left(1 + \dfrac{1}{40}\right)^2 + 1600\left(1 + \dfrac{1}{40}\right) = 1600\left(\dfrac{41}{40}\right)\left[1 + \dfrac{41}{40}\right] = 1600\left(\dfrac{41}{40}\right)\left(\dfrac{81}{40}\right) = 41 \times 81 = \text{Rs. }3321$$
Compound interest = Rs. 3321 − Rs. 3200 = Rs. 121
3. There is 80% increase in an amount in 8 years at simple interest. What will be the compound interest of Rs. 14,000 after 3 years at the same rate?
A. Rs.3794
B. Rs.3714
C. Rs.4612
D. Rs.4634
Here is the answer and explanation
Answer : Option D
Explanation :
Let P = Rs.100
Simple interest = Rs. 80 (∵ the 80% increase is due to simple interest)
$$\text{Rate of interest} = \dfrac{100 \times \text{SI}}{\text{PT}} = \dfrac{100 \times 80}{100 \times 8} = 10\%\text{ per annum}$$
Now find the compound interest on Rs. 14,000 after 3 years at 10%: P = Rs. 14000, T = 3 years, R = 10%
$$\text{Amount after 3 years} = \text{P}\left(1 + \dfrac{R}{100}\right)^{T} = 14000\left(1 + \dfrac{10}{100}\right)^3 = 14000\left(\dfrac{11}{10}\right)^3 = 14 \times 11^3 = 18634$$
Compound interest = Rs. 18634 − Rs. 14000 = Rs. 4634
4. The compound interest on Rs. 30,000 at 7% per annum is Rs. 4347. The period (in years) is:
A. 1
B. 2
C. 3
D. 3.5
Here is the answer and explanation
Answer : Option B
Explanation :
Let the period be n years
Then, amount after n years = Rs. (30000 + 4347) = Rs. 34347
$$\begin{align}&\text{P}\left(1 + \dfrac{R}{100}\right)^{n} = 34347\\
&30000\left(\dfrac{107}{100}\right)^{n} = 34347\\
&\left(\dfrac{107}{100}\right)^{n} = \dfrac{34347}{30000} = \dfrac{11449}{10000} = \left(\dfrac{107}{100}\right)^2\\
&n = 2\text{ years}\end{align}$$
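Instead of spotting the perfect square, the period can also be found directly with logarithms; a quick sketch (not part of the original solution):

```python
import math

# Question 4: solve 30000 * (1.07)**n = 34347 for n using logarithms.
P, amount, rate = 30000, 34347, 0.07
n = math.log(amount / P) / math.log(1 + rate)
```

This generalizes to periods that are not whole numbers of years, where the recognize-the-square trick does not apply.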
5. The difference between simple and compound interests compounded annually on a certain sum of money for 2 years at 4% per annum is Re. 1. The sum is:
A. Rs.600
B. Rs.645
C. Rs.525
D. Rs.625
Here is the answer and explanation
Answer : Option D
Explanation :
Solution 1
Let the sum be Rs. x
$$\begin{align}&\text{Amount after 2 years on Rs. }x\text{ at 4% per annum, compounded annually} = x\left(1 + \dfrac{4}{100}\right)^2 = x\left(\dfrac{104}{100}\right)^2\\
&\text{Compound interest} = x\left(\dfrac{104}{100}\right)^2 - x = x\left[\left(\dfrac{26}{25}\right)^2 - 1\right] = x\left[\dfrac{676}{625} - 1\right] = \dfrac{51x}{625}\\
&\text{Simple interest} = \dfrac{\text{PRT}}{100} = \dfrac{x \times 4 \times 2}{100} = \dfrac{2x}{25}\end{align}$$
Given that the difference between the compound interest and the simple interest is Rs. 1:
$$\begin{align}&\dfrac{51x}{625} - \dfrac{2x}{25} = 1\\
&\dfrac{51x - 50x}{625} = 1\\
&x = 625\end{align}$$
Solution 2
For 2 years, the difference between compound interest and simple interest $= \text{P}\left(\dfrac{R}{100}\right)^2$
$$\begin{align}&\text{P}\left(\dfrac{R}{100}\right)^2 = 1\\
&\text{P}\left(\dfrac{4}{100}\right)^2 = 1\\
&\text{P}\left(\dfrac{1}{25}\right)^2 = 1\\
&\text{P}\left(\dfrac{1}{625}\right) = 1\\
&\text{P} = 625\end{align}$$
i.e., required sum is Rs.625
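The 2-year shortcut formula can be checked against the direct computation (an illustrative sketch):

```python
# The 2-year shortcut: CI - SI = P * (R/100)**2.
# Verify it against the direct computation for P = 625, R = 4.
P, R = 625, 4
ci = P * (1 + R / 100) ** 2 - P       # compound interest over 2 years
si = P * R * 2 / 100                  # simple interest over 2 years
shortcut = P * (R / 100) ** 2
```

Both routes give a difference of Re. 1, confirming the required sum of Rs. 625.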
The Annals of Statistics, Volume 6, Number 4 (1978), 701–726.
Nonparametric Inference for a Family of Counting Processes
Abstract
Let $\mathbf{N} = (N_1, \cdots, N_k)$ be a multivariate counting process and let $\mathscr{F}_t$ be the collection of all events observed on the time interval $\lbrack 0, t\rbrack.$ The intensity process is given by $\Lambda_i(t) = \lim_{h \downarrow 0} \frac{1}{h}E(N_i(t + h) - N_i(t) \mid \mathscr{F}_t)\quad i = 1, \cdots, k.$ We give an application of the recently developed martingale-based approach to the study of $\mathbf{N}$ via $\mathbf{\Lambda}.$ A statistical model is defined by letting $\Lambda_i(t) = \alpha_i(t)Y_i(t), i = 1, \cdots, k,$ where $\mathbf{\alpha} = (\alpha_1, \cdots, \alpha_k)$ is an unknown nonnegative function while $\mathbf{Y} = (Y_1, \cdots, Y_k),$ together with $\mathbf{N},$ is a process observable over a certain time interval. Special cases are time-continuous Markov chains on finite state spaces, birth and death processes and models for survival analysis with censored data. The model is termed nonparametric when $\mathbf{\alpha}$ is allowed to vary arbitrarily except for regularity conditions. The existence of complete and sufficient statistics for this model is studied. An empirical process estimating $\beta_i(t) = \int^t_0 \alpha_i(s) ds$ is given and studied by means of the theory of stochastic integrals. This empirical process is intended for plotting purposes; it generalizes the empirical cumulative hazard rate from survival analysis and is related to the product limit estimator. Consistency and weak convergence results are given. Tests for comparison of two counting processes, generalizing the two sample rank tests, are defined and studied. Finally, an application to a set of biological data is given.
Article information
Source: Ann. Statist., Volume 6, Number 4 (1978), 701–726.
Dates: First available in Project Euclid: 12 April 2007
Permanent link to this document: https://projecteuclid.org/euclid.aos/1176344247
Digital Object Identifier: doi:10.1214/aos/1176344247
Mathematical Reviews number (MathSciNet): MR491547
Zentralblatt MATH identifier: 0389.62025
JSTOR: links.jstor.org
Subjects: Primary: 62G05 (Estimation). Secondary: 62G10 (Hypothesis testing), 62M99, 62N05 (Reliability and life testing), 60G45, 60H05 (Stochastic integrals), 62M05 (Markov processes: estimation)
Citation
Aalen, Odd. Nonparametric Inference for a Family of Counting Processes. Ann. Statist. 6 (1978), no. 4, 701--726. doi:10.1214/aos/1176344247. https://projecteuclid.org/euclid.aos/1176344247 |
Electromagnetic Waves: Displacement Current
The magnetic field due to a conductor carrying conduction current $i_c$ is determined using Ampère's circuital law.
$$\oint \overrightarrow{B} \cdot \overrightarrow{dl} = \mu_{0} i_{c}$$
Conduction current is the flow of charge: $i_c = \dfrac{dq}{dt}$. The rate of change of electric flux produces a current called the displacement current, $i_d$. The flux due to the electric field is $\phi_E = EA\cos\theta = \overline{E} \cdot \overline{A} = \int \overline{E} \cdot \overline{ds}$, so when a varying electric field is applied to the gap,
$$i_d = \varepsilon_0 \dfrac{d\phi_E}{dt} = \varepsilon_0 A \dfrac{dE}{dt}$$
where $i_d$ is the displacement current.
When a variable potential difference $V$ is applied to the plates of a condenser $C$, then $$i_d = C \dfrac{dV}{dt}$$ and $i_c = i_d$, so $i_c = \dfrac{V}{X_C} = V \omega C$, where $i_c$ is the conduction current and $i_d$ the displacement current.
The generalised Ampère's circuital law is
$$\oint \overrightarrow{B} \cdot \overrightarrow{dl} = \mu_{0}\left(i_{c} + i_{d}\right) = \mu_{0} i_{c} + \mu_{0} \varepsilon_{0} \dfrac{d \phi_{E}}{dt}$$
Inside the gap, at a distance $r$ from the axis ($r \le R$), $$B = \dfrac{\mu_{0}}{2 \pi} i_{d} \dfrac{r}{R^{2}}$$ and at $r = R$ the field takes its maximum value $$B = \dfrac{\mu_{0}}{2 \pi} \dfrac{i_{d}}{R}.$$
Gauss's law for electricity: $\oint \overrightarrow{E} \cdot \overrightarrow{dA} = q_{net}/\varepsilon_{0}$
Gauss's law for magnetism: $\oint \overrightarrow{B} \cdot \overrightarrow{dA} = 0$
Faraday's law: $\oint \overrightarrow{E} \cdot \overrightarrow{dl} = -\dfrac{d \phi_{B}}{dt}$
Ampère–Maxwell law: $\oint \overrightarrow{B} \cdot \overrightarrow{dl} = \mu_{0} i_{c} + \mu_{0} \varepsilon_{0} \dfrac{d \phi_{E}}{dt} = \mu_{0}\left(i_{c} + i_{d}\right)$
The force acting on a charge $q$ moving in the electric and magnetic fields of an EM wave, which exist simultaneously, is the Lorentz force $$\overline{F} = q \left[\overline{E} + \overline{V} \times \overline{B}\right].$$
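As a numerical illustration of $i_d = \varepsilon_0 A \, dE/dt$ (the plate area and field rate below are assumed values chosen for illustration, not taken from the notes):

```python
# Displacement current in a parallel-plate capacitor gap: i_d = eps0 * A * dE/dt.
eps0 = 8.854e-12      # permittivity of free space, F/m
A = 0.01              # plate area, m^2 (assumed)
dE_dt = 1.0e12        # rate of change of the E field, V/(m*s) (assumed)
i_d = eps0 * A * dE_dt   # displacement current in amperes
```

Even a very rapidly changing field across a small gap yields a modest current here, about 0.089 A, which is why displacement current only matters when fields vary quickly.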
1. According to Ampère's circuital law, the magnetic field B is related to a steady current I as $\oint_{c} \overrightarrow{B}\cdot \overrightarrow{dl} = \mu_{0}I$
2. The generalised law uses the sum of the conduction current and the displacement current: $\oint \overrightarrow{B}\cdot \overrightarrow{dl} = \mu_{0}i_{c} + \mu_{0} \varepsilon_{0}\dfrac{d \phi_{E}}{dt}$
3. $\oint \overrightarrow{E}\cdot \overrightarrow{dA} = Q/\varepsilon_{0}$ (Gauss's law for electricity)
4. $\oint \overrightarrow{B}\cdot \overrightarrow{dA} = 0$ (Gauss's law for magnetism)
5. $\oint \overrightarrow{E}\cdot \overrightarrow{dl} = -\dfrac{d \phi_{B}}{dt}$ (Faraday's law)
Genotype Refinement workflow for germline short variants
Contents: Overview | Summary of workflow steps | Output annotations | Example | More information about priors | Mathematical details
1. Overview
The core GATK Best Practices workflow has historically focused on variant discovery (that is, the existence of genomic variants in one or more samples of a cohort) and consistently delivers high-quality results when applied appropriately. However, we know that the quality of the individual genotype calls coming out of the variant callers can vary widely based on the quality of the BAM data for each sample. The goal of the Genotype Refinement workflow is to use additional data to improve the accuracy of genotype calls and to filter out genotype calls that are not reliable enough for downstream analysis. In this sense it serves as an optional extension of the variant calling workflow, intended for researchers whose work requires high-quality identification of individual genotypes.
While every study can benefit from increased data accuracy, this workflow is especially useful for analyses that are concerned with how many copies of each variant an individual has (e.g. in the case of loss of function) or with the transmission (or de novo origin) of a variant in a family.
If a “gold standard” dataset for SNPs is available, that can be used as a very powerful set of priors on the genotype likelihoods in your data. For analyses involving families, a pedigree file describing the relatedness of the trios in your study will provide another source of supplemental information. If neither of these applies to your data, the samples in the dataset itself can provide some degree of genotype refinement (see section 5 below for details).
After running the Genotype Refinement workflow, several new annotations will be added to the INFO and FORMAT fields of your variants (see below).
Note that GQ fields will be updated, and genotype calls may be modified. However, the Phred-scaled genotype likelihoods (PLs) which indicate the original genotype call (the genotype candidate with PL=0) will remain untouched. Any analysis that made use of the PLs will produce the same results as before. 2. Summary of workflow steps Input
Begin with recalibrated variants from VQSR at the end of the germline short variants pipeline. The filters applied by VQSR will be carried through the Genotype Refinement workflow.
Step 1: Derive posterior probabilities of genotypes Tool used: CalculateGenotypePosteriors
Using the Phred-scaled genotype likelihoods (PLs) for each sample, prior probabilities for a sample taking on a HomRef, Het, or HomVar genotype are applied to derive the posterior probabilities of the sample taking on each of those genotypes. A sample’s PLs were calculated by HaplotypeCaller using only the reads for that sample. By introducing additional data like the allele counts from the 1000 Genomes project and the PLs for other individuals in the sample’s pedigree trio, those estimates of genotype likelihood can be improved based on what is known about the variation of other individuals.
SNP calls from the 1000 Genomes project capture the vast majority of variation across most human populations and can provide very strong priors in many cases. At sites where most of the 1000 Genomes samples are homozygous variant with respect to the reference genome, the probability of a sample being analyzed of also being homozygous variant is very high.
For a sample for which both parent genotypes are available, the child’s genotype can be supported or invalidated by the parents’ genotypes based on Mendel’s laws of allele transmission. Even the confidence of the parents’ genotypes can be recalibrated, such as in cases where the genotypes output by HaplotypeCaller are apparent Mendelian violations.
Step 2: Filter low quality genotypes Tool used: VariantFiltration
After the posterior probabilities are calculated for each sample at each variant site, genotypes with GQ < 20 based on the posteriors are filtered out. GQ20 is widely accepted as a good threshold for genotype accuracy, indicating that there is a 99% chance that the genotype in question is correct. Tagging those low quality genotypes indicates to researchers that these genotypes may not be suitable for downstream analysis. However, as with the VQSR, a filter tag is applied, but the data is not removed from the VCF.
Step 3: Annotate possible de novo mutations Tool used: VariantAnnotator
Using the posterior genotype probabilities, possible de novo mutations are tagged. Low confidence de novos have child GQ >= 10 and AC < 4 or AF < 0.1%, whichever is more stringent for the number of samples in the dataset. High confidence de novo sites have all trio sample GQs >= 20 with the same AC/AF criterion.
Step 4: Functional annotation of possible biological effects Tool options: Funcotator (experimental)
Especially in the case of de novo mutation detection, analysis can benefit from the functional annotation of variants to restrict variants to exons and surrounding regulatory regions. Funcotator is a new tool that is currently still in development. If you would prefer to use a more mature tool, we recommend you look into SnpEff or Oncotator, but note that these are not GATK tools so we do not provide support for them.
3. Output annotations
The Genotype Refinement workflow adds several new info- and format-level annotations to each variant. GQ fields will be updated, and genotypes calculated to be highly likely to be incorrect will be changed. The Phred-scaled genotype likelihoods (PLs) carry through the pipeline without being changed. In this way, PLs can be used to derive the original genotypes in cases where sample genotypes were changed.
Population Priors
New INFO field annotation PG is a vector of the Phred-scaled prior probabilities of a sample at that site being HomRef, Het, and HomVar. These priors are based on the input samples themselves along with data from the supporting samples if the variant in question overlaps another in the supporting dataset.
Phred-Scaled Posterior Probability
New FORMAT field annotation PP is the Phred-scaled posterior probability of the sample taking on each genotype for the given variant context alleles. The PPs represent a better-calibrated estimate of genotype probabilities than the PLs and are recommended for use in further analyses in place of the PLs.
Genotype Quality
Current FORMAT field annotation GQ is updated based on the PPs. The calculation is the same as for GQ based on PLs.
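Since GQ is derived from the PPs the same way it is derived from PLs, the calculation can be sketched as the gap between the two smallest Phred-scaled values (a minimal sketch; GATK's actual implementation may differ in details such as capping GQ at 99):

```python
# Minimal sketch of GQ from Phred-scaled posteriors (PP): the difference
# between the two smallest PP values (the smallest is 0 for the called
# genotype).  The actual GATK implementation may additionally cap GQ at 99.
def gq_from_pp(pp):
    lowest, second = sorted(pp)[:2]
    return second - lowest

# PP vectors from the trio worked example elsewhere in this document:
child_gq  = gq_from_pp([49, 0, 287])
parent1_gq = gq_from_pp([0, 32, 439])
parent2_gq = gq_from_pp([867, 43, 0])
```

These reproduce the GQ values 49, 32, and 43 shown in the post-refinement VCF record of the worked example.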
Joint Trio Likelihood
New FORMAT field annotation JL is the Phred-scaled joint likelihood of the posterior genotypes for the trio being incorrect. This calculation is based on the PLs produced by HaplotypeCaller (before application of priors), but the genotypes used come from the posteriors. The goal of this annotation is to be used in combination with JP to evaluate the improvement in the overall confidence in the trio’s genotypes after applying CalculateGenotypePosteriors. The calculation of the joint likelihood is given as:
$$ JL = -10\log_{10}\left(1 - GL_{\text{mother}} \times GL_{\text{father}} \times GL_{\text{child}}\right) $$
where the GLs are the genotype likelihoods in [0, 1] probability space.
Joint Trio Posterior
New FORMAT field annotation JP is the Phred-scaled posterior probability of the output posterior genotypes for the three samples being incorrect. The calculation of the joint posterior is given as:
$$ JP = -10\log_{10}\left(1 - GP_{\text{mother}} \times GP_{\text{father}} \times GP_{\text{child}}\right) $$
where the GPs are the genotype posteriors in [0, 1] probability space.
Low Genotype Quality
New FORMAT field filter lowGQ indicates samples with posterior GQ less than 20. Filtered samples tagged with lowGQ are not recommended for use in downstream analyses.
High and Low Confidence De Novo
New INFO field annotation for sites at which at least one family has a possible de novo mutation. Following the annotation tag is a list of the children with de novo mutations. High and low confidence are output separately.
4. Example
Before:
1 1226231 rs13306638 G A 167563.16 PASS AC=2;AF=0.333;AN=6;… GT:AD:DP:GQ:PL 0/0:11,0:11:0:0,0,249 0/0:10,0:10:24:0,24,360 1/1:0,18:18:60:889,60,0
After:
1 1226231 rs13306638 G A 167563.16 PASS AC=3;AF=0.500;AN=6;…PG=0,8,22;… GT:AD:DP:GQ:JL:JP:PL:PP 0/1:11,0:11:49:2:24:0,0,249:49,0,287 0/0:10,0:10:32:2:24:0,24,360:0,32,439 1/1:0,18:18:43:2:24:889,60,0:867,43,0
The original call for the child (first sample) was HomRef with GQ0. However, given that, with high confidence, one parent is HomRef and one is HomVar, we expect the child to be heterozygous at this site. After family priors are applied, the child’s genotype is corrected and its GQ is increased from 0 to 49. Based on the allele frequency from 1000 Genomes for this site, the somewhat weaker population priors favor a HomRef call (PG=0,8,22). The combined effect of family and population priors still favors a Het call for the child.
The joint likelihood for this trio at this site is two, indicating that the genotype for one of the samples may have been changed. Specifically a low JL indicates that posterior genotype for at least one of the samples was not the most likely as predicted by the PLs. The joint posterior value for the trio is 24, which indicates that the GQ values based on the posteriors for all of the samples are at least 24. (See above for a more complete description of JL and JP.)
5. More information about priors
The Genotype Refinement Pipeline uses Bayes’s Rule to combine independent data with the genotype likelihoods derived from HaplotypeCaller, producing more accurate and confident genotype posterior probabilities. Different sites will have different combinations of priors applied based on the overlap of each site with external, supporting SNP calls and on the availability of genotype calls for the samples in each trio.
Input-derived Population Priors
If the input VCF contains at least 10 samples, then population priors will be calculated based on the discovered allele count for every called variant.
Supporting Population Priors
Priors derived from supporting SNP calls can only be applied at sites where the supporting calls overlap with called variants in the input VCF. The values of these priors vary based on the called reference and alternate allele counts in the supporting VCF. Higher allele counts (for ref or alt) yield stronger priors.
Family Priors
The strongest family priors occur at sites where the called trio genotype configuration is a Mendelian violation. In such a case, each Mendelian violation configuration is penalized by a de novo mutation probability (currently 10-6). Confidence also propagates through a trio. For example, two GQ60 HomRef parents can substantially boost a low GQ HomRef child and a GQ60 HomRef child and parent can improve the GQ of the second parent. Application of family priors requires the child to be called at the site in question. If one parent has a no-call genotype, priors can still be applied, but the potential for confidence improvement is not as great as in the 3-sample case.
Caveats
Right now family priors can only be applied to biallelic variants and population priors can only be applied to SNPs. Family priors only work for trios.
6. Mathematical details
Note that family priors are calculated and applied before population priors. The opposite ordering would result in overly strong population priors because they are applied to the child and parents and then compounded when the trio likelihoods are multiplied together.
Review of Bayes’s Rule
HaplotypeCaller outputs the likelihoods of observing the read data given that the genotype is actually HomRef, Het, and HomVar. To convert these quantities to the probability of the genotype given the read data, we can use Bayes’s Rule. Bayes’s Rule dictates that the probability of a parameter given observed data is equal to the likelihood of the observations given the parameter multiplied by the prior probability that the parameter takes on the value of interest, normalized by the prior times likelihood for all parameter values:
$$ P(\theta|Obs) = \frac{P(Obs|\theta)P(\theta)}{\sum_{\theta} P(Obs|\theta)P(\theta)} $$
In the best practices pipeline, we interpret the genotype likelihoods as probabilities by implicitly converting the genotype likelihoods to genotype probabilities using non-informative or flat priors, for which each genotype has the same prior probability. However, in the Genotype Refinement Pipeline we use independent data such as the genotypes of the other samples in the dataset, the genotypes in a “gold standard” dataset, or the genotypes of the other samples in a family to construct more informative priors and derive better posterior probability estimates.
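The conversion from likelihoods to posteriors described above can be sketched in a few lines of Python (the function name and the example PL values are hypothetical, not from the pipeline):

```python
def pl_to_posteriors(pls, priors):
    """Convert Phred-scaled genotype likelihoods (PLs) and prior genotype
    probabilities into posterior genotype probabilities via Bayes' Rule."""
    # PL = -10*log10(likelihood), so likelihood = 10^(-PL/10)
    likelihoods = [10 ** (-pl / 10) for pl in pls]
    unnormalized = [lik * pri for lik, pri in zip(likelihoods, priors)]
    total = sum(unnormalized)  # normalizing constant from Bayes' Rule
    return [u / total for u in unnormalized]

# Flat priors leave the genotype ranking unchanged (order: HomRef, Het, HomVar)
flat = pl_to_posteriors([0, 30, 500], [1/3, 1/3, 1/3])
# An informative prior favoring Het can change the posterior ordering
informed = pl_to_posteriors([0, 3, 500], [0.04, 0.92, 0.04])
```

With flat priors the posteriors simply renormalize the likelihoods; with the informative prior, a marginal PL difference of 3 is outweighed by the prior.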
Calculation of Population Priors
Given a set of samples in addition to the sample of interest (ideally non-related, but from the same ethnic population), we can derive the prior probability of the genotype of the sample of interest by modeling the sample’s alleles as two independent draws from a pool consisting of the set of all the supplemental samples’ alleles. (This follows rather naturally from the Hardy-Weinberg assumptions.) Specifically, this prior probability will take the form of a multinomial Dirichlet distribution parameterized by the allele counts of each allele in the supplemental population. In the biallelic case the priors can be calculated as follows:
$$ P(GT = HomRef) = \dbinom{2}{0} \ln \frac{\Gamma(nSamples)\Gamma(RefCount + 2)}{\Gamma(nSamples + 2)\Gamma(RefCount)} $$
$$ P(GT = Het) = \dbinom{2}{1} \ln \frac{\Gamma(nSamples)\Gamma(RefCount + 1)\Gamma(AltCount + 1)}{\Gamma(nSamples + 2)\Gamma(RefCount)\Gamma(AltCount)} $$
$$ P(GT = HomVar) = \dbinom{2}{2} \ln \frac{\Gamma(nSamples)\Gamma(AltCount + 2)}{\Gamma(nSamples + 2)\Gamma(AltCount)} $$
where Γ is the Gamma function, an extension of the factorial function.
The prior genotype probabilities based on this distribution scale intuitively with number of samples. For example, a set of 10 samples, 9 of which are HomRef yield a prior probability of another sample being HomRef with about 90% probability whereas a set of 50 samples, 49 of which are HomRef yield a 97% probability of another sample being HomRef.
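The displayed expressions can be evaluated numerically with log-gamma functions (a sketch; reading the displayed `ln` forms as the log-space computation of the probabilities, and the "9 HomRef plus one Het" pool is an assumed example):

```python
from math import lgamma, exp, comb

def population_priors(ref_count, alt_count):
    """Dirichlet-multinomial prior genotype probabilities at a biallelic site,
    modeling the sample's two alleles as draws from a supplemental allele pool."""
    n = ref_count + alt_count  # total alleles in the supplemental pool

    def log_ratio(count, j):
        # log of Gamma(n)Gamma(count + j) / (Gamma(n + 2)Gamma(count))
        return lgamma(n) + lgamma(count + j) - lgamma(n + 2) - lgamma(count)

    p_homref = comb(2, 0) * exp(log_ratio(ref_count, 2))
    p_het = comb(2, 1) * exp(
        lgamma(n) + lgamma(ref_count + 1) + lgamma(alt_count + 1)
        - lgamma(n + 2) - lgamma(ref_count) - lgamma(alt_count)
    )
    p_homvar = comb(2, 2) * exp(log_ratio(alt_count, 2))
    return p_homref, p_het, p_homvar

# 9 HomRef samples plus one Het sample: 19 ref alleles, 1 alt allele
hr, het, hv = population_priors(19, 1)  # hr is about 0.90, as described above
```

The three probabilities sum to one, and the HomRef prior is about 90% for this pool, matching the scaling described in the text.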
Calculation of Family Priors
Given a genotype configuration for a given mother, father, and child trio, we set the prior probability of that genotype configuration as follows:
$$ P(G_M,G_F,G_C) = \begin{cases} 1-10\mu-2\mu^2 & \text{no MV} \\ \mu & \text{1 MV} \\ \mu^2 & \text{2 MVs} \end{cases} $$
where the 10 configurations with a single Mendelian violation are penalized by the de novo mutation probability $\mu$ and the two configurations with two Mendelian violations by $\mu^2$. The remaining configurations are considered valid and are assigned the remaining probability so that the distribution sums to one.
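Counting Mendelian violations for the 27 biallelic trio configurations can be sketched as follows (helper names are hypothetical; the even split of the remaining probability mass over the 15 valid configurations is an assumption, the pipeline may weight them differently):

```python
from itertools import product

MU = 1e-6  # de novo mutation probability

# Alt-allele counts a parent with genotype 0 (HomRef), 1 (Het), 2 (HomVar) can transmit
ALLELES = {0: (0,), 1: (0, 1), 2: (1,)}

def mendelian_violations(gm, gf, gc):
    """Minimum number of de novo events needed to explain the child's genotype
    (alt-allele count gc) given the mother's and father's genotypes."""
    return min(abs(gc - am - af) for am in ALLELES[gm] for af in ALLELES[gf])

def family_prior(gm, gf, gc, n_valid=15):
    mv = mendelian_violations(gm, gf, gc)
    if mv == 0:
        # remaining probability split evenly over valid configurations (assumption)
        return (1 - 10 * MU - 2 * MU ** 2) / n_valid
    return MU if mv == 1 else MU ** 2

counts = [mendelian_violations(*cfg) for cfg in product(range(3), repeat=3)]
# 15 valid configurations, 10 with one violation, 2 with two violations
```

Enumerating all 27 configurations reproduces the counts stated above: 10 single-violation and 2 double-violation configurations (HomRef×HomRef→HomVar and HomVar×HomVar→HomRef).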
This prior is applied to the joint genotype combination of the three samples in the trio. To find the posterior for any single sample, we marginalize over the remaining two samples. For example, the posterior probability of the child having a HomRef genotype is
$$ P(G_C = \text{HomRef} \mid D) = \sum_{G_M} \sum_{G_F} P(G_M, G_F, G_C = \text{HomRef} \mid D) $$
This quantity P(Gc|D) is calculated for each genotype, then the resulting vector is Phred-scaled and output as the Phred-scaled posterior probabilities (PPs). |
$$ \large a = 1 + \dfrac1{2^2} + \dfrac1{3^2} + \cdots + \dfrac1{2016^2} $$

Find the value of $\lfloor a \rfloor$.

Notation: $\lfloor \cdot \rfloor$ denotes the floor function.
Note by Ayush G Rai 3 years, 4 months ago
The answer is 1.
Note that if $s_n=\sum_{r=1}^n \frac{1}{r^2}$ (so $s_1=1$), then $\{s_n\}_{n=1}^{\infty}$ is a strictly increasing sequence that is always less than 2; that is, $n<m \implies s_n<s_m$ and $s_n<2$ for all $n \in \mathbb{N}$.
Here are the proofs:
The first statement is true since $s_m$ contains more positive terms than $s_n$.
To prove the next assertion, note that
$$\begin{aligned}
0 &< r(r-1) < r^2 \quad \forall r \in \mathbb{N}\setminus\{1\} \\
\implies \frac{1}{r^2} &< \frac{1}{r(r-1)} \quad \forall r \in \mathbb{N}\setminus\{1\} \\
\implies s_n &< 1+\sum_{r=2}^n \frac{1}{r(r-1)} = 1+\sum_{r=2}^n \left(\frac{1}{r-1}-\frac{1}{r}\right) = 2-\frac{1}{n} \\
\implies s_n &< 2 \quad \forall n \in \mathbb{N}
\end{aligned}$$
Now, $1=s_1<s_{2016}<2 \implies \large \boxed{\lfloor s_{2016} \rfloor = 1}$.
Note:
$s_{2016} < \lim_{n \to \infty} s_n = \frac{\pi^2}{6} < 1.645$
Refer here for the proof of this.
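As a quick numerical sanity check of the argument (not part of the proof):

```python
# Partial sum of 1/r^2 up to r = 2016; the proof shows 1 < a < 2,
# so the floor must be 1.
a = sum(1 / r ** 2 for r in range(1, 2017))
floor_a = int(a)  # a is positive, so int() truncates like the floor
# a is about 1.6444, comfortably below pi^2/6 (about 1.6449)
```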
Nice solution..+1.I guessed it anyway
Try to answer other problems of the set.You are tooooo good in solving problems.
Will try after jee advanced :)
@Deeparaj Bhat – How much did you get in jee-main?
@Ayush G Rai – Not so good... 250.
@Deeparaj Bhat – That is pretty good. well i have a long way to go.I'm still in 9th std going to the 10th.
I think the answer is 2.
This is again for NMTC lvl 2 2016
no it is of 2015
Oh sorry I meant the previous year IE 2015....I know because I appeared...
@Ankit Kumar Jain – even I appeared
@Ayush G Rai – How many could you solve??
@Ankit Kumar Jain – same as you...1 and a half
@Ankit Kumar Jain – how much problems could you solve??
@Ayush G Rai – 1 and a half....definitely not so good...One of my friends solved about 3 and qualified
I was wondering if there were any encryption algorithms that kinda worked like RSA in the sense that there are two keys, however, one of the keys
only decrypts (meaning you can't encrypt with it) and the other key that only encrypts. Is this possible?
There are two fundamental difficulties with what you're asking for.
First, in the usual definition of public-key encryption, the public key is assumed to be, well, public. That is, everybody is assumed to know it, and thus to be able to encrypt messages.
If you don't want everybody to be able to encrypt, then you're no longer doing public-key crypto in the usual sense, and you need to start paying attention to things like exactly who should be allowed to know the (not so) public half of the key pair and how the halves of the key pair should be distributed.
Second, in general, whoever first creates the key pair must, at least temporarily, possess the information necessary to construct both keys. If they want, they can save that information and use it to both encrypt and decrypt any messages.
In standard public-key crypto this is not an issue, since it's usually assumed that the keys are generated by the private key holder, who is supposed to know both keys anyway. But if the private key holder is not supposed to know the "public" key, then we need to either:
- trust the private key holder to forget the public key (and any other information that could be used to reconstruct it) after generating it,
- entrust the key pair generation to some trusted third party (who, if they're not as trustworthy as we think they are, could then eavesdrop on or tamper with any communications encrypted with the keys), or
- somehow generate the key pair using some kind of distributed multi-party computation scheme that doesn't allow any party to learn any parts of the keys they shouldn't know (which is far from a trivial task, if it's even possible).
All that said, if we handwave all those issues away (e.g. by assuming a trusted third party that handles key generation and distribution), plain old RSA can be used like this. In fact, I asked a question about using RSA in a very similar manner a while ago.
All we need to do is somehow ensure that:
- only those who should be able to encrypt messages know the encryption exponent $e$,
- only those who should be able to decrypt messages know the decryption exponent $d$, and
- nobody (except maybe the 100% trustworthy key generator) knows the factors $p$ and $q$ of the modulus $n = pq$, or anything else (such as $\lambda(n) = \operatorname{lcm}(p-1,q-1)$) that could be used to reconstruct one exponent from the other.
(The modulus $n$ itself can still be public knowledge.)
Of course, this implies that we cannot use a fixed encryption exponent like $e = 65{,}537$, like most ordinary RSA implementations do, but must instead pick $e$ at random from the set of distinct possible encryption exponents (i.e. positive integers less than and coprime to $\lambda(n)$).
We can do this by repeatedly picking a random odd number $1 < e < \lambda(n)$ until we find one that satisfies $\gcd(e, \lambda(n)) = 1$, and compute $d = e^{-1} \pmod{\lambda(n)}$. Or, equivalently, we can pick $d$ randomly in the same way, and compute $e = d^{-1} \pmod{\lambda(n)}$. Modular inversion is a bijection, so picking $e$ uniformly at random from the entire set of invertible congruence classes ensures that $d$ is also uniformly distributed, and vice versa.
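The key generation procedure just described can be sketched in Python (a toy illustration only: the primes are far too small to be secure, and in a real deployment the generator would have to securely erase $p$, $q$, $\lambda(n)$, and one exponent afterwards):

```python
import math
import random

def make_keypair(p, q):
    """Toy RSA key generation with a *random* encryption exponent,
    so that e is no easier to guess than d."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)  # Carmichael function lambda(n) for n = pq
    while True:
        e = random.randrange(3, lam, 2)  # random odd candidate
        if math.gcd(e, lam) == 1:
            break
    d = pow(e, -1, lam)  # modular inverse (Python 3.8+)
    return n, e, d

n, e, d = make_keypair(61, 53)
m = 42
c = pow(m, e, n)          # "encrypt" with one exponent
assert pow(c, d, n) == m  # "decrypt" with the other
```

By symmetry, picking $d$ first and inverting would work just as well, as noted above.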
In fact, the latter scheme is essentially* what Rivest, Shamir and Adleman used in the original RSA paper. Using a small, fixed $e$ is a later optimization introduced after it was realized that $e$, being public, doesn't actually need to be random.
The tricky part, of course, is that whoever generates the key pair will still end up knowing both $e$ and $d$, and — if they're not supposed to retain the ability to both encrypt and decrypt — must be trusted to securely forget at least one of these exponents, as well as any other values (including $p$, $q$ and $\lambda(n)$) that would allow them to reconstruct those exponents later.
*) The original RSA paper uses $\phi(n) = (p-1)(q-1)$ instead of $\lambda(n)$ when computing the modular inverse. This has no real effect on security, it just sometimes produces larger exponents than necessary and allows multiple equivalent key pairs. The original RSA paper also does not specify an actual range to pick $d$ from, but only that it should be "a large, random integer". |
Crystals of Generalized Young Walls¶
AUTHORS:
Lucas David-Roesler: Initial version Ben Salisbury: Initial version Travis Scrimshaw: Initial version
Generalized Young walls are certain generalizations of Young tableaux introduced in [KS2010] and designed to be a realization of the crystals \(\mathcal{B}(\infty)\) and \(\mathcal{B}(\lambda)\) in type \(A_n^{(1)}\).
REFERENCES:
class sage.combinat.crystals.generalized_young_walls.CrystalOfGeneralizedYoungWalls(n, La)
The crystal \(\mathcal{Y}(\lambda)\) of generalized Young walls of the given type with highest weight \(\lambda\).
INPUT:
``n`` -- type \(A_n^{(1)}\)

``weight`` -- dominant integral weight
EXAMPLES:
sage: La = RootSystem(['A',3,1]).weight_lattice(extended=True).fundamental_weights()[1]
sage: YLa = crystals.GeneralizedYoungWalls(3,La)
sage: y = YLa([[0],[1,0,3,2,1],[2,1,0],[3]])
sage: y.pp()
3|
0|1|2|
1|2|3|0|1|
0|
sage: y.weight()
-Lambda[0] + Lambda[2] + Lambda[3] - 3*delta
sage: y.in_highest_weight_crystal(La)
True
sage: y.f(1)
[[0], [1, 0, 3, 2, 1], [2, 1, 0], [3], [], [1]]
sage: y.f(1).f(1)
sage: yy = crystals.infinity.GeneralizedYoungWalls(3)([[0], [1, 0, 3, 2, 1], [2, 1, 0], [3], [], [1]])
sage: yy.f(1)
[[0], [1, 0, 3, 2, 1], [2, 1, 0], [3], [], [1], [], [], [], [1]]
sage: yyy = yy.f(1)
sage: yyy.in_highest_weight_crystal(La)
False
sage: LS = crystals.LSPaths(['A',3,1],[1,0,0,0])
sage: C = LS.subcrystal(max_depth=4)
sage: G = LS.digraph(subset=C)
sage: P = RootSystem(['A',3,1]).weight_lattice(extended=True)
sage: La = P.fundamental_weights()
sage: YW = crystals.GeneralizedYoungWalls(3,La[0])
sage: CW = YW.subcrystal(max_depth=4)
sage: GW = YW.digraph(subset=CW)
sage: GW.is_isomorphic(G,edge_labels=True)
True
To display the crystal down to a specified depth:
sage: S = YLa.subcrystal(max_depth=4)
sage: G = YLa.digraph(subset=S)
sage: view(G) # not tested

class sage.combinat.crystals.generalized_young_walls.CrystalOfGeneralizedYoungWallsElement(parent, data)
Element of the highest weight crystal of generalized Young walls.
e(i)
Compute the action of \(e_i\) restricted to the highest weight crystal.
EXAMPLES:
sage: La = RootSystem(['A',2,1]).weight_lattice(extended=True).fundamental_weights()[1]
sage: hwy = crystals.GeneralizedYoungWalls(2,La)([[],[1,0],[2,1]])
sage: hwy.e(1)
[[], [1, 0], [2]]
sage: hwy.e(2)
sage: hwy.e(3)
f(i)
Compute the action of \(f_i\) restricted to the highest weight crystal.
EXAMPLES:
sage: La = RootSystem(['A',2,1]).weight_lattice(extended=True).fundamental_weights()[1]
sage: GYW = crystals.infinity.GeneralizedYoungWalls(2)
sage: y = GYW([[],[1,0],[2,1]])
sage: y.f(1)
[[], [1, 0], [2, 1], [], [1]]
sage: hwy = crystals.GeneralizedYoungWalls(2,La)([[],[1,0],[2,1]])
sage: hwy.f(1)
phi(i)

Return the value \(\varepsilon_i(Y) + \langle h_i, \mathrm{wt}(Y)\rangle\), where \(h_i\) is the \(i\)-th simple coroot and \(Y\) is ``self``.
EXAMPLES:
sage: La = RootSystem(['A',3,1]).weight_lattice(extended=True).fundamental_weights()
sage: y = crystals.GeneralizedYoungWalls(3,La[0])([])
sage: y.phi(1)
0
sage: y.phi(2)
0
weight()

Return the weight of ``self`` in the highest weight crystal as an element of the weight lattice \(\bigoplus_{i=0}^n \ZZ \Lambda_i\).
EXAMPLES:
sage: La = RootSystem(['A',2,1]).weight_lattice(extended=True).fundamental_weights()[1]
sage: hwy = crystals.GeneralizedYoungWalls(2,La)([[],[1,0],[2,1]])
sage: hwy.weight()
Lambda[0] - Lambda[1] + Lambda[2] - delta

class sage.combinat.crystals.generalized_young_walls.GeneralizedYoungWall(parent, data)
A generalized Young wall.
For more information, see ``InfinityCrystalOfGeneralizedYoungWalls``.
EXAMPLES:
sage: Y = crystals.infinity.GeneralizedYoungWalls(4)
sage: mg = Y.module_generators[0]; mg.pp()
0
sage: mg.f_string([1,2,0,1]).pp()
1|2|
0|1|
|
Epsilon()

Return \(\sum_{i=0}^n \varepsilon_i(Y) \Lambda_i\) where \(Y\) is ``self``.
EXAMPLES:
sage: y = crystals.infinity.GeneralizedYoungWalls(3)([[0],[1,0,3,2],[2,1],[3,2,1,0,3,2],[0],[],[2]])
sage: y.Epsilon()
Lambda[0] + 3*Lambda[2]
Phi()

Return \(\sum_{i=0}^n \varphi_i(Y) \Lambda_i\) where \(Y\) is ``self``.
EXAMPLES:
sage: y = crystals.infinity.GeneralizedYoungWalls(3)([[0],[1,0,3,2],[2,1],[3,2,1,0,3,2],[0],[],[2]])
sage: y.Phi()
-Lambda[0] + 3*Lambda[1] - Lambda[2] + 3*Lambda[3]
sage: x = crystals.infinity.GeneralizedYoungWalls(3)([[],[1,0,3,2],[2,1],[3,2,1,0,3,2],[],[],[2]])
sage: x.Phi()
2*Lambda[0] + Lambda[1] - Lambda[2] + Lambda[3]
a(i, k)

Return the number \(a_i(k)\) of \(i\)-colored boxes in the \(k\)-th column of ``self``.
EXAMPLES:
sage: y = crystals.infinity.GeneralizedYoungWalls(3)([[0],[1,0,3,2],[2,1],[3,2,1,0,3,2],[0],[],[2]])
sage: y.a(1,2)
1
sage: y.a(0,2)
1
sage: y.a(3,2)
0
column(k)

Return the list of boxes from the \(k\)-th column of ``self``.
EXAMPLES:
sage: y = crystals.infinity.GeneralizedYoungWalls(3)([[0],[1,0,3,2],[2,1],[3,2,1,0,3,2],[0],[],[2]])
sage: y.column(2)
[None, 0, 1, 2, None, None, None]
sage: hw = crystals.infinity.GeneralizedYoungWalls(5)([])
sage: hw.column(1)
[]
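In this nested-list representation each row lists its boxes starting from the rightmost column, so the \(k\)-th column can be read off by indexing each row. A plain-Python sketch of the behavior (not the Sage implementation):

```python
def column(data, k):
    """Return the k-th column (1-indexed from the right) of a generalized
    Young wall given as a list of rows, each listed rightmost box first.
    Rows shorter than k contribute None, matching the Sage output."""
    return [row[k - 1] if len(row) >= k else None for row in data]

y = [[0], [1, 0, 3, 2], [2, 1], [3, 2, 1, 0, 3, 2], [0], [], [2]]
assert column(y, 2) == [None, 0, 1, 2, None, None, None]
```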
content()

Return the total number of blocks in ``self``.
EXAMPLES:
sage: y = crystals.infinity.GeneralizedYoungWalls(2)([[0],[1,0],[2,1,0,2],[],[1]])
sage: y.content()
8
sage: x = crystals.infinity.GeneralizedYoungWalls(3)([[],[1,0,3,2],[2,1],[3,2,1,0,3,2],[],[],[2]])
sage: x.content()
13
e(i)

Return the application of the Kashiwara raising operator \(e_i\) on ``self``.

This will remove the \(i\)-colored box corresponding to the rightmost \(+\) in ``self.signature(i)``.
EXAMPLES:
sage: x = crystals.infinity.GeneralizedYoungWalls(3)([[],[1,0,3,2],[2,1],[3,2,1,0,3,2],[],[],[2]])
sage: x.e(2)
[[], [1, 0, 3, 2], [2, 1], [3, 2, 1, 0, 3, 2]]
sage: _.e(2)
[[], [1, 0, 3], [2, 1], [3, 2, 1, 0, 3, 2]]
sage: _.e(2)
[[], [1, 0, 3], [2, 1], [3, 2, 1, 0, 3]]
sage: _.e(2)
epsilon(i)

Return the number of \(i\)-colored arrows in the \(i\)-string above ``self`` in the crystal graph.
EXAMPLES:
sage: y = crystals.infinity.GeneralizedYoungWalls(3)([[],[1,0,3,2],[2,1],[3,2,1,0,3,2],[],[],[2]])
sage: y.epsilon(1)
0
sage: y.epsilon(2)
3
sage: y.epsilon(0)
0
f(i)

Return the application of the Kashiwara lowering operator \(f_i\) on ``self``.

This will add an \(i\)-colored box to the site corresponding to the leftmost plus in ``self.signature(i)``.
EXAMPLES:
sage: hw = crystals.infinity.GeneralizedYoungWalls(2)([])
sage: hw.f(1)
[[], [1]]
sage: _.f(2)
[[], [1], [2]]
sage: _.f(0)
[[], [1, 0], [2]]
sage: _.f(0)
[[0], [1, 0], [2]]
generate_signature(i)

The \(i\)-signature of ``self`` (with whitespace where cancellation occurs) together with the unreduced sequence from \(\{+,-\}\). The result also records the row and column position of each sign.
EXAMPLES:
sage: y = crystals.infinity.GeneralizedYoungWalls(2)([[0],[1,0],[2,1,0,2],[],[1]])
sage: y.generate_signature(1)
([['+', 2, 5], ['-', 4, 1]], ' ')
in_highest_weight_crystal(La)

Return a boolean indicating if the generalized Young wall element is in the highest weight crystal cut out by the given highest weight ``La``.
By Theorem 4.1 of [KS2010], a generalized Young wall \(Y\) represents a vertex in the highest weight crystal \(Y(\lambda)\), with \(\lambda = \Lambda_{i_1} + \Lambda_{i_2} + \cdots + \Lambda_{i_\ell}\) a dominant integral weight of level \(\ell > 0\), if it satisfies the following condition. For each positive integer \(k\), if there exists \(j \in I\) such that \(a_j(k) - a_{j-1}(k) > 0\), then for some \(p = 1, \ldots, \ell\),\[j + k \equiv i_p + 1 \bmod n+1 \text{ and } a_j(k) - a_{j-1}(k) \le \lambda(h_{i_p}),\]
where \(\{h_0, h_1, \ldots, h_n\}\) is the set of simple coroots attached to \(A_n^{(1)}\).
EXAMPLES:
sage: La = RootSystem(['A',2,1]).weight_lattice(extended=True).fundamental_weights()[1]
sage: GYW = crystals.infinity.GeneralizedYoungWalls(2)
sage: y = GYW([[],[1,0],[2,1]])
sage: y.in_highest_weight_crystal(La)
True
sage: x = GYW([[],[1],[2],[],[],[2],[],[],[2]])
sage: x.in_highest_weight_crystal(La)
False
latex_large()

Generate LaTeX code for ``self``, but with larger output. Requires TikZ.
EXAMPLES:
sage: x = crystals.infinity.GeneralizedYoungWalls(3)([[],[1,0,3,2],[2,1],[3,2,1,0,3,2],[],[],[2]])
sage: x.latex_large()
'\\begin{tikzpicture}[baseline=5,scale=.45] \n \\foreach \\x [count=\\s from 0] in \n{{},{1,0,3,2},{2,1},{3,2,1,0,3,2},{},{},{2}} \n{\\foreach \\y [count=\\t from 0] in \\x { \\node[font=\\scriptsize] at (-\\t,\\s) {$\\y$}; \n \\draw (-\\t+.5,\\s+.5) to (-\\t-.5,\\s+.5); \n \\draw (-\\t+.5,\\s-.5) to (-\\t-.5,\\s-.5); \n \\draw (-\\t-.5,\\s-.5) to (-\\t-.5,\\s+.5); } \n \\draw[-,thick] (.5,\\s+1) to (.5,-.5) to (-\\t-1,-.5); } \n \\end{tikzpicture} \n'
number_of_parts()

Return the value of \(\mathscr{N}\) on ``self``.
In [KLRS2016], the statistic \(\mathscr{N}\) was defined on elements in \(\mathcal{Y}(\infty)\) which counts how many parts are in the corresponding Kostant partition. Specifically, the computation of \(\mathscr{N}(Y)\) is done using the following algorithm:
- If \(Y\) has no rows whose right-most box is colored \(n\) and such that the length of this row is a multiple of \(n+1\), then \(\mathscr{N}(Y)\) is the total number of distinct rows in \(Y\), not counting multiplicity.
- Otherwise, search \(Y\) for the longest row such that the right-most box is colored \(n\) and such that the total number of boxes in the row is \(k(n+1)\) for some \(k\ge 1\). Replace this row by \(n+1\) distinct rows of length \(k\), reordering all rows, if necessary, so that the result is a proper wall. (Note that the resulting wall may no longer be reduced.) Repeat the search and replace process for all other rows of the above form for each \(k' < k\). Then \(\mathscr{N}(Y)\) is the number of distinct rows, not counting multiplicity, in the wall resulting from this process.
EXAMPLES:
sage: Y = crystals.infinity.GeneralizedYoungWalls(3)
sage: y = Y([[0],[],[],[],[0],[],[],[],[0]])
sage: y.number_of_parts()
1
sage: Y = crystals.infinity.GeneralizedYoungWalls(3)
sage: y = Y([[0,3,2],[1,0],[],[],[0,3],[1,0],[],[],[0]])
sage: y.number_of_parts()
4
sage: Y = crystals.infinity.GeneralizedYoungWalls(2)
sage: y = Y([[0,2,1],[1,0],[2,1,0,2,1,0,2,1,0],[],[2,1,0,2,1,0]])
sage: y.number_of_parts()
8
phi(i)

Return the value \(\varepsilon_i(Y) + \langle h_i, \mathrm{wt}(Y)\rangle\), where \(h_i\) is the \(i\)-th simple coroot and \(Y\) is ``self``.
EXAMPLES:
sage: y = crystals.infinity.GeneralizedYoungWalls(3)([[0],[1,0,3,2],[2,1],[3,2,1,0,3,2],[0],[],[2]])
sage: y.phi(1)
3
sage: y.phi(2)
-1
pp()

Return an ASCII drawing of ``self``.
EXAMPLES:
sage: y = crystals.infinity.GeneralizedYoungWalls(2)([[0,2,1],[1,0,2,1,0],[],[0],[1,0,2],[],[],[1]])
sage: y.pp()
1|
|
|
2|0|1|
0|
|
0|1|2|0|1|
1|2|0|
raw_signature(i)

Return the sequence from \(\{+,-\}\) obtained from all \(i\)-admissible slots and removable \(i\)-boxes without canceling any \((+,-)\)-pairs. The result also notes the row and column of each sign.
EXAMPLES:
sage: x = crystals.infinity.GeneralizedYoungWalls(3)([[],[1,0,3,2],[2,1],[3,2,1,0,3,2],[],[],[2]])
sage: x.raw_signature(2)
[['-', 3, 6], ['-', 1, 4], ['-', 6, 1]]
signature(i)

Return the \(i\)-signature of ``self``.

The signature is obtained by reading ``self`` in columns bottom to top, starting from the left. Add a \(-\) at every \(i\)-box which may be removed from ``self`` while still leaving a legal generalized Young wall, and add a \(+\) at each site where an \(i\)-box may be added while still yielding a valid generalized Young wall. Then successively cancel \((+,-)\)-pairs to obtain a sequence of the form \(- \cdots -+ \cdots +\). This resulting sequence is the output.
EXAMPLES:
sage: y = crystals.infinity.GeneralizedYoungWalls(2)([[0],[1,0],[2,1,0,2],[],[1]])
sage: y.signature(1)
''
sage: x = crystals.infinity.GeneralizedYoungWalls(3)([[],[1,0,3,2],[2,1],[3,2,1,0,3,2],[],[],[2]])
sage: x.signature(2)
'---'
sum_of_weighted_row_lengths()

Return the value of \(\mathscr{M}\) on ``self``.
Let \(\mathcal{Y}_0 \subset \mathcal{Y}(\infty)\) be the set of generalized Young walls which have no rows whose right-most box is colored \(n\). For \(Y \in \mathcal{Y}_0\),\[\mathscr{M}(Y) = \sum_{i=1}^n (i+1)M_i(Y),\]
where \(M_i(Y)\) is the number of nonempty rows in \(Y\) whose right-most box is colored \(i-1\).
EXAMPLES:
sage: Y = crystals.infinity.GeneralizedYoungWalls(2)
sage: y = Y([[0,2,1,0,2],[1,0,2],[],[0,2],[1,0],[],[0],[1,0]])
sage: y.sum_of_weighted_row_lengths()
15
weight(root_lattice=False)

Return the weight of ``self``.
INPUT:
``root_lattice`` -- boolean; if ``True``, return the weight as an element of the root lattice instead of the extended affine weight lattice.
EXAMPLES:
sage: x = crystals.infinity.GeneralizedYoungWalls(3)([[],[1,0,3,2],[2,1],[3,2,1,0,3,2],[],[],[2]])
sage: x.weight()
2*Lambda[0] + Lambda[1] - 4*Lambda[2] + Lambda[3] - 2*delta
sage: x.weight(root_lattice=True)
-2*alpha[0] - 3*alpha[1] - 5*alpha[2] - 3*alpha[3]

class sage.combinat.crystals.generalized_young_walls.InfinityCrystalOfGeneralizedYoungWalls(n, category)
The crystal \(\mathcal{Y}(\infty)\) of generalized Young walls of type \(A_n^{(1)}\) as defined in [KS2010].
A generalized Young wall is a collection of boxes stacked on a fixed board, such that the color of the box at the site located in the \(j\)-th row from the bottom and the \(i\)-th column from the right is \(j-1 \bmod n+1\). There are several growth conditions on elements \(Y \in \mathcal{Y}(\infty)\):
- Walls grow in rows from right to left. That is, for every box \(y\in Y\) that is not in the rightmost column, there must be a box immediately to the right of \(y\).
- For all \(p>q\) such that \(p-q \equiv 0 \bmod n+1\), the \(p\)-th row has at most as many boxes as the \(q\)-th row.
- There does not exist a column in the wall such that if one \(i\)-colored box, for every \(i = 0,1,\ldots,n\), is removed from that column, then the result satisfies the above conditions.
There is a crystal structure on \(\mathcal{Y}(\infty)\) defined as follows. Define maps\[e_i,\ f_i \colon \mathcal{Y}(\infty) \longrightarrow \mathcal{Y}(\infty) \sqcup \{0\}, \qquad \varepsilon_i,\ \varphi_i \colon \mathcal{Y}(\infty) \longrightarrow \ZZ, \qquad \mathrm{wt}\colon \mathcal{Y}(\infty) \longrightarrow \bigoplus_{i=0}^n \ZZ \Lambda_i \oplus \ZZ \delta,\]
by\[\mathrm{wt}(Y) = -\sum_{i=0}^n m_i(Y) \alpha_i,\]
where \(m_i(Y)\) is the number of \(i\)-boxes in \(Y\), \(\varepsilon_i(Y)\) is the number of \(-\) in the \(i\)-signature of \(Y\), and\[\varphi_i(Y) = \varepsilon_i(Y) + \langle h_i, \mathrm{wt}(Y) \rangle.\]
INPUT:
``n`` -- type \(A_n^{(1)}\)
EXAMPLES:
sage: Yinf = crystals.infinity.GeneralizedYoungWalls(3)
sage: y = Yinf([[0],[1,0,3,2],[],[3,2,1],[0],[1,0]])
sage: y.pp()
0|1|
0|
1|2|3|
|
2|3|0|1|
0|
sage: y.weight(root_lattice=True)
-4*alpha[0] - 3*alpha[1] - 2*alpha[2] - 2*alpha[3]
sage: y.f(0)
[[0], [1, 0, 3, 2], [], [3, 2, 1], [0], [1, 0], [], [], [0]]
sage: y.e(0).pp()
0|1|
|
1|2|3|
|
2|3|0|1|
0|
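The weight formula \(\mathrm{wt}(Y) = -\sum_{i} m_i(Y) \alpha_i\) can be checked directly against the doctest values by counting boxes of each color in the nested-list data. A plain-Python sketch (not the Sage implementation):

```python
def root_lattice_weight(data, n):
    """Coefficients of alpha_0, ..., alpha_n in wt(Y), i.e. -m_i(Y) where
    m_i(Y) is the number of i-colored boxes, for a wall given as rows."""
    counts = [0] * (n + 1)
    for row in data:
        for color in row:
            counts[color] += 1
    return [-m for m in counts]  # wt(Y) = -sum_i m_i(Y) alpha_i

y = [[0], [1, 0, 3, 2], [], [3, 2, 1], [0], [1, 0]]
# matches the doctest: -4*alpha[0] - 3*alpha[1] - 2*alpha[2] - 2*alpha[3]
assert root_lattice_weight(y, 3) == [-4, -3, -2, -2]
```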
To display the crystal down to depth 3:
sage: S = Yinf.subcrystal(max_depth=3)
sage: G = Yinf.digraph(subset=S) # long time
sage: view(G) # not tested
In all references I saw so far it was claimed that this technology has no limits on the height of the buildings.
This statement is more or less true.
hazzey's answer has already done a good job of summarizing the
actual limitations of building height - i.e., the factors that, in any real application, control the decision of how many storeys to build a building. However, there is still the question of how high a structure could be, assuming we were able to ignore all of these other factors.
If we make a simplifying (and very naive) assumption that the only limitation of the height of a structure is the compressive strength of the concrete itself, and also that the only load being carried by the concrete is the load resulting from the weight of the vertical monolithic concrete column above (there are no live loads, or load transfers; the building is essentially a massive block of reinforced concrete), the calculation is fairly straightforward.
Unit weight of concrete: $$\gamma_c=150\frac{\text{lbf}}{\text{ft}^3}$$ Compressive strength of concrete (high performance concrete): $$f'_c=20{,}000\frac{\text{lbf}}{{\text{in}}^2}$$ Stress carried by concrete at the bottom: $$f=H_{c}\gamma_c$$ Set $f=f'_c$ and solve for the maximum height: $$H_{max}=\frac{f'_c}{\gamma_c}=\frac{20{,}000\ \text{psi}}{150\ \text{pcf}}=19{,}200\ \text{ft}$$
This is so high (3.64 mi, or 5.85 km) that the acceleration due to gravity would be noticeably different at the top of the structure; the unit weight of concrete at the top would be roughly 99.82% of what it is at the bottom - that is, about 149.73 pcf.
Additionally, the incredible stress applied to the concrete would result in appreciable strains. One equation for the modulus of elasticity of high strength concrete (from ACI) is:
$E_c=40,000\sqrt{f'_c}+1\times 10^6 \text{psi}=6,657\text{ksi}=45.9\text{GPa}$
According to Hooke's Law, the maximum strain at the bottom of the structure would be around 0.3%:
$\varepsilon_{max}=\frac{f'_c}{E_c}=0.3\%$
To find the total shortening over the entire height of the structure, we integrate the strain:
$$\int_{0}^{H_c}\frac{f(z)}{E_c}\text{d}z=28.8\text{ft}$$ where $f(z)=\gamma_cz\cdot g(z)$ (gravity, $g$ is a function of height $z$).
This means the reduced height of the structure after taking into account concrete strain would be around 19170 ft (3.63 mi, or 5.84 km).
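The arithmetic above is easy to reproduce (a sketch in consistent lbf/ft units; for simplicity it ignores the variation of gravity with height, which changes the shortening only in the second decimal place):

```python
gamma = 150.0                 # unit weight of concrete, lbf/ft^3
fc_psi = 20_000.0             # compressive strength, lbf/in^2
fc_psf = fc_psi * 144.0       # convert to lbf/ft^2

H_max = fc_psf / gamma        # height at which the bottom stress reaches f'c

E_psi = 40_000.0 * fc_psi ** 0.5 + 1e6  # ACI modulus for high-strength concrete
E_psf = E_psi * 144.0

# Total elastic shortening: integral of (gamma*z/E) dz from 0 to H = gamma*H^2/(2E)
shortening = gamma * H_max ** 2 / (2.0 * E_psf)
# H_max = 19,200 ft; shortening is about 28.8 ft
```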
According to this article from Construction Week Online, at 92 storeys (423 m, or 1388 ft) Trump International Hotel and Tower is currently the world's tallest concrete building (by their definition), and it is the 9th tallest building in the world. This is around 7% of the height possible (as defined by the simplified analysis above). Although the simplified analysis ignores all sorts of practical considerations and includes no safety factors, it is at least somewhat instructive as to what might be possible using high performance reinforced concrete.
Suppose I have a 2D array
M[n][n] of integers (in fact, binary is fine, but I doubt it matters). I am interested in repeated queries of the form: given a coordinate pair $k,l$, what is$$\sum_{i = 0}^{k-1} \sum_{j = 0}^{l-1} M[i][j]?$$Of course, all these values can be computed in $\mathcal O(n^2)$ time total, and after that queries take $\mathcal O(1)$. However, my array is mutable, and each time I change a value, the obvious solution requires a $\mathcal O(n^2)$ update.
We can create a quad tree over
M; the preprocessing takes $\mathcal O(n^2\log(n))$, and this allows us to do queries in $\mathcal O(n\log(n))$, and updates in $\mathcal O(\log(n))$.
My question is:
Can we improve significantly on the queries without sacrificing too much on the updates?
I am especially interested in getting both the update and query operations sub-linear, and in particular getting them both to $\mathcal O(n^\epsilon)$.
Edit: for some more information, although I think the problem is interesting even without this further restriction, I expect to do roughly $\mathcal O(n^3)$ queries, and about $\mathcal O(n^2)$ updates. The ideal goal is to get the full runtime down to about $\mathcal O(n^{3+\epsilon})$. Thus, a situation where an update takes $\mathcal O(n \log(n))$ while a query takes $\mathcal O(\log(n))$ would also be interesting to me. |
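For reference, one standard structure (not mentioned in the question) that achieves polylogarithmic bounds for both operations is a 2D Fenwick (binary indexed) tree: point updates and prefix-rectangle queries each cost $\mathcal O(\log^2 n)$, well within the $\mathcal O(n^\epsilon)$ target. A sketch:

```python
class Fenwick2D:
    """2D binary indexed tree: point add and prefix-rectangle sum,
    both in O(log^2 n)."""
    def __init__(self, n):
        self.n = n
        self.tree = [[0] * (n + 1) for _ in range(n + 1)]

    def add(self, i, j, delta):
        """Add delta to M[i][j] (0-indexed)."""
        x = i + 1
        while x <= self.n:
            y = j + 1
            while y <= self.n:
                self.tree[x][y] += delta
                y += y & -y
            x += x & -x

    def prefix_sum(self, k, l):
        """Sum of M[i][j] over 0 <= i < k, 0 <= j < l."""
        total = 0
        x = k
        while x > 0:
            y = l
            while y > 0:
                total += self.tree[x][y]
                y -= y & -y
            x -= x & -x
        return total

f = Fenwick2D(4)
f.add(0, 0, 1); f.add(1, 2, 5); f.add(3, 3, 2)
assert f.prefix_sum(2, 3) == 6  # covers (0,0) and (1,2)
```

Under the stated workload ($\mathcal O(n^3)$ queries, $\mathcal O(n^2)$ updates) this gives a total of $\mathcal O(n^3\log^2 n)$.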
Current creates a magnetic field, which subsequently exerts force on other current-bearing structures. For example, the current in each winding of a coil exerts a force on every other winding of the coil. If the windings are fixed in place, then this force is unable to do work (i.e., move the windings), so instead the coil stores potential energy. This potential energy can be released by turning off the external source. When this happens, charge continues to flow, but is now propelled by the magnetic force. The potential energy that was stored in the coil is converted to kinetic energy and subsequently used to redistribute the charge until no current flows. At this point, the inductor has expended its stored energy. To restore energy, the external source must be turned back on, restoring the flow of charge and thereby restoring the magnetic field.
Now recall that the magnetic field is essentially defined in terms of the force associated with this potential energy; i.e., \({\bf F} = q{\bf v} \times {\bf B}\) where \(q\) is the charge of a particle comprising the current, \({\bf v}\) is the velocity of the particle, and \({\bf B}\) is magnetic flux density (Section 2.5). So, rather than thinking of the potential energy of the system as being associated with the magnetic force applied to current, it is equally valid to think of the potential energy as being stored in the magnetic field associated with the current distribution. The energy stored in the magnetic field depends on the geometry of the current-bearing structure and the permeability of the intervening material because the magnetic field depends on these parameters.
The relationship between current applied to a structure and the energy stored in the associated magnetic field is what we mean by
inductance. We may fairly summarize this insight as follows:
Inductance is the ability of a structure to store energy in a magnetic field.
The inductance of a structure depends on the geometry of its current-bearing structures and the permeability of the intervening medium.
Note that inductance does
not depend on current, which we view as either a stimulus or response from this point of view. The corresponding response or stimulus, respectively, is the magnetic flux associated with this current. This leads to the following definition:
\[L = \frac{\Phi}{I} ~~\mbox{(single linkage)} \label{m0123_Ldef}\]
where \(\Phi\) (units of Wb) is magnetic flux, \(I\) (units of A) is the current responsible for this flux, and \(L\) (units of H) is the associated inductance. (The “single linkage” caveat will be explained below.) In other words, a device with high inductance generates a large magnetic flux in response to a given current, and therefore stores more energy for a given current than a device with lower inductance.
To use Equation \ref{m0123_Ldef} we must carefully define what we mean by “magnetic flux” in this case. Generally, magnetic flux is magnetic flux density (again, \({\bf B}\), units of Wb/m\(^2\)) integrated over a specified surface \({\mathcal S}\), so
\[\Phi = \int_{\mathcal S}{\bf B}\cdot d{\bf s}\]
where \(d{\bf s}\) is the differential surface area vector, with direction normal to \({\mathcal S}\). However, this leaves unanswered the following questions: Which \({\mathcal S}\), and which of the two possible normal directions of \(d{\bf s}\)? For a meaningful answer, \({\mathcal S}\) must uniquely associate the magnetic flux to the associated current. Such an association exists if we require the current to form a closed loop. This is shown in Figure \(\PageIndex{1}\). Here \({\mathcal C}\) is the closed loop along which the current flows, \({\mathcal S}\) is a surface bounded by \({\mathcal C}\), and the direction of \(d{\bf s}\) is defined according to the right-hand rule of Stokes’ Theorem (Section 4.9). Note that \({\mathcal C}\) can be a closed loop of
any shape; i.e., not just circular, and not restricted to lying in a plane. Further note that \({\mathcal S}\) used in the calculation of \(\Phi\) can be any surface bounded by \({\mathcal C}\). This is because magnetic field lines form closed loops such that any one magnetic field line intersects any open surface bounded by \({\mathcal C}\) exactly once. Such an intersection is sometimes called a “linkage.” So there we have it – we require the current \(I\) to form a closed loop, we measure the magnetic flux through this loop using the sign convention of the right-hand rule, and the ratio is the inductance.
Figure \(\PageIndex{1}\): Association between a closed loop of current and the associated magnetic flux.© CC BY SA 4.0
Many structures consist of multiple such loops – the coil is of course one of these. In a coil, each winding carries the same current, and the magnetic fields of the windings add to create a magnetic field, which grows in proportion to the winding density (Section 7.6). The magnetic flux density inside a coil is proportional to the number of windings, \(N\), so the flux \(\Phi\) in Equation \ref{m0123_Ldef} should properly be indicated as \(N\Phi\). Another way to look at this is that we are counting the number of times the same current is able to generate a unique set of magnetic field lines that intersect \({\mathcal S}\).
Summarizing, our complete definition for inductance is \[\boxed{ L = \frac{N\Phi}{I}~~\mbox{(identical linkages)} } \label{m0123_Ldef2}\]
An engineering definition of inductance is Equation \ref{m0123_Ldef2}, with the magnetic flux defined to be that associated with a single closed loop of current with sign convention as indicated in Figure \(\PageIndex{1}\), and \(N\) defined to be the number of times the same current \(I\) is able to create that flux.
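As a quick numerical illustration of the geometry-dependence noted above, consider the classical long-solenoid estimate \(L = \mu_0 N^2 A/l\) (a standard approximation valid when length is much greater than radius; it is not derived in this section, and the dimensions below are made up):

```python
import math

# Long-solenoid estimate L = mu_0 * N^2 * A / l (length >> radius assumed).
mu_0 = 4e-7 * math.pi            # permeability of free space [H/m]
N = 100                          # number of windings
radius, length = 0.005, 0.10     # 5 mm radius, 10 cm length (illustrative)
A = math.pi * radius**2          # cross-sectional area [m^2]
L = mu_0 * N**2 * A / length     # inductance [H]
print(L * 1e6)                   # ~9.87 microhenries
```

Doubling the winding count \(N\) quadruples \(L\), consistent with the \(N\Phi\) linkage argument above.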
What happens if the loops have different shapes? For example, what if the coil is not a cylinder, but rather cone-shaped? (Yes, there is such a thing – see “Additional Reading” at the end of this section.) In this case, one needs a better way to determine the factor \(N\Phi\) since the flux associated with each loop of current will be different. However, this is beyond the scope of this section.
An
inductor is a device that is designed to exhibit a specified inductance. We can now make the connection to the concept of the inductor as it appears in elementary circuit theory. First, we rewrite Equation \ref{m0123_Ldef2} as follows:
\[I = \frac{N\Phi}{L}\]
Taking the derivative of both sides of this equation with respect to time, we obtain:
\[\frac{d}{dt}I = \frac{N}{L}\frac{d}{dt}\Phi \label{m0123_edIdt}\]
Now we need to reach beyond the realm of magnetostatics for just a moment. Section 8.3 (“Faraday’s Law”) shows that the change in \(\Phi\) associated with a change in current results in the creation of an electrical potential equal to \(-Nd\Phi/dt\) realized over the loop \({\mathcal C}\). In other words, the terminal voltage \(V\) is \(+Nd\Phi/dt\), with the change of sign intended to keep the result consistent with the sign convention relating current and voltage in passive devices. Therefore, \(d\Phi/dt\) in Equation \ref{m0123_edIdt} is equal to \(V/N\). Making the substitution we find:
\[V = L\frac{d}{dt}I \label{m0125_eLCT}\]
This is the expected relationship from elementary circuit theory.
Another circuit theory concept related to inductance is
mutual inductance. Whereas inductance relates changes in current to instantaneous voltage in the same device (Equation \ref{m0125_eLCT}), mutual inductance relates changes in current in one device to instantaneous voltage in a different device. This can occur when the two devices are coupled (“linked”) by the same magnetic field. For example, transformers (Section 8.5) typically consist of separate coils that are linked by the same magnetic field lines. The voltage across one coil may be computed as the time-derivative of current on the other coil times the mutual inductance.
Let us conclude this section by taking a moment to dispel a common misconception about inductance. The misconception pertains to the following question. If the current does not form a closed loop, what is the inductance? For example, engineers sometimes refer to the inductance of a pin or lead of an electronic component. A pin or lead is not a closed loop, so the formal definition of inductance given above – ratio of magnetic flux to current – does not apply. The broader definition of inductance – the ability to store energy in a magnetic field – does apply, but this is
not what is meant by “pin inductance” or “lead inductance.” What is actually meant is the imaginary part of the impedance of the pin or lead – i.e., the reactance – expressed as an equivalent inductance. In other words, the reactance of an inductive device is positive, so any device that also exhibits a positive reactance can be viewed from a circuit theory perspective as an equivalent inductance. This is not referring to the storage of energy in a magnetic field; it merely means that the device can be modeled as an inductor in a circuit diagram. In the case of “pin inductance,” the culprit is not actually inductance, but rather skin effect (see “Additional References” at the end of this section). Summarizing:
Inductance implies positive reactance, but positive reactance does not imply the physical mechanism of inductance.
Contributors
Ellingson, Steven W. (2018) Electromagnetics, Vol. 1. Blacksburg, VA: VT Publishing. https://doi.org/10.21061/electromagnetics-vol-1 Licensed with CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0. |
I am looking for a counterexample to the Sobolev embedding theorem, i.e. I am seeking a Sobolev function $u\in W_0^{1,p}(\Omega),\,p\in[1,n)$, where $\Omega$ is a bounded domain in $\mathbb{R}^n$, such that the inequality $||u||_{L^q}\leq C||\nabla u||_{L^p}$ does
not hold for $q>p^*:=\frac{np}{n-p}$. A similar question was asked in the past, but with $\mathbb{R}^n$ as the domain and with the space $W^{1,p}(\Omega)$; see link
You don't need any special function to reach this conclusion.
Any domain contains a ball; so it's enough to consider a ball, which may as well be centered at $0$. Take any smooth function $f$ with nonzero $L^q$ norm supported in this ball. Consider the sequence $f_k(x)=f(kx)$ and use the change of variables $y=kx$ to show that $$ \|f_k\|_{L^q} = k^{-n/q} \|f\|_{L^q},\quad \|\nabla f_k\|_{L^p} = k^{1-n/p} \|\nabla f\|_{L^p} $$ The conclusion follows since $-\frac{n}{q} > 1-\frac{n}{p}$. |
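For completeness, the change of variables can be written out (a sketch; $f$ is assumed smooth and supported in the ball, so $\nabla f_k(x) = k(\nabla f)(kx)$):

```latex
\|f_k\|_{L^q}^q = \int |f(kx)|^q\,dx
  \overset{y=kx}{=} k^{-n}\int |f(y)|^q\,dy
  \;\Longrightarrow\;
  \|f_k\|_{L^q} = k^{-n/q}\,\|f\|_{L^q},
\qquad
\|\nabla f_k\|_{L^p}^p = k^{p}\int |(\nabla f)(kx)|^p\,dx
  = k^{p-n}\,\|\nabla f\|_{L^p}^p .
```

Hence the ratio $\|f_k\|_{L^q}/\|\nabla f_k\|_{L^p}$ scales like $k^{-n/q-(1-n/p)}$, which blows up as $k\to\infty$ exactly when $-\tfrac{n}{q} > 1-\tfrac{n}{p}$, i.e. when $q > p^*$.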
In his 1905 paper, Einstein derives the Lorentz transformation using the two postulates of SR: the constancy of $c$ for all inertial frames and the invariance of the laws of physics for all inertial frames.
I'll summarize his mathematical derivation and then ask one specific question about it.
So we consider two frames $(x,y,z,t)$ and $(\xi,η, ζ,\tau)$ in relative motion along the x-axis with velocity $v$, and we're interested in finding a spacetime transformation that relates their coordinates.
We consider some arbitrary point $x'=x-vt$. This point is at rest in $(\xi,η, ζ,\tau)$ since it moves with velocity $v$; therefore its coordinates $x',y,z$ are independent of time, in other words the distance between this point and the origin of $(\xi,η, ζ,\tau)$ is constant.
We consider the following scenario: emit a beam of light from the origin of $(\xi,η, ζ,\tau)$ at $\tau_0$, arriving at the point $x'$ at $\tau_1$, and then being reflected back to arrive at the origin of $(\xi,η, ζ,\tau)$ at $\tau_2$.
So that we have: $1/2(\tau_0+\tau_2)=\tau_1$. Since $\tau$ is a function of $(x,y,z,t)$ we have:
$\dfrac{1}{2}[\tau(0,0,0,t)+\tau(0,0,0,t+\dfrac{x'}{c-v}+\dfrac{x'}{c+v})]=\tau(x',0,0,t+\dfrac{x'}{c-v})$
Assuming that $x'$ is infinitesimally small, then Taylor expanding this equation and keeping terms to first order, we get:
$\dfrac{\partial \tau}{\partial x'}+\dfrac{v}{c^2-v^2}\dfrac{\partial\tau}{\partial t}=0$
Solving it then we have:
$\tau=a(t-\dfrac{v}{c^2-v^2}x')$
where $a$ is some unknown function of $v$ (in fact $a=1$).
Finally consider a beam of light emitted from the origin of $(\xi,η, ζ,\tau)$; its $\xi$ coordinate is given by $\xi=c\tau=ca(t-\dfrac{v}{c^2-v^2}x')$
In $(x,y,z,t)$ the travel time is given by $t=\dfrac{x'}{c-v}$; plugging in for $t$ we get:
$\xi=a\dfrac{c^2}{c^2-v^2}x'$
He then states:
Substituting for $x'$ its value, we obtain $\xi=a\dfrac{1}{\sqrt{1-v^2/c^2}}(x-vt)$ ...
My question is:
1) the three equations $\tau=a(t-\dfrac{v}{c^2-v^2}x')$ and $\xi=c\tau$ and $x'=x-vt$ when combined together gives :
$\xi=c\tau=ca(t-\dfrac{v}{c^2-v^2}x')=a\dfrac{c^2}{c^2-v^2}x'$, since $x'=x-vt$ by plugging in we get:
$\xi=a\dfrac{c^2}{c^2-v^2}(x-vt)=a\dfrac{1}{1-v^2/c^2}(x-vt)$ not $a\dfrac{1}{\sqrt{1-v^2/c^2}}(x-vt)$ .
But He says
Substituting for $x'$ its value, we obtain $\xi=a\dfrac{1}{\sqrt{1-v^2/c^2}}(x-vt)$ ...
All these equations are copied from Einstein's original paper, So what is wrong with my calculations that does not make it match up with that of Einstein? |
I'm trying to build a discrete Kalman Filter that fuses accelerometer (acceleration) and GPS (position, velocity) measurements. However, I'm finding that my filter can't properly track a constant-acceleration path:
My implementation uses the following standard signal model:
$A = \begin{pmatrix} 1 & \Delta t \\ 0 & 1 \end{pmatrix}, \quad B = \begin{pmatrix} \frac{1}{2}\Delta t^{2}\\ \Delta t \end{pmatrix}, \quad C = \begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} \quad D = \begin{pmatrix} 0\\0 \end{pmatrix}$
$x_{n+1}=Ax_{n}+Bu_{n}$
$y_{n} = Cx_{n}+Du_{n}$
Where $x = (r_x,v_x)^{T}$, the measurement $y=x$ uses the GPS position/velocity readings, $\Delta t$ is the sample time and the accelerometer reading is used as the control input $u$. My $Q$ and $R$ matrices are both constant, reflecting each sensor's minimum rated accuracy.
Is this poor tracking behavior to be expected, or is there something wrong with my implementation? I also understand that several alternative signal models can be used for this problem - for example, using the acceleration as an observable state instead of a control input. I've tried implementing this model too, but the filtered result is even noisier than what's shown above. Is one of these models more suitable than the other, and why? |
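For reference, here is a minimal sketch of the predict/update cycle for the model described in the question. The $\Delta t$, $Q$, $R$ values and the noise-free test trajectory are my own assumptions for illustration, not taken from the original post:

```python
import numpy as np

# State x = (position, velocity); accelerometer reading u is the control
# input, GPS supplies the full-state measurement y (C = I, D = 0).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])
Q = 1e-4 * np.eye(2)   # process noise covariance (assumed)
R = 1e-2 * np.eye(2)   # measurement noise covariance (assumed)

def kalman_step(x, P, u, y):
    # Predict using the accelerometer as control input
    x_pred = A @ x + B * u
    P_pred = A @ P @ A.T + Q
    # Update with the GPS measurement
    K = P_pred @ np.linalg.inv(P_pred + R)
    x_new = x_pred + K @ (y - x_pred)
    P_new = (np.eye(2) - K) @ P_pred
    return x_new, P_new

# Sanity check: with exact (noise-free) inputs, a correct implementation
# tracks a constant-acceleration path essentially perfectly.
a_true = 2.0
x, P = np.zeros(2), np.eye(2)
for n in range(1, 101):
    t = n * dt
    truth = np.array([0.5 * a_true * t**2, a_true * t])
    x, P = kalman_step(x, P, a_true, truth)
print(np.abs(x - truth).max())  # tracking error (small)
```

If this noise-free check fails, the bug is in the implementation rather than the model choice.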
Statistics - Fisher (Multiple Linear Discriminant Analysis | multivariate Gaussian) Table of Contents 1 - About
multivariate Gaussians.
Fisher first described this analysis using his Iris data set.
2 - Prerequisites 3 - Steps 3.1 - Density Function
Pictures of the Statistics - (Probability) Density Function (PDF) made with R:
Formulas:
<MATH> f(x)=\frac{1}{(2\pi)^{p/2}|\Sigma|^{1/2}} e^{\displaystyle - \frac{1}{2}(x-\mu)^T \Sigma ^{-1}(x-\mu) } </MATH>
This formula is just a generalization of the simple formula we had for a single variable; <math>\Sigma</math> is called the covariance matrix.
3.2 - Discriminant Function
The discriminant function tells you how to classify.
The idea of the discriminant function is to compute one of these discriminant functions for each of the classes, and then classify to the class for which it is largest.
If you go through the simplifications with linear algebra, you can do the cancellation and get the below formula:
<MATH> \delta_k(x) = x^T \Sigma ^{-1} \mu_k - \frac{1}{2} \mu^T_k \Sigma ^{-1} \mu_k + \log \pi_k </MATH>
where:
<math>x^T</math> is the transpose of the vector x containing all variables <math>\mu^T_k</math> is the transpose of the vector <math>\mu_k</math> containing all means
Despite its complex form, it is still a linear function in x, with a vector of coefficients.
Simplified form:
<MATH> \delta_k(x) = c_{k0} + c_{k1}x_1 + c_{k2}x_2 + c_{k3} x_3 + \dots + c_{kp} x_p </MATH>
where:
x is no longer written as a vector; the previous vector expression has been expanded term by term. <math>c_{k1}x_1 + \dots + c_{kp} x_p = x^T \Sigma ^{-1} \mu_k</math> <math>c_{k0}= - \frac{1}{2} \mu^T_k \Sigma ^{-1} \mu_k + \log \pi_k</math> 3.3 - Probabilities
Once we have estimates <math>\delta_k(x)</math>, we can turn these into estimates for class probabilities:
<MATH> \hat{Pr}(Y=k|X=x) = \frac {\displaystyle e ^{\displaystyle \hat{\delta}_k(x)} } {\displaystyle \sum_{l=1}^K e ^{\displaystyle \hat{\delta}_l(x)}} </MATH>
3.4 - Classification
So classifying to the largest <math>\hat{\delta}_k(x)</math> amounts to classifying to the class for which <math>\hat{Pr}(Y=k|X=x)</math> is largest.
When K = 2, we classify to class 2 if <math>\hat{Pr}(Y=k|X=x) >= 0.5</math> else to class 1.
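The pipeline above (discriminant functions, softmax probabilities, classification by the largest value) can be sketched in a few lines. The class means, shared covariance, and priors below are made-up illustration values, not estimates from any data set:

```python
import numpy as np

# Two classes (K = 2), two predictors (p = 2); parameters are assumed.
mu = np.array([[0.0, 0.0], [3.0, 3.0]])      # class means mu_k
Sigma = np.array([[1.0, 0.2], [0.2, 1.0]])   # shared covariance matrix
pi_k = np.array([0.5, 0.5])                  # class priors
Sinv = np.linalg.inv(Sigma)

def delta(x):
    # delta_k(x) = x^T Sigma^{-1} mu_k - (1/2) mu_k^T Sigma^{-1} mu_k + log pi_k
    return np.array([x @ Sinv @ m - 0.5 * m @ Sinv @ m + np.log(p)
                     for m, p in zip(mu, pi_k)])

def posterior(x):
    # Softmax of the discriminants gives Pr(Y = k | X = x)
    d = delta(x)
    e = np.exp(d - d.max())   # shift by the max for numerical stability
    return e / e.sum()

print(np.argmax(delta(np.array([0.2, -0.1]))))  # class 0: closest centroid
```

Classifying by the largest <math>\delta_k(x)</math> and by the largest posterior give the same answer, since the softmax is monotone in its arguments.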
4 - Illustration 4.1 - p = 2 and K = 3 classes
Here <math>\pi_1 = \pi_2 = \pi_3 = \frac{1}{3} </math> The dashed lines are known as the Bayes decision boundaries. Were they known, they would yield the fewest misclassification errors, among all possible classifiers.
4.2 - Discriminant Plot
When there are K classes, linear discriminant analysis can be viewed exactly in a K - 1 dimensional plot. Because it essentially classifies to the closest centroid, and they span a K - 1 dimensional plane. Even when K > 3, we can find the “best” 2-dimensional plane for visualizing the discriminant rule.
The three centroids actually lie in a plane (a two-dimensional subspace), a subspace of the four-dimensional space, and that's essentially the two-dimensional plot below that captures exactly what's important in terms of the classification. |
I have been trying to derive the Einstein equation from the Einstein-Hilbert action $$ S[g_{\mu \nu}] = \frac{1}{16 \pi} \int_M \text{d}^4x \sqrt{-g}R $$ The standard derivation states that the variation $\delta S =0$ when we vary the metric components. In this derivation, we use the fact that $\delta (\sqrt{-g}R)= \delta \sqrt{-g} R + \sqrt{-g}\delta R$. To me this seems quite obvious but I thought I would try and prove this.
My understanding is that if we have an action $$ S[q] = \int \text{d}t L(q,\dot{q},t)$$ we vary it as $$ \delta S[q] = \int \text{d}t \delta L(q,\dot{q},t)$$ so $$ \delta L= \frac{\partial L}{\partial q} \delta q + \frac{\partial L}{\partial \dot{q}} \delta \dot{q} \\ = \left( \frac{\partial L}{\partial q} - \frac{\text{d}}{\text{d}t} \frac{\partial L}{\partial \dot{q}} \right) \delta q + \frac{\text{d}}{\text{d}t} \left( \frac{\partial L}{\partial \dot{q}} \delta q \right)$$ As we integrate over $\delta L$, we ignore the last term as this produces a boundary term which we can take to vanish, therefore I say
$$ \delta L = \left( \frac{\partial L}{\partial q} - \frac{\text{d}}{\text{d}t} \frac{\partial L}{\partial \dot{q}} \right) \delta q $$
Okay, now let's say my Lagrangian is a product: $L(q,\dot{q},t) = f(q,\dot{q},t)g(q,\dot{q},t)$. Plugging this into the above formula for the variation, I have
$$ \frac{\partial L}{\partial q} = \frac{\partial f}{\partial q} g + \frac{\partial g }{\partial q} f $$
$$ \frac{\partial L}{\partial \dot{q}} = \frac{\partial f}{\partial \dot{q}}g + \frac{\partial g}{\partial \dot{q}} f$$
$$ \frac{\text{d}}{\text{d}t} \frac{\partial L}{\partial \dot{q}} = \left( \frac{\text{d}}{\text{d}t} \frac{\partial f}{\partial \dot{q}} \right) g + \left( \frac{\text{d}}{\text{d}t}\frac{\partial g}{\partial \dot{q}} \right) f + \frac{\partial f}{\partial \dot{q}} \frac{\text{d}g}{\text{d}t} + \frac{\partial g}{\partial \dot{q}} \frac{\text{d}f}{\text{d}t}$$
so the variation of this Lagrangian is
$$ \delta L = \left( \frac{\partial f}{\partial q} - \frac{\text{d}}{\text{d}t} \frac{\partial f}{\partial \dot{q}} \right)g \delta q + \left( \frac{\partial g}{\partial q} - \frac{\text{d}}{\text{d}t} \frac{\partial g}{\partial \dot{q}} \right)f \delta q - \frac{\partial f}{\partial \dot{q}} \frac{\text{d}g}{\text{d}t}\delta q - \frac{\partial g}{\partial \dot{q}} \frac{\text{d}f}{\text{d}t}\delta q$$
or
$$ \delta L = g \delta f + f \delta g - \frac{\partial f}{\partial \dot{q}} \frac{\text{d}g}{\text{d}t}\delta q - \frac{\partial g}{\partial \dot{q}} \frac{\text{d}f}{\text{d}t}\delta q $$
Now I can't seem to get rid of those horrible extra terms - I can't see how they would produce a boundary term when integrated. Maybe my understanding was incorrect and variations do not obey the product rule? Many standard resources suggest that varying the Einstein-Hilbert action obeys the product rule... is this just an exception?
My question:
How can I show that variations obey the product rule $\delta (fg) = f \delta g + g \delta f$? |
Consider a $2\times 2$ block matrix and a linear system of equations associated to it:
\begin{equation} \begin{pmatrix} - A & B \\ B^t & C \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \phi \\ \psi \end{pmatrix} \end{equation}
Assume that $A$ and $C$ are symmetric positive semi-definite, and that $A$ is moreover invertible. One can construct the Schur complement system $(C + B^t A^{-1} B ) y = \psi - B^t A^{-1} \phi$ where $S = C + B^t A^{-1} B$ is symmetric positive-definite, by assumption.
How do you precondition such a system in general? I have noted that preconditioners for $S$ are derived from block matrix preconditioners for the original block matrix. Is there a general consensus how such a block matrix preconditioner looks like? |
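As a small illustration of working with $S$ iteratively, here is preconditioned CG on the Schur complement with a simple Jacobi (diagonal) preconditioner. The matrices are random stand-ins, and $S$ is formed densely only because the example is tiny; the choice of preconditioner here is illustrative, not a recommendation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

def spd(n):
    # Random symmetric positive definite matrix
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, C = spd(n), spd(n)
B = rng.standard_normal((n, n))
S = C + B.T @ np.linalg.solve(A, B)  # Schur complement (SPD here)
b = rng.standard_normal(n)
Minv = 1.0 / np.diag(S)              # Jacobi preconditioner M^{-1}

def pcg(S, b, Minv, tol=1e-10, maxit=500):
    # Preconditioned conjugate gradients for SPD S
    x = np.zeros_like(b)
    r = b - S @ x
    z = Minv * r
    p = z.copy()
    for _ in range(maxit):
        Sp = S @ p
        alpha = (r @ z) / (p @ Sp)
        x = x + alpha * p
        r_new = r - alpha * Sp
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = Minv * r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

y = pcg(S, b, Minv)
print(np.linalg.norm(S @ y - b))  # residual (near machine precision)
```

In practice one avoids forming $A^{-1}$ and instead applies $S$ matrix-free (a solve with $A$ per CG iteration), which is where block preconditioners for the original system come in.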
Category: Ring theory Problem 624
Let $R$ and $R'$ be commutative rings and let $f:R\to R'$ be a ring homomorphism.
Let $I$ and $I'$ be ideals of $R$ and $R'$, respectively. (a) Prove that $f(\sqrt{I}\,) \subset \sqrt{f(I)}$. (b) Prove that $\sqrt{f^{-1}(I')}=f^{-1}(\sqrt{I'})$.
(c) Suppose that $f$ is surjective and $\ker(f)\subset I$. Then prove that $f(\sqrt{I}\,) =\sqrt{f(I)}$. Problem 618
Let $R$ be a commutative ring with $1$ such that every element $x$ in $R$ is idempotent, that is, $x^2=x$. (Such a ring is called a
Boolean ring.) (a) Prove that $x^n=x$ for any positive integer $n$.
(b) Prove that $R$ does not have a nonzero nilpotent element. Problem 543
Let $R$ be a ring with $1$.
Suppose that $a, b$ are elements in $R$ such that \[ab=1 \text{ and } ba\neq 1.\] (a) Prove that $1-ba$ is idempotent. (b) Prove that $b^n(1-ba)$ is nilpotent for each positive integer $n$.
(c) Prove that the ring $R$ has infinitely many nilpotent elements. |
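For readers wanting a concrete ring satisfying the hypotheses of Problem 543 (this example is not part of the original problems, but it is the classical one): take $R = \operatorname{End}_F(V)$ for a vector space $V$ with countable basis $e_1, e_2, \dots$, with $b$ the right shift and $a$ the left shift:

```latex
b(e_i) = e_{i+1}, \qquad
a(e_i) = \begin{cases} e_{i-1}, & i \ge 2,\\ 0, & i = 1. \end{cases}
```

Then $ab=1$ (shift right, then left) while $ba(e_1)=0$, so $ba\neq 1$. The idempotent $1-ba$ is the projection onto $F e_1$, and the elements $b^n(1-ba)$ are nonzero (they send $e_1 \mapsto e_{n+1}$) and square to zero, matching parts (a)–(c).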
I am looking at an example Turing machine in my textbook,
Automata and Computability by Dexter C. Kozen, and I'm confused as to how they determine the number of states this particular machine has. The example (Example 28.1, page 211) reads as follows:
Here is a TM that accepts the non-context free set $\{a^nb^nc^n \mid n\geq 0\}$.
Informally, the machine starts in its start state
s, then scans to the right over the input string, checking that it is of the form $a^* b^* c^*$. It doesn't write anything on the way across (formally, it writes the same symbol it reads). When it sees the first blank symbol _, it overwrites it with a right endmarker ].
Now it scans left, erasing the first
c it sees, then the first b it sees, then the first a it sees, until it comes to the [. It then scans right, erasing one a, one b, and one c. It continues to sweep left and right over the input, erasing one occurrence of each letter in each pass.
If on some pass it sees at least one occurrence of one of the letters and no occurrences of another, it rejects. Otherwise, it eventually erases all the letters and makes one pass between
[ and ], seeing only blanks, at which point it accepts.
Formally, this machine has $Q = \{s, q_1, ... , q_{10}, q_a, q_r\}, Σ = \{a,b, c\}, Γ = \Sigma ∪ \{[, \_, ]\}$
Are they simply creating states based on their informal definition? Or is there some methodology they are implementing that determines the number of states? If there is some sort of methodology, is it a general methodology that can be applied to other Turing machines? |
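One way to see where the states come from is to prototype the informal sweep strategy directly; each "mode" the prototype is in (checking the shape, scanning left for a c, then a b, then an a, scanning right, accepting/rejecting) corresponds to a state or small group of states. This sketch mirrors the informal description, not the formal transition table:

```python
import re

def accepts(w):
    """Prototype of the sweep strategy for {a^n b^n c^n : n >= 0}."""
    # First rightward pass: check the input has the shape a* b* c*
    if not re.fullmatch(r"a*b*c*", w):
        return False
    tape = list(w)
    # Repeated sweeps: erase one a, one b and one c per pass
    while True:
        present = [s for s in "abc" if s in tape]
        for s in present:
            tape[tape.index(s)] = "_"   # overwrite with blank
        if not present:
            return True    # a full pass seeing only blanks: accept
        if len(present) < 3:
            return False   # one letter ran out before the others: reject

print([accepts(w) for w in ["", "abc", "aabbcc", "aabbc", "ba"]])
# -> [True, True, True, False, False]
```

In the formal machine, each of these modes (and the direction of travel) must be encoded in the finite control, which is how one arrives at a state set like $\{s, q_1, \dots, q_{10}, q_a, q_r\}$.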
(This is related to this question.)
Define the integral,
$$I_n = \int_0^1\frac{\rm{Li}_n(x)}{1+x}dx$$
with
polylogarithm $\rm{Li}_n(x)$. Given the Nielsen generalized polylogarithm $S_{n,p}(z)$,
$$S_{n,p}(z) = \frac{(-1)^{n+p-1}}{(n-1)!\,p!}\int_0^1\frac{(\ln t)^{n-1}\big(\ln(1-z\,t)\big)^p}{t}dt$$
Then it seems,
$$I_1 = -S_{1,1}(-1)-\tfrac12\ln(2)\ln(2)$$
$$I_2 = -5S_{1,2}(-1)+\ln(2)\,\zeta(2)\quad$$
$$\quad\qquad I_3 = -2S_{1,3}(-1)+\ln(2)\,\zeta(3)-\tfrac12\zeta(4)$$
where $S_{1,1}(-1) = -\tfrac12\zeta(2)$, and $S_{1,2}(-1) = \tfrac18\zeta(3)$ and $S_{1,3}(-1)$ has a more complicated closed-form given in the linked post.
Q:What is $I_4$ and $I_5$? In general, can $I_n$ be expressed by the Nielsen generalized polylogarithm? P.S. Note that $\rm{Li}_n(z), \ln(z), \zeta(z)$ are just special cases of this function. |
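As a sanity check on the stated $I_2$ identity, here is a quick pure-Python numerical verification (the series truncation depth and Simpson step count are arbitrary accuracy choices):

```python
import math

# Check I_2 = -5 S_{1,2}(-1) + ln(2) zeta(2), with S_{1,2}(-1) = zeta(3)/8.
def li2(x, terms=4000):
    # Dilogarithm via its defining series sum_{k>=1} x^k / k^2;
    # adequate accuracy on [0, 1] for this check.
    s, p = 0.0, 1.0
    for k in range(1, terms + 1):
        p *= x
        s += p / (k * k)
    return s

def simpson(f, a, b, n=400):
    # Composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

I2 = simpson(lambda x: li2(x) / (1 + x), 0.0, 1.0)
zeta2 = math.pi**2 / 6
zeta3 = 1.2020569031595943
closed_form = math.log(2) * zeta2 - 5 * zeta3 / 8
print(I2, closed_form)  # both approximately 0.3889
```

The agreement (to roughly the series-truncation error) supports the stated closed form for $I_2$.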
Even and Odd Integers
An integer n is even if, and only if, n equals twice some integer. An integer n is odd if, and only if, n equals twice some integer plus 1.
Symbolically, if n is an integer, then
n is even \( \Leftrightarrow \exists \) an integer k such that \(n=2k\)
n is odd \( \Leftrightarrow \exists \) an integer k such that \(n=2k+1\)
Prime Numbers
An integer \(n\) is prime if, and only if, \(n\)>1 and for all positive integers \(r\) and \(s\), if \(n=rs\), then either \(r\) or \(s\) equals \(n\). An integer \(n\) is composite if, and only if, \(n\)>1 and \(n=rs\) for some integers \(r\) and \(s\) with \(1<r<n\) and \(1<s<n\).
In symbols:
\(n\) is prime \( \Leftrightarrow \forall \) positive integers \(r\) and \(s\) if \(n=rs\) then either \(r=1\) and \(s=n\) or \(r=n\) and \(s=1\).
\(n\) is composite \( \Leftrightarrow \exists \) positive integers \(r\) and \(s\) such that \(n=rs\) and \(1<r<n\) and \(1<s<n\).
The sum of any two even integers is even
Suppose \(m\) and \(n\) are even integers. By definition of even, \(m=2r\) and \(n=2s\) for some integers \(r\) and \(s\). Then
\(m+n=2r+2s\)
\( =2(r+s)\)
Let \(t=r+s\). Note that \(t\) is an integer because it is a sum of integers. Hence \(m+n=2t\), where \(t\) is an integer.
It follows by definition of even that \(m+n\) is even. QED.
Rational Number
A real number r is rational if, and only if, it can be expressed as a quotient of two integers with a nonzero denominator. A real number that is not rational is irrational.
More formally, if r is a real number, then
\(r\) is rational \( \Leftrightarrow \exists \) integers \(a\) and \(b\) such that \(r=\frac{a}{b}\) and \(b\neq0\).
The sum of any two rational numbers is rational
Suppose r and s are rational numbers. Then by definition of rational, \(r=\frac{a}{b}\) and \(s=\frac{c}{d}\) for some integers \(a, b, c,\) and \(d\) with \(b\neq0\) and \(d\neq0\). Thus
\(r+s=\frac{a}{b}+\frac{c}{d}\)
\( =\frac{ad+bc}{bd}\)
Let \(p=ad+bc\) and \(q=bd\). Then p and q are integers because products and sums of integers are integers and because a,b,c and d are all integers. Also \(q\neq0\) by the zero product property. Thus
\(r+s=\frac{p}{q}\) where p and q are integers and \(q\neq0\).
Therefore, \(r+s\) is rational by definition of a rational number. QED.
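The construction in the proof (\(p=ad+bc\), \(q=bd\)) can be checked on a concrete pair using exact rational arithmetic; the particular numbers below are arbitrary:

```python
from fractions import Fraction

# r = 3/4, s = 5/6; the proof's p = ad + bc, q = bd
a, b, c, d = 3, 4, 5, 6
r, s = Fraction(a, b), Fraction(c, d)
p, q = a * d + b * c, b * d          # p = 38, q = 24
print(r + s == Fraction(p, q))       # True: 3/4 + 5/6 = 38/24 = 19/12
```

Note that \(\frac{p}{q}\) need not be in lowest terms (here \(38/24 = 19/12\)); the definition of rational only requires *some* integer pair, not a reduced one.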
I checked Wikipedia, I know it is a powerful quantization in physics, but I am wondering what is its relation in mathematics (like mirror symmetry as in wikipedia). A related thing is quantum master equation, what's its use in mathematics? Any reference or background? Thanks!
The BV formalism provides a (co)homological reformulation of several important questions of quantum field theory. The kind of problems that are usually addressed by the BV formalism are:
the determination of gauge invariant operators, the determination of conserved currents, the problem of consistent deformation of a theory, the determination of possible quantum anomalies (the violation of the gauge invariance due to quantum effects).
The BV formalism is specially attractive because it does not require one to make a choice of gauge fixing and it maintains manifest spacetime covariance. It can also deal with situations that the traditional BRST formalism cannot handle. This is for example the case of gauge theories admitting an
open gauge algebra (a gauge algebra that is closed only modulo the equations of motion). The typical examples are supergravity theories. The BV formalism also allows an elegant and powerful mathematical reformulation of certain questions of quantum field theory in the language of homological algebra.
Mathematically, the BV formalism is simply a clever application of
homological perturbation theory. In order to understand the relation, I will first review the geometry of a physical model described by a Lagrangian $\mathcal{L}$ depending on fields $\phi^I$ and a finite number of their derivatives and admitting a gauge symmetry $G$. The starting point is the space $\mathcal{M}$ of all possible configurations of fields and their derivatives. This can be formalized using the language of jet-spaces. The Euler-Lagrange equations give the equations of motion of the theory and, together with their derivatives, they define a sub-space $\Sigma$ of $\mathcal{M}$ called the stationary space. The on-shell functions are the functions relevant for the dynamics of the theory; they are defined on the stationary space $\Sigma$, and they can be described algebraically as $C^\infty(\Sigma)=C^\infty(\mathcal{M})/ \mathcal{N}$ where $\mathcal{N}$ is the ideal of functions that vanish on $\Sigma$. Because of the gauge invariance, the Euler-Lagrange equations are not independent but satisfy some non-trivial relations called Noether identities. One has to identify different configurations related by a gauge transformation. Indeed, a gauge symmetry is not a real symmetry of the theory but a redundancy of the description.
The two steps that we have just described (restriction to the stationary surface and taking the quotient by the gauge transformations) are respectively realized in the BV formalism by the homology of the
Koszul-Tate differential $\delta$ and the cohomology of the longitudinal operator $\gamma$. The Koszul-Tate operator defines a resolution of the equations of motion in homology. This is done by introducing one antifield $\phi^*_I$ for each field $\phi^I$ of the Lagrangian. The antifields are introduced to ensure that the equations of motion are trivial in the homology of the Koszul-Tate operator. The gauge invariance of the theory is taken care of by the cohomology of the longitudinal differential $\gamma$. In the case of Yang-Mills theories, the cohomology of $\gamma$ is equivalent to the Lie algebra cohomology.
The full BV operator is then given by
$$s=\delta + \gamma+\cdots,$$
where the dots are for possible additional terms required to ensure that the BV operator $s$ is nilpotent ( $s^2=0$). The construction of $s$ from $\delta$ and $\gamma$ follows a recursive pattern borrowed from
homological perturbation theory. One can trace the need for the antifields and the Koszul-Tate differential to this recursive pattern. For simple theories like Yang-Mills, we just have $s=\delta+\gamma$ because the gauge algebra closes as a group without using the equations of motion. In more complicated situations, when the algebra is open, there are additional terms in the definition of $s$. One can generate $s$ using the BV bracket $(\cdot ,\cdot)$ (under which a field and its associated antifield are dual) and a source $S$ such that the BV operator can be expressed as $$s F= (S,F).$$ The classical master equation is $$(S,S)=0,$$ and it is just equivalent to $s^2=0$.
At the quantum level, the action $S$ is replaced by a quantum action $W=S+\sum_i \hbar^i M_i$ where the terms $M_i$ are contributions due to the path integral measure. The gauge invariance of quantum expectation values of operators is equivalent to the
quantum master equation: $$\frac{1}{2}(W,W)=i\hbar \Delta W,$$ where $\Delta$ is an operator similar to the Laplacian but defined in the space of fields and their antifields. This operator naturally appears when one considers the invariance of the measure of the path integral under an infinitesimal BRST transformation. When $\Delta S=0$, we can take $W=S$.
We will now review the BV (co)homological interpretation of some important questions in quantum field theory:
The
observables of the theory are gauge invariant operators; they are described by the cohomology group $H(s)$ in ghost number zero.
Non-trivial
conserved currents of the theory are equivalent to the so-called characteristic cohomology $H^{n-1}_0(\delta |d)$, which is the cohomology of the Koszul-Tate operator $\delta$ (in antifield number zero) modulo total derivatives for forms of degree $n-1$, where $n$ is the dimension of spacetime.
The equivalent class of
global symmetries is equivalent to $H^n_1(\delta| d)$.
The
gauge anomalies are controlled by the group $H^{1,n}(s|d)$ (that is, $H(s)$ in antifield number 1 and in the space of $n$-forms modulo total derivatives). The conditions that define the cohomology $H^{1,n}(s|d)$ are generalizations of the famous Wess-Zumino consistency condition.
The group $H^{0,n}(s|d)$ controls the
renormalization of the theory and all the possible counter terms.
The groups $H^{0,n}(\gamma,d)$ and $H^{1,n}(\gamma, d)$ control the
consistent deformations of the theory. References:
For a short review, I recommend the preprint by Fuster, Henneaux and Maas: hep-th/0506098.
The classical reference is the book by Marc Henneaux and Claudio Teitelboim, Quantization of Gauge Systems.
For applications there is also a standard review by Barnich, Brandt and Henneaux, "Local BRST cohomology in gauge theories," Phys. Rept. 338, 439 (2000) [arXiv:hep-th/0002245].
$1$-starcompactness is not closed hereditary. This is mentioned immediately after Example $2.3.3$, which shows that the Tikhonov plank is $1$-starcompact: the set $\{\omega_1\}\times\omega$ is a closed, discrete subset of the plank that is clearly not $1$-starcompact.
The result in your question is at least consistently true. This follows from Lemma $2.2.10$ of the paper:
Lemma $\mathbf{2.2.10}$. If $X$ is a regular first countable $1$-starcompact space with $w(X)<\mathfrak d$, then $X$ is countably compact.
Now let $X$ be a regular, first countable, star-compact space of cardinality $\omega_1$. As you point out in the question, $X$ is $1$-starcompact. Moreover, $w(X)\le\chi(X)\cdot|X|=\omega\cdot\omega_1=\omega_1\le\mathfrak d$. If $\omega_1<\mathfrak d=2^\omega$, which is known to be consistent (e.g., it follows from $\mathsf{MA}+\neg\mathsf{CH}$), then $X$ is countably compact by Lemma $2.2.10$.
Thus, if there is a consistent counterexample, it will have to be in a model in which $\mathfrak d=\omega_1$. And indeed, Example $\mathbf{2.3.5}$ of the paper turns out to be such a counterexample. The paper contains a proof that this space $X$ is $1$-starcompact, but in fact it is star-compact.
Proof. Let $\mathscr{U}$ be any open cover of $X$. For $k\in\omega$ define $U_k$ and $f\in D$ as in the proof that $X$ is $1$-starcompact. Let $K=L_f\cup\{g\in D:g\le^*f\}$; I claim that $K$ is compact. $K\cap D$ is homeomorphic to $\alpha+1$ for some countable ordinal $\alpha$ and is therefore compact. Suppose that $V$ is an open nbhd of $K\cap D$, and $A=L_f\setminus V$ is infinite. $L_f\cap\big(\{k\}\times\omega\big)$ is finite for each $k\in\omega$, so $A\cap\big(\{k\}\times\omega\big)$ is finite for each $k\in\omega$, and it follows from the proof of Claim $\mathbf 2$ of the paper that $A$ has a limit point $g\in K\cap D$. But then $g\in V$, so $V\cap A\ne\varnothing$, contradicting the choice of $A$. Thus, $A$ is finite, and $K$ is compact.
Let $F=\{k\in\omega:K\cap U_k=\varnothing\}$; then $F$ is finite, and $\operatorname{st}(K,\mathscr{U})\supseteq K\cup\big((\omega\setminus F)\times\omega\big)\cup(\omega\setminus F)$. Then $K\cup F$ is compact, and $S=(\omega\times\omega)\setminus\operatorname{st}(K\cup F,\mathscr{U})$ is finite, so $K\cup F\cup S$ is compact, and $\operatorname{st}(K\cup F\cup S,\mathscr{U})\supseteq K\cup(\omega\times\omega)\cup\omega$. That is, $X\setminus\operatorname{st}(K\cup F\cup S,\mathscr{U})\subseteq D\setminus K$. But $D\setminus K$ is homeomorphic to $\omega_1$ and is therefore star finite (= strongly $1$-starcompact), so there is a finite $E\subseteq D\setminus K$ such that $K\cup F\cup S\cup E$ is compact and $\operatorname{st}(K\cup F\cup S\cup E,\mathscr{U})=X$. $\dashv$ |
Yes.
Simple Approximation - Rule of Thumb
Use the formula:
$$ \gamma \text{ (pv01/bp)} = -\frac{1+\text{tenor}}{10{,}000 \text{ (bps)}}\, pv01 $$
So for the 20Y and 30Y tenors this formula gives 210 and -310 respectively, of which half is produced by the PnL component (discount risk) and half by the forecasting risk.
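As a minimal illustration, the rule of thumb can be coded directly. This is a Python sketch; the function name and the inputs (`pv01` in currency units per bp, `tenor` in years) are hypothetical, not from any quoted library:

```python
def gamma_rule_of_thumb(pv01: float, tenor: float) -> float:
    """Rule-of-thumb gamma (pv01/bp): -(1 + tenor) / 10,000 * pv01."""
    return -(1 + tenor) * pv01 / 10_000
```

For example, a 20Y swap with a pv01 of 10,000 gives a gamma of -21 pv01 per bp under this rule.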
Approximation Accounting for Shape of Curve
Use the formula:
$$ \gamma \text{ (pv01/bp)} = -\frac{pv01}{10{,}000 \text{ (bps)}} \cdot \frac{\sum_{j=1}^{N}2jA_j}{\sum_{j=1}^NA_j} $$
where $A_j$ is the analytic delta of a 1Y forward trade, so for a 3Y swap ($N=3$) you would use the analytic delta of a 0y1y, 1y1y, 2y1y. Note this reduces to the approximation above if $A_j=1$.
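The shape-adjusted version can be sketched the same way (a Python illustration; `fwd_deltas[j-1]` plays the role of $A_j$, the analytic delta of the $(j-1)$y1y forward swap, and all inputs are hypothetical):

```python
def gamma_shape_adjusted(pv01: float, fwd_deltas: list[float]) -> float:
    """Shape-adjusted gamma: weight each 1Y-forward bucket j by 2*j*A_j,
    normalized by the sum of the forward deltas."""
    weighted = sum(2 * j * a for j, a in enumerate(fwd_deltas, start=1))
    return -pv01 * weighted / (10_000 * sum(fwd_deltas))
```

With flat forward deltas (all $A_j$ equal), the weight ratio collapses to $N+1$, recovering the rule-of-thumb formula with tenor $=N$.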
Further Detail
These formulae are derived in Pricing and Trading Interest Rate Derivatives: A Practical Guide to Swaps by Darbyshire. The bibliography includes code that has even more accurate formulae calculating the specific cross-gamma risks, and methods of converting between par and forward representations. |
AliPhysics 1811c8f
#include <AliFMDCorrNoiseGain.h>
Public member functions:
- AliFMDCorrNoiseGain ()
- AliFMDCorrNoiseGain (const AliFMDFloatMap &map)
- Float_t Get (UShort_t d, Char_t r, UShort_t s, UShort_t t) const
- void Set (UShort_t d, Char_t r, UShort_t s, UShort_t t, Float_t x)
- const AliFMDFloatMap & Values ()

Protected attributes:
- AliFMDFloatMap fValues
Get the noise calibration. That is, the ratio
\[ \frac{\sigma_{i}}{g_{i}k} \]
where \( k\) is a constant determined by the electronics of units DAC/MIP, and \( \sigma_i, g_i\) are the noise and gain of the \( i \) strip respectively.
This correction is needed because some of the reconstructed data (those with an AliESDFMD class version less than or equal to 3) used the wrong zero-suppression factor. The zero-suppression factor used by the on-line electronics was 4, but due to a coding error in the AliFMDRawReader a zero-suppression factor of 1 was assumed during the reconstruction. This artificially shifts the zero of the energy-loss distribution towards the left (lower-valued signals).
So let's assume the real zero-suppression factor is \( f\) while the zero suppression factor \( f'\) assumed in the reconstruction was (wrongly) lower. The number of ADC counts \( c_i'\) used in the reconstruction can be calculated from the reconstructed signal \( m_i'\) by
\[ c_i' = m_i' \times g_i \times k / \cos\theta_i \]
where \(\theta_i\) is the incident angle of the \( i\) strip.
This number of counts assumed the wrong zero-suppression factor \( f'\), so to correct to the on-line value we need to do
\[ c_i = c_i' - \lfloor f'\times n_i\rfloor + \lfloor f\times n_i\rfloor \]
which gives the correct number of ADC counts over the pedestal. To convert back to the scaled energy loss signal we then need to calculate (noting that \( f,f'\) are integers)
\begin{eqnarray} m_i &=& \frac{c_i \times \cos\theta_i}{g_i \times k}\\ &=& \left(c_i' - \lfloor f'\times n_i\rfloor + \lfloor f\times n_i\rfloor\right)\frac{\cos\theta_i}{g_i \times k}\\ &=& \left(\frac{m_i'\times g_i\times k}{\cos\theta_i} - \lfloor f'\times n_i\rfloor + \lfloor f\times n_i\rfloor\right) \frac{\cos\theta_i}{g_i \times k}\\ &=& m_i' + \frac{1}{g_i \times k} \left(\lfloor f\times n_i\rfloor- \lfloor f'\times n_i\rfloor\right)\cos\theta_i\\ &=& m_i' + \frac{\lfloor n_i\rfloor}{g_i \times k} \left(f-f'\right)\cos\theta_i \end{eqnarray}
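The derivation above can be sketched numerically as follows. This is a Python illustration of the algebra, not the actual C++ AliFMDCorrNoiseGain API; the function name and all argument values are hypothetical:

```python
import math

def corrected_signal(m_prime: float, noise: float, gain: float, k: float,
                     cos_theta: float, f: int, f_prime: int) -> float:
    """Recompute the scaled energy-loss signal m_i with the correct
    zero-suppression factor f, given the signal m_prime that was
    reconstructed with the wrong factor f_prime."""
    # ADC counts implied by the (wrongly) reconstructed signal:
    # c_i' = m_i' * g_i * k / cos(theta_i)
    c_prime = m_prime * gain * k / cos_theta
    # undo the wrong suppression, apply the correct one:
    # c_i = c_i' - floor(f' * n_i) + floor(f * n_i)
    c = c_prime - math.floor(f_prime * noise) + math.floor(f * noise)
    # convert back to the scaled energy-loss signal
    return c * cos_theta / (gain * k)
```

With \( f = f'\) the correction is the identity; with the actual values \( f=4\), \( f'=1\) the signal shifts by \(\lfloor n_i\rfloor(f-f')\cos\theta_i/(g_i k)\), as in the last line of the derivation.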