I will consider the following simpler version of your question: is there a function $g(n,\epsilon)$ such that the following are equivalent, for a function $f(n)$:

(1) $f$ is bounded.

(2) For every $\epsilon > 0$ there exists $N$ such that for $n \geq N$, $f(n) \leq g(n,\epsilon)$.

Consider first the case in which $g(n,\epsilon_1)$ eventually exceeds $g(n,\epsilon_2)$ whenever $\epsilon_1 > \epsilon_2$ (like your example $g(n,\epsilon) = n^\epsilon$). Then condition (2) can be rewritten equivalently as:

(3) For every integer $M$ there exists $N$ such that for $n \geq N$, $f(n) \leq g(n,1/M)$.

Define the function $$f(n) = \max_{M \leq n} g(n,1/M).$$ This function satisfies condition (3), and so is bounded, say $f \leq C$. This implies $g(n,1) \leq C$ for all $n$. But then the bounded function $C+1$ doesn't satisfy condition (3) for $M=1$. This contradiction shows that under the stated monotonicity constraint, no function $g$ fits the bill.

On the other hand, as you mention, $f$ is bounded iff it is eventually dominated by every $\omega(1)$ integer function. Indeed, suppose that $f$ is unbounded. Then for every integer $M$ there exist infinitely many $n$ such that $f(n) \geq M$. In particular, we can find an increasing sequence $n_1 < n_2 < \cdots$ such that $f(n_M) \geq M$. Define a function $g$ by $g(n) = M-1$ for $n_{M-1} < n \leq n_M$. Then $g = \omega(1)$ but $f$ is not eventually dominated by $g$.

The functions from $\mathbb{N}$ to $\mathbb{N}$ have the cardinality of the continuum, and so they can be put (constructively!) in one-to-one correspondence with the real interval $(0,\infty)$. This immediately gives a function $g(n,\epsilon)$ which does fit your bill, though it is not particularly natural.
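The unboundedness construction in the second half can be sketched in Python; the particular $f$ below is my own illustrative choice, not from the answer:

```python
# Sketch: given an unbounded f, build g = omega(1) that f is not
# eventually dominated by.
def f(n):
    # an example of an unbounded but wildly oscillating function
    return n if n % 10 == 0 else 0

def build_g(limit):
    # choose witnesses n_1 < n_2 < ... with f(n_M) >= M
    witnesses = []
    n = 1
    for M in range(1, limit + 1):
        while f(n) < M:
            n += 1
        witnesses.append(n)
        n += 1
    def g(n):
        # g(n) = M - 1 for n_{M-1} < n <= n_M
        for M, n_M in enumerate(witnesses, start=1):
            if n <= n_M:
                return M - 1
        return limit          # beyond the table g keeps growing
    return g, witnesses

g, witnesses = build_g(50)
# g is unbounded, yet f(n_M) >= M > M - 1 = g(n_M) at every witness:
assert all(f(n_M) > g(n_M) for n_M in witnesses)
```

So along the witnesses, $f$ exceeds $g$ infinitely often even though $g \to \infty$, which is exactly the failure of eventual domination used in the proof.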
Residue techniques often help one solve improper (real) integrals like $\int_{-\infty}^\infty f(x)\,dx$. In order to apply contour techniques one has to turn this real integral into a complex contour integral. This is done by taking a symmetric interval $[-R,R]$ and adding an upper semicircle of radius $R$ (centered at the origin). Then as $R \to \infty$ one wants the semicircular part to tend to zero so that in the limit one gets the integral over $(-\infty,\infty)$ (well, actually the principal value). As your half circle gets larger and larger it captures more and more of the upper half plane, so in the limit you have essentially captured the entire upper half plane. Thus when you go to compute the contour integral using residues, you should only consider poles in the upper half plane. One could just as easily use the lower half plane. Or if you were doing an integral of the form $\int_0^\infty f(x)\,dx$ you could use the first quadrant. There are occasions where you might want to consider the entire plane, but none come to mind at the moment.
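As a concrete check of the recipe (my example, not from the answer): for $f(x) = 1/(1+x^2)$, the only pole of $1/(1+z^2)$ in the upper half plane is $z = i$, and $2\pi i$ times its residue should reproduce $\int_{-\infty}^{\infty} \frac{dx}{1+x^2} = \pi$:

```python
import math

pole = 1j                    # 1/(1+z^2) = 1/((z-i)(z+i)); only z = i is in the upper half plane
residue = 1 / (2 * pole)     # residue of 1/(1+z^2) at z = i
contour_value = 2 * math.pi * 1j * residue

# the real integral over [-R, R] in closed form; the semicircle contribution decays like 1/R
R = 1e8
real_integral = 2 * math.atan(R)

print(contour_value)                        # (3.141592653589793+0j)
print(abs(real_integral - math.pi) < 1e-6)  # True
```

The residue computation lands on $\pi$ exactly, and the finite-interval integral approaches it as $R$ grows, as the limiting argument above says it should.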
First, in order to make your theorem well-defined, the following theorem is needed (II.27 theorem 1 in Bourbaki): Theorem 1: Let $X$ be a compact space. Then there exists only one uniform structure on $X$ compatible with its topology, namely the one whose entourages are the neighborhoods of the diagonal $\Delta$ in $X \times X$. In fact, your theorem then follows easily, but the proof of theorem 1 in Bourbaki may be not very clear at some points. When I read it, I tried to fill the gaps by writing the following: Lemma 2: Let $X$ be a uniform space and $\mathfrak{S}$ be the set of symmetric entourages of $X$. Then for every $V \in \mathfrak{S}$ and $M \subset X \times X$, $V \circ M \circ V$ is a neighborhood of $M$. Moreover, $$\mathrm{cl}(M)= \bigcap\limits_{V \in \mathfrak{S}} V \circ M \circ V.$$ Corollary 3: Let $X$ be a uniform space. The interiors (resp. the closures) of the entourages of $X$ define a fundamental system of entourages. I will describe the proofs of lemma 2 and corollary 3 only if needed, because I find the proofs given in Bourbaki clear enough (II.4-5, proposition 2 and corollary 2). Proof of theorem 1. First let us suppose that $X$ has a compatible uniform structure $\mathcal{U}$. If $V \in \mathcal{U}$ is a symmetric entourage, we deduce from lemma 2 that $\overset{2}{V}=V \circ \Delta \circ V$ is a neighborhood of $\Delta$. Because any uniform structure admits a fundamental system of symmetric entourages, we know that any entourage of $\mathcal{U}$ is a neighborhood of $\Delta$. By contradiction, suppose that there exists a neighborhood $V$ of $\Delta$ such that $V \notin \mathcal{U}$. Because $W \backslash V \neq \emptyset$ for every $W \in \mathcal{U}$, the family $\{ W \backslash V \mid W \in \mathcal{U}\}$ generates a filter $\mathfrak{F}$ on $X \times X$. By compactness, $\mathfrak{F}$ has a cluster point $a$, and $a \notin \Delta$ (indeed $a$ lies in the closed set $(X \times X) \backslash \mathrm{int}(V)$, which is disjoint from $\Delta$ since $V$ is a neighborhood of $\Delta$).
Clearly, $a$ is also a cluster point of the filter generated by $\mathcal{U}$, hence $$a \in \bigcap\limits_{M \in \mathcal{U}} \overline{M}.$$ If $x \neq y \in X$, because $X$ is Hausdorff and because $\mathcal{U}$ admits a fundamental system of closed entourages according to corollary 3, there exist $U,W \in \mathcal{U}$ such that $U(x) \cap W(y)= \emptyset$, that is $x \notin W(y)$ or $(x,y) \notin W$, hence $(x,y) \notin \bigcap\limits_{M \in \mathcal{U}} \overline{M}$. We deduce that $$\bigcap\limits_{M \in \mathcal{U}} \overline{M} \subset \Delta,$$ a contradiction with $a \notin \Delta$. Consequently, we just proved that if $X$ is uniformisable then the uniform structure is the set $\mathcal{U}$ of neighborhoods of the diagonal $\Delta$. Now we want to prove that $\mathcal{U}$ is indeed a uniform structure. First, because $(x,y) \mapsto (y,x)$ is continuous, $\overset{-1}{V} \in \mathcal{U}$ for all $V \in \mathcal{U}$. By contradiction, suppose that there exists $A \in \mathcal{U}$ such that $\overset{2}{W} \backslash A \neq \emptyset$ for all $W \in \mathcal{U}$. Then $\{ \overset{2}{W} \backslash A \mid W \in \mathcal{U} \}$ generates a filter $\mathfrak{F}$ on $X \times X$. By compactness, it has a cluster point $(x,y) \notin \Delta$. Because $X$ is normal, there exist two disjoint closed subspaces $V_1,V_2$ and two disjoint open subspaces $U_1,U_2$ satisfying $x \in V_1 \subset U_1$ and $y \in V_2 \subset U_2$. Let $U_3= X \backslash (V_1 \cup V_2)$ and $W= \bigcup\limits_{i=1}^3 U_i \times U_i$. Then $W$ is a neighborhood of $\Delta$, hence $W \in \mathcal{U}$ and $\overset{2}{W} \cap (V_1 \times V_2) \neq \emptyset$. Therefore, there exists $z \in X$ such that $(x,z),(z,y) \in W$. But $x \in V_1$ lies in $U_1$ and in neither $U_2$ nor $U_3$, so $(x,z) \in W$ forces $z \in U_1$; then $z \notin U_2$, and $y \in V_2$ gives $y \notin U_3$, so $(z,y) \in W$ forces $y \in U_1$. Thus, $y \in U_1 \cap V_2 \subset U_1 \cap U_2 = \emptyset$, a contradiction. Consequently, we just proved that for all $V \in \mathcal{U}$ there exists $W \in \mathcal{U}$ such that $\overset{2}{W} \subset V$.
Therefore, $\mathcal{U}$ is a uniform structure. To conclude, it is sufficient to prove that $\mathcal{U}$ is compatible with the topology of $X$. For convenience, let $\mathcal{T}_c$ be the compact topology of $X$ and let $\mathcal{T}_{\mathcal{U}}$ be the topology induced by $\mathcal{U}$. Let $x \in X$ and let $V$ be a neighborhood of $x$ for $\mathcal{T}_{\mathcal{U}}$; in particular, there exists $W \in \mathcal{U}$ such that $W(x) \subset V$. Let $$\pi_x : \left\{ \begin{array}{ccc} (X, \mathcal{T}_c) & \to & (X \times X, \mathcal{T}_c \times \mathcal{T}_c) \\ y & \mapsto & (x,y) \end{array} \right. .$$ Because $\pi_x$ is continuous and $W(x)=\pi_x^{-1}(W)$, we deduce that $W(x)$ is a neighborhood of $x$ for $\mathcal{T}_c$; since $W(x) \subset V$, $V$ is also a neighborhood of $x$ for $\mathcal{T}_c$. Thus, $\mathcal{T}_{\mathcal{U}}$ is finer than $\mathcal{T}_c$; in particular, $\mathrm{Id} : (X, \mathcal{T}_c) \to (X, \mathcal{T}_{\mathcal{U}})$ is continuous. Moreover, we saw that $\bigcap\limits_{U \in \mathcal{U}} \overline{U} \subset \Delta$; in fact, clearly $\bigcap\limits_{U \in \mathcal{U}} \overline{U}= \Delta$, that is $\mathcal{T}_{\mathcal{U}}$ is Hausdorff. Therefore, $\mathrm{Id}$ is closed, and in fact a homeomorphism by compactness: $\mathcal{T}_c \simeq \mathcal{T}_{\mathcal{U}}$. $\square$ Theorem 4: Let $X$ be a compact space, $X'$ be a uniform space and $f : X \to X'$ be a continuous map. Then $f$ is uniformly continuous. Proof. $f$ is uniformly continuous iff $g^{-1}(V')$ is an entourage of $X$ for every entourage $V'$ of $X'$, where $g:= f \times f : X \times X \to X' \times X'$. According to corollary 3, we may suppose that $V'$ is open. Because $g$ is continuous, we deduce that $g^{-1}(V')$ is an open neighborhood of the diagonal $\Delta \subset X \times X$, so it is an entourage of $X$ according to theorem 1. $\square$
Consider the following general combinatorial optimization problem. Let \(\mathbb{F}\) be a family of subsets of a finite set \(E\) and let \(w: E \rightarrow \mathbb{R}\) be a real-valued weight function defined on the elements of \(E\). The objective of the combinatorial optimization problem is to find \(F^* \in \mathbb{F}\) such that \[w(F^*) = \mbox{min}_{F \in \mathbb{F}} w(F)\] where \(w(F) := \sum_{e \in F} w(e)\). To translate the combinatorial optimization problem into an optimization problem in \(\mathbb{R}^E\), we can represent each \(F \in \mathbb{F}\) by its incidence vector \(\chi^F\): let \(\chi_e^F = 1\) if \(e \in F\) and \(\chi_e^F = 0\) otherwise. Then, if we let \(S = \{\chi^F: F \in \mathbb{F}\} \subseteq \{0,1\}^E\) be the set of incidence vectors of the sets in \(\mathbb{F}\), the corresponding optimization problem is \[\mbox{min}\{w^T x : x \in S\}.\]

Traveling Salesman Problem

Perhaps the most famous combinatorial optimization problem is the Traveling Salesman Problem (TSP). Given a complete graph on \(n\) vertices and a weight function defined on the edges, the objective of the TSP is to construct a tour (a circuit that passes through each vertex) of minimum total weight. The TSP is an example of a hard combinatorial optimization problem; the decision version of the problem is \(\mathcal{NP}\)-complete. The Traveling Salesman Problem page presents an integer programming formulation of the TSP and provides some software and online resources. The Multiple Traveling Salesman Problem (mTSP) case study describes a generalization of the TSP in which more than one salesman is allowed.

Cutting Stock Problem

The Cutting Stock Problem is an \(\mathcal{NP}\)-complete optimization problem that arises in many applications in industry. The classic one-dimensional cutting stock problem is to determine how to cut rolls of paper of fixed width into customer orders for smaller widths so as to minimize waste.
The cutting stock problem can be formulated as an integer linear programming problem and solved using column generation. The Cutting Stock Problem case study presents a small example, provides an integer linear programming formulation, and discusses the delayed column generation approach. The Wikipedia entry lists a number of examples and provides some references. VPSolver is a vector packing solver based on an arc-flow formulation with graph compression; it generates models that can be solved using general-purpose mixed-integer programming solvers. In two-dimensional cutting stock problems, rectangular (or more general) shapes are to be cut from a larger sheet. There are both guillotine and non-guillotine versions.

Packing Problems

Packing problems can be viewed as complementary to cutting problems in that the objective is to fill a larger space with specified smaller shapes in the most economical (profitable) way. There are geometric packing problems in one dimension, two dimensions and even three dimensions, such as those that arise in filling trucks or shipping containers. The size measure is not always length or width; it may be weight, for example.

Minimum Spanning Tree

Another well-known combinatorial optimization problem is the Minimum Spanning Tree (MST) problem. Given a connected, undirected graph, a spanning tree of the graph is a subgraph that is a tree and connects all the vertices. Given a weight assigned to each edge, a minimum spanning tree is a spanning tree with weight less than or equal to the weight of every other spanning tree. The MST problem is an example of an easy combinatorial optimization problem. Two common algorithms, Prim's Algorithm and Kruskal's Algorithm, are greedy algorithms that run in polynomial time; the decision version of the MST problem is in \(\mathcal{P}\).
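The greedy idea behind Kruskal's Algorithm fits in a few lines of Python; the following is a minimal illustrative sketch (my own, with a path-halving union-find), not production code:

```python
def kruskal(n, edges):
    """Minimum spanning tree of vertices 0..n-1; edges given as (weight, u, v)."""
    parent = list(range(n))
    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total, tree = 0, []
    for w, u, v in sorted(edges):     # scan edges greedily by weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # keep the edge iff it joins two components
            parent[ru] = rv
            total += w
            tree.append((u, v))
    return total, tree

# 4-cycle 0-1-2-3 with chords: the MST picks weights 1 + 2 + 2 = 5
print(kruskal(4, [(1, 0, 1), (2, 1, 2), (2, 2, 3), (4, 0, 3), (3, 0, 2)]))
# (5, [(0, 1), (1, 2), (2, 3)])
```

Sorting the edges dominates the running time, which is one way to see that the problem is polynomial, in contrast to the TSP above.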
MSTs have direct applications in the design of networks of all types (computer, telecommunications, transportation, water supply and electricity) and arise as subproblems in the solution of other optimization problems (e.g., the TSP).

Textbooks

Ahuja, R. K., Magnanti, T. L., and Orlin, J. B. 1993. Network Flows. Prentice-Hall, Inc., Upper Saddle River, NJ.
Bertsekas, D. P. 1998. Network Optimization: Continuous and Discrete Models. Athena Scientific, Nashua, NH.
Lawler, E. 2001. Combinatorial Optimization: Networks and Matroids. Dover Publications, Inc., Mineola, NY.
Murty, K. G. 2006. Network Programming, Internet Edition. First published in 1992 by Prentice-Hall, Inc., Upper Saddle River, NJ.
Nemhauser, G. L. and Wolsey, L. A. 1988. Integer and Combinatorial Optimization. John Wiley & Sons, New York.
Papadimitriou, C. H. and Steiglitz, K. 1998. Combinatorial Optimization: Algorithms and Complexity. Dover Publications, Inc., Mineola, NY.
Rockafellar, R. T. 1998. Network Flows and Monotropic Optimization. Athena Scientific, Nashua, NH.

Journal Papers and Technical Reports

Optimization Online Combinatorial Optimization area
On a recent MathsJam Shout, an Old Chestnut appeared (in this form, due to @jamestanton): If you’ve not seen it, stop reading here and have a play with it - it’s a classic puzzle for a reason. Below the line are spoilers. Counting is hard The first thing you’d probably… Read More → In this episode, we're joined by special guest co-host @sophiebays, who is Dr Sophie Carr in real life, and the world's most interesting mathematician1. We discuss: The Big Internet Math-Off. My favourite pitch wasn’t really in the contest! I also liked Alex’s wobbly table and Anna’s FURNACE. Number of the… Read More → Dear Uncle Colin, If $e = \left( 1+ \frac{1}{n} \right)^n$ when $n = \infty$, how come it isn’t 1? Surely $1 + \frac{1}{\infty}$ is just 1? - I’m Not Finding It Natural, It’s Terribly Yucky Hi, INFINITY, and thanks for your message. You have fallen into one of maths’s classic… Read More → What are they? I thought, until I looked closely, that we had a Hoberman sphere in the children’s toybox. We don’t: we have something closely related to it, though. The Hoberman mechanism comprises a series of pairs of pivoted struts arranged end to end. Each pair looks a little like… Read More → Dear Uncle Colin, I’ve been struggling with this: “If the surface area of a sphere to a cylinder is in the ratio 4:3 and the sphere has a radius of 3a, calculate the radius of the cylinder if the radius of the cylinder is equal to its height.” Can you help?… Read More → I love Futility Closet -- it's an incredible collection of interesting bits and pieces, but it has a special place in my heart because they love and appreciate maths. Not only that, they appreciate maths that I find interesting. The internet has many interesting miscellanies, and many excellent sites specialising… Read More → Dear Uncle Colin, I have to solve $615 + x^2 = 2^y$ for integers $x$ and $y$. I’ve solved it by inspection using Desmos ($x=59$ and $y=12$ is the only solution), but I’d prefer a more analytical solution!
Getting Exponent Right Makes An Interesting Noise Hi, GERMAIN, and thanks for… Read More → Via @markritchings, an excellent logs problem: If $a = \log_{14}(7)$ and $b = \log_{14}(5)$, find $\log_{35}(28)$ in terms of $a$ and $b$. One of the reasons I like this puzzle is that I did it a somewhat brutal way, and once I had the answer, a much neater way jumped… Read More → In this episode, we're joined by @christianp, who is Christian Lawson-Perfect in real life, our first returning special guest co-host1. We discuss: The Big Internet Math Off and associated stickerbook 99 variations on a proof by Philip Ording The Art of Statistics - Learning from Data by David Spiegelhalter Maths… Read More → Dear Uncle Colin, How would you write $\frac{1}{10}$ in binary? Binary Is Totally Stupid Hi, BITS, and thanks for your message! I have two ways to deal with this: the standard, long-division sort of method, and a much nicer geometric series approach. Long division-esque While I can do the long… Read More →
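The long-division style method can be sketched in a few lines of Python (a hypothetical helper of mine, not Uncle Colin's actual working): to produce each bit, double the remainder and check whether the denominator fits:

```python
def binary_fraction(p, q, digits):
    """First `digits` binary digits of the fraction p/q (assumes 0 <= p < q)."""
    bits, r = [], p
    for _ in range(digits):
        r *= 2                   # shift left one binary place
        bits.append(r // q)      # next bit: does q fit?
        r %= q                   # carry the remainder forward
    return "0." + "".join(map(str, bits))

print(binary_fraction(1, 10, 12))  # 0.000110011001  (the block 0011 repeats)
```

The remainders cycle ($2, 4, 8, 6, 2, \dots$), which is why $\frac{1}{10}$ is the repeating binary fraction $0.0\overline{0011}$, matching what the geometric series approach gives.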
Suppose you have an open map \(p\) between topological spaces, and a subset \(A\) of \(p\)’s domain such that \(p(A)\) is open. Can you then conclude that \(A\) is open? Nope! Consider the spaces \(X=\{x_1,x_2\}\) and \(Y=\{y_1,y_2\}\) with topologies \(\tau_X=\{\varnothing, X, \{x_1\}\}\) and \(\tau_Y=\{\varnothing,Y,\{y_1\}\}\), respectively, and let \(p: X\times Y\to X\) be the projection onto the first factor. This is an open map. If we consider \(A=X\times\{y_2\}\), we see that \(A\) is not open in \(X\times Y\), but \(p(A)=p(X\times\{y_2\})= X\), which is trivially open in \(X\). I came across this little problem recently: if \(X\) is a topological space with exactly two components, and given an equivalence relation \(\sim\), what can we say about the quotient space \(X/{\sim}\)? It turns out that \(X/{\sim}\) is connected if and only if there exist \(x,y\in X\), with \(x\) and \(y\) in separate components, such that \(x\sim y\). Suppose first that there exist such \(x,y\in X\) with \(x\sim y\). Let \(C_1\) and \(C_2\) be the two components of \(X\) and let \(p: X \to X/{\sim}\) be the natural projection. Since \(p\) is a quotient map it is continuous, and since the image of a connected space under a continuous function is connected, \(p(C_1)\) and \(p(C_2)\) are connected. But since \(x\sim y\) we have \(p(C_1)\cap p(C_2)\neq \varnothing\), so \(X/{\sim}\) consists of a single component, because \[p(C_1)\cup p(C_2) = p(C_1\cup C_2)=p(X)=X/{\sim},\] as wanted. To show the reverse implication, we use the contrapositive of the statement and show: if no \(x\in C_1\) and \(y\in C_2\) satisfy \(x\sim y\), then \(X/{\sim}\) is not connected. Assume the hypothesis and note that \(p(C_1)\) and \(p(C_2)\) are then disjoint connected subspaces whose union equals all of \(X/{\sim}\) (since \(p\) is surjective).
Moreover each \(p(C_i)\) is open in \(X/{\sim}\), since its preimage is the clopen set \(C_i\). But then the images of \(C_1\) and \(C_2\) under \(p\) form a separation of \(X/{\sim}\), showing that \(X/{\sim}\) is not connected. As wanted.
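Since all the spaces in the projection counterexample at the top of this post are finite, it can even be checked mechanically; a quick illustrative sketch of my own:

```python
from itertools import combinations

X = {"x1", "x2"}; Y = {"y1", "y2"}
tau_X = [set(), X, {"x1"}]
tau_Y = [set(), Y, {"y1"}]

# basis of the product topology: products of open sets
basis = [{(x, y) for x in U for y in V} for U in tau_X for V in tau_Y]

# the open sets of X x Y are all unions of basis elements
tau_prod = {frozenset()}
for r in range(1, len(basis) + 1):
    for combo in combinations(basis, r):
        tau_prod.add(frozenset(set().union(*combo)))

A = {(x, "y2") for x in X}       # A = X x {y2}
p_A = {x for (x, _) in A}        # p(A), the projection to the first factor

print(frozenset(A) in tau_prod)  # False: A is not open in X x Y
print(p_A == X)                  # True:  p(A) = X is open in X
```

Any open set containing \((x_2, y_2)\) must contain all of \(X\times Y\), which is why \(A\) fails to be open even though its image is.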
Abraham-Lorentz? Two electrons a distance $d$ apart have potential energy. Release them, and they will be repelled according to Coulomb's law. I could make an assumption about the associated vector potential, but I think that would ignore some important relativistic modifications. I'm only familiar with relativistically correct potentials of a charge with uniform velocity, and there will be acceleration here. But accelerating charges radiate, and the radiation produces a braking action. How much energy is lost instead of becoming kinetic? $$\frac{e^2}{4\pi\epsilon_0d}=\frac{e^2}{4\pi\epsilon_0x}+(2)\frac{1}{2}m_e\dot{x}^2+E_\mathrm{rad}$$ This is a straightforward algebra problem if $E_\mathrm{rad}$ is assumed to be zero. The potential energy term goes to zero, leaving an expression easy to solve for $\dot{x}$. What's $E_\mathrm{rad}$? I'm not sure where to start without inserting assumptions I suspect might be faulty. At first blush, I'd begin by assuming $\vec{A}=\frac{\mu_0}{4\pi r}\frac{d\vec{p}}{dt}$ where $\vec{A}$ is the vector potential and $\vec{p}$ is the dipole moment. This is my habit for radiation problems. But the dipole moment in this case is $\vec{0}$. What if attention is just placed on one of the electrons? Then $\vec{p}=-ex\hat{i}$, since the electrons move in the $+$ and $-$ $x$ directions: \begin{align} \nabla \times \vec{A} & = \vec{B}=\frac{\mu_0}{4\pi}\left[\frac{-\hat{r}}{r^2}\times (-e)\dot{x}\hat{i}+\frac{-\hat{r}}{r^2c}\times (-e)\ddot{x}\hat{i}\right] \\ \vec{E} & = c\vec{B}\times\hat{r} \\ \vec{S} & = \frac{1}{\mu_0}\vec{E}\times\vec{B} \\ E_\mathrm{rad} & = \iiint\vec{S}\cdot\hat{r}\,R^2\sin(\phi)\,d\phi\, d\theta\, dt \end{align} I'd flesh out the math some more, but I'm pretty confident I made a mistake with my expression for the vector potential. I might try starting with a relativistic expression for a moving charge.
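Not an answer to the vector-potential question, but the size of $E_\mathrm{rad}$ can be estimated numerically under a deliberately crude assumption: non-relativistic motion with Larmor power $P = e^2 a^2 / (6\pi\epsilon_0 c^3)$ for each electron. The initial separation and step size below are illustrative choices of mine, not part of the problem:

```python
import math

# SI constants
e, eps0 = 1.602176634e-19, 8.8541878128e-12
me, c = 9.1093837015e-31, 2.99792458e8
k = e**2 / (4 * math.pi * eps0)   # Coulomb factor e^2/(4 pi eps0)

d = 1e-10          # illustrative initial separation: 1 angstrom
r, v = d, 0.0      # separation and its time derivative
dt = 1e-20         # illustrative time step
E_rad = 0.0

# semi-implicit Euler; the motion stays non-relativistic (v/c ~ 0.01 here)
while r < 100 * d:
    a = k / (me * r**2)                                          # acceleration of EACH electron
    E_rad += 2 * e**2 * a**2 / (6 * math.pi * eps0 * c**3) * dt  # Larmor, both electrons
    v += 2 * a * dt                                              # separation accelerates at 2a
    r += v * dt

KE = 0.25 * me * v**2      # kinetic energy: reduced mass me/2, KE = (1/2)(me/2)v^2
released = k / d - k / r   # potential energy given up so far
print(E_rad / KE)          # radiated fraction is tiny
```

On this run the radiated fraction comes out on the order of $10^{-7}$, which suggests that within the Larmor approximation the braking correction is negligible at atomic-scale separations; whether the relativistic corrections the post worries about change that is exactly the open question.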
Dear Uncle Colin, I'm told that $z=i$ is a solution to the complex quadratic $z^2 + wz + (1+i)=0$, and need to find $w$. I've tried the quadratic formula and completing the square, but neither of those seem to work! How do I solve it? - Don't Even Start Contemplating A Robust Trial & Error Solution Hello, DESCART&ES, and thank you for your message! You'll kick yourself. You're told that when $z=i$, the equation holds, so your starting point should be substituting $z=i$ into the equation and seeing what happens. You get $-1 + wi + 1 + i = 0$, so $w=-1$. For the record, the quadratic formula should work (although it's WAY overkill): $z = \frac{-w \pm \sqrt{w^2 - 4(1+i)}}{2}$. For one of those roots to be equal to $i$, you have: $-w \pm \sqrt{w^2 - 4 - 4i} = 2i$, or $\pm \sqrt{w^2 - 4 - 4i} = w + 2i$. Squaring both sides gives $w^2 - 4 - 4i = w^2 + 4wi - 4$, so $-4i = 4wi$ and again, $w=-1$. Hope that helps! -- Uncle Colin
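The substitution can be confirmed in two lines with Python's complex literals (`1j` is $i$):

```python
z, w = 1j, -1
print(z * z + w * z + (1 + 1j))  # 0j: z = i is indeed a root when w = -1
```

Both the substitution route and the quadratic-formula route above land on the same $w$.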
Trigonometric functions From JSXGraph Wiki Revision as of 18:10, 20 February 2013 by A WASSERMANN The well-known trigonometric functions can be visualized on the circle of radius 1. See http://en.wikipedia.org/wiki/Trigonometric_functions for the definitions. Tangent: [math]\tan x = \frac{\sin x}{\cos x}[/math] Cotangent: [math]\cot x = \frac{\cos x}{\sin x}[/math] Secant: [math]\sec x = \frac{1}{\cos x}[/math] Cosecant: [math]\csc x = \frac{1}{\sin x}[/math] The JavaScript Code
Compactness of certain sets is not needed. For the first part of your question you will find an answer in Definition of locally pathwise connected. But for the sake of completeness let us prove once more that the following are equivalent: (1) $X$ is locally connected (locally path connected), i.e. has a base consisting of open connected (open path connected) sets. (2) Components (path components) of open sets are open. (1) $\Rightarrow$ (2): Let $\mathcal{B}$ be a base for $X$ consisting of open connected (open path connected) sets. Let $U \subset X$ be open and $C$ be a component (path component) of $U$. Consider $x \in C$. By assumption there exists $V \in \mathcal{B}$ such that $x \in V \subset U$. Since $V \cap C \ne \emptyset$, we see that $V \cup C$ is a connected (path connected) subset of $U$ which contains $C$. By definition of $C$ we see that $V \cup C = C$, i.e. $V \subset C$. Hence $C = \bigcup_{V \in \mathcal{B}, V \subset C} V$. In particular, $C$ is open in $X$. (2) $\Rightarrow$ (1): Let $U \subset X$ be open. For any $x \in U$ the component (path component) of $U$ containing $x$ is open, hence $U$ is the union of open connected (open path connected) sets. Now, if $X$ is locally path connected, then it is also locally connected. Hence components and path components of open sets are open. Moreover, components and path components of open sets agree (this applies in particular to $X$ itself). To see this, consider an open $U \subset X$. Each path component $C$ of $U$ is contained in a component $C'$ of $U$. Assume $C \subsetneqq C'$. Let $C_\alpha$ be the path components of $C'$. They are again open, and we must have more than one. Then $C'$ can be decomposed as the disjoint union of two non-empty open sets (e.g. $C_{\alpha_0}$ and $\bigcup_{\alpha \ne \alpha_0} C_\alpha$). This means that $C'$ is not connected, a contradiction. We conclude $C = C'$.
Some tricks I've seen:

Tricks with notable products

$(a + b)^2 = a^2 + 2ab + b^2$ This formula can be used to compute squares. Say that we want to compute $46^2$. We use $46^2 = (40+6)^2 = 40^2+2\cdot40\cdot6 +6^2 = 1600 + 480 + 36 = 2116$. You can also use this method for negative $b$: $197^2 = (200 - 3)^2 = 200^2 - 2\cdot200\cdot3 + 3^2 = 40000 - 1200 + 9 = 38809$. The last subtraction can be kind of tricky: remember to do it right to left, and take out the common multiples of 10: $40000 - 1200 = 100(400-12) = 100(388) = 38800$. The hardest thing here is to keep track of the number of zeroes; this takes some practice! Also note that if we're computing $(a+b)^2$ where $a$ is a multiple of $10^k$ and $b$ is a single-digit number, we already know the last $k$ digits of the answer: they are the digits of $b^2$, padded on the left with zeroes. We can use this even if $a$ is only a multiple of 10: the last digit of $(10a + b)^2$ (where $a$ and $b$ are single digits) is the last digit of $b^2$. So we can write that down (or make a mental note that we have the final digit) and worry about the more significant digits. Also useful for things like $46\cdot47 = 46^2 + 46 = 2116 + 46 = 2162$. When both numbers are even or both numbers are odd, you might want to use $(a+b)(a-b) = a^2 - b^2$. Say, for example, we want to compute $23 \cdot 27$. We can write this as $(25 - 2)(25 + 2) = 25^2 - 2^2 = (20 + 5)^2 - 4 = 20^2 + 2\cdot20\cdot5 + 5^2 - 4 = 400 + 200 + 25 - 4 = 621$.

Divisibility checks

Already covered by Theodore Norvell. The basic idea is that if you represent numbers in a base $b$, you can easily tell if numbers are divisible by $b - 1$, $b + 1$ or prime factors of $b$, by some modular arithmetic.

Vedic math

A guy in my class gave a presentation on Vedic math. I don't really remember everything, and there probably are more cool things in the book, but I remember an algorithm for multiplication that you can use to multiply numbers in your head.
This picture shows a method called lattice or gelosia multiplication, and is just a way of writing our good old-fashioned multiplication algorithm (the one we use on paper) in a nice way. Please notice that the picture and the Vedic algorithm are not tied: I added the picture because I think it helps you appreciate and understand the pattern that is used in the algorithm. The gelosia notation shows this in a much nicer way than the traditional notation. The algorithm the guy explained is essentially the same algorithm as we would use on paper. However, it structures the arithmetic in such a way that we never have to remember too many numbers at the same time. Let's illustrate the method by multiplying $456$ by $128$, as in the picture. We work from right to left: we first compute the least significant digits and work our way up. We start by multiplying the least significant digits: $6 \cdot 8 = 48$: the least significant digit is $8$, remember the $4(0)$ for the next round (of course, I don't mean zero times four here but four, or forty, whatever you prefer: be consistent though; if you include the zero here to make forty, you've got to do it everywhere). $8 \cdot 5(0) = 40(0)$ $2(0) \cdot 6 = 12(0)$ $4(0) + 40(0) + 12(0) = 56(0)$: our next digit (to the left of the $8$) is $6$; remember the $5(00)$. $8 \cdot 4(00) = 32(00)$ $2(0) \cdot 5(0) = 10(00)$ $1(00) \cdot 6 = 6(00)$ $5(00) + 32(00) + 10(00) + 6(00) = 53(00)$: our next digit is a $3$; remember the $5(000)$. Pfff... starting with 2-digit numbers would have been a better idea, but I wanted to do this longer one to make the structure of the algorithm clear. You can do this much faster once you have practiced, since you don't have to write it all down. $2(0) \cdot 4(00) = 8(000)$ $1(00) \cdot 5(0) = 5(000)$ $5(000) + 8(000) + 5(000) = 18(000)$: the next digit is an $8$; remember the $1(0000)$. $1(00) \cdot 4(00) = 4(0000)$ $1(0000) + 4(0000) = 5(0000)$: the most significant digit is a $5$. So we have $58368$.
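The scheme above is a convolution of the two digit sequences with carries propagated as you go; a hypothetical Python transcription of it:

```python
def gelosia_multiply(a, b):
    """Gelosia/schoolbook multiplication on digit lists, least significant first."""
    da = [int(c) for c in str(a)][::-1]
    db = [int(c) for c in str(b)][::-1]
    digits, carry = [], 0
    for k in range(len(da) + len(db) - 1):
        # sum every digit product that lands in position k, plus the carry
        total = carry + sum(da[i] * db[k - i]
                            for i in range(len(da)) if 0 <= k - i < len(db))
        digits.append(total % 10)   # this position's digit
        carry = total // 10         # the part you "remember" for the next round
    while carry:
        digits.append(carry % 10)
        carry //= 10
    return int("".join(map(str, digits[::-1])))

print(gelosia_multiply(456, 128))  # 58368, matching the walkthrough
```

Each loop iteration corresponds to one "round" of the walkthrough: the digit products in position $k$, plus what was remembered, give one output digit and the next carry.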
Quadratic equations

There are multiple ways to solve a quadratic equation in your head. The easiest are quadratics with integer coefficients. If we have $x^2 + ax + c = 0$, try to find $r_{1, 2}$ such that $r_1 + r_2 = -a$ and $r_1r_2 = c$. It is also possible to solve for non-integer solutions this way, but it is usually too hard to actually come up with them. Another way is to try divisors of the constant term. By the rational root theorem (google it, I can't link anymore, sigh), all rational solutions of a monic equation $x^n + \ldots + c = 0$ with integer coefficients are integer divisors of $c$. More generally, for $ux^2 + vx + w = 0$ with integer coefficients, any rational root $\frac{a}{b}$ in lowest terms has $a$ dividing $w$ and $b$ dividing $u$. If this all fails, we can still put the abc-formula in a much easier form: $ ux^2 + vx + w = 0 $ $ x^2 + \frac{v}{u}x + \frac{w}{u} = 0 $ $ x^2 - ax - b = 0 $ (writing $a = -\frac{v}{u}$ and $b = -\frac{w}{u}$) $ x^2 = ax + b $ (This is the form that I found easiest to use!) $ (x - \frac{a}{2})^2 = (\frac{a}{2})^2 + b $ $ x = \frac{a\pm\sqrt{a^2 + 4b}}{2} = \frac{a}{2} \pm \sqrt{(\frac{a}{2})^2 + b} $ I'm sure there are also a lot of techniques for estimating products and the like, but I'm not really familiar with them.

Tricks that aren't really usable but still pretty cool

See this excerpt from Feynman's "Surely you're joking, Mr. Feynman!" about how he managed to amaze some of his colleagues, and also this video from Numberphile.
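The final rearrangement can be sanity-checked on a small example (a hypothetical helper of mine; $a$ and $b$ here are the coefficients in $x^2 = ax + b$):

```python
import math

def solve(a, b):
    """Roots of x^2 = a x + b via x = a/2 ± sqrt((a/2)^2 + b)."""
    half = a / 2
    root = math.sqrt(half**2 + b)
    return half - root, half + root

print(solve(2, 15))  # (-3.0, 5.0): roots of x^2 = 2x + 15, i.e. x^2 - 2x - 15 = 0
```

For mental use the point is that the discriminant $(\frac{a}{2})^2 + b$ is often a small perfect square, as in this example where it is $16$.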
Here's a question that's been creating some doubt for me. Suppose there are two big spheres A and B, of mass $M$ and mass $4M$, each of radius $R$, separated by a distance of $6R$. An object of mass $m$ is projected from the surface of A. What should be the minimum velocity with which the body should be projected so that it just reaches the surface of B?

My attempt: We'll first find the neutral point, where the gravitational forces of the two objects cancel each other out. For this, we give a little displacement to the object/satellite, $d\vec{r}$, in the direction from A to B. Also, a unit vector $\hat{r}$ is assigned in the same direction. Let the gravitational forces of the masses be $\vec{F_A}$ and $\vec{F_B}$. They are given below: $$\vec{F_A} = - G\frac{Mm}{r^2}\hat{r}, \qquad \vec{F_B} = G\frac{4Mm}{x^2}\hat{r}.$$ The negative sign is not present in $\vec{F_B}$ because $\hat{r}$ is in the direction of the force $\vec{F_B}$. Now, at the neutral point P, $\vec{F_A} = -\vec{F_B}$, so $F_A = F_B$: $$G\frac{Mm}{r^2} = G\frac{4Mm}{x^2} \quad \text{[here } x=6R-r\text{]}$$ $$G\frac{Mm}{r^2} = G\frac{4Mm}{(6R-r)^2}$$ $$4r^2 = (6R-r)^2$$ $$r = 2R$$ From this point P ($r=2R$), the gravitational force $F_B$ is sufficient to attract the satellite to the surface of B. Let $W_A$ and $W_B$ be the work done by the gravitational forces $\vec{F_A}$ and $\vec{F_B}$, separately, from the surface of A to the point P.
Work done by the force $F_A$: $$dW_A = \vec{F_A}\cdot d\vec{r} = F_A \, dr \cos 180° = -F_A \, dr \quad \text{---------Eq. (a)}$$ For the limits in equation (a): when the object is at the surface of A, $r = R$; and when the object is at the neutral point P, $r = 2R$. $$\int \, dW_A = \int\limits_{R}^{2R} - F_A \, dr$$ $$W_A = - \int\limits_{R}^{2R} G\frac{Mm}{r^2} \, dr$$ $$W_A = -GMm \int\limits_{R}^{2R} \frac{1}{r^2} \, dr$$ $$W_A = -GMm \biggl[\frac{-1}{r}\biggr]_{R}^{2R} $$ $$W_A = -GMm \biggl[\frac{-1}{2R}-\frac{-1}{R}\biggr] $$ $$W_A = -\frac{GMm}{2R} $$ $$W_A = {\color{violet}{\int\limits_{R}^{2R} - F_A \, dr}} = {\color{pink}{-\frac{GMm}{2R}}} $$ The violet and pink equations are consistent with each other, both showing that the work done by the gravitational force $\vec{F_A}$ is negative. Work done by the force $F_B$: $$dW_B = \vec{F_B}\cdot d\vec{r} = F_B \, dr \cos 0° = F_B \, dr \quad \text{---------Eq. (b)}$$ For the limits in equation (b): when the object is at the surface of A, $r = R \Rightarrow x = 6R - r = 5R$; and when the object is at the neutral point P, $r = 2R \Rightarrow x = 6R - r = 4R$. $$\int \, dW_B = \int\limits_{R}^{2R} F_B \, dr$$ $$W_B = \int\limits_{R}^{2R} G\frac{4Mm}{(6R - r)^2} \, dr$$ $$W_B = 4GMm\int\limits_{5R}^{4R} \frac{1}{x^2} \, dx$$ $$W_B = 4GMm \biggl[\frac{-1}{x} \biggr]_{5R}^{4R} $$ $$W_B = 4GMm \biggl[\frac{-1}{4R}-\frac{-1}{5R}\biggr] $$ $$W_B = 4GMm \biggl[\frac{-1}{20R} \biggr] $$ $$W_B = -\frac{GMm}{5R} $$ $$W_B = {\color{orange}{\int\limits_{R}^{2R} F_B \, dr}} = {\color{cyan}{-\frac{GMm}{5R}}}$$ So, here's my doubt: the orange equation infers that the work done by the gravitational force $\vec{F_B}$ is positive (why?), as it was derived from equation (b), and in that equation the angle between the force $\vec{F_B}$ and the displacement $d\vec{r}$ was 0°. And it is equal to the cyan equation. But in the cyan equation there is a negative sign, which tells us that the work done by the gravitational force $\vec{F_B}$ is negative. Hence, the orange equation is not consistent with the cyan equation. My doubt: why?
Well, there's still a lot to be done, since we need to find the velocity with which the satellite should be projected. Before that, I need to add the two works, $W_A$ and $W_B$, and set their sum equal to the (change in) kinetic energy of the satellite to find the velocity of the body (if I'm not wrong). But I'm stuck here, so please help. OK, I have done a lot in this post, but the doubt is quite similar to the last one asked in the post Work Done by Gravitational Force. The difference is only that in that post I had a problem with the direction of the radial vector $d\vec{r}$; here I don't think there's any problem with that. So, please tell me why the orange equation is not consistent with the cyan equation.
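By the way, here is a quick numerical check I tried (a rough sketch, in made-up units with $G = M = m = R = 1$), just to make the inconsistency concrete:

```python
# Numerical check of the two work integrals, in made-up units G = M = m = R = 1.
# Violet: W_A = integral of (-F_A) dr from R to 2R, with F_A = G M m / r^2
# Orange: W_B = integral of (+F_B) dr from R to 2R, with F_B = 4 G M m / (6R - r)^2

def integrate(f, a, b, steps=100_000):
    """Midpoint-rule approximation of the integral of f from a to b."""
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

W_A = integrate(lambda r: -1.0 / r ** 2, 1.0, 2.0)
W_B = integrate(lambda r: 4.0 / (6.0 - r) ** 2, 1.0, 2.0)
print(W_A, W_B)
```

The first number reproduces the pink value $-\frac{GMm}{2R}$; the second is the orange integral evaluated directly.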
As you read in the section, the concept Einstein and others could not accept was that of non-locality. In their opinion, measurements made by an observer on system A could not influence the results of measurements made by another observer on system B, if the two systems have no way to communicate (light years away, for example). If quantum mechanics is valid, this happens: A and B possess one electron each, the electrons being described by a singlet state. If A measures spin along the axis $\vec{u}$ and gets $|+\rangle$, he projects the state of the two-electron system along that axis. Then he can tell B what he got, and B now knows the spin state of his electron: he can then make predictions (giving probabilities) about his measurements along the axis $\vec{v}$. For example, if $\vec{v}=\vec{u}$, he can say he has a 100% chance of measuring $|-\rangle$. (Note that A had to tell B, and this is why we say quantum mechanics doesn't break the speed of light.) Einstein didn't want to deny all the incredible results of quantum mechanics (of which he was one of the founders), only this particular aspect. The example you quote from Sakurai is a model built to preserve the probabilistic predictions of quantum mechanics while respecting the locality principle (A's decisions cannot influence B's measurements). This is done by introducing so-called "hidden variables": the particle already possesses a property that will determine the outcome of the measurement. If we could know that property a priori, we would be certain of the result. But we can't know it, and that's why we must still use a probabilistic approach to make predictions. So let's say an electron has the property that a measurement along $z$ will yield $|+\rangle$. We say that the electron belongs to the set $z+$. After measuring it, we could say: "OK! That electron already belonged to that set!" But what if we chose to measure along $x$? Turn back time and suppose we didn't measure along $z$.
The electron must have a property that determines the outcome of a measurement along the $x$ axis as well! Say it belongs to $x-$. So the electron has two properties, which will determine the outcome whether we decide to measure along $x$ or along $z$; therefore it belongs to $(z+, x-)$. However, a measurement still strongly modifies the system (the two measurements don't commute), so if we decide to measure $z$ and get $|+\rangle$, the properties of the particle will change, and it is not true that if we then measure $x$ we will get $|-\rangle$. That is what they mean by "Proponents of this model agree that it is impossible to determine $S_x$ and $S_z$ simultaneously". Summing up, $(z+, x-)$ doesn't mean that if we measure $z$ and then $x$ (or vice versa) we will surely get those two results, but only the first measurement we decide to make. Now to the last question: "If we measure $S_z$ and do not measure $S_x$, as is mentioned, how do we assign it to a type of the form $(z+,x-)$?" Actually, we cannot do that working with one particle, in this case of spin measurements. We can only say things like "That electron belonged to the set $z+$", but not "That electron belonged to the set $(z+,x-)$", because that would require two measurements, a procedure that, as I said, modifies the system after the first measurement. What we can do, and this is what Sakurai presents, is to use the fact that the measurements of the two observers A and B are no longer correlated! Let's see what has changed in this new, Einsteinian world. As before, A and B possess one electron each. These two electrons are in the singlet state, but this state is now very different from that of quantum mechanics. Before, we had a couple of particles described by ONE state vector, which tells us the probability of projecting one of the electrons onto $z+$ or $z-$ (let's suppose it is 50-50 for simplicity, so $|\psi\rangle=\frac{1}{\sqrt{2}}(|+-\rangle_z-|-+\rangle_z)$).
Now, instead, the couple possesses a property that determines the result: suppose A's electron $\in z+$; since the couple is in the singlet state, B's particle must be $\in z-$. But QM successfully predicts that if we measure a big set of A's particles, we get 50% $|+\rangle$ and 50% $|-\rangle$. How do we recover this result? If we have $N$ couples, we assume there are $N/2$ couples in which A's electron $\in z+$ and $N/2$ in which A's electron $\in z-$. So we must introduce two populations of couples:$$\qquad A\qquad \qquad B$$$$N/2 \qquad z+\qquad \quad z-$$$$N/2 \qquad z-\qquad \quad z+$$ And what if we decide to characterize the couples with two measurements instead of one? Measure along $z$ and $x$. We then obtain four populations, all arranged to ensure the singlet state: $$\qquad \qquad A\qquad \qquad \qquad B$$$$a)\quad N_1 \qquad (z+,x+)\qquad \quad (z-,x-)$$$$b)\quad N_2 \qquad (z+,x-)\qquad \quad (z-,x+)$$$$c)\quad N_3 \qquad (z-,x+)\qquad \quad (z+,x-)$$$$d)\quad N_4 \qquad (z-,x-)\qquad \quad (z+,x+)$$ Finally, we can see what I meant by "using the fact that the two measurements are no longer correlated". A measures $z$, gets $+$. So the couple was in a) or b). A has not modified the state of B's particle! B measures $x$ and gets $+$. So the couple belonged to b) from the beginning. The probability of getting this result is $N_2/\sum N_i$. Note that A and B can perform their measurements at any time; this procedure does not depend on the order of the measurements, while in QM the chronological order was important! Bell's inequality is derived assuming Einstein's way of seeing things: in certain conditions, it is not valid in the world of quantum mechanics. Its violation was tested experimentally and was the proof that QM is the right theory.
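The population bookkeeping above can be turned into a small sanity check (a toy sketch; the sizes $N_1,\dots,N_4$ are arbitrary made-up numbers):

```python
# Toy bookkeeping for the four hidden-variable populations a)-d).
# Each couple carries predetermined outcomes (+1/-1) for both axes,
# arranged so that same-axis results are always opposite (singlet).
# The sizes N1..N4 are arbitrary made-up numbers.
N1, N2, N3, N4 = 10, 20, 30, 40
populations = [
    ({'z': +1, 'x': +1}, {'z': -1, 'x': -1}, N1),  # a)
    ({'z': +1, 'x': -1}, {'z': -1, 'x': +1}, N2),  # b)
    ({'z': -1, 'x': +1}, {'z': +1, 'x': -1}, N3),  # c)
    ({'z': -1, 'x': -1}, {'z': +1, 'x': +1}, N4),  # d)
]

# Singlet constraint: measurements along the same axis are anti-correlated.
for a, b, _ in populations:
    assert a['z'] == -b['z'] and a['x'] == -b['x']

# A measures z and gets +, B measures x and gets + (in either order):
# only population b) is compatible, so P = N2 / sum(N_i).
total = sum(n for _, _, n in populations)
favourable = sum(n for a, b, n in populations if a['z'] == +1 and b['x'] == +1)
print(favourable / total)  # = N2 / (N1 + N2 + N3 + N4)
```

Because the outcomes are fixed in advance, nothing in the computation depends on who measures first, which is exactly the locality built into the model.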
Hi there! I have a very simple question, which requires an expert in multilinear algebra. $V$ is an $n$-dimensional vector space, and $\omega\in V^\ast\wedge V^\ast$ is a skew-symmetric form on it. Then, mimicking symplectic geometry, I call a subspace $L\subseteq V$ isotropic if $\omega|_L\equiv0$. QUESTION. Let $L$ be a hyperplane in $V$, and $\alpha\in V^\ast$ a covector cutting out $L$, i.e., such that $L=\ker\alpha$: which condition on $\alpha$ corresponds to the fact that $L$ is isotropic? In other words, I'm trying to characterize the covectors $\alpha\in V^\ast$ such that $\omega(x,y)=0$ for all $x,y\in\ker\alpha$. My instinct says that the condition I'm looking for is $\alpha\wedge\omega=0,\quad\quad (*)$ but my poor skills in multilinear algebra are not sufficient to prove this; that's why I'm seeking advice. My question ends here, though I have some further curiosity on this matter, which might be answered by an expert. Namely, in symplectic geometry there is the notion of the Lagrangian Grassmannian, so here there should be that of an isotropic Grassmannian: $I_r(V,\omega)=\{L\in G_r(V)\mid \omega|_L\equiv 0\}$ Where can I find some information (if any) about the properties/applications of this $I_r(V,\omega)$ (smoothness, tangent spaces, canonical structures on it, etc.)? In terms of isotropic Grassmannians, the main question reads: how does one characterize the image of $I_{n-1}(V,\omega)$ under the isomorphism $G_{n-1}(V)\to \mathbb{P}V^\ast$? Here is what puzzles me: $I_{n-1}(V,\omega)$ seems to be a quadric variety, but condition $(*)$ above is linear!
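One way to experiment with the conjectured condition $(*)$ is to test it numerically in coordinates, using the component formula $(\alpha\wedge\omega)_{ijk} = \alpha_i\omega_{jk} + \alpha_j\omega_{ki} + \alpha_k\omega_{ij}$ (a sketch in $n=4$ with made-up data; the helper names are mine):

```python
import itertools
import numpy as np

def wedge_1_2(alpha, omega):
    """Components (alpha ^ omega)_{ijk} = a_i w_{jk} + a_j w_{ki} + a_k w_{ij}."""
    n = len(alpha)
    T = np.zeros((n, n, n))
    for i, j, k in itertools.product(range(n), repeat=3):
        T[i, j, k] = (alpha[i] * omega[j, k]
                      + alpha[j] * omega[k, i]
                      + alpha[k] * omega[i, j])
    return T

def kernel_isotropic(alpha, omega):
    """Check whether omega vanishes identically on ker(alpha)."""
    _, _, Vt = np.linalg.svd(alpha.reshape(1, -1))
    K = Vt[1:].T                      # columns form a basis of ker(alpha)
    return np.allclose(K.T @ omega @ K, 0)

n = 4
omega = np.zeros((n, n))
omega[0, 1], omega[1, 0] = 1.0, -1.0  # omega = e1* ^ e2*

a1 = np.array([0.0, 0.0, 1.0, 0.0])   # ker(a1) contains e1, e2: not isotropic
a2 = np.array([1.0, 0.0, 0.0, 0.0])   # ker(a2) = span(e2,e3,e4): isotropic

for a in (a1, a2):
    print(kernel_isotropic(a, omega), np.allclose(wedge_1_2(a, omega), 0))
# -> False False
# -> True True
```

In this example the two conditions agree, consistently with $(*)$; of course a numerical check is no substitute for a proof.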
Answer $ \vec{r} = 131\hat{i} + 92\hat{j} \ \ km$ Work Step by Step We know that the $x$ position is given by $x = r\cos\theta$ and the $y$ position by $y = r\sin\theta$. We plug in $35^\circ$ for $\theta$ and 160 km for $r$ to get: $ \vec{r} = 131\hat{i} + 92\hat{j} \ \ km$
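The arithmetic can be checked directly (a quick sketch):

```python
import math

r, theta = 160.0, math.radians(35.0)  # 160 km at 35 degrees
x = r * math.cos(theta)               # i-component
y = r * math.sin(theta)               # j-component
print(round(x), round(y))  # -> 131 92
```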
Cilve, your first instance shows that we have to be careful when moving from $\mathcal{D}$ to $[\mathcal{C}, \mathcal{D}]$: you should easily find categories $\mathcal{C}, \mathcal{D}$ such that $\mathcal{D}$ is cartesian closed but the functor category $[\mathcal{C}, \mathcal{D}]$ is not. The other examples follow more or less from a suitable version of the Yoneda lemma. I will show you how to apply the Yoneda lemma to get limits/colimits in a functor category. Let $\mathcal{X}, \mathcal{D}$ be small categories. There exists their cotensor (exponent) $\mathcal{D}^\mathcal{X}$ together with the diagonal functor:$$\mathcal{D} \overset{\Delta}{\rightarrow} \mathcal{D}^\mathcal{X}$$given as the transposition of the cartesian projection $\pi \colon \mathcal{D} \times \mathcal{X} \rightarrow \mathcal{D}$. The limit functor $\mathit{lim}$ is defined as the right adjoint to the diagonal, and the colimit functor $\mathit{colim}$ as the left adjoint to the diagonal. The whole idea is to apply the 2-Yoneda functor to the above diagram. We have $$\hom(-, \mathcal{D}^\mathcal{X}) \approx \hom(-, \mathcal{D})^{\mathcal{X}}$$by the definition of the cotensor. Therefore the above diagram is mapped to the diagram:$$\hom(-, \mathcal{D}) \rightarrow \hom(-, \mathcal{D})^\mathcal{X}$$Since adjunctions are equationally defined, they are preserved by any 2-functor, and in particular by 2-Yoneda. This means that the above transformation has a right/left adjoint transformation provided $\Delta$ does. But a transformation that has a left/right adjoint has a left/right adjoint at each component $\mathcal{C}$. Thus:$$\hom(\mathcal{C}, \mathcal{D}) \rightarrow \hom(\mathcal{C}, \mathcal{D})^\mathcal{X}$$ has a right/left adjoint if $\mathcal{D}$ has $\mathcal{X}$-indexed limits/colimits. We may also see what would go wrong if one tried to apply the above strategy to show that cartesian closedness is inherited by functor categories.
Let us recall that a category $\mathcal{D}$ is cartesian closed if for every global element $x \colon 1 \rightarrow \mathcal{D}$ the canonical functor:$$\mathcal{D} \approx \mathcal{D} \times 1 \overset{\mathit{id}\times x}\rightarrow \mathcal{D} \times \mathcal{D} \overset{\times_\mathcal{D}}{\rightarrow} \mathcal{D}$$has a right adjoint, where $\mathcal{D} \times \mathcal{D} \overset{\times_\mathcal{D}}{\rightarrow} \mathcal{D}$ is the internal cartesian product functor of $\mathcal{D}$. Just like before, we may apply the 2-Yoneda functor to our diagram, obtaining:$$\hom(-, \mathcal{D}) \approx \hom(-, \mathcal{D}) \times 1 \overset{\mathit{id}\times \hom(-, x)}\rightarrow \hom(-, \mathcal{D}) \times \hom(-, \mathcal{D}) \overset{\times_{\hom(-, \mathcal{D})}}{\rightarrow} \hom(-, \mathcal{D})$$and conclude that this transformation has a right adjoint iff the former has. However, the terminal object $1$ is not a (2-)generator in $\mathbf{Cat}^{\mathbf{cat}^{op}}$, thus the adjunctions do not give a good characterisation of internally cartesian closed objects in that category. In particular, $\hom(-, \mathcal{D}) \colon \mathbf{cat}^{op} \rightarrow \mathbf{Cat}$ may not be a cartesian closed (2-)fibration, and one may expect the existence of exponents in a fibre $\hom(\mathcal{C}, \mathcal{D})$ only at "constant" objects induced by global sections $\hom(\mathcal{C}, x) \colon 1 \rightarrow \hom(\mathcal{C}, \mathcal{D})$.
It can be proven that the multiplicative group of integers modulo $N$, defined as $$\mathbb{Z}^\times_N = \{ i\in \mathbb Z : 1\leq i\leq N-1\; \text{ and }\; \gcd(i,N)=1 \}$$ is cyclic for prime $N$, and that if it is of prime order, then every non-identity element of the group is a generator. How can I find such an $N$? I wrote a simple program and brute-forced over $N \in [1, 300\,000]$, but found no group of prime order so far.
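For concreteness, the brute force can be sketched like this (my actual program differed in details; note this version does flag the degenerate case $N = 3$, whose group $\{1,2\}$ has prime order $2$):

```python
from math import gcd

def is_prime(n):
    """Trial-division primality test, fine for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def unit_group_order(N):
    """|Z_N^x|: count the units, exactly as in the definition above."""
    return sum(1 for i in range(1, N) if gcd(i, N) == 1)

# Search prime moduli N whose unit group has prime order.
hits = [N for N in range(2, 1000)
        if is_prime(N) and is_prime(unit_group_order(N))]
print(hits)  # for prime N > 3 the order N - 1 is even and > 2, hence composite
```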
Heisenberg's uncertainty principle states that we cannot know the position and momentum of subatomic particles simultaneously... but what exactly is the size boundary for such a particle? Does such a boundary even exist, or is it simply defined as all particles in the Standard Model? You can use the uncertainty principle for everything; what determines the error of measurement, though, is the mass of an object, not its size. In the classical limit (a very rough estimation indeed) we can write the HUP like this: $$\Delta x \Delta p\geq \hbar/2 \rightarrow \Delta x \Delta v\geq \hbar/(2m) $$ As the mass increases, the right side of the inequality decreases. In the case of macroscopic objects it's safe to treat it as zero (just look at the scale of $\hbar$), so according to the HUP you can measure the velocity and position of such an object simultaneously without a noticeable error. In the case of a microscopic object, though, the right side becomes big enough that measuring position or velocity will "mess with" the other greatly. In other words, the momentum of macroscopic objects is big enough (because of mass) that we don't care about errors on the scale of $\hbar$. Do note that this was not a technical answer, but it should be good enough for a layman in my opinion. The truth is you should solve the Schrödinger equation for macroscopic objects and find the true values of $\Delta x$ and $\Delta p$. You will see that both of them are tiny (most of the time, at least) due to the mass, or other classical limits. Also you might find this helpful: https://en.wikipedia.org/wiki/Correspondence_principle There is no limit. It is valid for all objects irrespective of their sizes and shapes. For objects of practical size this uncertainty is irrelevant, as the measurement error is much greater than the uncertainty. For example, consider a ball of mass 1 kg moving at 1 m/s. Using the uncertainty principle, we get an uncertainty in position of the order of $10^{-36}m$.
This is such a small quantity that the error in measurement will be orders of magnitude greater, and therefore the uncertainty principle is irrelevant. The uncertainty principle is valid for all objects but only relevant for objects with very small momentum.
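The ball example can be checked against the bound $\Delta x \geq \hbar/(2m\,\Delta v)$ (a sketch; the choice $\Delta v = 1\ \mathrm{m/s}$ is an illustrative assumption, so the exact power of ten depends on it):

```python
hbar = 1.054_571_817e-34  # reduced Planck constant, J*s

def min_dx(m, dv):
    """Minimum position uncertainty from dx * (m * dv) >= hbar / 2."""
    return hbar / (2 * m * dv)

# 1 kg ball, taking dv = 1 m/s for illustration:
print(min_dx(1.0, 1.0))        # ~5e-35 m: hopelessly below any measurement error

# an electron (m ~ 9.11e-31 kg) with the same dv:
print(min_dx(9.109e-31, 1.0))  # ~6e-5 m: a very noticeable scale
```

The contrast between the two numbers is the whole point: the bound is universal, but only the small mass makes it observable.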
I am currently working on a problem that I think can be expressed as an integer lattice problem. Given $u \in \mathbb{R}^n$ and a bounded integer lattice $L = \mathbb{Z}^n \cap [-M,M]^n$, I would like to find an integer vector $v \in L$ that minimizes the angle between $u$ and $v$. That is, I would like $$v \in \text{argmax}_{w \in L} \frac{u\cdot w}{\|u\|\|w\|}$$ Here, the objective is maximizing the cosine of the angle between $u$ and $w$ (i.e. minimizing the angle between them); the vectors $u$ and $w$ are said to be "similar" if this quantity is close to 1. I am wondering: Is this problem related to a well-known integer lattice problem (e.g. a closest vector problem)? Could it be solved using existing lattice algorithms (e.g. the LLL algorithm)?
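For small $n$ and $M$ the problem can be solved by exhaustive search over $L$, which is a useful baseline before reaching for lattice machinery (a sketch; the particular $u$ and $M$ are made up):

```python
import itertools
import math

def best_integer_direction(u, M):
    """Brute-force argmax over L = Z^n intersect [-M, M]^n of cos(angle(u, w))."""
    nu = math.sqrt(sum(c * c for c in u))
    best, best_cos = None, -2.0
    for w in itertools.product(range(-M, M + 1), repeat=len(u)):
        nw = math.sqrt(sum(c * c for c in w))
        if nw == 0:
            continue  # the zero vector has no direction
        cos = sum(a * b for a, b in zip(u, w)) / (nu * nw)
        if cos > best_cos:
            best, best_cos = w, cos
    return best, best_cos

# Example: approximate the direction of u = (1, golden ratio) with M = 10.
v, c = best_integer_direction((1.0, (1 + 5 ** 0.5) / 2), 10)
print(v, c)  # the Fibonacci-like (5, 8) gives the closest direction here
```

The search is exponential in $n$, which is exactly why one would hope to map the problem onto CVP-style lattice algorithms instead.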
Top Quark Polarization in $e^+e^-$ Annihilation near Threshold Top quark polarization in $e^+e^-$ annihilation into $t\bar t$ is calculated for linearly polarized beams. The Green function formalism is applied to this reaction near threshold. The Lippmann-Schwinger equations for the $S$-wave and $P$-wave Green functions are solved numerically for the QCD chromostatic potential given by the two-loop formula for large momentum transfer and Richardson's ansatz for intermediate and small momenta. $S$-$P$-wave interference contributes to all components of the top quark polarization vector. Rescattering of the decay products is considered. The mean values $\langle n \ell \rangle$ of the charged lepton four-momentum projections on appropriately chosen directions $n$ in semileptonic top decays are proposed as experimentally observable quantities sensitive to top quark polarization. The results for $\langle n \ell \rangle$ are obtained including $S$-$P$-wave interference and rescattering of the decay products. It is demonstrated that for a longitudinally polarized electron beam a highly polarized sample of top quarks can be produced. Recent calculations are presented of top quark polarization in $t\bar t$ pair production close to threshold. $S$-$P$-wave interference gives contributions to all components of the top quark polarization vector. Rescattering of the decay products is considered. Moments of the four-momentum of the charged lepton in semileptonic top decays are calculated and shown to be very sensitive to the top quark polarization. M. Jezabek, R. Harlander, J.H. Kuehn and M. Peter, Proceedings of the Workshop on "Physics and Experiments at Linear Colliders", Morioka-Appi, Japan, Sept. 1995, pp.
436-446 TTP95-46 TOP QUARK PAIR PRODUCTION IN THE THRESHOLD REGION Recent results on production and decays of polarized top quarks are reviewed. Top quark pair production in $e^+e^-$ annihilation is considered near the energy threshold. For longitudinally polarized electrons the produced top quarks and antiquarks are highly polarized. Dynamical effects originating from strong interactions and Higgs boson exchange in the $t$-$\bar t$ system can be calculated using the Green function method. Energy-angular distributions of leptons in semileptonic decays are sensitive to the polarization of the decaying top quark and to the Lorentz structure of the weak charged current. Marek Jezabek, Proceedings of the EPS Conference on HEP, Brussels, July 1995, J. Lemonne et al. eds., World Scientific 1996, pp. 671-673 TTP95-44 The Scalar Contribution to $\tau\to K\pi\nu_\tau$ We consider the scalar form factor in $\tau \to K\pi \nu_\tau$ decays. It receives contributions both from the scalar resonance $K_0^*(1430)$ and from the scalar projection of off-shell vector resonances. We construct a model for the hadronic current which includes the vector resonances $K^*(892)$ and $K^*(1410)$ and the scalar resonance $K_0^*(1430)$. The parameters of the model are fixed by matching to the $O(p^4)$ predictions of chiral perturbation theory. Suitable angular correlations of the $K\pi$ system allow for a model-independent separation of the vector and scalar form factors. Numerical results for the relevant structure functions are presented. TTP95-42 Dijet Production at HERA in Next-to-Leading Order Two-jet cross sections in deep inelastic scattering at HERA are calculated in next-to-leading order.
The QCD corrections are implemented in a new $ep\rightarrow n$ jets event generator, MEPJET, which allows one to analyze arbitrary jet definition schemes and general cuts in terms of parton 4-momenta. First results are presented for the JADE, the cone and the $k_T$ schemes. For the $W$-scheme, disagreement with previous results and large radiative corrections and recombination scheme ambiguities are traced to a common origin. TTP95-41 Heavy Quark Vacuum Polarization to Three Loops The real and imaginary parts of the vacuum polarization function $\Pi(q^2)$ induced by a massive quark are calculated in perturbative QCD up to order $\alpha_s^2$. The method is described and the results are presented. This extends the calculation by Källén and Sabry from two to three loops. We review current issues in exclusive semileptonic tau decays. We present the formalism of structure functions, and then discuss predictions for final states with kaons, for decays into four pions and for radiative corrections to the decay into a single pion. J.H. Kühn, E. Mirkes and M. Finkemeier, Proceedings of the EPS Conference on HEP, Brussels, July 1995, J. Lemonne et al. eds., World Scientific 1996, pp. 631-635 TTP95-37 Radiation of Light Fermions in Heavy Fermion Production Recent analytic calculations on the rate for the production of a pair of massive fermions in $e^+ e^-$ annihilation plus real or virtual radiation of a pair of massless fermions are discussed. The contributions for real and virtual radiation are displayed separately. The asymptotic behaviour close to threshold is given in a compact form and an application to the angular distribution of massive quarks close to threshold is presented. A.H. Hoang, J.H. Kühn (Karlsruhe U., TTP) and T. Teubner (Durham U.), Proceedings of the EPS Conference on HEP, Brussels, July 1995, J. Lemonne et al. eds, World Scientific 1996, pp.
343-344 TTP95-36 Hadronic Decays of Excited Heavy Quarkonia We construct an effective Lagrangian for the hadronic decays of a heavy excited $s$-wave spin-one quarkonium $\Psi'$ into a lower $s$-wave spin-one state $\Psi$. We show that reasonable fits to the measured invariant mass spectra in the charmonium and bottomonium systems can be obtained within this framework. The mass dependence of the various terms in the Lagrangian is discussed on the basis of a quark model. The electromagnetic corrections to the masses of the pseudoscalar mesons $\pi$ and $K$ are considered. We calculate in chiral perturbation theory the contributions which arise from resonances within a photon loop at order $O(e^2 m_q)$. Within this approach we find rather moderate deviations from Dashen's theorem. TTP95-26 ANGULAR DISTRIBUTIONS OF MASSIVE QUARKS AND LEPTONS CLOSE TO THRESHOLD Predictions for the angular distribution of massive quarks and leptons are presented, including QCD and QED corrections. Recent results for the fermionic part of the two-loop corrections to the electromagnetic form factors are combined with the BLM scale fixing prescription. Two distinctly different scales arise as arguments of $\alpha_s(\mu^2)$ near threshold: the relative momentum of the quarks governing the soft gluon exchange responsible for the Coulomb potential, and a large momentum scale approximately equal to twice the quark mass for the corrections induced by transverse gluons. Numerical predictions for charmed, bottom, and top quarks are given. One obtains a direct determination of $\alpha_{V}(Q^2)$, the coupling in the heavy quark potential, which can be compared with lattice gauge theory predictions. The corresponding QED results for $\tau$ pair production allow for a measurement of the magnetic moment of the $\tau$ and could be tested at a future $\tau$-charm factory. S.J.
Brodsky, A.H. Hoang, J.H. Kühn and T. Teubner TTP95-24 Fragmentation production of doubly heavy baryons Baryons with a single heavy quark are being studied experimentally at present. Baryons with two units of heavy flavor will be abundantly produced not only at future colliders, but also at existing facilities. In this paper we study the production via heavy quark fragmentation of baryons containing two heavy quarks at the Tevatron, the LHC, HERA, and the NLC. The production rate is woefully small at HERA and at the NLC, but significant at $pp$ and $p\bar{p}$ machines. We present distributions in various kinematical variables in addition to the integrated cross sections at hadron colliders. TTP95-20 Three-loop QCD Corrections to $\delta\rho$, $\Delta r$ and $\Delta\kappa$ QCD corrections to electroweak observables are reviewed. Recent results on contributions from the top-bottom doublet of ${\cal O}(\alpha_s^2)$ to $\delta\rho$, $\Delta r$ and $\Delta\kappa$ are presented. It is demonstrated that the first three terms in the expansion in $M_Z^2/M_t^2$ provide an excellent approximation to the exact result. Calculational techniques are briefly discussed. K.G. Chetyrkin, J.H. Kuehn, M. Steinhauser, Proceedings of the Workshop "Perspectives for Electroweak Interactions in e+e- Collisions", B. A. Kniehl, ed., World Scientific 1995, pp. 97-108
TTP95-17 HADRON RADIATION IN TAU PRODUCTION AND THE LEPTONIC Z BOSON DECAY RATE Secondary radiation of hadrons from a tau pair produced in electron-positron collisions may constitute an important obstacle for precision measurements of the production cross section and of branching ratios. The rate for real and virtual radiation is calculated and various distributions are presented. For Z decays a comprehensive analysis is performed which incorporates real and virtual radiation of leptons. The corresponding results are also given for primary electron and muon pairs. Compact analytical formulae are presented for entirely leptonic configurations. Measurements of $Z$ partial decay rates which eliminate all hadron and lepton radiation are about 0.3\% to 0.4\% lower than totally inclusive measurements, a consequence of the ${\cal O}(\alpha^2)$ negative virtual corrections which are enhanced by the third power of a large logarithm. Possibilities for measuring the $J^{PC}$ quantum numbers of the Higgs particle through its interactions with gauge bosons and with fermions are discussed. Observables which indicate CP violation in these couplings are also identified. M. L. Stong, Proceedings of the Workshop "Perspectives for Electroweak Interactions in e+e- Collisions", B. A. Kniehl, ed., World Scientific 1995, pp. 317-328 TTP95-13 QCD Corrections from Top Quark to Relations between Electroweak Parameters to Order $\alpha_s^2$ The vacuum polarization functions $\Pi(q^2)$ of charged and neutral gauge bosons which arise from top and bottom quark loops lead to important shifts in relations between electroweak parameters which can be measured with ever-increasing precision.
The large mass of the top quark allows approximation of these functions through the first two terms of an expansion in $M_Z^2/M_t^2$. The first three terms of the Taylor series of $\Pi(q^2)$ are evaluated analytically up to order $\alpha_s^2$. The first two are required to derive the approximation; the third can be used to demonstrate the smallness of the neglected terms. The paper improves earlier results based on the leading term $\propto G_F M_t^2 \alpha_s^2$. Results for the subleading contributions to $\Delta r$ and the effective mixing angle $\sin^2\theta_{\mathrm{eff}}$ are presented. Recent theoretical results on the production and decay of top quarks are presented. The implications of the new experimental results from the TEVATRON are briefly discussed. Predictions for the top quark decay rate and distributions are described, including the influence of QCD and electroweak radiative corrections. Top production at an $e^+e^-$ collider is discussed with emphasis on the threshold region. The polarization of top quarks in the threshold region is calculated with techniques based on Green's functions for $S$ and $P$ waves. TTP95-10 RADIATION OF LIGHT FERMIONS IN HEAVY FERMION PRODUCTION The rate for the production of a pair of massive fermions in $e^+ e^-$ annihilation plus real or virtual radiation of a pair of massless fermions is calculated analytically. The contributions for real and virtual radiation are displayed separately. The asymptotic behaviour close to threshold and for high energies is given in a compact form. These approximations provide arguments for the appropriate choice of the scale in the ${\cal O}(\alpha)$ result, such that no large logarithms remain in the final answer.
TTP95-09 Approximating the radiatively corrected Higgs mass in the Minimal Supersymmetric Model To obtain the most accurate predictions for the Higgs masses in the minimal supersymmetric model (MSSM), one should compute the full set of one-loop radiative corrections, resum the large logarithms to all orders, and add the dominant two-loop effects. A complete computation following this procedure yields a complex set of formulae which must be analyzed numerically. We discuss a very simple approximation scheme which includes the most important terms from each of the three components mentioned above. We estimate that the Higgs masses computed using our scheme lie within 2 GeV of their theoretically predicted values over a very large fraction of MSSM parameter space. TTP95-08 Rho-omega mixing in chiral perturbation theory In order to calculate the $\rho^0$-$\omega$ mixing we extend the chiral couplings of the low-lying vector mesons in chiral perturbation theory to a Lagrangian that contains two vector fields. We determine the $p^2$ dependence of the two-point function and recover an earlier result for the on-shell expression. TTP95-05 QCD Corrections to Electroweak Annihilation Decays of Superheavy Quarkonia QCD corrections to all the allowed decays of superheavy ground-state quarkonia into electroweak gauge and Higgs bosons are presented. For quick estimates, approximations that reproduce the exact results to within at worst two percent are also given. TTP95-04 Recent Results on QCD Corrections to Semileptonic $b$-Decays We summarize recent results on QCD corrections to various observables in semileptonic $b$ quark decays.
For massless leptons in the final state we present the effects of such corrections on the triple differential distribution of leptons, which are important in studies of polarized $b$ quark decays. Analogous formulas for distributions of neutrinos are applicable in decays of polarized $c$ quarks. In the case of decays with a $\tau$ lepton in the final state the mass effect of the $\tau$ has to be included; in this case we concentrate on the corrections. Andrzej Czarnecki and Marek Jezabek, 138th WE-Heraeus Seminar: Heavy Quark Physics, eds. J. Körner, P. Kroll, World Scientific 1995, 67-74 The three-loop QCD corrections to the $\rho$ parameter from top and bottom quark loops are calculated. The result differs from the one recently calculated by Avdeev et al. As a function of the pole mass the numerical value is given by $\delta\rho=\frac{3G_F M_t^2}{8\sqrt{2}\pi^2}\left(1 - 2.8599\,\frac{\alpha_s}{\pi} - 14.594\,\left(\frac{\alpha_s}{\pi}\right)^2\right)$. TTP95-02 Spectra of baryons containing two heavy quarks The spectra of baryons containing two heavy quarks test the form of the $QQ$ potential through the spin-averaged masses and hyperfine splittings. The mass splittings in these spectra are calculated in a nonrelativistic potential model and the effects of varying the potential are studied. The simple description in terms of light quark
As @DavidRicherby already points out, the confusion arises because different measures of complexity are getting mixed up. But let me elaborate a bit. Usually, when studying algorithms for polynomial multiplication over arbitrary rings, one is interested in the number of arithmetic operations in the ring that an algorithm uses. In particular, given some (commutative, unitary) ring $R$ and two polynomials $f,g \in R[X]$ of degree less than $n$, the Schönhage-Strassen algorithm needs $O(n \log{n} \log{\log{n}})$ multiplications and additions in $R$ in order to compute $fg \in R[X]$ by, roughly, adjoining $n$-th primitive roots of unity to $R$ to get some larger ring $D \supset R$ and then computing the product in $D$ using the Fast Fourier Transform over $D$. If your ring contains an $n$-th root of unity, then this can be sped up to $O(n \log n)$ operations in $R$ by using the Fast Fourier Transform directly over $R$. More specifically, over $\mathbb{Z} \subset \mathbb{C}$, you can do this using $O(n \log n)$ ring operations (ignoring the fact that this would require exact arithmetic over the complex numbers). The other measure that can be taken into account is the bit complexity of an operation. And this is what we are interested in when multiplying two integers of bit length $n$.
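The FFT-based product over $\mathbb{C} \supset \mathbb{Z}$ mentioned above can be sketched as follows (ignoring, as noted, that exact arithmetic over $\mathbb{C}$ is not really available; this illustrates the $O(n \log n)$ route, not the Schönhage-Strassen algorithm itself):

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul(f, g):
    """Multiply two integer polynomials (coefficient lists) via FFT."""
    n = 1
    while n < len(f) + len(g) - 1:
        n *= 2
    F = fft(list(map(complex, f)) + [0j] * (n - len(f)))
    G = fft(list(map(complex, g)) + [0j] * (n - len(g)))
    H = fft([x * y for x, y in zip(F, G)], invert=True)
    # The inverse transform needs division by n; round back to integers.
    return [round((x / n).real) for x in H[: len(f) + len(g) - 1]]

print(poly_mul([1, 2, 3], [4, 5]))  # (1+2X+3X^2)(4+5X) -> [4, 13, 22, 15]
```

The rounding step at the end is exactly where the "exact arithmetic" caveat bites: with large coefficients, floating-point error would have to be controlled carefully.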
Here, the primitive operations are multiplying and adding two digits (with carry). So, when multiplying two polynomials over $\mathbb{Z}$, you actually need to take into account the fact that the numbers that arise during the computation cannot be multiplied using a constant number of primitive operations. This, and the fact that $\mathbb{Z}$ doesn't have an $n$-th primitive root of unity for $n > 2$, prevents you from applying the $O(n \log n)$ algorithm. You overcome this by considering $f,g$ with coefficients from the ring $\mathbb{Z}/\langle 2^n + 1 \rangle$, since the coefficients of the product polynomial will not exceed this bound. There (when $n$ is a power of two), you have (the congruence class of) $2$ as an $n$-th root of unity, and by recursively calling the algorithm for coefficient multiplications, you can achieve a total of $O(n \log n \log \log n)$ primitive (i.e., bit) operations. This then carries over to integer multiplication. For an example that nicely highlights the importance of the difference between ring operations and primitive operations, consider two methods for evaluating polynomials: Horner's method and Estrin's method. Horner's method evaluates a polynomial $f = \sum_{i=0}^n f_i X^i$ at some $x \in \mathbb{Z}$ by exploiting the identity $$f(x) = (\ldots (f_n x + f_{n-1})x + \ldots)x + f_0$$ while Estrin's method splits $f$ into two parts $$H = \sum_{i=1}^{n/2} f_{n/2+i} X^i$$ and $$L = \sum_{i=0}^{n/2} f_{i} X^i$$ i.e., $H$ contains the terms of degree $>n/2$ and $L$ the terms of degree $\leq n/2$ (assume $n$ is a power of two, for simplicity). Then we can calculate $f(x)$ using $$f(x) = H(x)x^{n/2} + L(x)$$ and applying the algorithm recursively. The former, using $n$ additions and multiplications, is proven to be optimal w.r.t. the number of additions and multiplications (that is, ring operations); the latter needs more (at least $n + \log n$).
But, on the level of bit operations, one can (quite easily) show that in the worst case, Horner's method performs $n/2$ multiplications of numbers of size at least $n/2$, leading to $\Omega(n^2)$ many bit operations (this holds even if we assume that two $n$-bit numbers can be multiplied in time $O(n)$), whereas Estrin's scheme uses $O(n \log^c n) = \tilde{O}(n)$ operations for some $c > 0$, which is, by far, asymptotically faster.
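To make the two evaluation schemes concrete, here is a minimal Python sketch; the polynomial, the evaluation point, and the function names are illustrative choices, and the code only checks that both schemes compute the same value (it does not count operations):

```python
# Hedged sketch of Horner's and Estrin's methods on a toy polynomial.
# Coefficient order is f[0] = f_0 (the constant term).

def horner(f, x):
    # Right to left: (...(f_n x + f_{n-1}) x + ...) x + f_0
    acc = 0
    for c in reversed(f):
        acc = acc * x + c
    return acc

def estrin(f, x):
    # Split into low/high halves and recurse: f(x) = H(x) * x^{n/2} + L(x)
    if len(f) == 1:
        return f[0]
    m = len(f) // 2
    return estrin(f[:m], x) + estrin(f[m:], x) * x ** m

f = [3, 0, 2, 5]                       # 3 + 2x^2 + 5x^3
assert horner(f, 10) == estrin(f, 10) == 5203
```

Note that the naive `x ** m` in `estrin` hides the powers of $x$ that a careful implementation would share between recursive calls; the point of the sketch is only the shape of the two recursions.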
Given the equation $$E(t)=A \exp(-bt)$$ where $A$ and $b$ are constants, $E$ is energy and $t$ is time. If there is an error of, say, 1.5 percent in the measured value of $t$, what is the error in the value of the energy? How can I find it? First and foremost, this looks like an error propagation problem. You are given an equation and some measurement of error. Formal error propagation is given approximately by the formula (ignoring all covariance): $$ \sigma_f^2 = \sum_i \sigma_{x_i}^2 \left(\frac{\partial f(x_i)}{\partial x_i} \right)^2$$ where $\sigma_f$ is the total error that should be propagated, $\sigma_{x_i}$ is the error on the given varying element $x_i$ (in your case, the $t$ variable), and $f(x_i)$ is the function through which you are trying to propagate the error (in your case, the equation $E(t) = A \exp\left( -bt \right)$). Because you only have one term in your equation which carries an error, namely $t$, with $\sigma_t = 0.015\,t$ (a 1.5 percent relative error), you can solve the error propagation formula for your new error. The corresponding article on error propagation on Wikipedia has much more detailed and formal information about this subject. Of particular interest is the section of pre-calculated error propagation formulas, which includes the following: $$ f = a \exp\left(bA\right) \qquad \Rightarrow \qquad \sigma_f^2 \approx f^2 \left(b \sigma_A \right)^2$$ given the real value $A$ with error $\sigma_A$, the exactly known real-valued constants $a,b$ with $\sigma_a = \sigma_b = 0$, and $f$ the value of the function itself. Please note that it is hard to do numerical error propagation with the function as provided without the value of $t$ itself and its result $f(t)$. For other functional arrangements, it may be easier.
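As a sketch (the values of $A$, $b$ and $t$ below are made up, not the asker's), the single-variable case of the propagation formula reduces to $\sigma_E = \left|\partial E/\partial t\right|\sigma_t = b\,E\,\sigma_t$ for $b>0$, which the following hypothetical snippet checks numerically:

```python
import math

# Sketch of single-variable error propagation sigma_E = |dE/dt| * sigma_t
# applied to E(t) = A * exp(-b * t).  A, b, t are illustrative values.

def energy(A, b, t):
    return A * math.exp(-b * t)

def energy_error(A, b, t, rel_err_t):
    sigma_t = rel_err_t * t            # 1.5 % error in t  ->  sigma_t = 0.015 * t
    dEdt = -b * energy(A, b, t)        # partial derivative of E with respect to t
    return abs(dEdt) * sigma_t         # first-order propagated error

A, b, t = 2.0, 0.5, 4.0
sigma_E = energy_error(A, b, t, 0.015)
# The relative error in E equals b*t times the relative error in t:
assert abs(sigma_E / energy(A, b, t) - b * t * 0.015) < 1e-12
```

This also makes the answer's closing caveat visible: the relative error in $E$ is $b\,t \times 1.5\%$, so a number requires knowing $t$ (and $b$).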
The augmented Lagrangian method is one of the algorithms in a class of methods for constrained optimization that seek a solution by replacing the original constrained problem by a sequence of unconstrained subproblems. Also known as the method of multipliers, the augmented Lagrangian method introduces explicit Lagrange multiplier estimates at each step. Augmented Lagrangian algorithms are based on successive minimization of the augmented Lagrangian \(\mathcal{L}_A\) with respect to \(x\), with updates of \(\lambda\) and possibly \(\nu\) occurring between iterations. An augmented Lagrangian algorithm for the constrained optimization problem computes \(x_{k+1}\) as an approximate minimizer of the subproblem \[\min \{ {\mathcal{L}_A(x, \lambda_k; \nu_k) : l \leq x \leq u} \},\] where \[\mathcal{L}_A(x, \lambda; \nu) = f(x) + \sum_{i \in \mathcal{E}} \lambda_i c_i(x) + \frac{1}{2} \sum_{i \in \mathcal{E}} \nu_i c_i^2(x)\] includes only the equality constraints. Updating of the multipliers usually takes the form \[\lambda_i \leftarrow \lambda_i + \nu_i c_i (x_k).\] This approach is relatively easy to implement because the main computational operation at each iteration is minimization of the smooth function \(\mathcal{L}_A\) with respect to \(x\) subject only to bound constraints. A large-scale implementation of the augmented Lagrangian approach can be found in the LANCELOT package, which is available on the NEOS Server. LANCELOT solves the bound-constrained subproblem by using special data structures to exploit the (group partially separable) structure of the underlying problem. The OPTIMA and OPTPACK libraries also contain augmented Lagrangian codes.
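As an illustration of the scheme described above (inner minimization of \(\mathcal{L}_A\), outer multiplier update), here is a toy Python sketch on a made-up two-variable problem with a single equality constraint; the step size, penalty \(\nu\), and iteration counts are arbitrary choices, not part of any of the packages mentioned:

```python
# Toy method-of-multipliers sketch for
#   min (x1-1)^2 + (x2-2)^2   s.t.   x1 + x2 - 1 = 0     (solution: x* = (0, 1)).

def f_grad(x):
    return [2 * (x[0] - 1), 2 * (x[1] - 2)]

def c(x):                       # the single equality constraint
    return x[0] + x[1] - 1

def aug_lag_step(x, lam, nu, lr=0.05, inner=500):
    # Approximately minimize L_A(x, lam; nu) = f + lam*c + (nu/2)*c^2
    # by plain gradient descent (no bound constraints in this toy problem).
    for _ in range(inner):
        g = f_grad(x)
        coef = lam + nu * c(x)  # gradient of lam*c + (nu/2)*c^2 is (lam + nu*c)*grad c
        x = [x[0] - lr * (g[0] + coef), x[1] - lr * (g[1] + coef)]
    return x

x, lam, nu = [0.0, 0.0], 0.0, 10.0
for _ in range(20):
    x = aug_lag_step(x, lam, nu)
    lam += nu * c(x)            # multiplier update  lam <- lam + nu * c(x_k)
assert abs(x[0] - 0.0) < 1e-3 and abs(x[1] - 1.0) < 1e-3
```

The point of the sketch is the division of labor: the inner loop only minimizes a smooth function, while the outer loop drives \(\lambda\) toward the true multiplier (here \(\lambda^* = 2\)) so that \(\nu\) never has to be sent to infinity.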
Quadratically-constrained quadratic programming (QCQP) problems are optimization problems with a quadratic objective function and quadratic constraints. The general QCQP problem has the following form: \[ \begin{array}{ll} \mbox{minimize} & q_0(y) \\ \mbox{subject to} & q_i(y) \leq 0 \, \forall i = 1, \cdots, m \end{array} \] where \(q_i(y) = \frac{1}{2} y^t Q_iy + y^tb_i + c_i, \, y \in R^n\) for all \(i = 0, 1, \cdots, m\). The problem is convex if \(Q_i\) is positive semidefinite (\(Q_i \succeq 0 \)) for all \(i\), in which case an elegant duality structure is available.
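As a small illustration of the convexity condition (every \(Q_i \succeq 0\)), the following hypothetical Python helper checks positive semidefiniteness for symmetric \(2 \times 2\) matrices via the diagonal entries and the determinant; the matrix values are made up:

```python
# Hedged sketch: a QCQP of the form above is convex when every Q_i is
# positive semidefinite.  For a symmetric 2x2 matrix this reduces to
# nonnegative diagonal entries and a nonnegative determinant.

def is_psd_2x2(Q):
    (a, b), (c, d) = Q
    assert abs(b - c) < 1e-12          # the sketch expects a symmetric matrix
    return a >= 0 and d >= 0 and a * d - b * b >= 0

def is_convex_qcqp(Qs):
    # Qs = [Q_0, Q_1, ..., Q_m]: the objective plus the constraint matrices
    return all(is_psd_2x2(Q) for Q in Qs)

convex = [[[2, 0], [0, 1]], [[1, 1], [1, 2]]]       # all matrices PSD
nonconvex = [[[2, 0], [0, 1]], [[1, 2], [2, 1]]]    # second det = 1 - 4 < 0
assert is_convex_qcqp(convex)
assert not is_convex_qcqp(nonconvex)
```

For \(n > 2\) one would instead check eigenvalues or attempt a Cholesky factorization; the \(2 \times 2\) case is just the smallest instance of the same criterion.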
Navier-Stokes equations Revision as of 15:34, 15 August 2010 The Navier-Stokes equation is an equation in fluid mechanics that states: <math>\rho \frac{D \mathbf{V}}{D t} = -\nabla p + \mu \nabla^2 \mathbf{V} + \rho \mathbf{g}</math> where <math>\nabla p</math> is the pressure gradient (the partial derivative of pressure in each dimension), <math>\frac{D \mathbf{V}}{D t}</math> is the total derivative of the velocity, <math>\mu</math> is the dynamic viscosity of the fluid, <math>\rho</math> is the density of the fluid, and <math>\mathbf{g}</math> is the gravitational acceleration. [1] References A.J. Smits, "A Physical Introduction to Fluid Mechanics," John Wiley & Sons, ISBN 0-471-25349-9
The language is regular.

Hint: cast out nines.

Proof idea

For $a=9$ and $b < 9$, build an automaton with $9$ states labeled $0$ through $8$. $0$ is the initial state, and the one final state is $b$. From state $s$, on digit $d$, transition to state $(s + d) \;\mathrm{mod}\; 9$. To handle other values of $a$ that are coprime with $10$, group digits in packets to find some $k$ such that $a$ divides $10^k-1$ (e.g. take $k=3$ if $a=37$ because $999 = 27 \times 37$). To handle values of $a$ whose only prime factors are $2$ and $5$, note that it's all about a finite number of digits at the end. To generalize to all values of $a$ and $b$, use the fact that union and intersection of regular languages are regular, that finite languages are regular, and that the multiples of $a_1 \cdot a_2$ are exactly the multiples of both when $a_1$ and $a_2$ are coprime. Note that we use whichever technique is convenient; the three main elementary techniques (regular expressions, finite automata, set-theoretic properties) are all represented in this proof.

Detailed proof

Let $a = 2^p 5^q a'$ with $a'$ coprime with $10$. Let $M' = \{\overline{a'\,x+b} \mid x\in\mathbb{Z} \wedge a'\,x+b \ge 0\}$ and $M'' = \{\overline{2^p 5^q\,x+b} \mid x\in\mathbb{Z} \wedge 2^p 5^q\,x+b \ge 0\}$. By elementary arithmetic, the numbers equal to $b$ modulo $a$ are exactly the numbers equal to $b$ modulo $a'$ and to $b$ modulo $2^p5^q$, so $M \cap \{\overline{x} \mid x \ge b\} = M' \cap M'' \cap \{\overline{x} \mid x \ge b\}$. Since the intersection of regular languages is regular, and $\{\overline{x} \mid x \ge b\}$ is regular because it is the complement of a finite (hence regular) language, if $M'$ and $M''$ are also regular, then $M \cap \{\overline{x} \mid x \ge b\}$ is regular; and $M$ is therefore regular since it is the union of that language with a finite set. So to conclude the proof it suffices to prove that $M'$ and $M''$ are regular. Let us start with $M''$, i.e. numbers modulo $2^p 5^q$.
The integers whose decimal expansion is in $M''$ are characterized by their last $\mathrm{max}(p,q)$ digits, since changing digits further left means adding a multiple of $10^{\mathrm{max}(p,q)}$, which is a multiple of $2^p 5^q$. Hence $0^* M'' = \aleph^* F$ where $\aleph$ is the alphabet of all digits and $F$ is a finite set of words of length $\mathrm{max}(p,q)$, and $M'' = (\aleph^* F) \cap ((\aleph \setminus \{0\}) \aleph^*)$ is a regular language.

We now turn to $M'$, i.e. numbers modulo $a'$ where $a'$ is coprime with $10$. If $a' = 1$ then $M'$ is the set of decimal expansions of all naturals, i.e. $M' = \{0\} \cup ((\aleph \setminus \{0\}) \aleph^*)$, which is a regular language. We now assume $a' > 1$. Let $k = \varphi(a')$, where $\varphi$ is Euler's totient function. By Euler's theorem, $10^{\varphi(a')} \equiv 1 \mod a'$, which is to say that $a'$ divides $10^k-1$. We build a deterministic finite automaton that will recognize $0^* M'$ as follows:

The states are $[0,k-1] \times [0,10^k-2]$. The first part represents a digit position and the second part represents a number modulo $10^k-1$.

The initial state is $(0,0)$.

There is a transition labeled $d$ from $(i,u)$ to $(j,v)$ iff $v \equiv d\, 10^i + u \mod 10^k-1$ and $j \equiv i + 1 \mod k$.

A state $(i,u)$ is final iff $u \equiv b \mod a'$ (note that $a'$ divides $10^k-1$).

The state $(i,u)$ reached from a word $\overline{x}$ satisfies $i \equiv |\overline{x}| \mod k$ and $u \equiv x \mod 10^k-1$. This can be proved by induction over the word, following the transitions of the automaton; the transitions were chosen precisely for this, using the fact that $10^k \equiv 1 \mod 10^k-1$. Thus the automaton recognizes the decimal expansions (allowing initial zeroes) of the numbers of the form $u + y\, 10^k$ with $u \equiv b \mod a'$; since $10^k \equiv 1 \mod a'$, the automaton recognizes the decimal expansions of the numbers equal to $b$ modulo $a'$, allowing initial zeroes, which is $0^* M'$. This language is thus proved regular.
Finally, $M' = (0^* M') \cap ((\aleph \setminus \{0\}) \aleph^*)$ is a regular language. To generalize to bases other than $10$, replace $2$ and $5$ above by all the prime factors of the base.

Formal proof

Left as an exercise for the reader, in your favorite theorem prover.
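The $a=9$ automaton from the hint is small enough to sketch directly in code; the choice $b=3$ and the sample strings below are illustrative:

```python
# "Cast out nines" automaton: states 0..8, transition (s, d) -> (s + d) mod 9,
# accepting in state b.  Defaults a=9, b=3 are example values.

def accepts(digits, a=9, b=3):
    state = 0                      # the initial state
    for ch in digits:
        state = (state + int(ch)) % a
    return state == b % a

# Decimal strings of numbers n with n ≡ 3 (mod 9):
assert accepts("12")       # 12 = 9 + 3
assert accepts("1002")     # 1002 = 111 * 9 + 3
assert not accepts("14")   # 14 ≡ 5 (mod 9)
```

The same code also handles any divisor $a$ of $9$, since the digit-sum trick only relies on $10 \equiv 1 \pmod a$; the general construction in the detailed proof is what replaces it for arbitrary $a$.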
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu January 31, Oanh Nguyen, Princeton Title: Survival and extinction of epidemics on random graphs with general degrees Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly. Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University Title: When particle systems meet PDEs Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$. February 14, Timo Seppäläinen, UW-Madison Title: Geometry of the corner growth model Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah). February 21, Diane Holcomb, KTH Title: On the centered maximum of the Sine beta process Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette. Title: Quantitative homogenization in a balanced random environment Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process.
In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process, a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison). Wednesday, February 27 at 1:10pm Jon Peterson, Purdue Title: Functional Limit Laws for Recurrent Excited Random Walks Abstract: Excited random walks (also called cookie random walks) are models for self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 14, TBA March 21, Spring Break, No seminar March 28, Shamgar Gurevitch, UW-Madison Title: Harmonic Analysis on GLn over finite fields, and Random Walks Abstract: There are many formulas that express interesting properties of a group $G$ in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of $G$ and an element $g$ of $G$. For example, Diaconis and Shahshahani stated a formula of this type for analyzing $G$-biinvariant random walks on $G$. It turns out that, for classical groups $G$ over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues? Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson... Hmm, it seems we cannot just superimpose gravitational waves to create standing waves The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talks about gravitational waves emitting from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as I imagine the hypothetical scenario of placing a node at the end of the vertical white line [The Cube] Regarding The Cube, I am thinking about an energy level diagram like this, where the infinitely degenerate level is the lowest energy level when the environment is also taken into account The idea is that if the possible relaxations between energy levels are restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy, high energy system confined in a compact volume Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings @Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too.
The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer). Hi @EmilioPisanty, it's great that you want to help me clear up confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directed at the close voter, not the question in meta. When you mention my original post, you think that it's a hopeless mess of confusion? Why? Other than being off-topic, it seems clear enough to understand, doesn't it? Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile versions of Safari under the latest OS: \vec{x} results in the arrow being displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks. I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to post their homework questions, in wh... @0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% sure that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P) Why were the SI unit prefixes, i.e. \begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align} chosen so that the exponents are multiples of 3? Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and $\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$ have angle $\theta$ between them, then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is $$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\... @ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there. @CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
The estimable @colinthemathmo suggests a method for estimating the radius of the earth, which he credits to a sundial expert friend named Mike: Stand on a wall, perhaps two metres high, and wait for sunrise. When you see the sun just peak above the horizon, start the stopwatch, and jumpRead More → Dear Uncle Colin, I'm pretty good with quadratic inequalities and pretty good with absolute values, but when I get the two together, I get confused. For example, I struggled with the set of values satisfying $x^2 -\left| 5x-3\right| < 2 + x$. Can you help? - Nasty Absolute Value InequalitiesRead More → It's good to see @srcav back in the twitter and blogging fold - he's been missed! As part of his comeback, he shared this lovely geometry puzzle: Assuming the situation is symmetrical (which it needs to be to get a sensible solution), there are - as usual - several waysRead More → Dear Uncle Colin, When I solve $2\tan(2x)-2\cot(x)=0$ (for $0 \le x \le 2\pi$) by keeping everything in terms of $\tan$, I get four solutions; if I use sines and cosines, I get six (which Desmos agrees with). What am I missing? - Trigonometric Answers Not Generated - Expecting 'Nother TwoRead More → In this month's edition of Wrong, But Useful, @reflectivemaths and I are joined by special guest co-host @dragon_dodo, who is Dominika Vasilkova in real life. We discuss: What maths appeals to a physicist. Dominika's number of the podcast: $0.110001000000000000000001...$, Liouville's constant, which is $\sum_{n=1}^\infty 10^{-n!}$, the first constant to beRead More → Zeke and Monty play a game. They repeatedly toss a coin until either the sequence tail-tail-head (TTH) or the sequence tail-head-head (THH) appears. If TTH shows up first, Zeke wins; if THH shows up first, Monty wins. What is the probability that Zeke wins? My first reaction to this questionRead More → Dear Uncle Colin, I was asked to find the tangent to the curve $r=\frac{8}{\theta}$ at the point where $\theta = \frac{\pi}{2}$. 
I worked out $\dydx = \frac{ \frac{8 \left(\theta \cos(\theta)-\sin(\theta)\right)}{\theta^2}}{\frac{-8\left(\theta \sin(\theta)+\cos(\theta)\right)}{\theta^2} }$, which simplifies to $ -\frac{\theta \cos(\theta)-\sin(\theta)} {\theta \sin(\theta)+\cos(\theta)}$. Evaluated at $\theta = \frac{\pi}{2}$, that gives $\dydx=\frac{2}{\pi}$ and a... As the student was wont to do, he idly muttered "So, that's $\cos(10^\circ)$..." The calculator, as calculators are wont to do when the Mathematical Ninja is around, suddenly went up in smoke. "0.985," with a heavy implication of 'you don't need a calculator for that'. As the student was wont... Dear Uncle Colin, If you know all of the factors of $n$, can you use that to find all of the factors of $n^2$? For example, I know that 6 has factors 1, 2, 3 and 6. Its square, 36, has the same factors, as well as 4, 9, 12... I'm a big advocate of error logs: notebooks in which students analyse their mistakes. I recommend a three-column approach: in the first, write the question, in the second, what went wrong, and in the last, how to do it correctly. Oddly, that's the format for this post, too. The question...
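The tangent slope for the polar curve $r = 8/\theta$ can be sanity-checked numerically by parametrising $x = r\cos\theta$, $y = r\sin\theta$ and taking finite differences; a sketch (function name mine):

```python
import math

def slope_at(theta, h=1e-5):
    # parametrise the polar curve r = 8/theta in Cartesian coordinates
    x = lambda t: 8 * math.cos(t) / t
    y = lambda t: 8 * math.sin(t) / t
    # central differences: dy/dx = (dy/dtheta) / (dx/dtheta)
    return (y(theta + h) - y(theta - h)) / (x(theta + h) - x(theta - h))
```

At $\theta = \pi/2$ this agrees with the quoted value $\dydx = 2/\pi$.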
Homework Statement: Two identical uniform triangular metal plates held together by light rods. Calculate the x coordinate of the centre of mass of the two-plate object, given that the mass per unit area of the plates is 1.4 g/cm² and the total mass is 25.2 g. Homework Equations: Not sure where I went wrong here; can anyone help me out on this? Thanks. EDIT: Reformatted my request. Diagram: So as far as I know, to calculate the centre of mass for x, I have to use the following equation: COM(x): ##\frac{1}{M}\int x \,dm## And I also figured that to find the centre of mass, I will have to sum the mass of the 2 plates by 'cutting' them into strips, giving me the following formula: ##dm = \mu \, y \, dx## where ##\mu## is the mass per unit area. So subbing the above equation into the first, I get: ##\frac{1}{M}\int x (\mu \, y \, dx) = \frac{\mu}{M}\int xy \,dx## Since the 2 triangles are identical, I can assume the triangle on the left has equation ##y = \frac{1}{4}x + 4##. This is the part where I'm not sure. Do I calculate each triangle's centre of mass, sum them and divide by 2? Or am I supposed to use another method? Regardless, supposing I am correct: COM for right triangle: ##\frac{\mu}{M}\int_{4}^{16}x(\frac{1}{4}x+4) \,dx = 8## (expected) COM for left triangle: ##\frac{\mu}{M}\int_{-11}^{1}x(-\frac{1}{4}x+4) \,dx = 5.63\ldots## Total COM = ##\frac{8+5.63}{2}##, which is wrong :( Thanks
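Without the original diagram I can't reproduce the poster's limits, but the strip method itself can be checked on a triangle whose centroid is known; a sketch with made-up dimensions (all names mine):

```python
def strip_com_x(y, a, b, n=100_000):
    # COM(x) = (1/M) * integral of x dm, with dm = mu * y(x) * dx;
    # mu cancels, so integrate x*y(x) and y(x) by the midpoint rule
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]
    moment = sum(x * y(x) for x in xs) * h
    mass = sum(y(x) for x in xs) * h
    return moment / mass

# right triangle with vertices (0,0), (3,0), (3,2): y(x) = 2x/3 on [0,3];
# its centroid should sit at x = (2/3) * 3 = 2
com = strip_com_x(lambda x: 2 * x / 3, 0.0, 3.0)
```

For two identical plates the combined COM is indeed the average of the two individual COMs (in general it is the mass-weighted average), so the "sum and divide by 2" step is valid here; the error must lie in the strip equations or the integration limits.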
Turboprop propellers can be ducted for extra thrust. What are the criteria for determining that the engine is no longer a ducted propeller design, but a turbofan? Does there have to be a type of blade design, a fully separate nacelle/pod, a certain number of blades? Or are all turbofans technically shrouded propellers with some augmented thrust from the turbine exhaust? tl;dr They're pretty much all considered turbofans, with a few exceptions from the '70s such as this experimental Britten-Norman Islander. The key difference is that a prop can be considered a separate entity (add-on) to the turbine for a turboprop, but a fan is an integrated component for a turbofan. Slightly longer answer After a fair bit of thought and research I hope I've managed to come up with a satisfactory answer, though to start with a disclaimer: the aviation industry is far too fond of merging and creating new words to describe new engine architecture. Take Open Rotor vs. PropFan, for example. To get technical about it, you could look at the similar/reverse argument about the difference between an open rotor turbofan and a turboprop, as addressed by the EASA. If you look at Appendix 1: Open Rotor Definition (Page 86) of that document, they outlined the following key difference: "Open rotor module that cannot be distinguished as a separate entity." However, the following was the agreed definition, which is still rather ambiguous: "A Turbine Engine featuring contra-rotating fan stages not enclosed within a casing." I think the first one is more key: the prop can be considered as a separate entity, attached to the front of a turbine and powered by a gearbox. A fan is an integral part of the engine and cannot really be considered a separate entity, as it actually forms part of the low pressure shaft and is considered the first stage of the low pressure compressor. In general (there are often exceptions to any rule), a turboprop unit is likely to have a gearbox between the turbine and the propeller.
The propeller is likely to be capable of varying its pitch. Also, the propeller may be a set of two contra-rotating propellers in the case of a high power turboprop. A turbofan engine is unlikely to have these features, although PW does have the GTF geared turbofan. The difference between a ducted propeller and a turbofan is mainly determined by the difference between a propeller and a fan. A propeller has relatively few blades, which are relatively long and slender. A fan has many blades, with a relatively large chord. Like a household fan. A parameter that captures blade count and chord relative to blade length is the disk solidity $\sigma$: the area of all blades summed together, divided by the area of the circle swept by the blade tips: $$\sigma = \frac{\text{blade area}}{\text{disk area}} = \frac{A_b}{A} = \frac{N_b c R}{\pi R^2} = \frac{N_b c}{\pi R}$$ with $N_b$ = number of blades, $c$ = blade chord, $R$ = blade radius. There does not seem to be a defined transition point of $\sigma$ above which we're talking about a fan. The 8-bladed propeller of the A400M, with a blade radius of 2.6 m, has a solidity ratio of about 0.3, which is amongst the highest in the world. We instantly recognise it as being a propeller. The fan of a geared turbofan like the PW 1000G is instantly recognisable as a fan, with its 20 relatively fat blades and a $\sigma$ close to 1. Note that both are driven by a gearbox for obtaining a beneficial tip speed. Or are all turbofans technically shrouded propellers with some augmented thrust from the turbine exhaust? Indeed, technically a fan is a type of propeller. Turboprops also get thrust directly from the turbine exhaust. 1 By Rafael Luiz Canossa - IMG_9975, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=55662586
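The solidity formula is easy to play with numerically; a sketch (the A400M mean chord below is a rough assumption chosen to reproduce the quoted $\sigma \approx 0.3$, not published data):

```python
import math

def disk_solidity(n_blades, chord, radius):
    # sigma = total blade area / disk area = N_b * c / (pi * R)
    return n_blades * chord / (math.pi * radius)

# assumed mean chord of ~0.3 m for the 8-bladed, R = 2.6 m A400M prop
sigma_prop = disk_solidity(8, 0.3, 2.6)   # roughly 0.3: a propeller
```

A fan with many wide blades pushes $\sigma$ towards 1, which is the qualitative distinction the answer draws.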
A sequence is just a function of the type $f:\mathbb{N} \to \mathbb{R}$. It is common to list the elements of this sequence as $$(a_1,a_2,a_3,\ldots,a_n)\,.$$ One example is the sequence of all even numbers: $(0,2,4,6,8,10,\ldots)$. However, some sequences may be defined in a different form when there is no easy formula for expressing the terms, like the sequence of prime numbers $(2,3,5,7,11,13,\ldots)$, defined verbally. A series means a summation of terms; this is clear if we use sigma notation, in which the terms are defined by a law that resembles a sequence. For example: $$\sum_{i=1}^{n} i^2$$ is the sum of the sequence of squares $(1,4,9,16,25,\ldots,n^2)$. We can expand the RHS for the sake of clarity: $$\sum_{i=1}^{n} i^2=1^2+2^2+3^2+4^2+\cdots+n^2.$$ Often the series has a formula that depends only on the upper limit, so we can easily find the result without adding the terms; for the example above we know that $$\sum_{i=1}^{n} i^2=1^2+2^2+3^2+4^2+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}$$ There is also the term infinite series, which is simply the limit $$\lim_{n\to\infty}\sum_{i=1}^{n} a_i\,;$$ from this many concepts arise, but that is another discussion. For didactic purposes, you can visualize a sequence as a stone path, where each stone has a number; when talking about sequences, what matters is what stone you are on, like number 3 or, more generally, $a_n$. In the same example, a series is the path you travel to reach a given stone, i.e., if you are supposed to go to the stone numbered 9 (from the origin), the series is the sum of the stones you stepped on, in this case $(1+2+3+\ldots+8+9)$, or $(a_1+a_2+a_3+\ldots+a_8+a_9)$. Hope I could help you more. I could, so I edited.
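The closed form for the sum of squares can be verified directly against term-by-term addition; a small sketch (function name mine):

```python
def sum_of_squares(n):
    # closed form that depends only on the upper limit: n(n+1)(2n+1)/6
    return n * (n + 1) * (2 * n + 1) // 6

# compare against adding the terms of the sequence (1, 4, 9, ..., n^2)
for n in range(1, 50):
    assert sum_of_squares(n) == sum(i * i for i in range(1, n + 1))
```

This is exactly the sequence/series distinction: the generator `i * i` produces the sequence of terms, and the sum over it is the series.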
I am studying relativity and was working on deriving stellar aberration using relativity. My derivation is as follows: Suppose that the observer is traveling at a velocity $v$ in the $x$-direction. Consider a light beam coming from a star to the ship's crew, which has velocity $c$ and components $u_x=c\cos\theta$ and $u_y=c\sin\theta$. Using the Lorentz transformations, assuming that the ship is at rest and that the star is moving in the $x$-direction at a velocity of $-v$, we see that $u'_x=\frac{u_x+v}{1+u_xv/c^2}$ and $u_y'=\frac{u_y}{\gamma(1+u_x v/c^2)}$, where $\gamma=\frac{1}{\sqrt{1-(v/c)^2}}$. Then the new angle is given by $$\tan\theta'=\frac{u_y'}{u'_x}=\frac{u_y}{\gamma(u_x+v)}=\frac{\sin\theta}{\gamma(\cos\theta+(v/c))}.$$ For most angles, this has the desired effect; it makes the angle smaller. However, at the angle $\theta=\arccos(-v/c)$, we encounter a discontinuity, such that the tangent of $\theta'$ is not defined. I am a little hesitant to say that this would thus correspond with $\theta'=\pi/2$, so that $\arccos(-v/c)$ is sent to $\pi/2$. Is this indeed the case? Additionally, after this (i.e. for angles greater than $\arccos(-v/c)$), $\tan\theta'$ becomes negative. Moreover, for angles $\theta$ near $\arccos(-v/c)$, we can choose $\theta$ close enough to $\arccos(-v/c)$ that $\tan\theta'$ is arbitrarily large (in magnitude). But since these angles are naturally modded by $2\pi$, it would seem then that the apparent position of the star could be just about anywhere as observed by the observer. This seems very fishy, and occurs as well with angles just below $\arccos(-v/c)$. How should I interpret this? Indeed, how should I interpret the fact that the angles are negative? Does this mean that the star's position is reflected about the $x$-axis? Note that I am not making (and do not want to make) assumptions about the relative size of $v$; $v$ can be arbitrarily large (while less than $c$). Thank you for your time.
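One way to resolve the quadrant ambiguity the asker runs into is to keep the signs of $u'_x$ and $u'_y$ separately (e.g. with `atan2`) rather than working with $\tan\theta'$ alone; a sketch, with $\beta = v/c$ (function name mine):

```python
import math

def aberrated_angle(theta, beta):
    # transform the velocity components, then recover theta' with atan2,
    # which keeps the quadrant information that tan(theta') alone loses
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    ux = math.cos(theta) + beta        # proportional to u'_x
    uy = math.sin(theta) / gamma       # proportional to u'_y
    return math.atan2(uy, ux)

# at theta = arccos(-beta) the x-component vanishes, so theta' = pi/2
```

With this convention $\theta'$ varies continuously from $0$ to $\pi$ as $\theta$ does: the "negative tangent" region simply corresponds to $\theta'$ in the second quadrant (the star still appears above the $x$-axis, not reflected below it), and $\arccos(-v/c)$ is indeed sent to exactly $\pi/2$.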
To answer your questions, Do the terms (J2, etc.) change with time? Yes, they do. The Earth is still rebounding from the end of the last glaciation, and the Earth's rotation rate is decreasing due to transfer of angular momentum to the Moon's orbit. The end result is that $J_2$ is decreasing by about 3 parts per million per century. You don't need to model that, however. The gravity models that GMAT uses are static models. You can enable tides, making the gravity model somewhat dynamic, but the dynamics are purely cyclical. There are no secular terms. I found the other coefficients here. How do I also add them to the differential equation? Find a software package that uses spherical harmonics to model gravitation. Do not roll your own. And if you do use such a package, make sure the coefficients you use are consistent with the model. The coefficients you found are denormalized. Most modern spherical harmonics models expect fully normalized coefficients. A much trickier issue lies in the $\bar C_{20}$ term. Some of the Earth's equatorial bulge results from tidal interactions with the Moon and the Sun. The spherical harmonics gravitational coefficients are typically computed as if the Moon and Sun were not present. These tide-free models omit the contribution of the Moon and Sun to the equatorial bulge, and hence to $J_2$. (Technically, the frequency coefficients used to model the Earth tides have a zero frequency term that is nonzero.) The Earth spherical harmonics coefficients used by GMAT are tide-free. If you want to model how satellites precess due to the tidal bulge, but don't want to use a full-blown Earth tide gravity model, it's better to add a tiny bit to the $\bar C_{20}$ term. See Section 6, Gravitation, of IERS Technical Note 36 for details. Why does the shown equation give 1 km error in 1 day in comparison with GMAT (configuration is below)? The GMAT JGM-2 gravity model is in the file JGM2.cof of the GMAT source code tree.
The first few non-comment lines in this file are POTFIELD 70 70 1 3.98600441500000e+14 6.37813630000000e+06 1.00000000000000e+00 RECOEF 2 0 -4.84165390000000e-04 RECOEF 2 1 -1.86987640000000e-10 1.19528010000000e-09 RECOEF 2 2 2.43908370000000e-06-1.40010930000000e-06 You need to use compatible values to have your Julia integrator be consistent with GMAT's implementation of JGM-2. The first value, $3.986004415\times10^{14}$, is the TT-compatible value of the Earth's gravitational coefficient in $\text{m}^3/\text{s}^2$. You should be using this rather than the WGS-84 value of $3.986004418\times10^{14}$. The WGS-84 model is aimed at GPS; it is relativistically correct and hence uses a TDB-compatible value of $GM_\oplus$. The second value, $6.3781363\times10^{6}$, is the equatorial radius of the Earth, in meters. You should be using this rather than the WGS-84 value if you want to try to match the results from GMAT. The next three lines contain the normalized values of the cosine and sine coefficients of the JGM-2 gravitational potential model for the Earth. The first, -4.84165390000000e-04, is the tide-free value of the fully normalized $\bar C_{20}$ coefficient. $\bar C_{20}$ is directly related to $J_2$ via $J_2 = -\sqrt{5}\,\bar C_{20}$. From poking at the GMAT source code, there is a correction to the $\bar C_{20}$ gravity coefficient if that term includes the permanent tide and the user enables Earth tides. There does not appear to be a correction that adds the permanent tide effect to a tide-free value if the user disables Earth tides. The widely published value of $J_2$, 0.0010826359, includes the permanent tide. You probably should be using a value consistent with the tide-free value of $\bar C_{20}$, i.e. $\sqrt{5}\times 4.8416539\times10^{-4} \approx 0.00108262672$, instead of that widely published value. The next two lines contain $\bar C_{21}$, $\bar S_{21}$, $\bar C_{22}$, and $\bar S_{22}$. Note that $\bar C_{21}$ and $\bar S_{21}$ are nonzero.
This means that the Earth's axis of rotation is not quite in line with the way we measure latitude and longitude. The Earth's rotation axis undergoes a small polar motion; you should turn this off in GMAT (if you can) to have a better chance of having your integrated state agree with the state computed by GMAT. If you can, you should also turn off Earth nutation and precession in GMAT. Finally, you're using a different language, and possibly a different numerical integrator. You shouldn't be surprised at seeing differences with exactly the same code when one changes compiler optimization options or migrates to a different computer. You shouldn't be surprised at all when using a different language.
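The $J_2$ relation quoted in the answer can be checked directly against the JGM2.cof value; a sketch:

```python
import math

# tide-free, fully normalized C(2,0) from the GMAT JGM2.cof excerpt above
C20_bar = -4.84165390000000e-04

# J2 = -sqrt(5) * C20_bar; sqrt(2*2 + 1) is the degree-2 normalization
J2_tide_free = -math.sqrt(5) * C20_bar
# about 0.00108262672, slightly below the widely published 0.0010826359,
# which includes the permanent tide
```

The small gap between the two values is exactly the permanent-tide contribution the answer warns about.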
Research | Open Access | Published: Nonlinear impulsive differential and integral inequalities with integral jump conditions. Advances in Difference Equations, volume 2016, Article number: 112 (2016). Abstract Some new nonlinear impulsive differential inequalities and integral inequalities with integral jump conditions for discontinuous functions are established using the method of successive iteration. These jump conditions at a discontinuous point are related to the integral conditions of the past state, which can be used in the qualitative analysis of the solutions to certain nonlinear impulsive differential systems. Introduction Impulsive differential equations, that is, differential equations involving impulse effects, appear as a natural description of observed evolution phenomena of several real world problems. Many processes studied in applied sciences are represented by impulsive differential equations; the situation is quite different in many physical phenomena that have a sudden change in their states, such as mechanical systems with impact, biological systems such as heart beats and blood flows, population dynamics, theoretical physics, pharmacokinetics, mathematical economy, chemical technology, electric technology, metallurgy, ecology, industrial robotics, biotechnology processes, and so on (see [1–3] and [4] for details). In spite of the importance of impulsive differential equations, the development of their theory has been quite slow due to special features possessed by impulsive differential equations in general, such as pulse phenomena, confluence, and loss of autonomy. Among these results, differential inequalities and integral inequalities with impulsive effects play increasingly important roles in the study of quantitative properties of solutions of impulsive differential systems.
However, most of these results involving impulsive effects are point-discontinuous, i.e., the impulsive effects are added at a sequence of discontinuous points (see [5–12] for details). For example, in 2004, Borysenko [13] considered the following integral inequality with impulsive effect: in 2007, Iovane [14] studied the following integral inequalities: in 2011, Wu-Sheng Wang [5] gave the upper bound for the nonlinear inequality As we know, most of the phenomena occurring in the natural world do not change suddenly, so impulsive differential equations with integral jump conditions are more accurate than impulsive differential equations with stationary discontinuous points in characterizing nature. In 2012, based on a well-known result given by Lakshmikantham et al. [1], Thiramanus and Tariboon [15] studied the following impulsive linear differential inequalities: and gave an upper-bound estimation of the unknown function \(m(t)\). Theorem 1.1 Suppose that (\(\mathrm{H}_{0}\)) and (\(\mathrm{H}_{1}\)) hold. If \(p, q\in C[\mathbb{R}_{+}, \mathbb{R}]\) and for \(k=1, 2, \ldots, t\geq t_{0}\), the impulsive linear differential inequality (1.1) holds, where \(c_{k}, d_{k}\geq0\), \(0\leq\sigma_{k}\leq\tau_{k}\leq t_{k}-t_{k-1}\), and \(b_{k}\) are constants. Then This result can be used to investigate the qualitative properties of certain linear impulsive differential equations. A natural question arises: what is the upper bound if the inequality is nonlinear? In this paper, under different jump conditions, we will study the upper-bound estimation of the nonlinear inequality Main results In this paper, let \(0\leq t_{0}< t_{1}< t_{2}<\cdots\) be a sequence.
For \(I\subset\mathbb{R}\), we denote by \(\mathrm{PC}(I, \mathbb{R})\) the set of functions \(u(t)\) defined on I that are continuous for \(t\neq t_{k}\), for which \(u(0+)\), \(u(t_{k}+)\), \(u(t_{k}-)\) exist, and which are left continuous at \(t_{k}\), \(k=1, 2, \ldots\); \(\mathrm{PC}^{1}(I, \mathbb{R}_{+})\) is the collection of functions \(u(t)\) such that \(u, u'\in\mathrm{PC}(I, \mathbb {R}_{+})\). Throughout this paper, we assume the following hypotheses: (\(\mathrm{H}_{0}\)): the sequence \(\{t_{k}\}\) satisfies \(0\leq t_{0}\leq t_{1}\leq t_{2}\leq\cdots\), \(\lim_{k\to\infty}t_{k}=+\infty\). (\(\mathrm{H}_{1}\)): \(m\in\mathrm{PC}^{1}(I, \mathbb{R}_{+})\), and \(m(t)\) is left continuous at \(t_{k}\), \(k=1, 2, \ldots\). Lemma 2.1 (see [11]) Suppose that \(a, b\in\mathbb{R}\), \(p>0\). Then \(|a+b|^{p}\leq C_{p}(|a|^{p}+|b|^{p})\), where \(C_{p}=1\) for \(0< p\leq1\), and \(C_{p}=2^{p-1}\) for \(p>1\). Theorem 2.1 Suppose that (\(\mathrm{H}_{0}\)) and (\(\mathrm{H}_{1}\)) hold. If for \(k=1, 2, \ldots, t\geq t_{0}\), here \(0<\alpha<1\), \(p, q\in C[\mathbb{R}_{+}, \mathbb{R}]\), and for \(k=1, 2, \ldots, t\geq t_{0}\), \(c_{k}, d_{k}\geq0\), \(0\leq\sigma_{k}\leq\tau_{k}\leq t_{k}-t_{k-1}\), \(b_{k}\) are constants. We have the estimation where Proof For \(t\in[t_{0}, t_{1}]\), we have integrating (2.6) implies which shows that (2.3) holds for \(t\in[t_{0}, t_{1}]\). This completes the proof of Theorem 2.1. □ If \(d_{k}\equiv0\) in Theorem 2.1, we obtain the following corollary. Corollary 2.1 Suppose that (\(\mathrm{H}_{0}\)) and (\(\mathrm{H}_{1}\)) hold, \(p, q\in C[\mathbb{R}_{+}, \mathbb{R}]\) and for \(k=1, 2, \ldots, t\geq t_{0}\), where \(c_{k}\), \(b_{k}\), \(\sigma_{k}\), \(\tau_{k}\) are defined as in Theorem 2.1; then we have If \(d_{k}\equiv1\), we obtain the following theorem. Theorem 2.2 Suppose that (\(\mathrm{H}_{0}\)) and (\(\mathrm{H}_{1}\)) hold.
If, for \(k=1, 2, \ldots, t\geq t_{0}\), where \(0<\alpha<1\), \(p, q\in C[\mathbb{R}_{+}, \mathbb{R}]\), and for \(k=1, 2, \ldots, t\geq t_{0}\), \(c_{k}\geq0\), \(0\leq\sigma_{k}\leq\tau_{k}\leq t_{k}-t_{k-1}\), \(b_{k}\) are constants, \(\Delta m^{1-\alpha}(t_{k})=m^{1-\alpha}(t_{k}^{+})-m^{1-\alpha}(t_{k})\). We have the estimation where \(E_{k}\) is defined as in (2.4) (with \(d_{k}\equiv1\)). Proof So Using (2.7) (with \(t_{0}\) replaced by \(t_{n}^{+}\)), we obtain, for \(t\in(t_{n}, t_{n+1}]\), This completes the proof. □ Remark 2.1 If \(p(t)\equiv0\) in Theorem 2.2, we obtain the following useful corollary. Corollary 2.2 If (\(\mathrm{H}_{0}\)) and (\(\mathrm{H}_{1}\)) hold and for \(k=1, 2, \ldots, t\geq t_{0}\), then Next, we will give another kind of nonlinear impulsive differential inequality. Theorem 2.3 Suppose that (\(\mathrm{H}_{0}\)) holds, and \(m\in\mathrm{PC}^{1}[\mathbb{R}_{+}, \mathbb{R}_{+}]\), \(m(t)\) is left continuous at \(t_{k}\), \(k=1, 2, \ldots\), \(p(t), q(t)\in C[\mathbb{R}_{+}, \mathbb{R}_{+}]\). Assume where \(\Delta m(t_{k})=m(t_{k}^{+})-m(t_{k})\), \(0<\alpha<1\), \(c_{k}\geq0\), \(0\leq\sigma_{k}\leq\tau_{k}\leq t_{k}-t_{k-1}\), \(b_{k}\) are constants. We have the estimation where Proof Since \(\frac{1}{1-\alpha}>1\), by Lemma 2.1, which shows that (2.13) holds for \(k=n+1\). This completes the proof. □ Now we give an upper-bound estimation of a nonlinear integral inequality with integral jump conditions. Theorem 2.4 Suppose that (\(\mathrm{H}_{0}\)) holds, and suppose \(m, p, q\in C[\mathbb{R}_{+}, \mathbb{R}_{+}]\). For \(t\geq t_{0}\), if where \(\alpha_{k} \geq0\), \(0\leq\sigma_{k}\leq\tau_{k}\leq t_{k}-t_{k-1}\), \(c\geq0\), \(0<\alpha<1\) are constants, then we have the estimation where \(F_{k}\) and \(R_{k}\) are defined as in Theorem 2.3, with \(c_{k}\) replaced by \(\alpha_{k}\). Proof Defining the right-hand side of (2.14) as a new function \(v(t)\), we have \(m(t)\leq v(t)\) and \(v(t_{0})=c\).
Since we obtain further Then using Theorem 2.3 implies the estimation of \(v(t)\); the estimation of the unknown function \(m(t)\) is obtained since \(m(t)\leq v(t)\), and this completes the proof. □ Application to impulsive differential equations As an application of Theorem 2.4, we give an upper-bound estimation for a certain nonlinear impulsive differential equation, where \(f\in C(\mathbb{R}\times\mathbb{R}, \mathbb{R})\), \(I_{k}\in C(\mathbb{R}, \mathbb{R})\), \(0< t_{0}< t_{1}<\cdots\), \(\lim_{k\to\infty}t_{k}=+\infty\), \(0\leq\sigma_{k}\leq\tau_{k}\leq t_{k}-t_{k-1}\), \(k=1, 2,\ldots\). If there exists \(L>0\) such that and there exist \(\iota_{k}\geq0\) such that then for any solution \(v(t)\) of (3.1), we have Proof By Theorem 2.4, we compute that References 1. Lakshmikantham, V, Bainov, DD, Simeonov, PS: Theory of Impulsive Differential Equations. World Scientific, Singapore (1989) 2. Bainov, DD, Simeonov, PS: Impulsive Differential Equations: Periodic Solutions and Applications. Longman, Harlow (1993) 3. Bainov, DD, Simeonov, PS: Impulsive Differential Equations: Asymptotic Properties of the Solutions. World Scientific, Singapore (1995) 4. Samoilenko, AM, Perestyuk, NA: Impulsive Differential Equations. World Scientific, Singapore (1995) 5. Wang, W-S: A generalized Gronwall-Bellman integral inequality with impulsive function and its application. J. Sichuan Normal Univ. (Nat. Sci.) 34(1), 43-46 (2011) 6. Deng, S, Prather, C: Generalization of an impulsive nonlinear singular Gronwall-Bihari inequality with delay. J. Inequal. Pure Appl. Math. 9(2), 34 (2008) 7. Hristova, SG: Nonlinear delay integral inequalities for piecewise continuous functions and applications. J. Inequal. Pure Appl. Math. 5(4), 88 (2004) 8. Li, J: On some new impulsive integral inequalities. J. Inequal. Appl. 2008, 312395 (2008). doi:10.1155/2008/312395 9. Tatar, NE: An impulsive nonlinear singular version of the Gronwall-Bihari inequality. J. Inequal. Appl.
2006, 84561 (2006). doi:10.1155/JIA/2006/84561 10. Wang, H, Ding, C: A new nonlinear impulsive delay differential inequality and its applications. J. Inequal. Appl. 2011, 11 (2011). doi:10.1186/1029-242X-2011-11 11. Garling, DJH: Inequalities: A Journey into Linear Analysis. Cambridge University Press, Cambridge (2007) 12. Zheng, Z, Gao, X, Shao, J: Some new generalized retarded inequalities for discontinuous functions and their applications. J. Inequal. Appl. 2016, 7 (2016). doi:10.1186/s13660-015-0943-6 13. Borysenko, DS: About one integral inequality for piece-wise continuous functions. In: Proceedings of the 10th International Kravchuk Conference, Kyiv (2004), p. 323 14. Iovane, G: Some new integral inequalities of Bellman-Bihari type with delay for discontinuous functions. Nonlinear Anal. 66(2), 498-508 (2007) 15. Thiramanus, P, Tariboon, J: Impulsive differential and impulsive integral inequalities with integral jump conditions. J. Inequal. Appl. 2012, 25 (2012). doi:10.1186/1029-242X-2012-25 Acknowledgements The authors sincerely thank the referees for their constructive suggestions and corrections. This research was partially supported by the NSF of Shandong Province (Grant ZR2015PA005) and the NNSF of China (Grant 11271225). Additional information Competing interests The authors declare that there are no competing interests. Authors’ contributions JS gave the main theorems; FM gave some useful comments and revised the paper. All authors have read and approved the final manuscript.
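Lemma 2.1 of the paper is the elementary inequality \(|a+b|^{p}\leq C_{p}(|a|^{p}+|b|^{p})\) with \(C_{p}=1\) for \(0<p\leq1\) and \(C_{p}=2^{p-1}\) for \(p>1\); it is easy to spot-check numerically (function names mine):

```python
def c_p(p):
    # the constant in |a + b|^p <= C_p (|a|^p + |b|^p)
    return 1.0 if 0 < p <= 1 else 2.0 ** (p - 1)

def lemma_holds(a, b, p):
    # small epsilon absorbs floating-point rounding
    return abs(a + b) ** p <= c_p(p) * (abs(a) ** p + abs(b) ** p) + 1e-12

# spot-check over a grid of values and exponents on both sides of p = 1
for p in (0.3, 0.5, 1.0, 1.5, 2.0, 3.0):
    for a in (-2.0, -0.5, 0.0, 1.0, 4.0):
        for b in (-3.0, 0.0, 0.7, 2.5):
            assert lemma_holds(a, b, p)
```

For \(p>1\) the constant \(2^{p-1}\) is sharp: taking \(a=b\) gives equality, which is why the lemma is stated with the two-case constant.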
@JosephWright Well, we still need table notes etc. But just being able to selectively switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source, the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the built-in PDF previewer in Firefox \documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d... @Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow, so need to clean out a lot of stuff today ... and that does have a deadline now @yo' that's not the issue.
with the laptop I lose access to the company network, and anything I need from there during the next two months, such as the email address of payroll etc., needs to be 100% collected first @yo' I'm sorry, I explain too badly in English :) I mean, if the rule was to use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can, I am sure, tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself "for fun". I don't mean them as serious suggestions, and as you say you've already thought of everything. It's just that I'm getting to those points myself, so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current setup even if starting today, as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not.
Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild"; I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing; a quick eye over the code suggests that by default it has 10 of them, plus duplicates for minipages as latex footnotes do. The mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts, and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail (so not tried it yet) that, given that the new \newinsert takes from the float list, I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them; it would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
@mickep I'm pretty sure that malicious actors knew about this long before I checked it. My own server gets scanned by about 200 different people for vulnerabilities every day, and I'm not even running anything with a lot of traffic. @JosephWright @barbarabeeton @PauloCereda I thought we could create a golfing TeX extension; it would basically be a TeX format, just the first byte of the file would be an indicator of how to treat input and output or what to load by default. I thought of the name: Golf of TeX, shortened as GoT :-) @PauloCereda Well, it has to be clever. You for instance need quick access to defining new cs, something like (I know this won't work, but you get the idea) \catcode`\@=13\def@{\def@##1\bgroup} so that when you use @Hello #1} it expands to \def@#1{Hello #1} If you use the d'Alembert operator as well, you might find the symbol \bigtriangleup pretty for your Laplace operator, in order to get a similar look to the \Box symbol being used for the d'Alembertian. In the following, a tricky construction with \mathop and \mathbin is used to get the... Latex exports. I am looking for a hint on this. I've tried everything I could find but no solution yet. I read equations from files generated by CAS programs. I can't edit these or modify them in any way. Some of these are too long; some are not. To make them fit in the page width, I tried \resizebox. The problem is that this will resize the small equation as well as the long one to fit the page width, which is not what I want. I want only to resize the ones that are longer than the page width and keep the others as is. Is there a way in LaTeX to do this? Again, I do not know beforehand the size of… \documentclass[12pt]{article}\usepackage{amsmath}\usepackage{graphicx}\begin{document}\begin{equation*}\resizebox{\textwidth}{!}{$\begin{split}y &= \sin^2 x + \cos^2 x\\x &= 5\end{split}$}\end{equation*}\end{document} The above will resize the small equation, which I do not want.
But since I do not know beforehand how long the equation is, I resize every one. Is there a way to find, using some LaTeX command, if an equation will fit within the page width, or how long it is? If so, I can add logic to apply the resize only when needed. What I mean is: I want to resize DOWN only if needed, and not resize UP. Also, if you think I should ask this on the main site, I can, but I thought I'd check here first. @egreg What other options do I have? Sometimes the CAS generates an equation which does not fit the page. Now it overflows the page, and one can't see the rest of it at all. Since in a PDF one can zoom in a little, at least one can see it if needed. It is impossible to edit or modify these by hand, as this is all done using a program. @UlrikeFischer I do not generate unreadable equations. These are solutions of ODEs. The LaTeX is generated by Maple. Some of them are longer than the page width. That is all. So what is your suggestion? Keep the long solutions flowing out of the page? I can't edit these by hand; this is all generated by a program. I can add LaTeX code around them, that is all, but editing them is out of the question. I tried the breqn package, but that did not work; it broke many things as well. @egreg That was just an example, something I added by hand to make up a long equation for illustration. It was not a real solution to an ODE. Again, thanks for the effort, but I can't edit the generated LaTeX by hand at all. It would take me a year, and I run the program many times each day; each time, all the LaTeX files are overwritten again anyway. CAS providers do not generate good LaTeX either. That is why breqn did not work: many times they add {} around large expressions, which made breqn unable to break them. breqn also has many other problems, so I no longer use it at all.
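One approach for this situation is to measure each formula in a box first and shrink it only when it is wider than \textwidth. The sketch below is my own illustration (the macro name \fitmath is made up, and multi-line environments such as split need extra care inside a box):

```latex
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\newsavebox{\eqbox}
% Typeset #1 at natural size; apply \resizebox only if it is too wide.
\newcommand{\fitmath}[1]{%
  \sbox{\eqbox}{$\displaystyle #1$}%
  \ifdim\wd\eqbox>\textwidth
    \resizebox{\textwidth}{!}{\usebox{\eqbox}}%
  \else
    \usebox{\eqbox}%
  \fi}
\begin{document}
\begin{equation*}
  \fitmath{y = \sin^2 x + \cos^2 x} % short: stays at natural size
\end{equation*}
\end{document}
```

Since the CAS-generated files themselves cannot be edited, a wrapper like this would live in the preamble or in the driver file that inputs each generated equation.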
Back to Linear Programming Introduction The simplex method generates a sequence of feasible iterates by repeatedly moving from one vertex of the feasible set to an adjacent vertex with a lower value of the objective function \(c^T x\). When it is not possible to find an adjoining vertex with a lower value of \(c^T x\), the current vertex must be optimal, and termination occurs. After its development by Dantzig in the 1940s, the simplex method was unrivaled until the late 1980s for its utility in solving linear programming problems. Although never observed on practical problems, the poor worst-case behavior of the algorithm -- the number of iterations may be exponential in the number of unknowns -- led to an ongoing search for algorithms with better computational complexity. This search continued until the late 1970s when the first polynomial-time algorithm (Khachiyan's ellipsoid method) was developed. Most interior-point methods also have polynomial complexity. Algorithm Steps Algebraically speaking, the simplex method is based on the observation that at least \((n-m)\) of the components of \(x\) are zero if \(x\) is a vertex of the feasible set. Accordingly, the components of \(x\) can be partitioned at each vertex into a set of \(m\) basic variables (all nonnegative) and a set of \(n-m\) nonbasic variables (all zero). If we gather the basic variables into a subvector, \(x_B \in R^m\), and the nonbasic variables into another subvector, \(x_N \in R^{n-m}\), we can partition the columns of \(A\) as \([B|N]\), where \(B\) contains the \(m\) columns that correspond to \(x_B\). (Note that \(B\) is a square matrix.) At each iteration of the simplex method, a basic variable (a component of \(x_B\)) is reclassified as nonbasic and vice versa. In other words, \(x_B\) and \(x_N\) exchange a component. Geometrically, this swapping process corresponds to a move from one vertex of the feasible set to an adjacent vertex. 
We therefore need to choose which component of \(x_N\) should enter \(x_B\) (that is, be allowed to move off its zero bound) and which component of \(x_B\) should enter \(x_N\) (that is, be driven to zero). In fact, we need to make only the first of these choices, since the second choice is implied by the feasibility constraints \(Ax = b\) and \(x\geq 0\). In selecting the entering component, we note that \(c^T x\) can be expressed as a function of \(x_N\) alone. We can express \(x_B\) in terms of \(x_N\) by noting that \(Ax = b\) implies that \[x_B = B^{-1} (b - N x_N).\] Hence, partitioning \(c\) into \(c_B\) and \(c_N\) in the obvious way, we have \[c^T x = c_B^T x_B + c_N^T x_N = c_B^T B^{-1} b + \left( c_N - N^T B^{-T}c_B \right)^T x_N\] The vector \[d_N = c_N - N^T B^{-T} c_B\] is the ''reduced-cost vector''. If all components of \(x_B\) are strictly positive and some component (say, the \(i^{th}\) component) of \(d_N\) is negative, we can decrease the value of \(c^T x \) by allowing component \(i\) of \(x_N\) to become positive while adjusting \(x_B\) to maintain feasibility. Unless there exist feasible points \(x\) that make \(c^T x\) arbitrarily negative, the requirement \(x_B \geq 0\) imposes an upper bound on \(x_{N_i}\), component \(i\) of \(x_N\). In principle, we can choose any component \(x_{N_i}\) with \(d_{N_i} < 0\) as an entering variable. If there are no negative entries in \(d_N\), the current point \(x\) is optimal. If there is more than one negative entry, we would ideally pick the component that will lead to the largest reduction in \(c^T x\) on the current iteration. Heuristics for making this selection are discussed below. It follows from \(Ax = b\) and the fact that the remaining elements of \(x_N\) are held at zero that \[x_B = B^{-1} (b - N_i x_{N_i})\] where \(N_i\) denotes the column of \(N\) that corresponds to \(x_{N_i}\). We choose the new value of \(x_{N_i}\) to be the largest value that maintains \(x_B \geq 0\).
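As a concrete illustration of the entering/leaving choices just described, here is a sketch of one revised-simplex iteration with dense linear algebra (my own toy code, not from any solver; it assumes \(B\) nonsingular and a nondegenerate, bounded problem, and uses the most-negative-reduced-cost rule for pricing):

```python
import numpy as np

def simplex_iteration(A, b, c, basis):
    """One revised-simplex iteration for min c^T x s.t. Ax = b, x >= 0.

    Returns (entering, leaving, step), or None when the current basis
    is optimal.  Toy sketch only.
    """
    m, n = A.shape
    nonbasis = [j for j in range(n) if j not in basis]
    B, N = A[:, basis], A[:, nonbasis]
    x_B = np.linalg.solve(B, b)                # current basic solution
    y = np.linalg.solve(B.T, c[basis])         # multipliers B^{-T} c_B
    d_N = c[nonbasis] - N.T @ y                # reduced costs
    if np.all(d_N >= 0):
        return None                            # no negative reduced cost: optimal
    i = int(np.argmin(d_N))                    # entering: most negative reduced cost
    u = np.linalg.solve(B, A[:, nonbasis[i]])  # direction B^{-1} N_i
    ratios = [x_B[j] / u[j] if u[j] > 1e-12 else np.inf for j in range(m)]
    j = int(np.argmin(ratios))                 # leaving: ratio test on x_B >= 0
    return nonbasis[i], basis[j], ratios[j]
```

Production codes replace the dense solves with the factorization updates discussed in the next section.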
To obtain \(x_{N_i}\) explicitly, we can rearrange the previous equation to obtain \[x_{N_i} = \min_j \left\{ \frac{(B^{-1} b)_j}{(B^{-1} N_i)_j} \; : \; (B^{-1}N_i)_j > 0\right\}.\] The index \(j\) that achieves the minimum in this formula indicates the basic variable \(x_{B_j}\) that is to become nonbasic. If more than one such component achieves the minimum simultaneously, the one with the largest value of \((B^{-1}N_i)_j\) is usually selected. Matrix Operations Most of the computational cost in simplex algorithms arises from the need to compute the vectors \(B^{-T} c_B\) and \(B^{-1} N_i\) and the need to keep track of the changes in \(B\) and \(B^{-1}\) resulting from the changes in the basis at each iteration. We could simply recompute and store \(B^{-1}\) explicitly after each step. This strategy is undesirable for two reasons. First, the matrix \(B^{-1}\) is usually dense even though the original \(B\) is sparse. Hence, explicit calculation of \(B^{-1}\) requires prohibitive amounts of computing time and storage. Second, since \(B\) changes only slightly from one iteration to the next, we should be able to update information about \(B\) and \(B^{-1}\) rather than recompute it anew at every step. A technique that is used in many commercial codes is to store an \(LU\) factorization of \(B\). That is, to keep track of matrices \(P, Q, L, U\) such that \[B = P L U Q \,\] where \(P, Q\) are permutation matrices (identity matrices whose rows have been reordered), while \(L\), \(U\) are lower- and upper-triangular matrices, respectively. Since \(P^T P = I\) and \(Q^T Q = I\), we can calculate \(z = B^{-T} c_B\) by performing the following sequence of operations: \[U^T z_1 = Q c_B, \quad L^Tz_2 = z_1, \quad z = P z_2\] The first two operations are forward- and back-substitutions with triangular matrices, which can be performed efficiently, while the final operation is a simple rearrangement of the elements of \(z_2\).
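The pair of triangular solves can be made concrete with a toy example. This sketch is my own (it uses Doolittle elimination with no pivoting, i.e. \(P = Q = I\), which is not numerically stable in general; real codes use the permuted factorizations described here):

```python
import numpy as np

def lu_nopivot(B):
    """Doolittle factorization B = L U with no pivoting (sketch only:
    assumes all pivots are nonzero; real codes permute for stability)."""
    n = B.shape[0]
    L, U = np.eye(n), B.astype(float).copy()
    for k in range(n - 1):
        for r in range(k + 1, n):
            L[r, k] = U[r, k] / U[k, k]
            U[r, k:] -= L[r, k] * U[k, k:]
    return L, U

def solve_BT(L, U, c_B):
    """Solve B^T z = c_B as the two triangular systems
    U^T z1 = c_B, then L^T z = z1 (here P = Q = I).
    np.linalg.solve stands in for dedicated triangular solvers."""
    z1 = np.linalg.solve(U.T, c_B)   # forward-substitution: U^T is lower triangular
    return np.linalg.solve(L.T, z1)  # back-substitution: L^T is upper triangular
```

The point of the factored form is visible even here: neither \(B^{-1}\) nor \(B^{-T}\) is ever formed explicitly.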
Calculation of \(B^{-1} N_i\) proceeds similarly. The permutation matrices \(P\), \(Q\) are chosen so that the factorization is reasonably stable and the factors \(L\), \(U\) are not too much denser than the original matrix \(A\). When \(B\) is changed by a single column, the factorization can be updated by applying a number of elementary transformations; that is, additions of multiples of one row of the matrix to another row. Rather than applying these transformations explicitly to the existing factors, they are usually stored in a compact form. When the storage occupied by the elementary transformations becomes excessive, they are discarded, and the current basis matrix \(B\) is refactored from scratch. Pricing Strategies We return to strategies for choosing the component \(i\) of \(x_N\) to enter the basis, an operation that is known as ''pricing'' in linear programming parlance. The simplest strategy is to choose \(i\) to correspond to the most negative component of the reduced-cost vector \(d_N\). This approach, known as Dantzig's rule, gives the fastest decrease in the objective function per unit increase in the entering variable. However, the change in the entering variable often does not give a good indication of how far we actually have to move; it could be that a small perturbation in the entering component corresponds to a huge step along the corresponding edge of the feasible polytope, so we actually have to move a long way to get the benefits promised by Dantzig's rule. This observation is the motivation behind the ''steepest-edge'' strategy, in which we choose the edge along which the objective function decreases most rapidly ''per unit of distance along the edge''. The extra computation needed to identify the steepest edge is often more than offset by a reduction in the number of iterations, and this strategy is an option in many LP solvers.
When the linear program is too large for the data to be stored in core memory, the cost of computing the complete reduced-cost vector \(d_N\) at each iteration may require too much traffic with secondary storage, and it may take too long. In this situation, a ''partial pricing'' strategy may be appropriate. This strategy finds only a subvector of \(d_N\) and chooses the entering variable from those components that are actually computed. Of course, the subset of indices that defines the subvector of \(d_N\) should be changed frequently. A problem with all of these strategies is that they do not predict the actual decrease in the objective function \(c^T x\) that will occur on this iteration. It may happen that we are able to move only a short distance along the chosen edge before encountering another vertex, so the reduction may be minimal. A ''multiple pricing'' strategy selects a small group of columns with negative reduced costs and computes the actual reduction that would be achieved if any one of the corresponding variables entered the basis. This process is expensive, since it requires the calculation of \(B^{-1} N_i\) for each candidate \(i\). One of the candidates is chosen, and the remainder are retained as candidates for the next iteration, since the marginal cost of updating the column \(B^{-1} N_i\) to correspond to the new basis matrix is not too high. Of course, the candidate list must be refreshed frequently. Simplex Method Tools There are a number of interactive tools available that allow users to step through iterations of the simplex method. The NEOS Guide offers a Java-based Simplex Method Tool that demonstrates the workings of the simplex method on small user-entered problems. Robert Vanderbei of Princeton has developed Java-based tools for facilitating simplex pivots and facilitating network simplex pivots as well as a variety of Java applets that test students on their knowledge of various simplex-based methods. 
The Java-based Linear Program Solver with Simplex, part of the RIOT project at Berkeley, allows the user to step through each iteration of the simplex method or to solve for the optimal solution. The Finite Mathematics and Applied Calculus Resource Page offers a Simplex Method Tool to display tableaus and to solve LP models. It also offers a Simplex Method Tutorial.
The force on the cross-section of a bubble due to the pressure difference is: $\pi R^2 \Delta p$ (1) and the surface tension force around the circumference is: $2\pi R \gamma$ (2), and a sphere has two surfaces, so, roughly: $\Delta p = \frac{4 \gamma}{R}$. But if we take the limit $\lim_{R \to 0} \frac{4 \gamma}{R}$, then obviously the pressure difference between the outside and inside surface of the bubble is $\infty$! How does that make sense? Is there any way I can get round this? Or have I done something wrong with my algebra?
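For what it's worth, the scaling is easy to see numerically. A quick sketch (my own addition; the surface-tension value is an assumed, typical soap-film figure, not something given in the question):

```python
# Young–Laplace pressure jump for a soap bubble (two surfaces): dp = 4*gamma/R.
GAMMA = 0.025  # surface tension of a soap film in N/m (assumed typical value)

def pressure_jump(R):
    """Pressure difference (Pa) across a soap bubble of radius R (metres)."""
    return 4 * GAMMA / R

for R in (1e-2, 1e-4, 1e-6):
    print(f"R = {R:g} m  ->  dp = {pressure_jump(R):g} Pa")  # grows like 1/R
```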
Dear Uncle Colin, I’m trying to organise a tournament involving seven teams and two pitches. The following conditions must hold: each team plays four games; no pair of teams meets more than once; each team must play at most one pair of back-to-back matches. How would you solve this? … This is based on a puzzle I heard from @colinthemathmo, who wrote it up here; he heard it from @DavidB52s, and there the trail goes cold. The Mathematical Ninja lay awake, toes itching. This generally meant that a mission was in the offing. Awake or dreaming? Unclear. But the thought… Dear Uncle Colin, How would I work out $\sqrt{\ln(100!)}$ in my head? - Some Tricks I’d Really Like In Number Games Hi, STIRLING, and thanks for your message! I don’t know how you’d do it, but I know how the Mathematical Ninja would! Stirling’s Approximation says that $\ln(n!) \approx n$… In class, a student asked to work through a question: Let $f(x) = \frac{5(x-1)}{(x+1)(x-4)} - \frac{3}{x-4}$. (a) Show that $f(x)$ can be written as $\frac{2}{x+1}$. (b) Hence find $f^{-1}(x)$, stating its domain. The answer they gave was outrageous. Part (a) Part (a) was fine: combine it all into a single fraction… Dear Uncle Colin, I’m struggling to make any headway with this: find all integers $n$ such that $5 \times 2^n + 1$ is square. Any ideas? Lousy Expression Being Equalto Square Gives Undue Exasperation Hi, LEBESGUE, and thanks for your message! Every mathematician should have a Bag Of Tricks… In this month’s episode of Wrong, But Useful, we’re joined by @DrSmokyFurby and his handler, Belgin Seymenoglu. Apologies for the poor audio quality on this call. Dave's fault, obviously. We discuss: The Talkdust podcast (via Adam Atkinson): Life insurance Superpermutations: new record for n = 7 in the comments… I had a fascinating conversation on Twitter the other day about, I suppose, different modes of solving a problem. Here’s where it started: Heh.
You spend half an hour knee-deep in STEP algebra, solve it, then realise that tweaking the diagram a tiny bit turns it into a two-liner. … Dear Uncle Colin, If I didn’t have a calculator and wanted to know the decimal expansion of $\sqrt{2}$, how would I be best to go about it? Roots As Decimals - Irrational Constant At Length Hi, RADICAL, and thanks for your message! There are several options for finding $\sqrt{2}$ as… Stefan Banach was one of the early 20th century’s most important mathematicians - if you’re at all interested in popular maths, you’ll have heard of the Banach-Tarski paradox; if you’ve done any serious linear algebra, you’ll know about Banach spaces; if you’ve read Cracking Mathematics (available wherever good books are…
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much, and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra". Then the year after, another guy taught and made it mostly commutative algebra + a bit of varieties + Čech cohomology at the end from nowhere, and everyone was like "uhhh". Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake; it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus. One of my professors said it to describe a bunch of REUs; it basically boils down to problems that some of these give their students which nobody really cares about, but which undergrads could work on and get a paper out of. @TedShifrin I think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x(t) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$". This took me way longer than it should have. Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternatively, compute the operator norm of $A$ and see if it is larger or smaller than $2$, $1/2$. Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$. Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. Hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$. For example, writing $x = ac+bd\delta$, $y = bc+ad$, and $\gamma = e+f\sqrt{\delta}$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$. I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of. One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set). theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (But seriously, the best tactic is overpowered...) Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible field. It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: we know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH, or that derive CH; thus if your set of axioms contains those, then you can decide the truth value of CH in that system. @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I have wondered about showing that is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed. Put another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy-of-infinity book. The knapsack problem or rucksack problem is a problem in combinatorial optimization: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem. Hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, minimising $|P(s)|$ will proceed as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment. Typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice, nor an infinite set. > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
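The partial sums in the finitist construction above are finite objects, so they can be computed exactly. A quick sketch with Python's exact rationals (my own illustration; $b = 10$ gives the usual Liouville-style decimal):

```python
from fractions import Fraction

def liouville_partial(b, M):
    """Exact partial sum  sum_{k=1}^{M} 1/b^{k!}  as a rational number."""
    total, fact = Fraction(0), 1
    for k in range(1, M + 1):
        fact *= k                    # fact = k!
        total += Fraction(1, b ** fact)
    return total
```

For $b = 10$ the partial sums $0.1, 0.11, 0.110001, \ldots$ are each finite objects; only the limit $L$ requires the inductive step the message describes.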
I wonder if anybody can help me with this problem. I'm trying to compute the Mertens function for large $n$. The most obvious algorithm is just to compute all primes up to $\sqrt{n}$ and then to sieve. That takes at least an order of $n\log n$ operations, and really even more. The most recent article that I could find that discusses methods to compute the function directly is dated 1994, and it proposes to do exactly that. Are there any known algorithms that let you compute Mertens faster than by sieving? I know that $\pi(n)$ can be computed in $O(n^{2/3})$, I looked into that algorithm but it does not seem to be easily adaptable to my task. Alternatively, I could use an algorithm to compute $M(n+dn)-M(n)$ for $dn\ll n$ (say $dn\sim \sqrt{n}$ ) in $O(\sqrt{n})$ time or less.
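For reference, the sieving baseline that the question wants to beat can be sketched as follows (my own illustration, computing $M(n)$ from a Möbius-function sieve in roughly $O(n \log \log n)$ time and $O(n)$ space):

```python
def mertens(n):
    """Mertens function M(n) = sum_{k<=n} mu(k), via a Mobius sieve."""
    mu = [1] * (n + 1)
    is_prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if is_prime[p]:
            for m in range(p, n + 1, p):
                if m > p:
                    is_prime[m] = False
                mu[m] = -mu[m]          # one more prime factor: flip sign
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0               # divisible by a square: mu = 0
    return sum(mu[1:])
```

This stores the whole sieve in memory, which is exactly the cost the question is trying to avoid for large $n$.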
Expected Value. $X$ is a random variable over $[1,2]$. 1. Find the distribution function of $Y=e^X$. 2. Find $E[Y]$, i.e. the expected value of $Y$. I'm done with part 1; the answer is $1/y$. Can anyone help calculate the expected value? The answer to part 1 is not $\dfrac 1 y$. It is $f_Y(y) = \begin{cases}0 & y < e \\ \dfrac 1 y &y \in [e,e^2] \\ 0 &e^2 < y \end{cases}$ There is a serious difference. $E[Y] = \displaystyle \int_e^{e^2}~y\dfrac 1 y~dy = \int_e^{e^2}~1~dy = e^2-e$ Romsek is assuming you meant "$X$ is uniformly distributed over $[1, 2]$". Just saying "$X$ is a random variable over $[1, 2]$" does not tell us the probability distribution, which is necessary in order to answer this question.
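The closed-form answer $e^2 - e$ is easy to sanity-check by simulation. This Monte Carlo sketch is my own addition, assuming (as the answer does) that $X$ is uniform on $[1,2]$:

```python
import math
import random

# Monte Carlo check: with X ~ Uniform(1, 2) and Y = e^X,
# E[Y] = integral from 1 to 2 of e^x dx = e^2 - e.
random.seed(0)
N = 200_000
estimate = sum(math.exp(random.uniform(1.0, 2.0)) for _ in range(N)) / N
exact = math.e ** 2 - math.e
print(estimate, exact)  # the two should agree to roughly two decimal places
```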
In number theory, here are four applications of techniques or results in first-year calculus. (1) Finding equations of tangent lines by first-semester calculus methods lets us add points on elliptic curves using the Weierstrass equation for the curve. This is more algebraic geometry than number theory, so I'll add that the methods show if the Weierstrass equation has rational coefficients then the sum of two rational points is again a rational point. (2) The recursion in Newton's method from differential calculus is the basic idea behind Hensel's lemma in $p$-adic analysis (or, more simply, lifting solutions of congruences from modulus $p$ to modulus $p^k$ for all $k \geq 1$). (3) The infinitude of the primes can be derived from the divergence of the harmonic series (the zeta-function at 1), which is based on a bound involving the definition of the natural logarithm as an integral. (4) Unique factorization in the Gaussian integers can be derived from the Leibniz formula$$\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots = \sum_{n \geq 0} \frac{(-1)^n}{2n+1}$$by interpreting it as a case of Dirichlet's class number formula $2\pi h/(w\sqrt{|D|}) = L(1,\chi_D)$ for $\chi_D$ the primitive quadratic character associated to ${\mathbf Q}(\sqrt{D})$ where $D$ is a negative fundamental discriminant, $h$ is the class number of ${\mathbf Q}(\sqrt{D})$ and $w$ is the number of roots of unity in ${\mathbf Q}(\sqrt{D})$. Taking $D = -4$ turns the left side into $2\pi h/(4\sqrt{4}) = (\pi/4)h$, so the Leibniz formula is equivalent to $h = 1$, which is another way of saying $\mathbf Z[i]$ is a PID or equivalently (for Dedekind domains) a UFD. Here are two more applications, not in number theory directly. (5) Gerry Edgar mentions in his answer Niven's proof of the irrationality of $\pi$, which is available in Spivak's calculus book. 
The same ideas imply irrationality of $e^a$ for every positive integer $a$, which in turn easily implies irrationality of $e^r$ for nonzero rational $r$ and thus also irrationality of $\log r$ for positive rational $r \not= 1$. The calculus fact in the proof of irrationality of the numbers $e^a$ is that for all positive integers $n$ the polynomial $$\frac{x^n(1-x)^n}{n!}$$and all of its higher derivatives take integer values at $0$ and $1$. That implies a certain expression involving a definite integral is a positive integer, and then with the fundamental theorem of calculus that same expression turns out to be less than 1 for large $n$ (where "large" depends on the hypothetical denominator of a rational formula for $e^a$), and that is a contradiction. (6) Prove that if $f$ is a smooth function (= infinitely differentiable) on the real line and $f(0) = 0$ then $f(x) = xg(x)$ where $g$ is a smooth function on the real line. There is no difficulty in defining what $g(x)$ has to be if it exists at all, namely $$g(x) = \begin{cases}f(x)/x, & \text{ if } x \not= 0, \\f'(0), & \text{ if } x = 0.\end{cases}$$And easily the function defined this way is continuous on the real line and satisfies $f(x) = xg(x)$. But why is this function smooth at $x = 0$ (smoothness away from $x = 0$ is easy)? You can try to do it using progressively messier formulas for higher derivatives of $g$ at 0 by taking limits, but a much slicker technique is to use the fundamental theorem of calculus to write$$f(x) = f(x) - f(0) = \int_0^x f'(t)\,dt = x\int_0^1 f'(xu)\,du,$$which leads to a different formula for $g(x)$ that doesn't involve cases:$$g(x) = \int_0^1 f'(xu)\,du.$$If you're willing to accept differentiation under the integral sign (maybe that's not in the first-year calculus curriculum, but we used first-year calculus to get the slick formula for $g(x)$) then the right side is easily checked to be a smooth function of $x$ from $f$ being smooth.
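Item (2) of the list above is easy to demonstrate in code. This sketch is my own (the function names are made up); it lifts the root $x \equiv 2$ of $x^2 + 1 \equiv 0 \pmod{5}$ with the Newton-style recursion, doubling the modulus's exponent each pass as in the quadratic form of Hensel's lemma:

```python
def hensel_lift(f, df, r, p, k):
    """Lift a root r of f(x) = 0 (mod p) to a root mod p^k using the
    Newton-style step r -> r - f(r)/f'(r), doubling the modulus's
    exponent each pass.  Assumes f'(r) is invertible mod p (the
    nondegenerate case of Hensel's lemma)."""
    modulus = p
    while modulus < p ** k:
        modulus = min(modulus * modulus, p ** k)
        r = (r - f(r) * pow(df(r), -1, modulus)) % modulus
    return r
```

For example, lifting $2$ (a square root of $-1$ mod $5$) to modulus $5^4 = 625$ yields $182$, and indeed $182^2 + 1 = 33125 = 53 \cdot 625$.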
Binary codes and bit-wise operations are fundamental in computer science. Whatever device you are running today works because of binary codes, and the bit-wise operations AND, OR, NOT, and XOR. (A fun exercise: prove to yourself that each of these rules meets the definition of an operation. If you need a refresher, check out this post.) We can also look at binary codes and the bit-wise operations in a more algebraic way. Let’s continue exploring abstract algebra via Pinter’s delightful algebra book. The previous post discussed operations on sets, and explored concatenation as an example. Here, we will build on this and look at a set G combined with an operation \ast that satisfies three axioms. This operation/set combination is a special algebraic object called a group, and has been studied extensively. We’ll look at a specific group to illustrate the concept: binary codes with the XOR operation. 1 (Note: we also looked at modulo arithmetic on finite groups of integers here as another example of a group.) Some quick terminology and setup We currently (until the advent of quantum or DNA computers, or something else entirely) transmit information by coding it into binary strings – strings of 1s and 0s. We call these binary words. 100111 is a binary word, as are 001, 1, and 10110. A general binary word can have any length we want. When we send these binary words via some communication channel, we always run the risk of errors in transmission. This means we need to devise some way to detect and correct transmission errors in binary codes. One way to do this is the XOR operation. Let \mathbf{a} = a_{1}a_{2}\ldots a_{n} and \mathbf{b} = b_{1}b_{2}\ldots b_{n} be binary words of length n, where a_{i},b_{i} \in \{0,1\} for i=1,...,n. We define the XOR operation, which we will denote by the symbol \oplus, on each bit.
For two bits a_{i}, b_{i} in a word, a_{i}\oplus b_{i} = (a_{i} + b_{i}) \bmod 2 (Now would be a good time to review your modulo arithmetic if you’ve forgotten.) Then for two words \mathbf{a} and \mathbf{b}, the XOR operation is done bit-wise (component by component). That is, \mathbf{a} \oplus \mathbf{b} = (a_{1}\oplus b_{1})(a_{2}\oplus b_{2})\ldots (a_{n}\oplus b_{n}) As a quick example, 110010 \oplus 110001 = 000011 Notice that the result of the XOR operation shows us the positions in which \mathbf{a} and \mathbf{b} differ. Another way to look at it is if \mathbf{a} was transmitted, and \mathbf{b} was received, there was an error in the last two positions. We can call the result of XORing two binary words the error pattern. Showing Binary Words with the XOR operation is a group Now we’ll look at the set of all binary words of length n, called \mathbb{B}^{n}, coupled with the operation XOR. 2 We want to show that this set along with the XOR operation forms an algebraic structure called a group. A group is one of the most basic algebraic structures we can study. We will be exploring all sorts of things we can do and build with groups in future posts, so first we need to define a group. What’s a group? From Pinter (1982) A group is a set G coupled with an operation \ast, denoted \langle G, \ast \rangle, that satisfies the following three axioms: (G1: Associativity of the operation) The operation \ast is associative. (See the previous post for a review of associativity.) (G2: Existence of an identity element) There is an identity element inside the set G that we will call e such that for every element g \in G, e\ast g = g\ast e = g (G3: Existence of an inverse for every element) For every element g \in G, there is a corresponding element g^{-1} \in G such that g\ast g^{-1} = g^{-1}\ast g = e All three properties were discussed in the previous post. It’s important to note that a group is a set and an operation.
If we change one, then we either have a different group, or we lose the group classification. Real numbers under addition are a group, as are nonzero real numbers under multiplication, but those are two different groups. (This will be important when we get to rings and fields.) Integers under addition are also a group, but a different one than real numbers under addition. Let’s prove \langle\mathbb{B}^{n}, \oplus \rangle is a group. Showing a set and operation is a group is pretty algorithmic, in a sense. We just have to show that all three axioms are satisfied. 3 (G1): Associativity This one will be a bit tedious. We have to show associativity for words of any length n. But fear not. Since XOR of words is done bit-wise, we can exploit that and first show associativity for binary words of length 1, then “scale it up”, if you will. In this case, for words of length 1, we just have to brute-force it. We have to show that for any a,b,c \in \mathbb{B}^{1}, that (a\oplus b) \oplus c = a \oplus (b \oplus c) So, \begin{aligned} 1\oplus (1\oplus 1) = 1 \oplus 0 = 1 &\text{ and } (1 \oplus 1) \oplus 1 = 0 \oplus 1 = 1\\ 1\oplus (1 \oplus 0) = 1\oplus 1 = 0 &\text{ and } (1\oplus 1) \oplus 0 = 0 \oplus 0 = 0\\ &\vdots\end{aligned} Continue in this fashion until you have tried all combinations. (It does work out.) Now that we have that this is true for words of length 1, we just use the definition of the XOR operation on words of length 1 to “scale up” to words of length n. Since the operation is done component-wise, and it is associative on each component, it is associative on the whole word. We’ll show this formally now: Let \mathbf{a}, \mathbf{b}, \mathbf{c} \in \mathbb{B}^{n}. So \mathbf{a} = a_{1}a_{2}\ldots a_{n}, \mathbf{b} = b_{1}b_{2}\ldots b_{n}, and \mathbf{c} = c_{1}c_{2}\ldots c_{n}.
Then \begin{aligned}\mathbf{a}\oplus (\mathbf{b} \oplus \mathbf{c}) &= a_{1}a_{2}\ldots a_{n}\oplus [(b_{1}\oplus c_{1})(b_{2}\oplus c_{2})\ldots (b_{n}\oplus c_{n})]\\ &= (a_{1} \oplus (b_{1} \oplus c_{1}))(a_{2} \oplus (b_{2} \oplus c_{2}))\ldots (a_{n} \oplus (b_{n} \oplus c_{n}))\\&= ((a_{1} \oplus b_{1})\oplus c_{1})((a_{2}\oplus b_{2})\oplus c_{2})\ldots ((a_{n}\oplus b_{n})\oplus c_{n})\\&= (\mathbf{a} \oplus \mathbf{b})\oplus \mathbf{c}\end{aligned} That third equality holds because we already showed that XOR was bit-wise associative. The last equality just recalls what it means to XOR two binary words. With that, we have shown associativity of the XOR operation. (G2): Existence of an identity element When we want to show that a group has an identity element, we must actually find a candidate and show it meets the criterion. Here is (frustratingly, sometimes) where intuition and experience tend to play the biggest role. But, as my mother always told me when I was frustrated at my middle school math homework: “Make it look like something you’ve seen before”. XOR is bit-wise addition, just with a twist (add, then mod out by 2). So let’s start by considering the identity element for addition: 0. We’re looking at binary words of length n, so a good candidate for our identity element e would be a string of n 0s. But does it fit? For any word, \begin{aligned}a_{1}a_{2}\ldots a_{n} \oplus 000\ldots 0 &= (a_{1}\oplus 0)(a_{2} \oplus 0)\ldots (a_{n} \oplus 0)\\&= a_{1}a_{2}\ldots a_{n}.\end{aligned} You can check also that e \oplus \mathbf{a} = \mathbf{a}, and thus our candidate is a match! 4 (G3): Existence of an inverse element for every element in the set This one is a little bit trickier. We need to take any generic binary word and show that there is another binary word such that when we XOR them together, we get the sequence of all 0s. (Computer science friends are ahead on this one.)
Think back to how we looked at the XOR operation as a form of error checking. If we XORed two words together, there was a 1 in every position in which they differ, and a 0 in every position in which they were identical. Therefore, we come to the interesting conclusion that every element is its own inverse! If you XOR an element with itself, you will get a sequence of 0s. With our error checking interpretation, this makes perfect sense. We know the communication line transmits perfectly if we can XOR the sent and received word and get all 0s every time. Conclusion: every element is its own inverse, and every element in the group is, well, in the group, so we have satisfied the third axiom. Therefore, we have a group, fellow math travelers! Bonus round: showing that \langle \mathbb{B}^{n}, \oplus \rangle is an abelian group Notice that we are missing one property that all of you likely take for granted in regular arithmetic: being able to add and multiply in any order you like. That is, you take for granted that 2+3 = 3+2. This property is called commutativity, and groups with this bonus property are called abelian groups. Not all groups are abelian. Invertible square matrices under matrix multiplication come to mind. However, we will show that \langle \mathbb{B}^{n}, \oplus \rangle is an abelian group. This actually can be done quite simply by exploiting two things we already know: modulo arithmetic and bit-wise operations. We know that XOR is basically bit-wise addition modulo 2. Addition is absolutely commutative, and modding out by 2 doesn’t change that, since we add in the regular ol’ way, then divide by 2 and take the remainder. That means that XORing binary words of length 1 is commutative. Now, since XORing is done bit-wise, which we noted while proving associativity, we can use that same reasoning again (in fact, it looks almost exactly the same), to conclude that XOR is indeed a commutative operation, and thus we have an abelian group.
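For a small word length, all four properties can be verified by brute force (a sketch in Python; the helper `xor_words` and the choice n = 3 are mine, not from Pinter's book):

```python
from itertools import product

def xor_words(a, b):
    """Bit-wise XOR of two equal-length binary words given as strings."""
    return "".join(str((int(x) + int(y)) % 2) for x, y in zip(a, b))

n = 3
words = ["".join(bits) for bits in product("01", repeat=n)]
e = "0" * n  # the candidate identity: a string of n zeros

# G1: associativity, checked over all triples of words
assert all(xor_words(xor_words(a, b), c) == xor_words(a, xor_words(b, c))
           for a in words for b in words for c in words)
# G2: e is a two-sided identity
assert all(xor_words(a, e) == a == xor_words(e, a) for a in words)
# G3: every word is its own inverse
assert all(xor_words(a, a) == e for a in words)
# Bonus: the operation is commutative, so the group is abelian
assert all(xor_words(a, b) == xor_words(b, a) for a in words for b in words)

# The error-pattern example from earlier in the post:
print(xor_words("110010", "110001"))  # → 000011
```

Of course, a brute-force check for one n proves nothing about general n — that is exactly what the component-wise arguments above are for — but it is a reassuring sanity check.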
5 Conclusion We explored the definition of a group using a common operation and set from computer science. We needed this foundation to be able to study more. The next post in the coding theory and algebra series will take an in-depth look at maximum likelihood decoding. This decoding process is a way to attempt to decode and correct errors in transmission when the communication channel is noisy (and most are, somewhat). Abstract algebra is a powerful branch of mathematics that touches many more things than most of us realize. Now that we know binary words under the XOR operation is a group, we can start extending this and studying it as a structure, or skeleton, rather than focusing on the individual elements. The elements and operation are really just the drywall or stucco on a house. The algebraic structure is the skeleton of beams inside the house. We learn a lot more by studying the structure than the covering, which is why it is important to enter that abstract realm. We will uncover similarities in strange places, and applications we didn’t know were possible. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Footnotes This post adapts Chapter 3, Exercise Set F from Pinter’s book. I’ll put another plug in to encourage anyone who wants to explore further to buy the book. How many binary words are there of length n? Well, each slot can have two possibilities, 0 and 1, and there are n slots, so there are 2^{n} words of length n. We call the number of elements in a set the cardinality of a set. So, the cardinality of the set of binary words of length n is 2^{n}. You see how quickly the number grows as n grows. The use of the word “just” is disingenuous here. Sometimes it’s really hard to prove one or more of them in general, especially existence of an inverse. Make sure you check that the identity element commutes with any other element.
You have to be able to perform the operation on the right and the left, or the candidate fails. What I just did here is a mathematician’s favorite trick: reducing the problem to something we’ve seen and proven before.
The book I am following explains the solution as: As we can see, the size of subproblems at the first level of recursion is $n$. So, let us guess that $T(n)=O(n\log n)$ and try to prove that our guess is correct. Doubt: does the initial problem size (i.e. $n$) give a hint to reach $O(n\log n)$? How does it? Furthermore, the book says: Let's start by trying to prove an upper bound $T(n)\le c\cdot n\log n$: \begin{align*} T(n)=\sqrt{n} \cdot T(\sqrt{n}) + n \tag{1} \\ \le \sqrt{n}\cdot c\sqrt{n}\log(\sqrt{n}) + n \tag{2} \\ =n\cdot c\log (\sqrt{n})+ n \tag{3} \\ =n\cdot c\cdot\frac{1}{2}\log n+ n \tag{4} \\ \le c\cdot n\log n \tag{5} \end{align*} The last inequality assumes only that $$1\le c\cdot\frac{1}{2}\cdot \log n$$ This is correct if $n$ is sufficiently large, and for any constant $c$, no matter how small. So we are correct for the upper bound. I am not getting what's happening from $(1)$ to $(2)$ and from $(4)$ to $(5)$. Also, what does the last line prove?
EDIT: We may assume that the Picard number is at least two, as otherwise the cone is simply a ray generated by any effective curve. In particular, every effective curve is extremal. I will also assume that "curve" means "effective curve". (This edit was prompted by Damiano's comment that is now (sadly) deleted. It was a useful contribution.) A curve on a surface is simultaneously a curve and a divisor; assuming the surface is smooth or at least $\mathbb Q$-factorial, the curve, as a divisor, induces a linear functional on $1$-cycles. This works better if the surface is proper, so let's assume that. So, if $C$ is such a curve, then the corresponding linear function on the space where $NE(S)$ lives is best represented by the hyperplane on which it vanishes and remembering which side is positive and which one is negative. If $C$ is reducible, then it may have negative self-intersection, but it is not extremal. For an example, blow up two separate points on a smooth surface and take the sum of the exceptional divisors. My guess is that you meant irreducible, so let's assume that. Now we have $3$ cases: 1) $C^2>0$. In this case $C$ is in the interior of the cone and it cannot be extremal, can't even be on the boundary (Use Riemann-Roch to prove this). 2) $C^2=0$. Since $C$ is irreducible, it follows that it is nef and hence a limit of ample classes, so it is effective, but as Damiano pointed out I have already assumed that. (It is left to the reader to rephrase this if $C$ is assumed to be nef instead of effective). In this case the hyperplane corresponding to $C$ as a linear functional is a supporting hyperplane of the cone, intersecting it at least in the ray generated by $C$. So $C$ is definitely on the boundary, but it may or may not be extremal depending on the surface.
For example, any curve of self-intersection $0$ on an abelian surface is extremal, but for instance a member of a fibration that also has reducible fibers is not extremal despite being irreducible. For the latter think of a K3 surface with an elliptic fibration that has some $(-2)$-curves contained in some fibers. 3) $C^2<0$. If $C$ is effective, then $C\cdot D\geq 0$ for any irreducible curve $D\neq C$. This means that $C$ lies strictly on one side of the hyperplane corresponding to $C$ as a linear functional, while all other irreducible curves lie on the (closed) other side, so the convex cone they generate must have $C$ generating an extremal ray. Observe that we did not use the Cone Theorem. In fact one gets a different "cone theorem" this way: Theorem Let $S$ be a smooth projective surface, $H$ an arbitrary ample divisor on $S$, and let $$ Q^+=\{\sigma\in N_1(S) \vert \sigma^2 >0, H\cdot\sigma \geq 0 \} $$ be the "positive component" of the interior of the quadric cone defined by the intersection pairing. Then $$ \overline{NE}(S) = \overline{Q^+} + \sum_{C^2<0} \mathbb R_+[C] $$ There is also one for $K3$'s, using the above notation: Theorem Let $S$ be a smooth algebraic K3 surface and assume that its Picard number is at least $3$. (If the Picard number is at most $2$, then there are not too many choices for a cone). Then one of the following holds: (i) $$ \overline{NE}(S) = \overline{Q^+}, $$ or (ii) $$ \overline{NE}(S) = \overline{\sum_{C\simeq \mathbb P^1, C^2<0} \mathbb R_+[C]}. $$ The two cases are distinguished by whether there exists a curve in $S$ with negative self-intersection. If the Picard number is at least $12$, then only (ii) is possible. For proofs and more details, see this paper.
Skills to Develop Use Student's \(t\)-test for one sample when you have one measurement variable and a theoretical expectation of what the mean should be under the null hypothesis. It tests whether the mean of the measurement variable is different from the null expectation. There are several statistical tests that use the \(t\)-distribution and can be called a \(t\)-test. One is Student's \(t\)-test for one sample, named after "Student," the pseudonym that William Gosset used to hide his employment by the Guinness brewery in the early 1900s (they had a rule that their employees weren't allowed to publish, and Guinness didn't want other employees to know that they were making an exception for Gosset). Student's \(t\)-test for one sample compares a sample to a theoretical mean. It has so few uses in biology that I didn't cover it in previous editions of this Handbook, but then I recently found myself using it (McDonald and Dunn 2013), so here it is. When to use it Use Student's \(t\)-test when you have one measurement variable, and you want to compare the mean value of the measurement variable to some theoretical expectation. It is commonly used in fields such as physics (you've made several observations of the mass of a new subatomic particle—does the mean fit the mass predicted by the Standard Model of particle physics?) and product testing (you've measured the amount of drug in several aliquots from a new batch—is the mean of the new batch significantly less than the standard you've established for that drug?). It's rare to have this kind of theoretical expectation in biology, so you'll probably never use the one-sample \(t\)-test. I've had a hard time finding a real biological example of a one-sample \(t\)-test, so imagine that you're studying joint position sense, our ability to know what position our joints are in without looking or touching. You want to know whether people over- or underestimate their knee angle.
You blindfold \(10\) volunteers, bend their knee to a \(120^{\circ}\) angle for a few seconds, then return the knee to a \(90^{\circ}\) angle. Then you ask each person to bend their knee to the \(120^{\circ}\) angle. The measurement variable is the angle of the knee, and the theoretical expectation from the null hypothesis is \(120^{\circ}\). You get the following imaginary data:

Individual   Angle
A            120.6
B            116.4
C            117.2
D            118.1
E            114.1
F            116.9
G            113.3
H            121.1
I            116.9
J            117.0

If the null hypothesis were true that people don't over- or underestimate their knee angle, the mean of these \(10\) numbers would be \(120\). The mean of these ten numbers is \(117.2\); the one-sample \(t\)-test will tell you whether that is significantly different from \(120\). Null hypothesis The statistical null hypothesis is that the mean of the measurement variable is equal to a number that you decided on before doing the experiment. For the knee example, the biological null hypothesis is that people don't under- or overestimate their knee angle. You decided to move people's knees to \(120^{\circ}\), so the statistical null hypothesis is that the mean angle of the subjects' knees will be \(120^{\circ}\). How the test works Calculate the test statistic, \(t_s\), using this formula: \[t_s=\frac{(\bar{x}-\mu _\theta )}{(s/\sqrt{n})}\] where \(\bar{x}\) is the sample mean, \(\mu _\theta\) is the mean expected under the null hypothesis, \(s\) is the sample standard deviation and \(n\) is the sample size. The test statistic, \(t_s\), gets bigger as the difference between the observed and expected means gets bigger, as the standard deviation gets smaller, or as the sample size gets bigger. Applying this formula to the imaginary knee position data gives a \(t\)-value of \(-3.69\). You calculate the probability of getting the observed \(t_s\) value under the null hypothesis using the \(t\)-distribution.
The shape of the \(t\)-distribution, and thus the probability of getting a particular \(t_s\) value, depends on the number of degrees of freedom. The degrees of freedom for a one-sample \(t\)-test is the total number of observations in the group minus \(1\). For our example data, the \(P\) value for a \(t\)-value of \(-3.69\) with \(9\) degrees of freedom is \(0.005\), so you would reject the null hypothesis and conclude that people return their knee to a significantly smaller angle than the original position. Assumptions The \(t\)-test assumes that the observations within each group are normally distributed. If the distribution is symmetrical, such as a flat or bimodal distribution, the one-sample \(t\)-test is not at all sensitive to the non-normality; you will get accurate estimates of the \(P\) value, even with small sample sizes. A severely skewed distribution can give you too many false positives unless the sample size is large (above \(50\) or so). If your data are severely skewed and you have a small sample size, you should try a data transformation to make them less skewed. With large sample sizes (simulations I've done suggest \(50\) is large enough), the one-sample \(t\)-test will give accurate results even with severely skewed data. Example McDonald and Dunn (2013) measured the correlation of transferrin (labeled red) and Rab-10 (labeled green) in five cells. The biological null hypothesis is that transferrin and Rab-10 are not colocalized (found in the same subcellular structures), so the statistical null hypothesis is that the correlation coefficient between red and green signals in each cell image has a mean of zero. The correlation coefficients were \(0.52,\; 0.20,\; 0.59,\; 0.62\) and \(0.60\) in the five cells. The mean is \(0.51\), which is highly significantly different from \(0\) (\(t=6.46,\; 4\; d.f.,\; P=0.003\)), indicating that transferrin and Rab-10 are colocalized in these cells.
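The arithmetic in both of these examples is easy to reproduce. Here is a minimal sketch in pure Python (the function name is mine; getting the \(P\) value from \(t_s\) additionally requires the \(t\)-distribution, which is what the spreadsheet, R, or SAS supplies):

```python
import math
from statistics import mean, stdev

def one_sample_t(data, mu0):
    """Test statistic t_s = (xbar - mu0) / (s / sqrt(n)) for a one-sample t-test."""
    n = len(data)
    return (mean(data) - mu0) / (stdev(data) / math.sqrt(n))

# Knee joint position sense data, null expectation 120 degrees:
knee = [120.6, 116.4, 117.2, 118.1, 114.1, 116.9, 113.3, 121.1, 116.9, 117.0]
print(round(one_sample_t(knee, 120), 2))  # → -3.69, with n - 1 = 9 d.f.

# Transferrin/Rab-10 correlation coefficients, null expectation 0:
cells = [0.52, 0.20, 0.59, 0.62, 0.60]
print(round(one_sample_t(cells, 0), 2))   # → 6.46, with 4 d.f.
```

Note that `statistics.stdev` is the sample standard deviation (divisor \(n-1\)), which is the \(s\) in the formula above.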
Graphing the results Because you're just comparing one observed mean to one expected value, you probably won't put the results of a one-sample \(t\)-test in a graph. If you've done a bunch of them, I guess you could draw a bar graph with one bar for each mean, and a dotted horizontal line for the null expectation. Similar tests The paired \(t\)-test is a special case of the one-sample \(t\)-test; it tests the null hypothesis that the mean difference between two measurements (such as the strength of the right arm minus the strength of the left arm) is equal to zero. Experiments that use a paired \(t\)-test are much more common in biology than experiments using the one-sample \(t\)-test, so I treat the paired \(t\)-test as a completely different test. The two-sample \(t\)-test compares the means of two different samples. If one of your samples is very large, you may be tempted to treat the mean of the large sample as a theoretical expectation, but this is incorrect. For example, let's say you want to know whether college softball pitchers have greater shoulder flexion angles than normal people. You might be tempted to look up the "normal" shoulder flexion angle (\(150^{\circ}\)) and compare your data on pitchers to the normal angle using a one-sample \(t\)-test. However, the "normal" value doesn't come from some theory, it is based on data that has a mean, a standard deviation, and a sample size, and at the very least you should dig out the original study and compare your sample to the sample the \(150^{\circ}\) "normal" was based on, using a two-sample \(t\)-test that takes the variation and sample size of both samples into account. How to do the test Spreadsheets I have set up a spreadsheet to perform the one-sample \(t\)-test onesamplettest.xls. It will handle up to \(1000\) observations. R Salvatore Mangiafico's \(R\) Companion has a sample R program for the one-sample \(t\)-test.
SAS You can use PROC TTEST for Student's \(t\)-test; the VAR statement gives the measurement variable, and the H0= option gives the theoretical value. Here is an example program for the joint position sense data above. Note that the \(H0\) parameter for the theoretical value is \(H\) followed by the numeral zero, not a capital letter \(O\).

DATA jps;
   INPUT angle;
   DATALINES;
120.6
116.4
117.2
118.1
114.1
116.9
113.3
121.1
116.9
117.0
;
PROC TTEST DATA=jps H0=120;
   VAR angle;
RUN;

The output includes some descriptive statistics, plus the \(t\)-value and \(P\) value. For these data, the \(P\) value is \(0.005\).

DF   t Value   Pr > |t|
9    -3.69     0.0050

Power analysis To estimate the sample size you need to detect a significant difference between a mean and a theoretical value, you need the following:

the effect size, or the difference between the observed mean and the theoretical value that you hope to detect
the standard deviation
alpha, or the significance level (usually \(0.05\))
the power, or the probability of rejecting the null hypothesis when it is false (\(0.50,\; 0.80\) and \(0.90\) are common values)

The G*Power program will calculate the sample size needed for a one-sample \(t\)-test. Choose "t tests" from the "Test family" menu and "Means: Difference from constant (one sample case)" from the "Statistical test" menu. Click on the "Determine" button and enter the theoretical value ("Mean \(H0\)") and a mean with the smallest difference from the theoretical that you hope to detect ("Mean \(H1\)"). Enter an estimate of the standard deviation. Click on "Calculate and transfer to main window". Change "tails" to two, set your alpha (this will almost always be \(0.05\)) and your power (\(0.5,\; 0.8,\; or\; 0.9\) are commonly used). As an example, let's say you want to follow up the knee joint position sense study that I made up above with a study of hip joint position sense.
You're going to set the hip angle to \(70^{\circ}\) (Mean \(H0=70\)) and you want to detect an over- or underestimation of this angle of \(1^{\circ}\), so you set Mean \(H1=71\). You don't have any hip angle data, so you use the standard deviation from your knee study and enter \(2.4\) for SD. You want to do a two-tailed test at the \(P<0.05\) level, with a probability of detecting a difference this large, if it exists, of \(90\%\) (\(1-\text {beta}=0.90\)). Entering all these numbers in G*Power gives a sample size of \(63\) people. Reference McDonald, J.H., and K.W. Dunn. 2013. Statistical tests for measures of colocalization in biological microscopy. Journal of Microscopy 252: 295-302. Contributor John H. McDonald (University of Delaware)
Physics > General Physics Title: Quasi-Topological Magnetic Brane coupled to Nonlinear Electrodynamics (Submitted on 21 Jan 2019) Abstract: In this paper, we construct a new class of (n+1)-dimensional static magnetic brane solutions in quasi-topological gravity coupled to nonlinear electrodynamics of exponential and logarithmic forms. The solutions of this magnetic brane are horizonless and have no curvature. For \rho near r_{+}, the solution f(\rho) depends on the values of the parameters q and n, and for larger \rho it depends on the coefficients of the Lovelock and quasi-topological gravities \lambda, \mu and c. The obtained solutions also have a conic singularity at r=0 with a deficit angle that depends only on the parameters q, n and \beta. We note that the two forms of nonlinear electrodynamics theory have similar effects on the obtained solutions. At last, by using the counterterm method, we obtain conserved quantities such as mass and electric charge. The value of the electric charge for this static magnetic brane turns out to be zero. Submission history From: Mohammad Ghanaatian [v1] Mon, 21 Jan 2019 20:48:46 GMT (156kb)
Difference between revisions of "Gay-Berne model"

Line 47 (unchanged between the two revisions):
:<math>\frac{\chi \prime }{\alpha \prime^{2}}=1- {\left(\frac{\epsilon_{ee}}{\epsilon_{ss}}\right)} ^{\frac{1}{\mu}}.</math>

Revision as of 16:14, 19 February 2008

The Gay-Berne model is used extensively in simulations of liquid crystalline systems. The Gay-Berne model is an anisotropic form of the Lennard-Jones 12:6 potential; in the limit of one of the particles being spherical, it reduces to a simpler form (defining equations not reproduced here).

Phase diagram
Main article: Phase diagram of the Gay-Berne model

References
# J. G. Gay and B. J. Berne "Modification of the overlap potential to mimic a linear site–site potential", Journal of Chemical Physics 74 pp. 3316-3319 (1981)
# Douglas J. Cleaver, Christopher M. Care, Michael P. Allen, and Maureen P. Neal "Extension and generalization of the Gay-Berne potential", Physical Review E 54 pp. 559-567 (1996)

[[category:liquid crystals]] [[category:models]]
Let $C$ be an $[n,k]$ linear code over $\mathbb{F}_q$ with parity-check matrix $H$. I want to show that every vector of $\mathbb{F}_q^{n-k}$ can be written as a linear combination of $m$ columns of $H$ iff $\rho \leq m$. I have thought the following: $$ d(C)=\min \{ d \in \mathbb{N} \mid \text{there are } d \text{ linearly dependent columns of } H\} $$ So any $d-1$ columns of $H$ are linearly independent, so each vector of $\mathbb{F}_q^n$ can be written as a linear combination of these columns. But what can we say about the vectors of $ \mathbb{F}_q^{n-k}$?
Instead of arguing with other people's answers in the comments I thought it might be more productive to present my own point of view. I find myself completely unable to understand why anyone would take off points for this student's answer. Just to be clear, this isn't because I'm being somehow lax or generous as a grader. My opinion is that this is a model solution to the problem, written clearly and well, and I can imagine writing exactly what this student wrote as part of homework solution or exam solution that I distribute to a class. In the context of Calculus I, it's also how I would do this problem on the board during class if a student asked me about it. On the Status of Infinity Some of the other calculus teachers here have mentioned that they teach their students that "infinity isn't a number". I find this statement very strange, and I suppose that my position is that infinity is a number. It certainly isn't a real number, since it's not included in the usual real number system. But neither is the imaginary unit $i$, and I don't think many people would argue that $i$ isn't a number. The number $i$ is included in the system of complex numbers, and the number $\infty$ is included in the system of extended real numbers, which is the set $\mathbb{R}\cup\{-\infty,\infty\}$. I don't see the difference. Of course, there's no standard definition of "number" in mathematics, so there's no objective truth either way. This is part of why it strikes me as so odd that a teacher would say that "$\infty$ isn't a number". It's possible that what they mean is that "you can't do arithmetic with $\infty$". But of course you can do arithmetic with $\infty$. For example,$$\infty + \infty = \infty,\qquad \infty \cdot \infty = \infty,\qquad\text{and}\qquad 3\cdot \infty = \infty.$$These definitions are absolutely standard in mathematics, and I would feel free to use them in a conference talk or journal article without comment. 
I would hope that most calculus students would know how to do basic arithmetic with $\infty$ by the end of a first calculus course, but apparently this varies by instructor. There are also arithmetic operations involving $\infty$ that are undefined, such as$$\infty - \infty,\qquad \frac{\infty}{\infty},\qquad\text{and}\qquad 0\cdot\infty.$$The last is sometimes defined to be zero (e.g. in the theory of Lebesgue integration), but in the context of calculus it's better to leave it undefined. As far as I know, all of this is completely standard, and in my experience arithmetic involving $\infty$ and $-\infty$ is commonly used by mathematicians without further explanation or comment. I've seen lots of examples of this, but to cite a specific one it's certainly the case that Rudin's Real & Complex Analysis textbook (an extremely standard choice for a graduate analysis course) uses the extended real number system throughout. On the Student's Answer The student's answer depends primarily on the following theorem Theorem. Let $f\colon \mathbb{R}\to\mathbb{R}$ and $g\colon\mathbb{R}\to\mathbb{R}$ be functions, and let $a\in [-\infty,\infty]$. If$$\lim_{x\to a} f(x) = L\qquad\text{and}\qquad \lim_{x\to a} g(x) = M$$ for some $L,M\in[-\infty,\infty]$ and the product $LM$ is defined, then$$\lim_{x\to a} f(x)\,g(x) = LM.$$ This is a well-known and standard theorem in analysis. In the context of this theorem, the student's work constitutes a perfectly good proof of the fact that$$\lim_{x\to\infty} \bigl(x-\sqrt{x}\bigr) = \infty.$$It is no more or less correct than something like$$\lim_{x\to 0} \frac{x\sin x + 2 \sin x}{x} = \lim_{x\to 0} \,\bigl(x+2\bigr)\!\left(\frac{\sin x}{x}\right) = (2)(1) = 2.$$I don't see why this proof would require any more explanation or rigor, in either a calculus or real analysis course, and I feel the same way about the student's proof. 
I suppose it might be reasonable for an analysis professor to always require students to cite the theorems that they are using, as opposed to using theorems implicitly as part of a calculation. I certainly don't think this would be a reasonable requirement for student answers in a calculus course. Should we teach arithmetic with infinity to calculus students? I do, and I would certainly hope that most other calculus instructors do as well. Dealing with the concept of infinity is a major theme of calculus, and the rules for arithmetic involving infinity ultimately derive from the idea of a limit. How does it help to avoid talking about this? Actually, it seems to me that it would be difficult to cover the idea of an "indeterminate form" without covering this material. I guess at least some of the teachers here manage to avoid saying that "infinity plus infinity equals infinity" by always saying "the sum of two quantities that are both approaching infinity again approaches infinity", but what's the purpose of being so obtuse? If there's a simple way to say something, just say it that way. And in any case, the reality is that you can do arithmetic with infinity. Saying that $\infty+\infty$ is undefined or indeed anything other than $\infty$ is just wrong, both at an intuitive level and from the point of view of standard notation and terminology. Students will figure out that it's true on their own, and will try to guess what other arithmetic rules you're not telling them. If you tell students that $\infty + \infty$ isn't $\infty$, you lose your credibility, and they won't believe you later when you tell them that $\infty - \infty$ isn't $0$. Okay, but should we mark the student wrong? Even if you don't talk about arithmetic involving infinity in your calculus class, the fact remains that it is absolutely standard mathematical notation. 
Students often seek help from mathematics tutors, other math professors, online videos, and so forth, and any one of those sources might be teaching your students about how to use infinity in this fashion. Can you really justify deducting points from students who don't write their mathematics the way that you want it written? I feel like one of the most basic principles of grading is that correct answers should receive full credit, unless the answer explicitly violates the instructions for the question. This student's answer is completely correct, and in my opinion giving it anything less than 5/5 is just arbitrary and unfair.
Let $X\sim N(\mu,1)$ and $Y\sim \text{Inverse-Gamma}(\alpha,\beta)$. For the Inverse-Gamma, I usually use the parameterization which leads to the following probability density function for $Y$: $f(y;\alpha,\beta)=\frac{\beta^{\alpha}}{\Gamma (\alpha)}\left(\frac{1}{y}\right)^{\alpha+1}e^{-\frac{\beta}{y}}$ I need to find the distribution of $T=X\sqrt{Y}$. According to my calculations, $T$ is not a non-central Student's t-distribution; it is a non-standardized Student's t instead, with $2\alpha$ degrees of freedom, location parameter $\mu$ and scale parameter $\sqrt{\frac{\beta}{\alpha}}$. Is it correct? Thank you. EDIT: These are my calculations:
The equation of the line through $2 + 3i$ and $0$ can be written as \[az + b \overline{z} = 0\]for some complex numbers $a$ and $b$. Find the quotient $b/a$ in rectangular form. \(\begin{array}{|rcll|} \hline az + b \overline{z} &=& 0 \\ &&\boxed{ z = 2+3i} \\ && \boxed{\overline{z} = 2-3i} \\ a(2+3i) + b (2-3i) &=& 0 \\ b (2-3i) &=& -a(2+3i) \\\\ \dfrac{b}{a}&=& \dfrac{-(2+3i)} {(2-3i)} \\\\ \dfrac{b}{a}&=& \dfrac{-(2+3i)} {(2-3i)}\cdot\dfrac{(2+3i)} {(2+3i)} \\\\ \dfrac{b}{a}&=& \dfrac{-(2+3i)(2+3i)} {(2-3i)(2+3i)} \\\\ \dfrac{b}{a}&=& \dfrac{-(4+12i+9i^2)} {4-9i^2} \quad & | \quad i^2=-1 \\\\ \dfrac{b}{a}&=& \dfrac{-(4+12i-9)} {4+9} \\\\ \dfrac{b}{a}&=& \dfrac{-(-5+12i)} {13} \\\\ \dfrac{b}{a}&=& \dfrac{5-12i} {13} \\\\ \mathbf{\dfrac{b}{a}} &\mathbf{=}& \mathbf{\dfrac{5} {13} -\dfrac{12} {13}i} \\ \hline \end{array}\)
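The answer can be sanity-checked numerically: with $b/a=\frac{5}{13}-\frac{12}{13}i$, every real multiple $z$ of $2+3i$ should satisfy $az+b\overline{z}=0$. A quick check in Python (normalizing $a=1$, since only the ratio matters):

```python
a = 1  # we may take a = 1, since only the ratio b/a matters
b = complex(5 / 13, -12 / 13)

for t in (-2.0, -0.5, 0.0, 1.0, 3.7):
    z = t * complex(2, 3)  # a point on the line through 0 and 2 + 3i
    assert abs(a * z + b * z.conjugate()) < 1e-12

# and b/a agrees with -(2+3i)/(2-3i):
assert abs(b / a - (-(2 + 3j) / (2 - 3j))) < 1e-12
```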
Dear Uncle Colin I'm stuck on a trigonometry proof: I need to show that $\cosec(x) - \sin(x) \ge 0$ for $0 < x < \pi$. How would you go about it? - Coming Out Short of Expected Conclusion Hi, COSEC, and thank you for your message! As is so often the case, there are several ways to approach this. The first approach I would try would be to turn the left hand side into a single fraction: $\frac{1}{\sin(x)} - \sin(x) \equiv \frac{1 - \sin^2(x)}{\sin(x)}$. The top of that is $\cos^2(x)$, so you have $\frac{\cos^2(x)}{\sin(x)}$. In the specified region, $\cos^2(x)$ is non-negative (it is zero at $x=\frac{\pi}{2}$), while $\sin(x) > 0$ (because the endpoints are excluded). Therefore, you have a non-negative number divided by a positive number, which is non-negative, as required. A really neat alternative is to note that, in the given domain, $0 \lt \sin(x) \le 1$. Dividing that through by $\sin(x)$, which is ok everywhere, because it's positive in that domain, we get $0 \lt 1 \le \cosec(x)$, which tells us that $\sin(x) \le 1 \le \cosec(x)$, so $\sin(x) \le \cosec(x)$, which means $\cosec(x) - \sin(x) \ge 0$. $\blacksquare$. Hope that helps! - Uncle Colin
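As a footnote, the inequality is easy to sanity-check numerically before (or after) proving it; the brute-force scan below is just an illustration:

```python
import math

# Evaluate cosec(x) - sin(x) on a fine grid strictly inside (0, pi).
worst = min(1 / math.sin(x) - math.sin(x)
            for x in (k * math.pi / 10000 for k in range(1, 10000)))

assert worst >= 0   # the inequality holds at every grid point
assert worst < 1e-6  # ... and the minimum (at x = pi/2) is essentially zero
```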
In this MO post, I ran into the following family of polynomials: $$f_n(x)=\sum_{m=0}^{n}\prod_{k=0}^{m-1}\frac{x^n-x^k}{x^m-x^k}.$$ In the context of the post, $x$ was a prime number, and $f_n(x)$ counted the number of subspaces of an $n$-dimensional vector space over $GF(x)$ (which I was using to determine the number of subgroups of an elementary abelian group $E_{x^n}$). Anyway, while I was investigating asymptotic behavior of $f_n(x)$ in Mathematica, I got sidetracked and (just for fun) looked at the set of complex roots when I set $f_n(x)=0$. For $n=24$, the plot looked like this: (The real and imaginary axes are from $-1$ to $1$.) Surprised by the unusual symmetry of the solutions, I made the same plot for a few more values of $n$. Note the clearly defined "tails" (on the left when even, top and bottom when odd) and "cusps" (both sides). You can see that after $n=60$-ish, the "circle" of solutions started to expand into a band of solutions with a defined outline. To fully absorb the weirdness of this, I animated the solutions from $n=2$ to $n=112$. The following is the result. Pretty weird right!? Anyhow, here are my questions: First, has anybody ever seen anything at all like this before? What's up with those "tails?" They seem to occur only on even $n$, and they are surely distinguishable from the rest of the solutions. Look how the "enclosed" solutions rotate as $n$ increases. Why does this happen? [Explained in edits.] Anybody have any idea what happens to the solution set as $n\rightarrow \infty$? Thanks to @WillSawin, we now know that all the roots are contained in an annulus that converges to the unit circle, which is fantastic. So, the final step in understanding the limit of the solution sets is figuring out what happens on the unit circle. We can see from the animation that there are many gaps, particularly around certain roots of unity; however, they do appear to be closing. 
The natural question is, which points on the unit circle "are roots in the limit"? In other words, what are the accumulation points of $\{z\left|z\right|^{-1}:z\in\mathbb{C}\text{ and }f_n(z)=0\}$? Is the set of accumulation points dense? @NoahSnyder's heuristic of considering these as a random family of polynomials suggests it should be, at least almost surely. These are polynomials in $\mathbb{Z}[x]$. Can anybody think of a way to rewrite the formula (perhaps recursively?) for the simplified polynomial, with no denominator? If so, we could use the new formula to prove the series converges to a function on the unit disc, as well as cut computation time in half. [See edits for progress.] Does anybody know a numerical method specifically for finding roots of high degree polynomials? Or any other way to efficiently compute solution sets for high $n$? [Thanks @Hooked!] Thanks everyone. This may not turn out to be particularly mathematically profound, but it sure is neat. EDIT: Thanks to suggestions in the comments, I cranked up the working precision to maximum and recalculated the animation. As Hurkyl and mercio suspected, the rotation was indeed a software artifact, and in fact evidently so was the thickening of the solution set. The new animation looks like this: So, that solves one mystery: the rotation and inflation were caused by tiny roundoff errors in the computation. With the image clearer, however, I see the behavior of the cusps more clearly. Is there an explanation for the gradual accumulation of "cusps" around the roots of unity? (Especially 1.) EDIT: Here is an animation of $Arg(f_n)$ up to $n=30$. I think we can see from this that $f_n$ should converge to some function on the unit disk as $n\rightarrow \infty$. I'd love to include higher $n$, but this was already rather computationally exhausting. Now, I've been tinkering and I may be onto something with respect to point $5$ (i.e. seeking a better formula for $f_n(x)$). 
The following claims aren't proven yet, but I've checked each up to $n=100$, and they seem inductively consistent. Here denote $\displaystyle f_n(x)=\sum_{m}a_{n,m}x^m$, so that $a_{n,m}\in \mathbb{Z}$ are the coefficients in the simplified expansion of $f_n(x)$. First, I found $\text{deg}(f_n)=\text{deg}(f_{n-1})+\lfloor \frac{n}{2} \rfloor$. The solution to this recurrence relation is $$\text{deg}(f_n)=\frac{1}{2}\left({\left\lceil\frac{1-n}{2}\right\rceil}^2 -\left\lceil\frac{1-n}{2}\right\rceil+{\left\lfloor \frac{n}{2} \right\rfloor}^2 + \left\lfloor \frac{n}{2} \right\rfloor\right)=\left\lceil\frac{n^2}{4}\right\rceil.$$ If $f_n(x)$ has $r$ more coefficients than $f_{n-1}(x)$, the leading $r$ coefficients are the same as the leading $r$ coefficients of $f_{n-2}(x)$, pairwise. When $n>m$, $a_{n,m}=a_{n-1,m}+\rho(m)$, where $\rho(m)$ is the number of integer partitions of $m$. (This comes from observation, but I bet an actual proof could follow from some of the formulas here.) For $n\leq m$ the $\rho(m)$ formula first fails at $n=m=6$, and not before for some reason. There is probably a simple correction term I'm not seeing - and whatever that term is, I bet it's what's causing those cusps. Anyhow, with this, we can almost make a recursive relation for $a_{n,m}$,$$a_{n,m}= \left\{ \begin{array}{ll} a_{n-2,m+\left\lceil\frac{n-2}{2}\right\rceil^2-\left\lceil\frac{n}{2}\right\rceil^2} & : \text{deg}(f_{n-1}) < m \leq \text{deg}(f_n)\\ a_{n-1,m}+\rho(m) & : m \leq \text{deg}(f_{n-1}) \text{ and } n > m \\ ? & : m \leq \text{deg}(f_{n-1}) \text{ and } n \leq m \end{array} \right.$$but I can't figure out the last part yet. EDIT: Someone pointed out to me that if we write $\lim_{n\rightarrow\infty}f_n(x)=\sum_{m=0}^\infty b_{m} x^m$, then it appears that $f_n(x)=\sum_{m=0}^n b_m x^m + O(x^{n+1})$. The $b_m$ there seem to me to be relatively well approximated by the $\rho(m)$ formula, considering the correction term only applies for a finite number of recursions. 
So, if we have the coefficients up to an order of $O(x^{n+1})$, we can at least prove the polynomials converge on the open unit disk, which the $Arg$ animation suggests is true. (To be precise, it looks like $f_{2n}$ and $f_{2n+1}$ may have different limit functions, but I suspect the coefficients of both sequences will come from the same recursive formula.) With this in mind, I put a bounty up for the correction term, since from that all the behavior will probably be explained. EDIT: The limit function proposed by Gottfriend and Aleks has the formal expression $$\lim_{n\rightarrow \infty}f_n(x)=1+\prod_{m=1}^\infty \frac{1}{1-x^m}.$$I made an $Arg$ plot of $1+\prod_{m=1}^r \frac{1}{1-x^m}$ for up to $r=24$ to see if I could figure out what that ought to ultimately end up looking like, and came up with this: Purely based off the plots, it seems not entirely unlikely that $f_n(x)$ is going to the same place this is, at least inside the unit disc. Now the question is, how do we determine the solution set at the limit? I speculate that the unit circle may become a dense combination of zeroes and singularities, with fractal-like concentric "circles of singularity" around the roots of unity... :)
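On question 5 (a denominator-free formula): the summands of $f_n$ are Gaussian binomial coefficients $\binom{n}{m}_x$, and these satisfy the $q$-Pascal recurrence $\binom{n}{m}_q=\binom{n-1}{m-1}_q+q^m\binom{n-1}{m}_q$, so the coefficients of $f_n$ can be built entirely in $\mathbb{Z}[x]$ with no division at all. A minimal sketch (the function names are mine):

```python
def gaussian_rows(n):
    """Coefficient lists of the Gaussian binomials [n, m]_q for m = 0..n,
    built with the q-Pascal recurrence [n,m] = [n-1,m-1] + q^m [n-1,m]."""
    row = [[1]]
    for k in range(1, n + 1):
        new = [[1]]
        for m in range(1, k):
            a, b = row[m - 1], row[m]
            c = [0] * max(len(a), len(b) + m)
            for i, v in enumerate(a):
                c[i] += v
            for i, v in enumerate(b):
                c[i + m] += v  # the q^m shift
            new.append(c)
        new.append([1])
        row = new
    return row

def f_coeffs(n):
    """Integer coefficients of f_n(x) = sum_m [n, m]_x."""
    rows = gaussian_rows(n)
    c = [0] * max(len(r) for r in rows)
    for r in rows:
        for i, v in enumerate(r):
            c[i] += v
    return c

# f_4(2) should count the subspaces of a 4-dim space over GF(2): 1+15+35+15+1 = 67
assert sum(v * 2**i for i, v in enumerate(f_coeffs(4))) == 67
# and deg(f_n) = ceil(n^2/4), e.g. deg(f_6) = 9
assert len(f_coeffs(6)) - 1 == 9
```

Since everything stays in exact integer arithmetic, the output can be handed to an arbitrary-precision root finder without the roundoff artifacts mentioned in the first edit.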
I am trying to understand the Feynman path integral by reading the book from Leon Takhtajan. In one of the examples, there is a full explanation of the calculation of the propagator $$K(\mathbf{q'},t';\mathbf{q},t) = \frac{1}{(2\pi\hbar)^n} \int_{\mathbb{R}^n} e^{\frac{i}{\hbar}(\mathbf{p}(\mathbf{q'}-\mathbf{q})-\frac{\mathbf{p}^2}{2m}T)} d^n\mathbf{p},\quad T=t'-t.$$ in the case of a free quantum particle with Hamiltonian operator $$H_0 = \frac{\mathbf{P}^2}{2m},$$ and the solution is given by $$K(\mathbf{q'},t';\mathbf{q},t) = \left(\frac{m}{2\pi i \hbar T}\right)^{\frac{n}{2}} e^{\frac{im}{2\hbar T}(\mathbf{q}-\mathbf{q'})^2}.$$ Could you please help me to understand how to perform the calculation in the case where the Hamiltonian is given by $$H_1 = \frac{\mathbf{P}^2}{2m} + V(\mathbf{Q})$$ where $V(\mathbf{Q})$ is the potential defined by $$ V(\mathbf{Q})=\left\{ \begin{array}{cc} \infty, & \mathbf{Q} \leq b \\ 0, & \mathbf{Q}>b. \\ \end{array} \right. $$ Update: I've read the article provided by Trimok, and another one found in the references, but I am still annoyed with the way the propagator is computed. I may be mistaken, but it seems that in that kind of article, they always start the computation from scratch, without using what they already know about path integrals. I am actually trying to write something about the use of path integrals in option pricing. 
From Takhtajan's book, I know that for a general Hamiltonian $H=H_0 + V(q)$ where $H_0 = \frac{P^2}{2m}$, the path integral in the configuration space (or more precisely the propagator) is given by \begin{equation} \begin{array}{c} \displaystyle K(q',t';q,t) = \lim_{n\to\infty}\left(\frac{m}{2\pi\hbar i \Delta t}\right)^{\frac{n}{2}} \\ \displaystyle \times \underset{\mathbb{R}^{n-1}}{\int \cdots\int} \exp\left\{\frac{i}{\hbar}\sum_{k=0}^{n-1}\left(\frac{m}{2}\left(\frac{q_{k+1} - q_k}{\Delta t}\right)^2 - V(q_k)\right)\Delta t\right\} \prod_{k=1}^{n-1} dq_k.\\ \end{array} \end{equation} I would like to start my computation from this result, and avoid repeating once again the time slicing procedure. So, due to the particular form of the potential, I think I can rewrite the previous equation as \begin{equation} \begin{array}{c} \displaystyle K(q',t';q,t) = \lim_{n\to\infty}\left(\frac{m}{2\pi\hbar i \Delta t}\right)^{\frac{n}{2}} \\ \displaystyle \times \int_0^{+\infty} \cdots\int_0^{+\infty} \exp\left\{\frac{i}{\hbar}\frac{m}{2}\sum_{k=0}^{n-1}\frac{(q_{k+1} - q_k)^2}{\Delta t}\right\} \prod_{k=1}^{n-1} dq_k.\\ \end{array} \end{equation} Then I need a trick to go back to full integrals over $\mathbb{R}$ and use what I already know about the free particle propagator. However, since the integrals are coupled, I can't find the right way to end the calculation and reach the result provided by Trimok. Could you please tell me if I am right or wrong? Thanks.
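This doesn't touch the wall problem itself, but as a sanity check of the free-particle building block quoted above, the momentum integral can be evaluated numerically in imaginary time (Wick rotation $T\to -i\tau$, one dimension, $m=\hbar=1$), where it becomes an ordinary Gaussian; the code and names below are my own sketch:

```python
import math

def K_euclidean_numeric(dq, tau, dp=1e-3, pmax=30.0):
    """Riemann-sum evaluation of (1/2pi) * int dp exp(-p^2 tau / 2) cos(p dq)."""
    n = int(pmax / dp)
    s = sum(math.exp(-0.5 * (k * dp) ** 2 * tau) * math.cos(k * dp * dq)
            for k in range(-n, n + 1))
    return s * dp / (2 * math.pi)

def K_euclidean_exact(dq, tau):
    """Closed form: sqrt(1 / (2 pi tau)) * exp(-dq^2 / (2 tau))."""
    return math.sqrt(1 / (2 * math.pi * tau)) * math.exp(-dq**2 / (2 * tau))

assert abs(K_euclidean_numeric(0.3, 0.5) - K_euclidean_exact(0.3, 0.5)) < 1e-9
```

Rotating back $\tau \to iT$ recovers the oscillatory kernel $(m/2\pi i\hbar T)^{1/2}e^{im(q-q')^2/2\hbar T}$ quoted from the book.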
Overview To find the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\), we must first subdivide that sphere into many very skinny shells and find the gravitational force exerted by any one of those shells on \(m\). We'll see, however, that finding the gravitational force exerted by such a shell is in and of itself a somewhat tedious exercise. In the end, we'll see that the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\) outside of the sphere (where \(D\) is the center-to-center separation distance between the sphere and particle) is completely identical to the gravitational force exerted by a particle of mass \(M\) on the mass \(m\) such that \(D\) is their separation distance. Finding the Gravitational Force Exerted by a Shell and a Sphere In this lesson, we'll use Newton's law of gravity and the concept of a definite integral to calculate the gravitational force exerted by a solid sphere of uniform mass density \(ρ\) on a particle of mass \(m\) at the point \(P\) (see Figure 1) where the particle is outside of the sphere. To solve this problem, we must subdivide the sphere into many very thin shells. By finding the gravitational pull exerted on \(m\) by any one of these shells, we can find the total gravitational force exerted on \(m\) by the entire sphere. Finding the gravitational force on \(m\) due to a spherical shell is, in and of itself, a fairly tedious problem. To find the force on \(m\) due to a spherical shell, we must subdivide the shell into many very thin rings. Summing the contributions to the total gravitational tug on \(m\) by every ring will give the total gravitational force exerted on \(m\) by the entire shell. In Figure 1, the solid \(QRR_1Q_1\) is one of these rings. We can subdivide this ring into many tiny pieces of volume \(dV\). 
Since the mass density throughout the sphere is constant, this mass density is given by the equation $$ρ=\frac{\text{Mass inside of volume}}{\text{Volume}}.$$ Using this equation, we can determine that the mass of one of the tiny pieces comprising the ring is given by $$dm=ρ\,dV.$$ Since each dimension of \(dV\) is infinitesimally small, we can regard the entire mass \(dm\) as being concentrated into a single point. This is good news. Newton's law of gravity only applies to particles, and since both the mass element \(dm\) in the ring and the mass \(m\) at \(P\) are particles, we can use Newton's law of gravity to find the gravitational force exerted by \(dm\) on \(m\). Doing so, we have $$f=Gm\frac{ρ}{QP^2}dV.\tag{1}$$ Equation (1) represents the gravitational force exerted by any mass element \(dm\) in the ring on the mass \(m\) at \(P\). To find the total gravitational force exerted on \(m\) by the entire ring, we must add up all the forces acting on \(m\) due to every mass element \(dm\) comprising the ring. Since every mass \(dm\) is an equal distance \(r\) away from \(m\), each mass \(dm\) exerts an equal force on \(m\). Furthermore, since the \(x\)-component of force exerted by \(dm\) is given by $$f_x=f\sin(OPQ),$$ and since the angle \(OPQ\) is the same for every mass element in the ring, it follows that every mass \(dm\) exerts the same \(x\)-component of force, \(f_x\), on \(m\). When we add up the forces acting on \(m\) due to each mass element \(dm\), for any mass element \(Q_1Q\) on the ring which exerts a horizontal force \(\vec{f}_x\) on \(m\) there is another mass element \(R_1R\) on the ring which exerts \(-\vec{f}_x\) on \(m\). 
Thus, we only need to add up the \(y\)-components of force (which we'll represent by \(f_y\)), which are given by $$f_y=Gm\frac{ρ}{QP^2}dV\cos(OPQ).\tag{2}$$ If we add up each force \(f\) exerted on \(m\) by each \(dm\), all of the \(x\)-components of \(f\) cancel, leaving us with just the infinite sum of \(f_y\): $$f_{ring}=\int{f_y}=\frac{Gmρ\cos(OPQ)}{(QP)^2}\int{dV}.\tag{3}$$ Notice that since every term in Equation (2) is constant for every \(dm\), we were able to pull all of those terms outside of the integral, as we did in Equation (3). The integral, \(\int{dV}\), is just the volume of the ring. Thus, Equation (3) becomes $$f_{ring}=\frac{Gmρ\cos(OPQ)}{(QP)^2}\biggl(\text{Volume of ring}\biggr).\tag{4}$$ The volume of the ring is given by the product of the ring's circumference \(2π(QS)\), the arc length \(Q_1Q\), and the shell's thickness \(t\). Thus, $$\text{Volume of ring}=2π(QS)(Q_1Q)t.$$ Using the relationships \(Q_1Q=a\,dθ\) and \(QS=a\sinθ\), the above equation becomes $$\text{Volume of ring}=2π(a\sinθ)(a\,dθ)t.\tag{5}$$ Substituting Equation (5) into (4), we have $$f_{ring}=\frac{Gmρ\cos(OPQ)}{(QP)^2}·2π(a\sinθ)(a\,dθ)t.\tag{6}$$ You might be asking why we substituted Equation (5) into (4). As I mentioned earlier, to find the total force exerted by the shell on \(m\), we must add up all the forces due to every ring. In other words, we have to be able to calculate the integral, \(\int{f_{ring}}\). To be able to calculate this integral, we must do something similar to what we have been doing in so many previous lessons concerning the applications of definite integrals; namely, we want to represent \(\int{f_{ring}}\) in the same form as \(\int_a^bf(x)dx\). To do this, we need to represent everything in Equation (6) in terms of a single variable. 
It is all too easy to get lost in the math and lose track of what we're doing; but everything we have done since deriving Equation (4), and everything that we'll continue to do until we finally take the integral of \(f_{ring}\), will involve altering Equation (4) until it is represented in terms of a single variable. That tangent aside, let's see if there is anything that we can do to Equation (6) to come closer to reaching our goal. As you can see from Figure 1, $$\cos(OPQ)=\frac{SP}{r}=\frac{D-OS}{r}=\frac{D-a\cosθ}{r}.$$ Substituting this result in Equation (6), we have $$f_{ring}=\frac{Gmρ}{r^2}\biggl(\frac{D-a\cosθ}{r}\biggr)(2πa^2t\sinθ\,dθ).\tag{7}$$ To make Equation (7) of the same form as \(f(x)dx\), we can either express everything in Equation (7) in terms of \(θ\) or everything in terms of \(r\). Doing either would work and would allow us to calculate the integral \(\int{f_{ring}}\). Let's represent everything in Equation (7) in terms of \(r\). We can do this by making the appropriate substitutions to eliminate all of the \(θ\) terms. Let's apply the law of cosines to the triangle \(OQP\) in Figure 1 to get $$r^2=a^2+D^2-2aD\cosθ.\tag{8}$$ Let's take the derivative on both sides of Equation (8) with respect to \(θ\) to get $$2r\biggl(\frac{dr}{dθ}\biggr)=2aD\sinθ.\tag{9}$$ Making some algebraic simplifications, Equation (9) becomes $$\frac{r\,dr}{D}=a\sinθ\,dθ.\tag{10}$$ Also, doing some algebraic manipulations on Equation (8), we have $$r^2-a^2+D^2=2D^2-2aD\cosθ.$$ This equation can be further simplified to $$r^2-a^2+D^2=2D(D-a\cosθ)$$ or $$\frac{r^2-a^2+D^2}{2D}=D-a\cosθ.\tag{11}$$ It is perfectly natural at this point to be asking yourself why we went through the trouble of doing all that. Well, the reason is that we can substitute Equations (10) and (11) into Equation (7) to eliminate all of the \(θ\) terms and to represent everything in Equation (7) in terms of \(r\). 
Making these substitutions, Equation (7) becomes $$f_{ring}=\frac{Gmρ}{r^2}\frac{\biggl(\frac{r^2-a^2+D^2}{2D}\biggr)}{r}(2πat)\frac{r\,dr}{D}$$ or $$f_{ring}=\frac{Gmρπat}{D^2}\biggl(\frac{r^2+D^2-a^2}{r^2}\biggr)dr.\tag{12}$$ All of the messy math that we did in the steps between Equations (4) and (12) was to represent \(f_{ring}\) in the same form as \(f(x)dx\). After all of the messy math we went through, as you can see, Equation (12) is of the form \(f(r)dr\). We could've taken the integral, \(\int{f_{ring}}\), a long time ago to get the gravitational force exerted by the entire shell; but not until now (after having derived Equation (12)) could we write down $$f_{shell}=\int{f_{ring}}=\int_{?_1}^{?_2}f(r)dr$$ and actually calculate this force. Taking the integral on both sides of Equation (12), we have $$f_{shell}=\frac{Gmρπat}{D^2}\int_{?_1}^{?_2}\frac{r^2+D^2-a^2}{r^2}dr.\tag{13}$$ As you can see from Figure 1, the lower and upper limits of integration for the integral in Equation (13) are \(D-a\) and \(D+a\), respectively. Thus, $$f_{shell}=\frac{Gmρπat}{D^2}\int_{D-a}^{D+a}\left(1+\frac{D^2-a^2}{r^2}\right)dr.\tag{14}$$ To keep things from getting too messy, let's ignore the \(Gmρπat/D^2\) term in Equation (14) for just a moment and focus on calculating the definite integral in Equation (14). 
Doing so, we have $$\int_{D-a}^{D+a}\left(1+\frac{D^2-a^2}{r^2}\right)dr=\int_{D-a}^{D+a}(1)dr+\int_{D-a}^{D+a}\frac{D^2-a^2}{r^2}dr$$ $$=\biggl[r\biggr]_{D-a}^{D+a}+\biggl[\frac{-1}{r}\biggr]_{D-a}^{D+a}(D^2-a^2)$$ $$=D+a-(D-a)+\biggl(\frac{1}{D-a}-\frac{1}{D+a}\biggr)(D^2-a^2).$$ If we multiply the two terms, \(1/(D-a)\) and \(-1/(D+a)\), by \((D+a)/(D+a)\) and \((D-a)/(D-a)\), respectively, we have $$D+a-(D-a)+\biggl(\frac{D+a}{(D-a)(D+a)}-\frac{D-a}{(D+a)(D-a)}\biggr)(D^2-a^2)=2a+\biggl(\frac{2a}{D^2-a^2}\biggr)(D^2-a^2)=4a.$$ Substituting this result for the integral in Equation (14), we have $$f_{shell}=\frac{Gmρπat}{D^2}(4a)=\frac{4πGmρa^2t}{D^2}.\tag{15}$$ The gravitational force exerted by a shell on a mass \(m\) outside of the shell is given by Equation (15). Since \(ρ\) is the mass density of the shell and \((4πa^2)t\) is its volume, we see that Equation (15) can also be written as $$f_{shell}=G\frac{mM_{shell}}{D^2}.\tag{15}$$ What Equation (15) means is that a thin shell exerts a force on a particle outside of the shell as if all of the shell's mass were concentrated at a single point at the center of the shell. To find the gravitational force exerted by a solid sphere on a particle outside of the sphere, we must add up the forces on \(m\) due to infinitely many, infinitesimally thin shells. Since any such shell is infinitesimally thin, let's replace \(t\) with \(dr\). Taking the integral of both sides of Equation (15), we have $$f_{\text{Solid sphere}}=\frac{4πGmρ}{D^2}\int_0^Rr^2dr\tag{16}$$ where \(R\) is the radius of the sphere. 
Calculating the integral in Equation (16), Equation (16) simplifies to $$f_{\text{Solid sphere}}=\frac{4πGmρ}{D^2}\biggl(\frac{R^3}{3}\biggr)$$ or $$f_{\text{Solid sphere}}=G\frac{m\biggl(ρ\frac{4}{3}πR^3\biggr)}{D^2}.\tag{17}$$ Since \(ρ\frac{4}{3}πR^3\) is just the mass of the solid sphere, Equation (17) simplifies to $$f_{\text{Solid sphere}}=G\frac{mM_{sphere}}{D^2}.\tag{18}$$ Equation (18) tells us that a sphere of mass \(M\) and radius \(R\) exerting a gravitational force on a point-mass \(m\) outside of the sphere exerts the same force as a particle of mass \(M\) acting on the point-mass \(m\) such that the two particles' separation distance is \(D\). When the Earth exerts a gravitational force on an object, if that object is very small compared to the Earth, then that object can be approximated as a point-mass and the Earth can be approximated as a sphere with uniform mass density. The gravitational force exerted on such objects is given by Equation (18). This article is licensed under a CC BY-NC-SA 4.0 license. References 1. Kline, Morris. "Some Physical Applications of the Definite Integral." Calculus: An Intuitive and Physical Approach. Mineola, NY: Dover Publications, 1998. 502. Print.
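The whole chain of substitutions can be double-checked numerically: summing the ring contributions $dF = Gm\rho t\,(2\pi a^2\sin\theta)(D-a\cos\theta)/r^3\,d\theta$ over $\theta\in(0,\pi)$, with $r^2=a^2+D^2-2aD\cos\theta$, should reproduce $GmM_{shell}/D^2$ with $M_{shell}=\rho(4\pi a^2)t$. A small sketch in Python (the units and function names are mine):

```python
import math

def shell_force(a, D, G=1.0, m=1.0, rho=1.0, t=1.0, steps=200_000):
    """Midpoint-rule sum of the ring contributions over theta in (0, pi):
    dF = G m rho t * 2 pi a^2 sin(theta) * (D - a cos(theta)) / r^3 dtheta,
    with r^2 = a^2 + D^2 - 2 a D cos(theta)."""
    dtheta = math.pi / steps
    total = 0.0
    for k in range(steps):
        th = (k + 0.5) * dtheta
        r = math.sqrt(a * a + D * D - 2 * a * D * math.cos(th))
        total += (2 * math.pi * a * a * math.sin(th)
                  * (D - a * math.cos(th)) / r**3) * dtheta
    return G * m * rho * t * total

a, D = 1.0, 3.0
point_mass_force = 4 * math.pi * a * a / D**2  # G m M_shell / D^2 with G=m=rho=t=1
assert abs(shell_force(a, D) - point_mass_force) < 1e-8
```

The agreement to eight decimal places is a reassuring check that no factor was lost between Equations (4) and (15).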
I want to derive the basic equations for the rate of electron capture, taking into account the relativistic kinematics explicitly: $$p^{+}+e^{-}\rightarrow n^{0}+\nu_{e}$$ The idea is to treat the electronic wave function like a particle in a box, rather than use full-blown atomic orbitals. But I want to do the full calculation over the kinematics, and compute the matrix elements, form factors, etc. I am assuming one can derive this using something like the differential cross section for $2\rightarrow 2$ body scattering $$1+2\rightarrow 3+4$$ $$d\sigma_{i,f}\sim\left(\dfrac{1}{64\pi^{2}s}\dfrac{|\vec{p}_{f}|}{|\vec{p}_{i}|}\big\vert\mathcal{M}_{i,f}\big\vert^{2}d\Omega\right)$$ (in the C.M. frame, $\mathbf{p_{e}}=-\mathbf{p_{p}}$), and the rate is obtained by integrating the differential cross section, to get something like $$\Gamma=\big\vert\psi_{e}(0)\big\vert^{2}\,v\,\sigma$$ where $\sigma$ is the unpolarized cross section, and where $$\big\vert\psi_{e}(0)\big\vert^{2}$$ is the electronic density at the origin (the nuclear charge). The matrix elements $\mathcal{M}_{i,f}$ can be computed from the Lagrangian for the weak interaction. I am trying to get to an expression for the rate of capture which is something like $$d\lambda_{ep}=\left(\dfrac{1}{2\pi}\right)^{2}\dfrac{\sum_{fi}\big\vert\mathcal{M}_{fi}\big\vert^{2}\,\big\vert\psi_{e}(\mathbf{x})\big\vert^{2}_{\mathbf{x}=0}}{16E_{p}E_{e}\;\big\vert\mathbf{k}\cdot(E_{n}\mathbf{k}-k^{0}\mathbf{p}_{n})\big\vert}k^{3}d\Omega_{k}$$ where $\mathbf{k}$ is the momentum 3-vector for the outgoing neutrino $\nu_{e}$. Note that the rate calculations integrate over the solid angle of the neutrino.
Given an electric charge $q$ of mass $m$ moving at a velocity ${\bf v}$ in a region containing both electric field ${\bf E}(t,x,y,z)$ and magnetic field ${\bf B}(t,x,y,z)$ (${\bf B}$ and ${\bf E}$ are derivable from a scalar potential $\phi(t, x, y, z) $and a vector potential ${\bf A}(t,x,y,z)$), knowing that ${\bf E}=- \nabla \phi - \frac{\partial {\bf A}} {\partial t}$ ${\bf B}= \nabla \times {\bf A} $ $U=q \phi - q {\bf A} \cdot{\bf v}$ ($U$ is the velocity-dependent potential) the Lagrangian is $$L=(1/2) m v^2-U=(1/2) m v^2- q\phi + q{\bf A} \cdot{\bf v}.$$ Considering just the $x$-component of Lagrange's equation, how can I obtain $$m \ddot{x}=q\left (v_x \frac{\partial A_x}{\partial x} + v_y \frac {\partial A_y}{ \partial x} + v_z \frac{\partial A_z} {\partial x}\right )-q\left (\frac{\partial \phi }{\partial x} + \frac{d A_x}{dt}\right) ~?$$
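For what it's worth, here is a sketch of how the $x$-component falls out of the Euler–Lagrange equation $\frac{d}{dt}\frac{\partial L}{\partial \dot{x}}=\frac{\partial L}{\partial x}$, keeping only the intermediate steps:

```latex
% partial derivatives with respect to xdot and x:
\frac{\partial L}{\partial \dot x} = m\dot x + qA_x
\qquad\Longrightarrow\qquad
\frac{d}{dt}\frac{\partial L}{\partial \dot x} = m\ddot x + q\frac{dA_x}{dt}
% using  A . v = A_x v_x + A_y v_y + A_z v_z :
\frac{\partial L}{\partial x}
  = -q\frac{\partial \phi}{\partial x}
    + q\left(v_x\frac{\partial A_x}{\partial x}
           + v_y\frac{\partial A_y}{\partial x}
           + v_z\frac{\partial A_z}{\partial x}\right)
% equating the two and isolating m xddot:
m\ddot x
  = q\left(v_x\frac{\partial A_x}{\partial x}
         + v_y\frac{\partial A_y}{\partial x}
         + v_z\frac{\partial A_z}{\partial x}\right)
  - q\left(\frac{\partial \phi}{\partial x} + \frac{dA_x}{dt}\right)
```

Here $\frac{dA_x}{dt}$ is the total (convective) derivative $\frac{\partial A_x}{\partial t}+({\bf v}\cdot\nabla)A_x$, which is what makes the bracketed terms recombine into $qE_x + q({\bf v}\times{\bf B})_x$.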
This should be simple but I know I'm going wrong somewhere and I can't figure out where. The curl of an electric field is zero, i.e. [itex]\vec { \nabla } \times \vec { E } = 0[/itex] because no set of charges, regardless of their size and position, could ever produce a field whose curl is not zero. But Maxwell's third equation tells us that the curl of an electric field is equal to the negative partial time derivative of the magnetic field [itex] \vec {B}[/itex], i.e. [itex]\vec { \nabla } \times \vec { E } = -\frac { \partial }{ \partial t } \vec { B } [/itex] So is the curl zero or is it not? If we equate those two equations we get that the time derivative of the magnetic field is zero. What's wrong? What am I missing?
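For the electrostatic piece of this, the identity $\nabla\times(-\nabla\phi)=0$ can be checked numerically with any smooth test potential; the throwaway script below (my own, with an arbitrary choice $\phi=x^2y+z$) finite-differences the field components:

```python
# E = -grad(phi) for phi = x^2*y + z, written out by hand:
def E(x, y, z):
    return (-2 * x * y, -x * x, -1.0)

def curl_E(x, y, z, h=1e-5):
    """Central-difference curl of E at (x, y, z)."""
    dEz_dy = (E(x, y + h, z)[2] - E(x, y - h, z)[2]) / (2 * h)
    dEy_dz = (E(x, y, z + h)[1] - E(x, y, z - h)[1]) / (2 * h)
    dEx_dz = (E(x, y, z + h)[0] - E(x, y, z - h)[0]) / (2 * h)
    dEz_dx = (E(x + h, y, z)[2] - E(x - h, y, z)[2]) / (2 * h)
    dEy_dx = (E(x + h, y, z)[1] - E(x - h, y, z)[1]) / (2 * h)
    dEx_dy = (E(x, y + h, z)[0] - E(x, y - h, z)[0]) / (2 * h)
    return (dEz_dy - dEy_dz, dEx_dz - dEz_dx, dEy_dx - dEx_dy)

assert all(abs(c) < 1e-6 for c in curl_E(1.3, -0.7, 2.1))
```

Of course this only exercises a static field built from charges; the $-\partial\vec{B}/\partial t$ source term in Maxwell's third equation is exactly what such a static construction never produces.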
Electromagnetic Field Dynamics Variables > s.a. electromagnetism / electricity; magnetism; electromagnetism in media; tensor decomposition. * 3+1 version: The electric and magnetic field 3-vectors, ${\bf E} = -\nabla\phi - \partial{\bf A}/\partial t$ and ${\bf B} = \nabla \times {\bf A}$, in terms of the scalar potential $\phi$ and the vector potential ${\bf A}$. * Covariant version: The 4-potential $A_a = (\phi, {\bf A})$, with field tensor $F_{ab} = \partial_a A_b - \partial_b A_a$, from which ${\bf E}$ and ${\bf B}$ are recovered as spatial tensors. Maxwell's Equations > s.a. Coulomb's Law; Faraday's Law; Gauss' Law; electricity; magnetism [Ampère-Maxwell law]. * The Maxwell equations: If we include hypothetical magnetic charges and currents, they are \[ \def\dd{{\rm d}} \matrix{\underline{\rm Differential\ version\ (cgs,\ in\ a\ medium)} & \underline{{\rm Integral\ version\ (SI},\ \rho_{\rm m} = {\bf J}_{\rm m} = 0)}\hfil \cr \nabla\cdot{\bf B} = 4\pi\,\rho_{\rm m}\hfil & \oint_S {\bf B}\cdot\dd{\bf a} = 0\hfil \cr \nabla\times{\bf E} + {1\over c}\,{\partial{\bf B}\over\partial t} = -{4\pi\over c}\,{\bf J}_{\rm m}\hfil & \oint_C {\bf E}\cdot\dd{\bf s} = -{\dd\over\dd t}\int_S {\bf B}\cdot\dd{\bf a}\hfil \cr \nabla\cdot{\bf D} = 4\pi\,\rho_{\rm e}\hfil & \oint_S {\bf E}\cdot\dd{\bf a} = {Q\over\epsilon_0}\hfil \cr \nabla\times{\bf H} - {1\over c}\,{\partial{\bf D}\over\partial t} = {4\pi\over c}\,{\bf J}_{\rm e}\hfil & \oint_C {\bf B}\cdot\dd{\bf s} = \mu_{_0}I + \mu_{_0}\epsilon_{_0}\,{\dd\over\dd t} \int_S {\bf E}\cdot\dd{\bf a}}\] * Covariant versions: In terms of the Faraday tensor they can be written as ${\rm d}F = 0$ (the local existence condition for $A$) and ${\rm d}{*F} = {*J}$, or in index notation $\nabla_{[a} F_{bc]} = 0$ and $\nabla^a F_{ab} = J_b$; in terms of the electric and magnetic fields they are written using the timelike unit vector field $u^a$ that defines the space + time decomposition giving the fields. send feedback and suggestions to bombelli at olemiss.edu – modified 9 sep 2019
When water freezes, continuous translational symmetry is broken. When a metal becomes superconducting, what is the symmetry that gets broken? In most of the textbooks discussing this point, you should find something like: superconductors break the U(1)-gauge symmetry down to $\mathbb{Z}_{2}$. Fine, but what does it mean? To explain it, let me step a bit outside the mainstream discussion. What I'll discuss below is more a personal reflection than something clearly stated in any book. Clearly, the origin of superconductivity -- as explained by Bardeen, Cooper and Schrieffer (BCS) -- is the instability of the Fermi surface due to the Cooper pairing. So the first question to ask is: why is the Fermi surface stable? You can find more details in the previous link, so let me give you the ultimate answer: the Fermi surface is stable since it is a topological concept. In short, the Fermi surface can be defined as a quantity which is not perturbed by some interactions. You can add impurities to your solid and/or various interactions between your electrons, and the Fermi surface will not be deformed much. Of course, another arrangement of atoms in the solid gives another Fermi surface, but the stability of this new one still holds. Over the years, this concept of the stability of the Fermi surface has been refined, down to the work by Horava, reproduced in the book by Volovik. There you will see the topological invariant responsible for the stability of the Fermi surface (chapter 8), and the reason why it is a U(1) stability (well, it has to be Abelian for simple reasons you can guess easily, and the Fermi surface has a volume in energy-space, so it can be reduced to a circle). The point is: the Fermi surface is stable with respect to almost all interactions, with the exception of the Cooper pairing. The reason is simple to understand: most interactions conserve the number of particles, but the Cooper pairing transmutes the particles. 
In short, the vacuum of the Cooper pairs is no longer a Fermi gas/liquid, but a Bose gas/liquid. Then the volume of the Fermi surface (i.e. the number of fermions) is no longer conserved. In other words, the topological protection ensuring the stability of the Fermi surface is no longer at work once the Cooper pairing enters the stage. Now let us picturesquely understand why it is a $\text{U}\left(1\right)\rightarrow\mathbb{Z}_{2}$ breaking. The disappearance of the electrons at the Fermi surface creates a gap, reminiscent of the physics of semiconductors. There, you know that there are two bands (conduction and valence). This is the first hint of why only $\mathbb{Z}_{2}$. The second ingredient is that the Bose gas/liquid has no Fermi surface (tautology!), so it is not stable at all with respect to any interactions (re-tautology!), and so in principle the breaking should be from U(1) to nothing. But you still have two species of bosons: the hole-like and the particle-like, hence the twofold $\mathbb{Z}_{2}$ symmetry. Of course, all the arguments above are sketchy, so a more precise definition would still be most welcome. So, let us go back to the mainstream argument: the BCS interaction reads $$H_{\text{BCS}}\sim c^{\dagger}\left(x\right)c^{\dagger}\left(x\right)c\left(x\right)c\left(x\right)$$ in a simplified form. To this Hamiltonian you can apply the transformation $$c\left(x\right)\rightarrow e^{\mathbf{i}\varphi\left(x\right)}c\left(x\right)\;\;;\;\; c^{\dagger}\left(x\right)\rightarrow e^{-\mathbf{i}\varphi\left(x\right)}c^{\dagger}\left(x\right)$$ such that $H_{\text{BCS}}\rightarrow H_{\text{BCS}}$, and so $H_{\text{BCS}}$ is invariant under a U(1)-gauge transformation, since any real phase $\varphi$ is allowed and the group of multiplications by a phase $e^{\mathbf{i}\varphi\left(x\right)}$ is the group U(1). 
The mean-field counterpart of $H_{\text{BCS}}$ in the Cooper channel reads $$\tilde{H}_{\text{BCS}}\sim\Delta\left(x\right)c^{\dagger}\left(x\right)c^{\dagger}\left(x\right)+\Delta^{\dagger}\left(x\right)c\left(x\right)c\left(x\right)$$ and so it is only invariant when we choose $\varphi\in\left\{ 0,\pi\right\}$ in the above gauge transformation. So there are only two possibilities if one wants to change the phase of the operators and keep the mean-field Hamiltonian invariant. The group with only two elements is called $\mathbb{Z}_{2}$. That's the microscopic origin of the $\text{U}\left(1\right)\rightarrow\mathbb{Z}_{2}$ gauge-symmetry breaking.
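The phase bookkeeping in the last two paragraphs can be checked mechanically. The snippet below is a sketch of my own: it tracks only the commuting phase factors, not the operator algebra, and confirms that the quartic term is invariant for every $\varphi$ while the pairing term survives only for $\varphi \in \{0, \pi\}$:

```python
import sympy as sp

phi = sp.symbols('varphi', real=True)

# Under c -> exp(i*phi) c and c^dagger -> exp(-i*phi) c^dagger, each term of the
# Hamiltonian just picks up an overall phase factor:
quartic_factor = sp.exp(-sp.I*phi)**2 * sp.exp(sp.I*phi)**2   # c+ c+ c c
pairing_factor = sp.exp(-sp.I*phi)**2                         # Delta c+ c+

print(sp.simplify(quartic_factor))    # 1: invariant for any phi (full U(1))
for angle in (0, sp.pi, sp.pi/2):
    print(angle, sp.simplify(pairing_factor.subs(phi, angle)))
# phi = 0 and phi = pi leave the mean-field term invariant; phi = pi/2 flips its
# sign, so only the two-element subgroup {0, pi} = Z_2 survives.
```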
Most of what I write is already written within the other answers/comments, but maybe the following helps you: When you talk about $SU(2)$ spinor transformations, I think you talk about non-relativistic spinors (?) (otherwise it should be $SL_2(\mathbb{C})$), so you should not view it as a subgroup of $\mathcal{L}^{\uparrow}_+$; if anything, it is $SL_2(\mathbb{C})$ that covers $\mathcal{L}^{\uparrow}_+$. Or, in your case (if my hypothesis is correct), $SU(2)$ that covers $SO(3)$. What's important is that in any case we're looking at the double covering $f: SU(2) \to SO(3)$ (or that of the Lorentz group). Now, the representation under which a spinor transforms is, by the very definition of a spinor, a representation of $SU(2)$, say $\rho: SU(2) \to GL(\mathbb{C}^4)$, that does not descend to a representation of $SO(3)$. It does, however, descend to a projective, i.e. ''double-valued'', representation of $SO(3)$. In the quantum mechanical context, this is no problem since both values give the same state. If you are, however, not working within a projective space, you cannot speak of a Lorentz (or $SO(3)$) transformation of the field/particle $\zeta$, and thus you are forced to view it as a transformation given by the double covering. Mathematically, as indicated in the comments, this amounts to viewing spinors as sections of the vector bundle associated to the spin structure and the representation, that is, $\zeta \in \Gamma(P_{SU(2)}(M) \times_{\rho} \mathbb{C}^4)$, where $M$ is a $3$-dimensional oriented Riemannian manifold endowed with a spin structure (observe that $Spin(3) \cong SU(2)$, whereas $Spin(1,3) \cong SL_2(\mathbb{C})$). (Maybe, in a more familiar language, $\zeta' = \rho(A) \zeta$, where $\zeta'$ is the spinor after the transformation $B = f(A) \in SO(3)$, $A \in SU(2)$.) 
If it were to ''transform under a Lorentz transformation'' (or $SO(3)$ in our context), it would be an element of $\Gamma(P_{SO(3)}(M) \times_{\Lambda}\mathbb{C}^4)$, where $\Lambda: SO(3) \to GL(\mathbb{C}^4)$ is your representation of choice. This really seems like a whole lot of overkill, and making it more precise would be even more of an overkill; it really is just a little elaboration of what's already written within the comments and the other answers. Finally, to answer your question: different indices indicate the different vector bundles (equivalently: the different symmetry group and the different corresponding representation) which I mentioned earlier - at least that's what I think...
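A concrete way to see the double cover at work is to build the map $f: SU(2)\to SO(3)$ numerically. The sketch below is my own illustration (the conventions and helper names are not from the answer): it shows that a $2\pi$ rotation corresponds to $-\mathbb{1}\in SU(2)$ even though its image under $f$ is the identity of $SO(3)$, which is exactly why $\rho$ is only double-valued on $SO(3)$:

```python
import numpy as np

# Pauli matrices
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

def su2(axis, theta):
    """SU(2) element exp(-i*theta/2 * n.sigma) for a rotation by theta about a unit axis."""
    n = np.asarray(axis, float) / np.linalg.norm(axis)
    ns = sum(ni * si for ni, si in zip(n, s))
    return np.cos(theta/2) * np.eye(2) - 1j * np.sin(theta/2) * ns

def covering_map(A):
    """f: SU(2) -> SO(3), with R_ij = (1/2) tr(sigma_i A sigma_j A^dagger)."""
    return np.real(np.array([[0.5 * np.trace(si @ A @ sj @ A.conj().T)
                              for sj in s] for si in s]))

A = su2([0, 0, 1], 2*np.pi)
print(np.allclose(A, -np.eye(2)))               # True: a 2*pi rotation gives -1 in SU(2)
print(np.allclose(covering_map(A), np.eye(3)))  # True: ...but the identity in SO(3)
```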
Lesson overview In this lesson, we'll discuss how, by using the concept of a definite integral, one can calculate the volume of something called an oblate spheroid. An oblate spheroid is essentially just a sphere which is compressed or stretched along one of its dimensions while leaving its other two dimensions unchanged. For example, the Earth is technically not a sphere—it is an oblate spheroid. To find the volume of an oblate spheroid, we'll start out by finding the volume of half of it. (If you cut an oblate spheroid in half along its equatorial plane, the two leftover pieces are dome-shaped half-spheroids.) To do this, we'll draw \(n\) cylindrical shells inside of the half-spheroid; by taking the Riemann sum of the volumes of the cylindrical shells, we can obtain an estimate of the volume enclosed inside of the half-spheroid. If we then take the limit of this sum as the number of cylindrical shells approaches infinity and their volumes approach zero, we'll obtain a definite integral which gives the exact volume inside of the half-spheroid. After computing this definite integral, we'll multiply the result by two to get the volume of the oblate spheroid. Finding the volume of an oblate spheroid In Figure 1, I have graphed the ellipse \(\frac{x^2}{9}+\frac{y^2}{4}=1\) on the \(xy\)-plane. If we rotate the ellipse about the \(y\)-axis (its minor axis), the ellipse will trace out the closed surface illustrated in Figure 3. The volume of revolution which that surface encloses is called an oblate spheroid. In this lesson, we'll use the concept of a definite integral to calculate the volume of an oblate spheroid. To calculate this volume, we'll first approximate the volume by summing the volumes of \(n\) cylindrical shells (see Figure 2) drawn within the oblate spheroid. After that, we'll take the limit of this sum as \(n→∞\). But before we do that, let's discuss how to construct a cylindrical shell and how to calculate its volume. 
Let's subdivide the interval \([0,3]\) on the \(x\)-axis into \(n\) equally spaced tick marks; let's label each tick mark with \(x_i\), where \(i=1,...,n\). In Figure 1, I have drawn a rectangle with width \(Δx=x_{i+1}-x_i\) and height \(f(x_i)\). If we rotate this rectangle about the \(y\)-axis, the rectangle will trace out the cylindrical shell illustrated in Figure 2. To calculate the volume of the cylindrical shell, we must take the product of the area of the cylindrical shell's base with its height. The ring \(QQ'RR'\) with width \(Δx=x_{i+1}-x_i\) in Figure 2 is the cylindrical shell's base. Let's subtract the area of the inner circle \(QQ'\) from the area of the outer circle \(RR'\) in Figure 2 to get the area of the cylindrical shell's base: $$A=π(x_{i+1})^2-π(x_i)^2.\tag{1}$$ Using basic algebra, we can rewrite Equation (1) as $$A=π\frac{x_i+x_{i+1}}{2}\biggl[2(x_{i+1}-x_i)\biggr].\tag{2}$$ The term \((x_i+x_{i+1})/2\) in Equation (2) is the average value of \(x_i\) and \(x_{i+1}\). In Figure 1, I have labeled the average of these two values as \(\bar{x}_i\) on the \(x\)-axis. Substituting \(\bar{x}_i\) into Equation (2), we have $$A=2π\bar{x}_iΔx.\tag{3}$$ (You might be asking yourself why we went through the trouble of rewriting Equation (1) in the form expressed in Equation (3). The reason we did this will become evident when we wish to express the limit of the sum of the volumes of the cylindrical shells as a definite integral. But we'll discuss this in more detail shortly.) As you can see from Figure 2, the height of a cylindrical shell is \(f(x_i)\). 
The volume of the \(i^{th}\) cylindrical shell is therefore given by $$ΔV_i=2π\bar{x}_if(x_i)Δx.\tag{4}$$ To estimate the volume of the half-spheroid, let's sum the volumes of all the cylindrical shells to get $$S_n=\sum_{i=1}^n2π\bar{x}_if(x_i)Δx.\tag{5}$$ When defining a definite integral, we always start with a sum of the form $$S_m=\sum_{i=1}^mg(x_i)Δx;\tag{6}$$ then, we take the limit of such a sum as \(m→∞\) to get $$\int_a^bg(x)dx=\lim_{m→∞}\sum_{i=1}^mg(x_i)Δx.$$ The problem with Equation (5) is that the term \(2π\bar{x}_if(x_i)Δx\) isn't of the same form as the \(g(x_i)Δx\) in Equation (6). We cannot define a function \(h(\bar{x}_i)\) or \(h(x_i)\) that we can set equal to \(2π\bar{x}_if(x_i)Δx\): that term requires two input values (namely, \(\bar{x}_i\) and \(x_i\)) to specify its value, whereas a function like \(g(x_i)\) in Equation (6) requires only one input value (namely, \(x_i\)). Fortunately, there is a way around this problem. Recall that it does not matter whether we take a left-hand Riemann sum (in which case the height of each rectangle would be \(g(x_i)\)), a right-hand Riemann sum (in which case the height of each rectangle is \(g(x_{i+1})\)), or a midpoint Riemann sum (in which case the height of each rectangle is \(g(\frac{x_i+x_{i+1}}{2})=g(\bar{x}_i)\)): the limit is the same. (We shall not discuss the reasons why here; but if you do not understand why, I strongly encourage you to review the topic.) For similar reasons, we could replace the \(f(x_i)\) in Equation (5) with either \(f(x_{i+1})\) or \(f(\bar{x}_i)\); doing so will not change the limit of the sum. (Indeed, we could replace \(f(x_i)\) in Equation (5) with \(f(x_i^*)\), where \(x_i≤x_i^*≤x_{i+1}\), and although Equation (5) would then give a different approximation of the half-spheroid's volume, its limit would remain the same. To understand why, it would be a good idea to review the concept of limits.) 
Swapping the \(f(x_i)\) in Equation (5) with \(f(\bar{x}_i)\), we get a different sum (which we'll denote by \(S_n'\)) given by $$S_n'=\sum_{i=1}^n2π\bar{x}_if(\bar{x}_i)Δx.\tag{7}$$ What's nice about Equation (7) is that the term \(2π\bar{x}_if(\bar{x}_i)Δx\) is expressed entirely in terms of the single variable \(\bar{x}_i\). Thus, Equation (7) is of the same form as Equation (6). If \(n→∞\) (which is to say, if the number of cylindrical shells within the half-spheroid approaches infinity), then the sum \(S'_n\) will get closer and closer to the exact volume of the half-spheroid. Thus $$\lim_{n→∞}\sum_{i=1}^n2π\bar{x}_if(\bar{x}_i)Δx=\int_0^32πxf(x)dx.\tag{8}$$ To evaluate the integral in Equation (8), we need to find out what the function \(f(x)\) is. \(f(x)\) represents the height (which is to say, the \(y\)-value) associated with each rectangle on the interval \([0,3]\). In other words, \(f(x)\) is the \(y\)-coordinate associated with each point along the quarter-ellipse in the first quadrant of the \(xy\)-plane illustrated in Figure 1. Recall that the equation \(\frac{x^2}{9}+\frac{y^2}{4}=1\) was used to graph each \((x,y)\) coordinate along the ellipse in Figure 1. If we restrict the domain to values of \(x\) and \(y\) where \(0≤x≤3\) and \(0≤y≤2\), then the equation \(\frac{x^2}{9}+\frac{y^2}{4}=1\) describes exactly the quarter-ellipse in the first quadrant of the \(xy\)-plane in Figure 1. Thus, with this restriction, the \(y\) in the equation \(\frac{x^2}{9}+\frac{y^2}{4}=1\) specifies the \(y\)-coordinate of each point along the quarter-ellipse. It therefore also specifies the height of each rectangle under the quarter-ellipse. This means that \(f(x)=y(x)\). 
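To make Equation (7) concrete, here is a quick numerical check (my own illustration, not part of the lesson) that the midpoint shell sums converge to the integral in Equation (8) as the number of shells grows:

```python
import numpy as np

def f(x):
    # height of a shell: y on the quarter-ellipse x^2/9 + y^2/4 = 1
    return np.sqrt(4 - 4/9 * x**2)

def shell_sum(n):
    # midpoint Riemann sum of Equation (7): sum of 2*pi*xbar*f(xbar)*dx over n shells
    edges = np.linspace(0.0, 3.0, n + 1)
    xbar = (edges[:-1] + edges[1:]) / 2
    dx = edges[1] - edges[0]
    return float(np.sum(2 * np.pi * xbar * f(xbar) * dx))

for n in (10, 100, 10000):
    print(n, shell_sum(n))   # approaches 12*pi ≈ 37.699
```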
Using the equation \(\frac{x^2}{9}+\frac{y^2}{4}=1\), we can solve for \(f(x)=y(x)\): $$\frac{x^2}{9}+\frac{(f(x))^2}{4}=1$$ $$\frac{(f(x))^2}{4}=1-\frac{x^2}{9}$$ $$f(x)=\sqrt{4-\frac{4}{9}x^2}.\tag{9}$$ Substituting Equation (9) into the integral in Equation (8), we have $$\text{Volume of half-spheroid}=\int_0^32πx\sqrt{4-\frac{4}{9}x^2}dx.\tag{10}$$ At this point, all of the hard work is done and we just need to solve the definite integral in Equation (10) and then multiply our answer by \(2\) to get the volume of the oblate spheroid illustrated in Figure 3. We can solve the integral in Equation (10) by using \(u\)-substitution. If we let \(u=4-\frac{4}{9}x^2\), then $$\frac{du}{dx}=\frac{-8}{9}x$$ $$du=\frac{-8}{9}xdx$$ $$dx=\frac{-9}{8}\frac{1}{x}du.\tag{11}$$ Substituting \(u\) and Equation (11) into (10), we have $$\text{Volume of half-spheroid}=\int_{?_1}^{?_2}(2πx)\biggl(\frac{-9}{8}\frac{1}{x}\biggr)u^{1/2}du$$ and $$\text{Volume of half-spheroid}=\frac{-9}{4}π\int_{?_1}^{?_2}u^{1/2}du.$$ When \(x=0\), \(u=4\); and when \(x=3\), \(u=4-\frac{4}{9}(3)^2=4-4=0\). Substituting the limits of integration into the integral above and solving, we have $$\text{Volume of half-spheroid}=\frac{-9}{4}π\biggl[\frac{2}{3}u^{3/2}\biggr]_4^0=\frac{-3}{2}π\biggl[u^{3/2}\biggr]_4^0=\frac{-3}{2}π\bigl(0-4^{3/2}\bigr)=12π.$$ Thus we have shown that the volume of the half-spheroid is \(12π\) cubic units. Multiplying this result by \(2\), we find that the volume of this oblate spheroid is given by $$\text{Volume of oblate spheroid}=24π.\tag{12}$$ This article is licensed under a CC BY-NC-SA 4.0 license.
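As a check on the result (my addition, using sympy), the integral in Equation (10) and the standard ellipsoid volume formula \(V=\frac{4}{3}πabc\) agree:

```python
import sympy as sp

x = sp.symbols('x', nonnegative=True)

# Equation (10): shell-method integral for the upper half of the spheroid
half = sp.integrate(2*sp.pi*x*sp.sqrt(4 - sp.Rational(4, 9)*x**2), (x, 0, 3))
print(half)       # 12*pi

# Cross-check: an ellipsoid with semi-axes a = 3, b = 3, c = 2 has volume 4/3*pi*a*b*c
full = sp.Rational(4, 3)*sp.pi*3*3*2
print(sp.simplify(2*half - full))   # 0, i.e. 2 * (12*pi) = 24*pi
```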
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like "yeah, the algebraists all place out, so I'm assuming everyone here is an analyst and doesn't care about commutative algebra". Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere, and everyone was like "uhhh". Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake; it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access: you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math. 
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus. One of my professors said it to describe a bunch of REUs; it basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of. @TedShifrin I think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" -- this took me way longer than it should have. Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change? 
Alternatively, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2. Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$. Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. Hmm... I wonder if you can argue using the fact that the rationals form a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$. For example, writing $x = ac+bd\delta$ and $y = bc+ad$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, allowing you to deduce that this equals $\alpha \otimes (\beta \otimes \gamma)$. I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of. One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ... 
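That shortcut is easy to mechanize; the sketch below (my own illustration) verifies associativity of $\otimes$ symbolically by reducing both groupings to polynomial identities over $\mathbb{Q}$, exactly the ring-property argument above:

```python
import sympy as sp

a, b, c, d, e, f, delta = sp.symbols('a b c d e f delta')

def mul(p, q):
    """Multiply p0 + p1*sqrt(delta) by q0 + q1*sqrt(delta), as coefficient pairs."""
    p0, p1 = p
    q0, q1 = q
    return (p0*q0 + p1*q1*delta, p1*q0 + p0*q1)

alpha, beta, gamma = (a, b), (c, d), (e, f)
lhs = mul(mul(alpha, beta), gamma)
rhs = mul(alpha, mul(beta, gamma))

# both components agree as polynomials, so the multiplication is associative
print([sp.expand(l - r) for l, r in zip(lhs, rhs)])   # [0, 0]
```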
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (But seriously, the best tactic is overpowered...) Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, each is the largest possible, e.g. the surreals are the largest possible ordered field. It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: we know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or implying CH, so if your set of axioms contains those, then you can decide the truth value of CH in that system. @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether to show that is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed. 
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy-of-infinity book. The knapsack problem or rucksack problem is a problem in combinatorial optimization: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem. Hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, minimising $|P(s)|$ proceeds as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm... 
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sums: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment on. Typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice, nor an infinite set. > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
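The partial sums above can be computed exactly with rational arithmetic alone, which fits the finitist flavor of the argument. Here is a small illustration of my own (taking $b=10$, which gives Liouville's constant) of how fast they stabilize:

```python
from fractions import Fraction

def liouville_partial(b, M):
    """Exact partial sum sum_{k=1}^{M} 1/b**(k!), using only rational arithmetic."""
    total, fact = Fraction(0), 1
    for k in range(1, M + 1):
        fact *= k
        total += Fraction(1, b**fact)
    return total

s3 = liouville_partial(10, 3)
s4 = liouville_partial(10, 4)
print(float(s3))        # 0.110001
print(float(s4 - s3))   # 1e-24: each new term is vanishingly small
```

The super-exponential shrinking of the terms is exactly what makes the limit so well approximated by rationals, which is what the Liouville-number definition quoted above captures.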
Back to Semi-infinite Programming. Sequential quadratic programming (SQP) methods can be applied to problems that satisfy the same assumptions as those required for the KKT reduction methods. SQP methods can be obtained from a local reduction of \(P\) to a finite program, which is inspired by the implicit function theorem. Let \(\overline{x}\) be a local minimizer of \(P\). Under suitable assumptions, \(T_{a}(\overline{x}) = \{ \overline{t_{j}}, j=1, \dots, q(\overline{x}) \}\), with \(q(\overline{x}) \in \mathbb{N}\), and there exist a neighborhood of \(\overline{x}\), say \(U_{\overline{x}}\), and smooth functions \(t_{j} : U_{\overline{x}} \rightarrow T, j= 1, \dots, q(\overline{x})\), such that \(t_{j}(\overline{x})=\overline{t_{j}}, j= 1, \dots, q(\overline{x})\), and for all \(x \in U_{\overline{x}}\), \(x\) is a local minimum of \(P\) if and only if it is a local minimum of the (finite) reduced problem at \(\overline{x}\) \[\begin{array}{lll} P(\overline{x}) & \min_{x} & f(x) \\ & \text{s.t.} & g(x,t_{j}(x)) \geq 0, \, \, j=1, \dots, q(\overline{x}). \end{array}\] Step \(k\): Start with a given \(x_{k}\) (not necessarily feasible). Determine the set of local minima \(\{ \overline{t}_{j}, j=1, \dots, q(x_{k}) \}\) of \(Q(x_{k})\). Apply \(N_{k}\) steps of an SQP solver (for finite programs) to \[\begin{array}{lll} P(x_{k}) & \min_{x} & f(x) \\ & \text{s.t.} & g(x,t_{j}(x)) \geq 0, \, \, j=1, \dots, q(x_{k}), \end{array}\] leading to iterates \(x_{k,i}, i=1, \dots, N_{k}\). Set \(x_{k+1} = x_{k,N_{k}}\) and \(k=k+1\). This sketch of the method can be easily adapted to generalized SIP. As happens with KKT reduction methods, SQP methods are usually combined with discretization or central cutting plane methods in two-phase methods. For more details, see Hettich and Kortanek (1993), López and Still (2007), and Stein (2003) in the Semi-infinite Programming References.
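To give a runnable flavor of these ideas (a toy of my own devising, illustrating the discretization phase mentioned above rather than the local reduction method itself), one can hand a finite grid of constraints from the index set \(T\) to an SQP solver such as SciPy's SLSQP:

```python
import numpy as np
from scipy.optimize import minimize

# Toy SIP:  min_x x   s.t.  g(x, t) = x - sin(t) >= 0  for all t in T = [0, pi/2].
# Discretize the index set T on a finite grid; each grid point yields one ordinary
# inequality constraint, turning the SIP into a finite program.
ts = np.linspace(0.0, np.pi / 2, 200)
cons = [{'type': 'ineq', 'fun': lambda x, t=t: x[0] - np.sin(t)} for t in ts]

res = minimize(lambda x: x[0], x0=[5.0], constraints=cons, method='SLSQP')
print(res.x[0])   # ~1.0; the binding index is t = pi/2, where sin(t) peaks
```

In a genuine two-phase method, the grid would be refined (or replaced by the locally active indices \(t_j(x_k)\)) between outer iterations; the fixed grid here is only for illustration.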
I am going to change the notation in order to make the equations more compact. The counterweight is $M$ and the payload is $m$. The length of the bar is $L$, and the distance from the center of gravity to the counterweight is $a=\frac{m}{M+m}\,L$ and from the payload $b=\frac{M}{M+m}\,L$, such that $L=a+b$. Note I have said nothing about the pivot yet. The distance between the pivot and the center of gravity is $c$, and it is an independent variable we wish to optimize. The pivot is between the center of gravity and the payload (for positive $c$). The angle of the bar is $\theta$, with $\theta=0$ when horizontal. The height of the pivot from the ground is $h$, such that when the counterweight hits the ground the payload is launched at $\theta_{f} = -45^\circ$. So $h=(a+c) \sin(-\theta_f)$. As a consequence, the initial angle obeys $\sin \theta_i = \frac{a+c}{b-c} \sin(- \theta_f )$ in order for the payload to rest on the ground initially. This is valid for $c<\frac{b-a}{2}$; otherwise the thing sits vertically with $\theta_i=\frac{\pi}{2}$. Doing the dynamics using Newton's laws or Lagrange's equations will yield the following acceleration formula $$ \ddot{\theta} = -\,\frac{c g (M+m) \cos(\theta)}{\frac{m M}{M+m} L^2 + (M+m) c^2} $$ The denominator is the moment of inertia about the pivot. 
Here is the fun stuff. The above can be integrated since the right hand side is a function of $\theta$ only, with a constant $\alpha$: $$ \ddot{\theta}=\frac{{\rm d}\dot\theta}{{\rm d}t} =-\alpha \cos(\theta) $$ $$ \frac{{\rm d}\dot\theta}{{\rm d}\theta} \frac{{\rm d}\theta}{{\rm d}t} = -\alpha \cos(\theta) $$ $$ \frac{{\rm d}\dot\theta}{{\rm d}\theta} \dot\theta = -\alpha \cos(\theta) $$ $$ \int \dot\theta {\rm d}\dot\theta =-\int \alpha \cos(\theta) {\rm d}\theta + K$$ $$ \frac{{\dot \theta}^2}{2} = -\alpha \sin \theta + K $$ with $K$ set by the initial conditions ($\theta=\theta_i$, $\dot\theta=0$) $$ \dot\theta = \sqrt{2 \alpha (\sin(\theta_i)-\sin(\theta))}$$ and final rotational velocity $$ \dot\theta_f = \sqrt{2 \alpha (\sin(\theta_i)-\sin(\theta_f))}$$ Tangentially, the payload launch velocity is $$ v_{B_f} = (b-c) \dot\theta_f = (b-c) \sqrt{2 \alpha (\sin(\theta_i)-\sin(\theta_f))} $$ with both $\alpha$ and $\theta_i$ depending on the variable $c$. To optimize we set $ \frac{{\rm d}v_{B_f}}{{\rm d}c}=0 $, which is solved by: $$ \frac{c}{L} = \frac{\sqrt{m} \left( \sqrt{M+m}-\sqrt{m} \right)}{M+m } $$ For example, a $m=20\,{\rm lbs}$ payload, with a $M=400\,{\rm lbs}$ counterweight on a $L=20\,{\rm ft}$ bar, requires the pivot to be $c=20\;\frac{\sqrt{20} \left( \sqrt{420}-\sqrt{20} \right)}{420 } = 3.412\;{\rm ft}$ from the center of gravity. The c.g. is $a=\frac{20}{420}\,20 = 0.952\,{\rm ft}$ from the counterweight. Edit 1 Based on comments made by the OP, the launch velocity is $$v_{B_{f}}=\left(\frac{M}{M+m}L-c\right)\sqrt{\frac{2cg(M+m)}{\frac{mM}{M+m}L^{2}+(M+m)c^{2}}\left(\sin\theta_{i}-\sin\theta_{f}\right)}$$ where $g=9.80665\,{\rm m/s^2}$ is the acceleration of gravity. With infinite counterweight the maximum launch velocity is $\max(v_{B_{f}})=\sqrt{\frac{2g(L-c)^{2}}{c}}$, so to reach $v_{B_{f}}=6000\,{\rm m/s}$ from earth with $c=1\,{\rm m}$ we need $L>1355.8\,{\rm m}$. 
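The worked example can be reproduced numerically. This short Python check (my addition) evaluates the optimal-pivot formula above for the $m=20\,{\rm lbs}$, $M=400\,{\rm lbs}$, $L=20\,{\rm ft}$ case:

```python
import math

# Optimal pivot offset from the center of gravity:
#   c/L = sqrt(m) * (sqrt(M+m) - sqrt(m)) / (M + m)
def optimal_pivot(M, m, L):
    return L * math.sqrt(m) * (math.sqrt(M + m) - math.sqrt(m)) / (M + m)

M, m, L = 400.0, 20.0, 20.0     # lbs, lbs, ft (the example in the text)
c = optimal_pivot(M, m, L)      # pivot offset, ~3.412 ft
a = m / (M + m) * L             # c.g. distance from counterweight, ~0.952 ft
```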
With infinite bar length the maximum launch velocity is $\max(v_{B_{f}})=\sqrt{\frac{2Mcg}{m}}$, so to reach $v_{B_{f}}=6000\,{\rm m/s}$ from earth with $c=1\,{\rm m}$ we need $M>18.3\times10^{6}\,{\rm kg}$. So let's consider $L=2000\,{\rm m}$ and $M=40.0\times10^{6}\,{\rm kg}$; then we choose the pivot location $c=1.500\,{\rm m}$ to get $$v_{B_{f}}=(1999.999-1.5)\sqrt{2\times 4.526\left(\sin\theta_{i}-\sin\theta_{f}\right)}$$ which is solved for $v_{B_{f}}=6000\,{\rm m/s}$ when $\sin\theta_{i}-\sin\theta_{f}=0.9957$, with $\theta_{i}>0$ and $\theta_{f}<0$.
LaTeX supports many worldwide languages by means of some special packages. This article explains how to import and use those packages to create documents in Portuguese. Contents The Portuguese language has accented words. For this reason the preamble of your file must be set up accordingly to support these characters and some other features.

\documentclass{article}

%encoding
%--------------------------------------
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
%--------------------------------------

%Portuguese-specific commands
%--------------------------------------
\usepackage[portuguese]{babel}
%--------------------------------------

%Hyphenation rules
%--------------------------------------
\usepackage{hyphenat}
\hyphenation{mate-mática recu-perar}
%--------------------------------------

\begin{document}

\tableofcontents

\vspace{2cm} %Add a 2cm space

\begin{abstract}
Este é um breve resumo do conteúdo do documento escrito em Português.
\end{abstract}

\section{Seção introdutória}
Esta é a primeira seção, podemos acrescentar alguns elementos adicionais e tudo será escrito corretamente. Além disso, se uma palavra é muito longa e tem de ser truncada, babel irá tentar truncá-la corretamente, dependendo do idioma.

\section{Segunda seção}
Esta seção é para ver o que acontece com comandos de texto que definem
\[ \lim x = \theta + 152383.52 \]

\end{document}

There are two packages in this document related to the encoding and the special characters. These packages will be explained in the next sections. If you are looking for instructions on how to use more than one language in a single document, for instance English and Portuguese, see the International language support article. Modern computer systems allow you to input letters of national alphabets directly from the keyboard. 
In order to handle a variety of input encodings used for different groups of languages and/or on different computer platforms, LaTeX employs the inputenc package to set up the input encoding. In this case the package properly displays characters in the Portuguese alphabet. To use this package add the next line to the preamble of your document:

\usepackage[utf8]{inputenc}

The recommended input encoding is utf-8. You can use other encodings depending on your operating system. To format LaTeX documents properly you should also choose a font encoding which supports the specific characters of the Portuguese language; this is accomplished by the fontenc package:

\usepackage[T1]{fontenc}

Even though the default encoding works well in Portuguese, using this specific encoding will avoid glitches with some specific characters. For example, some accented characters might not be directly copyable from the generated PDF because they are constructed from the base character and an overlaid shifted accent symbol, resulting in two separate symbols if you copy them. The default LaTeX encoding is OT1. To extend the default LaTeX capabilities, for proper hyphenation and for translating the names of the document elements, import the babel package for the Portuguese language:

\usepackage[portuguese]{babel}

As you may see in the example in the introduction, instead of "Abstract" and "Contents" the Portuguese words "Resumo" and "Conteúdo" are used. If you need the Brazilian Portuguese localization, use brazilian instead of portuguese as the parameter when importing babel. Sometimes, for formatting reasons, some words have to be broken up into syllables separated by a - (hyphen) to continue the word on a new line. For example, matemática could become mate-mática. The babel package, whose usage was described in the previous section, usually does a good job of breaking up words correctly, but if this is not the case you can use a couple of commands in your preamble. 
\usepackage{hyphenat}
\hyphenation{mate-mática recu-perar}

The first command imports the package hyphenat and the second line is a list of space-separated words with explicit hyphenation rules. On the other hand, if you want a word not to be broken automatically, use the {\nobreak word} command within your document or enclose the word in an \mbox{word}. For more information see
OGLE-2017-BLG-0173Lb: Low-mass-ratio Planet in a "Hollywood" Microlensing Event (2018) We present microlensing planet OGLE-2017-BLG-0173Lb, with planet-host mass ratio either $q\simeq 2.5\times 10^{-5}$ or $q\simeq 6.5\times 10^{-5}$, the lowest or among the lowest ever detected. The planetary perturbation ... OGLE-2016-BLG-1045: A Test of Cheap Space-Based Microlens Parallaxes (2018) Microlensing is a powerful and unique technique to probe isolated objects in the Galaxy. To study the characteristics of these interesting objects based on the microlensing method, measurement of the microlens parallax ... OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only (2018) We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ... OGLE-2017-BLG-0329L: A Microlensing Binary Characterized with Dramatically Enhanced Precision Using Data from Space-based Observations (2018) Mass measurements of gravitational microlenses require one to determine the microlens parallax $\pi_{\rm E}$, but precise $\pi_{\rm E}$ measurement, in many cases, is hampered due to the subtlety of the microlens-parallax signal combined ... OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function (2018) We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ... OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy (2018) We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. 
We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ... OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge (2018) We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ... Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb (2018) We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ... OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit (2018) We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ... KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion (2018) We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20)$ microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ...
Category Theory for Programmers Chapter 13: Free Monoids Challenges Show that an isomorphism between monoids that preserves multiplication must automatically preserve unit. Let $A$ and $B$ be monoids with $A \cong B$. Then there exist $f:A \to B$ and $f^{-1}:B \to A$ such that $f \circ f^{-1} = \mathrm{Id}_B$ and $f^{-1} \circ f = \mathrm{Id}_A$. And because $f$ and $f^{-1}$ preserve multiplication, we also have that $f(a a') = f(a) f(a')$ for all $a, a' \in A$ and $f^{-1}(b b') = f^{-1}(b) f^{-1}(b')$ for all $b, b' \in B$. Also, because $A$ and $B$ are monoids, there exists $e_A \in A$ such that $a e_A = e_A a = a$ for all $a \in A$, and similarly $e_B \in B$ such that $b e_B = e_B b = b$ for all $b \in B$. Since $f(a e_A) = f(a) f(e_A)$ and $f(a e_A) = f(a)$ for all $a \in A$, the element $f(e_A)$ is a candidate for the identity in $B$: because $f$ is surjective, every $b \in B$ equals $f(a)$ for some $a \in A$, and so $b f(e_A) = f(a) f(e_A) = f(a e_A) = f(a) = b$. Then we have that $e_B f(e_A) = e_B$ by the above, and $e_B f(e_A) = f(e_A)$ by the definition of $e_B$, and so $f(e_A) = e_B$. Similarly, we can show $f(e_A) b = b$ for all $b \in B$, so $f(e_A)$ is a two-sided identity. \(\blacksquare\) Consider the homomorphism from ([Integer], [], ++) to $(\mathbb{Z}, 1, \times)$. What is the image of []? Since [] is the identity in [Integer] and the homomorphism preserves structure, it is mapped to the identity, $1$. Assume all singleton lists are mapped to their integers, [x] $\mapsto x$. What's the image of [1, 2, 3, 4]? Because we don't need to worry about bracketing (associativity is a monoid property), we can say that [1, 2, 3, 4] = [1] ++ [2] ++ [3] ++ [4]. Then, using the assumption above, this maps to $1 \times 2 \times 3 \times 4 = 24$. How many lists map to $12$? The prime factorization of $12$ is $2 \times 2 \times 3$, so we can enumerate groups of integers that have product 12: $2, 2, 3$ $4, 3$ $2, 6$ $12$ $-4, -3$ $-2, -6$ $-2, -2, 3$ $-2, 2, -3$ Including identities (any number of $1$s), we have infinitely many. What is the free monoid generated by a one-element set? $(\mathbb{N}, 0, +)$, where the isomorphism is given by the length of the list.
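The claims about the list-to-product homomorphism can be checked directly. This small Python sketch (my addition, not part of the chapter) uses `math.prod` as the homomorphism and `+` as Python's analogue of `++`:

```python
from math import prod

# h maps a list of integers to the product of its elements;
# it is a monoid homomorphism ([Integer], [], ++) -> (Z, 1, *).
def h(xs):
    return prod(xs)

# The identity [] maps to the identity 1 (math.prod of an empty list is 1).
image_of_empty = h([])

# [1, 2, 3, 4] = [1] ++ [2] ++ [3] ++ [4] maps to 1 * 2 * 3 * 4.
image_of_list = h([1, 2, 3, 4])

# Homomorphism property: h(a ++ b) == h(a) * h(b).
a, b = [2, 6], [3, 4]
homomorphic = h(a + b) == h(a) * h(b)
```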
The rotational inertia of an object (represented by \(I\)) measures how difficult it is to get an object spinning if its angular velocity is zero, or how difficult it is to slow a spinning object's angular velocity down to zero. How much or how little rotational inertia an object has depends on its mass and how that mass is distributed across space. For example, a hollow sphere with just as much mass as a solid sphere has more rotational inertia. Hollow spheres of the same mass as solid spheres are thus more difficult to get spinning, or to stop spinning if they're already in motion. This article marks the beginning of a new series in which we look at the future of technology and how it will impact our civilization. In this article in particular, we’ll talk about the potential impacts of room-temperature superconductors, and we’ll also discuss the history of technological revolutions and how they enhanced our biology over the course of millions of years. This will lead us to a concept coined by the cosmologist Max Tegmark called “Life 3.0.” We’ll discuss how the third industrial revolution will differ from all prior technological revolutions in that it’ll produce technology which will allow us to enhance the functionality of our own biology. As the title suggests, in this article we’ll be talking about the future of robotics, AI, and automation. We’ll have pretty detailed discussions on driverless vehicles (which can be thought of as robots), agricultural robots, manufacturing and construction robots, and retail robots. We’ll also briefly talk about things like nanobots and some of the kinds of robots we could use in space. We’ll also have a discussion about “big numbers” and the kinds of weird quantum effects that we’d expect to occur over long time intervals. Lastly, we discuss the scientific possibility of the holy grail of Star Trek: universal assemblers. 
In this article, we’ll speculate about what the world will look like in the year 2100. We’ll discuss things like robot chefs, virtual and augmented reality, and transhumanism, just to name a few; we shall also briefly allude to living on Uranus’s moon, Miranda. This article will be the beginning of a new series in which we examine the effects that artificial intelligence (AI), robots, and automation will have on human civilization. In this article, we’ll primarily be focusing on the implications of artificial general intelligence (AGI). In this article, we’ll talk in layman's terms about quantum theory and general relativity and, specifically, how the two are related. We shall begin by discussing the well-known fact that these two theories—which describe how the universe works on the scale of the very small (quantum theory) and the very large (general relativity)—oftentimes contradict one another, usually on the scale of the very small (which is where general relativity breaks down and quantum mechanics gives us the correct picture). Now, something that is a little less well-known is that quantum theory and general relativity seem to, in some strange sense, make similar predictions about how nature is on vast size scales. Both theories predict that there are other universes and extra spatial dimensions. We shall close our discussion in this article by answering a question that we posed at the end of the article, Orbital Rings and Planet Building. In this article, we’ll discuss star lifting and its applications to interstellar and intergalactic space travel. In this article, we’ll look at various different ways we could travel to the stars. We’ll first discuss how very small, but very fast, probes could be accelerated to relativistic speeds using lasers (or masers); such probes could reach the nearest stars within the span of a human lifetime. 
This discussion will also lead us to the notion of an “interstellar highway” which we’ll discuss in detail. We conclude by discussing how asteroids and comets could also be used as spaceships to reach the stars. In this article, we discuss preliminary interstellar missions which will serve as preludes to missions involving sending spacecraft to stars. We primarily discuss using the Sun as a gravitational lens - a kind of “cosmic telescope” - to search for exoplanets which likely harbor life as well as those which likely do not. This article is essentially a “teaser” of what we have in store for upcoming articles. Basically, I summarize ideas that will be discussed in tremendous detail in subsequent articles. These ideas are, primarily, interplanetary travel, interstellar travel, and intergalactic travel and how megastructures like orbital rings and star lifters (and a few others) will enable such voyages. We also give a very brief “teaser” on the redesign of the social and economic systems which underlie all industrial and social protocol. To find the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\) outside of that sphere, we must first subdivide that sphere into many very skinny shells and find the gravitational force exerted by any one of those shells on \(m\). We'll see, however, that finding the gravitational force exerted by such a shell is in and of itself a somewhat tedious exercise. In the end, we'll see that the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\) outside of the sphere (where \(D\) is the center-to-center separation distance between the sphere and the particle) is completely identical to the gravitational force exerted by a particle of mass \(M\) on the other particle of mass \(m\) such that \(D\) is their separation distance. An orbital ring connected to the Earth by space elevators would reduce the cost of going to space to an amount comparable to an airplane ticket. 
This would cause a boom in the space tourism industry, and eventually millions and even billions of people and tons of cargo will be moving from the Earth’s surface to space annually, and vice versa. This would necessitate an expansion in our space-based infrastructure to include space-based solar panels, a lunar mass driver, the routine mining of asteroids, and especially enormous space habitats (for all those billions of people to live in) such as the Stanford Torus, the Bernal Sphere, or the O’Neill Cylinder. Orbital rings also allow you to build artificial planets and Dyson spheres, which would allow us to completely colonize the solar system. They would also allow us to build a Birch planet, a single planet with a surface area which exceeds the total surface area of all the planets in the Milky Way galaxy. A star such as the Sun provides stupendous quantities of power. We earthlings tap into only a tiny fraction of the power of the Sun’s light that reaches Earth because so much of that power is lost when the Sun’s light is transmitted through the atmosphere. But what if we extracted the Sun’s solar energy from space by building large arrays of space-based solar panels? Space-based solar energy has myriad applications such as powering infrastructure and cities on the Earth, the Moon, or other worlds in the solar system; it can also be used to sterilize “space junk” and to create a highway between the stars for solar-sail spacecraft. In this lesson, we’ll discuss the prospect of life in the Milky Way galaxy beyond the Earth. We'll begin by discussing the speculations made in a paper written by Carl Sagan about the possibility of life in Jupiter's atmosphere. From there, we shall derive a formula which describes the habitable zone of a star. Using this formula and data obtained by the Kepler Space Telescope, we can estimate the total number of "Earth-like" planets in the Milky Way. 
From there, we discuss the fraction of those planets on which simple and intelligent life evolve; then we'll discuss the fraction of those planets on which advanced communicating civilizations evolve and what fraction of those civilizations are communicating right now. In this lesson, we’ll attempt to give a brief catalog of the very different classes of planets in the universe. We'll discuss pulsar planets, hot Jupiters, Super Earths, ice and water worlds, and many more. In this lesson, we’ll use the squeeze theorem and elementary trigonometry to prove that \(\lim_{x\to 0}\frac{\sin x}{x}=1\). In this video, we’ll discuss Shkadov thrusters: a method of moving stars, star systems, and even entire galaxies. For a vector field \(\vec{F}(x,y)\) defined at each point \((x,y)\) within the region \(R\) and along the continuous, piecewise-smooth, closed curve \(c\) such that \(R\) is the region enclosed by \(c\), we shall derive a formula (known as Green’s Theorem) which will allow us to calculate the line integral of \(\vec{F}(x,y)\) over the curve \(c\). Using Newton's law of gravity and the concept of the definite integral, we can find the total gravitational force exerted by a rod on a particle a horizontal distance \(d\) away from the rod. To find the gravitational force exerted by a sphere on a particle of mass \(m\) outside of that sphere, we must first subdivide that sphere into many very skinny shells and find the gravitational force exerted by any one of those shells on \(m\). We'll see, however, that finding the gravitational force exerted by such a shell is in and of itself a somewhat tedious exercise. 
In the end, we'll see that the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\) outside of the sphere (where \(D\) is the center-to-center separation distance between the sphere and the particle) is completely identical to the gravitational force exerted by a particle of mass \(M\) on the mass \(m\) such that \(D\) is their separation distance.
From the relativistic covariance of the Dirac equation (see Section 2.1.3 of the QFT book by Itzykson and Zuber for a derivation; I also more or less follow their notation), you know how a Dirac spinor transforms. One has $$\psi'(x')=S(\Lambda)\ \psi(x)$$ under the Lorentz transformation $$x'^\mu= {\Lambda^\mu}_\nu\ x^\nu= {\exp(\lambda)^\mu}_\nu\ x^\nu=(I + {\lambda^\mu}_\nu+\cdots)\ x^\nu\ .$$ Explicitly, one has $S(\Lambda)=\exp\left(\tfrac{1}8[\gamma_\mu,\gamma_\nu]\ \lambda^{\mu\nu}\right)$. To show reducibility, all you need is to find a basis for the gamma matrices (as well as for Dirac spinors) such that $[\gamma_\mu,\gamma_\nu]$ is block diagonal with two $2\times 2$ blocks. Once this is shown, it proves the reducibility of Dirac spinors under Lorentz transformations, since $S(\Lambda)$ is then also block diagonal. Such a basis is called the chiral basis. It is also important to note that a mass term in the Dirac equation mixes the Weyl spinors, but that is not an issue for reducibility. While this derivation does not directly use the representation theory of the Lorentz group, it does use the Lorentz covariance of the Dirac equation. I don't know if this is what you wanted. (This post imported from StackExchange Physics at 2014-03-31 16:04 (UCT), posted by SE-user suresh. I am not interested in your bounty -- please don't award me anything.)
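To make the block-diagonal claim concrete (this computation is my addition, using the common conventions $\sigma^\mu=(I,\vec\sigma)$, $\bar\sigma^\mu=(I,-\vec\sigma)$): in the chiral (Weyl) basis the gamma matrices are block off-diagonal, so any product of two of them, and hence the commutator, is block diagonal.

```latex
% Chiral basis: \gamma^\mu = \begin{pmatrix} 0 & \sigma^\mu \\ \bar\sigma^\mu & 0 \end{pmatrix}
% Products of two gammas are block diagonal, hence so is the commutator:
\gamma^\mu \gamma^\nu =
\begin{pmatrix} \sigma^\mu \bar\sigma^\nu & 0 \\ 0 & \bar\sigma^\mu \sigma^\nu \end{pmatrix},
\qquad
[\gamma^\mu, \gamma^\nu] =
\begin{pmatrix}
\sigma^\mu \bar\sigma^\nu - \sigma^\nu \bar\sigma^\mu & 0 \\
0 & \bar\sigma^\mu \sigma^\nu - \bar\sigma^\nu \sigma^\mu
\end{pmatrix}
```

Exponentiating a block-diagonal generator keeps $S(\Lambda)$ block diagonal, which is exactly the reducibility statement in the text.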
$\let\opn=\operatorname$For my BA thesis I have to describe formal groups from the functorial point of view. I am hence reading Strickland - Formal Schemes and Formal Groups, which is apparently the only article that deals with this topic in that way. He defines (4.1) a formal scheme as a functor $X: \opn{CRings}\to \opn{Set}$ that is a small filtered colimit of affine schemes, i.e., $X(R)=\lim\limits_{\rightarrow i}X_i(R)$. The first example (4.2) is given by the functor $\widehat{\mathbb {A}}^{1} $ defined as $\widehat{\mathbb {A}}^{1}(R)\mathrel{:=}\opn{Nil}(R)$. I don't understand why this functor is the colimit over $N$ of the functors $\opn{spec}(\mathbb{Z}[x]/x^{N+1})\mathrel{:=}\opn{Hom}_{\opn{CRing}}(\mathbb{Z}[x]/x^{N+1},\_)$. I would appreciate it if someone could explain it in general and kindly give an illustrating example. Other simple examples of formal schemes are also highly welcome. Many thanks!
2018-09-11 04:29 Proprieties of FBK UFSDs after neutron and proton irradiation up to $6\times10^{15}$ neq/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60 $\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...] arXiv:1804.05449. - 13 p. Preprint - Full text 2018-08-25 06:58 Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. 
Res., A 533 (2004) 442-453 2018-08-23 11:31 Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperatures can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576 2018-08-23 11:31 Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363 2018-08-23 11:31 Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) 
; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders such as the LHC and its upgrades in energy and luminosity, Super-LHC and Very-LHC respectively, as well as the requirements for detectors in the conditions of possible scenarios for radiation environments are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and leads to an increase of the leakage current of the detector, degrades the Signal/Noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys. 57 (2005), no. 3, pp. 342-348 External link: RORPE 2018-08-22 06:27 Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976 2018-08-22 06:27 Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) 
; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365 2018-08-22 06:27 Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
A function is defined as a relation which assigns exactly one output value to each permissible or allowable input value. The period of a function is the smallest number $T$ such that the function repeats, i.e., $f(\theta + T) = f(\theta)$ for every angle $\theta$. The functions sine and cosine have period 2$\pi$. For the scaled trigonometric functions the periods are given below: \[\large \sin\left(\omega\theta\right)\Rightarrow T=\frac{2\pi}{\omega}\] \[\large \cos\left(\omega\theta\right)\Rightarrow T=\frac{2\pi}{\omega}\] \[\large \tan\left(\omega\theta\right)\Rightarrow T=\frac{\pi}{\omega}\] Solved example Question 1: Find the period of the function $\cos$ $\frac{x}{3}$. Solution: Let y = $\cos$ $\frac{x}{3}$. Use the period formula, T = $\frac{2\pi}{\omega}$. The multiplier of x is $\frac{1}{3}$ = ${\omega}$, so T = $\frac{2\pi}{\frac{1}{3}}$ = 6$\pi$. Hence the period of the function $\cos$ $\frac{x}{3}$ is 6$\pi$.
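A quick numerical check (my addition, not part of the original page) confirms the worked example: shifting the argument of $\cos\frac{x}{3}$ by $6\pi$ leaves every sampled value unchanged, while a shift of $2\pi$ does not.

```python
import math

# Check that cos(x/3) has period 6*pi: shifting x by 6*pi leaves
# the value unchanged at every sample point.
samples = [k * 0.37 for k in range(100)]
period = 6 * math.pi
max_diff = max(abs(math.cos((x + period) / 3) - math.cos(x / 3))
               for x in samples)

# By contrast, 2*pi is NOT a period of cos(x/3): the value changes
# at some sample points.
not_period_diff = max(abs(math.cos((x + 2 * math.pi) / 3) - math.cos(x / 3))
                      for x in samples)
```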
Probability Seminar Spring 2019 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements, please send an email to join-probsem@lists.wisc.edu January 31, Oanh Nguyen, Princeton Title: Survival and extinction of epidemics on random graphs with general degrees Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it survives for exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly. Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University Title: When particle systems meet PDEs Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems. 
Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$. February 14, Timo Seppäläinen, UW-Madison Title: Geometry of the corner growth model Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah). February 21, Diane Holcomb, KTH Title: On the centered maximum of the Sine beta process Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette. Title: Quantitative homogenization in a balanced random environment Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process.
In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process, a random walk in a balanced random environment in the integer lattice $Z^d$. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison). Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue Title: Functional Limit Laws for Recurrent Excited Random Walks Abstract: Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions, and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar March 28, Shamgar Gurevitch, UW-Madison Title: Harmonic Analysis on GLn over finite fields, and Random Walks Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
This is a guest post from @ImMisterAl, who prefers to remain anonymous in real life. It refers to the problem in this post: a semi-circle is inscribed in a 3-4-5 triangle as shown; find $X$. As with any mathematical problem, my first thought was to sort out exactly what I know or can easily find, and what I want to know. I know: I want: Hmm. So, if I find the radius of the circle I can easily double that to get the diameter, from which I can find $x$. But how to get to that radius? Since I know that I can find any angle in the triangle, that's what I'll focus on. Let’s call the top vertex of the triangle point $B$ and the vertex at the right angle point $D$. Let the centre of the circle be $C$ and the point of tangency be $T$. Drawing the line $BC$ forms two triangles, $BCD$ and $BTC$ which look as though they might be congruent. Are they congruent? Yes they are, since both these triangles are right-angled, and the Equal Tangents theorem tells us that $BD = BT$, and also $CD$ must equal $CT$ since they’re both just radii of the circle. That means the angle $CBD$ is half of angle $ABD$. That’s got to be useful, right? (Confession time: At this point I reached for my calculator – yes, I know, and I’m sorry1 – and I got it to tell me angle $ABD$, which I divided by 2 to give angle $CBD$. I then considered the triangle $BCD$ and used the trigonometry buttons on my calculator again to tell me the radius of the circle. But when I saw how simple the final answer was, I realised there must be a neater way.) Let’s call the angle $CBD$ $\theta$, which would make angle $ABD$ equal to $2\theta$. Triangle $CBD$ then tells us that $\tan(\theta)=\frac{r}{3}$ and from triangle $ABD$ we get $\tan(2\theta)=\frac{4}{3}$. Ah, but I know an identity that links those two things. Exciting stuff! Let’s piece it all together. 
Here’s the identity: $\tan(2\theta) = \frac{2\tan(\theta)}{1-\tan^2(\theta)}$ Substituting what we have into that gives us the equation: $\frac{4}{3} = \frac{ 2 \left(\frac{r}{3}\right)}{1-\left(\frac{r}{3}\right)^2}$ This rearranges into the quadratic equation: $2r^2 + 9r - 18 = 0$, which has solutions $r=-6$ and $r=\frac{3}{2}$, and since the radius can’t be negative, $\frac{3}{2}$ must be our answer. From here it’s a simple step to show that $x$ equals 1. Of course, I kicked myself when I later saw that using similar triangles gave a far more elegant solution, but I was pleased to use the double angle identity for $\tan$ in an actual problem. It’s always the identity that tends to get overlooked in my opinion.
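Here's a quick numerical check of that algebra (my own sketch, not part of the original post), solving the quadratic and confirming that the double angle identity recovers $\tan(2\theta) = \frac{4}{3}$:

```python
import math

# Solve 2r^2 + 9r - 18 = 0 with the quadratic formula.
a, b, c = 2, 9, -18
disc = math.sqrt(b * b - 4 * a * c)  # discriminant is 225, so sqrt is exactly 15
roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
print(roots)  # (1.5, -6.0)

# With r = 3/2 we get tan(theta) = r/3 = 1/2; the identity should give back 4/3.
t = 1.5 / 3
print(2 * t / (1 - t * t))  # 1.3333... = 4/3
```

The positive root is $r = \frac{3}{2}$, as in the post, and substituting it back through the identity does return $\frac{4}{3}$.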
Question Let $M$ be a (finite dimensional) smooth manifold and $g,\bar{g}$ be Riemannian metrics on $M$. Under what conditions can we guarantee that there exists another finite dimensional Riemannian manifold $(N,h)$ and a smooth map $f:M\to N$ such that $(M,\bar{g})$ is realised as the graph of $f$ in the product manifold $(M\times N, g\oplus h)$? To put it another way, when is it possible to write $\bar{g} = g + f^*h$? Is there a way to bound the dimension of $N$ required? Comments Clearly by definition $\bar{g} - g$ must be positive semidefinite for this to work. But we can equally well ask the question in the context of pseudo-Riemannian manifolds, where this requirement is unnecessary. There is a trivial lower bound on the dimension of $N$ from the fact that the maximal rank of $f^*h$ (equivalently, of $\mathrm{d}f$) is bounded above by the dimension of $N$. So if in local coordinates $\bar{g} - g$ is a rank $k$ matrix somewhere, we know that $N$ has to be of dimension at least $k$. The global question aside, what is the correct integrability condition for the local problem? This probably just requires a suitable rephrasing of the question, but I'm having a bit of a problem seeing the right geometric picture. The rank 1 case is not too hard (I think). Without loss of too much generality we can let $N$ be $\mathbb{R}$ with the standard metric. Using that the gradient vector field is orthogonal to the level sets, we have additionally an integrability condition: roughly speaking, let $v$ be the smooth vector field of unit eigenvectors of $\bar{g} - g$ relative to $g$ with non-zero eigenvalue $\lambda^2$; then we need the vector field $\lambda v$ to be hypersurface orthogonal (in the metric $g$). This gives necessity. For sufficiency, take a hypersurface orthogonal to $\lambda v$, set $f = 0$ on there, and integrate along $\lambda v$ to get the desired function.
Summary: The objective of sudoku is to fill a 9 x 9 grid so that each column, each row, and each of the nine 3 x 3 boxes contains all of the digits from 1 to 9 exactly once. Click here to go to the new Sudoku puzzle. Sudoku can be formulated as a mixed integer linear programming (MILP) problem and solved using one of the MILP solvers on the NEOS Server. If you submit the puzzle to be solved by the NEOS Server, the applet will create an AMPL model of the instance, submit the model to the NEOS Server, and retrieve the results. Then, you can click the 'Reveal Solution' button to display the solution. The Mathematical Model The objective of sudoku is to fill a 9 x 9 grid so that each column, each row, and each of the nine 3 x 3 boxes contains all of the digits from 1 to 9. Let \(m\) be the dimension of the boxes that make up the grid; \(m = 3\) in a standard 9 x 9 sudoku puzzle. Parameters \(n\) = dimension of the puzzle (\(n = 9\)) \(m\) = dimension of the boxes that make up the grid (\(m = 3\)) \(P\{N,N\}\) = prespecified digits, i.e., \(P[i,j] = k\) means that digit \(k\) should be the number in cell \((i,j)\) Set \(N\) = set of digits from 1 to \(n\) Variables \[ z[i, j, k] = \left\{ \begin{array}{ll} 1 & \mbox{if \(k\) is the entry in row \(i\) and column \(j\)} \\ 0 & \mbox{otherwise} \end{array} \right. \] Objective Function: minimize 0 The objective of the puzzle is to find a solution that satisfies the constraints; there is no objective function to be minimized or maximized. Typically, the prespecified values are set in such a way that there is a unique solution to the puzzle.
Constraints

Column constraints: only one of each digit in each column
\( \sum_{i \in N} z[i,j,k] = 1, \forall j \in N, k \in N\)

Row constraints: only one of each digit in each row
\( \sum_{j \in N} z[i,j,k] = 1, \forall i \in N, k \in N\)

Box constraints: only one of each digit in each box
\(\sum_{p=1}^m \sum_{q=1}^m z[mr + p, mc + q, k] = 1, \forall r \in 0..m-1, c \in 0..m-1, k \in N\)

Uniqueness constraints: only one digit in each cell
\(\sum_{k \in N} z[i,j,k] = 1, \forall i \in N, j \in N\)

Prespecified values constraints: fix location of prespecified values
\(z[i,j,P[i,j]] = 1, \forall i \in N, j \in N, P[i,j] \neq 0\)

param m >= 1, integer, default 3;
param n := m*m;
set N := 1..n;

# prespecified data values
param P{N,N} default 0, integer, >= 0, <= n;

# z[i,j,k] = 1 if digit k is in row i and column j
var z{N,N,N} binary;

# dummy objective
minimize obj: 0;

# only one of each digit in each column
subject to col_sum{j in N, k in N}: sum{i in N} z[i,j,k] = 1;

# only one of each digit in each row
subject to row_sum{i in N, k in N}: sum{j in N} z[i,j,k] = 1;

# only one of each digit in each box
subject to sqr_sum{r in 0..m-1, c in 0..m-1, k in N}:
    sum{p in 1..m, q in 1..m} z[m*r+p,m*c+q,k] = 1;

# only one digit in each cell
subject to unique{i in N, j in N}: sum{k in N} z[i,j,k] = 1;

# fix position of prespecified values
subject to fixed{i in N, j in N: P[i,j] <> 0}: z[i,j,P[i,j]] = 1;

data;

param m := 3;

param P: 1 2 3 4 5 6 7 8 9 :=
 1 1 . . . . 6 3 . 8
 2 . . 2 3 . . . 9 .
 3 . . . . . . 7 1 6
 4 7 . 8 9 4 . . . 2
 5 . . 4 . . . 9 . .
 6 9 . . . 2 5 1 . 4
 7 6 2 9 . . . . . .
 8 . 4 . . . 7 6 . .
 9 5 . 7 6 . . . . 3 ;

solve;

# display the results
for {i in N}{
  for {j in N}{
    for {k in N}{
      if (z[i,j,k] == 1) then printf "%3i", k;
    };
    if ((j mod m) == 0) then printf " | ";
  };
  printf "\n";
  if ((i mod m) == 0) then {
    for {j in 1..m}{
      for {k in 1..m-1}{
        printf "---"
      };
      if (j < m) then printf "----+-"; else printf "----+\n";
    };
  };
};
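The same four constraint families can be checked in a few lines of code. Here is a small Python sketch (my own illustration, independent of the AMPL/NEOS workflow) that validates a completed grid against the column, row, box, and cell constraints:

```python
def is_valid_sudoku(grid, m=3):
    """Check a completed n x n grid (n = m*m): every row, every column,
    and every m x m box must contain each of 1..n exactly once."""
    n = m * m
    digits = set(range(1, n + 1))
    rows_ok = all(set(row) == digits for row in grid)
    cols_ok = all({grid[i][j] for i in range(n)} == digits for j in range(n))
    boxes_ok = all(
        {grid[m * r + p][m * c + q] for p in range(m) for q in range(m)} == digits
        for r in range(m) for c in range(m)
    )
    return rows_ok and cols_ok and boxes_ok

# A completed grid built from a standard cyclic pattern (valid by construction).
grid = [[(3 * i + i // 3 + j) % 9 + 1 for j in range(9)] for i in range(9)]
print(is_valid_sudoku(grid))  # True
```

Note that checking a filled grid is easy; the MILP formulation above is what lets a solver *find* a filling consistent with the prespecified values.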
Dear Uncle Colin, I noticed that $2^{\frac{1}{1,000,000}} = 1.000\,000\,693\,147\,2$ or so, pretty much exactly $\left(1 + \frac{1}{1,000,000} \ln(2)\right)$. Is that a coincidence? Nice Interesting Numbers; Jarring Acronym Dear NINJA, The easiest way to see that it's not a coincidence is to check out $3^{\frac{1}{1,000,000}}$, which… Read More → It's genuinely difficult to write an innovative maths book, something that'll teach even the most grizzled and cynical of tutors a thing or two, but @standupmaths1 has done exactly that. Most popular maths books, my own included, tread a pretty familiar path through the history of maths, throw out a… Read More → It's not often I have anything nice to say about EdExcel. I've usually found their exams to be the most predictable and least thought-provoking of all the boards (at least until they finally snapped in 2013 and let Kate the Photographer loose on an unsuspecting cohort). At GCSE, their advanced… Read More → In this episode of Wrong, But Useful, @reflectivemaths and @icecolbeveridge…: Argue about the inferiority of statistics Give a number of the podcast: $e^{-\frac{\pi}{2}} = i^i \approx 0.20788...$ Review @standupmaths's excellent Things to Make and Do in the Fourth Dimension Investigate equable shapes in several dimensions, with reference to @tombutton's MathsJam… Read More → Dear Uncle Colin, I've been trying to work out $I = \int_0^{\frac \pi 4} x \frac{\sin(x)}{\cos^3(x)} \d x$ for hours. It's the fifth time this week I've been up until the small hours working on integration and it's affecting my work and home life. I'm worried I'm becoming a calcoholic.… Read More → A guest post from @FennekLyra, who is Eva in real life. Thanks, Eva! “Want to see something awful?” asked Agent Lyra1 suddenly, turning to her fellow maths agent and friend Dodo at the £16,000 question of Who Wants To Be A Millionaire? that both of them watched daily.
“Oh come… Read More → Dear Uncle Colin, Somebody told me that the sequences $\left \lfloor \frac {2n}{\ln(2)} \right \rfloor$ and $\left \lceil \frac{2}{2^{\frac 1n}-1} \right \rceil$ were equal up to the 777,451,915,729,368th term, and I shivered in ecstasy. Is there something wrong with me? -- Sequences Considered Harmful When Agreeing Really Zealously Hi, SCHWARZ… Read More → Every Friday afternoon, double maths with Mr Hutt: he would march up and down the classroom, barking: "Number seven: six times eight. Six times eight. Number eight: ..." Twenty times tables questions, rapid-fire, scores kept. (One week, I fumbled $7\times 8$, blemishing my perfect score; Paul Edwards, on the other… Read More → Dear Uncle Colin, I was playing with parametric equations and stumbled on something Wolfram Alpha wouldn't plot: $x=t^i;\, y = t^{-i}$. Does this curve really exist? Or am I imagining it? -- A Real Graph? A Non-existent Drawing? Hi, ARGAND -- what you're trying to plot certainly exists; whether or… Read More → A student asks: I don't get the Venn diagram method for highest common factor and least common multiple. Do you have any other suggestions? As it happens, I do. I'm assuming you're OK with finding the prime factorisation of a number using (for example) a factor tree. In this example,… Read More →
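For the HCF/LCM question above, the idea behind the Venn diagram method can be sketched in code (my own illustration, not the column's method): the HCF takes the minimum power of each prime, and the LCM the maximum.

```python
from collections import Counter
from math import prod

def prime_factors(n):
    """Return the prime factorisation of n as a Counter, e.g. 60 -> {2: 2, 3: 1, 5: 1}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def hcf_lcm(a, b):
    fa, fb = prime_factors(a), prime_factors(b)
    primes = set(fa) | set(fb)
    hcf = prod(p ** min(fa[p], fb[p]) for p in primes)  # shared part of the Venn diagram
    lcm = prod(p ** max(fa[p], fb[p]) for p in primes)  # union of the Venn diagram
    return hcf, lcm

print(hcf_lcm(60, 72))  # (12, 360)
```

For $60 = 2^2 \cdot 3 \cdot 5$ and $72 = 2^3 \cdot 3^2$ this gives HCF $= 2^2 \cdot 3 = 12$ and LCM $= 2^3 \cdot 3^2 \cdot 5 = 360$, exactly what the two regions of the Venn diagram encode.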
Getting Faster Results

Warning

The solution described below is useful when you mathematically know a problem is DCP-compliant and none of your data inputs will change the nature of the problem. We recommend that users check the DCP-compliance of a problem (via a call to is_dcp(prob), for example) at least once to ensure this is the case. Not verifying DCP-compliance may result in garbage!

Introduction

As was remarked in the introduction to CVXR, its chief advantage is flexibility: you can specify a problem in close to mathematical form and CVXR solves it for you, if it can. Behind the scenes, CVXR compiles the domain specific language and verifies the convexity of the problem before sending it off to solvers. If the problem violates the rules of Disciplined Convex Programming, it is rejected. Therefore, it is generally slower than tailor-made solutions to a given problem.

An Example

To understand the speed issues, let us consider the global warming data from the Carbon Dioxide Information Analysis Center (CDIAC) again. The data points are the annual temperature anomalies relative to the 1961–1990 mean. We will fit the nearly-isotonic approximation \(\beta \in {\mathbf R}^m\) by solving \[ \begin{array}{ll} \underset{\beta}{\mbox{Minimize}} & \frac{1}{2}\sum_{i=1}^m (y_i - \beta_i)^2 + \lambda \sum_{i=1}^{m-1}(\beta_i - \beta_{i+1})_+, \end{array} \] where \(\lambda \geq 0\) is a penalty parameter and \(x_+ =\max(x,0)\). This can be solved as follows.

data(cdiac)
y <- cdiac$annual
m <- length(y)
lambda <- 0.44
beta <- Variable(m)
obj <- 0.5 * sum((y - beta)^2) + lambda * sum(pos(diff(beta)))
prob <- Problem(Minimize(obj))
soln <- solve(prob)
betaHat <- soln$getValue(beta)

This is the recommended way to solve a problem. However, suppose we wished to construct bootstrap confidence intervals for the estimate using 100 resamples. It is clear that this computation time can quickly become limiting.
Below, we show how one can get at the problem data and directly call a solver to get faster results.

Profile the code

Profiling a single fit of the model is useful to figure out where most of the time is spent.

data(cdiac)
y <- cdiac$annual
profvis({
  beta <- Variable(m)
  obj <- Minimize(0.5 * sum((y - beta)^2) + lambda * sum(pos(diff(beta))))
  prob <- Problem(obj)
  soln <- solve(prob)
  betaHat <- soln$getValue(beta)
})

It is especially instructive to click on the data tab and open up the tree for solve to see the sequence of calls and cumulative time used. The profile shows that most of the total time (2400 ms for one of our runs) is spent in the call to the is_dcp generic (about 2000 ms). This generic is responsible for ensuring that the whole problem is DCP-compliant by checking the nature of each of the components that make up the problem. The actual solving took a much smaller fraction of the time.

Directly Calling the Solver

We are mathematically certain that the problem above is convex, and so we can avoid the is_dcp hit. We can obtain the problem data for a particular solver (like ECOS or SCS) using the function get_problem_data and directly hand that data to the solver to get the solution.

prob_data <- get_problem_data(prob, solver = "ECOS")

ASIDE: How did we know ECOS was the solver to use? Future versions will provide a function to match a solver to a problem. (Actually, it is available already, but not exported yet!) For now, a single call to solve with the verbose option set to TRUE can provide that information.

soln <- solve(prob, verbose = TRUE)

Now that we have the problem data and know which solver to use, we can call the ECOS solver with the right arguments. (The ECOS solver is provided by the package ECOSolveR, which CVXR imports.)
solver_output <- ECOSolveR::ECOS_csolve(c = prob_data[["c"]],
                                        G = prob_data[["G"]],
                                        h = prob_data[["h"]],
                                        dims = prob_data[["dims"]],
                                        A = prob_data[["A"]],
                                        b = prob_data[["b"]])

Finally, we can obtain the results by asking CVXR to unpack the solver results for us. (See ?unpack_results for further examples.)

direct_soln <- unpack_results(prob, "ECOS", solver_output)

Profile the Direct Call

We can profile this direct call now.

profvis({
  beta <- Variable(m)
  obj <- Minimize(0.5 * sum((y - beta)^2) + lambda * sum(pos(diff(beta))))
  prob <- Problem(obj)
  prob_data <- get_problem_data(prob, solver = "ECOS")
  solver_output <- ECOSolveR::ECOS_csolve(c = prob_data[["c"]],
                                          G = prob_data[["G"]],
                                          h = prob_data[["h"]],
                                          dims = prob_data[["dims"]],
                                          A = prob_data[["A"]],
                                          b = prob_data[["b"]])
  direct_soln <- unpack_results(prob, "ECOS", solver_output)
})

For one of our runs, the total time went down from \(2400\) ms to \(690\) ms, more than a 3-fold speedup! In cases where the objective function and constraints are more complex, the speedup can be more than 10-fold.

Same Answer?

Of course, we should also verify that the results obtained in both cases are the same.
identical(betaHat, direct_soln$getValue(beta))
## [1] TRUE

Session Info

sessionInfo()
## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin18.5.0 (64-bit)
## Running under: macOS Mojave 10.14.5
##
## Matrix products: default
## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats     graphics  grDevices datasets  utils     methods   base
##
## other attached packages:
## [1] profvis_0.3.6 CVXR_0.99-6
##
## loaded via a namespace (and not attached):
##  [1] Rcpp_1.0.1        knitr_1.23        magrittr_1.5
##  [4] bit_1.1-14        lattice_0.20-38   R6_2.4.0
##  [7] stringr_1.4.0     tools_3.6.0       grid_3.6.0
## [10] xfun_0.7          R.oo_1.22.0       scs_1.2-3
## [13] htmltools_0.3.6   yaml_2.2.0        bit64_0.9-7
## [16] digest_0.6.19     bookdown_0.11     Matrix_1.2-17
## [19] gmp_0.5-13.5      ECOSolveR_0.5.2   htmlwidgets_1.3
## [22] R.utils_2.8.0     evaluate_0.14     rmarkdown_1.13
## [25] blogdown_0.12.1   stringi_1.4.3     compiler_3.6.0
## [28] Rmpfr_0.7-2       R.methodsS3_1.7.1 jsonlite_1.6
I am trying to compute the symmetry factor of a Feynman diagram in $\phi^4$ but I do not get the result Peskin claims. This is the diagram I am considering: $$\left(\frac{1}{4!}\right)^3\phi(x)\phi(y)\int{}d^4z\,\phi\phi\phi\phi\int{}d^4w\,\phi\phi\phi\phi\int{}d^4v\,\phi\phi\phi\phi$$ My attempt is the following: there are 4 ways to join $\phi(x)$ with $\phi(z)$. There are then 3 ways to connect $\phi(y)$ with $\phi(z)$. Then, there are 8 ways to connect $\phi(z)$ with $\phi(w)$ and 4 ways to contract the remaining $\phi(w)$ with $\phi(v)$. Finally, there are 6 ways to contract the remaining $\phi(w)$ and the three $\phi(v)$ in pairs: $$\left(\frac{1}{4!}\right)^3\cdot 4\cdot 3\cdot 8\cdot 4\cdot 6=\frac{1}{6},$$ but the result claimed in Peskin (page 93) is $1/12$. What am I doing wrong?
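As a quick sanity check of the arithmetic alone (not of the contraction counting, which is where the question lies), the product of the stated factors can be computed exactly:

```python
from fractions import Fraction

# (1/4!)^3 times the contraction counts from the question:
# 4 (x-z), 3 (y-z), 8 (z-w), 4 (w-v), 6 (pairings of the remaining legs)
factor = Fraction(4 * 3 * 8 * 4 * 6, 24 ** 3)
print(factor)  # 1/6
```

So the arithmetic is as stated, and the discrepancy with Peskin's $1/12$ is exactly a factor of $2$, which must come from an overcounting somewhere in the contraction enumeration rather than from the multiplication.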
Measuring a qubit and ending up with a bit feels a little like tossing out infinities in renormalization. Does neglecting the part of the wave function with a vanishing Hilbert space norm amount to a renormalization of the Hilbert space? No, those are two very different processes (as far as I understand). Renormalization: When you are calculating vacuum expectation values, for instance $\langle \Omega\mid T(\phi(\mathbf{p})\phi(0))\mid \Omega\rangle$, you discover that these values are infinite. However, you can interpret this infinity, in a consistent manner, as the value of this correlation function at another momentum $\mathbf{p}^{\prime}$ and a finite part that relates the correlation function at the two different momenta. Nothing is really lost in the renormalization procedure; it is just a matter of how to introduce a measured quantity (the correlation function at this other momentum) into the theory. Measurement: The measurement concerns a certain state $\mid \psi\rangle$ coupled to a measurement device. Originally, before being coupled, the pure state has entropy equal to zero. Later, under the time evolution of the coupled system, the system being measured has, after tracing over the measurement device states, entropy larger than zero. The difference is the information lost by the system in the process. So, something is lost in the measurement process, contrary to renormalization. I am maybe a bit uncertain what you are asking, but from what I understand the answer would be no. Renormalization is a procedure for absorbing infinities in an interacting field theory. A quantum bit is really just a state, but referred to in information theoretic terms. The two are not directly related as such. If one considers a measurement as a collapse, there is a new normalization (renormalization?) of the system state, which is just the state vector that pertains to the measurement outcome.
Now showing items 1-10 of 24 Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary $\pi^\pm$, $K^\pm$, p and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at $\sqrt{s}$ = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
\[\large \displaystyle \int_{{121 \pi} / {6}}^{{61 \pi} / {3}} (\cos x + \sin x) \, dx\] If the integral above can be expressed as $\displaystyle \frac{a}{1+\sqrt{b}}$, what is the value of $a+b$?
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\, \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma.$$ I am having a tough time visualising what this is. Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is: (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \cdots = (1-y)$, where $y=\frac{1}{2z^2}+\frac{1}{4!...$ I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $... No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA... The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why? mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it. Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$.
Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial. But I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$." That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems... (comment on many many posts above) In other news: > C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999 probably the weirdest bunch of data I've ever seen, with so many 000000s and 999999s But I think that to prove the implication for transitivity an inference rule and use of MP seem to be necessary. But that would mean that for logics for which MP fails we wouldn't be able to prove the result. Also in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case, because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or on the FOL axioms (without equality axioms). This would allow in some cases to define an "equality-like" relation for set theories in which we don't have the Axiom of Extensionality. Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so $x\to 0$. I've seen a proof of this, but intuitively I don't understand it. @schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$. @GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course. Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1$. I have to prove that $f(z)=z^{n}$. I tried it as: As $|f(z)|\leq 1$ for $|z|\leq 1$ we must have the coefficients $a_{0},a_{1},\ldots,a_{n-1}$ all zero, because by the triangul... @GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested in $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0? Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
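The $K_1, K_2$ bounds mentioned above are really all that's needed: if $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$ near $0$, the triangle inequality gives $|f(x)-g(x)|\le (K_1+K_2)x^2$. A tiny numeric illustration (a sketch; the two functions are made-up examples, not from the discussion):

```python
# If |f(x)| <= K1*x^2 and |g(x)| <= K2*x^2 near 0, then
# |f(x) - g(x)| <= (K1 + K2)*x^2 -- the difference is still O(x^2).
def f(x):
    return 3 * x**2 + x**3      # O(x^2) as x -> 0

def g(x):
    return x**2 - 5 * x**3      # O(x^2) as x -> 0

K = 10  # any constant at least K1 + K2 works on a neighborhood of 0
for x in (0.1, 0.01, 0.001):
    assert abs(f(x) - g(x)) <= K * x**2
```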
also, if you are in the US, the next time anything important publishing-related comes up, you can let your representatives know that you care about this and that you think the existing situation is appalling @heather well, there's a spectrum so, there's things like New Journal of Physics and Physical Review X which are the open-access branch of existing academic-society publishers As far as the intensity of a single-photon goes, the relevant quantity is calculated as usual from the energy density as $I=uc$, where $c$ is the speed of light, and the energy density$$u=\frac{\hbar\omega}{V}$$is given by the photon energy $\hbar \omega$ (normally no bigger than a few eV) di... Minor terminology question. A physical state corresponds to an element of a projective Hilbert space: an equivalence class of vectors in a Hilbert space that differ by a constant multiple - in other words, a one-dimensional subspace of the Hilbert space. Wouldn't it be more natural to refer to these as "lines" in Hilbert space rather than "rays"? After all, gauging the global $U(1)$ symmetry results in the complex line bundle (not "ray bundle") of QED, and a projective space is often loosely referred to as "the set of lines [not rays] through the origin." 
— tparker 3 mins ago > A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”. For example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @EmilioPisanty > for example, I could (theoretically) argue economic duress because my job depends on getting published in certain journals, and those journals force the people that hold my job to basically force me to get published in certain journals (in other words, what you just told me is true in terms of publishing) @0celo7 but the bosses are forced because they must continue purchasing journals to keep up the copyright, and they want their employees to publish in journals they own, and journals that are considered high-impact factor, which is a term basically created by the journals. @BalarkaSen I think one can cheat a little. I'm trying to solve $\Delta u=f$. In coordinates that's $$\frac{1}{\sqrt g}\partial_i(\sqrt{g}\,g^{ij}\partial_j u)=f.$$ Buuuuut if I write that as $$\partial_i(\sqrt{g}\,g^{ij}\partial_j u)=\sqrt g f,$$ I think it can work... @BalarkaSen Plan: 1. Use functional analytic techniques on global Sobolev spaces to get a weak solution. 2. Make sure the weak solution satisfies weak boundary conditions. 3. Cut up the function into local pieces that lie in local Sobolev spaces. 4. Make sure this cutting gives nice boundary conditions. 5. Show that the local Sobolev spaces can be taken to be Euclidean ones. 6. Apply Euclidean regularity theory. 7. Patch together solutions while maintaining the boundary conditions. Alternative Plan: 1. Read Vol 1 of Hormander. 2.
Read Vol 2 of Hormander. 3. Read Vol 3 of Hormander. 4. Read the classic papers by Atiyah, Grubb, and Seeley. I am mostly joking. I don't actually believe in revolution as a plan of making the power dynamic between the various classes and economies better; I think of it as a want of a historical change. Personally I'm mostly opposed to the idea. @EmilioPisanty I have absolutely no idea where the name comes from, and "Killing" doesn't mean anything in modern German, so really, no idea. Googling its etymology is impossible, all I get are "killing in the name", "Kill Bill" and similar English results... Wilhelm Karl Joseph Killing (10 May 1847 – 11 February 1923) was a German mathematician who made important contributions to the theories of Lie algebras, Lie groups, and non-Euclidean geometry.Killing studied at the University of Münster and later wrote his dissertation under Karl Weierstrass and Ernst Kummer at Berlin in 1872. He taught in gymnasia (secondary schools) from 1868 to 1872. He became a professor at the seminary college Collegium Hosianum in Braunsberg (now Braniewo). He took holy orders in order to take his teaching position. He became rector of the college and chair of the town... @EmilioPisanty Apparently, it's an evolution of ~ "Focko-ing(en)", where Focko was the name of the guy who founded the city, and -ing(en) is a common suffix for places. Which...explains nothing, I admit.
Hessian Valuations. Speaker: Monika Ludwig (Technische Universität Wien). Location: MSRI: Simons Auditorium. Different approaches to introduce intrinsic volumes and more generally mixed volumes for convex and log-concave functions were proposed by Bobkov, Colesanti and Fragalà, by Rotem and Milman, and by Alesker. They all turn out to be valuations on the corresponding spaces. Here a new class of continuous valuations on the space of convex functions on ${\mathbb R}^n$ is introduced. On smooth convex functions, they are defined for $i=0,\dots,n$ by \begin{equation*} u\mapsto \int_{{\mathbb R}^n} \zeta(u(x),x,\nabla u(x))\,[{{D}^2} u(x)]_i\,d x \end{equation*} where $\zeta\in C({\mathbb R}\times{\mathbb R}^n\times{\mathbb R}^n)$ and $[{{D}^2} u]_i$ is the $i$-th elementary symmetric function of the eigenvalues of the Hessian matrix, ${{D}^2} u$, of $u$. Under suitable assumptions on $\zeta$, these valuations are shown to be invariant under translations and rotations on convex and coercive functions. Ultimately, a complete classification of continuous and rigid motion invariant valuations on this space of functions is the aim of this approach. The connection to Hadwiger's theorem will be discussed. The results presented in this talk are joint with Andrea Colesanti (University of Florence) and Fabian Mussnig (Technische Universität Wien).
1) What is the common difference of an A.P. in which \[{{a}_{21}}-{{a}_{7}}=84\]?
2) If the angle between two tangents drawn from an external point P to a circle of radius a and centre O is \[60{}^\circ \], then find the length of OP.
3) If a tower 30 m high casts a shadow \[10\sqrt{3}\] m long on the ground, then what is the angle of elevation of the sun?
4) The probability of selecting a rotten apple randomly from a heap of 900 apples is 0.18. What is the number of rotten apples in the heap?
5) Find the value of p for which one root of the quadratic equation \[p{{x}^{2}}-14x+8=0\] is 6 times the other.
6) Which term of the progression \[20,19\frac{1}{4},18\frac{1}{2},17\frac{3}{4},\ldots\] is the first negative term?
7) Prove that the tangents drawn at the end points of a chord of a circle make equal angles with the chord.
8) A circle touches all the four sides of a quadrilateral ABCD. Prove that \[AB+CD=BC+DA\].
9) A line intersects the y-axis and x-axis at the points P and Q respectively. If \[(2,-5)\] is the mid-point of PQ, then find the co-ordinates of P and Q.
10) If the distances of P(x, y) from \[A(5,1)\] and \[B(-1,5)\] are equal, then prove that \[3x=2y\].
11) If \[ad\ne bc\], then prove that the equation \[({{a}^{2}}+{{b}^{2}}){{x}^{2}}+2(ac+bd)x+({{c}^{2}}+{{d}^{2}})=0\] has no real roots.
12) The first term of an A.P. is 5, the last term is 45 and the sum of all its terms is 400. Find the number of terms and the common difference of the A.P.
13) On a straight line passing through the foot of a tower, two points C and D are at distances of 4 m and 16 m from the foot respectively.
If the angles of elevation from C and D of the top of the tower are complementary, then find the height of the tower.
14) A bag contains 15 white and some black balls. If the probability of drawing a black ball from the bag is thrice that of drawing a white ball, find the number of black balls in the bag.
15) In what ratio does the point \[\left( \frac{24}{11},y \right)\] divide the line segment joining the points \[P(2,-2)\] and \[Q(3,7)\]? Also find the value of y.
16) Three semicircles each of diameter 3 cm, a circle of diameter 4.5 cm and a semicircle of radius 4.5 cm are drawn in the given figure. Find the area of the shaded region.
17) In the given figure, two concentric circles with centre O have radii 21 cm and 42 cm. If \[\angle AOB=60{}^\circ \], find the area of the shaded region. [Use \[\pi =\frac{22}{7}\]]
18) Water in a canal, 5.4 m wide and 1.8 m deep, is flowing with a speed of 25 km/hour. How much area can it irrigate in 40 minutes, if 10 cm of standing water is required for irrigation?
19) The slant height of a frustum of a cone is 4 cm and the perimeters of its circular ends are 18 cm and 6 cm. Find the curved surface area of the frustum.
20) The dimensions of a solid iron cuboid are \[4.4\text{ }m\times 2.6\text{ }m\times 1.0\text{ }m\]. It is melted and recast into a hollow cylindrical pipe of 30 cm inner radius and thickness 5 cm. Find the length of the pipe.
21) Solve for x: \[\frac{1}{x+1}+\frac{3}{5x+1}=\frac{5}{x+4},\quad x\ne -1,-\frac{1}{5},-4\]
22) Two taps running together can fill a tank in \[3\frac{1}{13}\] hours. If one tap takes 3 hours more than the other to fill the tank, then how much time will each tap take to fill the tank?
23) If the ratio of the sum of the first n terms of two A.P.s is \[(7n+1):(4n+27)\], then find the ratio of their 9th terms.
24) Prove that the lengths of two tangents drawn from an external point to a circle are equal.
25) In the given figure, XY and X′Y′ are two parallel tangents to a circle with centre O, and another tangent AB with point of contact C intersects XY at A and X′Y′ at B. Prove that \[\angle AOB=90{}^\circ \].
26) Construct a triangle ABC with side \[BC=7\text{ }cm,\text{ }\angle B=45{}^\circ ,\text{ }\angle A=105{}^\circ \]. Then construct another triangle whose sides are \[\frac{3}{4}\] times the corresponding sides of the \[\Delta ABC\].
27) An aeroplane is flying at a height of 300 m above the ground. Flying at this height, the angles of depression from the aeroplane of two points on both banks of a river in opposite directions are \[45{}^\circ \] and \[60{}^\circ \] respectively. Find the width of the river. [Use \[\sqrt{3}=1.732\]]
28) If the points \[A(k+1,2k),B(3k,2k+3)\] and \[C(5k-1,5k)\] are collinear, then find the value of k.
29) Two different dice are thrown together. Find the probability that the numbers obtained have (i) even sum, and (ii) even product.
30) In the given figure, ABCD is a rectangle of dimensions \[21\,cm\times 14\text{ }cm\]. A semicircle is drawn with BC as diameter. Find the area and the perimeter of the shaded region in the figure.
31) In a rain-water harvesting system, the rain-water from a roof of \[22\text{ }m\times 20\text{ }m\] drains into a cylindrical tank having diameter of base 2 m and height 3.5 m. If the tank is full, find the rainfall in cm. Write your views on water conservation.
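As a sanity check on question 5 above, Vieta's formulas give $p=3$: with roots $r$ and $6r$, the sum is $7r=14/p$ and the product is $6r^2=8/p$, so $r=2/p$ and $24/p^2=8/p$. A quick SymPy verification (a sketch; the symbol names are mine):

```python
from sympy import solve, symbols

# Question 5: p*x^2 - 14*x + 8 = 0 with one root 6 times the other.
# Vieta: r + 6r = 14/p and r * 6r = 8/p.
p, r = symbols('p r', positive=True)
sols = solve([7 * r - 14 / p, 6 * r**2 - 8 / p], [p, r], dict=True)
assert any(s[p] == 3 for s in sols)  # p = 3 (roots 2/3 and 4)
```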
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs. Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class. I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra. Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric. It's got 3 "underground" floors (quotation marks because the place is on a very tall hill, so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice. The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly. And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building). It's hard to get a feel for which places are good at undergrad math.
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad. I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus. One of my professors said it to describe a bunch of REUs; basically it boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of. @TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students. In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $... "If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have. Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2 or 1/2. Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$. Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight. hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$. For example, writing $x = ac+bd\delta$ and $y = bc+ad$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring property of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$. I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of. One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
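The $\otimes$ rule quoted above can be spot-checked exactly with rationals; since everything reduces to ring arithmetic in $\Bbb Q$, a handful of triples already catches algebra slips (a sketch in Python with `fractions.Fraction`; $\delta=5$ is an arbitrary choice of mine):

```python
import itertools
from fractions import Fraction as F

# Multiplication in Q(sqrt(delta)), pairs (a, b) standing for a + b*sqrt(delta):
# (a + b√δ)(c + d√δ) = (ac + bd·δ) + (bc + ad)√δ
delta = F(5)  # arbitrary non-square rational

def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c + b * d * delta, b * c + a * d)

# Exact spot-check of associativity (not a proof, but it exercises the rule
# using only ring arithmetic in Q, as suggested above).
vals = [(F(1), F(2)), (F(-3), F(1, 2)), (F(0), F(7))]
for alpha, beta, gamma in itertools.product(vals, repeat=3):
    assert mul(mul(alpha, beta), gamma) == mul(alpha, mul(beta, gamma))
```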
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (but seriously, the best tactic is overpowered...) Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible; e.g. the surreals are the largest possible field. It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field? Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement. Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH, or that derive CH; thus if your set of axioms contains those, then you can decide the truth value of CH in that system. @Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I tried to show that is false by finding a finite sentence and procedure that can produce infinity, but so far failed. Put another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object? If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite. My philosophy of infinity however is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book. The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science... Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem. hmm... By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as: $$P(x) = \prod_{k=0}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows: The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases. In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$ Do these still exist if the axiom of infinity is blown up? Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each M form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework. There's this theorem in Spivak's book of Calculus: Theorem 7: Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice. Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment. typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set > are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
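The partial sums $\sum_{k=1}^M 1/b^{k!}$ discussed above are exact rationals for every finite $M$, so each stage can be computed without invoking any infinite object (a sketch with $b=10$, the base of the classical Liouville constant):

```python
import math
from fractions import Fraction

# Exact partial sums of sum_{k=1}^{M} 1/b^{k!} for b = 10:
# each finite stage is a plain rational, the sequence is monotonically
# increasing, and it stays under a fixed rational bound.
b = 10
partial = [
    sum(Fraction(1, b ** math.factorial(k)) for k in range(1, M + 1))
    for M in range(1, 5)
]
assert all(a < c for a, c in zip(partial, partial[1:]))   # increasing
assert all(s < Fraction(1, 5) for s in partial)           # bounded above
```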
Summary: The objective of the Air Ambulance Reassignment Problem is to determine a minimum cost assignment of helicopters to sites to satisfy the projected demand for the next time period. An air ambulance (helicopter) service provider has a set of locations with a number of helicopters assigned to each site. Within each site's defined service area, requests are satisfied by the helicopters assigned to that site. At the end of each time period, the service provider updates the projected demand for each site and determines whether any of the helicopters need to be reassigned. There is a transportation cost associated with reassigning a helicopter to a different site. Therefore, the service provider wants to determine a minimum cost assignment of helicopters to sites to satisfy the next period's projected demand at each site. As an example, consider an air ambulance service provider with 5 locations in the Midwest. The company evaluates the need to relocate helicopters based on the monthly projected demand for each location. The cost of reassigning a helicopter from site \(i\) to site \(j\) is $100 per kilometer of distance from site \(i\) to site \(j\). For each of the company's locations, the table below lists the \(x\) and \(y\) coordinates of the site, the number of helicopters currently assigned, and the projected demand for the next period.

              X-Coord   Y-Coord   # Assigned   Projected Demand
Location #1      36        20          6               7
Location #2      23        30          2               3
Location #3      23        56          3               2
Location #4      10        15          3               4
Location #5       5         5          4               2

Given the set of locations, the pairwise distances between locations, the number of helicopters currently assigned to each location, the projected demand for each location, and the cost per kilometer, the objective of the Air Ambulance Reassignment Problem is to determine a minimum cost set of reassignments to satisfy projected demand. The problem can be formulated as a linear programming problem because the objective function and the constraints are all linear functions.
The pairwise distances are computed as the Euclidean distance.

Set
  \(L\) = the set of locations

Parameters
  \(x_i\) = the x-coordinate of location \(i\), \(\forall i \in L\)
  \(y_i\) = the y-coordinate of location \(i\), \(\forall i \in L\)
  \(d_i\) = projected demand for the next period for location \(i\), \(\forall i \in L\)
  \(s_i\) = number of helicopters currently assigned to location \(i\), \(\forall i \in L\)
  \(c\) = transportation cost per kilometer
  \(dist_{ij} = \sqrt{(x_j - x_i)^2 + (y_j - y_i)^2}\) = Euclidean distance between location \(i\) and location \(j\), \(\forall i, j \in L\)

Variables
  \(z_{ij}\) = number of helicopters to be moved from location \(i\) to \(j\), \(\forall i, j \in L\)

Objective Function
  Minimize \( \sum_{(i,j) \in L \times L} dist_{ij} \cdot c \cdot z_{ij}\)

Constraints
  Flow balance constraint for each location \(i\) in set \(L\):
  \(\sum_{j \in L} z_{ji} + s_i = d_i + \sum_{j \in L} z_{ij}, \forall i \in L\)

To solve this linear programming problem, we can use one of the NEOS Server solvers in the Linear Programming (LP) category. Each LP solver has one or more input formats that it accepts. Here is a GAMS model for the small example provided above in the problem statement section.

  set L /1*5/;
  alias(L,nL);
  Parameters
    x(L) x-coordinates / 1 36, 2 23, 3 23, 4 10, 5 5 /
    y(L) y-coordinates / 1 20, 2 30, 3 56, 4 15, 5 5 /
    d(L) projected demand / 1 7, 2 3, 3 2, 4 4, 5 2 /
    s(L) units currently assigned / 1 6, 2 2, 3 3, 4 3, 5 4 /;
  Parameter dist(L,nL) distance;
  dist(L,nL) = sqrt( (x(nL) - x(L))*(x(nL) - x(L)) + (y(nL) - y(L))*(y(nL) - y(L)) );
  Scalar c cost per kilometer /100/;
  Variable obj;
  Positive Variable z(L,nL);
  Equations
    objective
    balance(L);
  objective.. sum((L,nL), dist(L,nL)*c*z(L,nL)) =e= obj;
  balance(L).. sum(nL, z(nL,L)) + s(L) =e= d(L) + sum(nL, z(L,nL));
  Model AirAmbulance /all/;
  Solve AirAmbulance using lp minimizing obj;
  Display z.l;

If we submit this LP model to XpressMP, we obtain the following solution: \(z_{3,2}\) = 1, \(z_{5,1}\) = 1, and \(z_{5,4}\) = 1, with an objective function value of 7161.87. The relevant pairwise distances are \(dist_{3,2}\) = 26, \(dist_{5,1}\) = 34.438, and \(dist_{5,4}\) = 11.180.
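For comparison with the GAMS/XpressMP run, the same LP can be solved with SciPy's `linprog` (a sketch; the variable flattening `z[i*n + j]` and the HiGHS method are my choices, not part of the original model):

```python
import math

import numpy as np
from scipy.optimize import linprog

# Data from the example: coordinates, current assignments, demand, cost/km.
x = [36, 23, 23, 10, 5]
y = [20, 30, 56, 15, 5]
s = [6, 2, 3, 3, 4]
d = [7, 3, 2, 4, 2]
c = 100
n = len(x)

# Cost vector over z flattened as z[i*n + j] = helicopters moved i -> j.
dist = [[math.hypot(x[j] - x[i], y[j] - y[i]) for j in range(n)] for i in range(n)]
cost = [c * dist[i][j] for i in range(n) for j in range(n)]

# Flow balance at each location: inflow - outflow = demand - supply.
A_eq = np.zeros((n, n * n))
for i in range(n):
    for j in range(n):
        A_eq[i, j * n + i] += 1.0  # inflow  z[j][i]
        A_eq[i, i * n + j] -= 1.0  # outflow z[i][j]
b_eq = [d[i] - s[i] for i in range(n)]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(round(res.fun, 2))  # matches the reported objective, 7161.87
```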
And I'm looking at the equation at the very end of section 2.1 regarding baseline estimates, and it's the addition of 3 summations, which I would expect to result in a single value, but the result is passed into the \(\min_{b*}\) function. I must be misinterpreting something about the equation. Is min maybe not taking the result of the sum of summations? Is it a constant that multiplies the first summation? I must admit, I'm an engineer, and I'm a bit rusty on my mathematical notation. I'm used to reading things in code, but I want to struggle with the math notation rather than read the SVD++ implementation in MLlib, because it's important to me that I get comfortable with it. Edit: Original equation for context. $u$ is an index over users, $i$ is an index over items, $r_{u,i}$ is the rating that user $u$ gives the $i$th item, $\mathcal{K} = \{(u, i) \mid r_{u,i} \textrm{ is known}\}$ (i.e., the knowledge or training data), $\mu$ is the average rating over all items, $b_i$ is the "baseline" or average rating for the $i$th item, across users, relative to $\mu$, and $b_u$ is the "baseline" or average rating given by user $u$, relative to $\mu$. $$ \underset{b*}{\min} \sum_{(u,i) \in \mathcal{K}} \bigg(r_{u,i} - \mu - b_u - b_i\bigg)^2 + \lambda_1 \bigg(\sum_u b_{u}^2 + \sum_i b_{i}^2\bigg)$$
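For what it's worth, the $\min_{b*}$ is taken over all the $b_u$ and $b_i$ jointly: the whole expression is one scalar loss, and the baselines are whatever values minimize it. A minimal stochastic-gradient sketch in Python (the toy ratings and hyperparameters are invented for illustration, not from the paper):

```python
import numpy as np

# Hypothetical toy ratings {(user, item): rating} -- invented data.
ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (1, 2): 1.0, (2, 1): 2.0}
n_users, n_items, lam, lr = 3, 3, 0.1, 0.005

mu = np.mean(list(ratings.values()))
b_u = np.zeros(n_users)
b_i = np.zeros(n_items)

def objective(b_u, b_i):
    # sum (r - mu - b_u - b_i)^2 + lam * (sum b_u^2 + sum b_i^2)
    sq = sum((r - mu - b_u[u] - b_i[i]) ** 2 for (u, i), r in ratings.items())
    return sq + lam * ((b_u**2).sum() + (b_i**2).sum())

start = objective(b_u, b_i)
# Stochastic gradient descent on the regularized squared-error loss.
for _ in range(500):
    for (u, i), r in ratings.items():
        e = r - mu - b_u[u] - b_i[i]
        b_u[u] += lr * (e - lam * b_u[u])
        b_i[i] += lr * (e - lam * b_i[i])
assert objective(b_u, b_i) < start  # the joint loss went down
```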
If we have a linear regression equation $y=X\beta + u$, then we can find the OLS estimate of $\beta$ by minimizing the sum of squared residuals $\hat u^T \hat u = (y-X\hat\beta)^T(y-X\hat\beta)$ with respect to $\hat \beta$. However, my textbook suddenly says, out of nowhere, that the OLS estimate of the variance $\sigma^2$ of $u$ (each $u_i$ is iid) is $\hat \sigma ^2 = \frac {\hat u^T \hat u}{n-K}$, where $n$ is the sample size and $K$ is the number of independent variables. I understand that this estimator is unbiased, but I have absolutely no idea how it is derived from the assumptions of OLS, or why it is called the OLS estimate of $\sigma^2$. How do we derive this estimator?
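The $n-K$ denominator (rather than $n$) is what makes $\hat\sigma^2$ unbiased: fitting $K$ coefficients uses up $K$ degrees of freedom in the residuals. A Monte Carlo sketch with NumPy (the design matrix, true $\beta$, and $\sigma^2$ are arbitrary choices of mine):

```python
import numpy as np

# Monte Carlo check that E[u_hat' u_hat / (n - K)] = sigma^2.
rng = np.random.default_rng(0)
n, K, sigma2 = 50, 3, 4.0
beta = np.array([1.0, 2.0, -0.5])
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])

est = []
for _ in range(2000):
    u = rng.normal(scale=np.sqrt(sigma2), size=n)      # iid errors
    y = X @ beta + u
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit
    u_hat = y - X @ beta_hat                           # residuals
    est.append(u_hat @ u_hat / (n - K))

print(np.mean(est))  # close to sigma2 = 4.0
```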
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass rather the class I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the ground lounge mainly And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building) It's hard to get a feel for which places are good at undergrad math. 
Highly ranked places are known for having good researchers, but there's no "How well does this place teach?" ranking, which is kinda more relevant if you're an undergrad.

I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore). In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things, and that seems to become a bonus.

One of my professors said it to describe a bunch of REUs; it basically boils down to problems that some of these give their students which nobody really cares about, but which undergrads could work on and get a paper out of.

@TedShifrin I think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine), and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students.

In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies \| x ( t ) - 0 \| < \varepsilon.$$ The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...

"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions $f$ on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$." This took me way longer than it should have.

Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when multiplied by a given vector $(x,y)$, and how will the magnitude of that vector change?
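As a quick numerical aside on that eigenvalue question: the chat never shows the actual $A$, so the matrix below is a made-up stand-in, but it illustrates checking both the eigenvalues and the stretch factor of a matrix:

```python
import numpy as np

# Hypothetical 2x2 matrix with two distinct eigenvalues (the chat's A is not given)
A = np.array([[3.0, 1.0],
              [0.0, 0.5]])

eigvals = np.linalg.eigvals(A)
op_norm = np.linalg.norm(A, 2)   # operator (spectral) norm = largest singular value

print(sorted(eigvals.real), op_norm)

# The operator norm bounds how much A can stretch any vector:
v = np.array([1.0, -2.0])
assert np.linalg.norm(A @ v) <= op_norm * np.linalg.norm(v) + 1e-12
```

Note the operator norm can exceed the largest eigenvalue magnitude when $A$ is not normal, which is why the next message suggests computing it directly.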
Alternatively, compute the operator norm of $A$ and see if it is larger than 2 or smaller than 1/2.

Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$.

Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements' worth of variables is taking up more than a single line in LaTeX, and that is really bugging my desire to keep things straight.

Hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are constructing, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$. For example, writing $x = ac+bd\delta$ and $y = bc+ad$, we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$, and then you can argue with the ring properties of $\Bbb{Q}$, thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$.

I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great that it's the standard working theory by which we judge everything. I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D

Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need things like Lie algebra machinery and morphisms to make sense of.

One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally from any set to its power set).theorem ...
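The brute-force check the chat is trying to avoid on paper is cheap to do by machine. Here is a sketch that verifies associativity of the stated multiplication rule on a sample of elements (the choice $\delta = 5$ and the sample values are mine, not from the chat):

```python
from fractions import Fraction
from itertools import product

DELTA = Fraction(5)  # a sample non-square delta; any rational works for this check

def mult(p, q):
    # (a + b*sqrt(delta)) * (c + d*sqrt(delta))
    #   = (a*c + b*d*delta) + (b*c + a*d)*sqrt(delta)
    a, b = p
    c, d = q
    return (a * c + b * d * DELTA, b * c + a * d)

vals = [Fraction(x) for x in (-2, 0, 1, 3)]
elems = list(product(vals, vals))

# Exhaustively compare (x*y)*z with x*(y*z) over all 4096 sample triples
assert all(mult(mult(x, y), z) == mult(x, mult(y, z))
           for x in elems for y in elems for z in elems)
print("associativity holds on the sample")
```

A finite sample is of course no proof, but it is a fast sanity check before grinding through the symbolic computation.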
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious. (But seriously, the best tactic is overpowered...)

Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, each is the largest possible; e.g. the surreals are the largest ordered field possible.

It says on Wikipedia that any ordered field can be embedded in the surreal number system. Is this true? How is it done, or, if it is unknown (or unknowable), what is the proof that an embedding exists for any ordered field?

Here's a question for you: we know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement? "Infinity exists" comes to mind as a potential candidate statement.

Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or implying CH, so if your set of axioms contains those, then you can decide the truth value of CH in that system.

@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I have been trying to show that it is false by finding a finite sentence and procedure that can produce infinity, but so far I have failed.

Put another way, an equivalent formulation of that (possibly open) problem is: > Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?

If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't.
In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.

My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy-of-infinity book.

The knapsack problem or rucksack problem is a problem in combinatorial optimization: given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items. The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...

Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem. Hmm...

By the fundamental theorem of algebra, every complex polynomial $P$ of degree $n$ can be expressed as: $$P(x) = \prod_{k=1}^n (x - \lambda_k)$$ If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic. Thus, given $s$ transcendental, minimising $|P(s)|$ will be given as follows:

The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.

In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}.$$

Do these still exist if the axiom of infinity is blown up? Hmmm...
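Since the quoted definition describes the 0/1 knapsack, here is a minimal dynamic-programming sketch of it (the items and capacity are made up for illustration):

```python
def knapsack(items, capacity):
    """0/1 knapsack: items is a list of (weight, value) pairs.
    Returns the best total value achievable within the weight limit."""
    best = [0] * (capacity + 1)
    for w, v in items:
        # iterate capacities downward so each item is used at most once
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

items = [(2, 3), (3, 4), (4, 5), (5, 8)]
print(knapsack(items, 7))   # items of weight 2 and 5 fit, value 3 + 8 = 11
```

This is the classic O(n·W) table; the Wikipedia excerpt's "number of each item" phrasing also covers the unbounded variant, which only differs in the loop direction.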
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum: $$\sum_{k=1}^M \frac{1}{b^{k!}}$$ The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test; therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework.

There's this theorem in Spivak's book of Calculus: Theorem 7. Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and $$f'... and neither Rolle's theorem nor the mean value theorem needs the axiom of choice.

Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure. Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else, I need to finish that book to comment.

typo: neither Rolle's theorem nor the mean value theorem needs the axiom of choice nor an infinite set

> are there palindromes such that the explosion of palindromes is a palindrome

nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
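The finitist partial-sum construction above is easy to make concrete. This sketch computes the partial sums $\sum_{k=1}^M b^{-k!}$ exactly with rationals, taking $b=10$ (the usual Liouville-constant base; that choice is mine, the chat leaves $b$ general):

```python
from fractions import Fraction
from math import factorial

def liouville_partial(M, b=10):
    # S_M = sum_{k=1}^{M} 1 / b^(k!), computed exactly as a rational number
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

# The partial sums increase monotonically and are bounded above;
# here we just watch the first few stabilise:
for M in range(1, 5):
    print(M, float(liouville_partial(M)))
```

Each $S_M$ is a perfectly finite object (a rational), which is the point of the argument; only the limit $L$ invokes anything beyond induction.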
Diff of /trunk/doc/user/stokesflow.tex (changed lines removed from v.3290, added in v.3291)

66    The Courant number is defined as:
67    %
68    \begin{equation}
69  - C = \frac{v \delta t}{h}.
69  + C = \frac{v \delta t}{h}
70    \label{COURANT}
71    \end{equation}
72    %
91    needs to be applied to make sure that the discretized problem has a unique
92    solution, see~\cite{LBB} for details\footnote{Alternatively, one can use
93    second order elements for the velocity and first order elements for pressure
94  - on the same element. You may use \code{order=2} in \class{esys.finley.Rectangle}.}.
94  + on the same element. You can set \code{order=2} in \class{esys.finley.Rectangle}.}.
95    The fact that pressure and velocity are represented in different ways is
96    expressed by
97    \begin{python}
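For what it's worth, the Courant number formula in the diff, $C = v\,\delta t/h$, is simple enough to sanity-check in a few lines (the velocity, mesh size, and target value below are made up, not from the Finley documentation):

```python
def courant_number(v, dt, h):
    """Courant number C = v * dt / h, as defined in the stokesflow.tex snippet."""
    return v * dt / h

# Example: choose dt so that C equals a hypothetical stability target C_max.
v, h, C_max = 2.0, 0.1, 1.0
dt = C_max * h / v
print(courant_number(v, dt, h))   # recovers C_max = 1.0
```

Solvers typically require $C$ below some threshold for a stable explicit time step, which is why the documentation defines it.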
Overview of calculus

In this lesson, we'll give a broad overview and description of single-variable calculus. Single-variable calculus is a big tool kit for finding the slope of, or area underneath, any arbitrary function \(f(x)\) which is smooth and continuous. If the slope of \(f(x)\) is constant, then we don't need calculus to find the slope or area; but when the slope of \(f(x)\) is variable, then we must use a tool called a derivative to find the slope and another tool called an integral to find the area.

Limits

Limits describe what one quantity approaches as some other quantity approaches a given value. This concept is the basis of calculus because it is used to define both derivatives and integrals. In this lesson, we'll try to wrap our minds around what the notion of a limit is and use it to define the derivative function.

In this lesson, we'll prove that \(\lim_{\theta\to 0}\frac{\sin\theta}{\theta}=1\). We'll prove this result by using the squeeze theorem and basic geometry, algebra, and trigonometry. In a future lesson, we'll learn why this result is important: knowing that \(\lim_{\theta\to 0}\frac{\sin\theta}{\theta}=1\) is required to find the derivatives of the sine and cosine functions. But we'll save that for a future lesson.

Derivatives

In previous lessons, we learned how the derivative \(f'(x)\) gives us the steepness at each point along a function \(f(x)\). In this lesson, we'll discuss how, using the concept of a partial derivative, we can find the steepness at each point along a surface \(z=f(x,y)\). To find the partial derivative, we treat one of the variables as a constant and then take the ordinary derivative of \(f(x,y)\).
Using this concept, we can specify how steep a surface \(f(x,y)\) is along the \(x\) direction and along the \(y\) direction at each point along the surface. In other words, for every point along the surface, there is a steepness of the surface associated with both the \(x\) and the \(y\) directions at that point.

Optimization problems

Calculus—specifically, derivatives—can be used to find the values of \(x\) at which the function \(f(x)\) is at either a minimum value or a maximum value. For example, suppose that we let \(x\) denote the horizontal distance away from the beginning of a hiking trail near a mountain and we let \(f(x)\) denote the altitude of the mountainous terrain at each \(x\) value. \(f(x)\) reaches a minimum or maximum value where the terrain "flattens out"—that is, where \(f'(x)\) becomes equal to zero. These particular values of \(x\) are associated with the bottom and top of the mountain. The condition that \(f'(x)=0\) only tells us that \(f(x)\) is at either a minimum or a maximum. To determine which of the two it is, we must use the concept of the second derivative. This will be the topic of discussion in this lesson.

Given that the perimeter \(2x+2y\) of any arbitrary rectangle must be constant, we can use calculus to find the particular rectangle with the greatest area. The solution to this problem has practical applications. For example, suppose that someone had only 30 meters of fencing to enclose their backyard and they wanted to know what fencing layout would maximize the size and total area of their backyard. Using calculus, we can answer such questions.

If \((x,y)\) represents any point on the circle, if \(P\) is a point fixed at the coordinate point \((4,0)\), and if \(d\) represents the distance between those two points, then, by using only calculus, we can find the point \((x,y)\) on the circle associated with the minimum distance \(d\).
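The 30-metre fencing example can be worked through directly: with \(2x+2y=30\) we get \(y=15-x\), so the enclosed area is \(A(x)=x(15-x)\), and \(A'(x)=15-2x\) vanishes at \(x=7.5\), a square yard. A quick numerical check of that calculus result:

```python
# Fencing example: perimeter 2x + 2y = 30, hence y = 15 - x,
# and the enclosed area is A(x) = x * (15 - x).
# Calculus gives A'(x) = 15 - 2x = 0 at x = 7.5 (a square layout).
def area(x):
    return x * (15 - x)

x_star = 15 / 2
# Sanity check: x = 7.5 beats its neighbours on both sides.
assert area(x_star) > area(7.4) and area(x_star) > area(7.6)
print(x_star, area(x_star))   # 7.5 metres a side, area 56.25 square metres
```

The second derivative \(A''(x) = -2 < 0\) confirms this critical point is a maximum, matching the second-derivative test described above.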
The law of reflection had been well known as early as the first century, but it took more than another millennium to discover Snell's law, the law of refraction. The law of reflection was readily observable and could be easily determined by making measurements; this law states that if a light ray strikes a surface at an angle \(θ_i\) relative to the normal and gets reflected off of the surface, it will be reflected at an angle \(θ_r\) relative to the normal such that \(θ_i=θ_r\). The law of refraction, however, is a little less obvious, and it required calculus to prove. The mathematician Pierre de Fermat postulated the principle of least time: that light travels along the path which gets it from one place to another such that the time \(t\) required to traverse that path is shorter than the time required to take any other path. In this lesson, we shall use this principle to derive Snell's law.

Chain rule

Integrals

To find the gravitational force exerted by a sphere on a particle of mass \(m\) outside of that sphere, we must first subdivide that sphere into many very skinny shells and find the gravitational force exerted by any one of those shells on \(m\). We'll see, however, that finding the gravitational force exerted by such a shell is in and of itself a somewhat tedious exercise. In the end, we'll see that the gravitational force exerted by a sphere of mass \(M\) on a particle of mass \(m\) outside of the sphere (where \(D\) is the center-to-center separation distance between the sphere and particle) is completely identical to the gravitational force exerted by a particle of mass \(M\) on the mass \(m\) such that \(D\) is their separation distance.

In previous lessons, we learned that by taking the integral of some function \(f(x)\) we can find the area underneath that curve by summing the areas of infinitely many, infinitesimally skinny rectangles.
In this lesson, we'll use the concept of a double integral to find the volume underneath any smooth and continuous surface \(f(x,y)\) by summing the volumes of infinitely many, infinitesimally skinny columns.

In the previous lesson, we defined the concept of a line integral and derived a formula for calculating them. We learned that line integrals give the volume between a surface \(f(x,y)\) and a curve \(C\). In this lesson, we'll learn about some of the applications of line integrals for finding the volumes of solids and calculating work. In particular, we'll use the concept of line integrals to calculate the volume of a cylinder, the work done by a proton on another proton moving in the presence of its electric field, and the work done by gravity on a swinging pendulum.

For a vector field \(\vec{F}(x,y)\) defined at each point \((x,y)\) within the region \(R\) and along the continuous, piecewise-smooth, closed curve \(c\) such that \(R\) is the region enclosed by \(c\), we shall derive a formula (known as Green's Theorem) which will allow us to calculate the line integral of \(\vec{F}(x,y)\) over the curve \(c\).

Solids of revolution

In this lesson, we'll use the concept of a definite integral to calculate the volume of a sphere. First, we'll find the volume of a hemisphere by taking the infinite sum of infinitesimally skinny cylinders enclosed inside of the hemisphere. Then we'll multiply our answer by two and we'll be done.

In this lesson, we'll discuss how, by using the concept of a definite integral, one can calculate the volume of something called an oblate spheroid. An oblate spheroid is essentially just a sphere which is compressed or stretched along one of its dimensions while leaving its other two dimensions unchanged. For example, the Earth is technically not a sphere—it is an oblate spheroid. To find the volume of an oblate spheroid, we'll start out by finding the volume of a paraboloid.
(If you cut an oblate spheroid in half, the two leftover pieces would be paraboloids.) To do this, we'll draw \(n\) cylindrical shells inside of the paraboloid; by taking the Riemann sum of the volumes of the cylindrical shells, we can obtain an estimate of the volume enclosed inside of the paraboloid. If we then take the limit of this sum as the number of cylindrical shells approaches infinity and their volumes approach zero, we'll obtain a definite integral which gives the exact volume inside of the paraboloid. After computing this definite integral, we'll multiply the result by two to get the volume of the oblate spheroid.

Series

In this lesson, we'll derive Maclaurin/Taylor polynomials, which are used to "approximate" arbitrary functions which are smooth and continuous. More generally, they are used to give a local approximation of such functions. We'll also derive Maclaurin/Taylor series, where the approximation becomes exact.
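As a small companion to the series section, here is a Maclaurin polynomial at work, approximating \(e^x\) at \(x=1\) with the partial sums \(\sum_{k=0}^{n-1} x^k/k!\) (the example function is my choice):

```python
import math
from math import factorial

def maclaurin_exp(x, n_terms):
    # Partial Maclaurin series of e^x: sum of x^k / k! for k = 0 .. n_terms-1
    return sum(x ** k / factorial(k) for k in range(n_terms))

for n in (2, 4, 8, 12):
    print(n, maclaurin_exp(1.0, n))   # approaches e = 2.71828...

assert abs(maclaurin_exp(1.0, 12) - math.e) < 1e-7
```

The error after \(n\) terms is bounded by the first omitted term, which shrinks factorially; that is the sense in which the series "becomes exact" in the limit.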
Research | Open Access

Approximation results on Dunkl generalization of Phillips operators via q-calculus

Advances in Difference Equations, volume 2019, Article number: 244 (2019)

Abstract

The main purpose of this paper is to construct q-Phillips operators generated by Dunkl generalization. We prove several results of Korovkin type and estimate the order of convergence in terms of several moduli of continuity.

Introduction and auxiliary results

In 1950, Szász [27] defined the following operators for a continuous function \(f\in C[0, \infty )\): provided that the series is convergent. In [26], Sucu approximated the Szász operators defined by (1.1) by a Dunkl generalization with an exponential function (see [24]). For \(\upsilon >-\frac{1}{2}\), Cheikh et al. [6] studied q-Hermite type polynomials and gave definitions of q-Dunkl analogues of exponential functions and a recursion formula as follows:

The q-integer \([ n ] _{q}\) and q-factorial \([ n ] _{q}!\), respectively, are defined by

The q-calculus appeared as a new area in approximation theory and has a lot of applications in different mathematical areas and in physics, such as number theory, combinatorics, orthogonal polynomials, basic hypergeometric functions, quantum theory, mechanics, and the theory of relativity (see [13,14,15]).

for \(\upsilon >\frac{1}{2}\), \(x\geq 0\), \(0< q<1\) and \(f\in C[0,\infty )\).

The main purpose of this article is to construct the q-Phillips operators generated by Dunkl generalization via q-calculus. For more details on the approximation of classical Phillips operators via a Dunkl type version, we refer to the recent article [21]. We obtain a Korovkin type result, as well as local and weighted approximations. We also study convergence properties by using the modulus of continuity and investigate the rate of convergence for functions belonging to the Lipschitz class. For further details and more information on approximation, we refer to [9, 10, 19].
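The displayed definitions of \([n]_q\) and \([n]_q!\) are not reproduced in this excerpt. Assuming the standard q-calculus conventions \([n]_q = 1+q+\dots+q^{n-1} = \frac{1-q^n}{1-q}\) and \([n]_q! = [1]_q [2]_q \cdots [n]_q\) (standard definitions, not copied from the paper), they can be computed exactly:

```python
from fractions import Fraction

def q_int(n, q):
    # q-integer [n]_q = 1 + q + ... + q^(n-1)  (equals (1 - q^n)/(1 - q) for q != 1)
    return sum(q ** k for k in range(n))

def q_factorial(n, q):
    # q-factorial [n]_q! = [1]_q [2]_q ... [n]_q, with [0]_q! = 1
    out = Fraction(1)
    for k in range(1, n + 1):
        out *= q_int(k, q)
    return out

q = Fraction(1, 2)
print([q_int(n, q) for n in range(1, 5)])   # values 1, 3/2, 7/4, 15/8
print(q_factorial(3, q))                    # 1 * 3/2 * 7/4 = 21/8
```

Note that \([n]_q \to n\) as \(q \to 1\), which is why statements in the paper require \(q_n \to 1\) so that \(1/[n]_{q_n} \to 0\).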
For every \(f\in C_{\zeta }[0,\infty )=\{f\in C[0,\infty ): f(t)=O(t ^{\zeta }), t\rightarrow \infty \}\) and \(x\in [0,\infty )\), \(\zeta >n\), \(n \in \mathbb{N}\cup \{0\}\), \(\upsilon \geq - \frac{1}{2}\), we define

where

For the proof of a basic estimate, we use the generalized q-gamma function.

Definition 1.1 The generalized q-gamma function is defined by

where \(\varGamma _{q}(t)=K(A;t)\gamma _{q}^{A}(t)\) and \(K(A;t)= \frac{1}{1+A}A^{t} (1+\frac{1}{A} )_{q}^{t} (1+A ) _{q}^{t-1}\). Moreover, for any positive integer n, we have \(K(A;n)=q^{\frac{n(n-1)}{2}}\) and \(\varGamma _{q}(n)=q^{\frac{n(n-1)}{2}} \gamma _{q}^{A}(n)\), which also satisfy the following equation:

For more details, see [8].

Estimation of moments

Lemma 2.1 Let \(\mathcal{P}_{n,q}^{\ast }( \cdot ; \cdot )\) be the operators defined by (1.7). Then we have

Proof We prove this lemma by using the definition of the generalized q-gamma function given in Definition 1.1. More precisely:

If \(u=0\), then \(f(t)=1\), and hence

If \(u=1\), then \(f(t)=t\), and hence

If \(u=2\), then, for \(f(t)=t^{2}\), we have

For \(u=3\), \(f(t)=t^{3}\), and for \(u=4\), \(f(t)=t^{4}\), we get

and

A simple calculation leads to

Lemma 2.2 Let \(\mathcal{P}_{n,q}^{\ast }( \cdot ; \cdot )\) be the operators defined by (1.7). Then we have

Korovkin and weighted Korovkin type approximation

Korovkin's approximation theory [4] has many applications in classical approximation theory, as well as in other branches of mathematics. In this section we obtain some approximation results via the well-known Korovkin type theorem and weighted Korovkin type theorem for the operators defined by (1.7).
Let \(C_{B}(\mathbb{R^{+}})\) be the set of all bounded and continuous functions on \(\mathbb{R^{+}}=[0,\infty )\), which is a linear normed space with

Let

In order to obtain the convergence results for the operators \(\mathcal{P}_{n,q}^{\ast }( \cdot ; \cdot )\) defined by (1.7), we take \(q=q_{n}\) (\(0< q_{n}<1\)) such that

for some constant α (\(0\leqq \alpha <1\)).

Theorem 3.1 Let \(q=q_{n}\), with \(0< q_{n}<1\), satisfy (3.1). Then, for any function \(f\in C[0, \infty )\cap E\),

Proof The proof is based on the well-known Korovkin theorem regarding the convergence of a sequence of linear and positive operators, so it is enough to prove the conditions uniformly on \([0,1]\). Since \(\frac{1}{[n]_{q}}\rightarrow 0\) as \(n\rightarrow \infty \), we have

This completes the proof. □

We recall the weighted spaces of functions on \(\mathbb{R}^{+}\), which are defined as follows:

where \(\sigma (x)=1+x^{2}\) is a weight function and \(M_{f}\) is a constant depending only on f. Note that \(Q_{\sigma }(\mathbb{R}^{+})\) is a normed space with the norm \(\Vert f\Vert _{\sigma }= \sup_{x\geq 0}\frac{\vert f(x)\vert }{\sigma (x)}\).

Theorem 3.2 Let \(q=q_{n}\), with \(0< q_{n}<1\), satisfy (3.1). Then, for any function \(f\in Q_{\sigma }^{k}(\mathbb{R}^{+})\), we have

Proof Take \(f(t)=t^{\tau }\). Since \(f(t)\in C_{\sigma }^{k}(\mathbb{R} ^{+}) \), by Korovkin's theorem it satisfies \(\mathcal{P}_{n,q_{n}} ^{\ast }(t^{\tau }; x)\rightarrow x^{\tau }\) uniformly as \(n\rightarrow \infty \). Therefore, by applying Lemma 2.1, since \(\mathcal{P}_{n,q_{n}}^{\ast }(1; x)=1\), we have

and

Then, clearly, \(\frac{1}{[n]_{q_{n}}}\rightarrow 0\) as \(n\rightarrow \infty \), which implies that

In a similar way,

Thus we have

This completes the proof. □

Order of approximation

The modulus of continuity of f, denoted by \(\omega (f;\delta )\), gives the maximum oscillation of f in any interval of length not exceeding \(\delta >0 \).
For a function \(f\in C_{B}(\mathbb{R}^{+})\), it is given by

and, for any \(\delta >0\), one has

Theorem 4.1 Let \(f\in C_{B}(\mathbb{R}^{+})\) and \(x\in [0,\infty )\). Then we have

Proof

where \(\mathcal{P}_{n,q_{n}}^{\ast } ( (q^{k+2\upsilon \theta _{k} }_{n} t-x )^{2};x ) \leq \mathcal{P}_{n,q_{n}}^{ \ast } ((t-x)^{2};x )\). If we now choose \(\delta = \delta _{n}=\sqrt{\frac{1}{[n]_{q_{n}}}}\), then we get our result. □

Corollary 4.2 For \(\delta _{n}=\mathcal{P}_{n,q_{n}}^{\ast } ( (q^{k+2 \upsilon \theta _{k} }_{n} t-x )^{2};x )\), we have

Rate of convergence

Now we give the rate of convergence of the operators \(\mathcal{P}_{n,q} ^{\ast }(f;x)\) in terms of the elements of the usual Lipschitz class \(\operatorname{Lip}_{M}(\nu )\). Let \(f\in C[0,\infty )\), \(M>0\) and \(0<\nu \leq 1\). The class \(\operatorname{Lip}_{M}(\nu )\) is defined as

Theorem 5.1 Let \(q=q_{n}\) be such that \(q_{n}\in (0,1)\) and (3.1) holds. Then, for each \(f\in \operatorname{Lip}_{M}(\nu )\) with \(M>0\), \(0<\nu \leq 1\), we have

Proof We prove it by using (5.1) and Hölder's inequality. Indeed,

Therefore,

This completes the proof. □

Let \(C_{B}[0,\infty )\) denote the space of all bounded and continuous functions defined on \(\mathbb{R}^{+}=[0,\infty )\), with the norm

also set

Theorem 5.2 Let \(\mathcal{P}_{n,q}^{\ast }( \cdot ; \cdot )\) be the operators defined by (1.7). Then, for \(q=q_{n}\) such that \(q_{n}\in (0,1) \) and any \(\psi \in C_{B}^{2}(\mathbb{R}^{+})\),

where \(\Delta _{n,q_{n}}=\frac{1}{{q_{n}}[n]_{q_{n}}}+\frac{1}{{2q_{n} ^{2}}[n]_{q_{n}}^{2}} (1+\frac{1}{q_{n}} )\) and \(\varPhi _{n,q_{n}}(x)=\frac{1}{2{q_{n}}^{2}[n]_{q_{n}}} (1+{q_{n}} ^{2}[1+2\upsilon ]_{q_{n}} )x\).

Proof Let \(\psi \in C_{B}^{2}(\mathbb{R}^{+})\).
Then, by using the generalized mean value theorem in the Taylor series expansion, we have

By applying the linearity property of \(\mathcal{P}_{n,{q_{n}}}^{ \ast }\), we have

which implies that

From (5.3) we have \(\Vert \psi ^{\prime } \Vert _{C_{B}(\mathbb{R}^{+})}\leq \Vert \psi \Vert _{C_{B}^{2}(\mathbb{R}^{+})}\) and \(\Vert \psi ^{\prime \prime }\Vert _{C_{B}(\mathbb{R}^{+})}\leq \Vert \psi \Vert _{C_{B}^{2}(\mathbb{R}^{+})}\), as well as

This completes the proof. □

Direct theorem

In 1968, J. Peetre [22] introduced a functional known as Peetre's K-functional, which is defined by

There exists a positive constant \(C>0\) such that \(K_{2}(f,\delta ) \leq C\omega _{2}(f,\delta ^{\frac{1}{2}})\), \(\delta >0\), where the second-order modulus of continuity is given by

Theorem 6.1 For \(f\in C_{B}(\mathbb{R}^{+})\), \(x\in {}[ 0,\infty )\) and \(q=q_{n}\) satisfying (3.1), we have

where \(\mathcal{D}\) is a positive constant.

Proof We prove this by using Theorem 5.2. Let \(\psi \in C_{B}(\mathbb{R}^{+}) \); then

By taking the infimum over all \(\psi \in C_{B}^{2}(\mathbb{R}^{+})\) and using (6.1), we get

Now, for an absolute constant \(\mathcal{D}>0\) provided in [7], we use the relation

This completes the proof. □

Atakut and Ispir [5] introduced the weighted modulus of continuity defined as

for an arbitrary \(f\in Q_{\sigma }^{k}(\mathbb{R}^{+})\). The two main properties of this modulus of continuity are \(\lim_{\delta \rightarrow 0}\varOmega (f;\delta )=0\) and

where \(t,x\in {}[ 0,\infty )\).

Theorem 6.2 Let \(q=q_{n}\) be numbers such that \(q_{n}\in (0,1)\) as \(n\rightarrow \infty \).
Then, for every \(f\in Q_{\sigma }^{k}(\mathbb{R}^{+})\),

where the positive constant \(\mathcal{C}=1+\mathcal{C}_{1}+4 \mathcal{C}_{2}\) and

Proof

and

From Lemma 2.2, we easily see that

where

Now there exists a constant \(\mathcal{C}_{1}> 0\) such that

We easily conclude that

where

Since \(\lim_{n \to \infty }\frac{1}{[n]_{q_{n}}^{i} }=0\) for all \(i=1,2,3,4\) and \(\lim_{n \to \infty }q_{n} =1\), for a constant \(\mathcal{C}_{1}>0\) we have

In view of (6.7), we easily see that

Finally, in light of equation (6.5), by combining (6.6)–(6.10), if we choose \(\delta =\sqrt{ \chi _{\upsilon ,q_{n}}(n)}\) and take the supremum over \(x\in {}[ 0,\chi _{\upsilon ,q_{n}}(n))\), we get the desired result. □

References

1. Acar, T.: Quantitative q-Voronovskaya and q-Grüss–Voronovskaya-type results for q-Szász operators. Georgian Math. J. 23, 459–468 (2016)
2. Acar, T., Aral, A.: On pointwise convergence of q-Bernstein operators and their q-derivatives. Numer. Funct. Anal. Optim. 36, 287–304 (2015)
3. Alotaibi, A., Nasiruzzaman, M., Mursaleen, M.: A Dunkl type generalization of Szász operators via post-quantum calculus. J. Inequal. Appl. 2018, Article ID 287 (2018)
4. Altomare, F.: Korovkin type theorems and approximation by positive linear operators. Surv. Approx. Theory 5, 92–164 (2010)
5. Atakut, Ç., Ispir, N.: Approximation by modified Szász–Mirakjan operators on weighted spaces. Proc. Indian Acad. Sci. Math. Sci. 112, 571–578 (2002)
6. Cheikh, B., Gaied, Y., Zaghouani, M.: A q-Dunkl-classical q-Hermite type polynomials. Georgian Math. J. 21, 125–137 (2014)
7. Ciupa, A.: A class of integral Favard–Szász type operators. Stud. Univ. Babeş–Bolyai, Math. 40, 39–47 (1995)
8. De Sole, A., Kac, V.G.: On integral representation of q-gamma and q-beta functions. Atti Accad. Naz. Lincei, Rend. Lincei, Mat. Appl. 16, 11–29 (2005)
9. Gairola, A.R., Deepmala, Mishra, L.N.: Rate of approximation by finite iterates of q-Durrmeyer operators. Proc. Natl. Acad. Sci. India Sect.
A Phys. Sci. 86, 229–234 (2016)
10. Gairola, A.R., Deepmala, Mishra, L.N.: On the q-derivatives of a certain linear positive operators. Iran. J. Sci. Technol., Trans. A, Sci. 42, 1409–1417 (2018)
11. İçöz, G., Çekim, B.: Dunkl generalization of Szász operators via q-calculus. J. Inequal. Appl. 2015, Article ID 284 (2015)
12. İçöz, G., Çekim, B.: Stancu type generalization of Dunkl analogue of Szász–Kantorovich operators. Math. Methods Appl. Sci. 39, 1803–1810 (2016)
13. Jackson, F.H.: On q-definite integrals. Q. J. Pure Appl. Math. 15, 193–203 (1910)
14. Lupaş, A.: A q-analogue of the Bernstein operator. Univ. Cluj-Napoca Semin. Numer. Stat. Calc. 9, 85–92 (1987)
15. May, C.P.: On Phillips operators. J. Approx. Theory 20, 315–322 (1977)
16. Milovanovic, G.V., Mursaleen, M., Nasiruzzaman, M.: Modified Stancu type Dunkl generalization of Szász–Kantorovich operators. Rev. R. Acad. Cienc. Exactas Fís. Nat., Ser. A Mat. 112, 135–151 (2018)
17. Mishra, V.N., Khatri, K., Mishra, L.N.: Statistical approximation by Kantorovich type discrete q-beta operators. Adv. Differ. Equ. 2013, Article ID 345 (2013)
18. Mishra, V.N., Khatri, K., Mishra, L.N.: Some approximation properties of q-Baskakov–Beta–Stancu type operators. J. Calc. Var. 2013, Article ID 814824 (2013)
19. Mishra, V.N., Pandey, S., Khan, I.A.: On a modification of Dunkl generalization of Szász operators via q-calculus. Eur. J. Pure Appl. Math. 10, 1067–1077 (2017)
20. Mursaleen, M., Nasiruzzaman, M., Alotaibi, A.: On modified Dunkl generalization of Szász operators via q-calculus. J. Inequal. Appl. 2017, Article ID 38 (2017)
21. Nasiruzzaman, M., Rao, N.: A generalized Dunkl type modifications of Phillips operators. J. Inequal. Appl. 2018, Article ID 323 (2018)
22. Peetre, J.: A Theory of Interpolation of Normed Spaces. Notas de Matemática, vol. 39. Instituto de Matemática Pura e Aplicada, Conselho Nacional de Pesquisas, Rio de Janeiro (1968)
23.
Rao, N., Wafi, A., Acu, A.M.: q-Szász–Durrmeyer type operators based on Dunkl analogue. Complex Anal. Oper. Theory (2019). https://doi.org/10.1007/s11785-018-0816-3 24. Rosenblum, M.: Generalized Hermite polynomials and the Bose-like oscillator calculus. Oper. Theory, Adv. Appl. 73, 369–396 (1994) 25. Srivastava, H.M., Mursaleen, M., AlotaibiI, A., Nasiruzzaman, M., Al-Abied, A.: Some approximation results involving the q-Szasz–Mirakjan–Kantorovich type operators via Dunkl’s generalization. Math. Methods Appl. Sci. 40, 5437–5452 (2017) 26. Sucu, S.: Dunkl analogue of Szász operators. Appl. Math. Comput. 244, 42–48 (2014) 27. Szász, O.: Generalization of S. Bernstein’s polynomials to the infinite interval. J. Res. Natl. Bur. Stand. 45, 239–245 (1950) 28. Ulusoy, G., Acar, T.: q-Voronovskaya type theorems for q-Baskakov operators. Math. Methods Appl. Sci. 39, 3391–3401 (2016) Ethics declarations Competing interests The authors declare that they have no competing interests. Additional information Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. About this article Received Accepted Published DOI MSC 41A25 41A36 33C45 Keywords Szász operator Generating functions Dunkl analogue Generalization of exponential function Modulus of continuity Weighted modulus of continuity
For the past couple of weeks at work, I’ve been checking through our computer-based assessments before the students have a go at them. That means I’ve had to do lots and lots of calculations by hand, to confirm the computer’s got the right answer. Well, not quite by hand – I use a calculator for the stuff that I can’t keep in my head. I’ve got a calculator app called RealCalc Plus on my phone, which I highly recommend. The main thing it’s got going for it is its RPN mode. While most calculators ask you to type in expressions pretty much as you’d see them on a page, read from left to right with numbers separated by operators and brackets, reverse Polish notation looks like this: 1 5 √ + 2 ÷ That computes $\phi = \frac{1+\sqrt{5}}{2}$. You put the numbers in first, and then say what to do with them. This might look obtuse, but it can make long expressions much easier to type in. Because there are no brackets, you don’t need to remember to close them. And if the same fragment appears more than once in the expression, you can just duplicate it on the stack, instead of typing it all out again. I’ve been using an RPN calculator for a few years now, and I get a warm mathmo feeling when I think about all the time it’s saved me. But there are a couple of things that regularly trip me up. Once you’ve performed a calculation, you can see the result but you can’t see how you got there, so when you’ve got a couple of long numbers sitting on the stack and you can’t remember which is which, you just have to start again. And if you want to repeat a calculation but with a slightly different starting value, you’ve got no choice other than to type it all in again. So, about halfway through my marathon of testing, with thumbs sore from tapping calculator buttons, I decided there was nothing for it except to make my own calculator. It would mainly be like RealCalc, but do something to solve those two problems. 
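The stack-based evaluation described above is simple enough to sketch in a few lines. This is a minimal illustrative evaluator, not RealCalc's actual internals — the token names and function structure are my own assumptions:

```javascript
// Minimal sketch of how an RPN calculator evaluates a token list with a stack.
function evalRPN(tokens) {
  const stack = [];
  const binary = {
    '+': (a, b) => a + b,
    '-': (a, b) => a - b,
    '×': (a, b) => a * b,
    '÷': (a, b) => a / b,
  };
  const unary = {
    '√': a => Math.sqrt(a),
  };
  for (const t of tokens) {
    if (t in binary) {
      const b = stack.pop(), a = stack.pop(); // operands come off in reverse order
      stack.push(binary[t](a, b));
    } else if (t in unary) {
      stack.push(unary[t](stack.pop()));
    } else {
      stack.push(Number(t)); // a plain number: just push it
    }
  }
  return stack.pop();
}

// 1 5 √ + 2 ÷  →  (1 + √5) / 2, the golden ratio
const phi = evalRPN(['1', '5', '√', '+', '2', '÷']);
```

Notice there's no precedence or bracket handling at all — the order of the tokens *is* the order of the operations, which is exactly why long expressions are easier to type in.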
I headed straight to my favourite gaudily-decorated coding environment, Glitch.com, and set to work. In this case, I think the day or so I spent making the calculator was worthwhile. I’ve been using the new calculator to finish off my testing duties, and it feels much better to use: I can do calculations quicker, and I lose track of what I’m doing less often. The first change I made was to display not just the results of calculations, but how they were obtained. When you press 1 2 +, a box is pushed to the stack with a 3 at the bottom, but also the 1 and 2 and a + symbol above it. This works for nested operations, too, but the ingredients are hidden by default until you tap the box. There’s an ‘undo’ button, which throws away an operation and puts the operands back on the stack – very handy when you tap the wrong operator! Numbers that you type in directly are shown in blue. You can tap any blue number and enter a new value, and any operations it feeds into are recalculated. I’ve used that quite a few times to set up a formula, such as the quadratic formula, and each time I just needed to change the input values and the result appeared at the bottom. It’s also useful for factorising numbers: I can start by typing something like 10199 3 ÷, and I just replace the 3 with different prime numbers until the result is an integer. After a while, I realised that there was another optimisation to be made: sometimes a formula uses the same variable more than once! For example, in the quadratic formula, $b$ appears twice: \[ x = \frac{ -b \pm \sqrt{b^2 - 4ac}}{2a} \] It would be nice if you could use the same number in more than one place, and have every instance update when you change the value of one of them. Every RPN calculator has a ‘copy’ button, to push another copy of the last item on the stack, so I just decided to make copies remember they’re linked. 
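The box-with-ingredients display, and the undo button that unpicks it, amount to each stack entry remembering its operands. Here's a tiny sketch of that idea under my own assumed data structures — not the calculator's real code:

```javascript
// Each stack entry is a box: plain numbers have no ingredients, while an
// operation box records its symbol, its operand boxes, and its result.
const num = n => ({ result: n, operands: [] });

function apply(stack, symbol, fn) {
  // Pop two operand boxes, push one box that remembers how it was made.
  const b = stack.pop(), a = stack.pop();
  stack.push({ symbol, operands: [a, b], result: fn(a.result, b.result) });
}

function undo(stack) {
  // Throw away the top operation and put its operands back on the stack.
  const top = stack.pop();
  stack.push(...top.operands);
}

// Pressing 1 2 + pushes a box showing 3 with the 1, 2 and + above it…
const stack = [num(1), num(2)];
apply(stack, '+', (a, b) => a + b);
// …and undo restores the original 1 and 2.
undo(stack);
```

Because the operand boxes are kept rather than discarded, undo needs no separate history log — the history is already sitting inside the result.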
You can move copies of a number around, but when you overwrite one of them, all the others take the new value too. In order for this to not make your head explode, I needed a way of showing which number boxes are linked together in this way. Asking the user for a name felt like overkill, and would interrupt your flow unnecessarily if you’re not planning on using the overwriting feature. I came up with a nice solution: when you copy a number, it’s assigned an emoji. All copies of the number have the emoji stuck onto them, so you can quickly tell who they are. This was way more useful than I expected! While I was testing a horrible question involving cancelling fractions with very large denominators, the linking feature really came into its own. By setting up $n/p$ and $d/p$, I could quickly find common factors of $n$ and $d$ just by changing the value of $p$ until both divisions produced integers. Bonza! I’m quite happy with my calculator and I haven’t felt the need to go back to RealCalc yet. Hooray! You can use it too: go to nice-calculator.glitch.me. If you open it on your phone, you can add it to your home screen and it should act like a normal app, instead of a web page.
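The linked-copies trick boils down to copies sharing one mutable cell, with operations recomputing from whatever the cells currently hold. A minimal sketch of the $n/p$ and $d/p$ setup, with illustrative names and numbers of my own choosing:

```javascript
// A shared, mutable box for a typed-in number; copies hold the same box.
const cell = value => ({ value });
// An operation node recomputes on demand from its operand cells.
const div = (a, b) => ({ eval: () => a.value / b.value });

// Set up n/p and d/p with a single shared p, as in the fraction-cancelling
// example. (10199 = 7 × 31 × 47 and 1071 = 7 × 153, so 7 is a common factor.)
const n = cell(10199), d = cell(1071), p = cell(3);
const np = div(n, p), dp = div(d, p);

// Overwriting p once changes both quotients, because both point at the same
// cell — with p = 7 they both come out as integers, so 7 divides n and d.
p.value = 7;
```

The emoji label in the app is then just a display concern: every box holding the same cell gets the same sticker, so you can see the sharing without naming anything.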