I learned that the necessary and sufficient condition for a finite state Markov chain to have a unique stationary distribution is that there is exactly one closed communicating class. For example, from this tutorial: Every Markov chain with a finite state space has a unique stationary distribution unless the chain has two or more closed communicating classes. I tried to prove this myself and got stuck. My proof goes as follows, split into three cases: 1) the whole chain is itself one closed communicating class, 2) it is an absorbing chain with one closed communicating class, 3) there is more than one closed communicating class. In the first case, there is a unique stationary distribution; many proofs can be found online, for example here. In the third case, there is more than one stationary distribution. Say there are two closed communicating classes (more than two is similar). A chain with two closed communicating classes can be represented as $$ P= \begin{bmatrix} A & 0 & 0 \\ 0 & B & 0 \\ C & D & E \end{bmatrix} $$ where $A$ and $B$ are the closed communicating classes. From the first case we know each has a unique stationary distribution, say $\pi_a$ and $\pi_b$, so $\pi_a=\pi_aA$ and $\pi_b=\pi_bB$. It is easy to verify that $v_a =[\pi_a, 0, 0]$ and $v_b = [0,\pi_b,0]$ both satisfy $v=vP$. Moreover, any convex combination $\alpha v_a + (1-\alpha)v_b$ with $0 \le \alpha \le 1$ is also a stationary distribution of $P$. The second case is where I got stuck: how can one prove that there is one and only one stationary distribution in this case? Any suggestions will be appreciated. I would also be glad to learn of other ways of proving it.
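The three cases are easy to check numerically. Below is a sketch (the specific matrices are made up for illustration) using the fact that the number of independent stationary distributions of a finite stochastic matrix equals the multiplicity of the eigenvalue $1$ of $P^T$, which matches the number of closed communicating classes.

```python
import numpy as np

# Case 3: two closed communicating classes {0,1} and {2,3}, plus a
# transient state 4 that can fall into either class.
P = np.array([
    [0.5, 0.5, 0.0, 0.0, 0.0],   # closed class A
    [0.5, 0.5, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5, 0.0],   # closed class B
    [0.0, 0.0, 0.5, 0.5, 0.0],
    [0.2, 0.1, 0.3, 0.2, 0.2],   # transient state
])

def stationary_count(P):
    # Number of independent stationary distributions = multiplicity of
    # the eigenvalue 1 of P^T.
    eigvals = np.linalg.eigvals(P.T)
    return int(np.sum(np.abs(eigvals - 1.0) < 1e-8))

# Any convex combination of the two class-wise stationary vectors is stationary.
va = np.array([0.5, 0.5, 0.0, 0.0, 0.0])
vb = np.array([0.0, 0.0, 0.5, 0.5, 0.0])
v = 0.3 * va + 0.7 * vb
assert np.allclose(v @ P, v)
print(stationary_count(P))    # 2: a one-parameter family of stationary distributions

# Case 2: absorbing chain with a single closed class {0}.
P2 = np.array([
    [1.0, 0.0],
    [0.5, 0.5],
])
print(stationary_count(P2))   # 1: unique stationary distribution [1, 0]
```

This is only a numerical illustration of the claim, of course, not a proof of the second case.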
EDIT: Here is a related post which concerns quadratic vector fields rather than the Van der Pol equation. In that linked post we see that the convexity of the limit cycle plays a crucial role. On the other hand, the unique limit cycle of the Van der Pol equation is convex. So there is a Riemannian metric on $\mathbb{R}^2 \setminus C$ such that all solutions of the Van der Pol equation are geodesics. Here $C$ is the algebraic curve $yP-xQ=0$ where $P,Q$ are the components of the Van der Pol vector field. Moreover, the limit cycle of the Van der Pol equation does not intersect this algebraic curve $C$. The classical Van der Pol equation is the following vector field on $\mathbb{R}^{2}$: \begin{equation}\cases{\dot{x}=y-(x^{3}-x)\\ \dot{y}=-x}\end{equation} This equation defines a foliation on $\mathbb{R}^{2}-\{ 0\}$. It is well known that this vector field has a unique limit cycle (isolated closed leaf) in the (punctured) plane. I am searching for a geometric proof of a particular case of this fact; more precisely, for an alternative proof that this system has at most one limit cycle. Here is my question: Question: Is there a Riemannian metric on $\mathbb{R}^{2}-\{0\}$ with the following two properties? The Gaussian curvature is nonzero at all points of $\mathbb{R}^{2}-\{0\}$. Each leaf of the corresponding foliation of $\mathbb{R}^{2}-\{0\}$ is a geodesic.
Obviously, from the Gauss-Bonnet theorem we conclude that the existence of such a metric implies that there are no two distinct simple closed geodesics on $\mathbb{R}^2\setminus \{0\}$: otherwise we could glue two copies of the annular region bounded by the closed geodesics along the boundary and obtain a torus with nonzero curvature. (So this would give an alternative proof that the Van der Pol equation has at most one limit cycle.) For a related question see Conformal changes of metric and geodesics. My initial motivation for this question goes back more than 15 years, to when I was reading a statement in do Carmo's book, Differential Geometry of Curves and Surfaces, which says: A topological cylinder in $\mathbb{R}^{3}$ whose curvature is negative can have at most one closed geodesic. After this, I asked my supervisor about a possible relation between limit cycles and Riemannian metrics. In response, he introduced me to a very interesting paper by Romanovski entitled "Limit cycles and complex geometry". Note 1: For the moment we forget "negative curvature"; we just search for a metric compatible with the Van der Pol foliation. In this regard, one can see that for every metric on $\mathbb{R}^2 \setminus \{0\}$ with the property that all solutions of the Van der Pol equation are (non-parametrized) geodesics, either the metric is not complete or the punctured plane does not possess a polynomial convex function or a strictly convex function. This is a consequence of Proposition 2.1 of this paper together with the following fact. Note 2: What is the answer if we replace the Van der Pol vector field by an arbitrary foliation of $\mathbb{R}^{2}\setminus \{0\}$ with a unique compact leaf? Remark: The initial motivation is mentioned on page 3, item 5 of this arxiv note.
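Not a substitute for a geometric argument, but a quick numerical illustration (plain RK4; the step size and time horizon are ad-hoc choices) that trajectories from inside and outside settle onto one and the same closed orbit. Note that rescaling $x = u/\sqrt{3}$ turns this system into the standard $\mu = 1$ Van der Pol equation, so the cycle's amplitude in $x$ should be about $2/\sqrt{3} \approx 1.15$.

```python
# Numerical illustration (not a proof): trajectories of
# xdot = y - (x^3 - x), ydot = -x from very different initial conditions
# settle onto the same closed orbit.

def f(x, y):
    return y - (x**3 - x), -x

def rk4_amplitude(x, y, dt=0.01, t_total=80.0, settle=60.0):
    """Integrate with classical RK4; return max |x| after the transient dies out."""
    steps = int(t_total / dt)
    amp = 0.0
    for i in range(steps):
        k1x, k1y = f(x, y)
        k2x, k2y = f(x + 0.5*dt*k1x, y + 0.5*dt*k1y)
        k3x, k3y = f(x + 0.5*dt*k2x, y + 0.5*dt*k2y)
        k4x, k4y = f(x + dt*k3x, y + dt*k3y)
        x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
        y += dt * (k1y + 2*k2y + 2*k3y + k4y) / 6
        if i * dt > settle:
            amp = max(amp, abs(x))
    return amp

a_inner = rk4_amplitude(0.1, 0.0)   # starts inside the limit cycle
a_outer = rk4_amplitude(3.0, 3.0)   # starts outside the limit cycle
print(a_inner, a_outer)             # both approach the same amplitude (~2/sqrt(3))
assert abs(a_inner - a_outer) < 0.05
```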
Problem. Let $\{A_\alpha\}$ be a collection of subsets of $X$ with $X=\bigcup_{\alpha}A_\alpha$. Let $f:X\rightarrow Y$; suppose that $f\vert_{A_\alpha}$ is continuous for each $\alpha$. An indexed family of sets $\{A_\alpha\}$ is said to be locally finite if each point $x$ has a neighborhood that intersects $A_\alpha$ for only finitely many values of $\alpha$. Show that if the family $\{A_\alpha\}$ is locally finite and each $A_\alpha$ is closed, then $f$ is continuous. Attempted Solution. It suffices to show that $f^{-1}\left(V\right)$ is closed in $X$ for any closed set $V$ in $Y$. Pick an arbitrary $x\in\overline{f^{-1}\left(V\right)}$; then $U\cap f^{-1}\left(V\right)\neq\emptyset$ for any open set $U$ containing $x$. By local finiteness, there exists an open neighborhood $N$ of $x$ such that $N\cap A_{\alpha}\neq\emptyset$ for only finitely many $\alpha$, say $\left\{ A_{i}\right\} _{i=1}^{k}$. Since $x\in U\cap N,$ we have \begin{align} U\cap f^{-1}\left(V\right)\cap\left(\cup_{i=1}^{k}A_{i}\right) & =U\cap\left[\cup_{i=1}^{k}\left(f^{-1}\left(V\right)\cap A_{i}\right)\right]\\ & =U\cap\left(\cup_{i=1}^{k}f\vert_{A_{i}}^{-1}\left(V\right)\right)\\ & \supset U\cap N\cap\left(\cup_{i=1}^{k}f\vert_{A_{i}}^{-1}\left(V\right)\right)\neq\emptyset \end{align} from which we conclude that $x\in\overline{\cup_{i=1}^{k}f\vert_{A_{i}}^{-1}\left(V\right)}$. Note that $f\vert_{A_{i}}$ is continuous, so $f\vert_{A_{i}}^{-1}\left(V\right)=f^{-1}\left(V\right)\cap A_{i}$ is closed in the subspace topology of $A_{i}$. So $f\vert_{A_{i}}^{-1}\left(V\right)=F_{i}\cap A_{i}$ for some closed set $F_{i}\subset X$. Since $A_{i}$ is closed, $f\vert_{A_{i}}^{-1}\left(V\right)$ is closed in $X$ as well.
Thus, $x\in\cup_{i=1}^{k}f\vert_{A_{i}}^{-1}\left(V\right)=f^{-1}\left(V\right)\cap\left(\cup_{i=1}^{k}A_{i}\right)\subset f^{-1}\left(V\right)$, from which we conclude $\overline{f^{-1}\left(V\right)}\subset f^{-1}\left(V\right)$; as a result, $f^{-1}\left(V\right)$ is closed because it contains all its limit points. Question: This is a problem in Munkres' Topology. I tried to solve it and I think I have it. I would really appreciate it if anyone could take a look at my solution.
If I were to cover the Fibonacci Sequence in introducing sequences to Calculus students, I would probably avoid some of the more obscure properties. I would focus on what early Calculus students should know: for example, how to prove that a sequence converges, and if it does, then how to find what it converges to. (Note also the places, not necessarily specified below, in which one needs to remind students about the basic properties of sequences and limits, and when things can be "split up" only when convergence has already been demonstrated.) In this case, one can show the ratio $\frac{f_{n+1}}{f_n}$ is bounded between $1$ and $2$ and (with a bit more work, for instance by examining the even- and odd-indexed subsequences, which are monotone) that it converges to some limit as $n$ tends to infinity. Call this limit $\phi$. Then consider two ways of writing $\frac{f_{n+2}}{f_n}$ as $n \rightarrow \infty$. The first way: $$\lim_{n \rightarrow \infty} \frac{f_{n+2}}{f_n} = \lim_{n \rightarrow \infty} \frac{f_{n+1} + f_{n}}{f_n} = \lim_{n \rightarrow \infty} \left( \frac{f_{n+1}}{f_n} + \frac{f_{n}}{f_n} \right) = \lim_{n \rightarrow \infty} \frac{f_{n+1}}{f_n} + 1 = \phi + 1.$$ The second way: $$\lim_{n \rightarrow \infty} \frac{f_{n+2}}{f_n} = \lim_{n \rightarrow \infty} \frac{f_{n+2}}{f_{n+1}} \cdot \frac{f_{n+1}}{f_n} = \lim_{n \rightarrow \infty} \frac{f_{n+2}}{f_{n+1}} \cdot \lim_{n \rightarrow \infty} \frac{f_{n+1}}{f_n} = \phi \cdot \phi = \phi^2.$$ Since these expressions are equal, we find that $\phi^2 = \phi + 1$, whence we can solve for $\phi$ using the quadratic formula. This will give two possibilities: one positive, and one negative. Noting that $\phi > 0$, we find the limit to which our ratio converges. On the other hand, if you are looking for a somewhat nonstandard problem: Call a binary string "tripletless" if it never contains three consecutive $0$s or $1$s. How many tripletless binary strings are there of length $n$? I worked this problem out some time ago, and found the answer is $2f_{n+1}$.
For example, Length $1$: $\{0, 1\}$. Total: $2$, i.e., $2f_{2} = 2\cdot1$. Length $2$: $\{00, 01, 10, 11\}.$ Total: $4$, i.e., $2f_{3} = 2\cdot2$. Length $3$: $\{001, 010, 011, 100, 101, 110\}.$ Total: $6$, i.e., $2f_{4} = 2\cdot3$. Length $4$: $\{0010, 0011, 0100, 0101, 0110, 1001, 1010, 1011, 1100, 1101\}.$ Total: $10$, i.e., $2f_{5} = 2\cdot5$. (If more details on the proof would be helpful then I would be happy to provide them, but I think it is a nice problem to work out. For pedagogically appropriate materials, though, I think the example above is a much better choice.) Edit (4 April 2016): An irresistible mathematical side-note. The "somewhat nonstandard problem" here essentially asks for a count of base $2$ strings of length $n$ without $3$ consecutive occurrences of the same digit. A natural generalization would be to ask for a count of base $a$ strings of length $b$ without $c$ consecutive occurrences of the same digit. I asked this very question way back in Apr 2014 on MSE (775863) and it now has a nice answer from Markus Scheuer that uses generating functions and the "Goulden-Jackson Cluster Method" (arXiv). Edit (27 March 2017): I recently explored with a class of mine the following: $$1+\frac{1}{1}, 1 + \frac{1}{1 + \frac{1}{1}}, 1 + \frac{1}{1 + \frac{1}{1 + \frac{1}{1}}}, \ldots $$ where each subsequent term is $1$ plus $1$ over the previous term. These yield $$\frac{2}{1}, \frac{3}{2}, \frac{5}{3}, \ldots$$ and, more generally, one can show that you get ratios of consecutive Fibonacci numbers. So the next one would be $8/5$, then $13/8$, etc.
When you really do this out by hand, e.g., observing that $1 + 1/(8/5)$ is $1 + 5/8 = (8+5)/8 = 13/8$, you not only get to grasp why the Fibonacci numbers are emerging, but by taking the "limit" of the sequence, you have that $x = 1 + 1/x$, which allows you to solve for $x$ and pick the positive root of this disguised quadratic to figure out precisely what the ratio of consecutive Fibonacci terms converges to.
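Both claims above, the $2f_{n+1}$ count of tripletless strings and the convergence of consecutive-term ratios to the positive root of $x^2 = x + 1$, are easy to check numerically. Here is a brute-force sketch:

```python
# Quick numerical check of two claims: the count of "tripletless" binary
# strings of length n is 2*f_{n+1}, and the ratio of consecutive Fibonacci
# numbers approaches the positive root of x^2 = x + 1.

from itertools import product

def fib(n):
    a, b = 1, 1            # f_1 = f_2 = 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def tripletless_count(n):
    # Brute force: count length-n binary strings with no "000" or "111".
    total = 0
    for bits in product("01", repeat=n):
        s = "".join(bits)
        if "000" not in s and "111" not in s:
            total += 1
    return total

for n in range(1, 13):
    assert tripletless_count(n) == 2 * fib(n + 1)

# Ratio of consecutive Fibonacci numbers -> phi = (1 + sqrt(5)) / 2.
phi = (1 + 5 ** 0.5) / 2
assert abs(fib(30) / fib(29) - phi) < 1e-10
print("both claims check out for small n")
```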
However, each reaction at a different temperature has a different equilibrium constant. With that said, the amount of each molecule will change, which also changes the concentration, so the amount of $\ce{NH3}$ would not necessarily be 2. This is true. I have not watched the video, but if he is suggesting that the stoichiometric coefficients define the ratios of the equilibrium concentrations of the compounds, then he is wrong. Stoichiometric coefficients do play an important role in equilibrium expressions, but not that one. He raised each concentration to the power of the starting coefficient for that molecule. If you start out with $\ce{3H2}$ and you need a wildly different concentration to reach equilibrium, why are we raising the concentration to the power of the starting amount of $\ce{H2}$? This is how the equilibrium constant (and more broadly the reaction quotient) is defined. Take your generic reaction: $$\ce{aA + bB <=> cC + dD}$$ The reaction quotient $Q$ is defined as the product of the concentrations of the products divided by the product of the concentrations of the reactants, each concentration raised to the power of its stoichiometric coefficient $\nu_i$. Since $\ce{a}$ equivalents of $\ce{A}$ appear in the reactants, $[\ce{A}]$ is multiplied $\ce{a}$ times, giving $[\ce{A}]^{\ce{a}}$. $$Q_c = \frac{\prod_i[\text{Products}]_i^{\nu_i}}{\prod_i[\text{Reactants}]_i^{\nu_i}} = \frac{[\ce{C}]^{\ce{c}}[\ce{D}]^{\ce{d}}}{[\ce{A}]^{\ce{a}}[\ce{B}]^{\ce{b}}} $$ The pressure version of $Q$ replaces concentrations with partial pressures $P_i$: $$Q_p = \frac{\prod_i (P_{\text{products}})_i^{\nu_i}}{\prod_i ( P_{\text{reactants}})_i^{\nu_i}} = \frac{P_{\ce{C}}^{\ce{c}} P_{\ce{D}}^{\ce{d}}}{P_{\ce{A}}^{\ce{a}} P_{\ce{B}}^{\ce{b}}}$$ The equilibrium constant ($K_c$ for concentrations and $K_p$ for pressures) is defined as the reaction quotient at equilibrium concentrations/pressures.
At equilibrium: $$Q_c = K_c = \frac{[\ce{C}]^{\ce{c}}[\ce{D}]^{\ce{d}}}{[\ce{A}]^{\ce{a}}[\ce{B}]^{\ce{b}}}$$ $$Q_p = K_p =\frac{P_{\ce{C}}^{\ce{c}} P_{\ce{D}}^{\ce{d}}}{P_{\ce{A}}^{\ce{a}} P_{\ce{B}}^{\ce{b}}}$$ The values of $K_c$ and $K_p$ are determined by experimentally measuring the concentrations or pressures of $\ce{A}$, $\ce{B}$, $\ce{C}$, and $\ce{D}$ and plugging them into the expressions above. For the specific reaction of nitrogen and hydrogen to form ammonia, the equilibrium constant (at a given temperature) is determined by measuring the equilibrium concentrations/pressures and plugging them into the expressions below. $$\ce{3H2(g) + 2N2(g) <=> 2NH3 (g)}$$ $$K_c = \frac{[\ce{NH3}]^2}{[\ce{H2}]^3 [\ce{N2}]^2}$$ $$K_p = \frac{P_{\ce{NH3}}^2}{P_{\ce{H2}}^3 P_{\ce{N2}}^2}$$ Once we know the value of $K_c$ or $K_p$, we can then use it and some partial concentration/pressure data to predict the equilibrium concentrations/pressures. At 300 K, the $K_c$ of this process (the Haber Reaction) is $4.34\times10^{-3} \text{ M}^{-3}$. If we started with $[\ce{H2}]=3.0 \text{ M}$, $[\ce{N2}]=2.0 \text{ M}$, and $[\ce{NH3}]=2.0 \text{ M}$, we can calculate the equilibrium concentrations of all three species when the dust has settled. First we calculate $Q_c$ and compare to $K_c$. If $Q_c$ is greater than $K_c$, there is too much product, and the reaction will shift toward reactants. If $Q_c$ is less than $K_c$, there is too much reactant, and the reaction will shift toward products. $$Q_c = \frac{[\ce{NH3}]^2}{[\ce{H2}]^3 [\ce{N2}]^2} = \frac{(2.0 \text{ M})^2}{(3.0 \text{ M})^3 (2.0 \text{ M})^2}= \frac{1}{27 \text{ M}^3}=3.70\times10^{-2} \text{ M}^{-3}$$ $$ 3.70\times10^{-2} > 4.34\times10^{-3} \space \therefore \space Q_c > K_c $$ At this combination, the reaction shifts toward reactants. $[\ce{H2}]$ and $[\ce{N2}]$ increase, and $[\ce{NH3}]$ decreases, all by a factor of $x$ multiplied by the stoichiometric coefficients of each species.
The new concentrations are $[\ce{H2}]=(3.0+3x) \text{ M}$, $[\ce{N2}]=(2.0+2x) \text{ M}$, and $[\ce{NH3}]=(2.0-2x) \text{ M}$. The equilibrium expression is: $$K_c = 4.34\times 10^{-3} \text{ M}^{-3} = \frac{(2-2x)^2}{(3+3x)^3 (2+2x)^2}$$ Since I chose the starting concentrations to be those values implied by the stoichiometry, I ended up (coincidentally) with a reaction near equilibrium. Thus, I cannot make assumptions about the value of $x$ relative to the concentrations, so no simplification is possible. To avoid mistakes, I will let WolframAlpha handle the algebra and solve for $x=3.18\times 10^{-1}$. The equilibrium concentrations are thus: $$[\ce{H2}]_{eq} = 3+3(3.18\times 10^{-1})=3.95 \text{ M}$$$$[\ce{N2}]_{eq}= 2+2(3.18\times 10^{-1})=2.64 \text{ M}$$$$[\ce{NH3}]_{eq} = 2-2(3.18\times 10^{-1})=1.36 \text{ M}$$ Note that if we had started at different initial concentrations, we would end with different equilibrium concentrations. The maths are easier if you start with zero of one or two species. I could also go on about the other way to determine the equilibrium constant (i.e. from the relationship $\Delta G^o=-RT\ln{K}$), but I have gone on long enough.
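For readers without WolframAlpha at hand, the same root can be found with a few lines of bisection (a sketch, using the stoichiometry and $K_c$ exactly as written in the post):

```python
# Solve K_c = (2-2x)^2 / ((3+3x)^3 (2+2x)^2) for x by simple bisection,
# rearranged as a root-finding problem on [0, 0.99].

def residual(x, K=4.34e-3):
    return (2 - 2*x)**2 - K * (3 + 3*x)**3 * (2 + 2*x)**2

lo, hi = 0.0, 0.99          # residual(lo) > 0 > residual(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0:
        lo = mid
    else:
        hi = mid
x = 0.5 * (lo + hi)

print(round(x, 3))                    # ~0.318, matching the answer above
print(round(3 + 3*x, 2))              # ~3.95 M
print(round(2 + 2*x, 2))              # ~2.64 M
print(round(2 - 2*x, 2))              # ~1.36 M
```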
Bit of a strange question, but what is it? My physics teacher said it was kind of like a "push" that pushes electrons around the circuit. Can I have a more complex explanation? Any help is much appreciated. Your teacher was right. Current is electric charges (usually electrons) moving. They don't do that by themselves for no reason, no more so than a shopping cart moves across the floor of a store by itself. In physics, we call the force that pushes charges the electromotive force, or "EMF". It is almost always expressed in units of volts, so we usually take a little shortcut and say "voltage" most of the time. Technically EMF is the physical quantity and the volt is one unit it can be quantified in. EMF can be generated several ways: Electromagnetic. When a conductor (like a wire) is moved sideways through a magnetic field, there will be a voltage generated along the length of the wire. Electric generators like those in power plants and the alternator in your car work on this principle. Electrochemical. A chemical reaction can cause a voltage difference. Batteries work on this principle. Photovoltaic. Crash photons into a semiconductor diode at the right place and you get a voltage. This is how solar cells work. Electrostatic. Rub two of the right kind of materials together and one sheds electrons onto the other. Two materials that exhibit this phenomenon well are a plastic comb and a cat. This is what happens when you shuffle across the right kind of carpet and then get a zap when you touch a metal object. Rubbing a balloon against your shirt does this, which then allows the balloon to "stick" to something else. In that case the EMF can't make the electrons move, but it still pulls on them, which in turn pulls on the balloon they are stuck on. This effect can be scaled up to make very high voltages and is the basis for how Van de Graaff generators work. Thermo-electric. A temperature gradient along most conductors causes a voltage. This is called the Seebeck effect.
Unfortunately you can't harness that directly, because to use this voltage there must eventually be a closed loop. Any voltage gained by a temperature rise in part of the loop is then offset by a temperature decrease in another part of the loop. The trick is to use two different materials that exhibit a different voltage as a result of the same temperature gradient (different Seebeck coefficients). Use one material going out to a heat source and a different one coming back, and you do get a net voltage you can use at the same temperature. The total voltage you get from one out-and-back, even with a high temperature difference, is pretty small. By putting many of these out-and-back combinations together, you can get a useful voltage. A single out-and-back is called a thermocouple, and can be used to sense temperature. Many together make a thermoelectric generator. Yes, those actually exist. There have been spacecraft powered on this principle, with the heat source coming from the decay of a radioisotope. Thermionic. If you heat something enough (hundreds of °C), the electrons on its surface move so fast that sometimes they fly off. If they have a place to land that is colder (so they won't fly off again from there), you have a thermionic generator. This may sound far-fetched, but there have also been spacecraft powered by this principle, with the heat source again being radioisotope decay. Electron tubes use this principle in part. Instead of heating something so that electrons fly off on their own, you can heat it to almost that point so that they fly off when a little extra voltage is applied. This is the basis of the vacuum tube diode and important to most vacuum tubes. This is why these tubes had heaters and you could see them glow. It takes glowing temperatures to get to where the thermionic effect is significant. Piezo-electric. Certain materials (quartz crystal, for example) generate a voltage when you squeeze them. Some microphones work on this principle.
The varying pressure waves in the air we call sound squish and squash a quartz crystal alternately, which causes it to make tiny voltage waves as a result. We can amplify them to eventually make signals you can record, drive loudspeakers with so you can hear them, etc. This principle is also used in many barbecue grill igniters. A spring mechanism whacks a quartz crystal pretty hard so that it makes enough of a voltage to cause a spark. Using a fluid analogy: voltage is pressure, current is flow rate. "Voltage" is a derived quantity. It is hard to understand its physical meaning without understanding the quantities it is derived from. It all starts with the force between two point charges. Let the charges of the points \$ P_1 \$ and \$ P_2 \$ be \$ q_1 \$ and \$ q_2 \$. Let the distance between them be \$ r \$. Coulomb's law says that the force between these two charges is proportional to the product of the charges, and inversely proportional to the square of the distance between them. That is: \$ F = k\dfrac{q_1 q_2 }{r^2} \$ Let the location and the charge of \$ P_1 \$ be fixed. Now the force depends on the location and charge of \$ P_2 \$. So we define a vector field called the "electrostatic field". The direction of the field at a point is the direction of the force on a positive test charge \$ q_2 \$ placed there, and its magnitude is the force per unit test charge. That is: \$ \bar{E} = \lim \limits_{q_2 \to 0} \dfrac{\bar{F}}{q_2} \$ We make the test charge \$ q_2 \$ approach zero so that it does not disturb the field of the source; don't let that confuse you too much. The field is something like "an aura that is able to generate some force per unit electrical charge". Its direction is the same as the direction of the force it generates, and its magnitude is proportional to the magnitude of that force.
Now we come to see that these quantities we defined are very similar to some other physical quantities we know. For example, the force above is very similar to the force between the Earth and a space object, like the Moon. And the \$ \bar{E} \$ field is very similar to the gravitational field of the Earth. Then the idea of defining an electrical potential arises, similar to the potential of a space object with respect to the Earth. The potential of a point in the space around Earth is the energy per unit mass needed to bring an object of unit mass from infinity to that point. When we define it in electrostatics, the potential of the point \$ P_2 \$ becomes: \$ V_2 = - \int \limits_{\infty}^{P_2} \bar{E} d\bar{\ell} \$ Then, the potential difference between two independent points (\$ P_2 \$ and \$ P_3 \$) in the space within the \$ \bar{E} \$ field (caused by \$ q_1 \$) is: \$ V_2 - V_3 = \left(-\int \limits_{\infty}^{P_2} \bar{E} d\bar{\ell}\right) - \left(-\int \limits_{\infty}^{P_3} \bar{E} d\bar{\ell}\right) = \int \limits_{P_3}^{P_2} \bar{E} d\bar{\ell}\$ Note that the electrostatic field is curl-free, which means it can always be represented as the gradient of a scalar field (\$ \bar{E} = - \bar{\nabla} V \$). These line integrals are independent of path. So, this is the definition of the potential field. A point will always have a potential even if there is no charge on it. Think of it as "the energy needed to bring a unit charge there from infinity". The potential difference between two points is similar; it is the energy needed to carry a unit charge from one point to the other. Or think of it with a more concrete example involving celestial bodies: the potential difference between 100 km and 200 km above Earth's surface is nothing but the difference of the potential energies of two 1 kg objects at the given heights. When we come to the real world, the potential of a point is the sum of all the individual potentials caused by the charges around it (the principle of superposition applies).
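The line-integral definition of potential can be checked numerically. Here is a small sketch (the charge value and distances are arbitrary choices) confirming that integrating \$ E \$ along a radial path from far away down to \$ r \$ reproduces the closed form \$ V = kq_1/r \$.

```python
import math

k = 8.9875e9       # Coulomb constant, N*m^2/C^2
q1 = 1e-9          # a 1 nC source charge (arbitrary)

def E(r):
    # Field magnitude of the point charge at distance r.
    return k * q1 / r**2

def potential_numeric(r, r_far=1e9, steps=20000):
    # V(r) = -int_{infty}^{r} E dl = +int_{r}^{infty} E dr'.
    # Substitute u = ln r' (so dr' = r' du) to handle the huge range,
    # then apply the midpoint rule in u.
    u0, u1 = math.log(r), math.log(r_far)
    du = (u1 - u0) / steps
    total = 0.0
    for i in range(steps):
        rp = math.exp(u0 + (i + 0.5) * du)
        total += E(rp) * rp * du
    return total

V_numeric = potential_numeric(0.5)   # potential 0.5 m from the charge
V_exact = k * q1 / 0.5
print(V_numeric, V_exact)            # the two agree closely
assert abs(V_numeric - V_exact) / V_exact < 1e-4
```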
A voltage appears whenever there is an imbalance of electrical charge (i.e. electrons). Since like charges repel and opposite charges attract, the particles in any collection of electric charge exert forces on one another. If there is an imbalance of negative to positive, a kind of "pressure" or "push" is formed. In conducting materials, electrons are free to flow through the material, as opposed to being fixed in atoms, and will therefore flow to the point of least "pressure". Some complicating considerations: Electricity and chemistry are closely connected. In a battery, for example, a chemical imbalance creates an electrical imbalance (voltage) across the terminals, by forcing charged particles to one side. Chemistry also affects electrical conditions in other ways. Current (I) is the flow of electrons; however, electrons (since they are negative) flow in the direction opposite to the "current". The current is then the conceptual flow of positive charge, even though the actual flow is negative, but in the other direction. This demonstrates that a negative "push" is the exact same as a positive "pull". A definition I've heard is: Voltage is the potential (for charge) to do work. In other words, voltage is the energy given to a unit of charge, i.e., \$ V = {dE \over dQ} \$, where \$ E \$ is energy and \$ Q \$ is charge. The quickie, first-approximation, rule-of-thumb answer: voltage is electrical pressure. But expanding on that: voltage is not like pressure, not exactly. Instead, it's a math/physics concept called "potentials". Voltage is more like altitude in a gravity field, where each electron or proton is like a boulder. Altitude isn't pressure or weight or force. If a boulder is at the top of a hill, the boulder is at a high-potential location. This means the boulder is storing potential energy (PE), and will release this energy as kinetic energy (KE) if it's allowed to move downhill (move to a low-potential location).
Lifted to the same voltage (altitude), larger boulders would have higher PE. More precisely: voltage is electric potential. It isn't force (it's not like the boulder's down-force or weight, nor is it like the amount of force upon an electric charge in an electric field). Also, voltage is not potential energy, since if we take away the boulder, then the gravity, altitude, and potential still exist. Potentials are part of the field itself. Patterns of voltage can hang in empty space. Voltage is a way of describing/visualizing/measuring electric fields. To describe e-fields, we can draw flux lines between opposite electric charges. Or instead, we can draw the pattern of voltage, the iso-potential surfaces, drawing them perpendicular to the flux lines. Wherever we find some electric lines of force, we'll also find voltage. What is voltage not? What are typical misconceptions? Here's a big one: "voltage is a kind of potential energy." Nope, wrong. Instead, voltage is the math concept of "potentials", which aren't energy, nor are they "potential to do something". Here's another misconception: "voltage is the potential energy per unit charge." Nope, wrong. That's just the physics definition of the volt unit, linking it with the joule and coulomb units. Actually it goes the other way: the amount of energy (the amount of work done in moving a charge across a certain voltage difference) is found by multiplying charge by the change in voltage! Electrical energy is determined by voltage! But voltage itself needs no moving charge nor stored potential energy, since voltage is a way to describe a field in empty space. The test charges used to describe voltage are imaginary infinitesimal charges. Another misconception: "voltage appears on the surface of wires." Wrong; voltage actually extends into the space around wires. Half-way between your 9V battery terminals you'll find a 4.5V potential, hanging alone in empty space!
But typical voltmeters won't detect the space-voltage, since that requires a voltmeter with infinite input impedance, or at least a few hundred gigohms. Normal 10-megohm DMM voltmeters draw significant current and will short out any pure e-fields, so they must be touched to conductor surfaces in order to measure voltage. What is voltage? It's a stack of invisible membranes which fill the space between charged capacitor plates. Voltage is the pattern of concentric onion layers which surround any charged object, with the onion layers running perpendicular to the flux lines of the electric field. So, 'stacks of voltage layers' is one way of describing an electric field. The other, more familiar way is to use 'lines of force.' Actually we can't [equate voltage with force]. The electrostatic force is proportional to the potential gradient, not directly to the potential. The force on one coulomb of charge is proportional to the potential gradient: \$ F= Q \times {d[V]\over dl } \$ Actually, 1 V means that if you have 1 joule of electrical energy, it will be transferred into mechanical energy on a +1 coulomb charge [so it will accelerate, or increase its \$ \frac{1}{2}mv^2 \$ by 1 J]. It's actually analogous to energy. Adding to what Gunnish said: voltage at point A is literally a measurement of the work you would expend if you were to push a unit positive charge from 0V (usually defined either as infinitely far from A, or as ground) to A. Voltage is important in electronics because if we start with a positive charge at point A, it is able to DO that same amount of work getting to 0V (e.g. turning on an LED in the process). What is pushing the electrons is a difference in potential energy, much like the way you are being pushed/pulled to the Earth by gravity. This generates a favorable probability for the electrons to move one way over another, which also partly explains why the electrons move "randomly" in a wire.
Let's say I have a solid cylinder of uniform mass density, radius $R$ and height $h$. I know that the moment of inertia of this cylinder rotating about the axis parallel to the height and passing through the center of mass is $\frac{MR^2}{2}$. How would the moment of inertia (about the same axis) change if I were to cut this cylinder in half? (The cut goes along the length of the cylinder.) If you do the calculation, you get that the moment of inertia of a cylinder is $$ I=\rho\int_{z_1}^{z_2}\int_0^{2\pi}\int_0^R r^3drd\theta dz $$ with $$ \rho=\frac{M}{V}=\frac{M}{h\pi R^2} $$ For half of the cylinder, $\theta$ goes from $0$ to $\pi$, while the density stays the same. Since the integrand has no angular dependence, the result is just half the moment of inertia of the original cylinder. (Equivalently, since the half-cylinder also has half the mass, in terms of its own mass $M' = M/2$ it is $I = \frac{M'R^2}{2}$, the same form as before.)
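A quick numerical check of the argument above (a sketch, with made-up values for $M$, $R$, $h$): halving the $\theta$ range exactly halves the moment of inertia, and in terms of the half-cylinder's own mass it keeps the $\frac{1}{2}M'R^2$ form.

```python
import math

# Arbitrary example values (assumptions for the check).
M, R, h = 2.0, 1.5, 3.0
rho = M / (h * math.pi * R**2)

def inertia(theta_max, nr=1000):
    # I = rho * int_0^h int_0^theta_max int_0^R r^3 dr dtheta dz
    #   = rho * h * theta_max * int_0^R r^3 dr  (radial midpoint rule below)
    dr = R / nr
    radial = sum(((i + 0.5) * dr) ** 3 * dr for i in range(nr))
    return rho * h * theta_max * radial

I_full = inertia(2 * math.pi)
I_half = inertia(math.pi)

assert abs(I_full - 0.5 * M * R**2) / I_full < 1e-4   # I = M R^2 / 2
assert abs(I_half - 0.5 * I_full) < 1e-9              # exactly half
# In terms of the half-cylinder's own mass M' = M/2: I_half = M' R^2 / 2.
assert abs(I_half - 0.5 * (M / 2) * R**2) / I_half < 1e-4
print(I_full, I_half)
```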
I am working on a problem which would possibly relate the Fourier transform/series to the jump singularities of a function, i.e. points where the function itself or one of its derivatives jumps (and possibly some kind of logarithmic blow-ups too, as a corollary). Consider a BV function $f(t)$ in $L^2(\mathbb{R})$ such that $f(t) =0$ for $t<0$. Let $F(\omega)$ be its Fourier transform. Consider the family of curves $\alpha_t(\omega) \equiv (x_t(\omega),y_t(\omega)) $ given as $$x_t(\omega) = \int_0^{\omega}R(\Omega)\cos(\Omega t + \Phi(\Omega))d\Omega$$ and $$y_t(\omega) = \int_0^{\omega}R(\Omega)\sin(\Omega t + \Phi(\Omega))d\Omega,$$ defined only for $\omega \ge 0$, where $R(\omega) = |F(\omega)|$ and $\Phi(\omega) = \angle F(\omega)$. Let $A_t(s) \equiv (X_t(s),Y_t(s))$ be the arc length parametrization of the above-mentioned curves. It can be seen that the transformation is $s(\omega) = \int_0^{\omega}R(\Omega)d\Omega$. We define the moment of inertia about the center of mass of a segment of this curve corresponding to $t$, between $s_0$ and $s_1$ in the arc length parametrization, as $$I_{s_0,s_1}(t) = \int_{s_0}^{s_1} ((X_t(s)-X_{cm})^2 + (Y_t(s)-Y_{cm})^2) ds, $$ where $X_{cm} = \frac{1}{s_1-s_0}\int_{s_0}^{s_1}X_t(s)ds$ and $Y_{cm} = \frac{1}{s_1-s_0}\int_{s_0}^{s_1}Y_t(s)ds$. The moment of inertia about the center of mass of the curve segment (corresponding to $t$) between $\omega_0$ and $\omega_1$ is denoted as $$MI_{\omega_0,\omega_1}(t) = I_{s(\omega_0),s(\omega_1)}(t).$$ Assumption: Assume $f(t)$ only has jump singularities, in the sense that the function itself or one of its derivatives jumps at such a point. For example, $t_0$ is considered a singularity if any derivative, say the tenth derivative $f^{(10)}(t)$, jumps at $t_0$.
Statement : Given that there is a jump singularity at $t = t_0 > 0$, we can always find an $\omega_{oc}$ such that, for all $\omega_0 > \omega_{oc}$ and any arbitrarily small $\epsilon$, we can find a sufficiently large $\omega_{0,\epsilon}$ such that for all $\omega > \omega_{0,\epsilon}$ the function in $t$, $MI_{\omega_0,\omega}(t)$, has a maximum in $(t_0-\epsilon,t_0+\epsilon)$. PS : Clarification : If the function $f$ is continuous at $t_0$ but, say, the tenth derivative jumps at $t_0$, then $t_0$ is also defined as a jump singularity of $f$ in this problem. The function may have multiple jump singularities, like the third derivative jumping at $t_1$ and the second derivative jumping at $t_2$, etc. Clues I had : I am trying to use this result and this answer, which I think is the key, but with my limited ability to solve complex math and lack of any sharp ideas, I am not able to make further progress. So I give up and post it here in this forum, where I hope to find fresh ideas and a solution. Things look interesting once we start looking from the geometric perspective of the plane where our curves are. Also to note: $f\cos(\theta) + f_h \sin(\theta)$ ($f_h$ being the Hilbert transform of $f$) for different $\theta$ all have singularities (see here) at the same places, the only difference being partial blowup and partial jump, depending on $\theta$ (the blowup being always logarithmic). This follows from the translation and rotation invariance property of our moment of inertia about the center of mass. Some non technical details : ...I have been trying to formulate and prove this relation for the past 3.5 years. Most of my activity on math.SE and here was indirectly related to solving this. In fact I bumped into math.SE and mathoverflow when I started on this. This question in particular was an attempt to learn of any existing theorems...). (..If proven, this can be extended to functions in $\mathbb{R}^N$ using Clifford algebra.
I guess this problem is very important for applied math; as far as I know, definitely for signal processing. PS2 : This concept exhibits duality: for example, consider the real part of the Fourier transform as the function to begin with; then we can construct exactly similar things about the singularities of this real-part function in the frequency domain. Motivation : For math greats like Terry and the likes, and also for newbies like me, here is a motivation as to why this problem is so important. Let $f(t)$ be an audio signal. We can safely assume it to be bandlimited to 0-20kHz as we cannot hear anything above that. Capture this signal in a digital computer with an appropriate sampling frequency and denote it as $f[n]$. Now take the discrete Hilbert transform of $f[n]$ to get $f_h[n]$ (using the code fh = imag(hilbert(f)); in MATLAB). Compute the signal $f_{\theta}[n] = f[n]\cos\theta + f_h[n]\sin\theta$ for any value of $\theta$, then listen to the signal with different values of $\theta$. They all sound exactly identical. Similarly, our $MI_{\omega_0,\omega_1}(t)$ is the same for all $f_{\theta} = f\cos\theta + f_h\sin\theta$, for any value of $\theta$. Just try it. $\langle f,f_h\rangle = 0$, then why do they produce the same effect in the listener? MATLAB code : [f,fs] = wavread('audio_file.wav'); fh = imag(hilbert(f)); theta = pi/4; f_tht = f*cos(theta) + fh*sin(theta); wavplay(f,fs); wavplay(f_tht,fs); Some illustrations for the problem (these are discrete approximations) : The function $f(t)$ (discrete version) is as follows : The corresponding moment of inertia $MI(t)$ (segment from zero to highest frequency) is as follows : (interesting to observe there is no ringing!) Here is a plot of the curves from $t = 0$ to $t = 800$. We can see that at $t = 400$ the curve is almost straight, making the MI highest. The $x$-axis is $f(t)$ and the $y$-axis is $f_h(t)$.
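The MATLAB experiment can be mirrored in plain Python for a single tone, where the Hilbert transform is known in closed form ($\mathcal{H}[\cos]=\sin$). This checks directly that $f_\theta = f\cos\theta + f_h\sin\theta$ is just a phase-shifted copy of $f$ with the same envelope, which is why all the $f_\theta$ sound identical (a minimal sketch of mine; the sampling parameters are illustrative, not from the post):

```python
import math

N = 1000
t = [2 * math.pi * 5 * n / N for n in range(N)]   # a 5-cycle tone

f   = [math.cos(x) for x in t]      # signal
f_h = [math.sin(x) for x in t]      # its Hilbert transform (exact for cos)

theta = math.pi / 4
f_tht = [f[n] * math.cos(theta) + f_h[n] * math.sin(theta) for n in range(N)]

# cos(x)cos(θ) + sin(x)sin(θ) = cos(x − θ): same tone, shifted phase
shifted = [math.cos(x - theta) for x in t]
err = max(abs(f_tht[n] - shifted[n]) for n in range(N))
print(err)   # at rounding level: f_theta is f with a pure phase shift
```

For a general signal the same statement holds frequency by frequency: $|F_\theta(\omega)| = |F(\omega)|$ for $\omega>0$, so only phases differ, to which the ear (and the $MI$ functional above) is insensitive.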
A somewhat similar question to what I'm going to ask is this one. The problem is basically that one has the heat equation $c^2\nabla^2u = u_t$ in which initial and boundary conditions are given, but these boundary conditions do not match continuously with the initial conditions. In the question I referred to in the first sentence the problem can be solved quite easily, since the discontinuity is given in the spatial derivatives of $u$. However, I have this situation: there are two spheres with radii $r_1,r_2$. The inner surface is at $0$ temperature and the outer surface is always at a temperature $f(\theta)$ (in the physics convention, $0\le\theta\le\pi$). The initial condition of the system in the region between the two spherical surfaces is $u(r,\theta,0)=0$. Here is my analysis: when one separates the time part $T$ of the heat equation one gets $$\frac{dT}{dt} + \alpha c^2 T = 0.$$ If $\alpha$ is positive then we get an exponential solution that goes to $0$ as $t$ goes to $\infty$. If $\alpha$ is negative, then we get an exponential that diverges as $t$ goes to $\infty$. Finally, if $\alpha$ is $0$, we get a linear solution. First of all, if we get either exponential solution it means that $u$ either goes to $0$ or diverges in the domain of the problem ($r_1<r<r_2$, $0\le\theta\le\pi$). This is not possible, because the stationary system cannot have $0$ temperature on account of the function $f(\theta)$ at the boundary: the inside of the bigger sphere should heat up a bit. Also, if $T$ is linear ($At+B$) it diverges unless $A=0$; but then $T(t)=B$, constant, and examining the initial condition forces $B=0$, which gives the trivial solution $u=0$. So the thing here is that the solution is initially not continuous at $r_2$ (in the question I referred to, the discontinuity is in the derivative of the solution). So my question is: what's wrong with my approach? Thanks for your comments and answers.
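For what it's worth, the standard resolution of this kind of mismatch (my sketch, not part of the original question) is to subtract the steady state before separating variables:

```latex
u(r,\theta,t) = v(r,\theta) + w(r,\theta,t), \qquad
\begin{cases}
\nabla^2 v = 0, & v(r_1,\theta)=0,\quad v(r_2,\theta)=f(\theta),\\
c^2\nabla^2 w = w_t, & w(r_1,\theta,t)=w(r_2,\theta,t)=0,\quad w(r,\theta,0)=-v(r,\theta).
\end{cases}
```

The nonzero large-time temperature lives entirely in the time-independent part $v$, so the separation argument applies only to $w$, which needs nothing but the decaying exponentials ($\alpha>0$); the discontinuity at $r_2$ is absorbed into the (merely square-integrable) initial data of $w$.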
Answer to Question #2602 in Calculus for Alex T Simon. Question #2602: Use polar coordinates to find the volume of a solid inside the cylinder and inside the ellipsoid. Expert's answer: 1) Suppose that one base $D$ of the cylinder is a disk of radius $R$ in the plane $xy$ centered at the origin, and let its height be $H$. Then its volume can be calculated as the integral $$\int_D Hr\,dr\,d\phi = H \int_0^R r\,dr \int_0^{2\pi}d\phi = H\,\frac{R^2}{2}\,2\pi = \pi R^2 H.$$ 2) In general, an ellipsoid is given by the equation $$\frac{x^2}{a^2}+ \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1.$$ The simplest way to find its volume via polar coordinates is to first assume that $a=b=c=1$, so our ellipsoid is a ball of radius 1 centered at the origin. Moreover, we can calculate only the volume of the part of this ball above the plane $xy$: $$V = 2 \int_D \sqrt{1-x^2-y^2}\,dx\,dy,$$ where $D$ is the unit disk in the plane $xy$ centered at the origin.
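In polar coordinates the last integral becomes $V = 2\int_0^{2\pi}\!\int_0^1 \sqrt{1-r^2}\,r\,dr\,d\phi$, which should equal the volume $4\pi/3$ of the unit ball. A quick numerical check (my sketch, assuming the simple midpoint rule is accurate enough here):

```python
import math

# V = 2 ∫_D sqrt(1 - x^2 - y^2) dx dy, in polar form with Jacobian r:
# V = 2 ∫_0^{2π} ∫_0^1 sqrt(1 - r^2) r dr dφ
n = 2000
dr = 1.0 / n
radial = sum(math.sqrt(max(0.0, 1 - r * r)) * r * dr
             for r in ((i + 0.5) * dr for i in range(n)))
V = 2 * (2 * math.pi) * radial

print(V, 4 * math.pi / 3)   # volume of the unit ball
```

The radial integral evaluates to $1/3$ in closed form, recovering $V = \frac{4}{3}\pi$; rescaling $x\to ax$, $y\to by$, $z\to cz$ then gives $\frac{4}{3}\pi abc$ for the general ellipsoid.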
Numerical solutions of viscoelastic bending wave equations with two term time kernels by Runge-Kutta convolution quadrature. Department of Mathematics, Hunan Normal University, Changsha 410081, Hunan, China. $$u_{t}(x,~t)-\int_{0}^{t}[\beta_{1}(t-s)\,u_{xx}(x,~s) - \beta_{2}(t-s)\,u_{xxxx}(x,~s)]\,ds = f(x,~t), \qquad 0<x<1,~ 0<t\leq T,$$ where the kernels $\beta_{1}(t)$ and $\beta_{2}(t)$ are defined on $(0,~\infty)$. Mathematics Subject Classification: Primary: 45K05, 65J08; Secondary: 65D3. Citation: Da Xu. Numerical solutions of viscoelastic bending wave equations with two term time kernels by Runge-Kutta convolution quadrature. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2389-2416. doi: 10.3934/dcdsb.2017122
Both devices are listed to work with I2C clock frequencies up to 400 kHz. The two devices are the only devices on the I2C bus. However, working out the calculations for the pull-up resistor bounds gives some rather odd values. Calculating the I2C minimum pull-up resistor value (for $V_{cc} = 3.3\ \mathrm{V}$): \begin{equation} R_{min} = \frac{V_{cc} - 0.4\ \mathrm{V}}{3\ \mathrm{mA}} = 966.7\ \Omega \end{equation} Looking at the uC datasheet, page 92 lists a max pin input capacitance of 10pF. However, for the LCD, page 8 has something called the "Capacitive load represented by each bus line", labeled as Cb and listed at a max value of 400pF. I'm assuming I should just add this value to the 10pF uC input capacitance, but this seems really high and the calculations are wonky. For example, when I try computing the maximum pull-up resistor value for a 400kHz clock: \begin{equation} R_{max} = \frac{300\ \mathrm{ns}}{10\ \mathrm{pF} + 400\ \mathrm{pF}} = 731.7\ \Omega \end{equation} Am I misinterpreting the LCD datasheet? Obviously the maximum allowable pull-up resistor value cannot be smaller than the minimum allowable value. Likewise, if I assume a maximum net bus capacitance of 400pF, I get: \begin{equation} R_{max} = \frac{300\ \mathrm{ns}}{400\ \mathrm{pF}} = 750\ \Omega \end{equation} still below the minimum allowable value.
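To make the arithmetic explicit, here is a small sketch reproducing the three numbers above (the 3.3 V supply is inferred from the $R_{min}$ result, and the simple $t = RC$ model with a 300 ns budget follows the post; a real design would use the I2C spec's $t_r/(0.8473\,C_b)$ formula instead):

```python
# I2C pull-up bounds as computed in the post
VCC  = 3.3       # V, inferred from R_min = (Vcc - 0.4 V)/3 mA = 966.7 ohm
V_OL = 0.4       # V, max low-level output voltage
I_OL = 3e-3      # A, sink current
T_R  = 300e-9    # s, rise-time budget at 400 kHz

r_min = (VCC - V_OL) / I_OL        # strongest allowed pull-up

def r_max(c_bus):
    """Weakest pull-up for a given bus capacitance (post's t = RC model)."""
    return T_R / c_bus

print(round(r_min, 1))             # 966.7
print(round(r_max(410e-12), 1))    # 731.7 -> below r_min: the contradiction
print(round(r_max(400e-12), 1))    # 750.0
```

The contradiction $R_{max} < R_{min}$ is exactly what signals that a 400 pF bus cannot meet fast-mode timing with a plain resistive pull-up at 3.3 V.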
Is there a finite group such that, if you pick one element from each conjugacy class, these don't necessarily generate the entire group? No, this is impossible. This is a standard lemma, but I'm finding it easier to give a proof than a reference: Let $G$ be your finite group. Suppose that $H$ were a proper subgroup intersecting every conjugacy class of $G$. Then $G = \bigcup_{g \in G} g H g^{-1}$. If $g_1$ and $g_2$ are in the same coset of $G/H$, then $g_1 H g_1^{-1} = g_2 H g_2^{-1}$, so we can rewrite this union as $\bigcup_{g \in G/H} g H g^{-1}$. There are $|G|/|H|$ sets in this union, each of which has $|H|$ elements. So the only way they can cover $G$ is if they are disjoint. But they all contain the identity, a contradiction. UPDATE: I found a reference. According to Serre, this result goes back to Jordan, in the 1870's. It is impossible. As I mentioned in the comment to Richard Stanley's answer, you are looking for a finite group $G$ with a maximal subgroup $M$ such that $M$ intersects every conjugacy class. Then $G=\cup M^g$ is the union of $M$ and its conjugates, which is well known to never happen. Steve The impossibility also follows from Jordan's lemma: Let $G$ be a finite group which acts transitively on a set $\Omega$ with $|\Omega|:=n\geq 2$. Then there exists a $g\in G$ such that $\chi(g)=0$, where $\chi$ denotes the permutation character (put in simple terms, this means that $g$ fixes no element of $\Omega$). In fact, with some additional work one can show that the proportion of elements $g\in G$ such that $\chi(g)=0$ is at least $\frac{1}{n}$. So now let us see how Jordan's lemma implies that the answer to the OP's question is negative. Let $H$ be the group generated by $\{g_i\}$, a complete set of representatives of the conjugacy classes of $G$. Suppose that $H$ is a proper subgroup of $G$. Then we may look at the left action of $G$ on $G/H$.
Since $|G/H|\geq 2$ and the action is transitive, it follows from Jordan's lemma that there exists an $x\in G$ such that for all $i$, $x g_i H\neq g_i H$. In other words, for each $g_i$ one has $g_i^{-1}x g_i\notin H$, which in turn implies that for all $g\in G$ one has $g^{-1}xg\notin H$; therefore the conjugacy class of $x$ does not intersect $H$, which is absurd. Note also that one gets the following corollary from the previous argument: Let $H$ be a proper subgroup of $G$; then we may always find two distinct (linear) characters of $G$ that have the same restriction to $H$. Indeed, by the previous argument there exists a conjugacy class $C$ of $G$ that does not intersect $H$. Let $D=G-C$, define $f$ to be the class function which is equal to $0$ on $D$ and $1$ on $C$, and let $g$ be the class function which is equal to $1$ everywhere. Since $f$ and $g$ are (in a unique way) linear combinations of irreducible characters of $G$ and $f|_H=g|_H$, there must exist distinct irreducible characters $\psi_1$ and $\psi_2$ of $G$ which have the same restriction to $H$. A superficially different counting argument, which boils down to the same proof as before: If $H$ is a proper subgroup whose conjugates completely cover $G$, then let $G$ act on the right cosets of $H$ by right multiplication. This action is transitive. Since $H$ is a point stabilizer, the conjugates of $H$ are just all the point stabilizers. Then saying that the conjugates of $H$ cover $G$ is saying that every element of this permutation group has a fixed point. In a transitive permutation group, the average number of fixed points is $1$. The number of fixed points of the identity is the number of points, $[G:H]$. The only way every permutation can have at least the average number of fixed points is for every permutation to have exactly the average number of fixed points, so $[G:H]=1$, contradicting the assumption that $H$ is proper.
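The lemma is easy to test by brute force for a small group. Here is a sketch (mine, not from the thread) checking that in $S_4$ every choice of one representative from each of the five conjugacy classes generates the whole group:

```python
from itertools import permutations, product

def compose(p, q):
    """(p ∘ q)(i) = p[q[i]] for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def generated(gens, n):
    """Subgroup of S_n generated by gens (BFS over words in the generators)."""
    e = tuple(range(n))
    group, frontier = {e}, [e]
    while frontier:
        nxt = []
        for g in frontier:
            for s in gens:
                gs = compose(g, s)
                if gs not in group:
                    group.add(gs)
                    nxt.append(gs)
        frontier = nxt
    return group

n = 4
G = list(permutations(range(n)))

# conjugacy classes by brute-force conjugation
classes, seen = [], set()
for g in G:
    if g not in seen:
        cls = {compose(compose(x, g), inverse(x)) for x in G}
        classes.append(sorted(cls))
        seen |= cls

# every choice of one representative per class generates all of S_4
ok = all(len(generated(reps, n)) == len(G) for reps in product(*classes))
print(len(classes), ok)   # 5 True
```

All $1\cdot 6\cdot 3\cdot 8\cdot 6 = 864$ possible choices of representatives are checked, in line with the general proof above.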
A subgroup intersecting all conjugacy classes is usually called a conjugately dense subgroup. Here are a few related papers: Levchuk, V. M. Sylow intersections and conjugately dense subgroups of Chevalley groups. (Russian) Algebra and linear optimization (Russian) (Ekaterinburg, 2002), 161–165, Ross. Akad. Nauk Ural. Otdel., Inst. Mat. Mekh., Ekaterinburg, 2002. 20G40 Zyubin, S. A.; Levchuk, V. M. Conjugately dense subgroups of locally finite Chevalley groups of Lie rank 1. (Russian) Sibirsk. Mat. Zh. 44 (2003), no. 4, 742--748; translation in Siberian Math. J. 44 (2003), no. 4, 581–586. Zyubin, S. A. Conjugately dense subgroups of 3-dimensional linear groups over a locally finite field. Internat. J. Algebra Comput. 15 (2005), no. 5-6, 1273–1280. Zyubin, S. A. Conjugately dense subgroups of free products of groups with an amalgamated subgroup. (Russian) Algebra Logika 45 (2006), no. 5, 520--537, 631; translation in Algebra Logic 45 (2006), no. 5, 296–305. Zyubin, S. A. On irreducible conjugately dense subgroups of linear groups. (Russian) Dokl. Akad. Nauk 413 (2007), no. 4, 450--453; translation in Dokl. Math. 75 (2007), no. 2, 266–269 20E06 (20G15) Erfanian, Ahmad; Russo, Francesco Conjugately dense subgroups in generalized FC-groups. Acta Univ. Apulensis Math. Inform. No. 20 (2009), 79–91.
How to Implement a Point Source with the Weak Form Today we continue our discussion on the weak formulation by looking at how to implement a point source with the weak form. A point source is a useful tool for idealizing the situation where a source is concentrated in a very small region of the modeling domain. We will find that it is very convenient to set up such a point source using the weak form. The Mathematics of a Point Source Consider a one-dimensional domain on the x-axis with a source localized around x = 0. We can plot the strength of the source as a function of x and it may look like this: Here, we have assumed that the strength has a constant value of 1/w within the interval [-w/2, w/2] and is zero everywhere else. This gives a rectangular shape of width w and height 1/w, as shown in the figure above. The function is often called a rectangular, top-hat, or sometimes, a disc function. The total strength of the source is given by the area of the rectangle, which is unity. For linear systems, if we only care about what happens far away from the source where \left| x \right| \gg w, then the actual shape of the source strength does not matter much, as long as the area beneath that shape is the same. Furthermore, we are free to make w progressively smaller and smaller: the width of the rectangle decreases while its height increases in such a way that the total area remains the same, as shown in the graph below. The localized source represented by the blue curve is progressively made thinner and taller (the orange and green curves), while maintaining the integrated strength of unity. Eventually, we arrive at a rectangle that is infinitesimally thin and infinitely tall, but still has a well defined area of unity. This leads us to the so-called delta function \delta(x) and, correspondingly, the localized source now becomes an idealized point source of unit strength. The delta function has some convenient properties. 
Its value is zero everywhere except at the origin:
$$\delta(x) = \left\{ \begin{array}{ll} \infty & \mbox{ for } x=0\\ 0 & \mbox{ elsewhere} \end{array} \right.$$
Integrating the product of a delta function and another function just extracts the value of the latter function at the origin:
$$\int_{-\infty}^{\infty} \delta(x)\,f(x)\,dx = f(0)$$
A point source at a general position x=a can be obtained by a simple coordinate shift of the delta function \delta(x-a). We have
$$\delta(x-a) = \left\{ \begin{array}{ll} \infty & \mbox{ for } x=a\\ 0 & \mbox{ elsewhere} \end{array} \right.$$
and
$$\int_{-\infty}^{\infty} \delta(x-a)\,f(x)\,dx = f(a)$$
It is also easy to generalize the delta function and the corresponding point source to higher dimensions. For example, in 2D, we have
$$\delta(x-a)\,\delta(y-b) = \left\{ \begin{array}{ll} \infty & \mbox{ for } x=a,\ y=b\\ 0 & \mbox{ elsewhere} \end{array} \right.$$
and
$$\int\!\!\int \delta(x-a)\,\delta(y-b)\,f(x,y)\,dx\,dy = f(a,b) \tag{1}$$
Implementing a Point Source Using the Weak Form
This tutorial solves the Poisson equation on a unit disc with a point source at the origin. The equation reads
$$-\nabla^2 u = \delta(x)\,\delta(y) \tag{2}$$
where u is the dependent field variable to be solved. At first sight it may not be obvious how to discretize this equation to be solved numerically. What value do we put at the origin for the source term on the right-hand side? The value of the delta function is infinite there, but computers don’t like infinities! Here, we will see that the weak formulation comes in handy. Recall that in this introductory blog post on the weak form, we multiply the differential equation to be solved by a test function and integrate over the entire domain (See Eq.(4) in that post). We can follow the same procedure here to solve Eq. (2). After multiplying by a test function \tilde{u}(x,y) and integrating over the unit disc domain, the right-hand side of Eq. (2) simply becomes
$$\int\!\!\int \tilde{u}(x,y)\,\delta(x)\,\delta(y)\,dx\,dy = \tilde{u}(0,0) \tag{3}$$
by using the integration property of the delta function given in Eq. (1). This gives us something very easy to implement in COMSOL Multiphysics. Start with a new 2D model with the Weak Form PDE physics interface and a Stationary study. Draw a unit circle centered at (0,0) and draw a point there as well. Set the Weak Expressions field under the default Weak Form PDE 1 feature to -test(ux)*ux-test(uy)*uy.
This takes care of the left-hand side of Eq. (2) in exactly the same fashion as for the 1D case discussed in this previous post. Now, for the point source on the right-hand side, \tilde{u}(0,0), we simply add a point Weak Contribution node and select the point at the origin. For the Weak expression, we enter test(u). It’s that simple for the point source! It may be worth noting that by entering test(u), we set the strength of the point source to unity. For any other source magnitude, simply multiply by a factor. For example, the expression 2*test(u) gives a point source of strength 2. After finishing the set-up with a Dirichlet boundary condition at the perimeter of the circle, we can solve the model and observe the same solution as seen in the point source tutorial mentioned above: Also as seen in the tutorial, the numerical solution (blue curve) matches the analytical solution (green curve) very well, except near the origin where a singularity occurs: As mentioned earlier, the point source provides a convenient idealization of a localized source in situations where we only care about the solution far away from the source. We illustrate this point with the following graph, where we have added three more curves to the graph above. These three curves are numerical solutions to the same Poisson equation in the same unit disc domain, but with various sizes of top-hat, or disc, shaped sources replacing the point source. The integrated strength of each top-hat source is calibrated to unity by setting its height to one over its area, in the same fashion as in the 1D case shown in the image above. As we see clearly from the figure below, all solutions are indistinguishable from one another far away from the sources. (In this example for x \gg 10 \, mm.) Conclusion Here, we have demonstrated the ease of creating point sources using the weak form. The numerical difficulty in the representation of the delta function is circumvented with a simple integration.
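The same trick can be checked in a few lines outside COMSOL. In 1D, the weak form of $-u'' = \delta(x)$ on $[-1,1]$ with $u(\pm 1)=0$ turns the delta into a single load-vector entry $\tilde{u}(0)$, i.e. a point load at the node at the origin, and linear finite elements then reproduce the exact tent solution $u(x)=(1-|x|)/2$ at the nodes (a self-contained sketch of mine, not COMSOL code):

```python
# Linear FEM for -u'' = delta(x) on [-1, 1] with u(-1) = u(1) = 0.
# Stiffness K_ij = ∫ phi_i' phi_j' dx; the weak point source contributes
# F_i = phi_i(0), i.e. a single 1 in the load vector at the node x = 0
# (the 1D analogue of the point "test(u)" Weak Contribution).
N = 40                                  # elements; even, so x = 0 is a node
h = 2.0 / N
m = N - 1                               # interior nodes
diag = [2.0 / h] * m
off  = [-1.0 / h] * (m - 1)
F = [1.0 if i + 1 == N // 2 else 0.0 for i in range(m)]  # point load at x = 0

# Thomas algorithm for the symmetric tridiagonal system K u = F
d, f = diag[:], F[:]
for i in range(1, m):
    w = off[i - 1] / d[i - 1]
    d[i] -= w * off[i - 1]
    f[i] -= w * f[i - 1]
u = [0.0] * m
u[-1] = f[-1] / d[-1]
for i in range(m - 2, -1, -1):
    u[i] = (f[i] - off[i] * u[i + 1]) / d[i]

# The exact solution of -u'' = delta with these BCs is the tent (1 - |x|)/2
err = max(abs(u[i] - (1 - abs(-1.0 + (i + 1) * h)) / 2) for i in range(m))
print(err)   # nodal values match the exact tent function up to rounding
```

Because the exact solution is itself piecewise linear with its only kink at a mesh node, the discrete solution is exact at the nodes, which makes this a clean unit test of the point-load idea.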
In upcoming posts we will look at discontinuities and boundary conditions. Stay tuned!
DCDS We make two observations concerning the generalised Korteweg-de Vries equation $u_t + u_{xxx} = \mu ( |u|^{p-1} u )_x$. Firstly we give a scaling argument that shows, roughly speaking, that any quantitative scattering result for the $L^2$-critical equation ($p=5$) automatically implies an analogous scattering result for the $L^2$-critical nonlinear Schrödinger equation $iu_t + u_{xx} = \mu |u|^4 u$. Secondly, in the defocusing case $\mu > 0$ we present a new dispersion estimate which asserts, roughly speaking, that energy moves to the left faster than the mass, and hence strongly localised soliton-like behaviour at a fixed scale cannot persist for arbitrarily long times. DCDS The incompressible Euler equations on a compact Riemannian manifold take the form $\partial_t u + \nabla_u u = -\mathrm{grad}_g\, p$, $\mathrm{div}_g u = 0$. We show that any quadratic ODE $\partial_t y = B(y,y)$, where $B \colon \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^n$ is a symmetric bilinear map, can be linearly embedded into the incompressible Euler equations for some manifold if and only if $B$ obeys the cancellation condition $\langle B(y,y), y \rangle = 0$ for some positive definite inner product on $\mathbb{R}^n$. This allows one to construct explicit solutions to the Euler equations with various dynamical features, such as quasiperiodic solutions, or solutions that transition from one steady state to another, and provides evidence for the "Turing universality" of such Euler flows. DCDS We show that the Maxwell-Klein-Gordon equations in three dimensions are globally well-posed in $H^s_x$ in the Coulomb gauge for all $s > \sqrt{3}/2 \approx 0.866$. This extends previous work of Klainerman-Machedon [24] on finite energy data $s \geq 1$, and Eardley-Moncrief [11] for still smoother data. We use the method of almost conservation laws, sometimes called the "I-method", to construct an almost conserved quantity based on the Hamiltonian, but at the regularity of $H^s_x$ rather than $H^1_x$.
One then uses Strichartz, null form, and commutator estimates to control the development of this quantity. The main technical difficulty (compared with other applications of the method of almost conservation laws) is at low frequencies, because of the poor control on the $L^2_x$ norm. In an appendix, we demonstrate the equations' relative lack of smoothing - a property that presents serious difficulties for studying rough solutions using other known methods. ERA-MS This is an announcement of the proof of the inverse conjecture for the Gowers $U^{s+1}[N]$-norm for all $s \geq 3$; this is new for $s \geq 4$, the cases $s = 1,2,3$ having been previously established. More precisely we outline a proof that if $f : [N] \rightarrow [-1,1]$ is a function with $\|f\|_{U^{s+1}[N]} \geq \delta$ then there is a bounded-complexity $s$-step nilsequence $F(g(n)\Gamma)$ which correlates with $f$, where the bounds on the complexity and correlation depend only on $s$ and $\delta$. From previous results, this conjecture implies the Hardy-Littlewood prime tuples conjecture for any linear system of finite complexity. In particular, one obtains an asymptotic formula for the number of $k$-term arithmetic progressions $p_1 < p_2 < ... < p_k \leq N$ of primes, for every $k \geq 3$. ERA-MS This is an informal announcement of results to be described and proved in detail in [3]. We give various results on the structure of approximate subgroups in linear groups such as $\mathrm{SL}_n(k)$. For example, generalizing a result of Helfgott (who handled the cases $n = 2$ and $3$), we show that any approximate subgroup of $\mathrm{SL}_n(\mathbb{F}_q)$ which generates the group must be either very small or else nearly all of $\mathrm{SL}_n(\mathbb{F}_q)$. The argument is valid for all Chevalley groups $G(\mathbb{F}_q)$. Extending work of Bourgain-Gamburd we also announce some applications to expanders, which will be proven in detail in [4].
Continuing the answer given here: How to determine if this series converges absolutely/conditionally or diverges? with respect to this series: $$ \sum_{n=2}^{\infty} \ln \left(1+\frac{(-1)^n}{n}\right) $$ will you please help me understand why it is legitimate to split this series into odd and even terms? I.e., if the series indeed converges only conditionally, we know that different rearrangements can give different values, right? Where is my misunderstanding? thanks
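For intuition (my numeric sketch, not part of the original question): grouping *consecutive* terms of a convergent series never changes its value, unlike rearranging a conditionally convergent one, and here consecutive pairs telescope exactly, since $\ln\big(1+\tfrac{1}{2k}\big)+\ln\big(1-\tfrac{1}{2k+1}\big)=\ln\big(\tfrac{2k+1}{2k}\cdot\tfrac{2k}{2k+1}\big)=0$. A quick check of the partial sums:

```python
import math

def partial_sum(N):
    """S_N = sum_{n=2}^{N} ln(1 + (-1)^n / n)."""
    return sum(math.log(1 + (-1) ** n / n) for n in range(2, N + 1))

# Consecutive pairs (n = 2k, 2k+1) cancel exactly, so odd cutoffs give 0
# and even cutoffs leave just the unpaired last term ln(1 + 1/N):
print(partial_sum(1001))                         # essentially 0
print(partial_sum(1000), math.log(1 + 1/1000))   # these two agree
```

So the series converges (to $0$), and splitting into the two interleaved subsequences is just bookkeeping on the partial sums, not a rearrangement.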
This answer focuses on identifying families of solutions to the problem described in the question. I've made two provisional conjectures in order to make progress with the problem: The result can be stated for three $2n$-gons rather than two $n$-gons and one $2n$-gon. Solutions have mirror symmetry. Or equivalently, in any solution there are two pairs of $2n$-gons which have the same degree of overlap. [This turns out to be false - see 'Solution family 5' below. However, this condition is assumed in Solution families 1-4.] [ Continuation 6: in an overhaul of the notation I've halved $\phi$ and doubled $m$ so that $m$ is always an integer.] If we define the degree of overlap, $j$, between two $2n$-gons $(n>3)$ as the number of edges of one that lie wholly inside the other, then $1 < j < n$. If $$\phi = \frac{\pi}{2n}$$is half the angle subtended at the centre of the $2n$-gon by one of its edges, then the distance between the centres of two overlapping $2n$-gons is $$D_{jn} = 2\cos{j\phi}$$Consider a $2n$-gon P which overlaps a $2n$-gon O with degree $j$. Now bring in a third $2n$-gon, Q, which also overlaps O with degree $j$ but is rotated about the centre of O by an angle $m\phi$ with respect to P, where $m$ is an integer. The distance between the centres of P and Q, which I'll denote by $D_{kn}$ for a reason that will become apparent, is$$D_{kn} = 2D_{jn}\sin{\tfrac{m}{2}\phi} = 4\cos{j\phi} \, \sin{\tfrac{m}{2}\phi}$$ We now demand that P and Q should overlap by an integer degree, $k$, so that$$D_{kn} = 2\cos{k\phi}$$This will ensure that all points of intersection coincide with vertices of the intersecting polygons, and thus provide a configuration satisfying the requirements of the question (with the proviso that the condition does not guarantee that there is a common area of overlap shared by all three polygons). We have omitted mention of the orientation of the polygons, but it is easily shown that this is always such as to achieve the desired overlap. 
Combining the two expressions for $D_{kn}$ gives the condition $$2\cos{j\phi}\, \sin{\tfrac{m}{2}\phi} = \cos{k\phi}$$or (since $n\phi=\pi/2$)$$2\cos{j\phi}\, \cos{(n-\tfrac{m}{2})\phi} = \cos{k\phi} \tag{1}$$ The configurations we seek are solutions of this equation for integer $n$, $j$, $k$ and $m$. In the first example in the question $n = 12, j = 8, k = 6, m = 12$. In the second example $n = 15, j = 6, k = 10, m = 6$. [ Continuation 6: for solutions under the constraint of conjecture 2, $m$ is always even, but in the more general case $m$ may be odd.] I'll now throw this open to see if anyone can provide a general solution. It seems likely that $j$, $k$ and $m/2$ must be divisors of $2n$ [this turns out to be incorrect], and I have a hunch that the solution will involve cyclotomic polynomials [this turns out to be correct]. Continuation (1) I've now identified 3 families of solutions consistent with conjecture 2 (mirror symmetry), all involving angles of 60 degrees. There may be others. Solution family 1 This family is defined by setting $j=2n/3$. This means that half the angle subtended at the centre of O by its overlapping edges is $\tfrac{\pi}{3}$ radians or 60 degrees. Since $\cos{\tfrac{\pi}{3}} = \tfrac{1}{2}$ it reduces equation 1 to$$\cos{(n-\tfrac{m}{2})\phi} = \cos{k\phi}$$so there are solutions with$$n-\tfrac{m}{2} = k$$(where $\tfrac{m}{2}$ is an integer) subject to $2 \le k \le n-1\,\,$, $1 \le \tfrac{m}{2} \le n-2\,\,$ and $3|n$. The first example in the question belongs to this family. The complete set of solutions for $n=12$ combine to make this pleasing diagram: Solution family 2 This family has $m=2n/3$. This makes $\cos{(n-\tfrac{m}{2})\phi}=\cos{(\pi/3)} = \tfrac{1}{2}$, which reduces equation 1 to$$\cos{j\phi} = \cos{k\phi}$$so (given that $j<n$ and $k<n$)$$j = k$$These solutions have threefold rotational symmetry. The only restriction is that $n$ must be divisible by 3. 
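Condition (1) above is easy to check numerically. The following sketch (the helper name and tolerance are my own) verifies $2\cos{j\phi}\,\sin{\tfrac{m}{2}\phi} = \cos{k\phi}$ for the two examples quoted from the question:

```python
import math

# Numeric check of the overlap condition
#   2*cos(j*phi)*sin((m/2)*phi) = cos(k*phi),  with phi = pi/(2n).
def overlap_condition(n, j, k, m):
    phi = math.pi / (2 * n)
    return math.isclose(2 * math.cos(j * phi) * math.sin(m / 2 * phi),
                        math.cos(k * phi), abs_tol=1e-12)

print(overlap_condition(12, 8, 6, 12))   # first example in the question
print(overlap_condition(15, 6, 10, 6))   # second example in the question
```

Both calls return True; perturbing any of the four integers breaks the identity, which is consistent with solutions being isolated.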
Example ($n=9, j=k=4, m=6$): Solution family 3 This family is the most interesting of the three, but yields only one solution. It is defined by setting $k=2n/3$ so that $\cos{k\phi}=\cos{\tfrac{\pi}{3}} = \tfrac{1}{2}$. Equation 1 then becomes $$2\cos{j\phi}\,\cos{(n-\tfrac{m}{2})\phi} = \tfrac{1}{2}$$which may be written in the following equivalent forms:$$\cos{(n+\tfrac{m}{2}-j)\phi} + \cos{(n+\tfrac{m}{2}+j)\phi} = -\tfrac{1}{2} \tag{2}$$$$\cos{(n-\tfrac{m}{2}-j)\phi} + \cos{(n-\tfrac{m}{2}+j)\phi} = \tfrac{1}{2} \tag{3}$$Solutions to these equations can be found using the following theorem relating the roots $z_i(N)$ of the $N$th cyclotomic polynomial to the Möbius function $\mu(N)$: $$\sum_{i=1}^{\varphi(N)} {z_i(N)} = \mu(N)$$where $\varphi(N)$ is the Euler totient function (the number of positive integers less than $N$ that are relatively prime to $N$) and $z_i(N)$ are a subset of the $N$th roots of unity.Taking the real part of both sides and using symmetry this becomes:$$\sum_{i=1}^{\varphi(N)/2} { \cos{(p_i(N) \frac{2\pi}{N})} } = \tfrac{1}{2} \mu(N) \tag{4}$$where $p_i(N)$ is the $i$th integer which is coprime with $N$. The Möbius function $\mu(N)$ takes values as follows: $\mu(N) = 1$ if $N$ is a square-free positive integer with an even number of prime factors. $\mu(N) = −1$ if $N$ is a square-free positive integer with an odd number of prime factors. $\mu(N) = 0$ if $N$ has a squared prime factor. Equation 4 thus provides solutions to equations 2 and 3 if $\varphi(N) = 4$, $\mu(N)$ has the appropriate sign and the cosine arguments are matched. The first two conditions are true for only two integers: $N=5$, with $\mu(5)=-1$, $p_1(5) = 1, p_2(5) = 2$ $N=10$, with $\mu(10)=1$, $p_1(10) = 1, p_2(10) = 3$. We first set $N=5$ and look for solutions to equation 2. 
Matching the cosine arguments requires firstly that$$2j \frac{\pi}{2n} = (p_2(5)-p_1(5))\frac{2\pi}{5}$$from which it follows that$$5j = 2n$$ $n$ must be divisible by 3 to satisfy $k=2n/3$, so the smallest value of $n$ for which solutions are possible is $n=15$, with $k=10$ and $j=6$. All other solutions will be multiples of this one. Matching the cosine arguments also requires that$$(n+\tfrac{m}{2}-j) \frac{\pi}{2n} = p_1(5) \frac{2\pi}{5}$$which implies $m=6$. This is the solution illustrated by the second example in the question. Setting $N=10$ and looking for solutions to equation 3 yields the same solution. Continuation (2) Solution family 4 A fourth family of solutions can be obtained by writing equation 1 as $$\cos{(n+\tfrac{m}{2}-j)\phi} + \cos{(n+\tfrac{m}{2}+j)\phi} + \cos{k\phi} = 0 \tag{5}$$ and viewing this as an instance of equation 4 with $\varphi(N)/2 = 3$ and $\mu(N) = 0$. There are two values of $N$ which satisfy these conditions, $N = 9$ and $N = 18$, which lead to three solutions: For $N = 9$:$$n=9, j=6, k=8, m=2\\n=9, j=4, k=4, m=6$$ For $N=18$:$$n=9, j=2, k=2, m=6$$ However, these are not new solutions. The first is a member of family 1 and the last two are members of family 2. Continuation (3) Solution family 5 Rotating a $2n$-gon about a vertex by an angle $m\phi$ moves its centre by a distance $$2\sin{ \tfrac{m}{2}\phi} = 2\cos{(n-\tfrac{m}{2})\phi} = D_{n-m/2,n}.$$If $m$ is even the rotated $2n$-gon thus overlaps the original $2n$-gon with integer degree $n-\tfrac{m}{2}$, and a third $2n$-gon with a different $m$ may overlap both of these, providing another type of solution to the problem. Solutions of this kind may be constructed for all $n \ge 3$. The diagram below includes the complete set of such solutions for $n=5$. A similar diagram with $n=12$ (but with a centrally placed $2n$-gon of the same size which can only be added when $3|n$) is shown above under Solution family 1.
This family of solutions provides exceptions to conjecture 2: not all groups of three $2n$-gons overlapping in this way show mirror symmetry. Continuation (4) If we relax the condition set by conjecture 2, allowing solutions without mirror symmetry, we need an additional parameter, $l$, to specify the degree of overlap between O and P (which is now no longer $j$). The distances between the centres of the three $2n$-gons are now related by the cosine rule: $$D_{nk}^2 = D_{nj}^2 + D_{nl}^2 - 2 D_{nj}D_{nl}\cos{m_k\phi},$$where a subscript $k$ has been added to $m$ to acknowledge the fact that $j$, $l$ and $k$ can be cycled to generate three equations of this form. These can be written$$\\ \cos^2{J} + \cos^2{L} - 2 \cos{J} \cos{L} \cos{M_k} = \cos^2{K} \\ \cos^2{K} + \cos^2{J} - 2 \cos{K} \cos{J} \cos{M_l} = \cos^2{L} \\ \cos^2{L} + \cos^2{K} - 2 \cos{L} \cos{K} \cos{M_j} = \cos^2{J} $$where$$J = j\phi,\, L = l\phi,\, K = k\phi,\\M_j = m_j\phi,\, M_l = m_l\phi,\, M_k = m_k\phi$$ The same result in a slightly different form is derived in the answer provided by @marco trevi. $M_j$, $M_l$ and $M_k$ are the angles of the triangle formed by the centres of the three polygons. Since these sum to $\pi$ we have$$m_j + m_l + m_k = 2n$$ The sine rule gives another set of relations:$$\frac{\cos{J}}{\sin{M_j}} = \frac{\cos{L}} {\sin{M_l}} = \frac{\cos{K}}{\sin{M_k}} $$ In general the $m$ parameters are limited to integer values (as can be seen by considering the symmetry of the overlap between a $2n$-gon and each of its two neighbours). But they are now not necessarily even.
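The cosine-rule relation above can also be checked numerically. The sketch below (the function name and tolerance are my own) tests whether a candidate $(j, l, k, m_k)$ satisfies $D_{nk}^2 = D_{nj}^2 + D_{nl}^2 - 2 D_{nj} D_{nl}\cos{m_k\phi}$:

```python
import math

# Check the cosine-rule relation for three overlapping 2n-gons:
#   D_nk^2 = D_nj^2 + D_nl^2 - 2*D_nj*D_nl*cos(m_k*phi),
# where D_nx = 2*cos(x*phi) and phi = pi/(2n).
def satisfies_cosine_rule(n, j, l, k, mk, tol=1e-9):
    phi = math.pi / (2 * n)
    D = lambda x: 2 * math.cos(x * phi)
    lhs = D(k) ** 2
    rhs = D(j) ** 2 + D(l) ** 2 - 2 * D(j) * D(l) * math.cos(mk * phi)
    return abs(lhs - rhs) < tol

# The mirror-symmetric family-2 example above (n=9, j=l=k=4, m=6):
print(satisfies_cosine_rule(9, 4, 4, 4, 6))
```

Iterating this check over small integer ranges of $j$, $l$, $k$ and $m_k$ gives a brute-force way to hunt for further asymmetric solutions.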
In 1971 Baumgartner showed it is consistent that any two $\aleph_1$-dense subsets of the real line are order isomorphic. This was important both for the methods of the proof and for consequences of the result. We introduce methods that lead to an analogous result for $\aleph_2$-dense sets. Keywords : forcing - large cardinals - Baumgartner isomorphism - infinitary Ramsey principles - reflection principles The chromatic number $\chi(G)$ of a graph $G$ is always at least the size of its largest clique (denoted by $\omega(G)$), and there are graphs $G$ with $\omega(G)=2$ and $\chi(G)$ arbitrarily large. On the other hand, the perfect graph theorem asserts that if neither $G$ nor its complement has an odd hole, then $\chi(G)=\omega(G)$ . (A "hole" is an induced cycle of length at least four, and "odd holes" are holes of odd length.) What... The talk is about a class of systems of 2d statistical mechanics, such as random tilings, noncolliding walks, log-gases and random matrix-type distributions. Specific members in this class are integrable, which means that available exact formulas allow delicate asymptotic analysis leading to the Gaussian Free Field, sine-process, Tracy-Widom distributions. Extending the results beyond the integrable cases is challenging. I will speak about a recent progress in this direction: about universal local limit theorems for a... Using a Lagrangian framework, we study overdamping phenomena in gyroscopic systems composed of two components, one of which is highly lossy and the other is lossless. The losses are accounted for by a Rayleigh dissipative function. We prove that selective overdamping is a generic phenomenon in Lagrangian systems with gyroscopic forces and give an analysis of the overdamping phenomena in such systems. Central to the analysis is the introduction of the notion of a dual...
In this talk I will present a couple of results for the existence of solutions to the one-dimensional Euler, Navier-Stokes and multi-dimensional Navier-Stokes systems. The purpose of the talk is to focus on the role of the pressure in the compressible fluid equations, and to understand whether or not it can be replaced by the nonlocal attraction-repulsion terms arising in the models of collective behaviour. A common way to prove global well-posedness of free boundary problems for incompressible viscous fluids is to transform the equations governing the fluid motion to a fixed domain with respect to the time variable. An elegant and physically reasonable way to do this is to introduce Lagrangian coordinates. These coordinates are given by the transformation rule $x(t)=\xi +\int_{0}^{t}u(\tau ,\xi ) d\tau $ where $u(\tau ,\xi )$ is the velocity vector of the fluid particle at... In this joint work with Athanasios Tzavaras (KAUST) and Corrado Lattanzio (L’Aquila) we develop a relative entropy framework for Hamiltonian flows that in particular covers the Euler-Korteweg system, a well-known diffuse interface model for compressible multiphase flows. We put a particular emphasis on extending the relative entropy framework to the case of non-monotone pressure laws which make the energy functional non-convex. The relative entropy computation directly implies weak (entropic)-strong uniqueness, but we will also outline how... In this talk, I will present a recent study on traveling wave solutions to a 1D biphasic Navier-Stokes system coupling compressible and incompressible phases. With these original fluid equations, we intend to model congestion (or saturation) phenomena in heterogeneous flows (mixtures, collective motion, etc.). I will first exhibit explicit partially congested propagation fronts and show that these solutions can be approached by profiles which are solutions to a singular compressible Navier-Stokes system. The last part...
The Euler-Korteweg system corresponds to compressible, inviscid fluids with capillary forces. It can be used to model diffuse interfaces. Mathematically it reads as the Euler equations with a third order dispersive perturbation corresponding to the capillary tensor. In dimension one there exist traveling waves with equal or different limits at infinity, respectively solitons and kinks. Their stability is ruled by a simple criterion a la Grillakis-Shatah-Strauss. This talk is devoted to the construction of multiple... The productivity of the $\kappa$-chain condition, where $\kappa$ is a regular, uncountable cardinal, has been the focus of a great deal of set-theoretic research. In the 1970’s, consistent examples of $\kappa$-cc posets whose squares are not $\kappa$-cc were constructed by Laver, Galvin, Roitman and Fleissner. Later, ZFC examples were constructed by Todorcevic, Shelah, and others. The most difficult case, that in which $\kappa = \aleph_2$, was resolved by Shelah in 1997. In the... Generalized descriptive set theory has mostly been developed for uncountable cardinals satisfying the condition $\kappa ^{< \kappa }=\kappa$ (thus in particular for $\kappa$ regular). More recently the case of uncountable cardinals of countable cofinality has attracted some attention, partially because of its connections with very large cardinal axioms like I0. In this talk I will survey these recent developments and propose a unified approach which potentially could encompass all possible scenarios (including singular cardinals of... I give a survey of some recent results on set mappings. By the Cantor-Bendixson theorem, subtrees of the binary tree on $\omega$ satisfy a dichotomy - either the tree has countably many branches or there is a perfect subtree (and in particular, the tree has continuum many branches, regardless of the size of the continuum).
We generalize this to arbitrary regular cardinals $\kappa$ and ask whether every $\kappa$-tree with more than $\kappa$ branches has a perfect subset. From large cardinals, this statement is consistent at a weakly compact... The Galvin-Prikry theorem states that Borel partitions of the Baire space are Ramsey. Thus, given any Borel subset $\chi$ of the Baire space and an infinite set $N$, there is an infinite subset $M$ of $N$ such that $[M]^{\omega}$ is either contained in $\chi$ or disjoint from $\chi$. In their 2005 paper, Kechris, Pestov and Todorcevic point out the dearth of similar results for homogeneous relational structures. We have attained... A new type of a simple iterated game with natural biological motivation is introduced. Two individuals are chosen at random from a population. They must survive a certain number of steps. They start together, but if one of them dies the other one tries to survive on its own. The only payoff is to survive the game. We only allow two strategies: cooperators help the other individual, while defectors do not. There is no strategic... Following the seminal work by Benamou and Brenier on the time continuous formulation of the optimal transport problem, we show how optimal transport techniques can be used in various areas, ranging from "the reconstruction problem" in cosmology to a problem of volatility calibration in finance. Swiss-born mathematician Nicola Kistler was the first holder of the Jean-Morlet Chair for mathematical sciences at CIRM and, in that capacity, became the first visiting researcher in residence for six months at the Centre. His stay at CIRM lasted from early February till July 2013. He set up a program of mathematical events focusing on 'Probability', with the collaboration of Véronique Gayrard, local project leader working at Marseille's Laboratoire d'Analyse, Topologie, Probabilités (ex LATP -...
In these lectures, we will review what it means for a 3-manifold to have a hyperbolic structure, and give tools to show that a manifold is hyperbolic. We will also discuss how to decompose examples of 3-manifolds, such as knot complements, into simpler pieces. We give conditions that allow us to use these simpler pieces to determine information about the hyperbolic geometry of the original manifold. Most of the tools we present were developed in... The theory of mean field type control (or control of MacKean-Vlasov) aims at describing the behaviour of a large number of agents using a common feedback control and interacting through some mean field term. The solution to this type of control problem can be seen as a collaborative optimum. We will present the system of partial differential equations (PDE) arising in this setting: a forward Fokker-Planck equation and a backward Hamilton-Jacobi-Bellman equation. They describe respectively... Non-convex random sets of admissible positions naturally arise in the setting of fixed transaction costs or when only a finite range of possible transactions is considered. The talk defines set-valued risk measures in such cases and explores the situations when they return convex result, namely, when Lyapunov's theorem applies. The case of fixed transaction costs is analysed in greater details. Joint work with Andreas Haier (FINMA, Switzerland). Let $G$ be a connected semisimple Lie group with Lie algebra $\mathfrak{g}$. There are two natural duality constructions that assign to it the Langlands dual group $G^\lor$ (associated to the dual root system) and the Poisson-Lie dual group $G^∗$. Cartan subalgebras of $\mathfrak{g}^\lor$ and $\mathfrak{g}^∗$ are isomorphic to each other, but $G^\lor$ is semisimple while $G^∗$ is solvable. In this talk, we explain the following non-trivial relation between these two dualities: the integral cone defined... 
Hyperbolically embedded subgroups have been defined by Dahmani-Guirardel-Osin and they provide a common perspective on (relatively) hyperbolic groups, mapping class groups, Out(F_n), CAT(0) groups and many others. I will sketch how to extend a quasi-cocycle on a hyperbolically embedded subgroup H to a quasi-cocycle on the ambient group G. Also, I will discuss how some of those extended quasi-cocycles (of dimension 2 and higher) "contain" the information that H is hyperbolically embedded in G. This... We revise recent contributions to 2D Euler and Navier-Stokes equations with and without noise, but always in the case of stochastic solutions. The role of white noise initial conditions will be stressed and related to some questions about turbulence. The Jacobian algebra, obtained from the ring of germs of functions modulo the partial derivatives of a function $f$ with an isolated singularity, has a non-degenerate bilinear form, Grothendieck Residue, for which multiplication by $f$ is a symmetric nilpotent operator. The vanishing cohomology of the Milnor Fibre has a bilinear form induced by cup product for which the nilpotent operator $N$, the logarithm of the unipotent part of the monodromy, is antisymmetric. Using the nilpotent...
Question: Give an example of a 1-dimensional subspace U of R3. Answer: Any line through the origin is a one-dimensional subspace of R3. How do I write this as a set? I mean, U = ? Thanks You should know this from analytic geometry: $\displaystyle U=\{t\cdot (a,b,c)\,\mid\,t\in\mathbb{R}\}$, for a fixed $(a,b,c)\neq (0,0,0)$, is a straight line through the origin in 3-dimensional space, which is EXACTLY the same as the span of one non-zero vector...
Should we assume that the 58% chance of winning is fixed and that points are independent? I believe that Whuber's answer is a good one, and beautifully written and explained, when the consideration is that every point is independent from the next one. However I believe that, in practice, it is only an interesting starting point (theoretic/idealized). I imagine that in reality the points are not independent from each other, and this might make it more or less likely that your co-worker opponent gets to a win at least once out of 50. At first I imagined that the dependence of the points would be a random process, i.e. not controlled by the players (e.g. playing differently when one is winning or losing), and this should create a greater dispersion of the results, benefiting the lesser player in getting this one point out of fifty. A second thought however might suggest the opposite: The fact that you already "achieved" something with a 9.7% chance may give some (but only slight) benefit, from a Bayesian point of view, to ideas about favouring mechanisms that give you more than an 85% probability to win a game (or at least make it less likely that your opponent has a much higher probability than 15%, as argued in the previous two paragraphs). For instance, it could be that you score better when your position is less good (it is not unusual for people to score very differently on match points, in favor or against, than on regular points). You can improve estimates of the 85% by taking these dynamics into account, and possibly you have more than an 85% probability to win a game. Anyway, it might be very wrong to use this simple points statistic to provide an answer. Yes you can do it, but it won't be right since the premises (independence of points) are not necessarily correct and highly influence the answer.
The 42/58 statistic is more information, but we do not know very well how to use it (the correctness of the model), and using the information might provide answers with a high precision that it actually does not have. Example: an equally reasonable model with a completely different result. So the hypothetical question (assuming independent points and known, theoretical, probabilities for these points) is in itself interesting and can be answered. But just to be annoying and skeptical/cynical: an answer to the hypothetical case does not relate that much to your underlying/original problem, and might be why the statisticians/data-scientists at your company are reluctant to provide a straight answer. Just to give an alternative example (not necessarily better) that provides a confusing (counter-) statement: 'Q: what is the probability to win all of the total of 50 games if I already won 15?' If we do not start to think that 'the point scores 42/58 are relevant or give us better predictions' then we would start to make predictions of your probability to win the game and predictions to win another 35 games solely based on your previously won 15 games: with a Bayesian technique for your probability to win a game this would mean: $p(\text{win another 35} \mid \text{after already 15}) = \frac{\int_0^1 f(p)\, p^{50}\, dp}{\int_0^1 f(p)\, p^{15}\, dp}$ which is roughly 31% for a uniform prior $f(p) = 1$, although that might be a bit too optimistic. But still, if you consider a beta distribution with $\beta=\alpha$ between 1 and 5, then you get to the curves shown below, which means that I would not be so pessimistic as the straightforward 0.432% prediction. The fact that you already won 15 games should elevate the probability that you win the next 35 games. Note based on the new data Based on your data for the 18 games I tried fitting a beta-binomial model.
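For the uniform prior, the ratio of integrals above has a closed form: $\int_0^1 p^{50}\,dp \big/ \int_0^1 p^{15}\,dp = \frac{1/51}{1/16} = \frac{16}{51} \approx 0.314$, matching the "roughly 31%" figure. A quick sketch confirming this numerically (the midpoint-rule integrator is my own, not from the answer):

```python
# Numeric sketch: P(win another 35 | already won 15) under a uniform prior,
# computed as a ratio of integrals of p^k over [0, 1].
def integrate(f, steps=100_000):
    # simple midpoint rule on [0, 1]
    h = 1.0 / steps
    return sum(f((i + 0.5) * h) for i in range(steps)) * h

posterior = integrate(lambda p: p ** 50) / integrate(lambda p: p ** 15)
print(round(posterior, 3))  # close to 16/51, i.e. roughly 31%
```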
Varying $\alpha=\mu\nu$ and $\beta=(1-\mu)\nu$, calculating the probabilities to get to a score i,21 (via i,20) or a score 20,20, and then summing their logs gives a log-likelihood score. It shows that a very high $\nu$ parameter (little dispersion in the underlying beta distribution) has a higher likelihood, and thus there is probably little over-dispersion. That means that the data does not suggest that it is better to use a variable parameter for your probability of winning a point, instead of your fixed 58% chance of winning. This new data provides extra support for Whuber's analysis, which assumes scores based on a binomial distribution. But of course, this still assumes that the model is static and also that you and your co-worker behave according to a random model (in which every game and point are independent). Maximum likelihood estimation for parameters of the beta distribution in place of the fixed 58% winning chance: Q: how do I read the "LogLikelihood for parameters mu and nu" graph? A: 1) Maximum likelihood estimation (MLE) is a way to fit a model. Likelihood means the probability of the data given the parameters of the model, and then we look for the model that maximizes this. There is a lot of philosophy and mathematics behind it. 2) The plot is a lazy computational method to get to the optimum MLE. I just compute all possible values on a grid and see what the value is. If you need to be faster you can either use a computational iterative method/algorithm that seeks the optimum, or possibly there might be a direct analytical solution. 3) The parameters $\mu$ and $\nu$ relate to the beta distribution https://en.wikipedia.org/wiki/Beta_distribution which is used as a model for the p=0.58 (to make it not fixed but instead vary from time to time). This modeled 'beta-p' is then combined with a binomial model to get to predictions of probabilities to reach certain scores. It is almost the same as the beta-binomial distribution.
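The $(\mu,\nu)$ parameterisation described above can be sketched directly as a beta-binomial pmf. The helper below is my own illustration (using $a=\mu\nu$, $b=(1-\mu)\nu$ as in the answer); for large $\nu$ it collapses toward a plain binomial with success probability $\mu$, which is exactly the "little over-dispersion" regime the likelihood surface favours.

```python
from math import lgamma, exp, comb

def log_beta(x, y):
    # log of the Beta function via log-gamma, for numerical stability
    return lgamma(x) + lgamma(y) - lgamma(x + y)

# Beta-binomial pmf in the (mu, nu) parameterisation: a = mu*nu, b = (1-mu)*nu.
# Large nu means the underlying beta is tightly concentrated around mu.
def beta_binom_pmf(k, n, mu, nu):
    a, b = mu * nu, (1 - mu) * nu
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

# sanity check: with a huge nu this is close to Binomial(n=21, p=0.58)
print(beta_binom_pmf(12, 21, 0.58, 1e6))
```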
You can see that the optimum is around $\mu \simeq 0.6$, which is not surprising. The $\nu$ value is high (meaning low dispersion). I had imagined/expected at least some over-dispersion.

code/computation for graph 1

posterior <- sapply(seq(1, 5, 0.1), function(x) {
  integrate(function(p) dbeta(p, x, x) * p^50, 0, 1)[1]$value /
    integrate(function(p) dbeta(p, x, x) * p^15, 0, 1)[1]$value
})
prior <- sapply(seq(1, 5, 0.1), function(x) {
  integrate(function(p) dbeta(p, x, x) * p^35, 0, 1)[1]$value
})
layout(t(c(1, 2)))
plot(seq(1, 5, 0.1), posterior, ylim = c(0, 0.32),
     xlab = expression(paste(alpha, " and ", beta, " values for prior beta-distribution")),
     ylab = "P(win another 35 | after already 15)")
title("posterior probability assuming beta-distribution")
plot(seq(1, 5, 0.1), prior, ylim = c(0, 0.32),
     xlab = expression(paste(alpha, " and ", beta, " values for prior beta-distribution")),
     ylab = "P(win 35)")
title("prior probability assuming beta-distribution")

code/computation for graph 2

library("shape")

# probability that you win and the opponent ends with kl points
Pwl <- function(a, b, kl, kw = 21) {
  kt <- kl + kw - 1
  choose(kt, kw - 1) * beta(kw + a, kl + b) / beta(a, b)
}

# probability to end in the 20-20 score
Pww <- function(a, b, kl = 20, kw = 20) {
  kt <- kl + kw
  choose(kt, kw) * beta(kw + a, kl + b) / beta(a, b)
}

# probability that you lose with kw points
Plw <- function(a, b, kl = 21, kw) {
  kt <- kl + kw - 1
  choose(kt, kw) * beta(kw + a, kl + b) / beta(a, b)
}

# log-likelihood for data consisting of 17 opponent scores and 1 tie-position
# (parameterization change from mu (mean) and nu to a and b)
loglike <- function(mu, nu) {
  a <- mu * nu
  b <- (1 - mu) * nu
  scores <- c(18, 17, 11, 13, 15, 15, 16, 9, 17, 17, 13, 8, 17, 11, 17, 13, 19)
  ps <- sapply(scores, function(x) log(Pwl(a, b, x)))
  sum(ps, log(Pww(a, b)))
}

# vectors and matrices for plotting the contour
mu <- c(1:199) / 200
nu <- 2^(c(0:400) / 40)
z <- matrix(rep(0, length(nu) * length(mu)), length(mu))
for (i in 1:length(mu)) {
  for (j in 1:length(nu)) {
    z[i, j] <- loglike(mu[i], nu[j])
  }
}

# plotting
levs <- c(-900, -800, -700, -600, -500, -400, -300, -200, -100,
          -90, -80, -70, -60, -55, -52.5, -50, -47.5)

# contour plot
filled.contour(mu, log(nu), z, xlab = "mu", ylab = "log(nu)",
  color.palette = function(n) {
    hsv(c(seq(0.15, 0.7, length.out = n), 0),
        c(seq(0.7, 0.2, length.out = n), 0),
        c(seq(1, 0.7, length.out = n), 0.9))
  },
  levels = levs,
  plot.axes = c({
    contour(mu, log(nu), z, add = 1, levels = levs)
    title("loglikelihood for parameters mu and nu")
    axis(1)
    axis(2)
  }, ""),
  xlim = range(mu) + c(-0.05, 0.05),
  ylim = range(log(nu)) + c(-0.05, 0.05)
)
Possible Duplicate: Long underscore in LaTeX I'm writing a document in which I encourage the reader to "fill in the blanks" in order to be actively engaged with the proof. Is there any way to write a clean, long underscore? Currently I'm using \underline{\hspace*{2cm}}, but it doesn't look so great... Here's a sample of my text:

\begin{thm}
$A \subset B$ if and only if $A \cap B = A$.
\end{thm}
\begin{proof}
Assume $A \subset B$.
We want to show $A \subset (A \cap B)$ and \underline{\hspace*{2cm}}.
The first fact is true since: $A \subset B \Rightarrow$
if $x \in A$ then \underline{\hspace*{2cm}} $\Rightarrow$
if $x \in A$ then $x \in A \textrm{ and } B$.
The second fact is true by \underline{\hspace*{2cm}}.
Conversely, assume \underline{\hspace*{2cm}}.
By the first property again, $B \supset$ \underline{\hspace*{2cm}},
so we have \underline{\hspace*{4cm}}.
\end{proof}
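One common approach (a sketch, not the only solution; the macro name \blank is my own) is to draw the blank with \rule, which gives a crisp line of explicit width and thickness, and to wrap it in a macro so the width can be adjusted in one place:

```latex
% A reusable fill-in-the-blank line with an optional width (default 2cm).
% \rule[-0.2ex]{<width>}{0.4pt} lowers the rule slightly so it sits like
% an underscore rather than on the baseline.
\newcommand{\blank}[1][2cm]{\rule[-0.2ex]{#1}{0.4pt}}

% usage inside the proof:
%   We want to show $A \subset (A \cap B)$ and \blank.
%   The second fact is true by \blank[4cm].
```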
Learn how to add your own Javascript libraries to Kotobee Author. Adding your own Javascript libraries can help perform certain functions, such as rendering Mathematical formulae (e.g. KaTeX) or displaying a fancy tooltip (e.g. Tippy). Since Kotobee Author provides you with access to the internal files of the ebook project (via the File Manager), you may import any JS library you need. It's important to understand how the Kotobee Reader works, and how it deals with such external Javascript libraries. Sometimes, even if you follow the library's own instructions exactly, they will fail the first time around; there are some gotchas you should know about. Kotobee's parsing When an ebook is opened, Kotobee's Reader dissects the content and splits each word into its own HTML span tag. That is done in order to provide text-selection at the word level. If you are entering some special code inside a tag that is to be rendered by a library (as is the case with KaTeX for rendering a formula), by the time the library finds the text, it will not understand it. That is because it will have already been parsed and divided into separate pieces (assuming there are spaces) by the reader. The solution is to add the class "parsed" to a tag that does not need parsing. As an example with a KaTeX formula, assume the following code: <p>\[\int_1^\pi \sin x \mathrm{d} x\]</p> The default behavior is that it will be parsed into the following: <p> <span>\[\int_1^\pi </span> <span>\sin </span> <span>x </span> <span>\mathrm{d} </span> <span>x\]</span> </p> To avoid this and have the entire content attached together as one piece, simply add the class "parsed" as follows: <p class="parsed">\[\int_1^\pi \sin x \mathrm{d} x\]</p> Waiting for the DOM The standard way of dealing with Javascript libraries that require rendering HTML in the DOM is to wait for the DOM to load and become ready.
This is done in several ways, such as: document.addEventListener("DOMContentLoaded", ready); or window.addEventListener("load", ready); Although this is the correct way to initiate executing any Javascript in your page, it causes an issue in Kotobee. Basically, the DOM in Kotobee is loaded just once for the entire app. Hence these events will be triggered just once. Whenever a chapter is loaded that relies on these DOM-ready events, they will never be triggered. Thus, the binding Javascript will never be executed. The way around this is to execute functions directly without wrapping them in such listeners. In the case of KaTeX, instead of having the Javascript code as such:

<script>
document.addEventListener("DOMContentLoaded", function() {
  renderMathInElement(document.body, {
    delimiters: [
      {left: "$$", right: "$$", display: true},
      {left: "\\[", right: "\\]", display: true},
      {left: "$", right: "$", display: false},
      {left: "\\(", right: "\\)", display: false}
    ]
  });
});
</script>

It will become:

<script>
renderMathInElement(document.body, {
  delimiters: [
    {left: "$$", right: "$$", display: true},
    {left: "\\[", right: "\\]", display: true},
    {left: "$", right: "$", display: false},
    {left: "\\(", right: "\\)", display: false}
  ]
});
</script>

If you are planning to use the ebook on various readers including Kotobee Reader, then you may do a simple check to find out whether it is being run on Kotobee Reader with the following Javascript: if(typeof isKotobee != "undefined") In the near future, we will be applying smarter ways to automatically bypass DOM-ready events, without you having to enter any custom code.
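A defensive pattern that works on both kinds of readers can be sketched as a small helper. Only the isKotobee global comes from the documentation above; the helper name runWhenReady and the injectable env parameter are my own (the latter just makes the sketch testable outside a browser):

```javascript
// Guarded initialiser sketch: inside Kotobee Reader the DOM is already
// loaded (per-chapter DOM-ready events never fire), so call init() directly;
// everywhere else, fall back to the usual DOMContentLoaded listener.
function runWhenReady(init, env) {
  // env defaults to the real globals; tests may inject a fake one
  env = env || (typeof window !== "undefined" ? window : globalThis);
  if (typeof env.isKotobee !== "undefined") {
    init();
  } else {
    env.document.addEventListener("DOMContentLoaded", init);
  }
}
```

You would then wrap the renderMathInElement call from the snippet above in runWhenReady, instead of choosing one of the two forms by hand.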
I just watched the Blender Guru video How to Make a Beer in Blender and stopped at the point where it advises us to make the liquid slightly larger in diameter than the inner diameter of the glass - approximately halfway between the inner and outer walls. I then turned to Fluid in a Glass (and why you've been doing it wrong your whole life), to which the video points. Although the wording there isn't exactly correct, I think that I understand the point. In the glass shader where you specify IOR, you are not actually specifying the index of refraction of the material inside the volume defined by the mesh. You are actually only specifying the ratio of the indices of refraction of the spaces on either side of the mesh. And that's all that Snell's Law actually needs when applied to a single interface: $$ n_1 \sin(\theta_1) = n_2 \sin(\theta_2) $$ $$ \sin(\theta_1) = \frac{n_2}{n_1} \sin(\theta_2) $$ $$ \theta_1 = \sin^{-1}\left(\frac{n_2}{n_1} \sin(\theta_2)\right) $$ The problem is that in the physical world, the interface between liquid and glass has a very small index difference - the ratio is nearly 1.0. This is why some transparent objects can seem to almost completely disappear when submerged in water. But it seems to me that the workaround of embedding the liquid into the glass is not going to have the desired effect - if the desired effect is to get closer to a realistic image. It creates two interfaces, and each has a large ratio of indices of refraction - something like 1.3 or 1.4, or 1/1.3 and 1/1.4, depending on the directions of the normals. Question: Wouldn't embedding the liquid inside the glass wall as described in the video produce physically wrong and unrealistic refraction? Since there is just one physical interface, using two meshes that just touch or overlap seems like it's just asking for trouble.
From a rendering point of view, meshes are interfaces between physical materials; even though we say that we assign "materials" to them, we're actually assigning surface characteristics.

edit: I've just found an excellent explanation here; yes, the IOR < 1 technique should be correct, and a single mesh should be used for the glass-liquid interface.

I'll use an index of refraction of 1.4 for glass and 1.3 for the liquid in the following discussion, just to make it simpler. Wouldn't a more realistic method be to just use a single mesh for the boundary between glass and liquid, and choose IOR = 0.93 with the normals pointing out, or IOR = 1.08 with the normals pointing in? That doesn't mean that the actual index of refraction is 0.93; it just means that when rays pass between the space inside the glass mesh and the space inside the liquid mesh, they will be refracted and (Fresnel) reflected based on the correct physics for a single interface between materials with indices of refraction of 1.4 and 1.3.

note: This would require the top of the liquid to have a different mesh, or at least a different material, with the IOR set to 1.3 for the correct liquid-air interface behavior. Or you could just put foam on top.
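To put rough numbers on the difference, here is a small Python sketch (my own illustration, not from either video) comparing the normal-incidence Fresnel reflectance of the single physical glass-liquid interface against the glass-air plus air-liquid pair that overlapping meshes create; the indices 1.4 and 1.3 are the placeholder values used above.

```python
# Normal-incidence Fresnel reflectance for an interface between media
# with refractive indices n1 and n2: R = ((n1 - n2) / (n1 + n2))**2.
def fresnel_reflectance(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_glass, n_liquid = 1.0, 1.4, 1.3

# One physical glass->liquid interface (relative IOR 1.3/1.4, about 0.93).
direct = fresnel_reflectance(n_glass, n_liquid)

# Two interfaces created by overlapping meshes: glass->air, then air->liquid.
# (First-order approximation: simply summing the two reflectances.)
workaround = fresnel_reflectance(n_glass, n_air) + fresnel_reflectance(n_air, n_liquid)

print(f"single interface: R = {direct:.4f}")
print(f"two interfaces:   R = {workaround:.4f}")
```

With these numbers the two-interface construction reflects roughly thirty times more light than the real glass-liquid boundary, which is the sense in which the embedded-liquid workaround departs from physical behaviour even though, for flat parallel interfaces, the chained refraction angles come out the same.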
Here is a beautiful result from numerical analysis. Given any nonsingular $n\times n$ system of linear equations $Ax=b$, an optimal Krylov subspace method like GMRES must necessarily terminate with the exact solution $x=A^{-1}b$ in no more than $n$ iterations (assuming exact arithmetic). The Cayley-Hamilton theorem provides a simple, elegant proof of this statement. To begin, recall that at the $k$-th iteration, minimum residual methods like GMRES solve the least-squares problem$$\underset{x_k\in\mathbb{R}^n}{\text{minimize }} \|Ax_k-b\|$$by picking a solution from the $k$-th Krylov subspace$$\text{subject to } x_k \in \mathrm{span}\{b,Ab,A^2b,\ldots,A^{k-1}b\}.$$If the objective $ \|Ax_k-b\|$ reaches zero, then we have found the exact solution at the $k$-th iteration (we have assumed that $A$ is nonsingular). Next, observe that $x_k=(c_0 + c_1 A + \cdots + c_{k-1}A^{k-1})b=p(A)b$, where $p(\cdot)$ is a polynomial of degree $k-1$. Similarly, $\|Ax_k-b\|=\|q(A)b\|$, where $q(z)=zp(z)-1$ is a polynomial of degree $k$ satisfying $q(0)=-1$. So the least-squares problem from above for each fixed $k$ can be equivalently posed as a polynomial optimization problem with the same optimal objective $$\text{minimize } \|q_k(A)b\| \text{ subject to } q_k(0)=-1,\; q_k(\cdot) \text{ a polynomial of degree at most } k.$$Again, if the objective $\|q_k(A)b\|$ reaches zero, then GMRES has found the exact solution at the $k$-th iteration. Finally, we ask: what bound on $k$ guarantees that the objective reaches zero? With $k=n$, a feasible choice of $q_n(\cdot)$ is the characteristic polynomial of $A$, rescaled so that $q_n(0)=-1$ (possible because $A$ is nonsingular, so its characteristic polynomial does not vanish at zero). According to Cayley-Hamilton, $q_n(A)=0$, so $\|q_n(A)b\|=0$. Hence we conclude that GMRES always terminates with the exact solution by the $n$-th iteration. This same argument can be repeated (with very minor modifications) for other optimal Krylov methods like conjugate gradients, conjugate residual / MINRES, etc.
In each case, the Cayley-Hamilton theorem forms the crux of the argument.
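A quick numerical check of this termination bound (a sketch of my own, not from the argument above) can be done by solving the Krylov least-squares problem directly with dense linear algebra, rather than running an actual GMRES implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)) + n * np.eye(n)  # comfortably nonsingular
b = rng.standard_normal(n)

def min_residual(k):
    """Minimize ||A x - b|| over x in span{b, Ab, ..., A^{k-1} b}."""
    K = np.column_stack([np.linalg.matrix_power(A, j) @ b for j in range(k)])
    c, *_ = np.linalg.lstsq(A @ K, b, rcond=None)
    return np.linalg.norm(A @ K @ c - b)

residuals = [min_residual(k) for k in range(1, n + 1)]
print(residuals)  # non-increasing, and essentially zero at k = n
```

In floating point the $k=n$ residual is only zero to machine precision, and for badly conditioned $A$ the raw power basis $\{b, Ab, \ldots\}$ becomes numerically rank-deficient well before that, which is why practical GMRES builds an orthonormal Arnoldi basis instead.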
I was wondering when a stochastic process defined via an SDE is Markovian. The SDE may involve an Ito integral, a Lebesgue integral, a jump component, and other terms. The reason I ask this question is that I don't understand when we can and cannot discuss properties of a Markov process, such as the Kolmogorov equations, for a process defined by an SDE. Thanks and regards!

If the SDE is of the form $\mathrm dX_t = \sum_{i=1}^n f_i(X_t)\mathrm dY^i_t$ where the $Y^i$ are Markov processes and the $f_i$ are some functions, then, provided the $Y^i$ and $f_i$ are "nice enough", $X$ will be a Markov process. Intuitively, this is the case since the future dynamics of $X_t$ depends on the past only through the present value: e.g. $$ \mathrm dX_t = X_t\mathrm dW_t $$ is a Markov process whereas $$ \mathrm dX_t = \left(\int_0^t X_s\mathrm ds\right)\mathrm dW_t $$ is not. I am not sure that there is a complete answer as to which "nice" conditions the $Y^i$ and $f_i$ have to satisfy in order for $X$ to be a (strong) Markov process - but I guess you are aware of many examples of such conditions for jump diffusions.

Consider the following SDE $$ X_t ^{s,x} = x +\int _s^t \sigma(u,X_u ^{s,x}) ~ dW_u+\int _s^t b(u,X_u ^{s,x}) ~ du $$ satisfying the following hypotheses: there is $C>0$ such that, for all $(x,y) \in \mathbb R ^p \times \mathbb R ^p$ and $u \in \mathbb R _+$, $$\left|\sigma(u,x) -\sigma(u,y) \right|+\left|b(u,x) -b(u,y) \right|\leq C\left|x-y \right|$$ and for all $t\in \mathbb R _+$ and $x\in \mathbb R ^p$, $$ \int_s^t \left(\left| \sigma \right|^2(u,x)+\left|b \right|^2(u,x)\right)~du <+ \infty.$$ If $b$ and $\sigma$ are time-homogeneous, i.e., $\sigma(u,x) =\sigma(x)$ and $ b(u,x)= b(x)$, the simple Markov property applies to the solution of this SDE. A good example of this is the time-integral of the Ornstein-Uhlenbeck process. On its own, this is not a Markov process.
However, a two dimensional stochastic process, with one co-ordinate being the Ornstein-Uhlenbeck process and the other being its time-integral, will be a two dimensional Markov process. Similarly, a one-dimensional Markov process and its supremum process, comprise a two-dimensional Markov process.
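As a concrete illustration (my own sketch, with made-up parameter values), here is an Euler-Maruyama simulation of the Ornstein-Uhlenbeck process together with its time integral. The pair $(X_t, Y_t)$ advances using only its current value and fresh noise, which is exactly the two-dimensional Markov structure described above, while $Y_t$ alone cannot be advanced without knowing $X_t$:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma = 1.0, 0.5   # illustrative OU parameters (assumed values)
dt, steps = 1e-3, 5000

# Euler-Maruyama for  dX_t = -theta X_t dt + sigma dW_t,
# augmented with the time integral  Y_t = \int_0^t X_s ds.
x, y = 1.0, 0.0
for _ in range(steps):
    dw = np.sqrt(dt) * rng.standard_normal()
    # Both updates read only the current (x, y): the pair is Markov.
    x, y = x + (-theta * x) * dt + sigma * dw, y + x * dt
print(x, y)
```

The update for $y$ needs the current $x$; if you discard $x$ and keep only $y$, the increment of $y$ is no longer determined by the state you retained, which is the informal reason the integral alone fails to be Markov.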
Difference between revisions of "LaTeX:Symbols"

Revision as of 15:20, 5 January 2017

This article will provide a short list of commonly used LaTeX symbols.
Contents: Common Symbols - Operators - Relations - Finding Other Symbols

Finding Other Symbols

Here are some external resources for finding less commonly used symbols: Detexify is an app which allows you to draw the symbol you'd like and shows you the code for it! MathJax (what allows us to use LaTeX on the web) maintains a list of supported commands. The Comprehensive LaTeX Symbol List.

Operators

\pm \mp \times \div \cdot \ast \star \dagger \ddagger \uplus \sqcap \sqcup \vee \wedge \oplus \ominus \otimes \circ \bullet \diamond \lhd \rhd \unlhd \unrhd \oslash \odot \bigcirc \triangleleft \Diamond \bigtriangleup \bigtriangledown \Box \triangleright \setminus \wr \sqrt{x} x^{\circ} \triangledown \sqrt[n]{x} a^x a^{xyz}

Relations

\le \ge \neq \sim \ll \gg \doteq \simeq \subset \supset \approx \asymp \subseteq \supseteq \cong \smile \sqsubset \sqsupset \equiv \frown \sqsubseteq \sqsupseteq \propto \bowtie \in \ni \prec \succ \vdash \dashv \preceq \succeq \models \perp \parallel \mid \bumpeq

Negations of many of these relations can be formed by just putting \not before the symbol, or by slipping an n between the \ and the word. Here are a few examples, plus a few other negations; it works for many of the others as well.

\nmid \nleq \ngeq \nsim \ncong \nparallel \not< \not> \not= \not\le \not\ge \not\sim \not\approx \not\cong \not\equiv \not\parallel \nless \ngtr \lneq \gneq \lnsim \lneqq \gneqq

To use other relations not listed here, such as =, >, and <, in LaTeX, you may just use the symbols on your keyboard.
Greek Letters

\alpha \beta \gamma \delta \epsilon \varepsilon \zeta \eta \theta \vartheta \iota \kappa \lambda \mu \nu \xi \pi \varpi \rho \varrho \sigma \varsigma \tau \upsilon \phi \varphi \chi \psi \omega

\Gamma \Delta \Theta \Lambda \Xi \Pi \Sigma \Upsilon \Phi \Psi \Omega

Arrows

\gets \to \leftarrow \Leftarrow \rightarrow \Rightarrow \leftrightarrow \Leftrightarrow \mapsto \hookleftarrow \leftharpoonup \leftharpoondown \rightleftharpoons \longleftarrow \Longleftarrow \longrightarrow \Longrightarrow \longleftrightarrow \Longleftrightarrow \longmapsto \hookrightarrow \rightharpoonup \rightharpoondown \leadsto \uparrow \Uparrow \downarrow \Downarrow \updownarrow \Updownarrow \nearrow \searrow \swarrow \nwarrow

(For those of you who hate typing long strings of letters, \iff and \implies can be used in place of \Longleftrightarrow and \Longrightarrow respectively.)

Dots

\cdot \vdots \dots \cdots \ddots \iddots

Accents

\hat{x} \check{x} \dot{x} \breve{x} \acute{x} \ddot{x} \grave{x} \tilde{x} \mathring{x} \bar{x} \vec{x}

When applying accents to i and j, you can use \imath and \jmath to keep the dots from interfering with the accents: \vec{\jmath} \tilde{\imath}

\tilde and \hat have wide versions that allow you to accent an expression: \widehat{7+x} \widetilde{abc}

Others

Command Symbols

Some symbols are used in commands so they need to be treated in a special way.

\textdollar or $ \& \% \# \_ \{ \} \backslash

(Warning: Using $ for will result in . This is a bug as far as we know. Depending on the version of this is not always a problem.)
European Language Symbols

{\oe} {\ae} {\o} {\OE} {\AE} {\AA} {\O} {\l} {\ss} !` {\L} {\SS}

Bracketing Symbols

In mathematics, sometimes we need to enclose expressions in brackets or braces or parentheses. Some of these work just as you'd imagine in LaTeX: type ( and ) for parentheses, [ and ] for brackets, and | and | for absolute value. However, other symbols have special commands:

\{ \} \| \backslash \lfloor \rfloor \lceil \rceil \langle \rangle

You might notice that if you use any of these to typeset an expression that is vertically large, like (\frac{a}{x})^2, the parentheses don't come out the right size. If we put \left and \right before the relevant parentheses, as in \left(\frac{a}{x}\right)^2, we get a prettier expression. \left and \right can also be used to resize the following symbols:

\uparrow \downarrow \updownarrow \Uparrow \Downarrow \Updownarrow

Multi-Size Symbols

Some symbols render differently in inline math mode and in display mode. Display mode occurs when you use \[...\] or $$...$$, or environments like \begin{equation}...\end{equation} and \begin{align}...\end{align}. Read more in the commands section of the guide about how symbols which take arguments above and below the symbols, such as a summation symbol, behave in the two modes. In each of the following, the two images show the symbol in display mode, then in inline mode.

\sum \int \oint \prod \coprod \bigcap \bigcup \bigsqcup \bigvee \bigwedge \bigodot \bigotimes \bigoplus \biguplus
Singularity of controls in a simple model of acquired chemotherapy resistance

1. Inter-Faculty Individual Doctoral Studies in Natural Sciences and Mathematics, University of Warsaw, Banacha 2c, 02-097 Warsaw, Poland
2. Faculty of Mathematics and Computer Science, University of Warmia and Mazury in Olsztyn, Sloneczna 54, 10-710 Olsztyn, Poland
3. Faculty of Mathematics, Informatics and Mechanics, Institute of Applied Mathematics and Mechanics, University of Warsaw, Banacha 2, 02-097 Warsaw, Poland

This study investigates how optimal control theory may be used to delay the onset of chemotherapy resistance in tumours. An optimal control problem with simple tumour dynamics and an objective functional explicitly penalising the drug-resistant tumour phenotype is formulated. It is shown that for biologically relevant parameters the system has a single globally attracting positive steady state. The existence of a singular arc is then investigated analytically under a very general form of the resistance penalty in the objective functional. It is shown that the singular controls are of order one and that they satisfy the Legendre-Clebsch condition in a subset of the domain. A gradient method for solving the proposed optimal control problem is then used to find the control minimising the objective. The optimal control is found to consist of three intervals: full dose, singular and full dose. The singular part of the control is essential in delaying the onset of drug resistance.

Keywords: Optimal control, ordinary differential equations, cancer chemotherapy, drug resistance, non-homogeneous tumour.

Mathematics Subject Classification: Primary: 49K15, 92C50; Secondary: 37N25.

Citation: Piotr Bajger, Mariusz Bodzioch, Urszula Foryś. Singularity of controls in a simple model of acquired chemotherapy resistance. Discrete & Continuous Dynamical Systems - B, 2019, 24 (5): 2039-2052. doi: 10.3934/dcdsb.2019083

Name | Value | Role
$\gamma_1$ | 0.192 | Proliferation rate of sensitive cells.
$\gamma_2$ | 0.096 | Proliferation rate of resistant cells.
$\tau_1$ | 0.002 | Mutation rate towards the resistant phenotype.
$\tau_2$ | 0.001 | Mutation rate towards the sensitive phenotype.
$T$ | 13.5 | Therapy duration.
$\omega_1$ | 60 | Weight for sensitive cell volume at the terminal point.
$\omega_2$ | 120 | Weight for the resistant cell volume at the terminal point.
$\eta_1$ | 3 | Weight in the overall tumour burden penalty for sensitive cells.
$\eta_2$ | 6 | Weight in the overall tumour burden penalty for resistant cells.
$\xi$ | 1 | Weight for the resistant phenotype penalty.
$\epsilon$ | 0.1 | Scaling factor in the resistant phenotype penalty function $G$.
$\Delta$ | $10^{-6}$ | Step used in finite differences gradient calculations.
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box.. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the line? Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$? 
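The division-box loop at the top of this exchange is just the Euclidean algorithm; here is a minimal sketch of it (I feed the divisor and remainder back into the box, which is presumably what the $r,q$ in the description is meant to say):

```python
def division_box_gcd(a, b):
    """Repeat a = b*q + r; return b when r = 0, else feed (b, r) back in."""
    assert a > b > 0
    while True:
        q, r = divmod(a, b)   # the "division box"
        if r == 0:
            return b
        a, b = b, r           # divisor and remainder go back into the box

print(division_box_gcd(252, 105))  # greatest common divisor
```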
Well, assuming that the paper is all correct (or at least correct to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real-world application' affect whether people would be interested in the contents of the paper?"

@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect?

It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.

It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.

You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.

@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.

@Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.

@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up.
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
People say that the probability density of the continuity equation for the Dirac equation is positive definite. I wanted to see it myself and followed the same path as Bjorken & Drell in their RQM book. They say: "We left-multiply the Dirac equation by the Hermitian conjugate wave function, then we take the Hermitian conjugate of the Dirac equation and right-multiply it by the wavefunction; by subtracting the second equation from the first one we get the continuity equation." I'll add a screenshot from the book below.

What I don't understand is why they treat the derivative operators as if they were Hermitian operators. Those operators are anti-Hermitian, so the left side of the equation after subtraction is \begin{align} i\hbar\left(\Psi^\dagger\frac{d\Psi}{dt}-\frac{d\Psi^\dagger}{dt}\Psi\right) \end{align} and at this point, due to the minus sign, we can't use the product rule to simplify this side to \begin{align} i\hbar\frac{d}{dt}(\Psi^\dagger\Psi). \end{align} I looked at other books too; all seem to follow the same process as this one. I am not sure what I am missing here.

[Book screenshot]

In the form with a minus sign, the probability density might not be positive definite for an arbitrary wavefunction. Where do you think the problem is?
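For what it's worth, here is the bookkeeping as I understand it (keeping only the time-derivative terms, with $H$ the Dirac Hamiltonian): the Hermitian conjugate of $i\hbar\,\partial_t\Psi = H\Psi$ already carries the flipped sign of $i$, so the subtraction of the two equations produces a plus sign between the two terms, and the ordinary product rule then applies.

```latex
% Time-derivative terms only; the spatial terms combine into the
% divergence of the current on the right-hand side.
\begin{align}
  \Psi^\dagger \times \bigl(i\hbar\,\partial_t \Psi = H\Psi\bigr)
    &\;\Longrightarrow\; i\hbar\,\Psi^\dagger\,\partial_t\Psi = \Psi^\dagger H \Psi,\\
  \bigl(-i\hbar\,\partial_t \Psi^\dagger = (H\Psi)^\dagger\bigr) \times \Psi
    &\;\Longrightarrow\; -i\hbar\,(\partial_t\Psi^\dagger)\,\Psi = (H\Psi)^\dagger \Psi,\\
  \text{subtracting:}\quad
  i\hbar\bigl(\Psi^\dagger\,\partial_t\Psi + (\partial_t\Psi^\dagger)\,\Psi\bigr)
    &= i\hbar\,\partial_t\bigl(\Psi^\dagger\Psi\bigr).
\end{align}
```

On this reading, the minus sign in the quoted expression arises from conjugating $i\hbar$ without also subtracting the conjugated equation with its own sign kept consistent; once both steps are done together, the plus sign appears and $\rho = \Psi^\dagger\Psi \geq 0$ falls out.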
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...

Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...

Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and $\bar{p}$ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Given a semilattice $S$, a subset $E$, and a positive integer $n$, let $E^{[n]}$ be the set of all products of $n$-tuples in $E$. Thus $\bigcup_{n\geq 1} E^{[n]}$ is nothing but the subsemigroup of $S$ generated by $E$, which I'll denote by $\langle E\rangle$. The following definition arose in some work I am writing up, as a technical condition needed to make a theorem work. Definition. $S$ has "generation depth $\leq n$" if $E^{[n]} = \langle E\rangle$ for every subset $E\subseteq S$. The terminology is my own, because I don't know if there is existing terminology that I should be using instead. So my questions are: has anyone seen this definition before, and do they have a reference where this condition is given an explicit name? Some remarks. It is clear that for each $n$, I can find a finite semilattice which does not have generation depth $\leq n$ (a free semilattice on at least $n+1$ generators, for instance). On the other hand, easy pigeon-hole arguments show that a semilattice of width $n$ has generation depth $\leq n$, as does a semilattice of height $n$. UPDATE: Some more context, in case it helps or is suggestive. The condition arises from the following question: Given a semilattice $S$ and a weight $\omega$ on $S$, that is to say, a submultiplicative function $\omega: S \to [1,\infty)$, suppose $\psi:S \to {\mathbb C}$ is approximately multiplicative, in the sense that $$ \sup_{x,y\in S} \omega(x)^{-1}\omega(y)^{-1} |\psi(x)\psi(y)-\psi(xy)| \hbox{ is small.} $$ Does this force $\psi$ to be a small perturbation of a multiplicative function $S\to\{0,1\}$? It turns out that the answer is YES if $S$ has "generation depth $\leq n$" for some $n$, regardless of the choice of $\omega$ -- roughly speaking, if I know what $\psi$ does on some subset $E$, then the condition allows me to control what $\psi$ does on the filter generated by $E$.
As a partial converse, I can find a semilattice $S$ and a weight $\omega$ such that the answer is NO (the counter-example is what motivated the definition).
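Since everything here is finite, the definition can be brute-force checked on small examples. Below is a minimal Python sketch (the encoding of the free semilattice as nonempty subsets under union, and all names, are my own choices, not from the post); it illustrates the first remark for $n=2$: the free semilattice on 3 generators has generation depth $\leq 3$ but not $\leq 2$.

```python
from itertools import product, chain, combinations

def products_of_ntuples(E, op, n):
    """E^[n]: products of all n-tuples (repetition allowed) of elements of E."""
    out = set()
    for tup in product(E, repeat=n):
        x = tup[0]
        for y in tup[1:]:
            x = op(x, y)
        out.add(x)
    return out

def generated(E, op):
    """<E>: the subsemigroup generated by E, computed by iterating to closure."""
    S = set(E)
    while True:
        bigger = S | {op(x, y) for x in S for y in S}
        if bigger == S:
            return S
        S = bigger

def has_depth_at_most(S, op, n):
    """Does E^[n] = <E> hold for every nonempty subset E of S?"""
    elems = list(S)
    subsets = chain.from_iterable(combinations(elems, r) for r in range(1, len(elems) + 1))
    return all(products_of_ntuples(E, op, n) == generated(E, op) for E in subsets)

# Free semilattice on 3 generators: nonempty subsets of {1, 2, 3} under union.
S3 = [frozenset(c) for c in chain.from_iterable(combinations([1, 2, 3], r) for r in (1, 2, 3))]
union = lambda x, y: x | y
print(has_depth_at_most(S3, union, 2))   # False: e.g. E = the three singletons
print(has_depth_at_most(S3, union, 3))   # True, matching the width-n remark
```

The depth-2 failure is exactly the free-semilattice obstruction from the remarks: with $E$ the three singletons, $\{1,2,3\}\in\langle E\rangle$ but no product of a 2-tuple reaches it.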
To construct the DFA for the intersection of the languages (a string must be accepted by both DFAs) you can work as follows: Make sure the transition function for the input DFAs is complete. The new set of states for the DFA is the cartesian product of the states of the 2 DFAs: $Q' = Q_1 \times Q_2$. The transition function looks at the 2 states independently: $\delta'((q_1, q_2), s) = (\delta_1(q_1, s), \delta_2(q_2, s))$. The accepting states are those where both the original states are accepting: $F' = F_1 \times F_2$. For the union of the languages (a string must be accepted by either DFA) it's the same except that a product state is accepting when 1 or both of its components are accepting: $F' = (Q_1 \times F_2) \cup (F_1 \times Q_2)$. Here it is important that the transition functions are complete. For completeness: the complement (negation) of a DFA is the DFA whose accepting states are the states that are non-accepting in the original DFA, $F' = Q \setminus F$. Again it is important here that the transition function is complete.
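As a concrete illustration, here is the product construction in a few lines of Python. The two toy DFAs — "even number of a's" and "ends in b" — are my own example, not from the question; the product state is tracked componentwise rather than materialized as a full table.

```python
# Toy DFAs over {a, b}: DFA1 accepts strings with an even number of a's;
# DFA2 accepts strings ending in b. Both transition functions are complete.
d1 = {('even', 'a'): 'odd', ('even', 'b'): 'even',
      ('odd', 'a'): 'even', ('odd', 'b'): 'odd'}
d2 = {('s0', 'a'): 's0', ('s0', 'b'): 's1',
      ('s1', 'a'): 's0', ('s1', 'b'): 's1'}
F1, F2 = {'even'}, {'s1'}

def run(delta, q0, w):
    q = q0
    for c in w:
        q = delta[(q, c)]
    return q

def product_accepts(w, mode='intersection'):
    # delta'((q1, q2), c) = (delta1(q1, c), delta2(q2, c)), run componentwise
    q1, q2 = run(d1, 'even', w), run(d2, 's0', w)
    if mode == 'intersection':
        return q1 in F1 and q2 in F2          # F' = F1 x F2
    return q1 in F1 or q2 in F2               # F' = (Q1 x F2) u (F1 x Q2)

print(product_accepts('aab'))                 # True: two a's, ends in b
print(product_accepts('ab'))                  # False: odd number of a's
print(product_accepts('ab', mode='union'))    # True: ends in b
```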
Let $Q$ be a finite set of states, $\Sigma$ a finite alphabet, $q_0\in Q$ the start state and $F\subseteq Q$ the set of accepting states. Let $\{\delta_k:Q\times\Sigma\rightarrow Q\}_{k=1}^n$ be a set of $n$ possible transition functions and $m$ a fixed natural number. Consider the following problem: find a word $w\in\Sigma^m$ s.t. the DFA accepts $w$ with maximal probability, assuming that the transition function is chosen randomly from a uniform distribution over the $\delta_k$, and that you are allowed to observe the state transitions resulting from $w_1\ldots w_k$ before choosing $w_{k+1}$ (but you don't know which transition function was chosen). In other words, we are doing one-shot reinforcement learning for a deterministic MDP where $\Sigma$ is the set of actions, and the reward is $1$ if we end up in $F$ after $m$ actions and $0$ otherwise. Given a policy $\pi: (\Sigma \times Q)^{< m} \rightarrow \Sigma$ (that decides which action to take after observing a given history), and some choice of $\delta_k$, we get a particular history $h_{\pi,k}\in(\Sigma\times Q)^{m}$. In fact, the collection of the $h_{\pi,k}$ for $k$ from $1$ to $n$ gives us a compact description of $\pi$: in order to implement $\pi$, we just need to look at the current history $h\in(\Sigma \times Q)^{< m}$ and find $k$ s.t. $h$ is a prefix of $h_{\pi,k}$ to know what action to take next. This description is of size $O(nm|\Sigma||Q|)$, as opposed to a lookup table for $\pi$ which would be of size exponential in $m$. In particular, an optimal policy (i.e. a policy which maximizes the probability of the DFA accepting) also has such a compact description. Is it possible to find a compact description (as above) of an optimal policy in time polynomial in the size of the problem and $m$? Here I'm assuming that the $\delta_k$ are given explicitly as lookup tables, and $F$ as a list of states (where states and actions are represented by numbers).
For a negative answer, you can assume any standard complexity-theory conjecture.
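This does not answer the complexity question, but to make the setup concrete, here is a brute-force dynamic program (my own sketch, not from the question) that computes the optimal acceptance probability by recursing on the belief state — the set of still-consistent $\delta_k$ together with each one's current state. The belief space can be exponential in the worst case, so it says nothing about polynomial time.

```python
from functools import lru_cache

def optimal_value(deltas, q0, F, Sigma, m):
    """Max acceptance probability over all adaptive policies.

    Belief state: frozenset of (k, current state) for the transition functions
    still consistent with the observed transitions (each has prior mass 1/n).
    Exponential in the worst case -- illustration only.
    """
    n = len(deltas)
    F = frozenset(F)

    @lru_cache(maxsize=None)
    def value(t, belief):
        if t == m:
            return sum(1 for k, q in belief if q in F) / n
        best = 0.0
        for a in Sigma:
            # observing the next state splits the belief set into groups
            groups = {}
            for k, q in belief:
                q2 = deltas[k][(q, a)]
                groups.setdefault(q2, set()).add((k, q2))
            best = max(best, sum(value(t + 1, frozenset(g)) for g in groups.values()))
        return best

    return value(0, frozenset((k, q0) for k in range(n)))

# With n = 1 the problem degenerates to plain reachability in a known DFA:
delta = {(q, c): (1 if c == 'a' else 0) for q in (0, 1) for c in 'ab'}
print(optimal_value([delta], 0, {1}, 'ab', 3))   # 1.0
```

Because the branches of the observation partition the belief set, the value of an action is the sum of the branch values, each already weighted by its prior mass $1/n$ at the leaves.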
The common understanding is that, setting air resistance aside, all objects dropped to Earth fall at the same rate. This is often demonstrated through the thought experiment of cutting a large object in half. The halves clearly can't then fall more slowly just by being sliced in two. However, I believe the answer is that when two objects fall together, attached or not, they do "fall" faster than an object of less mass alone does. This is because not only does the Earth accelerate the objects toward itself but the objects also accelerate the Earth toward themselves. Considering the formula: $$ F_{\text{g}} = \frac{G m_1 m_2}{d^2} $$ Given $F = ma$ thus $a = F/m$, we note that the mass of the small object doesn't seem to matter as when calculating acceleration the force is divided by the $m$ term, its mass. However, this overlooks that the force is actually applied to both objects, not just to the smaller one. The acceleration on the second, larger object is found by dividing $F$, in turn, by the larger object's mass. The two objects' acceleration vectors are exactly opposite, so closing acceleration is the sum of the two: $$ a_{\text{closing}} = \frac{F}{m_1} + \frac{F}{m_2} $$ Since the Earth is extremely massive compared to everyday objects, the acceleration imparted on the object by the Earth will radically dominate the equation. As the Earth is $\sim 5.972 \times {10}^{24} \, \mathrm{kg} ,$ a falling object of $5.972 \, \mathrm{kg}$ (just over 13 pounds) would accelerate the Earth about $\frac{1}{{10}^{24}}$ as much, which is one part in a trillion trillion. Of course in everyday situations, we can for all practical purposes treat objects as falling at the same rate because of this negligible difference—which our instruments probably couldn't even detect. But I'm hoping not for a discussion of practicality or what's measurable or observable, but what we think is actually happening. Am I right or wrong? 
What really clinched this for me was considering dropping a small Moon-massed object close to the Earth and a small Earth-massed object close to the Moon. This made me realize that falling isn't one object moving toward some fixed frame of reference, but that the Earth is just another object, and thus "falling" consists of multiple objects mutually attracting in space.
Welcome to The Riddler. Every week, I offer up problems related to the things we hold dear around here: math, logic and probability. There are two types: Riddler Express for those of you who want something bite-size and Riddler Classic for those of you in the slow-puzzle movement. Submit a correct answer for either, and you may get a shoutout in next week’s column. If you need a hint or have a favorite puzzle collecting dust in your attic, find me on Twitter. Riddler Express From Dave Moran, into the woods: Xavier and Yolanda are on a road trip when they notice a hiking trailhead on the side of the highway. They decide to check it out and pull into the parking lot. The sign at the trailhead describes the trail like this: “Flat, with excellent scenery on both sides. The trail dead-ends about 3 miles from this trailhead.” Xavier and Yolanda agree to go hiking on the trail. But Yolanda hikes at a faster pace than Xavier, and each prefers to hike at his or her own steady pace instead of adjusting it to stay with the other person. Neither Xavier nor Yolanda knows exactly how fast either of them walk; they only know that Yolanda is faster. To further complicate matters, both of their cellphones are dead, they have no other timepieces of any kind, and they have no idea, as they stand at the trailhead, whether there are any distance markers or memorable landmarks along the trail. How can Xavier and Yolanda plan their hike such that (1) they start hiking down the trail at the same time, (2) they each hike at their own steady pace the entire time, (3) they only hike up and down the trail (i.e., no hiking in the parking lot or on the highway), and (4) they arrive back at the trailhead at the same time? Riddler Classic From Taylor Firman, the bell rings and it’s time for gym: The following delightful video has been making the viral rounds: Let’s call this game rock-paper-scissors-hop. Here is an idealized list of its rules: Kids stand at either end of N hoops.
At the start of the game, one kid from each end starts hopping at a speed of one hoop per second until they run into each other, either in adjacent hoops or in the same hoop. At that point, they play rock-paper-scissors at a rate of one game per second until one of the kids wins. The loser goes back to their end of the hoops, a new kid immediately steps up at that end, and the winner and the new player hop until they run into each other. This process continues until someone reaches the opposing end. That player’s team wins! You’ve just been hired as the gym teacher at Riddler Elementary. You’re having a bad day, and you want to make sure the kids stay occupied for the entire class. If you put down eight hoops, how long on average will the game last? How many hoops should you put down if you want the game to last for the entire 30-minute period, on average? To get you started, here’s how a single game might unfold: Solution to last week’s Riddler Express Congratulations to 👏 JiHwan Moon 👏 of Seoul, South Korea, winner of last week’s Riddler Express! Last week, I asked you to take a standard deck of cards and pull out the numbered cards (the cards 2 through 10) from one suit. After shuffling them and laying them face down in a row, you flipped over the first card. If you correctly guessed whether the next card in the row was bigger or smaller, you could keep going. If you played this game optimally, what was the probability that you could get to the end without making any mistakes? It’s about 17 percent. The optimal strategy is fairly simple: You should say “bigger” if there are more bigger cards remaining and “smaller” if there are more smaller cards remaining. (If there is an equal number of bigger and smaller cards, you can just guess randomly — for simplicity, let’s say you always guess “bigger” in this case.) Getting to the probability of winning is a little trickier. Because we are dealing with cards 2 through 10, we are dealing with nine cards. 
Therefore, there are 9! — or 362,880 — ways that these cards might be arranged. What we need to do next is figure out how many of those arrangements will lead us to a win. If we divide the number of winning arrangements by the total number of arrangements, we’ll get the probability that we win the game. To ease into these calculations, let’s first imagine a smaller game — just two cards, numbered 2 and 3. In this case, you’re guaranteed to win because you’ll know for sure after seeing the first card whether the second and final card is bigger or smaller. Now, let’s move to a slightly larger game — three cards, numbered 2, 3 and 4. Things get a little more interesting here. There are six (3!) decks you might face. Five of these — every arrangement except 324 — lead to a win. In a game with four cards, there are 24 (4!) possible decks, 16 of which lead to victory. Solver Keith Hudson developed a clever way to find and visualize the underlying pattern here, which we can extend to solve our nine-card game. Imagine numbers in a pyramid, where each row going down represents a game of a certain size: one card, two cards, etc. The numbers in each row of the pyramid add up to the number of ways you can win that game. From left to right, the individual numbers in each row are the number of ways you can win that game if you happen to start with the lowest card, then the second-lowest card, and so on to the highest card. For example, in a one-card game, there is only one card you can start with and one way to win. In a two-card game, there are two cards you can start with, and one way to win in each of those cases. In a three-card game, there are three cards you can start with, and two, one and two ways, respectively, to win in those scenarios. The first six rows look like this:

1
1 1
2 1 2
5 3 3 5
16 11 8 11 16
62 46 35 35 46 62

We can build this pyramid with some pretty simple rules.
If, for example, we start by drawing either the smallest or biggest card in our deck, we are essentially then playing the game as if we had started with one fewer card, so the outer numbers (for example, 62 and 62 in the bottom row above) in the rows are the sums of the whole row that comes above them. You’ll notice that you want to draw the biggest or smallest cards — the closer you get to the middle, the fewer ways you have to win the game. For example, say you’re playing with six cards (the last row in our pyramid above). If you draw the biggest or smallest card (the outer numbers of the row), you have 62 ways to win. But if you draw the third-biggest or third-smallest, you have only 35 ways to win. You’ll also notice that the numbers have a relationship with each other. If a number is on the interior of the pyramid and on the left side, it is the sum of everything above it and to the right. Same goes for a number on the interior of the pyramid and to the right — it’s the sum of everything above it to the left. Why is this? Because if we draw the biggest or smallest cards (the outer numbers), it’s like we haven’t added a new card at all — we have the same number of ways to win as we did in total in the smaller game (the row above). But as you move closer to the middle of a row, you’re adding more and more complications, so your options to win are fewer. Remarkably, though, the past games help us calculate the number of winning paths in the new ones. Continuing in this way, the sum of the numbers in the ninth row of this pyramid is exactly 62,000. So our probability of winning is 62,000/362,880, or about 17 percent. Solution to last week’s Riddler Classic Congratulations to 👏 Nicholas Ellens 👏 of Longmont, Colorado, winner of last week’s Riddler Classic! Last week, Ariel, Beatrice and Cassandra — three brilliant game theorists — were bored at a game theory conference and devised the following game to pass the time. 
They drew a number line and placed a stack of $1 on the 1, $2 on the 2, $3 on the 3 and so on to $10 on the 10. Each player had a personalized token. They took turns — Ariel first, Beatrice second and Cassandra third — placing their tokens on one of the money stacks (only one token was allowed per space). Once the tokens were all placed, each player got to take every stack that her token was on or was closest to. (If a stack was midway between two tokens, the players split that cash.) How did this game play out? Ariel begins by placing her token on the $5. Beatrice then places her token on the $9. And finally Cassandra places her token on the $8. Ariel wins $21 (1+2+3+4+5+6), Beatrice wins $19 (9+10) and Cassandra wins $15 (7+8). Somewhat surprisingly, going first in this game pays off! There is no simple and elegant way to get to this solution, as far as I know. Brute force, or something similar to it, is the way to go. In addition to providing computer code, solver Aaron Zinger explained how this works: “Work backwards: Find Cassandra’s best move in each situation, then Beatrice’s best move given that we now know what Cassandra will do, then finally Ariel’s best move. Ariel’s best outcome is to cause her opponents to cluster at the upper end of the number line, giving her most of the money to herself. If she plays higher than 5, someone will bid lower than her to capture all the lower values. If she plays lower than 5, everyone will still play higher than her, so she’s given up part of the upper range without getting anything in return.” I offered a grab bag of extra credits for this problem: What if there were more players? What if it was played on a clock (with stacks of $1 to $12) rather than a line? Laurent Lessard, as usual, provided some excellent solutions to these questions. For starters, the first-player advantage appears to hold for any number of players. If there are four players, for example, the payoffs are $17, $15, $13 and $10. 
If we get up to 10 players, the payoffs just become $10, $9, $8 and so on down to $1. The same is true on a clock. If Ariel, Beatrice and Cassandra were playing that version of the game, Laurent found, Ariel would play on the 7, Beatrice on the 11, and Cassandra on the 10, for final winnings of $31.50, $27.50 and $19. Finally, solver Sawyer Tabony wrote in to share this extra-credit gem: This week’s Riddler Classic has a really cool extension. If you want to solve it for higher and higher numbers of stacks of money, you can take it to the continuous limit and have the three players place tokens on different real numbers between 0 and 1. After the three tokens are placed, the payoff is the integral of the function \(f(x)=x\) over the parts of the interval that are closest to your token. You end up needing to solve a few quadratic equations. But it’s all doable on paper, and the final solution is pretty cool. The optimal play turns out to be: A places a token at \(3/\sqrt{34} -\epsilon\) for some tiny number \(\epsilon\). B places a token at \(5/\sqrt{34} - \delta\) where \(\delta\) is tiny and a function of \(\epsilon\). And C places at the same position as B plus or minus \(\gamma\) for some tiny number \(\gamma\). The payouts work out to A earning 4/17 and B and C each earning 9/68 of the total 1/2 pot. Sawyer’s work is shown below: Want to submit a riddle? Email me at oliver.roeder@fivethirtyeight.com.
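Returning to the Express solution: the pyramid recurrence described there — outer entries are the sum of the whole row above, interior entries are the sum of everything above and to one side — fits in a few lines of Python (my own sketch):

```python
from math import factorial

def pyramid_row(n):
    """Ways to win the n-card game, split by which card you draw first."""
    row = [1]
    for size in range(2, n + 1):
        prev, new = row, []
        for i in range(size):
            if i == 0 or i == size - 1:
                new.append(sum(prev))        # outer: sum of the whole row above
            elif i < size / 2:
                new.append(sum(prev[i:]))    # left interior: sum of everything above and to the right
            else:
                new.append(sum(prev[:i]))    # right interior: sum of everything above and to the left
        row = new
    return row

print(pyramid_row(6))                        # [62, 46, 35, 35, 46, 62]
wins = sum(pyramid_row(9))
print(wins, round(wins / factorial(9), 4))   # 62000 0.1709
```

The ninth row sums to exactly 62,000, reproducing the 62,000/362,880 ≈ 17 percent answer.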
Roughly speaking there are two ways for a series to converge: As in the case of \(\sum 1/n^2\), the individual terms get small very quickly, so that the sum of all of them stays finite, or, as in the case of \( \sum (-1)^{n-1}/n\), the terms do not get small fast enough (\(\sum 1/n\) diverges), but a mixture of positive and negative terms provides enough cancellation to keep the sum finite. You might guess from what we've seen that if the terms get small fast enough to do the job, then whether or not some terms are negative and some positive the series converges. Theorem 11.6.1 If \(\sum_{n=0}^\infty |a_n|\) converges, then \(\sum_{n=0}^\infty a_n\) converges. Proof. Note that \( 0\le a_n+|a_n|\le 2|a_n|\) so by the comparison test \(\sum_{n=0}^\infty (a_n+|a_n|)\) converges. Now \[ \sum_{n=0}^\infty (a_n+|a_n|) -\sum_{n=0}^\infty |a_n| = \sum_{n=0}^\infty (a_n+|a_n|-|a_n|) = \sum_{n=0}^\infty a_n \] converges by theorem 11.2.2. So given a series \(\sum a_n\) with both positive and negative terms, you should first ask whether \(\sum |a_n|\) converges. This may be an easier question to answer, because we have tests that apply specifically to series with non-negative terms. If \(\sum |a_n|\) converges then you know that \(\sum a_n\) converges as well. If \(\sum |a_n|\) diverges then it still may be true that \(\sum a_n\) converges---you will have to do more work to decide the question. Another way to think of this result is: it is (potentially) easier for \(\sum a_n\) to converge than for \(\sum |a_n|\) to converge, because the latter series cannot take advantage of cancellation. If \(\sum |a_n|\) converges we say that \(\sum a_n\) converges absolutely; to say that \(\sum a_n\) converges absolutely is to say that any cancellation that happens to come along is not really needed, as the terms already get small so fast that convergence is guaranteed by that alone. If \(\sum a_n\) converges but \(\sum |a_n|\) does not, we say that \(\sum a_n\) converges conditionally.
For example \(\sum_{n=1}^\infty (-1)^{n-1} {1\over n^2}\) converges absolutely, while \(\sum_{n=1}^\infty (-1)^{n-1} {1\over n}\) converges conditionally. Example 11.6.2 Does \[\sum_{n=2}^\infty {\sin n\over n^2}\] converge? Solution In example 11.5.2 we saw that \[\sum_{n=2}^\infty {|\sin n|\over n^2}\] converges, so the given series converges absolutely. Example 11.6.3 Does \(\sum_{n=0}^\infty (-1)^{n}{3n+4\over 2n^2+3n+5}\) converge? Solution Taking the absolute value, \[\sum_{n=0}^\infty {3n+4\over 2n^2+3n+5}\] diverges by comparison to \[\sum_{n=1}^\infty {3\over 10n},\] so if the series converges it does so conditionally. It is true that \[\lim_{n\to\infty}(3n+4)/(2n^2+3n+5)=0,\] so to apply the alternating series test we need to know whether the terms are decreasing. If we let \[ f(x)=(3x+4)/(2x^2+3x+5)\] then \[ f'(x)=-(6x^2+16x-3)/(2x^2+3x+5)^2,\] and it is not hard to see that this is negative for \(x\ge1\), so the series is decreasing and by the alternating series test it converges. Does the series \(\sum_{n=0}^\infty {n^5\over 5^n}\) converge? It is possible, but a bit unpleasant, to approach this with the integral test or the comparison test, but there is an easier way. Consider what happens as we move from one term to the next in this series: $$\cdots+{n^5\over5^n}+{(n+1)^5\over 5^{n+1}}+\cdots$$ The denominator goes up by a factor of 5, \( 5^{n+1}=5\cdot5^n\), but the numerator goes up by much less: $$ (n+1)^5=n^5+5n^4+10n^3+10n^2+5n+1,$$ which is much less than \( 5n^5\) when \(n\) is large, because \( 5n^4\) is much less than \( n^5\). So we might guess that in the long run it begins to look as if each term is \(1/5\) of the previous term. We have seen series that behave like this: $$\sum_{n=0}^\infty {1\over 5^n} = {5\over4},$$ a geometric series. So we might try comparing the given series to some variation of this geometric series. This is possible, but a bit messy. We can in effect do the same thing, but bypass most of the unpleasant work. 
The key is to notice that $$ \lim_{n\to\infty} {a_{n+1}\over a_n}= \lim_{n\to\infty} {(n+1)^5\over 5^{n+1}}{5^n\over n^5}= \lim_{n\to\infty} {(n+1)^5\over n^5}{1\over 5}=1\cdot {1\over5} ={1\over 5}. $$ This is really just what we noticed above, done a bit more officially: in the long run, each term is one fifth of the previous term. Now pick some number between \(1/5\) and \(1\), say \(1/2\). Because $$\lim_{n\to\infty} {a_{n+1}\over a_n}={1\over5},$$ then when \(n\) is big enough, say \(n\ge N\) for some \(N\), $$ {a_{n+1}\over a_n} < {1\over2} \quad \hbox{and}\quad a_{n+1} < {a_n\over2}. $$ So \( a_{N+1} < a_N/2\), \( a_{N+2} < a_{N+1}/2 < a_N/4\), \( a_{N+3} < a_{N+2}/2 < a_{N+1}/4 < a_N/8\), and so on. The general form is \( a_{N+k} < a_N/2^k\). So if we look at the series $$ \sum_{k=0}^\infty a_{N+k}= a_N+a_{N+1}+a_{N+2}+a_{N+3}+\cdots+a_{N+k}+\cdots, $$ its terms are less than or equal to the terms of the sequence $$ a_N+{a_N\over2}+{a_N\over4}+{a_N\over8}+\cdots+{a_N\over2^k}+\cdots= \sum_{k=0}^\infty {a_N\over 2^k} = 2a_N. $$ So by the comparison test, \(\sum_{k=0}^\infty a_{N+k}\) converges, and this means that \(\sum_{n=0}^\infty a_{n}\) converges, since we've just added the fixed number \( a_0+a_1+\cdots+a_{N-1}\). Under what circumstances could we do this? What was crucial was that the limit of \( a_{n+1}/a_n\), say \(L\), was less than 1 so that we could pick a value \(r\) so that \(L < r < 1\). The fact that \(L < r\) (\(1/5 < 1/2\) in our example) means that we can compare the series \(\sum a_n\) to \(\sum r^n\), and the fact that \(r < 1\) guarantees that \(\sum r^n\) converges. That's really all that is required to make the argument work. We also made use of the fact that the terms of the series were positive; in general we simply consider the absolute values of the terms and we end up testing for absolute convergence. 
Theorem 11.7.1: The Ratio Test Suppose that $$\lim_{n\to \infty} |a_{n+1}/a_n|=L.$$ If \(L < 1\) the series \(\sum a_n\) converges absolutely, if \(L>1\) the series diverges, and if \(L=1\) this test gives no information. Proof. The example above essentially proves the first part of this, if we simply replace \(1/5\) by \(L\) and \(1/2\) by \(r\). Suppose that \(L>1\), and pick \(r\) so that \(1 < r < L\). Then for \(n\ge N\), for some \(N\), $${|a_{n+1}|\over |a_n|} > r \quad \hbox{and}\quad |a_{n+1}| > r|a_n|.$$ This implies that \(|a_{N+k}|>r^k|a_N|\), but since \(r>1\) this means that \(\lim_{k\to\infty}|a_{N+k}|\not=0\), which means also that \(\lim_{n\to\infty}a_n\not=0\). By the divergence test, the series diverges. \(\square\) To see that we get no information when \(L=1\), we need to exhibit two series with \(L=1\), one that converges and one that diverges. It is easy to see that \(\sum 1/n^2\) and \(\sum 1/n\) do the job. Example 11.7.2 The ratio test is particularly useful for series involving the factorial function. Consider \(\sum_{n=0}^\infty 5^n/n!\). $$ \lim_{n\to\infty} {5^{n+1}\over (n+1)!}{n!\over 5^n}= \lim_{n\to\infty} {5^{n+1}\over 5^n}{n!\over (n+1)!}= \lim_{n\to\infty} {5}{1\over (n+1)}=0. $$ Since \(0 < 1\), the series converges. A similar argument, which we will not do, justifies a similar test that is occasionally easier to apply. Theorem 11.7.3: The Root Test Suppose that \[\lim_{n\to \infty} |a_n|^{1/n}=L.\] If \(L < 1\), the series \(\sum a_n\) converges absolutely, if \(L>1\) the series diverges, and if \(L=1\) this test gives no information. The proof of the root test is actually easier than that of the ratio test, and is a good exercise. Example 11.7.4 Analyze \(\sum_{n=0}^\infty {5^n\over n^n}\). Solution The ratio test turns out to be a bit difficult on this series (try it). Using the root test: $$ \lim_{n\to\infty} \left({5^n\over n^n}\right)^{1/n}= \lim_{n\to\infty} {(5^n)^{1/n}\over (n^n)^{1/n}}= \lim_{n\to\infty} {5\over n}=0. $$ Since \(0 < 1\), the series converges. The root test is frequently useful when \(n\) appears as an exponent in the general term of the series.
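The limits that the ratio and root tests examine are easy to inspect numerically (this is illustration, not proof, and the sample values of \(n\) are arbitrary choices):

```python
def a(n):                      # terms of sum n^5 / 5^n (Example with the ratio test)
    return n**5 / 5**n

ratios = [a(n + 1) / a(n) for n in range(50, 55)]
print(ratios)                  # each value is close to 1/5: ratio test gives L = 1/5 < 1

def b(n):                      # terms of sum 5^n / n^n (Example 11.7.4)
    return 5**n / n**n

roots = [b(n) ** (1 / n) for n in (10, 50, 100)]
print(roots)                   # the values are 5/n -> 0: root test gives L = 0 < 1
```

For the second series the \(n\)th roots are exactly \(5/n\), which makes the root test immediate, while the ratio \(b(n+1)/b(n)\) involves the awkward factor \((n/(n+1))^n\) — this is the sense in which the ratio test is "a bit difficult" here.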
If you want the expected value, one answer is $n E[S_{(m)}]$, where $S_{(m)}$ is the $m$th order statistic of a sample of $n$ gamma$(k,1)$ random variables. While this expression may not have a simple closed form, you may be able to get a decent-sized approximate answer from the literature on moments of order statistics. (Edit: This appears to be the case, even when comparing with the known asymptotic expression for the case $m=n$. See discussion at end.) Here's the argument for $n E[S_{(m)}]$: Take a Poisson process $P$ with rate 1 and interarrival times $Z_1, Z_2, \ldots$. Let each event in the process $P$ have probability $1/n$ of being the first kind of coupon, probability $1/n$ of being the second kind of coupon, and so forth. By the decomposition property of Poisson processes, we can then model the arrival of coupon type $i$ as a Poisson process $P_i$ with rate $1/n$, and the $P_i$'s are independent. Denote the time until process $P_i$ obtains $k$ coupons by $T_i$. Then $T_{i}$ has a gamma$(k,1/n)$ distribution. The waiting time until $m$ processes have obtained $k$ coupons is the $m$th order statistic $T_{(m)}$ of the iid random variables $T_1, T_2, \ldots, T_n$. Let $N_m$ denote the total number of events in the processes at time $T_{(m)}$. Thus $N_m$ is the random variable the OP is interested in. We have $$T_{(m)} = \sum_{r=1}^{N_m} Z_r.$$ Since $N_m$ and the $Z_r$'s are independent, and the $Z_r$ are iid exponential(1), we have $$E[T_{(m)}] = E\left[E\left[\sum_{r=1}^{N_m} Z_r \bigg| N_m \right] \right] = E\left[\sum_{r=1}^{N_m} E[Z_r] \right] = E\left[N_m \right].$$ By scaling properties of the gamma distribution, $T_i = n S_i$, where $S_i$ has a gamma$(k,1)$ distribution. Thus $T_{(m)} = n S_{(m)}$, and so $E\left[N_m \right] = n E[S_{(m)}]$. For more on this idea, see Lars Holst's paper "On the birthday, collectors', occupancy, and other classical urn problems," International Statistical Review 54(1) (1986), 15-27.
(ADDED: Looked up literature on moments of order statistics.) David and Nagaraja's text Order Statistics (pp. 91-92) implies the bound$$n P^{-1}\left(k,\frac{m-1}{n}\right) \leq n E[S_{(m)}] \leq n P^{-1}\left(k,\frac{m}{n}\right),$$where $P(k,x)$ is the regularized incomplete gamma function. Some software programs can invert $P$ for you numerically. Trying a few examples, it appears that the bounds given by David and Nagaraja can be quite tight. For example, taking $n$ = 100,000, $m$ = 50,000, and $k$ = 25,000, the two bounds give estimates (via Mathematica) around $2.5 \times 10^9$, and the difference between the two estimates is about 400. More extreme values for $k$ and $m$ give results that are not as good, but even values as extreme as $m$ = 10, $k$ = 4 with $n$ = 100,000 still yield a relative error of less than 3%. Depending on the precision you need, this might be good enough. Moreover, these bounds seem to give better results for $m \approx n$ versus using the asymptotic expression for the case $m = n$ given in Flajolet and Sedgewick's Analytic Combinatorics as an estimate. The latter has error $o(n)$ and appears to be for fixed $k$. If $k$ is small, the asymptotic estimate is within or is quite close to the David and Nagaraja bounds. However, for large enough $k$ (say, on the order of $n$) the error in the asymptotic is on the order of the size of estimate, and the asymptotic expression can even produce a negative expected value estimate. In contrast, the bounds from the order statistics approach appear to get tighter when $k$ is on the order of $n$. (Caution: There are two versions of the regularized incomplete gamma function: the lower one $P$ that we want with bounds from $0$ to $x$, and the upper one $Q$ with bounds from $x$ to $\infty$. Some software programs use the upper one.)
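The identity $E[N_m] = n\,E[S_{(m)}]$ is easy to sanity-check by simulation using only the standard library (the parameters $n=5$, $k=2$, $m=3$ are arbitrary choices of mine):

```python
import random
random.seed(1)

n, k, m = 5, 2, 3        # n coupon types, k copies needed, stop when m types are complete
T = 20000                # Monte Carlo trials

def draws_until_m_types_complete():
    """Direct simulation of N_m: uniform coupon draws until m types have k copies."""
    counts, done, draws = [0] * n, 0, 0
    while done < m:
        draws += 1
        i = random.randrange(n)
        counts[i] += 1
        if counts[i] == k:
            done += 1
    return draws

mc_N = sum(draws_until_m_types_complete() for _ in range(T)) / T

# n * E[S_(m)], with S_(m) the m-th order statistic of n iid gamma(k, 1) variables
def mth_order_stat():
    return sorted(random.gammavariate(k, 1) for _ in range(n))[m - 1]

mc_S = n * sum(mth_order_stat() for _ in range(T)) / T

print(mc_N, mc_S)        # the two estimates agree up to Monte Carlo error
```

Note that `random.gammavariate(alpha, beta)` takes shape and scale, so `gammavariate(k, 1)` is exactly the gamma$(k,1)$ distribution used above.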
I will start with an example. Consider a symmetry breaking pattern like $SU(4)\rightarrow Sp(4)$. We know that in $SU(4)$ there is the Standard Model (SM) symmetry $SU(2)_L\times U(1)_Y$, but depending on which vacuum we use to break this symmetry, in one case you totally break the SM symmetry, with the vacuum: $$\Sigma_1 = \begin{pmatrix} 0& I_2 \\ -I_2 & 0 \end{pmatrix}$$ and in another case this symmetry is preserved, with the vacuum $$\Sigma_2 = \begin{pmatrix} i\sigma_2& 0 \\ 0 & i\sigma_2 \end{pmatrix}$$ In the first case (with $\Sigma_1$), the generators corresponding to the SM symmetry are part of the broken generators, so the SM symmetry is totally broken. In the second case ($\Sigma_2$), the SM generators are part of the unbroken generators, so the SM symmetry is preserved. As you can read, I know the answers but not how to find them! So, my questions are: How is it possible in general (not only for the $SU(4)\rightarrow Sp(4)$ breaking pattern) to construct the vacuum that breaks the symmetry? Is it possible, when constructing the vacuum, to ensure that the vacuum will (or will not) break a sub-symmetry like the SM symmetry in the previous example?
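For this particular example the stated answers can be checked numerically. Assuming the vacuum transforms as $\Sigma \to g\Sigma g^T$, a generator $T$ is unbroken iff $T\Sigma + \Sigma T^T = 0$; the sketch below uses my own embedding convention (the $SU(2)_L$ generators taken as $\sigma^a/2$ in the upper-left $2\times2$ block of $SU(4)$) and verifies that all three are broken by $\Sigma_1$ and all preserved by $\Sigma_2$.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def is_zero(A, tol=1e-12):
    return all(abs(x) < tol for row in A for x in row)

def embed_upper_left(s):
    # place a 2x2 block s in the upper-left corner of a 4x4 matrix, zeros elsewhere
    M = [[0j] * 4 for _ in range(4)]
    for i in range(2):
        for j in range(2):
            M[i][j] = s[i][j]
    return M

# assumed SU(2)_L generators: Pauli matrices / 2 acting on the first two indices
pauli_half = [[[0, 0.5], [0.5, 0]],
              [[0, -0.5j], [0.5j, 0]],
              [[0.5, 0], [0, -0.5]]]
T_L = [embed_upper_left(s) for s in pauli_half]

Sigma1 = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]   # [[0, I2], [-I2, 0]]
Sigma2 = [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]   # diag(i sigma2, i sigma2)

def unbroken(T, Sigma):
    # Sigma -> g Sigma g^T  =>  T is unbroken iff T Sigma + Sigma T^T = 0
    return is_zero(add(matmul(T, Sigma), matmul(Sigma, transpose(T))))

print([unbroken(T, Sigma1) for T in T_L])   # [False, False, False]: SU(2)_L fully broken
print([unbroken(T, Sigma2) for T in T_L])   # [True, True, True]: SU(2)_L preserved
```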
Carbides and grain defect formation in directionally solidified nickel-base superalloys. Advanced Technologies for Superalloy Affordability as held at the 2000 TMS Annual Meeting. :2000.. 2000. Carburization of W- and Re-rich Ni-based alloys in impure helium at 1000° C. Corrosion Science. 53:388–398.. 2011. Cast gamma titanium aluminides for low pressure turbine blades: a design case study for intermetallics. Minerals, Metals and Materials Society/AIME, Structural Intermetallics 2001 (USA). :3–12.. 2001. Cast structure and property variability in gamma titanium aluminides. Intermetallics. 6:629–636.. 1998. Chromia-Assisted Decarburization of W-Rich Ni-Based Alloys in Impure Helium at 1273 K (1000° C). Metallurgical and Materials Transactions A. 42:1229–1244.. 2011. A combinatorial investigation of palladium and platinum additions to $\beta$-NiAl overlay coatings. Acta Materialia. 77:379–393.. 2014. A combined grain scale elastic–plastic criterion for identification of fatigue crack initiation sites in a twin containing polycrystalline nickel-base superalloy. Acta Materialia. 103:461–473.. 2016. A comparative analysis of low temperature deformation in B2 aluminides. Materials Science and Engineering: A. 317:241–248.. 2001. A comparative examination of aging and creep behavior of die-cast MRI230D and AXJ530. Symposium on Magnesium Technology 2008 (TMS 9 March 2008 to 13 March 2008). :117–122.. 2008. A comparative investigation of oxide formation on EQ (Equilibrium) and NiCoCrAlY bond coats under stepped thermal cycling. Surface and Coatings Technology. 205:3066–3072.. 2011. Compression Creep Behavior of B2 Al-Ni-Ru Ternary Alloys. Advanced Intermetallic-Based Alloys (MRS Symposium Proceedings Series Volume 980). 980:45–50.. 2007. Compression Creep Behavior of B2 Al-Ni-Ru Ternary Alloys. MRS Proceedings. 980:0980–II01.. 2006.
Crack progression during sustained-peak low-cycle fatigue in single-crystal Ni-base superalloy René N5. Metallurgical and Materials Transactions A. 41:947–956.. 2010. Creep and directional coarsening in single crystals of new $\gamma$–$\gamma$′ cobalt-base alloys. Scripta Materialia. 66:574–577.. 2012. Creep and Elemental Partitioning Behavior of Mg-Al-Ca-Sn Alloys with the Addition of Sr. Magnesium Technology 2011. :215–222.. 2011. Creep behavior under isothermal and non-isothermal conditions of AM3 single crystal superalloy for different solutioning cooling rates. Materials Science and Engineering: A. 601:145–152.. 2014. Creep deformation and the evolution of precipitate morphology in nickel-based single crystals. Modelling of Microstructural Evolution in Creep Resistant Materials. :1998.. 1998. Creep deformation mechanisms in Ru-Ni-Al ternary B2 alloys. Metallurgical and Materials Transactions A. 39:39–49.. 2008. Creep deformation-induced antiphase boundaries in L1 2-containing single-crystal cobalt-base superalloys. Acta Materialia. 77:352–359.. 2014. Creep of $\alpha$ 2+ $\beta$ Titanium Aluminide Alloys. ISIJ International. 31:1139–1146.. 1991. Creep resistance of CMSX-3 nickel base superalloy single crystals. Acta Metallurgica et Materialia. 40:1–30.. 1992. CREEP RESISTANCE OF CMSX-3 NICKEL-BASE SUPERALLOY SINGLE-CRYSTALS (VOL 40, PG 1, 1992). ACTA METALLURGICA ET MATERIALIA. 41:2253–2253.. 1993. Creep resistance of nickel-base superalloy single crystals. Creep and fracture of engineering materials and structures. :287–301.. 1990.
Let $m\geq4$ be an even integer, $V\subset\mathbb{C}^{m-1}$ be the solution set of the following polynomial equations: \begin{cases} &\sum\limits_{s=1}^{2t-1}z_sz_{2t-s}+\sum\limits_{s=2t+1}^{m-1}z_sz_{m+2t-s}=0,\quad t=1,\dots,m/2-1,\\ &z_sz_{\frac{m}{2}+s}=0,\quad s=1,\dots,m/2-1,\\ &z_sz_{m-s}=0,\quad s=1,\dots,m/2. \end{cases} For convenience, denote the left hand side of the $t$th equation by $f_t$. Note that the last equation implies that $z_{\frac{m}{2}}=0$. Question: Is $V$ zero-dimensional? Remark 1: Since the above equations are homogeneous, this is equivalent to asking whether $V$ only contains $0\in\mathbb{C}^{m-1}$. Computation via Groebner basis shows that it is true for $m\leq18$. Remark 2: If the indices are counted modulo $m$, then the system (with solution $(z_0,z_1,\dots,z_{m-1})\in\mathbb{C}^m$ but $z_0$ always zero, so in one-to-one correspondence with the solution $(z_1,\dots,z_{m-1})\in\mathbb{C}^{m-1}$ of our original system) can be written shorter as\begin{cases}&\sum\limits_{s=0}^{m-1}z_sz_{2t-s}=0,\quad t=1,\dots,m/2-1,\\&z_sz_{\frac{m}{2}+s}=0,\quad s=1,\dots,m/2-1,\\&z_sz_{m-s}=0,\quad s=0,\dots,m/2.\end{cases} When $m=4$, the system is \begin{cases} &z_1^2+z_3^2=0,\\ &z_1z_3=0,\\ &z_2=0. \end{cases} Observing that the first two equations lead to $z_1=z_3=0$, we know the answer is true for $m=4$. When $m=6$, the system is \begin{cases} &z_1^2+2z_3z_5+z_4^2=0,\\ &2z_1z_3+z_2^2+z_5^2=0,\\ &z_1z_4=z_2z_5=0,\\ &z_1z_5=z_2z_4=0,\\ &z_3=0 \end{cases} Substituting $z_3=0$ into the first equation we get $z_1^2+z_4^2=0$, which in conjunction with $z_1z_4=0$ implies $z_1=z_4=0$. We deduce similarly that $z_2=z_5=0$, so the answer is true for $m=6$.
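The Groebner-basis check mentioned in Remark 1 is easy to script for small $m$; a sketch with SymPy (its `groebner` and `GroebnerBasis.is_zero_dimensional` are the only nonstandard ingredients; since the system is homogeneous, zero-dimensionality is equivalent to $V=\{0\}$):

```python
# Build the quadratic system for a given even m and test whether its
# variety is zero-dimensional via a Groebner basis (SymPy).
from sympy import symbols, groebner

def only_origin(m):
    """True iff the homogeneous system for even m has V = {0}."""
    assert m % 2 == 0 and m >= 4
    z = symbols(f'z1:{m}')           # z1, ..., z_{m-1}
    zz = (0,) + tuple(z)             # zz[s] = z_s, with the convention z_0 = 0
    polys = []
    for t in range(1, m // 2):       # the quadrics f_t
        f = sum(zz[s] * zz[2 * t - s] for s in range(1, 2 * t)) \
            + sum(zz[s] * zz[m + 2 * t - s] for s in range(2 * t + 1, m))
        polys.append(f)
    for s in range(1, m // 2):       # z_s z_{m/2+s} = 0
        polys.append(zz[s] * zz[m // 2 + s])
    for s in range(1, m // 2 + 1):   # z_s z_{m-s} = 0  (s = m/2 gives z_{m/2}^2)
        polys.append(zz[s] * zz[m - s])
    polys = [p for p in polys if p != 0]
    G = groebner(polys, *z, order='grevlex')
    # For a homogeneous ideal, zero-dimensional <=> only the origin.
    return G.is_zero_dimensional
```

For instance, `only_origin(4)`, `only_origin(6)` and `only_origin(8)` all come back true, matching the hand computations above.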
When $m=8$, the system is \begin{cases} &z_1^2+2z_3z_7+2z_4z_6+z_5^2=0,\\ &2z_1z_3+z_2^2+2z_5z_7+z_6^2=0,\\ &2z_1z_5+2z_2z_4+z_3^2+z_7^2=0,\\ &z_1z_5=z_2z_6=z_3z_7=0,\\ &z_1z_7=z_2z_6=z_3z_5=0,\\ &z_4=0 \end{cases} We get from the first equation and $z_3z_7=z_4=0$ that $z_1^2+z_5^2=0$, so $z_1=z_5=0$ as $z_1z_5=0$. We get from the third equation and $z_1z_5=z_4=0$ that $z_3^2+z_7^2=0$, so $z_3=z_7=0$ as $z_3z_7=0$. Then the second equation turns out to be $z_2^2+z_6^2=0$, so $z_2=z_6=0$ as $z_2z_6=0$. This shows that the answer is true for $m=8$. When $m=10$, we are not lucky enough to be able to simply apply the argument above. However, I think exhaustively checking "the possible zeros" among $z_1,\dots,z_9$ should work. Here "the possible zeros" means assigning zeros to some of the $z_k$'s such that (i) for $s=1,\dots,4$, at least one of $z_s$ and $z_{5+s}$ is zero, (ii) for $s=1,\dots,4$, at least one of $z_s$ and $z_{10-s}$ is zero, (iii) $z_5$ is zero, and then figuring out whether this implies that all of the $z_k$'s are zero. For example, we start with supposing $z_1=z_2=z_3=z_4=z_5=0$, then first deduce $z_6=0$ and $z_9=0$ and next $z_7=0$ and $z_8=0$. If it can be shown that with any initial assignment of zeros satisfying (i)-(iii) we will successfully deduce that all the $z_k$'s are zero, then the answer for $m=10$ is true. This may hopefully lead to a more efficient algorithm for our system than using the Groebner basis method. Edit: My attempt at this problem illustrated earlier is in fact considering whether there exists $\emptyset\neq N\subset\mathbb{Z}/m\mathbb{Z}$ satisfying the following three conditions. (I) $(N+\frac{m}{2})\cap N=\emptyset$; (II) $(-N)\cap N=\emptyset$; (III) for each $k\in\mathbb{Z}/m\mathbb{Z}$, $(2k-N)\cap N\neq\{k\}$. If there does not exist such a nonempty $N$ for some fixed $m$, then the polynomial system is zero-dimensional for this $m$.
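The combinatorial reformulation in the edit can be brute-forced directly for small $m$; a sketch (our own code) that searches for a nonempty $N\subset\mathbb{Z}/m\mathbb{Z}$ satisfying (I)-(III):

```python
# Brute-force search for a nonempty N ⊂ Z/mZ satisfying conditions (I)-(III);
# absence of such N is a sufficient condition for V = {0} for that m.
from itertools import combinations

def admissible(N, m):
    # (I): (N + m/2) ∩ N = ∅
    if any((x + m // 2) % m in N for x in N):
        return False
    # (II): (-N) ∩ N = ∅  (in particular this forces 0, m/2 ∉ N)
    if any((-x) % m in N for x in N):
        return False
    # (III): (2k - N) ∩ N ≠ {k} for every k in Z/mZ
    return all({(2 * k - x) % m for x in N} & N != {k} for k in range(m))

def exists_admissible_N(m):
    return any(admissible(set(N), m)
               for r in range(1, m + 1)
               for N in combinations(range(m), r))
```

Note that any singleton $N=\{a\}$ already violates (III) at $k=a$, so only sets with at least two elements need checking; the naive $2^m$ loop above is fine for the small $m$ considered here.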
In order to show that if the above mentioned $N$ does not exist then $V=\{0\}$ for the corresponding $m$, we suppose $V$ contains a nonzero point $z=(z_0,z_1,\dots,z_{m-1})$ and let $N_1$ consist of all the indices $s$ (modulo $m$) with $z_s\neq0$. It is easy to see that $N_1$ satisfies (I), because $z_s$ and $z_{\frac{m}{2}+s}$ cannot be nonzero simultaneously for any $s$ since $z_sz_{\frac{m}{2}+s}=0$. Similarly, $N_1$ satisfies (II). To show (III) for $N_1$, assume on the contrary that $(2k-N_1)\cap N_1=\{k\}$ for some $k$. Then $z_k\neq0$ and so $z_{\frac{m}{2}+k}=0$. Moreover, for any $j\neq k$, $j\not\in(2k-N_1)\cap N_1$. Hence $2k-j\not\in N_1$ or $j\not\in N_1$, which is to say, $z_{2k-j}=0$ or $z_j=0$. So $z_jz_{2k-j}=0$ for any $j\neq k$, and the equation $\sum\limits_{s=0}^{m-1}z_sz_{2k-s}=0$ reduces to $z_k^2=0$, contradicting $z_k\neq0$. Edit after two answers have been posted: Will Sawin's answer saves me from pursuing the approach in the previous edit. Lev Borisov wrote $f_t$ as a product of two linear factors and then suggested showing that all the possible linear systems are zero-dimensional. I tried to follow Lev Borisov's way, but still see no light. (If anyone knows how to properly carry it out, please point it out for me.) However, I figured out how to show the system is zero-dimensional for $m=10,12,14$. I will update my progress on this problem here. Hereafter, I will use the mod $m$ indices. The following observations will be useful. Claim 1: Let $a\in(\mathbb{Z}/m\mathbb{Z})^\times$. If $(x_0,x_1,\dots,x_{m-1})\in V$, then $(x_0,x_a,\dots,x_{a(m-1)})$ and $(x_{m/2},x_{1+m/2},\dots,x_{m-1+m/2})$ are both in $V$. In light of Claim 1, define maps $\phi_a$ and $\psi_a$ on $\mathbb{Z}/m\mathbb{Z}$ for each $a\in(\mathbb{Z}/m\mathbb{Z})^\times$ by $$ \phi_a(x):x\mapsto ax+m(1+\rho(a))/4,\quad\psi_a(x):x\mapsto ax+m(1-\rho(a))/4, $$ where $\rho$ is the Jacobi symbol mod $m$.
Then all the $\phi_a$ and $\psi_a$ form an abelian group $G$ of order $2\varphi(m)$, and all the $\phi_a$ form a subgroup $H$ of order $\varphi(m)$. Let $G$ act on $\mathbb{C}[z_0,\dots,z_{m-1}]$ by acting on the indices of the $z_k$'s. Claim 2: If $(x_0,x_1,\dots,x_{m-1})\in V$ satisfies $x_2=x_4=\dots=x_{m-2}=0$, then $x_1=x_3=\dots=x_{m-1}=0$. Claim 2 follows from the convolution formula of the discrete Fourier transform applied to $(x_1,x_3,\dots,x_{m-1})$. Similarly we have Claim 3: If $(x_0,x_1,\dots,x_{m-1})\in V$ satisfies $x_1=x_3=\dots=x_{m-1}=0$, then $x_2=x_4=\dots=x_{m-2}=0$. Case $m=10$: Multiplying both sides of $f_1=0$ by $z_1$ gives $z_1^3+2z_1z_4z_8=0$. Multiplying further by $z_3$ gives $z_1^3z_3=0$, which is equivalent to $z_1z_3=0$. Hence by Claim 1, $z_8z_4=\phi_3(z_1z_3)=0$. This leads to $z_1^3=0$, which is equivalent to $z_1=0$. Therefore, $V=\{0\}$ by Claim 1 since $G$ acts transitively on $\mathbb{Z}/10\mathbb{Z}$. Case $m=12$: $z_1z_3f_1=0$ gives $z_1^3z_3=0$, which is equivalent to $z_1z_3=0$. This implies $z_7z_9=\psi_1(z_1z_3)=0$ by Claim 1. Hence $f_2=0$ turns out to be $z_2^2+z_8^2=0$, which in conjunction with $z_2z_8=0$ implies $z_2=z_8=0$. Therefore, $z_4=z_{10}=0$ by Claim 1, and so $V=\{0\}$ by Claim 2. Case $m=14$: $z_1z_3f_1=0$ gives $z_1^3z_3+2z_1z_3z_4z_{12}=0$, and $z_1z_2z_3f_1=0$ gives $z_1z_2z_3=0$. The latter implies $z_4z_1z_{12}=\psi_{11}(z_1z_2z_3)=0$ by Claim 1, which leads to $z_1^3z_3=0$, i.e. $z_1z_3=0$. Hence $z_4z_{12}=\psi_{11}(z_1z_3)=0$ and $z_{11}z_5=\phi_{11}(z_1z_3)=0$. Thus $z_1f_1=0$ turns out to be $-z_1^3=2z_1z_6z_{10}$. Put $g=\phi_3$. Then $g$ generates $H$ and the above equation can be written as $-z_1^3=2z_1z_{g^3(1)}z_{g(1)}$. Now consider any $(x_0,x_1,\dots,x_{13})\in V$. By Claim 1, $-x_{g^j(1)}^3=2x_{g^j(1)}x_{g^{j+3}(1)}x_{g^{j+1}(1)}$ for $j=0,\dots,5$, and hence we know that $x_{g^{j+1}(1)}=0\Rightarrow x_{g^j(1)}=0$.
Therefore, $x_{g^j(1)}=0$ for any single $j$ will lead to $x_1=0$, and $$\prod\limits_{j=0}^5-x_{g^j(1)}^3=\prod\limits_{j=0}^52x_{g^j(1)}x_{g^{j+3}(1)}x_{g^{j+1}(1)}.$$ Writing $P=\prod_{j=0}^5x_{g^j(1)}$, the left side equals $P^3$ while the right side equals $2^6P^3$ (each index $g^j(1)$ occurs three times in the right-hand product), so $P=0$. Hence $x_{g^j(1)}=0$ for some $j$, which leads to $x_1=0$. Thus we have shown $x_1=0$ for every point of $V$, and so $V=\{0\}$ by Claim 1 since $G$ acts transitively on $\mathbb{Z}/14\mathbb{Z}$. In view of the above discussion, I would suggest first studying the case $m=2l$ where $l$ is prime. Even then, the cases $l\equiv1\pmod{4}$ and $l\equiv3\pmod{4}$ may differ, and we could assume one of them at the beginning.
Let $X$ be a Banach space and $Y$ a closed subspace of $X$. If $\varphi\in Y^*$, then Hahn-Banach allows us to extend $\varphi$ to a $\tilde\varphi\in X^*$, such that $\|\tilde\varphi\|=\|\varphi\|$. This extension can happen in infinitely many ways, in general. My question is the following: Can we guarantee the existence of a bounded linear transformation$$F : Y^*\to X^*, $$ such that $F(\varphi)$ is an extension of $\varphi\in Y^*$ as a bounded linear functional on $X$? (Preferably with $\|F\|=1$.) First unsuccessful attempt: Define $F$ on a (Hamel) basis of $Y^*$, and then linearly extend to the whole of $Y^*$. But this $F$ is not necessarily bounded.
Adventure Begins The game Pokenom Go has just been released. Pokenom trainers can now travel the world, capture Pokenom in the wild and battle each other! Bash — the Pokenom trainer — has decided to drop out of his university to pursue his childhood dream of becoming the best Pokenom trainer! However, Linux — Bash’s university headmaster — does not allow his students to drop out so easily … Linux puts $N$ black boxes on a straight line. The black boxes are numbered from $1$ to $N$ from left to right. Initially, all black boxes are empty. Then Linux gives Bash $Q$ queries. Each query can be one of the following $2$ types: Linux puts exactly one stone inside exactly one box between $u$-th box and $v$-th box, inclusive, with equal probability. $(1 \le u \le v \le N)$. Let $a_ i$ be the number of stones in black box numbered $i$. Let $A = \sum _{i=1}^{N}{a_ i^2}$. Bash has to calculate the expected value $E(A)$. Bash can only drop out of his university if he is able to answer all queries correctly. But now all Bash can think of is Pokenom. Please help him! Input The first line of input contains exactly $2$ positive integers $N$ and $Q$. $(1 \le N, Q \le 10^5)$. $Q$ lines follow, each line contains exactly one query. As explained, a query can be one of the following $2$ types: $1 \; u \; v$: Linux puts a stone inside one of the boxes between $u$ and $v$. $2$: Linux asks Bash to compute $E(A)$. Output It can be proved that the expected value can be represented as an irreducible fraction $\dfrac {A}{B}$. For each query of type $2$, print one line containing the value $A \times B^{-1}$ modulo $10^{9} + 7$. The given input guarantees that $B$ is not a multiple of $10^{9} + 7$. Explanation for examples In the first example: With a probability of $0.5$, two stones are in different squares. Hence, the answer to the fourth query is $0.5 \times (1^{2} + 1^{2}) + 0.5 \times 2^{2} = 3$. 
In the second example: With a probability of $\frac{2}{3}$, two stones are in different squares. Hence, the answer to the fourth query is $\frac{2}{3} \times 2 + \frac{1}{3} \times 4 = \frac{8}{3}$.

Sample Input 1:
2 4
1 1 2
2
1 1 2
2

Sample Output 1:
1
3

Sample Input 2:
3 4
1 1 3
2
1 1 3
2

Sample Output 2:
1
666666674
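One way to answer the queries (a sketch of our own, not an official solution): when a stone is dropped uniformly into $[u,v]$ of length $L$, $E[A]$ increases by exactly $1 + \frac{2}{L}\sum_{i=u}^{v}E[a_i]$, because $A$ grows by $2a_I+1$ and the landing box $I$ is independent of the current configuration; afterwards each $E[a_i]$ in the range grows by $1/L$. A Fenwick tree with range add and range sum over $\mathbb{Z}/(10^9+7)$ supports both operations in $O(\log N)$:

```python
# Maintain E[A] incrementally plus per-box expectations E[a_i] in a
# Fenwick tree (range add / range sum, modular arithmetic).
MOD = 10**9 + 7

class Fenwick:
    """Range add / range sum via the standard two-BIT trick."""
    def __init__(self, n):
        self.n = n
        self.b1 = [0] * (n + 1)
        self.b2 = [0] * (n + 1)

    def _add(self, b, i, x):
        while i <= self.n:
            b[i] = (b[i] + x) % MOD
            i += i & -i

    def _pref(self, i):
        s1 = s2 = 0
        j = i
        while j > 0:
            s1 = (s1 + self.b1[j]) % MOD
            s2 = (s2 + self.b2[j]) % MOD
            j -= j & -j
        return (s1 * i - s2) % MOD

    def range_add(self, l, r, x):
        self._add(self.b1, l, x)
        self._add(self.b1, r + 1, -x % MOD)
        self._add(self.b2, l, x * (l - 1) % MOD)
        self._add(self.b2, r + 1, -x * r % MOD)

    def range_sum(self, l, r):
        return (self._pref(r) - self._pref(l - 1)) % MOD

def solve(n, queries):
    """queries: (1, u, v) for updates, (2,) for questions; returns answers."""
    fw = Fenwick(n)
    ea = 0              # E[A] modulo MOD
    out = []
    for q in queries:
        if q[0] == 1:
            _, u, v = q
            inv_l = pow(v - u + 1, MOD - 2, MOD)      # 1/L mod p
            ea = (ea + 1 + 2 * inv_l * fw.range_sum(u, v)) % MOD
            fw.range_add(u, v, inv_l)                  # E[a_i] += 1/L
        else:
            out.append(ea)
    return out
```

On the two samples above this returns `[1, 3]` and `[1, 666666674]` respectively.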
There are many explanations to be found about shot noise in optics, but the answers I find are incompatible. There are three ways shot noise in optics is explained. (Note that according to Wikipedia, in general, shot noise is a type of noise which can be modeled by a Poisson process.) 1. It is the noise purely arising from (vacuum) fluctuations of the EM-field. For example, the book of Gerry and Knight states that "In an actual experiment, the signal beam is first blocked in order to obtain the shot-noise level." I guess the number of photons you would detect in this way follows a Poissonian distribution, hence the name `shot noise'. (For context, see screenshot of relevant section below - courtesy of Google Books) 2. It is due to 'the particle nature of light'. Semi-classically, a low intensity laser beam will emit photons following a Poisson distribution. If the beam is incident on a photon detector, this detector will receive a fluctuating number of photons per time bin (according to the Poissonian). Thus the intensity (~number of photons per time bin) will fluctuate. These fluctuations are the `shot noise'. 3. A laser beam emits a coherent state $|\alpha \rangle$. The probability to find $n$ photons upon measurement follows the Poisson distribution, $P(n)=|\langle n | \alpha \rangle|^2= \frac{\bar n^n}{n!}e^{-\bar n}$ with $\bar n = |\alpha|^2= \langle \alpha | a^\dagger a | \alpha \rangle $ the average number of photons. Thus there is shot noise in the number of photons. (Here $| n \rangle $ is a Fock state and $|\alpha \rangle $ a coherent state.) So what is shot noise? Can you have multiple sources of shot noise, throw them all on one heap and call the combination 'the shot noise'? Then how can you 'measure the shot noise level' as in 1, or 'measure at the shot noise level'? Explanation 1 is incompatible with 2 and 3, for both 2 and 3 will cause no photons at all to be counted in the vacuum state.
(The vacuum state is the coherent state with $\alpha=0$.)
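Explanations 2 and 3 both boil down to Poissonian counting statistics, which is easy to check numerically. A small standard-library sketch (the choice $\bar n = 9$ and the sampler are ours): for Poissonian photon counting the variance of the count equals its mean, which is exactly the scaling usually quoted as the shot-noise level.

```python
# Sample Poissonian photon counts and verify Var(n) ≈ mean(n) = nbar,
# i.e. the rms fluctuation relative to the mean scales as 1/sqrt(nbar).
import math
import random

def sample_poisson(lam, rng):
    # Knuth's multiplication method; fine for the modest lam used here.
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
nbar = 9.0   # mean photon number per time bin (an arbitrary choice)
counts = [sample_poisson(nbar, rng) for _ in range(100_000)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
# Poissonian counting: var ≈ mean ≈ nbar
```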
Hello!! I did this problem and have gotten it wrong-- but my math is right. Maybe I'm missing something? Here's the question: We want to support a thin hoop by a horizontal nail and have the hoop make one complete small-angle oscillation each 2.0 s. What must the hoop's radius be? So this is a physical pendulum, and in order to find the radius, I need to know the period. The period is found by: [tex]T=2\pi\sqrt{\displaystyle{\frac{I}{mgd}}}[/tex] Where I is the moment of inertia, g is the acceleration due to gravity, and d is the distance from the pivot point to the center of gravity. For a hoop, the moment of inertia is: [tex]I=MR^2[/tex] And the center of gravity is in the center of the hoop, so d is the radius. So with that information, I set up the following: [tex]T=2\pi\sqrt{\displaystyle{\frac{MR^2}{MgR}}}[/tex] [tex]T=2\pi\sqrt{\displaystyle{\frac{R}{g}}}[/tex] And in plugging in the values, I get: [tex]2=2\pi\sqrt{\displaystyle{\frac{R}{9.8}}}[/tex] Solving for R, I get .9929475 Which, when I plug into my equation, I get very close to 2 for my period. However, this is incorrect... Any ideas as to why? Thanks!!!!
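As an aside (our own sketch, not part of the original post), the numbers are quick to check in code. The sketch below evaluates the radius for two candidate moments of inertia: about the hoop's own center ($MR^2$, as used above) and about a pivot on the rim ($2MR^2$ by the parallel-axis theorem, since the nail supports the hoop at its rim), which is the usual point to double-check in this kind of problem.

```python
# Physical pendulum: T = 2*pi*sqrt(I / (M*g*d)), with d = R here
# (distance from the pivot to the hoop's center of gravity).
import math

g = 9.8   # m/s^2
T = 2.0   # s, desired period

# Candidate moments of inertia for a thin hoop of radius R:
#   about its own center:     I = M R^2    ->  T = 2*pi*sqrt(R/g)
#   about a point on the rim: I = 2 M R^2  ->  T = 2*pi*sqrt(2R/g)
R_center = g * T**2 / (4 * math.pi**2)   # ≈ 0.9929 m
R_rim = g * T**2 / (8 * math.pi**2)      # ≈ 0.4965 m
```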
Just as a single integral has a one-dimensional domain (a line) and a double integral a two-dimensional domain (an area), a triple integral has a three-dimensional domain (a volume). Furthermore, just as a single integral can be pictured as a 2D quantity (an area) and a double integral as a 3D quantity (a volume), a triple integral produces a quantity one dimension higher, namely 4D. However, because of the difficulty of visualizing a four-dimensional world, we can simplify and say that a triple integral over a domain of a certain volume, given a function as the density value at a point \((x,y,z)\), produces a mass, which can be written as follows: \[ \underset{D}{\iiint} \delta (x,y,z) dV \] \[ \underset{D}{\iiint} \delta (x,y,z) \,dz\, dy\, dx \] in which the order of \(dx\), \(dy\), and \(dz\) does not matter (provided the limits are adjusted accordingly), just like the order of \(dx\) and \(dy\) doesn't matter in double integrals. Triple Integrals in terms of Summation When we first learned the concept of integrals, we visualized the integral as an area under the curve. However, as we learned more about integrals, we realized that the integral is a sum of the values at points within a domain, which we divide into infinitely many parts. Likewise, triple integrals can be explained in terms of summation, \[ \underset{D}{\iiint} f(x,y,z) dV = \underset{n \to \infty}{\lim} \sum_{i=1}^{n} f(x_{i}, y_{i}, z_{i}) \Delta V_{i}\] where \(\Delta V_{i}\) is the volume of the \(i\)th piece of the domain and \((x_{i}, y_{i}, z_{i})\) is a point inside it. In other words, we divide the domain into little parts until they become like points, each of which approximately has the value \(f(x,y,z)\). Then we sum up all the values to find the value of the integral. Volume in terms of Triple Integral Let's return to the previous visualization of triple integrals as masses given a density function. Given an object (that is, a domain), if we let the density of the object equal 1, the mass of the object equals the volume of the object, because density is mass divided by volume.
So if the density equals 1, we can use a triple integral to find the volume, which is also the mass. So, \[ \mathrm{Volume} = \underset{D}{\iiint} dV.\] \[ \mathrm{Sum } = \underset{n \to \infty}{\lim} \sum_{i=1}^{n} f(x_{i}, y_{i}, z_{i}) \Delta V_{i}\] \[ \mathrm{Sum } = \underset{n \to \infty}{\lim} \sum_{i=1}^{n} \Delta V_{i}\] \[ \underset{n \to \infty}{\lim} \sum_{i=1}^{n} \Delta V_{i} = \underset{D}{\iiint} dV\] Average Value of a Function The average value of a list is defined as the sum of all the values in the list divided by the number of values in the list. Now when finding the average value of a function \(f(x,y,z)\), we must find the overall sum of all the values of the function within a domain divided by the overall size of the domain. In other words, \[\text{Average Value of } f \approx \dfrac{\sum_{i=1}^{n} f(x_{i},y_{i},z_{i})}{n}\] And before we derive the formula for finding the average value of a function, we must understand that the piece of volume, \(\Delta V\), is simply the whole volume of the domain \(D\) divided by some number \(n\). \[\text{so, } \Delta V = \dfrac{\text{Volume of D}}{n}\] \[\text{which also means, } n = \dfrac{\text{Volume of D}}{\Delta V}\] So using this information we can substitute \(n\) with \(\dfrac{\text{Volume of D}}{\Delta V}\). \[\text{Average Value of } f \approx \dfrac{\sum_{i=1}^{n} f(x_{i},y_{i},z_{i})}{n}\] \[= \dfrac{\sum_{i=1}^{n} f(x_{i},y_{i},z_{i})}{\dfrac{\text{Volume of D}}{\Delta V}}\] \[= \dfrac{\sum_{i=1}^{n} f(x_{i},y_{i},z_{i}) \Delta V}{\text{Volume of D}}\] therefore the average value of \(f\) is \[ \approx \dfrac{1}{\text{Volume of D}} \sum_{i=1}^{n} f(x_{i},y_{i},z_{i}) \Delta V. \] And as \(n\) approaches infinity, meaning \(\Delta V\) becomes extremely small, we can find the actual average value of the function.
\[\text{so, Average Value of }f = \dfrac{1}{\text{Volume of D}} \underset{n \to \infty}{\lim} \sum_{i=1}^{n} f(x_{i},y_{i},z_{i}) \Delta V \] \[\therefore \text{Average Value of } f = \dfrac{1}{\text{Volume of D}} \underset{D}{\iiint} f(x,y,z) dV \] Now, from the previous sub-section, we learned that the Volume of \(D\) is the triple integral of the domain \(D\) when \(f(x,y,z) = 1\). \[\text{So, Average Value of }f = \dfrac{\underset{D}{\iiint} f(x,y,z) dV}{\underset{D}{\iiint} dV.} \] Finding the Bounds in the Order of \(dz\), \(dy\), \(dx\) Although the order of integration does not matter in finding the answer, we will be finding the limits of integration in the order of dz, dy, dx just to make the explanation easier. Conceptually, finding the limits is as follows: To find the z-limits of integration, we must look at the domain in 3D perspective and draw a ray in the positive z-direction through the center of the domain. Then we must find the lower surface and the upper surface that the ray passes through. And these surfaces are typically expressed in the forms of \(z=f(x,y)\). To find the y-limits of integration, we need to look at the domain in 2D perspective, or the x-y surface. So imagine that we have slapped the domain onto the xy-plane, or that we are looking at the domain straight down from the positive z-value. And with the domain in 2D perspective, draw a ray in the positive y-direction through the center of the domain. Then identify two curves, one that the ray passed through first and another later, that are usually expressed in the forms of \(y=f(x)\). To find the x-limits of integration, we now need to look at the domain in 1D perspective, or the x-axis. Just like in the case of finding y-limits, let us imagine that we are looking at the 2D domain (note that it is not the original 3D domain but the 2D domain that we were looking at to find the y-limits) from the positive y-value, so that the domain looks like the interval in the x-axis. 
Then find the lower limit and the upper limit just like how we find the limits in single integrals. Example \(\PageIndex{1}\): Limits of Integration Find the mass (in kg) of a ball which has a radius of 2 m and a density \(\mathbf{ \delta (x,y,z) = 2} \) kg/m\(^3\). Solution Now we must plug all the information into the triple integral. \[ \underset{D}{\iiint} \delta (x,y,z) dV = \int_{x_{L}}^{x_{U}} \int_{y_{L}}^{y_{U}} \int_{z_{L}}^{z_{U}} \delta (x,y,z) dz dy dx\] \[ = \int_{-2}^{2} \int_{-\sqrt{4-x^2}}^{\sqrt{4-x^2}} \int_{-\sqrt{4-x^2-y^2}}^{\sqrt{4-x^2-y^2}} 2 \; dz dy dx \] However, before we integrate this, we must realize that the density is constant regardless of \(x\), \(y\), or \(z\), and the domain is perfectly symmetric in all \(x\), \(y\), and \(z\) directions. So we can take the integral in the first octant and multiply the result by 8 to obtain the mass. In other words, \[ \mathrm{Mass } = 8 \int_{0}^{2} \int_{0}^{\sqrt{4-x^2}} \int_{0}^{\sqrt{4-x^2-y^2}} 2 \; dz dy dx. \] And since this integration is rather tedious, we will leave the answer in the integral form. Example \(\PageIndex{2}\): Volume and Average Value Find the average value of \(\mathbf{f(x,y,z) = 8xyz}\) over a domain bounded by \(\mathbf{z=x+y}\), \(\mathbf{z=0}\), \(\mathbf{y=x}\), \(\mathbf{y=0}\), and \(\mathbf{x=1}\). Graph of the domain. First of all, we know that \[\textrm{Average Value of }f(x,y,z) = \dfrac{\textrm{Integral of }f(x,y,z) \text{ over D}}{\textrm{Volume of D}}\] So we must find the triple integral of the function \(f(x,y,z)\) and the volume of the domain using the triple integral. Part 1: Volume Since calculating the volume is much easier, we will first find the volume of the domain bounded by the planes listed above.
And the formula for the volume of domain D is as follows: \[\text{Volume of D} = \underset{D}{\iiint} dz dy dx.\] In finding the limits of integration, we must notice that the bounds for the domain are rather simple, so we can easily identify the limits: \[= \int_{0}^{1} \int_{0}^{x} \int_{0}^{x+y} dz dy dx.\] Now, we can simply do the integration as we have learned from double integration. \[\begin{align*} & = \int_{0}^{1} \int_{0}^{x} \left [ z \right ]_{0}^{x+y} dy dx \nonumber \\[4pt] & = \int_{0}^{1} \int_{0}^{x} x+y \; dy dx \nonumber \\[4pt] & = \int_{0}^{1} \left [ xy + \dfrac{y^2}{2} \right ]_{0}^{x} dx \nonumber \\[4pt] & = \int_{0}^{1} x^2 + \dfrac{x^2}{2} \; dx \nonumber \\[4pt] & = \int_{0}^{1} \dfrac{3}{2}x^2 dx \nonumber \\[4pt] & = \left [ \dfrac{1}{2} x^3 \right ]_{0}^{1} \nonumber \\[4pt] & = \dfrac{1}{2} \nonumber \\[4pt] \therefore \textrm{Volume of D} &= \dfrac{1}{2} \end{align*}\] Part 2: Integral of \(f(x,y,z)\) over \(D\) From what we have learned so far, we know that: \[ \textrm{Integral of } f(x,y,z) \textrm{ over D} = \underset{D}{\iiint} f(x,y,z) dz dy dx. \] And from part 1, we have already found the limits of integration. So the equation becomes: \[ = \int_{0}^{1} \int_{0}^{x} \int_{0}^{x+y} 8 x y z \; dz dy dx.\] From this, we can simply do the integration. \[\begin{align*} & = \int_{0}^{1} \int_{0}^{x} \left [ 4 x y z^2 \right ]_{0}^{x+y} dy dx \nonumber \\[4pt] & = \int_{0}^{1}\int_{0}^{x} 4 x y (x + y)^2 dy dx \nonumber \\[4pt] & = \int_{0}^{1 }\int_{0}^{x} 4xy(x^2+2xy+y^2) dy dx \nonumber \\[4pt] & = \int_{0}^{1 }\int_{0}^{x} 4x^3y + 8x^2y^2 +4xy^3 \; dy dx \nonumber \\[4pt] & = \int_{0}^{1} \left [ 2x^3y^2 + \dfrac{8}{3}x^2y^3 + xy^4 \right ]_{0}^{x} dx \nonumber \\[4pt] & = \int_{0}^{1} 2x^5 + \dfrac{8}{3}x^5 + x^5 \; dx \nonumber \\[4pt] & = \int_{0}^{1} \dfrac{17}{3} x^5 dx \nonumber \\[4pt] & = \left [ \dfrac{17}{18} x^6 \right ]_{0}^{1} \nonumber \\[4pt] & = \dfrac{17}{18} \nonumber \\[4pt] \therefore \textrm{Integral of } f(x,y,z) &= \dfrac{17}{18} \end{align*}\] Part 3: Average Value of \(f(x,y,z)\) After calculating the integral of \(f(x,y,z)\) over the domain and the volume of the domain, calculating the average value of the function is extremely easy. As stated above, \[ \textrm{Average Value of } f(x,y,z) = \dfrac{\textrm{Integral of } f(x,y,z)}{\textrm{Volume of D}}.\] Then we substitute the values we found in parts 1 and 2: \[ \textrm{Average Value of } f(x,y,z) = \dfrac{17/18}{1/2} = \dfrac{17}{9}\] \[\therefore \textrm{Average Value of } f(x,y,z) = \dfrac{17}{9} .\] Example \(\PageIndex{3}\) In a country of slimes, there was a slime king whose massive figure was measured to be 3m wide, 3m long, and 4m tall. After years of research, slime scientists found that the density of a slime is as follows: \[\mathbf{\delta (x,y,z) = \dfrac{1}{z+1}}. \] However, because the king dislikes complex mathematics involving variables, he ordered the scientists to find the average density of his slime. Using the king's massive figure, calculate the average density of the king. Solution First of all, the question is asking for the average value of the density function. So we must find the volume and the integral of the density function over the domain.
Calculating the volume of the slime king is simple: \[\textrm{Volume} = \textrm{Length} \times \textrm{Width} \times \textrm{Height} = 3 \times 3 \times 4 = 36 \; \mathrm{m^3}.\] Next we compute the integral of the density function over the domain: \[\textrm{Integral of } \delta (x,y,z) \textrm{ over D} = \underset{D}{\iiint} \delta (x,y,z) dz dy dx.\] And for this specific problem, it becomes: \[ = \int_{0}^{3} \int_{0}^{3} \int_{0}^{4} \dfrac{1}{z+1} dz dy dx.\] And from here, it just becomes a simple integration: \[\begin{align*} & = \int_{0}^{3} \int_{0}^{3} \left [ \ln \left | z+1 \right | \right ]_{0}^{4} dy dx \nonumber \\[4pt] & = \int_{0}^{3} \int_{0}^{3} \ln (5) \; dy dx \nonumber \\[4pt] & = \int_{0}^{3} \left [ \ln (5) y \right ]_{0}^{3}dx \nonumber \\[4pt] & = \int_{0}^{3} 3 \ln (5) dx \nonumber \\[4pt] & = \left [ 3 x \ln (5) \right ]_{0}^{3} \nonumber \\[4pt] &= 9 \ln (5) \; \mathrm{kg}. & \end{align*}\] As for the final answer, we must divide the integral by the volume to get the average density of the Slime King: \[ = \dfrac{\textrm{Integral of } \delta}{\textrm{Volume}} = \dfrac{9\ln(5)}{36} = \dfrac{\ln(5)}{4} \; \mathrm{kg/m^3}. \] Contributors Joseph Sanghun Lee (UCD) Integrated by Justin Marshall.
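Examples like this one can be cross-checked numerically; a small midpoint-rule sketch (pure Python, our own code) applied to the slime box:

```python
# Midpoint-rule approximation of a triple integral over a box, applied to
# δ(x,y,z) = 1/(z+1) on [0,3]×[0,3]×[0,4]; expect mass ≈ 9·ln(5) and
# average density ≈ ln(5)/4.
import math

def triple_midpoint(f, xr, yr, zr, n=60):
    (x0, x1), (y0, y1), (z0, z1) = xr, yr, zr
    hx, hy, hz = (x1 - x0) / n, (y1 - y0) / n, (z1 - z0) / n
    total = 0.0
    for i in range(n):
        x = x0 + (i + 0.5) * hx
        for j in range(n):
            y = y0 + (j + 0.5) * hy
            for k in range(n):
                z = z0 + (k + 0.5) * hz
                total += f(x, y, z)
    return total * hx * hy * hz

mass = triple_midpoint(lambda x, y, z: 1.0 / (z + 1.0), (0, 3), (0, 3), (0, 4))
avg = mass / 36.0
# mass ≈ 9*ln(5) ≈ 14.485 kg, avg ≈ ln(5)/4 ≈ 0.402 kg/m^3
```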
Here's an asymptotic bound - hopefully it's tight. We throw $N$ little balls of diameter $D$ randomly (uniformly) inside the big sphere of radius $R$. UPDATE: The new version (lower half) is better than what follows. Neglecting border effects (reasonable if $N$ is large) the probability that the ball $i$ is "free" (no other ball overlaps with it) is $$P(F_i)=\left(1-v(D)\right)^{N-1} \tag{1}$$ where $v(D) \triangleq D^3/R^3$. The probability that all balls are free can be bounded as $$P(\cap F_i)=1- P(\cup F_i^c)\ge 1 - N(1-P(F_i)) \triangleq g(D,N) \tag{2} $$ For large $N$ $$g(D,N) \approx 1 - N^2 \, v(D) \tag{3} $$ in the range where this is positive, i.e. $0\le D \le D_1 \triangleq R/N^{2/3} $. Now, let $t$ be the minimum distance between the sphere centers. Then $$P(t \ge D) = P(\cap F_i) \ge g(D,N) \tag{4}$$ And then $$E(t) = \int_0^{\infty} P(t \ge D) dD \ge\int_0^{D_1} g(D,N) \, dD \approx D_1- N^2 \frac{ D_1^4}{4 R^3} = \frac{3 }{ 4 } \frac{R}{N^{2/3}} $$ (Simulation data suggests that the order is right, and so is the bound, but the real coefficient is around $1.12$ - perhaps $9/8$.) Update (improved version): A better approach can be obtained by considering, instead of $F_i$ (free ball), the event $S_j\equiv$ "separated pair" (a pair of balls is separated, i.e. they do not overlap), where $j$ indexes the $M=N(N-1)/2 \approx N^2/2$ pairs. By the same reasoning: $$P(S_j)=1-v(D) =1 - \frac{D^3}{R^3} \tag{5}$$ $$P(\cap S_i) \ge \max(1 - M(1-P(S_i)),0)= \max\left(1 - M \frac{D^3}{R^3},0\right) \triangleq h(D,M) \tag{6} $$ $h(D,M)$ is positive in the range $0\le D \le D_2 \triangleq R/M^{1/3} $. Now, let $t$ be the minimum distance between the sphere centers.
Then $$P(t \ge D) = P(\cap S_i) \ge h(D,M) \tag{7}$$ And then $$E(t) = \int_0^{\infty} P(t \ge D) dD \ge\int_0^{D_2} h(D,M) \, dD =\\= \frac{3}{4} \frac{R}{M^{1/3}} \approx 0.945 \frac{R}{\sqrt[3]{N(N-1)}} \approx 0.945 \frac{R}{N^{2/3}} \tag{8}$$ Update 2: A simple heuristic which seems to produce the correct coefficient: following the approach above, we could assume that the $S_i$ are asymptotically independent, and then: $$P(\cap S_i) \approx \left(1-\frac{D^3}{R^3}\right)^M \tag{9}$$ Then $$E(t) \approx \int_0^{R}\left(1-\frac{D^3}{R^3}\right)^M dD =\\= R \, \Gamma(4/3) \frac{\Gamma(M+1)}{\Gamma(M+4/3)} \approx R \, \Gamma(4/3) M^{-1/3} \approx 1.12508368 \frac{R}{N^{2/3}} \tag{10}$$ Update 3: Regarding corrections for border effects. (Let's assume $R=1$ to save notation; it's just a scale factor.) If we wished to include border effects we should replace $(5)$ (computing the balls' intersection as here) by $$1-D^3+\frac{9}{16}D^4 -\frac{1}{32}D^6 \hspace{1cm} 0\le D\le 2$$ The integral gets more complicated, but the (first order) asymptotic result is not altered. Lemma: For any positive differentiable function $g(x)$ which, in $[0,+\infty)$, has global maximum $g(0)=1$ and zero first and second derivatives there, so that $g(x)=1-a x^3 + O(x^4)$, we have (a variation of Laplace's method, see e.g. here sec 2.1.3) $$ \int_0^\infty g(x)^M dx = \frac{\Gamma(1/3)}{3 a^{1/3}} M^{-1/3}+ o(M^{-1/3})$$ which again leads us to $(10)$.
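The simulation mentioned above is easy to reproduce; a sketch of our own (with $R=1$): sample $N$ centers uniformly in the unit ball, average the minimum pairwise distance over many trials, and compare $E(t)\,N^{2/3}$ with the heuristic coefficient $\Gamma(4/3)\,2^{1/3}\approx1.125$.

```python
# Monte Carlo estimate of E(t), the minimum pairwise distance between
# N uniform points in the unit ball.
import math
import random

def uniform_ball(rng):
    # rejection sampling from the enclosing cube (acceptance ≈ 0.52)
    while True:
        p = (rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
        if p[0] * p[0] + p[1] * p[1] + p[2] * p[2] <= 1.0:
            return p

def min_pair_dist(pts):
    return min(math.dist(p, q)
               for i, p in enumerate(pts)
               for q in pts[i + 1:])

def mean_min_dist(n_pts, trials, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min_pair_dist([uniform_ball(rng) for _ in range(n_pts)])
    return total / trials

# e.g. mean_min_dist(50, 500) * 50**(2/3) lands near Gamma(4/3)*2**(1/3) ≈ 1.125
# (slightly above it for moderate N, consistent with the border-effect update)
```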
I would like to calculate the distribution function from the characteristic function. There is a formula given as $$F_X(x)=\frac{1}{2}+\frac{1}{2\pi}\int_{0}^\infty \frac{e^{i w x}\phi(-w)-e^{-i w x}\phi(w)}{i w}\mbox{d}w$$ Actually, I want to evaluate $F_X(x)$ at some real number $b<0$. I wrote some code in Mathematica, unfortunately with no success. The code gives some results, but they seem to be incorrect. Here is my code:

opts = {Method -> {Automatic, "SymbolicProcessing" -> None}, AccuracyGoal -> 8}
b = -2;
f1[x_] := PDF[NormalDistribution[2, 2], x]
ff1[\[Omega]_] := InverseFourierTransform[f1[x], x, \[Omega]]
qn1 = 1/2 + 1/(2*Pi) NIntegrate[(Exp[I \[Omega] b]*ff1[-\[Omega]] - Exp[-I \[Omega] b]*ff1[\[Omega]])/(I \[Omega]), {\[Omega], 0, Infinity}, Evaluate@opts]
NIntegrate[f1[x], {x, -Infinity, b}]

What I expect is qn1 to be equal to the result of the last line of my code, but qn1 is just $0$. I cannot see the problem. Perhaps someone can? Thanks in advance
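Independently of Mathematica, the inversion formula itself can be checked numerically by entering the characteristic function of $N(2,2)$, $\phi(w)=e^{2iw-2w^2}$, by hand; a small Python sketch (names and step sizes are our own):

```python
# Numerical check of the Gil-Pelaez-type inversion formula for N(mu=2, sigma=2)
# at b = -2, using only the standard library.
import cmath
import math

def phi(w):
    # characteristic function exp(i*mu*w - sigma^2 * w^2 / 2) with mu=2, sigma=2
    return cmath.exp(2j * w - 2.0 * w * w)

def inversion_cdf(b, upper=10.0, n=50_000):
    # F(b) = 1/2 + (1/2π) ∫₀^∞ (e^{iwb} φ(-w) - e^{-iwb} φ(w)) / (iw) dw,
    # midpoint rule (the integrand has the finite limit 2(b-μ) as w → 0)
    h = upper / n
    s = 0.0
    for k in range(n):
        w = (k + 0.5) * h
        num = cmath.exp(1j * w * b) * phi(-w) - cmath.exp(-1j * w * b) * phi(w)
        s += (num / (1j * w)).real
    return 0.5 + s * h / (2 * math.pi)

exact = 0.5 * (1 + math.erf((-2 - 2) / (2 * math.sqrt(2))))  # Φ((b-μ)/σ) at b=-2
```

For $b=-2$ this agrees with $\Phi((b-\mu)/\sigma)=\Phi(-2)\approx0.0228$, which suggests the formula is fine and the issue lies in how `ff1` is evaluated numerically inside `NIntegrate`.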
I want to evaluate the integral: $$\int_{-\infty}^{0}\frac{2x^2-1}{x^4+1}\,dx$$ using contour integration. Since the integrand is even, I re-wrote it as: $\displaystyle \int_{0}^{\infty}\frac{2x^2-1}{x^4+1}\,dx$. I am considering integrating over a semicircular contour centered at the origin. I considered the function $\displaystyle f(z)=\frac{2z^2-1}{z^4+1}$, which has $4$ simple poles, but only two of them lie in the upper half plane and are included in the contour: $\displaystyle z_1=\frac{1+i}{\sqrt{2}}, \;\; z_2=\frac{-1+i}{\sqrt{2}}$. The residue at $\displaystyle z_1$ equals $\displaystyle \mathfrak{Res}\left ( f; z_1 \right )=-\frac{2i-1}{2\sqrt{2}}$ while the residue at $z_2$ equals $\displaystyle -2\sqrt{2}i-2\sqrt{2}$ (if I have done the calculations right). Now, I don't know how to continue. Should I find the residues at the other poles as well and then say $\displaystyle \oint_{C}f(z)\,dz=2\pi i \sum \mathrm{res}$, where $C$ is the semicircle contour, and then expand it? That is: $$\oint_{C}f(z)\,dz=\int_{-a}^{a} + \int_{{\rm arc}}$$ Then let $a \to +\infty$, so that the arc integral goes to zero. But I don't know how to proceed. I had dealt with this integral using residues by converting it into an integral from minus infinity to infinity, but with contours I am having a bit of trouble. Therefore I'd like some help.
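For reference, here is a quick numerical check (an added sketch, not part of the question) of the value the contour computation should reproduce: by evenness the original integral equals $\int_0^\infty \frac{2x^2-1}{x^4+1}\,dx = \frac{\pi}{2\sqrt{2}} \approx 1.1107$.

```python
import math

def simpson(f, a, b, n):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f = lambda x: (2.0 * x**2 - 1.0) / (x**4 + 1.0)
L = 200.0
# For large x the integrand behaves like 2/x^2, so approximate the
# tail by the integral of 2/x^2 from L to infinity, which is 2/L.
approx = simpson(f, 0.0, L, 200000) + 2.0 / L
print(approx, math.pi / (2.0 * math.sqrt(2.0)))  # both ≈ 1.1107
```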
1. J/ψ production cross section and its dependence on charged-particle multiplicity in p + p collisions at √s = 200 GeV. Physics Letters B, ISSN 0370-2693, 11/2018, Volume 786, pp. 87 - 93. We present a measurement of inclusive production at mid-rapidity ( ) in collisions at a center-of-mass energy of GeV with the STAR experiment at the... [formula omitted] collisions | Charged-particle multiplicity | Quarkonium | Multiple parton interactions | p+p collisions | NUCLEAR PHYSICS AND RADIATION PHYSICS. Journal Article 2. Physical Review Letters, ISSN 0031-9007, 06/2015, Volume 115, Issue 1, p. 012301. The second-order azimuthal anisotropy Fourier harmonics, nu(2), are obtained in p-Pb and PbPb collisions over a wide pseudorapidity (η) range based on... PLUS AU COLLISIONS | ANISOTROPIC FLOW | PROTON-PROTON | PHYSICS, MULTIDISCIPLINARY | ECCENTRICITIES | Correlation | Large Hadron Collider | Anisotropy | Dynamics | Collisions | Luminosity | Charged particles | Dynamical systems. Journal Article 3.
Constraints on parton distribution functions and extraction of the strong coupling constant from the inclusive jet cross section in pp collisions at √s = 7 TeV. ISSN 1434-6044, 2015. statistical | experimental results | CMS | jet: hadroproduction | parton: distribution function | CERN LHC Coll | dijet: final state | gluon: density | jet: rapidity | electroweak interaction: correction | p: distribution function | Z0: mass | phase space | final state: (3jet) | jet: transverse momentum | p: momentum | quantum chromodynamics: perturbation theory | p: structure function | p p: colliding beams | strong interaction: coupling constant | p p: scattering | strong coupling | 7000 GeV-cms | higher-order: 1. Journal Article 4. Measurement of the differential cross section and charge asymmetry for inclusive $\mathrm{p}\mathrm{p}\rightarrow \mathrm{W}^{\pm}+X$ production at $\sqrt{s} = 8$ TeV. The European Physical Journal C, ISSN 1434-6044, 8/2016, Volume 76, Issue 8, pp. 1 - 27. The differential cross section and charge asymmetry for inclusive $\mathrm{p}\mathrm{p}\rightarrow \mathrm{W}^{\pm}+X \rightarrow \mu^{\pm}\nu+X$ production... Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology. Journal Article 5.
Search for heavy Majorana neutrinos in μ±μ± + jets events in proton–proton collisions at √s = 8 TeV. ISSN 0370-2693, 2015. dimuon: same sign | experimental results | CMS | CERN LHC Coll | neutrino: heavy: search for | 8000 GeV-cms | neutrino: mixing | background | p p --> 2muon- 2jet anything | neutrino: Majorana: mass | p p: colliding beams | p p: scattering | final state: ((n)jet dilepton) | p p --> 2muon+ 2jet anything. Journal Article 6. Search for heavy Majorana neutrinos in μ±μ± + jets events in proton-proton collisions at √s = 8 TeV. PHYSICS LETTERS B, ISSN 0370-2693, 09/2015, Volume 748, pp. 144 - 166. A search is performed for heavy Majorana neutrinos (N) using an event signature defined by two muons of the same charge and two jets (μ±μ±jj). The... Heavy neutrino | LEPTONS | MASSES | ASTRONOMY & ASTROPHYSICS | CMS | PHYSICS, NUCLEAR | Physics | NONCONSERVATION | PHYSICS, PARTICLES & FIELDS. Journal Article 7. Low-cost process for P-N junction-type solar cell: final report, covering the period 1 September 1980 to 28 February 1981. 1981, ix, 97. Book 8. Evidence for transverse-momentum- and pseudorapidity-dependent event-plane fluctuations in PbPb and p-Pb collisions. Physical Review C - Nuclear Physics, ISSN 0556-2813, 09/2015, Volume 92, Issue 3. Journal Article 9. Observation of Charge-Dependent Azimuthal Correlations in p-Pb Collisions and Its Implication for the Search for the Chiral Magnetic Effect. Physical Review Letters, ISSN 0031-9007, 03/2017, Volume 118, Issue 12. Charge-dependent azimuthal particle correlations with respect to the second-order event plane in p-Pb and PbPb collisions at a nucleon-nucleon center-of-mass...
PARITY VIOLATION | SEPARATION | PHYSICS, MULTIDISCIPLINARY | FIELD | Hadrons | Correlation | Large Hadron Collider | Searching | Correlation analysis | Collisions | Solenoids | Atomic collisions | NUCLEAR PHYSICS AND RADIATION PHYSICS | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article 10. Searches for supersymmetry based on events with b jets and four W bosons in pp collisions at 8 TeV. ISSN 0370-2693, 2015. sbottom: pair production | supersymmetry | experimental results | sbottom: mass: lower limit | gluino: mass: lower limit | CMS | W: associated production | CERN LHC Coll | p p --> 2sbottom anything | jet: bottom | lepton: multiplicity | 8000 GeV-cms | signature: leptonic | W: decay modes | final state: ((n)lepton) | gluino: pair production | dilepton: same sign | p p: colliding beams | p p: scattering | p p --> 2gluino anything. Journal Article 11. Measurement of the cross section ratio $\sigma_\mathrm{t \bar{t} b \bar{b}} / \sigma_\mathrm{t \bar{t} jj }$ in pp collisions at $\sqrt{s}$ = 8 TeV. ISSN 0370-2693, 2015. experimental results | jet: associated production | cross section: ratio: calculated | p p --> 2top 2bottom anything | CMS | CERN LHC Coll | cross section: ratio: measured | jet: bottom | 8000 GeV-cms | top: pair production | channel cross section | p p --> 2top 2jet anything | jet: transverse momentum | p p: colliding beams | p p: scattering | final state: ((n)jet dilepton) | higher-order: 1. Journal Article 12. Measurement of prompt ψ(2S) to J/ψ yield ratios in Pb-Pb and p-p collisions at √s_NN = 2.76 TeV. Physical review letters, 12/2014, Volume 113, Issue 26, p.
262301. The ratio between the prompt ψ(2S) and J/ψ yields, reconstructed via their decays into μ+μ−, is measured in Pb-Pb and p-p collisions at √s_NN = 2.76 TeV.... Journal Article 13. Measurement of prompt ψ(2S) to J/ψ yield ratios in Pb-Pb and p-p collisions at √s_NN = 2.76 TeV. Physical Review Letters, ISSN 0031-9007, 12/2014, Volume 113, Issue 26. Journal Article
We solve the following problem that arises, for example, in sparse signal reconstruction problems such as compressed sensing: \[ \mbox{minimize } ||x||_1 \mbox{ ($L_1$) }\\ \mbox{subject to } Ax = b \] with \(x\in R^n\), \(A \in R^{m \times n}\) and \(m\leq n.\) Reformulate the problem expressing the \(L_1\) norm of \(x\) via the bounds \[ x \leq u\\ -x \leq u\\ \] where \(u\in R^n\), and we minimize the sum of the entries of \(u\). The reformulated problem using the stacked variable \[ z = \begin{pmatrix}x\\u\end{pmatrix} \] is now \[ \mbox{minimize } c^{\top}z\\ \mbox{subject to } \tilde{A}z = b \mbox{ (LP) }\\ Gz \leq h \] where the inequality is with respect to the positive orthant. Here is the R code that generates a random instance of this problem and solves it.

library(ECOSolveR)
library(Matrix)
set.seed(182391)
n <- 1000L
m <- 10L
density <- 0.01
c <- c(rep(0.0, n), rep(1.0, n))

First, a function to generate random sparse matrices with normal entries.

sprandn <- function(nrow, ncol, density) {
  items <- ceiling(nrow * ncol * density)
  matrix(c(rnorm(items), rep(0, nrow * ncol - items)), nrow = nrow)
}

A <- sprandn(m, n, density)
Atilde <- Matrix(cbind(A, matrix(rep(0.0, m * n), nrow = m)), sparse = TRUE)
b <- rnorm(m)
I <- diag(n)
G <- rbind(cbind(I, -I), cbind(-I, -I))
G <- Matrix(G, sparse = TRUE)
h <- rep(0.0, 2L * n)
dims <- list(l = 2L * n, q = NULL, e = 0L)

Note how ECOS expects sparse matrices, not ordinary matrices.

## Solve the problem
z <- ECOS_csolve(c = c, G = G, h = h, dims = dims, A = Atilde, b = b)

We check that the solution was found.

names(z)
## [1] "x" "y" "s" "z" "infostring"
## [6] "retcodes" "summary" "timing"
z$infostring
## [1] "Optimal solution found"

Extract the solution.

x <- z$x[1:n]
u <- z$x[(n+1):(2*n)]
nnzx = sum(abs(x) > 1e-8)
sprintf("x reconstructed with %d non-zero entries", nnzx / length(x) * 100)
## [1] "x reconstructed with 100 non-zero entries"
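As an aside, the reason this LP tends to return a sparse \(x\) can be seen on a toy instance (a hypothetical example, written in Python for brevity and independent of the R code above): minimizing \(|x_1|+|x_2|\) subject to \(x_1+2x_2=2\) over the one-parameter family of feasible points attains its minimum at the sparse point \(x=(0,1)\), with objective value 1.

```python
# Toy instance (hypothetical, for illustration): minimize |x1| + |x2|
# subject to x1 + 2*x2 = 2.  Parametrize the feasible line by
# x1 = t, x2 = (2 - t)/2 and scan t on a grid.
best_t, best_val = None, float("inf")
for i in range(6001):
    t = -3.0 + i * 0.001
    val = abs(t) + abs((2.0 - t) / 2.0)
    if val < best_val:
        best_t, best_val = t, val
print(best_t, best_val)  # minimum ≈ 1 at t ≈ 0, i.e. the sparse point x = (0, 1)
```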
Skills to Develop Solve equations with fraction coefficients Solve equations with decimal coefficients be prepared! Before you get started, take this readiness quiz. Solve Equations with Fraction Coefficients Let’s use the General Strategy for Solving Linear Equations introduced earlier to solve the equation \(\frac{1}{8}x + \frac{1}{2} = \frac{1}{4}\). To isolate the x term, subtract \(\frac{1}{2}\) from both sides. $$\frac{1}{8} x + \frac{1}{2} \textcolor{red}{- \frac{1}{2}} = \frac{1}{4} \textcolor{red}{- \frac{1}{2}}$$ Simplify the left side. $$\frac{1}{8} x = \frac{1}{4} - \frac{1}{2}$$ Change the constants to equivalent fractions with the LCD. $$\frac{1}{8} x = \frac{1}{4} - \frac{2}{4}$$ Subtract. $$\frac{1}{8} x = - \frac{1}{4}$$ Multiply both sides by the reciprocal of \(\frac{1}{8}\). $$\textcolor{red}{\frac{8}{1}} \cdot \frac{1}{8} x = \textcolor{red}{\frac{8}{1}} \left(- \dfrac{1}{4}\right)$$ Simplify. $$x = -2$$ This method worked fine, but many students don’t feel very confident when they see all those fractions. So we are going to show an alternate method to solve equations with fractions. This alternate method eliminates the fractions. We will apply the Multiplication Property of Equality and multiply both sides of an equation by the least common denominator of all the fractions in the equation. The result of this operation will be a new equation, equivalent to the first, but with no fractions. This process is called clearing the equation of fractions. Let’s solve the same equation again, but this time use the method that clears the fractions. Example 8.37: Solve: \(\frac{1}{8} x + \frac{1}{2} = \frac{1}{4}\). Solution Find the least common denominator of all the fractions in the equation. $$\frac{1}{8} x + \frac{1}{2} = \frac{1}{4} \quad LCD = 8$$ Multiply both sides of the equation by that LCD, 8. This clears the fractions. 
$$\textcolor{red}{8} \left(\dfrac{1}{8} x + \dfrac{1}{2}\right) = \textcolor{red}{8} \left(\dfrac{1}{4}\right)$$ Use the Distributive Property. $$8 \cdot \frac{1}{8} x + 8 \cdot \frac{1}{2} = 8 \cdot \frac{1}{4}$$ Simplify — and notice, no more fractions! $$x + 4 = 2$$ Solve using the General Strategy for Solving Linear Equations. $$x + 4 \textcolor{red}{-4} = 2 \textcolor{red}{-4}$$ Simplify. $$x = -2$$ Check: Let x = −2. $$\begin{split} \frac{1}{8} x + \frac{1}{2} &= \frac{1}{4} \\ \frac{1}{8} (\textcolor{red}{-2}) + \frac{1}{2} &\stackrel{?}{=} \frac{1}{4} \\ - \frac{2}{8} + \frac{1}{2} &\stackrel{?}{=} \frac{1}{4} \\ - \frac{2}{8} + \frac{4}{8} &\stackrel{?}{=} \frac{1}{4} \\ \frac{2}{8} &\stackrel{?}{=} \frac{1}{4} \\ \frac{1}{4} &= \frac{1}{4}\; \checkmark \end{split}$$ Exercise 8.73: Solve: \(\frac{1}{4} x + \frac{1}{2} = \frac{5}{8}\). Exercise 8.74: Solve: \(\frac{1}{6} y - \frac{1}{3} = \frac{1}{6}\). Notice in Example 8.37 that once we cleared the equation of fractions, the equation was like those we solved earlier in this chapter. We changed the problem to one we already knew how to solve! We then used the General Strategy for Solving Linear Equations. HOW TO: SOLVE EQUATIONS WITH FRACTION COEFFICIENTS BY CLEARING THE FRACTIONS Step 1. Find the least common denominator of all the fractions in the equation. Step 2. Multiply both sides of the equation by that LCD. This clears the fractions. Step 3. Solve using the General Strategy for Solving Linear Equations. Example 8.38: Solve: 7 = \(\frac{1}{2} x + \frac{3}{4} x − \frac{2}{3} x\). Solution We want to clear the fractions by multiplying both sides of the equation by the LCD of all the fractions in the equation. Find the least common denominator of all the fractions in the equation. $$7 = \frac{1}{2} x + \frac{3}{4} x - \frac{2}{3} x \quad LCD = 12$$ Multiply both sides of the equation by 12. $$\textcolor{red}{12} (7) = \textcolor{red}{12} \left(\dfrac{1}{2} x + \dfrac{3}{4} x - \dfrac{2}{3} x\right)$$ Distribute.
$$12(7) = 12 \cdot \frac{1}{2} x + 12 \cdot \frac{3}{4} x - 12 \cdot \frac{2}{3} x$$ Simplify — and notice, no more fractions! $$84 = 6x + 9x - 8x$$ Combine like terms. $$84 = 7x$$ Divide by 7. $$\frac{84}{\textcolor{red}{7}} = \frac{7x}{\textcolor{red}{7}}$$ Simplify. $$12 = x$$ Check: Let x = 12. $$\begin{split} 7 &= \frac{1}{2} x + \frac{3}{4} x - \frac{2}{3} x \\ 7 &\stackrel{?}{=} \frac{1}{2} (\textcolor{red}{12}) + \frac{3}{4} (\textcolor{red}{12}) - \frac{2}{3} (\textcolor{red}{12}) \\ 7 &\stackrel{?}{=} 6 + 9 - 8 \\ 7 &= 7\; \checkmark \end{split}$$ Exercise 8.75: Solve: 6 = \(\frac{1}{2} v + \frac{2}{5} v − \frac{3}{4} v\). Exercise 8.76: Solve: -1 = \(\frac{1}{2} u + \frac{1}{4} u − \frac{2}{3} u\). In the next example, we’ll have variables and fractions on both sides of the equation. Example 8.39: Solve: \(x + \frac{1}{3} = \frac{1}{6} x − \frac{1}{2}\). Solution Find the LCD of all the fractions in the equation. $$x + \frac{1}{3} = \frac{1}{6} x - \frac{1}{2} \quad LCD = 6$$ Multiply both sides by the LCD. $$\textcolor{red}{6} \left(x + \dfrac{1}{3}\right) = \textcolor{red}{6} \left(\dfrac{1}{6} x - \dfrac{1}{2}\right)$$ Distribute. $$6 \cdot x + 6 \cdot \frac{1}{3} = 6 \cdot \frac{1}{6} x - 6 \cdot \frac{1}{2}$$ Simplify — no more fractions! $$6x + 2 = x - 3$$ Subtract x from both sides. $$6x \textcolor{red}{-x} + 2 = x \textcolor{red}{-x} - 3$$ Simplify. $$5x + 2 = -3$$ Subtract 2 from both sides. $$5x + 2 \textcolor{red}{-2} = -3 \textcolor{red}{-2}$$ Simplify. $$5x = -5$$ Divide by 5. $$\frac{5x}{\textcolor{red}{5}} = \frac{-5}{\textcolor{red}{5}}$$ Simplify. $$x = -1$$ Check: Substitute x = −1. 
$$\begin{split} x + \frac{1}{3} &= \frac{1}{6} x - \frac{1}{2} \\ (\textcolor{red}{-1}) + \frac{1}{3} &\stackrel{?}{=} \frac{1}{6} (\textcolor{red}{-1}) - \frac{1}{2} \\ (-1) + \frac{1}{3} &\stackrel{?}{=} - \frac{1}{6} - \frac{1}{2} \\ - \frac{3}{3} + \frac{1}{3} &\stackrel{?}{=} - \frac{1}{6} - \frac{3}{6} \\ - \frac{2}{3} &\stackrel{?}{=} - \frac{4}{6} \\ - \frac{2}{3} &= - \frac{2}{3}\; \checkmark \end{split}$$ Exercise 8.77: Solve: \(a + \frac{3}{4} = \frac{3}{8} a − \frac{1}{2}\). Exercise 8.78: Solve: \(c + \frac{3}{4} = \frac{1}{2} c − \frac{1}{4}\). In Example 8.40, we’ll start by using the Distributive Property. This step will clear the fractions right away! Example 8.40: Solve: 1 = \(\frac{1}{2}\)(4x + 2). Solution Distribute. $$1 = \frac{1}{2} \cdot 4x + \frac{1}{2} \cdot 2$$ Simplify. Now there are no fractions to clear! $$1 = 2x + 1$$ Subtract 1 from both sides. $$1 \textcolor{red}{-1} = 2x + 1 \textcolor{red}{-1}$$ Simplify. $$0 = 2x$$ Divide by 2. $$\frac{0}{\textcolor{red}{2}} = \frac{2x}{\textcolor{red}{2}}$$ Simplify. $$0 = x$$ Check: Let x = 0. $$\begin{split} 1 &= \frac{1}{2} (4x + 2) \\ 1 &\stackrel{?}{=} \frac{1}{2} [4(\textcolor{red}{0}) + 2] \\ 1 &\stackrel{?}{=} \frac{1}{2} (2) \\ 1 &\stackrel{?}{=} \frac{2}{2} \\ 1 &= 1\; \checkmark \end{split}$$ Exercise 8.79: Solve: −11 = \(\frac{1}{2}\)(6p + 2). Exercise 8.80: Solve: 8 = \(\frac{1}{3}\)(9q + 6). Many times, there will still be fractions, even after distributing. Example 8.41: Solve: \(\frac{1}{2}\)(y − 5) = \(\frac{1}{4}\)(y − 1). Solution Distribute. $$\frac{1}{2} \cdot y - \frac{1}{2} \cdot 5 = \frac{1}{4} \cdot y - \frac{1}{4} \cdot 1$$ Simplify. $$\frac{1}{2} y - \frac{5}{2} = \frac{1}{4} y - \frac{1}{4}$$ Multiply by the LCD, 4. $$\textcolor{red}{4} \left(\dfrac{1}{2} y - \dfrac{5}{2}\right) = \textcolor{red}{4} \left(\dfrac{1}{4} y - \dfrac{1}{4}\right)$$ Distribute. $$4 \cdot \frac{1}{2} y - 4 \cdot \frac{5}{2} = 4 \cdot \frac{1}{4} y - 4 \cdot \frac{1}{4}$$ Simplify. 
$$2y - 10 = y - 1$$ Collect the y terms to the left. $$2y - 10 \textcolor{red}{-y} = y - 1 \textcolor{red}{-y}$$ Simplify. $$y - 10 = -1$$ Collect the constants to the right. $$y - 10 \textcolor{red}{+10} = -1 \textcolor{red}{+10}$$ Simplify. $$y = 9$$ Check: Substitute 9 for y. $$\begin{split} \frac{1}{2} (y - 5) &= \frac{1}{4} (y - 1) \\ \frac{1}{2} (\textcolor{red}{9} - 5) &\stackrel{?}{=} \frac{1}{4} (\textcolor{red}{9} - 1) \\ \frac{1}{2} (4) &\stackrel{?}{=} \frac{1}{4} (8) \\ 2 &= 2\; \checkmark \end{split}$$ Exercise 8.81: Solve: \(\frac{1}{5}\)(n + 3) = \(\frac{1}{4}\)(n + 2). Exercise 8.82: Solve: \(\frac{1}{2}\)(m − 3) = \(\frac{1}{4}\)(m − 7). Solve Equations with Decimal Coefficients Some equations have decimals in them. This kind of equation will occur when we solve problems dealing with money and percent. But decimals are really another way to represent fractions. For example, 0.3 = \(\frac{3}{10}\) and 0.17 = \(\frac{17}{100}\). So, when we have an equation with decimals, we can use the same process we used to clear fractions—multiply both sides of the equation by the least common denominator. Example 8.42: Solve: 0.8x − 5 = 7. Solution The only decimal in the equation is 0.8. Since 0.8 = \(\frac{8}{10}\), the LCD is 10. We can multiply both sides by 10 to clear the decimal. Multiply both sides by the LCD. $$\textcolor{red}{10} (0.8x - 5) = \textcolor{red}{10} (7)$$ Distribute. $$10(0.8x) - 10(5) = 10(7)$$ Multiply, and notice, no more decimals! $$8x - 50 = 70$$ Add 50 to get all constants to the right. $$8x - 50 \textcolor{red}{+50} = 70 \textcolor{red}{+50}$$ Simplify. $$8x = 120$$ Divide both sides by 8. $$\frac{8x}{\textcolor{red}{8}} = \frac{120}{\textcolor{red}{8}}$$ Simplify. $$x = 15$$ Check: Let x = 15. $$\begin{split} 0.8(\textcolor{red}{15}) - 5 &\stackrel{?}{=} 7 \\ 12 - 5 &\stackrel{?}{=} 7 \\ 7 &= 7\; \checkmark \end{split}$$ Exercise 8.83: Solve: 0.6x − 1 = 11. Exercise 8.84: Solve: 1.2x − 3 = 9. 
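As an aside for readers who like to verify with a computer: the clearing-decimals step of Example 8.42 can be mirrored exactly with Python's `Fraction` type (the variable names below are our own, not part of the text). Writing 0.8 as the fraction 8/10 and multiplying every term by 10 turns 0.8x − 5 = 7 into 8x − 50 = 70, so x = 120/8 = 15.

```python
from fractions import Fraction

# Example 8.42: 0.8x - 5 = 7, writing 0.8 exactly as the fraction 8/10.
a, b, c = Fraction(8, 10), Fraction(-5), Fraction(7)   # a*x + b = c

# Clear the decimal: multiply every term by 10, giving 8x - 50 = 70.
a10, b10, c10 = 10 * a, 10 * b, 10 * c

x = (c10 - b10) / a10   # (70 + 50) / 8
print(x)                # 15
```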
Example 8.43: Solve: 0.06x + 0.02 = 0.25x − 1.5. Solution Look at the decimals and think of the equivalent fractions. $$0.06 = \frac{6}{100}, \qquad 0.02 = \frac{2}{100}, \qquad 0.25 = \frac{25}{100}, \qquad 1.5 = 1 \frac{5}{10}$$ Notice, the LCD is 100. By multiplying by the LCD we will clear the decimals. Multiply both sides by 100. $$\textcolor{red}{100} (0.06x + 0.02) = \textcolor{red}{100} (0.25x - 1.5)$$ Distribute. $$100(0.06x) + 100(0.02) = 100(0.25x) - 100(1.5)$$ Multiply, and now no more decimals. $$6x + 2 = 25x - 150$$ Collect the variables to the right. $$6x \textcolor{red}{-6x} + 2 = 25x \textcolor{red}{-6x} - 150$$ Simplify. $$2 = 19x - 150$$ Collect the constants to the left. $$2 \textcolor{red}{+150} = 19x - 150 \textcolor{red}{+150}$$ Simplify. $$152 = 19x$$ Divide by 19. $$\frac{152}{\textcolor{red}{19}} = \frac{19x}{\textcolor{red}{19}}$$ Simplify. $$8 = x$$ Check: Let x = 8. $$\begin{split} 0.06(\textcolor{red}{8}) + 0.02 &\stackrel{?}{=} 0.25 (\textcolor{red}{8}) - 1.5 \\ 0.48 + 0.02 &\stackrel{?}{=} 2.00 - 1.5 \\ 0.50 &= 0.50\; \checkmark \end{split}$$ Exercise 8.85: Solve: 0.14h + 0.12 = 0.35h − 2.4. Exercise 8.86: Solve: 0.65k − 0.1 = 0.4k − 0.35. The next example uses an equation that is typical of the ones we will see in the money applications in the next chapter. Notice that we will distribute the decimal first before we clear all decimals in the equation. Example 8.44: Solve: 0.25x + 0.05(x + 3) = 2.85. Solution Distribute first. $$0.25x + 0.05x + 0.15 = 2.85$$ Combine like terms. $$0.30x + 0.15 = 2.85$$ To clear decimals, multiply by 100. $$\textcolor{red}{100} (0.30x + 0.15) = \textcolor{red}{100} (2.85)$$ Distribute. $$30x + 15 = 285$$ Subtract 15 from both sides. $$30x + 15 \textcolor{red}{-15} = 285 \textcolor{red}{-15}$$ Simplify. $$30x = 270$$ Divide by 30. $$\frac{30x}{\textcolor{red}{30}} = \frac{270}{\textcolor{red}{30}}$$ Simplify. $$x = 9$$ Check: Let x = 9.
$$\begin{split} 0.25x + 0.05(x + 3) &= 2.85 \\ 0.25(\textcolor{red}{9}) + 0.05(\textcolor{red}{9} + 3) &\stackrel{?}{=} 2.85 \\ 2.25 + 0.05(12) &\stackrel{?}{=} 2.85 \\ 2.25 + 0.60 &\stackrel{?}{=} 2.85 \\ 2.85 &= 2.85\; \checkmark \end{split}$$ Exercise 8.87: Solve: 0.25n + 0.05(n + 5) = 2.95. Exercise 8.88: Solve: 0.10d + 0.05(d − 5) = 2.15. ACCESS ADDITIONAL ONLINE RESOURCES Practice Makes Perfect Solve equations with fraction coefficients In the following exercises, solve the equation by clearing the fractions. \(\frac{1}{4} x − \frac{1}{2} = − \frac{3}{4}\) \(\frac{3}{4} x − \frac{1}{2} = \frac{1}{4}\) \(\frac{5}{6} y − \frac{2}{3} = − \frac{3}{2}\) \(\frac{5}{6} y − \frac{1}{3} = − \frac{7}{6}\) \(\frac{1}{2} a + \frac{3}{8} = \frac{3}{4}\) \(\frac{5}{8} b + \frac{1}{2} = − \frac{3}{4}\) 2 = \(\frac{1}{3} x − \frac{1}{2} x + \frac{2}{3} x\) 2 = \(\frac{3}{5} x − \frac{1}{3} x + \frac{2}{5} x\) \(\frac{1}{4} m − \frac{4}{5} m + \frac{1}{2} m\) = −1 \(\frac{5}{6} n − \frac{1}{4} n − \frac{1}{2} n\) = −2 \(x + \frac{1}{2} = \frac{2}{3} x − \frac{1}{2}\) \(x + \frac{3}{4} = \frac{1}{2} x − \frac{5}{4}\) \(\frac{1}{3} w + \frac{5}{4} = w − \frac{1}{4}\) \(\frac{3}{2} z + \frac{1}{3} = z − \frac{2}{3}\) \(\frac{1}{2} x − \frac{1}{4} = \frac{1}{12} x + \frac{1}{6}\) \(\frac{1}{2} a − \frac{1}{4} = \frac{1}{6} a + \frac{1}{12}\) \(\frac{1}{3} b + \frac{1}{5} = \frac{2}{5} b − \frac{3}{5}\) \(\frac{1}{3} x + \frac{2}{5} = \frac{1}{5} x − \frac{2}{5}\) 1 = \(\frac{1}{6}\)(12x − 6) 1 = \(\frac{1}{5}\)(15x − 10) \(\frac{1}{4}\)(p − 7) = \(\frac{1}{3}\)(p + 5) \(\frac{1}{5}\)(q + 3) = \(\frac{1}{2}\)(q − 3) \(\frac{1}{2}\)(x + 4) = \(\frac{3}{4}\) \(\frac{1}{3}\)(x + 5) = \(\frac{5}{6}\) Solve Equations with Decimal Coefficients In the following exercises, solve the equation by clearing the decimals. 
0.6y + 3 = 9 0.4y − 4 = 2 3.6j − 2 = 5.2 2.1k + 3 = 7.2 0.4x + 0.6 = 0.5x − 1.2 0.7x + 0.4 = 0.6x + 2.4 0.23x + 1.47 = 0.37x − 1.05 0.48x + 1.56 = 0.58x − 0.64 0.9x − 1.25 = 0.75x + 1.75 1.2x − 0.91 = 0.8x + 2.29 0.05n + 0.10(n + 8) = 2.15 0.05n + 0.10(n + 7) = 3.55 0.10d + 0.25(d + 5) = 4.05 0.10d + 0.25(d + 7) = 5.25 0.05(q − 5) + 0.25q = 3.05 0.05(q − 8) + 0.25q = 4.10 Everyday Math Coins: Taylor has $2.00 in dimes and pennies. The number of pennies is 2 more than the number of dimes. Solve the equation 0.10d + 0.01(d + 2) = 2 for d, the number of dimes. Stamps: Travis bought $9.45 worth of 49-cent stamps and 21-cent stamps. The number of 21-cent stamps was 5 less than the number of 49-cent stamps. Solve the equation 0.49s + 0.21(s − 5) = 9.45 for s, to find the number of 49-cent stamps Travis bought. Writing Exercises Explain how to find the least common denominator of \(\frac{3}{8}, \frac{1}{6}\), and \(\frac{2}{3}\). If an equation has several fractions, how does multiplying both sides by the LCD make it easier to solve? If an equation has fractions only on one side, why do you have to multiply both sides of the equation by the LCD? In the equation 0.35x + 2.1 = 3.85, what is the LCD? How do you know? Self Check (a) After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section. (b) Overall, after looking at the checklist, do you think you are well-prepared for the next Chapter? Why or why not? Contributors Lynn Marecek (Santa Ana College) and MaryAnne Anthony-Smith (Formerly of Santa Ana College). This content is licensed under Creative Commons Attribution License v4.0 "Download for free at http://cnx.org/contents/fd53eae1-fa2...49835c3c@5.191."
News for cp3 Louvain-La-Neuve September 24, 2019 New result on BR(K^+ \to \pi^+ \nu \bar{\nu}) from NA62... A way to investigate the presence of new physics at very high energy, not accessible to the LHC, is to search for deviations from the SM in quantities predicted very precisely within the SM, even at small energies. One of these quantities is how often a charged Kaon decays to a charged pion and a...Click to know more September 06, 2019 BND School 2019 This year, members of the Centre for Cosmology, Particle Physics and Phenomenology (CP3) of the IRMP at UCLouvain are in charge of the organization of the 31st Belgian-Dutch-German Graduate School for Particle Physics, also known as the "BND School". This summer school is intended primarily for...Click to know more February 22, 2019 CMS technology used to develop a new portable muon... Suppose that you need to see what is inside a small room that is difficult to access. Thick walls stand between you and the object of your investigation; you cannot dig, and the closest that you can get to the target is inside a narrow tunnel, with no electricity to power your instrumentation....Click to know more June 04, 2018 Direct coupling of the Higgs boson to the top quark observed An observation made by the Compact Muon Solenoid experiment at the Large Hadron Collider at CERN, published in Physical Review Letters today, connects for the first time the two heaviest elementary particles of the Standard Model. Members of the CMS collaboration, including the Excellence of...Click to know more March 15, 2018 NA62: First results on $K^+\to \pi^+ \nu\bar{\nu}$. On Sunday 11th of March, the NA62 collaboration presented at Moriond the first results of the $K^+\to \pi^+ \nu\bar{\nu}$ analysis. This channel has a branching ratio in the SM of $(8.4 \pm 1.0) \times 10^{-11}$. The analysis presented at Moriond is based on the study of a sample of 1.21×...Click to know more
However, there's an interesting 3-sigma anomaly in an otherwise obscure search so let me tell you what it is. It appears in the following preprint: Search for Type III Seesaw Model Heavy Fermions in Events with Four Charged Leptons using \(5.8\,{\rm fb}^{-1}\) of \(\sqrt{s} = 8\TeV\) data with the ATLAS Detector (ATLAS-CONF-2013-019)It is a relatively obscure search for an electroweak triplet of new fermions, \(N^\pm,N^0\), that are used in the so-called type III seesaw models. Note that all seesaw models are meant to produce the neutrino masses (equal to zero in the "truly minimal" Standard Model) – and explain why they're so small. The type I seesaw models add at least two right-handed neutrinos \(\nu_R\) with masses near the GUT scale. The type II seesaw models add a new Higgs triplet. The type III seesaw models add the triplet of fermions \(N^\pm,N^0\) that are approximately equally heavy. It is supposed that the proton-proton collisions may produce either \(N^\pm N^\mp\) or \(N^\pm N^0\) where the latter possibilities are approximately 2 times more likely than the former possibility. As you may expect, the ATLAS folks exclude the existence of these new fermions \(N^\pm,N^0\) up to some mass, namely \(245\GeV\). But there's an interesting 3-sigma excess near the (higher) mass \(m_N\sim 420\GeV\): its \(p\)-value (probability of a similarly strong signal according to the null hypothesis) is about \(p_0=0.20\), a statement whose origin I don't quite understand. I would understand \(0.20\%\) but maybe their figure is right and unimpressive due to some look-elsewhere reduction. At any rate, the picture (Figure 4) says a clear story of a rather strong excess by itself: Click to zoom in. On the \(x\)-axis, you have the assumed mass of the new fermions, \(m_N\), in the units of \(\GeV\). The \(y\)-axis contains the relevant cross section \[ \frac{\sigma(pp\!\to\! N^\pm N^0)\times BF(N^\pm \!\to\! Z \ell^\pm)\times BF(N^0\!\to\! 
W^\pm \ell^\mp) }{\rm fb} \] In other words, it's some cross section (in the units of one femtobarn) for the production of a pair of the new fermions (one neutral fermion and one charged fermion) from a proton pair, but only the "branching fractions" in which these new fermions decay to \(W^\pm/Z^0\) gauge bosons plus leptons in the indicated way (pretty much the dominant decays expected for the new fermions) are included. The decays of these hypothetical new fermionic triplets violate the lepton flavor if not the lepton number. They can probably achieve what they can achieve – the neutrino masses – but I haven't encountered them anywhere else. In particular, I am not aware of any top-down explanation why these things should exist. But of course, it's not impossible that these otherwise unwanted beasts are employed by Mother Nature. It's more likely that the excess is a fluke. But even if it is due to new physics, I suspect that the details of the new physics could be a bit different (sleptons and sneutrinos of some kind?). This particular paper has only used \(5.8/{\rm fb}\) of the 2012 data. Over twenty inverse femtobarns have (already) been collected last year, so when they're processed, the signals – if they're due to new physics – should grow to indisputable proportions. TBBT and women in science Last night, the latest episode of The Big Bang Theory made Leonard want to help young women enter science. Sheldon and Howard ultimately agreed to co-operate. They went to a high school to meet girls and the sitcom showed a very realistic picture of what hopeless disinterest most girls of this age have in science and what a complete, hypocritical waste of time similar attempts to "draft girls" are. New Czech president Miloš Zeman was inaugurated as the new Czech president. Lots of fun formalities at the Prague Castle and the cathedral over there. His inauguration speech was given off-hand and was rather impressive.
Among other things, he declared war against three main enemies of the society – mafia godfathers, neo-Nazi guerrilla groups, and most of the journalists. ;-) The latter group (Zeman's comment about this group was the only thing that drew applause among the audience dominated by top politicians) is composed of jealous and stupid individuals who love to criticize people for doing something they can't do at all and who love to brainwash the citizens. Fully agreed. There were things I disagreed with, too. He rewrote history when he presented Masaryk as the guy who wanted to eliminate all traces of monarchy and introduce pure republicanism. That's rubbish. Masaryk deliberately preserved some of the royal functions and image of the kings for the Czechoslovak presidents. At any rate, Zeman surrendered his right to declare amnesties and pardons (that's like not doing a part of his job!) and promised to be an intermediary of a political dialogue, not a judge.
It's hard to say just from the sheet music, not having an actual keyboard here. The first line seems difficult; I would guess that the second and third are playable. But you would have to ask somebody more experienced. Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit. @Srivatsan it is unclear what is being asked... Is the inner or outer measure of $E$ meant by $m^\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer, since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$, assuming completeness, or the question doesn't make sense). If the ordinary measure is meant by $m^\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form. A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Hydon. Is there some ebooks site that has this book, to which (I hope) my university has a subscription? ebooks.cambridge.org doesn't seem to have it. Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most "continuity" questions to be in general-topology and "uniform continuity" in real-analysis. Here's a challenge for your Google skills...
can you locate an online copy of: Walter Rudin, Lebesgue's first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)? No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere, it definitely isn't OCR'ed, or it is so new that Google hasn't stumbled over it yet. @MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense in preferring one of liminf/limsup over the other, and every term encompassing both would most likely lead to us having to do the tagging ourselves, since beginners won't be familiar with it. Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow. @QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary. @Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or all) of the sample point is revealed. I think that is a legitimate answer. @QED Well, the tag can be removed (if someone decides to do so). The main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits... @QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door.
Next thing I knew, I saw the secretary looking down at me asking if I was all right. OK, so chat is now available... but it has been suggested that for Mathematics we should have TeX support. The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use? (this would only apply ... So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study? > I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a... @MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already – and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think, t.b.? @tb About Alexei's questions, I spent some time on them. My guess was that it doesn't hold, but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...) @MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail, but I'm sure it should work. I needed a bit of summability in topological vector spaces, but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
Answer : B. $L_1 = \{ a^pb^q \mid p + q \geq 10^6 \}$. $L_1$ is Regular. Moreover, $10^6$ is nothing but a large constant number. If it had been $10^7$ or $10^2$ instead of $10^6$, it wouldn't have mattered, right? So, let's get rid of such a big number and replace it with, say, $1$. So, $L_1' = \{ a^pb^q \mid p + q \geq 1 \}$, which is a Regular language. It is the same as $\{a^*b^*\} - \{\varepsilon\}$. Similarly, $L_1$ is also regular because $L_1$ is the intersection of $\{a^*b^*\}$ and $\{ w \in \{a,b\}^* \mid |w| \geq 10^6\}$, both of which are Regular. $L_2 = \{ a^mb^n \mid m - n \geq 10^6 \}$. Again, $10^6$ is nothing but a large constant number; replacing it with, say, $1$ changes nothing essential: $L_2' = \{ a^mb^n \mid m - n \geq 1 \}$, which is nothing but $L_2' = \{ a^mb^n \mid m > n \}$, which is a DCFL (and hence a CFL). It is a CFL because we can push $a$'s onto the stack, and when we see $b$'s we pop one $a$ for each $b$. At the end of the string, if we have $a$'s left on the stack, the string is accepted; otherwise it is rejected. That's the informal idea of PDA (or DPDA) programming for this language; you can now make a DPDA for it. Or we can write a CFG for this language as follows: $S \rightarrow aSb \mid A$, $A \rightarrow aA \mid a$. This is a CFG that generates all and only the strings of $L_2'$. Why is $L_2'$ not Regular? By now you must have understood that $L_2'$ is not Regular because an FA cannot remember how many $a$'s have appeared before the $b$'s start to appear. Take the string $a^{100} b^{98}$: $100$ $a$'s have appeared before the $b$'s start to appear, but when the $b$'s start to appear, the FA doesn't know exactly how many $a$'s there were. So it can't really compare the number of $a$'s with the number of $b$'s. The above is the informal but intuitive idea behind $L_2'$ not being a Regular language.
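The two constructions just described can be sketched executably (an illustrative Python sketch of my own, not from the original answer; the constant $10^6$ is scaled down to $3$ so the demo strings stay short):

```python
import re

# L1 idea: a regular language decided by a regex plus a length check,
# since L1 is the intersection of a*b* with "length >= N".
N = 3  # stand-in for 10**6

def in_L1(w, n=N):
    return re.fullmatch(r"a*b*", w) is not None and len(w) >= n

# L2' = { a^m b^n : m > n }: the one-counter (DPDA) idea -- push each 'a',
# pop one 'a' per 'b', accept iff the input has shape a*b* and a's remain.
def in_L2prime(w):
    if re.fullmatch(r"a*b*", w) is None:
        return False
    counter = 0
    for ch in w:
        if ch == "a":
            counter += 1      # push
        else:
            counter -= 1      # pop one 'a' for this 'b'
            if counter < 0:
                return False  # more b's than a's
    return counter > 0        # leftover a's  <=>  m > n

print(in_L1("aabb"), in_L1("ab"))               # True False
print(in_L2prime("aaabb"), in_L2prime("aabb"))  # True False
```

The counter plays the role of the stack of $a$'s in the DPDA sketch above.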
But let me show you that $L_2'$ is not Regular using the Pumping lemma and then using the Myhill–Nerode equivalence relation. Using the Pumping lemma for Regular languages to show that $L_2'$ is not Regular: The Pumping lemma states a deep property that all Regular languages share. By showing that a language does not have the property stated by the Pumping lemma, we are guaranteed that it is not Regular. The Pumping Lemma (PL) is a necessary condition for Regular languages but not a sufficient condition, i.e. all Regular languages must satisfy this condition and some non-regular languages also satisfy it. So, Regular Language → Satisfies Pumping Lemma. ∴ The contrapositive statement is: Doesn't Satisfy Pumping Lemma → Not a Regular Language. The Pumping lemma for Regular languages says: if a language $L$ is Regular, then $\exists P \geq 1$ such that for all strings $w \in L$, if $|w| \geq P$ then $\exists x,y,z$ such that $w = xyz$, $|xy| \leq P$, $y \neq \varepsilon$ (i.e. $|y| \geq 1$), and $\forall q \geq 0$, $xy^qz \in L$. In words: if $L$ is regular, then there's some magic number $P$ (called the pumping length), and if I take any string $w$ that is at least as long as $P$, then I can break it up into three parts $x, y,$ and $z$, where the length of $xy$ is less than or equal to $P$ and the length of $y$ is greater than or equal to $1$. In very simple words: if $L$ is a regular language, then there is some positive number $P$ associated with $L$ such that for all strings $w$ of length greater than or equal to $P$, we can find some non-empty substring $y$ of $w$ within the first $P$ symbols of $w$ such that when we repeat $y$ zero or more times, the produced strings also belong to $L$. For the language $L_2' = \{ a^mb^n \mid m > n \}$: let's assume that $L_2'$ is Regular. So there must be some positive number $P$ associated with $L_2'$ such that the Pumping lemma is satisfied.
So, let's say that the magic number (pumping length) is $P$. Then for every string $w$ with length $\geq P$, we must be able to find some non-empty substring $y$ of length at most $P$ within the first $P$ symbols of $w$. So, let's take the string $w = a^Pb^{P-1}$. For this string, any non-empty substring $y$ within the first $P$ symbols must be made of $a$'s only, and if you repeat that $y$ zero times (repeating $y$ zero times is the same as removing $y$ from the string $w$), then the resulting string has fewer than $P$ $a$'s but still $P-1$ $b$'s, so it will NOT belong to $L_2'$. So, whatever pumping length $P$ you take, I have shown you at least one string, $w = a^Pb^{P-1}$, which won't get pumped. So the Pumping lemma is violated/not satisfied by $L_2'$. So, $L_2'$ is not Regular. Similar logic can be used for $L_2$ as well. So, $L_2$ is not Regular. Using the Myhill–Nerode equivalence relation to show that $L_2'$ is not Regular: I will not be stating the whole Myhill–Nerode theorem here, but only the part that is required to show that $L_2'$ is non-regular. Theorem: Let $L ⊆ Σ^∗$ be any language. Suppose there is an infinite set $S$ of strings such that $\forall x, y ∈ S$ $(x \neq y)$, $x \not\equiv_L y$. That is, suppose $(∀x \neq y ∈ S)(∃z ∈ Σ^∗)$ (exactly one of the strings $xz$ or $yz$ belongs to $L$). Then $L$ is not a regular language. Such a set $S$ is called a distinctive set for $L$. http://www.cse.buffalo.edu/~regan/cse396/CSE396MNT.pdf For the given language $L_2'$, consider the set $S = \{a\}^*$. This set will work as a distinctive set for $L_2'$, i.e. if you take any two different arbitrary strings $x,y$ from $S$, then you will definitely find some $z \in \Sigma^*$ such that exactly one of the strings $xz$ or $yz$ belongs to $L_2'$. Let any pair $x, y ∈ S$ with $x \neq y$ be given. By definition of $S$, there are numbers $m, n ≥ 0$ with $m \neq n$ such that $x = a^m$ and $y = a^n$; without loss of generality, say $m > n$.
Take $z = b^{m-1}$ (recall $m > n \geq 0$, so $m \geq 1$). Then $xz$ will belong to $L_2'$ but $yz$ will not, since $xz = a^m b^{m-1} ∈ L_2'$ (it has $m > m-1$) and $yz = a^n b^{m-1} \notin L_2'$ (since $n \leq m-1$). Since $x, y ∈ S$ were arbitrary, $S$ is an "infinite distinctive set" for $L_2'$, so $L_2'$ is non-regular. (Here the boldface take goes with the $∃$ quantifier, while all the other boldface words go with the $∀$ quantifier.)
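The distinguishing-extension argument can also be checked mechanically. In this small sketch (my own; `in_L2prime` is just a direct membership test, not part of the theorem), the extension $z = b^{m-1}$ separates $a^m$ from $a^n$ for every tested pair with $m > n$:

```python
def in_L2prime(w):
    """Direct membership test for L2' = { a^m b^n : m > n }."""
    m = len(w) - len(w.lstrip("a"))   # number of leading a's
    rest = w[m:]                      # remainder must consist of b's only
    return set(rest) <= {"b"} and m > len(rest)

# For x = a^m and y = a^n with m > n, the extension z = b^(m-1)
# puts xz inside L2' and yz outside it (since n <= m-1).
for m in range(1, 8):
    for n in range(m):
        x, y, z = "a" * m, "a" * n, "b" * (m - 1)
        assert in_L2prime(x + z) and not in_L2prime(y + z)
print("z = b^(m-1) distinguishes a^m from a^n for all tested m > n")
```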
In general, matching with dependent types can be quite subtle! You'll note in the Coq documentation that the extended pattern-matching syntax is match t as x in T1 return T2 with | C1 a1 ... an ... In particular, omitting any of the as, in or return clauses can prevent type inference of the statement. Intuitively, if the type of (say) constructor $C_1$ is $$ \Pi(x_1:A_1)\ldots(x_k:A_k).I\ \vec{p}$$ and the type of the matched term $t$ is $$ I\ \vec{v}$$ then you get the constraint $$ \vec{p}[\vec{a}/\vec{x}]\simeq \vec{v}$$ which may lead to an undecidable problem, as explained in Goguen et al. Then all you need is for each right-hand side to correspond to the appropriate instance of the return type. This return type may depend on $t$, so the return clause specifies the pattern of the return type as a function of $x$ (which represents $t$ through the as clause). More formally, each $e_i$ must have type $$ T_2[C_i\ \vec{a}/x]$$ and the type of the whole match expression is $$ T_2[t/x]$$ More details can be found here.
Let $X_1, X_2, ...$ be independent random variables. Define $$\mathscr{T}_n = \sigma(X_{n+1}, X_{n+2}, \ldots)$$ and $$\mathscr{T} = \bigcap_{n} \mathscr{T}_n,$$ the tail σ-algebra of $(X_1, X_2, \ldots)$. Are $\sigma(X_1), \sigma(X_2), ...$ independent of $\mathscr{T}$? If so, why? If not, why, and what about $$\sigma(X_1), \sigma(X_2), ..., \sigma(X_k) \ \;\forall k \in \mathbb{N}\quad?$$ All I got so far is that if $X_1, X_2, \ldots$ were events instead of random variables, $X_1, X_2, \ldots, X_k \ \forall k \in \mathbb{N}$ would be independent of some events in $\mathscr{T}$ such as $\limsup X_n$.
A Brief Introduction to the Weak Form This is an introduction to the weak form for those of us who didn't grow up using finite element analysis and vector calculus in our daily lives, but are nevertheless interested in learning about the weak form, with the help of some physical intuition and basic calculus. About the Weak Form, Briefly For many of the different types of physics simulated with COMSOL Multiphysics, a weak formulation, or weak form, is used behind the scenes to construct the mathematical model. Understanding the weak form will help us gain insight into how the COMSOL software works internally as well as enable us to write our own equations when there is no built-in interface available for the particular physics involved in our model. You may also be interested in reading my colleague Bettina Schieche's blog post "The Strength of the Weak Form". A Simple Example Let's consider a concrete example of 1D heat transfer at steady state with no heat source. Specifically, we are interested in the temperature, T, as a function of the position x in the domain defined by the interval $1\le x\le 5$. For simplicity, we assume the heat conductivity is unity. Then, the heat flux, q, in the positive x-direction is given by the negative gradient of the temperature T: (1) $$q = -\frac{\partial T}{\partial x}$$ and the conservation of the heat flux (with no heat source in the domain) simply says (2) $$\frac{\partial q}{\partial x} = 0$$ This is the main equation that we want to solve. Its solution will give us the temperature profile within the domain. Equations of this form appear in many different disciplines. For example, in electrostatics, T is replaced by the electric potential and q by the electric field, while in elasticity, T becomes the displacement and q becomes the stress.
Here, we start to see why COMSOL Multiphysics can solve coupled multiphysics problems in a breeze: No matter what physical mechanisms are involved, they are modeled by equations, and once the equations are written down, they can be discretized and solved straightforwardly by the core algorithms of the COMSOL software. Some readers may ask why we chose this seemingly too simple an example, whose analytical solution can be easily obtained by very simple math or physical arguments. The reason is two-fold: We want to focus on the central idea of the weak form and not be distracted by the math of a complicated physical system. In subsequent posts, we will expand the example to more than one domain to demonstrate the coupling between two equation systems through their boundary conditions. Starting with a more complicated example now would almost certainly obscure the central theme later when the example is expanded. The Weak Formulation Equation (2) involves the first derivative of the heat flux, q, or the second derivative of the temperature, T, which may cause numerical issues in practical situations where the differentiability of the temperature profile may be limited. For example, at a boundary where the adjacent materials have different values of thermal conductivity, the first derivative of the temperature T becomes discontinuous and the second derivative of T cannot be evaluated numerically. The main idea of the weak form is to turn the differential equation into an integral equation, so as to lessen the burden on the numerical algorithm in evaluating derivatives. To turn the differential equation (2) into an integral equation, a naive first approach may be to integrate it over the entire domain $1\le x\le 5$: $$\int_1^5 \frac{\partial q(x)}{\partial x}\,dx = 0$$ This asks that the average value of $\partial_x q(x)$ in the entire domain is zero. Indeed, it seems "too weak" as compared to the original differential equation, which asks that $\partial_x q(x)$ should be zero everywhere in the interval $1\le x\le 5$.
To improve upon it, we can ask instead that the average value of $\partial_x q(x)$ in a very narrow domain is zero, say, $$\int_{3.4}^{3.6} \frac{\partial q(x)}{\partial x}\,dx = 0$$ The integral only involves the value of $\partial_x q(x)$ in the vicinity of x=3.5. Thus, the relation above requires it not to be too far away from zero: $\partial_x q(3.5) \approx 0$. Extending the same idea to all locations in the entire domain $1\le x\le 5$, we see that the original differential equation may be approximated by a set of integral equations, like this: (3) $$\int_{1}^{1.2} \frac{\partial q(x)}{\partial x}\,dx = 0,\quad \int_{1.1}^{1.3} \frac{\partial q(x)}{\partial x}\,dx = 0,\quad \ldots,\quad \int_{4.8}^{5} \frac{\partial q(x)}{\partial x}\,dx = 0$$ The higher the number of integral equations in the set, the better the approximation. In the limit of an infinite number of such integral equations, we recover the original differential equation. It is cumbersome or even impossible to write out all the integral equations in the set, but we can apply the same idea in a different way. The main idea is to sample the value of $\partial_x q(x)$ in a narrow range. This is done by integrating it over a narrow range in Eq. (3) above. The same kind of sampling can be done by multiplying the integrand by a weight function $\tilde{T}(x)$ that is non-trivial only in a narrow range, as shown pictorially below: Then, we can integrate the product $\partial_x q(x)\,\tilde{T}(x)$ over the entire domain $1\le x\le 5$ for a variety of weight functions $\tilde{T}(x)$. Each weight function limits the contribution of the integrand to a narrow range centered around a different x value, thus achieving the same effect as the collection of integral equations in Eq. (3). This leads us to the weak formulation, which states that the relation (4) $$\int_1^5 \frac{\partial q(x)}{\partial x}\,\tilde{T}(x)\,dx = 0$$ should hold for a set of weight functions $\tilde{T}(x)$, commonly called test functions. For every value of x, say x=3.5, we can choose a test function $\tilde{T}(x)$ that is a narrow weight function centered around x=3.5. Plugging this test function into Eq. (4) would sample the value of $\partial_x q(x)$ in the vicinity of x=3.5 and so require it not to be too far away from zero: $\partial_x q(3.5) \approx 0$.
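The sampling property of a narrow test function can be verified numerically. The following sketch (my own illustration; the stand-in integrand and the bump width are arbitrary choices, not taken from the article) shows that integrating a function against a narrow, normalized weight function centered at x=3.5 recovers the function's value there:

```python
import numpy as np

# Grid on the domain [1, 5] and a smooth stand-in for the integrand dq/dx.
n = 200_000
x, dx = np.linspace(1.0, 5.0, n, retstep=True)
fprime = np.cos(x)  # arbitrary smooth stand-in function

def bump(x0, width):
    """Triangular weight centered at x0, normalized so its integral is 1."""
    w = np.maximum(0.0, 1.0 - np.abs(x - x0) / width)
    return w / (w.sum() * dx)

# A weighted integral over the whole domain samples the value near x0 = 3.5.
sampled = (fprime * bump(3.5, 0.05)).sum() * dx
print(abs(sampled - np.cos(3.5)) < 1e-3)  # True: close to the value at 3.5
```

Narrower bumps sample the integrand more sharply, mirroring how Eq. (3)'s narrow intervals approximate the pointwise equation.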
By plugging a large number of narrow weight functions as test functions into Eq. (4), each centered at a different location in the interval $1\le x\le 5$, the value of the function $\partial_x q(x)$ will be clamped down to zero everywhere within the domain. Footnote: In the picture above, we intentionally plotted $\partial_x q(x)$ as an arbitrary curve, not the final solution to the equation, to emphasize the fact that we haven't found the solution yet. Later on in the solution process, this arbitrary curve will be pushed up and down by a collection of test functions to reach the shape of the final solution. Reducing Order of Differentiation Note that the order of differentiation in the integrand of Eq. (4) is still the same as in Eq. (2) (after all, it's the same function $\partial_x q(x)$), but it can be reduced using the method of integration by parts: (5) $$q(5)\,\tilde{T}(5) - q(1)\,\tilde{T}(1) - \int_1^5 q(x)\,\frac{\partial \tilde{T}(x)}{\partial x}\,dx = 0$$ Now, there is no derivative of the heat flow, q, in the equation, or in terms of the temperature, T, the order of differentiation is reduced from two to one. What about the first derivative of the test function $\tilde{T}(x)$, which just now appeared in the equation? As we have seen in the previous section, the test function serves as a tool for us to find the solution to the equation. Thus, we have the freedom to choose any conveniently differentiable form for it. Natural Boundary Condition The first two terms of Eq. (5) involve the heat flux and test function at the domain boundaries x=1 and x=5, with the heat flux, q, defined in the positive x-direction. We can rewrite them in terms of the flux going out of the domain and move them to the right-hand side: (6) $$\int_1^5 \frac{\partial T(x)}{\partial x}\,\frac{\partial \tilde{T}(x)}{\partial x}\,dx = -\Lambda_1\tilde{T}_1 - \Lambda_2\tilde{T}_2$$ Here, $\Lambda$ is the outgoing flux, and the subscripts 1 and 2 represent the domain boundaries x=1 and x=5, respectively: $$\Lambda_1\equiv -q(x{=}1), \quad \Lambda_2\equiv +q(x{=}5), \quad \tilde{T}_1\equiv \tilde{T}(x{=}1), \quad \tilde{T}_2\equiv \tilde{T}(x{=}5).$$
Also, we have used the heat flux relation (1) to write the integrand in terms of the temperature, T, and its test function, $\tilde{T}$. The right-hand side of the equation provides a natural way to assign boundary conditions in terms of the heat flux. The simplest is to set both $\Lambda_1$ and $\Lambda_2$ to zero to get insulating boundary conditions (no heat flux through the boundaries). This is exactly the reason why in COMSOL Multiphysics the default boundary condition for heat transfer is "Thermal Insulation" and the one for solid mechanics is "Free (no boundary force)". This kind of boundary condition, which specifies the flux or force (the first derivative of the variable being solved), is commonly called the natural boundary condition or the Neumann boundary condition. Fixed Boundary Condition Another type of boundary condition, commonly called the fixed boundary condition or the Dirichlet boundary condition, specifies the value of the variable being solved. In our current example, it specifies the value of the temperature at a point on the boundary. This kind of boundary condition is usually needed to set up a well-posed problem with a unique solution. For example, in fluid flow, we need to specify the pressure somewhere (not just the flow); and in solid mechanics, we need to specify the displacement somewhere (not just the force). As we have seen in the example, the weak formulation provides a natural way to specify the heat flux at a boundary. How do we specify a fixed temperature at a boundary, then? The trick is to take advantage of the mathematical structure of the natural boundary condition and apply the same idea of using test functions to clamp down the solution. Conceptually, to maintain a fixed temperature at a boundary point, a certain heat flux coming from the outside of the boundary is needed to compensate for the heat flux inside of the boundary.
The weak formulation poses the problem as this: Find the heat flux needed to maintain the fixed temperature at the boundary point. For example, if we want to specify the outgoing flux $\Lambda$ to be 2 at x=1 and the temperature T to be 9 at x=5, then we introduce a new unknown variable $\lambda_2$ and its corresponding test function $\tilde{\lambda}_2$, and write Eq. (6) as: (7) $$\int_1^5 \frac{\partial T(x)}{\partial x}\,\frac{\partial \tilde{T}(x)}{\partial x}\,dx = -2\,\tilde{T}_1 - \lambda_2\,\tilde{T}_2 + \tilde{\lambda}_2\,\left(9 - T_2\right)$$ where $T_2\equiv T(x{=}5)$. Here, on the right-hand side, the first term specifies the outgoing flux of 2 at x=1 and the second term specifies the unknown flux at x=5; both terms come directly from the natural boundary condition terms on the right-hand side of Eq. (6). The new variable $\lambda_2$ represents the unknown heat flux to be determined at the boundary x=5. The third term is added to the equation to force the solution to be T=9 at x=5 by means of the test function $\tilde{\lambda}_2$, in the same fashion as in the earlier discussion about the test function $\tilde{T}(x)$. Comment on Higher Dimensions So far, we have been discussing a very simple one-dimensional example. In higher dimensions, such as a 2D surface domain or a 3D volume domain, the equations become more complicated, but the basic idea remains the same. The weak formulation turns a differential equation into an integral equation. Integration by parts reduces the order of differentiation to provide numerical advantages, and generates natural boundary conditions for specifying fluxes at the boundaries. In the simple 1D example, the boundary is the two end points and the flux is a single value at each point. In 2D and 3D, the boundary is the closed curve and the closed surface enclosing the domain, respectively. The right-hand side of Eq. (6) becomes the line or surface integral of the incoming flux density, in other words, the total incoming flux. In essence, the process of integration by parts in 2D and 3D uses the divergence theorem to obtain the line or surface integral of the flux at the boundary of the modeling domain.
In this blog post, we chose the simple 1D example so that the central idea wouldn't be obscured by the complexity of the math. Summary and Next Up Today, we learned about the idea of the weak formulation of using test functions to clamp down the solution. Integrating the weak form by parts provides the numerical benefit of reduced differentiation order. It also provides a natural way to specify boundary conditions in terms of the fluxes or forces (the first derivatives of the variables being solved), the so-called natural boundary condition or the Neumann boundary condition. When solving a practical problem, it's often necessary to specify the variable being solved — not just its derivative — via the so-called fixed boundary condition or the Dirichlet boundary condition. We saw that the weak formulation uses the same mechanism of test functions and its natural boundary conditions to construct additional terms for the fixed boundary conditions. So far, we have left the equations in their original analytical forms without any numerical approximation. In the next blog post, we will implement the weak form equation (7) in COMSOL Multiphysics to solve it numerically. After that, we will discuss how the numerical approximation is done internally, how the same problem can be solved in different ways, and how different boundary conditions can be set up for different types of problems.
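To make the whole procedure concrete, here is a minimal linear-element Galerkin sketch of the example problem (my own construction, not COMSOL code or the article's implementation): unit conductivity on [1, 5], outgoing flux 2 at x = 1, and fixed T = 9 at x = 5. The exact solution is the straight line T(x) = 2x - 1, since T'' = 0 and the outgoing-flux condition with q = -dT/dx forces the slope T'(1) = 2. For simplicity the fixed condition is imposed by replacing a matrix row rather than via the Lagrange-multiplier term:

```python
import numpy as np

n = 9                                    # number of nodes
x = np.linspace(1.0, 5.0, n)
h = x[1] - x[0]                          # uniform element length

# Stiffness matrix K_ij = integral of phi_i' phi_j' dx, assembled
# element by element from the 2x2 local matrix of linear hat functions.
K = np.zeros((n, n))
for e in range(n - 1):
    K[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

F = np.zeros(n)
F[0] = -2.0          # natural (flux) boundary term: outgoing flux 2 at x = 1

K[-1, :] = 0.0       # fixed (Dirichlet) condition T = 9 at x = 5,
K[-1, -1] = 1.0      # imposed here by row replacement for simplicity
F[-1] = 9.0

T = np.linalg.solve(K, F)
print(np.max(np.abs(T - (2 * x - 1))))   # essentially zero: exact linear solution
```

With linear elements, the linear exact solution is reproduced to machine precision; the sign of the flux term comes from the integration-by-parts boundary terms.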
fskilnik wrote: GMATH practice exercise (Quant Class 15) What is the value of \(k\) ? (1) \(x = 2\) (2) \(y^2 = 3\) \(? = k\,\,\,\left[ {{\rm{degrees}}} \right]\) \(\left( {1 + 2} \right)\,\,\left\{ \matrix{ \,\Delta \,{\rm{below}}\,\,{\rm{unique}}\,\,\left( {{{30}^ \circ },{{60}^ \circ },{{90}^ \circ } + \,\,y\,\,{\rm{given}}} \right) \hfill \cr \,\Delta \,{\rm{big}}\,\,{\rm{unique}}\,\,\left( {{\rm{legs}}\,\,{\rm{known + Pythagoras}}} \right)\,\, \hfill \cr} \right.\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,{30^ \circ } + {k^ \circ }\,\,{\rm{unique}}\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,{\rm{SUFF}}.\) The correct answer is (C). We follow the notations and rationale taught in the GMATH method. Regards, Fabio. _________________ Fabio Skilnik :: GMATH method creator (Math for the GMAT) Our high-level "quant" preparation starts here: https://gmath.net
Now showing items 1-10 of 15 A free-floating planet candidate from the OGLE and KMTNet surveys (2017) Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ... OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary (2017) We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ... OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing (2017) We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ... OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only (2018) We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ... OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function (2018) We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ... OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy (2018) We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge (2018) We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ... Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb (2018) We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ... OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit (2018) We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ... KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion (2018) We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ...
Consider the function $f: (0,1) \rightarrow \mathbb{R}$. \ $f(x) =\left\{\begin{array}{ll} \dfrac{1}{x} - 2 & \text{if}\ 0 < x \leq \dfrac{1}{2}\\ \dfrac{1}{x-1} + 2 & \text{if} \ \dfrac{1}{2} < x < 1\end{array}\right.$ \ We claim that $f$ is a bijective function between the open unit interval $(0,1)$ and $\mathbb{R}$ that takes rationals to rationals and irrationals to irrationals. \ First, we notice that the piecewise definition of $f$ partitions the open unit interval into the two sets $S = \left \{ x \in \mathbb{R} \ | \ x \in (0,\dfrac{1}{2}] \right\}$ and $R = \left \{ x \in \mathbb{R} \ | \ x \in (\dfrac{1}{2},1) \right\}$. Observe that $f(S) \subseteq [0,\infty)$ and $f(R) \subseteq (-\infty,0)$: for $x \in S$ we have $\dfrac{1}{x} \geq 2$, while for $x \in R$ we have $x-1 \in (-\dfrac{1}{2},0)$ and hence $\dfrac{1}{x-1} < -2$. \ Second, we show that $f$ takes rationals only to rationals and irrationals only to irrationals. There are a total of four cases to consider. $\bf{Case \ \#1}$: Say $x \in S \cap \mathbb{Q}$. Then $\exists a,b \in \mathbb{Z}$ with $a, b \neq 0$ such that $x = \dfrac{a}{b}$, because every rational can be represented as a ratio of two integers with non-zero denominator (and here $a \neq 0$ since $x > 0$). By the definition of $f$, $f(x) = \dfrac{b}{a} - 2$. Since the rationals are closed under division and subtraction, we know that $\dfrac{b}{a} - 2$ is rational. $\bf{Case \ \#2}$: Say $x \in R \cap \mathbb{Q}$. Then $f(x) = \dfrac{1}{x-1} + 2$ with $x$ rational. Since the rationals are closed under subtraction, division, and addition (and $x-1 \neq 0$), we know that $\dfrac{1}{x-1} + 2$ is rational. This shows that $\forall x \in (0,1) \cap \mathbb{Q}, f(x) \in \mathbb{Q}$; every rational in the open interval is mapped to a rational. $\bf{Case \ \#3}$: Say $x \in S \cap (\mathbb{R} - \mathbb{Q})$. Then $x$ is irrational and $f(x) = \dfrac{1}{x} - 2$.
The reciprocal of an irrational number is irrational, and subtracting a rational from an irrational leaves it irrational, so $\dfrac{1}{x} - 2$ is irrational. $\bf{Case \ \#4}$: Say $x \in R \cap (\mathbb{R} - \mathbb{Q})$. Then $f(x) = \dfrac{1}{x-1} + 2$ with $x$ irrational, so $x-1$ is irrational, its reciprocal is irrational, and therefore $\dfrac{1}{x-1} + 2$ is irrational. Therefore, $f$ maps irrationals only to irrationals. \ Thirdly, we must show that $f$ is a bijective function. \ To prove injectivity, there are three cases: \ $\bf{Case \ \#1}$: Let $f(x) = f(y)$ with $x,y \in S$. Then $\dfrac{1}{x} - 2 = \dfrac{1}{y} - 2$. Adding $2$ to both sides gives $\dfrac{1}{x} = \dfrac{1}{y}$, hence $x = y$. $\bf{Case \ \#2}$: Let $f(x) = f(y)$ with $x,y \in R$. Then $\dfrac{1}{x-1} + 2 = \dfrac{1}{y-1} + 2$. Subtracting $2$ from both sides gives $\dfrac{1}{x-1} = \dfrac{1}{y-1}$, so $x-1 = y-1$ and $x = y$. $\bf{Case \ \#3}$: If $x \in S$ and $y \in R$, then $f(x) = \dfrac{1}{x} - 2 \geq 0$ (since $\dfrac{1}{x} \geq 2$) while $f(y) = \dfrac{1}{y-1} + 2 < 0$ (since $\dfrac{1}{y-1} < -2$), so $f(x) \neq f(y)$. Hence $f$ is injective. \ To prove surjectivity, there are four cases: \ $\bf{Case \ \#1}$: We show that every rational $y \geq 0$ satisfies $y = f(x)$ for some $x \in S \cap \mathbb{Q}$. Set $x = \dfrac{1}{y+2}$; since $y + 2 \geq 2$, we have $0 < x \leq \dfrac{1}{2}$, $x$ is rational, and $f(x) = \dfrac{1}{x} - 2 = y$. $\bf{Case \ \#2}$: We show that every rational $y < 0$ satisfies $y = f(x)$ for some $x \in R \cap \mathbb{Q}$. Set $x = \dfrac{1}{y-2} + 1$; since $y - 2 < -2$, we have $\dfrac{1}{y-2} \in (-\dfrac{1}{2},0)$, so $\dfrac{1}{2} < x < 1$, $x$ is rational, and $f(x) = \dfrac{1}{x-1} + 2 = y$.
$\bf{Case \ \#3}$: We show that every irrational $y > 0$ satisfies $y = f(x)$ for some $x \in S \cap (\mathbb{R} - \mathbb{Q})$. Set $x = \dfrac{1}{y+2}$; as in Case #1, $x \in S$, $x$ is irrational (being the reciprocal of the irrational number $y+2$), and $f(x) = y$. $\bf{Case \ \#4}$: We show that every irrational $y < 0$ satisfies $y = f(x)$ for some $x \in R \cap (\mathbb{R} - \mathbb{Q})$. Set $x = \dfrac{1}{y-2} + 1$; as in Case #2, $x \in R$, $x$ is irrational, and $f(x) = y$. \ This completes our proof that $f$ is a bijective function between the open unit interval $(0,1)$ and $\mathbb{R}$ that takes rationals to rationals and irrationals to irrationals.
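As a quick sanity check (not part of the proof), one can evaluate $f$ and its case-wise inverse with exact rational arithmetic; the helper names below are made up for illustration.

```python
from fractions import Fraction

# Spot-check of the claimed bijection: f sends rationals in (0,1) to
# rationals, and the case-wise inverse recovers x. (Illustrative only.)

def f(x):
    return 1 / x - 2 if x <= Fraction(1, 2) else 1 / (x - 1) + 2

def f_inv(y):
    # y >= 0 comes from the S-branch, y < 0 from the R-branch
    return 1 / (y + 2) if y >= 0 else 1 / (y - 2) + 1

for x in [Fraction(1, 3), Fraction(1, 2), Fraction(2, 3), Fraction(99, 100)]:
    y = f(x)
    assert isinstance(y, Fraction)   # a rational input gives a rational output
    assert f_inv(y) == x             # the inverse recovers x
```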
This is actually part of an exercise I found in W. Hodges' "A Shorter Model Theory" (p. 147): Let $\mathcal{L}$ be a first-order language and T a theory in $\mathcal{L}$. Also let $\mathcal{A}$ and $\mathcal{B}$ be models of T and let $\mathcal{C}$ be an $\mathcal{L}$-structure such that $\mathcal{A} \subseteq \mathcal{C} \subseteq \mathcal{B}$. If T is equivalent to a set of $\exists \forall$-sentences and $\mathcal{A} \preccurlyeq_2 \mathcal{B}$ (i.e. for every $\exists \forall$-formula $\phi(\bar{x})$ of $\mathcal{L}$ and every tuple $\bar{a}$ of elements of $\mathcal{A}$, $\mathcal{B} \models \phi(\bar{a})$ implies $\mathcal{A} \models\phi(\bar{a})$), then $\mathcal{C}$ is also a model of T. I think if we can show that $\mathcal{C}\models \exists \bar{x}\forall\bar{y} \phi(\bar{x},\bar{y})$ whenever $\mathcal{A},\mathcal{B} \models\exists \bar{x}\forall\bar{y} \phi(\bar{x},\bar{y})$ with $\mathcal{A} \subseteq \mathcal{C} \subseteq \mathcal{B}$, then we are done, but I have no idea how to get this conclusion. Any hints are welcome. Thank you!
The general theorem is: for all odd, distinct primes $p, q$, the following holds: $$\left( \frac{p}{q} \right) \left( \frac{q}{p} \right) = (-1)^{\frac{p-1}{2}\frac{q-1}{2}}$$ I've discovered the following proof for the case $q=3$: Consider the Möbius transformation $f(x) = \frac{1}{1-x}$, defined on $\mathbb{F}_{p} \cup \{\infty\}$. It is a bijection of order 3: $f^{(3)} = \mathrm{Id}$. Now we'll count the number of fixed points of $f$, modulo 3: 1) We can calculate the number of solutions to $f(x) = x$: it is equivalent to $(2x-1)^2 = -3$. Since $p \neq 2,3$, the number of solutions is $\left( \frac{-3}{p} \right) + 1$ (if $-3$ is a non-square, there are no solutions; otherwise, there are 2 distinct solutions, corresponding to the 2 distinct square roots of $-3$). 2) We know the structure of $f$ as a permutation: only 3-cycles or fixed points. Thus, the number of fixed points is just $|\mathbb{F}_{p} \cup \{\infty\}| \bmod 3$, i.e. $p+1 \bmod 3$. Combining the two results yields $p \equiv \left( \frac{-3}{p} \right) \pmod 3$. Exploiting Euler's criterion gives $\left( \frac{p}{3} \right) \equiv p^{\frac{3-1}{2}} = p \pmod 3$, and using $\left( \frac{-1}{p} \right) = (-1)^{\frac{p-1}{2}}$, we get: $$\left( \frac{3}{p} \right) \left( \frac{p}{3} \right) \equiv (-1)^{\frac{p-1}{2}\frac{3-1}{2}} \pmod 3$$ and equality in $\mathbb{Z}$ follows. My questions: Can this idea be generalized, with other functions $f$? Is there a list/article of proofs of special cases of the theorem?
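The fixed-point count in step 1 can be checked numerically for small primes. The sketch below uses Euler's criterion and the conventions $f(1)=\infty$, $f(\infty)=0$; the helper names `legendre` and `count_fixed` are made up for illustration.

```python
# Verify the fixed-point count of f(x) = 1/(1-x) on F_p ∪ {∞} against the
# Legendre symbol (-3/p), for a few primes p > 3.

def legendre(a, p):
    # Euler's criterion: a^((p-1)/2) mod p is 1 for squares, p-1 for non-squares
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def count_fixed(p):
    count = 0
    for x in range(p):
        if x == 1:
            continue                   # f(1) = ∞ ≠ 1
        if x == pow(1 - x, p - 2, p):  # f(x) = (1-x)^(-1) equals x ?
            count += 1
    return count                       # ∞ is never fixed, since f(∞) = 0

for p in [5, 7, 11, 13, 17, 19]:
    assert count_fixed(p) == legendre(-3, p) + 1
```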
Answer: (b) is the sketch that best represents the solution. Work Step by Step 1. Considering a pure solution: $[H_3O^+] = [A^-] = x$ Therefore: $K_a = \frac{[H_3O^+][A^-]}{[HA]} = \frac{x^2}{[HA]}$ 2. If the molarity of the solution is doubled: $K_a = \frac{y^2}{2[HA]}$ ** $K_a$ is constant, but $[H_3O^+]$ changes, so we use another unknown: y. Since $K_a$ is constant, we can say that: $\frac{x^2}{[HA]} = \frac{y^2}{2[HA]}$ - We can eliminate the $[HA]$: $x^2 = \frac{y^2}{2}$ - And take the square root of both sides: $\sqrt {x^2} = \sqrt { \frac{y^2}{2}}$ $x \approx \frac{y}{1.4}$ $y \approx 1.4x$ - Since x is the concentration of the far left $H_3O^+$, and it has 10 protons in the image: $y = 10 \times 1.4 = 14$ Then "y" should have 14 protons in its image, so (b) is the right answer.
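The scaling step reduces to one line of arithmetic (the "1.4" is just $\sqrt{2}$ rounded):

```python
import math

# Doubling [HA] at fixed Ka scales [H3O+] by sqrt(2).
x = 10                 # H3O+ ions drawn in the original sketch
y = math.sqrt(2) * x   # ions after the molarity is doubled
assert round(y) == 14  # so the correct sketch shows 14 ions -- answer (b)
```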
I am currently reading Atkins and Friedman's "Molecular Quantum Mechanics" (4th ed), looking at the Rayleigh-Ritz variation method. Starting from the Schrödinger equation $\hat{H}\psi = E \psi$, we get the "Rayleigh ratio" $$ E = \frac{\int \psi^*\hat{H}\psi d\tau}{\int \psi^*\psi d\tau} $$ Setting a trial function to be the following linear combination (assuming real coefficients) $$ \psi_{trial} = \sum_i c_i\psi_i $$ we find that $$ E = \frac{\sum_{i,j} c_ic_j H_{ij}}{\sum_{i,j} c_ic_j S_{ij}} $$ where $H_{ij} = \int \psi_i^*\hat{H}\psi_j d\tau$ and $S_{ij} = \int \psi_i^*\psi_j d\tau$. Now the goal is to minimize the expression for $E$. We should therefore "differentiate with respect to each coefficient in turn and set $\partial E / \partial c_k = 0$ in each case". Using the quotient rule, I get the following $$ \frac{\partial E}{\partial c_k} = \frac{ \sum_{i,j} c_ic_jS_{ij} \cdot \frac{\partial}{\partial c_k} \left( \sum_{i,j}c_ic_jH_{ij} \right) }{\left( \sum_{i,j}c_ic_jS_{ij} \right)^2} - \frac{ \sum_{i,j} c_ic_jH_{ij} \cdot \frac{\partial}{\partial c_k} \left( \sum_{i,j}c_ic_jS_{ij} \right) }{\left( \sum_{i,j}c_ic_jS_{ij} \right)^2} = 0 $$ However, I am not sure how to simplify this to obtain the correct expression, which is $$ \frac{\partial E}{\partial c_k}_{Correct} = \frac{\sum_j c_j (H_{kj} - ES_{kj})}{\sum_{i,j} c_ic_jS_{ij}} + \frac{\sum_i c_i (H_{ik} - ES_{ik})}{\sum_{i,j} c_ic_jS_{ij}} = 0 $$ Clearly I can cancel a factor in the first term of my expression, but I don't know how to handle the derivatives with respect to $c_k$.
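This is not the book's derivation, but the key fact needed here can be checked numerically: for a quadratic form $Q(c)=\sum_{i,j}c_ic_jM_{ij}$ one has $\partial Q/\partial c_k=\sum_j c_jM_{kj}+\sum_i c_iM_{ik}$, which (applied with $M=H$ and $M=S$) turns the quotient-rule expression into the quoted answer. All names below are illustrative.

```python
import random

# Numerical check of dQ/dc_k = sum_j c_j M_kj + sum_i c_i M_ik
# for Q(c) = sum_ij c_i c_j M_ij, via central finite differences.

n = 3
random.seed(0)
M = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
c = [random.uniform(-1, 1) for _ in range(n)]

def Q(c):
    return sum(c[i] * c[j] * M[i][j] for i in range(n) for j in range(n))

h = 1e-6
for k in range(n):
    cp, cm = c[:], c[:]
    cp[k] += h
    cm[k] -= h
    numeric = (Q(cp) - Q(cm)) / (2 * h)
    analytic = (sum(c[j] * M[k][j] for j in range(n))
                + sum(c[i] * M[i][k] for i in range(n)))
    assert abs(numeric - analytic) < 1e-6
```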
Is there any (i.e., to be found in probability books) metric for the distance between two probability measures, defined only on a subset of their support? Take, for example, the total variation distance: $$TV(\mu,\nu)=\sup_{A\in\mathcal{F}}|\mu(A)-\nu(A)|.$$ If $X$ and $Y$ are two real positive continuous random variables with densities $f_X$ and $f_Y$, then their total variation distance is, if I understand correctly: $$TV(\mu_X,\mu_Y)=\int_{0}^{\infty}|f_X(z)−f_Y(z)|dz.$$ Would it make any sense to calculate a quantity, for $\tau>0$, let's call it partial distance, like this: $$PV(\mu_X,\mu_Y;\tau)=\int_{\tau}^{\infty}|f_X(z)−f_Y(z)|dz\;\;\;?$$ If this does not make any sense (sorry, I really cannot tell, as I am not that good with measure theory...), can anyone think of a measure that would make sense? What I want to use this for is to compare the closeness of two PDFs (or other functions describing a distribution: CDF, CCDF...) $f_X(t)$, $f_Y(t)$ to a third one $f_Z(t)$. I know that both $f_X$ and $f_Y$ "eventually" ($t\to\infty$) converge to $f_Z$, but I would like to show that one of them gets closer, sooner than the other one...
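For what it's worth, the proposed $PV$ quantity is easy to compute numerically; below is a rough sketch with a midpoint rule, a truncated upper limit, and Exp(1) vs Exp(2) as made-up example densities (none of this comes from a textbook).

```python
import math

# Rough numerical version of the proposed "partial distance":
#   PV(f, g; tau) = integral from tau to infinity of |f(z) - g(z)| dz,
# truncated at a large upper limit and computed with a midpoint rule.

def pv(f, g, tau, upper=50.0, steps=200000):
    dz = (upper - tau) / steps
    return sum(abs(f(tau + (i + 0.5) * dz) - g(tau + (i + 0.5) * dz))
               for i in range(steps)) * dz

f = lambda z: math.exp(-z)               # Exp(1) density
g = lambda z: 2.0 * math.exp(-2.0 * z)   # Exp(2) density

full = pv(f, g, 0.0)   # tau = 0 recovers the integral in the TV formula above
```

For these two exponentials the $\tau=0$ value is exactly $1/2$, and $PV$ shrinks monotonically as $\tau$ grows (the integrand is nonnegative), which is the kind of "gets closer, sooner" comparison described above.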
[EDIT]: After getting a very nice answer by Kevin Buzzard I realize that my question was a little bit too vague and I will try to restate it more precisely. As the title says, I would like to understand an isomorphism of Hida from a more geometric perspective than what I normally read. What bothers me is that there are two constructions of the universal (ordinary) Hida-Hecke algebra and they turn out to give isomorphic objects: fix a prime $p\geq 5$, and a tame level $N$ prime to $p$. Take the projective limit over the level $r$ of the Hecke algebra acting on $S_k(\Gamma_1(Np^r),\mathbb{Z}_p)$ where $k$ is any weight. By applying the usual idempotent, one gets the Hida-Hecke ordinary algebra $h_k^0(Np^\infty;\mathbb{Z}_p)$, where I adopt notations as in Hida's paper in Inventiones, 1986, "Galois representations into $\mathrm{GL}(2,\mathbb{Z}_p[[X]])$...". Consider now the injective limit over the weight of the spaces of cusp forms $S_k(Np;\mathbb{Z}_p)$. By taking a suitable completion of this injective limit, one sees that the projective limit (over the weight, now) of Hecke algebras acts on the above completion. Applying again the idempotent, we get the Hida-Hecke ordinary algebra $h^0(N,\mathbb{Z}_p)$. Theorem 1.1 in the quoted paper by Hida shows that these two algebras are isomorphic (in the most compatible way one can dream of, in particular inducing the same Hecke action on spaces of cusp forms) but his proof is entirely algebraic. My question is: is there a reasonable way to prove this isomorphism geometrically? As Kevin Buzzard suggested, several papers of Katz (and successive work by Coleman-Mazur, Buzzard himself et al.) discuss geometric interpretations of $p$-adic modular forms and $p$-adic families of modular forms.
Still, I do not understand how Hida's isomorphism comparing the Hecke algebra as acting on the projective limit over the level (so ''at the top of the modular tower'') or on the inductive limit over the weight (so, ''over the first curve $X_1(Np)$'') can be given a geometric interpretation.
Suppose a group \(G\) has a finite index subgroup that maps onto the free group of rank 2. Show that every countable group can be embedded in one of the quotient groups of \(G\). Let \( A, B \) be \( n \times n \) Hermitian matrices. Find all positive integer \( n \) such that the following statement holds: “If \( AB – BA \) is singular, then \( A \) and \( B \) have a common eigenvector.” The best solution was submitted by 채지석 (수리과학과 2016학번). Congratulations! Here is his solution of problem 2019-14. A similar solution was submitted by 하석민 (수리과학과 2017학번, +3). Late solutions are not graded. A group \(G\) is called residually finite if for any nontrivial element \(g\) of \(G\), there exists a finite group \(K\) and a surjective homomorphism \(\rho: G \to K\) such that \(\rho(g)\) is a nontrivial element of \(K\). Suppose \(G\) is a finitely generated residually finite group. Show that any surjective homomorphism from \(G\) to itself is an isomorphism. The best solution was submitted by 채지석 (수리과학과 2016학번). Congratulations! Here is his solution of problem 2019-14. Other solutions were submitted by 김동률 (수리과학과 2015학번, +3), 김태균 (수리과학과 2016학번, +3), 하석민 (수리과학과 2017학번, +3).
Let \(I, J\) be connected open intervals such that \(I \cap J\) is a nonempty proper sub-interval of both \(I\) and \(J\). For instance, \(I = (0, 2)\) and \(J = (1, 3)\) form an example. Let \(f\) (\(g\), resp.) be an orientation-preserving homeomorphism of the real line \(\mathbb{R}\) such that the set of points of \(\mathbb{R}\) which are not fixed by \(f\) (\(g\), resp.) is precisely \(I\) (\(J\), resp.). Show that for large enough integer \(n\), the group generated by \(f^n, g^n\) is isomorphic to the group with the following presentation \[ <a, b | [ab^{-1}, a^{-1}ba] = [ab^{-1}, a^{-2}ba^2] = id>. \] The best solution was submitted by 김동률 (수리과학과 2015학번). Congratulations! Here is his solution of problem 2019-12. POW 2019-12 is still open and anyone who first submits a correct solution will get the full credit. Let \( A_{a, b} = \{ (x, y) \in \mathbb{Z}^2 : 1 \leq x \leq a, 1 \leq y \leq b \} \). Consider the following property, which we call Property R: “If each of the points in \(A\) is colored red, blue, or yellow, then there is a rectangle whose sides are parallel to the axes and vertices have the same color.” Find the maximum of \(|A_{a, b}|\) such that \( A_{a, b} \) has Property R but \( A_{a-1, b} \) and \( A_{a, b-1} \) do not. The best solution was submitted by 하석민 (수리과학과 2017학번). Congratulations! Here is his solution of problem 2019-13. An incorrect solution was received. Late solutions are not graded. 1. There will be no POW this week due to 추석 (thanksgiving) break. POW will resume next week. 2.
The submission due for POW2019-12 is extended to Sep. 18 (Wed.).
Your confusion is justified -- the author's definitions abuse notation, at least in my opinion, leading to unnecessary ambiguity. $I$ is the index set; this means it is the set of labels which are assigned to each event in the collection of events. Of course, each event is assigned at most one label, and each label is assigned at most one event. The big problem is that $I$ is used in at least two different contexts (as is the letter $\alpha$ to denote a generic label in an index set). Let me try to re-write the author's definitions so that they are clearer: Independence of events within a single collection "from each other": A possibly-infinite collection $\{A_\alpha\}_{\alpha \in I}$ of events is said to be independent if for each $j \in \mathbb{N}$ and each distinct finite choice $\alpha_1,\alpha_2,\dotsc, \alpha_j\in I$ we have $$ \tag{3.2.1} P(A_{\alpha_1}\cap A_{\alpha_2} \cap \dots \cap A_{\alpha_j}) = P(A_{\alpha_1})P(A_{\alpha_2}) \dots P(A_{\alpha_j}) $$ Let's review the implicit hierarchy and try to draw an analogy -- let's say that events are like cells, and collections (of events) are like human bodies. Then the notion of independence defined above is for cells within a single body. The author's second definition is for independence of different human bodies within a society. This is why I believe it to be an abuse of notation to use the same letter $I$ to denote the index set in the first and the second definition -- in the first definition, $I$ is indexing cells in a body, whereas in the second definition, $I$ is indexing bodies in a society. Obviously there is a clear analogy between the two, but over-using the notation leads to unnecessary blurring of the concepts.
Here I rewrite the author's second definition: Independence of distinct collections "from each other": Distinct collections of events $\large\{ \mathcal{A_\beta} = (\ \{A_{\alpha_{\beta}} \}_{\alpha_{\beta} \in I_{\beta}}) ;\beta \in J \}$ are independent if for all $j \in \mathbb{N}$, for all distinct $\beta_1,\beta_2,\dotsc, \beta_j\in J$, and for all $\large A_{\alpha_{\beta_1}} \in \mathcal{A}_{\beta_1},\dotsc ,A_{\alpha_{\beta_j}} \in \mathcal{A}_{\beta_j}$, equation (3.2.1) holds. In other words, each possible collection of events $\mathscr{C}=\{\mathscr{C}_{\beta}\}_{\beta \in J}$ formed such that $\mathscr{C}_{\beta_k} \in \mathcal{A}_{\beta_k}$ for all $\beta_k \in J$ (i.e. such that the $\beta_k$th element of $\mathscr{C}$ comes from the collection $\mathcal{A}_{\beta_k}$) is an independent collection of events in the sense of the first definition. One can almost see why the author was imprecise, because in trying to be completely precise here I have almost made it somewhat less clear -- the idea is that $\beta$ denotes a generic element of $J$, which is an index, where $J$ indexes the collections; $\beta_m$ is one of $j$ arbitrary elements taken from $J$; $\alpha_{\beta}$ is a generic index of the index set $I_{\beta}$, the index set of the $\beta$th collection of events, so $I_{\beta}$ indexes events, not collections of events; $\alpha_{\beta_m}$ is an arbitrary index from the index set $I_{\beta_m}$, which is one of $j$ arbitrary index sets $I_{\beta_1}, \dots, I_{\beta_j}$; and thus $A_{\alpha_{\beta_m}}$ is an arbitrary element from the collection of events $\mathcal{A}_{\beta_m}$. Two things to note: The second definition does not imply the first one. Namely, one could have that a society consists of humans which are all independent from each other, but it is not necessarily the case that all of the cells within each human are independent from each other.
In fact, the case when the second definition holds but the first does not hold will be encountered more commonly, as is made clear below. The type of collections of events for which the second definition is most often applied is $\sigma$-algebras. This is obviously because $\sigma$-algebras are one of the basic components which make up a probability space; also because we need to consider multiple $\sigma$-algebras for the same event space at the same time when considering concepts like conditional expectation or stochastic processes. For example, given $n\in\mathbb{N}$ $\sigma$-algebras $\mathscr{F}_1, \dots, \mathscr{F}_n$ it is not that atypical for all $n$ of them to be independent of each other, e.g. given $A_1 \in \mathscr{F_1}$ and $B_1 \in \mathscr{F}_2$, we have $$\mathbb{P}(A_1 \cap B_1)=\mathbb{P}(A_1)\mathbb{P}(B_1),$$ but it is usually not the case (in fact I believe it is even impossible unless the $\sigma$-algebra in question is somehow degenerate) that all of the sets within each $\sigma$-algebra are independent from each other, i.e. usually we will have for $A_1, A_2 \in \mathscr{F}_1$ that $$\mathbb{P}(A_1 \cap A_2) \not=\mathbb{P}(A_1)\mathbb{P}(A_2).$$ This however is not a problem, since the first definition does not need to hold for the second definition to hold. Also, the same way the notion of independence of distinct collections of events is used to define the notion of independent $\sigma$-algebras, the notion of independent $\sigma$-algebras is then used to define the notion of independence of random variables. Namely, two random variables $X$ and $Y$ are independent if and only if the $\sigma$-algebras generated by them, $\sigma(X)$ and $\sigma(Y)$ respectively, are independent (as $\sigma$-algebras).
Anyway, the purpose of the second definition is to define a notion of independence for $\sigma-$algebras without demanding that all of the sets within each individual $\sigma$-algebra are independent from each other (which would be the content of the first definition).
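A toy numerical illustration of the distinction, using two independent fair coin flips (all names below are made up):

```python
from fractions import Fraction
from itertools import product

# Events generated by flip 1 are independent of events generated by flip 2
# (the second definition), yet events within sigma(flip 1) need not be
# independent of each other (the first definition fails there).

omega = set(product([0, 1], repeat=2))   # sample space, uniform measure

def P(A):
    return Fraction(len(A), len(omega))

A1 = {w for w in omega if w[0] == 1}     # "first flip is heads"  (in sigma(flip 1))
B1 = {w for w in omega if w[1] == 1}     # "second flip is heads" (in sigma(flip 2))

assert P(A1 & B1) == P(A1) * P(B1)       # across the two sigma-algebras: independent
assert P(A1 & A1) != P(A1) * P(A1)       # within sigma(flip 1): not independent
```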
Instability of bound states for 2D nonlinear Schrödinger equations 1. Department of Mathematical Sciences, Yokohama City University, Seto 22-2, 236-0027, Japan $iu_t+\Delta u+|u|^{p-1}u=0\quad$ for $x\in \mathbb R^2$ and $t>0$, where $(r,\theta)$ are polar coordinates and $m\in\mathbb N$. Using the Evans function, we prove linear instability of standing wave solutions with nodes in the case where $p>3$. Mathematics Subject Classification: 35B35, 35Q55, 35J60, 35B0. Citation: Tetsu Mizumachi. Instability of bound states for 2D nonlinear Schrödinger equations. Discrete & Continuous Dynamical Systems - A, 2005, 13 (2) : 413-428. doi: 10.3934/dcds.2005.13.413
In quantum computation, what is the equivalent model of a Turing machine? It is quite clear to me how quantum circuits can be constructed out of quantum gates, but how can we define a quantum Turing machine (QTM) that can actually benefit from quantum effects, namely, perform on high-dimensional systems? (Note: the full description is a bit complex, and has several subtleties which I preferred to ignore. The following is merely the high-level ideas of the QTM model.) When defining a quantum Turing machine (QTM), one would like to have a simple model, similar to the classical TM (that is, a finite state machine plus an infinite tape), but allow the new model the advantage of quantum mechanics. Similarly to the classical model, a QTM has: $Q=\{q_0,q_1,..\}$ - a finite set of states. Let $q_0$ be the initial state. $\Sigma=\{\sigma_0,\sigma_1,...\}$, $\Gamma=\{\gamma_0,..\}$ - the input and working alphabets. An infinite tape and a single "head". However, when defining the transition function, one should recall that any quantum computation must be reversible. Recall that a configuration of a TM is the tuple $C=(q,T,i)$ denoting that the TM is at state $q\in Q$, the tape contains $T\in \Gamma^*$ and the head points to the $i$th cell of the tape. Since, at any given time, the tape contains only a finite number of non-blank cells, we define the (quantum) state of the QTM as a unit vector in the Hilbert space $\mathcal{H}$ generated by the configuration space $Q\times\Gamma^*\times \mathbb{Z}$. The specific configuration $C=(q,T,i)$ is represented as the state $$|C\rangle = |q\rangle |T\rangle |i\rangle.$$ (Remark: therefore, every cell of the tape is a $|\Gamma|$-dimensional Hilbert space.)
The QTM is initialized to the state $|\psi(0)\rangle = |q_0\rangle |T_0\rangle |1\rangle$, where $T_0\in \Gamma^*$ is the concatenation of the input $x\in\Sigma^*$ with as many "blanks" as needed (there is a subtlety here in determining the maximal length, but I ignore it). At each time step, the state of the QTM evolves according to some unitary $U$ $$|\psi(i+1)\rangle = U|\psi(i)\rangle$$ Note that the state at any time $n$ is given by $|\psi(n)\rangle = U^n|\psi(0)\rangle$. $U$ can be any unitary that "changes" the tape only where the head is located and moves the head one step to the right or left. That is, $\langle q',T',i'|U|q,T,i\rangle$ is zero unless $i'= i \pm 1$ and $T'$ differs from $T$ only at position $i$. At the end of the computation (when the QTM reaches a state $q_f$) the tape is measured (using, say, the computational basis). The interesting thing to notice is that at each step the QTM's state is a superposition of possible configurations, which gives the QTM the "quantum" advantage. The answer is based on Masanao Ozawa, On the Halting Problem for Quantum Turing Machines. See also David Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer. As the notes indicate, the way to define a QTM is to define the transition function as a unitary transform of state and letter. So in each step, you imagine multiplying the (state, letter) vector by a transformation to get a new (state, letter). It's not particularly convenient, but it can be defined.
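To make the superposition-of-configurations idea concrete, here is a toy sketch -- not a full QTM, just amplitudes over configurations $(q, T, i)$ with one hard-coded local unitary (a Hadamard on the scanned bit followed by a deterministic right move), checking that the norm is preserved. Everything here is made up for illustration.

```python
import math

H = 1 / math.sqrt(2)

def step(psi):
    # psi: dict mapping configurations (q, tape, head) to complex amplitudes.
    out = {}
    for (q, tape, i), amp in psi.items():
        b = tape[i]
        # Hadamard on the scanned cell: |b> -> (|0> + (-1)^b |1>)/sqrt(2)
        for new_bit, factor in ((0, H), (1, H * (-1) ** b)):
            new_tape = tape[:i] + (new_bit,) + tape[i + 1:]
            cfg = (q, new_tape, i + 1)   # deterministic right move
            out[cfg] = out.get(cfg, 0) + amp * factor
    return out

psi = {("q0", (0, 0, 0), 0): 1.0}   # classical start configuration
psi = step(psi)
psi = step(psi)                     # now a superposition of 4 configurations

norm = sum(abs(a) ** 2 for a in psi.values())
assert abs(norm - 1.0) < 1e-12      # unitarity preserves the norm
```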
Results from the kaonic hydrogen X-ray measurement at DAFNE and outlook to future experiments Abstract The \(\overline{K}N\) system at rest plays a key role for the understanding of the strong interaction of hadrons with strangeness involved. The experiment SIDDHARTA used X-ray spectroscopy of kaonic atoms to measure the strong-interaction induced shift and width of the ground state. It was the first experiment ever on kaonic He3 and deuterium; kaonic hydrogen was measured with improved precision, resulting in \(\epsilon_{1s} = -283 \pm 36 \mbox{(stat)} \pm 6 \mbox{(syst)}\) eV and \(\Gamma_{1s} = 541 \pm 89 \mbox{(stat)} \pm 22 \mbox{(syst)}\) eV. Additionally, a scheme for an improved future experiment on kaonic deuterium is introduced in this contribution. Keywords: Kaonic hydrogen, Antikaon-nucleon physics, Silicon drift detectors
I am reading D. Joyce's book "Compact Manifolds with Special Holonomy" and I have trouble understanding a computation on page 111, the first line in the proof of Proposition 5.4.6. More specifically, the following: Let $(M,\omega, J)$ be a compact Kähler manifold with Kähler form $\omega$ and complex structure $J$. In holomorphic coordinates $\omega$ is of the form $\omega = ig_{\alpha \overline{\beta}}dz^{\alpha} \wedge d\overline{z}^{\beta}$. Associated to the above data we have the Riemannian metric $g$ which may be written in holomorphic coordinates as $g=g_{\alpha \overline{\beta}}(dz^{\alpha}\otimes d\overline{z}^{\beta} + d\overline{z}^{\beta} \otimes dz^{\alpha})$. Associated to $g$ let $\nabla$ be the Levi-Civita connection which also defines a covariant derivative on tensors. For a function $\phi$ on $M$ one may compute $\nabla^{k}\phi$. For example $\nabla \phi = (\nabla_{\lambda}\phi)dz^{\lambda} + (\nabla_{\overline{\lambda}}\phi)d\overline{z}^{\lambda}=(\partial_{\lambda}\phi)dz^{\lambda} + (\partial_{\overline{\lambda}}\phi)d\overline{z}^{\lambda}$ (on functions $\nabla$ acts as the usual $d$) and $\nabla_{\alpha \beta}\phi = \partial_{\alpha \beta} \phi - \partial_{\gamma}\phi \Gamma^{\gamma}_{\alpha \beta}$, $\nabla_{\alpha \overline{\beta}}\phi = \partial_{\alpha \overline{\beta}}\phi$ etc. In the first sentence of the proof of Proposition 5.4.6 Joyce considers the equation $\det(g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi) = e^{f}\det(g_{\alpha \overline{\beta}})$, where $f:M\rightarrow \mathbb{R}$ is a smooth function on $M$. After taking the $\log$ of this equation he obtains $\log[\det(g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi)] - \log[\det(g_{\alpha \overline{\beta}} )] = f$ which is obviously a globally defined equality of functions on $M$.
Now he takes the covariant derivative $\nabla$ of this equation and obtains $\nabla_{\overline{\lambda}}f = g'^{\mu \overline{\nu}}\nabla_{\overline{\lambda} \mu \overline{\nu}}\phi$ where $g'^{\mu \overline{\nu}}$ is the inverse of the metric $g'_{\alpha \overline{\beta}} = g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi$ (which he assumes to exist). This last step (taking the covariant derivative) I do not understand. In my computation I have the following: when taking the covariant derivative $\nabla_{\overline{\lambda}}$ of the equation $\log[\det(g_{\alpha \overline{\beta}} + \partial_{\alpha \overline{\beta}}\phi)] - \log[\det(g_{\alpha \overline{\beta}} )] = f$ and using the formula for the derivative of the determinant, I obtain $g'^{\alpha \overline{\beta}}(\partial_{\overline{\lambda}}g_{\alpha \overline{\beta}} + \partial_{\overline{\lambda} \alpha \overline{\beta}}\phi) - g^{\alpha \overline{\beta}}(\partial_{\overline{\lambda}}g_{\alpha \overline{\beta}}) = \partial_{\overline{\lambda}}f = \nabla_{\overline{\lambda}}f$. This is obviously different from his formula. Moreover, the term $\nabla_{\overline{\lambda}\mu \overline{\nu}}\phi$ contains not only third-order derivatives of $\phi$ but also a term with second derivatives of $\phi$. My question is: where is my mistake? Have I misunderstood something?
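Independently of the covariant-derivative issue, the determinant-derivative formula being used, $\frac{d}{dt}\log\det A(t) = \operatorname{tr}(A^{-1}\dot A)$, can be sanity-checked numerically; here is a plain-Python $2\times 2$ sketch with a made-up setup.

```python
import math
import random

# Check d/dt log det A(t) = sum_{i,j} (A^{-1})_{ji} (dA/dt)_{ij} = tr(A^{-1} A').

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv(A):
    d = det(A)
    return [[A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d, A[0][0] / d]]

random.seed(1)
A0 = [[random.uniform(1, 2), random.uniform(0, 1)],
      [random.uniform(0, 1), random.uniform(1, 2)]]   # det(A0) > 0 with this seed
B = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # A'(0)

def A(t):
    return [[A0[i][j] + t * B[i][j] for j in range(2)] for i in range(2)]

h = 1e-6
numeric = (math.log(det(A(h))) - math.log(det(A(-h)))) / (2 * h)
Ainv = inv(A0)
analytic = sum(Ainv[j][i] * B[i][j] for i in range(2) for j in range(2))
assert abs(numeric - analytic) < 1e-6
```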
How can you derive the Carnot efficiency using only properties of reversible cycles? Use the second law of thermodynamics. Consider as the system a generic reversible engine plus the cold and hot sources. This is a closed and reversible system, therefore its entropy change vanishes. Decompose this change into the change due to the engine and the change due to the sources. The entropy change of the engine after a cycle vanishes, thus $$\Delta S=\Delta S_{\mathrm{sources}}=0.$$ The hot source loses $|Q_1|$ at constant temperature $T_1$, whereas the cold source gains $|Q_2|$ at temperature $T_2$. Hence, $$\Delta S=-\frac{|Q_1|}{T_1}+\frac{|Q_2|}{T_2}=0,$$ i.e. $$\frac{|Q_2|}{|Q_1|}=\frac{T_2}{T_1}.$$ Plug this into the efficiency $$\eta=\frac{W}{|Q_1|}=1-\frac{|Q_2|}{|Q_1|},$$ and obtain the efficiency of a reversible engine $$\eta=1-\frac{T_2}{T_1}.$$ Notice that we have not made any assumption about the cycle or even the agent operating the engine. That is the core of Carnot's theorem: any reversible engine, regardless of its nature, working between the same sources has the same efficiency, so it is natural that this efficiency can be calculated without any reference to the Carnot cycle, which is one specific cycle followed by one specific agent (an ideal gas).
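The bookkeeping above is easy to mirror in a few lines of code (a sketch; the function names are mine, not from any text):

```python
def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    """Efficiency of a reversible engine between reservoirs at t_hot, t_cold (kelvin)."""
    if t_cold <= 0 or t_hot <= t_cold:
        raise ValueError("require 0 < t_cold < t_hot (absolute temperatures)")
    # |Q2|/|Q1| = T2/T1 follows from the vanishing total entropy change,
    # so eta = W/|Q1| = 1 - |Q2|/|Q1| = 1 - T2/T1.
    return 1.0 - t_cold / t_hot

def heat_rejected(q_in: float, t_hot: float, t_cold: float) -> float:
    """|Q2| dumped into the cold reservoir for |Q1| = q_in absorbed, reversible cycle."""
    return q_in * t_cold / t_hot

# example: reservoirs at 600 K and 300 K give eta = 0.5
eta = carnot_efficiency(600.0, 300.0)
```

Note that the entropy balance can be checked directly: with $|Q_1| = 100$, the rejected heat is $50$, and $-100/600 + 50/300 = 0$.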
I think that everybody will agree that \({(\sqrt{9+\sqrt{80}}+\sqrt{9-\sqrt{80}})^2}\) is much easier to understand than (sqr root of (9 + sqr root of 80) + sqr root of (9 - sqr root of 80))^2. So, in order to help you with the questions you post more efficiently, please use the following guide to write math formulas.

Square roots
How to make (x+5)^(1/2)<17^(1/2) look like \(\sqrt{x+5}<\sqrt{17}\):
Step 1: Mark x+5 and press the square_root button, then mark 17 and press the square_root button again;
Step 2: Now mark the whole expression and press the m button.

Other Useful Symbols

Another Way of Writing Fractions
How to make (a+b)/c look like \(\frac{a+b}{c}\):
Step 1: Write \frac{a+b}{c} (note that the numerator and denominator must be enclosed in { } and you must write out \frac to tell the system that it is a fraction);
Step 2: Mark the whole expression and press the m button.

Exponents
How to make x^12 look like \(x^{12}\):
Step 1: Write x^{12} (note that multi-digit powers must be enclosed in { });
Step 2: Mark the whole expression and press the m button.

Roots
How to make the 3rd root of x^2 look like \(\sqrt[3]{x^2}\):
Step 1: Write \sqrt[3]{x^2} (note that 3 must be enclosed in [ ] and 2 must be enclosed in { });
Step 2: Highlight the whole expression and press the m button.

Inequalities
\(x\approx{3}\): write x\approx{3} and press the m button (note that 3 must be enclosed in { }).
\(x\leq5\): write x\leq{5} and press the m button.
\(x\geq3\): write x\geq{3} and press the m button.
\(x\neq0\): write x\neq{0} and press the m button.

Subscripts
\(x_1\): write x_1 and press the m button. If a subscript has more than one digit, for example \(x_{15}\), then write x_{15} and press the m button (note that such subscripts must be enclosed in { }).

Geometry
\(\pi\): write \pi and press the m button;
\(\angle\): write \angle and press the m button;
\(90^{\circ}\): write 90^{\circ} and press the m button;
\(\alpha\): write \alpha and press the m button;
\(\triangle\): write \triangle and press the m button.
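For quick reference, the commands above assembled into one display (standard LaTeX source, as the m button renders it):

```latex
\sqrt{x+5}<\sqrt{17} \qquad \frac{a+b}{c} \qquad x^{12} \qquad \sqrt[3]{x^2}
\qquad x\approx{3},\ x\leq{5},\ x\geq{3},\ x\neq{0},\ x_{15}
\qquad \pi,\ \angle,\ 90^{\circ},\ \alpha,\ \triangle
```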
Give an example of a perfect set in $\mathbb R^n$ that does not contain any of the rationals. (Or prove that it does not exist). An easy example comes from the fact that a number with an infinite continued fraction expansion is irrational (and conversely). The set of all irrationals with continued fractions consisting only of 1's and 2's in any arrangement is a perfect set of irrational numbers. Consider the set of reals x whose binary expansion, if you look only at the even digit places, is some fixed non-eventually-repeating pattern z. This is perfect, since we have branching at the odd digits, but they are all irrational, since z is not eventually repeating. You can draw a picture of this set, and it looks something like the Cantor middle third set, except that you divide into four pieces, and take either first+third or second+fourth, depending on the digits of z. Another solution: Begin with an interval having irrational endpoints, and perform the usual Cantor middle-third construction, except that at stage n, be sure to exclude the n-th rational number (with respect to some fixed enumeration), using a subinterval having irrational endpoints. By systematically excluding all rational numbers, you have the desired perfect set of irrationals. (Hi François!) It is well-known that $C$ is homeomorphic to $C \times C$, where $C$ is the Cantor set, as both are zero-dimensional compact metric spaces without isolated points. So $C$ contains uncountably many disjoint homeomorphic copies of $C$ and at most countably many of them can contain rationals...
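The continued-fraction answer can be illustrated numerically: a periodic continued fraction converges to a quadratic irrational, e.g. all 1's gives the golden ratio and all 2's gives $1+\sqrt2$. A small sketch of my own (only finite truncations are computed; the irrationality itself is the theorem quoted above):

```python
from math import sqrt

def cf_value(coeffs):
    """Evaluate the finite continued fraction [a0; a1, a2, ...]."""
    value = float(coeffs[-1])
    for a in reversed(coeffs[:-1]):
        value = a + 1.0 / value
    return value

golden = cf_value([1] * 60)   # [1; 1, 1, ...] converges to (1 + sqrt 5) / 2
silver = cf_value([2] * 60)   # [2; 2, 2, ...] converges to 1 + sqrt 2
```

Any other infinite arrangement of 1's and 2's gives some other irrational, and small perturbations of the coefficient sequence give nearby points of the set, which is the perfectness being claimed.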
Just consider a translate of the Cantor set $C$, denoted $E=C+\{x_0\}$. The perfectness of $E$ is immediate from the perfectness of $C$. To make $E\cap\mathbb{Q}=\varnothing$, we need to choose an $x_0\notin \mathbb{Q}-C$. The only thing left is to show $\mathbb{Q}-C\neq\mathbb{R}$, i.e. $\mathbb{Q}+C\neq\mathbb{R}$ (these sets agree because $1-C=C$). By the Baire category theorem $$\mathbb{Q}+C=\bigcup_{r\in\mathbb{Q}}\{r\}+C$$ can't have any interior point, since $\{r\}+C$ has no interior point for any $r\in\mathbb{Q}$. The conclusion follows. It can be proven that the Cantor set is perfect. Certainly, it contains infinitely many rationals. How about modifying the construction of the Cantor set by defining: $I_1 = [\sqrt{2},\sqrt{2}+1/3] \cup [\sqrt{2}+2/3,\sqrt{2}+1]$, $I_2 = [\sqrt{2},\sqrt{2}+1/9] \cup [\sqrt{2}+2/9,\sqrt{2}+1/3]\cup[\sqrt{2}+2/3,\sqrt{2}+7/9]\cup[\sqrt{2}+8/9,\sqrt{2}+1]$, etc., and setting $P = \cap_{i=1}^\infty I_i$? Each of the endpoints of any interval that appears in the construction is a member of $P$ and is irrational. However, is it true that every member of $P$ must be an endpoint of some interval? I am tempted to think so because we can prove that $P$ does not contain any interval. Let $A$ be an open subset of $\mathbb R$ of finite measure containing $\mathbb Q$. This is possible because $\mathbb Q$ is countable. Let $B=\mathbb R \setminus A$. Now $B$ is closed and uncountable (because it has infinite measure). Let $C$ be the family of open real intervals each of which has countable intersection with $B$. Then $\cup C$ is equal to $\cup D$ where $D$ is a countable subset of $C$, so $B$ has countable intersection with $\cup C$. The uncountable closed set $E= B \setminus \cup C$ is perfect. Indeed, if $p \in E$ and $V$ is an open interval containing $p$, then $E \cap V$ is uncountable.
Under what conditions is a specific sorting algorithm actually the fastest one?

1) When implemented in hardware in a parallel way, does it need to have reasonably low latency while requiring as few gates as possible?
Yes -> use a bitonic sorter or Batcher's odd-even mergesort; the latency is $\Theta(\log(n)^2)$ and the number of comparators and multiplexers is $\Theta(n \cdot \log(n)^2)$.

2) How many different values can each element have? Can every possible value be assigned a unique place in memory or cache?
Yes -> use counting sort or radix sort; those usually have a linear runtime of $\Theta(n \cdot k)$ (counting sort) or $\Theta(n \cdot m)$ (radix sort) but slow down for a large number of different values, where $k$ is the number of possible values (e.g. $2^{\#\text{bits per key}}$) and $m$ is the maximum length of the keys.

3) Does the underlying data structure consist of linked elements?
Yes -> always use in-place merge sort. There are both easy-to-implement fixed-size and adaptive (aka natural) bottom-up in-place merge sorts of different arities for linked data structures, and since they never require copying the entire data in each step and never require recursion either, they are faster than any other general comparison-based sorts, even faster than quicksort.

4) Does the sorting need to be stable?
Yes -> use merge sort, either in place or not, fixed-size or adaptive, depending on the underlying data structure and the kind of data to be expected, even in cases where quicksort would otherwise be preferred, as stabilizing an arbitrary sorting algorithm requires $\Theta(n)$ additional memory in the worst case (for the original indexes), which also needs to be kept in sync with each swap performed on the input data, so any performance gain that quicksort might have over merge sort is probably thwarted.

5) Can the size of the underlying data be bound to a small to medium size? e.g.
Is n < 10,000...100,000,000 (depending on the underlying architecture and data structure)?
Yes -> use bitonic sort or Batcher's odd-even mergesort. Goto 1)

6) Can you spare another $\Theta(n)$ memory?
Yes ->
a) Does the input data consist of large pieces of already sorted sequential data? Yes -> use adaptive (aka natural) merge sort or timsort.
b) Does the input data mostly consist of elements that are almost in the correct place? Yes -> use bubble sort or insertion sort. If you fear their $\Theta(n^2)$ time complexity (which is pathological for almost-sorted data), maybe consider switching to shell sort with an (almost) asymptotically optimal sequence of gaps (some sequences that yield $\Theta(n \cdot \log(n)^2)$ worst-case run time are known), or maybe try comb sort. I'm not sure either shell sort or comb sort would perform reasonably well in practice.
No ->

7) Can you spare another $\Theta(\log(n))$ memory?
Yes ->
a) Does the underlying data structure allow for directed sequential access or better?
Yes -> Does it allow only a single sequence of read/write accesses at a time, up to the end of the data (e.g. directed tape access)?
Yes -> i) use merge sort, but there is no obvious way to make this case in place, so it may require additional $\Theta(n)$ memory. But if you have the time and the balls to do it, there is a way to merge 2 arrays in $\Theta(n)$ time using only $\Theta(\log(n))$ space in a stable way; according to Donald E. Knuth, "The Art of Computer Programming, Volume 3: Sorting and Searching", exercise 5.5.3, there is an algorithm by L. Trabb-Pardo that does so. However, I doubt this would be any faster than the naive mergesort version or the quicksort from the case above.
No, it allows multiple simultaneous accesses to a sequence of data (e.g. it is not a tape drive) -> ii) use quicksort; for practical purposes I would recommend either a randomized or an approximated-median one.
If you are wary of pathological $\Theta(n^2)$ cases, consider using introsort. If you are hell-bent on deterministic behavior, consider using the median-of-medians algorithm to select the pivot element; it requires $\Theta(n)$ time and its naive implementation requires $\Theta(n)$ space (parallelizable), whereas it may be implemented to require only $\Theta(\log(n))$ space (not parallelizable). However, the median-of-medians algorithm gives you a deterministic quicksort with worst-case $\Theta(n \cdot \log(n))$ run time.
No -> you're screwed (sorry, we need at least one way of accessing each data element once)
No ->

8) Can you spare a small constant amount of memory?
Yes -> Does the underlying data structure allow for random access?
Yes -> use heapsort; it has an asymptotically optimal run time of $\Theta(n \cdot \log(n))$, but dismal cache coherency and doesn't parallelize well.
No -> you're screwed
No -> you're screwed

Implementation hints for quicksort:
1) Naive binary quicksort requires $\Theta(n)$ additional memory; however, it is relatively easy to reduce that to $\Theta(\log(n))$ by rewriting the last recursion call into a loop. Doing the same for k-ary quicksorts for k > 2 requires $\Theta(n^{\log_k(k-1)})$ space (according to the master theorem), so binary quicksort requires the least amount of memory, but I would be delighted to hear if anyone knows whether k-ary quicksort for k > 2 might be faster than binary quicksort on some real-world setup.
2) There exist bottom-up, iterative variants of quicksort, but AFAIK they have the same asymptotic space and time bounds as the top-down ones, with the additional downside of being difficult to implement (e.g. explicitly managing a queue). My experience is that for any practical purposes, those are never worth considering.

Implementation hints for mergesort:
1) Bottom-up mergesort is always faster than top-down mergesort, as it requires no recursion calls.
2) The very naive mergesort may be sped up by using a double buffer and switching buffers instead of copying the data back from the temporary array after each step.
3) For many real-world data, adaptive mergesort is much faster than a fixed-size mergesort.
4) The merge algorithm can easily be parallelized by splitting the input data into k approximately equal-sized parts. This will require k references into the data, and it is a good thing to choose k such that all of k (or c*k for a small constant c >= 1) fit into the nearest memory hierarchy (usually the L1 data cache). Choosing the smallest of k elements the naive way (linear search) takes $\Theta(k)$ time, whereas building a min-heap over those k elements and choosing the smallest one requires only amortized $\Theta(\log(k))$ time (picking the minimum is $\Theta(1)$, of course, but we need a little maintenance as one element is removed and replaced by another in each step). The parallelized merge always requires $\Theta(n)$ memory regardless of k.

From what I have written, it is clear that quicksort often isn't the fastest algorithm, except when the following conditions all apply:
1) there are more than a "few" possible values
2) the underlying data structure is not linked
3) we do not need a stable order
4) the data is big enough that the slightly sub-optimal asymptotic run time of a bitonic sorter or Batcher's odd-even mergesort kicks in
5) the data isn't almost sorted and doesn't consist of bigger already-sorted parts
6) we can access the data sequence simultaneously from multiple places
7) either memory writes are particularly expensive (because that is mergesort's main disadvantage), to the point that they slow down the algorithm beyond quicksort's probable sub-optimal splits, or we can only have $\Theta(\log(n))$ additional memory and $\Theta(n)$ is too much (e.g. external storage).
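The two mergesort hints above (bottom-up iteration, double buffering instead of copying back) can be sketched as follows. A minimal Python sketch of my own, assuming a random-access array; a linked-list version would splice nodes instead:

```python
def bottom_up_mergesort(a):
    """Stable bottom-up mergesort: no recursion, and the two buffers are
    swapped after each pass instead of copying the merged data back."""
    n = len(a)
    src, dst = list(a), [None] * n
    width = 1
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            i, j = lo, mid
            for k in range(lo, hi):
                # take from the left run on ties, which keeps the sort stable
                if i < mid and (j >= hi or src[i] <= src[j]):
                    dst[k] = src[i]; i += 1
                else:
                    dst[k] = src[j]; j += 1
        src, dst = dst, src   # swap buffers instead of copying
        width *= 2
    return src

example = bottom_up_mergesort([5, 3, 8, 1, 9, 2, 7])
```

Each pass merges runs of the current width, so after $\lceil\log_2 n\rceil$ passes the array is sorted; the buffer swap is what hint 2 refers to.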
Consider a variant of the traditional coupon collector's problem. There are $n$ kinds of coupons and a $1 \times n$ grid; each cell of the grid corresponds to one kind of coupon. On picking a coupon, we color the corresponding cell. I'd like to ask: what is the expected number of coupons to pick so that the maximum number of consecutive uncolored cells is not greater than $k$? Note that if we denote the expectation by $f(n,k)$, the traditional coupon collector's problem is just $f(n,0) = n\sum_{i=1}^{n}\frac{1}{i}$. It can also be considered a balls-in-bins variant, viewing the cells as bins and the coupons as balls: there are $n$ bins and infinitely many balls; at each round, a ball is put into a random bin, and we are interested in the expected number of balls to throw so that the maximum number of consecutive empty bins is not greater than $k$. It seems that the problem is quite difficult, and some analysis for small cases ($k = 1, 2, \cdots$) is also welcome. Edit: Let $X_i$ be the number of balls in $\mathrm{Bin}_{i}, \cdots, \mathrm{Bin}_{i+k}$ (a window of $k+1$ bins, since the condition fails exactly when $k+1$ consecutive bins are empty). Since $E[T]=\sum_{t\ge 0}\Pr[T>t]$, the problem reduces to calculating $\Pr[X_1=0 \ \vee \cdots \vee \ X_{n-k}=0]$. Using the inclusion-exclusion principle, this in turn reduces to calculating $\Pr[\bigwedge_{X\in S} X=0]$ for any subset $S$ of $\{X_1,\cdots,X_{n-k}\}$. But the final piece is missing, since it's not easy to calculate the coefficients.
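Lacking a closed form, $f(n,k)$ is at least easy to estimate by simulation. A sketch of my own (`f_estimate` and the trial counts are my choices, not from the question):

```python
import random

def max_gap(colored, n):
    """Longest run of consecutive uncolored cells in a 1 x n grid."""
    best = run = 0
    for i in range(n):
        run = run + 1 if i not in colored else 0
        best = max(best, run)
    return best

def draws_until(n, k, rng):
    """Draw uniform coupons until the maximum uncolored run is at most k."""
    colored, draws = set(), 0
    while max_gap(colored, n) > k:
        colored.add(rng.randrange(n))
        draws += 1
    return draws

def f_estimate(n, k, trials=20000, seed=0):
    rng = random.Random(seed)
    return sum(draws_until(n, k, rng) for _ in range(trials)) / trials
```

For $k=0$ this is the classic coupon collector, so `f_estimate(5, 0)` should come out near the exact value $5 H_5 = 137/12 \approx 11.42$, and the estimates should decrease as $k$ grows.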
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$. Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$... What if $\theta$ is irrational... what did I do wrong? 'cause I understand that second one but I'm having a hard time explaining it in words (Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.) DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
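The claim can at least be sanity-checked numerically: a point $z$ is fixed by $n \neq 0$ exactly when $e^{2\pi i a n}=1$, a condition independent of $z$; for rational $a=p/q$ the element $n=q$ already works, while for an irrational $a$ no small $n$ does. A numeric sketch, not a proof:

```python
import cmath
import math

def rotation(a, n):
    """The group element n acting as multiplication by e^{2 pi i a n}."""
    return cmath.exp(2j * math.pi * a * n)

# a = 3/7 rational: n = 7 acts as the identity, so the action is not free
assert abs(rotation(3 / 7, 7) - 1) < 1e-9

# a = sqrt(2) irrational: no n up to 1000 acts as the identity (numerically)
assert all(abs(rotation(math.sqrt(2), n) - 1) > 1e-4 for n in range(1, 1001))
```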
he based much of his success on principles like this, I can't believe I've forgotten it. it's basically saying that it's a waste of time to throw a parade for a scholar or win him or her over with compliments and awards etc, but this is the biggest source of sense of purpose in the non-scholar. yeah there is this thing called the internet, and well yes there are better books than others you can study from, provided they are not stolen from you by drug dealers. you should buy a textbook that they base university courses on if you can save for one. I was working from "Problems in Analytic Number Theory", Second Edition, by M. Ram Murty prior to the idiots robbing me and taking that with them, which was a fantastic book to self-learn from, one of the best I've had actually. Yeah I wasn't happy about it either, it was more than $200 USD actually. well look, if you want my honest opinion, self-study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago, but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self-study, well, no, you have failed in life and should give up entirely.
but that is a very good book regardless of you attending Princeton university or not. yeah me neither, you are the only one I remember talking to on it, but I have been well and truly banned from this IP address for that forum now, which was, as you might have guessed, for being too polite and sensitive to delicate religious sensibilities. but no it's not my forum, I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age, which according to your profile at the time you were. i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it. well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time. i think i was still holding on to some sort of hope of a career in non-stupidity-related fields, which was at some point abandoned. @TedShifrin thanks for that, in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a clearer way of asking. Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, and about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho, even the credits have expired, not the student debt though, so i think they are trying to hint i should go back and start from first year and double said debt, but im a terrible student, it really wasn't worthwhile the first time round considering my rate of attendance then and how unlikely that would be different going back now. @BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus, i loved it back then and for some reason not quite so when i began focusing on prime numbers. What do you all think of this theorem: the number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd, and $24$ times the sum of odd divisors of $n$ if $n$ is even. A proof of this uses (basically) Fourier analysis, even though it looks a rather innocuous albeit surprising result in pure number theory. @BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as, i have simply told myself that is what i am studying, it really started with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf, that reminds me, you were actually right, i don't know if i would have taken it well at the time tho. yeah looks like i deleted the stack exchange question on it anyway, i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that that is what it was, that's all i remember lol. @BalarkaSen oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive. absolutely, me too, but would we have it any other way?
i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned, i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about. @Daminark The key thing, if I remember correctly, was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$; $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and $-1$ has no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$). The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero. I can try to recall more if you're interested. It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1$ (with $\Im[z] > 0$), and the line $\Re[z] = -1$. This gives a certain region in the upper half plane. Paste those two lines, and paste half of the semicircle (from $-1$ to $i$, and then from $i$ to $1$) to the other half by folding along $i$. Yup, that $E_4$ and $E_6$ generate the space of modular forms, that type of thing. I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series is orthogonal to the space of cusp forms - there's a general story I don't quite know. Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps. So it sort of makes sense. Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp.
Indeed, one basically argues like the maximum value theorem in complex analysis. @BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know-it-all types that are in every way detestable, you shouldn't be so hard on your character, you are very humble considering your calibre. You probably don't realise how low the bar drops when it comes to integrity of character, trust me, you wouldn't have come as far as you clearly have if you were a know-it-all. it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned. someone like you is going to be accused of arrogance simply because you intimidate many, ignore the good majority of that mate
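The four-square count quoted in the chat above (Jacobi's theorem) is easy to check by brute force for small $n$. A sketch of my own; `r4` counts ordered, signed representations, zeros included:

```python
from math import isqrt

def r4(n):
    """Number of (a, b, c, d) in Z^4 with a^2 + b^2 + c^2 + d^2 = n."""
    m = isqrt(n)
    vals = range(-m, m + 1)
    return sum(1 for a in vals for b in vals for c in vals for d in vals
               if a * a + b * b + c * c + d * d == n)

def sigma(n, odd_only=False):
    """Sum of divisors of n, optionally only the odd ones."""
    return sum(d for d in range(1, n + 1) if n % d == 0 and (not odd_only or d % 2))

# Jacobi: r4(n) = 8 * sigma(n) for odd n, 24 * sigma_odd(n) for even n
for n in range(1, 21):
    expected = 8 * sigma(n) if n % 2 else 24 * sigma(n, odd_only=True)
    assert r4(n) == expected
```

For instance $r_4(1)=8$ (one coordinate $\pm1$, four positions) and $r_4(2)=24$ (two coordinates $\pm1$: six position pairs times four sign choices), matching $8\sigma(1)$ and $24\sigma_{\mathrm{odd}}(2)$.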
Start from the opposite task. If $\displaystyle \int x^x \, dx=F(x)$ then $\displaystyle F'(x)=x^x$. First we need an asymptotic evaluation of the integral. Let us take it in the form $$F(x)=x^xg(x)$$ so it has to be: $$F'(x)=x^x((1+\ln(x))g(x)+g'(x))=x^x$$ From there it is sufficient to take $g(x) \sim \frac{1}{1+\ln(x)}$. So we can start our journey: $$F(x)=x^x\left(\frac{1}{1+\ln(x)}+f(g(x))\right)$$ If you calculate the derivative of this you have $$g'(x)f'(g(x))-\frac{1}{x(\ln(x)+1)^2}+(\ln(x)+1)f(g(x))=0$$ For the purpose of cancellation it is best to take $$g'(x)=\ln(x)+1$$ meaning $$g(x)=x\ln(x)$$ Now we continue with the steps that reveal the structure of the integral. $$F(x)=x^x\left(\frac{1}{\ln(x)+1}+f(x\ln(x))\right)$$ Take the derivative once more and you get $$f(x\ln(x))=\frac{1}{x(1+\ln(x))^3}-f'(x\ln(x))$$ or $$F(x)=x^x\left(\frac{1}{\ln(x)+1}+\frac{1}{x(\ln(x)+1)^3}-f'(x\ln(x))\right)$$ We can then write: $$F(x)=x^x\left(\frac{1}{\ln(x)+1}+\sum_{n=1}^{\infty}f_n(x)\right)$$ where $$\displaystyle f_{n}=-\frac{f_{n-1}'}{1+\ln(x)},\quad f_0=\frac{1}{1+\ln(x)}$$ For $x$ from $0$ to $1$ it is probably more suitable to use $F(x)=x\,g(\ln(x))$; the derivation is similar to the one given above.
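The expansion can be sanity-checked numerically. Writing $L=1+\ln x$, the recursion $f_n=-f_{n-1}'/L$ gives $f_0=1/L$, $f_1=1/(xL^3)$ and $f_2=(1+3/L)/(x^2L^4)$ (I derived $f_1,f_2$ by hand from the recursion above); the derivative of the three-term partial sum should then match $x^x$ to high relative accuracy for moderately large $x$:

```python
from math import log

def F(x):
    """Partial sum x^x (f0 + f1 + f2) of the asymptotic antiderivative."""
    L = 1.0 + log(x)
    return x**x * (1.0 / L + 1.0 / (x * L**3) + (1.0 + 3.0 / L) / (x**2 * L**4))

def relative_error(x, h=1e-5):
    """|F'(x)/x^x - 1|, with F' approximated by a central difference."""
    derivative = (F(x + h) - F(x - h)) / (2.0 * h)
    return abs(derivative / x**x - 1.0)

# as expected of an asymptotic series, the error shrinks as x grows
err20, err50 = relative_error(20.0), relative_error(50.0)
```

Telescoping shows $F'(x)/x^x = 1 + f_2'(x)$ exactly for this partial sum, which is why the residual decays like the first omitted term.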
Yes, and no. There is a "categorically comprehensive" reason for this trace map to exist, but not necessarily a recipe for constructing it. And to prove that this reason is valid, one does require categorical machinery. The elevator-speech answer is: THH is an algebra in some symmetric monoidal category. K is the unit in this symmetric monoidal category, so THH receives a unique algebra map from K-theory. (If you are looking for a less theoretical reason, and want some explicit constructions, you might take a look at Kantorovitz-Miller, "An explicit description of the Dennis trace map.") Everything I write below, I learned from Blumberg-Gepner-Tabuada, "Uniqueness of the multiplicative cyclotomic trace." First, we note that both THH and K define functors $\infty Cat^{perf} \to Spectra$. On the right-hand side is the category of spectra. (Take any model you like, so long as it's not the homotopy category of the model. You can take Lurie's $\infty$-categorical model, or symmetric spectra if you like.) On the left-hand side is the category of perfect stable $\infty$-categories. Roughly, these are the categories that look like modules over some ring spectrum. A different way you might describe this category is as follows: the category of spectrally enriched categories, localized with respect to Morita equivalence. Note that both categories--$\infty Cat^{perf}$ and $Spectra$--have a symmetric monoidal structure. The latter has the usual smash product, while the former has the tensor product of stable $\infty$-categories. This is given by the cocompletion of the following naive tensor product: given two categories $A$ and $B$, the objects of $A \otimes^{naive} B$ are pairs of objects $(a,b)$, and the hom spectrum between $(a,b)$ and $(a',b')$ is given by $hom(a,a') \wedge hom(b,b')$. Moreover, we note that both THH and K satisfy the following properties: They are lax monoidal. (In fact, THH is symmetric monoidal.)
This means that we have specified natural maps $K(A) \otimes K(B) \to K(A \otimes B)$, but these need not be equivalences. They are localizing: if we have a short exact sequence of categories $A \to B \to C$, we have a cofibration sequence of spectra $K(A) \to K(B) \to K(C)$, and likewise for THH. The proof of this for THH can be found in Blumberg-Mandell ("Localization theorems in topological Hochschild homology and topological cyclic homology"). Now, consider the category of all functors $\infty Cat^{perf} \to Spectra$ satisfying (2). One can construct a symmetric monoidal structure on this category. And it turns out that any functor further satisfying (1) can be made into an $E_\infty$ algebra in this category, and that K-theory is in fact the unit! Since THH satisfies (1) and (2), the corresponding algebra for THH receives a unique algebra map from K-theory. When $A$ is an $E_\infty$ ring, then $THH(A)$ is an $E_\infty$ ring as well; by the algebra map from $K$ to $THH$, one obtains an $E_\infty$ ring map $K(A) \to THH(A)$. More generally, if $A$ is an $E_n$-algebra, then $K(A) = K(AMod)$ is an $E_{n-1}$ ring. This is because the category of $A$-modules can be given an $E_{n-1}$-structure, and $K$-theory is lax monoidal. Moreover, you can also prove that $THH(A)$ has an $E_{n-1}$ structure as well (you can see this also using factorization homology, for instance). The fact that there is an algebra map $K \to THH$ implies that one also obtains an $E_{n-1}$-algebra map $K(A) \to THH(A)$.
Analytical problem: what you are expecting is positive diffusion: you want the $T_i$ values to spread over your domain as time passes, to eventually reach $T_i(t\rightarrow \infty) = \text{const}$. If $\alpha$ were a negative number, you would have what is called negative diffusion: you would see exactly the opposite, i.e. the gradients would grow through time. The sign of $\alpha$ hence dictates the behaviour of your analytical solution.

The $\alpha < 0$ case: ideally, the numerical solution should have the same behaviour as the analytical solution. However, finite difference theory assumes the solution to be smooth: if the solution features gradients that are too sharp, then your numerical method will not be able to handle them. We have just said that in the case where $\alpha < 0$, the gradients grow with time. The error generated by the simulation will not be smeared out, as would be the case with positive diffusion $\alpha > 0$, but instead will be amplified. For that reason, if $\alpha < 0$, you know for sure your simulation is going to blow up at some point.

The $\alpha > 0$ case: if $\alpha > 0$, you are however not safe. If your time step is too large, your simulation will not be stable either. The stability condition $\Delta t < \frac{\Delta x ^2}{2 \alpha}$ indicates whether your numerical method has a chance of being stable or not. Note that it is a necessary condition for your numerical method to be stable, not a sufficient condition. Yet in practice, it turns out to be a very powerful tool. Also, the mesh Fourier number for a diffusive term can be defined as $\alpha \frac{\Delta t}{\Delta x^2}$.
In practice it is more convenient to write the stability condition in terms of the mesh Fourier number: $$\alpha \frac{\Delta t}{\Delta x^2} < \frac{1}{2}$$ This way you can see that the parameters of your simulation $\Delta t$, $\Delta x$ and $\alpha$ are all on the left-hand side, and $\frac{1}{2}$ is the critical value that must not be exceeded for the simulation to have a chance of being stable. In practice, the value of $\alpha$ is given by your problem and you will have chosen $\Delta x$ already. Hence, $\Delta t$ is the only parameter you can play with to satisfy the stability condition on diffusion. The value of the critical mesh Fourier number depends on the space and time discretisation you have chosen. Some time integrators have broader stability regions than others, hence they allow larger mesh Fourier numbers. Practically speaking, this means you'd be able to choose larger time steps while still having a stable numerical method. To summarise: if $\alpha < 0$, you will have negative diffusion and your simulation will not be stable in any case. If $\alpha > 0$, your simulation might be stable... or it might not! The stability condition on diffusion (and the mesh Fourier number) helps you choose the time step $\Delta t$ for your numerical method to be stable. I recommend you make a dummy simulation and play with the parameters to see what happens. No need to waste time programming something: a spreadsheet is enough for your particular case. Edit: partial rewrite of my answer to make it clearer.
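The effect of the mesh Fourier number is easy to reproduce in a few lines. A toy 1-D simulation of my own design: fixed ends, and an alternating initial profile that excites the worst-resolved mode:

```python
def ftcs_max_amplitude(fourier, n=50, steps=100):
    """Run explicit FTCS on u_t = alpha u_xx with mesh Fourier number
    r = alpha*dt/dx^2 folded into `fourier`; return max|u| at the end."""
    u = [(-1.0)**i for i in range(n)]   # sawtooth: the most unstable mode
    u[0] = u[-1] = 0.0                  # Dirichlet boundaries
    for _ in range(steps):
        new = u[:]
        for i in range(1, n - 1):
            new[i] = u[i] + fourier * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        u = new
    return max(abs(v) for v in u)

stable = ftcs_max_amplitude(0.4)    # r < 1/2: the profile decays
unstable = ftcs_max_amplitude(0.6)  # r > 1/2: the profile blows up
```

For $r \le 1/2$ each update is a convex combination of neighbouring values, so the maximum can never grow; for $r > 1/2$ the sawtooth mode is amplified by roughly $|1-4r|$ per step and the amplitude explodes.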
Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence) / CERN RD50
The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the Large Hadron Collider (LHC) at CERN. Some of the most recent RD50 results on silicon detectors are reported in this paper, with special reference to: (i) the progress in the characterization of lattice defects responsible for carrier trapping; (ii) the charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) the charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurements; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process.
2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52. In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp. 48-52

Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.) / CERN RD50
Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...]
2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 509 (2003) 127-131. In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp. 127-131

Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna)
Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Geneva). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{-3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors, and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...]
2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221. In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25 - 29 Jul 2004, pp. 218-221

Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.)
Silicon pad detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...]
2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 3055-3062. In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp. 3055-3062

Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa)
Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders, because these new devices present superior radiation hardness compared to present-day silicon detectors. [...]
2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772. In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp. 1766-1772

Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II)
The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e}/\rm{cm}^2$. The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices, have been investigated.
2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116. In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp. 110-116

Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.)
Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...]
2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345. In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp. 343-345

First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.)
Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...]
2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342. In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp. 340-342

Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$-p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE)
New findings on the formation and annealing of the interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n$^+$-p structures irradiated with electrons and alpha particles have been used for DLTS and MCTS studies. [...]
2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues? Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson... Hmm, it seems we cannot just superimpose gravitational waves to create standing waves. The above search is inspired by last night's dream, which took place in an alternate version of my 3rd-year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talks about gravitational waves emitted from a body. After that lecture, I asked the lecturer whether gravitational standing waves are possible, as I imagine the hypothetical scenario of placing a node at the end of the vertical white line. [The Cube] Regarding The Cube, I am thinking about an energy level diagram like this, where the infinitely degenerate level is the lowest energy level when the environment is also taken into account. The idea is that if the possible relaxations between energy levels are restricted so that, to relax from an excited state, the bottleneck must be passed, then we have a very high entropy, high energy system confined in a compact volume. Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings. @Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too.
The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer). Hi @EmilioPisanty, it's great that you want to help me clear up confusions. I think we have a misunderstanding here. When you said "if you really want to "understand"", I thought you were referring to my questions directed at the close voter, not the question on meta. When you mention my original post, do you think it's a hopeless mess of confusion? Why? Apart from being off-topic, it seems clear enough to understand, doesn't it? Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full, which is affected by a visual glitch on both the desktop and mobile versions of Safari under the latest OS: \vec{x} results in the arrow being displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks. I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh... @0ßelö7 I don't really care for the functional-analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% sure that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P) Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be powers that are multiples of 3? Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\... @ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there. @CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
Interesting recursive functions - let R={i∣∃j:f(j)=i} be the set of distinct values that... Can someone explain clearly what is meant by "R={i∣∃j:f(j)=i} is the set of distinct values that f takes"? Why is it written "takes"? R contains the values that f can "give", right? @MINIPanda it is a recursive function:

    f(x) {
      if (x == 1) return x;          // map to "x" (base condition)
      else if (x == 5) return y;     // map to "y" (base condition)
      else if (x mod 2 == 0) f(x/2);
      else f(x+5);
    }

The function assigns a value only when it terminates. As you can see, the base value can be either 1 or 5, so R = {x, y}. PS: recursive calls don't provide the mapping; a value is mapped when the function terminates and returns something. Even "R={i∣∃j:f(j)=i}" stops being ambiguous once you see that everything boils down to f(1) and f(5): R contains all values i such that there exists at least one value j in the domain which maps to i. @Divy Kala I have written the code above for the same. Please check it and let me know if you still have doubts. The answer is 2. It's saying we have 2 domains: N+ → N+ - can anyone explain the meaning of this, please? @Deepesh Kataria, where is f(9)? R={i∣∃j:f(j)=i} - here the set definition means: suppose j=1, then f(1)=6, this 6 is i; f(2)=1, this 1 is i; f(3)=8, this 8 is i; f(4)=2, this 2 is i. This way we get many values of i, which form the set N+ only, {1, 2, 3, 4, ...}. Then why are you doing f(1)=f(6)=f(3)=f(8)...? The set-builder form is not asking this. @akriti, see the definition of the function again; it's in recursive form: f(n) = f(n/2) (see the recursion) and f(1) = f(6), so in order to calculate f(1) we need to calculate f(6); this gives f(3); to find f(3) we need f(8); f(8) gives f(4), then f(4) gives f(2), and then f(2) -> f(1), so in this way it repeats itself. See the function again; the catchy thing is the recursion part.
http://math.stackexchange.com/questions/2118739/finding-recursive-function-range/2118749 We will use a strong induction hypothesis to prove this. Suppose that $f(1) = a$ and $f(5) = b$. It is clear that $$f(5n) = b$$ for all $n$. We'll prove by induction that for all $n \ne 5k$, $f(n) = a$. First note that $$f(2) = f(\tfrac{2}{2}) = f(1) = a,$$ $$f(3) = f(3+5) = f(8) = f(4) = f(2) = a,$$ $$f(4) = f(2) = a.$$ Now note that if $n = 5k + r$ with $0 \lt r \lt 5$, then $n$ is not divisible by $5$ because $r \neq 0$. Note also that if $n$ is not divisible by $5$, then $n-5$ is not divisible by $5$ either, because $n-5 = 5(k-1) + r$ with, again, $r \neq 0$. And note that (for even $n$) $\frac{n}{2}$ is not divisible by $5$, because if it were, $n$ would be divisible by $5$. Base case: $f(1)=f(2)=f(3)=f(4)=a$ [already solved above]. Inductive step: Now suppose $n = 5k + r$, where $0 \lt r \lt 5$, and for all $m\lt n$ which are not divisible by $5$, $f(m) = a$ ($m$ already covers $n-5$ and $\frac{n}{2}$). If $n$ is odd, $f(n) = f(n-5)$, and by the induction hypothesis $f(n-5) = a$, so we get $$f(n) = a.$$ If $n$ is even, $f(n) = f(n/2)$, and by the induction hypothesis $f(n/2) = a$, so we get $$f(n) = a.$$ Best solution, by mathematical induction. Thanks :-) @Sourav Basu Yes. $\because f(n)=f(n+5)$; putting $n-5$ in place of $n$ [where $n>5$] yields $f(n-5)=f(n-5+5)=f(n)$. Let $f(1) = x$. Then $f(2) = f(2/2) = f(1) = x$, $f(3) = f(3+5) = f(8) = f(4) = f(2) = f(1) = x$, and $f(5) = f(5+5) = f(10) = f(10/2) = f(5) = y$. All of $N^+$ except multiples of 5 are mapped to $x$, and multiples of 5 are mapped to $y$, so the answer is 2. Thanks @Prince Sindhiya, the pictorial mapping clears everything up. Choose any number, say n=17; then f(17)=f(22)=f(11)=f(16)=f(8)=f(4)=f(2)=f(1)=f(6)=f(3)=f(8)=f(4)=f(2)=f(1)=f(6)=f(3)=f(8)=f(4)=f(2)=f(1)...
<this is one part> Now let n=50: f(50)=f(25)=f(30)=f(15)=f(20)=f(10)=f(5)=f(10)=f(15)=f(20)=f(10)=f(5)=f(10)=f(5)=f(10)=f(5)... <this is the other part> So we can take any number and it will fall into one of these two cycles; these are the two types of values that the function f( ) can take. For combinatorics , can add balls and bin...
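The whole discussion can be checked mechanically. Here is a small sketch (the function and names are mine, not from the thread) that iterates the recurrence f(n) = f(n/2) for even n and f(n) = f(n+5) for odd n, with f(1) and f(5) as the base values, and confirms that the range has exactly two elements:

```python
def f(n, x="x", y="y"):
    """Follow the recurrence until it hits a base case.

    f(1) = x and f(5) = y are the base values; every other n is rewritten
    via f(n) = f(n/2) (n even) or f(n) = f(n+5) (n odd). For odd n > 5 the
    combined step n -> (n+5)/2 decreases n, so the loop terminates.
    """
    while n not in (1, 5):
        n = n // 2 if n % 2 == 0 else n + 5
    return x if n == 1 else y

values = {f(n) for n in range(1, 1001)}
print(len(values))  # 2: every n reaches either f(1) or f(5)

# Multiples of 5 end in the f(5) cycle; everything else reaches f(1).
assert all(f(n) == "y" for n in range(5, 1001, 5))
assert all(f(n) == "x" for n in range(1, 1001) if n % 5 != 0)
```

This agrees with the induction proof above: the range is {x, y}, so the answer is 2.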
Learning Outcomes Solve equations that include square roots. Square roots occur frequently in a statistics course, especially when dealing with standard deviations and sample sizes. In this section we will learn how to solve for a variable when that variable lies under the square root sign. The key thing to remember is that the square of a square root is what lies inside. In other words, squaring a square root cancels the square root. Example \(\PageIndex{1}\) Solve the following equation for \(x\). \[2+\sqrt{x-3}\:=\:6 \nonumber \] Solution What makes this a challenge is the square root. The strategy for solving is to isolate the square root on the left side of the equation and then square both sides. First subtract 2 from both sides: \[\sqrt{x-3}=4 \nonumber \] Now that the square root is isolated, we can square both sides of the equation: \[\left(\sqrt{x-3}\right)^2=4^2 \nonumber \] Since the square and the square root cancel we get: \[x-3=16 \nonumber \] Finally add 3 to both sides to arrive at: \[x=19 \nonumber \] It's always a good idea to check your work. We do this by plugging the answer back in and seeing if it works. We plug in \(x=19\) to get \[ \begin{align*}2+\sqrt{19-3} &=2+\sqrt{16} \\[4pt] &=2+4 \\[4pt] &= 6 \end{align*}\] Yes, the solution is correct. Example \(\PageIndex{2}\) The standard deviation, \(\sigma_\hat p\), of the sampling distribution for a proportion follows the formula: \[\sigma_\hat p=\sqrt{\frac{p\left(1-p\right)}{n}} \nonumber \] Where \(p\) is the population proportion and \(n\) is the sample size. If the population proportion is 0.24 and you need the standard deviation of the sampling distribution to be 0.03, how large a sample do you need? Solution We are given that \(p=0.24\) and \(\sigma_{\hat p } = 0.03 \) Plug in to get: \[0.03=\sqrt{\frac{0.24\left(1-0.24\right)}{n}} \nonumber \] We want to solve for \(n\), so we want \(n\) on the left hand side of the equation. 
Just switch to get: \[\sqrt{\frac{0.24\left(1-0.24\right)}{n}}\:=\:0.03 \nonumber \] Next, we subtract: \[1-0.24\:=\:0.76 \nonumber \] And then multiply: \[0.24\left(0.76\right)=0.1824 \nonumber \] This gives us \[\sqrt{\frac{0.1824}{n}}\:=\:0.03 \nonumber \] To get rid of the square root, square both sides: \[\left(\sqrt{\frac{0.1824}{n}}\right)^2\:=\:0.03^2 \nonumber \] The square cancels the square root, and squaring the right hand side gives: \[\frac{0.1824}{n}\:=\:0.0009 \nonumber \] We can write: \[\frac{0.1824}{n}\:=\frac{\:0.0009}{1} \nonumber \] Cross multiply to get: \[0.0009\:n\:=\:0.1824 \nonumber \] Finally, divide both sides by 0.0009: \[n\:=\frac{\:0.1824}{0.0009}=202.66667 \nonumber \] Round up, and we can conclude that we need a sample size of 203 to get a standard deviation that is at most 0.03. We can check that this is reasonable by plugging \(n = 203\) back into the equation. We use a calculator to get: \[\sqrt{\frac{0.24\left(1-0.24\right)}{203}}\:=\:0.029975 \nonumber \] Since this is very close to 0.03, the answer is reasonable. Exercise The standard deviation, \(\sigma_\bar x\), of the sampling distribution for a mean follows the formula: \[\sigma_\bar x=\frac{\sigma}{\sqrt{n}} \nonumber \] Where \(\sigma \) is the population standard deviation and \(n\) is the sample size. If the population standard deviation is 3.8 and you need the standard deviation of the sampling distribution to be 0.5, how large a sample do you need?
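The algebra in the worked example can also be done directly in code: solving \(\sigma_{\hat p} = \sqrt{p(1-p)/n}\) for \(n\) gives \(n = p(1-p)/\sigma_{\hat p}^2\), and rounding up yields the required sample size. A short sketch (the helper name is my own):

```python
import math

def required_sample_size(p, sigma_target):
    """Smallest n such that sqrt(p*(1-p)/n) <= sigma_target."""
    n_exact = p * (1 - p) / sigma_target**2   # n = p(1-p)/sigma^2
    return math.ceil(n_exact)                 # round up: a larger n only shrinks sigma

n = required_sample_size(0.24, 0.03)
print(n)  # 203, matching the worked example

# Check: the resulting standard deviation is at or just below the target.
sigma = math.sqrt(0.24 * (1 - 0.24) / n)
print(round(sigma, 6))  # 0.029975
```

Rounding up rather than to the nearest integer matters: with n = 202 the standard deviation would be slightly above the 0.03 target.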
Is there a standard example of two abelian varieties $A$, $B$ over some number field $k$ which are $k_v$-isomorphic for every place $v$ of $k$ but not $k$-isomorphic ? (If you upvote this answer, please consider upvoting the answers by Felipe Voloch and David Speyer too, since this answer builds on their ideas.) The smallest examples are in dimension $2$. Let $E$ be any elliptic curve over $\mathbf{Q}$ without complex multiplication, e.g., $X_0(11)$. We will construct two twists of $E^2$ that are isomorphic over $\mathbf{Q}_p$ for all $p \le \infty$ but not isomorphic over $\mathbf{Q}$. Let $K:=\mathbf{Q}(\sqrt{-1},\sqrt{17})$. Let $G:=\operatorname{Gal}(K/\mathbf{Q}) = (\mathbf{Z}/2\mathbf{Z})^2$. Let $\alpha \colon G \to \operatorname{GL}_2(\mathbf{Z}) = \operatorname{Aut}(E^2)$ be a homomorphism sending the two generators to the reflections in the coordinate axes of $\mathbf{Z}^2$, and let $A$ be the $K/\mathbf{Q}$-twist of $E^2$ given by $\alpha$. Define $\beta$ and $B$ similarly, but with the lines $y=x$ and $y=-x$ in place of the coordinate axes. The representations $\alpha$ and $\beta$ of $G$ on $\mathbf{Z}^2$ are not conjugate: only the former is such that the lattice vectors fixed by nontrivial elements of $G$ generate all of $\mathbf{Z}^2$. Thus $A$ and $B$ are not isomorphic over $\mathbf{Q}$. On the other hand, every decomposition group $D_p$ in $G$ is smaller than $G$ since $-1$ is a square in $\mathbf{Q}_{17}$ and $17$ is a square in $\mathbf{Q}_2$. Also, the restrictions of $\alpha$ and $\beta$ to any proper subgroup of $G$ are conjugate: any single line spanned by a primitive vector in $\mathbf{Z}^2$ can be mapped to any other by an element of $\operatorname{GL}_2(\mathbf{Z})$. Thus $A$ and $B$ become isomorphic after base extension to $\mathbf{Q}_p$ for any $p \le \infty$. 
$\square$ Remark: The abelian surfaces $A$ and $B$ constructed above are isogenous even over $\mathbf{Q}$, because the $\mathbf{Z}^2$ with one Galois action can be embedded into the $\mathbf{Z}^2$ with the other Galois action: rotate $45^\circ$ and dilate. Remark: The nonexistence of examples in dimension $1$ follows from these two well-known facts: 1) Twists of an elliptic curve over a field $k$ of characteristic $0$ are classified by $H^1(k,\mu_n)=k^\times/k^{\times n}$ where $n$ is 2, 4, or 6. 2) If $n<8$, the map $k^\times/k^{\times n} \to \prod_v k_v^\times/k_v^{\times n}$ is injective. [ Edit: This answer was edited to simplify the construction and to add those remarks at the end.] Here's a slight variant of Felipe Voloch's answer, for those who don't have a favorite group cohomology class. Let $C$ be an abelian variety over $\mathbb{Q}$. Suppose that all the $\overline{\mathbb{Q}}$ automorphisms of $C$ are defined over $\mathbb{Q}$ and let $P$ be this automorphism group. Take two classes in $H^1(\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}), P)$ which are distinct, but become equal in $H^1(\mathrm{Gal}(\overline{\mathbb{Q}_v}/\mathbb{Q}_v), P)$ for every $v$. The corresponding twists of $C$ should give you the examples you want. How have I made things easier? Because I made the action of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ on $P$ trivial, I can describe the group cohmology explicitly as $$H^1(\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}), P) \cong \mathrm{Hom}(\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}), P)/P.$$ Here $P$ acts by conjugation on the target. Since $P$ is finite, any of these Hom's factor through $\mathrm{Gal}(K/\mathbb{Q})$ for some finite extension $K/\mathbb{Q}$. 
So we are now reduced to the following: We must find finite groups $G$ and $P$, an extension $K/\mathbb{Q}$ with Galois group $G$, an abelian variety with automorphism group $P$ and two maps $\alpha$, $\beta: G \to P$ such that $\alpha$ and $\beta$ are not conjugate to each other by any element of $P$ but, when we restrict to any decomposition subgroup, $\alpha$ and $\beta$ become conjugate. Take $G=(\mathbb{Z}/2)^2$ and $P=S_6 \times (\mathbb{Z}/2)$. We will not use the $(\mathbb{Z}/2)$ factor at all in the following; the reason it is there is that the automorphism group of an abelian variety always contains a central involution, namely $-1$. Feel free to think of $P$ as $S_6$. Take $K/\mathbb{Q}$ to be any biquadratic extension in which no prime is completely ramified. This condition ensures that no decomposition group is the whole of $G$. Let $\alpha$ send the generators of $G$ to the elements $(12)(56)$ and $(34)(56)$ of $S_6$. Let $\beta$ send the generators of $G$ to $(12)(34)$ and $(13)(24)$. Then $\alpha$ and $\beta$ are not conjugate in $S_6$, but they become conjugate when restricted to any of the three cyclic subgroups. The one missing step is to construct an abelian variety with automorphism group $S_6 \times (\mathbb{Z}/2)$, and all automorphisms defined over $\mathbb{Q}$. Dror Spieser, in the comments, points out that we can just take the restriction of scalars of an elliptic curve (without CM) defined over an $S_6$ extension of $\mathbb{Q}$. I still don't have a good construction of this but, thanks to Bjorn's answer, I don't need one. If $A,B$ are as stated, then $B$ must be a twist of $A$ which is everywhere locally trivial, so $B$ gives a class in $H^1(k,G)$ (where $G$ is the automorphism group of $A$), which is everywhere locally trivial. So, pick a group $G$ that you know has an everywhere locally trivial but globally non-trivial class in $H^1(k,G)$ and make it act on an abelian variety.
For instance you can make the group act on a curve and therefore on its Jacobian. As for your actual question, if there is a "standard" such example, I guess the answer is no. Selmer's curve $3x^3+4y^3+5z^3=0$ is a non-example (see the comment below) but somewhat relevant. See Theorem 1 in Mazur's article titled "On the passage from local to global in number theory".
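The non-conjugacy criterion in the first answer (compare the sublattices of $\mathbf{Z}^2$ generated by vectors fixed by nontrivial group elements) can be checked mechanically. Below is a sketch with helper names of my own; the index of that sublattice is invariant under $\operatorname{GL}_2(\mathbf{Z})$-conjugation, so differing indices prove $\alpha$ and $\beta$ are not conjugate:

```python
from itertools import combinations, product
from math import gcd
import numpy as np

# Generators of the two G = (Z/2)^2 actions on Z^2: alpha reflects in the
# coordinate axes, beta reflects in the lines y = x and y = -x.
alpha = [np.diag([1, -1]), np.diag([-1, 1])]
beta = [np.array([[0, 1], [1, 0]]), np.array([[0, -1], [-1, 0]])]

def fixed_vectors(M, bound=2):
    """Nonzero integer vectors with small entries fixed by M (brute force)."""
    return [np.array([i, j])
            for i, j in product(range(-bound, bound + 1), repeat=2)
            if (i, j) != (0, 0) and np.array_equal(M @ np.array([i, j]),
                                                   np.array([i, j]))]

def fixed_lattice_index(gens):
    """Index in Z^2 of the sublattice spanned by all vectors fixed by some
    nontrivial element (the generators and their product). For a rank-2
    sublattice this index equals the gcd of all 2x2 minors."""
    elements = [gens[0], gens[1], gens[0] @ gens[1]]
    vs = [v for M in elements for v in fixed_vectors(M)]
    g = 0
    for u, w in combinations(vs, 2):
        g = gcd(g, abs(round(np.linalg.det(np.array([u, w])))))
    return g

print(fixed_lattice_index(alpha))  # 1: fixed vectors generate all of Z^2
print(fixed_lattice_index(beta))   # 2: they only span an index-2 sublattice
```

So the two representations are distinguished exactly as claimed, while each single reflection's fixed line can be moved onto any other primitive line, which is why the restrictions to proper subgroups are conjugate.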
Let $f\colon X\to Y$ be a flat morphism of irreducible projective algebraic varieties over $\mathbb{C}$ (or any other algebraically closed field of characteristic 0). Assume that $Y$ is smooth, and the generic fiber of $f$ is smooth (these two assumptions seem to be important). Let $U\subset Y$ be a Zariski open non-empty subset such that $f\colon f^{-1}(U)\to U$ is smooth. Fix a closed point $y\in Y$. Let us denote by $S$ the "disk"; more precisely, $S$ is the spectrum of the henselization of $\mathbb{C}[t]$ localized at the ideal generated by $t$. Let $\eta$ be the generic point of $S$. For any morphism $\nu\colon S\to Y$ such that $\eta$ is mapped to $U$ and the closed point of $S$ to $y$, consider the fibered product $S\times_Y X\to S$. Notice that the generic fiber of this morphism is smooth over $\eta$. Consider the nearby cycle functor applied to the constant sheaf $\underline{\mathbb{Q}_l}$. It is a perverse sheaf on $f^{-1}(y)$ which we will denote by $\mathcal{F}_\nu$ to emphasize the dependence on the morphism $\nu$. QUESTION. Is it true that for all choices of $\nu$ as above the perverse sheaves $\mathcal{F}_\nu$ are isomorphic to each other?
Is Bekenstein entropy limit inconsistent with universal continuity? Yes, but that doesn't mean the Bekenstein bound is correct and everything is fine. Entropy can be considered as "sameness", related to available energy. And the expression $S \leq \frac{2 \pi k R E}{\hbar c}$ has a c in it, but the coordinate speed of light at the event horizon is zero. It is unknown whether the universe is discrete or continuous in its intricate quantum level structure. A photon has its E=hf quantum nature, but that doesn't mean it approaches you in steps. It's quantum field theory, and the wave nature of matter. IMHO there's no evidence at all for a universe that has some "discrete intricate quantum level structure". Even so, all branches of modern physics rely heavily on fully continuous structures. I'd say the universe appears to be continuous. From the Bekenstein bound applied to a black hole, we know that the information entropy that can be contained inside a black hole is finite and proportional to the surface area of the event horizon. We don't actually know that, it's hypothetical. From the No hair theorem/conjecture Which is also hypothetical. it is believed that the black hole is uniquely described by mass/energy, linear and angular momentum, position, and electric charge, which amounts to a total of 11 real numbers. Possibly, if magnetic monopoles exist, we can add an additional number for magnetic charge. There's potential issues with the angular momentum and charge and magnetic monopoles. But I'll go with the flow. Most physicists will argue that these 11 numbers are continuous (i.e. not bounded rational approximations). Note that a particle such as an electron has unit charge. But again, I'll go with the flow. With an assumption of real continuity, as the black hole undergoes change, for example taking on additional mass over a period of time, the 11 numbers will change as time flows over a continuous infinitude of real numbers, with no smallest increment of time. 
Fair enough. The 11 numbers must then each assume values that are rational, irrational, transcendental, non-computable and non-definable, as they continuously sweep through the real number field. OK. In fact, if any number is sampled at random, i.e. at a random time, it will almost surely (i.e. with probability one) be non-computable and non-definable. A non-computable and non-definable number has infinite Kolmogorov complexity and carries infinite entropy, as its shortest description is its own random and infinite digit sequence. How is that consistent with the starting assumption of bounded entropy? The number does not actually exist. A black hole exists, and a photon exists. This photon can fall into the black hole increasing its mass/energy by E=hf, and E can take any value. But we can't say that a black hole consists of n photons or is anything to do with statistical mechanics or information theory. Note the black hole information paradox: "The black hole information paradox[1] is an observational phenomenon that results from the combination of quantum mechanics and general relativity which suggests that physical information could permanently disappear in a black hole, allowing many physical states to devolve into the same state". For all you know the photon totally loses its identity, and the black hole is like one big boson, where everything is the same. For all you know the black hole could be something like a BEC, and subject to a Bosenova. Sorry this doesn't give you anything definite, but so much of this stuff is hypothetical.
The following equation describes the motion of a rigid body rotation, such as a gyroscope: $$ \frac{d\textbf{L}}{dt} ={\bf{\tau}}= \textbf{r}\times m\textbf{g}= {\omega}\times \textbf{L}$$ where $L$ is the angular momentum, $\tau$ is the torque, $r$ is the position vector, $g$ is gravity, and $\omega$ is the angular velocity. In thread 1, it is shown that the relation $$ \frac{dL}{dt} = \omega\times L $$ can be derived from the following action: $$ S[\omega, {\bf p}, {\bf r}]= \int \left(\frac 12 I_1\omega_1^2+\frac 12 I_2\omega_2^2+\frac 12 I_3\omega_3^2+ {\bf p}\cdot (\dot {\bf r}+ \omega \times {\bf r})\right)dt $$ where ${\bf p}$ here is a Lagrange multiplier for the Lin constraint $\dot {\bf r}+ \omega \times {\bf r} =0 $. My questions are: 1) Why can't this equation be derived from a simple (kinetic minus potential) energy Lagrangian? Why do we need an additional constraint? I tried reading some papers about Lin constraints in fluid mechanics, in which they are required to derive the Navier-Stokes equations in the Eulerian specification. But what is the intuition behind them here? 2) If the constraint is imposed in order to maintain a rotating reference frame, I would guess that (as mentioned in 2) we need an additional potential energy term because of the effect of gravity, but this does not seem to be represented in the Lagrangian. What is the reason for this? (An equivalent question is how to account for the torque $r\times mg$ in the Lagrangian.) I will be glad to receive some intuition about Lin constraints in this particular case (any further details, or good references, will be gratefully appreciated). Thanks
The task is to find the period of small oscillations in the potential $$U=U_0\tan^2{\Big(\frac{x^2}{a^2}\Big)}.$$ I started with finding the stable equilibrium points: $\frac{dU}{dx}=0$, i.e. $2U_{0}\tan{\Big(\frac{x^2}{a^2}\Big)}\frac{1}{\cos^{2}{\Big(\frac{x^2}{a^2}\Big)}}\frac{2x}{a^{2}}=0.$ The solutions of this equation are ($k$ is a non-negative integer) $x_{k}=\pm a\sqrt{\pi k}.$ If I expand the potential near an extremum point, I get a quadratic form with effective stiffness $k_{eff}=\frac{d^{2}U}{dx^{2}} (x_{k})=\frac{8U_{0}\pi k}{a^{2}}.$ Hence, the period of oscillations can be found like this: $T = \frac{2 \pi}{\omega_{0}}=2\pi \sqrt{\frac{m}{k_{eff}}}= a\sqrt{\frac{m \pi}{2U_{0} k}}.$ I don't see any mistakes in my reasoning. However, my solution does not work for the case $k=0$. When $x=0$, the potential function, as well as its first and second derivatives, is zero, which is confusing. Could someone explain what to do in such a situation?
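At $x=0$ the Taylor expansion has no quadratic term: the leading behaviour is quartic, $U\approx U_0\,x^4/a^4$, so the motion is anharmonic and the period depends on the amplitude (for $U\propto x^4$, dimensional analysis gives $T\propto 1/A$). A quick numerical sketch of this (with the illustrative normalization $m=1$, $U_0/a^4=1$, so $U=x^4$), integrating the equation of motion with a symplectic step and measuring the period:

```python
import numpy as np

def period(A, dt=1e-4):
    """Period of x'' = -dU/dx with U = x^4 and m = 1, released from rest at x = A.
    Measured as 4x the time to first reach x = 0 (quarter period, by symmetry)."""
    def acc(x):            # -dU/dx for U = x^4
        return -4.0 * x**3
    x, v, t = A, 0.0, 0.0
    while x > 0:
        # velocity-Verlet step (symplectic, good long-term energy behaviour)
        v_half = v + 0.5 * dt * acc(x)
        x = x + dt * v_half
        v = v_half + 0.5 * dt * acc(x)
        t += dt
    return 4.0 * t

T1, T2 = period(1.0), period(2.0)
print(T1 / T2)   # ~2: for a quartic well the period scales as 1/amplitude
```

So for $k=0$ there is no single "period of small oscillations": the harmonic approximation fails and $T$ diverges as the amplitude goes to zero.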
With which notation do you feel uncomfortable? closed as not constructive by Loop Space, Chris Schommer-Pries, Qiaochu Yuan, Scott Morrison♦ Mar 19 '10 at 6:10 There is a famous anecdote about Barry Mazur coming up with the worst notation possible at a seminar talk in order to annoy Serge Lang. Mazur defined $\Xi$ to be a complex number and considered the quotient of the conjugate of $\Xi$ and $\Xi$: $$\frac{\overline{\Xi}}{\Xi}.$$ This looks even better on a blackboard, since $\Xi$ is drawn as three horizontal lines. My favorite example of bad notation is using $\textrm{sin}^2(x)$ for $(\textrm{sin}(x))^2$ and $\textrm{sin}^{-1}(x)$ for $\textrm{arcsin}(x)$, since this is basically the same notation used for two different things ($\textrm{sin}^2(x)$ should mean $\textrm{sin}(\textrm{sin}(x))$ if $\textrm{sin}^{-1}(x)$ means $\textrm{arcsin}(x)$). It might not be horrible, since it rarely leads to confusion, but it is inconsistent notation, which should be avoided in general. I personally hate the notation $x \mid y$ for "$x$ divides $y$". Of course, I'm used to reading it by now, but a general principle I follow and recommend is: never use a symmetric symbol to denote an asymmetric relation! I never liked the notation ${\mathbb Z}_p$ for the ring of residue classes modulo $p$. At one point, it confused the hell out of me, and this confusion is easily avoided by writing $C_p$, $C(p)$ or ${\mathbb Z}/p$. Mathematicians are really quite bad when it comes to notation. They should learn from programming languages people.
Bad notation actually makes it difficult for students to understand the concepts. Here are some really bad ones: Using $f(x)$ to denote both the value of $f$ at $x$ and the function $f$ itself. Because of this, students in programming classes cannot tell the difference between $f$ (the function) and $f(x)$ (the function applied to an argument). When I was a student nobody ever managed to explain to me why $dy/dx$ made sense. What is $dy$ and what is $dx$? They're not numbers, yet we divide them (I am just giving a student's perspective). In Lagrangian mechanics and the calculus of variations people take the partial derivative of the Lagrangian $L$ with respect to $\dot q$, where $\dot q$ itself is the derivative of the coordinate $q$ with respect to time. That's crazy. The summation convention, e.g., that ${\Gamma^{ij}}_j$ actually means $\sum_j {\Gamma^{ij}}_j$, is useful but very hard to get used to. In category theory I wish people sometimes used any notation at all, as opposed to nameless arrows which are introduced in accompanying text as "the evident arrow". Physicists will hate me for this, but I never liked Einstein's summation convention, nor the famous bra ($\langle\phi|$) and ket ($|\psi\rangle$) notation. Both notations make easy things look unnecessarily complicated, and especially the bra-ket notation is no fun to use in LaTeX. My candidate would be the (internal) direct sum of subspaces $U \oplus V$ in linear algebra. As an operator it is equivalent to the sum, but with the side effect of implying that $U \cap V = \lbrace 0\rbrace$. Whenever I had a chance to teach linear algebra I found this terribly confusing for students. I think composition of arrows $f:X\to Y$ and $g:Y\to Z$ should be written $fg$, not $gf$.
First of all, it would make the notation for composition $\hom(X,Y)\times\hom(Y,Z)\to \hom(X,Z)$ much more natural: $\hom(E,X)$ should be a left $\hom(E,E)$-module because $E$ is on the left :) Secondly, diagrams are written from left to right (even stronger: almost anything in the western world is written left to right). And I think the strange $(-1)$ needed when shifting complexes is an effect of this twisted notation. The notation $]a,b[$ for open intervals and its ilk. Sorry, Bourbaki. Writing a finite field of size $q$ as $\mathrm{GF}(q)$ instead of as $\mathbf{F}_q$ always rubbed me the wrong way. I know where it comes from (Galois Field), and I think it is still widely used in computer science, and maybe in some allied areas of discrete math, but I still dislike it. As Trevor Wooley used to always say in class, ``Vinogradov's notation sucks... the constants away." For those who don't know, Vinogradov's notation in this context is $f(x)\ll g(x)$, meaning $f(x) = O(g(x))$ (if you prefer big-O notation, that is). I rather dislike the notation $$\int_{\Omega}f(x)\,\mu(dx)$$ myself. I realize that just as the integral sign is a generalized summation sign, the $dx$ in $\mu(dx)$ would stand for some small measurable set of which you take the measure, but it still rubs me the wrong way. Is it only because I was brought up with the $\int\cdots\,d\mu(x)$ notation? The latter nicely generalizes the notation for the Stieltjes integral at least. I get very frustrated when an author or speaker writes "Let $X\colon= A\sqcup B$..." to mean: $A$ and $B$ are disjoint sets (in whatever the appropriate universe is), and let $X\colon= A\cup B$. If they just meant "form the disjoint union of $A$ and $B$" this would be fine. But I've seen speakers later use the fact that $A$ and $B$ are disjoint, which was never stated anywhere except as above. You should never hide an assumption implicitly in your notation. The use of square brackets $\left[...\right]$ for anything.
It's not bad per se, but unfortunately it is used both as a substitute for $\left(...\right)$ and as a notation for the floor function. And there are cases when it takes a while to figure out which of these is meant - I'm not making this up. The word "character" meaning: a 1-dimensional representation, a representation, a trace form of a representation, a formal linear combination of representations, a formal linear combination of trace forms of representations. The word "adjoint", and the corresponding notation $A\mapsto A^{\ast}$, having two completely unrelated meanings. The term "symplectic group" used to mean the group $U(n,{\mathbb H})$. It's as if people called $U(n)$ and $GL(n,{\mathbb R})$ by some single name. My personal pet peeve of notation HAS to be algebraists writing functions on the right à la Herstein's "Topics In Algebra". I don't know why they do it when everyone else doesn't. I think one of them got up one day and decided they wanted to be cooler than everyone else, seriously... I don't like (but maybe for a bad reason) the notation $F\vdash G$ for "$F$ is left adjoint to $G$". Any comments? A cute idea, but one for which I have yet to find supporters, is D. G. Northcott's notation (used at least in [Northcott, D. G. A first course of homological algebra. Cambridge University Press, London, 1973. xi+206 pp. MR0323867]) for maps in a commutative diagram, which consists in enumerating the names of the objects placed at the vertices along the way of the composition. Thus, if there is only one map in sight from $M$ to $N$, he writes it simply $MN$, so he has formulas looking like $$A'A(ABB'') = A'ABB'' = A'B'BB'' = 0.$$ He also writes maps on the right, so his $$xMN=0$$ means that the image of $x$ under the map from $M$ to $N$ is zero. I would not say this is among the worst notations ever, though. Students have big difficulties when first confronted with the $o(\cdot)$ and $O(\cdot)$ notation.
The term $o(x^3)$, e.g., does not denote a certain function evaluated at $x^3$, but a function of $x$, defined by the context, that converges to zero when divided by $x^3$. I have struggled with '$dx$'. I've spent years trying to study every different approach to calculus that I could find to try and make sense of it. I read about the limit definitions in my first book; vector calculus, with $dx$ as pullbacks of linear transformations or flows/flux; differential forms from the bridge project; $k$-forms; nonstandard analysis, which enlarges $\mathbb{R}$ to give you infinitesimals (and unbounded numbers) but the same first-order properties, and lets the integral be defined as a sum; constructive analysis, using a monad to take the closure of the rationals to give the reals... but I am still just as confused as ever. I understand that the mathematical notation doesn't have a compositional semantics, but I still don't really get it. One of the problems is that, despite not really understanding it or having any abstract definition of it, I can still get correct answers, and I really hope this doesn't become a theme as I study more topics in mathematics. $p < q$ as in "the forcing condition $p$ is stronger than $q$". I hate the shortcut $ab$ for $a\cdot b$. Everyone gets used to it, BUT it creates a very deep problem with all other notation; say, you never can be sure what $f(x+y)$ or $2\!\tfrac23$ might be... Also, in modern mathematics people do not multiply things too often, so the shortcut does not make much sense. Yet the shortcut $x^n$ is a really bad one. One cannot use upper indexes after this. It would be easy to write $x^{\cdot n}$ instead.
The formulation of an ARIMA model with exogenous regressors is not generally the same as a linear regression model with lagged dependent variables. To my knowledge, the formulation in software packages for the ARIMA model with exogenous regressors is the following: $$\left[ y(t) - \beta_0 - \beta_3 \hbox{levelshift}(t) \right] = \beta_1 \left[y(t-1) - \beta_0 - \beta_3 \hbox{levelshift}(t-1)\right] + \mu(t) \,,$$ which, as you can see, differs from the regression equation that you give. This issue is sometimes a bit misleading. Below, I give some details based on a larger discussion that I give here. Linear regression model with lagged dependent variables $$y_t = \beta_0 + \beta_1 x_{1,t} + \cdots + \beta_k x_{k,t} + \phi_1 y_{t-1} + \cdots + \phi_p y_{t-p} + \epsilon_t \,, \quad \epsilon_t \sim NID(0, \sigma^2) \,.$$ The coefficient $\beta_1$ measures how the dependent variable $y_t$ changes when there is a unit change in $x_1$. The role of the lagged dependent variables is usually to whiten the residuals, i.e. to remove serial correlation in the disturbance term in order to gain efficiency in the Ordinary Least Squares estimates. This is for example used in the so-called augmented Dickey-Fuller regression. $\beta_0$ is an intercept, the expected value of $y_t$ when $x_{1,t}$ is zero. ARMA time series model with exogenous regressors Although the above formulation could be understood as an AR(p) model with regressors, this model is actually specified as follows (and, to my knowledge, this is how it is implemented in software packages): $$(y_t - \beta_0 - \beta_1 x_{1,t}) = \sum_{i=1}^p \phi_i (y_{t-i} - \beta_0 - \beta_1 x_{1,t-i}) + \epsilon_t \,, \quad \epsilon_t \sim NID(0, \sigma^2) \,.$$ To save space, I have used only one regressor, $x_1$. The MA term is not included, as the comparison with the linear regression model is not straightforward in that case.
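The difference between the two specifications can be made concrete by tracing the effect of a permanent unit step in $x_1$ under each one. Below is a pure-numpy sketch (the parameter values $\beta_0=1$, $\beta_1=2$, $\phi=0.5$ are illustrative and noise is switched off to keep the paths deterministic): in the lagged-dependent-variable regression the level of $y$ eventually shifts by $\beta_1/(1-\phi)$, while in the regression-with-AR(1)-errors form it shifts by exactly $\beta_1$.

```python
import numpy as np

beta0, beta1, phi = 1.0, 2.0, 0.5
T = 200
x = np.zeros(T)
x[50:] = 1.0                            # permanent unit step in the regressor

# (a) linear regression with a lagged dependent variable:
#     y_t = beta0 + beta1*x_t + phi*y_{t-1}
y_ldv = np.zeros(T)
y_ldv[0] = beta0 / (1 - phi)            # pre-step steady state (x = 0)
for t in range(1, T):
    y_ldv[t] = beta0 + beta1 * x[t] + phi * y_ldv[t - 1]

# (b) regression with AR(1) errors (the "ARMAX" form above):
#     (y_t - beta0 - beta1*x_t) = phi*(y_{t-1} - beta0 - beta1*x_{t-1})
y_arx = np.zeros(T)
y_arx[0] = beta0
for t in range(1, T):
    y_arx[t] = beta0 + beta1 * x[t] + phi * (y_arx[t - 1] - beta0 - beta1 * x[t - 1])

shift_ldv = y_ldv[-1] - y_ldv[49]       # long-run level change after the step
shift_arx = y_arx[-1] - y_arx[49]
print(shift_ldv)   # beta1/(1-phi) = 4.0
print(shift_arx)   # beta1         = 2.0
```

This is exactly the interpretational point above: in (a) the coefficient on $x_1$ is an impact effect whose long-run multiplier also involves the AR coefficients, while in (b) the coefficient on $x_1$ is directly the long-run effect on the mean of the series.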
The role of the lagged variables (and of the moving average terms of a general ARMA model) is to capture the overall dynamics observed in the data, e.g. by looking at the autocorrelation function. In the absence of exogenous regressors, $\beta_0$ is not an intercept as in the regression model above; it is instead the mean of $y_t$. With explanatory variables, the mean of the series is not constant and changes with $x_t$. Edit (In response to the comment by @habu.) Both specifications are correct and can be estimated, for example, by maximum likelihood. Which one is more convenient depends on the context. In a regression analysis, the interpretation of the coefficients related to the explanatory variables is more natural in terms of how the dependent variable changes with a unit increase in one of the regressors. In this context we don't care much about the value of the coefficients related to the lags of the dependent variable, since they are included just to render the residuals uncorrelated. In time series analysis, we are usually more interested in knowing the overall dynamics in the level of the series rather than how a regressor explains the dependent variable. Also, in this context, the coefficients related to the lags are interesting, since they contain relevant information about the dynamics of the data (e.g. autocorrelations, periodicity of the most important cycles in the series). Apart from these differences in interpretation and purpose, both equations are valid and estimable.
Data on the mean multiplicity of strange hadrons produced in minimum bias proton--proton and central nucleus--nucleus collisions at momenta between 2.8 and 400 GeV/c per nucleon have been compiled. The multiplicities for nucleon--nucleon interactions were constructed. The ratios of strange particle multiplicity to participant nucleon as well as to pion multiplicity are larger for central nucleus--nucleus collisions than for nucleon--nucleon interactions at all studied energies. The data at AGS energies suggest that the latter ratio saturates with increasing masses of the colliding nuclei. The strangeness to pion multiplicity ratio observed in nucleon--nucleon interactions increases with collision energy in the whole energy range studied. A qualitatively different behaviour is observed for central nucleus--nucleus collisions: the ratio rapidly increases when going from Dubna to AGS energies and changes little between AGS and SPS energies. This change in the behaviour can be related to the increase in the entropy production observed in central nucleus-nucleus collisions at the same energy range. The results are interpreted within a statistical approach. They are consistent with the hypothesis that the Quark Gluon Plasma is created at SPS energies, the critical collision energy being between AGS and SPS energies. Elastic and inelastic 19.8 GeV/c proton-proton collisions in nuclear emulsion are examined using an external proton beam of the CERN Proton Synchrotron. Multiple scattering, blob density, range and angle measurements give the momentum spectra and angular distributions of secondary protons and pions. The partial cross-sections corresponding to inelastic interactions having two, four, six, eight, ten and twelve charged secondaries are found to be, respectively, (16.3±8.4) mb, (11.5 ± 6.0) mb, (4.3 ± 2.5) mb, (1.9 ± 1.3) mb, (0.5 ± 0.5) mb and (0.5±0.5)mb. The elastic cross-section is estimated to be (4.3±2.5) mb. 
The mean charged meson multiplicity for inelastic events is 3.7±0.5 and the average degree of inelasticity is 0.35±0.09. Strong forward and backward peaking is observed in the center-of-mass system for both secondary charged pions and protons. Distributions of energy, momentum and transverse momentum for identified charged secondaries are presented and compared with the results of work at other energies and with the results of a statistical theory of proton-proton collisions. The differential and total cross sections for kaon pair production in the pp->ppK+K- reaction have been measured at three beam energies of 2.65, 2.70, and 2.83 GeV using the ANKE magnetic spectrometer at the COSY-Juelich accelerator. These near-threshold data are separated into pairs arising from the decay of the phi-meson and the remainder. For the non-phi selection, the ratio of the differential cross sections in terms of the K-p and K+p invariant masses is strongly peaked towards low masses. This effect can be described quantitatively by using a simple ansatz for the K-p final state interaction, where it is seen that the data are sensitive to the magnitude of an effective K-p scattering length. When allowance is made for a small number of phi events where the K- rescatters from the proton, the phi region is equally well described at all three energies. A very similar phenomenon is discovered in the ratio of the cross sections as functions of the K-pp and K+pp invariant masses and the identical final state interaction model is also very successful here. The world data on the energy dependence of the non-phi total cross section is also reproduced, except possibly for the results closest to threshold. The production of eta mesons has been measured in the proton-proton interaction close to the reaction threshold using the COSY-11 internal facility at the cooler synchrotron COSY. Total cross sections were determined for eight different excess energies in the range from 0.5 MeV to 5.4 MeV. 
The energy dependence of the total cross section is well described by the available phase-space volume weighted by FSI factors for the proton-proton and proton-eta pairs. Sigma+ hyperon production was measured at the COSY-11 spectrometer via the p p --> n K+ Sigma+ reaction at excess energies of Q = 13 MeV and Q = 60 MeV. These measurements continue systematic hyperon production studies via the p p --> p K+ Lambda/Sigma0 reactions, where a strong decrease of the cross section ratio close-to-threshold was observed. In order to verify models developed for the description of the Lambda and Sigma0 production we have performed the measurement on the Sigma+ hyperon and found unexpectedly that the total cross section is by more than one order of magnitude larger than predicted by all anticipated models. After the reconstruction of the kaon and neutron four momenta, the Sigma+ is identified via the missing mass technique. Details of the method and the measurement will be given and discussed in view of theoretical models. Measurements have been made on 753 four-prong events obtained by exposing the Brookhaven National Laboratory 20-in. liquid hydrogen bubble chamber to 2.85-Bev protons. The partial cross sections observed for multiple meson production reactions are: $pp\pi^+\pi^-$ ($p+p\to p+p+\pi^++\pi^-$), 2.67±0.13; $pn\pi^+\pi^+\pi^-$, 1.15±0.09; $pp\pi^+\pi^-\pi^0$, 0.74±0.07; $d\pi^+\pi^+\pi^-$, 0.06±0.02; four or more meson production, 0.04±0.02, all in mb. Production of two mesons appears to occur mainly in peripheral collisions with relatively little momentum transfer. In cases of three-meson production, however, the protons are typically deflected at large angles and are more strongly degraded in energy. The $\frac{3}{2},\frac{3}{2}$ pion-nucleon resonance dominates the interaction; there is some indication that one or both of the $T=\frac{1}{2}$ pion-nucleon resonances also play a part. The recently discovered resonance in a $T=0$, three-pion state appears to be present in the $pp\pi^+\pi^-\pi^0$ reaction.
Results are compared with the predictions of the isobaric nucleon model of Sternheimer and Lindenbaum, and with the statistical model of Cerulus and Hagedorn. The cross section for the reaction $\pi^0+p\to\pi^++\pi^-+p$ is derived using an expression from the one-pion exchange model of Drell. The cross section for the production of $\omega$ mesons in proton-proton collisions has been measured in a previously unexplored region of incident energies. Cross sections were extracted at 92 MeV and 173 MeV excess energy, respectively. The angular distribution of the $\omega$ at $\epsilon$=173 MeV is strongly anisotropic, demonstrating the importance of partial waves beyond pure s-wave production at this energy. The pp->pp phi reaction has been studied at the Cooler Synchrotron COSY-Juelich, using the internal beam and ANKE facility. Total cross sections have been determined at three excess energies epsilon near the production threshold. The differential cross section closest to threshold at epsilon=18.5 MeV exhibits a clear S-wave dominance as well as a noticeable effect due to the proton-proton final state interaction. Taken together with data for pp omega-production, a significant enhancement of the phi/omega ratio of a factor 8 is found compared to predictions based on the Okubo-Zweig-Iizuka rule. Detailed measurements of the production of charged π mesons in proton-proton collisions are reported. The observed results are compared with the "isobar" and "one-pion exchange" models and for single production are in agreement if only the "resonant" part of the π−p cross section is used and if the angular distribution $\cos^{16}\theta$ is introduced for the production of the $N_1^*$ isobar. The effects of higher resonances are also considered. The cross section for the reaction $pp \to \Sigma^+ K^+ n$ at 5 GeV/c is measured to be 48.1 ± 3.5 μb. The $K\Sigma$ mass spectrum shows an enhancement at 1.86 GeV, which may be due to the Δ(1920) resonance. The adequacy of the one-pion exchange model for the reaction is discussed.
The cross section for the reaction $pp \to \Sigma^+ K^0 p$ is found to be 24.9 ± 2.3 μb. The cross section for inclusive multipion production in the pp->ppX reaction was measured at COSY-ANKE at four beam energies, 0.8, 1.1, 1.4, and 2.0 GeV, for low excitation energy in the final pp system, such that the diproton quasi-particle is in the $^1S_0$ state. At the three higher energies the missing mass $M_x$ spectra show a strong enhancement at low $M_x$, corresponding to an ABC effect that moves steadily to larger values as the energy is increased. Despite the missing-mass structure looking very different at 0.8 GeV, the variations with $M_x$ and beam energy are consistent with two-pion production being mediated through the excitation of two Delta(1232) isobars, coupled to $S$- and $D$-states of the initial pp system. The angular and energy distributions of pions produced by 650-MeV protons and pion-nucleon correlations were studied using a liquid hydrogen bubble chamber. The present investigation indicates that the experimental angular distributions of neutral and charged pions are consistent with the assumption of isotopic spin conservation. The contributions of $\pi N$ subsystem states with isospin $T_{\pi N} = \frac{1}{2}$ and $\frac{3}{2}$ are measured; the contribution of the latter is 72 ± 3%. The production cross sections of the prompt charmed mesons D$^0$, D$^+$, D$^{*+}$ and D$_s$ were measured at mid-rapidity in p-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC. D mesons were reconstructed from their decays D$^0\rightarrow{\rm K}^-\pi^+$, D$^+\rightarrow{\rm K}^-\pi^+\pi^+$, D$^{*+}\rightarrow D^0\pi^+$, D$_s^+\rightarrow\phi\pi^+\rightarrow{\rm K}^-{\rm K}^+\pi^+$, and their charge conjugates.
The $p_{\rm T}$-differential production cross sections were measured at mid-rapidity in the interval $1<p_{\rm T}<24$ GeV/$c$ for D$^0$, D$^+$ and D$^{*+}$ mesons and in $2<p_{\rm T}<12$ GeV/$c$ for D$_s$ mesons, using an analysis method based on the selection of decay topologies displaced from the interaction vertex. The production cross sections of the D$^0$, D$^+$ and D$^{*+}$ mesons were also measured in three $p_{\rm T}$ intervals as a function of the rapidity $y_{\rm cms}$ in the centre-of-mass system in $-1.26<y_{\rm cms}<0.34$. In addition, the prompt D$^0$ cross section was measured in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV down to $p_{\rm T}=0$ using an analysis technique that is based on the estimation and subtraction of the combinatorial background, without reconstruction of the D$^0$ decay vertex. The nuclear modification factor $R_{\rm pPb}(p_{\rm T})$, defined as the ratio of the $p_{\rm T}$-differential D-meson cross section in p-Pb collisions and that in pp collisions scaled by the mass number of the Pb nucleus, was calculated for the four D-meson species and found to be compatible with unity within experimental uncertainties. The results are compared to theoretical calculations that include cold-nuclear-matter effects and to transport model calculations incorporating the interactions of charm quarks with an expanding deconfined medium.
Let $G$ be a real Lie group and $\mathfrak{g}$ the corresponding Lie algebra. Let $\mathfrak{g}^*$ be the dual of the Lie algebra. Then we have the coadjoint action of $G$ on $\mathfrak{g}^*$. Consider a $G$-invariant (w.r.t. the coadjoint action) embedded submanifold $U \subset \mathfrak{g}^*$, such that the maximal-dimensional orbits in $U$ are embedded submanifolds. Let $$k := \dim U, \qquad \ell := \max_{x \in U} \dim (G.x),$$ and fix a point $z \in U$ such that $\dim (G.z) = \ell$. Can we find $G$-invariant polynomials $f_1, \dots, f_{k-\ell} \colon U \to \mathbb{R}$ such that the function $F :=(f_1, \dots, f_{k-\ell}) \colon U \to \mathbb{R}^{k-\ell}$ is a submersion at $z$? Edit: By polynomials on $U$, I mean polynomials on $\mathfrak{g}^*$ restricted to $U$. And by $G$-invariant, that the restricted polynomials are $G$-invariant on $U$ and not necessarily on the whole of $\mathfrak{g}^*$.
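For orientation, the simplest nontrivial instance of the setup can be checked numerically (a sketch, not part of the question): for $G = SO(3)$ one has $\mathfrak{g}^*\cong\mathbb{R}^3$ with the coadjoint action given by rotations. Taking $U=\mathbb{R}^3\setminus\{0\}$ gives $k=3$ and $\ell=2$ (the maximal orbits are spheres), and the single invariant polynomial $f(x)=\|x\|^2$ has nonvanishing differential on all of $U$, so $F=(f)$ is a submersion at every such $z$.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation():
    """A random element of SO(3) via QR decomposition of a Gaussian matrix."""
    Q, R = np.linalg.qr(rng.normal(size=(3, 3)))
    Q = Q * np.sign(np.diag(R))    # normalize column signs so Q is well-distributed
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1              # force det = +1, i.e. a proper rotation
    return Q

f = lambda x: x @ x                # the invariant polynomial |x|^2

z = np.array([1.0, -2.0, 0.5])     # a point on a maximal-dimensional orbit

# invariance under the coadjoint action (= rotations for SO(3)):
for _ in range(100):
    R = random_rotation()
    assert abs(f(R @ z) - f(z)) < 1e-12

# the differential of F = (f) at z is the row vector 2*z, of rank 1 = k - l:
grad = 2 * z
print(np.linalg.norm(grad) > 0)    # True everywhere on U = R^3 \ {0}
```

Whether such a family of invariant polynomials always exists in the general setting of the question is exactly what is being asked; the sketch only illustrates what "submersion at $z$" demands of the invariants.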
I am still not sure about this problem and I was hoping someone can help me put it to rest once and for all. Suppose $X=[0,1]$, $m$ is the $\sigma$-algebra of Lebesgue measurable sets on $X$, and $\mu$ is Lebesgue measure on $X$. Consider the functions $g_1=2\chi_{[0,1/2)}, g_2=4\chi_{[1/2,3/4)}, g_3=8\chi_{[3/4,7/8)}$ and so on, i.e. $g_n = 2^n\chi_{E_n}$, where $E_n$ denotes the corresponding dyadic interval. Is the function $$f(x,y)=\sum_{n=1}^{\infty}(g_n(x)-g_{n+1}(x))g_n(y)$$ integrable on $[0,1]\times[0,1]$? I am currently working on the Fubini and Tonelli theorems and I was given this problem, but I'm not sure how to go about it. Here are a few ideas I'm considering. Since the simple functions $g_n$ are measurable and integrable over the space $[0,1]$, the function $f(x,y)$ is measurable. Now I wish to show integrability. As I understand, I need to show that $\int_{X \times Y}| f(x,y)|d\mu (x,y)<\infty$. For each $n$, I notice that $\int_0^1g_n(x)dx=1$. So, for a fixed $x$, $$ \int_0^1\int_0^1 f(x,y)=\int_0^1\int_0^1\sum_{n=1}^{\infty}(g_n(x)-g_{n+1}(x))g_n(y)dydx$$ and since $g_n(y)\geq 0$ I can switch the integral and sum: $$= \int_0^1\sum_{n=1}^{\infty}\int_0^1 (g_n(x)-g_{n+1}(x))g_n(y)dydx= \int_0^1\sum_{n=1}^{\infty} (g_n(x)-g_{n+1}(x))dx.$$ That is because $\int_0^1g_n(y)dy=1$. So that yields $$=\int_0^1\sum_{n=1}^{\infty} g_n(x)dx-\int_0^1\sum_{n=1}^{\infty} g_{n+1}(x)=\int_0^1\sum_{n=1}^{\infty}2^n\chi_{E_n}(x)dx-\int_0^1\sum_{n=1}^{\infty}2^{n+1}\chi_{E_{n+1}}(x)dx.$$ But since the $E_n$'s are disjoint I have $$\sum_{n=1}^{\infty}2^n\chi_{E_n}=2^n\chi_{\cup_nE_n}=2^n\chi_{[0,1]}$$ So I get $$\int_0^12^n\chi_{[0,1]}(x)dx-\int_0^12^{n+1}\chi_{[0,1]}(x)dx$$ $$=1-1=0$$ On the other hand, keeping $y$ fixed, can I write $$\int_0^1\int_0^1\sum_{n=1}^{\infty}(g_n(x)-g_{n+1}(x))g_n(y)dxdy$$ $$=\int_Y\int_0^1\sum_{n=1}^{\infty} g_n(x)g_n(y)dxdy-\int_Y\int_0^1\sum_{n=1}^{\infty} g_{n+1}(x)g_n(y)dxdy$$ $$=\int_0^1g_n(y)dy-\int_0^1g_n(y)dy=0?$$ Can someone please check my idea, or show me how to view or tackle such problems? Thanks.
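One way to probe the integrability question concretely (a sketch, not a full answer): $f$ is constant on the dyadic rectangles $E_m\times E_k$, where $E_n$ is the interval on which $g_n=2^n$, with $|E_n|=2^{-n}$. So both the signed integral and $\int|f_N|$ of the truncated sum $f_N=\sum_{n=1}^N$ can be computed exactly with rational arithmetic. Doing so shows the signed integral of every truncation is $0$, while $\int|f_N|=2N$ grows without bound, which is the key obstruction to applying Fubini's theorem to the full sum:

```python
from fractions import Fraction

def f_value(m, k):
    """Value of f on the rectangle E_m x E_k, where g_n = 2^n on E_n and
    |E_n| = 2^-n.  For y in E_k only the n = k term of the sum survives."""
    g_k_at_x = 2**k if m == k else 0            # g_k(x) for x in E_m
    g_k1_at_x = 2**(k + 1) if m == k + 1 else 0  # g_{k+1}(x) for x in E_m
    return (g_k_at_x - g_k1_at_x) * 2**k         # (g_k(x) - g_{k+1}(x)) * g_k(y)

def integrals(N):
    """Exact signed and absolute integrals of the truncation f_N (n = 1..N)."""
    signed = Fraction(0)
    absolute = Fraction(0)
    for m in range(1, N + 2):          # x-intervals E_1..E_{N+1} can contribute
        for k in range(1, N + 1):      # y-intervals E_1..E_N
            area = Fraction(1, 2**(m + k))      # |E_m| * |E_k|
            signed += f_value(m, k) * area
            absolute += abs(f_value(m, k)) * area
    return signed, absolute

for N in (5, 10, 20):
    s, a = integrals(N)
    print(N, s, a)    # signed integral 0, but the integral of |f_N| is 2N
```

Since $\int|f_N|\to\infty$, monotone convergence gives $\int|f|=\infty$, so $f\notin L^1([0,1]^2)$ and Tonelli's hypothesis fails; the two iterated integrals of $f$ are then not forced to agree, which is precisely why this example is assigned alongside Fubini's theorem.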
Here, I will demonstrate an astonishing fact (and thus an astonishing lack of understanding): It is well known from the uncertainty principle that an electron cannot be at rest. However, consider the relativistic limit, in which we treat the electron as a spinor. Then, by solving the free equation of motion for a spatially uniform solution ($\partial_i\Psi=0$), we get \begin{align} (i\gamma^\mu \partial_\mu -m)\Psi&=0\\ \partial_0\Psi&=-im\gamma^0\Psi \end{align} which yields, if we explicitly write out the left and right-handed components as $\Psi=\begin{bmatrix}\zeta_1\\\zeta_2\end{bmatrix}$, $\partial_0^2\zeta_i=-m^2\zeta_i$, thus giving $$\Psi(x,t)=e^{\pm im t}\Psi(x,0)$$ And hence we are free to talk about things such as "an electron at rest". Why do we have to wait until quantum field theory to discuss electrons at rest? Why does it make sense in the relativistic context?
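The computation above can be checked directly (a numerical sketch in the Dirac basis, where $\gamma^0 = \mathrm{diag}(1,1,-1,-1)$ and $(\gamma^0)^2=\mathbb{1}$): for a spatially uniform spinor the free equation reduces to $\partial_0\Psi = -im\gamma^0\Psi$, whose solution is $\Psi(t)=e^{-im\gamma^0 t}\Psi(0)$, i.e. pure phases $e^{\mp imt}$ on the upper and lower components, exactly the zero-momentum "electron at rest" modes.

```python
import numpy as np

m = 1.3                                    # mass in illustrative units
gamma0 = np.diag([1.0, 1.0, -1.0, -1.0])   # gamma^0 in the Dirac basis

def psi(t, psi0):
    """Psi(t) = exp(-i m gamma0 t) Psi(0).  Since gamma0 is diagonal, the
    matrix exponential is elementwise phases e^{-imt} (upper), e^{+imt} (lower)."""
    phases = np.exp(-1j * m * np.diag(gamma0) * t)
    return phases * psi0

psi0 = np.array([1.0, 2.0, 0.5, -1.0], dtype=complex)

# finite-difference check that d/dt Psi = -i m gamma0 Psi at some time t:
t, h = 0.7, 1e-6
lhs = (psi(t + h, psi0) - psi(t - h, psi0)) / (2 * h)
rhs = -1j * m * gamma0 @ psi(t, psi0)
print(np.max(np.abs(lhs - rhs)))   # ≈ 0 up to finite-difference error
```

The phases $e^{\mp imt}$ correspond to the positive- and negative-energy branches of the rest-frame solution; note that nothing here resolves the uncertainty-principle worry, which is a statement about localized wave packets, not about plane-wave modes.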