An X-ray plot illustrates the geometry of a complex analytic function f(z). Thick black curves show where Im(f(z)) = 0 (the function is pure real). Thick red curves show where Re(f(z)) = 0 (the function is pure imaginary). Points where black and red curves intersect are zeros or poles. Magnitude level curves |f(z)| = C are rendered as thin gray curves, with brighter shades corresponding to larger C. Blue lines show branch cuts. The value of the function is continuous with the branch cut on the side indicated with a solid line, and discontinuous on the side indicated with a dashed line. Yellow is used to highlight important regions.

Entry 106bf7: X-ray of G(z) on z ∈ [-4, 6] + [-5, 5]i (image source: xray_barnes_g).

Entry e1497f: X-ray of log G(z) on z ∈ [-4, 6] + [-5, 5]i (image source: xray_log_barnes_g).

$$\log G(x) = \begin{cases} \log\!\left(G(x)\right), & x > 0\\ \log\!\left(\left|G(x)\right|\right) + \frac{1}{2} n \left(n - 1\right) \pi i, & \text{otherwise} \end{cases}\; \text{ where } n = \left\lfloor x \right\rfloor$$

Assumptions: $x \in \mathbb{R}$ and $x \notin \{0, -1, \ldots\}$

$$\operatorname{Im}\!\left(\log G(x)\right) = \frac{n \left(n - 1\right)}{2} \pi\; \text{ where } n = \left\lfloor x \right\rfloor$$

Assumptions: $x \in \mathbb{R}$ and $x < 0$ and $x \notin \mathbb{Z}$
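The piecewise formula above is straightforward to evaluate numerically. A minimal sketch in Python (the helper names are our own, not part of the database entry; the branch data comes only from the formula above):

```python
import math

def im_log_barnes_g(x):
    """Im(log G(x)) for real x < 0, x not an integer, per the formula
    Im(log G(x)) = n(n-1)/2 * pi with n = floor(x)."""
    if x >= 0 or x == math.floor(x):
        raise ValueError("need real x < 0 with x not an integer")
    n = math.floor(x)
    return n * (n - 1) / 2 * math.pi

def sign_barnes_g(x):
    """Sign of G(x) on the negative real axis implied by the same formula:
    G(x) = |G(x)| * exp(i*pi*n(n-1)/2) = |G(x)| * (-1)^(n(n-1)/2)."""
    n = math.floor(x)
    return (-1) ** (n * (n - 1) // 2)
```

For example, on (-1, 0) we have n = -1, so Im(log G(x)) = π and G(x) is negative there.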
Solving Proportions

Andretti the Yeti is worried. Why's he worried? Well, in two hours, he has a date with Nettie the Bodacious Yeti and it's just started to snow. Snow? What's a little snow when you have a date with the love of your life? Well, Rule #5 of the Yeti Handbook states: if the snow is four feet deep, you can't be outside. Andretti doesn't know if he'll be able to make it to her place. The Yeti Weatherman reports the snow will fall at a constant rate of one foot of snow per half hour for the next several hours. Nettie is expecting him in two hours. In two hours, will the height of the snow be less than four feet?

Setting up a Proportion, or Two Equal Ratios

Let's help him figure this out! We can set up a proportion, also known as two equal ratios. Ratios compare quantities. When you set up a proportion, the numerators and denominators of each ratio must contain the same variable. Our proportion will contain ratios comparing the amount of snow in feet per hour. Let's look at the given information. We know it's snowing one foot per half hour. How deep will the snow be in two hours?

Solving the Example Proportion

Andretti the Yeti has an idea of how to solve this problem. He left us a hint in the snow. To solve for the unknown quantity of snow, we'll use cross multiplication, also known as the cross product. Don't confuse cross multiplication with cross cancellation. You use cross cancellation to make multiplying fractions easier. Set the two cross products equal to each other: 1 · 2 = 2 and 0.5 · f = 0.5f. We are left with 2 = 0.5f. To solve, divide both sides by 0.5. f = 4. At this rate, in two hours the snow will be four feet deep. Oh no! That won't work. What should Andretti do?

Example Proportion 2

He should then plan to arrive earlier for his date! He doesn't want to break the Yeti rule, so he needs to get there before the snow is 4 feet deep.
In how many hours will the snow be only three and one-half feet deep? Using the same ratio, let's set up another proportion, this time to solve for the unknown quantity of hours. We will solve this proportion just like we did before, using cross multiplication: h = 0.5 · 3.5. To get to Nettie's place before the snow is too deep, Andretti should plan to meet her in 1.75 hours, or an hour and three-quarters. He needs to leave fifteen minutes earlier than originally planned. Nettie is finally ready for their date. Poor Andretti, he followed Yeti Rule number 7 and waited outside of her house since he was too early.

Solving Proportions Exercise

Want to apply what you have learned? With the exercises for the video Solving Proportions you can review and practice it.

Manipulate the formula to get the value for $f$.

Hints: Here you can see an example of cross-multiplication: $\frac42=\frac a3~\leftrightarrow~4\times 3=2\times a$. You can solve an equation by using opposite operations: the opposite operation of addition is subtraction and vice versa; the opposite operation of multiplication is division and vice versa.

Solution: An equation representing a proportion has the form $\frac ab=\frac cd$. You have to know three values to get the solution for the unknown value. First of all, you cross-multiply the terms: $a\times d=c\times b$. Afterwards you can solve this equation by dividing. Now, let's have a look at the equation $\frac1{0.5}=\frac f2$: first cross-multiply to get $1\times 2=0.5\times f$, then divide by $0.5$ to get the solution, $f=4$.

Find the correct proportion.

Hints: A proportion can be written as a fraction. For example, six scoops of ice cream for three yetis can be written as $\frac63$. Keep in mind that for this ratio the amount of time is listed in the denominator.

Solution: What do we know? The snow gets one foot deeper every half an hour. We can write this as a ratio: $\frac1{0.5}$.
In order to calculate the number of hours $h$ for the given depth of 3.5 feet we can write the ratio $\frac {3.5}h$, giving the proportion $\frac1{0.5}=\frac {3.5}h$. By cross-multiplying we can solve for the unknown value: $1\times h=0.5\times 3.5$. The result is $h=1.75$. Andretti has to start in $1.75$ hours.

Determine the depth of the snow after two hours.

Hints: In order to solve equations you need to use opposite operations. The equation $2x=6$ can be solved by dividing by $2$. A ratio like four cookies for two yetis can be written as $\frac42$.

Solution: Let's have a look at the given information. The snow gets one foot deeper every half an hour. This can be written as $\frac1{0.5}$. The snow gets $f$ feet deeper in two hours, which results in $\frac f2$. We can write the proportion $\frac1{0.5}=\frac f2$. Now we can cross-multiply: $1\times 2=0.5\times f$. In a last step, we can divide by $0.5$ to get the solution: $f=\frac2{0.5}=4$.

Determine the number of hours till the snow is four feet deep.

Hints: Keep in mind that we have to solve two proportions. In the first proportion, the depth is unknown. In the second proportion, the number of hours is unknown. You have to subtract the depth of snow after two hours from four; this is the depth you have to use for the second equation. Keep in mind that you have to add two hours to the solution of the second equation.

Solution: To solve this problem, we will have to follow two steps. First of all we will have to determine the depth of the snow after two hours. If the snow isn't four feet deep yet, we will have to use a second proportion. If the snow gets $0.25$ feet deeper every half an hour, how deep will the snow be after two hours? First proportion: $\frac{0.25}{0.5}=\frac f2$. Cross-multiplying results in $0.25\times 2=f\times 0.5$; divide by $0.5$ to get $f=1$. The snow isn't deep enough, so we'll have to determine how long it will take for three more feet of snow to fall ($4-1$).
Second proportion: $\frac{0.75}{0.5}=\frac 3h$. Cross-multiplication: $0.75\times h=3\times 0.5=1.5$; divide by $0.75$ to get $h=1.5\div 0.75=2$. Therefore we know that after four hours ($2+2$) the snow is four feet deep.

Solve the following proportions.

Hints: Cross-multiplication: $\frac ab=\frac cd \Leftrightarrow a\times d=c\times b$. Depending on the value you want to determine, you will have to use the opposite operation. To solve the equation $2\times x=8$, you will have to divide by $2$ to get the solution $x=4$. You can check your result by plugging it into the given proportion.

Solution: You can use cross-multiplication to solve for unknown values in a proportion $\frac ab=\frac cd$. Follow the steps: cross-multiply, $a\times d=c\times b$, then solve the equation by using opposite operations.

Cross-multiply $1\times a=4\times 2$; the result is $a=8$. Check: $\frac12=\frac48=\frac{4\div4}{8\div4}=\frac12$ $\surd$

Cross-multiply $1\times 4=b\times 2$; divide by $2$ to get the solution $b=4\div 2=2$. Check: $\frac12=\frac24=\frac{2\div2}{4\div2}=\frac12$ $\surd$

Cross-multiply $c\times 4=9\times 2$; divide by $4$ to get $c=18\div 4=4.5$. Check: $\frac{4.5}2=\frac{4.5\times 2}{2\times 2}=\frac94$ $\surd$

Determine the number of days Freddy the yeti has to save his pocket money.

Hints: To figure this out, you can set up a proportion. You already know the ratio $\frac{3.5}7$: $3.5$ is his allowance for each week, and $7$ is the number of days per week. The number of days he needs to save money is unknown. Represent the unknown number of days with a variable, and write it in the denominator. Cross-multiply to solve for the unknown value.
$\frac ab=\frac cd \Leftrightarrow a\times d=c\times b$. You can check the solution by plugging it into the proportion $\frac {3.5}7=\frac ab$, where $a$ is the amount of money needed to fulfill a wish and $b$ is the unknown number of days.

Solution: Each proportion is written in the form $\frac ab=\frac cd$. You can check each solution by plugging it into the proportion $\frac {3.5}7=\frac ab$, where $a$ is the amount of money needed to fulfill a wish and $b$ is the unknown number of days.

Sledge: $\frac {3.5}7=\frac {14}b$. Cross-multiplication results in $3.5 \times b=14\times 7$; dividing by $3.5$ gives us $b=28$. Freddy has to save his pocket money for $28$ days.

CD: $\frac {3.5}7=\frac {5}b$. Cross-multiplying results in $3.5 \times b=5\times 7$; dividing by $3.5$ gives us $b=10$. Freddy has to save his pocket money for $10$ days.

Snow boots: $\frac {3.5}7=\frac {40}b$. Cross-multiplying results in $3.5 \times b=40\times 7$; dividing by $3.5$ gives us $b=80$. Freddy has to save his pocket money for $80$ days.

Ice cream: $\frac {3.5}7=\frac {3}b$. Cross-multiplication results in $3.5 \times b=3\times 7$; dividing by $3.5$ gives us $b=6$. Freddy has to save his pocket money for $6$ days.
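The cross-multiplication procedure used throughout the exercises above can be captured in a few lines of Python. This is a small sketch of our own (the helper name is not from the video): solve a/b = c/d for whichever entry is unknown, using a·d = c·b.

```python
def solve_proportion(a, b, c, d):
    """Solve the proportion a/b = c/d for the single entry passed as None,
    using cross multiplication: a*d = c*b."""
    if a is None:
        return c * b / d
    if b is None:
        return a * d / c
    if c is None:
        return a * d / b
    if d is None:
        return c * b / a
    raise ValueError("exactly one entry should be None")

# Andretti's first proportion, 1/0.5 = f/2, gives f = 4 feet:
f = solve_proportion(1, 0.5, None, 2)
# The second proportion, 1/0.5 = 3.5/h, gives h = 1.75 hours:
h = solve_proportion(1, 0.5, 3.5, None)
```

The same helper handles Freddy's pocket-money proportions, e.g. `solve_proportion(3.5, 7, 14, None)` for the sledge.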
Food Chain

THE BROKEN LINK BETWEEN SUPPLY AND DEMAND CREATES CHAOTIC TURBULENCE (+controls)

The existing global capitalistic growth paradigm is totally flawed. Growth in supply and productivity is a summation of variables, as is demand. When the link between them is broken by catastrophic failure in a component, the resulting unpredictable chaotic turbulence puts the controls into a situation that will never return the system to its initial conditions, as it is a STIC system (Lorenz). The chaotic turbulence is the result of the concept of infinite bigness; this has been the destructive influence on all empires, and is now shown up by Feigenbaum numbers and Dunbar numbers for neural networks. See Guy Lakeman's Bubble Theory for more details on keeping systems within finite working containers (villages, communities).

Prey & Predator

Physical meaning of the equations: The Lotka–Volterra model makes a number of assumptions about the environment and evolution of the predator and prey populations:

1. The prey population finds ample food at all times.
2. The food supply of the predator population depends entirely on the size of the prey population.
3. The rate of change of population is proportional to its size.
4. During the process, the environment does not change in favour of one species, and genetic adaptation is inconsequential.
5. Predators have limitless appetite.

As differential equations are used, the solution is deterministic and continuous. This, in turn, implies that the generations of both the predator and prey are continually overlapping.[23]

Prey: When multiplied out, the prey equation becomes dx/dt = αx − βxy. The prey are assumed to have an unlimited food supply, and to reproduce exponentially unless subject to predation; this exponential growth is represented in the equation above by the term αx. The rate of predation upon the prey is assumed to be proportional to the rate at which the predators and the prey meet; this is represented above by βxy.
If either x or y is zero then there can be no predation. With these two terms the equation above can be interpreted as: the change in the prey's numbers is given by its own growth minus the rate at which it is preyed upon.

Predators: The predator equation becomes dy/dt = δxy − γy. In this equation, δxy represents the growth of the predator population. (Note the similarity to the predation rate; however, a different constant is used, as the rate at which the predator population grows is not necessarily equal to the rate at which it consumes the prey.) γy represents the loss rate of the predators due to either natural death or emigration; it leads to an exponential decay in the absence of prey. Hence the equation expresses the change in the predator population as growth fueled by the food supply, minus natural death.

BATHTUB MEAN TIME BETWEEN FAILURE (MTBF) RISK

F(t) = 1 − e^(−λt)

where
• F(t) is the probability of failure
• λ is the failure rate in 1/time unit (1/h, for example)
• t is the observed service life (h, for example)

The inverse curve is the trust time. On the right, the increase in failures brings its inverse, which is loss of trust and a move into suspicion and lack of confidence. This can be seen in strategic social applications with those who put the economy before providing the priorities of the basic living infrastructures for all. This applies to policies and strategic decisions as well as physical equipment. A) Equipment wears out through friction, and preventive maintenance can increase the useful lifetime. B) Policies/working practices/guidelines have to be updated to reflect changes in the external environment, and eventually be replaced when, for instance, a population rises too large (constitutional changes are required to keep pace with evolution, e.g.
the concepts of the ancient Greeks, 3000 years ago, who based their thoughts on a small population, cannot be applied in 2013 except where populations can be contained into productive working communities with balanced profit and loss centers to ensure sustainability).

Early Life: If we follow the slope from the leftmost start to where it begins to flatten out, this can be considered the first period. The first period is characterized by a decreasing failure rate. It is what occurs during the "early life" of a population of units. The weaker units fail, leaving a population that is more rigorous.

Useful Life: The next period is the flat bottom portion of the graph. It is called the "useful life" period. Failures occur more in a random sequence during this time. It is difficult to predict which failure mode will occur, but the rate of failures is predictable. Notice the constant slope.

Wearout: The third period begins at the point where the slope begins to increase and extends to the rightmost end of the graph. This is what happens when units become old and begin to fail at an increasing rate. It is called the "wearout" period.

Rock Platform Food Web

FORCED GROWTH GOES INTO TURBULENT CHAOTIC DESTRUCTION

BEWARE: pushing increased growth blows the system! (Governments are trying to push growth on already unstable systems!)
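The failure-probability formula F(t) = 1 − e^(−λt) from the bathtub MTBF section above describes the constant-rate "useful life" region and can be evaluated directly. A minimal sketch; the numerical values of λ and t below are illustrative assumptions, not from the text:

```python
import math

def failure_probability(lam, t):
    """Probability of failure over service life t for a constant failure
    rate lam, per F(t) = 1 - exp(-lam * t)."""
    return 1.0 - math.exp(-lam * t)

# e.g. an assumed rate of 1e-4 failures/hour over 1000 h of service:
p = failure_probability(1e-4, 1000.0)  # 1 - e^{-0.1}, about 0.095
```

Note that F(t) increases monotonically with t, which is why longer observed service life always carries higher cumulative failure risk at a fixed rate.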
Plant, Deer and Wolf Population Dynamics Biol 205 Keq1b model

OVERSHOOT GROWTH INTO TURBULENCE

E coli growth model
Yo-yo Learning and Teaching Strategy
E coli life cycle model
Plant, Deer and Wolf Population Dynamics - ISD OWL
Arctic Populations

Predator-Prey Model ("Lotka–Volterra")

Dynamic simulation modelers are particularly interested in understanding, and being able to distinguish between, the behavior of stocks and flows that results from internal interactions and that which results from external forces acting on a system. For some time modelers have been particularly interested in internal interactions that result in stable oscillations in the absence of any external forces acting on a system. The model in this last scenario was independently developed by Alfred Lotka (1924) and Vito Volterra (1926). Lotka was interested in understanding internal dynamics that might explain oscillations in moth and butterfly populations and the parasitoids that attack them.
Volterra was interested in explaining an increase in coastal populations of predatory fish, and a decrease in their prey, that was observed during World War I when human fishing pressures on the predator species declined. Both discovered that a relatively simple model is capable of producing the cyclical behaviors they observed. Since that time, several researchers have been able to reproduce the modeling dynamics in simple experimental systems consisting of only predators and prey. It is now generally recognized that the model world that Lotka and Volterra produced is too simple to explain the complexity of most predator-prey dynamics in nature. And yet, the model significantly advanced our understanding of the critical role of feedback in predator-prey interactions and in the feeding relationships that result in community dynamics.

Wolves vs. Moose Populations
honeybee hive population model
Bio103 Predator-Prey Model ("Lotka–Volterra")
Plant, Deer and Wolf Population Dynamics G-IV
Intro Biology levels and genetics
cave food chain
Disease Dynamics
Simple Feedback of Insulin: Model 2 (dampening oscillation)

Bipolar II dynamics: In this simulation an individual afflicted with Bipolar II disorder is put into treatment after 20 months. The calibration of the medicine or treatment he receives is such that it simulates the natural cycles of a "normal being". By manipulating the parameters you can note that sometimes too much treatment disrupts the equilibria. Also note that in the state diagrams there are two limit cycles, the lower one being the healthiest as there are fewer changes.
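The Lotka–Volterra equations discussed above, dx/dt = αx − βxy and dy/dt = δxy − γy, can be integrated with a simple forward-Euler step to reproduce the characteristic oscillations. A minimal sketch; the parameter values and initial populations below are illustrative assumptions, not from the text, and a real study would use a proper ODE solver:

```python
def simulate(alpha, beta, delta, gamma, x0, y0, dt=0.001, steps=50_000):
    """Forward-Euler integration of the Lotka-Volterra system.
    x = prey, y = predators; returns the trajectory as (x, y) pairs."""
    x, y = x0, y0
    history = [(x, y)]
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt   # prey: growth minus predation
        dy = (delta * x * y - gamma * y) * dt  # predators: food minus death
        x, y = x + dx, y + dy
        history.append((x, y))
    return history

# Assumed parameters; equilibrium is at x* = gamma/delta, y* = alpha/beta.
hist = simulate(alpha=1.0, beta=0.5, delta=0.2, gamma=0.6, x0=4.0, y0=2.0)
```

Starting near the equilibrium (x* = 3, y* = 2 for these parameters), both populations cycle with neither ever reaching zero, which is the stable internal oscillation the text describes.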
Let $P$ be the prime numbers in $\Bbb{Z}^+$. Define a topology on $\Bbb{Z}^+$ by taking as basis sets of the form $P\cdot P \cdot \ldots \cdot P \ (k \text{ times})$, where the $\cdot$ is semigroup multiplication between each pair of elements, $k \geq 0$. Then $P^k \cap P^l = \varnothing$ for $k \neq l$, by unique factorization. What's its name, and are there any links?

This is not really a topology so much as a partition. Since $\mathbb{Z}^+ = \coprod_{k\geq 0} P^k$, each $P^k$ is both open and closed in this topology, and so all we've really done is broken up $\mathbb{Z}^+$ into the sets $P^0, P^1, P^2,\ldots$ and given each one the trivial topology. Many mathematicians have divided up the natural numbers by number of prime divisors with multiplicity, but I don't think there's a special name for it.

Also, we haven't used the additive structure of $\mathbb{Z}$ at all. As such, we can think of $\mathbb{Z}^+$ as the free abelian monoid on countably many generators, with isomorphism $\oplus_{i=0}^\infty \mathbb{N}\to \mathbb{Z}^+$ given by $(a_0,a_1,\ldots)\mapsto \prod_i p_i^{a_i}$, where $p_0, p_1,\ldots$ are the primes. For this reason, anything we do with $\mathbb{Z}^+$ that doesn't make use of addition will not have much number-theoretic significance, since it will all boil down to studying a free monoid.
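The partition in the answer groups each $n \in \mathbb{Z}^+$ by its number of prime factors counted with multiplicity, usually written $\Omega(n)$, so $P^k$ is the class $\Omega(n) = k$ (with $1$ alone in the class $k = 0$). A small sketch computing the classes by trial division (helper names are our own):

```python
def big_omega(n):
    """Number of prime factors of n counted with multiplicity (Omega)."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:  # leftover factor is prime
        count += 1
    return count

# Group 1..20 into the classes P^0, P^1, P^2, ...:
classes = {}
for n in range(1, 21):
    classes.setdefault(big_omega(n), []).append(n)
# classes[1] are the primes, classes[2] the semiprimes, and so on.
```

This makes the partition concrete: each integer lands in exactly one class, matching the disjoint union $\mathbb{Z}^+ = \coprod_{k\geq 0} P^k$.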
As preparation for writing the original grant proposal, Tony Paxton and Andrew Horsfield put together some notes on their understanding of the definition of terms used in electrochemistry, and some of the physics of corrosion.

Mnemonics and definitions

OILRIG: Oxidation Is Loss, Reduction Is Gain. At an Anode we have oxidAtion; at a Cathode we have reduCtion. A reduction reaction has the electrons on the left. For example, $M^{++}+e^{-}\to M^{+}$ is reduction because the oxidation state of M(II) is reduced to M(I). Similarly $M\to M^{+}+e^{-}$ is oxidation: this happens at the anode during corrosion.

A noble metal is unreactive and has a large positive standard electrode potential. The solid metal is more stable than its aqueous ions.

A base metal is reactive and has a large negative standard electrode potential. The solid metal is less stable than its aqueous ions.

An ideally polarizable electrode supports the applied potential without allowing any current to flow. It can be thought of as acting like a capacitor.

An ideally non-polarizable electrode supports a current from the electrode into the solution with very small applied potentials.

Structure

There are several models of the electrical double layer:

The Helmholtz double layer has a fixed layer of ions.
The Gouy & Chapman double layer has point ions that follow Maxwell-Boltzmann statistics. The Poisson-Boltzmann equation can be linearized in the low concentration limit to solve Poisson's equation.
The Stern double layer combines a fixed layer with a diffuse region.
The Grahame double layer has an Inner Helmholtz Plane (adsorbed), an Outer Helmholtz Plane, and a diffuse layer.

Equilibrium Thermodynamics

Corrosion can be described by a series of chemical reactions, using standard results from chemistry. At equilibrium the ratios of the amounts of reactants and products can be computed.
For a reaction of the form$$n_{1}R_{1}+n_{2}R_{2}+\dots+n_{N}R_{N}\rightleftharpoons m_{1}P_{1}+m_{2}P_{2}+\dots+m_{M}P_{M}\label{eq:N1}$$ we have equilibrium constant $K$ given by$$K=\frac{\Pi_{j=1}^{M}\left\{ P_{j}\right\} ^{m_{j}}}{\Pi_{i=1}^{N}\left\{ R_{i}\right\} ^{n_{i}}}\label{eq:N2}$$ where $\left\{ X\right\}$ is the activity of $X$, and is equal to 1 under standard conditions ($\left\{ X\right\} ^{\ominus}=1$). Thermodynamic activity is dimensionless and related to concentration by$$\left\{ X\right\} =\gamma_{X}\frac{\left[X\right]}{\left[X\right]^{\ominus}}\label{eq:N3}$$ where $\gamma_{X}$ is the activity coefficient. It can also be expressed in a similar manner in terms of pressure or mole fraction. The equilibrium constant can be related to the free energy change for the reaction. The free energy is stationary at equilibrium, and hence does not change if a small amount of the reactants transform into products or vice versa. Thus we have $0=\sum_{j=1}^{M}m_{j}\mu_{P_{j}}-\sum_{i=1}^{N}n_{i}\mu_{R_{i}}$ where $\mu_{P_{j}}$ is the chemical potential for species $P_{j}$, etc. Chemical potentials of species vary with their environment according to$$\mu_{X}=\mu_{X}^{\ominus}+RT\ln\left\{ X\right\} \label{eq:N4}$$ where $\mu_{X}^{\ominus}$ is the chemical potential of $X$ under standard conditions. Substituting the expression for chemical potential into the equilibrium condition gives$$\begin{aligned} 0 & = & \sum_{j=1}^{M}m_{j}\mu_{P_{j}}^{\ominus}-\sum_{i=1}^{N}n_{i}\mu_{R_{i}}^{\ominus}+RT\left(\sum_{j=1}^{M}\ln\left\{ P_{j}\right\} ^{m_{j}}-\sum_{i=1}^{N}\ln\left\{ R_{i}\right\} ^{n_{i}}\right)\\ & = & \Delta G^{\ominus}+RT\ln\frac{\Pi_{j=1}^{M}\left\{ P_{j}\right\} ^{m_{j}}}{\Pi_{i=1}^{N}\left\{ R_{i}\right\} ^{n_{i}}} \end{aligned}$$ where $\Delta G^{\ominus}=\sum_{j=1}^{M}m_{j}\mu_{P_{j}}^{\ominus}-\sum_{i=1}^{N}n_{i}\mu_{R_{i}}^{\ominus}$.
This can be rearranged to give$$K=\frac{\Pi_{j=1}^{M}\left\{ P_{j}\right\} ^{m_{j}}}{\Pi_{i=1}^{N}\left\{ R_{i}\right\} ^{n_{i}}}=\exp\left(-\frac{\Delta G^{\ominus}}{RT}\right)\label{eq:N5}$$

Nernst Equation

The Nernst equation is a statement about equilibrium for half reactions. Consider the reaction$$M^{z+}+ze^{-}\rightleftharpoons M\label{eq:N5.1}$$ At equilibrium the change of Gibbs free energy is zero if you go forwards or backwards. Thus we have$$\mu_{M^{z+}}+z\mu_{e}-\mu_{M}=0\label{eq:N5.2}$$ If we substitute Eq. [eq:N4] into Eq. [eq:N5.2] we get$$0=\Delta G^{\ominus}+z\left(\mu_{e}-\mu_{e}^{\ominus}\right)+RT\ln\frac{\left\{ M^{z+}\right\} }{\left\{ M\right\} }\label{eq:N6}$$ where $\Delta G^{\ominus}=\mu_{M^{z+}}^{\ominus}+z\mu_{e}^{\ominus}-\mu_{M}^{\ominus}$. The electron chemical potential $\mu_{e}^{\ominus}$ is not defined uniquely by the standard conditions. Thus we will add the condition that the system be in equilibrium; this condition can always be met by providing enough $M$ and $M^{z+}$ under standard conditions such that the electron reservoir (electrode) is charged to the point that equilibrium is established. Note that this means that, provided the electrode is metallic and sufficiently inert, it does not matter what material is chosen. Once we reach equilibrium, we have $\Delta G^{\ominus}=0$, and hence $z\mu_{e}^{\ominus}=\mu_{M}^{\ominus}-\mu_{M^{z+}}^{\ominus}$. This corresponds to the standard electrode potential $E^{\ominus}$ through $\mu_{e}^{\ominus}=-FE^{\ominus}$, where $F$ is the Faraday constant. Rearranging Eq. [eq:N6], with $\mu_{e}=-FE$, we then get the Nernst equation$$E=E^{\ominus}+\frac{RT}{zF}\ln\frac{\left\{ M^{z+}\right\} }{\left\{ M\right\} }$$

The Standard Hydrogen Electrode (SHE)

The Standard Hydrogen Electrode is characterized by the equilibrium $H^{+}+e^{-}\rightleftharpoons\frac{1}{2}H_{2}$ with pH 0, $P_{H_{2}}=1$ bar, and $T=298$ K. Thus $\mu_{e}=\frac{1}{2}\mu_{H_{2}}-\mu_{H^{+}}$. The corresponding potential is in the range 4.44 V to 4.85 V relative to vacuum.

Consider two half reactions with associated standard electrode potentials $E_{1}$ and $E_{2}$. Now suppose we allow electrons to flow from the electrode for $M_{1}$ to the electrode for $M_{2}$.
The free energy change per mole of electrons is then $\Delta G=F\left(E_{1}-E_{2}\right)$. If $E_{1}>E_{2}$ then $\Delta G>0$, and it is favourable for the electrons to flow the other way (from 2 to 1). Similarly, if $E_{1}<E_{2}$ it is energetically favourable for the electrons to flow from 1 to 2. In short, electrons flow from the electrode with more negative potential to the one with more positive (as expected). Applying Le Chatelier’s principle, that means coupling two electrodes will cause the reaction with the more negative potential to proceed in the oxidising direction (mnemonic: NO), while the one with the more positive potential will proceed in the reducing direction (mnemonic: PR). Pourbaix Diagrams These are phase diagrams displaying the most stable species as a function of electrode potential ($\mu_{e}=-FE$) and pH. The phase boundary lines are derived from the equation of chemical equilibrium. For example, for the equilibrium $Mg^{++}+2H^{+}+4e^{-}\rightleftharpoons MgH_{2}$ we have $\mu_{Mg^{++}}+2\mu_{H^{+}}+4\mu_{e}=\mu_{MgH_{2}}$. Hence, The voltage is given by $\mu_{e}=\mu_{e,ref}-FE$. Kinetics The Butler-Volmer equation: For large positive (or negative) overpotential we get the Tafel equation Faraday’s law $$\frac{m}{M}=\frac{Q}{nF}$$ where $m$ is the mass of substance produced at an electrode, $Q$ is the total charge delivered to the system, $F$ is Faraday’s constant, $M$ is the molar mass of the substance, and $n$ is the charge per ion. At fixed overpotential, the total measured current must equal the net rate of electron transfer to or from the electrode (to avoid a change in net charge). Potentials Galvani potential $\phi$ Electric potential inside the conductor Volta potential $\psi$ Electric potential at point $p$ just outside the interface, in vacuum.
Dipole potential $\chi$ Electric potential difference between the point $p$ and inside: $\chi=\phi-\psi$ Chemical potential $\mu$ $\mu=\mu^{0}+kT\ln a$ Electrochemical potential $\tilde{\mu}$ Work done in taking the particle from infinity to the interior of the phase. $\tilde{\mu}=\mu+q\phi$ Real potential $\alpha$ Work done in taking the particle from inside the phase to the point $p$ just outside. $\alpha=\tilde{\mu}-q\psi=\mu+q\chi$. Also $\alpha=\alpha^{0}+kT\ln a$ where $\alpha^{0}=\mu^{0}+q\chi$ Work function $W$ $W=\tilde{\mu}^{g,0}-\tilde{\mu}^{0}=\mu^{g,0}-\alpha^{0}=\mu^{g,0}-\mu^{0}-q\chi$ Potential of Zero Charge The value of the electrode potential such that the electrode surface has zero charge. (In the definitions above, $q$ is the charge of the particle, and the point $p$ is always in vacuum.) H formation by Mg in aqueous solution Let a piece of pure Mg with a perfectly clean surface be placed in a beaker of pure water. The Mg can dissolve into the water according to This results in the accumulation of electrons in the solid Mg, lowering its potential (making it more cathodic). The standard electrode potential is -2.38 V. These electrons are free to participate in a second reaction Thus the dissolution of Mg enables the reduction of water to form hydrogen gas. The removal of electrons from the solid Mg by reaction [eq:RH] pulls reaction [eq:RMg] to the right, resulting in further dissolution of Mg. This in turn produces more electrons, pulling reaction [eq:RH] to the right as well. Thus the two reactions support each other, resulting in the steady formation of hydrogen and dissolution of Mg. The hydroxide ions can combine with Mg ions to form insoluble magnesium hydroxide, which has the hexagonal hP3 structure (space group P$\bar{3}$m1 No. 164, lattice constants a = 0.312 nm, c = 0.473 nm). This removes both $Mg^{++}$ and $OH^{-}$ from solution, further encouraging the forward reactions, unless the hydroxide forms a passivating layer.
We can quantify the above somewhat by reference to Eq. [eq:N5]. From Eq. [eq:N4] we have Substituting Eq. [eq:mue] into Eqs. [eq:E1] and [eq:E2] then gives from which we see that a more positive potential (anodic) encourages formation of $Mg^{++}$ while a more negative potential (cathodic) encourages $OH^{-}$ formation. In the negative difference effect we find more hydrogen gas being produced at anodic potentials, which contradicts the results above. From Eq. [eq:E3] we see that increasing as the solid hydroxide precipitates out. This in turn will pull reaction [eq:RH] to the right (there will be more $H^{+}$ ions in solution), resulting in more hydrogen production.
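As a quick numerical sketch of the Nernst equation and Faraday's law from the sections above (all numbers below are illustrative assumptions, not taken from the text; the $\mathrm{Zn^{2+}/Zn}$ couple and the copper deposition are chosen only as familiar examples):

```python
# Illustrative numbers only: a Nernst-equation calculation for
# M^{z+} + z e- <=> M with {M} = 1, plus a Faraday's-law estimate.
from math import log

R, F, T = 8.314, 96485.0, 298.0   # J/(mol K), C/mol, K

def nernst(E0, z, activity):
    """E = E0 + (RT/zF) ln{M^z+}, in volts."""
    return E0 + (R * T / (z * F)) * log(activity)

E = nernst(E0=-0.76, z=2, activity=0.01)
# E ≈ -0.82 V: diluting the ion makes the electrode more cathodic

# Faraday's law m/M = Q/(nF): copper deposited by 2 A for one hour (n = 2)
Q, M_Cu, n = 2.0 * 3600, 63.55, 2
m = M_Cu * Q / (n * F)   # grams deposited, ≈ 2.37 g
```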
Existence results for a kind of fourth-order impulsive integral boundary value problems. Boundary Value Problems volume 2016, Article number: 81 (2016). Abstract In this paper we investigate the existence of solutions to a kind of fourth-order impulsive differential equations with integral boundary value conditions. By employing the Schauder fixed point theorem, we obtain sufficient conditions which ensure that the system has at least one solution. Also, by using the contraction mapping theorem, we get the uniqueness result. Finally, an example is given to illustrate the effectiveness of our results. Introduction Fourth-order boundary value problems have attracted much attention from many authors; for example, see Sun and Wang [1], Yao [2], O’Regan [3], Yang [4], Zhang [5], Gupta [6], Agarwal [7], Bonanno and Bella [8], and Han and Xu [9]. In particular, we would like to mention some results as follows. In [10], Zhang and Liu studied the following fourth-order four-point boundary value problem: where \(0<\xi, \eta<1\), \(0\leq a< b<1\). By using the upper and lower solutions method, fixed point theorems, and the properties of the Green’s functions \(G(t,s)\) and \(H(t,s)\), the authors gave sufficient conditions for the existence of one positive solution. Zhou and Zhang [11] employed a new existence theory to study the fourth-order p-Laplacian elasticity problems: where \(a,b> 0\), \(J=[0,1]\), \(\phi_{m}(s)\) is an m-Laplace operator, i.e. \(\phi_{m}(s)=|s|^{m-2}s\), \(m>1\), \((\phi_{m})^{-1}=\phi_{m^{*}}\), \(\frac{1}{m}+\frac {1}{m^{*}}=1\), \(F:[0,1]\times R \times R \rightarrow R\) is continuous. In their paper, a new technique for dealing with the bending term of the fourth-order p-Laplacian elasticity problems was introduced and several new and more general results were obtained for the existence of at least single, double, or triple positive solutions.
Feng [12] studied a fourth-order boundary value problem with impulses and integral boundary conditions, By using a suitably constructed cone and fixed point theory for cones, the existence of multiple positive solutions was established. Some papers considered the existence, multiplicity, and nonexistence of positive solutions for fourth-order impulsive differential equations with one-dimensional m-Laplacian; for example, see [13–17]. Most recently, Feng and Qiu [18] studied a fourth-order impulsive integral boundary value problem with one-dimensional m-Laplacian and deviating arguments: We see that in the above system the right-hand side function f has nothing to do with the term \(y'\), and the jumping function \(I_{k}\) does not contain the term \(y'(t_{k})\). What is more, there is no restriction on the impulses for the state function, i.e. \(\Delta y|_{t=t_{k}}\) does not appear. For more extensive applications, it is better to consider the following boundary value problem: where \(a,b> 0\), \(J=[0,1]\), \(\phi_{m}(s)\) is an m-Laplace operator, i.e. \(\phi_{m}(s)=|s|^{m-2}s\), \(m>1\), \((\phi_{m})^{-1}=\phi_{m^{*}}\), \(\frac{1}{m}+\frac {1}{m^{*}}=1\), \(0=t_{0}< t_{1}< t_{2}<\cdots<t_{k}<\cdots<t_{m}<t_{m+1}=1\), \(f\in {C[J\times R^{n}\times R^{n},R^{n}]}\), \(I_{k}\in{C[R^{n},R^{n}]}\), \(\bar{I}_{k}\in{C[R^{n}\times R^{n},R^{n}]}\), \(\Delta y|_{t=t_{k}}=y(t^{+}_{k})-y(t^{-}_{k})\), here \(y(t^{+}_{k})\) and \(y(t^{-}_{k})\) represent the right-hand limit and left-hand limit of \(y(t)\) at \(t=t_{k}\), respectively. \(\Delta y'|_{t=t_{k}}\) has a similar meaning for \(y'(t)\). In addition, f, g, and h satisfy the following conditions.
(H1) \(f\in{C[J\times R^{n}\times R^{n},R^{n}]}\), \(\Delta y|_{t=t_{k}}=y(t^{+}_{k})-y(t^{-}_{k})\), where \(y(t^{+}_{k})\) and \(y(t^{-}_{k})\) represent the right-hand limit and left-hand limit of \(y(t)\) at \(t=t_{k}\), respectively; (H2) \(I_{k}\in{C[R^{n},R^{n}]}\), \(\bar{I}_{k}\in{C[R^{n}\times R^{n},R^{n}]}\); (H3) \(g,h\in{L^{1}[0,1]}\) are nonnegative and \(\xi\in[0,a)\), \(\upsilon\in [0,1)\) where$$ \xi= \int^{1}_{0}g(t)\,dt, \qquad \upsilon= \int^{1}_{0}h(t)\,dt. $$(1.2) The remainder of the paper is organized as follows. In Section 2, we give the expression of the solution to BVP (1.1). For this purpose, we do some computation and estimation of the Green’s function. In Section 3, we show the existence and uniqueness of solutions to BVP (1.1) by the Schauder fixed point theorem and contraction mapping theorem. Section 4 gives an example to illustrate our main result. Preliminaries and lemmas We shall reduce problem (1.1) to an integral equation. To this aim, first, by means of the transformation we convert problem (1.1) into and Lemma 2.1 If (H1), (H2), and (H3) hold, then problem (2.1) has a unique solution \(x(t)\), which is given by where Proof Integrating (2.1) from 0 to t we get Integrating it again, we have From (2.1) we know that \(x(0)=x(1)=\int^{1}_{0}h(t)x(t)\,dt\). Letting \(t=1\) we then obtain Hence Thus we get and Hence, Finally, we obtain Thus where This completes the proof. □ Lemma 2.2 Let \(G(t,s)\) and \(H(t,s)\) be given as in Lemma 2.1. Assume that (H3) holds. Then we have where Lemma 2.3 If (H1), (H2), and (H3) hold, then problem (2.2) has a unique solution \(y(t)\) expressed in the form where and \(d=a(a+2b)\). Proof First, we assume that \(t\in I_{i}\), \(I_{i}=(t_{i},t_{i+1})\) (\(i=0,1,2,\ldots,m\)).
Integrating both sides of (2.2) from \(t_{i}\) to \(t^{-}_{i+1}\), we get Adding the above equations, we find Similarly, we can get It is easy to get which implies Then we have the following equations: Obviously, Hence, we finally get Hence where Then the proof is completed. □ Lemma 2.4 Suppose (H3) holds and assume that \(G_{1}(t,s)\) and \(H_{1}(t,s)\) are given as in Lemma 2.3. Then we have where Proof In fact, for \(t\in[\zeta,1]\) and \(s\in J\), we have Consequently, This completes the proof. □ Lemma 2.5 Assume that (H1)-(H3) hold. Then \(y(t)\) has the following form: Proof The conclusion is so straightforward that we omit it here. □ We next give some notations and a fixed point theorem which will be used to prove our main results. Let Clearly, \(\mathit{PC}^{1}[J,R^{n}]\) is a Banach space with the norm \(\|y\|_{\mathit{PC}^{1}}=\max\{\|y\|_{\mathit{PC}},\|y'\|_{\mathit{PC}}\}\). Lemma 2.6 [19] \(H\subset \mathit{PC}^{1}[J,R^{n}]\) is a relatively compact set if and only if \(\forall y\in H\), y and \(y'\) are uniformly bounded in J and equicontinuous on \(J_{k}\) (\(k=0, 1, 2, \ldots, m\)). Definition 2.1 Lemma 2.7 (Schauder fixed point theorem) If K is a nonempty convex subset of a Banach space V and T is a continuous mapping of K into itself such that \(T(K)\) is contained in a compact subset of K, then T has a fixed point. Definition 2.2 Define an operator \(A:\mathit{PC}^{1}[J,R^{n}]\rightarrow \mathit{PC}^{1}[J,R^{n}]\) by Lemma 2.8 Assume that (H1)-(H3) hold. Then \(y\in{\mathit{PC}^{1}[J,R^{n}]}\) is a fixed point of A if and only if \(y(t)\) is a solution of problem (1.1). Lemma 2.9 The operator \(A: \mathit{PC}^{1}[J,R^{n}]\rightarrow \mathit{PC}^{1}[J,R^{n}]\) is completely continuous. Proof According to (2.16) we have From (2.16) and (2.17) we know that \(A:\mathit{PC}^{1}[J,R^{n}]\rightarrow \mathit{PC}^{1}[J,R^{n}]\) is continuous.
For any bounded set \(S\subset \mathit{PC}^{1}[J,R^{n}]\) and any function \(y(t)\in S\), we see that \((Ay)(t)\) and \((Ay)'(t)\) are uniformly bounded and equicontinuous on \(J_{k}\) (\(k=0,1,2,\ldots,m\)). Hence, according to Lemma 2.6 we see that \(A(S)\) is a relatively compact set, therefore A is a completely continuous operator. □ Main results Let Theorem 3.1 Assume that (H1)-(H3) hold. If \(\eta= \max \{\eta _{1},\eta_{2}\}<1\), then (1.1) has at least one solution, where Proof From the definition of β, there exists \(N>0\), s.t. Similarly, we get where \(\eta_{1}=2\rho_{2}\phi_{m^{*}}(\frac{\gamma\beta}{4})+2m\rho _{2}\bar{\beta}_{k}+m\rho_{2}\beta_{k}\). and which together with (2.17) imply where \(\eta_{2}=2\rho_{3}\phi_{m^{*}}(\frac{\gamma\beta}{4})+2m\rho _{3}\bar{\beta}_{k}+m\rho_{2}\beta_{k}\). On the other hand, according to Lemma 2.9, we know operator A is a completely continuous operator. Together with Lemma 2.7 (the Schauder fixed point theorem), we know A has a fixed point in \(\mathit{PC}^{1}[J,R^{n}]\). □ Theorem 3.2 Assume that (H1)-(H3) hold. If there exist nonnegative real numbers α, \(\alpha_{k}\), \(\bar{\alpha}_{k}\), s.t. and \(\xi= \max\{\xi_{1},\xi_{2}\}<1\), then (1.1) has a unique solution, where Proof Computing straightforwardly we have Also we obtain It follows from \(\xi<1\) that A has a unique fixed point and therefore (1.1) has a unique solution. □ Example In this section, we will illustrate the main results by a simple example. Let \(n=1\), \(t_{1}=\frac{1}{2}\), \(a=b=1\), \(I_{1}(y(t_{1}))=\bar{I}_{1}(y(t_{1}, y'(t_{1})))=\frac{1}{2}\), \(f(t,y,y')=\sqrt[3]{t-y+y'}-\frac {1}{42}y'-3\ln(1+y^{2})\), and \(g(s)=\frac{1}{3}\), \(h(t)=\frac{1}{2}\), \(m=3\) in \(\phi_{m}\). Then equation (1.1) turns to the following equation: Following Theorem 3.1, we have the following result. Theorem 4.1 The problem (4.1) has at least one positive solution.
Proof Obviously, \(f(t,y,y')\in{C[J\times R^{n}\times R^{n}, R^{n}]}\), \(I_{1}(y(t_{1}))\in {C[R^{n}, R^{n}]}\), \(\bar{I}_{1}(y(t_{1}, y'(t_{1})))\in C[R^{n}\times R^{n},R^{n}]\). From (4.1), we get \(\beta\leq\frac{1}{41}\), \(\beta_{1}=0\), \(\bar{\beta}_{1}=0\), \(\rho _{2}=2\), \(\rho_{3}=\frac{2}{3}\), \(\gamma=2\), \(\phi_{m^{*}}(\frac{\gamma\beta }{4})\leq\sqrt{\frac{1}{82}}\). Hence, \(\eta_{1}\leq\sqrt{\frac{8}{41}}<1\), \(\eta_{2}\leq\sqrt{\frac {8}{369}}<1\), \(\eta\leq\sqrt{\frac{8}{41}}<1\). From Theorem 3.1, we get the result. This completes the proof. □ References 1. Sun, JP, Wang, XQ: Monotone positive solution of nonlinear beam equations with nonlinear boundary conditions. Math. Probl. Eng. 2011, Article ID 609189 (2011) 2. Yao, QL: Positive solutions of nonlinear beam equations with time and space singularities. J. Math. Anal. Appl. 374, 681-692 (2011) 3. O’Regan, D: Solvability of some fourth (and higher) order singular boundary value problems. J. Math. Anal. Appl. 161, 78-116 (1991) 4. Yang, B: Positive solutions for the beam equations under certain boundary conditions. Electron. J. Differ. Equ. 2005, 78 (2005) 5. Zhang, XG: Existence and iteration of monotone positive solutions for an elastic beam equation with a corner. Nonlinear Anal., Real World Appl. 10, 2097-2103 (2009) 6. Gupta, GP: Existence and uniqueness theorems for the bending of an elastic beam equation. Appl. Anal. 26, 289-304 (1988) 7. Agarwal, RP: On fourth-order boundary value problems arising in beam analysis. Differ. Integral Equ. 2, 91-110 (1989) 8. Bonanno, G, Bella, BD: A boundary value problem for fourth-order elastic beam equations. J. Math. Anal. Appl. 343, 1166-1176 (2008) 9. Han, GD, Xu, ZB: Multiple solutions of some nonlinear fourth-order beam equations. Nonlinear Anal. TMA 68, 3646-3656 (2008) 10. Zhang, XG, Liu, LS: Positive solutions of fourth-order four-point boundary value problems with p-Laplacian operator. J. Math. Anal. Appl.
336, 1414-1423 (2007) 11. Zhou, YL, Zhang, XM: New existence theory of positive solutions to fourth order p-Laplacian elasticity problems. Bound. Value Probl. 2015, 205 (2015) 12. Feng, MQ: Multiple positive solutions for fourth-order impulsive differential equations with integral boundary conditions and one-dimensional p-Laplacian. Bound. Value Probl. 2011, Article ID 654871 (2011) 13. Afrouzi, GA, Hadjian, A, Radulescu, VD: Variational approach to fourth-order impulsive differential equations with two control parameters. Results Math. 65, 371-384 (2014) 14. Cabada, A, Tersian, S: Existence and multiplicity of solutions to boundary value problems for fourth-order impulsive differential equations. Bound. Value Probl. 2014, 105 (2014) 15. Sun, JT, Chen, HB, Yang, L: Variational methods to fourth-order impulsive differential equations. J. Appl. Math. Comput. 35, 323-340 (2011) 16. Xie, JL, Luo, ZG: Solutions to a boundary value problem of a fourth-order impulsive differential equations. Bound. Value Probl. 2013, 154 (2013) 17. Zhang, XM, Feng, MQ: Positive solutions for classes of multi-parameter fourth-order impulsive differential equations with one-dimensional singular p-Laplacian. Bound. Value Probl. 2014, 112 (2014) 18. Feng, MQ, Qiu, JL: Multi-parameter fourth order impulsive integral boundary value problems with one-dimensional m-Laplacian and deviating arguments. J. Inequal. Appl. 2015, 64 (2015) 19. Guo, DJ, Sun, JX, Liu, ZL: Functional Analysis Method of Nonlinear Ordinary Differential Equations. Shandong Science and Technology Press, Jinan (2005). ISBN:7-5331-1497-3 Acknowledgements The authors express their sincere thanks to the anonymous reviewers for their valuable suggestions and corrections for improving the quality of the paper. This work was supported by NNSF of China No. 11431008 and NNSF of China No. 11271261. Additional information Competing interests The authors declare that they have no competing interests.
Authors’ contributions All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
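The contraction-mapping theorem invoked in Theorem 3.2 can be illustrated with a toy fixed-point iteration, unrelated to the specific BVP (1.1); the map $T(x)=\cos x$ is a standard contraction example near its fixed point:

```python
# Toy illustration of the Banach contraction-mapping theorem: iterate a
# contraction T until successive iterates agree to within a tolerance.
import math

def fixed_point(T, x0, tol=1e-12, max_iter=1000):
    x = x0
    for _ in range(max_iter):
        x_new = T(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

x_star = fixed_point(math.cos, 1.0)
# x_star ≈ 0.7390851, the unique solution of x = cos(x)
```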
Using a HOL-like approach as described in the link given by DanielV, subsets given by the axiom of specification are just modeled as predicates. Your whole expression becomes $\forall(\lambda x\!:\!\mathbb{N}.S(f(x)))$ (or more compactly: $\forall (S\circ f)$) where $S\equiv\lambda n\!:\!\mathbb{N}.(2|n)$. You can look at the systems HOL4, HOL Light, and Isabelle/HOL for mechanized proof assistants for this approach. An alternative approach mentioned in the other question but not elaborated on there is a propositions-as-types approach. This approach is usually used in a constructive setting with a dependently typed lambda calculus. In this approach, we model a statement by a type, and the proof of the statement is witnessed by providing a value of that type. This is the approach used by mechanized proof assistants like Coq, Agda, or LEAN. In this case, we might have a type like $\prod_{x:\mathbb{N}}S(f(x))$. $S$ would look the same except now $2|n$ would need to stand for a type. There are a variety of ways of accomplishing this. For example, we could have an (explicitly defined) function (i.e. an algorithm) $\mathtt{divides} : \mathbb{N}\times\mathbb{N}\to\mathbb{B}$ where $\mathbb{B}$ is the type of Booleans with values $\mathtt{True}$ and $\mathtt{False}$. Then $(m|n)\equiv(\mathtt{divides}(m,n)=_\mathbb{B}\mathtt{True})$. Actually proving that $\prod_{x:\mathbb{N}}S(f(x))$ holds would mean actually providing a lambda term of that type. It may be as simple as $\lambda x\!:\!\mathbb{N}.\mathtt{refl}_\mathtt{True}$ depending on the exact definition of $\mathtt{divides}$, but it could easily require more equational reasoning than this. ($\mathtt{refl}_x$ is the value that "proves", in the above sense, that $x=x$.) To address one of your comments, both approaches above have been extensively used in practice as you can see from the applications of the proof assistants mentioned. 
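A rough Lean 4 sketch of this propositions-as-types reading (the names `f` and `all_even` are illustrative; instead of a hand-rolled $\mathtt{divides}$ function it uses the built-in `Dvd` instance on `Nat`, so the proof term is just the witness for the existential):

```lean
-- Sketch only: S(n) := 2 ∣ n, f(x) := 2·x, and a term inhabiting
-- the Π-type ∀ x, S (f x) serves as the proof.
def f (x : Nat) : Nat := 2 * x

-- `2 ∣ f x` unfolds to `∃ c, f x = 2 * c`; the witness is `x` itself,
-- and the equation holds by `rfl` (definitional unfolding of `f`).
theorem all_even : ∀ x : Nat, 2 ∣ f x :=
  fun x => ⟨x, rfl⟩
```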
This is a personal opinion, but I would go so far as to say that even using either of these approaches by hand is more natural and feasible than (formally!) using set theory. Of course, the latter approach is, as I said, usually used as a constructive type theory, which makes a profound difference, including making some results much harder to prove (and, of course, making some results impossible to prove). (Classical) HOL is definitely much closer (in fact, very close) to "standard" math/set theory.
How do you obtain the volume of a sphere? You just calculate a volume integral over $\mathbb{R}^3$ $$V=\int_{\mathbb{R}^3}\chi(x,y,z)\;d\mathbf{x}$$ where $\chi$ is a function that equals $1$ inside the sphere and $0$ outside. Of course, it is convenient to switch to spherical coordinates. The determinant of the Jacobian is $|J|=r^2\sin\theta$, so $$ V=\int_0^Rr^2dr\int_0^\pi \sin\theta d\theta\int_0^{2\pi}d\varphi $$ Calculating the two rightmost integrals, you obtain $$ V=\int_0^R4\pi r^2dr =\int_0^R \frac{dV}{dr}dr \tag{1}$$ How do you calculate the surface area of a sphere? Through a surface integral $$SA=\int_{\mathbb{R}^3}\sigma(x,y,z)\;d\mathbf{x}$$ where $\sigma(x,y,z)=\chi(x,y,z)\delta (r-R)$. In spherical coordinates: $$ SA=\int_0^\pi r^2\sin\theta d\theta\int_0^{2\pi}d\varphi = 4\pi r^2 \tag{2}$$ So, comparing equations $(1)$ and $(2)$, it is possible to prove that $$SA=\frac{dV}{dr}$$ This, of course, means that $dV=SA\,dr$, i.e., the infinitesimal increment of the volume $dV$ is obtained through the product of the surface area $SA$ and the infinitesimal increment of the radius $dr$.
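The relation $SA = dV/dr$ can be sanity-checked numerically with a central finite difference (a sketch; the radius $R=2$ is arbitrary):

```python
# Numerical check that the surface area of a sphere equals dV/dr.
import math

def volume(r):
    return 4.0 / 3.0 * math.pi * r**3

def surface_area(r):
    return 4.0 * math.pi * r**2

R, h = 2.0, 1e-6
dV_dr = (volume(R + h) - volume(R - h)) / (2 * h)  # central difference
assert abs(dV_dr - surface_area(R)) < 1e-4
```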
In A. Zee's "QFT in a Nutshell" book, page 324, after he writes down the general Lagrangian, he says "in the previous chapter, we learned that by introducing a Chern-Simons gauge field we can transform $\psi$ to a scalar field". However, I didn't find any clarification of this argument in the previous chapter. Any comments or references are greatly appreciated. Yes, this is explained in the previous chapter (on the fractional quantum Hall effect). Surprisingly, when we have a filling factor $\nu$ of the Landau levels of the form $\frac{1}{3}$ or $\frac{1}{5}$, the quantum Hall fluid appears incompressible. This seems curious, because the integer quantum Hall effect appears only for integer values, corresponding to $\nu$ Landau levels completely filled (they have the degeneracy $\frac{BA}{2\pi}$, where $B$ is the magnetic field and $A$ is the area occupied by the electrons). So, something happens, calling $N_e$ the number of electrons, when $\dfrac{N_e}{(\frac{BA}{2\pi})}= \nu$, with $\nu^{-1}$ odd. Now, you have to notice that $\frac{BA}{2\pi}$ is simply the number of flux quanta $N_\phi$. So, finally, the number of flux quanta per electron is $\dfrac{N_\phi}{N_e} = \nu^{-1}$, an odd integer. Now, if we look at the statistics, exchanging $2$ particles, we have the $(-1)$ term coming from Fermi statistics, now multiplied by another $(-1)^{\nu^{-1}}$ term coming from an Aharonov-Bohm effect. So, finally, one has a $(+1)$ term, which signals Bose statistics. A Dirac spinor in $(2+1)$ dimensions having $2$ degrees of freedom, we simply have to choose a bosonic field with $2$ degrees of freedom, that is, a complex scalar field.
The precise theorem is the following, cf. e.g. Ref. 1. Theorem 1: Given a non-positive (=attractive) potential $V\leq 0$ with negative spatial integral $$ v~:=~\int_{\mathbb{R}^n}\! d^n r~V({\bf r}) ~<~0 ,\tag{1} $$ then there exists a bound state$^1$ with energy $E<0$ for the Hamiltonian $$ H~=~K+V, \qquad K~=~ -\frac{\hbar^2}{2m}{\bf \nabla}^2\tag{2} $$ if the spatial dimension $\color{Red}{n\leq 2}$ is smaller than or equal to two. Theorem 1 does not hold for dimensions $n\geq3$. E.g. it can be shown that even a spherically symmetric finite well potential does not$^2$ always have a bound state for $n\geq3$. Proof of theorem 1: Here we essentially use the same proof as in Ref. 2, which relies on the variational method. We can for convenience use the constants $c$, $\hbar$ and $m$ to render all physical variables dimensionless, e.g. $$ V~\longrightarrow~ \tilde{V}~:=~\frac{V}{mc^2}, \qquad {\bf r}~\longrightarrow~\tilde{\bf r}~:=~ \frac{mc}{\hbar}{\bf r},\tag{3} $$ and so forth. The tildes are dropped from the notation from now on. (This effectively corresponds to setting the constants $c$, $\hbar$ and $m$ to 1.) Consider a 1-parameter family of trial wavefunctions $$ \psi_{\varepsilon}(r)~=~e^{-f_{\varepsilon}(r)}~\nearrow ~e^{-1}\quad\text{for}\quad \varepsilon ~\searrow ~0^{+} , \tag{4}$$ where $$ f_{\varepsilon}(r)~:=~ (r+1)^{\varepsilon} ~\searrow ~1\quad\text{for}\quad \varepsilon ~\searrow ~0^{+} \tag{5} $$ $r$-pointwise. Here the $\nearrow$ and $\searrow$ symbols denote increasing and decreasing limit processes, respectively. E.g. eq. (4) says in words that for each radius $r \geq 0$, the function $\psi_{\varepsilon}(r)$ approaches monotonically the limit $e^{-1}$ from below when $\varepsilon$ approaches monotonically $0$ from above. It is easy to check that the wavefunction (4) is normalizable: $$0~\leq~\qquad\langle\psi_{\varepsilon}|\psi_{\varepsilon} \rangle~=~ \int_{\mathbb{R}^n} d^nr~|\psi_{\varepsilon}(r)|^2 ~\propto~ \int_{0}^{\infty} \!
dr ~r^{n-1} |\psi_{\varepsilon}(r)|^2$$$$~\leq~ \int_{0}^{\infty} \! dr ~(r+1)^{n-1} e^{-2f_{\varepsilon}(r)} ~\stackrel{f=(1+r)^{\varepsilon}}{=}~\frac{1}{\varepsilon} \int_{1}^{\infty}\!df~f^{\frac{n}{\varepsilon}-1} e^{-2f}~<~\infty,\qquad \varepsilon~> ~0.\tag{6} $$ The kinetic energy vanishes $$ 0~\leq~\qquad\langle\psi_{\varepsilon}|K|\psi_{\varepsilon} \rangle ~=~ \frac{1}{2}\int_{\mathbb{R}^n}\! d^nr~|{\bf \nabla}\psi_{\varepsilon}(r) |^2~=~ \frac{1}{2}\int_{\mathbb{R}^n}\! d^nr~\left|\psi_{\varepsilon}(r)\frac{df_{\varepsilon}(r)}{dr} \right|^2 $$$$~\propto~ \varepsilon^2\int_{0}^{\infty}\! dr~r^{n-1} (r+1)^{2\varepsilon-2}|\psi_{\varepsilon}(r)|^2~\leq~\varepsilon^2 \int_{0}^{\infty} \!dr ~ (r+1)^{2\varepsilon+n-3}e^{-2f_{\varepsilon}(r)}$$$$~\stackrel{f=(1+r)^{\varepsilon}}{=}~\varepsilon \int_{1}^{\infty}\! df ~ f^{1+\frac{\color{Red}{n-2}}{\varepsilon}} e^{-2f}~\searrow ~0\quad\text{for}\quad \varepsilon ~\searrow ~0^{+}, \tag{7}$$when $\color{Red}{n\leq 2}$, while the potential energy $$0~\geq~\qquad\langle\psi_{\varepsilon}|V|\psi_{\varepsilon} \rangle ~=~ \int_{\mathbb{R}^n} \!d^nr~|\psi_{\varepsilon}(r)|^2~V({\bf r}) $$$$ ~\searrow ~e^{-2}\int_{\mathbb{R}^n} \!d^nr~V({\bf r})~<~0 \quad\text{for}\quad \varepsilon ~\searrow ~0^{+} ,\tag{8} $$ remains non-zero due to assumption (1) and Lebesgue's monotone convergence theorem. Thus by choosing $ \varepsilon \searrow 0^{+}$ smaller and smaller, the negative potential energy (8) beats the positive kinetic energy (7), so that the average energy $\frac{\langle\psi_{\varepsilon}|H|\psi_{\varepsilon}\rangle}{\langle\psi_{\varepsilon}|\psi_{\varepsilon}\rangle}<0$ eventually becomes negative for the trial function $\psi_{\varepsilon}$. A bound state$^1$ can then be deduced from the variational method. Note in particular that it is absolutely crucial for the argument in the last line of eq. (7) that the dimension $\color{Red}{n\leq 2}$. 
$\Box$ Simpler proof for $\color{Red}{n<2}$: Consider an un-normalized (but normalizable) Gaussian test/trial wavefunction $$\psi(x)~:=~e^{-\frac{x^2}{2L^2}}, \qquad L~>~0.\tag{9}$$ Normalization must scale as $$||\psi|| ~\stackrel{(9)}{\propto}~ L^{\frac{n}{2}}.\tag{10}$$ The normalized kinetic energy scales as $$0~\leq~\frac{\langle\psi| K|\psi \rangle}{||\psi||^2} ~\propto ~ L^{-2}\tag{11}$$ for dimensional reasons. Hence the un-normalized kinetic energy scales as $$0~\leq~\langle\psi| K|\psi \rangle ~\stackrel{(10)+(11)}{\propto} ~ L^{\color{Red}{n-2}}.\tag{12}$$ Eq. (12) means that $$\exists L_0>0 \forall L\geq L_0:~~0~\leq~ \langle\psi|K|\psi\rangle ~ \stackrel{(12)}{\leq} ~-\frac{v}{3}~>~0\tag{13}$$ if $\color{Red}{n<2}$. The un-normalized potential energy tends to a negative constant $$\langle\psi| V|\psi \rangle ~\searrow~\int_{\mathbb{R}^n} \! \mathrm{d}^nx ~V(x)~=:~v~<~0\quad\text{for}\quad L~\to~ \infty.\tag{14}$$ Eq. (14) means that $$\exists L_0>0 \forall L\geq L_0:~~ \langle\psi| V|\psi\rangle ~\stackrel{(14)}{\leq}~ \frac{2v}{3} ~<~ 0.\tag{15}$$ It follows that the average energy $$\frac{\langle\psi|H|\psi\rangle}{||\psi||^2}~=~\frac{\langle\psi|K|\psi\rangle+\langle\psi|V|\psi\rangle}{||\psi||^2}~\stackrel{(13)+(15)}{\leq}~ \frac{v}{3||\psi||^2}~<~0\tag{16}$$ of the trial function must be negative for a sufficiently big finite $L\geq L_0$ if $\color{Red}{n<2}$. Hence the ground state energy must be negative (possibly $-\infty$). $\Box$ References: K. Chadan, N.N. Khuri, A. Martin and T.T. Wu, Bound States in one and two Spatial Dimensions, J.Math.Phys. 44 (2003) 406, arXiv:math-ph/0208011. K. Yang and M. de Llano, Simple variational proof that any two‐dimensional potential well supports at least one bound state, Am. J. Phys. 57 (1989) 85. -- $^1$ The spectrum could be unbounded from below.
$^2$ Readers familiar with the correspondence $\psi_{1D}(r)=r\psi_{3D}(r)$ between 1D problems and 3D spherically symmetric $s$-wave problems in QM may wonder why the even bound state $\psi_{1D}(r)$ that always exists in the 1D finite well potential does not yield a corresponding bound state $\psi_{3D}(r)$ in the 3D case? Well, it turns out that the corresponding solution $\psi_{3D}(r)=\frac{\psi_{1D}(r)}{r}$ is singular at $r=0$ (where the potential is constant), and hence must be discarded.
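The scaling in eq. (11) can be checked numerically for the Gaussian (9) in $n=1$: with $\hbar=m=1$ the normalized kinetic energy of $\psi(x)=e^{-x^2/2L^2}$ is exactly $1/(4L^2)$. A sketch (grid parameters are arbitrary):

```python
# Numerical check (n = 1, ħ = m = 1): for ψ(x) = exp(-x²/2L²) the ratio
# <ψ|K|ψ>/||ψ||² equals 1/(4L²), i.e. it scales as L^(-2).
import numpy as np

def kinetic_over_norm(L, xmax=100.0, n=400_001):
    x, dx = np.linspace(-xmax, xmax, n, retstep=True)
    psi = np.exp(-x**2 / (2 * L**2))
    dpsi = np.gradient(psi, x)          # numerical ψ'(x)
    K = 0.5 * np.sum(dpsi**2) * dx      # ½ ∫ |ψ'|² dx
    norm2 = np.sum(psi**2) * dx
    return K / norm2

for L in (2.0, 4.0, 8.0):
    assert abs(kinetic_over_norm(L) - 1 / (4 * L**2)) < 1e-5
```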
In both cases, there appears to be a confusion of terminology between common and technical uses. We commonly use methane and propane for cooking (and home heating), but not ethane. I would expect ethane to be suitable for this, being in between the two, but I've never heard of anyone using it for this purpose. Why is that? In reality, anyone using natural gas as a cooking fuel likely is cooking both with $\ce{CH4}$ and $\ce{C2H6}$. From the above-linked Wikipedia page (emphasis added): Natural gas is a naturally occurring hydrocarbon gas mixture consisting primarily of methane, **but commonly including varying amounts of other higher alkanes**, and sometimes a small percentage of carbon dioxide, nitrogen, hydrogen sulfide, or helium. EngineeringToolbox.com reports the following representative composition ranges (probably in percent by volume?) of natural gas: $$\text{Composition (%)} \\\begin{array}{ccccccccc}\hline& \ce{CO2} & \ce{CO} & \ce{CH4} & \ce{C2H6} & \ce{H2} & \ce{H2S} & \ce{O2} & \ce{N2} \\\hline\text{Min} & 0 & 0 & 82 & 0 & 0 & 0 & 0 & 0.5 \\\text{Max} & 0.8 & 0.45 & 93 & 15.8 & 1.8 & 0.18 & 0.35 & 8.4\\\hline\end{array}$$ Given that $\ce{CH4}$ is by far the major constituent of natural gas, it is sensible that it is referred to commonly by the term methane, even if it is often actually a mixture of methane, ethane, and trace higher hydrocarbons. On a related note, why is butane used for cigarette lighters and basically nothing else (in ordinary life, I mean)? Per the Wikipedia page for liquefied petroleum gas, linked in Mithoron's comment, most of what is commonly referred to as propane or butane is actually a mix of $\ce{C3H8}$ and $\ce{C4H10}$ in varying ratios (emphasis added): Liquefied petroleum gas or liquid petroleum gas (LPG or LP gas), **also referred to as simply propane or butane**, are flammable mixtures of hydrocarbon gases used as fuel in heating appliances, cooking equipment, and vehicles. ...
Varieties of LPG bought and sold include mixes that are mostly propane ($\ce{C3H8}$), mostly butane ($\ce{C4H10}$) and, most commonly, mixes including both propane and butane. In the northern hemisphere winter, the mixes contain more propane, while in summer, they contain more butane. So, Mithoron is right: $\ce{C4H10}$ is used in much more than just cigarette lighters, it's just that common usage happens to apply the term butane for this context. As a further note, I would guess the primary rationale for using different mixes of $\ce{C3H8}$/$\ce{C4H10}$ deals with the vapor pressures of the two gases. The energy densities $\eta$ of the liquefied gases, approximated as $\frac{-\Delta H_c^\circ\,\rho}{\mathrm{MW}}$, are nearly equal: $$\begin{array}{ccccc}\hline\text{Quantity} & \text{Units} & \ce{C3H8} & n\text{-}\ce{C4H10} & iso\text{-}\ce{C4H10} \\\hline\Delta H_c^\circ & \mathrm{kJ\over mol} & -2202^1 & -2878^2 & -2869^3\\\mathrm{MW} & \mathrm{g\over mol} & 44 & 58 & 58 \\\rho & \mathrm{g\over mL} & 0.58^4 & 0.604^5 & 0.56^6 \\\hline\eta & \mathrm{MJ\over L} & 29.0 & 30.0 & 27.7 \\\hline\end{array}$$ Thus, roughly comparable energy value is obtained per volume of each, and there is little reason to favor one or the other on this basis. Practically, the lower limit of acceptable vapor pressure is that which provides sufficient flow of gaseous hydrocarbon to the point of combustion. The upper limit is more or less defined by the strength of the container and plumbing.
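The $\eta$ row of the table can be reproduced directly from the tabulated $\Delta H_c^\circ$, MW, and $\rho$ values (a quick sketch):

```python
# Reproducing η = -ΔH°c · ρ / MW from the table: kJ/mol × (g/mL) / (g/mol)
# = kJ/mL = MJ/L.
fuels = {
    "propane":    {"dHc": -2202, "MW": 44, "rho": 0.58},
    "n-butane":   {"dHc": -2878, "MW": 58, "rho": 0.604},
    "iso-butane": {"dHc": -2869, "MW": 58, "rho": 0.56},
}

def energy_density(f):
    """η in MJ/L from heat of combustion, molar mass, and liquid density."""
    return -f["dHc"] * f["rho"] / f["MW"]

for name, f in fuels.items():
    print(f"{name}: {energy_density(f):.1f} MJ/L")
# propane: 29.0, n-butane: 30.0, iso-butane: 27.7 MJ/L
```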
Consider the following vapor pressure data, calculated from fitted equations (sources: propane | n-butane | iso-butane): $$\text{Vapor Pressure (atm)} \\\begin{array}{cccc}\hline & 0~^\circ\mathrm C& 25~^\circ\mathrm C & 38~^\circ\mathrm C \\\hline\ce{C3H8} & 4.7 & 9.3 & 12.8 \\n\text{-}\ce{C4H10} & 1.0 & 2.4 & 3.5 \\iso\text{-}\ce{C4H10} & 1.5 & 3.4 & 4.9 \\\hline\end{array}$$ Cigarette lighters (especially disposable plastic ones) presumably do actually use butane-rich fuel mixes, so as not to approach or exceed the mechanical limits of the lightweight, portable containers. As well, the temperature at point-of-use is somewhat better controlled, as even on cold days the heat from the user's hand is likely to maintain the butane vapor pressure high enough to provide sufficient gas flow. Finally, as noted in a comment by A.K., lighters are generally charged with iso-butane, which is sensible as it is the isomer exhibiting modestly higher vapor pressures. For applications where metal-walled containers are feasible (grilling, automotive fuel, etc.), however, structural considerations are less important and the higher deliverable pressure from propane becomes advantageous. In hot summer months, though, I would assume the higher fraction of butane is used so as to mitigate the fairly dramatic increase in vapor pressure of pure propane with increasing temperature. $^1$ Wikipedia, "Propane (data page)" $^2$ Wikipedia, "Butane (data page)" $^3$ Wikipedia, "Isobutane (data page)" $^4$ Engineering Toolbox, "Chemical, Physical and Thermal Properties of Propane Gas - $\ce{C3H8}$" $^5$ Engineering Toolbox, "Chemical, Physical and Thermal Properties of n-Butane" $^6$ AeroPres, "Physical Properties" datasheet (PDF link)
I'm in the early stages of developing a swaption pricing model. Suppose $t_1$ is the tenor of the swap rate in years, $F$ is the forward rate of the underlying swap, $X$ is the strike rate of the swaption, $r$ is the risk-free rate, $T$ is the swaption expiration (term) in years, $\sigma$ is the volatility of the forward-starting swap rate and $m$ is the compounding frequency per year of the swap rate. As I understand it, the Black-76 price of a European payer swaption is $$P_{PS}= \frac{1-(1+\frac{F}{m})^{-t_1m}}{F}\cdot e^{-rT}[F\Phi(d_1)-X\Phi(d_2)],$$ where $$d_1=\frac{\ln(\frac{F}{X})+ \frac{\sigma^2T}{2}}{\sigma\sqrt{T}}\quad\text{and}\quad d_2 = d_1-\sigma\sqrt{T}.$$ Equivalently, for a receiver swaption, the price is given by $$P_{RS}= \frac{1-(1+\frac{F}{m})^{-t_1m}}{F}\cdot e^{-rT}[X\Phi(-d_2)-F\Phi(-d_1)].$$ These are the original formulae of Black's model except for the additional annuity-like term $\frac{1-(1+\frac{F}{m})^{-t_1m}}{F}$ (source). In addition to validating that these are indeed the correct pricing formulae, I'd like to derive formulas for two Greeks in particular: theta ($\Theta$) and gamma ($\Gamma$). Theta: $$\begin{align} \Theta_{PS} =\frac{\partial P_{PS}}{\partial T} = \Bigg[\frac{1-(1+\frac{F}{m})^{-t_1m}}{F}\Bigg]\cdot\frac{\partial}{\partial T}\{e^{-rT}[F\Phi(d_1)-X\Phi(d_2)]\}= \frac{1-(1+\frac{F}{m})^{-t_1m}}{F}\cdot\Bigg[-\frac{Fe^{-rT}\phi(d_1)\sigma}{2\sqrt{T}}-rFe^{-rT}\Phi(-d_1)+rXe^{-rT}\Phi(-d_2)\Bigg], \end{align}$$ where the term in the square brackets is the standard formula for the theta of a put option under Black's model. $\Theta_{RS}$ is derived analogously. Does anyone know a source for the delta and gamma of a swaption under the Black model? Many thanks
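For what it's worth, the quoted formulae are easy to put into code and check numerically. Below is a minimal Python sketch of the two prices exactly as written in the question (function and variable names are mine, not from any library). One quick consistency check is the parity relation $P_{PS}-P_{RS}=A\,e^{-rT}(F-X)$, where $A$ is the annuity-like prefactor:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black76_swaption(F, X, r, T, sigma, t1, m, payer=True):
    """Black-76 payer/receiver swaption price with the annuity-like
    prefactor A = (1 - (1 + F/m)^(-t1*m)) / F from the question."""
    A = (1.0 - (1.0 + F / m) ** (-t1 * m)) / F
    d1 = (log(F / X) + 0.5 * sigma ** 2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    df = exp(-r * T)  # discount factor to expiry
    if payer:
        return A * df * (F * norm_cdf(d1) - X * norm_cdf(d2))
    return A * df * (X * norm_cdf(-d2) - F * norm_cdf(-d1))
```

Since $\Phi(d)+\Phi(-d)=1$, the payer minus receiver price collapses to $A\,e^{-rT}(F-X)$, which is a cheap way to catch sign errors in the receiver formula.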
Let $M^n$ be a smooth closed embedded hypersurface in $\mathbb R^{n+1}$. Denote by $D$ the bounded connected component of $\mathbb R^{n+1}\backslash M$. We assume that $\mathbb R^{n+1}\backslash D$ is simply connected. Let $B=B_R^n$ be the ball centered at the origin with radius $R\gg 1$, whose boundary is denoted by $S$. Question: Does there exist a smooth map $$F:M\times [0,1]\rightarrow \bar B\backslash D$$ such that: 1) $F(x,0)=x$ and $F(x, 1)\in S$ for all $x\in M$; 2) for any fixed $x\in M$, the curve $F(x,\cdot):[0,1]\rightarrow \bar B\backslash D$ is injective. The motivation for this question comes from several complex variables, where $M$ is taken to be the boundary of a bounded domain.
This could be a math question. But if you think about the physical aspect of the question, it is interesting to look at the Schrödinger equation. For free fields (without potential), you have (in units $\hbar = m = \omega = 1$): $$i\frac{\partial \Psi( k, t)}{\partial t} = \frac{ k^2}{2}\Psi( k, t)$$ or $$ E \tilde \Psi( k, E) = \frac{ k^2}{2} \tilde \Psi( k, E),$$ whose solution is $$\Psi( k, t) \sim e^{- i \frac{ k^2}{2} t}$$ or $$ \tilde \Psi( k, E) \sim \delta \left(E - \frac{ k^2}{2}\right).$$ Here $ \tilde \Psi( k, E)$ is the Fourier transform of $\Psi( k, t)$. It is clear from the form of the equations that there is no constraint on $E$: the spectrum is continuous, and this is clearly a non-normalizable solution. However, with potentials, things look different, and you will have differential equations. For instance, for the harmonic oscillator potential, you have: $$ E \Psi( k, E) = \frac{ k^2}{2} \Psi( k, E) - \frac{1}{2}\frac{\partial^2 \Psi( k, E)}{\partial k^2}$$ The solution for $\Psi$ involves a Hermite differential equation (after factoring out the exponential $e^{-k^2/2}$). If $E$ is taken to be continuous, then the Hermite solution (with a real index) is not bounded at infinity, and so the solution is not normalizable. If we want a normalizable solution, then we need the (positive) integer-indexed solutions $H_n$, the Hermite polynomials. In this case, the spectrum of $E$ is discrete. The choice of discrete $E$ (and so a normalizable solution) is then a physical choice: in the case of the harmonic oscillator, it is unphysical to suppose that the solution is unbounded at infinity. The Hermite polynomials are a special case of orthogonal polynomials, which are very well suited to represent orthonormal states corresponding to the discrete eigenvalues of the Hermitian energy operator.
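This quantization can be checked numerically: discretize $H = \frac{k^2}{2} - \frac{1}{2}\frac{\partial^2}{\partial k^2}$ on a truncated grid (a sketch assuming NumPy is available; forcing the wavefunction to vanish at the grid edges plays the role of the normalizability condition), and the low-lying eigenvalues come out at $E_n \approx n + \tfrac12$:

```python
import numpy as np

# H = k^2/2 - (1/2) d^2/dk^2, second-order finite differences on [-10, 10].
# Truncating the grid (psi ~ 0 at the edges) enforces decay at infinity,
# which is exactly what makes the spectrum discrete.
k = np.linspace(-10.0, 10.0, 401)
dk = k[1] - k[0]
n = k.size
H = np.diag(k ** 2 / 2.0)                         # potential term
H += np.diag(np.full(n, 1.0 / dk ** 2))           # FD Laplacian, diagonal
H += np.diag(np.full(n - 1, -0.5 / dk ** 2), 1)   # FD Laplacian, upper band
H += np.diag(np.full(n - 1, -0.5 / dk ** 2), -1)  # FD Laplacian, lower band
energies = np.linalg.eigvalsh(H)[:3]              # close to 0.5, 1.5, 2.5
```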
Article "Об одном подходе к преобразованию периодических последовательностей" ("On one approach to the transformation of periodic sequences"). In this article we study the arithmetical properties of irrationalities $\alpha=\sum_{n=0}^\infty \frac{c_n}{n!}$ in the case when the sequence $\{c_n\}$ is purely periodic. As an application of the proved theorems, a method for generating nonperiodic sequences is proposed. This paper focuses on the psychological aspects of poverty, in particular the relationship between poverty and individual psychological characteristics. We analyzed a number of studies that make it possible to formulate hypotheses about the relationship between different types of poverty and components of self-perception, basic individual values of the person, and features of economic decision-making. We emphasize the need for empirical research in order to test these hypotheses and identify possible new directions for research within the psychology of poverty. We consider a periodic sequence $\{ c_k \}_{k=0}^\infty$ and investigate the numerical properties of the irrational number $\alpha = \sum_{k=0}^\infty c_k/k!$. As an application of our results we present a simple transformation of the periodic sequence $\{ c_k \}_{k=0}^\infty$ into an aperiodic sequence. In this article we study one class of irrationalities which may be defined as convergent series with rational coefficients. This class contains many well-known constants such as $\ln 2$, $\pi$, etc. We consider the problem of determining the parameters of the rational coefficients from rational approximations of the irrationality. We deduce lower and upper bounds and present an algorithm for determining the unknown parameters. We also present some results of practical calculations. A model for organizing cargo transportation between two node stations connected by a railway line which contains a certain number of intermediate stations is considered. The movement of cargo is in one direction.
Such a situation may occur, for example, if one of the node stations is located in a region which produces raw material for a manufacturing industry located in the region of the other node station. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, as well as the rule of distribution of cargo to the final node stations. The process of cargo transportation is governed by a prescribed control rule. For such a model, one must determine the possible modes of cargo transportation and describe their properties. This model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow. This results in the need for a "correct" extension of solutions of the system of differential equations to a class of quasi-solutions, whose distinctive feature is gaps at a countable number of points. Using the fourth-order Runge–Kutta method, it was possible to construct these quasi-solutions numerically and to determine their rate of growth. Let us note that the main technical difficulty consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of the gaps (jumps) of the solutions, on a number of parameters of the model characterizing the control rule, the technologies for transportation of cargo, and the intensity of the supply of cargo at a node station. This proceedings publication is a compilation of selected contributions from the "Third International Conference on the Dynamics of Information Systems", which took place at the University of Florida, Gainesville, February 16–18, 2011.
The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of the dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the-art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own areas of study.
Transforming Simple Repeating Decimals to Fractions and Vice Versa 03:41 minutes

Video Transcript

The brothers Zooey, Louie and Phooey formed a band called the Musical Triplets and entered their school's annual Battle of the Bands contest. Look at that! The Triplets beat last year's winner, the Math Bros., and win first place! The grand prize is a one hundred dollar bill! They plan to divide the prize money evenly amongst themselves. They go to their Uncle Huge, who's as good as a bank, to exchange their one hundred dollar bill for ten ten-dollar bills. After dividing the money evenly, they're left with one ten-dollar bill. They then ask their uncle to change the last ten-dollar bill into ten one-dollar bills. Each boy now gets three dollars, leaving one one-dollar bill. This goes on and on... they exchange dollars for dimes and dimes for pennies until...

Using long division

They divided all the money, except for one last penny. What if they want to split that one remaining penny? We know how to split one cent into three even parts with math. We write this as the fraction one-third, but how much is this, exactly? You already know the fraction bar indicates division, so one third is the same as one divided by three. Do you remember how we calculate the quotient? That's right! We use long division! Now do the math. Do you see a pattern? Since we'll always have a remainder, we call this number, and numbers like this, a repeating decimal. Instead of writing the repeating part again and again, we can use a horizontal bar to indicate the digits that repeat. One third is equal to one divided by three, which we rewrote in long division form. While evaluating this problem, we get zero point three, three, three, three, three, three. Oh, sorry.
Converting a repeating decimal into a fraction

Okay, so you can use division to see if a fraction has a repeating decimal, but how do you convert a repeating decimal into a fraction? Changing a repeating decimal in which all numbers after the decimal repeat, such as zero point one repeating, zero point two repeating, etc., and zero point 73 repeating, all the way up to zero point 123456789 repeating, into a fraction might seem daunting at first. But I'll show you a little trick that'll amaze your friends, or maybe just your math teacher. Use place value to determine the denominator. To write these numbers as fractions, first find out how many numbers repeat after the decimal. Here, there are one, two and three digits to the right of the decimal that repeat, respectively. Next, you write the repeating part in the numerator, like so. For each of the denominators we need to write the same number of 9s as there are numbers in the numerator. So one 9 here, two 9s here and three 9s here. Now all we have to do is simplify! Two-ninths is already in its reduced form, so we don't have to do anything to it. The greatest common factor of 36 and 99 is 9, so we can divide the numerator and denominator by 9. Doing so leaves us with four-elevenths. The greatest common factor of 459 and 999 is 27, so we can divide the numerator and denominator by 27. Doing so leaves us with seventeen thirty-sevenths. Let's see if the Triplets have figured out the trick to splitting the last penny. Uh-oh, it looks like Uncle Huge is gonna keep that penny! Well, a penny saved is a penny earned!

Exercises for the video Transforming Simple Repeating Decimals to Fractions and Vice Versa: Would you like to apply what you've learned? With these exercises you can review and practice it.

Explain how to divide 100 dollars evenly between three people.
Hints

Divide $9$ by $3$: $\frac{9}{3}=3$. Subtract $9$ from $10$ to get $10-9=1$. One $\$100$ bill is equal to ten $\$10$ bills, one $\$10$ bill is equal to ten $\$1$ bills, one $\$1$ bill is equal to ten dimes, and one dime is equal to ten pennies.

Solution

How can we divide $\$100$ evenly amongst the three band members? First, we exchange the $\$100$ bill for ten $\$10$ bills. Each band member gets three $\$10$ bills, but one $\$10$ bill is left. Next, we have to exchange the last $\$10$ bill for ten $\$1$ bills. Again, everyone gets three $\$1$ bills and, again, one $\$1$ bill is left. In the same manner, we can proceed until only one penny is left. We can describe this situation with the fraction $\frac13$, which can be written as $1\div3$. This is the part of a penny each member is entitled to.

Convert the fraction $\frac13$ into a decimal by using long division.

Hints

You have to do the same steps over and over again. To show that a decimal is a repeating decimal, place a horizontal line over the repeating numbers.

Solution

You already know that a fraction bar indicates a division. So we can say that $\frac13=1\div3$. But how can you divide $1$ by $3$? On the right, you can see the problem worked out. If we use long division, we will always have a remainder of $1$. We can repeat this process over and over and OVER again: $1\div 3=0.33333\ldots$. We call this a repeating decimal and indicate the repeating portion by placing a horizontal line over the numbers: $0.\overline3$, $0.\overline{623}$, $0.\overline{1025}$.

Define a repeating decimal.

Hints

$1\div 3=0.3333333\ldots$ is an infinite but periodic decimal. $0.\bar3$ is a repeating decimal.

Solution

Dividing $1$ by $3$ gives us $0.3333...$ with infinitely many $3$s behind the decimal point. Decimals with one or more digits that repeat without end are called repeating decimals.
The number of repeated decimal places can vary: $\begin{array}{lcl} 0.\bar2 &&1 \text{ repeating number after the decimal}\\ 0.\overline{36} &&2 \text{ repeating numbers after the decimal}\\ 0.\overline{459} &&3 \text{ repeating numbers after the decimal} \end{array}$ All these numbers are written with a horizontal bar above the digits that repeat. For example, $0.\overline{459}=0.459459459459\ldots$. Repeating decimals are infinite.

Determine the repeating decimal equivalent for each given fraction and vice versa.

Hints

The factors of $99$ are: $1$, $3$, $9$, $11$, $33$, and $99$. The factors of $999$ are: $1$, $3$, $9$, $27$, $37$, $111$, $333$, and $999$. Is $37$ a factor of $99$? Is $37$ a factor of $999$? You can write every repeating decimal in fraction form in which the denominator has as many $9$s as the number of digits that repeat in the numerator. For example, $0.\overline{1234}$ would look like this as a fraction: $\dfrac{1234}{9999}$.

Solution

When transforming fractions into repeating decimals...
You can use long division, or you can multiply the fraction so that the denominator is $9...9$. For the fraction $\frac{46}{111}$: If we multiply both the numerator and the denominator by $9$ we get: $\begin{array}{rcl} \frac{46}{111}&=&\frac{46\times 9}{111\times 9}\\ &=&\frac{414}{999}\\ &=&0.\overline{414} \end{array}$ $~$ For the fraction $\frac{4}{27}$: We multiply both the numerator and the denominator by $37$ to get: $\begin{array}{rcl} \frac{4}{27}&=&\frac{4\times 37}{27\times 37}\\ &=&\frac{148}{999}\\ &=&0.\overline{148} \end{array}$ $~$ To transform repeating decimals into fractions: write the repeating digits in the numerator; in the denominator, place as many $9$s as there are repeating digits; then reduce the fraction by dividing the numerator and denominator by the GCF. For the repeating decimal $0.\overline{81}$: $\begin{array}{rcl} 0.\overline{81}&=&\frac{81}{99}\\ &=&\frac{81\div9}{99\div 9}\\ &=&\frac{9}{11} \end{array}$ $~$ For the repeating decimal $0.\overline{63}$: $\begin{array}{rcl} 0.\overline{63}&=&\frac{63}{99}\\ &=&\frac{63\div9}{99\div 9}\\ &=&\frac{7}{11} \end{array}$

Decide how much money each band member should receive.

Hints

First, take a look at the amount of money they will be paid at each location. Then, see how you can divide that amount amongst the $3$ boys. For example, if the boys got paid $\$110$ for a concert in Plano, Texas, the equation would look like this: $\dfrac{\text{total money}}{\text{number of boys}}=\dfrac{\$110}{3}$

Solution

Lucky for them, the Musical Triplets are not only excellent singers, they are also very talented mathematicians, so they are able to calculate how much each band member will receive for each gig: $\dfrac{\text{total money}}{\text{number of boys}}$ Boston $\begin{array}{rcl} \dfrac{\$160}{3}&=&\$53\frac13\\ \end{array}$ Since we know that $1\div3=0.\overline{3}$, we can say that each band member will be paid $\$53.\bar3$.
Chicago $\begin{array}{rcl} \dfrac{\$200}{3}&=&\$66\frac23\\ \end{array}$ Since $2\div3=0.\overline{6}$, each band member will be paid $\$66.\bar6$. Memphis $\begin{array}{rcl} \dfrac{\$205}{3}&=&\$68\frac13\\ \end{array}$ Each band member will be paid $\$68.\overline{3}$. Atlanta $\begin{array}{rcl} \dfrac{\$130}{3}&=&\$43\frac13\\ \end{array}$ Each band member will be paid $\$43.\overline{3}$.

Write repeating decimals as fractions.

Hints

Simplify the fraction by dividing the numerator and the denominator by their GCF (Greatest Common Factor).

Solution

You can write a repeating decimal as a fraction as follows: $0.\overline{123}=\frac{123}{999}$. We place the repeated digits in the numerator and as many $9$s in the denominator as there are repeating digits in the numerator, then reduce by the Greatest Common Factor (GCF): $\frac{123}{999}=\frac{123\div3}{999\div 3}=\frac{41}{333}$ $~$ 1. We can write the repeating decimal $\mathbf{0.\overline{45}}$ as a simplified fraction: $\begin{align*} 0.\overline{45}&=\frac{45}{99}\\ &=\frac{45\div 9}{99\div 9}\\ &=\frac{5}{11}\\ \end{align*}$ 2. We can write the repeating decimal $\mathbf{0.\overline{324}}$ as a simplified fraction: $\begin{align*} 0.\overline{324}&=\frac{324}{999}\\ &=\frac{324\div 27}{999\div 27}\\ &=\frac{12}{37}\\ \end{align*}$ 3. We can write the repeating decimal $\mathbf{0.\overline{18}}$ as a simplified fraction: $\begin{align*} 0.\overline{18}&=\frac{18}{99}\\ &=\frac{18\div 9}{99\div 9}\\ &=\frac{2}{11}\\ \end{align*}$ 4. We can write the repeating decimal $\mathbf{0.\overline{132}}$ as a simplified fraction: $\begin{align*} 0.\overline{132}&=\frac{132}{999}\\ &=\frac{132\div 3}{999\div 3}\\ &=\frac{44}{333} \end{align*}$
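The 9s trick worked through in the solutions above is one line of Python with the standard library's `fractions` module, which reduces by the GCF automatically (a sketch; the helper name is mine):

```python
from fractions import Fraction

def pure_repeating_to_fraction(digits: str) -> Fraction:
    """Convert 0.(digits repeating) to a reduced fraction: put the
    repeating block over as many 9s as the block has digits."""
    return Fraction(int(digits), 10 ** len(digits) - 1)

# e.g. 0.454545... -> 45/99, automatically reduced to 5/11
```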
<< ToK ToK Warszawa meeting - Rough Notes Thu 15 Feb 2007 These are just rough notes - feel free to correct them, add links, etc. Hector Rubinstein - Stockholm - magnetic fields Magnetic fields on kpc scales exist. They may exist on intergalactic scales - it's unclear whether or not their origin is primordial. CMB - Planck - may be able to detect magnetic fields present at the epochs not long after nucleosynthesis and recombination. It is well known that the photon has a thermal mass - about 10^{-39} [units = eV?] - which is extremely small - related to electron loops. Maxwell eqns -> Proca eqns WikipediaEn:Proca_action m_photon < 10^-26 eV. Galactic magnetic fields exist at z \approx 3, making it difficult for dynamo mechanisms to explain them. Boehm - LDM hypothesis 511 keV detection Leventhal(sp?) 199x ApJ OSSE 3 components Purcell et al 1997 candidates: stars - SNe, SNII, WR; compact sources - pulsars, BH, low mass binaries - most excluded because they would imply 511 keV from the disk - SNIa - need large escape fraction and explosion rate to maintain a steady flux - low mass X-ray binaries - need electrons to escape from the disk to the bulge dm + dm -> e^- + e^+ e+ loses energy -> positronium e+e- -> positronium decays: para-positronium 2 gamma - monochromatic with 511 keV; ortho-positronium 3 gamma - continuum predictions: positron emission should be maximal with highest DM concentration (n^2 effect?) cdm spectrum does NOT produce CDM-like power spectrum???
- at 10^9 M_sun essentially CDM-like - by 10^6 M_sun, the difference would be important spectrum: Ascasibar et al 2005, 2006 model through F- 511 keV through Z' relic density link with neutrino mass interaction/decay diagrams -> link between neutrino mass and DM cross section: \sigma_\nu well-known for relic density, \sim 10^{-26} cm^3/s; ... MeV ... to fit neutrino data BBN: 1 MeV < m_N low energy Beyond SM MeV DM has definitely escaped all previous low energy experiments due to lack of luminosity BABAR/BES II ... ? summary ... explains low value of neutrino masses detection at LHC may be possible but requires work back to SUSY -> snu-neutralino-nu ? Conlon - hierarchy problems in string theory: the power of large volume planck scale 10^18 GeV ... cosm constant scale (10^-3 eV)^4 - large-volume models can generate hierarchies through a stabilised exponentially large volume - predicts cosmological constant (but about 50 orders of magnitude too large - solving this problem is left to the reader/audience) Günther Stigl - high-energy c-rays, gamma-rays, neutrinos HESS - correlation of observations at GC with molecular cloud distribution KASCADE - has made observations Southern Auger - 1500 km^2 - in Chile/Argentina Hillas plot c-rays at highest energies could be protons, could be ions - most interactions produce pions; pi^\pm decays to neutrinos, pi^0 decays to photons (gamma-rays) origin of very high energy c-rays remains one of the fundamental unsolved questions of astroparticle physics - even galactic c-ray origin is unclear acceleration and sky distribution of c-rays are strongly linked to the strength and distribution of cosmic magnetic fields - which are poorly known; sources probably lie in fields of \mu-Gauss HE c-rays, pion-production, gamma-rays/neutrinos - all three fields should be considered together; strong constraints arise from gamma-ray overproduction
Khalil - DM - SUSY - brane cosmology (British University in Egypt = BUE) - friedmann eqn modified in 5D (brane model) - dark matter relic abundance $m_\nu = \sqrt{\frac{\sigma \nu}{128 \pi^3}}\, m_N^2 \ln\!\left(\Lambda^2/m_N^2\right)$
The volatility surface is just a representation of European option prices as a function of strike and maturity in a different "unit" - namely implied volatility (while the term implied volatility has to be made precise by the model used to convert prices (quotes) into implied volatilities - for example, we may consider log-normal vols and normal vols). ... Let $t_0, t_1, \ldots, t_n$ be observation dates, where $0=t_0 < \cdots < t_n = T$, and $\{S_t \mid t \geq 0\}$ be the equity price process without dividend payments. Then the realized variance is defined by\begin{align*}\frac{252}{n}\sum_{i=1}^n \ln^2 \frac{S_{t_i}}{S_{t_{i-1}}}.\end{align*}Note that, for sufficiently small $x$,\begin{align*}\... It seems that your question refers to the microstructure noise defined in papers about intraday volatility estimates. Originally, it comes from the bid-ask bounce, i.e. the fact that even if the volatility is zero, you have buyers and sellers at this price, and consequently you observe prices at bid or ask prices, and not at the mid-price. Because of that, if ... The main issue in measuring intraday volatility is the so-called "signature plot": when you zoom in, the volatility measure (i.e. empirical quadratic variation) explodes. Similarly you have the "Epps effect" for correlations: when you zoom in, the correlations collapse (it is at least a mechanical effect). For the volatility a lot of models can correct this: - first ... The term has a different meaning to different people. To econometricians, microstructure noise is a disturbance that makes high frequency estimates of some parameters (e.g. realized volatility) very unstable. Generally this strand of the literature professes agnosticism as to its origin; to market microstructure researchers, microstructure noise is a ...
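The realized-variance definition quoted above translates directly into code (a sketch; the names are mine, and 252 is the usual annualization count for daily observations):

```python
from math import log

def realized_variance(prices, periods_per_year=252):
    """Annualized realized variance: (252/n) * sum of squared log-returns."""
    n = len(prices) - 1  # number of return observations
    return periods_per_year / n * sum(
        log(prices[i] / prices[i - 1]) ** 2 for i in range(1, len(prices))
    )
```

Note that squaring makes the estimate insensitive to the sign of each return, which is why a price that bounces between bid and ask contributes spurious "volatility" at high frequency.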
Some cynical but functional definitions: It's what you can't model if you're not using tick-by-tick data. It's what proper quant pricing theory doesn't know how to model yet. It's information (order book behavior) that reflects momentary fluctuations in the supply/demand of a given contract, rather than its underlying value (e.g. an arbitrage-free price). The ... Volatility is mean reverting because you can prove by contradiction that it cannot be otherwise. You have an intuitive understanding of why, but you need something closer to a proof. Assume volatility is not mean reverting. At time $t$, the effect of the random component of the volatility on its level will be $\sigma \cdot \sqrt{t}$. For an arbitrarily ... The key to this is to think about the enterprise value of a business separately from how it is financed. For simplicity's sake, consider a business that comprises a sole gold bar (no workers, no extraction costs, etc). The value of the business is clearly just the value of the gold bar. If it were a listed company, with no debt, then the equity ... You may want to first broadly categorize volatility models before comparing between them within each class; it does not make sense to compare standard deviation models with an implied vol model. I would broadly classify as follows: Historical realized volatility: those include standard deviation (sum of squared deviations), realized range volatility ... There is no "plain Black-Scholes implied surface" because implied volatilities come from options market prices (calls and puts). If you had a whole continuum of call prices $C : \mathbb{R}_+ \times \mathbb{R}_+ \to \mathbb{R}_+$, $(T,K) \mapsto C(T,K)$, you would get an implied volatility function $\sigma_I : \mathbb{R}_+ \times \mathbb{R}_+ \to \mathbb{R}_+$ ... Great question! I think the most useful starting point is Stock Return Characteristics, Skew Laws, and the Differential Pricing of Individual Equity Options by Bakshi, Kapadia and Madan (2003).
Their paper proposes a definition of model-free implied skewness (they originally called it risk-neutral skewness, but MFIS is more accurate), which they prove will ... Setting aside that it's not pure riskless arbitrage, but rather statistical arbitrage: you can extract the profit by performing continuous delta hedging. If you constantly adjust your hedge position, you gain/lose money by delta hedging. Being long the option (long gamma), you sell at higher prices and buy at lower ones. Over the course of time you realize ... The expression you have is fine. But more generally, for intraday volatility, I don't think there is "the correct definition". More like, whatever works in the given context. I found the following notes by Almgren pretty useful: http://cims.nyu.edu/~almgren/timeseries/notes7.pdf I do not have the time right now to write up a summary that is concise yet touches on all the points needed to delineate the above. Instead I point you to a couple of papers that are concise enough to skim over in a matter of minutes in order to understand the differences. Jim Gatheral on local vs. stochastic vols: http://... There are rigorous econometric definitions, as has already been alluded to by others. For practical purposes, microstructure noise is a component of a price process that exhibits mean reversion on some (possibly time-varying) frequency. This reversion is particularly attractive to liquidity providers, who seek to profit from this noise component (along ... The usual technique of computing the mean and standard deviation of returns happens to coincide with the maximum likelihood estimate when the data are regularly spaced. However, when the data are not regularly spaced, you can still do a maximum likelihood estimate. It's just more computationally intensive than before. That is to say, assume you have ...
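A minimal sketch of that maximum-likelihood idea for irregularly spaced observations (the zero-drift model $r_i \sim N(0, \sigma^2 \Delta_i)$ and all numbers below are assumptions for illustration): maximizing the Gaussian likelihood gives the closed form $\hat\sigma^2 = \frac{1}{n}\sum_i r_i^2/\Delta_i$.

```python
import math

def mle_variance(log_returns, gaps):
    """MLE of sigma^2 when return r_i over time gap dt_i ~ N(0, sigma^2 * dt_i):
    sigma_hat^2 = (1/n) * sum(r_i^2 / dt_i)."""
    assert len(log_returns) == len(gaps)
    n = len(log_returns)
    return sum(r * r / dt for r, dt in zip(log_returns, gaps)) / n

# Hypothetical irregular gaps (in years) and made-up log returns
gaps = [1/252, 3/252, 1/252, 2/252]
rets = [0.010, 0.017, -0.009, 0.013]
print(mle_variance(rets, gaps))  # annualized variance estimate
```

With equal gaps this reduces to the usual (zero-mean) sample variance scaled by the observation frequency, which is the coincidence the answer mentions.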
Windham Capital Management is using hidden Markov models for their Risk Regime Strategies. Mark Kritzman, who is also CEO, has published an article about the general outline of the strategy (with source code so you can replicate the results!): Regime Shifts: Implications for Dynamic Strategies (corrected August 2012) by M. Kritzman, S. Page, D. Turkington ... The price of a binary option, ignoring interest rates, is basically the same as the CDF $\phi(S)$ (or $1-\phi(S)$) of the terminal probability distribution. Generally that terminal distribution will be lognormal from the Black-Scholes model, or close to it. The option price is$$C = e^{-rT} \int_K^\infty \psi(S_T) dS_T$$for calls and$$ P = e^{-rT} \... The way market makers mark their volatility curves is by using models which 'fill in the gaps', i.e. they will make a price for a given option even if they do not believe this option is going to get a lot of volume. They are still willing to go long/short because they have a strategy to hedge their overall position (i.e. by managing their greeks and expiries) ... You kind of answered the question yourself. Precisely because different market participants use different inputs to their pricing models, it is much easier to quote one single input (implied vols) than the output of 5 different inputs (BS option price). What is important is that you clearly differentiate between quoting and agreeing on the trade vs. the ... I had read some of them; actually, there is no online library that collects them (or, better, one existed here, but it seems the website does not work anymore). Here below are some of them that you did not find: More Than You Ever Wanted To Know* About Volatility Swaps; Model Risk; The Volatility Smile and Its Implied Tree; Enhanced Numerical ...
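To make the "binary price ≈ discounted terminal CDF" point concrete, here is a sketch under Black-Scholes assumptions (the function name and all parameter values are hypothetical): a cash-or-nothing call paying 1 if $S_T > K$ is worth $e^{-rT}N(d_2)$, the discounted risk-neutral probability of finishing in the money.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def binary_call(S, K, r, sigma, T):
    """Cash-or-nothing binary call paying 1 if S_T > K: e^{-rT} * N(d2)."""
    d2 = (math.log(S / K) + (r - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return math.exp(-r * T) * norm_cdf(d2)

print(binary_call(S=100, K=100, r=0.01, sigma=0.2, T=1.0))
```

Integrating the lognormal density $\psi$ over $(K,\infty)$, as in the quoted formula, gives exactly this $N(d_2)$ term.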
Using months of proprietary data that labels participants by their participant ID, it has been found that during periods of significant volatility, the composition of HFT participants in the book remains mostly constant as a fraction of the total BBO composition. What really changes, it was found, was that the fraction of low-frequency traders aggressing on ... Let's assume T=1 and let S be a geometric Gaussian process with zero drift, i.e. $\ln(S_1/S_0)$ is normally distributed with mean $-\frac{1}{2}\mathrm{VEV}^2$ and volatility VEV. Then$$\ln(\mathrm{VaR}/S_0) = -\frac{1}{2}\mathrm{VEV}^2 - \mathrm{VEV} \times 1.96$$ with VaR at the $0.975$ quantile. This is a quadratic equation in VEV, with solutions$$\mathrm{VEV}... I didn't quite understand your objection. Most theories of market making are derived from a famous paper by Jack Treynor (The Economics of the Dealer Function). In the theory, there are initially no market makers, but there is a backstop seller (in this case someone willing to sell large amounts at 10.10) and a backstop buyer (a Warren Buffett ready to buy ... Along with Gatheral's book, I'd recommend reading Lorenzo Bergomi's "Stochastic Volatility Modelling". The first 2 chapters are available for download on his website. That being said, let me try to give you the basic picture. Below we assume that the equity forward curve $F(0,t)=\Bbb{E}_0^\Bbb{Q}[S_t]$ is given for all $t$ smaller than some relevant ... Intraday seasonality is a major factor in comparing volatility at different times of day. Most time series display significantly higher volatility in the morning EST than mid-day. For US exchange-traded products, volatility picks up again just before 4:00 PM EST. This is known as the u-shaped volatility pattern for exchange-traded products. A proper ... Scaling volatility the way you do often leads to inaccurate results, typically over-estimating volatility, especially when you scale daily volatility to longer periods.
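Rearranging the VaR relation above gives $\tfrac{1}{2}\mathrm{VEV}^2 + 1.96\,\mathrm{VEV} + \ln(\mathrm{VaR}/S_0) = 0$. A sketch of solving for the positive root (the numbers are illustrative only; 1.96 is the 0.975-quantile z-score from the quoted answer):

```python
import math

def vev_from_var(var_level, S0, z=1.96):
    """Solve 0.5*VEV^2 + z*VEV + ln(VaR/S0) = 0 for the positive root.
    Assumes VaR < S0 so that ln(VaR/S0) < 0."""
    c = math.log(var_level / S0)
    disc = z * z - 2.0 * c          # discriminant of VEV^2 + 2z*VEV + 2c = 0
    return -z + math.sqrt(disc)

# Round-trip check: start from a known VEV, compute the implied VaR, recover VEV
true_vev = 0.30
S0 = 100.0
var_level = S0 * math.exp(-0.5 * true_vev**2 - 1.96 * true_vev)
print(vev_from_var(var_level, S0))  # ~0.30
```

The other root of the quadratic is negative and so is discarded.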
Please see the following for more: http://economics.sas.upenn.edu/~fdiebold/papers/paper18/dsi.pdf The above paper also explains why scaling the way you did does not properly account for ...
Averages Questions for CAT Set-2 PDF

Download important CAT Averages Questions Set-2 with Solutions PDF, based on questions previously asked in the CAT exam. Practice Averages Questions Set-2 with Solutions for the CAT exam.

Question 1: The average marks of a student in 10 papers are 80. If the highest and the lowest scores are not considered, the average is 81. If his highest score is 92, find the lowest. a) 55 b) 60 c) 62 d) Cannot be determined

Question 2: Prof. Suman takes a number of quizzes for a course. All the quizzes are out of 100. A student can get an A grade in the course if the average of her scores is more than or equal to 90. Grade B is awarded to a student if the average of her scores is between 87 and 89 (both included). If the average is below 87, the student gets a C grade. Ramesh is preparing for the last quiz and he realizes that he will need to score a minimum of 97 to get an A grade. After the quiz, he realizes that he will score 70, and he will just manage a B. How many quizzes did Prof. Suman take? a) 6 b) 7 c) 8 d) 9 e) None of these

Question 3: 2 years ago, one-fifth of Amita's age was equal to one-fourth of the age of Sumita, and the average of their ages was 27 years. If the age of Paramita is also considered, the average age of the three of them declines to 24. What will be the average age of Sumita and Paramita 3 years from now? a) 25 years b) 26 years c) 27 years d) cannot be determined

Question 4: The average of 7 consecutive numbers is P. If the next three numbers are also added, the average shall a) remain unchanged b) increase by 1 c) increase by 1.5 d) increase by 2

Question 5: The average height of 22 toddlers increases by 2 inches when two of them leave this group. If the average height of these two toddlers is one-third the average height of the original 22, then the average height, in inches, of the remaining 20 toddlers is a) 30 b) 28 c) 32 d) 26

Question 6: Consider the set S = {2, 3, 4, …, 2n+1}, where n is a positive integer larger than 2007.
Define X as the average of the odd integers in S and Y as the average of the even integers in S. What is the value of X – Y? a) 0 b) 1 c) (1/2)*n d) (n+1)/2n e) 2008

Question 7: A college has raised 75% of the amount it needs for a new building by receiving an average donation of Rs. 600 from the people already solicited. The people already solicited represent 60% of the people the college will ask for donations. If the college is to raise exactly the amount needed for the new building, what should be the average donation from the remaining people to be solicited? a) Rs. 300 b) Rs. 250 c) Rs. 400 d) Rs. 500

Question 8: Three classes X, Y and Z take an algebra test. The average score in class X is 83. The average score in class Y is 76. The average score in class Z is 85. The average score of all students in classes X and Y together is 79. The average score of all students in classes Y and Z together is 81. What is the average for all the three classes? a) 81 b) 81.5 c) 82 d) 84.5

Question 9: Consider a sequence of seven consecutive integers. The average of the first five integers is n. The average of all the seven integers is: [CAT 2000] a) n b) n+1 c) kn, where k is a function of n d) n+(2/7)

Instructions: There are 60 students in a class. These students are divided into three groups A, B and C of 15, 20 and 25 students each. The groups A and C are combined to form group D.

Question 10: What is the average weight of the students in group D?
a) More than the average weight of A b) More than the average weight of C c) Less than the average weight of C d) Cannot be determined

Answers & Solutions:

1) Answer (B)
Total marks = 80 x 10 = 800. Total marks except the highest and lowest = 81 x 8 = 648. So the sum of the highest and lowest marks = 800 – 648 = 152. Since the highest score is 92, the lowest score = 152 – 92 = 60.

2) Answer (D)
Grade A $\geq$ 90 and Grade B = 87 to 89. If Ramesh scores 70 instead of 97, the change in marks = 97 – 70 = 27. This creates a change from grade A to grade B, i.e. an overall change in average of minimum marks for grade A – minimum marks for grade B = 90 – 87 = 3. $\therefore$ Number of quizzes = $\frac{27}{3} = 9$.

3) Answer (B)
Let 'A', 'S' and 'P' be Amita's, Sumita's and Paramita's present ages. It is given that 2 years ago, one-fifth of Amita's age was equal to one-fourth of the age of Sumita, and the average of their ages was 27 years. $\dfrac{(A-2)+(S-2)}{2} = 27$, so $A+S = 58$ … (1). Also, $\dfrac{A-2}{5} = \dfrac{S-2}{4}$, so $4A-8 = 5S-10$, i.e. $5S - 4A = 2$ … (2). From equations (1) and (2), S = 26 and A = 32. The average age of Amita, Sumita and Paramita 2 years ago was 24: $\dfrac{(A-2)+(S-2)+(P-2)}{3} = 24$, so $A+S+P = 78$. Hence P = 20. Therefore, the average age of Sumita and Paramita 3 years from now = $\dfrac{(S+3)+(P+3)}{2}$ = $\dfrac{(26+3)+(20+3)}{2}$ = 26 years. Hence, option B is the correct answer.

4) Answer (C)
Let the 7 consecutive numbers be a-3, a-2, a-1, a, a+1, a+2 and a+3. The sum of the numbers = 7a and the average of these numbers = a. If the next 3 numbers a+4, a+5 and a+6 are also added, then the average of these 10 numbers = $\dfrac{7a+a+4+a+5+a+6}{10} = a+1.5$. Thus, the average increases by 1.5. Hence, option C is the correct answer.

5) Answer (C)
Let the average height of the 22 toddlers be 3x.
The sum of the heights of the 22 toddlers = 66x. Hence the average height of the two toddlers who left the group = x, and the sum of their heights = 2x. The sum of the heights of the remaining 20 toddlers = 66x – 2x = 64x, so their average height = 64x/20 = 3.2x. The difference, 0.2x, equals 2 inches, so x = 10 inches. Hence the average height of the remaining 20 toddlers = 3.2x = 32 inches.

6) Answer (B)
The odd numbers in the set are 3, 5, 7, …, 2n+1. Sum of the odd numbers = 3+5+7+…+(2n+1) = $n^2 + 2n$. Average of the odd numbers = $(n^2 + 2n)/n = n+2$. Sum of the even numbers = 2 + 4 + 6 + … + 2n = 2(1+2+3+…+n) = 2*n*(n+1)/2 = n(n+1). Average of the even numbers = $n(n+1)/n = n+1$. So, the difference between the averages of the odd and even numbers = 1.

7) Answer (A)
Let there be 100 people in total whom the college will ask for donations. Of these, 60 people have already given an average donation of Rs. 600. Thus the total amount raised by the 60 people is 36,000. This is 75% of the total amount required, so the amount remaining is 12,000, which should be raised from the remaining 40 people. So the average donation needed is 12000/40 = Rs. 300.

8) Answer (B)
Let x, y and z be the numbers of students in classes X, Y and Z respectively. From the first condition we have 83x + 76y = 79(x+y), which gives 4x = 3y. Next we have 76y + 85z = 81(y+z), which gives 4z = 5y. Now the overall average of all the classes can be written as $\frac{83x+76y+85z}{x+y+z}$. Substituting the relations into this equation, we get $\frac{83x+76y+85z}{x+y+z}$ = (83*3/4 + 76 + 85*5/4)/(3/4 + 1 + 5/4) = 978/12 = 81.5.

9) Answer (B)
The first five numbers could be n-2, n-1, n, n+1, n+2. The next two numbers would then be n+3 and n+4, in which case the average of all 7 numbers would be $\frac{5n+(n+3)+(n+4)}{7}$ = n+1.

10) Answer (D)
As data regarding the weights of the students is not given, we can't determine the average weight of the students in group D.

We hope this CAT Averages Set-2 Questions with Solutions PDF for CAT will be helpful to you.
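As a quick sanity check of the solution to Question 6, a brute-force sketch (the particular values of n are arbitrary; the problem only requires n > 2007):

```python
def x_minus_y(n):
    """X - Y for S = {2, 3, ..., 2n+1}: average of odds minus average of evens."""
    s = range(2, 2 * n + 2)
    odds = [k for k in s if k % 2 == 1]
    evens = [k for k in s if k % 2 == 0]
    return sum(odds) / len(odds) - sum(evens) / len(evens)

for n in (2008, 3000, 5000):
    print(n, x_minus_y(n))  # always 1.0, independent of n
```

This matches the algebraic answer: the averages are n+2 and n+1, so X – Y = 1 for every n.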
This refers to problem 2.25 in Griffiths. Find the potential at a distance 'z' above the midpoint between two equal charges, q, a distance 'd' apart. Compute the electric field in each case. In the first part the charges are equal. The electric potential is: [tex]V = \frac{1}{4\pi \epsilon_0} \frac{2q}{\sqrt{z^2 + (d^2/4)}}[/tex] In this case, the electric field given by [itex]\vec E = -\nabla V[/itex] is: [tex]\vec E = \frac{1}{4\pi \epsilon_0} \frac{2qz}{[z^2 + (d^2/4)]^{3/2}} \hat z[/tex] This matches the result known from problem 2.2(a). In the second case we consider opposite charges +q and -q. In this case the electric potential V = 0, which naively suggests [itex]\vec E = -\nabla V = 0[/itex], in contradiction to a previous known result in problem 2.2(b), which gives: [tex]\vec E = \frac{1}{4\pi \epsilon_0} \frac{qd}{[z^2 + (d^2/4)]^{3/2}} \hat z[/tex] How is this discrepancy accounted for?
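One way to see the resolution numerically: V vanishes along the entire z-axis, so its z-derivative there is zero, and the nonzero field component comes from how V varies off the axis. A sketch (the charges are placed at $x = \pm d/2$ and $k = 1/4\pi\epsilon_0$ is set to 1, both choices made only for illustration):

```python
import math

def V(x_, z_, q=1.0, d=1.0, k=1.0):
    """Potential of +q at (d/2, 0) and -q at (-d/2, 0), with k = 1/(4*pi*eps0)."""
    return k * q * (1 / math.hypot(x_ - d / 2, z_) - 1 / math.hypot(x_ + d / 2, z_))

z0, q, d = 2.0, 1.0, 1.0
h = 1e-6

# On the axis the potential is identically zero, so dV/dz = 0 there ...
dV_dz = (V(0.0, z0 + h) - V(0.0, z0 - h)) / (2 * h)

# ... but the x-derivative is not zero: |E_x| = |dV/dx| matches problem 2.2(b)
dV_dx = (V(h, z0) - V(-h, z0)) / (2 * h)
expected = q * d / (z0**2 + d**2 / 4) ** 1.5  # magnitude, with k = 1
print(dV_dz, dV_dx, expected)
```

So knowing V only on the axis is not enough to take the gradient; the field component parallel to the dipole comes from the off-axis variation of V.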
Research Open Access Published: Fourth order Hamiltonian system with some singular nonlinear term and multiplicity result. Boundary Value Problems, volume 2016, Article number: 133 (2016)

Abstract

We consider a fourth order Hamiltonian system with a singular nonlinear term and prove a multiplicity result. We obtain two theorems on the number of weak solutions of this problem: the first shows that there exists a weak solution, and the second improves on the first, showing that there exist infinitely many weak solutions. We obtain the first result by a variational method and critical point theory, and the second by homology theory.

Introduction

Let \(\bar{N}_{\epsilon}(\theta)\) be the closure of an ϵ-neighborhood of \(\theta=(0,\ldots,0)\), \(\epsilon>0\) be a fixed small number, and D be an open subset in \(R^{n}\) with compact complement \(\bar{N}_{\epsilon}(\theta)=R^{n}\setminus D\), \(n\ge2\). Let \(c\in R\) and \(\vert \cdot \vert \) be a norm in \(R^{n}\). In this paper we consider the weak solutions \(z(t)=(z_{1}(t),\ldots,z_{n}(t))\in C^{4}([0,2\pi], D)\) of a fourth order Hamiltonian system with singular nonlinear term. Our problems are characterized as a singular fourth order Hamiltonian system with singularity at \(\{z(t)=\theta\}\), \(\theta=(0,\ldots,0)\). The motivation of this paper is the fourth order elliptic problem with singular potential. We recommend the book [1] for singular elliptic problems. Many authors have considered the fourth order elliptic boundary value problem. In particular, Choi and Jung [2] showed that the problem has at least two nontrivial solutions when \(c<\lambda_{1}\), \(\lambda_{1}(\lambda_{1}-c)< b<\lambda_{2}(\lambda_{2}-c)\), and \(s<0\) or when \(\lambda_{1}< c<\lambda_{2}\), \(b<\lambda_{1}(\lambda_{1}-c)\), and \(s>0\). We obtained these results by using a variational reduction method.
We [3] also proved that when \(c<\lambda_{1}\), \(\lambda_{1}(\lambda_{1}-c)< b<\lambda_{2}(\lambda_{2}-c)\), and \(s<0\), (1.2) has at least three nontrivial solutions by using degree theory. Tarantello [4] also studied (1.3). She showed that if \(c<\lambda_{1}\) and \(b\ge\lambda_{1}(\lambda _{1}-c)\), then (1.3) has a negative solution. She obtained this result by degree theory. Micheletti and Pistoia [5] also proved that if \(c<\lambda_{1}\) and \(b\ge \lambda_{2}(\lambda_{2}-c)\), then (1.3) has at least three solutions, by the variational linking theorem and Leray-Schauder degree theory. The eigenvalue problem has many eigenvalues \(\lambda_{j}\), \(j\ge1\), and corresponding eigenfunctions \(\phi_{j}\), \(j\ge 1\), suitably normalized with respect to the \(L^{2}([0,2\pi])\) inner product, and each eigenvalue \(\lambda_{j}\) is repeated as often as its multiplicity. The eigenvalue problem also has infinitely many eigenvalues \(\mu_{j}=\lambda_{j}(\lambda_{j}-c)\), \(j\ge1\), and corresponding eigenfunctions \(\phi_{j}\), \(j\ge1\). We note that \(\mu_{1}<\mu_{2}\le\mu_{3},\ldots, \mu_{j} \to +\infty\). In this paper we are trying to find the weak solutions \(z(t)\in C^{4}([0,2\pi],D)\cap\Lambda D\) of the system (1.1) satisfying, for all \(\phi(t)\in C^{4}([0,2\pi],D)\cap\Lambda D\), where Λ D is introduced in Section 2. Theorem 1.1 Assume that \(\lambda _{j}< c<\lambda_{j+1}\), \(j\ge1\). Then the system (1.1) has at least one nontrivial weak solution. Moreover, we improve Theorem 1.1 as follows. Theorem 1.2 Assume that \(\lambda _{j}< c<\lambda_{j+1}\), \(j\ge1\). Then the system (1.1) has infinitely many nontrivial weak solutions. For the proof of Theorem 1.1 we follow the approach of the variational method and use a minimax method in critical point theory on the loop space Λ D, and for the proof of Theorem 1.2 we use homology theory.
In Section 2, we introduce a loop subspace Λ D of the Banach space, and we prove that the associated functional J of (1.1) satisfies the \((P.S.)\) condition on the loop subspace Λ D. In Section 3, we use a minimax method and critical point theory to prove the existence of a nontrivial weak solution of (1.1), establishing Theorem 1.1. We also prove Theorem 1.2 by using critical point theory and homology theory to establish the existence of infinitely many nontrivial weak solutions.

Variational approach

Let \(L^{2}([0,2\pi],R)\) be the space of square integrable functions defined on \([0,2\pi]\). Any element x in \(L^{2}([0,2\pi],R)\) can be written as We shall denote the subset of \(L^{2}([0,2\pi],R)\) satisfying the 2π-periodic condition by \(L^{2}(S^{1},R)\). Similar notations will be used for other 2π-periodic function spaces. We define a subspace W of \(L^{2}(S^{1},R)\) as follows: Then this is a complete normed space with norm Let Then \(W=W^{-}\oplus W^{+}\), and for \(x\in W\), \(x=x^{-}+x^{+}\in W^{-}\oplus W^{+}\). Let E be the n-fold Cartesian product space of W, i.e., Let \(E^{+}\) and \(E^{-}\) be the subspaces on which the functional is positive definite and negative definite, respectively. Then Let \(P^{+}\) be the projection from E onto \(E^{+}\) and \(P^{-}\) the projection from E onto \(E^{-}\). The norm in E is given by where \(\Vert P^{+}z\Vert ^{2}_{E}=\sum^{n}_{i=1}\Vert P^{+}z_{i}\Vert ^{2}_{W}\), \(\Vert P^{-}z\Vert ^{2}_{E}=\sum^{n}_{i=1}\Vert P^{-}z_{i}\Vert ^{2}_{W}\), \(z=(z_{1},\ldots,z_{n})\). Let \(\nu^{1}_{\mu_{i}}, \nu^{2}_{\mu_{i}}, \ldots, \nu ^{n}_{\mu_{i}}\) be the eigenvalues of the matrix that is, Let \((c^{1}_{1,\mu_{i}},\ldots,c^{1}_{n,\mu_{i}}), (c^{2}_{1,\mu _{i}},\ldots,c^{2}_{n,\mu_{i}}), \ldots, (c^{n}_{1,\mu _{i}},\ldots,c^{n}_{n,\mu_{i}})\) be the eigenvectors of the matrix corresponding to the eigenvalues \(\nu ^{1}_{\mu_{i}}\), \(\nu^{2}_{\mu_{i}}, \ldots, \nu^{n}_{\mu _{i}}\), respectively.
Since \(\nu^{k}_{\mu_{i}}=\mu_{i}\) for all \(k=1, 2, \ldots, n\), \((c^{1}_{1,\mu_{i}},\ldots,c^{1}_{n,\mu _{i}})=\cdots=(c^{n}_{1,\mu_{i}},\ldots,c^{n}_{n,\mu_{i}})\). Let us set Let us set We note that and Let us introduce an open set of the Hilbert space E as follows: Let us consider the functional on Λ D The Euler equation for J is (1.1). Lemma 2.1 \(J(z)\) is continuous and Fréchet differentiable in Λ D with Fréchet derivative Moreover, \(DJ\in C\). That is, \(J\in C^{1}\). Proof First we prove that \(J(z)\) is continuous. For \(z, w\in\Lambda D\), We have Thus we have Next we shall prove that \(J(z)\) is Fréchet differentiable in Λ D. For \(z, w\in\Lambda D\), Thus by (2.3), we have Similarly, it is easily checked that \(J\in C^{1}\). □ Lemma 2.2 Assume that \(\lambda _{j}< c<\lambda_{j+1}\), \(j\ge1\). Let \(\{z_{k}\}\subset\Lambda D\), \(z_{k}(t)\in Z\), and \(z_{k}\rightharpoonup z\) weakly in Λ D with \(z\in\partial \Lambda D\). Then \(J(z_{k})\to\infty\), where Z is a neighborhood of \(\theta=(0,\ldots,0)\). Proof Since \(\frac{1}{\vert z(t)\vert^{2p}}\) has a singular point \(\theta =(0,\ldots,0)\) in \(R^{n}\), the conclusion follows. □ Now, we shall prove that \(J(z)\) satisfies the \((P.S.)_{\gamma}\) condition for any \(\gamma\in R\). Lemma 2.3 Assume that \(\lambda _{j}< c<\lambda_{j+1}\), \(j\ge1\). If \(\Vert z_{k}\Vert _{E}\to\infty\), then there exist \((z_{h_{k}})_{k}\) and z in Λ D such that Proof Let \(\Vert z_{k}\Vert _{E}\to\infty\). Then \(\frac{1}{\vert z_{k}(t)\vert^{2p}}\) is bounded, and it follows that Since by (2.5), we have Thus the sequence \((\int^{2\pi}_{0}\frac{\operatorname{grad}_{z}\frac{1}{\vert z_{k}(t)\vert ^{2p}} \cdot z_{k}(t)}{\Vert z_{k}\Vert _{E}} \,dt)_{k}\) is bounded.
It follows from (2.6) that there exists a subsequence \((z_{h_{k}})_{k}\) such that Since \(\operatorname{grad}_{z}\frac{1}{\vert z_{k}(t)\vert ^{2p}}\) is bounded when \(\Vert z_{k}\Vert _{E}\to\infty\), it follows from (2.7) that there exists z in Λ D such that Thus the lemma is proved. □ Lemma 2.4 Assume that \(\lambda _{j}< c<\lambda_{j+1}\), \(j\ge1\). Then \(J(z)\) satisfies the \((P.S.)_{\gamma}\) condition for any \(\gamma\in R\). Proof Let \(\gamma\in R\) and \((z_{k})_{k}\subset\Lambda D\) be a sequence such that \(J(z_{k})\to\gamma\) and or equivalently where \(D_{tttt}z_{k}(t)=\ddddot{z_{k}}(t)\), \((D_{tttt}+cD_{tt})^{-1}\) is a compact operator. We shall show that \((z_{k})_{k}\) has a convergent subsequence. We claim that \(\{z_{k}\}\) is bounded in Λ D. By contradiction, we suppose that \(\Vert z_{k}\Vert _{E}\to\infty\) and set \(w_{k}=\frac{z_{k}}{\Vert z_{k}\Vert _{E}}\). Since \((w_{k})_{k}\) is bounded, up to a subsequence, \((w_{k})_{k}\) converges weakly to some \(w_{0}\) in Λ D. Since \(J(z_{k})\to\gamma\) and \(DJ(z_{k})\to0\), we have Thus we have Thus we have \(w_{0}=0\), which is absurd because \(\Vert w_{0}\Vert _{E}=1\). Thus \(\{z_{k}\}\) is bounded in Λ D. Thus \((z_{k})_{k}\) has a convergent subsequence converging weakly to some z in Λ D. We claim that this subsequence of \((z_{k})_{k}\) converges strongly to z. By \(DJ(z_{k})\to0\), we have We claim that the mapping \(z_{k}\mapsto (\operatorname{grad}_{z}\frac{1}{\vert z_{k}(t)\vert ^{2p}})_{k}\) is compact. Since the embedding \(\Lambda D\hookrightarrow C^{2}([0,2\pi]\times \Lambda D,R^{n})\) is compact, the sequence \((\int^{2\pi}_{0}\operatorname{grad}_{z}\frac{1}{\vert z_{k}(t)\vert ^{2p}}\cdot z_{k}(t)\,dt)_{k}\) has a convergent subsequence which converges to \(\int^{2\pi}_{0} \operatorname{grad}_{z}\frac{1}{\vert z(t)\vert ^{2p}} \cdot z(t)\,dt\).
Because \(\{z_{k}\}\) is bounded and the subsequence of \((z_{k})_{k}\) converges weakly to some z in Λ D, \((\operatorname{grad}_{z}\frac{1}{\vert z_{k}(t)\vert ^{2p}})_{k}\) is bounded. Since \((D_{tttt}+cD_{tt})^{-1}\) is compact, by (2.8), \((P_{+}z_{k})_{k}\) and \((P_{-}z_{k})_{k}\) have subsequences converging strongly. Thus \((z_{k})_{k}\) has a subsequence converging strongly. Thus the lemma is proved. □ Lemma 3.1 There exists a sequence of integers \((b_{i})_{i}\) such that \(H_{b_{i}}(\Lambda D)\neq0\). Proof Let \(\epsilon>0\) be a fixed small number such that \(\bar {N}_{\epsilon}(\theta)\) contains θ, and choose \(R>0\) such that \(\bar{N}_{\epsilon}(\theta)\subset\operatorname{int}(B_{R})\). Then we have Since \(R^{n}\setminus B_{R}\) is a deformation retract of \(R^{n}\setminus\{\theta\}\), \(\Lambda(R^{n}\setminus B_{R})\) is a deformation retract of \(\Lambda(R^{n}\setminus\{\theta\})\), so \(\Lambda(R^{n}\setminus B_{R})\) is a deformation retract of Λ D. Then we have By [6], the Poincaré series of \(\Lambda(S^{n-1})\) is written as with \(Z_{2}\) coefficients. Thus the lemma is proved. □ Let us set a level set and Lemma 3.2 Assume that \(\lambda _{j}< c<\lambda_{j+1}\), \(j\ge1\). For each \(\gamma>0\), there exists a finite dimensional singular complex \(\Omega=\Omega_{\gamma}\) such that the level set \(J_{\gamma}\) is deformed into Ω. Proof Let us choose \(z\in J_{\gamma}\).
Then \(z\in\Lambda D\) and we have We note that there exists a constant \(R_{0}>0\) such that We also note that there exists a neighborhood Z of \(\bar {N}_{\epsilon}(\theta)\) such that It follows that there exists a constant \(\gamma_{0}>0\) such that i.e., we have Since the number of elements of the set \(\{\lambda_{i}-c\mid \lambda_{i}-c<0\}\) is finite and \(\lambda_{i}-c\to\infty\) as \(i\to\infty\), there exists a constant \(\gamma_{1}>0\) such that By Lemma 2.2, there exists \(\epsilon_{0}=\epsilon(\gamma,\gamma _{1})\) such that Let us choose an integer \(M=M_{\gamma}>2\pi\frac{\gamma_{1}^{\frac {1}{2}}}{\epsilon_{0}}\) and let Let us define a broken line \(\forall t\in[t_{i-1},t_{i}]\), \(i=0, 1, 2,\ldots, M\), \(\forall x\in J_{\gamma}\). Let The corresponding \(\bar{z}\mapsto(z(t_{1}),z(t_{2}),\ldots,z(t_{M}))\) defines a homeomorphism between Ω and a certain open subset of the M-fold product \(D\times D\times\cdots\times D\). We first claim that \(\Omega\subset\Lambda D\). In fact, \(\forall z\in J_{\gamma}\), for \(t_{2}>t_{1}\), by (3.3), we have Therefore \(\forall s\in[t_{i-1},t_{i}]\), \(i=0, 1, 2,\ldots,M\). We next claim that there exists \(\nu\in C([0,1]\times J_{\gamma },\Lambda D)\) such that \(\nu(0,\cdot)=\operatorname{id}\), and \(\nu (1,J_{\gamma})=\Omega\). In fact, let us choose \(z(t)\in\Lambda D\) and let us define ν as follows: Then \(\nu(0,\cdot)=\operatorname{id}\), and \(\nu(1,J_{\gamma})=\Omega\). Thus we have proved that \(J_{\gamma}\) is deformed into Ω in the loop space Λ D. Thus the lemma is proved. □ Proof of Theorem 1.1 (Existence of a weak solution) We shall show that the functional \(J(z)\) has a critical value by the generalized mountain pass theorem. We first show that \(J(z)\) satisfies the geometric assumptions of the generalized mountain pass theorem. Let Then Let \(z\in\Lambda D^{+}\).
Then we have Since \(\frac{1}{\vert z(t)\vert ^{2p}}\) is positive and bounded, if \(z\in \Lambda D^{+}\), then there exists a number \(r>0\) such that if \(z\in \partial B_{r}\cap\Lambda D^{+}\), then \(J(z)>0\). Thus \(\inf_{z\in \partial B_{r}\cap\Lambda D^{+}}J(z)>0\). We note that by (3.1), there exists \(R>R_{0}\) such that and by (3.2), there exists a neighborhood Z of \(\bar{N}_{\epsilon }(\theta)\) such that Let us choose \(e\in B_{1}\cap\Lambda D^{+}\). Let \(z\in\Lambda D^{-}\oplus\{\rho e\mid \rho>0\}\). Then \(z=x+y\), \(x\in\Lambda D^{-}\), \(y=\rho e\). Then we have By (3.1), there exists a constant \(R_{0}>0\) such that if \((t,z(t))\in [0,2\pi]\times R^{n}\setminus B_{R_{0}}\), then \(\vert \frac{1}{\vert z(t)\vert ^{2p}}\vert <+\infty\) and \(\vert \operatorname{grad}_{z}\frac{1}{\vert z(t)\vert ^{2p}}\vert <+\infty\). Thus there exist a large number \(R>R_{0}\) and a small number \(\rho>0\) such that if \(z=x+\rho e\in\partial Q=\partial (((\bar{B}_{R}\cap\Lambda D^{-})\oplus\{re\mid e\in B_{1}\cap\Lambda D^{+}, 0< r< R\})\setminus B_{R_{0}})\), then \(J(z)<0\). Thus we have \(\sup_{z\in\partial Q}J(z)<0\). By Lemma 2.1, \(J(z)\) is continuous and Fréchet differentiable in Λ D and, moreover, \(DJ\in C\). By Lemma 2.4, \(J(z)\) satisfies the \((P.S.)\) condition. Thus by the generalized mountain pass theorem [7], \(J(z)\) possesses a critical value \(c>0\), which is characterized as where Proof of Theorem 1.2 (Existence of infinitely many nontrivial weak solutions) By contradiction, we assume that \(J(z)\) has only finitely many critical points \(z_{1}, z_{2}, \ldots, z_{l}\) such that, by the process of the proof of Theorem 1.1, \(J(z_{j})>0\), \(1\le j\le l\). Let us set We note that \(\dim\operatorname{ker}(D^{2}J(z_{j}))\le2n\), for all j. Letting where \(M_{\gamma}\) is defined in the proof of Lemma 3.2, we have and It follows that Since which is a contradiction to Lemma 3.1.
Thus \(J(z)\) has infinitely many critical points \(z_{j}\), \(j=1, 2,\ldots\) , in Λ D. □ References 1. Ghergu, M, Rǎdulescu, VD: Singular Elliptic Problems. Bifurcation and Asymptotic Analysis. Clarendon Press, Oxford (2008) 2. Choi, QH, Jung, T: Multiplicity results on nonlinear biharmonic operator. Rocky Mt. J. Math. 29(1), 141-164 (1999) 3. Jung, TS, Choi, QH: Multiplicity results on a nonlinear biharmonic equation. Nonlinear Anal. 30(8), 5083-5092 (1997) 4. Tarantello, G: A note on a semilinear elliptic problem. Differ. Integral Equ. 5(3), 561-565 (1992) 5. Micheletti, AM, Pistoia, A: Multiplicity results for a fourth-order semilinear elliptic problem. Nonlinear Anal. 31(7), 895-908 (1998) 6. Bott, R: Nondegenerate critical manifolds. Ann. Math. 60, 248-261 (1954) 7. Rabinowitz, PH: Minimax Methods in Critical Point Theory with Applications to Differential Equations. CBMS. Regional Conf. Ser. Math., vol. 65. Am. Math. Soc., Providence (1986) Acknowledgements This work was supported by Inha University Research Grant. Competing interests The authors declare that they have no competing interests. Authors' contributions All authors contributed equally to the manuscript and read and approved the final manuscript. MSC: 35Q72. Keywords: fourth order Hamiltonian system, singular nonlinear term, variational method, critical point theory, minimax method, homology theory, \((P.S.)_{c}\) condition
Let $a_t$ and $b_t$ be white noise processes. Can we say $c_t=a_t+b_t$ is necessarily a white noise process? No, you need more (at least under Hayashi's definition of white noise). For example, the sum of two independent white noise processes is white noise. Why is $a_t$ and $b_t$ being white noise insufficient for $a_t+b_t$ to be white noise? Let $\{a_t\}$ and $\{b_t\}$ be white noise processes. Define $c_t = a_t + b_t$. Trivially we have $\mathrm{E}[c_t] = 0$. Checking the covariance condition: \begin{align*} \mathrm{Cov} \left( c_t, c_{t-j} \right) &= \mathrm{Cov} \left( a_t, a_{t-j}\right) + \mathrm{Cov} \left( a_t, b_{t-j}\right) + \mathrm{Cov} \left( b_t, a_{t-j}\right) + \mathrm{Cov} \left( b_t, b_{t-j}\right) \end{align*} Applying that $\{a_t\}$ and $\{b_t\}$ are white noise (so the first and last terms vanish for $j \neq 0$): \begin{align*} \mathrm{Cov} \left( c_t, c_{t-j} \right) &= \mathrm{Cov} \left( a_t, b_{t-j}\right) + \mathrm{Cov} \left( b_t, a_{t-j}\right) \end{align*} So whether $\{c_t\}$ is white noise depends on whether $\mathrm{Cov} \left( a_t, b_{t-j}\right) + \mathrm{Cov} \left( b_t, a_{t-j}\right) = 0$ for all $j\neq 0$. Example where the sum of two white noise processes is not white noise: let $\{a_t\}$ be white noise and let $b_t = a_{t-1}$. Observe that the process $\{b_t\}$ is also white noise. Let $c_t = a_t + b_t$, hence $c_t = a_t + a_{t-1}$, and observe that the process $\{c_t\}$ is not white noise. Even simpler than @MatthewGunn's answer: consider $b_t = -a_t$. Obviously $c_t \equiv 0$ is not white noise; it'd be hard to call it any kind of noise. The broader point is, if we don't know anything about the joint distribution of $a_t$ and $b_t$, we won't be able to say what happens when we try to examine objects which depend on both of them. The covariance structure is essential to this end. Addendum: of course, this is exactly the purpose of noise-cancelling headphones!
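@MatthewGunn's counterexample can be checked by simulation (the sample size and seed below are arbitrary): $c_t = a_t + a_{t-1}$ has lag-1 autocorrelation near $1/2$, which violates the white noise covariance condition.

```python
import random

random.seed(0)
n = 100_000
a = [random.gauss(0.0, 1.0) for _ in range(n)]    # white noise a_t
c = [a[i] + a[i - 1] for i in range(1, n)]        # c_t = a_t + a_{t-1}

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    m = sum(x) / len(x)
    d = [v - m for v in x]
    num = sum(d[i] * d[i - 1] for i in range(1, len(d)))
    den = sum(v * v for v in d)
    return num / den

print(lag1_autocorr(a))  # ~0: a is white noise
print(lag1_autocorr(c))  # ~0.5: c is not white noise
```

Theoretically $\mathrm{Var}(c_t) = 2\sigma^2$ and $\mathrm{Cov}(c_t, c_{t-1}) = \sigma^2$, giving autocorrelation exactly $1/2$.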
-- to invert the phase of external noises and cancel them out -- so, going back to the physical definition of white noise, this sequence is literal silence. No noise at all. In electronics, white noise is defined as having a flat frequency spectrum ('white') and being random ('noise'). Noise can generally be contrasted with 'interference', one or more undesired signals being picked up from elsewhere and added to the signal of interest, and 'distortion', undesired signals generated by nonlinear processes acting on the signal of interest itself. It is possible for two different signals to have correlated parts, and therefore to cancel differently at different frequencies or at different times (e.g. canceling completely over a certain band of frequencies or during a certain interval of time, but not canceling, or even adding constructively, over another band or interval). Such cancellation, however, presumes a correlation between the two signals, which is precluded by the random aspect of 'noise', which is what was asked about. If, indeed, the signals are 'noise' and therefore independent and random, then no such correlations should exist, so their sum will also have a flat frequency spectrum and will therefore also be white. Also, trivially, if the noises are exactly anti-correlated, then they cancel to give zero output at all times, which also has a flat frequency spectrum (zero power at all frequencies); this could fall under a sort of degenerate definition of white noise, except that it isn't random and can be perfectly predicted. Noise in electronics can come from several places.
For example, shot noise, arising from the random arrival of electrons in a photocurrent (coming from the random arrival times of photons), and Johnson noise, coming from the Brownian motion of electrons in a resistive element warmer than absolute zero, both produce white noise, although always with a finite bandwidth at both ends of the spectrum in any real system measured over a finite length of time. If both white noise sounds are traveling in the same direction, and their frequency components happen to line up in phase, then they simply add. But one thing I am not sure about is whether, after adding up, the result remains white noise or becomes some other type of sound with a different frequency content.
The basic point is the following: If an element commutes with more than half of the other elements, it already commutes with all elements. This is basically Lagrange's theorem: Let $C_R(x) = \{r \in R | rx = xr\}$ be the centralizer of $x$. We assume $|C_R(x)| > \frac{1}{2} |R|$. $C_R(x)$ is a subgroup of $R$ and thus its order divides $|R|$. However, by our assumption $\frac{|R|}{|C_R(x)|} < 2$, so $C_R(x)$ already has to be all of $R$. Intuitively, $C_R(x)$ is big enough to cover $R$: take an element $r$ of $R$ and look at $C_R(x) + r$. This set has the same number of elements as $C_R(x)$, and since there are more than $\frac{1}{2} |R|$ such elements, these two sets cannot be disjoint. But then they must actually be equal! Now if more than half of the elements commute with more than half of the elements, then more than half of the elements commute with every element. Put another way: Every element commutes with more than half of the elements of $R$ and thus -- by the above -- every element commutes with every element; $R$ is commutative. Now why do more than half of the elements commute with more than half of the elements? This is where the idempotents come into play: Let $T = \{r \in R | r^2=r\}$ be the idempotents. Then $|T| > \frac{3}{4} |R|$ by assumption. Now fix $x \in T$. We will show that $x$ commutes with a special subset of $R$ and then count that subset to see that it has more than $\frac{1}{2} |R|$ elements. The idea to get to this set is the following: If $x$ and $y$ and $x + y$ are idempotent, we get $(x + y)^2 = x^2 + xy + yx + y^2 = x + y$, so $xy = -yx$. For $x=y$ this says $x=-x$ or $2x = 0$. Let's concentrate on this case for a bit: Since we have a lot of idempotents, the "probability" is high for $x + x$ or equivalently $-x$ to also be idempotent. (If $-x$ is idempotent, we have $-x = (-x)^2=x^2=x$, thus $2x = x + x$, which is idempotent; if $x + x$ is idempotent, we have $x = -x$ idempotent by the above.) How many elements does $T \cap (-T)$ have?
Well, by inclusion-exclusion for example, more than half of $|R|$. With a similar (Lagrange-style) argument to the above, we now see that $T \cap (-T)$ is a subset of the subgroup $\{r \in R | 2r = 0 \}$, and thus this subgroup exhausts $R$. $R$ has characteristic 2! Now the only thing left to ensure is that for our $x \in T$ there are enough idempotents $y$ such that $x + y$ is also idempotent, because then -- as we saw -- $xy = -yx = yx$. So we count the set $T \cap (T - x)$, or equivalently $T\cap (T + x)$. This is fairly easy using inclusion-exclusion again: Since we know $|T + x| = |T| > \frac{3}{4} |R|$ and $|T \cup (x + T)| \le |R|$, we get $|T \cap (x + T)| > \frac{1}{2} |R|$, as we wanted! I hope this didn't become too convoluted; this theorem is a corollary of Theorem 4 of this paper. I tried to make its application to this specific scenario a bit more transparent.
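As a sanity check of the extreme case, here is a brute-force verification (a small Python sketch, not part of the original argument) on the Boolean ring $R = \mathbb{F}_2 \times \mathbb{F}_2$, where every element is idempotent, so certainly $|T| > \frac{3}{4}|R|$; and indeed $R$ has characteristic 2 and is commutative:

```python
from itertools import product

# Elements of R = F2 x F2 with componentwise + and * (a Boolean ring).
R = list(product([0, 1], repeat=2))
add = lambda x, y: tuple((a + b) % 2 for a, b in zip(x, y))
mul = lambda x, y: tuple((a * b) % 2 for a, b in zip(x, y))

T = [x for x in R if mul(x, x) == x]                       # idempotents
assert len(T) == len(R)                                    # all 4 elements idempotent
assert all(add(x, x) == (0, 0) for x in R)                 # characteristic 2
assert all(mul(x, y) == mul(y, x) for x in R for y in R)   # commutative
print("checks pass")
```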
I am reading a book "The concepts and practice of mathematical finance" by Mark Joshi. In Chapter 18 he discusses the shapes and dynamics of smiles under different models. I do not understand what is meant by "smile implied by a model". As I understand it, given a vanilla option with a fixed maturity $T$ and strike $K$, implied volatility is defined as the value of the parameter $\sigma$ we need to input into the Black-Scholes formula in order to get the price observed in the market. Repeating this for different strikes, we obtain the smile as a function of $K$. Correct me if I am wrong, but implied volatilities are always meant to be as above and relate to the Black-Scholes model. In this case, can anyone please give me a definition of "smile implied by a model" for a more general model? For example, Joshi discusses the smiles of the stochastic volatility model: $$\frac{dS}{S} = \mu dt + V^{1/2} dW^{(1)},$$ $$ dV = \lambda (V_r - V) dt + \sigma_V V^\alpha dW^{(2)}.$$ What totally confuses me is when he talks about the smile implied by this model. We are allowed to calibrate any of $\alpha$, $\lambda$, $V_r$ and $\sigma_V$ so that the model matches market prices, but volatility of the stock is a stochastic process so there is no way we can introduce "implied volatility". I have a similar confusion in the case of jump-diffusions or variance gamma. There could be even more parameters and the solution set that gives the market price would be multidimensional. Let alone the fact that volatility of the stock may not be a parameter. Thank you for your help.
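For what it's worth, "implied volatility" always means Black-Scholes inversion: whatever produced the vanilla price (a stochastic volatility model, jumps, or the market itself), you feed that price back through the Black-Scholes formula and solve for the $\sigma$ that reproduces it, strike by strike. A minimal sketch (standard-library Python; the bisection bracket and parameter values are illustrative assumptions):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0):
    """Invert Black-Scholes in sigma by bisection (price must lie in the bracket)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative: recover the vol that produced a given price.
price = bs_call(S=100, K=110, T=1.0, r=0.02, sigma=0.25)
print(implied_vol(price, S=100, K=110, T=1.0, r=0.02))  # ~0.25
```

Doing this for model prices across a grid of strikes $K$ is what traces out "the smile implied by the model".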
Consider a coupon bond, starting at $T_{0}$, with face value $K$, coupon payments at $T_1, . . . , T_n$ and a fixed coupon rate $r$. Determine the coupon rate $r$ such that the price of the bond at $T_0$ equals its face value. For simplicity, we let \begin{align} &\delta=\frac{T_n-T_0}{n}\\ &T_i=T_0+i\delta, \end{align} for $i=1,2,...,n$, so we have $$c_i=r\,\delta\,K.$$ The price $p(t)$, at a time $t < T_1$, of the coupon bond is given by $$p(t)=KP(t,T_{n})+\sum_{i=1}^{n}c_i P(t,T_{i}),$$ and we require that the price of the bond at $T_0$ equal its face value; thus $$p(T_0)=K=KP(T_0,T_{n})+r\,\delta\,K\sum_{i=1}^{n} P(T_0,T_{i}),$$ and then $$r=\frac{1-P(T_0,T_{n})}{\delta\sum_{i=1}^{n} P(T_0,T_{i})}.$$ For more details, you can see this link Are you familiar with the concept of yield-to-maturity (YTM)? Here you find all the necessary steps. You first calculate the YTM using the current price and the cashflows. Then, as you can see in the paper provided, a bond with a coupon rate equal to its YTM is priced at par (100), and thus the price equals its face value. I always thought $y(t,T) = \frac{-\ln(P(t,T))}{T-t}$ was a good quick approximation; it applies when the bond price is calculated in continuous time
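The par-rate formula is easy to sanity-check numerically; the sketch below assumes a hypothetical flat continuously compounded zero curve $P(T_0, T_0+\tau) = e^{-y\tau}$ (all parameter values are made up for illustration):

```python
from math import exp

def par_coupon_rate(discount, delta, n):
    """r = (1 - P(T0,Tn)) / (delta * sum_i P(T0,Ti))."""
    s = sum(discount(i * delta) for i in range(1, n + 1))
    return (1.0 - discount(n * delta)) / (delta * s)

# Hypothetical flat curve: P(T0, T0 + tau) = exp(-y * tau).
y, delta, n, K = 0.03, 0.5, 10, 100.0
P = lambda tau: exp(-y * tau)

r = par_coupon_rate(P, delta, n)
# Price the bond with this coupon rate: it should come out at par.
price = K * P(n * delta) + sum(r * delta * K * P(i * delta) for i in range(1, n + 1))
print(r, price)  # price equals 100.0 up to rounding
```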
On a parabolic equation related to the p-Laplacian. Boundary Value Problems, volume 2016, Article number: 78 (2016). Abstract Consider a parabolic equation related to the p-Laplacian. If the diffusion coefficient of the equation is degenerate on the boundary, then, whether or not the trace of the solution on the boundary can be defined, the stability of the solutions can always be established, by choosing a suitable test function, without a boundary condition. Introduction and the main results Consider an equation related to the p-Laplacian, where Ω is a bounded domain in \(\mathbb{R}^{N}\) with appropriately smooth boundary, \(\rho(x) = \operatorname{dist} (x,\partial\Omega)\), \(p>1\), \(\alpha >0\). Yin and Wang [1] first studied the equation and showed that, when \(\alpha>p-1\), the solution of the equation is completely controlled by the initial value. In cooperation with Yuan, the author studied the following equation [2]: and showed that, for the well-posedness of equation (1.3), only a partial boundary condition, rather than the condition on the whole boundary, is necessary. Here, \(\Sigma _{p} \subseteq\partial\Omega\) is just a portion of ∂Ω, which is determined by the first-order derivative terms \(\frac{\partial b_{i}(u)}{\partial x_{i}}\), \(i=1, 2, \ldots , N\). Certainly, the initial value is always necessary, In our paper, we will consider the well-posedness of the solutions of equation (1.1). First of all, we give some basic function spaces. For every fixed \(t\in[0, T]\), we introduce the Banach space and denote by \(V'_{t}(\Omega)\) its dual.
By \(\mathbf{W}(Q_{T})\) we denote the Banach space. \(\mathbf{W}'(Q_{T})\) is the dual of \(\mathbf{W}(Q_{T})\) (the space of linear functionals over \(\mathbf{W}(Q_{T})\)): The norm in \(\mathbf{W}'(Q_{T})\) is defined by Definition 1.1 and for any function \(\varphi \in C_{0}^{\infty}({Q_{T}})\), we have The initial value, as usual, is satisfied in the sense that We can easily obtain the existence of the weak solution. Theorem 1.2 Let us suppose that \(p>1\), \(\alpha>0\), and \(f(s,x,t)\) is a Lipschitz function. If We are mainly concerned with the stability of the solutions. As in [1–3], due to the fact that the weak solution defined in our paper satisfies (1.7), when \(\alpha< p-1\) we can define the trace of u on the boundary, while for \(\alpha\geq p-1\) the obtained weak solution lacks the regularity needed to define the trace on the boundary. However, in this short paper, by choosing a suitable test function, we can obtain the stability of the weak solutions without the boundary condition, provided only that \(\alpha>0\). In other words, whether the weak solution is regular enough to define the trace on the boundary is not so important. The main result of our paper is the following theorem. Theorem 1.3 Let u and v be two weak solutions of equation (1.1) with the initial values \(u(x,0)\) and \(v(x,0)\), respectively. If \(\alpha>0\) and \(f(s,x,t)\) is a Lipschitz function, then Compared with [1], the greatest improvement is that we do not require any boundary condition, regardless of whether \(\alpha< p-1\) or \(\alpha \geq p-1\). At the same time, the nonlinear source term \(f(u,x,t)\) adds difficulty when we use the compact convergence theorem. The proof of the existence of the weak solution is quite different from that in [1]. Moreover, we consider the following equation, which seems similar to our equation (1.1): This equation has been studied thoroughly for a long time; one may refer to [4–7]. Generally, some restrictions must be imposed on the growth order of u in \(f(u,x,t)\).
Very recently, Benedikt et al. [8] have studied the equation with \(0<\gamma<1\) and shown that uniqueness of the solutions of equation (1.13) fails. From this short comment, one can see that the degeneracy of the coefficient \(\rho^{\alpha}\) plays an important role in the well-posedness of the solutions; it can even eliminate the effect of the source term \(f(u,x,t)\). By the way, the author has been interested in the boundary value condition of a degenerate parabolic equation for some time; one may refer to [9]. The proof of Theorem 1.2 Lemma 2.1 Let \(q\geq1\). If \(u_{\varepsilon}\in L^{\infty }(0,T;L^{2}(\Omega))\cap\mathbf{W}(Q_{T})\), \(\| u_{\varepsilon t}\| _{\mathbf{W}'(Q_{T})}\leq c\), \(\|\nabla(|u_{\varepsilon }|^{q-1} u_{\varepsilon})\|_{p,Q_{T}}\leq c\), then there is a subsequence of \(\{u_{\varepsilon}\}\) which is relatively compact in \(L^{s}(Q_{T})\) with \(s\in(1,\infty)\). where \({\rho_{\varepsilon}} = \rho\ast\delta_{\varepsilon}+ \varepsilon\), \(\varepsilon > 0\), \(\delta_{\varepsilon}\) is the mollifier as usual, \({u_{\varepsilon ,0}} \in{C^{\infty}_{0} }(\Omega)\) and \(\rho_{\varepsilon}^{\alpha}{ \vert {\nabla{u_{\varepsilon,0}}} \vert ^{p}}\in {L^{1}}(\Omega)\) is uniformly bounded, and \({u_{\varepsilon,0}}\) converges to \(u_{0}\) in \(W_0^{1,p}(\Omega)\). It is well known that the above problem has a unique classical solution [12–14]. Lemma 2.2 Proof By the maximum principle, there is a constant c dependent only on \({\Vert {{u_{0}}} \Vert _{{L^{\infty}}(\Omega)}}\) but independent of ε, such that Multiplying (2.1) by \(u_{\varepsilon}\) and integrating it over \(Q_{T}\), we get For small enough \(\lambda>0\), let \(\Omega_{\lambda}=\{x\in\Omega: \operatorname{dist}(x,\partial\Omega)>\lambda\}\).
Since \(p>1\), by (2.6), Now, for any \(v\in\mathbf{W}(Q_{T})\) with \(\|v\|_{\mathbf{W}(Q_{T})}=1\), by the Young inequality, we can show that then Now, let \(\varphi\in C_{0}^{1}(\Omega)\), \(0\leq\varphi\leq1\), such that Then we have and so By Lemma 2.1, \(\varphi u_{\varepsilon}\) is relatively compact in \(L^{s}(Q_{T})\) with \(s\in(1,\infty)\). Then \(\varphi u_{\varepsilon }\rightarrow\varphi u\) a.e. in \(Q_{T}\). In particular, due to the arbitrariness of λ, \(u_{\varepsilon}\rightarrow u\) a.e. in \(Q_{T}\). and In order to prove that u satisfies equation (1.1), we notice that, since \(u_{\varepsilon} \rightarrow u\) almost everywhere, also \(f(u_{\varepsilon},x,t)\rightarrow f(u,x,t)\) almost everywhere. Then for any function \(\varphi \in C_{0}^{\infty}({Q_{T}})\), we omit the details here. Thus u satisfies equation (1.1). The stability of the solutions As we have said before, by choosing a suitable test function, we can prove the stability of the solutions without any boundary value condition, provided only that \(\alpha>0\). Proof of Theorem 1.3 For any given positive integer n, let \({g_{n}}(s)\) be an odd function, and Clearly, and where c is independent of n. Let \(\beta\leq\frac{\alpha}{p}\) and By taking the limit, we can choose \({g_{n}}(\phi(u - v))\) as the test function; then Thus Let \(n\rightarrow\infty\). If \(\{ x \in\Omega:\rho^{\beta}|u - v| = 0\} \) is a set of measure zero, then If \(\{ x \in\Omega:\rho^{\beta}|u - v| = 0\}\) is a set of positive measure, then In both cases the right-hand side of (3.8) goes to 0 as \(n\rightarrow\infty\). Meanwhile, Now, let \(n\rightarrow\infty\) in (3.5). Then It follows that Theorem 1.3 is proved. □ References 1. Yin, J, Wang, C: Properties of the boundary flux of a singular diffusion process. Chin. Ann. Math., Ser. B 25(2), 175-182 (2004) 2. Zhan, H, Yuan, H: A diffusion convection equation with degeneracy on the boundary. J. Jilin Univ. Sci. Ed.
53(3), 353-358 (2015) (in Chinese) 3. Zhan, H: The boundary value condition of an evolutionary \(p(x)\)-Laplacian equation. Bound. Value Probl. 2015, 112 (2015). doi:10.1186/s13661-015-0377-6 4. Zhao, JN: Existence and nonexistence of solutions for \({u_{t}} =div({| {\nabla u} |^{p - 2}}\nabla u) + f(\nabla u,u,x,t)\). J. Math. Anal. Appl. 172(1), 130-146 (1993) 5. Wang, J, Gao, W, Su, M: Periodic solutions of non-Newtonian polytropic filtration equations with nonlinear sources. Appl. Math. Comput. 216, 1996-2009 (2010) 6. Lee, K, Petrosyan, A, Vazquez, JL: Large time geometric properties of solutions of the evolution p-Laplacian equation. J. Differ. Equ. 229, 389-411 (2006) 7. Yin, J, Wang, C: Evolutionary weighted p-Laplacian with boundary degeneracy. J. Differ. Equ. 237, 421-445 (2007) 8. Benedikt, J, Girg, P, Kotrla, L, Takáč, P: Nonuniqueness and multi-bump solutions in parabolic problems with the p-Laplacian. J. Differ. Equ. 260, 991-1009 (2016) 9. Zhan, H: The solutions of a hyperbolic-parabolic mixed type equation on half-space domain. J. Differ. Equ. 259, 1449-1481 (2015) 10. Antontsev, SN, Shmarev, SI: Anisotropic parabolic equations with variable nonlinearity. Publ. Mat. 53, 355-399 (2009) 11. Antontsev, SN, Shmarev, SI: Parabolic equations with double variable nonlinearities. Math. Comput. Simul. 81, 2018-2032 (2011) 12. Ragusa, MA: Cauchy-Dirichlet problem associated to divergence form parabolic equations. Commun. Contemp. Math. 6(3), 377-393 (2004) 13. Gu, L: Second Order Parabolic Partial Differential Equations. The Publishing Company of Xiamen University, Xiamen (2002) (in Chinese) 14. Taylor, ME: Partial Differential Equations III. Springer, Berlin (1999) 15. Zhan, H: The solution of convection-diffusion equation. Chin. Ann. Math. 34(2), 235-256 (2013) Acknowledgements The paper is supported by NSF of China (no. 11371297) and supported by NSF of Fujian Province (no. 2015J01592), China. 
Additional information Competing interests The author declares that there are no competing interests.
Homework Statement The picture shows a graph of amplitude (measured in degrees) vs time (measured in seconds) for a pendulum disturbed by different accelerations. 1) Draw the free body diagram of the pendulum in a situation where this could happen. 2) Find the acceleration for the different periods. 3) When and in which period are the maximum and minimum tension? 4) Find ##\theta (t)## for the region of minimum period and for the following initial conditions: ##\theta _0 =10°##, ##v_0=0.1 rad/s## Homework Equations ##x(t)=A\sin(\omega t)## Well, this is a problem which makes you think more about concepts than numbers, so I want to see if I've done it correctly. 1) I drew a simple pendulum in an elevator, where you have weight, tension and a pseudo-force. In this situation the effective gravity may change due to the different accelerations of the elevator, and this makes the period change. 2) ##\theta (t)=A\sin(\omega t)##, so differentiating you get ##\ddot \theta (t)=-A \omega ^2 \sin (\omega t)##. In this case ##A=\frac{\pi}{180}## and ##\omega## can easily be found knowing the period, then the frequency and then ##\omega##. So you get three expressions which differ only in ##\omega##. I didn't include a phase ##\phi## because the motion starts at 0. 3) The maximum tension always occurs at the equilibrium point. Then, thinking of the elevator situation, at this point ##T=mg+f^*##, where ##f^*## is the pseudo-force due to the acceleration of the elevator. So if the effective gravity is "heavier" the period is minimum, and the tension is maximum when the period is smallest. On the other hand, the tension is weakest when the pendulum is at the extreme point and the effective gravity is "lighter", i.e. when the period is longest. 4) ##\theta (t) =A\cos(\omega t + \phi)##. So you have to find ##A## and ##\phi##. You use ##A=\sqrt{\theta_0 ^2 +\frac{\dot \theta_0^2}{\omega^2}}## and ##\phi=\arctan\left(-\frac{\dot \theta_0}{\omega \theta_0}\right)##.
So you get the values and replace them.
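Part 4 can also be checked numerically. The period of the minimum-period region has to be read off the attached graph, so the value ##T = 1.5## s below is purely an assumed placeholder:

```python
from math import pi, sqrt, atan2, cos, sin

T = 1.5                    # ASSUMED: minimum period read from the graph (s)
omega = 2 * pi / T
theta0 = 10 * pi / 180     # 10 degrees in radians
v0 = 0.1                   # rad/s

# theta(t) = A cos(omega t + phi) with theta(0) = theta0, theta'(0) = v0.
A = sqrt(theta0**2 + (v0 / omega) ** 2)
phi = atan2(-v0 / omega, theta0)   # atan2 picks the correct quadrant for phi

theta = lambda t: A * cos(omega * t + phi)
dtheta = lambda t: -A * omega * sin(omega * t + phi)
print(theta(0.0), dtheta(0.0))  # recovers theta0 and v0
```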
People arrive to a bank according to a possion process $N(t)$ with $\lambda = 1$ client/minute. Each client makes a deposit $Y \sim \mathrm{Unif}\{1,2\}$ in thousand dollars. Calculate the probability that at time $t=5$ there will be 6 thousand dollars. The way I've trying to solve this is using compound poisson process: Define $X(t)=\sum\limits_{i=1}^{N(t)}{Y_i}$ where $N(t)$ is the number of clients at time $t$ and $Y_i$ is the deposit of the client $i$. Then, I need $\mathbb{P}[X(5)=6]$ I've been trying to find the density function like this: $\mathbb{P}(X(t)=x)=\mathbb{P}\left(\sum\limits_{i=1}^{N(t)}{Y_i}=x\right)=\sum_{N(t)=0}^{\infty} \Bbb P\left( \sum_{i=1}^n Y_i=x|N(t)=n\right)\Bbb P(N(t)=n)$ if this is the right way to proceed, then I'm not sure how to keep going, so any help would be appreciated. Also, is there another way to get this problem solved?
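Yes, the conditioning decomposition is the right way, and here it terminates quickly: given $N(5)=n$, the total is 6 exactly when $6-n$ of the $n$ deposits equal 2, which forces $3 \le n \le 6$ and has probability $\binom{n}{6-n}/2^n$. A numerical sketch (Python, standard library; the Monte Carlo part is just a cross-check):

```python
import math, random

lam_t = 5.0  # lambda * t = 1 * 5

def poisson_pmf(n, mu):
    return math.exp(-mu) * mu**n / math.factorial(n)

# With k deposits of 2 and n-k of 1 the total is n + k, so k = 6 - n and 3 <= n <= 6.
exact = sum(poisson_pmf(n, lam_t) * math.comb(n, 6 - n) / 2**n for n in range(3, 7))

def sample_poisson(mu):
    """Knuth's method: count uniforms until their product drops below e^{-mu}."""
    L, k, p = math.exp(-mu), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

random.seed(1)
trials = 200_000
hits = sum(
    1 for _ in range(trials)
    if sum(random.choice((1, 2)) for _ in range(sample_poisson(lam_t))) == 6
)
print(exact, hits / trials)  # both close to 0.113
```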
How can I quantitatively and qualitatively understand the fact that there is a relevence between the existence of anti-particles and the causality? This is a consequence of the fact that there is no positive frequency function which is zero outside the light cone. If you have a particle in relativity, its dynamics require that it goes faster than light, and to restore causality, it must go back in time. This is explained in my answer here: Is anti-matter matter going backwards in time? . If you have a quantum particle with positive energy, the propagation function $G(x-y)$ is the amplitude to go from x to y. This propagation is said to be causal if the propagator is zero unless x is to the future of y, so that in a time-space decomposition, $G(t,r)$ is zero for t<0. In this case, the Fourier transform $G(\omega,k)$ cannot vanish for all $\omega<0$, because it is impossible for a nonzero function and its Fourier transform to both be exactly zero in a half-plane. To see this, the condition of vanishing of $G(t,r)$ for t<0 implies the analyticity of the Fourier transform for $\omega$ with a negative imaginary part, since in this region, the Fourier transform of G becomes a sum of decaying exponentials. An analytic function can't be zero in a region without being zero everywhere, so the Fourier transform of a future directed function is not strictly positive energy. Because of this, there is no relativistic particle formalism in which the particles both have positive energies and causal propagation. You can either deal with fields, in which case the particle notion is non-local, or you can deal with particles, but then they go back in time. The back-in-time formalism is using the standard non-causal Feynman propagator, which is $$ G(\omega,k) = {i\over \omega^2 - k^2 - m^2 - i\epsilon}$$ up to numerator modifications for higher spin, with the $i\epsilon$ pole prescription. 
This has two poles in $\omega$ for any $k$, and the pole prescription pushes one pole to slightly positive imaginary part and the other to slightly negative imaginary part. There are singularities in both imaginary $\omega$ directions, which means that the propagation is non-causal. The part that goes forward in time is the positive-energy part; the part that goes back in time is the negative-energy part. This is mainly an issue for the complex Klein-Gordon field (there is no such requirement for the Dirac field, for instance). It is most easily shown with the self-propagator of the complex Klein-Gordon field using plane waves in the x-direction. The Klein-Gordon equation is $\frac{\partial^2 \psi}{\partial t^2} ~~=~~ \left(\frac{\partial^2}{\partial x^2} - m^2 \right)~\psi$ A straightforward generator of time evolution would be $\frac{\partial \psi}{\partial t} ~~=~~\pm i \sqrt{-\frac{\partial^2}{\partial x^2} + m^2}~\psi$ with a (-) sign for particles and a (+) sign for anti-particles. However, this generator is non-local since it corresponds to an infinite series of derivatives. In fact it is equal to a convolution with a Bessel K function: $\frac{\partial \psi}{\partial t} ~~=~~\pm i \sqrt{-\frac{\partial^2}{\partial x^2} + m^2}~\psi ~~=~~ \frac{m}{x}K_1(mx)~*~\psi$ This means instantaneous propagation, since $\partial\psi/\partial t$ depends on non-local values of $\psi$. This problem then enters the general time evolution operator for arbitrary $t$.
$\psi(t) ~~=~~ \exp\left\{ \pm i t \sqrt{-\frac{\partial^2}{\partial x^2} + m^2}\right\}~\psi(0)$ However, now comes the trick: the sum of the particle and anti-particle propagators is local (within the light cone), $\psi(t) ~~=~~ \frac12\Big(\exp\left\{ + it ...\right\} +\exp\left\{ - it ...\right\}\Big)\psi(0) ~~=~~ \cos\left\{ t\sqrt{-\frac{\partial^2}{\partial x^2} + m^2}\right\}~\psi(0)$ because the Taylor series expansion of the cosine contains only even powers of the argument, so there is no more square root operator. It has to be said that the part outside the light cone is small to begin with, on the order of the Compton radius of the particle. But it also shrinks further as propagation progresses. For an electron it is about $10^{-13}\,m$ at the start of the propagation but only $10^{-20}\,m$ after a lightmicron (the time in which light propagates 1 $\mu m$). It shrinks further, linearly with time. This issue does not occur for the time evolution operator of the correct equation for the electron: the Dirac equation. This equation is linear and $\partial\psi/\partial t$ does not contain the above square root. Hans.
A fraction represents a part of a whole. For example, it tells how many slices of a pizza are left or eaten with respect to the whole pizza: one-half, three-quarters, and so on. Parts of a Fraction: Every fraction is made up of two terms, namely the Numerator, which is the top part of a fraction, and the Denominator, which is the bottom part of a fraction. Example: \(\frac{5}{9}\). Here, 5 is the Numerator and 9 is the Denominator. Types of Fraction: Fractions can be of two types, Proper Fractions and Improper Fractions. If both the numerator and the denominator are positive and the numerator is less than the denominator, the fraction is called a proper fraction. Ex: \(\frac{3}{8},\frac{9}{11}\) etc. A fraction whose numerator is greater than its denominator is called an improper fraction. Ex: \(\frac{8}{7},\frac{5}{2}\) etc. An improper fraction can also be written as a combination of a whole number and a fraction, known as a Mixed Fraction. Ex: \(\frac{29}{8} = 3\frac{5}{8}\), \(\frac{7}{3} = 2\frac{1}{3}\). In this article we learn about multiplication of fractions and various other operations related to fractions. Multiplication of Fractions: Multiplying fractions is as simple as multiplying any other real numbers. To multiply fractions, follow the step given: Multiply all the numerator terms and all the denominator terms; these give the numerator and the denominator of the product, respectively. \(Product \;\; of \;\; Fraction = \frac{Product \;\; of \;\; Numerator}{Product \;\; of \;\; Denominator}\) Eg: \(\frac{3}{5} \times \frac{1}{4} \times \frac{7}{9} = \frac{3 \times 1 \times 7}{5 \times 4 \times 9} = \frac{7}{60}\) This is true for a proper fraction and an improper fraction. To find the product of any numbers involving a mixed fraction, convert the mixed fraction into an improper fraction and multiply.
As we already know, multiplication is repeated addition; adding a fraction to itself the required number of times gives the product. Eg: \(5 \times \frac{1}{6} = \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} = \frac{5}{6}\) FRACTION AS AN OPERATOR When a fraction is referred to as an operator, it denotes a part of a whole of something. The word ‘of’ denotes multiplication. For example, 2/4 of 2 pizzas means 1 pizza, i.e. 2/4 × 2 = 1. As another example, the shaded portions in the figure given below represent 4/6 of the triangle. Example 1: Evaluate \(\frac{2}{3} \times \frac{5}{9}\) Solution: Product of fractions = (Product of numerators)/(Product of denominators) \(\Rightarrow \frac{2}{3} \times \frac{5}{9} = \frac{2 \times 5}{3 \times 9} = \frac{10}{27}\) Example 2: Evaluate \(3\frac{1}{4} \times 3\) Solution: To find the product, convert the mixed fraction into an improper fraction and then multiply. \(3\frac{1}{4} = \frac{(3 \times 4) + 1}{4} = \frac{13}{4}\) Now multiplying the terms, we have: \(\frac{13}{4} \times 3 = \frac{13 \times 3}{4} = \frac{39}{4}\) Example 3: Find 5/6 of 12 Solution: 5/6 of 12 means \(\frac{5}{6} \times 12 = \frac{5 \times 12}{6} = 10\)
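The worked examples can all be checked with Python's standard-library `fractions` module, which reduces products automatically (a side note, not part of the lesson itself):

```python
from fractions import Fraction

# Product rule: multiply numerators and denominators, then reduce.
assert Fraction(3, 5) * Fraction(1, 4) * Fraction(7, 9) == Fraction(7, 60)
assert Fraction(2, 3) * Fraction(5, 9) == Fraction(10, 27)

# Multiplication as repeated addition: 5 x (1/6) = 5/6.
assert sum([Fraction(1, 6)] * 5) == 5 * Fraction(1, 6) == Fraction(5, 6)

# Mixed number 3 1/4 = 13/4, then times 3.
assert Fraction(3 * 4 + 1, 4) * 3 == Fraction(39, 4)

# "of" means multiply: 5/6 of 12 is 10.
assert Fraction(5, 6) * 12 == 10
print("all examples check out")
```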
In many problems of enumerative combinatorics, one finds solution formulas that involve complex roots of unity, $\cos(\frac{n \pi }{ k})$ and $\sin(\frac{n \pi }{k})$. Can someone highlight any combinatorial interpretation of such expressions? I haven't found any book or paper highlighting this except for some rudiments in papers by Arthur T. Benjamin. (If this question is not appropriate for MathOverflow, I am extremely sorry.) The appearance of roots of unity or $\cos(\frac{n\pi}{k})$ and $\sin(\frac{n\pi}{k})$ in combinatorial contexts can almost always be explained through the representation theory of $\mathbb{Z}/n\mathbb Z$. The language of representation theory is avoided most of the time, and one attributes the appearance of $\cos(\frac{n\pi}{k})$'s to the appearance of circulant matrices, which have eigenvalues that are linear combinations of roots of unity. However, notice that the space of $n\times n$ circulant matrices is simply the group ring of $\mathbb Z/n \mathbb Z$. Now, circulant matrices (or similarly manageable Toeplitz matrices) will appear in combinatorial problems whenever your objects come with a $\mathbb Z/n \mathbb Z$ action. The numbers $\cos(\frac{n\pi}{k})$ are not integers, so they will not have a combinatorial meaning themselves, but they make their way through as eigenvalues of circulant matrices. This can happen through a Fourier transform such as in Brendan McKay's answer, or through traces or determinants. For example, most formulas for the number of spanning trees, perfect matchings, or closed walks on graphs with circular symmetry will contain roots of unity. Take circulant graphs, for instance, which have cyclic symmetry. These are defined by an integer $n$ and a sequence $s_1,s_2,\dots,s_k$ so that the vertex set is $\lbrace 1,2,\dots,n\rbrace$, and two vertices $i$ and $j$ are connected whenever $|i-j|=s_r$ for some $1\le r\le k$.
Denoting this graph by $G$ and the number of spanning trees by $\kappa(G)$, one has $$\kappa(G)=\frac{1}{n}\prod_{j=1}^{n-1}\left(2k-2\sum_{i=1}^{k}\cos(\frac{2\pi s_i j}{n})\right).$$ Now, as I mentioned earlier, circulant matrices and $\mathbb Z/n\mathbb Z$ actions don't exactly tell the whole story, because there are other nice Toeplitz matrices (or combinations of these) with eigenvalues being linear combinations of roots of unity. Some of the simplest examples are grid graphs. My favourite example to illustrate this is Kasteleyn's formula for counting domino tilings of an $n\times m$ grid. This number is $$\prod_{j=1}^{m}\prod_{k=1}^n \left(4\cos^2 \frac{\pi j}{m+1}+4\cos^2 \frac{\pi k}{n+1}\right)^{1/4}.$$ And of course there are appearances of roots of unity which happen in situations with a number theoretic flavour, such as Gauss sums or the Möbius function, but here the roots of unity usually don't make it to the enumeration formula as they usually cancel on the way, or are hidden behind expressions like Legendre symbols. :) If you have a polynomial or sufficiently convergent power series $f(x)$, and you sum it over $x$ being each of the $k$-th roots of unity, then you get $k$ times the sum of the coefficients of the powers of $x$ that are multiples of $k$. The simplest case is that $f(1)+f(-1)$ is twice the sum of the coefficients of the even powers. This is one way that items like $e^{-2ij\pi/k}$ or its real and imaginary parts can get into a formula. http://en.wikipedia.org/wiki/Series_multisection gives the general formula.
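The multisection identity is easy to test numerically; a small Python sketch (the function name is mine, not from the linked page) extracting the sum of the coefficients whose index is divisible by $k$ via the $k$-th roots of unity:

```python
import cmath, math

def multisection_sum(coeffs, k):
    """Sum of coefficients c_j with k | j, via (1/k) * sum over k-th roots of unity.

    Uses that sum_r omega^{r j} over r = 0..k-1 equals k when k | j and 0 otherwise.
    """
    f = lambda x: sum(c * x**j for j, c in enumerate(coeffs))
    roots = [cmath.exp(2j * math.pi * r / k) for r in range(k)]
    return sum(f(w) for w in roots).real / k

# f(x) = 1 + 2x + 3x^2 + 4x^3 + 5x^4 + 6x^5 + 7x^6
coeffs = [1, 2, 3, 4, 5, 6, 7]
print(multisection_sum(coeffs, 3))  # coefficients at x^0, x^3, x^6: 1 + 4 + 7 = 12
print(multisection_sum(coeffs, 2))  # the even-power case f(1) + f(-1) over 2: 16
```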
I want to calculate the electromagnetic tensor components in cylindrical coordinates. Suppose I did not know that those components are given in Cartesian coordinates by $$(F^{\mu \nu})= \begin{pmatrix} 0 & E_x & E_y & E_z \\ -E_x & 0 & B_z & -B_y \\ -E_y & -B_z & 0 & B_x \\ -E_z & B_y & -B_x & 0 \end{pmatrix}.$$ I want to derive the result in the same manner I did in the Cartesian coordinates case, i.e., using that $F^{ \mu \nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$, where $A^\alpha=(V,\vec{A})$, $\vec{B} = \nabla \times \vec{A}$ and $\vec{E} = -\nabla V - \partial \vec{A} / \partial t$. Using the formulas for curl and gradient in cylindrical coordinates, we find $$ \vec{E} = - \left( \frac{\partial V}{\partial r} + \frac{\partial A_r}{\partial t} \right)\hat{r} \ - \left( \frac{1}{r}\frac{\partial V}{\partial \phi} + \frac{\partial A_\phi}{\partial t} \right)\hat{\phi} - \left( \frac{\partial V}{\partial z} + \frac{\partial A_z}{\partial t} \right)\hat{z} $$ and $$ \vec{B} = \left( \frac{1}{r}\frac{\partial A_z}{\partial \phi} - \frac{\partial A_\phi}{\partial z} \right)\hat{r} \ +\left(\frac{\partial A_r}{\partial z} - \frac{\partial A_z}{\partial r} \right)\hat{\phi} \ +\frac{1}{r}\left(\frac{\partial (r A_\phi)}{\partial r} - \frac{\partial A_r}{\partial \phi} \right)\hat{z}. \ $$ The invariant interval is given by $ds^2 = -dt^2 + dr^2 + r^2 d\phi^2 + dz^2$ (with $c=1$). Therefore, the metric tensor reads $$(g_{\mu \nu})= \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & r^2 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix},$$ and its inverse is $$(g^{\mu \nu})= \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1/r^2 & 0\\ 0 & 0 & 0 & 1 \end{pmatrix}.$$ This implies $\partial^0 = -\partial_0$, $\partial^1 = \partial_1$, $\partial^2 = \frac{1}{r^2}\partial_2$ and $\partial^3 = \partial_3$.
So, for example, $$ F^{ 01} = \partial^0 A^1 - \partial^1 A^0 = -\partial_0 A^1 - \partial_1 A^0 = -\frac{\partial A_r}{\partial t}-\frac{\partial V}{\partial r} = E_r, $$ which is reassuring. Now, $$ F^{02} = \partial^0 A^2 - \partial^2 A^0 = -\partial_0 A^2 - \frac{1}{r^2}\partial_2 A^0 = -\frac{\partial A_\phi}{\partial t}-\frac{1}{r^2}\frac{\partial V}{\partial \phi}. $$ However, I cannot identify this quantity with any component of the electric field. This last expression looks almost like $E_\phi$, except for an extra $\frac{1}{r}$ multiplying $\partial V / \partial \phi$. What went wrong here?
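To see where the factors of $1/r$ in the curl formula above come from, one can check the cylindrical $\hat z$-component of $\nabla\times\vec A$ symbolically. The following sketch (assuming sympy is available; all variable and function names are ours) writes a general field $A_r\,\hat r + A_\phi\,\hat\phi$ in Cartesian components and applies the chain rule:

```python
import sympy as sp

r, phi, z = sp.symbols('r phi z', positive=True)
A_r = sp.Function('A_r')(r, phi, z)
A_phi = sp.Function('A_phi')(r, phi, z)

# Cartesian components of A = A_r rhat + A_phi phihat.
Ax = A_r * sp.cos(phi) - A_phi * sp.sin(phi)
Ay = A_r * sp.sin(phi) + A_phi * sp.cos(phi)

# Chain rule: d/dx and d/dy expressed through d/dr and d/dphi.
def dx(f):
    return sp.cos(phi) * sp.diff(f, r) - sp.sin(phi) / r * sp.diff(f, phi)

def dy(f):
    return sp.sin(phi) * sp.diff(f, r) + sp.cos(phi) / r * sp.diff(f, phi)

# z-component of curl A, computed from the Cartesian expression...
curl_z = sp.simplify(dx(Ay) - dy(Ax))
# ...should match (1/r) d(r A_phi)/dr - (1/r) dA_r/dphi.
expected = sp.diff(r * A_phi, r) / r - sp.diff(A_r, phi) / r
print(sp.simplify(curl_z - expected))  # 0
```

The same chain-rule bookkeeping is exactly what distinguishes the coordinate component $A^\phi$ from the physical component $A_\phi = r A^\phi$, which is the kind of mismatch at play in $F^{02}$.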
Let $Z = -Y$ so that $X-Y = X+Z$. Then, $$\operatorname{var}(Z)= (-1)^2\operatorname{var}(Y) = \operatorname{var}(Y)$$ and $$\operatorname{cov}(X,Z) = \operatorname{cov}(X,-Y) = -\operatorname{cov}(X,Y)$$ so that \begin{align}\operatorname{var}(X-Y) &= \operatorname{var}(X+Z)\\&= \operatorname{var}(X)+\operatorname{var}(Z) + 2\operatorname{cov}(X,Z)\\&= \operatorname{var}(X)+\operatorname{var}(Y) - 2\operatorname{cov}(X,Y).\end{align} In short, \begin{align}\operatorname{var}(X+Y) &= \operatorname{var}(X) + \operatorname{var}(Y) + 2\operatorname{cov}(X,Y)\\\operatorname{var}(X-Y) &= \operatorname{var}(X) + \operatorname{var}(Y) - 2\operatorname{cov}(X,Y)\end{align} really are the same formula, and which of the two variances is larger depends entirely on whether $\operatorname{cov}(X,Y)$ is positive or negative. In the case when $\operatorname{cov}(X,Y)$ equals $0$ (i.e. $X$ and $Y$ are uncorrelated random variables), $\operatorname{var}(X+Y)$ equals $\operatorname{var}(X) + \operatorname{var}(Y)$, as you have already noted. As to intuition regarding why $\operatorname{var}(X+Y) > \operatorname{var}(X-Y)$ when $\operatorname{cov}(X,Y) > 0$, a geometric viewpoint might help. $X$ and $Y$ can be regarded as vectors of lengths $\sigma_X = \sqrt{\operatorname{var}(X)}$ and $\sigma_Y = \sqrt{\operatorname{var}(Y)}$ that are pointed roughly in the same direction when $\operatorname{cov}(X,Y) > 0$ and in roughly opposite directions when $\operatorname{cov}(X,Y) < 0$. Now, one would expect intuitively that the (vector) sum of two vectors pointing in roughly the same direction is a longer vector than the (vector) difference of the two vectors, no? A crudely drawn picture of the difference vector and the sum vector is in the diagram below. Note that in the left-hand figure above, the vectors $X$ and $Y$ are shown as being at an acute angle $\theta$ and thus are "pointed in roughly the same direction".
In fact, the correlation coefficient $\rho$ of (random variables) $X$ and $Y$ is $\cos(\theta) > 0$. Now, in a triangle with vertices $A, B, C$, and opposite sides of lengths $a, b$ and $c$ respectively, we have the following result familiar from elementary geometry/trigonometry: $$c^2 = a^2 + b^2 - 2ab\cos(\angle C),$$ which is equivalent to \begin{align}\sigma_{X-Y}^2 &= \sigma_X^2 + \sigma_Y^2 - 2\sigma_X\sigma_Y\cos(\theta)\\ \operatorname{var}(X-Y) &= \operatorname{var}(X) + \operatorname{var}(Y)-2 \rho \sigma_X\sigma_Y\\\operatorname{var}(X-Y) &= \operatorname{var}(X) + \operatorname{var}(Y)-2 \operatorname{cov}(X,Y).\end{align} On the other hand, as shown in the right-hand figure above, for vector sums, the included angle $\pi-\theta$ is obtuse, and so $\cos(\pi-\theta) = -\cos(\theta) < 0$. So, we get that \begin{align}\sigma_{X+Y}^2 &= \sigma_X^2 + \sigma_Y^2 + 2\sigma_X\sigma_Y\cos(\theta)\\\operatorname{var}(X+Y) &= \operatorname{var}(X) + \operatorname{var}(Y)+2 \operatorname{cov}(X,Y),\end{align} that is, $\operatorname{var}(X+Y) > \operatorname{var}(X-Y)$ for positively correlated random variables.
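The two identities are easy to sanity-check numerically. A minimal sketch (assuming numpy is available; the covariance value 0.6 and sample size are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Correlated pair with var(X) = var(Y) = 1 and cov(X, Y) = 0.6 > 0.
cov = [[1.0, 0.6],
       [0.6, 1.0]]
X, Y = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T

# var(X+Y) should be near 1 + 1 + 2(0.6) = 3.2,
# var(X-Y) near 1 + 1 - 2(0.6) = 0.8.
print(np.var(X + Y), np.var(X - Y))
```

Flipping the sign of the off-diagonal entries swaps which of the two sample variances is larger, matching the sign analysis above.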
Probability Seminar (revision as of 12:10, 28 April 2015) Spring 2015 Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu. Thursday, January 15, Miklos Racz, UC-Berkeley Stats Title: Testing for high-dimensional geometry in random graphs Abstract: I will talk about a random geometric graph model, where connections between vertices depend on distances between latent d-dimensional labels; we are particularly interested in the high-dimensional case when d is large. Upon observing a graph, we want to tell if it was generated from this geometric model, or from an Erdos-Renyi random graph. We show that there exists a computationally efficient procedure to do this which is almost optimal (in an information-theoretic sense). The key insight is based on a new statistic which we call "signed triangles". To prove optimality we use a bound on the total variation distance between Wishart matrices and the Gaussian Orthogonal Ensemble. This is joint work with Sebastien Bubeck, Jian Ding, and Ronen Eldan. Thursday, January 22, No Seminar Thursday, January 29, Arnab Sen, University of Minnesota Title: Double Roots of Random Littlewood Polynomials Abstract: We consider random polynomials whose coefficients are independent and uniform on {-1,1}.
We will show that the probability that such a polynomial of degree n has a double root is o(n^{-2}) when n+1 is not divisible by 4 and is of the order n^{-2} otherwise. We will also discuss extensions to random polynomials with more general coefficient distributions. This is joint work with Ron Peled and Ofer Zeitouni. Thursday, February 5, No seminar this week Thursday, February 12, No Seminar this week Thursday, February 19, Xiaoqin Guo, Purdue Title: Quenched invariance principle for random walks in time-dependent random environment Abstract: In this talk we discuss random walks in a time-dependent zero-drift random environment in [math]Z^d[/math]. We prove a quenched invariance principle under an appropriate moment condition. The proof is based on the use of a maximum principle for parabolic difference operators. This is a joint work with Jean-Dominique Deuschel and Alejandro Ramirez. Thursday, February 26, Dan Crisan, Imperial College London Title: Smoothness properties of randomly perturbed semigroups with application to nonlinear filtering Abstract: In this talk I will discuss sharp gradient bounds for perturbed diffusion semigroups. In contrast with existing results, the perturbation is here random and the bounds obtained are pathwise. Our approach builds on the classical work of Kusuoka and Stroock and extends their program developed for the heat semi-group to solutions of stochastic partial differential equations. The work is motivated by and applied to nonlinear filtering. The analysis allows us to derive pathwise gradient bounds for the un-normalised conditional distribution of a partially observed signal. 
The estimates we derive have sharp small time asymptotics. This is joint work with Terry Lyons (Oxford) and Christian Litterer (Ecole Polytechnique) and is based on the paper D. Crisan, C. Litterer, T. Lyons, Kusuoka-Stroock gradient bounds for the solution of the filtering equation, Journal of Functional Analysis, 2015. Wednesday, March 4, Sam Stechmann, UW-Madison, 2:25pm Van Vleck B113 Please note the unusual time and room. Title: Stochastic Models for Rainfall: Extreme Events and Critical Phenomena Abstract: In recent years, tropical rainfall statistics have been shown to conform to paradigms of critical phenomena and statistical physics. In this talk, stochastic models will be presented as prototypes for understanding the atmospheric dynamics that leads to these statistics and extreme events. Key nonlinear ingredients in the models include either stochastic jump processes or thresholds (Heaviside functions). First, both exact solutions and simple numerics are used to verify that a suite of observed rainfall statistics is reproduced by the models, including power-law distributions and long-range correlations. Second, we prove that a stochastic trigger, which is a time-evolving indicator of whether it is raining or not, will converge to a deterministic threshold in an appropriate limit. Finally, we discuss the connections among these rainfall models, stochastic PDEs, and traditional models for critical phenomena. Thursday, March 12, Ohad Feldheim, IMA Title: The 3-states AF-Potts model in high dimension Abstract: Take a bounded odd domain of the bipartite graph [math]\mathbb{Z}^d[/math]. Color the boundary of the set by [math]0[/math], then color the rest of the domain at random with the colors [math]\{0,\dots,q-1\}[/math], penalizing every configuration in proportion to the number of improper edges at a given rate [math]\beta\gt 0[/math] (the "inverse temperature"). Q: "What is the structure of such a coloring?"
This model is called the [math]q[/math]-states Potts antiferromagnet (AF), a classical spin glass model in statistical mechanics. The [math]2[/math]-states case is the famous Ising model, which is relatively well understood. The [math]3[/math]-states case in high dimension has been studied for [math]\beta=\infty[/math], when the model reduces to a uniformly chosen proper three-coloring of the domain. Several works, by Galvin, Kahn, Peled, Randall and Sorkin, established the structure of the model, showing long-range correlations and phase coexistence. In this work, we generalize this result to positive temperature, showing that for large enough [math]\beta[/math] (low enough temperature) the rigid structure persists. This is the first rigorous result for [math]\beta\lt \infty[/math]. In the talk, assuming no acquaintance with the model, we shall give the physical background, introduce all the relevant definitions and shed some light on how such results are proved using only combinatorial methods. Joint work with Yinon Spinka. Thursday, March 19, Mark Huber, Claremont McKenna Math Title: Understanding relative error in Monte Carlo simulations Abstract: The problem of estimating the probability [math]p[/math] of heads on an unfair coin has been around for centuries, and has inspired numerous advances in probability such as the Strong Law of Large Numbers and the Central Limit Theorem. In this talk, I'll consider a new twist: given an estimate [math]\hat p[/math], suppose we want to understand the behavior of the relative error [math](\hat p - p)/p[/math]. In classic estimators, the values that the relative error can take on depend on the value of [math]p[/math]. I will present a new estimate with the remarkable property that the distribution of the relative error does not depend in any way on the value of [math]p[/math]. Moreover, this new estimate is very fast: it takes a number of coin flips that is very close to the theoretical minimum.
Time permitting, I will also discuss new ways to use concentration results for estimating the mean of random variables where normal approximations do not apply. Thursday, March 26, Ji Oon Lee, KAIST Title: Tracy-Widom Distribution for Sample Covariance Matrices with General Population Abstract: Consider the sample covariance matrix [math](\Sigma^{1/2} X)(\Sigma^{1/2} X)^*[/math], where the sample [math]X[/math] is an [math]M \times N[/math] random matrix whose entries are real independent random variables with variance [math]1/N[/math] and [math]\Sigma[/math] is an [math]M \times M[/math] positive-definite deterministic diagonal matrix. We show that the fluctuation of its rescaled largest eigenvalue is given by the type-1 Tracy-Widom distribution. This is a joint work with Kevin Schnelli. Thursday, April 2, No Seminar, Spring Break Thursday, April 9, Elnur Emrah, UW-Madison Title: The shape functions of certain exactly solvable inhomogeneous planar corner growth models Abstract: I will talk about two kinds of inhomogeneous corner growth models with independent waiting times {W(i, j): i, j positive integers}: (1) W(i, j) is distributed exponentially with parameter [math]a_i+b_j[/math] for each i, j. (2) W(i, j) is distributed geometrically with fail parameter [math]a_ib_j[/math] for each i, j. These generalize exactly solvable i.i.d. models with exponential or geometric waiting times. The parameters (a_n) and (b_n) are random with a joint distribution that is stationary with respect to the nonnegative shifts and ergodic (separately) with respect to the positive shifts of the indices. Then the shape functions of models (1) and (2) satisfy variational formulas in terms of the marginal distributions of (a_n) and (b_n). For certain choices of these marginal distributions, we still get closed-form expressions for the shape function as in the i.i.d. models.
Thursday, April 16, Scott Hottovy, UW-Madison Title: An SDE approximation for stochastic differential delay equations with colored state-dependent noise Abstract: In this talk I will introduce a stochastic differential delay equation with state-dependent colored noise which arises from a noisy circuit experiment. In the experimental paper, a small delay and correlation time limit was performed by using a Taylor expansion of the delay. However, a time substitution was first performed to obtain a good match with experimental results. I will discuss how this limit can be proved without the use of a Taylor expansion by using a theory of convergence of stochastic processes developed by Kurtz and Protter. To obtain a necessary bound, the theory of sums of weakly dependent random variables is used. This analysis leads to the explanation of why the time substitution was needed in the previous case. Thursday, April 23, Hoi Nguyen, Ohio State University Title: On eigenvalue repulsion of random matrices Abstract: I will address certain repulsion behavior of roots of random polynomials and of eigenvalues of Wigner matrices, and their applications. Among other things, we show a Wegner-type estimate for the number of eigenvalues inside an extremely small interval for quite general matrix ensembles. Thursday, May 7, Jessica Lin, UW-Madison Title: Random Walks in Random Environments and Stochastic Homogenization Abstract: In this talk, I will draw connections between random walks in random environments (RWRE) and stochastic homogenization of partial differential equations (PDE). I will introduce various models of RWRE and derive the corresponding PDEs to show that the two subjects are intimately related. I will then give a brief overview of the tools and techniques used in both approaches (reviewing some classical results), and discuss some recent problems in RWRE which are related to my research in stochastic homogenization. Thursday, May 14, Chris Janjigian, UW-Madison Title: TBA Abstract:
Research talks; Partial Differential Equations; Mathematical Physics In the fifties John Nash astonished the geometers with his celebrated isometric embedding theorems. A folkloristic explanation of his first theorem is that you should be able to put any piece of paper in your pocket without crumpling or folding it, no matter how large it is. Ten years ago László Székelyhidi and I discovered unexpected similarities with the behavior of some classical equations in fluid dynamics. Our remark sparked a series of discoveries and works which have gone in several directions. Among them the most notable is the recent proof of Phil Isett of a long-standing conjecture of Lars Onsager in the theory of turbulent flows. In a joint work with László, Tristan Buckmaster and Vlad Vicol we improve Isett's theorem to show the existence of dissipative solutions of the incompressible Euler equations below the Onsager threshold. 35Q31 ; 35D30 ; 76B03 Research talks; Partial Differential Equations In a joint work with Maria Colombo and Luigi De Rosa we consider the Cauchy problem for the ipodissipative Navier-Stokes equations, where the classical Laplacian $-\Delta$ is substituted by a fractional Laplacian $(-\Delta)^\alpha$.
Although a classical Hopf approach via a Galerkin approximation shows that there is enough compactness to construct global weak solutions satisfying the energy inequality à la Leray, we show that such solutions are not unique when $\alpha$ is small enough and the initial data are not regular. Our proof is a simple adaptation of the methods introduced by László Székelyhidi and myself for the Euler equations. The methods apply for $\alpha < \frac{1}{2}$, but in order to show that they produce Leray solutions some more care is needed and in particular we must take smaller exponents. 35Q31 ; 35A01 ; 35D30 - v; 79 p. ISBN 978-0-8218-4914-9 Memoirs of the American Mathematical Society, 0991 Location: Collection, 1st floor Q-valued function # Dirichlet energy # existence and regularity # metric space # harmonic map # geometric measure theory 49Q20 ; 35J55 ; 54E40 ; 53A10 - vi; 124 p. ISBN 978-3-03719-044-9 Zurich lectures in advanced mathematics Location: Book, ground floor (DELE) measure theory # geometric measure # functions of several variables # integration # variational problem # Marstrand's theorem # rectifiability criterion 28A75 ; 26B15 ; 49Q15 ; 49Q20
Video Transcript: Function Operations Zooming along on his flying carpet, Jaanav heads for home. Oh geez, he's caught in a sandstorm, again. Now he's covered with sand: he has sand in his hair, sand in his ears, it's everywhere! Back at home, he thinks, there must be a solution for this annoying problem. Eureka, Jaanav has an idea! He'll attach a sand-proof glass dome to his flying carpet, and he can advertise this amazing new product on TV. Expressing Costs in a Function Of course, he wants to make lots and lots of money from this business venture, so he must figure out all the costs to manufacture and advertise the product. Plus, he must figure out the sales price of each item and how many he must sell to make a healthy profit. Jaanav figures out that the cost to make each carpet is 100 gold coins plus a one-time cost of 150 gold coins - to buy the loom used to weave the carpets. The total cost to produce the sand-proof glass dome is 50 gold coins per dome plus a one-time cost of 100 gold coins to buy the dome-building machinery. To figure out the total cost, let's write the costs of manufacturing as two different functions, and then add them together. Let C represent the costs associated with the product, and let 'x' represent the number of units. We can write the costs of just the carpets that he will produce as C_carpet(x) = 100x + 150 and the costs of the sand-proof domes as C_dome(x) = 50x + 100. Remember, the costs depend on the number of items, 'x', that will be produced. Adding Functions To calculate the sum (f+g)(x), you just have to add f(x) and g(x). For the two given functions this means: (C_carpet + C_dome)(x) = C_carpet(x) + C_dome(x). Now to calculate the total cost of a flying carpet with a glass dome, Jaanav has to add the two functions together: C_totalcost(x) = (C_carpet + C_dome)(x). The sum of the two functions is equal to 100x + 150 + 50x + 100.
Combine the like terms, and then write the expression: 150x + 250. Subtracting Functions Now that Jaanav knows the production costs, he needs to consider the cost of the television advertisement, and then figure out the selling price for each unit. The ad will cost a one-time fee of 750 gold coins. Based on all the cost information, he decides to sell the units for 250 gold coins each. We can write this information as the function R(x). "R" represents the receipts after the cost of the advertisement. So, R(x) = 250x - 750. How much money will be left over after paying all of the costs? This amount is the profit, and it's equal to the receipts minus the total cost. We can write this as the function P(x): P(x) = (R - C_totalcost)(x), which equals (250x - 750) - (150x + 250). To simplify, don't forget to distribute the negative across both terms inside the second set of parentheses. We can write the expression as 250x - 150x - 750 - 250, and then combine the like terms, giving us P(x) = 100x - 1000. Break-Even / Covering the Cost How many carpets does Jaanav need to sell just to break even? Break-even is the amount of money Jaanav needs to earn just to cover his fixed costs. This is getting complicated! Maybe Jaanav should call his accountant? NO, we can help him. To determine the break-even amount of this business venture we just need to solve the equation that models his profit. If we set P(x) = 0, then we can isolate 'x' and find out how many carpets Jaanav should sell. He needs to sell 10 domed carpets just to break even. Well done. Multiplying Functions But can we multiply two functions? To calculate the product of the functions (f times g)(x), simply multiply f(x) and g(x). Let's take a look at an example. f(x) equals 2x + 3 and g(x) equals 4x - 2. So to find the product (f times g)(x) we have to multiply the two terms 2x + 3 and 4x - 2. Using the FOIL method we get: 8x² - 4x + 12x - 6. We can simplify this expression by combining like terms: 8x² + 8x - 6. That's it.
Dividing Functions And how do we divide two functions? Let's take a look. Just like multiplying two functions, you have to divide both terms - '3x+5' and 'x-2' - to get a new function, (f/g)(x). Oh, oh! The commercial is on the television! Let's watch. Do you get dust in your eyes when flying around on your magic carpet? Do you need protection from the elements? Then you need the OCD2000! The Oriental Carpet Dome will protect you from anything Mother Nature has to throw at you except maybe hurricanes and tornados... 2 comments Thanks for liking (and commenting) on our videos! Very informative for math AND advertising! Thanks! Function Operations Exercise Would you like to apply what you have learned? With the exercises for the video Function Operations you can review and practice it. Determine the total cost of the sand-proof glass carpet dome. Tips For example, if the production of one carpet dome costs $10$ dollars, with a one-time cost of $200$ dollars, then the total cost of producing $x$ carpets is $C_{\text{carpet}}(x)=10x+200$. To add functions together, just add the terms of both functions together. Solution The cost to make each carpet is 100 gold coins plus a one-time cost of 150 gold coins - to buy the loom used to weave the carpets. The total cost to produce the sand-proof glass dome is 50 gold coins per dome plus a one-time cost of 100 gold coins to buy the dome-building machinery. To figure out the total cost, let's write the costs of manufacturing as two different functions, and then add them together. Let $x$ represent the number of carpets produced and $C_{\text{carpet}}(x)$ represent the total cost of producing $x$ carpets. We then have that: $C_{\text{carpet}}(x)=100x+150$. Let $x$ represent the number of domes produced and $C_{\text{dome}}(x)$ represent the total cost of producing $x$ domes. We then have that: $C_{\text{dome}}(x)=50x+100$.
If Jaanav produces $x$ carpets and domes, then we just have to add the total costs of the domes and the carpets together to get the total cost $C_{\text{total cost}}(x)$ of the sand-proof glass carpet dome, i.e. $C_{\text{total cost}}(x)=C_{\text{carpet}}(x)+C_{\text{dome}}(x)$. Specifically, $C_{\text{total cost}}(x)=100x+150+50x+100$. Combining like terms, we get $C_{\text{total cost}}(x)=150x+250$. Decide what the correct rules are for combining functions. Tips An example of addition with $f(x)=2x+1$ and $g(x)=3x-2$. An example of division with $f(x)=2x+1$ and $g(x)=3x-2$. An example of multiplication with $f(x)=2x+1$ and $g(x)=3x-2$. Solution How can we add, subtract, multiply, or divide two functions? $(f+g)(x)=f(x)+g(x)$ $(f-g)(x)=f(x)-g(x)$ $(f\times g)(x)=f(x)\times g(x)$ $(f\div g)(x)=f(x)\div g(x)$ For example, with $f(x)=2x+1$ and $g(x)=3x-2$: $(f-g)(x)=f(x)-g(x)=2x+1-(3x-2)=2x+1-3x+2=-x+3$. Establish the equation to calculate Jaanav's total profit. Tips The $750$ gold coins is a cost which needs to be paid from the money made from the carpet domes. So you have to subtract $750$ from the amount of money made. If one carpet dome sells for $10$ dollars, then we get $10\times 10=100$ dollars for selling $10$ carpet domes. An example of combining like terms: $20x+30-15x=20x-15x+30=5x+30$. Solution Let $x$ be the number of carpet domes Jaanav would like to sell and $C_{\text{initial profit}}(x)$ be the amount of gold coins Jaanav makes after covering the cost for his commercial. We then have that $C_{\text{initial profit}}(x)=250x-750$. To get the total profit $C_{\text{total profit}}(x)$ after selling $x$ carpet domes, we have to subtract the total cost from the initial profit: $C_{\text{total profit}}(x)=C_{\text{initial profit}}(x)-C_{\text{total cost}}(x)$. We can then conclude that the total profit is $C_{\text{total profit}}(x)=250x-750-(150x+250)=100x-1000$. Calculate each operation with the given functions. Tips Remember to keep track of signs.
Use the FOIL method for multiplying two binomials: multiply the First, then the Outer, then the Inner, and last the Last. Solution To add, subtract, multiply, or divide functions, just add, subtract, multiply, or divide their terms: $(f+g)(x)=f(x)+g(x)=12x+23+4x-3=12x+4x+23-3=16x+20$. $(f-h)(x)=f(x)-h(x)=12x+23-(12x-9)=12x+23-12x+9=32$. $(h\div g)(x)=\frac{h(x)}{g(x)}=\frac{12x-9}{4x-3}=\frac{3(4x-3)}{4x-3}=3$. $(f\times g)(x)=f(x)\times g(x)=(12x+23)\times (4x-3)=48x^2+56x-69$. Examine the total cost of the production with Jaanav's brother's new glass dome machine. Tips Remember to combine like terms. To add terms with variables you just have to add the coefficients. Solution The total cost is the sum of the carpet cost and the dome cost. We have that the carpet cost is given by $C_{\text{carpets}}(x)=100 x+ 150$, and that the dome cost is given by $C_{\text{dome}}(x)=30x+150$. The sum of these functions gives us the total cost: $C_{\text{total cost}}(x)=(C_{\text{carpets}}+C_{\text{dome}})(x)=(100 x+ 150)+(30x+150)=100 x+ 150+30x+150$. Now we rearrange the terms so that the like terms are together, $C_{\text{total cost}}(x)=100x+30x+150+150$, and combine like terms to get $C_{\text{total cost}}(x)=130x+300$. Figure out the price for one dome. Tips The cost function is given by the item cost multiplied by the number of domes produced plus any one-time costs. The price at which Jaanav will break even will give a profit of zero gold coins. You have to solve a linear equation with an unknown price $y$. Solution The break-even point is when the amount of money made and the costs are the same, i.e. the total profit is zero. The function representing the costs is $C(x)=80x+200$, where the cost for advertising is included. Because the amount of money made for one dome is unknown, we represent it with the variable $y$. So we get the profit function $P(x)=yx-(80x+200)$.
We know that for $x=10$ the costs should be totally covered, which means that we want $P(10)=0$. This gives us the following equation with the unknown price $y$: $0=10y-(80(10)+200)$. We simplify the term inside the parentheses to $C(10)=80(10)+200=1000$ and solve the equation: $$0=10y-1000 \;\Rightarrow\; 1000=10y \;\Rightarrow\; y=100.$$ We then find that Jaanav should charge 100 gold coins per dome.
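The worked example from the video can be replayed in a few lines of code. A minimal sketch (the function names are ours, mirroring the video's cost functions):

```python
# Cost, revenue, and profit functions from the worked example.
def c_carpet(x):
    return 100 * x + 150      # 100 coins per carpet + 150 one-time loom cost

def c_dome(x):
    return 50 * x + 100       # 50 coins per dome + 100 one-time machinery cost

def c_total(x):
    # (C_carpet + C_dome)(x) = 150x + 250
    return c_carpet(x) + c_dome(x)

def revenue(x):
    return 250 * x - 750      # 250 coins per unit, minus the 750-coin TV ad

def profit(x):
    # (R - C_total)(x) = 100x - 1000
    return revenue(x) - c_total(x)

# Break-even: the smallest number of units with non-negative profit.
break_even = next(x for x in range(1, 1000) if profit(x) >= 0)
print(break_even, profit(break_even))  # 10 0
```

Adding, subtracting, multiplying, and dividing functions all reduce to the same pattern: apply each function to the same input and combine the results.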
On the $ L^p $ regularity of solutions to the generalized Hunter-Saxton system 1. Department of Mathematics, University of Maine, Orono, ME 04469, USA 2. Department of Mathematics, University of Chicago, Chicago, IL 60637, USA 3. Department of Mathematics, Cornell University, Ithaca, NY 14850, USA 4. Department of Mathematics, University of North Georgia, Dahlonega, GA 30533, USA The generalized Hunter-Saxton system comprises several well-known models from fluid dynamics and serves as a tool for the study of fluid convection and stretching in one-dimensional evolution equations. In this work, we examine the global regularity of periodic smooth solutions of this system in $ L^p $, $ p \in [1,\infty) $, spaces for nonzero real parameters $ (\lambda,\kappa) $. Our results significantly improve and extend those by Wunsch et al. [ Mathematics Subject Classification: 35B44, 35B10, 35B65, 35Q35, 35B40. Citation: Jaeho Choi, Nitin Krishna, Nicole Magill, Alejandro Sarria. On the $ L^p $ regularity of solutions to the generalized Hunter-Saxton system. Discrete & Continuous Dynamical Systems - B, 2019, 24 (12) : 6349-6365. doi: 10.3934/dcdsb.2019142 References: [3] S. Childress, G. R. Ierley, E. A. Spiegel and W. R. Young, Blow-up of unsteady two-dimensional Euler and Navier-Stokes solutions having stagnation-point form. [5] A. Constantin and D. Lannes, The hydrodynamical relevance of the Camassa-Holm and Degasperis-Procesi equations. [6] H. R. Dullin, G. A. Gottwald and D. D. Holm, Camassa-Holm, Korteweg-de Vries-5 and other asymptotically equivalent equations for shallow water waves. [8] J. Escher, O. Lechtenfeld and Z. Yin, Well-posedness and blow-up phenomena for the 2-component Camassa-Holm equation. [9] G. Gasper and M. Rahman. [18] B. Moon and Y. Liu, Wave breaking and global existence for the generalized periodic two-component Hunter-Saxton system. [26] A. Sarria and R.
Saxton, The role of initial curvature in solutions to the generalized inviscid Proudman-Johnson equation, [27] [28] [29] [30] [31] show all references References: [1] [2] [3] S. Childress, G.R. Ierley, E.A. Spiegel and W.R. Young, Blow-up of unsteady two-dimensional Euler and Navier-Stokes solutions having stagnation-point form, [4] [5] A. Constantin and D. Lannes, The hydrodynamical relevance of the Camassa-Holm and Degasperis-Procesi equations, [6] H.R. Dullin, G.A. Gottwald and D.D. Holm, Camassa-Holm, Korteweg-de Vries-5 and other asymptotically equivalent equations for shallow water waves, [7] [8] J. Escher, O. Lechtenfeld and Z. Yin, Well-posedness and blow-up phenomena for the 2-component Camassa-Holm equation, [9] G. Gasper and M. Rahman, [10] [11] [12] [13] [14] [15] [16] [17] [18] B. Moon and Y. Liu, Wave breaking and global existence for the generalized periodic two-component Hunter-Saxton system, [19] [20] [21] [22] [23] [24] [25] [26] A. Sarria and R. Saxton, The role of initial curvature in solutions to the generalized inviscid Proudman-Johnson equation, [27] [28] [29] [30] [31] [1] [2] Jibin Li. Bifurcations and exact travelling wave solutions of the generalized two-component Hunter-Saxton system. [3] Jingqun Wang, Lixin Tian, Weiwei Guo. Global exact controllability and asympotic stabilization of the periodic two-component $\mu\rho$-Hunter-Saxton system. [4] [5] [6] [7] Min Li, Zhaoyang Yin. Blow-up phenomena and travelling wave solutions to the periodic integrable dispersive Hunter-Saxton equation. [8] Qunying Zhang, Zhigui Lin. Blowup, global fast and slow solutions to a parabolic system with double fronts free boundary. [9] Hammadi Abidi, Taoufik Hmidi, Sahbi Keraani. On the global regularity of axisymmetric Navier-Stokes-Boussinesq system. [10] Irena Lasiecka, Mathias Wilke. Maximal regularity and global existence of solutions to a quasilinear thermoelastic plate system. [11] Xiaojing Xu, Zhuan Ye. 
Note on global regularity of 3D generalized magnetohydrodynamic-$\alpha$ model with zero diffusivity. [12] Jong-Shenq Guo, Satoshi Sasayama, Chi-Jen Wang. Blowup rate estimate for a system of semilinear parabolic equations. [13] [14] Kazuo Yamazaki. Global regularity of the two-dimensional magneto-micropolar fluid system with zero angular viscosity. [15] Dugan Nina, Ademir Fernando Pazoto, Lionel Rosier. Global stabilization of a coupled system of two generalized Korteweg-de Vries type equations posed on a finite domain. [16] Zaihui Gan, Boling Guo, Jian Zhang. Sharp threshold of global existence for the generalized Davey-Stewartson system in $R^2$. [17] Caixia Chen, Shu Wen. Wave breaking phenomena and global solutions for a generalized periodic two-component Camassa-Holm system. [18] Tobias Black. Global generalized solutions to a parabolic-elliptic Keller-Segel system with singular sensitivity. [19] [20] Zhaoyang Yin. Well-posedness, blowup, and global existence for an integrable shallow water equation. 2018 Impact Factor: 1.008 Tools Article outline [Back to Top]
Parentheses in mathematical expressions show what is evaluated first. In some cases, the sequence of evaluation really matters, affecting the final result of the expression. In other cases, the sequence doesn't change the result. In the latter case, parentheses can still be used, for example for convenience, for better explanation, or for illustrative purposes. Your first expression, ${\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b}}$, can have parentheses anywhere without affecting its result. Please avoid using the same symbol for very different operations; when you really need an explicit symbol for scalar multiplication, you may write it like ${\phi \cdot \boldsymbol{a} \bullet \boldsymbol{\nabla} \boldsymbol{b}}$ or, with the explicit symbol for tensor multiplication, ${\phi \cdot \boldsymbol{a} \bullet \boldsymbol{\nabla} \otimes \boldsymbol{b}}$ $${\phi \bigl( \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b} \bigr) \!}= {\phi \, \boldsymbol{a} \cdot \bigl( \boldsymbol{\nabla} \boldsymbol{b} \bigr) \!}= {\phi \bigl( \boldsymbol{a} \cdot \boldsymbol{\nabla} \bigr) \boldsymbol{b}}= {\, \bigl( \phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \bigr) \boldsymbol{b}}= {\, \bigl( \phi \, \boldsymbol{a} \bigr) \! \cdot \boldsymbol{\nabla} \boldsymbol{b}}= {\, \bigl( \phi \, \boldsymbol{a} \bigr) \! \cdot \bigl( \boldsymbol{\nabla} \boldsymbol{b} \bigr) \!}= {\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b}}$$ The "trick to manage parentheses" is nothing more than understanding the operations used in an expression, their properties, and their arguments. The dot product is a composite operation: it is the tensor product followed by contraction. The dot product affects only tensors of complexity (rank) larger than zero (vectors and more complex tensors), and thus has no effect on scalars. The tensor product is the basic operation; it takes two tensors of any complexities and results in a tensor of aggregate complexity.
The tensor product of two vectors is often called the dyadic product. Scalar multiplication can be seen as the special case of a tensor product with scalar argument(s), as well as part of a linear combination. Contraction, sometimes called "trace", takes one argument (it is a unary operation), reducing the complexity of this argument by two via summation over adjacent indices. Polyadic representation (polyadic decomposition, component expansion, linear combination of basis vectors/dyads/triads/polyads with components) of tensors can help to see what is going on behind the scenes. Here you measure non-scalar tensors via some complete set of mutually independent vectors, called the basis vectors. Any vector can be written as a linear combination of basis vectors with coefficients, the so-called components of the vector within the basis currently used for measuring. The simplest basis is an orthonormal one, in which the basis vectors are mutually perpendicular to each other and each is one unit long, that is $\boldsymbol{e}_i \cdot \boldsymbol{e}_j = \delta_{ij}$ (https://en.wikipedia.org/wiki/Kronecker_delta) $$\boldsymbol{w} = w_1 \boldsymbol{e}_1 + w_2 \boldsymbol{e}_2 + w_3 \boldsymbol{e}_3$$ or, short and easy, $$\boldsymbol{w} = w_i \boldsymbol{e}_i$$ The differential operator "nabla" $\boldsymbol{\nabla}$ is a special vector whose components are coordinate derivatives $\partial_i \equiv \frac{\partial}{\partial x_i}$ applied to the term immediately following the nabla: $$\boldsymbol{\nabla} = \boldsymbol{e}_i \partial_i$$ Expanding your expression by measuring via an orthonormal basis, you have $$\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b} = \phi \, a_i \boldsymbol{e}_i \cdot \boldsymbol{e}_j \partial_j \bigl( b_k \boldsymbol{e}_k \bigr)$$ or, since the mutually orthogonal unit vectors of an orthonormal basis are constant, $$\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b} = \phi \, a_i \boldsymbol{e}_i \cdot \boldsymbol{e}_j \bigl( \partial_j b_k \bigr) \boldsymbol{e}_k$$ or, using $\boldsymbol{e}_i \cdot \boldsymbol{e}_j = \delta_{ij}$, $${\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b}}= {\phi \, a_i \delta_{ij} \partial_j \bigl( b_k \boldsymbol{e}_k \bigr) \!}= {\phi \, a_i \partial_i \bigl( b_k \boldsymbol{e}_k \bigr)}$$ or $$\phi \, \boldsymbol{a} \cdot \boldsymbol{\nabla} \boldsymbol{b}= {\phi \, a_i \delta_{ij} \bigl( \partial_j b_k \bigr) \boldsymbol{e}_k}= {\phi \, a_i \bigl( \partial_i b_k \bigr) \boldsymbol{e}_k}$$ As long as you keep the coordinate derivative $\partial_i$ applied to $\boldsymbol{b}$ (or to its components in some orthonormal basis), you can evaluate this expression in any sequence you wish. The story about cross products is much longer; I propose taking a look at my answer to Gradient of a dot product.
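As a numerical sanity check of the parenthesization claim (a sketch; here `grad_b[j, k]` stands in for the tensor $(\boldsymbol{\nabla}\boldsymbol{b})_{jk} = \partial_j b_k$ evaluated at one point):

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 2.5                              # scalar field value at a point
a = rng.standard_normal(3)             # the vector a
grad_b = rng.standard_normal((3, 3))   # (nabla b)_{jk} = d_j b_k at the same point

r1 = phi * (a @ grad_b)                      # phi ( a . (nabla b) )
r2 = (phi * a) @ grad_b                      # ( (phi a) . nabla ) b
r3 = phi * np.einsum('i,ij->j', a, grad_b)   # contraction written out explicitly
assert np.allclose(r1, r2) and np.allclose(r1, r3)
```

All parenthesizations agree because the scalar $\phi$ commutes through the single contraction over the index $i$.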
Advection in log coordinate \((1) ~ ~ ~ \dot F(r,t)= - V'(r) F(r,t) - V(r) F'(r,t)\) where \(r\) has the sense of the independent coordinate, for example, the size of particles; \(t\) has the sense of time; \(V(r)\) determines the velocity (speed, rate) of advection at coordinate \(r\) and is assumed to be a given function; \(F(r,t)\) is the unknown function, which is supposed to be found as the solution of equation (1) under some additional condition(s), for example \(F(r,0)=F_0(r)\), where \(F_0\) is a given function. Here and throughout TORI, by default, the prime after the name of a function indicates its derivative with respect to its first argument, and the dot above the name of a function indicates its derivative with respect to the last argument. This article considers the special case of transformation of the independent variable, namely, the change from variable \(r\) to variable \(x=\ln(r)\), so that \(r=\exp(x)\). In order to write the equation of advection in the new coordinate, let \((2) ~ ~ ~ \rho(x,t)=F(\mathrm e^x,t)~ \mathrm e^x\) The goal of this article is to write out the equation for the function \(\rho\) that follows from equation (1). It is desirable, however, that the resulting equation can also be interpreted as advection in the new coordinate.
Deduction From equation (2), the distribution \(F\) can be expressed through the new function \(\rho\) as follows: \((3) ~ ~ ~ \displaystyle F(r,t)=\frac{1}{r} \rho\Big(\ln(r),t\Big)\) Then its derivative with respect to the first argument, which appears in the right hand side of equation (1), can be written as follows: \((4) ~ ~ ~ \displaystyle F'(r,t)= \frac{-1}{r^2} \rho\Big(\ln(r),t\Big) + \frac{1}{r}\,\rho'\Big(\ln(r),t\Big)\, \frac{1}{r}\) Change of variable \(r \mapsto \exp(x)\) gives \((5) ~ ~ ~ \displaystyle F'(\mathrm e^x,t)= - \mathrm e ^{-2x} \rho(x,t)+\mathrm e^{-2x} \rho'(x,t)\) With this expression, from equation (1), the derivative of \(\rho\) with respect to the last argument can be written as follows: \((6) ~ ~ ~ \displaystyle \dot \rho(x,t)= \dot F(\mathrm e^x,t) ~ \mathrm e^x\) \(= \Big( - V'(\mathrm e^x)\, F(\mathrm e^x,t) -V(\mathrm e^x)\, F'(\mathrm e^x,t) \Big)~ \mathrm e^x\) \(=-V'(\mathrm e^x)\, F(\mathrm e^x,t)\,\mathrm e^x - V(\mathrm e^x)\,\mathrm e^x\, F'(\mathrm e^x,t)\) \(= -V'(\mathrm e^x)\,\rho(x,t) - V(\mathrm e^x)\,\Big(- \mathrm e ^{-x} \rho(x,t)+\mathrm e^{-x} \rho'(x,t)\Big)\) \(= -\Big(V'(\mathrm e^x) - \mathrm e^{-x} V(\mathrm e^x)\Big)\, \rho(x,t) -V(\mathrm e^x)\, \mathrm e^{-x} \rho'(x,t)\) In order to interpret equation (6) as the advection equation, we need to define a new function \(W\), which has the sense of a drift velocity; let \((7) ~ ~ ~ \displaystyle W(x)=V(\mathrm e^x) \, \mathrm e^{-x}\) Then its derivative is \((8) ~ ~ ~ \displaystyle W'(x)=V'(\mathrm e^x) -V(\mathrm e^x)\, \mathrm e^{-x}\) and its negative is the coefficient of the first term in the right hand side of equation (6). With the function \(W\), equation (6) can be rewritten as follows: \((9) ~ ~ ~ \displaystyle \dot \rho(x,t)= -W'(x)\, \rho(x,t) - W(x)\, \rho'(x,t) \) This is also an advection equation (without diffusion), of the same form as equation (1); this article is loaded to TORI in order to simplify the search for the deduction of this equation.
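The derivative formula (8) is easy to verify symbolically; a sketch using SymPy (assuming SymPy is available; \(V\) is left as an arbitrary smooth function, as in the article):

```python
import sympy as sp

x, y = sp.symbols('x y')
V = sp.Function('V')

# Equation (7): W(x) = V(e^x) e^{-x}
W = V(sp.exp(x)) * sp.exp(-x)

# Equation (8) as claimed: W'(x) = V'(e^x) - V(e^x) e^{-x}
claimed = sp.diff(V(y), y).subs(y, sp.exp(x)) - V(sp.exp(x)) * sp.exp(-x)

# The chain rule d/dx V(e^x) = e^x V'(e^x) cancels the e^{-x} factor.
assert sp.simplify(sp.diff(W, x) - claimed) == 0
```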
Equation (9) indicates that \(W(x)\) can be interpreted as the drift rate at coordinate \(x\), while \(\rho(x,t)\) has the sense of the density of the distribution of some quantity along this coordinate. Lognormal distribution The change of variable \(r \mapsto x=\ln(r)\) makes sense in order to apply Fourier methods to advection with a distribution similar to the lognormal distribution. This refers to the case when the initial condition for \(\rho\) is Gaussian, for example, \((10) ~ ~ ~ \displaystyle \rho(x,0)= \frac{1}{\sqrt{\pi}} \exp(-x^2)\) This corresponds to the lognormal initial condition for the function \(F\), as can be seen from the deduction below. This deduction refers to \(t\!=\!0\), so the last argument is omitted. Let \((11) ~ ~ ~ \rho(x)\, \mathrm dx = F(r)\, \mathrm d r= F(\mathrm e^x)\, \mathrm e^x\, \mathrm d x \) Then \((12) ~ ~ ~ \rho(x)=F(\mathrm e^x)\, \mathrm e^x\) \((13) ~ ~ ~ \rho\big(\ln(r)\big)=F(r)\, r\) \((14) ~ ~ ~ \displaystyle F(r)=\frac{1}{r} \rho\big(\ln(r)\big)\) For the Gaussian (10), this gives the initial lognormal function \((15) ~ ~ ~ \displaystyle F(r)=\frac{1}{r \sqrt{\pi}} \exp\Big(-\ln(r)^2\Big)\) This function is shown in the Figure at right. A similar (a little more general) form of this lognormal distribution appears at Wikipedia, http://en.wikipedia.org/wiki/Log-normal_distribution References The deduction in this article is copied from Dima2014, page 115.
In the random-effects model of meta-analysis a canonical representation of the restricted likelihood function is obtained. This representation relates the mean effect and the heterogeneity variance estimation problems. An explicit form of the variance of weighted means statistics determined by means of a quadratic form is found. The behavior of the... Computing the exponential of large-scale skew-Hermitian matrices or parts thereof is frequently required in applications. In this work, we consider the task of extracting finite diagonal blocks from a doubly-infinite skew-Hermitian matrix. These matrices usually have unbounded entries which impede the application of many classical techniques from a... We consider recent work linking majorization and trumping, two partial orders that have proven useful with respect to the entanglement transformation problem in quantum information, with general Dirichlet polynomials, Mellin transforms, and completely monotone sequences. We extend a basic majorization result to the more physically realistic infinit... We give a method for constructing a regularizing decomposition of a matrix pencil, which is formulated in terms of the linear mappings. We prove that two pencils are topologically equivalent if and only if their regularizing decompositions coincide up to permutation of summands and their regular parts coincide up to homeomorphisms of their spaces. We study the varieties of invariant totally geodesic submanifolds of isometries of the spherical, Euclidean and hyperbolic spaces in each finite dimension. We show that the dimensions of the connected components of these varieties determine the orbit type (or the z-class) of the isometry. For this purpose, we introduce the Segre symbol of an isomet... We prove that for almost square tensor product grids and certain sets of bivariate polynomials the Vandermonde determinant can be factored into a product of univariate Vandermonde determinants. 
This result generalizes the conjecture [Lemma 1, L. Bos et al. (2009), Dolomites Research Notes on Approximation, 2:1-15]. As a special case, we apply the r... For a given extension $A \subset E$ of associative algebras we describe and classify up to an isomorphism all $A$-complements of $E$, i.e. all subalgebras $X$ of $E$ such that $E = A + X$ and $A \cap X = \{0\}$. Let $X$ be a given complement and $(A, \, X, \, \triangleright, \triangleleft, \leftharpoonup, \rightharpoonup)$ the canonical match... Published in Linear Algebra and Its Applications A special class of matrix algebras, the rc-signature algebras, naturally emerged as a result of the study of a Multiplicative Decomposition Property of matrices (a multiplicative analogue of the Riesz Decomposition Property in ordered vector spaces). This note is devoted to the study of a tractable subclass of these algebras. It is proven that a ne... Published in Linear Algebra and Its Applications A matrix is almost strictly totally positive if all its minors are nonnegative and they are positive if and only if they do not contain a zero in their diagonal. An optimal test to check if a given matrix belongs to this class of matrices is presented. For this purpose, we establish a bijection between the set of nonzero entries of the matrix and a... Published in Linear Algebra and Its Applications We generalize several Cauchy-like inclusion regions for the zeros of a polynomial expressed in a basis defined by a three-term recurrence relation. Our results are obtained by applying linear algebra techniques to the comrade matrix of a polynomial. We pay special attention to the Newton and Chebyshev bases.
Suppose $\lambda$ is a successor of a singular cardinal. We will say $\lambda$ is fake if there is a transitive set $M$ with $\lambda \subseteq M$ satisfying $\mathrm{ZFC}^-$ (ZFC without powerset) in which there is a largest $M$-cardinal $\kappa < \lambda$ which is regular in $M$. We will say $\lambda$ is weak if we can find such $M$ and $\kappa$ such that $M \models \kappa^{<\kappa} = \kappa$. Question: If $\lambda$ is a fake successor of a singular, is it also weak? Some motivation: To obtain some properties around singular cardinals of high consistency strength, one often creates weak successors of singulars using Prikry-type forcing. But to obtain other such properties, one needs to use successors of singulars that are not weak. These two methods are in tension. In practice, the examples of fake successors of singulars are also weak, since the witnesses may be taken from inner models satisfying GCH. But I am wondering if there is a deeper explanation. Remark: If $\kappa$ is supercompact and $\mathrm{cf}(\mu)<\kappa<\mu$, then $\mu^+$ is not weak. Using Radin forcing, we can produce a model with many measurable cardinals in which every successor of a singular is weak.
For $x \in \mathbb{Z}_{200}$ solve this modular equation $$(x-1)(x-2) \equiv 0 \mod 200$$ I don't know how to deal with the fact that $x$ occurs in the second power, I mean $x^2$. I am asking for advice. Hint: $200 = 2^3 \times 5^2$. Solve first mod $2$ and mod $5$, then lift to $2^3$ and $5^2$, then Chinese remainder... I would recommend that you do not expand your parentheses, that is, don't write it as $x^2 - 3x + 2 \equiv 0$. Rather, look at what your problem is telling you: You have two consecutive integers, and their product is divisible by $200 = 2^3 \cdot 5^2$. That means that of the two numbers, one must be divisible by $25$, and one must be divisible by $8$. It's possible that one of the numbers is both, in which case your product is either $1\cdot 0$ or $0\cdot 199$, i.e. $x = 2$ or $x = 1$. The other case is that one of them is divisible by $25$ and the other is divisible by $8$. There are two possibilities: $x = 26$ gives $25\cdot 24$, and $x = 177$ gives $176\cdot 175$. There are no other multiples of $25$ (below $200$) which are one away from a multiple of $8$.
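The four solutions can be confirmed by brute force over $\mathbb{Z}_{200}$ (a quick sketch):

```python
# Check every residue class mod 200 against (x-1)(x-2) = 0 (mod 200).
solutions = [x for x in range(200) if (x - 1) * (x - 2) % 200 == 0]
print(solutions)   # [1, 2, 26, 177]
```

This matches the case analysis: $x \equiv 1$ or $2$ modulo both $8$ and $25$, combined via the Chinese remainder theorem.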
Here is my code:

\documentclass[letterpaper,10pt,fleqn]{article}
\setlength{\mathindent}{1cm}
\usepackage{geometry}
\geometry{textheight=9in, textwidth=6.5in}
\usepackage{amssymb}
\usepackage{amsmath}
\numberwithin{equation}{section}
\parindent = 0.0in
\parskip = 0.2in
\begin{document}
\section*{New Section}
\hrule
Text
\begin{align}
\phi =& \int_S \mathrm{d}a\\
=& \int_S f(u, v) |\mathbf{T}_u \times \mathbf{T}_v| \mathrm{d}u\mathrm{d}v
\end{align}
\end{document}

I don't want the number to be in front of "New Section", so I am using the asterisk \section*{New Section}, but I do want the equations to be numbered as (1.1) and (1.2), etc. However, I have found that using the asterisk sets the equation numbers to (0.1) and (0.2), and I have yet to find a way around this.
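One commonly suggested workaround (a sketch, not part of the original question): `\section*` leaves the section counter at 0, which is why `\numberwithin` produces (0.1). Stepping the counter manually after the starred heading makes equations pick up section 1:

```latex
\section*{New Section}
\stepcounter{section} % counter is now 1 (and the equation counter resets),
                      % so the following equations number as (1.1), (1.2), ...
\hrule
```

`\stepcounter{section}` also resets the equation counter because `\numberwithin{equation}{section}` registers equation as subsidiary to section.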
Video Transcript: Evaluating Expressions Let me tell you a story about evaluating expressions… "Once upon a time, there lived a girl in a tower. She was there every day, hour after hour." "Her name was Rap-Punzel, a mischievous damsel, whose hair was a handful." "The love of her life, Prince MC, was in love with Rap-Punzel... as in love as can be. They waited for the day and waited for the time when the prince could scale the tower with a short little climb." "But he's tired of waiting, what a waste of days. He should evaluate expressions but he doesn't know the way." Let's help Prince MC. At the start of this tale, Rap-Punzel's hair is 50 inches long. Each month, it grows 5 inches. Let's write this as an expression… 50 is the constant in this expression, meaning it does not change. We said before that her hair grows at a rate of 5 inches per month. The number of months is the unknown quantity, which we will call x. We could also use any other variable. Expression for the length of Rap-Punzel's hair: The number five is the coefficient of the variable. So… the expression to describe the length of Rap-Punzel's hair after a variable number of months is 5x + 50. Ok, let's use our expression to evaluate the length of Rap-Punzel's glorious hair over a period of a few months. Let's set up a function table… When evaluating expressions, always remember to use the correct order of operations. Calculation of further growth: After 1 month, the length of her hair is equal to 5(1) + 50, for a total length of 55 inches. After 2 months, the length is 5(2) + 50, which equals 60. Let's fill in the table... OMG, after 6 months, Rap-Punzel's hair is a staggering 80 inches long! But disaster struck, and Rap-Punzel discovered that her glorious hair was not so glorious after all... it was full of split ends. (Pause). Impulsively... she cut her hair. Now it's just 10 inches long.
To keep her hair healthy, Rap-Punzel must trim her hair two inches every month. Despondent, Prince MC wondered: how much longer must he wait to be with his beloved? Let's modify our expression: 5x for the growth per month, minus 2x for the trimming per month, plus ten for the starting length. Insert the right numbers, then simplify the expression: 3x + 10. Prince MC wondered, '...after six more months, how long will her hair be?...' Evaluate the expression, this time letting x equal six… using the correct order of operations, 3(6) = 18; plus ten, it is equal to twenty-eight. Her hair will only be twenty-eight inches long! "Still waiting outside, Prince MC began to doubt. Without Rap-Punzel in his life, he's like a plant in a drought. He had an idea, on which he was keen." What's this? OH NO... he bought magic beans... Comment: if you need to get rid of the coefficient, as in 0.5x, it is best to divide. So divide both sides by 0.5 and that will give you the answer. Evaluating Expressions Exercise Would you like to apply what you have learned? With the exercises for the video Evaluating Expressions you can review and practice it. Calculate the length of hair after a given number of months. Hints: You can use a linear function to represent the length of Rap-Punzel's hair. The unknown quantity is the number of months. Let's look at another example: The amount of money you receive on your birthday increases by $\$5$ each year. You start with $\$15$. After one year, you receive $\$15+\$5=\$20$. After two years, you receive $\$20+\$5=\$25$. You have to multiply the number of months that have passed by the rate her hair grows, then add $50$. Solution: In the beginning, Rap-Punzel's hair is $50$ inches long.
Because her hair grows $5$ inches per month, we know that Rap-Punzel's hair grows as follows: After one month: $50+5=55$ inches. After two months: $55+5=60$ inches. After three months: $60+5=65$ inches. After four months: $65+5=70$ inches. After five months: $70+5=75$ inches. After six months: $75+5=80$ inches. $\begin{array}{c|c|c|c|c|c|c} x&1&2&3&4&5&6\\ \hline \text{length}&55&60&65&70&75&80 \end{array}$ Label the different parts of the expression. Hints: A constant is independent of the variable. The term above represents the length of Rap-Punzel's hair after $x$ months. In the beginning, Rap-Punzel's hair is $50$ inches long. Solution: This expression represents the length of Rap-Punzel's hair after $x$ months. The coefficient of the variable is the rate of growth, $5$. $x$ represents the unknown number of months. Finally, our constant is the starting length of Rap-Punzel's hair in inches, $50$. Find an equation that represents Rap-Punzel's hair length after $x$ months. Hints: Rap-Punzel's hair grows $5$ inches per month. Her hair has grown: 10 inches after $2$ months, 15 inches after $3$ months. To avoid split ends, Rap-Punzel has to trim her hair. This means she has to cut her hair. Is her hair getting longer or shorter? You can simplify expressions and equations by combining like terms. For example: $2$ apples plus $3$ bananas plus $4$ apples results in $6$ apples and $3$ bananas. Solution: Poor Rap-Punzel and poor Prince MC. Rap-Punzel's hair grows $5$ inches per month. But she has split ends. To avoid getting split ends, she has to cut her hair $2$ inches per month. How can we write this as a mathematical expression? Let's represent the rate of growth: $5x$. Next, write the expression to represent Rap-Punzel trimming her hair: $-2x$. Finally, we add the starting length, $10$. We can combine the like terms to get $3x+10$ as our final expression. Decide which function table belongs to which equation.
Hints: To match the correct function table, make sure more than one $x$-$y$ pair satisfies the equation. For each equation on the right, plug in several different $x$s and compare the answers. Solution: A function table is a useful tool used to set up a linear equation. This is what it looks like: $\begin{array}{c|c|c|c} x& & & \\ \hline f(x)&&& \end{array}$ You can plug in different values for the variable $x$ and check the corresponding $f(x)$. $f(x) = 3x + 4$: $\begin{array}{rcl} 3(3) + 4 &=& \\ 9 + 4 &=& 13 \end{array}$ So our function table should look like this: $\begin{array}{c|c|c|c|c|c} x& 1&2 &3&4&5 \\ \hline f(x)&7&10&13&16&19 \end{array}$ $~$ $f(x) = 4x + 3$ $~$ $f(x) = 2x + 5$ $~$ $f(x) = 5x + 2$ Evaluate how long Rap-Punzel's hair is after one year. Hints: First, determine the expression that represents the length of Rap-Punzel's hair after $x$ months. $x$ can be any number of months: $1$, $2$, $3$, ... Rap-Punzel's starting hair length is the constant. To write the simplified expression, the coefficient can be found by taking the growth rate of Rap-Punzel's hair minus the monthly trimming needed to avoid split ends. Solution: First, let's write an expression representing the length of Rap-Punzel's hair after $x$ months: To start, Rap-Punzel's hair is $10$ inches long. This is our constant. The variable is the unknown quantity of months, which we'll call $x$. The coefficient can be found by calculating the net growth per month of Rap-Punzel's hair. In this case, her hair grows $5$ inches per month, but she cuts $2$ inches each month as well. So our coefficient becomes $5-2=3$. To determine the length of Rap-Punzel's hair after one year, we can plug in $x=12$. $\begin{array}{rcl} 3(12)+10&=&\\ 36+10&=&46 \end{array}$ So after one year, Rap-Punzel's hair is $46$ inches long. The tower is a BIT taller than $46$ inches... Poor Prince MC. Determine how long it takes until the magic beanstalk grows to a height of $100$ inches.
Hints: The expression is: coefficient $\times~x~+$ constant. To isolate the variable $x$, use opposite operations: Multiplication ($\times~\longleftrightarrow~\div$), Division ($\div~\longleftrightarrow~\times$), Addition ($+~\longleftrightarrow~-$), Subtraction ($-~\longleftrightarrow~+$). Check your solution by plugging in your value for $x$: $0.5x + 20$. Does the expression simplify to $100$? Solution: Waiting for Rap-Punzel's hair to grow takes too long and learning how to climb is too expensive, so Prince MC decides to buy some magic beans to grow a plant. The beanstalk has: an initial height of $20$ inches (this is the constant); a growth rate of $0.5$ inches per day (this is the coefficient); the variable $x$ for the unknown number of days. To figure out the number of days the beans need to reach $100$ inches, we must solve the equation for $x$: $0.5x+20=100$. We use opposite operations to isolate $x$: $\begin{array}{rclcl} 0.5x + 20 &=& 100 \\ \color{#669900}{-20} && \color{#669900}{-20} \\ 0.5x &=& 80 \\ \color{#669900}{\times 2} && \color{#669900}{\times 2} \\ x &=& 160 \end{array}$ So, after $160$ days, more than $5$ months, the magic beanstalk will be $100$ inches high. Is this tall enough to reach Rap-Punzel?
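The two computations from the exercises can be sketched in a few lines (a sketch; the function name is illustrative): the function table for 3x + 10 and solving 0.5x + 20 = 100 with opposite operations.

```python
def hair_length(months):
    # net growth: 5 inches growth - 2 inches trim = 3 inches/month, starting at 10
    return 3 * months + 10

table = {x: hair_length(x) for x in range(1, 7)}
print(table[6])          # 28  (inches after six months)

# Beanstalk: undo "+20" first, then undo "*0.5" (i.e. divide by 0.5).
days = (100 - 20) / 0.5
print(days)              # 160.0
```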
By Maschke's Theorem, every direct product of representations is decomposable into a direct sum of representations; that is, the function you are integrating can be rewritten as a sum of functions whose symmetry properties correspond to specific irreducible representations. As you note, integrating an odd function (which has a certain symmetry, namely antisymmetry with respect to $\sigma_y$, the mirror "plane" of the y-axis) over all space gives zero, while integrating an even function (which is symmetric with respect to $\sigma_y$) generally does not. Since integration is linear (that is, the integral of a sum is the sum of the integrals), we can consider the integral of each irrep separately. If the function being integrated is antisymmetric with respect to some symmetry, it will be zero when integrated over all space. The only way for the integral not to be zero is if it is symmetric with respect to all possible symmetries, i.e. it transforms as the totally symmetric irrep. Expressed in equation form:$$\int{\prod_{i}\Gamma_i\,\mathrm d\tau}=\int{\sum_j\Gamma_j\,\mathrm d\tau}=\sum_j\int{\Gamma_j\,\mathrm d\tau}$$and$$\int{\Gamma}\,\mathrm d\tau=\left\{ \begin{array}{@{}ll@{}} 0, & \text{if}\ \Gamma \text{ contains any antisymmetry} \\ \text{not necessarily } 0, & \text{otherwise} \end{array}\right.$$ Then it is clear that the only time the integral is nonzero is when, amongst the irreducible representations of the direct product (regardless of its dimensionality), the totally symmetric one is present; all other ones integrate out to zero. I should clarify that when I write the integral of $\Gamma$, I really mean the integral of some awful, complicated looking function that has the symmetry properties of $\Gamma$, which is why I used $\prod$ and $\sum$ instead of $\otimes$ and $\oplus$. If your question was instead how do you know that the functions with a given antisymmetry really do integrate to zero, let's consider what it means to have a certain symmetry.
It means that there is a particular line, plane, or point about which the value of the function is redundant. That is, if you specify it on one side, then the value is exactly determined on the other. We are integrating over all space, which means we can choose bounds of integration that exploit this symmetry. This separates your integral into a non-symmetric piece and a piece that uses the symmetry. You first integrate over the non-symmetric piece and get something, and then you integrate that over the symmetry-containing part. If that symmetry is antisymmetry, as will be the case with at least one symmetry of any irrep other than the totally symmetric one, this will cancel the non-symmetric part with an equal and opposite non-symmetric part, and the whole integral will be zero. Regardless of the dimensionality of the irrep being integrated, it has at least one symmetry element over which it can be integrated that is antisymmetric. Hence, only integrals of functions that are totally symmetric are non-zero. As a basic example, consider the $\int{x\,\mathrm dx}$ integral that you started with. Since this function has a symmetry (namely antisymmetry with respect to $\sigma_y$), I can make the following choice of bounds:$$\int_{-\infty}^\infty{x \,\mathrm dx} = \int_{-\infty}^0{x \,\mathrm dx}+\int_{0}^\infty{x \,\mathrm dx} =-\int_{0}^\infty{x \,\mathrm dx}+\int_{0}^\infty{x \,\mathrm dx}=0$$ I can choose to integrate only over the nonsymmetric part and then use the symmetry for the rest of space. The presence of symmetry guarantees that I can do that. Remember, the irrep label is just telling you the symmetry properties of some complicated function. If $x$ in the above equation were some function $f$ that I knew nothing about except that it has this symmetry, I could still make this choice of bounds and use this property to get a 0 integral. EDIT: To address orthocresol's question below: How does this work for the $E$ set in $C_{3v}$?
Let us first think of a function with $E$ symmetry. The $p_x+p_y$ orbitals are such a candidate. Let's consider a top-down view of a $C_{3v}$ molecule. Now, I will consider one of the mirror planes to lie in the $xz$-plane (see Figure 1). I can do this because I can rotate the $p_x+p_y$ ensemble anywhere in the $xy$-plane. That is, there is some linear combination of $p_x+p_y$ such that there is a new set $p_{x'}+p_{y'}$ where the mirror plane lies in the $x'z$-plane. Now we set about integrating the upper half plane. Clearly, half of the negative lobe of the $p_x$ orbital will cancel half of the positive lobe, leaving only the integral of the positive lobe of $p_y$. Now we consider the lower half plane: the two halves of the $p_x$ lobes cancel again and we have only the integral of the negative lobe of $p_y$. Since the integral of the upper-half plane is the negative of the lower-half plane, the integral over all space is 0. Now, why was it important to consider $p_x+p_y$ together? This is what allowed us to orient the orbitals with respect to the mirror plane. Consider a $p_x$ orbital alone rotated away from the axis given. It will have no defined symmetries with respect to the plane (reflecting across it will not map the orbital to itself or its negative) and there is no way to rotate it without the $p_y$. This is why they need to transform together in $C_{3v}$. Indeed, the reason why $E$ functions have 0 character under this symmetry is because the $p_x$ got sent to $p_x$ and the $p_y$ to $-p_y$, and so the whole ensemble was sent to $1-1=0$. Now, hold on you say, can't I use similar arguments to imply that if I integrated a $p_z$ orbital (which is $A_1$ in $C_{3v}$) over all space, I would also get zero? The answer is yes. The $p_z$ orbital is just not zero by symmetry in $C_{3v}$. I alluded to this above: the integral of a totally symmetric function in a given point group can still be zero.
However, by describing the point group as $C_{3v}$, we are indicating that there is something special about the $z$-axis that breaks the symmetry and therefore precludes coming to this conclusion based on symmetry alone. Consider the deformation of planar $\ce{NH_3}$ to pyramidal $\ce{NH_3}$. In the former ($D_{3h}$), nothing distinguishes the positive $z$ from the negative $z$ axis and a $p_z$ orbital is not totally symmetric ($A_2''$), and so we could conclude that it is zero by symmetry. As we pyramidalize to $C_{3v}$ though, the negative $z$ axis becomes different (it's got hydrogens there while the positive $z$ does not) and we can no longer conclude by symmetry that the integral of the $p_z$ orbital will be 0 (indeed, if it mixes even slightly with those hydrogens, it won't be). If it didn't mix though, its integral would still be 0 (it's still just a $p_z$ orbital); we just couldn't tell that assuming $C_{3v}$ symmetry.
I came across this question while thinking about whether integrability is a local property. I first thought that the MSE question What's the definition of a "local property"? addresses this issue, but that question only defines a local property for a topological space, not for a function. Thus, my question is: What is the definition for a property to be a local property of a function? A property $P$ is local when, for all spaces $X$, if $\{U_i\}$ is an open cover of $X$ and all the $U_i$ have $P$, then $X$ has $P$. From this one may deduce that A property $P$ is local for a function $f:X\to Y$ iff, whenever $\{U_i\}$ is an open cover of $X$ and all restrictions $f|_{U_i}$ have the property $P$, then $f$ also has the property $P$. Is this the right definition? Update: Originally my question contained the following flawed argument, as pointed out by Daniel Fischer in the comments. However, the main question remains... However, this does not seem to be the right definition. Let $X$ be the long line, which we get by pasting $[0,1)$ together uncountably many times. We define $g:X\to\mathbb R$ with $g(x)=\sin(2\pi x)$ on each interval $[0,1)$. As J. Loreaux argued in his answer to the question Does this intuition for "calculus-ish" continuity generalize to topological continuity?, the function $g$ is not continuous. This contradicts the above attempt at a definition of a function's local property, since there is an (uncountable) open cover $\{U_i\}$ of the long line $X$ for which all $g|_{U_i}$ are continuous (for example when all $U_i$ are bounded), and continuity is one of the properties I would see as being a local property. So: What is the definition of a local property for a function?
Neutrinos are light, uncharged leptons. The neutrino tag should be applied to questions relating to neutrino properties or interactions involving neutrinos.

Neutrinos are produced in nuclear reactions involving the weak force. Sources that are useful for experimental efforts include the sun (matter type, electron flavored neutrinos), nuclear fission reactors (anti-matter type, electron flavored neutrinos), the interactions of cosmic rays with the atmosphere, and the interactions of man-made particle beams with matter (both matter and anti-matter, and all flavors). Having neither charge nor color, neutrinos interact only by way of gravity and the weak nuclear force. Both of these forces are, well, weak, and the neutrinos have a relatively low cross-section for interactions with ordinary matter.

Flavor?

Both the charged and the uncharged leptons come in three types which seem to be identical except for mass. The charged leptons are the electron, the muon, and the tau-lepton (often just called "a tau"). For each of these there is a corresponding neutrino, but see the section on mixing below.

Brief History

A light uncharged particle was first proposed in 1930 by Wolfgang Pauli to solve the problem of the beta decay spectrum. Pauli called his particle a "neutron", but that name was later adopted for the uncharged nucleon. The name "neutrino" (meaning "little neutral one") was coined by Enrico Fermi in 1934. Neutrinos were originally modeled as massless for simplicity, and in the absence of any measurable mass the assumption was adopted as a given. Neutrinos (actually anti-neutrinos) from a fission reactor were first detected experimentally in 1956 by Cowan and Reines using a delayed coincidence technique that remains the standard for reactor neutrinos to this day. Starting in 1970 Raymond Davis Jr., Kenneth C.
Hoffman and Don S. Harmer tried to measure the solar neutrino flux using a large tank full of cleaning fluid placed deep in the Homestake mine in South Dakota. They got a figure too low to match theories of stellar structure. This mismatch persisted for two decades, and required a change of theory to resolve: the neutrinos must be considered as massive (albeit light) and allowed to mix. Experiments at Sudbury, Canada, the Kamioka mine facility in Japan, various nuclear reactor complexes, and at several accelerator sites around the world would eventually show clear evidence of neutrino mixing. Current efforts are focused on determining the parameters of the mixing matrix (two mass differences and all three mixing angles are known), searching for evidence of CP violation in the neutrino sector, and determining if the neutrinos are Dirac or Majorana particles.

Mixing

Mixing occurs because the flavor states of the neutrinos, written $\nu_e, \nu_\mu, \nu_\tau$, are not eigenstates of the free Hamiltonian. Those are called the "mass states" and are written $\nu_1, \nu_2, \nu_3$. In a mixing experiment, (anti-)neutrinos are produced in one location (production occurs in a flavor state) and allowed to propagate to another location where they are detected (again, detection is of flavor states). While the neutrinos travel, they are acted upon by the free Hamiltonian, which does not keep a pure flavor state pure---that is, it mixes them. The result is that the distribution of flavor states detected may not match the distribution of flavor states created. Mixing was actually proposed by Gribov and Pontecorvo in 1968 (even before the Homestake experiment). [Phys. Lett. 28B, p. 493]

Open Questions

- Measure the remaining parameter of the mixing ($\delta_{CP}$) and refine the values of the known parameters.
- Does neutrino mixing violate CP (i.e. is $\delta_{CP} \ne 0$)?
- Mass hierarchy problem.
- Dirac or Majorana nature?
- Are there additional neutrino states (either heavy weakly interacting neutrinos or sterile neutrinos)?
- What is up with the new result from OPERA? Do they really go faster than light? (This appears to be solved, and Einstein is still right.)
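The mixing described above is often illustrated with the textbook two-flavor vacuum oscillation formula, a simplification of the full three-flavor matrix. The sketch below is my own illustration; the parameter values are made up for demonstration.

```python
import math

def survival_probability(theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor vacuum survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.267 * dm2 * L / E),
    with dm2 in eV^2, L in km, and E in GeV."""
    phase = 1.267 * dm2_ev2 * L_km / E_GeV
    return 1.0 - math.sin(2.0 * theta) ** 2 * math.sin(phase) ** 2

# At the source (L = 0) the flavor content is unchanged:
print(survival_probability(math.pi / 4, 2.5e-3, 0.0, 1.0))  # 1.0
```

The distance-dependent sine factor is what makes the detected flavor distribution differ from the produced one, exactly as described in the Mixing section.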
Here's a very frustrating question that I have been stuck on for some time. I believe that my question could fit in a general framework of what happens when you restrict $L^2$-cohomology classes on a Shimura variety to a sub-Shimura variety. However I formulate the question for the special case I am interested in. Let $A_2$ be the moduli stack of principally polarized abelian surfaces. To the irreducible finite dimensional representation of $\mathrm{Sp}(4)$ of highest weight $a \geq b \geq 0$ we attach a local system $V_{a,b}$ on $A_2$. Suppose $(a,b) \neq (0,0)$. One can prove that $H^4_c(A_2,V_{a,b})$ vanishes unless $a=b$ is even, in which case $H^4_c(A_2,V_{2k,2k})$ is pure of Tate type and of the same dimension as the space of cusp forms of weight $4k+4$ for $\mathrm{SL}(2,\mathbf Z)$. The map $H^4_c \to H^4_{(2)}$ to the $L^2$-cohomology is an isomorphism. In terms of automorphic representations, these cohomology classes can be described as follows: for any level 1 cusp form $\pi$ on $\mathrm{GL}(2,\mathbf A)$ of weight $4k+4$ we consider the unique irreducible quotient of $$ \mathrm{Ind}_{P(\mathbf A)}^{\mathrm{GSp}(4,\mathbf A)} \left( \vert \cdot \vert^{1/2} \pi \otimes \vert \cdot \vert^{-1/2} \right)$$ where $P$ denotes the Siegel parabolic subgroup (whose Levi factor is $\mathrm{GL}(2) \times \mathrm{GL}(1)$); this is a discrete automorphic representation for $\mathrm{GSp}(4)$ which contributes a Tate type class to the $L^2$-cohomology in degrees $2$ and $4$. There is a map $\mathrm{Sym}^2(A_1) \hookrightarrow A_2$ given by taking a pair of elliptic curves to their product. We can also restrict $V_{a,b}$ to $\mathrm{Sym}^2(A_1)$. By determining the branching formula for $\mathrm{SL}(2)^2 \rtimes S_2 \subset \mathrm{Sp}(4)$ we find that the trivial local system occurs as a summand in the restriction of $V_{a,b}$ to $\mathrm{Sym}^2(A_1)$ if and only if $a=b$ is even, in which case it appears with multiplicity $1$.
So $H^4_c(\mathrm{Sym}^2(A_1),V_{2k,2k})$ is also pure of Tate type but $1$-dimensional. Again we could think about $L^2$-cohomology and it would not make a difference. MAIN QUESTION: Is the restriction map $H^4_c(A_2,V_{2k,2k}) \to H^4_c(\mathrm{Sym}^2(A_1),V_{2k,2k})$ nonzero for $k \geq 2$? Any ideas or pointers at all would be appreciated. I am very ignorant about automorphic representations, Shimura varieties etc. and I am naively hoping that there exists some general method for answering questions of this form. This question arose from the paper http://arxiv.org/abs/1210.5761. A positive answer would imply that all even cohomology of $\mathcal{\overline{M}}_{2,n}$ is tautological for $n < 20$, and that the Gorenstein conjecture fails on $\mathcal{\overline{M}}_{2,20}$.
How does TeX decide how to size a middle delimiter? When I want to show evaluation of limits of an integral, I would type $$\int_1^2 x\; dx=\frac{x^2}{2}|_1^2=4-\frac{1}{2}=\frac{7}{2}.$$ How do I get the vertical bar showing the limits big enough? I have found \bigg and \Big but I would like it to autosize like \left. It seems harder because there isn't a left side to figure out what is inside. I tried \left with a space and \right\mid, but didn't find success. On math.stackexchange it was stated I just needed \left. as a period is a delimiter. Is there a list of delimiters? You should use: \[\int_1^2 x\; dx=\left.\frac{x^2}{2}\right|_1^2=4-\frac{1}{2}=\frac{7}{2}.\] LaTeX measures the height of whatever is between the \left and \right and then adjusts the delimiters (. and | in this case) to be big enough to encompass it. Since in your example the limit evaluation only relates to the fraction in front of it, one can use \left and \right without the need of \middle. Since we don't need a left delimiter we use ., which stands for "nothing". Even if we don't want a left delimiter, \left has to be there to tell LaTeX where to start measuring the height. \middle is only needed when three or more delimiters are used, as is the case in this example: \[A = \left\{ \frac{x_i}{i} \middle| i\in \mathcal{I} \right\} \] where we would want the collection's curly braces to match with the inner line. Giving a list of delimiters would be hard, since there are so many of them, and a lot of packages introduce new ones. Have a look at The Comprehensive Symbol List to get an idea of what is generally in use. I mostly use ( ) \{ \} [ ] and |. Some other notes: I tend not to use \Big etc. since these don't resize automatically. You should use \[ and \] to denote a display math environment instead of $$, which is plain TeX rather than LaTeX.
Have a look at this question about integral evaluation. When I say "delimiters", I don't mean that this is a class of LaTeX symbols that is special in any way; I just mean symbols that can be scaled.
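The constructions discussed in the answer can be collected into one compilable snippet (the minimal-document boilerplate is my addition):

```latex
\documentclass{article}
\begin{document}
% \left. is an invisible left delimiter; | is sized to match the fraction:
\[ \int_1^2 x \, dx = \left. \frac{x^2}{2} \right|_1^2 = \frac{7}{2} \]
% \middle sizes an inner delimiter between \left and \right:
\[ A = \left\{ \frac{x_i}{i} \middle| \; i \in \mathcal{I} \right\} \]
\end{document}
```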
I have the following problem: I'm calculating the sample covariance matrix in the frequency domain ($y_{k}$ is the FFT of a time-domain $k$-th symbol vector signal, basically a simulated received signal) as follows: $$\mathbf{R}=\frac{1}{N_{f}}\sum_{k=0}^{N_{f}-1}y_{k}y_{k}^{H}$$ The next step in my algorithm is to solve an optimization problem; as an essential part of it I need to compute the eigenvalues (by doing SVD, the power method, etc. in MATLAB) of the following expression: $$\mathbf{R}^{-1}\mathbf{A}$$ To not get into too many details, because I believe my issue comes from a numerical problem and many insights into the actual algorithm/context aren't needed, let's assume $\mathbf A$ is simply a predefined matrix that I compute. The REAL ISSUE appears now, because $\mathbf R$ is ill-conditioned, as MATLAB tells me. So the inversion seems to be failing and the eigenvalues I'm obtaining are incredibly small due to this issue (in fact I only get 1 eigenvalue different from zero). The dimensions of $\mathbf R$ are typically large (since they are compressed it depends on the actual compression ratio I'm using, but let's say $32\times 32$ for example). One approach to solve this problem I found is to use diagonal loading: $$\left(\mathbf R+\sigma\mathbf I\right)^{-1}\mathbf A\quad\text{with}\quad\sigma > 0$$ This seems to solve the problem; the eigenvalues are now scaled due to this background "noise". My question is how I can obtain the truly well-scaled eigenvalues, because later on in my algorithm these eigenvalues will serve as weights, since I'm considering them as an actual power estimate. Note: I've been playing with the cond() function in MATLAB; for $\sigma= 0.05$, cond(R+sigma*I) = 2, which is not bad I believe. Feel free to ask more questions about the problem. But I think my question relates to a purely numerical issue involving eigenvalues and covariance matrices.
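A toy numerical sketch (my own illustration, not the asker's actual data) of why diagonal loading helps: a sample covariance built from fewer snapshots than dimensions is rank-deficient, and adding $\sigma\mathbf I$ to a Hermitian matrix shifts every eigenvalue up by exactly $\sigma$.

```python
# Rank-1 2x2 "covariance" from a single snapshot y = [1, 2]: R = y y^H.
R = [[1.0, 2.0],
     [2.0, 4.0]]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

print(det2(R) == 0.0)  # True -> R is singular, R^{-1} does not exist

sigma = 0.05  # diagonal loading level (arbitrary here)
R_loaded = [[R[0][0] + sigma, R[0][1]],
            [R[1][0], R[1][1] + sigma]]

print(det2(R_loaded) > 0.0)  # True -> the loaded matrix is invertible

# R has eigenvalues {5, 0}; R + sigma*I has {5.05, 0.05}, so subtracting
# sigma from each loaded eigenvalue recovers the originals exactly.
```

Note that this exact shift holds for the eigenvalues of $\mathbf R$ itself; for products like $\mathbf R^{-1}\mathbf A$ the loading biases the spectrum in a less transparent way, which is part of why recovering "true" power estimates after loading is delicate.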
Purpose: This tutorial demonstrates the usage of the LayerOptics package, which allows for the determination of optical coefficients of layered systems. We will demonstrate the functionality of the package on the example of bithiophene thin films.

0. How to get the source code

Before starting, we need to download the python scripts of the package. In order to do this, first create a working directory LayerOptics and move inside it. From now on the symbol $ will indicate the shell prompt.

$ mkdir LayerOptics
$ cd LayerOptics

A tar-file can be downloaded from the LayerOptics webpage. Once you have downloaded the tar-file LayerOptics_1.1.tar.gz to the working directory LayerOptics, we extract the constituent files:

$ tar xvf LayerOptics_1.1.tar.gz

Now the directory LayerOptics contains the three files: LO-setup.py, LO-execute.py, generate_ex.py. In order to perform calculations, we also need the dielectric tensor of the bithiophene thin film, which can be obtained either from BSE or TDDFT calculations. In this tutorial, we will not calculate the dielectric tensor explicitly, but use the one obtained from converged BSE calculations. The corresponding tar-file epsilon.tar.gz can be downloaded here. Once again, we extract from it the files EPSILON_NAR_BSEsinglet_SCRfull_OCij.OUT (where i,j = 1,2,3 are the cartesian directions) with the following command:

$ tar xvf epsilon.tar.gz

1. Introduction

Theoretical spectroscopy is a powerful tool to describe and predict optical properties of materials. Nowadays, routinely performed first-principles calculations only provide bulk dielectric tensors in the coordinate system of the unit cell. However, these outputs are hardly comparable with experimental data, which are typically given by macroscopic quantities, crucially depending on the laboratory setup.
LayerOptics is a versatile and user-friendly implementation, based on the 4$\times$4 matrix formalism for the Maxwell equations, to compute optical coefficients in anisotropic layered materials. This formalism is a generalization for anisotropic media of the 2$\times$2 approach, commonly used for isotropic layered systems. More details on the approach can be found in a recent publication, Ref. [1].

We assume a system of layers, which are stacked in negative $z$-direction and are infinitely extended in the $xy$-plane. Each layer has a frequency-dependent dielectric tensor and a specific thickness. The top of the layered structure is occupied by a semi-infinite vacuum, the bottom by a semi-infinite substrate with constant isotropic dielectric function $\epsilon_s$. Between the vacuum and the substrate, there are the active layers with dielectric tensor $\mathbf{\epsilon}_i$ and thickness $t_i$, where $i$ runs over the number of intermediate layers. The beam angle is described by the azimuth angle $\Theta$ relative to the $z$-axis. The polarization of the beam is described by the angle $\delta$ relative to the positive $y$-axis, such that $\delta$=$0$ degrees corresponds to fully parallel polarization (p-polarization) and $\delta$=$90$ degrees to fully perpendicular polarization (s-polarization).

LayerOptics is executed in two steps:

At first, the dielectric tensor of each layer can be rotated by three Euler angles to simulate different orientations of the layer with respect to the substrate and the incoming light. For this purpose, the script LO-setup.py is used, which activates an interactive script to guide the user through the setup. The rotation is performed by three successive rotations with three independent Euler angles $\alpha$, $\beta$, and $\gamma$. The direction of the rotations is illustrated in the following schema. More information can be found in Ref. [1].

In the second step, the Fresnel coefficients, namely absorbance, reflection and transmission coefficients, are calculated for the layered system. Here, the beam angle and the polarization of the incoming light have to be specified. For this purpose, the script LO-execute.py is used, which also provides an interactive interface.

2. Polarization-dependent absorption coefficient

i) Setup

As an example, we study the polarization-dependent absorption of a thin film of bithiophene crystal. This material is characterized by a monoclinic unit cell, including two inequivalent bithiophene molecules, as shown in the figure below. In such a crystal structure, the dielectric tensor has four independent components. To execute the setup script type the following line in your bash shell:

$ ./LO-setup.py

At this point, an interactive interface will appear on the screen, asking the user to provide the necessary information to post-process the dielectric tensor. Specifically, the number of active layers has to be specified, as well as the Euler angles defining the rotation of the tensor, mimicking the rotation of the sample. Since we are considering here only a bithiophene thin film, type 1 after the line asking for the layer index. In this first example, no rotation is applied to the dielectric tensors. Hence, all Euler angles are to be set to zero.
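The tensor rotation performed by LO-setup.py amounts to a similarity transform $\epsilon' = R\,\epsilon\,R^{T}$. The snippet below is my own illustration (the actual Euler-angle convention used by the package is documented in Ref. [1], and the tensor values are made up); it shows the special case of a rotation about $z$ by $\alpha$ with $\beta$=$\gamma$=$0$:

```python
import math

def rot_z(alpha):
    """Rotation matrix about the z-axis by angle alpha (radians)."""
    c, s = math.cos(alpha), math.sin(alpha)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

# Made-up symmetric dielectric tensor (one frequency point):
eps = [[2.0, 0.1, 0.0],
       [0.1, 3.0, 0.0],
       [0.0, 0.0, 2.5]]

R = rot_z(math.radians(45.0))
eps_rot = matmul(matmul(R, eps), transpose(R))  # eps' = R eps R^T

# The trace (a rotational invariant) is unchanged by the transform:
print(round(sum(eps_rot[i][i] for i in range(3)), 6))  # 7.5
```

A full Euler rotation would compose three such matrices (about $z$, then $x'$ or $y'$, then $z''$, depending on convention) before applying the same sandwich product.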
At this point, the interactive script should look like the following:

_________________________________________________________________
|                                                                 |
|                  WELCOME TO LAYEROPTICS (SETUP)                 |
|                                                                 |
|                   written by Christian Vorwerk                  |
|_________________________________________________________________|

Please provide the layer index:
>>>>>>>> 1
Please provide the Euler angle ALPHA (deg):
>>>>>>>> 0
Please provide the Euler angle BETA (deg):
>>>>>>>> 0
Please provide the Euler angle GAMMA (deg):
>>>>>>>> 0

After running the setup script, 9 additional files will appear in the working directory, labelled 1_ij.OUT, where i,j = 1,2,3 label the optical components, following the convention of the files EPSILON_NAR_BSEsinglet_SCRfull_OCij.OUT. They include the components of the rotated dielectric tensor of the first layer.

ii) Calculation of the absorption coefficients

Now, we want to calculate the absorbance of the bithiophene thin film at different polarization angles of the incoming light. To do so, run the script by typing the following command on the bash shell:

$ ./LO-execute.py

Another interactive interface will appear. First, the user is asked to insert the incidence angle $\Theta$ of the incoming light. In this example, we choose $\Theta$=$0$ degrees, such that the light beam is normal to the surface. Next, the angle of light polarization has to be specified. We explore a range of angles, from $\delta$=$0$ degrees, corresponding to parallel (p) polarized light, to $\delta$=$90$ degrees, denoting perpendicular (s) light polarization, with steps of $10$ degrees. Also, the number of active layers has to be specified. Important: the vacuum layer and the semi-infinite substrate are not to be included here! Again, we are considering a single thin film, hence you should type 1. Finally, the thickness of the active layer should be given in input.
In this example, we assume the bithiophene layer to be 10 nm thick. Following these instructions, the interactive interface will look like this:

_________________________________________________________________
|                                                                 |
|                      WELCOME TO LAYEROPTICS                     |
|                                                                 |
|                   written by Christian Vorwerk                  |
|_________________________________________________________________|

Set the ANGLE of the INCOMING BEAM or Type "RANGE" to consider a range of angles
>>>>>>>> 0
Set the POLARIZATION ANGLE or Type "RANGE" to consider a range of angles
>>>>>>>> RANGE
Please type the two values between which the angle should be varied and the number of steps.
>>>>>>>> 0 90 10
Provide the number of layers, not counting vacuum and substrate
>>>>>>>> 1
For each layer, specify the thickness (nm):
*******************************************
For Layer 1
>>>>>>>> 10
*******************************************

The calculation of the absorption coefficients will take a few seconds. Once the script is finished, 30 new files will appear in the working directory, namely absorbancel.out, reflectionl.out, and transmissionl.out, one set for each considered angle of the light polarization, indicated by l ranging from 0 ($\delta$=$0$ degrees) to 9 ($\delta$=$90$ degrees). Each file contains four columns. The first one is the energy of the incoming photon, expressed in eV; the second and the third ones are the parallel (p) and perpendicular (s) components of the given optical coefficient, respectively; the fourth column contains the total value of the optical coefficient. Using xmgrace, you can now visualize the dependence of the total absorbance on the polarization angle. We plot here the total contribution of the optical coefficient, which is stored in the fourth column of the corresponding output file.
The resulting plot, showing only the spectra obtained for $\delta$=$0$, $30$, $60$, and $90$ degrees, will look like this: We notice that for $\delta$=$0$ degrees (p polarization) the spectrum is dominated by an intense peak centered at 4.75 eV. At increasing values of $\delta$ additional peaks of increasing intensity appear at lower energy. For $\delta$=$90$ degrees (s polarization), the spectrum exhibits three peaks, with the most intense one being centered at 4.5 eV.

3. Angle-dependent absorption coefficient

As a second example, we calculate the dependence of the optical coefficients on the angle $\Theta$ of the incoming light. To do so, we rotate the bithiophene thin film around the $z$-axis of the unit cell, in order to simulate different orientations of the molecules with respect to the substrate, as well as to the plane of incidence of the incoming light. First, we need to generate a new directory where we copy the scripts and necessary input files. To do so, type the following lines on your shell:

$ cd ../
$ mkdir LayerOptics_Theta
$ cd LayerOptics_Theta
$ cp ../LayerOptics/* ./
$ rm *.out

Next, following the procedure shown and explained above, we have to run the setup script, specifying the number of layers and the Euler rotation angles to apply to the system. The script is invoked with the command:

$ ./LO-setup.py

Again, we are considering only a one-layer system. However, differently from the previous example, we now apply an effective rotation of the Euler angle $\alpha$=$45$ degrees to the four independent components of the dielectric tensor. The other two Euler angles are set to zero: $\beta$=$\gamma$=$0$ degrees.
The resulting interactive interface will look like this:

_________________________________________________________________
|                                                                 |
|                  WELCOME TO LAYEROPTICS (SETUP)                 |
|                                                                 |
|                   written by Christian Vorwerk                  |
|_________________________________________________________________|

Please provide the layer index:
>>>>>>>> 1
Please provide the Euler angle ALPHA (deg):
>>>>>>>> 45
Please provide the Euler angle BETA (deg):
>>>>>>>> 0
Please provide the Euler angle GAMMA (deg):
>>>>>>>> 0

To calculate the optical coefficients at different angles of the incoming light beam, we execute the script by typing:

$ ./LO-execute.py

Again, an interactive shell will appear. In this case, we vary the angle of the incoming beam, ranging from $\Theta$=$0$ to $\Theta$=$80$ degrees. The polarization angle is kept fixed. We again consider a layer thickness of 10 nm. After typing these commands into the interactive interface, you will see on the screen:

_________________________________________________________________
|                                                                 |
|                      WELCOME TO LAYEROPTICS                     |
|                                                                 |
|                   written by Christian Vorwerk                  |
|_________________________________________________________________|

Set the ANGLE of the INCOMING BEAM or Type "RANGE" to consider a range of angles
>>>>>>>> RANGE
Please type the two values between which the angle should be varied and the number of steps.
>>>>>>>> 0 80 10
Choose the polarization angle. For an array of beam angles the polarization has to be fixed
>>>>>>>> 0
Provide the number of layers, not counting vacuum and substrate
>>>>>>>> 1
For each layer, specify the thickness (nm):
*******************************************
For Layer 1
>>>>>>>> 10
*******************************************

After a few seconds, the calculation is completed. Again, a number of new files will appear in the working directory, labeled absorbancej.out, reflectionj.out, and transmissionj.out. In this case j ranges from 0 ($\Theta$=$0$ degrees) to 9 ($\Theta$=$80$ degrees).
We visualize the s-polarized component of the reflection coefficient at different angles of the incident beam. This quantity is stored in the third column of the output files reflectionj.out.

Exercise 1

Calculate the optical coefficients of a two-layer system of bithiophene with different molecular orientations on a substrate. Use the original orientation for the top layer ($\alpha$=$\beta$=$\gamma$=$0$ degrees). The bottom layer is rotated by $\alpha$=$45$ and $\beta$=$\gamma$=$0$ degrees with respect to the orientation of the top layer. Use for each layer a thickness of 10 nm. Plot the total absorbance of this system as a function of the beam energy. Keep the beam angle at $\Theta$=$45$ degrees and consider incoming light which is fully parallel (p) polarized. Compare the absorbance for the two-layer system with that of a 20 nm thick layer of unchanged orientation ($\alpha$=$\beta$=$\gamma$=$0$ degrees) and a 20 nm thick layer rotated by $\alpha$=$45$ and $\beta$=$\gamma$=$0$ degrees. For all three systems (two-layer system, only bottom layer, only top layer) use the same beam parameters.

[1] LayerOptics: Microscopic modeling of optical coefficients in layered materials, Comp. Phys. Comm. 201, 119–125 (2016)
The term "imaginary" is somewhat disingenuous. It's a real concept, with real (at least theoretical) application, just like all the "real" numbers. Think back to that algebra class. You were asked to solve a polynomial equation; that is, find all the values of X for which the entire equation evaluates to zero. You learned to do this by polynomial factoring, simplifying the equation into a series of first-power terms, and then it was easy to see that if any one of those terms evaluated to zero, then everything else, no matter its value, was multiplied by zero, producing zero. You tried this on a few quadratic equations. Sometimes you got one answer (because the equation was $y=ax^2$ and so the only possible answer was zero), sometimes you got two (when the equation boiled down to $y= (x\pm n)(x \pm m)$, and so when $x=-m$ or $x=-n$ the equation was zero), and a couple of times, you got no answers at all (usually, an equation that breaks down to $y=(x+n)(x+m)$ doesn't evaluate to zero at $x=-m$ or $x=-n$). In your algebra class, you're told this just happens sometimes, and the only way to make sure any factored term $(x\pm k)$ represents a real root is to plug in $-k$ for $x$ and solve. But, this is math. Mathematicians like things to be perfect, and don't like these "rules of thumb", where a method works sometimes but it's really just a "hint" of where to look. So, mathematicians looked for another solution. This leads us to application of the quadratic formula: for $ax^2 + bx + c = 0$, $x=\dfrac{-b \pm \sqrt{b^2-4ac}}{2a}$. This formula is quite literally the solution of the general form of the equation for x, and can be derived algebraically. We can now plug in the coefficients, and find the values of $x$ where $ax^2 + bx + c=0$. Notice the square root; we're first taught, simply, that if $b^2-4ac$ is ever negative, then the roots you'd get by factoring the equation won't work, and thus the equation has no real roots. 
$b^2-4ac$ is called the discriminant for this reason. But the fact that $b^2-4ac$ can be negative remains a thorn in our side; we want to solve this equation. It's sitting right in front of us. If the discriminant were positive, we would have solved it already. It's that pesky negative that's the problem. Well, what if there were something we could do, conforming to the rules of basic algebra, to get rid of the negative? Well, $-m = m*-1$, so what if we took our term that, for the sake of argument, evaluated to $-36$, and made it $36*-1$? Now, because $\sqrt{mn} = \sqrt{m}\sqrt{n}$, $\sqrt{-36} = \sqrt{36}\sqrt{-1} = 6\sqrt{-1}$. We've simplified the expression by separating what we can't express as a real number from what we can. Now to clean up that last little bit. $\sqrt{-1}$ is a common term whenever the discriminant is negative, so let's abstract it behind a constant, like we do $\pi$ and $e$, to make things a little cleaner: $\sqrt{-1} = i$. Now, we can define some properties of $i$, particularly a curious thing that happens as you raise its power: $$i^2 = \sqrt{-1}^2 = -1$$$$i^3 = i^2*i = -i$$$$i^4 = i^2*i^2 = -1*-1 = 1$$$$i^5 = i^4*i = i$$ We see that $i^n$ cycles through four values forever as its power $n$ increases, and also that this cycle crosses into and then out of the real numbers. Seems almost... cyclical, rotational. As Clive N's answer so elegantly explains, that's what imaginary numbers represent: a "rotation" of the graph through another plane, where the graph DOES cross the $x$-axis. Now, it's not actually a circular rotation onto a new linear z-plane. Complex numbers have a real part, as you'd see by solving the quadratic equation for a polynomial with imaginary roots. We typically visualize these values in their own 2-dimensional plane, the complex plane. A quadratic equation with imaginary roots can thus be thought of as a graph in four dimensions; three real, one imaginary.
Now, we call $i$ and any product of a real number and $i$ "imaginary", because what $i$ represents doesn't have an analog in our "everyday world". You can't hold $i$ objects in your hand. You can't measure anything and get $i$ inches or centimeters or Smoots as your result. You can't string digits together, stick a decimal point in somewhere, and end up with $i$. $i$ simply is. As for use outside "ivory tower" math disciplines, a big one is in economics: many economies of scale can be described by a profit function of the number of units produced, with a cost term and a revenue term (the difference being profit or loss), each of these in turn a function of the per-unit sale price or cost and the number produced. This often simplifies to a quadratic equation, solvable by the quadratic formula. If the roots are imaginary, so are the breakeven points (and your expected profits). Another good one is in visualizations of complex numbers, and of their interactions when multiplied. The first one I was exposed to is a well-known set, produced by taking an arbitrary complex number, squaring it ($(a+bi)^2 = (a+bi)(a+bi) = a^2 + 2abi + b^2i^2 = a^2-b^2 + 2abi$), and then adding back the original value. Iterated to infinity, the sequence either stays bounded or diverges to infinity (with a few starting numbers exhibiting periodicity; they'll jump around forever between a finite number of points, much like the powers of $i$ do). The set of all complex numbers for which the sequence does not diverge is the Mandelbrot set or M-set, and while the area of the set is finite, its perimeter is infinite, making its boundary a fractal (one of the most highly studied, in fact). The Mandelbrot set can in turn be defined as the set of all complex numbers $c$ for which the Julia set of $z \mapsto z^2 + c$ is connected.
A Julia set exists for every complex polynomial function, but usually the most interesting and useful sets are the ones for values of $c$ that belong to the M-set; Julia fractals are produced much the same way as the M-set (by repeated iteration of the function to determine if a starting $z$ converges or diverges), but $c$ is constant for all points of the set instead of being the original point being tested. You can define Julia sets with all sorts of fractal shapes. These fractals, more accurately the iterative evaluation behind them, are used for pseudorandom number generation, computer graphics (the sets can be plotted in 3-d to create landscapes, or they can be used in shaders to define complex reflective properties of things like insect shells/wings), etc.
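The membership test behind both families of pictures is only a few lines. Here is a minimal sketch (the iteration cap is an arbitrary illustration choice; the escape radius of 2, beyond which the orbit is guaranteed to diverge, is the standard bound):

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0 and report whether the orbit
    stays bounded for max_iter steps (|z| > 2 guarantees divergence)."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False
    return True

def in_julia(z, c, max_iter=100):
    """Same iteration, but c is held fixed and z is the point tested."""
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return False
    return True
```

For example, `in_mandelbrot(-1)` is true (the orbit is the periodic $0, -1, 0, -1, \ldots$), while `in_mandelbrot(1)` is false (the orbit $0, 1, 2, 5, 26, \ldots$ escapes).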
As you’ve observed, the projected line segment is not symmetric about the view line to the circle’s center—the center shifts. Another reason that you might be having some trouble working this out is that the apparent width of the ellipse also depends on the visual angle subtended by the circle, which varies with both the distance of the viewpoint from the center of the circle and the circle’s radius. (Diagram provided by the OP.) This problem can be reduced to finding the intersection of pairs of lines, which is easily done using homogeneous coordinates. We place the circle on the $x$-$y$ plane, centered at the origin, and let the viewpoint be $V=(0,d\cos\theta,d\sin\theta)$, with $d>0$. We’ll first look at what’s going on in the $y$-$z$ plane. In the above diagram, the horizontal direction represents $z$ and the vertical $y$. To reduce clutter, the $x$-coordinate will be suppressed and, at the risk of introducing some confusion, the other two coordinates will be designated $x'$ and $y'$, respectively. The line perpendicular to $\overline{OV}$ that passes through the upper point $A(0,r)$ is $y'=r-x'\cot\theta$, which we can rewrite in normal form as $x'\cos\theta+y'\sin\theta-r\sin\theta=0$.
The homogeneous vector that represents this line consists of the coefficients of the latter equation: $$\mathbf l=(\cos\theta,\sin\theta,-r\sin\theta).$$ The line through $V$ and the lower point $B(0,-r)$ is represented by the cross product of the homogeneous coordinates of these points: $$\mathbf m=(d\cos\theta,d\sin\theta,1)\times(0,-r,1)=(r+d\sin\theta,-d\cos\theta,-dr\cos\theta).$$ Their intersection is $$\mathbf l\times\mathbf m=(-dr\sin2\theta,dr\cos2\theta-r^2\sin\theta,-(d+r\sin\theta))$$ which in Cartesian coordinates is $$C=\left({dr\sin2\theta\over d+r\sin\theta},{r^2\sin\theta-dr\cos2\theta\over d+r\sin\theta}\right).$$ The minor axis length of the circle’s image is $AC$, which you can compute using the standard formula for the distance between two points as $${2dr\cos\theta\over d+r\sin\theta}.$$ Compared to your initial guess, there’s an extra factor of ${d\over d+r\sin\theta}$ that accounts for the asymmetry of the view and the “angular size correction” factor. As $d\to\infty$, so that the projection becomes closer and closer to parallel, this correction factor approaches unity. This isn’t the whole story, though. Rays from the viewpoint through points on the circle converge as they get closer to the viewpoint, so the other semi-axis of the circle’s projection isn’t going to be equal to $r$, either. To work out what this is, first project the center of the ellipse back to the $x$-$y$ plane. 
This center is the midpoint of $A$ and $C$ and its pre-image can be found by intersecting lines again: $$\begin{align}D &= \frac12\left((0,r)+\left({dr\sin2\theta\over d+r\sin\theta},{r^2\sin\theta-dr\cos2\theta\over d+r\sin\theta}\right)\right) \\ &=\left(\frac{d r \sin (\theta ) \cos (\theta )}{d+r \sin (\theta )},\frac{r \sin (\theta ) (d \sin (\theta )+r)}{d+r \sin (\theta )}\right)\end{align}$$ and its back-projection is $$\overline{VD}\times\overline{OA}=(V\times D)\times(-1,0,0)=\left(0,\frac{d r^2 \sin (\theta ) \cos (\theta )}{d+r \sin (\theta )},\frac{d^2 \cos (\theta )}{d+r \sin (\theta )}\right)$$ which becomes $\left(0,\frac{r^2}d\sin\theta\right)$ in Cartesian coordinates. Going back to 3-D coordinates, this is the point $\left(0,\frac{r^2}d\sin\theta,0\right)$. The $x$-coordinates of the points on the circle with this $y$-coordinate are $\pm\sqrt{r^2-y^2}=\pm r\sqrt{1-\left(\frac rd\sin\theta\right)^2}$. I’ll leave finding the projections of these points and the resulting semi-axis length to you (hint: you can use similar triangles).
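Since every step above is a cross product of homogeneous triples, the whole construction is easy to check numerically. The following sketch (with arbitrary test values for $r$, $d$, $\theta$, which are assumptions for illustration) reproduces the minor-axis length $2dr\cos\theta/(d+r\sin\theta)$:

```python
import numpy as np

r, d, theta = 1.0, 3.0, 0.5  # arbitrary test values (assumed)

# Work in the 2D picture of the answer, with homogeneous coords (x', y', w).
V = np.array([d * np.cos(theta), d * np.sin(theta), 1.0])  # viewpoint
A = np.array([0.0, r, 1.0])                                # upper point
B = np.array([0.0, -r, 1.0])                               # lower point

# Line through A perpendicular to OV: x'cos(t) + y'sin(t) - r sin(t) = 0.
l = np.array([np.cos(theta), np.sin(theta), -r * np.sin(theta)])
# The line through V and B is the cross product of their homogeneous coords.
m = np.cross(V, B)
# Their intersection is again a cross product; dehomogenize by the last entry.
C_h = np.cross(l, m)
C = C_h[:2] / C_h[2]

AC = np.linalg.norm(C - A[:2])
predicted = 2 * d * r * np.cos(theta) / (d + r * np.sin(theta))
assert abs(AC - predicted) < 1e-12
```

Varying $r$, $d$, and $\theta$ confirms the formula over the whole parameter range (with $d$ large enough that the viewpoint lies outside the circle).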
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak 1 min ago BTW your program looks very interesting, in particular the way to enter mathematics. One thing that seems to be missing is documentation (at least I did not find it). This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for. For example, upon entering $\frac xy$ will it also find $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$? ******* Is it possible to save a link to a particular search query? For example, in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports. When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to. ******* If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect. This means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string: I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead: One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since each duplicate adds another copy with somewhat different phrasing of the title. So if you spent reasonable time searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia, this site, to mention just two important examples). Additionally, som... @MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback, I really love your feedback and will seriously look into those points and improve approach0. Give me just some minutes, I will answer/reply to your feedback in our chat. — Wei Zhong 1 min ago I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward, " BTW those animations with examples of searching look really cool. @MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong 29 secs ago We are an open-source project hosted on GitHub: http://github.com/approach0 Welcome to send any feedback on our GitHub issue page! @MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users. @MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 will look into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not get $x$ because approach0 considers them not structurally identical, but you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula.
As for your example, entering $\frac \qvar{x} \qvar{y} $ is enough to match it. @MartinSleziak As for the query link, it needs more explanation: technologically, the way you mentioned that Google is using is an HTTP GET method, but for mathematics a GET request may not be appropriate since a query has structure; usually a developer would alternatively use an HTTP POST request, with the query JSON-encoded. This makes development much easier because JSON is rich-structured and makes it easy to separate math keywords. @MartinSleziak Right now there are two solutions for the "query link" problem you addressed. First is to use the browser back/forward buttons to navigate among query history. @MartinSleziak Second is to use the command-line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve this later. (just needs some extra effort though) @MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked top; different symbols such as "a", "b" are ranked after the exact match. @MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them. @MartinSleziak Yes, you will get them; Greek letters are tokenized to the same thing as normal alphabet letters. @MartinSleziak As for integral upper bounds, I think it is a problem with a JavaScript plugin approach0 is using; I also observe this issue. The only thing you can do is use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit. @MartinSleziak Yes, it has a threshold now, but this is easy to adjust from source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on Math Stack Exchange.
This is a very small number, but approach0 will index more posts/pages when search engine efficiency and relevance are tuned. @MartinSleziak As I mentioned, the index is too small currently. You probably will get what you want when this project develops to the next stage, which is to enlarge the index and publish. @MartinSleziak Thank you for all your suggestions; currently I just hope more developers get to know this project. Indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published. So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar 2 hours ago @GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid 1 hour ago @quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations, which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak 57 mins ago "What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc" while actual questions on these are still on-topic on main. Beyond that it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid 7 mins ago @quid I will reply here, since I do not want to digress in the comments too much from the topic of that question. Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that". Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main (although there should not be). I guess some examples can be found here or here. Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed. Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously. I saw such a poll for the first time on TeX.SE. The poll there was concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc. Academia.SE has some questions which could be classified as "demographic" (including gender). @quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar. But that is only anecdotal. And if I am to believe Slovak Wikipedia, it should be Christus mansionem benedicat. From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov." My attempt at an English translation: The priest writes on the door C+M+B (Christus mansionem benedicat - Let Christ bless this house). However, this is often mistakenly explained as 20-G+M+B-16, after the initial letters of the supposed names of the three kings. As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from the initial letters of the translation. It seems that they have also other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants, and the initials are also believed to stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House"). Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany." BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question. In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3] A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance, if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar). In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing. About Slovakia specifically it says there: The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News").
It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
How to Automate Meshing in Frequency Bands for Acoustic Simulations Think of the curved lid of an elegant grand piano. The curve follows the strings’ length, which corresponds to the perceived pitch. This visual represents an important element of acoustics: our perception of pitch is logarithmic. This means that a large frequency range is involved in acoustic phenomena. In turn, when modeling acoustics problems, there is a large wavelength range to be meshed. But how? Introduction to Free-Field FEM Wave Problems A large frequency range needs to be computed, which means large wavelength ranges need to be resolved by the mesh. To efficiently mesh large frequency ranges, we can optimize the mesh element size by remeshing for a given frequency range when using finite element method (FEM) interfaces in the COMSOL Multiphysics® software. The finite element method is implemented in most interfaces in COMSOL Multiphysics, including the Pressure Acoustics, Frequency Domain and the Pressure Acoustics, Transient interfaces. Other interfaces in the Acoustics Module are optimized for their intended purpose by implementing the boundary element method (BEM), ray tracing, or dG-FEM (time explicit). When using the Pressure Acoustics interface, FEM uses a mesh to discretize the geometry and solves the acoustic wave equation at these points. The full, continuous solution is interpolated from these points. An automotive muffler with a porous lining, modeled using the pressure acoustics functionality in the COMSOL® software. When meshing an FEM model, we need to get a good approximation of the geometry and include details of the physics. When using the Pressure Acoustics interface, we always need to resolve the acoustic waves. A good mesh resolves the geometry and the physics of the model, but a great mesh accurately solves the problem while also using the smallest number of mesh elements possible.
In this blog post, we will look at how to mesh free-field/open-ended problems with the fewest mesh points. Mesh elements are made up of nodes. For a linear mesh element, the nodes are located at the vertices. Second-order polynomial interpolation is the default shape function for wave equations in COMSOL Multiphysics. Second-order (or quadratic) elements have one additional node along each edge of the element and resolve waves accurately. For free-field wave problems, we need about 10 or 12 nodes per wavelength to resolve the wave. Consequently, for wave-based modeling with quadratic elements, we need 5 or 6 second-order elements per wavelength ($h_\textrm{max} = \lambda_0/5$). For short wavelengths (higher frequencies), the element size needs to be smaller than at lower frequencies. Audio applications, which are concerned with human perception, have a frequency range of 20 Hz to 20 kHz. In air at room temperature, audio problems have a wavelength range from about 17 m down to 17 mm. If we were to compute over the entire human auditory frequency range with one mesh, we would need to resolve the wavelength that corresponds to 20 kHz. At the high-frequency end, this leads to a maximum element size, or spatial resolution, of (17 mm/5 ≈) 3.4 mm. Resolving the mesh for the highest frequency leads to an excessively dense mesh for the low-frequency predictions. At 20 Hz, the wavelength is 17 m and would have 5360 nodes per wavelength, far more than the 10 or 12 that are required. Each node corresponds to a memory allocation for the computer. While this dense mesh approach is great from an accuracy perspective, the excessively dense mesh takes up computational resources and consequently takes longer to compute.
Efficient Meshes in COMSOL Multiphysics® Setup for Single-Octave Mesh To avoid an inefficient meshing approach, we can split the problem into smaller frequency bands; initially, one octave, where the mesh for each frequency band is resolved according to its upper frequency limit. In this example, the center frequency $f_{C,n}$ is referenced from $f_0$, the prescribed frequency, $f_{C,n} = 2^n \times f_0$, where $n$ is the octave band number from the reference (positive $n$ is higher-pitch octaves, negative $n$ is lower-pitch octaves). The upper and lower frequency band limits are defined from the center-band frequency: $f_L = 2^{-\frac{1}{2}} \times f_{C,n}$, $f_U = 2^{\frac{1}{2}} \times f_{C,n}$. Note that $f_U$ is twice $f_L$ (thus one octave higher). Defining the octaves in the model parameters. We can use these parameters in the frequency-domain study using the range() function to define a logarithmic distribution of points within each band: $10^{\textrm{range}(\log_{10}(f_L),\, df_\textrm{log},\, \log_{10}(f_U) - df_\textrm{log})}$. The upper endpoint is shifted down by one step so that it is not recomputed as the lower limit of the next band. The logarithmic frequency spacing, $df_\textrm{log} = (\log_{10}(f_U)-\log_{10}(f_L))/(N-1)$, is set by the frequency range divided by the number of frequencies $N$. Setting the frequencies solved for in each octave band. The maximum mesh element size (traditionally given the variable name hmax) is then taken from the upper limit of the given frequency band: hmax = 343[m/s]/f_U/5. Note that if you do not know the speed of sound, you can use comp1.mat1.def.cs(23[degC]) to access the speed of sound for the first material (in a list), defined in Component 1 at 23°C. If you are using the built-in material Air, the speed of sound comes from the ideal gas law, so the fluid temperature is a required input. The custom mesh sequence with the parameter hmax applied to the Maximum element size. The Maximum element size is applied to the mesh on the Size node.
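The band bookkeeping above is easy to prototype outside COMSOL. The sketch below mirrors the formulas for $f_{C,n}$, $f_L$, $f_U$, and hmax; the reference frequency, the number of frequencies per band, and the general band-fraction parameter `b` are assumptions for illustration (343 m/s is the speed of sound used in the hmax expression above):

```python
import numpy as np

c0 = 343.0   # speed of sound in air [m/s], as in the hmax expression above
f0 = 1000.0  # reference frequency [Hz] (assumed for illustration)
N = 10       # frequencies computed per band (assumed)

def band(n, b=1):
    """Frequency band n: limits, solved frequencies, and max element size.

    b = 1 gives octave bands; larger b gives fractional (1/b) octaves.
    """
    f_c = 2.0 ** (n / b) * f0
    f_L = 2.0 ** (-1.0 / (2 * b)) * f_c
    f_U = 2.0 ** (1.0 / (2 * b)) * f_c
    # Log-spaced frequencies with the upper endpoint excluded, so the
    # next band's f_L is not solved twice.
    freqs = np.logspace(np.log10(f_L), np.log10(f_U), N, endpoint=False)
    hmax = c0 / f_U / 5  # five quadratic elements per shortest wavelength
    return f_L, f_U, freqs, hmax
```

With `b = 1`, each band's upper limit is exactly twice its lower limit, and hmax halves from one octave to the next.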
The elements can be smaller than this constraint if smaller geometry details need to be resolved, as shown in the figure below. The smallest element is controlled by the Minimum element size setting. The Curvature factor and Resolution of narrow regions settings are also important mesh settings. The mesh element quality shown on the top for two octave bands. Setup for Multiple Octave Bands If the COMSOL Multiphysics model is set up as described above, it yields one octave’s worth of frequencies. However, we need up to 10 octaves for our audio investigations. The solution is a parametric sweep over n, such that each value of n is an octave and the upper and lower frequency limits change accordingly. To implement this in COMSOL Multiphysics, a Parametric Sweep study step is added to the study to change the frequency bands. The benefit of working with parameters is that all of the frequency band limits change automatically when the sweep variable n changes. The parameter n is the natural choice for the parametric sweep because each value of n corresponds to a frequency band. Setting it up in this way means that the original frequency is now the reference frequency and must be chosen appropriately. For comparison, the same frequencies were also computed over the same range using a single mesh resolved for the highest frequency. The study that splits the mesh according to the octave band number took 32 s, whereas the single-mesh approach took 79 s. This shows a significant saving of time and computational resources. The instantaneous pressure is shown on the bottom for the different frequencies and meshes. The Octave Band plot type is used to calculate the required response. Ensure that the line markers are placed at data points. Alternatively, to obtain a continuous line, change the x-Axis Data to Expression and enter freq, the variable for frequency. Plotting the continuous line. Choose Point Graph and ensure that the plot settings are set up as shown above.
Setup for nth-Octave Bands The previous discussion sets up the problem in octave bands. However, you can use the general form $f_{C,n} = 2^\frac{n}{b} \times f_0$, $f_L = 2^{-\frac{1}{2b}} \times f_{C,n}$, $f_U = 2^{\frac{1}{2b}} \times f_{C,n}$, to allow fractions of octave bands. In the above setup, let $b = 3$ for third-octave bands or $b = 6$ for sixth-octave bands. The narrower the frequency band, the more times the meshing sequence runs, so there is a balance to be struck. The parameters that set up the general meshing procedure in any octave band are located in the Remeshing in Frequency Bands model. It is easy to save the necessary parameters in a .txt file and load them when setting up a new model. This avoids having to enter them every time. Discussion and Caveats of Meshing in Frequency Bands for Acoustics Simulations The method presented in this blog post uses canonical geometry to clearly illustrate the process for optimizing the mesh. Consequently, the meshing routine takes relatively little time. For realistic geometries, the meshing routine may take longer and the benefits may be less marked. In this instance, you should defeature or use virtual operations to remove any physically irrelevant geometry. For some problems, the temperature or density of the fluid may change significantly over the computational domain. If this occurs, the speed of sound will change and must be included in the model, and the mesh must be dense enough to reflect this. This discussion is not relevant to the Ray Tracing, Pressure Acoustics, Boundary Element, or Acoustic Diffusion interfaces. With care, the information in this blog post can be applied to free-field problems of the Aeroacoustics and Thermoviscous Acoustics interfaces or the dG-FEM-based Ultrasound interfaces. The convective effect of the flow alters the wavelength, and a sophisticated mesh should reflect this up- or downstream of a source.
The Linearized Navier-Stokes and Linearized Euler interfaces use linear interpolation by default, so 10 or 12 elements are required per wavelength. The Thermoviscous Acoustics interface is designed for resolving the acoustic boundary layer. The thickness of this layer is also frequency dependent, and a method similar to the one discussed here can be used for efficient meshing and resolution of the layer. Finally, the discussion in this blog post explicitly assumes that the wavelength is known. This assumption usually holds for free-field modeling; however, for bounded, resonant problems, the total sound field depends on the boundary condition values and the location of the boundaries. This means that the pressure amplitudes can have shapes with an analogous wavelength that could be significantly shorter than the free-field wavelength. To get an accurate solution, you must perform a mesh convergence study. Conclusion This blog post has demonstrated that remeshing in frequency bands can save a significant amount of time. In COMSOL Multiphysics, this is implemented by parameterizing the upper and lower frequency band limits. The approach demonstrated here is applicable for interfaces that implement FEM and have quadratic interpolation. Next Steps Try it yourself: Click the button below to access the MPH-file for the model discussed in this blog post. Note that you must log into COMSOL Access and have a valid software license to download the file. Read More Learn more about how to enhance your meshing processes on the COMSOL Blog:
February 10th, 2018 — Problems on Surfaces I have searched in other forums as well, but I get no replies, and I really need some solutions to these kinds of problems, as the examples in my book are just not good enough to help me solve the problems at the end of the chapter. The problem goes like this: We have the unit sphere $S^2$ in $\mathbb{R}^3$, and a smooth function $z:S^2 \rightarrow (0,\infty)$ taking only positive values. Now define the set $$\Lambda=\{z(x)x : x\in S^2\}.$$ Question 1) If $Z(u_1,u_2):U\rightarrow S^2$ is a smooth parametrisation of $S^2$, show that $F:U \rightarrow \Lambda$, $(u_1,u_2) \mapsto z(Z(u_1,u_2))Z(u_1,u_2)$, is a parametrisation of $\Lambda$, and hence that $\Lambda$ is a regular surface. Question 2) Now suppose we have overlapping parametrisations of $S^2$ given by $Z_i(u_1,u_2):U_i \rightarrow S^2$, and let $F_i:U_i\rightarrow \Lambda$ be the parametrisation of $\Lambda$ induced by $Z_i$, for $i=1,2$. Prove that $F_1^{-1}\circ F_2=Z_1^{-1}\circ Z_2$. Are $\Lambda$ and $S^2$ diffeomorphic? If so, give an explicit form of the diffeomorphism. Question 3) Question 4) Let us now define the map $\Psi: \mathbb{R}^3 -\{(0,0,0)\} \rightarrow \mathbb{R}^3 - \{(0,0,0)\}$ given by: $$\Psi(x) = \frac{x}{|x|^2}$$ Denote $\Lambda^*=\Psi(\Lambda)$. a) Is $\Lambda^*$ a regular surface? b) For the map $\psi:\Lambda \rightarrow \Lambda^*$ which is the restriction of $\Psi$ to $\Lambda$, prove that $\psi$ is a diffeomorphism. c) Prove that for any $c \in \Lambda$, the tangent map $d\psi_c:T_c \Lambda \rightarrow T_{\psi(c)} \Lambda^*$ at $c$ (which is a map from the tangent plane of $\Lambda$ at $c$ to the tangent plane of $\Lambda^*$ at $\psi(c)$) satisfies: $$d\psi_c(W) =\frac{|c|^2 W-2(c\cdot W)c}{|c|^4}$$ where $c\cdot W$ is the dot product in $\mathbb{R}^3$.
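For part (c) of Question 4, note that the claimed tangent map is just the Jacobian of $\Psi$ applied to $W$, so it can be sanity-checked numerically with finite differences before attempting the proof (an illustrative sketch with arbitrary assumed test vectors, not a solution):

```python
import numpy as np

def psi(x):
    """The inversion map x -> x/|x|^2 on R^3 minus the origin."""
    return x / np.dot(x, x)

def dpsi_claimed(c, W):
    """The tangent map formula asserted in part (c)."""
    n2 = np.dot(c, c)
    return (n2 * W - 2 * np.dot(c, W) * c) / n2**2

# Central finite differences of psi at c along the direction W.
c = np.array([0.3, -1.2, 0.8])   # arbitrary test point (assumed)
W = np.array([1.0, 0.5, -0.25])  # arbitrary tangent direction (assumed)
h = 1e-6
fd = (psi(c + h * W) - psi(c - h * W)) / (2 * h)
assert np.allclose(fd, dpsi_claimed(c, W), atol=1e-8)
```

The check also works for directions $W$ not tangent to $\Lambda$, since the formula is the derivative of the ambient map $\Psi$; restricting to $T_c\Lambda$ is what gives the tangent map of $\psi$.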
You can use a matrix-type separability condition as well. This is similar, but the equation has more flexibility. The rates are then Markovian in some combinations of the Brownian motions. See More Mathematical Finance for details. The following paper, Interpolation Schemes in the Displaced-Diffusion LIBOR Market Model and the Efficient Pricing and Greeks for Callable Range Accruals, addresses this issue: We introduce a new arbitrage-free interpolation scheme for the displaced-diffusion LIBOR market model. Using this new extension, and the Piterbarg interpolation scheme, we study ... This is a well-known problem. One solution is to make the volatility zero when rates exceed a certain high level. It's less problematic than it looks because any cash flows generated will be divided by a rolling money market account which has huge value, and so the deflated cash flows are very small. For Q1, the function $a(t)$ is the instantaneous correlation. The form given by (2) is basically the Cholesky decomposition. Of course, you may directly show, using Lévy's characterization, that $$\widetilde{W}(t) = \int_0^t\bigg[\frac{1}{\sqrt{1-||a(t)||^2}} dZ(t) -\frac{a(t)^T}{\sqrt{1-||a(t)||^2}} dW^B(t) \bigg]$$ is a standard scalar Brownian motion ... Q1: $$(1)\rightarrow(2)$$ (1): $a(t)$ is the instantaneous correlation $\rho(Z_t,W_t)$ because: $$\rho(dZ_t,dW_t)=\dfrac{Cov(dZ_t,dW_t)}{\sigma_{dZ_t}\sigma_{dW_t}}=\dfrac{E(dZ_t\cdot dW_t)}{\sqrt{dt} \sqrt{dt}}=\dfrac{\langle dZ_t, dW_t\rangle}{dt}=a(t)$$ $\Rightarrow$ (2) holds as follows, in the 1-dim case: $dZ_t\sim N(0,dt),$ $dW_t,\tilde{dW_t}\... For a swap, we have a sequence of resetting and payment dates. The number of forward rates corresponds to the number of payment dates. For example, let us assume that we have $n$ payment dates $t_1, \ldots, t_n$, where $0< t_1 < \cdots < t_n$. Then there are $n$ forward rates. During the simulation, for time steps prior to $t_1$, there exist $n$ "...
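The Cholesky-style construction referred to in (2) — building $dZ = a\,dW^B + \sqrt{1-a^2}\,d\widetilde{W}$ from an independent Brownian motion $\widetilde{W}$ — can be checked by simulation. In this sketch the correlation is taken constant, $a(t)\equiv\rho=0.6$, an assumption for illustration:

```python
import numpy as np

rho = 0.6  # assumed constant instantaneous correlation a(t)
n = 100_000
rng = np.random.default_rng(42)

# Unit-variance increments of W^B and of the independent Brownian motion.
dW = rng.standard_normal(n)
dW_tilde = rng.standard_normal(n)

# Cholesky-style construction of the correlated increment:
dZ = rho * dW + np.sqrt(1 - rho**2) * dW_tilde

# dZ should have unit variance and correlation rho with dW.
sample_rho = np.corrcoef(dZ, dW)[0, 1]
assert abs(sample_rho - rho) < 0.02
assert abs(dZ.std() - 1.0) < 0.02
```

Solving this relation for $d\widetilde{W}$ recovers the integrand in the expression for $\widetilde{W}(t)$ quoted above.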
You might want to set $a= \epsilon - d$ and write $\epsilon>0$ as a constraint. I guess $\textbf{lsqnonlin}$ is the suitable function for what you intend to do. I personally like to use and play around with $\textbf{fmincon}$, which allows more flexibility and performs well, if you are willing to provide the Jacobian and/or Hessian in the algorithm options. The forward Libor rate at time $t$ is the forward rate over a certain accrual period $[T, T+\Delta]$, where $\Delta$, in years, can be 3 months or 6 months, and is defined by\begin{align*}L(t, T, T+\Delta) = \frac{1}{\Delta}\left(\frac{P(t, T)}{P(t, T+\Delta)}-1 \right),\end{align*}where $P(t, u)$ is the price at time $t$ of a zero coupon bond with unit ... There are two things that might be confusing you: the time step in the time dimension and the time steps along the forward curve. The first is the time from today until a certain day in the future; this dt is usually the next reset date. The other is tau, representing a tenor for the forward curve maturing tau days ahead. Dtau could vary ... Thanks to my research leader, I found what I missed. $V_{0,1}$ is the vol of the swaption that matures at $T_0$, which is not 0 (as I thought); rather it is the maturity of the first Libor. So $V_{0,1}$ is the closest available point on the market. And now this is all clear with the table on page 323 in section 7.4. $V_{0,2}$ is really the vol of the swaption that matures at $T_0$=1y ... We assume that, under the $T_j$-forward probability measure $P_{T_j}$,\begin{align*}\frac{dP(t, T_j)}{P(t, T_j)} = \mu_P(t, T_j) dt + \sigma_P(t, T_j) dW_t^{T_j},\end{align*}where $\mu_P(t, T_j)$ and $\sigma_P(t, T_j)$ are the respective drift and volatility functions. Let $Q$ be the risk-neutral probability measure. Then\begin{align*}\frac{dQ}{dP_{... Just to be precisely clear, your mathematical formulation will not necessarily capture the nuances of the physical dates that Libor is valued between, due to holiday calendars and modification rules. Take GBP, for example.
The LIBOR in that currency is subject to a Modified Following rule as well as a Month End Consistency rule. For example: Generally 6M ... For example, a caplet with expiry of 3 years and tenor = 0.5 has to be priced (following the analytical formula) with the LIBOR rate L(0,2.5,3). Am I getting it right? That's right. The caplet has a tenor of half a year and expires in 3 years; therefore it starts at T = 2.5 and ends at T = 3. (Which in this case is the forward rate.) From a practitioner standpoint, we know the prices of non-accreting swaptions. The price of the accreting swaption in any model calibrated to these non-accreting swaptions is heavily dependent on the intra-curve correlation assumptions in the model. We check that these correlations are consistent with other correlation-dependent markets such as curve ... The explosion of the forward rates in the log-normal LMM simulated in the spot measure seems to be related to the explosion of the Eurodollar futures prices in this model, which was studied in this paper http://www.tandfonline.com/doi/abs/10.1080/1350486X.2017.1297727 The Eurodollar futures prices are given by the expectation of the Libor in the spot ... The rates will explode in the current low-rates environment, my friend, where empirically they are at too low a level to use a log-normal model. If you want to preserve log-normality, please use a shifted log-normal distribution instead, with a convenient rate cut-off of around 2%. This happens mainly on the EUR market. Hope this helps. If I have read the question correctly then I will assume that $a$, $b$, $c$, $d$, $T_i$, and $k_i$ are constants. If this is the case then the only term which we need to show is bounded is$$\big(a + b(T_i - t)\big)\exp\big(-c(T_i-t)\big).$$If we assume that we are only considering the temporal domain $0 \leq t \leq T_i$ such that $T_i - t \geq 0 $ then ...
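The forward Libor definition quoted above is straightforward to evaluate given two zero-coupon bond prices; a minimal sketch, where the discount factors are made-up illustrative values:

```python
def forward_libor(P_t_T, P_t_TpD, delta):
    """Simply-compounded forward rate L(t, T, T+delta) from
    zero-coupon bond prices P(t, T) and P(t, T+delta)."""
    return (P_t_T / P_t_TpD - 1.0) / delta

# Hypothetical discount factors for a 6M accrual period starting in 1Y.
L = forward_libor(P_t_T=0.980, P_t_TpD=0.965, delta=0.5)
print(round(L, 6))  # 0.031088, i.e. about 3.11% annualized
```

Note the rate is positive exactly when the nearer-dated bond is worth more than the farther-dated one, as expected for positive interest rates.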
Mathematics seems to be a nightmare for the majority of students, as they lack confidence and practice in the subject. We at BYJU’S provide students of class 11 with important mark-wise questions for practice. Practicing these questions will give you an idea of the pattern of the examination and the questions that are usually framed in this section. Here we provide a few important 6-mark questions for practice. As these questions are a little tricky, this section requires good practice of the different long-form questions that can be framed in the final examination. Important 6 Marks Questions for the Class 11 Maths exam are as follows: Question 1- In a survey of 5,000 people in a town, 2,250 were listed as reading an English newspaper, 1,750 as reading a Hindi newspaper, and 875 as reading both Hindi and English. Find how many people read neither a Hindi nor an English newspaper. Also find how many read only an English newspaper. Question 2- Using the binomial theorem, find the value of \((52)^{4}\) Question 3- Show that \(\frac{1\cdot 2^{2} + 2\cdot 3^{2} + \cdots + n(n+1)^{2}}{1^{2}\cdot 2+ 2^{2}\cdot 3+ \cdots + n^{2}(n+1)} = \frac{3n+5}{3n+1}\) Question 4- Describe the set of complex numbers z such that \(\left | \frac{z+2-i}{z+5+4i} \right | = 5\) Question 5- A family of 4 members planned to go to Goa by train during the summer. On the day of the journey all the auto/taxi drivers were on strike due to the price hike of petrol, so the family couldn’t get any transport to the railway station. Now the family is standing at the crossing of two straight roads, one represented by the equation \(2x - 3y - 4 = 0\) Question 6- Find the value of n, if the ratio of the fifth term from the beginning to the fifth term from the end in the expansion of \(\left ( 2^{\frac{1}{4}} + \frac{1}{3^{\frac{1}{4}}} \right )\) Question 7- Of the members of three athletic teams in a certain school, 21 are in the basketball team, 26 in the hockey team and 20 in the football team.
14 play hockey and basketball, 15 play hockey and football, 12 play football and basketball and 8 play all three games. How many members are there in all? Question 8- Find the four numbers in G.P. in which the third term is greater than the first by 9 and the second term is greater than the fourth by 18. Question 9- Solve the given system of inequalities graphically: \(x-2y \leq 3\) \(3x + 4y \geq 12\) \(x \geq 1\) \(y \geq 1\) Question 10- Show that the area of the triangle formed by the lines \(y = m_{1}x + c_{1}\) \(\frac{(c_{1} - c_{2})^{2}}{2\left | m_{1} - m_{2} \right |}\) Question 11- Using the binomial theorem, prove that \(6^{n} - 5n - 1\) Question 12- A student wants to arrange 3 Mathematics, 4 Hindi and 5 Physics books on a shelf. In how many ways can the books be arranged? How many arrangements are possible if all the books on the same subject are to be together? Question 13- In any triangle ABC, prove that: (i) \(\left ( \frac{\sin(B-C)}{\sin (B + C)} \right ) = \frac{b^{2} - c^{2}}{a^{2}}\) (ii) \(b \cos B + c \cos C = a \cos (B-C)\) Question 14- Prove that the diagonals formed by the four straight lines \(\sqrt{3}x + y = 0; \sqrt{3}x + y = 1\) Question 15- Prove that there is no term involving \(x^{5}\) Question 16- Find the equation of the circle which passes through the points (2,-2) and (3,4) and whose centre lies on the line \(x + y = 1\) Question 17- Find the coefficient of \(x^{5}y^{7}\) Question 18- If the sums of n terms of two arithmetic progressions are in the ratio \(14 - 5n : 3n+5\) Question 19- In a survey, it was found that people encourage their wards toward science/commerce streams, commonly looking at the school/college level; there are 40 students in the chemistry class and 60 students in the physics class. Find the number of students who are in either the physics or the chemistry class in the following cases: (i) The two classes meet at the same hour.
(ii) The two classes meet at different hours and 20 students are enrolled in both subjects. Question 20- An analysis of monthly wages paid to workers in two firms A and B belonging to the same industry gives the following results: Firm A: 586 wage earners, mean monthly wage Rs. 5253, variance of the distribution of wages 100. Firm B: 648 wage earners, mean monthly wage Rs. 5253, variance of the distribution of wages 121. (i) Which firm, A or B, pays the larger amount in monthly wages? (ii) Which firm, A or B, shows greater variability in individual wages?
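Survey questions like Question 1 and Question 19 reduce to the inclusion-exclusion formula |A ∪ B| = |A| + |B| − |A ∩ B|. A quick check of Question 1's numbers (function name is ours):

```python
def survey_counts(total, read_a, read_b, both):
    """Inclusion-exclusion: people reading at least one paper,
    neither paper, and only paper A."""
    at_least_one = read_a + read_b - both
    neither = total - at_least_one
    only_a = read_a - both
    return at_least_one, neither, only_a

at_least_one, neither, only_english = survey_counts(5000, 2250, 1750, 875)
print(neither, only_english)  # 1875 read neither; 1375 read only English
```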
A critical point at which the derivative equals zero is called a stationary point (the slope of the original graph is zero there). If the derivative does not exist at a critical point, this can correspond to a discontinuity in the original graph or a vertical slope. For functions of a single variable, critical points satisfy $ \dfrac{df}{dx} = 0. $ For functions of multiple variables, critical points satisfy $ \dfrac{\partial f(x_1,x_2,\ldots,x_n)}{\partial x_i} = 0 $ for all $ i \in \{1, \ldots, n\}. $ Properties A stationary point may indicate the presence of an extreme value if the second derivative of the function is non-zero there: a positive second derivative indicates a local minimum, and a negative second derivative indicates a local maximum. Note that some functions (e.g. $ f(x)=x^3 $ or $ f(x,y)=y^2-x^2 $) have stationary points that aren't extreme values.
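The second-derivative test described above can be sketched in code for a one-variable function; the cubic, its hand-computed derivatives, and the candidate list below are illustrative choices:

```python
def classify_stationary_points(df, d2f, candidates, tol=1e-9):
    """Classify candidate points: keep those where df(x) == 0 (stationary),
    then use the sign of d2f(x) to label them."""
    out = {}
    for x in candidates:
        if abs(df(x)) > tol:
            continue  # derivative nonzero: not a stationary point
        s = d2f(x)
        if s > 0:
            out[x] = "local minimum"
        elif s < 0:
            out[x] = "local maximum"
        else:
            # zero second derivative: the test is inconclusive
            # (e.g. f(x) = x**3 at x = 0)
            out[x] = "test inconclusive"
    return out

# f(x) = x**3 - 3x, so f'(x) = 3x**2 - 3 and f''(x) = 6x.
df = lambda x: 3 * x**2 - 3
d2f = lambda x: 6 * x
print(classify_stationary_points(df, d2f, [-1.0, 0.0, 1.0]))
```

For this cubic, x = ±1 are stationary (f''(−1) < 0 gives a maximum, f''(1) > 0 a minimum), while x = 0 is filtered out because f'(0) ≠ 0.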
Sep 27th 2019, 08:31 AM # 5 Senior Member Join Date: Oct 2017 Location: Glasgow Posts: 474 Judging from your answer... it seems that the weight and the y-component of the applied force are in the same direction; vertically downwards. I think your answer is wrong because you've put some negative signs on your components that shouldn't be there. Here's a full solution to explain. Assuming that the applied force is pointing downwards by 25 degrees from the horizontal, instead of upwards, we have the following forces labelled on the attached free-body diagram: 1. Weight = $\displaystyle mg$ 2. Reaction force, $\displaystyle R$ 3. Friction force, $\displaystyle F_r \le \mu R$, 4. Applied force = $\displaystyle F_{app}$ = 100 N Let's consider the y-direction. Since we know that the object accelerates along the surface, we know that the reaction force must balance the weight and the applied force. Therefore, according to Newton's first law, the reaction force balances the sum of the weight and y-component of the applied force: $\displaystyle R = mg + F_{app} sin(25)$ Let's now consider the x-direction. In this direction we know the object accelerates along the surface. Therefore, we can solve for Newton's second law along the x-direction. 
The total force in the x-direction is the difference between the x-component of the applied force and the friction force, which always resists motion: $\displaystyle F_{total,x} = ma = F_{app} cos(25) - F_r$ Because the object is moving, we know the limiting friction force has been overcome, so we know that the friction force relates to the reaction force exactly using: $\displaystyle F_r = \mu R$ We can now substitute these quantities in the second equation and solve for acceleration: $\displaystyle ma = F_{app} cos(25) - \mu R$ $\displaystyle = F_{app} cos(25) - \mu (mg + F_{app} sin(25))$ $\displaystyle a = \frac{F_{app} cos(25) - \mu (mg + F_{app} sin(25))}{m}$ Substituting values: $\displaystyle a = \frac{100 \times cos(25) - 0.12 \times (15 \times 9.81 + 100 \times sin(25))}{15} = 4.527 m/s^2$
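As a numerical check, the final expression can be evaluated directly; the function name below is our own, and the input values are those stated in the post:

```python
import math

def accel_downward_applied_force(F_app, theta_deg, mu, m, g=9.81):
    """Acceleration along the surface for an applied force angled
    downward by theta from the horizontal, with kinetic friction
    F_r = mu * R and reaction R = m*g + F_app*sin(theta)."""
    th = math.radians(theta_deg)
    R = m * g + F_app * math.sin(th)
    return (F_app * math.cos(th) - mu * R) / m

a = accel_downward_applied_force(F_app=100, theta_deg=25, mu=0.12, m=15)
print(round(a, 3))  # 4.527 m/s^2, matching the worked answer
```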
Extraordinary claims require extraordinary proofs, which really is the reason why this sort of discussion is important. Similarly, sometimes you are so blinded to some sort of truth, and faced with something so different, that you can entirely misread what is being said. If you read this morning's entry, you might get a feeling that I am a little ambivalent about the truly interesting nature of a paper entitled Statistical physics-based reconstruction in compressed sensing by Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, Lenka Zdeborová. Let's put this in perspective: our current understanding so far is that the universal phase transition observed by Donoho and Tanner seems to be seen with all the solvers featured here, that there are many ensembles for which it fits (not just Gaussian; I remember my jaw dropping when Jared Tanner showed it worked for the ensembles of Piotr Indyk, Radu Berinde et al.), and that the only way to break it is to consider structured sparsity, as shown by Phil Schniter at the beginning of the week. In most people's minds, the L_1 solvers are really a good proxy for the L_0 solvers, since even greedy solvers (the closest we can find to L_0 solvers) seem to provide similar results. Then there are results like the ones of Shrinivas Kudekar and Henry Pfister (Figure 5 of The Effect of Spatial Coupling on Compressive Sensing) that look like some sort of improvement (but not a large one). In all, a slight improvement over that phase transition could, maybe, be attributed to a slightly different solver or ensemble (measurement matrices). So this morning I made the point that, given what I understood about the graphs displayed in the article, it may be at best a small improvement over the Donoho-Tanner phase transition, known to hold not only for Gaussian but other types of matrices, and for different kinds of solvers, including greedy algorithms and SL0 (which simulate some sort of L_0 approach).
At best is really an overstatement, but I was intrigued mostly because of the use of an AMP solver, so I fired off an inquisitive e-mail on the subject to the corresponding author: Dear Dr. Krzakala, ... I briefly read your recent paper on arxiv with regard to your statistical physics based reconstruction capability, and I am wondering if your current results are within the known boundary of what we know of the phase transition found by Donoho and Tanner, or if it is an improvement on it. I provided an explanation of what I meant in today's entry (http://nuit-blanche.blogspot.com/2011/09/this-week-in-compressive-sensing.html). If this is an improvement, I'd love to hear about it. If it is not an improvement, one wonders if some of the deeper geometrical findings featured by the Donoho-Tanner phase transition have a bearing on phase transitions in real physical systems. Best regards, Igor. The authors responded quickly with: Dear Igor, Thanks for writing about our work in your blog. Please notice, however, that our axes in the figure you show are not the same as those of Donoho and Tanner. For a signal with N components, we define \rho N as the number of non-zeros in the signal, and \alpha N as the number of measurements. In our notation Donoho and Tanner's parameters are rho_DT = rho/alpha and delta_DT = alpha. We are attaching our figure plotted in Donoho and Tanner's way. Our green line is then exactly DT's red line (since we do not put any restriction on the signal elements); the rest is how much we can improve on it with our method.
Asymptotically (N \to \infty) our method can reconstruct exactly up to the red line alpha = rho, which is the absolute limit for exact reconstruction (with exhaustive search algorithms). So we indeed improve a lot over the standard L1 reconstruction! We will of course be very happy to discuss/explain/clarify details if you are interested. With best regards, Florent, Marc, Francois, Yifan, and Lenka The reason I messed up reading the variables is that I was probably not expecting something that stunning. Thanks to Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, and Lenka Zdeborová for their rapid feedback. Liked this entry? Subscribe to the Nuit Blanche feed, there's more where that came from
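The change of variables described in the authors' reply (rho_DT = rho/alpha, delta_DT = alpha) is a one-line conversion; a minimal sketch (function name is ours):

```python
def to_donoho_tanner(rho, alpha):
    """Convert (rho, alpha) — nonzeros/N and measurements/N for a
    length-N signal — to Donoho-Tanner coordinates (delta_DT, rho_DT)."""
    return alpha, rho / alpha

# A signal with 0.1*N nonzeros measured 0.5*N times:
delta_dt, rho_dt = to_donoho_tanner(rho=0.1, alpha=0.5)
print(delta_dt, rho_dt)  # 0.5 0.2
```

In these coordinates the authors' red line alpha = rho becomes rho_DT = 1, the absolute limit for exact reconstruction.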
With all of the other vote-controlled mechanisms on SE, you have every bit of information you need to vote. For example, wrt closing and reopening, you can see the post and all the comments. You're not making a blind decision.In this case, the suspension details (the warnings, the exact offence, etc) are private and only viewable by moderators (and the ... The offensive comment from Fawad has been deleted, but as a room owner I can see it and I can confirm that Fawad told ACuriousMind to shut the fuck up.I think Fawad should consider himself very lucky not to be banned for an extended period. Suspensions are private, and in some cases like vote fraud most of the details have to remain private at all times. The users voting to undo a suspension don't have access to all the information about the incident, how are they supposed to make a reasoned judgement without knowing what exactly happened?When a user is suspended, a lot of the information ... No. You can't.Every user with the requisite rep is entitled to vote on the content you post. That's by design: we use a crowd-sourced evaluation. There may exist users that cast more downvotes than up, and that is allowed. It just means that user has very stringent standards of quality; as long as those standards are applied evenly the user is free to ... As a user that frequents SE sites intermittently (and with less reputation than most, including yourself), please interpret my upcoming critique as independent from the high-rep user/users you believe are targeting you. You seem to be intentionally ignoring the community's critique, falsely assuming that you are the victim.I appreciate your curiosity - it ... The argument of the StackExchange folks against meta tags is reasonable in most cases, but it doesn't apply here in my opinion. 
Jeff Atwood quoted 2 arguments against meta tags here, but neither applies very well to this proposal. Shog9 wrote: I think the [subjective] tag is useless at best and actively harmful at worst. Useless, because for all the ... There is nothing about rep that magically makes someone whose first language isn't English suddenly become a master at the language. Nor is there some way rep prevents someone from asking a meaningless question. So, to the titular question: No, there should not be a rep threshold for unclear questions. What should be done in cases of unclear questions, ... This is a technical writing problem, and it is your problem as the author. Start with a good, representative title. If your title reads like a different or simpler question than you intend then you have communicated something other than what you intended. If you can't state the question in the title there is a good chance you are trying to cram too much ... I use the AutoReviewComments userscript with the following comments: [Q] Too broad Please only ask one question per post - only ask several if they are so closely related that it wouldn't make sense to split them up since they cannot reasonably be answered separately. That way, answerers that might be able to answer one question but not the others ... To me personally it was always clear that astronomy (containing observational and theoretical astrophysics, for example) IS physics, such that it would not be necessary to add it to the logo of Physics SE. But after rethinking and discussing this a bit with my office mate (he is an astrophysicist), I'm no longer too opposed to adding astronomy to the logo ... There's been an awful lot said here already, so I'm gonna cut to the chase: popular-science is clearly a meta tag, in the same way that homework is a meta tag - it can't stand alone when describing a question. That doesn't necessarily mean it's bad or should be removed though.
[popular-science], like [homework] has value in describing the sorts of answers ... We give the short indication on the profile to avoid the Streisand effect, so other users get a general idea of why an account was suspended. The user that was suspended is always welcome to contact us (the Stack Exchange team) to let us know if they feel the suspension was unwarranted or excessive. There are checks and balances on everything that community ... It seems MathJax supports size Latex commands:\tiny, \scriptsize, \footnotesize, \small, \normalsize (default), \large, \Large (capital "L"), \LARGE (all caps), \huge, \Huge (capital "H")Which go from smallest to largest. A quick test showed that \large or \Large worked rather well for the equation you have above (getting larger might be just too much).... For the record:I asked you to not ping users in chat who hadn't shown any indication of being interested in being pinged by you:Please do not ping random users in an attempt to get a question answered. If someone wants to answer your question, they will do so on their own. -- link to transcriptThe (currently) four stars on the message suggest that ... To be perfectly clear:We expect you to take responsibility for your actions.This includes every one of your actions on this site, but in this particular context, it's worth emphasising that it includes setting bounties, allowing other people to access your account, and lying or other deceptive behaviour.If you set a bounty and then regretted the ... We are not a forum, but a question and answer site. It is not the aim of the SE model to encourage debate, but to provide (more or less) definite answers to definite questions. Though comments may contain worthwhile information, reputation is granted for contributions towards good questions and answers.Comments are transient by design, and subject to ... 
Just from a brief look at this, it seems to me like a bad idea to enable it, for several reasons. It changes the behavior of various commands. Example:\sin\left(\frac{1+\frac{1}{1+\frac{1}{2}}}{2}\right)$\sin\left(\frac{1+\frac{1}{1+\frac{1}{2}}}{2}\right)$\sin(\frac{1+\frac{1}{1+\frac{1}{2}}}{2})$\sin(\frac{1+\frac{1}{1+\frac{1}{2}}}{2})$... Absolutely. I've only been helped a few times now, but I would want to professionally acknowledge some of the most interesting answers and posters. From a practical point of view, it is also very useful. Say, for example, one of my audience members wants to delve deeper into an answer that I cite. If you convince your administrator to enable mhchem here – chemistry.SE does have it – then you could make use of MathJax/mhchem's \pu command. $1.38 \times 10^{-23} \, \mathrm{J\,K^{-1}}$ could then be written as \pu{1.38E-23 J K^-1}. See how it renders at chemistry.SE. Background information: MathJax is a LaTeX implementation, written in JavaScript. ... And what would such a flag do? Alert the moderators? What if they lack sufficient expertise in the flagged topic to make a good call? Alert high-rep users? So that they can... criticize the post for you? Perhaps a better idea would be to just indicate publicly that the post was flagged. We could even maintain a count of the number of such flags, and display ... It's unfortunate to hear that you are so personally offended. I assure you that the people who implemented this policy were probably not aware of your existence when they made it, so I hope that knowledge helps to lessen how personally insulted you feel. The sad reality is that this policy works extremely well. 50 reputation points is not difficult ...
I won't give my opinion on this directly, but I would like to point out the site description from the first line of the Tour page: Physics Stack Exchange is a question and answer site for active researchers, academics and students of physics and astronomy. So, I would read that to mean that answering questions for the layman (qualified in your post as ... I'm not a big fan of live Q/A because I don't think they produce really good answers. Personally I find a good answer requires me to go offline for a few minutes and think about exactly what is being asked and the best strategy for answering it. However, I think the chat session worked really well and was very enjoyable. Many thanks to everyone involved. ... Is there a way for moderators to act as intermediaries between users who provide an answer, and users who have downvoted said answer? As a moderator, that's not a job that I want. Users who can't communicate in a civil manner with each other on our site can spend their time elsewhere. This is a great community, but it's not the only place to talk to ... Every user is able to provide whatever details they would like in the "about me" section of his/her profile. The full text can be viewed by visiting the user's profile. If the user has earned enough Rep, we will show a summary of their profile when their usercard is hovered over. With this, we highlight more about a user. Stack Exchange, ... This will be the official "poll" answer: vote this up if you agree that the logo should be changed, down if you disagree. In the latter case it would be great if you can express your reasons for disagreeing in a comment. A rough conception of what the modified logo could look like is provided in the question. This is not a finalized design, so please do not ... Bans and suspensions need to be carried out by moderators. There's no correlation between someone's ability to answer/ask physics questions and their social intelligence.
Moderators, on the other hand, are usually judged by a community over a long period of time as possessing a character which enables them to consistently deal with people in a calm, fair ...
Research Open Access Published: Blow-up criteria of smooth solutions to the three-dimensional magneto-micropolar fluid equations Boundary Value Problems volume 2015, Article number: 118 (2015) Abstract In this short article, the initial value problem for the 3D magneto-micropolar fluid equations is investigated. Some new blow-up criteria of smooth solutions in terms of the vorticity and the velocity in a homogenous Besov space are established, respectively. Introduction In this short article, we consider the initial value problem for the three-dimensional magneto-micropolar fluid equations with the initial value where \(u(t, x)\), \(v(t, x)\), \(b(t, x)\) and \(p(t, x)\) denote the velocity of the fluid, the micro-rotational velocity, the magnetic field and the hydrostatic pressure, respectively. μ is the kinematic viscosity, χ is the vortex viscosity, γ and κ are spin viscosities, and \(\frac{1}{\nu}\) is the magnetic Reynolds number. Many physicists and mathematicians have studied the incompressible magneto-micropolar fluid equations because the equations have rich phenomena, important physical background, and mathematical complexity and challenges. On the one hand, for well-posedness of solutions to problem (1.1), (1.2), we refer to [1–4] and [5] and the references cited therein. On the other hand, for the blow-up criteria of smooth solutions and regularity criteria of weak solutions, we refer to [6–8] and [5, 9, 10]. If \(b=0\), (1.1) reduces to the micropolar fluid equations. The micropolar fluid equations were first proposed by Eringen [11] (see also [12]). The study of the micropolar fluid equations has attracted the attention of many physicists and mathematicians, and many interesting results have been established. For instance, we refer to [13–18] and [19]. If both \(v=0\) and \(\chi=0\), then equations (1.1) reduce to the magneto-hydrodynamic (MHD) equations.
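For the reader's convenience, a standard form of the 3D incompressible magneto-micropolar system, consistent with the coefficients μ, χ, γ, κ and 1/ν defined above, is the following (a sketch only; sign and coefficient conventions vary across the literature, so the original paper should be consulted for the exact statement of (1.1)):

```latex
\begin{aligned}
&\partial_t u - (\mu + \chi)\Delta u + (u\cdot\nabla)u - (b\cdot\nabla)b
  + \nabla\Bigl(p + \tfrac{1}{2}\lvert b\rvert^2\Bigr) - \chi\,\nabla\times v = 0,\\
&\partial_t v - \gamma\Delta v - \kappa\,\nabla(\nabla\cdot v) + 2\chi v
  + (u\cdot\nabla)v - \chi\,\nabla\times u = 0,\\
&\partial_t b - \tfrac{1}{\nu}\Delta b + (u\cdot\nabla)b - (b\cdot\nabla)u = 0,\\
&\nabla\cdot u = \nabla\cdot b = 0.
\end{aligned}
```

Setting \(b=0\) removes the magnetic equation and Lorentz-force terms (micropolar fluid equations), while \(v=0\), \(\chi=0\) removes the micro-rotation equation and coupling (MHD equations), matching the reductions described in the text.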
The MHD equations govern the dynamics of the velocity and magnetic fields in electrically conducting fluids such as plasmas, liquid metals, salt water, etc. (see [20]). The field of MHD was initiated by Hannes Alfvén, for which he received the Nobel Prize in physics in 1970. For global well-posedness of solutions to the MHD equations, there are a few results; we refer to [21, 22]. When the magnetic fields are purely swirling and perpendicular to the velocity fields, Lei proved global existence of solutions. Wang and Wang proved global existence of solutions in the critical space \(\chi ^{-1}\), which was introduced in [23] and used in studying the global well-posedness of the incompressible Navier-Stokes equations by Lei and Lin [24], provided that the norm of the initial value is bounded exactly by the minimal value of the viscosity coefficients. We also emphasize the various regularity criteria and blow-up criteria in [25–33] and [34]. A regularity criterion of weak solutions to the MHD equations in terms of the vorticity was established in [34]. Lei and Zhou [31] derived a criterion for the breakdown of classical solutions to the incompressible magneto-hydrodynamic equations with zero viscosity and positive resistivity. In the absence of global well-posedness, the development of blow-up/non-blow-up theory is of major importance for both theoretical and practical purposes. The purpose of this paper is to establish blow-up criteria of smooth solutions to (1.1), (1.2). The results obtained in this paper extend the MHD results in [34] to the complex fluid equations (1.1). We state our main results as follows. Theorem 1.1 Assume that \(u_{0}, v_{0}, b_{0} \in H^{m}(\mathbb{R}^{3})\), \(m\geq3\) with \(\nabla\cdot u_{0}=0\), \(\nabla\cdot b_{0}=0\). Let \((u, v, b)\) be a smooth solution to problem (1.1), (1.2) for \(0\leq t< T\). If u satisfies then the solution \((u, v, b)\) can be extended beyond \(t=T\). We have the following corollary immediately.
Corollary 1.1 Assume that \(u_{0}, v_{0}, b_{0} \in H^{m}(\mathbb{R}^{3})\), \(m\geq3\) with \(\nabla\cdot u_{0}=0\), \(\nabla\cdot b_{0}=0\). Let \((u, v, b)\) be a smooth solution to problem (1.1), (1.2) for \(0\leq t< T\). Suppose that T is the maximal existence time, then Noticing the equivalence of the norm \(\|\nabla\times u\|_{\dot {B}^{-1}_{\infty, \infty}}\) and \(\| u(t)\|_{\dot{B}^{0}_{\infty, \infty }}\), from Theorem 1.1, we immediately obtain the following. Corollary 1.2 Assume that \(u_{0}, v_{0}, b_{0} \in H^{m}(\mathbb{R}^{3})\), \(m\geq3\) with \(\nabla\cdot u_{0}=0\), \(\nabla\cdot b_{0}=0\). Let \((u, v, b)\) be a smooth solution to problem (1.1), (1.2) for \(0\leq t< T\). If u satisfies then the solution \((u, v, b)\) can be extended beyond \(t=T\). Corollary 1.2 implies the following result. Corollary 1.3 Assume that \(u_{0}, v_{0}, b_{0} \in H^{m}(\mathbb{R}^{3})\), \(m\geq3\) with \(\nabla\cdot u_{0}=0\), \(\nabla\cdot b_{0}=0\). Let \((u, v, b)\) be a smooth solution to problem (1.1), (1.2) for \(0\leq t< T\). Suppose that T is the maximal existence time, then Preliminaries Let \(\mathcal{S}(\mathbb{R}^{n})\) be the Schwartz class of rapidly decreasing functions. Given \(f \in\mathcal{S}(\mathbb{R}^{n})\), its Fourier transform \(\mathcal{F}f=\hat{f}\) is defined by and for any given \(g \in\mathcal{S}(\mathbb{R}^{n})\), its inverse Fourier transform \(\mathcal{F}^{-1}g=\check{g}\) is defined by Firstly, we recall the Littlewood-Paley decomposition. Choose a nonnegative radial function \(\phi\in \mathcal{S}(\mathbb{R}^{n})\), supported in \(\mathcal{C}=\{ \xi\in\mathbb{R}^{n}: \frac{3}{4}\leq|\xi|\leq \frac{8}{3}\}\), such that The frequency localization operator is defined by Next we recall the definition of homogeneous function spaces (see [35]). 
For \((p, q)\in[1, \infty]^{2} \) and \(s \in\mathbb{R}\), the homogeneous Besov space \(\dot{B}^{s}_{p, q}\) is defined as the set of f up to polynomials such that \(BMO\) denotes the homogenous space of bounded mean oscillations associated with the norm The following inequality is the well-known Gagliardo-Nirenberg inequality. Lemma 2.1 Let j, m be any integers satisfying \(0 \leq j < m\), and let \(1 \leq q, r \leq\infty\), and \(p\in\mathbb{R}\), \(\frac{j}{m}\leq\theta\leq1\) such that Then, for all \(f\in L^{q}(\mathbb{R}^{n})\cap W^{m, r}(\mathbb{R}^{n})\), there is a positive constant C depending only on n, m, j, q, r, θ such that the following inequality holds: with the following exception: if \(1 < r < \infty\) and \(m-j-\frac{n}{r}\) is a nonnegative integer, then (2.1) holds only for satisfying \(\frac{j}{m}\leq \theta<1\). In order to prove our main result, we need the following lemma, which may be found in [36]. Lemma 2.2 There exists a positive constant C such that We also need the following lemma, which may be found in [37]. Lemma 2.3 Assume that f, g satisfy \(\nabla\cdot f=0\) and \(\nabla\times g=0\). Then Proof of main results Proof of Theorem 1.1 It follows from (1.1) and energy estimate that Applying ∇ to the first equation in (1.1) and multiplying the resulting equation by ∇ u and integrating with respect to x on \(\mathbb{R}^{3}\), using integration by parts, we obtain Similarly, we get and By integration by parts and the Cauchy inequality, we obtain Using integration by parts, (2.3) and the Cauchy inequality, we arrive at where we have used \(\nabla\cdot\partial_{k}u=0\) and \(\nabla\times\nabla \partial_{k}u=0\). 
Integration by parts, \(\nabla\cdot\partial_{i}b=0\), \(\nabla\times\nabla \partial_{i}b=0\) and \(\nabla\cdot\partial^{2}_{i}b=0\), \(\nabla\times\nabla b=0\), (2.3) and the Cauchy inequality give By the method used to obtain (3.16) in [38], we have Similar to the proof of (3.9), we arrive at Owing to (1.3), we know that for any small constant \(\varepsilon>0\), there exists \(T_{\star}< T\) such that Let Integrating (3.11) with respect to t, we have We apply \(\nabla^{m}\) to the first equation in (1.1), multiply the resulting equation by \(\nabla^{m} u\), integrate with respect to x on \(\mathbb{R}^{3}\), and use integration by parts to obtain Similarly, we deduce that and In what follows, for simplicity, we shall set \(m=3\). Similarly, we have The Cauchy inequality gives Similarly, we deduce that Integrating (3.24) with respect to time from \(T_{\star}\) to \(t \in [T_{\star}, T)\), we have By choosing \(\varepsilon<\frac{1}{7C_{1}}\) and noting (3.1), we know that \((u, v, b)\in L^{\infty}(0, T; H^{3}(\mathbb{R}^{3}))\). Thus, \((u, v, b)\) can be extended smoothly beyond \(t = T\). We have completed the proof of Theorem 1.1. □ References 1. Ortega-Torres, E, Rojas-Medar, M: On the uniqueness and regularity of the weak solution for magneto-micropolar fluid equations. Rev. Mat. Apl. 17, 75-90 (1996) 2. Ortega-Torres, E, Rojas-Medar, M: Magneto-micropolar fluid motion: global existence of strong solutions. Abstr. Appl. Anal. 4, 109-125 (1999) 3. Rojas-Medar, M: Magneto-micropolar fluid motion: existence and uniqueness of strong solutions. Math. Nachr. 188, 301-319 (1997) 4. Rojas-Medar, M, Boldrini, J: Magneto-micropolar fluid motion: existence of weak solutions. Rev. Mat. Complut. 11, 443-460 (1998) 5. Yuan, J: Existence theorem and blow-up criterion of the strong solutions to the magneto-micropolar fluid equations. Math. Methods Appl. Sci. 31, 1113-1130 (2008) 6. Gala, S: Regularity criteria for the 3D magneto-micropolar fluid equations in the Morrey-Campanato space.
Nonlinear Differ. Equ. Appl. 17, 181-194 (2010) 7. Wang, Y, Hu, L, Wang, Y: A Beale-Kato-Majda criterion for magneto-micropolar fluid equations with partial viscosity. Bound. Value Probl. 2011, Article ID 128614 (2011) 8. Wang, Y: Regularity criterion for a weak solution to the three-dimensional magneto-micropolar fluid equations. Bound. Value Probl. 2013, Article ID 58 (2013) 9. Yuan, Y: Regularity of weak solutions to magneto-micropolar fluid equations. Acta Math. Sci. 30, 1469-1480 (2010) 10. Zhang, Z, Yao, Z, Wang, X: A regularity criterion for the 3D magneto-micropolar fluid equations in Triebel-Lizorkin spaces. Nonlinear Anal. 74, 2220-2225 (2011) 11. Eringen, A: Theory of micropolar fluids. J. Math. Mech. 16, 1-18 (1966) 12. Lukaszewicz, G: Micropolar Fluids: Theory and Applications. Modeling and Simulation in Science, Engineering and Technology. Birkhäuser, Boston (1999) 13. Dong, B, Chen, Z: Regularity criteria of weak solutions to the three-dimensional micropolar flows. J. Math. Phys. 50, 103525 (2009) 14. Dong, B, Jia, Y, Chen, Z: Pressure regularity criteria of the three-dimensional micropolar fluids flows. Math. Methods Appl. Sci. 34, 595-606 (2011) 15. Galdi, G, Rionero, S: A note on the existence and uniqueness of solutions of the micropolar fluid equations. Int. J. Eng. Sci. 15, 105-108 (1977) 16. Ortega-Torres, E, Rojas-Medar, M: On the regularity for solutions of the micropolar fluid equations. Rend. Semin. Mat. Univ. Padova 122, 27-37 (2009) 17. Wang, Y, Chen, Z: Regularity criterion for weak solution to the 3D micropolar fluid equations. J. Appl. Math. 2011, Article ID 456547 (2011) 18. Wang, Y, Yuan, H: A logarithmically improved blow-up criterion for smooth solutions to the 3D micropolar fluid equations. Nonlinear Anal., Real World Appl. 13, 1904-1912 (2012) 19. Yamaguchi, N: Existence of global strong solution to the micropolar fluid system in a bounded domain. Math. Methods Appl. Sci. 28, 1507-1526 (2005) 20.
Lifschitz, A: Magnetohydrodynamics and spectral theory. In: Developments in Electromagnetic Theory and Applications, vol. 4. Kluwer Academic, Dordrecht (1989) 21. Lei, Z: On axially symmetric incompressible magnetohydrodynamics in three dimensions. J. Differ. Equ. 259, 3202-3215 (2015) 22. Wang, Y, Wang, K: Global well-posedness of the three dimensional magnetohydrodynamics equations. Nonlinear Anal., Real World Appl. 17, 245-251 (2014) 23. Constantin, P, Córdoba, D, Gancedo, F, Strain, R: On the global existence for the Muskat problem. J. Eur. Math. Soc. 15, 201-227 (2013) 24. Lei, Z, Lin, F: Global mild solutions of Navier-Stokes equations. Commun. Pure Appl. Math. 64, 1297-1304 (2011) 25. Cao, C, Wu, J: Two regularity criteria for the 3D equations. J. Differ. Equ. 248, 2263-2274 (2010) 26. Chen, Q, Miao, C, Zhang, Z: On the regularity criterion of weak solution for the 3D viscous magneto-hydrodynamics equations. Commun. Math. Phys. 284, 919-930 (2008) 27. Fan, J, Li, F, Nakamura, G, Tan, Z: Regularity criteria for the three-dimensional magnetohydrodynamic equations. J. Differ. Equ. 256, 2858-2875 (2014) 28. He, C, Xin, Z: On the regularity of solutions to the magnetohydrodynamic equations. J. Differ. Equ. 213, 235-254 (2005) 29. He, C, Wang, Y: On the regularity for weak solutions to the magnetohydrodynamic equations. J. Differ. Equ. 238, 1-17 (2007) 30. Jia, X, Zhou, Y: Regularity criteria for the 3D MHD equations via partial derivatives II. Kinet. Relat. Models 7, 291-304 (2014) 31. Lei, Z, Zhou, Y: BKM criterion and global weak solutions for magnetohydrodynamics with zero viscosity. Discrete Contin. Dyn. Syst., Ser. A 25, 575-583 (2009) 32. Wang, Y, Zhao, H, Wang, Y: A logarithmically improved blow up criterion of smooth solutions for the three-dimensional MHD equations. Int. J. Math. 23, 1250027 (2012) 33. Wang, Y, Wang, S, Wang, Y: Regularity criteria for weak solution to the 3D magnetohydrodynamic equations. Acta Math. Sci. 32, 1063-1072 (2012) 34.
Xu, X, Ye, Z, Zhang, Z: Remark on an improved regularity criterion for the 3D MHD equations. Appl. Math. Lett. 42, 41-46 (2015) 35. Triebel, H: Theory of Function Spaces. Monographs in Mathematics, vol. 78. Birkhäuser, Basel (1983) 36. Kozono, H, Ogawa, T, Taniuchi, Y: The critical Sobolev inequalities in Besov spaces and regularity criterion to some semi-linear evolution equations. Math. Z. 242, 251-278 (2002) 37. Coifman, R, Lions, P, Meyer, Y, Semmes, S: Compensated compactness and Hardy spaces. J. Math. Pures Appl. 72, 247-286 (1993) 38. Zheng, X: A regularity criterion for the tridimensional Navier-Stokes equations in term of one velocity component. J. Differ. Equ. 256, 283-309 (2014) Acknowledgements The author would like to thank the referees for valuable comments and suggestions. This work is partially supported by the NNSF of China (Grant No. 11101144). Additional information Competing interests The author declares that she has no competing interests. Author’s contributions The author completed the paper herself. The author read and approved the final manuscript.
First, I think you made a mistake in your computations above. Where you wrote $(30-20)$, I think you really meant $(30-(-20))$, i.e. $30+20$, yielding a gamma P&L of $1000$ instead of $200$. Your total P&L over $[90,170]$ would then be $110$ instead of $-690$. It doesn't matter for my answer either way, just thought I'd point it out for confused readers. By definition, $\Delta = \frac{\partial V}{\partial S}$, where $V$ is the price of a financial derivative and $S$ is the price of its underlying. So if $S$ experiences a move from $S_0$ to $S_1$, it follows logically that $$\Delta V = V(S=S_1) - V(S=S_0) = \int_{S_0}^{S_1}{\Delta \mathrm{d}S}$$ So your P&L is indeed the area under the curve. Now, unfortunately, what you are computing here is not that. Indeed, by writing your P&L as the sum of these simplistic delta and gamma terms, what you really are saying, mathematically, is: $$\Delta V = \int_{S_0}^{S_1}{\Delta \mathrm{d}S} = \Delta(S=S_0)\cdot(S_1-S_0) + \frac{\Delta(S=S_1)-\Delta(S=S_0)}{2}\cdot(S_1-S_0)$$i.e.$$\Delta V = \frac{\Delta(S=S_0)+\Delta(S=S_1)}{2}\cdot(S_1-S_0)$$ This would hold if, for example, $\Delta$ were linear over $[S_0,S_1]$, but unfortunately that isn't true in the general case. For example, over $[110, 130]$, your computed $\Delta$ is slightly concave (it would have to be equal to $-4.5$ at $120$ to be linear), so your estimation of the P&L is slightly off. Over $[130,170]$, as the interval is larger and the shape more complex, the error is obviously worse. A better estimation, when you know some values of $\Delta$ over a discrete interval, would be to assume that it is piecewise linear in between observations. It would be equivalent to using your method, but over the smallest possible intervals, which is indeed, as you suggest, the same as doing a discrete integral.
In this case $$\int_{S_i}^{S_j}{\Delta \mathrm{d}S} = \sum_{k=i}^{k=j-1}\left(\frac{\Delta(S=S_k)+\Delta(S=S_{k+1})}{2}\cdot(S_{k+1}-S_k)\right)$$ In your specific case, the computation would yield $$\Delta V = \left(\frac{11-5}{2}+\frac{-20-5}2+\frac{-12-20}2+\frac{-12+10}2+\frac{15+10}2+\frac{30+15}2\right)\cdot10=85$$ So as you can see, it gives a quite different result ;-)
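The trapezoidal sum above is easy to reproduce programmatically. A minimal Python sketch; the delta values below are the ones implied by the arithmetic in the final display (at spots 10 apart), so treat them as an assumption about the original table:

```python
# Piecewise-linear (trapezoid-rule) integration of delta over the spot path.
# Deltas implied by the answer's arithmetic, at spot points 10 apart (assumed).
deltas = [11, -5, -20, -12, 10, 15, 30]
dS = 10

# P&L = integral of delta dS, approximated trapezoid by trapezoid.
pnl = sum((deltas[k] + deltas[k + 1]) / 2 * dS for k in range(len(deltas) - 1))
print(pnl)  # 85.0
```

The same result comes out of any numerical quadrature routine that assumes linearity between observations, which is exactly the piecewise-linear assumption described above.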
This example uses systune to generate smooth gain schedules for a three-loop autopilot. This example uses a three-degree-of-freedom model of the pitch axis dynamics of an airframe. The states are the Earth coordinates (Xe, Ze), the body coordinates (u, w), the pitch angle θ, and the pitch rate q. Figure 1 summarizes the relationship between the inertial and body frames, the flight path angle γ, the incidence angle α, and the pitch angle θ. Figure 1: Airframe dynamics. We use a classic three-loop autopilot structure to control the flight path angle γ. This autopilot adjusts the flight path by delivering adequate bursts of normal acceleration az (acceleration along the body z-axis). In turn, normal acceleration is produced by adjusting the elevator deflection δ to cause pitching and vary the amount of lift. The autopilot uses Proportional-Integral (PI) control in the pitch rate loop and proportional control in the az and γ loops. The closed-loop system (airframe and autopilot) is modeled in Simulink.

addpath(fullfile(matlabroot,'examples','control','main'))  % add example data
open_system('rct_airframeGS')

The airframe dynamics are nonlinear and the aerodynamic forces and moments depend on speed V and incidence α. To obtain suitable performance throughout the flight envelope, the autopilot gains must be adjusted as a function of α and V to compensate for changes in plant dynamics. This adjustment process is called "gain scheduling" and α, V are called the scheduling variables. In the Simulink model, gain schedules are implemented as look-up tables driven by measurements of α and V. Gain scheduling is a linear technique for controlling nonlinear or time-varying plants. The idea is to compute linear approximations of the plant at various operating conditions, tune the controller gains at each operating condition, and swap gains as a function of operating condition during operation.
Conventional gain scheduling involves three major steps:

1. Trim and linearize the plant at each operating condition
2. Tune the controller gains for the linearized dynamics at each operating condition
3. Reconcile the gain values to provide smooth transition between operating conditions.

In this example, we combine Steps 2 and 3 by parameterizing the autopilot gains as first-order polynomials in α and V and directly tuning the polynomial coefficients for the entire flight envelope. This approach eliminates Step 3 and guarantees smooth gain variations as a function of α and V. Moreover, the gain schedule coefficients can be automatically tuned with systune. Assume that the incidence α varies between -20 and 20 degrees and that the speed V varies between 700 and 1400 m/s. When neglecting gravity, the airframe dynamics are symmetric in α, so consider only positive values of α. Use a 5-by-9 grid of linearly spaced (α, V) pairs to cover the flight envelope:

nA = 5;  % number of alpha values
nV = 9;  % number of V values
[alpha,V] = ndgrid(linspace(0,20,nA)*pi/180,linspace(700,1400,nV));

For each flight condition (α, V), linearize the airframe dynamics at trim (zero normal acceleration and pitching moment). This requires computing the elevator deflection δ and pitch rate q that result in steady α and V. To do this, first isolate the airframe model in a separate Simulink model.

open_system('rct_airframeTRIM')

Use operspec to specify the trim condition, use findop to compute the trim values of δ and q, and linearize the airframe dynamics for the resulting operating point. See the "Trimming and Linearizing an Airframe" example in Simulink Control Design for details. Repeat these steps for the 45 flight conditions (α, V).
% Compute trim condition for each (alpha,V) pair
clear op
for ct=1:nA*nV
   alpha_ini = alpha(ct);   % Incidence [rad]
   v_ini = V(ct);           % Speed [m/s]
   % Specify trim condition
   opspec = operspec('rct_airframeTRIM');
   % Xe,Ze: known, not steady
   opspec.States(1).Known = [1;1];
   opspec.States(1).SteadyState = [0;0];
   % u,w: known, w steady
   opspec.States(3).Known = [1 1];
   opspec.States(3).SteadyState = [0 1];
   % theta: known, not steady
   opspec.States(2).Known = 1;
   opspec.States(2).SteadyState = 0;
   % q: unknown, steady
   opspec.States(4).Known = 0;
   opspec.States(4).SteadyState = 1;
   % TRIM
   Options = findopOptions('DisplayReport','off');
   op(ct) = findop('rct_airframeTRIM',opspec,Options);
end

% Linearize at trim conditions
G = linearize('rct_airframeTRIM',op);
G = reshape(G,[nA nV]);
G.u = 'delta';
G.y = {'alpha' 'V' 'q' 'az' 'gamma' 'h'};

This produces a 5-by-9 array of linearized plant models at the 45 flight conditions (α, V). The plant dynamics vary substantially across the flight envelope.

sigma(G), title('Variations in airframe dynamics')

The autopilot consists of four gains to be "scheduled" (adjusted) as a function of α and V. Practically, this means tuning 88 values in each of the corresponding four look-up tables. Rather than tuning each table entry separately, parameterize the gains as two-dimensional gain surfaces, for example surfaces with a simple multi-linear dependence on α and V: K(α,V) = K0 + K1 α + K2 V + K3 α V. This cuts the number of variables from 88 down to 4 for each lookup table. Use the tunableSurface object to parameterize each gain surface. Note that:

TuningGrid specifies the "tuning grid" (design points). This grid should match the one used for linearization but need not match the look-up table breakpoints
ShapeFcn specifies the basis functions for the surface parameterization (α, V, and αV)

Each surface is initialized to a constant gain using the tuning results for α = 10 deg and V = 1050 m/s (mid-range design).
TuningGrid = struct('alpha',alpha,'V',V);
ShapeFcn = @(alpha,V) [alpha,V,alpha*V];
Kp = tunableSurface('Kp', 0.1, TuningGrid, ShapeFcn);
Ki = tunableSurface('Ki', 2, TuningGrid, ShapeFcn);
Ka = tunableSurface('Ka', 0.001, TuningGrid, ShapeFcn);
Kg = tunableSurface('Kg', -1000, TuningGrid, ShapeFcn);

Next create an slTuner interface for tuning the gain surfaces. Use block substitution to replace the nonlinear plant model by the linearized models over the tuning grid. Use setBlockParam to associate the tunable gain surfaces Kp, Ki, Ka, Kg with the Interpolation blocks of the same name.

BlockSubs = struct('Name','rct_airframeGS/Airframe Model','Value',G);
ST0 = slTuner('rct_airframeGS',{'Kp','Ki','Ka','Kg'},BlockSubs);
% Register points of interest
ST0.addPoint({'az_ref','az','gamma_ref','gamma','delta'})
% Parameterize look-up table blocks
ST0.setBlockParam('Kp',Kp,'Ki',Ki,'Ka',Ka,'Kg',Kg);

systune can automatically tune the gain surface coefficients for the entire flight envelope. Use TuningGoal objects to specify the performance objectives:

γ loop: Track the γ setpoint with a 1 second response time, less than 2% steady-state error, and less than 30% peak error.

Req1 = TuningGoal.Tracking('gamma_ref','gamma',1,0.02,1.3);
viewGoal(Req1)

az loop: Ensure good disturbance rejection at low frequency (to track acceleration demands) and past 10 rad/s (to be insensitive to measurement noise).

% Note: The disturbance is injected at the az_ref location
RejectionProfile = frd([0.02 0.02 1.2 1.2 0.1],[0 0.02 2 15 150]);
Req2 = TuningGoal.Gain('az_ref','az',RejectionProfile);
viewGoal(Req2)

q loop: Ensure good disturbance rejection up to 10 rad/s. The disturbance is injected at the plant input delta.
Req3 = TuningGoal.Gain('delta','az',600*tf([0.25 0],[0.25 1]));
viewGoal(Req3)

Transients: Ensure a minimum damping ratio of 0.35 for oscillation-free transients

MinDamping = 0.35;
Req4 = TuningGoal.Poles(0,MinDamping);

Using systune, tune the 16 gain surface coefficients to best meet these performance requirements at all 45 flight conditions.

ST = systune(ST0,[Req1 Req2 Req3 Req4]);

Final: Soft = 1.13, Hard = -Inf, Iterations = 57

The final value of the combined objective is close to 1, indicating that all requirements are nearly met. Visualize the resulting gain surfaces.

% Get tuned gain surfaces
TGS = getBlockParam(ST);

% Plot gain surfaces
clf
subplot(221), viewSurf(TGS.Kp), title('Kp')
subplot(222), viewSurf(TGS.Ki), title('Ki')
subplot(223), viewSurf(TGS.Ka), title('Ka')
subplot(224), viewSurf(TGS.Kg), title('Kg')

First validate the tuned autopilot at the 45 flight conditions considered above. Plot the response to a step change in flight path angle and the response to a step disturbance in elevator deflection.

clf
subplot(211), step(getIOTransfer(ST,'gamma_ref','gamma'),5), grid
title('Tracking of step change in flight path angle')
subplot(212), step(getIOTransfer(ST,'delta','az'),3), grid
title('Rejection of step disturbance at plant input')

The responses are satisfactory at all flight conditions. Next validate the autopilot against the nonlinear airframe model. First use writeBlockValue to apply the tuning results to the Simulink model. This evaluates each gain surface formula at the breakpoints specified in the two Prelookup blocks and writes the result in the corresponding Interpolation block.

writeBlockValue(ST)

Now simulate the autopilot performance for a maneuver that takes the airframe through a large portion of its flight envelope. The code below is equivalent to pressing the Play button in the Simulink model and inspecting the responses in the Scope blocks.
% Initial conditions
h_ini = 1000;
alpha_ini = 0;
v_ini = 700;

% Simulate
SimOut = sim('rct_airframeGS', 'ReturnWorkspaceOutputs', 'on');

% Extract simulation data
SimData = get(SimOut,'sigsOut');
Sim_gamma = getElement(SimData,'gamma');
Sim_alpha = getElement(SimData,'alpha');
Sim_V = getElement(SimData,'V');
Sim_delta = getElement(SimData,'delta');
Sim_h = getElement(SimData,'h');
Sim_az = getElement(SimData,'az');
t = Sim_gamma.Values.Time;

% Plot the main flight variables
clf
subplot(211)
plot(t,Sim_gamma.Values.Data(:,1),'r--',t,Sim_gamma.Values.Data(:,2),'b'), grid
legend('Commanded','Actual','location','SouthEast')
title('Flight path angle \gamma in degrees')
subplot(212)
plot(t,Sim_delta.Values.Data), grid
title('Elevator deflection \delta in degrees')

subplot(211)
plot(t,Sim_alpha.Values.Data), grid
title('Incidence \alpha in degrees')
subplot(212)
plot(t,Sim_V.Values.Data), grid
title('Speed V in m/s')

subplot(211)
plot(t,Sim_h.Values.Data), grid
title('Altitude h in meters')
subplot(212)
plot(t,Sim_az.Values.Data), grid
title('Normal acceleration a_z in g''s')

Tracking of the flight path angle profile remains good throughout the maneuver. Note that the variations in incidence α and speed V cover most of the flight envelope considered here ([-20,20] degrees for α and [700,1400] m/s for V). And while the autopilot was tuned for a nominal altitude of 3000 m, it fares well for altitude changing from 1,000 to 10,000 m. The nonlinear simulation results confirm that the gain-scheduled autopilot delivers consistently high performance throughout the flight envelope. The "gain surface tuning" procedure provides simple explicit formulas for the gain dependence on the scheduling variables. Instead of using look-up tables, you can use these formulas directly for a more memory-efficient hardware implementation.

rmpath(fullfile(matlabroot,'examples','control','main'))  % remove example data
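The closing remark (that the tuned gain surfaces are explicit formulas, not tables) is easy to illustrate outside MATLAB. A minimal Python sketch of evaluating a multi-linear surface K(α, V) = K0 + K1·α + K2·V + K3·α·V; the coefficient values below are made-up placeholders, not the systune-tuned ones:

```python
# Minimal sketch of a multi-linear ("bilinear") gain surface K(alpha, V).
# Coefficients are hypothetical placeholders, not the systune-tuned values.
def gain_surface(k0, k1, k2, k3):
    """Return a callable K(alpha, V) = k0 + k1*alpha + k2*V + k3*alpha*V."""
    return lambda alpha, V: k0 + k1 * alpha + k2 * V + k3 * alpha * V

Kp = gain_surface(0.1, 0.02, -1e-5, 3e-6)  # placeholder coefficients

# In a lookup-table deployment you would evaluate this on the breakpoints;
# in a formula-based deployment you evaluate it directly in flight:
alpha, V = 10 * 3.141592653589793 / 180, 1050.0  # one flight condition
print(Kp(alpha, V))
```

For fixed V the surface is exactly linear in α (and vice versa), which is what makes the scheduled gains vary smoothly between design points.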
Here is a hopefully complete answer. Basically, this is just an elementary proof of the implicit function theorem for functions of 2 variables. If $h$ is constant, you can take as $g$ any injective $\mathcal C^1$ map from $(0,1)$ into $\mathbb R^2$; for example $g(t)=(t,0)$. Now, assume that $h$ is not constant. Then the partial derivatives of $h$ cannot be both identically $0$. So we may assume for example that $\frac{\partial h}{\partial y}(x_0,y_0)\neq 0$ for some point $(x_0,y_0)\in\mathbb R^2$; and without loss of generality, we may also assume that $\frac{\partial h}{\partial y}(x_0,y_0)> 0$. Set $$c:= h(x_0,y_0)\, .$$ Since $\frac{\partial h}{\partial y}$ is continuous, one can find an open interval $I_0\ni x_0$ and $\delta >0$ such that $$\frac{\partial h}{\partial y}(x,y)>0\quad {\rm for\; any\;}\; (x,y)\in I_0\times [y_0-\delta,y_0+\delta]\, .$$ The proof will be divided into several steps. Step 1. One can find an open interval $I$ with $x_0\in I\subset I_0$ such that the following holds: for every $x\in I$, there is a unique point $y=y(x)\in [y_0-\delta,y_0+\delta]$ such that $h(x,y(x))=c$. To prove this, observe first that the map $y\mapsto h(x_0,y)$ is increasing on $[y_0-\delta,y_0+\delta]$ because $\frac{\partial h}{\partial y}(x_0,y)>0$ on this interval by the choice of $\delta$. Since $h(x_0,y_0)=c$, it follows in particular that $h(x_0, y_0-\delta)<c<h(x_0,y_0+\delta)$. Now the set $$U=\{ x\in\mathbb R;\; h(x,y_0-\delta)<c<h(x,y_0+\delta)\} $$is an open set in $\mathbb R$ by the continuity of $h$, and $x_0\in U$ by what has just been observed. So one can find an open interval $I_1$ such that $x_0\in I_1\subset U$. If we set $I:=I_0\cap I_1$ then $x_0\in I\subset I_0$ and $$h(x,y_0-\delta)<c<h(x,y_0+\delta)\quad{\rm for\; every\;}x\in I\, . $$Let us fix any $x\in I$.
Then the map $y\mapsto h(x,y)$ is continuous, and it is increasing on $[y_0-\delta, y_0+\delta]$ because $\frac{\partial h}{\partial y}(x,y)>0$ on this interval, and $h(x,y_0-\delta)<c<h(x,y_0+\delta)$. By the intermediate value theorem, it follows that there exists a unique $y=y(x)\in [y_0-\delta, y_0+\delta]$ such that $h(x,y(x))=c$. This concludes Step 1. Step 2. The map $x\mapsto y(x)$ is continuous on $I$. We prove this by contradiction. Assume that this map is not continuous at some point $x\in I$. Then one can find a sequence $(x_n)\subset I$ converging to $x$ and $\varepsilon >0$ such that $\vert y(x_n)-y(x)\vert\geq \varepsilon$ for all $n\in\mathbb N$. Since $y(x_n)\in [y_0-\delta,y_0+\delta]$, one can find a subsequence $(y(x_{n_k}))$ and a point $y\in [y_0-\delta,y_0+\delta]$ such that $y(x_{n_k})\to y$ (by Bolzano-Weierstrass). But we have $h(x_{n_k},y(x_{n_k}))=c$ for all $k$, so $h(x,y)=c$ because $h$ is continuous and $(x_{n_k},y(x_{n_k}))\to (x,y)$. Since $y\in [y_0-\delta,y_0+\delta]$, it follows that $y=y(x)$; but this is a contradiction since $\vert y(x_{n_k})-y(x)\vert\geq\varepsilon$ for all $k$ and hence $\vert y-y(x)\vert\geq\varepsilon >0$. This concludes Step 2. Step 3. The map $x\mapsto y(x)$ is $\mathcal C^1$ on $I$. It is enough to show that this map is differentiable at any point $x\in I$, with $$ y'(x)=-\frac{\frac{\partial h}{\partial x}(x,y(x))}{\frac{\partial h}{\partial y}(x,y(x))}\cdot$$Indeed, since the partial derivatives of $h$ are continuous and the map $x\mapsto y(x)$ is continuous by Step 2, this formula will then show that $y'$ is continuous, i.e. that $y$ is $\mathcal C^1$. Let us fix $x\in I$. Set $$a:=\frac{\partial h}{\partial x}(x,y(x))\quad{\rm and}\quad b:=\frac{\partial h}{\partial y}(x,y(x))$$Keep in mind that $b\neq 0$ because $\frac{\partial h}{\partial y}(x,y)>0$ on $I\times [y_0-\delta,y_0+\delta]$. By Step 2, one may write$$y(x+h)=y(x)+\varepsilon(h)\, ,$$ where $\varepsilon(h)\to 0$ as $h\to 0$.
Moreover, we also have (by the differentiability of $h$ at $(x,y(x))$) \begin{eqnarray}h(x+h,y(x+h))&=&h(x+h, y(x)+\varepsilon (h))\\&=&h(x,y(x))+ a\, h+b\, \varepsilon (h)+R(h)\, ,\end{eqnarray}where $R(h)=o(\vert h\vert+ \vert\varepsilon (h)\vert)$ as $h\to 0$, i.e.$$\lim_{h\to 0}\frac{R(h)}{\vert h\vert+ \vert\varepsilon (h)\vert}=0\, . $$ Since $h(x+h,y(x+h))=c=h(x,y(x))$, this can be rewritten as\begin{equation}\varepsilon(h)=-\frac{a}{b}\, h-\frac1b R(h).\end{equation}If $h$ is small enough, we have $\left\vert-\frac1b R(h)\right\vert\leq \frac12\vert h\vert$ because $R(h)=o(\vert h\vert+\vert\varepsilon (h)\vert)$, so that $\vert\varepsilon(h)\vert\leq \left\vert\frac{a}{b}\right\vert \vert h\vert+\frac12(\vert h\vert+ \vert\varepsilon (h)\vert)$ and hence $\vert\varepsilon (h)\vert\leq C\,\vert h\vert$, where $C=2\vert a/b\vert+1$. This gives $\vert\varepsilon (h)\vert+\vert h\vert\leq (C+1)\vert h\vert$, which implies that in fact $$\lim_{h\to 0}\frac{R(h)}{\vert h\vert}=0\, . $$Since \begin{eqnarray}y(x+h)&=&y(x)+\varepsilon(h)\\&=&y(x)-\frac{a}{b}\, h-\frac1b R(h)\, ,\end{eqnarray}we conclude that the map $y$ is indeed differentiable at $x$ with $y'(x)=-\frac{a}{b}\cdot$ This concludes Step 3. Step 4. There is a one-to-one $\mathcal C^1$ map $g_0:I\to\mathbb R^2$ such that $h(g_0(x))=c$ for all $x\in I$. Just define $g_0(x)=(x,y(x))$. This map is $\mathcal C^1$ by Step 3, it is clearly one-to-one because of its first coordinate, and $h(g_0(x))=c$ for all $x\in I$ by the very definition of $y(x)$. Step 5. Conclusion. Write $I=(\alpha,\beta)$ and define $g:(0,1)\to\mathbb R^2$ by $g(t)=g_0(\alpha+ t(\beta-\alpha))$. Then $g$ has the required properties.
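Steps 1-3 can be checked numerically on a concrete example. The sketch below (Python, not part of the proof) takes $h(x,y)=x^2+y^2$ and $c=1$ near $(x_0,y_0)=(0,1)$, where $\frac{\partial h}{\partial y}=2y>0$; it solves $h(x,y(x))=c$ by bisection as in Step 1, and compares a difference quotient of $y$ with the formula $y'(x)=-h_x/h_y$ from Step 3:

```python
import math

def h(x, y):
    # Example function: h(x, y) = x^2 + y^2, level set c = 1 (the unit circle).
    return x * x + y * y

c = 1.0

def y_of_x(x, lo=0.0, hi=2.0, tol=1e-12):
    """Solve h(x, y) = c for y in [lo, hi] by bisection (h is increasing in y)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(x, mid) < c:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 0.6
y = y_of_x(x)                        # should be close to sqrt(1 - 0.36) = 0.8
slope_formula = -(2 * x) / (2 * y)   # y'(x) = -h_x / h_y = -0.75 here
eps = 1e-6
slope_numeric = (y_of_x(x + eps) - y_of_x(x - eps)) / (2 * eps)
print(y, slope_formula, slope_numeric)
```

The two slope estimates agree to several decimal places, which is exactly the content of the differentiability formula derived in Step 3.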
Maybe it would be helpful to think about the analogous situation in ordinary category theory. Suppose you are given a category $\mathcal{E}$ and a functor $F$ from $\mathcal{E}$ to the category of sets. There are several ways to encode this functor: $(a)$: Via the Grothendieck construction, $F$ determines a category $\mathcal{C}$ cofibered in sets over $\mathcal{E}$, so that for each object $E \in \mathcal{E}$ you can identify $F(E)$ with the fiber $\mathcal{C}_E$ of the map $\mathcal{C} \rightarrow \mathcal{E}$ over $E$. $(b)$: Using the functor $F$, you can construct an enlargement $\mathcal{E}_F$ of the category $\mathcal{E}$, adding a single object $v$ with$$Hom(E,v) = \emptyset \quad \quad Hom(v,E) = F(E) \quad \quad Hom(v,v) = \{ id \} $$ Now suppose we are given another functor $G$ from $\mathcal{E}$ to the category of sets, and a natural transformation $F \rightarrow G$. Then $G$ determines a category $\mathcal{D}$ cofibered in sets over $\mathcal{E}$, and an enlargement $\mathcal{E}_G$ of $\mathcal{E}$. The natural transformation $F \rightarrow G$ determines functors$$ \alpha: \mathcal{C} \rightarrow \mathcal{D} \quad \quad \beta: \mathcal{E}_F \rightarrow \mathcal{E}_G$$In this situation, the following conditions are equivalent: $(i)$: The natural transformation $F \rightarrow G$ is an isomorphism (that is, for each object $E \in \mathcal{E}$, the induced map $F(E) \rightarrow G(E)$ is bijective). $(ii)$: The functor $\alpha$ is an equivalence of categories. $(iii)$: The functor $\beta$ is an equivalence of categories. Now observe that the category $\mathcal{E}_F$ can be described as the pushout (and also homotopy pushout) of the diagram $$\mathcal{E} \leftarrow \mathcal{C} \rightarrow \mathcal{C}^{\triangleleft},$$where $\mathcal{C}^{\triangleleft}$ is the category obtained from $\mathcal{C}$ by adjoining a new initial object.
Let's now forget the original functors $F$ and $G$, and think only about the categories $\mathcal{C}$ and $\mathcal{D}$ cofibered in sets over $\mathcal{E}$. The equivalence of conditions $(ii)$ and $(iii)$ shows that a functor $\alpha: \mathcal{C} \rightarrow \mathcal{D}$ of categories cofibered over $\mathcal{E}$ is an equivalence of categories if and only if the induced map$$ \mathcal{E} \amalg_{ \mathcal{C} } \mathcal{C}^{\triangleleft}\rightarrow \mathcal{E} \amalg_{ \mathcal{D} } \mathcal{D}^{\triangleleft}$$is an equivalence of categories. Now go to the setting of quasi-categories. Assume for simplicity that $S$ is a quasi-category, and let $f: X \rightarrow Y$ be a map of simplicial sets over $S$. If $X$ and $Y$ are left-fibered over $S$, then we would like to say that $f$ is a covariant equivalence if and only if it is an equivalence of quasi-categories. However, we would like to formulate this condition in a way that will behave well also when $X$ and $Y$ are not fibrant. Motivated by the discussion above, we declare that $f$ is a covariant equivalence if and only if it induces a categorical equivalence$$ S \amalg_{X} X^{\triangleleft} \rightarrow S \amalg_{Y} Y^{\triangleleft}.$$You can then prove that this is a good definition (it gives you a model structure with the cofibrations and fibrant objects that you described, and when $X$ and $Y$ are fibrant a map $f: X \rightarrow Y$ is a covariant equivalence if and only if it induces a homotopy equivalence of fibers $X_s \rightarrow Y_s$ for each vertex $s \in S$).
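For intuition, construction $(a)$ from the beginning of the answer can be played with on a finite toy example. A Python sketch (purely illustrative, with a made-up two-object category and functor): the objects of the category of elements are pairs, and the fiber of the projection recovers $F(E)$ on the nose.

```python
# Toy Grothendieck construction (construction (a)): category of elements of F.
# E has two objects "A", "B" and one non-identity arrow A -> B (plus identities).
F_obj = {"A": {1, 2, 3}, "B": {"x", "y"}}   # F on objects (made-up sets)
F_arr = {1: "x", 2: "x", 3: "y"}            # F applied to the arrow A -> B

# Objects of the cofibered category C are pairs (E, element of F(E)).
C_objects = [(E, v) for E in F_obj for v in F_obj[E]]

# Non-identity arrows of C lie over the arrow A -> B: (A, v) -> (B, F_arr(v)).
C_arrows = [(("A", v), ("B", F_arr[v])) for v in F_obj["A"]]

# The fiber of the projection C -> E over an object E recovers F(E).
def fiber(E):
    return {v for (E2, v) in C_objects if E2 == E}

print(fiber("A"), fiber("B"))
```

This makes the equivalence $(i) \Leftrightarrow (ii)$ plausible: a natural transformation is an isomorphism exactly when it induces bijections on these fibers.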
While propagating the satellite motion I faced a strange effect. Without the atmosphere the results (x, y, z coordinates in meters and vx, vy, vz velocities in meters per second) are [3.30971e5, -6.55755e6, 2.57717e6, -1261.46, -2762.24, -6871.88], but including the atmospheric drag effect I obtain [-2.90003e5, -7.04404e6, -8.97772e5, -1268.31, 996.889, -7305.6]. Notice that the difference between the satellite positions is 300+ km. Can the atmospheric drag effect be so big, or is it an implementation and modelling issue? The formula for atmospheric drag was taken from the GMAT documentation:$$a=\frac{1}{2}\rho v^2_{rel}\frac{C_d A}{m_s}\hat{v}_{rel}$$ and the barometric formula (from Wikipedia) to calculate the density:$$\rho=\rho_b\exp\left[ \frac{-g_0M(h-h_0)}{ RT_b}\right ].$$ The $h_0$, $T_b$ and $\rho_b$ constants were taken from http://www.braeunig.us/space/atmos.htm. Here is the code:

using DifferentialEquations

jd = Dates.datetime2julian(DateTime(2100,01,01,0,0,0)) * 86400
jd2 = Dates.datetime2julian(DateTime(2100,01,11,0,0,0)) * 86400
y = [-976.3107644649057E+03, -4835.627052558522E+03, -5031.728586125443E+03,
     -0.7944031487816871E+03, 5.474532271429767E+03, -5.094496750907486E+03]
GMe = BigFloat(398600.4415E+9)
req = BigFloat(6378136.3)  # m
mass = 850
A = 15.0                   # m^2
rho_b = 6.65E-14           # kg/m^3
g = 9.80665                # m/s^2
M = 0.0289644              # kg/mol
h_0 = 650000               # m
R = 8.3144598              # N·m/(mol·K)
T_b = 1011.5365            # K
omega = [0.0, 0.0, 7.292115078468551e-5]
v_wind = [0.0, 0.0, 0.0]
C_d = 0.47                 # from GMAT

# × : vector (cross) product
function atmospheric_drag(y)
    v = [y[4], y[5], y[6]]
    r = [y[1], y[2], y[3]]
    h = sqrt(y[1]^2 + y[2]^2 + y[3]^2) - req
    rho = rho_b*exp((-g*M*(h - h_0))/(R*T_b))
    #println("rho = $rho")
    v_rel = v - omega × r + v_wind
    v_rel2 = v_rel[1]^2 + v_rel[2]^2 + v_rel[3]^2
    a = -0.5*rho*(C_d*A/mass)*v_rel2*v_rel
end
println(atmospheric_drag(y))

function f2(dy, y, p, t)
    date = t/86400
    re3 = (y[1]^2 + y[2]^2 + y[3]^2)^(3/2)
    atm_dr = atmospheric_drag(y)
    dy[1] = y[4]
    dy[2] = y[5]
    dy[3] = y[6]
    dy[4] = - GMe*y[1]/re3 - atm_dr[1]
    dy[5] = - GMe*y[2]/re3 - atm_dr[2]
    dy[6] = - GMe*y[3]/re3 - atm_dr[3]
end

prob = ODEProblem(f2, y, (jd, jd2))
solution = solve(prob, Vern9(), abstol=1e-13, reltol=1e-13)
sol = solution[end]
println(sol)

# order 0 degree 0, no Sun, no Moon
GMATresult = [330.97011851316, -6557.5472439174, 2577.1624246640,
              -1.2614569607713, -2.7622388770690, -6.8718847034529]
a = 0
println("Error:")
for i = (1:6)
    b = sol[i] - GMATresult[i]*1000
    println("$b m")
    if i < 4
        a = a + b^2
    end
end
println("Distance error = $(sqrt(a)) m")

I've noticed that even small values like 1e-6 added in f2 affect the result significantly (70+ km). The values of the acceleration provided by the atmospheric_drag function are of order 1e-5. Maybe that is the reason?
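Independently of the Julia integration, the barometric density model can be checked in isolation. A Python sketch using the same constants as the question; the scale height R·T_b/(g·M) (about 29.6 km for this layer) tells you how quickly drag decays with altitude:

```python
import math

# Constants copied from the question (values for the 650 km base layer).
rho_b = 6.65e-14   # kg/m^3, density at the base altitude
g0 = 9.80665       # m/s^2
M = 0.0289644      # kg/mol
h0 = 650e3         # m, base altitude
R = 8.3144598      # N*m/(mol*K)
T_b = 1011.5365    # K

def density(h):
    """Barometric formula: rho = rho_b * exp(-g0*M*(h - h0)/(R*T_b))."""
    return rho_b * math.exp(-g0 * M * (h - h0) / (R * T_b))

H = R * T_b / (g0 * M)   # scale height: density drops by a factor e per H
print(H, density(h0), density(h0 + H) / density(h0))
```

If the propagated altitude differs much from h0, the density (and hence the drag acceleration) changes by orders of magnitude, which is one thing worth ruling out before blaming the integrator.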
Short Answer The reason is that the expected decrease in the Gini index for splitting on a categorical variable with $L \geq 3$ levels grows in $L$. As a result, the algorithm is biased to choose a variable with a high number of levels - this "maximises" the information gain. In the case of a binary classification problem, if the different levels provide no information about the response, the expected decrease in Gini impurity is:\begin{align}\mathbb{E}(\hat{\Delta}_{Gini} \vert N = n) = \frac{2p_1(1-p_1)(L-1)}{n},\end{align}where $p_1$ is the probability of choosing class 1 (precise definition below). Therefore, as $L$ increases it is tempting to pick a multilevel variable for a split, despite it not giving any information! Proof I will not show every step, but enough for the reader to fill in the gaps. This answer is a copy of my solution to an exercise from my machine learning module; the exercise can be accessed here. In case the document is taken down in the future, I will write out the notation etc. below. Note that the proof will be in the case of binary classification, i.e. $Y \in \{0, 1\}$. We are at node $t$ in a decision tree and would like to split it based on Gini impurity. Consider a categorical variable $C$ with $L \geq 3$ levels, i.e. $x^{(C)} \in \{c_1, c_2, \ldots, c_L\}$. Notation 1) For every data point reaching node $t$, $X_i, Y_i$, where $i \in \{1, \ldots, N \}$, denote: \begin{align} p_k &= \mathbb{P}(Y_i = k), \ \ \ k = 0, 1; \\ q_l &= \mathbb{P}(X_i^{(C)} = c_l), \ \ \ l = 1, \ldots, L; \\ p_{k \vert l} &= \mathbb{P}(Y_i = k \vert X_i^{(C)} = c_l), \ \ \ k = 0, 1; \ l = 1, \ldots, L.\end{align} Note that the case where variable $C$ provides no information for the response corresponds to $p_k = p_{k \vert l}$ for all l. We do not have the above probabilities, so we need to estimate them, e.g. using maximum likelihood estimation (MLE). The following notation will help us to do that.
2) For the same $i$ as in point 1): \begin{align}N^k &= \vert \{i: Y_i = k\} \vert, \ \ \ k = 0, 1; \\N_l &= \vert \{i: X_i^{(C)} = c_l\} \vert, \ \ \ l = 1, \ldots, L; \\N_{k\vert l} &= \vert \{i: Y_i = k \ \text{and} \ X_i^{(C)} = c_l\} \vert, \ \ \ k = 0, 1; \ l = 1, \ldots, L.\end{align} We assume that the $N$ data vectors reaching node $t$ are independent. First step Note that the following can be shown, for example, by looking at the PMFs of the random variables:\begin{align} N^{k} \vert N = n &\sim Binomial(n, p_k) \\ N_{l} \vert N = n &\sim Binomial(n, q_l) \\ N_{k \vert l} \vert N_l = n_l &\sim Binomial(n_l, p_{k \vert l})\end{align} From the above, we deduce that the MLE plugin estimates for the probabilities are \begin{align}\hat{p}_k &= \frac{N^k}{N} \\\hat{q}_l &= \frac{N_l}{N} \\\hat{p}_{k \vert l} &= \frac{N_{k \vert l}}{N_l}\end{align} Second step The population Gini impurity is given by $2p_1(1-p_1)$. There is a two in front because the impurity is symmetric with respect to the class for a binary classification problem. If we split at node $t$ using the categorical variable $C$ with $L$ levels, the resulting change in Gini impurity will be\begin{align}\Delta_{Gini} = 2p_1(1-p_1) - 2\sum\limits_{l = 1}^Lq_lp_{1 \vert l}(1 - p_{1 \vert l})\end{align} Plugging in the MLE estimates gives us the estimated change in Gini impurity $\hat{\Delta}_{Gini}$. We would like to find its expected value $\mathbb{E}(\hat{\Delta}_{Gini} \vert N = n)$ and see how it depends on $L$. Third step To calculate the expected value we will use the distributional relationships found in the first step. \begin{align}\mathbb{E}(\hat{\Delta}_{Gini} \vert N = n) = 2\mathbb{E}(\hat{p}_1(1-\hat{p}_1) \vert N = n) - 2\sum\limits_{l =1}^{L} \mathbb{E}(\hat{q}_l \hat{p}_{1\vert l} (1 - \hat{p}_{1\vert l}) \vert N = n)\end{align} We shall treat the two terms separately.
For the first one, \begin{align}2\mathbb{E}(\hat{p}_1(1-\hat{p}_1) \vert N = n) &= \frac{2}{n}\mathbb{E}(N^1 \vert N = n) - \frac{2}{n^2}\left(\mathbb{E}(N^1(N^1 - 1) \vert N = n) + \mathbb{E}(N^1 \vert N = n)\right) \\&= 2p_1 - \frac{2}{n}p_1(1 + (n-1)p_1) \\&= \frac{2}{n}p_1(1-p_1)(n-1)\end{align}For the second one, let's look at the expected value\begin{align}\mathbb{E}(\hat{q}_l \hat{p}_{1\vert l} (1 - \hat{p}_{1\vert l}) \vert N = n) &= \frac{1}{n}\mathbb{E}(N_{1 \vert l}(1 - \frac{N_{1 \vert l}}{N_l}) \vert N = n) \\&= \frac{1}{n} \sum\limits_{i=1}^{n} \mathbb{E}(N_{1 \vert l}(1 - \frac{N_{1 \vert l}}{N_l}) \vert N_l = i) \ \mathbb{P}(N_l = i \vert N = n) \\&= \frac{1}{n} \sum\limits_{i=1}^{n} \left( \mathbb{E}(N_{1 \vert l} \vert N_l = i) - \frac{1}{i} \mathbb{E} (N_{1 \vert l}^2 \vert N_l = i) \right) \ \mathbb{P}(N_l = i \vert N = n) \\&= \frac{1}{n} \sum\limits_{i=1}^{n} \left( i p_{1 \vert l} - p_{1 \vert l}(1 + (i -1)p_{1 \vert l}) \right) \ \mathbb{P}(N_l = i \vert N = n) \\&= \frac{1}{n} \sum\limits_{i=1}^{n} p_{1 \vert l}(1 - p_{1 \vert l})(i-1) \ \mathbb{P}(N_l = i \vert N = n) \\&= \frac{1}{n} p_{1 \vert l}(1 - p_{1 \vert l}) \left( \sum\limits_{i=1}^{n} i \ \mathbb{P}(N_l = i \vert N = n) - 1 \right) \\&= \frac{1}{n} p_{1 \vert l}(1 - p_{1 \vert l}) \left(nq_l - 1 \right)\end{align} So, combining the results, we get that: \begin{align}\mathbb{E}(\hat{\Delta}_{Gini} \vert N = n) = \frac{2}{n}p_1(1-p_1)(n-1) + \frac{2}{n}\sum\limits_{l = 1}^L p_{1 \vert l}(1 - p_{1 \vert l}) \left(1 - nq_l \right)\end{align} In the case where the categorical variable has no effect on the response, we substitute $p_{1 \vert l} = p_1$, which gives us the result stated at the beginning.
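As a sanity check, the null-case formula can be verified by simulation. Below is a minimal sketch in plain Python (the function name is mine, and uniform level probabilities $q_l = 1/L$ are assumed):

```python
import random

def mean_gini_gain(n, L, p1, trials=4000, seed=0):
    """Average estimated Gini gain at a node of n points when splitting on a
    categorical variable with L uniform, uninformative levels (p_{1|l} = p1)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        xs = [rng.randrange(L) for _ in range(n)]              # levels, q_l = 1/L
        ys = [1 if rng.random() < p1 else 0 for _ in range(n)]
        p1_hat = sum(ys) / n
        gain = 2 * p1_hat * (1 - p1_hat)                       # parent impurity
        for l in range(L):
            nl = xs.count(l)
            if nl == 0:
                continue                                       # empty child node
            p1l = sum(y for x, y in zip(xs, ys) if x == l) / nl
            gain -= (nl / n) * 2 * p1l * (1 - p1l)             # weighted child impurity
        total += gain
    return total / trials

# Theory: 2*p1*(1-p1)*(L-1)/n, e.g. 0.02 for L=3 and 0.07 for L=8 (n=50, p1=1/2)
print(mean_gini_gain(50, 3, 0.5), mean_gini_gain(50, 8, 0.5))
```

The estimated gain grows roughly linearly in $L$ even though the variable is pure noise, which is exactly the selection bias described above.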
Operations with Numbers in Scientific Notation 04:55 minutes Video Transcript Mr. and Mrs. Fox, who live in a village, need a new home. So they hire a real estate agent to help them. The agent only has listings of foxholes that are located in a nearby town. But the Foxes don't want to live in town because Mr. Fox thinks it will be crowded. He's an artist, and he needs lots of space to work on his paintings, but with the addition of many baby foxes, they're practically bursting out of their current situation, so they can't be too picky. The real estate agent assures them that, although the new foxhole is located in town, it's really nice and very family friendly. The real estate agent also tells them that the village where they currently live has a population density of 750 foxes per mi² and is much more crowded than the town. Mr. and Mrs. Fox are not so easily convinced of this, so they ask exactly how many foxes live in their area. Calculating the Population To calculate this, we have to multiply the density by the area. First, write the population density in scientific notation, and then multiply the density by the area. To do this, we can apply the Commutative Property and rearrange the equation. Ok, that looks easier to manage. First calculate the product of 10² x 10³. Since the bases are the same, we can just add the exponents, so that's equal to 10⁵. Next, multiply the coefficients together. 7.5(3.2) = 24, so you can write the equation as 24 x 10⁵, or you can write it in scientific notation as 2.4 x 10⁶, or in standard form as 2,400,000. 2,400,000 – that's a lot of foxes. Calculating the Population Density To persuade the foxes to buy the foxhole that's located in the town, the real estate agent bombards them with facts and figures. He tells them the town has an area of 3.8 x 10² mi².
Although he's sold apartments to 95 foxes so far, there's still lots of space, but they should move quickly before the best locations are sold. He tells the Foxes that to understand what a good deal the new foxhole is, they should calculate the population density of the town. Let's help the Foxes. To do this, first write the population in scientific notation: 95 = 9.5 x 10¹. To find the population density, divide the population by the area. Now rewrite the equation to make it easier to solve. That's better. First simplify the exponents. When you divide powers with the same base, subtract the exponents; this means subtract the exponent in the denominator from the exponent in the numerator: 1 - 2. Now, divide the coefficients. Now we can write the equation in scientific notation: 2.5 x 10⁻¹. Since the exponent is negative, to write the number in standard form, move the decimal point one place to the left. The population density of the nearby town is 0.25 foxes per square mile. Convinced the town is a good alternative to the village, the fox family packs up and moves. Although they have lots of space, the real estate agent forgot to mention just one small, but very important fact... Operations with Numbers in Scientific Notation Exercises Would you like to apply what you have learned? With the exercises for the video Operations with Numbers in Scientific Notation you can review and practice it. Calculate the fox population of Mr. and Mrs. Fox's village. Hints The commutative property of multiplication states that you can change the order of multiplication: $a\times b=b\times a$. An example of multiplying powers with the same base is: $2^3\times 2^4=2^{(3+4)}=2^7$ The format of scientific notation is given by $n\times 10^a$, where $1\le n<10$, $n$ is the coefficient, $10$ the base, and $a$ the exponent. Solution Mr. and Mrs. Fox live in a village and they need a new home for their large family.
They hire a real estate agent to help them, and the agent tries to convince them to move into town! But they aren't convinced that they should, as it sounds like the town will be too crowded for them. The agent tells them that their current village has a fox density of 750 foxes per square mile. To calculate the fox population of their village we have to multiply the fox density by the area of the village. First we write the fox density and the area of their village in scientific notation: The density: $7.5\times 10^2$ foxes / mi$^2$ The area: $3.2\times 10^3$ mi$^2$ $(7.5\times 10^2)\times (3.2\times 10^3)$. Next we use the commutative property to rearrange the multiplication: $(7.5\times 10^2)\times (3.2\times 10^3)=7.5\times 3.2\times 10^2\times 10^3$. The decimal powers have the same base, so we add the exponents: $7.5\times 3.2\times 10^2\times 10^3=7.5\times 3.2\times 10^{(2+3)}=7.5\times 3.2\times 10^5$. Finally we multiply $7.5\times 3.2=24=2.4\times 10$ and add the exponents of the decimal powers once again to get $7.5\times 3.2\times 10^5=2.4\times 10\times 10^5=2.4\times 10^6$. We can then conclude that two million four hundred thousand foxes are living in their village. Decide which numbers are written in scientific notation. Hints Keep the definition of scientific notation above in mind. The number $2016$ written in scientific notation is $2016=2.016\times 10^3$. Solution The fox density is given by $750$ foxes / mi$^2$. To write $750$, or $750.0$, in scientific notation, we move the decimal point two places to the left to get $750=7.5\times 10^2$. Remember that we do this as $n$ must be greater than or equal to $1$ and less than $10$. The area of the village, $3.2\times 10^3$, is already given in scientific notation. We can also write $7.5\times 3.2=24$ in scientific notation: $24=2.4\times 10^1$. Find the fox density of the town.
Hints The format of scientific notation is given by $n\times 10^a$, where $1\le n<10$, $n$ is the coefficient, $10$ the base, and $a$ the exponent. To divide powers with the same base, just subtract the exponents. Solution With the number of foxes and the area of the town, we can determine the density by dividing the number of foxes by the area. $~$ First, write the number of foxes in scientific notation: $\large {95=9.5\times 10^1}$. $~$ Then we divide the number of foxes by the area to get ${\large \frac{9.5\times 10^1}{3.8\times 10^2}=\frac{9.5}{3.8}\times\frac{10^1}{10^2}}$. $~$ To divide two decimal powers, we subtract the exponents: ${\large \frac{9.5}{3.8}\times\frac{10^1}{10^2}=\frac{9.5}{3.8}\times 10^{(1-2)}=\frac{9.5}{3.8}\times 10^{-1}}$. $~$ Lastly we divide $\frac{9.5}{3.8}=2.5$, and find that the density of the town is $2.5\times10^{-1}=0.25$ foxes/mi$^2$. Complete the following operations using scientific notation. Hints Keep in mind that $1\le n<10$. To multiply (or divide) decimal powers, just add (or subtract) the exponents. Solution To multiply (or divide) numbers in scientific notation, first group like terms and then multiply (or divide) the like terms. To multiply (or divide) the decimal powers we add (or subtract) the exponents. Remember: if the product of the coefficients is not greater than or equal to $1$ and less than $10$, then we have to also write this number in scientific notation.
$~$ Problem 1: $\begin{array}{rcl} 1.2\times 10^3\times 3.2\times 10^4&=&1.2\times 3.2\times 10^3\times 10^4\\ &=&3.84\times 10^7 \end{array}$ $~$ Problem 2: $\begin{array}{rcl} {\large \frac{1.44\times 10^{\large 4}}{1.2\times 10^{\large 5}}}&=& {\large \frac{1.44}{1.2}}\times{\large \frac{10^{\large 4}}{10^{\large 5}}}\\ &=&1.2\times 10^{-1} \end{array}$ $~$ Problem 3: $\begin{array}{rcl} 3.55\times 10^2\times 4.2\times 10^3&=&3.55\times 4.2\times 10^2\times 10^3\\ &=&14.91\times 10^5\\ &=&1.491\times 10^6 \end{array}$ $~$ Problem 4: $\begin{array}{rcl} {\large \frac{6.75\times 10^{\large 7}}{1.5\times 10^{\large 5}}}&=&{\large \frac{6.75}{1.5}}\times{\large \frac{10^{\large 7}}{10^{\large 5}}}\\ &=&4.5\times 10^2 \end{array}$ Determine the fox population in Norway. Hints The commutative property of multiplication states that you can change the order of multiplication: $a\times b=b\times a$. To multiply two numbers in scientific notation, just add the exponents of the decimal powers. Solution Let's figure out the population of Mr. Arctic-Fox's village in Norway. Mr. Arctic-Fox says that 15 foxes per mi$^2$ live in an area of $4.5\times 10^3$ mi$^2$. We have to multiply the fox density by the area of the village, with both fox density and area expressed in scientific notation. We have: $(1.5\times 10^1)\times (4.5\times 10^3)$. Using the commutative property of multiplication, we can rearrange our calculation to get $(1.5\times 10^1)\times (4.5\times 10^3)=1.5\times 4.5\times 10^1\times 10^3$. We can multiply the decimal powers with the same base by adding the exponents: $1.5\times 4.5\times 10^1\times 10^3=6.75\times 10^4$. So the population of Mr. Arctic-Fox's village is $6.75\times 10^4=67500$. Examine the different population densities. Hints You have to divide the population by the area to get the density.
The format of scientific notation is given by $n\times 10^a$, where $1\le n<10$, $n$ is the coefficient, $10$ the base, and $a$ the exponent. To divide decimal powers just subtract the exponents. Solution The population density is the population per unit of area. Thus we have to divide the population by the area each time: $~$ Wolves: $\begin{array}{rclll} \frac{240}{300}&=&\frac{2.4\times 10^2}{3.0\times 10^2}\\ &=&\frac{2.4}{3.0}\times\frac{10^2}{10^2}&|&\text{subtract the exponents}\\ &=&0.8\times 10^{(2-2)}\\ &=&0.8\times 10^0\\ &=&8.0\times 10^{-1} \end{array}$ Deer: $\begin{array}{rclll} \frac{230}{400}&=&\frac{2.3\times 10^2}{4.0\times 10^2}\\ &=&\frac{2.3}{4.0}\times\frac{10^2}{10^2}&|&\text{subtract the exponents}\\ &=&0.575\times 10^{(2-2)}\\ &=&0.575\times 10^0\\ &=&5.75\times 10^{-1} \end{array}$ Boars: $\begin{array}{rclll} \frac{130}{125}&=&\frac{1.3\times 10^2}{1.25\times 10^2}\\ &=&\frac{1.3}{1.25}\times\frac{10^2}{10^2}&|&\text{subtract the exponents}\\ &=&1.04\times 10^{(2-2)}\\ &=&1.04\times 10^0 \end{array}$ Rabbits: $\begin{array}{rclll} \frac{420}{300}&=&\frac{4.2\times 10^2}{3.0\times 10^2}\\ &=&\frac{4.2}{3.0}\times\frac{10^2}{10^2}&|&\text{subtract the exponents}\\ &=&1.4\times 10^{(2-2)}\\ &=&1.4\times 10^0 \end{array}$
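The procedure used in all of these problems, namely multiply (or divide) the coefficients, add (or subtract) the exponents, then renormalize so that $1\le n<10$, can be sketched in plain Python (the function names are mine):

```python
def normalize(coeff, exp):
    """Renormalize coeff * 10**exp so that 1 <= |coeff| < 10."""
    while abs(coeff) >= 10:
        coeff /= 10
        exp += 1
    while 0 < abs(coeff) < 1:
        coeff *= 10
        exp -= 1
    return coeff, exp

def multiply(c1, e1, c2, e2):
    # multiply the coefficients, add the exponents, then renormalize
    return normalize(c1 * c2, e1 + e2)

def divide(c1, e1, c2, e2):
    # divide the coefficients, subtract the exponents, then renormalize
    return normalize(c1 / c2, e1 - e2)

# Village population: (7.5 x 10^2) * (3.2 x 10^3) -> coefficient 2.4, exponent 6
print(multiply(7.5, 2, 3.2, 3))
# Town density: (9.5 x 10^1) / (3.8 x 10^2) -> coefficient 2.5, exponent -1
print(divide(9.5, 1, 3.8, 2))
```

The two example calls reproduce the village population ($2.4\times 10^6$ foxes) and the town density ($2.5\times 10^{-1}$ foxes per square mile) from the lesson.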
This question already has an answer here: I wish to use protrusion and expansion from microtype as I like the look you get with them. I use amsthm and typeset theorems in italic as usual (with \theoremstyle{plain}). With microtype enabled, LaTeX fails to break some lines where the last item on the line is something in italic. I get three failures to break nicely with my included example. I have microtype 2.5 and use lualatex with Latin Modern Roman. I have a working example, though it's not very minimal. If I remove much more, the ancillary formatting changes hide what I suspect is a bug. The issue doesn't seem to be with the fact I'm using hyphenated words like Erdos-Hajnal, since the line-break failure also occurs on other lines, and problems even occur in my bibliography where some portions of the reference text are in italics. The problem also occurs for me if I remove the \usepackage{fontspec} line. If I remove that line and compile with pdflatex there are no problems. (Fairly) Minimal Example:

\documentclass[a4paper,twoside]{scrartcl}
\usepackage[UKenglish]{babel}
\usepackage{amsmath,amsthm}
\usepackage{fontspec,blindtext}
\usepackage[babel,protrusion=true,expansion]{microtype}

\theoremstyle{plain}
\newtheorem{thm}{Theorem}[section]
\newtheorem{prop}[thm]{Proposition}
\newtheorem{conj}[thm]{Conjecture}

\begin{document}
\begin{conj}
For all graphs $H$ there is an $\epsilon > 0$ such that for all sufficiently large $n$, and for all $G\in\mathcal{G}^n$ either $H$ is an induced subgraph of $G$, or $G$ contains a homogeneous set of size at least $n^\epsilon$.
\end{conj}
\begin{prop}
A short dummy paragraph. Let $\epsilon_2=\delta\epsilon_1$, where $\delta<1/(2k+1)$. Then there exists $n_0$ such that all graphs on $n\geq n_0$ vertices which do not contain homogeneous sets of size $n^{\epsilon_2}$ have $H$ as an induced subgraph.
That is, $H$ has the Erd\H{o}s-Hajnal property.
\end{prop}
\begin{thm}
Let the graphs $H$ and $F$ have the Erd\H{o}s-Hajnal property, and let $V(H) = \{v_1,\dotsc,v_k\}$. Then $H(F, v_2,\dotsc, v_k)$ obtained by substituting $v_1$ for $F$ also has the Erd\H{o}s-Hajnal property.
\end{thm}
\blindtext
\end{document}

EDIT: this is clearly nothing to do with amsmath, and a simple way to reproduce a similar italic line-breaking problem is with:

% !TEX TS-program = lualatex
\documentclass{scrartcl}
\usepackage{fontspec}
\usepackage[UKenglish,latin]{babel}
\usepackage{microtype}
\usepackage{lipsum}
\begin{document}
\lipsum[1-2]
\textit{\lipsum[1-2]}
\end{document}
Imagine an empty infinite universe with just a single resting electron, and ask what the configuration of the electric field in such an empty universe would be. The standard answer is $E\propto 1/r^2$. However, if we calculate the energy ($\propto E^2$) of such an electric field, the singularity at $r=0$ gives $$ \int_{0}^\infty 4\pi r^2 r^{-4} dr=\infty $$ In contrast, we know well that in reality this energy should be at most 511 keV: the energy released while annihilating with a positron. We would get 511 keV if integrating from $r_0 \approx 1.4$ fm instead of zero - we need a deformation of the electric field on the femtometre scale so that the energy of the electric field alone does not exceed the electron's mass. It is vaguely said that this issue is repaired by QED (how exactly?), but a basic question still remains: what, objectively, would the electric field of a single resting electron in an empty universe be? I have come across two attempts to solve this fundamental problem: That vacuum polarization reduces the electric field near the singularity (is it satisfactory?). In soliton particle models (slides) we have $E\propto q(r)/r^2$, where the effective charge $q(r)$ is practically constant for large $r$, but $q(r)\to 0$ as $r\to 0$ to prevent infinite energy. This is done by activating the Higgs potential - a kind of deformation of electromagnetism into the weak/strong interaction that regularizes the infinite energy. This kind of effect is observed as running coupling. Can that vacuum polarization reduce the energy of a point charge below 511 keV? Or maybe there is some other reasonable solution to this problem? Clarification: I see nobody defends the vacuum polarization explanation, but there are lots of "impossibility claims" and avoiding answers, so let me briefly elaborate on the solution to this problem suggested by topological solitons. Requiring unit vectors, the $u(x)=x/|x|$ configuration would also have infinite energy due to the discontinuity in the center.
As in the diagram, it is regularized by leaving the minimum of the Higgs potential (unit vectors) - going down to the zero vector in the center, which allows such a topological charge to be realized with only finite energy. To recreate electromagnetism in 3D with topological charges as electric charges, we can use the Gauss-Bonnet theorem in place of the Gauss law: it says that integrating curvature over a closed surface gives the topological charge inside this surface. So, interpreting the curvature of a deeper field as the electric field (analogously B), and using the standard Lagrangian for it, we can recreate electromagnetism with two issues corrected: the Gauss law allows only integer (topological) charge (built-in charge quantization), and charges contain only finite energy - some article. Is there a problem with such an explanation of the finite energy of a charge, or maybe there are some better explanations?
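For reference, the $r_0 \approx 1.4$ fm figure quoted above is just the classical estimate: the field energy outside radius $r_0$ is $e^2/(8\pi\varepsilon_0 r_0)$, and equating it with $m_e c^2$ gives half the classical electron radius. A quick numerical check (plain Python; the CODATA values are typed in by hand):

```python
import math

# CODATA values (SI units)
e    = 1.602176634e-19     # elementary charge, C
eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
m_e  = 9.1093837015e-31    # electron mass, kg
c    = 2.99792458e8        # speed of light, m/s

# Field energy outside radius r0:  U(r0) = e^2 / (8*pi*eps0*r0).
# Setting U(r0) = m_e * c^2 and solving for r0:
r0 = e**2 / (8 * math.pi * eps0 * m_e * c**2)
print(r0)   # ~1.41e-15 m, i.e. about 1.4 fm (half the classical electron radius)
```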
Markdown help Linebreaks End a line with two spaces to add a <br/> linebreak: How do I love thee? Let me count the ways Italics and Bold *This is italicized*, and so is _this_. **This is bold**, and so is __this__. Use ***italics and bold together*** if you ___have to___. You can also select text and press CTRL+ I or CTRL+ B to toggle italics or bold respectively. Links Basic Links There are three ways to write links. Each is easier to read than the last: Here's an inline link to [Google](http://www.google.com/). Here's a reference-style link to [Google][1]. Here's a very readable link to [Yahoo!][yahoo]. [1]: http://www.google.com/ [yahoo]: http://www.yahoo.com/ You can also select text and press CTRL+ L to make it a link, or press CTRL+ L with no text selected to insert a link at the current position. The link definitions can appear anywhere in the document -- before or after the place where you use them. The link definition names [1] and [yahoo] can be any unique string, and are case-insensitive; [yahoo] is the same as [YAHOO]. Advanced Links Links can have a title attribute, which will show up on hover. Title attributes can also be added; they are helpful if the link itself is not descriptive enough to tell users where they're going. Here's a <span class="hi">[poorly-named link](http://www.google.com/ "Google")</span>. Never write "[click here][^2]". Visit [us][web]. [^2]: http://www.w3.org/QA/Tips/noClickHere (Advice against the phrase "click here") [web]: https://ai.stackexchange.com/ "Artificial Intelligence Stack Exchange" You can also use standard HTML hyperlink syntax. <a href="http://example.com" title="example">example</a> Bare URLs We have modified our Markdown parser to support "naked" URLs (in most but not all cases -- beware of unusual characters in your URLs); they will be converted to links automatically: I often visit http://example.com. Force URLs by enclosing them in angle brackets: Have you seen <https://example.com>? 
URLs can be relative or full. Headers Underline text to make the two top-level headers, <h1> and <h2>: Header 1 ======== Header 2 -------- You can also select text and press CTRL+ H to step through the different heading styles. The number of = or - signs doesn't matter; one will work. But using enough to underline the text makes your titles look better in plain text. Use hash marks for several levels of headers: # Header 1 # ## Header 2 ## ### Header 3 ### The closing # characters are optional. Horizontal Rules Insert a horizontal rule <hr/> by putting three or more hyphens, asterisks, or underscores on a line by themselves: --- Rule #1 --- Rule #2 ******* Rule #3 ___ Using spaces between the characters also works: Rule #4 - - - - You can also press CTRL+ R to insert a horizontal rule. Simple lists A bulleted <ul> list: - Use a minus sign for a bullet + Or plus sign * Or an asterisk A numbered <ol> list: 1. Numbered lists are easy 2. Markdown keeps track of the numbers for you 7. So this will be item 3. You can also select text and press CTRL+ U or CTRL+ O to toggle a bullet or numbered list respectively. A double-spaced list: - This list gets wrapped in <p> tags - So there will be extra space between items Advanced lists: Nesting To put other Markdown blocks in a list, just indent four spaces for each nesting level: 1. Lists in a list item: - Indented four spaces. * indented eight spaces. - Four spaces again. 2. Multiple paragraphs in a list item: It's best to indent the paragraphs four spaces You can get away with three, but it can get confusing when you nest other things. Stick to four. We indented the first line an extra space to align it with these paragraphs. In real use, we might do that to the entire list so that all items line up. This paragraph is still part of the list item, but it looks messy to humans.
So it's a good idea to wrap your nested paragraphs manually, as we did with the first two. 3. Blockquotes in a list item: > Skip a line and > indent the >'s four spaces. 4. Preformatted text in a list item: Skip a line and indent eight spaces. That's four spaces for the list and four to trigger the code block. Simple blockquotes Add a > to the beginning of any line to create a blockquote. > The syntax is based on the way email programs > usually do quotations. You don't need to hard-wrap > the paragraphs in your blockquotes, but it looks much nicer if you do. Depends how lazy you feel. You can also select text and press CTRL+ Q to toggle a blockquote. Advanced blockquotes: Nesting To put other Markdown blocks in a blockquote, just add a > followed by a space: > The > on the blank lines is optional. > Include it or don't; Markdown doesn't care. > > But your plain text looks better to > humans if you include the extra `>` > between paragraphs. Blockquotes within a blockquote: > A standard blockquote is indented > > A nested blockquote is indented more > > > > You can nest to any depth. Lists in a blockquote: > - A list in a blockquote > - With a > and space in front of it > * A sublist Preformatted text in a blockquote: > Indent five spaces total. The first > one is part of the blockquote designator. Images Images are exactly like links, but they have an exclamation point in front of them: ![Valid XHTML](http://w3.org/Icons/valid-xhtml10). You can also press CTRL+ G to insert an image. The word in square brackets is the alt text, which gets displayed if the browser can't show the image. Be sure to include meaningful alt text for screen-reading software. Just like links, images work with reference syntax and titles: This page is ![valid XHTML][checkmark]. [checkmark]: http://w3.org/Icons/valid-xhtml10 "What are you smiling at?"
Note: Markdown does not currently support the shortest reference syntax for images: Here's a broken ![checkmark]. But you can use a slightly more verbose version of implicit reference names: This ![checkmark][] works. The reference name is also used as the alt text. You can also use standard HTML image syntax, which allows you to scale the width and height of the image. <img src="http://example.com/sample.png" width="100" height="100"> URLs can be relative or full. Preformatted Text Indent four spaces to create a block of preformatted text displayed in a monospaced font: 200 ml Milk 1 teaspoon cocoa 1 teaspoon sugar The first four spaces will be stripped off, but all other whitespace will be preserved. Alternatively, surround the text by three or more backticks or tildes: ``` /\ o / \ -|- |__| / \ ``` ~~~ Answer! ^ | +----- Question? ~~~ Markdown and HTML are ignored within a code block: You will see *stars* here, but no italics. Monospace Spans Use backticks to create an inline span of preformatted text: If your browser's location bar starts with `https://`, then the connection is encrypted. (The backtick key is in the upper left corner of most keyboards.) The text between the backticks will be displayed in a monospaced font, just like the preformatted blocks. Inline HTML If you need to do something that Markdown can't handle, use HTML. Note that we only support a very strict subset of HTML! To reboot your computer, press <kbd>ctrl</kbd>+<kbd>alt</kbd>+<kbd>del</kbd>. Markdown is smart enough not to mangle your span-level HTML: <b>Markdown works *fine* in here.</b> Block-level HTML elements have a few restrictions: They must be separated from surrounding text by blank lines. The begin and end tags of the outermost block element must not be indented. Markdown can't be used within HTML blocks. <pre> You can <em>not</em> use Markdown in here. </pre> Need More Detail? Visit the official Markdown syntax reference page.
Stack Exchange additions The following sections describe some additional features for text formatting that aren't officially part of Markdown. Tags To talk about a tag on this site, like-this, use See the many questions tagged [tag:elephants] to learn more. The tag will automatically be linked to the corresponding tag page. Spoilers To hide a certain piece of text and have it only be visible when a user moves the mouse over it, use the blockquote syntax with an additional exclamation point: At the end of episode five, it turns out that >! he's actually his father. LaTeX Artificial Intelligence Stack Exchange uses MathJax to render LaTeX. You can use single dollar signs to delimit inline equations, and double dollars for blocks: The *Gamma function* satisfying $\Gamma(n) = (n-1)!\quad\forall n\in\mathbb N$ is defined via the Euler integral $$ \Gamma(z) = \int_0^\infty t^{z-1}e^{-t}dt\,. $$ Learn more: MathJax help. Comment formatting Comments support only bold, italic, code and links; in addition, a few shorthand links are available. _italic_ and **bold** text, inline `code in backticks`, and [basic links](http://example.com). Supported shorthand links: [meta] – link to the current site's Meta; link text is the site name (e.g. "Super User Meta"). Does nothing if the site doesn't have (or already is) a Meta site. [main] – like [meta], just the other way around. [edit] – link to the edit page for the post the comment is on, i.e. /posts/{id}/edit. Link text is "edit" (capitalization is respected). [tag:tagname] and [meta-tag:tagname] – link to the given tag's page. Link text is the name of the tag. meta-tag only works on meta sites. [help], [help/on-topic], [help/dont-ask], [help/behavior] and [meta-help] – link to frequently visited pages of the help center. Link text is "help center" (capitalization is respected). All links point to the main site. [tour] – link to the Tour page. Link text is "tour" (capitalization is respected).
[so], [pt.so], [su], [sf], [metase], [a51], [se] – link to the given site. Link text is the site name. [chat] – link to the current site's chat site, the link text being "{site name} Chat". [something.se] – link to something.stackexchange.com, if that site exists. Link text is the site name. Use [ubuntu.se] for Ask Ubuntu. Replying in comments The owner of the post you're commenting on will be notified of your comment. If you are replying to someone else who has previously commented on the same post, mention their username: @peter and @PeterSmith will both notify a previous commenter named “Peter Smith”. It is generally sufficient to mention only the first name of the user whose comment you are replying to, e.g. @ben or @marc. However you may need to be more specific if three people named Ben replied in earlier comments, by adding the first character of the last name, e.g. @benm or @benc Spaces are not valid in comment reply names, so don't use @peter smith, always enter it as @peters or @petersmith. If the user you're replying to has no natural first name and last name, simply enter enough characters of the name to make it clear who you are responding to. Three is the minimum, so if you're replying to Fantastico, enter @fan, @fant, or @fantastic. You can use the same method to notify any editor of the post, or – if this is the case – to the ♦ moderator who closed the question.
Contact Info Pure Mathematics University of Waterloo 200 University Avenue West Waterloo, Ontario, Canada N2L 3G1 Departmental office: MC 5304 Phone: 519 888 4567 x33484 Fax: 519 725 0160 Email: puremath@uwaterloo.ca Brett Nasserden, Department of Pure Mathematics, University of Waterloo "The valuative tree at infinity" To study the dynamics of a polynomial map on the complex affine plane we must be able to study the behavior of the mapping near the origin and near infinity. The study of the dynamics near a point leads to the notion of the valuative tree at the origin of the affine plane. To study the dynamics near infinity, we introduce a new but analogous object, the valuative tree at infinity, which will be the subject of this lecture. MC 5403 Sylvie Davies, Department of Pure Mathematics, University of Waterloo "Algebraic Approaches to State Complexity of Regular Operations" Wilson Poulter, Department of Pure Mathematics, University of Waterloo "NIP II" We finish up section 2.1 and begin section 2.2 of Simon's Guide to NIP theories. MC 5413 Mizanur Rahaman, Department of Pure Mathematics, University of Waterloo "Bisynchronous games and factorizable maps" Dino Rossegger, Department of Pure Mathematics, University of Waterloo "The complexity of Scott sentences of scattered linear orders -- Part II" Alex Iosevich, University of Rochester "Analytic, geometric and combinatorial aspects of the Falconer distance conjecture" **Note time and room change** Ehsaan Hossain, Department of Pure Mathematics, University of Waterloo "Introduction to the valuative tree" Adam Humeniuk, Department of Pure Mathematics, University of Waterloo "C*-covers of semicrossed products" Aasaimani Thamizhazhagan, Department of Pure Mathematics, University of Waterloo "On the structure of invertible elements in Fourier-Stieltjes algebras" Dino Rossegger, Department of Pure Mathematics, University of Waterloo "The complexity of Scott sentences of scattered linear orders" J.C.
Saunders, Ben-Gurion University of the Negev "Diophantine equations involving the Euler totient function" We deal with various Diophantine equations involving the Euler totient function. In particular, for $a,b,c,m,n\in\mathbb{N}$ with $m\geq 2$ we study the equations $\varphi(ax^m)=\frac{b\cdot n!}{c}$ and $\varphi\left(\frac{b\cdot n!}{c}\right)=ax^m$ where $\varphi(x)$ is the Euler totient function. We also deal with similar equations involving Lucas sequences of the first kind and second kind, generalising the work of Luca and Stanica. Adina Goldberg, Department of Pure Mathematics, University of Waterloo "This title contains information" How can we mathematically test the claim made in the title? In this talk, we will learn about Claude Shannon's entropy and determine if it gives us the best measure of informativeness. If you can picture the graph of a logarithm, you are well prepared. MC 5501 Shai Ben-David, School of Computer Science, University of Waterloo "A basic machine learning problem is independent of set theory"
I want to show: Let $N\geq 2$ and $2< q <2^\ast$. Then the embedding \begin{align} H^1_{\text{rad}}(\mathbb{R}^N)\hookrightarrow L^q(\mathbb{R}^N) \end{align} is compact. I was able to show that \begin{align}|u(r)|\leq C R^{\frac{-(N-1)}{2}} \|\nabla u\|_2^{\frac{1}{2}} \|u\|_2^{\frac{1}{2}}\leq \hat C R^{\frac{-(N-1)}{2}} \|u\|_{H^1} \end{align} holds almost everywhere for $r\geq R$. How can I conclude now? I think the idea would be to use the above estimate to be able to only restrict on a bounded domain and then use the usual Rellich-Kondrachov embedding. But how to make this rigorous? Do I need some cut-off? I also posted the question on MSE but received no answers. I would also be happy about any reference regarding this proof.
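One possible way to conclude, sketched under the assumption that $(u_k)$ is a bounded sequence in $H^1_{\text{rad}}(\mathbb{R}^N)$ (so no cut-off is needed; $C$ denotes a constant depending only on $N$ and the $H^1$-bound):

```latex
% Tail estimate from the radial decay bound, valid since q > 2:
\int_{|x|\ge R}|u_k|^q\,dx
  \;\le\; \Big(\sup_{|x|\ge R}|u_k|\Big)^{q-2}\int_{\mathbb{R}^N}|u_k|^2\,dx
  \;\le\; C\,R^{-\frac{(N-1)(q-2)}{2}}
  \;\xrightarrow[R\to\infty]{}\; 0 \quad\text{uniformly in }k.
% On the ball B_R, Rellich--Kondrachov gives
% H^1(B_R) \hookrightarrow\hookrightarrow L^q(B_R),
% so a weakly convergent subsequence converges strongly in L^q(B_R);
% combining the two pieces yields strong convergence in L^q(\mathbb{R}^N).
```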
Consider the LTI system with frequency response $$H(e^{j\omega}) = \frac{1-e^{-j2\omega}}{1+\frac{1}{2}e^{-j4 \omega}}, \quad -\pi < \omega < \pi$$ Determine the output $y[n]$ for all $n$ if the input $x[n]$ for all $n$ is $$x[n] = \sin \left(\frac{\pi n}{4}\right)$$ My attempt: $x[n]$ is an eigenfunction of the LTI system, so the output has the form $$y[n]=\left|H(e^{j \omega})\right|\sin\left(\frac{\pi n}{4} + \arg H(e^{j \omega})\right)$$ But I don't know how to determine the phase and modulus of the frequency response in this form. For example, I think that $$| 1 - e^{-j 2 \omega} | = \sqrt{(1-\cos(2 \omega))^2 + \sin^2(2 \omega)}$$ Doing the analogous computation for the denominator, I could not simplify the result. The answer: $$y[n]=2 \sqrt{2} \sin \left( \frac{\pi(n+1)}{4} \right) $$
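As a numerical check of the stated answer (a sketch of my own, not part of the original post), one can evaluate the frequency response at the input frequency $\omega = \pi/4$:

```python
import cmath
import math

# Evaluate H(e^{j*omega}) at omega = pi/4, the frequency of the input sinusoid.
w = math.pi / 4
H = (1 - cmath.exp(-2j * w)) / (1 + 0.5 * cmath.exp(-4j * w))

# e^{-j*pi/2} = -j and e^{-j*pi} = -1, so H = (1 + j)/0.5 = 2 + 2j, hence
# |H| = 2*sqrt(2) and arg(H) = pi/4, giving
#   y[n] = 2*sqrt(2) * sin(pi*n/4 + pi/4) = 2*sqrt(2) * sin(pi*(n+1)/4).
magnitude = abs(H)
phase = cmath.phase(H)
```

This confirms the book's closed form without needing to simplify the trigonometric expressions by hand.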
We know that a lightning rod or lightning conductor is a metal rod or metallic object mounted on top of an elevated structure and, if we look closely, most of them have a sharp point at the top. What is the reason for this sharp point? Suppose there is a charged cloud floating over your conductor. Then making your lightning conductor pointy at the edge would facilitate better discharge by setting up a high electric field. We will take a spherical approximation of the pointed end; then ${\sigma}=\frac{q}{4\pi r^2}$ is the surface charge density of the end. It has a very high surface charge density due to its small radius. Hence, in this case, the electric field over that small part will be $E=\frac{\sigma}{\epsilon_0}$, which is also very high. Thus, for a pointy metal rod, the electric field set up at the pointy end is high. Now, if for some reason the discharge of the cloud occurs, the charge will easily pass through the lightning conductor and be conducted to the ground. The artifact which you are trying to save is ultimately protected from damage. The point of the point is to increase the electric field near the point. Small-radius curves will have a higher local electric field, eventually creating a localized area where the field is greater than the dielectric strength of the air. This results in what I refer to as "micro-lightning." This microlightning discharges the air (or cloud) before the charge difference between the cloud and ground builds to the point where a very long path of breakdown is formed. The main idea is to prevent big lightning by having near-continual (during storms) microlightning. You can demonstrate this with a small Tesla coil or classroom Van de Graaff generator. Set up a situation with the coil or generator causing long (>10 cm) sparks. Then get a pointy object like a key or a nail, ground it, and bring it near the discharge. The spark will stop, but if you listen carefully, you can hear a crackle near the pointy object.
You won't get a large spark around the pointy object until you get close to the coil tip or generator sphere. Then remove the pointy object and the long sparks will start again. This article might help; it also states the following: studies have found that moderately rounded or blunt-tipped lightning rods act as marginally better strike receptors. Research conducted with actual lightning demonstrates that blunt lightning rods are as effective as pointed (tapered) rods. This has been codified in the National Fire Protection Association standard NFPA 780 - Installation of Lightning Protection Systems. Blunt tips also pose less risk to someone who might fall while working on a roof. If there is an excess charge in the atmosphere, as happens during thunderstorms, a substantial charge of the opposite sign can build up on this blunt end. As a result, when the atmospheric charge is discharged through a lightning bolt, it tends to be attracted to the charged lightning rod rather than to other nearby structures that could be damaged. (A conducting wire connecting the lightning rod to the ground then allows the acquired charge to dissipate harmlessly.) A lightning rod with a sharp end would allow less charge buildup and hence would be less effective.
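The $1/r^2$ scaling of the tip field described above can be sketched numerically (illustrative values; `tip_field` is a hypothetical helper of mine, not a standard function):

```python
import math

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def tip_field(q, r):
    """Field just outside a spherical tip of radius r holding charge q:
    E = sigma / eps0 = q / (4 * pi * eps0 * r**2)."""
    return q / (4 * math.pi * EPS0 * r ** 2)

# Halving the tip radius quadruples the local field, which is why a
# sharp (small-radius) point reaches the breakdown field of air first.
ratio = tip_field(1e-9, 0.001) / tip_field(1e-9, 0.002)
```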
Consider the following SDE $dV_t = rV_tdt +\sigma V_t dW_t + dJ_t$ where $J_t$ is a compound Poisson process with log-normal jump size $Y_i$. How am I supposed to calibrate this model to CDS spreads? The problem, of course, is that there doesn't exist an analytical formula for the survival probability function... [EDIT] Well, what I'd need is in fact the distribution of the first hitting time, that is, $\tau = \inf\{t>0 : V_t = x\}$ where $x$ is some barrier $\in \mathbb{R}$: $Pr\left\{V_0 e^{(r-(1/2) \sigma^2)t + \sigma W_t + \sum_{i=0}^{N(t)} Y_i} = x \right\} =\\Pr \left\{(r-(1/2)\sigma^2)t + \sigma W_t + \sum_{i=0}^{N(t)}Y_i =\ln(x/V_0) \right\} = \\ Pr\left\{\sigma W_t + \sum_{i=0}^{N(t)}Y_i =\ln(x/V_0) - (r-(1/2)\sigma^2)t \right\}$ The problem is here... I don't know which distribution comes out on the left-hand side.
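Absent a closed form, one could estimate the first-hitting probability by Monte Carlo. The sketch below is my own illustration (all parameter values are assumptions, and `hitting_prob` is a hypothetical helper): it simulates $\log V_t$ with normal jumps in the log, i.e. log-normal jumps of $V_t$:

```python
import math
import random

def hitting_prob(V0=1.0, x=0.6, r=0.03, sigma=0.2, lam=0.5,
                 mu_j=-0.1, sig_j=0.2, T=1.0, n_steps=250,
                 n_paths=5000, seed=1):
    """Estimate P(tau <= T), tau = inf{t > 0 : V_t <= x}, for
    dV = r V dt + sigma V dW + V dJ with log-normal jump sizes."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma * sigma) * dt
    log_barrier = math.log(x / V0)
    hits = 0
    for _ in range(n_paths):
        logv = 0.0
        for _ in range(n_steps):
            logv += drift + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            if rng.random() < lam * dt:            # at most one jump per step
                logv += rng.gauss(mu_j, sig_j)     # jump acts on log V
            if logv <= log_barrier:                # barrier crossed
                hits += 1
                break
    return hits / n_paths
```

The survival curve $1 - P(\tau \le t)$ on a grid of maturities could then be fed into a standard CDS pricing formula and the parameters fitted to observed spreads.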
Inverted logic can be unnatural. Let's move over to quantified logic: $$\forall x:({duck}(x)\land {quacks}(x))\lor ({dog}(x)\land {barks}(x))\lor(\lnot {duck}(x)\land\lnot{dog}(x))$$ "Everything is either a duck (and quacks), or a dog (and barks), or else it is neither duck nor dog." If we write down the dual, and then use De Morgan's laws on it to flip the logic, we get something unnatural. Dual (so far so good): $$\lnot\exists x:\lnot(({duck}(x)\land {quacks}(x))\lor ({dog}(x)\land {barks}(x))\lor(\lnot {duck}(x)\land\lnot{dog}(x)))$$ De Morgan's, step 1: $$\lnot\exists x:\lnot({duck}(x)\land {quacks}(x))\land\lnot({dog}(x)\land {barks}(x))\land\lnot(\lnot {duck}(x)\land\lnot{dog}(x))$$ step 2: $$\lnot\exists x:({\lnot duck}(x)\lor {\lnot quacks}(x))\land({\lnot dog}(x)\lor {\lnot barks}(x))\land({duck}(x)\lor{dog}(x))$$ "There does not exist a thing which, simultaneously: is either a non-duck or a non-quacker; and is either a non-dog or a non-barker; and is a duck or a dog." Say what? :) Sum-of-products goes hand in hand with divide-and-conquer. A sum-of-products representation of a proposition divides it into all the cases which independently make it true. Proposition P is true if such and such; or in some situation; or in that other case. Division into independent cases assists clarity in reasoning. Furthermore, in predicate logic and related reasoning, we usually deal with positives, like "duck", and less with negatives like "non-duck". "Non-duck" is not a class of object. Things are classified using positive attributes that they do have, not what they don't have. The space of things which are "non-duck" is unbounded. Reasoning with such negatives is confusing. In propositional logic, as in zeroth-order logic without quantifiers, like what we deal with in logic circuits, we can write down the complete truth table. It may turn out that the negative space of a function is in fact simpler to characterize.
For instance, a boolean formula over four variables has only a 16-row truth table. Suppose there are three rows for which it is true, and it is false everywhere else. Then a simple formula is produced by writing down those three combinations of the four variables and combining them with or. But suppose instead that the formula is false in only three rows. Then it may be more convenient and natural to characterize these exceptions, and express it that way: the formula is true when the variables are not in this combination, and not in this other combination, and not in this third combination. The not operators can then distribute into the combinations, yielding a product over sums.

Positive example:

A B C D | P
0 0 0 0 | 0
0 0 0 1 | 0
0 0 1 0 | 0
0 0 1 1 | 0
0 1 0 0 | 1 *
0 1 0 1 | 0
0 1 1 0 | 0
0 1 1 1 | 1 *
1 0 0 0 | 0
1 0 0 1 | 0
1 0 1 0 | 0
1 0 1 1 | 0
1 1 0 0 | 0
1 1 0 1 | 1 *
1 1 1 0 | 0
1 1 1 1 | 0

Sum of products: P = A'BC'D' + A'BCD + ABC'D

Negative example:

A B C D | P
0 0 0 0 | 1
0 0 0 1 | 1
0 0 1 0 | 1
0 0 1 1 | 1
0 1 0 0 | 0 *
0 1 0 1 | 1
0 1 1 0 | 1
0 1 1 1 | 0 *
1 0 0 0 | 1
1 0 0 1 | 1
1 0 1 0 | 1
1 0 1 1 | 1
1 1 0 0 | 1
1 1 0 1 | 0 *
1 1 1 0 | 1
1 1 1 1 | 1

Product of sums:
P = (A'BC'D' + A'BCD + ABC'D)'
P = (A'BC'D')'(A'BCD)'(ABC'D)'
P = (A + B' + C + D)(A + B' + C' + D')(A' + B' + C + D')

Unsimplified sum of products: A'B'C'D' + A'B'C'D + A'B'CD' ... (10 more terms)

Even so, in spite of its simplicity, the third formula (product-of-sums) is somewhat harder to understand than the second (product-of-negated-products). However, the alternative unsimplified sum of 13 products is also hard to understand, due to the large number of terms.
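The De Morgan manipulation above can be checked by brute force over all 16 rows; a small sketch (the function names are mine, not the answer's):

```python
from itertools import product

def sop(a, b, c, d):
    """Positive example, sum of products: A'BC'D' + A'BCD + ABC'D."""
    return (((not a) and b and (not c) and (not d))
            or ((not a) and b and c and d)
            or (a and b and (not c) and d))

def pos(a, b, c, d):
    """Negative example, product of sums:
    (A + B' + C + D)(A + B' + C' + D')(A' + B' + C + D')."""
    return ((a or (not b) or c or d)
            and (a or (not b) or (not c) or (not d))
            and ((not a) or (not b) or c or (not d)))

# The negative example's P is the complement of the positive example's P,
# and De Morgan turns the negated sum of products into the product of sums:
assert all(bool(pos(*t)) == (not sop(*t))
           for t in product((False, True), repeat=4))
```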
I ask because those functions are on the TI BA II Plus financial calculator. One can use the Karhunen–Loève expansion to approximate a trajectory of a Wiener process, which can be used to model the evolution of returns in time. (http://en.wikipedia.org/wiki/Karhunen%E2%80%93Lo%C3%A8ve_theorem#The_Wiener_process) Though the Karhunen–Loève expansion has theoretical advantages over other ways to generate a trajectory of a Wiener process, many users will use different methods because, on computers, evaluation of trigonometric functions is expensive in terms of calculation time. You can use $\sin$ or $\cos$ to model seasonality. If all you have is a calculator, it might be the most practical way. When you do Monte Carlo simulation and would like to draw samples from the normal distribution $\mathcal{N}(\mu,\sigma^2)$, you may use the Box–Muller transform and come up with formulas using $\sin$ and $\cos$. Trigonometric functions show up in econometric models for business cycles. For example, the average length of a cycle of an AR(2) process is $ k = \frac{2 \pi}{\cos^{-1}( \phi_1/ (2 \sqrt{-\phi_2}))}$ for an AR(2) model given by $ r_t = \phi_0 + \phi_1 r_{t-1} + \phi_2 r_{t-2} + a_t$ with complex roots, i.e. $\phi_1^2 + 4\phi_2 <0 $. Trigonometric functions describe WAVE phenomena. As such, they are best used to model so-called periodic functions, that is, functions with cycles of a fixed period in length. That's why they are good for modelling seasonal, annual, "blue moon" (once every two and a half years), or other functions with set "periods."
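A minimal sketch of the Box–Muller transform mentioned above (this draws standard normals; scale by $\sigma$ and shift by $\mu$ to get $\mathcal{N}(\mu,\sigma^2)$):

```python
import math
import random

def box_muller(u1, u2):
    """Map two independent uniforms on (0, 1] x [0, 1) to two
    independent standard normals, using sin and cos."""
    radius = math.sqrt(-2.0 * math.log(u1))
    angle = 2.0 * math.pi * u2
    return radius * math.cos(angle), radius * math.sin(angle)

# Sanity check: the sample mean and variance should be near 0 and 1.
rng = random.Random(0)
samples = []
for _ in range(20000):
    u1 = 1.0 - rng.random()   # shift into (0, 1] so log(u1) is defined
    z1, z2 = box_muller(u1, rng.random())
    samples.extend([z1, z2])
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```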
Nonlinear Schrödinger equations on a finite interval with point dissipation. Department of Mathematics, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA. The paper concerns the nonlinear Schrödinger equation $ iu_t+u_{xx}+f(u) = 0 $, $ u ( x, 0 ) = w_0 (x) $, $ x\in [0, L] $, posed in $ L^2 $, with boundary conditions $ u(0, t) = \beta u(L, t) $, $ \beta u_x(0, t)-u_x(L, t) = i\alpha u(0, t) $, where $ L>0 $ and $ \alpha, \beta $ are real constants with $ \alpha\beta<0 $ and $ \beta\neq \pm 1 $, and $ f(u) $ maps $ \mathbb{C} $ to $ \mathbb{C} $. For $ s \in \left ( \frac12, 1\right ] $ and $ w_0 (x) \in H^s(0, L ) $, the solution satisfies $ u \in C([0, T]; H^s (0, L )) $, with decay as $ t \rightarrow + \infty $. Mathematics Subject Classification: Primary: 35Q55; Secondary: 35Q93. Citation: Jing Cui, Shu-Ming Sun. Nonlinear Schrödinger equations on a finite interval with point dissipation. Mathematical Control & Related Fields, 2019, 9 (2) : 351-384. doi: 10.3934/mcrf.2019017
The language of inference rules is much more general than what is usually given in logic. Indeed you can look at systems with rules of the shape $$ \frac{\Theta_1\ldots \Theta_n}{\Theta}$$ where the $\Theta_i, \Theta$ are some kind of statement and ask: what are all the possible $\Theta$ I can get by repeated application of these rules? In the above case, $s \vdash M \Downarrow P, s'$ expresses the statement: with starting state $s$, program $M$ evaluates to $P$ and the resulting state is $s'$. This is a very general framework for expressing the operational semantics of a program, and it means that if you can build a finite derivation tree with $s \vdash M\Downarrow P, s'$ at the base, then you have shown that the program $M$ is well-defined and evaluates to $P$. Let's give an example. Say $s$ is the state in which variable $b$ is set to true and $n$ is set to $2$, which I will write $\{b\mapsto \mathrm{true}, n\mapsto 2 \}$. Then the following should be derivable: $$\{b\mapsto \mathrm{true}, n\mapsto 2 \}\vdash \mathrm{if}\ b\ \mathrm{then}\ n+1\ \mathrm{else}\ 0 \Downarrow 3, \{b\mapsto \mathrm{true}, n\mapsto 2 \}$$ Note that in your question, the rule: $$\frac{s_0 \vdash M_1 \Downarrow P_1, s_1 \; ... \; s_{n-1} \vdash M_n \Downarrow P_n, s_n}{s_0 \vdash M \Downarrow P, s_n}$$ means: if, starting from state $s_0$, $M_1$ reduces to $P_1$ and new state $s_1$, and starting with state $s_1$, $M_2$ reduces to $P_2$ and new state $s_2$, etc., then starting from state $s_0$, $M$ reduces to $P$ with new state $s_n$. In particular, the order in which you do the reduction is important: the state may change after the evaluation of $M_1$, which in turn will influence the evaluation of $M_2$, etc. This is why in a language with state (side-effects), the order of evaluation of arguments passed to a function is important: we may get a different result depending on which argument we evaluate first.
In Haskell, laziness makes it impossible to know which arguments will be evaluated first (or at all!), so having side effects is pretty much out of the question. I suggest reading the classic Types and Programming Languages, by Benjamin Pierce, which treats all of this in much detail.
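The big-step judgment above can be sketched as a tiny evaluator; the tuple-based AST and the `evaluate` function are my own illustrative encoding, not the notation's source system:

```python
def evaluate(state, term):
    """Big-step evaluation: from (state s, term M) derive (value P, state s')."""
    tag = term[0]
    if tag == "lit":                       # s |- v  evaluates to  v, s
        return term[1], state
    if tag == "var":                       # s |- x  evaluates to  s(x), s
        return state[term[1]], state
    if tag == "add":
        v1, s1 = evaluate(state, term[1])  # premises evaluated left to right,
        v2, s2 = evaluate(s1, term[2])     # threading the state through: with
        return v1 + v2, s2                 # side effects, this order matters
    if tag == "if":
        cond, s1 = evaluate(state, term[1])
        return evaluate(s1, term[2] if cond else term[3])
    raise ValueError("unknown term: %r" % (tag,))

# {b -> true, n -> 2} |- if b then n + 1 else 0  evaluates to  3
state = {"b": True, "n": 2}
program = ("if", ("var", "b"),
           ("add", ("var", "n"), ("lit", 1)),
           ("lit", 0))
value, final_state = evaluate(state, program)
```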
Science:MER/Final answers A script automatically extracts the final answer from the solution. For this to work well, please follow this guide when writing your solution: Main Assumption: The final answer can be found in the last paragraph and is either part of a sentence, the last term in an align-environment, or a picture. 1. Final answer in a sentence If the last paragraph is not an align-environment and not a picture, then we assume that the final answer is in the last sentence. However, some questions end with <<which is what we wanted to show.>>, which is not the final answer we want to extract. Therefore, if there is no <math>, the script takes the sentence before the last sentence, and so on. This means: Please write the final answer in a <math> environment so that the script recognizes it. Do not split the math environment like DON'T DO THIS! f = 2 <math> \frac{\pi}{2}</math>sin(x) DON'T DO THIS! Instead, wrap everything in a single math environment: <math>f = 2 \frac{\pi}{2}sin(x)</math> Do not use \emph (<i>, <b>, ''text'') for math environments. If the final answer is in text form, e.g. (B), then use the \emph environment for this, i.e. The final answer is \emph{(B)}. If you want to write a note in the last sentence, then don't use math expressions or \emph environments in the note. The script would interpret this as the final answer. 2. Final answer in an align-environment If the last paragraph is an align-environment, the last item on the left-hand side will be matched with the last item of the right-hand side, as in <math> \begin{align} f(x) &= ab \\ & = cd \\ g(x) & = ef = gh \\ & = ij = kl = mn \end{align}</math> Then the final answer would be g(x) = mn. If the final answer is a picture, then place the figure as the last paragraph. If you want to write a note for the figure, write the note below the figure and do not use math expressions or \emph environments in the note.
A figure which is not an answer-figure must be placed before the last paragraph, and after the figure the final answer must follow, either as a math expression in a math environment or in an \emph environment. == General note == The last paragraph of the solution should be self-contained. The last paragraph is separated from the rest by two new-lines: Script won't look for final answer here: Lorem ipsum dolor sit amet, consectetur adipiscing elit. Integer vestibulum erat sit amet tempus semper. Phasellus sit amet nisi quis mauris bibendum sagittis nec vel sapien. Place final answer in this paragraph instead: Vestibulum interdum non elit sed ultricies. Mauris at sollicitudin erat. Mauris lobortis est at congue venenatis. Maecenas ac augue at orci tempor viverra nec sit amet augue. Integer vitae justo dui. In particular, do not write matrices like this:

<math>\begin{matrix} 1 & 2 \\

3 & 4 \end{matrix}</math>

The blank line is really bad! Then the last paragraph is 3 & 4 \end{matrix}</math>, which not only gives the wrong answer but introduces lots of LaTeX errors. Similarly with itemize, cases, enumerate... environments, and curly brackets.
Blow-up of arbitrarily positive initial energy solutions for a viscoelastic wave system with nonlinear damping and source terms. Boundary Value Problems volume 2018, Article number: 35 (2018). Abstract This work is concerned with the Dirichlet initial boundary problem for a semilinear viscoelastic wave system with nonlinear weak damping and source terms. For nonincreasing positive functions g and h, we show the finite time blow-up of some solutions whose initial data have arbitrarily high initial energy. Introduction and main result We consider a semilinear viscoelastic wave system with nonlinear damping and source terms, subject to null Dirichlet boundary and initial conditions, where \(\Omega \subset \mathbb{R}^{N}\) (\(N\geq 1\)) is a bounded domain with smooth boundary ∂Ω, \(m>1\), \(r>1\), and the relaxation functions \(g:\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}\) and \(h: \mathbb{R}^{+}\rightarrow \mathbb{R}^{+}\) are positive and nonincreasing. Problems of this type arise in viscoelasticity and in systems governing the longitudinal motion of a viscoelastic configuration obeying the nonlinear Boltzmann model. During the past decades, there has been much work dealing with the well-posedness and qualitative properties of solutions of damped viscoelastic wave equations. In this paper, we investigate the blow-up phenomena with high initial energy for a semilinear damped viscoelastic wave system. To motivate our work, let us recall some results regarding viscoelastic wave models. For the single viscoelastic wave equation, we refer the reader to [1, 2] (the case \(g=0\)) and [3–7] (the case \(g\neq 0\)), where blow-up of solutions with negative initial energy, positive initial energy and arbitrarily positive initial energy is established, respectively.
Moreover, for general energy decay estimates on global solutions of a nonlinear abstract viscoelastic equation with variable density, and for oscillation criteria and numerical solution of damped wave models, we refer the reader to [8–10]. Concerning wave systems without the viscoelastic term (\(g=0\)), Agre and Rammaha [11] investigated the following coupled semilinear wave system with nonlinear damping terms: in \(\Omega \times (0,\infty)\), where \(\Omega \subset \mathbb{R}^{N}\) (\(N=1,2,3\)), \(m\geq 1\), \(r\geq 1\), \(a>1\), \(b>0\), \(p\geq 3\). Using the Galerkin method together with the differential inequality techniques of [2] (as opposed to the familiar concavity method), they established local and global existence of weak solutions and showed that any weak solution with negative initial energy blows up in finite time. Thereafter, Said-Houari [12] obtained a blow-up result for a larger class of initial data with positive initial energy, combining the potential well method with differential inequality techniques ([2]). Pişkin [13] studied a coupled semilinear Klein–Gordon system with nonlinear damping terms, in \(\Omega \times (0,\infty)\), where \(\Omega \subset \mathbb{R}^{N}\) (\(N=1,2,3\)), \(m\geq 1\), \(r\geq 1\), \(m_{1},m_{2}>0\), \(a,b>0\), \(p>1\). The decay estimates of the solution are established by using Nakao’s inequality. Meanwhile, similarly to [2], he also proved the blow-up of the solution in finite time with negative initial energy, using an appropriate modification of the energy functional. In the presence of the viscoelastic term (\(g\neq 0\)), Han and Wang [14] discussed a semilinear coupled viscoelastic wave system with nonlinear damping terms, in \(\Omega \times (0,\infty)\), where \(\Omega \subset \mathbb{R}^{N}\) (\(N=1,2,3\)), \(m\geq 1\), \(r\geq 1\), \(a>1\), \(b>0\), \(p\geq 3\).
They established several results concerning the global existence, uniqueness and finite time blow-up of weak solutions with negative initial energy by utilizing the Galerkin and the concavity method. Recently, Messaoudi and Said-Houari [15] dealt with our problem (1)–(4) and improved the result in [12] to a larger class of initial data for which the initial energy can take positive values. Besides, for work on quasilinear wave equations, we refer the reader to [16–18] and the references therein. In view of the work mentioned above, one can find that research on the blow-up phenomena of the solutions with high initial energy for the semilinear damped viscoelastic wave system (1)–(4) has not yet been undertaken. Since viscoelastic terms, nonlinear damping and source terms are all included in the system, the classical method employed for a single equation cannot be directly used to prove the blow-up result. The main difficulty of the present paper is to find a technique to deal with the nonlinear damping and source terms. In order to overcome the difficulty, combining an argument of contradiction, a property of convex functions ([7]) and important inequalities in [15] (cf. Lemma 2.1), we consider problem (1)–(4) and prove a blow-up result for certain solutions at a high energy level. Firstly, let us present some notations and assumptions used throughout this article. Taking one can easily verify that where For the relaxation functions \(g(s)\), \(h(s)\) and real number p, we give the following assumptions: (\(\mathrm{H}_{1}\)): \(g\in C^{1}([0,\infty))\), \(h\in C^{1}([0, \infty))\) are nonnegative functions satisfying$$\begin{aligned}& g'(s)\leq 0, \quad 1- \int_{0}^{\infty }g(s)\,ds=l>0, \\& h'(s)\leq 0,\quad 1- \int_{0}^{\infty }h(s)\,ds=k>0. \end{aligned}$$ (\(\mathrm{H}_{2}\)): $$\begin{aligned}& -1< p< \infty,\quad N=1,2, \\& -1< p\leq \frac{3-N}{N-2},\quad N\geq 3.
\end{aligned}$$ Remark 1 Note that we can easily obtain the following local existence and uniqueness of weak solutions for problem (1)–(4) by using the Faedo–Galerkin approximation method and the Banach contraction mapping principle, similarly to [2] with slight modification. The process of this proof is standard, so we omit it here. Proposition Under the assumptions (\(\mathrm{H}_{1}\)) and (\(\mathrm{H}_{2}\)), let the initial data \((u_{0},u_{1})\in H_{0}^{1}(\Omega)\times L^{2}(\Omega)\) and \((v_{0},v_{1})\in H_{0}^{1}(\Omega)\times L^{2}(\Omega)\) be given; then the problem (1)–(4) has a unique local solution for the maximum existence time \(T>0\), where \(T\in (0,\infty ]\). where Now we are in a position to state our main result. Theorem 1 Under the assumptions (\(\mathrm{H}_{1}\)) and (\(\mathrm{H}_{2}\)), assume that \(m>1\), \(r>1\), \(2(p+2)>\max \{m+1,r+1\}\), and then \((u,v)\) blows up in finite time, where \(\varepsilon_{0}\in (0,1)\) is a root of the equation \(\frac{\sigma }{ \sigma +1} ( \frac{1-\xi }{2c_{0}\varepsilon_{0}(p+2)} ) ^{\frac{1}{ \sigma^{\ast }}}=\frac{2(p+2)(1-\varepsilon_{0})}{\alpha (\varepsilon _{0})}\), and \(\lambda_{1}\) is the first eigenvalue of −Δ. Preliminary results In this section, we give some lemmas which are useful for the proof of our blow-up result. Lemma 1 For \(t\geq 0\), \(E'(t)\leq 0\); moreover, the following energy inequality holds: □ Lemma 2 ([15], Lemma 2.1) There exist two positive constants \(c_{0}\) and \(c_{1}\) such that Next, we present the following crucial lemma, which restates Han and Wang [14], Theorem 2.4, with slight modification, so we omit its proof. Lemma 3 ([14]) Under the assumptions (\(\mathrm{H}_{1}\)) and (\(\mathrm{H}_{2}\)), assume that \(m>1\), \(r>1\), \(2(p+2)>\max \{m+1,r+1\}\) and (6) is satisfied. If \(\exists t_{0}\geq 0\) such that \(E(t_{0})<0\), then the solution of the problem (1)–(4) blows up in finite time.
Proof of Theorem 1 In this section, using an argument of contradiction and the property of a convex function, we prove our main result. Proof of Theorem 1 Thus the following equalities are obtained: Adding \(2(p+2)(1-\varepsilon)E(t)\) to the right side of (13), one can get For the third and fifth terms on the right side of (14), Hölder’s and Young’s inequalities give us By the convexity of the function \(\frac{u^{y}}{y}\) in y, for \(u\geq 0\) and \(y>0\), we have where \(\theta =\frac{2(p+2)-(m+1)}{2(p+2)-2}\); then one can get Similarly, where \(\eta =\frac{2(p+2)-(r+1)}{2(p+2)-2}\). Take For the formula above, using Lemma 2 and the Poincaré inequality, we get where \(k_{1}(\varepsilon)= ( (p+2)(1-\varepsilon)-1) l-\frac{1-l}{4(p+2)(1- \varepsilon)}\), \(k_{2}(\varepsilon)= ( (p+2)(1-\varepsilon)-1) k-\frac{1-k}{4(p+2)(1- \varepsilon)}\), and \(\lambda_{1}\) is the first eigenvalue of −Δ. Taking \(\varepsilon_{1}= ( \frac{2c_{0}\varepsilon (p+2)}{1-\xi } ) ^{\frac{1}{\sigma^{\ast }+1}}\), we have Since Then we can take ε small enough such that The Cauchy inequality gives us where It is easy to see that On the other hand, by the definition of \(k(\varepsilon)\), we have Hence, there exists \(\varepsilon_{\ast }\in (0,1)\) such that This implies that Using (22), (23) and the continuity in ε of \(\frac{2(p+2)(1- \varepsilon)}{\alpha (\varepsilon)}\) and \(( \frac{2c_{0}\varepsilon (p+2)}{1-\xi } ) ^{-\frac{1}{\sigma^{\ast }}}\frac{\sigma }{ \sigma +1}\), there exists \(\varepsilon_{0}\in (0,\varepsilon_{ \ast })\subset (0,1)\) such that Then (21) can be rewritten as Now, setting \(H(t)=(u,u_{t})+(v,v_{t})- ( \frac{2c_{0}\varepsilon _{0}(p+2)}{1-\xi } ) ^{-\frac{1}{\sigma^{\ast }}}\frac{\sigma }{ \sigma +1}E(t)\) and exploiting (7), we see that and A simple integration of (25) over \((0,t)\) then yields So we get the estimate This contradicts (26), and we obtain the finite time blow-up result.
□ Conclusion We prove the finite-time blow-up of solutions for a semilinear viscoelastic wave system with nonlinear weak damping and source terms whose initial data may have arbitrarily high initial energy. We point out that the methods for a single equation in [6, 7] are not necessarily applicable to our system. We also note that the result in Theorem 1 extends the results for the systems in [14, 15]. References 1. Levine, H.A.: Instability and nonexistence of global solutions to nonlinear wave equations of the form \(Pu_{tt}=-Au+F(u)\). Trans. Am. Math. Soc. 192, 1–21 (1974) 2. Georgiev, V., Todorova, G.: Existence of a solution of the wave equation with nonlinear damping and source terms. J. Differ. Equ. 109, 295–308 (1994) 3. Messaoudi, S.A.: Blow up and global existence in a nonlinear viscoelastic wave equation. Math. Nachr. 260, 58–66 (2003) 4. Messaoudi, S.A.: Blow-up of positive-initial-energy solutions of a nonlinear viscoelastic hyperbolic equation. J. Math. Anal. Appl. 320, 902–915 (2006) 5. Kafini, M., Messaoudi, S.A.: A blow-up result in a Cauchy viscoelastic problem. Appl. Math. Lett. 21, 549–553 (2008) 6. Wang, Y.J.: A global nonexistence theorem for viscoelastic equations with arbitrary positive initial energy. Appl. Math. Lett. 22, 1394–1400 (2009) 7. Song, H.T.: Blow up of arbitrarily positive initial energy solutions for a viscoelastic wave equation. Nonlinear Anal., Real World Appl. 26, 306–314 (2015) 8. Cavalcanti, M.M., Cavalcanti, V.N.D., Lasiecka, I., Webler, C.M.: Intrinsic decay rates for the energy of a nonlinear viscoelastic equation modeling the vibrations of thin rods with variable density. Adv. Nonlinear Anal. 6, 121–145 (2017) 9. Grace, S.R.: Oscillation criteria for third order nonlinear delay differential equations with damping. Opusc. Math. 35(4), 485–497 (2015) 10. Kumar, S., Kumar, D., Singh, J.: Fractional modelling arising in unidirectional propagation of long waves in dispersive media. Adv. Nonlinear Anal. 
5(4), 383–394 (2016) 11. Agre, K., Rammaha, M.A.: Systems of nonlinear wave equations with damping and source terms. Differ. Integral Equ. 19(11), 1235–1270 (2006) 12. Said-Houari, B.: Global nonexistence of positive initial-energy solutions of a system of nonlinear wave equations with damping and source terms. Differ. Integral Equ. 23(1–2), 79–92 (2010) 13. Pişkin, E.: Uniform decay and blow-up of solutions for coupled nonlinear Klein–Gordon equations with nonlinear damping terms. Math. Methods Appl. Sci. 37, 3036–3047 (2014) 14. Han, X.S., Wang, M.X.: Global existence and blow-up of solutions for a system of nonlinear viscoelastic wave equations with damping and source. Nonlinear Anal. 71(11), 5427–5450 (2009) 15. Messaoudi, S.A., Said-Houari, B.: Global nonexistence of positive initial-energy solutions of a system of nonlinear viscoelastic wave equations with damping and source terms. J. Math. Anal. Appl. 365(1), 277–287 (2010) 16. Liang, F., Gao, H.J.: Global nonexistence of positive initial-energy solutions for coupled nonlinear wave equations with damping and source terms. Abstr. Appl. Anal. 2011(4), 430 (2011) 17. Hao, J.H., Niu, S.S., Meng, H.H.: Global nonexistence of solutions for nonlinear coupled viscoelastic wave equations with damping and source terms. Bound. Value Probl. 2014(1), 1 (2014) 18. Li, G., Hong, L.H., Liu, W.J.: Global nonexistence of solutions for the viscoelastic wave equation of Kirchhoff type with high energy. J. Funct. Spaces Appl. 89(8), 1–15 (2011) Acknowledgements The authors would like to deeply thank all the reviewers for their insightful and constructive comments. Ethics declarations Competing interests The authors declare that they have no competing interests. Additional information Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$. Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$... What if $\theta$ is irrational...what did I do wrong? 'cause I understand that second one but I'm having a hard time explaining it in words (Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.) DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something. 
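As a quick numerical sanity check of the claim itself (not of the attempted proof), one can search for a nonzero $n$ with $e^{2\pi i a n}=1$, which is exactly the condition for the action to have a fixed point, since $z e^{2\pi i a n} = z$ holds for some (equivalently, every) $z \in S^1$ iff $e^{2\pi i a n}=1$, i.e. iff $an \in \Bbb Z$. The function name and search bound below are ad hoc:

```python
import cmath
import math
from fractions import Fraction

def returns_to_identity(a, max_n=1000, tol=1e-9):
    """Return the least n in 1..max_n with e^{2*pi*i*a*n} = 1, else None.

    The Z-action n.z = z*e^{2*pi*i*a*n} on S^1 has a fixed point for
    some nonzero n exactly when e^{2*pi*i*a*n} = 1 (the condition does
    not depend on z), i.e. exactly when a*n is an integer."""
    a = float(a)
    for n in range(1, max_n + 1):
        if abs(cmath.exp(2j * math.pi * a * n) - 1) < tol:
            return n
    return None

# Rational a = 3/7: after n = 7 steps the rotation is a whole number of turns.
print(returns_to_identity(Fraction(3, 7)))   # -> 7
# Irrational a = sqrt(2): no n up to 1000 gives the identity rotation.
print(returns_to_identity(math.sqrt(2)))     # -> None
```

This only probes finitely many $n$, so the `None` branch is evidence, not proof; the actual proof of the reverse direction is the observation that $a = p/q$ forces $n = q$ to fix every point.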
he based much of his success on principles like this I cant believe ive forgotten it it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. 
but that is a very good book regardless of you attending Princeton university or not yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned @TedShifrin thanks for that im bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college. 
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now @BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even A proof of this uses (basically) Fourier analysis Even though it looks rather innocuous albeit surprising result in pure number theory @BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive absolutely me too but would we have it any other way? 
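The four-square count mentioned above is Jacobi's formula for $r_4(n)$, and it is easy to check by brute force for small $n$; the helper names below are ad hoc:

```python
import math

def r4(n):
    """Number of ways to write n = a^2+b^2+c^2+d^2 over the integers,
    counting signs and order, by brute force."""
    count = 0
    m = math.isqrt(n)
    for a in range(-m, m + 1):
        for b in range(-m, m + 1):
            for c in range(-m, m + 1):
                d2 = n - a*a - b*b - c*c
                if d2 < 0:
                    continue
                d = math.isqrt(d2)
                if d * d == d2:
                    count += 2 if d else 1  # both d and -d, or just d = 0
    return count

def jacobi_r4(n):
    """Jacobi: 8*sigma(n) for odd n, 24*(sum of odd divisors of n) for even n."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    if n % 2 == 1:
        return 8 * sum(divisors)
    return 24 * sum(d for d in divisors if d % 2 == 1)

for n in range(1, 30):
    assert r4(n) == jacobi_r4(n)
print("Jacobi's four-square formula verified for n = 1..29")
```

For instance $r_4(1) = 8$ (the tuples $(\pm 1,0,0,0)$ in four positions) matches $8\,\sigma(1) = 8$.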
i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about @Daminark The key thing if I remember correctly was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, $2k$ is called the weight) such that the Fourier expansion of $f$ at infinity and $-1$ has no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$). The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero. I can try to recall more if you're interested. It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, \Im[z] > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps So it sort of makes sense Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. 
Indeed, one basically argues like the maximum value theorem in complex analysis @BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate
That $\ a,b\mid m\,\Rightarrow\,{\rm lcm}(a,b)\mid m\ $ may be conceptually proved by Euclidean descent as below. The set $M$ of all positive common multiples of $\,a,b\,$ is closed under positive subtraction, i.e. $\,m> n\in M$ $\Rightarrow$ $\,a,b\mid m,n\,\Rightarrow\, a,b\mid m\!-\!n\,\Rightarrow\,m\!-\!n\in M.\,$ Therefore, further, by induction, we deduce $\,M\,$ is closed under mod, i.e. remainder, since it arises by repeated subtraction, i.e. $\ m\ {\rm mod}\ n\, =\, m-qn = ((m-n)-n)-\cdots-n.\,$ Therefore it follows that the least positive $\,\ell\in M\,$ divides every $\,m\in M,\,$ else $\ 0\ne m\ {\rm mod}\ \ell\, $ would be an element of $\,M\,$ smaller than $\,\ell,\,$ contra minimality of $\,\ell.\,$ Thus the least common multiple $\,\ell\,$ divides every common multiple $\,m.$ Remark $\ $ The key structure exploited in the proof is abstracted out in the Lemma below. Lemma $\ \ $ Let $\,\rm S\ne\emptyset \,$ be a set of integers $>0$ closed under subtraction $> 0,\,$ i.e. for all $\rm\,n,m\in S, \,$ $\rm\ n > m\ \Rightarrow\ n-m\, \in\, S.\,$ Then the least $\rm\:\ell\in S\,$ divides every element of $\,\rm S.$ Proof ${\bf\ 1}\,\ $ If not there is a least nonmultiple $\rm\,n\in S,\,$ contra $\rm\,n-\ell \in S\,$ is a nonmultiple of $\rm\,\ell.$ Proof ${\bf\ 2}\,\rm\,\ \ S\,$ closed under subtraction $\rm\,\Rightarrow\,S\,$ closed under remainder (mod), when it is $\ne 0,$ because mod is simply repeated subtraction, i.e. 
$\rm\, a\ mod\ b\, =\, a - k b\, =\, a-b-b-\cdots -b.\,$ Thus $\rm\,n\in S\,$ $\Rightarrow$ $\rm\, (n\ mod\ \ell) = 0,\,$ else it is $\rm\,\in S\,$ and smaller than $\rm\,\ell,\,$ contra minimality of $\rm\,\ell.$ Remark $\ $ In a nutshell, two applications of induction yield the following inferences $ \rm\begin{eqnarray} S\ closed\ under\ {\bf subtraction} &\:\Rightarrow\:&\rm S\ closed\ under\ {\bf mod} = remainder = repeated\ subtraction \\&\:\Rightarrow\:&\rm S\ closed\ under\ {\bf gcd} = repeated\ mod\ (Euclid's\ algorithm) \end{eqnarray}$ Interpreted constructively, this yields the extended Euclidean algorithm for the gcd. The Lemma describes a fundamental property of natural number arithmetic whose essence will become clearer when one studies ideals of rings (viz. $\,\Bbb Z\,$ is Euclidean $\Rightarrow$ PID).
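A small numeric illustration of the argument (not part of the proof): the positive common multiples of $a,b$ up to a bound form a set closed under positive subtraction, and its least element, $\operatorname{lcm}(a,b)$, divides every element. The helper function below is ad hoc:

```python
import math

def common_multiples(a, b, bound):
    """The set M of positive common multiples of a and b, up to a bound."""
    return {m for m in range(1, bound + 1) if m % a == 0 and m % b == 0}

a, b, bound = 6, 10, 300
M = sorted(common_multiples(a, b, bound))

# Closure under positive subtraction (the hypothesis of the Lemma):
# m, n in M with m > n forces m - n in M (and m - n <= bound automatically).
for m in M:
    for n in M:
        if m > n:
            assert (m - n) in common_multiples(a, b, bound)

# Conclusion of the Lemma: the least element divides every element ...
ell = M[0]
assert all(m % ell == 0 for m in M)
# ... and that least element is exactly lcm(a, b).
assert ell == (a * b) // math.gcd(a, b)
print(ell)  # -> 30
```

Of course the finite truncation only illustrates the statement; the proof above handles the full infinite set by descent.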
Let $(M^{2n},\omega)$ be a symplectic manifold with an integral symplectic form $\omega$. Due to the work of M. Gromov and D. Tischler (M. Gromov, "A topological technique for the construction of solutions of differential equations and inequalities"; D. Tischler, "Closed 2-forms and an embedding theorem for symplectic manifolds"), there exists a symplectic embedding $$ (M,\omega) \rightarrow (\mathbb{C}P^{2n+1},\omega_{FS}),$$ where $\omega_{FS}$ denotes the Fubini–Study form on the projective space. For example, the Kodaira–Thurston manifold is a symplectic submanifold of $\mathbb{C}P^5$. My questions are as follows: Is there an example of a non-Kähler symplectic manifold $(M,\omega)$ which can be embedded into $\mathbb{C}P^n$ for some $n \leq 4$? (There is no restriction on the dimension of $M$.) Is there an example of a non-Kähler symplectic manifold $(M,\omega)$ of dimension $2n$ which can be embedded into $\mathbb{C}P^{n+1}$? (I mean, $M$ is a submanifold of codimension 2.) I really appreciate any comments.
Bulletin of the American Physical Society APS April Meeting 2010 Volume 55, Number 1 Saturday–Tuesday, February 13–16, 2010; Washington, DC Session G7: Instrumentation for Relativistic Heavy Ion Physics Sponsoring Units: DNP Chair: J.H. Lee, Brookhaven National Laboratory Room: Delaware A Sunday, February 14, 2010 8:30AM - 8:42AM G7.00001: The Forward Silicon Vertex Detector Upgrade for the PHENIX Experiment at RHIC Zhengyun You The PHENIX detector at RHIC will be upgraded with the Forward Silicon Vertex Detector (FVTX). The FVTX consists of two arms, each with four discs of silicon strip sensors combined with FPHX readout chips, covering the acceptance of the existing muon arm detectors ($1.2 < |y| < 2.4$). It will provide precision tracking and reconstruction of the primary vertex and the recognition of secondary decay vertices in the collision system, to allow discrimination among prompt muons, heavy flavor decay muons and muons from hadronic decays. The proposed tracker is planned to be put into operation in FY2011. The tracking performance and expectations for the physics signal extraction, the current status of detector construction, the assembly plan, and the results of beam tests will be presented. Sunday, February 14, 2010 8:42AM - 8:54AM G7.00002: Physics capability with silicon vertex tracker at RHIC PHENIX experiment Maki Kurosawa PHENIX is an experiment aiming to study the spin structure of the proton and hot, dense matter at Brookhaven National Laboratory's Relativistic Heavy Ion Collider. The PHENIX detector will be upgraded with a silicon vertex tracker (VTX) to enhance its physics capabilities for the spin and heavy-ion programs. The VTX is comprised of a four-layer barrel detector, two inner silicon pixel detectors and two outer silicon strip detectors. The main roles of the VTX are precision measurement of heavy flavor and precision jet reconstruction with its large acceptance. 
In the spin program, the VTX can determine the $x$ dependence of the gluon polarization $\Delta G/G$ through heavy flavor and gamma-jet correlation measurements. In the heavy-ion program, heavy flavor measurements provide further information on the properties of the QGP in addition to that obtained from light flavors. This presentation provides an overview of the VTX upgrade and the enhanced physics reach, as well as the current status of the pixel detector. Sunday, February 14, 2010 8:54AM - 9:06AM G7.00003: Status of the Muon Trigger Resistive Plate Chamber Upgrade Project in PHENIX Ihnjea Choi The exploration of proton spin structure is one of the major goals of the PHENIX experiment at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory. Single longitudinal spin asymmetries for high momentum muons from W-boson decay at $\sqrt{s}=500$ GeV are a promising probe for the flavor decomposition of quark helicity distributions in the proton. The PHENIX muon trigger upgrade will provide the ability to select rare events with high momentum muons from the dominant background of low momentum muons from hadron decay. The upgrade consists of two components. (1) The existing muon spectrometer will be upgraded with new, fast trigger front-end electronics. (2) Fast Resistive Plate Chamber (RPC) stations will be added upstream and downstream of the two PHENIX muon spectrometers. In combination these upgrades make it possible to select high momentum tracks in the first level trigger and to reject beam related backgrounds. PHENIX muon trigger RPC technology, including the frontend electronics, follows closely the design of the CMS muon trigger RPCs. We report the status of the RPC upgrade project including the progress in detector assembly and RPC installation in the PHENIX muon spectrometers. 
Sunday, February 14, 2010 9:06AM - 9:18AM G7.00004: Performance of PHENIX Resistive Plate Chambers Murad Sarsour The PHENIX experiment at the Relativistic Heavy Ion Collider at BNL uses polarized pp collisions to study the proton spin structure. One of the major emphases of the PHENIX spin program is to cleanly measure the sea quark and antiquark polarizations via single spin asymmetry of the W-decay muons. At forward rapidity, Resistive Plate Chambers (RPCs) will be used at PHENIX as a level-1 trigger to select high transverse momentum muon events from a large background of low transverse momentum muons. In addition, RPCs will be used offline to reduce cosmic muon backgrounds. Detector modules for one RPC station are currently being installed and tested at the PHENIX experimental site. In parallel, RPC prototypes are continuously monitored at a separate testing facility to study various environmental effects on the RPC performance. A report on results from these tests and performance will be presented. Results from the RPC prototype cosmic run to study the RPC's efficiency will also be presented. Sunday, February 14, 2010 9:18AM - 9:30AM G7.00005: The PHENIX Muon Trigger Upgrade Level-1 Trigger System John Lajoie, Todd Kempel The PHENIX Muon Trigger Upgrade adds a set of Level-1 trigger detectors to the existing muon spectrometers and will enhance the ability of the experiment to pursue a rich program of spin physics in polarized proton collisions. The upgrade will allow the experiment to select high momentum muons from the decay of W bosons and reject both beam-associated and low-momentum collision background, enabling the study of quark and antiquark polarization in the proton. The Muon Trigger Upgrade will add momentum and timing information to the present muon Level-1 trigger, which only makes use of tracking in the PHENIX muon identifier (MuID) panels. 
Signals from new Resistive Plate Chambers (RPCs) and re-instrumented planes in the existing muon tracking (MuTr) chambers will provide momentum and timing information for the new Level-1 trigger. An RPC timing resolution of $\sim $2 ns will permit rejection of beam related backgrounds while tracking information from the RPCs and MuTr station will be used by the trigger to select events with high momentum muon candidates. The RPC and MuTr hit information will be sent by optical fibers to a set of Level-1 trigger processors that will make use of cutting edge FPGA technology to provide very high data densities in a compact form factor. The layout of the upgrade, details of the Level-1 electronics and trigger algorithm development will be presented. Sunday, February 14, 2010 9:30AM - 9:42AM G7.00006: Status of the Silicon Stripixel Detector for PHENIX at RHIC Paul Kline A silicon vertex tracker upgrade is under development for the PHENIX detector at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL). The detector consists of four barrel layers with $|\eta| < 1.2$ rapidity coverage. The two inner layers are pixel sensors (located 2.5cm and 5cm from the beam line) and the two outer layers are a novel stripixel sensor (at 10cm and 14cm). The detector will allow for an improvement in the primary vertex resolution, determination of secondary vertices from heavy quark production, and construction of jet direction based on particle multiplicities, improving the capabilities of both the heavy ion and spin programs at PHENIX. The current status of the stripixel detector will be presented. 
Sunday, February 14, 2010 9:42AM - 9:54AM G7.00007: Performance studies of the Silicon Detectors in STAR towards microvertexing of rare decays Jonathan Bouchet The production of hadrons carrying heavy quarks ($b$ and $c$), as well as their elliptic flow, can be used as a probe of the thermalization of the medium created in heavy-ion collisions. Direct topological reconstruction of $D$, $B$ mesons and $\Lambda_{\mathrm{c}}$ baryon decays is then needed to obtain this precise measurement. To achieve this goal the silicon detectors of the STAR experiment are explored. These detectors, a Silicon Drift (SVT) 3-layer detector [1] and a Silicon Strip one-layer detector [2], provide tracking very near to the beam axis and allow us to search for heavy flavour with microvertexing methods. $D^{0}$ meson reconstruction including the silicon detectors in the tracking algorithm will be presented for the Au+Au collisions at $\sqrt{s_{NN}}$ = 200 GeV, and physics opportunities will be discussed. \\[4pt] [1] R. Bellwied et al., \textit{Nucl. Inst. Methods} {\bf A499} (2003) 640. \\[0pt] [2] L. Arnold et al., \textit{Nucl. Inst. and Methods} {\bf A499} (2003) 652. Sunday, February 14, 2010 9:54AM - 10:06AM G7.00008: The possibility of a Very High Momentum Particle Identification upgrade for ALICE Edmundo Garcia The results of RHIC have strongly altered the perception of baryon production in heavy-ion collisions. While the proton-over-pion ratio is 9{\%} in the thermal region, above transverse momenta of 3 GeV/c this ratio equals or even surpasses unity. Several theoretical predictions for the LHC assume an enhanced baryon production at higher transverse momenta: 10-20 GeV/c. With this in mind, we have decided to propose to the ALICE collaboration an upgrade of the particle identification capabilities with a new detector of small size (12 square meters). In the first stage we consider building a prototype to be commissioned at the end of 2011. 
The prototype would consist of a C4F10 gas Cherenkov detector with spherical mirror focusing, and a CsI photocathode coupled with MWPCs. The detector would identify pions and kaons up to a momentum of 26 GeV/c with a 4 sigma separation. We will also discuss the possible use of GEMs as a photodetector, where encouraging results have been obtained by our proto-collaboration. The physics capabilities of such a detector in conjunction with the ALICE experiment will be contemplated. Sunday, February 14, 2010 10:06AM - 10:18AM G7.00009: Local Polarimetry at STAR Using the Zero Degree Calorimeter Shower Maximum Detector Alice Bridgeman The polarized proton program at the Relativistic Heavy Ion Collider (RHIC) began colliding beams at a center of mass energy of 500 GeV in 2009, after successful running at a center of mass energy of 200 GeV in previous years. The polarized beams are monitored locally at STAR using various local polarimeters. At 200 GeV, the Beam Beam Counter (BBC) detectors have a sufficiently large analyzing power to work effectively as local polarimeters. At 500 GeV, the BBCs showed a decreased analyzing power. In 2009 the STAR collaboration successfully commissioned the Zero Degree Calorimeter (ZDC) with Shower Maximum Detector (SMD) for use as a local polarimeter at 500 GeV. I will review the work done in this run and discuss plans for the ZDC SMD in future polarized proton running at 500 GeV at STAR. Sunday, February 14, 2010 10:18AM - 10:30AM G7.00010: Heavy Flavor Physics in Heavy-Ion Collisions with STAR Heavy Flavor Tracker Yifei Zhang Heavy quarks are a unique tool to probe the strongly interacting matter created in relativistic heavy-ion collisions at RHIC energies. Due to their large mass, energetic heavy quarks are predicted to lose less energy than light quarks by gluon radiation when they traverse a Quark-Gluon Plasma. 
In contrast, recent measurements of non-photonic electrons from heavy quark decays at high transverse momentum (p$_{T})$ show a jet quenching level similar to that of the light hadrons. Heavy quarks are produced mainly at an early stage in heavy-ion collisions; thus they are proposed as probes of the QCD medium, sensitive to bulk medium properties. Ultimately, their flow behavior may help establish whether light quarks thermalize. Therefore, topological reconstruction of D-mesons and identification of electrons from charm and bottom decays are crucial to understanding heavy flavor production and their in-medium properties. The Heavy Flavor Tracker (HFT) is a micro-vertex detector utilizing active pixel sensors and silicon strip technology. The HFT will significantly extend the physics reach of the STAR experiment for precise measurement of charmed and bottom hadrons. We present a full-detector performance study of the open charm nuclear modification factor, elliptic flow v$_{2}$, and $\Lambda_{c}$ measurements, as well as the measurement of bottom mesons via semi-leptonic decays.
Geometry and Topology Seminar 2016-2017 Revision as of 10:56, 2 February 2017 Fall 2016 Fall Abstracts Ronan Conlon New examples of gradient expanding K\"ahler-Ricci solitons A complete K\"ahler metric $g$ on a K\"ahler manifold $M$ is a \emph{gradient expanding K\"ahler-Ricci soliton} if there exists a smooth real-valued function $f:M\to\mathbb{R}$ with $\nabla^{g}f$ holomorphic such that $\operatorname{Ric}(g)-\operatorname{Hess}(f)+g=0$. I will present new examples of such metrics on the total space of certain holomorphic vector bundles. This is joint work with Alix Deruelle (Universit\'e Paris-Sud). Jiyuan Han Deformation theory of scalar-flat ALE Kahler surfaces We prove a Kuranishi-type theorem for deformations of complex structures on ALE Kahler surfaces. This is used to prove that for any scalar-flat Kahler ALE surfaces, all small deformations of complex structure also admit scalar-flat Kahler ALE metrics. A local moduli space of scalar-flat Kahler ALE metrics is then constructed, which is shown to be universal up to small diffeomorphisms (that is, diffeomorphisms which are close to the identity in a suitable sense). A formula for the dimension of the local moduli space is proved in the case of a scalar-flat Kahler ALE surface which deforms to a minimal resolution of \C^2/\Gamma, where \Gamma is a finite subgroup of U(2) without complex reflections. This is a joint work with Jeff Viaclovsky. 
Sean Howe Representation stability and hypersurface sections We give stability results for the cohomology of natural local systems on spaces of smooth hypersurface sections as the degree goes to \infty. These results give new geometric examples of a weak version of representation stability for symmetric, symplectic, and orthogonal groups. The stabilization occurs in point-counting and in the Grothendieck ring of Hodge structures, and we give explicit formulas for the limits using a probabilistic interpretation. These results have natural geometric analogs -- for example, we show that the "average" smooth hypersurface in \mathbb{P}^n is \mathbb{P}^{n-1}! Nan Li Quantitative estimates on the singular sets of Alexandrov spaces The definition of quantitative singular sets was initiated by Cheeger and Naber. They proved some volume estimates on such singular sets in non-collapsed manifolds with lower Ricci curvature bounds and their limit spaces. On the quantitative singular sets in Alexandrov spaces, we obtain stronger estimates in a collapsing fashion. We also show that the (k,\epsilon)-singular sets are k-rectifiable and such structure is sharp in some sense. This is a joint work with Aaron Naber. Yu Li In this talk, we prove that if an asymptotically Euclidean (AE) manifold with nonnegative scalar curvature has long time existence of Ricci flow, it converges to the Euclidean space in the strong sense. By convergence, the mass will drop to zero as time tends to infinity. Moreover, in three dimensional case, we use Ricci flow with surgery to give an independent proof of positive mass theorem. A classification of diffeomorphism types is also given for all AE 3-manifolds with nonnegative scalar curvature. Peyman Morteza We develop a procedure to construct Einstein metrics by gluing the Calabi metric to an Einstein orbifold. We show that our gluing problem is obstructed and we calculate the obstruction explicitly. 
When our obstruction does not vanish, we obtain a non-existence result in the case that the base orbifold is compact. When our obstruction vanishes and the base orbifold is non-degenerate and asymptotically hyperbolic, we prove an existence result. This is a joint work with Jeff Viaclovsky.

Caglar Uyanik
Geometry and dynamics of free group automorphisms
A common theme in geometric group theory is to obtain structural results about infinite groups by analyzing their action on metric spaces. In this talk, I will focus on two geometrically significant groups: mapping class groups and outer automorphism groups of free groups. We will describe a particular instance of how the dynamics and geometry of their actions on various spaces provide deeper information about the groups.

Bing Wang
The extension problem of the mean curvature flow
We show that the mean curvature blows up at the first finite singular time for a closed smooth embedded mean curvature flow in R^3. A key ingredient of the proof is to show a two-sided pseudo-locality property of the mean curvature flow, whenever the mean curvature is bounded. This is a joint work with Haozhao Li.

Ben Weinkove
Gauduchon metrics with prescribed volume form
Every compact complex manifold admits a Gauduchon metric in each conformal class of Hermitian metrics. In 1984 Gauduchon conjectured that one can prescribe the volume form of such a metric. I will discuss the proof of this conjecture, which amounts to solving a nonlinear Monge-Ampere type equation. This is a joint work with Gabor Szekelyhidi and Valentino Tosatti.

Jonathan Zhu
Entropy and self-shrinkers of the mean curvature flow
The Colding-Minicozzi entropy is an important tool for understanding the mean curvature flow (MCF), and is a measure of the complexity of a submanifold. Together with Ilmanen and White, Colding and Minicozzi conjectured that the round sphere minimises entropy amongst all closed hypersurfaces.
We will review the basics of MCF and their theory of generic MCF, then describe the resolution of the above conjecture, due to J. Bernstein and L. Wang for dimensions up to six and recently claimed by the speaker for all remaining dimensions. A key ingredient in the latter is the classification of entropy-stable self-shrinkers that may have a small singular set.

Yu Zeng
Short time existence of the Calabi flow with rough initial data
Calabi flow was introduced by Calabi back in the 1950s as a geometric flow approach to the existence of extremal metrics. Analytically it is a fourth order nonlinear parabolic equation on the Kaehler potentials which deforms the Kaehler potential along its scalar curvature. In this talk, we will show that the Calabi flow admits a short-time solution for any continuous initial Kaehler metric. This is a joint work with Weiyong He.

Spring Abstracts

Lucas Ambrozio
"TBA"

Rafael Montezuma
"Metrics of positive scalar curvature and unbounded min-max widths"
In this talk, I will construct a sequence of Riemannian metrics on the three-dimensional sphere with scalar curvature greater than or equal to 6, and arbitrarily large min-max widths. The search for such metrics is motivated by a rigidity result of min-max minimal spheres in three-manifolds obtained by Marques and Neves.

Carmen Rovi
The mod 8 signature of a fiber bundle
In this talk we shall be concerned with the residues modulo 4 and modulo 8 of the signature of a 4k-dimensional geometric Poincare complex. I will explain the relation between the signature modulo 8 and two other invariants: the Brown-Kervaire invariant and the Arf invariant. In my thesis I applied the relation between these invariants to the study of the signature modulo 8 of a fiber bundle. In 1973 Werner Meyer used group cohomology to show that a surface bundle has signature divisible by 4.
I will discuss current work with David Benson, Caterina Campagnolo and Andrew Ranicki where we are using group cohomology and representation theory of finite groups to detect non-trivial signatures modulo 8 of surface bundles.

Yair Hartman
"Intersectional Invariant Random Subgroups and Furstenberg Entropy."
In this talk I'll present a joint work with Ariel Yadin, in which we solve the Furstenberg Entropy Realization Problem for finitely supported random walks (finite range jumps) on free groups and lamplighter groups. This generalizes a previous result of Bowen. The proof consists of several reductions which have geometric and probabilistic flavors of independent interest. All notions will be explained in the talk; no prior knowledge of Invariant Random Subgroups or Furstenberg Entropy is assumed.

Bena Tshishiku
"TBA"

Mark Powell
Stable classification of 4-manifolds
A stabilisation of a 4-manifold M is a connected sum of M with some number of copies of S^2 x S^2. Two 4-manifolds are said to be stably diffeomorphic if they admit diffeomorphic stabilisations. Since a necessary condition is that the fundamental groups be isomorphic, we study this equivalence relation for a fixed group. I will discuss recent progress in classifying 4-manifolds up to stable diffeomorphism for certain families of groups, arising from work with Daniel Kasprowski, Markus Land and Peter Teichner. As a by-product we also obtained a result on the analogous question with the complex projective plane CP^2 replacing S^2 x S^2.

Autumn Kent
Analytic functions from hyperbolic manifolds
At the heart of Thurston's proof of Geometrization for Haken manifolds is a family of analytic functions between Teichmuller spaces called "skinning maps." These maps carry geometric information about their associated hyperbolic manifolds, and I'll discuss what is presently known about their behavior. The ideas involved form a mix of geometry, algebra, and analysis.
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak, 1 min ago

BTW your program looks very interesting, in particular the way to enter mathematics. One thing that seems to be missing is documentation (at least I did not find it). This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for. For example, upon entering $\frac xy$, will it also find $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?

*******

Is it possible to save a link to a particular search query? For example in Google I am able to use a link such as: google.com/search?q=approach0+xyz A feature like that would be useful for posting bug reports. When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to.

*******

If I copy-paste a search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what I would expect, which means I have to type every query. The possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:

I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:

One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that nowadays several pages use LaTeX syntax (Wikipedia and this site, to mention just two important examples). Additionally, som...

@MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback; I really love it and will seriously look into those points and improve approach0. Give me just some minutes; I will reply to your feedback in our chat. — Wei Zhong, 1 min ago

I still think that it would be useful if you added to your post where you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward." BTW those animations with examples of searching look really cool.

@MartinSleziak Thanks to your advice, I have appended more information to my posted answers. Will reply to you shortly in chat. — Wei Zhong, 29 secs ago

We are an open-source project hosted on GitHub: http://github.com/approach0 You are welcome to send any feedback on our GitHub issue page!

@MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users.

@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the most basic requirement for a math-aware search engine. Actually, approach0 looks into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not match $x$ because approach0 considers them not structurally identical; however, you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula.
As for your example, entering $\frac \qvar{x} \qvar{y} $ is enough to match it.

@MartinSleziak As for the query link, it needs more explanation. Technically, the approach you mentioned that Google is using is an HTTP GET method, but for mathematics a GET request may not be appropriate, since a query has structure; a developer would usually use an HTTP POST request with a JSON-encoded body instead. This makes development much easier, because JSON is richly structured and makes it easy to separate math keywords.

@MartinSleziak Right now there are two workarounds for the "query link" problem you raised. First is to use the browser back/forward buttons to navigate among the query history.

@MartinSleziak Second is to use the command-line tool 'curl' to get search results from a particular query link (you can actually see that in the browser, but it is in the developer tools, such as the network inspection tab of Chrome). I agree it would be helpful to add a GET query link for users to refer to a query; I will write this point in the project TODO and improve this later (it just needs some extra effort).

@MartinSleziak Yes, if you search \alpha, you will get all \alpha documents ranked at the top, with different symbols such as "a" and "b" ranked after the exact matches.

@MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help users input Greek symbols even if they do not remember how to spell them.

@MartinSleziak Yes, you will get matches; Greek letters are tokenized to the same thing as ordinary letters.

@MartinSleziak As for integral upper bounds, I think it is a problem with a JavaScript plugin approach0 is using; I have also observed this issue. The only thing you can do is use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit.

@MartinSleziak Yes, it has a threshold now, but this is easy to adjust in the source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on Math Stack Exchange.
This is a very small number, but I will index more posts/pages once search-engine efficiency and relevance are tuned.

@MartinSleziak As I mentioned, the index is too small currently. You probably will get what you want when this project develops to the next stage, which is to enlarge the index and publish.

@MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project, and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time developing it until it is published.

So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar, 2 hours ago

@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid, 1 hour ago

@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations, which are valid questions on the main site. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak, 57 mins ago

"What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc." while actual questions on these are still on-topic on main. Beyond that it is not clear why a question about which software one uses should be a valid poll while the question about which book one uses is not. — quid, 7 mins ago

@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question. Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main site. Which is why I wrote in my comment: "Although not formulated like that". Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main site (although there should not be). I guess some examples can be found here or here. Perhaps it is better to link the search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed. Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.

I first saw such a poll on TeX.SE. The poll there concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc. Academia.SE has some questions which could be classified as "demographic" (including gender).

@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stands for Gašpar. But that is only anecdotal. And if I am to believe Slovak Wikipedia, it should be Christus mansionem benedicat. From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov." My attempt at an English translation: The priest writes C+M+B on the door (Christus mansionem benedicat - May Christ bless this house). This is, however, often mistakenly explained as 20-G+M+B-16, after the initial letters of the supposed names of the three kings. As you can see there, Christus mansionem benedicat is translated into Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from the initial letters of the translation. It seems that they also have other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants, and the initials are also believed to stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House"). Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."

BTW in the village where I come from, the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.

In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3]

A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance, if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar). In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.

On Slovakia specifically it says there: The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News").
It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
To my way of thinking, the other answers are missing an important element, a necessary feature for a mathematical tool or method to be called a "trick." Namely, in order to be called a "trick," a method or technique must involve artifice or misdirection of some kind. When we treat a mathematical object as something that it isn't really or when we pretend that something is other than it is in order to advance an argument (which is not to suggest that the mathematics is not correct), then we are using trickery. When we solve a problem by placing our focus on something else, in which we aren't actually interested as such and which may even be silly in some way—a kind of misdirection—but by doing so we become successful in the original problem, then we are using trickery. When we replace a robust concept, in which we are really interested, with a modified version of it, perhaps even an absurd version of it, but which makes the argument work, then we are using trickery.

For example, with Craig's trick, we replace a formula $\varphi$ with the conjunction with itself $\varphi\wedge\varphi\wedge\dots\wedge\varphi$ repeated many times over. The new assertion is just silly and we don't actually care about it as such, although of course it is logically equivalent to $\varphi$. How could it possibly help? The point is that we can use the new assertion to code some extra information into an axiomatization or presentation: the number of times it was repeated. By this artifice, we can deduce that every computably enumerable theory has a computable set of axioms. The same idea works in many other contexts. For example, every c.e. presentable group has a computable presentation, by sufficiently repeating relations suitably in the presentation.

With Scott's trick, the issue to be solved is that the equivalence class of an object forms a proper class, which can cause certain problems, and so we replace that equivalence class with the set of rank-minimal members of the class. If we think of this fake equivalence class as the real thing, then everything works great! This trick is surprisingly robust, and can be used to find small canonical sets of representing structures in almost any situation. For example, in ZFC there is a definable manner of choosing a set of groups from each group isomorphism class: the rank-minimal groups from that class. This is a trick, because we don't really care much about that particular collection as such.

With Rosser's trick, we replace the concept of a theory $T$ proving a sentence $\sigma$, with: $T$ proves $\sigma$ by a proof for which there is no shorter proof of $\neg\sigma$. When you think of "proof" using this concept, then Gödel's incompleteness theorem is improved to the Gödel-Rosser theorem, where one can drop Gödel's extra hypotheses about $\omega$-consistency. This is a trick, because we don't actually want to think about "proof" using Rosser's concept, except that it makes the argument work.

In many of the other tricks, we do something that seems a little absurd at first, misdirecting our attention from the original problem to this other thing, which may seem irrelevant at first, but when we follow it more fully it provides the answer we seek. In each case, we replace the concepts or objects in which we are truly interested by concepts or objects that we don't actually care about as such and which in several cases are comical versions of the original, except that they make the argument work.
I have a noisy temperature (T) vs. time (t) measurement and I want to calculate dT/dt. If I approximate $dT/dt = \Delta T/\Delta t$ then the noise in the derivative gets too high and the derivative becomes useless. So I fit a smoothing spline (smoothing parameter say 'p') to the measured data and get $dT/dt$ by piecewise differentiation of the spline. Is there a way to obtain the uncertainty in this $dT/dt$ based on the uncertainty in T?

Model two consecutive measurements as the real values plus some noise. Call the first measured temperature $T_1$ and the second $T_2$. Call the measured noises $\gamma_1$ and $\gamma_2$, and suppose that they are drawn from a distribution $\Gamma(\gamma)$ and are uncorrelated. The (approximation to the) derivative is $$\text{Derivative} \approx \frac{(T_2 + \gamma_2) - (T_1 + \gamma_1)}{\Delta t} \, .$$ Note that the derivative is itself a random variable because the $\gamma$'s are random variables. What is the probability distribution of this new random variable? Focus first on the numerator. Here we have a deterministic part $T_2 - T_1$ and a stochastic part $\gamma_2 - \gamma_1$. The tricky part you may not know is how to figure out the probability distribution of the sum or difference of two random variables; in fact the answer is not at all trivial. Given two random variables $x$ and $y$ with distributions $X(x)$ and $Y(y)$, the random variable $z$ defined by $z = x + y$ has distribution $$ Z(z) = (X \otimes Y)(z) \equiv \int_{-\infty}^\infty X(w) Y(z - w) \, dw \, .$$ This integral is called a convolution.
Anyway, the point is that the probability distribution $P_{\gamma_2 - \gamma_1}$ of $\gamma_2 - \gamma_1$ is the convolution of the distributions of $\gamma_2$ and $-\gamma_1$, which is $$P_{\gamma_2 - \gamma_1}(\gamma) = \int_{-\infty}^\infty \underbrace{\Gamma(-\gamma')}_{\text{from }-\gamma_1} \underbrace{\Gamma(\gamma - \gamma')}_{\text{from }\gamma_2} \, d \gamma' \, .$$ As an example, suppose the noise is Gaussian distributed with standard deviation $\sigma$, $$\Gamma(\gamma) = \frac{1}{\sqrt{2 \pi} \sigma} \exp\left(\frac{-\gamma^2}{2 \sigma^2} \right) \, .$$ In this case we can do the integral, and the result is $$P_{\gamma_2 - \gamma_1}(\gamma) = \frac{1}{\sqrt{2\pi} (\sqrt{2} \sigma)} \exp \left( \frac{-\gamma^2}{2 (\sqrt{2}\sigma)^2}\right) \, ,$$ which is just a Gaussian with standard deviation $\sqrt{2} \sigma$. Now remember we also divide by $\Delta t$, and doing this too modifies the distribution. The result is that the probability distribution is still a Gaussian where the standard deviation turns out to be $\sqrt{2}\sigma / \Delta t$. So that's your answer: the error in the derivative is completely described by a Gaussian probability distribution with standard deviation $\sqrt{2} \sigma / \Delta t$. I would avoid using a spline in data analysis in general. The spline draws smooth curves through arbitrary sets of points but destroys a lot of the information in the points and adds extraneous, meaningless "information". Use it for making things pretty, not for analysis, unless you really think your data should follow a power series (which would be rather unusual data). I like @DanielSank's answer a lot (and I voted it up) as it leads to good ways to characterize the noise in your data. Here is a "rough and ready" answer that you might find easier to use with typical data. It is quite crude and will only be valid if the temperature varies slowly compared to your sampling frequency. 
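The $\sqrt{2}\,\sigma/\Delta t$ result above is easy to check with a quick Monte Carlo sketch. The values of $\sigma$ and $\Delta t$ below are illustrative only, not taken from the question:

```python
import numpy as np

# Monte Carlo check of the result above: the finite-difference noise
# (gamma_2 - gamma_1)/dt should be Gaussian with standard deviation
# sqrt(2)*sigma/dt.  The values of sigma and dt are illustrative only.
rng = np.random.default_rng(0)
sigma, dt, n = 0.5, 0.1, 200_000

gamma1 = rng.normal(0.0, sigma, n)  # noise on the first measurement
gamma2 = rng.normal(0.0, sigma, n)  # independent noise on the second

deriv_noise = (gamma2 - gamma1) / dt
predicted = np.sqrt(2) * sigma / dt

print(predicted, deriv_noise.std())  # these should agree to within a few percent
```

The sample standard deviation of the simulated derivative noise matches the analytic prediction, which is a useful sanity check before applying the formula to real data.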
If your temperature is slowly varying compared to your sampling frequency, $1/\Delta t$, then instead of taking a spline you could just "bin" measurements so that every group of $n$ successive measurements is averaged into a single temperature measurement. So you are replacing your $N$ temperature measurements with $N/n$ temperature measurements. You can now estimate the uncertainty of each of these measurements as simply the standard deviation of the mean (SDOM, a.k.a. standard error) of the measurements in that bin. Now you will have much less noisy data and you can do the numerical derivative as you normally would. Danger, Will Robinson! Do this only if you are sure that your temperature is slowly varying. Even then it is very crude. If you later need to take a Fourier transform because you want to know the spectrum of temperature variation, then this binning will have "chopped off" the high-frequency part of the spectrum.

An answer to the OP can be obtained easily using the Savitzky-Golay (SG) smoothing-differentiation filter. Suppose we have noisy $n$-point data such as the temperature ($T$) vs. time ($t$) as in the OP. As per the OP we want to smooth the data, find the time rate of change, and the uncertainty that the SG procedure would introduce in the time rate of change. The SG method works like this: the Savitzky-Golay procedure fits a polynomial of degree $m$ to the data contained in a window of size $2w+1 \ll n$. Of interest in operating over a particular window is to determine the smooth temperature, the derivative, and the derivative uncertainty ONLY at the center of this window. The window is then moved across the data so that the quantities of interest may be calculated at every data point. To illustrate how this method works I've used $m=2$. Consider the procedure operating on a window centered at point $j$.
In this case the fitting polynomial is: $$T_j=b_{0,j}+b_{1,j}\bar t+b_{2,j}{\bar t}^2$$ where $\bar t=(t-t_j)/\Delta t$, i.e., the abscissa of a data point in the window relative to the abscissa of the window center point, normalized by the spacing between adjacent data points. In simple terms, $\bar t$ represents the abscissa-distance of a data point at $t$ in a window from the center of that window at $t_j$, in units of the spacing $\Delta t$. With this transformation, $b_0$, $b_1/\Delta t$, and $\delta (b_1/\Delta t)$ respectively are the smooth temperature, the time derivative of the temperature, and the uncertainty in the derivative of the temperature at $t_j$ (the window center). The fit parameters for the window $j$ are given by: $$b_{0,j}=\sum_{i=-w}^{w}C_{0i,j}T_{i,j}$$ $$b_{1,j}=\sum_{i=-w}^{w}C_{1i,j}T_{i,j}$$ $$b_{2,j}=\sum_{i=-w}^{w}C_{2i,j}T_{i,j}$$ In these expressions the coefficients $C_{0i,j}$, $C_{1i,j}$, and $C_{2i,j}$ are obtained by the procedure devised by Savitzky and Golay. The fit uncertainty for window $j$ is then: $$\delta_j=\sqrt{\frac{1}{2w-1}\sum_{i=-w}^{w}(T_{i,j}-T_j(t_i))^2}$$ which, when propagated through the expression for $b_{1,j}$, yields $\delta b_{1,j}$, the uncertainty in the time derivative of the temperature. Note that this uncertainty comes purely from the smoothing procedure, so that by choosing certain values of $m$ and $w$ the uncertainty can be made as small as one requires. The result of this procedure with $w=10$ is shown below:
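For what it's worth, SciPy ships an implementation of this smoothing-differentiation filter, `scipy.signal.savgol_filter`. Here is a minimal sketch on a synthetic noisy signal (all values are illustrative; note that the library returns the smoothed values and the derivative but not the uncertainty $\delta b_{1,j}$, which you would still compute from the windowed residuals yourself):

```python
import numpy as np
from scipy.signal import savgol_filter

# Sketch of Savitzky-Golay smoothing and differentiation on a synthetic
# noisy "temperature" record.  Window 2w+1 = 21 (w = 10) and degree m = 2,
# matching the choices in the answer above; the signal itself is made up.
rng = np.random.default_rng(1)
dt = 0.01
t = np.arange(0.0, 10.0, dt)
T = 20.0 + 5.0 * np.sin(0.5 * t) + rng.normal(0.0, 0.2, t.size)

T_smooth = savgol_filter(T, window_length=21, polyorder=2)                  # b_0 at each point
dTdt = savgol_filter(T, window_length=21, polyorder=2, deriv=1, delta=dt)   # b_1 / dt

# Compare with the noiseless derivative of the underlying signal, 2.5*cos(0.5*t)
true_dTdt = 2.5 * np.cos(0.5 * t)
print(np.abs(dTdt - true_dTdt).mean())
```

The residual noise in `dTdt` is far smaller than the noise a raw finite difference would give, at the cost of suppressing variations faster than the window width.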
Can anyone show me how to prove exactly that the composition of 2 functions is again a function, by using the following 3 formulas?

Suppose f: A -> B and g: B -> C are functions, then
(1) [tex]\forall \ a \ \epsilon \ A : \exists ! \ b \ \epsilon \ B : (a,b) \ \epsilon \ f[/tex]
(2) [tex]\forall \ b \ \epsilon \ B : \exists ! \ c \ \epsilon \ C : (b,c) \ \epsilon \ g[/tex].
And by definition the composition of relations f and g is
(3) [tex] g \ o \ f = \{(a,c) \ | \ \exists \ b \ \epsilon \ B : (a,b) \ \epsilon \ f \ and \ (b,c) \ \epsilon \ g \}[/tex].
I should be getting [tex]\forall \ a \ \epsilon \ A : \exists ! \ c \ \epsilon \ C : (a,c) \ \epsilon \ g \ o \ f[/tex] but I'm not sure how to combine the givens. I can do it in words, no problem, but I'm not that good in the use of quantifiers. Thanks in advance.
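One way the givens combine, as a hedged sketch (writing $\in$ for the poster's $\epsilon$ and $\circ$ for o):

```latex
% Existence: given a \in A, (1) yields a unique b \in B with (a,b) \in f,
% and (2) applied to this b yields a unique c \in C with (b,c) \in g;
% by (3), this witnesses (a,c) \in g \circ f.
% Uniqueness: if (a,c') \in g \circ f, then by (3) there is b' \in B with
% (a,b') \in f and (b',c') \in g.  Uniqueness in (1) gives b' = b, and then
% uniqueness in (2) applied to b gives c' = c.  Hence:
\forall a \in A \;\exists!\, c \in C : (a,c) \in g \circ f
```

The key step is that the uniqueness clauses of (1) and (2) are what let the existential witness $b$ in (3) be pinned down, which in turn pins down $c$.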
On the asymptotic character of a generalized rational difference equation

1. Department of Mathematics, Indian Institute of Science, Bangalore, Karnataka, 560012, India
2. Department of Mathematics, Maligram, Paschim Medinipur, 2421140, India

We investigate the global asymptotic stability of the solutions of $X_{n+1}=\frac{\beta X_{n-l} + \gamma X_{n-k}}{A + X_{n-k}}$ for $n=1,2,\ldots$, where $l$ and $k$ are positive integers such that $l \neq k$. The parameters are positive real numbers and the initial conditions are arbitrary nonnegative real numbers. We find necessary and sufficient conditions for the global asymptotic stability of the zero equilibrium. We also investigate the positive equilibrium and find the regions of parameters where the positive equilibrium is a global attractor of all positive solutions. Of particular interest for this generalized equation are the existence of unbounded solutions and the existence of prime period-two solutions, depending on the combination of delay terms ($l$, $k$) being (odd, odd), (odd, even), (even, odd) or (even, even). In this manuscript we investigate these aspects of the solutions for all such combinations of delay terms.

Mathematics Subject Classification: 39A10, 39A11.

Citation: Esha Chatterjee, Sk. Sarif Hassan. On the asymptotic character of a generalized rational difference equation. Discrete & Continuous Dynamical Systems - A, 2018, 38 (4) : 1707-1718. doi: 10.3934/dcds.2018070

Table: Parameters, Delay Terms, Estimated Interval of Lyapunov Exponent
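The recurrence studied in the abstract is easy to experiment with numerically. Below is a minimal sketch; the parameter values, delay terms, and initial conditions are hypothetical, chosen only for illustration. When $\beta + \gamma > A$, the positive equilibrium is $\bar{X} = \beta + \gamma - A$ (from $\bar{X}(A+\bar{X}) = (\beta+\gamma)\bar{X}$), and for this parameter choice the orbit settles onto it.

```python
# Hypothetical parameter values, for illustration only
beta, gamma, A = 0.5, 2.0, 1.0
l, k = 1, 2                      # delay terms (odd, even), l != k

m = max(l, k)
X = [1.0] * (m + 1)              # arbitrary nonnegative initial conditions

# Iterate X_{n+1} = (beta*X_{n-l} + gamma*X_{n-k}) / (A + X_{n-k})
for n in range(m, 500):
    X.append((beta * X[n - l] + gamma * X[n - k]) / (A + X[n - k]))

print(X[-1])  # approaches beta + gamma - A = 1.5 for these parameters
```

Trying other (odd, even) combinations of $(l, k)$ or other parameter regions in this sketch is a quick way to observe the period-two and unboundedness phenomena the paper classifies.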
In this MathStackExchange post the question in the title was asked without much outcome, I feel. Edit: As Douglas Zare kindly observes, there is one more answer in MathStackExchange now. I am not used to basic Probability, and I am trying to prepare a class that I need to teach this year. I feel I am unable to motivate the introduction of random variables. After spending some time speaking about Kolmogorov's axioms I can explain that they allow to make the following sentence true and meaningful: The probability that, tossing a coin $N$ times, I get $n\leq N$ tails equals $$\tag{$\ast$}{N \choose n}\cdot\Big(\frac{1}{2}\Big)^N.$$ But now people (i.e. books I can find) introduce the "random variable $X\colon \Omega\to\mathbb{R}$ which takes values $X(\text{tails})=1$ and $X(\text{heads})=0$" and say that it follows the binomial rule. To do this, they need a probability space $\Omega$: but once one has it, one can prove statement $(\ast)$ above. So, what is the usefulness of this $X$ (and of random variables, in general)? Added: So far my question was admittedly too vague, so let me try to amend it. Given a discrete random variable $X\colon\Omega\to\mathbb{R}$ taking values $\{x_1,\dots,x_n\}$ I can define $A_k=X^{-1}(\{x_k\})$ for all $1\leq k\leq n$. The study of the random variable becomes then the study of the values $p(A_k)$, $p$ being the probability on $\Omega$. Therefore, it seems to me that we have not gone one step further in the understanding of $\Omega$ (or of the problem modelled by $\Omega$) thanks to the introduction of $X$. Often I read that there is the possibility of having a family $X_1,\dots,X_n$ of random variables on the same space $\Omega$ and some results (like the CLT) say something about them.
But then I know no example (and would be happy to discover one) of a problem truly modelled by this, whereas in most examples that I read there is either a single random variable, or the understanding of $n$ of them requires the understanding of the power $\Omega^n$ of some previously-introduced measure space $\Omega$. It seems to me (but I admit to have no rigorous proof) that given the above $n$ random variables on $\Omega$ there should exist an $\Omega'$, probably much bigger, with a single $X\colon\Omega'\to\mathbb{R}$ "encoding" the same information as $\{X_1,\dots,X_n\}$. In this case, we are back to using "only" indicator functions. I understand that this process breaks down if we want to make $n\to \infty$, but I also suspect that there might be a deeper reason for studying random variables. All in all, my doubts come from the fact that random variables still look to me as being a poorer object than a measure (or, probably, of a $\sigma$-algebra $\mathcal{F}$ and a measure whose generated $\sigma$-algebra is finer than $\mathcal{F}$, or something like this); though, they are introduced, studied, and look central in the theory. I wonder where I am wrong. Caveat: For some reason, many people in comments below objected that "throwing random variables away is ridiculous" or that I "should try to come out with something more clever, then, if I think they are not good". That was not my point. I am sure they must be useful, otherwise textbooks would not introduce them. But I was unable to understand why: many useful and kind answers below helped a lot.
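Statement $(\ast)$ can be checked concretely by enumerating a finite sample space. A minimal Python sketch (the encoding of outcomes as 0/1 tuples is just one convenient choice): it builds $\Omega = \{\text{heads},\text{tails}\}^N$, defines the random variable $S(\omega) = $ number of tails, and compares $P(S=n)$ under the uniform measure with the closed form $\binom{N}{n}(1/2)^N$.

```python
from itertools import product
from math import comb

N = 5  # number of tosses (small, so Omega is easy to enumerate)

# Sample space Omega = {heads, tails}^N, encoded as 0/1 tuples (1 = tails)
Omega = list(product([0, 1], repeat=N))

# The random variable S : Omega -> R counting tails in an outcome
def S(outcome):
    return sum(outcome)

# P(S = n) under the uniform probability measure on Omega
def prob(n):
    return sum(1 for w in Omega if S(w) == n) / len(Omega)

# This matches the closed form (N choose n) * (1/2)^N from statement (*)
for n in range(N + 1):
    print(n, prob(n), comb(N, n) * 0.5**N)
```

Note that the sets $A_n = S^{-1}(\{n\})$ appearing in the question are exactly the fibers being counted inside `prob`, so this sketch is the "indicator function" viewpoint made explicit.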
Turbulent Boundary Layer Equations

From Thermal-FluidsPedia

Revision as of 01:54, 21 July 2010

The generalized governing equations for three-dimensional turbulent flow have been presented in governing equations. For two-dimensional steady-state turbulent flow, the governing equations can be simplified to eqs. (2) – (5).

To obtain the turbulent boundary layer governing equations, scale analysis can be performed on eqs. (2) – (5). While the treatments of the time-averaged quantities are similar to the cases of laminar flow, special attention must be paid to the time averaging of the products of the fluctuations. It can be shown through a scale analysis that the first terms in the last parentheses on the right-hand side of eqs. (2) – (5) are negligible compared to the second terms in the parentheses (Oosthuizen and Naylor, 1999) [1]. Therefore, eqs. (2) – (5) can be simplified to:

<math>\bar{u}\frac{\partial \bar{u}}{\partial x}+\bar{v}\frac{\partial \bar{u}}{\partial y}=-\frac{1}{\rho }\frac{d\bar{p}}{dx}+\nu \frac{\partial ^{2}\bar{u}}{\partial y^{2}}-\frac{\partial \overline{{v}'{u}'}}{\partial y}</math>    (6)

where eq. (3) became <math>\partial \bar{p}/\partial y=0</math> and the partial derivative of time-averaged pressure in eq. (2) became <math>\partial \bar{p}/\partial x=d\bar{p}/dx</math>, which has been reflected in eq. (6).

When molecules or eddies in a turbulent flow cross a control surface, they will carry momentum with them. Thus, the shear stress in a turbulent flow can be caused by both molecular and eddy level activities. While the molecular level activity is the only mechanism of shear stress in laminar flow, transport of momentum by eddies can only be found in turbulent flow. The time-averaged shear stress tensor can be expressed as eq. (9), where <math>\mathbf{\bar{\tau }}^{\text{m}}</math> is the contribution of the molecular motion, and <math>\mathbf{\bar{\tau }}^{\text{t}}</math> is caused by eddy level activity, referred to as the Reynolds stress. For two-dimensional flow, the shear stress can be expressed in component form.

Similarly, the heat flux in the turbulent flow can also be caused by molecular level activity (conduction) and eddy level activity. The time-averaged heat and mass fluxes can be expressed accordingly; the mass flux is obtained in a similar way.

References

↑ Oosthuizen, P.H., and Naylor, D., 1999, Introduction to Convective Heat Transfer Analysis, WCB/McGraw-Hill, New York.
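The key point about time averaging of products of fluctuations can be illustrated numerically. The sketch below uses synthetic correlated fluctuations (all values hypothetical, and an ensemble average stands in for the time average): each fluctuation averages to essentially zero, while their product, the Reynolds stress term <math>\overline{u'v'}</math>, does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic correlated velocity fluctuations u', v' (hypothetical data,
# only to illustrate Reynolds averaging, not a real turbulence model)
uprime = rng.normal(0.0, 1.0, n)
vprime = -0.4 * uprime + rng.normal(0.0, 1.0, n)  # correlated with u'

u_bar, v_bar = 5.0, 0.1          # mean velocities
u = u_bar + uprime               # Reynolds decomposition u = u_bar + u'
v = v_bar + vprime

# Averages of the fluctuations themselves are ~0 ...
print(np.mean(u - u_bar))        # close to 0

# ... but the average of their product (the Reynolds stress) is not:
reynolds_stress = np.mean((u - np.mean(u)) * (v - np.mean(v)))
print(reynolds_stress)           # close to -0.4 for this synthetic data
```

This is exactly why the extra term <math>\partial \overline{{v}'{u}'}/\partial y</math> survives in eq. (6) even though <math>\overline{u'} = \overline{v'} = 0</math>.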
This question has many parts, but I will try to address them all. One question you ask is why the kernel has dimensions of $\text{length}^{-d}$. Another question you asked is how the kernel can be interpreted as a probability amplitude. The third thing you asked for is a physical interpretation of the prefactor in the kernel. I will start with a review of where the kernel comes from. Review of kernel As a toy example let's consider a piece of metal that has a temperature profile $\newcommand{\bx}{\mathbf{x}}T_0(\bx)$ as a function of position $\bx$ at time zero. The temperature at later times $t$ is described by the equation $$\partial_t T(\bx,t) = k\nabla^{2} T(\bx,t).$$ Given this equation, the temperature at a point $\newcommand{\by}{\mathbf{y}}\by$ at the time $t$ is given by $$T(\by,t)=\int \frac{1}{(4 \pi k t)^{d/2}} \exp \left(-\frac{|\bx -\by|^2}{4kt}\right)T_0(\bx)d\bx\equiv\int K(\by,\bx;t)T_0(\bx)d\bx.$$ We have defined the kernel $K(\by,\bx;t)$ to be the function $\frac{1}{(4 \pi k t)^{d/2}} \exp \left(-\frac{|\bx -\by|^2}{4kt}\right)$. Notice that since $k$ has dimensions of $\text{length}^2/\text{time}$, the function $K$ has dimensions of $\text{length}^{-d}$. It is good that the kernel has these dimensions, because looking at the equation $T(\by,t)=\int K(\by,\bx;t)T_0(\bx)d\bx,$ if the $T$ on the right hand side is to have the same units as the $T$ on the left hand side, the dimensions of $K$ ought to cancel the $d$ powers of length that result from doing the integral. That is, $K$ must have units of $\text{length}^{-d}$. In general, regardless of what the dimensions of $T$ are, if the kernel $K$ is being integrated over space, then it must have dimensions of $\text{length}^{-d}$. Now it should be obvious why the kernel in your case has dimensions of $\text{length}^{-d}$.
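As a sanity check on the review above, here is a small numerical sketch in one dimension (the diffusivity value is arbitrary). It verifies by finite differences that the kernel $K$ solves the heat equation, and that it integrates to one over space, consistent with its units of $\text{length}^{-d}$ for $d=1$.

```python
import numpy as np

k = 0.7  # diffusivity (arbitrary value, length^2/time)

def K(x, t):
    """1-D heat kernel: exp(-x^2/4kt) / sqrt(4*pi*k*t)."""
    return np.exp(-x**2 / (4 * k * t)) / np.sqrt(4 * np.pi * k * t)

x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
t, dt = 1.0, 1e-5

# Finite-difference check that dK/dt = k * d2K/dx2
dKdt = (K(x, t + dt) - K(x, t - dt)) / (2 * dt)
d2Kdx2 = np.gradient(np.gradient(K(x, t), dx), dx)
print(np.max(np.abs(dKdt - k * d2Kdx2)))  # small: K satisfies the equation

# K integrates to 1 over space, so its units are length^-1 in d = 1
print(np.sum(K(x, t)) * dx)  # approximately 1
```

The unit integral is the statement that the kernel conserves the total "amount" being diffused; the same normalization is what makes the quantum kernel propagate probability amplitude correctly.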
Why the kernel has dimensions of $\text{length}^{-d}$ If $\psi_0(\bx)$ is a probability amplitude at $t=0$, having units of $\text{length}^{-d/2}$, and $\psi_t(\by)$ is a probability amplitude at time $t$, also having units of $\text{length}^{-d/2}$, and if the two are related by $\psi_t(\by)=\int K(\by,\bx;t)\psi_0(\bx)d\bx,$ then $K$ must have units of $\text{length}^{-d}$ in order for the dimensions to work out. Interpreting the kernel itself as a solution Next, I will address how the kernel itself can be viewed as a solution. This seems counter-intuitive at this point because wavefunctions must have dimensions of $\text{length}^{-d/2}$, but we saw that the kernel has dimensions of $\text{length}^{-d}$. Let's go back to the example of the heat equation. Let's consider an initial temperature profile $T_0(\bx)$ that is only nonzero in a small region of volume $V$. If the volume $V$ is made smaller and smaller, then the temperature will be zero everywhere at later times because the influence of the smaller region will go to zero. Unless, that is, the temperature within that region gets higher and higher as the region gets smaller. The limiting function, a function that is nonzero only on an infinitely small region, but that has an infinite value at that small region, is called a delta function. If we choose our initial temperature $T_0(\bx)$ to be one of these delta functions, with the infinitely small region centered at $\mathbf{0}$, then the temperature $T(\by,t)$ at a time $t$ later is given by $\int K(\by,\bx;t)\delta(\bx)d\bx$, which is equal to $K(\by,\mathbf{0};t)$. There is a problem with what I said above. We used a delta function as our $T_0$, but a delta function has dimensions of inverse volume, while our initial temperature profile should have dimensions of temperature. To get a true initial temperature profile, we should multiply the delta function by a constant with appropriate units (volume times temperature).
The solution for the temperature at a later time would then be not just $K$, but $K$ times this same constant. Multiplying by the dimensionful constant changes the units, but does not change the spatial profile of the solution, so while $K$ doesn't have the right units to be the solution, it does have the same spatial profile as the solution for a sharply concentrated initial temperature profile. Let's see how this works with quantum mechanics. In this case, a delta function represents the wavefunction of a particle with definite position. However, a delta function has units of inverse volume, while a wavefunction should have dimensions of the square root of inverse volume. In analogy with the temperature example, the delta function should be multiplied by a constant having dimensions of the square root of volume to give the appropriate dimensions for a wavefunction. Accordingly, the kernel should be multiplied by a constant with units of the square root of volume. Since the kernel has units of inverse volume, this multiplication gives the appropriate units of inverse square root of volume. However, the spatial profile of the wavefunction is the same whether or not you include this constant. I think this explains your second question. Interpretation of prefactor The third thing you asked about is the meaning of the prefactor, which (in one dimension) goes as $1/\sqrt{t}$. Since the initial state is a delta function, the momentum is completely undetermined. That is, you can think of the initial state as a superposition of all momenta. As the state evolves from the initial value, you can think of the wavefunction as a quantum superposition of the particle moving away from the origin at every constant velocity. So the wavefunction represents a uniform expansion. If you have a uniform expansion of a fixed amount of mass (in one dimension), you expect the density to be inversely proportional to time.
Since the density is inversely proportional to time, the wavefunction must be inversely proportional to the square root of time.
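The $1/\sqrt{t}$ behaviour can be checked directly from the 1-D free-particle kernel. A small sketch in natural units ($\hbar = m = 1$, an assumption made for convenience): the probability density $|K|^2$ comes out uniform in $x$ and falls off as $1/t$, so the amplitude falls off as $1/\sqrt{t}$.

```python
import numpy as np

hbar, m = 1.0, 1.0  # natural units, for convenience

def K_free(x, t):
    """Free-particle propagator K(x, 0; t) in 1-D."""
    return np.sqrt(m / (2j * np.pi * hbar * t)) * np.exp(1j * m * x**2 / (2 * hbar * t))

x = np.linspace(-3, 3, 7)
d1 = np.abs(K_free(x, 1.0))**2   # probability density at t = 1
d2 = np.abs(K_free(x, 2.0))**2   # probability density at t = 2

print(d1)        # uniform in x: the phase factor has unit modulus
print(d1 / d2)   # 2 everywhere: density is proportional to 1/t
```

The uniformity in $x$ is the "superposition of all momenta" statement made quantitative: only the $t$-dependent prefactor contributes to $|K|^2$.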
I am learning how to tell if a function is surjective at a glance. There are a few linear transformations below that I tried to show (informally) are or aren't surjective. Please see if any of that makes sense. $T: \mathbb R^2 \to \mathbb R^3 \text { given as } T \begin{pmatrix} x \\ y \\ \end{pmatrix} = \begin{pmatrix} 2x + 3y \\ x + y \\ 0 \end{pmatrix}$ $(a, b, 0)^T$ is an arbitrary vector in the range of $T$ and $(a, b, c)^T$ is an arbitrary vector in $\mathbb R^3$, where $c \in \mathbb R$, and so the range of $T$ is a proper subset of its codomain. Thus the range and the codomain are not equal, and $T$ is not surjective. $T: P_3 \to P_2$ given by $T(p) = p'$, where $p'$ is the derivative of $p$. Any $p' = ax^2 + bx + c$ is an arbitrary vector in both the range and codomain of $T$, so $T$ is surjective. $T: \mathbb R^3 \to \mathbb R^2 \text { given as } T \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} y \\ z \\ \end{pmatrix}$ An arbitrary vector in the range is an ordered pair, and ordered pairs make up the codomain as well; indeed, any $(y, z)^T$ is the image of $(0, y, z)^T$. So this transformation is surjective. $T: P_2 → P_3$ given by $T(ax^2 + bx + c) = ax + (b + c)$ The range of $T$ contains linear polynomials only, and so it is a proper subset of $P_3.$ Thus $T$ is not surjective.
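One mechanical way to check the matrix examples above: a linear map $\mathbb R^n \to \mathbb R^m$ is surjective iff its matrix has rank $m$. A small NumPy sketch using the two coordinate transformations from the post:

```python
import numpy as np

# T1: R^2 -> R^3, T1(x, y) = (2x + 3y, x + y, 0)
T1 = np.array([[2, 3],
               [1, 1],
               [0, 0]])

# T3: R^3 -> R^2, T3(x, y, z) = (y, z)
T3 = np.array([[0, 1, 0],
               [0, 0, 1]])

def is_surjective(M):
    """A matrix map is onto iff its rank equals the dimension of the codomain."""
    return np.linalg.matrix_rank(M) == M.shape[0]

print(is_surjective(T1))  # False: rank 2 < 3
print(is_surjective(T3))  # True:  rank 2 == 2
```

The same test covers the polynomial examples once you write them in coordinates with respect to the monomial bases.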