The population mean is the average of a characteristic over an entire group. The group can consist of people, things, or other items; for example, you may need to find the population mean for all people living in the USA, or for all dogs in Georgia. A characteristic is simply the quantity of interest: for example, books in a library might be checked out seven times per year on average. In statistics, the population mean is rarely calculated directly, because doing so is time-consuming and costly. To take one more example: if you wanted to calculate the average weight of a dog, would you track 70-80 million dogs? The better idea is to take a sample and weigh those. The population mean formula in mathematics is given as \[\large \mu=\frac{\sum x_i}{N}\] where \(\sum x_i\) is the sum of the values and \(N\) is the number of values. When you calculate a population mean, you will notice that it is very similar to the average learned in basic mathematics. However, you need to be extra cautious about whether you are taking the mean of a population or of a sample; distinct symbols are used in statistics for the two: \(\mu\) for the population mean and \(\bar{x}\) for the sample mean. You should take the ratio in case of
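The formula above amounts to summing the values and dividing by the count. A minimal sketch in Python, with a made-up list standing in for the dog-weight example (the numbers are purely illustrative):

```python
# Minimal sketch: population mean mu = (sum of x_i) / N.
# The "population" here is a small illustrative list, not real survey data.
weights = [20.5, 31.0, 24.2, 18.8, 27.5]  # hypothetical dog weights in kg

N = len(weights)
mu = sum(weights) / N  # population mean
print(mu)  # 24.4
```

For a sample mean the arithmetic is identical; only the symbol (\(\bar{x}\) instead of \(\mu\)) and the interpretation change.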
Definition and General Scheme for Solving Nonhomogeneous Equations

A linear nonhomogeneous second-order equation with variable coefficients has the form \[y'' + a_1(x)y' + a_2(x)y = f(x),\] where \(a_1(x),\) \(a_2(x)\) and \(f(x)\) are continuous functions on the interval \([a,b].\) The associated homogeneous equation is written as \[y'' + a_1(x)y' + a_2(x)y = 0.\] The general solution of the nonhomogeneous equation is the sum of the general solution \(y_0(x)\) of the associated homogeneous equation and a particular solution \(Y(x)\) of the nonhomogeneous equation: \[y(x) = y_0(x) + Y(x).\] To construct the general solution of the nonhomogeneous equation, the following approach is most often used: first, by guessing, we find a particular solution of the homogeneous equation; then, using the Liouville formula, we get the general solution of the homogeneous equation; finally, using the method of variation of parameters (Lagrange's method), we determine the general solution of the nonhomogeneous equation. The first two steps of this scheme are described on the page Second Order Linear Homogeneous Differential Equations with Variable Coefficients. Below we consider in detail the third step, that is, the method of variation of parameters.

Method of Variation of Parameters

The method of variation of parameters (Lagrange's method) is used to construct the general solution of the nonhomogeneous equation when the general solution of the associated homogeneous equation is known.
Suppose that the general solution of the second-order homogeneous equation is expressed through the fundamental system of solutions \(y_1(x)\) and \(y_2(x):\) \[y_0(x) = C_1 y_1(x) + C_2 y_2(x),\] where \(C_1, C_2\) are arbitrary constants. The idea of this method is to replace the constants \(C_1\) and \(C_2\) by functions \(C_1(x)\) and \(C_2(x),\) which are chosen so that the solution satisfies the nonhomogeneous equation. The derivatives of the unknown functions \(C_1(x)\) and \(C_2(x)\) can be determined from the system of equations \[\left\{ \begin{array}{l} C_1'(x)\,y_1(x) + C_2'(x)\,y_2(x) = 0\\ C_1'(x)\,y_1'(x) + C_2'(x)\,y_2'(x) = f(x) \end{array} \right.\] The main determinant of this system is the Wronskian of the functions \(y_1\) and \(y_2,\) which is nonzero due to the linear independence of the solutions \(y_1\) and \(y_2.\) Therefore, this system of equations always has a unique solution.
The final formulas for \(C_1'(x)\) and \(C_2'(x)\) have the form \[C_1'(x) = -\frac{y_2(x)f(x)}{W_{y_1,y_2}(x)}, \qquad C_2'(x) = \frac{y_1(x)f(x)}{W_{y_1,y_2}(x)}.\] When using the method of variation of parameters, it is important to remember that the function \(f(x)\) must correspond to the differential equation in standard form, that is, the coefficient \(a_0(x)\) of the second derivative must be equal to \(1.\) Furthermore, knowing the derivatives \(C_1'(x)\) and \(C_2'(x),\) one can find the functions \(C_1(x)\) and \(C_2(x):\) \[C_1(x) = -\int \frac{y_2(x)f(x)}{W_{y_1,y_2}(x)}\,dx + A_1, \qquad C_2(x) = \int \frac{y_1(x)f(x)}{W_{y_1,y_2}(x)}\,dx + A_2,\] where \(A_1,\) \(A_2\) are constants of integration.
Then the general solution of the original nonhomogeneous equation is expressed by the formula \[y(x) = C_1(x)y_1(x) + C_2(x)y_2(x) = \left[-\int \frac{y_2(x)f(x)}{W_{y_1,y_2}(x)}\,dx + A_1\right] y_1(x) + \left[\int \frac{y_1(x)f(x)}{W_{y_1,y_2}(x)}\,dx + A_2\right] y_2(x) = A_1 y_1(x) + A_2 y_2(x) + Y(x),\] where \[Y(x) = y_2(x)\int \frac{y_1(x)f(x)}{W_{y_1,y_2}(x)}\,dx - y_1(x)\int \frac{y_2(x)f(x)}{W_{y_1,y_2}(x)}\,dx\] denotes a particular solution of the nonhomogeneous equation.
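As a concrete check of these formulas, here is a sketch using sympy on an example of my own choosing (not from the text): for \(y'' + y = \sec x\) the homogeneous solutions are \(y_1 = \cos x\) and \(y_2 = \sin x,\) and the formulas above produce a particular solution.

```python
# Sketch: variation of parameters for y'' + y = sec(x), checked symbolically.
import sympy as sp

x = sp.symbols('x')
y1, y2, f = sp.cos(x), sp.sin(x), sp.sec(x)

# Wronskian W = y1*y2' - y2*y1' (here it equals 1)
W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))

# the formulas for C1'(x) and C2'(x) from the text
C1p = -y2 * f / W
C2p = y1 * f / W

# particular solution Y = C1(x)*y1 + C2(x)*y2 (integration constants dropped)
Y = sp.integrate(C1p, x) * y1 + sp.integrate(C2p, x) * y2

# Y must satisfy the nonhomogeneous equation: Y'' + Y - sec(x) == 0
residual = sp.simplify(sp.diff(Y, x, 2) + Y - f)
print(sp.simplify(Y))
print(residual)  # 0
```

The residual simplifying to zero confirms that \(Y\) is a particular solution; adding \(A_1 \cos x + A_2 \sin x\) gives the general solution.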
Sakurai's "Advanced Quantum Mechanics" states in Eq. (2.116) that the density of states of a single photon with $\vec k$ vector pointing into the solid angle $d\Omega$ is given by \begin{equation} \rho_{d\Omega}(E) = \frac{V\omega^2}{(2\pi)^3}\frac{d\Omega}{\hbar c^3}. \qquad (1) \end{equation} This formula seems to be correct; I also found it in two other books. I found a derivation of this formula in the book "Quantum Mechanics in Chemistry" by Schatz/Ratner, which I sketch in the following: We assume that the photon is enclosed in a large cube of side length $L$ such that its volume is $V=L^3$. The "wavefunction" of the photon is a plane wave $e^{i\vec k \cdot \vec r}$. Because the wavefunction has to satisfy periodic boundary conditions, we have a quantization of the wave vector: $\vec k = \frac{2\pi}{L}\vec n$ with $n_{x,y,z} = 0, \pm 1, \pm 2, \dots$ This means that the number of states per unit "volume" in $\vec k$ space is given by $(\frac{L}{2\pi})^3 = \frac{V}{(2\pi)^3}$. Since the relation between energy and wavenumber is $E=\hbar c k$, the volume in $\vec k$ space of all states with energy between $E$ and $E+dE$ and wave vector pointing into the solid angle $d\Omega$ is given by $\frac{dE}{\hbar c} \times k^2 d\Omega$. Multiplying this $\vec k$-space volume by the number of states per unit volume, we arrive at Eq. (1). My question is: In the derivation we completely neglected the polarization/helicity degree of freedom of the photon. Shouldn't the DOS be twice as big?
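The mode-counting step of this derivation (before any polarization factor) can be checked numerically: count the allowed $\vec k = \frac{2\pi}{L}\vec n$ in a spherical shell and compare with $\frac{V}{(2\pi)^3}\,4\pi k^2\,dk$, which is Eq. (1) integrated over the full solid angle after substituting $dE = \hbar c\,dk$. A rough sketch (the box size and shell are arbitrary choices made for this check):

```python
# Sketch: brute-force count of modes k = (2*pi/L)*n in a spherical k-shell,
# compared with the continuum mode density (L/2pi)^3 over the shell volume.
import numpy as np

L = 1.0                                           # box side (arbitrary units)
k_lo, k_hi = 30 * 2*np.pi / L, 31 * 2*np.pi / L   # shell in k-space

n = np.arange(-40, 41)
nx, ny, nz = np.meshgrid(n, n, n, indexing='ij')
k = (2*np.pi/L) * np.sqrt(nx**2 + ny**2 + nz**2)
count = np.count_nonzero((k >= k_lo) & (k < k_hi))

k_mid = 0.5 * (k_lo + k_hi)
predicted = (L/(2*np.pi))**3 * 4*np.pi * k_mid**2 * (k_hi - k_lo)

print(count, round(predicted))  # these agree to within a few percent
```

Note that this count contains no polarization factor, which is consistent with reading Eq. (1) as a per-polarization density of states.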
Assuming these are constant voltages, you need to solve Laplace's equation,$$\nabla^2 \phi = 0,$$ for the potential $\phi$, with appropriate boundary conditions. That gives you the electric field, $$\mathbf{E} = -\nabla \phi,$$ from which you get the current density by Ohm's law in differential form $$\mathbf{J} = \sigma \mathbf{E},$$ where $\sigma$ is the ... Actually, the LC circuit is the cause of LC oscillation. When you apply KVL you get $Q/C = -L\,dI/dt$ with $I = dQ/dt$, so after differentiating we get $L\,d^2Q/dt^2 = -Q/C$, which looks just like the SHM equation $a=-\omega^2x$; hence $\omega = 1/\sqrt{LC}$, and the energy oscillates. At any instant the total energy equals the energy stored in the capacitor plus the energy ... It is always true that the power supplied/used by a circuit element is given by $P=IV$. If you are interested in the power dissipated by a single, ohmic resistor such that $V=IR$, then either relation you ask about is sufficient, i.e. $P=I^2R$ and $P=V^2/R$ will give you the same result as $P=IV$. However, if you are looking at the power consumed by ... When you are calculating the power consumption of some electrical device, you will find that $P=f(I,V,R)$, where you only need to know two of the three variables $I$, $V$, and $R$. The two known values will determine which equation is appropriate. For example: $P=IV$, $P=I^2R$, $P=\frac{V^2}{R}$. The instant the switch is closed, the current in the circuit $I$ is zero because the changing current in the circuit $\frac{dI}{dt}$ induces an emf in the inductor, $L\frac{dI}{dt}$, which exactly opposes the voltage across the capacitor, $V = \frac QC$. As time progresses the voltage across the capacitor decreases and so must the emf induced ... An inductor "resists" changes in the current through the energy required to build up the magnetic field.
Much like a capacitor "resists" changes in voltage through the energy stored in the electric field between the plates. If you connect an inductor and a resistor in series, you will get a charge/discharge curve for the current, as with a capacitor+... Multimeters I'm aware of need to draw a little current from the circuit, without affecting the parameter being measured, in order to make the measurement. You don't have a complete circuit. If you made the measurement across the terminals of one of the batteries it would give you the voltage, since the circuit is completed within the battery. Hope this ... From your question I assume $X$ and $Y$ are named points in a circuit. What the question really means is: if you applied a voltage $U$ between the points $X$ and $Y$, what is the value of $R$ such that $U = R I$, where $I$ is the current flowing between $X$ and $Y$? So effectively it is asking you to find the total resistance of all paths connecting $X$ and $Y$. ... You have the logical order backwards. In circuit theory power is primary and energy is derived from it. So you start with $P=IV$ and from that you can derive $E=\int P\ dt=\int IV \ dt$. If $V$ is constant then that works out to $QV$, but that is a derived result assuming constant $V$. I was wondering why the current would remain the same for the portion of the wire even though the radius has decreased, which would restrict flow and therefore current. Electric current $i(t)$ through a surface is defined as the rate of charge transport through that surface, or $$i(t)=\frac{dq(t)}{dt}$$ where $q(t)$ denotes instantaneous charge. If ... In terms of particle electrons, consider a cross section of the wire. The number of electrons passing any cross section per unit time is the same, assuming that the electrons are not pooling up anywhere. For this to occur, the density of the electrons in the thinner section has to be greater than in the thicker section. Think in terms of the flow lines of ...
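The LC-oscillation claim quoted above, $\omega = 1/\sqrt{LC}$ with the total energy constant as it sloshes between the capacitor and the inductor, can be checked numerically. A small sketch with made-up component values:

```python
# Sketch: ideal LC circuit, L*Q'' = -Q/C, so Q(t) = Q0*cos(w*t) with
# w = 1/sqrt(L*C). Component values are illustrative, not from the text.
import numpy as np

Lh, C = 1e-3, 1e-6               # 1 mH, 1 uF (hypothetical values)
w = 1 / np.sqrt(Lh * C)          # angular frequency 1/sqrt(LC)

Q0 = 1e-6                        # initial charge on the capacitor
t = np.linspace(0, 2*np.pi/w, 1000)   # one full period
Q = Q0 * np.cos(w * t)           # exact solution of the SHM-like equation
I = -Q0 * w * np.sin(w * t)      # current I = dQ/dt

# total energy = capacitor energy + inductor energy; should stay constant
E = Q**2 / (2*C) + Lh * I**2 / 2
print(np.allclose(E, E[0]))  # True: energy only moves between the two elements
```

The constancy of `E` is exactly the "at any instant the total energy is the capacitor's plus the inductor's" statement.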
In the circuit $\mathcal E_{\rm battery}-\mathcal E_{\rm inductor} = V_{\rm resistor} \Rightarrow \mathcal E_{\rm battery} - L\dfrac{dI}{dt} = IR$. On switch-on $I=0$, as the current cannot change instantaneously. At this time $\mathcal E_{\rm battery} = L\left[\dfrac{dI}{dt}\right]_{\rm maximum}$ as $\mathcal E_{\rm inductor}$ cannot be larger than $\... When you say the $di/dt$ is initially "high", you are comparing it with the wrong thing. If there were no inductor and just a resistor, the initial value of $di/dt$ would be (in theory) "infinite", as the current changed instantaneously from $i = 0$ to $i = V/R$. The "back EMF" $e_L$ is initially equal and opposite to the ... Assuming you are talking about a series RL circuit with an ideal inductor, it is correct that $\frac{di}{dt}$ is maximum and the voltage across the inductor is a maximum equal to the battery voltage when the battery is first connected to the circuit. The current is initially zero. It is also correct that the rate of change of current then decreases. However, ... As @Steeven points out, the battery polarity is actually the reverse of that shown. Just like loop currents in loop analysis are generally initially unknown, the center battery emf here is an unknown. For loop currents a direction is initially assumed. If, after solving the loop equations, a loop current turns out to be negative, it simply means the assumed ... This is a "homework and exercise" problem that we generally don't provide solutions to, only guidance. So here is some guidance for you to do additional research. If I remember my filter concepts correctly (it's been a while), this appears to be what is called a low-pass filter, i.e., a filter that permits low-frequency signals at the input (left side) ... For calculations, it depends on the features you want to model. This won't happen in a physical circuit.
If you don't want to get into fine detail and start worrying about propagation delays, then the simplest way to avoid it is to look at the inductance. Any real circuit will have a non-zero inductance. You can think of this as the circuit's mass or ... If there is no voltage change, then how would the capacitor become charged? I don't follow your logic here. It's true that there is a voltage drop across a resistor if there is non-zero current through the resistor. If there is zero volts across the capacitor (at some time), then all of the battery voltage is dropped across the resistor (if you don't see ... In fact, it is the voltage across the capacitor that stops the charging process. There is no problem running current through the capacitor when the voltage across it is zero. When the switch is thrown (not shown), there is a potential difference between the battery terminal and the capacitor. Charges will try to accumulate on the capacitor plate. ... Contrary to what the other answers say, resistors do change their value with time, even the most accurate ones, even at perfectly constant temperature. This is due to various phenomena, e.g. release of internal stresses, contamination from impurities, etc. For instance, National Metrology Institutes keep historical records of the drift of their standard ... Resistors usually have a constant value. They may vary with temperature, but you'd need to know the material the resistor is composed of. If you search on resistors you should be able to find technical sheets for different resistors that will have the information I described. The resistors should be constant in time, so $dR/dt = 0$.
This is a very good question, and you are not alone in finding it difficult to decide which resistors are in series and which resistors are in parallel. So go back one step and think about the reason that you want to know if resistors are in parallel or in series. The reason is that if you can identify series and parallel resistors, you have formulae which ... "If the current must travel down two or more paths, items on those paths are in parallel until those paths reunite." I don't believe this is generally true. Consider the case of one path being a resistor and the other path being two series-connected resistors. For this case, there are no parallel-connected resistors. "If two resistors share the same ... Are these all legitimate ways to determine whether resistors are in series or parallel? The first and third, with the exception of some slight tweaking, are the most common versions I have seen and seem legitimate to me. The other two are, at least to me, problematic, for the following reasons. "If the current must travel down two or more paths, items ... Your first equation is correct for $v_{input}$ making a current through a series resistor $R$ into the feedback input of the opamp. Your second equation is for a capacitance $C_{detector}$ across the opamp inputs and is incorrect, because the voltage $v_{input}$ on $C_{detector}$ at time $t$ is not $\frac {q(t)}{C_{detector}}$. Instead, the opamp has worked ... What exactly is happening above? Note that the $R$ in $$V = RI$$ is ordinarily understood to be a constant. We say something like: for the ideal resistor, the voltage $V$ is proportional to the current $I$, where the constant of proportionality is the resistance $R$. In your post, you wrote an equation involving the work $W$: $$V = \frac{W}{T}\frac{1}... In the relation $V=W/(IT)$, the potential is not exactly inversely proportional to $I$, because if we increase the current $I$ then the work done $W$ will also increase as $I^2$, since $W=I^2RT$.
So $W$ will also increase, and the value of the potential will also increase: $V=W/(IT)$. Let $T$ and $R$ be $1$. Then $V = I^2 \cdot 1 \cdot 1/(I \cdot 1) = I^2/I = I$. If we increase $I$ from 1 to 2 then the potential will ... If $V$ were inversely proportional to $I$, then there would be a constant $k$ such that $$V = \frac k I$$ Sorry, in your formula $$V=\frac W{I T}$$ which is the same as $$V=\frac{\frac W T} I$$ the part $\frac W T$ is not constant. ($\frac W T$ is work per unit time, i.e. power, which is not a constant independent of $V$ and $I$.) ... Ohm's law gives the fundamental relationship between voltage and current for a resistor: $V$ is proportional to $I$, where the proportionality constant is $R$. The other equations are manipulations of various combinations of Ohm's law and power ($P$) or work ($W$). When the various equations are manipulated, the constants of proportionality change and contain other ... The reason that lightning strikes a lightning rod rather than another point is that the lightning rod provides a path of least resistance from it to the ground. This is the reason for the 25 Ω limit in the regulations. However, if you want to use electricity to heat something up, you need to have a high enough resistance in the path of the electricity to ... Lightning can boil water. The reason why many objects explode when struck is that the water they contain vaporises. So there is enough energy available. However, lightning is a very transient phenomenon, so the amount of water you could manage to bring to boiling point, without wasting heat on steam explosions, would depend on passing the heat energy ... The real question here is "how much water?" But let's start with the literal question. Heating water to the boiling point (373 K) just means that the lightning current heats up a resistor in the path to 373 K. That means the resistor shouldn't melt at that temperature. Is that physically possible? Sure - plenty of metals will melt at much higher ...
For most (finite) DC circuits, you count the number of holes in the circuit (six in the case of the cube), define a current around each hole, and then write a voltage loop equation for each. This gives you a number of simultaneous linear equations. If you are allowed to use a laptop computer, a good spreadsheet can solve these using matrix operations. (I ... If you know the formula for the capacitance of two capacitors in parallel and for two capacitors in series, then you can do a simple step-by-step simplification of the diagram as below. I haven't shown all the steps, but start with calculating the capacitance of the two caps in series at the top left of the rectangle (= capacitance shown in blue), then ... The inductance of the inductor, $L$, and the capacitance of the capacitor, $C$, control the rate at which energy oscillates between them. The period of oscillation is $2\pi\sqrt{LC}$, which means that as the magnitude of the inductance gets larger, the period of oscillation gets larger, which can be interpreted as a larger inductance resisting the change in ... For a simple resistor (in which we can ignore possible thermal effects of increasing current), if you double the current then the voltage also doubles, while the resistance remains constant. So $R = V/I$ and $R = (2V)/(2I)$. Resistance is a property of electrical components, not a consequence of the active circuit: a 100-ohm resistor has a resistance of 100 ohms whether it's hooked up to a AAA battery or the power grid. In Ohm's law, the quantity $R$ is typically fixed. So, when you double the current running through an element, the voltage across that element also doubles, ... If we have a given physical resistor, and we want to double the current through it, we do that by doubling the voltage across it. The resistance, at least ideally, stays the same. In the real world, the resistor value will change somewhat due to the temperature of the part rising.
Maybe a few tens of ppm for a high-quality resistor, or several percent if ... The noise observed at a contact is known as chattering. A stronger mechanical contact should impart a steadier electrical contact resistance (ECR) [1]. However, the structure and cleanliness of the surface should also be considered in the design of this setup, to minimise the presence of varying passivating layers. It should be noted that time further plays a ... Indeed the contact resistance can be, as discussed in previous answers, attributed to surface features in terms of asperities and passivating layers. The behaviour of these barriers to conduction depends on contact pressure. Passivating layers are oxides and hydroxides that ubiquitously form on conductor surfaces, and they pose a barrier to electron transport. ... "if you apply an increasing voltage to one plate this will force charged carriers to that plate" I disagree with this. The negative charge carriers will be attracted toward this positive plate but will not move there unless the capacitor fails. The electrons will pack onto the plate plugged into the negative terminal, i.e. where the electrons originate from. ... If during self-induction the inductor produces a current in the opposite direction to that of the battery, in order to resist the change of current through it, then how could the current after some reasonable amount of time get to its peak? If you apply a DC voltage to an ideal inductor, it will never reach its peak current. The current will continue to ... The whole point is that for a normal conductor Lenz can never win. It is true that at the start there is no current, but there is a finite rate of change of current with time, and so with the passage of time the current will change from its initial zero value. Another way of looking at the situation is that if Lenz did stop the change of current, and ...
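The loop-equation recipe quoted earlier (define a current around each hole, write KVL for each, solve the simultaneous linear equations with matrix operations) can be sketched for a hypothetical two-loop circuit; all component values here are invented for illustration:

```python
# Sketch of the loop-current (mesh) method for a made-up two-loop circuit:
# a 10 V battery with R1 = 100 ohm in the left loop, R2 = 200 ohm in the
# right loop, and R3 = 300 ohm shared between the two loops.
import numpy as np

R1, R2, R3, V = 100.0, 200.0, 300.0, 10.0

# KVL around each loop; the unknowns are the loop currents I1 and I2
A = np.array([[R1 + R3, -R3],
              [-R3, R2 + R3]])
b = np.array([V, 0.0])

I1, I2 = np.linalg.solve(A, b)
print(round(I1, 4), round(I2, 4))  # loop currents in amperes
```

The same pattern scales to the six-loop system for the cube: one KVL row per hole, solved in one `np.linalg.solve` call.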
As you will see from the circuit diagram, the resistors are used in a variation of the Wheatstone bridge arrangement. With such an arrangement a balance point, i.e. zero current through the galvanometer, needs to be found. This is achieved by tapping a jockey (a spade-ended conducting contact) along a uniform resistance wire, $A$, until the zero-current position is ... The addition of salt will facilitate the conduction of electricity due to ionization. The extra ${Na^+}$ and ${Cl^-}$ ions will increase the conductivity of the medium. 1) The voltage will be slightly affected if you are considering internal resistance. 2) and 3) The current will increase due to the decrease in resistance of the conduction medium.
Take the set of all vectors $x = (x_1, \cdots, x_n)$ that are solutions to $p_1x_1 + \cdots + p_nx_n = I > 0$. Show that this set has $n-1$ dimensions. I have somehow managed to get myself stuck on the last part of this proof, it seems. I am not using the fact that this set is a hyperplane and that hyperplanes have $n-1$ of the dimensions of the space they are in. It is easy to show that $\{x_1, \cdots, x_n\}$ spans the set we are considering, since $\sum p \cdot x$ is a linear combination and all that. However, $x_n$ can be expressed as a linear combination of $\{x_1, \cdots, x_{n-1}\}$: $$x_n = \frac{I - (p_1x_1 + \cdots + p_{n-1}x_{n-1})}{p_n}$$ So we can remove $x_n$ from the span and the resulting set still spans. Now we want to show $\{x_1, \cdots, x_{n-1}\}$ are linearly independent. That is, if $p_1x_1 + \cdots + p_{n-1}x_{n-1} = 0$, all $p_i = 0$. If the set is spanning and linearly independent, then it is a basis. Since it would have $n-1$ vectors, it would be of dimension $n-1$ and we'd be done. So I note that $p_1x_1 + \cdots + p_{n-1}x_{n-1} = I - p_nx_n$, and that $I > 0$. So I assume there is a case where $I - p_nx_n = 0$ and where $I - p_nx_n \neq 0$. I am not sure how to finish off this proof, which makes me sad, because I think I'm just missing something obvious. Any assistance would be appreciated.
Lord_Farin A mathematics enthusiast caring to help others out. My most knowledgeable areas are logic and category theory. Additional topics of interest are algebraic topology and set theory. When applicable, in the process of answering/helping out, I point people to the community effort ProofWiki, which catalogues and rigorously founds proofs in all areas of mathematics. I am not a native speaker of the English language; please feel free to point out any errors. For those trying to contact me urgently and/or personally, you can direct an e-mail to the following address (all letters lowercase): my-user-name (at) my-affiliated-site (dot) org Member for 7 years Last seen Sep 13 at 8:49 Communities (7) Top network posts 32 explaining the derivative of $x^x$ 29 More informative flagging history for comments 26 Is there a common symbol for concatenating two (finite) sequences? 24 Limit involving power tower: $\lim\limits_{n\to\infty} \frac{n+1}n^{\frac n{n-1}^\cdots}$ 18 Is my proof correct for: $\sqrt[7]{7!} < \sqrt[8]{8!}$ 18 What really is mathematical rigor? How can I be more rigorous? 17 If we have $A \cap B =\{e \}$, then $ab=ba$?
Zhenqi Wang, MSU. New examples of local rigidity of solvable algebraic partially hyperbolic actions. 04/10/2019 3:00 PM - 4:00 PM, C117 Wells Hall. We show $C^\infty$ local rigidity for a broad class of new examples of solvable algebraic partially hyperbolic actions on ${\mathbb G}=\mathbb{G}_1\times\cdots\times \mathbb{G}_k/\Gamma$, where $\mathbb{G}_1$ is of the following type: $SL(n, {\mathbb R})$, $SO_o(m,m)$, $E_{6(6)}$, $E_{7(7)}$ and $E_{8(8)}$, with $n\geq3$, $m\geq 4$. These examples include rank-one partially hyperbolic actions. The method of proof is a combination of a KAM-type iteration scheme and representation theory. The principal difference from previous work that used a KAM scheme is the very general nature of the proof: no specific information about unitary representations of ${\mathbb G}$ or ${\mathbb G}_1$ is required. This is a continuation of the last talk.
It's in fact an if and only if: each prime in "the" (due to uniqueness of prime factorization up to multiplication by units) prime factorization of $k$ occurs an even number of times, if and only if $k$ is a perfect square. Suppose each prime in the prime factorization of $k$ occurs an even number of times, say $k = \prod p_i^{r_i}$, where each $r_i$ is even. Then you can see that if $b = \prod p_i^{\frac {r_i} 2}$, then $b$ is a well-defined integer, and $b^2 = k$. Conversely, note that if $k = m^2$ is a perfect square, then the prime factorization of $m = \prod p_i^{r_i}$ suggests the prime factorization $k = \prod p_i^{2r_i}$. Since prime factorization is unique, it follows that $k$ can indeed only be prime factorized in the above form, and hence every prime appears an even number of times in the prime factorization. Alternately, we can also go this way: suppose $p^r$ divides $m^2$, where $r$ is maximal. If $r$ is even, then we are done. Otherwise, note that $r = 2k+1$, and we can write $p \ \mid\ \frac {m^2}{p^{2k}}$, so that $p$ divides a perfect square. Hence, from here, using Euclid's lemma (that $p$ divides $ab$ implies $p$ divides $a$ or $p$ divides $b$), taking $a=b=\frac m{p^k}$, we get $p^{k+1} \mid m$, or that $p^{2k+2} \mid m^2$, a contradiction to the maximality of $r$. On a little inspection, the second proof is just a longer-winded version of the first proof, but two is better than one, I suppose. EXTENSION: If $m$ is a perfect $k$th power, then every prime that appears in the factorization does so with multiplicity a multiple of $k$.
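The iff above is easy to check empirically. A small sketch (the factorization helper is ad hoc, written just for this check):

```python
# Empirical check: k is a perfect square iff every prime in its
# factorization appears an even number of times.
from math import isqrt

def prime_exponents(k):
    """Return {prime: multiplicity} for the factorization of k >= 2."""
    exps = {}
    d = 2
    while d * d <= k:
        while k % d == 0:
            exps[d] = exps.get(d, 0) + 1
            k //= d
        d += 1
    if k > 1:
        exps[k] = exps.get(k, 0) + 1
    return exps

for k in range(2, 10000):
    all_even = all(r % 2 == 0 for r in prime_exponents(k).values())
    is_square = isqrt(k) ** 2 == k
    assert all_even == is_square
print("verified for 2 <= k < 10000")
```

The same loop with `r % 3 == 0` against cubes illustrates the EXTENSION.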
If you look at the cross $C\subset \mathbb A^2_k$ given by $xy=0$ in the affine plane over the field $k$, you see or compute that it is exceptional at $O=(0,0)$ for many (obviously not independent) reasons: $\bullet$ The gradient of $xy$ vanishes at $O$. $\bullet$ Two irreducible components pass through $O$. $\bullet$ If $k=\mathbb C$, the complement of $O$ is disconnected. $\bullet$ The tangent cone of $C$ at $O$ is not a line. $\bullet$ The maximal ideal $(x,y)\subset \mathcal O_{C,O}$ cannot be generated by just one element. $\bullet$ The sheaf $\Omega_{C/k}$ is not locally free. $\bullet$ The $k$-morphism $Spec (k[t]/\langle t^2\rangle) \to C$ given by $x=t,\; y=t$ cannot be lifted to the overscheme $Spec (k[t]/\langle t^2\rangle) \subset Spec (k[t]/\langle t^3\rangle)$. This exceptional character of $O$ is covered by several negative adjectives: non-smooth, non-regular, non-manifold-like, singular, ... Although I know that the purely algebraic condition for singularity (in terms of the number of generators of the maximal ideal of a local ring) is due to Zariski and that smoothness in terms of infinitesimal liftings is due to Grothendieck, I don't know the earlier history of the concept of singularity. So my question is: who first explicitly considered the concept of singularity for varieties, why the interest, and what was the definition? Edit: First of all, thanks for the interesting comments. It is certainly plausible that Newton knew what a singularity was, but from what I have read (very little) his preoccupation seems to have been the classification of curves by degree. I am curious about when he or others first wrote down the dichotomy between singular and non-singular varieties, in analogy with Descartes's sharp distinction between mechanical (=transcendental) curves and geometric (=algebraic) curves (see here). [By the way, if you know French you will be delighted by Descartes's old-fashioned but easily understandable language.]
Let $M$ be the $2$-dimensional hyperbolic manifold. Let $K(t,x,y)$ be the kernel appearing in the fundamental solution of the Cauchy problem $$(\partial^2_t-\Delta_M)u=0,\text{ on }\mathbb{R}^+\times M,$$ along with suitable initial conditions. Let $\chi\in C^\infty_c(\mathbb{R})$. For a constant $\lambda>0$ I am looking for the sup norm of $$L(x,y):=\int_{\mathbb{R}}\chi(t)e^{i\lambda t}K(t,x,y)dt$$ in terms of $\lambda$, where $x,y$ run over $M$. Sogge's book "Hangzhou Lectures on Eigenfunctions of the Laplacian" has similar content in Chapter 3 [see Section 3.6]. He used the Hadamard parametrix to prove that if $\mathrm{dist}_M(x,y)=r$ [eqns (3.6.7), (3.6.8), (3.6.15)], then $$|L(x,y)| \ll \begin{cases} \lambda, \text{ if }x=y\\ \sqrt{\lambda/r},\text{ if }r\ge 1 \end{cases}.$$ I am not sure how to get the sup norm from here. Equivalently, is it possible to prove that $L(x,y)$ attains its supremum on $x=y$?
Suppose that a phenomenon is described by the system of \(n\) differential equations \[\frac{dx_i}{dt} = f_i\left( t, x_1, x_2, \ldots, x_n \right), \quad i = 1,2,\ldots,n,\] with initial conditions \[x_i\left( t_0 \right) = x_{i0}, \quad i = 1,2,\ldots,n.\] We assume that the functions \(f_i\left( t, x_1, x_2, \ldots, x_n \right)\) are defined and continuous, together with their partial derivatives, on the set \(\left\{ t \in \left[ t_0, +\infty \right), \mathbf{X} \in \mathbb{R}^n \right\}.\) Then without loss of generality we may assume that the initial time is zero: \(t_0 = 0.\) It is convenient to write the system of differential equations in vector form: \[\mathbf{X}' = \mathbf{f}\left( t, \mathbf{X} \right), \quad \text{where} \quad \mathbf{X} = \left( x_1, x_2, \ldots, x_n \right), \quad \mathbf{f} = \left( f_1, f_2, \ldots, f_n \right).\] In real systems, the initial conditions are specified only with some precision. This raises the obvious question: how do small changes in the initial conditions affect the behavior of solutions for large times, in the extreme case as \(t \to \infty\)? If the trajectory of the system varies little under small perturbations of the initial position, we say that the motion of the system is stable. A mathematically rigorous definition of stability using \(\varepsilon\)-\(\delta\) notation was proposed in \(1892\) by the Russian mathematician A. M. Lyapunov (\(1857-1918\)). Let us consider in more detail the concept of stability introduced by Lyapunov.
Lyapunov Stability The solution \(\boldsymbol{\varphi} \left( t \right)\) of the system of differential equations \[\mathbf{X’} = \mathbf{f}\left( {t,\mathbf{X}} \right)\] with initial conditions \[\mathbf{X}\left( 0 \right) = {\mathbf{X}_0}\] is stable (in the sense of Lyapunov) if for any \(\varepsilon > 0\) there exists \(\delta = \delta \left( \varepsilon \right) > 0\) such that if \[ {\left| {\mathbf{X}\left( 0 \right) - \boldsymbol{\varphi} \left( 0 \right)} \right| < \delta ,\;\;}\kern0pt {\text{then}\;\;\left| {\mathbf{X}\left( t \right) - \boldsymbol{\varphi} \left( t \right)} \right| < \varepsilon } \] for all \(t \ge 0.\) Otherwise, the solution \(\boldsymbol{\varphi} \left( t \right)\) is said to be unstable. As the norm measuring the distance between two points one can use, for example, the Euclidean norm \(\left\| {{\mathbf{x}_e}} \right\|\) or the Manhattan norm \(\left\| {{\mathbf{x}_m}} \right\|:\) \[\left\| {{\mathbf{x}_e}} \right\| = \sqrt {\sum\limits_{i = 1}^n {{{\left| {{x_i}} \right|}^2}} } ,\;\;\left\| {{\mathbf{x}_m}} \right\| = \sum\limits_{i = 1}^n {\left| {{x_i}} \right|} .\] In the case \(n = 2,\) Lyapunov stability means that any trajectory \({\mathbf{X}\left( t \right)},\) which starts in the \(\delta \left( \varepsilon \right)\)-neighborhood of the point \({\boldsymbol{\varphi} \left( 0 \right)},\) remains inside a tube of radius at most \(\varepsilon\) for all \(t \ge 0\) (Figure \(1\)).
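Both norms above are easy to compute directly (the helper names below are our own). They are equivalent in the sense that \(\left\| x \right\|_e \le \left\| x \right\|_m \le \sqrt{n}\,\left\| x \right\|_e\), so either may be used in the stability definitions without changing which solutions count as stable.

```python
import math

# Euclidean and Manhattan norms on R^n (function names are ours).
def euclidean_norm(x):
    return math.sqrt(sum(abs(xi) ** 2 for xi in x))

def manhattan_norm(x):
    return sum(abs(xi) for xi in x)

x = [3.0, -4.0]
print(euclidean_norm(x))  # → 5.0
print(manhattan_norm(x))  # → 7.0
```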
Asymptotic and Exponential Stability If the solution \(\boldsymbol{\varphi} \left( t \right)\) of the system of differential equations is not only stable in the sense of Lyapunov, but also satisfies \[\lim\limits_{t \to \infty } \left| {\mathbf{X}\left( t \right) - \boldsymbol{\varphi} \left( t \right)} \right| = 0\] provided that \[\left| {\mathbf{X}\left( 0 \right) - \boldsymbol{\varphi} \left( 0 \right)} \right| \lt \delta,\] then we say that the solution \(\boldsymbol{\varphi} \left( t \right)\) is asymptotically stable. In this case, all solutions that are sufficiently close to \(\boldsymbol{\varphi} \left( 0 \right)\) at the initial time gradually converge to \(\boldsymbol{\varphi} \left( t \right)\) as \(t\) increases. Schematically, this is shown in Figure \(2.\) If the solution \(\boldsymbol{\varphi} \left( t \right)\) is asymptotically stable and, in addition, the condition \[\left| {\mathbf{X}\left( 0 \right) - \boldsymbol{\varphi} \left( 0 \right)} \right| \lt \delta\] implies \[\left| {\mathbf{X}\left( t \right) - \boldsymbol{\varphi} \left( t \right)} \right| \le \alpha \left| {\mathbf{X}\left( 0 \right) - \boldsymbol{\varphi} \left( 0 \right)} \right|{e^{ - \beta t}}\] for all \(t \ge 0,\) we say that the solution \(\boldsymbol{\varphi} \left( t \right)\) is exponentially stable. In this case all solutions that are close to \(\boldsymbol{\varphi} \left( 0 \right)\) at the initial time converge to \(\boldsymbol{\varphi} \left( t \right)\) at a rate no slower than the exponential function with parameters \(\alpha,\) \(\beta\) (Figure \(3\)). The general theory of stability, in addition to stability in the sense of Lyapunov, contains many other concepts and definitions of stable motion. In particular, the concepts of orbital and structural stability are important.
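For a concrete check, take the scalar equation \(x' = -\beta x\) (a toy example of our own, not from the text): two solutions through nearby initial points satisfy the exponential bound with \(\alpha = 1\), which the sketch below verifies on a time grid using the exact solutions.

```python
import math

beta = 0.5            # decay rate (illustrative choice)
x0, phi0 = 1.2, 1.0   # two nearby initial conditions

# Exact solutions of x' = -beta*x are x0*exp(-beta*t) and phi0*exp(-beta*t),
# so the gap is |x0 - phi0| * exp(-beta*t): the bound holds with alpha = 1.
for k in range(50):
    t = 0.2 * k
    gap = abs(x0 * math.exp(-beta * t) - phi0 * math.exp(-beta * t))
    bound = abs(x0 - phi0) * math.exp(-beta * t)
    assert gap <= bound + 1e-12
```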
Orbital Stability Orbital stability describes the behavior of a closed trajectory (orbit) under the action of small external perturbations. Consider the autonomous system \[ {\frac{{d{x_i}}}{{dt}} = {f_i}\left( {{x_1},{x_2}, \ldots ,{x_n}} \right),\;\;}\kern0pt {{x_i}\left( {{t_0}} \right) = {x_{i0}},}\;\; {i = 1,2, \ldots ,n,} \] that is, a system of equations whose right-hand side does not contain the independent variable \(t.\) In vector form, the autonomous system is written as \[ {\mathbf{X’}\left( t \right) = \mathbf{f}\left( \mathbf{X} \right),\;\;\text{where}\;\;}\kern0pt {\mathbf{X} = \left( {{x_1},{x_2}, \ldots ,{x_n}} \right),\;\;}\kern0pt {\mathbf{f} = \left( {{f_1},{f_2}, \ldots ,{f_n}} \right).} \] Let \(\boldsymbol{\varphi} \left( t \right)\) be a periodic solution of the given autonomous system, that is, one whose trajectory is a closed curve (orbit). If for any \(\varepsilon > 0\) there is a constant \(\delta = \delta \left( \varepsilon \right) > 0\) such that the trajectory of any solution \(\mathbf{X}\left( t \right)\) starting in the \(\delta\)-neighborhood of the trajectory \(\boldsymbol{\varphi} \left( t \right)\) remains in the \(\varepsilon\)-neighborhood of the trajectory \(\boldsymbol{\varphi} \left( t \right)\) for all \(t \ge 0,\) then the trajectory \(\boldsymbol{\varphi} \left( t \right)\) is called orbitally stable (Figure \(4\)). By analogy with asymptotic stability in the sense of Lyapunov, one can also introduce the concept of asymptotic orbital stability. This type of motion occurs, for example, in systems with a limit cycle. Structural Stability Suppose that we have two autonomous systems with similar properties, in the sense that their phase portraits have the same singular points and geometrically similar trajectories. Such systems can be called structurally stable. In the strict definition, it is required that these systems be orbitally topologically equivalent, i.e.
there must be a homeomorphism (this terrible word means a one-to-one continuous mapping whose inverse is also continuous) that converts the family of trajectories of the first system into the family of trajectories of the second system while preserving the direction of motion. In these terms, structural stability is defined as follows. Consider an autonomous system, which in the unperturbed and perturbed state is described, respectively, by the two equations: \[\mathbf{X’} = \mathbf{f}\left( \mathbf{X} \right),\] \[\mathbf{X’} = \mathbf{f}\left( \mathbf{X} \right) + \varepsilon\mathbf{g}\left( \mathbf{X} \right).\] If for any bounded and continuously differentiable vector function \(\mathbf{g}\left( \mathbf{X} \right)\) there exists a number \(\varepsilon > 0\) such that the trajectories of the unperturbed and perturbed systems are orbitally topologically equivalent, then the system is called structurally stable. Reduction to the Problem of Stability of the Zero Solution Let an arbitrary non-autonomous system \[\mathbf{X’} = \mathbf{f}\left( {t,\mathbf{X}} \right)\] be given with the initial condition \(\mathbf{X}\left( 0 \right) = {\mathbf{X}_0}\) (an IVP, or Cauchy problem). Here the vector-valued function \(\mathbf{f}\) is defined on the set \(\left\{ {t \in \left[ {{t_0}, + \infty } \right),\;\mathbf{X} \in {\Re^n}} \right\}.\) Suppose that the system has a solution \(\boldsymbol{\varphi} \left( t \right),\) the stability of which is to be examined.
The stability analysis is simplified if we consider the perturbations \[\mathbf{Z}\left( t \right) = \mathbf{X}\left( t \right) - \boldsymbol{\varphi} \left( t \right),\] for which we obtain the differential equation \[\mathbf{Z’}\left( t \right) = \mathbf{g}\left( {t,\mathbf{Z}} \right),\;\;\text{where}\;\;\mathbf{g}\left( {t,\mathbf{Z}} \right) = \mathbf{f}\left( {t,\mathbf{Z} + \boldsymbol{\varphi}} \right) - \mathbf{f}\left( {t,\boldsymbol{\varphi}} \right).\] Obviously, the last equation is satisfied by the trivial solution \[\mathbf{Z}\left( t \right) \equiv \mathbf{0},\] which corresponds to the identity \[\mathbf{X}\left( t \right) \equiv \boldsymbol{\varphi} \left( t \right).\] Thus, the study of stability of the solution \(\boldsymbol{\varphi} \left( t \right)\) can be replaced by the study of stability of the trivial solution \(\mathbf{Z} \equiv \mathbf{0}.\) Stability of Linear Systems The linear system \[\mathbf{X’} = A\left( t \right)\mathbf{X} + \mathbf{f}\left( t \right)\] is said to be stable if all its solutions are stable in the sense of Lyapunov. It turns out that the non-homogeneous linear system is stable for any free term \(\mathbf{f}\left( t \right)\) if the zero solution of the associated homogeneous system \[\mathbf{X’} = A\left( t \right)\mathbf{X}\] is stable. Therefore, when investigating stability in the class of linear systems, it is sufficient to analyze the homogeneous systems. In the simplest case, when the coefficient matrix \(A\) is constant, the stability conditions are formulated in terms of the eigenvalues of the matrix \(A.\) Consider the homogeneous linear system \[\mathbf{X’} = A\mathbf{X},\] where \(A\) is a constant matrix of size \(n \times n.\) Such a system (which is also autonomous) has the zero solution \(\mathbf{X}\left( t \right) = \mathbf{0}.\) The stability of this solution is determined by the following theorems. Let \({\lambda _i}\) be the eigenvalues of \(A.\) Theorem \(1\).
A linear homogeneous system with constant coefficients is stable in the sense of Lyapunov if and only if all eigenvalues \({\lambda _i}\) of \(A\) satisfy the condition \[\text{Re}\left[ {{\lambda _i}} \right] \le 0\;\;\left( {i = 1,2, \ldots ,n} \right)\] and, for each eigenvalue whose real part is zero, the algebraic and geometric multiplicities coincide (i.e. the corresponding Jordan blocks are of size \(1 \times 1\)). Theorem \(2\). A linear homogeneous system with constant coefficients is asymptotically stable if and only if all eigenvalues \({\lambda _i}\) have negative real parts: \[\text{Re}\left[ {{\lambda _i}} \right] \lt 0\;\;\left( {i = 1,2, \ldots ,n} \right).\] Theorem \(3\). A linear homogeneous system with constant coefficients is unstable if at least one of the following conditions is satisfied: The matrix \(A\) has an eigenvalue \({\lambda _i}\) with a positive real part; The matrix \(A\) has an eigenvalue \({\lambda _i}\) with zero real part whose geometric multiplicity is less than its algebraic multiplicity. The above theorems allow us to study the stability of linear systems with constant coefficients once the eigenvalues and eigenvectors are known. However, in many cases the character of stability can be determined by a stability criterion without solving the system of equations. One such criterion is the Routh-Hurwitz stability criterion, which allows one to judge the stability of a system knowing only the coefficients of the characteristic equation of the matrix \(A.\) Stability in the First Approximation Consider a nonlinear autonomous system \[\mathbf{X’} = \mathbf{f}\left( \mathbf{X} \right).\] Suppose that the system has the trivial solution \(\mathbf{X} = \mathbf{0},\) which we will investigate for stability.
Assuming that the functions \({f_i}\left( \mathbf{X} \right)\) are twice continuously differentiable in a neighborhood of the origin, we can expand the right side in a Maclaurin series: \[ {\frac{{d{x_1}}}{{dt}} = \frac{{\partial {f_1}}}{{\partial {x_1}}}\left( 0 \right){x_1} + \frac{{\partial {f_1}}}{{\partial {x_2}}}\left( 0 \right){x_2} + \cdots } + {\frac{{\partial {f_1}}}{{\partial {x_n}}}\left( 0 \right){x_n} } + {{R_1}\left( {{x_1},{x_2}, \ldots ,{x_n}} \right),} \] \[ {\frac{{d{x_2}}}{{dt}} = \frac{{\partial {f_2}}}{{\partial {x_1}}}\left( 0 \right){x_1} + \frac{{\partial {f_2}}}{{\partial {x_2}}}\left( 0 \right){x_2} + \cdots } + {\frac{{\partial {f_2}}}{{\partial {x_n}}}\left( 0 \right){x_n} } + {{R_2}\left( {{x_1},{x_2}, \ldots ,{x_n}} \right),} \] \[\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\cdots\] \[ {\frac{{d{x_n}}}{{dt}} = \frac{{\partial {f_n}}}{{\partial {x_1}}}\left( 0 \right){x_1} + \frac{{\partial {f_n}}}{{\partial {x_2}}}\left( 0 \right){x_2} + \cdots } + {\frac{{\partial {f_n}}}{{\partial {x_n}}}\left( 0 \right){x_n} } + {{R_n}\left( {{x_1},{x_2}, \ldots ,{x_n}} \right).} \] Here the terms \({R_i}\) denote terms of the second (and higher) order of smallness with respect to the coordinate functions \({{x_1},{x_2}, \ldots ,{x_n}}.\) Returning to vector-matrix form, we obtain: \[\mathbf{X’} = J\mathbf{X} + \mathbf{R}\left( \mathbf{X} \right),\] where the Jacobian \(J\) is given by \[J = \left[ {\begin{array}{*{20}{c}} {\frac{{\partial {f_1}}}{{\partial {x_1}}}}&{\frac{{\partial {f_1}}}{{\partial {x_2}}}}& \cdots &{\frac{{\partial {f_1}}}{{\partial {x_n}}}}\\ {\frac{{\partial {f_2}}}{{\partial {x_1}}}}&{\frac{{\partial {f_2}}}{{\partial {x_2}}}}& \cdots &{\frac{{\partial {f_2}}}{{\partial {x_n}}}}\\ \vdots & \vdots & \ddots & \vdots \\ {\frac{{\partial {f_n}}}{{\partial {x_1}}}}&{\frac{{\partial {f_n}}}{{\partial {x_2}}}}& \cdots &{\frac{{\partial {f_n}}}{{\partial {x_n}}}} \end{array}} \right].\] The values of the
partial derivatives in this matrix are calculated at the point of the series expansion, i.e. in this case at zero. In many cases, instead of the original nonlinear autonomous system, we can consider and investigate for stability the corresponding linearized system, also called the system of equations of the first approximation. The stability of such a system is determined by the following rules: If all eigenvalues of the Jacobian \(J\) have negative real parts, then the zero solution \(\mathbf{X} = \mathbf{0}\) of the original and linearized systems is asymptotically stable. If at least one eigenvalue of the Jacobian \(J\) has a positive real part, then the zero solution \(\mathbf{X} = \mathbf{0}\) of the original and linearized systems is unstable. In the critical cases, when some eigenvalues have zero real part, one should use other methods of stability analysis. The problems on stability in the first approximation are given here. Lyapunov Functions One of the powerful tools for stability analysis of systems of differential equations, including nonlinear systems, is the method of Lyapunov functions. This technique is discussed in detail on the separate web page “Method of Lyapunov Functions“. Solved Problems Click a problem to see the solution.
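The first-approximation test described above is easy to automate for planar systems, where the eigenvalue conditions reduce to a trace/determinant check: both eigenvalues of a real \(2 \times 2\) Jacobian have negative real parts exactly when its trace is negative and its determinant is positive. The sketch below applies this to the damped pendulum \(x_1' = x_2,\; x_2' = -\sin x_1 - x_2\), a standard example that is not taken from the text.

```python
# Re(lambda) < 0 for both eigenvalues of a real 2x2 matrix
# iff trace < 0 and det > 0 (the Routh-Hurwitz conditions for n = 2).
def asymptotically_stable_2x2(J):
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return tr < 0 and det > 0

# Jacobian of x1' = x2, x2' = -sin(x1) - x2 at the origin.
J_pendulum = [[0.0, 1.0], [-1.0, -1.0]]
print(asymptotically_stable_2x2(J_pendulum))  # → True

# Undamped oscillator: trace is 0, a critical case, so the test fails.
J_oscillator = [[0.0, 1.0], [-1.0, 0.0]]
print(asymptotically_stable_2x2(J_oscillator))  # → False
```

The oscillator case illustrates the caveat in the rules above: a zero real part is a critical case, and the linearization alone cannot decide stability.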
Problem 15 Let $p_1(x), p_2(x), p_3(x), p_4(x)$ be (real) polynomials of degree at most $3$. Which (if any) of the following two conditions is sufficient for the conclusion that these polynomials are linearly dependent? (a) At $1$ each of the polynomials has the value $0$. Namely $p_i(1)=0$ for $i=1,2,3,4$. (b) At $0$ each of the polynomials has the value $1$. Namely $p_i(0)=1$ for $i=1,2,3,4$. (University of California, Berkeley) Problem 12 Let $A$ be an $n \times n$ real matrix. Prove the following. (a) The matrix $AA^{\trans}$ is a symmetric matrix. (b) The set of eigenvalues of $A$ and the set of eigenvalues of $A^{\trans}$ are equal. (c) The matrix $AA^{\trans}$ is non-negative definite. (An $n\times n$ matrix $B$ is called non-negative definite if for any $n$-dimensional vector $\mathbf{x}$, we have $\mathbf{x}^{\trans}B \mathbf{x} \geq 0$.) (d) All the eigenvalues of $AA^{\trans}$ are non-negative. Problem 11 An $n\times n$ matrix $A$ is called nilpotent if $A^k=O$ for some positive integer $k$, where $O$ is the $n\times n$ zero matrix. Prove the following. (a) The matrix $A$ is nilpotent if and only if all the eigenvalues of $A$ are zero. (b) The matrix $A$ is nilpotent if and only if $A^n=O$. Problem 9 Let $A$ be an $n\times n$ matrix and let $\lambda_1, \dots, \lambda_n$ be its eigenvalues. Show that (1) $$\det(A)=\prod_{i=1}^n \lambda_i$$ (2) $$\tr(A)=\sum_{i=1}^n \lambda_i$$ Here $\det(A)$ is the determinant of the matrix $A$ and $\tr(A)$ is the trace of the matrix $A$. Namely, prove that (1) the determinant of $A$ is the product of its eigenvalues, and (2) the trace of $A$ is the sum of the eigenvalues. Problem 5 Let $T : \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation. Let $\mathbf{0}_n$ and $\mathbf{0}_m$ be zero vectors of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. Show that $T(\mathbf{0}_n)=\mathbf{0}_m$.
(The Ohio State University, Linear Algebra Exam) Problem 3 Let $H$ be a normal subgroup of a group $G$. Then show that $N:=[H, G]$ is a subgroup of $H$ and $N \triangleleft G$. Here $[H, G]$ is the subgroup of $G$ generated by the commutators $[h,k]:=hkh^{-1}k^{-1}$ with $h\in H$ and $k\in G$. In particular, the commutator subgroup $[G, G]$ is a normal subgroup of $G$.
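The identities in Problem 9 can be sanity-checked numerically for a \(2\times 2\) matrix, where the eigenvalues come straight from the quadratic formula applied to the characteristic polynomial \(\lambda^2 - \operatorname{tr}(A)\lambda + \det(A) = 0\) (the sample matrix below is our own choice).

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]  # sample symmetric matrix (our own example)
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Eigenvalues of a 2x2 matrix from lambda^2 - tr*lambda + det = 0.
disc = tr * tr - 4.0 * det
lam1 = (tr + math.sqrt(disc)) / 2.0
lam2 = (tr - math.sqrt(disc)) / 2.0

assert math.isclose(lam1 * lam2, det)  # det(A) = product of eigenvalues
assert math.isclose(lam1 + lam2, tr)   # tr(A)  = sum of eigenvalues
print(lam1, lam2)  # → 3.0 1.0
```

Of course this checks one instance, not the general statement, which is what the problem asks to prove.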
The inequality in question is obviously intimately related to Hadamard’s maximum determinant problem. So, I believe that the most natural construction of $A(n)$ is to make use of Hadamard matrices. Given any $n\times n$ $\{-1,1\}$ matrix $H$, there is a well-known trick to obtain a $\{0,1\}$ matrix $A$ such that $\det(A)=\frac1{2^{n-1}}|\det(H)|$. First, turn the first row of $H$ into a row of ones by multiplying columns of $H$ by $-1$ if necessary. Second, for every row $i>1$, add the first row to it and then divide it by $2$. As a result, we get a $\{0,1\}$ matrix. Finally, if the matrix has a negative determinant, interchange two of its rows to change the sign of the determinant. It is also well known that when $n$ is a power of $2$, Hadamard matrices of size $n$ can be constructed recursively (e.g. Sylvester's construction). Since the determinant of a Hadamard matrix of order $n$ is $\pm n^{n/2}$, it follows that whenever $n=2^m$, there exists a $\{0,1\}$ matrix $A(n)$ whose determinant is $\frac1{2^{n-1}}n^{n/2} = 2(\sqrt{n}/2)^n$. Now, consider any natural number $n$. Let $n=\sum_{i=1}^k 2^{m_i}+r$, where $m_1>m_2>\cdots>m_k\ge3$ and $0\le r<8$ (convention: $\sum_{i=1}^k 2^{m_i}$ is an empty sum if $n<8$). Then, if we define $A(n)=A(2^{m_1})\oplus A(2^{m_2})\oplus\cdots\oplus A(2^{m_k})\oplus I_r$, we get$$\det A(n) = 2^k\prod_{i=1}^k (\sqrt{2^{m_i}}/2)^{2^{m_i}} \ge \prod_{i=1}^k (\sqrt{2^3}/2)^{2^{m_i}} \ge \sqrt{2}^{n-r} > \frac1{16}\sqrt{2}^n.$$
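The row-operation trick can be carried out explicitly. The sketch below builds a Sylvester Hadamard matrix of order \(n = 8\) (whose first row is already all ones, so the column negations can be skipped), applies the two row operations, and checks that \(|\det(A)| = \frac{1}{2^{n-1}} n^{n/2} = 32\); all helper names are our own.

```python
# Sylvester construction: H_{2n} = [[H, H], [H, -H]], starting from H_1 = [1].
def sylvester(n):
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-a for a in row] for row in H]
    return H

# Plain Gaussian elimination with partial pivoting; for n = 8 the float
# determinant is exact enough to round to the true integer value.
def det(M):
    M = [row[:] for row in M]
    n, sign, d = len(M), 1, 1.0
    for j in range(n):
        p = max(range(j, n), key=lambda i: abs(M[i][j]))
        if M[p][j] == 0:
            return 0.0
        if p != j:
            M[j], M[p] = M[p], M[j]
            sign = -sign
        for i in range(j + 1, n):
            f = M[i][j] / M[j][j]
            M[i] = [a - f * b for a, b in zip(M[i], M[j])]
        d *= M[j][j]
    return sign * d

n = 8
H = sylvester(n)
# Row operations: add the first row to every later row, then halve it.
A = [H[0]] + [[(a + b) // 2 for a, b in zip(H[i], H[0])] for i in range(1, n)]
print(round(abs(det(A))))  # → 32
```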
Using area On this diagram, the height we are looking for is labelled $x$ m. There are two ways to view the area of the triangle. One is that it has a base of 12 and perpendicular height of 5, and so its area is $\frac{1}{2}\times5\times12 =30$ m$^2$. The other way we can view it is with a base of 13 and perpendicular height of $x$, and so it also has area $\frac{1}{2}\times13x$. We can equate these two areas to get that: $\frac{13}{2}x = 30 \Rightarrow x=\frac{30\times2}{13}=4.62$ m to the nearest cm Using similar triangles The height must make a right angle with the ground, which is shown in the diagram below. Also on the diagram, the red angle, the blue angle and a right angle must add up to 180$^{\text{o}}$, because the angles in a triangle add up to 180$^{\text{o}}$, and they are the angles in the triangle made by the slide and the ground. The triangle outlined in gold in the diagram below is also a right-angled triangle. It contains the blue angle and a right angle, so, to add up to 180$^{\text{o}}$, its third angle must be the same as the red angle. This means that the triangle made by the slide and the ground must be similar to the golden triangle. The hypotenuse of the triangle made by the slide and the ground is 13 m, and the hypotenuse of the golden triangle is 12 m. So the scale factor between the two triangles is $\frac{12}{13}$. The height, $x$ m, corresponds to the 5 m side of the larger triangle. So the height is $\dfrac{12}{13}$ of 5, which is 4.61538..., which is 4.62 m to the nearest cm.
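Both routes to the height can be checked with a few lines of arithmetic (the variable names are ours):

```python
# Right triangle with legs 5 and 12, hypotenuse 13 (the slide problem).
area = 0.5 * 5 * 12         # using base 12, perpendicular height 5

x_area = 2 * area / 13      # equate area = (1/2) * 13 * x
x_similar = (12 / 13) * 5   # scale factor 12/13 applied to the 5 m side

# The two methods agree: both give 60/13 m.
assert abs(x_area - x_similar) < 1e-12
print(round(x_area, 2))  # → 4.62
```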
Lehigh ISE / COR@L Lab Wiki is our new system to create a collective information resource within the department. By the Wikipedia definition, A wiki is a web application which allows people to add, modify, or delete content in collaboration with others. In a typical wiki, text is written using a simplified markup language (known as “wiki markup”) or a rich-text editor. At this moment, Lehigh ISE / COR@L Lab Wiki is only open to faculty and PhD students of the Lehigh ISE department. You can create new pages, edit the existing pages, upload resources or even delete content from the system. You are free to share any information as long as it is in the interest of users. First of all, you need to be registered in the system. Click here to register. To edit a page, simply click the pen button on the right sidebar. Then you will see the rich-text editor of DokuWiki. To create a new page, write the address of the page you want in the following format: http://coral.ie.lehigh.edu/wiki/doku.php/ new_page_name Then click the same button that we used to edit pages, on the right sidebar. It will create the new page. To delete a page, delete all its content in the text editor and save. The wiki system has revision control to restore old information and prevent loss of page content. Even if someone deletes a page, you can click the revision button to reach the previous versions. The wiki system can recognize mathematical formulations in TeX format. The following input $ \displaystyle x^2 + \sum_{i=1}^n y_i \leq \alpha $ gives $ \displaystyle x^2 + \sum_{i=1}^n y_i \leq \alpha $ You can use \begin{equation} \end{equation} as well as \ref{label} type of LaTeX commands. For more information, check the MathJax documentation. You can export page content in TeX/LaTeX format. Click the TeX button on the right side for any page you want and compile it with your favorite editor. It can also export any images available on the page in a zip file. Check this sample output.
You can check the following sample pages and edit them without any registration. If you want, you can watch the DokuWiki tutorials to learn more about the system.
Let $T: \R^n \to \R^m$ be a linear transformation. Suppose that the nullity of $T$ is zero. If $\{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k\}$ is a linearly independent subset of $\R^n$, then show that $\{T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k) \}$ is a linearly independent subset of $\R^m$. Let $V$ denote the vector space of all real $2\times 2$ matrices. Suppose that the linear transformation from $V$ to $V$ is given as below. \[T(A)=\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}A-A\begin{bmatrix}2 & 3\\5 & 7\end{bmatrix}.\] Prove or disprove that the linear transformation $T:V\to V$ is an isomorphism. Let $G, H, K$ be groups. Let $f:G\to K$ be a group homomorphism and let $\pi:G\to H$ be a surjective group homomorphism such that the kernel of $\pi$ is included in the kernel of $f$: $\ker(\pi) \subset \ker(f)$. Define a map $\bar{f}:H\to K$ as follows. For each $h\in H$, there exists $g\in G$ such that $\pi(g)=h$ since $\pi:G\to H$ is surjective. Define $\bar{f}:H\to K$ by $\bar{f}(h)=f(g)$. (a) Prove that the map $\bar{f}:H\to K$ is well-defined. (b) Prove that $\bar{f}:H\to K$ is a group homomorphism. Let $\calF[0, 2\pi]$ be the vector space of all real valued functions defined on the interval $[0, 2\pi]$. Define the map $f:\R^2 \to \calF[0, 2\pi]$ by \[\left(\, f\left(\, \begin{bmatrix}\alpha \\\beta\end{bmatrix} \,\right) \,\right)(x):=\alpha \cos x + \beta \sin x.\] We put \[V:=\im f=\{\alpha \cos x + \beta \sin x \in \calF[0, 2\pi] \mid \alpha, \beta \in \R\}.\] (a) Prove that the map $f$ is a linear transformation. (b) Prove that the set $\{\cos x, \sin x\}$ is a basis of the vector space $V$. (c) Prove that the kernel is trivial, that is, $\ker f=\{\mathbf{0}\}$. (This yields an isomorphism of $\R^2$ and $V$.) (d) Define a map $g:V \to V$ by \[g(\alpha \cos x + \beta \sin x):=\frac{d}{dx}(\alpha \cos x+ \beta \sin x)=\beta \cos x -\alpha \sin x.\] Prove that the map $g$ is a linear transformation.
(e) Find the matrix representation of the linear transformation $g$ with respect to the basis $\{\cos x, \sin x\}$. Suppose that the vectors \[\mathbf{v}_1=\begin{bmatrix}-2 \\1 \\0 \\0 \\0\end{bmatrix}, \qquad \mathbf{v}_2=\begin{bmatrix}-4 \\0 \\-3 \\-2 \\1\end{bmatrix}\] form a basis for the null space of a $4\times 5$ matrix $A$. Find a vector $\mathbf{x}$ such that \[\mathbf{x}\neq\mathbf{0}, \quad \mathbf{x}\neq \mathbf{v}_1, \quad \mathbf{x}\neq \mathbf{v}_2,\] and \[A\mathbf{x}=\mathbf{0}.\] (Stanford University, Linear Algebra Exam Problem) Let $V$ be the subspace of $\R^4$ defined by the equation \[x_1-x_2+2x_3+6x_4=0.\] Find a linear transformation $T$ from $\R^3$ to $\R^4$ such that the null space $\calN(T)=\{\mathbf{0}\}$ and the range $\calR(T)=V$. Describe $T$ by its matrix $A$. A hyperplane in the $n$-dimensional vector space $\R^n$ is defined to be the set of vectors \[\begin{bmatrix}x_1 \\x_2 \\\vdots \\x_n\end{bmatrix}\in \R^n\] satisfying a linear equation of the form \[a_1x_1+a_2x_2+\cdots+a_nx_n=b,\] where $a_1, a_2, \dots, a_n$ (at least one of which is nonzero) and $b$ are real numbers. Consider the hyperplane $P$ in $\R^n$ described by the linear equation \[a_1x_1+a_2x_2+\cdots+a_nx_n=0,\] where $a_1, a_2, \dots, a_n$ are some fixed real numbers, not all of which are zero. (The constant term $b$ is zero.) Then prove that the hyperplane $P$ is a subspace of $\R^n$ of dimension $n-1$. Let $n$ be a positive integer. Let $T:\R^n \to \R$ be a non-zero linear transformation. Prove the following. (a) The nullity of $T$ is $n-1$. That is, the dimension of the nullspace of $T$ is $n-1$. (b) Let $B=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}\}$ be a basis of the nullspace $\calN(T)$ of $T$. Let $\mathbf{w}$ be an $n$-dimensional vector that is not in $\calN(T)$. Then \[B’=\{\mathbf{v}_1, \cdots, \mathbf{v}_{n-1}, \mathbf{w}\}\] is a basis of $\R^n$.
(c) Each vector $\mathbf{u}\in \R^n$ can be expressed as \[\mathbf{u}=\mathbf{v}+\frac{T(\mathbf{u})}{T(\mathbf{w})}\mathbf{w}\] for some vector $\mathbf{v}\in \calN(T)$. Let $A$ be the matrix for a linear transformation $T:\R^n \to \R^n$ with respect to the standard basis of $\R^n$. We assume that $A$ is idempotent, that is, $A^2=A$. Then prove that \[\R^n=\im(T) \oplus \ker(T).\] (a) Let $A=\begin{bmatrix}1 & 2 & 1 \\3 &6 &4\end{bmatrix}$ and let \[\mathbf{a}=\begin{bmatrix}-3 \\1 \\1\end{bmatrix}, \qquad \mathbf{b}=\begin{bmatrix}-2 \\1 \\0\end{bmatrix}, \qquad \mathbf{c}=\begin{bmatrix}1 \\1\end{bmatrix}.\] For each of the vectors $\mathbf{a}, \mathbf{b}, \mathbf{c}$, determine whether the vector is in the null space $\calN(A)$. Do the same for the range $\calR(A)$. (b) Find a basis of the null space of the matrix $B=\begin{bmatrix}1 & 1 & 2 \\-2 &-2 &-4\end{bmatrix}$. Let $A$ be a real $7\times 3$ matrix such that its null space is spanned by the vectors \[\begin{bmatrix}1 \\2 \\0\end{bmatrix}, \begin{bmatrix}2 \\1 \\0\end{bmatrix}, \text{ and } \begin{bmatrix}1 \\-1 \\0\end{bmatrix}.\] Then find the rank of the matrix $A$. (Purdue University, Linear Algebra Final Exam Problem) Let $R$ be a commutative ring with $1$ and let $G$ be a finite group with identity element $e$. Let $RG$ be the group ring. Then the map $\epsilon: RG \to R$ defined by \[\epsilon(\sum_{i=1}^na_i g_i)=\sum_{i=1}^na_i,\] where $a_i\in R$ and $G=\{g_i\}_{i=1}^n$, is a ring homomorphism, called the augmentation map, and the kernel of $\epsilon$ is called the augmentation ideal. (a) Prove that the augmentation ideal in the group ring $RG$ is generated by $\{g-e \mid g\in G\}$. (b) Prove that if $G=\langle g\rangle$ is a finite cyclic group generated by $g$, then the augmentation ideal is generated by $g-e$.
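For the Stanford null-space problem above, one concrete answer is \(\mathbf{x} = \mathbf{v}_1 + \mathbf{v}_2\): it lies in the null space by linearity, since \(A\mathbf{x} = A\mathbf{v}_1 + A\mathbf{v}_2 = \mathbf{0}\), and it is visibly distinct from \(\mathbf{0}\), \(\mathbf{v}_1\), and \(\mathbf{v}_2\). A quick check:

```python
v1 = [-2, 1, 0, 0, 0]
v2 = [-4, 0, -3, -2, 1]

# x = v1 + v2 stays in the null space by linearity: A x = A v1 + A v2 = 0.
x = [a + b for a, b in zip(v1, v2)]
print(x)  # → [-6, 1, -3, -2, 1]

assert any(x) and x != v1 and x != v2
```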
Edited (Thanks to Kevin Carlson and Zhen Lin for pointing out the mistakes in my definitions.) Assuming $C$ is a locally small category, the Yoneda lemma says that for any given object $e \in \text{Ob}(C)$ and any functor $F: C \to \text{Set}$, the collection of natural transformations from $\hom_C(e, {-})$ to $F$ is isomorphic to $F(e)$. This isomorphism is natural in $F$ and $e$. When I try to formulate naturality rigorously, I encounter a problem about size. Here's my attempt. Define $Y: C^{\text{op}} \to \text{Set}^C$ by $Y(c)(c') = \hom_C(c, c')$ where $\hom_C: C^{\text{op}} \times C \to \text{Set}$ is the $\hom$ bifunctor in $C$. Define $G: \text{Set}^C \times C \to \text{Class}$ by $G(\eta, c) = \hom_{\text{Set}^C}(Y(c), \eta)$ where $\hom_{\text{Set}^C}: (\text{Set}^C)^{\text{op}} \times \text{Set}^C \to \text{Class}$ is the $\hom$ bifunctor in $\text{Set}^C$. Define $E: \text{Set}^C \times C \to \text{Set}$ by $E(\eta, c) = \eta(c)$. This is the evaluation functor. The Yoneda lemma says that there exists a natural isomorphism from $G$ to $E$. My problem is that I do not know if the codomain of $G$ can be restricted to $\text{Set}$. ($\text{Class}$ is, strictly speaking, not a category, but probably a metacategory?) My first attempt to fix this is to establish the isomorphism on each object of $\text{Set}^C \times C$ first. This will give smallness of $G(F, e) \cong F(e)$ for $F \in \text{Ob}(\text{Set}^C)$ and $e \in \text{Ob}(C)$. But then, I still have a problem proving local smallness of the image of $G$ on morphisms. Even though I know $G(F, e)$ and $G(F', e')$ are small, I do not know if the image of $G$ on $\hom_{\text{Set}^C \times C}((F, e), (F', e'))$ is small.
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate? I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol. It just seems like this argument is all about the sets of n-simplices. Which is the trivial part. lol no i mean, i'm following it by context actually so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side @user1732 haha thanks! we had no idea if that'd actually find its way to the internet... @JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels @JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC @IlaRossi i would imagine that this is in goerss--jardine? 
ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes @JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81 @HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary) @JonathanBeardsley what?! i really liked that picture! i wonder why they removed it @HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world @HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)? i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the appendix here: arxiv.org/pdf/1510.03525.pdf as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit. 
combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$ @JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat) I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open not put all my eggs in one basket, as it were I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality @JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak). There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k... 
@JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad. It's enough to show everything works for generating cofaces and codegeneracies: the codegeneracies are free, the 0th and nth cofaces are free, and all of those can be done treating frak{C} as a black box. The only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit. For codimension 1 face inclusions the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex. In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[1]^{j-k-1} (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation). > Thus, using appropriate tags one can increase one's chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc.) or, worse, just newly created tags, one might miss a chance to give visibility to one's question. I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but it's probably good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so it may become more important then.) Also, some smaller tags have enough followers. You are asking about topics far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself.
(Other than the possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably would be able to suggest suitable tags.) I just wanted to mention this, in case it helps you when asking questions here. (Although it seems that you're doing fine.) @MartinSleziak even I was not sure what other tags are appropriate to add.. I will look at other questions similar to this one, see what tags they have, and add any that seem relevant.. thanks for your suggestion.. it is very reasonable. You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags. I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them.
You start with an equation, but I suppose you are only concerned about the left-hand side of the equation.

$$-\frac{a}{5}\left(2\times\frac{a^2}{5}\right)^2+5\left(-\frac{a}{5}\right)^3\left(2\times\frac{a^2}{5}\right) -a^2\left(-\frac{a}{5}\right)\left(2\times\frac{a^2}{5}\right)\tag{1}$$

I get

$$\frac{-4a^5}{25}-\frac{2a^5}{125}+\frac{2a^5}{25}\tag{2}$$

(...)

$$-\frac{12a^5}{125}\tag{3}$$

How to find the error? Plug in some numbers for the variable $a$ and compare, e.g. $a=5$ in $(1)$ will give $$-\frac{5}{5}\left(2\times\frac{5^2}{5}\right)^2+5\left(-\frac{5}{5}\right)^3\left(2\times\frac{5^2}{5}\right)-5^2\left(-\frac{5}{5}\right)\left(2\times\frac{5^2}{5}\right) \\=-1(10)^2+5(-1)^3(10)-25(-1)(10)\\=-100-50+250\\=100$$ but $(2)$ will give $$\frac{-4\cdot 5^5}{25}-\frac{2\cdot 5^5}{125}+\frac{2\cdot 5^5}{25}\\=-4\cdot 5^3-2\cdot5^2+2\cdot5^3\\=-500-50+250\\=-300$$ So you introduced an error when you calculated $(2)$ from $(1)$.

You can plug in arbitrary numbers for $a$ and use a calculator to verify/disprove your calculation. If a number you plug in verifies your calculation, that does not mean that there is no error in the calculation; maybe other values will disprove it. But if a number disproves a calculation, you can be sure that there is an error, and you can try to find it by using the same method on the substeps of your calculation.

So next you check each term in this step. If you plug $a=6$ into the first term $$-\frac{a}{5}\left(2\times\frac{a^2}{5}\right)^2$$ and into the first term of your result $$\frac{-4a^5}{25}$$ and use a calculator, you will get something like $-248.832$ and $-1244.16$. (You almost always have a calculator. I simply put -6/5*(2*6^2/5)^2 and -4*6^5/25 into Google to get these results.) So there is an error in transforming this term.

You can often use such a technique to find errors in your calculations. If you have access to a CAS like Mathematica you can use this tool to check your intermediate results.
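This spot-checking recipe is easy to automate. A small sketch (the function names are mine, not from the post) comparing expression $(1)$ with the proposed result $(3)$ at several random points:

```python
import random

def original(a):
    """Expression (1), the left-hand side as given."""
    return (-a/5)*(2*a**2/5)**2 + 5*(-a/5)**3*(2*a**2/5) - a**2*(-a/5)*(2*a**2/5)

def candidate(a):
    """Expression (3), the simplified result being checked."""
    return -12*a**5/125

# one mismatch is enough to prove the simplification wrong
for _ in range(5):
    a = random.uniform(1, 10)
    if abs(original(a) - candidate(a)) > 1e-9 * abs(original(a)):
        print(f"mismatch at a = {a:.3f}: {original(a):.6g} != {candidate(a):.6g}")
```

At $a=5$ this reproduces the hand computation above: the original expression gives $100$ while the candidate gives $-300$, so the two cannot be equal.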
In Feynman's lecture on rotations in space, http://www.feynmanlectures.caltech.edu/I_20.html, he introduced the cross product by building upon a definition of torque he had derived in a previous lecture, $$\tau=xF_y-yF_x$$ He explained that the radial distance, $r = \sqrt{x^2+y^2}$, and the force, $F=\sqrt{F_x^2+F_y^2}$, are just two vectors and that any two vectors can be combined similarly to get a third resultant vector. Rather than try to represent the result on the same plane as the original vectors, it makes organizational sense to represent the result in the third dimension, perpendicular to the original vectors. This is the cross product $\mathbf{c = a \times b}$: $$c_x = a_y b_z - a_z b_y$$ $$c_y = a_z b_x - a_x b_z$$ $$c_z = a_x b_y - a_y b_x$$ Glorious illumination! I had been trying for years to figure out why arbitrarily smooshing vector components together resulted in a vector sticking out of the blackboard somehow equivalent to simple torque. I now understand that the cross product is just a pseudovector to represent two things that interact orthogonally. But I still don't feel 100% about Feynman's original definition for torque. I followed the geometrical proof but I'm hoping there is a straightforward, intuitive way to understand why $$xF_y-yF_x = rF_{\text{tangential}}$$ If anybody could help me fill in this last hole it would be greatly appreciated.
Convolution layers are one of the main building blocks of deep learning computer vision nowadays. Let's see what these layers consist of and how they work.

Understanding the convolution operation

According to the wiki, "convolution is a mathematical operation on two functions (f and g) to produce a third function that expresses how the shape of one is modified by the other". Roughly speaking, we shift one function across another, and for each shift we calculate the area of overlap between the two functions. See the image below:

This behavior becomes very handy if we want to perform some pattern matching. For example, if we are looking for some pattern, the convolution operation will produce the maximum output at the point where the input matches the required pattern the most. This process can be easily shown with autocorrelation, the correlation of a signal with a delayed copy of itself:

And here is the example for a discrete vector case in python:

>>> import numpy as np
>>> input_ = np.array([0, 0, 0, 1, 2, 3, 2, 1, 0, 0])
>>> pattern = np.array([1, 2, 3, 2, 1])
>>> np.convolve(input_, pattern)
array([ 0, 0, 0, 1, 4, 10, 16, 19, 16, 10, 4, 1, 0, 0])

Mathematically, the convolution operation can be written as \((f*g)(t)\triangleq \ \int _{-\infty }^{\infty }f(\tau )g(t-\tau )\,d\tau\), where \(*\) denotes the convolution. In machine learning, the first function \(f\) is often referred to as the input, and the second argument (the function \(g\)) as the kernel. Moreover, many neural network libraries implement a related function called cross-correlation, which is the same as convolution but without flipping the kernel.

Convolutions for images

As you may know, images on computers are represented as a matrix of pixels, usually ranged from 0 to 255.
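Since the only difference between convolution and cross-correlation is the kernel flip, it is easy to see numerically. A small sketch (the example vectors here are invented, not from the text):

```python
import numpy as np

signal = np.array([0, 1, 2, 3, 0])
kernel = np.array([1, 0, -1])  # asymmetric, so flipping matters

conv = np.convolve(signal, kernel)            # flips the kernel before sliding
corr = np.correlate(signal, kernel, 'full')   # slides the kernel as-is

# correlating with the reversed kernel reproduces the convolution
assert np.array_equal(conv, np.correlate(signal, kernel[::-1], 'full'))
print(conv, corr)
```

For a symmetric kernel like the [1, 2, 3, 2, 1] used above, the two operations coincide, which is why the distinction is often glossed over.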
Let’s take a look at a simple 16x16 gray image: In terms of numbers it’s represented as a 16x16 array:

[[255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255],
[255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 238, 238],
[255, 255, 255, 0, 255, 255, 0, 255, 255, 0, 255, 255, 255, 255, 221, 221],
[255, 255, 255, 0, 255, 255, 0, 255, 255, 0, 255, 255, 255, 255, 204, 204],
[255, 255, 255, 0, 255, 255, 0, 255, 255, 255, 255, 255, 255, 255, 187, 187],
[255, 255, 255, 0, 255, 255, 0, 255, 255, 0, 255, 255, 255, 255, 170, 170],
[255, 255, 255, 0, 0, 0, 0, 255, 255, 0, 255, 255, 255, 255, 153, 153],
[255, 255, 255, 0, 255, 255, 0, 255, 255, 0, 255, 255, 255, 255, 136, 136],
[255, 255, 255, 0, 255, 255, 0, 255, 255, 0, 255, 255, 255, 255, 119, 119],
[255, 255, 255, 0, 255, 255, 0, 255, 255, 0, 255, 255, 255, 255, 102, 102],
[255, 255, 255, 0, 255, 255, 0, 255, 255, 0, 255, 255, 255, 255, 85, 85],
[255, 255, 255, 0, 255, 255, 0, 255, 255, 0, 255, 255, 255, 255, 68, 68],
[255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 51, 51],
[255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 34, 34],
[255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 17, 17],
[255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 255, 0, 0]]

I hope you can see the patterns even without the image being displayed. Moreover, because the image is represented as an array, we can apply convolution to it; convolution is an operation on two functions, and we are not limited to 1-d arrays. We can process functions of arbitrary dimensionality, which is why we can apply convolutions to N-d arrays as well. In computer vision, discrete convolution of an image is carried out as element-wise multiplication followed by summation. The next questions you may ask are how to choose a kernel and how to shift it during the convolution.
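The element-wise multiply-and-sum just described can be sketched as a naive valid-mode 2D convolution. The loop implementation and the vertical-edge kernel below are illustrative choices of mine, not taken from the text:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution: flip the kernel, then slide it over the image."""
    k = np.flipud(np.fliplr(kernel))          # true convolution flips the kernel
    kh, kw = k.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

# a tiny image with a vertical edge between columns 2 and 3
image = np.array([[0, 0, 0, 255, 255]] * 3, dtype=float)
# kernel that responds to left-to-right intensity changes
kernel = np.array([[1, 0, -1]] * 3, dtype=float)
print(conv2d(image, kernel))  # strongest response where the edge sits
```

Libraries replace the Python loops with heavily optimized routines, but the arithmetic is exactly this.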
[ Edited to describe triple and higher-order coincidences for prime $k$, recovering the observed $0.672$ proportion for $k=5$] Darij's pretty argument, extended by GH, nicely answers the question for $k$-th powers modulo a large prime $p \equiv 1 \bmod k$ for each fixed $k>2$. Yet more can be said: that approach yields the existence of one coincidence $a^k \equiv b^k$ with $0 < a < b < p/k\phantom.$; but in fact the number of coincidences is asymptotically proportional to $p$: the count is $C_k \phantom. p + O_k(p^{1-\epsilon(k)})$, where $C_k = (k-1)/(2k^2)$ or $(k-2)/(2k^2)$ according as $k$ is odd or even, and $\epsilon(k) = 1/\varphi(k) \geq 1/(k-1)$.Extending the analysis to triple and higher-order coincidences also yields the asymptotic proportion of $k$-th powers that arise in $\lbrace a^k \phantom. \bmod p : a < p/k \rbrace$. For example, when $k$ is an odd prime, the proportion of $k$-th powers that do not have a $k$-th root in $(0,p/k)$ is asymptotic to $((k-1)^k+1)/k^k$; for $k=5$ that's $41/125$, so the proportion with such a $k$-th root is $84/125$, which matches A.Caicedo's observed $0.672$ exactly. It also gives $1 - \frac{8+1}{27} = 2/3$ for $k=3$, matching the proportion of cubes reported by Greg Martin in comments below; as $k \rightarrow \infty$ the proportion of $k$-th powers with small $k$-th roots approaches $1 - (1/e)$. Here's how to estimate the number of pairs. Begin with the observation that $a^k = b^k$ iff $b \equiv ma \bmod p$ where $m$ is one of the $k-1$ solutions of $m^k \equiv 1 \bmod p$ other than $m=1$. If $k$ is even, we exclude also $m=-1$, which is impossible with $0<a,b<p/k$. Then $b \equiv ma \bmod p$ defines a lattice of index $p$ in ${\bf Z}^2$ all of whose nonzero vectors have length $\gg p^{\epsilon(k)}$, because for such a vector $p$ divides the nonzero number $a^k-b^k$, which factors into homogeneous polynomials in $a,b$ each of degree at most $\phi(k)$. [This is where we use $m \neq -1$: if $a=-b$ then $a^k-b^k=0$.] 
Thus the solutions of $b \equiv ma \bmod p$ with $a,b \in (0,p/k)$ are the lattice points in a square of area $(p/k)^2$, and their number is estimated by $p^{-1} (p/k)^2 = p/k^2$, with an error bound proportional to (perimeter)/(length of shortest nonzero vector), i.e. proportional to $p^{1-\epsilon(k)}$. The total of $C_k \phantom. p + O_k(p^{1-\epsilon(k)})$ then follows by summing over all $k-1$ or $k-2$ solutions of $m^k=1 \bmod p$ other than $m = \pm 1$, and dividing by 2 because we've counted each coincidence twice, as $(a,b)$ and $(b,a)$. Likewise one can estimate the counts of triples etc. One must be careful with subsets of the $k$-th roots of unity that have integer dependencies, but at least when $k$ is prime there are no dependencies except that all $k$ of them sum to zero. If I did this right, the result for $j<k$ is that the number of $j$-element subsets of $\lbrace 1, 2, \ldots, (p-1)/k \rbrace$ with the same $k$-th power is asymptotic to ${k \choose j} p / k^{j+1}$, while there are no such subsets with $j=k$ because the sum of all $k$ solutions of $a^k \equiv c \bmod p$ vanishes. An exercise in generatingfunctionological inclusion-exclusion then produces the formula $((k-1)^k+1)/k^k$ for the asymptotic proportion of $k$-th powers that have no $k$-th roots at all in $(0,p/k)$. The same technique also works for $0 < a < b < M$ with $M$ considerably smaller than $p/k$; and the resulting coincidences, when they exist, can be calculated efficiently using lattice basis reduction (which as it happens I mentioned on this forum a few days ago).
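The asymptotic pair count $C_k \phantom. p$ is easy to check numerically for a modest prime; here is a quick sketch (the choices $p = 100003$ and $k = 3$ are mine), counting coincidences $a^k \equiv b^k \bmod p$ with $0 < a < b < p/k$:

```python
from collections import Counter

def kth_power_coincidences(p, k):
    """Count pairs 0 < a < b < p/k with a^k = b^k (mod p)."""
    counts = Counter(pow(a, k, p) for a in range(1, (p - 1) // k + 1))
    return sum(c * (c - 1) // 2 for c in counts.values())

p, k = 100003, 3                 # a prime with p = 1 (mod 3)
pairs = kth_power_coincidences(p, k)
C_k = (k - 1) / (2 * k**2)       # = 1/9 for odd k = 3
print(pairs / p, C_k)            # the ratio should be close to C_k
```

For $p = 13$, $k = 3$ the count is exactly 1 (the pair $1^3 \equiv 3^3 \equiv 1 \bmod 13$), and for large $p$ the ratio settles near $1/9$ as predicted.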
Difference between revisions of "Group cohomology of elementary abelian group of prime-square order"

(→Over an abelian group)

The homology groups with coefficients in an abelian group \(M\) are given as follows:

\[H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q+3)/2} \oplus (\operatorname{Ann}_M(p))^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{q/2} \oplus (\operatorname{Ann}_M(p))^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.\]

Here, \(M/pM\) is the quotient of \(M\) by \(pM = \{ px \mid x \in M \}\) and \(\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}\).
Revision as of 16:08, 24 October 2011

Particular cases

Homology groups for trivial group action

FACTS TO CHECK AGAINST (homology group for trivial group action): First homology group: first homology group for trivial group action equals tensor product with abelianization. Second homology group: formula for second homology group for trivial group action in terms of Schur multiplier and abelianization (Hopf's formula for the Schur multiplier). General: universal coefficients theorem for group homology; homology group for trivial group action commutes with direct product in second coordinate; Kunneth formula for group homology.

Over the integers

The first few homology groups are given below:

\(q\): 0, 1, 2, 3, 4, 5
\(H_q\): \(\mathbb{Z}\), \((\mathbb{Z}/p\mathbb{Z})^2\), \(\mathbb{Z}/p\mathbb{Z}\), \((\mathbb{Z}/p\mathbb{Z})^3\), \((\mathbb{Z}/p\mathbb{Z})^2\), \((\mathbb{Z}/p\mathbb{Z})^4\)
rank of \(H_q\) as an elementary abelian \(p\)-group: --, 2, 1, 3, 2, 4

Over an abelian group

The homology groups with coefficients in an abelian group \(M\) are given as follows:

\[H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q+3)/2} \oplus (\operatorname{Ann}_M(p))^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{q/2} \oplus (\operatorname{Ann}_M(p))^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.\]

Here, \(M/pM\) is the quotient of \(M\) by \(pM = \{ px \mid x \in M \}\) and \(\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}\). These homology groups can be computed in terms of the homology groups over the integers using the universal coefficients theorem for group homology.

Important case types for abelian groups

Case: \(M\) is uniquely \(p\)-divisible, i.e., every element of \(M\) can be divided uniquely by \(p\); this includes the case that \(M\) is a field of characteristic not \(p\). Odd-indexed homology groups (\(q = 1,3,5,\dots\)): all zero groups. Even-indexed homology groups (\(q = 2,4,6,\dots\)): all zero groups.

Case: \(M\) is \(p\)-torsion-free, i.e., no nonzero element of \(M\) multiplies by \(p\) to give zero, so \(\operatorname{Ann}_M(p) = 0\). Odd-indexed homology groups: \((M/pM)^{(q+3)/2}\). Even-indexed homology groups: \((M/pM)^{q/2}\).
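The ranks 2, 1, 3, 2, 4 in the integer table follow the parity pattern of the general formula for coefficients in an abelian group \(M\): for \(M = \mathbb{Z}\) one has \(M/pM \cong \mathbb{Z}/p\mathbb{Z}\) and \(\operatorname{Ann}_M(p) = 0\). A two-line sketch of that specialization:

```python
def rank_Hq(q):
    """Rank of H_q(Z/p + Z/p; Z) as an elementary abelian p-group, for q >= 1."""
    # odd q contributes (q+3)/2 copies of Z/p, even q contributes q/2 copies
    return (q + 3) // 2 if q % 2 else q // 2

print([rank_Hq(q) for q in range(1, 6)])  # [2, 1, 3, 2, 4], matching the table
```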
Case: \(M\) is \(p\)-divisible, but not necessarily uniquely so (e.g., …, any natural number). Odd-indexed homology groups: \((\operatorname{Ann}_M(p))^{(q-1)/2}\). Even-indexed homology groups: \((\operatorname{Ann}_M(p))^{(q+2)/2}\).

Case: \(M\) is a finite abelian group. Odd- and even-indexed homology groups: isomorphic to \((\mathbb{Z}/p\mathbb{Z})^{r(q+1)}\), where \(r\) is the rank (i.e., minimum number of generators) of the \(p\)-Sylow subgroup of \(M\).

Case: \(M\) is a finitely generated abelian group. Odd-indexed homology groups: all isomorphic to \((\mathbb{Z}/p\mathbb{Z})^{r(q+1)+f(q+3)/2}\), where \(r\) is the rank of the \(p\)-Sylow subgroup of the torsion part of \(M\) and \(f\) is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of \(M\). Even-indexed homology groups: all isomorphic to \((\mathbb{Z}/p\mathbb{Z})^{r(q+1)+fq/2}\), with \(r\) and \(f\) as above.

Cohomology groups for trivial group action

FACTS TO CHECK AGAINST (cohomology group for trivial group action): First cohomology group: first cohomology group for trivial group action is naturally isomorphic to the group of homomorphisms. Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization. In general: dual universal coefficients theorem for group cohomology, relating cohomology with arbitrary coefficients to homology with coefficients in the integers; cohomology group for trivial group action commutes with direct product in second coordinate; Kunneth formula for group cohomology.

Over the integers

The first few cohomology groups with coefficients in the integers are given below:

\(q\): 0, 1, 2, 3, 4, 5
\(H^q\): \(\mathbb{Z}\), \(0\), \((\mathbb{Z}/p\mathbb{Z})^2\), \(\mathbb{Z}/p\mathbb{Z}\), \((\mathbb{Z}/p\mathbb{Z})^3\), \((\mathbb{Z}/p\mathbb{Z})^2\)
rank of \(H^q\) as an elementary abelian \(p\)-group: --, 0, 2, 1, 3, 2

Over an abelian group

The cohomology groups with coefficients in an abelian group \(M\) are given as follows:

\[H^q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (\operatorname{Ann}_M(p))^{(q+3)/2} \oplus (M/pM)^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (\operatorname{Ann}_M(p))^{q/2} \oplus (M/pM)^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.\]

Here, \(M/pM\) is the quotient of \(M\) by \(pM = \{ px \mid x \in M \}\) and \(\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}\). These can be deduced from the homology groups with coefficients in the integers using the dual universal coefficients theorem for group cohomology.
Important case types for abelian groups

Case: \(M\) is uniquely \(p\)-divisible, i.e., every element of \(M\) can be divided by \(p\) uniquely; this includes the case that \(M\) is a field of characteristic not \(p\). Odd-indexed cohomology groups (\(q = 1,3,5,\dots\)): all zero groups. Even-indexed cohomology groups (\(q = 2,4,6,\dots\)): all zero groups.

Case: \(M\) is \(p\)-torsion-free, i.e., no nonzero element of \(M\) multiplies by \(p\) to give zero, so \(\operatorname{Ann}_M(p) = 0\). Odd-indexed cohomology groups: \((M/pM)^{(q-1)/2}\). Even-indexed cohomology groups: \((M/pM)^{(q+2)/2}\).

Case: \(M\) is \(p\)-divisible, but not necessarily uniquely so (e.g., …, any natural number). Odd-indexed cohomology groups: \((\operatorname{Ann}_M(p))^{(q+3)/2}\). Even-indexed cohomology groups: \((\operatorname{Ann}_M(p))^{q/2}\).

Case: \(M\) is a finite abelian group. Odd- and even-indexed cohomology groups: isomorphic to \((\mathbb{Z}/p\mathbb{Z})^{r(q+1)}\), where \(r\) is the rank (i.e., minimum number of generators) of the \(p\)-Sylow subgroup of \(M\).

Case: \(M\) is a finitely generated abelian group. Odd-indexed cohomology groups: all isomorphic to \((\mathbb{Z}/p\mathbb{Z})^{r(q+1)+f(q-1)/2}\), where \(r\) is the rank of the \(p\)-Sylow subgroup of the torsion part of \(M\) and \(f\) is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of \(M\). Even-indexed cohomology groups: all isomorphic to \((\mathbb{Z}/p\mathbb{Z})^{r(q+1)+f(q+2)/2}\), with \(r\) and \(f\) as above.
A matrix is a popular term in mathematics, defined as an array of numbers or functions. Each matrix has two dimensions, given by its rows and columns. The number of rows is denoted by m and the number of columns by n. There are plenty of ways to categorize a matrix, such as the values of its elements, its order, and the total number of rows and columns. With the help of matrices, engineers can design buildings, build powerful video games, define animations that look 3-dimensional or 4-dimensional, and much more. Matrices are also used to solve complex systems of linear equations; in this case the calculations are carried out by a computer, but a deep understanding of matrix formulas is still necessary.

A matrix is an ordered arrangement of numbers or variables in rectangular or square rows and columns. An individual number in the matrix is called an entry or element. The entries of a matrix are written between square brackets, as shown in the examples below.

For example: If \(A = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ a_{31} & a_{32} \end{bmatrix}\) and \(B = \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22}\\ b_{31} & b_{32} \end{bmatrix}\), let us calculate A + B. Here, both matrices A and B are of the same size (3 × 2). This implies

\[C = A + B = \begin{bmatrix} a_{11} + b_{11} & a_{12} + b_{12}\\ a_{21} + b_{21} & a_{22} + b_{22}\\ a_{31} + b_{31} & a_{32} + b_{32} \end{bmatrix}\]

If \(A = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ a_{31} & a_{32} \end{bmatrix}\) and \(B = \begin{bmatrix} b_{11} & b_{12}\\ b_{21} & b_{22}\\ b_{31} & b_{32} \end{bmatrix}\), let us calculate A - B. Here, both matrices A and B are of the same size (3 × 2).
This implies

\[C = A - B = \begin{bmatrix} a_{11} - b_{11} & a_{12} - b_{12}\\ a_{21} - b_{21} & a_{22} - b_{22}\\ a_{31} - b_{31} & a_{32} - b_{32} \end{bmatrix}\]

If \(A = \begin{bmatrix} a_{1} & a_{2}\\ a_{3} & a_{4}\end{bmatrix}\) and \(B = \begin{bmatrix} b_{1} & b_{2}\\ b_{3} & b_{4}\end{bmatrix}\), let us calculate A × B. Here, both matrices A and B are of the same size (2 × 2). This implies

\[C = A \times B = \begin{bmatrix} a_{1} b_{1} + a_{2} b_{3} & a_{1} b_{2} + a_{2} b_{4}\\ a_{3} b_{1} + a_{4} b_{3} & a_{3} b_{2} + a_{4} b_{4}\end{bmatrix}\]

If \(A = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix}\), then \(adj(A) = \begin{bmatrix} a_{22} & -a_{12}\\ -a_{21} & a_{11}\end{bmatrix}\)

The horizontal lines are the rows of the matrix and the vertical lines are its columns. For a matrix having m rows and n columns, the order of the matrix is m × n; for a matrix with three rows and three columns, the order would be 3 × 3. Two matrices of the same order can be added or subtracted by following certain rules and conditions.

The applications of matrices are common in engineering, and they can be used for practicing cryptography techniques: when information is exchanged between two parties, it can be kept safe through matrix-based encryption. Matrices are suitable for Fourier analysis and Gauss's theorem, for finding electric currents, for measuring the force on a bridge, and many more things are completed with the help of matrix implementations.

A covariance matrix is a measure of how two random variables change together. It requires computing the covariance for every pair of columns in the data matrix. Other popular names for the covariance matrix are the dispersion matrix and the variance-covariance matrix.
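As a quick numerical check of the addition, multiplication, and adjugate formulas above (the entries of A and B below are invented for illustration), using numpy:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

S = A + B        # entrywise sum: requires equal sizes
P = A @ B        # product via c_ij = a_i1*b_1j + a_i2*b_2j

# adjugate of a 2x2 matrix: swap the diagonal, negate the off-diagonal
adjA = np.array([[A[1, 1], -A[0, 1]],
                 [-A[1, 0], A[0, 0]]])

# the defining property: A times adj(A) equals det(A) times the identity
print(P)
print(A @ adjA)
```

Here det(A) = 1·4 − 2·3 = −2, so A @ adjA comes out as −2 times the identity, as the adjugate formula predicts.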
The covariance formula in mathematics is given as:

\[\large Cov(X,Y)=\frac{\sum_{i}(X_{i}-\overline{X})(Y_{i}-\overline{Y})}{N}=\frac{\sum_{i} x_{i}y_{i}}{N}\]

Where,
N = number of scores in each set of data
\(\overline{X}\) = mean of the N scores in the first data set
\(X_{i}\) = \(i\)-th raw score in the first set of scores
\(x_{i} = X_{i}-\overline{X}\) = \(i\)-th deviation score in the first set of scores
\(\overline{Y}\) = mean of the N scores in the second data set
\(Y_{i}\) = \(i\)-th raw score in the second set of scores
\(y_{i} = Y_{i}-\overline{Y}\) = \(i\)-th deviation score in the second set of scores
Cov(X, Y) = covariance of corresponding scores in the two sets of data

Each of these variables has its own meaning and significance. A square matrix having an inverse is called non-singular or invertible, and a square matrix whose inverse cannot be calculated is called singular or non-invertible. Keep in mind that not all square matrices have an inverse, and non-square matrices don't have inverses at all. For a matrix A, the inverse is written as \(A^{-1}\).

\[\large A^{-1}=\frac{1}{|A|}\times adj(A)\]

You may be wondering why we need the inverse of a matrix. We cannot divide by a matrix in mathematics, but multiplying a square matrix by its inverse accomplishes the same job.
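Both the covariance and the inverse formulas are easy to sanity-check numerically. A short sketch with numpy (the data values are invented); note that np.cov divides by N - 1 unless bias=True is passed, while the formula above divides by N:

```python
import numpy as np

X = np.array([2.0, 4.0, 6.0, 8.0])
Y = np.array([1.0, 3.0, 2.0, 6.0])
N = len(X)

# population covariance: mean of the products of the deviation scores
cov = np.sum((X - X.mean()) * (Y - Y.mean())) / N
assert np.isclose(cov, np.cov(X, Y, bias=True)[0, 1])

# inverse of a 2x2 matrix as adj(A) divided by the determinant |A|
A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
adjA = np.array([[A[1, 1], -A[0, 1]],
                 [-A[1, 0], A[0, 0]]])
A_inv = adjA / det
assert np.allclose(A @ A_inv, np.eye(2))  # A times its inverse is the identity
```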
LaTeX math has always been the motivating factor to abandon word processing: LaTeX offers superior typesetting control over formulas, data tables, and figures.

Package Support

The amsmath package introduces several commands that are more powerful and flexible than the ones that ship with LaTeX. To use amsmath, add the following line to the document preamble:

\usepackage{amsmath}

LaTeX Math Modes

LaTeX math has two modes: the inline mode and the display mode. The first is used to write formulas that are part of a text or paragraph. The second is to write expressions that are not part of the text and are placed on separate lines.

LaTeX Inline Math

Here is an example of the inline mode: $\small{\int_0^\infty x^2 = \frac{x^3}{3} + C}$. The markup commands appear below:

Here is an example of the inline mode: $\small{\int_0^\infty x^2 = \frac{x^3}{3} + C}$.

There are several ways to launch the inline mode: $ ... $ or \( ... \). Either command sequence is viable. The choice as to which is best is subjective and based on personal preference or coding standards.

LaTeX Display Math

The display mode has two versions: numbered and unnumbered, as shown below:

The mass-energy equivalence is described by the famous equation:

$$E=mc^2$$

If $c = 1$, then the formula expresses the identity:

\begin{equation}
E=m
\end{equation}

Hence, display mode can be launched with the $$ ... $$ markup syntax or by starting an equation environment. The environment has the benefit of macros for equation numbering and number formatting.
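One concrete benefit of loading amsmath is the align environment, which numbers each line and lines a derivation up at the & markers. A brief sketch (this example is mine, not from the text above):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% align: each line is numbered and the equations line up at the = signs
\begin{align}
(a+b)^2 &= a^2 + 2ab + b^2 \\
(a-b)^2 &= a^2 - 2ab + b^2
\end{align}

% the starred form suppresses equation numbers
\begin{align*}
E &= mc^2
\end{align*}

\end{document}
```

As with equation versus the bare $$ syntax, the starred and unstarred forms let you opt in or out of automatic numbering per display.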
$$\frac{(\sec A - \tan A)(\sec A + \tan A)} {\csc A-\cot A} \equiv \cot A + \csc A $$ So I started by using DOTS (difference of two squares) on the numerator on the left-hand side. This gave me: $$\frac{\sec^2 A - \tan^2 A}{\csc A - \cot A}$$ Then using the trigonometric identity $\tan^2 A + 1 = \sec^2 A$ I simplified to: $$\frac{\sec^2 A - (\sec^2 A - 1)}{\csc A - \cot A}$$ And in turn: $$\frac {1}{\csc A - \cot A}$$ => $$\frac{1}{(\frac{1}{\sin A})} - \frac{1}{(\frac{1}{\tan A})}$$ => $$\sin A - \tan A$$ From here I couldn't think of a way to get closer to $\cot A + \csc A$. Am I on the right track? If not, could someone point me in the right direction? Thanks.
2018-09-11 04:29 Proprieties of FBK UFSDs after neutron and proton irradiation up to $6*10^{15}$ neq/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-$\mu$m thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...] arXiv:1804.05449. - 13 p. Preprint - Full text

2018-08-25 06:58 Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys.
Res., A 533 (2004) 442-453

2018-08-23 11:31 Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron-doped silicon diodes irradiated with alpha particles. It has been shown that self-interstitial related defects which are immobile even at room temperature can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576

2018-08-23 11:31 Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in the form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363

2018-08-23 11:31 Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.)
; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders such as the LHC and its upgrades in energy and luminosity, Super-LHC and Very-LHC respectively, as well as the requirements for detectors under the possible radiation-environment scenarios, are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities, where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and leads to an increase of the leakage current of the detector, degrades the signal-to-noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys.: 57 (2005) , no. 3, pp. 342-348 External link: RORPE

2018-08-22 06:27 Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976

2018-08-22 06:27 Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.)
; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.)
A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized to withstand a gamma irradiation dose of up to 50 Mrad (Si), which represents the ionizing radiation dose expected in the middle region of the ATLAS SCT detector at the future Super-LHC during 10 years of operation. [...]
2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365

Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main))
High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors' radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...]
2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
Possible Duplicate: Can superconducting magnets fly (or repel the earth's core)?

I've seen superconductors levitating on magnets. But is it possible for superconductors to levitate on Earth from Earth's magnetic field?

The lift generated by a magnetic field B on a superconductor of area S is: \begin{equation} F = \frac{B^2S}{2\mu_0} \end{equation} Disregarding lateral forces and assuming a superconducting cylinder (or similar shape) with area S at the top and bottom and height h, we need three forces to remain in equilibrium: the magnetic pressure on the top, the magnetic pressure on the bottom, and the gravitational force: \begin{equation} F_{b} - F_{t} = F_{g} \end{equation} Denoting the density of the superconductor by ρ, Earth's gravity by g, and the magnetic field at the top and bottom of the object by B_t and B_b, we have \begin{equation} \frac{1}{2\mu_0}(B_{b}^2-B_{t}^2)=\rho gh \end{equation} Assuming the vertical rate of change of the magnetic field is nearly constant and denoting the average magnetic field by B, we have \begin{equation} -B\frac{dB}{dz}=\mu_{0}\rho g \end{equation} Compare with diamagnetic levitation (a superconductor's magnetic susceptibility is −1). Now, Earth's magnetic field is between 25 and 65 μT. For the derivative I have found this survey from British Columbia, with the upper point on the scale being 2.161 nT/m. Assuming this to be the maximum vertical derivative, we get a required density of 1.1394e-08 kg/m³.
For comparison, air density at sea level at 15 °C is around 1.225 kg/m³. Even assuming a very high vertical derivative, where B drops from its maximum of 65 μT to 0 over 1 m of height, the required density is still only 0.00034272 kg/m³.
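A quick numerical check of the two densities quoted above (a sketch; the value g = 9.81 m/s² and the use of B = 65 μT in both cases are assumptions chosen to match the quoted figures):

```python
import math

# From -B dB/dz = mu_0 * rho * g, the largest density a given field and
# vertical gradient can support is rho = B * |dB/dz| / (mu_0 * g).
MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T m/A
G = 9.81                  # gravitational acceleration, m/s^2

def required_density(B, dBdz):
    """Density (kg/m^3) that a field B with gradient dB/dz can just levitate."""
    return B * abs(dBdz) / (MU0 * G)

# Survey-gradient case: B = 65 uT, |dB/dz| = 2.161 nT/m
rho_survey = required_density(65e-6, 2.161e-9)
# Extreme case: B falls from 65 uT to zero over 1 m of height
rho_extreme = required_density(65e-6, 65e-6)

print(rho_survey)   # ~1.1394e-08 kg/m^3
print(rho_extreme)  # ~3.4272e-04 kg/m^3
```

Both values reproduce the figures quoted in the answer.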
The Annals of Mathematical Statistics
Ann. Math. Statist., Volume 43, Number 1 (1972), 293-302.

The Occupation Time of a Set by Countably Many Recurrent Random Walks

Abstract
Let $A_0(x), x \in Z$ be independent nonnegative integer-valued random variables with $\mu_j(x) = E(A_0(x)(A_0(x) - 1)\cdots(A_0(x) - j + 1))$. Assume that $\{\mu_j(x)\}_x$ has limits for $j = 1,2$ and that it is bounded for $3 \leqq j \leqq 6$. Suppose at time zero there are $A_0(x)$ particles at $x \in Z$ and subsequently the particles move independently according to the transition function $P(x, y)$ of a recurrent random walk. For a finite nonempty subset $B$ of $Z$ denote by $A_n(B)$ the number of particles in $B$ at time $n$. Then $S_n(B) = \sum^n_{k=1} A_k(B)$ is the total occupation time of $B$ by time $n$ of all particles. Assuming that the $n$ step transition function $P_n(x, y)$ is such that there is an $\alpha$, with $1 < \alpha \leqq 2$, so that $P_n(0, x) \sim cn^{-1/\alpha}$ for all $x$, it is proved that the strong law of large numbers and the central limit theorem hold for the sequence $\{S_n(B)\}$.

Article information
Source: Ann. Math. Statist., Volume 43, Number 1 (1972), 293-302.
Dates: First available in Project Euclid: 27 April 2007
Permanent link to this document: https://projecteuclid.org/euclid.aoms/1177692722
Digital Object Identifier: doi:10.1214/aoms/1177692722
Mathematical Reviews number (MathSciNet): MR300350
Zentralblatt MATH identifier: 0238.60046
JSTOR: links.jstor.org

Citation
Weiss, N. A. The Occupation Time of a Set by Countably Many Recurrent Random Walks. Ann. Math. Statist. 43 (1972), no. 1, 293--302. doi:10.1214/aoms/1177692722. https://projecteuclid.org/euclid.aoms/1177692722
Magnetosphere

A magnetosphere is the area of space near an astronomical object in which charged particles are controlled by that object's magnetic field. [1] [2] Near the surface of the object, the magnetic field lines resemble those of a magnetic dipole. Farther away from the surface, the field lines are significantly distorted by electric currents flowing in the plasma (e.g. in the ionosphere or solar wind). [3] [4] When speaking about Earth, magnetosphere is typically used to refer to the outer layer of the ionosphere, [3] although some sources consider the ionosphere and magnetosphere to be separate. [2]

History

Study of Earth's magnetosphere began in 1600, when William Gilbert discovered that the magnetic field on the surface of Earth resembled that on a terrella, a small, magnetized sphere. In the 1940s, Walter M. Elsasser proposed the model of dynamo theory, which attributes Earth's magnetic field to the motion of Earth's iron outer core. Through the use of magnetometers, scientists were able to study the variations in Earth's magnetic field as functions of both time and latitude and longitude. Beginning in the late 1940s, rockets were used to study cosmic rays. In 1958, Explorer 1, the first of the Explorer series of space missions, was launched to study the intensity of cosmic rays above the atmosphere and measure the fluctuations in this activity. This mission observed the existence of the Van Allen radiation belt (located in the inner region of Earth's magnetosphere), with the Explorer 3 mission later that year definitively proving its existence. Also in 1958, Eugene Parker proposed the idea of the solar wind. In 1959, the term magnetosphere was proposed by Thomas Gold. The Explorer 12 mission in 1961 led to the observation by Cahill and Amazeen in 1963 of a sudden decrease in the strength of the magnetic field near the noon meridian, later named the magnetopause.
In 1983, the International Cometary Explorer observed the magnetotail, or the distant magnetic field. [4]

Types of magnetospheres

The structure and behavior of magnetospheres depend on several variables: the type of astronomical object, the nature of the sources of plasma and momentum, the period of the object's spin, the axis about which the object spins, the axis of the magnetic dipole, and the magnitude and direction of the flow of the solar wind. The distance at which a planet can withstand the solar wind pressure is called the Chapman–Ferraro distance, modeled by the formula

\[ R_{CF}=R_{P} \left( \frac{B_{surf}^2}{\mu_{0} \rho V_{SW}^2} \right)^{\frac{1}{6}}, \]

wherein \(R_P\) represents the radius of the planet, \(B_{surf}\) the magnetic field on the surface of the planet at the equator, \(\rho\) the mass density of the solar wind, and \(V_{SW}\) the velocity of the solar wind.

A magnetosphere is classified as "intrinsic" when \(R_{CF} \gg R_{P}\), or when the primary opposition to the flow of solar wind is the magnetic field of the object. Mercury, Earth, Jupiter, Ganymede, Saturn, Uranus, and Neptune exhibit intrinsic magnetospheres. A magnetosphere is classified as "induced" when \(R_{CF} \ll R_P\), or when the solar wind is not opposed by the object's magnetic field. In this case, the solar wind interacts with the atmosphere or ionosphere of the planet (or the surface of the planet, if the planet has no atmosphere). Venus has an induced magnetic field: because Venus appears to have no internal dynamo effect, the only magnetic field present is that formed by the solar wind's wrapping around the physical obstacle of Venus (see also Venus' Induced Magnetosphere). When \(R_{CF} \approx R_P\), the planet itself and its magnetic field both contribute. It is possible that Mars is of this type. [5]

Structure

Bow shock

The bow shock forms the outermost layer of the magnetosphere: the boundary between the magnetosphere and the ambient medium.
For stars, this is usually the boundary between the stellar wind and interstellar medium; for planets, the speed of the solar wind there plummets as it approaches the magnetopause. [6] Magnetosheath The magnetosheath is the region of the magnetosphere between the bow shock and the magnetopause. It is formed mainly from shocked solar wind, though it contains a small amount of plasma from the magnetosphere. [7] It is an area exhibiting high particle energy flux, where the direction and magnitude of the magnetic field varies erratically. This is caused by the collection of solar wind gas that has effectively undergone thermalization. It acts as a cushion that transmits the pressure from the flow of the solar wind and the barrier of the magnetic field from the object. [4] Magnetopause The magnetopause is the area of the magnetosphere wherein the pressure from the planetary magnetic field is balanced with the pressure from the solar wind. [3] It is the convergence of the shocked solar wind from the magnetosheath with the magnetic field of the object and plasma from the magnetosphere. Because both sides of this convergence contain magnetized plasma, the interactions between them are very complex. The structure of the magnetopause depends upon the Mach number and beta of the plasma, as well as the magnetic field. [8] The magnetopause changes size and shape as the pressure from the solar wind fluctuates. [9] Magnetotail Opposite the compressed magnetic field is the magnetotail, where the magnetosphere extends far beyond the astronomical object. It contains two lobes, referred to as the northern and southern tail lobes. The northern tail lobe points towards the object and the southern tail lobe points away. The tail lobes are almost empty, with very few charged particles opposing the flow of the solar wind. The two lobes are separated by a plasma sheet, an area where the magnetic field is weaker and the density of charged particles is higher. 
[10] Earth's magnetosphere

Over Earth's equator, the magnetic field lines become almost horizontal, then return to connect back again at high latitudes. However, at high altitudes, the magnetic field is significantly distorted by the solar wind and its solar magnetic field. On the dayside of Earth, the magnetic field is significantly compressed by the solar wind to a distance of approximately 65,000 kilometers (40,000 mi). Earth's bow shock is about 17 kilometers (11 mi) thick [11] and located about 90,000 kilometers (56,000 mi) from Earth. [12] The magnetopause exists at a distance of several hundred kilometers off Earth's surface. Earth's magnetopause has been compared to a sieve because it allows solar wind particles to enter. Kelvin–Helmholtz instabilities occur when large swirls of plasma travel along the edge of the magnetosphere at a different velocity from the magnetosphere, causing the plasma to slip past. This results in magnetic reconnection, and as the magnetic field lines break and reconnect, solar wind particles are able to enter the magnetosphere. [13] On Earth's nightside, the magnetic field extends in the magnetotail, which lengthwise exceeds 6,300,000 kilometers (3,900,000 mi). [3] Earth's magnetotail is the primary source of the polar aurora. [10] Also, NASA scientists have suggested or "speculated" that Earth's magnetotail can cause "dust storms" on the Moon by creating a potential difference between the day side and the night side. [14]

Other objects

The magnetosphere of Jupiter is the largest planetary magnetosphere in the Solar System, extending up to 7,000,000 kilometers (4,300,000 mi) on the dayside and almost to the orbit of Saturn on the nightside. [15] Jupiter's magnetosphere is stronger than Earth's by an order of magnitude, and its magnetic moment is approximately 18,000 times larger. [16]

See also

References

"Magnetospheres". NASA Science. NASA. Ratcliffe, John Ashworth (1972). An Introduction to the Ionosphere and Magnetosphere.
CUP Archive. "Ionosphere and magnetosphere". Encyclopedia Britannica. Encyclopedia Britannica, Inc. 2012. Van Allen, James Alfred (2004). Origins of Magnetospheric Physics. Iowa City, Iowa, USA: University of Iowa Press. Blanc, M.; Kallenbach, R.; Erkaev, N.V. (2005). "Solar System Magnetospheres". Space Science Reviews (116): 227–298. Sparavigna, A.C.; Marazzato, R. (10 May 2010). "Observing stellar bow shocks" (PDF). Paschmann, G.; Schwartz, S.J.; Escoubet, C.P. et al., eds. (2005). "Outer Magnetospheric Boundaries: Cluster Results". Space Science Reviews (Dordrecht, The Netherlands: Springer) 118 (1-4). Russell, C.T. (1990). "The Magnetopause". Physics of Magnetic Flux Ropes (Washington, D.C., USA: American Geophysical Union): 439–453. "The Magnetopause". NASA. "The Tail of the Magnetosphere". NASA. "Cluster reveals Earth's bow shock is remarkably thin". European Space Agency. 16 November 2011. "Cluster reveals the reformation of Earth's bow shock". European Space Agency. 11 May 2011. "Cluster observes a 'porous' magnetopause". European Space Agency. 24 October 2012. NASA, The Moon and the Magnetotail, http://www.nasa.gov/topics/moonmars/features/magnetotail_080416.html Khurana, K.K.; Kivelson, M.G. et al. (2004). "The configuration of Jupiter's magnetosphere". In Bagenal, F.; Dowling, T.E.; McKinnon, W.B. Jupiter: The Planet, Satellites and Magnetosphere (PDF). Cambridge University Press. Russell, C.T. (1993). "Planetary Magnetospheres" (PDF). Reports on Progress in Physics 56 (6): 687–732.
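The Chapman–Ferraro formula quoted in the article can be evaluated for Earth. The solar wind parameters below (proton number density ≈ 5 cm⁻³, speed ≈ 400 km/s) and the equatorial surface field ≈ 3.1 × 10⁻⁵ T are typical textbook values assumed for this sketch, not figures taken from the article:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T m/A
M_PROTON = 1.673e-27       # proton mass, kg

def chapman_ferraro_ratio(B_surf, n_sw, v_sw):
    """Standoff distance in planetary radii, R_CF / R_P."""
    rho = n_sw * M_PROTON                      # solar wind mass density, kg/m^3
    return (B_surf**2 / (MU0 * rho * v_sw**2)) ** (1 / 6)

# Assumed typical values for Earth (see lead-in above)
ratio = chapman_ferraro_ratio(B_surf=3.1e-5, n_sw=5e6, v_sw=4e5)
print(ratio)  # roughly 9 planetary radii
```

With R_P ≈ 6371 km this gives a dayside standoff near 58,000 km, consistent with the ~65,000 km figure quoted for Earth's compressed dayside field.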
Suppose we are given an \(n\)th order homogeneous system of differential equations with constant coefficients:
\[\mathbf{X'}\left( t \right) = A\mathbf{X}\left( t \right),\;\; \mathbf{X}\left( t \right) = \left[ {\begin{array}{*{20}{c}} {{x_1}\left( t \right)}\\ {{x_2}\left( t \right)}\\ \vdots \\ {{x_n}\left( t \right)} \end{array}} \right],\;\; A = \left[ {\begin{array}{*{20}{c}} {{a_{11}}}&{{a_{12}}}& \cdots &{{a_{1n}}}\\ {{a_{21}}}&{{a_{22}}}& \cdots &{{a_{2n}}}\\ \cdots & \cdots & \cdots & \cdots \\ {{a_{n1}}}&{{a_{n2}}}& \cdots &{{a_{nn}}} \end{array}} \right],\]
where \(\mathbf{X}\left( t \right)\) is an \(n\)-dimensional vector containing the unknown functions and \(A\) is a square matrix of size \(n \times n.\) A nonlinear autonomous system can be reduced to a linear system of this form by linearization around an equilibrium point. Without loss of generality, we may assume that the equilibrium point is at the origin; this can always be achieved by choosing a suitable coordinate system. The stability or instability of the equilibrium state is determined by the signs of the real parts of the eigenvalues of \(A.\) To find the eigenvalues \(\lambda,\) it is necessary to solve the auxiliary equation
\[\det \left( {A - \lambda I} \right) = 0,\]
which reduces to an algebraic equation of the \(n\)th degree
\[{a_0}{\lambda ^n} + {a_1}{\lambda ^{n - 1}} + {a_2}{\lambda ^{n - 2}} + \cdots + {a_{n - 1}}\lambda + {a_n} = 0.\]
The roots of this equation are easily calculated in the case \(n = 2,\) and in some cases when \(n \ge 3.\) In other cases, solving the auxiliary equation can be a difficult problem.
Moreover, the Norwegian mathematician Niels Henrik Abel \(\left( 1802-1829 \right)\) proved that the general algebraic equation of degree \(n \ge 5\) cannot be solved in radicals, that is, there is no formula expressing the roots of the equation through its coefficients using the four basic arithmetic operations and root extraction in the case \(n \ge 5.\) In such a situation, methods that allow one to determine whether all roots have negative real parts, and thus to establish the stability of the system without solving the auxiliary equation itself, are of great importance. One of these methods is the Routh-Hurwitz criterion, which contains the necessary and sufficient conditions for the stability of the system. Consider again the auxiliary equation
\[{a_0}{\lambda ^n} + {a_1}{\lambda ^{n - 1}} + {a_2}{\lambda ^{n - 2}} + \cdots + {a_{n - 1}}\lambda + {a_n} = 0,\]
describing the dynamic system. Note that a necessary condition for stability is that all the coefficients satisfy \({a_i} \gt 0.\) In what follows we assume that \({a_0} \gt 0.\) We now write the so-called Hurwitz matrix. It is composed as follows. The main diagonal of the matrix contains the elements \({a_1},\) \({a_2}, \ldots ,\) \({a_n}.\) The first column contains numbers with odd indices \({a_1},\) \({a_3},\) \({a_5}, \ldots \) In each row, the index of each following number (counting from left to right) is \(1\) less than the index of its predecessor. All coefficients \({a_i}\) with indices greater than \(n\) or less than \(0\) are replaced by zeros.
The result is a matrix of the form
\[H = \left[ {\begin{array}{*{20}{c}} {{a_1}}&{{a_0}}&0& \cdots &0\\ {{a_3}}&{{a_2}}&{{a_1}}& \cdots &0\\ {{a_5}}&{{a_4}}&{{a_3}}& \cdots &0\\ \cdots & \cdots & \cdots & \cdots & \cdots \\ 0&0&0& \cdots &{{a_n}} \end{array}} \right].\]
The principal diagonal minors \({\Delta _i}\) of the Hurwitz matrix are given by the formulas
\[{\Delta _1} = {a_1},\;\; {\Delta _2} = \left| {\begin{array}{*{20}{c}} {{a_1}}&{{a_0}}\\ {{a_3}}&{{a_2}} \end{array}} \right|,\;\; {\Delta _3} = \left| {\begin{array}{*{20}{c}} {{a_1}}&{{a_0}}&0\\ {{a_3}}&{{a_2}}&{{a_1}}\\ {{a_5}}&{{a_4}}&{{a_3}} \end{array}} \right|,\;\; {\Delta _n} = \left| {\begin{array}{*{20}{c}} {{a_1}}&{{a_0}}&0& \cdots &0\\ {{a_3}}&{{a_2}}&{{a_1}}& \cdots &0\\ {{a_5}}&{{a_4}}&{{a_3}}& \cdots &0\\ \cdots & \cdots & \cdots & \cdots & \cdots \\ 0&0&0& \cdots &{{a_n}} \end{array}} \right|.\]
We now formulate the Routh-Hurwitz stability criterion: The roots of the auxiliary equation have negative real parts if and only if all the principal diagonal minors of the Hurwitz matrix are positive provided that \({a_0} \gt 0:\) \({\Delta _1} \gt 0,\) \({\Delta _2} \gt 0,\ldots,\) \({\Delta _n} \gt 0.\) As \({\Delta _n} = {a_n}{\Delta _{n - 1}},\) the last inequality can be written as \({a_n} \gt 0.\) For the most common systems of the \(2\)nd, \(3\)rd and \(4\)th order, we obtain the following stability criteria: For a second order system, the condition of stability is given by
\[{a_0} \gt 0,\;\; {\Delta _1} = {a_1} \gt 0,\;\; {\Delta _2} = \left| {\begin{array}{*{20}{c}} {{a_1}}&{{a_0}}\\ {{a_3}}&{{a_2}} \end{array}} \right| = {a_1}{a_2} \gt 0\]
or
\[{a_0} \gt 0,\;\; {a_1} \gt 0,\;\; {a_2} \gt 0,\]
that is, all coefficients of the quadratic characteristic equation must be positive. In other words, for a system of \(2\)nd order, the necessary condition of stability is also a sufficient one. We emphasize that we consider here the asymptotic stability of the zero solution.
For a \(3\)rd order system, the stability criterion is defined by the inequalities
\[{a_0} \gt 0,\;\; {\Delta _1} = {a_1} \gt 0,\;\; {\Delta _2} = \left| {\begin{array}{*{20}{c}} {{a_1}}&{{a_0}}\\ {{a_3}}&{{a_2}} \end{array}} \right| = {a_1}{a_2} - {a_0}{a_3} \gt 0,\;\; {\Delta _3} = {a_3} \gt 0\]
or
\[{a_0} \gt 0,\;\; {a_1} \gt 0,\;\; {a_2} \gt 0,\;\; {a_3} \gt 0,\;\; {a_1}{a_2} - {a_0}{a_3} \gt 0.\]
Similarly, for a \(4\)th order system, we obtain the following set of inequalities:
\[{a_0} \gt 0,\;\; {\Delta _1} = {a_1} \gt 0,\;\; {\Delta _2} = \left| {\begin{array}{*{20}{c}} {{a_1}}&{{a_0}}\\ {{a_3}}&{{a_2}} \end{array}} \right| = {a_1}{a_2} - {a_0}{a_3} \gt 0,\;\; {\Delta _3} = \left| {\begin{array}{*{20}{c}} {{a_1}}&{{a_0}}&0\\ {{a_3}}&{{a_2}}&{{a_1}}\\ 0&{{a_4}}&{{a_3}} \end{array}} \right| = {a_1}{a_2}{a_3} - a_1^2{a_4} - {a_0}a_3^2 \gt 0,\;\; {\Delta _4} = {a_4} \gt 0\]
or
\[{a_i} \gt 0\;\left( {i = 0, \ldots ,4} \right),\;\;\; {a_1}{a_2} - {a_0}{a_3} \gt 0,\;\;\; {a_1}{a_2}{a_3} - a_1^2{a_4} - {a_0}a_3^2 \gt 0.\]
If the first \(n - 1\) principal minors of the Hurwitz matrix are positive and the \(n\)th order minor is zero \(\left({\Delta _n} = 0\right),\) the system is at the boundary of stability. As \({\Delta _n} = {a_n}{\Delta _{n - 1}},\) there are two options: The coefficient \({a_n} = 0.\) This corresponds to the case when one of the roots of the auxiliary equation is zero. The system is on the boundary of aperiodic stability. The determinant \({\Delta _{n - 1}} = 0.\) In this case, there are two complex conjugate imaginary roots. The system is on the boundary of oscillatory stability. The Routh-Hurwitz stability criterion belongs to the family of algebraic criteria. It can be conveniently used to analyze the stability of low order systems.
The computational complexity grows significantly with the order. In such cases, it may be preferable to use other criteria, such as the Liénard-Chipart criterion or the Nyquist frequency criterion.
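The construction of the Hurwitz matrix and the minor test described above can be sketched in a few lines of Python (the helper names are mine, and NumPy is used for the determinants):

```python
import numpy as np

def hurwitz_matrix(a):
    """Hurwitz matrix of a0*x^n + a1*x^(n-1) + ... + an, with a = [a0, ..., an]."""
    n = len(a) - 1
    H = np.zeros((n, n))
    for i in range(1, n + 1):         # row (1-based)
        for j in range(1, n + 1):     # column (1-based)
            k = 2 * i - j             # coefficient index; the diagonal holds a_i
            if 0 <= k <= n:
                H[i - 1, j - 1] = a[k]
    return H

def is_hurwitz_stable(a):
    """True iff all roots lie in the open left half-plane (requires a0 > 0)."""
    if a[0] <= 0:
        raise ValueError("normalize so that a0 > 0")
    H = hurwitz_matrix(a)
    return all(np.linalg.det(H[:m, :m]) > 0 for m in range(1, len(H) + 1))

print(is_hurwitz_stable([1, 3, 3, 1]))   # True:  (x + 1)^3, all roots at -1
print(is_hurwitz_stable([1, 1, 1, 10]))  # False: a1*a2 - a0*a3 = -9 < 0
```

The two test polynomials illustrate the 3rd-order criterion above: positivity of the coefficients alone is not sufficient, since the second example fails only the condition \(a_1 a_2 - a_0 a_3 \gt 0.\)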
@JosephWright Well, we still need table notes etc. But just being able to selectively switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the built-in PDF previewer in Firefox: \documentclass[handout]{beamer} \usepackage{pgfpages} \pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm] \begin{document} \begin{frame} \[\bigcup_n \sum_n\] \[\underbrace{aaaaaa}_{bbb}\] \end{frame} \end{d... @Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue.
with the laptop I lose access to the company network and anything I need from there during the next two months, such as the email address of payroll etc etc, needs to be 100% collected first @yo' I'm sorry, I explain badly in English :) I mean, if the rule was to use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself "for fun". I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not.
Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand. @JosephWright it has a series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
Some time ago I posted a question here on this forum. I would like to ask some questions regarding the way the energy per unit area between metallic plates is calculated. The full calculation is on Wikipedia. At some point in the calculation on the relevant Wikipedia page (see the link above), we have the equation: $$\frac{ \langle E \rangle }{ A} = - \frac{ \hbar c \pi^2 }{6a^3}\cdot\zeta(-3) . $$ In the next step, it is written rather casually that $\zeta(-3) = \frac{1}{120} \qquad (*) $. This is true when considering the analytic continuation of the Riemann zeta function or the Ramanujan summation method. Therefore, it is concluded that $$\frac{ \langle E \rangle }{A} = - \frac{ \hbar c \pi^2}{720 a^3} . $$ I am wondering under which circumstances people decided to assume the $(*)$-marked equation is 'true'. I can think of a couple of scenarios:

1. The formula for $\frac{ \langle E \rangle }{A} $ was already derived by means of another method which did not require the use of (regularised) divergent sums. Therefore, physicists could infer that $\zeta(-3)$ had to be equal to $ \frac{1}{120} $, making the derivation of the formula by means of this method, which does use divergent series, correct.

2. The exact formula for $\frac{ \langle E \rangle }{A} $ was not already known. Physicists did have some data points that roughly showed them how the formula should look. Therefore, they tried some different constants for $\zeta(-3)$. At some point they guessed $\zeta(-3) = \frac{1}{120} $, which yielded a formula that coincided with the known data points. They might have already known that $\zeta(-3) = \frac{1}{120} $ by means of zeta function regularisation, making it easier to use this equation as a "guess" to find a suitable formula for $\frac{ \langle E \rangle }{A} $.

3. Some other scenario.

Which scenario roughly describes how the formula for $\frac{ \langle E \rangle }{A} $ came into existence?
If it was scenario 1, which other method did physicists formerly employ to derive the formula? If it was scenario 3, how did this whole process unfold? Thanks a lot, Max
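Incidentally, the value of ζ(−3) itself can be checked independently of any physics: the analytic continuation satisfies ζ(−n) = −B_{n+1}/(n+1), and the Bernoulli numbers can be computed exactly by recurrence. A sketch in Python (helper names are mine; note the continuation gives ζ(−3) = +1/120):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_n as an exact rational (convention B_1 = -1/2)."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for m in range(1, n + 1):
        # recurrence: sum_{j=0}^{m} C(m+1, j) B_j = 0
        B[m] = -sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1)
    return B[n]

def zeta_neg(n):
    """zeta(-n) for a positive integer n, via zeta(-n) = -B_{n+1}/(n+1)."""
    return -bernoulli(n + 1) / (n + 1)

print(zeta_neg(3))  # 1/120, so -pi^2/6 * zeta(-3) = -pi^2/720 as in the Casimir result
print(zeta_neg(1))  # -1/12, the regularized value assigned to 1 + 2 + 3 + ...
```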
The definition of a risk-neutral probability measure depends on the model. The (one factor) Interest Rate Model in Shreve II consists of a single zero-coupon bond $B(t,T)$ with maturity $T$ and of a money market account. So we want the discounted bond price to be a martingale under the risk-neutral probability measure. We define it as usual (i.e. Shreve II, 5.2.2.): Assume that the interest rate $R(t)$ and the bond $B(t,T)$ processes satisfy their respective stochastic differential equations under the actual probability: $$ dR(t) = \xi(t,R(t))dt + \phi(t, R(t))dW(t)$$$$ dB(t,T) = \mu(t,T)B(t,T)dt + \sigma(t,T)B(t,T)dW(t)$$ where $W(t)$ is a Brownian motion. The discount process is $D(t) = e^{-\int_0^t R(s)ds}$, so as usual $ dD(t) = -R(t)D(t)dt$. We want the discounted bond price to be a martingale: $$ d(D(t)B(t,T)) = D(t)\big(dB(t,T) - R(t)B(t,T)dt\big) = D(t)B(t,T)\sigma(t,T)\Big(\frac{\mu(t,T) -R(t)}{\sigma(t,T)}dt + dW(t)\Big) = D(t)B(t,T)\sigma(t,T)\Big(\theta(t)dt + dW(t)\Big)$$ where we defined the market price of risk $\theta(t) = \frac{\mu(t,T) -R(t)}{\sigma(t,T)}$. We introduce the risk-neutral probability measure $\tilde{\mathbb{P}}$ using Girsanov's theorem as usual. The above considerations do not depend on the form of the SDE for the interest rate process $R(t)$, so it is fine to start right from the risk-neutral probability measure, as is done in Shreve's book.
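In the degenerate case of constant coefficients, the martingale property under $\tilde{\mathbb{P}}$ can be sanity-checked by Monte Carlo: for a toy asset with risk-neutral dynamics $dS = RS\,dt + \sigma S\,d\tilde{W}$, the discounted price $e^{-RT}S(T)$ must average to $S(0)$. A sketch (all parameter values are arbitrary, and this deliberately sidesteps the bond's pull-to-par at maturity):

```python
import numpy as np

# Toy check: for dS = R*S dt + sigma*S dW~ under the risk-neutral measure,
# E[e^{-RT} S(T)] = S(0) -- the discounted price is a martingale.
rng = np.random.default_rng(0)
S0, R, sigma, T, n_paths = 1.0, 0.03, 0.2, 1.0, 200_000

W = rng.standard_normal(n_paths) * np.sqrt(T)             # W~(T)
S_T = S0 * np.exp((R - 0.5 * sigma**2) * T + sigma * W)   # exact GBM solution
discounted_mean = np.exp(-R * T) * S_T.mean()
print(discounted_mean)  # close to S0 = 1.0
```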
Simply notice that $\rm\displaystyle\ \frac{n}3\ +\ \frac{2\:n}3\ =\ n\in \mathbb Z\ \ $ therefore $\rm\displaystyle\ \frac{n}3\in\mathbb Z\ \iff\ \frac{2\:n}3\in \mathbb Z$

This is true precisely because $\rm\:\mathbb Z\:$ is an additive subgroup of $\rm\:\mathbb Q\:,\:$ i.e. a subset closed under subtraction. For if $\rm\:S\:$ is a subgroup of a group and $\rm\ a+b\ = s \in S\ $ then $\rm\ a = b-s \in S\iff\ a+s = b\in S\:,\ $ so your property holds. Conversely if your property holds and $\rm\:a,b\in S\ $ then since $\rm\ (a-b)+b = a \in S\ $ the property implies that $\rm\: a-b\in S\:,\: $ so $\rm\:S\:$ is closed under subtraction, so $\rm\:S\:$ is a subgroup (or empty). See also this complementary form of the subgroup property from my prior post.

THEOREM $\ $ A nonempty subset $\rm\:S\:$ of abelian group $\rm\:G\:$ comprises a subgroup $\rm\iff\ S\ + \ \bar S\ =\ \bar S\ $ where $\rm\: \bar S\:$ is the complement of $\rm\:S\:$ in $\rm\:G$

Instances of this are ubiquitous in concrete number systems, e.g.

transcendental: algebraic * nonalgebraic = nonalgebraic, if nonzero
rational * irrational = irrational, if nonzero
real * nonreal = nonreal, if nonzero
even + odd = odd
additive example: integer + noninteger = noninteger
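The n/3 versus 2n/3 equivalence is easy to confirm mechanically with exact rationals (a small sketch; the helper name is mine):

```python
from fractions import Fraction

def is_integer(q):
    return q.denominator == 1

# a + b = n is an integer, so a is an integer iff b is
for n in range(-300, 301):
    a, b = Fraction(n, 3), Fraction(2 * n, 3)
    assert is_integer(a) == is_integer(b)
print("checked n in [-300, 300]")
```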
How do I "formally write" a rational number $a_i$ in a logic formula? For example, I was taught that $x^2$ should be formally written as $F_\times(x_1,x_1)$, $1$ should be formally written as $c_1$, $2$ should be formally written as $F_+(c_1,c_1)$ and so on. I hope my question isn't too ambiguous. This is related to my previous question: How to show that the property of being algebraically closed is reflected by elementary extensions? The main idea is I want to write out the formula $\phi_n$ [credit to André Nicolas who suggested it] , $$\forall w_0\forall w_1\cdots\forall w_n \exists x\left(x^{n+1}+w_nx^n+\cdots+w_0=0\right).$$ and specify that all the $w_i$ are rationals. background information about the underlying question If $p(x)=x^{n+1}+a_nx^n+\cdots+a_1x+a_0$ is a polynomial such that $\{a_0,\cdots,a_n\}\subset\mathbb{Q}$, then $p(x)=0$ has a solution in $F$. Assume that the structure $(F,0,1,+,\cdot)$ is a countable elementary submodel of the complex field $(\mathbb{C},0,1,+,\cdot)$. Sincere thanks for any help!
There are three techniques used in CV that seem very similar to each other, but with subtle differences: Laplacian of Gaussian: $\nabla^2\left[g(x,y,t)\ast f(x,y)\right]$ Difference of Gaussians: $ \left[g_1(x,y,t)\ast f(x,y)\right] - \left[g_2(x,y,t)\ast f(x,y)\right]$ Convolution with Ricker wavelet: $\textrm{Ricker}(x,y,t)\ast f(x,y)$ As I understand it currently: DoG is an approximation of LoG. Both are used in blob detection, and both perform essentially as band-pass filters. Convolution with a Mexican Hat/Ricker wavelet seems to achieve very much the same effect. I've applied all three techniques to a pulse signal (with requisite scaling to get the magnitudes similar) and the results are pretty darn close. In fact, LoG and Ricker look nearly identical. The only real difference I noticed is with DoG, I had 2 free parameters to tune ($\sigma_1$ and $\sigma_2$) vs 1 for LoG and Ricker. I also found the wavelet was the easiest/fastest, as it could be done with a single convolution (done via multiplication in Fourier space with FT of a kernel) vs 2 for DoG, and a convolution plus a Laplacian for LoG. What are the comparative advantages/disadvantages of each technique? Are there different use-cases where one outshines the other? I also have the intuitive thought that on discrete samples, LoG and Ricker degenerate to the same operation, since $\nabla^2$ can be implemented as the kernel $$\begin{bmatrix}-1,& 2,& -1\end{bmatrix}\quad\text{or}\quad\begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}\quad\text{for 2D images}$$ Applying that operation to a gaussian gives rise to the Ricker/Hat wavelet. Furthermore, since LoG and DoG are related to the heat diffusion equation, I reckon that I could get both to match with enough parameter fiddling. (I'm still getting my feet wet with this stuff, so feel free to correct/clarify any of this!)
The Fundamental Theorem of Finitely Generated Abelian groups is like the Fundamental Theorem of Arithmetic: it describes a "canonical way" of expressing a finitely generated abelian group as a direct sum (in fact, two different ways), in a way that is "essentially unique", and where two groups are isomorphic if and only if they have the same "canonical way of being described." The analogy with the Fundamental Theorem of Arithmetic is that the latter tells you that there is a unique way (up to order) of expressing a positive integer as a product of powers of distinct primes; it does not tell you that there is only one way of expressing a positive integer as a product. So, the fact that we can write $36$ as $6\times 6$, with neither factor a prime power, does not contradict the Fundamental Theorem of Arithmetic. The Fundamental Theorem of Arithmetic is reflected in the fact that we can write $36$ as a product of powers of distinct primes (namely $2^2\times 3^2$) and that this is the only way to express $36$ as a product of powers of distinct primes (up to order). But it says nothing about other ways of expressing $36$ as a product. You can also use the Fundamental Theorem of Arithmetic to say that every positive integer can be written as $n=q_1q_2\cdots q_m$, where $q_1\leq q_2\leq\cdots\leq q_m$ are all primes, and this expression is unique in that if $n=p_1p_2\cdots p_k$ with $p_1\leq p_2\leq\cdots\leq p_k$ primes, then $k=m$ and $p_i=q_i$ for each $i$. Even though you have two different expressions, each one is "unique within its domain".
The Fundamental Theorem for Finitely Generated Abelian groups says that you have two different "canonical decompositions": one into cyclic groups of prime power order, and one into numbers that divide each other: Every finitely generated abelian group $G$ can be written as$$G\cong \mathbb{Z}^r \oplus \mathbb{Z}_{p_1^{a_1}}\oplus\cdots\oplus \mathbb{Z}_{p_k^{a_k}}$$where $r,k$ are nonnegative integers, $p_1,\ldots,p_k$ are primes, and $a_1,\ldots,a_k$ are positive integers. Moreover, this expression is unique in the sense that if$$G\cong\mathbb{Z}^s\oplus\mathbb{Z}_{q_1^{b_1}}\oplus\cdots\oplus \mathbb{Z}_{q_{\ell}^{b_{\ell}}}$$with $s,\ell$ nonnegative integers, $q_1,\ldots,q_{\ell}$ primes, and $b_1,\ldots,b_{\ell}$ positive integers, then $r=s$, $k=\ell$, and there is a permutation $\sigma$ of $\{1,\ldots,k\}$ such that $p_i=q_{\sigma(i)}$ and $a_i=b_{\sigma(i)}$ for all $i$. Every finitely generated abelian group $G$ can be written as$$G\cong \mathbb{Z}^r\oplus\mathbb{Z}_{n_1}\oplus\cdots\oplus\mathbb{Z}_{n_t}$$where $r,t$ are nonnegative integers, $n_1,\ldots,n_t$ are positive integers greater than $1$, and $n_t|n_{t-1}|\cdots|n_1$; moreover, the expression is unique in the sense that if $G$ can also be written as$$G\cong \mathbb{Z}^s\oplus\mathbb{Z}_{m_1}\oplus\cdots\oplus \mathbb{Z}_{m_u}$$where $s,u$ are nonnegative integers, $m_1,\ldots,m_u$ are positive integers greater than $1$, and $m_u|m_{u-1}|\cdots|m_1$, then $r=s$, $t=u$, and $m_i=n_i$ for each $i$. For $\mathbb{Z}_6$, the first format of the decomposition says that we can write it as $\mathbb{Z}_2\oplus\mathbb{Z}_3$, and that this is the only way to write it as a direct sum of cyclic groups of prime power order (except for the trivial $\mathbb{Z}_3\oplus\mathbb{Z}_2$, which is really "the same way").
The second part says that we can also write it as $\mathbb{Z}_6$, and that this is the only way to write it as a direct sum of cyclic groups in such a way that the order of each one divides the order of the previous one. That is, we have two different "unique factorizations", depending on which format you want to use.
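The passage between the two canonical forms is mechanical: group the prime-power summands by prime, sort each group by descending exponent, and multiply the largest remaining power of each prime into the next invariant factor. A small sketch (the function name and representation are mine):

```python
from collections import defaultdict

def invariant_factors(prime_powers):
    """Convert a list [(p, a), ...] describing Z_{p^a} summands into
    the invariant-factor list [n_1, n_2, ...] with n_t | ... | n_2 | n_1."""
    by_prime = defaultdict(list)
    for p, a in prime_powers:
        by_prime[p].append(p ** a)
    for p in by_prime:
        by_prime[p].sort(reverse=True)  # largest power of each prime first
    factors = []
    i = 0
    while True:
        # n_{i+1} is the product of the (i+1)-th largest power of each prime
        n = 1
        for p in by_prime:
            if i < len(by_prime[p]):
                n *= by_prime[p][i]
        if n == 1:
            break
        factors.append(n)
        i += 1
    return factors

# Z_2 ⊕ Z_3 has the single invariant factor 6, i.e. it is Z_6
print(invariant_factors([(2, 1), (3, 1)]))          # → [6]
# Z_4 ⊕ Z_2 ⊕ Z_3 ≅ Z_12 ⊕ Z_2
print(invariant_factors([(2, 2), (2, 1), (3, 1)]))  # → [12, 2]
```

The divisibility chain $n_t \mid \cdots \mid n_1$ holds automatically, because each prime's exponents were sorted in descending order.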
The answer is certainly "Yes", but this is the problem I met in Algebraic Number Theory by Neukirch. I guess that I must be doing something wrong, since otherwise I will get the statement "There are no totally ramified extensions except the trivial ones". Let $K$ be a Henselian field, $L/K$ a finite, totally ramified extension. Let $\lambda$ and $\kappa$ be the residue fields of $L$ and $K$ respectively. Because $L/K$ is totally ramified, $K$ is the maximal unramified subextension, so we have $\lambda=\kappa$. If $L\ne K$, let $a \in L-K$. Since the valuation is non-trivial, by multiplying by an element in $K$, we can suppose $a \in O_L$, the valuation ring of $L$. Because "the valuation ring of $L$ is the integral closure of the valuation ring of $K$ in $L$" (p. 144, Chapter 2, Theorem (6.2) of Neukirch), let $f(x) \in O_K[x]$ be the minimal polynomial of $a$ over $O_K$, where $O_K$ is the valuation ring of $K$. (One can prove $f(x)$ is monic, and I guess it may differ from the minimal polynomial over $K$.) Let $\bar{f}(x)$ be the corresponding polynomial over the residue field $\kappa$. It must be the minimal polynomial of $\bar{a} \in \lambda$, because otherwise, by Hensel's lemma, a factorization of $\bar{f}$ in $\kappa[x]$ would imply a factorization of $f$ in $O_K[x]$. But since $\lambda=\kappa$, we get $\deg(\bar{f})=\deg(f)=1$ ($f$ is monic). This means $a\in K$, a contradiction. This means $L=K$!
Computer Science > Machine Learning
Title: Finite Precision Stochastic Optimization -- Accounting for the Bias
(Submitted on 22 Aug 2019 (v1), last revised 26 Aug 2019 (this version, v2))
Abstract: We consider first order stochastic optimization where the oracle must quantize each subgradient estimate to $r$ bits. We treat two oracle models: the first where the Euclidean norm of the oracle output is almost surely bounded and the second where it is mean square bounded. Prior work in this setting assumes the availability of unbiased quantizers. While this assumption is valid in the case of almost surely bounded oracles, it does not hold true for the standard setting of mean square bounded oracles, and the bias can dramatically affect the convergence rate. We analyze the performance of standard quantizers from prior work in combination with projected stochastic gradient descent for both these oracle models and present two new adaptive quantizers that outperform the existing ones. Specifically, for almost surely bounded oracles, we establish first a lower bound for the precision needed to attain the standard convergence rate of $T^{-\frac 12}$ for optimizing convex functions over a $d$-dimensional domain. Our proposed Rotated Adaptive Tetra-iterated Quantizer (RATQ) is merely a factor $O(\log \log \log^\ast d)$ far from this lower bound. For mean square bounded oracles, we show that a state-of-the-art Rotated Uniform Quantizer (RUQ) from prior work would need at least $\Omega(d\log T)$ bits to achieve the convergence rate of $T^{-\frac 12}$, using any optimization protocol. However, our proposed Rotated Adaptive Quantizer (RAQ) outperforms RUQ in this setting and attains a convergence rate of $T^{-\frac 12}$ using a precision of only $O(d\log\log T)$.
For mean square bounded oracles, in the communication-starved regime where the precision $r$ is fixed to a constant independent of $T$, we show that RUQ cannot attain a convergence rate better than $T^{-\frac 14}$ for any $r$, while RAQ can attain convergence at rates arbitrarily close to $T^{-\frac 12}$ as $r$ increases.
Submission history: From Himanshu Tyagi. [v1] Thu, 22 Aug 2019 04:57:22 GMT (49kb). [v2] Mon, 26 Aug 2019 04:56:31 GMT (49kb).
Forewords The main purpose of this post is to show how to simulate data under the assumptions of logistic models. The technique introduced should also be useful for generating simulated data to study other machine learning algorithms. But before the simulation, I will first introduce some key concepts of logistic regression. About the logistic function Logistic regression uses the logistic function to denote the probability of the occurrence of an outcome \(Y = 1\), given an input \(X\): \[ \begin{align} p(X) &= \frac{e^{X}}{e^{X} + 1} \\ &= \frac{1}{1 + e^{-X}} \end{align} \quad (1) \] It is apparent that the probability has a positive association with \(X\): \(p \to 1\) as \(X\) approaches \(+\infty\), and \(p \to 0\) as \(X\) approaches \(-\infty\). It is also worth noting that when \(X > 0\), \(p > 0.5\) and the occurrence of the event is therefore predicted. The input \(X\) can include one or more predictors, and the combination of the predictors can be linear or non-linear. For instance, if \(X\) is a linear function of one predictor \(x_1\), then \(X\) can be represented as \(\beta_1 x_1\). Typically, \(X\) also includes an intercept term \(\beta_0\), so that \(X = \beta_0 + \beta_1 x_1\). Then the logistic function for this instance becomes: \[ p(X) = \frac{1}{1 + e^{-(\beta_0 + \beta_1 x_1)}} \quad (2) \] About the logit function The logistic function provides a good representation of the relationship between the probability of an outcome and its predictors, but it does not provide a straightforward interpretation of the coefficients (\(\beta_0, \beta_1, etc\)). An alternative representation of the relationship which makes the interpretation a lot easier is the logit function.
The logit (log odds) function (denoted by \(g\) below) is the inverse of the logistic function, which takes the following form: \[ \begin{align} g(p(X)) &= \ln(\frac{p(X)}{1 - p(X)}) \\ &= X \\ \end{align} \quad (3) \] Note that \(\frac{p(X)}{1 - p(X)}\) is the odds of the occurrence of the outcome. By exponentiating both sides, we get: \[ \begin{align} \frac{p(X)}{1 - p(X)} &= e^X \\ \end{align} \quad (4) \] The above formula provides a straightforward interpretation for the coefficients in \(X\). For example, still assuming \(X = \beta_0 + \beta_1 x_1\), every 1-unit increase in \(x_1\) will result in the multiplication of the odds by \(e^{\beta_1}\). \(e^{\beta_1}\) can also be seen as the ratio of the odds between two groups of observations with only a 1-unit difference in \(x_1\). In this case, \(e^{\beta_1}\) is said to be the odds ratio, which approximates the risk ratio 2 when \(p(X)\) is small. About maximum likelihood estimation Logistic regression uses maximum likelihood estimation to estimate the coefficients in \(X\). See the wiki page for more details. An experiment to simulate data for logistic regression In this example, I simulate a data set with known distribution and fit a logistic regression model to see how close the result is to the truth. Consider the probability of an event given input \(X\), as indicated by formula (1). Suppose \(X = -2 + 3x_1^2 + x_2 + \epsilon\), where \(x_1 \in [-1, 1]\) is continuous, \(x_2 \in \{0, 1\}\) is binary (categorical) and \(\epsilon \sim N(0, 0.1^2)\) is random noise.
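The odds-ratio reading of \(e^{\beta_1}\) is easy to verify numerically; a quick sketch (coefficients chosen arbitrarily, in Python rather than the R used below):

```python
import math

b0, b1 = -2.0, 0.7  # arbitrary illustrative coefficients

def odds(x1):
    """Odds p/(1-p) under the logistic model p = 1/(1+e^-(b0+b1*x1))."""
    p = 1 / (1 + math.exp(-(b0 + b1 * x1)))
    return p / (1 - p)

# A 1-unit increase in x1 multiplies the odds by e^{b1},
# regardless of the baseline value of x1
ratio = odds(1.3) / odds(0.3)
print(ratio, math.exp(b1))  # the two values agree
```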
Then formula (1) becomes: \[ p(X) = \frac{1}{1 + e^{-(-2 + 3x_1^2 + x_2 + \epsilon)}} \quad (5) \] A data set that reflects the above definition can be simulated with the following code:

library(tidyverse)
set.seed(1)
sample_size <- 300
dat <- tibble(
  x1 = runif(sample_size, min = -1, max = 1),
  x2 = rbinom(sample_size, 1, 0.5),
  e = rnorm(sample_size, sd = 0.1),
  p = 1 / (1 + exp(-(-2 + 3 * x1^2 + x2 + e))),
  log_odds = log(p / (1 - p)),
  y = rbinom(sample_size, 1, p)
)

Figure 1 shows the simulated data set with the probability of the event in the y-axis. Now, let’s fit a logistic model on the data using the glm() function. Note how a quadratic term of x1 is represented in the formula to reflect the underlying definition.

fit <- glm(y ~ I(x1^2) + x2, data = dat, family = binomial)
fit
#>
#> Call:  glm(formula = y ~ I(x1^2) + x2, family = binomial, data = dat)
#>
#> Coefficients:
#> (Intercept)      I(x1^2)           x2
#>      -2.070        2.747        1.088
#>
#> Degrees of Freedom: 299 Total (i.e. Null);  297 Residual
#> Null Deviance:      388.5
#> Residual Deviance: 342.2    AIC: 348.2

The result shows that \(X = -2.070 + 2.747x_1^2 + 1.088x_2\), which is fairly close to the truth (i.e., \(X = -2 + 3x_1^2 + x_2\)). The predict() function can be used to get the predicted probability, which can then be used to get the prediction.

dat <- dat %>%
  mutate(p_predicted = predict(fit, type = "response"),
         y_predicted = if_else(p_predicted > 0.5, 1L, 0L))

Figure 2 shows the predicted probability of the outcome for each observation in the data set. The pattern shown in Figure 2 is comparable to that shown in Figure 1. The following code shows the training error of the model:

mean(dat$y != dat$y_predicted)
#> [1] 0.2733333

An unexpected discovery: Logistic regression is a parametric method, so at first I expected the experiment results would have low variance 3 (see my other post on the topic) even with a small sample size. But it didn’t turn out that way.
To put the issue in a comparable context, assume \(g(p(X))\) from formula (3) is the continuous outcome to be predicted; a linear regression model would need a much smaller sample size to get robust estimates of the coefficients in \(X\). To be fair, the variance was not caused by the logistic regression algorithm itself; rather, it originated from the nature of classification problems. Given the right predictors and assumptions, the uncertainty of a continuous outcome is only affected by random noise. However, the same cannot be said for a categorical outcome. Even without random noise, the outcome is still uncertain unless the probability of the outcome is 1 or 0, which is rarely the case. Therefore, it is very likely that a small data set does not reflect the underlying assumption due to variance caused by the uncertainty. In addition, even if a good model that is consistent with the truth is built (like the one shown in the experiment), the misclassification error rate may still be high due to the aforementioned uncertainty. With this in mind, sometimes it may be preferable to use other metrics (predicted probability, sensitivity, specificity, etc.) instead of misclassification error to evaluate the performance of classification models.

Risk ratio, abbreviated as RR, is the ratio of the probability of an event occurring in a group of observations to the probability of the event occurring in another group of observations.

You can run the code with different random seeds to see how the estimates of the coefficients change every time.
On the global branch of positive radial solutions of an elliptic problem with singular nonlinearity 1. Department of Mathematics, Henan Normal University, Xinxiang, 453007, China 2. Department of Mathematics, Dong Hua University, Shanghai, 200051, China $\Delta u=\lambda [\frac{1}{u^p}-\frac{1}{u^q}]$ in $B$, $u=\kappa \in (0,(\frac{p-1}{q-1})^{-1/(p-q)} ]$ on $\partial B$, $0 < u < \kappa$ in $B$, where $p > q > 1$ and $B$ is the unit ball in $\mathbb R^N$ ($N \geq 2$). We show that there exists $\lambda_\star>0$ such that for $0<\lambda <\lambda_\star$, the maximal solution is the only positive radial solution. Furthermore, if $2 \leq N < 2+\frac{4}{p+1} (p+\sqrt{p^2+p})$, the branch of positive radial solutions must undergo infinitely many turning points as the maxima of the radial solutions on the branch go to 0. The key ingredient is the use of a monotonicity formula. Keywords: semilinear elliptic problems with singularity, global branch, infinitely many turning points. Mathematics Subject Classification: Primary: 35B45; Secondary: 35J4. Citation: Zongming Guo, Xuefei Bai. On the global branch of positive radial solutions of an elliptic problem with singular nonlinearity. Communications on Pure & Applied Analysis, 2008, 7 (5) : 1091-1107. doi: 10.3934/cpaa.2008.7.1091
2018 Impact Factor: 0.925
I would like to show that for a Dirac spinor $\psi$, the scalar product $\bar\psi\psi$ transforms as a scalar under a Lorentz transformation $\Lambda$, where $\bar\psi = \psi^\dagger\gamma^0$. This is exercise II.1.1 a) of Zee's QFT in a Nutshell. This is what I have tried so far: $\psi$ transforms as $\psi\mapsto\psi' = S(\Lambda)\psi = e^{-\frac i4\omega_{\mu\nu}\sigma^{\mu\nu}}\psi$, where $\sigma^{\mu\nu}=\frac i2[\gamma^\mu,\gamma^\nu]$ are the generators of the Lorentz Lie algebra, and $\omega_{\mu\nu}$ are the coefficients of the Lorentz transformation $\Lambda$. So we have the transformation \begin{align} \bar\psi\psi \mapsto \bar\psi^{\,\prime}\psi^{\,\prime} & = {\psi^{\,\prime}}^\dagger{\gamma^0}^\dagger\psi^{\,\prime} = (S(\Lambda)\psi)^\dagger\gamma^0(S(\Lambda)\psi) \\ & = \psi^\dagger S(\Lambda)^\dagger\gamma^0 S(\Lambda)\psi\\ & = \tag{1}\psi^\dagger e^{\frac i4\omega_{\mu\nu}{\sigma^{\mu\nu}}^\dagger}\gamma^0e^{-\frac i4\omega_{\mu\nu}\sigma^{\mu\nu}}\psi \end{align} In the last line, I have a situation similar to $e^ABe^{-A}$ which is normally evaluated in an expansion (first-order) as $e^{\lambda A}Be^{-\lambda A} \simeq B+\lambda[A,B]+O(\lambda^2)$. This would make sense if the commutator $[A,B]$ vanishes, because then (1) would be equal to $\psi^\dagger\gamma^0\psi=\bar\psi\psi$, which is what we want to prove. But in this case, this does not work, since in the first exponential we have a ${\sigma^{\mu\nu}}^\dagger$. So I tried to calculate this first: \begin{align} {\sigma^{\mu\nu}}^\dagger & = -\frac i2(\gamma^\mu\gamma^\nu-\gamma^\nu\gamma^\mu)^\dagger\\ & = \frac i2({\gamma^\mu}^\dagger{\gamma^\nu}^\dagger-{\gamma^\nu}^\dagger{\gamma^\mu}^\dagger)\\ & = \frac i2(\gamma^0\gamma^\mu\gamma^0\gamma^0\gamma^\nu\gamma^0-\gamma^0\gamma^\nu\gamma^0\gamma^0\gamma^\mu\gamma^0)\\ \tag{2}& = \frac i2(\gamma^0\gamma^\mu\gamma^\nu\gamma^0-\gamma^0\gamma^\nu\gamma^\mu\gamma^0) \end{align} How to proceed? 
How can I use (2) to transform (1) into $\bar\psi\psi$?
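A sketch of the standard finishing step (this is my addition, not part of the question): line (2) says exactly that $${\sigma^{\mu\nu}}^\dagger = \gamma^0\,\frac i2\left(\gamma^\mu\gamma^\nu - \gamma^\nu\gamma^\mu\right)\gamma^0 = \gamma^0\sigma^{\mu\nu}\gamma^0\,.$$ Since $\left(\gamma^0\right)^2 = 1$, every power obeys $\left({\sigma^{\mu\nu}}^\dagger\right)^k = \gamma^0\left(\sigma^{\mu\nu}\right)^k\gamma^0$ (and likewise for powers of the contraction $\omega_{\mu\nu}{\sigma^{\mu\nu}}^\dagger$), so summing the exponential series term by term gives $$e^{\frac i4\omega_{\mu\nu}{\sigma^{\mu\nu}}^\dagger} = \gamma^0 e^{\frac i4\omega_{\mu\nu}\sigma^{\mu\nu}}\gamma^0\,, \qquad\text{i.e.}\qquad e^{\frac i4\omega_{\mu\nu}{\sigma^{\mu\nu}}^\dagger}\gamma^0 = \gamma^0\, e^{\frac i4\omega_{\mu\nu}\sigma^{\mu\nu}}\,.$$ Substituting into (1) then yields $\psi^\dagger\gamma^0 e^{\frac i4\omega_{\mu\nu}\sigma^{\mu\nu}} e^{-\frac i4\omega_{\mu\nu}\sigma^{\mu\nu}}\psi = \bar\psi\psi$, which is the statement $S(\Lambda)^\dagger\gamma^0 S(\Lambda) = \gamma^0$, with no first-order expansion needed.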
I know that in conformal field theories the conformal group acts not by pushforwards but (e.g. for a scalar field $\phi$ with conformal dimension $\Delta$) $$ \phi(x) \mapsto \phi'(x') = \left| \frac{\partial x^{' \mu}}{\partial x^\nu} \right|^{-\frac{\Delta}{d}}(x) \phi(x)\,. $$ On the other hand, I know how the conformal algebra acts on fields at the origin. Indeed, finite dimensionality of the ''index space'', Schur's lemma and the conformal algebra structure fix the action to $$ \mathcal K_\mu \phi(0) = 0\,,\quad \mathcal D \phi(0) = \Delta \phi(0)\,,\quad \mathcal M_{\mu \nu} \phi(0) = R_{\mu \nu} \phi(0)\, $$ where $R_{\mu \nu} = r(\mathcal M_{\mu \nu})$ for some irrep $r$ of $\mathfrak{so}$. It is also natural to demand $\mathcal P_\mu \phi(x) = \partial_\mu \phi(x)$. Here I denoted conformal Killing vectors by calligraphic letters: \begin{gather} \mathcal P_\mu = \partial_\mu,\;\; \mathcal M_{\mu \nu} = x_\mu \partial_\nu - x_\nu \partial_\mu,\\ \mathcal D = x^\mu \partial_\mu,\;\; \mathcal K_\mu = 2 x_\mu x^\nu \partial_\nu - x^2 \partial_\mu. \end{gather} I want to derive the first formula by exponentiating the Lie algebra, but I get some nonsense even for dilations. Using the Hausdorff formula I get $$ \mathcal D \phi(x) = (\Delta + x \cdot \partial) \phi(x)\,. $$ I expect that $e^{t \mathcal D}$ corresponds to the dilation $x \mapsto x' = e^{t} x$. So I should have $$\phi(x) \mapsto \phi'(x') = e^{- t \Delta} \phi(x).$$ But instead I have \begin{multline} e^{t \mathcal D} \phi(x) = e^{t \Delta} e^{t x \cdot \partial} \phi(x) = e^{t \Delta} \lim_{n \to \infty} \left( 1 + \frac{t x \cdot \partial}{n} \right)^n \phi(x)\\ = e^{t \Delta} \lim_{n \to \infty} \phi\left( \left( 1 + \frac{t}{n} \right)^n x \right) = e^{t \Delta} \phi\left( e^{t} x \right) = e^{t \Delta} \phi(x'). \end{multline} Even if the overall factor could be fixed by a sign somewhere, the primed argument is the issue. The situation becomes even worse in quantum theory.
In QFT we consider the complexification of the conformal algebra with basis \begin{equation} P_\mu = - i \mathcal P_\mu \,, \quad M_{\mu \nu} = i \mathcal M_{\mu \nu} \,, \quad D = - i \mathcal D \,, \quad K_\mu = - i \mathcal K_\mu \,. \end{equation} Their commutators are written down in https://en.wikipedia.org/wiki/Conformal_symmetry#Commutation_relations The conformal algebra acts on the space of operators by commutators such that (consider a primary operator) \begin{equation} [K_\mu, \mathcal O(0)] = 0,\quad [D, \mathcal O(0)] = \Delta \mathcal O(0),\quad [M_{\mu \nu}, \mathcal O(0)] = R_{\mu \nu} \mathcal O(0). \end{equation} We still want $[\mathcal P_\mu, \mathcal O(x)] = \partial_\mu \mathcal O(x)$, so $P_\mu$ acts like the momentum operator in the coordinate representation: $[P_\mu, \mathcal O(x)] = - i \partial_\mu \mathcal O(x)$. Analogously, using the Hausdorff formula we get $\mathcal O(x) = e^{x \cdot \mathcal P} \mathcal O(0) e^{-x \cdot \mathcal P}$ and $$ [D, \mathcal O(x)] = (\Delta + i x \cdot \partial) \mathcal O(x)\,. $$ Now we have an $i$ and I don't see such a nice trick as above with ''putting derivatives as arguments inside the function'' (please don't say: ''Don't mind: analytical continuation, Wick rotation...''). To summarize: how can the finite form of conformal transformations be derived from the infinitesimal one? There are some words in Simmons-Duffin: https://arxiv.org/abs/1602.07982 but I find them completely unconvincing.
Di Francesco says that we can do it, but then doesn't do it :) Conformal Killing vectors commutation relations: \begin{align} [\mathcal M_{\mu \nu}, \mathcal M_{\rho \lambda}] &=\eta_{\mu \lambda} \mathcal M_{\nu \rho} + \eta_{\nu \rho} \mathcal M_{\mu \lambda} - \eta_{\mu \rho} \mathcal M_{\nu \lambda} - \eta_{\nu \lambda} \mathcal M_{\mu \rho}\,,\\ [\mathcal M_{\mu \nu}, \mathcal P_\lambda] &= \eta_{\nu \lambda} \mathcal P_\mu - \eta_{\mu \lambda} \mathcal P_\nu\,,\\ [\mathcal D, \mathcal P_\mu ] &= - \mathcal P_\mu\,,\\ [\mathcal D, \mathcal K_\mu ] &= \mathcal K_\mu\,,\\ [\mathcal M_{\mu \nu}, \mathcal K_\lambda] &= \eta_{\nu \lambda} \mathcal K_\mu - \eta_{\mu \lambda} \mathcal K_\nu\,,\\ [\mathcal P_\mu, \mathcal K_\nu ] &= 2 \eta_{\mu \nu} \mathcal D + 2 \mathcal M_{\mu \nu}\,. \end{align} Edit. It seems that I actually performed the Simmons-Duffin argument for dilations. And it seems that he used another definition of the action of the conformal group: for $x \mapsto x'$ $$ \phi(x) \mapsto \phi'(x') = \left| \frac{\partial x^{' \mu}}{\partial x^\nu} \right|^{\frac{\Delta}{d}}(x) \phi(x') $$ (according to formula (55), which is just the answer to (kind of) my question, without derivation (there is an argument about factorization of the infinitesimal Jacobian matrix (formula 25) but I don't see how to use it here)). But there is another nice text: https://arxiv.org/abs/1805.04405 which, on one hand, relies on Simmons-Duffin for that, but on the other hand uses another representation (formula (18))...
R squared, also termed the coefficient of determination and written either R2 or R-squared in mathematics, is a number indicating how much of the variance in the dependent variable can be predicted from the independent variable. It is used in statistical models for future predictions or outcomes, and in hypothesis-testing techniques. The linear relationship between the dependent and independent variables is captured by the correlation coefficient \(R\), given by the formula below; \(R^2\) is obtained by squaring it. \[\large R=\frac{N\sum xy-\sum x \sum y}{\sqrt{\left[N\sum x^{2}-\left(\sum x\right)^{2}\right]\left[N\sum y^{2}-\left(\sum y\right)^{2}\right]}}\] Where, N = number of scores given Σ XY = sum of paired products Σ X = sum of X scores Σ Y = sum of Y scores Σ X 2 = sum of squared X scores Σ Y 2 = sum of squared Y scores. Once you find the linear model with regression analysis, you next need to check how well the model fits the data. Today, there are plenty of software applications that can be used for this purpose. Keep in mind that a given R-squared value is not inherently good or bad. Linear regression calculates the equation that minimizes the distance between the fitted line and all data points; technically, ordinary least squares minimizes the sum of squared residuals. In simple words, the model fits the data well when the differences between the observed values and the model’s predicted values are small and unbiased. Before checking statistical measures of goodness of fit, also examine the residual plots: they can reveal unwanted patterns in the residuals, which indicate biased results. R-squared values always lie between zero and one hundred percent.
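The sums in the formula can be computed directly and cross-checked against a library routine; a quick sketch (the paired scores are made-up data for illustration):

```python
import numpy as np

# Example paired scores (made-up data)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

N = len(x)
# Correlation coefficient from the sums in the formula above
r = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / np.sqrt(
    (N * np.sum(x**2) - np.sum(x) ** 2) * (N * np.sum(y**2) - np.sum(y) ** 2)
)
r_squared = r**2  # R-squared: always between 0 and 1 (0% and 100%)

# Cross-check against numpy's built-in correlation routine
assert abs(r - np.corrcoef(x, y)[0, 1]) < 1e-12
```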
Monotone solutions to a class of elliptic and diffusion equations 1. Department of Mathematical Sciences, Tsinghua University, Beijing 100084, China, China $-\Delta u= F'(u),$ in $\mathbf R^n,$ $\partial_{x_n}u>0,$ and the diffusion equation $u_t-\Delta u= F'(u),$ in $\mathbf R^n\times$ {$t>0$}, $\partial_{x_n}u>0, u|_{t=0}=u_0,$ where $\Delta$ is the standard Laplacian operator in $\mathbf R^n$, and $u_0$ is a given smooth function in $\mathbf R^n$ with some monotonicity condition. We show that under a natural condition on the nonlinear term $F'$, there exists a global solution to the diffusion problem above, and as time goes to infinity, the solution converges in $C_{l o c}^2(\mathbf R^n)$ to a solution to the corresponding elliptic problem. In particular, we show that for any two solutions $u_1(x')<$ $u_2(x')$ to the elliptic equation in $\mathbf R^{n-1}$: $-\Delta u=F'(u),$ in $\mathbf R^{n-1}, $ either for every $c\in (u_1(0),u_2(0))$, there exists an $(n-1)$-dimensional solution $u_c$ with $u_c(0)=c$, or there exists an $x_n$-monotone solution $u(x',x_n)$ to the elliptic equation in $\mathbf R^n$: $-\Delta u=F'(u), $ in $\mathbf R^n,$ $\partial_{x_n}u>0,$ in $\mathbf R^n$ such that $\lim_{x_n\to-\infty}u(x',x_n)=v_1(x')\leq u_1(x')$ and $\lim_{x_n\to+\infty}u(x',x_n)=v_2(x')\leq u_2(x').$ A typical example is when $F'(u)=u-|u|^{p-1}u$ with $p>1$. Some of our results are similar to results for minimizers obtained by Jerison and Monneau [13] by variational arguments. The novelty of our paper is that we only assume the condition for $F$ used by Keller and Osserman for boundary blow up solutions. Mathematics Subject Classification: 35Jx. Citation: Li Ma, Chong Li, Lin Zhao. Monotone solutions to a class of elliptic and diffusion equations. Communications on Pure & Applied Analysis, 2007, 6 (1) : 237-246. doi: 10.3934/cpaa.2007.6.237
I am looking to prove that the infinite intersection of compact sets is compact. I'd like some feedback on my proof. Note: this course isn't a formal course in topology and we specifically work in $\mathbb{R}^n$. I have done some reading and there seems to be some discussion about this question in other spaces. Proof. Let $A_1$ and $A_2$ be arbitrary compact sets. We need to show that $A_1 \cap A_2$ is compact. Let $O$ be an open cover of $A_1 \cup A_2$, which completely covers $A_1 \cap A_2$ since if $x \in A_1 \cap A_2$ then $x \in A_1$ and $x \in A_2$, which is completely covered by $O$. Since $A_1$ and $A_2$ are compact, there exist finite subcovers $O_1 \subset O$ and $O_2 \subset O$ respectively which cover the respective sets. So $O_1 \cup O_2$ covers $A_1 \cup A_2$. Therefore $O_1 \cap O_2$ covers $A_1 \cap A_2$. (Perhaps I can justify this better?) Since the sets were arbitrary, we have that the infinite intersection is compact. QED. Let me know what you think. Thanks!
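For comparison, a sketch of the usual argument in $\mathbb{R}^n$ (this is my addition, not part of the submitted proof): by the Heine–Borel theorem each compact $A_i \subseteq \mathbb{R}^n$ is closed and bounded, so for any index set $I$ and any fixed $i_0 \in I$, $$\bigcap_{i \in I} A_i \subseteq A_{i_0}$$ is bounded, and it is closed as an intersection of closed sets; hence it is compact, again by Heine–Borel. This sidesteps intersecting subcovers altogether, which is where the step flagged "(Perhaps I can justify this better?)" runs into trouble.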
Let's prove that $e^{\pi} > \pi^e$ without a calculator. If you haven't seen this before, give it a try before you read any further. Consider the function $f(x) = x/\ln(x)$ on the interval $(1,\infty)$. This function shoots off to positive infinity as $x$ tends towards either endpoint of the domain. Let's … Continue reading e^pi > pi^e

Consider the problem of finding the following limit: $\lim_{x \rightarrow 0} x^x$. It's actually not too bad. We can write $x^x = e^{x \ln x}$ and bring the limit into the exponent (as exponentiation is continuous) to get that $\lim_{x \rightarrow 0} x^x = e^{\lim_{x \rightarrow 0} x \ln x}$. From here, all we need to do … Continue reading Limits and Towers

Feel up to seeing a bit of mathematical magic? See here. http://www.youtube.com/watch?v=g0qbNksZLgo Just me doing my 3MT.

At the end of my undergraduate degree, I remember thinking "what better way to mark such an event than to write and record a maths rap?" Well, I did end up recording such a thing, but it was pretty woeful. However, a couple of months ago, I plucked up the zest to do it again. … Continue reading Don't Mess with the Mathematician

Integrals can tell us quite a lot. For those of you who are so disgustingly bored that you've found your way onto my blog, you should have a go at evaluating exactly the following integral: $\int_0^1 \frac{x^4 (1-x)^4}{1+x^2}\, dx$. It might take some effort, but it's well worth it. Of course, Wolfram Mathematica could do it … Continue reading Late night integration

I thought it would be a good idea to relay some of the advice given by Sir Michael Francis Atiyah during his talk on Tuesday. Sir Michael Atiyah during his talk at #hlf13. Picture: HLFF @Bernhard Kreutzer. Always ask yourself questions. Atiyah says that one of the secrets of his success is to always be curious. … Continue reading Advice to a Young Mathematician
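The integral in the "Late night integration" teaser has a famously tidy closed form, $\int_0^1 \frac{x^4(1-x)^4}{1+x^2}\,dx = \frac{22}{7} - \pi$, and a quick numerical check makes that believable. Here is a minimal sketch using only the Python standard library (the `simpson` helper is ad hoc, not part of any library):

```python
import math

def f(x):
    # Integrand from the post above
    return x**4 * (1 - x)**4 / (1 + x**2)

def simpson(g, a, b, n=1000):
    # Composite Simpson's rule (n must be even); an ad hoc helper
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += g(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

approx = simpson(f, 0.0, 1.0)
exact = 22 / 7 - math.pi  # the classical closed form of this integral
print(approx, exact)
```

Since the integrand is non-negative, the positive value also yields the classic proof that $22/7 > \pi$.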
Number sequence: \(\left\{ {a_n} \right\}\)
Alternating series: \(\sum\limits_{n = 1}^\infty {\left( { - 1} \right)^n {a_n}},\) \(\sum\limits_{n = 1}^\infty {\left( { - 1} \right)^{n - 1} {a_n}}\)
Number of terms in a series: \(n\)
Partial sum of a series: \({S_n}\)
Sum of a series: \(S\)

An infinite series in which successive terms have opposite signs is called an alternating series. An alternating series can be written in the form \(\sum\limits_{n = 1}^\infty {\left( { - 1} \right)^n {a_n}}\) or \(\sum\limits_{n = 1}^\infty {\left( { - 1} \right)^{n - 1} {a_n}}.\)

Alternating series test (Leibniz's theorem)

The alternating series test is a sufficient condition for convergence of an alternating series. Let \(\left\{ {a_n} \right\}\) be a sequence of positive terms such that
\({a_{n + 1}} \lt {a_n}\) for all \(n\);
\(\lim\limits_{n \to \infty } {a_n} = 0\).
Then the alternating series \(\sum\limits_{n = 1}^\infty {\left( { - 1} \right)^n {a_n}}\) and \(\sum\limits_{n = 1}^\infty {\left( { - 1} \right)^{n - 1} {a_n}}\) both converge.

Alternating series remainder estimate

Suppose that an alternating series converges by the alternating series test and its sum is equal to \(S\). We denote the \(n\)th partial sum of the series by \({S_n}.\) Then the remainder of the alternating series in absolute value is bounded by the absolute value of the first discarded term: \(\left| {S - {S_n}} \right| \lt \left| {{a_{n + 1}}} \right|\).

Absolute convergence

A series \(\sum\limits_{n = 1}^\infty {{a_n}}\) is called absolutely convergent if the series \(\sum\limits_{n = 1}^\infty {\left| {{a_n}} \right|}\) composed of the absolute values of the terms \({a_n}\) is convergent.
If the series \(\sum\limits_{n = 1}^\infty {{a_n}} \) is absolutely convergent, then it is (just) convergent. The converse is not generally true. Conditional convergence A series \(\sum\limits_{n = 1}^\infty {{a_n}} \) is called conditionally convergent if it converges, but the series \(\sum\limits_{n = 1}^\infty {\left| {{a_n}} \right|} \) composed of the absolute values of the terms \({a_n}\) diverges. In other words, a series is conditionally convergent if it converges, but not absolutely.
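The remainder estimate above is easy to check numerically. A minimal sketch in Python, using the alternating harmonic series \(\sum {\left( { - 1} \right)^{n-1}}/n = \ln 2\) (a conditionally convergent series) as the test case; the helper name is ad hoc:

```python
import math

def alternating_partial_sum(a, n):
    # S_n = sum_{k=1}^{n} (-1)**(k-1) * a(k) for a positive decreasing sequence a
    return sum((-1) ** (k - 1) * a(k) for k in range(1, n + 1))

# Test case: the alternating harmonic series, whose sum is ln 2
a = lambda k: 1.0 / k
S = math.log(2)

n = 50
Sn = alternating_partial_sum(a, n)
# Remainder estimate: |S - S_n| < a_{n+1}
print(abs(S - Sn), "<", a(n + 1))
```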
Sorry if this question gets a little long; I want to explain why I'm asking it. The usual Schwarzschild metric $$ds^2 = -\left(1-\frac{2M}{r}\right) dt^2 + \left(1-\frac{2M}{r}\right)^{-1} dr^2 + r^2(d\theta^2+\sin^2\theta\ d\phi^2)$$ makes sense when $t,r$ lie in the ranges $-\infty<t<\infty$, $0<r<\infty$. We can change to Kruskal–Szekeres coordinates $u,v$ (which Wikipedia calls $X$ and $T$), which are nonsingular at the horizon. The corresponding range for them is $u+v>0$. But if we draw spacetime on the $u,v$ plane, we can see that there is no problem extending it to all possible values of the coordinates. This adds a symmetric half to spacetime, with a past singularity that behaves like a white hole: any object or light ray will eventually exit the horizon. When I learned this in General Relativity class, the professor said that the usual physical reasoning for doing this is that the good old Schwarzschild metric is geodesically incomplete: if we imagine a particle falling into the black hole and try to trace back its path into the past, it looks like (as long as it doesn't have too much energy) it should have come up from the black hole, stopped, and then proceeded to fall in. I wasn't too convinced of this, for two reasons. One, the black hole hasn't existed for all eternity, so whatever is falling into it can have its origin somewhere else. Two, we haven't actually observed any white holes. This remained a mathematical curiosity until we analyzed a Penrose diagram of the Reissner–Nordström metric for a charged black hole. This spacetime has two horizons and a singularity inside. But now there are timelike curves that end at some point in the future without hitting a singularity. To me this seems like a much bigger deal, since I can perfectly well imagine something falling into a charged black hole as a physically realistic situation. The extension in this case is something much weirder: an infinite chain of universes.
You can enter a black hole here and come out in some other universe, and proceed to do the same until you get tired and settle down on some planet in whatever universe you happen to be in. This is as far from physically realistic as it gets, and yet it seems unavoidable if we want to have a charged black hole (I think the same thing happens for a rotating black hole too). Let me state my question, then: is the incompleteness of timelike curves in a charged black hole a real thing? Is it a problem? Is the infinite tower of universes the only way to make the problem go away, and if so, wouldn't that imply that it exists in the universe, since charged and rotating black holes do exist?
Someone gave me a proof of this, but I am not sure if it is correct. Let $B(p,w) = \{x: p\cdot x \leq w\}$ (the budget set). Then: \begin{align} x(p,w) &= \arg \max_{x\in B(p,w)} u(x)\\ &=\arg \max_{\alpha x\in \alpha B(p,w)} u(\alpha x) \\ &=\arg \max_{y\in B(p,\alpha w)} u(y) \\ &=\frac{1}{\alpha} \arg \max_{y\in B(p,\alpha w)} u(y) \\ &=\frac{1}{\alpha} x(p, \alpha w) \end{align} where the result follows from taking $\alpha=\frac{1}{w}$. Is this proof correct (I am not sure of the middle three equalities)? Where is homotheticity used? EDIT: A monotone preference relation $\succsim$ on $X= \mathbb{R}^{L}_{+}$ is homothetic if all indifference sets are related by proportional expansion along rays; that is, if $x \sim y$, then $\alpha x \sim \alpha y$ for any $\alpha \geq 0$. Also, recall that a continuous $\succsim$ on $X = \mathbb{R}_{+}^{L}$ is homothetic iff it admits a utility function that is homogeneous of degree one: $u(\alpha x) = \alpha u(x)$.
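For a concrete check of the claimed homogeneity $x(p, \alpha w) = \alpha\, x(p, w)$, here is a small sketch with a hypothetical Cobb-Douglas utility $u(x_1,x_2) = x_1^a x_2^{1-a}$ (homogeneous of degree one, hence homothetic preferences), whose Walrasian demand has a standard closed form:

```python
def cobb_douglas_demand(p, w, a=0.3):
    # Walrasian demand for the hypothetical Cobb-Douglas utility
    # u(x1, x2) = x1**a * x2**(1 - a), which is homogeneous of degree one.
    # The closed form x1 = a*w/p1, x2 = (1-a)*w/p2 is standard.
    p1, p2 = p
    return (a * w / p1, (1 - a) * w / p2)

p = (2.0, 5.0)       # prices (illustrative)
w = 10.0             # wealth
alpha = 3.0          # scaling factor
base = cobb_douglas_demand(p, w)
scaled = cobb_douglas_demand(p, alpha * w)
# Homotheticity in action: x(p, alpha*w) = alpha * x(p, w)
print(base, scaled)
```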
Image Dimensions Contents Describing the fields of the Canvas Properties Dialog The user accesses the image dimensions in the Canvas Properties Dialog. The Other tab Here some properties can simply be locked (so that they can't be changed) and linked (so that changes in one entry simultaneously change other entries as well). The Image tab Obviously, here the image dimensions can be set. There seem to be basically three groups of fields to edit: The on-screen size(?) The fields Width and Height tell synfigstudio how many pixels the image shall cover at a zoom level of 100%. The physical size The physical width and height should tell how big the image is on some physical medium. That could be when printing out images on paper, or maybe even on transparencies or film. Not all file formats can save this when exporting/rendering images. The mysterious Image Area Given as two points (upper-left and lower-right corner), which also define the image span (Pythagoras: \(\text{span}=\sqrt{\Delta x^2 + \Delta y^2}\)). The unit seems to be not pixels but units, which are 60 pixels each. If the ratio of the image size and the image area dimensions are off, circles, for example, will appear as ellipses (see image). These settings seem to influence how large one Image Size pixel is being rendered. This might be useful when one has to deal with non-square output pixels. Effects of the Image Area Somehow the image area setting seems to be saved when copy&pasting between images, see also bug #2116947. Possible intended effects of out-of-ratio image areas As mentioned above, different ratios might be needed when the output needs to be specified in pixels, but those pixels are not squares.
That might happen for several kinds of media, such as videos encoded in some PAL formats or for DVDs. For further reading, look at Wikipedia. Still, it is probably consensus that the image, as shown on screen while editing, should look as closely as possible like it will when viewed by the final audience. So, while specifying a different output resolution at rendering time may well be wanted, synfigstudio should (for the majority of monitors) show square pixels, i.e. circles should stay circles. Feature wishlist to simplify working across documents See also Explanation by dooglus on the synfig-dev mailing list.
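To make the span formula quoted above concrete, here is a small sketch (the corner coordinates are purely illustrative; the 60-pixels-per-unit factor is the one stated on this page):

```python
import math

# The page states units of 60 pixels each
PIXELS_PER_UNIT = 60

def image_span(top_left, bottom_right):
    # Pythagoras on the two Image Area corner points, in units
    dx = bottom_right[0] - top_left[0]
    dy = bottom_right[1] - top_left[1]
    return math.sqrt(dx * dx + dy * dy)

# A hypothetical 8 x 4.5 unit canvas
span_units = image_span((-4.0, 2.25), (4.0, -2.25))
span_pixels = span_units * PIXELS_PER_UNIT
print(span_units, span_pixels)
```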
Let $f:\mathbb{R}\to\mathbb{C}$ be differentiable $k$ times, with $f, f',\dotsc,f^{(k)}\in L^1$. Let $\alpha\in \mathbb{R}/\mathbb{Z}$, $\alpha\ne 0$. In "Every odd number..." (Math. Comp. 83, 2014), Lemma 3.1, Tao shows that $$\left|\sum_{n\in \mathbb{Z}} f(n) e(\alpha n)\right|\leq \frac{1}{|2 \sin(\pi \alpha)|^k} |f^{(k)}|_1,$$ where $e(t) = e^{2\pi i t}$. The proof goes essentially by summation by parts. (a) Are there older sources for this? Somewhat confusingly, Tao credits Gallagher ("The large sieve") and Lemma 1.1 in Montgomery's Topics in Multiplicative Number Theory, but they give only equation (3.1) in Tao's paper, not the inequality above. (b) For $k=2$, this is not in general optimal: one can show $$\left|\sum_{n\in \mathbb{Z}} f(n) e(\alpha n)\right|\leq \frac{1}{|2 \sin(\pi \alpha)|^2} |\widehat{f''}|_\infty,$$ which is no weaker and often strictly stronger, since $|\widehat{f''}|_\infty\leq |f''|_1$. This is Lemma 2.1 in my three-prime book draft on the arxiv; the proof I give goes by the Poisson summation formula, plus Euler's formula for the cotangent. Are similar bounds true for general $k$? (Is $\left|\sum_{n\in \mathbb{Z}} f(n) e(\alpha n)\right| \leq |\widehat{f'}|_\infty/|2 \sin \pi \alpha|$, for instance?) Again, can such results be found in older sources?
I have a quadrature mixed signal with 10 MHz as the highest frequency available. I want to shift it to any frequency less than 10 MHz. I should get it by multiplying this signal by a cosine of the required frequency. Right?

If you have a quadrature mixed signal, you have two outputs of the signal in 90 degree phase to each other. To complete a full complex frequency conversion (which would shift the frequency in one direction, as opposed to a real frequency conversion which would create two sidebands) you need to multiply one of the outputs by the cosine term of the required frequency shift and the other output by the sine of the required frequency shift. By swapping the sine and cosine ports, you change the direction of the shift. The outputs after each multiplier are then summed in phase. See the diagram and derivation using trig identities for the multiplication of sines and cosines below. If you are comfortable with the idea of exponential frequencies (full complex frequency including positive and negative frequency etc.) then I prefer the viewpoint below to describe single sideband frequency translation. Comparing the two diagrams shows the value of converting "I" and "Q" signals as typically constructed in hardware to implement complex signals into one complex signal path for the purpose of analysis. Using cosines and sines instead of exp can be quite cumbersome, especially as the complexity of the system increases. For those less familiar with the notation, $Ke^{j\theta}$ is simply a complex value with magnitude $K$ and angle $\theta$. Therefore $e^{j\omega t}$ is a signal whose magnitude is 1 and whose angle is increasing linearly with time at rate $\omega$; a phasor spinning counterclockwise on a complex IQ (polar) plot, representing a "positive" frequency. Similarly $e^{-j\omega t}$ spins clockwise and represents a "negative" frequency. With that, one should be well equipped to use the more compact and analytically simpler exponential notation.
Notice too how the two block diagrams that I included are related, with the second one taking the real part of the product to make the outputs equivalent. To do frequency translation, we use complex conjugate multiplication (the conjugate assures we shift in the desired direction). Consider a full complex conjugate multiplication, as is done prior to the real operation:

$e^{j\omega_ct} = I_1+jQ_1$, representing the carrier after the quadrature split;
$e^{-j\omega_st} = I_2-jQ_2$, representing the conjugate of the I and Q terms of your signal;
$e^{j\omega_ct}e^{-j\omega_st} = (I_1+jQ_1)(I_2-jQ_2) = (I_1I_2+Q_1Q_2)+j(I_2Q_1-I_1Q_2).$

The real portion is also called the dot product and the imaginary portion the cross product. Notice that the dot product does match the implementation block diagram shown at the top of this answer.

Further details for the very interested, to understand complex frequency translation: the following additional figures and content were added to help answer the question on the subsequent down-conversion and to build a better understanding of the complex (positive and negative) frequency domain in describing frequency translation implementations. First consider a generic quadrature mixed signal: it is at "baseband", meaning not modulated onto a carrier for the purpose of transmission, and centered about DC (0 Hz) with the positive and negative frequencies independent of each other. The way we implement such a signal in practice is with two real signals (we could choose one real signal to represent the real part and the other the imaginary part, such as I and Q in $I+jQ$, or one to represent the magnitude $K$ and the other the phase $\theta$, as in $Ke^{j\theta}$). In any event, by observing the spectrum and noting that the positive and negative frequencies do not match, we know from that detail that it is a complex signal and that two real signal paths are required to represent it in implementation.
For a real signal in contrast, the positive and negative frequencies would be conjugate symmetric, meaning identical in magnitude and opposite in phase. Frequency spectrum of a baseband quadrature modulated signal (this could be a single tone on one side, or multiple independent frequencies; the point is that the positive and negative frequencies are independent):

Now consider a carrier signal $\cos(\omega_ct)$: this has an impulse in the frequency domain at the positive frequency $\omega_c$, and another at the negative frequency $-\omega_c$ (also shown as two complex frequencies using Euler's identity $2\cos(\omega_ct) = e^{j\omega_ct}+e^{-j\omega_ct}$):

If we split the carrier in quadrature (with a Hilbert transformer, or a 90° splitter, or by just generating sine and cosine components, but in the end having two tones with a 90° relationship to each other), then in implementation we have two real signals used to represent a single complex frequency $e^{j\omega_ct}$ (you may start to see how it is so much easier doing all the analysis with the e's and then doing the implementation with the sines and cosines):

UPCONVERSION

To up-convert the baseband signal to the carrier frequency, we multiply the two signals in the time domain (which is convolution in the frequency domain; for an impulse function such as our carrier it is a simple shift). First consider what would happen if we multiplied the baseband signal with the carrier $\cos(\omega_ct)$ directly (eliminate the 90° splitter in my first figure, replacing $\sin(\omega_ct)$ with $\cos(\omega_ct)$). Notice the output would still be complex, as the positive and negative frequencies are not the same, so we could not transmit this with a single antenna (2 real signals are required to represent a complex signal). If we proceeded to take the real portion, distortion of our signal would result as depicted in the figure, so we can't transmit that either.
Figure showing the spectrum result with an INCORRECT upconversion approach:

This explains why we need to also take the Hilbert transform of the carrier (converting the cosine into sine and cosine components, and representing a complex version of the carrier that is a single spinning phasor $e^{j\omega_ct}$, or in the frequency domain a single impulse in the positive domain with no negative component) prior to multiplication. We take the real portion of the product and can then send an undistorted real waveform representing our baseband signal modulated up to the carrier frequency.

Figure showing the spectrum result with a CORRECT upconversion approach:

DOWN-CONVERSION

With the details of the up-conversion process in mind, it will now be very easy to answer the question of whether a quadrature conversion of the received signal is required prior to multiplying by sine and cosine to recover the baseband signal. The answer is no, but there is an advantage to doing so in situations where an image frequency cannot be easily filtered.

Figure showing the spectrum of a downconversion with no Hilbert transform of the received signal. It is interesting to note that if you swapped sine and cosine in the (real) multipliers, this would represent a single impulse in the positive frequency domain, shifting the negative frequency component of the received signal to baseband, causing the spectrum at baseband to be inverted:

Figure showing the spectrum of a downconversion with a Hilbert transform of the received signal prior to sine/cosine multiplication. Here you can see what would happen if you swapped an input to one of the (real) multipliers: the signal would be shifted to a higher frequency at twice the carrier instead of to baseband. Note I believe there would also be an SNR advantage of up to 3 dB in doing the receiver with a Hilbert transform prior to the multiplication.
In the first case, without the Hilbert transform, the signal energy is split and the upper portion filtered out while the noise is unchanged, while in the second case, with the Hilbert transform, all the signal energy is translated to baseband. I could be wrong here as I have not verified this point, so perhaps someone else can weigh in on that.

If you have a quadrature-mixed signal, I assume you mean a complex-baseband signal. In that case, you'll want to multiply not just by the cosine of the desired shift frequency, but by a complex exponential function instead, which contains in-phase (cosine) and quadrature (sine) components: $$ e^{j2\pi f t} = \cos(2\pi f t) + j \sin(2\pi f t) $$ where $f$ represents the frequency shift that you would like to apply to your signal.
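The complex-exponential multiplication described in the last answer can be sketched in a few lines of Python (a minimal illustration on a synthetic complex tone; sample rate and frequencies are arbitrary):

```python
import cmath

def frequency_shift(iq, f_shift, fs):
    # Multiply a complex-baseband sequence by e^{j 2 pi f_shift n / fs}.
    # A single complex exponential shifts the whole spectrum by f_shift
    # in one direction only (no image), as described in the answers above.
    return [s * cmath.exp(2j * cmath.pi * f_shift * n / fs)
            for n, s in enumerate(iq)]

fs = 100.0                # sample rate (arbitrary illustration)
f0, f_shift = 10.0, -6.0  # a 10 Hz complex tone, shifted down to 4 Hz
tone = [cmath.exp(2j * cmath.pi * f0 * k / fs) for k in range(200)]
shifted = frequency_shift(tone, f_shift, fs)
# Each shifted[k] should equal e^{j 2 pi (f0 + f_shift) k / fs}
```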
(Note: This question has been cross-posted from MSE.) Euclid and Euler proved that every even perfect number is of the form $m = \frac{{M_p}\left(M_p + 1\right)}{2}$ where $M_p = 2^p - 1$ is a prime number, called a Mersenne prime. Thus, an even perfect number is triangular. On the other hand, Euler showed that an odd perfect number, if one exists, takes the form $N = q^k n^2$, where $q$ is prime with $q \equiv k \equiv 1 \pmod 4$ and $\gcd(q,n) = 1$. (Descartes, Frenicle and subsequently Sorli conjectured that $k = 1$ always holds.) Here is my question: Has it been proved that odd perfect numbers cannot be triangular? Added March 26 2016 If $\sigma(q) = 2n^2$, then it would follow that $n < q$, which implies that $k = 1$. The odd perfect number $N = q^k n^2$ then takes the form $N = \frac{q(q + 1)}{2}$. Unfortunately, it is known that $\sigma(q^k) \leq \frac{2n^2}{3}$. Any pointers to the existing literature containing such a proof would be most appreciated.
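As a quick sanity check of the Euclid-Euler fact quoted above (every even perfect number $\frac{M_p(M_p+1)}{2}$ is triangular), here is a small sketch in Python; the triangularity test uses the standard fact that $m$ is triangular iff $8m+1$ is a perfect square:

```python
import math

def is_triangular(m):
    # m is triangular iff m = t(t+1)/2 for some integer t,
    # i.e. iff 8m + 1 is a perfect square
    r = math.isqrt(8 * m + 1)
    return r * r == 8 * m + 1

# Even perfect numbers built from the first few Mersenne primes 2**p - 1
for p in (2, 3, 5, 7, 13):
    Mp = 2 ** p - 1
    perfect = Mp * (Mp + 1) // 2
    print(p, perfect, is_triangular(perfect))
```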
A distance is a numerical measurement of how far apart [two] objects are. A "more abstract" mathematical definition follows: in mathematics, a metric or distance function is a function that defines a distance between each pair of elements of a set. A "Formal" Definition of Distance Measure Given this short conceptual introduction, we should provide an overview of the properties a generic function must satisfy in order to be called a distance function. Let's suppose we have a set of points called a space. A distance function on this space is a function $d(x,y)$ that, given two points $x$ and $y$, produces a real number and satisfies the following axioms:

$d(x,y) \geq 0$: a distance measure cannot be negative.
$d(x,y) = 0$ if and only if $x=y$: a distance measure is always positive, except that the distance from a point to itself is 0.
$d(x,y) = d(y,x)$: a distance function is symmetric.
$d(x,y) \leq d(x,z) + d(z,y)$: a distance function always satisfies the triangle inequality constraint.

The last axiom can be intuitively explained using basic geometry. If we consider two points $x$ and $y$ and define our distance as the shortest possible line (path) drawn from $x$ to $y$, we clearly gain no benefit by passing through any other point $z$ in order to go from $x$ to $y$; in the best-case scenario, $z$ lies on the shortest line between the two points $x$ and $y$, contributing no benefit nor penalty to our distance path. Euclidean Distances The Euclidean distance represents the most common concept of distance; it is the most familiar in our daily life. An n-dimensional Euclidean space is one where points are represented as vectors of $n$ real numbers.
$L_2$-Norm Distance AKA The Euclidean Distance The most conventional distance, the Euclidean distance, also referred to as the $L_2$-Norm Distance, is defined as follows: $$ d(x,y)= d([x_1, x_2, \cdots,x_n], [y_1, y_2,\cdots,y_n]) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2} $$ This distance satisfies all four axioms mentioned above. In Euclidean spaces the concept of distance can be generalized such that for any constant $r \geq 1$ we can define the $L_r$-Norm Distance $d$ as: $$ d(x,y)= d([x_1, x_2, \cdots,x_n], [y_1, y_2,\cdots,y_n]) = \left(\sum_{i=1}^{n} |x_i - y_i|^r\right)^{1/r} $$ $L_1$-Norm Distance AKA Manhattan Distance Another common measure is the $L_1$-Norm Distance, more often referred to as the Manhattan Distance. It is the sum of the magnitudes of the differences in each dimension: $$ d(x,y)= d([x_1, x_2, \cdots,x_n], [y_1, y_2,\cdots,y_n]) = \sum_{i=1}^{n} |x_i - y_i|$$ $L_\infty$-Norm Distance This is the limit as $r$ tends to $\infty$ of the $L_r$-norm. Here, as $r$ gets larger and larger, only the dimension with the largest difference really matters (becomes dominant); thus, formally, the $L_\infty$-Norm Distance is defined as the maximum of $|x_i-y_i|$ over all the dimensions $i$.
$$ d(x,y)= d([x_1, x_2, \cdots,x_n], [y_1, y_2,\cdots,y_n]) = \max_i{(|x_i-y_i|)} $$

Here follows a code snippet with a pure Python 3.x implementation of the Euclidean distances:

import numbers

def euclidean_distance(x, y, r=2):
    # Case: invalid r value
    if r != 'inf' and r < 1:
        raise ValueError('r must be at least 1 (r >= 1)')
    # Case: the two vectors have different lengths
    if len(x) != len(y):
        raise ValueError('Input Vector Length Mismatch')
    # Case: both vectors have length 0
    if len(x) == 0:
        return 0.0
    # Case: values are not numeric
    if not all(isinstance(v, numbers.Number) for v in x) or \
       not all(isinstance(v, numbers.Number) for v in y):
        raise TypeError('Euclidean Distance is computed'
                        ' over numerical data only')
    # Case: L_infty-Norm Distance
    if r == 'inf':
        return max(abs(xi - yi) for xi, yi in zip(x, y))
    # Compute the L_r-Norm Distance
    return sum(abs(xi - yi)**r for xi, yi in zip(x, y))**(1/r)

>>> euclidean_distance([1,2,3], [1,2.5,3.5], 1)
1.0
>>> euclidean_distance([1,2,3], [1,2.5,3.5], 2)
0.7071067811865476
>>> euclidean_distance([1,2,3], [1,2.5,3.5], 3)
0.6299605249474366
>>> euclidean_distance([1,2,3], [1,2.5,3.5], 'inf')
0.5

It is interesting to note that as $r$ increases, the value of the distance slowly approaches the $L_\infty$-Norm Distance (in this case 0.5):

>>> [euclidean_distance([1,2,3], [1,2.5,3.5], x) for x in range(1, 1000)]
[1.0,
 0.7071067811865476,
 0.6299605249474366,
 0.5946035575013605,
 0.5743491774985174,
 ...
 0.5003480865601369,
 0.5003477373047961,
 0.5003473887496088,
 0.5003470408924718]

These implementations are to be considered for illustration purposes only; refer to numpy.linalg or other libraries for production-ready code.

Jaccard Distance The Jaccard distance represents a metric to compute the distance between two sets.
Given two sets $X$ and $Y$, it is defined as one minus the ratio of the cardinality (size) of the intersection of the two sets to the cardinality of their union: $$ d(X,Y) = 1 - \frac{|X\cap Y|}{|X\cup Y|} $$ Its value ranges between $0$ and $1$, and it makes use of the Jaccard similarity between two sets, defined as follows: $$ SIM(X,Y) = \frac{|X\cap Y|}{|X\cup Y|} $$ Here follows a code snippet with a pure Python 3.x implementation of the Jaccard Distance:

def jaccard_distance(x, y):
    if isinstance(x, list):
        x = set(x)
    if isinstance(y, list):
        y = set(y)
    # Avoid division by 0
    if len(x.union(y)) == 0:
        return 0
    return 1 - len(x.intersection(y)) / len(x.union(y))

>>> jaccard_distance([1,2,4], [2,3,4])
0.5
>>> jaccard_distance([1,2,4], [1,2,4])
0.0

Cosine Distance In the context of the Cosine Distance we interpret points as directions from the origin of the plane to the points themselves. In this scenario we are going to measure the distance between two vectors in terms of $1 - \cos(\theta)$, where $\cos(\theta)$ is the cosine of the angle $\theta$ between the two directions. Cosine distance is generally used as a metric for measuring distance when the magnitude of the vectors does not really matter; this often happens when working with NLP (Natural Language Processing) text data represented by word counts. The length of the directions does not alter the distance measure; thus, multiplying a vector by a constant does not impact the distance measure at all.
The cosine distance (based on cosine similarity and normalized to the range $[0,1]$) can be defined as: $$ d(x,y) = \frac{1}{2}\left(1 - \frac {x \cdot y}{||x||_2 ||y||_2}\right) $$ Here follows a code snippet with a pure Python 3.x implementation of the Cosine Distance:

def cosine_distance(x, y):
    # Case: the two vectors have different lengths
    if len(x) != len(y):
        raise ValueError('Input Vector Length Mismatch')
    # Case: both vectors have length 0
    if len(x) == 0:
        return 0.0
    # If one of the vectors has 0 magnitude then d is not defined
    if all(v == 0 for v in x) or all(v == 0 for v in y):
        return None
    # Case: values are not numeric
    if not all(isinstance(v, numbers.Number) for v in x) or \
       not all(isinstance(v, numbers.Number) for v in y):
        raise TypeError('Cosine Distance is computed'
                        ' over numerical data only')
    # Numerator: dot product
    num = sum(el_x * el_y for el_x, el_y in zip(x, y))
    # Denominator: product of L_2 norms
    den = sum(el_x * el_x for el_x in x)**.5 * sum(el_y * el_y for el_y in y)**.5
    # Normalize (1 - cosine similarity) into [0, 1]
    return (1 - num / den) / 2

The following example shows that scaling a vector does not impact the cosine distance measure:

>>> cosine_distance([1,2,3], [1,2,0])
0.2011928476664016
>>> cosine_distance([1,2,3], [4,8,0])
0.2011928476664016

Edit Distance This kind of distance comes into play when the data points are strings. Indeed, it is not as trivial as with real numbers to compute a distance between strings. What is the magnitude of a string? Maybe its length? The edit distance between two strings $x=x_1x_2\cdots x_n$ and $y=y_1y_2\cdots y_m$ can be defined as the smallest number of insertions and deletions of single characters that will convert $x$ into $y$. Another way to compute the edit distance $d(x,y)$ between two strings is via their longest common subsequence (LCS): given the LCS of the two strings $x$ and $y$, the edit distance can be computed as the length of $x$ plus the length of $y$ minus twice the length of their LCS.
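The LCS-based formulation just described can be sketched as follows (a minimal illustration counting insertions and deletions only; function names are ad hoc):

```python
def lcs_length(x, y):
    # Classic dynamic program for the longest common subsequence length
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if x[i] == y[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]

def edit_distance(x, y):
    # Insertions and deletions only: d(x, y) = |x| + |y| - 2*LCS(x, y)
    return len(x) + len(y) - 2 * lcs_length(x, y)

print(edit_distance("abcde", "acdf"))  # 3 operations: delete b, delete e, insert f
```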
Considering $|x|$ as the length of a string, we can express the edit distance as: $$ d(x,y) = |x| + |y| - 2\cdot LCS(x,y)$$ A similar distance measure which takes into account not only insertions and deletions but also substitutions is the Levenshtein Distance. Hamming Distance Given a space of vectors, it is possible to define the Hamming Distance between two vectors as the number of components in which they differ: e.g., for $x = [1,2,3,4]$ and $y=[0,2,2,4]$, $d(x,y) = 2$. Most often it is used when handling boolean vectors; nevertheless, it is possible to apply this metric to vectors having components from any set. Here follows a code snippet with a pure Python 3.x implementation of the Hamming Distance:

def hamming_distance(x, y):
    # Count the positions at which the two vectors differ
    return [xi != yi for xi, yi in zip(x, y)].count(True)

>>> hamming_distance([1,2,3], [1,2,3])
0
>>> hamming_distance([1,3,4], [1,2,3])
2

Non-Euclidean Distances It is important to note that some of the distance measures mentioned above do not live in Euclidean spaces. An important property of Euclidean spaces is that the average of points always exists; this does not hold for sets: it is not possible to compute the average of two sets, and the same limitation applies to strings. Vector spaces where we define the cosine distance may or may not be Euclidean, depending on the components of such vectors: for example, if we restrict the values of the vectors to be integers, then the space is not Euclidean; in the case of real numbers it forms a Euclidean space.
In this section we consider how derivatives can be used to prove mathematical inequalities. The general approach is to study the properties of the functions in the inequality using derivatives. The most important here are the properties of monotonicity and boundedness of functions. In addition, Lagrange's mean value theorem is often used for solving inequalities. Typical examples on this topic are listed below.

Solved Problems

Click a problem to see the solution.

Example 1. Prove the inequality \(\sqrt {1 + x} \le 1 + \large\frac{x}{2}\normalsize\) for \(x \gt 0\).
Example 2. Determine which is greater: \({100^{101}}\) or \({101^{100}}?\)
Example 3. Prove that for any positive numbers \(a\) and \(b\) the inequality \({\large\frac{a}{b} + \frac{b}{a} \normalsize} \ge 2\) is valid.
Example 4. Prove that for \(x \gt 0\) the inequality \(1 + 2\ln x \le {x^2}\) is true.
Example 5. Prove that for \(x \gt 0\) the inequality \(\ln x \le x - 1\) is true.
Example 6. Show that for \(x \gt 1\) the inequality \(\sqrt x + {\large\frac{1}{{\sqrt x }}\normalsize} \gt 2\) is true.
Example 7. Prove the inequality \({\large\frac{{b - a}}{b}\normalsize} \le \ln {\large\frac{b}{a}\normalsize} \le {\large\frac{{b - a}}{a}\normalsize}\) provided \(0 \lt a \le b\).
Example 8. Prove the inequality \(\left| {\sin a - \sin b} \right| \le \left| {a - b} \right|\).
Example 9. Prove the inequality \(\left| {\arctan a - \arctan b} \right| \le \left| {a - b} \right|\).
Example 10. Prove that for \(x \ne 0\) the inequality \({e^x} \gt 1 + x\) holds.
Example 11. Prove that the inequality \(\sin x + \tan x \gt 2x\) holds in the interval \(\left( {0,{\large\frac{\pi }{2}}} \right).\)
Example 12. Prove the inequality \({\large\frac{{\tan {x_2}}}{{\tan {x_1}}}\normalsize} \gt {\large\frac{{{x_2}}}{{{x_1}}}}\) provided \(0 \lt {x_1} \lt {x_2} \lt \large\frac{\pi }{2}\normalsize\).

Example 1. Prove the inequality \(\sqrt {1 + x} \le 1 + \large\frac{x}{2}\normalsize\) for \(x \gt 0\).

Solution.
Consider the function \(f\left( x \right) = \sqrt {1 + x} – {\large\frac{x}{2} \normalsize} – 1\) and find its derivative: \[ {f’\left( x \right) = {\left( {\sqrt {1 + x} – \frac{x}{2} – 1} \right)^\prime } } = {\frac{1}{{2\sqrt {1 + x} }} – \frac{1}{2} } = {\frac{{1 – \sqrt {1 + x} }}{{2\sqrt {1 + x} }} \le 0\;\text{ for }x \ge 0.} \] We take into account that \(f\left( 0 \right) = 1 – 0 – 1 = 0\). Therefore, \(f\left( x \right) \le 0\) for \(x \gt 0\). Then \[ {\sqrt {1 + x} – \frac{x}{2} – 1 \le 0,\;\;\;}\Rightarrow {\sqrt {1 + x} \le 1 + \frac{x}{2}.} \] Example 2. Determine which is greater: \({100^{101}}\) or \({101^{100}}?\) Solution. Consider the function \(f\left( x \right) = {\large\frac{{\ln x}}{x}\normalsize}\) and calculate its derivative: \[f’\left( x \right)= {{\left( {\frac{{\ln x}}{x}} \right)^\prime } }= {\frac{{{{\left( {\ln x} \right)}^\prime } \cdot x – \ln x \cdot x’}}{{{x^2}}} }= {\frac{{\frac{1}{x} \cdot x – \ln x}}{{{x^2}}} = \frac{{1 – \ln x}}{{{x^2}}}.}\] As can be seen, the derivative is negative provided \(x \gt e.\) Then for \(x \gt e,\) the function \(f(x)\) is decreasing and, therefore, the relation \(\large\frac{{\ln 100}}{{100}} \gt \frac{{\ln 101}}{{101}}\) is true. It follows from here that \[{101\ln 100 \gt 100\ln 101,\;\;\;}\Rightarrow { {100^{101}} \gt {101^{100}}.} \] Example 3. Prove that for any positive numbers \(a\) and \(b\) the inequality \({\large\frac{a}{b} + \frac{b}{a} \normalsize} \ge 2\) is valid. Solution. We denote \({\large\frac{a}{b}\normalsize} = x\) and consider the function \(f\left( x \right) = x + \large\frac{1}{x}\) given that \(x \gt 0\).
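The conclusion of Example 2 is easy to sanity-check numerically; this is a quick verification sketch, not a substitute for the monotonicity proof:

```python
import math

# f(x) = ln(x)/x is decreasing for x > e, so f(100) > f(101).
assert math.log(100) / 100 > math.log(101) / 101

# Equivalently 101*ln(100) > 100*ln(101), i.e. 100^101 > 101^100.
assert 101 * math.log(100) > 100 * math.log(101)

# Python integers are exact, so this comparison settles the question directly.
assert 100**101 > 101**100
```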
Determine the points of local extrema of this function: \[f’\left( x \right)= {{\left( {x + \frac{1}{x}} \right)^\prime } }= {1 – \frac{1}{{{x^2}}} = 0,\;\;\;}\Rightarrow{{x^2} = 1,\;\;\;}\Rightarrow{x = \pm 1.}\] Only one point \(x = 1\) satisfies the condition \(x > 0.\) Since the derivative changes sign from minus to plus when passing through this point (from left to right), the point \(x = 1\) is a minimum. The value of the function at this point is equal to \(f\left( 1 \right) = 1 + {\large\frac{1}{1}\normalsize} = 2\). Consequently, \[{f\left( x \right) \ge 2,\;\;\;}\Rightarrow {x + \frac{1}{x} \ge 2,\;\;\;}\Rightarrow {\frac{a}{b} + \frac{b}{a} \ge 2.} \] Example 4. Prove that for \(x \gt 0\) the inequality \(1 + 2\ln x \le {x^2}\) is true. Solution. We introduce the function \(f\left( x \right) = {x^2} – 2\ln x – 1\). Find the critical points: \[f’\left( x \right)= {{\left( {{x^2} – 2\ln x – 1} \right)^\prime } }= {2x – \frac{2}{x} = 0,\;\;\;}\Rightarrow{\frac{{2{x^2} – 2}}{x} = 0,\;\;\;}\Rightarrow{2{x^2} – 2 = 0,\;\;\;}\Rightarrow{{x^2} = 1,\;\;\;}\Rightarrow{x = \pm 1.}\] Of the points \(x = -1\), \(x = 0\), \(x = 1\) (where the derivative is zero or undefined), only the last point \(x = 1\) satisfies the condition \(x \gt 0.\) The derivative is negative to the left of this point and positive to the right. Hence, the function has a minimum at this point, equal to \[f\left( 1 \right) = 1 – 2\ln 1 – 1 = 0.\] Thus, \(f(x) \ge 0\) for \(x > 0\) (and is zero at \(x = 1\)). In this case \[{{x^2} – 2\ln x – 1 \ge 0,\;\;\;} \Rightarrow {1 + 2\ln x \le {x^2}.}\]
Planar graphs. Euler's formula, and using the formula to find an upper bound for the number of edges in a planar graph. Platonic solids and their relation to planarity.

1 Planar graphs

Note that we're only working with simple graphs at the moment - graphs where each pair of vertices has at most 1 edge between them. A graph $G$ is planar if it can be drawn on a plane without any two edges crossing. Note that the edges don't have to be drawn as straight lines. Recall that $K_n$ is the complete graph containing $n$ vertices. Now, $K_4$ is planar, though it might not be obvious at first. Here's $K_4$, drawn the standard way (which is not a planar drawing): Here's a planar way of drawing $K_4$: So $K_4$ is planar, as there's a way of drawing it on a plane without edges crossing. What about $K_5$? Well, it doesn't seem planar. Can we prove that it isn't?

1.1 Euler's formula

First, we introduce the term faces, which refers to the regions enclosed by the edges on the plane. We label the bounded faces $F_1, F_2, \ldots$ and the "infinite" or "outer" face around the graph $F_0$; let $f$ be the total number of faces, including $F_0$. Let $n$ be the number of vertices, and let $m$ be the number of edges. Then: Theorem (Euler, 1752): A connected planar graph satisfies $m+2 = n+f$. Proof: by induction. First, note that the cheapest way to connect a graph of $n$ vertices is with a spanning tree, which requires $m=n-1$ edges. So let the base case be a spanning tree $T$ on $n$ vertices. It's easy to see that if we have a tree, we can draw the graph so that it is planar (recall that trees, by definition, do not have any cycles). A tree has only one face (the outer face), so $f=1$. Thus, $m+2 = (n-1) + 2 = n+1 = n + f \checkmark$ For larger connected graphs, the graph must contain a cycle (otherwise, it's just a tree, which is the base case). Consider the graph $G' = G\setminus e$ where $e$ is some edge of (one of) the cycle(s).
Then, for $G'$, the number of vertices is the same - so $n'=n$ - but the number of edges is one less than that of $G$, so $m' = m - 1$. Furthermore, the number of faces is one less as well, since by removing an edge, we merge two faces into one. Thus $f' = f-1$. If we assume the induction hypothesis for $G'$, then we have: $$\begin{align} & m' + 2 = n' + f' \\ \therefore \; & (m-1) + 2 = n + (f-1) \\ \therefore \; & m + 2 = n + f \; \blacksquare \tag{so the formula holds for $G$} \end{align}$$ A corollary of this theorem is that any drawing of a planar graph results in the same number of faces.

1.2 Upper-bounding the number of edges

Another corollary of the previous theorem is that it allows us to find an upper bound for the number of edges in a planar graph. Theorem: In any planar graph, $m \leq 3n - 6$ (for $n \geq 4$). Note that in a general graph, the maximum number of edges is $\displaystyle \binom{n}{2} = \frac{1}{2}n(n-1) = \Theta(n^2)$. This theorem establishes a much smaller upper bound, but only for planar graphs. Proof: In a planar graph, each edge touches at most 2 faces [1]. The number of edges around a face is at least 3. Thus, if we count the number of pairs $(e, F)$ such that $e \in F$, then this number is at least $3f$, but is at most $2m$. So then we have that $2m \geq 3f$. By Euler's formula, we have that $m + 2 = n + f$. Thus $3m+6 = 3n+3f \leq 3n+2m$, and so $3m+6 \leq 3n+2m$. Subtracting $2m$ from both sides, we get $m+6 \leq 3n$ and so $m \leq 3n-6$, as desired. $\blacksquare$

1.2.1 For bipartite graphs

Theorem: In a planar bipartite graph, $m \leq 2n-4$ (when $n \geq 5$). Proof: In a planar bipartite graph, there are no odd cycles, so the smallest cycle has 4 edges. Thus, any face (other than the infinite one) has to have at least 4 edges around it. Other than that, the proof is similar to what we had before: $2m \geq$ the number of pairs $\geq 4f$, so $2m \geq 4f$, thus $m \geq 2f$.
Using Euler's formula, we get $2m+4 = 2n+2f$ and so $2m+4 \leq 2n+m$. Subtracting $m$ from both sides, we get $m+4 \leq 2n$, and so $m \leq 2n-4$. $\blacksquare$ This gives us an upper bound on the density of planar graphs.

1.2.2 Proving non-planarity

We can use the theorems above to prove that $K_5$ is not planar, as we suspected. For $K_5$, we have $n=5$ vertices, and $m=4+3+2+1=10$ edges. But $3n-6 = 9$ and $10 > 9$. Thus $K_5$ is not planar. Similarly, consider $K_{3,3}$, the complete bipartite graph with 3 vertices in each bipartition. There are $n=6$ vertices, and $m = 9$ edges. But $2n-4=8$ and $9 > 8$. Thus $K_{3,3}$ is also not planar. We'll show later that these two graphs - $K_{3,3}$ and $K_5$ - are the only reasons a graph would not be planar, in the sense that any graph that is not planar must contain a copy of $K_5$ or $K_{3,3}$ (more precisely, a subdivision of one of them) within it somewhere.

1.3 Platonic solids

A regular polytope is a 3D convex solid whose sides are formed by identical 2D regular polygons. Here's a table of the standard ones, where $d$ is the number of faces touching each vertex and $p$ is the number of sides of each face:

Name           Faces   Vertices   Edges   $p$   $d$
Tetrahedron      4        4         6      3     3
Cube             6        8        12      4     3
Octahedron       8        6        12      3     4
Dodecahedron    12       20        30      5     3
Icosahedron     20       12        30      3     5

Are there any other platonic solids? To find out, let's consider these solids as planar graphs, by projecting them down to a 2D plane. I'm not entirely sure how the projection works, but I don't think that matters. This isn't MATH 350. Image taken from Wolfram MathWorld, shown in the same order as in the table above. Note that each vertex has degree $d$ and each face has $p$ sides. Theorem: The 5 platonic solids above are the only platonic solids. Proof: We can derive this from Euler's formula. A regular polytope gives a regular planar graph with $\displaystyle 2m = \sum_{v \in V}\deg(v) = n\cdot d$ (since it's regular). But each edge lies on exactly 2 faces, so $2m = f \cdot p$.
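The edge-count arguments for $K_5$ and $K_{3,3}$ can be checked mechanically. A small Python sketch (the function names are mine):

```python
def complete_edges(n):
    """K_n has C(n, 2) = n(n-1)/2 edges."""
    return n * (n - 1) // 2

def bipartite_edges(a, b):
    """K_{a,b} has a*b edges."""
    return a * b

def violates_planar_bound(n, m):
    """True if m > 3n - 6, i.e. the graph cannot be planar (n >= 4)."""
    return m > 3 * n - 6

def violates_bipartite_bound(n, m):
    """True if m > 2n - 4, i.e. the graph cannot be planar bipartite (n >= 5)."""
    return m > 2 * n - 4

# K_5: 10 > 3*5 - 6 = 9, so not planar.
# K_{3,3}: 9 > 2*6 - 4 = 8, so not planar.
# K_4: 6 <= 3*4 - 6 = 6, so the bound does not rule out planarity
# (and indeed K_4 is planar, as drawn above).
```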
By Euler's formula, $m+2 = n+f$, thus $$m+2 = \frac{2m}{d} + \frac{2m}{p}$$ Divide both sides by $2m$, and we get: $$\frac{1}{2} + \frac{1}{m} = \frac{1}{d} + \frac{1}{p}$$ For 3D solids, we need $d \geq 3$ and $p \geq 3$. So what values of $p$ and $d$ work for this formula? Suppose $p \geq 4$ and $d \geq 4$. Is that possible? No, because then $\displaystyle \frac{1}{d} + \frac{1}{p} \leq \frac{1}{2} < \frac{1}{2} + \frac{1}{m}$ and there is no possible value of $m$ that could satisfy that equation (that also happens to be a valid number of edges). On the other hand, if $d = 3$, then $\displaystyle \frac{1}{2} + \frac{1}{m} = \frac{1}{3} + \frac{1}{p}$, i.e., $$\frac{1}{p} = \frac{1}{6} + \frac{1}{m}$$ Since $\frac{1}{m} > 0$, then $p < 6$, i.e., $p \leq 5$. By the same argument with the roles of $d$ and $p$ swapped, if $p = 3$ then $d \leq 5$. So the only possible values for $(d, p)$ are $(3, 3)$, $(3, 4)$, $(3, 5)$, $(4, 3)$, and $(5, 3)$. Let's investigate the solids resulting from each of these values:

$d$   $p$   $m = \left(\frac{1}{d} + \frac{1}{p} - \frac{1}{2}\right)^{-1}$   $n = \frac{2m}{d}$   $f = \frac{2m}{p}$   Resulting solid
 3     3     6                                                                 4                    4                   Tetrahedron
 3     4    12                                                                 8                    6                   Cube
 3     5    30                                                                20                   12                   Dodecahedron
 4     3    12                                                                 6                    8                   Octahedron
 5     3    30                                                                12                   20                   Icosahedron

So the solids we identified earlier are, in fact, the only ones!

[1] An edge is always adjacent to at least one face, which may just be the infinite face.
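The case analysis above can be replayed mechanically: scan pairs $(d, p)$ with $d, p \geq 3$, keep those where $\frac{1}{d} + \frac{1}{p} - \frac{1}{2}$ is positive, and read off $m$, $n$, $f$. A sketch using exact fractions (capping the scan at 9 is safe, since the excess is already $\leq 0$ whenever $d \geq 6$ or $p \geq 6$):

```python
from fractions import Fraction

solids = []
for d in range(3, 10):
    for p in range(3, 10):
        excess = Fraction(1, d) + Fraction(1, p) - Fraction(1, 2)
        if excess <= 0:
            continue                 # no m can satisfy 1/2 + 1/m = 1/d + 1/p
        m = 1 / excess               # from 1/m = 1/d + 1/p - 1/2 (exact)
        n = 2 * m / d                # from 2m = n*d
        f = 2 * m / p                # from 2m = f*p
        solids.append((d, p, int(m), int(n), int(f)))

# Exactly the five platonic solids survive:
# (3,3,6,4,4) tetrahedron, (3,4,12,8,6) cube, (3,5,30,20,12) dodecahedron,
# (4,3,12,6,8) octahedron, (5,3,30,12,20) icosahedron.
```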
Alas, there is no simple satisfactory answer to this question. What I can offer is a very strong property that $m \mapsto H\bigl(k \mathbin\| H(k \mathbin\| m)\bigr)$ fails to achieve; a more pedestrian property which even HMAC may or may not achieve but is typically asked to achieve; a reason not to worry about it for any new systems; and some historical ...

Saarinen, in his work GCM, GHASH and Weak Keys, says that: This paper is not very clear and has led many people into regrettable confusion about universal hashing authenticators. The paper—both the manuscript you cited and the conference paper at FSE 2012—contains misleading claims and misattribution of ideas; describes attacks that apply only beyond the ...

"HMAC-SHA-xxx has an output length of xxx bits" That's right, the output of the HMAC function is identical to the output of the hash by default. This is obvious if you take the design of HMAC into consideration. "HMAC-SHA-xxx-yyy has an output length of yyy bits" Certainly. It has been defined that way in the venerable RFC 2104 - HMAC: Keyed-Hashing ...

If we view HMAC as a message authentication code or a PRF, this doesn't quite make sense: the security property for a MAC or a PRF assumes that the forger doesn't know the key, but you've given them the key $k$ in the signature. But you have the right intuition that there is something here. While a signature scheme in terms of $H(m)$ requires $H$ to be ...

We don't know. Although it seems unlikely in the extreme that there is some kind of mathematical equation that gets easier to solve when the second key relies on the first key, we probably cannot prove it. So that's it for the theoretical problems. One practical problem is that when the key for confidentiality is obtained by the adversary (e.g. through a ...

"But, for the purpose of optimization, I was thinking to delete only its IV and Tag, thus hoping to make decryption of that segment's data computationally infeasible." Neither works.
The IV might seem to work (as doing a brute force search over a space of size $2^{96}$ might appear to be daunting), however the attacker has another potential approach. If an ...

Yes, it helps to use HMAC instead of a hash in a digital signature (with the HMAC key $k$ sent along with the signature), but only when the signer chooses $k$ unpredictably when signing. The attacker can then no longer exploit a loss of collision resistance in the hash, because the attacker can't choose the message $m$ as a function of $k$, and that tends to make ...

What you are looking for is a randomized authenticated cipher of short bit strings, 64 bits long. You seem to be willing to accept between 64 and 256 bits of ciphertext expansion. With an $r$-bit randomization string, the probability of a collision after $n$ ciphertexts—which would, at the very least, reveal equality of masked ids—is at best bounded by ...

In many applications, especially in zero-knowledge proofs, we need commitment schemes that are additively homomorphic. Pedersen commitment schemes do have this property; hash-based commitment schemes don't. If we do Pedersen commitments on elliptic curves for performance reasons, where we fix two points $P$ and $Q$ on a curve, we can define: $\text{commit}(...$

Consider the following system involving a message authentication code like HMAC-SHA256: Alice generates a key and shares it with Bob, and Bob alone. Alice authenticates a message with the key. Bob acts on the message only if it can be verified with the key. Of course, Alice can verify messages too, and Bob can forge messages too. Indeed, anyone who has ...

"What convenience do universal hashes provide?" They are simple to describe: $X_i=(X_{i-1}+D_i)\cdot H$, with $D_i$ being the $i$-th data word and $H$ being a key/iv-dependent secret, and they are really fast to evaluate in hardware and reasonably fast to evaluate in software. This is especially true if you consider the fact that hashes usually have 64 or ...
Adding to Yehuda Lindell's answer, there are two special cases that one should take care to avoid: If the HMAC key $k$ is longer than the block size of $H$, then it is compressed into $k' = H(k)$ and $k'$ is used instead with HMAC as usual. This means that there are easily constructed collisions in HMAC keys, namely $k' = H(k)$ and $k$ for any key $k$ that ...

If we assume that 1 and 2 are true then HMAC-SHA-512 is a better choice. For the time comparison, as stated by Paul Uzsak, if there is no specific time-critical requirement, a ~50% speedup shouldn't be the deciding factor. If it is more secure, then use it. For your comment: the SHAttered attack on SHA-1 requires a degree of freedom in the hashed data, as in PDFs. This ...

HMAC (and any other MAC) are totally different from digital signatures (RSA, DSA, ECDSA, EdDSA). MACs require a shared secret key that both the communicating parties have. The same secret is used to create the MAC as is used to verify it. Anyone with the shared secret key can create a MAC, and anyone with the shared secret key can verify a MAC. Digital ...

The point is not to make the attack infeasible time-wise. Computational resources (usually area*time, which is roughly a measure of the monetary cost) needed for the attack are the only thing you can hope to increase here. As you observed, it is always possible to parallelize brute-force search of passwords, so the best you can do is increase the ...

This is the mechanism of the rolling system of one of the best known crypto faucet websites. I think that the previous answer above is a bit misleading. However, when the math is confusing, a simple test with some code and some pseudo-random generated data should not leave any doubt. Your statement: "..the way I see it, it is 50% less likely to hit the ...

Salsa20 has essentially no limits on its own for data volume: it can be used for up to $2^{64}$ messages of up to $2^{70}$ bytes apiece.
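The key-compression special case described in the first paragraph is easy to demonstrate with Python's standard library: SHA-256 has a 64-byte block size, so any longer key $k$ is first hashed down to $k' = H(k)$ per RFC 2104, and the two keys then produce identical tags:

```python
import hashlib
import hmac

msg = b"attack at dawn"
long_key = b"k" * 100                           # longer than SHA-256's 64-byte block
compressed = hashlib.sha256(long_key).digest()  # k' = H(k), only 32 bytes

tag1 = hmac.new(long_key, msg, hashlib.sha256).hexdigest()
tag2 = hmac.new(compressed, msg, hashlib.sha256).hexdigest()

# The two distinct keys collide: HMAC first replaces the long key with its hash.
assert tag1 == tag2
```

This is not a practical attack (an attacker still needs to know $k$), but it shows the colliding key pairs the answer describes.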
You could use it in a nonstandard way for, say, more messages if they're each smaller, by carving up the input to the PRF differently, as long as the total volume of data is below $2^{134}$ bytes. You certainly can't ...

Let's look at the construction more abstractly. Let $F : \{0,1\}^n \times \{0,1\}^n \to \{0,1\}^n$ be a pseudorandom permutation and let $H : \{0,1\}^* \to \{0,1\}^n$ be a collision resistant hash function. If we look at the simpler construction $$\operatorname{MAC}(k,m) := F_k(H(m))$$ we can actually prove that this is a secure MAC (at least in theory). ...

Generally a digital signature is created using a private key and verified with the public key of an asymmetric key pair. Public keys can be easily distributed to verify the signatures. It is however required that the public key can be trusted to be part of the correct key pair; the public key needs to be trusted. For this, a public key infrastructure or PKI ...

First of all, HMAC is not exactly a hash function. Wikipedia clearly states that: In cryptography, an HMAC (sometimes expanded as either keyed-hash message authentication code or hash-based message authentication code) is a specific type of message authentication code (MAC) involving a cryptographic hash function and a secret cryptographic key. In ...

I see no way. Typed keys are typed to prevent them from being used for any task other than being the key to the designated algorithm. What you maybe could do is use the key to derive another, and use that as an HMAC key. But I do not immediately see how we could avoid that the HMAC key could leak out of the PKCS#11 device. Note: this comment sketches an option. But ...
As cheaper alternatives to HMAC with modest security goals, consider: SipHash—cheaper than MD5 because you don't have to pay for collision resistance; security is limited by the 64-bit output size. Maybe a Gimli-based PRF—Gimli is a new compact design. Or derive a fresh key for each message, and use a one-time authenticator like a polynomial evaluation universal ...

"the IVs of next blocks are going to be extracted from previous block ciphertext." Here I am assuming chunks, since CBC requires an IV only once per message/chunk. The IV of CBC should be unpredictable, i.e. random. It is not a good idea to extract it from a known source; use a randomly generated IV per chunk. First option: enables chaining, but you should also add the IV with the ...

The reasoning is wrong: the whole point of the paper is to establish a definition for secure general-purpose KDFs. In definition 7 of the paper, it is defined that a KDF is secure if it can withstand a distinguishing game against an adversary with the capability to query the KDF oracle - that's what something linear like SWIFFT cannot provide. As for the quoted ...

The salt of a password hash mitigates multi-target attacks like rainbow tables (and variants like the parallel rainbow table search machine), if you use a distinct salt for each user. Effectively, a distinct salt for each user means each user is using a slightly different hash function, so the advantage of any batch attack on many instances of the same hash ...

No, you can use the same HMAC key for any message, including an already-HMAC'ed message (regardless of whether the HMAC value is encrypted or not). The key in HMAC is protected by the one-way hash function used internally, so getting the key should be hard. The collision resistance and other security properties will also prevent an attacker from generating an HMAC ...

The first scheme is similar to what's called Encrypt-and-MAC. It is not ideal, but it is not fatally broken, and it is still used securely by the SSH protocol.
You need to include a counter or other unique value in the data being MACed to maintain IND-CPA security (i.e. identical plaintexts don't have identical MACs). The second scheme you present doesn't ...

Why does HMAC do that? The process of expanding or compressing the key material to exactly one block length this way shouldn't be necessary from a security perspective. The reason for this transformation is to enable an optimization when the same key will be used to process multiple messages. (See RFC 2104.) If this optimization is used, it means that you ...

HMAC-MD5 is not vulnerable to length-extension attacks, period. If that's what your code computes, then your code is not vulnerable to length-extension attacks. That said: length-extension attacks are a property of the protocol, not of the implementation of the protocol. It doesn't matter whether you use OpenSSL or something else to implement the same protocol. ...
$$ \frac{1}{\sum_{l\in \color{cyan}{L}} |\color{green}{\hat{y}}_l|} \sum_{l \in \color{cyan}{L}} |\color{green}{\hat{y}}_l| \phi(\color{magenta}{y}_l, \color{green}{\hat{y}}_l) $$ \(\color{cyan}{L}\) is the set of labels \(\color{green}{\hat{y}}\) is the true label \(\color{magenta}{y}\) is the predicted label \(\color{green}{\hat{y}}_l\) is the set of samples whose true label is \(l\) \(|\color{green}{\hat{y}}_l|\) is the number of samples whose true label is \(l\) \(\phi(\color{magenta}{y}_l, \color{green}{\hat{y}}_l)\) computes the precision or recall for the predicted and true sample sets for label \(l\). To compute precision, let \(\phi(A,B) = \frac{|A \cap B|}{|A|}\). To compute recall, let \(\phi(A,B) = \frac{|A \cap B|}{|B|}\). How is Weighted Precision and Recall Calculated? Let’s break this apart a bit more. This last part of the equation weights the precision or recall by the number of samples that have the \(l\)-th true label. $$ \frac{1}{\sum_{l\in \color{cyan}{L}} |\color{green}{\hat{y}}_l|} \sum_{l \in \color{cyan}{L}} \color{red}{\Bigg[} |\color{green}{\hat{y}}_l| \phi(\color{magenta}{y}_l, \color{green}{\hat{y}}_l) \color{red}{\Bigg]} $$ The middle part of the equation sums the weighted precision or recall over all the different labels to get a single number. $$ \frac{1}{\sum_{l\in \color{cyan}{L}} |\color{green}{\hat{y}}_l|} \color{red}{\Bigg[} \sum_{l \in \color{cyan}{L}} |\color{green}{\hat{y}}_l| \phi(\color{magenta}{y}_l, \color{green}{\hat{y}}_l) \color{red}{\Bigg]} $$ Finally, the first part of the equation normalizes the summed weighted precision or recall by the total number of samples. $$ \color{red}{\Bigg[} \frac{1}{\sum_{l\in \color{cyan}{L}} |\color{green}{\hat{y}}_l|} \color{red}{\Bigg]} \sum_{l \in \color{cyan}{L}} |\color{green}{\hat{y}}_l| \phi(\color{magenta}{y}_l, \color{green}{\hat{y}}_l) $$ There you go! We now know how to compute weighted precision or recall. The same weighting is applied to F-score.
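The formula can be sketched directly in Python. Here each label's samples are represented as sets of sample indices; the names `true`, `pred`, and `weighted_metric` are mine, introduced for illustration:

```python
from collections import defaultdict

def weighted_metric(y_true, y_pred, phi):
    """Support-weighted average of a per-label metric phi(pred_set, true_set)."""
    true, pred = defaultdict(set), defaultdict(set)
    for i, (t, p) in enumerate(zip(y_true, y_pred)):
        true[t].add(i)   # samples whose true label is t
        pred[p].add(i)   # samples predicted as p
    labels = set(true)                            # weight by |true_l|, as in the formula
    total = sum(len(true[l]) for l in labels)     # total number of samples
    return sum(len(true[l]) * phi(pred[l], true[l]) for l in labels) / total

def precision(pred_set, true_set):
    # convention: 0 when the label is never predicted
    return len(pred_set & true_set) / len(pred_set) if pred_set else 0.0

def recall(pred_set, true_set):
    return len(pred_set & true_set) / len(true_set)

y_true = [0, 0, 0, 1]
y_pred = [0, 0, 1, 1]
# per-label recalls: 2/3 for label 0, 1/1 for label 1
# weighted recall = (3*(2/3) + 1*1) / 4 = 0.75
```

Replacing the support weights `len(true[l])` with a uniform `1 / len(labels)` would give the macro-averaged variant instead.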
One problem with weighted precision and recall (and other weighted metrics) is that the performance of infrequent classes is given less weight (since \(|\color{green}{\hat{y}}_l|\) will be small for infrequent classes). Thus weighted metrics may hide the performance of infrequent classes, which may be undesirable (especially as the infrequent classes are often what we are most interested in detecting). See this description of macro averaging and note how it compares to weighted averaging.
I was interested in writing a program that, given the number of variables and the degree of the multivariate polynomial, is able to output the multivariate polynomial itself or evaluate it at a specific point (in reality I will feed it vectors with a value for each variable, and I want to evaluate the polynomial). So here is an example input/output in pseudocode: f(variables=[x1,x2], Degree=2) = 1+x1+x2+x1x2+x1^2+x2^2 When there is a general number of variables and a general degree it gets tricky. I noticed that this problem is equivalent to considering tuples/sequences that satisfy the following: $$ S_D = \{ (d_0, ..., d_N) : \sum^N_{i=0} d_i = D \}$$ then my answer would be the set: $$ S^{*}_D = \cup^D_{d'=0} S_{d'} = \{ (d_0, ..., d_N) : \sum^N_{i=0} d_i \leq D \} $$ I started with an example to try to actually compute that set, say degree 3 and 3 variables. I considered N = 3 and got the tuples: (3,0,0), (0,3,0), (0,0,3) (2,1,0), (2,0,1) (0,2,1), (1,2,0) (0,1,2), (1,0,2) I also tried higher numbers but I didn't really see an obvious way to generalize it wrt to N or D. Any hints on how to do it? (Also the full solution is welcome so I can implement it for my real task)
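One way to generate the set $S^*_D$ directly is to filter the Cartesian product of per-variable degree ranges; this is fine for small numbers of variables and degrees (a combinations-with-replacement scheme scales better, but the filter is the simplest correct answer). A sketch, with the all-ones-coefficients evaluation the question's example uses:

```python
from itertools import product
from math import comb, prod

def exponent_tuples(n_vars, max_degree):
    """All tuples (d_1, ..., d_n) with d_i >= 0 and d_1 + ... + d_n <= max_degree."""
    return [t for t in product(range(max_degree + 1), repeat=n_vars)
            if sum(t) <= max_degree]

def evaluate(point, max_degree):
    """Evaluate sum over tuples t of prod_i x_i^{t_i} at the given point
    (i.e. the polynomial with every coefficient equal to 1)."""
    return sum(prod(x ** d for x, d in zip(point, t))
               for t in exponent_tuples(len(point), max_degree))

# Sanity check via "stars and bars": |S*_D| for n variables is C(n + D, D).
assert len(exponent_tuples(2, 2)) == comb(4, 2) == 6  # matches 1+x1+x2+x1x2+x1^2+x2^2
```

Note this enumeration also produces tuples such as (1,1,1) for degree 3 in 3 variables, which the hand-written list above missed.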
Let $f : S^1 \rightarrow S^1$ be a continuous map such that $\operatorname{deg}(f) \ne 0.$ [Note : I have the degree defined as in this question.] I'm trying to prove that the hypotheses above imply that $f(S^1)=S^1.$ What I've tried is the following. Suppose that $\exists \ x_0 \in S^1 \setminus f(S^1).$ Bearing in mind that $S^1 \setminus \{x_0\}$ is a subspace of $S^1$ that is contractible, we obtain a diagram: $$S^1 \overset{f}{\longrightarrow} S^1 \setminus \{x_0\} \overset{r}{\longrightarrow} \{f(1)\},$$ where $r$ is a strong deformation retraction of $S^1 \setminus \{x_0\}$ onto $\{ f(1) \}.$ Hence, $r \circ f = c_{f(1)}$, the constant map from $S^1$ to $f(1).$ Writing $i : \{f(1)\} \rightarrow S^1$ for the inclusion map, we have that $i \circ r \simeq Id_{S^1 \setminus \{x_0\}}$, that is, $i \circ r$ is homotopic to the identity relative to the point $f(1).$ Hence $f = Id_{S^1 \setminus \{x_0\}} \circ f \simeq (i \circ r) \circ f = i \circ c_{f(1)} = c_{f(1)}$ and we get that $f$ is homotopic to the constant map relative to the point $f(1)$, and this contradicts the fact that $\operatorname{deg}(f) \ne 0.$ I don't know if my proof is correct or if it can be improved. Thanks for your time!
Let's prove that $e^{\pi} > \pi^e$ without a calculator. If you haven't seen this before, give it a try before you read any further. Consider the function $f(x) = x/\ln(x)$ on the interval $(1,\infty)$. This function shoots off to positive infinity as $x$ tends towards either endpoint of the domain. Let's … Continue reading e^pi > pi^e Consider the problem of finding the following limit: $\lim_{x \rightarrow 0} x^x$ It's actually not too bad. We can write $x^x = e^{x \ln x}$ and bring the limit into the exponent (as exponentiating is continuous) to get that $\lim_{x \rightarrow 0} x^x = e^{\lim_{x \rightarrow 0} x \ln x}$ From here, all we need to do … Continue reading Limits and Towers Feel up to seeing a bit of mathematical magic? See here. http://www.youtube.com/watch?v=g0qbNksZLgo Just me doing my 3MT. At the end of my undergraduate degree, I remember thinking "what better way to mark such an event than to write and record a maths rap?" Well, I did end up recording such a thing, but it was pretty woeful. However, a couple of months ago, I plucked up the zest to do it again. … Continue reading Don’t Mess with the Mathematician Integrals can tell us quite a lot. For those of you who are so disgustingly bored that you've found your way onto my blog, you should have a go at evaluating exactly the following integral: $\int_0^1 \frac{x^4 (1-x)^4}{1+x^2} dx$ It might take some effort, but it's well worth it. Of course, Wolfram Mathematica could do it … Continue reading Late night integration I thought it would be a good idea to relay some of the advice given by Sir Michael Francis Atiyah during his talk on Tuesday. Sir Michael Atiyah during his talk at #hlf13 Picture: HLFF @Bernhard Kreutzer Always ask yourself questions. Atiyah says that one of the secrets of his success is to always be curious. … Continue reading Advice to a Young Mathematician
I am currently reading this paper on Restricted Boltzmann Machines. On page 22, given a Markov Random Field $\mathbf{X} = (X_1,\ldots,X_N)$ w.r.t. a graph $G = (V,E)$ where $V = \{1 \ldots N\}$ and $i \in V$, the Gibbs sampling procedure is as follows: At each transition step, pick a random variable $X_i$, with a probability $q(i)$ given by a strictly positive probability distribution $q$ on $V$. Then, sample a new value of $X_i$ based on its conditional probability distribution given the state $(x_v)_{v \in V \setminus i}$ of all other variables $(X_v)_{v \in V \setminus i}$, i.e. based on $\pi(X_i \vert (x_v)_{v \in V \setminus i})$. Given the above steps, the paper states the transition probabilities $p_{xy}$ for two states $\mathbf{x},\mathbf{y}$ of the MRF $\mathbf{X}$ to be: [the equation is given as an image in the paper and is not reproduced here] I don't quite understand: 1) should the $p_{xx}$ term have a product over all $i \in V$ as opposed to the sum? 2) why does $p_{xy}$ not have a sum over all $i \in V$?
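For reference, the transition probabilities the question refers to were lost with the image; the standard random-scan Gibbs kernel (a reconstruction from the usual definition, not a verbatim quote of the paper) reads:

$$p_{\mathbf{x}\mathbf{y}} = q(i)\,\pi\bigl(y_i \mid (x_v)_{v \in V \setminus i}\bigr) \quad\text{if } \mathbf{y} \text{ differs from } \mathbf{x} \text{ only at site } i,$$

$$p_{\mathbf{x}\mathbf{x}} = \sum_{i \in V} q(i)\,\pi\bigl(x_i \mid (x_v)_{v \in V \setminus i}\bigr), \qquad p_{\mathbf{x}\mathbf{y}} = 0 \text{ otherwise.}$$

On this reading, the sum in $p_{\mathbf{x}\mathbf{x}}$ is correct: "pick site $i$ and resample its old value" is a disjoint event for each choice of $i$, so the probabilities add. And for $\mathbf{y} \ne \mathbf{x}$ differing in exactly one site, only that one site could have been chosen, so no sum over $i$ appears.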
Basically 2 strings, $a>b$, which go into the first box and do division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, otherwise inputs $b,r$ into the division box.. There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written in Microsoft Word) to some journal, citing the names of various lecturers at the university Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact group $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$. What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation? Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach. Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P Using the recursive definition of the determinant (cofactors), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row? Let $M$ and $N$ be $\mathbb{Z}$-modules and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ is a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$, given that $M\otimes_\mathbb{Z} H$ is an additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?" @Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider. Although not the only route, can you tell me something contrary to what I expect? It's a formula. There's no question of well-definedness. I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer. It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time. Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated. You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of the coordinate system. @A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago. @Eric: If you go eastward, we'll never cook! :( I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous. @TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up.
I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$) @TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite. @TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
In my formal languages class, we discussed DIV, defined as follows: $\mathrm{DIV} = \{\langle a,b\rangle : \text{$a, b \in N$ and $a$ has a divisor $d$ for some $1 < d \leq b$ }\}$ ($\langle\cdot\rangle$ means encoded, let's say as binary) We were told that it isn't known whether DIV is in P and were tasked to prove it is in NP. I naively and mistakenly assumed that DIV was in P because of the following algorithm: On input $\langle a,b\rangle$: for all $1 < d \leq b$, check if $d$ divides $a$. If so, accept; otherwise reject. I thought that this algorithm would run in polynomial time because we do $b$ many divisions at worst. Division is polynomial time; therefore, $b$ many divisions is also polynomial time. (Also note, $b < a$, or DIV is trivially true, with $d = a$.) However, I was told that this algorithm is not polynomial time with respect to the input. I don't really understand this part. Something along the lines of: since $a$ and $b$ are encoded in binary, the input has length $O(\log a + \log b)$. And that means our $b$ many divisions is actually $O(b) \cdot O(\text{division})$, and that $O(b)$ is $O(2^{\log b})$. However, isn't $2^{\log_2 b}$ the same as $b$? How is that not polynomial time?
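For intuition (my sketch, not from the original thread; the function name is mine): trial division is fast as a function of $b$, but the *input* is only about $\log_2 a + \log_2 b$ bits long, so a loop of $b$ iterations is exponential in the input length.

```python
def has_divisor_up_to(a: int, b: int) -> bool:
    """Decide <a, b> in DIV: does a have a divisor d with 1 < d <= b?"""
    for d in range(2, b + 1):
        if a % d == 0:
            return True
    return False

# The loop runs up to b - 1 times, but b itself is ~2^(bits of b),
# so the running time is exponential in the size of the encoded input.
a = 2**31 - 1                        # a Mersenne prime: no nontrivial divisors
print(has_divisor_up_to(a, 10**6))   # False, after ~10^6 iterations
print(len(bin(a)) - 2)               # 31 -- the whole input a is only 31 bits
```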
Is there any 1st order electromagnetic Feynman diagram? I.e. a process whose probability is just $\propto \alpha_{EM}$? If not, is there any physical reason why? We always need at least two particles in and two out to conserve energy and momentum? A diagram which is first order in $\alpha_\text{EM}$ would have to have one vertex, because $\alpha_\text{EM}\propto g^2$ where $g$ is the factor associated with each vertex (and the amplitude corresponding to the diagram gets squared). There's only one possible vertex in QED, namely the photon-electron-positron vertex, and it's impossible to arrange this in any way that conserves energy and momentum. There are only two really distinct possibilities: So no, there is no first-order diagram. The same argument goes to show that the corresponding process ($\gamma\to e^+e^-$, $e^-\gamma\to e^-$, or any variant) is kinematically forbidden. If you replace the photon with a sufficiently massive particle, like a Z boson, then it's totally fine. Of course, Z bosons aren't stable, so whatever diagram you draw that includes the $Z\to e^+e^-$ vertex should probably also include whatever interaction produced the Z boson in the first place, but in theory if you had a free Z boson propagating through space, it could undergo this decay. Note the distinction I've drawn between a diagram and a process, which is defined by its initial and final states but incorporates many different diagrams.
I prefer to interpret the mean-variance frontier as a consequence of linear algebra as developed in Hansen and Richard (1987) and discussed in Cochrane (2005). In brief: The space of returns is a hyperplane in the vector space of payoffs. The set of returns on the mean-variance frontier is a line in the space of returns. Any two distinct points on a line define the line. (This is basically what the two-fund separation theorem boils down to.) Why a line? Speaking loosely, there's only one direction in return space for up (higher expected return), and moving in any perpendicular direction doesn't change the expected return (but does change variance). Move up the line for a higher expected return, move down the line for a lower expected return, and move perpendicular off the line for more variance and the same expected return. I'll briefly sketch some of the arguments but read Cochrane for a more in-depth discussion. Preliminaries Let $X$ and $Y$ denote random variables. Observe that $\operatorname{E}[XY]$ is an inner product. $X$ and $Y$ are called orthogonal if their inner product is zero, i.e., $\operatorname{E}[XY] = 0$. Let $R^*$ denote the projection of the stochastic discount factor onto the space of payoffs, scaled so that it's a return. The hyperplane of excess returns is orthogonal to the discount factor and $R^*$. Let $R^{e*}$ denote the projection of the constant $1$ onto the space of excess returns. In return space, moving in the direction $R^{e*}$ will give a higher expected return. Hansen Richard orthogonal decomposition The important point is that any return $R_i$ can be written using the following orthogonal decomposition: $$ R_i = R^* + w_i R^{e*} + \eta_i$$ Different returns $R_i$ will have different $w_i$ and $\eta_i$. (Note $w_i$ is a scalar and $\eta_i$ is a random variable.) Observe that by construction, $\eta_i$ is orthogonal to $1$, hence $\operatorname{E}[\eta_i]= 0$.
Therefore: $$ \operatorname{E}[R_i] = \operatorname{E}[R^*] + w_i \operatorname{E}[R^{e*}]$$ Using the fact that $R^*$, $R^{e*}$, and $\eta_i$ are all orthogonal to each other, you can further show that: $$ \operatorname{Var}(R_i) = \operatorname{Var}\left( R^* + w_i R^{e*} \right) + \operatorname{Var}(\eta_i) $$ The set of returns on the mean-variance frontier is the line $ R^{mv} = \left\{ R^* + \alpha R^{e*} \mid \alpha \in \mathbb{R} \right\}$. Any return along the mean-variance frontier has $\eta_i = 0$. A non-zero $\eta_i$ gives you variance but no change in expected return (because the space in which $\eta_i$ lies is orthogonal to $R^{e*}$, the projection of $1$ onto the space of excess returns). Why is $1$ special? The inner product of a random variable with $1$ gives its mean. Hansen Richard in the space of security weights (instead of return space) Let $ \mathbf{R} = \begin{bmatrix} R_1 \\ \vdots \\ R_k \end{bmatrix} $ be a random vector denoting the returns of $k$ securities. Let covariance matrix $\Sigma = \operatorname{Cov}(\mathbf{R})$ and mean return vector $\boldsymbol{\mu} = \operatorname{E}[\mathbf{R}]$. For convenience, let $A = \Sigma + \boldsymbol{\mu} \boldsymbol{\mu}'$. (Hence $A = \operatorname{E}[\mathbf{R}\mathbf{R}']$.) Define inner product $\langle \mathbf{x}, \mathbf{y} \rangle_A \equiv \mathbf{x}' A \mathbf{y}$. Let $\mathbf{1}$ denote a vector of 1s.
Security weights $ \mathbf{w}^*$ (a vector) and corresponding return $R^*$ (a random variable) are given by:$$ \mathbf{w}^* = \frac{A^{-1}\mathbf{1}}{\mathbf{1}'A^{-1}\mathbf{1}} \quad \quad R^* = \mathbf{w}^* \cdot \mathbf{R} $$ Security weights $ \mathbf{w}^{e*}$ and excess return $R^{e*}$ are given by:$$ \mathbf{w}^{e*} = A^{-1}\boldsymbol{\mu} - \left( \frac{\mathbf{1}' A^{-1} \boldsymbol{\mu}}{\mathbf{1}'A^{-1}\mathbf{1}}\right) A^{-1}\mathbf{1} \quad \quad R^{e*} = \mathbf{w}^{e*} \cdot \mathbf{R}$$ Security weights for portfolios on the mean-variance frontier are: $$ \left\{ \mathbf{w}^{*} + \alpha \mathbf{w}^{e*} \mid \alpha \in \mathbb{R} \right\} $$ $$ \left\{ \frac{A^{-1}\mathbf{1}}{\mathbf{1}'A^{-1}\mathbf{1}} + \alpha \left[ A^{-1}\boldsymbol{\mu} - \left( \frac{\mathbf{1}' A^{-1} \boldsymbol{\mu}}{\mathbf{1}'A^{-1}\mathbf{1}}\right) A^{-1}\mathbf{1} \right] \,\middle|\, \alpha \in \mathbb{R} \right\} $$ Or equivalently: $$ \left\{ \left( 1 - \beta \right) \frac{A^{-1}\mathbf{1}}{\mathbf{1}'A^{-1}\mathbf{1}} + \beta \frac{ A^{-1}\boldsymbol{\mu}}{\mathbf{1}' A^{-1} \boldsymbol{\mu}} \; \middle| \ \beta \in \mathbb{R} \right\} $$ This is the same mean-variance frontier traced out by combinations of the minimum variance portfolio $\mathbf{w}_{\mathrm{mv}} = \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}'\Sigma^{-1}\mathbf{1}}$ and tangency portfolio $\mathbf{w}_{\mathrm{tan}} = \frac{\Sigma^{-1}\boldsymbol{\mu}}{\mathbf{1}'\Sigma^{-1}\boldsymbol{\mu}}$ but the algebra to see that is a bit horrible. References Cochrane, John, Asset Pricing, 2005 Hansen, Lars Peter and Scott F. Richard, "The Role of Conditioning Information in Deducing Testable Restrictions Implied by Dynamic Asset Pricing Models," Econometrica, 1987
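A tiny two-asset numerical illustration (mine, with made-up $\boldsymbol\mu$ and $\Sigma$, not from Cochrane) of the weight formulas above. Two properties make good sanity checks: $\mathbf{w}^*$ is a return portfolio, so its weights sum to $1$; $\mathbf{w}^{e*}$ is an excess-return portfolio, so its weights sum to $0$.

```python
mu = [1.05, 1.10]                       # mean gross returns (invented)
Sigma = [[0.04, 0.01],
         [0.01, 0.09]]                  # return covariance matrix (invented)

# A = Sigma + mu mu'
A = [[Sigma[i][j] + mu[i] * mu[j] for j in range(2)] for i in range(2)]

# Inverse of the 2x2 matrix A
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
Ainv = [[ A[1][1] / det, -A[0][1] / det],
        [-A[1][0] / det,  A[0][0] / det]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

Ainv1  = matvec(Ainv, [1.0, 1.0])       # A^{-1} 1
Ainvmu = matvec(Ainv, mu)               # A^{-1} mu

one_Ainv_1  = sum(Ainv1)                # 1' A^{-1} 1
one_Ainv_mu = sum(Ainvmu)               # 1' A^{-1} mu

w_star  = [x / one_Ainv_1 for x in Ainv1]                      # sums to 1
w_estar = [Ainvmu[i] - (one_Ainv_mu / one_Ainv_1) * Ainv1[i]   # sums to 0
           for i in range(2)]

print(sum(w_star), sum(w_estar))
```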
If $K(s)$ is the Kolmogorov complexity of the string $s \in \{0,1\}^*$, can we prove (or disprove) the following statement: "Every string $s$ is a prefix of an incompressible string; i.e. for every string $s$ there exists a string $r$ such that $K(sr) \geq |sr|$" ? In a very informal (and perhaps not too meaningful) way: we know that $K(r) \leq |r| + O(1)$; if we pick a large enough incompressible string $r$, can we "use" the $O(1)$ to "mask" the compressibility of the given string $s$ ? A similar (but different) result is that for any $c$, we can find $s$ and $r$ such that: $K(sr) > K(s) + K(r) + c$
Integration by substitution is also known as u-substitution; this method helps when a function cannot be integrated directly. Integration by substitution proceeds in the following steps: A new variable, say $t$ (or $u$), is chosen to stand for a suitable expression in $x$. The value of $dx$ in terms of the new variable is determined. The substitution is made. The resulting integral is evaluated. Finally, the answer is rewritten in terms of the initial variable $x$. The standard formulas for integration by substitution, where $\phi$ denotes an antiderivative of $f$, are given as: \[\large \int f(ax+b)dx=\frac{1}{a}\phi (ax+b)+c\] \[\large \int f\left(x^{n}\right)x^{n-1}dx=\frac{1}{n}\phi \left(x^{n}\right)+c\] \[\large \int \frac{{f}'(x)}{f(x)}dx=\log f(x)+c\] Solved Examples Question: Find the integral using the substitution formula: $\int \frac{(3+\ln 2x)^{3}}{x}dx$ Solution: Let $u = 3 + \ln 2x$. We can expand out the log term as: $3 + \ln 2x = 3 + \ln 2 + \ln x$. The first two terms on the right are constants (whose derivative equals zero) and the derivative of the natural log of $x$ is $\frac{1}{x}$. Then: $du=\frac{1}{x}dx$ $\int \frac{(3+\ln\; 2x)^{3}}{x}dx= \int u^{3}du$ $=\frac{u^{4}}{4}+k$ $=\frac{(3+\ln\:2x)^{4}}{4}+k$
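As a quick numerical sanity check of the worked example (my own, not part of the original article): compare $F(x)=\frac{(3+\ln 2x)^4}{4}$ against a trapezoidal estimate of $\int_1^2 \frac{(3+\ln 2x)^3}{x}\,dx$.

```python
import math

def f(x):
    """Integrand (3 + ln 2x)^3 / x."""
    return (3 + math.log(2 * x)) ** 3 / x

def F(x):
    """Antiderivative found by the substitution u = 3 + ln 2x."""
    return (3 + math.log(2 * x)) ** 4 / 4

# Trapezoidal rule on [1, 2]
n = 100_000
h = (2 - 1) / n
trap = h * ((f(1) + f(2)) / 2 + sum(f(1 + i * h) for i in range(1, n)))

print(abs(trap - (F(2) - F(1))))   # tiny: the two agree
```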
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues? Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of: We can always detect uniform motion with respect to a medium by a positive result to a Michelson... Hmm, it seems we cannot just superimpose gravitational waves to create standing waves The above search is inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talks about a weird equation in general relativity that has a huge summation symbol, and then talked about gravitational waves emitting from a body. After that lecture, I then asked the lecturer whether gravitational standing waves are possible, as I imagine the hypothetical scenario of placing a node at the end of the vertical white line [The Cube] Regarding The Cube, I am thinking about an energy level diagram like this where the infinitely degenerate level is the lowest energy level when the environment is also taken into account The idea is that if the possible relaxations between energy levels is restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy high energy system confined in a compact volume Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations to give the same high energy, thus effectively creating an entropy trap to minimise heat loss to surroundings @Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too.
The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer). Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you said "if you really want to 'understand'", I thought you were referring to my questions directly to the close voter, not the question in meta. When you mention my original post, do you think that it's a hopeless mess of confusion? Why? Apart from being off-topic, it seems clear enough to understand, doesn't it? Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks. I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh... @0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% sure that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P) Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be a multiple power of 3? Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\... @ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there. @CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
I am having much distress over Maxwell's 3rd Equation (Faraday's Law of Induction) and a thought experiment I had. Given that Maxwell-Faraday's equation is $$\oint E \cdot ds = -\frac{d\phi}{dt}$$And from the definition by HyperPhysics (emphasis mine), The line integral of the electric field around a closed loop is equal to the negative of the rate of change of the magnetic flux through the area enclosed by the loop. If this is the case, please consider the following scenario. I insert a dense magnetic field into ONLY THE CENTER of a loop of wire (the magnetic field does not touch the actual loop). I was taught that Faraday's Law of Induction could be derived from the Lorentz Force on moving charges exposed to magnetic fields. However, as no magnetic field interacts with the charges in the wire (the field doesn't extend to the coil) there should be no EMF induced. But Maxwell's equations says there should be because there is a change in flux in the area of the loop. I'm pretty sure Maxwell's equations aren't wrong, so could someone please explain what's wrong here? Does Maxwell's equation assume that the flux change is uniform through the entire area? That doesn't sound like an assumption that he would make, given the universality of his 4 equations.
This may be a stupid question but when we work with exponentiation we can see that $x^{\frac 12}=\sqrt x$ because: $x^{\frac 12}\times x^{\frac 12}=x^{\frac 12+\frac 12}=x^1=x$ and $\sqrt x \times \sqrt x={\sqrt x}^2=x$ Now it seems obvious working with tetration that $x\uparrow \uparrow \frac 12 = \sqrt x_s$ (where $\sqrt x_s$ is the super square root so that $\sqrt x_s^{\sqrt x_s}=x$) but I'm not sure so how do I actually prove/disprove this ? Can it be generalized for higher degree operations like $x\uparrow \uparrow \uparrow \frac 12=\sqrt x_{ss}$ (where $\sqrt x_{ss}\uparrow \uparrow \sqrt x_{ss}=x$) and so on ?
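Whether $x\uparrow\uparrow\frac12$ *should* equal the super square root is exactly the open question above; the super square root itself, though, is easy to compute numerically. A sketch (mine; the function name is made up), solving $y^y = x$ for $y \ge 1$ by bisection:

```python
def super_sqrt(x: float) -> float:
    """Solve y**y == x for y >= 1 by bisection (y**y is increasing there)."""
    lo, hi = 1.0, max(2.0, x)
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid ** mid < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(super_sqrt(4.0))    # ~2.0, since 2**2 = 4
print(super_sqrt(27.0))   # ~3.0, since 3**3 = 27
```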
In mathematics, computer science, economics, or management science, mathematical optimization (alternatively, optimization or mathematical programming) is the selection of a best element (with regard to some criteria) from some set of available alternatives. In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations comprises a large area of applied mathematics. More generally, optimization includes finding “best available” values of some objective function given a defined domain (or a set of constraints), including a variety of different types of objective functions and different types of domains. An optimization problem can be represented in the following way: Given: a function $f : A \to \mathbb{R}$ from some set $A$ to the real numbers Sought: an element $x_0$ in $A$ such that $f(x_0) \leq f(x)$ for all $x$ in $A$ (“minimization”) or such that $f(x_0) \geq f(x)$ for all $x$ in $A$ (“maximization”). Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use for example in linear programming – see History below). Many real-world and theoretical problems may be modeled in this general framework. Problems formulated using this technique in the fields of physics and computer vision may refer to the technique as energy minimization, speaking of the value of the function f as representing the energy of the system being modeled. Typically, $A$ is some subset of the Euclidean space $\mathbb{R}^n$, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain $A$ of $f$ is called the search space or the choice set, while the elements of $A$ are called candidate solutions or feasible solutions.
By convention, the standard form of an optimization problem is stated in terms of minimization. Generally, unless both the objective function and the feasible region are convex in a minimization problem, there may be several local minima, where a local minimum x* is defined as a point for which there exists some $\delta > 0$ so that for all $x$ such that $$ \|\mathbf{x}-\mathbf{x}^*\|\leq\delta;\, $$ the expression $$f(\mathbf{x}^*)\leq f(\mathbf{x})$$ holds; that is to say, on some region around $x^*$ all of the function values are greater than or equal to the value at that point. Local maxima are defined similarly. Optimization problems are often expressed with special notation. Here are some examples. Consider the following notation: $$\min_{x\in\mathbb R}\; (x^2 + 1)$$ This denotes the minimum value of the objective function $x^2 + 1$, when choosing $x$ from the set of real numbers $\mathbb R$. The minimum value in this case is 1, occurring at $x = 0$. Similarly, the notation $$\max_{x\in\mathbb R}\; 2x$$ asks for the maximum value of the objective function $2x$, where $x$ may be any real number. In this case, there is no such maximum as the objective function is unbounded, so the answer is “infinity” or “undefined”. Consider the following notation: $$ \underset{x\in(-\infty,-1]}{\operatorname{arg\,min}} \; x^2 + 1, $$ or equivalently $$ \underset{x}{\operatorname{arg\,min}} \; x^2 + 1, \; \text{subject to:} \; x\in(-\infty,-1]. $$ This represents the value (or values) of the argument $x$ in the interval $(-\infty,-1]$ that minimizes (or minimize) the objective function $x^2 + 1$ (the actual minimum value of that function is not what the problem asks for). In this case, the answer is $x = -1$, since $x = 0$ is infeasible, i.e. does not belong to the feasible set. 
Similarly, $$ \underset{x\in[-5,5], \; y\in\mathbb R}{\operatorname{arg\,max}} \; x\cos(y), $$ or equivalently $$ \underset{x, \; y}{\operatorname{arg\,max}} \; x\cos(y), \; \text{subject to:} \; x\in[-5,5], \; y\in\mathbb R, $$ represents the $(x,y)$ pair (or pairs) that maximizes (or maximize) the value of the objective function $x\cos(y)$, with the added constraint that $x$ lie in the interval $[-5,5]$ (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form $(5, 2k\pi)$ and $(-5,(2k+1)\pi)$, where $k$ ranges over all integers. Arg min and arg max are sometimes also written argmin and argmax, and stand for argument of the minimum and argument of the maximum.
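The examples above can be checked numerically. A brute-force grid-search illustration (my own sketch, not part of the article):

```python
import math

# argmin of x^2 + 1 over x in (-inf, -1]: grid over [-5, -1]
xs = [i / 1000 for i in range(-5000, -999)]           # -5.000 .. -1.000
x_best = min(xs, key=lambda x: x * x + 1)
print(x_best)                                          # -1.0, the boundary point

# argmax of x*cos(y), x in [-5, 5]: the best achievable value is 5,
# attained at (5, 2k*pi) and (-5, (2k+1)*pi)
grid = [(i / 10, -math.pi + j * 0.01)
        for i in range(-50, 51) for j in range(629)]
x_m, y_m = max(grid, key=lambda p: p[0] * math.cos(p[1]))
print(x_m * math.cos(y_m))                             # ~5.0
```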
Recall the standard argument for showing that an AVL tree has height $O(\log n)$: Let $n_h$ be the minimum number of nodes of an AVL tree of height $h$. Then we have: $$ n_{h} \geq 1 + n_{h-1} + n_{h-2} \implies n_h > 2n_{h-2}$$ $$ n_h > 2^{\frac{h}{2} } \implies h < 2 \log n_h $$ so the height of the tree is $O( \log n)$. I understand the recurrence, but I just don't understand why we argue about "the minimum number of nodes of an AVL tree of height $h$". Maybe my intuition is wrong, but I thought that we'd want to argue about as many nodes as we can fit in a tree of height $h$, show that it is still balanced, and show that it's $\log n$. Why is that reasoning incorrect?
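The point of taking the *minimum* is that it turns "height $h$" into a guaranteed lower bound on $n$: if even the sparsest AVL tree of height $h$ has more than $2^{h/2}$ nodes, then any AVL tree with $n$ nodes must have height below $2\log_2 n$. A quick check of the recurrence and the bound (my own sketch):

```python
import math

# Minimum number of nodes n_h in an AVL tree of height h:
# n_0 = 1, n_1 = 2, n_h = 1 + n_{h-1} + n_{h-2}
n = [1, 2]
for h in range(2, 31):
    n.append(1 + n[h - 1] + n[h - 2])

# Check the bound h < 2 log2(n_h) from the argument above (for h >= 1)
for h in range(1, 31):
    assert h < 2 * math.log2(n[h])

print(n[10])   # 232: even the sparsest AVL tree of height 10 has 232 nodes
```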
Chow rings, decomposition of the diagonal, and the topology of families The present book originates from lectures that Claire Voisin delivered on topics related to algebraic cycles on complex algebraic varieties. The volume is intended for both students and researchers, and presents a survey of the geometric methods developed in the last thirty years to understand the famous Bloch-Beilinson conjectures. The book starts with a nice and comprehensive Introduction to the topics treated in the coming chapters. Chapter 2 gives a review of the theory of Chow groups and Chow motives, and the theory of Hodge structures and mixed Hodge structures, where the Hodge Conjecture and generalized Hodge Conjecture are introduced and put into context. In Chapter 3, one of the central objects of the book is studied, the Chow class of the diagonal of $X\times X$ for a variety $X$. A result on the decomposition of the diagonal, depending on the size of Chow groups, is reviewed. The generalized Bloch conjecture is a converse statement, saying that if the transcendental cohomology of $X$ is supported on a closed algebraic set of codimension $\geq c$, then for any $i\leq c-1$, the map $CH_i(X)_{\mathbb Q}\to H^{2n-2i}(X,{\mathbb Q})$ is injective. This conjecture is a central point for the results treated in the book. Chapter 4 is devoted to the study of Chow groups for complete intersections, and the equivalence of Bloch and Hodge conjectures for general complete intersections. In Chapter 5, the author studies the small diagonal in $X\times X \times X$, which is the appropriate object to understand the ring structure on the Chow and cohomology groups. In this regard, a very particular property satisfied by the Chow ring of K3 surfaces is shown, which leads to conjectures for hyper-Kähler manifolds. The final chapter is devoted to analysing Chow groups with integer coefficients.
It is known that the Hodge Conjecture fails for integer coefficients, a fact that is reviewed by extracting torsion invariants associated to complex cobordism groups. This is a dense and very thorough book that reports some of the exciting discoveries that Claire Voisin has made in the study of algebraic cycles. There is a rich collection of ideas as well as detailed machinery with which to attack difficult problems in the field. Submitted by Vicente Munoz | 4 / May / 2017
Since you are having trouble finding the area in terms of median length and cosine of vertex angle, I suggest you do it differently. The shape of an isosceles triangle, up to congruence, has two degrees of freedom. Choose two measurable objects that determine the triangle. Then find expressions for the other relevant properties, namely the length of the median, the area, and the cosine of the vertex angle. Then set the expression for the median length equal to a constant and solve for one of your original objects in terms of the other. Substitute that into the expression for area, and you now have an expression for area in terms of only one variable. Now use calculus to maximize that area. Substitute that into your expression for the cosine of the vertex angle, and you are done. I left that explanation very general, because I can think of many ways to define an isosceles triangle in terms of two variables. Here is one way that more directly uses the cosine of the vertex angle. I have placed the isosceles triangle on a Cartesian coordinate system, with the apex vertex at the origin and one of the equal sides on the positive $x$-axis. We can let $c$ be the length of the equal sides and $\alpha$ be the measure of the apex angle. Then it is easy to find expressions for the coordinates of the vertices, midpoint of one side, median length, altitude, and triangle area. You can continue from there. In your comments you say you want your two variables that define the isosceles triangle to be $c$, the length of the two equal sides, and $b$, the length of the other side. That does also work, though that does not directly include what your final answer will be, the cosine of the apex angle. As you wrote, we have $$m^2=\frac{c^2+2b^2}{4}$$ We can easily solve this for $c$ and we get $$c^2=4m^2-2b^2$$ Your area formula $A=\frac 12ch$ is not useful here, since finding $h$ directly is difficult for you. Instead, use the base and height from my second diagram.
The base is obviously $b$, and side $c$ is the hypotenuse of a right triangle with sides $\frac b2$ and $h'$, where $h'$ is the altitude on side $\overline{AB}$. We then find that $$h'=\sqrt{c^2-\frac 14b^2}$$ We then get for the area of the triangle, $$\begin{align}A&=\frac 12bh' & \\[2 ex] &= \frac 12b\sqrt{c^2-\frac 14b^2} & (\text{substituted for }h')\\[2 ex] &= \frac 12b\sqrt{(4m^2-2b^2)-\frac 14b^2} & (\text{substituted for }c^2)\\[2 ex] &= \frac 12b\sqrt{4m^2-\frac 94b^2} \\[2 ex] &= \frac 14b\sqrt{16m^2-9b^2}\end{align}$$ The median, $m$, is a constant, so we finally have a formula for the area in terms of only one variable, the side $b$. You should be able to continue from here, finding the value of $b$ that maximizes the area (or the square of the area) and finding an expression for $\cos\alpha$ in terms of $b$ then in terms of just constants.
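Continuing the sketch numerically (my own check, with $m=1$): a grid search over $A(b)=\frac14 b\sqrt{16m^2-9b^2}$ lands at $b^2=\frac{8m^2}{9}$, i.e. $b=\frac{2\sqrt 2}{3}m$, which via $c^2 = 4m^2-2b^2$ and the law of cosines $b^2 = 2c^2(1-\cos\alpha)$ suggests $\cos\alpha = \frac45$.

```python
import math

m = 1.0  # fixed median length

def area(b):
    """A(b) = b * sqrt(16 m^2 - 9 b^2) / 4, from the derivation above."""
    return b * math.sqrt(max(16 * m * m - 9 * b * b, 0.0)) / 4

# Grid search over the feasible range 0 < b <= 4m/3
bs = [i * (4 * m / 3) / 100_000 for i in range(1, 100_001)]
b_best = max(bs, key=area)

c2 = 4 * m * m - 2 * b_best * b_best        # c^2 = 4 m^2 - 2 b^2
cos_alpha = 1 - b_best * b_best / (2 * c2)  # law of cosines at the apex

print(b_best)      # ~0.9428 = 2*sqrt(2)/3
print(cos_alpha)   # ~0.8
```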
This question is given in AMTI second level contest. Find all pairs of naturals $(a,b)$ such that $a^b-b^a=3$. My try: One such pair is $(4,1)$. Are there any other pairs? Please help me. Taking mod 2 we can easily see that exactly one of them is even. Case 1. $a=2m$: $$(2m)^b-b^{2m}=3$$ If $b\geq3$, $(2m)^b\equiv 0\pmod 8$, so $$-b^{2m}\equiv 3\pmod 8$$ $$b^{2m}\equiv 5\pmod 8$$ but no square is 5 mod 8, so no solution; hence $b=1$ $$2m-1=3$$ $$m=2,a=4$$ Case 2. $b=2m$: $$a^{2m}-(2m)^{a}=3$$ If $a\geq3$, $(2m)^a\equiv 0\pmod 8$, so $$a^{2m}\equiv 3\pmod 8$$ but no square is 3 mod 8, so no solution; hence $a=1$ $$1-2m=3$$ but this has no natural solution The only solution is $(4,1)$
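A brute-force check (mine, not part of the answer) that $(4,1)$ is the only solution in a small box, consistent with the mod-8 argument:

```python
# Search a^b - b^a == 3 over a small range of naturals
solutions = [(a, b) for a in range(1, 41) for b in range(1, 41)
             if a ** b - b ** a == 3]
print(solutions)   # [(4, 1)]
```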
Tagged: ideal Problem 624 Let $R$ and $R’$ be commutative rings and let $f:R\to R’$ be a ring homomorphism. Let $I$ and $I’$ be ideals of $R$ and $R’$, respectively. (a) Prove that $f(\sqrt{I}\,) \subset \sqrt{f(I)}$. (b) Prove that $\sqrt{f^{-1}(I’)}=f^{-1}(\sqrt{I’})$ (c) Suppose that $f$ is surjective and $\ker(f)\subset I$. Then prove that $f(\sqrt{I}\,) =\sqrt{f(I)}$ Problem 526 A ring is called local if it has a unique maximal ideal. (a) Prove that a ring $R$ with $1$ is local if and only if the set of non-unit elements of $R$ is an ideal of $R$. (b) Let $R$ be a ring with $1$ and suppose that $M$ is a maximal ideal of $R$. Prove that if every element of $1+M$ is a unit, then $R$ is a local ring. Problem 525 Let \[R=\left\{\, \begin{bmatrix} a & b\\ 0& a \end{bmatrix} \quad \middle | \quad a, b\in \Q \,\right\}.\] Then the usual matrix addition and multiplication make $R$ a ring. Let \[J=\left\{\, \begin{bmatrix} 0 & b\\ 0& 0 \end{bmatrix} \quad \middle | \quad b \in \Q \,\right\}\] be a subset of the ring $R$. (a) Prove that the subset $J$ is an ideal of the ring $R$. (b) Prove that the quotient ring $R/J$ is isomorphic to $\Q$. Problem 524 Let $R$ be the ring of all $2\times 2$ matrices with integer coefficients: \[R=\left\{\, \begin{bmatrix} a & b\\ c& d \end{bmatrix} \quad \middle| \quad a, b, c, d\in \Z \,\right\}.\] Let $S$ be the subset of $R$ given by \[S=\left\{\, \begin{bmatrix} s & 0\\ 0& s \end{bmatrix} \quad \middle | \quad s\in \Z \,\right\}.\] (a) True or False: $S$ is a subring of $R$. (b) True or False: $S$ is an ideal of $R$. Problem 432 (a) Let $R$ be an integral domain and let $M$ be a finitely generated torsion $R$-module. Prove that the module $M$ has a nonzero annihilator. In other words, show that there is a nonzero element $r\in R$ such that $rm=0$ for all $m\in M$. Here $r$ does not depend on $m$.
(b) Find an example of an integral domain $R$ and a torsion $R$-module $M$ whose annihilator is the zero ideal. Problem 431 Let $R$ be a commutative ring and let $I$ be a nilpotent ideal of $R$. Let $M$ and $N$ be $R$-modules and let $\phi:M\to N$ be an $R$-module homomorphism. Prove that if the induced homomorphism $\bar{\phi}: M/IM \to N/IN$ is surjective, then $\phi$ is surjective. Problem 417 Let $R$ be a ring with $1$ and let $M$ be an $R$-module. Let $I$ be an ideal of $R$. Let $M’$ be the subset of elements $a$ of $M$ that are annihilated by some power $I^k$ of the ideal $I$, where the power $k$ may depend on $a$. Prove that $M’$ is a submodule of $M$.
If the wave function $\psi\left( x,t\right)$ is a solution of the spinless time-dependent Schrödinger equation, $$ i\hbar\frac{\partial}{\partial t}\psi\left( x,t\right) =\left[ -\frac {\hbar^{2}}{2m}\nabla^{2}+V\left( \mathbf{r}\right) \right] \psi\left( x,t\right) $$ then $\psi^{\ast}\left( x,-t\right)$ is also a solution $$ i\hbar\frac{\partial}{\partial t}\psi^{\ast}\left( x,-t\right) =\left[ -\frac{\hbar^{2}}{2m}\nabla^{2}+V\left( \mathbf{r}\right) \right] \psi^{\ast}\left( x,-t\right) $$ and can be defined as the time-reversed wave function of $\psi\left( x,t\right)$ $$ \psi_{r}\left( x,t\right) =\psi^{\ast}\left( x,-t\right) $$ However, in many discussions of the time-reversal operation, the time-reversed wave function $\psi_{r}\left( x,t\right)$ is obtained by applying the time reversal operator $K$, which takes the complex conjugate of the wave function, $$ \psi_{r}\left( x,t\right) =K\psi\left( x,t\right) =\psi^{\ast}\left( x,t\right) $$ So my question is, which one is the time-reversed wave function: $\psi^{\ast }\left( x,t\right)$ or $\psi^{\ast}\left( x,-t\right)$? The general expression for the time-reversal operator is $T=UK$ (Eq. (4.4.14) in Modern Quantum Mechanics by J. J. Sakurai), where $U$ is a unitary operator and $K$ is the complex conjugation operator. For the spinless case, one can choose $U=1$, so $T=K$.
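For the record, the second equation in the question follows from the first in two standard steps (my sketch): conjugate the equation, then substitute $t \to -t$.

```latex
\text{Conjugating (both } V \text{ and the kinetic term are real):}\quad
-i\hbar\,\partial_t \psi^{*}(x,t)
  = \Big[-\tfrac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})\Big]\psi^{*}(x,t).
\qquad
\text{Substituting } t \to -t \text{ flips the sign of } \partial_t,
\text{ cancelling the sign from conjugating } i:\quad
i\hbar\,\partial_t \psi^{*}(x,-t)
  = \Big[-\tfrac{\hbar^{2}}{2m}\nabla^{2} + V(\mathbf{r})\Big]\psi^{*}(x,-t).
```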
The covariance formula is a statistical formula used to assess the relationship between two variables. In simple words, covariance is a statistical measurement of how the two variables vary together. The covariance indicates how two variables are related and helps to determine whether the two variables change together. The covariance is denoted Cov(X,Y), and the formulas for covariance are given below. Formulas for Covariance (Population and Sample) Notation in Covariance Formulas x i = data value of x y i = data value of y x̄ = mean of x ȳ = mean of y N = number of data values. Relation Between Correlation Coefficient and Covariance Formulas \(Correlation = \frac{Cov(x,y)}{\sigma_x \sigma_y}\) Here, Cov(x,y) is the covariance between x and y while σ x and σ y are the standard deviations of x and y. Using the above formula, the correlation coefficient formula can be derived from the covariance and vice versa. Example Question Using the Covariance Formula Question: The table below describes the rate of economic growth (x i) and the rate of return on the S&P 500 (y i). Using the covariance formula, determine whether economic growth and S&P 500 returns have a positive or inverse relationship. Before you compute the covariance, calculate the mean of x and y. Economic Growth % (x i) S&P 500 Returns % (y i) 2.1 8 2.5 12 4.0 14 3.6 10 x = 2.1, 2.5, 4.0, and 3.6 (economic growth) y = 8, 12, 14, and 10 (S&P 500 returns) Find x̄ and ȳ. Solution: x̄ = \(\frac{\sum x_{i}}{n}\) x̄ = \(\frac{2.1+2.5+4+3.6}{4}\) x̄ = \(\frac{12.2}{4}\) x̄ = 3.05 ȳ = \(\frac{\sum y_{i}}{n}\) ȳ = \(\frac{8+12+14+10}{4}\) ȳ = \(\frac{44}{4}\) ȳ = 11 Now, substitute these values into the covariance formula to determine the relationship between economic growth and S&P 500 returns.
Cov(X,Y) = \( \frac{\sum (x_{i}-\overline{x})(y_{i}-\overline{y})}{n-1}\) (the sample covariance divides by n − 1) Cov(X,Y) = \( \frac{(-0.95)(-3)+(-0.55)(1)+(0.95)(3)+(0.55)(-1)}{3}\) Cov(X,Y) = \( \frac{2.85-0.55+2.85-0.55}{3}\) Cov(X,Y) = \( \frac{4.6}{3}\) Cov(X,Y) ≈ 1.53 Since the covariance is positive, economic growth and S&P 500 returns have a positive relationship: they tend to move together.
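The arithmetic in the worked example can be double-checked with a few lines of Python (a sketch using only the standard library; the variable names are mine):

```python
# Numerical check of the worked example: sample covariance, dividing by n - 1.
x = [2.1, 2.5, 4.0, 3.6]     # economic growth %
y = [8.0, 12.0, 14.0, 10.0]  # S&P 500 returns %

n = len(x)
x_bar = sum(x) / n           # 3.05
y_bar = sum(y) / n           # 11.0

cov_xy = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / (n - 1)
print(round(cov_xy, 2))      # 1.53 -> positive, the variables move together
```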
The "x" axis you refer to is the frequency axis. To answer your second question first: the frequency axis is typically given as $\omega$, and waveforms as functions of $\omega$ can be represented as real and imaginary components given by the cosines and sines, or more concisely as magnitude and phase as given by the complex exponential, since $$Ae^{j\omega t} = A\cos(\omega t) + j A\sin(\omega t) = I(t) + j Q(t)$$ Either is a representation of a complex function having real and imaginary components. To represent the frequency spectrum of a time-domain function, we first map the function to the frequency domain with the Fourier Transform, which correlates the time-domain function of interest with these basis functions (either cosines and sines, or more simply the complex exponential, each with magnitude 1). Once in the frequency domain, the result will be complex, so two plots are typically used, since it takes two real numbers to represent one complex number. These can be plots of magnitude vs frequency and phase vs frequency, or alternatively plots of the real component (I) vs frequency and the imaginary component (Q) vs frequency. When the functions are observed in the time domain (meaning how the function changes versus time instead of versus frequency as used above), we can represent them, as you suggest, in terms of sine and cosine components in quadrature. This specifically is a phasor representation; for more detail, see my explanation at the end, where I use the phasor representation to help explain negative frequencies. (There is no reason we can't use phasors to show functions in the frequency domain as well; it would be a two-dimensional plot showing the magnitude and phase versus frequency instead of versus time. How it is plotted really comes down to the best way to convey the information for the function at hand.)
The basis function for the complex exponential spectrum is generally given as $e^{j\omega t}$, as used by the Fourier Transform (which is a correlation to this basis function at any given $\omega$, since correlation in general is multiply-and-integrate). Generally the Fourier Transform is given as: $$FT\{g(t)\} = G(\omega) = \int_{-\infty}^{+\infty}g(t)e^{-j\omega t}dt$$ Note that since $\omega$ is a continuous variable in this context (we are solving for $G$ as a function of $\omega$), there is an infinite family of basis functions; the FT lives in an infinite-dimensional space. DIRECT SOLUTION, k is the independent variable If we apply that in this case, with $\omega$ and $k$ as independent variables, we get the following result: $$FT\bigg\{\frac{1}{2}e^{jk\omega t} + \frac{1}{2}e^{-jk\omega t}\bigg\} = G(\omega)$$ $$= \frac{1}{2}\int_{-\infty}^{+\infty}e^{jk\omega t}e^{-j\omega t}dt + \frac{1}{2}\int_{-\infty}^{+\infty}e^{-jk\omega t}e^{-j\omega t}dt$$ The first term under the integral converges to an impulse (infinite height, area $1$, so $1/2$ after being multiplied) when $k = +1$, as does the second term when $k = -1$. For all other $k$ the result is zero.
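The "correlate with a basis function" description has an exact discrete analogue: each DFT bin $X[k]$ is the multiply-and-sum of the signal against $e^{-j2\pi kn/N}$. A sketch (assuming NumPy; the random test signal is an arbitrary choice of mine) builds the DFT by explicit correlation and compares it with the library FFT:

```python
import numpy as np

# The DFT as an explicit correlation with complex-exponential basis
# functions: X[k] = sum_n x[n] * exp(-j*2*pi*k*n/N).
rng = np.random.default_rng(0)
x = rng.standard_normal(64)

N = len(x)
n = np.arange(N)
X_corr = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N))
                   for k in range(N)])

# The FFT is just a fast algorithm for the same definition.
assert np.allclose(X_corr, np.fft.fft(x))
print("max error:", np.max(np.abs(X_corr - np.fft.fft(x))))
```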
The amplitudes represent the areas of the impulses given and show that the Fourier Transform can be written as a function of $k$ alone in this case (since there is no dependence on $\omega$): $$G(k) = \frac{1}{2}\delta(k+1) + \frac{1}{2}\delta(k-1)$$ We could show this as a surface plot of the magnitude for all values of $k$ and $\omega$ as shown below, with the dashed line representing infinity, which is where an impulse would occur versus $k$ for any fixed value of $\omega$: Thus for all $\omega$ we would obtain the following magnitude spectrum as a function of $k$ alone: product of kw as the independent variable If we decide to make the frequency axis the product $k\omega$ (since the frequency of the cosine function is given by that product), we can simply multiply the result given above by $\omega$ to get the following magnitude spectrum: We can similarly choose to make $\omega$ the independent variable, which would result in impulses at $\omega = \pm\frac{\Omega_o}{k}$, where $\Omega_o = k\omega$ denotes the fixed frequency of the cosine. All of this agrees that it doesn't matter what $\omega$ is: in all cases the result is an impulse in the "k domain" when $k = \pm 1$. Further Background on the FT of the Cosine Function The following may help provide further intuition for those familiar with the Fourier Transform of a cosine function. Generally we describe the Fourier Transform of $\cos(\omega t)$ as two impulses in the frequency domain $\omega$, but this applies to a specific value of $\omega$, so the time-domain function of interest is given as $\cos(\omega_o t)$ where $\omega_o$ represents a constant value. The Fourier Transform of $\cos(\omega_o t)$ converges to two impulses in the frequency domain, but the Fourier Transform of $\cos(\omega t)$ does not converge; it is infinite for all frequencies $\omega$!
What the amplitudes represent is clearest in the case where the frequency term $k\omega$ is held constant, and we define another variable such as $\Omega$ to represent all possible values of $k\omega$, as in that case the Fourier Transform (FT) of a cosine function is simply two impulses in frequency, one at a positive frequency and the other at a negative frequency. To show this first in general, I will use capital omega $\Omega$ to avoid confusion with the $\omega$ in your formula, representing in general any angular frequency ($\Omega = 2\pi f$), and $\Omega_o$ to represent a specific frequency. $$FT\{\cos(\Omega_o t)\} = G(\Omega) = \frac{1}{2}\delta(\Omega-\Omega_o) + \frac{1}{2}\delta(\Omega+\Omega_o)$$ And we can see how this relates directly to the general relationship from Euler's identity expressing the real sinusoidal cosine in terms of two complex exponential frequencies: $$\cos(\Omega_o t)= \frac{1}{2}e^{j\Omega_o t} + \frac{1}{2}e^{-j\Omega_o t}$$ And with that we see how, when we use the FT to correlate with $e^{j\Omega t}$, the result is two impulses: when we set the frequency to either of the values $\pm \Omega_o$, the integral given by the FT above goes to infinity since $e^{j\Omega_o t}e^{-j\Omega_o t}=1$, but it integrates to zero anywhere else. As a plot vs frequency this would appear as follows, showing two impulses at the positive and negative frequency $\Omega_o$.
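A discrete illustration of this two-impulse result (a sketch assuming NumPy; the record length and bin frequency are arbitrary choices of mine): an integer number of cosine cycles concentrates the normalized spectrum in exactly two bins, one at the positive and one at the negative frequency, each with weight $1/2$:

```python
import numpy as np

# Discrete analogue of FT{cos(W0 t)} = (1/2)d(W-W0) + (1/2)d(W+W0):
# an integer number of cycles puts all the energy into exactly two
# FFT bins, at +f0 and -f0 (bin N - f0), each with weight 1/2.
N = 256
f0 = 10                     # cycles per record (integer -> no leakage)
t = np.arange(N) / N
X = np.fft.fft(np.cos(2 * np.pi * f0 * t)) / N  # normalized spectrum

peaks = np.flatnonzero(np.abs(X) > 1e-9)
print(peaks)                # bins +f0 and N - f0 (i.e. -f0)
print(np.abs(X[peaks]))     # both ~0.5 (area 1/2 each), zero phase
```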
This is a magnitude plot, and typically, to be complete, a phase plot would also be shown (the phase versus $\omega$), but in this case it is trivial since the phase is $0$ at both impulses: Or with $k\omega$ variables, showing the magnitude spectrum that would result with both $k$ and $\omega$ as constants: This relationship between the frequency and time waveforms as I have shown them applies when $\Omega_o$ is a constant; otherwise we would be dealing with arbitrary modulation functions and would need to know the modulation waveform specifically (how the independent variable changes with time) to determine the frequency spectrum. Negative Frequencies? To further understand the meaning of a negative frequency, which is applicable to angular frequencies by definition, note that a negative frequency represents a single exponential frequency given as $e^{-j\omega_o t}$, with $\omega_o$ in this case any positive number. It may be helpful to visualize this with reference to the relationship I gave in the first paragraph, showing $e^{j\omega t}$ in terms of I and Q components. To give negative frequencies a physical interpretation, we can plot $e^{j\omega t}$ as a function of its real and imaginary components versus time, resulting in a phasor representation: specifically, the plot maps out a unit circle and rotates around the origin at rate $\omega$, as depicted below. Here we show the angle increasing in a positive direction with time, so this represents a positive frequency (a negative frequency would be depicted by having the phasor rotate in a clockwise direction instead): If this is still not clear, it may be helpful to know that the form $Ae^{j\theta}$ is identical to $A\angle \theta$. If you take the Fourier Transform of a specific exponential frequency with frequency term $-\omega_o$, given as $e^{-j\omega_o t}$, the result is a single impulse at that frequency: $\delta(\omega+\omega_o)$.
(While, as we showed above, the cosine function has two exponential frequencies: a positive and a negative.) The Fourier transform, when presented on a graph with a positive and negative axis, represents frequencies in the exponential form, so each impulse shown in the frequency domain is an $Ae^{j\omega t}$ in the time domain. When (and only when) the plot gives positive and negative frequencies where the negative frequencies are the complex conjugate of the positive frequencies, we can also represent the same spectrum with just a positive frequency axis, as that is the only way the time-domain signal can be real (using a basis function of cosine as you described). If the negative and positive frequencies are not related by a complex conjugate, then the time-domain signal must be complex, and a positive and negative frequency axis is required to represent the spectrum. This is clear from the plot below showing both $Ae^{j\omega t}$ and $Ae^{-j\omega t}$ as complex phasors rotating around the origin with time (magnitude $A$ and angle linearly proportional to time as $\omega t$, which is by definition a constant frequency since $\omega = d\phi/dt$). The dotted line shows the result of summing those two phasors at any instantaneous time $t$: as long as the two phasors are related by a complex conjugate (equal magnitude, opposite phase), the sum lies on the real axis and thus is a real function with no imaginary components. The two spinning phasors are $e^{j\omega t}$ and $e^{-j\omega t}$, and their summation, which stays on the real axis, is a (real) cosine function.
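The phasor-sum argument can be checked numerically (a small sketch; the amplitude and rate are arbitrary values of mine):

```python
import numpy as np

# Sum of a positive-frequency phasor and its complex conjugate (the
# matching negative frequency) stays on the real axis: a real cosine.
A, w = 2.0, 3.0
t = np.linspace(0.0, 5.0, 1001)

s = A * np.exp(1j * w * t) + A * np.exp(-1j * w * t)

assert np.allclose(s.imag, 0.0)                    # no imaginary part
assert np.allclose(s.real, 2 * A * np.cos(w * t))  # = 2A cos(wt)
```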
Dilation Property of the Fourier Transform Not specific to your question, but worth mentioning since it is occurring here: the dilation property of the Fourier Transform (scaling the independent variable by a constant term $k$), which has the following general relationship: $$F\{g(kt)\} = \frac{1}{|k|}G\bigg(\frac{f}{k}\bigg)$$ We could apply this property directly by transforming the known Fourier Transform of $\cos(\omega_o t)$ as another approach to arriving at a solution, although in this case it would be trivial to take the Fourier Transform of the $\cos(k\omega_o t)$ function directly.
The standard deviation is as applicable here as anywhere else: it gives useful information about the dispersion of the data. In particular, the sd divided by the square root of the sample size is one standard error: it estimates the dispersion of the sampling distribution of the mean. Let's calculate: $$3.2\% / \sqrt{10000} = 0.032\% = 0.00032.$$ That's tiny; far smaller than the $\pm 0.50\%$ precision you seek. Although the data are not Normally distributed, the sample mean is extremely close to Normally distributed because the sample size is so large. Here, for instance, is a histogram of a sample with the same characteristics as yours and, at its right, the histogram of the means of a thousand additional samples from the same population. It looks very close to Normal, doesn't it? Thus, although it appears you are bootstrapping correctly, bootstrapping is not needed: a symmetric $(100 - \alpha)\%$ confidence interval for the mean is obtained, as usual, by multiplying the standard error by an appropriate percentile of the standard Normal distribution (to wit, $Z_{1-\alpha/200}$) and moving that distance to either side of the mean. In your case, $Z_{1-\alpha/200} = 2.5758$, so the $99\%$ confidence interval is $$\left(0.977 - 2.5758(0.032) / \sqrt{10000},\ 0.977 + 2.5758(0.032) / \sqrt{10000}\right) \\ = \left(97.62\%, 97.78\%\right).$$ A sufficient sample size can be found by inverting this relationship to solve for the sample size. Here it tells us that you need a sample size of around $$(3.2\% / (0.5\% / Z_{1-\alpha/200}))^2 \approx 272.$$ This is small enough that we might want to re-check the conclusion that the sampling distribution of the mean is Normal. I drew a sample of $272$ from my population and bootstrapped its mean (for $9999$ iterations): Sure enough, it looks Normal. In fact, the bootstrapped confidence interval of $(97.16\%, 98.21\%)$ is almost identical to the Normal-theory CI of $(97.19\%, 98.24\%)$.
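The normal-theory interval and the sample-size inversion can be reproduced in a few lines of Python (a sketch using only the standard library; the numbers match the ones above up to rounding):

```python
from statistics import NormalDist
from math import sqrt

# Normal-theory 99% CI for the mean, reproducing the numbers above.
mean, sd, n = 0.977, 0.032, 10_000
alpha = 0.01

z = NormalDist().inv_cdf(1 - alpha / 2)  # ~2.5758
se = sd / sqrt(n)                        # 0.00032
lo, hi = mean - z * se, mean + z * se
print(f"{lo:.4f}, {hi:.4f}")             # 0.9762, 0.9778

# Invert for the sample size giving a CI half-width of 0.5%:
n_needed = (sd / (0.005 / z)) ** 2
print(round(n_needed))                   # 272
```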
As these examples show, the absolute sample size determines the accuracy of estimates, rather than its proportion of the population size. (An extreme but intuitive example is that a single drop of seawater can provide an accurate estimate of the concentration of salt in the ocean, even though that drop is such a tiny fraction of all the seawater.) For your stated purposes, obtaining a sample of $10000$ (which requires more than $36$ times as much work as a sample of $272$) is overkill.

R code to perform these analyses and plot these graphics follows. It samples from a population having a Beta distribution with a mean of $0.977$ and SD of $0.032$.

set.seed(17)
#
# Study a sample of 10,000.
#
Sample <- rbeta(10^4, 20.4626, 0.4817)
hist(Sample)
hist(replicate(10^3, mean(rbeta(10^4, 20.4626, 0.4817))), xlab="%", main="1000 Sample Means")
#
# Analyze a sample designed to achieve a CI of width 1%.
#
(n.sample <- ceiling((0.032 / (0.005 / qnorm(1-0.005)))^2))
Sample <- rbeta(n.sample, 20.4626, 0.4817)
cat(round(mean(Sample), 3), round(sd(Sample), 3))    # Sample statistics
se.mean <- sd(Sample) / sqrt(length(Sample))         # Standard error of the mean
cat("CL: ", round(mean(Sample) + qnorm(0.005)*c(1,-1)*se.mean, 5))  # Normal CI
#
# Compare the bootstrapped CI of this sample.
#
Bootstrapped.means <- replicate(9999, mean(sample(Sample, length(Sample), replace=TRUE)))
hist(Bootstrapped.means)
cat("Bootstrap CL:", round(quantile(Bootstrapped.means, c(0.005, 1-0.005)), 5))
Electronic Journal of Probability
Electron. J. Probab., Volume 24 (2019), paper no. 34, 25 pp.

Distances between zeroes and critical points for random polynomials with i.i.d. zeroes

Abstract
Consider a random polynomial $Q_n$ of degree $n+1$ whose zeroes are i.i.d. random variables $\xi _0,\xi _1,\ldots ,\xi _n$ in the complex plane. We study the pairing between the zeroes of $Q_n$ and its critical points, i.e. the zeroes of its derivative $Q_n'$. In the asymptotic regime when $n\to \infty $, with high probability there is a critical point of $Q_n$ which is very close to $\xi _0$. We localize the position of this critical point by proving that the difference between $\xi _0$ and the critical point has approximately complex Gaussian distribution with mean $1/(nf(\xi _0))$ and variance of order $\log n \cdot n^{-3}$. Here, $f(z)= \mathbb E [\frac 1 {z-\xi _k}]$ is the Cauchy–Stieltjes transform of the $\xi _k$’s. We also state some conjectures on critical points of polynomials with dependent zeroes, for example the Weyl polynomials and characteristic polynomials of random matrices.

Article information
Dates: Received 5 July 2018; Accepted 17 March 2019; First available in Project Euclid: 9 April 2019
Permanent link: https://projecteuclid.org/euclid.ejp/1554775413
DOI: doi:10.1214/19-EJP295
Mathematical Reviews (MathSciNet): MR3940764; Zentralblatt MATH: 07055672
Subjects: Primary 30C15 (zeros of polynomials, rational functions, and other analytic functions); Secondary 60G57 (random measures), 60B10 (convergence of probability measures)

Citation: Kabluchko, Zakhar; Seidel, Hauke. Distances between zeroes and critical points for random polynomials with i.i.d. zeroes. Electron. J. Probab. 24 (2019), paper no. 34, 25 pp. doi:10.1214/19-EJP295. https://projecteuclid.org/euclid.ejp/1554775413
Fix $p\in [1, 2)$ and denote the $p$-capacity of a compact set $K$ as $p$-$\text{cap}(K)$, i.e., \begin{equation} p\text{-cap}(K)\equiv\inf\left\{\int_{\mathbb{R}^2}|D\varphi|^p\ \mathrm{d}x\ \Big|\ \varphi\in C_c^{\infty}(\mathbb{R}^2),\ \varphi\geq 0\text{ and } \varphi=1\text{ on }K\right\}. \end{equation}If $U$ is open, then \begin{equation} p\text{-cap}(U)\equiv\sup\{p\text{-cap}(K)\ |\ K\subset U\text{ and } K \text{ is compact}\}, \end{equation}and finally, \begin{equation} p\text{-cap}(A)\equiv\inf\{p\text{-cap}(U)\ |\ A\subset U\text{ and } U \text{ is open}\}\quad(A\subset\mathbb{R}^2). \end{equation} We say that a Radon measure $\mu$ is diffuse with respect to $p\text{-cap}$ if \begin{equation} p\text{-cap}(A)=0\implies \mu(A)=0, \end{equation}and $\mu$ is concentrated with respect to $p\text{-cap}$ if there is a Borel set $A\subset\mathbb{R}^2$ such that \begin{equation} p\text{-cap}(A)=\mu(\mathbb{R}^2\setminus A)=0. \end{equation} Let $\mu_m$ be a sequence of finite Radon measures on $\mathbb{R}^2$ that converge weakly to the finite Radon measure $\mu$, that is, \begin{equation} \lim_{m\rightarrow\infty}\int_{\mathbb{R}^2}\varphi\ \mathrm{d}\mu_m=\int_{\mathbb{R}^2} \varphi\ \mathrm{d}\mu\quad (\varphi\in C_c(\mathbb{R}^2)). \end{equation}Assume also that $\mu_m$ is diffuse with respect to $p\text{-cap}$ for each $m\in \mathbb{N}$. The measure $\mu$ can be decomposed into two parts: \begin{equation} \mu\equiv \mu_d+\mu_c, \end{equation}where $\mu_d$ is a Radon measure that is diffuse with respect to $p\text{-cap}$ and $\mu_c$ is a Radon measure that is concentrated with respect to $p\text{-cap}$. If $\mu_c(\mathbb{R}^2)\neq 0$, there exists a Borel set $F$ such that \begin{equation}\tag 1 p\text{-cap}(F)=\mu_c(\mathbb{R}^2\setminus F)=0. \end{equation} Question: Does there exist a Borel set $F$ satisfying (1) and, in addition, $\mu(\partial F)=0$?
Details
Title: On the attached primes and Shifted Localization Principle for local cohomology modules
Field: Mathematics
Author: Tran Nguyen An
Publisher / Journal: Volume 4, No. 20, 2013
ISSN/ISBN: 1005-3867 (SCIE)
Abstract: Let $(R,\mathfrak{m})$ be a Noetherian local ring and let $M$ be a finitely generated $R$-module. For an integer $i\geq 0$, the Artinian $i$-th local cohomology module $H_{\mathfrak{m}}^i(M)$ is said to satisfy the Shifted Localization Principle if $$\operatorname{Att}_{R_{\mathfrak{p}}} (H_{\mathfrak{p} R_{\mathfrak{p}}}^{i-\dim R/\mathfrak{p}}(M_{\mathfrak{p}})) =\{ \mathfrak{q} R_{\mathfrak{p}} \mid \mathfrak{q} \in \operatorname{Att}_R( H_{\mathfrak{m}}^{i}(M)),\ \mathfrak{q}\subseteq \mathfrak{p}\}\ \text{for all}\ \mathfrak{p} \in \operatorname{Spec}(R).$$ In this paper we study the attached primes of $H_{\mathfrak{m}}^i(M)$ and give some conditions for $H_{\mathfrak{m}}^i(M)$ to satisfy the Shifted Localization Principle. The author is supported by the Vietnam National Foundation for Science and Technology Development (Nafosted).
Place a coordinate system at $O$ oriented such that $\boldsymbol{r}_i = (x_i,0,z_i)$. The body is rotating with $\boldsymbol{\omega} = (0,0,\Omega)$ and accelerating with $\boldsymbol{\alpha} = (0,0,\dot\Omega)$ about the z axis, and the mass $m_i$ lies on the xz plane. Then the linear momentum of $m_i$ is $$ \boldsymbol{p}_i = -m_i \boldsymbol{r}_i \times \boldsymbol{\omega} = (0,m_i\, \Omega\, x_i,0)$$ The angular momentum of $m_i$ about the origin $O$ is $$ {\boldsymbol{L}_i}_O = \boldsymbol{r}_i \times \boldsymbol{p}_i = (-m_i \Omega x_i z_i,0,m_i \Omega x_i^2) $$ The component you are talking about is along the y axis (perpendicular to the rotation axis and $\boldsymbol{r}_i$) and it is zero. But that does not mean the torque about this axis is zero (actually it guarantees it is not zero). In addition, the component towards P is $\propto \Omega$ and the rotation radius $x_i$. The total force applied on $m_i$ is found from the derivative on a rotating frame with $\Omega$ non-constant: $$ \boldsymbol{F}_i = \frac{{\rm d}}{{\rm d}t} \boldsymbol{p}_i = \left(\frac{\partial \boldsymbol{p}_i}{\partial \Omega}\right) \dot{\Omega} + \boldsymbol{\omega} \times \boldsymbol{p}_i = (-m_i \Omega^2 x_i , m_i \dot{\Omega} x_i ,0) $$ The above is basically the centrifugal force along x and the tangential force along y. The torque about the origin is similarly $$ {\boldsymbol{M}_i}_O = \frac{{\rm d}}{{\rm d}t} {\boldsymbol{L}_i}_O = \left(\frac{\partial {\boldsymbol{L}_i}_O}{\partial \Omega}\right) \dot{\Omega} + \boldsymbol{\omega} \times {\boldsymbol{L}_i}_O = ( -m_i \dot{\Omega} x_i z_i, -m_i \Omega^2 x_i z_i, m_i \dot{\Omega} x_i^2 ) $$ If the body is not rotationally accelerating, then the torque is only about the y axis: ${\boldsymbol{M}_i}_O = (0,-m_i \Omega^2 x_i z_i,0)$. So here is a situation with a moment applied perpendicularly to the rotation axis, and the magnitude of the rotation does not change. I think you have a math error somewhere if you are arriving at a different conclusion.
You can check the above by deriving the 3×3 inertia matrix about $O$ as $${\mathtt{I}_i}_O = m_i \begin{bmatrix} y_i^2+z_i^2 & -x_i y_i & -x_i z_i\\-x_i y_i & x_i^2+z_i^2 & -y_i z_i \\ -x_i z_i & -y_i z_i & x_i^2+y_i^2 \end{bmatrix} $$ You can verify that $${\boldsymbol{M}_i}_O = {\mathtt{I}_i}_O \boldsymbol{\alpha} + \boldsymbol{\omega} \times {\mathtt{I}_i}_O \boldsymbol{\omega},$$ which is Euler's equation of rotational motion. PS. Trying to do 3D dynamics component by component is tedious and error prone.
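If it helps, here is a small numerical check (a sketch assuming NumPy; the mass and geometry values are arbitrary choices of mine) that Euler's equation reproduces the component-by-component torque derived above:

```python
import numpy as np

# Numerical check of M_O = I*alpha + w x (I*w) for a single point mass
# at r = (x, 0, z), rotating about z with rate W and acceleration Wdot.
m, x, z = 2.0, 1.5, 0.7
W, Wdot = 3.0, 0.5

r = np.array([x, 0.0, z])
w = np.array([0.0, 0.0, W])
a = np.array([0.0, 0.0, Wdot])

# Inertia matrix of the point mass about O: m*(|r|^2 * Id - r r^T),
# which expands to the 3x3 matrix given in the text.
I = m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))

M = I @ a + np.cross(w, I @ w)

# Component-by-component torque derived in the text:
M_expected = np.array([-m * Wdot * x * z,
                       -m * W**2 * x * z,
                        m * Wdot * x**2])
assert np.allclose(M, M_expected)
print(M)
```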
Functions: \(f\), \(u\), \(v\) Argument (independent variable): \(x\) Derivatives: \(y’\left( x \right),\) \(f’\left( x \right)\) Constant: \(C\) Real numbers: \(A\), \(\alpha\) Small change in \(y:\) \(\Delta y\) Small change in \(x:\) \(\Delta x\) Differential of the function \(y:\) \(dy\) Differential of the independent variable \(x:\) \(dx\) Consider a function \(y = f\left( x \right)\) and suppose that the independent variable gets an increment \(dx\) at some point \(x\). This increment is called the differential of the independent variable. The function \(y = f\left( x \right)\) has a differential at the point \(x\) if its increment can be represented as the sum of two terms: \(\Delta y = f\left( {x + \Delta x} \right) – f\left( x \right) =\) \( A\Delta x + \alpha ,\) where the coefficient \(A\) is independent of \(\Delta x\) and the value of \(\alpha\) has a higher order of smallness with respect to the increment \(\Delta x\), i.e. \(\alpha /\Delta x \to 0\) as \(\Delta x \to 0\). In the formula above, the principal linear part of the increment is called the differential of the function \(f\left( x \right)\) at the point \(x\) and is denoted by \(dy = A\Delta x\).
In this expression the coefficient \(A\) is equal to the value of the derivative \(f’\left( x \right)\) at the point \(x.\) The differential of the independent variable is equal to its increment: \(dx = \Delta x\) The differential of a function is equal to the derivative of the function times the differential of the independent variable: \(dy = df\left( x \right) =\) \( f’\left( x \right)dx\) Derivative as the quotient of two differentials: \(f’\left( x \right) = \large\frac{{dy}}{{dx}}\normalsize\) The differential of a constant is zero: \(dC = 0\) The differential of the sum of two functions is equal to the sum of their differentials: \(d\left( {u + v} \right) =\) \( du + dv\) The differential of the difference of two functions is equal to the difference of their differentials: \(d\left( {u – v} \right) =\) \( du – dv\) A constant factor can be taken out of the differential: \(d\left( {Cu} \right) = Cdu\) Differential of the product of two functions: \(d\left( {uv} \right) = vdu + udv\) Differential of the quotient of two functions: \(d\left( {{\large\frac{u}{v}}\normalsize} \right) = {\large\frac{{vdu – udv}}{{{v^2}}}\normalsize}\)
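The product and quotient rules above are easy to verify symbolically (a sketch using SymPy; the particular functions \(u = \sin x\) and \(v = e^x\) are arbitrary choices of mine):

```python
import sympy as sp

# Symbolic check of the product and quotient rules for differentials,
# using the defining relation dy = f'(x) dx.
x, dx = sp.symbols('x dx')
u = sp.sin(x)   # any two differentiable functions will do
v = sp.exp(x)

d = lambda f: sp.diff(f, x) * dx   # differential: df = f'(x) dx

assert sp.simplify(d(u * v) - (v * d(u) + u * d(v))) == 0
assert sp.simplify(d(u / v) - (v * d(u) - u * d(v)) / v**2) == 0
print("product and quotient rules verified")
```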
Assume that $R$ is a unital ring or a complex or real (Banach or $C^{*}$) algebra. We define a relation $M$ on $R$ as follows: $$a\;M\; b \;\;\; \text{iff}\;\; a=xy,\;b=yx \;\; \text{for some}\;\; x,y\in R$$ It is a reflexive and symmetric (but not transitive) relation. We define an equivalence relation $\simeq$ on $R$ as follows: $a\simeq b$ if there are $p_{i}\in R,\;i=0,1,\ldots,n$ with $$\begin{cases}a=p_{0},\;\; b=p_{n},&\\ p_{i} \; M\; p_{i+1}.\end{cases}$$ The space of nilpotent elements of $R$, denoted by $N(R)$, is a saturated subset of $R$ with respect to this equivalence relation, while the space of idempotent elements is not necessarily a saturated subset. Notation: $M_{n}(R)$ is the space of $n\times n$ matrices with entries in $R$. The natural mapping $M_{n}(R) \to M_{n+1}(R)$ with $A \mapsto A\oplus 0$ sends nilpotent elements to nilpotent elements. Moreover, the above equivalence relation is preserved under this map. We consider $\bigcup_{n=1}^{\infty} N(M_{n}(R))$, the union of all nilpotent matrices of all sizes. The equivalence relation $\simeq$ has a natural extension to the latter space: $A\simeq B$ if there are natural numbers $k,p$ such that $A\oplus 0_{k} \simeq B\oplus 0_{p}$, where $0_k,0_p$ are the zero matrices of size $k,p$, respectively. This enables us to equip $\bigcup_{n=1}^{\infty} N(M_{n}(R))/\simeq$ with an abelian semigroup structure via the usual matrix sum. (Note that for two nilpotent elements $a,b$ with $ab=ba=0$, $a+b$ is again a nilpotent element. On the other hand, any two elements of the above quotient space have representatives $A,B$ with $AB=BA=0$, because for every $a\in R$ we have $\begin{pmatrix} a&0\\0&0 \end{pmatrix} \simeq \begin{pmatrix} 0&0\\0&a \end{pmatrix}$.) The Grothendieck group of this semigroup is denoted by $NK(R)$. Questions: What is an example of a $C^{*}$-algebra $A$ for which $NK(A)$ is a nontrivial group? Is there a commutative $C^{*}$-algebra $A$ with nontrivial $NK(A)$?
Note 1: The mapping $A\mapsto NK(A)$ is really a functor on the category of rings or algebras. According to Gelfand–Naimark duality, it could be regarded as a functor on the category of compact Hausdorff topological spaces. Note 2: This post is inspired by the construction in algebraic K-theory and the following two posts.
Hello! I am reading an article with the following statement: Let $E\rightarrow X$ be a holomorphic vector bundle. The holomorphic sections of $E$ over a coordinate neighbourhood of $X$ are dense in the set of smooth sections of $E$. I have some knowledge of complex geometry but I am not aware of this fact. For which topologies does this fact hold? Does somebody know a place where I could find a similar statement with a proof? I have already had a look at the textbooks of Huybrechts, Demailly and Griffiths-Harris, but I have not seen a similar statement yet. I have another question that is perhaps related to the previous one (it is in the same article), but the statement hereafter is mine, not really the author's. Again, let $E\rightarrow X$ be a holomorphic vector bundle. Denote by $\mathcal{E}$ the sheaf of holomorphic sections of $E$. Then we have $\Gamma(U,E)=\mathcal{C}^\infty(U,\mathbb{C})\cdot\mathcal{E}(U)$, with $\Gamma(U,E)$ the sheaf of smooth sections of $E$, $\mathcal{C}^\infty(U,\mathbb{C})$ the sheaf of smooth $\mathbb{C}$-valued functions (seen as real smooth maps, I presume), and the dot just the usual multiplication of sections by functions. Is this statement true? Does it hold for any holomorphic vector bundle? Where can I find a proof if it is true? Thank you!
I am really having a terrible time applying Girsanov's theorem to go from the real-world measure $P$ to the risk-neutral measure $Q$. I want to determine the payoff of a derivative based on an asset which is paying dividends; the dividends affect the value of the asset by changing the drift negatively, but aren't considered when calculating the payoff, because the payoff is just derived from the asset prices over time. We begin in the real-world measure $P$. We are interested in some index, $S_t$, which I believe we can treat as a single entity like a stock and which we can observe in the real world. It has volatility $\sigma$. We also know that this index is paying dividends continuously at a rate of $\delta$. The index is described as "following a geometric Brownian motion", which to me says that there is no other drift going on, so I take $\mu$ to be $0 - \delta$. First question: is it normal to assume no other drift? $$dS_t = -\delta S_t\,dt + \sigma S_t\,dW_t$$ where $dW_t$ are standard Wiener increments. So clearly under $P$ our index asset has drift $-\delta$ because it is paying dividends out. If someone actually holds the asset then they can reinvest those dividends, but we are only concerned with the value of the asset in order to determine the payoff of a derivative. So what we want to do now is determine the expected value of the asset under the risk-neutral measure, $Q$. We want this expected value, discounted by the value of a bond earning the risk-free rate, to be a martingale in this measure, so the asset must also drift at the risk-free rate. In this way, an asset under the risk-neutral measure with volatility $0$ is equivalent to a bond. In order to achieve this, we want to change the drift of our process from $-\delta$ under $P$ to $r$ under $Q$. We define a new process $X_t = \theta t + W_t$ where $\theta = \frac{-\delta - r}{\sigma}$. If we now consider the discounted asset price $S_t^* = S_t / B_t = S_t e^{-rt}$, we have the following.
$dS_t^* = (-\delta - r) S_t^*\,dt + \sigma S_t^*\,dW_t$

Substituting $dW_t = dX_t - \theta\,dt$:

$dS_t^* = (-\delta - r) S_t^*\,dt + \sigma S_t^*\,(dX_t - \theta\,dt)$

$dS_t^* = (-\delta - r) S_t^*\,dt + \sigma S_t^*\left(dX_t - \frac{-\delta - r}{\sigma}\,dt\right)$

$dS_t^* = \sigma S_t^*\,dX_t$

where now we have the process $S_t^*$ as a geometric Brownian motion with no drift if we are under $Q$, and with the same drift as before under $P$. All we have done is substitute $dX_t - \theta\,dt$ for $dW_t$, and these are equal by definition. So under $Q$, and without discounting, the process for the asset $S_t$ should follow

$dS_t = r S_t\,dt + \sigma S_t\,dX_t$

The dividend being paid by the asset is thus somehow 'inside' the Brownian motion $X_t$, and that's apparently all we see of it? At this point, I'd like to stop and ask: at what point do we consider ourselves 'under $Q$'? By google-fu, I found an example program in R where they make a binomial approximation to Brownian motion by taking constant-size steps up or down with different probabilities. The relevant portion of code looks like:

n = 2000
t = (0:n)/n  # [0/n, 1/n, ..., n/n]
dt = 1/n
theta = 1
p = 0.5 * (1 - sqrt(dt) * theta)
u = runif(n)  # Random uniform variates
dWP = ((u < .5) - (u > .5))*sqrt(dt)  # increments under P
dWQ = ((u < p) - (u > p))*sqrt(dt)    # increments under Q
WP[1:n+1] = cumsum(dWP)
WQ[1:n+1] = cumsum(dWQ)
XP = WP + theta*t  # Theta exactly offsets the change in measure
XQ = WQ + theta*t  # Now XQ == WP

I see that they are using $p$ instead of 0.5 to make decisions about the sign in the binary discretization, but if we have access to actual random normal variates, is this necessary? Shouldn't it be enough to simply define $\theta$ appropriately as above and then use it and $W_t$ to derive $X_t$ directly? Although, if we do that, then we go from $W$ to $X$, but I don't understand how we got from $P$ to $Q$, if that happened at all. The payoff of our asset, then, is the discounted expectation under the risk-neutral measure.
So we evolve the process $S_t$ under measure $Q$ until expiry time $T$ and calculate the payoff. Repeat this a few thousand times, then take the average and discount it back to $t = 0$ to get the price. I thought all of this seemed pretty reasonable and was making sense, but when I try to implement it, I get nothing like what I expect. Can anyone confirm that the story I just presented about how we arrive at the risk-neutral measure is correct? A more general question: $W_t$ is a martingale under $P$. We shift it and change measures to create $X_t$ and have that be a martingale under $Q$. We then model the asset based on the process $X_t$ with drift $r$. If all we wanted was a process under which the stock drifts at the risk-free rate, what do we gain by changing from $W$ to $X$ and $P$ to $Q$ if they exactly offset each other? Why don't we just use $W_t$? And what happened to the dividend? If the stock process is drifting downward at $-\delta\,dt$ under $P$, what does it really look like under $Q$? It can't just look like it's drifting at the risk-free rate, because then the dividend would have had no effect at all. Further search produced some suggestions that it should be drifting at $r-\delta$, because we assume that the dividends are invested into bonds. Why would we make such an assumption? Is it because we want to consider a portfolio containing the asset and not the asset itself?
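For whoever finds this later, here is a minimal Python sketch of the Monte Carlo recipe described above (all parameters are invented for illustration). It uses the standard convention for a continuous dividend yield: under $Q$ the price itself drifts at $r - \delta$, so that the total return (price appreciation plus the reinvested dividend stream) earns $r$. The payoff is a plain European call:

```python
import numpy as np

# Monte Carlo price of a European call on a dividend-paying index.
# Under Q the *price* drifts at r - delta: the total return (price plus
# reinvested dividends) must earn the risk-free rate r.
# All numerical parameters below are made up for illustration.
S0, K, r, delta, sigma, T = 100.0, 100.0, 0.03, 0.02, 0.2, 1.0
n_paths = 200_000
rng = np.random.default_rng(0)

Z = rng.standard_normal(n_paths)
# exact GBM solution at expiry; no time-stepping is needed when the
# payoff depends only on S_T
ST = S0 * np.exp((r - delta - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
price = np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()
print(round(price, 2))
```

For these made-up parameters the estimate lands close to the closed-form Black-Scholes price with dividend yield (about 8.3).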
Is it right that circular particle accelerators use magnetic fields to deflect the particle beam? Using the simple equation for a charged particle in a magnetic field: $\vec{f}=q\vec{v}\times\vec{B}$ Say the accelerator is currently used for a proton beam. $\vec{f}$ will be directed toward the center of the accelerator (centripetal force). The proton itself is composed of two up quarks and one down quark, which have electric charges: $\frac{2}{3}e$ for one up quark. $-\frac{1}{3}e$ for one down quark. Now if I look at the effect of the magnetic field at the quark level, the force is: $\frac{2}{3}\vec{f}$ for one up quark (centripetal force). $-\frac{1}{3}\vec{f}$ for one down quark (centrifugal force). I imagine it as a centrifugation of the proton: the down quark is pushed to the exterior side of the accelerator while the up quarks are attracted to the center. Is this effect real and taken into account in particle accelerators? What about the mass? The down quark is at least 2 times heavier than the up quark; does the rotation of the proton "separate" the quarks of the proton?
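As far as the bulk motion goes, the proton bends as a single charge $+e$ (the quark-level forces described above are internal, and presumably dwarfed by the QCD binding forces, so they do not separate the quarks). The overall bending radius that follows from $\vec{f}=q\vec{v}\times\vec{B}$ is $r = p/(qB)$ for relativistic momentum $p$, which is easy to sanity-check numerically. The field and momentum below are roughly LHC-like and chosen only for illustration:

```python
# Bending radius of a relativistic proton in a dipole field, from
# f = q v x B  =>  r = p / (qB)  with relativistic momentum p.
e = 1.602176634e-19      # elementary charge [C]
c = 299_792_458.0        # speed of light [m/s]
p_eV = 7e12              # proton momentum, 7 TeV/c (roughly LHC-like)
B = 8.33                 # dipole field [T]

p_SI = p_eV * e / c      # momentum in kg m/s
r = p_SI / (e * B)       # bending radius [m]
print(f"bending radius = {r/1000:.1f} km")
```

For these numbers the radius comes out near 2.8 km, close to the LHC's actual bending radius.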
The differential equation of the \(n\)th order in the general case has the form: \[F\left( {x,y,y',y^{\prime\prime}, \ldots ,{y^{\left( n \right)}}} \right) = 0,\] where \(F\) is a continuous function of the specified arguments. The order of the equation can be reduced if it does not contain some of these arguments, or has a certain symmetry. Below we consider in detail some cases of reducing the order for differential equations of arbitrary order \(n.\) Transformation of the \(2\)nd order equations is described here. Case \(1.\) Equation of Type \(F\left( {x,{y^{\left( k \right)}},{y^{\left( {k + 1} \right)}}, \ldots ,{y^{\left( n \right)}}} \right) = 0\) If the differential equation does not contain the unknown function \(y\) and its first \(k - 1\) derivatives, then by replacing \[{y^{\left( k \right)}} = p\left( x \right)\] the order of this equation is reduced by \(k\) units. As a result, the original equation takes the form \[F\left( {x,p,p', \ldots ,{p^{\left( {n - k} \right)}}} \right) = 0.\] From this equation (if possible) we can determine the function \(p\left( x \right).\) The original function \(y\left( x \right)\) can then be found by \(k\)-fold integration.
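As an illustration of Case 1 (a sketch, not part of the original notes): the equation \(x\,y''' = y''\) contains neither \(y\) nor \(y'\), so \(k = 2\), and the substitution \(y'' = p(x)\) reduces it to a first-order equation, which Python's sympy can solve; two integrations then recover \(y\):

```python
import sympy as sp

# Case 1 sketch: x*y''' = y'' contains neither y nor y' (k = 2),
# so set p = y'' and solve the reduced first-order equation x*p' = p.
x = sp.Symbol('x')
p = sp.Function('p')

sol_p = sp.dsolve(sp.Eq(x * p(x).diff(x), p(x)), p(x))   # p(x) = C1*x

# k-fold (here twofold) integration recovers y, adding one constant per step
C2, C3 = sp.symbols('C2 C3')
y = sp.integrate(sp.integrate(sol_p.rhs, x) + C2, x) + C3   # C1*x**3/6 + C2*x + C3

# check: the reconstructed y satisfies the original equation x*y''' = y''
assert sp.simplify(x * y.diff(x, 3) - y.diff(x, 2)) == 0
print(y)
```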
If the differential equation lacks only the function \(y\) itself, that is, has the form \[F\left( {x,y',y^{\prime\prime}, \ldots ,{y^{\left( n \right)}}} \right) = 0,\] then its order can be reduced by one by the substitution \(y' = p\left( x \right).\) Case \(2.\) Equation of Type \(F\left( {y,y',y^{\prime\prime}, \ldots ,{y^{\left( n \right)}}} \right) = 0\) Here the left side does not contain the independent variable \(x.\) The order of the equation can be reduced by the substitution \(y' = p\left( y \right).\) The derivatives are expressed through the new variables \(y\) and \(p\) as follows: \[y' = \frac{{dy}}{{dx}} = p,\] \[ {y^{\prime\prime} = \frac{{{d^2}y}}{{d{x^2}}} = \frac{d}{{dx}}\left( {\frac{{dy}}{{dx}}} \right) } = {\frac{{dp}}{{dx}} } = {\frac{{dp}}{{dy}}\frac{{dy}}{{dx}} } = {p\frac{{dp}}{{dy}},} \] \[ {y^{\prime\prime\prime} = \frac{{{d^3}y}}{{d{x^3}}} = \frac{d}{{dx}}\left( {p\frac{{dp}}{{dy}}} \right) } = {\frac{d}{{dy}}\left( {p\frac{{dp}}{{dy}}} \right)\frac{{dy}}{{dx}} } = {\left[ {p\frac{{{d^2}p}}{{d{y^2}}} + {{\left( {\frac{{dp}}{{dy}}} \right)}^2}} \right]p } = {{p^2}\frac{{{d^2}p}}{{d{y^2}}} + p{\left( {\frac{{dp}}{{dy}}} \right)^2},} \] \[ {{y^{IV}} = \frac{{{d^4}y}}{{d{x^4}}} } = {\frac{d}{{dx}}\left[ {{p^2}\frac{{{d^2}p}}{{d{y^2}}} + p{{\left( {\frac{{dp}}{{dy}}} \right)}^2}} \right] } = {\frac{d}{{dy}}\left[ {{p^2}\frac{{{d^2}p}}{{d{y^2}}} + p{{\left( {\frac{{dp}}{{dy}}} \right)}^2}} \right]\frac{{dy}}{{dx}} } = {\left[ {{p^2}\frac{{{d^3}p}}{{d{y^3}}} + 2p\frac{{dp}}{{dy}}\frac{{{d^2}p}}{{d{y^2}}} }\right.}+{\left.{ 2p\frac{{dp}}{{dy}}\frac{{{d^2}p}}{{d{y^2}}} + {{\left( {\frac{{dp}}{{dy}}} \right)}^3}} \right]p } = {{p^3}\frac{{{d^3}p}}{{d{y^3}}} }+{ 4{p^2}\frac{{dp}}{{dy}}\frac{{{d^2}p}}{{d{y^2}}} }+{ p{\left( {\frac{{dp}}{{dy}}} \right)^3}.} \] It is seen that substitution of the derivatives into the original equation gives a new differential equation of the \(\left( {n - 1} \right)\)th order.
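The substitution formulas above can be verified mechanically. Here is a short check with Python's sympy (an added sketch, not part of the original notes): with \(y' = p(y)\), the operator \(\frac{d}{dx}\) acts on any expression in \(y\) as \(p\,\frac{d}{dy}\), so iterating it reproduces the displayed formulas for \(y'''\) and \(y^{IV}\):

```python
import sympy as sp

# With y' = p(y), the chain rule gives d/dx = (dy/dx) d/dy = p d/dy.
y = sp.Symbol('y')
p = sp.Function('p')(y)

ddx = lambda expr: p * sp.diff(expr, y)

y2 = ddx(p)      # y''  = p p'
y3 = ddx(y2)     # y''' = p^2 p'' + p (p')^2
y4 = ddx(y3)     # y^{IV}

p1, p2, p3 = (sp.diff(p, y, k) for k in (1, 2, 3))
assert sp.expand(y3) == sp.expand(p**2*p2 + p*p1**2)
assert sp.expand(y4) == sp.expand(p**3*p3 + 4*p**2*p1*p2 + p*p1**3)
print("formulas verified")
```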
Solving this equation, we can determine the function \(p\left( y \right)\) and then find \(y\left( x \right).\) Case \(3.\) Homogeneous Equation \(F\left( {x,y,y',y^{\prime\prime}, \ldots ,{y^{\left( n \right)}}} \right) = 0\) The equation \(F\left( {x,y,y',y^{\prime\prime}, \ldots ,}\right.\) \(\left.{{y^{\left( n \right)}}} \right) = 0\) is called homogeneous with respect to the arguments \({y,y',}\) \({y^{\prime\prime}, \ldots ,}\) \({{y^{\left( n \right)}}}\) if the following identity holds: \[ {F\left( {x,ky,ky',ky^{\prime\prime}, \ldots ,k{y^{\left( n \right)}}} \right) } \equiv {{k^m}F\left( {x,y,y',y^{\prime\prime}, \ldots ,{y^{\left( n \right)}}} \right).} \] The order of this equation can be reduced by one using the substitution \[y = {e^{\int {z\,dx} }},\] where \(z\left( x \right)\) is the new unknown function. After \(z\left( x \right)\) is determined, we can find the original function \(y\left( x \right)\) by integration using the formula \[y\left( x \right) = {C_1}{e^{\int {z\,dx} }},\] where \({C_1}\) is an arbitrary constant. Case \(4.\) Function \(F\left( {x,y,y',y^{\prime\prime}, \ldots ,{y^{\left( n \right)}}} \right)\) is a Total Derivative In some cases, the left-hand side \(F\left( {x,y,y',y^{\prime\prime}, \ldots ,{y^{\left( n \right)}}} \right)\) of the differential equation can be expressed as the total derivative with respect to \(x\) of a differential expression of the \(\left( {n - 1} \right)\)th order: \[ {F\left( {x,y,y',y^{\prime\prime}, \ldots ,{y^{\left( n \right)}}} \right) } = {\frac{d}{{dx}}\Phi \left( {x,y,y',y^{\prime\prime}, \ldots ,{y^{\left( {n - 1} \right)}}} \right).} \] Then the solution of the original equation can be written as \[{\Phi \left( {x,y,y',y^{\prime\prime}, \ldots ,{y^{\left( {n - 1} \right)}}} \right) }={ C,}\] where \(C\) is an arbitrary constant. Consider examples for the various cases of order reduction. Solved Problems
A capacitor, once charged up with charge $Q$, will have a given voltage $V$. I tried to work out this voltage by simply plugging into Coulomb's law. The potential difference between two static point charges is $$\frac{q}{4\pi\epsilon_0d_1}-\frac{q}{4\pi\epsilon_0d_2}=\frac{qd}{4\pi\epsilon_0d_1d_2}$$ where $q$ is a unit of charge located on the capacitor, $d_1$ and $d_2$ are the distances from the other charge, and $d=d_2-d_1$ is the distance through which the pd is measured. If we assume that the charges $q$ are distributed smoothly across the surface, then the size of the unit is $\displaystyle{q = \frac{Q}{A}}$, if the area is a unit defined by the area one coulomb takes up on a capacitor. Therefore, the voltage should be $$V = \frac{Qdk}{4\pi\epsilon_0d_1d_2A}\qquad(1)$$ where $k$ is some constant. I found a much simpler derivation (in a textbook) that assumes Gauss's law. It says that the electric field is $\displaystyle{E=\frac{cQ}{A}}$ by Gauss's law, and that it is also $\displaystyle{E=\frac{V}{d}}$, therefore $$V=\frac{Qdc}{A}\qquad(2)$$ where $c$ is $\displaystyle{\frac{1}{\epsilon_0}}$. I don't understand this. According to the second equation, the electric field does not depend on the distance. Since area and charge don't change with distance, how can that be? Could someone explain how this works, and why my equation is incorrect?
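The distance-independence comes from treating the plates as (idealized, effectively infinite) charged sheets: by Gauss's law each sheet produces a uniform field, so between the plates $E = Q/(\epsilon_0 A)$ at every point, and only $V = Ed$ grows with the separation. A quick numerical sketch of the textbook formula, with arbitrary made-up values:

```python
# Parallel-plate result from the Gauss's-law derivation: E = Q/(eps0*A)
# is uniform between the plates (independent of distance), so V = E*d.
# All numbers below are arbitrary illustrative values.
eps0 = 8.8541878128e-12   # vacuum permittivity [F/m]
Q = 1e-9                  # plate charge [C]
A = 1e-2                  # plate area [m^2]
d = 1e-3                  # plate separation [m]

E = Q / (eps0 * A)        # field between two oppositely charged sheets [V/m]
V = E * d                 # voltage across the gap [V]
C = Q / V                 # capacitance, should equal eps0*A/d [F]
print(V, C)
```

The recovered capacitance equals $\epsilon_0 A/d$, the familiar parallel-plate formula.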
The use of kernels to estimate volatility using intraday data is "nothing more" than combining: (1) intraday volatility estimation and (2) kernel smoothing. Thus you have to take care about the usual pitfalls of these two approaches. Intraday volatility estimation: I hope you know the "signature plot" effect. Of course if you use the proper estimation method, it should ...

I found an answer to my own question, so I post it here for people who may have the same problem. The answer, however, is quite intuitive. The last observation used for the estimation of the physical density is also the time point where the investors know the most about the physical density, because at this point the most possible historical observations ...

Using a realized kernel for calculating volatility will give you results in the same resolution as the data you feed them. So if you feed them minute-by-minute data, then the volatility will be calculated minute-by-minute. What that really means is that only once per minute will you have a good estimate of the volatility of whatever asset you're looking at. ...

First, I assume that your price data are all from the same asset but spread over a certain time range. If you are looking for the distribution of the price of this asset on the real axis, you have plenty of methods (several fields in mathematics and statistics deal with this topic). As a first step you could make a histogram of your data. There you can see ...

Not exactly the answer you're looking for: it's not obvious that a region of stability is a desirable property. One can trivially construct an example where this is true: suppose the actual generation function of your target is $f: x_t \mapsto 2x_t+1$ over $\mathbb{R}$, and you have a signal $s$ of one parameter $s\left(p,x_t\right)=\left(p^{e} \mod 3\...
To shorten the notation, let's write $T_t = T(D_t,y_t)$ and $\delta_t = \delta(D_t,y_t)$. There are two ways to show that, in fact, the dynamics of $$ \xi_t = \xi(D_t, y_t,t) = e^{-\int_0^t \delta_s ds}\, T_t $$ is given by $$ \frac{d\xi_t}{\xi_t} = \left( -\delta_t + \frac{\mathscr{L} T_t}{T_t} \right)dt \quad+\quad \text{diffusion terms}. $$ First way (...

The cross-validation procedure does not turn on the choice of algorithm. Yes: calculate the prediction error of the fitted models when predicting the V'th part of the data, then combine the V estimates of prediction error using a simple average. Subsets should be randomly sampled (roughly equally sized). 2a. Subsets should not overlap. No. As long as the ...

The problem is to find the best functional form of the utility function plus estimate its parameters. A good starting point is the following draft chapter from an upcoming book, which gives a good intuition and many examples: Preferences by Andrew Ang.

One simple approach is:
1. Construct the cumulative probability function (CDF), which will be a step-function.
2. Smooth the CDF; for example, by using splines or a kernel smoothing function.
3. Calculate the slope of the smoothed CDF, giving a curvy linear PDF.
In R, this could be done using the ecdf function and one of the kernel smoothers. Again, as vanguard2k ...
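The last recipe (ECDF, smooth, differentiate) might look like the following Python sketch; the data are synthetic and the bandwidth is an arbitrary choice, so treat it as an illustration of the idea rather than the R code the answer refers to:

```python
import numpy as np

# ECDF -> kernel-smooth -> differentiate, on synthetic N(0,1) data.
rng = np.random.default_rng(1)
data = rng.normal(loc=0.0, scale=1.0, size=5000)

grid = np.linspace(-4, 4, 801)
# step-function empirical CDF evaluated on the grid
ecdf = np.searchsorted(np.sort(data), grid, side="right") / data.size

# smooth the step-function CDF with a (truncated) Gaussian kernel
h = 0.15                                  # bandwidth, arbitrary
dx = grid[1] - grid[0]
kernel = np.exp(-0.5 * (np.arange(-40, 41) * dx / h) ** 2)
kernel /= kernel.sum()
cdf_smooth = np.convolve(ecdf, kernel, mode="same")

pdf = np.gradient(cdf_smooth, grid)       # slope of the smoothed CDF
print(round(pdf[len(grid) // 2], 2))      # density estimate at x = 0
```

For standard-normal data the estimate at the center should sit near the true peak $1/\sqrt{2\pi}\approx 0.40$ (edge effects of the convolution only matter near the ends of the grid).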
Yesterday, I had another article published for The Conversation. I wrote about the Birthday Problem, which (I think!) is well known to most mathematicians, but certainly not to the general public. Anyways, it was very well received. I guess it's a bit sad that there's a ton of crazy maths facts like this that we … Continue reading The Birthday Problem

Mathematical proofs are a bit like people. You can choose which ones to love and which ones to spit at. The loveable proofs are undoubtedly the most important, especially as a pick-me-up on those cold lonely days. I'd like to share with you all a little proof that the $n$th root of 2 is … Continue reading A sledgehammer proof of irrationality

I thought that today I might write about my most favourite arithmetic function. The Moebius function $\mu:\mathbb{N} \rightarrow \{-1,0,1\}$ is defined as follows: $\mu(1)=1$, $\mu(n)=0$ if $n$ is divisible by $m^2$ for $m>1$, and $\mu(n)=(-1)^k$ if $n$ is a product of $k$ distinct prime numbers. So … Continue reading Merten's Function

Another article of mine has just been published by The Conversation. Click here to get on and read it.

Here's a cute little problem. Can every number of the form $4/n$ (where $n$ is an integer) be written as the sum of three unit fractions? (A unit fraction is a fraction of the form $1/a$ where $a$ is an integer.) For example, if $n = 5$ we have $\frac{4}{5} = \frac{1}{2}$ … Continue reading The Erdős-Straus Conjecture

For a quick introduction to the theory of prime numbers, see my earlier post. A mathematician by the name of Yitang Zhang made the news last week, for taking a step towards the twin prime conjecture. This conjecture asserts that there are infinitely many pairs of prime numbers that differ by 2. We call such pairs … Continue reading Almost Twin Primes

Dear MATH1115 student, Want to become a master of subspaces? Check this out!
Mathematically yours, Adrian. P.S. If "Mathematically yours" is too weird, then please replace it with "yours". P.P.S. I've written the word "yours" too much, and now that thing has happened to my brain where it looks like a funny word.
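In case you're curious, the punchline of the Birthday Problem mentioned above can be computed in a couple of lines (a quick sketch, assuming 365 equally likely birthdays and ignoring leap years):

```python
from math import prod

# Probability that among n people at least two share a birthday.
def p_shared(n: int) -> float:
    # complement of "all n birthdays distinct"
    return 1.0 - prod((365 - k) / 365 for k in range(n))

print(f"{p_shared(23):.3f}")   # famously just over 1/2 for 23 people
```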
Why is there a need to perform holographic renormalization for the normal $AdS_5\times S^5$/CFT$_4$ correspondence if the brane theory is conformal? Since the flow along the AdS direction $r$ is related to the renormalization scale, does this not explicitly introduce an "energy" parameter that breaks conformal invariance on the SYM side of the duality?

The fact that the boundary theory is conformal means that renormalization does not induce running of the coupling. However, there are divergences which have to be regularized and renormalized. The regularization requires the introduction of an arbitrary scale, which is not Weyl invariant and leads to a conformal anomaly (in even dimensions). Correspondingly, the bulk theory also has to be regularized, by introducing a cutoff $\epsilon$ on the radial coordinate. The supergravity fields have to be expanded close to the boundary, and local counterterms have to be introduced to subtract the divergences when taking the limit $\epsilon\to 0$. For the metric, the regularization procedure requires picking a reference metric $g_{(0)}$ from the conformal structure on the boundary. For $d$ (the boundary dimension) even, the dependence of the counterterm on the chosen reference metric leads to a renormalized Lagrangian that is not Weyl invariant: one picks up exactly the expected Weyl anomaly. This is a very neat example of a connection between boundary UV physics (the cutoff) and bulk IR physics (divergences close to the boundary), which lead to the same Weyl anomaly. For details, see the paper by Henningson and Skenderis. There are also these very instructive lecture notes on holographic renormalization, with the example of the renormalization of the action of a massive bulk scalar.

Addendum: Example of why CFT correlators need regularization/renormalization

It is well known that conformal invariance greatly restricts the form of CFT correlation functions.
For example, two-point functions of a scalar operator $\mathcal{O}$ are restricted to $$ \langle\mathcal{O}(x)\mathcal{O}(0)\rangle=\frac{C}{x^{2\Delta}} $$ where $\Delta$ is the scaling dimension of $\mathcal{O}$ and $C$ is a normalization constant. It is much less known, though, that this is only a bare correlator and it is not valid at $x^{2}=0$. A correlator should be a well-defined distribution and have a well-defined Fourier transform: $$G(p)=\int d^dx\, e^{-ipx}\frac{C}{x^{2\Delta}}=\frac{C\pi^{d/2}2^{d-2\Delta}\Gamma\left(\frac{d-2\Delta}{2}\right)}{\Gamma\left(\Delta\right)}\,p^{2\Delta-d}.$$ Since the $\Gamma$-function has poles at the non-positive integers, we can see that regularization is necessary when $\Delta=\frac{d}{2}+n$, where $n$ is a positive integer. This can be done, for example, using dimensional regularization. After the addition of a counterterm in the action the correlator becomes $$G(p)=p^{2\Delta-d}\left(C_1 \log\frac{p^2}{\mu^2}+C_2\right),$$ which is clearly a scale-dependent expression. Scale invariance is an anomalous symmetry in the full quantum theory. However, in $\mathcal{N}=4$ super Yang-Mills the coupling is protected from running by supersymmetry. So it is not conformal symmetry that leads to a vanishing $\beta$-function; it is SUSY. The vanishing of the $\beta$-function for this particular theory is discussed here, including some references.
Convergence rates of solutions for a two-species chemotaxis-Navier-Stokes system with competitive kinetics 1. Department of Mathematics, South China University of Technology, Guangzhou 510640, China 2. Institute for Mathematical Sciences, Renmin University of China, Beijing 100872, China $\begin{equation*} \begin{cases} (n_1)_t + u\cdot\nabla n_1 = \Delta n_1 - \chi_1\nabla\cdot(n_1\nabla c) + \mu_1n_1(1- n_1 - a_1n_2), & x\in \Omega,\ t>0, \\ (n_2)_t + u\cdot\nabla n_2 = \Delta n_2 - \chi_2\nabla\cdot(n_2\nabla c) + \mu_2n_2(1- a_2n_1 - n_2), & x\in \Omega,\ t>0, \\ c_t + u\cdot\nabla c = \Delta c -(\alpha n_1 + \beta n_2)c, & x \in \Omega,\ t>0, \\ u_t + \kappa (u\cdot\nabla) u = \Delta u + \nabla P + (\gamma n_1 + \delta n_2)\nabla\phi, \quad \nabla\cdot u = 0, & x \in \Omega,\ t>0 \end{cases} \end{equation*}$ for the unknowns $n_1, n_2, c$ and $u$ in a bounded domain $\Omega \subset \mathbb{R}^d$ ($d\in\{2,3\}$, with $\kappa = 0$ in the $3$-dimensional case). Under a smallness condition on $\frac{\max\{\chi_1,\chi_2\}}{\min\{\mu_1,\mu_2\}}\|c_0\|_{L^\infty(\Omega)}$, explicit rates of convergence in $L^\infty$ are obtained for any given global bounded classical solution $(n_1, n_2, c, u)$: $(n_1(\cdot,t), n_2(\cdot,t), u(\cdot,t))\overset{t\rightarrow\infty}\rightarrow \begin{cases} (\frac{1 - a_1}{1 - a_1a_2},\frac{1 - a_2}{1 - a_1a_2},0) \text{ exponentially, if } a_1, a_2 \in (0, 1), \\ (0,1,0) \text{ exponentially, if } a_1>1> a_2, \\ (0,1,0) \text{ algebraically, if } a_1 = 1> a_2, \\ (1,0,0) \text{ exponentially, if } a_2>1> a_1, \\ (1,0,0) \text{ algebraically, if } a_2 = 1> a_1.
\end{cases}$

Keywords: Chemotaxis-fluid system, boundedness, exponential convergence, algebraic convergence, convergence rates.

Mathematics Subject Classification: Primary: 35B40, 35K55, 35B44, 35K57; Secondary: 35Q92, 92C17.

Citation: Hai-Yang Jin, Tian Xiang. Convergence rates of solutions for a two-species chemotaxis-Navier-Stokes system with competitive kinetics. Discrete & Continuous Dynamical Systems - B, 2019, 24 (4): 1919-1942. doi: 10.3934/dcdsb.2018249
Sabancı University Mathematics Colloquium

Permutations of the form $x^k - \gamma \operatorname{Tr}(x)$ and curves over finite fields

Nurdagül Anbar, Sabancı University, Turkey

Abstract: Let $q$ be a power of a prime $p$, and let $\mathbb{F}_q$ be the finite field with $q$ elements. A polynomial $P(x) \in \mathbb{F}_q[x]$ is called a permutation of $\mathbb{F}_q$ if the associated map from $\mathbb{F}_q$ to $\mathbb{F}_q$ defined by $x \mapsto P(x)$ is a bijection, i.e., it permutes the elements of $\mathbb{F}_q$. In this talk, we consider the polynomials of the form $P(x) = x^k - \gamma \operatorname{Tr}(x)$ over $\mathbb{F}_{q^n}$ for $n \geq 2$, where $\mathbb{F}_{q^n}$ is the extension of $\mathbb{F}_q$ of degree $n$ and $\operatorname{Tr}$ is the absolute trace from $\mathbb{F}_{q^n}$ to $\mathbb{F}_q$. We show that $P(x)$ is not a permutation of $\mathbb{F}_{q^n}$ in the case $\gcd(k, q^n - 1) > 1$. Our proof uses an absolutely irreducible curve over $\mathbb{F}_{q^n}$ and the number of rational points on it.

Date: 04.04.2019 Time: 13:40 Place: FENS G035 Language: English
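The statement in the abstract can be sanity-checked by brute force in the smallest nontrivial case (a sketch, not part of the announcement): take $q = 3$, $n = 2$, represent $\mathbb{F}_9$ as $\mathbb{F}_3[t]/(t^2+1)$, and note that here $\operatorname{Tr}(x) = x + x^3$:

```python
from math import gcd
from itertools import product

# F_9 = F_3[t]/(t^2 + 1); elements are pairs (a, b) <-> a + b*t.
p = 3

def mul(x, y):
    a, b = x; c, d = y
    return ((a * c - b * d) % p, (a * d + b * c) % p)   # uses t^2 = -1

def power(x, k):
    r = (1, 0)
    for _ in range(k):
        r = mul(r, x)
    return r

def tr(x):                      # absolute trace F_9 -> F_3: Tr(x) = x + x^3
    s = power(x, 3)
    return ((x[0] + s[0]) % p, (x[1] + s[1]) % p)

field = [(a, b) for a in range(p) for b in range(p)]

def is_permutation(k, gamma):   # does P(x) = x^k - gamma*Tr(x) permute F_9?
    images = set()
    for x in field:
        xk, gt = power(x, k), mul(gamma, tr(x))
        images.add(((xk[0] - gt[0]) % p, (xk[1] - gt[1]) % p))
    return len(images) == len(field)

# gcd(k, q^n - 1) = gcd(k, 8) > 1  ==>  never a permutation, for any gamma
for k, gamma in product(range(1, 9), field):
    if gcd(k, p**2 - 1) > 1:
        assert not is_permutation(k, gamma)
print("theorem confirmed over F_9")
```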
$f\colon(-1,1)\rightarrow \mathbb{R}$ is bounded and continuous; does it mean that $f$ is uniformly continuous? Well, does $f(x)=x\sin(1/x)$ do the job as a counterexample? Please help!

For continuity to imply uniform continuity it suffices that the domain be compact (the Heine-Cantor theorem), and $(-1,1)$ is not compact, so counterexamples are possible. Your candidate $x\sin(1/x)$ does not work, though: it extends continuously to $[-1,1]$ (with value $0$ at $x=0$), so it is uniformly continuous. A genuine counterexample is $f(x)=\sin\frac{1}{x+1}$, which oscillates faster and faster as $x\to -1^+$. (On all of $\mathbb{R}$, the standard analogous examples are $\sin(x^2)$ or $\sin(e^x)$, which fail to be uniformly continuous there because the oscillation speeds up without bound.)

$F(x)=\tan(\frac{\pi x}{2})$ with domain $(-1,1)$ is also not uniformly continuous, though note it is not a counterexample to the statement as posed, since it is unbounded near the endpoints.

Here is a useful way to organize this. A continuous $f$ on $(-1,1)$ is uniformly continuous if and only if it extends to a continuous function $F$ on $[-1,1]$. Indeed, if such an extension exists, then $F$ is continuous on the compact set $[-1,1]$, hence uniformly continuous there by Heine-Cantor, and $f$, being the restriction of $F$ to $(-1,1)$, inherits uniform continuity. Conversely, if $f$ is uniformly continuous, the one-sided limits at $\pm 1$ exist and define a continuous extension. So a necessary (and in fact sufficient) condition for a continuous $f$ on $(-1,1)$ to fail uniform continuity is that it admit no continuous extension to $[-1,1]$, i.e. that at least one of the limits $\lim_{x\to -1^+}f(x)$ and $\lim_{x\to 1^-}f(x)$ fail to exist.
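To see the counterexample $\sin\frac{1}{x+1}$ failing uniform continuity numerically, one can pick pairs of points near $x=-1$ where $1/(x+1)$ hits consecutive extrema of the sine: the points get arbitrarily close while the function values stay a fixed distance $2$ apart (a quick sketch):

```python
import numpy as np

# f(x) = sin(1/(x+1)) on (-1, 1): points clustering at x = -1 stay
# mapped to values exactly 2 apart, so no single delta can work.
f = lambda x: np.sin(1.0 / (x + 1.0))

for n in range(1, 5):
    # choose x, y so that 1/(x+1) = pi/2 + 2*pi*n and 1/(y+1) = -pi/2 + 2*pi*n
    x = 1.0 / (np.pi / 2 + 2 * np.pi * n) - 1.0
    y = 1.0 / (-np.pi / 2 + 2 * np.pi * n) - 1.0
    print(f"|x-y| = {abs(x - y):.2e},  |f(x)-f(y)| = {abs(f(x) - f(y)):.2f}")
```

As $n$ grows, $|x-y|\to 0$ while $|f(x)-f(y)|$ stays at $2$, which is exactly the negation of uniform continuity.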
Presumably $f$ on $(-1,1)$ would, as a result of this, be uniformly continuous as well, since $F = f$ on $(-1,1)$. However, I stress the role of the non-continuity of $F$, the extension of $f$ to the closed domain $[-1,1]$, at the endpoints $-1, 1$. At least I think so. I say "I think so" because my query concerns uniform continuity under two different designations:

$(1)$ the uniform continuity of the restricted function $f$ on the entirety of $f$'s domain, $\operatorname{dom}(f) = (-1,1)$, on the one hand; and

$(2)$ the uniform continuity of the extension (the unrestricted function) $F$, with $\operatorname{dom}(F) = [-1,1]$, on the part $(-1,1) \subsetneq [-1,1]$ of its domain.

Since uniform continuity is a global property of a function together with its domain, there may be a difference, due to a technicality in designation, between evaluating $(1)$, where the function under consideration is $f$, undefined at $1$ and at $-1$, with entire domain $(-1,1) \subset [-1,1] = \operatorname{dom}(F)$, and evaluating $(2)$, the uniform continuity of $F$ on a sub-part $(-1,1)$ of $F$'s domain, despite $F = f$ on $(-1,1)$. That is, must one keep in mind that in $(2)$, unlike $(1)$, it is $F$ and $(-1,1) \subsetneq [-1,1] = \operatorname{dom}(F)$ under consideration, not $f$, when evaluating a global property such as uniform continuity on $(-1,1)$?
In $(2)$ the entire domain of $F$ is $[-1,1] \neq (-1,1) = \operatorname{dom}(f)$, in contrast to $(1)$, where we consider the uniform continuity of $f$ on $(-1,1)$ under the aspect of the restriction: namely, uniform continuity of $f$ over $f$'s entire domain, which is $(-1,1)$. Is there a difference between the uniform continuity of a restriction on its entire domain and the uniform continuity of the unrestricted function on that same sub-part of its larger domain, given that the bounded interval of interest is the same $(-1,1)$ in both cases and $f = F$ on $(-1,1)$? Or is this philosophical pedantry? Bear in mind that $F$ is defined at the endpoints, unlike $f$, and that $(-1,1)$ is only a sub-part of $\operatorname{dom}(F) = [-1,1]$; I believe that uniform continuity depends not merely on the points of interest but is evaluated globally, on the entire function over its entire domain.

Endpoint discontinuity of the extension would be a necessary condition for finding a candidate non-uniformly continuous, continuous $f$ with $\operatorname{dom}(f) = (-1,1)$; I do not think it is a sufficient condition. One merely needs to consider $G(x) = x$ on $(-1,1)$ with $G(-1) = -10$ and $G(1) = 10$, constituting a failure of continuity, that is, some kind of endpoint discontinuity. Whether this can really be considered a jump discontinuity, a removable discontinuity, or something else is hard to say, given that each endpoint has only a single one-sided limit, whether continuous or not.
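One way to examine whether designations $(1)$ and $(2)$ can differ is to write out the standard $\varepsilon$–$\delta$ definition (a textbook formulation, not taken from the post): uniform continuity of a function on a subset quantifies only over points of that subset.

```latex
% Uniform continuity of F on a subset S of dom(F) quantifies only over S:
F \text{ is uniformly continuous on } S
\iff
\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x, y \in S :\;
|x - y| < \delta \implies |F(x) - F(y)| < \varepsilon.
```

On this reading, only the values $F(x)$ for $x \in S = (-1,1)$ appear in the condition, and $F = f$ there, so the condition holds for $F$ on $(-1,1)$ exactly when it holds for $f$ on its entire domain; under this definition, readings $(1)$ and $(2)$ coincide.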
In any case, the function value of $G$ at $1$ and $-1$ differs from the appropriate one-sided limits of $G$ from the right at $-1$ and from the left at $1$. That is, at the endpoints of the domain $[-1,1]$,
$$G(1) = 10 \neq 1 = \lim_{x \to 1^-} G(x).$$
Meanwhile the restriction $F$ of the extension $G$, where $F = G$ on $(-1,1)$ and $F$ is defined only on $(-1,1)$,
$$F\colon \forall x \in (-1,1),\; F(x) = x,$$
is the identity function with $\operatorname{dom}(F) = (-1,1)$, and the identity function is clearly uniformly continuous, unlike $G$. I presume, however, that $G\colon [-1,1] \to \operatorname{Im}(G)$ **may not technically qualify (I am not sure) as uniformly continuous on $(-1,1)$** despite being identical to $F$ on this open interval, as uniform continuity is a global property, unlike pointwise continuity, and the domain of $G$ so defined is $[-1,1]$. I am unclear about this (it might sound like pedantry). That is, despite appearances, I am not sure one can completely ignore the title given to the function, $G$ or $F$, when considering whether the function so denoted is uniformly continuous on $(-1,1)$. Will there be a difference, due to some technicality in terms, depending on whether we consider the uniform continuity of this function on the open interval $(-1,1)$ as $G$ rather than as $F$?
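The $G$ example can be made concrete numerically. The sketch below is my own illustration, using the hypothetical endpoint values $-10$ and $10$ from above: the restriction to $(-1,1)$ is the identity, hence $1$-Lipschitz and uniformly continuous, while $G$ on $[-1,1]$ fails uniform continuity because the jump at the endpoint persists no matter how close the two points are.

```python
# Illustration (my own sketch) of the G example: G(x) = x on (-1, 1),
# with the discontinuous endpoint values G(-1) = -10 and G(1) = 10.

def G(x):
    if x == -1.0:
        return -10.0
    if x == 1.0:
        return 10.0
    return x

# On (-1, 1) the restriction is the identity, hence 1-Lipschitz:
x, y = -0.5, -0.4999
assert abs(G(x) - G(y)) == abs(x - y)

# Across the endpoint -1, the jump persists however small eps is:
eps = 1e-9
jump = abs(G(-1.0) - G(-1.0 + eps))
print(jump)  # stays near 9 as eps -> 0
```

So no $\delta$ works for, say, $\varepsilon = 1$ once the endpoint $-1$ is included in the set under consideration, while on the open interval alone $\delta = \varepsilon$ always works; the difference comes entirely from which points the definition quantifies over, not from the name given to the function.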