In a queuing system, the inter-arrival times are known to be exponentially distributed. My textbook states: "It can be shown that, if the underlying distribution of inter-arrival times { T1, T2, ..., Tn } is exponential, the arrival times are uniformly distributed on the interval (0, T). The arrival times T1, (T1 + T2), (T1 + T2 + T3), ..., (T1 + ... + Tn) are obtained by adding inter-arrival times." But isn't the sum of exponential random variables distributed as Erlang, implying that arrival times should be Erlang distributed? How are arrival times distributed - Uniform or Erlang? Why? I'd like some clarity on the topic.

If interarrival times ($IA$) are i.i.d. $IA\sim \text{Exponential}(\lambda)$, then the $n$th arrival time is $S_n \sim \text{Erlang}(n,\lambda)$. As a quick check, you can see that $S_1 \sim \text{Erlang}(1,\lambda)$, which is the same as $\text{Exponential}(\lambda)$. However, the textbook is giving you more information: given that an arrival time falls in the interval $[0,T]$, it is distributed uniformly on that interval, as a consequence of the interarrival distribution (or, equivalently, of a stationary Poisson process). Notice the difference between the arrival time $S_n$ and the conditional arrival time $(S_n\mid S_n\in [0, T])$: $$(S_n\mid S_n\in [0, T])\sim \text{Uniform}(0, T).$$ This extends to the case $(S_n\mid S_n\in [t_1, t_2])\sim \text{Uniform}(t_1, t_2)$ with $t_1<t_2$. See the proof/derivation here on Math.SE. With Exponential interarrivals, $(S_n\mid S_n\in [t_1, t_2])\sim \text{Uniform}(t_1, t_2)$ for $t_1<t_2$ doesn't invalidate that $S_n \sim \text{Erlang}(n,\lambda)$. For a visual example, see the two graphs below, generated with the same Exponential interarrival times. MATLAB code:

Rate = .85; ArrivalNumber = 5; SampleSize = 50000;
% Add interarrival times
S5 = sum((-1/Rate)*log(1-rand(SampleSize,ArrivalNumber)),2);
% Erlang PDF
fh = @(x) ((Rate^ArrivalNumber)*x.^(ArrivalNumber-1).*exp(-Rate.*x))...
    ./factorial(ArrivalNumber-1);

Times = cumsum((-1/Rate)*log(1-rand(20000,50)),2);
IntervalTimes = [];
Interval = [15 35];
for i = 1:size(Times,1)
    Times_i = Times(i,:);
    IntervalTimes_i = Times_i(Times_i >= Interval(1) & Times_i <= Interval(2));
    IntervalTimes = [IntervalTimes; IntervalTimes_i(:)];
end

figure, hold on, box on
title('Event Times on Interval [15, 35]')
h = histogram(IntervalTimes,'Normalization','pdf','DisplayName','Simulation');
xlim([0 40])
p = plot([15 35],ones(1,2)./(35-15),'k-','LineWidth',2.2,'DisplayName','Uniform(15,35)');
xlabel('Arrival Times on Interval [15, 35]')
ylabel('PDF')
legend('show')

figure, hold on, box on
title('S_5 \sim Erlang(5,\lambda = 0.85)')
h = histogram(S5,'Normalization','pdf','DisplayName','Sum of Exponential(\lambda)');
Xrange = 0:.01:25;
p = plot(Xrange,fh(Xrange),'k-','LineWidth',2.2,'DisplayName','Erlang(5,\lambda)');
xlim([0 20])
legend('show')
xlabel('5th Arrival Time, S_5')
ylabel('PDF')
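The same two facts can be cross-checked in a quick Monte-Carlo sketch (not part of the original answer; sample sizes are made up):

```python
import random

random.seed(1)
lam, n, m = 0.85, 5, 100_000

# n-th arrival time = sum of n Exponential(lam) interarrivals -> Erlang(n, lam)
s_n = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(m)]
mean_sn = sum(s_n) / m          # Erlang(n, lam) mean is n/lam ~ 5.88

# Arrival times of long runs that land in [15, 35] look Uniform(15, 35)
hits = []
for _ in range(2000):
    t = 0.0
    while t < 35:
        t += random.expovariate(lam)
        if 15 <= t <= 35:
            hits.append(t)
mean_hit = sum(hits) / len(hits)   # Uniform(15, 35) mean is 25
```

Both sample means land where the Erlang and conditional-Uniform distributions predict.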
This talk is only an introductory overview. A much more detailed version of this talk was given by Lianheng Tong at the 2016 summer school. We converted the Kohn-Sham equations into matrix form by introducing basis functions: \begin{align} \mathbf{H C} = \mathbf{E S C} \end{align} We can change variables: \begin{align} \mathbf{H C} & = \mathbf{E S C}\\ \mathbf{H S^{-1/2} S^{1/2} C} & = \mathbf{E S C}\\ \mathbf{S^{-1/2} H S^{-1/2} S^{1/2} C} & = \mathbf{S^{-1/2} E S C}\\ \mathbf{S^{-1/2} H S^{-1/2}} (\mathbf{S^{1/2} C}) & = \mathbf{E S^{1/2} C}\\ \mathbf{H' C'} & = \mathbf{ E C'} \end{align} where $\mathbf{C}' = \mathbf{S}^{1/2} \mathbf{C}$ and $\mathbf{H}' = \mathbf{S}^{-1/2} \mathbf{H} \mathbf{S}^{-1/2}$. This is a standard (though non-linear, since $\mathbf{H}$ depends on $\mathbf{C}$) eigenvalue problem. We can diagonalise $\mathbf{H' C'} = \mathbf{ E C'}$ to find a new set of MOs given the input Kohn-Sham matrix built from the current density. Standard methods of diagonalising the matrix can be used - termed 'traditional diagonalisation'. The new orbitals are used to build the new density, and the process repeats until (if?) it converges - i.e. the MOs in are the same as the MOs out. Instead of using the new set of orbitals directly, mix together some older solutions and the new solution. This is much more stable than blindly taking only the new density. In the simplest case, linearly mix $\alpha$ of the new density with $1-\alpha$ of the density from the previous step.

&SCF
  SCF_GUESS ATOMIC
  EPS_SCF 1.0E-06
  MAX_SCF 50
  &MIXING
    ALPHA 0.2 ! sensible value, the default 0.4 is very aggressive
  &END MIXING
&END SCF

Instead of mixing in a fraction of the new density with the previous step, a more sophisticated approach mixes in a history of previous densities using some 'recipe'. By default CP2K switches to the DIIS method when the largest change in an element of the density matrix is smaller than EPS_DIIS, which is 0.05 if not set explicitly.
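The change of variables above can be checked numerically. Below is a toy sketch with random symmetric matrices standing in for the Kohn-Sham and overlap matrices (not CP2K code; all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
H = (A + A.T) / 2                # symmetric stand-in for the Kohn-Sham matrix
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)      # symmetric positive-definite stand-in for the overlap

# Build S^(-1/2) from the eigendecomposition of S
w, V = np.linalg.eigh(S)
S_inv_half = V @ np.diag(w ** -0.5) @ V.T

# Transform, solve the standard eigenvalue problem, transform back
H_prime = S_inv_half @ H @ S_inv_half
E, C_prime = np.linalg.eigh(H_prime)
C = S_inv_half @ C_prime

# C now solves the generalized problem H C = S C diag(E)
residual = np.linalg.norm(H @ C - S @ C @ np.diag(E))
```

The residual is at machine precision, confirming that solving the transformed standard problem recovers the generalized one.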
The output should look something like this:

  Step   Update method   Param.     Time   Convergence       Total energy     Change
  ----------------------------------------------------------------------------------
     1   P_Mix/Diag.   0.50E+00      2.1    0.41056021   -2133.4408435676  -2.13E+03
     2   P_Mix/Diag.   0.50E+00      3.2    0.20432922   -2132.0776002852   1.36E+00
     3   P_Mix/Diag.   0.50E+00      3.2    0.10741372   -2131.3677551799   7.10E-01
     4   P_Mix/Diag.   0.50E+00      3.2    0.05420394   -2131.0080867703   3.60E-01
     5   DIIS/Diag.    0.39E-03      3.2    0.02722180   -2130.8276990683   1.80E-01
     6   DIIS/Diag.    0.19E-03      3.1    0.00062404   -2130.6473761946   1.80E-01
     7   DIIS/Diag.    0.84E-04      3.2    0.00050993   -2130.6473778175  -1.62E-06

Note the switch to DIIS when Convergence is < 0.05. For metallic systems we generalise how we build the density. Up to now we built the density by filling the $N$ electrons into the $N$ lowest molecular spin orbitals. If the system is metallic (or has a very small band gap) this can lead to 'charge sloshing': the orbitals around the Fermi energy can change their ordering, and different ones are occupied from iteration to iteration. Instead, fill the orbitals using a Fermi-Dirac distribution at a fictitious finite temperature - this smooths out charge sloshing.

&SCF
  SCF_GUESS ATOMIC
  EPS_SCF 1.0E-6
  MAX_SCF 50
  ADDED_MOS 200
  &SMEAR ON
    METHOD FERMI_DIRAC
    ELECTRONIC_TEMPERATURE [K] 300
  &END SMEAR
  &MIXING
    METHOD BROYDEN_MIXING
    ALPHA 0.2
    NBUFFER 5
  &END MIXING
&END SCF

We also use a different mixing scheme (Broyden), which is probably optimal for metallic systems. The fictitious electronic temperature is typically between 300 and 3000 K. The larger the value, the smoother the convergence, but it can affect the physical properties of the system if too large. Why not just directly minimize the energy functional with respect to the MO coefficients? The MOs must stay orthonormal, so the minimization must be subject to a constraint - on an M dimensional hypersphere! This is built into diagonalisation, as the new vectors are always eigenfunctions of the (current) Kohn-Sham matrix.
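The effect of smearing can be sketched with the Fermi-Dirac occupation function itself (illustrative energies in eV; this is not CP2K's actual occupation routine):

```python
import math

def fermi_dirac(e, mu, kT):
    """Occupation of a level at energy e, chemical potential mu (same units as kT)."""
    return 1.0 / (math.exp((e - mu) / kT) + 1.0)

# Levels near the Fermi level get fractional occupations instead of a hard 0/1 cut,
# so small reorderings no longer flip occupations between iterations.
levels = [-0.2, -0.1, -0.01, 0.01, 0.1, 0.2]   # energies relative to mu, in eV
kT = 0.026                                      # ~300 K in eV
occs = [fermi_dirac(e, 0.0, kT) for e in levels]
```

Levels well below the Fermi energy stay essentially full, levels well above stay essentially empty, and the occupations are symmetric about 1/2 at the Fermi energy.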
Work with new variables $\mathbf{X}$: $$\mathbf{C}(\mathbf{X}) = \mathbf{C}_0 \cos(\mathbf{U}) + \mathbf{X U}^{-1} \sin(\mathbf{U})$$ $$\mathbf{U} = (\mathbf{X}^T\mathbf{SX})^{1/2}$$ One can show that this leads to an optimization in an (M-1)-dimensional linear space. In minimization problems it is often a good idea to change the problem by applying some approximate solution to it, giving an equivalent set of equations that are easier to solve - a preconditioner. The OT solver is no exception. There are a variety of preconditioners available, and they can dramatically speed up convergence.

&SCF
  SCF_GUESS RESTART
  EPS_SCF 1.0E-06
  MAX_SCF 20
  &OT ON
    MINIMIZER DIIS
    PRECONDITIONER FULL_ALL
    ENERGY_GAP 0.001
  &END OT
  &OUTER_SCF
    MAX_SCF 2
  &END OUTER_SCF
&END SCF

This uses the most efficient minimizer, DIIS. Change to CG for difficult systems. FULL_ALL is the most accurate, and most expensive to calculate, preconditioner. The OUTER_SCF restarts the SCF cycle and reapplies a new preconditioner when the original loop finishes.

&SCF
  SCF_GUESS RESTART
  EPS_SCF 1.0E-06
  MAX_SCF 20
  &OT ON
    MINIMIZER DIIS
    PRECONDITIONER FULL_SINGLE_INVERSE
    ENERGY_GAP 0.1
  &END OT
  &OUTER_SCF
    MAX_SCF 2
  &END OUTER_SCF
&END SCF

The FULL_ALL preconditioner is expensive to apply to large systems (diagonalization of the approximate Hamiltonian is required). FULL_SINGLE_INVERSE is pretty good, and much cheaper for big systems. The two methods use quite different code paths. EPS_SCF has different meanings for OT (largest derivative of the energy with respect to the MO coefficients) and traditional diagonalisation (largest change in the density matrix). Some options will only work with one or the other: the MO section in PRINT only works properly with diagonalisation; the MO_CUBES section in PRINT only works properly with OT. See the more detailed version of this talk given by Lianheng Tong at the 2016 summer school if you want to know (much) more!
So far, we’ve been looking at motion that is easily described in Cartesian coordinates, often moving along straight lines. Such motion happens a lot, but there is a second common class as well: rotational motion. It won’t come as a surprise that to describe rotational motion, polar coordinates (or their 3D counterparts, the cylindrical and spherical coordinates) are much handier than Cartesian ones 1. For example, if we consider the case of a disk rotating with uniform velocity around its center, the easiest way to describe it is to specify over how many degrees (or radians) a point on the boundary advances per second. Compare this to linear motion, which is specified by how many meters you advance in the linear direction per second - the speed (with dimension L/T). The change of the angle per second gives you the angular speed \(\omega\), where a counterclockwise rotation is taken to be in the positive direction. The angular speed has dimension 1/T, so it is a frequency. It is measured in degrees per second or radians per second. If the angle at a point in time is denoted by \(\theta (t)\), then obviously \(\omega = \dot \theta\), just like \(v = \dot x\) in linear motion.
In three dimensions, \(\omega\) becomes a vector \(\boldsymbol{\omega}\), where the magnitude is still the rotational speed, and the direction gives you the direction of the rotation by means of a right-hand rule: rotation is in the plane perpendicular to \(\boldsymbol{\omega}\), and in the direction the fingers of your right hand point if your thumb points along \(\boldsymbol{\omega}\) (this gives \(\boldsymbol{\omega}\) in the positive \(\hat z\) direction for rotational motion in the xy plane). Going back to 2D for the moment, let’s call the angular position \(\theta (t)\); then \[\omega=\frac{\mathrm{d} \theta}{\mathrm{d} t}=\dot{\theta}\] If we want to know the position \(\boldsymbol{r}\) in Cartesian coordinates, we can simply use the normal conversion from polar to Cartesian coordinates and write \[\boldsymbol{r}(t)=r \cos (\omega t) \hat{\boldsymbol{x}}+r \sin (\omega t) \hat{\boldsymbol{y}}=r \hat{\boldsymbol{r}} \label{r}\] where \(r\) is the distance to the origin. Note that \(\boldsymbol{r}\) points in the direction of the polar unit vector \(\hat r\). Equation \ref{r} gives us an interpretation of \(\omega\) as a frequency: if we consider an object undergoing uniform rotation (i.e., constant radius and constant angular velocity), in its x and y-directions it oscillates with frequency \(\omega\). As long as our motion remains purely rotational, the radial distance \(r\) does not change, and we can find the linear velocity by taking the time derivative of \ref{r}: \[\boldsymbol{v}(t)=\dot{\boldsymbol{r}}(t)=-\omega r \sin (\omega t) \hat{\boldsymbol{x}}+\omega r \cos (\omega t) \hat{\boldsymbol{y}}=\omega r \hat{\boldsymbol{\theta}}\] so in particular we have \(v=\omega r\). Note that both \(v\) and \(\omega\) denote instantaneous speeds, and Equation \ref{r} only holds when \(\omega\) is constant. However, the relation \(v=\omega r\) always holds. To see that this is true, express \(\theta\) in radians, \(\theta=\frac{s}{r}\), where \(s\) is the distance traveled along the rotation direction.
Then \[\omega=\frac{\mathrm{d} \theta}{\mathrm{d} t}=\frac{1}{r} \frac{\mathrm{d} s}{\mathrm{d} t}=\frac{v}{r}\] In three dimensions, we find \[\boldsymbol{v}=\boldsymbol{\omega} \times \boldsymbol{r}\] where \(\boldsymbol{r}\) points from the rotation axis to the rotating point. Unlike in linear motion, in rotational motion there is always acceleration, even if the rotational velocity \(\omega\) is constant. This acceleration originates in the fact that the direction of the (linear) velocity always changes as points revolve around the center, even if its magnitude, the linear speed, is constant. In that special case, taking another derivative gives us the linear acceleration, which points towards the center of rotation: \[\boldsymbol{a}(t)=\ddot{\boldsymbol{r}}(t)=-\omega^{2} r \cos (\omega t) \hat{\boldsymbol{x}}-\omega^{2} r \sin (\omega t) \hat{\boldsymbol{y}}=-\omega^{2} r \hat{\boldsymbol{r}} \label{linaccl}\] In Section 5.2 below we will use Equation \ref{linaccl} in combination with Newton’s second law of motion to calculate the net centripetal force required to maintain rotation at a constant rate. Of course the angular velocity \(\omega\) need not be constant at all. If it is not, we can define an angular acceleration by taking its time derivative: \[\alpha=\frac{\mathrm{d} \omega}{\mathrm{d} t}=\ddot{\theta}\] or in three dimensions, where \(\boldsymbol{\omega}\) is a vector: \[\boldsymbol{\alpha}=\frac{\mathrm{d} \boldsymbol{\omega}}{\mathrm{d} t}\] Note that when \(\boldsymbol{\alpha}\) is parallel to \(\boldsymbol{\omega}\), it simply represents a change in the rotation rate (i.e., a speeding up or slowing down of the rotation), but when it is not, it also represents a change of the plane of rotation. In both two and three dimensions, a change in rotation rate causes the linear acceleration to have a component in the tangential direction in addition to the radial acceleration (\ref{linaccl}).
The tangential component of the acceleration is given by the derivative of the linear velocity: \[\boldsymbol{a}_{\mathrm{t}}=\frac{\mathrm{d} \boldsymbol{v}}{\mathrm{d} t}=r \frac{\mathrm{d} \boldsymbol{\omega}}{\mathrm{d} t}=r \boldsymbol{\alpha}\] In two dimensions, \(\boldsymbol{a}_{\mathrm{t}}\) points along the \(\pm \hat{\boldsymbol{\theta}}\) direction. Naturally, there are even more complicated possibilities - the radius of the rotational motion can change as well. We’ll look at that case in more detail in Chapter 6, but first we consider ‘pure’ rotations, where the distance to the rotation axis is fixed. 1 If you need a refresher on polar coordinates, or are unfamiliar with polar basis vectors, check out appendix A.2.
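The relations \(v=\omega r\) and \(\boldsymbol{a}=-\omega^2 r\hat{\boldsymbol{r}}\) for uniform rotation are easy to sanity-check numerically; here is a sketch with made-up values of \(r\) and \(\omega\), using finite differences in place of the derivatives:

```python
import math

r, w = 2.0, 3.0      # radius and constant angular speed (made-up values)
t, h = 0.7, 1e-5     # evaluation time and finite-difference step

def pos(t):
    """Position on the circle, r(t) = (r cos wt, r sin wt)."""
    return (r * math.cos(w * t), r * math.sin(w * t))

# Velocity by central difference; its magnitude should equal w * r
vx = (pos(t + h)[0] - pos(t - h)[0]) / (2 * h)
vy = (pos(t + h)[1] - pos(t - h)[1]) / (2 * h)
speed = math.hypot(vx, vy)

# Acceleration by second central difference; should equal -w^2 * (x, y),
# i.e. point from the particle toward the center of rotation
ax = (pos(t + h)[0] - 2 * pos(t)[0] + pos(t - h)[0]) / h**2
ay = (pos(t + h)[1] - 2 * pos(t)[1] + pos(t - h)[1]) / h**2
```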
Griffiths, Introduction to QM, uses $\Psi$ and $\psi$ to denote a solution to the time-dependent and the time-independent Schrödinger equation, respectively. For a fixed energy $E$, rather than considering a general solution $\psi$ to the TISE, the book is at this point only interested in finding a generating set $\psi_n$ of solutions, so that a general solution is a linear combination $\psi=\sum_nc_n\psi_n$, cf. the superposition principle. The purpose of Exercise 2.1.b is to show that it is no loss of generality to assume that each generating element $\psi_n$ is a real function. The purpose of Exercise 2.1.c is to show, in the case of an even potential $V$, that it is no loss of generality to assume that each generating element $\psi_n$ is an even or an odd function. The book is not claiming that the general solution $\psi$ has to respect such symmetry.
Suppose we're selecting points uniformly at random from the $N$-simplex $S_N = \{x \in \mathbb R^{N}: $ all $ x_i \ge 0$ and $x_1 + \ldots + x_N = 1\}$. One way to do this in practice is to choose $N-1$ points $a_1, \ldots , a_{N-1}$ uniformly and independently from the unit interval $[0,1]$ and sort them in increasing order. Then, with $a_0=0$ and $a_N=1$, construct the point $x \in S_N$ with each $x_i = a_i - a_{i-1}$. One would expect for large $N$ the points $a_i$ to be evenly spaced across the interval, so that the average point of $S_N$ looks pretty much like the constant $1/N$ vector. That is to say, it's unlikely for any collection of entries to be small. Is anything known about the exact distribution of such collections? Formally put, suppose $X: \Omega \to S_N$ is a uniformly distributed random variable from some probability space onto the $N$-simplex. Define each $X_k : \Omega \to [0,1]$ by $X_k(x) = $ the $k$th largest coordinate of $X(x)$. I've drawn some samples for $k = N/2$, which is the case I'm mostly interested in. It seems that, after you normalise the variable by multiplying by $N$, the mean tends to about $0.7$ (marked with a vertical line) from above. The distributions are also increasingly tighter bell curves. The plots below use 100,000 samples per curve. Is there anything like a closed form known for the distribution of $X_k$? If not, are there any useful bounds for probabilities like $P(X_k > 1/N \pm \epsilon)$ or $P(X_k < 1/N \pm \epsilon)$ that match the behaviour above? I am also interested in the variables $Y_k = X_1 + \ldots + X_k$ if they are any easier to understand analytically, again primarily in the case $k = N/2$. Again the plots look like ever tighter bells, and the mean tends to $0.15$ (the black line) from above.
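The simulation is easy to reproduce, and the observed limit of about $0.7$ matches, if I recall correctly, the classical formula for uniform spacings: the $k$th largest of the $N$ spacings has expectation $\frac{1}{N}\sum_{j=k}^{N}\frac{1}{j}$, which for $k=N/2$ tends to $\ln 2/N \approx 0.693/N$. A quick sketch (sample sizes are arbitrary, not a rigorous treatment):

```python
import random

random.seed(0)
N, trials = 100, 4000
k = N // 2

total = 0.0
for _ in range(trials):
    cuts = sorted(random.random() for _ in range(N - 1))
    pts = [0.0] + cuts + [1.0]
    spacings = sorted(pts[i + 1] - pts[i] for i in range(N))  # ascending
    total += spacings[N - k]        # k-th largest spacing

# Scaled mean of the (N/2)-th largest coordinate; expect ~ ln 2 ~ 0.693
mean_scaled = N * total / trials
```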
The ladder operator method is used to solve the one-particle Schrödinger equation with a harmonic potential. What other potentials for the one-particle Schrödinger equation may be solved with the ladder operator method?

The hydrogen atom is an example where ladder operators can be used. There is a hidden SO(4) symmetry that explains the degeneracy in the principal quantum number, and one can use algebraic methods to get the eigenvalues. Here is a paper that does central force problems in general. The full symmetry group for the hydrogen atom is SO(4,2). Here is another resource. Some people think ladder operators are only for the harmonic oscillator, or for equally spaced eigenvalues, but this is because they are restricting themselves to the Heisenberg Lie algebra, which works for the harmonic oscillator; there are other problems with other Lie algebras and their own representation theory. For a trivial counterexample to the belief that you need equally spaced eigenvalues: suppose your Hamiltonian was just $L^2$. This is Hermitian, so legitimate; there is an algebraic method to solve for its eigenvalues that depends on the representation theory of $su(2)$ (or equivalently $sl(2,\mathbb{C})$), but as we all know the eigenvalues of this operator are not equally spaced.

Given a non-degenerate observable $A$, its ladder operator is an operator $L$ such that $[A, L] = \xi L $ for some $\xi \in \mathbb{R}$. We can then write $A (L \lvert a \rangle) = (LA + [A,L]) \lvert a \rangle = (a+\xi)(L \lvert a \rangle )$, which implies by non-degeneracy that $L \lvert a \rangle \propto \lvert a + \xi \rangle $. Also note that since $A$ is Hermitian, $[A, L^\dagger] = -\xi L^\dagger $, so similarly $L^\dagger \lvert a \rangle \propto \lvert a - \xi \rangle $. The Jacobi identity then implies that $[A, [L, L^\dagger]] = 0$.
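The defining relation $[A, L] = \xi L$ can be checked concretely with finite matrices. Here the number operator $\hat n = a^\dagger a$ plays the role of $A$ and the truncated harmonic-oscillator lowering operator plays the role of $L$, with $\xi = -1$ (a toy illustration, not from the discussion above):

```python
import numpy as np

dim = 6
# Truncated lowering operator: a|k> = sqrt(k)|k-1>
a = np.zeros((dim, dim))
for k in range(1, dim):
    a[k - 1, k] = np.sqrt(k)

n_op = a.T @ a                 # number operator, diag(0, 1, ..., dim-1)
comm = n_op @ a - a @ n_op     # commutator [n, a]; should equal -a exactly
```

The identity $[\hat n, a] = -a$ survives truncation exactly because $\hat n$ is diagonal and $a$ only connects adjacent levels.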
You said you were interested in solving the 1-D Schrödinger equation. In this case your observable $A$ is the Hamiltonian $H$. One-dimensional Hamiltonians are always non-degenerate when it comes to bound states, so this condition is always satisfied (see proof here). So if you can find an operator $L$ such that $[H, L] = \xi L $ for some $\xi \in \mathbb{R}$, then the problem can be solved by ladder operators. The energy of a 1-D system will also always be bounded below, since $P^2$ is positive definite and $V(X)$ has a minimum, given that you are interested in bound states. One essential feature of systems where ladder operators apply is that their spectrum (energy levels in the case of $A=H$, $\{a\}$ in general) is linearly spaced. If this makes qualitative sense, then trying ladder operators may be the way to go. All that said, I'm fairly certain that you could prove that linearly spaced energy levels imply a harmonic potential. If this is true, then the harmonic oscillator is the only 1-D system whose Schrödinger equation can be solved in this way.
With 120 V (DC for simplicity) as the input voltage, I have an LED that runs on 3.4 V and max 20 mA. Ohm's law states I=V/R, so R=V/I, so R = 120 V / 0.02 A = 6000 ohms or 6k. Is this correct? How do I know the voltage drop across the resistor is correct, and what about the power dissipated by the resistor? Also, is there an easy way to calculate everything at once?

If you are running an LED from the mains you do not want to run it at 20 mA. Any modern LED will be painfully bright at 20 mA and your resistor will waste too much power. Try something more like 2 mA (you can use the same calculations, just substitute the current you want): R = E/I = (120-3.4)/0.002 ~= 120/0.002 = 60K, so you can use 62K. Power is \$E^2\over R\$ = \$\frac{(120-3.4)^2}{62000}\$ ≈ 0.22 W, so you could use a 1/4-W resistor, though 1/3 or 1/2 W would be better - it needs enough voltage rating as well as power rating. Incidentally, a modern LED bulb of approximately 60 W equivalent-to-incandescent light output takes only 6-7 W, so if you ran your LED at 20 mA you'd be drawing almost half of that power for (relatively speaking) very little light - that's how inefficient a dropping resistor is. If you are using the mains with a high-voltage diode in series (and preferably another diode across the LED) you'll need to double the current to get the same brightness, so the resistor will be about half - say 30K - but the power dissipation will stay about the same. I'm ignoring the 11% difference between RMS and average for the purposes of a rough calculation.

You need to adjust the formula for the forward voltage of the LED: R = (Vs - Vf) / If. And wattage is simple: W = V * I. First subtract the LED forward voltage from the supply voltage: R = (120 V - 3.4 V) / 0.02 A = 5830 ohms. The closest standard value is probably 6k2. To figure the required power handling: P = I^2 * R = 0.02 * 0.02 * 5830 = 2.332 watts. This is the power your resistor must endure.
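The arithmetic above can be rolled into one small helper (a sketch; the function name and structure are my own):

```python
def led_resistor(v_supply, v_forward, i_led):
    """Series resistance (ohms) and resistor dissipation (watts) for an LED dropper."""
    r = (v_supply - v_forward) / i_led   # drop the remaining voltage across R
    p = i_led ** 2 * r                   # power dissipated in the resistor
    return r, p

r, p = led_resistor(120.0, 3.4, 0.020)    # -> 5830 ohms, ~2.33 W
r2, p2 = led_resistor(120.0, 3.4, 0.002)  # 2 mA variant: 58300 ohms, ~0.23 W
```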
To limit LED current, you can use the following approaches:

- Use a resistor to limit the current. Since most of the voltage will be dropped by this resistor, most (about 99%) of the power will also be dissipated by it. Since there is no energy storage in the circuit, the dissipation for the entire circuit will be the mains voltage times the chosen LED current.
- Use a capacitor to limit the current. The capacitor will periodically store and release energy, so this circuit has the possibility of being far more efficient. In theory, the LED would be the only dissipating element.
- Use a combination of resistor and capacitor to limit the current. Most of the reason for considering this option will arrive a little later on in the discussion below.

Each of the above possibilities carries with it yet another problem. While the AC mains supply may provide the needed current for the LED in one half of its total cycle time, during the other half the voltage will be arranged so that the LED is reverse-biased. This means the entire mains voltage may be presented across the LED itself. If the LED cannot withstand this voltage well, it may cease to function (and/or explode.) A simple idea would be to provide a bridge rectifier so that the LED is always forward-biased in the circuit. So I'd like to recommend that you consider the addition of a bridge rectifier. Also, I'd like to recommend that you consider the use of both a capacitor and also a resistor in the circuit. The resistor will perform the function of a safety fuse and will add only a very small amount of unwanted dissipation to the circuit. The capacitor will help make the circuit far, far more efficient. There are special considerations for all elements of the circuit. Capacitors used for purposes such as these fall into two broad categories:

- Class X -- designed to fail as a short
- Class Y -- designed to fail as an open

These each have subcategories, but the most commonly used of each will be X2 and Y2. So we must select from those two distinct categories, I think. (For a web page discussing some of these details, see: Capacitor Class Types, X and Y.) The choice of which to use depends on what you want to achieve. In this case, since I'm recommending the use of a fusing resistor too, you will want to use a capacitor of the X2 variety, designed to fail as a short, so that the fusing resistor will do its function. One category of fusing resistors are flame-proof and called "metal oxide film resistors." Some of these have been carefully crafted so that they operate well as fuses, too. The ERQA type code from Panasonic and PR01 from Vishay are two examples of such metal film resistors with fusing specifications. These resistors support the inrush current for a short time just fine, but will also safely open at a guaranteed value of instantaneous power (typically on the order of somewhere between a half-second and a few dozen seconds duration at some rated instantaneous power). Panasonic writes something like, "Open within 30 seconds at 12 times the rated power." Vishay provides charts (look on page 12 or so.) Either way, you can use a specification like this to aid a circuit for an LED light attached to the mains system. Let's say you want to supply approximately \$4\:\text{mA}\$ (RMS) to your LED and your mains power system is \$120\:\text{V}_\text{AC}\$ (RMS). Looking at the above circuit, and ignoring the resistor value for now, we find that the reactance needed (using RMS values) is: $$X_C=\frac{V_{AC}-V_{LED}-V_{BRIDGE}}{I_{LED}}=\frac{120\:\text{V}-3.4\:\text{V}-2\:\text{V}}{4\:\text{mA}}\approx 30\:\text{k}\Omega$$ Or, $$C=\frac{1}{2\pi\, f\, X_C}=\frac{1}{2\pi\cdot 60\:\text{Hz}\cdot 30\:\text{k}\Omega}\approx 100\:\text{nF}$$ That's a nice round value and easily picked up.
For the fusing resistor, we are limited by the range of values offered by suppliers. These tend to be values that are \$\le 560\:\Omega\$. Go to page 12 of the Vishay datasheet and look at the chart on the right side regarding their PR01 types. To be well within the chart here, we need the power (with the capacitor shorted) to be \$30-40\:\text{W}\$ so that we are well within the hatched area. So let's select \$36\:\text{W}\$ as the goal and calculate the resistance as \$\frac{\left(110\:\text{V}\right)^2}{36\:\text{W}}\approx 336\:\Omega\$. A nearby value is \$330\:\Omega\$. So that's the value to select here, as this will guarantee that a capacitor failure will cause this fusing resistor to open the circuit. Oh! Let's look at a schematic: (Note: Not all systems are handled with one side of the mains power connected to Earth ground -- parts of Japan come to mind.) The resistor is placed on the hot side and will disconnect the hot side of the circuit if the capacitor fails short. When the hot side has been disconnected in this way, the capacitor will also hold the entire circuit close to Earth ground; since we are selecting an X2 type of capacitor, it is rated to do just that (fail as a short.) The resistor will normally dissipate under \$10\:\text{mW}\$, so the Vishay PR01 offerings are just fine. You need to make sure you buy a class-X2 rated capacitor. And finally, you need to make sure the bridge supports the voltages present on the AC line. If you build it from diodes, use the 1N4007 types and get the maximum protection (though they are kind of large-bodied.) You may be tempted to use 1N4148 types, and they may work fine in this application, but it's better to get something often used with AC power and designed for the voltages involved. (Take note of the "peak reverse voltage" (PRV) / "peak inverse voltage" (PIV), as this rating needs to be above \$250\:\text{V}\$.) So there you have it.
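The reactance, capacitance, and fusing-resistor arithmetic above can be collected into a quick sketch (values from the worked example; variable names are illustrative):

```python
import math

v_ac, v_led, v_bridge = 120.0, 3.4, 2.0   # RMS mains, LED drop, two bridge-diode drops
i_led = 0.004                             # target RMS LED current, 4 mA
f = 60.0                                  # mains frequency, Hz

# Required capacitive reactance, then the capacitance that provides it
x_c = (v_ac - v_led - v_bridge) / i_led   # ohms
c = 1.0 / (2 * math.pi * f * x_c)         # farads; ~93 nF exactly, the text rounds
                                          # X_C to 30 k and picks the standard 100 nF

# Fusing resistor sized so a shorted capacitor dissipates ~36 W
r_fuse = 110.0 ** 2 / 36.0                # ohms; pick the nearby standard 330
```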
Given that the torque is the rotational analog of the force, and the angular momentum is that of the linear momentum, it will not come as a surprise that Newton’s second law of motion has a rotational counterpart that relates the net torque to the time derivative of the angular momentum. To see that this is true, we simply calculate that time derivative: \[\frac{\mathrm{d} \boldsymbol{L}}{\mathrm{d} t}=\frac{\mathrm{d} \boldsymbol{r}}{\mathrm{d} t} \times \boldsymbol{p}+\boldsymbol{r} \times \frac{\mathrm{d} \boldsymbol{p}}{\mathrm{d} t}=\boldsymbol{\tau} \label{consl}\] because \(\dot{\boldsymbol{r}} \times \boldsymbol{p}=\boldsymbol{v} \times m \boldsymbol{v}=0\) and \(\boldsymbol{r} \times \dot{\boldsymbol{p}}=\boldsymbol{r} \times \boldsymbol{F}=\boldsymbol{\tau}\). Some texts even use Equation \ref{consl} as the definition of torque and work from there. Note that in the case that there is no external torque, we arrive at another conservation law: Theorem 5.3 (Law of conservation of angular momentum). When no external torques act on a rotating object, its angular momentum is conserved. Conservation of angular momentum is why a rolling hoop keeps rolling, and why balancing a bicycle is relatively easy once you go fast enough. What about collections of particles? Here things are a little more subtle. Writing \(\boldsymbol{L}=\Sigma_{i} \boldsymbol{L}_{i}\) and again taking the derivative, we arrive at \[\frac{\mathrm{d} \boldsymbol{L}}{\mathrm{d} t}=\sum_{i} \boldsymbol{r}_{i} \times \boldsymbol{F}_{i}=\sum_{i} \boldsymbol{\tau}_{i} \label{theorem}\] Now the sum on the right-hand side of \ref{theorem} includes both external torques exerted on the system, and internal torques exerted by the particles on each other. When we discussed conservation of linear momentum, the internal forces all canceled pairwise because of Newton’s third law of motion.
For torques this is not necessarily true, and we need the additional condition that the internal forces between two particles act along the line connecting those particles - then the internal torques cancel pairwise, and Equation \ref{consl} holds for the collection as well. Consequently, if the net external torque is zero, angular momentum is again conserved.
An algebraic approach to entropy plateaus in non-integer base expansions
Pieter C. Allaart, Mathematics Department, University of North Texas, 1155 Union Cir #311430, Denton, TX 76203-5017, USA

For a positive integer $ M $ and a real base $ q\in(1, M+1] $, let $ {\mathcal{U}}_q $ denote the set of numbers having a unique expansion in base $ q $ over the alphabet $ \{0, 1, \dots, M\} $, and let $ \mathbf{U}_q $ denote the corresponding set of sequences in $ \{0, 1, \dots, M\}^{\mathbb{N}} $. Komornik et al. [Adv. Math. 305 (2017), 165–196] showed recently that the Hausdorff dimension of $ {\mathcal{U}}_q $ is given by $ h(\mathbf{U}_q)/\log q $, where $ h(\mathbf{U}_q) $ denotes the topological entropy of $ \mathbf{U}_q $. They furthermore showed that the function $ H: q\mapsto h(\mathbf{U}_q) $ is continuous, nondecreasing and locally constant almost everywhere. The plateaus of $ H $ were characterized by Alcaraz Barrera et al. [Trans. Amer. Math. Soc. 371 (2019), 3209–3258]. In this article we reinterpret the results of Alcaraz Barrera et al. by introducing a notion of composition of fundamental words, and use this to obtain new information about the structure of the function $ H $. This method furthermore leads to a more streamlined proof of their main theorem.

Keywords: Beta-expansion, univoque set, topological entropy, entropy plateau, transitive subshift, composition of fundamental words.

Mathematics Subject Classification: Primary: 11A63; Secondary: 37B10, 37B40, 68R15.

Citation: Pieter C. Allaart. An algebraic approach to entropy plateaus in non-integer base expansions. Discrete & Continuous Dynamical Systems - A, 2019, 39 (11): 6507-6522. doi: 10.3934/dcds.2019282
In this paper, we use the thermodynamical formalism to show that there exists a unique equilibrium state $\mu_\phi$ for each expanding Thurston map $f : S^2\rightarrow S^2$ together with a real-valued Hölder continuous potential $\phi$. Here the sphere $S^2$ is equipped with a natural metric induced by $f$, called a visual metric. We also prove that identical equilibrium states correspond to potentials that are co-homologous up to a constant, and that the measure-preserving transformation $f$ of the probability space $(S^2,\mu_\phi)$ is exact, and in particular, mixing and ergodic. Moreover, we establish versions of equidistribution of preimages under iterates of $f$, and a version of equidistribution of a random backward orbit, with respect to the equilibrium state. As a consequence, we recover various results in the literature for a postcritically-finite rational map with no periodic critical points on the Riemann sphere equipped with the chordal metric. Communications in Mathematical Physics – Springer Journals. Published: Jan 31, 2018
Let $X$ be a quasi-projective variety over a perfect field $k$. Given a projective embedding $j : X\to \mathbf{P}(\mathscr{E})$, the Chow variety $\text{Chow}_r(X, j)$ is a quasi-projective variety parametrizing effective proper cycles on $X$ of codimension $r\ge 0$. Is there some variant of the Chow construction, so as to provide a quasi-projective variety $\text{Cyc}_r(X,j)$ parametrizing (proper?) cycles on $X$? Namely, removing the effectivity condition? A naive attempt is, calling $M := \text{Chow}_r(X,j)$, to take the quotient of $M\times_kM$ by the equivalence relation $R$ defined by $$R := (M\times_k M\times_k M\times_k M)\times_{\mu, M\times_kM, \Delta} M$$ where the map $\mu:M\times_k M\times_k M\times_k M\to M\times_kM$ sends $(a,b,c,d)$ to $(a + d, b+c)$ and the map $\Delta : M\to M\times_kM$ is the diagonal. It seems $R$ is an étale equivalence relation, and then $\text{Cyc}_r(X,j) := M\times_kM/R$ should be a group object in algebraic spaces; if so, it is separated and locally of finite type, hence a $k$-group scheme. In other words, $\text{Cyc}_r(X,j)$ is meant to be a naive "scheme-theoretic group completion", using the fact that $M$ is a monoid object in $k$-schemes under the usual addition of effective cycles, and that it is cancellative. Is this too naive a try? Is there a construction that actually works? EDIT: a general question about monoid objects in algebraic spaces. Let $S$ be a scheme, $M\to S$ a (commutative) cancellative monoid object in algebraic spaces (i.e., for any $S$-scheme $T$, the set $M(T)$ is a commutative monoid with zero, functorially in $T$, and it is cancellative). We define $M^{\rm gp}$ as the fppf quotient sheaf of $M\times_SM$ by the following equivalence relation: $$R := (M\times_SM\times_SM\times_SM)\times_{\mu, M\times_SM,\Delta_{M/S}}M$$ with: $\mu : M\times_SM\times_SM\times_SM\to M\times_SM$ the map defined functorially on $T$-sections by sending $(a_T, b_T, c_T, d_T)$ to $(a_T+d_T, b_T+c_T)$.
$\Delta_{M/S} : M\to M\times_SM$ is the diagonal. $s,t : R\to M\times_SM$ are defined to be the pullback along $\mu$ of $\text{pr}_1\circ\Delta_{M/S}$ and $\text{pr}_2\circ\Delta_{M/S}$. Is $M^{\rm gp}$ a (commutative) group object in algebraic spaces? (i.e., is it an algebraic space?) Remark. If so, the naive idea is then to consider the algebraic space $\text{Cyc}_r(X,j) := \text{Chow}_r(X,j)^{\rm gp}$, which would be, in addition, locally of finite type and separated over $k$, hence a $k$-group scheme. EDIT: see the more general question Algebraic-space theoretic group completions
Rules: Use ALL the digits in the year $2019$ (you may not use any other numbers except $2, 0, 1, 9$) to write mathematical expressions that give results for the numbers 50 to 100. You may use the arithmetic operations $+$, $-$, $\times$, $\sqrt{}$, and $!$ (see below). Indices or exponents may only be made from the digits $2, 0, 1,$ and $9$; for example, $(9+1)^2$ is allowed, as it uses the $9$, $1$ and $2$. Multi-digit numbers and decimal points can be used, such as $20$, $102$, and $.02$, but you CANNOT make 30 by combining $(2+1)0$. Recurring decimals can be written using the overhead dots or bar, e.g. $0.\bar1=0.111\ldots=\frac19$. Factorials are allowed. Here's how you might use factorials: $n!=n\times(n-1)\times(n-2)\times\dots\times2\times1$. For example, $(10-9+2)!=3!=3\times2\times1=6$, and $0!=1$.
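The search for such expressions can be partly automated. Below is a minimal brute-force sketch (my own, not part of the puzzle) that covers only $+$, $-$, $\times$ and digit concatenation; it ignores $\sqrt{}$, $!$, decimals, and exponents, so it finds only a subset of the reachable targets:

```python
from itertools import permutations

def values(s):
    """All values reachable from the digit string s (kept in order),
    using multi-digit concatenation and the binary ops +, -, *."""
    vals = {int(s)}                     # e.g. "20", "102" as multi-digit numbers
    for i in range(1, len(s)):
        for a in values(s[:i]):
            for b in values(s[i:]):
                vals.update({a + b, a - b, a * b})
    return vals

# Try every ordering of the digits 2, 0, 1, 9
reachable = set()
for perm in permutations("2019"):
    reachable |= values("".join(perm))

hits = sorted(v for v in reachable if 50 <= v <= 100)
print(hits)
```

For instance it finds $92 - 1 + 0 \cdot(\dots)$-style answers such as $90 - 12 = 78$ and $92 + 0 - 1 = 91$; extending it to factorials and square roots would require applying those unary ops to each intermediate value with a depth cap.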
Neural Style Transfer - Art Generation using Neural networks A really cool implementation of CNN is Neural Style Transfer for Art Generation. It basically merges two images - one Content image and one Style image - to create a new image which is a combination of the two. Some of the generated images are posted on This Page. Nomenclature used: Content Image (C), Style Image (S), Generated Image (G). What are Deep ConvNets Learning? Visualizing what a deep network is learning: Neurons in each layer are activated by a specific portion of an image (activation means the unit achieves its maximum value). For example, a particular neuron in a layer might get activated by a 45-degree line or a specific color or a round shape. If the image contains any such feature, that particular neuron will attain its maximum value. How to get this - Pick a unit in layer 1. Find the nine image patches that maximize the unit’s activation. Repeat for other units. The units in the initial few layers learn basic features (a line, a color, etc.) while units in the later layers learn more complex functions (faces, people, animals, clouds, tyres etc.). This is shown in the images below (shown as a small part of the layer): layer 1 learns the basic features, while units in deeper layers learn progressively more complex functions.
[Figure: example unit visualizations for Layer 1 through Layer 5] Neural Style Transfer : Cost Function The cost function $J(G)$ for a generated image is made up of two individual cost functions - the Content Cost Function $J_{Content}$ and the Style Cost Function $J_{Style}$, weighted by the hyperparameters $\alpha$ and $\beta$: $J(G) = \alpha J_{Content}(C,G) + \beta J_{Style}(S,G)$ To create the generated image G: Initialize G randomly (e.g. 100x100x3) - this will just create a random noisy image. Use Gradient Descent to minimize $J(G)$, which will start with the noisy generated image G and then slowly blend in C and S together as per the weights. Content Cost Function $J_{Content}(C,G)$ Say you use hidden layer $l$ to compute content cost. Use a pre-trained Conv network (e.g. the VGG-19 network). Let $a^{l(C)}$ and $a^{l(G)}$ be the activations of layer $l$ on the images. If $a^{l(C)}$ and $a^{l(G)}$ are similar, both images have similar content: $J_{Content}(C,G) = \frac12 || a^{l(C)} - a^{l(G)}||^2$ Style Cost Function $J_{Style}(S,G)$ Meaning of the “Style” of an image: Say you are using layer $l$’s activation to measure “style”. Define style as the correlation between activations across channels. What does it mean for two channels to be correlated? It means that whatever part of the image has the vertical texture (1,2 grid), that part of the image will also have the orangish tint (2,1 grid). Uncorrelated means the vertical texture in 1,2 won’t have the orangish tint in 2,1. Correlation tells you which of these high-level texture components tend to occur together in an image. Style Matrix Let $a_{i,j,k}^{[l]}$ = activation at (i,j,k).
i: height H, j: width W, k: channel C. $G^{l(S)}$ is $n_c^{[l]} \times n_c^{[l]}$ - the height and width of this matrix $G^{l(S)}$ for layer l is number of channels x number of channels. Calculating how correlated the channels $K$ and $K'$ are: $G_{KK'}^{l(S)} = \sum_{i=1}^{n_h^{[l]}}\sum_{j=1}^{n_w^{[l]}} a_{ijK}^{(S)[l]} a_{ijK'}^{(S)[l]}$ This is going to be the style matrix for the input Style image S. Doing the same thing for the generated image G: $G_{KK'}^{l(G)} = \sum_{i=1}^{n_h^{[l]}}\sum_{j=1}^{n_w^{[l]}} a_{ijK}^{(G)[l]} a_{ijK'}^{(G)[l]}$ K and K' range from 1 to $n_c^{[l]}$. In linear algebra, this matrix G is called the Gram Matrix. $G_{KK'}^{l(S)}$ : style of image S; $G_{KK'}^{l(G)}$ : style of image G. Style Cost Function $J_{Style}^{[l]} = || G^{(S)[l]} - G^{(G)[l]} ||^2_F$ or, written out, $J_{Style}^{[l]} = \frac1{(2 n_H^{[l]}n_W^{[l]}n_C^{[l]})^2}\sum_K \sum_{K'} (G_{KK'}^{(S)[l]} - G_{KK'}^{(G)[l]})^2$ You’ll get better results if you use the style cost function from multiple layers. Overall style cost function: $J_{Style}(S,G) = \sum_l \lambda^{[l]} J_{Style}^{[l]}(S,G)$ Overall Cost Function $J(G) = \alpha J_{Content}(C,G) + \beta J_{Style}(S,G)$ Source material from Andrew Ng’s awesome course on Coursera. The material in the video has been written in text form so that anyone who wishes to revise a certain topic can go through this without going through the entire video lectures.
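The cost functions above translate almost line for line into NumPy. This is a minimal sketch of my own (the function names are not from the course); activations are assumed to be arrays of shape (n_H, n_W, n_C) for a single chosen layer:

```python
import numpy as np

def content_cost(a_C, a_G):
    """J_content = 1/2 * ||a_C - a_G||^2 over the chosen layer's activations."""
    return 0.5 * np.sum((a_C - a_G) ** 2)

def gram_matrix(a):
    """Style (Gram) matrix: G[k, k'] = sum_{i,j} a[i,j,k] * a[i,j,k']."""
    n_H, n_W, n_C = a.shape
    flat = a.reshape(n_H * n_W, n_C)   # rows: spatial positions, cols: channels
    return flat.T @ flat               # shape (n_C, n_C)

def style_cost_layer(a_S, a_G):
    """Per-layer style cost with the 1/(2 n_H n_W n_C)^2 normalization."""
    n_H, n_W, n_C = a_S.shape
    G_S, G_G = gram_matrix(a_S), gram_matrix(a_G)
    return np.sum((G_S - G_G) ** 2) / (2 * n_H * n_W * n_C) ** 2

def total_cost(J_content, J_style, alpha=10.0, beta=40.0):
    """J(G) = alpha * J_content + beta * J_style (alpha, beta illustrative)."""
    return alpha * J_content + beta * J_style
```

In a real run, a_C, a_S, and a_G would come from forward passes of a pre-trained network such as VGG-19, and the overall style cost would sum `style_cost_layer` over several layers with weights $\lambda^{[l]}$.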
Given any non-isosceles triangle $ABC$, let $AB$ denote its longest side, and draw the two circles with centers in $A$ and $B$ and passing through $C$. They determine two additional points $E$ and $D$ on the side $AB$. If we draw the circles with centers in $A$ and $B$ and passing through $D$ and $E$, respectively, we obtain two other points $F$ and $G$ on the sides $AC$ and $CB$, respectively. The whole triangle is thus subdivided into three kinds of segments, of length $\alpha$ (red), $\beta$ (blue) and $\gamma$ (green). (In the post A conjecture related to a circle intrinsically bound to any triangle it is shown that the points $DFCGE$ always determine a circle). Given the lengths of the three sides of $ABC$ ($\overline{AB}=\alpha+\beta+\gamma$, $\overline{AC}=\alpha+\gamma$, and $\overline{CB}=\beta+\gamma$), the triangle $ABC$ is uniquely determined. What is the general relation between $\alpha,\beta$ and $\gamma$? So far, I observed that $\gamma=\sqrt{2\alpha\beta}$ for each right triangle, but I have trouble finding the function $\gamma=\gamma(\alpha,\beta)$ for a general triangle. Thanks for your suggestions!
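The right-triangle observation is easy to check numerically. Inverting the three linear relations in the question gives $\alpha = c - a$, $\beta = c - b$, $\gamma = a + b - c$ for sides $a = \overline{CB}$, $b = \overline{AC}$, $c = \overline{AB}$; a quick sketch (my own) verifying $\gamma = \sqrt{2\alpha\beta}$ on the 3-4-5 triangle:

```python
import math

def segments(a, b, c):
    """Given side lengths a = CB, b = AC, c = AB (longest), return
    (alpha, beta, gamma) from alpha+beta+gamma = c, alpha+gamma = b,
    beta+gamma = a."""
    return c - a, c - b, a + b - c

# Right triangle 3-4-5: gamma^2 should equal 2*alpha*beta
alpha, beta, gamma = segments(3, 4, 5)
print(alpha, beta, gamma, math.isclose(gamma**2, 2 * alpha * beta))
```

Indeed, with $a^2 + b^2 = c^2$ one checks algebraically that $(a+b-c)^2 = 2(c-a)(c-b)$, confirming the observation for every right triangle.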
Optimality conditions for $E$-differentiable vector optimization problems with the multiple interval-valued objective function 1. Faculty of Mathematics and Computer Science, University of Łódź, Banacha 22, 90-238 Łódź, Poland 2. Department of Mathematics, Hadhramout University, P.O. Box (50511-50512), Al-Mahrah, Yemen In this paper, a nonconvex vector optimization problem with multiple interval-valued objective function and both inequality and equality constraints is considered. The functions constituting it are not necessarily differentiable, but they are $E$-differentiable. The so-called $E$-Karush-Kuhn-Tucker necessary optimality conditions are established for the considered $E$-differentiable vector optimization problem with the multiple interval-valued objective function. Also the sufficient optimality conditions are derived for such interval-valued vector optimization problems under appropriate (generalized) $E$-convexity hypotheses. Keywords: $E$-differentiable function, $E$-differentiable vector optimization problem with multiple interval-valued objective function, $E$-Karush-Kuhn-Tucker necessary optimality conditions, $E$-convex function. Mathematics Subject Classification: Primary: 90C29, 90C30, 90C46, 90C26. Citation: Tadeusz Antczak, Najeeb Abdulaleem. Optimality conditions for $E$-differentiable vector optimization problems with the multiple interval-valued objective function. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2019089
I am investigating a somewhat obscure area of number theory. Can I post a proposition and a complete proof and ask people to check it? Say I've proven a theorem or found a solution to a problem. I'm close to being sure that my proof or solution is fine. However, I'm not entirely confident in my judgement. Maybe I'm new to the topic, or I remember that I've thought my incorrect proofs of the same difficulty level were correct many tim... I am trying to learn proof-based math on my own and once I construct a proof, I often have a gut feeling that it's not airtight. What is the best way of asking such questions? My findings till now: This question's answer in Meta points out that reading other people's long and formal proofs i... Is this the proper site to ask about the correctness of one's proof? The proofs I would intend on asking questions about are elementary mathematics proofs such as a proof about infinitely many primes, induction equalities and inequalities, etc... Or is there another site for "Proof Review"? Simila... I am aware that it is probably better not to have too many meta-tags such as homework, soft-question, big-list or reference-request. Despite this, I'd like to ask other MSE users whether they would consider a tag for "check my proof" questions useful. We have a lot of such questions and they are... Is it okay to ask questions where you give the solution and ask people to review it to see if it is correct? Recently, I have seen a lot of questions that essentially ask, "Is this proof correct?" or "Can you verify my work is correct?" like this one. Often, especially when the asker's work is in fact correct, these questions have a one-word answer. I feel that these questions could be much improved b... I solve some exercises but I'm self-studying and some books do not have answers to the exercises; is it ok to ask if the solution is plausible here?
I occasionally come across questions (such as this one: Prove that the $\sigma$-algebras are equal) in which the person asking the question has already answered their question and wants to know whether their approach is right. There are two possibilities: Their approach is wrong. Then you ... I envision a site where people could post their proofs -- either as answers to textbook exercises or revisions of current textbook proofs -- and get feedback on which parts might need improving, and perhaps how this might be done. This sort of site could be useful to a number of people. Namely, ... Over a week ago I asked this question. In it, I proposed a solution and asked for it to be verified, or for an alternative solution to be suggested. Currently, there are no answers but there is one comment stating that my proof is correct (as well as indicating the need for more explanation of a ... I feel bothered by questions where the body begins with a problem statement and then is followed by a giant solution and then succeeded by the question: "Can someone please tell me whether or not this solution is correct?" Here is an example of what I'm talking about: Example. Here are my compla... I've always wanted math competitions on MSE ever since I joined. These could be either user-held or officially held, whichever seems better. User-held competitions would run as follows. A user starts a competition with a specified level, with original problems that he/she writes. People sign... I was puzzling: "What is the shortest proof of $\exists x \forall y (P(x) \to P(y))$?" (a variation of the drinker's paradox; see Proof of Drinker paradox) given a certain set of inference rules and using natural deduction. I managed to prove it in 23 lines, but I am not sure if this is the shor...
This isn't the usual issue about questions from contest competitions being posted here for assistance, but rather about the use of Math.SE as a venue for hosting a "contest" using a future bounty as prize. This recent Question asks for participation according to rules (the post lists seven of th...
I'm trying to reproduce the results of a paper on Device-to-Device Communication, channel assignment and mode selection (G. Yu, L. Xu, D. Feng, R. Yin, G. Y. Li and Y. Jiang, "Joint Mode Selection and Resource Allocation for Device-to-Device Communications," Nov. 2014.). I have a couple of questions on the noise spectral density, how to generate the channel taps, and finally the throughput of the entire system. The paper states the following: "All links are assumed to experience independent block fading. Hence, the instantaneous channel gain of the interference link between CU m and the receiver of D2D pair k can be expressed as $$h_{k,m} = Gβ_{k,m}d^{−\alpha}_{k, m},$$ where G is the path loss constant, α is the path loss exponent, $β_{k,m}$ denotes the channel fading component, and $d_{k,m}$ is the distance between CU m and the receiver of D2D pair k. Similarly, we can express the channel gain between CU m and the eNB as $h^C_{m,B}$, the channel gain between D2D pair k as $h^D_k$, and the channel gain between the transmitter of D2D pair k and the eNB as $h^D_{k,B}$." Towards the last section, we have the following relevant simulation parameters: Noise Spectral Density: -174 dBm/Hz. Path Loss Model (for Cellular Links): $128.1 + 37.6 \times \log_{10}(d[km])$. Log-Normal Shadowing: 10 dB. Uplink Bandwidth: 3 MHz. Now my questions are related to the channel model: How do I convert from the Path Loss Model combined with the Shadowing Effect of 10 dB to the first mentioned equation ($h_{k,m} = Gβ_{k,m}d^{−\alpha}_{k, m}$)? When I convert the noise spectral density from dBm/Hz to dBm, should I divide the value by the number of uplink channels, which is 20? And if so, should I do that in dB or in watts?
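For what it's worth, here is how I would wire these parameters together. This is my own reading of the model, not the paper's code: the dB path-loss formula is just $G d^{-\alpha}$ expressed in dB (with $10\log_{10} G = -128.1$ and $\alpha = 3.76$), shadowing multiplies in as a log-normal factor, and $\beta$ is the fast-fading power (exponential for Rayleigh):

```python
import math
import random

def cellular_channel_gain(d_km, shadowing_std_db=10.0):
    """Linear channel gain for a cellular link, assuming path loss
    128.1 + 37.6*log10(d[km]) dB, log-normal shadowing, and
    exponentially distributed fading power (Rayleigh |h|^2)."""
    pl_db = 128.1 + 37.6 * math.log10(d_km)
    shadow_db = random.gauss(0.0, shadowing_std_db)
    fading = random.expovariate(1.0)   # beta: exponential with unit mean
    return fading * 10 ** (-(pl_db + shadow_db) / 10)

def noise_power_dbm(bandwidth_hz, psd_dbm_per_hz=-174.0):
    """Thermal noise power over the full band; per-channel noise is this
    minus 10*log10(n_channels) in dB (i.e., divide in the linear domain)."""
    return psd_dbm_per_hz + 10 * math.log10(bandwidth_hz)

print(noise_power_dbm(3e6))  # ~ -109.2 dBm over the 3 MHz uplink
```

Note the dB/linear point: dividing by the number of channels is a subtraction of $10\log_{10}(20)$ in dB, or an actual division after converting to watts; doing a plain division on the dBm value would be wrong.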
When multiple forces act on a body, the (vector) sum of those forces gives the net force, which is the force we substitute in Newton’s second law of motion to get the equation of motion of the body. If all forces sum up to zero, there will be no acceleration, and the body retains whatever velocity it had before. Statics is the study of objects that are neither currently moving nor experiencing a net force, and thus remain stationary. You might expect that this study is easier than the dynamical case when bodies do experience a net force, but that just depends on context. Imagine, for example, a jar filled with marbles: they aren’t moving, but the forces acting on the marbles are certainly not zero, and also not uniformly distributed. Even if there is no net force, there is no guarantee that an object will exhibit no motion: if the forces are distributed unevenly along an extended object, it may start to rotate. Rotations always happen around a stationary point, known as the pivot. Only a force that has a component perpendicular to the line connecting its point of action to the pivot (the arm) can make an object rotate. The tendency of a force to cause rotation therefore depends on both the magnitude of that perpendicular component and the length of the arm; the corresponding quantity is known as the moment of the force, or the torque \(\tau\). The magnitude of the torque is given by \(Fr\sin\theta\), where \(F\) is the magnitude of the force, \(r\) the length of the arm, and \(\theta\) the angle between the force and the arm.
If we write the arm as a vector \(\boldsymbol{r}\) pointing from the pivot to the point where the force acts, we find that the torque equals the cross product of \(\boldsymbol{r}\) and \(\boldsymbol{F}\): \[\boldsymbol{\tau}=\boldsymbol{r} \times \boldsymbol{F}\] The direction of rotation can be found by the right-hand rule from the direction of the torque: if the thumb of your right hand points along the direction of \(\boldsymbol{\tau}\), then the direction in which your fingers curve will be the direction in which the object rotates due to the action of the corresponding force \(\boldsymbol{F}\). We will study rotations in detail in Chapter 5. For now, we’re interested in the case that there is no motion, neither linear nor rotational, which means that the forces and torques acting on our object must satisfy the stability condition: for an extended object to be stationary, both the sum of the forces and the sum of the torques acting on it must be zero. Example \(\PageIndex{1}\): suspended sign A sign of mass \(M\) hangs suspended from a rod of mass \(m\) and length \(L\) in a symmetric way and such that the centers of mass of the sign and rod nicely align (Figure \(\PageIndex{1a}\)). One end of the rod is anchored to a wall directly, while the other is supported by a wire with negligible mass that is attached to the same wall a distance \(h\) above the anchor. (a) If the maximum tension the wire can support is \(T\), find the minimum value of \(h\). (b) For the case that the tension in the wire equals the maximum tension, find the force (magnitude and direction) exerted by the anchor on the rod. Solution (a) We first draw a free-body diagram, Figure \(\PageIndex{1b}\). Force balance on the sign tells us that the tensions in the two lower wires sum to the gravitational force on the sign. The rod is stationary, so we know that the sum of the torques on it must vanish. To get torques, we first need a pivot; we pick the point where the rod is anchored to the wall.
We then have three forces contributing a clockwise torque, and one contributing a counterclockwise torque. We’re not told exactly where the wires are attached to the rod, but we are told that the configuration is symmetric and that the center of mass of the sign aligns with that of the rod. Let the first wire be a distance \(\alpha L\) from the wall, and the second a distance \((1-\alpha)L\). The total (clockwise) torque due to the gravitational force on the sign and rod is then given by: \[\tau_{\mathrm{Z}}=\frac{1}{2} m g L+\frac{1}{2} M g \alpha L+\frac{1}{2} M g(1-\alpha) L=\frac{1}{2}(m+M) g L. \nonumber\] The counterclockwise torque comes from the tension in the wire, and is given by \[\tau_{\mathrm{T}}=F_{\mathrm{T}} L \sin \theta=F_{\mathrm{T}} L \frac{h}{\sqrt{h^{2}+L^{2}}}. \nonumber\] Equating the two torques allows us to solve for \(h\) as a function of \(F_T\), as requested, which gives: \[h^{2}=\left(\frac{1}{2} \frac{(m+M) g}{F_{\mathrm{T}}}\right)^{2}\left(h^{2}+L^{2}\right) \quad \rightarrow \quad h=\frac{(m+M) g L}{\sqrt{4 F_{\mathrm{T}}^{2}-(m+M)^{2} g^{2}}}\] We find the minimum value of \(h\) by substituting \(F_T=T_{max}\). (b) As the rod is stationary, all forces on it must cancel. In the horizontal direction, we have the horizontal component of the tension, \(T_{max}\cos \theta\) to the left, which must equal the horizontal component of the force exerted by the wall, \(F_{\mathrm{w}} \cos \phi\). In the vertical direction, we have the gravitational force and the two forces from the wires on which the sign hangs in the downward direction, and the vertical component of the tension in the wire in the upward direction, the sum of which must equal the vertical component of the force exerted by the wall (which may point either up or down). We thus have \[\begin{align*} F_{\mathrm{w}} \cos \phi &=T_{\max } \cos \theta \\[4pt] F_{\mathrm{w}} \sin \phi &=(m+M) g+T_{\max } \sin \theta \end{align*}\] where \(\tan \theta={h \over L}\) and \(h\) is given in the answer to (a).
We find that \[\begin{align*} F_{\mathrm{w}}^{2} &=T_{\max }^{2}+2(m+M) g T_{\max } \sin \theta+(m+M)^{2} g^{2} \\[4pt] \tan \phi &=\frac{(m+M) g+T_{\max } \sin \theta}{T_{\max } \cos \theta} \end{align*}\] Note that the above expressions give the complete answer (magnitude and direction). We could eliminate \(h\) and \(\theta\), but that’d just be algebra, leading to more complicated expressions, and not very useful in itself. If we’d been asked to calculate the height or force for any specific values of \(M\), \(m\), and \(L\), we could get the answers easily by substituting the numbers in the expressions given here.
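The worked example lends itself to a quick numerical sanity check. The sketch below (with illustrative values I chose for \(m\), \(M\), \(L\), and \(T\), not values from the text) computes \(h\) from the closed-form answer and confirms that the torques about the anchor balance and that the wall force satisfies the magnitude relation:

```python
import math

# Illustrative parameters (assumed): 2 kg rod, 5 kg sign, 2 m rod, 200 N wire
m, M, L, T, g = 2.0, 5.0, 2.0, 200.0, 9.81

# Part (a): minimum wire attachment height at maximum tension
h = (m + M) * g * L / math.sqrt(4 * T**2 - ((m + M) * g)**2)

# Torque balance about the anchor: wire tension (counterclockwise)
# versus gravity on rod and sign (clockwise)
theta = math.atan2(h, L)              # tan(theta) = h / L
tau_ccw = T * L * math.sin(theta)
tau_cw = 0.5 * (m + M) * g * L
print(math.isclose(tau_ccw, tau_cw))

# Part (b): anchor force from the two component equations
F_w = math.hypot(T * math.cos(theta), (m + M) * g + T * math.sin(theta))
print(math.isclose(F_w**2,
                   T**2 + 2 * (m + M) * g * T * math.sin(theta)
                   + ((m + M) * g)**2))
```

Both checks print `True`, confirming that the algebraic manipulations in the example are self-consistent.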
Qmechanic's answer has the full details, but I figure it might be useful to break this down in a little more detail. The proper way to generalize Maxwell's equations to higher-dimensional spaces is to use the field tensor $F^{\mu\nu}$. In our normal 3+1D space, it basically looks like this: $$F = \begin{pmatrix}0 & E_1 & E_2 & E_3 \\ -E_1 & 0 & B_{12} & B_{13} \\ -E_2 & -B_{12} & 0 & B_{23} \\ -E_3 & -B_{13} & -B_{23} & 0\end{pmatrix}$$ If you apply a rotation matrix corresponding to a 3D spatial rotation to this tensor, you will find that it mixes up the three components $E_1$, $E_2$, and $E_3$, and it separately mixes up the three components $B_{12}$, $B_{13}$, and $B_{23}$. The way in which these components mix is consistent with $E$ being a vector in 3D space and with $B$ being a separate vector in 3D space; for example, if you rotate by $\phi$ around the $z$ axis, you'll find that $E'_1 = E_1\cos\phi - E_2\sin\phi$ and $B_{23}' = B_{23}\cos\phi + B_{13}\sin\phi$. Doing this for a few rotations allows you to conclude that $(E_1,E_2,E_3)$ are respectively the $(x,y,z)$ components of $\vec{E}$, and that $(B_{23},-B_{13},B_{12})$ are the $(x,y,z)$ components of $\vec{B}$. However, if you apply a Lorentz boost (a change in velocity), which is just a rotation that incorporates the time dimension as well as the three spatial dimensions, you will find that it mixes the components of $\vec{E}$ with the components of $\vec{B}$. Nothing in the theory of normal 3D vectors allows this to happen. So if you didn't know about $F$, this would be your first clue that $\vec{E}$ and $\vec{B}$ are not really vectors, but instead are part of some more complex structure. This is in fact exactly what happened in the 1830s-ish, when Michael Faraday discovered that moving a magnet past a wire would induce an electrical current in the wire. 
The Lorentz boost corresponds to moving the magnet, and the rotation of components of $\vec{B}$ into components of $\vec{E}$ corresponds to the induction of an electric field by the moving magnet. Faraday and the others who capitalized on his work later in that century knew enough to recognize that $\vec{E}$ and $\vec{B}$ were related somehow, in a way that ordinary vectors weren't, but it wasn't until the introduction of special relativity that anyone figured out the actual mathematical structure that both $\vec{E}$ and $\vec{B}$ are part of (namely, the antisymmetric tensor $F$). Anyway, once you know what $F$ really looks like, it's straightforward to generalize it to higher dimensions. In 4+1D, for example, $$F = \begin{pmatrix}0 & E_1 & E_2 & E_3 & E_4 \\ -E_1 & 0 & B_{12} & B_{13} & B_{14} \\ -E_2 & -B_{12} & 0 & B_{23} & B_{24} \\ -E_3 & -B_{13} & -B_{23} & 0 & B_{34} \\ -E_4 & -B_{14} & -B_{24} & -B_{34} & 0\end{pmatrix}$$ This has 4 elements of an electric nature, but 6 of a magnetic nature. If you apply a 4D spatial rotation matrix to this, you will still find that the 4 components $E_1, E_2, E_3, E_4$ mix in the way you expect of a 4D vector, but the 6 magnetic components don't. So if we lived in a 4+1D universe, it would be obvious that the magnetic field is not a vector, if nothing else because it has more elements than a 4+1D spatial vector does. OK, so what about Maxwell's equations? Well, it turns out that you can express the cross product in three dimensions using the antisymmetric tensor $\epsilon^{\alpha\beta\gamma}$, which is defined component-wise as follows: +1 if $\alpha\beta\gamma$ is a cyclic permutation of $123$ -1 if it's a cyclic permutation of $321$ 0 if any two of the indices are equal Same goes for the curl; you can write the curl operator in tensor form as $$\bigl(\vec\nabla\times\vec{V}\bigr)^\alpha = \epsilon^{\alpha\beta\gamma}\frac{\partial V_\beta}{\partial x^\gamma}$$ using the Einstein summation convention. 
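To see the boost mixing numerically, here is a quick check (a NumPy sketch in units with $c=1$; the sample field values and the boost speed are arbitrary): build $F$ from $\vec E$ and $\vec B$ with the layout shown above, boost along $x$, and read off the transformed fields.

```python
import numpy as np

# Sample field values (arbitrary), units with c = 1; recall B = (B_23, -B_13, B_12)
E = np.array([1.0, 2.0, 3.0])
B = np.array([0.5, -1.5, 2.5])
B23, B13, B12 = B[0], -B[1], B[2]

# Field tensor F^{mu nu} laid out exactly as in the 3+1D matrix above
F = np.array([
    [0.0,   E[0],  E[1],  E[2]],
    [-E[0], 0.0,   B12,   B13],
    [-E[1], -B12,  0.0,   B23],
    [-E[2], -B13, -B23,   0.0]])

# Lorentz boost along x with speed beta
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([
    [gamma,       -gamma*beta, 0.0, 0.0],
    [-gamma*beta,  gamma,      0.0, 0.0],
    [0.0,          0.0,        1.0, 0.0],
    [0.0,          0.0,        0.0, 1.0]])

# F'^{mu nu} = L^mu_a L^nu_b F^{ab}
Fp = L @ F @ L.T

# Read off the boosted fields: the boost mixes E and B components
Ep = Fp[0, 1:]
Bp = np.array([Fp[2, 3], -Fp[1, 3], Fp[1, 2]])
```

One can check that `Ep` and `Bp` reproduce the standard transformation $E'_y = \gamma(E_y - \beta B_z)$, $B'_y = \gamma(B_y + \beta E_z)$, etc., with the components parallel to the boost unchanged — exactly the mixing no 3D rotation can produce.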
Basically, when you take the curl of a vector, you're constructing all possible antisymmetric combinations of the vector, the derivative operator, and the directional unit vectors: $$\begin{matrix} \hat{x}_1 \frac{\partial V_2}{\partial x^3} & -\hat{x}_1 \frac{\partial V_3}{\partial x^2} & \hat{x}_2 \frac{\partial V_3}{\partial x^1} & -\hat{x}_2 \frac{\partial V_1}{\partial x^3} & \hat{x}_3 \frac{\partial V_1}{\partial x^2} & -\hat{x}_3 \frac{\partial V_2}{\partial x^1}\end{matrix}$$ The antisymmetric tensor $\epsilon$ is easy to generalize to additional dimensions; you just add on an additional index per dimension, and keep the same rule for assigning components in terms of permutations of indices. However, Maxwell's equations actually involve two different curls, $\vec\nabla\times\vec{E}$ and $\vec\nabla\times\vec{B}$. Since the electric and magnetic fields don't generalize to higher-dimensional spaces in the same way, it stands to reason that their curls may not either. Let's look at the "magnetic curl" first. The magnetic field generalizes to higher dimensions as an antisymmetric piece of a tensor, so we should write its curl as an operation on that antisymmetric piece of a tensor. Start with the tensor-notation cross product rule, $$\bigl(\vec\nabla\times\vec{B}\bigr)^\alpha = \epsilon^{\alpha\beta\gamma}\frac{\partial B_\beta}{\partial x^\gamma}$$ and put in the following identity which expresses the components of $\vec{B}$ in terms of components of $F$, $$B_\beta = -\frac{1}{2}\epsilon_{\beta\mu\nu}F^{\mu\nu}$$ (here the indices range over values 1 to 3), and after some simplifications you get $$\bigl(\vec\nabla\times\vec{B}\bigr)^\alpha = \frac{\partial F^{\alpha\mu}}{\partial x^\mu}$$ So the curl of something that can be expressed as an antisymmetric piece of a tensor is really not a curl at all!
That makes it very easy to generalize: the equation $$\frac{\partial F^{\alpha\beta}}{\partial x^{\beta}} = \mu J^{\alpha}$$ gives you Ampère's law in any number of dimensions $N$, if you just let $\alpha$ and $\beta$ range from $1$ to $N$. (Conveniently, if you let $\alpha$ be equal to zero, you get Gauss's law.) Now what about the "electric curl"? Well, the electric field generalizes to higher dimensions as a vector (as long as you ignore Lorentz boosts), so it's not really an antisymmetric tensor - at least, the components of $\vec{E}$ don't form a square block of $F$ the way $\vec{B}$ did. But remember, we do have that equation that related $B_\beta$ to $F^{\mu\nu}$. You can actually flip that around and use it to define an antisymmetric tensor that will contain the components of $\vec{E}$. We call this new tensor $G$, the dual tensor to $F$. $$G^{\mu\nu} = g_{\alpha\beta}\epsilon^{\beta\mu\nu}E^{\alpha}$$ This is the piece of it that contains $E$, anyway. (I might be off by a numerical factor or a sign or something, but that's the gist of it.) If you write out the components of $G$ in 3+1D space, it looks just like $F$ except that the positions of $E$ and $B$ are switched. Using this new dual tensor, you can do the same thing we did with $\vec{B}$ to $\vec{E}$, namely write its curl as $$\bigl(\vec\nabla\times\vec{E}\bigr)^\alpha = \frac{\partial G^{\alpha\mu}}{\partial x^\mu}$$ and thus Maxwell's other equations are given by $$\frac{\partial G^{\alpha\beta}}{\partial x^\beta} = 0$$ This can also be easily generalized to higher dimensions, but there is a trick to how you define $G$. The thing is, since it is a dual tensor, it doesn't always have 2 indices. Remember that when you go to higher dimensions, you have to put extra indices on $\epsilon$, and so the definition of $G$ changes. For example, the definition above was for a 3D subset of G. 
The real $G$ in 3+1D is defined like this: $$G^{\mu\nu} = \frac{1}{2}g_{\kappa\alpha}g_{\lambda\beta}\epsilon^{\kappa\lambda\mu\nu}F^{\alpha\beta}$$ In 4+1D, it's defined like this $$G^{\mu\nu\rho} = \frac{1}{6}g_{\kappa\alpha}g_{\lambda\beta}\epsilon^{\kappa\lambda\mu\nu\rho}F^{\alpha\beta}$$ and so on. Notice that $G$ always has $N - 2$ indices, so that the total number of indices on $F$ and $G$ is $N$. This is one key property of dual tensors: in a rough sense, the basis of one is kind of orthogonal to the basis of the other. This is where the exterior calculus that Qmechanic mentioned comes into play: it shows a sort of equivalence between tensors and their duals, and it has ways of neatly dealing with dual tensors which make it possible to write Maxwell's equations very compactly. $$\begin{align}\mathbf{d}F &= 0 & \mathbf{d}G &= 0\end{align}$$ The exterior derivative $\mathbf{d}$ is an operation that applies to both a tensor field and its dual in similar ways, such that in both cases it reproduces Maxwell's equations.
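As a sanity check on the duality, $G$ can be computed numerically from the 3+1D definition above. The sign conventions depend on the metric signature (assumed $(+,-,-,-)$ here, an assumption the answer itself hedges about); with that choice the $E$ slots of $G$ pick up $\vec B$ and the $B$ slots pick up $-\vec E$, i.e. the fields are indeed switched up to a sign.

```python
import numpy as np
from itertools import permutations

def levi_civita(n):
    """Totally antisymmetric symbol with eps[0, 1, ..., n-1] = +1."""
    eps = np.zeros((n,) * n)
    for p in permutations(range(n)):
        # parity via the number of inversions in the permutation
        inv = sum(pi > pj for i, pi in enumerate(p) for pj in p[i + 1:])
        eps[p] = (-1) ** inv
    return eps

# Same layout of F as above; field values are arbitrary samples
E = np.array([1.0, 2.0, 3.0])
B = np.array([0.5, -1.5, 2.5])
B23, B13, B12 = B[0], -B[1], B[2]
F = np.array([
    [0.0,   E[0],  E[1],  E[2]],
    [-E[0], 0.0,   B12,   B13],
    [-E[1], -B12,  0.0,   B23],
    [-E[2], -B13, -B23,   0.0]])

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric; signature convention assumed
eps = levi_civita(4)

# G^{mu nu} = (1/2) g_{ka} g_{lb} eps^{k l mu nu} F^{ab}
G = 0.5 * np.einsum('ka,lb,klmn,ab->mn', g, g, eps, F)
```

Reading $G$ with the same component dictionary as $F$, the "electric" row holds $\vec B$ and the "magnetic" block holds $-\vec E$.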
I have a very specific difficulty with Problem 3, items (b)-(c) (page 12), of Landau's Course of Theoretical Physics, Volume 1: Mechanics, Second Edition. The problem is very straightforward: Find the Lagrangian of a simple pendulum of length $l$ and mass $m$ whose point of support is (b) oscillating horizontally in the plane of motion of the pendulum according to the law $x = a\cos(\gamma t)$, where $a$ and $\gamma$ are constants, or (c) oscillating vertically in the plane of motion according to $y = a\cos(\gamma t)$. Let us work out the case shown in the image. With respect to that origin, the position of the mass $m$ can be described by $$(x_m,y_m) = (l\sin(\theta),a\cos(\gamma t)+l\cos(\theta))$$ So $$T = \frac{m}{2}(\dot{x}_m^2+\dot{y}_m^2) = \frac{m}{2}(l^2\dot{\theta}^2\cos^2(\theta) + a^2\gamma^2\sin^2(\gamma t) + 2al\gamma \dot{\theta}\sin(\gamma t )\sin(\theta) + l^2\dot{\theta}^2\sin^2(\theta))$$ And $$U = -mg(y_m) = -mg(a\cos(\gamma t) + l\cos(\theta))$$ Such that the Lagrangian would be $\mathcal{L} = T - U$, but the answer that he presents is $$\mathcal{L} = \frac{1}{2}ml^2\dot{\theta}^2 + mla\gamma^2\cos(\gamma t)\cos(\theta) + mgl\cos(\theta)$$ My question is: how can I get this answer? He says that he is omitting total derivatives, but I do not understand what he means and how this could change my answer into his. I also think that he is omitting the terms that depend only on time and on constants. But the problem is that he gets different trigonometric functions.
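For reference, here is a sketch of the standard reduction (my own, not text from Landau): "omitting total derivatives" means dropping from $\mathcal L$ any term of the form $\frac{d}{dt}f(q,t)$, since such terms never change the Euler-Lagrange equations. A term depending on $t$ alone is itself a total derivative, and the cross term can be rewritten so that a total derivative splits off:

```latex
% The two theta-squared terms combine: cos^2 + sin^2 = 1
\frac{m}{2}l^2\dot\theta^2\cos^2\theta + \frac{m}{2}l^2\dot\theta^2\sin^2\theta
  = \tfrac{1}{2}ml^2\dot\theta^2
% Terms depending on t alone are total derivatives: drop them
\frac{m}{2}a^2\gamma^2\sin^2(\gamma t), \qquad mga\cos(\gamma t)
% Rewrite the cross term using d/dt[\sin(\gamma t)\cos\theta]
%   = \gamma\cos(\gamma t)\cos\theta - \dot\theta\sin(\gamma t)\sin\theta :
mal\gamma\,\dot\theta\sin(\gamma t)\sin\theta
  = mal\gamma^2\cos(\gamma t)\cos\theta
  - \frac{d}{dt}\Bigl[mal\gamma\sin(\gamma t)\cos\theta\Bigr]
% Dropping the total derivative leaves exactly Landau's answer:
\mathcal{L} = \tfrac{1}{2}ml^2\dot\theta^2
  + mla\gamma^2\cos(\gamma t)\cos\theta + mgl\cos\theta
```

The change of trigonometric functions ($\dot\theta\sin(\gamma t)\sin\theta \to \gamma\cos(\gamma t)\cos\theta$) comes precisely from this "integration by parts" in $t$.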
Here is a question from old exam papers I am having trouble with: Suppose the lifetime of a bulb is distributed exponentially with mean life $\theta$ (in hours). Let $X_i$ be the number of trials required to get a bulb surviving at least $t$ (known) hours for the first time in the $i$th lot, where lot sizes are large enough. Discuss a likelihood ratio test for testing $H_0:\theta\ge\theta_0$ against $H_1:\theta<\theta_0$. I am not sure what exactly my observed sample is supposed to be. So I am stuck at setting up the problem correctly. Say I have $k$ lots of bulbs, the $i$th lot having size $n_i\,, i=1,2\ldots,k$. Then is the observed sample some $\mathbf X=(X_1,X_2,\ldots,X_k)$, where $X_i=(X_{ij})_{i=1,\ldots,k\,;\,j=1,\ldots,n_i}$ is again a vector for each $i$ ? I am using $X_{ij}$ to denote the number of trials needed to get the $j$th bulb surviving for at least $t$ hours for the first time in the $i$th lot. If I assume $X_1,X_2,\ldots,X_k$ are all independent samples, then my likelihood function should be $$L(\theta\mid \mathbf x)=\prod_{i=1}^k\prod_{j=1}^{n_i}p(\theta)(1-p(\theta))^{x_{ij}-1}\quad,\,\theta>0$$ , where $p(\theta)=\int_t^{\infty}\frac{1}{\theta}e^{-z/\theta}\,dz=e^{-t/\theta}\,,t>0$ is the success probability. But this might be wrong since I am supposed to have a sample consisting of only $X_1,\ldots,X_k$ where each $X_i$ denotes number of trials (so not vectors by themselves). In the above setup, this does not seem to be the case. If this formulation is wrong, I would like a hint on how to arrive at the correct one. And also, where exactly am I supposed to apply a large sample approximation with large $n_i$s? Is this referring to Wilks' theorem on the asymptotic distribution of the log-likelihood ratio statistic? (I am not looking for a solution to the actual problem.)
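As a quick sanity check on this setup, here is a simulation sketch of the model being formulated (the values of $\theta$, $t$, and the number of lots are made up): each observation is geometric with success probability $p(\theta)=e^{-t/\theta}$.

```python
import numpy as np

rng = np.random.default_rng(42)
theta, t = 1000.0, 693.0          # hypothetical mean life (h) and threshold (h)
p = np.exp(-t / theta)            # P(a bulb survives at least t hours) ~ 0.5 here

# X_i: number of trials until the first success in lot i -> Geometric(p)
k = 10_000                        # number of lots (made up)
x = rng.geometric(p, size=k)

# MLE of p from a geometric sample is 1 / sample mean; invert p = exp(-t/theta)
p_hat = 1.0 / x.mean()
theta_hat = -t / np.log(p_hat)
```

The simulated `theta_hat` lands close to the true `theta`, which suggests treating the $X_i$ (one geometric count per lot) as the sample, with the "large lot" assumption only guaranteeing that each lot contains enough bulbs for the geometric count to be well defined.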
Skip to 0 minutes and 12 seconds Hello. It's your turn on trigonometry by the unit circle. OK. Now let us recall some laws that will permit us to reduce our computations to angles between 0 and pi over 2. OK. The laws that I wish to remind you of are the antipodal law, which says that the sine of pi plus x is equal to minus the sine of x, and the cosine of pi plus x is equal to minus the cosine of x. Then we have another important fact, that the sine is an odd function. That is, the sine of minus x is equal to minus the sine of x. And the cosine is an even function. Skip to 1 minute and 14 seconds That is, the cosine of minus x is equal to the cosine of x. And finally, we remember the periodicity of these trigonometric functions. That is, the sine of x plus any integer multiple of 2 times pi is equal to the sine of x. Skip to 1 minute and 44 seconds And the cosine of x plus any integer multiple of 2 pi is equal to the cosine of x. Skip to 1 minute and 58 seconds OK. Now with these laws, let us see that it's very easy to compute these values. Good. We will compute these values applying these laws. You will see that when you have more practice, then you can maybe follow other quicker procedures. OK. But let us apply these laws. First of all, the cosine of 2 pi over 3: I want to reduce this computation to an angle between 0 and pi over 2. Good. I observe that this is equal to the cosine of pi plus minus pi over 3, because this is pi minus pi over 3. And then applying the antipodal law, I get that this is equal to minus the cosine of minus pi over 3. OK. Skip to 3 minutes and 18 seconds But the cosine is an even function. Therefore, this is equal to minus the cosine of pi over 3. And now the cosine of pi over 3 is equal to 1 over 2; therefore, minus the cosine is minus 1 over 2. Skip to 3 minutes and 43 seconds Good. And now, let us consider the tangent of minus pi over 3. OK. By definition, this is equal to the sine of minus pi over 3 over the cosine of minus pi over 3. OK.
Because the sine is an odd function, this is equal to minus the sine of pi over 3 over-- now, the cosine is an even function. And therefore, this is equal to the cosine of pi over 3. Good. We have reduced ourselves to angles between 0 and pi over 2. And now, the sine of pi over 3 is equal to the square root of 3 over 2. Therefore, at the numerator we get minus the square root of 3 over 2. Skip to 4 minutes and 48 seconds And at the denominator, the cosine of pi over 3 is 1 over 2. And then we get minus the square root of 3. Good. And finally now, we have to compute the sine of 19 pi over 4. OK. Clearly, this angle is greater than 2 times pi. Therefore, we can apply the periodicity to reduce this angle. What do we get? Observe that 19 pi over 4 is equal to 16 pi over 4 plus 3 pi over 4, so this is the sine of 4 times pi plus 3 pi over 4. Good. Now, 4 pi is a multiple of 2 pi. Therefore, this is equal to the sine of 3 pi over 4. OK. And now this is the sine of what? OK. Skip to 6 minutes and 18 seconds This is exactly like pi minus pi over 4. That is, pi plus minus pi over 4. And by the antipodal laws, this is equal to what? To minus the sine of minus pi over 4. OK. But the sine is an odd function. Therefore, the sine of minus pi over 4 is minus the sine of pi over 4. We have another minus here. And therefore, we get the sine of pi over 4. OK. We have reduced ourselves to an angle between 0 and pi over 2. And the sine of pi over 4 is equal to the square root of 2 over 2. OK. Thank you very much for your attention. It's your turn on trigonometry by the unit circle. Do your best in trying to solve the following problems. In any case some of them are solved in the video and all of them are solved in the pdf file below. Also, it may be useful to know the values of the sine and cosine at \(\pi/6, \pi/4, \pi/3\) (see step 2.11): \(\sin (\pi/6)=1/2, \sin (\pi/4)=\sqrt2/2, \sin (\pi/3)=\sqrt3/2\) \(\cos (\pi/6)=\sqrt 3/2, \cos (\pi/4)=\sqrt2/2, \cos(\pi/3)=1/2\) Also,
recall that \(\sin(\pi+t)=-\sin t, \,\cos(\pi+t)=-\cos t\) \(\sin (\pi-t)=\sin t,\, \cos(\pi-t)=-\cos t\) (check these identities on the trigonometric circle. Guess the values of \(\sin(\pi/2-t)\) and of \(\cos(\pi/2-t)\)) Exercise 1. Compute \(\cos \dfrac {2\pi}3\), \(\tan\left(-\dfrac \pi 3\right)\) and \(\sin\dfrac{19\pi}4\). Exercise 2. Compute the following values: \[1)\ \ \sin \dfrac 76\pi,\ \cos \dfrac {17}6\pi,\ \tan \dfrac 76\pi;\] \[2)\ \ \sin \dfrac 53\pi,\ \cos \dfrac {11}3\pi,\ \tan \dfrac 53\pi;\] \[3)\ \ \cos\left(- \dfrac 76\pi\right),\ \cos\left(- \dfrac \pi6\right),\ \sin\left(- \dfrac {2\pi}3\right). \] © Università degli Studi di Padova
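The three values in Exercise 1 worked out in the video can also be double-checked numerically (a quick sketch with Python's math module):

```python
import math

# Exercise 1, checked against the reductions done in the video
print(math.cos(2 * math.pi / 3))    # -1/2
print(math.tan(-math.pi / 3))       # -sqrt(3), about -1.732
print(math.sin(19 * math.pi / 4))   # sqrt(2)/2, about 0.707
```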
Errors in layer thicknesses and instability of the refractive indices of thin film materials are the main reasons for the deviations of experimental spectral characteristics of produced optical coatings from the theoretical performances of the corresponding multilayer designs. Errors in layer thicknesses are inevitable even in modern deposition plants equipped with high precision monitoring devices. It is known also that optical constants of layer materials may deviate from the nominal constants specified in the corresponding theoretical designs. These deviations are explained by various factors inside the deposition chamber, for example, change of materials in the melting pot, unstable substrate temperature, cleaning conditions, etc. It is known that different solutions of the same design problem exhibit different sensitivities to deposition errors. Some designs may be sensitive to even small errors. Other solutions are stable to errors of large magnitudes. Stability of the design solutions is tightly interconnected with the monitoring technique used to control the deposition process. For example, the same design solution might be stable in the case of time-monitoring control and sensitive to errors in the cases of broad band or monochromatic monitoring. Three ways to find a stable design: 1) To obtain a set of solutions of a design problem, perform computational manufacturing experiments simulating real deposition runs and choose the solutions providing the highest production yield estimations (see, for example, the article on yield analysis and our paper). 2) To use special algorithms (Robust Synthesis) taking the stability requirement into account already in the course of the synthesis process. In Fig. 1 you can see results of the error analysis performed for conventional and robust designs of a two-line filter. The robust design (right panel) exhibits much higher stability to thickness errors and index offsets than the conventional design (left panel).
3) To use a deterministic approach taking errors in optical constants into account. In OptiLayer, this approach is realized in the form of the Environments manager. Fig. 1. Illustrating example: Stability of the spectral characteristics of conventional (left panel) and robust (right panel) two-line filters. With Robust synthesis, you can obtain designs which are typically more stable with respect to production errors than standard (conventional) designs. Robust design can be activated by checking the Robust Synthesis Enabled check box available through Synthesis --> Options --> Robust tab (Fig. 2). The algorithm is based on a simultaneous optimization of a set of designs (the design cloud) located in a vicinity of a basic, so-called pivotal design. Robust Synthesis options require more computations than the analogous standard OptiLayer synthesis options (about M times more, where M is the number of samples in the design cloud; reasonable values are 100-200). The number of designs in the design cloud can be specified in the M The number of samples field on the Robust tab. The standard merit function is generalized in the following way: \(GMF=\left\{\sum\limits_{j=0}^M MF^2_j/(M+1)\right\}^{1/2}, \) (Eq. 1) \(MF_j=\left\{\frac 1L \sum\limits_{l=1}^L \left[\frac{S(X^{(j)},\lambda_l)-\hat{S}_l}{\Delta S_l}\right]^2\right\}^{1/2} \) (Eq. 2) where \(S(X^{(j)},\lambda)\) and \(\hat{S}_l\) are actual and target values of spectral characteristics, \(\Delta S_l\) are target tolerances, and \(\{\lambda_l\}, l=1,...,L\) is the wavelength grid. \(X^{(0)}=\{d_1,...,d_m, n_H,n_L\}\) is the pivotal design and \(X^{(j)}=\{d_1^{(j)},...,d_m^{(j)}, n_H^{(j)},n_L^{(j)}\}\) are disturbed designs from the cloud. \(MF_0\) is the standard merit function, and \(MF_j\) are merit functions corresponding to designs from the cloud. Fig. 2. Activating the Robust synthesis option and describing characteristics of the design cloud. OptiLayer allows you to specify various expected production errors.
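Eqs. 1 and 2 can be sketched in code as follows (purely illustrative: the spectra are plain arrays and the function names are made up; this is not OptiLayer's internal implementation):

```python
import numpy as np

def merit_function(S, S_hat, dS):
    """Eq. 2: RMS of tolerance-weighted deviations over the wavelength grid."""
    return np.sqrt(np.mean(((S - S_hat) / dS) ** 2))

def generalized_merit_function(S_cloud, S_hat, dS):
    """Eq. 1: S_cloud[0] is the pivotal design's spectrum, S_cloud[1:] the cloud."""
    mf = np.array([merit_function(S, S_hat, dS) for S in S_cloud])
    return np.sqrt(np.sum(mf ** 2) / len(mf))   # len(mf) = M + 1
```

When every cloud member matches the pivotal design, GMF reduces to the standard MF; penalizing the whole cloud is what trades a slightly worse nominal fit for stability.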
In the simplest case, there are errors in layer thicknesses only and no offsets of the optical constants. Errors in layer thicknesses can be specified on the absolute and/or relative scales (Tolerance size panel, Absolute and Relative fields on the Robust tab, see Fig. 2). In the example in Fig. 2, relative errors of 1% are specified. No Drift in the Type column specifies absence of offsets in layer refractive indices. Absolute errors in layer thicknesses: \(d_i^{(j)}=d_i+\delta_i^{(j)} \;\;\) (Eq. 3) Relative errors in layer thicknesses: \(d_i^{(j)}=d_i+\Delta_i^{(j)}d_i \;\; \) (Eq. 4) Please note that these parameters and the values of the refractive index offsets do not directly correspond to the deposition process accuracy, yet they are connected with it. These parameters should not be taken literally; they are merely control parameters of the algorithm. Fig. 3. Systematic offsets of H and L materials from [-0.01; 0.01] and [-0.005; 0.005] are specified. If you would like to take offsets of the refractive indices into account as well, you need to specify the Index Drift level and the Type of the offset. If the Per Material type is chosen, then systematic offsets will take the same value for all layers of the same material: \(n_{H,L}^{(j)}=n_{H,L}+\Sigma_{H,L}^{(j)}, \;\;j=1,...,M\) (Eq. 5) The algorithm with systematic offsets is applied when it is assumed that actual refractive indices can be shifted with respect to the nominal ones in the course of the deposition. The systematic errors \(\Sigma_{H,L}^{(j)}\) are random normally distributed errors with zero means and standard deviations \(\Sigma_{H,L}\). If the Per Layer type is chosen (Fig. 3), then random offsets will take different values for different layers of the same material: \(n_{H,L}^{(j)}=n_{H,L}+\sigma_{H,L,i}^{(j)}, \;\;j=1,...,M\) (Eq. 6) The random errors \(\sigma_{H,L,i}^{(j)}\) are random normally distributed errors with zero means and standard deviations \(\sigma_{H,L}\).
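Eqs. 3-6 amount to generating a cloud of disturbed designs around the pivotal one. A short sketch (the thicknesses, indices, and error levels below are hypothetical placeholders, not values from the manual):

```python
import numpy as np

rng = np.random.default_rng(0)
d = np.array([120.0, 85.0, 140.0, 90.0])   # nominal thicknesses, nm (hypothetical)
material = np.array([0, 1, 0, 1])          # 0 = H layer, 1 = L layer
n_nominal = np.array([2.30, 1.46])         # nominal H and L indices (hypothetical)
M = 200                                    # cloud size

# Eq. 4: relative thickness errors of 1%
rel = 0.01
d_cloud = d + rng.normal(0.0, rel, (M, d.size)) * d

# Eq. 5, Per Material: one systematic offset per material per cloud sample
Sigma = np.array([0.01, 0.005])            # standard deviations for H and L
offset_per_material = rng.normal(0.0, Sigma, (M, 2))
n_per_material = n_nominal[material] + offset_per_material[:, material]

# Eq. 6, Per Layer: an independent offset for every layer of every sample
sigma = np.array([0.01, 0.005])
n_per_layer = n_nominal[material] + rng.normal(0.0, sigma[material], (M, d.size))
```

In the Per Material case all H layers of a given cloud sample share one offset; in the Per Layer case every layer gets its own.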
The algorithm with random offsets is applied when it is assumed that actual refractive indices are not stable in the course of the deposition process due to various reasons (for example, instabilities of substrate temperature). Example. Two-line Filter: target transmittance is 100% in the ranges 598-602 nm and 698-702 nm, and zero in the ranges 500-580 nm, 615-693 nm, and 720-800 nm (Fig. 4). Layer materials are Nb2O5 and SiO2 on a Suprasil substrate. First, a set of conventional designs was obtained. The structure of one of the conventional designs and its spectral characteristics are shown in Fig. 4. The merit function value (MF) is 2.9. Fig. 4. Spectral transmittance and structure of a conventional 31-layer design solution. Assume that the expected level of errors in layer thicknesses is 1% and there are systematic offsets of refractive indices, 0.01 for Nb2O5 and 0.005 for SiO2. A robust solution can be found with the settings shown in Fig. 3 (Per Material). It means that \(\Sigma_{H}=0.01\) and \(\Sigma_{L}=0.005.\) As a starting design, a single layer can be used, and Gradual Evolution can be used for synthesis. As a result, a 27-layer solution (Fig. 5) is obtained. The merit function value is 10.3, which is larger than in the case of the conventional design (Fig. 4). This is a typical situation for robust synthesis: robust design solutions approximate target specifications a little worse than conventional designs. The reason is fundamental: additional stability requirements are taken into account in the course of the merit function optimization, i.e. the generalized merit function containing multiple terms (Eq. 1) is optimized instead of the standard merit function. Fig. 5. Spectral transmittance and structure of the 27-layer robust solution obtained assuming 1% errors in layer thicknesses and 0.01 and 0.005 systematic offsets in high- and low-refractive indices, respectively. Fig. 6.
Spectral transmittance and profile of the 29-layer robust solution obtained assuming 1% errors in layer thicknesses and 0.01 and 0.005 random offsets in high- and low-refractive indices, respectively. If the expected level of errors in layer thicknesses is 1% and there are random offsets of refractive indices, 0.01 for Nb2O5 and 0.005 for SiO2, a robust solution can be found with the settings shown in Fig. 3 (Per Layer). It means that \(\sigma_{H}=0.01\) and \(\sigma_{L}=0.005.\) As a result, a 29-layer solution (Fig. 6) is synthesized. The merit function value is 6.5. As expected, it is bigger than in the case of the conventional design (Fig. 4). Important! The merit function (MF) displayed at the bottom of the evaluation window (Fig. 7 and 8) is calculated in two different ways. If the robust option is disabled, the merit function is calculated in the standard way. If the robust option is enabled, MF is calculated via Eqs. 1 and 2. In Fig. 7 and 8 we can see MF values of the conventional and robust two-line filters, respectively. The standard MF is smaller for the conventional design. At the same time, the generalized merit function GMF is smaller in the case of the robust solution. Fig. 7. Calculation of the merit function when the robust option is disabled/enabled (conventional design). Fig. 8. Calculation of the merit function when the robust option is disabled/enabled (robust design). After obtaining a robust design or a series of robust designs, several important questions arise: How to evaluate stability of the obtained robust designs? How to compare stability of the conventional and robust designs? Which design solution is the most stable to deposition errors? There are no simple answers to these questions. However, there are recommendations which typically help you to evaluate design stability (of course, with respect to the monitoring technique in use).
Computational experiments simulating the deposition process can help you to evaluate stability of your design solution. 1) In the case of a non-optical monitoring technique (quartz crystal or time monitoring), statistical error analysis is recommended (see below). 2) In the case of broadband monitoring (BBM), computational manufacturing with BBM is recommended (without witness chips or with witness chips). 3) In the case of monochromatic monitoring, simulations without witness chips or with indirect monitoring can be used. In the course of the error analysis, it is reasonable to specify the same levels of errors in layer thicknesses and offsets of refractive indices. In the case of statistical error analysis, designs with imposed errors in layer parameters are generated and spectral characteristics of the disturbed designs are calculated. Fig. 9. Activating statistical error analysis. In the example above (Two-Line Filter), the level of errors can be set to 1% (Rel. RMS (%) column). If in the course of the robust synthesis the Tolerance size was specified in absolute values, then that value should be specified in the Rel. RMS (%) column. If in the course of the robust synthesis refractive index offsets were specified Per Material, then the Per Material Errors box should be checked and the values of the offsets should be specified in the RMS column on the Refractive Index tab (Fig. 10). This is, however, just a general recommendation. Of course, other reasonable error levels/settings can be used in the course of the statistical analysis. Fig. 10. Specifying errors in layer thicknesses and refractive index offsets in the course of the statistical error analysis. Fig. 11. Results of the statistical error analysis of the conventional 31-layer design (Fig. 4). Spectral characteristics of the disturbed design degrade significantly, especially around the high transmission zone at 700 nm. Fig. 12. Results of the statistical error analysis of the robust 27-layer design (Fig. 5).
It is seen that the design is more stable. The numerical measure E(dMF) helps to evaluate the averaged stability of the design solution. This value is displayed at the bottom of the Error Analysis window. The value E(dMF) is the expected deviation of the spectral characteristics from those of the theoretical design, averaged over the wavelength grid and the number of statistical tests (The number of tests field on the Error Analysis Setup window, Fig. 9). Fig. 13. Comparison of the E(dMF) values of the conventional 31-layer and robust 27-layer designs. Important notes: The robust algorithm does not take all sources of deposition errors into account. The influence of some factors should be considered separately. It is recommended to obtain and analyze a series of good design solutions using various design techniques. It can happen that the levels of errors in layer parameters are too high to meet target requirements with the help of the robust algorithm. It is reasonable to stop the computations if either the generalized merit function almost does not decrease or there are very insignificant changes in the pivotal design. This means that a state of dynamic equilibrium has been achieved and at the specified error level it is not reasonable to search for a more complicated design. Almost all OptiLayer optimization algorithms support robust synthesis. You may also be interested in the following articles:
I've been trying to teach myself a few modules of my university course in preparation before I start and I'm currently attempting some Vector Calculus, but I'm finding it much more difficult to grasp than previous modules. I was beginning to think I was getting the hang of it until I came across the question below. The closed curve $C$ in the $z=0$ plane consists of the arc of the parabola $y^2=4ax$ $(a>0)$ between the points $(a,±2a)$ and the straight line joining $(a, ∓2a)$. The area enclosed by $C$ is $A$. Show, by calculating the integrals explicitly, that $$\int_C(x^2y\,\mathrm d\mkern1mu x + xy^2\,\mathrm d\mkern1mu y)=\int_A(y^2−x^2)\,\mathrm d\mkern1mu A=\frac{104}{105}a^4$$ where $C$ is traversed anticlockwise. Apologies if my attempts here are laughable, at the moment I'm just trying to familiarise myself with a few of the concepts so that I'll have already met it when it comes to studying it in my course. As a result of this I'm rushing quite quickly through the topic and not worrying so much about my understanding of 'why' but just trying to compute a few answers. It's likely that I'll throw around some incorrect terms here and there, this is just so that people can see my thought process. What I'm having issues with: For $\int_C(x^2y\,\mathrm d\mkern1mu x + xy^2\,\mathrm d\mkern1muy)$ I thought what we could do is use the line integral formula $\int_{C} \mathbf f\cdot d\mathbf r = \int_{t_0}^{t_1} \mathbf f(\mathbf r)\cdot \mathbf r'(t)\ dt$, with the parametrisation ${\bf r}(t)=(\frac{t^2}{4a},t)$ we have $\int^{2a}_{-2a}(x^2y,xy^2)\cdot (\frac t{2a},1) dt$ which when we substitute in $t$ for $x$ and $y$ and evaluate the integral I get something like $\frac{152}{35}a^4$ which is clearly wrong. I doubt highly that I've made an arithmetic error, it's far more likely that I've fundamentally misunderstood and got mixed up, so apologies for that. 
I get $\int_A(y^2−x^2)\,\mathrm d\mkern1mu A=\int^{2a}_{-2a}\int^{\frac{y^2}{4a}}_0(y^2-x^2) \mathrm d\mkern1mu x\, \mathrm d\mkern1mu y$ and I won't even bother going further because I know this is already wrong. My question (finally): Answer the original question, but also if possible give criticism of my attempts: why am I wrong, what have I mixed up, and where would my answer be right? I'm also unclear as to what the last line of the question is saying, "where $C$ is traversed anticlockwise" (I have an idea of what this could mean, but not why it would ever affect an answer). Apologies again for my embarrassing attempts (thankfully this is anonymous), I know it's a big ask (essentially reteach me an entire topic), so any input is appreciated. Thank you
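For what it's worth, both integrals can be checked symbolically. Note that "traversed anticlockwise" fixes the direction of travel: the parabola must be run from $(a,2a)$ down to $(a,-2a)$ (which flips the sign of the $152/35$ parabola contribution), and the closing line $x=a$ runs upward; reversing the orientation negates the whole line integral, which is why the question specifies it. A SymPy sketch:

```python
import sympy as sp

t, x, y, a = sp.symbols('t x y a', positive=True)
P, Q = x**2 * y, x * y**2        # integrand is P dx + Q dy

# Anticlockwise: down the parabola from (a, 2a) to (a, -2a), then up the line x = a
xp, yp = t**2 / (4 * a), t
para = sp.integrate(P.subs({x: xp, y: yp}) * sp.diff(xp, t)
                    + Q.subs({x: xp, y: yp}) * sp.diff(yp, t), (t, 2 * a, -2 * a))
line = sp.integrate(Q.subs(x, a), (y, -2 * a, 2 * a))   # dx = 0 on the line x = a

# Green's theorem: dQ/dx - dP/dy = y**2 - x**2 over the region y**2/(4a) <= x <= a
area = sp.integrate(sp.integrate(y**2 - x**2, (x, y**2 / (4 * a), a)),
                    (y, -2 * a, 2 * a))

print(sp.simplify(para + line), sp.simplify(area))   # both equal 104*a**4/105
```

So the attempted line integral was only missing the straight-line piece and the orientation, and the attempted area integral had the $x$ limits as $y^2/(4a)$ to $a$ (the region sits between the parabola and the line), not $0$ to $y^2/(4a)$.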
The lifting operator $\mathbf{r}(\mathbf{v_h})$ for the '2nd version' of Bassi-Rebay scheme for elliptic problems in $d$ dimensions is defined as $$ \int_{\Omega_h} \mathbf{w}_h \cdot \mathbf{r}(\mathbf{v}_h)\,\mathrm{d}\mathbf{x} = -\int_{E} \{\mathbf{w}_h\}\cdot \mathbf{v}_h \,\mathrm{d}S, $$ where $$ \mathbf{v}_h \in \mathbf{V}_h,\:\mathbf{w}_h \in \mathbf{V}_h\\ V_h = \{v_h \in L^2(\Omega_h) : \bigl. v_h\bigr|_K\in \mathbb{P}_k(K) \:\forall K\in \mathcal{T}_h\}\\ \mathbf{V}_h = \{v_h \in \bigl(L^2(\Omega_h)\bigr)^d : \bigl. v_h\bigr|_K\in \bigl(\mathbb{P}_k(K)\bigr)^d \:\forall K\in \mathcal{T}_h\}, $$ and $E$ is an edge of element belonging to triangulation $\mathcal{T}_h$. The weak form for Laplace equation discretized with BR2 then contains terms such as $$ \int_{\Omega_h} \bigl(\nabla v_h\bigr) \cdot \mathbf{r}\bigl([\![u_h]\!]\bigr)\,\mathrm{d}\mathbf{x},\quad u_h, v_h \in V_h, $$ for which the definition of the lifting operator can be used to convert the integral over $\Omega_h$ into surface integral over element edge. There are also face terms such as $$ \int_{E} [\![v_h]\!] \cdot \mathbf{r}\bigl([\![u_h]\!]\bigr)\,\mathrm{d}S. $$ How should these be evaluated? The only approach I can think of is to use the definition of the lifting operator with all test functions $\mathbf{w}_h$ belonging to one element in order to assemble a local linear system where the expansion coefficients of $\mathbf{r}(\mathbf{v}_h)$ are unknowns (i.e. I would assume that $\mathbf{r}(\mathbf{v}_h)$ can be represented as a linear combination of basis functions). If I do that, then I can work with the lifting operator restricted to element traces. This seems to be a bit strange to me though and I'm afraid I completely missed the point. Remark - notation: Let $E$ be an internal edge shared by two elements $K^{+}$ and $K^{-}$ and let $\mathbf{n}^{+}$ and $\mathbf{n}^{-}$ be outer normals of $K^{+}$ and $K^{-}$ on this edge. 
The jump $[\![\cdot]\!]$ and average $\{\cdot\}$ operators are defined as follows: $[\![v_h]\!] = v_h^{+}\mathbf{n}^{+} + v_h^{-}\mathbf{n}^{-}$ for $v_h \in V_h$ $[\![\mathbf{v}_h]\!] = \mathbf{v}_h^{+} \cdot \mathbf{n}^{+} + \mathbf{v}_h^{-} \cdot \mathbf{n}^{-}$ for $\mathbf{v}_h \in \mathbf{V}_h$ $\{v_h\} = \frac{1}{2}(v_h^{+} + v_h^{-})$, $v_h \in V_h$ $\{\mathbf{v}_h\} = \frac{1}{2}(\mathbf{v}_h^{+} + \mathbf{v}_h^{-})$, $\mathbf{v}_h \in \mathbf{V}_h$ The superscript ${}^{\pm}$ refers to trace quantities restricted to edge (face in 3D) from $K^{+}$ and $K^{-}$, respectively.
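To make the idea concrete, here is a sketch of the local-solve approach in plain linear algebra. The mass matrix $M_{ij} = \int_K \varphi_i \varphi_j\,\mathrm{d}\mathbf{x}$ and the edge vector $b_i = -\int_E \{\varphi_i\}\cdot \mathbf{v}_h\,\mathrm{d}S$ are assumed to have been assembled already by quadrature; the numbers below are placeholders, not a real assembly.

```python
import numpy as np

# Hypothetical small example: 3 scalar basis functions per element, d = 2,
# so the vector-valued lifting r(v_h) has 3 coefficients per component.
# M[i, j] = integral over K of phi_i * phi_j (block-diagonal over components).
M = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.5, 0.3],
              [0.1, 0.3, 1.0]])

# b[i] = -integral over E of {phi_i} . v_h dS, one right-hand side per
# spatial component of r (placeholder numbers standing in for quadrature).
b = np.array([[ 0.4, -0.2],
              [ 0.1,  0.0],
              [-0.3,  0.5]])

# Expansion coefficients of r(v_h): one small solve per element; the mass
# matrix can be factorized once and reused for every edge/right-hand side.
r_coeffs = np.linalg.solve(M, b)

# With the coefficients known, the face term int_E [[v_h]] . r([[u_h]]) dS
# is evaluated by ordinary edge quadrature on the trace of r.
print(r_coeffs.shape)  # (3, 2)
```

Whether this is the intended way to handle the face terms is exactly my question, but at least it makes the "lifting coefficients from a local mass-matrix solve" idea explicit.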
“One-Dimensional” Sound Waves

We’ll begin by considering sound traveling down a hollow pipe, to avoid unnecessary mathematical complications. Sound is a longitudinal wave—as the wave passes through, the air moves backwards and forwards in the pipe; this oscillatory movement is in the same direction the wave is traveling. To visualize what’s happening, imagine mentally dividing the air in the pipe, which is at rest if there is no sound, into a stack of thin slices. Think about one of these slices. In equilibrium, it feels equal and opposite pressure from the gas on its two sides. (This is analogous to the little bit of string at rest feeling equal and opposite tension on its two sides, but of course the gas pressure is inward.) As the sound wave goes through, the pressure wave generates slight differences in pressure on the two sides of our thin slice of air, and this imbalance of forces causes the slice to accelerate. To analyze this quantitatively—to apply \( \vec{F} = m\vec{a} \) to the thin slice of air—we must begin by defining displacement, the quantity corresponding to the string’s transverse movement \( y(x,t) \). We shall use \( s(x,t) \) to denote the horizontal (along the pipe) displacement of the thin slice of air which rests at position x when no sound is present. If the pipe has radius a, and hence cross-sectional area \( \pi a^2 \), a slice of air of thickness \( \Delta x \) has volume \( \pi a^2 \Delta x \), so writing the density of air \( \rho \), the mass of the slice of air is \( m = \rho V = \rho \pi \, a^2 \Delta x \). Clearly, its acceleration is \( a = \frac{\partial^2 s(x,t)}{\partial t^2} \), so we already have the right-hand side of \( \vec{F} = m\vec{a} \). To find the left-hand side—the force on the thin slice of air—we must find the pressure imbalance between the two sides. 
Relating Pressure Change to How the Displacement Varies

The pressure change as the sound wave moves down the tube is directly tied to the local compression or expansion of the gas. It’s like a spring: as the gas is compressed into a smaller volume, its pressure rises, and as the gas expands the pressure drops. And, exactly as for a spring, the changes in pressure and volume are linearly related. The coefficient of proportionality is called the bulk modulus, usually written B, and defined by the equation: \[ \Delta p = -B \dfrac{\Delta V}{V} \] Note the sign! As the volume decreases, the pressure increases. Since the ratio of volumes is dimensionless, the units for the bulk modulus are the same as for pressure: pascals. For air at standard temperature and pressure, the bulk modulus \( B = 10^5 \) Pa. Now, we are tracking the motion of the gas as the sound wave passes through by following the displacement \( s(x,t) \) of the bit of gas having equilibrium position x. Obviously, if \( s(x,t) \) is independent of x, all the gas is shifted by the same amount, and no compression or expansion has taken place. Local change in volume only happens if there is local variation in \( s(x,t) \). To make this quantitative, consider a slice of gas having thickness \( \Delta x \) (when at rest): if, at some instant when the sound wave is passing through, the right-hand end is displaced by \( s(x + \Delta x,t) \), and the left-hand end by a greater amount \( s(x,t) \), say, the thickness of the slice has evidently been changed from \(\Delta x \) to \[ \Delta x - (s(x,t) - s(x + \Delta x,t)) \] Since the volume of air in the slice is directly proportional to its thickness, the sound wave has at this instant changed the volume of the air initially in the segment \( \Delta x \) by a fraction \[ \dfrac{\Delta V}{V} = \dfrac {s(x + \Delta x, t) - s(x,t)}{\Delta x} = \dfrac{\partial s(x,t)}{\partial x} \] the quotient becoming the exact partial derivative in the limit of a thin slice. 
Therefore, the local extra pressure is directly proportional to minus the gradient of \( s(x,t) \): \[ \Delta p = -B \dfrac{\Delta V}{V} = -B\dfrac{\partial s(x,t)}{\partial x} \]

From F = ma to the Wave Equation

Having found how the local pressure variation relates to \( s(x,t) \), we’re ready to derive the wave equation from F = ma for a thin slice of gas. Recall that for such a slice \( m = \rho V = \rho \pi a^2 \Delta x \), and of course \( a = \frac{\partial^2 s(x,t)}{\partial t^2} \). The net force F on the slice is the difference between the pressure force at x and that at \( x + \Delta x \): \[ F = p(x,t) \pi a^2 - p(x + \Delta x,t) \pi a^2 = - \pi a^2 B \dfrac{\partial s(x,t)}{\partial x} + \pi a^2 B \dfrac{\partial s(x + \Delta x,t)}{\partial x} = \pi a^2 B \Delta x \dfrac{\partial^2 s(x,t)}{\partial x^2}. \] Putting this into F = ma: \[ \dfrac{\partial^2 s(x,t)}{\partial x^2} = \dfrac{1}{\nu^2} \dfrac{\partial^2 s(x,t)}{\partial t^2}, \quad \text{where } \nu = \sqrt{\dfrac{B}{\rho}}. \] This is exactly the wave equation we found for the string, with the longitudinal displacement s now replacing the transverse displacement y, and the bulk modulus playing the role of the string tension, both being measures of stored potential energy arising from local variations in displacement. The densities, of course, play the same role in the two cases, measuring how much kinetic energy is stored for given local displacement velocities.

Boundary Conditions for Sound Waves in Pipes

Since the new wave equation is identical in form to that for waves on a string, our discussion of traveling waves, standing waves, etc., for a string can be carried over with the appropriate changes of notation and applied here. For example, a standing wave in a pipe has the form \( s(x,t) = A \sin kx \sin \omega t \); this would be for a pipe closed at x = 0, so that the air doesn’t move at x = 0. 
The boundary condition for a closed end of a pipe is: \[ s(x,t) = 0 \ \text{at a closed end.} \] What about an open end? In that case, the air is free to move—the boundary condition won’t be \( s(x,t) = 0 \). However, the pressure is not free to vary: it’s atmospheric pressure, the pipe being open to the atmosphere. So at an open end \( \Delta p = 0. \) Remembering that \( \Delta p = -B \frac{\partial s(x,t)}{\partial x}, \) the boundary condition is: \[ \dfrac{ \partial s(x,t)}{\partial x} = 0 \ \text{at an open end.} \]

Harmonic Standing Waves in Pipes

Consider now a standing harmonic wave in a pipe of length L, closed at x = 0 but open at x = L. From the x = 0 boundary condition, the wave must have the form \( s(x,t) = A \sin kx \sin \omega t \). The x = L open-end boundary condition requires that the slope \( \frac{\partial s(L,t)}{\partial x} = 0 \). That is, \( \cos kL = 0 \). Exercise: Prove that the longest wavelength standing wave possible in the pipe has wavelength 4L, and sketch the wave. Exercise: What is the next longest wavelength of a possible standing wave in the pipe? Draw a picture.

Traveling Waves: Power and Intensity

Another solution to the wave equation is \[ s(x,t) = A \sin (kx - \omega t) \] where \( \omega = \nu k \), just as for the string. This is a wave traveling down the pipe. It could be generated by an oscillating plate at the closed end: in other words, a speaker. How much power is this speaker putting out? It’s moving and pushing against the pressure: Power = P = rate of working = force × velocity = pressure × area × velocity. How fast is it moving? 
At time t, the plate is at \[ s(x = 0,t) = -A \sin \omega t \] so it is moving at velocity \[ \nu_{plate}(t) = \dfrac{\partial s(x =0, t)}{\partial t} = -A\omega \cos \omega t. \] The pressure at the plate is \( \Delta p \): \[ \Delta p = -B\dfrac{\partial s(x,t)}{\partial x} = -B\dfrac{\partial}{\partial x} A \sin (kx - \omega t) = -ABk \cos \omega t \] at x = 0. So the rate of working at time t, the power P(t) = velocity × force, is: \[ P(t) = \nu_{plate}(t)\, \Delta p\, \pi a^2 = A^2 B \pi a^2 \omega k \cos^2 \omega t. \] The standard definition of power for any kind of wave generator is the average power over a complete cycle. Since the average value of \( \cos^2 \omega t \) is ½, \[ \text{power } P = \dfrac{1}{2} A^2B\pi a^2 \omega k. \] Using \( B = \nu^2 \rho \), \[ P = \dfrac{1}{2} A^2 \pi a^2 \, \omega^2 \rho \nu. \] This also tells us how much energy there is in the wave as it travels: \[ \dfrac{1}{2} A^2 \pi a^2 \omega^2 \rho \] per meter. The intensity of the wave is the average power per square meter of cross-sectional area: \[ \text{Intensity } I = \dfrac{1}{2} A^2 \, \omega^2 \, \rho \nu \] and I is measured in watts per square meter. The factor ν, the velocity, in the above expression comes about because in one second, the energy delivered by a steady sound wave to one square meter of area perpendicular to the direction of the wave’s motion is the energy in ν cubic meters of wave: taking the speed of sound to be 330 meters per second, 330 cubic meters of sound energy will plough into one square meter each second.
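The formulas above are easy to evaluate numerically. The sketch below uses the text's \( B = 10^5 \) Pa; the air density, pipe length, displacement amplitude, frequency and pipe radius are all illustrative assumptions, not values from the text.

```python
import math

# Speed of sound from the wave equation, using the text's B = 1e5 Pa
B = 1.0e5        # bulk modulus of air, Pa
rho = 1.2        # density of air, kg/m^3 (assumed)
v = math.sqrt(B / rho)                  # ~289 m/s

# Standing waves in a pipe closed at x = 0 and open at x = L:
# cos(kL) = 0  =>  k_n L = (2n + 1) pi / 2  =>  lambda_n = 4L / (2n + 1)
L = 1.0                                 # pipe length, m (assumed)
wavelengths = [4 * L / (2 * n + 1) for n in range(3)]   # longest is 4L

# Average power and intensity of a traveling wave (illustrative values)
A = 1.0e-6                              # displacement amplitude, m
omega = 2 * math.pi * 1000.0            # angular frequency for 1 kHz
a = 0.05                                # pipe radius, m
k = omega / v                           # wavenumber from omega = v * k
P = 0.5 * A**2 * B * math.pi * a**2 * omega * k   # (1/2) A^2 B pi a^2 omega k
I = P / (math.pi * a**2)                # intensity = power per unit area

print(round(v, 1), wavelengths, P, I)
```

Note that \( P = \frac{1}{2}A^2 B \pi a^2 \omega k \) and \( P = \frac{1}{2}A^2 \pi a^2 \omega^2 \rho \nu \) agree, since \( B = \nu^2 \rho \) and \( k = \omega/\nu \); the first exercise's answer, wavelength 4L, is the first list entry.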
Differential Equation of Oscillations

The pendulum is an idealized model in which a material point of mass \(m\) is suspended on a weightless and inextensible string of length \(L.\) In this system, there are periodic oscillations, which can be regarded as a rotation of the pendulum about the axis \(O\) (Figure \(1\)). The dynamics of rotational motion are described by the differential equation \[\varepsilon = \frac{{{d^2}\alpha }}{{d{t^2}}} = \frac{M}{I},\] where \(\varepsilon\) is the angular acceleration, \(M\) is the moment of the force that causes the rotation, and \(I\) is the moment of inertia about the axis of rotation. In our case, the torque is determined by the projection of the force of gravity on the tangential direction, that is, \[M = -mgL\sin \alpha .\] The minus sign indicates that at a positive angle of rotation \(\alpha\) (counterclockwise), the torque causes rotation in the opposite direction. The moment of inertia of the pendulum is given by \[I = m{L^2}.\] Then the dynamics equation takes the form: \[\require{cancel} \frac{{{d^2}\alpha }}{{d{t^2}}} = \frac{{ -\cancel{m}g\cancel{L}\sin \alpha }}{{\cancel{m}{L^{\cancel{2}}}}} = -\frac{{g\sin \alpha }}{L} \;\;\Rightarrow\;\; \frac{{{d^2}\alpha }}{{d{t^2}}} + \frac{g}{L}\sin \alpha = 0.\] In the case of small oscillations, one can set \(\sin \alpha \approx \alpha.\) As a result, we have the linear differential equation \[\frac{{{d^2}\alpha }}{{d{t^2}}} + \frac{g}{L}\alpha = 0\;\;\text{or}\;\;\frac{{{d^2}\alpha }}{{d{t^2}}} + {\omega ^2}\alpha = 0,\] where \(\omega = \sqrt {\large\frac{g}{L}\normalsize} \) is the angular frequency of oscillation. The period of small oscillations is described by the well-known formula \[T = \frac{{2\pi }}{\omega } = 2\pi \sqrt {\frac{L}{g}} .\] However, with increasing amplitude, the linear equation ceases to be valid. In this case, the correct description of the oscillating system implies solving the original nonlinear differential equation. 
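For a concrete number, the small-oscillation formula is easy to evaluate (a 1-metre pendulum is an arbitrary choice):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2
L = 1.0   # pendulum length, m (arbitrary)

omega = math.sqrt(g / L)        # angular frequency, rad/s
T = 2 * math.pi / omega         # period, s; same as 2*pi*sqrt(L/g)

print(round(T, 3))  # 2.006
```

So a 1 m pendulum swings with a period of about two seconds, as long as the amplitude stays small.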
Period of Oscillation of a Nonlinear Pendulum

Suppose that the pendulum is described by the nonlinear second order differential equation \[\frac{{{d^2}\alpha }}{{d{t^2}}} + \frac{g}{L}\sin \alpha = 0.\] We consider the oscillations under the following initial conditions \[{\alpha \left( {t = 0} \right) = {\alpha _0},\;\;\;}\kern-0.3pt{\frac{{d\alpha }}{{dt}}\left( {t = 0} \right) = 0.}\] The angle \({\alpha _0}\) is the amplitude of oscillation. The order of the equation can be reduced if we find a suitable integrating factor. Multiply this equation by the integrating factor \(\large\frac{{d\alpha }}{{dt}}\normalsize.\) This leads to the equation \[ {{\frac{{{d^2}\alpha }}{{d{t^2}}}\frac{{d\alpha }}{{dt}} }+{ \frac{g}{L}\sin \alpha \frac{{d\alpha }}{{dt}} }={ 0,\;\;}}\Rightarrow {{\frac{d}{{dt}}\left[ {\frac{1}{2}{{\left( {\frac{{d\alpha }}{{dt}}} \right)}^2} }\right.}-{\left.{ \frac{g}{L}\cos\alpha } \right] }={ 0.}} \] After integration we obtain the first order differential equation: \[{\left( {\frac{{d\alpha }}{{dt}}} \right)^2} - \frac{{2g}}{L}\cos\alpha = C.\] Given the initial conditions, we find the constant \(C:\) \[C = -\frac{{2g}}{L}\cos{\alpha _0}.\] Then the equation becomes: \[{{\left( {\frac{{d\alpha }}{{dt}}} \right)^2} }={ \frac{{2g}}{L}\left( {\cos\alpha - \cos{\alpha _0}} \right).}\] Next, we apply the double angle identity \[\cos\alpha = 1 - 2\,{\sin ^2}\frac{\alpha }{2},\] which leads to the following differential equation: \[ {{{\left( {\frac{{d\alpha }}{{dt}}} \right)^2} }={ \frac{{4g}}{L} \cdot}\kern0pt{ \left( {{{\sin }^2}\frac{{{\alpha _0}}}{2} - {{\sin }^2}\frac{\alpha }{2}} \right),\;\;}}\Rightarrow {{\frac{{d\alpha }}{{dt}} }={ 2\sqrt {\frac{g}{L}} \cdot}\kern0pt{ \sqrt {{{\sin }^2}\frac{{{\alpha _0}}}{2} - {{\sin }^2}\frac{\alpha }{2}} .}} \] Integrating this equation, we obtain \[{\int {\frac{{d\left( {\frac{\alpha }{2}} \right)}}{{\sqrt {{{\sin }^2}\frac{{{\alpha _0}}}{2} - {{\sin }^2}\frac{\alpha }{2}} }}} }={ \sqrt {\frac{g}{L}} 
\int {dt} .}\] We denote \(\sin {\large\frac{{{\alpha _0}}}{2}\normalsize} = k\) and introduce the new variable \(\theta\) instead of the angle \(\alpha:\) \[{\sin \frac{\alpha }{2} = \sin \frac{{{\alpha _0}}}{2}\sin \theta }={ k\sin \theta .}\] Then \[ {d\left( {\sin \frac{\alpha }{2}} \right) }={ \cos \frac{\alpha }{2}d\left( {\frac{\alpha }{2}} \right) } = {\sqrt {1 - {{\sin }^2}\frac{\alpha }{2}} d\left( {\frac{\alpha }{2}} \right) } = {\sqrt {1 - {k^2}\,{{\sin }^2}\theta } \,d\left( {\frac{\alpha }{2}} \right) } = {k\cos \theta d\theta .} \] It follows that \[{d\left( {\frac{\alpha }{2}} \right) }={ \frac{{k\cos \theta d\theta }}{{\sqrt {1 - {k^2}\,{{\sin }^2}\theta } }}.}\] In the new notation, our equation can be written as \[ {{\int {\frac{{\cancel{k\cos \theta} d\theta }}{{\sqrt {1 - {k^2}\,{{\sin }^2}\theta }\,\cancel{k\cos \theta} }}} }={ \sqrt {\frac{g}{L}} \int {dt} ,\;\;}}\Rightarrow {{\int {\frac{{d\theta }}{{\sqrt {1 - {k^2}\,{{\sin }^2}\theta } }}} }={ \sqrt {\frac{g}{L}} \int {dt} .}} \] Next, we discuss the limits of integration. The passage of the arc from the lowest point \(\alpha = 0\) to the maximum deviation \(\alpha = {\alpha_0}\) corresponds to a quarter of the oscillation period \(\large\frac{T}{4}\normalsize.\) It follows from the relationship between the angles \(\alpha\) and \(\theta\) that \(\sin \theta = 1\) or \(\theta = {\large\frac{\pi}{2}\normalsize}\) at \(\alpha = {\alpha_0}.\) Therefore, we obtain the following expression for the period of oscillation of the pendulum: \[ {{\sqrt {\frac{g}{L}} \frac{T}{4} }={ \int\limits_0^{\large\frac{\pi }{2}\normalsize} {\frac{{d\theta }}{{\sqrt {1 - {k^2}\,{{\sin }^2}\theta } }}} \;\;}}\kern-0.3pt {{\text{or}\;\;T = 4\sqrt {\frac{L}{g}} \cdot}\kern0pt{ \int\limits_0^{\large\frac{\pi }{2}\normalsize} {\frac{{d\theta }}{{\sqrt {1 - {k^2}\,{{\sin }^2}\theta } }}} .}} \] The integral on the right cannot be expressed in terms of elementary functions. 
It is the so-called complete elliptic integral of the \(1\)st kind: \[{K\left( k \right) }={ \int\limits_0^{\large\frac{\pi }{2}\normalsize} {\frac{{d\theta }}{{\sqrt {1 - {k^2}\,{{\sin }^2}\theta } }}} .}\] The function \(K\left( k \right)\) is computed in most mathematical packages. Its graph is shown below in Figure \(2\). The function \(K\left( k \right)\) can also be represented as a power series: \[ {K\left( k \right) }={ \frac{\pi }{2}\left\{ {1 + {{\left( {\frac{1}{2}} \right)}^2}{k^2} + {{\left( {\frac{{1 \cdot 3}}{{2 \cdot 4}}} \right)}^2}{k^4} + {{\left( {\frac{{1 \cdot 3 \cdot 5}}{{2 \cdot 4 \cdot 6}}} \right)}^2}{k^6} + \ldots + {{\left[ {\frac{{\left( {2n - 1} \right)!!}}{{\left( {2n} \right)!!}}} \right]}^2}{k^{2n}} + \ldots } \right\},} \] where the double factorials \({\left( {2n - 1} \right)!!}\) and \({\left( {2n} \right)!!}\) denote the products of odd and even natural numbers, respectively. Note that if we restrict ourselves to the zeroth term of the expansion, assuming that \(K\left( k \right) \approx {\large\frac{\pi }{2}\normalsize},\) we obtain the known formula for the period of small oscillations: \[{{T_0} = 4\sqrt {\frac{L}{g}} K\left( k \right) }\approx {4\sqrt {\frac{L}{g}} \frac{\pi }{2} }={ 2\pi \sqrt {\frac{L}{g}} .}\] The further terms of the series, for \(n \ge 1\), are what account for the anharmonicity of the pendulum's oscillations and the nonlinear dependence of the period \(T\) on the oscillation amplitude \({\alpha_0}.\)
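As a quick numerical illustration, the ratio \(T/T_0\) can be computed directly from the power series for \(K(k)\) above (the pendulum length and the truncation point of the series are arbitrary choices):

```python
import math

def K_series(k, n_terms=60):
    """Complete elliptic integral of the 1st kind via the power series above."""
    total, coeff = 1.0, 1.0
    for n in range(1, n_terms):
        coeff *= (2 * n - 1) / (2 * n)      # builds (2n-1)!! / (2n)!!
        total += coeff**2 * k**(2 * n)
    return math.pi / 2 * total

def pendulum_period(alpha0, L=1.0, g=9.81):
    """Exact period T = 4*sqrt(L/g)*K(k) with k = sin(alpha0/2)."""
    return 4 * math.sqrt(L / g) * K_series(math.sin(alpha0 / 2))

T0 = 2 * math.pi * math.sqrt(1.0 / 9.81)    # small-angle period, ~2.006 s
for alpha0 in (0.1, 0.5, 1.0, 2.0):         # amplitudes in radians
    print(alpha0, round(pendulum_period(alpha0) / T0, 4))
# The ratio T/T0 grows with amplitude: ~1.0006 at 0.1 rad, ~1.33 at 2 rad.
```

This makes the anharmonicity tangible: at an amplitude of 0.1 rad the small-angle formula is off by less than 0.1 %, while at 2 rad the true period is roughly a third longer.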
I do not believe there is a name for your specific inequality (which I rewrite as follows):$$|x|^p+|y|^p\le \big(|x|+|y|\big)^p, \ p \ge1 \iff \boxed{\ \ \left(|x|^p +|y|^p\right)^{\frac{1}{p}} \le |x|+|y|, \quad p \ge 1 \ \ }$$However, it can be viewed as a special case of multiple more general statements, such as the Jensen, AM-GM, Hölder, and probably many other inequalities after an appropriate substitution and/or change of variables. The closest call would probably be the generalized mean inequality: $$M_j\left( x_1, \dots, x_n \right) \leq M_i\left( x_1, \dots, x_n \right) \quad \text{ whenever } \quad j<i. \label{*} \tag{*}$$ Here $M_k \left( x_1, \dots, x_n \right)$ is the so-called power mean, which is defined as $$M_k(x_1,\dots,x_n) = \left( \frac{1}{n} \sum_{i=1}^n x_i^k \right)^{\frac{1}{k}}. $$ In particular, assuming $n=2$, $j = 1$, and $i = p$, and denoting $\left(x_1, x_2 \right) := \left(\,\chi, \gamma \right)$, we get $$ \begin{aligned} M_1\big(\left|\,\chi\right|, \left|\gamma\right|\big) & = \dfrac{1 }{2}\big(\left|\,\chi\right| + \left|\gamma\right|\big) = \dfrac{\left|\,\chi\right| }{2} + \dfrac{\left|\gamma\right| }{2}, \\ M_p\big(\left|\,\chi\right|,\left|\gamma\right|\big) & = \left( \dfrac{1}{2} \left(\left|\,\chi\right|^p+\left|\gamma\right|^p\right)\right)^{\frac{1}{p}} = \Bigg( \left(\frac{\left|\,\chi\right|}{2^{\frac{1}{p}}}\right)^p + \left(\frac{\left|\gamma\right|}{2^{\frac{1}{p}}}\right)^p \Bigg)^{\frac{1}{p}}.\\ \end{aligned} $$ By $\eqref{*}$, we have $$ \dfrac{\left|\,\chi\right| }{2} + \dfrac{\left|\gamma\right| }{2} \le \left( \bigg(\frac{\left|\,\chi\right|}{2^{\frac{1}{p}}}\bigg)^p + \bigg(\frac{\left|\gamma\right|}{2^{\frac{1}{p}}}\bigg)^p \right)^{\frac{1}{p}} . \label{**} \tag{**} $$ Denoting $ x := 2^{-\frac{1}{p}}\chi, \ \ y := 2^{-\frac{1}{p}}\gamma$ and raising both sides of $\eqref{**}$ to the power $p$, we get $$ 2^{1-p}\big(\left|x\right|+\left|y\right|\big)^{p} \le \left|x\right|^{p}+\left|y\right|^{p}. $$ Note, however, that this is the companion bound in the opposite direction: the power-mean inequality controls how small $\left|x\right|^{p}+\left|y\right|^{p}$ can be relative to $\big(\left|x\right|+\left|y\right|\big)^{p}$, not how large. Your inequality itself has a direct one-line proof: for $a, b \ge 0$ and $p \ge 1$, $a \le a + b$ gives $a^p \le a(a+b)^{p-1}$ and likewise $b^p \le b(a+b)^{p-1}$; adding the two yields $a^p + b^p \le (a+b)^p$, and setting $a = |x|$, $b = |y|$ finishes the job. To summarize, I believe that (strictly speaking) there is probably no name for your inequality; the generalized mean inequality is as close as you can get to it, and together the two bounds sandwich $\big(\left|x\right|+\left|y\right|\big)^{p}$ between $\left|x\right|^{p}+\left|y\right|^{p}$ and $2^{p-1}\left(\left|x\right|^{p}+\left|y\right|^{p}\right)$.
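A quick numerical sanity check of the boxed inequality, together with the power-mean bound in the other direction:

```python
import random

random.seed(0)

for _ in range(1000):
    x = random.uniform(-10, 10)
    y = random.uniform(-10, 10)
    p = random.uniform(1, 5)
    lhs = abs(x)**p + abs(y)**p
    rhs = (abs(x) + abs(y))**p
    # The boxed inequality: |x|^p + |y|^p <= (|x| + |y|)^p for p >= 1
    assert lhs <= rhs + 1e-9
    # Companion bound from the power-mean inequality M_1 <= M_p:
    # (|x| + |y|)^p <= 2^(p-1) * (|x|^p + |y|^p)
    assert rhs <= 2**(p - 1) * lhs + 1e-9

print("both bounds hold")
```

Of course a random search proves nothing, but it is a cheap way to catch a direction error in this kind of inequality.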
Consider two unshaded circles $C_r$ and $C_s$ with radii $r>s$ that touch at the origin of the complex plane. The shaded circles $C_1,C_2,\dots,C_7$ (labeled sequentially in the counterclockwise direction) all touch $C_r$ internally and $C_s$ externally. $C_1$ also touches the real axis, and $C_i$ and $C_{i+1}$ touch for $i=1,\dots,6.$ Let $r_i$ denote the radius of $C_i$. Then show that for $i=1,2,\dots,$ $$r_i^{-1} + 3r_{i+2}^{-1} = 3r_{i+1}^{-1} + r_{i+3}^{-1}$$ A picture is attached for clarity. Attempt: Under inversion, $C_s$ and $C_r$ are mapped to lines and $C_i, i \in \left\{1,\dots,6\right\}$ are mapped to circles. The map $f(z) = 1/z$ is conformal, so angles are preserved wherever $f'(z) \neq 0$. Each circle (except $C_7$) has $4$ tangent points, so under the transformation these $\pi/2$ angles are preserved. The resulting image I have after the transformation is that the $C_i$ are mapped to circles that lie between the two lines and touch both of them. The only way I see just now to preserve the angles is to have all the circles of the same radii. If $s$ is the radius of $C_s$ and $r$ the radius of $C_r$, then the centre line between these two lines is $\frac{1}{2}\left(1/s - 1/2r\right)$. (But I do not think this makes sense.) Many thanks
Kinetic Theory of Gases

Molecular nature of matter and behaviour of gases

A gas consists of a large number of identical, tiny, spherical, neutral and elastic particles called molecules. In a gas, molecules move in all possible directions with all possible speeds. The pressure of a gas is due to the elastic collisions of the gas molecules with the walls of the container. The time of contact of a moving molecule with the walls of the container is negligible compared to the interval between two successive collisions with the same wall. Between two collisions a molecule moves in a straight path with uniform velocity. The collisions are perfectly elastic and there are no forces of attraction or repulsion between the molecules. For a gas molecule striking the container wall, impulse = change in momentum of the molecule.

1. Behaviour of gases: Gases at low pressure and high temperature follow the relation \(pV = kT\).

2. The perfect gas equation is given by \(pV = nRT\), where \(n\) is the number of moles, \(R = N_A k_B\) is the universal gas constant, and \(T\) is the absolute temperature in kelvin.

3. 
In terms of density, the perfect gas equation is \(p = \frac{\rho R T}{M_0}\), where \(M_0\) is the molar mass.

4. Boyle's Law: It states that for a given mass of a gas at constant temperature, the volume of that mass of gas is inversely proportional to its pressure, i.e. \(V \propto \frac{1}{p} \;\Rightarrow\; p_1 V_1 = p_2 V_2 = p_3 V_3 = \ldots = \) constant.

5. Charles' Law: It states that for a given mass of an ideal gas at constant pressure, the volume \(V\) of the gas is directly proportional to its absolute temperature \(T\), i.e. \(V \propto T \;\Rightarrow\; \frac{V_1}{T_1} = \frac{V_2}{T_2} = \frac{V_3}{T_3} = \ldots = \) constant.

6. Dalton's Law of Partial Pressures: It states that the total pressure of a mixture of non-interacting ideal gases is the sum of the partial pressures exerted by the individual gases in the mixture, i.e. \(p = p_1 + p_2 + p_3 + \ldots\)
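These laws can all be checked numerically against the perfect gas equation; the numbers below are illustrative:

```python
R = 8.314        # J / (mol K), universal gas constant
n = 1.0          # moles

def pressure(V, T):
    """Perfect gas equation p = nRT / V."""
    return n * R * T / V

# Boyle's law: at constant T, the product p*V is constant
T = 300.0
V1, V2 = 0.010, 0.025            # volumes in m^3
assert abs(pressure(V1, T) * V1 - pressure(V2, T) * V2) < 1e-9

# Charles' law: at constant p, the ratio V/T is constant
p = 101325.0                     # Pa
V_at = lambda T: n * R * T / p
assert abs(V_at(300.0) / 300.0 - V_at(450.0) / 450.0) < 1e-12

# Dalton's law: partial pressures of non-interacting gases add
p_mix = pressure(V1, T) + pressure(V1, T)   # two identical 1-mol gases
assert abs(p_mix - 2.0 * n * R * T / V1) < 1e-9

print("Boyle, Charles and Dalton are all consistent with pV = nRT")
```

Each of the three laws is just \(pV = nRT\) viewed with a different quantity held fixed, which is exactly what the assertions confirm.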
Post summary: The level set method is a powerful alternative way to represent N-dimensional surfaces evolving through space. (This is a significantly extended blog-post version of three slides from my Medical Visualization lecture on image analysis.)

Imagine that you would like to represent a contour in 2D or a surface in 3D, for example to delineate objects in a 2D image or in a 3D volumetric dataset. Now imagine that for some reason you would also like to have this contour or surface move through space, for example to inflate it, or to shrink it, at the same time dynamically morphing the surface to better fit around the object of interest. Mathematically, for N dimensions we would have the following representation: $$ C(p,t) = \lbrace x_0(p,t), x_1(p,t), \cdots, x_{N-1}(p,t) \rbrace\tag{1} $$ where \(C(p,t)\) is the collection of contours that you get when you morph or evolve the initial contour \(C(p,0)\) over time \(t\). At each point in time, each little bit of the contour moves orthogonally to itself (i.e. along its local normal) with a speed \(F(C(p,t))\) that is calculated based on that little bit of contour at that point in time. This morphing of the contour can thus be described as follows: $$ \frac{\partial C(p,t)}{\partial t} = F(C(p,t))\vec{n}(C(p,t))\tag{2} $$ In other words, the change in the contour relative to the change in time is defined by that contour- and time-dependent speed \(F\) and the normal to the contour at that point in space and time. There are at least two ways of representing such contours, both in motion and at rest.

Contours as (moving) points in space: Explicit and Lagrangian. Most programmers would (sooner or later) come up with the idea of using a bunch of points, and the line segments that connect them, to represent 2D contours, and, analogously, dense meshes consisting of triangles and other polygons to represent 3D surfaces, à la OpenGL. 
You would be in good company if you then implemented an algorithm whereby, for each iteration in time, you iterated over all points and moved each point a small distance \(F\times \Delta t\), orthogonally to the contour containing it. Such an algorithm could be called active contours, or snakes, and this way of representing a contour or surface as a collection of points moving through space is often called the Lagrangian formulation. Now imagine that your contour or surface became quite large. You would need to add new points, or a whole bunch of new triangles. This could cause minor headaches. Your headaches would grow in severity if your contour or surface were to shrink, and you would at some point need to remove, very carefully, extraneous points, edges or triangles. Even that headache pales in comparison to the one you would get if your surface, due to the object it was trying to delineate, had to split into multiple surfaces, or later had to merge back into a single surface.

Contours as (changing) measurements of the space around them: Implicit and Eulerian. However, if you were as clever as, say, James Sethian and Stanley Osher, you would decide to sidestep all of that headache and represent your contour implicitly. This can be done by creating a higher-dimensional embedding function or level set function of which the zero level set, or the zero-valued isocontour, is the contour that you’re trying to represent. This is often called the Eulerian formulation, because we’re focusing on specific locations in space as the contour moves through them. Huh? (That was what I said when I first read this.) What’s an isocontour? That’s a line, or a surface (or a hyper-surface in more than 3D) that passes through all locations containing the isovalue. For example, on a contour map you have contour lines going through all positions with the same altitude. 
If I were to create a 2D grid with, at each grid position, the floating-point closest distance to a 2D contour (by convention, the distance inside the contour is negative and outside positive), then the contour line at value zero (or the zero level set) of that 2D grid would be exactly the 2D contour! The 2D grid I’ve described above is known as an embedding function \( \phi(x,t)\) of the 2D contour. There are more ways to derive such embedding functions, but the signed distance field (SDF) is a common and practical method. Let’s summarise what we have up to now: Instead of representing a contour as points, or a surface as triangles, we can represent them as respectively a 2D or 3D grid of signed distance values. For the general case up above, the contour at fixed time \(t_i\) would be: $$ C(t_i) = \lbrace x \vert \phi(x,t_i) = 0 \rbrace $$ Moving contours are simply point-wise changes in their embedding functions: instead of directly evolving the contour, it can be implicitly evolved by incrementally modifying its embedding function. Given the initial embedding function \(\phi(x,t=0)\), the contour can be implicitly evolved as follows: $$ \frac{\partial \phi(x,t)}{\partial t} = -F(x,t) \lvert \nabla \phi(x,t) \rvert $$ where \(F(x,t)\) is a scalar speed function, \(\nabla\) is the gradient operator and \(\lvert \nabla \phi(x,t) \rvert\) is the magnitude of the gradient of the level set function. Note the similarity with the general contour evolution equation 2 above. In practice, this means that for each iteration, you simply raster-scan through the embedding function (2D, 3D or ND grid), and for each grid position you calculate the speed \(F\) at that position, multiply it by the negative gradient magnitude of the embedding function there, and then add the result to whatever value you found there. If you were to extract the zero level set at each iteration, you would see the resultant contour (or surface) deform over time according to the speed function \(F\) that you defined. 
MAGIC!

A visual confirmation

The figure below is a signed distance field representing two concentric circles in 2D, or a 2D donut. Note that the values are negative inside the donut, and positive elsewhere (in the center of the image, and towards the limits of the grid). For the sake of this exposition, let’s define \(F(x,t)=1\), i.e. the donut should get fatter over time. Eventually it will get so fat that the hole in the middle will close up. You can now check by inspection that at each point in the embedding function, or the image, you would have to add a small negative number (\(-1\) multiplied by the gradient magnitude). First, positive regions close to the contour will become negative, increasing the donut's size, and regions further away will approach zero from above, and eventually also become negative, making the donut even fatter. Eventually the region in the middle will become all negative; in other words, the hole will close up.

Conclusion

Representing surfaces implicitly, and specifically as signed distance fields, has numerous advantages. Contours always have sub-grid-point resolution. Topological changes, such as splitting and merging of N-dimensional objects, are handled automatically. Implementation of N-dimensional contour propagation becomes relatively straightforward. With this implicit representation, morphing between two N-dimensional contours is a simple point-wise weighted interpolation! Although iterating through the embedding function is much simpler than managing a whole bunch of triangles and points moving through space, it can be computationally quite demanding. A number of optimization techniques exist, mostly making use of the fact that one only has to maintain the distance field in a narrow region around the evolving contour. These are called narrow-band techniques. The Insight Segmentation and Registration Toolkit, or ITK, has a very good implementation of a number of different level set method variations. 
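The donut example can be reproduced numerically in a few lines. This is a minimal sketch only — plain central-difference gradients and explicit Euler with \(F = 1\), grid size and radii chosen arbitrarily; production implementations use upwind schemes and the narrow-band techniques mentioned above:

```python
import numpy as np

# Signed-distance-style embedding of a 2D annulus ("donut") centred in the
# grid: negative inside the ring, positive in the hole and outside.
n = 200
y, x = np.mgrid[0:n, 0:n]
r = np.hypot(x - n / 2, y - n / 2)
r_inner, r_outer = 30.0, 60.0
phi = np.maximum(r_inner - r, r - r_outer)

def evolve(phi, F=1.0, dt=0.5, steps=20):
    """d(phi)/dt = -F * |grad(phi)|, explicit Euler (sketch, not upwinded)."""
    phi = phi.copy()
    for _ in range(steps):
        gy, gx = np.gradient(phi)           # central differences, axis order (y, x)
        phi -= F * np.hypot(gx, gy) * dt    # add -F * |grad(phi)| * dt everywhere
    return phi

area_before = np.count_nonzero(phi < 0)     # grid cells inside the donut
phi_after = evolve(phi)
area_after = np.count_nonzero(phi_after < 0)
print(area_before, area_after)              # the donut gets fatter: area grows
```

Since \(|\nabla\phi| \approx 1\) for a signed distance field, each step simply pushes both boundaries of the ring outward, exactly the behaviour described above.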
You can also use the open source DeVIDE system to experiment with level set segmentation and 3D volumetric datasets (it makes use of ITK for this). (I’m planning to make available my MedVis post-graduate lecture exercises. Let me know in the comments how much I should hurry up.)
Your loss function would not work because it incentivizes setting $\theta_1$ to any finite value and $\theta_0$ to $-\infty$. Let's call $r(x,y)=\frac{1}{m}\sum_{i=1}^m \left( h_\theta\left(x^{(i)}\right) - y^{(i)} \right)$ the mean residual for $h$. Your goal is to make $r$ as close to zero as possible, not just to minimize it: a high negative value is just as bad as a high positive value. EDIT: You can counter this by artificially limiting the parameter space $\mathbf{\Theta}$ (e.g. you want $|\theta_0| < 10$). In this case, the optimal parameters would lie on certain points on the boundary of the parameter space. See https://math.stackexchange.com/q/896388/12467. This is not what you want.

Why we use the square loss

The squared error $(u-v)^2$ forces $h(x)$ and $y$ to match. It's minimized at $u=v$, if possible, and is always $\ge 0$, because it's the square of the real number $u-v$. $|u-v|$ would also work for the above purpose, as would $(u-v)^{2n}$, with $n$ some positive integer. The first of these is actually used (it's called the $\ell_1$ loss; you might also come across the $\ell_2$ loss, which is another name for the squared error). So, why is the squared loss better than these? This is a deep question related to the link between Frequentist and Bayesian inference. In short, the squared error relates to Gaussian noise. If your data does not fit all points exactly, i.e. $h(x)-y$ is not zero for some point no matter what $\theta$ you choose (as will always happen in practice), that might be because of noise. In any complex system there will be many small independent causes for the difference between your model $h$ and reality $y$: measurement error, environmental factors etc. By the Central Limit Theorem (CLT), the total noise would be distributed Normally, i.e. according to the Gaussian distribution. We want to pick the best fit $\theta$ taking this noise distribution into account. 
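The problem in the first paragraph is easy to see on toy data: pushing $\theta_0$ toward $-\infty$ keeps "improving" the plain residual, while the squared loss has a genuine minimum. (The data below, and fixing $\theta_1$ at its true value, are illustrative assumptions.)

```python
# Toy data generated from y = 2x + 1 (hypothetical example)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

def residual_loss(theta0, theta1=2.0):
    """Plain mean residual -- NOT a valid loss, it is unbounded below."""
    return sum((theta0 + theta1 * x) - y for x, y in zip(xs, ys)) / len(xs)

def squared_loss(theta0, theta1=2.0):
    """Mean squared error -- bounded below, minimized at the true theta0."""
    return sum(((theta0 + theta1 * x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# The plain residual just keeps decreasing as theta0 -> -infinity ...
print([residual_loss(t) for t in (1.0, -10.0, -1000.0)])   # [0.0, -11.0, -1001.0]
# ... while the squared loss is minimized at the true intercept theta0 = 1
print(squared_loss(1.0), squared_loss(-10.0))              # 0.0 121.0
```

Any finite lower bound you pick for $\theta_0$ becomes the "optimum" of the plain residual, which is exactly the boundary-of-the-parameter-space behaviour described in the EDIT.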
Assume $R = h(X)-Y$, the part of $\mathbf{y}$ that your model cannot explain, follows the Gaussian distribution $\mathcal{N}(\mu,\sigma)$. We're using capitals because we're talking about random variables now. The Gaussian distribution has two parameters, mean $\mu = \mathbb{E}[R] = \frac{1}{m} \sum_i h_\theta(X^{(i)})-Y^{(i)}$ and variance $\sigma^2 = E[R^2] = \frac{1}{m} \sum_i \left(h_\theta(X^{(i)})-Y^{(i)}\right)^2$. See here to understand these terms better. Consider $\mu$: it is the systematic error of our measurements. Use $h'(x) = h(x) - \mu$ to correct for systematic error, so that $\mu' = \mathbb{E}[R']=0$ (exercise for the reader). Nothing else to do here. $\sigma$ represents the random error, also called noise. Once we've taken care of the systematic noise component as in the previous point, the best predictor is obtained when $\sigma^2 = \frac{1}{m} \sum_i \left(h_\theta(X^{(i)})-Y^{(i)}\right)^2$ is minimized. Put another way, the best predictor is the one with the tightest distribution around the predicted value, i.e. the smallest variance. Minimizing the least squares loss is the same thing as minimizing the variance! That explains why the least squares loss works for a wide range of problems. The underlying noise is very often Gaussian, because of the CLT, and minimizing the squared error turns out to be the right thing to do! To simultaneously take both the mean and variance into account, we include a bias term in our classifier (to handle systematic error $\mu$), then minimize the squared loss. Followup questions: Least squares loss = Gaussian error. Does every other loss function also correspond to some noise distribution? Yes. For example, the $\ell_1$ loss (minimizing absolute value instead of squared error) corresponds to the Laplace distribution (look at the formula for the PDF in the infobox -- it's just the Gaussian with $|x-\mu|$ instead of $(x-\mu)^2$).
A popular loss for probability distributions is the KL-divergence. The Gaussian distribution is very well motivated because of the Central Limit Theorem, which we discussed earlier. When is the Laplace distribution the right noise model? There are some circumstances where it comes about naturally, but it's more commonly used as a regularizer to enforce sparsity: the $\ell_1$ loss is the least convex among all convex losses. As Jan mentions in the comments, the minimizer of squared deviations is the mean and the minimizer of the sum of absolute deviations is the median. Why would we want to find the median of the residuals instead of the mean? Unlike the mean, the median isn't thrown off by one very large outlier. So, the $\ell_1$ loss is used for increased robustness. Sometimes a combination of the two is used. Are there situations where we minimize both the Mean and Variance? Yes. Look up Bias-Variance Trade-off. Here, we are looking at a set of classifiers $h_\theta \in H$ and asking which among them is best. If we ask which set of classifiers is the best for a problem, minimizing both the bias and variance becomes important. It turns out that there is always a trade-off between them, and we use regularization to achieve a compromise. Regarding the $\frac{1}{2}$ term The $\frac{1}{2}$ does not matter and actually, neither does the $m$ - they're both constants. The optimal value of $\theta$ would remain the same in both cases. The expression for the gradient becomes prettier with the $\frac{1}{2}$, because the 2 from the square term cancels out. When writing code or algorithms, we're usually concerned more with the gradient, so it helps to keep it concise. You can check progress just by checking the norm of the gradient. The loss function itself is sometimes omitted from code because it is used only for validation of the final answer. The $m$ is useful if you solve this problem with gradient descent.
Then your gradient becomes the average of $m$ terms instead of a sum, so its scale does not change when you add more data points. I've run into this problem before: I test code with a small number of points and it works fine, but when I test it with the entire dataset there is loss of precision and sometimes over/under-flows, i.e. the gradient becomes nan or inf. To avoid that, just normalize w.r.t. the number of data points. These aesthetic decisions are used here to maintain consistency with future equations where you'll add regularization terms. If you include the $m$, the regularization parameter $\lambda$ will not depend on the dataset size $m$ and it will be more interpretable across problems.
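The mean-vs-median point above is easy to check numerically. This is a small sketch (mine, not from the original answer) comparing the $\ell_2$ and $\ell_1$ losses at both candidate minimizers, with made-up data containing one outlier:

```python
# Check: the mean minimizes squared (l2) loss, the median minimizes absolute (l1) loss.
data = [1.0, 2.0, 3.0, 4.0, 100.0]  # one large outlier

mean = sum(data) / len(data)         # 22.0: dragged toward the outlier
median = sorted(data)[len(data) // 2]  # 3.0: ignores the outlier

def l2(c):
    return sum((x - c) ** 2 for x in data)

def l1(c):
    return sum(abs(x - c) for x in data)

# The mean gives the smaller squared loss; the median the smaller absolute loss.
assert l2(mean) <= l2(median)
assert l1(median) <= l1(mean)
```

This is exactly the robustness argument: the $\ell_1$-optimal point barely moves when the outlier is added, while the $\ell_2$-optimal point shifts substantially.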
In her paper "Quartic rings associated to binary quartic forms" (https://doi.org/10.1093/imrn/rnr070), Wood showed that $\operatorname{GL}_2(\mathbb{Z})$-classes of integral binary quartic forms are in discriminant-preserving bijection with pairs $(Q,C)$ where $Q$ is a quartic ring and $C$ is a monogenic cubic resolvent ring of $Q$. This allows the cubic resolvent $C$ of a quartic ring $Q$ to be interpreted in terms of the cubic resolvent polynomial of the binary quartic form $F$ corresponding to $(Q,C)$, given by $$ (1) \text{ } \displaystyle \mathcal{C}_F(x) = x^3 - \frac{I(F)}{3}x - \frac{J(F)}{27},$$ where $$\displaystyle I(F) = 12 a_4 a_0 - 3 a_3 a_1 + a_2^2,$$ $$\displaystyle J(F) = 72 a_4 a_2 a_0 + 9 a_3 a_2 a_1 - 27 a_4 a_1^2 - 27 a_0 a_3^2 - 2 a_2^3$$ are the $I$- and $J$-invariants of the binary quartic form $F$. Now suppose $Q$ is a $D_4$-quartic ring, so that the Galois closure of the fraction field $K_Q$ of $Q$ has Galois group isomorphic to the dihedral group $D_4$. Denote by $K_Q^\dagger$ the Galois closure of $K_Q$. Then $K_Q^\dagger$ contains exactly three pairwise non-isomorphic quadratic fields $k_1, k_2, k_3$. Put $N(k_1, k_2, k_3; \mathcal{C}, D)$ for the number of (isomorphism classes of) $D_4$-quartic rings $Q$ of discriminant $D$ with monogenic cubic resolvent $\mathcal{C}$ (so that it corresponds to a polynomial as in (1)) and such that $K_Q^\dagger$ contains the pairwise distinct quadratic fields $k_1, k_2, k_3$. Can one give an estimate for $N(k_1, k_2, k_3; \mathcal{C}, D)$, in particular an upper bound that depends only on $D$?
A better way to think about this is to think about the Dirac equation being obtained from the action of a fermionic field in curved spacetime. Then a lot of the concepts generalize from the Minkowski metric in a more straightforward manner. In flat spacetime we have \[\int d^4x ~\bar{\Psi} (i \gamma^{\mu} \partial_{\mu} - m) \Psi\] But in a curved spacetime we move to \[\int d^4x ~\sqrt{-g} ~\bar{\Psi} (i \gamma^a e^{\mu}_a\nabla_{\mu}-m) \Psi\] where the \(e_a^{\mu}\) are the vielbein, which allow us to establish a locally Minkowski frame where the standard Dirac matrices \(\gamma^{a}\) satisfy the Clifford algebra, and \(\nabla_{\mu}\) is the covariant derivative.
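For completeness, the objects entering the curved-space action satisfy the relations below (written in one common sign/normalization convention; the factor in the spin-connection term varies between references):

```latex
e^{\mu}_{a}\, e^{\nu}_{b}\, \eta^{ab} = g^{\mu\nu}\,, \qquad
\{\gamma^{a}, \gamma^{b}\} = 2\eta^{ab}\,, \qquad
\nabla_{\mu}\Psi = \partial_{\mu}\Psi
  + \tfrac{1}{8}\,\omega_{\mu ab}\,[\gamma^{a},\gamma^{b}]\,\Psi\,,
```

where \(\omega_{\mu ab}\) is the spin connection. The first relation is what lets the flat-space Clifford algebra for \(\gamma^a\) be carried over point by point.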
Here are more details than you wanted to know: Suppose the house has length $l$ and width $w$. The length and width must be non-negative, of course. Then the area is $A(l,w) = lw$. The length of fence required is $2l+2w$ for the exterior wall and $2w$ for the interior separators, hence we have $\lambda(l,w) = 2(l+2w)$. So the problem is $\max\{ A(l,w) \mid \lambda(l,w) \le 900 , l \ge 0, w \ge 0\}$. Inequality constraints are a little harder to work with in general, so we try to remove them. Since the feasible set is compact, there is a maximum. If $(l,w)$ are non-negative and $\lambda(l,w) < 900$, then we can increase $l$ and $w$ a little and increase $A(l,w)$. Hence the problem is equivalent to $\max\{ A(l,w) \mid \lambda(l,w) = 900 , l \ge 0, w \ge 0\}$. The two non-negativity constraints are still present, so one way is to just ignore them and check whether the resulting points are feasible. Solving $\max\{ A(l,w) \mid \lambda(l,w) = 900 \}$ is straightforward since we can let $l = 450-2w$ (because $\lambda(l,w) = 900$), and then $A(450-2w,w) = (450-2w)w$. Note that this is a strictly concave quadratic, hence it has a maximum. Setting the derivative to zero and solving gives $\hat{w} = \frac{225}{2}$, and then we have $\hat{l}=225$. Since $\hat{w}, \hat{l} \ge 0$, this is a solution to the original problem. Hence we have $\hat{w} = \frac{225}{2}$, $\hat{l}=225$, and $A(\hat{l},\hat{w}) = \frac{50625}{2}$.
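As a quick numerical sanity check (a sketch of my own, not part of the original solution), one can scan the feasible widths along the active constraint $2(l+2w)=900$:

```python
# Numerically verify the fence problem: maximize A = l*w subject to 2l + 4w = 900,
# i.e. l = 450 - 2w with w in [0, 225] so that l >= 0.
best_w, best_area = 0.0, -1.0
steps = 90000
for k in range(steps + 1):
    w = 225.0 * k / steps
    l = 450.0 - 2.0 * w
    area = l * w
    if area > best_area:
        best_w, best_area = w, area

assert abs(best_w - 112.5) < 0.01          # matches w-hat = 225/2
assert abs(best_area - 50625 / 2) < 1.0    # matches A = 50625/2
```

The grid maximum lands on $w = 112.5$, $l = 225$, $A = 25312.5$, agreeing with the calculus solution.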
HTML provides a mnemonic form for 252 of the most common symbols. (See a full list from the W3C.) These are called “character entities.” You can simply put Unicode characters directly into an HTML page as long as you have an input method and the content-type of your HTML page is correctly set. However, character entities let you specify non-ASCII characters in HTML using only ASCII text. See notes below. Table of symbols Symbol  TeX              Entity     Unicode ¬       \neg             &not;      x00AC ±       \pm              &plusmn;   x00B1 ·       \cdot            &middot;   x00B7 →       \to              &rarr;     x2192 ⇒       \Rightarrow      &rArr;     x21D2 ⇔       \Leftrightarrow  &hArr;     x21D4 ∀       \forall          &forall;   x2200 ∂       \partial         &part;     x2202 ∃       \exists          &exist;    x2203 ∅       \emptyset        &empty;    x2205 ∇       \nabla           &nabla;    x2207 ∈       \in              &isin;     x2208 ∉       \not\in          &notin;    x2209 ∏       \prod            &prod;     x220F ∑       \sum             &sum;      x2211 √       \surd            &radic;    x221A ∞       \infty           &infin;    x221E ∧       \wedge           &and;      x2227 ∨       \vee             &or;       x2228 ∩       \cap             &cap;      x2229 ∪       \cup             &cup;      x222A ∫       \int             &int;      x222B ≈       \approx          &asymp;    x2248 ≠       \neq             &ne;       x2260 ≡       \equiv           &equiv;    x2261 ≤       \leq             &le;       x2264 ≥       \geq             &ge;       x2265 ⊂       \subset          &sub;      x2282 ⊃       \supset          &sup;      x2283 °       ^\circ           &deg;      x00B0 ×       \times           &times;    x00D7 ⌊       \lfloor          &lfloor;   x230A ⌋       \rfloor          &rfloor;   x230B ⌈       \lceil           &lceil;    x2308 ⌉       \rceil           &rceil;    x2309 The hex representation of a character in HTML is the Unicode value in hex with &#x added on the left and ; on the right. For example, the symbol ∞ can be written &infin; or &#x221E; based on its Unicode value x221E. Internet Explorer 4.01 did not support hex representations, but all newer browsers do. Most HTML entities are not legal in XML; you must use the hex representations instead. Note that you can insert other Unicode characters this way, even if they do not correspond to an HTML character entity. However, you run the risk of some users not having the necessary fonts installed on their computer. The symbols above display correctly in Internet Explorer 4.01 and later with three exceptions: ∅, ∉, and · did not work in IE until version 7. Otherwise all symbols work in a wide variety of browsers. You can access a huge collection of symbols by inserting Unicode characters.
However, you cannot count on a client having the necessary fonts installed to display less common symbols. See Unicode Codepoint chart. A complete list of LaTeX symbols is available here. You can find more about Unicode from the Unicode Consortium. For daily tips on LaTeX and typography, follow @TeXtip on Twitter.
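The entity-to-hex conversion described above is mechanical; this small sketch (mine, not from the original article) builds the hex numeric character reference for any character:

```python
# Build the hex numeric character reference for a Unicode character:
# the code point in hex, with "&#x" on the left and ";" on the right.
def char_ref(c):
    return "&#x{:04X};".format(ord(c))

assert char_ref("\u221E") == "&#x221E;"   # infinity
assert char_ref("\u00AC") == "&#x00AC;"   # logical not
```

Unlike named entities, these numeric references work in both HTML and XML.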
Anomaly Detection Algorithms Anomaly detection is a class of semi-supervised (close to unsupervised) learning algorithms widely used in manufacturing, data centres, fraud detection and, as the name implies, anomaly detection. Normally this is used when we have an imbalanced classification problem with, say, approximately 20 examples of y=1 (anomaly) against 10,000 examples of y=0. An example would be identifying faulty aircraft engines based on a wide number of parameters, where the anomalous data might not be available or, if it is available, will make up less than 0.1% of the data. Algorithm: Suppose there are m training examples $x^{(1)}, \dots, x^{(m)}$, each with features $x_1, \dots, x_n$. Problem Statement: Is a new example $x$ anomalous? Approach: Model p(x) from the data; $p(x) = p(x_1; \mu_1, \sigma_1^2)\,p(x_2; \mu_2, \sigma_2^2)\cdots p(x_n; \mu_n, \sigma_n^2)$, or $p(x) = \prod_{j=1}^{n} p(x_j; \mu_j, \sigma_j^2)$. Identify unusual/anomalous examples by checking whether $p(x) < \epsilon$. Gaussian Distribution An assumption of the above model is that the features are distributed as per the Gaussian distribution with mean $\mu_j$ and variance $\sigma_j^2$. If the features are distributed in a different way, apply transformations to convert them to the normal distribution. Normal Distribution: $x \sim \mathcal{N}(\mu, \sigma^2)$, with probability density function $p(x; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$. Parameter Estimation: $\mu_j = \frac{1}{m}\sum_{i=1}^{m} x_j^{(i)}$, $\sigma_j^2 = \frac{1}{m}\sum_{i=1}^{m}\left(x_j^{(i)} - \mu_j\right)^2$. Anomaly Detection Algorithm Choose features $x_j$ that you think might be indicative of anomalous examples. Fit parameters $\mu_1, \dots, \mu_n, \sigma_1^2, \dots, \sigma_n^2$ using the formulae above. Given a new example $x$, compute $p(x) = \prod_{j=1}^{n} p(x_j; \mu_j, \sigma_j^2)$; flag an anomaly if $p(x) < \epsilon$. Example - Dividing data into Train, CV and Test Set Anomaly Detection vs Supervised Learning Source material from Andrew Ng’s awesome course on Coursera. The material in the video has been written in text form so that anyone who wishes to revise a certain topic can do so without going through the entire video lectures.
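The model above can be sketched in a few lines of NumPy. This is a minimal illustration (the data, the threshold $\epsilon$, and the test points are all made up for the example; in practice $\epsilon$ would be tuned on a labelled cross-validation set):

```python
import numpy as np

# Gaussian anomaly detection: fit per-feature Gaussians on (assumed-normal) training data.
rng = np.random.default_rng(0)
X_train = rng.normal(loc=[5.0, 10.0], scale=[1.0, 2.0], size=(1000, 2))

# Parameter estimation: mu_j and sigma_j^2 for each feature j
mu = X_train.mean(axis=0)
var = X_train.var(axis=0)

def p(x):
    """p(x) = product over features of the Gaussian density p(x_j; mu_j, sigma_j^2)."""
    dens = np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    return float(np.prod(dens))

eps = 1e-6  # illustrative threshold

assert p(np.array([5.1, 9.8])) > eps   # typical point: not flagged
assert p(np.array([0.0, 25.0])) < eps  # far from both feature means: flagged anomalous
```

Note that the density product assumes the features are (approximately) independent Gaussians, which is exactly the modelling assumption stated above.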
Mechanical Properties of Fluids Pressure Static pressure is the normal force experienced per unit area of cross-section: P = F/A. For a liquid column of height h, P = ρgh; at the bottom of the liquid, the pressure is mainly due to the weight of the liquid. Pressure exerted by the liquid alone is called gauge pressure. Absolute pressure is the sum of atmospheric pressure and gauge pressure. Pressure is isotropic: the pressure exerted by a liquid at a point is the same in all directions. Absolute pressure is always positive and never zero; gauge pressure may be positive, negative, or zero. The pressure exerted by a liquid changes when it accelerates in the horizontal or vertical direction. A barometer is an instrument used to measure atmospheric pressure; a manometer is an instrument used to measure the pressure exerted by gases. According to Pascal's law, if an external pressure is applied to an enclosed fluid, it is transmitted undiminished to every point in the fluid. The energy possessed by a fluid by virtue of its pressure is called pressure energy. When two liquids of equal mass are mixed, the effective density is \(\rho=\frac{2\rho_1\rho_2}{\rho_1+\rho_2}\). When two liquids of equal volume are mixed, the effective density is \(\rho=\frac{\rho_1+\rho_2}{2}\). When a body is partially or wholly immersed, the upward force exerted on it is called buoyancy.
1. Density of a substance is defined as the mass per unit volume of the substance. Density, \(\rho = \frac{\text{Mass}(M)}{\text{Volume}(V)}\) 2. Relative density of a substance is defined as the ratio of its density to the density of water at 4°C. Relative density = \(\frac{\text{Density of a substance}}{\text{Density of water at } 4°C}\) 3. When two liquids of the same mass m but of different densities ρ₁ and ρ₂ are mixed together, the density of the mixture is \(\rho= \frac{2\rho_{1}\rho_{2}}{\rho_{1}+\rho_{2}}\) 4. When two liquids of the same volume V but of different densities ρ₁ and ρ₂ are mixed together, the density of the mixture is \(\rho= \frac{\rho_{1}+\rho_{2}}{2}\) 5. Density of a liquid varies with pressure: \(\rho=\rho_{0}\left[1+\frac{\Delta p}{K}\right]\) 6. Pressure is defined as the thrust acting per unit area of the surface in contact with the liquid: \(P=\frac{\text{Thrust}(F)}{\text{Area}(A)}=\frac{F}{A}\) 7. Pressure exerted by a liquid column: p = hρg 8. Mean pressure on the walls of a vessel containing liquid up to height h is \(\frac{h\rho g}{2}\) 9. The pressure p at depth h below the open surface of a liquid is given by p = pₐ + hρg 10. Atmospheric pressure = hdg = 76 × 13.6 × 980 dyne/cm²
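The two mixture-density formulas can be checked with a quick sketch (my own illustration; the densities are example values):

```python
# Effective density of a mixture of two liquids (equal-mass vs equal-volume cases).
def density_equal_mass(r1, r2):
    # Equal masses: total mass 2m over total volume m/r1 + m/r2 -> harmonic-type mean
    return 2 * r1 * r2 / (r1 + r2)

def density_equal_volume(r1, r2):
    # Equal volumes: total mass V(r1 + r2) over total volume 2V -> arithmetic mean
    return (r1 + r2) / 2

# Example: liquids of density 1000 kg/m^3 and 500 kg/m^3
assert abs(density_equal_mass(1000, 500) - 2000 / 3) < 1e-9   # ~666.7 kg/m^3
assert density_equal_volume(1000, 500) == 750.0
```

Note the equal-mass mixture is always the lighter of the two averages, since the harmonic mean never exceeds the arithmetic mean.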
The standard error of the slope coefficient In simple linear regression, the standard error of the slope is a measure of the accuracy of the estimated coefficient \(b_1\): it is the estimated standard deviation of the error in measuring it. A common formula is: SE of regression slope = \(s_{b_1} = \sqrt{\dfrac{\sum_i (y_i - \hat{y}_i)^2 / (n-2)}{\sum_i (x_i - \bar{x})^2}}\). The \(n-2\) term accounts for the loss of 2 degrees of freedom: two parameters (the slope and the intercept) were estimated in order to estimate the sum of squares. For the same reason, the estimate of the error variance is \(\hat{\sigma}^2 = \frac{1}{n-2} \sum_i \hat{\epsilon}_i^2\). More data yields a systematic reduction in the standard error of the slope, so larger samples estimate the coefficient with greater precision. To find the confidence interval for the slope, apply the following four-step approach, which is valid whenever the standard requirements for simple linear regression are met: 1. Select a confidence level; the usual default value is 95%. 2. Find the critical value: a t score with n - 2 degrees of freedom (in Excel, T.INV.2T(0.05, n - 2)). 3. Compute the margin of error: ME = critical value × standard error. 4. The confidence interval is \(b_1 \pm\) ME. For example, with 99 degrees of freedom and a cumulative probability equal to 0.995, the critical t-value is 2.63; with an estimated slope of 0.55 and a margin of error of 0.63, we are 99% confident that the true slope of the regression line is in the range defined by 0.55 ± 0.63, i.e. the interval -0.08 to 1.18. Regression software reports each coefficient's standard error next to its estimate; the column might be labelled "StDev", "SE", "Std Dev", or something else. Note that adjusted R-squared can actually be negative: a model does not always improve when more variables are added. All of the calculations above can be done on a spreadsheet. See http://stattrek.com/regression/slope-confidence-interval.aspx?Tutorial=AP and http://onlinestatbook.com/lms/regression/accuracy.html for worked examples (e.g. electric bill in dollars vs. home size in square feet).
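The slope standard error and confidence interval can be computed directly; this is a sketch with made-up data (the t critical value 2.776 for 4 degrees of freedom at the 95% level is hard-coded rather than looked up):

```python
import math

# Standard error of the slope: SE(b1) = sqrt(SSE / (n - 2)) / sqrt(sum (x - xbar)^2)
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]   # roughly y = 2x, illustrative data

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
b0 = ybar - b1 * xbar

sse = sum((yi - (b0 + b1 * xi)) ** 2 for xi, yi in zip(x, y))
se_b1 = math.sqrt(sse / (n - 2)) / math.sqrt(sxx)  # n - 2: slope and intercept estimated

margin = 2.776 * se_b1   # t critical value for n - 2 = 4 DF, 95% two-sided
assert se_b1 > 0
assert b1 - margin < 2.0 < b1 + margin   # the underlying slope (~2) lies in the interval
```

With real data one would fetch the critical value from a t table (or `scipy.stats.t.ppf`) instead of hard-coding it.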
When is a new finding significant? Hypothesis testing can help answer this question. A hypothesis test consists of a null hypothesis and an alternative hypothesis, and helps us determine whether a new finding is significant, and thus whether a parameter is different from what we originally assumed. Example: Say we assume a mean height of 180 cm for Scandinavian men aged 65-75, but a new sample now results in a mean height of 183 cm. Is there reason to think that the mean height is now more than 180? We can test this new finding against the originally assumed mean value, expressed with the \(H_0\) hypothesis, against the alternative hypothesis, \( H_1\). We test \( H_0 \ vs. H_1\). Start by defining our alternative hypothesis (\( H_1\)): Suggesting a change You might find it easiest to start off by defining the alternative hypothesis, which expresses the “Hey, we just found a new result, which indicates that the mean value actually is different from what we originally assumed as our mean”, or “We have found that the mean might have changed”. In our example, the alternative hypothesis would thus say: We have found that the mean height could be more than 180, which can be expressed: \(H_1: \mu > 180\) The null hypothesis (\( H_0\)): The “conservative” The \(H_0\) expresses the “conservative” part of the hypothesis, saying: there are no changes, and if there were, they would be in the opposite direction of the new findings. Things are as they have always been. Also, the \(H_0\) hypothesis must contain an “equal to” sign (=): it must say “equal to”, or “equal to” combined with less than or greater than. So, back to the Scandinavian men: in this case, our \(H_0\) would be that the mean height is 180 cm or less. So \( H_0: \mu\leq 180\), and therefore our hypothesis test can be expressed like this: \(\displaystyle H_0: \mu\leq 180\qquad vs.
\qquad H_1: \mu > 180 \) The standard deviation related to the sample size Now, we will test which of these two hypotheses to accept and which to reject. Is the new finding significant, and does it thus suggest a change? The new finding must be seen in relation to the sample it came from. How large was the sample, and in what way was the sampling carried out? How large was the spread of the data: was there a relatively large difference between the individual heights? The spread is usually called the standard deviation, and the formula is: \(\displaystyle s = \sqrt{\frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2}\) Where: \(n \) = sample size, \(x_i \) = each individual data point, \(\bar{x}\) = sample mean. Let's say that our sample gives a standard deviation of 8 cm and that our sample consists of 50 randomly selected Scandinavian men aged 65-75. The difference between our “original” mean and the new finding is seen in relation to the standard deviation, and for this, the standard deviation is seen in relation to the sample size, which is expressed: \(\displaystyle \frac{s}{\sqrt {n}}\) z-score = how many standard deviations our finding is from the assumed mean We can now express the difference between the original mean and the new finding in units of the standard deviation related to the sample size. This value is called the z-score, and it gives the number of standard deviations our new finding is from our mean. With this, we can determine whether it is far enough from the mean to conclude that there is a significant difference between the original and the new, i.e. whether we can say that there is a change from the original, or not. Should we reject the \(H_0\) hypothesis and conclude that there is a change, or vice versa? That is what our z-score indicates. Since we have a sample size (n) larger than 30, we run a z-statistic, which approximately follows the standard normal distribution.
The z-score calculation: \(\displaystyle z = \frac{\bar x - \mu}{s/\sqrt{n}}\), which approximately follows the standard normal distribution \(\mathrm{Normal}(0,1)\). The population \(\sigma\) is unknown, so we use the sample standard deviation \(s = 8\) and can now write: \(\displaystyle z = \frac{183 - 180}{8/\sqrt{50}} \approx 2.65\) Is the z-score significant? Is our finding significant? The z-score value can be looked up in the z-score table and compared to the level that we have set as our level of acceptance, also called the significance level, which is denoted by alpha, \(\alpha\). The significance level \(\alpha\) is set at the same time as we define our hypotheses, since we test our finding against it; we could set it at 5%. In our case, this would be written: \(\displaystyle H_{0}\qquad vs \qquad H_{1}\qquad \alpha = 5\%\) Shown on the density curve, in our case the bell curve for the normal distribution, \(\alpha\) is the area in the tail. In the pharma industry, a usual significance level is 1%, while in other environments the 5% significance level is habitual. One-sided test In our case, with the heights of Scandinavian men, we are working with a one-sided test, as we are testing “equal to or higher than”. We are testing whether the new findings are sufficiently significant to state that our true mean is higher than 180. We are not saying “different from”, which would lead to a two-sided test. Two-sided test If it is “different from”, we are testing whether the new findings show that there is a difference from our original mean: is it higher or lower? This is a two-sided test, and the significance level is then split between the two tails of the bell curve. Carsten Grube HPd (Highly Persistent & devotional) & MMSD (Mad Math Stat Dad), approaching Master's level in mathematical statistics through self-study, alongside my promotion from full- to half-time dad and freelance whatever analysis and writing. Contact me.
Love to help you. Love to learn
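The one-sided z test from the worked example above can be carried out in a few lines (a sketch using only the Python standard library; the numbers are those from the example):

```python
import math

# One-sided z test: H0: mu <= 180  vs  H1: mu > 180
# Sample: xbar = 183, s = 8, n = 50
xbar, mu0, s, n = 183.0, 180.0, 8.0, 50

z = (xbar - mu0) / (s / math.sqrt(n))
# One-sided p-value P(Z > z) for standard normal Z, via the complementary error function
p_value = 0.5 * math.erfc(z / math.sqrt(2))

assert round(z, 2) == 2.65
assert p_value < 0.05   # significant at the 5% level: reject H0
```

Since the z-score of about 2.65 lies well beyond the one-sided 5% cutoff of 1.645, we reject \(H_0\) and conclude the mean height has increased.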
First, let me say that a good place to look at would be [1]. I haven't checked there so I'm not sure, but it contains technology for computing such things even for exceptional groups. It uses a so-called "birdtrack" notation that treats tensor contractions as graphs. Here however is a way to brute force it. I didn't try it myself so I don't know if it's doable. The generators of $\mathrm{Sp}(2n)$ may be taken as matrices $$T^a = \left\lbrace\begin{aligned}&A^a & 1 \leq &a \leq n^2 \\&B^a & n^2+1 \leq &a \leq \tfrac12n(3n + 1) \\&C^a & \tfrac12n(3n + 1)+1 \leq &a \leq n(2n + 1)\end{aligned}\right.$$where $A^a,B^a$ and $C^a$ are defined as $2\times 2$ block matrices with $n\times n$ blocks$$A^a=\left(\begin{matrix}m^a & 0 \\0& -m^{a\,\mathsf{T}}\end{matrix}\right)\,,\qquad B^a=\left(\begin{matrix}0 & s^{a-n^2} \\0& 0\end{matrix}\right)\,,\qquad C^a=\left(\begin{matrix}0 & 0 \\s^{a-\frac12n(3n+1)}& 0\end{matrix}\right)\,.$$with $m^a$ being a matrix with all zeros and a $1$ in the $a$th entry and $s^a$ being the $a$th matrix in the basis of symmetric matrices. If we let $a$ be a multi index $a \to (\mu,\nu)$ we can write those generators explicitly in terms of Kronecker deltas$$\begin{aligned}(m^{\mu\nu})_{ij} &= \delta^\mu_{i} \delta^\nu_j\,,\\(s^{\mu\nu})_{ij} &= \tfrac{1}{2\sqrt{2}}\left(\delta^\mu_{i} \delta^\nu_j+\delta^\mu_{j} \delta^\nu_i\right)\,.\end{aligned}$$I'm not entirely sure about the normalization.${}^1$ Now you can write your sum as (sorry if I named the indices differently)$$\sum_{a=1}^{n(2n+1)} (T^a)_{IJ}\,(T^a)_{KL} = \sum_{\mu,\nu = 1}^n (T^{\mu\nu})_{IJ}\,(T^{\mu\nu})_{KL}\,,$$and now you'll have many cases according to whether $I,J,K,L$ is smaller or larger than $n$. If it's smaller then $I\to i$ and if it's bigger $I \to i - n$, and the appropriate blocks need to be considered. In the end you'll end up with contractions involving only $\delta^\mu_i$ which can be done easily for general $N = 2n$. [1] Cvitanović, Predrag.
Group Theory: Birdtracks, Lie's, and Exceptional Groups, Chapter 12. $\qquad{}^1$ Anyway you can trace everything in the end and see if it's correct.
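As a quick numerical sanity check of the generator construction above (my own sketch, not from the answer), one can build the blocks for a small $n$ and verify both the count $n(2n+1)$ and the defining symplectic-algebra condition $X^{\mathsf{T}}J + JX = 0$ with $J = \begin{pmatrix}0 & I \\ -I & 0\end{pmatrix}$:

```python
import numpy as np

# Build the Sp(2n) generators described above for n = 2 and check
# that each X satisfies X^T J + J X = 0.
n = 2
Z = np.zeros((n, n))
J = np.block([[Z, np.eye(n)], [-np.eye(n), Z]])

def m_basis():
    for mu in range(n):
        for nu in range(n):
            m = np.zeros((n, n)); m[mu, nu] = 1.0
            yield m

def s_basis():
    c = 1.0 / (2.0 * np.sqrt(2.0))   # normalization as in the answer
    for mu in range(n):
        for nu in range(mu, n):
            s = np.zeros((n, n))
            s[mu, nu] += c; s[nu, mu] += c
            yield s

gens = []
for m in m_basis():   # A-type: block diag(m, -m^T)
    gens.append(np.block([[m, Z], [Z, -m.T]]))
for s in s_basis():   # B-type: symmetric upper-right block
    gens.append(np.block([[Z, s], [Z, Z]]))
for s in s_basis():   # C-type: symmetric lower-left block
    gens.append(np.block([[Z, Z], [s, Z]]))

assert len(gens) == n * (2 * n + 1)   # dim Sp(2n) = n(2n+1) = 10 for n = 2
for X in gens:
    assert np.allclose(X.T @ J + J @ X, 0.0)
```

The symmetry of the $s$ blocks is exactly what makes the B- and C-type matrices satisfy the condition, as one can also see by hand.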
A series in which successive terms have opposite signs is called an alternating series. The Alternating Series Test (Leibniz’s Theorem) This test gives a sufficient condition for convergence. It is also known as Leibniz’s Theorem for alternating series. Let \(\left\{ {{a_n}} \right\}\) be a sequence of positive numbers such that \({a_{n + 1}} \lt {a_n}\) for all \(n\); \(\lim\limits_{n \to \infty } {a_n} = 0.\) Then the alternating series \(\sum\limits_{n = 1}^\infty {{{\left( { - 1} \right)}^n}{a_n}} \) and \(\sum\limits_{n = 1}^\infty {{{\left( { - 1} \right)}^{n - 1}}{a_n}} \) both converge. Absolute and Conditional Convergence A series \(\sum\limits_{n = 1}^\infty {{a_n}}\) is absolutely convergent if the series \(\sum\limits_{n = 1}^\infty {\left| {{a_n}} \right|} \) is convergent. If the series \(\sum\limits_{n = 1}^\infty {{a_n}}\) is absolutely convergent, then it is (just) convergent; the converse of this statement is false. A series \(\sum\limits_{n = 1}^\infty {{a_n}}\) is called conditionally convergent if the series is convergent but not absolutely convergent. Solved Problems Click a problem to see the solution. Example 1Use the alternating series test to determine the convergence of the series \(\sum\limits_{n = 1}^\infty {{{\left( { - 1} \right)}^n}\large\frac{{{{\sin }^2}n}}{n}\normalsize}.\) Example 2Determine whether the series \(\sum\limits_{n = 1}^\infty {{{\left( { - 1} \right)}^n}\large\frac{{2n + 1}}{{3n + 2}}\normalsize} \) is absolutely convergent, conditionally convergent, or divergent. Example 3Determine whether \(\sum\limits_{n = 1}^\infty {\large\frac{{{{\left( { - 1} \right)}^{n + 1}}}}{{n!}}\normalsize} \) is absolutely convergent, conditionally convergent, or divergent. Example 4Determine whether the alternating series \(\sum\limits_{n = 2}^\infty {\large\frac{{{{\left( { - 1} \right)}^{n + 1}}\sqrt n }}{{\ln n}}\normalsize} \) is absolutely convergent, conditionally convergent, or divergent.
Example 5Determine the \(n\)th term and test for convergence the series Example 6Investigate whether the series \(\sum\limits_{n = 1}^\infty {\large\frac{{{{\left( { - 1} \right)}^{n + 1}}}}{{5n - 1}}\normalsize} \) is absolutely convergent, conditionally convergent, or divergent. Example 7Determine whether the alternating series \(\sum\limits_{n = 1}^\infty {\large\frac{{{{\left( { - 1} \right)}^n}}}{{\sqrt {n\left( {n + 1} \right)} }}\normalsize} \) is absolutely convergent, conditionally convergent, or divergent. Example 1.Use the alternating series test to determine the convergence of the series \(\sum\limits_{n = 1}^\infty {{{\left( { - 1} \right)}^n}\large\frac{{{{\sin }^2}n}}{n}\normalsize}.\) Solution. By the alternating series test we find that \[ {\lim\limits_{n \to \infty } \left| {{a_n}} \right| } = {\lim\limits_{n \to \infty } \left| {{{\left( { - 1} \right)}^n}\frac{{{{\sin }^2}n}}{n}} \right| } = {\lim\limits_{n \to \infty } \frac{{{{\sin }^2}n}}{n} = 0,} \] since \({\sin ^2}n \le 1.\) Hence, the given series converges. Example 2.Determine whether the series \(\sum\limits_{n = 1}^\infty {{{\left( { - 1} \right)}^n}\large\frac{{2n + 1}}{{3n + 2}}\normalsize} \) is absolutely convergent, conditionally convergent, or divergent. Solution. We try to apply the alternating series test here: \[ {\lim\limits_{n \to \infty } \left| {{a_n}} \right| } = {\lim\limits_{n \to \infty } \frac{{2n + 1}}{{3n + 2}} } = {\lim\limits_{n \to \infty } \frac{{\frac{{2n + 1}}{n}}}{{\frac{{3n + 2}}{n}}} } = {\lim\limits_{n \to \infty } \frac{{2 + \frac{1}{n}}}{{3 + \frac{2}{n}}} }={ \frac{2}{3} \ne 0.} \] Since the \(n\)th term does not approach \(0\) as \(n \to \infty,\) the given series diverges.
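To illustrate conditional convergence concretely (a sketch of my own, not one of the worked examples above): the alternating harmonic series \(\sum (-1)^{n+1}/n\) converges to \(\ln 2\) even though the harmonic series diverges, and the alternating series estimate bounds the error of a partial sum by the first omitted term, \(|S - S_N| \le a_{N+1}\):

```python
import math

# Partial sums of the alternating harmonic series sum_{n>=1} (-1)^(n+1) / n = ln 2.
def partial_sum(N):
    return sum((-1) ** (n + 1) / n for n in range(1, N + 1))

N = 10000
s = partial_sum(N)
# Alternating series remainder bound: |S - S_N| <= a_{N+1} = 1 / (N + 1)
assert abs(s - math.log(2)) <= 1.0 / (N + 1)
```

The same bound explains why absolutely convergent series like Example 3's \(\sum (-1)^{n+1}/n!\) converge far faster: their terms shrink much more quickly.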
ERROR: type should be string, got "https://tex.stackexchange.com/a/88762/17049 provides a beautiful solution for getting spirals like this:\nI've been trying to use this with some mixed text/math content (for a book cover), but I find that even with a tiny excerpt, pdflatex just keeps running and running without terminating. [Using a full core for at least 10 minutes.] The only change I made was to replace\nLorem ipsum ...\nin the code in the linked answer with\nIf $K \\leq G$ and there are inclusions $gKg^-1\\leq K$ for every $g\\in G$, ... .\nIs there any way of getting spirals with mixed text/maths content, either by modifying the linked answer or otherwise?\nMWE:\n\\documentclass{article}\\usepackage{tikz}\\usetikzlibrary{decorations.text}\\makeatletter\\let\\pgf@lib@dec@text@dobox@original=\\pgf@lib@dec@text@dobox%\\def\\pgf@lib@dec@text@dobox{% \\pgf@lib@dec@text@dobox@original% \\ifpgfdecorationtextalongpathscaletext% \\pgfmathparse{\\pgf@lib@dec@text@endscale+(\\pgf@lib@dec@text@startscale-\\pgf@lib@dec@text@endscale)*\\pgfdecoratedremainingdistance/\\pgfdecoratedpathlength}% \\setbox\\pgf@lib@dec@text@box=\\hbox{\\scalebox{\\pgfmathresult}{\\box\\pgf@lib@dec@text@box}}% \\fi%}\\newif\\ifpgfdecorationtextalongpathscaletext\\def\\pgf@lib@dec@text@startscale{1}\\def\\pgf@lib@dec@text@endscale{1}\\pgfkeys{/pgf/decoration/.cd, text path start scale/.code={% \\pgfdecorationtextalongpathscaletexttrue% \\def\\pgf@lib@dec@text@startscale{#1}% }, text path end scale/.code={% \\pgfdecorationtextalongpathscaletexttrue% \\def\\pgf@lib@dec@text@endscale{#1}% }}\\begin{document} \\begin{tikzpicture}[ decoration={ reverse path, text along path, text path start scale=1.5, text path end scale=0, text={If $K \\leq G$ and there are inclusions $gKg^-1\\leq K$ for every $g\\in G$, ... 
.}}]\\draw [decorate] (0,0) \\foreach \\i [evaluate={\\r=(\\i/2000)^2;}] in {0,5,...,2880}{ -- (\\i:\\r)}; \\useasboundingbox (-2.75,-2.75) rectangle (2.75,2.75); \\end{tikzpicture} \\end{document}"
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
If $X$ is a linear subspace of ${\mathbb R}^n$, $X$ is high-dimensional, and for every $x\in X$ we have $(1-\epsilon) \sqrt n ||x||_2 \leq ||x||_1 \leq \sqrt n ||x||_2$ for some small $\epsilon >0$, then we say that $X$ is an almost-Euclidean section of $\ell_1^n$, and (the matrix whose image is) $X$ is useful in compressed sensing. A random subspace works excellently, and there is a huge research program devoted to the explicit construction of such spaces. Is it known what the complexity of approximating the "Euclidean-sectionness" of $X$ is? That is, given a subspace $X$, say presented via a basis, consider the problem of finding the unit (in $\ell_2$ norm) vector in $X$ of smallest $\ell_1$ norm. What is the complexity of this problem? Are hardness of approximation results known? Apart from specific applications, these seem to be interesting problems. Is the complexity of finding the vector of maximum $\ell_1$ norm among the unit vectors of a given subspace known?
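For very small instances the quantity in question can simply be brute-forced; the sketch below is my own illustration of the definition (not an algorithm from the literature), and the grid search is only accurate for 2-dimensional subspaces, where the unit $\ell_2$ sphere is a circle.

```python
import numpy as np

def min_l1_on_unit_sphere_2d(B, steps=20000):
    """Smallest l1 norm over unit (l2) vectors of X = range(B).
    B: n x 2 matrix with orthonormal columns."""
    t = np.linspace(0, np.pi, steps)        # antipodal vectors have equal norms
    C = np.vstack([np.cos(t), np.sin(t)])   # unit circle in coefficient space
    return np.abs(B @ C).sum(axis=0).min()

# a random 2-dimensional subspace of R^100
rng = np.random.default_rng(1)
B, _ = np.linalg.qr(rng.normal(size=(100, 2)))
val = min_l1_on_unit_sphere_2d(B)
print(val / np.sqrt(100))   # fraction of the maximal possible value sqrt(n)
```

Since $\|x\|_2 \le \|x\|_1 \le \sqrt n \|x\|_2$, the result always lies between $1$ and $\sqrt n$; the interesting question above is how hard this minimum is to approximate when the dimension of $X$ grows.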
Tagged: nonsingular matrix Problem 509 Using the numbers appearing in \[\pi=3.1415926535897932384626433832795028841971693993751058209749\dots\] we construct the matrix \[A=\begin{bmatrix} 3 & 14 &1592& 65358\\ 97932& 38462643& 38& 32\\ 7950& 2& 8841& 9716\\ 939937510& 5820& 974& 9 \end{bmatrix}.\] Prove that the matrix $A$ is nonsingular. Problem 500 10 questions about nonsingular matrices, invertible matrices, and linearly independent vectors. The quiz is designed to test your understanding of the basic properties of these topics. You can take the quiz as many times as you like. The solutions will be given after completing all the 10 problems. Problem 486 Determine whether there exists a nonsingular matrix $A$ if \[A^4=ABA^2+2A^3,\] where $B$ is the following matrix. \[B=\begin{bmatrix} -1 & 1 & -1 \\ 0 &-1 &0 \\ 2 & 1 & -4 \end{bmatrix}.\] If such a nonsingular matrix $A$ exists, find the inverse matrix $A^{-1}$. ( The Ohio State University, Linear Algebra Final Exam Problem) Problem 393 (a) Let $A$ be a $6\times 6$ matrix and suppose that $A$ can be written as \[A=BC,\] where $B$ is a $6\times 5$ matrix and $C$ is a $5\times 6$ matrix. Prove that the matrix $A$ cannot be invertible. (b) Let $A$ be a $2\times 2$ matrix and suppose that $A$ can be written as \[A=BC,\] where $B$ is a $ 2\times 3$ matrix and $C$ is a $3\times 2$ matrix. Can the matrix $A$ be invertible? Problem 388 Let $A$ be $n\times n$ matrix and let $\lambda_1, \lambda_2, \dots, \lambda_n$ be all the eigenvalues of $A$. (Some of them may be the same.) For each positive integer $k$, prove that $\lambda_1^k, \lambda_2^k, \dots, \lambda_n^k$ are all the eigenvalues of $A^k$. Problem 387 Let $A$ be an $n\times n$ matrix. Its only eigenvalues are $1, 2, 3, 4, 5$, possibly with multiplicities. What is the nullity of the matrix $A+I_n$, where $I_n$ is the $n\times n$ identity matrix?
( The Ohio State University, Linear Algebra Final Exam Problem) Problem 319 Let $A, B$, and $C$ be $n \times n$ matrices and $I$ be the $n\times n$ identity matrix. Prove the following statements. (a) If $A$ is similar to $B$, then $B$ is similar to $A$. (b) $A$ is similar to itself. (c) If $A$ is similar to $B$ and $B$ is similar to $C$, then $A$ is similar to $C$. (d) If $A$ is similar to the identity matrix $I$, then $A=I$. (e) If $A$ or $B$ is nonsingular, then $AB$ is similar to $BA$. (f) If $A$ is similar to $B$, then $A^k$ is similar to $B^k$ for any positive integer $k$. Problem 289 (a) Find the inverse matrix of \[A=\begin{bmatrix} 1 & 0 & 1 \\ 1 &0 &0 \\ 2 & 1 & 1 \end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason. (b) Find a nonsingular $2\times 2$ matrix $A$ such that \[A^3=A^2B-3A^2,\] where \[B=\begin{bmatrix} 4 & 1\\ 2& 6 \end{bmatrix}.\] Verify that the matrix $A$ you obtained is actually a nonsingular matrix. ( The Ohio State University, Linear Algebra Midterm Exam Problem) Linearly Independent vectors $\mathbf{v}_1, \mathbf{v}_2$ and Linearly Independent Vectors $A\mathbf{v}_1, A\mathbf{v}_2$ for a Nonsingular Matrix Problem 284 Let $\mathbf{v}_1$ and $\mathbf{v}_2$ be $2$-dimensional vectors and let $A$ be a $2\times 2$ matrix. (a) Show that if $\mathbf{v}_1, \mathbf{v}_2$ are linearly dependent vectors, then the vectors $A\mathbf{v}_1, A\mathbf{v}_2$ are also linearly dependent. (b) If $\mathbf{v}_1, \mathbf{v}_2$ are linearly independent vectors, can we conclude that the vectors $A\mathbf{v}_1, A\mathbf{v}_2$ are also linearly independent? (c) If $\mathbf{v}_1, \mathbf{v}_2$ are linearly independent vectors and $A$ is nonsingular, then show that the vectors $A\mathbf{v}_1, A\mathbf{v}_2$ are also linearly independent. Problem 280 Determine whether there exists a nonsingular matrix $A$ if \[A^2=AB+2A,\] where $B$ is the following matrix.
If such a nonsingular matrix $A$ exists, find the inverse matrix $A^{-1}$. (a) \[B=\begin{bmatrix} -1 & 1 & -1 \\ 0 &-1 &0 \\ 1 & 2 & -2 \end{bmatrix}\] (b) \[B=\begin{bmatrix} -1 & 1 & -1 \\ 0 &-1 &0 \\ 2 & 1 & -4 \end{bmatrix}.\] Problem 279 Determine conditions on the scalars $a, b$ so that the following set $S$ of vectors is linearly dependent. \begin{align*} S=\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}, \end{align*} where \[\mathbf{v}_1=\begin{bmatrix} 1 \\ 3 \\ 1 \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} 1 \\ a \\ 4 \end{bmatrix}, \mathbf{v}_3=\begin{bmatrix} 0 \\ 2 \\ b \end{bmatrix}.\] Problem 266 Let $A$ be an $n \times n$ matrix satisfying \[A^2+c_1A+c_0I=O,\] where $c_0, c_1$ are scalars, $I$ is the $n\times n$ identity matrix, and $O$ is the $n\times n$ zero matrix. Prove that if $c_0\neq 0$, then the matrix $A$ is invertible (nonsingular). How about the converse? Namely, is it true that if $c_0=0$, then the matrix $A$ is not invertible? Problem 211 In this post, we explain how to diagonalize a matrix if it is diagonalizable. As an example, we solve the following problem. Diagonalize the matrix \[A=\begin{bmatrix} 4 & -3 & -3 \\ 3 &-2 &-3 \\ -1 & 1 & 2 \end{bmatrix}\] by finding a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$. (Update 10/15/2017. A new example problem was added.)
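As a numerical companion to the last problem (a check, not the intended pencil-and-paper method), the diagonalization can be sketched with numpy; the eigenvector matrix it returns differs from a hand computation by scaling and ordering of columns, but $S^{-1}AS$ is still diagonal.

```python
import numpy as np

# The matrix from Problem 211; its eigenvalues turn out to be 1, 1, 2,
# with the repeated eigenvalue having a 2-dimensional eigenspace, so A
# is diagonalizable.
A = np.array([[ 4., -3., -3.],
              [ 3., -2., -3.],
              [-1.,  1.,  2.]])
w, S = np.linalg.eig(A)              # columns of S are eigenvectors
D = np.linalg.inv(S) @ A @ S         # should equal diag(w)
print(np.round(np.sort(w.real), 6))  # eigenvalues 1, 1, 2
```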
A for awesome wrote: Now Chrome blocks as unsafe (which is good because I added an event handler to one of your input fields, which could easily be used for nefarious purposes).

drc wrote: that's not good

That's probably a good thing. I obviously didn't intend to harm with this link, just wanted to point out that the HTML-on-Catagolue chronicles reference weren't over.

Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X

So there are both b2k3-ckqy4ej5q6in7-cs 2-cin3-acqy4iqty5cejk6c 7-e8 and b2k3-ckqy4ej5q6in7es 2aek3-acqy4iqty5cejk6c 7c8

Airy Clave White It Nay

That's because I pasted the rule as you posted it here. Catagolue itself (probably) doesn't attempt to do any kind of "normalization" on rules, instead just accepting whatever you feed it. And you're using a different (older) version of the soup search script, and (maybe) a different Golly version as well.

Yes, I still use the nontotalistic rule generator, because for some reason, if I search, say, B2i3/S1e3, it will search B23/S13. Oh and please keep searching!

Another oddity: lists of symmetries for a given rule are limited to 40 symmetries. For instance, b3s23 fails to list WW25_c. It would be nice if only official symmetries were listed in the first place. Maybe this could mean that test symmetries would be listed somewhere else?
muzik wrote: It would be nice if only official symmetries were listed in the first place.

I don't think it would be a good idea to suppress mention of other symmetries altogether (doubly so since Calcyman has explicitly encouraged people to use Catagolue for things other than traditional soup-searching), but I sure could get behind the official ones being marked as such. (Actually, that could easily be done in the browser extension.)

Another minus sign puffer. It appears this one (and probably the examples in Day & Night posted by muzik earlier in this thread) is caused by the end destabilizing at just the right moment to be counted as a puffer, but also to be dropping in population at the same time.

Code: Select all
x = 16, y = 16, rule = B2-ac3i4a/S12
bbboobobobbobbbb$obboobboooobbobb$bobboobobbobbooo$obbooboboobbooob$bboobooobooboobb$bbobbobooobbobbo$oboboooboobbobob$oooboboooobboboo$booobobobbbbbooo$bbooobbooobooboo$boooboboobooobob$bbbboboobbobbobb$boooboooobobobbb$obbbbbboobobobbo$boooboobbobobooo$oobobbboobobobbb!

drc wrote: That's not good.

Wait, isn't muzik running bootstrap percolation? https://en.m.wikipedia.org/wiki/Bootstrap_percolation If so, I'd expect xs256s in almost all cases, with a 1/2^14 frequency of xs240s, a 1/2^29 frequency of xs225s, and a 1/2^30 frequency of xs224s. That is totally consistent with the empirical observations. (Admittedly, I'm not Bollobás, so my estimates may be erroneous; if so, I'll ask him next time I visit one of his parties.)

What do you do with ill crystallographers? Take them to the ! mono-clinic

I'd say the problem's more relating to the fact that every single individual transition is listed in the rulestring without being simplified, and it completely breaks the rule listings page. The rule being simulated here is really B2-ac3-i45678/S012345678.

Saka wrote: Oh come on muzik! B4wwwwwww? Please remove the troublesome rule names, calcyman, and replace them with something like "Muzik was naughty" or something.

muzik wrote: It would be better if catagolue and/or the hacked version of apgsearch were to check the names of rules, and then attempt to simplify them, preferably sorting the transitions into alphabetical order. Since right now you can create multiple different censuses for the same rule.

That's the job of the client program, i.e. Aidan F Pierce's 'hacked apgsearch'. Catagolue is permissive by design, allowing arbitrary rules. Moreover, apgluxe demands that rules be canonical (i.e. no b3s2233223333, or range-1 LtL rules, or any other abominations falling foul of rule2asm's regex checks).
I'm about halfway through writing the isotropic backend for apgluxe, after which v4.x will finally dominate all previous versions.

Exciting! I assume that the non-totalistic rules will be automatically corrected to have their transitions set in a specific order so we don't have 53 censuses for a single rule?

Very exciting development! I'm glad to see that we are able to see the haul size, too, it really helps gauge how many soups should be in a haul. 1000000 has worked wonders for my B2-ac3i4a/S12 rule, and I'm excited to see what apgluxe can do in terms of non-totalistic searching.

Can't wait for non-totalistic. Hope it works for me:

Code: Select all
recompile.sh --update

Here's a quick patch to do that; insert the code directly following the line consisting solely of a commented string of hyphens in apg_main(). All this really does is tell Golly to canonize the rulestring for it.
Code: Select all
if g2_8:
    g.setalgo("QuickLife")
    g.setrule(rulestring)
    rulestring = g.getrule()

$$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$ http://conwaylife.com/wiki/A_for_all Aidan F. Pierce

A haul has been uploaded in the symmetry b3s23/%3Cmarqueee%3E%3Cinput%3E%3Cmarquee%3E%3Cinput%3E%3Cmarquee%3E%3Cinput%20onfocus%3D%22function()%7Balert(%22Q%22)%3B%22%7D%3E%3Cmarquee%3E%3Cinput%3E%3Cmarquee%3E%3Cinput%3Es23-ae4i. Looks like someone wanted to see if drc's html trick worked in the census. Evidently it did not. Strangely, the actual haul doesn't exist.

"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life." -Terry Pratchett

It does: http://catagolue.appspot.com/census/b3s23/%253Cmarqueee%253E%253Cinput%253E%253Cmarquee%253E%253Cinput%253E%253Cmarquee%253E%253Cinput%2520onfocus%253D%2522function()%257Balert(%2522Q%2522)%253B%2522%257D%253E%253Cmarquee%253E%253Cinput%253E%253Cmarquee%253E%253Cinput%253Es23-ae4i

As well as the lavender dots on the pulsar page. Calcyman seems to have changed this, BTW, and the B3/S23's list of symmetries now lists all 43 that are currently known.
Catagolue now deletes any census which doesn't abide by the Catagolue naming conventions: http://conwaylife.com/wiki/Catagolue_naming_conventions

I checked beforehand, and only 4 censuses were deleted:

The marquee census
25%
75%
That census which had a space instead of an underscore in the symmetry

https://catagolue.appspot.com/census/b3s23

Also, /hashsoup officially supports different topologies (although no search programs can deal with them yet): https://catagolue.appspot.com/hashsoup/ ... 0280/b3s23

Again, refer to the naming conventions to incorporate your favourite Golly topology into a Catagolue symmetry string.

So much innovation! It actually shouldn't be very difficult to modify apgsearch 1.x to support bounded grids, should it? I might try that myself, for testing purposes if nothing else. EDIT: more difficult than I naively assumed, so I'll leave it to someone more capable for now.

Inflated censuses aren't correctly bolded.
In his answer on cstheory.SE, Lev Reyzin directed me to Robert Schapire's thesis, which improves the bound to $O(n^2 + n\log m)$ membership queries in section 5.4.5. The number of counterexample queries remains unchanged. The algorithm Schapire uses differs in what it does after a counterexample query. Sketch of the improvement At the highest level, Schapire forces $(S,E,T)$ from Angluin's algorithm to satisfy the extra condition that, for a closed $(S,E,T)$ and each $s_1, s_2 \in S$, if $s_1 \neq s_2$ then $row(s_1) \neq row(s_2)$. This guarantees that $|S| \leq n$ and also makes the consistency property of Angluin's algorithm trivial to satisfy. To ensure this, he has to handle the results of a counterexample differently. Given a counterexample $z$, Angluin simply added $z$ and all its prefixes to $S$. Schapire does something more subtle by instead adding a single element $e$ to $E$. This new $e$ will make $(S,E,T)$ not closed in Angluin's sense, and the update to restore closure will introduce at least one new string to $S$ while keeping all rows distinct. The condition on $e$ is: $$\exists s, s' \in S, a \in \Sigma \quad \text{s.t} \quad row(s) = row(s'a) \; \text{and} \; o(\delta(q_0,se)) \neq o(\delta(q_0,s'ae))$$ Where $o$ is the output function, $q_0$ is the initial state, and $\delta$ the update rule of the true 'unknown' DFA. In other words, $e$ must serve as a witness to distinguish the future of $s$ from $s'a$. To extract this $e$ from $z$, we do a binary search for a suffix $r_i$, with $z = p_ir_i$ and $0 \leq |p_i| = i < |z|$, such that the behavior of our conjectured machine differs based on one input character. In more detail, we let $s_i$ be the string corresponding to the state reached in our conjectured machine by following $p_i$. We use binary search (this is where the $\log m$ comes from) to find a $k$ such that $o(\delta(q_0,s_kr_k)) \neq o(\delta(q_0,s_{k+1}r_{k+1}))$.
In other words, $r_{k+1}$ distinguishes two states that our conjectured machine finds equivalent and thus satisfies the condition on $e$, so we add it to $E$.
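The binary search step can be sketched as follows. This is a hedged illustration, not Schapire's code: the helper names `member` and `access` are mine, standing for the membership oracle of the target DFA and the map from a prefix $p_i$ to the access string $s_i$ of the hypothesis state it reaches.

```python
def find_distinguishing_suffix(z, member, access):
    """Return the suffix e = r_{k+1} where the answer on s_k r_k and
    s_{k+1} r_{k+1} differs. h(i) is the oracle's answer on s_i r_i."""
    h = lambda i: member(access(z[:i]) + z[i:])
    lo, hi = 0, len(z)
    # invariant: h(lo) != h(hi). It holds initially because z is a
    # counterexample: h(0) is the true answer on z, while h(len(z)) is
    # what the hypothesis (via its access strings) believes about z.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if h(mid) != h(lo):
            hi = mid
        else:
            lo = mid
    return z[lo + 1:]       # e = r_{k+1} with k = lo

# Toy check: target language = even number of a's; a one-state hypothesis
# whose every prefix maps to the empty access string.
member = lambda w: w.count('a') % 2 == 0
access = lambda p: ''
print(find_distinguishing_suffix('abb', member, access))  # 'bb'
```

Each iteration halves the interval, so only $O(\log |z|) = O(\log m)$ membership queries are spent per counterexample, which is where the $n \log m$ term in the bound comes from.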
I am reading this paper about density estimation (Appendix A), where the authors apply a Fourier transform to the estimated probability density (the $X_j$ are a sample of $N$ data points drawn independently from an unknown probability density function $f(x)$): $$ \hat{f}(x) = \frac1N \sum^N_{j=1} K(x-X_j) $$ The authors now note that they apply the convolution theorem and the Fourier transform (where the Fourier transform of the function $g(x)$ is defined by $\Phi_g(t) = \int \exp\lbrace\mathrm{i}xt\rbrace g(x) \mathrm{d}x$), and obtain: $$ \Phi_{\hat{f}}(t) = \kappa(t) \frac1N \sum^N_{j=1} \exp\lbrace\mathrm{i}tX_j\rbrace $$ where $\kappa(t)$ is the Fourier transform of the kernel $K$. Now, I am a bit stuck on how they obtained this form and how exactly they used the convolution theorem here. In particular, what happened to the argument $X_j$ in the kernel? EDIT: Ok, I think I got it. Is this the right approach? We apply the Fourier transform and obtain: $$ \Phi_{\hat{f}}(t) = \frac1N \sum^N_{j=1} \int K(x-X_j) \exp\lbrace\mathrm{i}xt\rbrace\mathrm{d}x $$ We can easily rewrite this equation as an integral over a Dirac delta distribution: $$ \Phi_{\hat{f}}(t) = \frac1N \sum^N_{j=1} \iint K(y) \delta(y-(x-X_j)) \exp\lbrace\mathrm{i}xt\rbrace\mathrm{d}x\mathrm{d}y $$ Integrating over $x$ and using the symmetry of $\delta$ we obtain: $$ \Phi_{\hat{f}}(t) = \frac1N \sum^N_{j=1} \int K(y) \exp\lbrace\mathrm{i}(y+X_j)t\rbrace\mathrm{d}y $$ We see that we can move the factor involving $X_j$ outside the integral, and are left with the result: $$ \Phi_{\hat{f}}(t) = \frac1N \sum^N_{j=1} \exp\lbrace\mathrm{i}X_jt\rbrace \int K(y) \exp\lbrace\mathrm{i}yt\rbrace\mathrm{d}y $$ where the first factor is $\Delta(t) = \frac1N \sum^N_{j=1} \exp\lbrace\mathrm{i}X_jt\rbrace$, and the second factor is $\kappa(t) = \int K(y) \exp\lbrace\mathrm{i}yt\rbrace\mathrm{d}y$, exactly as in the paper. I am still not sure to what extent they applied the convolution theorem here.
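As a sanity check of the identity (my own numerical experiment, not from the paper), one can compare both sides for a Gaussian kernel of width $h$, whose transform under this convention is $\kappa(t) = \exp(-h^2 t^2/2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=5)                    # sample X_1, ..., X_N
h, t = 0.7, 1.3                           # kernel width and test frequency

# Left-hand side: integrate the KDE against exp(i x t) on a wide grid.
x = np.linspace(-20, 20, 40001)
K = lambda u: np.exp(-u**2 / (2 * h**2)) / (h * np.sqrt(2 * np.pi))
fhat = np.mean([K(x - xj) for xj in X], axis=0)
lhs = (fhat * np.exp(1j * x * t)).sum() * (x[1] - x[0])

# Right-hand side: kernel transform times empirical characteristic function.
rhs = np.exp(-h**2 * t**2 / 2) * np.mean(np.exp(1j * t * X))

print(abs(lhs - rhs))   # close to zero
```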
I would like to estimate the "harmonicity" (harmonics-to-noise ratio ?) of an audio signal from its spectrum. What kind of algorithm can I use? Please take a look at this previous answer. Using the semantics and mathematical definitions from that answer... Let $x[n]$ be the input signal from which you want to measure harmonicity or periodicity. Assume $x[n]$ has no DC component (like it is the output of a DC blocking filter). And $n_0$ is the sample index of the neighborhood of signal around where you want the harmonicity to be measured. $N$ is a large integer defining how wide your rectangular window is. Average Squared Difference Function, ASDF: $$ Q_x[k, n_0] \triangleq \frac{1}{N} \sum\limits_{n=0}^{N-1} \left(x[n+n_0-\left\lfloor \tfrac{N+k}{2}\right\rfloor] \ - \ x[n+n_0-\left\lfloor \tfrac{N+k}{2}\right\rfloor + k] \right)^2 $$ $\left\lfloor \cdot \right\rfloor$ is the floor() function and, if $k$ is even then $ \left\lfloor \frac{k}{2}\right\rfloor = \left\lfloor \frac{k+1}{2}\right\rfloor = \frac{k}{2} $. From that define an alternative form of Autocorrelation: $$ R_x[k,n_0] = R_x[0,n_0] - \frac12 Q_x[k, n_0] $$ where $$ R_x[0, n_0] \triangleq \frac{1}{N} \sum\limits_{n=0}^{N-1} (x[n+n_0-\left\lfloor \tfrac{N}{2}\right\rfloor])^2 $$ Since $Q_x[0, n_0] = 0$ and $Q_x[k, n_0] \ge 0$ for all lags $k$, that means that $ R_x[k, n_0] \le R_x[0, n_0] $ for all lags $k$. You will find a period $P$ such that $P > 0$ and $R_x[P, n_0] > R_x[k, n_0]$ for all lags $k>k_0$, other than the highest peak at $k=0$. $k_0$ is the smallest value of $k>0$ such that $ R_x[k, n_0]<0 $. Once that peak location $P$ is found, the harmonicity is $$ r_x(n_0) \triangleq \frac{R_x[P, n_0]}{R_x[0, n_0]} $$ If $r_x(n_0) = 1$, then $x[n]$ is perfectly periodic (or "harmonic") in the vicinity of sample index $n_0$. If $r_x(n_0) = 0$, it's noise, no periodicity to be found. 
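A simplified sketch of this measure in Python, under stated assumptions: one long window, plain full-signal autocorrelation in place of the exact centered ASDF indexing above, and a function name of my own choosing.

```python
import numpy as np

def harmonicity(x, max_lag):
    x = x - x.mean()                                  # crude DC removal
    r = np.correlate(x, x, mode='full')[len(x) - 1:]  # r[k] for k >= 0
    neg = np.nonzero(r[:max_lag] < 0)[0]              # k0: first lag with r[k] < 0
    if len(neg) == 0:
        return 0.0                                    # no periodicity found
    k0 = neg[0]
    P = k0 + np.argmax(r[k0:max_lag])                 # dominant peak past k0
    return r[P] / r[0]

n = np.arange(4096)
print(harmonicity(np.sin(2 * np.pi * n / 64), 256))   # near 1: periodic
rng = np.random.default_rng(0)
print(harmonicity(rng.normal(size=4096), 256))        # near 0: noise
```

The skip past the first zero crossing $k_0$ is what prevents the trivial peak at lag 0 from being reported as the period.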
To answer the broad question "what algorithms you could use": there are different ways of computing a harmonics-to-noise ratio, in the time or the frequency domain, from basic interval cutting in the frequency domain to more involved cepstral/liftering techniques with local peak detection, using a limited number of partials, etc. Advantages and drawbacks of different methods are described, for instance, in Temporal and spectral estimations of harmonics-to-noise ratio in human voice signals, Qi and Hillman, 1997. Other potential sources: Improvements in estimating the harmonics-to-noise ratio of the voice; An analysis of iterative algorithm for estimation of harmonics-to-noise ratio in speech. Starting from the sources above, you can make your question more precise and help contributors provide more accurate answers.
Although designing a rocket that will follow a desired trajectory (say to Ceres, Pluto, or Planet Nine) with great accuracy is an enormous engineering challenge, the basic principle behind rocket propulsion is remarkably simple. It essentially boils down to conservation of momentum, or, equivalently, the observation that the velocity of the center of mass of a system does not change if no external forces are acting on the system. To understand how a rocket works, imagine 1 the following experiment: you sit on an initially stationary cart with a large amount of small balls. You then pick up the balls one by one, and throw them all in the same direction with the same (preferably high) speed (relative to yourself and thus the cart). What will happen is that you, the cart, and the remaining balls slowly pick up speed, in the opposite direction from the one you're throwing the balls in. This is exactly what a rocket engine does: it thrusts out small particles (molecules, actually) at high velocities, gaining a small velocity itself in the opposite direction. Note that this is completely different from most other engines, which drive the rotation of wheels (that depend on friction to work) or propellers (that depend on drag to work). 4.4.1. Rocket Equation To understand what happens in our thought experiment, let's first consider the first ball you throw. Let's call the mass of yourself plus the cart M, the total mass of the balls m, and the (small) mass of a single ball dm. If you throw the ball with a speed u (with respect to yourself), we can calculate your resulting speed in two ways: The center of mass must remain stationary. Let's put \(x_{\mathrm{cm}} = 0\). Before the throw, we then have \(x_{\text { ball }} \mathrm{d} m+ x_{\mathrm{car}}(M+m)=0\), whereas after the throw we have \(-u t \mathrm{d} m+v_{\mathrm{car}} t(M+m)=0, \text { or } v_{\mathrm{car}}=\frac{-u \mathrm{d} m}{(M+m)}\). The total momentum must be conserved.
Before the throw, the total momentum is zero, as nothing is moving. After the throw, we get: \(p_{\text { ball }}+p_{\text { car }}=-u \mathrm{d} m+v_{\text { car }}(M+m)\). Equating this to zero again gives \(v_{\mathrm{car}}=\frac{-u \mathrm{d} m}{(M+m)}\). Now for the second, third, etc. ball, the situation gets more complicated, as the car (including the ball that is about to be thrown) is already moving. Naturally, the center of mass of the car plus all the balls remains fixed, as does the total momentum of the car plus all the balls. However, to calculate how much extra speed the car picks up from the \(n\)th ball, it is easier to not consider the balls already thrown. Instead, we consider a car (including the remaining balls) that is already moving at speed v, and thus has total momentum (M+m)v. Throwing the next ball will reduce the mass of the car plus balls by dm, and increase its velocity by dv. Conservation of momentum then gives: \[(M+m) v=(M+m-\mathrm{d} m)(v+\mathrm{d} v)+(v-u) \mathrm{d} m=(M+m) v+(M+m) \mathrm{d} v-u \mathrm{d} m \label{ballthrow}\] where we dropped the second-order term dm dv. Equation (\ref{ballthrow}) can be rewritten to \[(M+m) \mathrm{d} v=u \mathrm{d} m \label{simple}\] Note that here both u (the speed of each thrown ball) and M (the mass of yourself plus the car, or the shell of a rocket) are constants, whereas m changes, ending up at zero when you've thrown all your balls. To find the velocity of our car, we can integrate Equation (\ref{simple}), but there is an important, and rather subtle, point to consider. The left-hand side of Equation (\ref{simple}) applies to the car, but the right-hand side to the thrown ball, with a (positive) mass dm. The mass m of the balls remaining in the car, however, has decreased by dm, so if we wish to know the final velocity of the car, we need to include a minus sign on the right-hand side of Equation (\ref{simple}).
Dividing through by M+m and integrating, we then obtain: \[\Delta v=v_{\mathrm{f}}-v_{0}=u \log \left(\frac{M+m_{0}}{M}\right) \label{tsiolkovsky}\] where \(v_f\) is the final velocity of the car, and \(m_0\) the initial total mass of all the balls. Equation (\ref{tsiolkovsky}) is known as the Tsiolkovsky rocket equation 2. Konstantin Eduardovich Tsiolkovsky Konstantin Eduardovich Tsiolkovsky (1857-1935) was a Russian rocket scientist, who is considered to be one of the pioneers of cosmonautics. Self-taught, Tsiolkovsky became interested in spaceflight both through 'cosmic' philosopher Nikolai Fyodorov and science-fiction author Jules Verne, and considered the construction of a space elevator inspired by the then newly built Eiffel tower in Paris. Working as a teacher, he spent much of his free time on research, developing the rocket equation named after him (Equation \ref{tsiolkovsky}) as well as designs for rockets, including multi-stage ones. Tsiolkovsky also worked on designing airplanes and airships (dirigibles), but did not get support from the authorities to develop these further. He kept working on rockets though, while also continuing as a mathematics teacher. Only late in life did he receive recognition for his work at home (then the Soviet Union), but his ideas would go on to influence other rocket pioneers in both the Soviet and American space programs. 4.4.2. Multi-Stage Rockets Because of the logarithmic factor in the Tsiolkovsky rocket equation, rockets need a lot of fuel compared to the mass of the object they intend to deliver (the payload - say a probe, or a capsule with astronauts). Even so, the effectiveness of rockets is limited. A fuel-to-payload ratio of 9:1 (already quite high) and an initial speed of zero gives a final speed \(v_{\mathrm{f}}=u \log (10) \simeq 2.3 u\), and increasing the ratio to 99:1 only doubles this result: \(v_{\mathrm{f}}=u \log (100) \simeq 4.6 u\).
To get around these limitations and give rockets (or rather their payloads) the speed necessary to leave Earth, or even the solar system, rockets are built with multiple stages - essentially a number of rockets stacked one upon the next. If these stages all have the same fuel to payload ratio and exhaust velocity, the final velocity of the payload simply is that of a single stage times the number of stages n: \(v_{\mathrm{f}}=n u \log \left(1+\frac{m_{0}}{M}\right) \). To see this, consider that the remaining stages are the payload of the current stage. Having multiple stages thus allows rockets to pick up speed more efficiently, essentially by shedding a part of the ‘payload’ (the casing of an empty stage). For example, the Saturn V rocket that was used to send the Apollo astronauts to the moon had three stages, plus a small rocket engine on the capsule itself (used to break moon orbit and send the astronauts back to Earth), see Figure 4.4.2.

4.4.3. Impulse
When you’re crashing into something, there are two factors that determine how much your momentum changes: the amount of force acting on you, and the time the force is acting. The product of the two is known as the impulse, which by Newton’s second law equals the change in momentum: \[J=\Delta p=\int F(t) \mathrm{d} t\] The specific impulse, defined as \(I_{sp}=\frac{J}{m_{propellant}}\), or the impulse per unit mass of fuel, is a measure of the efficiency of jet engines and rockets.

1 Or carry out, as you please.
2 Though Tsiolkovsky certainly deserves credit for his pioneering work, and he likely derived the equation independently, he was not the first to do so. Both the British mathematician William Moore in 1813 and the Scottish minister and mathematician William Leitch in 1861 preceded him.
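The n-times-single-stage claim can be checked with an explicit stage-by-stage burn. The construction below is my own sketch: each stage's "payload" is taken to be its casing plus everything stacked above it, and `casing_frac` (the casing mass as a fraction of the mass above) is an illustrative assumption, not a parameter from the text:

```python
import math

def multistage_dv(u, payload, ratio, n_stages, casing_frac=0.0):
    """Final speed of an n-stage rocket in which every stage has the same
    fuel-to-payload ratio, 'payload' meaning casing plus everything above."""
    # build the stack from the top down
    stack = payload
    stages = []
    for _ in range(n_stages):
        casing = casing_frac * stack
        fuel = ratio * (stack + casing)
        stages.append((casing, fuel))
        stack += casing + fuel
    # burn from the bottom up, shedding each empty casing
    dv = 0.0
    for casing, fuel in reversed(stages):
        stack_dry = stack - fuel
        dv += u * math.log(stack / stack_dry)  # Tsiolkovsky gain per stage
        stack = stack_dry - casing             # drop the empty casing
    return dv

# three stages at 9:1 give three times the single-stage 2.3 u
print(multistage_dv(u=1.0, payload=1.0, ratio=9.0, n_stages=3))  # ~6.91 u
```

Because every stage sees the same ratio, each burn contributes exactly u log(1 + ratio), independent of the casing fraction assumed here.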
You might just want to look at Chapter 2 of Classical Mechanics (Moments of Inertia) before proceeding further with this chapter. In figure \(\text{VIII.26}\) I draw a massive body whose centre of mass is \(\text{C}\), and an external point \(\text{P}\) at a distance \(R\) from \(\text{C}\). I draw a set of \(\text{C}xyz\) axes, such that \(\text{P}\) is on the \(z\)-axis, the coordinates of \(\text{P}\) being \((0, 0, z)\). I indicate an element \(δm\) of mass, distant \(r\) from \(\text{C}\) and \(l\) from \(\text{P}\). I’ll suppose that the density at \(δm\) is \(ρ\) and the volume of the mass element is \(δτ\), so that \(δm = ρδτ\). \(\text{FIGURE VIII.26}\) The potential at \(\text{P}\) is \[ψ = -G \int \frac{dm}{l} = -G \int \frac{ρdτ}{l}. \label{5.12.1} \tag{5.12.1}\] But \(l^2 = R^2 + r^2 - 2Rr \cos θ\), so \[ψ = -G \left[ \frac{1}{R} \int ρ dτ + \frac{1}{R^2} \int ρ r \cos θ d τ + \frac{1}{R^3} \int ρ r^2 P_2 (\cos θ) dτ + \frac{1}{R^4} \int ρ r^3 P_3 (\cos θ) d τ ... \right]. \label{5.12.2} \tag{5.12.2}\] The integral is to be taken over the entire body, so that \(∫ ρdτ = M\), where \(M\) is the mass of the body. Also \(∫ ρr \cos θ \, d τ = \int z \, dm\), which is zero, since \(\text{C}\) is the centre of mass. The third term is \[\frac{1}{2R^3} \int ρ r^2 (3 \cos^2 θ - 1) dτ = \frac{1}{2R^3} \int ρ r^2 (2-3\sin^2 θ ) dτ . \label{5.12.3} \tag{5.12.3}\] Now \[\int 2 ρ r^2 d τ = \int 2r^2 d m = \int \left[ (y^2 + z^2) + (z^2 + x^2) + (x^2 + y^2) \right] dm = A + B + C\] where \(A\), \(B\) and \(C\) are the second moments of inertia with respect to the axes \(\text{C}x\), \(\text{C}y\), \(\text{C}z\) respectively. But \(A + B + C\) is invariant with respect to rotation of axes, so it is also equal to \(A_0 + B_0 + C_0\), where \(A_0, \ B_0, \ C_0\) are the principal moments of inertia. Lastly, \(\int ρ r^2 \sin^2 θ dτ\) is equal to \(C\), the moment of inertia with respect to the axis \(\text{C}z\).
Thus, if \(R\) is sufficiently larger than \(r\) so that we can neglect terms of order \((r/R)^3\) and higher, we obtain \[ψ = - \frac{G (2MR^2 + A_0 + B_0 + C_0 -3C)}{2R^3}. \label{5.12.4} \tag{5.12.4}\] In the special case of an oblate symmetric top, in which \(A_0 = B_0 < C_0\), and the line \(\text{CP}\) makes an angle \(γ\) with the principal axis, we have \[C = A_0 + (C_0 - A_0) \cos^2 γ = A_0 + (C_0 - A_0) Z^2/R^2, \label{5.12.5} \tag{5.12.5}\] so that \[ψ = -\frac{G}{R} \left[ M + \frac{C_0 - A_0}{2R^2} \left( 1 - \frac{3Z^2}{R^2} \right) \right]. \label{5.12.6} \tag{5.12.6}\] Now consider a uniform oblate spheroid of polar and equatorial diameters \(2c\) and \(2a\) respectively. It is easy to show that \[C_0 = \frac{2}{5} Ma^2. \label{5.12.7} \tag{5.12.7}\] Exercise \(\PageIndex{1}\) Confirm Equation \ref{5.12.7}. It is slightly less easy to show (Exercise: show it.) that \[A_0 = \frac{1}{5} M \left( a^2 + c^2 \right) . \label{5.12.8} \tag{5.12.8}\] For a symmetric top, the integrals of the odd polynomials of Equation \(\ref{5.12.2}\) are zero, and the potential is generally written in the form \[ψ = - \frac{GM}{R} \left[ 1 + \left( \frac{a}{R} \right)^2 J_2 P_2 (\cos γ) + \left( \frac{a}{R} \right)^4 J_4 P_4 (\cos γ) ... \right] \label{5.12.9} \tag{5.12.9}\] Here \(γ\) is the angle between \(\text{CP}\) and the principal axis. For a uniform oblate spheroid, \(J_2 = \frac{C_0 - A_0}{Ma^2}\). This result will be useful in a later chapter when we discuss precession.
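The two moment-of-inertia results (Equations \ref{5.12.7} and \ref{5.12.8}) can be verified numerically by slicing the spheroid into thin disks. This Python sketch is my own check, not part of the text:

```python
import math

def spheroid_moments(a, c, rho=1.0, n=100000):
    """Integrate over the uniform spheroid x^2/a^2 + y^2/a^2 + z^2/c^2 <= 1
    as a stack of thin disks of thickness dz (midpoint rule)."""
    M = C0 = A0 = 0.0
    dz = 2.0 * c / n
    for i in range(n):
        z = -c + (i + 0.5) * dz
        s2 = a * a * (1.0 - (z / c) ** 2)  # disk radius squared at height z
        dm = rho * math.pi * s2 * dz       # mass of the disk
        M += dm
        C0 += dm * s2 / 2.0                # mean of x^2 + y^2 over a disk is s^2/2
        A0 += dm * (s2 / 4.0 + z * z)      # mean of y^2 over a disk is s^2/4, plus z^2
    return M, C0, A0

a, c = 2.0, 1.0
M, C0, A0 = spheroid_moments(a, c)
print(C0 / (M * a * a))            # ~ 2/5, Equation 5.12.7
print(A0 / (M * (a * a + c * c)))  # ~ 1/5, Equation 5.12.8
```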
Consider a Hermitian field operator $\phi(x)$ with eigenstates satisfying$$\phi(x) |\alpha\rangle = \alpha(x) | \alpha \rangle$$I'm trying to determine the inner product between the eigenstates. To do this, I consider$$\langle\beta|\phi(x)|\alpha\rangle = \alpha(x)\langle\beta|\alpha\rangle = \beta(x)\langle\beta|\alpha\rangle$$which implies$$\left[ \alpha(x) - \beta(x) \right] \langle\beta|\alpha\rangle = 0\hspace{3cm} (1)$$ Q. What is the solution to this equation? From the equation, I gather that $\langle\beta|\alpha\rangle = 0$ whenever $\alpha(x) \neq \beta(x)$ at some point $x$, and therefore it has support only when $\alpha(x) = \beta(x)$ everywhere. How can I represent this? Is it obvious that this implies $$ \langle\beta|\alpha\rangle \propto \delta \left[ \alpha(x) - \beta(x) \right] $$ (a functional delta)? This solution seems weird, since it seems to imply that the norm of the eigenstate is "infinite" (naively!), but this does not follow from $(1)$. I know there are many subtleties here when dealing with infinite-dimensional Hilbert spaces. The solution may lie in one of those subtleties. Any ideas?
Oscillations and Waves

Simple Harmonic Motion
A body is said to be in SHM if it moves to and fro along a straight line about its mean position such that at any point its acceleration is directly proportional to its displacement in magnitude but opposite in direction, and is always directed towards the mean position. Phase represents the state of vibration of the particle at any instant of time. The difference in the phase angles of two particles in SHM is known as the phase difference between them. The starting phase of oscillation is called the epoch. If the phase is zero, \(y = A \sin \omega t\); if the phase is \(\frac{\pi}{2}\), \(y = A \sin (\omega t + \frac{\pi}{2})\). The velocity in SHM is given by \(v = \omega \sqrt{A^{2} - y^{2}}\), where \(\omega\) is the angular frequency, \(A\) the amplitude and \(y\) the displacement of the particle.

1. Simple Harmonic Motion (SHM) is that type of oscillatory motion in which the particle moves to and fro about a fixed point, such that its acceleration is proportional to the negative of its displacement. The general equation of simple harmonic motion is given by \(x(t) = A \cos (\omega t + \phi)\)

2.
The angular frequency \(\omega\) is related to the period and frequency of the motion by \(\omega = \frac{2 \pi}{T} = 2 \pi \nu\)

3. The particle velocity and acceleration during SHM as functions of time are given by \(v(t) = \frac{d}{dt}x(t) = - \omega A \sin (\omega t + \phi)\) and \(a(t) = \frac{d}{dt}v(t) = - \omega^{2} A \cos (\omega t + \phi) = -\omega^{2} x(t)\)
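These relations can be checked numerically; the parameter values below are arbitrary choices of mine, not from the notes:

```python
import math

A, omega, phi = 2.0, 3.0, 0.5  # amplitude, angular frequency, epoch (arbitrary)

def x(t): return A * math.cos(omega * t + phi)                # displacement
def v(t): return -omega * A * math.sin(omega * t + phi)       # velocity
def a(t): return -omega ** 2 * A * math.cos(omega * t + phi)  # acceleration

t = 1.234
print(a(t) + omega ** 2 * x(t))                           # ~ 0: a = -omega^2 x
print(abs(v(t)) - omega * math.sqrt(A ** 2 - x(t) ** 2))  # ~ 0: v = omega sqrt(A^2 - y^2)
```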
No, the idea of wealth as something to be created did not originate in the United States. It was part of the mercantilist approach to national economic policy that was widely adopted in Europe in the 16th to 18th centuries. Mercantilism involved a range of policies, many of which were designed to increase the wealth of one country at the expense of others ...

Robot taxation is a bit like corporate income taxation. Like corporations, robots don't pay taxes, people pay taxes. In the words of Herb Stein: I remember that in addressing the issue in the 1980s, the late Herb Stein said that it's as if people think that if the government imposed a tax on cows, the tax would be paid by the cows. Are Taxes on ...

Globally, there is Lakner and Milanovic (2015)'s elephant graph: Hellebrandt and Mauro (2015). Thus, the two previous distributions look like bimodal log-normal distributions. Or CDFs, as in MacAskill's book Doing Good Better. I did not find something strictly related to wages. For most people, income may be a good proxy for wages.

I think that you are more or less correct in thinking that cash that is never spent (nor expected to be) may as well not exist. Thought experiment: suppose that, unbeknownst to you and everyone else, £1 million suddenly appears under your mattress. If you never discover the money, it is hard to see how it could change anything in a meaningful way ...

First we need to discuss what wealth is. Is it just the amassment of valuables? In the 16th-18th century, rulers thought so. This notion is called mercantilism. This idea was rejected by Adam Smith in book IV of The Wealth of Nations. The current meaning of wealth is not the amount you own, but rather how much you can consume (sustainably). Formally, the ...

While the owners of a successful business will generally experience exponential growth compared to their employees, there are some factors to consider: I find it unrealistic for any business to maintain a growth rate of 100% over many years.
(As per the example) You can only sell twice as much if you can find twice as many buyers. As a business grows, it ...

Scrooge McDuck likes swimming in money, which is why he accumulates a lot of it without any intention of ever spending it. If you think bringing in Scrooge McDuck is far-fetched and irrelevant, you are right. People do not generally accumulate wealth with no intention of ever spending it. But suppose there were such people and, to make things concrete, ...

First, let us make a distinction between the functional distribution of income, related to how payment goes to factors of production, i.e., labour and capital, via wages and rents, and the personal distribution of income, which refers to how such factor payments are distributed across individuals or households. After commenting on this, we can start to ...

Re: "You could say this is 'creating' money" - no, you couldn't. This would be conflating money and wealth. Wealth can be created without new money being created - indeed, wealth was created before money even existed. Conversely, money can be created without creating wealth. So they are entirely separate things. A crude analogy: wealth is to money as apples are ...

The reference is World Bank (2006). You can see the detail of the methodology in Appendix 1, and their estimates of intangible wealth in Appendix 2. There are other methodologies to measure intangible wealth (e.g. here and here). Alternatively, there are many ways to compute natural and physical capital, which will give different results to the one given ...

Too much inequality is probably still bad even in a simple model. Too much equality is probably not good either. A balance has to be found. Let's say every enterprise consisting of production of goods/services, middlemen, etc. distributes 99% of its profits to that one individual. Then the consumer is paying a price that reflects that, and it's effectively ...
It is "stupid" only to the extent that it doesn't take into account the socio-economic and political realities. It appears the government tries to make the plan acceptable by giving everybody, rich and poor, the same nominal amount of money. So it seems this is not "in favor of the poor against the rich" - so why would the rich people react against it? ...

The main problem is that wealth does not equal cash. Those ten rich people probably have most of their wealth in real estate and stocks. Imagine all you own is a Lamborghini; then the government prints a lot of money - does that impact you? Depending on the situation, the money printing may be progressive or regressive. And the costs are high: there will not ...

I do not believe that your suggested definitions will hold up. Since this is a forum for questions about economics, I do not see the point of proposing a new definition here. Who is going to see it? What you call “income generating wealth” is captured by “wealth” in its standard usages. However, what you exclude (such as bank deposits) will be included in “...

Both distributions are often modelled as log-normal, with a substantial number of zeros. A Pareto distribution is also sometimes used (Piketty & Saez (2012), p.32) for modelling the distribution of top incomes. Wealth distributions are also in general far more skewed than wage (or income) distributions.

The total assets of all US commercial banks are about \$17.2 trillion. You'll have to be clearer about what you mean by "their own money". The total equity capital of US commercial banks is about \$1.9 trillion.

This statistic shows the Gini coefficient, an index for measuring income distribution, for U.S. households from 1990 to 2016. A Gini coefficient of zero expresses perfect equality, where all would have the same income; a Gini coefficient of one expresses maximal inequality among values. In 2016, the Gini coefficient for household income was 0.48. ...
This depends on the exact preferences, but usually the utility function$$U(x,y) = v(x) + y$$is such that$$\lim_{x \to 0} |MRS(x,y)| = \lim_{x \to 0} \frac{\text{d}v(x)}{\text{d} x} = \infty.$$In this case it is the consumption of an additional marginal unit of the nonlinear good $x$ that is infinitely useful compared to the consumption of an ...

(Mostly) ignore money for this growth issue; it's by and large a red herring that's distracting you. Instead, just think of technological progress, for instance. Assume everyone is washing their clothes by hand. That takes a fair bit of time. Now someone invents a washing machine. Everyone (who can get a washing machine) will then have more time on their ...

Onurcanbektas, I really like your thought process. The problem with your mental model is that you've assumed economic output is exogenous. However, in the real world (most) jobs produce economic output. This output then increases the size of the pie, allowing people to be, on average, richer. However, from a societal perspective, I believe we will ...

American economic theorist Henry George wrote about precisely this issue in his famous work Progress and Poverty, published in the late 1800s and based on his observations on the economic development of San Francisco during the gold rush. His argument was that increasing economic development primarily benefitted landowners, at the expense of both capital ...

The distinction between the two is not well specified. If I own an apartment, I can rent it out or I can live in it. If I own an art collection, I can hang it in my house or charge others to see it. Cash in bank accounts is lent out by banks to form investments in other firms and projects. Land might be used for long walks or used for farming, natural resource ...
If we view "wealth" as the subjective experience of pleasure, then "wealth" can spring up ex nihilo: if people used to get two dollars' worth of pleasure from having a cup, but now they get ten dollars' worth, then in some sense eight dollars of wealth has come "out of nowhere". However, it's more likely that at least one party has misvalued the cup. ...

Even if $u^b$ is finite, it can never be achieved. This is what is meant by "does not attain a maximum". Rather, $u(x)$ approaches $u^b$ from below as $x \to \infty$. This is because $u$ is strictly increasing. If we had $u(x_*) = u^b$ for some $x_*$, then we would have $u(x_* + 1) > u^b$ and $u^b$ could not be a bound. This is why $U$ is written as the ...

Inflation should increase wages in the long run. Wages are the price for labor, and all prices increase with inflation. However, in practice wages tend to stay flat despite inflation for a variety of reasons. Menu costs: it might take time to rework all existing wages. Some wages are locked into long-term contracts. Wages, once increased, are difficult to ...

Should mean wages increase with inflation? If not, why? No. Inflation is not the only factor that determines wages. Inflation has to do with the nominal value of wages, but these also entail components pertaining to the real value of work. Here are three examples that tend to push mean wages down: (1) technology, to the extent that it replaces human labor; ...

Trickle-down economics is widely criticized, and many believe that it doesn't work. Asher Edelman explains brilliantly in the first 2 min of this video (or in the written version here). The basic logic is that, in practice, doing this only leads to a higher accumulation of wealth by the wealthy.

It seems safe to say that the bulk of that wealth consists of equity (stock) holdings, not "money."
(For example, Bill Gates owns a lot of shares of Microsoft.) Let's assume that the top 85 people wanted to redistribute their wealth voluntarily. If they tried selling that equity all at once, the price of the equities would likely collapse (who would pay a ...
Start with the unperturbed gravitational potential for a uniform sphere of mass M and radius R, interior and exterior: $$ \phi^0_\mathrm{in} = {-3M \over 2R} + {M\over 2R^3} (x^2 + y^2 + z^2) $$$$ \phi^0_\mathrm{out} = {- M\over r} $$ Add a quadrupole perturbation, and you get $$ \phi_\mathrm{in} = \phi^0_\mathrm{in} + {\epsilon M\over R^3} D $$$$ \phi_\mathrm{out} = \phi^0_\mathrm{out} + {M\epsilon R^2\over r^5} D $$ $$ D = x^2 + y^2 - 2 z^2 $$ The scale factors of M and R are just to make $\epsilon$ dimensionless, the falloff of $D\over r^5$ is just so that the exterior solution solves Laplace's equation, and the matching of the solutions is to ensure that on any ellipsoid near the sphere of radius R, the two solutions are equal to order $\epsilon$. The reason this works is because the $\phi^0$ solutions are matched both in value and in first derivative at $r=R$, so they stay matched in value to leading order even when perturbed away from a sphere. The order $\epsilon$ quadrupole terms are equal on the sphere, and therefore match to leading order. The ellipsoid I will choose solves the equation: $$ r^2 + \delta D = R^2 $$ The z-diameter is increased by a fraction $\delta$, while the x-diameter is decreased by $\delta/2$, so that the fractional difference between the polar and equatorial radii is $3\delta/2$. To leading order $$ r = R - {\delta D \over 2R}$$ We already matched the values of the inner and outer solutions, but we need to match the derivatives. Taking the "d": $$ d\phi_\mathrm{in} = {M\over R^3} (rdr) + {\epsilon M\over R^3} dD $$$$ d\phi_\mathrm{out} = {M\over r^3} (rdr) + {MR^2\epsilon \over r^5} dD - {5\epsilon R^2 M D\over r^7} (rdr) $$ $$ rdr = x dx + y dy + z dz $$$$ dD = 2 x dx + 2ydy - 4z dz $$ To first order in $\epsilon$, only the first term of the second equation is modified by the fact that r is not constant on the ellipsoid.
Specializing to the surface of the ellipsoid: $$ d\phi_\mathrm{out}|_\mathrm{ellipsoid} = {M\over R^3} (rdr) + {3M\delta D \over 2 R^5}(rdr) + {\epsilon M \over R^3} dD - {5\epsilon M D \over R^5} (rdr)$$ Equating the in and out derivatives, the parts proportional to $dD$ cancel (as they must--- the tangential derivatives are equal because the two functions are equal on the ellipsoid). The rest must cancel too, so $$ {3\over 2} \delta = 5 \epsilon $$ So you find the relation between $\delta$ and $\epsilon$. The solution for $\phi_\mathrm{in}$ gives $$ \phi_\mathrm{in} + {3M\over 2R} = {M\over 2R^3}( r^2 + {3\over 5} \delta D ) $$ which means, looking at the equation in parentheses, that the equipotentials are 60% as squooshed as the ellipsoid. Now there is a condition that this is balanced by rotation, meaning that the ellipsoid is an equipotential once you add the centrifugal potential: $$ - {\omega^2\over 2} (x^2 + y^2) = -{\omega^2 \over 3} (x^2 + y^2 + z^2) -{\omega^2\over 6} (x^2 + y^2 - 2z^2) $$ To make the $\delta$ ellipsoid an equipotential requires that $\omega^2\over 6$ equal the remaining ${2\over 5} {M \delta \over 2R^3}$, so that, calling $M\over R^2$ (the acceleration of gravity) by the name "g", and $\omega^2 R$ by the name "C" (centrifugal) $$\delta = {5\over 6} {C \over g} $$ The actual difference in equatorial and polar diameters is found by multiplying by 3/2 (see above): $$ {3\over 2} \delta = {5\over 4} {C\over g} $$ instead of the naive estimate of ${C\over 2g}$. So the naive estimate is multiplied by two and a half for a uniform-density rotating sphere.

Nonuniform interior: primitive model
The previous solution is both interior and exterior for a rotating uniform ellipsoid, and it is exact in r; it is only leading order in the deviation from spherical symmetry. So it immediately extends to give the shape of the Earth for a nonuniform interior mass distribution.
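Plugging rough Earth numbers into the uniform-density result (3/2)δ = (5/4)C/g shows the size of the effect; the parameter values below are my own assumptions, not from the text:

```python
import math

# assumed Earth parameters (not given in the text)
omega = 2 * math.pi / 86164.1  # sidereal rotation rate, rad/s
R = 6.371e6                    # mean radius, m
g = 9.82                       # surface gravity, m/s^2

C = omega ** 2 * R             # centrifugal acceleration at the equator

naive_km = 2 * R * (C / (2 * g)) / 1000     # naive C/(2g) estimate
uniform_km = 2 * R * (1.25 * C / g) / 1000  # (3/2)delta = (5/4) C/g

print(naive_km)    # ~22 km difference between equatorial and polar diameters
print(uniform_km)  # ~55 km, 2.5x the naive value (measured is ~42.7 km)
```

The uniform-density model overshoots the measured value, which is what the discussion of the nonuniform interior below the naive estimate is about.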
The estimate with a uniform density is surprisingly good, and this is because there are competing effects largely cancelling out the correction for non-uniform density. The two competing effects are:

1. The interior distribution is more elliptical than the surface, because the interior solution feels all the surrounding elliptical Earth deforming it, with extra density deforming it more.
2. The ellipticity of the interior is suppressed by the $1/r^3$ falloff of the quadrupole solution of Laplace's equation, which is $1/r^2$ faster than the usual potential.

So although the interior is somewhat more deformed, the falloff more than compensates, and the effect of the interior extra density is to make the Earth more spherical, although not by much. These competing effects are what shift the correction factor from 2.5 to 2, which is actually quite small considering that the interior of the Earth is extremely nonuniform, with the center more than three times as dense as the outer parts. The exact solution is a little complicated, so I will start with a dopey model. This assumes that the Earth is a uniform ellipsoid of mass M and ellipticity parameter $\delta$, plus a point source in the middle (or a sphere, it doesn't matter), accounting for the extra mass in the interior, of mass M'. The interior potential is given by superposition.
With the centrifugal potential: $$ \phi_{int} = - {M'\over r} - {3M\over 2R} + {M\over 2R^3}(r^2 - {3\over 5} \delta D) + {\omega^2\over 2} r^2 - {\omega^2\over 6} D $$ This has the schematic form of spherical plus quadrupole (including the centrifugal force inside F and G) $$ \phi_{int} = F(r) + G(r) D $$ The condition that the $\delta$ ellipsoid is an equipotential is found by replacing $r$ with $R - {\delta D\over 2R}$ inside F(r), and setting the D-part to zero: $$ {F'(R) \delta \over 2R} = G(R) $$ In this case, you get the equation below, which reduces to the previous case when $M'=0$: $$ {M'\over M+M'}\delta + {M\over M+M'} (\delta - {3\over 5} \delta) = - {C\over 3 g } $$ where $C=\omega^2 R$ is the centrifugal force, and $ g= {M+M'\over R^2} $ is the gravitational force at the surface. I should point out that the spherical part of the centrifugal potential ${\omega^2\over 2} r^2$ always contributes a subleading term proportional to $\omega^2\delta$ to the equation and should be dropped. The result is $$ {3\over 2} \delta = {1\over 2 (1 - {3\over 5} {M\over M+M'}) } {C\over g} $$ So that if you choose M' to be .2 M, you get the correct answer: the extra equatorial radius is twice the naive amount of ${C\over 2g}$. This says that the potential at the surface of the Earth is only modified from the uniform ellipsoid estimate by adding a sphere with 20% of the total mass at the center. This is somewhat small, considering the nonuniform density in the interior contains about 25% of the mass of the Earth (the perturbing mass is twice the density at half the radius, so about 25% of the total). The slight difference is due to the ellipticity of the core.

Nonuniform mass density II: exact solution
The main thing neglected in the above is that the center is also nonspherical, and so adds to the nonspherical D part of the potential on the surface.
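A quick check (mine, not from the text) that M' = 0.2 M in this point-mass-plus-ellipsoid model doubles the naive estimate, while M' = 0 recovers the uniform-density factor of 2.5:

```python
def factor(mprime_over_m):
    """Coefficient of C/g in (3/2)delta for the point-mass-plus-ellipsoid
    model: 1 / (2 (1 - (3/5) M/(M+M')))."""
    return 1.0 / (2.0 * (1.0 - 0.6 / (1.0 + mprime_over_m)))

print(factor(0.0))  # 1.25: the uniform case, 2.5x the naive C/(2g)
print(factor(0.2))  # 1.0: exactly 2x the naive estimate
```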
This effect mostly counteracts the general tendency of extra mass at the center to make the surface more spherical, although imperfectly, so that there is a correction left over. You can consider it as a superposition of uniform ellipsoids of mean radius s, with ellipticity parameter $\delta(s)$ for $0<s<R$ increasing as you go toward the center. Each is uniform on the interior, with mass density $|\rho'(s)|$, where $\rho(s)$ is the extra density of the Earth at distance s from the center, so that $\rho(R)=0$. These ellipsoids are superposed on top of a uniform-density ellipsoid of density $\rho_0$ equal to the surface density of the Earth's crust. I will consider $\rho(s)$ and $\rho_0$ known, so that I also know $|\rho'(s)|$, its (negative) derivative with respect to s, which is the density of the ellipsoid you add at s, and I also know: $$ M(r) = \int_0^r 4\pi \rho(s) s^2 ds $$ The quantity $M(s)$ is the additional mass in the interior, as compared to a uniform Earth at crust density. Note that $M(s)$ is not affected by the ellipsoidal shape to leading order, because all the nested ellipsoids are quadrupole perturbations, and so contain the same volume as spheres. Each of these concentric ellipsoids is itself an equipotential surface for the centrifugal potential plus the potential from the interior and exterior ellipsoids.
So once you know the form of the potential of all these superposed ellipsoids, which is of the form of spherical + quadrupole + centrifugal quadrupole (the centrifugal spherical part always gives a subleading correction, so I omit it): $$ \phi_\mathrm{int}(r) = F(r) + G(r) D - {\omega^2 \over 6} D $$ you know that each of these nested ellipsoids is an equipotential, $$ F\left(s - {\delta(s) D \over 2s}\right) + G(s) D - {\omega^2\over 6} D $$ so that the equation demanding that this is an equipotential at any s is $$ {\delta(s) F'(s) \over 2s} - G(s) + {\omega^2\over 6} = 0 $$ To find the form of F and G, you first express the interior/exterior solution for a uniform ellipsoid in terms of the density $\rho$ and the radius R: $$ {\phi_\mathrm{int}\over 4\pi} = - {\rho R^2\over 2} + {\rho\over 6} r^2 + {\rho \delta\over 10} D $$ $$ {\phi_\mathrm{ext}\over 4\pi} = - {\rho R^3 \over 3 r} + {\rho\delta R^5\over 10 r^5} D $$ You can check the sign and numerical value of the coefficients using the 3/5 rule for the interior equipotential ellipsoids, the separate matching of the spherical and D perturbations at r=R, and dimensional analysis. I put a factor $4\pi$ on the bottom of $\phi$ so that the right-hand side solves the constant-free form of Laplace's equation. Now you can superpose all the ellipsoids, by setting $\delta$ on each ellipsoid to be $\delta(s)$, setting $\rho$ on each ellipsoid to be $|\rho'(s)|$, and $R$ to be $s$. I am only going to give the interior solution at r (doing integration by parts on the spherical part, where you know what the answer is going to turn out to be, and throwing away an additive constant C): $$ {\phi_\mathrm{int}(r)\over 4\pi} - C = {\rho_0\over 6} r^2 + {\rho_0 \delta(R)\over 10} D - {M(r)\over 4\pi r} + {1\over 10r^5} \int_0^r |\rho'(s)| \delta(s) s^5 \, ds \, D + {1\over 10} \int_r^R |\rho'(s)|\delta(s) \, ds \, D $$ The first two terms are the interior solution for constant density $\rho_0$.
The third term is the total spherical contribution, which is just as in the spherically symmetric case. The fourth term is the superposed exterior potential from the ellipsoids inside r, and the last term is the superposed interior potential from the ellipsoids outside r. From this you can read off the spherical and quadrupole parts: $$ F(r) = {\rho_0\over 6} r^2 - {M(r)\over 4\pi r} $$$$ G(r) = {\rho_0\delta(R)\over 10} + {1\over 10r^5} \int_0^r |\rho'(s) |\delta(s) s^5 \, ds + {1\over 10} \int_r^R |\rho'(s)|\delta(s) \, ds $$ So the integral equation for $\delta(s)$ asserts that the $\delta(r)$ shape is an equipotential at any depth: $$ {F'(r)\delta(r)\over 2r} - G(r) + {\omega^2 \over 6} = 0 $$ This equation can be solved numerically for any mass profile in the interior, to find $\delta(R)$. This is difficult to do by hand, but you can get qualitative insight. Consider an ellipsoidal perturbation inside a uniform-density ellipsoid. If you let this mass settle along an equipotential, it will settle to the same ellipsoidal shape as the surface, because the interior solution for the uniform ellipsoid is quadratic, and so has exact nested ellipsoids of the same shape as equipotentials. But this extra density will contribute less than its share of elliptical potential to the surface, diminishing as the third power of the ratio of the radius of the Earth to the radius of the perturbation. But it will produce stronger ellipses inside, so that the interior is always more elliptical than the surface.
Oblate Core Model
The exact solution is too difficult for paper and pencil calculations, but looking [here]( http://www.google.com/imgres?hl=en&client=ubuntu&hs=dhf&sa=X&channel=fs&tbm=isch&prmd=imvns&tbnid=hjMCgNhAjHnRiM:&imgrefurl=http://www.springerimages.com/Images/Geosciences/1-10.1007_978-90-481-8702-7_100-1&docid=ijMBfCAOC1GhEM&imgurl=http://img.springerimages.com/Images/SpringerBooks/BSE%253D5898/BOK%253D978-90-481-8702-7/PRT%253D5/MediaObjects/WATER_978-90-481-8702-7_5_Part_Fig1-100_HTML.jpg&w=300&h=228&ei=ZccgUJCTK8iH6QHEuoHICQ&zoom=1&iact=hc&vpx=210&vpy=153&dur=4872&hovh=182&hovw=240&tx=134&ty=82&sig=108672344460589538944&page=1&tbnh=129&tbnw=170&start=0&ndsp=8&ved=1t:429,r:1,s:0,i:79&biw=729&bih=483 ), you see that it is sensible to model the Earth as two concentric spheres of radii $R$ and $R_1$, with masses $M_0$ and $M_1$ and ellipticity parameters $\delta$ and $\delta_1$. I will take $$ R_1 = {R\over 2} $$ and $$ M_1 = {M_0\over 4} $$ that is, the inner sphere is roughly 3000 km in radius, with twice the density, which is roughly accurate. Superposing the potentials and finding the equations for the $\delta$s (the two-point truncation of the integral equation), you find $$ -\delta + {3\over 5} {M_0\over M_0 + M_1} \delta + {3\over 5} {M_1\over M_0 + M_1} \delta_1 ({R_1\over R})^2 = {C\over 3g} $$ $$ {M_0 \over M_0 + M_1} (-\delta_1 + {3\over 5} \delta) + {M_1 \over M_0 + M_1}( -\delta_1 + {3\over 5} \delta_1) = {C\over 3g} $$ Where $$ g = {M_0+ M_1\over R^2}$$$$ C = \omega^2 R $$ are the gravitational force and the centrifugal force per unit mass, as usual. Using the parameters, and defining $\epsilon = {3\delta\over 2}$ and $\epsilon_1={3\delta_1\over 2}$, one finds: $$ - 1.04 \epsilon + .06 \epsilon_1 = {C\over g} $$$$ - 1.76 \epsilon_1 + .96 \epsilon = {C\over g} $$ (these are exact decimal fractions, with denominators of 100 and 25).
Subtracting the two equations gives: $$ \epsilon_1 = {\epsilon\over .91} $$ (still exact fractions) which gives the equation $$ (-1.04 + {.06\over .91} ) \epsilon = {C\over g}$$ So that the factor in front is $.974$, instead of the naive 2. This gives an equatorial bulge (excess equatorial diameter) of 44.3 km, as opposed to 42.73, which is close enough that the model essentially explains everything you wanted to know. The value of $\epsilon_1$ is also interesting: it tells you that the Earth's core is 9% more eccentric than the outer ellipsoid of the Earth itself. Given that the accuracy of the model is at the 3% level, this should be very accurate.
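The two-equation truncation above can be checked numerically; a minimal sketch (coefficients taken from the equations above, with $C/g$ normalized to 1 since only the ratios matter, and numpy assumed available):

```python
import numpy as np

# Solve the two-point truncation for (epsilon, epsilon_1);
# C/g is set to 1 because only the ratios are of interest here.
A = np.array([[-1.04, 0.06],
              [0.96, -1.76]])
b = np.array([1.0, 1.0])

eps, eps1 = np.linalg.solve(A, b)

print(round(eps1 / eps, 4))   # ~1.0989, i.e. epsilon_1 = epsilon / 0.91
print(round(1.0 / eps, 4))    # ~-0.9741: the factor in front of epsilon
```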
We prove the security for the construction given in Sect. 4.2. Theorem 1 Let \(({\mathsf {KeyGenEnc}},{\mathsf {Enc}},{\mathsf {Dec}})\) be a NM-CPA secure encryption scheme, let \(F_{(\cdot )}\) be a PRP family and let \(({\mathsf {GenCRS}},{\mathsf {Prove}},{\mathsf {VerifyProof}})\) be a NIZK proof system. Then, the protocol defined in Sect. 4.2 has ballot privacy. Proof. Recall that privacy is defined as the indistinguishability of two experiments which depend on a bit \(\beta \). We will refer to them as \({\mathsf {Exp}}_\beta \) for \(\beta \in \{0,1\}\). Let \({\mathsf {SimVote}}_1(pk,v)\) be the \({\mathsf {Vote}}\) algorithm of the protocol given in Sect. 4.2 but, instead of using the \({\mathsf {Prove}}\) algorithm to generate \(\pi \), it uses the \({\mathsf {SimProve}}\) algorithm. Moreover, let \({\mathsf {SimVote}}_2(pk,v)\) be the \({\mathsf {SimVote}}_1\) algorithm but, instead of using a PRP, it uses a truly random permutation. Set \({\mathsf {Exp}}_{\beta ,0}={\mathsf {Exp}}_\beta \) and let \({\mathsf {Exp}}_{\beta ,1}\) be the experiment which is the same as \({\mathsf {Exp}}_{\beta ,0}\) except that the challenger runs \({\mathsf {SimGenCRS}}\) instead of \({\mathsf {GenCRS}}\) and runs \({\mathsf {SimProve}}\) instead of \({\mathsf {Prove}}\). Finally, let \({\mathsf {Exp}}_{\beta ,2}\) be the experiment which is identical to \({\mathsf {Exp}}_{\beta ,1}\) but in which the challenger uses a truly random permutation instead of a PRP in order to cast ballots. Due to the zero-knowledge property of the NIZK proof system, \({\mathsf {Exp}}_{\beta ,0}\) and \({\mathsf {Exp}}_{\beta ,1}\) are indistinguishable for \(\beta \in \{0,1\}\). Besides, \({\mathsf {Exp}}_{\beta ,1}\) and \({\mathsf {Exp}}_{\beta ,2}\) are indistinguishable for \(\beta \in \{0,1\}\) due to the pseudo-randomness of the PRP. Now the only thing left is to prove that \({\mathsf {Exp}}_{0,2}\) and \({\mathsf {Exp}}_{1,2}\) are indistinguishable.
Consider the \(\mathsf {Enc2Vote}\) scheme [5], where the result function \(\rho \) is the multiset function. The scheme is defined as follows: the \({\mathsf {Setup}}\) algorithm runs \({\mathsf {KeyGenEnc}}\) to produce a public key \(pk_e\) and a secret key \(sk_e\). Then, pk is set to be \(pk_e\) and sk is set to be \((pk_e,sk_e)\). The \({\mathsf {Vote}}\) algorithm takes as input a vote v and a public key \(pk_e\) and outputs b defined by \(b={\mathsf {Enc}}(pk_e,v,r)\) for some fresh randomness r. \(\mathsf{ValidateBallot }\) checks whether the ballot b already appears on the bulletin board BB: it returns 1 if it does and 0 otherwise. \({\mathsf {Tally}}\) decrypts all ballots \(\varvec{b}\) on the bulletin board obtaining votes \(\varvec{v}\) and evaluates \(r=\rho (\varvec{v})\), outputting an empty proof of correct tabulation. Observe that \(\mathsf {Enc2Vote}\) implicitly assumes that \({\mathbb {V}}=M_e\), the message space of the encryption scheme. As shown in [5], the following is satisfied: Theorem 2 Let \(({\mathsf {KeyGenEnc}},{\mathsf {Enc}},{\mathsf {Dec}})\) be an NM-CPA secure encryption scheme. Then, \(\mathsf {Enc2Vote}\) has ballot privacy. Finally, we reduce the privacy of our scheme to the privacy of \(\mathsf {Enc2Vote}\). Lemma 1 Let \({\mathcal {A}}^1\) be a p.p.t. adversary that interacts with the challenger \(\mathcal {C}\) and outputs a bit \(\alpha ^{{\mathcal {A}}_1}\) such that \(|\Pr [\alpha ^{{\mathcal {A}}_1}=1|{\mathsf {Exp}}_{0,2}]-\Pr [\alpha ^{{\mathcal {A}}_1}=1|{\mathsf {Exp}}_{1,2}]|\) is non-negligible. Then, there exists an adversary \({\mathcal {A}}^2\) that breaks the ballot privacy property of the \(\mathsf {Enc2Vote}\) scheme. In our reduction, \({\mathcal {A}}^1\) will interact with \({\mathcal {A}}^2\), which will act as the challenger for \({\mathcal {A}}^1\). At the same time, \({\mathcal {A}}^2\) will interact with the privacy challenger \(\mathcal {C}\).
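For intuition, the \(\mathsf {Enc2Vote}\) algorithms described above can be sketched structurally. This is an illustration only: the "encryption" is modeled as an idealized sealed-envelope table, not a real NM-CPA scheme, and the method names are hypothetical.

```python
import secrets
from collections import Counter

# Structural sketch of Enc2Vote (NOT a secure implementation).
class Enc2Vote:
    def setup(self):
        self.sk = {}                    # ciphertext -> plaintext table
        return "pk"                     # placeholder public key

    def vote(self, pk, v):
        b = secrets.token_hex(16)       # fresh randomness: ballots are unique
        self.sk[b] = v                  # "encrypt" by sealing v in the table
        return b

    def validate_ballot(self, bb, b):
        return 1 if b in bb else 0      # flag ballots already on the board

    def tally(self, bb):
        votes = [self.sk[b] for b in bb]
        return Counter(votes), ""       # multiset result, empty proof

scheme = Enc2Vote()
pk = scheme.setup()
bb = [scheme.vote(pk, v) for v in ["A", "B", "A"]]
result, proof = scheme.tally(bb)
print(result)                           # Counter({'A': 2, 'B': 1})
```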
The reduction is as follows: In the Setup phase, \(\mathcal {C}\) will run \({\mathsf {ComSetupGen}}\), outputting \({\mathsf {cs}}\) and posting it to the bulletin board. It will also run \({\mathsf {KeyGenEnc}}\), keeping the private key for itself and publishing the public key \(pk_e\) to the bulletin board. Then, \({\mathcal {A}}^2\) will run the \({\mathsf {GenCRS}}\) and the \({\mathsf {KeyGenSign}}\) algorithms and will produce signatures on each voting option, posting all the information to the bulletin board. In the Voting phase, when \({\mathcal {A}}^1\) submits a Vote query, \({\mathcal {A}}^2\) will submit n Vote queries to \(\mathcal {C}\), one for each pair of candidates. The challenger \(\mathcal {C}\) will answer with the ciphertext tuples \((C_{0,1},\dots ,C_{0,n})\) and \((C_{1,1},\dots ,C_{1,n})\). \({\mathcal {A}}^2\) will then sample two tuples of random values \((p_{0,1},\dots ,p_{0,n})\) and \((p_{1,1},\dots ,p_{1,n})\) from the target space of the PRP. Finally, it will create ballots \(b_0=(C_{0,1},\dots ,C_{0,n},p_{0,1},\dots ,p_{0,n},\pi _0)\) and \(b_1=(C_{1,1},\dots ,C_{1,n},p_{1,1},\dots ,p_{1,n},\pi _1)\) where \(\pi _0\) and \(\pi _1\) will be simulated. \({\mathcal {A}}^2\) will post these ballots to the respective bulletin boards. Finally, when \({\mathcal {A}}^1\) submits a Ballot(b) query, \({\mathcal {A}}^2\) will run the \(\mathsf{ValidateBallot }\) algorithm and will create a Ballot\((b')\) query for \(\mathcal {C}\) with \(b'=(C_1,\dots ,C_n)\) from b. It is straightforward to see that the output of \({\mathcal {A}}^2\) in its interaction with \({\mathcal {A}}^1\) is correctly distributed, which implies that the reduction is sound. Theorem 3 Let \(\rho \) be the counting function which outputs its inputs randomly permuted. Let \(({\mathsf {GenCRS}},{\mathsf {Prove}},{\mathsf {VerifyProof}})\) be a NIZKPK proof system and let \(({\mathsf {KeyGenSign}},\) \({\mathsf {Sign}},{\mathsf {VerifySign}})\) be an EUF-CMA signature scheme.
Let \({\mathsf {Extract}}\) be the decryption procedure of the \({\mathsf {Tally}}\) algorithm of the protocol defined in Sect. 4.2. Then, the protocol defined in Sect. 4.2 has vote validatability for any \({\mathbb {V}}\), with respect to \(\rho ,{\mathsf {Extract}}\). Proof. Strong consistency of the protocol follows by construction. Therefore we only need to show that, on correctly generated (pk, sk), no adversary can construct a ballot b such that \(\mathsf{ValidateBallot }\) returns 1 but \({\mathsf {Extract}}\) returns \(\perp \). Let \({\mathsf {Exp}}_0\) be the vote validatability experiment and let \({\mathsf {Exp}}_1\) be identical to \({\mathsf {Exp}}_0\) but instead of using \({\mathsf {GenCRS}}\) the challenger uses \({\mathsf {ExtGenCRS}}\). These two experiments are indistinguishable by the properties of the NIZKPK. Now assume that an adversary \({\mathcal {A}}^1\) is able to output a ballot b in the experiment \({\mathsf {Exp}}_1\) such that \(\mathsf{ValidateBallot }=1\) and \({\mathsf {Extract}}(sk,b)=\perp \). Then, we build an adversary \({\mathcal {A}}^2\) which breaks the EUF-CMA property of the signature scheme. The reduction is straightforward: \({\mathcal {A}}^2\), interacting with an EUF-CMA challenger, asks for signatures on \(\{\nu \}_{\nu \in {\mathcal {V}}}\). Then, it interacts with \({\mathcal {A}}^1\), posing as a vote validatability challenger. It runs all the algorithms as in the protocol but uses \({\mathsf {ExtGenCRS}}\), keeping the trapdoor key tk for itself, and using the answers from the EUF-CMA challenger as the signatures on the voting options. When \({\mathcal {A}}^1\) outputs a ballot b, \({\mathcal {A}}^2\) uses \({\mathsf {Extract}}\) on \(\pi \) to obtain a witness \(w=(\tilde{\nu }_1,\dots ,\tilde{\nu }_n,r_1,\dots ,r_n,\sigma _{\tilde{\nu }_1},\dots ,\sigma _{\tilde{\nu }_n},k)\) such that \((x,w)\in R\). This means that \({\mathsf {VerifySign}}(pk_s,\sigma _{\tilde{\nu }_i},\tilde{\nu }_i)=1\) for \(i\in \{1,\dots ,n\}\).
\({\mathsf {Extract}}(sk,b)\) might return \(\perp \) either because (i) some \({\mathsf {Dec}}(sk_e,C_i)=\perp \), (ii) some \(\tilde{\nu }_i=\tilde{\nu }_j\) for \(i\ne j\) or (iii) some \(\tilde{\nu }_i\not \in {\mathcal {V}}\). However, (i) and (ii) are ruled out due to w being a valid witness, so the only possibility is (iii). Then, \({\mathcal {A}}^2\) can submit \((\tilde{\nu }_i,\sigma _{\tilde{\nu }_i})\) as its EUF-CMA forgery.
Skills to Develop List of various kinematics equations and identities In addition to the relations \[D(v) = \sqrt{\frac{1+v}{1-v}}\] and \[v_c = \frac{v_1 + v_2}{1 + v_1 v_2}\] the following identities can be handy. If stranded on a desert island you should be able to rederive them from scratch. Don’t memorize them. \[v = \frac{D^2 - 1}{D^2 + 1}\] \[\gamma = \frac{D^{-1} + D}{2}\] \[v\gamma = \frac{D - D^{-1}}{2}\] \[D(v)D(-v) = 1\] \[\eta = \ln D\] \[v = \tanh \eta\] \[\gamma = \cosh \eta\] \[v\gamma = \sinh \eta\] \[\tanh (x+y) = \frac{\tanh x + \tanh y}{1 + \tanh x \tanh y}\] \[D_c = D_1 D_2\] \[\eta _c = \eta _1 + \eta _2\] \[v_c \gamma _c = (v_1 + v_2)\gamma _1 \gamma _2\] The hyperbolic trig functions are defined as follows: \[\sinh x = \frac{e^x - e^{-x}}{2}\] \[\cosh x = \frac{e^x + e^{-x}}{2}\] \[\tanh x = \frac{\sinh x}{\cosh x}\] Their inverses are built in to some calculators and computer software, but they can also be calculated using the following relations: \[\sinh^{-1}x = \ln \left ( x + \sqrt{x^2 + 1} \right )\] \[\cosh^{-1}x = \ln \left ( x + \sqrt{x^2 - 1} \right )\] \[\tanh^{-1}x = \frac{1}{2}\ln \left ( \frac{1 + x}{1 - x} \right )\] Their derivatives are, respectively, \(\left ( x^2 + 1 \right )^{-1/2}\), \(\left ( x^2 - 1 \right )^{-1/2}\) and \(\left ( 1 - x^2 \right )^{-1}\).
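These identities are easy to spot-check numerically; a quick sketch in Python (units with c = 1, sample velocities chosen arbitrarily):

```python
import math

# Spot-check of the Doppler/rapidity identities above (c = 1 units).
def D(v):
    return math.sqrt((1 + v) / (1 - v))

v1, v2 = 0.5, 0.6
vc = (v1 + v2) / (1 + v1 * v2)                       # velocity addition

assert math.isclose(D(vc), D(v1) * D(v2))            # D_c = D_1 D_2
assert math.isclose(D(v1) * D(-v1), 1.0)             # D(v) D(-v) = 1

eta1, eta2 = math.log(D(v1)), math.log(D(v2))
assert math.isclose(math.atanh(vc), eta1 + eta2)     # eta_c = eta_1 + eta_2

g1 = 1 / math.sqrt(1 - v1 ** 2)
g2 = 1 / math.sqrt(1 - v2 ** 2)
gc = 1 / math.sqrt(1 - vc ** 2)
assert math.isclose(g1, (D(v1) + 1 / D(v1)) / 2)     # gamma = (D^-1 + D)/2
assert math.isclose(v1 * g1, (D(v1) - 1 / D(v1)) / 2)
assert math.isclose(vc * gc, (v1 + v2) * g1 * g2)    # v_c gamma_c identity
print("all identities hold")
```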
Problem 572 The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. Problem 7. Let $A=\begin{bmatrix} -3 & -4\\ 8& 9 \end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix} -1 \\ 2 \end{bmatrix}$. (a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$. (b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$. Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular. Problem 9. Determine whether each of the following sentences is true or false. (a) There is a $3\times 3$ homogeneous system that has exactly three solutions. (b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric. (c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$. (d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent. (e) The vectors \[\mathbf{v}_1=\begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \mathbf{v}_2=\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \mathbf{v}_3=\begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}\] are linearly independent. Problem 564 Let $A$ and $B$ be $n\times n$ skew-symmetric matrices. Namely $A^{\trans}=-A$ and $B^{\trans}=-B$. (a) Prove that $A+B$ is skew-symmetric. (b) Prove that $cA$ is skew-symmetric for any scalar $c$. (c) Let $P$ be an $m\times n$ matrix. Prove that $P^{\trans}AP$ is skew-symmetric. (d) Suppose that $A$ is real skew-symmetric. Prove that $iA$ is a Hermitian matrix. (e) Prove that if $AB=-BA$, then $AB$ is a skew-symmetric matrix.
(f) Let $\mathbf{v}$ be an $n$-dimensional column vector. Prove that $\mathbf{v}^{\trans}A\mathbf{v}=0$. (g) Suppose that $A$ is a real skew-symmetric matrix and $A^2\mathbf{v}=\mathbf{0}$ for some vector $\mathbf{v}\in \R^n$. Then prove that $A\mathbf{v}=\mathbf{0}$. Problem 556 Let $\mathbf{v}$ be a nonzero vector in $\R^n$. Then the dot product $\mathbf{v}\cdot \mathbf{v}=\mathbf{v}^{\trans}\mathbf{v}\neq 0$. Set $a:=\frac{2}{\mathbf{v}^{\trans}\mathbf{v}}$ and define the $n\times n$ matrix $A$ by \[A=I-a\mathbf{v}\mathbf{v}^{\trans},\] where $I$ is the $n\times n$ identity matrix. Prove that $A$ is a symmetric matrix and $AA=I$. Conclude that the inverse matrix is $A^{-1}=A$. Problem 538 (a) Suppose that $A$ is an $n\times n$ real symmetric positive definite matrix. Prove that \[\langle \mathbf{x}, \mathbf{y}\rangle:=\mathbf{x}^{\trans}A\mathbf{y}\] defines an inner product on the vector space $\R^n$. (b) Let $A$ be an $n\times n$ real matrix. Suppose that \[\langle \mathbf{x}, \mathbf{y}\rangle:=\mathbf{x}^{\trans}A\mathbf{y}\] defines an inner product on the vector space $\R^n$. Prove that $A$ is symmetric and positive definite. Problem 457 Let $A$ be a real symmetric $n\times n$ matrix with $0$ as a simple eigenvalue (that is, the algebraic multiplicity of the eigenvalue $0$ is $1$), and let us fix a vector $\mathbf{v}\in \R^n$. (a) Prove that for sufficiently small positive real $\epsilon$, the equation \[A\mathbf{x}+\epsilon\mathbf{x}=\mathbf{v}\] has a unique solution $\mathbf{x}=\mathbf{x}(\epsilon) \in \R^n$. (b) Evaluate \[\lim_{\epsilon \to 0^+} \epsilon \mathbf{x}(\epsilon)\] in terms of $\mathbf{v}$, the eigenvectors of $A$, and the inner product $\langle\, ,\,\rangle$ on $\R^n$.
(University of California, Berkeley, Linear Algebra Qualifying Exam) Problem 396 A real symmetric $n \times n$ matrix $A$ is called positive definite if \[\mathbf{x}^{\trans}A\mathbf{x}>0\] for all nonzero vectors $\mathbf{x}$ in $\R^n$. (a) Prove that the eigenvalues of a real symmetric positive-definite matrix $A$ are all positive. (b) Prove that if the eigenvalues of a real symmetric matrix $A$ are all positive, then $A$ is positive-definite. Problem 385 Let \[A=\begin{bmatrix} 2 & -1 & -1 \\ -1 &2 &-1 \\ -1 & -1 & 2 \end{bmatrix}.\] Determine whether the matrix $A$ is diagonalizable. If it is diagonalizable, then diagonalize $A$. That is, find a nonsingular matrix $S$ and a diagonal matrix $D$ such that $S^{-1}AS=D$.
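A quick numerical spot-check of Problem 7 and Problem 385 above (numpy assumed available):

```python
import numpy as np

# Problem 7: compute A v and read off the eigenvalue.
A = np.array([[-3, -4], [8, 9]])
v = np.array([-1, 2])
print(A @ v)                       # [-5 10], i.e. A v = 5 v, so lambda = 5
# (b) A^3 v = lambda^3 v, no need to form A^3:
print(5 ** 3 * v)                  # [-125  250]

# Problem 385: a real symmetric matrix is always orthogonally diagonalizable.
B = np.array([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]])
eigvals, S = np.linalg.eigh(B)     # S is orthogonal, S^T B S = diag(eigvals)
print(np.round(eigvals, 6))        # eigenvalues 0, 3, 3 (up to rounding)
```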
This is only an extended comment. Some time ago I asked (myself :-) how fast a multitape NTM that accepts a (reasonably encoded) NP-complete language can be. I came up with this idea: 3-SAT remains NP-complete even if variables are represented in unary. In particular we can convert a clause - suppose $(x_i \lor \neg x_j \lor x_k)$ - of an arbitrary 3-SAT formula $\varphi$ on $n$ variables and $m$ clauses into a sequence of characters over alphabet $\Sigma = \{ +, -, 1, 0 \}$ in which every variable occurrence is represented in unary: $ + 1^{i} 0 \; - 1^{j} 0 \; + 1^{k} 0 $ For example, $(x_2 \lor \neg x_3 \lor x_4)$ can be converted to: +110-1110+11110 So we can convert a 3-SAT formula $\varphi_i$ into an equivalent string $U(\varphi_i)$ by concatenating its clauses. The language $L_U = \{ U(\varphi_i) \mid \varphi_i \in \text{3-SAT} \}$ is NP-complete. A 2-tape NTM can decide whether a string $x \in L_U$ in time $2|x|$ in this way: the first head scans the input from left to right and with the internal logic it keeps track of when it enters or exits a clause or reaches the end of the formula. Whenever it finds a $+$ or $-$, the second head starts moving right with it on the $1^i$ that represents $x_i$. At the end of $1^i$, if the second head is on a $0$ then it guesses a truth value $+$ or $-$ (it makes an assignment) and writes it on the second tape; if it finds a $+$ or $-$ then that variable has already been assigned a value; in both cases, using the internal logic, the NTM matches the truth value under the second head (the assignment) with the last seen $+$ or $-$; if they match then the clause is satisfied; then the second head can return to the leftmost cell; with the internal logic the NTM can keep track of whether all clauses are satisfied while the first head moves towards the end of the input. Example: Tape 1 (formula) Tape 2 (variable assignments) +110-1110+11110... 0000000000000... ^ ^ +110-1110+11110... 0000000000000... ^ ^ +110-1110+11110... 0000000000000... ^ ^ +110-1110+11110... 0+00000000000...
first guess set x2=T; matches + ^ ^ so remember that current clause is satisfied +110-1110+11110... 0+00000000000... ^ ^ ... +110-1110+11110... 0+00000000000... ^ ^ ... +110-1110+11110... 0++0000000000... second guess set x3=T ^ ^ don't reject because current clause is satisfied (and in every case another literal must be parsed) The time can be reduced to $|x|$ if we add some redundant symbols to the clause representation: $ + 1^{i} 0^i \; - 1^{j} 0^j \; + 1^{k} 0^k \; \dots \; \text{+++} $ ($\text{+++}$ marks the end of the formula) In this way the second head can return to the leftmost cell while the first scans the $0^i$ part. Using $\text{++}$ as a clause delimiter and $\text{+++}$ as a marker for the end of the formula we can use the same representation for CNF formulas with an arbitrary number of literals per clause.
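The unary encoding itself is easy to script; a small sketch (the helper names are hypothetical, the representation is the one from the text):

```python
# Encode a CNF clause list into the unary alphabet {+, -, 1, 0}.
# A literal is (index, sign): x_i -> '+' + '1'*i + '0', and negated
# literals use '-' instead of '+'.
def encode_literal(index, positive):
    return ('+' if positive else '-') + '1' * index + '0'

def encode_formula(clauses):
    return ''.join(
        ''.join(encode_literal(i, s) for i, s in clause)
        for clause in clauses
    )

# The clause (x_2 or not x_3 or x_4) from the text:
print(encode_formula([[(2, True), (3, False), (4, True)]]))
# -> +110-1110+11110
```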
Let $\pi: \mathcal{X}\to B$ be a complex analytic family of compact complex manifolds, i.e. $\pi$ is a surjective, proper submersion between complex manifolds. For simplicity, we assume $B$ is the open unit disc $\Delta$ in the complex plane. Denote the fibers of $\pi$ by $X_t:=\pi^{-1}(t)$ and the central fiber is $X:=\pi^{-1}(0)$. Let $[\omega]=c_1(L)\in H^2(X,\mathbb{Z})$ be the first Chern class of an ample line bundle, i.e. an integral Kähler class, on $X$. Is it possible to extend this class to the nearby fibers in the sense that there exist Kähler metrics $\omega_t\in A^{1,1}(X_t)$ on $X_t$ such that $[\omega_t]=[\omega]\in H^2(X,\mathbb{Z})$? Is it possible to extend this class to the total space $\mathcal{X}$ in the sense that there exists a Kähler metric $\omega_{\mathcal{X}}\in A^{1,1}(\mathcal{X})$ on $\mathcal{X}$ such that $[\omega_{\mathcal{X}}\mid_{X}]=[\omega]\in H^2(X,\mathbb{Z})$? Let $\omega$ be a Kähler metric on $X$. Is it possible to extend it to the total space $\mathcal{X}$ in the sense that there exists a Kähler metric $\omega_{\mathcal{X}}\in A^{1,1}(\mathcal{X})$ on $\mathcal{X}$ such that $\omega_{\mathcal{X}}\mid_{X}=\omega$? Let $L$ be a holomorphic line bundle on $X$. Is it possible to extend it to the total space $\mathcal{X}$ in the sense that there exists a holomorphic line bundle $\mathcal{L}$ on $\mathcal{X}$ such that $\mathcal{L}\mid_{X}=L$? These questions arise in the following situation: Consider the Kuranishi family $\pi: (\mathfrak{X}, X)\to (\textrm{Def},0)$ of a polarised Calabi-Yau manifold $(X,[\omega])$. $\pi$ itself is not a polarized family. The problem is how to give polarizations on the other fibers of $\pi$ so that we have the period mapping of Griffiths: $\textrm{Def}\to D$.
In the paper of Tian, define $$H^1(X,\Theta_X)_\omega :=\{\theta\in H^1(X,\Theta_X) \mid \theta\lrcorner \omega=0 \in H^2(X,\mathcal{O}_X)\},$$ and $\textrm{Def}_\omega :=\textrm{Def} \cap H^1(X,\Theta_X)_\omega$, then he claims that "$\rm Def_\omega$ consists of those local deformations of $X$ preserving the polarization $[\omega]$" and that "each point $\theta\in \rm Def_\omega$ stands for a fiber of the Kuranishi family $\pi: (\mathfrak{X}, X)\to (\textrm{Def},0)$ with a polarization $[\omega_{\theta}]=[\omega]\in H^2(X,\mathbb{Z})$". I don't see why. In page 95 of the book Algebraic Geometry III: Complex Algebraic Varieties, Algebraic Curves and Their Jacobians, by Kulikov-Kurchanov-Shokurov, the above claims of Tian are made more precise. According to them, $\textrm{Def}_{\omega}$ is just the submanifold of $\textrm{Def}$ on which the Kähler form $\omega$ has type $(1,1)$. By this, they probably mean that the Kähler form $\omega$ has type $(1,1)$ as a $2$-form on each fiber over points in $\textrm{Def}_{\omega}$. Again, I don't see why. In page 8 of this paper, Yoshikawa claims that every holomorphic line bundle $L$ on a Calabi-Yau threefold $X$ extends to a holomorphic line bundle $\mathcal{L}$ on the total space $\mathfrak{X}$ of the Kuranishi family of $X$. Why?
I have some problems understanding what the best way of dealing with the delta function in polar coordinates is (I know there are many questions on the subject on this website, but none of them is satisfactory). In (Delta function integrated from zero), they claim that the delta function is given by $\delta^{(2)}=\frac{\delta(r)}{\pi r}$ while in (Dirac delta in polar coordinates) it is claimed that $\delta^{(2)}=\frac{\delta(r)}{2\pi r}$. However, the confusion probably comes from the fact that when evaluating a delta function in polar coordinates, one ends up with the expression $\int_0^\infty f(x)\delta(x)\,dx$. This expression is ill-defined as far as I can tell, since using different limiting functions for the delta function can give different results, and thus none of the above expressions can be a well-defined definition of the delta function in polar coordinates. So my question is, if I want to write down the delta function in polar coordinates, what is the best representation for working with it? In my particular case, I want to be able to start with the delta function in polar coordinates and then do coordinate transformations to obtain it in other coordinate systems, without any ambiguities. edit: The best representation I can come up with would be to regularize the radial direction, and write the delta function as $\delta=\frac{1}{r}\delta(r-\epsilon)\delta(\theta-\theta_0)$ for some arbitrary $\theta_0$ and then let $\epsilon\rightarrow0$ in the end.
The electric potential due to a point charge is given by \[\varphi=\frac{kq}{r}\label{6-1}\] where \(\varphi\) is the electric potential due to the point charge, \(k=8.99\times 10^9 \frac{Nm^2}{C^2}\) is the Coulomb constant, \(q\) is the charge of the particle (the source charge, a.k.a. the point charge) causing the electric field for which the electric potential applies, and, \(r\) is the distance that the point of interest is from the point charge. In the case of a non-uniform electric field (such as the electric field due to a point charge), the electric potential method for calculating the work done on a charged particle is much easier than direct application of the force-along-the-path times the length of the path. Suppose, for instance, a particle of charge \(q′\) is fixed at the origin and we need to find the work done by the electric field of that particle on a victim of charge \(q\) as the victim moves along the \(x\) axis from \(x_1\) to \(x_2\). We can’t simply calculate the work as \[F\cdot(x_2-x_1)\] even though the force is in the same direction as the displacement, because the force \(F\) takes on a different value at every different point on the \(x\) axis from \(x = x_1\) to \(x = x_2\). 
So, we need to do an integral: \[dW=Fdx\] \[dW=qEdx\] \[dW=q\frac{kq'}{x^2} dx\] \[\int dW=\int_{x_1}^{x_2} q\frac{kq'}{x^2} dx\] \[W=kq'q\int_{x_1}^{x_2} x^{-2} dx\] \[W=kq'q \frac{x^{-1}}{-1}\Big |_{x_1}^{x_2}\] \[W=-kq'q(\frac{1}{x_2}-\frac{1}{x_1})\] \[W=-(\frac{kq'q}{x_2}-\frac{kq'q}{x_1})\] Compare this with the following solution to the same problem (a particle of charge \(q′\) is fixed at the origin and we need to find the work done by the electric field of that particle on a victim of charge \(q\) as the victim moves along the \(x\) axis from \(x_1\) to \(x_2\)): \[W=-\Delta U\] \[W=-q \Delta \varphi\] \[W=-q(\varphi_2-\varphi_1)\] \[W=-(\frac{kq'q}{x_2}-\frac{kq'q}{x_1})\] The electric potential energy of a particle, used in conjunction with the principle of the conservation of mechanical energy, is a powerful problem-solving tool. The following example makes this evident: Example A particle of charge 0.180 \(\mu C\) is fixed in space by unspecified means. A particle of charge -0.0950 \(\mu C\) and mass 0.130 grams is 0.885 cm away from the first particle and moving directly away from the first particle with a speed of 15.0 m/s. How far away from the first particle does the second particle get? This is a conservation of energy problem. 
As required for all conservation of energy problems, we start with a before and after diagram: \(\mbox{Energy Before = Energy After}\) \[K+U=K'+U'\] \[K+q\varphi=0+q \varphi'\] \[\frac{1}{2}mv^2+q\frac{kq_s}{r}=q\frac{kq_s}{r'}\] \[\frac{1}{r'}=\frac{1}{r}+\frac{mv^2}{2kq_sq}\] \[r'=\frac{1}{\frac{1}{r}+\frac{mv^2}{2kq_sq}}\] \[ r'=\frac{1}{ \frac{1}{8.85\times 10^{-3} m} + \frac{1.30\times 10^{-4}kg(15.0 m/s)^2}{2(8.99\times 10^9 \frac{N\cdot m^2}{C^2})1.80\times 10^{-7}C(-9.50\times 10^{-8}C) } }\] \[r'=0.05599m\] \[r'=0.0560m\] \[r'=5.6cm\] Superposition in the Case of the Electric Potential When there is more than one charged particle contributing to the electric potential at a point in space, the electric potential at that point is the sum of the contributions due to the individual charged particles. The electric potential at a point in space, due to a set of several charged particles, is easier to calculate than the electric field due to the same set of charged particles is. This is true because the sum of electric potential contributions is an ordinary arithmetic sum, whereas the sum of electric field contributions is a vector sum. Example Find a formula that gives the electric potential at any point \((x, y)\) on the x-y plane, due to a pair of particles: one of charge \(–q\) at \((-\frac{d}{2},0)\) and the other of charge \(+q\) at \((\frac{d}{2},0)\). Solution: We establish a point \(P\) at an arbitrary position \((x, y)\) on the x-y plane and determine the distance that point \(P\) is from each of the charged particles. In the following diagram, I use the symbol \(r_{+}\) to represent the distance that point \(P\) is from the positively-charged particle, and \(r_{-}\) to represent the distance that point P is from the negatively-charged particle. Analysis of the shaded triangle in the diagram at right gives us \(r_{+}\). \[r_{+}=\sqrt{(x-\frac{d}{2})^2+y^2}\] Analysis of the shaded triangle in the diagram at right gives us \(r_{-}\).
\[r_{-}=\sqrt{(x+\frac{d}{2})^2+y^2}\] With the distances that point \(P\) is from each of the charged particles in hand, we are ready to determine the potential: \[r_{+}=\sqrt{(x-\frac{d}{2})^2+y^2}\] \[r_{-}=\sqrt{(x+\frac{d}{2})^2+y^2}\] \[\varphi(x,y)=\varphi_{+}+\varphi_{-}\] \[\varphi(x,y)=\frac{kq}{r_{+}}+\frac{k(-q)}{r_{-}}\] \[\varphi(x,y)=\frac{kq}{r_{+}}-\frac{kq}{r_{-}}\] \[\varphi(x,y)=\frac{kq}{\sqrt{(x-\frac{d}{2})^2+y^2}}-\frac{kq}{\sqrt{(x+\frac{d}{2})^2+y^2}}\]
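The conservation-of-energy example above is easy to check numerically; a quick sketch with the given values:

```python
# Numerical check of the conservation-of-energy example above.
k = 8.99e9          # N m^2 / C^2, Coulomb constant
q_s = 1.80e-7       # C, fixed source charge
q = -9.50e-8        # C, moving charge
m = 1.30e-4         # kg
v = 15.0            # m/s
r = 8.85e-3         # m, initial separation

# (1/2) m v^2 + k q_s q / r = k q_s q / r'  solved for r':
r_prime = 1.0 / (1.0 / r + m * v ** 2 / (2 * k * q_s * q))
print(round(r_prime, 4))   # 0.056, i.e. 5.6 cm
```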
Given a Lagrangian coupling a complex scalar field $\psi$ to a real scalar field $\phi$: $$\mathcal{L} = \frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi + \partial_{\mu}\psi\partial^{\mu}\psi^*+ \frac{1}{2}m^2 \phi^2 + \frac{1}{2}M^2 \psi^2 + g \phi \psi \psi^*$$ I'm struggling to see how there can be any non-zero Feynman diagrams for $\psi \psi \rightarrow \psi \psi$ scattering. (As in figure 9 of Tong's notes.) If we have two internal vertices, I understand that we need to calculate the quantity: $$\langle 0| T\{\psi(x_1)\psi(x_2)\psi(y_1)\psi(y_2) \int \phi \psi \psi^* \int \phi \psi \psi^*\} |0\rangle$$ Now, by Wick's theorem this should be given by the sum of all possible contractions. However, the contraction of $\psi$ with $\psi$ is zero, so I can't see any way to completely contract this.
The instability and high sensitivity of optimisation results can be augmented by adding another layer of quantitative methodology in the form of Monte Carlo simulation. The name Monte Carlo alludes to the nature of the simulation procedure, which, in essence, involves drawing random numbers from a distribution, and then using the random numbers as inputs for a mathematical process, in this case portfolio optimisation. [Quantitative Portfolio Optimisation, Asset Allocation and Risk Management - Mikkel Rasmussen - 2003] I'm currently trying to apply Monte Carlo techniques in the context of mean-variance portfolio optimization. According to what I have learned so far, the most basic and simple model is "resampling", and it consists of the following steps: For each asset, fit the historical returns (daily, weekly or monthly data) with a distribution from a parametric family (normal, Student's t, etc.) and obtain the specific parameters (mean, variance). For each asset, generate a random expected return from its fitted distribution. Perform mean-variance optimization (tangency portfolio, which implies Sharpe-ratio maximization) using the generated expected returns and the covariance matrix (this is computed once with the preferred method). Repeat steps 2 and 3 n times. Average the weights of all portfolios. My questions are the following: How should one correctly compute the statistics (expected return, expected volatility) of the final averaged optimized portfolio? It is not very clear to me whether one should average the weights of all portfolios (step 5) according to some technique or just compute the simple mean. If the former, what are these techniques? Are there ways to improve the "resampling" other than trying different probability distributions (e.g. generate expected returns not directly from a probability distribution but applying, say, the
Single Index Model - $R_{it}=\alpha_i+\beta_i \cdot R_{mt} + \epsilon_{it}$ - where the random component in that case would be the noise $\epsilon_{it}$)? Does it make sense to generate random returns from a multivariate probability distribution (the mean being the vector of asset means and the variance being the covariance matrix)? Doing so, I noticed that all assets are always in the portfolio.
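A minimal resampling sketch along the lines of steps 1-5 above (the synthetic data, normal fits, and unconstrained closed-form tangency weights are simplifying assumptions; numpy assumed available):

```python
import numpy as np

# Minimal resampling sketch: normal return fits, unconstrained
# closed-form tangency (max-Sharpe) weights, simple weight averaging.
rng = np.random.default_rng(0)

# Historical (monthly) returns: T observations of N assets, synthetic here.
T, N = 120, 4
hist = rng.normal(0.01, 0.05, size=(T, N))

mu_hat = hist.mean(axis=0)            # step 1: fit per-asset parameters
se = hist.std(axis=0) / np.sqrt(T)    # sampling error of the mean
cov = np.cov(hist, rowvar=False)      # covariance computed once
cov_inv = np.linalg.inv(cov)
rf = 0.0                              # risk-free rate assumption

n_sims, weights = 500, []
for _ in range(n_sims):
    mu_sim = rng.normal(mu_hat, se)   # step 2: random expected returns
    w = cov_inv @ (mu_sim - rf)       # step 3: tangency direction
    w = w / w.sum()                   # normalize to fully invested
    weights.append(w)

w_avg = np.mean(weights, axis=0)      # step 5: simple average of weights
print(np.round(w_avg, 3), round(float(w_avg.sum()), 6))
```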
Suppose I have two points on the unit circle corresponding to 30 and 60 degrees from the x-axis. I want to draw a line between these two points.
\documentclass[border=5mm]{standalone}
\usepackage{tikz}
\usepackage{calculator}
\usetikzlibrary{rulercompass}
\usetikzlibrary{intersections,quotes,angles}
\usetikzlibrary{calc}
\begin{document}
\begin{tikzpicture}
\draw (\cos(30), \sin(30)) -- (\cos(60), \sin(60));
\end{tikzpicture}
\end{document}
But this does not execute. What is the correct way?
In the Fourier transform for periodic signals, I checked different books and I found a different explanation in each book. Let's take the explanation in Signals and Systems by Rajeshwari & Rao: The resulting Fourier transform for a periodic signal consists of a train of impulses in frequency, with areas of impulses proportional to the Fourier series coefficients. To suggest the general result, let us consider $x(t)$ with Fourier transform $X(\omega)$ which is a single impulse of area $2\pi$ at $\omega=\omega_0$, that is, $$X(\omega)=2\pi\delta(\omega-\omega_0)$$ To determine $x(t)$ for which this is the Fourier transform, we can apply the inverse Fourier transform to obtain $$ x(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} 2\pi \delta(\omega-\omega_0)e^{j\omega t}d\omega$$ The things I want to ask are: If we have the Fourier series of a periodic signal which is one impulse, will the Fourier transform of that impulse be the same single impulse? Is that what is explained above? Why do we use a shifted impulse? Why can't we take $\delta(\omega)$?
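The single-spike picture can be seen numerically with a DFT; a quick sketch (the DFT bin index plays the role of $\omega_0$, and numpy is assumed available):

```python
import numpy as np

# A pure complex exponential has all its spectral weight in one bin,
# the discrete analogue of X(w) = 2*pi*delta(w - w0).
N = 64
n = np.arange(N)
k0 = 5                                  # "omega_0" in DFT-bin units
x = np.exp(2j * np.pi * k0 * n / N)

X = np.fft.fft(x)
print(int(np.argmax(np.abs(X))))        # 5: the single impulse sits at k0
# delta(omega) alone would correspond to k0 = 0, i.e. a constant signal.
```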
This problem is from our recitation which I do not have solutions for, and I'm stuck on the very last part where I need to satisfy the $u_t(x,0)$ initial condition. The problem is: $$ \left\{ \begin{split} u_{tt} &= c^2u_{xx}, \quad\qquad x>0,t>0\\ u(&0,t) = 0 \qquad\qquad\qquad t>0\\ u(&x,0) = \sin(x) = f(x)\;\quad x>0\\ u_t&(x,0) = e^{-x} = g(x)\quad\quad x>0 \end{split} \right. $$ Since we have Neumann conditions we use an even extension. Let $$ u(x,0) = \begin{cases} f(x) &\text{if } x > 0 \\ f(-x) &\text{if } x < 0 \end{cases} $$ and $$ u_t(x,0) = \begin{cases} g(x) &\text{if } x > 0 \\ g(-x) &\text{if } x < 0 \end{cases} $$ Then D'Alembert's formula gives: $$ u(x,t) = \frac{1}{2}\big(f(x-ct)+f(x+ct)\big) + \frac{1}{2c}\int_{x-ct}^{x+ct} g(s)\,ds $$ so at $t=0$ $$ u(x,0) = \frac{1}{2}\big(f(x)+f(x)\big) + \frac{1}{2c}\int_{x}^{x} g(s)\,ds = f(x)=\sin(x), $$ thus the first initial condition is satisfied. For the second one we have $$ u_t(x,t) = \frac{1}{2}\big(-cf'(x-ct)+cf'(x+ct)\big) + \frac{d}{dt}\bigg [ \frac{1}{2c} \int_{x-ct}^{x+ct} g(s)\,ds \bigg ] $$ My problem is I'm not sure what to do with the $$ \frac{d}{dt}\bigg [ \frac{1}{2c} \int_{x-ct}^{x+ct} g(s)\,ds \bigg ] $$ term. Should I differentiate the integral and then plug in $t = 0$ for the whole expression? How do I get the term $e^{-x}$ from this?
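One standard way to handle that term is the Leibniz integral rule for moving limits, a sketch of which (quoting the general rule first) is:

```latex
% Leibniz rule for an integral with moving limits:
\frac{d}{dt}\int_{a(t)}^{b(t)} g(s)\,ds = g(b(t))\,b'(t) - g(a(t))\,a'(t).
% With a(t) = x - ct and b(t) = x + ct:
\frac{d}{dt}\bigg[\frac{1}{2c}\int_{x-ct}^{x+ct} g(s)\,ds\bigg]
  = \frac{1}{2c}\Big(c\,g(x+ct) + c\,g(x-ct)\Big)
  = \frac{1}{2}\big(g(x+ct) + g(x-ct)\big).
% At t = 0 this reduces to (1/2)(g(x) + g(x)) = g(x) = e^{-x}.
```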
Vogt's theorem for plane curves states that if A and B are the endpoints of a spiral arc, with the curvature nondecreasing from A to B, then the angle $\beta$ of the tangent to the arc at B with the chord AB is not less than the angle $\alpha$ of the tangent at A with AB, and $\alpha = \beta$ only if the curvature is constant. Does anyone know of a result which extends this theorem to space curves or to curves in higher dimensions? I have the following conjecture for space curves: Given a regular curve in space $\gamma : [0, l] \rightarrow \mathbb{R}^3$, parametrized by arc-length $s$, let $\kappa$ and $\tau$ denote the Euclidean curvature and torsion respectively. Let us assume that $\kappa$ is non-decreasing and $\tau$ is non-decreasing. Let $A = \gamma(0)$ and $B = \gamma(l)$, let $\alpha$ be the angle between the tangent plane at $\gamma(0)$ and the chord $AB$, and let $\beta$ be the angle between the tangent plane at $\gamma(l)$ and the chord $AB$. We claim that $\alpha \leq \beta$, and that equality holds only if $\gamma$ is a circular helix. This is an attempt at an (as yet incomplete) proof of the above claim. I would be happy to receive any corrections and/or comments. Let us write the curve $\gamma$ in the parametrization $\gamma(s) = (x_1(s), x_2(s), x_3(s))$. Without loss of generality let us assume that $A = \gamma(0) = (0, 0, 0)$ and $B = \gamma(l) = (x_1(l), 0, 0)$. Let $\theta(s)$ denote the angle between the tangent plane at $\gamma(s)$ and the chord $AB$; thus we have that $\sin \theta(s) = \langle B(s), (1,0,0) \rangle$. Using the Frenet-Serret formulae, where $T'(s) = \kappa(s) N(s)$ and $N'(s) = -\kappa(s) T(s) -\tau(s) B(s)$, we have that $$\sin \theta(s) = \langle B(s), (1,0,0) \rangle = \frac{1}{\kappa(s)} \langle \gamma'(s) \times \gamma''(s), (1,0,0) \rangle = \frac{1}{\kappa(s)} \big(x_2'(s) x_3''(s) - x_3'(s) x_2''(s)\big).$$ Claim: $\alpha \leq \beta$; i.e., it is enough to prove that $\int_{\theta(0)}^{\theta(l)} \sin \theta \, d\theta \geq 0$.
From the equation for $\sin \theta(s)$ we obtain that $$d\theta(s) = \frac{\kappa(s) f'(s) - f(s) \kappa'(s)}{\kappa(s)\sqrt{\kappa(s)^2-f(s)^2}}\,ds,$$ where $f(s) := x_2'(s) x_3''(s)-x_2''(s) x_3'(s) = \kappa(s) \langle B(s), e_1 \rangle$ and $e_1:= (1,0,0)$. On further simplification using the Frenet-Serret formulae we get: $$\int_{\theta(0)}^{\theta(l)} \sin \theta(s)\, d\theta(s) = \int_0^l \frac{\tau(s)}{\kappa(s)} \frac{\langle B(s), e_1 \rangle \langle N(s), e_1 \rangle}{\sqrt{1-\langle B(s), e_1\rangle ^2}}\, ds.$$ From here it is not clear to me that the integrand is always positive, nor whether integration by parts can be used to show that it is.
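As a sanity check of the formula $\sin\theta(s)=\frac{1}{\kappa(s)}\langle\gamma'\times\gamma'',e_1\rangle$ (my own sketch, not part of the proof attempt), one can evaluate it symbolically for a unit-speed circular helix, where everything is available in closed form:

```python
import sympy as sp

s = sp.symbols('s', real=True)
a, b = sp.symbols('a b', positive=True)
c = sp.sqrt(a**2 + b**2)

# Unit-speed circular helix of radius a and pitch parameter b
gamma = sp.Matrix([a * sp.cos(s / c), a * sp.sin(s / c), b * s / c])
g1 = gamma.diff(s)                       # T(s)
g2 = g1.diff(s)                          # kappa(s) * N(s)
kappa = sp.simplify(g2.norm())           # constant: a / (a^2 + b^2)
B = sp.simplify(g1.cross(g2) / kappa)    # binormal B(s)

e1 = sp.Matrix([1, 0, 0])
# Hand computation predicts <B, e1> = (b / c) * sin(s / c)
print(sp.simplify(B.dot(e1) - b * sp.sin(s / c) / c))  # 0
```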
This question already has an answer here: Photons are the force carrier of the electromagnetic force. I do not see how this could result in a transfer of momentum that attracts objects together.

Actually, A. Zee's book "QFT in a Nutshell" has a very nice explanation of this in chapter I.5; I will briefly sketch it (this is a very rough sketch): $$Z=\int DA\, e^{iS(A)} =e^{iW(J)}$$ where $W(J)$ is given by $$W(J)=-\frac{1}{2} \int\!\!\int d^4x\, d^4y\, J(x)D(x-y)J(y)$$ where $D(x-y)$ is the photon propagator and $J(x)$ and $J(y)$ refer to two lumps of matter. Plugging in the photon propagator we get $$W(J)=\frac{1}{2} \int d^4k\, J^\mu (k) \frac{1}{k^2}J_\mu (k)$$ $J^0$ is the charge density, which for two lumps of positive charge is positive, and hence the result has a positive sign. If one lump were positive and the other negative, one of the $J$'s would be positive and the other negative, and we would get an overall negative sign, showing that there is attraction between unlike charges and repulsion between like charges. So photons basically act as carriers of information from one source to the other. Extending this analysis to particles of other spins, exchange between like charges gives the following forces according to the spin of the mediator: spin 0 = attractive; spin 1 = repulsive (photon); spin 2 = attractive.

For the forces between elementary particles we have Feynman diagrams, where there exists a mediating particle for the interaction. In the simplest diagrams: for the strong force it is the gluon, for the weak force the Zs and Ws, and for the electromagnetic force the photon.
Here is Bhabha scattering, where the electron and the positron (attractive force) are first order in the expansion and of low energy: one annihilation diagram and one scattering diagram, with the x axis as the time axis. For e-e- scattering only the second diagram exists at first order. So the question should be how there can be attractive and repulsive forces at all. To really answer it, one would have to do the mathematics that the Feynman diagrams dictate, and the result will tell us whether the force is attractive. For intuitive understanding I have found useful the analogue of boats throwing balls to each other, transferring momentum, for repulsion, and boomerangs for attraction: momentum conservation acts directly in the repulsive case, angular momentum conservation in the attractive one. As with all analogues, it should not be stressed too much. Here we have a ball and a boomerang; it is a way to see that the boats can be "attracted" to each other. In the Feynman diagrams the e+e- process has an extra diagram to add to the calculation, which kinematically induces the attractive effect that e-e- or e+e+ scattering does not have.
I am using the acronym package for acronyms, symbols and constants in my report. I would like the list of constants to be slightly different from the other two lists. I want to use the \acroextra{} macro to add the value of each constant and to print the values in a third column. At the moment I have something like this:

\documentclass[10pt]{report}
\usepackage[utf8]{inputenc}
\usepackage{acronym}
\begin{document}
\large \textbf{Symbols}
\begin{acronym}[longest]
\acro{lam}[$\lambda$]{wavelength}
\acro{temp}[$T$]{Temperature}
\end{acronym}
\large \textbf{Constants}
\begin{acronym}[longest]
\acro{c}[$c$]{speed of light \acroextra{299 792 458 m/s}}
\acro{sig}[$\sigma$]{Stefan–Boltzmann constant \acroextra{$5.670367 \cdot 10^{-8}$ W/(m$^2$K$^4$)}}
\end{acronym}
\vspace{1in}
Some text and acro calls
\end{document}

This gives: I would like it to result in something like this: Can I make this work with a second width in the environment (\begin{acronym}[longest short][longest long]) or something? The values could also be right aligned if that is simpler.
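One possible direction, purely as a sketch of mine and not a tested solution: bypass the acronym environment for the constants and set them in a plain three-column tabular with the values right-aligned. Note that this loses the automatic \ac{...} cross-referencing for those entries.

```latex
% Hypothetical sketch: constants as a three-column table instead of
% a second acronym environment (loses \ac{...} linking for these).
\large \textbf{Constants}

\noindent
\begin{tabular}{@{}p{2em}p{0.45\linewidth}r@{}}
  $c$      & speed of light             & 299 792 458 m/s \\
  $\sigma$ & Stefan--Boltzmann constant & $5.670367\cdot10^{-8}$ W/(m$^2$K$^4$) \\
\end{tabular}
```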
We'll say that a function on $[0,1]$ is uniformly left continuous if for every $\epsilon > 0$ there exists $\delta > 0$ such that $x \in (y - \delta, y)$ implies $|f(x) - f(y)| < \epsilon$ for every $x, y \in [0,1]$. I want to know if the space of all such functions is complete with respect to the uniform norm. I'm interested in this space because I suspect that it is the completion of the space of piecewise constant functions which are continuous from the left. Thanks in advance for the help! EDIT: As was pointed out in the comments, what I wrote above is equivalent to uniform continuity. I think the following revised definition captures what I want. $f$ is uniformly left continuous if for every $\epsilon$ there is a partition $0 = t_0 < t_1 < \ldots < t_n = 1$ such that $|f(x) - f(y)| < \epsilon$ whenever $t_0 \leq x \leq y \leq t_1$ or $t_i < x \leq y \leq t_{i+1}$ for $i > 0$. So with this definition the characteristic function of $(1/2,1]$ is uniformly left continuous while the characteristic function of $[1/2,1]$ is not. Here are the examples that I'm really trying to kill. Take $f$ to be the function which is 0 on the interval $[0,1/2]$ and let $f(x) = 1/(x-1/2)$ on $(1/2,1]$. This is left continuous, but it shouldn't be uniformly left continuous. Even if you insist that $f$ be bounded, you could set $f(x) = \sin(1/(x-1/2))$ for $x$ in $(1/2,1]$ and get a left continuous function which is not uniformly left continuous. If you can think of a better definition that captures this intuition, let me know.
I'm trying to better understand the causes for the equation of time by deriving an approximation from first principles. My naive approach, $EOT_{NAIVE}$, is to take the difference between the right ascension of the mean sun, $\alpha_M(t)$, and the right ascension of the "real" sun $$EOT_{NAIVE}(t)=\langle\dot{\alpha }\rangle\cdot(t-t_0)-\alpha(t)$$ where $\alpha(t)$ is simply the "actual" position of the sun at time $t$ from ephemeris data (e.g. either of the RA values reported by JPL HORIZONS, or a similar source) and where $\langle\dot{\alpha }\rangle = 24/365.242$ and $t_0=t:\alpha_M(t_0)=0$. To confirm that this is about right, I compare my result with what I get using the USNO definition of GMST $$EOT_{GMST}(t)=GMST(t)-(t-t_{noon})-\alpha(t)$$ and with $EOT_{POLY}$, a standard polynomial expression from Dershowitz & Reingold as described in The Clock of the Long Now documentation. But the values I get for $EOT_{NAIVE}$ (blue dashed line) are consistently about $7.4$ minutes greater than these reference methods (exactly $7.4537$ for $EOT_{GMST}$, red line; and within a couple seconds of that for $EOT_{POLY}$, gray outline) give: Surprisingly, I can fix this by simply changing $t_0$ from the date, $t_E$, on which the most recent vernal equinox occurred (2012-03-20T05:14:33Z) — which I had assumed would give a "starting" RA of 0 — to 2012-03-22T02:20:32.41Z, which brings my approximation (blue line) exactly in register with the GMST approach (gray band), and within a couple seconds of the polynomial approach (red line), as can be seen by taking the difference between the GMST approach (gray band) and each of these approaches (note the change of scale, this figure essentially takes the difference between $EOT_{GMST}$ and each of the plots in the first figure above, and "zooms in" on the gray band): Why should changing the date in this way "fix" my approximation? Why doesn't my approximation work with $t_0=t_E$? Why does it work perfectly with a different date? 
Are the approaches I'm using as references ($EOT_{GMST}$ and $EOT_{POLY}$) the right ones for this kind of exploration? Note that the reported RA of the sun at 2012-03-22T02:20:32.41Z is $0.1146^h$, and that the fit between $EOT_{GMST}$ and $EOT_{NAIVE}$ using that date is quite good (this figure zooms in further on the gray band in the second figure above): Note also that despite the documentation for the ephemeris I use claiming that "light time delay is not included", the values I get match those from JPL's "airless apparent right ascension and declination of the target center with respect to the Earth's true-equator and the meridian containing the Earth's true equinox-of-date. Corrected for light-time, the gravitational deflection of light, stellar aberration, precession and nutation."
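For cross-checking gross offsets like the one above, I find a closed-form approximation handy. This is my own sketch using the common two-harmonic fit (good to roughly a minute), not one of the reference methods discussed here:

```python
import math

def equation_of_time_minutes(day_of_year: int) -> float:
    """Approximate equation of time in minutes (apparent sun minus
    mean sun), using the standard two-harmonic fit; Jan 1 is day 1."""
    b = 2.0 * math.pi * (day_of_year - 81) / 364.0
    return 9.87 * math.sin(2.0 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# Mid-February: the sundial runs roughly 14 minutes behind the mean clock
print(equation_of_time_minutes(45))
```

The fit is nowhere near ephemeris accuracy, but a constant 7.4-minute offset from it would stand out immediately.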
In this topic, we will learn how to find the volume of a solid object that has known cross sections. We consider solids whose cross sections are common shapes such as triangles, squares, rectangles, trapezoids, and semicircles.

Definition: Volume of a Solid Using Integration

Let \(S\) be a solid and suppose that the area of the cross section in the plane perpendicular to the \(x-\)axis is \(A\left( x \right)\) for \(a \le x \le b.\) Then the volume of the solid from \(x = a\) to \(x = b\) is given by the cross-section formula \[V = \int\limits_a^b {A\left( x \right)dx}. \] Similarly, if the cross section is perpendicular to the \(y-\)axis and its area is defined by the function \(A\left( y \right),\) then the volume of the solid from \(y = c\) to \(y = d\) is given by \[V = \int\limits_c^d {A\left( y \right)dy} .\]

Steps for Finding the Volume of a Solid with a Known Cross Section

1. Sketch the base of the solid and a typical cross section.
2. Express the area of the cross section \(A\left( x \right)\) as a function of \(x.\)
3. Determine the limits of integration.
4. Evaluate the definite integral \[V = \int\limits_a^b {A\left( x \right)dx}.\]

Solved Problems

Example 1. The solid has a base lying in the first quadrant of the \(xy-\)plane and bounded by the lines \(y = x,\) \(x = 1,\) \(y = 0.\) Every planar section perpendicular to the \(x-\)axis is a semicircle. Find the volume of the solid.

Example 2. Find the volume of a solid bounded by the elliptic paraboloid \(z = \large{\frac{{{x^2}}}{{{a^2}}}}\normalsize + \large{\frac{{{y^2}}}{{{b^2}}}}\normalsize\) and the plane \(z = 1.\)

Example 3. The base of a solid is bounded by the parabola \(y = 1 - {x^2}\) and the \(x-\)axis. Find the volume of the solid if the cross sections are equilateral triangles perpendicular to the \(x-\)axis.
Example 4. Find the volume of a regular square pyramid with the base side \(a\) and the altitude \(H.\)

Example 5. Find the volume of a solid if the base of the solid is the circle given by the equation \({x^2} + {y^2} = 1,\) and every perpendicular cross section is a square.

Example 6. Find the volume of the frustum of a cone if its bases are ellipses with the semi-axes \(A, B,\) and \(a, b,\) and the altitude is equal to \(H.\)

Example 7. Calculate the volume of a wedge given the bottom sides \(a, b,\) the top side \(c,\) and the altitude \(H.\)

Example 8. Find the volume of a regular tetrahedron with the edge \(a.\)

Example 9. A wedge is cut out of a circular cylinder with radius \(R\) and height \(H\) by the plane passing through a diameter of the base (Figure \(10\)). Find the volume of the cylindrical wedge.

Example 10. The axes of two circular cylinders with the same radius \(R\) intersect at right angles. Find the volume of the solid common to both these cylinders.

Example 1. The solid has a base lying in the first quadrant of the \(xy-\)plane and bounded by the lines \(y = x,\) \(x = 1,\) \(y = 0.\) Every planar section perpendicular to the \(x-\)axis is a semicircle. Find the volume of the solid.

Solution. The diameter of the semicircle at a point \(x\) is \(d=y=x.\) Hence, the area of the cross section is \[A(x) = \frac{{\pi {d^2}}}{8} = \frac{{\pi {x^2}}}{8}.\] Integration yields the following result: \[{V = \int\limits_0^1 {A\left( x \right)dx} }={ \int\limits_0^1 {\frac{{\pi {x^2}}}{8}dx} }={ \frac{\pi }{8}\int\limits_0^1 {{x^2}dx} }={ \frac{\pi }{8} \cdot \left. {\frac{{{x^3}}}{3}} \right|_0^1 }={ \frac{\pi }{{24}}.}\]

Example 2. Find the volume of a solid bounded by the elliptic paraboloid \(z = \large{\frac{{{x^2}}}{{{a^2}}}}\normalsize + \large{\frac{{{y^2}}}{{{b^2}}}}\normalsize\) and the plane \(z = 1.\)

Solution.
Consider an arbitrary planar section perpendicular to the \(z-\)axis at a point \(z,\) where \(0 \lt z \le 1.\) The cross section is an ellipse defined by the equation \[{z = \frac{{{x^2}}}{{{a^2}}} + \frac{{{y^2}}}{{{b^2}}},}\;\; \Rightarrow {\frac{{{x^2}}}{{{{\left( {a\sqrt z } \right)}^2}}} + \frac{{{y^2}}}{{{{\left( {b\sqrt z } \right)}^2}}} = 1.}\] The area of the cross section is \[{A\left( z \right) = \pi \cdot \left( {a\sqrt z } \right) \cdot \left( {b\sqrt z } \right) }={ \pi abz.}\] Then, by the cross-section formula, \[{V = \int\limits_0^1 {A\left( z \right)dz} }={ \int\limits_0^1 {\pi abzdz} }={ \pi ab\int\limits_0^1 {zdz} }={ \pi ab \cdot \left. {\frac{{{z^2}}}{2}} \right|_0^1 }={ \frac{{\pi ab}}{2}.}\]

Example 3. The base of a solid is bounded by the parabola \(y = 1 - {x^2}\) and the \(x-\)axis. Find the volume of the solid if the cross sections are equilateral triangles perpendicular to the \(x-\)axis.

Solution. The area of the equilateral triangle at a point \(x\) is given by \[A\left( x \right) = \frac{{{a^2}\sqrt 3 }}{4}.\] As the side \(a\) is equal to \(1-{x^2},\) then \[{A\left( x \right) = \frac{{{a^2}\sqrt 3 }}{4} }={ \frac{{\sqrt 3 }}{4}{\left( {1 - {x^2}} \right)^2}.}\] The parabola \(y = 1 - {x^2}\) intersects the \(x-\)axis at the points \(x=-1,\) \(x = 1.\) Compute the volume of the solid: \[{V = \int\limits_{-1}^1 {A\left( x \right)dx} }={ \int\limits_{-1}^1 {\frac{{\sqrt 3 }}{4}{{\left( {1 - {x^2}} \right)}^2}dx} }={ \frac{{\sqrt 3 }}{4}\int\limits_{-1}^1 {\left( {1 - 2{x^2} + {x^4}} \right)dx} }={ \frac{{\sqrt 3 }}{4}\left.
{\left[ {x - \frac{{2{x^3}}}{3} + \frac{{{x^5}}}{5}} \right]} \right|_{-1}^1 }={ \frac{{\sqrt 3 }}{4}\left[ {\left( {1 - \frac{2}{3} + \frac{1}{5}} \right) }\right.}-{\left.{ \left( { - 1 + \frac{2}{3} - \frac{1}{5}} \right)} \right] }={ \frac{{\sqrt 3 }}{2}\left( {1 - \frac{2}{3} + \frac{1}{5}} \right) }={ \frac{{4\sqrt 3 }}{{15}}.}\]

Example 4. Find the volume of a regular square pyramid with the base side \(a\) and the altitude \(H.\)

Solution. The area of the square cross section at a point \(x\) is written in the form \[A\left( x \right) = {\left( {a \cdot \frac{x}{H}} \right)^2} = \frac{{{a^2}{x^2}}}{{{H^2}}}.\] Hence, the volume of the pyramid is given by \[{V = \int\limits_0^H {A\left( x \right)dx} }={ \int\limits_0^H {\frac{{{a^2}{x^2}}}{{{H^2}}}dx} }={ \frac{{{a^2}}}{{{H^2}}}\int\limits_0^H {{x^2}dx} }={ \frac{{{a^2}}}{{{H^2}}} \cdot \left. {\frac{{{x^3}}}{3}} \right|_0^H }={ \frac{{{a^2}H}}{3}.}\]

Example 5. Find the volume of a solid if the base of the solid is the circle given by the equation \({x^2} + {y^2} = 1,\) and every perpendicular cross section is a square.

Solution. An arbitrary cross section at a point \(x\) has the side \(a\) equal to \[a = 2y = 2\sqrt {1 - {x^2}} .\] Hence, the area of the cross section is \[A\left( x \right) = {a^2} = 4\left( {1 - {x^2}} \right).\] Calculate the volume of the solid: \[{V = \int\limits_{-1}^1 {A\left( x \right)dx} }={ \int\limits_{-1}^1 {4\left( {1 - {x^2}} \right)dx} }={ 4\left. {\left( {x - \frac{{{x^3}}}{3}} \right)} \right|_{-1}^1 }={ 4\left[ {\left( {1 - \frac{{{1^3}}}{3}} \right) - \left( { - 1 - \frac{{{{\left( { - 1} \right)}^3}}}{3}} \right)} \right] }={ 4\left[ {\frac{2}{3} - \left( { - \frac{2}{3}} \right)} \right] }={ \frac{{16}}{3}.}\]

Example 6. Find the volume of the frustum of a cone if its bases are ellipses with the semi-axes \(A, B,\) and \(a, b,\) and the altitude is equal to \(H.\)

Solution.
The volume of the frustum of the cone is given by the integral \[V = \int\limits_0^H {A\left( x \right)dx} ,\] where \({A\left( x \right)}\) is the cross-sectional area at a point \(x.\) The lengths of the major and minor axes change linearly from \(a, b\) to \(A, B,\) and at the point \(x\) they are determined by the following expressions: \[{\text{major axis:}\;\;a + \left( {A - a} \right)\frac{x}{H},\;\;}\kern0pt{\text{minor axis:}\;\;b + \left( {B - b} \right)\frac{x}{H}.}\] Let's now calculate the area of the cross section: \[{A\left( x \right) \text{ = }}\kern0pt{\pi \left( {a + \left( {A - a} \right)\frac{x}{H}} \right)\left( {b + \left( {B - b} \right)\frac{x}{H}} \right) }={ \pi \left[ {ab + b\left( {A - a} \right)\frac{x}{H} }\right.}+{\left.{ a\left( {B - b} \right)\frac{x}{H} }\right.}+{\left.{ \left( {A - a} \right)\left( {B - b} \right){{\left( {\frac{x}{H}} \right)}^2}} \right] }={ \pi \left[ {ab + \left( {bA + aB - 2ab} \right)\frac{x}{H} }\right.}+{\left.{ \left( {AB - aB - bA + ab} \right){{\left( {\frac{x}{H}} \right)}^2}} \right].}\] Then the volume is given by \[\require{cancel}{V = \int\limits_0^H {A\left( x \right)dx} }={ \pi \left[ {abH + \left( {bA + aB - 2ab} \right)\frac{H}{2} }\right.}+{\left.{ \left( {AB - aB - bA + ab} \right)\frac{H}{3}} \right] }={ \pi \left[ {\cancel{abH} + \frac{{bAH}}{2} }\right.}+{\left.{ \frac{{aBH}}{2} - \cancel{abH} }\right.}+{\left.{ \frac{{ABH}}{3} - \frac{{aBH}}{3} }\right.}-{\left.{ \frac{{bAH}}{3} + \frac{{abH}}{3}} \right] }={ \pi \left[ {\frac{{bAH}}{6} + \frac{{aBH}}{6} }\right.}+{\left.{ \frac{{ABH}}{3} + \frac{{abH}}{3}} \right] }={ \frac{{\pi H}}{6}\left[ {bA + aB + 2AB + 2ab} \right] }={ \frac{{\pi H}}{6}\left[ {\left( {2A + a} \right)B + \left( {A + 2a} \right)b} \right].}\]

Example 7. Calculate the volume of a wedge given the bottom sides \(a, b,\) the top side \(c,\) and the altitude \(H.\)

Solution.
Consider an arbitrary cross section at a height \(x.\) This cross section is a rectangle with the sides \(m\) and \(n.\) It is easy to see that \[{m = c + \left( {a - c} \right)\frac{x}{h},\;\;}\kern0pt{n = \frac{{bx}}{h}.}\] Then the cross-sectional area \(A\left( x \right)\) is written as \[{A\left( x \right) = mn }={ \left( {c + \left( {a - c} \right)\frac{x}{h}} \right)\frac{{bx}}{h} }={ \frac{{bcx}}{h} + \frac{{ab{x^2}}}{{{h^2}}} - \frac{{bc{x^2}}}{{{h^2}}} }={ \frac{{bcx}}{h} + \frac{{b\left( {a - c} \right){x^2}}}{{{h^2}}}.}\] We find the volume of the wedge by integration: \[{V = \int\limits_0^h {A\left( x \right)dx} }={ \int\limits_0^h {\left( {\frac{{bcx}}{h} + \frac{{b\left( {a - c} \right){x^2}}}{{{h^2}}}} \right)dx} }={ \left. {\frac{{bc{x^2}}}{{2h}} + \frac{{b\left( {a - c} \right){x^3}}}{{3{h^2}}}} \right|_0^h }={ \frac{{bch}}{2} + \frac{{b\left( {a - c} \right)h}}{3} }={ \frac{{bch}}{2} + \frac{{abh}}{3} - \frac{{bch}}{3} }={ \frac{{bch}}{6} + \frac{{abh}}{3} }={ \frac{{bh}}{6}\left( {2a + c} \right).}\]

Example 8. Find the volume of a regular tetrahedron with the edge \(a.\)

Solution. The base of the tetrahedron is an equilateral triangle. Calculate the altitude of the base, \(CE.\) By the Pythagorean theorem, \[{{{CE}^2} = {{BC}^2} - {{EB}^2},}\;\; \Rightarrow {CE = \sqrt {{BC^2} - {EB^2}} }={ \sqrt {{a^2} - {{\left( {\frac{a}{2}} \right)}^2}} }={ \frac{{\sqrt 3 a}}{2}.}\] Hence \[CO = \frac{2}{3}CE = \frac{{\sqrt 3 a}}{3}.\] We express the altitude of the tetrahedron \(h\) in terms of \(a:\) \[{h = DO }={ \sqrt {{DC^2} - {CO^2}} }={ \sqrt {{a^2} - {{\left( {\frac{{\sqrt 3 a}}{3}} \right)}^2}} }={ \sqrt {\frac{{2{a^2}}}{3}} }={ \frac{{\sqrt 2 a}}{{\sqrt 3 }}.}\] The cross-sectional area \(A\left( x \right)\) is written in the form \[{A\left( x \right) = \frac{1}{2}{m^2}\sin 60^{\circ} }={ \frac{{\sqrt 3 {m^2}}}{4},}\] where \(m\) is the side of the equilateral triangle in the cross section.
It follows from the similarity that \[\require{cancel}{m = \frac{{ax}}{h} }={ \frac{{\sqrt 3 \cancel{a}x}}{{\sqrt 2 \cancel{a}}} }={ \frac{{\sqrt 3 x}}{{\sqrt 2 }}.}\] Using integration, we find the volume: \[{V = \int\limits_0^h {A\left( x \right)dx} }={ \int\limits_0^h {\frac{{\sqrt 3 {m^2}}}{4}dx} }={ \int\limits_0^h {\frac{{\sqrt 3 }}{4}{{\left( {\frac{{\sqrt 3 x}}{{\sqrt 2 }}} \right)}^2}dx} }={ \int\limits_0^h {\frac{{3\sqrt 3 {x^2}}}{8}dx} }={ \frac{{3\sqrt 3 }}{8}\int\limits_0^h {{x^2}dx} }={ \frac{{3\sqrt 3 }}{8} \cdot \left. {\frac{{{x^3}}}{3}} \right|_0^h }={ \frac{{\sqrt 3 {h^3}}}{8} }={ \frac{{\sqrt 3 }}{8} \cdot {\left( {\frac{{\sqrt 2 a}}{{\sqrt 3 }}} \right)^3} }={ \frac{{2\cancel{\sqrt 3} \sqrt 2 {a^3}}}{{24\cancel{\sqrt 3} }} }={ \frac{{\sqrt 2 {a^3}}}{{12}}.}\]

Example 9. A wedge is cut out of a circular cylinder with radius \(R\) and height \(H\) by the plane passing through a diameter of the base (Figure \(10\)). Find the volume of the cylindrical wedge.

Solution. A cross section of the wedge perpendicular to the \(x-\)axis is a right triangle \(ABC.\) The leg of the triangle \(AB\) is given by \[AB = y = \sqrt {{R^2} - {x^2}} ,\] and the other leg \(BC\) is expressed in the form \[{BC = AB \cdot \tan \alpha }={ AB \cdot \frac{H}{R} }={ \frac{H}{R}\sqrt {{R^2} - {x^2}} .}\] Hence, the area of the cross section is written as \[{A\left( x \right) = \frac{{AB \cdot BC}}{2} }={ \frac{H}{{2R}}{\left( {\sqrt {{R^2} - {x^2}} } \right)^2} }={ \frac{H}{{2R}}\left( {{R^2} - {x^2}} \right).}\] Integrating yields \[{V = 2\int\limits_0^R {A\left( x \right)dx} }={ 2\int\limits_0^R {\frac{H}{{2R}}\left( {{R^2} - {x^2}} \right)dx} }={ \frac{H}{R}\int\limits_0^R {\left( {{R^2} - {x^2}} \right)dx} }={ \frac{H}{R}\left.
{\left( {{R^2}x - \frac{{{x^3}}}{3}} \right)} \right|_0^R }={ \frac{H}{R} \cdot \frac{{2{R^3}}}{3} }={ \frac{{2{R^2}H}}{3}},\] so the volume of the wedge is \[\frac{{\frac{{2{R^2}H}}{3}}}{{\pi {R^2}H}} = \frac{2}{{3\pi }} \approx 0.212\] of the total volume of the cylinder. This ratio does not depend on \(R\) and \(H!\)

Example 10. The axes of two circular cylinders with the same radius \(R\) intersect at right angles. Find the volume of the solid common to both these cylinders.

Solution. The figure below shows \(\large{\frac{1}{8}}\normalsize\) of the solid of intersection. Consider a cross section \(ABCD\) perpendicular to the \(x-\)axis at an arbitrary point \(x\). Due to symmetry, the cross section is a square with sides of length \[BC = AD = y = \sqrt {{R^2} - {x^2}} ,\] \[AB = CD = z = \sqrt {{R^2} - {x^2}} .\] The cross-sectional area is expressed in terms of \(x\) as follows: \[A\left( x \right) = {\left( {\sqrt {{R^2} - {x^2}} } \right)^2} = {R^2} - {x^2}.\] Then the volume of the solid common to both the cylinders (bicylinder) is given by \[{V = 8\int\limits_0^R {A\left( x \right)dx} }={ 8\int\limits_0^R {\left( {{R^2} - {x^2}} \right)dx} }={ 8\left. {\left( {{R^2}x - \frac{{{x^3}}}{3}} \right)} \right|_0^R }={ 8 \cdot \frac{{2{R^3}}}{3} }={ \frac{{16{R^3}}}{3}.}\]
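The worked answers above are easy to spot-check numerically. Here is a small midpoint-rule sketch of the cross-section formula (my own code, not from the text), applied to Examples 5 and 10:

```python
def volume_by_cross_sections(area, a, b, n=200000):
    """Approximate V = integral of A(x) dx from a to b (midpoint rule)."""
    h = (b - a) / n
    return sum(area(a + (i + 0.5) * h) for i in range(n)) * h

# Example 5: base x^2 + y^2 = 1, square cross sections, A(x) = 4(1 - x^2)
v5 = volume_by_cross_sections(lambda x: 4.0 * (1.0 - x * x), -1.0, 1.0)
print(v5)  # close to 16/3

# Example 10: bicylinder of radius R, V = 8 * integral_0^R (R^2 - x^2) dx
R = 1.0
v10 = 8.0 * volume_by_cross_sections(lambda x: R * R - x * x, 0.0, R)
print(v10)  # close to 16 R^3 / 3
```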
Wave energy converters in coastal structures (version of 3 Sep 2012, 10:12)

Introduction

Fig 1: Construction of a coastal structure.

Coastal works along European coasts are composed of very diverse structures. Many coastal structures are ageing and facing problems of stability, sustainability and erosion. Moreover, climate change, and especially sea level rise, represents a new danger for them. Coastal dykes in Europe will indeed be exposed to waves with heights greater than the dykes were designed to withstand, in particular all the structures built in shallow water, where the depth imposes the maximal amplitude because of wave breaking. This necessary adaptation will be costly but will provide an opportunity to integrate converters of sustainable energy into the new maritime structures along the coasts, and in particular in harbours. This initiative will contribute to the reduction of the greenhouse effect. The produced energy can be used directly for energy consumption in the harbour area and will reduce the carbon footprint of harbours by feeding docked ships with green energy. Nowadays these ships run their motors to produce electric power on board even when they are docked. Integration of wave energy converters (WEC) in coastal structures will favour the emergence of the new concept of future harbours with zero emissions.
Wave energy and wave energy flux

For regular water waves, the time-mean wave energy density E per unit horizontal area of the water surface (J/m²) is the sum of the kinetic and potential energy densities per unit horizontal area. The potential energy density is equal to the kinetic energy density [1], each contributing half of the time-mean wave energy density E, which is proportional to the wave height squared according to linear wave theory [1]: (1) [math]E= \frac{1}{8} \rho g H^2[/math] where g is the gravitational acceleration and [math]H[/math] the wave height of the regular water waves. As the waves propagate, their energy is transported. The energy transport velocity is the group velocity. As a result, the time-mean wave energy flux per unit crest length (W/m), perpendicular to the wave propagation direction, is equal to [1]: (2) [math] P= Ec_{g}[/math] with [math]c_{g}[/math] the group velocity (m/s). Due to the dispersion relation for water waves under the action of gravity, the group velocity depends on the wavelength λ (m), or equivalently on the wave period T (s). Further, the dispersion relation is a function of the water depth h (m). As a result, the group velocity behaves differently in the limits of deep and shallow water and at intermediate depths [math](\frac{\lambda}{20} \lt h \lt \frac{\lambda}{2})[/math].

Application for wave energy converters

For regular waves in deep water: [math]c_{g} = \frac{gT}{4\pi} [/math] and [math]P_{w1} = \frac{\rho g^2}{32 \pi} H^2 T[/math] The time-mean wave energy flux per unit crest length is used as one of the main criteria to choose a site for wave energy converters. For irregular waves in deep water: [math]P_{w1} = \frac{\rho g^2}{64 \pi} H_{m0}^2 T_e[/math] If local data ([math]H_{m0}[/math], [math]T_e[/math]) are available for a sea state, through in-situ wave buoys for example, satellite data or numerical modelling, this last equation for the wave energy flux [math]P_{w1}[/math] gives a first estimate.
Averaged over a season or a year, it represents the maximal energetic resource that can theoretically be extracted from wave energy. If the directional spectrum of sea-state variance F(f,[math]\theta[/math]) is known, with f the wave frequency (Hz) and [math]\theta[/math] the wave direction (rad), a more accurate formulation is used: [math]P_{w2} = \rho g\int\int c_{g}(f,h)F(f,\theta)\, df d\theta[/math] Fig 2: Time-mean wave energy flux along West European coasts [2]. It can easily be shown that equation (4) reduces to (3) under the hypothesis of regular waves in deep water. The directional spectrum is deduced from directional wave buoys, SAR images or advanced spectral wind-wave models, known as third-generation models, such as WAM, WAVEWATCH III, TOMAWAC or SWAN. These models solve the spectral action balance equation without any a priori restrictions on the spectrum for the evolution of wave growth. From the TOMAWAC model, the nearshore wave atlas ANEMOC along the coasts of Europe and France, based on numerical modelling of the wave climate over 25 years, has been produced [3]. Using equation (4), the time-mean wave energy flux along West European coasts is obtained (see Fig. 2). Equation (4) still has some limitations, such as the definition of the bounds of the integration. Moreover, obtaining data on the wave energy near coastal structures in shallow or intermediate water requires numerical models that are able to represent the physical processes of wave propagation, such as refraction, shoaling, dissipation by bottom friction or by wave breaking, interactions with tides and diffraction by islands. The wave energy flux is therefore usually calculated for water depths greater than 20 m.
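As a numerical illustration of the two deep-water expressions quoted above (my own sketch; the sea-water density used is an assumed value):

```python
import math

RHO = 1025.0  # assumed sea-water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def wave_power_regular(H, T):
    """Deep-water time-mean energy flux per unit crest length (W/m),
    regular waves: P_w1 = rho * g^2 * H^2 * T / (32 * pi)."""
    return RHO * G**2 * H**2 * T / (32.0 * math.pi)

def wave_power_irregular(Hm0, Te):
    """Deep-water flux for irregular waves:
    P_w1 = rho * g^2 * Hm0^2 * Te / (64 * pi)."""
    return RHO * G**2 * Hm0**2 * Te / (64.0 * math.pi)

# A moderate sea state, Hm0 = 2 m, Te = 8 s, carries roughly 16 kW/m
print(wave_power_irregular(2.0, 8.0) / 1000.0)
```

Note that for the same height and period the irregular-wave flux is exactly half the regular-wave flux, reflecting the factor 64 versus 32 in the denominators.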
This maximal energetic resource calculated in deep water will be limited in the coastal zone: at low tide by wave breaking; at high tide, in storm events, when the wave height exceeds the maximal operating conditions; and by the screen effect due to the presence of capes, spits, reefs, islands, etc.

Technologies

According to the International Energy Agency (IEA), more than a hundred systems of wave energy conversion are in development in the world. Among them, many can be integrated in coastal structures. Evaluations based on objective criteria are necessary in order to sort these systems and to determine the most promising solutions. Criteria are in particular: the converter efficiency: the aim is to estimate the energy produced by the converter; the efficiency gives an estimate of the number of kWh produced by the machine, but not the cost. The converter survivability: the capacity of the converter to survive in extreme conditions; the survivability gives an estimate of the cost, considering that the smaller the extreme loads are in comparison with the mean load, the lower the cost. Unfortunately, few data are available in the literature. In order to determine the characteristics of the different wave energy technologies, it is necessary first to classify them into four main families [2]. An interesting result is that the maximum average wave power [math]P_{abs}[/math] (W) that a point absorber can absorb from the waves does not depend on its dimensions [4]. It is theoretically possible to absorb a lot of energy with only a small buoy. It can be shown that for a body with a vertical axis of symmetry (but otherwise arbitrary geometry) oscillating in heave, the capture (or absorption) width [math]L_{max}[/math] (m) is as follows [4]: [math]L_{max} = \frac{P_{abs}}{P_{w}} = \frac{\lambda}{2\pi}[/math] or [math]1 = \frac{P_{abs}}{P_{w}} \frac{2\pi}{\lambda}[/math] Fig 4: Upper limit of mean wave power absorption for a heaving point absorber.
where [math]{P_{w}}[/math] is the wave energy flux per unit crest length (W/m). An optimally damped buoy, however, responds efficiently only to a relatively narrow band of wave periods. Babarit and Hals [5] propose to derive that upper limit for the mean annual power in irregular waves at some typical locations where one could be interested in installing wave energy devices. The mean annual power absorption tends to increase linearly with the wave power resource. Overall, one can say that for a typical site whose resource is between 20-30 kW/m, the upper limit of mean wave power absorption is about 1 MW for a heaving WEC, with a capture width between 30-50 m. In order to complete these theoretical results and to describe the efficiency of the WEC in practical situations, the capture width ratio [math]\eta[/math] is also usually introduced. It is defined as the ratio between the absorbed power and the available wave power resource per metre of wave front times a relevant dimension B (m). [math]\eta = \frac{P_{abs}}{P_{w}B} [/math] The choice of the dimension B depends on the working principle of the WEC. Most of the time it should be chosen as the width of the device, but in some cases another dimension is more relevant. Estimates of this ratio [math]\eta[/math] are given in [5]: 33 % for OWC, 13 % for overtopping devices, 9-29 % for heaving buoys, 20-41 % for pitching devices. For energy converted to electricity, one must moreover take into account the energy losses in the other components of the system. Civil engineering Never forget that energy conversion is only a secondary function of the coastal structure. The primary function of the coastal structure is still protection. It is necessary to verify whether the integration of a WEC modifies the performance criteria of overtopping and stability, and to assess the consequences for the construction cost. Integration of a WEC in a coastal structure will always be easier for a new structure than for an existing one.
In the latter case, it requires some knowledge of the existing coastal structures. Solutions differ according to the sea state, but also to the type of structure (rubble-mound breakwater, caisson breakwater with typically vertical sides). Some types of WEC are more appropriate for some types of coastal structures. Fig 5: Several OWC (Oscillating water column) configurations (by Wavegen – Voith Hydro). Environmental impact Wave absorption, if it is significant, will change the hydrodynamics along the structure. If there is a mobile bottom in front of the structure, a sand deposit can occur. Ecosystems can also be altered by the change of hydrodynamics and by the acoustic noise generated by the machines. Fig 6: Finistere area and locations of the six sites (google map). Study case: Finistere area The Finistere area is an interesting study case because it is located in the far west of the Brittany peninsula and consequently receives the largest wave energy flux along the French coasts (see Fig. 2). This area with a very ragged coast moreover gathers many commercial ports, fishing ports and yachting ports. The area produces only a small part of the electricity it consumes and is located far from electricity power plants. There is therefore a need for renewable energies produced locally. This issue is particularly important on islands. The production of electricity by wave energy will have seasonal variations: the wave energy flux is indeed larger in winter than in summer. Consumption peaks in winter due to the heating of buildings, but consumption in summer is also strong due to the arrival of tourists. Six sites are selected (see Fig. 7) for a preliminary study of wave energy flux and of the capacity of integration of wave energy converters. The wave energy flux is expected to be in the range of 1 – 10 kW/m. The length of each breakwater exceeds 200 metres. The wave power along each structure is therefore estimated between 200 kW and 2 MW.
Note that much longer coastal structures exist, for example at Cherbourg (France) with a length of 6 kilometres. (1) Roscoff (300 metres) (2) Molène (200 metres) (3) Le Conquet (200 metres) (4) Esquibien (300 metres) (5) Saint-Guénolé (200 metres) (6) Lesconil (200 metres) Fig.7: Finistere area, the six coastal structures and their length (google map). The wave power flux along the structure depends on local parameters: the bottom depth fronting the structure toe, the presence of capes, the direction of the waves and the orientation of the coastal structure. See figure 8 for the statistics of wave directions measured by a wave buoy located at the Pierres Noires Lighthouse. These measurements show that structures well oriented to West waves should be chosen in priority. Peaks of consumption often occur with low winter temperatures accompanied by winds from East-North-East directions. Structures well oriented to East waves could therefore also be interesting, even if the mean production is weak. Fig 8: Wave measurements at the Pierres Noires Lighthouse. Conclusion Wave energy converters (WEC) in coastal structures can be considered as a land-based renewable energy. The expected energy can be compared with that of land wind farms, but not with offshore wind farms, whose number and power are much larger. As a land-based system, maintenance will be easy. Besides energy production, the advantages of such systems are: a “zero emission” port industrial tourism testing of WECs for future offshore installations. Acknowledgement This work is in progress in the frame of the national project EMACOP funded by the French Ministry of Ecology, Sustainable Development and Energy. See also Waves Wave transformation Groynes Seawall Seawalls and revetments Coastal defense techniques Wave energy converters Shore protection, coast protection and sea defence methods Overtopping resistant dikes References Mei C.C. (1989) The applied dynamics of ocean surface waves.
Advanced series on ocean engineering. World Scientific Publishing Ltd. Mattarolo G., Benoit M., Lafon F. (2009) Wave energy resource off the French coasts: the ANEMOC database applied to the energy yield evaluation of Wave Energy, 10th European Wave and Tidal Energy Conference (EWTEC 2009), Uppsala (Sweden). Benoit M., Lafon F. (2004) A nearshore wave atlas along the coasts of France based on the numerical modeling of wave climate over 25 years, 29th International Conference on Coastal Engineering (ICCE 2004), Lisbon (Portugal), 714-726. De O. Falcão A.F. (2010) Wave energy utilization: A review of the technologies. Renewable and Sustainable Energy Reviews, 14(3), 899-918. Babarit A., Hals J. (2011) On the maximum and actual capture width ratio of wave energy converters, 11th European Wave and Tidal Energy Conference (EWTEC 2011), Southampton (UK).
Angle (argument of a function): \(\alpha\) Trigonometric functions: \(\sin \alpha, \) \(\cos \alpha ,\) \(\tan \alpha, \) \(\cot \alpha,\) \(\sec \alpha,\) \(\csc \alpha \) Coordinates of a point on a circle: \(x\), \(y\) Four quadrants of the unit circle The trigonometric circle is divided into \(4\) quarters (quadrants). The first quadrant corresponds to the angle interval \(0^\circ \lt \alpha \lt 90^\circ,\) the second quadrant lies in the interval \(90^\circ \lt \alpha \lt 180^\circ,\) the third quadrant corresponds to the interval \(180^\circ \lt \alpha \lt 270^\circ,\) and the fourth quadrant covers the angles \(270^\circ \lt \alpha \lt 360^\circ.\) The signs of the trigonometric functions depend on the quadrant in which the angle lies. The table below shows the signs of \(6\) trigonometric functions in quadrants \(I-IV\). Signs of the trigonometric functions in the unit circle
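The sign pattern per quadrant can be verified mechanically; the sketch below checks sin, cos and tan at one sample angle in each quadrant (the helper `quadrant` and the sample angles are illustrative choices):

```python
import math

def quadrant(deg):
    """Quadrant I-IV for an angle strictly inside one quarter turn."""
    return int(deg % 360 // 90) + 1

# Expected signs of (sin, cos, tan) in quadrants I..IV
expected = {1: (+1, +1, +1), 2: (+1, -1, -1), 3: (-1, -1, +1), 4: (-1, +1, -1)}

for deg in (45, 135, 225, 315):          # one sample angle per quadrant
    a = math.radians(deg)
    signs = tuple(int(math.copysign(1, v))
                  for v in (math.sin(a), math.cos(a), math.tan(a)))
    assert signs == expected[quadrant(deg)]
print("sign table verified")
```

The cot, sec and csc columns follow immediately, since each is the reciprocal of tan, cos and sin respectively and reciprocation preserves sign.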
I'm taking a particle physics course and we're using Perkins' Introduction to High Energy Physics as the text. I am looking at problem 1.7. It asks whether $$\pi^0\rightarrow e^- + e^+$$ is allowed or forbidden by the Standard Model based on conservation laws. I feel it doesn't violate any of them. Energy and momentum can clearly be conserved, charge is conserved, lepton number is conserved, etc. However, I know Dalitz decay is a very similar process, but that the pion decays into $$\pi^0 \rightarrow e^- + e^+ + \gamma$$ so I feel as though I must have overlooked something. I would be greatly appreciative if someone could point me in the right direction. Since $\pi^0$ is a pseudoscalar particle, we have $$\langle 0|J^\mu_{em}|\pi^0 \rangle =0,$$ and the pion cannot decay into two leptons with a simple photon exchange. In the Standard Model, the leading-order contributions for this process are a box diagram and a $Z^0$ exchange, as you can see in fig. 1 of arXiv:0806.4782 (replacing a $c$ quark by a light quark). Therefore, this process is allowed in the SM, but highly suppressed. On the other hand, if you have a photon in the final state, you can have an electromagnetic decay at tree level, as shown in the third diagram of the same figure. The difference between these two processes can easily be seen from the experimental measurements, because the branching ratio is of order $10^{-8}$ for the first decay, while it is of order $10^{-2}$ for the Dalitz decay (c.f. PDG).
I am attempting to prove a set of results for products of gamma matrices and traces of products of gamma matrices, but got stuck on this particular one. $$Tr(\gamma^{\mu_1}...\gamma^{\mu_n})=g^{\mu_1\mu_2}Tr(\gamma^{\mu_3}...\gamma^{\mu_n})-g^{\mu_1\mu_3}Tr(\gamma^{\mu_2}\gamma^{\mu_4}...\gamma^{\mu_n})+...+g^{\mu_1\mu_n}Tr(\gamma^{\mu_2}...\gamma^{\mu_{n-1}}).$$ My previous strategy to get the metric into expressions has been to exploit the anti-commutation relation, writing $\gamma^{\mu}\gamma^{\nu}$ as $$\gamma^{\mu}\gamma^{\nu}=\gamma^{\mu}\gamma^{\nu}+\gamma^{\nu}\gamma^{\mu}-\gamma^{\nu}\gamma^{\mu}=\{\gamma^{\mu},\gamma^{\nu}\}-\gamma^{\nu}\gamma^{\mu}=2g^{\mu\nu}-\gamma^{\nu}\gamma^{\mu}.$$ I then feel that I will have to use the cyclic property of the trace to get the desired expression. I have tried the case of three gamma matrices to see if it can be extended, but I am not sure how to do this. For example, if I have three gamma matrices in the trace we have $Tr(\gamma^{\mu_1}\gamma^{\mu_2}\gamma^{\mu_3})=Tr((\{\gamma^{\mu_1},\gamma^{\mu_2}\}-\gamma^{\mu_2}\gamma^{\mu_1})\gamma^{\mu_3})=Tr(2g^{\mu_{1}\mu_{2}}\gamma^{\mu_3}-\gamma^{\mu_2}\gamma^{\mu_1}\gamma^{\mu_3})$. From linearity of the trace, I can write this as two traces. The second one is 0 because it's the trace of a product of an odd number of gamma matrices, leaving $Tr(2g^{\mu_{1}\mu_{2}}\gamma^{\mu_3})$. The metric is symmetric so we can re-write: $Tr((g^{\mu_{1}\mu_{2}}+g^{\mu_{2}\mu_{1}})\gamma^{\mu_3})=Tr(g^{\mu_{1}\mu_{2}}\gamma^{\mu_3})+Tr(g^{\mu_{2}\mu_{1}}\gamma^{\mu_3})$. This looks partly right, but I am not sure how to get the metrics out of the trace, and some of it is not in the right order anyway (also, for even $n$ we can't use the trick where part of the trace went to 0).
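Not an analytic proof, but the $n=4$ case of the recursion can be checked numerically with explicit gamma matrices. The sketch below assumes the Dirac representation and metric signature $(+,-,-,-)$, and verifies the identity for every index combination:

```python
import itertools
import numpy as np

I2 = np.eye(2)
Z = np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def block(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = ((0, s_i), (-s_i, 0))
gamma = [block(I2, Z, Z, -I2)] + [block(Z, s, -s, Z) for s in (sx, sy, sz)]
g = np.diag([1.0, -1.0, -1.0, -1.0])        # metric g^{mu nu}

def tr(*mus):
    """Trace of the product gamma^{mu_1} ... gamma^{mu_k}."""
    m = np.eye(4, dtype=complex)
    for mu in mus:
        m = m @ gamma[mu]
    return np.trace(m)

# Check Tr(g^{m1} g^{m2} g^{m3} g^{m4})
#     = g^{m1 m2} Tr(g^{m3} g^{m4}) - g^{m1 m3} Tr(g^{m2} g^{m4}) + g^{m1 m4} Tr(g^{m2} g^{m3})
for m1, m2, m3, m4 in itertools.product(range(4), repeat=4):
    lhs = tr(m1, m2, m3, m4)
    rhs = (g[m1, m2] * tr(m3, m4)
           - g[m1, m3] * tr(m2, m4)
           + g[m1, m4] * tr(m2, m3))
    assert abs(lhs - rhs) < 1e-12
print("recursion verified for n = 4")
```

Since traces are representation-independent, passing in one representation is strong evidence for the algebraic identity, although the general proof still goes by anticommuting $\gamma^{\mu_1}$ through the product and using cyclicity.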
Polynomial Long Division By Catalin David Introduction The general form of a monomial is f(x) = ax^n, where:
- a is the coefficient and can be part of the sets N, Z, Q, R, C
- x is the variable
- n is the degree and is part of N
Two monomials are like terms if they have the same variable and the same degree. Example: 3x^2 and -5x^2; ½x^4 and 2√3x^4. A sum of unlike monomials is called a polynomial. In this case, the monomials are the terms of the polynomial. A polynomial formed of two terms is called a binomial. Example: p(x) = 3x^2 - 5; h(x) = 5x - 1. A polynomial formed of three terms is called a trinomial. The general form of a polynomial with only one variable is p(x) = a_n·x^n + a_(n-1)·x^(n-1) + ... + a_1·x + a_0, where:
- a_n, a_(n-1), ..., a_1, a_0 are the coefficients of the polynomial. They can be natural numbers, integers, rational numbers, real numbers or complex numbers.
- a_n is the coefficient of the term of the largest degree (the leading coefficient)
- a_0 is the coefficient of the term of the smallest degree (the constant term)
- n is the degree of the polynomial
Example 1: p(x) = 5x^3 - 2x^2 + 7x - 1 is a third-degree polynomial with coefficients 5, -2, 7 and -1; 5 is the leading coefficient, -1 is the constant term, x is the variable.
Example 2: h(x) = -2√3x^4 + ½x - 4 is a fourth-degree polynomial with coefficients -2√3, ½ and -4; -2√3 is the leading coefficient, -4 is the constant term, x is the variable.
Dividing Polynomials p(x) and q(x) are two polynomials: p(x) = a_n·x^n + a_(n-1)·x^(n-1) + ... + a_1·x + a_0 and q(x) = a_p·x^p + a_(p-1)·x^(p-1) + ... + a_1·x + a_0. To find the quotient and the remainder of the division of p(x) by q(x), we use the following algorithm:
- The degree of p(x) has to be greater than or equal to the degree of q(x).
- We write the terms of each polynomial in descending order of degree. If p(x) has a missing term, it is written with the coefficient 0.
- The leading term of p(x) is divided by the leading term of q(x) and the result is written under the line of the divisor (the denominator).
- We multiply the result by all the terms of q(x) and write the products, with changed sign, under the terms of p(x) of the corresponding degree.
- We add up the terms of the same degree and, next to the result, we write down the remaining terms of p(x).
- We divide the leading term of the new polynomial by the leading term of q(x) and repeat the previous steps.
- We repeat until the new polynomial is of smaller degree than q(x). This last polynomial is the remainder of the division, and the polynomial written under the line of the divisor is the quotient.
Example 1
p(x) = x^5 - 3x^4 + 2x^3 + 7x^2 - 3x + 5, q(x) = x^2 - x + 1
The successive steps of the division are:
x^5 ÷ x^2 = x^3; subtracting x^3·(x^2 - x + 1) leaves -2x^4 + x^3 + 7x^2 - 3x + 5
-2x^4 ÷ x^2 = -2x^2; subtracting -2x^2·(x^2 - x + 1) leaves -x^3 + 9x^2 - 3x + 5
-x^3 ÷ x^2 = -x; subtracting -x·(x^2 - x + 1) leaves 8x^2 - 2x + 5
8x^2 ÷ x^2 = 8; subtracting 8·(x^2 - x + 1) leaves 6x - 3, which has degree 1 < 2, so we stop.
Answer: x^5 - 3x^4 + 2x^3 + 7x^2 - 3x + 5 = (x^2 - x + 1)(x^3 - 2x^2 - x + 8) + 6x - 3
Example 2
p(x) = x^4 + 3x^2 + 2x - 8, q(x) = x^2 - 3x
Writing the missing term with coefficient 0: p(x) = x^4 + 0x^3 + 3x^2 + 2x - 8
x^4 ÷ x^2 = x^2; subtracting x^2·(x^2 - 3x) leaves 3x^3 + 3x^2 + 2x - 8
3x^3 ÷ x^2 = 3x; subtracting 3x·(x^2 - 3x) leaves 12x^2 + 2x - 8
12x^2 ÷ x^2 = 12; subtracting 12·(x^2 - 3x) leaves 38x - 8, which has degree 1 < 2, so we stop: r(x) = 38x - 8
Answer: x^4 + 3x^2 + 2x - 8 = (x^2 - 3x)(x^2 + 3x + 12) + 38x - 8
Dividing by a first-degree polynomial
It can be done by using the algorithm mentioned before, or in a faster way by using Horner's rule. If f(x) = a_n·x^n + a_(n-1)·x^(n-1) + ... + a_1·x + a_0, the polynomial can be written as f(x) = a_0 + x(a_1 + x(a_2 + ... + x(a_(n-1) + a_n·x)...)). If q(x) is of the first degree, q(x) = mx + n, the quotient polynomial will be of degree n - 1. Following Horner's rule, with $x_0=-\frac{n}{m}$:
b_(n-1) = a_n
b_(n-2) = x_0·b_(n-1) + a_(n-1)
b_(n-3) = x_0·b_(n-2) + a_(n-2)
...
b_1 = x_0·b_2 + a_2
b_0 = x_0·b_1 + a_1
r = x_0·b_0 + a_0
where the b_i are the coefficients of the quotient. The remainder will be a polynomial of degree zero, because the degree of the remainder must be smaller than the degree of the divisor.
The quotient is c(x) = b_(n-1)·x^(n-1) + b_(n-2)·x^(n-2) + ... + b_1·x + b_0. Euclidean division gives p(x) = (x - x_0)·c(x) + r. For a divisor q(x) = mx + n with $x_0=-\frac{n}{m}$ this reads p(x) = (mx + n)·(c(x)/m) + r, so when m ≠ 1 the quotient of the division by mx + n is c(x)/m. In particular, we can observe that p(x_0) = 0·c(x_0) + r, hence p(x_0) = r.
Example 3
p(x) = 5x^4 - 2x^3 + 4x^2 - 6x - 7, q(x) = x - 3
p(x) = -7 + x(-6 + x(4 + x(-2 + 5x))), x_0 = 3
b_3 = 5
b_2 = 3·5 - 2 = 13
b_1 = 3·13 + 4 = 43
b_0 = 3·43 - 6 = 123
r = 3·123 - 7 = 362
⇒ c(x) = 5x^3 + 13x^2 + 43x + 123; r = 362
5x^4 - 2x^3 + 4x^2 - 6x - 7 = (x - 3)(5x^3 + 13x^2 + 43x + 123) + 362
Example 4
p(x) = -2x^5 + 3x^4 + x^2 - 4x + 1, q(x) = x + 2
p(x) = -2x^5 + 3x^4 + 0x^3 + x^2 - 4x + 1, x_0 = -2
p(x) = 1 + x(-4 + x(1 + x(0 + x(3 - 2x))))
b_4 = -2
b_3 = (-2)·(-2) + 3 = 7
b_2 = (-2)·7 + 0 = -14
b_1 = (-2)·(-14) + 1 = 29
b_0 = (-2)·29 - 4 = -62
r = (-2)·(-62) + 1 = 125
⇒ c(x) = -2x^4 + 7x^3 - 14x^2 + 29x - 62; r = 125
-2x^5 + 3x^4 + x^2 - 4x + 1 = (x + 2)(-2x^4 + 7x^3 - 14x^2 + 29x - 62) + 125
Example 5
p(x) = 3x^3 - 5x^2 + 2x + 3, q(x) = 2x - 1, $x_0=\frac{1}{2}$
p(x) = 3 + x(2 + x(-5 + 3x))
b_2 = 3
$b_1=\frac{1}{2}\cdot 3-5=-\frac{7}{2}$
$b_0=\frac{1}{2}\cdot \left(-\frac{7}{2}\right)+2=-\frac{7}{4}+2=\frac{1}{4}$
$r=\frac{1}{2}\cdot \frac{1}{4}+3=\frac{1}{8}+3=\frac{25}{8} \Rightarrow c(x)=3x^2-\frac{7}{2}x+\frac{1}{4}$
Since the divisor is 2x - 1 = 2(x - ½), the quotient is c(x)/2:
$3x^3-5x^2+2x+3=(2x-1)\left(\frac{3}{2}x^2-\frac{7}{4}x+\frac{1}{8}\right)+\frac{25}{8}$
Conclusion
If we divide by a polynomial of degree higher than one, we use the long-division algorithm to find the quotient and the remainder. If we divide by a first-degree polynomial mx + n, we use Horner's rule with $x_0=-\frac{n}{m}$. If we only have to find the remainder of a division by a first-degree polynomial, we simply compute p(x_0).
Example 6
p(x) = -4x^4 + 3x^3 + 5x^2 - x + 2, q(x) = x - 1, x_0 = 1
r = p(1) = -4·1 + 3·1 + 5·1 - 1 + 2 = 5
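The worked examples can be cross-checked in a few lines of Python; `numpy.polydiv` performs the long division, and Horner's rule for a monic divisor x - x0 is a short loop (a sketch, not part of the original lesson):

```python
import numpy as np

# Example 1: x^5 - 3x^4 + 2x^3 + 7x^2 - 3x + 5 divided by x^2 - x + 1
p = [1, -3, 2, 7, -3, 5]        # coefficients, highest degree first
q = [1, -1, 1]
c, r = np.polydiv(p, q)
print(c)   # quotient coefficients:  x^3 - 2x^2 - x + 8
print(r)   # remainder coefficients: 6x - 3

def horner(coeffs, x0):
    """Quotient coefficients and remainder of division by (x - x0)."""
    b = [coeffs[0]]
    for a in coeffs[1:]:
        b.append(x0 * b[-1] + a)
    return b[:-1], b[-1]

# Example 3: 5x^4 - 2x^3 + 4x^2 - 6x - 7 divided by x - 3
c3, r3 = horner([5, -2, 4, -6, -7], 3)
print(c3, r3)   # [5, 13, 43, 123] 362
```

For a non-monic divisor mx + n (Example 5), divide the `horner` quotient coefficients by m, as noted above; the remainder is unchanged.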
Ray Optics and Optical Instruments Optical Instruments Visual angle is the angle subtended by an object at the eye. Myopia means short-sightedness: distant objects are not clearly visible. Hypermetropia means far-sightedness: near objects are not clearly visible. A convex lens is called a simple microscope.
Magnification of a simple microscope when the final image is formed at the least distance of distinct vision: \tt m = \left(1 + \frac{D}{f}\right)
Magnification when the final image is at infinity: \tt m = \left(\frac{D}{f}\right)
Magnification of a compound microscope: \tt m = \frac{v_o}{u_o} \left(\frac{D}{u_e}\right)
Magnification of a compound microscope when the final image is at D: \tt m = - \frac{v_o}{u_o} \left( 1 + \frac{D}{f_e}\right)
Length of the compound microscope: L_D = v_o + u_e
Magnification of a compound microscope when the final image is formed at infinity: \tt m = \frac{v_o}{u_o} \cdot \left(\frac{D}{f_e}\right)
Length of the compound microscope: L_∞ = v_o + f_e
Magnification of an astronomical telescope: \tt M = - \frac{f_o}{u_e}
Magnification at D: \tt M_D = - \frac{f_o}{f_e} \left(1 + \frac{f_e}{D}\right)
Length of the astronomical telescope: L_D = f_o + u_e
Magnification at ∞: \tt M_{\infty} = - \frac{f_o}{f_e}
Length of the astronomical telescope: \tt L_{\infty} = f_o + f_e
Terrestrial telescope magnification: \tt m = \frac{f_o}{u_e}
Magnification at D: \tt M_{D} = \frac{f_o}{f_e} \left(1 + \frac{f_e}{D}\right)
Length: L_D = f_o + 4f + u_e
Magnification at ∞: \tt M_{\infty} = \frac{f_o}{f_e}
Length: L_∞ = f_o + 4f + f_e
The telescope in which the objective is a curved mirror is called a reflecting telescope.
Viewing objects: Eyes as an optical instrument Microscopes and Telescopes Disclaimer: Compete.etutor.co may from time to time provide links to third party Internet sites under their respective fair use policy and it may from time to time provide materials from such third parties on this website. These third party sites and any third party materials are provided for viewers convenience and for non-commercial educational purpose only.
Compete does not operate or control in any respect any information, products or services available on these third party sites. Compete.etutor.co makes no representations whatsoever concerning the content of these sites and the fact that compete.etutor.co has provided a link to such sites is NOT an endorsement, authorization, sponsorship, or affiliation by compete.etutor.co with respect to such sites, its services, the products displayed, its owners, or its providers.
1. It is an optical instrument used to see very small objects. Its magnifying power is given by \tt m = \frac{Visual \ angle \ with \ instrument \ (\beta)}{Visual \ angle \ when \ object \ is \ placed \ at \ least \ distance \ of \ distinct \ vision \ (\alpha)}
2. Magnification when the final image is formed at D and at ∞ (i.e., m_D and m_∞): m_{D} = \left[1 + \frac{D}{f}\right]_{max} \ {\tt and} \ m_{\infty} = \left[\frac{D}{f}\right]_{min}
3. If the lens is kept at a distance a from the eye, then m_{D} = 1 + \frac{D - a}{f} \ {\tt and} \ m_{\infty} = \frac{D - a}{f}
4. Final image is formed at D: magnification m_{D} = -\frac{v_{0}}{u_{0}}\left[1 + \frac{D}{f_{e}}\right] and the length of the microscope tube (distance between the two lenses) is L_{D} = v_{0} + u_{e}
5. Telescope (refracting type) magnification: m_{D} = -\frac{f_{0}}{f_{e}}\left[1 + \frac{f_{e}}{D}\right] \ {\tt and} \ m_{\infty} = -\frac{f_{0}}{f_{e}}
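A quick numeric sketch of the simple-microscope and refracting-telescope formulas above; the focal lengths (f = 5 cm, f_o = 100 cm, f_e = 5 cm) are illustrative assumptions, and only magnitudes are reported:

```python
D = 25.0   # least distance of distinct vision (cm)

def simple_microscope(f, at_infinity=False):
    """Angular magnification of a simple microscope, focal length f (cm)."""
    return D / f if at_infinity else 1 + D / f

def telescope(fo, fe, at_D=False):
    """Magnitude of the astronomical-telescope magnification."""
    return (fo / fe) * (1 + fe / D) if at_D else fo / fe

print(simple_microscope(5.0))            # image at D:       1 + 25/5 = 6.0
print(simple_microscope(5.0, True))      # image at infinity:    25/5 = 5.0
print(telescope(100.0, 5.0))             # normal adjustment: 100/5  = 20.0
print(telescope(100.0, 5.0, at_D=True))  # image at D:    20 * (1 + 5/25) = 24.0
```

Note how moving the final image from infinity to D increases the magnification by the factor (1 + f_e/D), consistent with the max/min subscripts in item 2.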
This question already has an answer here: So I was just revising some basic DSP concepts and wanted to verify this fact. A Fourier series represents a periodic signal $\hat{x}(t)$ with period P as a countably infinite sum of sinusoids of frequency $0$, $\frac{1}{P},\frac{2}{P},\frac{3}{P}...$. This converges to the signal on the interval $-\frac{P}{2} < t < +\frac{P}{2}$, and, if the time-domain signal is periodic, over the whole time domain. The Fourier transform is, loosely, the limit of the Fourier series as P goes to $\infty$. I know that the Fourier transform of $\operatorname{rect}(t)$ is $\operatorname{sinc}(f)$ (ignoring the scaling factors), and that the Fourier series of a $\operatorname{rect}()$ train is given at http://mathworld.wolfram.com/FourierSeriesSquareWave.html (which is also a $\operatorname{sinc}()$ in the frequency domain). I just wanted to confirm the following: if I sample the $\operatorname{sinc}()$ I obtain from the Fourier transform of a $\operatorname{rect}()$, and use those values to reconstruct a Fourier series, will I end up getting a square wave?
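One way to convince yourself numerically (a sketch with an assumed period P = 4 and pulse width 1): sample the sinc at f = k/P, use the samples as Fourier coefficients, and evaluate the partial sum. The result approaches a periodic rectangular wave, with the value at the jump converging to the midpoint:

```python
import numpy as np

P, width = 4.0, 1.0          # period and pulse width of the rect train
K = 2000                     # number of harmonics in the partial sum
k = np.arange(-K, K + 1)

# Fourier coefficients = samples of the rect's Fourier transform at f = k/P,
# scaled by 1/P.  (np.sinc(x) = sin(pi x) / (pi x))
c = (width / P) * np.sinc(k * width / P)

def series(t):
    """Partial Fourier sum evaluated at time t."""
    return np.real(np.sum(c * np.exp(2j * np.pi * k * t / P)))

print(round(series(0.0), 3))   # inside the pulse:  close to 1
print(round(series(2.0), 3))   # between pulses:    close to 0
print(round(series(0.5), 3))   # at the jump:       close to 1/2
```

So yes: sampling the transform of one period at multiples of 1/P and summing the series reproduces the periodic square wave (up to Gibbs oscillations near the edges).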
Thermal Properties of Matter Newton's Law of Cooling From Newton's law of cooling, the rate of loss of heat of a hot body is directly proportional to the difference in temperature between the body and its surroundings, provided the difference is small. \tt \frac{dQ}{dt}\propto \left(\tau -\tau_{0}\right) Newton's law of cooling is applicable if the heat is lost mainly by convection and the temperature of every part of the body is the same. As the body cools, its rate of cooling goes on decreasing. The cooling curve of a hot body is exponential, indicating that the temperature decreases exponentially with time. Newton's law of cooling is a special case of the Stefan–Boltzmann law. 1. The loss of heat by radiation depends upon the nature of the surface of the body and the area of the exposed surface. We can write -\frac{dQ}{dt}=k\left(T_{2}-T_{1}\right) 2. Rate of loss of heat is given by \tt \frac{dQ}{dt}=ms\frac{dT_{2}}{dt} 3.
If a body cools by radiation through a small temperature difference from T_1 to T_2 in a short time t when the surrounding temperature is T_0, then \tt \frac{dT}{dt}=\frac{T_{1}-T_{2}}{t}=k \left[\frac{T_{1}+T_{2}}{2}-T_{0}\right] 4. Wien's displacement law: it states that "as the temperature T of a black body increases, the wavelength λ_m corresponding to maximum emission decreases", such that \tt \lambda _{m} \propto \frac{1}{T}\ or \ \lambda _{m}T=b where b is known as Wien's constant and its value is 2.89 × 10^{−3} mK.
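The approximate relation in item 3 can be checked against the exact exponential cooling curve; the surroundings temperature, initial temperature and cooling constant below are illustrative assumptions:

```python
import math

T0, T1, k = 20.0, 80.0, 0.05     # surroundings (C), initial temp (C), cooling const (1/min)

def T(t):
    """Exact solution of Newton's law: T(t) = T0 + (T1 - T0) e^{-k t}."""
    return T0 + (T1 - T0) * math.exp(-k * t)

# Approximate form: (T1 - T2)/t  ~  k [ (T1 + T2)/2 - T0 ]  for a short interval
t = 2.0
T2 = T(t)
lhs = (T1 - T2) / t
rhs = k * ((T1 + T2) / 2 - T0)
print(lhs, rhs)   # nearly equal when k*t is small
```

For k·t = 0.1 the two sides agree to better than 0.1%, which is why the midpoint formula works well for short cooling intervals.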
@Daniel Fischer already pointed out its relevance to the notion of irrationality measure. This is related to the following question: How fast $n$ gets closer to the zero set $\frac{\pi}{2} + \pi \Bbb{Z}$ of $\cos x = 0$? We do not want a situation where (a subsequence of) $\cos n$ decays so fast that it can even beat the exponential factor $e^{-n}$. In other words, we want that $n$ stays moderately far from the zero set $\frac{\pi}{2} + \pi \Bbb{Z}$. Related to this question is the irrationality measure of $1/\pi$. In particular, if the irrationality measure $\mu$ of $1/\pi$ is finite, then for each $\epsilon > 0$ there exists $c = c(\epsilon) > 0$ such that $$ \forall q \in \Bbb{N}^{+}, p \in \Bbb{Z}, \quad \left| \frac{1}{\pi} - \frac{p}{q} \right| \geq \frac{c}{q^{\mu+\epsilon}}. $$ Now let us plug $q = 2n$ and $p = 2k+1$. Manipulating the inequality a little bit, we find that for some constant $c' = c'(\epsilon) > 0$, $$ |n - (k+\tfrac{1}{2})\pi| \geq c' n^{-(\mu+\epsilon-1)}. $$ This implies that $\cos n$ stays away from $0$ in a predictable way: if $n$ is large, then $$ |\cos n| \geq |\sin(c' n^{-(\mu+\epsilon-1)})| \geq \frac{2c'}{\pi} n^{-(\mu+\epsilon-1)}. $$ So it follows that $$ |e^{-n}\tan n| \leq C n^{\mu+\epsilon-1} e^{-n} \xrightarrow[n\to\infty]{ } 0. $$ Finally, it is proven that $\mu$ is indeed finite. Therefore $e^{-n} \tan n$ converges to $0$.
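A quick numerical illustration of the conclusion (not a proof; floating point cannot probe the irrationality measure, it only shows the early behaviour):

```python
import math

# e^{-n} |tan n| for integer n: the exponential factor dominates, because
# integers cannot approach the poles of tan too fast (pi has finite
# irrationality measure).
vals = [math.exp(-n) * abs(math.tan(n)) for n in range(1, 41)]
print(max(vals[:5]), vals[-1])   # early terms are O(1); the tail is tiny
```

Even at n = 11, where 11 is unusually close to 7π/2 and |tan 11| ≈ 226, the product e⁻¹¹·|tan 11| is only about 4·10⁻³, and the sequence keeps shrinking.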
We are considering here binary relations on a set \(A\). Let’s recall that a binary relation \(R\) on \(A\) is a subset of the cartesian product \(R \subseteq A \times A\). The statement \((x,y) \in R\) is read as \(x\) is \(R\)-related to \(y\) and is also denoted by \(x R y \). Some important properties of a binary relation \(R\) are: reflexive For all \(x \in A\) it holds \(x R x\) irreflexive For all \(x \in A\) it holds not \(x R x\) symmetric For all \(x,y \in A\) it holds that if \(x R y\) then \(y R x\) antisymmetric For all \(x,y \in A\) if \(x R y\) and \(y R x\) then \(x=y\) transitive For all \(x,y,z \in A\) it holds that if \(x R y\) and \(y R z\) then \(x R z\) A relation that is reflexive, symmetric and transitive is called an equivalence relation. Let’s see that being reflexive, symmetric and transitive are independent properties. Symmetric and transitive but not reflexive We provide two examples of such relations. For the first one, we take for \(A\) the set of the real numbers \(\mathbb R\) and the relation \[R = \{(x,y) \in \mathbb R^2 \, | \, xy >0\}.\] \(R\) is symmetric as the multiplication is commutative. \(R\) is also transitive: if \(xy > 0\) and \(yz > 0\), then \(xy^2 z >0\), and as \(y^2 > 0\) we have \(xz > 0\), which means that \(x R z\). However, \(R\) is not reflexive as \(0 R 0\) doesn’t hold. For our second example, we take \(A= \mathbb N\) and \(R=\{(1,1)\}\). It is easy to verify that \(R\) is symmetric and transitive. However, \(R\) is not reflexive as \(n R n\) doesn’t hold for \(n \neq 1\).
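These properties are easy to test mechanically for finite relations. A small sketch checking the second example, \(R=\{(1,1)\}\); the finite carrier set \(A=\{1,2,3\}\) stands in for \(\mathbb N\) just for the check:

```python
def is_reflexive(R, A):
    return all((x, x) in R for x in A)

def is_symmetric(R):
    return all((y, x) in R for (x, y) in R)

def is_transitive(R):
    return all((x, z) in R for (x, y) in R for (p, z) in R if p == y)

A = {1, 2, 3}          # finite stand-in for the carrier set
R = {(1, 1)}           # the second example from the text
print(is_symmetric(R), is_transitive(R), is_reflexive(R, A))
# -> True True False : symmetric and transitive, but not reflexive
```

Note that reflexivity is the only one of the three properties that needs the carrier set \(A\) as an argument: symmetry and transitivity are conditions on the pairs already in \(R\), while reflexivity demands pairs for every element of \(A\).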
I am investigating a somewhat obscure area of number theory. Can I post a proposition and a complete proof and ask people to check it? Say I've proven a theorem or found a solution to a problem. I'm close to being sure that my proof or solution is fine. However, I'm not entirely confident in my jugdement. Maybe I'm new to the topic, or I remember that I've thought correct my incorrect proofs of the same difficulty level many tim... I am trying to learn proof based math on my own and once I construct a proof, I often have a gut feeling that it's not airtight. What is the best way of asking such questions? My findings till now: This question's answer in Meta points out that reading other people's long and formal proofs i... Is this the proper site to ask about the correctness of one's proof? The proofs I would intend on asking questions about are elementary mathematics proofs such as a proof about infinitely many primes, induction equalities and inequalities, etc... Or is there another site for "Proof Review" Simila... I am aware that it is probably better not to have too many meta-tags such as homework, soft-question, big-list or reference-request. Despite of this I'd like to ask other MSE users, whether they would consider tag of "check my proof" questions useful. We have a lot of such questions and they are... Is it okay to ask questions where you give the solution and ask people to review it to see if it is correct? Recently, I have seen a lot of questions that essentially ask, "Is this proof correct?" or "Can you verify my work is correct?" like this one. Often, especially when the asker's work is in fact correct, these questions have a one-word answer. I feel that these questions could be much improved b... I solve some exercises but I'm self-studying and some books do not have answers to the exercises, is it ok to ask if the solution is plausible here? 
I occasionally come across questions (such as this one: Prove that the $\sigma$ - algebras are equal) in which the person asking the question has already answered their question and wants to know whether their approach is right. There are two possibilities: Their approach is wrong. Then you ... I envision a site where people could post their proofs -- either as answers to textbook exercises or revisions of current textbook proofs -- and get feedback on which parts might need improving, and perhaps how this might be done. This sort of site could be useful to a number of people. Namely, ... Over a week ago I asked this question. In it, I proposed a solution and asked for it to be verified, or for an alternative solution to be suggested. Currently, there are no answers but there is one comment stating that my proof is correct (as well as indicating the need for more explanation of a ... I feel bothered by questions where the body begins with a problem statement and then is followed by a giant solution and then succeeded by the question: "Can someone please tell me whether or not this solution is correct?" Here is an example of what I'm talking about: Example. Here are my compla... I've always wanted math competitions on MSE ever since I've joined. These could be either user-held or officially held, whichever seems better. User-held competitions would run as follows. A user starts a competition with a specified level, with original problems that he/she writes. People sign... I was puzzling: "What is the shortest proof of $\exists x \forall y (P(x) \to P(y)) $?" (a variation of the drinkers paradox see Proof of Drinker paradox) given a certain set of inference rules and using natural deduction. I managed to proof it in 23 lines, but I am not sure if this is the shor... 
This isn't the usual issue about questions from contest competitions being posted here for assistance, but rather about the use of Math.SE as a venue for hosting a "contest" using a future bounty as prize. This recent Question asks for participation according to rules (the post lists seven of th...
Take some state variable $X(t)$, which follows the law of motion $$ \dot X(t) = f(t)X(t) $$ where $f(t)$ is a policy function and determines the growth rate of $X(t)$. As a second shock, we have $\psi$, which is iid. The agent defaults whenever $$ g(X(t), \psi) \leq 0$$ Allow the agent to borrow some money that he will have to repay continuously. Let's compute the risk premium. The probability of default at $t+\epsilon$ is $$Prob(g(X(t+\epsilon), \psi) \leq 0) $$ As the lending has to be repaid continuously, the interest rate, given some risk-free interest rate $r^*$ and risk-neutral lenders, is given by $$ r^* = r \cdot \lim_{\epsilon\to 0} \left(1 - Prob(g(X(t+\epsilon), \psi) \leq 0)\right)$$ However, as the law of motion for $X(t)$ is continuous, in the limit this becomes $$ r^* = r \cdot \left(1-Prob(g(X(t), \psi) \leq 0) \right)$$ This would mean that the agent's risk premium is independent of what he is doing: his policy $f(t)$ no longer appears. But since $f(t)$ affects the state $X(t)$, and the latter the default probability, I feel it should. What's my mistake here? References are fine. Most continuous-time finance references I know are much too deep for this rather simple question.
LEARNING OBJECTIVES Correlate two nearby circuits that carry time-varying currents with the emf induced in each circuit Describe examples in which mutual inductance may or may not be desirable Inductance is the property of a device that tells us how effectively it induces an emf in another device. In other words, it is a physical quantity that expresses the effectiveness of a given device. When two circuits carrying time-varying currents are close to one another, the magnetic flux through each circuit varies because of the changing current I in the other circuit. Consequently, an emf is induced in each circuit by the changing current in the other. This type of emf is therefore called a mutually induced emf, and the phenomenon that occurs is known as mutual inductance (M). As an example, let’s consider two tightly wound coils (Figure \(\PageIndex{1}\)). Coils 1 and 2 have \(N_1\) and \(N_2\) turns and carry currents \(I_1\) and \(I_2\) respectively. The flux through a single turn of coil 2 produced by the magnetic field of the current in coil 1 is \(\Phi_{21}\), whereas the flux through a single turn of coil 1 due to the magnetic field of \(I_2\) is \(\Phi_{12}\). Figure \(\PageIndex{1}\): Some of the magnetic field lines produced by the current in coil 1 pass through coil 2. The mutual inductance \(M_{21}\) of coil 2 with respect to coil 1 is the ratio of the flux through the \(N_2\) turns of coil 2 produced by the magnetic field of the current in coil 1, divided by that current, that is, \[M_{21} = \dfrac{N_2\Phi_{21}}{I_1}. \label{12.24}\] Similarly, the mutual inductance of coil 1 with respect to coil 2 is \[M_{12} = \dfrac{N_1\Phi_{12}}{I_2}. \label{12.25}\] Like capacitance, mutual inductance is a geometric quantity. It depends on the shapes and relative positions of the two coils, and it is independent of the currents in the coils.
The SI unit for mutual inductance M is called the henry (H) in honor of Joseph Henry (1799–1878), an American scientist who discovered induced emf independently of Faraday. Thus, we have \(1 \, H = 1 \, V \cdot s/A\). From Equations \ref{12.24} and \ref{12.25}, we can show that \(M_{21} = M_{12}\), so we usually drop the subscripts associated with mutual inductance and write \[M = \dfrac{N_2\Phi_{21}}{I_1} = \dfrac{N_1 \Phi_{12}}{I_2}.\label{14.3}\] The emf developed in either coil is found by combining Faraday’s law and the definition of mutual inductance. Since \(N_2\Phi_{21}\) is the total flux through coil 2 due to \(I_1\), we obtain \[\begin{align} \epsilon_2 &= - \dfrac{d}{dt} (N_2 \Phi_{21}) \\[5pt] &= - \dfrac{d}{dt} (MI_1) \\[5pt] & = - M\dfrac{dI_1}{dt} \label{14.4} \end{align} \] where we have used the fact that \(M\) is a time-independent constant because the geometry is time-independent. Similarly, we have \[\epsilon_1 = - M\dfrac{dI_2}{dt}. \label{14.5}\] In Equation \ref{14.5}, we can see the significance of the earlier description of mutual inductance (\(M\)) as a geometric quantity. The value of \(M\) neatly encapsulates the physical properties of circuit elements and allows us to separate the physical layout of the circuit from the dynamic quantities, such as the emf and the current. Equation \ref{14.5} defines the mutual inductance in terms of properties in the circuit, whereas the previous definition of mutual inductance in Equation \ref{12.24} is defined in terms of the magnetic flux experienced, regardless of circuit elements. You should be careful when using Equations \ref{14.4} and \ref{14.5} because \(\epsilon_1\) and \(\epsilon_2\) do not necessarily represent the total emfs in the respective coils. Each coil can also have an emf induced in it because of its self-inductance (self-inductance will be discussed in more detail in a later section). A large mutual inductance M may or may not be desirable.
We want a transformer to have a large mutual inductance. But an appliance, such as an electric clothes dryer, can induce a dangerous emf on its metal case if the mutual inductance between its coils and the case is large. One way to reduce mutual inductance is to counter-wind coils to cancel the magnetic field produced (Figure \(\PageIndex{2}\)). Figure \(\PageIndex{2}\): The heating coils of an electric clothes dryer can be counter-wound so that their magnetic fields cancel one another, greatly reducing the mutual inductance with the case of the dryer. Digital signal processing is another example in which mutual inductance is reduced by counter-winding coils. The rapid on/off emf representing 1s and 0s in a digital circuit creates a complex time-dependent magnetic field. An emf can be generated in neighboring conductors. If that conductor is also carrying a digital signal, the induced emf may be large enough to switch 1s and 0s, with consequences ranging from inconvenient to disastrous. Example \(\PageIndex{1}\): Mutual Inductance Figure \(\PageIndex{3}\) shows a coil of \(N_2\) turns and radius \(R_2\) surrounding a long solenoid of length \(l_1\), radius \(R_1\), and \(N_1\) turns. What is the mutual inductance of the two coils? If \(N_1 = 500 \, turns, \, N_2 = 10 \, turns, \, R_1 = 3.10 \, cm, \, l_1 = 75.0 \, cm\), and the current in the solenoid is changing at a rate of 200 A/s, what is the emf induced in the surrounding coil? Figure \(\PageIndex{3}\): A solenoid surrounded by a coil. Strategy There is no magnetic field outside the solenoid, and the field inside has magnitude \(B_1 = \mu_0(N_1/l_1)I_1\) and is directed parallel to the solenoid’s axis. We can use this magnetic field to find the magnetic flux through the surrounding coil and then use this flux to calculate the mutual inductance for part (a), using Equation \ref{14.3}. 
We solve part (b) by calculating the mutual inductance from the given quantities and using Equation \ref{14.4} to calculate the induced emf. Solution The magnetic flux \(\Phi_{21}\) through the surrounding coil is \[\begin{align} \Phi_{21} &= B_1 \pi R_1^2 \nonumber \\[5pt] &= \dfrac{\mu_0 N_1I_1}{l_1}\pi R_1^2. \nonumber \end{align} \nonumber\] Now from Equation \ref{14.3}, the mutual inductance is \[\begin{align} M &= \dfrac{N_2\Phi_{21}}{I_1} \nonumber \\[5pt] &= \left(\dfrac{N_2}{I_1}\right)\left(\dfrac{\mu_0N_1I_1}{l_1}\right) \pi R_1^2 \nonumber \\[5pt] &= \dfrac{\mu_0N_1N_2 \pi R_1^2}{l_1}.\nonumber \end{align} \nonumber\] Using the previous expression and the given values, the mutual inductance is \[\begin{align} M &= \dfrac{(4\pi \times 10^{-7} \, T \cdot m/A)(500)(10)\pi (0.0310 \, m)^2}{0.750 \, m} \nonumber \\[5pt] &=2.53 \times 10^{-5} \, H. \nonumber \end{align} \nonumber\] Thus, from Equation \ref{14.4}, the emf induced in the surrounding coil is \[\begin{align} \epsilon_2 &= - M\dfrac{dI_1}{dt} \nonumber \\[5pt] &= - (2.53 \times 10^{-5} H)(200 \, A/s) \nonumber \\[5pt] &= - 5.06 \times 10^{-3}V. \nonumber \end{align} \nonumber\] Significance Notice that M in part (a) is independent of the radius \(R_2\) of the surrounding coil because the solenoid’s magnetic field is confined to its interior. In principle, we can also calculate M by finding the magnetic flux through the solenoid produced by the current in the surrounding coil. This approach is much more difficult because \(\Phi_{12}\) is so complicated. However, since \(M_{12} = M_{21}\), we do know the result of this calculation. Exercise \(\PageIndex{1}\) A current \(I(t) = (5.0 \, A) \, \sin \, ((120\pi \, rad/s)t)\) flows through the solenoid of part (b) of Example \(\PageIndex{1}\). What is the maximum emf induced in the surrounding coil?
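As a cross-check of the arithmetic in this example and the exercise above, here is a short script using only the given values:

```python
import math

mu0 = 4 * math.pi * 1e-7   # T·m/A
N1, N2 = 500, 10           # turns of solenoid and surrounding coil
R1 = 0.0310                # m, solenoid radius
l1 = 0.750                 # m, solenoid length

# Mutual inductance of the pair: M = mu0 * N1 * N2 * pi * R1^2 / l1
M = mu0 * N1 * N2 * math.pi * R1**2 / l1

# emf induced in the surrounding coil for dI1/dt = 200 A/s
emf = -M * 200.0

# Exercise: I(t) = 5.0 sin(120*pi*t)  =>  max |emf| = M * 5.0 * 120*pi
emf_max = M * 5.0 * 120 * math.pi
```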
Solution \(4.77 \times 10^{-2} \, V\) Contributors Paul Peter Urone (Professor Emeritus at California State University, Sacramento) and Roger Hinrichs (State University of New York, College at Oswego) with Contributing Authors: Kim Dirks (University of Auckland) and Manjula Sharma (University of Sydney). This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
When adding a \bar to a relation, the spacing around the relation is removed:

\documentclass{article}
\begin{document}
$x > y$ $x \bar{>} y$
\end{document}

So, how does one do this correctly? The name "relation" is the key: you must tell TeX that you want a math relation with \mathrel:

\documentclass{article}
\begin{document}
$x > y$, $x \bar{>} y$, $x \mathrel{\bar{>}} y$
\end{document}

By default > is already a Rel atom, but \bar{>} is turned into an Ord atom, just like x and y, and you find no spacing between them (try x{>}y). Here are three other possibilities: with the \widebar symbol from mathabx, a simple \overline, and with \stackrel which, by definition, does not remove the relation spacing:

\documentclass{article}
\usepackage{amsmath}
\DeclareFontFamily{U}{mathx}{\hyphenchar\font45}
\DeclareFontShape{U}{mathx}{m}{n}{
  <-6> mathx5 <6-7> mathx6 <7-8> mathx7
  <8-9> mathx8 <9-10> mathx9
  <10-12> mathx10 <12-> mathx12}{}
\DeclareSymbolFont{mathx}{U}{mathx}{m}{n}
\DeclareFontSubstitution{U}{mathx}{m}{n}
\DeclareMathAccent{\widebar}{0}{mathx}{"73}
\begin{document}
$x \mathrel{\widebar{>}} y$
$x \mathrel{\mkern1.8mu\overline{\mkern-1.8mu>\mkern-1.5mu}\mkern1.5mu} y$
$x \stackrel{\raisebox{-0.7ex}[0pt][0pt]{$\relbar$}}{>} y$
\end{document}

Not a general answer for the problem of getting a bar over a relation symbol, but a specific solution for < and >. Using \mathrel around the construction is necessary anyway. The idea here is to rotate \leq and \geq by 180 degrees. However, \mathrel{\bar{<}} and \mathrel{\bar{>}} might be better, depending on the intended meaning of the symbol.
\documentclass{article}
\usepackage{graphicx}
\makeatletter
\DeclareRobustCommand{\barlt}{\bar@glt{\geq}}
\DeclareRobustCommand{\bargt}{\bar@glt{\leq}}
\newcommand{\bar@glt}[1]{%
  \mathrel{\mathpalette\bar@glt@aux{#1}}%
}
\newcommand{\bar@glt@aux}[2]{%
  \rotatebox[origin=c]{180}{$\m@th#1#2$}%
}
\makeatother
\begin{document}
$a\barlt b\bargt c$ $a\leq b \geq c$
$a\leq b\geq c$
$a_{\barlt\bargt}$
\end{document}

Generally, <, >, and = are called relational operators. LaTeX reserves fixed spacing around relational operators; if you put such a symbol inside braces, that spacing is removed. So wrap the construction in a \mathrel command, e.g., $x > y$ $x \mathrel{\bar{>}} y$
Scrabble?! STATICALLY Is statically valid for Scrabble? Words With Friends? Lexulous? WordFeud? Other games? Definitions of STATICALLY in various dictionaries: In statics, a structure is statically indeterminate (or hyperstatic) when the static equilibrium equations are insufficient for determining the internal forces and reactions on that structure. Based on Newton's laws of motion, the equilibrium equations available for a two-dimensional body are: $\sum \vec{F} = 0$, the vector sum of the forces acting on the body equals zero. This translates to Σ H = 0 (the sum of the horizontal components of the forces equals zero) and Σ V = 0 (the sum of the vertical components of the forces equals zero); and $\sum \vec{M} = 0$, the sum of the moments (about an arbitrary point) of all forces equals zero. In the beam construction on the right, the four unknown reactions are VA, VB, VC and HA. The equilibrium equations are: Σ V = 0: VA − Fv + VB + VC = 0; Σ H = 0: HA = 0; Σ MA = 0: Fv · a − VB · (a + b) − VC · (a + b + c) = 0. Since there are four unknown forces (or variables) (VA, VB, VC and HA) but only three equilibrium equations, this system of simultaneous equations does not have a unique solution. The structure is therefore classified as statically indeterminate. Considerations of material properties and compatibility of deformations are used to solve statically indeterminate systems or structures. There are 10 letters in STATICALLY (point values: A 1, C 3, I 1, L 1, S 1, T 1, Y 4). To search all scrabble anagrams of STATICALLY, go to: STATICALLY? Rearrange the letters in STATICALLY and see some winning combinations 10 letters out of STATICALLY 6 letters out of STATICALLY 5 letters out of STATICALLY 4 letters out of STATICALLY 3 letters out of STATICALLY Anagrammer is a game resource site that has been extremely popular with players of popular games like Scrabble, Lexulous, WordFeud, Letterpress, Ruzzle, Hangman and so forth.
We maintain regularly updated dictionaries of almost every game out there. To be successful in these board games you must learn as many valid words as possible, but in order to take your game to the next level you also need to improve your anagramming skills, spelling, counting and probability analysis. Make sure to bookmark every unscrambler we provide on this site. Explore deeper into our site and you will find many educational tools, flash cards and so much more that will make you a much better player. This page covers all aspects of STATICALLY, do not miss the additional links under "More about: STATICALLY"
tl;dr: Not without killing everything. Let's do some maths and actually figure this out. The specific heat capacity of water is $4.186 \text{ kJ kg}^{-1}\text{K}^{-1}$. That means it takes 4186 joules of energy to heat 1 kilogram of water up by one degree. The average temperature of the surface of the sea is 17 °C. It gets a lot colder as you go deeper, so the average temperature overall is more like 0 °C. The seas contain a volume of 1.3 billion cubic kilometres of water. 1 litre of water = 1 kg. 1 litre of water also = 1 dm 3, so there are 1000 litres in a cubic metre and thus a cubic metre of water weighs a ton. (This is assuming freshwater to keep the numbers reasonably nice - salt water is denser.) Then, there are $1000^3 = 1,000,000,000$ cubic metres in one cubic kilometre. That means 1.3 billion billion, or $1.3\times 10^{18}$, cubic metres of water and the same number of tons, which in turn is $1.3\times 10^{21} \text{ kg}$. Now let's heat all that up by one degree. $$ (1.3 \times 10^{21} \text{ kg}) \times 4186 \text{ J kg}^{-1}\text{K}^{-1} = 5.4418 \times 10^{24} \text{ J}$$ Multiply by 100 so we can heat the water to boiling: $$ = 5.4418 \times 10^{26} \text{ J}$$ Finally, you need around 6x that energy to actually boil it all off (latent heat of vaporisation): $$ = 3.2651 \times 10^{27} \text{ J}$$ Now while that's not quite on the order of blowing up the Earth, that's a hell of a lot of energy. You're in the perfect region for an asteroid impact. We can work out how big and fast it needs to be: $$ \text{KE} = \frac{1}{2} mv^{2} $$$$ 2\text{KE} = mv^{2} $$$$ 6.5301 \times 10^{27} = mv^2 $$ We can play around with mass and velocity. Let's say this asteroid is a perfect 10 km cube with 5000 kg/m 3 density, thus giving it a mass of $5 \times 10^{15} \text{ kg}$. That means its velocity has to be: $$ v = \sqrt{\frac{6.5301 \times 10^{27}}{5 \times 10^{15}}} $$$$ v \approx 1.14 \times 10^{6} \text{ ms}^{-1} $$ or around $0.004c$.
That speed isn't insignificant. An impact from an asteroid of this size and speed wouldn't destroy Earth, but it would most likely blast out a massive crater and kill all life on the planet rather than vaporise the oceans, because the energy wouldn't be distributed evenly through the water. And that's before we start on the water cycle dropping all that steam straight back where it came from.
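The arithmetic above is easy to re-run. A quick sanity check in Python (note that a perfect 10 km cube of rock at 5000 kg/m³ comes to $5\times10^{15}$ kg):

```python
import math

mass = 1.3e21        # kg of ocean water (1.3e18 m^3, taken as tonnes -> kg)
c_water = 4186.0     # J per kg per kelvin

E_heat = mass * c_water * 100.0   # warm everything from ~0 C to 100 C
E_total = 6.0 * E_heat            # ~6x more to also supply the latent heat

side = 10_000.0      # m, a "perfect 10 km cube" of rock
density = 5000.0     # kg/m^3
m_ast = density * side**3         # = 5e15 kg (a 10 km cube is 1e12 m^3)

# KE = (1/2) m v^2  =>  v = sqrt(2 E / m)
v = math.sqrt(2.0 * E_total / m_ast)
frac_c = v / 3.0e8   # fraction of the speed of light
```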
If you look at the recursive combinators in the untyped lambda-calculus, such as the Y combinator or the omega combinator: $$ \begin{array}{lcl} \omega & = & (\lambda x.\,x\;x)\;(\lambda x.\,x\;x)\\ Y & = & \lambda f.\,(\lambda x.\,f\;(x\;x))\; (\lambda x.\,f\;(x\;x)) \\ \end{array} $$ It's clear that all of these combinators end up duplicating a variable somewhere in their definition. Furthermore, all of these combinators are typeable in the simply-typed lambda calculus, if you extend it with recursive types $\mu\alpha.\,A(\alpha)$, where $\alpha$ is allowed to occur negatively in the recursive type. However, what happens if you add full (negative-occurence) recursive types to the exponential-free fragment of linear logic (i.e., MALL)? Then you don't have an exponential $!A$ to give you contraction. You can encode the type of exponentials using something like $$!A \triangleq \mu\alpha.\;I \;\&\; A \;\&\; (\alpha \otimes \alpha)$$but I don't see how to define the introduction rule for it, since that seems to require a fixed point combinator to define. And I was trying to define exponentials, to get contraction, to get a fixed-point combinator! Is it the case that MALL plus unrestricted recursive types is still normalizing‽
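For intuition about the variable duplication, here is the strict (call-by-value) cousin of $Y$, the $Z$ combinator, sketched in Python. The `x(x)` self-application is exactly the contraction that a linear discipline without exponentials disallows:

```python
# Z combinator: a call-by-value fixed-point combinator,
#   Z = \f.(\x.f(\v.x x v))(\x.f(\v.x x v)).
# Note the duplicated variable x in "x(x)" -- the contraction
# that requires !A (or unrestricted recursive types) to type.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Tying the knot for factorial without native recursion:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```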
My attempt: a) Summation of all values? b) c) d) Failed e) Parseval's theorem. You just need to know the formulas for the DTFT and its inverse: $$X(e^{j\omega})=\sum_{n=-\infty}^{\infty}x[n]e^{-jn\omega}\tag{1}$$ and $$x[n]=\frac{1}{2\pi}\int_{-\pi}^{\pi}X(e^{j\omega})e^{jn\omega}d\omega\tag{2}$$ From $(1)$ you see that your answer for $(a)$ is correct. Also the answer for $(d)$ follows immediately, if you realize that $e^{-jn\pi}=(-1)^n$. For $(b)$ it's important to see that the coefficients are antisymmetric. Looking at $(1)$ you see that you can pair the terms for indices $\pm 1$, $\pm 2$, etc., which allows you to rewrite the sum $(1)$ as a sum of weighted sines times $j$ (the imaginary unit), because $e^{jx}-e^{-jx}=2j\sin(x)$. So you'll have a purely imaginary expression. Computing the phase should then be easy. $(c)$ is easily solved using formula $(2)$ (which value of $n$ do you need to plug into $(2)$?). And finally, for $(e)$ you're right that you need Parseval's theorem.
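The identities used for (a), (d) and (e) are easy to check numerically. The problem's actual coefficients are not reproduced above, so the signal below is made up purely for illustration:

```python
import numpy as np

# A made-up finite-length signal (illustrative values only).
x = np.array([1.0, -2.0, 3.0, 0.5])
n = np.arange(len(x))

def dtft(x, n, w):
    """X(e^{jw}) = sum_n x[n] * exp(-j*w*n), evaluated at one frequency."""
    return np.sum(x * np.exp(-1j * w * n))

X_dc = dtft(x, n, 0.0)      # (a): X(e^{j0}) is the plain sum of the samples
X_pi = dtft(x, n, np.pi)    # (d): e^{-j*pi*n} = (-1)^n gives the alternating sum

# (e) Parseval: (1/2pi) * integral_{-pi}^{pi} |X|^2 dw = sum |x[n]|^2.
# A uniform Riemann sum over a full period is spectrally accurate here.
w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
X = np.array([dtft(x, n, wi) for wi in w])
energy_freq = np.sum(np.abs(X) ** 2) * (2 * np.pi / 4096) / (2 * np.pi)
```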
Tagged: group Problem 343 Let $G$ be a finite group and let $N$ be a normal abelian subgroup of $G$. Let $\Aut(N)$ be the group of automorphisms of $N$. Suppose that the orders of the groups $G/N$ and $\Aut(N)$ are relatively prime. Then prove that $N$ is contained in the center of $G$. Problem 332 Let $G=\GL(n, \R)$ be the general linear group of degree $n$, that is, the group of all $n\times n$ invertible matrices. Consider the subset of $G$ defined by \[\SL(n, \R)=\{X\in \GL(n,\R) \mid \det(X)=1\}.\] Prove that $\SL(n, \R)$ is a subgroup of $G$. Furthermore, prove that $\SL(n,\R)$ is a normal subgroup of $G$. The subgroup $\SL(n,\R)$ is called the special linear group. Problem 322 Let $\R=(\R, +)$ be the additive group of real numbers and let $\R^{\times}=(\R\setminus\{0\}, \cdot)$ be the multiplicative group of real numbers. (a) Prove that the map $\exp:\R \to \R^{\times}$ defined by \[\exp(x)=e^x\] is an injective group homomorphism. (b) Prove that the additive group $\R$ is isomorphic to the multiplicative group \[\R^{+}=\{x \in \R \mid x > 0\}.\]
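Problem 332 can be sanity-checked numerically (an illustration, of course, not a proof): the determinant is multiplicative, so products, inverses, and conjugates of determinant-1 matrices again have determinant 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sl(n):
    """A random matrix rescaled to determinant 1 (up to rounding)."""
    A = rng.standard_normal((n, n))
    d = np.linalg.det(A)
    if d < 0:             # flip one row so the determinant is positive
        A[0] *= -1.0
        d = -d
    return A / d ** (1.0 / n)   # det(A/c) = det(A) / c^n = 1

n = 3
A, B = random_sl(n), random_sl(n)
X = rng.standard_normal((n, n))     # generic, almost surely invertible

det_AB = np.linalg.det(A @ B)                       # closure under products
det_Ainv = np.linalg.det(np.linalg.inv(A))          # closure under inverses
det_conj = np.linalg.det(X @ A @ np.linalg.inv(X))  # normality: conjugation
```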
Do you know sensible algorithms that run in polynomial time in (Input length + Output length), but whose asymptotic running time in the same measure has a really huge exponent/constant (at least, where the proven upper bound on the running time is of that form)? Algorithms based on the regularity lemma are good examples for polynomial-time algorithms with terrible constants (either in the exponent or as leading coefficients). The regularity lemma of Szemerédi tells you that in any graph on $n$ vertices you can partition the vertices into sets where the edges between pairs of sets are "pseudo-random" (i.e., densities of sufficiently large subsets look like densities in a random graph). This is a structure that is very nice to work with, and as a consequence there are algorithms that use the partition. The catch is that the number of sets in the partition is an exponential tower in the parameter of pseudo-randomness (see here: http://en.wikipedia.org/wiki/Szemer%C3%A9di_regularity_lemma). For some links to algorithms that rely on the regularity lemma, see, e.g.: http://www.cs.cmu.edu/~ryanw/regularity-journ.pdf News from SODA 2013: the Max-Bisection problem is approximable to within a factor 0.8776 in around $O(n^{10^{100}})$ time. Here are two screenshots from An Energy-Driven Approach to Linkage Unfolding by Jason H. Cantarella, Erik D. Demaine, Hayley N. Iben, James F. O’Brien, SOCG 2004: Here is a recent result from FUN 2012 paper Picture-Hanging Puzzles by Erik D. Demaine, Martin L. Demaine, Yair N. Minsky, Joseph S. B. Mitchell, Ronald L. Rivest and Mihai Patrascu. We show how to hang a picture by wrapping rope around n nails, making a polynomial number of twists, such that the picture falls whenever any k out of the n nails get removed, and the picture remains hanging when fewer than k nails get removed.
Don't let the 'polynomial number' fool you...it turns out to be $O(n^{43737})$. There exists a class of problems, whose solutions are hard to compute, but approximating them to any accuracy is easy, in the sense that there are polynomial-time algorithms that can approximate the solution to within $(1+\epsilon)$ for any constant ε > 0. However, there's a catch: the running time of the approximators may depend on $1/\epsilon$ quite badly, e.g., be $O(n^{1/\epsilon})$. See more info here: http://en.wikipedia.org/wiki/Polynomial-time_approximation_scheme. Although the run-time for such algorithms has been subsequently improved, the original algorithm for sampling a point from a convex body had run time $\tilde{O}(n^{19})$. Dyer, Frieze, and Kannan: http://portal.acm.org/citation.cfm?id=102783 If $L$ is a tabular modal or superintuitionistic logic, then the extended Frege and substitution Frege proof systems for $L$ are polynomially equivalent, and polynomially faithfully interpretable in the classical EF (this is Theorem 5.10 in this paper of mine). The exponent $c$ of the polynomial simulations is not explicitly stated in Theorem 5.10, but the inductive proof of the theorem gives $c=2^{O(|F|)}$, where $F$ is a finite Kripke frame which generates $L$, so it can be as huge as you want depending on the logic. (It gets worse in Theorem 5.20.) The current best known algorithm for recognizing map graphs (a generalization of planar graphs) runs in $n^{120}$. Thorup, Map graphs in polynomial time. Computing the equilibrium of the Arrow-Debreu market takes $O(n^6\log(nU))$ max-flow computations, where $U$ is the maximum utility. Duan, Mehlhorn, A Combinatorial Polynomial Algorithm for the Linear Arrow-Debreu Market. Sandpile Transience Problem Consider the following process. Take a thick tile and drop sand particles on it one grain at a time. A heap gradually builds up and then a large portion of sand slides off from the edges of the tile. 
If we continue to add sand particles, after a certain point of time, the configuration of the heap repeats. Thereafter, the configuration becomes recurrent, i.e. it keeps revisiting a state that was seen earlier. Consider the following model for the above process. Model the tile as an $n \times n$ grid. Sand particles are dropped on the vertices of this grid. If the number of particles at a vertex equals or exceeds its degree, then the vertex collapses and the particles in it move to adjacent vertices (in a cascading manner). A sand particle that reaches a boundary vertex disappears into a sink ("falls off"). This is known as the Abelian Sandpile Model. Problem: How long does it take for the configuration to become recurrent in terms of $n$, assuming the worst algorithm for dropping sand particles? In SODA '07, László Babai and Igor Gorodezky proved this time to be polynomially bounded but.. In SODA '12, Ayush Choure and Sundar Vishwanathan improved this bound to $O(n^7)$. This answer would have looked slightly better if not for their improvement :) The "convex skull" problem is to find the maximum-area convex polygon inside a given simple polygon. The fastest algorithm known for this problem runs in $O(n^7)$ time [Chang and Yap, DCG 1986]. The solution of Annihilation Games (Fraenkel and Yesha) has complexity $O(n^6)$. The Robertson-Seymour theorem aka Graph Minor Theorem establishes among other things that for any graph $G$, there exists an $O(n^3)$ algorithm that determines whether an arbitrary graph $H$ (of size $n$) has $G$ as a minor. The proof is nonconstructive and the (I think non-uniform) multiplicative constant is probably so enormous that no formula for it can be written down explicitly (e.g. as a primitive recursive function on $G$).
In their ICALP 2014 paper, Andreas Björklund and Thore Husfeldt give the first (randomized) polynomial algorithm that computes 2 disjoint paths with minimum total length (sum of the two paths length) between two given pairs of vertices. The running time is in $O(n^{11})$. In Polygon rectangulation, part 2: Minimum number of fat rectangles, a practical modification of the rectangle partition problem motivated by concerns in VLSI is presented: Fat Rectangle Optimization Problem: Given an orthogonal polygon $P$, maximize the shortest side $\delta$ over all rectangulations of $P$. Among the partitions with the same $\delta$, choose the partition with the fewest number of rectangles. As yet, only a theoretical algorithm exists, with a running time of $O(n^{42})$. (That is not a typo, and it is obtained through a “natural” dynamic programming solution to the problem stated there.) computing matrix rigidity[1] via brute force/naive/enumerations apparently takes $O(2^n)$ time for matrices of size $n$ elements. this can be seen as a converging limit of a sequence of increasingly accurate estimates that take $n \choose 1$, $n \choose 2$, $n \choose 3$, ... steps. in other words each estimate is in P-time $O(n^c)$ for any arbitrary exponent $c$ (ie $n \choose c$ steps). the naive algorithm chooses any $c$ elements of the matrix to change and tests for resulting dimension reduction. this is not totally surprising given that it has been related to computing circuit lower bounds. this follows a pattern where many algorithms have a conjectured P-time solution for some parameter but a solid proof of a lower bound would likely imply $\mathsf{P \neq NP}$ or something stronger. surprisingly one of the most obvious answers not posted yet. finding a clique of size $c$ (edges or vertices) apparently takes $O(n^c)$ time by the naive/brute force algorithm that enumerates all possibilities. or more accurately proportional to $n \choose c$ steps. 
(strangely enough this basic factoid seems to be rarely pointed out in the literature.) however a strict proof of that would imply $\mathsf{P \neq NP}$. so this question is related to the famous open conjecture, virtually equivalent to it. other NP type problems can be parameterized in this way.
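The brute-force clique search mentioned above is a short loop over all $\binom{n}{c}$ vertex subsets; a sketch (helper names are my own):

```python
from itertools import combinations

def has_clique(adj, k):
    """Naive k-clique search: test all (n choose k) vertex subsets,
    i.e. roughly O(n^k) work for fixed k."""
    n = len(adj)
    for subset in combinations(range(n), k):
        # a subset is a clique iff every pair inside it is adjacent
        if all(adj[u][v] for u, v in combinations(subset, 2)):
            return True
    return False

# A 5-cycle: every edge is a 2-clique, but there is no triangle.
n = 5
adj = [[False] * n for _ in range(n)]
for i in range(n):
    adj[i][(i + 1) % n] = adj[(i + 1) % n][i] = True
```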
If xy ≠ 0, is x > y? (1) 4x = 3y (2) |y - x| = x - y[#permalink] Show Tags 12 Nov 2009, 10:14 Economist wrote: If xy ≠ 0, is x > y? (1) 4x = 3y (2) |y - x| = x - y This is a good one. +1 Economist for it. If xy ≠ 0, is x > y? The question asks whether \(x>y\). (1) 4x = 3y --> \(x=\frac{3}{4}y\); well, this one is relatively easy. This statement only tells us that x and y have the same sign. When they are both positive then \(x<y\), BUT when they are both negative \(y<x\). Not sufficient. (2) \(|y - x| = x - y\) --> \(|y - x| = -(y-x)\). This means that \(y - x\leq{0}\) --> \(x\geq{y}\). Thus, this statement says that x can be more than or equal to y. Not sufficient. (1)+(2) From (1) (\(x=\frac{3}{4}y\)) it follows that \(x\) is not equal to \(y\) (bearing in mind that xy ≠ 0), hence from (2): \(x>y\). Sufficient. \(|y - x| = x - y\): look at the LHS. It is an absolute value, so it is never negative; hence the RHS, \(x-y\), is never negative either: \(x-y\geq{0}\), i.e. \(y-x\leq{0}\). If so, then there is only one possibility for \(|y - x|\): it must be \(-y+x\). So we'll have the condition \(y-x\leq{0}\), which is the same as \(x\geq{y}\). The above means that \(|y - x| = x - y\) is always true for \(x\geq{y}\). But as we concluded earlier, that is not enough. We need to be sure that \(x>y\), and \(x\geq{y}\) leaves the possibility that they are equal, which is removed by statement (1) when the two are considered together.
Hence C. st 1) y=1, x=3/4 - false; y=-1, x=-3/4 - true. Not sufficient. st 2) |y-x| = x-y --> x>y. Sufficient, coz |-x| = -(-x) = x -- basically a negative number should be negated again to get the result for absolute value. All the cases say that x>y, but it fails when x=y. Combining, we can answer the question: C <didnt consider x=y option, so answered as B> Originally posted by chix475ntu on 10 Feb 2010, 22:16. Last edited by chix475ntu on 11 Feb 2010, 10:06, edited 1 time in total. st 1) y=1, x=3/4 - false; y=-1, x=-3/4 - true. Not sufficient. st 2) |y-x| = x-y --> x>y. Sufficient, coz |-x| = -(-x) = x -- basically a negative number should be negated again to get the result for absolute value. B B alone is not sufficient as we don't know whether x=y. So A gives the information that x is not = y. So the final answer is C. This was a nice question... even I would have made the same mistake in the exam.. It's really important to think of all the possibilities on the G day.
But from this statement we can grasp an important property we'll use while evaluating the statements together: as \(xy\neq0\), then from statement (1) \(x\neq{y}\). (2) \(|y-x|=x-y\). Now as the LHS is an absolute value, which is never negative, the RHS must also be \(\geq0\) --> so \(x-y\geq0\) --> then \(|y-x|=-(y-x)\), and we'll get \(-(y-x)=x-y\) --> \(0=0\), which is true. This means that the equation \(|y-x|=x-y\) holds true when \(x-y\geq0\) or, which is the same, when \(x\geq{y}\). But this is not enough, as \(x=y\) is still possible. Not sufficient. (1)+(2) From (1) we got that \(x\neq{y}\) and from (2) \(x\geq{y}\), hence \(x>y\). Sufficient. Answer: C. Hope it helps. Dear Bunuel: I am a bit confused by this problem. I don't have a problem understanding the first statement, but I do have a problem trying to understand the second one. I saw a user on this thread solving the second statement like this: 2. |y - x| = x - y; y - x = x - y; 2x = 2y; x = y; or: -y + x = x - y; 0 = 0. Isn't his way of solving the correct way to solve this kind of problem? I have seen some other exercises and they always negate the LHS that has the absolute value on it, regardless of the inequality symbol or equality. I would like to learn why we are not doing this in this problem. If I am wrong, would you be so kind to revamp the way of solving statement 2? I already read the way you solved it, but it was not 100% clear to me. Thanks in advance. There can be many correct ways to solve a question. Try this one: (2) says that \(|y-x|=-(y-x)\). We know that \(|x|=-x\) when \(x\leq{0}\), thus \(y-x\leq{0}\). Re: If xy ≠ 0, is x > y? (1) 4x = 3y (2) |y - x| = x - y[#permalink] Show Tags 09 Aug 2016, 23:33 Bunuel wrote: Given: \(xy\neq0\). Question: is \(x>y\) true? (1) \(4x = 3y\) --> if \(x\) and \(y\) are both positive, then \(x<y\), BUT if they are both negative then \(x>y\). Not sufficient. But from this statement we can grasp an important property we'll use while evaluating the statements together: as \(xy\neq0\), then from statement (1) \(x\neq{y}\). (2) \(|y-x|=x-y\). Now as the LHS is an absolute value, which is never negative, the RHS must also be \(\geq0\) --> so \(x-y\geq0\) --> then \(|y-x|=-(y-x)\), and we'll get \(-(y-x)=x-y\) --> \(0=0\), which is true.
This means that equation \(|y-x|=x-y\) holds true when \(x-y\geq0\) or, which is the same, when \(x\geq{y}\). But this not enough as \(x=y\) is still possible. Not sufficient. (1)+(2) From (1) we got that \(x\neq{y}\) and from (2) \(x\geq{y}\), hence \(x>y\). Sufficient. Answer: C. Hope it helps. Hello Bunuel, Thank you for your answer. Although , as per mod property, isn't it the case that |x| = x when x>=0 and -x when x<0 ? I would like to know why you are considering |x| = -x when x<=0, because this is where the answer is really hinged on. Re: If xy 0, is x > y? (1) 4x = 3y (2) |y - x| = x - y[#permalink] Show Tags 10 Aug 2016, 00:58 ramblersm wrote: Bunuel wrote: Given: \(xy\neq0\). Question: is \(x>y\) true? (1) \(4x = 3y\) --> if \(x\) and \(y\) are both positive, then \(x<y\) BUT if they are both negative then \(x>y\). Not sufficient. But from this statement we can grasp an important property we'll use while evaluating statements together: as \(xy\neq0\) then from statement (1) \(x\neq{y}\). (2) \(|y-x|=x-y\). Now as LHS is absolute value, which is never negative, RHS must also be \(\geq0\) --> so \(x-y\geq0\) --> then \(|y-x|=-(y-x)\), and we'll get \(-(y-x)=x-y\) --> \(0=0\), which is true. This means that equation \(|y-x|=x-y\) holds true when \(x-y\geq0\) or, which is the same, when \(x\geq{y}\). But this not enough as \(x=y\) is still possible. Not sufficient. (1)+(2) From (1) we got that \(x\neq{y}\) and from (2) \(x\geq{y}\), hence \(x>y\). Sufficient. Answer: C. Hope it helps. Hello Bunuel, Thank you for your answer. Although , as per mod property, isn't it the case that |x| = x when x>=0 and -x when x<0 ? I would like to know why you are considering |x| = -x when x<=0, because this is where the answer is really hinged on. Thanks! |0| = 0 = -0, so you can include = sign in either of the cases: |x| = -x, when x<=0 and |x| = x, when x >= 0._________________ Re: If xy 0, is x > y? 
(1) 4x = 3y (2) |y - x| = x - y[#permalink] Show Tags 31 Jan 2018, 08:28 Bunuel wrote: Economist wrote: If xy ≠ 0, is x > y? (1) 4x = 3y (2) |y - x| = x - y This is a good one. +1 Economist for it. If xy ≠ 0, is x > y? Question asks whether \(x>y\)? (1) 4x = 3y --> \(x=\frac{3}{4}y\), well this one is relatively easy. This statement only tells that x and y have the same sign. When they are both positive then \(x>y\), BUT when they are both negative \(y>x\). Not sufficient. (2) \(|y - x| = x - y\) --> \(|y - x| = -(y-x)\). This means that \(y - x\leq{0}\) --> \(x\geq{y}\). Thus, this statement says that x can be more than or equal to y. Not sufficient. (1)+(2) From (1) (\(x=\frac{3}{4}y\)) it follows that \(x\) is not equal to \(y\) (bearing in mind that xy ≠ 0), hence from (2): \(x>y\). Sufficient. Answer: C. For (1), isn't it the opposite? when both +ve x<y, when both -ve x>y. Re: If xy 0, is x > y? (1) 4x = 3y (2) |y - x| = x - y[#permalink] Show Tags 31 Jan 2018, 08:36 urvashis09 wrote: Bunuel wrote: Economist wrote: If xy ≠ 0, is x > y? (1) 4x = 3y (2) |y - x| = x - y This is a good one. +1 Economist for it. If xy ≠ 0, is x > y? Question asks whether \(x>y\)? (1) 4x = 3y --> \(x=\frac{3}{4}y\), well this one is relatively easy. This statement only tells that x and y have the same sign. When they are both positive then \(x>y\), BUT when they are both negative \(y>x\). Not sufficient. (2) \(|y - x| = x - y\) --> \(|y - x| = -(y-x)\). This means that \(y - x\leq{0}\) --> \(x\geq{y}\). Thus, this statement says that x can be more than or equal to y. Not sufficient. (1)+(2) From (1) (\(x=\frac{3}{4}y\)) it follows that \(x\) is not equal to \(y\) (bearing in mind that xy ≠ 0), hence from (2): \(x>y\). Sufficient. Answer: C. For (1), isn't it the opposite? when both +ve x<y, when both -ve x>y. Yes but that doesn't change the answer anyway._________________ If my Post helps you in Gaining Knowledge, Help me with KUDOS.. !! Re: If xy 0, is x > y? 
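None of the posts above include it, but the algebra is easy to sanity-check by brute force. The sketch below (Python, using exact rational arithmetic to avoid floating-point surprises; the sample grid is an arbitrary choice) verifies that statement (2) holds exactly when x >= y, and that together with statement (1) and xy ≠ 0 it forces x > y:

```python
from fractions import Fraction

def statement2(x, y):
    # statement (2): |y - x| = x - y
    return abs(y - x) == x - y

# a small grid of exact rationals, including negatives and fractions
samples = [Fraction(n, d) for n in range(-6, 7) for d in (1, 2, 3)]

# statement (2) alone is equivalent to x >= y (not sufficient by itself)
s2_matches = all(statement2(x, y) == (x >= y)
                 for x in samples for y in samples)

# (1) + (2) with xy != 0: every surviving pair has x strictly greater
combined_ok = all(x > y
                  for x in samples for y in samples
                  if x * y != 0 and 4 * x == 3 * y and statement2(x, y))

print(s2_matches, combined_ok)   # True True
```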
Notation: \( I_{1}\), \( I_{2}\), \( I_{3}\) are the principal moments of inertia. \( I_{3}\) is the unique moment. If it is the largest of the three, the body is an oblate symmetric top; if it is the smallest, it is a prolate symmetric top. \( Ox_{0}\), \( Oy_{0}\), \( Oz_{0}\) are the corresponding body-fixed principal axes. \( \omega_{1}\), \( \omega_{2}\), \( \omega_{3}\) are the components of the angular velocity vector with respect to the principal axes. In the analysis that follows, we are going to have to think about three vectors. There will be the angular momentum vector \( \bf{L}\), which, in the absence of external torques, is fixed in magnitude and in direction in laboratory space. There will be the direction of the axis of symmetry, the \( Oz_{0}\) axis, which is fixed in the body, but not necessarily in space, unless the body happens to be rotating about its axis of symmetry; we’ll denote a unit vector in this direction by \( \hat{\bf z}_{0} \). And there will be the instantaneous angular velocity vector \( \boldsymbol\omega\), which is neither space- nor body-fixed. What we are going to find is the following. We shall find that \( \boldsymbol\omega\) precesses in the body about the body-fixed symmetry axis in a cone called the body cone. The angle between \( \boldsymbol\omega\) and \( \hat{\bf z}_{0} \) is constant (we’ll be calling this angle \( \alpha\)), and the magnitude \( \omega\) of \( \boldsymbol\omega \) is constant. We shall find that the sense of the precession is the same as the sense of the spin if the body is oblate, but opposite if it is prolate. The direction of the symmetry axis, however, is not fixed in space, but it precesses about the space-fixed angular momentum vector \( \bf{L}\) in another cone. This cone is narrower than the body cone if the body is oblate, but broader than the body cone if the body is prolate.
The net result of these two precessional motions is that \( \boldsymbol\omega \) precesses in space about the space-fixed angular momentum vector in a cone called the space cone. For a prolate top, the semi vertical angle of the space cone can be anything from 0° to 90°; for an oblate top, however, the semi vertical angle of the space cone cannot exceed 19° 28'. That’s quite a lot to take in in one breath! We can start with Euler’s equations of motion for force-free rotation of a symmetric top: \[\begin{align} I_{1}\dot{\omega_{1}} &= -\omega_{2}\omega_{3}(I_{3}-I_{1}), \tag{4.8.1}\label{eq:4.8.1} \\[5pt] I_{1}\dot{\omega_{2}} &= +\omega_{3}\omega_{1}(I_{3}-I_{1}), \tag{4.8.2}\label{eq:4.8.2} \\[5pt] I_{3}\dot{\omega_{3}} &= 0. \tag{4.8.3}\label{eq:4.8.3} \end{align} \] From the third of these we obtain the result \[\ \omega_{3} = \text{constant} \tag{4.8.4}\label{eq:4.8.4} \] For brevity, I am going to let \[\dfrac{(I_{3}-I_{1})\omega_{3}}{I_{1}} = \Omega , \tag{4.8.5}\label{eq:4.8.5} \] although in a moment \( \Omega\) will have a physical meaning. Equations \( \ref{eq:4.8.1}\) and \( \ref{eq:4.8.2}\) become: \[\ \dot{\omega_{1}} = -\Omega\omega_{2} \tag{4.8.6}\label{eq:4.8.6} \] and \[\ \dot{\omega_{2}} = \Omega\omega_{1} \tag{4.8.7}\label{eq:4.8.7} \] Eliminate \( \omega_{2}\) from these to obtain \[\ \ddot{\omega_{1}} = - \Omega^{2}\omega_{1} \tag{4.8.8}\label{eq:4.8.8} \] This is the equation for simple harmonic motion and its solution is \[\ \omega_{1} = \omega_{0} \cos (\Omega t + \epsilon) \tag{4.8.9}\label{eq:4.8.9} \] in which \( \omega_{0}\) and \( \epsilon\), the two constants of integration, whose values depend on the initial conditions in the usual fashion, are the amplitude and initial phase angle.
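These conclusions are easy to confirm numerically. The following sketch (illustrative moments of inertia and initial conditions, pure Python) integrates Euler's equations for a force-free symmetric top with a fixed-step Runge-Kutta method, and checks that \( \omega_{3}\) and \( \omega_{1}^{2}+\omega_{2}^{2}\) stay constant and that \( (\omega_{1},\omega_{2})\) returns to its starting value after one period \( 2\pi/\Omega\):

```python
import math

# Force-free symmetric top (illustrative numbers): integrate Euler's
# equations with fixed-step RK4 and check the constants of the motion.
I1, I3 = 1.0, 1.5            # principal moments, oblate since I3 > I1
w0 = [0.3, 0.0, 1.0]         # initial (omega1, omega2, omega3)

def deriv(w):
    w1, w2, w3 = w
    return [-(I3 - I1) * w2 * w3 / I1,    # I1*w1' = -w2*w3*(I3 - I1)
            +(I3 - I1) * w3 * w1 / I1,    # I1*w2' = +w3*w1*(I3 - I1)
            0.0]                          # I3*w3' = 0

def rk4_step(w, dt):
    k1 = deriv(w)
    k2 = deriv([w[i] + 0.5 * dt * k1[i] for i in range(3)])
    k3 = deriv([w[i] + 0.5 * dt * k2[i] for i in range(3)])
    k4 = deriv([w[i] + dt * k3[i] for i in range(3)])
    return [w[i] + dt * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) / 6
            for i in range(3)]

Omega = (I3 - I1) * w0[2] / I1     # body-frame precession rate
T = 2 * math.pi / Omega            # one full circuit of the body cone
n = 10000
w = w0[:]
for _ in range(n):
    w = rk4_step(w, T / n)

drift = max(abs(w[i] - w0[i]) for i in range(3))
print(drift)   # tiny: (omega1, omega2) has come full circle, omega3 unchanged
```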
On combining this with Equation \( \ref{eq:4.8.6}\), we obtain \[\ \omega_{2} = \omega_{0}\sin(\Omega t + \epsilon) \tag{4.8.10}\label{eq:4.8.10} \] From these we see that \( (\omega_{1}^{2}+\omega_{2}^{2})^{1/2}\), which is the magnitude of the component of \( \boldsymbol\omega\) in the \( x_{0}y_{0}\)-plane, is constant, equal to \( \omega_{0}\); and since \( \omega_{3}\) is also constant, it follows that \( (\omega_{1}^{2} + \omega_{2}^{2} + \omega_{3}^{2})^{1/2} \), which is the magnitude of \( \boldsymbol\omega\), is also constant. The cosine of the angle \( \alpha\) between \( \boldsymbol\omega\) and \( \hat{\bf z}_{0}\) is \[\cos \alpha = \dfrac{\omega_{3}}{\omega} = \dfrac{I_{1} \Omega}{(I_{3}-I_{1})\omega} \tag{4.8.11}\label{eq:4.8.11} \] If we take the direction of the \( z_{0}\) axis to be the direction of the component of \( \boldsymbol\omega\) along the symmetry axis, then \( \boldsymbol\Omega\) is in the same direction as \( {\bf z}_{0}\) if \( I_{3}>I_{1}\) (that is, if the top is oblate) and it is in the opposite direction if the top is prolate. The situation for oblate and prolate tops is shown in Figure IV.11. We have just dealt with how the instantaneous axis of rotation precesses about the body-fixed symmetry axis, describing the body cone of semi vertical angle \( \alpha \). Now we are going to consider the precession of the body-fixed symmetry axis about the space-fixed angular momentum vector \( \bf{L}\). I am going to make use of the idea of Eulerian angles for expressing the orientation of one three-dimensional set of axes with respect to another. If you are not already familiar with Eulerian angles or would like a refresher, you can go to Chapter 3 of Celestial Mechanics, especially Section 3.7. Recall that we are using \( Ox_{0}y_{0}z_{0}\) for body-fixed coordinates, referred to the principal axes. I shall use \( Oxyz\) for space-fixed coordinates, and there is no loss of generality if I choose the \( Oz\) axis to coincide with the angular momentum vector \( \bf{L}\).
Let me try to draw the situation in Figure IV.12a. The axes \( Oxyz\) are the space-fixed axes. The axes \( Ox_{0}y_{0}z_{0}\) are the body-fixed principal axes. The angular momentum vector \( \bf{L}\) is directed along the axis \( Oz\). The symmetry axis of the body is directed along the axis \( Oz_{0}\). The Eulerian angles of the body-fixed axes relative to the space-fixed axes are (\( \phi\), \( \theta\), \( \psi\)). Recall, with the aid of Figure IV.12b, how these Euler angles are formed: First, a rotation by \( \phi\) about \( Oz\). Second, a rotation by \( \theta\) about the dashed line \( Ox^{\prime}\) to form an intermediate set of axes \( Ox^{\prime}y^{\prime}z^{\prime}\). Third, a rotation by \( \psi\) about \( Oz^{\prime}\) to form the body-fixed principal axes \( Ox_{0}y_{0}z_{0}\). Spend a little time trying to visualize these three sets of axes. Please also convince yourself, from the way the Euler angles were formed through three rotations, that the vector \( \bf{L}\) is in the \( y^{\prime}z^{\prime}\) plane and has no \( x^{\prime}\) component. It is also in the \( y_{0}z_{0}\) plane and has no \( x_{0}\) component. You will then agree that \[\ L_{x'} = 0, \quad L_{y'} = L\sin \theta, \quad L_{z'} = L\cos \theta . \tag{4.8.12}\label{eq:4.8.12} \] Now if \( L_{x^{\prime}} = 0\), then \( \omega_{x^{\prime}}\) is also zero, which means that \( \boldsymbol\omega\), like \( \bf{L}\), is in the \( y^{\prime}z^{\prime}\) plane. We have seen that \( \boldsymbol\omega\) makes an angle \( \alpha\) with the symmetry axis \( Oz_{0}\), where \( \alpha\) is given by Equation \( \ref{eq:4.8.11}\). I’ll now add \( \boldsymbol\omega\) to the drawing to make Figure IV.13. Like \( \bf{L}\), it is in the \( y^{\prime}z^{\prime}\) plane and has no \( x^{\prime}\) component. I haven’t marked in the angle \( \alpha\). I leave it to your imagination. It is the angle between \( \boldsymbol\omega\) and \( z_{0}\).
You should easily agree that \[\ \omega_{x'} = 0, \quad \omega_{y'} = \omega \sin\alpha, \quad \omega_{z'} = \omega \cos \alpha. \tag{4.8.13}\label{eq:4.8.13} \] From these, together with \( L_{y'} = I_{1}\omega_{y'}\) and \( L_{z'} = I_{3}\omega_{z'}\), we obtain \[\ I_{1} \tan \alpha = I_{3} \tan \theta \tag{4.8.14}\label{eq:4.8.14} \] For an oblate symmetric top, \( I_{3}>I_{1}\), so \( \alpha>\theta\). For a prolate symmetric top, \( I_{3}<I_{1}\), so \( \alpha<\theta\). Now \( \boldsymbol\omega\) can be written as the vector sum of the rates of change of the three Euler angles: \[ \boldsymbol\omega = \dot{\boldsymbol\theta} + \dot{\boldsymbol\phi} + \dot{\boldsymbol\psi} \tag{4.8.15}\label{eq:4.8.15} \] The components of \( \dot{\boldsymbol\theta} \) and \( \dot{\boldsymbol\psi} \) along \( Oy^{\prime}\) are each zero, and therefore the component of \( \boldsymbol\omega\) along \( Oy^{\prime}\) is equal to the component of \( \dot{\boldsymbol\phi} \) along \( Oy^{\prime}\). \[\ \therefore \qquad \omega \sin\alpha = \dot{\phi}\sin{\theta} \tag{4.8.16}\label{eq:4.8.16} \] In summary, then: The instantaneous axis of rotation, which makes an angle \( \alpha\) with the symmetry axis, precesses around it at angular speed \[\ \Omega = \dfrac{I_{3}-I_{1}}{I_{1}}\omega \cos \alpha \tag{4.8.17}\label{eq:4.8.17} \] which is in the same sense as \( \boldsymbol\omega\) if the top is oblate and opposite if it is prolate. The symmetry axis makes an angle \( \theta\) with the space-fixed angular momentum vector \( \bf{L}\), where \[\ \tan \theta = \dfrac{I_{1}}{I_{3}}\tan \alpha. \tag{4.8.18}\label{eq:4.8.18} \] For an oblate top, \( \theta < \alpha\). For a prolate top, \( \theta > \alpha\).
The speed of precession of the symmetry axis about \( \bf{L}\) is \[\ \dot{\phi} = \dfrac{\sin \alpha}{\sin \theta} \omega, \tag{4.8.19}\label{eq:4.8.19} \] or, by elimination of \( \theta\) between Equations \( \ref{eq:4.8.18}\) and \( \ref{eq:4.8.19}\), \[\ \dot{\phi} = \left[ 1 +\dfrac{I_{3}^{2} - I_{1}^{2}}{I_{1}^{2}} \cos^{2} \alpha\right]^{1/2} \omega. \tag{4.8.20}\label{eq:4.8.20} \] The net result of this is that \( \boldsymbol\omega\) precesses about \( \bf{L}\) at a rate \( \dot{\phi}\) in the space cone, which has a semi vertical angle \( \alpha-\theta\) for an oblate rotator, and \( \theta-\alpha\) for a prolate rotator. The space cone is fixed in space, while the body cone rolls around it, always in contact, \( \boldsymbol\omega\) being a mutual generator of both cones. If the rotator is oblate, the space cone is smaller than the body cone and is inside it. If the rotator is prolate, the body cone is outside the space cone and can be larger or smaller than it. Write \[\ c = I_{3}/I_{1} \tag{4.8.21}\label{eq:4.8.21} \] for the ratio of the principal moments of inertia. Note that for a pencil, \( c = 0\); for a sphere, \( c = 1\); for a plane disc or any regular plane lamina, \( c = 2\). (The last of these follows from the perpendicular axes theorem.) The range of \( c\), then, is from 0 to 2, 0 to 1 being prolate, 1 to 2 being oblate.
Equations \( \ref{eq:4.8.17}\) and \( \ref{eq:4.8.20}\) can be written \[\ \dfrac{\Omega}{\omega} = (c-1)\cos\alpha \tag{4.8.22}\label{eq:4.8.22} \] and \[\ \dfrac{\dot{\phi}}{\omega} = [1 + (c^{2}-1)\cos^{2}\alpha]^{1/2} \tag{4.8.23}\label{eq:4.8.23} \] Figures IV.15 and IV.16 show, for an oblate and a prolate rotator respectively, the instantaneous rotation vector \( \boldsymbol\omega\) precessing around the body-fixed symmetry axis at a rate \( \Omega\) in the body cone of semi vertical angle \( \alpha\); the symmetry axis precessing about the space-fixed angular momentum vector \( \bf{L}\) at a rate \( \dot{\phi}\) in a cone of semi vertical angle \( \theta\) (which is less than \( \alpha\) for an oblate rotator, and greater than \( \alpha\) for a prolate rotator); and consequently the instantaneous rotation vector \( \boldsymbol\omega\) precessing around the space-fixed angular momentum vector \( \bf{L}\) at a rate \( \dot{\phi}\) in the space cone of semi vertical angle \( \alpha-\theta\) (oblate rotator) or \( \theta-\alpha\) (prolate rotator). One can see from Figures IV.15 and IV.16 that the angle between \( \bf{L}\) and \( \boldsymbol\omega\) is limited for an oblate rotator, but it can be as large as 90° for a prolate rotator. The angle between \( \bf{L}\) and \( \boldsymbol\omega\) is \( \theta-\alpha\) (which is negative for an oblate rotator). We have \[\ \tan (\theta - \alpha) = \dfrac{\tan\theta - \tan\alpha}{1 + \tan\theta \tan \alpha} = \dfrac{(1-c)\tan\alpha}{c+ \tan^{2}\alpha} \tag{4.8.24}\label{eq:4.8.24} \] By calculus this reaches a maximum value of \( \dfrac{1-c}{2\sqrt{c}} \) for \( \tan \alpha = \sqrt{c} \). For a rod or pencil (prolate), in which \( c = 0\), the angle between \( \bf{L}\) and \( \boldsymbol\omega\) can be as large as 90°.
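A short numerical check (pure Python, the scan resolution is an arbitrary choice) of this maximum: scanning \( \alpha\) over \( (0°, 90°)\) for a disc, \( c = 2\), the largest angle between \( \bf{L}\) and \( \boldsymbol\omega\) lands at \( \tan\alpha = \sqrt{c}\), in agreement with the calculus result:

```python
import math

# Scan alpha over (0, 90 deg) and find the largest angle between L and
# omega for a given c = I3/I1, then compare with the calculus result:
# the extremum of tan(theta - alpha) occurs at tan(alpha) = sqrt(c).
def angle_between_L_and_omega(c, alpha):
    theta = math.atan(math.tan(alpha) / c)   # tan(theta) = tan(alpha)/c
    return abs(theta - alpha)

c = 2.0  # flat disc or regular plane lamina, the most oblate case
grid = [i * (math.pi / 2) / 100000 for i in range(1, 100000)]
best = max(angle_between_L_and_omega(c, a) for a in grid)
predicted = math.atan((c - 1.0) / (2.0 * math.sqrt(c)))
print(math.degrees(predicted))   # about 19.47 degrees, i.e. 19 deg 28'
print(abs(best - predicted) < 1e-8)
```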
Recalling exactly what is meant by the vectors \( \bf{L}\) and \( \boldsymbol\omega\), the reader should now try to imagine in his or her mind’s eye a pencil rotating so that \( \bf{L}\) and \( \boldsymbol\omega\) are at right angles. The spin vector \( \boldsymbol\omega\) is along the length of the pencil and the angular momentum vector \( \bf{L}\) is at right angles to the length of the pencil. For an oblate rotator, the angle between \( \bf{L}\) and \( \boldsymbol\omega\) is limited. The most oblate rotator is a flat disc or any regular flat lamina. The perpendicular axes theorem shows that for such a body, \( c=2\). The greatest angle between \( \bf{L}\) and \( \boldsymbol\omega\) for a disc occurs when \(\tan \alpha =\sqrt{2}\) (\(\alpha = 54^\circ 44'\)), and then \(\tan (\alpha- \theta) = \dfrac{1}{\sqrt{8}}\), \( \alpha - \theta = 19^\circ 28'\). In the following figures I illustrate some of these results graphically. The ratio \( \dfrac{I_{3}}{I_{1}}\) goes from 0 for a pencil through 1 for a sphere to 2 for a disc. Our planet Earth is approximately an oblate spheroid, its dynamical ellipticity \( \dfrac{(I_{3}-I_{1})}{I_{1}}\) being about \(3.285 \times 10^{-3}\). It is not rotating exactly about its symmetry axis; the angle \( \alpha\) between \( \boldsymbol\omega\) and the symmetry axis is about one fifth of an arcsecond, which is about six metres on the surface. The rotation period is one sidereal day (which is a few minutes shorter than 24 solar hours). Equation \( \ref{eq:4.8.17}\) tells us that the spin axis precesses about the symmetry axis in a period of about 304 days, all within the area of a tennis court. The actual motion is a little more complicated than this. The period is closer to 432 days because of the nonrigidity of Earth, and superimposed on this is an annual component caused by the annual movement of air masses.
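The 304-day figure can be reproduced in a couple of lines, using the values quoted in the text for a rigid Earth (the observed Chandler period, about 432 days, is longer because the Earth is not rigid):

```python
# Rigid-body estimate of the Eulerian precession period of the Earth,
# using the numbers quoted in the text.
ellipticity = 3.285e-3     # dynamical ellipticity (I3 - I1)/I1
sidereal_day = 0.9973      # length of the sidereal day, in mean solar days
# Eq. 4.8.17 with cos(alpha) ~ 1: Omega = ellipticity * omega, so the
# wobble period is the spin period divided by the ellipticity.
period = sidereal_day / ellipticity
print(round(period))   # 304
```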
This precessional motion of a symmetric body spinning freely about an axis inclined to the symmetry axis gives rise to variations of latitude of amplitude about a fifth of an arcsecond. It is not to be confused with the 26,000 year period of the precession of the equinoxes, which is caused by external torques from the Moon and the Sun.
There is no "plain Black Scholes implied surface" because implied volatilities come from options market prices (calls and put). If you had a whole continuum of call prices $C : \mathbb{R}_+ \times \mathbb{R}_+ \to \mathbb{R}_+$, $(T,K) \mapsto C(T,K)$ you would get a implied volatility function $\sigma_I : \mathbb{R}_+ \times \mathbb{R}_+ \to \mathbb{R}_+$ describing your implied volatiliy surface by inverting the Black Scholes formula for each expiry and strike:$$ C(T,K) = Call_{BS}(T,K,\sigma_I(T,K)).$$ But there is only a finite number of strikes and maturities available on any market so you only get a finite number of implied volatilies $\sigma_I(T_i,K_j)$. Instead of a whole surface, you just have a cloud of points. There is an infinite number of surfaces passing through these points and each of them corresponds to a different family of marginal distributions for your price process $(S_T)$ (at least if the surface satisfies no arbitrage conditions). So in order to get an actual surface you need to interpolate/extrapolate between points while making sure the surface you get is arbitrage free. This is not easy because the buttefly condition $\partial^2_{KK} C(T,K) \geq 0$ (convexity of the call payoff = positivity of a butterfly) translate to a second order differential inequality for implied volatility. This imposes strict and non explicit restrictions on your interpolation procedure. This is why pratictioners prefer to start from a parametrization which is arbitrage free by design and then try to fit it to the cloud of implied volatility points. For details, see "Arbitrage Free Implied Volatility Surfaces" by M. Roper http://www.maths.usyd.edu.au/u/pubs/publist/preprints/2010/roper-9.pdf
In the 19th century it was discovered that the Maxwell Equations describing electric and magnetic fields, a grand synthesis of the results of many different experiments, unlike Newton's laws of motion, are not consistent with Galilean relativity. A priori, the solution was not clear. One possible reason for this inconsistency, taken seriously at the time, was that the principle of relativity is wrong; i.e., there actually is an absolute rest frame, and our motion could be detected with respect to it with the appropriate experiment. Indeed, there was a significant experimental program to detect our motion with respect to absolute rest defined by a medium called "the ether." Another possibility is that while the principle of relativity holds, its specific implementation as Galilean relativity does not. As you know, because you have studied special relativity, this is indeed the correct solution to the puzzle of the Maxwell Equations' lack of invariance under a Galilean transformation. It turns out that the "Galilean Boost" can be generalized to a "Lorentz Boost" that is also consistent with the principle of relativity. The primed and unprimed coordinate systems constructed as before, under a Lorentz boost are related as: \[\begin{equation} \begin{aligned} t' & = \gamma (t-vx/c^2) \\ x' & = \gamma (x - vt), \\ y' & = y,\ {\rm and} \\ z' & = z. \end{aligned} \end{equation}\] where \(\gamma \equiv 1/\sqrt{1-v^2/c^2}\). In the limit that \(c \rightarrow \infty\) this reduces to the Galilean boost. As can be easily shown (see the homework problem) the reverse transformation is the same rule with \(v \rightarrow -v\). Most importantly, the Maxwell equations are invariant under this transformation. One of the more spectacular consequences of the Maxwell Equations is that one of their solutions is waves traveling at the speed of light.
If the Maxwell equations are correct in all inertial frames, then this implies that these waves will be moving at the speed of light in all inertial frames. To your Galilean intuition this is quite startling as it violates the simple rule for addition of velocities you derived in the previous chapter. The result can be easily demonstrated from the Lorentz transformation above. Here we sketch out the process, and you can fill in the details by performing the exercise that follows. Imagine a particle traveling at the speed of light. Let's parameterize its path through spacetime with the independent variable \(\lambda\) so that \(t = \lambda\) and \(x(\lambda) = c\lambda\). Then we have (by direct substitution into the Lorentz transformation) that \(t' = (\gamma/c)(c-v)\lambda\) and \(x'=\gamma (c-v)\lambda\). The speed of this particle in the primed frame is \[\frac{dx'}{dt'} = \frac{dx'}{d\lambda}\frac{d\lambda}{dt'} = \frac{dx'}{d\lambda}\left(\frac{dt'}{d\lambda}\right)^{-1} = c.\] Thus we see the Lorentz transformation tells us that a particle traveling at speed \(c\) in one frame will be traveling at speed \(c\) in another. This result is consistent with our claim that the Maxwell equations are invariant under the Lorentz transformation, since a consequence of the Maxwell Equations is that electromagnetic waves travel at speed \(c\). Box \(\PageIndex{1}\) Exercise 4.1.1: Fill in the steps in the above derivation. Answer \[\begin{equation*} \begin{aligned} \frac{dx'}{d\lambda} &= \gamma(c - v), \; {\rm and} \\ \\ \frac{dt'}{d\lambda} &= \frac{\gamma}{c}(c -v) \end{aligned} \end{equation*}\] Therefore, \[\begin{equation*} \begin{aligned} \frac{dx'}{d\lambda}\left(\frac{dt'}{d\lambda}\right)^{-1} = \gamma(c - v)\Big(\frac{\gamma}{c}(c -v)\Big)^{-1} = c \end{aligned} \end{equation*}\] Unlike rotational coordinate transformations that preserve spatial distances between pairs of points, a Lorentz transformation does not.
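The exercise above can also be checked with concrete numbers. A minimal sketch (units with \(c = 1\) and the boost speed \(v = 0.6\) are assumptions of this snippet):

```python
# Numeric version of the exercise: a worldline with x = c*t is still
# moving at speed c after a Lorentz boost along +x.
c = 1.0
v = 0.6
gamma = 1.0 / (1.0 - v ** 2 / c ** 2) ** 0.5

def boost(t, x):
    # Lorentz boost along +x with speed v
    return gamma * (t - v * x / c ** 2), gamma * (x - v * t)

# two events on the light-speed worldline, at lambda = 1 and lambda = 2
t1p, x1p = boost(1.0, c * 1.0)
t2p, x2p = boost(2.0, c * 2.0)
speed = (x2p - x1p) / (t2p - t1p)
print(speed)   # equal to c, as the derivation above predicts
```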
The spatial separation between \((x,t)\) and \((x+dx,t)\) is \(dx\). The spatial separation between these points in the prime frame is \(\gamma dx\), as one can see from the transformation rule. How can length depend on reference frame? Key to resolving this apparent paradox is the fact that in the primed frame the two events are not simultaneous. We won't go through sorting out these apparent paradoxes here. We will, however, introduce a quantity that, unlike spatial length, is invariant under Lorentz transformations. For Cartesian spatial coordinates, the square of the invariant distance between event \((t,x,y,z)\) and event \((t+dt,x+dx, y+dy, z+dz)\) is given by \[ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2. \label{eqn:invdist}\] This quantity has the following two-part physical interpretation: For \(ds^2 > 0\), \(\sqrt{ds^2}\) is the length of a ruler that connects the two events and is at rest in the frame in which the two events are simultaneous. For \(ds^2 < 0\), \(\sqrt{-ds^2}\) is the time elapsed on a clock that moves between the two events with no acceleration. Why is this quantity invariant under boosts? That's a deep question, and I'm not sure we have the fullest possible answer yet sorted out. We do know that the Maxwell equations are a synthesis from experiments, their form is invariant under a Lorentz transformation, and the Lorentz transformation preserves the invariant distance. Box \(\PageIndex{2}\) Exercise 4.2.1: Show that the invariant distance is indeed invariant under a Lorentz transformation. For specificity, take it to be the transformation appropriate for a boost in the \( +x \) direction with speed \(v \). For simplicity, take your two coordinate systems to be coincident at their origins (i.e. \(t=x=y=z=0\) is the same point as \(t'=x'=y'=z'=0\) ), use the origin as one point, and \(t=dt, x=dx, y=dy, z = dz\) as the other. 
Answer So we start with \(ds^2 = -c^2dt^2 + dx^2 + dy^2 + dz^2\), where, by the inverse Lorentz transformation, we have \[\begin{equation*} \begin{aligned} dt & = \gamma (dt'+vdx'/c^2) = (\gamma/c) (cdt'+vdx'/c), \\ dx & = \gamma (dx' + vdt'), \\ dy & = dy',\; {\rm and} \\ dz & = dz' \end{aligned} \end{equation*}\] Therefore, \[\begin{equation*} \begin{aligned} ds^2 & = -\gamma^2\Big(cdt' + \frac{vdx'}{c}\Big)^2 + \gamma^2(dx' + vdt')^2 + dy'^2 + dz'^2 \\ \\ & = -\gamma^2(c^2 - v^2)dt'^2 + \gamma^2\Big(1 - \frac{v^2}{c^2}\Big)dx'^2 + dy'^2 + dz'^2 \\ \\ & = -\gamma^2 c^2\Big(1-\frac{v^2}{c^2}\Big)dt'^2 + \gamma^2\Big(1-\frac{v^2}{c^2}\Big)dx'^2 + dy'^2 + dz'^2 \\ \\ & = -c^2dt'^2 + dx'^2 + dy'^2 + dz'^2 =ds'^2 \end{aligned} \end{equation*}\] Rather than the Lorentz transformation itself, the key thing to take away from this chapter is the definition of the invariant distance. We will be using it for the rest of the course, generalized to spacetimes with "curvature." Before doing so, we give some exercises here in which you get to make use of the invariant distance to solve problems in the more familiar context of a flat spacetime, the so-called Minkowski space you are familiar with from special relativity. A Minkowski space is simply a spacetime that can be labeled with \(t,x,y,z\) such that the invariant distance is given by Eq. \ref{eqn:invdist}. In Minkowski space, as one of the homework problems asks you to show, a finite (as opposed to infinitesimal) version of the invariant distance equation is also true: \[(\Delta s)^2 = -c^2 (\Delta t)^2 + (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2\] for trajectories that are straight lines, with \(\Delta s \equiv \int d\lambda \frac{ds}{d\lambda}\) also invariant under Lorentz transformations. Box \(\PageIndex{3}\) Exercise 4.3.1: Calculate the time that elapses on a clock traveling in a straight line at speed \(v\) from \(x_1,t_1\) to \(x_2, t_2\).
Do so in the following manner: 1) Draw the clock's path in two coordinate systems: \(x\) vs. \(t\) and \(x'\) vs. \(t'\) where the prime system is the one where the clock is at rest. 2) Calculate \((\Delta s)^2\) along the path from point 1 to point 2 in both coordinate systems, set them equal, and solve for \(t_2'-t_1'\). Note that here we have used these facts: 1) the time that elapses on the clock will be equal to the difference in time coordinates in the frame in which it is at rest, and 2) the invariant distance is invariant (the same in both coordinate systems). We could also have just calculated \( (\Delta s)^2\) in the unprimed frame and used our physical interpretation of \( \sqrt{-(\Delta s)^2} \) (for \( (\Delta s)^2 < 0\) ) as the time that elapses on a clock traveling from point 1 to point 2. Answer For the first coordinate system we have \[\begin{equation*} \begin{aligned} (\Delta s)^2 = -c^2(t_2 - t_1)^2 + (x_2 - x_1)^2 = -c^2(\Delta t)^2 + (\Delta x)^2 \end{aligned} \end{equation*}\] For the prime system we have \[\begin{equation*} \begin{aligned} (\Delta s')^2 = -c^2(t'_2 - t'_1)^2 \end{aligned} \end{equation*}\] Now set them equal and solve for \(t'_2 - t'_1\) \[\begin{equation*} \begin{aligned} -c^2(t'_2 - t'_1)^2 & = -c^2(\Delta t)^2 + (\Delta x)^2 \\ \\ (t'_2 - t'_1)^2 & = (\Delta t)^2 - \frac{(\Delta x)^2}{c^2} \\ \\ {\rm note} \; & {\rm that,} \; \frac{(\Delta x)^2}{(\Delta t)^2} = v^2 \\ \\ (t'_2 - t'_1)^2 & = (\Delta t)^2\Big(1 - \frac{v^2}{c^2}\Big) \\ \\ t'_2 - t'_1 & = \gamma^{-1}\Delta t \end{aligned} \end{equation*}\] HOMEWORK Problems Problem \(\PageIndex{1}\) Show, by solving for \(x\) and \(t\) that the inverse Lorentz transformation is the same as the forward transformation but with \(v \rightarrow -v\). Explain what this has to do with the principle of relativity. Problem \(\PageIndex{2}\) Show that for straight paths in spacetime, that \((\Delta s)^2 = -c^2 (\Delta t)^2 + (\Delta x)^2\) follows from \(ds^2 = -c^2 dt^2 + dx^2\). 
Hint: all straight paths in spacetime (at least the flat spacetime of special relativity we are studying now) can be parametrized via: \(t-t_0=\lambda, x=x_0 +v\lambda\). Problem \(\PageIndex{3}\) Events A and B occur 10 meters apart in space and 100 ns apart in time in frame 1. If they occur 95 ns apart in frame 2, what must their spatial separation be in frame 2? Problem \(\PageIndex{4}\) Derive the phenomenon of time dilation. Consider the path taken by a clock from point 1 to point 2 in two different coordinate systems, a primed one in which the clock is at rest, and an unprimed one in which the clock is moving at constant speed \(v\). Use the invariance of the invariant distance to show that the time elapsed on the clock is less than \(t_2 - t_1\). [Yes, this is basically the same as one of the exercises.]
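As a final spot-check of the algebra in Exercise 4.2.1, the invariance of \(ds^2\) can be verified numerically; the sample displacements and boost speed below are arbitrary choices (units with \(c = 1\)):

```python
# Spot-check: the interval ds^2 is unchanged by a boost along +x.
c, v = 1.0, 0.8
gamma = 1.0 / (1.0 - v ** 2 / c ** 2) ** 0.5

dt, dx, dy, dz = 2.0, 1.3, 0.7, -0.4
dtp = gamma * (dt - v * dx / c ** 2)   # boosted time separation
dxp = gamma * (dx - v * dt)            # boosted spatial separation
dyp, dzp = dy, dz                      # transverse directions unchanged

ds2 = -c ** 2 * dt ** 2 + dx ** 2 + dy ** 2 + dz ** 2
ds2p = -c ** 2 * dtp ** 2 + dxp ** 2 + dyp ** 2 + dzp ** 2
print(ds2, ds2p)   # equal up to rounding
```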
When the measures of the angles of a triangle are placed in order, the difference between the middle angle and smallest angle is equal to the difference between the middle angle and largest angle. If one of the angles of the triangle has measure 23, then what is the measure in degrees of the largest angle of the triangle? \(\text{Let $\alpha+\beta+\gamma=180^\circ$ with $\alpha < \beta < \gamma$ } \) \(\begin{array}{|rcll|} \hline \gamma-\beta &=& \beta-\alpha \\ \mathbf{\gamma} &=& \mathbf{2\beta-\alpha} \quad | \quad \alpha=180^\circ-\beta-\gamma \\ \gamma &=& 2\beta-(180^\circ-\beta-\gamma) \\ \gamma &=& 2\beta-180^\circ+\beta+\gamma \quad | \quad -\gamma \\ 0 &=& 2\beta-180^\circ+\beta \\ 180^\circ &=& 3\beta \\ \beta &=& \dfrac{180^\circ}{3} \\ \mathbf{\beta} &=& \mathbf{60^\circ} \\ \hline \end{array}\) Since \(\beta=60^\circ\) and \(23^\circ<60^\circ\), the given angle must be the smallest one, so \(\alpha=23^\circ\). \(\begin{array}{|rcll|} \hline \gamma &=& 2\beta-\alpha \quad | \quad \alpha = 23^\circ,\ \beta=60^\circ \\ \gamma &=& 2\cdot 60^\circ-23^\circ \\ \gamma &=& 120^\circ-23^\circ \\ \mathbf{\gamma} &=& \mathbf{97^\circ} \\ \hline \end{array}\) The measure in degrees of the largest angle of the triangle is \(\mathbf{97^\circ}\)
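A quick numeric confirmation of the result (not part of the original post):

```python
# Check: 23, 60, 97 sum to 180 degrees and the gaps on either side of
# the middle angle are equal, as the problem requires.
a, b, g = 23, 60, 97
print(a + b + g == 180, b - a == g - b)   # True True
```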
Can anyone state the difference between frequency response and impulse response in simple English? The impulse response and frequency response are two attributes that are useful for characterizing linear time-invariant (LTI) systems. They provide two different ways of calculating what an LTI system's output will be for a given input signal. A continuous-time LTI system is usually illustrated like this: In general, the system $H$ maps its input signal $x(t)$ to a corresponding output signal $y(t)$. There are many types of LTI systems that can apply very different transformations to the signals that pass through them. But, they all share two key characteristics: The system is linear, so it obeys the principle of superposition. Stated simply, if you linearly combine two signals and input them to the system, the output is the same linear combination of what the outputs would have been had the signals been passed through individually. That is, if $x_1(t)$ maps to an output of $y_1(t)$ and $x_2(t)$ maps to an output of $y_2(t)$, then for all values of $a_1$ and $a_2$, $$ H\{a_1 x_1(t) + a_2 x_2(t)\} = a_1 y_1(t) + a_2 y_2(t) $$ The system is time-invariant, so its characteristics do not change with time. If you add a delay to the input signal, then you simply add the same delay to the output. For an input signal $x(t)$ that maps to an output signal $y(t)$, then for all values of $\tau$, $$ H\{x(t - \tau)\} = y(t - \tau) $$ Discrete-time LTI systems have the same properties; the notation is different because of the discrete-versus-continuous difference, but they are a lot alike. These characteristics allow the operation of the system to be straightforwardly characterized using its impulse and frequency responses. They provide two perspectives on the system that can be used in different contexts. Impulse Response: The impulse that is referred to in the term impulse response is generally a short-duration time-domain signal.
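To make the superposition property concrete, here is a minimal numerical check in Python. The first-difference system used here ($y[n] = x[n] - x[n-1]$) is just a hypothetical LTI example chosen for illustration; it is not from the answer above.

```python
def system(x):
    """A simple LTI system: first difference, y[n] = x[n] - x[n-1]."""
    return [x[n] - (x[n - 1] if n > 0 else 0.0) for n in range((len(x)))]

x1 = [1.0, 2.0, 3.0]
x2 = [0.0, 1.0, 0.0]
a1, a2 = 2.0, -1.0

# Pass the linear combination through the system...
combined = system([a1 * u + a2 * v for u, v in zip(x1, x2)])
# ...and compare with the same combination of the individual outputs.
separate = [a1 * u + a2 * v for u, v in zip(system(x1), system(x2))]

print(combined == separate)  # True: superposition holds
```

The same kind of check (delaying the input and comparing with the delayed output) verifies time invariance.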
For continuous-time systems, this is the Dirac delta function $\delta(t)$, while for discrete-time systems, the Kronecker delta function $\delta[n]$ is typically used. A system's impulse response (often annotated as $h(t)$ for continuous-time systems or $h[n]$ for discrete-time systems) is defined as the output signal that results when an impulse is applied to the system input. Why is this useful? It allows us to predict what the system's output will look like in the time domain. Remember the linearity and time-invariance properties mentioned above? If we can decompose the system's input signal into a sum of a bunch of components, then the output is equal to the sum of the system outputs for each of those components. What if we could decompose our input signal into a sum of scaled and time-shifted impulses? Then, the output would be equal to the sum of copies of the impulse response, scaled and time-shifted in the same way. For discrete-time systems, this is possible, because you can write any signal $x[n]$ as a sum of scaled and time-shifted Kronecker delta functions: $$ x[n] = \sum_{k=-\infty}^{\infty} x[k] \delta[n - k] $$ Each term in the sum is an impulse scaled by the value of $x[k]$ at that time instant. What would we get if we passed $x[n]$ through an LTI system to yield $y[n]$? Simple: each scaled and time-delayed impulse that we put in yields a scaled and time-delayed copy of the impulse response at the output. That is: $$ y[n] = \sum_{k=-\infty}^{\infty} x[k] h[n-k] $$ where $h[n]$ is the system's impulse response. The above equation is the convolution sum for discrete-time LTI systems. That is, for any signal $x[n]$ that is input to an LTI system, the system's output $y[n]$ is equal to the discrete convolution of the input signal and the system's impulse response.
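The convolution sum can be written out directly in code. This is only a sketch for finite-length signals (in practice you would use an FFT-based library routine for long signals); the 3-tap moving-average filter is an illustrative choice:

```python
def convolve(x, h):
    """Discrete convolution: y[n] = sum over k of x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

# Feeding in a unit impulse returns the impulse response itself,
# which is exactly the definition of h[n].
h = [1/3, 1/3, 1/3]                  # 3-tap moving-average filter
print(convolve([1.0, 0.0, 0.0], h))  # [1/3, 1/3, 1/3, 0.0, 0.0]
```

Note that the output length is `len(x) + len(h) - 1`, which is why the impulse response comes back padded with zeros.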
For continuous-time systems, the above straightforward decomposition isn't possible in a strict mathematical sense (the Dirac delta has zero width and infinite height), but at an engineering level, it's an approximate, intuitive way of looking at the problem. A similar convolution theorem holds for these systems: $$ y(t) = \int_{-\infty}^{\infty} x(\tau) h(t - \tau) d\tau $$ where, again, $h(t)$ is the system's impulse response. There are a number of ways of deriving this relationship (I think you could make a similar argument as above by claiming that Dirac delta functions at all time shifts make up an orthogonal basis for the $L^2$ Hilbert space, noting that you can use the delta function's sifting property to project any function in $L^2$ onto that basis, therefore allowing you to express system outputs in terms of the outputs associated with the basis (i.e. time-shifted impulse responses), but I'm not a licensed mathematician, so I'll leave that aside). One method that relies only upon the aforementioned LTI system properties is shown here. In summary: For both discrete- and continuous-time systems, the impulse response is useful because it allows us to calculate the output of these systems for any input signal; the output is simply the input signal convolved with the impulse response function. Frequency response: An LTI system's frequency response provides a similar function: it allows you to calculate the effect that a system will have on an input signal, except those effects are illustrated in the frequency domain. 
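The continuous-time convolution integral can at least be checked numerically. The sketch below uses hypothetical exponential signals $x(t) = e^{-t}u(t)$ and $h(t) = e^{-2t}u(t)$ (chosen only because their convolution has the closed form $e^{-t} - e^{-2t}$) and approximates the integral with a Riemann sum:

```python
import math

def x(t):
    return math.exp(-t) if t >= 0 else 0.0

def h(t):
    return math.exp(-2 * t) if t >= 0 else 0.0

def convolve_at(t, dt=1e-4):
    """Riemann-sum approximation of y(t) = integral of x(tau) h(t - tau) dtau."""
    n = int(t / dt)
    return sum(x(k * dt) * h(t - k * dt) for k in range(n)) * dt

t = 1.0
approx = convolve_at(t)
exact = math.exp(-t) - math.exp(-2 * t)  # closed form for this particular pair
print(abs(approx - exact) < 1e-3)        # True
```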
Recall the definition of the Fourier transform: $$ X(f) = \int_{-\infty}^{\infty} x(t) e^{-j 2 \pi ft} dt $$ More importantly for the sake of this illustration, look at its inverse: $$ x(t) = \int_{-\infty}^{\infty} X(f) e^{j 2 \pi ft} df $$ In essence, this relation tells us that any time-domain signal $x(t)$ can be broken up into a linear combination of many complex exponential functions at varying frequencies (there is an analogous relationship for discrete-time signals called the discrete-time Fourier transform; I only treat the continuous-time case below for simplicity). For a time-domain signal $x(t)$, the Fourier transform yields a corresponding function $X(f)$ that specifies, for each frequency $f$, the scaling factor to apply to the complex exponential at frequency $f$ in the aforementioned linear combination. These scaling factors are, in general, complex numbers. One way of looking at complex numbers is in amplitude/phase format, that is: $$ X(f) = A(f) e^{j \phi(f)} $$ Looking at it this way, then, $x(t)$ can be written as a linear combination of many complex exponential functions, each scaled in amplitude by the function $A(f)$ and shifted in phase by the function $\phi(f)$. This lines up well with the LTI system properties that we discussed previously; if we can decompose our input signal $x(t)$ into a linear combination of a bunch of complex exponential functions, then we can write the output of the system as the same linear combination of the system response to those complex exponential functions. Here's where it gets better: exponential functions are the eigenfunctions of linear time-invariant systems. The idea is, similar to eigenvectors in linear algebra, if you put an exponential function into an LTI system, you get the same exponential function out, scaled by a (generally complex) value. This has the effect of changing the amplitude and phase of the exponential function that you put in. 
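The eigenfunction property can be verified numerically: push a complex exponential through a filter and check that every output sample is the corresponding input sample scaled by a single complex number, the transform of $h$ evaluated at that frequency. The two-tap averager below is a hypothetical filter chosen only for illustration:

```python
import cmath

h = [0.5, 0.5]   # impulse response of a two-tap averager (illustrative choice)
f = 0.1          # normalized frequency in cycles/sample

# Complex exponential input x[n] = e^{j 2 pi f n}
x = [cmath.exp(2j * cmath.pi * f * n) for n in range(64)]

# Output by direct convolution (skip n = 0, where x[n-1] is unavailable)
y = [h[0] * x[n] + h[1] * x[n - 1] for n in range(1, len(x))]

# The eigenvalue: the discrete-time Fourier transform of h at frequency f
H = sum(h[k] * cmath.exp(-2j * cmath.pi * f * k) for k in range(len(h)))

# Every output sample equals H times the corresponding input sample.
print(all(abs(y[n - 1] - H * x[n]) < 1e-12 for n in range(1, len(x))))  # True
```

Writing `H` in polar form, `abs(H)` is the amplitude scaling $A(f)$ and `cmath.phase(H)` is the phase shift $\phi(f)$ at that frequency.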
This is immensely useful when combined with the Fourier-transform-based decomposition discussed above. As we said before, we can write any signal $x(t)$ as a linear combination of many complex exponential functions at varying frequencies. If we pass $x(t)$ into an LTI system, then (because those exponentials are eigenfunctions of the system), the output contains complex exponentials at the same frequencies, only scaled in amplitude and shifted in phase. This effect on the exponentials' amplitudes and phases, as a function of frequency, is the system's frequency response. That is, for an input signal with Fourier transform $X(f)$ passed into system $H$ to yield an output with a Fourier transform $Y(f)$, $$ Y(f) = H(f) X(f) = A(f) e^{j \phi(f)} X(f) $$ In summary: So, if we know a system's frequency response $H(f)$ and the Fourier transform of the signal that we put into it $X(f)$, then it is straightforward to calculate the Fourier transform of the system's output; it is merely the product of the frequency response and the input signal's transform. For each complex exponential frequency that is present in the spectrum $X(f)$, the system has the effect of scaling that exponential in amplitude by $A(f)$ and shifting the exponential in phase by $\phi(f)$ radians. Bringing them together: An LTI system's impulse response and frequency response are intimately related. The frequency response is simply the Fourier transform of the system's impulse response (to see why this relation holds, see the answers to this other question). So, for a continuous-time system: $$ H(f) = \int_{-\infty}^{\infty} h(t) e^{-j 2 \pi ft} dt $$ So, given either a system's impulse response or its frequency response, you can calculate the other. Either one is sufficient to fully characterize the behavior of the system; the impulse response is useful when operating in the time domain and the frequency response is useful when analyzing behavior in the frequency domain.
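The relation $Y(f) = H(f)X(f)$ can be checked with a plain DFT. This is only a sketch (real code would use an FFT library); the filter and input signal are arbitrary illustrative choices. Zero-padding both to the length of the linear convolution makes the time-domain and frequency-domain routes agree:

```python
import cmath

def dft(x, N):
    """N-point DFT, zero-padding x as needed."""
    x = list(x) + [0.0] * (N - len(x))
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

h = [0.5, 0.5]            # impulse response of a two-tap averager
x = [1.0, 2.0, 3.0, 4.0]  # arbitrary input signal
N = len(x) + len(h) - 1   # length of the linear convolution

# Time domain: y is the convolution of x with h.
y = [sum(x[k] * h[n - k] for k in range(len(x)) if 0 <= n - k < len(h))
     for n in range(N)]

# Frequency domain: Y(f) = H(f) X(f), bin by bin.
Y = [Hk * Xk for Hk, Xk in zip(dft(h, N), dft(x, N))]

# The DFT of the convolved signal matches the product of the DFTs.
print(all(abs(a - b) < 1e-9 for a, b in zip(dft(y, N), Y)))  # True
```

Here `dft(h, N)` plays the role of the frequency response $H(f)$ sampled at $N$ frequencies, illustrating that it is just the transform of the impulse response.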
Bang on something sharply once and plot how it responds in the time domain (as with an oscilloscope or pen plotter). That will be close to the impulse response. Get a tone generator and vibrate something with different frequencies. Some resonant frequencies it will amplify. Others it may not respond to at all. Plot the response size and phase versus the input frequency. That will be close to the frequency response. For certain common classes of systems (where the system doesn't much change over time, and any non-linearity is small enough to ignore for the purpose at hand), the two responses are related, and a Laplace or Fourier transform might be applicable to approximate the relationship. The impulse response is the response of a system to a single pulse of infinitely small duration and unit area (a Dirac pulse). The frequency response shows how much each frequency is attenuated or amplified by the system. The frequency response of a system is the impulse response transformed to the frequency domain. If you have an impulse response, you can use the FFT to find the frequency response, and you can use the inverse FFT to go from a frequency response to an impulse response. In short, we have two kinds of basic responses: time responses and frequency responses. Time responses test how the system works with a momentary disturbance, while frequency responses test it with a continuous disturbance. Time responses include things such as the step response, ramp response and impulse response. Frequency responses include sinusoidal responses. Aalto University has some material for the course Mat-2.4129 freely available here; the Matlab files are probably the most relevant part, because most of the material is in Finnish. If you are more interested, you could check the introduction videos below. I found them helpful myself. I have only very elementary knowledge about LTI problems, so I will cover them below -- but there are surely many more kinds of problems!
Responses with linear time-invariant problems With LTI (linear time-invariant) problems, the input and output have the same form: a sinusoidal input yields a sinusoidal output, and similarly a step input results in a step output. If you don't have an LTI system -- say you have feedback, or your control/noise and input correlate -- then all of the above assertions may be wrong. With LTI systems, you will get two types of changes, phase shift and amplitude change, but the frequency stays the same. If you break some assumptions, say the non-correlation assumption, then the input and output may have very different forms. If you need to investigate whether a system is LTI or not, you could use tools such as the Wiener-Hopf equation and correlation analysis. The Wiener-Hopf equation is used with noisy systems. It is essential to validate results and verify premises; otherwise it is easy to make mistakes with the different responses. More about determining the impulse response with a noisy system here. References Wikipedia article about LTI here