Recommended Level: Beginner

Introduction

In some technical articles, converter operation is studied under ideal conditions, i.e. with no interruptions, disturbances, or errors that cause the operation to deviate from its normal condition. In practice such deviations are inevitable: they may be due to variations in circuit parameters such as source and load voltages, switching times, and circuit components like inductors or capacitors. This behavior is known as the dynamic behavior of the system, and it must be corrected through a control mechanism to obtain the proper output. This requires analysis and design of the controller via a modelling approach, which gives us a wide spectrum of tools for analyzing the various disturbances. A control system regulates the circuit parameters by measuring the disturbances through an open-loop or closed-loop arrangement. An open-loop controller can minimize anticipated disturbances through a feed-forward path, as shown. However, an open-loop system alone cannot meet the needs of dynamic control for conversion. A closed-loop system can be used to measure the present behavior of the system and to take corrective action through feedback. The general configuration of the control system is shown below in Fig. 1.

Figure 1. General Configuration of a Control System

To design the components for the required operating modes, both the static and the dynamic behavior of the converter must be known. Switching converters are time-variant, non-linear, discrete systems, so the system must be made less sensitive to load or line disturbances. The closed-loop controller sets the output by observing deviations of the input and changing a parameter such as the firing angle. A closed loop has the advantage of also controlling the transients that appear during switching of the output.
A feed-forward path alone cannot set the desired value of the output, and it produces transients outside the allowed limits. One solution for maintaining the output voltage or current is to use a PI or PID controller. With respect to steady-state errors, the PID controller is faster than the PI controller because it also acts on the derivative of the error. In this technical article, techniques for obtaining suitable models of non-linear and linear converters are illustrated. For simplicity, converter models will be linearized for continuous or discontinuous conduction mode; thus, special attention is given to linear time-invariant (LTI) models. Linear models are used for creating equivalent circuits.

The Basic AC Modelling Approach

The proper model must be selected for each stage. For example, a closed-loop controller is not a good choice for analyzing the open-loop performance of the circuit at a particular stage. There are different approaches to modelling a converter. We start our discussion with the circuit-averaging approach, which describes the average behavior of converters for analysis, but it is not always the best method. Circuit or averaging methods are not always appropriate and convenient for modelling controllers, so we will also look at the state-space modelling approach.

Dynamic Models by Circuit Averaging

A simple circuit diagram is used for high-frequency converters. We will construct the non-linear model and convert it into a linear circuit that describes the small-signal performance of the circuit. The switching operation of a power converter produces currents and voltages at the pulse frequency in the components, but it is difficult to account for these pulse-frequency components of the voltages and currents when studying stability.
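To make the PI control law mentioned above concrete, here is a minimal discrete-time sketch. The gains, time step, and the first-order plant standing in for the averaged converter are all illustrative assumptions, not values from this article.

```python
# Minimal discrete PI controller sketch; kp, ki, dt and the toy plant
# are illustrative assumptions, not values from this article.
def make_pi_controller(kp, ki, dt):
    state = {"integral": 0.0}

    def control(error):
        state["integral"] += error * dt          # accumulate the error
        return kp * error + ki * state["integral"]

    return control

def simulate(v_ref=5.0, tau=1e-3, dt=1e-5, steps=5000):
    """Drive a toy first-order plant v' = (u - v)/tau to v_ref."""
    pi = make_pi_controller(kp=2.0, ki=500.0, dt=dt)
    v = 0.0
    for _ in range(steps):
        u = pi(v_ref - v)
        v += dt * (u - v) / tau
    return v
```

The integral term is what removes the steady-state error: the loop only settles once the error is zero, which is the property the article appeals to.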
For the dynamic analysis of the converter, rather than following the detailed switching pattern or the pulsation at the pulse frequency, another method is required: dynamic averaging. In many power electronics applications, the average values of current and voltage are of more interest than the instantaneous values, provided the harmonics and ripple are small enough to be ignored. Thus, we can take the average of a variable such as voltage or current using the circuit approach. The average at any time is taken over the interval T, the shortest repeating switching interval of the power circuit: $$\overline{x(t)}= \frac{1}{T}\int_{t_a-T}^{t_a} x(t)\,dt$$ Because the average is taken over the length T, $$\overline{x(t)}$$ is smoother than x(t). If x(t) has oscillations at a frequency $$f_{c}=\frac{c}{T}$$, these frequency components are nullified by the dynamic averaging. The averaged variables also satisfy the fundamental circuit laws, i.e. KCL and KVL. Thus we can write $$\overline{V_{R}(t)} = R\overline{I_{R}(t)}$$ $$\overline{V_{L}(t)} = L\frac{d}{dt}\overline{I_{L}(t)}$$ The order in which averaging and differentiation are applied is interchangeable (either can be applied first). Similarly, for a capacitor, we have $$C\frac{d}{dt}\overline{V_{C}(t)}=\overline{I_{C}(t)}$$ With these fundamental equations we can create an averaged circuit model: all instantaneous values are replaced with average values without altering the LTI components. The linear parts of the circuit are not altered, as they impose the same constraints on the nominal as well as the deviated variables. However, time-varying and non-linear components are replaced with an equivalent-circuit representation carrying the appropriate average voltage or average current.
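The running average defined above is easy to check numerically. In this sketch (the switching period, duty ratio, and sampling grid are arbitrary choices), averaging an ideal 0/1 switching function over one full period T recovers the duty ratio and removes the switching-frequency ripple:

```python
import numpy as np

# Averaging a PWM waveform over one switching period T recovers the
# duty ratio D. T, D, and the grid below are illustrative choices.
T = 1e-5                                  # switching period
D = 0.4                                   # duty ratio
dt = T / 1000                             # sampling step
t = np.arange(0, 50 * T, dt)
x = ((t % T) < D * T).astype(float)       # ideal switching function x(t)

n = int(T / dt)                           # samples per period
xbar = np.convolve(x, np.ones(n) / n, mode="valid")  # window of length T

# Every full-period window averages to ~D, so xbar is essentially flat.
print(float(xbar.min()), float(xbar.max()))
```

This is exactly the sense in which the averaged waveform is "smoother": the switching-frequency component and its harmonics integrate to zero over each period T.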
For instance, a BJT switch is replaced with its small-signal model as shown below:

Figure 2. Non-Linear Model and Small-Signal Model for a BJT

Figure 3. General Switch Circuit

Here, x(t) is a switching function that modulates the source voltage; it depends on the duty ratio of the switches. The value of x(t) is set between finite values, such as 1 and 0 for a buck converter or 1, 0, −1 for a PWM inverter. The average circuit for this basic switched circuit is shown in Fig. 4.

Figure 4. Average Circuit for the Above Circuit

The function $$\overline{x(t)}$$ is the averaged switching function, also called the continuous duty ratio. $$\overline{x(t)}$$ depends entirely on the control variables and on the average current $$\overline{I(t)}$$. It is generated with circuits using a comparator, latch, clock, etc.: the switching function is taken at the output of the latch; the latch is connected to the clock and to the output of the comparator; the controller output, a modulated continuous average-value duty-cycle wave, is connected to the comparator; and the clock can be a sawtooth waveform applied to the positive terminal of the comparator. The average value of the voltage is $$\overline{x(t)} V_{S}$$, so $$\overline{x(t)}$$ varies inversely with the input voltage. Disturbances in the output due to variation of the input voltage are therefore restricted in the average circuit; this feedforward control of the input voltage reduces the effect of transient and steady-state errors on the output. $$\overline{x(t)}$$ is time-varying and can even be negative, but if x(t) has constant period T without any deviation in switching frequency, then $$\overline{x(t)}$$ is constant. Replacing the switch with the linear small-signal model gives us the variables that control the performance of the model. Fig. 5 shows the standard circuit of a switch, which can be replaced by the average circuit for continuous conduction in Fig. 6.
The average circuit can also be represented using an ideal transformer, as shown in Fig. 7.

Figure 5. Standard Switch Circuit

Assume there is only small ripple; then the load current and capacitor voltage are well approximated by their average values. Also assume that the load voltage, source voltage, and capacitor voltage do not vary considerably over an interval of length T. If we take i_X(t) as nearly constant, due to the small-ripple approximation and the slow variation of the average value over the period T, then for the interval t − T ≤ τ ≤ t, $$\overline{i_{Y}(t)}=\overline{x(t)i_{X}(t)}$$ With the continuous duty ratio $$\overline{x(t)}$$ = D (say), $$\overline{i_{Y}(t)}=D \overline{i_{X}(t)}$$ and $$\overline{v_{XZ}}=D\overline{v_{YZ}}$$ $$\overline{V_{L}(t)}=\frac{1}{T}\int_{t_a-T}^{t_a}V_{L}(t)\,dt=D\overline{V_{XZ}(t)} + {D}' \overline{V_{YX}(t)}$$ where $${D}'=1-D$$ $$\Rightarrow L\frac{d}{dt}\overline{i_{X}(t)}=D\overline{V_{XZ}(t)}+{D}'\overline{V_{YX}(t)}$$ Similarly, for the capacitor (if the load R is connected on the xy side): $$C\frac{d}{dt}\overline{V_{C}(t)} = -{D}'\overline{I(t)}-\frac{\overline{V(t)}}{R}$$ The inductor volt-second balance condition and the capacitor charge balance condition no longer hold for the averaged variables, though they remain true for the nominal values. Moreover, these equations imply that separate voltage and current sources exist to represent the deviations.

Figure 6. Average Circuit for a Standard Switch Under Continuous Conduction Mode

Figure 7. Average Circuit for a Standard Switch Using an Ideal Transformer

We can further simplify this circuit for linearization. Consider that the circuit is initially in steady state, so that the average and instantaneous values are equal: $$\overline{V_{S}(t)}=V_{n}(t)$$, where V_n(t) is the nominal operating voltage. Now suppose there is a small deviation from the nominal condition. Then every voltage in the non-linear circuit is replaced with two sets of voltage or current sources to account for the deviation.
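To see how averaged equations of this kind are used, the following sketch integrates the averaged model of a buck converter in continuous conduction, a standard special case of the relations above. The component values, and the use of plain Euler integration, are illustrative assumptions:

```python
# Averaged buck converter in continuous conduction:
#   L d(i)/dt = D*Vs - v,   C d(v)/dt = i - v/R
# Component values and the Euler scheme are illustrative assumptions.
L, C, R = 100e-6, 100e-6, 10.0     # inductor, capacitor, load
Vs, D = 12.0, 0.5                  # source voltage and duty ratio
dt, steps = 1e-7, 400_000          # 40 ms of simulated time

i = v = 0.0                        # averaged inductor current and output voltage
for _ in range(steps):
    di = (D * Vs - v) / L
    dv = (i - v / R) / C
    i += dt * di
    v += dt * dv

print(round(v, 2))                 # settles near D*Vs = 6 V
```

The fixed point of these equations is exactly the familiar DC result v = D·Vs, which is one quick sanity check that the averaged model is consistent with the steady-state analysis.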
The system imposes constraints on the original variables as well as on the deviations. Under the small-signal assumption we now have, on the source side, $$|\tilde{v}_{S}(t)|\ll V_{n},\quad |\tilde{d}(t)|\ll D_{n},\quad |\tilde{i}_{S}(t)|\ll I_{n}$$ and on the load side $$|\tilde{v}(t)|\ll V,\quad |\tilde{i}(t)|\ll I$$ The linear circuit configuration is then as shown in Fig. 8. Let the deviation in the continuous duty cycle be $$\tilde{d}(t)$$ and let D_n be the continuous duty cycle at the nominal operating point. Thus, $$D(t)=D_{n}+\tilde{d}(t)$$ $$\Rightarrow {D}'(t)={D}'_{n}-\tilde{d}(t)$$ where $${D}'(t)=1-D(t) \;\text{and}\; {D}'_{n}=1-D_{n}$$ Also assume the variation in the input voltage is $$\tilde{v}_{S}(t)$$; hence $$V_{S}(t)=V_{n}(t)+\tilde{v}_{S}(t)$$ where V_n(t) is the nominal operating voltage without any deviation. Using these equations (including the deviations) in the inductor and capacitor relations yields non-linear terms in the model. These equations are expanded using a Taylor series; retaining only the first-order terms gives a linear model that also governs the small deviations.

Figure 8. Linear Average Model for the Standard Switch

Another representation of the linear model of the switch, using an ideal transformer, is shown in Fig. 9.

Figure 9. Linear Model for the Switch Using an Ideal Transformer

Fig. 10 shows the configurations for three different switch-connection patterns. These linear models are valid for both AC and DC.

Figure 10. Different Switch Configurations with their Linear-Model Conversions

The linear circuits shown below can also be designed for discontinuous conduction mode. In discontinuous conduction, when the load R is high enough, the inductor current reaches zero. If the input voltage is constant over a switching period, the inductor current rises linearly and then decays to zero. Once the current reaches zero, an RLC ringing circuit can form, which is approximated by a linear segment.
The different circuit patterns for the buck converter are shown below. The ringing circuit, which requires approximation for linearity, is shown in Fig. 11(b).

Figure 11. Different Circuit Diagrams for Discontinuous Conduction Mode

Results for Several Basic Converters

The linear circuit models and the switching converter circuits for the buck, boost, buck-boost, and flyback converters are presented in Fig. 12 to Fig. 19.

Figure 12. Switching Converter Circuit for the Buck Converter

Figure 13. Linear Circuit Model for the Buck Converter

Figure 14. Switching Converter Circuit for the Boost Converter

Figure 15. Linear Circuit Model for the Boost Converter

Figure 16. Switching Converter Circuit for the Buck-Boost Converter

Figure 17. Linear Circuit Model for the Buck-Boost Converter

Figure 18. Switching Circuit for the Flyback Converter

Let the resistance of switch S during the conduction period be R_ON and let $$n=\frac{N_{2}}{N_{1}}$$. Switch S and diode D conduct alternately.

Figure 19. Linear Circuit Model for the Flyback Converter

Disadvantage of this Method

A controller derived from the linear model does not necessarily provide satisfactory performance, especially for large deviations. Moreover, the model parameters vary with the operating point. Large disturbances are usually dealt with through refinement of, and alterations to, the basic LTI controller. Current-mode control is also used to overcome this drawback of the LTI controller.
L # 1 Show that

It is no good to try to stop knowledge from going forward. Ignorance is never better than knowledge - Enrico Fermi. Nothing is better than reading and gaining more and more knowledge - Stephen William Hawking.

Last edited by krassi_holmz (2006-03-09 02:44:53)

IPBLE: Increasing Performance By Lowering Expectations.

L # 2 If

Let log x = x', log y = y', log z = z'. Then: x' + y' + z' = 0. Rewriting in terms of x' gives:

Well done, krassi_holmz!

L # 3 If x²y³ = a and log(x/y) = b, then what is the value of (log x)/(log y)?

log a = 2 log x + 3 log y
b = log x − log y
log a + 3b = 5 log x
log a − 2b = 3 log y + 2 log y = 5 log y
log x / log y = (log a + 3b)/(log a − 2b)

Last edited by krassi_holmz (2006-03-10 20:06:29)

Very well done, krassi_holmz!
L # 4

You are not supposed to use a calculator or log tables for L # 4. Try again!

Last edited by JaneFairfax (2009-01-04 23:40:20)

No, I didn't. I remember.

You still used a calculator / log table in the past to get those figures (or someone else did and showed them to you). I say again: no calculators or log tables to be used (directly or indirectly) at all!!

Last edited by JaneFairfax (2009-01-06 00:30:04)

Hi ganesh, for L # 1: since log_b(a) = 1/log_a(b), we have 1/log_a(abc) + 1/log_b(abc) + 1/log_c(abc) = log_{abc}(a) + log_{abc}(b) + log_{abc}(c) = log_{abc}(abc) = 1. Best Regards, Riad Zaidan

Hi ganesh, for L # 2 I think the following proof is easier: assume log(x)/(b−c) = log(y)/(c−a) = log(z)/(a−b) = t. So log(x) = t(b−c), log(y) = t(c−a), log(z) = t(a−b). So log(x) + log(y) + log(z) = tb − tc + tc − ta + ta − tb = 0. So log(xyz) = 0, so xyz = 1. Q.E.D. Best Regards, Riad Zaidan

Gentlemen, thanks for the proofs. Regards.

$$\log_2(16) = \log_2 \left( \frac{64}{4} \right) = \log_2(64) - \log_2(4) = 6 - 2 = 4, \quad \log_2(\sqrt[3]{4}) = \frac{1}{3} \log_2(4) = \frac{2}{3}.$$
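The L # 3 result and the L # 1 identity above can be spot-checked numerically; the sample values below are arbitrary, not special choices:

```python
import math

# L # 3: if x^2 y^3 = a and log(x/y) = b, then log x / log y = (log a + 3b)/(log a - 2b).
x, y = 3.7, 1.9                           # arbitrary positive test values
a_val = x**2 * y**3
b_val = math.log10(x / y)
lhs = math.log10(x) / math.log10(y)
rhs = (math.log10(a_val) + 3 * b_val) / (math.log10(a_val) - 2 * b_val)
assert abs(lhs - rhs) < 1e-9

# L # 1: 1/log_a(abc) + 1/log_b(abc) + 1/log_c(abc) = 1.
a, b, c = 2.0, 3.0, 5.0                   # arbitrary bases > 1
s = sum(1 / math.log(a * b * c, base) for base in (a, b, c))
assert abs(s - 1.0) < 1e-9
```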
L # 4 I don't want a method that will rely on defining certain functions, taking derivatives, noting concavity, etc.

Change of base: Each side is positive, and multiplying by the positive denominator keeps the direction of the alleged inequality the same. On the right-hand side, the first factor is equal to a positive number less than 1, while the second factor is equal to a positive number greater than 1. These facts follow by inspection, combined with the nature of exponents/logarithms. Because (log A)B = B(log A) = log(A^B), I may turn this into: I need to show that

Then 1 (on the left-hand side) will be greater than the value on the right-hand side, and the truth of the original inequality will be established. I want to show

Raise a base of 3 to each side: Each side is positive, and I can square each side.

Then I want to show that when 2 is raised to a number equal to (or less than) 1.5, the result is less than 3. Each side is positive, and I can square each side.

Last edited by reconsideryouranswer (2011-05-27 20:05:01)

Signature line: I wish I had a more interesting signature line.

Hi reconsideryouranswer, this problem was posted by JaneFairfax. I think it would be appropriate for her to verify the solution.

Hi all, I saw this post today and saw the problems on logs. Well, they are not bad, they are good. But you can also try these problems here by me (credit: to a book): http://www.mathisfunforum.com/viewtopic … 93#p399193

Practice makes a man perfect.
There is no substitute for hard work. All of us do not have equal talents, but everybody has equal opportunities to build their talents. - APJ Abdul Kalam

JaneFairfax, here is a basic proof of L # 4. For all real a > 1, y = a^x is a strictly increasing function.

log_2 3 versus log_3 5
2·log_2 3 versus 2·log_3 5
log_2 9 versus log_3 25
2^3 = 8 < 9, so log_2 9 > 3.
3^3 = 27 > 25, so log_3 25 < 3.

So the left-hand side is greater than the right-hand side: log_2 9 > 3 > log_3 25, hence log_2 3 > log_3 5.
Averaged and Integral Characteristics These characteristics can be monitored in the course of the design process. It is possible to monitor minimum and maximum values as well. Example. As an example, a 6-layer metal-dielectric coating was designed. The corresponding design bar is shown at the bottom of the Evaluation window (Fig. 5). The light reflectance from the coating's front and back sides should appear orange and violet, respectively. At the same time, the solar transmittance of the coating is to be as large as possible. The color target and integral target are shown in Fig. 2 and Fig. 3, respectively. OptiLayer allows you to specify combined targets and optimize the design with respect to many criteria simultaneously. Integral weights can be specified separately. The total merit function MF takes the form: \[ MF^2=0.5\cdot MF_{color}^2+MF_{int}^2 \] Along with the different components of the merit function, OptiLayer allows you to display and monitor integral values. These values are calculated based on a spectral weight function \(W(\lambda)\) that can be chosen from the pop-up list. All required spectral weights are to be specified in advance through the integral target option (see Integral Target). As in Integral Target, there are two check boxes: \[ F=\frac{\int\limits_{\lambda_d}^{\lambda_u} W(\lambda)D(\lambda) S(\lambda) C(\lambda)\,d\lambda}{\int\limits_{\lambda_d}^{\lambda_u} W(\lambda) D(\lambda) S(\lambda)\,d\lambda},\] where \(\lambda_d\) and \(\lambda_u\) are the boundaries of the wavelength interval of interest, \(W(\lambda)\) is a given weight function, \(C(\lambda)\) is a spectral characteristic of the coating, and \(D(\lambda)\) and \(S(\lambda)\) are the spectral distributions of the detector and light source, respectively. You can monitor the current color coordinates, not necessarily the ones specified in the color target. You can specify reflectance, back-side reflectance, or transmittance; polarization states; and the incidence angle. Integral calculations are performed in the stack mode as well.
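A numerical sketch of the integral characteristic F defined above: only the structure of the formula comes from the text, while all four spectra below are invented toy curves.

```python
import numpy as np

def trap(y, x):
    """Trapezoidal integration of samples y over grid x."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Toy spectra on [λ_d, λ_u] = [400 nm, 700 nm]; all shapes are invented.
lam = np.linspace(400e-9, 700e-9, 301)            # wavelength grid
W = np.ones_like(lam)                             # weight function W(λ)
S = np.exp(-((lam - 550e-9) / 80e-9) ** 2)        # toy light source S(λ)
D = np.ones_like(lam)                             # flat detector D(λ)
C = 0.5 + 0.4 * np.sin(2 * np.pi * (lam - 400e-9) / 300e-9)  # toy coating C(λ)

# F is a W·D·S-weighted average of the coating characteristic C(λ).
F = trap(W * D * S * C, lam) / trap(W * D * S, lam)
```

Since the weights are non-negative, F is guaranteed to lie between the minimum and maximum of C(λ), which is a quick consistency check on any implementation of this formula.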
It is also possible to display and calculate averaged spectral characteristics, as well as their maximum/minimum over a 2D region of [wavelengths × angles of incidence]. It is possible to adjust the font type, size, font color, and background color using the corresponding toolbar controls. It is also possible to change the number of digits displayed and to select scientific format when necessary (use the [E±] button for this purpose). Of course, you can switch back to the Built-in Style. You may also be interested in the following articles:
Let $H(s)$ be a transfer function of the form$$H(s) = \frac{1}{s-p}$$where $p$, the pole of $H(s)$, can be written as a complex number $a+jb$. Taking the inverse Laplace transform of $H(s)$ gives the corresponding impulse response $h(t)$ (that is, the output of your system when given $\delta(t)$ as input). Writing $\mathcal{L}^{-1}$ for the inverse Laplace transform, we have$$h(t) = \mathcal{L}^{-1}\{H(s)\} = e^{pt} = e^{at}e^{jbt}.$$Now let's look at what this impulse response looks like. The term $e^{at}$ is a simple exponential which will be either decaying (if $a < 0$) or growing (if $a > 0$) with time. The term $e^{jbt}$ is responsible for oscillations in the output of your system (remember that $e^{jbt} = \cos(bt) + j\sin(bt)$). From this, you can infer the stability of your system and understand why we need poles in the left half of the $s$-plane (i.e. we need $a < 0$) for the system to be stable. Often, the numerator and the denominator of your transfer function have real coefficients, in which case poles appear in complex conjugate pairs. You could for example have$$h(t) = e^{at}(e^{jbt} + e^{-jbt}) = 2e^{at}\cos(bt).$$ I like to keep this picture in mind (taken from here), which greatly summarizes this. For more complex transfer functions, partial fraction decomposition can be used to go back to the simple cases presented here.
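The decay/growth behavior described above is easy to verify numerically by sampling $h(t) = e^{pt}$ for two example poles (the pole values are arbitrary):

```python
import numpy as np

# Sample h(t) = e^{pt} for a single pole p = a + jb: Re(p) < 0 gives a
# decaying envelope, Re(p) > 0 a growing one; Im(p) only oscillates.
t = np.linspace(0.0, 5.0, 1001)

def impulse_response(p):
    return np.exp(p * t)

stable = impulse_response(-1.0 + 10j)     # a < 0: envelope decays
unstable = impulse_response(0.5 + 10j)    # a > 0: envelope grows

print(abs(stable[-1]) < abs(stable[0]), abs(unstable[-1]) > abs(unstable[0]))
# → True True
```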
Update: An idea very similar to this one is described in "Digital Access to Comparison-Based Tree Data Structures and Algorithms".

Here's an idea for a structure that might meet these bounds. It uses a balanced binary search tree where each node is annotated with a pair of integers in $[-1,k-1]$ to indicate how much of the prefix of the key does not need to be compared again with the search key.

Notation

A shared prefix pair is a pair of integers associated with a key, called the focus, and a set of string keys not containing the focus, called the context. For a focus $f$ and context $c$, if $\forall y \in c, f < y$, then the left shared prefix (the left value in the shared prefix pair) of $f$ and $c$ is $-1$. Otherwise, let $g$ be the largest key in $c$ such that $g < f$. This is called the left neighbor of $f$ in $c$. Then the left shared prefix $i$ is the largest integer such that $f[i] > g[i]$ and $\forall j < i, f[j] = g[j]$. The right shared prefix and right neighbor are defined symmetrically.

Search

Annotate each node in a balanced binary search tree with a shared prefix pair, using the key at that node as the focus and the keys of its ancestors in the tree as the context. I'll first describe how to search for a string key (the needle, as in "needle in a haystack") in an annotated tree, then describe insertion and deletion.

During search, maintain a shared prefix pair with the needle as the focus and the keys of the nodes inspected during the search as the context. By the definition above, we start with a shared prefix pair of $(-1,-1)$.

Assume we are searching for the needle $d$ with shared prefix pair $(i,j)$ in a tree with root $v$ with key $z$ and shared prefix pair $(p,q)$. Since the search proceeds from parent to child, the prefix pairs share the same context and the same left and right neighbors, $a$ and $b$.
Now, if $i < p$, then $z[i] = a[i] < d[i]$, and $z < d$. In this case, the search proceeds recursively to the right child of $v$. $z$ is the new left neighbor of $d$, and $d$'s prefix pair stays the same.

If $i > p$, then $d[p] = a[p] < z[p]$, and $d < z$. In this case, the search proceeds recursively to the left child of $v$. $z$ is $d$'s new right neighbor and $p$ is the new right shared prefix of $d$.

For $j \neq q$, proceed similarly.

In the remaining case, $i = p$ and $j = q$. Assume, w.l.o.g., that $i \geq j$. Find the first $i^{\prime} > i$ such that $d[i^{\prime}] \neq z[i^{\prime}]$ but $\forall n < i^{\prime}, d[n] = z[n]$. If $d[i^{\prime}] < z[i^{\prime}]$, then $d < z$, so proceed to the left child of $v$. The new right neighbor of $d$ is $z$ and the new right shared prefix of $d$ is $i^{\prime}$. If $d[i^{\prime}] > z[i^{\prime}]$, then $d > z$, so proceed to the right; $z$ is the new left neighbor of $d$, and $i^{\prime}$ is $d$'s new left shared prefix. The $j > i$ case is symmetric.

Search Performance

Letters are not compared unless $i=p$ and $j=q$. At each step in which letters are compared, either the sum of the shared prefix pair increases or only one letter comparison is made. We will discuss these two cases separately.

If only one letter comparison is made, a child pointer is immediately followed. Since paths from the root to leaves are $O(\lg n)$ and $n \leq A^k$, fewer than $O(k \lg A)$ single-letter-comparison steps are taken.
If $p > 1$ letter comparisons are made in a step, the sum $i+j$ of the shared prefix pair increases by at least $p-1$. Since the maximum possible sum is $2k$, $O(k)$ letter comparisons are made in steps that result in larger shared prefix pair sums. However, shared prefix pair sums may seemingly decrease in non-letter-comparison steps, when $i > p$ and $j$ is replaced by $p$, or $j > q$ and $i$ is replaced by $q$. This is not the case, though: if $i > p$ then $d < z$, so $j \leq q$. If $j < q$, then $d$'s new right shared prefix is $j$, so $j = p$ and the shared prefix pair sum is not decreased. Similarly, if $j = q$, then $d$'s new right shared prefix is $\geq j$, and the sum is not decreased.

Similarly, each step requires $O(1+p/B)$ block transfers if it makes $p-1$ letter comparisons.

Insertion and Deletion

To perform an insertion, first locate where the node with the key $d$ would be found if it were in the tree. After modification, perform the $O(\log n)$ restructuring necessary to restore balance. Rotations might invalidate shared prefix pairs. Consider the case of a left rotation:

      B                  D
     / \                / \
    A   D     ==>      B   E
       / \            / \
      C   E          A   C

The shared prefix pairs at A, C, and E all remain valid. The left shared prefix of B and the right shared prefix of D also remain valid. The right shared prefix at B is the left shared prefix at D from before the rotation. The left shared prefix of D is the minimum of the left shared prefixes of D and B from before the rotation.

The right rotation and deletion cases are the same, mutatis mutandis.
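The letter-comparison step used in the search (scan forward from the longest prefix already known to match until the needle and the node key differ) can be sketched as a small helper. The function name and the use of plain Python strings are my own choices, not part of the proposal:

```python
# Starting at index lo (one past the prefix already known to match),
# find the first position where needle d and node key z differ.
# Keys are assumed to be strings over a fixed alphabet.
def first_mismatch(d, z, lo):
    i = lo
    while i < len(d) and i < len(z) and d[i] == z[i]:
        i += 1
    return i  # first index with d[i] != z[i], or min(len(d), len(z))

assert first_mismatch("banana", "bandit", 0) == 3
assert first_mismatch("banana", "bandit", 3) == 3  # no rescan of the prefix
```

Starting the scan at `lo` rather than 0 is exactly what the shared prefix pair buys: letters before the recorded prefix are never compared twice along a root-to-leaf path.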
If you want to multiply two matrices $A$ and $B$, then observe that$$\begin{pmatrix}I_n&A&\\&I_n&B\\&&I_n\end{pmatrix}^{-1}=\begin{pmatrix}I_n&-A&AB\\&I_n&-B\\&&I_n\end{pmatrix},$$which gives you $AB$ in the top-right block. It follows that inversion is at least as hard as multiplication.

EDIT: I had misread the question; the original answer below shows that multiplication is at least as hard as inversion. Based on the Wikipedia article: write the block inverse of the matrix as$$\displaystyle {\begin{bmatrix}A & B \\C &D \end{bmatrix}}^{-1}={\begin{bmatrix}A^{-1}+A^{-1}B(D -CA^{-1}B)^{-1}CA^{-1}&-A^{-1}B(D -CA^{-1}B )^{-1}\\-(D-CA^{-1}B)^{-1}CA^{-1}&(D-CA^{-1}B)^{-1}\end{bmatrix}}.$$Note that $A$ is invertible because it is a submatrix of the original matrix (which is invertible). One can prove that $D-CA^{-1}B$ is invertible because of the following identity ($M$ is the original matrix):$$\det(M)=\det(A)\det(D-CA^{-1}B).$$Some clever rewriting using the Woodbury identity gives$$\displaystyle {\begin{bmatrix}A & B \\C &D \end{bmatrix}}^{-1}={\begin{bmatrix}X&-XBD^{-1}\\-D^{-1}CX&D^{-1}+D^{-1}CXBD^{-1}\end{bmatrix}}$$where$$X=(A-BD^{-1}C)^{-1}.$$Let $C(n)$ denote the complexity of matrix inversion for an $n\times n$ matrix. Let $\omega$ be the exponent of the best matrix multiplication algorithm, so that we can multiply two $n\times n$ matrices in time $O(n^\omega)$. Using the formula above, we can express the inverse of an $n\times n$ matrix using: two inverses of half-size ($\frac{n}{2}\times\frac{n}{2}$): $D$ and $X$; six multiplications of half-size: $BD^{-1}$, $(BD^{-1})C$, $X(BD^{-1})$, $D^{-1}C$, $(D^{-1}C)X$, and $((D^{-1}C)X)(BD^{-1})$; two additions of half-size. This gives the recurrence$$C(n)=2C(n/2)+6O((\tfrac{n}{2})^\omega)+2O((\tfrac{n}{2})^2).$$Since $\omega\geqslant 2$, we rewrite the above as$$C(n)=2C(n/2)+O(n^\omega).$$We can now apply the Master theorem.
Using the notation of the Wikipedia article, we have $f(n)=Kn^\omega$ for some constant $K$, and $a=b=2$, thus $c_{crit}=\log_2 2=1<\omega$. On the other hand, we have a regularity condition on $f$, since$$af(n/b)=2K(\tfrac{n}{2})^\omega=2^{1-\omega}Kn^\omega\leqslant \frac{1}{2}f(n)$$because $\omega\geqslant 2$. Thus the theorem tells us that$$C(n)=O(f(n))=O(n^\omega).$$It follows that multiplication is at least as hard as inversion.
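The reduction in the first paragraph (multiplication via inversion of a block upper-triangular matrix) can be verified directly; the matrix size and random entries below are arbitrary:

```python
import numpy as np

# Inverting [[I, A, 0], [0, I, B], [0, 0, I]] yields AB in the
# top-right block, so an inversion routine can multiply A and B.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
I = np.eye(n)
Z = np.zeros((n, n))

M = np.block([[I, A, Z], [Z, I, B], [Z, Z, I]])
Minv = np.linalg.inv(M)
top_right = Minv[:n, 2 * n:]

assert np.allclose(top_right, A @ B)
```

Note that this block matrix is always invertible (its determinant is 1), so the reduction works for any $A$ and $B$, not just well-conditioned ones.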
The kinetic energy of the bullet is $\frac12 mv^2 \approx 1$ kilojoule. If the deceleration is continuous over $x=1$ meter, energy conservation gives an acceleration $a = v_\text{initial}^2/2x \approx 65{,}000\,\mathrm{m/s^2} \approx 6600\,g$, and the stopping time is $t = v_\text{initial}/a \approx 5.6$ milliseconds. Spreading the bullet's energy over the stopping time gives an average power of 175 kilowatts. If you make the hand-waving assumption that the mechanism for stopping the bullet is inefficient, you might multiply this power by a factor of 10–100. This is a lot of power! But the time interval is very brief. And it's certainly not prima facie unphysical: after all, the gunpowder explosion that launched the bullet involved the same energy transfer and an acceleration length of much less than a meter. After some thinking, and a silly mistake, I can make an order-of-magnitude estimate of the magnetic field that would have to be involved. I would expect that the main effect involved in rapidly stopping a bullet would not be diamagnetism, a small effect where the magnetic field strength inside a "non-magnetic" material is changed in its fourth or fifth decimal place (and thus the energy density of the field, $u \propto B^2$, is changed in its eighth or tenth decimal place). The predominant factor on introduction of a strong magnetic field to a bullet would be eddy currents in the material. Wikipedia gives me a formula for energy loss due to eddy currents in a material,$$P = \frac{\pi^2 B^2 d^2 f^2}{6k\rho D}$$where $P$ is the power in watts per kilogram, $B$ is the peak field, $d$ is the thickness of the conductor, $f$ is the frequency, $k$ is a dimensionless constant which depends on the geometry, $\rho$ is the resistivity, and $D$ is the mass density.
Wiki gives $k=1$ for a thin plane and $k=2$ for a thin wire, so I wild-guess $k=3$ for a zero-dimensional bullet. Using values for the lead core of the bullet, we find the rate of field change\begin{align*}(Bf)^2 &= \mathrm{ \frac{18}{\pi^2} \frac{2\times10^{-7}\,\Omega\,m \cdot 10^4\,kg/m^3}{(10^{-2}\,m)^2}\frac{2\times10^5\,W}{8\times10^{-3}\,kg}}= \mathrm{10^{9} \frac{N^2}{C^2\,m^2}}\\ {}\\Bf&= \mathrm{\pi\times10^4\,T/s}\end{align*} The simplest assumption about the frequency is that the field is being ramped up to its maximum while the bullet stops, so we've seen a quarter-oscillation and $1/f = 20\,\mathrm{ms}$. This gives us a peak field of 600 tesla, which is large, but not absurdly large. On the other hand, if Magneto is actually an FM radio broadcaster at 100 MHz, he'd need only a field of$$\mathrm{\frac{ \pi\times10^4\,T/s }{ 10^8\,Hz } = \pi\times10^{-4}\,T.}$$I don't think that radio engineers ordinarily think about local peak magnetic field strengths, but this isn't outrageous either. My college NPR station has a 100 kW transmitter. However, their antenna isn't shaped correctly to put that entire power into a one-cc volume.
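The arithmetic above can be reproduced in a few lines (a sketch using the same assumed values: $k=3$, $\rho = 2\times10^{-7}\,\Omega\,\mathrm{m}$, $D = 10^4\,\mathrm{kg/m^3}$, $d = 1$ cm, 200 kW dissipated in an 8 g core, $f = 50$ Hz from the quarter-oscillation assumption):

```python
import math

# Assumed values from the estimate above.
k   = 3            # dimensionless geometry factor (wild guess for a bullet)
rho = 2e-7         # resistivity of lead, ohm * m
D   = 1e4          # mass density, kg / m^3
d   = 1e-2         # conductor thickness, m
P   = 2e5 / 8e-3   # power per unit mass, W / kg
f   = 50           # Hz: a quarter-oscillation over the ~5 ms stopping time

# Invert P = pi^2 B^2 d^2 f^2 / (6 k rho D) for the product B * f.
Bf = math.sqrt(6 * k * rho * D * P / (math.pi ** 2 * d ** 2))
B  = Bf / f
print(Bf, B)  # roughly 3e4 T/s, and a peak field of ~600 T
```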
The General Curl The Theory The image you posted above is a trace visualization of what is known as a divergence-free vector field. To understand what that means, consider particles that move along the lines you see above (the field represents their instantaneous velocity). If the field is indeed divergence free, those particles will never collide. That's what gives these visualizations their beauty: the lines never intersect. Mathematically, a vector field $F$ is divergence free if $\nabla \cdot F = 0$. It is hard to construct a random divergence-free vector field procedurally, so we use some math to aid us. In vector calculus, the curl of any vector field is divergence free. The curl is an operator that takes a vector field and returns another vector field representing the infinitesimal (microscopic) rotation of the input vector field. The infinitesimal rotation of a vector field can be understood using this intuitive interpretation from Wikipedia: Suppose the vector field describes the velocity field of a fluid flow (such as a large tank of liquid or gas) and a small ball is located within the fluid or gas (the center of the ball being fixed at a certain point). If the ball has a rough surface, the fluid flowing past it will make it rotate. The rotation axis (oriented according to the right hand rule) points in the direction of the curl of the field at the center of the ball, and the angular speed of the rotation is half the magnitude of the curl at this point. We are not interested in what the curl represents; we are only interested in the fact that the curl of any vector field is divergence free. For our artistic use, the input $F$ of the curl operator can be a simplex or Perlin noise-based vector field, that is, $F = (F_x, F_y, F_z)$ where $F_x, F_y, F_z$ are different Perlin or simplex noise functions.
Let us now look at the curl operator, which takes $F$ as input and returns a divergence-free vector field. The curl of $F$, denoted by $\nabla \times F$, is equal to: $$\nabla \times F =\left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z} \right)\mathbf{i} +\left( \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right)\mathbf{j} + \left( \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right)\mathbf{k}$$ where $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are the unit basis vectors of Cartesian space. Don't worry if you don't understand these equations; we shall understand them soon. The notation $\frac{\partial F_z}{\partial y}$ represents the partial derivative of $F_z$ with respect to the $y$ axis; similarly, $\frac{\partial F_y}{\partial z}$ represents the partial derivative of $F_y$ with respect to the $z$ axis, and so on. A partial derivative measures how much a function changes along an axis. Take a brief look at derivatives before we continue; you don't have to study them, just recognize what they represent. To compute a partial derivative, we use what is known as the central finite difference method, or symmetric derivative, which states that the partial derivative is approximately equal to: $$\frac{f(x+h) - f(x-h)}{2h}$$ where $h$ is some arbitrarily small number and $f$ is the function in question. The same equation applies for the $y$ and $z$ axes. For instance, to compute $\frac{\partial F_z}{\partial y}$, we add $h$ (a small number) to the $y$ component of the vector, evaluate the $F_z$ noise at that vector, subtract $h$ from the $y$ component of the vector, evaluate the noise again, take the difference between the two evaluated noise values, and finally divide it by $2h$.
To understand how the central finite difference method works, consider this intuitive interpretation. Suppose someone is standing on a mountain blindfolded and is asked to report the steepness (slope, partial derivative) at the point where they stand. One person might move a step to the right and report whether, and by how much, they ascended or descended; another might do the same with a step to the left. Now consider the situation where they are at the very top of the mountain. At that point the slope is zero; they are neither ascending nor descending. Yet each one-sided observer will report descending, because a step in either direction goes downhill. We conclude that to get an accurate result, we have to take a step to the right, observe, take a step to the left, observe, and combine the two observations. In our case, moving both to the left and to the right reveals that we are on level ground (at the top). So $f(x+h)$ simply means that we observe after moving a step $h$ to the right, and $f(x-h)$ means after a step to the left. Once we compute the curl of the vector field, we can do what is known as advection using Euler integration. Treating the curl vector field as the instantaneous velocity of some particles in space, Euler integration approximates the location of each particle after one time step by adding the curl vector to the particle's current location. So to get the whole trace of a particle, we evaluate the curl, add it to the location of the particle, evaluate the curl at the new location, add it to the new location, and so on. Implementation Simple Curl Implementation The first step is to make a group that generates the vector field by initializing its components with different simplex noises $(F_x, F_y, F_z)$: The next step is to compute the partial derivatives.
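As a plain-Python aside, independent of the node trees, the symmetric-derivative curl and the Euler advection loop can be sketched like this, with smooth products of sines standing in for the three simplex noise components:

```python
import math

# Smooth stand-ins for the three simplex noise functions F_x, F_y, F_z.
def make_noise(seed):
    def n(x, y, z):
        return (math.sin(1.7 * x + seed) * math.sin(2.3 * y + 2 * seed)
                * math.sin(3.1 * z + 3 * seed))
    return n

Fx, Fy, Fz = make_noise(0.3), make_noise(1.1), make_noise(2.0)

def partial(f, p, axis, h=1e-3):
    # Symmetric derivative: (f(p + h e_axis) - f(p - h e_axis)) / (2h).
    a, b = list(p), list(p)
    a[axis] += h
    b[axis] -= h
    return (f(*a) - f(*b)) / (2 * h)

def curl(p):
    # The three curl components, exactly as in the equation above.
    return (partial(Fz, p, 1) - partial(Fy, p, 2),
            partial(Fx, p, 2) - partial(Fz, p, 0),
            partial(Fy, p, 0) - partial(Fx, p, 1))

def advect(p, steps, dt=0.05):
    # Euler integration: repeatedly add the (scaled) curl to the location.
    trace = [p]
    for _ in range(steps):
        c = curl(trace[-1])
        trace.append(tuple(q + dt * ci for q, ci in zip(trace[-1], c)))
    return trace

trace = advect((0.2, 0.4, 0.6), steps=100)
```

The divergence of this field, computed with the same stencil, comes out numerically close to zero, which is what keeps the traces from colliding.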
This can be done using the formula for the symmetric derivative: Take your time to understand this node tree. It can be hard to grasp at first. Having all the partial derivatives, we can now compute the curl easily using the equation above: And by advecting some initial vectors along the curl vector field using Euler integration and creating splines from the output points: We get something like this: Optimized Curl Implementation While the previous implementation works, it is very slow, because it requires a lot of noise evaluations and noise is expensive to compute. Thankfully, the noise functions in Animation Nodes v2.1 are implemented using SIMD instructions, which means they can be executed partly in parallel on CPUs that support Advanced Vector Extensions (AVX), which is pretty much every modern CPU. This results in a speed-up of up to 600x over the original implementation. SIMD requires the input data to occupy a contiguous memory block. So to utilize SIMD instructions and optimize this setup, we should combine all our data into a single big vector list, evaluate the noise on it, segment the output, and process it. The following implementation is hard to understand, so it is better to work through it on your own; my implementation looks like this: Euler's integration loop stays the same: However, notice that we are appending a list of vectors and not a single vector, so the output will be of length $n \cdot m$, where $n$ is the number of iterations and $m$ is the number of initial input vectors. We have to do another segmentation before creating splines from those points, so the spline loop will include segmentation and spline creation: And what we get is a fully functional, fast curl trace generator: Blend file for study and practice: Surface Curl Now that we have an understanding of what the curl is, we can go ahead and finally answer your question.
Theory Let us define the problem as follows: We want the output vector field of the curl operator to always be tangent to the surface of our meshes; that is, the curl should always be perpendicular to the vector field that represents our mesh surface normals. If our initial points are on the surface of the mesh and the above condition is satisfied, then we will get the result we are looking for. The above condition is satisfied only if $(\nabla \times F) \cdot \vec{N} = 0$, where $(\nabla \times F)$ is the curl and $\vec{N}$ is our normals vector field. The problem is now to compute a vector field $F$ such that the dot product between its curl and the normals vector field is zero. By some analysis, a suitable choice is $F = p(x,y,z)\,\vec{N}$, where $p$ is a Perlin or simplex noise function just as in the general case; treating $\vec{N}$ as locally constant, the curl then becomes: $$\begin{aligned}F &= p(x,y,z) \begin{bmatrix} N_x \\ N_y \\ N_z \end{bmatrix}\\(\nabla \times F) &=\left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}, \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x},\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right)\\&= \left( N_z \frac{\partial p}{\partial y} - N_y \frac{\partial p}{\partial z}, N_x \frac{\partial p}{\partial z} - N_z \frac{\partial p}{\partial x},N_y \frac{\partial p}{\partial x} - N_x \frac{\partial p}{\partial y} \right)\end{aligned}$$ And the proof: $$\begin{aligned}(\nabla \times F) \cdot \vec{N} =&N_z N_x \frac{\partial p}{\partial y} - N_y N_x \frac{\partial p}{\partial z} +\\&N_x N_y \frac{\partial p}{\partial z} - N_z N_y \frac{\partial p}{\partial x} +\\&N_y N_z \frac{\partial p}{\partial x} - N_x N_z \frac{\partial p}{\partial y} = 0\end{aligned}$$ Notice that if the normal is $\vec{N}=(0,0,1)$, you end up with exactly the equation that Robert Bridson proposed in his paper "Curl-Noise for Procedural Fluid Flow", which is: $$\frac{\partial
p}{\partial x}\mathbf{j}$$ So now you know where my other answer came from. Since the curl has a zero z component, we shall call this special case the 2D case. Implementation The implementation is much easier because we only have to differentiate a single function. You should by now know how to implement that. Here is the non-trivial implementation of the surface curl; notice that there is an extra input called normals now: Let's consider the simple example in which we use a sphere as our mesh. The normal of the sphere at any point on its surface is equal to that point normalized, so the normals are simply the point locations normalized. After the curl is computed and the points advected, we can project the output onto the surface to make sure the splines always lie on the surface of the sphere, since we are merely performing a finite approximation. We won't need this step for sufficiently small step sizes in Euler's integration, which in our case are determined by the magnitude of the normal vector field or the amplitude of the noise function. To project any point onto the surface of a sphere of radius $r$, we normalize the vector and multiply by the radius, so the advection loop now looks like this: The result is these magnificent and beautiful splines: Blend file for study and practice: Since the 2D case has two zero normal components, let's implement it using an optimized setup. We simply remove anything that gets multiplied by zero to get: This gives: Blend file: Ok, I have given you an example where the mesh is defined implicitly, but how do we use an actual mesh? Well, we can use a BVH tree: We approximate the normal field by the Normal output of the Nearest Surface Point node, and we change the generator to be the location of the nearest surface point. Why? Because this is the projection of the point onto the surface, which is a step we have to do to make sure we get accurate results, as we discussed before.
Make sure to reduce the step size, because BVH trees are not as accurate as the implicit definition of our sphere. It should be noted that constructing BVH trees from objects directly won't apply modifiers, so such a node tree can be used: And finally, here is the result of both setups: Inviscid Boundary Condition An inviscid boundary condition requires the dot product between the curl and the surface normals around the boundaries of objects to be zero; in other words, it requires the curl to be orthogonal to the normals of the object around its boundaries. Ensuring this condition in the surface curl case is easy, while it is somewhat challenging for the general 3D case, so I shall only cover the surface curl. Looking back at the equation for the 2D case: $$\frac{\partial p}{\partial y}\mathbf{i} -\frac{\partial p}{\partial x}\mathbf{j}$$ We notice that the curl is actually orthogonal to the gradient, the gradient being, in simple terms, a vector pointing in the direction of the greatest change. The gradient is computed by: $$\frac{\partial p}{\partial x}\mathbf{i} +\frac{\partial p}{\partial y}\mathbf{j}$$ Can you see the similarities? So we can implement the boundary condition by making sure the noise field has a gradient parallel to the normals of the object around its boundaries; in other words, the noise field should increase the most when moving in the direction of the surface normals. We actually know a field that satisfies this condition everywhere, that is, whose gradient is parallel to the surface normals. This field is the Signed Distance Field (SDF). An SDF gives, for every point, the distance to the closest surface point, with a negative sign if the point is inside the surface and a positive sign if outside (however, the sign is redundant in this particular application). So the condition can be implemented by modulating the noise field based on the SDF of the surface using a ramp.
As Robert implemented it, the noise field should be multiplied by the ramp: $$\begin{cases} 1 & r > 1 \\ \frac{15}{8}r - \frac{10}{8}r^3 + \frac{3}{8}r^5 & -1 \leq r \leq 1 \\ -1 & r < -1\end{cases}$$ where $r$ is the SDF divided by a scalar defining the modulation width. Which we can implement as follows: What we did is replace the noise node with the above subprogram and add a BVH tree as an input. Using Older Animation Nodes Versions The only node used in this answer that does not exist prior to Animation Nodes v2.1 is the Vector Noise node, which provides us with the Perlin and simplex noise functions. To make the node tree work with older versions, you simply have to replace the Vector Noise node with the built-in Blender noise functions that exist in the mathutils Python module. To do so, we can use the Expression node: With this code: [mathutils.noise.noise(x) for x in vectors] Aside from noise.noise, there are other functions that give you more control, which you can find in the API. File for study:
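To see numerically why the surface curl stays tangent, note that the field derived earlier is just $\nabla p \times \vec{N}$, and a cross product is always orthogonal to both of its factors. A small sketch on the unit sphere (with a smooth sine product standing in for the noise $p$):

```python
import math

def p(x, y, z):
    # Stand-in for the simplex noise scalar p(x, y, z).
    return math.sin(2.1 * x + 0.4) * math.sin(1.3 * y + 1.7) * math.sin(2.9 * z)

def grad(q, h=1e-4):
    # Gradient of p via the symmetric derivative.
    g = []
    for axis in range(3):
        a, b = list(q), list(q)
        a[axis] += h
        b[axis] -= h
        g.append((p(*a) - p(*b)) / (2 * h))
    return g

def surface_curl(q):
    # (N_z p_y - N_y p_z, N_x p_z - N_z p_x, N_y p_x - N_x p_y) = grad(p) x N.
    # On the unit sphere the normal is the (normalized) position itself.
    n = [c / math.sqrt(sum(c * c for c in q)) for c in q]
    g = grad(q)
    return (n[2] * g[1] - n[1] * g[2],
            n[0] * g[2] - n[2] * g[0],
            n[1] * g[0] - n[0] * g[1])

q = (0.6, 0.64, 0.48)  # a point on the unit sphere (0.36 + 0.4096 + 0.2304 = 1)
c = surface_curl(q)
tangency = sum(ci * ni for ci, ni in zip(c, q))
print(tangency)  # ~0: the curl is tangent to the sphere
```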
First principle of stationary action Consider a real Klein-Gordon scalar field $\phi$ living in a $D$ dimensional flat spacetime. The field is considered off shell (the on-shell condition is defined below). Suppose for simplicity that its action on an arbitrary region of spacetime $\Omega$ is \begin{equation}\tag{1} S = \int_{\Omega} \frac{1}{2} \big((\partial_a \, \phi )(\partial^a \, \phi) - m^2 \phi^2 \big) d^D x. \end{equation}The on-shell field is defined as the one which renders the action stationary under an arbitrary compactly supported variation of the field. The variation $\delta \phi$ is an arbitrary smooth function with compact support (it is not necessarily analytic). It vanishes on the boundary $\partial \, \Omega$, and all its derivatives vanish there as well; $\delta \phi = 0$ and $\partial_a \, \delta \phi = 0$ on $\partial \, \Omega$. An arbitrary variation of the field induces a variation of its action : \begin{align} \delta S &= \int_{\Omega} \big( (\partial_a \, \phi )(\partial^a \, \delta \phi) - m^2 \phi \, \delta \phi \big) d^D x \\[18pt] &= \int_{\Omega} \partial^a \big( (\partial_a \phi) \, \delta \phi \big) \, d^D x - \int_{\Omega} \big( \partial^a \, \partial_a \phi + m^2 \phi \big) \, \delta \phi \; d^D x. \tag{2} \end{align} The first integral gives a surface term, by virtue of the Gauss theorem. It vanishes if $\delta \phi = 0$ on $\partial \, \Omega$. Since $\delta \phi$ is arbitrary inside the bulk of $\Omega$, we get the Klein-Gordon equation, which defines the on-shell condition : \begin{equation} \partial^a \, \partial_a \phi + m^2 \phi = 0. \tag{3} \end{equation} This is all fine with the usual variational principle. However, to solve the on-shell differential equation (i.e the equation of motion), we need some proper boundary conditions that should be imposed on the scalar field. Obviously, they should be compatible with the equation of motion. Without them, the equation of motion cannot be solved. What is the "law" that defines the boundary conditions to be imposed on the field ? Second principle of stationary action (hypothetical method to find the boundary conditions on the field) Now consider an on-shell field $\phi$ with some unknown boundary conditions on $\partial \, \Omega$. An arbitrary small variation of the boundary conditions induces a variation of the field ; $\phi' = \phi + \delta \phi$, which is still on shell. In this case, the variation $\delta \phi$ and its derivatives do not necessarily vanish on the boundary ! ($\delta \phi$ is no longer of compact support). The change of boundary conditions also produces a change in the action : \begin{equation}\tag{4} \delta S = \int_{\Omega} \partial^a \big( (\partial_a \phi) \; \delta \phi \big) \, d^D x - \int_{\Omega} \big( \partial^a \, \partial_a \phi + m^2 \phi \big) \, \delta \phi \; d^D x. \end{equation} Since the field is on shell, the equation of motion is satisfied in the bulk and the second integral vanishes. We are left with a surface integral : \begin{equation}\tag{5} \delta S = \int_{\partial \, \Omega} (\partial_a \phi) \, \delta \phi \; d\sigma^a, \end{equation} where $d\sigma^a$ are the components of the outward boundary normal. Let's suppose that the action of the on-shell field is still stationary under the variation of the boundary conditions. The condition $\delta S = 0$ then imposes \begin{equation}\tag{6} (d\sigma^a \; \partial_a \phi) \, \delta \phi = 0, \end{equation} everywhere on the boundary $\partial \, \Omega$ (I'm not sure this is right, since the surface integral is a flux. Maybe it is just the integral which vanishes).
This suggests two choices : \begin{align}\tag{7} \delta \phi &= 0 \; \text{(Dirichlet conditions),} &&\text{or} &d\sigma^a \; \partial_a \phi &= 0 \; \text{(Neumann conditions).} \end{align} So to summarize: I use the stationary action principle to get the field equations, and then use the principle again, now together with the field equations, in order to see what the possible boundary conditions are. Now, the question is this : Does the previous procedure actually make sense ? How can we make the boundary conditions more precise, in detail ? And more specifically, how should we interpret the Dirichlet condition above ; $\delta \phi = 0$ on the boundary $\partial \, \Omega$ ? I'm unable to make sense of this part. Take note that the arbitrary region of spacetime $\Omega$ and its boundary $\partial \, \Omega$ are fixed here, and there is no variation of the coordinates (which are fixed). The boundary conditions that I'm talking about refer to the field configuration on $\partial \, \Omega$, which is a closed hypersurface in spacetime enclosing the arbitrary region $\Omega$. What is your opinion on this hypothetical (unconventional ?) application of the stationary action principle ? EDIT: Please use the same variables (i.e. a scalar field) in your answer, to talk about "boundary conditions" on $\partial \Omega$ of a field in spacetime, instead of "initial conditions". To me, there's a huge distinction between "field boundaries" and "initial conditions". Very important: Take note that I may be using the "Nature" Hamilton-Jacobi action and not the "observer" Euler-Lagrange action (I'm not sure yet), as defined in this paper : As a reference to this question, see section 2 (page 4) of the following paper from Padmanabhan:
Base: \(a\) Legs: \(b\) Base angle: \(\beta\) Vertex angle: \(\alpha\) Altitude to the base: \(h\) Perimeter of an isosceles triangle: \(P\) Area of an isosceles triangle: \(S\) An isosceles triangle is a triangle that has two equal sides. The equal sides are called the legs and the third side is called the base. In the figure below, the legs and the base are denoted by the letters \(b\) and \(a,\) respectively. Relationship between the vertex and base angles \(\beta = 90^\circ - {\large\frac{\alpha }{2}\normalsize}\) Altitude drawn to the base \({h^2} = {b^2} - {\large\frac{{{a^2}}}{4}\normalsize}\) In an isosceles triangle, the altitude, angle bisector, median and perpendicular bisector drawn from the vertex to the base coincide. Relationships between the legs and the base \(a = 2b\cos \beta,\;\) \(a = 2b\sin {\large\frac{\alpha }{2}\normalsize}\) Perimeter of an isosceles triangle \(P = a + 2b\) Area of an isosceles triangle \(S = {\large\frac{{ah}}{2}\normalsize} =\) \({\large\frac{{{b^2}}}{2}\normalsize}\sin \alpha =\) \({\large\frac{{ab}}{2}\normalsize}\sin \beta \)
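The formulas above can be bundled into a small sketch (the function name is my own):

```python
import math

def isosceles(a, b):
    """Derived quantities of an isosceles triangle with base a and legs b."""
    h = math.sqrt(b ** 2 - a ** 2 / 4)   # altitude to the base
    P = a + 2 * b                        # perimeter
    S = a * h / 2                        # area
    beta = math.acos(a / (2 * b))        # base angle, from a = 2b cos(beta)
    alpha = math.pi - 2 * beta           # vertex angle
    return h, P, S, alpha, beta

h, P, S, alpha, beta = isosceles(a=6, b=5)
print(h, P, S)  # 4.0 16 12.0
```

For this 6-5-5 triangle one can also check the relations \(a = 2b\sin(\alpha/2)\) and \(\beta = 90^\circ - \alpha/2\).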
A generalized sequential machine (GSM) is a generalization of a Mealy machine where on each transition one input symbol is read and 0 or more output symbols are written. As in a Mealy machine, we assume that there are no final states, i.e. a GSM operates on infinite input/output words. A GSM has finitely many states. Assume $A$ and $B$ are two single-valued GSMs and $C$ is a not necessarily single-valued GSM. Single-valued means that for each infinite input word there is at most one infinite output word (determinism is a sufficient but not necessary condition for single-valuedness). Suppose in addition that $A$ and $C$ are fully reachable, i.e. there is a path from the start state to every other state. In the following, two GSM synthesis problems are described. Let $T_X$ denote the set of states of GSM $X$. Are there single-valued GSMs $X$ and $Y$ such that $X \circ A \equiv Y \circ B$ and in $X \circ A$ for every $t \in T_A$ there is a state $(\_, t) \in T_X \times T_A$ reachable from the start state, i.e. $A$ remains fully reachable in $X \circ A$? Note that $\circ$ denotes transducer composition. Are there single-valued, finite state transducers $X$ and $Y$ such that $X \circ C \supseteq Y \circ B$ and in $X \circ C$ for every $t \in T_C$ there is a state $(\_, t) \in T_X \times T_C$ reachable from the start state, i.e. $C$ remains fully reachable in $X \circ C$? Clearly, 1. is semi-decidable and 2. is undecidable if $C$ is non-deterministic (without restriction, i.e. infinitely-valued). However, is 1. decidable and if so what is the complexity? For it to be decidable it must be possible to bound the number of states of $X$ and $Y$. Moreover, is 2. decidable if $C$ is finitely-valued, i.e. there is a bounded number of infinite output words for each infinite input word? I'm particularly interested in symbolic methods, also references to practical bounded synthesis approaches. I appreciate any hints that relate this problem to a standard problem as well. 
Thanks for your input.
I'm reading Altland & Simons' Condensed Matter Field Theory, and when presenting the t-J model, $$ H = -t \sum_{<mn>\sigma}P_s a^{\dagger}_{m\sigma} a_{n\sigma} P_s + J \sum_{<mn>} S_m \cdot S_n $$ they say that Nagaoka's theorem asserts that, in the $U=\infty\ (J=0)$ limit, when we remove one electron from the half-filled Hubbard model, the ground state becomes ferromagnetic. I'm having a hard time trying to show this for a four-site square lattice with three electrons. 1) The way I understand it, we can consider separately the cases $S^z_{total} = 3/2$ and $S^z_{total} = 1/2$, because this Hamiltonian does not allow the total z-spin to change. Is this correct? 2) For the $S^z_{total} = 3/2$ case, we can take the basis $\{a^{\dagger}_{1\uparrow}a^{\dagger}_{2\uparrow}a^{\dagger}_{3\uparrow}|0\rangle, a^{\dagger}_{2\uparrow}a^{\dagger}_{3\uparrow}a^{\dagger}_{4\uparrow}|0\rangle, a^{\dagger}_{1\uparrow}a^{\dagger}_{3\uparrow}a^{\dagger}_{4\uparrow}|0\rangle, a^{\dagger}_{1\uparrow}a^{\dagger}_{2\uparrow}a^{\dagger}_{4\uparrow}|0\rangle\}$, and the ground state (with energy $-2t$) is $$ |\psi_{-2t}\rangle = \frac{1}{\sqrt{4}} \left(a^{\dagger}_{1\uparrow}a^{\dagger}_{2\uparrow}a^{\dagger}_{3\uparrow} + a^{\dagger}_{2\uparrow}a^{\dagger}_{3\uparrow}a^{\dagger}_{4\uparrow} + a^{\dagger}_{1\uparrow}a^{\dagger}_{3\uparrow}a^{\dagger}_{4\uparrow} + a^{\dagger}_{1\uparrow}a^{\dagger}_{2\uparrow}a^{\dagger}_{4\uparrow}\right)|0\rangle$$ and its total spin is indeed $3/2$. However, if we take any other eigenstate, like $|\psi_{0}\rangle = \frac{1}{\sqrt{2}}\left( -a^{\dagger}_{1\uparrow}a^{\dagger}_{2\uparrow}a^{\dagger}_{3\uparrow} + a^{\dagger}_{1\uparrow}a^{\dagger}_{3\uparrow}a^{\dagger}_{4\uparrow} \right)|0\rangle$, isn't its spin also $3/2$? So I don't see how it is surprising that the ground state is ferromagnetic... but I probably got something wrong. 3) For $S^z_{total} = 1/2$ I have no idea how to proceed without having to diagonalize an enormous 12x12 matrix.
The book gives a hint to arrange the basis states in the order in which they are generated by application of the Hamiltonian, but I don't know how this helps. I understand that there is also a mirror symmetry in the lattice. I looked at Patrik Fazekas' "Lecture Notes on Electron Correlation and Magnetism" solution to this problem, but he seems to conclude that the $S^z_{total}=1/2$ case does not have a ferromagnetic ground state, so I'm a little confused.
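For point 2, the $S^z_{total} = 3/2$ sector can be checked in a few lines. With the operator ordering of the kets above, moving the hole along any bond of the 2×2 plaquette turns out to carry no extra fermionic sign (worth verifying by anticommuting the operators by hand), so the sector reduces to a single hole hopping on a 4-site ring:

```python
# S_z = 3/2 sector of three up electrons on a 2x2 plaquette: label basis
# states by the hole position 1..4. Assuming the fermionic signs all come
# out +1 for this ordering, H is -t times the ring adjacency matrix over
# the bonds 1-2, 2-3, 3-4, 4-1 (in units of t).
bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]
H = [[0.0] * 4 for _ in range(4)]
for i, j in bonds:
    H[i][j] = H[j][i] = -1.0

def apply(H, v):
    return [sum(H[i][j] * v[j] for j in range(4)) for i in range(4)]

uniform = [0.5, 0.5, 0.5, 0.5]   # the state |psi_{-2t}> above
print(apply(H, uniform))          # -2t times the same vector
```

The uniform superposition is an eigenstate with energy $-2t$, while a state with amplitudes $(0, 1, 0, -1)$ over hole positions is annihilated by $H$ (energy $0$), matching the eigenstates quoted in the question; the hint's "arrange the basis in the order generated by $H$" trick plays the same role in organizing the larger $S^z = 1/2$ sector.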
Tagged: conjugate Problem 209 Let $G$ be a group. We fix an element $x$ of $G$ and define a map \[ \Psi_x: G\to G\] by mapping $g\in G$ to $xgx^{-1} \in G$. Then prove the following. (a) The map $\Psi_x$ is a group homomorphism. (b) The map $\Psi_x=\id$ if and only if $x\in Z(G)$, where $Z(G)$ is the center of the group $G$. (c) The map $\Psi_y=\id$ for all $y\in G$ if and only if $G$ is an abelian group. Problem 129 Let $G$ be a group and $H$ and $K$ be subgroups of $G$. For $h \in H$ and $k \in K$, we define the commutator $[h, k]:=hkh^{-1}k^{-1}$. Let $[H,K]$ be a subgroup of $G$ generated by all such commutators. Show that if $H$ and $K$ are normal subgroups of $G$, then the subgroup $[H, K]$ is normal in $G$.
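A quick brute-force check of (a) and (b) of Problem 209 for the smallest nonabelian group, $S_3$, with permutations written as tuples (a sketch; in $S_3$ the center is trivial, so only $\Psi_e$ should be the identity):

```python
from itertools import permutations

S3 = list(permutations(range(3)))

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def psi(x, g):
    # Psi_x(g) = x g x^{-1}
    return compose(compose(x, g), inverse(x))

# (a) Psi_x is a homomorphism: Psi_x(gh) = Psi_x(g) Psi_x(h) for all x, g, h.
assert all(psi(x, compose(g, h)) == compose(psi(x, g), psi(x, h))
           for x in S3 for g in S3 for h in S3)

# (b) Psi_x = id exactly for central x; here Z(S3) = {e}.
central = [x for x in S3 if all(psi(x, g) == g for g in S3)]
print(central)  # [(0, 1, 2)]
```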
Consider the language $$L = \{1^i 0^j 1^k \mid i + j = 2k,\ k \geq 1\}\,,$$ and let $x_n$ be the canonical $n$'th word in $L$. My problem involves proving that the Kolmogorov complexity of $x_n$ can be bounded by $$K(x_n) \leq c + 2\log_2 |x_n|$$ for some constant $c$. My ideas (please note that I'm not writing this in a formal way; I am neglecting some constants/factors): we want to be able to compress the string $x_n$ above, so we could write $$K(x_n) \leq K(1^i) + K(0^j) + K(1^k)$$ (to do this compression I note that only $i$, $j$ and $k$ need to be described, so we consider the length of the binary representation of $i$, $j$ and $k$ as an upper bound on their Kolmogorov complexity) $$\leq \log_2 (i) + \log_2 (j) + \log_2 ((i+j)/2) + C$$ $$= \log_2 (i) + \log_2 (j) + \log_2 (i+j) - 1 + C$$ $$= \log_2 (i^2j + ij^2) + K$$ Now we can evaluate the length of $x_n$: $$|x_n| = i + j + k = i + j + (i + j)/2$$ Now, to show the claim, we just need to show that $$\log_2 (i^2j + ij^2) \leq 2\log_2 |x_n| = \log_2 (|x_n|^2)$$ and this is where I get stuck
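One way to make the bound concrete: every word in $L$ is fully determined by the pair $(i, k)$, since $j = 2k - i$, so a description of $x_n$ only needs those two numbers plus a constant-size decoder. A sketch (function names are my own):

```python
from math import log2

def decode(i, k):
    # A constant-size program that rebuilds the word from (i, k): j = 2k - i.
    return "1" * i + "0" * (2 * k - i) + "1" * k

def description_bits(i, k):
    # Bits needed to write i and k in binary (ignoring O(1) delimiter overhead).
    return max(1, i.bit_length()) + max(1, k.bit_length())

i, k = 5, 4            # j = 3, so the word is 1^5 0^3 1^4
x = decode(i, k)
n = len(x)             # |x| = i + j + k = 3k
print(x, description_bits(i, k), 2 * log2(n))
```

Since $i \leq 2k$ and $|x_n| = 3k$, both $i$ and $k$ fit in about $\log_2 |x_n|$ bits each, which is the $2\log_2|x_n| + c$ bound (and sidesteps the $\log_2(i^2j + ij^2)$ detour entirely).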
I essentially agree with Martin's comment; I can elaborate on that to make a tentative answer, knowing that there is no general formal definition of calculus or abstract machine and that what I am going to describe cannot possibly cover the meaning of all instances of these two words found in the literature. In brief: a calculus usually gives you the abstract specification of the meaning of programs, whereas an abstract machine usually implements that specification. Such an implementation is likely to be still high-level (i.e., many low-level details are not specified), hence the adjective "abstract", but it gets closer to what a physical machine would do to execute programs (according to the specification given by the calculus). More in detail: a calculus usually comes with an operational semantics, which gives you the meaning of programs in terms of the result they denote (if any). For this, one often uses the notation $t\Downarrow v$, which means that "the value (i.e., the final result) of the expression $t$ is $v$". Now, such a "big step" operational semantics (as it is sometimes referred to) is usually given in terms of a derivation system (i.e., a system in which you can prove judgments of the form $t\Downarrow v$), which is useful for reasoning abstractly about programs but is a bit far from describing how the execution of $t$ would look on a concrete machine. One may then move to a "small step" operational semantics, which is a set of rewriting rules on expressions (written $t\rightarrow t'$) describing how one can, step by step, compute the value of an expression (if any): $t\Downarrow v\quad$ iff $\quad t\rightarrow^\ast v$. Now, even the small-step semantics may be too coarse: typically, some rewriting rules may duplicate arbitrarily big sub-expressions, and a chain of duplications may cause an exponential blow-up of the size of expressions in only a linear number of steps, which means that these steps are not so "small" after all...
In particular, one may want to give a more realistic description of the execution of programs, getting even closer to the machine level. This is where abstract machines come into the picture. It is impossible to give a general description of an abstract machine, since they may differ greatly. However, these usually include low-level components (e.g., a pointer to a "code memory" where the executed program is stored, a stack, a memory for storing the values of variables, etc.) which make them look much more like physical machines executing the programs of the language underlying the original calculus. About where to "draw the line" between calculi and machines: this is actually a tricky question. There are calculi whose small-step semantics is more or less the same thing as an abstract machine. Lambda-calculi with explicit substitutions are examples of this: in such calculi, expressions contain constructs (the explicit substitutions) which make it possible for the small-step semantics to operate at a lower level, much closer to that of the machine. About your reasoning on how the levels of expression of an algorithm are "layered", I am not sure I follow it, so I cannot say much. But I hope that at least I gave you some elements of an answer to your last two questions.
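The big-step/small-step distinction can be made concrete on a toy arithmetic calculus (an illustrative sketch of my own, not any standard formalism):

```python
# Toy calculus: expressions are ints (values) or ("add", t1, t2).

def big_step(t):
    # t ⇓ v : compute the final value in one judgment.
    if isinstance(t, int):
        return t
    _, t1, t2 = t
    return big_step(t1) + big_step(t2)

def small_step(t):
    # t → t' : perform exactly one reduction, or return None for values.
    if isinstance(t, int):
        return None
    _, t1, t2 = t
    if isinstance(t1, int) and isinstance(t2, int):
        return t1 + t2
    if isinstance(t1, int):
        return ("add", t1, small_step(t2))
    return ("add", small_step(t1), t2)

t = ("add", ("add", 1, 2), 4)
v = big_step(t)
# The same value is reached by iterating the small-step relation.
while not isinstance(t, int):
    t = small_step(t)
print(v, t)  # 7 7
```

An abstract machine would go further still, replacing the recursive descent in `small_step` with an explicit control stack.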
dyld: Library not loaded: /sw/lib/libpng12.0.dylib Referenced from: /sw/bin/latex Reason: Incompatible library version: latex requires version 30.0.0 or later, but libpng12.0.dylib provides version 26.0.0 sh: line 1: 81964 Trace/BPT trap latex -output-directory=/var/folders/71/71wgK3o7FtGClPSEzjU3v++++TM/-Tmp-/inkscape-9yWR6s -halt-on-error /var/folders/71/71wgK3o7FtGClPSEzjU3v++++TM/-Tmp-/inkscape-9yWR6s/eq.tex > /var/folders/71/71wgK3o7FtGClPSEzjU3v++++TM/-Tmp-/inkscape-9yWR6s/eq.out invalid LaTeX input: \(\displaystyle\frac{\pi^2}{6}=\lim_{n \to \infty}\sum_{k=1}^n \frac{1}{k^2}\) temporary files were left in: /var/folders/71/71wgK3o7FtGClPSEzjU3v++++TM/-Tmp-/inkscape-9yWR6s Curiously, when I print out the version number of "/sw/lib/libpng12.0.dylib" via otool, it tells me that I already use the most recent version, which is 32.0.0, so there should be no problems. I just found out what goes wrong: The Inkscape .dmg application package which I got from the Inkscape homepage comes with its own version of libpng12.0.dylib, located in "Inkscape.app/Contents/Resources/lib/", which has version number 26.0.0. It seems that Inkscape uses this file although the error message above explicitly shows the Fink path "/sw/lib/"!! So I just renamed the libpng12.0.dylib that comes with the application package so that Inkscape doesn't find / use this file anymore, and now the LaTeX effect works fine. Just wanted to tell you in case someone else has a similar problem... bluefloyd P.S.: My version of Inkscape is 0.46. P.P.S.: The identical problem occurs when using the textext extension. The solution is the same.
An additional comment concerning textext: on a Mac there are actually two places where you can put the files of the textext extension: 1) ~/inkscape/extensions/ 2) /Applications/Inkscape.app/Contents/Resources/extensions/ I propose putting the files in 2), since the extension looks for inkex.py, which has to be in the same directory as the extension itself; inkex.py lies in 2), not in 1). Another possibility would be to use 1) and copy inkex.py (and probably some other needed files?) from 2) to 1).
The heart is the muscular organ responsible for pumping blood in order to supply the body with oxygen and nutrients. The heart consists of four chambers, namely the left and right atria and the left and right ventricles. What is modeling? Before talking about cardiac modeling, it makes sense to first define what we mean by modeling in general. Models are found everywhere, and they are used to make the world comprehensible. Modeling is simply the art of creating and using models. When you are driving home from work and you want to estimate how long it will take before you are home, you can start by checking how far it is, then check the average speed limit, and then simply divide the distance by the average speed limit to get an estimate of the time. Taking the distance and dividing by the average speed limit is a very simple model, but often good enough. You could also take into account construction work along the way, or you could use data on the number of cars on the road during the time period you are driving for previous days, but your model, which will be a collection of information and a way of putting the information together, will soon be too complicated for you to use. This is where computational models come in. Models are particularly useful when we want to use computers to help us understand some type of phenomenon. In this case we often call the model computational. If you can formulate your problem in terms of mathematical equations, it is very likely that you can solve it with a computer. What is cardiac modeling? Cardiac modeling is simply the art of creating and applying models of the heart. As you may know, the heart exhibits a so-called multi-scale nature, meaning that there are processes happening at different scales that work together.
For instance, there are chemical reactions happening at the molecular level that are responsible for making the heart contract, and any perturbation in these processes might have an effect on the overall pumping function. Multi-scale modeling and homogenization Modeling processes across different scales is referred to as multi-scale modeling, and this is an ongoing research topic. Modeling processes across different scales can be extremely hard, especially since the events usually happen at different time scales, and the interaction between scales often goes both ways, i.e., changes in the molecular processes affect processes at the tissue level, and changes in the processes at the tissue level give rise to changes at the molecular level. Since multi-scale modeling is hard, it is often useful to focus the attention on one particular scale. The question is then what to do with the multi-scale issue. A common approach is to assume some type of continuity down at the lower scale. This approach is often referred to as homogenization. Let's take a realistic example. The cardiac tissue is made up of cardiac cells. Each cell can be thought of (here comes the modeling approach) as a compartment having an inside (the intracellular space), an outside (the extracellular space), and something separating the inside from the outside (the membrane). Now, if you wanted to make your model of the cardiac tissue "realistic", you should be able to zoom in and see some regions with only intracellular space and some regions with only extracellular space. However, having this amount of detail in your model can be very expensive when you want to do computations. Therefore, a common approach is to assume that at any point you have both an extracellular and an intracellular space. Within the cardiac modeling community this modeling approach is used when trying to model the electrical current that flows through the heart, and we call the resulting model the bidomain model.
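As an aside, one standard way of writing the bidomain model (taken from the general literature rather than from this text, so the notation here is an assumption) couples the transmembrane potential $v$ and the extracellular potential $u_e$:

$$\chi\left(C_m \frac{\partial v}{\partial t} + I_{\text{ion}}(v)\right) = \nabla\cdot(\sigma_i \nabla v) + \nabla\cdot(\sigma_i \nabla u_e)$$

$$0 = \nabla\cdot(\sigma_i \nabla v) + \nabla\cdot\left((\sigma_i + \sigma_e)\nabla u_e\right)$$

Here $\sigma_i$ and $\sigma_e$ are the intracellular and extracellular conductivity tensors, both defined at every point of the tissue, which is exactly the homogenization assumption described above; $\chi$ is the membrane surface area per unit volume, $C_m$ the membrane capacitance, and $I_{\text{ion}}$ the ionic current across the membrane.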
Modeling the Mechanics of the heart Since I spent a lot of my time during my PhD looking at models for the mechanics of the heart, I will write a little section about this. When modeling the mechanics of the heart at the organ level, it is common to focus only on the ventricles, and in particular the left ventricle, which is the chamber responsible for pumping blood from the heart to the body. In other engineering disciplines, such as mechanical or civil engineering, when talking about mechanics we typically talk about stress and strain. Stress can be thought of as the forces acting on a material, whereas strain is the resulting change in shape in response to a change in stresses. Stress and strain are like yin and yang. Knowing only one of them will give you only half the story. For example, if you stretch a rubber band by 10 percent you hardly need to apply any force to achieve that, while you need to apply a massive amount of force if you want to do the same with steel. The difference between rubber and steel lies in their material properties, which in the case of the heart are generally unknown. Moreover, while it is "easy" to measure the strain (which can be achieved using imaging techniques), it is impossible to measure the stresses in the heart. Therefore, models are the only way to get an estimate of the stresses in the heart. The law of Laplace The first thing we need to do in order to estimate the stresses in the heart is to make assumptions. Assumptions are an important ingredient in modeling, and how well you represent the physical object you are studying depends on your modeling assumptions. One of the first models ever used to study stresses in the left ventricle is known as the law of Laplace.
Laplace was a famous mathematician from the 18th century, and although his name is probably best known from the Laplace equation or the Laplace transform, most clinicians know him for giving his name to a formula that relates the stresses in the heart to a few measurable quantities. If we assume that the left ventricle is a sphere with radius $r$ and wall thickness $w$ that is subjected to a pressure $P$, then the stress on the wall $\sigma$ is given by $$\sigma = \frac{P \cdot r}{2 w}$$ I know what you are thinking; the underlying assumption here, that the ventricle resembles a sphere, is pretty far from reality. Although it is true that we cannot really trust the numbers we get out of this formula, we can use it to build some intuition about what happens to the stresses when the morphology (the form and shape) of the ventricle changes. For example, we would expect the stresses to go up when the radius goes up, and to go down when the thickness of the ventricular wall goes up. What is a realistic model? The law of Laplace is a great model if you want to build up some intuition about how stresses are related to the shape of the left ventricle, but how can we model the stresses in the heart more realistically? First of all, we need a more realistic geometry. For example, we could use medical images of the heart and then perform so-called image segmentation in order to obtain a more realistic geometry. Second, we need to incorporate a more realistic structure of the cardiac tissue. It is well known that cardiac tissue is a non-linear, anisotropic material. By non-linear we mean that the amount of force (or stress) you need to apply changes non-linearly with the amount of strain. This is similar to rubber; to begin with you can stretch rubber a lot without applying much force, but as the rubber stretches you will find that you need to apply a greater amount of force to stretch it further.
By anisotropic we mean that the stress-strain relation is different depending on the direction in which you look at the material. For fibre-reinforced concrete, the properties along the steel fibers are different from those in the direction orthogonal to the fibers. In cardiac tissue we also have fibers (so-called muscle fibers) that bundle together and vary in orientation through the wall. These fibers are in turn organized in a laminar structure called sheets. This microstructure is of course an important ingredient in the modeling assumptions, but it is not possible to accurately measure this microstructure in a beating heart. In the end, the level of realism in your model is determined by what you want to use the model for, and of course by the availability of realistic model information.
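The law of Laplace discussed above is simple enough to turn directly into a tiny computational model. The Python sketch below uses purely illustrative numbers, not physiological data:

```python
def laplace_wall_stress(pressure, radius, wall_thickness):
    """Law of Laplace for a spherical shell: sigma = P * r / (2 * w)."""
    return pressure * radius / (2 * wall_thickness)

# Illustrative (non-physiological) numbers: at fixed pressure and wall
# thickness, doubling the radius doubles the wall stress, matching the
# intuition described in the text about a dilating ventricle.
baseline = laplace_wall_stress(2.0, 1.0, 0.5)   # stress for the "normal" sphere
dilated  = laplace_wall_stress(2.0, 2.0, 0.5)   # same pressure, twice the radius
```

Even a two-line model like this already encodes the qualitative trend (thin, dilated walls carry more stress) that clinicians use the formula for.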
I came across an identity involving binomial coefficients. I'm not sure if I'm looking at the identity the wrong way, but I am not aware whether this identity is known or whether there is an (easy) proof for it. Take a nonnegative integer $n$ and form two $k$-tuples consisting of integers at most $n$, say $(a_1,a_2,\ldots,a_k)$ and $(b_1,b_2,\ldots,b_k)$, such that $a_i\geq a_{i+1}$ and $b_i\geq b_{i+1}$. Let $a_0=b_0=n$ and $a_{k+1}=b_{k+1}=0$. Let $j\in\mathbb N$. The sum goes as follows: $$\sum_{x_1+x_2+\cdots+x_{k+1}=j} ~~\sum_{m=1}^{k+1} \binom{a_{m-1}-a_m+x_m}{a_{m-1}-a_m} = \sum_{x_1+x_2+\cdots+x_{k+1}=j} ~~\sum_{m=1}^{k+1} \binom{b_{m-1}-b_m+x_m}{b_{m-1}-b_m}.$$ I've asked this question at SE but received no replies.
I have two questions - the first is what the title refers to, and the second is something I want a reference for (I thought I'd include them in one post since they are very strongly related). Sorry this post is a bit long; I tried to put in as much detail as I could. 1st question: I'm interested only in the group $GL_n(F_q)$. In Carter's book "Finite Groups of Lie Type: Conjugacy Classes and Complex Characters", in Chapter 7, "The generalized characters of Deligne-Lusztig", the construction of the virtual representations $R_{T, \theta}$ as alternating sums of $l$-adic cohomology of Deligne-Lusztig varieties is given in some detail, and a series of formulae about these are proved in the chapter ($T$ a torus, and $\theta$ a character of $T^{F}$). It says that if $\theta \in \widehat{T^{F}}$ is in general position, then $\pm R_{T, \theta}$ is irreducible. The following formula is given (also in http://en.wikipedia.org/wiki/Deligne%E2%80%93Lusztig_theory), where $g=su=us$, with $s,u$ the semisimple and unipotent parts, $Q_{T}(u) = R_{T, 1}(u)$, $C^{0}(s)$ the identity connected component of the centralizer of $s$, and $F$ the Frobenius endomorphism: $ R_{T, \theta}(g) = \frac{1}{ | C^{0}(s)^{F} |} \sum_{ x \in G^{F}, x^{-1}sx \in T^{F} } \theta ( x^{-1} s x) Q_{x T x^{-1}}^{C^{0}(s)} (u) $ The book then says that $Q_{T}(u)$ is a Green function and depends only on the torus (I understand it will not change if we conjugate the torus in $G^F$ either, so it essentially corresponds to an element of $S_n$ for the general linear group of size $n$, which is what I'm most curious about; unless I'm mistaken). The book does not give explicit formulae for these $Q_{T}(u)$, but it does give orthogonality relations and such - explicit formulae are what I'm looking for: Question: What is an explicit formula for these $Q_{T}(u)$?
How does this relate to the Green functions that I've been studying in Macdonald's book "Symmetric Functions and Hall Polynomials", in the chapter "Characters of $GL_n$ over a finite field" - i.e., how do I express the character $ \pm R_{T, \theta}$ as a sum of the irreducible characters described by Green functions in Macdonald's book (or a single irreducible character in the case where $\theta$ is in general position)? In that book, I've learnt that the polynomials correspond to symmetric functions $S_{\lambda}$, via a correspondence that maps $A$, the sum of the representation rings for all $n$, to $B$, an algebra generated by elementary symmetric functions in independent variables $X_{i,f}$ ($f$ ranges over all irreducible polynomials in $\mathbb{F}_{q}[t]$). I'm sorry I'm being a bit vague here - it would take pages to define precisely all the notation that Macdonald uses in his book; feel free to work with any alternative explicit definition of these Green functions (but please include a reference so I know where to look it up). 2nd question: I have looked through Carter's book and Digne & Michel's book on the same topic, but I have been unable to find a reference which gives the representing matrices for these virtual representations $\pm R_{T, \theta}$ of these finite Lie type groups (the fact that they are defined by an alternating sum complicates matters somewhat). I'm not so interested in the entries of the representing matrices as such, just a construction of the module which enables you to find the representing matrices. Can anyone suggest a good reference for this?
The closest I can find is Lusztig's original book "Characters of reductive groups over finite fields", where it is mentioned that $l$-adic intersection homology can be used as a substitute (this was from what I could see in the Google Books preview); but I hear this book is horrible to learn from, and I'm not entirely certain that what's given there is what I'm looking for (I don't have a copy of the book at present).
Definition of Exact Equation A differential equation of type \[P\left( {x,y} \right)dx + Q\left( {x,y} \right)dy = 0\] is called an exact differential equation if there exists a function of two variables \(u\left( {x,y} \right)\) with continuous partial derivatives such that \[du\left( {x,y} \right) = P\left( {x,y} \right)dx + Q\left( {x,y} \right)dy.\] The general solution of an exact equation is given by \[u\left( {x,y} \right) = C,\] where \(C\) is an arbitrary constant. Test for Exactness Let functions \(P\left( {x,y} \right)\) and \(Q\left( {x,y} \right)\) have continuous partial derivatives in a certain domain \(D.\) The differential equation \(P\left( {x,y} \right)dx + Q\left( {x,y} \right)dy = 0\) is an exact equation if and only if \[\frac{{\partial Q}}{{\partial x}} = \frac{{\partial P}}{{\partial y}}.\] Algorithm for Solving an Exact Differential Equation First it's necessary to make sure that the differential equation is exact using the test for exactness: \[\frac{{\partial Q}}{{\partial x}} = \frac{{\partial P}}{{\partial y}}.\] Then we write the system of two differential equations that define the function \(u\left( {x,y} \right):\) \[\left\{ \begin{array}{l} \frac{{\partial u}}{{\partial x}} = P\left( {x,y} \right)\\ \frac{{\partial u}}{{\partial y}} = Q\left( {x,y} \right) \end{array} \right.\] Integrate the first equation over the variable \(x.\) Instead of the constant \(C,\) we write an unknown function of \(y:\) \[u\left( {x,y} \right) = \int {P\left( {x,y} \right)dx} + \varphi \left( y \right).\] Differentiating with respect to \(y,\) we substitute the function \(u\left( {x,y} \right)\) into the second equation: \[\frac{{\partial u}}{{\partial y}} = \frac{\partial }{{\partial y}}\left[ {\int {P\left( {x,y} \right)dx} + \varphi \left( y \right)} \right] = Q\left( {x,y} \right).\] From here we get an expression for the derivative of the unknown function \(\varphi \left( y \right):\) \[\varphi'\left( y \right) = Q\left( {x,y} \right) - \frac{\partial }{{\partial y}}\left( {\int {P\left( {x,y} \right)dx} } \right).\] By integrating the last expression, we find the function \(\varphi \left( y \right)\) and, hence, the function \(u\left( {x,y} \right):\) \[u\left( {x,y} \right) = \int {P\left( {x,y} \right)dx} + \varphi \left( y \right).\] The general solution of the exact differential equation is given by \[u\left( {x,y} \right) = C.\] Note: In Step \(3,\) we can integrate the second equation over the variable \(y\) instead of integrating the first equation over \(x.\) After integration we need to find the unknown function \(\psi \left( x \right).\) Solved Problems Click a problem to see the solution. Example 1 Solve the differential equation \(2xydx + \left( {{x^2} + 3{y^2}} \right)dy = 0.\) Example 2 Find the solution of the differential equation \(\left( {6{x^2} - y + 3} \right)dx + \left( {3{y^2} - x - 2} \right)dy = 0.\) Example 3 Solve the differential equation \({e^y}dx + \left( {2y + x{e^y}} \right)dy = 0.\) Example 4 Solve the equation \(\left( {2xy - \sin x} \right)dx + \left( {{x^2} - \cos y} \right)dy = 0.\) Example 5 Solve the equation \(\left( {1 + 2x\sqrt {{x^2} - {y^2}} } \right)dx - 2y\sqrt {{x^2} - {y^2}} \,dy = 0.\) Example 6 Solve the differential equation \(\frac{1}{{{y^2}}} - \frac{2}{x} = \frac{{2xy'}}{{{y^3}}}\) with the initial condition \(y\left( 1 \right) = 1.\) Example 1. Solve the differential equation \(2xydx + \left( {{x^2} + 3{y^2}} \right)dy = 0.\) Solution.
The given equation is exact because the partial derivatives are the same: \[\frac{{\partial Q}}{{\partial x}} = \frac{\partial }{{\partial x}}\left( {{x^2} + 3{y^2}} \right) = 2x,\;\; \frac{{\partial P}}{{\partial y}} = \frac{\partial }{{\partial y}}\left( {2xy} \right) = 2x.\] We have the following system of differential equations to find the function \(u\left( {x,y} \right):\) \[\left\{ \begin{array}{l} \frac{{\partial u}}{{\partial x}} = 2xy\\ \frac{{\partial u}}{{\partial y}} = {x^2} + 3{y^2} \end{array} \right.\] By integrating the first equation with respect to \(x,\) we obtain \[u\left( {x,y} \right) = \int {2xydx} = {x^2}y + \varphi \left( y \right).\] Substituting this expression for \(u\left( {x,y} \right)\) into the second equation gives us: \[\frac{{\partial u}}{{\partial y}} = \frac{\partial }{{\partial y}}\left[ {{x^2}y + \varphi \left( y \right)} \right] = {x^2} + 3{y^2} \;\;\Rightarrow\;\; {x^2} + \varphi'\left( y \right) = {x^2} + 3{y^2} \;\;\Rightarrow\;\; \varphi'\left( y \right) = 3{y^2}.\] By integrating the last equation, we find the unknown function \(\varphi \left( y \right):\) \[\varphi \left( y \right) = \int {3{y^2}dy} = {y^3},\] so that the general solution of the exact differential equation is given by \[{x^2}y + {y^3} = C,\] where \(C\) is an arbitrary constant.
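As a sanity check, the exactness condition and the potential found in Example 1 can be verified numerically. This is an illustrative Python sketch (using central finite differences), not part of the original solution method:

```python
# Numerical spot-check of Example 1: P dx + Q dy = 0 with
# P = 2xy, Q = x^2 + 3y^2, and claimed potential u(x, y) = x^2*y + y^3.
def P(x, y): return 2 * x * y
def Q(x, y): return x**2 + 3 * y**2
def u(x, y): return x**2 * y + y**3

# Central finite differences for the partial derivatives.
def d_dx(f, x, y, h=1e-6): return (f(x + h, y) - f(x - h, y)) / (2 * h)
def d_dy(f, x, y, h=1e-6): return (f(x, y + h) - f(x, y - h)) / (2 * h)

for (x, y) in [(0.3, -1.2), (2.0, 0.7), (-1.5, 1.5)]:
    # Test for exactness: dQ/dx == dP/dy (both equal 2x here).
    assert abs(d_dx(Q, x, y) - d_dy(P, x, y)) < 1e-6
    # u is a potential: du/dx == P and du/dy == Q.
    assert abs(d_dx(u, x, y) - P(x, y)) < 1e-5
    assert abs(d_dy(u, x, y) - Q(x, y)) < 1e-5
```

A check like this is a cheap way to catch sign errors when working through the algorithm by hand.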
3.1 Show that, if you ignore drag, a projectile fired at an initial velocity \(v_0\) and angle \(\theta\) has a range R given by \[R = \frac{v_0^2 \sin 2\theta}{g}\] A target is situated 1.5 km away from a cannon across a flat field. Will the target be hit if the firing angle is \(42^{\circ}\) and the cannonball is fired at an initial velocity of 121 m/s? (Cannonballs, as you know, do not bounce). To increase the cannon's range, you put it on a tower of height \(h_0\). Find the maximum range in this case, as a function of the firing angle and velocity, assuming the land around is still flat. 3.2 You push a box of mass m up a slope with angle \(\theta\) and kinetic friction coefficient \(\mu\). Find the minimum initial speed v you must give the box so that it reaches a height h. 3.3 A uniform board of length L and mass M lies near a boundary that separates two regions. In region 1, the coefficient of kinetic friction between the board and the surface is \(\mu _1\), and in region 2, the coefficient is \(\mu _2\). Our objective is to find the net work W done by friction in pulling the board directly from region 1 to region 2, under the assumption that the board moves at constant velocity. Suppose that at some point during the process, the right edge of the board is a distance x from the boundary, as shown. When the board is at this position, what is the magnitude of the force of friction acting on the board, assuming that it's moving to the right? Express your answer in terms of all relevant variables (L, M, g, x, \(\mu _1\), and \(\mu _2\)). As we've seen in Section 3.1, when the force is not constant, you can determine the work by integrating the force over the displacement, \(W= \int F(x) dx\). Integrate your answer from (a) to get the net work you need to do to pull the board from region 1 to region 2. 3.4 The government wishes to secure votes from car owners by increasing the speed limit on the highway from 120 to 140 km/h.
The opposition points out that this is both more dangerous and will cause more pollution. Lobbyists from the car industry tell the government not to worry: the drag coefficients of the cars have gone down significantly, and their construction is a lot more solid than at the time the 120 km/h speed limit was set. Suppose the 120 km/h limit was set with a Volkswagen Beetle (\({c_d}=0.48\)) in mind, and the lobbyist's car has a drag coefficient of 0.19. Will the new car need to do more or less work to maintain a constant speed of 140 km/h than the Beetle at 120 km/h? What is the ratio of the total kinetic energy released in a full head-on collision (resulting in an immediate standstill) between two cars both at 140 km/h and two cars both at 120 km/h? The government dismisses the opposition's objections on safety by stating that on the highway, all cars move in the same direction (opposite-direction lanes are well separated), so if they all move at 140 km/h, it would be just as safe as all at 120 km/h. The opposition then points out that running a Beetle (those are still around) at 120 km/h is already challenging, so there would be speed differences between newer and older cars. The government claims that the 20 km/h difference won't matter, as clearly even a Beetle can survive a 20 km/h collision. Explain why their argument is invalid. 3.5 Nuclear fusion, the process that powers the Sun, occurs when two low-mass atomic nuclei fuse together to make a larger nucleus, releasing substantial energy. Fusion is hard to achieve because atomic nuclei carry positive electric charge, and their electrical repulsion makes it difficult to get them close enough for the short-range nuclear force to bind them into a single nucleus.
The figure below shows the potential-energy curve for fusion of two deuterons (heavy hydrogen nuclei, each consisting of a proton and a neutron). The energy is measured in million electron volts (\(\mathrm{MeV}\), \(1\,\mathrm{eV}=1.6 \cdot 10^{-19}\,\mathrm{J}\)), a unit commonly used in nuclear physics, and the separation is in femtometers (\(1\,\mathrm{fm}=10^{-15}\,\mathrm{m}\)). Find the position(s) (if any) at which the force between two deuterons is zero. Find the kinetic energy two initially widely separated deuterons need to have to get close enough to fuse. The energy available in fusion is the energy difference between that of widely separated deuterons and the bound deuterons after they've 'fallen' into the deep potential well shown in the figure. About how big is that energy? Determine whether the force between two deuterons that are 4 fm apart is repulsive, attractive, or zero. 3.6 A pigeon in flight experiences a drag force due to air resistance given approximately by \(F=bv^2\), where v is the flight speed and b is a constant. What are the units of b? What is the largest possible speed of the pigeon if its maximum power output is P? By what factor does the largest possible speed increase if the maximum power output is doubled? 3.7 For which value(s) of the parameters \(\alpha, \beta, \text { and } \gamma\) is the force given by \[\boldsymbol{F}=\left(x^{3} y^{3}+\alpha z^{2}, \beta x^{4} y^{2}, \gamma x z\right)\] conservative? Find the force for the potential energy given by \(U(x,y,z)=\frac{xy}{z}-\frac{xz}{y}\). 3.8 A point mass is connected to two opposite walls by two springs, as shown in the figure. The distance between the walls is 2L. The left spring has rest length \(l_1=\frac{L}{2}\) and spring constant \(k_1=k\); the right spring has rest length \(l_2=\frac{3L}{4}\) and spring constant \(k_2=3k\). Determine the magnitude of the force acting on the point mass if it is at x=0. Determine the equilibrium position of the point mass. Find the potential energy of the point mass as a function of x.
Use the equilibrium point from (b) as your point of reference. If the point mass is displaced a small distance from its equilibrium position and then released, it will oscillate. By comparing the equation of the net force on the mass in this system with a simple harmonic oscillator, determine the frequency of that oscillation. (We'll return to systems oscillating about the minimum of a potential energy in Section 8.1.4; feel free to take a sneak peek ahead). 3.9 A block of mass m=3.50 kg slides from rest a distance d down a frictionless incline at angle \(\theta=30.0^\circ\), where it runs into a spring of spring constant 450 N/m. When the block momentarily stops, it has compressed the spring by 25.0 cm. Find d. What is the distance between the first block-spring contact and the point at which the block's speed is greatest? 3.10 Playground slides frequently have sections of varying slope: steeper ones to pick up speed, less steep ones to lose speed, so kids (and students) arrive at the bottom safely. We consider a slide with two steep sections (angle \(\alpha\)) and two less steep ones (angle \(\beta\)). Each of the sections has a width L. The slide has a coefficient of kinetic friction \(\mu\). Kids start at the top of the slide with velocity zero. Calculate the velocity of a kid of mass m at the end of the first steep section. Now calculate the velocity of the kid at the bottom of the entire slide. If L=1.0 m, \(\alpha=30^\circ\) and \(\mu=0.5\), find the minimum value \(\beta\) must have so that kids up to 30 kg can enjoy the slide (Hint: what is the minimum requirement for the slide to be functional?). A given slide has \(\alpha=30^\circ\), \(\beta=20^\circ\), and \(\mu=0.5\). A young child of 10 kg slides down, while its cousin of 20 kg sits at the bottom. When the sliding kid reaches the end, the two children collide and together slide further over the ground. The coefficient of kinetic friction with the ground is 0.70.
How far do the two children slide before they come to a full stop? 3.11 In this problem, we consider the anharmonic potential given by \[U(x)=\frac{a}{2}\left(x-x_{0}\right)^{2}+\frac{b}{3}\left(x-x_{0}\right)^{3} \label{anharmonic}\] where a, b, and \(x_0\) are positive constants. Find the dimensions of a, b, and \(x_0\). Determine whether the force on a particle at a position \(x \gg x_0\) is attractive or repulsive (taking the origin as your point of reference). Find the equilibrium point(s) (if any) of this potential, and determine their stability. For b=0, the potential given in Equation (3.24) becomes harmonic (i.e., the potential of a harmonic oscillator), in which case a particle that is initially located at a non-equilibrium point will oscillate. Are there initial values for x for which a particle in this anharmonic potential will oscillate? If so, find them, and find the approximate oscillation frequency; if not, explain why not. (NB: As the problem involves a third-order polynomial function, you may find yourself having to solve a third-order equation. When that happens, for your answer you can simply say: the solution x to the problem X). 3.12 After you have successfully finished your mechanics course, you decide to launch the book into an orbit around the Earth. However, the teacher is not convinced that you do not need it anymore and asks the following question: What is the ratio between the kinetic energy and the potential energy of the book in its orbit? Let m be the mass of the book, and \(M_{\oplus} \text { and } R_{\oplus}\) the mass and the radius of the Earth, respectively. The gravitational pull at distance r from the center is given by Newton's law of gravitation (Equation 2.2.3): \[F_{\mathrm{g}}(r)=-G \frac{m M_{\oplus}}{r^{2}} \hat{\boldsymbol{r}}\] Find the orbital velocity v of an object at height h above the surface of the Earth. Express the work required to get the book to height h.
Calculate the ratio between the kinetic and the potential energy of the book in its orbit. What requires more work: getting the book to the International Space Station (orbiting at h=400 km) or giving it the same speed as the ISS? 3.13 Using dimensional arguments, in Problem 1.4 we found the scaling relation of the escape velocity (the minimal initial velocity an object must have to completely escape the gravitational pull of the planet/moon/other object it's on) with the mass and the radius of the planet. Here, we'll re-derive the result, including the numerical factor that dimensional arguments cannot give us. Derive the expression for the gravitational potential energy, \(U_g\), of an object of mass m due to a gravitational force \(F_g\) given by Newton's law of gravitation (Equation 2.2.3) \[F_{\mathrm{g}}=-\frac{G m M}{r^{2}} \hat{r}\] Set the value of the integration constant by \({U_g} \rightarrow 0 \text { as } r \rightarrow \infty\). Find the escape velocity on the surface of a planet of mass M and radius R by equating the initial kinetic energy of your object (when launched from the surface of the planet) to the total gravitational potential energy it has there. 3.14 A cannonball is fired upwards from the surface of the Earth with just enough speed such that it reaches the Moon. Find the speed of the cannonball as it crashes on the Moon's surface, taking the gravity of both the Earth and the Moon into account. Table B.3 contains the necessary astronomical data. 3.15 The draw force F(x) of a Turkish bow as a function of the bowstring displacement x (for \(x < 0\)) is approximately given by a quadrant of the ellipse \[\left(\frac{F(x)}{F_{\max }}\right)^{2}+\left(\frac{x+d}{d}\right)^{2}=1\] At rest, the bowstring is at x=0; when pulled all the way back, it's at x=-d. Calculate the work done by the bow in accelerating an arrow of mass m=37 g, for d=0.85 m and \(F_{\max}=360\) N.
Assuming that all of the work is converted to kinetic energy of the arrow, find the maximum distance the arrow can fly. Hint: which variable can you control when shooting? Maximize the distance with respect to that variable. Compare the result of (b) with the range of a bow that acts like a simple (Hookean) spring with the same values of \(F_{\max}\) and d. How much further does the arrow shot from the Turkish bow fly than that of the simple spring bow? 3.16 A massive cylinder with mass M and radius R is connected to a wall by a spring at its center (see figure). The cylinder can roll back and forth without slipping. Determine the total energy of the system consisting of the cylinder and the spring. Differentiate the energy of problem (16a) to obtain the equation of motion of the cylinder and spring system. Find the oscillation frequency of the cylinder by comparing the equation of motion at (16b) with that of a simple harmonic oscillator (a mass-spring system). 3.17 A small particle (blue dot) is placed atop the center of a hemispherical mount of ice of radius R (see figure). It slides down the side of the mount with negligible initial speed. Assuming no friction between the ice and the particle, find the height at which the particle loses contact with the ice. Hint: To solve this problem, first draw a free body diagram, and combine what you know of energy and forces. 3.18 Pulling membrane tubes The (potential) energy of a cylindrical membrane tube of length L and radius R is given by \[\mathscr{E}_{\text { tube }}(R, L)=2 \pi R L\left(\frac{\kappa}{2} \frac{1}{R^{2}}+\sigma\right)\] Here \(\kappa\) is the membrane's bending modulus and \(\sigma\) its surface tension. Find the dimensions of the bending modulus and the surface tension. Find the forces acting on the tube along its radial and axial directions. Membrane tubes are often pulled by membrane motors pulling along the axial direction, as sketched in Figure 3.5.
For that case, we add the work done by the motors to the total energy of the tube, so we get: \[\mathscr{E}_{\text { tube }}(R, L)=2 \pi R L\left(\frac{\kappa}{2} \frac{1}{R^{2}}+\sigma\right)-F L\] Show that for a stable tube, the motors need to exert a force of magnitude \(F=2 \pi \sqrt{2 \kappa \sigma}\). Can the force of (c) be considered to be an effective spring force? If so, find its associated spring constant. If not, explain why not.
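The claimed force in 3.18(c) can be spot-checked numerically by minimizing the tube energy per unit length over R. The Python sketch below uses arbitrary illustrative values for \(\kappa\) and \(\sigma\) (not values from the text):

```python
# Numerical check of F = 2*pi*sqrt(2*kappa*sigma) for a stable membrane tube.
# kappa and sigma below are arbitrary illustrative positive values.
import math

kappa, sigma = 1.3, 0.7

def energy_per_length(R):
    # E_tube / L = 2*pi*R*(kappa/(2*R^2) + sigma), per the tube energy above
    return 2 * math.pi * R * (kappa / (2 * R**2) + sigma)

# Minimize over R with a crude grid scan: the equilibrium radius should be
# R* = sqrt(kappa / (2*sigma)), and the force the motors must supply is
# dE/dL at that radius, i.e. the energy per unit length itself.
R_star = min((n / 10000 for n in range(1, 100000)), key=energy_per_length)
F = energy_per_length(R_star)

assert abs(R_star - math.sqrt(kappa / (2 * sigma))) < 1e-3
assert abs(F - 2 * math.pi * math.sqrt(2 * kappa * sigma)) < 1e-3
```

The grid scan stands in for the analytic minimization you are asked to do by hand; both the equilibrium radius and the resulting force agree with the closed-form answers.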
State Feedback

Latest revision as of 04:06, 19 November 2012

Prev: Linear Systems — Chapter 6: State Feedback — Next: Output Feedback

This chapter describes how feedback can be used to shape the local behavior of a system. The concept of reachability is introduced and used to investigate how to "design" the dynamics of a system through placement of its eigenvalues. In particular, it will be shown that under certain conditions it is possible to assign the system eigenvalues to arbitrary values by appropriate feedback of the system state.

== Textbook Contents ==
* 1. Reachability
* 2. Stabilization by State Feedback
* 3. State Feedback Design Issues
* 4. Integral Action

== Supplemental Information ==
* [[#Frequently Asked Questions|Frequently Asked Questions]]
* Wikipedia entries: [http://en.wikipedia.org/wiki/Controllability Controllability (reachability)], [http://en.wikipedia.org/wiki/State_space_%28controls%29#Feedback state feedback], [http://en.wikipedia.org/wiki/Optimal_control#Linear_quadratic_control LQR]
* [[#Additional Information|Additional Information]]

== Chapter Summary ==
This chapter describes how state feedback can be used to design the (closed loop) dynamics of the system:

* A linear system with dynamics
<center><amsmath>
\dot x = A x + B u, \qquad y = C x + D u
</amsmath></center>
is said to be ''reachable'' if we can find an input <math>u(t)</math> defined on the interval <math>[0, T]</math> that can steer the system from a given initial point <math>x(0) = x_0</math> to a desired final point <math>x(T) = x_f</math>.

* The ''reachability matrix'' for a linear system is given by
<center><amsmath>
W_r = \left[\begin{matrix} B & AB & \cdots & A^{n-1}B \end{matrix}\right].
</amsmath></center>
A linear system is reachable if and only if the reachability matrix <math>W_r</math> is invertible (assuming a single-input/single-output system). Systems that are not reachable have states that are constrained to have a fixed relationship with each other.

* A linear system in ''reachable canonical form'' is always reachable and has the characteristic polynomial
<center><amsmath>
\det(sI-A) = s^n+a_1 s^{n-1} + \cdots + a_{n-1}s + a_n.
</amsmath></center>
A reachable linear system can be transformed into reachable canonical form through the use of a coordinate transformation <math>z = T x</math>.

* A state feedback law has the form
<center><amsmath>
u = -K x + k_r r
</amsmath></center>
where <math>r</math> is the reference value for the output. The closed loop dynamics for the system are given by
<center><amsmath>
\dot x = (A - B K) x + B k_r r.
</amsmath></center>
The stability of the system is determined by the stability of the matrix <math>A - BK</math>. The equilibrium point and steady-state output (assuming the system is stable) are given by
<center><amsmath>
x_e = -(A-BK)^{-1} B k_r r \qquad y_e = C x_e.
</amsmath></center>
Choosing <math>k_r</math> as
<center><amsmath>
k_r = {-1}/\left(C (A-BK)^{-1} B\right)
</amsmath></center>
gives <math>y_e = r</math>.

* If a system is reachable, then there exists a feedback law of the form <math>u = -Kx + k_r r</math> that gives a closed loop system with an arbitrary characteristic polynomial. Hence the eigenvalues of a reachable linear system can be placed arbitrarily through the use of an appropriate feedback control law.

* ''Integral feedback'' can be used to provide zero steady-state error instead of careful calibration of the gain <math>k_r</math>. An integral feedback controller has the form
<center><amsmath>
u = - k_p (x - x_e) - k_i z + k_r r,
</amsmath></center>
where
<center><amsmath>
\dot z = y - r
</amsmath></center>
is the integral error. The gains <math>k_p</math>, <math>k_i</math> and <math>k_r</math> can be found by designing a stabilizing state feedback for the system dynamics augmented by the integrator dynamics.

* A ''linear quadratic regulator'' minimizes a quadratic cost function in the state and the input. The solution to the LQR problem is given by a linear control law of the form
<center><amsmath>
u = -Q_u^{-1} B^T P x,
</amsmath></center>
where <math>P \in R^{n \times n}</math> is a positive definite, symmetric matrix that satisfies the equation
<center><amsmath>
P A + A^T P - P B Q_u^{-1} B^T P + Q_x = 0.
</amsmath></center>
This equation is called the ''algebraic Riccati equation'' and can be solved numerically.

== Additional Exercises ==
The following exercises cover some of the topics introduced in this chapter. Exercises marked with a * appear in the printed text.

== Frequently Asked Questions ==

== Errata ==

== MATLAB code ==
The following MATLAB scripts are available for producing figures that appear in this chapter. See the software page for more information on how to run these scripts.

== Additional Information ==
More information on optimal control and the linear quadratic regulator can be found in the Optimization-Based Control supplement.
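As a concrete illustration of the summary above (not part of the original page, and in Python/NumPy rather than the chapter's MATLAB scripts), here is a minimal sketch that checks reachability and places the closed loop eigenvalues for a hypothetical double integrator, using Ackermann's formula:

```python
import numpy as np

# Hypothetical example system, a double integrator: x1' = x2, x2' = u
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Reachability matrix W_r = [B  AB]; the system is reachable iff it is invertible
Wr = np.hstack([B, A @ B])
assert np.linalg.matrix_rank(Wr) == 2

# Place the closed loop eigenvalues at -1 +/- 1j, i.e. desired characteristic
# polynomial p(s) = s^2 + 2s + 2.  Ackermann: K = [0 ... 0 1] W_r^{-1} p(A)
pA = A @ A + 2.0 * A + 2.0 * np.eye(2)
K = np.array([[0.0, 1.0]]) @ np.linalg.inv(Wr) @ pA

# Reference gain k_r = -1 / (C (A - BK)^{-1} B) so that y_e = r at steady state
Acl = A - B @ K
kr = -1.0 / (C @ np.linalg.inv(Acl) @ B)

print(np.linalg.eigvals(Acl))  # closed loop eigenvalues at -1 +/- 1j
```

Ackermann's formula is only practical for small systems like this one; numerically robust pole placement (e.g. the Kautsky-Nichols algorithm behind MATLAB's `place`) is preferred for larger ones.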
Now showing items 1–10 of 55

* J/ψ production and nuclear effects in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-02). Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...

* Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV (Elsevier, 2013-04-10). The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ...

* Suppression of ψ(2S) production in p-Pb collisions at √sNN = 5.02 TeV (Springer, 2014-12). The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ...

* Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10). Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...

* Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12). The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...

* Production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV (Springer, 2012-10). The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y| < 0.5 in ...

* Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2013-12). The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to the collision plane defined by the spectator nucleons. Both the rapidity-odd ($v_1^{odd}$) and ...

* Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-06). The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...

* Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09). Transverse momentum spectra of $\pi^{\pm}$, $K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...

* Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01). In this Letter, comprehensive results on $\pi^{\pm}$, $K^{\pm}$, $K^0_S$, $p(\bar{p})$ and $\Lambda(\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
$\DeclareMathOperator{\PGL}{PGL} \DeclareMathOperator{\GL}{GL} \newcommand{\F}{\mathbb{F}} \newcommand{\p}{\mathfrak{p}} \DeclareMathOperator{\Sym}{Sym}$ Let $F$ be a number field, and let $\Gamma$ be a congruence subgroup of $\PGL_2(\mathbb{Z}_F)$ with level supported at some finite set of primes $S$. To a Hecke-eigenclass in $H^*(\Gamma,\F_p)$, we expect to be able to attach a semisimple continuous Galois representation $$ \rho \colon G_F \to \GL_2(\bar{\F}_p), $$ unramified outside $S\cup p$, and with characteristic polynomial of Frobenius determined by the Hecke eigenvalues in the usual way. Let $\p \mid p$ be a prime ideal not in $S$ and unramified in $F$, let $F_\p$ be the completion of $F$ at $\p$ and $I_\p$ be the inertia group of $F_\p$. Let $k$ be the quadratic extension of the residue field of $F_\p$. For each embedding $\tau$ of the residue field $k \hookrightarrow \bar{\F}_p$, let $\omega_\tau \colon I_\p \to \bar{\F}_\p^\times$ be the associated tame character. Assume that $\rho|I_\p$ is tamely ramified. Is it expected that $\rho|I_\p$ is then of the form $$ \rho|I_\p \cong \prod_{\tau}\omega_\tau^{a_\tau} \oplus \prod_\tau\omega_\tau^{b_\tau} $$ with $\{a_\tau,b_\tau\} = \{0,1\}$ for all $\tau$? If that is the correct expectation, could you point me to a reference stating such a conjecture? When $F$ is totally real or CM, the existence of $\rho$ was proved by Scholze. In the totally real or CM case, is it known that $\rho|I_\p$ has the form described above whenever it is tamely ramified? EDIT Let me try to provide a little more context. 
If $\rho\colon G_F \to \GL_2(\bar{\F}_p)$ is a continuous representation, $\p\mid p$ is an unramified prime and $\rho|I_\p$ is tame, then this restriction (if I got this right) is of the form $$ \rho|I_\p = \prod_{\tau}(\omega_\tau\omega_{\tau\circ\sigma})^{d_\tau}(\prod_{\tau}\omega_\tau^{a_\tau} \oplus \prod_\tau\omega_\tau^{b_\tau}) $$ where $\sigma$ is the order $2$ automorphism of $k$, and we have $d_\tau\in\{0,\dots,p-1\}$, $a_\tau,b_\tau\in\{0,\dots,p\}$ and $\{0\}\subsetneq\{a_\tau,b_\tau\}$ for all $\tau$. In papers about generalisations of Serre's conjecture, this form is used to define the set of Serre weights attached to the representation (the non-tame case is more complicated but I don't need it), and if I understand correctly the weights are $\bigotimes_{\tau}(\det^{d_\tau} \otimes \Sym^{\max(a_\tau,b_\tau)-1}\F_\p^2)\otimes_\tau\bar{\F}_p$ and sometimes some other weights. In the references I looked at, everything was formulated for $F$ totally real and for actual modular forms. Since the weight recipe is completely local, my naive expectation would be that it should remain the same in general. In addition, the direction I need is only that the Galois representations attached to cohomology with trivial coefficients have the shape above with $d_\tau = 0$ and $\max(a_\tau,b_\tau)=1$. (One other less important point I am not sure about is whether this needs modification if $F$ is ramified at $\p$.)
My query is regarding the following question: Let $\mathcal Q$ denote the additive group of rational numbers, i.e. the structure $\langle Q ; +; 0\rangle$. Let $\mathcal L$ be the language of $\mathcal Q$ and let $\mathcal T$ be the complete theory of $\mathcal Q$. (i) By considering automorphisms of $\mathcal Q$, prove that every formula in $F_1(\mathcal L)$ is $E_1(\mathcal T)$-equivalent to exactly one of the four formulas $v_1\bumpeq v_1; v_1\bumpeq 0; \neg v_1\bumpeq 0; \neg v_1\bumpeq v_1$. (ii) Deduce that there are exactly two 1-types (over T), both of which are principal. (iii) Show that there exists a 2-type (over T) which is not realised in $\mathcal Q$ and deduce that T is not $\aleph_0$-categorical. Here $F_n(\mathcal L)$ denotes the set of all $\mathcal L$-formulas $\phi$ with FrVar($\phi$)$ \subseteq \{v_1,...,v_n\}$, and $E_n(\mathcal T)$ denotes the binary relation on $F_n(\mathcal L)$ defined by ($\psi\space,\phi)\in E_n(\mathcal T)\iff\mathcal T\models\forall v_1,...,v_n (\phi(v_1,...,v_n)\iff\psi(v_1,...,v_n))$. I am wondering if someone could help me understand and solve this question. (i) The automorphisms are $\pi :\mathcal Q \rightarrow \mathcal Q $ such that for $r\in Q $ we have $\pi(r)=r\,\pi(1)$. I just don't know how an automorphism is used to deduce that "every formula in $F_1(\mathcal L)$ is $E_1(\mathcal T)$-equivalent to exactly one of the four formulas $v_1\bumpeq v_1; v_1\bumpeq 0; \neg v_1\bumpeq 0; \neg v_1\bumpeq v_1$". A similar question is solved in the course notes without explanation: only the automorphism is given, and then the lecturer says, "From this automorphism we get four formulas...." Can someone please explain the role of the automorphism here? (ii) I deduced that $v_1\bumpeq 0$ and $\neg v_1\bumpeq 0$ are two principal formulas that generate the two principal 1-types $\{v_1\bumpeq 0,v_1\bumpeq v_1\}$ and $\{\neg v_1\bumpeq 0,\neg v_1\bumpeq v_1\}$.
I deduced so because $\mathcal T\models\forall v_1( v_1\bumpeq 0\implies v_1\bumpeq v_1)$ and $\mathcal T\models\forall v_1( v_1\bumpeq 0\implies v_1\bumpeq 0)$, and likewise $\mathcal T\models\forall v_1(\neg v_1\bumpeq 0\implies \neg v_1\bumpeq v_1)$ and $\mathcal T\models\forall v_1( \neg v_1\bumpeq 0\implies \neg v_1\bumpeq 0)$. Am I right? (iii) I am confused about how to solve this last part. I am thinking along this way; please let me know if I am on the right track. If I give an example showing that there are infinitely many $E_2(\mathcal T)$-equivalence classes in $F_2(\mathcal L)$, then I can deduce that there exists a non-principal 2-type, say $p$ (by a theorem in the notes). Then by the omitting types theorem I can deduce that there exists a countably infinite model of $\mathcal T$ that omits $p$. By another theorem in the notes I know that there exists a countably infinite model that realizes $p$. Clearly these two models are not isomorphic, as otherwise they would realise the same 2-types. So $\mathcal T$ is not $\aleph_0$-categorical. Please ask me if you need any further information.
stat946w18/Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolutional Layers

Revision as of 00:18, 21 April 2018

Introduction

With the recent and ongoing surge in low-power, intelligent agents (such as wearables, smartphones, and IoT devices), there exists a growing need for machine learning models to work well in resource-constrained environments. Deep learning models have achieved state-of-the-art results on a broad range of tasks; however, they are difficult to deploy in their original forms. For example, AlexNet (Krizhevsky et al., 2012), a model for image classification, contains 61 million parameters and requires 1.5 billion floating point operations (FLOPs) in one inference pass. A more accurate model, ResNet-50 (He et al., 2016), has 25 million parameters but requires 4.08 billion FLOPs. A high-end desktop GPU such as a Titan Xp is capable of 12 TFLOPS (tera-FLOPs per second), while the Adreno 540 GPU used in a Samsung Galaxy S8 is only capable of 567 GFLOPS, which is less than 5% of the Titan Xp. Clearly, it would be difficult to deploy and run these models on low-power devices.

In general, model compression can be accomplished using four main non-mutually exclusive methods (Cheng et al., 2017): weight pruning, quantization, matrix transformations, and weight tying. By non-mutually exclusive, we mean that these methods can be used not only separately but also in combination for compressing a single model; the use of one method does not exclude any of the other methods from being viable.

Ye et al. (2018) explore pruning entire channels in a convolutional neural network (CNN). Past work has mostly focused on norm-based or error-based heuristics to prune channels; instead, Ye et al.
(2018) show that their approach is easily reproducible and has favorable qualities from an optimization standpoint. In other words, they argue that the norm-based assumption is not as informative or theoretically justified as their approach, and provide strong empirical evidence of these findings.

Motivation

Some previous works on pruning channel filters (Li et al., 2016; Molchanov et al., 2016) have focused on using the L1 norm to determine the importance of a channel. Ye et al. (2018) show that, in the deep linear convolution case, penalizing the per-layer norm is coarse-grained; they argue that one cannot assign different coefficients to L1 penalties associated with different layers without risking the loss function being susceptible to trivial re-parameterizations. As an example, consider the following deep linear convolutional neural network with modified LASSO loss: $$\min \mathbb{E}_D \lVert W_{2n} * \dots * W_1 x - y\rVert^2 + \lambda \sum_{i=1}^n \lVert W_{2i} \rVert_1$$ where the W are the weights and * is convolution. Here we have chosen the coefficient 0 for the L1 penalty associated with odd-numbered layers and the coefficient 1 for the L1 penalty associated with even-numbered layers. This loss is susceptible to trivial re-parameterizations: without affecting the least-squares loss, we can always reduce the LASSO loss by halving the weights of all even-numbered layers and doubling the weights of all odd-numbered layers.

Furthermore, batch normalization (Ioffe, 2015) is incompatible with this method of weight regularization. Consider batch normalization at the [math]l[/math]-th layer: any uniform scaling of [math]W^l[/math] would change its [math]l_1[/math] and [math]l_2[/math] norms but, because of the normalization, have no effect on [math]x^{l+1}[/math]. Thus, when trying to minimize weight norms of multiple layers, it is unclear how to properly choose penalties for each layer.
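The re-parameterization argument is easy to verify numerically. The following is an illustrative NumPy sketch (toy dense matrices standing in for the convolution layers; this is not the authors' code), showing that halving the penalized layer and doubling the unpenalized one preserves the composed map while shrinking the selective L1 penalty:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 4))   # "odd" layer: no L1 penalty
W2 = rng.normal(size=(4, 4))   # "even" layer: L1-penalized

# Trivial re-parameterization: double W1, halve W2
W1r, W2r = 2.0 * W1, 0.5 * W2

# The composed linear map (and hence the least-squares loss) is unchanged...
assert np.allclose(W2 @ W1, W2r @ W1r)

# ...but the LASSO penalty on the even layer has been halved
penalty_before = np.abs(W2).sum()
penalty_after = np.abs(W2r).sum()
assert penalty_after == 0.5 * penalty_before
```

Repeating the trick drives the penalty toward zero without ever changing the network's function, which is exactly why the per-layer norm penalty is ill-posed here.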
Therefore, penalizing the norm of a filter in a deep convolutional network is hard to justify from a theoretical perspective. In contrast with these existing approaches, the authors focus on enforcing sparsity of a tiny set of parameters in the CNN: the scale parameters [math]\gamma[/math] of all batch normalization layers. Not only is placing sparsity constraints on [math]\gamma[/math] simpler and easier to monitor, but, more importantly, the authors put forward two reasons:

1. Every [math]\gamma[/math] always multiplies a normalized random variable, so channel importance becomes comparable across different layers by measuring the magnitudes of the [math]\gamma[/math] values;

2. The re-parameterization effect across different layers is avoided if the subsequent convolution layer is also batch-normalized. In other words, the impacts of scale changes of the [math]\gamma[/math] parameter are independent across different layers.

Thus, although not providing a complete theoretical guarantee on the loss, Ye et al. (2018) develop a pruning technique that they claim is better justified than norm-based pruning.

Method

At a high level, Ye et al. (2018) propose that, instead of discovering sparsity via penalizing the per-filter or per-channel norm, one should penalize the batch normalization scale parameters [math]\gamma[/math] instead. The reasoning is that by having fewer parameters to constrain and working with normalized values, sparsity is easier to enforce, monitor, and learn. Having sparse batch normalization terms has the effect of pruning entire channels: if [math]\gamma[/math] is zero, then the output at that layer becomes constant (the bias term), and thus the preceding channels can be pruned.

Summary

The basic algorithm can be summarized as follows:

1. Penalize the L1-norm of the batch normalization scaling parameters in the loss
2. Train until the loss plateaus
3. Remove channels that correspond to a downstream zero in batch normalization
4.
Fine-tune the pruned model using regular learning

Details

There still exist a few problems that this summary has not addressed. Subgradient descent is known to have an inverse-square-root convergence rate on non-smooth objectives (Gordon & Tibshirani, 2012), so the sparsity gradient descent update may be suboptimal. Furthermore, the sparse penalty needs to be normalized with respect to previous channel sizes, since the penalty should be roughly equally distributed across all convolution layers.

Slow Convergence

To address the issue of slow convergence, Ye et al. (2018) use the iterative shrinkage-thresholding algorithm (ISTA) (Beck & Teboulle, 2009) to update the batch normalization scale parameter. The intuition for ISTA is that the structure of the optimization objective can be taken advantage of. Consider: $$L(x) = f(x) + g(x).$$ Let f be the model loss and g be the non-differentiable penalty (LASSO). ISTA is able to use the structure of the loss and converge in [math]O(1/n)[/math], instead of the [math]O(1/\sqrt{n})[/math] achieved by subgradient descent, which assumes no structure about the loss. Even though ISTA is designed for convex settings, Ye et al. (2018) argue that it still performs better than gradient descent here.

Penalty Normalization

In the paper, Ye et al. (2018) normalize the per-layer sparse penalty with respect to the global input size, the current layer kernel areas, the previous layer kernel areas, and the local input feature map area. To control the global penalty, a hyperparameter [math]\rho[/math] is multiplied with all the per-layer [math]\lambda[/math] in the final loss.

Steps

The final algorithm can be summarized as follows:

1. Compute the per-layer normalized sparse penalty constant [math]\lambda[/math]
2. Compute the global LASSO loss with global scaling constant [math]\rho[/math]
3. Until convergence, train scaling parameters using ISTA and non-scaling parameters using regular gradient descent
4. Remove channels that correspond to a downstream zero in batch normalization
5. Fine-tune the pruned model using regular learning

Results

CIFAR-10 Experiment

Model A is trained with a sparse penalty of [math]\rho = 0.0002[/math] for 30 thousand steps, after which the penalty is increased to [math]\rho = 0.001[/math]. Model B is trained by taking Model A and increasing the sparse penalty up to 0.002. Similarly, Model C is a continuation of Model B with a penalty of 0.008. For the ConvNet, reducing the number of parameters in the base model increased the accuracy in Model A. This suggests that the base model is over-parameterized; otherwise, there would be a trade-off between accuracy and model efficiency.

ILSVRC2012 Experiment

The authors note that while ResNet-101 takes hundreds of epochs to train, pruning takes only 5-10, with fine-tuning adding another 2, giving an empirical example of how long pruning might take in practice. Both models were trained with an aggressive sparsity penalty of 0.1.

Image Foreground-Background Segmentation Experiment

The authors note that it is common practice to take a network pre-trained on a large task and fine-tune it to apply it to a different, smaller task. One might expect that some extra channels, while useful for the large task, can be omitted for the simpler task. This experiment replicated that use case by taking a NN originally trained on multiple datasets and applying the proposed pruning method. The authors note that the pruned network actually improves over the original network on all but the most challenging test dataset, which is in line with the initial expectation. The model was trained with a sparsity penalty of 0.5 and the results are shown in the table below.

The neural network used in this experiment is composed of two branches:
* An inception branch that locates the foreground objects
* A DenseNet branch to regress the edges

It was found that the pruning primarily affected the inception branch, as shown in Figure 1 below.
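As an aside, the ISTA update used throughout the training above is just a gradient step on the smooth loss followed by soft-thresholding (the proximal operator of the L1 penalty). A minimal sketch on a toy LASSO problem (illustrative numbers only, not the paper's training loop):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Toy LASSO: minimize 0.5*||x - b||^2 + lam*||x||_1
b = np.array([3.0, 0.5, -2.0])
lam, eta = 1.0, 1.0          # penalty weight and step size
x = np.zeros_like(b)

for _ in range(50):
    grad = x - b                                    # gradient of the smooth term
    x = soft_threshold(x - eta * grad, eta * lam)   # ISTA step

print(x)  # converges to soft(b, lam) = [2, 0, -1]; the small entry is exactly zero
```

The key property for pruning is that entries are driven to exactly zero, not merely to small values, so the corresponding channels can be removed outright.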
This likely explains the poor performance on the more challenging datasets, which place a higher requirement on locating foreground objects, a capability impacted by the pruning of the inception branch.

Conclusion

Pruning large neural architectures to fit on low-power devices is an important task. For a real quantitative measure of efficiency, it would be interesting to conduct actual power measurements on the pruned models versus baselines; a reduction in FLOPs doesn't necessarily correspond with vastly reduced power, since memory accesses dominate energy consumption (Han et al., 2015). However, the reduction in the number of FLOPs and parameters is encouraging, so moderate power savings should be expected. It would also be interesting to combine multiple approaches, or "throw the whole kitchen sink" at this task. Han et al. (2015) sparked much recent interest by successfully combining weight pruning, quantization, and Huffman coding without loss in accuracy. However, their approach introduced irregular sparsity in the convolutional layers, so a direct comparison cannot be made. In conclusion, this novel, theoretically motivated interpretation of channel pruning was successfully applied to several important tasks.

Implementation

A PyTorch implementation is available here: https://github.com/jack-willturner/batchnorm-pruning

References

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).

He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).

Cheng, Y., Wang, D., Zhou, P., & Zhang, T. (2017). A survey of model compression and acceleration for deep neural networks. arXiv preprint arXiv:1710.09282.

Ye, J., Lu, X., Lin, Z., & Wang, J. Z. (2018). Rethinking the smaller-norm-less-informative assumption in channel pruning of convolution layers. arXiv preprint arXiv:1802.00124.

Li, H., Kadav, A., Durdanovic, I., Samet, H., & Graf, H. P. (2016). Pruning filters for efficient ConvNets. arXiv preprint arXiv:1608.08710.

Molchanov, P., Tyree, S., Karras, T., Aila, T., & Kautz, J. (2016). Pruning convolutional neural networks for resource efficient inference.

Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (pp. 448-456).

Gordon, G., & Tibshirani, R. (2012). Subgradient method. https://www.cs.cmu.edu/~ggordon/10725-F12/slides/06-sg-method.pdf

Beck, A., & Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1), 183-202.

Han, S., Mao, H., & Dally, W. J. (2015). Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149.
A bit of pedantry first. Some of your terminology usage is inaccurate; I'd just like to take the opportunity to clear some of it up.

Nuclear spins do not "undergo a FID", they undergo free precession. The free induction decay (FID) is the signal that is measured when spins undergo free precession.

"Noise" in spectroscopy refers to unwanted, random signals arising from processes apart from the magnetisation of the spins. It's exactly analogous to the crackling noise you get if you try listening to a radio underground. The music you want to hear is the signal, but the crackling sound is the noise.

From what I understand, you are referring to a pulse that is unselective in terms of the frequencies it can excite. Such a pulse is generally called "strong" or "hard", because it obeys the condition $B_1 \gg \Delta B_0$ (the strength of the pulse is much larger than the reduced external magnetic field).

Many 2D experiments are designed to measure correlations between two nuclear spins, commonly denoted I and S. In the case of COSY, both I and S are the same nucleus (typically $\ce{^1H}$, but there are versions with other nuclei). The correlation that is being detected is through-bond coupling.

The pulse sequence of COSY looks easy, but the maths is actually more complicated than it is for something like HSQC. So, if you don't want a mathematical answer, quite a bit of detail has to be omitted. The best non-mathematical way I can explain it is as follows:

The first 90° pulse leads to excitation of spin I.

During the $t_1$ period, spin I undergoes free precession. As $t_1$ is increased, spin I will precess through a larger and larger angle. The rate at which spin I precesses is related to its resonance frequency. So, since we repeat the experiment for many different values of $t_1$, information about the resonance frequency of spin I is encoded in the data set we obtain. Up till now, it is mostly the same as in a 1D experiment.
The difference of course lies in the second 90° pulse. It turns out that the combined effect of the $t_1$ period, as well as the second 90° pulse, also leads to some transfer of the excitation from spin I to spin S. This transfer only occurs if spins I and S have a through-bond coupling. In the detection period, both spins I and S have been excited and will therefore precess at their respective resonance frequencies. Both can be detected simultaneously (since they are the same nucleus). Overall, what information do we have? From the $t_1$ incrementation, we know about the resonance frequency of spin I. This means that, after Fourier transformation (which converts from a time domain $t_1$ to a frequency domain $\omega_1$), we will have a peak, centred at $\omega_I$ (where $\omega$ is the resonance frequency). From the FID obtained in $t_2$, we know about the resonance frequencies of both spins I and S. So, in the $\omega_2$ dimension we will have two peaks, centred at $\omega_I$ and $\omega_S$. In the 2D spectrum, then, we will see two peaks. One is centred at $(\omega_I, \omega_I)$, and is known as the diagonal peak. The other one is centred at $(\omega_I, \omega_S)$, and is known as the cross peak. Now, the cross peak only appears if there has been transfer of magnetisation from spin I to spin S, which in turn can only occur if spins I and S have a through-bond coupling. "But wait! Aren't there peaks at $(\omega_S,\omega_I)$ and $(\omega_S,\omega_S)$?" Yes, absolutely. That's because the first 90° pulse also leads to excitation of spin S; after all, spins I and S are the same nucleus, and an unselective pulse necessarily excites both. In an exactly analogous manner to that described above, this gives rise to the other two peaks (you can just switch the labels I and S in my description to see how this happens). Finally, to answer some of your questions directly: in 2D COSY NMR, are both "pulses" the same? Yes, they are exactly the same. 
(This depends on how advanced an answer you want, though. For the purposes of 2D data processing, e.g. the States method, the phases of the two pulses may be different - i.e. one may be aligned along the x-axis, and the other along the y-axis. But I suppose you can ignore this for now.) in 2D COSY, during the delay time, the protons are undergoing FID the same as they do in 1D NMR, even if we are not measuring it at this time As I mentioned above, it is free precession and not FID, but yes, this is exactly correct! How, then, does varying the delay time lead to the discovery of "correlation" between protons? The thing about varying the delay time is that we are trying to measure frequencies. These can be either resonance frequencies (i.e. chemical shift), or coupling frequencies (recall that coupling constants are expressed in Hz). I offer you the analogy of a clock. Let's say you have a clock, and you want to measure the frequency at which the minute hand rotates. If you simply look at the clock for one instant and record the position of the minute hand, you can't tell how fast it is rotating - or whether it is even rotating at all! You need to look at it continuously over a period of time, or you need to look at it at constant intervals and jot down how its position changes with time. Using only one value of $t_1$, then, is analogous to just looking at the "nuclear clocks" once. Only by using multiple values of $t_1$ can one figure out the frequencies at which the nuclear clocks operate (i.e. the coupling constant) - or whether they are even operating in the first place (i.e. whether they are coupled). What is it about applying a second pulse to partially relaxed protons, and then measuring the FID, that leads us to discover how the protons are correlated, and to the presence or absence of cross-peaks in our 2D spectrum? I hope the above discussion was sufficient. And if you want more detail, I'm afraid that you need to go into the maths.
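The clock analogy can be made concrete with a small numerical sketch. Everything here is a made-up toy (a single "spin" precessing at an assumed 100 Hz, sampled at 512 hypothetical $t_1$ increments of 1 ms): recording just one point per increment and then Fourier transforming across the increments recovers the frequency, just as the $t_1$ dimension of a 2D experiment does.

```python
import numpy as np

# Hypothetical parameters (not from any real experiment):
freq = 100.0   # precession frequency of our toy "nuclear clock", in Hz
dt1 = 1e-3     # t1 increment (1 ms)
n = 512        # number of t1 increments

t1 = np.arange(n) * dt1
# At each t1 value we record a single point: the state of the clock hand.
signal = np.cos(2 * np.pi * freq * t1)

# Fourier transforming over the t1 dimension reveals the frequency,
# even though no single increment measured it directly.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n, d=dt1)
recovered = freqs[np.argmax(spectrum)]
print(recovered)  # expect a peak near 100 Hz
```

With only one value of $t_1$ (n = 1), the same code could not distinguish a 100 Hz clock from a stationary one, which is exactly the point of incrementing $t_1$.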
stat946w18/Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolutional Layers Latest revision as of 00:18, 21 April 2018 Introduction With the recent and ongoing surge in low-power, intelligent agents (such as wearables, smartphones, and IoT devices), there exists a growing need for machine learning models to work well in resource-constrained environments. Deep learning models have achieved state-of-the-art results on a broad range of tasks; however, they are difficult to deploy in their original forms. For example, AlexNet (Krizhevsky et al., 2012), a model for image classification, contains 61 million parameters and requires 1.5 billion floating point operations (FLOPs) in one inference pass. A more accurate model, ResNet-50 (He et al., 2016), has 25 million parameters but requires 4.08 billion FLOPs. A high-end desktop GPU such as a Titan Xp is capable of 12 TFLOPS (tera-FLOPs per second), while the Adreno 540 GPU used in a Samsung Galaxy S8 is only capable of 567 GFLOPS, which is less than 5% of the Titan Xp. Clearly, it would be difficult to deploy and run these models on low-power devices. In general, model compression can be accomplished using four main non-mutually exclusive methods (Cheng et al., 2017): weight pruning, quantization, matrix transformations, and weight tying. By non-mutually exclusive, we mean that these methods can be used not only separately but also in combination for compressing a single model; the use of one method does not exclude any of the other methods from being viable. Ye et al. (2018) explore pruning entire channels in a convolutional neural network (CNN). Past work has mostly focused on norm-based or error-based heuristics to prune channels; instead, Ye et al.
(2018) show that their approach is easily reproducible and has favorable qualities from an optimization standpoint. In other words, they argue that the norm-based assumption is not as informative or theoretically justified as their approach, and they provide strong empirical evidence for these findings. Motivation Some previous works on pruning channel filters (Li et al., 2016; Molchanov et al., 2016) have focused on using the L1 norm to determine the importance of a channel. Ye et al. (2018) show that, in the deep linear convolution case, penalizing the per-layer norm is coarse-grained; they argue that one cannot assign different coefficients to L1 penalties associated with different layers without risking the loss function being susceptible to trivial re-parameterizations. As an example, consider the following deep linear convolutional neural network with modified LASSO loss: $$\min \mathbb{E}_D \lVert W_{2n} * \dots * W_1 x - y\rVert^2 + \lambda \sum_{i=1}^n \lVert W_{2i} \rVert_1$$ where the W are the weights and * is convolution. Here we have chosen the coefficient 0 for the L1 penalty associated with odd-numbered layers and the coefficient 1 for the L1 penalty associated with even-numbered layers. This loss is susceptible to trivial re-parameterizations: without affecting the least-squares loss, we can always reduce the LASSO penalty by halving the weights of all even-numbered layers and doubling the weights of all odd-numbered layers. Furthermore, batch normalization (Ioffe & Szegedy, 2015) is incompatible with this method of weight regularization. Consider batch normalization at the [math]l[/math]-th layer: any uniform scaling of [math]W^l[/math] changes its [math]l_1[/math] and [math]l_2[/math] norms but has no effect on [math]x^{l+1}[/math]. Thus, when trying to minimize weight norms of multiple layers, it is unclear how to properly choose penalties for each layer.
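The trivial re-parameterization can be checked numerically. The sketch below uses plain matrices in place of convolutions (a two-layer deep linear network; all sizes and values are arbitrary): halving the penalized even layer and doubling the unpenalized odd layer leaves the network's output unchanged while halving the L1 penalty.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer deep *linear* network; matrices stand in for convolutions.
W1 = rng.normal(size=(4, 4))   # odd-numbered layer: L1 penalty coefficient 0
W2 = rng.normal(size=(4, 4))   # even-numbered layer: L1 penalty coefficient 1
x = rng.normal(size=(4,))

out_before = W2 @ W1 @ x
penalty_before = np.abs(W2).sum()

# Trivial re-parameterization: double the unpenalized layer, halve the penalized one.
W1_new, W2_new = 2.0 * W1, 0.5 * W2
out_after = W2_new @ W1_new @ x
penalty_after = np.abs(W2_new).sum()

print(np.allclose(out_before, out_after))   # the least-squares loss is unchanged
print(penalty_after < penalty_before)       # but the LASSO penalty is halved
```

Repeating the rescaling drives the penalty toward zero without learning anything, which is why per-layer norm penalties with unequal coefficients are ill-posed here.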
Therefore, penalizing the norm of a filter in a deep convolutional network is hard to justify from a theoretical perspective. In contrast with these existing approaches, the authors focus on enforcing sparsity of a tiny set of parameters in the CNN: the scale parameters [math]\gamma[/math] in all batch normalization layers. Not only is placing sparse constraints on [math]\gamma[/math] simpler and easier to monitor, but, more importantly, they put forward two reasons: 1. Every [math]\gamma[/math] always multiplies a normalized random variable, so channel importance becomes comparable across different layers by measuring the magnitudes of [math]\gamma[/math]; 2. The re-parameterization effect across different layers is avoided if the subsequent convolution layer is also batch-normalized. In other words, the impacts of scale changes in the [math]\gamma[/math] parameter are independent across different layers. Thus, although not providing a complete theoretical guarantee on the loss, Ye et al. (2018) develop a pruning technique that claims to be better justified than norm-based pruning. Method At a high level, Ye et al. (2018) propose that, instead of discovering sparsity by penalizing the per-filter or per-channel norm, one should penalize the batch normalization scale parameters gamma instead. The reasoning is that with fewer parameters to constrain and normalized values to work with, sparsity is easier to enforce, monitor, and learn. Having sparse batch normalization terms has the effect of pruning entire channels: if gamma is zero, then the output at that layer becomes constant (the bias term), and thus the preceding channels can be pruned. Summary The basic algorithm can be summarized as follows: 1. Penalize the L1-norm of the batch normalization scaling parameters in the loss 2. Train until the loss plateaus 3. Remove channels that correspond to a downstream zero in batch normalization 4.
Fine-tune the pruned model using regular learning Details There still exist a few problems that this summary has not addressed. Sub-gradient descent is known to have an inverse-square-root convergence rate on subdifferentials (Gordon & Tibshirani, 2012), so the sparsity gradient-descent update may be suboptimal. Furthermore, the sparse penalty needs to be normalized with respect to previous channel sizes, since the penalty should be roughly equally distributed across all convolution layers. Slow Convergence To address the issue of slow convergence, Ye et al. (2018) use the iterative shrinkage-thresholding algorithm (ISTA) (Beck & Teboulle, 2009) to update the batch normalization scale parameter. The intuition behind ISTA is that the structure of the optimization objective can be taken advantage of. Consider: $$L(x) = f(x) + g(x).$$ Let f be the model loss and g be the non-differentiable penalty (LASSO). ISTA is able to use the structure of the loss and converge in O(1/n), instead of the O(1/sqrt(n)) of subgradient descent, which assumes no structure about the loss. Even though ISTA is designed for convex settings, Ye et al. (2018) argue that it still performs better than gradient descent here. Penalty Normalization In the paper, Ye et al. (2018) normalize the per-layer sparse penalty with respect to the global input size, the current layer kernel areas, the previous layer kernel areas, and the local input feature map area. To control the global penalty, a hyperparameter rho is multiplied with all the per-layer lambda in the final loss. Steps The final algorithm can be summarized as follows: 1. Compute the per-layer normalized sparse penalty constant [math]\lambda[/math] 2. Compute the global LASSO loss with global scaling constant [math]\rho[/math] 3. Until convergence, train scaling parameters using ISTA and non-scaling parameters using regular gradient descent. 4. Remove channels that correspond to a downstream zero in batch normalization 5.
Fine-tune the pruned model using regular learning Results The authors show state-of-the-art performance compared with other channel-pruning approaches. It is important to note that it would be unfair to compare against general pruning approaches; channel pruning specifically removes channels without introducing intra-kernel sparsity, whereas other pruning approaches introduce irregular kernel sparsity and hence computational inefficiencies. CIFAR-10 Experiment Model A is trained with a sparse penalty of [math]\rho = 0.0002[/math] for 30 thousand steps, after which the penalty is increased to [math]\rho = 0.001[/math]. Model B is trained by taking Model A and increasing the sparse penalty to 0.002. Similarly, Model C is a continuation of Model B with a penalty of 0.008. For the ConvNet, reducing the number of parameters in the base model increased the accuracy in Model A. This suggests that the base model is over-parameterized; otherwise, there would be a trade-off between accuracy and model efficiency. ILSVRC2012 Experiment The authors note that while ResNet-101 takes hundreds of epochs to train, pruning takes only 5-10, with fine-tuning adding another 2, giving an empirical example of how long pruning might take in practice. Both models were trained with an aggressive sparsity penalty of 0.1. Image Foreground-Background Segmentation Experiment The authors note that it is common practice to take a network pre-trained on a large task and fine-tune it to apply it to a different, smaller task. One might expect there to be some extra channels that, while useful for the large task, can be omitted for the simpler task. This experiment replicated that use-case by taking a network originally trained on multiple datasets and applying the proposed pruning method. The authors note that the pruned network actually improves over the original network on all but the most challenging test dataset, which is in line with the initial expectation.
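The ISTA update described under Details can be sketched in a few lines: a gradient step on the smooth loss f, followed by the proximal (soft-thresholding) step for the L1 penalty g. The gamma values, gradient, learning rate, and penalty below are all made up for illustration; a real implementation would compute the gradient from the model loss.

```python
import numpy as np

def ista_step(gamma, grad, lr, penalty):
    """One ISTA update: gradient step on the smooth loss, then the
    soft-thresholding proximal step for the L1 (LASSO) penalty."""
    z = gamma - lr * grad
    return np.sign(z) * np.maximum(np.abs(z) - lr * penalty, 0.0)

# Hypothetical batch-norm scale parameters and a dummy (flat-loss) gradient:
gamma = np.array([0.9, 0.05, -0.4, 0.01])
grad = np.zeros_like(gamma)
gamma = ista_step(gamma, grad, lr=0.1, penalty=0.5)
print(gamma)  # small entries are driven exactly to zero -> prunable channels
```

Unlike a plain subgradient step, the soft-thresholding step produces exact zeros, which is what makes "remove channels that correspond to a zero" a well-defined pruning rule.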
The model was trained with a sparsity penalty of 0.5, and the results are shown in the table below. The neural network used in this experiment is composed of two branches: an Inception branch that locates the foreground objects, and a DenseNet branch that regresses the edges. It was found that the pruning primarily affected the Inception branch, as shown in Figure 1 below. This likely explains the poorer performance on the more challenging datasets, which place higher demands on foreground localization, a capability impacted by the pruning of the Inception branch. Conclusion Pruning large neural architectures to fit on low-power devices is an important task. For a real quantitative measure of efficiency, it would be interesting to conduct actual power measurements on the pruned models versus baselines; a reduction in FLOPs doesn't necessarily correspond to vastly reduced power, since memory accesses dominate energy consumption (Han et al., 2015). However, the reduction in the number of FLOPs and parameters is encouraging, so moderate power savings should be expected. It would also be interesting to combine multiple approaches, or "throw the whole kitchen sink" at this task. Han et al. (2015) sparked much recent interest by successfully combining weight pruning, quantization, and Huffman coding without loss in accuracy. However, their approach introduced irregular sparsity in the convolutional layers, so a direct comparison cannot be made. In conclusion, this novel, theoretically motivated interpretation of channel pruning was successfully applied to several important tasks. Implementation A PyTorch implementation is available here: https://github.com/jack-willturner/batchnorm-pruning References Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105). He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition.
In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778). Cheng, Y., Wang, D., Zhou, P., & Zhang, T. (2017). A Survey of Model Compression and Acceleration for Deep Neural Networks. arXiv preprint arXiv:1710.09282. Ye, J., Lu, X., Lin, Z., & Wang, J. Z. (2018). Rethinking the Smaller-Norm-Less-Informative Assumption in Channel Pruning of Convolution Layers. arXiv preprint arXiv:1802.00124. Li, H., Kadav, A., Durdanovic, I., Samet, H., & Graf, H. P. (2016). Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710. Molchanov, P., Tyree, S., Karras, T., Aila, T., & Kautz, J. (2016). Pruning convolutional neural networks for resource efficient inference. Ioffe, S., & Szegedy, C. (2015, June). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning (pp. 448-456). Gordon, G., & Tibshirani, R. (2012). Subgradient method. https://www.cs.cmu.edu/~ggordon/10725-F12/slides/06-sg-method.pdf Beck, A., & Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM journal on imaging sciences, 2(1), 183-202. Han, S., Mao, H., & Dally, W. J. (2015). Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149
The Effect of Symmetry Even-function symmetry Odd-function symmetry Half-wave symmetry Quarter-wave symmetry Even-Function Symmetry A function is defined to be even if and only if $$f(t) = f(-t)$$ (1.1) If a function satisfies Eq. 1.1, it is said to be even because polynomial functions with only even exponents have this type of behavior. For any even periodic function, the equations for the Fourier coefficients simplify to the following: $$a_{v} = \frac{2}{T}\int_{0}^{T/2} f(t)dt.$$ (1.2) $$a_{k} = \frac{4}{T}\int_{0}^{T/2} f(t)\cos k\omega _{0}t\,dt.$$ (1.3) $$b_{k} = 0$$ for all k (1.4) Note from Eq. 1.4 that all the b coefficients are zero if the function is even. Below, Fig. 1.1 depicts an even periodic function. The two derivations below lead exactly to Eqs. 1.2 - 1.4. In each derivation, $$t_{0} = -T/2$$ is selected and the interval of integration is then broken into the range from -T/2 to 0 and from 0 to T/2, as follows: $$a_{v} = \frac{1}{T}\int_{-T/2}^{T/2} f(t)dt = \frac{1}{T}\left[\int_{-T/2}^{0}f(t)dt + \int_{0}^{T/2}f(t)dt\right].$$ (1.5) Figure 1.1 Even function of f(t) = f(-t) Now, the variable of integration must be changed in the first integral on the right-hand side of Eq. 1.5. In particular, we let t = -x and observe that f(t) = f(-x) = f(x) because the function is even. Note also that x = T/2 when t = -T/2 and dt = -dx. Thus $$\int_{-T/2}^{0}f(t)dt = \int_{T/2}^{0}f(x)(-dx) = \int_{0}^{T/2}f(x)dx.$$ (1.6) which shows that integrating from -T/2 to 0 is the same as integrating from 0 to T/2. Thus Eq. 1.5 is the same as Eq. 1.2. Deriving Eq.
1.3 proceeds as follows: $$a_{k} = \frac{2}{T}\int_{-T/2}^{0}f(t)\cos k\omega _{0}t\,dt + \frac{2}{T}\int_{0}^{T/2}f(t)\cos k\omega _{0}t\,dt$$ (1.7) however $$\int_{-T/2}^{0}f(t)\cos k\omega _{0}t\,dt = \int_{T/2}^{0}f(x)\cos (-k\omega _{0}x)(-dx) = \int_{0}^{T/2}f(x)\cos k\omega _{0}x\,dx.$$ (1.8) As before, integrating from -T/2 to 0 is identical to integrating from 0 to T/2. Combining Eq. 1.7 with Eq. 1.8 produces Eq. 1.3. Finally, all the b coefficients are zero when f(t) is an even periodic function, because the integral from -T/2 to 0 is the exact negative of the integral from 0 to T/2: $$\int_{-T/2}^{0}f(t)\sin k\omega _{0}t\,dt = \int_{T/2}^{0}f(x)\sin (-k\omega _{0}x)(-dx) =-\int_{0}^{T/2}f(x)\sin k\omega _{0}x\,dx.$$ (1.9) Now, if Eqs. 1.2 and 1.3 are used to find the Fourier coefficients, the integration interval need only run from 0 to T/2. Odd-Function Symmetry A periodic function is defined to be odd if $$f(t) = -f(-t)$$ (1.10) A function that satisfies Eq. 1.10 is said to be odd because polynomial functions with only odd exponents behave this way. The expressions for the Fourier coefficients are as follows: $$a_{v} = 0;$$ (1.11) $$a_{k}= 0,$$ for all k; (1.12) $$b_{k}=\frac{4}{T}\int_{0}^{T/2}f(t)\sin k\omega _{0}t\,dt.$$ (1.13) Figure 1.2 Looking at Eqs. 1.11 - 1.13, all the a coefficients are zero if the periodic function is odd. The figure shown above illustrates an odd periodic function. The same method of derivation is used for Eqs. 1.11 - 1.13 as was used in the derivation of Eqs. 1.2 - 1.4. The evenness (or oddness) of a function can be destroyed by shifting the function along the time axis. In other words, a judicious choice of where t = 0 is placed can give a function either even or odd symmetry. For instance, the triangular function in Fig 1.3 (a) is neither even nor odd. Nevertheless, the function can be made even, as illustrated in Fig 1.3 (b), or odd, as shown in Fig 1.3 (c).
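The simplifications in Eqs. 1.2 - 1.4 can be spot-checked numerically. The sketch below uses an assumed even example, the triangular wave f(t) = |t| on one period with T = 2 (not a function from the text), and verifies with midpoint sums that every b_k vanishes and that the half-interval formula of Eq. 1.3 matches the full-period integral:

```python
import numpy as np

T = 2.0
w0 = 2 * np.pi / T
n = 200000
t = (np.arange(n) + 0.5) * T / n - T / 2   # midpoints covering one full period
dt = T / n
f = np.abs(t)                               # an even periodic function: f(t) = f(-t)

results = []
for k in range(1, 4):
    bk = (2 / T) * np.sum(f * np.sin(k * w0 * t)) * dt        # Eq. 1.4 predicts 0
    ak_full = (2 / T) * np.sum(f * np.cos(k * w0 * t)) * dt   # full-period formula
    half = t >= 0
    ak_half = (4 / T) * np.sum(f[half] * np.cos(k * w0 * t[half])) * dt  # Eq. 1.3
    results.append((k, bk, ak_full, ak_half))
    print(k, round(bk, 9), round(ak_full, 6), round(ak_half, 6))
```

The b_k sums cancel in symmetric pairs, mirroring the change-of-variable argument in Eq. 1.9, while the doubled half-interval sums reproduce the full-period a_k.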
Figure 1.3 Half-Wave Symmetry A function is said to have half-wave symmetry if it satisfies the following constraint: $$f(t) = -f(t - T/2)$$ (1.14) Equation 1.14 states that a periodic function has half-wave symmetry if, after it has been shifted by one-half period and inverted, it is identical to the original function. For instance, the periodic functions illustrated in Figures 1.2 and 1.3 possess half-wave symmetry, whereas those in Figures 1.4 and 1.5 do not. Note that half-wave symmetry does not depend on where the point t = 0 is chosen. If a function possesses half-wave symmetry, both a_k and b_k are zero for even values of k. Similarly, a_v is zero because the average value of a periodic function with this symmetry is zero. The expressions for the Fourier coefficients are as follows: $$a_{v}=0,$$ (1.15) $$a_{k}=0,$$ for k even (1.16) $$a_{k}=\frac{4}{T}\int_{0}^{T/2}f(t)\cos k\omega_{0}t\,dt,$$ for k odd (1.17) $$b_{k}=0,$$ for k even (1.18) $$b_{k}=\frac{4}{T}\int_{0}^{T/2}f(t)\sin k\omega_{0}t\,dt,$$ for k odd (1.19) These equations are derived by starting from the general expressions for the Fourier coefficients in the previous article, Learn About Fourier Coefficients. An interval of integration from -T/2 to T/2 is chosen, and this range is then divided into the intervals -T/2 to 0 and 0 to T/2: $$a_{k}= \frac{2}{T}\int_{t_{0}}^{t_{0}+T}f(t)\cos k\omega_{0}t\,dt = \frac{2}{T}\int_{-T/2}^{T/2}f(t)\cos k\omega_{0}t\,dt = \frac{2}{T}\int_{-T/2}^{0}f(t)\cos k\omega_{0}t\,dt + \frac{2}{T}\int_{0}^{T/2}f(t)\cos k\omega_{0}t\,dt$$ (1.20) From here, the variable in the first integral on the right-hand side is changed.
Let t = x - T/2; then x = T/2 when t = 0, and x = 0 when t = -T/2, with dt = dx. Rewriting the first integral, $$\int_{-T/2}^{0}f(t)\cos k \omega_{0}t\,dt = \int_{0}^{T/2}f(x - T/2)\cos k\omega_{0}(x - T/2)dx$$ (1.21) Noting that $$\cos k\omega_{0}(x - T/2) = \cos (k\omega_{0}x - k\pi) = \cos k\pi\cos k\omega_{0}x$$ and that, by the hypothesis of half-wave symmetry, f(x - T/2) = -f(x), Eq. 1.21 can now be written as $$\int_{-T/2}^{0}f(t)\cos k\omega_{0}t\,dt = \int_{0}^{T/2}[-f(x)]\cos k\pi\cos k\omega_{0}x\,dx = -\cos k\pi\int_{0}^{T/2}f(x)\cos k\omega_{0}x\,dx$$ (1.22) Substituting Eq. 1.22 into Eq. 1.20 gives $$a_{k}= \frac{2}{T}(1 - \cos k\pi)\int_{0}^{T/2}f(t)\cos k\omega_{0}t\,dt$$ (1.23) However, $$\cos k\pi$$ is equal to 1 if k is even and -1 if k is odd. To summarize, the Fourier series representation of a periodic function with half-wave symmetry has zero average value and contains only odd harmonics. Quarter-Wave Symmetry If a function has half-wave symmetry and, in addition, symmetry about the midpoints of the positive and negative half-cycles, the periodic function is said to have quarter-wave symmetry. This is illustrated in Figure 1.4; the function in Fig 1.4(a) has quarter-wave symmetry about the midpoints of the positive and negative half-cycles. The function in Fig 1.4(b) does not have this symmetry, although it does have half-wave symmetry. Figure 1.4 A function that possesses quarter-wave symmetry can always be made even or odd by a suitable choice of where t = 0 is placed. For instance, the periodic function in Fig. 1.4(a) is odd and can be turned into an even function by shifting it T/4 units to the left or right along the t-axis. However, because the periodic function in Fig. 1.4(b) possesses only half-wave symmetry, it can never be made either even or odd. If the periodic function is made even, then $$a_{v} = 0,$$ due to half-wave symmetry $$a_{k} = 0,$$ for k even, due to half-wave symmetry $$a_{k} = \frac{8}{T}\int_{0}^{T/4}f(t) \cos k\omega_{0}t\,dt,$$ for k odd $$b_{k} = 0,$$ for all k, because the periodic function is even (1.24) The above Eqs.
1.24 follow from the function's quarter-wave symmetry in addition to its evenness. Because quarter-wave symmetry implies half-wave symmetry, a_v and a_k for even k are eliminated. Comparing the expression for a_k with k odd against Eq. 1.17 shows that combining quarter-wave symmetry with evenness shortens the range of integration from 0 to T/2 down to 0 to T/4. If a quarter-wave symmetric periodic function is made odd, $$a_{v}=0,$$ because the function is odd $$a_{k}=0,$$ for all k, because the function is odd $$b_{k}=0,$$ for k even, due to half-wave symmetry $$b_{k}=\frac{8}{T}\int_{0}^{T/4}f(t)\sin k\omega_{0}t\,dt,$$ for k odd (1.25) Eqs. 1.25 follow from the quarter-wave symmetry together with the oddness of the function. As in the even case, the quarter-wave symmetry allows the interval of integration to be shortened from 0 to T/2 down to 0 to T/4. Coming Up By now, you should have a better understanding of the Fourier coefficients and the different types of symmetry that can occur. These five types (even, odd, half-wave, quarter-wave half-wave even, and quarter-wave half-wave odd) are all used to simplify the computation of the Fourier coefficients. Upcoming topics will go in depth on finding the steady-state response of a linear circuit from a Fourier series, calculating average power with periodic functions, and finding the rms value of such periodic functions.
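The half-wave symmetry result (zero average, odd harmonics only) can also be verified numerically. The square wave below is an assumed example satisfying f(t) = -f(t - T/2), not a waveform from the text; midpoint sums over one period show the even-k coefficients vanishing while the odd-k b_k approach the analytic values 4/(k*pi):

```python
import numpy as np

T = 2.0
w0 = 2 * np.pi / T
n = 200000
t = (np.arange(n) + 0.5) * T / n     # midpoints on [0, T)
dt = T / n
f = np.where(t < T / 2, 1.0, -1.0)   # square wave: satisfies f(t) = -f(t - T/2)

coeffs = {}
for k in range(1, 6):
    ak = (2 / T) * np.sum(f * np.cos(k * w0 * t)) * dt
    bk = (2 / T) * np.sum(f * np.sin(k * w0 * t)) * dt
    coeffs[k] = (ak, bk)
    print(k, round(ak, 6), round(bk, 6))
# Even-k coefficients are (numerically) zero; odd-k b_k come out near 4/(k*pi).
```

This matches Eq. 1.23: the factor (1 - cos k*pi) is 0 for even k and 2 for odd k, so only odd harmonics survive.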
Category: Group Theory Group Theory Problems and Solutions. Popular posts in Group Theory are: Problem 625 Let $G$ be a group and let $H_1, H_2$ be subgroups of $G$ such that $H_1 \not \subset H_2$ and $H_2 \not \subset H_1$. (a) Prove that the union $H_1 \cup H_2$ is never a subgroup of $G$. (b) Prove that a group cannot be written as the union of two proper subgroups. Problem 616 Suppose that $p$ is a prime number greater than $3$. Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$. (a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$. (b) Determine the index $[G : S]$. (c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$. Problem 613 Let $m$ and $n$ be positive integers such that $m \mid n$. (a) Prove that the map $\phi:\Zmod{n} \to \Zmod{m}$ sending $a+n\Z$ to $a+m\Z$ for any $a\in \Z$ is well-defined. (b) Prove that $\phi$ is a group homomorphism. (c) Prove that $\phi$ is surjective. (d) Determine the group structure of the kernel of $\phi$. If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order Problem 575 Let $G$ be a finite group of order $2n$. Suppose that exactly half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$. Then prove that $H$ is an abelian normal subgroup of odd order. Problem 497 Let $G$ be an abelian group. Let $a$ and $b$ be elements in $G$ of order $m$ and $n$, respectively. Prove that there exists an element $c$ in $G$ such that the order of $c$ is the least common multiple of $m$ and $n$. Also determine whether the statement is true if $G$ is a non-abelian group.
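As a concrete sanity check of the claim in Problem 625(a), here is a small sketch in the Klein four-group $\Z/2 \times \Z/2$ under componentwise addition mod 2 (the choice of group and subgroups is illustrative, not part of the problem statement): two incomparable order-2 subgroups are each closed under the operation, but their union is not.

```python
# Two distinct order-2 subgroups of Z/2 x Z/2, neither containing the other:
H1 = {(0, 0), (1, 0)}
H2 = {(0, 0), (0, 1)}
union = H1 | H2

def closed(S):
    """Check closure under componentwise addition mod 2."""
    return all(((a[0] + b[0]) % 2, (a[1] + b[1]) % 2) in S for a in S for b in S)

print(closed(H1), closed(H2))  # each is a subgroup
print(closed(union))           # (1,0) + (0,1) = (1,1) escapes the union
```

The failure of closure at (1,0) + (0,1) is exactly the mechanism the general proof exploits: a product of an element of H1 \ H2 with an element of H2 \ H1 can lie in neither subgroup.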
Answer Express $\cos(180^\circ-\theta)$ as a function of $\theta$ alone. $$\cos(180^\circ-\theta)=-\cos\theta$$ Work Step by Step $$\cos(180^\circ-\theta)$$ To express this as a function of $\theta$ alone, we use the cosine difference identity: $$\cos(A-B)=\cos A\cos B+\sin A\sin B$$ Applying it to $\cos(180^\circ-\theta)$, we have $$\cos(180^\circ-\theta)=\cos180^\circ\cos\theta+\sin180^\circ\sin\theta$$ $$\cos(180^\circ-\theta)=(-1)\times\cos\theta+0\times\sin\theta$$ $$\cos(180^\circ-\theta)=-\cos\theta$$ This is the required expression.
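The identity can be spot-checked numerically at a few angles (the sample angles below are arbitrary):

```python
import math

# Verify cos(180 deg - theta) = -cos(theta) at a few sample angles.
for theta in (0.0, 30.0, 45.0, 123.4):
    lhs = math.cos(math.radians(180.0 - theta))
    rhs = -math.cos(math.radians(theta))
    print(theta, math.isclose(lhs, rhs, abs_tol=1e-12))
```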
CryptoDB Mark Zhandry Affiliation: Princeton University, USA Publications Year Venue Title 2019 EUROCRYPT On ELFs, Deterministic Encryption, and Correlated-Input Security 📺 We construct deterministic public key encryption secure for any constant number of arbitrarily correlated computationally unpredictable messages. Prior works required either random oracles or non-standard knowledge assumptions. In contrast, our constructions are based on the exponential hardness of DDH, which is plausible in elliptic curve groups. Our central tool is a new trapdoored extremely lossy function, which modifies extremely lossy functions by adding a trapdoor. 2019 EUROCRYPT On Finding Quantum Multi-collisions 📺 A k-collision for a compressing hash function H is a set of k distinct inputs that all map to the same output. In this work, we show that for any constant k, $$\varTheta \left( N^{\frac{1}{2}(1-\frac{1}{2^k-1})}\right) $$ quantum queries are both necessary and sufficient to achieve a k-collision with constant probability. This both improves on the best prior upper bound (Hosoyamada et al., ASIACRYPT 2017) and provides the first non-trivial lower bound, completely resolving the problem. 2019 EUROCRYPT Simple Schemes in the Bounded Storage Model 📺 The bounded storage model promises unconditional security proofs against computationally unbounded adversaries, so long as the adversary’s space is bounded. In this work, we develop simple new constructions of two-party key agreement, bit commitment, and oblivious transfer in this model. In addition to simplicity, our constructions have several advantages over prior work, including an improved number of rounds and enhanced correctness. Our schemes are based on Raz’s lower bound for learning parities. 2019 EUROCRYPT Quantum Lightning Never Strikes the Same State Twice 📺 ★ Public key quantum money can be seen as a version of the quantum no-cloning theorem that holds even when the quantum states can be verified by the adversary.
In this work, we investigate quantum lightning, where no-cloning holds even when the adversary herself generates the quantum state to be cloned. We then study quantum money and quantum lightning, showing the following results: We demonstrate the usefulness of quantum lightning beyond quantum money by showing several potential applications, such as generating random strings with a proof of entropy, to completely decentralized cryptocurrency without a block-chain, where transactions are instant and local. We give Either/Or results for quantum money/lightning, showing that either signatures/hash functions/commitment schemes meet very strong recently proposed notions of security, or they yield quantum money or lightning. Given the difficulty in constructing public key quantum money, this suggests that natural schemes do attain strong security guarantees. We show that instantiating the quantum money scheme of Aaronson and Christiano [STOC’12] with indistinguishability obfuscation that is secure against quantum computers yields a secure quantum money scheme. This construction can be seen as an instance of our Either/Or result for signatures, giving the first separation between two security notions for signatures from the literature. Finally, we give a plausible construction for quantum lightning, which we prove secure under an assumption related to the multi-collision resistance of degree-2 hash functions. Our construction is inspired by our Either/Or result for hash functions, and yields the first plausible standard model instantiation of a non-collapsing collision resistant hash function. This improves on a result of Unruh [Eurocrypt’16], which is relative to a quantum oracle. 2019 EUROCRYPT New Techniques for Obfuscating Conjunctions 📺 A conjunction is a function $$f(x_1,\dots ,x_n) = \bigwedge _{i \in S} l_i$$ where $$S \subseteq [n]$$ and each $$l_i$$ is $$x_i$$ or $$\lnot x_i$$. Bishop et al.
(CRYPTO 2018) recently proposed obfuscating conjunctions by embedding them in the error positions of a noisy Reed-Solomon codeword and placing the codeword in a group exponent. They prove distributional virtual black box (VBB) security in the generic group model for random conjunctions where $$|S| \ge 0.226n$$. While conjunction obfuscation is known from LWE [31, 47], these constructions rely on substantial technical machinery. In this work, we conduct an extensive study of simple conjunction obfuscation techniques.
- We abstract the Bishop et al. scheme to obtain an equivalent yet more efficient “dual” scheme that can handle conjunctions over exponential size alphabets. This scheme admits a straightforward proof of generic group security, which we combine with a novel combinatorial argument to obtain distributional VBB security for |S| of any size.
- If we replace the Reed-Solomon code with a random binary linear code, we can prove security from standard LPN and avoid encoding in a group. This addresses an open problem posed by Bishop et al. to prove security of this simple approach in the standard model.
- We give a new construction that achieves information theoretic distributional VBB security and weak functionality preservation for $$|S| \ge n - n^\delta $$ and $$\delta < 1$$. Assuming discrete log and $$\delta < 1/2$$, we satisfy a stronger notion of functionality preservation for computationally bounded adversaries while still achieving information theoretic security.

2019 CRYPTO: Revisiting Post-quantum Fiat-Shamir
The Fiat-Shamir transformation is a useful approach to building non-interactive arguments (of knowledge) in the random oracle model. Unfortunately, existing proof techniques are incapable of proving the security of Fiat-Shamir in the quantum setting. The problem stems from (1) the difficulty of quantum rewinding, and (2) the inability of current techniques to adaptively program random oracles in the quantum setting.
In this work, we show how to overcome the limitations above in many settings. In particular, we give mild conditions under which Fiat-Shamir is secure in the quantum setting. As an application, we show that existing lattice signatures based on Fiat-Shamir are secure without any modifications.

2019 CRYPTO: How to Record Quantum Queries, and Applications to Quantum Indifferentiability
The quantum random oracle model (QROM) has become the standard model in which to prove the post-quantum security of random-oracle-based constructions. Unfortunately, none of the known proof techniques allow the reduction to record information about the adversary’s queries, a crucial feature of many classical ROM proofs, including all proofs of indifferentiability for hash function domain extension. In this work, we give a new QROM proof technique that overcomes this “recording barrier”. We do so by giving a new “compressed oracle” which allows for efficient on-the-fly simulation of random oracles, roughly analogous to the usual classical simulation. We then use this new technique to give the first proof of quantum indifferentiability for the Merkle-Damgård domain extender for hash functions. We also give a proof of security for the Fujisaki-Okamoto transformation; previous proofs required modifying the scheme to include an additional hash term. Given the threat posed by quantum computers and the push toward quantum-resistant cryptosystems, our work represents an important tool for efficient post-quantum cryptosystems.

2019 CRYPTO: The Distinction Between Fixed and Random Generators in Group-Based Assumptions
There is surprisingly little consensus on the precise role of the generator g in group-based assumptions such as DDH. Some works consider g to be a fixed part of the group description, while others take it to be random. We study this subtle distinction from a number of angles. In the generic group model, we demonstrate the plausibility of groups in which random-generator DDH (resp.
CDH) is hard but fixed-generator DDH (resp. CDH) is easy. We observe that such groups have interesting cryptographic applications.
- We find that seemingly tight generic lower bounds for the Discrete-Log and CDH problems with preprocessing (Corrigan-Gibbs and Kogan, Eurocrypt 2018) are not tight in the sub-constant success probability regime if the generator is random. We resolve this by proving tight lower bounds for the random generator variants; our results formalize the intuition that using a random generator will reduce the effectiveness of preprocessing attacks.
- We observe that DDH-like assumptions in which exponents are drawn from low-entropy distributions are particularly sensitive to the fixed- vs. random-generator distinction. Most notably, we discover that the Strong Power DDH assumption of Komargodski and Yogev (Eurocrypt 2018) used for non-malleable point obfuscation is in fact false precisely because it requires a fixed generator. In response, we formulate an alternative fixed-generator assumption that suffices for a new construction of non-malleable point obfuscation, and we prove the assumption holds in the generic group model. We also give a generic group proof for the security of fixed-generator, low-entropy DDH (Canetti, Crypto 1997).

2018 TCC: The MMap Strikes Back: Obfuscation and New Multilinear Maps Immune to CLT13 Zeroizing Attacks
All known multilinear map candidates have suffered from a class of attacks known as “zeroizing” attacks, which render them unusable for many applications. We provide a new construction of polynomial-degree multilinear maps and show that our scheme is provably immune to zeroizing attacks under a strengthening of the Branching Program Un-Annihilatability Assumption (Garg et al., TCC 2016-B). Concretely, we build our scheme on top of the CLT13 multilinear maps (Coron et al., CRYPTO 2013).
In order to justify the security of our new scheme, we devise a weak multilinear map model for CLT13 that captures zeroizing attacks and generalizations, reflecting all known classical polynomial-time attacks on CLT13. In our model, we show that our new multilinear map scheme achieves ideal security, meaning no known attacks apply to our scheme. Using our scheme, we give a new multiparty key agreement protocol that is several orders of magnitude more efficient than what was previously possible. We also demonstrate the general applicability of our model by showing that several existing obfuscation and order-revealing encryption schemes, when instantiated with CLT13 maps, are secure against known attacks. These are schemes that are actually being implemented for experimentation, but until our work had no rigorous justification for security.

2018 TCC: Return of GGH15: Provable Security Against Zeroizing Attacks
The GGH15 multilinear maps have served as the foundation for a number of cutting-edge cryptographic proposals. Unfortunately, many schemes built on GGH15 have been explicitly broken by so-called “zeroizing attacks,” which exploit leakage from honest zero-test queries. The precise settings in which zeroizing attacks are possible have remained unclear. Most notably, none of the current indistinguishability obfuscation (iO) candidates from GGH15 have any formal security guarantees against zeroizing attacks. In this work, we demonstrate that all known zeroizing attacks on GGH15 implicitly construct algebraic relations between the results of zero-testing and the encoded plaintext elements. We then propose a “GGH15 zeroizing model” as a new general framework which greatly generalizes known attacks. Our second contribution is to describe a new GGH15 variant, which we formally analyze in our GGH15 zeroizing model. We then construct a new iO candidate using our multilinear map, which we prove secure in the GGH15 zeroizing model.
This implies resistance to all known zeroizing strategies. The proof relies on the Branching Program Un-Annihilatability (BPUA) Assumption of Garg et al. [TCC 16-B] (which is implied by PRFs in $$\mathsf {NC}^1$$ secure against $$\mathsf {P}/\mathsf {poly}$$) and the complexity-theoretic p-Bounded Speedup Hypothesis of Miles et al. [ePrint 14] (a strengthening of the Exponential Time Hypothesis).

2018 TCC: Impossibility of Order-Revealing Encryption in Idealized Models
An Order-Revealing Encryption (ORE) scheme gives a public procedure by which two ciphertexts can be compared to reveal the order of their underlying plaintexts. The ideal security notion for ORE is that only the order is revealed—anything else, such as the distance between plaintexts, is hidden. The only known constructions of ORE achieving such ideal security are based on cryptographic multilinear maps and are currently too impractical for real-world applications. In this work, we give evidence that building ORE from weaker tools may be hard. Indeed, we show black-box separations between ORE and most symmetric-key primitives, as well as public key encryption and anything else implied by generic groups in a black-box way. Thus, any construction of ORE must either (1) achieve weaker notions of security, (2) be based on more complicated cryptographic tools, or (3) require non-black-box techniques. This suggests that any ORE achieving ideal security will likely be somewhat inefficient. Central to our proof is a proof of impossibility for something we call information theoretic ORE, which has connections to tournament graphs and a theorem by Erdős. This impossibility proof will be useful for proving other black box separations for ORE.

2018 ASIACRYPT: Parameter-Hiding Order Revealing Encryption
Order-revealing encryption (ORE) is a primitive for outsourcing encrypted databases which allows for efficiently performing range queries over encrypted data.
Unfortunately, a series of works, starting with Naveed et al. (CCS 2015), have shown that when the adversary has a good estimate of the distribution of the data, ORE provides little protection. In this work, we consider the case that the database entries are drawn identically and independently from a distribution of known shape, but for which the mean and variance are not (and thus the attacks of Naveed et al. do not apply). We define a new notion of security for ORE, called parameter-hiding ORE, which maintains the secrecy of these parameters. We give a construction of ORE satisfying our new definition from bilinear maps.

Other publications (titles not listed): 2017 EUROCRYPT, 2017 TCC, 2016 CRYPTO, 2016 CRYPTO, 2016 ASIACRYPT, 2015 EUROCRYPT, 2014 CRYPTO, 2014 CRYPTO, 2014 EPRINT, 2013 EUROCRYPT, 2011 ASIACRYPT

Program Committees
Asiacrypt 2019, Crypto 2018, Eurocrypt 2017, TCC 2017

Coauthors
Saikrishna Badrinarayanan (2), James Bartusek (3), Dan Boneh (7), Mark Bun (2), David Cash (1), Özgür Dagdelen (1), Marc Fischlin (1), Sanjam Garg (4), Sumegha Garg (1), Craig Gentry (2), Jiaxin Guan (2), Shai Halevi (2), Dennis Hofheinz (1), Tibor Jager (1), Dakshita Khurana (1), Ilan Komargodski (2), Lucas Kowalczyk (1), Anja Lehmann (1), Tancrède Lepoint (1), Kevin Lewi (1), Feng-Hao Liu (1), Qipeng Liu (3), Fermi Ma (4), Tal Malkin (1), Eric Miles (4), Pratyay Mukherjee (1), Ryo Nishimaki (2), Adam O’Neill (1), Omkant Pandey (1), Mariana Raykova (1), Amit Sahai (6), Christian Schaffner (1), Akshayaram Srinivasan (2), Jonathan Ullman (1), Brent Waters (3), Daniel Wichs (2), Henry Yuen (1), Cong Zhang (2), Joe Zimmerman (1)
It is my understanding that upthrust from a liquid on a body is due to pressure difference on the top of the body and the bottom of the body. How, then, is this fact used in order to derive/work out that the upthrust on a body is equal to the weight of fluid displaced? (For example, if a ball bearing is dropped through water then the upthrust is equal to $\frac 43\pi r^3\rho_{water}.g$, where $r$ is the radius of the ball bearing and $\rho_{water}$ is the density of water.) Imagine your object to be made up of a lot of infinitesimally small "straws" - little cylinders. Each cylinder has an area $dA$ and a length $\ell$. You know that the volume of such a cylinder is $\ell dA$. Now look at the pressure difference between the top and bottom of that cylinder: at the bottom, the pressure will be greater by $\rho \ell g$ (where $\rho$ is the density of the liquid, and $g$ is the gravitational acceleration) - that's just the way water pressure works: the pressure exactly supports the weight of the column of liquid above it. The difference in pressure is proportional to the difference in depth, which is $\ell$. So with an area at top and bottom of $dA$, the net upward force on the cylinder is $\rho \ell g \, dA$ - difference in pressure, times area. But if the volume is $V = \ell dA$, then the force can be written as $$F = \rho g V$$ Now if an object is made up of many such cylinders, each experiencing a force equal to the weight of the liquid it displaced, then the entire object will also experience a force equal to the weight of the displaced liquid.
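To make the "straws" picture concrete, here is a minimal numerical sketch (the function names are mine, not from any library): it sums $\rho g \ell \, dA$ over a grid of vertical straws through a sphere and compares the total to $\rho g V$.

```python
import math

def buoyant_force_straws(r, rho, g, n=400):
    # Discretize the sphere into vertical "straws": each grid cell of
    # area dA whose center (x, y) lies inside the sphere contributes a
    # column of length l = 2*sqrt(r^2 - x^2 - y^2) and an upward force
    # rho * g * l * dA (pressure difference times area).
    dA = (2.0 * r / n) ** 2
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = -r + (i + 0.5) * 2.0 * r / n
            y = -r + (j + 0.5) * 2.0 * r / n
            s = r * r - x * x - y * y
            if s > 0.0:
                total += rho * g * 2.0 * math.sqrt(s) * dA
    return total

def buoyant_force_exact(r, rho, g):
    # rho * g * V for the sphere, with V = (4/3) * pi * r^3
    return rho * g * (4.0 / 3.0) * math.pi * r ** 3
```

As the grid is refined, the straw sum converges to the Archimedes result, which is the whole point of the argument above.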
I found these nice lecture notes Lectures on localization and matrix models in supersymmetric Chern-Simons-matter theories so I am hoping to understand some parts of the Chern Simons theory better. As an exercise in stenography I wrote down the Chern Simons action: $$ S = \frac{k}{4\pi} \int_{M_3} (A \wedge dA + \frac{2}{3} A \wedge A \wedge A) $$ where $A$ is a connection on $M_3$ for some Lie group $G$. For now, I do not even bother asking why $A \wedge A \wedge A \neq 0$ or why this is a top-level form (a 3-form). In many cases the Chern Simons path integral simplifies or "localizes" to a matrix integral, so that's what I am interested in today. $$ Z(S^3) = \int \prod_{i=1}^N d\mu_i \prod_{i < j} \sinh \left( \frac{\mu_i - \mu_j}{2} \right) \;\mathrm{exp}\left( - \frac{k}{8\pi}\sum \mu_i^2 \right)$$ What is the domain of integration here -- is it $\mathbb{R}^N$? I have no way of verifying this integral if I don't know the domain. Usually they just say the "Cartan" but I have no idea what that is. This is very general. If instead of $G = U(N)$ we have any $G$: $$ Z(S^3) \propto \int \prod_{i=1}^N d\mu_i \prod_{i < j} 2\sinh \left( \frac{\alpha \cdot \mu }{2} \right) \;\mathrm{exp}\left( - \frac{k}{8\pi}\sum \mu_i^2 \right) $$ where $\alpha = e_i - e_j$ in the case $G = U(N)$, so this becomes the first formula. At first I thought this integral was Haar measure over the unitary group, but that is an integral over the maximal torus $[0, 2\pi]^N$ in the Lie group instead of the Lie algebra. So I am really confused. Wilson loops can be evaluated exactly for this Chern Simons theory. $$ \langle W \rangle = \frac{e^{\frac{N\pi i}{k}}}{N} \frac{\sin \frac{N\pi}{k}}{\sin \frac{\pi}{k}} $$ $$ \langle W_{\wedge^m \square} \rangle = \dots = \left[ \begin{array}{c} N \\ m \end{array} \right] $$ This might be the same formula. I know it also deals with a 3-sphere.
If $m = 1$ the right side is $[N] = \frac{q^N - q^{-N}}{q - q^{-1}}$ with $q = e^{2\pi i t}$ and the sine functions emerge. I am nervous because that paper says $G = SU(N)$ instead of $G = U(N)$ (so determinant is always 1). Hopefully by now physicists agree on what the value of the Chern Simons path integral over the 3-sphere should be in this case... so it makes sense to ask for clarifications. My apologies in advance for confusing all the terminology.
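As a small sanity check that the $m=1$ quantum integer reproduces the sine ratio in the Wilson loop formula, here is a numerical sketch. The identification $q = e^{i\pi/k}$ is my assumption for making the two expressions line up; the function names are mine.

```python
import cmath
import math

def q_integer(N, k):
    # [N] = (q^N - q^{-N}) / (q - q^{-1}), evaluated at q = exp(i*pi/k).
    q = cmath.exp(1j * math.pi / k)
    return (q**N - q**(-N)) / (q - q**(-1))

def wilson_sine_ratio(N, k):
    # The sin(N*pi/k)/sin(pi/k) factor from the unknot expectation value.
    return math.sin(N * math.pi / k) / math.sin(math.pi / k)
```

Since $q^N - q^{-N} = 2i\sin(N\pi/k)$, the quotient collapses to $\sin(N\pi/k)/\sin(\pi/k)$, so the two functions agree identically (up to floating-point error).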
Nuclei
Radioactivity

α-decay can be written as ${}_{Z}\mathrm{P}^{A} \rightarrow {}_{Z-2}\mathrm{D}^{A-4} + {}_{2}\mathrm{He}^{4}$
Ex: ${}_{92}\mathrm{U}^{238} \rightarrow {}_{90}\mathrm{Th}^{234} + {}_{2}\mathrm{He}^{4}$

β-decay can be written in the form ${}_{Z}\mathrm{P}^{A} \rightarrow {}_{Z+1}\mathrm{D}^{A} + {}_{-1}\mathrm{e}^{0}$
Ex: ${}_{90}\mathrm{Th}^{234} \rightarrow {}_{91}\mathrm{Pa}^{234} + {}_{-1}\mathrm{e}^{0}$

The emission of γ-rays from the nucleus alters neither the atomic number Z nor the mass number A. The wavelengths of γ-rays are less than 1 Å.

RADIOACTIVE DECAY LAW:
$\frac{-dN}{dt} \propto N \Rightarrow \frac{dN}{dt} = -\lambda N \Rightarrow N = N_{0}e^{-\lambda t}$
$A = \lambda N = \lambda N_{0}e^{-\lambda t} = A_{0}e^{-\lambda t}$
$A = \lambda N \Rightarrow A = \frac{0.693}{t_{1/2}}\, N \quad \therefore A \propto \frac{N}{t_{1/2}}$

Units of activity are the curie and the rutherford.
1 curie = $3.7 \times 10^{10}$ disintegrations/sec
1 becquerel = 1 disintegration per second.

The time taken by the atoms to decrease from $N_{0}$ to $N$ is
$t = \frac{1}{\lambda} \log_{e} \frac{N_{0}}{N} = \frac{2.303}{\lambda} \log_{10} \frac{N_{0}}{N}$

The time taken by a radioactive element to disintegrate to half of its initial number of atoms is known as the half-life ($t_{1/2}$):
$t_{1/2} = \frac{2.303}{\lambda} \log_{10} 2 = \frac{0.693}{\lambda}$

The MEAN LIFE of a radioactive substance is the average time for which the nuclei of the substance exist. It is equal to the inverse of the decay constant:
$\tau = \frac{1}{\lambda} \Rightarrow \tau = 1.44\, t_{1/2}$

Time required for disintegration of $\frac{3}{4}$ (or 75%) of the radioactive element is $2t_{1/2}$
$t_{87.5\%}$ (or $t_{7/8}$) $= 3t_{1/2}$
$t_{90\%} = \frac{10}{3} t_{1/2}$
$t_{15/16}$ (or $t_{93.75\%}$) $= 4t_{1/2}$
$t_{99\%} = \frac{20}{3} t_{1/2}$
$t_{99.9\%} = 10t_{1/2}$
$t_{29.3\%} = \frac{1}{2} t_{1/2}$

1. Activity: It is defined as the rate of disintegration (or count rate) of the substance, i.e., the number of atoms of the material decaying per second. For radioactive decay, $A = -\frac{dN}{dt} = \lambda N = \lambda N_{0}e^{-\lambda t} = A_{0}e^{-\lambda t}$

2. Half-life ($T_{1/2}$): The time interval in which the mass of a radioactive substance, or the number of its atoms, reduces to half of its initial value is called the half-life of the substance. That is, if $N = \frac{N_{0}}{2}$ then $t = T_{1/2}$. Hence from $N = N_{0}e^{-\lambda t}$: $\frac{N_{0}}{2} = N_{0}e^{-\lambda T_{1/2}} \Rightarrow T_{1/2} = \frac{\log_{e}2}{\lambda} = \frac{0.693}{\lambda}$

3. Mean (or average) life (τ): The time for which a radioactive material remains active is defined as the mean (average) life of that material:
$\tau = \frac{\text{sum of the lives of all the atoms}}{\text{total number of atoms}} = \frac{1}{\lambda}$
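The decay-law relations above can be sketched numerically (the function names are mine):

```python
import math

def remaining_atoms(N0, half_life, t):
    # N = N0 * exp(-lambda * t), with lambda = ln(2) / t_half
    lam = math.log(2) / half_life
    return N0 * math.exp(-lam * t)

def activity(N0, half_life, t):
    # A = lambda * N = A0 * exp(-lambda * t)
    lam = math.log(2) / half_life
    return lam * remaining_atoms(N0, half_life, t)

def mean_life(half_life):
    # tau = 1/lambda = t_half / ln(2), approximately 1.44 * t_half
    return half_life / math.log(2)
```

For example, after two half-lives a quarter of the atoms remain (75% disintegrated), and after four half-lives 1/16 remain (93.75% disintegrated), matching the table above.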
This is a bad metaphor for rubes, as you say. The axle gives a conserved quantity across the transition for a wagon: the distance between the wheels is constant as the transition is happening. So the bending goes to 90 degrees when the boundary becomes parallel to the direction of cart propagation. This is not what happens with light; light refracts at a finite angle less than 90 degrees for light coming in nearly parallel to the surface, and this is the angle past which you have total internal reflection in the medium. The reason is that the perpendicular horizontal distance along the wavefronts is not the quantity that is conserved when light enters a stationary medium, like it would be if photons had little axles (they don't). The quantity that is conserved across the refraction transition is the frequency of the light, the energy of the photons. The reason is that the material provides a time-independent propagation environment, and in a time-independent background, the energy is conserved. Classically, the modes for a time-independent medium are found by separation of variables with a fixed frequency in time, and this is saying the same thing but without using quantum mechanics to relate conservation of frequency to conservation of energy. So the analog of the axle length in this case is the time between wave-crests crossing a given point. This means that the wavelength in the medium is reduced by the index of refraction (to keep the frequency of crests crossing a given point constant), so that the outside wavelength is $\lambda$ and the interior wavelength is $\lambda/n$. If the medium surface lies parallel to the x-axis, and the incoming light wave crests make an angle of $\theta$ with respect to the x-axis ($\theta=0$ is crests parallel to the x-axis, so the light is coming head on, and no refraction), then the distance between the points where successive wave crests hit the medium boundary is $\lambda/\sin(\theta)$.
In the interior of the medium, the same argument tells you that it's $\lambda/(n\sin(\alpha))$ where $\alpha$ is the angle of the crests with respect to the x-axis in the medium, so in order for the crests inside to match the crests outside, you need $$ \sin(\theta) = n \sin(\alpha)$$ and this is Snell's law for the case where the n outside is 1. You can always consider the wave-speed outside to be 1, so the quantity n is the ratio of the speed of waves outside to inside, and so it isn't really a special case. Also note that the same law holds for sound refraction, or any wave.
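The crest-matching condition can be sketched in code (the function names are mine): it computes the interior angle from Snell's law and checks that the crest spacing along the boundary, $\lambda/\sin\theta$ outside and $(\lambda/n)/\sin\alpha$ inside, agree.

```python
import math

def refraction_angle(theta, n):
    # Snell's law with index 1 outside: sin(theta) = n * sin(alpha).
    # Returns the interior crest angle alpha, or None if no transmitted
    # wave exists (total internal reflection, which needs n < 1 here).
    s = math.sin(theta) / n
    if abs(s) > 1.0:
        return None
    return math.asin(s)

def crest_spacing(wavelength, angle):
    # Distance along the boundary between successive crest arrivals.
    return wavelength / math.sin(angle)
```

Because the frequency is conserved and the interior wavelength is $\lambda/n$, the two spacings are forced to be equal, which is exactly the matching condition derived above.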
Differential and Integral Equations
Volume 22, Number 5/6 (2009), 575-585.

On pairs of positive solutions for a class of quasilinear elliptic problems

Abstract
We prove, by using bifurcation theory, the existence of at least two positive solutions for the quasilinear problem $-\Delta_p u = f(x,u)$ in $\Omega$, $u=0$ on $\partial \Omega$, where $N>p>1$ and $\Omega$ is a smooth bounded domain in $\mathbb{R}^N$, $N\geq2$, and the non-linearity $f$ is a locally Lipschitz continuous function, among other assumptions.

Article information
Source: Differential Integral Equations, Volume 22, Number 5/6 (2009), 575-585.
Dates: First available in Project Euclid: 20 December 2012
Permanent link to this document: https://projecteuclid.org/euclid.die/1356019607
Mathematical Reviews number (MathSciNet): MR2501685
Zentralblatt MATH identifier: 1240.35206
Citation: Arruda, Lynnyngs Kelly; Marques, Ilma. On pairs of positive solutions for a class of quasilinear elliptic problems. Differential Integral Equations 22 (2009), no. 5/6, 575--585. https://projecteuclid.org/euclid.die/1356019607
I have a problem writing an equation in the eqnarray environment, where I split a single line into two lines. The first line starts with \left[ and the second line ends with \right], but I could not compile it using latex.

Here is an example of using the \right. and \left. pair to complete the matching pair using the align environment. Note that the size of the brackets is not the same in the first example. This is due to the fact that the \left. <math> \right] of the second line does not see the vertical spacing of the \left[ <math> \right. that the first line does. To fix that you need to add a \vphantom{} with the term which has the largest vertical spacing in the first line. This yields the second result:

\documentclass{standalone}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  y &= \left[\frac{1}{2}\right. \\
    &\qquad + \left.x^2+c\right]
\end{align*}
%
\begin{align*}
  y &= \left[\frac{1}{2}\right. \\
    &\qquad + \left.x^2+c\vphantom{\frac{1}{2}}\right]
\end{align*}
\end{document}

Another option would be to use the big-g delimiters instead of the \left, \right construct; using the big-g delimiters you don't have to worry about pairing the symbols in every line:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
\begin{align*}
  y &= \biggl[ \frac{1}{2} \\
    &+ x^2 + c \biggr]
\end{align*}
\end{document}

you can use the breqn package. instead of \begin{equation} or \begin{equation*}, use the environment dmath or dmath*. the delimiters will be sized properly, and the line broken in an appropriate location. there are also multi-line environments if your display needs them. there are some limitations; for details, see the package documentation -- texdoc breqn if you have a tex live installation; otherwise, look on ctan.

Another option is to use the nath package. It is not completely compatible with amsmath, but does provide its own multi-line math environments, which might be sufficient for simple math displays.
One of the features of nath is automatic delimiter scaling that works across multiple lines. For example

\documentclass{article}
\usepackage{nath}
\begin{document}
\begin{equation}
  y = \wall [ \sum_{i=1}^n a_i \\
  + x^2 + c ] \return
\end{equation}
\end{document}

gives the equation with the delimiters automatically sized to match across both lines.
Word2Vec Algorithm- Skip-gram and Continuous Bag of Words

Word2vec is a group of related models that are used to produce word embeddings. These models are shallow, two-layer neural networks that are trained to reconstruct linguistic contexts of words. Word2vec takes as its input a large corpus of text and produces a vector space, typically of several hundred dimensions, with each unique word in the corpus being assigned a corresponding vector in the space. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located in close proximity to one another in the space. Word2vec was created by a team of researchers led by Tomas Mikolov at Google. The algorithm has been subsequently analysed and explained by other researchers. Embedding vectors created using the Word2vec algorithm have many advantages compared to earlier algorithms such as latent semantic analysis.

Word2vec can utilize either of two model architectures to produce a distributed representation of words: continuous bag-of-words (CBOW) or continuous skip-gram. In the continuous bag-of-words architecture, the model predicts the current word from a window of surrounding context words. The order of context words does not influence prediction (bag-of-words assumption). In the continuous skip-gram architecture, the model uses the current word to predict the surrounding window of context words. The skip-gram architecture weighs nearby context words more heavily than more distant context words. According to the authors’ note, CBOW is faster while skip-gram is slower but does a better job for infrequent words.

Skip Grams

Sentence: I want a glass of orange juice to go along with my cereal.

Choosing Context and Target: Choose a window of +5/-5 words, and randomly choose a context and a target word. We now set up a supervised learning problem where we are required to predict a target word given a context word in the window.
This might not be very efficient, but it helps in estimating an Embedding matrix.

Model
Vocab size: 10000
Context to target prediction
Context (c): Orange
Target (t): Juice

For one example, let $O_c$ be the one-hot representation of the context word. This is multiplied with an Embedding Matrix $E$ to arrive at the embedding vector $e_c$. The embedding vector $e_c$ is then fed into a softmax unit to arrive at a prediction $\hat y$.

Softmax Model: $p(t|c) = \frac {e^{\theta_t^Te_c}}{\sum_{j=1}^{10,000}e^{\theta_j^Te_c}}$
$\theta_t$ is the parameter associated with output t

Loss Function for Softmax Model
$L(\hat y, y) = -\sum_{i=1}^{10000} y_i \log \hat y_i$
The target $y$ is represented as a 10000-dimensional one-hot vector $\begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{bmatrix}$
$\hat y$ will also be a 10000-dimensional vector, with the probability of each of the 10000 words as its elements.
The optimization of this network provides a pretty good approximation of the Embedding Matrix $E$.

Problems with Skip-Gram Model
Computational Complexity: The denominator in the equation $p(t|c) = \frac {e^{\theta_t^Te_c}}{\sum_{j=1}^{10,000}e^{\theta_j^Te_c}}$ sums over all 10k words of the vocabulary. This might not be too bad, but if we are using 1Bn or 100Bn words then it will take an enormous amount of time.
Solution: Use a Hierarchical Softmax Classifier (a tree-based model to classify).

How to sample a context c: Once a context is sampled, the target word can be sampled in a window of +/-5 or +/-10. But how to sample the context? c can be sampled from a uniform random distribution, but in that case words like the, a, of, etc. will appear too frequently, so in practice other heuristics are used to balance the sampling between common and rare words.

CBOW
In the continuous bag of words model, context is represented by multiple words for a given target word. For example, we could use “cat” and “tree” as context words for “climbed” as the target word. This calls for a modification to the neural network architecture.
The modification, shown below, consists of replicating the input-to-hidden-layer connections C times, where C is the number of context words, and adding a divide-by-C operation in the hidden layer neurons. [The figure below might lead some readers to think that CBOW learning uses several input matrices. It is not so. It is the same matrix, WI, that is receiving multiple input vectors representing different context words.]

With the above configuration to specify C context words, each word being coded using a 1-out-of-V representation, the hidden layer output is the average of the word vectors corresponding to the context words at the input. The output layer remains the same and the training is done in the manner discussed above.

Negative Sampling
The softmax objective is very slow to compute. To make it more efficient, negative sampling is used.
Create a new learning problem: given two words, “orange” & “juice”, we want to predict if this is a context-target pair. If it is ‘orange’ and ‘juice’, the output will be 1, else 0.

context, word –> target
orange, juice –> 1
orange, king –> 0
orange, book –> 0
orange, the –> 0

For the positive example, pick the context and target word from nearby, i.e. within +/-5 in the window. To generate a negative example, pick any word at random from the dictionary and label it as 0. We pick random words from the dictionary k times and label them as negative examples against the context. It is ok if the random words happen to be in the window of the context word. Then we create a supervised learning problem which inputs a pair of words and predicts the output $y$. This is how a training set is generated. The training set will have a 1:k ratio of positive:negative examples.

Value of k: 2-5 for larger datasets, 5-20 for smaller datasets.
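The training-set generation just described can be sketched as follows (a toy illustration; the function name and the uniform negative draw are my simplifications, since the better $f^{3/4}$ sampling heuristic is discussed later):

```python
import random

def make_training_examples(context, target, vocab, k, rng=None):
    # One positive pair (label 1) plus k randomly drawn negatives
    # (label 0), mirroring the orange/juice table above. It is ok if a
    # negative draw happens to lie in the context window.
    rng = rng or random.Random()
    examples = [(context, target, 1)]
    for _ in range(k):
        examples.append((context, rng.choice(vocab), 0))
    return examples
```

Each call yields k+1 examples for one context word, giving the 1:k positive:negative ratio.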
Supervised learning model for learning a mapping from x to y: Define a logistic regression model $P(y=1|c,t) = \sigma(\theta_t^Te_c)$

If the input word is, say, Orange ($O_{6257}$), multiply with $E$ to get $e_{6257}$. This generates 10000 possible logistic regression classification problems, where one of these is the classifier corresponding to whether the target word is juice or not, and the others are for the other words. We will train only 5 (1 positive plus k = 4 negatives) of these 10000 logistic models, which saves a lot of time. Instead of a 10000-way softmax classification, we have only 5 logistic regression classifiers, which is computationally much cheaper than updating a 10k-way softmax.

Sampling negative examples: Sampling according to empirical frequency has a problem: high chances of ‘the’, ‘of’, etc. Let $f(w_i)$ be the frequency of the word in the dictionary. For the distribution, take $P(w_i) = \frac{f(w_i)^{3/4}}{\sum_{j=1}^{10000}f(w_j)^{3/4}}$

Source material from Andrew Ng’s awesome course on Coursera. The material in the video has been written in text form so that anyone who wishes to revise a certain topic can go through this without going through the entire video lectures.
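The $f^{3/4}$ heuristic can be sketched in a few lines (the function name is mine):

```python
def negative_sampling_probs(freqs):
    # P(w_i) = f(w_i)^(3/4) / sum_j f(w_j)^(3/4): a compromise between
    # uniform sampling and raw empirical frequency, damping very common
    # words like "the" without ignoring frequency entirely.
    weights = [f ** 0.75 for f in freqs]
    total = sum(weights)
    return [w / total for w in weights]
```

For example, a word 1000 times more frequent than another ends up only about $1000^{3/4} \approx 178$ times more likely to be drawn as a negative.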
Recall that a family of graphs (indexed by an infinite set, such as the primes, say) is called an expander family if there is a $\delta>0$ such that, on every graph in the family, the discrete Laplacian (or the adjacency matrix) has spectral gap $|\lambda_0-\lambda_1|\geq \delta$. (Assume all graphs in the family have the same degree (= valency) $d$.) If, for every $p$, we specify some maps from $\mathbb{Z}/p\mathbb{Z}$ to itself, then we are defining a graph with vertex set $\mathbb{Z}/p\mathbb{Z}$ for each $p$: a vertex is adjacent to the vertices it is mapped to. If we consider the maps $x\to x+1$, $x\to 3x$ (say), then we do not get expanders. On the other hand, if we take $x\to x+1$, $x\to x^{-1}$, then, as is widely known, we do get an expander family. Consider the maps $x\mapsto x+1$, $x\mapsto x^3$. Do they (and their inverses, if you wish) give an expander family? (Let $p$ range only over primes $p\equiv 2 \mod 3$, so as to keep the map $x\mapsto x^3$ injective.) What about the maps $x\mapsto x+1$, $x\mapsto 3 x$, $x\mapsto x^3$? Do they, taken together, give an expander family? (Is there a way to relate these maps to the action of a linear group? Are there examples of sets of not necessarily linear maps giving rise to expander families?) UPDATE: What if these were shown not to be expanders? What would be some interesting consequences?
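One can at least experiment numerically for a single prime: build the symmetrized graph on $\mathbb{Z}/p\mathbb{Z}$ for the maps $x\mapsto x+1$, $x\mapsto x^3$ and compute the spectral gap of the adjacency matrix. A sketch (a finite check for one $p$, of course, says nothing about a uniform $\delta$; the function name is mine):

```python
import numpy as np

def spectral_gap(p):
    # Adjacency matrix on Z/pZ for x -> x+1 and x -> x^3, with inverses
    # included (edges added in both directions), so every vertex has
    # degree 4 when p = 2 mod 3 and loops are counted twice.
    A = np.zeros((p, p))
    for x in range(p):
        for y in ((x + 1) % p, pow(x, 3, p)):
            A[x, y] += 1
            A[y, x] += 1
    eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
    return eigs[0], eigs[0] - eigs[1]
```

Since the graph is 4-regular, the top eigenvalue is exactly 4, and connectivity (via the $x\mapsto x+1$ cycle) forces a strictly positive gap for each fixed $p$; the question is whether the gap stays bounded away from 0 as $p\to\infty$.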
One equation to calculate a heat change is $q = c_p \cdot m \cdot\Delta T$. I know that $\Delta T$ can be in either degrees Celsius ($^\circ{}\mathrm{C}$) or kelvin ($\mathrm{K}$). However, one thing that confuses me is how the units cancel if $\Delta T$ is in degrees Celsius, since $c_p$ has units of $\mathrm{J\ g^{-1}\ K^{-1}}$. How does this work? Elaborating Ron’s answer through a concrete example, we start by looking at the Celsius scale $$ T_\text{C1} = 25\ \mathrm{^\circ C} ,~ T_\text{C2} = 100\ \mathrm{^\circ C}$$ Thus, we obtain $$ \Delta T_\text{C} = T_\text{C2} - T_\text{C1} = 100\ \mathrm{^\circ C} - 25\ \mathrm{^\circ C} = 75\ \mathrm{^\circ C} $$ In a similar fashion, using the Kelvin scale $$ T_\text{K1} = 298.15\ \mathrm K = (273.15 + 25)\ \mathrm{K},~ T_\text{K2} = 373.15\ \mathrm K = (273.15 + 100)\ \mathrm{K}$$ Thus, we obtain $$ \Delta T_\text{K} = T_\text{K2} - T_\text{K1} = (273.15 + 100)\ \mathrm{K} - (273.15 + 25)\ \mathrm{K} = 75\ \mathrm{K} $$ As you can see, the constant offset of 273.15 between the Celsius scale and the Kelvin scale cancels, and the difference is exactly the same. The units differ between the two expressions, but the numerical value of the difference is the same. The temperature in kelvin is equal to the temperature in degrees Celsius plus 273. So, $\Delta T$ will be the same whether the temperature was reported in $\mathrm{K}$ or $^\circ\mathrm{C}$. As an example, if $T_\mathrm{initial} = 0\ ^\circ\mathrm{C}\ (273\ \mathrm{K})$ and $T_\mathrm{final} = 100\ ^\circ\mathrm{C}\ (373\ \mathrm{K})$, $\Delta T$ is 100 degrees no matter whether you used $^\circ\mathrm{C}$ or $\mathrm{K}$ in your computation. If your specific heat has units of degrees Celsius in it, then use the temperatures in degrees Celsius to find $\Delta T$. The difference, though, would be exactly the same as if you used kelvin, since a 1-degree difference is the same on both scales. Nevertheless, I would discourage using degrees Celsius, as the SI unit is kelvin and is the one that's normally used.
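A one-line numerical check of the cancellation, using the example values above:

```python
# Temperature differences are offset-invariant: the 273.15 K shift between
# the Celsius and Kelvin scales cancels on subtraction.
t1_c, t2_c = 25.0, 100.0
t1_k, t2_k = t1_c + 273.15, t2_c + 273.15

dT_c = t2_c - t1_c   # difference on the Celsius scale
dT_k = t2_k - t1_k   # difference on the Kelvin scale
```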
We define a parabola as the locus of a point that moves such that its distance from a fixed straight line called the directrix is equal to its distance from a fixed point called the focus. Unlike the ellipse, a parabola has only one focus and one directrix. However, comparison of this definition with the focus-directrix property of the ellipse (which can also be used to define the ellipse) shows that the parabola can be regarded as a limiting form of an ellipse with eccentricity equal to unity. We shall find the equation to a parabola whose directrix is the line \(x = -q\) and whose focus is the point \((q , 0)\). Figure \(\text{II.20}\) shows the parabola. \(\text{F}\) is the focus and \(\text{O}\) is the origin of the coordinate system. The vertex of the parabola is at the origin. In an orbital context, for example the orbit of a comet moving around the Sun in a parabolic orbit, the Sun would be at the focus \(\text{F}\), and the distance between vertex and focus would be the perihelion distance, for which the symbol \(q\) is traditionally used in orbit theory. \(\text{FIGURE II.20}\) From figure \(\text{II.20}\), it is evident that the definition of the parabola \((\text{PF} = \text{PN})\) requires that \[(x-q)^2 + y^2 = (x+q)^2 , \label{2.4.1} \] from which \[y^2 = 4qx , \label{2.4.2}\] which is the equation to the parabola. Exercise \(\PageIndex{1}\) Sketch the following parabolas: \(y^2 = -4qx \) \(x^2 = 4qy\) \(x^2 = -4qy,\) \((y-2)^2 = 4q (x-3).\) The line parallel to the \(y\)-axis and passing through the focus is the latus rectum. Substitution of \(x = q\) into \(y^2 = 4qx\) shows that the latus rectum intersects the parabola at the two points \((q , \pm 2q)\), and that the length \(l\) of the semi latus rectum is \(2q\). The equations \[x = qt^2 , \quad y = 2qt \label{2.4.3} \] are the parametric equations to the parabola, for \(y^2 = 4qx\) results from the elimination of \(t\) between them.
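The focus-directrix property can be checked numerically from the parametric form; the helper below is an illustrative sketch with an assumed value \(q = 1.5\):

```python
import math

def focus_directrix_gap(t, q=1.5):
    # Point on the parabola in parametric form (q t^2, 2 q t).
    x, y = q * t * t, 2 * q * t
    pf = math.hypot(x - q, y)   # distance PF to the focus (q, 0)
    pn = x + q                  # distance PN to the directrix x = -q
    return pf - pn              # zero for every point of the parabola
```

Algebraically both distances equal \(q(t^2 + 1)\), so the gap vanishes for every \(t\).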
In other words, if \(t\) is any variable, then any point that satisfies these two equations lies on the parabola. Most readers will know that if a particle is moving with constant speed in one direction and constant acceleration at right angles to that direction, as with a ball projected in a uniform gravitational field or an electron moving in a uniform electric field, the path is a parabola. In the constant speed direction the distance is proportional to the time, and in the constant acceleration direction the distance is proportional to the square of the time; hence the path is a parabola. Tangents to a Parabola. Where does the straight line \(y = mx + c\) intersect the parabola \(y^2 = 4qx\)? The answer is found by substituting \(mx + c\) for \(y\) to obtain, after rearrangement, \[m^2 x^2 + 2(mc - 2q) x + c^2 = 0 . \label{2.4.4} \] The line is tangent if the discriminant is zero, which leads to \[c = q/m . \label{2.4.5} \] Thus a straight line of the form \[y = mx + q/m \label{2.4.6} \] is tangent to the parabola. Figure \(\text{II.22}\) illustrates this for several lines, the slope of each differing by \(5^\circ\) from the next. \(\text{FIGURE II.22}\) We shall now derive the equation to the line that is tangent to the parabola at the point \((x_1 , y_1 )\). Let \((x_1 , y_1) = (qt_1^2 , 2qt_1)\) be a point on the parabola, and let \((x_2 , y_2) = (qt_2^2 , 2qt_2)\) be another point on the parabola. The line joining these two points is \[\frac{y-2qt_1}{x-qt_1^2} = \frac{2q(t_2 - t_1)}{q(t_2^2 - t_1^2)} = \frac{2}{t_2+t_1}. \label{2.4.7} \] Now let \(t_2\) approach \(t_1\), eventually coinciding with it. Putting \(t_1 = t_2 = t\) in the last equation results, after simplification, in \[ty = x + qt^2 , \label{2.4.8} \] being the equation to the tangent at \((qt^2 , 2qt )\). Multiply by \(2q\): \[2qty = 2q (x+qt^2) \label{2.4.9} \] and it is seen that the equation to the tangent at \((x_1 , y_1 )\) is \[y_1 y = 2q (x_1 + x).
\label{2.4.10} \] There are a number of interesting geometric properties, some of which are given here. For example, if a tangent to the parabola at a point \(P\) meets the directrix at \(Q\), then, just as for the ellipse, \(P\) and \(Q\) subtend a right angle at the focus (figure \(\text{II.23}\)). The proof is similar to that given for the ellipse, and is left for the reader. \(\text{FIGURE II.23}\) The reader will recall that perpendicular tangents to an ellipse meet on the director circle. The analogous theorem vis-à-vis the parabola is that perpendicular tangents meet on the directrix. This is also illustrated in figure \(\text{II.23}\). The theorem is not specially important in orbit theory, and the proof is also left to the reader. Let \(\text{PG}\) be the normal to the parabola at point \(\text{P}\), meeting the axis at \(\text{G}\) (figure \(\text{II.24}\)). We shall call the length \(\text{GH}\) the subnormal. A curious property is that the length of \(\text{GH}\) is always equal to \(l\), the length of the semi latus rectum (which in figure \(\text{II.24}\) is of length 2, i.e. the ordinate where \(x = 1\)), irrespective of the position of \(\text{P}\). This proof again is left to the reader. \(\text{FIGURE II.24}\) The following two geometrical properties, while not having immediate applications to orbit theory, certainly have applications to astronomy. \(\text{FIGURE II.25}\) The tangent at \(\text{P}\) makes an angle \(\alpha\) with the \(x\)-axis, and \(\text{PF}\) makes an angle \(\beta\) with the \(x\)-axis (figure \(\text{II.25}\)). We shall show that \(\beta = 2\alpha\) and deduce an interesting consequence. The equation to the tangent (see equation \(\ref{2.4.8}\)) is \(ty = x + qt^2\), which shows that \[\tan \alpha = 1/t . \label{2.4.11} \] The coordinates of \(\text{P}\) and \(\text{F}\) are, respectively, \(\left(qt^2 , 2qt \right)\) and \((q , 0)\), and so, from the triangle \(\text{PFH}\), we find \[\tan \beta = \frac{2t}{t^2-1}.
\label{2.4.12}\] Let \(\tau = 1/t\); then \(\tan \alpha = \tau\) and \(\tan \beta = 2\tau/(1 - \tau^2)\), which shows that \(\beta = 2\alpha\). This also shows that triangle \(\text{JFP}\) is isosceles, with the angles at \(\text{J}\) and \(\text{P}\) each being \(\alpha\). This can also be shown as follows. From the equation \(ty = x + qt^2\), we see that \(\text{J}\) is the point \((-qt^2 , 0)\), so that \(\text{JF} = q (t^2 + 1)\). From triangle \(\text{PFH}\), we see that \[(\text{PF})^2 = 4q^2 t^2 + q^2 \left(t^2 - 1 \right)^2 = q^2 \left( t^2 + 1 \right)^2 . \label{2.4.13}\] Therefore \[\text{PF} = \text{JF} . \label{2.4.14}\] Either way, since the triangle \(\text{JPF}\) is isosceles, it follows that \(\text{QP}\) and \(\text{PF}\) make the same angle \(\alpha\) to the tangent. If the parabola is a cross section of a telescopic mirror, any ray of light coming in parallel to the axis will be focussed at \(\text{F}\), so that a paraboloidal mirror, used on-axis, does not suffer from spherical aberration. (This property holds, of course, only for light parallel to the axis of the paraboloid, so that a paraboloidal mirror, without some sort of correction, gives good images over only a narrow field of view.) Now consider what happens when you stir a cup of tea. The surface takes up a shape that looks as though it might resemble the parabola \(y = x^2 /(4q)\) - see figure \(\text{II.26}\): \(\text{FIGURE II.26}\) Suppose the liquid is circulating at angular speed \(\omega\). A tea leaf floating on the surface is in equilibrium (in the rotating reference frame) under three forces: its weight \(mg\), the centrifugal force \(m\omega^2 x\) and the normal reaction \(R\). The normal to the surface makes an angle \(\theta\) with the vertical (and the tangent makes an angle \(\theta\) with the horizontal) given by \[\tan \theta = \frac{\omega^2x}{g}.
\label{2.4.15} \] But the slope of the parabola \(y = x^2 /(4q)\) is \(x/(2q)\), so that the surface is indeed a parabola, with semi latus rectum \(2q = g/\omega^2\). This phenomenon has been used in Canada to make a successful large telescope (diameter \(6 \ \text{m}\)) in which the mirror is a spinning disc of mercury that takes up a perfectly paraboloidal shape. Another example is the spin casting method that has been successfully used for the production of large, solid glass paraboloidal telescope mirrors. In this process, the furnace is rotated about a vertical axis while the molten glass cools and eventually solidifies into the required paraboloidal shape. Exercise \(\PageIndex{2}\) The 6.5 metre diameter mirrors for the twin Magellan telescopes at Las Campañas, Chile, have a focal ratio \(f/1.25\). They were made by the technique of spin casting at The University of Arizona's Mirror Laboratory. At what speed would the furnace have had to be rotated in order to achieve the desired focal ratio? (Answer \(= 7.4 \ \text{rpm}\).) Notice that \(f/1.25\) is quite a deep paraboloid. If this mirror had been made by traditional grinding from a solid disc, what volume of material would have had to be removed to make the desired paraboloid? (Answer - a whopping 5.4 cubic metres, or about 12 tons!) Polar equation to the Parabola As with the ellipse, we choose the focus as pole and the axis of the parabola as initial line. We shall orient the parabola so that the vertex is toward the right, as in figure \(\text{II.27}\). We recall the focus-directrix property, \(\text{FP} = \text{PN}\). Also, from the definition of the directrix, \(\text{FO}=\text{OM} = q\), so that \(\text{FM} = 2q = l\), the length of the semi latus rectum. It is therefore immediately evident from figure \(\text{II.27}\) that \(r \cos \theta + r = 2q = l\), so that the polar equation to the parabola is \[r = \frac{l}{1+ \cos \theta}.
\label{2.4.16} \] \(\text{FIGURE II.27}\) This is the same as the polar equation to the ellipse (Equation 2.3.36), with \(e = 1\) for the parabola. I have given different derivations for the ellipse and for the parabola; the reader might like to interchange the two approaches and develop equation 2.3.36 in the same manner as we have developed equation \(\ref{2.4.16}\). When we discuss the hyperbola, I shall ask you to show that its polar equation is also the same as 2.3.36. In other words, Equation 2.3.36 is the equation to a conic section, and it represents an ellipse, parabola or hyperbola according to whether \(e<1, \ e=1 \ \text{or } e>1\).
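As a quick numerical cross-check (a sketch with an assumed \(q = 1\), oriented as in figure \(\text{II.27}\) with the focus at the pole and the vertex toward \(\theta = 0\)), the polar form agrees with the focus-centred Cartesian equation \(y^2 = 4q(q - x)\):

```python
import math

def conic_residual(theta, q=1.0):
    # Polar equation of the parabola with l = 2q and the focus as pole.
    r = 2 * q / (1 + math.cos(theta))
    x, y = r * math.cos(theta), r * math.sin(theta)
    # Focus-centred Cartesian form of the same parabola (vertex at (q, 0),
    # opening away from the vertex): y^2 - 4q(q - x) should vanish.
    return y * y - 4 * q * (q - x)
```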
Let's examine the one-dimensional three-point stencil case in detail, because I think it's important to be clear just how this behaviour arises, and what it means to set a point to a certain value in a finite-difference grid when the underlying function is discontinuous. The equation will be $$ u''(x) = \rho(x). $$ Instead of using the interval $[-1,1]$ with the discontinuity at $\frac12$, I will use the interval $[-\frac12,\frac12]$ with the discontinuity placed at $0$. The grid size will be $h$, and I will only have to consider the interval $[-\frac h2,\frac h2]$ around the grid point $0$. First, in a finite-difference approximation, we approximate $$ u''(0) = \frac{\hat u(-h) - 2\hat u(0) + \hat u(h)}{h^2} $$ and solve $$ \frac{\hat u(-h) - 2\hat u(0) + \hat u(h)}{h^2} = \hat \rho(0). $$ (Here the variables with hats are the numerical approximations on the grid to the variables without hats.) But this is a very bad specification of the problem, because your function $\rho(0) = 2$ at $x=0$, and $\rho(x) = 0$ everywhere else, is discontinuous. In particular, if we shift the grid point $0$ to either side by some tiny amount $\epsilon$, the numerical solution changes entirely and becomes exactly zero. This means that this is a terribly misspecified problem. We can make sense of it by converting it to an equivalent finite-volume formulation, where it will make much more sense. In a finite-volume method, we solve $$ \int_{-h/2}^{h/2} u''(x)\,dx = \int_{-h/2}^{h/2}\rho(x)\,dx, $$ by picking $u(x)$ to be a suitable approximation to the unknown function. Let's pick, on the interval $[-\frac h2,\frac h2]$, the approximation $$ u(x) = \hat u(-h) \phi(x+h) + \hat u(0) \phi(x) + \hat u(h) \phi(x-h), $$ where $\phi(x)$ is the basis function $$ \phi(x) = \max\left(1-\frac{|x|}{h}, 0\right) $$ (a piecewise linear function that goes from $0$ at $-h$ to $1$ at $0$ to $0$ at $+h$, thus interpolating between grid points).
The approximation $u(x)$ is a weighted sum of three basis functions that look like this: We can then compute $$ \phi''(x) = \frac1h\delta(h-|x|) - \frac2h \delta(x), $$ so that the approximation to the integral is $$ \int_{-h/2}^{h/2} u''(x)\,dx = \frac{\hat u(-h) -2\hat u(0)+\hat u(h)}{h}, $$ and the finite-volume approximation to our equation becomes $$ \frac{\hat u(-h) -2\hat u(0) + \hat u(h)}{h} = \int_{-h/2}^{h/2}\rho(x)\,dx = h \hat \rho(0). $$ There are two important things here. First, this is equivalent to the finite-difference formulation in that we end up solving the same equations. Second, the discontinuity in $\rho$ is given a very precise meaning: when we use the value $2$ for $\hat \rho(0)$, we are saying that this is the average value of $\rho$ on the interval $[-h/2,h/2]$: $$2 = \hat\rho(0) = \frac1h \int_{-h/2}^{h/2} \rho(x)\,dx.$$ This interpretation is not available in the finite-difference formulation. It is also not so sensitive to the location of the discontinuity: if the discontinuity were at some small distance $\epsilon$ away from $0$, the average value would be almost the same, whereas the point value at $0$ might be completely different. But if we say that $2$ is the average value of $\rho$ near the grid point, we can then go back and compute the exact solution of the equation with the right-hand side given by $$ \tilde \rho(x) = 2\,[-h/2 < x < h/2]. $$ (We pick a function of our own choice that gives the right average.) In this case, the exact solution will be $$ \tilde u(x) = 2 \int_{-h/2}^{h/2} G(x; \xi)\,d\xi \approx 2h\, G(x; 0), $$ in terms of the Green function for the Poisson equation. In the two-dimensional case it will be $$ \approx 2h^2\, G(x, y; 0, 0), $$ as in the other (correct) answer on MSE. Finally, the outcome of all this is that when you say that you compare your numerical approximate solution with the exact solution $u(x)=0$, this is wrong. The exact solution should not be zero; it should be $$ \approx 2h^2\, G(x, y; 0, 0).
$$ It therefore should make perfect sense that the fourth-order solution does not converge to zero with order $4$: it should converge to the correct solution, which is not zero but has magnitude of order $O(h^2)$. If you do want to get zero as the numerical solution and compare with the mathematical solution, you should set $\hat\rho(0)$ to be the average value of $\rho$, which is $0$, not $2$. The fact that the exact solution depends on the chosen grid size indicates that this is not a good way to check whether you implemented the method correctly. A very straightforward technique known as the method of manufactured solutions is better suited for this.
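The $h$-dependence is easy to see in one dimension. The sketch below (my own illustration of the point above, not part of the original answer) solves the three-point scheme on $[-\frac12,\frac12]$ with homogeneous Dirichlet conditions and $\hat\rho = 2$ at the node nearest $x=0$; halving $h$ halves the magnitude of the computed solution, exactly as the finite-volume reading $u \approx 2h\,G(x;0)$ predicts:

```python
import numpy as np

def solve_point_source(n):
    # n interior nodes on [-1/2, 1/2], u = 0 at both ends, h = 1/(n+1).
    h = 1.0 / (n + 1)
    x = np.linspace(-0.5 + h, 0.5 - h, n)
    # Standard three-point Laplacian.
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    rho = np.zeros(n)
    rho[np.argmin(np.abs(x))] = 2.0   # rho-hat = 2 at the node nearest 0
    u = np.linalg.solve(A, rho)
    return h, np.abs(u).max()

# In 1D the discrete solution is the sampled tent u(x) = h (|x| - 1/2),
# so its maximum magnitude is h/2 and shrinks linearly with h.
h1, m1 = solve_point_source(99)    # odd n places a node exactly at x = 0
h2, m2 = solve_point_source(199)
```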
The intuition behind the residual graph in the maximum flow problem is very well presented in this lecture. The explanation goes as follows. Suppose that we are trying to solve the maximum flow problem for the following network $G$ (where each label $f_e/c_e$ denotes both the flow $f_e$ pushed through an edge $e$ and the capacity $c_e$ of this edge): One ... Note: The notations and definitions used below are borrowed from the third edition of the book. To answer this question, first observe that if $(u,v) \notin E$, then by the definition of a flow, $$f(u,v) = f'(u,v) = (f \uparrow f')(u,v) = 0 \, .$$ Furthermore, since $f'(v,u) \le c_f(u,v) = f(u,v)$, we obtain $f'(v,u) = 0$. This simply implies that $\... I believe you can represent node N as two nodes, A and B. Node A has all of the inbound flow edges of N, and Node B has all of the outbound flow edges of N. Nodes A and B are connected by a single edge which you can use to throttle the flow. The edge from A to B is the only edge out of A and the only edge into B. Strangely enough, no such reduction is known. However, in a recent paper, Madry (FOCS 2013) showed how to reduce maximum flow in unit-capacity graphs to (logarithmically many instances of) maximum $b$-matching in bipartite graphs. In case you are unfamiliar with the maximum $b$-matching problem, it is a generalization of matching, defined as follows: ... Flows with more than one "thing" flowing are known as "multicommodity flows". The basic definitions assume that every thing can flow through every vertex and edge. However, the standard way of solving these problems is linear programming, and you could easily modify the normal multicommodity flow program to deal with your situation, just by substituting zero ... There is a paper titled "Dynamic Trees in Practice" which reviews the practical implementation. The other category where the link-cut tree can be used efficiently is database indexing. You can find this in the book "Database Index Techniques".
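The vertex-splitting construction described above can be sketched as follows (an illustrative helper with an assumed dict-based graph representation, not from any particular library):

```python
def split_vertex_capacities(edges, vertex_cap):
    """Reduce vertex capacities to edge capacities.

    edges: dict mapping (u, v) -> edge capacity
    vertex_cap: dict mapping v -> vertex capacity
    Each capacitated vertex v becomes v_in -> v_out with capacity cap[v];
    every original edge (u, v) is rewired as u_out -> v_in.
    """
    new_edges = {}
    for v, c in vertex_cap.items():
        new_edges[((v, 'in'), (v, 'out'))] = c   # the throttling edge
    for (u, v), c in edges.items():
        new_edges[((u, 'out'), (v, 'in'))] = c
    return new_edges
```

A max flow computed on the transformed graph then automatically respects every vertex capacity, since all flow through v must cross the single v_in -> v_out edge.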
You've left out part of the statement. It should be "If there's no path between the source and the sink with unused capacity, the flow is a max flow." If you look at your graph you'll see that there is no path with unused capacity all the way from $s$ to $t$. The $s$ to $a$ link has spare capacity, but $a$'s lone outbound link is saturated. The $s$ to $c$ ... It cannot be solved in polynomial time, assuming P$\,\neq\,$NP. Without worrying about colors (i.e. if every vertex had the same color), it is the MAX SIZE EXCHANGE problem from the Kidney Exchange literature and can be solved in polynomial time with a reduction to the Assignment Problem. With the introduction of colors to the problem, it is NP-hard (and ... The solution given is clearly incorrect, as you demonstrate with the counterexample. Note that the graph U+V is a connected component by the infinite-capacity edges. Therefore every valid cut will have to contain all of A, B, C, D, E, F on the same side. Trying to trace back where the solution came from: http://www.cs.washington.edu/education/courses/cse521/... A very simple approach for question 2 is the following. Sort the edges by capacity. Remove the edge with lowest capacity, and check if there is still a path from $s$ to $t$. If there is, move on to the edge with the second lowest capacity, and so on. At some point, we will disconnect $s$ from $t$ by removing an edge of capacity $c$. Now, we know that $c$ is ... In general, the answer is no. If we put XOR-like restrictions on the outgoing edges of a vertex, we can prove that finding a min-cut-max-flow is NP-hard. The technique is to reduce 3-SAT to it. Let's assume there are $n$ variables $x_1, x_2, ..., x_n$ in the 3-SAT instance and $m$ clauses $c_1, c_2, ..., c_m$. We create a graph $G(V,E)$ encoding the instance of the ... Let $G=(X,Y,E)$ be a bipartite graph with $|X|=|Y|=n$ having a maximum matching $M$.
Consider the directed graph $G'$ on the vertex set $X \cup Y$ which includes the edges of $G$ oriented from $X$ to $Y$ and the edges of $M$ oriented from $Y$ to $X$. Let $U \subseteq X$ be the set of vertices of $X$ unmatched in $M$, and let $S$ be the set of vertices ... Yes, Ford-Fulkerson always finds the cut that is "closest" to the source. See this question for a formalization of what is meant by "closest". A graph can contain exponentially many min-cuts, so beware that any procedure to enumerate all min-cuts must take exponential time in total in the worst case. Based on what I've read, there are output-sensitive ... Yes. If the flow is not maximum, then there is an augmenting path. If there's an augmenting path, Ford-Fulkerson will find it (and continue to find them until the flow is maximum). Starting from a different initial flow does not change this. The Edmonds-Karp algorithm works by building successive flows $f_0, \dots, f_n$ where each flow $f_{i+1}$ can be obtained by combining $f_i$ and a path in the "residual graph" $G_{f_i}$ obtained through a BFS (the residual graph is just the original graph where we removed full edges). Now, the idea of the proof in Introduction to Algorithms is to introduce a ... Network flow has been used for all sorts of interesting and surprising tasks in computer vision and image processing. For instance, it has been used for image segmentation, image stitching, seam carving, image denoising, stereo image correspondence, and more. See, e.g., https://en.wikipedia.org/wiki/Graph_cuts_in_computer_vision, What's the ... This paper finds at the end that a link-cut (LC) tree outperforms rake-compress (RC) trees for the Sleator/Tarjan max-flow algorithm using a standard Dimacs random graph generator. The paper focuses on change propagation as one application of dynamic trees; e.g., change propagation is similar to the way that Excel spreadsheet cells have to be recomputed on ...
As saadtaame mentions, maximum usually means global maximum, whereas maximal means local maximum. For example, a maximal independent set in a graph is one to which no vertex can be added (making it a dominating set), while a maximum independent set is one with largest cardinality among all independent sets. My guess is that a maximal flow is one in which ... In the worst case, the minimum cut itself doesn't convey much information about the maximum flow. Consider a graph $G=(V,E)$ in which the minimum $s,t$-cut has value $w$. If I extend $G$ by adding a new vertex $s'$ and an edge $(s',s)$ with weight $w$, a minimum $s',t$-cut in the new graph consists of just the edge $(s', s)$, but that doesn't give any ... In his FOCS 2013 (Best Paper award) work, Aleksander Mądry gives an $\widetilde O(m^{\frac{10}{7}})$-time algorithm for exact max-flow and gives a nice survey of the existing techniques (including near-linear time for $(1+\epsilon)$-approximation in undirected graphs). There might be some more clever trick in the analysis to get rid of the $V$, but at the very least, I can provide some intuition as to why you can get rid of it. With Ford-Fulkerson, it is generally assumed that you are working with connected graphs. For any connected graph with $V$ nodes, there are at least $V-1$ edges, since each node needs at least one ... Sliding windows are used to: keep track of which packets were sent and received, making the data transmission reliable; and keep track of the memory available to the receiver. The receiver may fill its buffers and tell the sender to slow down (because further packets would simply be dropped, causing the sender to re-send them with a probably bigger delay). When ... A blocking $s$-$t$ flow is a flow whose residual network (consisting of all edges not saturated by the flow) contains no $s$-$t$ path. Stated differently, a blocking flow is a flow which, for every $s$-$t$ path, saturates at least one edge.
Equivalently, a blocking flow is a flow which, for every simple $s$-$t$ path, saturates at least one edge. The ... You can do it in $\mathcal{O}(m + n)$ time, where $m$ and $n$ are the number of edges and vertices respectively. Let the edge to be updated be $e = (u, v)$. If you increment the capacity of $e$ by $1$, the maximum flow increments by at most $1$. Hence, starting with the current max flow $f$, you only need ... There is a classical linear time algorithm of Gabow and Kariv. The first step is to find an Eulerian tour. You do this by starting at an arbitrary vertex and following an arbitrary path until you close a cycle. If you're not back where you started, you continue following an arbitrary path until closing a cycle, and so on. If you are back where you started, ... This is an instance of multi-commodity network flow. If you insist on integer flows, the problem is NP-hard, but if you allow flows to take fractional values, the problem can be solved in polynomial time using linear programming. That's not what the formula gives you. As the caption says, the capacity of the augmenting path in the residual network in (b) is $4$. Therefore we send 4 units of flow along the augmenting path from $s$ to $t$, namely, the path $s \to v_2 \to v_3 \to t$. In particular, $f(s,v_2)=8$, $f'(s,v_2)=4$, and $f'(v_2,s)=0$, so the updated flow is $8+4-0=12$. Those answers assume that all edge capacities are integers. Assuming they are, this works. Suppose the min-cut in the original graph has total capacity $x$; then it will have total capacity $x(|E|+1)+k$ in the transformed graph, where $k$ counts the number of edges crossing that cut. Note that if you consider any cut in the original graph with larger ...
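Several of the answers above lean on BFS-based augmentation (Edmonds-Karp). For reference, here is a compact self-contained sketch, using an assumed dict-of-dicts capacity representation of my own choosing:

```python
from collections import deque, defaultdict

def edmonds_karp(cap, s, t):
    # Residual capacities, including zero-capacity reverse edges.
    res = defaultdict(dict)
    for u in cap:
        for v, c in cap[u].items():
            res[u][v] = res[u].get(v, 0) + c
            res[v].setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in res[u].items():
                if v not in parent and c > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # Bottleneck capacity along the path, then augment.
        b, v = float('inf'), t
        while parent[v] is not None:
            b = min(b, res[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:
            u = parent[v]
            res[u][v] -= b
            res[v][u] += b
            v = u
        flow += b
```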
This only answers your first question. The groups $(\mathbb Z_2)^k$ have the fastest growing (outer) automorphism groups possible. We can derive a pretty good upper bound on the size of automorphism groups. In particular, first note that if $G$ is a finite group with $n$ elements, then it has a generating set $S$ of size at most $\log_2(n)$ - we can find such a generating set just by a greedy algorithm in which we start with the empty set, then repeatedly add elements not yet generated until we hit the whole group (each new element at least doubles the size of the generated subgroup). Then, since a group homomorphism is determined by its values on a generating set, there are at most $n^{\log_2(n)}$ endomorphisms of a group $G$. Note that if $G=(\mathbb Z_2)^k$, then $G$ has exactly $n^{\log_2(n)}=2^{k^2}$ endomorphisms, and these are the only groups for which this bound is tight (since the bound on the size of the generating set is only tight if no element has order $>2$). The number of automorphisms of $G=(\mathbb Z_2)^k$ is $$(2^{k}-2^0)(2^k-2^1)\cdots (2^k-2^{k-1})=2^{k^2}\cdot \prod_{i=1}^{k}(1-2^{-i}).$$ Since $\prod_{i=1}^{\infty}(1-2^{-i})$ converges to some positive quantity $c$, the asymptotic growth of the number of (outer) automorphisms of this family of groups grows as $c\cdot n^{\log_2(n)}$, which is a constant factor beneath the theoretical upper bound. Addendum: Refining the argument slightly actually shows a tighter bound and shows that the groups $(\mathbb Z_2)^k$ simply have the largest automorphism groups among groups of the same size. In particular, let $G$ be a group. Pick some minimal generating set $g_1,\ldots,g_n$ and define $G_i=\langle g_1,\ldots,g_i\rangle$. Let $n_i$ be the number of injective homomorphisms $G_i\rightarrow G$ for each $i$. Clearly $n_0=1$. Then, observe that $n_{i+1}\leq n_i \cdot (|G|-|G_{i}|)$, since injective homomorphisms $G_{i+1}\rightarrow G$ may be identified with a subset of pairs consisting of an injective homomorphism $f:G_{i}\rightarrow G$ and an element $g\in G\setminus f(G_{i})$.
Using that the size of the groups $|G_i|$ must be an ascending tower of divisors of $|G|$, one can derive better bounds on the number of automorphisms - these bounds will be tight for all vector spaces. In fact, no other groups can make the suggested upper bound hold as an equality: A group for which this bound is tight has the property that its automorphism group acts transitively on the non-identity elements. This means every non-identity element has order $p$ for some prime. However, $p$-groups have a non-trivial center and automorphisms preserve the center. By transitivity, this means that the group must be abelian - which leaves only the finite vector spaces as candidates.
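The order formula above is easy to evaluate; a small sketch:

```python
def num_automorphisms_elem_abelian(k):
    # |Aut((Z_2)^k)| = |GL_k(F_2)| = prod_{i=0}^{k-1} (2^k - 2^i):
    # choose the image of each basis vector outside the span of the
    # previously chosen images.
    n = 2 ** k
    out = 1
    for i in range(k):
        out *= n - 2 ** i
    return out
```

For example, $k=3$ gives $7\cdot 6\cdot 4 = 168$, the order of $GL_3(\mathbb F_2)$.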
The Newtonian limit is the approximation of GR in weak fields and at small velocities. Small velocities mean that the whole 4-velocity of a particle is basically in the time component. So you can imagine that if spacetime is curved the same in all directions, then the time component is most significant simply because the particle almost doesn't move in space at all. To be more precise: the spacetime around a spherically symmetrical field is given by the Schwarzschild metric (in natural units):$$ds^2=-\left(1-\frac{r_s}{r}\right)dt^2+\left(1-\frac{r_s}{r}\right)^{-1}dr^2+r^2d\Omega^2\approx ds^2_{flat}+\frac{r_s}{r}(dt^2+dr^2)$$where $r_s$ is the Schwarzschild radius and $ds^2_{flat}$ is the Minkowski (flat spacetime) part of the metric. As you can clearly see, the perturbation of the flat spacetime metric has the same magnitude in the time component as in the space component in natural units. But now, let us compute geodesics. The geodesic equation is given by:$$a^\mu=-\Gamma^\mu_{\alpha\beta}v^{\alpha} v^{\beta}$$where $a^\mu$ is the 4-acceleration of a particle, $v^\mu$ its 4-velocity and $\Gamma^\mu_{\alpha\beta}$ is the Christoffel symbol. Now, the relevant Christoffel symbols for radial motion are $\Gamma^t_{\alpha\beta}$ and $\Gamma^r_{\alpha\beta}$, of which the only nonzero ones are:$$\Gamma^t_{tr}=\Gamma^t_{rt}\approx -g_{tt,r}/2$$$$\Gamma^r_{rr}\approx g_{rr,r}/2$$$$\Gamma^r_{tt}\approx -g_{tt,r}/2$$and all of them are of the same order, since the perturbations of the metric components $g_{tt}$ and $g_{rr}$ are of the same order (in fact $g_{tt,r}=g_{rr,r}$ for the weak-field metric above). So the geodesic equation for radial motion in the weak field of a spherically symmetric source is:$$a^t=-\Gamma^t_{\alpha\beta}v^{\alpha} v^{\beta}\approx g_{tt,r}v^{t} v^{r}$$$$a^r=-\Gamma^r_{\alpha\beta}v^{\alpha} v^{\beta}\approx g_{tt,r}v^{t} v^{t}/2-g_{rr,r}v^{r} v^{r}/2=g_{tt,r}/2$$where I have used $g_{tt,r}=g_{rr,r}$ from the metric and $v^{t}v^{t}-v^{r}v^{r}=1$ from normalization.
Having the 4-acceleration, we can get the radial 3-acceleration component ($a^r_3$) using:$$a^r=a^t v^r/\gamma+\gamma^2 a^r_3$$where $\gamma$ is the Lorentz factor. Now this doesn't lead to the Newtonian gravitation law without the assumption that velocities are small. With this assumption $\gamma\approx 1$, $v^t\approx-1$, $v^r\ll 1$ and $v^\mu\approx (-1,\vec{v})$, and the equation simplifies further:$$a^r\approx a^t v^r+a^r_3 \;\Rightarrow\; a^r_3 \approx a^r - a^t v^r$$Substituting from the geodesic equation:$$a^r_3\approx g_{tt,r}/2 - g_{tt,r}v^{t} (v^{r})^2=g_{tt,r}/2+O((v^{r})^2)\approx -r_s/(2r^2)=-GM/r^2$$with $M$ being the mass of the source, as Newton's law of gravitation says. So the approximation is not that the space components of the curvature can be neglected; it is the fact that the space components of the 4-velocity can be neglected.
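The claim that the perturbations of $g_{tt}$ and $g_{rr}$ (and hence their radial derivatives) agree to first order in $r_s/r$ can be checked numerically; a small sketch with assumed toy values:

```python
def g_tt(r, rs):
    return -(1.0 - rs / r)

def g_rr(r, rs):
    return 1.0 / (1.0 - rs / r)

def d_dr(f, r, rs, eps=1e-6):
    # Central finite difference in r.
    return (f(r + eps, rs) - f(r - eps, rs)) / (2.0 * eps)

rs, r = 1e-3, 2.0          # weak field: r_s / r = 5e-4
dtt = d_dr(g_tt, r, rs)    # -r_s / r^2 (g_tt is exactly linear in 1/r)
drr = d_dr(g_rr, r, rs)    # -r_s / r^2 up to O(r_s/r) corrections
```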
It may be a pseudo-question, but I still decide to ask. Given two $k$-modules $M$ and $N$, it seems to me that in the literature the tensor product $M\otimes_kN$ is always defined as the quotient of the free module generated by the set $M\times N$ modulo the submodule generated by the bilinearity relations. But I am curious to see a different construction of $M\otimes_kN$ which is of course isomorphic to the one mentioned above. In my first encounter with the tensor product of modules (in a course on representation theory by prof. Lenstra), it was done in the following spirit: First, the tensor product $M\otimes _RE$ is defined using the universal property. Next, we prove the following (and other) elementary properties (here I am concentrating on the object and leaving out the morphism): $M\otimes _RR$ exists, and equals $M$. '$\otimes$ commutes with $\oplus$': if $(M\otimes E_i)_i$ exist, then $M\otimes \oplus_iE_i$ exists and equals their direct sum. '$\otimes$ commutes with $\operatorname{coker}$ (right-exactness of $M\otimes_R-$)': Let $f:E\to F$ be $R$-linear, and assume $M\otimes_RE$ and $M\otimes_RF$ exist. Then $M\otimes_R \operatorname{coker} f$ exists and equals $\operatorname{coker}(M\otimes_RE\xrightarrow{1\otimes f} M\otimes_RF)$. Theorem: The tensor product $M\otimes_RE$ exists. Proof: Take a generating set $S$ of $E$, i.e. the natural map $f:R^{(S)}\to E$ is surjective. Next pick a generating set $T$ of $\ker(f)$, so the natural map $h:R^{(T)}\to R^{(S)}$ has image $\ker(f)$. Now $\operatorname{coker}(h)=R^{(S)}/\ker f$ is (isomorphic via $f$ to) $E$. By properties 1 and 2, $M\otimes_RR^{(T)}$ and $M\otimes_RR^{(S)}$ exist. By property 3, we conclude that $M\otimes_RE$ exists. Remark: I guess a didactical merit of this approach (compared to the standard construction as the free abelian group on the product modulo bilinear relations) is that it forces you to think and reason in terms of the universal property and exact sequences. I am not sure if this is also the reason my teachers had in mind.
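In equation form, the proof assembles a presentation of $E$ and pushes it through $M\otimes_R-$; schematically (my paraphrase of the argument above):

```latex
% Presentation of E by generators S and relations T, and its image
% under the right-exact functor M \otimes_R - :
\[
  R^{(T)} \xrightarrow{\;h\;} R^{(S)} \xrightarrow{\;f\;} E \longrightarrow 0
  \qquad\Longrightarrow\qquad
  M\otimes_R R^{(T)} \xrightarrow{\;1\otimes h\;} M\otimes_R R^{(S)}
  \longrightarrow M\otimes_R E \longrightarrow 0 .
\]
```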
A rather general definition of tensor products, reaching far beyond the algebraic setting, can be found in the arXiv papers 0808.0095 and 0807.1436. A recent application to quanta is in the arXiv paper 1203.0412.
Let $x$ denote the solution of $Ax=b$ and let $\hat{x}$ denote the computed solution. We cannot hope to do better than $$\hat{x} = \text{fl}(x),$$ i.e., the floating-point representation of $x$. In this, the most favorable case, we have $\hat{x}_j = x_j(1+\delta_j)$, where $|\delta_j| \leq u$ and $u$ is the unit roundoff. It follows that $\|x-\hat{x}\|_2 \leq u \|x\|_2$. Now, let $r$ denote the residual given by $$ r = b - A\hat{x} = A(x-\hat{x}).$$ We have $$\|r\|_2 \leq \|A\|_2 \|x-\hat{x}\|_2 \leq u \|A\|_2 \|x\|_2 \leq u \|A\|_2 \| A^{-1} \|_2 \|b\|_2.$$ We conclude that the relative residual satisfies $$ \frac{\|r\|_2}{\|b\|_2} \leq u \, \kappa_2(A), $$ where $\kappa_2(A) = \|A\|_2 \| A^{-1} \|_2$ denotes the 2-norm condition number of the matrix $A$. The estimate above is true for a general matrix $A$. In practice, you will find that the relative residual of iterative methods stagnates at the level of $u \kappa_2(A)$. There is no hope of the CG algorithm doing better in general.
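As a quick sanity check of this bound, the sketch below (plain Python, with exact rational arithmetic via `fractions`) builds a small symmetric system whose condition number is known, rounds the exact solution to floating point — the "best possible" computed solution — and verifies that the relative residual stays below $u\,\kappa_2(A)$. The matrix and right-hand side are arbitrary choices for illustration.

```python
from fractions import Fraction
from math import sqrt

# Symmetric 2x2 matrix with eigenvalues 1 and 3, so kappa_2(A) = 3.
A = [[Fraction(2), Fraction(1)], [Fraction(1), Fraction(2)]]
kappa = 3.0
u = 2.0**-53  # unit roundoff for IEEE doubles with round-to-nearest

# Exact solution with entries that are not representable in binary floating point.
x = [Fraction(1, 3), Fraction(1, 7)]
b = [A[i][0] * x[0] + A[i][1] * x[1] for i in range(2)]  # exact b = A x

# The most favorable computed solution: the floating-point rounding of x.
xhat = [Fraction(float(xj)) for xj in x]

# Residual r = b - A*xhat, computed exactly, then measured in the 2-norm.
r = [b[i] - (A[i][0] * xhat[0] + A[i][1] * xhat[1]) for i in range(2)]
norm = lambda v: sqrt(sum(float(vi)**2 for vi in v))
rel_residual = norm(r) / norm(b)

print(rel_residual, u * kappa)
assert rel_residual <= u * kappa  # the bound derived above
```

The relative residual comes out within a small factor of $u$ itself here, comfortably below $u\,\kappa_2(A)$; for badly conditioned matrices the gap closes.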
Now that we have set our axioms - Newton’s laws of motion and the various force laws - we are ready to start combining them to get useful results, things that we did not put into the axioms in the first place but follow from them. The first thing we can do is write down equations of motion: an equation that describes the motion of a particle due to the action of a certain type of force. For example, suppose you take a rock of a certain mass m and let go of it at some height h above the ground, then what will happen? Once you’ve let go of the rock, there is only one force acting on the rock, namely Earth’s gravity, and we are well within the regime where Equation 2.2.2 applies, so we know the force. We also know that this net force will result in a change of momentum (Equation 2.1.4), which, because the rock won’t lose any mass in the process of falling, can be rewritten as Equation 2.1.5. By equating the forces we arrive at an equation of motion for the rock, which in this case is very simple: \[m \boldsymbol{g}=m \ddot{\boldsymbol{x}} \label{rock}\] We immediately see that the mass of the rock does not matter (Galilei was right! - though of course he was in our set of axioms, because we arrived at them by assuming he was right...). Less trivially, Equation (\ref{rock}) is a second-order differential equation for the motion of the rock, which means that in order to find the actual motion, we need two initial conditions - which in our present example are that the rock starts at height h and zero velocity. Equation (\ref{rock}) is essentially one-dimensional - all motion occurs along the vertical line. Solving it is therefore straightforward - you simply integrate over time twice. 
The general solution is: \[\boldsymbol{x}(t)=\boldsymbol{x}(0)+\boldsymbol{v}(0) t+\frac{1}{2} \boldsymbol{g} t^{2}\] which with our initial conditions becomes \[\boldsymbol{x}(t)=\left(h-\frac{1}{2} g t^{2}\right) \hat{\boldsymbol{z}} \label{soln}\] where \(g\) is the magnitude of \(\boldsymbol{g}\) (which points down, hence the minus sign). Of course Equation \ref{soln} breaks down when the rock hits the ground at \(t=\sqrt{2h \over g}\), which is easily understood because at that point gravity is no longer the only force acting on it. We can also immediately write down the equation of motion for a mass on a spring (no gravity at present), in which the net force is given by Hooke’s law. Equating that force to the net force in Newton’s second law of motion gives: \[-k \boldsymbol{x}(t)=m \ddot{\boldsymbol{x}}(t) \label{spring}\] Of course, we find another second-order differential equation, so we again need the initial position and velocity to specify a solution. The general solution of Equation \ref{spring} is a combination of sines and cosines, with a frequency \(\omega=\sqrt{k \over m}\) (as we already know from the dimensional analysis in Section 1.2): \[\boldsymbol{x}(t)=\boldsymbol{x}(0) \cos (\omega t)+\frac{\boldsymbol{v}(0)}{\omega} \sin (\omega t)\] We’ll study this case in more detail in Section 8.1. In general, the force in Newton’s second law may depend on time and position, as well as on the first derivative of the position, i.e., the velocity. For the special case that it depends on only one of the three variables, we can write down the solution formally, in terms of an integral over the force. These formal solutions are given in Section 2.6. To see how they work in practice, let’s consider a slightly more involved problem, that of a stone falling with drag. Example \(\PageIndex{1}\): Falling Stone with Drag Suppose we have a spherical stone of radius a that you drop from a height h at t=0. At what time, and with which velocity, will the stone hit the ground?
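The sine/cosine combination solving Equation \ref{spring} can be checked numerically: evaluate the claimed solution and compare a finite-difference second derivative against \(-\frac{k}{m}x(t)\). A short sketch (the values of \(k\), \(m\) and the initial conditions are arbitrary):

```python
import math

k, m = 2.0, 0.5          # arbitrary spring constant and mass
omega = math.sqrt(k / m)
x0, v0 = 0.3, -1.2       # arbitrary initial position and velocity

def x(t):
    # general solution x(t) = x(0) cos(wt) + (v(0)/w) sin(wt)
    return x0 * math.cos(omega * t) + (v0 / omega) * math.sin(omega * t)

# Check m x''(t) = -k x(t) at a few times, using a central difference for x''.
h = 1e-5
for t in [0.0, 0.4, 1.3, 2.7]:
    xdd = (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2
    assert abs(m * xdd + k * x(t)) < 1e-4
print("spring solution satisfies m x'' = -k x")
```

Differentiating the solution symbolically gives the same result, of course; the numerical check is just a quick way to catch sign or frequency mistakes.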
Solution We already solved this problem in the simple case without drag above, but now let’s include drag. There are then two forces acting on the stone: gravity (pointing down) with magnitude \(mg\), and drag (pointing in the direction opposite the motion, in this case up) with magnitude \(6 \pi \eta a v=b v\), as given by Stokes’ law (Equation 2.2.5). Our equation of motion is now given by (with x as the position of the particle, and the downward direction as positive): \[m \ddot{x}=-b \dot{x}+m g\] We see that our force does not depend on time or position, but only on velocity - so we have case 3 of Appendix 2.6. We could invoke either Equation (2.33) or (2.34) to write down a formal solution, but there is an easier way, which will allow us to evaluate the relevant integrals without difficulty. Since our equation of motion is linear, we know that the sum of two solutions is again a solution. One of the terms on the right hand side of Equation (2.19) is constant, which means that our equation is not homogeneous (we can rewrite it to \(m \ddot{x}+b \dot{x}=m g\) to see this), so a useful thing to do is to split our solution into a homogeneous and a particular part. Rewriting our equation in terms of \(v=\dot{x}\) instead of x, we get \(m \dot{v}+b v=m g\), from which we can immediately get a particular solution: \(v_{\mathrm{p}}= {m g \over b}\), as the time derivative of this constant \(v_{\mathrm{p}}\) vanishes. Subtracting \(v_{\mathrm{p}}\), we are left with a homogeneous equation: \(m \dot{v}_{\mathrm{h}}+b v_{\mathrm{h}}=0\), which we now solve by separation of variables. First we write \(\dot{v}_{\mathrm{h}}={\mathrm{d} v_{\mathrm{h}} \over \mathrm{d} t}\), then re-arrange so that all factors containing \(v_{\mathrm{h}}\) are on one side and all factors containing t are on the other, which gives \(-({m \over b})({1 \over v_h})dv_h=dt\).
We can now integrate to get: \[-\frac{m}{b} \int_{v_{0}}^{v} \frac{1}{v^{\prime}} \mathrm{d} v^{\prime}=-\frac{m}{b} \log \left(\frac{v}{v_{0}}\right)=t-t_{0}\] which is an example of Equation (2.33). After rearranging and setting \(t_0=0\): \[v_{\mathrm{h}}(t)=v_{0} \exp \left(-\frac{b}{m} t\right)\] Note that this homogeneous solution fits our intuition: if there is no extra force on the particle, the drag force will slow it down exponentially. Also note that we didn’t set \(v_0=0\), as the homogeneous solution does not equal the total solution. Instead \(v_0\) is an integration constant that we’ll need to set once we’ve written down the full solution, which is: \[v(t)=v_{\mathrm{h}}(t)+v_{\mathrm{p}}(t)=v_{0} \exp \left(-\frac{b}{m} t\right)+\frac{m g}{b}\] Now setting \(v(0)=0\) gives \(v_0=-{mg \over b}\), so \[v(t)=\frac{m g}{b}\left[1-\exp \left(-\frac{b}{m} t\right)\right]\] To get x(t), we simply integrate v(t) over time (with \(x(0)=0\) at the release point), to get: \[x(t)=\frac{m g}{b}\left[t+\frac{m}{b}\left(\exp \left(-\frac{b}{m} t\right)-1\right)\right]\] We can find when the stone hits the ground by setting x(t)=h and solving for t; we can find how fast it is going at that point by substituting that value of t back into v(t).
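The velocity solution can be cross-checked against a direct numerical integration of \(m\dot{v} = mg - bv\) with \(v(0)=0\); a forward-Euler sketch (the parameter values are arbitrary, not from the text):

```python
import math

m, g, b = 0.2, 9.81, 0.5   # arbitrary mass, gravitational acceleration, drag coefficient

def v_analytic(t):
    # v(t) = (mg/b) * (1 - exp(-bt/m)), starting from rest
    return (m * g / b) * (1.0 - math.exp(-(b / m) * t))

# Forward-Euler integration of dv/dt = g - (b/m) v with v(0) = 0.
dt, t_end = 1e-5, 2.0
v = 0.0
for _ in range(int(round(t_end / dt))):
    v += dt * (g - (b / m) * v)
v_num = v

assert abs(v_num - v_analytic(t_end)) < 1e-3      # numeric matches analytic
assert abs(v_analytic(50.0) - m * g / b) < 1e-9   # terminal velocity mg/b at large t
print(v_num, v_analytic(t_end))
```

The exponential approach to the terminal velocity \(mg/b\) is exactly what the homogeneous-plus-particular split predicts.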
This is a picture of an Upsilon system. What I am wondering about is that the energy width $\Gamma$ for the fourth (4S) resonance is very different from that of the first three. I know there is a formula which relates the cross-section to the energy width $\Gamma$, but I don't understand how this graph can be explained using the formula: $$\sigma (E) = \frac{2 J +1 }{(2S_1+1)(2S_2 +1)} \frac{4 \pi}{K^2} \frac{\Gamma^2 /4 }{(E-E_0)^2+ \Gamma^2 /4} B_{in} B_{out}$$ More precisely, how can the resonance width change between the different states 1S, 2S, 3S, etc.? Any detailed answer will be much appreciated.
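One thing the formula does say, independent of the spin factors and branching ratios: the Lorentzian factor $\frac{\Gamma^2/4}{(E-E_0)^2+\Gamma^2/4}$ peaks at 1 when $E=E_0$ and drops to half that value at $E=E_0\pm\Gamma/2$, so $\Gamma$ is the full width at half maximum of the resonance peak. A quick numerical check (the values of $E_0$ and $\Gamma$ are arbitrary placeholders):

```python
def bw_shape(E, E0, Gamma):
    # Lorentzian (Breit-Wigner) line-shape factor from the cross-section formula
    return (Gamma**2 / 4.0) / ((E - E0)**2 + Gamma**2 / 4.0)

E0, Gamma = 10.58, 0.0205   # arbitrary illustrative numbers

assert bw_shape(E0, E0, Gamma) == 1.0                          # maximum at E = E0
assert abs(bw_shape(E0 + Gamma / 2, E0, Gamma) - 0.5) < 1e-9   # half max at E0 + G/2
assert abs(bw_shape(E0 - Gamma / 2, E0, Gamma) - 0.5) < 1e-9   # half max at E0 - G/2
```

So a resonance with a larger $\Gamma$ simply shows up as a broader, lower bump once the peak cross-section is divided out; why $\Gamma$ differs between the 1S–3S and 4S states is a separate physics question (which decay channels are open).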
How do we evaluate the limits $$ \lim_{x \to 0^{\pm} } \frac{\sin 2x}{|\sin 2x|}, \qquad \lim_{x\to \frac{\pi}{2}^{\pm} } \frac{\sin 2x}{|\sin 2x|}$$ if we have an absolute value in the denominator? An idea: $$\frac{\sin 2x}{|\sin 2x|}=\begin{cases}\frac{\sin 2x}{\sin2x}\;,\;\;x>0\;\;\text{and close to zero, say}\;\;|x|<10^{-3}\\{}\\\frac{\sin2x}{-\sin2x}\;,\;\;x<0\;\;\text{and close to zero, say}\;\;|x|<10^{-3}\end{cases}$$ and now evaluate both one-sided limits. For the second limit point things are pretty similar. Try it. Hint: Consider $x < 0$, i.e. the limit from the left. What is $|\sin(2x)|$ equal to there? Keep in mind that $|y| > 0$ for all real $y \neq 0$, and that when $\sin(2x) < 0$ we get $|\sin(2x)| = -\sin(2x)$. For values just below $x = 0$, the angle $2x$ lies in the fourth quadrant, so $\sin(2x) < 0$. Try it.
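Sampling the quotient on both sides of each point makes the one-sided limits visible (a quick numerical sketch; the sample offset is arbitrary):

```python
import math

def q(x):
    s = math.sin(2 * x)
    return s / abs(s)   # undefined where sin(2x) = 0, i.e. at x = 0 and x = pi/2

eps = 1e-6
assert q(eps) == 1.0                    # x -> 0+ : sin(2x) > 0, quotient = +1
assert q(-eps) == -1.0                  # x -> 0- : sin(2x) < 0, quotient = -1
assert q(math.pi / 2 - eps) == 1.0      # x -> (pi/2)- : sin(2x) > 0, quotient = +1
assert q(math.pi / 2 + eps) == -1.0     # x -> (pi/2)+ : sin(2x) < 0, quotient = -1
print("one-sided limits: +1, -1, +1, -1")
```

The function is locally constant away from the zeros of $\sin 2x$, so the samples really do give the one-sided limits: $+1$ from the right of $0$, $-1$ from the left of $0$, $+1$ from the left of $\pi/2$, and $-1$ from the right of $\pi/2$.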
I am struggling with this question: Prove or give a counterexample: If $f : X \to Y$ is a continuous mapping from a compact metric space $X$, then $f$ is uniformly continuous on $X$. Thanks for your help in advance. The answer is yes, if $f$ is continuous on a compact space then it is uniformly continuous: Let $f: X \to Y$ be continuous, let $\varepsilon > 0$ and let $X$ be a compact metric space. Because $f$ is continuous, for every $x$ in $X$ you can find a $\delta_x$ such that $f(B(\delta_x, x)) \subset B({\varepsilon\over 2}, f(x))$. The balls $\{B(\delta_x, x)\}_{x \in X}$ form an open cover of $X$. So do the balls $\{B(\frac{\delta_x}{2}, x)\}_{x \in X}$. Since $X$ is compact you can find a finite subcover $\{B(\frac{\delta_{x_i}}{2}, x_i)\}_{i=1}^n$. (You will see in a second why we are choosing the radii to be half only.) Now let $\delta_{x_i}' = {\delta_{x_i}\over 2}$. You want to choose a distance $\delta$ such that any two points $x,y$ lie in the same ball $B(\delta_{x_i}, x_i)$ whenever their distance is less than $\delta$. How do you do that? Note that now that you have finitely many $\delta_{x_i}'$ you can take the minimum over all of them: $\min_i \delta_{x_i}'$. Consider two points $x$ and $y$ with $d(x,y) < \delta$. Surely $x$ lies in one of the $B(\delta_{x_i}', x_i)$, since they cover the whole space. Now we want $y$ to also lie in $B(\delta_{x_i}, x_i)$. And this is where it comes in handy that we chose a subcover with radii divided by two: If you pick $\delta : = \min_i \delta_{x_i}'$ (i.e.
$\delta = \frac{\delta_{x_i}}{2}$ for some $i$) then $y$ will also lie in $B(\delta_{x_i}, x_i)$: $d(x_i, y) \leq d(x_i, x) + d(x,y) < \frac{\delta_{x_i}}{2} + \min_k \delta_{x_k}' \leq \frac{\delta_{x_i}}{2} + \frac{\delta_{x_i}}{2} = \delta_{x_i}$. Hope this helps. Let $(X, d)$ be a compact metric space, and $(Y, \rho)$ be a metric space. Suppose $f : X \to Y$ is continuous. We want to show that it is uniformly continuous. Let $\epsilon > 0$. We want to find $\delta > 0$ such that $d(x,y) < \delta \implies \rho(f(x), f(y))< \epsilon$. Ok, well since $f$ is continuous at each $x \in X$, then there is some $\delta_{x} > 0$ so that $f(B(x, \delta_{x})) \subseteq B(f(x), \frac{\epsilon}{2})$. Now, $\{B(x, \frac{\delta_{x}}{2})\}_{x \in X}$ is an open cover of $X$, so there is a finite subcover $\{B(x_{i}, \frac{\delta_{x_{i}}}{2})\}_{i =1}^{n}$. If we take $\delta := \min_{i} (\frac{\delta_{x_{i}}}{2})$, then we claim $d(x,y) < \delta \implies \rho(f(x), f(y)) < \epsilon$. Why? Well, suppose $d(x,y) < \delta$. Since $x \in B(x_{i}, \frac{\delta_{x_{i}}}{2})$ for some $i$, we get $y \in B(x_{i}, \delta_{x_{i}})$. Why? $d(y, x_{i}) \leq d(y,x) + d(x,x_{i}) < \frac{\delta_{x_{i}}}{2} + \frac{\delta_{x_{i}}}{2} = \delta_{x_{i}}$. Ok, finally, if $d(x,y) < \delta$, then we claim $\rho(f(x), f(y)) < \epsilon$. This is because $\rho(f(x), f(y)) \leq \rho(f(x), f(x_{i})) + \rho(f(x_{i}), f(y)) < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon$.
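A numerical illustration of the theorem (a sketch, not a proof; the grids and tolerances are my own choices): on the compact interval $[0,1]$ the empirical "modulus of continuity" $\sup\{|f(x)-f(y)| : |x-y|\le\delta\}$ of $f(x)=\sqrt{x}$ shrinks as $\delta\to 0$, while for $f(x)=1/x$ on the non-compact $(0,1]$ it does not:

```python
import math

def modulus(f, xs, delta):
    # empirical sup of |f(x) - f(x + delta)| over the grid points xs
    return max(abs(f(x) - f(x + delta)) for x in xs)

delta = 1e-4
xs = [i * 1e-3 for i in range(1000)]                    # grid in [0, 1)
m_sqrt = modulus(math.sqrt, xs, delta)                  # ~ sqrt(delta), worst near 0

xs_pos = [delta * (1 + i * 1e-3) for i in range(1000)]  # grid creeping toward 0
m_inv = modulus(lambda x: 1.0 / x, xs_pos, delta)       # blows up near 0

assert m_sqrt < 0.05    # uniformly continuous: small delta forces small oscillation
assert m_inv > 100.0    # 1/x on (0,1]: no single delta works for every point
print(m_sqrt, m_inv)
```

The point of the compactness argument above is exactly this: on $[0,1]$ one $\delta$ works everywhere, while on $(0,1]$ the required $\delta$ shrinks without bound as you approach $0$.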
While studying a certain Diophantine equation in the integer $k \ge 2$, I believe I have proven the necessary restriction $$2^{k-1} \equiv 1\!\!\pmod{k^2}. \qquad(\star)$$ Based on what I read about Wieferich primes on Wikipedia (http://en.wikipedia.org/wiki/Wieferich_prime), if $k$ is a prime, it must be a Wieferich prime. So far, so good. However, I haven’t found anything — on Wikipedia or elsewhere — that proves there are no composite solutions to the congruence ($\star$). Is that statement true? If so, what’s an easy proof? If not, what's an easy disproof? Many thanks, Kieren. EDIT: In case it helps with the proof/disproof, $k$ is squarefree. EDIT: This question has been cross-posted to MO (https://mathoverflow.net/questions/142526/are-wieferich-primes-the-only-solutions-to-the-equation-2k-1-equiv-1-pmo), once I realized the difficulty level of the question I was asking.
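For what it's worth, a brute-force search for solutions of $(\star)$ is cheap with fast modular exponentiation. The sketch below finds exactly the two known Wieferich primes below $10^4$ and no composite $k$. (That composites cannot appear in this range also follows from a short order argument: for any prime $p \mid k$ with $p$ non-Wieferich, $\operatorname{ord}_{p^2}(2) = p\cdot\operatorname{ord}_p(2)$ divides $k-1$, forcing $p \mid k-1$, contradicting $p \mid k$; so every prime factor of a solution is Wieferich, and $1093^2 > 10^4$.)

```python
def is_solution(k):
    # check the congruence 2^(k-1) == 1 (mod k^2)
    return pow(2, k - 1, k * k) == 1

sols = [k for k in range(2, 10000) if is_solution(k)]
print(sols)  # -> [1093, 3511], the two known Wieferich primes

assert sols == [1093, 3511]
```

Of course this is only evidence below a finite bound, not an answer to whether composite solutions exist in general.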
The Hong–Ou–Mandel effect can be described like this: Two photons enter a particular type of beamsplitter, one from each side. If the two photons are "identical" in the sense defined below, then the probability of detecting one photon on each side of the output is zero. Both photons will always be detected on the same side of the output, though we cannot predict which side. The notation used in the OP is partly unclear to me, but one thing is clear: In a state with two photons, shifting one photon's phase by a constant $\theta$ cannot affect anything. It merely multiplies the two-photon state-vector by an overall factor of $\exp(i\theta)$. Physical predictions depend only on the ray (the one-dimensional Hilbert space spanned by the given state-vector), not on the individual state-vector, so multiplying the state-vector by $e^{i\theta}$ has no physical effect. (This is in contrast to the effect of a relative phase between two terms in a superposition.) This is restated below in mathematical notation. The OP considered only plane waves, but the following calculation allows the two incoming photons to have arbitrary longitudinal profiles, described by complex-valued functions $f$ and $g$, respectively. The HOM effect occurs whenever $f\propto g$, which includes the case $f=e^{i\theta}g$ considered in the OP. I'll use the creation/annihilation operator formalism, because this makes the calculation easy and makes the reason for the result very clear. Calculation Let $a^\dagger(k)$ denote the operator that, when applied to any state-vector, adds a photon traveling diagonally downward with wavenumber $k$. Let $b^\dagger(k)$ denote the operator that, when applied to any state-vector, adds a photon traveling diagonally upward with wavenumber $k$. Let $|0\rangle$ denote the state with no photons (vacuum state). This state is annihilated by $a(k)$ and $b(k)$, the adjoints of $a^\dagger(k)$ and $b^\dagger(k)$.
The definitions of $a^\dagger$ and $b^\dagger$ are illustrated here: The letter ($a$ or $b$) indicates the photon's direction, and the argument $k$ indicates its wavenumber along that direction. For any complex-valued function $f$, use the abbreviation$$ a^\dagger(f) := \int dk\ f(k)a^\dagger(k)\tag{1}$$and similarly for $b^\dagger(f)$. The commutation relations are\begin{gather} [a(f),a(g)] = [b(f),b(g)] = 0\\ [a(f),b(g)] =[a(f),b^\dagger(g)] = 0\tag{2}\end{gather}and$$ [a(f),a^\dagger(g)] = [b(f),b^\dagger(g)] = \int dk\ f^*(k) g(k).\tag{3}$$The effect of an ideal beamsplitter can be modeled using the transformation\begin{align} a^\dagger(f) &\to \big(a^\dagger(f)+ib^\dagger(f)\big)/\sqrt{2}\\ b^\dagger(f) &\to \big(b^\dagger(f)+ia^\dagger(f)\big)/\sqrt{2}\tag{4}\\ |0\rangle &\to |0\rangle.\end{align}This transformation has these properties: It is unitary. It is symmetric in $a$ and $b$. In other words, the beamsplitter has up/down symmetry. It splits the photon's direction but otherwise preserves the wavenumber $k$. The factors of $i$ are necessary in order for the transformation to be both unitary and symmetric in $a$ and $b$. That's important, because the HOM effect relies on these factors of $i$. (The WP article linked in the OP uses a different unitary transformation that doesn't have factors of $i$ and is not manifestly symmetric in $a$ and $b$. That transformation can be converted into this one by absorbing a factor of $i$ into either $a$ or $b$, which doesn't affect the commutation relations. And as ZeroTheHero indicated in a comment, (4) is not the most general form for a 50:50 beamsplitter, but it is sufficient for the present purposes.) To derive the HOM effect, suppose that the state prior to the beamsplitter is$$ a^\dagger(f) b^\dagger(g)|0\rangle.\tag{5}$$This is a two-photon state, with one photon traveling diagonally downward and one traveling diagonally upward.
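On a single wavenumber mode, the transformation (4) acts on the pair $(a^\dagger, b^\dagger)$ through the matrix $\frac{1}{\sqrt 2}\begin{pmatrix}1 & i\\ i & 1\end{pmatrix}$, and the claimed unitarity plus $a\leftrightarrow b$ symmetry is a two-line check (a sketch):

```python
s = 1 / 2**0.5
U = [[s, s * 1j], [s * 1j, s]]   # 50:50 beamsplitter matrix read off from (4)

# U^dagger U should be the identity (unitarity).
UdU = [[sum(U[k][i].conjugate() * U[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
for i in range(2):
    for j in range(2):
        expected = 1.0 if i == j else 0.0
        assert abs(UdU[i][j] - expected) < 1e-12

assert U[0][1] == U[1][0]  # symmetric in a and b, as claimed
print("beamsplitter transformation is unitary and symmetric")
```

Dropping the factors of $i$ (i.e. using $\frac{1}{\sqrt2}\begin{pmatrix}1&1\\1&1\end{pmatrix}$) would break unitarity, which is why they are not optional here.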
Since this is the initial state, the photons are understood to be approaching the beamsplitter. (We could make this explicit by using the machinery of quantum field theory — local field operators — to relate the wavenumber domain to the position domain.) According to the transformation defined above, the effect of the beamsplitter is\begin{align} a^\dagger(f) b^\dagger(g)|0\rangle &\to \frac{1}{2} \big(a^\dagger(f)+ib^\dagger(f)\big) \big(b^\dagger(g)+ia^\dagger(g)\big)|0\rangle\\ &= \frac{1}{2} \big( ia^\dagger(f)a^\dagger(g) +ib^\dagger(f)b^\dagger(g)\\ &\phantom{\frac{1}{2}\big(} +a^\dagger(f)b^\dagger(g) - a^\dagger(g)b^\dagger(f) \big)|0\rangle.\tag{6}\end{align}This final state has four terms: The $aa$ and $bb$ terms, in which both photons exit the beamsplitter in the same direction, The $ab$ and $ba$ terms, in which the two photons exit the beamsplitter in different directions. Conclusion In the special case $f\propto g$, a glance at the result (6) shows that the $ab$ and $ba$ terms cancel each other. This is the HOM effect. The relative minus sign between these two terms comes from $i^2=-1$. The linked WP article words it this way: The Hong–Ou–Mandel effect ... occurs when two identical single-photon waves enter a 1:1 beam splitter, one in each input port. When the photons are identical, they will extinguish each other. If they become more distinguishable, the probability of detection will increase. That wording is unclear. Here's the decoder ring: "Identical" means $f\propto g$. "Distinguishable" means that $f$ and $g$ are not proportional to each other. This wording apparently alludes to the single-photon states $a^\dagger(f)|0\rangle$ and $a^\dagger(g)|0\rangle$, which are physically distinguishable if and only if $f$ and $g$ are not proportional to each other. The "probability of detection" apparently means the probability of detecting photons at both output ports, which cannot happen when $f\propto g$. (That's what the preceding calculation showed.) 
This wording is probably related to this statement from https://arxiv.org/abs/1711.00080: "In HOM interference, we are often interested in the coincidence probability, that is, the probability of detecting one photon in each output port of the beam splitter." The OP considers the case $f=e^{i\theta}g\propto g$. The preceding calculation shows that the $ab$ and $ba$ terms cancel each other whenever $f\propto g$. This agrees with the conclusion of the OP's calculation. What does "coherent" mean? The most important point is that the result cannot depend on $\theta$, and this is already evident from the initial state (5), without calculating anything at all. The replacement $g\to e^{i\theta}g$ can't have any effect, because it merely multiplies the initial state-vector by an overall factor:$$ a^\dagger(f) b^\dagger(g)|0\rangle \to e^{i\theta}a^\dagger(f) b^\dagger(g)|0\rangle.\tag{7}$$Physical predictions depend only on the ray (the one-dimensional Hilbert space spanned by the given state-vector), not on the individual state-vector. Contrast this to the single-photon state with a $\theta$-dependent relative phase:$$ a^\dagger(f) |0\rangle + e^{i\theta} b^\dagger(g)|0\rangle.\tag{8}$$In this case, the value of $\theta$ does matter. In the OP, the phase shift $g\to e^{i\theta}g$ is described as making the two photons "not [mutually] coherent." Not sure why such a word would be used in the context of (7), but the word does make sense in the context of a state like$$ \exp\big(b^\dagger(g)\big)|0\rangle =\sum_{n\geq 0} \frac{\big(b^\dagger(g)\big)^n}{n!}|0\rangle,\tag{9}$$which is often used as a model of the light emitted by a laser. This so-called "coherent state" is affected by the phase shift $g\to e^{i\theta} g$. If we consider a situation in which the input to the beamsplitter consists of two of these "laser beams," then the wording used in the OP would make more sense. 
Explicitly, the input state in that case would be$$ \exp\big(a^\dagger(f)\big) \exp\big(e^{i\theta}b^\dagger(g)\big)|0\rangle,\tag{10}$$which is equivalent to$$ \exp\big(a^\dagger(f)+e^{i\theta}b^\dagger(g)\big)|0\rangle.\tag{11}$$In contrast to the two-photon state (7), the phase $\theta$ does affect the physical significance of the state (10)-(11), as it does in (8).
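The coincidence part of the calculation can be packaged into one number: by the commutation relations (3), the norm of the antisymmetric $ab$-part of (6) gives a coincidence probability $P = \tfrac12\left(1 - |\langle f|g\rangle|^2\right)$ for normalized profiles. The sketch below checks this on discretized profiles, including the OP's case $f = e^{i\theta} g$ (the discretization and profile shapes are my own arbitrary choices):

```python
import cmath, math

def normalize(v):
    n = math.sqrt(sum(abs(x)**2 for x in v))
    return [x / n for x in v]

def inner(f, g):
    # discretized version of the overlap integral in (3)
    return sum(a.conjugate() * b for a, b in zip(f, g))

def coincidence_probability(f, g):
    # P(one photon at each output) = (1 - |<f|g>|^2) / 2 for normalized f, g
    return 0.5 * (1.0 - abs(inner(f, g))**2)

# Discretized wavenumber profiles (arbitrary Gaussian-like shapes).
ks = [0.1 * i for i in range(-50, 51)]
g = normalize([math.exp(-k**2) for k in ks])
theta = 0.7
f_same = [cmath.exp(1j * theta) * x for x in g]        # f = e^{i theta} g
f_orth = normalize([k * math.exp(-k**2) for k in ks])  # odd profile: <f|g> = 0

assert abs(coincidence_probability(f_same, g)) < 1e-12        # HOM dip: no coincidences
assert abs(coincidence_probability(f_orth, g) - 0.5) < 1e-12  # fully distinguishable
print("identical photons never coincide; orthogonal ones coincide half the time")
```

As the calculation above showed, the phase $\theta$ drops out entirely; only the overlap $|\langle f|g\rangle|$ matters.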
CryptoDB Dongvu Tonien Publications Year Venue Title 2006 EPRINT An Efficient Single-Key Pirates Tracing Scheme Using Cover-Free Families A cover-free family is a well-studied combinatorial structure that has many applications in computer science and cryptography. In this paper, we propose a new public key traitor tracing scheme based on cover-free families. The new traitor tracing scheme is similar to the Boneh-Franklin scheme except that in the Boneh-Franklin scheme, decryption keys are derived from Reed-Solomon codes, while in our case they are derived from a cover-free family. This results in much simpler and faster tracing algorithms for single-key pirate decoders, compared to the tracing algorithms of the Boneh-Franklin scheme, which use the Berlekamp-Welch algorithm. Our tracing algorithms never accuse innocent users and identify all traitors with overwhelming probability. 2005 EPRINT Recursive Constructions of Secure Codes and Hash Families Using Difference Function Families To protect copyrighted digital data against piracy, codes with different security properties, such as frameproof codes, secure frameproof codes, codes with identifiable parent property (IPP codes), and traceability codes (TA codes), have been introduced. In this paper, we study these codes together with related combinatorial objects called separating and perfect hash families. We introduce for the first time the notion of difference function families and use these difference function families to give generalized recursive techniques that can be used for any kind of secure codes and hash families. We show that some previous recursive techniques are special cases of these new techniques. 2005 EPRINT Fuzzy Universal Hashing and Approximate Authentication Traditional data authentication systems are sensitive to single-bit changes and so are unsuitable for message spaces that are naturally fuzzy, where similar messages are considered the same or at least indistinguishable.
In this paper, we study unconditionally secure approximate authentication. We generalize traditional universal hashing to fuzzy universal hashing and use it to construct secure approximate authentication for multiple messages. 2005 EPRINT Explicit Construction of Secure Frameproof Codes $\Gamma$ is a $q$-ary code of length $L$. A word $w$ is called a descendant of a coalition of codewords $w^{(1)}, w^{(2)}, \dots, w^{(t)}$ of $\Gamma$ if at each position $i$, $1 \leq i \leq L$, $w$ inherits a symbol from one of its parents, that is $w_i \in \{ w^{(1)}_i, w^{(2)}_i, \dots, w^{(t)}_i \}$. A $k$-secure frameproof code ($k$-SFPC) ensures that any two disjoint coalitions of size at most $k$ have no common descendant. Several probabilistic methods prove the existence of such codes, but there are not many explicit constructions. Indeed, it is an open problem in [J. Staddon et al., IEEE Trans. on Information Theory, 47 (2001), pp. 1042--1049] to construct explicitly $q$-ary 2-secure frameproof codes for arbitrary $q$. In this paper, we present several explicit constructions of $q$-ary 2-SFPCs. These constructions are generalisations of the binary inner code of the secure code in [V.D. To et al., Proceeding of IndoCrypt'02, LNCS 2551, pp. 149--162, 2002]. The length of our new code is logarithmically small compared to its size. 2005 EPRINT On a Traitor Tracing Scheme from ACISP 2003 At the ACISP 2003 conference, Narayanan, Rangan and Kim proposed a secret-key traitor tracing scheme for use in a pay-TV system. In this note, we point out a flaw in their scheme.
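The descendant condition above is easy to test by brute force on toy codes: two disjoint coalitions share a descendant iff their symbol sets intersect at every position. A sketch (the toy codes are my own examples, not the paper's constructions):

```python
from itertools import combinations

def have_common_descendant(c1, c2):
    # A common descendant exists iff at every position the symbol sets intersect.
    length = len(c1[0])
    return all({w[i] for w in c1} & {w[i] for w in c2} for i in range(length))

def is_k_sfpc(code, k):
    # k-SFPC: no two disjoint coalitions of size <= k share a descendant.
    for s1 in range(1, k + 1):
        for s2 in range(1, k + 1):
            for c1 in combinations(code, s1):
                for c2 in combinations(code, s2):
                    if set(c1).isdisjoint(c2) and have_common_descendant(c1, c2):
                        return False
    return True

# Ternary "identity" code: any two disjoint coalitions differ everywhere.
assert is_k_sfpc(["00", "11", "22"], 2)
# All binary words of length 2: {00,11} and {01,10} share the descendant 01.
assert not is_k_sfpc(["00", "01", "10", "11"], 2)
print("toy 2-SFPC checks pass")
```

This exponential check is only for tiny examples, of course; the whole point of the paper is to get explicit codes whose length grows only logarithmically in the code size.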
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
If $\alpha$ and $\beta$ are the roots of the equation $2x^2-5x-1=0$, form equations whose roots are (a) $\frac{\alpha}{\beta}$, $\frac{\beta}{\alpha}$ (b) $\alpha^2\beta$, $\alpha\beta^2$. Note by Victor Loh, 5 years, 2 months ago. Sort by: Before attempting to solve these, it is better to find out what $\alpha+\beta$ and $\alpha\beta$ are, as we can easily see that we can use Vieta's formulas to solve these. $\alpha+\beta=2.5$, $\alpha\beta=-0.5$. Solution (a): The equation should be in the form $x^2-\left(\frac{\alpha}{\beta}+\frac{\beta}{\alpha}\right)x+1$. We can rewrite $\frac{\alpha}{\beta}+\frac{\beta}{\alpha}$ as $\frac{\alpha^2+\beta^2}{\alpha\beta}$. $\alpha^2+\beta^2$ is not hard to find, as it is $(\alpha+\beta)^2-2\alpha\beta=7.25$.
Then we can easily get $\frac{\alpha^2+\beta^2}{\alpha\beta}=\frac{7.25}{-0.5}=-14.5$. Therefore, $(x-\frac{\alpha}{\beta})(x-\frac{\beta}{\alpha})=\boxed{x^2+14.5x+1}$. Solution (b): The equation should be in the form $x^2-(\alpha^2\beta+\alpha\beta^2)x+\alpha^3\beta^3$. To find $\alpha^2\beta+\alpha\beta^2$, we can factorize it as $\alpha\beta(\alpha+\beta)$. It's a lot easier now, and we get $\alpha^2\beta+\alpha\beta^2=-1.25$. Now we need to find $\alpha^3\beta^3$, which is not so difficult, as we can write it as $(\alpha\beta)^3=-0.125$. Therefore, $(x-\alpha^2\beta)(x-\alpha\beta^2)=\boxed{x^2+1.25x-0.125}$. Edited: The equation for (a) can also be written as $2x^2+29x+2$ and the equation for (b) as $8x^2+10x-1$, as that looks better. Great :D Thanks; you could rewrite the solution for (a) as $2x^2+29x+2$ and that of (b) as $8x^2+10x-1$; integers look better in equations than fractions do. Oh, that's right, I will edit it. Thanks ;D Nice solution! By the way, here's a challenge for you: can you generalise this to $f(x)=ax^2+bx+c$? :) Like what? $f(x)=2x^2+29x+2$ and $f(x)=8x^2+10x-1$?
@Daniel Lim – Nope: if the coefficients of a certain quadratic equation are $a,b,c$ and its roots are $\alpha, \beta$, then construct a new equation in terms of $a,b,c$ whose roots are i) $\frac{\alpha}{\beta}, \frac{\beta}{\alpha}$ ii) $\alpha^2\beta, \alpha\beta^2$. @敬全 钟 – Ok, I'll try. For (a), $f(x)=x^2-\left(\frac{\left(\frac{-b}{a}\right)^2-\frac{2c}{a}}{\frac{c}{a}}\right)x+1$. For (b), $f(x)=x^2- \left(\frac{c}{a}\times\frac{-b}{a}\right)x+\left(\frac{c}{a}\right)^3$.
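As a quick numerical sanity check of both transformed equations — a Python sketch, with `numpy` assumed available:

```python
import numpy as np

# Roots of the original equation 2x^2 - 5x - 1 = 0
alpha, beta = np.roots([2, -5, -1])

# (a) alpha/beta and beta/alpha should be roots of 2x^2 + 29x + 2 = 0
for r in (alpha / beta, beta / alpha):
    assert abs(np.polyval([2, 29, 2], r)) < 1e-9

# (b) alpha^2*beta and alpha*beta^2 should be roots of 8x^2 + 10x - 1 = 0
for r in (alpha**2 * beta, alpha * beta**2):
    assert abs(np.polyval([8, 10, -1], r)) < 1e-9
```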
Assume that there is a polytime algorithm that, given $C(\vec{x}) \in F(\vec{x})$ and $\vec{a}$, computes the result of the multi-linearization of $C$ on $\vec{a}$. (W.l.o.g. I will assume that the output $\vec{b}$ is a vector of $p$-bit binary numbers: $b_i$ is $k$ iff $b_{i,k}$ is one.) Since $P \subseteq P/poly$, there is a polysize boolean circuit that, given the encoding of the arithmetic circuit and the values for the variables, computes the multi-linearization of the arithmetic circuit on the inputs. Call this circuit $M$. Let $C$ be an arbitrary arithmetic circuit. Fix the variables of the boolean circuit $M$ which describe the arithmetic circuit, so we have a boolean circuit computing the multi-linearization of $C$ on given inputs. We can turn this circuit into an arithmetic circuit over $F_p$ by noting that $x^{p-1}$ is $1$ for all values but $0$, so first raise all inputs to the power $p-1$. Replace each $f \land g$ gate by the multiplication $f \cdot g$, each $f \lor g$ gate by $f+g-f \cdot g$, and each $\lnot f$ gate by $1-f$. By the assumption we made above about the format of the output, we can turn the output from binary to values over $F_p$: take the output bits for $b_i$ and combine them to get $\sum_{0 \leq k \leq p-1}{k b_{i,k}}$. We can also convert the input, given as values over $F_p$, to binary form, since there are polynomials passing through any finite number of points. E.g., if we are working in $\bmod 3$, consider the polynomials $2x(x+1)$ and $2x(x+2)$, which give the first and the second bits of the input $x \in F_3$. Combining these, we have an arithmetic circuit over $F_p$ computing the multi-linearization of $C$ with size polynomial in the size of $C$.
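The two conversions used above — the boolean-gate replacements and the mod-3 bit-extraction polynomials — can be spot-checked directly. An illustrative Python sketch (which bit each polynomial extracts is inferred by evaluating them on $x \in \{0,1,2\}$):

```python
# Boolean gates arithmetized over F_3 (the same identities hold over any
# F_p, as long as the inputs are 0/1):
AND = lambda f, g: (f * g) % 3
OR  = lambda f, g: (f + g - f * g) % 3
NOT = lambda f: (1 - f) % 3

for f in (0, 1):
    for g in (0, 1):
        assert AND(f, g) == (f & g)
        assert OR(f, g) == (f | g)
    assert NOT(f) == 1 - f

# Bit extraction over F_3: 2x(x+1) gives the low-order bit of x and
# 2x(x+2) the high-order bit, for x in {0, 1, 2}.
for x in range(3):
    low  = (2 * x * (x + 1)) % 3
    high = (2 * x * (x + 2)) % 3
    assert 2 * high + low == x
```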
I am trying to figure out the scattering wave function for the following potential: $$V(x,x')=-A \phi(x)\phi^*(x')$$ such that the SE can be written as $$[\frac{\hbar^2\partial^2_x}{2m}-E]\psi = A\phi(x)\int dx'\phi^*(x')\psi(x')$$ This has a solution $$\psi(x)=\alpha e^{ikx}+\beta e^{-ikx}+\lambda[\int dx' K(x,x';E)\phi(x')\int dx''\phi(x'')\psi(x'')]$$ where $K$ is the propagator as defined in Sakurai: $$K(x,x';E)=\frac{2m}{\hbar\sqrt{2mE}}e^{i|x-x'|\sqrt{2mE}/\hbar}$$ Back to the part of the question I am lost with: based on this information, how can I find a $\psi$ that satisfies the boundary conditions $$\psi(x\rightarrow-\infty)=e^{ikx}+re^{-ikx}$$ $$\psi(x\rightarrow\infty)=te^{ikx}$$ I am not completely sure how to solve this. Supposedly, it can be assumed that $\phi$ goes to 0 as $x$ goes to $\infty$, which immediately implies the boundary conditions, but it is not clear to me why that happens.
Thank you Kasper. I've built the GitHub version 2.2.7, and I see that you've improved my rusty version of the notebooks. Great work! I'd like to mention that I tried the following code:

{M,N,P,Q,J,K,L}::Indices(full, position=independent).
{\mu,\nu,\rho,\sigma,\gamma,\lambda}::Indices(sub, position=independent, parent=full).
e^{M}_{\mu}::Vielbein;
E^{\mu}_{M}::InverseVielbein;
\delta^{\mu?}_{\nu?}::KroneckerDelta;
\delta_{\mu?}^{\nu?}::KroneckerDelta;
ex := e^{M}_{\mu} E^{\nu}_{M};
eliminate_vielbein(ex);

The result is E^{\nu}_{\mu}, which is correct of course, but I expected that, after defining the InverseVielbein, the result would be a KroneckerDelta. Question: do you think it is possible to change that behaviour? I know that my expectation may not make a lot of sense from the coding point of view, since it's possible that the user has not defined the KroneckerDelta, or that the delta has to be defined in both spaces, and so on. BTW, bonus question: instead of defining several KroneckerDelta properties, would it be possible to define a single \delta{#}::KroneckerDelta; that works on whatever index type and position?
Learning Objectives State the forces that act on a simple pendulum Determine the angular frequency, frequency, and period of a simple pendulum in terms of the length of the pendulum and the acceleration due to gravity Define the period for a physical pendulum Define the period for a torsional pendulum Pendulums are in common usage. Grandfather clocks use a pendulum to keep time and a pendulum can be used to measure the acceleration due to gravity. For small displacements, a pendulum is a simple harmonic oscillator. The Simple Pendulum A simple pendulum is defined to have a point mass, also known as the pendulum bob, which is suspended from a string of length L with negligible mass (Figure 15.20). Here, the only forces acting on the bob are the force of gravity (i.e., the weight of the bob) and tension from the string. The mass of the string is assumed to be negligible as compared to the mass of the bob. Consider the torque on the pendulum. The force providing the restoring torque is the component of the weight of the pendulum bob that acts along the arc length. The torque is the length of the string L times the component of the net force that is perpendicular to the radius of the arc. The minus sign indicates the torque acts in the opposite direction of the angular displacement: $$\begin{split} \tau & = -L (mg \sin \theta); \\ I \alpha & = -L (mg \sin \theta); \\ I \frac{d^{2} \theta}{dt^{2}} & = -L (mg \sin \theta); \\ mL^{2} \frac{d^{2} \theta}{dt^{2}} & = -L (mg \sin \theta); \\ \frac{d^{2} \theta}{dt^{2}} & = - \frac{g}{L} \sin \theta \ldotp \end{split}$$ The solution to this differential equation involves advanced calculus, and is beyond the scope of this text. But note that for small angles (less than 15°), sin \(\theta\) and \(\theta\) differ by less than 1%, so we can use the small angle approximation sin \(\theta\) ≈ \(\theta\). The angle \(\theta\) describes the position of the pendulum. 
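As a quick numerical look at the small-angle approximation sin \(\theta\) ≈ \(\theta\) invoked above — a Python sketch, with the 10° test angle chosen for illustration:

```python
import math

# Relative error of the approximation sin(theta) ≈ theta at 10 degrees
theta = math.radians(10)
rel_error = (theta - math.sin(theta)) / math.sin(theta)
print(f"relative error at 10 degrees: {rel_error:.4%}")  # about 0.51%
assert rel_error < 0.01  # comfortably under 1% at this angle
```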
Using the small angle approximation gives an approximate solution for small angles, $$\frac{d^{2} \theta}{dt^{2}} = - \frac{g}{L} \theta \ldotp \label{15.17}$$ Because this equation has the same form as the equation for SHM, the solution is easy to find. The angular frequency is $$\omega = \sqrt{\frac{g}{L}} \label{15.18}$$ and the period is $$T = 2 \pi \sqrt{\frac{L}{g}} \ldotp \label{15.19}$$ The period of a simple pendulum depends on its length and the acceleration due to gravity. The period is completely independent of other factors, such as mass and the maximum displacement. As with simple harmonic oscillators, the period T for a pendulum is nearly independent of amplitude, especially if \(\theta\) is less than about 15°. Even simple pendulum clocks can be finely adjusted and remain accurate. Note the dependence of T on g. If the length of a pendulum is precisely known, it can actually be used to measure the acceleration due to gravity, as in the following example. Example \(\PageIndex{1}\): Measuring Acceleration due to Gravity by the Period of a Pendulum What is the acceleration due to gravity in a region where a simple pendulum having a length 75.000 cm has a period of 1.7357 s? Strategy We are asked to find g given the period T and the length L of a pendulum. We can solve T = 2\(\pi \sqrt{\frac{L}{g}}\) for g, assuming only that the angle of deflection is less than 15°. Solution Square T = 2\(\pi \sqrt{\frac{L}{g}}\) and solve for g: $$g = 4 \pi^{2} \frac{L}{T^{2}} \ldotp$$ Substitute known values into the new equation: $$g = 4 \pi^{2} \frac{0.75000\; m}{(1.7357\; s)^{2}} \ldotp$$ Calculate to find g: $$g = 9.8281\; m/s^{2} \ldotp$$ Significance This method for determining g can be very accurate, which is why length and period are given to five digits in this example. For the precision of the approximation sin \(\theta\) ≈ \(\theta\) to be better than the precision of the pendulum length and period, the maximum displacement angle should be kept below about 0.5°.
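The numbers in this example are easy to reproduce — a Python sketch:

```python
import math

# g = 4*pi^2 * L / T^2, with L = 0.75000 m and T = 1.7357 s
L = 0.75000   # pendulum length in meters
T = 1.7357    # measured period in seconds
g = 4 * math.pi**2 * L / T**2
assert abs(g - 9.8281) < 0.001  # matches the value in the example
```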
Exercise \(\PageIndex{1}\) An engineer builds two simple pendulums. Both are suspended from small wires secured to the ceiling of a room. Each pendulum hovers 2 cm above the floor. Pendulum 1 has a bob with a mass of 10 kg. Pendulum 2 has a bob with a mass of 100 kg. Describe how the motion of the pendulums will differ if the bobs are both displaced by 12°. Physical Pendulum Any object can oscillate like a pendulum. Consider a coffee mug hanging on a hook in the pantry. If the mug gets knocked, it oscillates back and forth like a pendulum until the oscillations die out. We have described a simple pendulum as a point mass and a string. A physical pendulum is any object whose oscillations are similar to those of the simple pendulum, but cannot be modeled as a point mass on a string, and the mass distribution must be included into the equation of motion. As for the simple pendulum, the restoring force of the physical pendulum is the force of gravity. With the simple pendulum, the force of gravity acts on the center of the pendulum bob. In the case of the physical pendulum, the force of gravity acts on the center of mass (CM) of an object. The object oscillates about a point O. Consider an object of a generic shape as shown in Figure 15.21. When a physical pendulum is hanging from a point but is free to rotate, it rotates because of the torque applied at the CM, produced by the component of the object’s weight that acts tangent to the motion of the CM. Taking the counterclockwise direction to be positive, the component of the gravitational force that acts tangent to the motion is −mg sin \(\theta\). The minus sign is the result of the restoring force acting in the opposite direction of the increasing angle. Recall that the torque is equal to \(\vec{\tau} = \vec{r} \times \vec{F}\). The magnitude of the torque is equal to the length of the radius arm times the tangential component of the force applied, |\(\tau\)| = rFsin\(\theta\). 
Here, the length L of the radius arm is the distance between the point of rotation and the CM. To analyze the motion, start with the net torque. Like the simple pendulum, consider only small angles so that sin \(\theta\) ≈ \(\theta\). Recall from Fixed-Axis Rotation that the net torque is equal to the moment of inertia I = \(\int r^{2} dm\) times the angular acceleration \(\alpha\), where \(\alpha = \frac{d^{2} \theta}{dt^{2}}\): $$I \alpha = \tau_{net} = L (-mg) \sin \theta \ldotp$$ Using the small angle approximation and rearranging: $$\begin{split} I \alpha & = -L (mg) \theta; \\ I \frac{d^{2} \theta}{dt^{2}} & = -L (mg) \theta; \\ \frac{d^{2} \theta}{dt^{2}} & = - \left(\dfrac{mgL}{I}\right) \theta \ldotp \end{split}$$ Once again, the equation says that the second time derivative of the position (in this case, the angle) equals minus a constant \(\left(− \dfrac{mgL}{I}\right)\) times the position. The solution is $$\theta (t) = \Theta \cos (\omega t + \phi),$$ where \(\Theta\) is the maximum angular displacement. The angular frequency is $$\omega = \sqrt{\frac{mgL}{I}} \ldotp \label{15.20}$$ The period is therefore $$T = 2 \pi \sqrt{\frac{I}{mgL}} \ldotp \label{15.21}$$ Note that for a simple pendulum, the moment of inertia is I = \(\int r^{2} dm\) = mL\(^{2}\) and the period reduces to T = 2\(\pi \sqrt{\frac{L}{g}}\). Example \(\PageIndex{2}\): Reducing the Swaying of a Skyscraper In extreme conditions, skyscrapers can sway up to two meters with a frequency of up to 20.00 Hz due to high winds or seismic activity. Several companies have developed physical pendulums that are placed on the top of the skyscrapers. As the skyscraper sways to the right, the pendulum swings to the left, reducing the sway. Assuming the oscillations have a frequency of 0.50 Hz, design a pendulum that consists of a long beam, of constant density, with a mass of 100 metric tons and a pivot point at one end of the beam. What should be the length of the beam?
Strategy We are asked to find the length of the physical pendulum with a known mass. We first need to find the moment of inertia of the beam. We can then use the equation for the period of a physical pendulum to find the length. Solution Find the moment of inertia for the CM. Use the parallel axis theorem to find the moment of inertia about the point of rotation: $$I = I_{CM} + \frac{L^{2}}{4} M = \frac{1}{12} ML^{2} + \frac{1}{4} ML^{2} = \frac{1}{3} ML^{2} \ldotp$$ The period of a physical pendulum is T = 2\(\pi \sqrt{\frac{I}{mgL}}\). Use the moment of inertia to solve for the length L: $$\begin{split} T & = 2 \pi \sqrt{\frac{I}{mgL}} = 2 \pi \sqrt{\frac{\frac{1}{3} ML^{2}}{MgL}} = 2 \pi \sqrt{\frac{L}{3g}}; \\ L & = 3g \left(\dfrac{T}{2 \pi}\right)^{2} = 3 (9.8\; m/s^{2}) \left(\dfrac{2\; s}{2 \pi}\right)^{2} = 2.98\; m \ldotp \end{split}$$ Significance There are many ways to reduce the oscillations, including modifying the shape of the skyscrapers, using multiple physical pendulums, and using tuned-mass dampers. Torsional Pendulum A torsional pendulum consists of a rigid body suspended by a light wire or spring (Figure 15.22). When the body is twisted some small maximum angle (\(\Theta\)) and released from rest, the body oscillates between (\(\theta\) = + \(\Theta\)) and (\(\theta\) = − \(\Theta\)). The restoring torque is supplied by the shearing of the string or wire. The restoring torque can be modeled as being proportional to the angle: $$\tau = - \kappa \theta \ldotp$$ The variable kappa (\(\kappa\)) is known as the torsion constant of the wire or string. The minus sign shows that the restoring torque acts in the opposite direction to increasing angular displacement.
The net torque is equal to the moment of inertia times the angular acceleration: $$\begin{split} I \frac{d^{2} \theta}{dt^{2}} & = - \kappa \theta; \\ \frac{d^{2} \theta}{dt^{2}} & = - \frac{\kappa}{I} \theta \ldotp \end{split}$$ This equation says that the second time derivative of the position (in this case, the angle) equals a negative constant times the position. This looks very similar to the equation of motion for SHM, \(\frac{d^{2} x}{dt^{2}}\) = − \(\frac{k}{m}\)x, where the period was found to be T = 2\(\pi \sqrt{\frac{m}{k}}\). Therefore, the period of the torsional pendulum can be found using $$T = 2 \pi \sqrt{\frac{I}{\kappa}} \ldotp \label{15.22}$$ The units for the torsion constant are [\(\kappa\)] = N • m = (kg • m/s\(^{2}\))m = kg • m\(^{2}\)/s\(^{2}\), and the units for the moment of inertia are [I] = kg • m\(^{2}\), which show that the unit for the period is the second. Example \(\PageIndex{3}\): Measuring the Torsion Constant of a String A rod has a length of l = 0.30 m and a mass of 4.00 kg. A string is attached to the CM of the rod and the system is hung from the ceiling (Figure 15.23). The rod is displaced 10° from the equilibrium position and released from rest. The rod oscillates with a period of 0.5 s. What is the torsion constant \(\kappa\)? Strategy We are asked to find the torsion constant of the string. We first need to find the moment of inertia.
Solution Find the moment of inertia for the CM: $$I_{CM} = \int x^{2} dm = \int_{- \frac{L}{2}}^{+ \frac{L}{2}} x^{2} \lambda dx = \lambda \Bigg[ \frac{x^{3}}{3} \Bigg]_{- \frac{L}{2}}^{+ \frac{L}{2}} = \lambda \frac{2L^{3}}{24} = \left(\dfrac{M}{L}\right) \frac{2L^{3}}{24} = \frac{1}{12} ML^{2} \ldotp$$ Calculate the torsion constant using the equation for the period: $$\begin{split} T & = 2 \pi \sqrt{\frac{I}{\kappa}}; \\ \kappa & = I \left(\dfrac{2 \pi}{T}\right)^{2} = \left(\dfrac{1}{12} ML^{2}\right) \left(\dfrac{2 \pi}{T}\right)^{2}; \\ & = \Big[ \frac{1}{12} (4.00\; kg)(0.30\; m)^{2} \Big] \left(\dfrac{2 \pi}{0.50\; s}\right)^{2} = 4.73\; N\; \cdotp m \ldotp \end{split}$$ Significance Like the force constant of the system of a block and a spring, the larger the torsion constant, the shorter the period. Contributors Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
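The arithmetic in Example 3 can be verified in a few lines — a Python sketch using the values from the problem statement:

```python
import math

M = 4.00   # rod mass in kg
L = 0.30   # rod length in m
T = 0.50   # oscillation period in s

I = M * L**2 / 12                    # I_CM = (1/12) M L^2 for a uniform rod
kappa = I * (2 * math.pi / T) ** 2   # from T = 2*pi*sqrt(I/kappa)
assert abs(kappa - 4.73) < 0.01      # N·m, matching the example
```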
Intuitively, shifting then reflecting is not the same as reflecting then shifting. Consider the case of first shifting 1 unit to the right from 0, then reflecting: you end up at $x=-1$. If you reflect first, it does nothing, and then shifting to the right by $1$ means you end up at $x=1$. The problem is that$$\hat{T}(-a)\hat{R}\psi(x) =\hat{T}(-a)\psi(-x) = \psi(-(x+a)) = \psi(-x-a),$$whereas$$\hat{R}\hat{T}(-a)\psi(x) = \hat{R}\psi(x+a) = \psi((-x)+a) = \psi(-x+a).$$So these aren't the same. However,$$\hat{R}\hat{T}(a)\psi(x) = \hat{R}\psi(x-a) = \psi((-x)-a) = \psi(-x-a),$$which indicates that (since $\psi$ was arbitrary),$$(\hat{R}\hat{T}(a))^{\dagger}=(\hat{T}(a))^{\dagger}\hat{R}=\hat{T}(-a)\hat{R}=\hat{R}\hat{T}(a).$$This shows that it's Hermitian. The thing that we show above can also be used to show that it's unitary, but I'll leave that to you. Here's how I like to understand things, because perhaps it's not clear why some of the above calculations are true. I prefer to work in Dirac notation and consider explicitly the action of these operators on the position basis vectors:$$\hat{R}|x\rangle = \left|-x\right\rangle,$$and$$\hat{T}(a)|x\rangle = |x+a\rangle.$$For the bra-vectors,$$\langle x|\hat{R} = \langle-x|,$$and$$\langle x|\hat{T}(a) = \langle x-a|.$$(That is, $\hat{T}(a)$ acts like $\hat{T}(-a)$ if it acts to the left.) This allows us to compute what happens to the amplitudes by taking inner products. For instance, if we define the transformed vector $|\psi'\rangle=\hat{T}(a)|\psi\rangle$, then$$\psi'(x) = \langle x|\psi'\rangle = \langle x|\hat{T}(a)|\psi\rangle = \langle x-a|\psi\rangle = \psi(x-a),$$which indicates that $\hat{T}(a)\psi(x) = \psi(x-a)$.
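On a discretized periodic grid, both operators become permutation matrices, which makes the Hermiticity and unitarity of $\hat{R}\hat{T}(a)$ easy to check numerically. The following is an illustrative sketch with an assumed convention (position index $j$, reflection $j \mapsto -j \bmod N$, translation $j \mapsto j+k \bmod N$); the grid size and shift are arbitrary:

```python
import numpy as np

N, k = 8, 3  # grid size and shift amount (illustrative values)

# Reflection: R|j> = |-j mod N>.  Translation: T|j> = |j+k mod N>.
R = np.zeros((N, N))
T = np.zeros((N, N))
for j in range(N):
    R[(-j) % N, j] = 1
    T[(j + k) % N, j] = 1

M = R @ T
assert np.allclose(M, M.conj().T)               # Hermitian
assert np.allclose(M @ M.conj().T, np.eye(N))   # unitary
```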
Modular forms and the inverse Galois problem for PSL_2(Z/p^n Z) — Adibhatla, Rajender. Presentation (2013, August 28).

A characterization of ordinary modular eigenforms with CM — Adibhatla, Rajender; Tsaknias, Panagiotis. In Arithmetic and Geometry (2013, July).

The Ramakrishna-Taylor method and modular lifts of mod p^n Galois representations — Adibhatla, Rajender. Presentation (2013, June 21).

Higher companion forms via Galois deformation theory — Adibhatla, Rajender. Presentation (2013, May 13).

Modularity of certain 2-dimensional mod p^n representations of Gal(Qbar/Q) — Adibhatla, Rajender. Presentation (2013, March 07). For an odd rational prime p and integer n>1, we consider certain continuous representations rho_n of G_Q into GL_2(Z/p^nZ) with fixed determinant, whose local restrictions "look" like they arise from modular Galois representations, and whose mod p reductions are odd and irreducible. Under suitable hypotheses on the size of their images, we use deformation theory to lift rho_n to rho in characteristic 0. We then invoke a modularity lifting theorem of Skinner-Wiles to show that rho is modular.

Higher congruence companion forms — Adibhatla, Rajender. In Acta Arithmetica (2012), 156(2), 17. For a rational prime p≥3 we consider p-ordinary, Hilbert modular newforms f of weight k≥2 with associated p-adic Galois representations \rho_f and mod p^n reductions \rho_{f,n}. Under suitable hypotheses on the size of the image, we use deformation theory and modularity lifting to show that if the restrictions of \rho_{f,n} to decomposition groups above p split then f has a companion form g modulo p^n (in the sense that \rho_{f,n} \sim \rho_{g,n}\otimes \chi^{k-1}).

Modularity of certain mod p^n Galois representations — Adibhatla, Rajender. E-print/Working paper (2012). For a rational prime $p \geq 3$ and an integer $n \geq 2$, we study the modularity of continuous $2$-dimensional mod $p^n$ Galois representations of $\Gal(\overline{\Q}/\Q)$ whose residual representations are odd and absolutely irreducible. Under suitable hypotheses on the local structure of these representations and the size of their images we use deformation theory to construct characteristic $0$ lifts. We then invoke modularity lifting results to prove that these lifts are modular. As an application, we show that certain unramified mod $p^n$ Galois representations arise from modular forms of weight $p^{n-1}(p-1)+1$.
Finally posted the paper on AoPS here! It is on a new inequality I discovered while writing a Proofathon problem a while back. Enjoy, and please give feedback! Note by Daniel Liu 4 years, 10 months ago Sort by: That you're amazing might be sort of an understatement! This is a very interesting read. I especially like the examples which you have chosen to illustrate the technique, as they are otherwise non-trivial. The lemma used in your proof is reminiscent of the Rearrangement Inequality. Can you add it to the Reverse Rearrangement Inequality Wiki page? P.S. I think it could have a much better name. What you need to do is find a bunch of "hard" inequalities which this approach simplifies, and someday we can call it Daniel's Lemma. I've definitely seen this approach used in several olympiad problems. The 2-variable case is often overlooked by simply expanding terms and canceling, while the 3-variable case starts to show its usefulness. That's where I would need to have some sort of spark of ingenuity.
Currently, most of the inequalities in the problems section can be solved using Holder's Inequality (the symmetric ones, usually) and I heard that some of the others can be solved using clever applications of AM-GM. But then, most problems can be solved with AM-GM used in some way or the other, so I'm not sure if that qualifies as my problems being bad. Currently, I'm thinking of doing something based on the corollaries, which seem to give the problem maker relative freedom, as we just need to plug in any arbitrary increasing/decreasing function in some domain (which will be specified in the problem) in order to create a problem. However, in most cases the application is pretty obvious. I'm not sure if it is also obvious using other inequalities, though. I also want to use the "any permutation" condition to my advantage, as not every inequality allows that. As an example of an inequality trivial by Reverse Rearrangement: $$\prod_{cyc}\big((y+1)^2+(x+y)(x-y)\big)\ge \big((x+1)(y+1)(z+1)\big)^2$$ I created the Wiki page. I also went along and created the Holder's Inequality page too (for basic Holder's). Thanks. If you add more examples to it, I can add it to the featured Wiki list. I will try and add a section on motivation / explanation too. @Calvin Lin – Well, now there are three examples. Sorry, I was too lazy to make up new problems, so I just took problems from my paper. @Daniel Liu – Thanks. I've made some further edits to it, can you take a look and do a sanity check? Comment: Observe that the base cases of $n=1, 2$ are trivially true for similarly ordered sequences, and we do not require the condition of non-negativity. I believe the condition on non-negative sequences can be relaxed to having $a_3 + b_3$ be positive (where $a_1, b_1$ are the smallest terms in their sequence).
I tried tracking down where the non-negative sequence condition was used in the proof, and the only place that I see it is in the induction hypothesis, where we have to multiply by $(a_{k+1} + b_{k+1})$. We apply it when $k = 2$, hence we just require $a_3 + b_3$ to be positive, which makes $a_i + b_i$ positive for $i \geq 3$. If this were true, it could lead to very interesting results! We could have $a_1, a_2, a_3, b_1, b_2$ as negative values. Thoughts? @Calvin Lin – That's interesting... I was too lazy myself to check where it stopped working when some of the terms were negative. If what you said is true, wouldn't it mean that the entire sequence $b_1, b_2, \ldots, b_n$ can be negative, as long as $a_k \ge b_k$ for all $k = 1 \to n$? I'll ask Cody Johnson to do a quick check of this on Mathematica to see if what you said is actually true. Wow man, that was really great. Kudos!! God... you guys are geniuses... what am I even doing here among y'all :/ Anyways... again, brilliant job :D Don't worry, you will get there with persistence and ardor. As for the document, it is very intriguing indeed. Daniel, you have certainly cracked upon a new discovery that will be talked about for a while. I will spread the word immediately to my school and to my friends. Fascinating read. Outstanding job, Daniel! Where is the inequality? Click on the link above, then download the PDF. Now look right under section 2. The link redirects me to AoPS. However, I still could not find the PDF link on the page. @Daniel Liu Could you please help me? @Sualeh Asif – If you're on mobile, then it doesn't show up. You must be on a computer for the attachment to show up. As for the inequality, I also posted it on Brilliant Wiki; you can find it here.
Hi Daniel, can you remember what you were thinking when you discovered the inequality? In other words, what was going through your mind when you created something new? I was creating an Algebra problem. I created the problem, then proceeded to solve it. However, I ended up having to prove that $$(m+n)(2m+2n)\cdots (km+kn)\le (m+\sigma(1)n)(2m+\sigma(2)n)\cdots (km+\sigma(k)n)$$ where $\sigma(1),\ldots,\sigma(k)$ is a permutation of $1,\ldots,k$. I couldn't prove it, so I just gave up on the problem. A month later, I came back, wondering if I could make a more generalized version of that inequality. It looked so much like the Rearrangement Inequality, it just had to be true. So I created the inequality as it is now, and then managed to prove it. For this particular inequality, you can make use of $$\prod ( 1 + a_i ) \geq \left[ 1 + \sqrt[k]{ \prod{a_i} } \right]^k \quad - (1)$$ Substitute in $a_i = \frac{ \sigma(i) }{ i } \times \frac{ n}{m}$, clear out denominators, and the result follows. Note: The simplest approach that I know to prove inequality (1) is to take logarithms and apply Jensen's to $f(x) = \ln (1 + x)$. Have you considered presenting it to the AMS journal? @A Former Brilliant Member – I tried submitting it to the AwesomeMath journal, but they declined because they weren't looking for inequality articles at the time. And now I already made it public, so I think it's too late already. These types of topics I doubt the AMS journal cares about, since it is purely competition-math style. Amazing. There you are creating (i.e. discovering) new concepts, here I am struggling to understand even the basic ones :/ (My status) Your status says they didn't accept your submission. Can you tell what was wrong? I'm wondering too.
I sent an email asking why right now and am waiting for their reply. EDIT: Dr. Andreescu said that they were not interested in publishing articles about inequalities right now. Bad timing, I guess. @Daniel Liu, how did you write the PDF? (Can you explain? I too want to create a PDF.) I know how to write LaTeX; can it be converted into a PDF? @Sandeep Rathod – I used a program called TeXworks. It creates PDFs with the nice LaTeX font that is characteristic of mathematical research papers. @Daniel Liu Please can you post it on Brilliant? No matter what I tried, I have not been able to find your inequality. P.S. Is it in the paper? If so, where? It should be right under section 2. The Reverse Rearrangement Inequality lower bound is proven in the Rearrangement Inequality section in Math Olympiad Treasures by Titu Andreescu.
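The special case quoted in the thread, $(m+n)(2m+2n)\cdots(km+kn)\le\prod_i (im+\sigma(i)n)$, can be brute-force checked for small $k$ — a sketch assuming positive $m,n$, with the particular values chosen for illustration:

```python
import itertools
import math

def product_for(perm, m, n):
    # Computes (1*m + perm[0]*n)(2*m + perm[1]*n)...(k*m + perm[k-1]*n)
    return math.prod(i * m + s * n for i, s in enumerate(perm, start=1))

k, m, n = 5, 2.0, 3.0
identity = tuple(range(1, k + 1))
lhs = product_for(identity, m, n)  # (m+n)(2m+2n)...(km+kn)

# The identity permutation should give the minimum over all permutations.
for perm in itertools.permutations(range(1, k + 1)):
    assert product_for(perm, m, n) >= lhs - 1e-9
```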
Let $p:\overline{X}\rightarrow X$ be a simply connected covering of a path connected space $X$ and $A\subset X$ be a path connected set. Show that the inclusion induced homomorphism $i_{\sharp} : \pi_1(A)\rightarrow \pi_1(X)$ is injective iff each path component of $p^{-1}(A)$ is simply connected. First of all, I do not think I understand what it means to say $i_{\sharp} : \pi_1(A)\rightarrow \pi_1(X)$ is injective. Is it that if I have a loop in $A$ which is not nullhomotopic in $A$ (no homotopy $H:I\times I\rightarrow A$ exists) then it is not nullhomotopic in $X$ (no homotopy $H:I\times I\rightarrow X$ exists)? Let me know if this is what it actually means. Assume $i_{\sharp}$ is injective and let $\omega$ be a loop in $p^{-1}(A)$. See this as a loop in $\overline{X}$. As $\overline{X}$ is simply connected, $\omega$ is nullhomotopic in $\overline{X}$, i.e., we have $H:I\times I\rightarrow \overline{X}$ such that $H(t,0)=\omega(t)$ and $H(t,1)=\omega(0)$ for all $t\in I$. Compose this with $p$ to get $I\times I\xrightarrow{p\circ H}X$ with $(p\circ H)(t,0)=(p\circ \omega)(t)$ and $(p\circ H)(t,1)=(p\circ \omega)(0)$. So this $p\circ \omega$ is nullhomotopic in $X$, so by injectivity it has to be nullhomotopic in $A$ as well. As $p\circ \omega$ is nullhomotopic in $A$, I believe this would imply $\omega$ is nullhomotopic in $p^{-1}(A)$. I could not think of any ideas about the converse part, nor how to proceed if $p^{-1}(A)$ is not actually path connected. Please give only hints.
I'm writing a beamer presentation on basic math type-setting in latex and I've been trying to use verbatim to display how math equations are typed. Using fragile this works, but I still get several annoying error messages every time I compile, so I'm never sure if I have an actual error or if beamer is just complaining about verbatim. The current offender would be: \begin{verbatim}\[ X := \bigcup_{n \in \Mb N}\coprod_{\lambda \in \Lambda} (X_\lambda \cap Y_\lambda ) \vee \Mb S^{n}.\]\end{verbatim} I get error messages along the lines of:

LaTeX Font Warning: Font shape `OT1/cmss/m/n' in size <4> not available
(Font) size <5> substituted on input line 11.
[1{/home/schlatjj/.texmf-var/fonts/map/pdftex/updmap/pdftex.map}] (./Math.toc) [2] (./Math.vrb
LaTeX Font Warning: Font shape `OMS/cmss/m/n' undefined
(Font) using `OMS/cmsy/m/n' instead
(Font) for symbol `textbraceleft' on input line 6.) [3] (./Math.vrb) [4] (./Math.vrb
! Undefined control sequence.
\test@single@character ...ken ->\def \math@format ##1{\mydollar ##1\mydollar...
l.10 \end{verbatim}

Are there any workarounds for this, something I can change in the code to make these error messages go away?
Problem 676 Let $V$ be the vector space of $2 \times 2$ matrices with real entries, and $\mathrm{P}_3$ the vector space of real polynomials of degree 3 or less. Define the linear transformation $T : V \rightarrow \mathrm{P}_3$ by \[T \left( \begin{bmatrix} a & b \\ c & d \end{bmatrix} \right) = 2a + (b-d)x - (a+c)x^2 + (a+b-c-d)x^3.\] Find the rank and nullity of $T$. Problem 675 The space $C^{\infty} (\mathbb{R})$ is the vector space of real functions which are infinitely differentiable. Let $T : C^{\infty} (\mathbb{R}) \rightarrow \mathrm{P}_3$ be the map which takes $f \in C^{\infty}(\mathbb{R})$ to its third order Taylor polynomial, specifically defined by \[ T(f)(x) = f(0) + f'(0) x + \frac{f^{\prime\prime}(0)}{2} x^2 + \frac{f^{\prime \prime \prime}(0)}{6} x^3.\] Here, $f'$, $f^{\prime\prime}$ and $f^{\prime \prime \prime}$ denote the first, second, and third derivatives of $f$, respectively. Prove that $T$ is a linear transformation. Problem 674 Let $\mathrm{P}_n$ be the vector space of polynomials of degree at most $n$. The set $B = \{ 1 , x , x^2 , \cdots , x^n \}$ is a basis of $\mathrm{P}_n$, called the standard basis. Let $T : \mathrm{P}_4 \rightarrow \mathrm{P}_{4}$ be the map defined by, for $f \in \mathrm{P}_4$, \[ T (f) (x) = f(x) - x - 1.\] Determine if $T$ is a linear transformation. If it is, find the matrix representation of $T$ relative to the standard basis of $\mathrm{P}_4$. Problem 673 Let $\mathrm{P}_n$ be the vector space of polynomials of degree at most $n$. The set $B = \{ 1 , x , x^2 , \cdots , x^n \}$ is a basis of $\mathrm{P}_n$, called the standard basis. Let $T : \mathrm{P}_3 \rightarrow \mathrm{P}_{5}$ be the map defined by, for $f \in \mathrm{P}_3$, \[T (f) (x) = ( x^2 - 2) f(x).\] Determine if $T$ is a linear transformation.
If it is, find the matrix representation of $T$ relative to the standard basis of $\mathrm{P}_3$ and $\mathrm{P}_{5}$. Problem 672 For an integer $n > 0$, let $\mathrm{P}_n$ be the vector space of polynomials of degree at most $n$. The set $B = \{ 1 , x , x^2 , \cdots , x^n \}$ is a basis of $\mathrm{P}_n$, called the standard basis. Let $T : \mathrm{P}_n \rightarrow \mathrm{P}_{n+1}$ be the map defined by, for $f \in \mathrm{P}_n$, \[T (f) (x) = x f(x).\] Prove that $T$ is a linear transformation, and find its range and nullspace. Problem 669 (a) Suppose that a $3\times 3$ system of linear equations is inconsistent. Is the coefficient matrix of the system nonsingular? (b) Suppose that a $3\times 3$ homogeneous system of linear equations has a solution $x_1=0, x_2=-3, x_3=5$. Is the coefficient matrix of the system nonsingular? (c) Let $A$ be a $4\times 4$ matrix and let \[\mathbf{v}=\begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix} 4 \\ 3 \\ 2 \\ 1 \end{bmatrix}.\] Suppose that we have $A\mathbf{v}=A\mathbf{w}$. Is the matrix $A$ nonsingular? Problem 668 Consider the system of differential equations \begin{align*} \frac{\mathrm{d} x_1(t)}{\mathrm{d}t} & = 2 x_1(t) -x_2(t) -x_3(t)\\ \frac{\mathrm{d}x_2(t)}{\mathrm{d}t} & = -x_1(t)+2x_2(t) -x_3(t)\\ \frac{\mathrm{d}x_3(t)}{\mathrm{d}t} & = -x_1(t) -x_2(t) +2x_3(t) \end{align*} (a) Express the system in the matrix form. (b) Find the general solution of the system. (c) Find the solution of the system with the initial value $x_1=0, x_2=1, x_3=5$.
Solve the Linear Dynamical System $\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} =A\mathbf{x}$ by Diagonalization Problem 667 (a) Find all solutions of the linear dynamical system \[\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t} =\begin{bmatrix} 1 & 0\\ 0& 3 \end{bmatrix}\mathbf{x},\] where $\mathbf{x}(t)=\mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ is a function of the variable $t$. (b) Solve the linear dynamical system \[\frac{\mathrm{d}\mathbf{x}}{\mathrm{d}t}=\begin{bmatrix} 2 & -1\\ -1& 2 \end{bmatrix}\mathbf{x}\] with the initial value $\mathbf{x}(0)=\begin{bmatrix} 1 \\ 3 \end{bmatrix}$. Prove that $\{ 1 , 1 + x , (1 + x)^2 \}$ is a Basis for the Vector Space of Polynomials of Degree $2$ or Less Problem 665 Let $\mathbf{P}_2$ be the vector space of polynomials of degree $2$ or less. (a) Prove that the set $\{ 1 , 1 + x , (1 + x)^2 \}$ is a basis for $\mathbf{P}_2$. (b) Write the polynomial $f(x) = 2 + 3x - x^2$ as a linear combination of the basis $\{ 1 , 1+x , (1+x)^2 \}$. Problem 663 Let $\R^2$ be the $x$-$y$-plane. Then $\R^2$ is a vector space. A line $\ell \subset \mathbb{R}^2$ with slope $m$ and $y$-intercept $b$ is defined by \[ \ell = \{ (x, y) \in \mathbb{R}^2 \mid y = mx + b \} .\] Prove that $\ell$ is a subspace of $\mathbb{R}^2$ if and only if $b = 0$. Problem 659 Fix the row vector $\mathbf{b} = \begin{bmatrix} -1 & 3 & -1 \end{bmatrix}$, and let $\R^3$ be the vector space of $3 \times 1$ column vectors. Define \[W = \{ \mathbf{v} \in \R^3 \mid \mathbf{b} \mathbf{v} = 0 \}.\] Prove that $W$ is a vector subspace of $\R^3$. Problem 658 Let $V$ be the vector space of $n \times n$ matrices with real coefficients, and define \[ W = \{ \mathbf{v} \in V \mid \mathbf{v} \mathbf{w} = \mathbf{w} \mathbf{v} \mbox{ for all } \mathbf{w} \in V \}.\] The set $W$ is called the center of $V$. Prove that $W$ is a subspace of $V$.
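Problem 667(b) can be checked numerically. The sketch below (Python, standard library only) hard-codes the eigen-decomposition solution $x(t) = 2e^t(1,1) - e^{3t}(1,-1)$ — my own working of the diagonalization method for $\mathbf{x}(0)=(1,3)$, not taken from a posted solution — and verifies both the initial condition and the ODE by a finite-difference check.

```python
import math

A = [[2.0, -1.0], [-1.0, 2.0]]

# Eigenpairs of the symmetric matrix A (found by hand):
#   lambda = 1 with eigenvector (1, 1),  lambda = 3 with eigenvector (1, -1).
# Writing x(0) = (1, 3) = a*(1,1) + b*(1,-1) gives a = 2, b = -1, so
#   x(t) = 2 e^t (1, 1) - e^{3t} (1, -1).
def x(t):
    return [2*math.exp(t) - math.exp(3*t),
            2*math.exp(t) + math.exp(3*t)]

# Check the initial value ...
assert x(0.0) == [1.0, 3.0]

# ... and check dx/dt = A x numerically at a sample point via central differences.
t, h = 0.5, 1e-6
deriv = [(p - m) / (2*h) for p, m in zip(x(t + h), x(t - h))]
Ax = [A[0][0]*x(t)[0] + A[0][1]*x(t)[1],
      A[1][0]*x(t)[0] + A[1][1]*x(t)[1]]
for d, a in zip(deriv, Ax):
    assert abs(d - a) < 1e-3
```

The same check works for part (a) with the diagonal matrix, where the components decouple into $x_1(t)=c_1e^t$ and $x_2(t)=c_2e^{3t}$.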
I'm interested in the case where I have received a signal as a composition of three echoes of the transmitted signal $x[n]$ (each echo has its own amplitude and time delay). I know that in order to compare the signals I use cross-correlation. My question is: if I use $r_{xx}[m] = \delta[m]$, would that make it easier to calculate the parameters of the echoes? And, if so, can you explain to me why? The mathematical justification would be that the Cramer-Rao bound on the estimate of time delay is inversely proportional to the time-bandwidth product of the pulse, so in cases where there are many distinct echoes, the shorter the pulse, the better each arrival is resolved; one would therefore increase the bandwidth to minimize the bound on time delay. In the case where the signal is long, as illustrated by the matlab code below for a single delay:

clear all
x=randn(1,16*32768);
x1=[x zeros(1,.5*128)];
x2=[zeros(1,.5*128) x];
figure(1)
pwelch(x1+x2)

The psd has periodic nulls, and you can get an estimate from the null spacing, and the wider the band (white noise), the more nulls can be resolved. $$ H(\omega)=1+\alpha e^{\jmath \omega \tau} $$ For 3 delays it will be a bit more of a bother, but doable. Actually if you don't know the waveform, as in a passive SONAR problem, this is an essential approach. If the echoes are far apart and interfere minimally, and you know the peak shape to be sinc, you can calculate the peak time more simply and using fewer data points. See my answer to "How to calculate a delay (correlation peak) between two signals with a precision smaller than the sampling period?". If the signal covers the whole spectrum up to half the sampling frequency, then there is no bandwidth headroom left where there could be only noise. Alternatively, with a lower bandlimit the noise could be filtered without affecting the wanted signal.
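To make the cross-correlation idea concrete, here is a small pure-Python sketch (the echo amplitudes, delays, and signal length are made up for the demo). With a white-noise probe, $r_{xx}[m]\approx N\,\delta[m]$, so the peaks of the cross-correlation $r_{yx}[m]=\sum_k a_k\,r_{xx}[m-d_k]$ land on the echo delays:

```python
import random

random.seed(0)

# Hypothetical white-noise probe signal x[n]; whiteness makes r_xx[m] ~ N*delta[m].
N = 4096
x = [random.gauss(0.0, 1.0) for _ in range(N)]

# Received signal: three echoes with (amplitude, delay) pairs chosen for the demo.
echoes = [(1.0, 0), (0.6, 37), (0.3, 95)]
y = [0.0] * (N + 128)
for amp, d in echoes:
    for n in range(N):
        y[n + d] += amp * x[n]

# Cross-correlate y against x; the three largest lags recover the delays,
# and the peak heights estimate the amplitudes (roughly amp * N each).
def r_yx(m):
    return sum(y[n + m] * x[n] for n in range(N))

corr = [r_yx(m) for m in range(128)]
top3 = sorted(range(128), key=lambda m: corr[m], reverse=True)[:3]
assert sorted(top3) == [0, 37, 95]
```

If $x[n]$ were a long smooth pulse instead of white noise, the correlation peaks would broaden and overlap, which is exactly the time-bandwidth trade-off described in the answer above.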
Problem 616 Suppose that $p$ is a prime number greater than $3$. Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$. (a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$. (b) Determine the index $[G : S]$. (c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$. Problem 613 Let $m$ and $n$ be positive integers such that $m \mid n$. (a) Prove that the map $\phi:\Zmod{n} \to \Zmod{m}$ sending $a+n\Z$ to $a+m\Z$ for any $a\in \Z$ is well-defined. (b) Prove that $\phi$ is a group homomorphism. (c) Prove that $\phi$ is surjective. (d) Determine the group structure of the kernel of $\phi$. Problem 612 Let $C[-2\pi, 2\pi]$ be the vector space of all real-valued continuous functions defined on the interval $[-2\pi, 2\pi]$. Consider the subspace $W=\Span\{\sin^2(x), \cos^2(x)\}$ spanned by functions $\sin^2(x)$ and $\cos^2(x)$. (a) Prove that the set $B=\{\sin^2(x), \cos^2(x)\}$ is a basis for $W$. (b) Prove that the set $\{\sin^2(x)-\cos^2(x), 1\}$ is a basis for $W$. Problem 611 An $n\times n$ matrix $A$ is called orthogonal if $A^{\trans}A=I$. Let $V$ be the vector space of all real $2\times 2$ matrices. Consider the subset \[W:=\{A\in V \mid \text{$A$ is an orthogonal matrix}\}.\] Prove or disprove that $W$ is a subspace of $V$. Problem 607 Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less. Let \[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\] where \begin{align*} p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\ p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3. \end{align*} (a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$. (b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$. (The Ohio State University, Linear Algebra Midterm) Problem 606 Let $V$ be a vector space and $B$ be a basis for $V$.
Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$. After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form \[\begin{bmatrix} 1 & 0 & 2 & 1 & 0 \\ 0 & 1 & 3 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.\] (a) What is the dimension of $V$? (b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$? (The Ohio State University, Linear Algebra Midterm) Problem 605 Let $T:\R^2 \to \R^3$ be a linear transformation such that \[T\left(\, \begin{bmatrix} 3 \\ 2 \end{bmatrix} \,\right) =\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \text{ and } T\left(\, \begin{bmatrix} 4\\ 3 \end{bmatrix} \,\right) =\begin{bmatrix} 0 \\ -5 \\ 1 \end{bmatrix}.\] (a) Find the matrix representation of $T$ (with respect to the standard basis for $\R^2$). (b) Determine the rank and nullity of $T$. (The Ohio State University, Linear Algebra Midterm) Problem 604 Let \[A=\begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 &1 & 1 & 1 \\ 1 & -1 & 0 & 0 \\ 0 & 2 & 2 & 2\\ 0 & 0 & 0 & 0 \end{bmatrix}.\] (a) Find a basis for the null space $\calN(A)$. (b) Find a basis of the range $\calR(A)$. (c) Find a basis of the row space for $A$. (The Ohio State University, Linear Algebra Midterm) Problem 603 Let $C[-2\pi, 2\pi]$ be the vector space of all continuous functions defined on the interval $[-2\pi, 2\pi]$. Consider the functions \[f(x)=\sin^2(x) \text{ and } g(x)=\cos^2(x)\] in $C[-2\pi, 2\pi]$. Prove or disprove that the functions $f(x)$ and $g(x)$ are linearly independent.
(The Ohio State University, Linear Algebra Midterm) Problem 601 Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let \[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix} a & b\\ c& -a \end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\] (a) Show that $W$ is a subspace of $V$. (b) Find a basis of $W$. (c) Find the dimension of $W$.
This was inspired by many puzzles that use three/four numbers to create other numbers. I chose these numbers in particular because of this post. Can you find a way to make all the natural numbers from $1$ to $15$ with all four of just the following numbers? $3,9,9,9$ You are allowed to use only the following operations, but you can jumble up the order and even turn $9$s upside-down to make $6$s (though that will be replacing a $9$). You can also use an operation more than once if you like. $+\;\;\times\;\;\div\;\;-\;\;\sqrt{\cdot}\;\;0.\;\;\lfloor\rceil\;\;!\;\;\$\;\;\%\;\;(\,)\;\;\hat\,$ Be as creative as you want. Why limit the mind? And $\$$ does not necessarily mean dollars... You can include zeroes for decimals if you want, because really, $1=01$ and $2=0002$ so I see no difference. Challenge Solution: I am interested to see all the solutions, especially those containing only the mainstream operations and/or one radical and/or floor/ceiling functions. In that particular case, I myself have discovered a few solutions from $1$ to $5$, which means... well... I genuinely don't know if there exist these particular types of solutions for greater numbers. Accepting an Answer: The answer will be accepted from the person who finds challenge solutions from $1$ to $15$, and no, the first one won't necessarily be accepted... unless it is the most creative answer, because I will accept an answer if it has found all these challenge solutions and is the most creative (partly based on upvotes, so the decision of accepting a certain answer is not too subjective). As for creativity alone, a $50$ rep bounty will be awarded to the answer that has the most creative solutions (that might be the accepted one, as well). No answer should have just one solution, especially partial answers. There must be more than one solution in a posted answer before any further progress is made. This rule just gives others time to come up with solutions themselves without being tempted to look at an answer!
Enjoy! P.S. If you like mathematical challenges, go here!
Hi good people! How would I go about finding the modulus, the argument and plotting \(z=\sqrt2-i\)? I get this to be in the 4th quadrant, but it seems incorrect?... thanks for the help!... The modulus is 3, and the angle calculates to 30 or 40 something... cannot remember... in any case the actual question is the quadrant... please guys, just help me out with this... I do appreciate it!! Yes, it is in the 4th quadrant. It is just in the position (sqrt2, -1) where sqrt2 is the real co-ordinate (horizontal), and -1 is the imaginary co-ordinate (vertical). \(z=\sqrt2-i\\ z=\sqrt2-1i\\ \text{The modulus is just }\\ |z|=\sqrt{(\sqrt2)^2+(-1)^2}\\ |z|=\sqrt{2+1}\\ |z|=\sqrt{3}\\ \) The simplest way to do this is to say if \(z=a+bi \quad then\\ \text{The first quadrant (equivalent) angle will be } \quad atan(|\frac{b}{a}|)\\ z=\sqrt{2}-1i\\ acute\;angle=atan\frac{1}{\sqrt2}\approx 35.26^\circ\\ \text{The 4th quadrant correct answer will be}\quad \\ \theta\approx 360^\circ- 35.26^\circ \approx 324.74^\circ \) OR you could go the long way as done below. \(now\\ z=\sqrt{3} \left( \sqrt{\frac{2}{3}}+\frac{-1}{\sqrt{3}} i \; \right)\\ z=r(cos\theta +isin\theta)\\ cos\theta=\sqrt\frac{2}{3}\qquad sin\theta = \frac{-1}{\sqrt{3}}\\ 4th \;\;quad\\ \theta=360^\circ-acos(\sqrt\frac{2}{3})\\ \theta \approx 360^\circ-35.26^\circ\\ \theta \approx 324.74^\circ\\ \) Melody, thank you a million times over... I was going through a question paper with a memo, with a student, and saw the memo had the answer as 180 - 35.26; in other words they had it in the 2nd quadrant. I assured my student that the memo was incorrect, but I just had to make 100% sure myself, therefore I asked the question. You have confirmed the quadrant. Thank you very much Melody, and also, a BIG thank you for your in-depth explanations. I do appreciate it. Take care.
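A quick way to double-check the worked answer is to let a language's complex-number library do it. A minimal Python sketch using the standard cmath module:

```python
import cmath
import math

z = complex(math.sqrt(2), -1)          # z = sqrt(2) - i

modulus = abs(z)                       # sqrt((sqrt 2)^2 + (-1)^2) = sqrt(3)
theta = cmath.phase(z)                 # principal argument in (-pi, pi]

assert abs(modulus - math.sqrt(3)) < 1e-12

# phase() returns about -35.26 degrees (4th quadrant); adding 360 gives the
# positive-angle form ~324.74 degrees used in the worked solution above.
deg = math.degrees(theta) % 360
assert abs(deg - 324.74) < 0.01
```

The negative sign of `cmath.phase(z)` itself confirms the point lies below the real axis, i.e. in the 4th quadrant, not the 2nd.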
CryptoDB Satrajit Ghosh Affiliation: Aarhus University Publications 2019 EUROCRYPT An Algebraic Approach to Maliciously Secure Private Set Intersection Private set intersection (PSI) is an important area of research and has been the focus of many works over the past decades. It describes the problem of finding an intersection between the input sets of at least two parties without revealing anything about the input sets apart from their intersection. In this paper, we present a new approach to compute the intersection between sets based on a primitive called Oblivious Linear Function Evaluation (OLE). On an abstract level, we use this primitive to efficiently add two polynomials in a randomized way while preserving the roots of the added polynomials. Setting the roots of the input polynomials to be the elements of the input sets, this directly yields an intersection protocol with optimal asymptotic communication complexity $O(m\kappa)$. We highlight that the protocol is information-theoretically secure against a malicious adversary assuming OLE. We also present a natural generalization of the 2-party protocol for the fully malicious multi-party case. Our protocol does away with expensive (homomorphic) threshold encryption and zero-knowledge proofs. Instead, we use simple combinatorial techniques to ensure the security. As a result we get a UC-secure protocol with asymptotically optimal communication complexity $O((n^2+nm)\kappa)$, where n is the number of parties, m is the set size and $\kappa$ is the security parameter. Apart from yielding an asymptotic improvement over previous works, our protocols are also conceptually simple and require only simple field arithmetic. Along the way we develop techniques that might be of independent interest.
2019 CRYPTO The Communication Complexity of Threshold Private Set Intersection Threshold private set intersection enables Alice and Bob, who hold sets $S_{\mathsf{A}}$ and $S_{\mathsf{B}}$ of size n, to compute the intersection $S_{\mathsf{A}} \cap S_{\mathsf{B}}$ if the sets do not differ by more than some threshold parameter $t$. In this work, we investigate the communication complexity of this problem and we establish the first upper and lower bounds. We show that any protocol has to have a communication complexity of $\varOmega(t)$. We show that an almost matching upper bound of $\tilde{\mathcal{O}}(t)$ can be obtained via fully homomorphic encryption. We present a computationally more efficient protocol based on weaker assumptions, namely additively homomorphic encryption, with a communication complexity of $\tilde{\mathcal{O}}(t^2)$. For applications like biometric authentication, where a given fingerprint has to have a large intersection with a fingerprint from a database, our protocols may result in significant communication savings. Prior to this work, all previous protocols had a communication complexity of $\varOmega(n)$. Our protocols are the first ones with communication complexities that mainly depend on the threshold parameter $t$ and only logarithmically on the set size n.
NTS ABSTRACTSpring2019 (revision as of 21:41, 27 January 2019) Jan 23 Yunqing Tang Reductions of abelian surfaces over global function fields For a non-isotrivial ordinary abelian surface $A$ over a global function field, under mild assumptions, we prove that there are infinitely many places modulo which $A$ is geometrically isogenous to the product of two elliptic curves. This result can be viewed as a generalization of a theorem of Chai and Oort. This is joint work with Davesh Maulik and Ananth Shankar.
Jan 24 Hassan-Mao-Smith--Zhu The diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$ Abstract: Assume a polynomial-time algorithm for factoring integers, Conjecture~\ref{conj}, $d\geq 3,$ and $q$ and $p$ prime numbers, where $p\leq q^A$ for some $A>0$. We develop a polynomial-time algorithm in $\log(q)$ that lifts every $\mathbb{Z}/q\mathbb{Z}$ point of $S^{d-2}\subset S^{d}$ to a $\mathbb{Z}[1/p]$ point of $S^d$ with the minimum height. We implement our algorithm for $d=3 \text{ and }4$. Based on our numerical results, we formulate a conjecture which can be checked in polynomial-time and gives the optimal bound on the diophantine exponent of the $\mathbb{Z}/q\mathbb{Z}$ points of $S^{d-2}\subset S^d$. Jan 31 Kyle Pratt Breaking the $\frac{1}{2}$-barrier for the twisted second moment of Dirichlet $L$-functions Abstract: I will discuss recent work, joint with Bui, Robles, and Zaharescu, on a moment problem for Dirichlet $L$-functions. By way of motivation I will spend some time discussing the Lindel\"of Hypothesis, and work of Bettin, Chandee, and Radziwi\l\l. The talk will be accessible, as I will give lots of background information and will not dwell on technicalities. Feb 7 Shamgar Gurevich Harmonic Analysis on $GL_n$ over finite fields Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the {\it character ratio}: $$trace (\rho(g))/dim (\rho),$$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. 
We call this invariant {\it rank}. This talk will discuss the notion of rank for GLn over finite fields, and apply the results to random walks. This is joint work with Roger Howe (TAMU). Feb 14 Tonghai Yang The Lambda invariant and its CM values Abstract: The Lambda invariant, which parametrizes elliptic curves with two-torsion structure ($X_0(2)$), has some interesting properties, some similar to those of the j-invariant, and some not. For example, $\lambda(\frac{d+\sqrt d}2)$ is sometimes a unit. In this talk, I will briefly describe some of these properties. This is joint work with Hongbo Yin and Peng Yu.
Tagged: determinant of a matrix Problem 718 Let \[ A= \begin{bmatrix} 8 & 1 & 6 \\ 3 & 5 & 7 \\ 4 & 9 & 2 \end{bmatrix} . \] Notice that $A$ contains every integer from $1$ to $9$ and that the sums of each row, column, and diagonal of $A$ are equal. Such a grid is sometimes called a magic square. Compute the determinant of $A$. Problem 686 In each of the following cases, can we conclude that $A$ is invertible? If so, find an expression for $A^{-1}$ as a linear combination of positive powers of $A$. If $A$ is not invertible, explain why not. (a) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=0$. (b) The matrix $A$ is a $3 \times 3$ matrix with eigenvalues $\lambda=i , \lambda=-i$, and $\lambda=-1$. Problem 582 A square matrix $A$ is called nilpotent if some power of $A$ is the zero matrix. Namely, $A$ is nilpotent if there exists a positive integer $k$ such that $A^k=O$, where $O$ is the zero matrix. Suppose that $A$ is a nilpotent matrix and let $B$ be an invertible matrix of the same size as $A$. Is the matrix $B-A$ invertible? If so prove it. Otherwise, give a counterexample. Problem 571 The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes. This post is Part 2 and contains Problems 4, 5, and 6. Check out Part 1 and Part 3 for the rest of the exam problems. Problem 4. Let \[\mathbf{a}_1=\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}, \mathbf{a}_2=\begin{bmatrix} 2 \\ -1 \\ 4 \end{bmatrix}, \mathbf{b}=\begin{bmatrix} 0 \\ a \\ 2 \end{bmatrix}.\] Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$. Problem 5.
Find the inverse matrix of \[A=\begin{bmatrix} 0 & 0 & 2 & 0 \\ 0 &1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 1 \end{bmatrix}\] if it exists. If you think there is no inverse matrix of $A$, then give a reason. Problem 6. Consider the system of linear equations \begin{align*} 3x_1+2x_2&=1\\ 5x_1+3x_2&=2. \end{align*} (a) Find the coefficient matrix $A$ of the system. (b) Find the inverse matrix of the coefficient matrix $A$. (c) Using the inverse matrix of $A$, find the solution of the system. (Linear Algebra Midterm Exam 1, the Ohio State University) Problem 546 Let $A$ be an $n\times n$ matrix. The $(i, j)$ cofactor $C_{ij}$ of $A$ is defined to be \[C_{ij}=(-1)^{i+j}\det(M_{ij}),\] where $M_{ij}$ is the $(i,j)$ minor matrix obtained from $A$ by removing the $i$-th row and $j$-th column. Then consider the $n\times n$ matrix $C=(C_{ij})$, and define the $n\times n$ matrix $\Adj(A)=C^{\trans}$. The matrix $\Adj(A)$ is called the adjoint matrix of $A$. When $A$ is invertible, its inverse can be obtained by the formula \[A^{-1}=\frac{1}{\det(A)}\Adj(A).\] For each of the following matrices, determine whether it is invertible, and if so, then find the inverse matrix using the above formula. (a) $A=\begin{bmatrix} 1 & 5 & 2 \\ 0 &-1 &2 \\ 0 & 0 & 1 \end{bmatrix}$. (b) $B=\begin{bmatrix} 1 & 0 & 2 \\ 0 &1 &4 \\ 3 & 0 & 1 \end{bmatrix}$. Problem 509 Using the numbers appearing in \[\pi=3.1415926535897932384626433832795028841971693993751058209749\dots\] we construct the matrix \[A=\begin{bmatrix} 3 & 14 &1592& 65358\\ 97932& 38462643& 38& 32\\ 7950& 2& 8841& 9716\\ 939937510& 5820& 974& 9 \end{bmatrix}.\] Prove that the matrix $A$ is nonsingular. Problem 505 Let $A$ be a singular $2\times 2$ matrix such that $\tr(A)\neq -1$ and let $I$ be the $2\times 2$ identity matrix.
Then prove that the inverse matrix of the matrix $I+A$ is given by the following formula: \[(I+A)^{-1}=I-\frac{1}{1+\tr(A)}A.\] Using the formula, calculate the inverse matrix of $\begin{bmatrix} 2 & 1\\ 1& 2 \end{bmatrix}$. Problem 486 Determine whether there exists a nonsingular matrix $A$ if \[A^4=ABA^2+2A^3,\] where $B$ is the following matrix. \[B=\begin{bmatrix} -1 & 1 & -1 \\ 0 &-1 &0 \\ 2 & 1 & -4 \end{bmatrix}.\] If such a nonsingular matrix $A$ exists, find the inverse matrix $A^{-1}$. (The Ohio State University, Linear Algebra Final Exam Problem) Eigenvalues of Orthogonal Matrices Have Length 1. Every $3\times 3$ Orthogonal Matrix Has 1 as an Eigenvalue Problem 419 (a) Let $A$ be a real orthogonal $n\times n$ matrix. Prove that the length (magnitude) of each eigenvalue of $A$ is $1$. (b) Let $A$ be a real orthogonal $3\times 3$ matrix and suppose that the determinant of $A$ is $1$. Then prove that $A$ has $1$ as an eigenvalue.
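The formula in Problem 505 is easy to sanity-check numerically. A minimal pure-Python sketch for the worked $2\times 2$ example (my own check, not part of the original problem set): the matrix $\begin{bmatrix}2&1\\1&2\end{bmatrix}$ is $I+A$ with $A=\begin{bmatrix}1&1\\1&1\end{bmatrix}$, which is singular with $\tr(A)=2$.

```python
# Sanity check of (I + A)^{-1} = I - A/(1 + tr A) for the worked example.
A = [[1.0, 1.0], [1.0, 1.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
tr = A[0][0] + A[1][1]
assert A[0][0]*A[1][1] - A[0][1]*A[1][0] == 0.0   # A is singular, as required

inv = [[I[i][j] - A[i][j] / (1 + tr) for j in range(2)] for i in range(2)]
IplusA = [[I[i][j] + A[i][j] for j in range(2)] for i in range(2)]

# (I + A) * inv should come out to the identity matrix.
prod = [[sum(IplusA[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```

This reproduces the hand answer $\frac{1}{3}\begin{bmatrix}2&-1\\-1&2\end{bmatrix}$, i.e. `inv[0][0] == 2/3` and `inv[0][1] == -1/3`.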
Kinetic Theory of Gases Molecular nature of matter and behaviour of gases A gas consists of a large number of identical, tiny, spherical, neutral and elastic particles called molecules. In a gas, molecules move in all possible directions with all possible speeds. The pressure of a gas is due to elastic collisions of the gas molecules with the walls of the container. The time of contact of moving molecules with the walls of the container is negligible as compared to the intervals between two successive collisions on the same wall of the container. Between two collisions a molecule moves in a straight path with a uniform velocity. The collisions are perfectly elastic and there are no forces of attraction or repulsion between molecules. For a gas molecule in a container, Impulse = change in momentum of the molecule 1. Behaviour of Gases: Gases at low pressure and high temperature follow a relation, pV = kT 2. The perfect gas equation is given by pV = nRT where n is the number of moles, R = N_A k_B is the universal gas constant, and T is the absolute temperature in kelvin. 3.
In terms of density, the perfect gas equation is p = \frac{\rho R T}{M_0} 4. Boyle's Law: It states that for a given mass of a gas at constant temperature, the volume of that mass of gas is inversely proportional to its pressure, i.e., V \propto \frac{1}{p} ⇒ p_1V_1 = p_2V_2 = p_3V_3 = ..... = constant 5. Charles' Law: It states that for a given mass of an ideal gas at constant pressure, the volume (V) of the gas is directly proportional to its absolute temperature T, i.e., V ∝ T ⇒ \frac{V_1}{T_1} = \frac{V_2}{T_2} = \frac{V_3}{T_3} = ..... = constant 6. Dalton's Law of Partial Pressure: It states that the total pressure of a mixture of non-interacting ideal gases is the sum of the partial pressures exerted by the individual gases in the mixture, i.e., p = p_1 + p_2 + p_3 + ......
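A small numeric sketch of notes 2 and 4 above (Python; the volume and temperature are illustrative values I chose, with R ≈ 8.314 J/(mol·K)):

```python
# Perfect gas relation pV = nRT and Boyle's law, with illustrative numbers.
R = 8.314  # universal gas constant, J/(mol K)

def pressure(n, T, V):
    """p = nRT / V for n moles at absolute temperature T (K) in volume V (m^3)."""
    return n * R * T / V

# 1 mole at 300 K in 0.0249 m^3 gives roughly atmospheric pressure (~1.0e5 Pa).
p1 = pressure(1.0, 300.0, 0.0249)
assert abs(p1 - 100169.0) < 1.0

# Boyle's law at fixed temperature: p1 V1 = p2 V2.
p2 = pressure(1.0, 300.0, 0.0498)  # double the volume, pressure halves
assert abs(p1 * 0.0249 - p2 * 0.0498) < 1e-6
```

The same `pressure` helper also illustrates Charles' law: holding p and n fixed, V scales linearly with T.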
The Danish butcher production and retail chain, MeatMe, is considering whether to start production of dog food and decides to start a 7-day test production, calling the product BargainBone. To fix the price, MeatMe wishes to analyze the relation between the produced quantity (in kg) and the total costs of production. It would be logical to assume that the more kg produced of any product, the higher the total costs of production. However, this relation can be altered by e.g. synergy effects, and MeatMe aims to understand the actual relation between these two parameters, stating the following question: Question 1: Is there a correlation between the production in kg of BargainBone and the total costs of production? Let's work on an answer to question 1: The 7-day test production resulted in the following observations: Visualizing can help to get an immediate intuition: From the graph, we could assume that there is a linear correlation between the production of BargainBone and the total production costs. The higher the production of BargainBone, the higher the total costs of production. But we wish to be as exact as possible in our answer, so we decide to run a linear regression analysis testing the linear correlation between the two parameters.
First, we calculate our regression line: \( \displaystyle y = mx + b \) where \(m\) is the slope of the line, calculated by: \( \displaystyle m = \frac{\bar{x}\,\bar{y}-\overline{xy}}{(\bar{x})^2-\overline{x^2}} \) and where \(b\) is the intercept with the y axis (the value of \(y\) where \(x = 0\)): \( \displaystyle b = \bar{y} - m\bar{x} \) Let's display a table from which we can derive these parameters. So, our \(m\) and \(b\) are: \( \displaystyle m = \frac{(1{,}114.3\times 17.8) - 20{,}509}{1{,}114.3^2 - 1{,}426{,}429} = 0.0036 \) \( \displaystyle b = 17.8 - 0.0036\times 1{,}114 = 13.73 \) Thus, our regression line becomes: \( \displaystyle y = mx + b \Leftrightarrow \qquad y=0.0036x + 13.73\) The linear correlation between BargainBone production and the total costs Is our line a good fit? Does it express the real relation between \(x\) (production in kg of BargainBone) and \(y\) (total production costs)? Is there a linear correlation? These are the questions that we need to clarify. The coefficient of determination, denoted \( r^2 \), answers these questions, as it is a measure of the strength of the linear relationship between the two variables. \( r^2 \) describes the percentage of the variation in the total costs of production that is explained by the variation in the production of BargainBone. A regression line that fits the observed data perfectly is a 100% fit, i.e. \( r^2 = 1 \). So we compute 1 minus the relative error in our line. Let's find the error in our line: We find the percentage of error by comparing the error of the line with the total error from the mean of \(y\). \( SE_{Line}\): The Squared Error of the Line describes the error of our regression line compared to each of the observed data points. What is the distance from each observed data point to our line? What's the error of our line? 
\( \displaystyle SE_{\text{Line}} = \sum_{i}^n(y_{i}-(mx_{i}+b))^2 = \sum_{i}^n(y_{i}-\hat{y}_i)^{2}\) \( \displaystyle SE_{\bar{y}}\): The Squared Error of the mean \(y\) describes the total variation in \(y\), being the difference from each observed data point to the mean \( \bar{y} \): \( \displaystyle SE_{\bar{y}} = (y_{1}-\bar{y})^2+(y_{2}-\bar{y})^2+\dots+(y_{n}-\bar{y})^2 = \sum_{i}^n(y_{i}-\bar{y})^{2}\) So the ratio of the error of our line to the total error expresses the percentage of the variation in \(y\) that is NOT described by the variation in \(x\): \( \displaystyle \frac{\sum_{i}^n(y_{i}-\hat{y}_i)^{2}}{\sum_{i}^n(y_{i}-\bar{y})^{2}} = \frac{SE_{Line}}{SE_{\bar{y}}}\) So, now we have the parameters to fill in our formula for \( r^2 \): \( \displaystyle r^2 = 1 - \frac{SE_{Line}}{SE_{\bar{y}}}\) Answer to Question 1: Yes, there is a clear linear correlation between the production in kg of BargainBone and the total costs of production. Our \( r^2 = 0.947 \), meaning that 94.7% of the variation in the total costs of production can be explained by the variation of the production in kg of BargainBone.
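A short pure-Python sketch of the computation above (the function name fit_line is my own); it implements the slope and intercept via the averages formula, and r² as one minus the ratio of the squared errors:

```python
def fit_line(xs, ys):
    """Least-squares slope m, intercept b, and r^2 for paired data."""
    mean = lambda vs: sum(vs) / len(vs)
    mx, my = mean(xs), mean(ys)
    mxy = mean([x * y for x, y in zip(xs, ys)])   # mean of the products
    mxx = mean([x * x for x in xs])               # mean of the squares
    m = (mx * my - mxy) / (mx * mx - mxx)         # slope
    b = my - m * mx                               # intercept
    se_line = sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))
    se_mean = sum((y - my) ** 2 for y in ys)
    r2 = 1 - se_line / se_mean                    # coefficient of determination
    return m, b, r2
```

On MeatMe's seven (production kg, total cost) pairs this would reproduce m, b, and the r² = 0.947 reported above.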
Let $(N \subset M)$ be an irreducible finite index depth $n$ subfactor. Let $P = P(N \subset M)$ be its planar algebra. Let $(B_i)$ be the finite sequence of $N$-$N$-bimodules appearing in the principal graph. Let $2m = n$ if $n$ is even, else $2m=n+1$. Let $p_i \in P_{2m,+}$ be the minimal central projection related to the $N$-$N$-bimodule $B_i$. Question: Is there a planar tangle $T: P_{2m,+} \otimes P_{2m,+} \to P_{2m,+}$ such that $T(p_i \otimes p_j) = \sum_{k} n_{ij}^k p_k$, with $B_i \boxtimes B_j = \bigoplus_k M_{ij}^k \otimes B_k$ and $\dim(M_{ij}^k)= n_{ij}^k$ (the fusion coefficients)? Otherwise, is there such a $T$ if we only consider the range support? The central support? Remark: If $n = 2$, such a $T$ exists: it's the coproduct (see here). Then, a generalization of the coproduct on $P_{2m,+}$ could do the job.
There is a quasi-1-D converging-diverging nozzle test case for compressible flow codes; is there any analogous 1-D test case for incompressible flow codes? To build on Bill Barth's comment, there really isn't any depth here. The equations you're trying to solve are: \begin{align} u_t + uu_x &= - \frac{1}{\rho}p_x + \nu u_{xx} \\ u_x & = 0 \end{align} Now, $u_x = 0$ immediately implies $u$ has no space-dependence, so we can write $u = f(t)$, for some time-dependent function $f: \mathbb{R} \to \mathbb{R}$. The momentum equation then reduces to $$ f'(t) = -\frac{1}{\rho}p_x(x,t).$$ Integrating this, we get $$p = -\rho x f'(t) + g(t),$$ where $g :\mathbb{R} \to \mathbb{R}$ is another function of time. (Note that $g$ is irrelevant to the dynamics of the system, as there is dependence only on the spatial derivatives of $p$.) At this point, the solution space is small and uninteresting, and we haven't even applied boundary conditions. Homogeneous Dirichlet boundary conditions on $u$, for example, imply that $u \equiv 0$ is the only solution. Even periodic boundary conditions, by requiring that $p$ be periodic, lead to the condition that $f'(t) \equiv 0$, so $u \equiv C$ for some constant $C$. In short, what you're looking for simply doesn't exist. Assuming that the system you are interested in is not hyperbolic (the shallow water equations model flow of an incompressible fluid and yet are hyperbolic), the following could be taken as test cases for validation of the code (although none of them is purely 1-D): 1) As per Bill Barth's suggestion, Couette and Poiseuille flow problems are standard test cases. 2) Uniform flow over a flat plate, with a developing laminar boundary layer, can be solved; its behavior is given by the well-established Blasius solution. (The reason why there aren't plenty of 1-D problems available might be as follows: in hyperbolic systems, 1-D test/benchmark problems are plentiful. 
The reason is that wave formation and transmission are the dominant phenomena in hyperbolic systems. Shocks and contacts can be analysed as 1-D phenomena (in certain 2-D problems, these can be converted into 1-D and then solved). But in incompressible flow, no purely 1-D phenomenon takes place.)
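The reduction above is easy to check numerically. A sketch (my own choices f(t) = sin t, g(t) = 0, and arbitrary ρ, ν) verifying that u = f(t), p = −ρxf′(t) satisfies both the continuity and momentum equations via central differences:

```python
import math

rho, nu = 1.2, 1e-3   # arbitrary fluid parameters

def u(x, t):          # u_x = 0 forces u = f(t); take f(t) = sin(t)
    return math.sin(t)

def p(x, t):          # p = -rho * x * f'(t) + g(t), with g = 0
    return -rho * x * math.cos(t)

def ddx(fun, x, t, h=1e-5):
    return (fun(x + h, t) - fun(x - h, t)) / (2 * h)

def ddt(fun, x, t, h=1e-5):
    return (fun(x, t + h) - fun(x, t - h)) / (2 * h)

def momentum_residual(x, t):
    # u_t + u u_x + p_x / rho - nu u_xx, which should vanish identically
    u_xx = (u(x + 1e-4, t) - 2 * u(x, t) + u(x - 1e-4, t)) / 1e-8
    return ddt(u, x, t) + u(x, t) * ddx(u, x, t) + ddx(p, x, t) / rho - nu * u_xx
```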
I am reading some old papers regarding learning with malicious noise. In one of them, Learning in the Presence of Malicious Errors, by Kearns and Li $[1]$ (https://www.cis.upenn.edu/~mkearns/papers/malicious.pdf), it is proved that in the case where an adversary may choose a fraction $\beta \in [0, \frac{1}{2})$ of the training set to poison with instances, such that no restriction is put on the poisoned instances, the upper bound $\beta < \frac{\epsilon}{\epsilon+1}$ must be met so that it is possible to learn an $\epsilon$-good hypothesis with probability at least $1-\delta$ (in the PAC-learning setting, using the usual notation). This bound is, however, not attained in every case; for some specific hypothesis classes, the tolerable noise rate is much lower than that. However, in the paper Learning from Noisy Examples, by Angluin and Laird $[2]$ (http://homepages.math.uic.edu/~lreyzin/f14_mcs548/angluin88b.pdf), it is shown that in the case of Classification Noise, i.e. the attacker doesn't modify the underlying distribution of the instances but may flip the label of each instance with probability $\beta$, then for every $\beta < \frac{1}{2}$ it is possible to find an $\epsilon$-good hypothesis with probability at least $1-\delta$ (in the PAC-learning setting, using the usual notation), using at least $m \geq \frac{2}{\epsilon^2(1-2\beta)^2}\ln{\Big( \frac{2 \cdot |\mathcal{H}|} {\delta} \Big)}$ instances, where $\mathcal{H}$ is the hypothesis class to be learned. The question is: why is it not possible to apply the result of $[2]$ in the case of $[1]$? How is the hypothesis that the underlying distribution of instances stays the same in the case of $[2]$ used to prove correctness? I am not able to find that in the proof. Can you please point it out for me? Thank you so much everyone!
I am trying to implement an algorithm in real time on a fixed-point DSP (the Blackfin from Analog Devices). The algorithm does a lot of stuff, but in the middle it performs an algorithm called the "Fast Data Projection Method" (FDPM), which goes something like this: Let $\mathbf{x}_{k} = [x_{1}, x_{2},\dotsb,x_{M}]^{T}$ be a random vector which contains $M$ samples from a discrete signal. In the process we take sequentially many vectors from the sampled signal with some degree of overlap, but that is not relevant; we can assume that the vectors $\mathbf{x}_{k}$ come one after another. The FDPM aims at obtaining a matrix $W \in \mathbb{R}^{M\times N}$ whose columns are the eigenvectors of the correlation matrix $R_{x} = E[\mathbf{x}\mathbf{x}^{T}]$ of the vector $\mathbf{x}$. So we initialize the algorithm with a random matrix $W_{0}$ (it can also be the identity matrix), and perform the following steps: 1 - $\mathbf{y}_{k} = W_{k}^{T}\mathbf{x}_{k}$ 2 - $\mathbf{a}_{k} = \mathbf{y}_{k} - \|\mathbf{y}_{k}\|\mathbf{e}_{1}$, (where $\mathbf{e}_{1} = [1,0,0,\dotsb,0]^{T}$) 3 - $G_{k+1} = I-\frac{2}{\|\mathbf{a}_{k}\|^{2}}\mathbf{a}_{k}\mathbf{a}_{k}^{T}$ 4 - $W_{k+1} = \text{Normalize}\{\left[W_{k}+\mu_{k}\mathbf{x}_{k}\mathbf{x}_{k}^{T}W_{k}\right]G_{k+1}\}$ where $\text{Normalize}\{\cdot\}$ stands for normalizing each of the columns of the matrix individually. So the problem is the following. I have tested this algorithm in MATLAB with double precision and it works fine; I am able to obtain eigenvectors very close to the real signal eigenvectors and the algorithm does its job, no problem. The thing is, when I use the fixed-point toolbox to perform these operations with 16-bit numbers in Q15 format, then everything goes wrong. 
The first one or two iterations have small error, but quickly all the vectors start to saturate (the components go to either 1 or -1), and while the matrices do not saturate, they converge to a matrix which is orthogonal to the same matrix calculated with double precision. I am guessing that the loss of precision caused by casting to 16 bits matters too much, so I think I might have to do some signal scaling or some re-normalization every once in a while, but I have no idea how to analyze the problem or how to know where I have to modify the algorithm to make it work in 16 bits. Has anyone ever worked with this type of problem? Does anyone have any idea what I could do? Thanks!!
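For reference while debugging the Q15 version, here is a floating-point sketch of the four FDPM steps in NumPy (the step size, the degenerate-reflection guard, and the test signal are my own choices, not part of the original algorithm):

```python
import numpy as np

def fdpm_step(W, x, mu):
    """One FDPM iteration in floating point (steps 1-4 from the question)."""
    N = W.shape[1]
    y = W.T @ x                                    # 1: project onto current basis
    a = y.copy()
    a[0] -= np.linalg.norm(y)                      # 2: a = y - ||y|| e1
    aa = float(a @ a)
    if aa < 1e-12:                                 # guard: y already along e1
        G = np.eye(N)
    else:
        G = np.eye(N) - (2.0 / aa) * np.outer(a, a)    # 3: Householder reflector
    W_new = (W + mu * np.outer(x, x @ W)) @ G      # 4: rank-one update, then rotate
    return W_new / np.linalg.norm(W_new, axis=0)   # normalize each column

# sanity check with a strongly dominant direction v
rng = np.random.default_rng(0)
M, N = 8, 2
W = np.linalg.qr(rng.standard_normal((M, N)))[0]
v = rng.standard_normal(M)
v /= np.linalg.norm(v)
for _ in range(500):
    x = 3.0 * rng.standard_normal() * v + 0.1 * rng.standard_normal(M)
    W = fdpm_step(W, x, mu=0.01)
```

A common approach in Q15 is to scale $\mathbf{x}_k$ down so intermediate products stay well below full scale and to re-orthonormalize $W$ every so often; both are suggestions to experiment with, not guaranteed cures.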
Does a 1 kHz sine tone mean $\sin(2(1000)\pi t)$ or $\sin(2(500)\pi t)$? The trigonometric functions "do not know" what a hertz is, and they do not care either. The only thing they know is that a full circle is $2 \pi$ radians. Whether this circle concludes in days, hours, picoseconds, or a slice of it represents the angle a force is applied to some lever, is immaterial. $\omega = 2 \pi f$, with $f$ expressed in hertz, denotes a rate: a rate of going around a circle in the time span of a second. $y = \cos(2 \pi 1 t)$, where $t$ is in seconds, would have concluded 1 circle, composed of $2 \pi$ radians, by the time $t$ ticks to 1. To make it conclude the circle faster, we multiply the "passing of time" (denoted by $t$) by some number $f$. Therefore, a 1 kHz tone is $2 \pi 1000$ radians per second. Hope this helps. $1$ kHz denotes the frequency, i.e. the inverse of the period of the signal. You have $T=0.001$ seconds and, as the period of the sinusoid is $2\pi$, $$2\pi\cdot1000\cdot T=2\pi.$$ When the angle $\theta$ of the trigonometric function $\sin(\theta)$ spans a $2\pi$ range, it makes one revolution, and to make $f_0$ revolutions in one second (i.e., $f_0$ Hz), the angle should span a $2\pi f_0$ range for $t \in [0,1]$, whose mathematical expression will be: $$ x(t) = \sin( \omega_0 t) = \sin( 2 \pi f_0 t) .$$ With your particular example $f_0 = 1000$ Hz (1 kHz), then you have: $$ x(t) = \sin( \omega_0 t) = \sin( 2 \pi (1000) t) .$$ Note that, for simplicity, the relation between the angular frequency $\omega$ in radians per second and the frequency $f$ in hertz is: $$ \boxed{ \omega = 2 \pi f} $$
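To see the convention concretely, a quick NumPy check that $\sin(2\pi\,1000\,t)$ really peaks at 1000 Hz (the sample rate and duration are arbitrary choices):

```python
import numpy as np

fs = 48000                        # sample rate, Hz
t = np.arange(fs) / fs            # one second of time samples
x = np.sin(2 * np.pi * 1000 * t)  # the 1 kHz convention: sin(2*pi*1000*t)

spectrum = np.abs(np.fft.rfft(x))
peak_hz = np.argmax(spectrum) * fs / len(x)   # bin spacing is fs/len(x) = 1 Hz
```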
Problem 616 Suppose that $p$ is a prime number greater than $3$. Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$. (a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$. (b) Determine the index $[G : S]$. (c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$. Problem 613 Let $m$ and $n$ be positive integers such that $m \mid n$. (a) Prove that the map $\phi:\Zmod{n} \to \Zmod{m}$ sending $a+n\Z$ to $a+m\Z$ for any $a\in \Z$ is well-defined. (b) Prove that $\phi$ is a group homomorphism. (c) Prove that $\phi$ is surjective. (d) Determine the group structure of the kernel of $\phi$. Problem 612 Let $C[-2\pi, 2\pi]$ be the vector space of all real-valued continuous functions defined on the interval $[-2\pi, 2\pi]$. Consider the subspace $W=\Span\{\sin^2(x), \cos^2(x)\}$ spanned by functions $\sin^2(x)$ and $\cos^2(x)$. (a) Prove that the set $B=\{\sin^2(x), \cos^2(x)\}$ is a basis for $W$. (b) Prove that the set $\{\sin^2(x)-\cos^2(x), 1\}$ is a basis for $W$. Problem 611 An $n\times n$ matrix $A$ is called orthogonal if $A^{\trans}A=I$. Let $V$ be the vector space of all real $2\times 2$ matrices. Consider the subset \[W:=\{A\in V \mid \text{$A$ is an orthogonal matrix}\}.\] Prove or disprove that $W$ is a subspace of $V$. Problem 607 Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less. Let \[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\] where \begin{align*} p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\ p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3. \end{align*} (a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$. (b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$. (The Ohio State University, Linear Algebra Midterm) Problem 606 Let $V$ be a vector space and $B$ be a basis for $V$. 
Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$. Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$. After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form \[\begin{bmatrix} 1 & 0 & 2 & 1 & 0 \\ 0 & 1 & 3 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}.\] (a) What is the dimension of $V$? (b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$? (The Ohio State University, Linear Algebra Midterm) Problem 605 Let $T:\R^2 \to \R^3$ be a linear transformation such that \[T\left(\, \begin{bmatrix} 3 \\ 2 \end{bmatrix} \,\right) =\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \text{ and } T\left(\, \begin{bmatrix} 4\\ 3 \end{bmatrix} \,\right) =\begin{bmatrix} 0 \\ -5 \\ 1 \end{bmatrix}.\] (a) Find the matrix representation of $T$ (with respect to the standard basis for $\R^2$). (b) Determine the rank and nullity of $T$. (The Ohio State University, Linear Algebra Midterm) Problem 604 Let \[A=\begin{bmatrix} 1 & -1 & 0 & 0 \\ 0 &1 & 1 & 1 \\ 1 & -1 & 0 & 0 \\ 0 & 2 & 2 & 2\\ 0 & 0 & 0 & 0 \end{bmatrix}.\] (a) Find a basis for the null space $\calN(A)$. (b) Find a basis of the range $\calR(A)$. (c) Find a basis of the row space for $A$. (The Ohio State University, Linear Algebra Midterm) Problem 603 Let $C[-2\pi, 2\pi]$ be the vector space of all continuous functions defined on the interval $[-2\pi, 2\pi]$. Consider the functions \[f(x)=\sin^2(x) \text{ and } g(x)=\cos^2(x)\] in $C[-2\pi, 2\pi]$. Prove or disprove that the functions $f(x)$ and $g(x)$ are linearly independent. 
(The Ohio State University, Linear Algebra Midterm) Problem 601 Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let \[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix} a & b\\ c& -a \end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\] (a) Show that $W$ is a subspace of $V$. (b) Find a basis of $W$. (c) Find the dimension of $W$. (The Ohio State University, Linear Algebra Midterm)
Wondering if there are any documents, theories, or methodologies for dealing with mutable memory mathematically. Basically a formal algebraic model of how computers manipulate memory. Along the lines of, say I want to model an add operation at a low level, and it writes its output to a register. Then we can model the computer state as a memory, and: $$add : a \times b \times m \to m'$$ the function $add$ transforms the memory into a different state, holding the final value. Then the second time we call add, the memory is different than it was before. So it's always changing: $$m \neq m' \neq m'' \neq \dots \neq m^{(n)}$$ Wondering if there are any formalisms out there that deal with this. I would specifically like to see it applied to abstract algebra or category theory. Whereas in abstract algebra typically, everything is immutable and you never worry about "state". I would like to explore the state / mutable memory from a math framework.
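For instance, the state-passing view sketched above can be written directly as pure functions, which is essentially what the state-monad formalism packages up (the names mem and dst below are illustrative only):

```python
def add(a, b, mem, dst):
    """Model 'add' as a pure state transformer: memory in, new memory out."""
    mem_next = dict(mem)      # never mutate: copy the state, then bind the result
    mem_next[dst] = a + b
    return mem_next

m0 = {"r0": 0}
m1 = add(1, 2, m0, "r0")         # m'  : r0 = 3; m0 is untouched
m2 = add(m1["r0"], 4, m1, "r1")  # m'' : r0 = 3, r1 = 7
```

Each call yields a fresh state, so the chain m ≠ m′ ≠ m″ from the question is literal: every intermediate memory still exists and can be inspected.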
Supposing we have a boolean function $f:\{0,1\}^n\rightarrow\{0,1\}$. A real multivariate polynomial $p(x)$ such that $f(x)=p(x)$ on $x\in\{0,1\}^n$ can always be taken to be multilinear (since $x_i^2 = x_i$ on $\{0,1\}$). What are some interesting classes of boolean functions for which the minimal degree of $p(x)$ is known? Do we have concrete examples? Any function which has non-zero correlation with parity has degree $n$. That is, if $$\sum_{x \in \{0,1\}^n} (-1)^{\sum_i x_i}f(x) \neq 0$$ then the unique multilinear expansion of $f$ contains the monomial $x_1\cdots x_n$. Indeed, since $(-1)^{x_i} = 1-2x_i$, the Fourier expansion of $f$ (expressed in terms of products of $1-2x_i$) will contain the term $\prod_i (1-2x_i)$, and the corresponding monomial $\prod_i x_i$ doesn't appear in any other term. Nisan and Szegedy proved that functions of degree $d$ depend on at most $d2^d$ variables. For $d = 1$ we can be more exact: the function must depend on at most one coordinate. Classes of Boolean functions with a unique multilinear representation include: pseudo-Boolean functions over the reals (Theorem 1.34 [1]); Boolean functions over the unit cube $[0,1]^n$. Background: "Every Boolean function can be represented by a disjunctive normal form and by a conjunctive normal form" (Theorem 1.4, p. 16 [1]), so every disjunctive normal form (DNF) of the form $\vee (\wedge {x} \wedge \bar{x} )$ can be written as $\vee (\prod x \prod (1-x))$ and as $\sum c \prod x$; see further definitions on page 18 of the book, such as the subscripts. You can represent every Boolean function $F$ on $\mathcal B^n$ in terms of the powerset $\mathcal P(N)$ and direct sum $\oplus$ such that $f(x_1,\ldots,x_n)=\oplus_{A\in\mathcal P(N)} c(A)\prod_{i\in A} x_i$ (Theorem 1.33). 
Their applications include: game theory, on p. 579 [1], with focus on multilinear polynomial extensions with the structure $[0,1]^n$; reliability, where Boolean functions are characterised in terms of minimal pathsets and cutsets, some information on p. 58 [1], with the structure $[0,1]^n$; (Fourier analysis) lower bounds for polynomials computing Boolean functions. References [1] Boolean Functions: Theory, Algorithms, and Applications (Yves Crama, Peter L. Hammer, 2011)
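As a quick check of the parity-correlation claim above, the unique multilinear expansion can be computed by Möbius inversion over subsets (a sketch; the function names are mine):

```python
def multilinear_coeffs(f, n):
    """Coefficients c(S) of the unique multilinear polynomial agreeing with f
    on {0,1}^n, via Mobius inversion: c(S) = sum_{T subset S} (-1)^{|S|-|T|} f(1_T)."""
    coeffs = {}
    for S in range(1 << n):
        c, T = 0, S
        while True:                      # enumerate all subsets T of S
            x = [(T >> i) & 1 for i in range(n)]
            c += (-1) ** (bin(S).count("1") - bin(T).count("1")) * f(x)
            if T == 0:
                break
            T = (T - 1) & S
        coeffs[S] = c
    return coeffs

def degree(f, n):
    """Degree of the unique multilinear representation of f."""
    return max((bin(S).count("1")
                for S, c in multilinear_coeffs(f, n).items() if c != 0),
               default=0)
```

For parity on three bits the top coefficient is 4 (from $x_1+x_2+x_3-2\sum x_ix_j+4x_1x_2x_3$), so the degree is 3, while a dictator function has degree 1, matching the $d=1$ remark above.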
The refinement order on partitions of an integer $n$ can be defined as follows: $\lambda=(\lambda_1,\dots,\lambda_k)\leq\mu=(\mu_1,\dots,\mu_\ell)$ if there is a partition of the parts of $\lambda$ into blocks whose sums are the parts of $\mu$. It is known that the problem of deciding whether $\lambda\leq\mu$ is NP-complete. However, for a practical application I would need an algorithm which performs reasonably when $n$ is around $200$. Ideally, this algorithm would already have a (free) implementation... I tried the following naive approach: find the last index $j$ such that $\mu_j>\lambda_1$; if $\mu_{j+1} = \lambda_1$, remove $\mu_{j+1}$ from $\mu$ and $\lambda_1$ from $\lambda$ and recurse; otherwise, for $j\in\{j,j-1,\dots,1\}$: subtract $\lambda_1$ from $\mu_j$ and reorder to obtain a new partition, and recurse with this partition and the rest of $\lambda$. Although this seems to work reasonably well for many pairs $(\lambda,\mu)$ of partitions of size around $200$, it performs poorly when $\lambda$ has many small parts but is not a refinement of $\mu$.
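For comparison, a memoized backtracking sketch in Python (not the asker's algorithm): it places the parts of λ, largest first, into the remaining capacities of the parts of μ, caching canonicalized states and skipping symmetric choices:

```python
from functools import lru_cache

def is_refinement(lam, mu):
    """Decide whether the parts of lam can be grouped into blocks whose
    sums are exactly the parts of mu."""
    lam = tuple(sorted(lam, reverse=True))
    if sum(lam) != sum(mu):
        return False

    @lru_cache(maxsize=None)
    def place(i, caps):
        # caps: remaining capacity of each part of mu, sorted descending
        if i == len(lam):
            return all(c == 0 for c in caps)
        tried = set()
        for j, c in enumerate(caps):
            if c >= lam[i] and c not in tried:   # equal capacities are symmetric
                tried.add(c)
                nxt = caps[:j] + (c - lam[i],) + caps[j + 1:]
                if place(i + 1, tuple(sorted(nxt, reverse=True))):
                    return True
        return False

    return place(0, tuple(sorted(mu, reverse=True)))
```

The caching of canonical (sorted) capacity tuples is meant to help exactly in the hard case mentioned above, where λ has many small parts and the search would otherwise revisit equivalent states; it is still worst-case exponential, as the NP-completeness result demands.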
If $x(t)$ is an action signal like voltage or current, then the square of $x(t)$ is proportional to instantaneous power, and the constant of proportionality depends on what kind of animal $x(t)$ is and what load $x(t)$ is connected to. So if $\big|x(t)\big|^2$ is proportional to instantaneous power, so also is the integral of $\big|x(t)\big|^2$ over all time proportional to energy (and having the same constant of proportionality). So you can call this "energy": $$E_x = \int\limits_{-\infty}^{\infty}\big|x(t)\big|^2 \, dt$$ as long as you remember that it's really proportional to energy and the constant of proportionality depends on what dimension $x(t)$ has and what load that signal is connected to. For example, if $v(t)$ is a voltage, then $|v(t)|^2$ is proportional to instantaneous power and if $v(t)$ is connected to a resistor having resistance $R$, then instantaneous power is $$ p(t) = \frac{1}{R}\big|v(t)\big|^2 $$ and the total energy is $$E_v=\int\limits_{-\infty}^{\infty}p(t) \, dt$$ or $$E_v=\frac{1}{R}\int\limits_{-\infty}^{\infty}\big|v(t)\big|^2 \, dt$$ Clearly the constant of proportionality is $\frac{1}{R}$. If $x(t)$ is a dimensionless value in a DSP or computer, then you have to scale it with the reference voltage $V_\mathrm{ref}$ of the D/A converter to make this number a voltage that delivers power to a load. Then the constant of proportionality becomes $\frac{V^2_\mathrm{ref}}{R}$ and the energy really is (with correct scaling): $$E_x = \frac{V^2_\mathrm{ref}}{R} \int\limits_{-\infty}^{\infty}\big|x(t)\big|^2 \, dt$$
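As a numeric illustration of the constant of proportionality $\frac{1}{R}$ (the signal and load values below are arbitrary): a 2 V pulse lasting 1 s into 8 Ω dissipates $\frac{2^2}{8}\cdot 1 = 0.5$ J.

```python
import numpy as np

R = 8.0                       # load resistance, ohms
fs = 1000                     # sample rate, Hz
dt = 1.0 / fs
v = np.zeros(2000)            # two seconds of signal
v[:1000] = 2.0                # a 2 V pulse lasting exactly 1 s

p_inst = v ** 2 / R           # instantaneous power p(t) = |v(t)|^2 / R, watts
energy = np.sum(p_inst) * dt  # Riemann sum of p(t) dt: expect (4/8) * 1 = 0.5 J
```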
Let $k$ be a field. Let $\mathcal{C},\mathcal{D}$ be finitely cocomplete $k$-linear categories which are essentially small. Then Kelly's tensor product $\mathcal{C} \boxtimes \mathcal{D}$ is a finitely cocomplete $k$-linear category together with a universal functor from $\mathcal{C} \times \mathcal{D}$ which is right exact and $k$-linear in each variable (also denoted by $(A,B) \mapsto A \boxtimes B$). For an overview of this construction, see section 2.3 in Schäppi's paper on ind-abelian categories. Roughly, it is constructed as a full subcategory of the category $L$ of $k$-linear functors $F : (\mathcal{C} \otimes_k \mathcal{D})^{op} \to \mathsf{Vect}_k$ with the property that for every exact sequence $A'' \to A' \to A \to 0$ in $\mathcal{C}$ and all $B \in \mathcal{D}$ the sequence $0 \to F(A,B) \to F(A',B) \to F(A'',B)$ is exact, and similarly for the other variable. Representable functors lie in $L$. The crucial step is to observe that $L$ is an orthogonal class in the category of all $k$-linear functors, hence reflective. In particular, it is cocomplete. Now $\mathcal{C} \boxtimes \mathcal{D}$ is the closure of the representable functors under finite colimits taken in $L$. In my research I need a more explicit description of the objects in $\mathcal{C} \boxtimes \mathcal{D}$. Unfortunately, the reflector is dreadful and isn't useful at all. First of all, is it true that every object $M$ in $\mathcal{C} \boxtimes \mathcal{D}$ can be written as a cokernel of a map of the form $\oplus_j (A'_j \boxtimes B'_j) \to \oplus_i (A_i \boxtimes B_i)$? The answer is yes when $\mathcal{C},\mathcal{D}$ are ind-abelian (Lemma 6.5 in Schäppi's paper). Secondly, and more important for me: Given that we know the objects, what are the morphisms? If $A \in \mathcal{C}, B \in \mathcal{D}$, how can we describe $\hom(A \boxtimes B,M)$ explicitly in terms of such a presentation of $M$?
Background: I have searched a bit for the definition/constructions on how to "semi-localize" a scheme, but have been unsuccessful in finding a good reference; I apologize in advance if this topic has been covered in detail elsewhere (e.g. in a book or article) and would be happy for a reference! This question arose from a problem I had been working on in finding an étale morphism into affine space. Much of the terminology here will be from EGA I. (Aside: The constructions below take place in the Zariski site but I think some of them go through in the étale/Nisnevich site) Definitions: A local scheme is the spectrum of a local ring and a semi-local scheme the spectrum of a semilocal ring. In the constructions below a ring $O_{X,C}$ is given, so then the candidate semi-local scheme is $Spec \; O_{X,C}$. NB: for some reason I was having trouble with "\varinjlim" here, so I'm using "lim" below to mean direct limit i.e. colimit. Question: Given a scheme $X$ we can localize $X$ at a point $x\in X$ by taking $$O_{X,x} := \lim_{U\ni x} O_X(U). $$ Suppose now that we are given a finite set of closed points $x_1,\ldots, x_n \in X$. Let $C := \{x_1,\ldots, x_n\}$. How can we 'localize' $X$ around $C$? There are at least three ways I know how to do this procedure and would be happy to hear about other methods as well as comments (especially geometric ones) regarding the following constructions: $1.$ Define $$O_{X,C} : = \lim_{U\supset C} O_X(U).$$ This construction is similar to the localization construction above in that we take opens $U$ of $X$ containing $C$ and then take the direct limit; the case $n=1, C = \{x_1\}$ is then a special case. NB: we can 'see' this direct limit in the sense that for each $x_i$ we find an open $U_i\ni x_i$, then taking the (finite!) union of the $U_i$ we obtain an open $U$ containing $C$. Just as in the local case above, this direct limit is filtered by inclusion. $2.$ Further assume now that $X$ is locally noetherian and regular. 
Let $A_i := O_{X,x_i}$ and then define $$O_{X,C}:= \prod_i A_i .$$ Using the hypothesis that $X$ is regular, we can argue that the maximal ideals here correspond to the $x_i$: the maximal ideals in $\prod_i A_i$ are generated by elements of the form $(1,1,\ldots, b_{ij},1,\ldots, 1)$ where the $b_{ij}$ generate $x_i$ (here is where we are using the two added hypotheses), i.e. that $(b_{ij})_{1\leq j\leq n_i} = m_i$ where $m_i$ is the maximal ideal corresponding to $x_i$ and $n_i = \dim O_{X,x_i}$. This construction is more ad hoc (I think) than 1. Moreover, the geometry here is slightly more explicit in that this $Spec \; O_{X,C}$ is a finite disjoint union of local schemes, whereas in case 1, the topology is less disjoint when looking at neighborhoods of the $x_i$. $3.$ With $X$ any scheme (no additional hypotheses as in 2), let $F_i := O_{X,x_i}/m_i$ where $m_i$ is the maximal ideal corresponding to the closed point $x_i$. Define: $$O_{X,C}: = \prod_i F_i .$$ This construction is the most disjoint of the three in that the spectrum is now a finite coproduct of ''points''. Closing remarks: Presently, for me the most useful of the three is 1, and I would appreciate feedback on where the process of semi-localization has been defined. A professor that I admire very much once said (during a lecture) "from now on and for the rest of your life, every time you see something in commutative algebra, try to relate it to geometry, and vice versa" (I'm paraphrasing).
Neighbourhood density function Computes the neighbourhood density function, a local version of the \(K\)-function or \(L\)-function, defined by Getis and Franklin (1987). Usage localK(X, ..., rmax = NULL, correction = "Ripley", verbose = TRUE, rvalue=NULL) localL(X, ..., rmax = NULL, correction = "Ripley", verbose = TRUE, rvalue=NULL) Arguments X A point pattern (object of class "ppp"). … Ignored. rmax Optional. Maximum desired value of the argument \(r\). correction String specifying the edge correction to be applied. Options are "none", "translate", "translation", "Ripley", "isotropic" or "best". Only one correction may be specified. verbose Logical flag indicating whether to print progress reports during the calculation. rvalue Optional. A single value of the distance argument \(r\) at which the function L or K should be computed. Details The command localL computes the neighbourhood density function, a local version of the \(L\)-function (Besag's transformation of Ripley's \(K\)-function) that was proposed by Getis and Franklin (1987). The command localK computes the corresponding local analogue of the K-function. Given a spatial point pattern X, the neighbourhood density function \(L_i(r)\) associated with the \(i\)th point in X is computed by $$ L_i(r) = \sqrt{\frac a {(n-1) \pi} \sum_j e_{ij}} $$ where the sum is over all points \(j \neq i\) that lie within a distance \(r\) of the \(i\)th point, \(a\) is the area of the observation window, \(n\) is the number of points in X, and \(e_{ij}\) is an edge correction term (as described in Kest). The value of \(L_i(r)\) can also be interpreted as one of the summands that contributes to the global estimate of the L function. By default, the function \(L_i(r)\) or \(K_i(r)\) is computed for a range of \(r\) values for each point \(i\). The results are stored as a function value table (object of class "fv") with a column of the table containing the function estimates for each point of the pattern X. 
Alternatively, if the argument rvalue is given, and it is a single number, then the function will only be computed for this value of \(r\), and the results will be returned as a numeric vector, with one entry of the vector for each point of the pattern X. Inhomogeneous counterparts of localK and localL are computed by localKinhom and localLinhom. Value If rvalue is given, the result is a numeric vector of length equal to the number of points in the point pattern. Otherwise, the result is a function value table containing a column r (the vector of values of the argument \(r\) at which the function \(K\) has been estimated) and a column theo (the theoretical value \(K(r) = \pi r^2\) or \(L(r)=r\) for a stationary Poisson process). References Getis, A. and Franklin, J. (1987) Second-order neighbourhood analysis of mapped point patterns. Ecology 68, 473--477. See Also Aliases localK localL Examples
# NOT RUN {
data(ponderosa)
X <- ponderosa
# compute all the local L functions
L <- localL(X)
# plot all the local L functions against r
plot(L, main="local L functions for ponderosa", legend=FALSE)
# plot only the local L function for point number 7
plot(L, iso007 ~ r)
# compute the values of L(r) for r = 12 metres
L12 <- localL(X, rvalue=12)
# Spatially interpolate the values of L12
# Compare Figure 5(b) of Getis and Franklin (1987)
X12 <- X %mark% L12
Z <- Smooth(X12, sigma=5, dimyx=128)
plot(Z, col=topo.colors(128), main="smoothed neighbourhood density")
contour(Z, add=TRUE)
points(X, pch=16, cex=0.5)
# }
Documentation reproduced from package spatstat, version 1.60-1, License: GPL (>= 2)
What is meant by a complete description of a stochastic process? Well, mathematically, a stochastic process is a collection $\{X(t) : t \in {\mathbb T}\}$ of random variables, one for each time instant $t$ in an index set $\mathbb T$, where usually $\mathbb T$ is the entire real line or the positive real line, and a complete description means that for each ... The only difference between cross-correlation and convolution is a time reversal on one of the inputs. Discrete convolution and cross-correlation are defined as follows (for real signals; I neglected the conjugates needed when the signals are complex): $$x[n] * h[n] = \sum_{k=0}^{\infty}h[k] x[n-k]$$ $$corr(x[n],h[n]) = \sum_{k=0}^{\infty}h[k] x[n+k]$$ ... The idea of autocorrelation is to provide a measure of similarity between a signal and itself at a given lag. There are several ways to approach it, but for the purposes of pitch/tempo detection, you can think of it as a search procedure. In other words, you step through the signal sample-by-sample and perform a correlation between your reference window ... pichenettes is right, of course. The FFT implements a circular convolution while xcorr() is based on a linear convolution. In addition you need to square the absolute value in the frequency domain as well. Here is a code snippet that handles all the zero padding, shifting & truncating.
%% Cross correlation through a FFT
n = 1024;
x = randn(n,1);
% ...
I can recommend you two books about DSP for the C language. Embree P. M. - C Language Algorithms for Digital Signal Processing. It is old and you can easily get it second-hand for a decent price. It covers pretty much all 4 topics that you described. The other one I recommend is: Malepati H. - Digital Media Processing: DSP Algorithms Using C. It covers ... 
For continuous convolution $$[Hf](x) \equiv f(x) * h(x) \equiv \int\mathrm{d}x' h(x-x')f(x')$$and continuous cross-correlation $$[Gf](x) \equiv f(x) \star h(x) \equiv \int \mathrm{d}x'h^*(x'-x)f(x')$$It's easy to show that the cross-correlation operator $G$ is the adjoint operator of the convolution operator $H$. Also, the convolution operation is ... According to your definition of autocorrelation, the autocorrelation is simply the covariance of the two random variables $Z(n)$ and $Z(n+\tau)$. This function is also called autocovariance. As an aside, in signal processing, the autocorrelation is usually defined as$$R_{XX}(t_1,t_2)=E\{X(t_1)X^*(t_2)\}$$i.e., without subtracting the mean. The ... Are you looking for a formal proof or the intuition behind this? In the latter case: "Nothing can be more similar to a function than itself". Autocorrelation at lag $\tau$ measures the similarity between a function $f$ and the same function shifted by $\tau$. Note that if $f$ is periodic, $f$ shifted by any integer multiple of $\tau$ and $f$ coincide, so the ... I've never seen the word "Formula" with "AMDF". My understanding of the definition of AMDF is$$ Q_x[k,n_0] \triangleq \frac{1}{N} \sum\limits_{n=0}^{N-1} \Big| x[n+n_0] - x[n+n_0+k] \Big| $$$n_0$ is the neighborhood of interest in $x[n]$. Note that you are summing up only non-negative terms. So $Q_x[k,n_0] \ge 0$. We call "$k$" the "lag". Clearly, if ... You are right that the repetition is around 650, but how exactly do I compute that automatically? Seems like a peak-picking problem to me? Or are there some other methods that can be used? Yes, it's just peak-picking. Your period is the x value of the first strong peak: Your peaks are all similar in height, probably because you're doing the autocorrelation ... Autocorrelation is not about finding the distance between individual peaks.
It is more about finding those lag distances that minimize the averaged squared delta between everything, all the peaks, all the valleys, all the flat spots, all in combination, etc. Because of this averaging over the entire window, the lag distance may not correspond to the ... The autocorrelation matrix is diagonalized by sinusoids when the process is stationary; this follows from the fact that the covariance operator is a convolution for a stationary process. A more rigorous proof is that$$f(t,s)=Cov(X(t),X(s))=Cov(X(t-u),X(s-u))=f(t-u,s-u)$$ which in particular means that $f(t,s)=f(t-s,0)$ which is also a positive ... You can think of linear least squares in a single dimension. The cost function is something like $a^{2}$. The first derivative (Jacobian) is then $2a$, hence linear in $a$. The second derivative (Hessian) is $2$, a constant. Since the second derivative is positive, you are dealing with a convex cost function. This is equivalent to a positive definite Hessian ... For starters, autocorrelation is a function of the relative time only for WSS processes, otherwise it depends on the absolute times: $\mathrm R_X(t_1,t_2) \equiv \mathbb E[X(t_1)^* X(t_2)]$ Secondly, it is wrong to say "time is just inverse frequency" because frequency is a characteristic of periodic processes. The autocorrelation is not generally a ... Suppose you have signals $x(t)$ and $y(t)$ whose cross-correlation function $R_{x,y}(t)$ is not something you like; you want $R_{x,y}$ to be impulse-like. Note that in the frequency domain,$$\mathcal{F}[R_{x,y}] = S_{x,y}(f) = X(f)Y^*(f).$$So you filter the signals through linear filters $g$ and $h$ respectively to get $\hat{x}(t) = x*g$, $\hat{X}(f) = ... Pre-whitening can be done by filtering with a transfer function that is roughly the inverse of the power spectrum of the signal. Let's say you have an audio signal that's roughly pink.
In order to whiten that, you would apply an inverse pink filter (frequency response rises by 3 dB per octave). However, I'm not sure whether this will help with your issue. ... The autocorrelation function of an aperiodic discrete-time finite-energy signal is given by$$R_x[n] = \sum_{m=-\infty}^{\infty}x[m]x[m-n]~~~~ \text{or}~~~R_x[n] = \sum_{m=-\infty}^{\infty}x[m](x[m-n])^*$$for real signals and complex signals respectively. Restricting ourselves to real signals for ease of exposition, let us consider the summand $x[m]x[m-n]$... I can tell you of at least three applications related to audio. Auto-correlation can be used over a changing block (a collection of many audio samples) to find the pitch. Very useful for musical and speech related applications. Cross-correlation is used all the time in hearing research as a model for what the left ear and the right ear use to figure ... I'd recommend Introduction to Signal Processing by S.J. Orfanidis. It's a great book with a good mix of theory and practice, and it also has code examples in C and Matlab. Once you've worked through it you'll know enough to carry on by yourself. Cons: Not as accurate. This is just compared to the other methods. I was measuring frequency very accurately to look for clock drift, etc: 1000.000004 Hz for 1000 Hz, for instance. For guitar pitch detection it will be fine. Doesn't work for inharmonic things like musical instruments. I should have said "it can't find an accurate fundamental if there is ... Let $\theta_a$ and $\theta_c$ respectively denote the maximum magnitudes of the off-peak or out-of-phase periodic autocorrelation functions and the periodic cross-correlation functions of a set of $K$ sequences of length $N$ and energy $\sum_{n=0}^{N-1}|x[n]|^2 = N$. In a seminal paper published in 1974, Welch proved that$$\max\big(\theta_a, \theta_c\big)\... No.
Quoting Wikipedia's article Independence (probability theory): If $X$ and $Y$ are independent random variables, then the expectation operator $\operatorname{E}$ has the property$$\operatorname{E}[X Y] = \operatorname{E}[X]\operatorname{E}[Y].$$Consider your $X(t_1)$ and $Y(t_2)$ as $X$ and $Y$ in this answer. If both $\operatorname{E}[X] \ne ... Radians are considered to be dimensionless. See Are angles dimensionless? and Dimensionless quantity. They are considered to be pure numbers like pi. So $\alpha$ is in Hz, which is a measure of 1/second, and $s$ is also considered to be measured per second. One would expect such a sequence to have a spectrum consisting of lines, as it is almost periodic (if it were periodic, it would have a Fourier series representation, even though it is not sinusoidal). As a quick example:
load raw1.mat
% calculate "unbiased" normalized cross-correlation; adjusted for
% regions where there isn't full overlap
corr = xcorr(... A synchronization sequence generally needs the property that its autocorrelation function resembles an impulse. There are two possible autocorrelation functions that can be considered. For a (real-valued) sequence $x$ of length $N$, the periodic autocorrelation function is$$R_x[n] = \sum_{k=0}^{N-1}x[k]x[k+n]$$where the sequence is assumed to extend ... The definition of the autocorrelation function $R_x(\tau)$ depends on the nature of your $x$. If $x$ is a deterministic signal with finite energy then: $$R_x(\tau)=\int_{-\infty}^{+\infty}x(t)x^*(t-\tau)dt$$If $x$ is a deterministic signal with finite average power$^{(1)}$ then: $$R_x(\tau)=\lim_{T\to+\infty}\frac{1}{T}\int_{-T/2}^{+T/2}x(t)x^*(t-\tau)dt$$... Let's look at the case $x[n] \in \mathbb{R}$, where $x[n]$ is real. Autocorrelation is basically convolution of the signal with its time inverse.
This can be easily expressed in the frequency domain.$$ \mathscr{F}\Big\{ r_{xx}[n] \Big\} = \mathscr{F}\Big\{ x[n] \Big\} \cdot \mathscr{F}\Big\{ x[-n] \Big\} $$$$R_{xx}(\omega) = X(\omega)\cdot X^*(\... Some "gut-level" reasons why it is better to work with the autocorrelation matrix instead of a matrix with your observations: If you want to take into account all your observations and you have a lot of data, you'll end up manipulating (inverting, multiplying) fairly large matrices. If you work with the autocorrelation matrix, you "summarize" your data once ...
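The frequency-domain identity quoted above (the discrete Wiener-Khinchin relation) can be verified numerically. A minimal NumPy sketch of my own, using the circular autocorrelation since the DFT is periodic:

```python
import numpy as np

# Verify F{r_xx} = |X(w)|^2 for a real signal: the circular autocorrelation
# equals the inverse FFT of the squared magnitude spectrum.
rng = np.random.default_rng(1)
x = rng.standard_normal(256)

# Circular autocorrelation computed directly in the time domain.
r_direct = np.array([np.dot(x, np.roll(x, -k)) for k in range(len(x))])

# Same quantity via the FFT.
X = np.fft.fft(x)
r_fft = np.real(np.fft.ifft(np.abs(X) ** 2))

assert np.allclose(r_direct, r_fft)
```

This also illustrates the earlier remark that the FFT route implements a circular (not linear) correlation unless zero padding is applied.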
Current results are able to yield such a result, depending on how generous one is regarding what $X$ is: if one does not insist that the optimal value of $X$ be calculated exactly, this will work for many more $k$, and it works for all $k$ if one is happy with an explicit bound. For example Dusart showed that $$ \frac{x}{\log x - 1} \le \pi(x) \le \frac{x}{\log x - 1.1} $$for $x\ge 60184$. Now for some $k$, write $y=kx$. Then, if the upper bound for $kx=y$ is smaller than the lower bound for $(k+1)x = (1+1/k)y$, that is$$\frac{y}{\log y - 1.1} \lt \frac{y(1+ 1/k)}{\log( y (1+1/k) )- 1}$$one has a prime between $kx$ and $(k+1)x$, since then $\pi(kx) \lt \pi((k+1)x)$. One can check that this inequality holds for (up to potential error in my calculation)$$y \ge 10 e^{0.1 k}.$$ So, for $x \ge \max \lbrace 10 e^{0.1 k}/k , 60184/k \rbrace $ one always has a prime between $kx$ and $(k+1)x$. While this grows exponentially in $k$, the growth is such that it is quite feasible to check 'everything' up to the bound to get an optimal $X$ for not too large $k$. And, one always has an explicit value. This proof is of course not elementary (the non-elementariness being hidden in Dusart's result) and is an application of the PNT in some sense. But what this is meant to show is that, for a result of this kind to be interesting, it would either have to be better than this (and one could still optimize the argument here), or the proof would have to be interesting (or both). [What an interesting proof is is of course a bit subjective.]
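The displayed inequality and the claimed threshold are easy to spot-check numerically. The sketch below is my own illustration of the argument (the particular values of $k$ and the sieve check are invented for the test, not taken from the answer):

```python
import math

def gap_condition(k: int, y: float) -> bool:
    # The sufficient condition from the argument above:
    # y/(log y - 1.1) < y*(1 + 1/k) / (log(y*(1 + 1/k)) - 1)
    lhs = y / (math.log(y) - 1.1)
    rhs = y * (1 + 1 / k) / (math.log(y * (1 + 1 / k)) - 1)
    return lhs < rhs

# Spot-check the claimed threshold y >= 10*exp(0.1*k) for a few k.
for k in (1, 5, 10, 20, 30):
    y0 = 10 * math.exp(0.1 * k)
    assert gap_condition(k, y0)
    assert gap_condition(k, 2 * y0)

# Confirm directly that a prime lies between k*x and (k+1)*x for one case
# past Dusart's threshold: k = 6, x = ceil(60184 / 6), via a small sieve.
k, x = 6, -(-60184 // 6)
limit = (k + 1) * x
sieve = bytearray([1]) * (limit + 1)
sieve[0:2] = b"\x00\x00"
for p in range(2, int(limit ** 0.5) + 1):
    if sieve[p]:
        sieve[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
assert any(sieve[m] for m in range(k * x + 1, (k + 1) * x))
```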
Semileptonic B-Meson decays at Belle II BELLE2-CONF-PROC-2018-013 Jo-Frederik Krohn 31 July 2018 Abstract: The Belle II experiment is the upgrade of the Belle experiment, performed at the SuperKEKB asymmetric electron-positron collider, located in Tsukuba, Japan. With a design instantaneous luminosity of $8\cdot 10^{35}\rm{cm}^{-2}\rm{s}^{-1}$, a dataset of $50\ \rm{ab}^{-1}$ will be collected. The clean collision environment of the experiment and the large dataset yield the ability to perform high-precision measurements of physics mediated by the weak force, such as semileptonic $B$-meson decays. Of special focus in this document are the magnitudes of the CKM matrix elements $\rm{V_{\rm{ub}}}$ and $\rm{V_{\rm{cb}}}$ measured in $B\to D^{(\star)} l \nu$ and $B\to \pi l \nu$, where $l = e,\mu$, as well as the ratios of the branching fractions, $R_{D^{(\star)}} := \mathcal{B} (B\to D^{(\star)} e \nu) / \mathcal{B} (B\to D^{(\star)} \mu \nu) $ and $R_{\tau} := \mathcal{B} (B\to D^{(\star)} \tau\nu) / \mathcal{B} (B\to D^{(\star)} l \nu) $, of lepton couplings in these decays. Some of these have shown persistent tension with the Standard Model and are therefore of primary interest towards a better understanding of the weak force carriers and potential new physics couplings.
Preprints (Rote Reihe) of the Department of Mathematics. Year of publication: 1996 (21 entries); language: English. 293 Tangent measure distributions were introduced by Bandt and Graf as a means to describe the local geometry of self-similar sets generated by iteration of contractive similitudes. In this paper we study the tangent measure distributions of hyperbolic Cantor sets generated by contractive mappings, which are not similitudes. We show that the tangent measure distributions of these sets equipped with either Hausdorff or Gibbs measure are unique almost everywhere and give an explicit formula describing them as probability distributions on the set of limit models of Bedford and Fisher. 274 This paper investigates the convergence of the Lanczos method for computing the smallest eigenpair of a self-adjoint elliptic differential operator via inverse iteration (without shifts). Superlinear convergence rates are established, and their sharpness is investigated for a simple model problem. These results are illustrated numerically for a more difficult problem. 280 This paper develops truncated Newton methods as an appropriate tool for nonlinear inverse problems which are ill-posed in the sense of Hadamard. In each Newton step an approximate solution for the linearized problem is computed with the conjugate gradient method as an inner iteration. The conjugate gradient iteration is terminated when the residual has been reduced to a prescribed percentage. Under certain assumptions on the nonlinear operator it is shown that the algorithm converges and is stable if the discrepancy principle is used to terminate the outer iteration. These assumptions are fulfilled, e.g., for the inverse problem of identifying the diffusion coefficient in a parabolic differential equation from distributed data.
284 A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L}\) = \((L; \land, \lor)\) is generated by the identity function \(id(x)=x\) and the constant functions \(c_a (x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function in one or also in several variables is a monotone function of \(\mathcal{L}\). If every monotone function of \(\mathcal{L}\) is a polynomial function then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry. 271 The paper deals with parallel-machine and open-shop scheduling problems with preemptions and an arbitrary nondecreasing objective function. An approach to describe the solution region for these problems and to reduce them to minimization problems on polytopes is proposed. Properties of the solution regions for certain problems are investigated. It is proved that open-shop problems with unit processing times are equivalent to certain parallel-machine problems, where preemption is allowed at arbitrary times.
A polynomial algorithm is presented transforming a schedule of one type into a schedule of the other type. 282 Let \(a_1,\dots,a_m\) be independent, identically distributed random points in \(\mathbb{R}^n\) whose common distribution is spherically symmetric. Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\), whose projection images in \(L_k\) are vertices of \(X_k\) as well, shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\). 285 On derived varieties (1996) Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. Especially the operators \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart for solid varieties. Finally we use a property of the commutator of derived algebras in order to show that solvability and nilpotency are preserved under derivation. 277 A convergence rate is established for nonstationary iterated Tikhonov regularization, applied to ill-posed problems involving closed, densely defined linear operators, under general conditions on the iteration parameters.
It is also shown that an order-optimal accuracy is attained when a certain a posteriori stopping rule is used to determine the iteration number. 270 301 We extend the methods of geometric invariant theory to actions of non-reductive groups in the case of homomorphisms between decomposable sheaves whose automorphism groups are non-reductive. Given a linearization of the natural action of the group Aut(E)xAut(F) on Hom(E,F), a homomorphism is called stable if its orbit with respect to the unipotent radical is contained in the stable locus with respect to the natural reductive subgroup of the automorphism group. We encounter effective numerical conditions for a linearization such that the corresponding open set of semi-stable homomorphisms admits a good and projective quotient in the sense of geometric invariant theory, and that this quotient is in addition a geometric quotient on the set of stable homomorphisms. 279 It is shown that Tikhonov regularization for the ill-posed operator equation \(Kx = y\) using a possibly unbounded regularizing operator \(L\) yields an order-optimal algorithm with respect to a certain stability set when the regularization parameter is chosen according to Morozov's discrepancy principle. A more realistic error estimate is derived when the operators \(K\) and \(L\) are related to a Hilbert scale in a suitable manner. The result includes known error estimates for ordinary Tikhonov regularization and also the estimates available under the Hilbert scale approach. 275 283 A regularization Levenberg-Marquardt scheme, with applications to inverse groundwater filtration problems (1996) The first part of this paper studies a Levenberg-Marquardt scheme for nonlinear inverse problems where the corresponding Lagrange (or regularization) parameter is chosen from an inexact Newton strategy.
While the convergence analysis of standard implementations based on trust region strategies always requires the invertibility of the Fréchet derivative of the nonlinear operator at the exact solution, the new Levenberg-Marquardt scheme is suitable for ill-posed problems as long as the Taylor remainder is of second order in the interpolating metric between the range and domain topologies. Estimates of this type are established in the second part of the paper for ill-posed parameter identification problems arising in inverse groundwater hydrology. Both transient and steady-state data are investigated. Finally, the numerical performance of the new Levenberg-Marquardt scheme is studied and compared to a standard implementation on a realistic but synthetic 2D model problem from the engineering literature.
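To make the role of the Lagrange/regularization parameter concrete, here is a plain textbook Levenberg-Marquardt iteration on a toy one-parameter least-squares problem (fitting $y = e^{at}$). This is only my illustrative sketch with a simple accept/reject update of the parameter; it is not the inexact-Newton choice analyzed in the paper, and all names and values are invented:

```python
import numpy as np

# Textbook Levenberg-Marquardt: damped Gauss-Newton steps, with the damping
# parameter lam decreased on success and increased on failure.
def lm_fit(t, y, a0, lam=1e-2, iters=50):
    a = a0
    for _ in range(iters):
        r = y - np.exp(a * t)           # residual vector
        J = -t * np.exp(a * t)          # Jacobian dr/da (single column)
        g = J @ r                       # gradient of 0.5*||r||^2
        step = -g / (J @ J + lam)       # solve (J^T J + lam) da = -J^T r
        if np.sum((y - np.exp((a + step) * t)) ** 2) < np.sum(r ** 2):
            a += step                   # accept: relax toward Gauss-Newton
            lam *= 0.7
        else:
            lam *= 2.0                  # reject: move toward gradient descent
    return a

t = np.linspace(0.0, 1.0, 50)
y = np.exp(0.8 * t)
a_hat = lm_fit(t, y, a0=0.0)
assert abs(a_hat - 0.8) < 1e-4
```

Large lam makes the step a short gradient-descent move (stable but slow); small lam recovers the fast Gauss-Newton step, which is the trade-off the regularization parameter controls.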
It is not difficult to calculate upper bounds on $s(n)$ from bounds on the prime counting function $\pi(n)$. Just use integration by parts,$$s(n) = \int_0^n x\,d\pi(x) = n\pi(n) - \int_0^n\pi(x)\,dx.$$I'm not sure what the currently best known bounds for $\pi(x)$ are but, checking Wikipedia, gives$$\frac{x}{\log x}\left(1+\frac{1}{\log x}\right) < \pi(x) < \frac{x}{\log x}\left(1+\frac{1}{\log x}+\frac{2.51}{(\log x)^2}\right)$$with the left hand inequality holding for $x\ge599$ and the right hand holding for $x\ge355991$. So, $$s(n)\le \frac{n^2}{\log n}\left(1+\frac{1}{\log n}+\frac{2.51}{(\log n)^2}\right)-\int^n\left(1+\frac{1}{\log x}\right)\frac{x\,dx}{\log x}+c$$(where $c$ is a constant which you can compute if you feel so inclined). Applying integration by parts, $$s(n)\le\frac{n^2}{2\log n}\left(1+\frac{1}{\log n}+\frac{5.02}{(\log n)^2}\right)-\frac12\int^n\left(1+\frac{2}{\log x}\right)\frac{x\,dx}{(\log x)^2}+c$$ Bounding $\log x\le\log n$ in the integral gives a bound $$s(n)\le\frac{n^2}{2\log n}\left(1+\frac{1}{2\log n}+\frac{4.02}{(\log n)^2}\right)+c$$ You can also take $c=0$ if you only require the bound to hold for $n\ge N$ (some $N$), since the term I neglected in the integral by applying $\log x\le \log n$ grows without bound, and will eventually dominate any constant term. Obviously, if you know any better bounds for $\pi(n)$ then you will get improved bounds for $s(n)$. For example, the same Wikipedia article linked to above states that $\left\vert\pi(x)-{\rm Li}(x)\right\vert\le\frac{\sqrt{x}\log x}{8\pi}$ for $x\ge2657$ under the assumption that the Riemann hypothesis holds.
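The leading term $n^2/(2\log n)$ of the bound above can be sanity-checked against the actual sum of primes. A small sketch of my own (the choice $n = 10^6$ and the tolerance are mine, and only the leading-order comparison is tested, since the constant $c$ is not computed here):

```python
import math

def sum_of_primes(n: int) -> int:
    """s(n) = sum of all primes <= n, via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(i for i in range(2, n + 1) if sieve[i])

# Compare s(n) with the leading term n^2/(2 log n) from the bound above.
n = 10 ** 6
s = sum_of_primes(n)
leading = n * n / (2 * math.log(n))
ratio = s / leading
# s(n) slightly exceeds the leading term, as the correction terms predict.
assert 1.0 < ratio < 1.1
```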
In addition to @Nanoputian's excellent description of constructive and destructive interference in the formation of MOs, I want to provide a more mathematical explanation for why the phase of the wavefunction does not matter.

Finding the wavefunction

The time-independent Schrödinger equation, in one dimension, reads: $$\hat{H}\psi(x) = E\psi(x)$$ It can be shown that, if a wavefunction $\psi = \psi(x)$ satisfies the above equation, the wavefunction $k\psi$ (with $k \in \mathbb{C}$) also satisfies the above equation with the same energy eigenvalue $E$. This is because of the linearity of the Hamiltonian: $$\begin{align}\hat{H}(k\psi) &= k(\hat{H}\psi) \\&= k(E\psi) \\&= E(k\psi)\end{align}$$ There are several conditions that a wavefunction must satisfy for it to be physically realisable, i.e. for it to represent a "real" physical particle. In this discussion, the relevant condition is that the wavefunction must be square-integrable (or normalisable). In mathematical terms: $$\langle\psi\lvert\psi\rangle = \int_{-\infty}^{\infty}\!\lvert\psi\rvert^2\,\mathrm{d}x < \infty$$ This means that there has to exist a constant $N \in \mathbb{C}$ such that $N\psi$ is normalised: $$\int_{-\infty}^{\infty}\!\lvert N\psi\rvert^2\,\mathrm{d}x = \lvert N \rvert^2 \!\!\int_{-\infty}^{\infty}\!\lvert\psi\rvert^2\,\mathrm{d}x = 1$$ From this point onwards, we will assume that we have already found a suitable normalisation constant, so that the wavefunction $\psi$ is normalised. In other words, let's assume $\langle\psi\lvert\psi\rangle = 1$, because we can. Now let's consider the wavefunction $-\psi$, which is equivalent to $N\psi$ with $N = -1$. Is this new wavefunction normalised? $$\begin{align}\int_{-\infty}^{\infty}\!\lvert -\psi\rvert^2\,\mathrm{d}x &= \lvert -1 \rvert^2 \!\!\int_{-\infty}^{\infty}\!\lvert\psi\rvert^2\,\mathrm{d}x \\&= \int_{-\infty}^{\infty}\!\lvert\psi\rvert^2\,\mathrm{d}x \\&= 1\end{align}$$ Of course it is.
So, what I've written so far basically says: if $\psi$ is a normalised solution to the Schrödinger equation, so is $-\psi$. In fact, you could go one step further. Using exactly the same working as above, you could show that if $\psi$ is a normalised solution to the Schrödinger equation, the wavefunction $(a + ib)\psi$ would also be one, as long as $a^2 + b^2 = 1$. (If you like exponentials, that's equivalent to saying $a + ib = e^{i\theta}$.) I've illustrated this idea on this diagram: [diagram omitted: the same curve $\psi$ rotated out of the plane of the paper by various phase angles $\theta$] If $\psi$ is a real-valued, one-dimensional wavefunction, you could plot it on a graph against $x$. The wavefunction $i\psi$ would then be exactly the same shape, just coming out of the plane of the paper ($\theta = 90^\circ$). You could have the wavefunction $(1+i)\psi/\sqrt{2}$. It would be pointing outwards of the plane of the paper by $\theta = 45^\circ$, exactly halfway in between $\psi$ and $i\psi$, but exactly the same shape. However, physics doesn't know where the plane of your paper is, so all these wavefunctions are equally admissible. From the point of view of the system, they are all the same thing.

Using the wavefunction

"But wait! If the wavefunction is negative, what about the values of momentum, position, and energy that you calculate? Will they become negative?" "Good question, myself!" Well, for starters, one thing that you use the wavefunction for is to find the probability density, $P(x)$. According to Max Born's interpretation of the wavefunction, this is given by $P(x) = \lvert \psi \rvert ^2$. Let's say that the probability density described by the negative wavefunction $-\psi$ is a different function of $x$, called $Q(x)$: $$\begin{align}Q(x) = \lvert -\psi \rvert ^2 &= \lvert -1 \rvert^2 \lvert \psi \rvert ^2 \\&= \lvert \psi \rvert ^2 \\&= P(x)\end{align}$$ So, the probability density described by the negative wavefunction is exactly the same.
In fact, the probability density described by $i\psi$ is exactly the same as well. Now let's talk about observables, such as position $x$, momentum $p$, and energy $E$. Every observable has a corresponding operator: $\hat{x}$, $\hat{p}$, and $\hat{H}$ respectively (the Hamiltonian has a special letter because it's named after William Hamilton). You use these operators to calculate the mean value of the observable. I'll give an example regarding the momentum. If you want to find the mean momentum, denoted $\langle p \rangle$, you would do the following: $$\begin{align}\langle p \rangle &= \langle\psi\lvert\hat{p}\rvert\psi\rangle \\&= \int_{-\infty}^\infty\!\psi^*\hat{p}\psi\,\mathrm{d}x\end{align}$$ I'm going to call the value of that integral $p_1$. Now, let's do the same thing. Let's assume that the mean momentum for the negative wavefunction is not necessarily the same value. Let's call the new mean momentum something else, like $p_2$. Before we go on, I'm going to establish that the momentum operator $\hat{p} = -i\hbar\frac{\mathrm{d}}{\mathrm{d}x}$ is also linear. If you doubt it, you can test it out using the definition of linearity in the very first link I posted. In fact, all quantum mechanical operators corresponding to observables are linear. Therefore $\hat{p}(-\psi) = -\hat{p}\psi$ and so: $$\begin{align}p_2 &= \langle -\psi\lvert\hat{p}\lvert-\psi\rangle \\&= \int_{-\infty}^\infty\! (-\psi)^*\hat{p} (-\psi)\,\mathrm{d}x \\&= (-1)^2\!\!\int_{-\infty}^\infty\! \psi^*\hat{p}\psi\,\mathrm{d}x \\&= \int_{-\infty}^\infty\! 
\psi^*\hat{p}\psi\,\mathrm{d}x \\&= p_1\end{align}$$ So, if we talk about the ground state of the particle in a box of length $L$, no matter whether you use the positive wavefunction $$\psi_1 = \sqrt{\frac{2}{L}}\sin{\left(\frac{\pi x}{L}\right)}$$ or the negative wavefunction $$-\psi_1 = -\sqrt{\frac{2}{L}}\sin{\left(\frac{\pi x}{L}\right)}$$ or the complex wavefunction $$i\psi_1 = i\sqrt{\frac{2}{L}}\sin{\left(\frac{\pi x}{L}\right)}$$ you'll get exactly the same values for average position $(= L/2)$, average momentum $(= 0)$, and average energy $(= h^2/8mL^2)$ (the word average is redundant here, since this is a stationary state, but whatever). Everything that I have said so far can be easily generalised to three dimensions. It can also be generalised to linear combinations of stationary states, i.e. solutions of the time-dependent Schrödinger equation.

A note about molecular orbitals

"Okay, but what happens when you combine atomic orbitals to make molecular orbitals? You have constructive interference from the positive + positive, and destructive interference from the positive + negative, but what about the negative + negative combination?" "Good question, myself!" Let's talk about the $\ce{H2}$ molecule. The proper way to find the molecular orbitals is to solve the Schrödinger equation for the entire system, which is really difficult to do. One way to find approximate forms of the MOs is to make linear combinations of atomic orbitals; this method is called the LCAO approximation. Let's call the 1s orbital of the hydrogen on the left $\phi_1$ and the 1s orbital of the hydrogen on the right $\phi_2$. From the previous sections, we have already established that as far as the hydrogen atom is concerned, the individual phases of $\phi_1$ and $\phi_2$ do not matter. So, let's assume for simplicity's sake that their phases are both positive.
Now, from what you already know, you can get two molecular orbitals $\psi_1$ and $\psi_2$: $$\begin{align}\psi_1 &= \phi_1 + \phi_2 \\\psi_2 &= \phi_1 - \phi_2\end{align}$$ These are the bonding and antibonding orbitals respectively (at least, to within a normalisation constant, which I'm not going to care about here because the details are irrelevant). Now let's talk about those combinations that we missed out. $$\begin{align}-\phi_1 - \phi_2 &= -\psi_1 \\-\phi_1 + \phi_2 &= -\psi_2\end{align}$$ We already said that $\psi_1$ and $\psi_2$ are (approximations of) solutions to the Schrödinger equation. That means that, from what we've talked about earlier, $-\psi_1$ and $-\psi_2$ must also be (approximations of) solutions to the Schrödinger equation. They must have the same energies as $\psi_1$ and $\psi_2$. In fact, as far as the molecule knows (and cares), they are the same thing as $\psi_1$ and $\psi_2$. Now, since the individual phases of the atomic orbitals do not matter, if you really wished to, you could declare to the whole world that you define: $$\phi_3 = \phi_1 \text{ and } \phi_4 = -\phi_2$$ i.e. left hydrogen 1s orbital, $\phi_3$, is positive and right hydrogen 1s orbital, $\phi_4$, is negative. In that case, you can construct the molecular orbitals: $$\begin{align}\psi_1 &= \phi_3 - \phi_4 \\\psi_2 &= \phi_3 + \phi_4\end{align}$$ The coefficients of the atomic orbitals would have to be different, since you insisted on having them in different phases - however, the outcome is the same! You get one bonding MO and one antibonding MO.
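The phase-invariance argument above can be checked numerically for the particle-in-a-box ground state. A sketch of my own (units $\hbar = 1$, $L = 1$; the grid size and tolerances are arbitrary choices, assuming NumPy):

```python
import numpy as np

# Check that multiplying the ground state by a global phase exp(i*theta)
# changes neither the normalisation nor the expectation values <x> and <p>.
L = 1.0
x = np.linspace(0.0, L, 20001)
dx = x[1] - x[0]
psi = np.sqrt(2.0 / L) * np.sin(np.pi * x / L)

def expectations(w):
    norm = np.sum(np.abs(w) ** 2) * dx
    x_mean = np.real(np.sum(np.conj(w) * x * w) * dx)
    # <p> with p = -i d/dx (hbar = 1), via a finite-difference derivative.
    p_mean = np.real(np.sum(np.conj(w) * (-1j) * np.gradient(w, dx)) * dx)
    return norm, x_mean, p_mean

for theta in (0.0, np.pi / 4, np.pi / 2, np.pi):  # psi, (1+i)psi/sqrt2, i*psi, -psi
    norm, x_mean, p_mean = expectations(np.exp(1j * theta) * psi)
    assert abs(norm - 1.0) < 1e-3
    assert abs(x_mean - L / 2) < 1e-3
    assert abs(p_mean) < 1e-3
```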
Tagged: ideal

Problem 624 Let $R$ and $R'$ be commutative rings and let $f:R\to R'$ be a ring homomorphism. Let $I$ and $I'$ be ideals of $R$ and $R'$, respectively. (a) Prove that $f(\sqrt{I}\,) \subset \sqrt{f(I)}$. (b) Prove that $\sqrt{f^{-1}(I')}=f^{-1}(\sqrt{I'})$. (c) Suppose that $f$ is surjective and $\ker(f)\subset I$. Then prove that $f(\sqrt{I}\,) =\sqrt{f(I)}$. Problem 526 A ring is called local if it has a unique maximal ideal. (a) Prove that a ring $R$ with $1$ is local if and only if the set of non-unit elements of $R$ is an ideal of $R$. (b) Let $R$ be a ring with $1$ and suppose that $M$ is a maximal ideal of $R$. Prove that if every element of $1+M$ is a unit, then $R$ is a local ring. Problem 525 Let \[R=\left\{\, \begin{bmatrix} a & b\\ 0& a \end{bmatrix} \quad \middle | \quad a, b\in \Q \,\right\}.\] Then the usual matrix addition and multiplication make $R$ a ring. Let \[J=\left\{\, \begin{bmatrix} 0 & b\\ 0& 0 \end{bmatrix} \quad \middle | \quad b \in \Q \,\right\}\] be a subset of the ring $R$. (a) Prove that the subset $J$ is an ideal of the ring $R$. (b) Prove that the quotient ring $R/J$ is isomorphic to $\Q$. Problem 524 Let $R$ be the ring of all $2\times 2$ matrices with integer coefficients: \[R=\left\{\, \begin{bmatrix} a & b\\ c& d \end{bmatrix} \quad \middle| \quad a, b, c, d\in \Z \,\right\}.\] Let $S$ be the subset of $R$ given by \[S=\left\{\, \begin{bmatrix} s & 0\\ 0& s \end{bmatrix} \quad \middle | \quad s\in \Z \,\right\}.\] (a) True or False: $S$ is a subring of $R$. (b) True or False: $S$ is an ideal of $R$. Problem 432 (a) Let $R$ be an integral domain and let $M$ be a finitely generated torsion $R$-module. Prove that the module $M$ has a nonzero annihilator. In other words, show that there is a nonzero element $r\in R$ such that $rm=0$ for all $m\in M$. Here $r$ does not depend on $m$.
(b) Find an example of an integral domain $R$ and a torsion $R$-module $M$ whose annihilator is the zero ideal. Problem 431 Let $R$ be a commutative ring and let $I$ be a nilpotent ideal of $R$. Let $M$ and $N$ be $R$-modules and let $\phi:M\to N$ be an $R$-module homomorphism. Prove that if the induced homomorphism $\bar{\phi}: M/IM \to N/IN$ is surjective, then $\phi$ is surjective. Problem 417 Let $R$ be a ring with $1$ and let $M$ be an $R$-module. Let $I$ be an ideal of $R$. Let $M'$ be the subset of elements $a$ of $M$ that are annihilated by some power $I^k$ of the ideal $I$, where the power $k$ may depend on $a$. Prove that $M'$ is a submodule of $M$.