Suppose we start off with the traditional standard bivariate normal distribution: $$\phi_2(x,y|\rho,\mu_x=0,\mu_y=0,\sigma_x=1,\sigma_y=1)=\frac{1}{2\pi\sqrt{1-\rho^2}}\exp \left(-\frac{x^2-2\rho x y + y^2}{2(1-\rho^2)}\right)$$ where the $\mu$s are the mean parameters, the $\sigma$s are the standard deviation parameters and $\rho$ is the correlation term between $x$ and $y$. My question is: what happens to $\phi_2$ when we fix $y=y_F$? In other words, what happens when the only "free" variable in $\phi_2$ is $x$? In this case, can we express $\phi_2$ in terms of a univariate normal distribution $\phi_1$? Edit After banging my head against my table for quite a bit, I noticed that I was confusing two different concepts: a bivariate normal distribution with $y$ fixed at some point $y_F$ the conditional bivariate distribution with a given $y$ (i.e. $\text{prob}(X=x|Y=y_F)$) These are two very different beasts, but I was treating them as the same. Here's how you can calculate $\phi_2(x,y,\rho)$ when $y$ is fixed at $y_F$. First, from basic conditioning rules, we know that: $$ \text{prob}(X=x|Y=y_F) = \frac{\text{prob}(X=x,Y=y_F)}{\text{prob}(Y=y_F)} \Rightarrow$$ $$ \text{prob}(X=x,Y=y_F) = \text{prob}(X=x|Y=y_F) \cdot \text{prob}(Y=y_F) \Rightarrow$$ $$ \phi_2(x,y_F,\rho) = \text{prob}(X=x|Y=y_F) \cdot \phi_1(y_F|\text{mean}=0,\text{var}=1) $$ Now, using the information from the conditional probability of bivariate normal distributions, we can swap out that $\text{prob}$ term in the middle for an actual function: $$ \phi_2(x,y_F,\rho)=\phi_1(x|\text{mean}= \rho y_F, \text{var}=1-\rho^2) \cdot \phi_1(y_F|\text{mean}= 0, \text{var}=1)$$
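The final factorisation can be checked numerically; here is a minimal sketch (the function names `phi1` and `phi2` are just for illustration):

```python
import math

def phi1(x, mean=0.0, var=1.0):
    # univariate normal density
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def phi2(x, y, rho):
    # standard bivariate normal density with correlation rho
    return math.exp(-(x * x - 2 * rho * x * y + y * y) / (2 * (1 - rho ** 2))) \
           / (2 * math.pi * math.sqrt(1 - rho ** 2))

rho, yF = 0.6, 0.8
for x in (-1.5, 0.0, 2.3):
    lhs = phi2(x, yF, rho)
    rhs = phi1(x, mean=rho * yF, var=1 - rho ** 2) * phi1(yF)
    assert abs(lhs - rhs) < 1e-12
```

The assertions pass for any $\rho\in(-1,1)$ and $y_F$, which is exactly the statement $\phi_2(x,y_F,\rho)=\phi_1(x|\rho y_F,1-\rho^2)\,\phi_1(y_F|0,1)$.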
Some background I run a factor analysis of a time series $Y$ using a standard OLS model with n+1 independent variables $(F,X_1...X_n)$, where $F$ is the main factor (from an explanatory power perspective). $X_i$ and $F$ are highly correlated ($\rho>0.8$) but on certain days $t$, $X_i(t)$ will diverge significantly from its estimated value using the coefficient of its regression against $F$: $X_i=\beta_i F$. In other words the distribution of $\epsilon_i(t)=X_i(t)-\beta_i F(t)$ has fat tails. The reason why those highly collinear factors are included in the regression is precisely the presence of outliers, which have a significant impact on the dependent variable $Y$. My approach To get rid of the multicollinearity problem yet keep the outliers, I create new factors $X_i'$ which are constructed like this: $X_i'(t)=\epsilon_i(t)$ if $\epsilon_i(t)$ is in the top or bottom $n$ percentiles of $\epsilon_i$; $X_i'(t)=0$ otherwise. So if $n=5$ for example, $X_i'$ will be zero 90% of the time. I then regress $Y$ against $(F, X_1'...X_n')$. My questions Is there a flaw in the method? Are there alternatives that would help me capture that effect? (I know that I could orthogonalise each $X_i$ vs $(F,X_0...X_{i-1})$, but the results are as unstable as regressing against the original factors.) Note: in practice, $Y$ represents the returns of a stock, $F$ represents the returns of the market and $X_1...X_n$ represent other factors such as industry indices.
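The construction of the thresholded factors can be sketched in a few lines; this is only an illustration of the rule described above (the function name and the synthetic fat-tailed residuals are assumptions, not from the post):

```python
import numpy as np

def outlier_factor(eps, n_pct=5):
    # eps: residuals eps_i(t) = X_i(t) - beta_i * F(t)
    # keep only values in the top or bottom n_pct percentiles, zero elsewhere
    lo = np.percentile(eps, n_pct)
    hi = np.percentile(eps, 100 - n_pct)
    return np.where((eps <= lo) | (eps >= hi), eps, 0.0)

rng = np.random.default_rng(0)
eps = rng.standard_t(df=3, size=1000)   # fat-tailed residuals for illustration
x_prime = outlier_factor(eps, 5)
frac_zero = (x_prime == 0).mean()       # roughly 0.90, as described in the post
```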
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-02) Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ... Measurement of electrons from beauty hadron decays in pp collisions at √s=7 TeV (Elsevier, 2013-04-10) The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT < 8 GeV/c with the ALICE experiment at the CERN LHC in ... Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV (Springer, 2014-12) The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02 TeV at the CERN LHC. The measurement was ... Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC (Springer, 2014-10) Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ... Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... 
Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ... Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
It is said that: Abel summation and Euler summation are not comparable. We were able to find examples of divergent series which are Euler summable but not Abel summable, for instance $$ 1-2+4-8+16-\dots$$ However, we couldn't find any example of a divergent series which is Abel summable but not Euler summable. Do you know such an example? Thank you! EDIT: Dear Peter, this is the definition of Euler summation: Let $\sum_{n=0}^\infty a_n$ be any series. The Euler transformation of this series is defined as: \begin{equation*} \sum_{n=0}^\infty \frac{1}{2^{n+1}}b_n\quad\text{ with }\quad b_n:=\sum_{k=0}^n\binom{n}{k} a_k \end{equation*} The series $\sum_{n=0}^\infty a_n$ is called Euler summable if the Euler transformation of this series $$\sum_{n=0}^\infty \frac{1}{2^{n+1}}b_n$$ converges in the usual sense. The Euler sum is then given by $$\sum_{n=0}^\infty \frac{1}{2^{n+1}}b_n.$$
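For the example series $1-2+4-8+16-\dots$ (i.e. $a_n=(-2)^n$), the binomial theorem gives $b_n=\sum_k\binom{n}{k}(-2)^k=(1-2)^n=(-1)^n$, so the Euler transform is the geometric series $\sum_n(-1)^n/2^{n+1}=1/3$. A quick numerical check of this (function name is illustrative only):

```python
from math import comb

def euler_partial_sums(a, N):
    # partial sums of the Euler transform sum_n b_n / 2^(n+1),
    # with b_n = sum_k C(n, k) * a_k
    sums, total = [], 0.0
    for n in range(N):
        b_n = sum(comb(n, k) * a[k] for k in range(n + 1))
        total += b_n / 2 ** (n + 1)
        sums.append(total)
    return sums

# a_n = (-2)^n gives the divergent series 1 - 2 + 4 - 8 + 16 - ...
a = [(-2) ** n for n in range(50)]
s = euler_partial_sums(a, 50)
```

The partial sums converge geometrically (ratio 1/2) to 1/3, while the Abel means $\sum a_n x^n = 1/(1+2x)$ do not exist for $x>1/2$, consistent with the series being Euler but not Abel summable.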
I am stuck when it comes to finding the end value of a trig function. I have the following question: $$ \tan^5x - 9\tan{x} = 0 $$ I worked the problem and got: $$ \tan x = 0\\ \tan^4x-9 = 0\\ x = 0, \pi, \frac {\pi}{3}, \frac {2\pi}{3}, \frac {4\pi}{3}, \frac {5\pi}{3} $$ My book answer is $x = \frac {\pi k}{3}$ how do you get that? I understand that tan uses $ \pi $ and sin, cos use $ 2\pi $ but I'm not sure how they got to that answer.
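A hint toward the book's answer: $\tan^4 x = 9$ forces $\tan^2 x = 3$ (the negative root is impossible), so $\tan x \in \{0, \sqrt3, -\sqrt3\}$; since $\tan$ has period $\pi$, the base solutions $0$, $\pi/3$ and $2\pi/3$ repeat every $\pi$, i.e. $x = \pi k/3$ for integer $k$. A numerical spot check (illustrative only):

```python
import math

def f(x):
    # left-hand side of tan^5(x) - 9 tan(x) = 0
    t = math.tan(x)
    return t ** 5 - 9 * t

# every integer multiple of pi/3 solves the equation (up to rounding)
for k in range(-6, 7):
    assert abs(f(k * math.pi / 3)) < 1e-9
```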
Suppose we have a finite Egyptian fraction decomposition of a rational: $$\frac{n}{m} = \sum_{i=1}^k \frac{1}{x_i}$$ such that (i) $x_i>0$, (ii) $x_i \neq x_j$ for $i \neq j$, and (iii) $\gcd(m, x_1,x_2,...x_k) = 1$. Are there any known results concerning $\max_{i,j} |x_i-x_j|$ or maybe $\max_{i,j} |\frac{x_i}{x_j}|$? For example, $\frac{5}{121} = \frac{1}{26} + \frac{1}{350} + \frac{1}{275275}$ or $\frac{5}{121} = \frac{1}{33} + \frac{1}{93} + \frac{1}{3751}.$ Certainly the latter expression is "better" than the previous one for some vague notion of "better". Note: Condition (iii) means we don't consider $\frac{5}{121}= \frac{1}{33}+\frac{1}{121}+\frac{1}{363}$ since this is really just a good decomposition of $\frac{5}{11}$ that has been divided through by $11$. Motivation: I'm investigating a technique in my research that would produce Egyptian fraction representations where all the denominators are roughly the same size and I'm curious if this has been looked at before.
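Both example decompositions can be verified exactly with rational arithmetic; a quick check (the helper name is illustrative):

```python
from fractions import Fraction

def check_decomposition(n, m, xs):
    # verify n/m == sum of 1/x_i with positive, pairwise distinct x_i
    assert all(x > 0 for x in xs) and len(set(xs)) == len(xs)
    return sum(Fraction(1, x) for x in xs) == Fraction(n, m)

assert check_decomposition(5, 121, [26, 350, 275275])
assert check_decomposition(5, 121, [33, 93, 3751])
```

The spread $\max_{i,j}|x_i - x_j|$ is 275249 for the first decomposition but only 3718 for the second, which makes the "better" judgement concrete.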
First, if $uv=0$ then $uTv=0$ a.e.: indeed, applying the hypothesis with the function $\epsilon u+\epsilon^{-1}v$ and evaluating on $\{u\neq 0\}$, we get$$0\le \epsilon^2uTu+\epsilon^{-2}vTv+uTv+vTu=\epsilon^2 uTu+uTv$$a.e. and deduce the claim (sending $\epsilon\to 0$). This gives the property $(*)$ that you showed. If the space is $\sigma$-finite, using $(*)$ we can easily reduce to the case that the measure is finite. Now take $v:=T1$. We claim that $Tu=uv$ (a.e.) when $u$ is simple: since $T$ is linear, it suffices to show this when $u=1_A$ is a characteristic function. But$$v=T1=T1_A+T1_{A^c}$$and $T1_{A^c}$ vanishes (a.e.) on $A$, hence $T1_A=v$ (a.e.) on $A$. Since $T1_A$ vanishes (a.e.) on $A^c$, the claim follows. Taking $A:=\{|v|>\|T\|\}$, if $\mu(A)>0$ we see that $\|T1_A\|_{L^2}>\|T\|\|1_A\|_{L^2}$, contradiction. So $v\in L^\infty$. The statement now follows since simple functions are dense. It seems false to me without assuming the space to be $\sigma$-finite: take $\Omega:=[0,1]^2$, with $\mu:=\mathcal{H}^1$ (1-dimensional Hausdorff measure) and the $\sigma$-algebra $\mathcal A$ generated by horizontal and vertical slices ($\{s\}\times[0,1]$ and $[0,1]\times\{t\}$). Now with little work you can show that all elements of $\mathcal A$, up to adding and removing negligible sets, are of the form$$\bigcup\Big(\{s_i\}\times[0,1]\Big)\cup\bigcup\Big([0,1]\times\{t_j\}\Big),$$where both unions are (at most) countable. Hence, $L^2(\mu)$ splits as a direct sum $V\oplus W$, where $V$ consists of functions of the form $f=\sum_i a_i 1_{\{s_i\}\times[0,1]}$ and $W$ of similar "vertical" functions. Now declare $T$ to act by multiplication by $0$ on $V$ and multiplication by $1$ on $W$. It's easy to see that there is no consistent choice of $v$. If you don't want atoms in the counterexample, take instead the $\sigma$-algebra generated by sets of the form $\{s\}\times E'$ and $E\times\{t\}$, where $s,t$ range in $[0,1]$ and $E,E'$ vary among Borel subsets of $[0,1]$. 
In this case, measurable sets have the form$$\bigcup(\{s_i\}\times E_i')\cup\bigcup(E_j\times\{t_j\})\cup (E\times E')\cup N,$$where $E_j,E$ are Borel subsets of $[0,1]\setminus\bigcup\{s_i\}$, $E_i',E'$ are Borel subsets of $[0,1]\setminus\bigcup\{t_j\}$, and finally $N$ is any subset of $\Big(\bigcup\{s_i\}\Big)\times\Big(\bigcup\{t_j\}\Big)$.Once you have this, you can argue as before.
In this lab we will implement an OCR system. For simplicity we assume the texts contain only two letters: A and C, and that the letters are already well segmented into 10×10 pixel image chips. We will extract a simple feature from the image chips and use it in a Bayesian decision framework to partition the image set. Furthermore, the relative frequency of occurrence of A and C in text is given. We will consider two cases: (1) the measurements are discrete, and (2) the measurements are continuous. In the discrete probabilities case, we will find the optimal strategy, compute its Bayesian risk, experiment with the loss function and classify independent data samples by applying the theory from the lecture slides directly. In the continuous measurements case, an assumption is made that the extracted feature for each letter class (A, C) is generated by a normal distribution. The 'zero-one' loss will be used to estimate risk, which simplifies the decision strategy. To fulfil this assignment, you need to submit these files (all packed in a single .zip file) into the upload system: answers.txt assignment_bayes.m find_strategy_discrete.m bayes_risk_discrete.m classify_discrete.m classification_error_discrete.m find_strategy_2normal.m bayes_risk_2normal.m classify_2normal.m classification_error_2normal.m classif_W1.png classif_W2.png decision_discrete.png thresholds.png decision_2normal.png Start by downloading the template of the assignment. It contains function templates and the data needed for the tasks. When submitting the assignment to the upload system, please upload all the files included in the template. This is a highly experimental feature. Use at your own risk. The standard is to use Matlab. You may skip this info box if you follow the standard rules. General information for Python development. 
To fulfil this assignment, you need to submit these files (all packed in one .zip file) into the upload system: bayes.ipynb bayes.py find_strategy_discrete bayes_risk_discrete classify_discrete classification_error_discrete find_strategy_2normal bayes_risk_2normal classify_2normal classification_error_2normal Use the template of the assignment. When preparing a zip file for the upload system, do not include any directories, the files have to be in the zip file root. The task is to design a classifier (Bayesian strategy) $q(x)$, which distinguishes between 10×10 images of two letters (classes) A and C using only a single measurement (feature): $$ q: \mathcal{X} \rightarrow D $$$$ \mathcal{X} = \{-10, \ldots, 10\}, \quad D = K = \{A, C\}$$ The measurement $x$ is obtained from an image as follows:
mu = -563.9
sigma = 2001.6
x_unnormalized = (sum of pixel values in the left half of image) - (sum of pixel values in the right half of image)
x = (x_unnormalized - mu) / (2 * sigma) * 10
limit x to the interval <-10, 10>
Computation of this measurement is implemented in compute_measurement_lr_discrete.m. The strategy $q(x)$ will be represented in Matlab by a 1×21 vector containing 1 if the classification for that value of $x$ is supposed to be A, and 2 if the classification is supposed to be C. Thus given the vector $q$ and some $x$ we can decide for one of the classes. 
As required in the formulation of a Bayesian task, we are given all the necessary probabilities (specified in assignment_bayes.m): discreteA.Prob, discreteA.Prior, discreteC.Prob, discreteC.Prior, images_test, labels_test.

>> q_discrete = [ones(1, 10), 2 ones(1, 10)];
>> W = [0 1; 1 0];
>> R_discrete = bayes_risk_discrete(discreteA, discreteC, W, q_discrete)
R_discrete = 0.5154

>> W = [0 1; 1 0];
>> q_discrete = find_strategy_discrete(discreteA, discreteC, W)
q_discrete =
  Columns 1 through 14
    1 1 1 1 1 2 2 2 1 1 1 1 1 1
  Columns 15 through 21
    1 1 1 1 1 1 1

>> distribution1.Prior = 0.3;
>> distribution2.Prior = 0.7;
>> distribution1.Prob = [0.2, 0.3, 0.4, 0.1];
>> distribution2.Prob = [0.5, 0.4, 0.1, 0.0];
>> W = [0 1; 1 0];
>> q = find_strategy_discrete(distribution1, distribution2, W)
q = 2 2 1 1

Visualise the strategies (visualise_discrete) for W1 = [0 1; 1 0] and W2 = [0 5; 1 0].

>> W = [0 1; 1 0];
>> q_discrete = find_strategy_discrete(discreteA, discreteC, W);
>> error_discrete = classification_error_discrete(images_test, labels_test, q_discrete)
error_discrete = 0.2250

Fill the correct answers into your answers.txt file (they concern discreteA.Prob and the loss matrix W = [0 1; 2 0]). In the second part of the assignment the probabilities $p_{X|k}(x|k)$ are given as continuous normal distributions (specified in assignment_bayes.m): $$p_{X|k}(x|A) = \frac{1}{\sigma_k\sqrt{2\pi}}e^{-\frac{(x-\mu_k)^2}{2\sigma_k^2}}$$ with parameters contA.Mean, contA.Sigma, contA.Prior, contC.Mean, contC.Sigma, contC.Prior. Further we assume the zero-one loss matrix W = [0 1; 1 0]. In this particular case, as explained in lecture notes, slides 21-22, the optimal strategy $q$ can be found by solving the following problem: $$q(x) = \arg\max_k p_{K|X}(k|x) = \arg\max_k p_{XK}(x,k)/p_X(x) = \arg\max_k p_{XK}(x,k) = \arg\max_k p_K(k)p_{X|K}(x|k),$$ where $p_{K|X}(k|x)$ is called the posterior probability of class $k$ given the measurement $x$ (i.e. it is the probability of the data being from class $k$ if we measure feature value $x$). The symbol $\arg\max_k$ denotes finding the $k$ maximising the argument. 
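For two normal class-conditionals, the comparison $p_K(A)p_{X|k}(x|A) \ge p_K(C)p_{X|k}(x|C)$ reduces, after taking logarithms, to a quadratic inequality in $x$. A minimal Python sketch of the resulting decision rule (function names are illustrative, not the assignment's API):

```python
import math

def quad_coeffs(pA, muA, sA, pC, muC, sC):
    # decide A iff a*x^2 + b*x + c >= 0, obtained by taking logs of
    # pA * N(x; muA, sA) >= pC * N(x; muC, sC) and collecting powers of x
    a = 1 / (2 * sC ** 2) - 1 / (2 * sA ** 2)
    b = muA / sA ** 2 - muC / sC ** 2
    c = muC ** 2 / (2 * sC ** 2) - muA ** 2 / (2 * sA ** 2) \
        + math.log(pA * sC / (pC * sA))
    return a, b, c

def decide(x, coeffs):
    a, b, c = coeffs
    return 'A' if a * x * x + b * x + c >= 0 else 'C'
```

With equal priors and equal sigmas the quadratic term vanishes and the rule degenerates to a single threshold halfway between the means, which is a useful sanity check.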
We will also work with an unnormalised measurement here instead of the discrete one used above: x = (sum of pixel values in the left half of image) - (sum of pixel values in the right half of image), implemented in compute_measurement_lr_cont.m. At the beginning of the labs we will together show that in the case when $p_{X|k}(x|k)$ are normal distributions and when we consider a zero-one loss function, the optimal Bayesian strategy $q(x)$ corresponds to a quadratic inequality (i.e. $ax^2+bx+c \ge 0$). Hint: $$\arg\max_{k\in\{A,C\}} p_K(k)p_{X|k}(x|k) \quad \mbox{translates into} \quad p_K(A)p_{X|k}(x|A) \ge p_K(C)p_{X|k}(x|C)$$ Optimal strategy representation: Solving the above quadratic inequality gives up to two thresholds $t_1$ and $t_2$ (zero crossing points) and a classification decision (A or C) in each of the intervals $\mathcal{X}_1 = (-\infty, t_1]$, $\mathcal{X}_2 = (t_1, t_2]$, $\mathcal{X}_3 = (t_2, \infty)$. So similarly to the discrete case we can represent the strategy in Matlab as a structure with the fields q.t1, q.t2 and q.decision:

>> q_cont = find_strategy_2normal(contA, contC)
q_cont =
    t1: -3.5360e+03
    t2: -1.2488e+03
    decision: [1 2 1]

Visualise the strategy (visualize_2norm, normcdf).

>> R_cont = bayes_risk_2normal(contA, contC, q_cont)
R_cont = 0.1352

>> error_cont = classification_error_2normal(images_test, labels_test, q_cont)
error_cont = 0.1000

4. Which of the conditions allowed us to express the optimal Bayesian strategy as a quadratic discriminative function (check all that apply):
5. If we extend the number of classes from 2 to 3 and keep the zero-one loss function, how does the optimal Bayesian strategy differ (check all that apply):
This task is not compulsory. Work on bonus tasks deepens your knowledge about the subject. Successful solution of a bonus task(s) will be rewarded by 3 points. The data for this task can be downloaded from data_33rpz_cv02_bonus.mat. Use distributions D1, D2 and D3. 
The task is analogous to the previous one, only there are now two measurements (features) and three classes (D1, D2, D3). The second measurement is y = (sum of pixel values in the upper half of image) - (sum of pixel values in the lower half of image). The task is to classify the input images into one of the three classes. Use both measurements $x$ and $y$ here. The density function $p_{X}(\mathbf{x}) = p_K(A)p_{X|k}(\mathbf{x}|A)+p_K(C)p_{X|k}(\mathbf{x}|C)+p_K(T)p_{X|k}(\mathbf{x}|T)$, where $\mathbf{x} = [x, y]$, is visualised below (drawn using the pgmm function from the toolbox). Hints: mvnpdf, pdfgauss
One can scan the intensity $I$ of incoming linear polarized light by a linear polarizer; the outcome is the well known Malus' law: $$I = I_0 \cos^2 \Theta$$ Deriving the dependence $I(\Theta)$ is rather easy, when going one step back to the electric field component and the relationship $I \sim |E|^2$. Now, I was wondering what happens if I scan circular or even elliptically polarized light by this method. The former should give $I(\Theta) = const.$, while the latter should give a more complicated expression. Especially the latter is of interest, because linear and circular polarizations are just special cases of generally elliptical polarization. Now, if one could find a dependence $I(\Theta,\phi)$, with $\phi$ being the phase difference between two orthogonal polarized beams (for simplicity, I'm thinking of linear x- and y-polarized light), one could easily alter Malus' law for the more general case of elliptical polarization. Finally I'm interested in $I(\Theta,\phi)$ with $\phi$ being the phase difference between two orthogonal polarized beams and $\Theta$ being the scan angle. Can anyone help me to figure out the right derivation? I hope my question is rather clearly asked. In case it needs improvement, don't hesitate to correct me! Thank you!!
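One way to explore this numerically is the Jones formalism (an assumed formulation, not from the question): write the field as $(a_x, a_y e^{i\phi})$, project onto a polarizer at angle $\Theta$, and take the squared modulus, which gives $I(\Theta,\phi) = a_x^2\cos^2\Theta + a_y^2\sin^2\Theta + 2 a_x a_y \sin\Theta\cos\Theta\cos\phi$. A sketch:

```python
import numpy as np

def intensity(theta, phi, ax=1.0, ay=1.0):
    # Jones vector (ax, ay * e^{i*phi}) projected onto a linear
    # polarizer transmitting the direction (cos(theta), sin(theta))
    E = np.array([ax, ay * np.exp(1j * phi)])
    p = np.array([np.cos(theta), np.sin(theta)])
    return np.abs(p @ E) ** 2

# phi = 0, ay = 0: plain linear light, recovers Malus' law I = ax^2 cos^2(theta)
# phi = pi/2, ax = ay: circular light, I is independent of theta
```

The two special cases mentioned in the question drop out directly: $\phi=0$ reduces to $\cos^2$ of the angle between polarizer and polarization, and $\phi=\pi/2$ with $a_x=a_y$ gives a constant.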
I'm running a Metropolis sampler (C++) and want to use the previous samples to estimate the convergence rate. One easy to implement diagnostic I found is the Geweke diagnostic, which computes the difference between the two sample means divided by its estimated standard error. The standard error is estimated from the spectral density at zero. $$Z_n=\frac{\bar{\theta}_A-\bar{\theta}_B}{\sqrt{\frac{1}{n_A}\hat{S_{\theta}^A}(0)+\frac{1}{n_B}\hat{S_{\theta}^B}(0)}},$$ where $A$, $B$ are two windows within the Markov chain. I did some research on what $\hat{S_{\theta}^A}(0)$ and $\hat{S_{\theta}^B}(0)$ are, but got into a mess of literature on energy spectral density and power spectral density, and I'm not an expert on these topics; I just need a quick answer: are these quantities the same as the sample variance? If not, what's the formula for computing them? Another doubt on this Geweke diagnostic is how to pick $\theta$. The above literature says that it is some functional $\theta(X)$ and should imply the existence of a spectral density $\hat{S_{\theta}^A}(0)$, but for convenience I guess the simplest way is to use the identity function (use the samples themselves). Is this correct? The R coda package has a description, but it does not specify how to compute the $S$ values.
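For reference, $S_\theta(0)$ equals the sample variance only when the draws are uncorrelated; for an autocorrelated chain it is $\gamma_0 + 2\sum_{k\ge1}\gamma_k$, the sum of autocovariances. Below is a rough illustrative sketch using a truncated-autocovariance estimate (the 10%/50% windows follow Geweke's common defaults; all names are assumptions, and production code should use a smoothed spectral estimator such as the one in coda):

```python
import numpy as np

def spectral_density_at_zero(x, max_lag=None):
    # truncated-autocovariance estimate of S(0) = gamma_0 + 2 * sum_k gamma_k;
    # for uncorrelated draws this is approximately the sample variance
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_lag is None:
        max_lag = int(n ** 0.5)
    xc = x - x.mean()
    gamma = [np.dot(xc[:n - k], xc[k:]) / n for k in range(max_lag + 1)]
    return gamma[0] + 2.0 * sum(gamma[1:])

def geweke_z(chain, first=0.1, last=0.5):
    # compare the mean of the first 10% of the chain with the last 50%
    chain = np.asarray(chain, dtype=float)
    n = len(chain)
    a = chain[: int(first * n)]
    b = chain[int((1.0 - last) * n):]
    se2 = spectral_density_at_zero(a) / len(a) + spectral_density_at_zero(b) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(se2)
```

For a well-mixed chain $Z$ is approximately standard normal, so large $|Z|$ flags non-convergence; using the raw samples (the identity functional) is indeed the usual choice.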
INFORMAL TREATMENT We should remember that the notation where we condition on random variables is inaccurate, although economical, as notation. In reality we condition on the sigma-algebra that these random variables generate. In other words $E[Y\mid X]$ is meant to mean $E[Y\mid \sigma(X)]$. This remark may seem out of place in an "Informal Treatment", but it reminds us that our conditioning entities are collections of sets (and when we condition on a single value, then this is a singleton set). And what do these sets contain? They contain the information with which the possible values of the random variable $X$ supply us about what may happen with the realization of $Y$. Bringing in the concept of Information permits us to think about (and use) the Law of Iterated Expectations (sometimes called the "Tower Property") in a very intuitive way: The sigma-algebra generated by two random variables is at least as large as that generated by one random variable: $\sigma (X) \subseteq \sigma(X,Z)$ in the proper set-theoretic meaning. So the information about $Y$ contained in $\sigma(X,Z)$ is at least as great as the corresponding information in $\sigma (X)$. Now, as notational shorthand, set $\sigma (X) \equiv I_x$ and $\sigma(X,Z) \equiv I_{xz}$. Then the LHS of the equation we are looking at can be written $$E \left[ E \left(Y|I_{xz} \right) |I_{x} \right]$$ Describing verbally the above expression we have: "what is the expectation of {the expected value of $Y$ given Information $I_{xz}$} given that we have available information $I_x$ only?" Can we somehow "take into account" $I_{xz}$? No - we only know $I_x$. But if we use what we have (as we are obliged to by the expression we want to resolve), then we are essentially saying things about $Y$ under the expectations operator, i.e. we say "$E(Y\mid I_x)$", no more - we have just exhausted our information. 
Hence$$E \left[ E \left(Y|I_{xz} \right) |I_{x} \right] = E\left(Y|I_{x} \right)$$ If somebody else doesn't, I will return for the formal treatment. A (bit more) FORMAL TREATMENT Let's see how two very important books of probability theory, P. Billingsley's Probability and Measure (3d ed.-1995) and D. Williams "Probability with Martingales" (1991), treat the matter of proving the "Law Of Iterated Expectations": Billingsley devotes exactly three lines to the proof. Williams, and I quote, says "(the Tower Property) is virtually immediate from the definition of conditional expectation". That's one line of text. Billingsley's proof is not less opaque. They are of course right: this important and very intuitive property of conditional expectation derives essentially directly (and almost immediately) from its definition -the only problem is, I suspect that this definition is not usually taught, or at least not highlighted, outside probability or measure theoretic circles. But in order to show in (almost) three lines that the Law of Iterated Expectations holds, we need the definition of conditional expectation, or rather, its defining property. Let a probability space $(\Omega, \mathcal F, \mathbf P)$, and an integrable random variable $Y$. Let $\mathcal G$ be a sub-$\sigma$-algebra of $\mathcal F$, $\mathcal G \subseteq \mathcal F$. Then there exists a function $W$ that is $\mathcal G$-measurable, is integrable and (this is the defining property) $$E(W\cdot\mathbb 1_{G}) = E(Y\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal G \qquad [1]$$ where $1_{G}$ is the indicator function of the set $G$. We say that $W$ is ("a version of") the conditional expectation of $Y$ given $\mathcal G$, and we write$W = E(Y\mid \mathcal G) \;a.s.$ The critical detail to note here is that the conditional expectation, has the same expected value as $Y$ does, not just over the whole $\mathcal G$, but in every subset $G$ of $\mathcal G$. 
(I will try now to present how the Tower property derives from the definition of conditional expectation). $W$ is a $\mathcal G$-measurable random variable. Consider then some sub-$\sigma$-algebra, say $\mathcal H \subseteq \mathcal G$. Then $G\in \mathcal H \Rightarrow G\in \mathcal G$. So, in an analogous manner as previously, we have the conditional expectation of $W$ given $\mathcal H$, say $U=E(W\mid \mathcal H) \;a.s.$ that is characterized by $$E(U\cdot\mathbb 1_{G}) = E(W\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal H \qquad [2]$$ Since $\mathcal H \subseteq \mathcal G$, equations $[1]$ and $[2]$ give us $$E(U\cdot\mathbb 1_{G}) = E(Y\cdot \mathbb 1_{G})\qquad \forall G \in \mathcal H \qquad [3]$$ But this is the defining property of the conditional expectation of $Y$ given $\mathcal H$. So we are entitled to write $U=E(Y\mid \mathcal H)\; a.s.$ Since we have also by construction $U = E(W\mid \mathcal H) = E\big(E[Y\mid \mathcal G]\mid \mathcal H\big)$, we just proved the Tower property, or the general form of the Law of Iterated Expectations - in eight lines.
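As a sanity check of the intuition (not part of the formal proof), the tower property can be verified empirically on a toy discrete example, where conditional expectations reduce to cell averages; all names below are illustrative:

```python
import numpy as np

# Monte Carlo check of E[ E(Y | X, Z) | X ] = E(Y | X) on binary X, Z
rng = np.random.default_rng(1)
n = 200_000
X = rng.integers(0, 2, n)
Z = rng.integers(0, 2, n)
Y = X + 2 * Z + rng.standard_normal(n)

# inner conditional expectation E[Y | X, Z], estimated by cell means
EY_given_XZ = np.zeros(n)
for x in (0, 1):
    for z in (0, 1):
        cell = (X == x) & (Z == z)
        EY_given_XZ[cell] = Y[cell].mean()

# outer conditioning on X alone: averaging the inner expectation over the
# X-cells must reproduce E[Y | X] (here exactly, up to float rounding)
for x in (0, 1):
    m = X == x
    assert abs(EY_given_XZ[m].mean() - Y[m].mean()) < 1e-9
```

The equality holds exactly for empirical cell means, which mirrors the measure-theoretic statement: averaging first over the finer partition and then over the coarser one is the same as averaging over the coarser one directly.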
The K-means algorithm is the first unsupervised method we encounter in the course. The previous supervised methods worked with a set of observations and labels $\mathcal{T} = \{(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_n, y_n)\}$. The unsupervised methods use unlabeled observations $\mathcal{T} = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$. The goal is to infer properties of a probability distribution $p_X(\mathbf{x})$. In low-dimensional problems ($|X| \leq 3$), we can estimate the probability density directly using a variety of techniques (e.g. histograms, Parzen windows). When we get prior information about the type of a particular probability distribution, we can directly estimate the distribution parameters. Due to the curse of dimensionality these methods fail in higher dimensions. When the dimensionality increases, the volume grows exponentially and the data becomes exponentially sparser. In such cases we use techniques that do not find the full underlying distribution $p_X(\mathbf{x})$, but instead describe the data in a more relaxed way. The main groups of unsupervised methods are: The dimensionality reduction methods map data onto a lower-dimensional manifold while minimising the lost information. The clustering methods partition the observations into groups with small difference between the group samples. The unsupervised methods have many applications including smart feature selection, data visualization, efficient data encoding, adaptive classification and image segmentation. A last but not least example is Google's PageRank algorithm. The K-means algorithm is a popular clustering approach. 
The goal is to find a clustering $\{\mathcal{T}_k\}^K_{k=1}$ of the observations that minimizes the within-cluster sum of squares (WCSS):$$\{\mathbf{c}_1, \ldots, \mathbf{c}_K; \mathcal{T}_1, \ldots, \mathcal{T}_K\} = \underset{\text{all }\mathbf{c}^{'}_k, \mathcal{T}^{'}_k} {\operatorname{arg\,min}} \sum_{k=1}^{K} \sum_{\mathbf{x} \in \mathcal{T}^{'}_k} \| \mathbf{x} - \mathbf{c}^{'}_k \|^2_2,$$where $K$ is a user specified number of clusters, $\mathcal{T}_1,\ldots,\mathcal{T}_K$ are sets of observations belonging to clusters $1,\ldots,K$ and each cluster is represented by its prototype $\mathbf{c}_k$. The K-means algorithm Input: set of observations $\mathcal{T} = \{\mathbf{x}_1, \ldots, \mathbf{x}_n\}$, number of clusters $K$, maximum number of iterations.Output: set of cluster prototypes $\{\mathbf{c}_1, \ldots, \mathbf{c}_K\}$. Initialize $\{\mathbf{c}_1, \ldots, \mathbf{c}_K\}$ with random unique input observations $\mathbf{x}_i$. Do: assign each observation to the cluster with the nearest prototype, then recompute each prototype as the mean of the observations assigned to it. Until no change in clustering $\mathcal{T}_k$ or maximum number of iterations is exceeded. The solution that K-means reaches often depends on the initialisation of the means and there is no guarantee that it reaches the global optimum. As you can see in the two examples below, the solution is not unique and it can happen that the algorithm reaches a local minimum (as in the lower example), where reassigning any point to some other cluster would increase WCSS, but where a better solution does exist. The chance of getting trapped in a local minimum can be reduced by performing more trials of the K-means clustering and choosing the result that minimizes WCSS. K-Means++ is the K-means with clever initialization of cluster centers. The motivation is to make initializations which make K-means more likely to end up in better local minima (minima with lower WCSS). 
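The assign/update loop described in the box can be sketched in Python (an illustration only, not the assignment's Matlab template or its required initialization):

```python
import numpy as np

def k_means(x, k, max_iter=100, seed=0):
    # minimal K-means sketch: x is an (n, d) array;
    # returns cluster indices and cluster prototypes (means)
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    means = x[rng.choice(len(x), size=k, replace=False)].copy()
    c = np.full(len(x), -1)
    for _ in range(max_iter):
        # assignment step: nearest prototype in squared Euclidean distance
        d2 = ((x[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
        new_c = d2.argmin(axis=1)
        if np.array_equal(new_c, c):
            break  # clustering stable, WCSS can no longer decrease
        c = new_c
        # update step: each prototype becomes the mean of its cluster
        for j in range(k):
            if np.any(c == j):
                means[j] = x[c == j].mean(axis=0)
    return c, means
```

On two well-separated 1-d blobs the loop converges in a few iterations to the blob means, regardless of which observations are drawn as initial prototypes.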
K-Means++ uses the following randomized sampling strategy for constructing the initial cluster centers set $\mathcal{C}$: the first center is sampled uniformly from the input observations, and each subsequent center is sampled from the observations with probability proportional to its squared distance to the nearest already selected center. After this initialization, the standard K-means algorithm is employed. To fulfil this assignment, you need to submit these files (all packed in a single .zip file) into the upload system: answers.txt assignment_10.m k_means.m k_means_multiple_trials.m k_meanspp.m random_sample.m quantize_colors.m random_initialization.png kmeanspp_initialization.png geeks_quantized.png Start by downloading the template of the assignment. This is a highly experimental feature. Use at your own risk. The standard is to use Matlab. You may skip this info box if you follow the standard rules. General information for Python development: kmeans.ipynb, kmeans.py, k_means, k_means_multiple_trials, k_meanspp, random_sample, quantize_colors. Use the template of the assignment. When preparing the archive file for the upload system, do not include any directories, the files have to be in the archive file root. Your task is to implement the basic K-means algorithm as described in the box above. The function prototype is [c, means, x_dists] = k_means(x, k, max_iter, show), where x are observations, k is the number of clusters, c are indices of the found clusters, means are means of the clusters (see the function header in the template for more details). Important (for automatic evaluation): Use Matlab's function randsample to generate indices for means initialization (prepared in the template for both Matlab and Python). 
rng(0); % In order to obtain the same results, use this initialization of the random number generator
% (don't put this into functions submitted to the upload system).
x = [16 12 50 96 34 59 22 75 26 51];
[cluster_idxs, means, sq_dists] = k_means(x, 3, Inf)
cluster_idxs = 3 3 2 1 3 2 3 1 3 2
means = 85.5000 53.3333 22.0000
sq_dists = 36.0000 100.0000 11.1111 110.2500 144.0000 32.1111 0 110.2500 16.0000 5.4444

Limited number of iterations:
rng(0);
load image_data.mat
x = compute_measurements(images);
[~, means, ~] = k_means(x, 3, 4)
means = 1.0e+03 * 0.3608 -4.5668 -0.1417 -2.3709 2.3758 0.6181

Test the algorithm on the measurements extracted from the unlabelled data (image_data.mat, mnist_image_data.mat). Inspect the results. Implement the function [c, means, x_dists] = k_means_multiple_trials(x, k, n_trials, max_iter, show) which tries to avoid being trapped in a local minimum by performing K-means several times and selecting the solution minimizing the WCSS. Implement clever K-means initialization: K-means++. The function centers = k_meanspp(x, k) returns initial cluster prototypes for use in the already implemented K-means algorithm. You will first need to prepare the idx = random_sample(weights) function, which randomly picks a sample based on the sample weights. See the following recipe and figure to illustrate the sampling process. Your implementation should return the same results:

>> rng(0);
>> distances = [10 2 4 8 3];
>> idx = random_sample(distances)
idx = 4
>> idx = random_sample(distances)
idx = 5

K-means++ favors initialization with cluster prototypes far from each other:

>> rng(0);
>> x = 1:100;
>> means = k_meanspp(x, 3)
means = 82 46 3

Inspect the K-means++ initialization for the data generated by gen_kmeanspp_data.m and compare it to a uniform random initialization of vanilla K-means. 
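The weighted sampling and the K-means++ initialization can be sketched in Python as follows (an illustration only; the assignment's versions must follow the template and its random number generator to pass the automatic evaluation):

```python
import numpy as np

def random_sample(weights, rng):
    # pick an index with probability proportional to its weight:
    # draw u ~ U(0, sum(weights)) and return the first index whose
    # cumulative weight exceeds u (zero-weight indices are never picked)
    cdf = np.cumsum(weights)
    u = rng.uniform(0.0, cdf[-1])
    return int(np.searchsorted(cdf, u, side='right'))

def k_meanspp_init(x, k, seed=0):
    # K-means++ sketch: first center uniform, each next center sampled
    # with probability proportional to squared distance to nearest center
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    centers = [x[rng.integers(len(x))]]
    while len(centers) < k:
        d2 = np.min([((x - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(x[random_sample(d2, rng)])
    return np.array(centers)
```

Because an already selected observation has squared distance zero, it can never be drawn again, which is what pushes the initial prototypes apart.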
Save the figures of the initializations to random_initialization.png and kmeanspp_initialization.png. Use your random_sample function to generate ALL K-means++ output indices.

Apply the K-means algorithm to the color quantization problem. The aim is to reduce the number of distinct colors in an image such that the resulting image is visually close to the source image. The algorithm is as described above (run with max_iter=Inf). You are supposed to implement a function im_q = quantize_colors(im, k) which performs the described color quantization. Run this function on the geeks.png image and save the resulting quantized version as geeks_quantized.png. The right image below is a sample result (for k=8).

Important (for automatic evaluation): To generate the indices of the 1000 random pixels, use this code: inds = randsample(size(im,1) * size(im,2), 1000);. For Python: inds = np.random.randint(0, (h_image * w_image)-1, 1000).
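A sketch of the whole quantization pipeline in NumPy: run K-means on a random sample of pixels in RGB space, then map every pixel to its nearest cluster center. The distinct-color seeding and the `rng`/`n_samples` parameters are assumptions of this sketch, not the assignment's exact recipe:

```python
import numpy as np

def quantize_colors(im, k, n_samples=1000, max_iter=100, rng=0):
    """Reduce im (H x W x 3) to at most k distinct colors."""
    h, w, _ = im.shape
    pix = im.reshape(-1, 3).astype(float)
    gen = np.random.default_rng(rng)
    # sample random pixels (with replacement) to keep K-means cheap
    inds = gen.integers(0, h * w, size=min(n_samples, h * w))
    sample = pix[inds]
    uniq = np.unique(sample, axis=0)
    centers = uniq[:k] if len(uniq) >= k else sample[:k]   # distinct-color init
    for _ in range(max_iter):
        d2 = ((sample[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        c = d2.argmin(axis=1)
        new = np.array([sample[c == j].mean(axis=0) if np.any(c == j) else centers[j]
                        for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    # assign every pixel of the full image to its nearest center
    d2 = ((pix[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    q = centers[d2.argmin(axis=1)]
    return q.reshape(im.shape).astype(im.dtype)
```

On an image that already has at most k distinct colors, the quantized output should reproduce the input exactly.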
Damped wave equations with fast growing dissipative nonlinearities 1. Departamento de Matemática, Instituto de Ciências Matemáticas e de Computação, Caixa postal 668, 13560-970 São Carlos, São Paulo, Brazil 2. Institute of Mathematics, Silesian University, 40-007 Katowice, Poland $u_{tt} + a u_t + A u = f(u), \ t>0,$ $u(0)=u_0\in H^1_0(\Omega), \ \ u_t(0)=v_0\in L^2(\Omega),$ where $f:\mathbb{R}\to\mathbb{R}$ is a continuously differentiable function satisfying the growth condition $|f(s)-f(t)|\leq C|s-t|(1+|s|^{\rho-1}+|t|^{\rho-1})$, $1<\rho<\frac{N+2}{N-2}$ ($N\geq 3$), and the dissipativeness condition $\limsup_{|s|\to\infty} \frac{f(s)}{s}< \lambda_1$, with $\lambda_1$ being the first eigenvalue of $A$. We construct the global weak solutions of this problem as the limits as $\eta\to0^+$ of the solutions of wave equations involving the strong damping term $2\eta A^{1/2} u_t$ with $\eta>0$. We define a subclass $\mathcal{LS}\subset C([0,\infty),L^2(\Omega)\times H^{-1}(\Omega))\cap L^\infty([0,\infty),H^1_0(\Omega)\times L^2(\Omega))$ of the 'limit' solutions such that through each initial condition from $H^1_0(\Omega)\times L^2(\Omega)$ passes at least one solution of the class $\mathcal{LS}$. We show that the class $\mathcal{LS}$ has the bounded dissipativeness property in $H^1_0(\Omega)\times L^2(\Omega)$ and we construct a closed bounded invariant subset $\mathcal{A}$ of $H^1_0(\Omega)\times L^2(\Omega)$, which is weakly compact in $H^1_0(\Omega)\times L^2(\Omega)$ and compact in $H^s(\Omega)\times H^{s-1}(\Omega)$, $s\in[0,1)$. Furthermore, $\mathcal{A}$ attracts bounded subsets of $H^1_0(\Omega)\times L^2(\Omega)$ in $H^s(\Omega)\times H^{s-1}(\Omega)$, for each $s\in[0,1)$. For $N=3,4,5$ we also prove a local uniqueness result for the case of smooth initial data. Keywords: damped wave equations, singular perturbations, critical exponents, dissipativeness, asymptotic behavior of solutions. Mathematics Subject Classification: Primary: 35L05; Secondary: 35B25, 35B33, 35B40.
Citation: Alexandre Nolasco de Carvalho, Jan W. Cholewa, Tomasz Dlotko. Damped wave equations with fast growing dissipative nonlinearities. Discrete & Continuous Dynamical Systems - A, 2009, 24 (4) : 1147-1165. doi: 10.3934/dcds.2009.24.1147
Mon, 16 Apr 2018 I dreamed this one up in high school and I recommend it as an exercise for kids at an appropriate level. Consider the set of all Roman numerals $$\{ \text{I}, \text{II}, \text{III}, \text{IV}, \text{V}, \ldots, \text{XIII}, \text{XIV}, \text{XV}, \text{XVI}, \ldots, \\ \text{XXXVIII}, \text{XXXIX}, \text{XL}, \text{XLI}, \ldots, \text{XLIX}, \text{L},\ldots,\\ \text{C}, \ldots , \text{D}, \ldots, \text{M}, \ldots, \text{MM}, \ldots, \text{MMM}, \ldots, \text{MMMM}, \ldots, \text{MMMMM}, \ldots \}$$ where we allow an arbitrarily large number of M's on the front, so that every number has a unique representation. For example the number 10,067 is represented by !!\text{MMMMMMMMMMLXVII}!!. Now sort the list into alphabetical order. It is easy to show that it begins with !!\text{C}, \text{CC}, \text{CCC}, \text{CCCI}, \ldots!! and ends !!\text{XXXVII}, \text{XXXVIII}!!. But it's still an infinite list! Instead of being infinite at one end or the other, or even both, like most infinite lists, it's infinite in the middle. Of course once you have the idea it's easy to think of more examples (!!\left\{ \frac1n\mid n\in\Bbb Z, n\ne 0\right\}!! for instance) but I hadn't seen anything like this before and I was quite pleased.
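The claimed endpoints are easy to check with a short script; the range up to 3999 stands in for the infinite list, since adding more leading M's only changes the middle of the sorted order:

```python
def to_roman(n):
    """Standard subtractive Roman numeral (arbitrarily many leading M's)."""
    vals = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'), (100, 'C'),
            (90, 'XC'), (50, 'L'), (40, 'XL'), (10, 'X'), (9, 'IX'),
            (5, 'V'), (4, 'IV'), (1, 'I')]
    out = []
    for v, s in vals:
        q, n = divmod(n, v)
        out.append(s * q)
    return ''.join(out)

# ASCII order of the letters (C < D < I < L < M < V < X) matches alphabetical order,
# so plain string sorting does the job.
numerals = sorted(to_roman(n) for n in range(1, 4000))
print(numerals[:3], numerals[-2:])   # ['C', 'CC', 'CCC'] ['XXXVII', 'XXXVIII']
```

The sorted list indeed starts at C and ends at XXXVIII, with everything else — including all those strings of M's — crowded into the middle.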
Zzo38 is a user on Wikipedia (and other stuff as well). Also search on Google for "zzo38 OR zzo38computer" Multi-licensed with Creative Communism I agree to multi-license my text contributions, unless otherwise stated, under Wikipedia's copyright terms and the terms of creative communism. Use it as you seem fit. Spread it. Take what you want, add, remove, change and improve. Why not sign with you own name and claim authorship? One rule, however: Don't you ever try subjecting this material under the terms of copyright, intellectual property, or similar licenses of evil. Please be aware that other contributors might not do the same, so if you want to use my contributions under the terms of communism, please check the multi-licensing guide, or use a peer-to-peer network to override such restrictions of free use. All contributions by this user are hereby released into the public domain I, the author, hereby agree to waive all claim of copyright (economic and moral) in all content contributed by me, the user, and immediately place any and all contributions by me into the , unless otherwise noted. public domain I grant anyone the right to use my work for any purpose, without any conditions, to be changed or destroyed in any manner whatsoever without any attribution or notification. English This user knows that 'to' is pronounced /tʊ/, not /a/. "…" This user favours typewriter style over typographic ones. quotation marks php This user can program in . PHP This user is an advanced TI-BASIC programmer. This user is against . DRM {{User WLM hate}} This user has no cell phone. IRC This user uses for communicating over netcat IRC. > _ This user often uses the command line when operating their computer. This user is a player. shogi This user's high score on is Pokemon de Panepon Level 99 BALL mode 6-digits mode 999999. !@.. This user prefers computer games that are text-only and without graphics. ∑ n = 1 ∞ n x x ! 
{\displaystyle \sum _{n=1}^{\infty }{\frac {n^{x}}{x!}}} This user understands capital sigma and pi notation, and you should too! ∏ n = 1 ∞ n x x ! {\displaystyle \prod _{n=1}^{\infty }{\frac {n^{x}}{x!}}} 2 {\displaystyle {\sqrt {2}}} This user knows how to prove that the is irrational. square root of two アカギ This user likes Akagi manga. User:Mythdon/Userboxes/Analogoverdigital This user plays the . piano BACH OWNS This user is a fan of the works of Bach. ? This user follows their own political ideals. This user believes marriage is between one man and one woman, but doesn't think the government should dictate morality or impose any restrictions on love. All this user is sayingis give a chance. peace ∂ 2 Ψ ∂ t 2 = ∇ 2 Ψ {\displaystyle \scriptstyle {\frac {\partial ^{2}\Psi }{\partial t^{2}}}=\nabla ^{2}\Psi } This user is interested in . mathematical physics This user has a keen interest in . physics Name This user wants their name to be confidential XXX Fantastic Brain This user can't stop thinking. This user respects the beliefs and religions of others. 1, 2, 3... This userbox is a test. Please tell this user if you don't see it. This user has been on Wikipedia for 0 days. This place is reserved for a userbox. This user has made over 97 edits on Wikipedia. patent This user is against patents. This user tries to do the right thing. If they make a mistake, please . let them know This editor is strictly opposed to in articles except when articles would otherwise be protected from editing. FlaggedRevisions NO This user believes that Flagged Revisions are an abomination that will, if enacted, lead to the end of Wikipedia. This user has a sense of humour and shows it on their . userpage Please Sign Your Posts UBX This user has way too many userboxes tools This user reverts vandalism manually, without the help of any tools such as Twinkle. This user believes they are unique, just like everybody else. =] @ This user does not wish to disclose their email or screen name. 
No iPod This user does not use an iPod and does not wish to own one.. User:Zheliel/notabot This user still remembers . these This user needs more userboxes. MORE, I tell you, more!!! Muhahaha! This user does not smoke . In Soviet Russia, userbox reads you. This user is . drug-free a²+b²=c² This user has invented their own proof(s) of the Pythagorean theorem, independently of anyone else who may have invented the same proof (if any). {{Wiki}} This user can write in the MediaWiki language. {{ t| 2}} This user understands what templates do, and how to write them. XUL This user can code in XUL. This user is not a pet owner. This user dislikes commercials that happen during TV shows and movies and finds them annoying "…"! US vs. UK This user uses " ". Forcing internal punctuation leads to logical quotation marks factual errors. It's not a nationalistic style issue! This user is old enough to remember what a is, and that's all you need to know. typewriter Majority ≠ right This user recognizes that even if 300,000,000 people make the same mistake, it's still a mistake. its This user understands the difference between its ( ) and of it it's ( or it is ). it has This user spends most of his free time thinking. This user does not use automated or semi-automated editing tools. This user believes that a user's edit count does not necessarily reflect on the value of their contributions to Wikipedia. This user remembers VHS. ± Charities for poor people and monsters with names starting with "A" . D&D This user plays monster characters in Dungeons and Dragons game. BBS This user knows what a is and has accessed them by phone line. BBS This user built his/her own . PC This user can type 119 WPM. C This user can program in . C This user is a philosopher of "pocket monster". C:\> _ This user uses , or knows how to use it. DOS This user would rather drink tea than coffee. 4+4=10 This user likes octal. 
10 This user realizes that there are 10 types of people in this world: those who understand binary and those who do not. This user knows how to travel 1 mile due south, due east, then due north and end up where they started. † This user is a collector. MOD This user knows that a is an positron moving backwards in time. And so should you! electron K This user uses the scale, because Kelvin 0K is much cooler than −273.15 °C or −459.67 °F. mth-N This user can solve ADVANCED math problems in their head. Go slowly insane as you sit and stare... WEB This user programs in Enhanced CWEB. User:FleetCommand/Not Wiki Respect Anons This user respects anonymous users and highly values their contributions This user has some of these. This user learnt how to use all their fingers for typewriting. Peace This user is not a ghost. Don't worry, , I'm not a planet either. Pluto test This is a test userbox. It is just a test. Were it an actual userbox it would have useful information. † This user is confused by every aspect of life, including his/her own confusion. False This userbox is false. This user believes in VOLUNTEERING This user has big spider on their bed. This user might (or might not) be the Mad . TeX nician CR-0 This user is totally sane, and proud of it. CR-1 This user might be a little off, but only when (s)he's having a bad day. CR-2 This user occasionally does things that might be perceived as crazy, but all in all (s)he's pretty down to earth. CR-3 This user is definitely on the weird side, but cannot actually be considered crazy, just different. CR-4 This user always seems a little off, and sometimes is quite crazy. CR-5 This user is most likely completely crazy. This user is not a fire truck. OGG This user prefers listening to the superior quality of audio files on a music player instead of MP3. .ogg † 0 This user has 0 children. WP: BOX This user likes to use Wikipedia . shortcuts This user knows how to Find X. RD Non cogito ergo usorarcam sum. 
This user knows that their userboxes are disorganised but has decided that they don't really care. This userbox is correct. Too many userboxes This user may have too many ... nah, no way!! userboxes This user knows that this userbox is pointless! YOMI This user plays Yomi cards. DVI PDF This user prefer over DVI PDF. This user believes in freedom of all types of information for all. This user adamantly considers both Pluto and to be "Planets" Eris This user sleeps, but not for predictable amounts of time. User:Zzo38/Userboxes/nonhumanist no G This user only uses Google products as a last resort. This user uses as a Wikipedia primary point of reference. This user admires all expressions of nature. It is 01:57:43 on October 15, 2019, according to the server's time and date. This user advocates personal belief but does not support organised religion. User:Zzo38/Userboxes/nonhumanist religion This user is more than happy to retain their " and delights in telling their mental health professional to screw off. personality disorder(s)" This user is not an administrator and has no desire to be one. ; This user is addicted to ; he or she uses them frequently. semicolons Oh Shit! This user thinks does not exist. (S)he believes that such words are the same as any other words and should be treated as such. profanity This user believes in . logic ◊ x → ◻ x {\displaystyle \Diamond x\!\!\rightarrow \!\!\Box x} This user lives in a . mathematical universe This user opposes copyright, and encourages the free exchange of culture. 9.8 m/s 2 This user is often touching the floor, because gravity has too much of an effect on them. This user supports the right to arm bears. This user likes jokes and humor. There is water on Mars! User:Neutralhomer/Userboxes/Time2006 The sanity of this user has been disputed. Refer to the talk page to discuss this topic. This user's favorite article can be seen here. [[red]] This editor adds or leaves in useful red links to encourage content creation. 
This user would donate to the Wikimedia Foundation if they could; however, they are too young to do so or do not have a credit card. COMMON REASON This user believes that "common sense" is a worthless delusion and prefers to argue using reason. This user uses the "Show preview" button so as to avoid making a great number of useless, successive edits. This user isn't looking to receive any . This user doesn't in fact even own a barnstars . barn This user uses the Nostalgia skin. WP EN This user thinks Wikipedia is better than . Encarta This User reads for pleasure. You'd better believe it! This user still uses Wikipedia even when their teacher tells them otherwise. ∑ {\displaystyle \sum } This user is greater than the sum of his or her userboxes. Dems This user knows both major US parties have been hijacked by special interests, can no longer identify with either one, and has no hope that the situation will ever improve. Pubs This user loves the environment and wants to protect it. This user thinks Eris and Pluto are both planets, and the IAU can be stupid. This user is a who believes Wikipedians should avoid agressive non-notability deletion and advocates that the preservationist notability rule be amended to specify that any subject with some historical value can merit an article. Details . here τ This believes user should replace pi as a mathematical constant. tau This user prefers and supports licensing. Copyleft STOP GLOBAL BLOATING! Smaller and lighter is better. SQL This user can program in . SQL This user arranges existing music for other instruments. respect This user respects others' religions and realises not all people wish to follow the same path. This user is interested in , but isn't necessarily a Pagan. Paganism respect This user respects others' religions and realises not all people wish to follow the same path. This user does not smoke. This user is open to the concept of religion as long as lives are not harmed . 
x n + y n = z n {\displaystyle x^{n}+y^{n}=z^{n}} This user is fascinated by number theory. This user contributes using a . PC "What is right is not always popular, what is popular is not always right." ~ Albert Einstein "Learn from the mistakes of others. You can never live long enough to make them all yourself." ~ Groucho Marx User:Bojan~enwiki/Userbox/Open Source *nix This user prefers a *nix operating system. js This user browses with JavaScript disabled. Everyone should. C This user programs in the awesome language of . C This userbox was put here by mistake. This user thinks they might have too many . Oh well... userboxes This user has just wasted both their time and yours placing this userbox on their profile, because it says absolutely nothing important, interesting or relevant to anything. LOL mailx This user is using as their primary e-mail client. Heirloom mailx
I think the questions were about unconditional proofs or counterexamples. I don't have an answer to any of those questions, but I think it is still interesting to understand how the yoga of motives suggests natural answers to these questions, even though this may seem trivial to people familiar with the subject. Let's work in the setting of Voevodsky's $\otimes$-triangulated categories $DM^{eff}(\mathbb{Q}):= DM_{gm}^{eff}(Spec(\mathbb{Q});\mathbb{Q}) \subset DM_{gm}(Spec(\mathbb{Q});\mathbb{Q}) =: DM(\mathbb{Q})$. Remember that the latter is obtained by formally inverting $\mathbb{Q}(1)$ and that it is a rigid $\otimes$-triangulated category. Question 1: Is the ring of (effective) periods a field? Following Beilinson's "Remarks on Grothendieck's standard conjectures", let's assume the Motivic conjecture: There exists a non-degenerate t-structure on Voevodsky's category $DM(\mathbb{Q}) := DM(Spec(\mathbb{Q});\mathbb{Q})$ such that the Betti realization functor $\omega_B: DM(\mathbb{Q}) \to D^bMod_f(\mathbb{Q})$ is a $t$-exact $\otimes$-functor. This is an extremely strong conjecture as it implies the standard conjectures in characteristic 0. Under this conjecture, the heart of the motivic $t$-structure is a Tannakian category $MM(\mathbb{Q})$. We have Betti and de Rham realization functors $\omega_B,\omega_{dR}: MM(\mathbb{Q}) \rightrightarrows Mod_f(\mathbb{Q})$, and we can define $$ Per := Isom^\otimes(\omega_{dR},\omega_{B}).$$ This is an fpqc-torsor under the motivic Galois group $G_B := Aut^\otimes(\omega_B)$. Define the algebra of motivic periods as the ring of regular functions on the Betti/de Rham torsor: $$ P_{mot} := \mathcal{O}(Per).$$ Integration of differential forms (or more generally the Riemann–Hilbert correspondence) defines a $\mathbb{C}$-point $$ Spec(\mathbb{C}) \longrightarrow Per.$$ The image of the corresponding morphism $P_{mot} \to \mathbb{C}$ is the ring of periods $P$.
Note: I think this whole part is actually known unconditionally in the setting of Nori's motives (see arXiv:1105.0865v4). Period conjecture: The morphism $P_{mot} \to P$ is an isomorphism. Now based on these tiny little conjectures we can say Prop: $P_{mot}$ is not a field, so $P$ isn't either. Proof: Indeed, @quasi-coherent explained in his comment how faithful flatness of the torsor would imply that if $P_{mot}$ were a field then it would be algebraic over $\mathbb{Q}$, which contradicts the fact that $2\pi i$ belongs to the image of $P_{mot}\to \mathbb{C}$. Question 2: Is it true that $(P_{mot}^{eff})^\times = \overline{\mathbb{Q}}^\times$? This post is getting too long already so I'll try and write down the rest later, but the basic idea is that invertible effective motives are Artin motives. This can be proved in terms of weights or niveau (level).
To solve and obtain the potential of a 1-D Hamiltonian we must solve the integral equation $$ N(E)= A \int_{0}^{E}\frac{V^{-1}(x)}{\sqrt{E-x}}\,dx$$ for some constant $A$. Then my question is: since the approximate spectral function is $$ N(E)= \langle N(E)\rangle+ \sum_{p.p.o}A_{p}e^{\frac{iS(E)}{\hbar}}+c.c.,$$ with $\langle N(E)\rangle$ meaning the WKB (smooth) approximation to the spectral staircase and p.p.o meaning summation over the periodic orbits with $ S(E)= \sqrt{2m}\, l_{p.o}$ the action over the closed orbits, can I recover the potential from the Gutzwiller trace formula? This is my ansatz: http://vixra.org/pdf/1301.0078v3.pdf, which I think is valid at least in one dimension.
I need to find the norm of $x \in \mathbb{R}^2$ if the unit ball is defined by these inequalities: $B=\left\{\begin{pmatrix} x_1\\ x_2 \end{pmatrix}: -a_1\leq x_1\leq a_1,\ -a_2\leq x_2\leq a_2 \right\}$. What exactly am I asked to do? Clearly $|x_1| \leq a_1$ and $|x_2| \leq a_2$, so the Euclidean norm of any point in $B$ is at most $\sqrt{a_1^2+a_2^2}$. Thanks a lot!
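For what it's worth, the gauge (Minkowski functional) of this rectangle is $\|x\| = \max(|x_1|/a_1,\, |x_2|/a_2)$, which can be sanity-checked numerically; the helper name below is mine:

```python
def rect_norm(x1, x2, a1, a2):
    """Minkowski functional (gauge) of the rectangle [-a1, a1] x [-a2, a2]."""
    return max(abs(x1) / a1, abs(x2) / a2)

# The unit ball {x : rect_norm(x) <= 1} is exactly the rectangle B:
a1, a2 = 2.0, 3.0
assert rect_norm(a1, a2, a1, a2) == 1.0    # corner lies on the unit sphere
assert rect_norm(a1, 0, a1, a2) == 1.0     # so does an edge midpoint
assert rect_norm(1.0, 1.0, a1, a2) < 1.0   # interior point
assert rect_norm(3 * a1, 0, a1, a2) == 3.0 # positive homogeneity
```

Homogeneity and the triangle inequality follow because this is a weighted sup-norm.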
I'm looking to solve the equation $$ x^2y'' + xy' - (x^2+p^2)y = 0 $$ in particular for $p = \frac12$. I've already determined the indicial equation to be $r^2 - p^2 = 0$, and so the roots are $r = \pm p$. I've also already determined a series solution (which seems to be a constant factor off, namely $a_0$, from the modified Bessel function $I_p$): $$ y_1 = a_0x^r\sum_{n=0}^\infty\left(\prod_{i=1}^n i(i+p)\right)^{-1}\left(\frac x2\right)^{2n} $$ Now, for $p = \frac12$, I noticed that $x^{-1/2}e^x$ is a solution (by brute force). However, I can't for the life of me figure out why plugging $p = \frac12$ into the above series equation should give me something that looks remotely like $x^{-1/2}e^x$. To go through my steps, I assumed $y = \sum_{n\ge 0}a_nx^{n+r}$ and substituted this into the DE to get $$(r^2 - p^2)a_0x^r + ((r+1)^2 - p^2)a_1x^{r+1} + \sum_{n=2}^\infty ([(n+r)^2 - p^2]a_n - a_{n-2})x^{n+r} = 0$$ Clearly $r = \pm p$ in order for $a_0\neq 0$. However, it's the $x^{r+1}$ coefficient which is bugging me. Namely, for $p = \frac12$, taking the root $r = -p = -\frac12$ gives $(r + 1)^2 - p^2 = 0$, which means $a_1$ can be anything, including nonzero, meaning the odd powers of $x$ also have coefficients in the series solution. This doesn't affect the solution $y = x^{1/2}\sum(\cdots)$, since that still requires $a_1 = 0$; however, the solution $y = x^{-1/2}\sum(\cdots)$ changes, which is why I think $x^{-1/2}e^x$ might still arise from the Frobenius solution. That said, I simply can't figure out how to get to $x^{-1/2}e^x$. Even if you account for the coefficients of the odd powers, the series doesn't look like $e^x$ at all as far as I can tell. Are you simply supposed to arrive at $x^{-1/2}e^x$ from another method?
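One can at least confirm symbolically that $x^{-1/2}e^x$ satisfies the equation for $p=\tfrac12$ — a check with sympy, not a derivation of the series connection:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
p = sp.Rational(1, 2)
y = x ** sp.Rational(-1, 2) * sp.exp(x)

# Plug y = x^(-1/2) e^x into x^2 y'' + x y' - (x^2 + p^2) y.
residual = x**2 * y.diff(x, 2) + x * y.diff(x) - (x**2 + p**2) * y
assert sp.simplify(residual) == 0   # it is indeed a solution
```

The check works because the $x^{3/2}$, $x^{1/2}$ and $x^{-1/2}$ contributions cancel term by term after differentiating.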
So for this one I'm having trouble isolating for $y$. If it's not possible, then write it in the form with $dy$ alongside the $y$ variable and $dx$ alongside the $x$ variable. $$\frac{dy}{dx}-2xy=e^{x^{2}}.$$ If you have a differential equation of the form $$\frac{dy}{dx} + a(x) y(x) = b(x) \tag{1}$$ we call the equation a first-order linear ODE and we can obtain its solution using the following method. First, we multiply both sides of $(1)$ by a function $f(x)$ (called the integrating factor) and we obtain $$f y' + fay = fb \tag{2}$$ Using the product rule $(fy)' = fy' + f'y$, we can rewrite $(2)$ as $$(fy)' - f'y + fay = fb \Longrightarrow (fy)' + y \left(fa - f' \right) = fb \tag{3}$$ If it is the case that $fa - f' = 0$, then the LHS of $(3)$ is just $(fy)'$, and integrating both sides would yield an expression for $fy$. So let's solve for the $f$ that guarantees that $f' = fa$.
Solving this separable differential equation for $f$, we get that $$\frac{df}{dx} = f'(x) = f(x)a(x) \Longrightarrow \frac{df}{f(x)} = a(x)\, dx \Longrightarrow \log(f(x)) = \int a(x)\, dx \Longrightarrow f(x) = e^{\int a(x) dx} $$ Using the $f$ we just found, $(3)$ therefore reduces to $$(fy)' = fb \Longrightarrow fy = \int fb \: dx \Longrightarrow y = \frac{\int fb \: dx}{f}$$ Plugging in our formula for $f(x)$, we get that the solution to $(1)$ is $$y(x) = \displaystyle\frac{\displaystyle\int \left(b(x) \: e^{\int a(x) dx} \right) dx}{e^{\int a(x) dx}}$$ Now, noting that $a(x) = -2x$ and $b(x) = e^{x^2}$ in your example, we see that $$y(x) = \displaystyle\frac{\displaystyle\int \left(e^{x^2} \: e^{\int -2x dx} \right) dx}{e^{\int -2x dx}} = \frac{\displaystyle\int \left(e^{x^2} e^{-x^2}\right)dx}{e^{-x^2}} = \frac{\int e^0\, dx}{e^{-x^2}} = \frac{x+c}{e^{-x^2}} \Longrightarrow \boxed{y(x) = xe^{x^2} + c e^{x^2}}$$ Hint: Multiply throughout by the integrating factor, $e^{\int -2x dx} = e^{-x^2}$. Then notice that $$\begin{align}\frac{d}{dx}(e^{-x^2}y) &= e^{-x^2}\frac{dy}{dx} - 2e^{-x^2}xy\\&=e^{-x^2}\cdot LHS\end{align}$$ Rewrite the equation like this: $$e^{-x^2}\frac{dy}{dx}-2xe^{-x^2}y=1$$ Notice that if we apply the product rule in differentiating $ye^{-x^2}$ with respect to $x$, we get exactly the left hand side. In other words, the equation is equivalent to: $$\frac{d(e^{-x^2}y)}{dx}=1$$ Integrating both sides yields: $$e^{-x^2}y=x+c\implies y=xe^{x^2}+ce^{x^2}.$$ This kind of technique can be generalised to the method of 'integrating factors', however it happens to work out nicely enough here that you can just follow your nose. Hint: Because of the RHS, suppose that you define $y(x)=z(x)e^{x^2}$; then the differential equation becomes $$z'(x)=1,$$ which is quite easy to integrate for $z(x)$. I am sure that you can take it from here.
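As a sanity check: the integrating factor $e^{-x^2}$ gives $(e^{-x^2}y)'=1$, hence $y=(x+c)e^{x^2}$, and sympy agrees:

```python
import sympy as sp

x, c = sp.symbols('x c')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x) - 2 * x * y(x), sp.exp(x**2))

# Verify the closed form y = (x + c) e^{x^2} directly:
candidate = (x + c) * sp.exp(x**2)
assert sp.simplify(candidate.diff(x) - 2 * x * candidate - sp.exp(x**2)) == 0

# sympy's dsolve finds the same family (up to the name of the constant):
sol = sp.dsolve(ode, y(x))
assert sp.checkodesol(ode, sol)[0]
```

Both checks confirm that the general solution grows like $e^{x^2}$ rather than decaying.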
I am reading the chapter on electron-proton scattering from "Quantum Field Theory in a Nutshell". The author calculates the amplitude of electron-proton scattering (up to second order). The calculation uses the single boson-exchange Feynman diagram (figure not shown). We start by using a massive spin-1 boson as the exchanged particle, and hope to get rid of the mass later. Let the mass of the boson be $\mu$. Using the Feynman rules, we obtain the amplitude $$\mathcal{M}(P,P_N) = (-ie)(ie)\frac{i}{(P-p)^2-\mu^2}\left(\frac{k_\mu k_\nu}{\mu^2}-\eta_{\mu\nu}\right)\bar{u}(P)\gamma^\mu u(p)\bar{u}(P_N)\gamma^\nu u(p_N).$$ He claims that the $k_\mu k_\nu$ term simply disappears because $$k_\mu \bar{u}(P)\gamma^\mu u(p) = (P-p)_\mu \bar{u}(P)\gamma^\mu u(p) = \bar{u}(P)(\not P - \not p) u(p)=\bar{u}(P)(m - m) u(p)=0.$$ However, the last equation follows from the Dirac equation, which is the equation of motion of the electron. The point that I do not understand is: how can we use the equation of motion in quantum field theory? Clearly, the equation of motion comes from minimising (extremising) the action, which leads to the classical field rather than the quantum field.
A linear model has $t$-distributed noise $$ Y= bX + \epsilon $$ where $\epsilon \sim t(0, \sigma^2, df)$, with mean 0, variance $\sigma^2$ and $df$ degrees of freedom. Given an independent sample $(x_i, y_i), i=1, \dots, n$, suppose we already have estimates of $b, \sigma^2, df$. How would you estimate a $1-\alpha$ prediction interval for $Y$ at $X=x$? Is it still the same as when the noise $\epsilon$ is normally distributed: $$ \hat{y} \pm t_{\alpha/2, n-2}\, s_y \sqrt{\frac{1}{n} + \frac{(x-\bar{x})^2}{\sum (x_i-\bar{x})^2}} $$ where $$ s_y = \sqrt{\frac{\sum (y_i - \hat{y}_i)^2}{n-2}} $$ Thanks!
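Not an answer, but a Monte Carlo sketch of one natural proposal: rescale the estimated $\sigma$ to the $t$ scale $\theta=\sigma\sqrt{(df-2)/df}$ and use the $t_{df}$ quantile. Parameter uncertainty is ignored here, so this only probes the large-$n$ behavior, and all numbers are made-up illustration values (the quantile is hard-coded to avoid a scipy dependency):

```python
import numpy as np

rng = np.random.default_rng(0)
b, sigma, df, n = 2.0, 1.5, 5, 200
t_crit = 2.5706                          # ~ t_{0.975, df=5}
scale = sigma * np.sqrt((df - 2) / df)   # t(df) scaled so Var(eps) = sigma^2
x0, hits, trials = 0.5, 0, 2000
for _ in range(trials):
    x = rng.uniform(-1, 1, n)
    eps = rng.standard_t(df, n) * scale
    yv = b * x + eps
    b_hat = (x @ yv) / (x @ x)                    # OLS through the origin
    s = np.sqrt(((yv - b_hat * x) ** 2).sum() / (n - 1))
    theta_hat = s * np.sqrt((df - 2) / df)        # estimated t scale
    y_new = b * x0 + rng.standard_t(df) * scale   # fresh observation at x0
    lo, hi = b_hat * x0 - t_crit * theta_hat, b_hat * x0 + t_crit * theta_hat
    hits += lo <= y_new <= hi
print(hits / trials)   # close to the nominal 0.95
```

Note that plugging the raw $s$ (which estimates $\sigma$) into the $t$ quantile without rescaling would give a systematically wider, over-covering interval, which is exactly the kind of miscalibration the question is asking about.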
In some sense the empty set ($\emptyset$) and the global set of all sets ($G$) are the ends of the universe of mathematical objects. The world which $ZFC$ describes has an end at the bottom and is endless at the top. In a straightforward way one can even find a theory equiconsistent with $ZFC$ whose world is endless at the bottom and bounded at the top by the set of all sets. It suffices to consider the theory $ZFC^{-1}$ ($ZFC$ inverse), which is obtained from $ZFC$ by replacing each phrase $x\in y$ in the axioms of $ZFC$ by the phrase $\neg (x\in y)$. This operation, for example, transforms the axiom of the empty set of $ZFC$ into a statement which asserts "the set of all sets exists": $[\exists x \forall y~~\neg(y\in x)]\mapsto [\exists x \forall y~~\neg \neg(y\in x)] $ Even the axiom of extensionality remains unchanged because we have: $[\forall x\forall y~~(x=y\longleftrightarrow \forall z~~(z\in x\longleftrightarrow z\in y))]\mapsto [\forall x\forall y~~(x=y\longleftrightarrow \forall z~~(\neg (z\in x)\longleftrightarrow \neg (z\in y)))]$ So the "set of all sets" is unique in this theory. The equiconsistency follows simply from the fact that for every set (or proper class) $M$ and every binary relation $E$ on it we have: $\langle~M~,~E~\rangle \models ZFC \Longleftrightarrow \langle~M~,~M\times M\setminus E~\rangle \models ZFC^{-1}$ So it is trivial that $ZFC^{-1}\models \neg (\exists x \forall y~~\neg(y\in x))$, in the same way one can prove $ZFC\models \neg (\exists x \forall y~~y\in x)$ by Russell's paradox. But the situation seems rather strange when one wants to find a theory equiconsistent with $ZFC$ which has end points in both the up and down directions, because the existence of two contradictory objects like $\emptyset$ and $G$ seems ontologically incompatible in a particular "$ZFC$-like" world.
So the question is: Question (1): Is there an $\mathcal{L}=\lbrace \in\rbrace$-theory $T$ such that the following conditions hold: $(1)~Con(ZFC)\Longleftrightarrow Con(T)$ $(2)~T\models \exists !x~\forall y~~(y\in x)$ $(3)~T\models \exists !x~\forall y~~\neg (y\in x)$ Remark (1): Quine's New Foundations axiomatic system ($NF$) is not an answer because its equiconsistency with $ZFC$ is still unknown. One can also define two dual sets from the empty and global sets: the set which does not belong to any other set ($\emptyset^{\star}$) and the set which belongs to every set ($G^{\star}$). Now one can restate question (1) as follows: Question (2): Is there an $\mathcal{L}=\lbrace \in\rbrace$-theory $T$ such that the following conditions hold: $(1)~Con(ZFC)\Longleftrightarrow Con(T)$ $(2)~T\models \exists !x~\forall y~~(x\in y)$ $(3)~T\models \exists !x~\forall y~~\neg (x\in y)$ It is also interesting to have an equiconsistent theory which has no end points in either direction. So: Question (3): Is there an $\mathcal{L}=\lbrace \in\rbrace$-theory $T$ such that the following conditions hold: $(1)~Con(ZFC)\Longleftrightarrow Con(T)$ $(2)~T\models \neg (\exists x~\forall y~~(y\in x))$ $(3)~T\models \neg (\exists x~\forall y~~\neg (y\in x))$
Yes, it is possible. Consider the graph $\Gamma$ of$$z=\phi\left(\sqrt{x^2+y^2}\right),$$ where $\phi\colon\mathbb R\to \mathbb R$ is a convex even function with $\phi(0)=0$. With its intrinsic metric, $\Gamma$ forms an Alexandrov space. Note that the curve $\gamma$ on the graph described by $y=0$ is formed by two geodesics starting at the origin. For an appropriate choice of $\phi$ there is always a shortcut which goes around the origin, say from $(\varepsilon,0,\phi(\varepsilon))$ to $(-\varepsilon,0,\phi(\varepsilon))$. That is, an arbitrarily small segment of $\gamma$ containing the origin does not minimize length. Roughly, $\phi$ has to have an infinite second derivative at $0$ in a strong sense.
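For the simplest (non-smooth) choice $\phi(r)=c\,r$, whose graph is a cone, the shortcut can be exhibited numerically by unrolling the cone into a flat sector (a sketch; the constants $c$ and $\varepsilon$ below are arbitrary):

```python
import math

# Cone z = c*sqrt(x^2 + y^2).  Unrolled into the plane, a full circle of
# base angle 2*pi becomes a sector of angle 2*pi*sin(alpha), where
# sin(alpha) = 1/sqrt(1 + c^2) is the half-angle of the cone.
c, eps = 2.0, 1.0

l = eps * math.sqrt(1 + c**2)                # slant (intrinsic) distance to apex
delta = math.pi / math.sqrt(1 + c**2)        # unrolled angle between the two points

through_apex = 2 * l                         # path along gamma, through the origin
around = 2 * l * math.sin(delta / 2)         # straight chord in the unrolled sector

# delta < pi, so the chord stays inside the sector and is a valid path
assert around < through_apex
```

This confirms that on the cone the two-geodesic curve through the apex is never minimizing; the smooth case in the answer needs $\phi''$ blowing up at $0$ to reproduce the same effect.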
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
The problem is to find a line that passes through $p$ and has all the other points in $S$ on one side. This is a two-dimensional linear-programming problem, so it can be solved in $O(n)$ time using textbook geometric algorithms. But let me describe a self-contained solution. To simplify notation, translate all the points so that $p$ is the origin $(0,0)$, and let $Q = S\setminus\{p\}$. Then we want to determine if there is a real number $m$ such that either (1) $y < mx$ for all $(x,y)\in Q$ or (2) $y > mx$ for all $(x,y)\in Q$. In the first case, $p$ is a vertex of the upper hull of $S$; in the second case, $p$ is a vertex of the lower hull of $S$. I'll describe an algorithm for the first case; the other case is symmetric. If any point in $Q$ lies directly above $p$ (that is, if any point in $Q$ has coordinates $(0,y)$ for some $y>0$), then $p$ cannot lie on the upper hull. It is easy to check this condition in $O(n)$ time. So assume no points in $Q$ lie directly above $p$. The $y$-axis splits $Q$ into two subsets $L$ (left) and $R$ (right). Points in $L$ have negative $x$-coordinates, and points in $R$ have positive $x$-coordinates. (Points directly below $p$ don't matter; just ignore them.) Let$$ m_L = \min_{(x,y)\in L} \frac{y}{x},\quad M_R = \max_{(x,y)\in R} \frac{y}{x},\quad \text{and}\quad m = \frac{m_L + M_R}{2}.$$(Dividing $y < mx$ by a negative $x$ flips the inequality, which is why the left side takes a minimum: we need $M_R < m < m_L$.) Now there are three cases to consider: If $m_L > M_R$, then every point in $Q$ lies strictly below the line $y = mx$, so $p$ is a vertex of the upper hull. If $m_L = M_R$, then the line $y=mx$ passes through a point in $L$ and a point in $R$, and no point in $Q$ is strictly above that line. So $p$ lies on an edge of the upper hull, but it is not a vertex. If $m_L < M_R$, then at least one point in $L$ and at least one point in $R$ lie strictly above the line $y=mx$. So $p$ lies strictly below the upper hull. It is easy to compute $m_L$ and $M_R$ in $O(n)$ time. We don't actually need to compute $m$.
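The slope test above can be sketched as follows (integer coordinates assumed so that `Fraction` keeps the slope comparisons exact; the function name is my own):

```python
from fractions import Fraction

def on_upper_hull(p, points):
    """Return True iff p is a vertex of the upper hull of points + [p].

    Translates p to the origin, splits the remaining points by the sign
    of x, and compares the extreme slopes m_L and M_R.
    """
    px, py = p
    L, R = [], []
    for (x, y) in points:
        x, y = x - px, y - py          # translate p to the origin
        if x == 0:
            if y > 0:
                return False           # a point directly above p
            continue                   # directly below p: irrelevant
        (L if x < 0 else R).append(Fraction(y, x))
    if not L or not R:
        return True                    # all points on one side of the y-axis
    return min(L) > max(R)             # m_L > M_R: strictly separating line exists
```

For example, `on_upper_hull((0, 0), [(1, -1), (-1, -1)])` is `True` (both points lie below), while `on_upper_hull((0, 0), [(1, 1), (-1, 1)])` is `False` (the point sits strictly below the hull edge joining the other two).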
Determining the impedance of this circuit requires the right tool. If you use brute-force analysis, simply considering series-parallel combinations, you may quickly end up with an algebraic paralysis, to quote Dr. Middlebrook, who forged the extra-element theorem or EET from which the fast analytical circuit techniques or FACTs derive. The principle is simple: determine the time constants of the circuit when observed in two different conditions. When the excitation is zeroed, you determine the natural time constants of the circuit by temporarily disconnecting each energy-storing element and "looking" through its connecting terminals to determine the resistance \$R\$. You then form a time constant \$\tau\$ equal to either \$RC\$ or \$\frac{L}{R}\$. Assembling all these natural time constants forms the denominator \$D(s)\$. In the second condition, the excitation is back in place and you determine the resistance "seen" through the connecting terminals of the considered energy-storing element, but with the response nulled. These newly-determined time constants are then assembled to form the numerator \$N(s)\$. In this exercise, to determine the input impedance, the stimulus will be a current source while the response will be the voltage developed across its terminals. To zero the excitation, we turn the current source off, or simply open-circuit it. Then, we "look" at the resistance offered by each energy-storing element. This is what is shown in the below sketches: For the first sketches, a time constant is determined while the other elements are left in their dc states: shorted inductor and open capacitors. Then, for the 2nd-order time constants, one element is set in its high-frequency state while you "look" at the considered terminals to determine the resistance. 
When you read \$\tau_{12}\$, it implies that element 1 is set in its hi-frequency state (open-circuited inductor and shorted capacitor) while you look at the resistance offered by element 2's terminals. \$\tau_{123}\$ means that 1 and 2 are in their hi-frequency state while you look at the resistance offered from 3's terminals. For the zeroes, it is a bit more complicated. You perform similar operations while the response is nulled. A nulled response across the current source is equivalent to replacing the current source by a short circuit. Here we go: short the current source and repeat the same operations we did for the natural time constants. The resulting expressions are computed in the below Mathcad sheet. As you can see, no equations, just inspecting small drawings. If you make a mistake at the end, you just fix the guilty drawing and correct the corresponding time constants. With the brute-force analysis, when you spot a mistake, you yell, swear (I do!) and throw the draft on the wall : ) Then, the exercise consists of re-arranging the raw transfer function which, by the way, is already presented in a normalized form: this is another strength of the FACTs. I have approximated the 3rd-order denominator as the product of a low-frequency pole and a higher-frequency double pole. Another factorization in the numerator leads to a zero with a leading term. Comparing the pole and the zero shows that they are very close and thus compensate each other: they disappear from the equation. Perform another factorization and you obtain the low-entropy input impedance expression: \$Z_{in}(s)=H_{peak}\frac{1}{1+\frac{\omega_0Q}{s}+\frac{sQ}{\omega_0}}\$ The leading term \$H_{peak}\$ carries the unit of \$\Omega\$ and that's what you want. The expression I derived is: As you can see, both expressions lead to the exact same response. However, the expression you provided is a very compact version and kudos to its author! The complete picture is here: The plots are given here. 
They compare the brute-force expression with that obtained with the FACTs and the final factored version. They are all in excellent agreement. A SPICE simulation with a large number of points per decade (10000) confirms the exact peaking value computed by Mathcad. This is a typical example where the FACTs are the simplest and fastest tool you can think of. Breaking the circuit into a succession of simple drawings you inspect, rather than solving equations, is a tremendous advantage compared to other methods. Finally, some skill is needed to properly rearrange the final expression and unveil a leading term, but nothing insurmountable. Vive les FACTs!
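The peaking behaviour of the normalized form \$Z_{in}(s)=H_{peak}/(1+\frac{\omega_0 Q}{s}+\frac{sQ}{\omega_0})\$ can be verified numerically (a sketch; the values of \$H_{peak}\$, \$\omega_0\$ and \$Q\$ below are placeholders, not the actual circuit's):

```python
import math

# Placeholder values, not the actual circuit's components
H_peak, w0, Q = 50.0, 2 * math.pi * 1e3, 5.0

def Z_in(w):
    """Normalised input impedance evaluated at s = j*w."""
    s = 1j * w
    return H_peak / (1 + w0 * Q / s + s * Q / w0)

# At w = w0 the terms w0*Q/s and s*Q/w0 are -jQ and +jQ: they cancel,
# so |Z_in| peaks at exactly H_peak (in ohms).
assert abs(abs(Z_in(w0)) - H_peak) < 1e-9
# Away from resonance the magnitude drops on both sides.
assert abs(Z_in(w0 / 2)) < H_peak and abs(Z_in(2 * w0)) < H_peak
```

This is the same check the Mathcad plots perform graphically: the impedance magnitude peaks at \$H_{peak}\$ at the resonant frequency and rolls off on either side.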
In a simple way, we can define the electrostatic field by considering the force exerted by a point charge on a unit charge. In other words, we can define the electric field as the force per unit charge. To detect the electric field of a charge \(q\), we can introduce a test charge \(q_0\) and measure the force \(\overrightarrow {F}\) acting on it. Thus the force exerted per unit charge is: \( \overrightarrow{E} = \frac{\overrightarrow{F}}{q_0} \) Note that the electric field is a vector quantity that is defined at every point in space, the value of which depends only on the radial distance from \(q\). The test charge \(q_0\) itself exerts an electric field around it. Hence, to prevent the influence of the test charge, we must ideally make it as small as possible. Thus, \( \overrightarrow{E}\) =\( \lim_{{q_0}\to 0} \frac {\overrightarrow{F}}{q_0} \) This is the electric field of a point charge. Also, observe that it exhibits spherical symmetry, since the electric field has the same magnitude at every point of an imaginary sphere centered on the charge \(q\).
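As a quick numerical illustration of the field of a point charge, \(E = kq/r^2\) (the charge and distance below are made-up example values):

```python
# Field magnitude of a point charge: E = k*q/r^2
# q and r are assumed example values, chosen only for illustration.
k = 8.9875e9      # Coulomb constant, N*m^2/C^2
q = 1e-6          # source charge: 1 microcoulomb
r = 0.1           # distance: 10 cm

E = k * q / r**2  # field magnitude in N/C

# The same magnitude holds at every point of the sphere of radius r
# around q, which is the spherical symmetry noted above.
```

Here `E` works out to about \(9\times10^5\) N/C, identical in every direction at that radius.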
This is a confusing question because, while solubilities can be reported in mL/L, there can be ambiguity when choosing a pressure during conversion to this unit, for instance using the following equation to convert from molarity $c$ to volume/volume units: $$ \rho = \frac{cRT}{p}$$ In this online data page, for instance, in some columns the solubility is reported in mL/L, converted to this unit using a pressure of $\pu{1 atm}$ throughout (the source of the data cannot be verified), even as the partial pressure of oxygen $p_{O_2}$ is increased above $\pu{1 atm}$. In the OP the volume presumably refers to an equivalent volume of oxygen gas at the (partial) pressure of the gas above the liquid. Using the numbers from the OP, assuming the gas is ideal, then $ c=\frac{101325\times28\times 10^{-6}}{8.3145\times298.15} \pu{M} =\pu{ 1.1 \times 10^{-3}M}$ when $p = \pu{1 atm}$. On the other hand, if $p = \pu{4 atm}$, $ c=\frac{4\times101325\times28\times 10^{-6}}{8.3145\times298.15} \pu{M} =\pu{ 4.6\times 10^{-3} M}$ so the solubility is the same ($\pu{28 mL/L}$) if described in terms of volume at the given pressure, but $\times 4$ greater when regarded as a molar concentration. Note by the way that according to a number of sources the solubility at $\pu{25 ^\circ C}$ is $\pu{258 \mu M}$ (~$\pu{8.2 mm Hg}$) at $p_{O_2}=\pu{1 atm}$, and $\pu{1.0 mM}$ at $p_{O_2}=\pu{4 atm}$.
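The two conversions above can be reproduced by rearranging the ideal-gas relation to $c = pV/(RT)$ per litre of solution (a sketch; ideal-gas behaviour assumed):

```python
R, T = 8.3145, 298.15   # J/(mol*K), K

def molarity(v_mL_per_L, p_atm):
    """Molarity of a dissolved gas given its solubility in mL(gas)/L(solution),
    with the gas volume measured at pressure p_atm (ideal gas assumed)."""
    p = p_atm * 101325                             # pressure in Pa
    return p * (v_mL_per_L * 1e-6) / (R * T)       # mol per litre of solution

c1 = molarity(28, 1)   # ~1.1e-3 M at 1 atm, as in the answer
c4 = molarity(28, 4)   # ~4.6e-3 M at 4 atm: same mL/L, four times the molarity
```

This makes the ambiguity explicit: the mL/L figure is unchanged, but the molar concentration scales linearly with the pressure at which the gas volume is measured.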
TL;DR: The $\ce{O-O}$ and $\ce{S-S}$ bonds, such as those in $\ce{O2^2-}$ and $\ce{S2^2-}$, are derived from $\sigma$-type overlap. However, because the $\pi$ and $\pi^*$ MOs are also filled, the $\pi$-type overlap also affects the strength of the bond, although the bond order is unaffected. Bond strengths normally decrease down the group due to poorer $\sigma$ overlap. The first member of each group is an anomaly because for these elements, the $\pi^*$ orbital is strongly antibonding and population of this orbital weakens the bond. Setting the stage The simplest species with an $\ce{O-O}$ bond would be the peroxide anion, $\ce{O2^2-}$, for which we can easily construct an MO diagram. The $\mathrm{1s}$ and $\mathrm{2s}$ orbitals do not contribute to the discussion so they have been neglected. For $\ce{S2^2-}$, the diagram is qualitatively the same, except that $\mathrm{2p}$ needs to be changed to a $\mathrm{3p}$. The main bonding contribution comes from, of course, the $\sigma$ MO. The greater the $\sigma$ MO is lowered in energy from the constituent $\mathrm{2p}$ AOs, the more the electrons are stabilised, and hence the stronger the bond. However, even though the $\pi$ bond order is zero, the population of both $\pi$ and $\pi^*$ orbitals does also affect the bond strength. This is because the $\pi^*$ orbital is more antibonding than the $\pi$ orbital is bonding. (See these questions for more details: 1, 2.) So, when both $\pi$ and $\pi^*$ orbitals are fully occupied, there is a net antibonding effect. This doesn't reduce the bond order; the bond order is still 1. The only effect is to just weaken the bond a little. Comparing the $\sigma$-type overlap The two AOs that overlap to form the $\sigma$ bond are the two $\mathrm{p}_z$ orbitals. The extent to which the $\sigma$ MO is stabilised depends on an integral, called the overlap, between the two $n\mathrm{p}_z$ orbitals ($n = 2,3$). 
Formally, this is defined as $$S^{(\sigma)}_{n\mathrm{p}n\mathrm{p}} = \left\langle n\mathrm{p}_{z,\ce{A}}\middle| n\mathrm{p}_{z,\ce{B}}\right\rangle = \int (\phi_{n\mathrm{p}_{z,\ce{A}}})^*(\phi_{n\mathrm{p}_{z,\ce{B}}})\,\mathrm{d}\tau$$ It turns out that, going down the group, this quantity decreases. This has to do with the $n\mathrm{p}$ orbitals becoming more diffuse down the group, which reduces their overlap. Therefore, going down the group, the stabilisation of the $\sigma$ MO decreases, and one would expect the $\ce{X-X}$ bond to become weaker. That is indeed observed for the Group 14 elements. However, it certainly doesn't seem to work here. That's because we ignored the other two important orbitals. Comparing the $\pi$-type overlap The answer for our question lies in these two orbitals. The larger the splitting of the $\pi$ and $\pi^*$ MOs, the larger the net antibonding effect will be. Conversely, if there is zero splitting, then there will be no net antibonding effect. The magnitude of splitting of the $\pi$ and $\pi^*$ MOs again depends on the overlap integral between the two $n\mathrm{p}$ AOs, but this time they are $\mathrm{p}_x$ and $\mathrm{p}_y$ orbitals. And as we found out earlier, this quantity decreases down the group; meaning that the net $\pi$-type antibonding effect also weakens going down the group. Putting it all together Actually, to look solely at oxygen and sulfur would be doing ourselves a disservice. So let's look at how the trend continues. $$\begin{array}{|c|c|c|c|}\hline\mathbf{X} & \mathbf{BDE(X-X)\ /\ kJ\ mol^{-1}} & \mathbf{X} & \mathbf{BDE(X-X)\ /\ kJ\ mol^{-1}} \\\hline\ce{O} & 144 & \ce{F} & 158 \\\ce{S} & 266 & \ce{Cl} & 243 \\\ce{Se} & 192 & \ce{Br} & 193 \\\ce{Te} & 126 & \ce{I} & 151 \\\hline\end{array}$$(Source: Prof. Dermot O'Hare's web page.) You can see that the trend goes this way: there is an overall decrease going from the second member of each group downwards. 
However, the first member has an exceptionally weak single bond. The rationalisation, based on the two factors discussed earlier, is straightforward. The general decrease in bond strength arises due to weakening $\sigma$-type overlap. However, in the first member of each group, the strong $\pi$-type overlap serves to weaken the bond. I also added the Group 17 elements in the table above. That's because the trend is exactly the same, and it's not a fluke! The MO diagram of $\ce{F2}$ is practically the same as that of $\ce{O2^2-}$, so all of the arguments above also apply to the halogens. How about the double bonds? In order to look at the double bond, we want to find a species that has an $\ce{O-O}$ bond order of $2$. That's not hard at all. It's called dioxygen, $\ce{O2}$, and its MO scheme is exactly the same as above except that there are two fewer electrons in the $\pi^*$ orbitals. Since there are only two electrons in the $\pi^*$ MOs as compared to four in the $\pi$ MOs, overall the $\pi$ and $\pi^*$ orbitals generate a net bonding effect. (After all, this is where the second "bond" comes from.) Since the $\pi$-$\pi^*$ splitting is much larger in $\ce{O2}$ than in $\ce{S2}$, the $\pi$ bond in $\ce{O2}$ is much stronger than the $\pi$ bond in $\ce{S2}$. So, in this case, both the $\sigma$ and the $\pi$ bonds in $\ce{O2}$ are stronger than in $\ce{S2}$. There should be absolutely no question now as to which of the $\ce{O=O}$ or the $\ce{S=S}$ bonds is stronger!
I've started learning some quantum physics and one often encounters special functions (like Legendre polynomials, Laguerre polynomials, Bessel functions, ...). Many calculations with these functions are immensely simplified (or maybe only made possible?) by making use of a generating function. My physics book unfortunately does not give proofs that the generating functions do indeed generate the functions, and for example for the following I'm not able to do it myself: It is stated that $$U(\rho, s) = \frac{\exp[-\rho s/(1-s)]}{1-s} = \sum_{q=0}^\infty \frac{L_q(\rho)}{q!}s^q$$ is a generating function for the Laguerre polynomials $L_q(\rho)$, defined by $$L_q(\rho) = e^\rho \frac{\mathrm d^q}{\mathrm d\rho^q}\left(\rho^q e^{-\rho}\right)$$ I have played around with it a bit, but wasn't able to show that $U(\rho, s)$ has the claimed series development around $s = 0$. Looking randomly through some books on special functions, I was not able to find a proof of this either. So my questions are: What is a good book on special functions - one where I would find this stuff (the functions and relations mostly used in atomic physics, maybe)? Can somebody show me how to prove this particular identity or give a reference? The physics book I'm working on is Physics of Atoms and Molecules by Bransden and Joachain.
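The identity can at least be checked symbolically for the first few orders (a verification sketch using sympy, not a proof; note that the book's Rodrigues formula includes the $q!$ factor relative to sympy's conventional `laguerre`):

```python
import sympy as sp

rho, s = sp.symbols('rho s')

# The claimed generating function U(rho, s)
U = sp.exp(-rho * s / (1 - s)) / (1 - s)
series = sp.series(U, s, 0, 5).removeO()

for q in range(5):
    coeff = series.coeff(s, q)                 # coefficient of s^q in U
    # Rodrigues formula exactly as stated in the book:
    # L_q(rho) = e^rho d^q/drho^q (rho^q e^{-rho})
    Lq = sp.exp(rho) * sp.diff(rho**q * sp.exp(-rho), rho, q)
    assert sp.simplify(coeff - Lq / sp.factorial(q)) == 0
```

This only verifies finitely many coefficients; a full proof typically expands the exponential in $U$ and collects powers of $s$, as done in standard references on special functions (e.g. Lebedev's *Special Functions and Their Applications* or Arfken & Weber).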
Other ways of writing Green's theorem In the introductory page on Green's theorem, we wrote Green's theorem as \begin{align*} \dlint = \iint_\dlr (\curl \dlvf) \cdot \vc{k} \, dA \end{align*} or \begin{align*} \dlint = \iint_\dlr \left( \pdiff{\dlvfc_2}{x} - \pdiff{\dlvfc_1}{y}\right)dA, \end{align*} where $\dlvf(x,y) = (\dlvfc_1(x,y),\dlvfc_2(x,y))$ is a two-dimensional vector field, $\dlr$ is a region in the plane, and $\dlc$ is its positively oriented boundary. This notation ties Green's theorem nicely in with the concept of the curl of a vector field. We often denote $\dlc$ by $\partial \dlr$ to make it explicit that the curve $\dlc$ is the (positively oriented) boundary of $\dlr$. This notation is also more natural when the region $\dlr$ has more than one boundary component. Then, Green's theorem can look like, for example, \begin{align*} \lint{\partial \dlr}{\dlvf} = \iint_\dlr \left( \pdiff{\dlvfc_2}{x} - \pdiff{\dlvfc_1}{y}\right)dA. \end{align*} Note that the notation $\partial \dlr$ simply means the boundary of $\dlr$. It has nothing to do with a partial derivative. People frequently write Green's theorem another way. First, they like to change the formula by writing the line integral at the left in terms of components: \begin{align*} \int_{\partial \dlr} F_1\, dx + F_2\, dy = \iint_\dlr \left( \pdiff{\dlvfc_2}{x} - \pdiff{\dlvfc_1}{y}\right)dA. \end{align*} Then, they like to let the vector field be $\vc{F}(x,y) =(P(x,y), Q(x,y))$, so that Green's theorem becomes \begin{align*} \int_{\partial \dlr} P\, dx + Q\, dy= \iint_\dlr \left( \pdiff{Q}{x} - \pdiff{P}{y}\right)dA. \end{align*} Sometimes, $\vc{F} = (P,Q)$ won't be referred to as a vector field. Instead, one can discuss the above version of Green's theorem as applied to the two scalar valued functions $Q : \dlr \to \R$ and $P : \dlr \to \R$ (confused?).
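The component form can be verified symbolically for a concrete field and region (a sketch; the field $(P,Q)=(-y,x)$ on the unit disk is chosen purely for illustration):

```python
import sympy as sp

x, y, t, r, th = sp.symbols('x y t r theta')
P, Q = -y, x    # example field, chosen for illustration

# Left side: line integral of P dx + Q dy over the positively
# oriented unit circle, parametrised by (cos t, sin t).
xc, yc = sp.cos(t), sp.sin(t)
integrand = (P.subs({x: xc, y: yc}) * sp.diff(xc, t)
             + Q.subs({x: xc, y: yc}) * sp.diff(yc, t))
lhs = sp.integrate(integrand, (t, 0, 2 * sp.pi))

# Right side: double integral of dQ/dx - dP/dy over the unit disk,
# computed in polar coordinates (extra factor r from the Jacobian).
curl = sp.diff(Q, x) - sp.diff(P, y)
rhs = sp.integrate(curl.subs({x: r * sp.cos(th), y: r * sp.sin(th)}) * r,
                   (r, 0, 1), (th, 0, 2 * sp.pi))

assert sp.simplify(lhs - rhs) == 0    # both sides equal 2*pi
```

Here both sides evaluate to $2\pi$: the line integral because the integrand reduces to $\sin^2 t+\cos^2 t=1$, and the double integral because the "curl" is the constant $2$ over a region of area $\pi$.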
I am a bit disturbed lately since I don't know the answer to this basic problem. Say we have a standard isotropic antenna with some fixed parameters (load impedance, etc.), and we feed this antenna with a sinusoidal current of the form: \begin{equation} I(t) = A\cos(2\pi f_{c}t) \end{equation} where $f_{c}$ is the carrier frequency (typically in the GHz range). Assuming no circuit mismatches or losses, the power delivered to the antenna (and radiated) is $P \propto A^2/2$. Since the antenna is isotropic, this gives an electric far-field strength of $|E| \propto A$ in every direction at some distance $d$ from the antenna. Now let's say that we have 2 similar isotropic antennas "really" close together (compared to the wavelength), but we feed each one with a sinusoidal current of half the amplitude, i.e. \begin{equation} I_d(t) = \frac{A}{2}\cos(2\pi f_{c}t) \end{equation} and the power delivered to each antenna will be $P_d\propto A^2/8$. The electric fields will sum up constructively in all directions, and at distance $d$ from this 2-element antenna array the field strength is $|\frac{E}{2}+\frac{E}{2}|=|E|$, i.e. equal to the single-antenna case. The problem is that the sum of the powers fed to the 2 antennas is $A^2/8 + A^2/8 = A^2/4$, which is smaller than in the first case. Thus there must be something wrong here, because if one takes this approach one step further, I would get a field with infinite magnitude (hence infinite power) using an infinite number of antennas for a given input power $P$. Where is the error in my approach? I agree that in a practical system there will be coupling between antennas, and thus their efficiencies will decrease, etc. But this cannot be the fundamental explanation, because in a standard antenna textbook such as the one by Balanis, the superposition principle is assumed and everything should be coherent from this point of view. Thanks in advance for your help. Joao
The common understanding is that, setting air resistance aside, all objects dropped to Earth fall at the same rate. This is often demonstrated through the thought experiment of cutting a large object in half. The halves clearly can't then fall more slowly just by being sliced in two. However, I believe the answer is that when two objects fall together, attached or not, they do "fall" faster than an object of less mass alone does. This is because not only does the Earth accelerate the objects toward itself but the objects also accelerate the Earth toward themselves. Considering the formula: $$ F_{\text{g}} = \frac{G m_1 m_2}{d^2} $$ Given $F = ma$ thus $a = F/m$, we note that the mass of the small object doesn't seem to matter as when calculating acceleration the force is divided by the $m$ term, its mass. However, this overlooks that the force is actually applied to both objects, not just to the smaller one. The acceleration on the second, larger object is found by dividing $F$, in turn, by the larger object's mass. The two objects' acceleration vectors are exactly opposite, so closing acceleration is the sum of the two: $$ a_{\text{closing}} = \frac{F}{m_1} + \frac{F}{m_2} $$ Since the Earth is extremely massive compared to everyday objects, the acceleration imparted on the object by the Earth will radically dominate the equation. As the Earth is $\sim 5.972 \times {10}^{24} \, \mathrm{kg} ,$ a falling object of $5.972 \, \mathrm{kg}$ (just over 13 pounds) would accelerate the Earth about $\frac{1}{{10}^{24}}$ as much, which is one part in a trillion trillion. Of course in everyday situations, we can for all practical purposes treat objects as falling at the same rate because of this negligible difference—which our instruments probably couldn't even detect. But I'm hoping not for a discussion of practicality or what's measurable or observable, but what we think is actually happening. Am I right or wrong? 
What really clinched this for me was considering dropping a small Moon-massed object close to the Earth and a small Earth-massed object close to the Moon. This made me realize that falling isn't one object moving toward some fixed frame of reference, but that the Earth is just another object, and thus "falling" consists of multiple objects mutually attracting in space.
Why is Power Analysis Important? Section Consider a research experiment where the p-value computed from the data was 0.12. As a result, one would fail to reject the null hypothesis because this p-value is larger than \(\alpha\) = 0.05. However, there still exist two possible cases for which we failed to reject the null hypothesis: the null hypothesis is a reasonable conclusion, the sample size is not large enough to either accept or reject the null hypothesis, i.e., additional samples might provide additional evidence. Power analysis is the procedure that researchers can use to determine if the test has enough power to make a reasonable conclusion. From another perspective, power analysis can also be used to calculate the number of samples required to achieve a specified level of power. Example S.5.1 Let's take a look at an example that illustrates how to compute the power of the test. Example Let X denote the height of a randomly selected Penn State student. Assume that X is normally distributed with unknown mean \(\mu\) and a standard deviation of 9. Take a random sample of n = 25 students, so that, after setting the probability of committing a Type I error at \(\alpha = 0.05\), we can test the null hypothesis \(H_0: \mu = 170\) against the alternative hypothesis \(H_A: \mu > 170\). What is the power of the hypothesis test if the true population mean were \(\mu = 175\)? 
\[\begin{align}z&=\frac{\bar{x}-\mu}{\sigma / \sqrt{n}} \\ \bar{x}&= \mu + z \left(\frac{\sigma}{\sqrt{n}}\right) \\ \bar{x}&=170+1.645\left(\frac{9}{\sqrt{25}}\right) \\ &=172.961\\ \end{align}\] So we should reject the null hypothesis when the observed sample mean is 172.961 or greater: We get \[\begin{align}\text{Power}&=P(\bar{x} \ge 172.961 \text{ when } \mu =175)\\ &=P\left(z \ge \frac{172.961-175}{9/\sqrt{25}} \right)\\ &=P(z \ge -1.133)\\ &= 0.8713\\ \end{align}\] and illustrated below: In summary, we have determined that we have an 87.13% chance of rejecting the null hypothesis \(H_0: \mu = 170\) in favor of the alternative hypothesis \(H_A: \mu > 170\) if the true unknown population mean is in reality \(\mu = 175\). Calculating Sample Size Section If the sample size is fixed, then decreasing the Type I error \(\alpha\) will increase the Type II error \(\beta\). If one wants both to decrease, then one has to increase the sample size. To calculate the smallest sample size needed for specified \(\alpha\), \(\beta\), and \(\mu_a\) (where \(\mu_a\) is the likely value of \(\mu\) at which you want to evaluate the power), use: Sample Size for One-Tailed Test \(n = \dfrac{\sigma^2(Z_{\alpha}+Z_{\beta})^2}{(\mu_0−\mu_a)^2}\) Sample Size for Two-Tailed Test \(n = \dfrac{\sigma^2(Z_{\alpha/2}+Z_{\beta})^2}{(\mu_0−\mu_a)^2}\) Let's investigate by returning to our previous example. Example S.5.2 Let X denote the height of a randomly selected Penn State student. Assume that X is normally distributed with unknown mean \(\mu\) and standard deviation 9. We are interested in testing, at the \(\alpha = 0.05\) level, the null hypothesis \(H_0: \mu = 170\) against the alternative hypothesis \(H_A: \mu > 170\). Find the sample size n that is necessary to achieve 0.90 power at the alternative μ = 175. 
\[\begin{align}n&= \dfrac{\sigma^2(Z_{\alpha}+Z_{\beta})^2}{(\mu_0−\mu_a)^2}\\ &=\dfrac{9^2 (1.645 + 1.28)^2}{(170-175)^2}\\ &=27.72\\ n&=28\\ \end{align}\] In summary, power analysis is important because it lets us make the correct decision when the data indicate that one cannot reject the null hypothesis. It can also be used to calculate the minimum sample size required to detect a difference that meets the needs of your research.
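Both calculations above can be reproduced with the standard normal distribution from Python's standard library (a sketch following the same formulas):

```python
import math
from statistics import NormalDist

z = NormalDist()   # standard normal distribution
mu0, mua, sigma, n, alpha = 170, 175, 9, 25, 0.05

# Power of the one-tailed test at mu = 175 (Example S.5.1)
crit = mu0 + z.inv_cdf(1 - alpha) * sigma / math.sqrt(n)   # rejection cutoff, ~172.961
power = 1 - z.cdf((crit - mua) / (sigma / math.sqrt(n)))   # ~0.8713

# Sample size for 0.90 power (Example S.5.2)
z_a, z_b = z.inv_cdf(1 - alpha), z.inv_cdf(0.90)
n_req = (sigma * (z_a + z_b) / (mu0 - mua)) ** 2           # ~27.7 (27.72 in the text,
n_req = math.ceil(n_req)                                   # which uses rounded z-values)
```

Using unrounded quantiles (`inv_cdf(0.95)` ≈ 1.6449, `inv_cdf(0.90)` ≈ 1.2816) gives essentially the same answers: a power of about 0.871 and a required sample size of 28.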
Just as in your previous question, there is no analytic solution to that problem. You're required to find some quick approximation that gives you a useful result. Rewrite the last equation as $$p_c=\sqrt[9]{\frac{2.79\cdot 10^{-8}}{\binom{204}{9}}}\frac{1}{(1-p_c)^{195/9}}\tag{1}$$ If we boldly assume that $p_c\ll 1$ holds, we can use the following approximation: $$\frac{1}{(1-p_c)^{195/9}}\approx 1\tag{2}$$ which, when combined with $(1)$, results in $$p_c\approx \sqrt[9]{\frac{2.79\cdot 10^{-8}}{\binom{204}{9}}}\approx 0.003002\tag{3}$$ The exact numerical solution is $$p_c=0.00321898475092782$$ which shows that the approximation $(3)$ is reasonably good. You can use the binomial approximation: $$ (1 + x)^\alpha \approx 1 + \alpha x, $$ which should be a very good approximation. The conditions for this being accurate are that $|\alpha x| \ll 1$. When you substitute this expression into your total probability, you now have a polynomial you can solve for the roots of.
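Both the quick approximation and the exact numerical root can be computed directly (a sketch; the equation solved is $\binom{204}{9}p^9(1-p)^{195} = 2.79\cdot10^{-8}$ as in the answer):

```python
import math

C = math.comb(204, 9)
target = 2.79e-8

# Crude first pass: assume (1 - p)^(195/9) ~ 1, as in equation (2)
p_approx = (target / C) ** (1 / 9)

def f(p):
    return C * p**9 * (1 - p)**195 - target

# The left side is increasing on [0, 9/204], so bisection on a bracket
# below that maximum finds the unique small root.
lo, hi = 1e-9, 0.04
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
# p_approx ~ 0.003002 and mid ~ 0.0032190, matching the values quoted above
```

The two values agree to within about 7%, confirming that the $p_c\ll 1$ assumption behind the approximation is justified here.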
Rocky Mountain Journal of Mathematics Rocky Mountain J. Math. Volume 45, Number 2 (2015), 457-474. Nontrivial periodic solutions of second order singular damped dynamical systems Abstract Assuming that the linear equation $x'' + h(t)x' + a(t)x = 0$ has a positive Green's function, we study the existence of nontrivial periodic solutions of second order damped dynamical systems \[ x'' + h(t)x' + a(t)x = f(t, x) + e(t), \] where $h$, $a\in C(\mathbb{R}/T\mathbb{Z},\mathbb{R})$, $e = (e_1,\ldots, e_N)^T \in C(\mathbb{R}/T\mathbb{Z},\mathbb{R}^N)$, $N \ge 1$, and the nonlinearity $f = (f_1,\ldots, f_N)^T\in C((\mathbb{R}/T\mathbb{Z})\times\mathbb{R}^N\setminus\{0\},\mathbb{R}^N)$ has a repulsive singularity at the origin. We consider a very general singularity and do not need any kind of strong force condition. The proof is based on a nonlinear alternative principle of Leray-Schauder. Recent results in the literature are generalized and improved. Article information Source Rocky Mountain J. Math., Volume 45, Number 2 (2015), 457-474. Dates First available in Project Euclid: 13 June 2015 Permanent link to this document https://projecteuclid.org/euclid.rmjm/1434208483 Digital Object Identifier doi:10.1216/RMJ-2015-45-2-457 Mathematical Reviews number (MathSciNet) MR3356624 Zentralblatt MATH identifier 1327.34068 Citation Chu, Jifeng; Li, Shengjun; Zhu, Hailong. Nontrivial periodic solutions of second order singular damped dynamical systems. Rocky Mountain J. Math. 45 (2015), no. 2, 457--474. doi:10.1216/RMJ-2015-45-2-457. https://projecteuclid.org/euclid.rmjm/1434208483
Assume I have an $m\times n$ transition matrix $A$ with $m$ different observations, where $n$ represents a discrete state space. Each entry $A(i,j)$ counts the frequency with which state $j\in [1,\ldots,n]$ was visited from state $i\in [1,\ldots,m]$. In reality the "states" are not integer numbers, but a discretized real-valued variable. I need to know the mean and the variance of the "next" variable, i.e., of the "columns". For example, if I want to "encode" the speed $v\in \{0,10,20,30\}$ m/s, then $n=4$ and column $2$ corresponds to $10$ m/s, and so on. For that I use the weighted mean and variance from http://en.wikipedia.org/wiki/Weighted_arithmetic_mean: $\bar{v_i}=\frac{\sum_{j=1}^n w_j{v}_j}{\sum_{j=1}w_j}$, where $v_j$ are the single values from the feature space, represented by the $j$-th column, and $w_j$ the frequency. That seems to work. For the variance I use $\sigma^2=\frac{\sum_{j=1}^n w_j({v}_j-\bar{v_i})^2}{\sum_{j=1}w_j}$. So I have $m$ different values for the mean and variance that I use to fit a distribution via moment matching. Now the problem is that there are rows with only a few nonzero columns. That can result in a very small variance, which might be misleading: The last "hat" at $v=70$ was generated from only $\sum_j w_j=24$ samples and the one before from $\sum_j w_j=3100$, but is even lower. If I assume that more data means more reliability, can I weight the variance again? Or can I introduce an "uncertainty" of the variance (does that even make sense?) Thank you very much! As whuber suggested, here is a little background: The whole thing is a Markov-chain-like procedure. As whuber said, I want to know "How can I estimate transition probabilities for a continuous Markov process based on binned observed frequencies?" That's correct. I thought this information was not necessary, sorry. There is one different PDF for each possible velocity. If I assume that the process is memoryless and stationary, then I shouldn't care about the time anymore, right? 
The PDF shall be the distribution at the next timestep $k+1$, but depending only on the current timestep $k$. So $k$ indexes the rows and $k+1$ the columns. I'm looking for the distribution of all velocities at all times, I think. I counted all the transitions from velocity $i$ to $j$, but over the whole time series. It's like a 1st-order Markov chain, but with continuous PDFs instead of discrete ones. The goal is not to have a giant Markov transition matrix, but only an $m\times 2$ matrix containing the PDF parameters. This might not be an appropriate way, as whuber mentioned. Thank you
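The row-wise weighted mean and variance described above can be computed in a vectorised way (a sketch with a made-up count matrix; numpy is assumed available):

```python
import numpy as np

# Hypothetical count matrix: rows = current state, columns = next state.
counts = np.array([[5, 2, 0, 0],
                   [1, 6, 3, 0],
                   [0, 2, 7, 1],
                   [0, 0, 2, 4]], dtype=float)
speeds = np.array([0.0, 10.0, 20.0, 30.0])   # value represented by each state

w = counts / counts.sum(axis=1, keepdims=True)        # row-normalised weights
means = w @ speeds                                     # weighted mean per row
var = (w * (speeds - means[:, None])**2).sum(axis=1)   # weighted variance per row

n_samples = counts.sum(axis=1)   # how many observations back each row's estimate
```

Keeping `n_samples` alongside the fitted parameters makes the reliability issue explicit: a row estimated from 24 transitions and a row estimated from 3100 can then be treated differently (e.g. via a shrinkage prior on the variance) rather than trusted equally.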
The goal of applying Valence Bond Theory to water is to describe the bonding in \(H_2O\) and account for its structure (i.e., the appropriate bond angle and two lone pairs predicted from VSEPR theory). The ground state electronic configuration of the oxygen atom is \(1s^2\,2s^2\,2p_x^2\,2p_y^1 \, 2p_z^1\) and of course the ground state electronic configuration of the hydrogen atom is \(1s^1\), i.e., a spherical atomic orbital with no preferential orientation. If only the singly occupied \(2p_y\) and \(2p_z\) atomic orbitals of the oxygen were used as bonding orbitals, then two bonds would be predicted. These bonding wavefunctions would be mixtures of only \(|2p_y \rangle\) and \(|2p_z \rangle\) orbitals on oxygen and the \(|1s \rangle\) orbitals on the hydrogens (\(H_1\) and \(H_2\)): \[ | \chi_1 \rangle = a_1 |1s \rangle_{H_1} + b_1 |2p_y \rangle_O \label{wrong1}\] \[ | \chi_2 \rangle = a_2 |1s \rangle_{H_2} + b_2 |2p_z \rangle_O \label{wrong2}\] However, the H-O-H bond angle for these bonds would be expected to be 90°, since \(2p_y\) and \(2p_z\) are oriented 90° with respect to each other. Note that \( | \chi_1 \rangle \) and \( | \chi_2 \rangle \) are two-center bonding orbitals common to Valence Bond theory. Figure \(\PageIndex{1}\): The three p-orbitals are aligned along perpendicular axes (CC-SA-BY-NC; Nick Greeves ChemTube3D); Using the oxygen atomic orbitals directly is obviously not a good model for describing bonding in water, since we know from experiment that the bond angle for water is 104.45° (Figure \(\PageIndex{2}\)), which is also in agreement with VSEPR theory. Since the \(2s\) orbital is spherical, mixing some \(2s\) character into the \(2p\) orbitals can adjust the bond angle, as discussed previously, by creating new hybrid orbitals. Figure \(\PageIndex{2}\) : Historically, Valence Bond theory was used to explain bond angles in small molecules. Of course, it was only qualitatively correct in doing this, as the following example shows. 
Let us construct the Valence Bond wavefunctions for the two bonding pairs in \(H_2O\) by mixing the \(|2s \rangle\), \(|2p_x \rangle\), \(|2p_y \rangle\), and \(|2p_z \rangle\) orbitals into four new \(sp^3\) hybrid orbitals: \[\begin{align*}\chi_1 (r) &= \dfrac{1}{2} \left[\psi_{2s}(r)+\psi_{2p_x}(r)+\psi_{2p_y}(r)+\psi_{2p_z}(r)\right]\\ \chi_2 (r) &= \dfrac{1}{2}\left[\psi_{2s}(r)-\psi_{2p_x}(r)-\psi_{2p_y}(r)+\psi_{2p_z}(r)\right]\\ \chi_3 (r) &= \dfrac{1}{2}\left[\psi_{2s}(r)+\psi_{2p_x}(r)-\psi_{2p_y}(r)-\psi_{2p_z}(r)\right]\\ \chi_4 (r) &= \dfrac{1}{2}\left[\psi_{2s}(r)-\psi_{2p_x}(r)+\psi_{2p_y}(r)-\psi_{2p_z}(r)\right]\end{align*}\] Hence, the three \(2p\) orbitals of the oxygen atom combine with the \(2s\) orbital of the oxygen to form four \(sp^3\) hybrid orbitals (Figure \(\PageIndex{3}\)). Figure \(\PageIndex{3}\): Hybridizing the atomic orbitals on the oxygen atom before mixing with the atomic orbitals on the hydrogens to generate the VB wavefunctions. Images used with permission (Richard Banks). The two non-bonded electron pairs also occupy hybrid orbitals: we need one hybrid orbital for each bond to a hydrogen atom and one for each pair of non-bonding electrons (Figure \(\PageIndex{4}\)). This leaves two of the four \(sp^3\) hybrid orbitals for the two lone pairs on the oxygen. The bond angle for four groups of electrons around a central atom is 109.5°. However, for water the experimental bond angle is 104.45°. The VSEPR picture (general chemistry) is that the smaller angle can be explained by the presence of the two lone pairs of electrons on the oxygen atom. Since they take up more volume of space compared to a bonding pair of electrons, the repulsions between lone pairs and bonding pairs are expected to be greater, causing the H-O-H bond angle to be smaller than the ideal 109.5°. We can rationalize this by thinking about the s and p characters of the hybrids.
In a perfectly \(sp^3\) hybridized set of hybrid orbitals, each \(sp^3\) orbital should have 25% s character and 75% p character. Since the bond angle is not 109.5° in water, the hybrid orbitals cannot have exactly this ratio of s and p character, so there is an uneven distribution of s and p character between the 4 hybrid orbitals. First we will write down the wavefunction and see what this means, and then we will rationalize it. Note: A few cautionary words about hybridization. Hybridization is an often misconceived concept. It is only a mathematical construction that explains a certain bonding situation (in an intuitive fashion). In a molecule, the equilibrium geometry results from various factors, such as steric and electronic interactions, and furthermore interactions with the surroundings like a solvent or external field. The geometric arrangement is not formed because a molecule is hybridized in a certain way; it is the other way around, i.e., the hybridization is a result of the geometry, or, more precisely, an interpretation of the wavefunction for the given molecular arrangement. Estimating Character of Hybrid Orbitals The terminology we use for hybridization is actually just an abbreviation: \[\mathrm{sp}^{x} = \mathrm{s}^{\frac{1}{x+1}}\mathrm{p}^{\frac{x}{x+1}}\] In theory \(x\) can have any value, hence any of the following combinations constitute valid hybridization schemes for 1 s orbital and 3 p orbitals: \[\begin{align} 1\times\mathrm{s}, 3\times\mathrm{p} \nonumber &\leadsto 4\times\mathrm{sp}^3 \nonumber\\ &\leadsto 3\times\mathrm{sp}^2, 1\times\mathrm{p} \nonumber \\ &\leadsto 2\times\mathrm{sp}, 2\times\mathrm{p} \nonumber \\ &\leadsto 2\times\mathrm{sp}^3, 1\times\mathrm{sp}, 1\times\mathrm{p} \nonumber \\ &\leadsto \text{etc. pp.} \nonumber \\ &\leadsto 2\times\mathrm{sp}^4, 1\times\mathrm{p}, 1\times\mathrm{sp}^{(2/3)} \nonumber \end{align}\] There are virtually infinite possibilities of combinations.
Which one is "valid" is only determined by experiment (e.g., structure or spectroscopy). The generic \(sp^x\) hybrid orbital wavefunction can be roughly written in terms of atomic orbital character: \[ |\chi_i \rangle = N ( p + \gamma s) \label{H1}\] where \(N\) is a normalization constant and \(\gamma\) sets the relative contribution of s character to the hybrid orbital. For a pure \(sp^3\) hybrid, \(\gamma = 1/\sqrt{3} \approx 0.58\) (giving 25% s character), and for a pure \(sp\) hybrid, \(\gamma = 1\) (giving 50% s character). The question is how to determine \(\gamma\) to get a better picture of the hybridization of water. Starting with the normalization criterion for wavefunctions: \[ \langle \chi_i | \chi_i \rangle =1\] and substituting Equation \(\ref{H1}\) in, we get \[ \langle N ( p + \gamma s) | N ( p + \gamma s) \rangle =1\] which in integral notation is \[ \int N^2 ( p + \gamma s)^2 d\tau =1 \] where \(d\tau\) represents all space. This is then expanded to \[ N^2 \cancelto{1}{\int p^2\; d\tau} + N^2 2 \gamma \cancelto{0} {\int sp\; d\tau} + N^2 \gamma^2 \cancelto{1} {\int s^2 \; d\tau} =1\label{H3} \] These terms simplify either due to orthogonality or normality of the constituent atomic orbitals.
Equation \(\ref{H3}\) simplifies to \[ N^2 + N^2 \gamma^2 =1 \] and thus the normalization factor can be expressed in terms of \(\gamma\): \[ N = \dfrac{1}{\sqrt{1+\gamma^2}}\] and the generic normalized \(sp^x\) hybrid orbital (Equation \(\ref{H1}\)) is \[ |\chi_i \rangle = \dfrac{1}{\sqrt{1+\gamma^2}} ( p + \gamma s) \label{H4}\] The s and p characters of a hybrid orbital are now easy to obtain by squaring the coefficients in \( |\chi_i \rangle \). The magnitude of p-character is \[ \left(\dfrac{1}{\sqrt{1+\gamma^2}} \right)^2 = \dfrac{1}{1+\gamma^2} \label{p}\] As \(\gamma \rightarrow 0\), the p character of the hybrid goes to 100%. The magnitude of s-character is \[ \left(\dfrac{\gamma}{\sqrt{1+\gamma^2}} \right)^2 = \dfrac{\gamma^2}{1+\gamma^2} \label{s}\] As \(\gamma \rightarrow 1\), the s character of the hybrid goes to 50%. As mentioned above, the geometric arrangement is not formed because a molecule is hybridized in a certain way; it is the other way around. How do we choose the correct value of \(\gamma\) for the hybrid orbitals? The mixing coefficient \(\gamma\) is clearly related to the bond angle θ. Using some simple trigonometric relationships, it can be proven that: \[\cos θ = - \gamma^2 \label{angle} \] Equation \(\ref{angle}\) is an important equation, as it relates the experimentally determined structure to the nature of the bonding and, specifically, the composition of the atomic orbitals that create the hybrid orbitals used in the bonding. Example \(\PageIndex{1}\): Carbon Dioxide What is the s-character in the hybrid orbitals for \(CO_2\)? Solution We know from simple VSEPR theory that \(CO_2\) is a linear triatomic molecule. Thus \(\theta = 180°\) and, via Equation \(\ref{angle}\), \(\gamma = 1\) since \(\cos 180° = -1\).
Hence, Equation \(\ref{s}\) argues that the hybrid orbitals used in the bonding of \(CO_2\) have 50% s character; i.e., they are \(sp\) hybrid orbitals: \[ | \chi_1 \rangle = \dfrac{1}{\sqrt{2}} ( s + p)\] and \[ | \chi_2 \rangle = \dfrac{1}{\sqrt{2}} ( s - p)\] Now, let's apply Equation \(\ref{s}\) to water to find the character of the hybrid orbitals in water. The bond angle in water is 104.45° (Figure \(\PageIndex{2}\)), hence \[ \cos 104.5° = -0.25\] and \[ \gamma = \sqrt{0.25} = 0.5\] From Equation \(\ref{p}\), the amount of p character in the hybrid orbitals is \[ \dfrac{1}{1+\gamma^2} = \dfrac{1}{1+0.5^2} = 0.80 \] which leaves 20% for s character (Equation \(\ref{s}\)): \[ \dfrac{\gamma^2}{1+\gamma^2} = \dfrac{0.5^2}{1+0.5^2} = 0.20 \] The two hybridized atomic orbitals of oxygen involved in bonding are each 80% p and 20% s character. These are not perfect \(sp^3\) hybrid orbitals, as expected. Actually, the orbitals involved in the bonds would be better described as \(sp^4\) hybridized. This does not mean that there are 4 p-orbitals in the hybrid orbital, but that each hybrid consists of 20% s and 80% p atomic orbitals. Lone Pairs Water has two sets of non-bonding electron pairs (Figure \(\PageIndex{4}\)). Without a bond angle to start from, we cannot derive the \(\gamma\) that describes the nonbonding hybrid orbitals they occupy. However, we do know that the O atom has three \(p\) orbitals, so the TOTAL absolute p-character in all hybrid orbitals must be 3. Let \(x\) be the p-character in the lone-pair hybrid orbitals: \[0.8 + 0.8 + x + x = 3 \] This assumes the lone pairs are identical. Solving, \(x = 0.7\) (i.e., 70% p and 30% s). From this we can estimate the angle between the lone pairs using Equations \(\ref{p}\) and \(\ref{angle}\). Setting the p-character \[\dfrac{1}{1 +\gamma^2} = 0.7\] gives \[\gamma^2 = \dfrac{1}{0.7} - 1 \approx 0.43\] so \(\cos \theta = -\gamma^2 \approx -0.43\) and \(\theta = 115° \). The angle between the lone pairs is greater (115°) than the bond angle (104.5°).
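The arithmetic above is easy to check numerically. The following sketch (illustrative, not part of the original text) computes \(\gamma\) from the experimental bond angle via \(\cos θ = -\gamma^2\), the resulting s and p characters, and the lone-pair angle from the total-p-character bookkeeping:

```python
import math

def gamma_from_angle(theta_deg):
    # cos(theta) = -gamma^2  =>  gamma = sqrt(-cos(theta))
    return math.sqrt(-math.cos(math.radians(theta_deg)))

def s_p_character(gamma):
    # s and p fractions of the normalized hybrid N(p + gamma*s)
    p = 1.0 / (1.0 + gamma**2)
    s = gamma**2 / (1.0 + gamma**2)
    return s, p

# bonding hybrids of water (experimental angle ~104.5 deg)
g = gamma_from_angle(104.5)
s, p = s_p_character(g)
print(round(s, 2), round(p, 2))  # 0.2 0.8

# lone pairs: total p character over the 4 hybrids must equal 3
x = (3 - 2 * p) / 2              # p character of each lone-pair hybrid
gamma_lp = math.sqrt(1 / x - 1)  # invert p = 1/(1 + gamma^2)
theta_lp = math.degrees(math.acos(-gamma_lp**2))
print(round(theta_lp))  # 115
```

The same two functions reproduce the \(CO_2\) example: an angle of 180° gives \(\gamma = 1\) and a 50/50 s/p split.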
The \(sp^3\) hybrid atomic orbitals of the lone pairs have > 25% s-character. These hybrid orbitals are less directional and held more tightly to the O atom. The \(sp^3\) hybrid atomic orbitals of the bonding pairs have < 25% s-character. They are more directional (i.e., have more p-character), with electron density concentrated in the bonding region between O and H. Warning It should be noted that the valence bond theory application described above predicts that the two lone electron pairs are in equivalent hybrid orbitals and hence have the same energies. As discussed in the next sections, that is not what is observed experimentally in photoelectron spectroscopy, which is a shortcoming of valence bond theory's application to water. Contributors Andrew Wolff (Adjunct Professor of Chemistry) Martin (Stackexchange)
I have a Markov chain with $Q(u,v)$ as transition probability matrix and $\pi(u)$ as stationary distribution defined on state space $\Omega$. The dimension of matrix $Q$ is $n \times n$ and of vector $\pi$ is $1 \times n$. I need to construct a vorticity matrix $\Gamma (u,v)$ of dimension $n \times n$ which has the properties below. $\Gamma$ is a skew-symmetric matrix, i.e., $$\Gamma (u,v) = -\Gamma (v,u) \quad ,\forall \, u,v \in \Omega $$ The row sum of $\Gamma$ is zero for every row, i.e., $$ \sum_v \Gamma (u,v) = 0 \quad ,\forall \, u \in \Omega $$ The third property is $$\Gamma(u,v) > -\pi (v)Q(v,u) \quad ,\forall \, u,v \in \Omega $$ My question is: How do I construct a vorticity matrix $\Gamma (u,v)$ which satisfies the above three properties? I need to construct at least one such matrix. Is there any systematic way to build such matrices? NOTE: The transition probability matrix $Q$ and stationary distribution $\pi$ have the properties below. The row sum of $Q$ is one for each row: $$\sum_v Q(u,v)=1 \quad ,\forall \, u \in \Omega$$ $\pi$ is a probability distribution, hence $$\sum_v \pi(v) = 1$$ Stationary distribution condition for $\pi$: $$\sum_u \pi(u) Q(u,v) = \pi(v) \quad ,\forall \, v \in \Omega $$
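One possible construction (a sketch, under the extra assumption that all entries of $Q$, and hence of $\pi$, are strictly positive, so the strict inequality can hold entrywise): take a skew-symmetric, zero-row-sum matrix built from a directed cycle and scale it by an $\varepsilon$ smaller than $\min_{u,v} \pi(v)Q(v,u)$:

```python
import numpy as np

def build_vorticity(Q, pi, safety=0.5):
    """Construct a skew-symmetric, zero-row-sum Gamma satisfying
    Gamma[u, v] > -pi[v] * Q[v, u].  Assumes Q (and hence pi) is
    strictly positive, so the bound min pi[v]Q[v,u] is > 0."""
    n = Q.shape[0]
    # C: directed n-cycle 0 -> 1 -> ... -> n-1 -> 0
    C = np.roll(np.eye(n), 1, axis=1)
    skew = C - C.T                       # skew-symmetric, rows sum to 0
    bound = (pi[None, :] * Q.T).min()    # min over (u,v) of pi[v]Q[v,u]
    return safety * bound * skew         # entries in {-eps, 0, +eps}

# usage with a small positive chain
Q = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
# stationary distribution: left eigenvector of Q for eigenvalue 1
w, V = np.linalg.eig(Q.T)
pi = np.real(V[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

G = build_vorticity(Q, pi)
print(np.allclose(G, -G.T), np.allclose(G.sum(axis=1), 0))
print((G > -(pi[None, :] * Q.T)).all())
```

Any zero-row-sum skew-symmetric matrix (e.g., a sum of several such cycles) works in place of the single cycle, as long as its largest-magnitude entry stays below the same bound.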
HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN) ; Bouvard, Aymeric (CERN) ; Charitonidis, Nikolaos (CERN) ; Kadi, Yacine (CERN)/HiRadMat experiments and facility support teams The ever-expanding requirements of high-power targets and accelerator equipment has highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPRB085 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPRB085

Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN) ; Booth, Alexander (U. Sussex (main) ; Fermilab) ; Charitonidis, Nikolaos (CERN) ; Chatzidaki, Panagiota (Natl. Tech. U., Athens ; Kirchhoff Inst. Phys. ; CERN) ; Karyotakis, Yannis (Annecy, LAPP) ; Nowak, Elzbieta (CERN ; AGH-UST, Cracow) ; Ortega Ruiz, Inaki (CERN) ; Sala, Paola (INFN, Milan ; CERN) For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated “H2-VLE” and “H4-VLE”, have been constructed and successfully commissioned. [...] 2019 - 4 p.
- Published in : 10.18429/JACoW-IPAC2019-THPGW064 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW064

The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN ; Illinois U., Urbana (main)) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; Cholak, Serhii (Taras Shevchenko U.) ; D'Alessandro, Gian Luigi (Royal Holloway, U. of London) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) ; Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN’s accelerator complex up to 2040 and its scientific infrastructure through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW063 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW063

The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN) ; Banerjee, Dipanwita (CERN) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; D'Alessandro, Gian Luigi (CERN) ; Doble, Niels (CERN) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward.
The goal of the experiment is to measure ${\rm{BR}}(K_L \rightarrow \pi^0v\bar{v})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm{BR}}(K^+ \rightarrow \pi^+v\bar{v})$ of NA62. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW061 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW061

Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe ; CERN) ; Bastian, Yan (CERN) ; Bernhard, Axel (KIT, Karlsruhe) ; Bonura, Marco (U. Geneva (main)) ; Bordini, Bernardo (CERN) ; Bortot, Lorenzo (CERN) ; Favre, Mathieu (CERN) ; Lindstrom, Bjorn (CERN) ; Mentink, Matthijs (CERN) ; Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN’s LHC can be impacted by the circulating beam in case of specific failure cases. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPTS066 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPTS066

Shashlik calorimeters with embedded SiPMs for longitudinal segmentation / Berra, A (INFN, Milan Bicocca ; Insubria U., Varese) ; Brizzolari, C (INFN, Milan Bicocca ; Insubria U., Varese) ; Cecchini, S (INFN, Bologna) ; Chignoli, F (INFN, Milan Bicocca ; Milan Bicocca U.) ; Cindolo, F (INFN, Bologna) ; Collazuol, G (INFN, Padua) ; Delogu, C (INFN, Milan Bicocca ; Milan Bicocca U.) ; Gola, A (Fond. Bruno Kessler, Trento ; TIFPA-INFN, Trento) ; Jollet, C (Strasbourg, IPHC) ; Longhin, A (INFN, Padua) et al.
Effective longitudinal segmentation of shashlik calorimeters can be achieved taking advantage of the compactness and reliability of silicon photomultipliers. These photosensors can be embedded in the bulk of the calorimeter and are employed to design very compact shashlik modules that sample electromagnetic and hadronic showers every few radiation lengths. [...] 2017 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 64 (2017) 1056-1061

Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE) ; Tiberio, A (INFN, Florence ; U. Florence (main)) ; Adriani, O (INFN, Florence ; U. Florence (main)) ; Berti, E (INFN, Florence ; U. Florence (main)) ; Bonechi, L (INFN, Florence) ; Bongi, M (INFN, Florence ; U. Florence (main)) ; Caccia, Z (INFN, Catania) ; D'Alessandro, R (INFN, Florence ; U. Florence (main)) ; Del Prete, M (INFN, Florence ; U. Florence (main)) ; Detti, S (INFN, Florence) et al. The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017 - 22 p. - Published in : JINST 12 (2017) P03023

Baby MIND: A magnetised spectrometer for the WAGASCI experiment / Hallsjö, Sven-Patrik (Glasgow U.)/Baby MIND The WAGASCI experiment being built at the J-PARC neutrino beam line will measure the ratio of cross sections from neutrinos interacting with a water and scintillator targets, in order to constrain neutrino cross sections, essential for the T2K neutrino oscillation measurements. A prototype Magnetised Iron Neutrino Detector (MIND), called Baby MIND, has been constructed at CERN and will act as a magnetic spectrometer behind the main WAGASCI target. [...] SISSA, 2018 - 7 p.
- Published in : PoS NuFact2017 (2018) 078 Fulltext: PDF; External link: PoS server In : 19th International Workshop on Neutrinos from Accelerators, Uppsala, Sweden, 25 - 30 Sep 2017, pp.078

ENUBET: High Precision Neutrino Flux Measurements in Conventional Neutrino Beams / Pupilli, Fabio (INFN, Padua) ; Ballerini, G (Insubria U., Como ; INFN, Milan Bicocca) ; Berra, A (Insubria U., Como ; INFN, Milan Bicocca) ; Boanta, R (INFN, Milan Bicocca ; Milan Bicocca U.) ; Bonesini, M (INFN, Milan Bicocca) ; Brizzolari, C (Insubria U., Como ; INFN, Milan Bicocca) ; Brunetti, G (INFN, Padua) ; Calviani, M (CERN) ; Carturan, S (INFN, Legnaro) ; Catanesi, M G (INFN, Bari) et al. The ENUBET project aims at demonstrating that the systematics in neutrino fluxes from conventional beams can be reduced to 1% by monitoring positrons from K$_{e3}$ decays in an instrumented decay tunnel, thus allowing a precise measurement of the $\nu_e$ (and $\overline{\nu}_e$) cross section. This contribution will report the results achieved in the first year of activities. [...] SISSA, 2018 - 8 p. - Published in : PoS NuFact2017 (2018) 087 Fulltext: PDF; External link: PoS server In : 19th International Workshop on Neutrinos from Accelerators, Uppsala, Sweden, 25 - 30 Sep 2017, pp.087
The Annals of Statistics Ann. Statist. Volume 11, Number 3 (1983), 793-803. Second Order Efficiency of Minimum Contrast Estimators in a Curved Exponential Family Abstract This paper presents a sufficient condition for second order efficiency of an estimator. The condition is easily checked in the case of minimum contrast estimators. The $\alpha^\ast$-minimum contrast estimator is defined and proved to be second order efficient for every $\alpha, 0 < \alpha < 1$. The Fisher scoring method is also considered in the light of second order efficiency. It is shown that a contrast function is associated with the second order tensor and the affine connection. This fact leads us to prove the above assertions in the differential geometric framework due to Amari. Article information Source Ann. Statist., Volume 11, Number 3 (1983), 793-803. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176346246 Digital Object Identifier doi:10.1214/aos/1176346246 Mathematical Reviews number (MathSciNet) MR707930 Zentralblatt MATH identifier 0519.62027 JSTOR links.jstor.org Keywords Affine connection ancillary subspace of estimator curvature curved exponential family Fisher consistency Fisher information Fisher scoring method information loss maximum likelihood estimator minimum contrast estimator $\Gamma$-transversality searching curve of estimator second order efficiency Citation Eguchi, Shinto. Second Order Efficiency of Minimum Contrast Estimators in a Curved Exponential Family. Ann. Statist. 11 (1983), no. 3, 793--803. doi:10.1214/aos/1176346246. https://projecteuclid.org/euclid.aos/1176346246
Frege did not see himself as regimenting existing practice. He explicitly says that he developed Begriffsschrift (concept-script) because he was dissatisfied with the vagueness of natural language for the conceptual analysis of mathematics, the purpose of which was not to service practice but to remake it after exposing its logical foundations (the logicist program). Notationally, Frege's version of the predicate calculus with quantifiers is sometimes described as "atrocious", and it played virtually no role in the adoption of the variable/quantifier notation. SEP's Algebra of Logic Tradition, Dipert's Peirce, Frege, the Logic of Relations, and Church's Theorem and Peckhaus's Calculus Ratiocinator vs. Characteristica Universalis? The Two Traditions in Logic, Revisited give detailed accounts of relevant developments and controversies. Two years after Begriffsschrift, in 1881, C.S. Peirce and his student Mitchell independently introduced their version of the quantifier calculus that evolved into what we have today; the original versions of $\exists$ and $\forall$ were $\Sigma$ and $\Pi$, respectively, introduced by Peirce in 1885. It spread after being enshrined in Schröder's monumental Vorlesungen über die Algebra der Logik, the three volumes of which were published in 1890, 1895 and 1905, and was picked up by Peano and Russell, among others. Vorlesungen became the notational blueprint for Russell's Principles of Mathematics (1903), and later for Principia Mathematica (1910-1913), written with Whitehead. Frege's role would likely have been forgotten had Russell not plucked him from obscurity in his promotion of logicism. Peirce built on past work more than Frege did, but even he was not codifying existing practice.
The use of variables as in quantifier logic was largely alien to pre-19th century mathematics, and even informal use of nested quantifiers does not really appear until the work of Dedekind and Weierstrass (one can detect some precursors to it in Gauss, Cauchy, Dirichlet and Riemann, among others, but their use is transitional from the older framework of "variable quantities"). Aristotelian logic, syllogistic, is roughly equivalent to the monadic (one-place) predicate calculus and does not require variables; the mathematical purposes met by nested quantifiers were served by iterated construction (as in Euclidean geometry) and infinitesimal/kinematic interpretations (as in calculus). For a comparison of pre- and post-quantifier practice see Friedman's Kant's Theory of Geometry. The movement towards modern use originated in the rise of abstract algebra, the origins of which can perhaps be traced to Vieta, see Viète's Relevance and his Connection to Euler, and it was partly anticipated by Leibniz. However, Leibniz's logical ideas had very little influence, and much of his work remained unpublished until after it was rediscovered by others. An extension of algebra to logic was developed by Boole in The Mathematical Analysis of Logic (1847), where propositional variables were introduced, and de Morgan introduced polyadic predicates (relations) in On the Syllogism, No. IV, and on the Logic of Relations (1859), thus setting the stage for quantifiers. Weierstrass's limits and Dedekind's cuts soon generated applications for them. Here is from Dipert: "Frege, apparently in response to the negative reviews attracted by his Begriffsschrift, especially one by Schröder, wrote several essays in the early 1880s in an attempt to show how his logic was superior to traditional and Boolean logic (see Frege 1880 and 1882).
He shows no appreciation of the work by De Morgan and Peirce, primarily criticizes Boole's work from a quarter century earlier, and seizes mainly on issues and faults which are controversial, even today, or from which most Booleans had retreated or were then retreating... The advantages of his elegant, rigorous theory, and the implications it had for the theory of relations, became lost in this polemic for most 19th century readers. However, Frege's battle was not lost. His victory was merely postponed a generation. Unfortunately, the legitimate contributions of Peirce and Schröder, especially to the logic of the relations, did get lost in the ensuing fray. Neither Peirce nor Schröder had the services of such an excellent propagandist as Russell. The Peirce-Schröder calculus was portrayed as purely algebraic, without the variable-binding operators Peirce regarded as essential and to which Schröder usually resorted; its weaknesses were rhetorically exploited with the bon mot 'too complicated'; its subtlest achievements were ignored (e.g., clever theorems proven and Peirce's insights into the differences between the monadic and polyadic predicate logic); and, in a final injustice, the development of the theory of relations in Principia Mathematica owes most, especially in notation, to Schröder via the influence of Peano rather than to Frege, but it was presented without substantial acknowledgement. Alfred Tarski is one of the few logicians or historians writing in the 20th century who seems to realize the proportions of this injustice".
If your system is of the form $$\dot{x} = A\,x + B\,u + f \\y = C\,x + D\,u + g \tag{1}$$ with $f$ and $g$ constant vectors, then you can do a coordinate transformation $z=x+\alpha$ and $v=u+\beta$, with $\alpha$ and $\beta$ constant vectors which satisfy $$\begin{bmatrix}A & B \\ C & D\end{bmatrix}\begin{bmatrix}\alpha \\ \beta\end{bmatrix} = \begin{bmatrix}f \\ g\end{bmatrix}. \tag{2}$$ This can always be solved if the block matrix on the left is full rank; if it is not, a solution exists only when the vector $(f,g)$ lies in the column span of that matrix. After this transformation the dynamics will simply be $$\dot{z} = A\,z + B\,v \\y = C\,z + D\,v. \tag{3}$$ However, if it is required that $u=k(y)$, with $k(0)=0$ and $k(-y)=-k(y)$, then this transformation will only work if the value found for $\beta$ is zero. If $\beta\neq0$, or the system of equations in $(2)$ is not solvable, then you could resort to extending the state space by one state $\xi$, whose time derivative is always zero. If the initial condition of $\xi$ is one, then the same dynamics as $(1)$ will be obtained when using the following extended state space model $$\begin{bmatrix}\dot{x} \\ \dot{\xi}\end{bmatrix} = \begin{bmatrix}A & f \\ 0 & 0\end{bmatrix}\begin{bmatrix}x \\ \xi\end{bmatrix} + \begin{bmatrix}B \\ 0\end{bmatrix} u \\y = \begin{bmatrix}C & g\end{bmatrix}\begin{bmatrix}x \\ \xi\end{bmatrix} + D\,u. \tag{4}$$
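As a quick numeric sanity check (with made-up matrices, not from the answer), equation (2) can be solved with a least-squares routine and verified directly:

```python
import numpy as np

# Illustrative system matrices (hypothetical numbers for demonstration)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
f = np.array([1.0, 0.5])
g = np.array([0.2])

# Stack into the block matrix of equation (2) and solve for (alpha, beta)
M = np.block([[A, B], [C, D]])
rhs = np.concatenate([f, g])
sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
alpha, beta = sol[:2], sol[2:]

# Residual is ~0 exactly when (f, g) lies in the column span of M
print(np.allclose(M @ sol, rhs))
```

Using `lstsq` rather than `solve` covers the rank-deficient case too: when the block matrix is not full rank, a near-zero residual tells you that $(f,g)$ lies in its span and the shift exists anyway.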
I understand that I need at least 0.7V applied to the base to turn on the base emitter junction,

Not exactly. A base-emitter junction can be considered forward-biased with larger or smaller magnitudes than \$700\:\text{mV}\$. It's just that the collector current (speaking now about common small-signal BJTs) will change by roughly a factor of 10 for every \$60\:\text{mV}\$ change in the base-emitter voltage. So if \$I_\text{C}=1\:\text{mA}\$ when \$V_\text{BE}=670\:\text{mV}\$, then you might expect that \$I_\text{C}=100\:\mu\text{A}\$ when \$V_\text{BE}=610\:\text{mV}\$ (assuming the temperature didn't change and you are using the same BJT as before.)

and also that I want to choose R1 so the base current biases the transistor in the middle of it's load line.

Again, that depends. But yes, you want to bias the BJT somewhere useful. How you make that decision may involve a little more than just "middle of...."

My confusion starts when I inject a signal to be amplified. The signal will be superimposed on the bias, however wouldn't that super imposed signal now be varying slightly above and below 0.7V thus turning the transistor off and on, creating a very distorted output signal?

Actually, for your circuit it's quite true that the output signal will be distorted. It's one of the reasons that it's not used (much) without some kind of global NFB included. You are looking at a grounded emitter, so the only AC impedance will be \$r_e=\frac{V_T}{I_\text{C}}\$, where \$V_T\$ is the thermal voltage. To a first order approximation, \$I_\text{C}\$ follows the Shockley equation. And that means approximately a 10-fold change in \$I_\text{C}\$ for a \$60\:\text{mV}\$ change in \$V_\text{BE}\$. So once again, suppose \$I_\text{C}=1\:\text{mA}\$ and that you've biased things so that \$V_\text{BE}=670\:\text{mV}\$. From this, the voltage gain will be hovering at about \$A_V=\frac{R_\text{C}=1\:\text{k}\Omega}{r_e\approx 26\:\Omega}\approx 38.5\$.
Suppose your AC signal is \$v_{ac}=\pm25\:\text{mV}\$. Near the top of the positive-going swing, it pulls upward on the base by \$25\:\text{mV}\$ and this causes the collector current to increase by a factor of about 2.6. So the gain raises to about \$A_V=\frac{R_\text{C}=1\:\text{k}\Omega}{r_e\approx 10\:\Omega}\approx 100\$. Near the bottom of the negative-going swing, it pulls downward on the base by \$25\:\text{mV}\$ and this causes the collector current to decrease by a factor of about 2.6. So the gain falls to about \$A_V=\frac{R_\text{C}=1\:\text{k}\Omega}{r_e\approx 68\:\Omega}\approx 14.8\$. In short, the voltage gain is varying with the signal, everywhere from about 15 to about 100. So yes, the signal at the collector is usually distorted. If you keep the input signal swing small enough, say below a millivolt or so, then it's not so bad. But this kind of stage isn't often used as a single stage. Instead, global NFB as part of a multi-stage system is often applied to help straighten things out. (Or an emitter resistor is added to provide local NFB.) So far, I've just addressed your questions. But the circuit isn't very good. Different BJTs from the same family of parts will vary significantly in a variety of ways. They will have different saturation currents which affect their base-emitter voltage for a given collector current; they will exhibit different values of \$\beta\$; they will vary somewhat differently as the ambient temperature rises and falls during the day; etc. If you do know how to calculate (in theory) the operating point given some varying parameters, try it out and see how differently that circuit biases itself with slight variations in these three parameters. If not, use Spice and see what it says. Most especially, play around with \$\beta\$ and see what happens. (Assuming you can get it into active mode and not saturated, of course.)
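The numbers above can be reproduced with a few lines, assuming \$V_T \approx 26\:\text{mV}\$ and the factor-of-10-per-60-mV rule:

```python
import math

VT = 0.026      # thermal voltage at room temperature, ~26 mV
RC = 1000.0     # collector resistor, 1 kOhm
IC0 = 1e-3      # bias collector current, 1 mA

def ic(dvbe):
    # ~10x collector-current change per 60 mV of V_BE (Shockley behavior)
    return IC0 * 10 ** (dvbe / 0.060)

def gain(i_c):
    re = VT / i_c       # intrinsic emitter resistance r_e = V_T / I_C
    return RC / re

print(round(gain(ic(0.0))))     # ~38 at the bias point
print(round(gain(ic(0.025))))   # ~100 near the top of a +/-25 mV swing
print(round(gain(ic(-0.025))))  # ~15 near the bottom of the swing
```

The spread between the two extremes is the signal-dependent gain that produces the distortion described above; adding an emitter resistor in the model (swamping \$r_e\$) flattens it out.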
The momentum decomposition you wrote is valid only for a scalar (spinless), real field, satisfying the Klein-Gordon equation. When considering a field with spin, like a spin-$1/2$ field satisfying the Dirac equation, you must include the polarization vectors, obtaining something of the form $$ \psi_\alpha(x) = \sum_{\textbf{p},s} N_{s,\textbf{p}}[ c_s(\textbf{p}) u_\alpha(s,\textbf{p}) e^{-ipx} + d_s^\dagger(\textbf{p}) v_\alpha(s,\textbf{p}) e^{ipx} ] $$$$ \bar \psi_\alpha(x) = \sum_{\textbf{p},s} N_{s,\textbf{p}}[ c_s^\dagger(\textbf{p}) \bar u_\alpha(s,\textbf{p}) e^{ipx} + d_s(\textbf{p}) \bar v_\alpha(s,\textbf{p}) e^{-ipx} ] $$ where $c,c^\dagger$ and $d,d^\dagger$ are respectively annihilation/creation operators of particles and the corresponding antiparticles, $u,v$ are the polarization vectors, which encode the information about the spin, and $N_{s,\textbf{p}}$ are normalization factors depending on the convention used. The sum is extended over all momentum ($\textbf{p}$) and spin ($s$) eigenstates. The objects I denoted with $ue^{-ipx}$ and $ve^{ipx}$ are called Dirac spinors. They have four components, reflecting the fact that a Dirac field describes both the electron and the positron, each having 2 spin degrees of freedom (for a total of $2+2=4$ degrees of freedom). In other words, the spin information is carried by additional components of the field, labelled by the spin index, which I denoted with $\alpha$. These additional degrees of freedom do not evolve independently, as you can see from the (free) Dirac equation, which, showing the spinor indices explicitly, reads $$ ( i \gamma^\mu_{\alpha\beta} \partial_\mu - m \delta_{\alpha\beta})\psi_\beta(x) = 0, \quad \forall \alpha=1,2,3,4,$$ where $\gamma^\mu$ are the gamma matrices, and a sum over the spin index $\beta$ is implicit. The spin information is encoded in the additional degrees of freedom of the field. For another example you can look at spin-1 fields (e.g. photons).
In this case the quantum field is denoted by $A_\mu(x)$, with $\mu$ a vector index, which is the spin-index for spin-1 fields. The decomposition now has the form:$$ A_\mu(x) = \sum_{\textbf{p},s} N_{s,\textbf{p}} [ a(s,\textbf{p}) \varepsilon_\mu(s,\textbf{p}) e^{-ipx} + a^\dagger(s,\textbf{p}) \varepsilon_\mu^*(s,\textbf{p}) e^{ipx} ]. $$Again, $s$ denotes the spin states (which are usually called polarization states in this context), and $\varepsilon$ are the polarization vectors. Finally, note that each spin component of the Dirac field satisfies the Klein-Gordon equation:$$ (\square + m^2)\psi_\alpha(x) = 0, \quad \forall \alpha $$
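The last statement amounts to the mass-shell condition: each plane-wave mode $e^{\mp ipx}$ solves $(\square+m^2)\psi=0$ precisely when $p^0=\sqrt{|\textbf{p}|^2+m^2}$, since $(\square+m^2)e^{-ipx}=(-p_0^2+|\textbf{p}|^2+m^2)e^{-ipx}$. A trivial numeric check (the mass and momentum values are arbitrary, chosen only for illustration):

```python
import math

m = 0.511                    # assumed mass, illustrative value
p3 = (0.3, -0.4, 1.2)        # assumed spatial momentum
p0 = math.sqrt(m**2 + sum(c * c for c in p3))  # on-shell energy

# (box + m^2) acting on e^{-ipx} pulls down (-p0^2 + |p|^2 + m^2),
# which must vanish when p is on-shell (signature +,-,-,-):
prefactor = -p0**2 + sum(c * c for c in p3) + m**2
print(abs(prefactor))  # ~0 up to floating-point rounding
```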
Definition: Prime Ideal of Ring

Definition

Let $R$ be a ring.

Definition 1: A prime ideal of $R$ is a proper ideal $P$ of $R$ such that: $I \circ J \subseteq P \implies I \subseteq P \text { or } J \subseteq P$ for all ideals $I$ and $J$ of $R$.

Definition 2: A prime ideal of $R$ is a proper ideal $P$ of $R$ such that: $\forall a, b \in R : a \circ b \in P \implies a \in P$ or $b \in P$.

Special cases: Definition:Prime Number; Definition:Prime Element of Ring, as shown at Prime Element iff Generates Principal Prime Ideal.

Generalizations: Definition:Prime Ideal (Order Theory), as shown at Prime Ideal of Ring iff Prime Ideal in Lattice of Ideals.
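As a concrete instance of Definition 2, take $R = \Z$ with $\circ$ the usual multiplication: the ideal $(n) = n\Z$ is prime exactly when $n$ is a prime number. A brute-force check over a finite window of integers (the window bound is arbitrary; this is an illustration, not a proof):

```python
def elementwise_prime(n, bound=30):
    """Check 'a*b in nZ implies a in nZ or b in nZ' over a finite window."""
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            if (a * b) % n == 0 and a % n != 0 and b % n != 0:
                return False  # found a counterexample to the prime condition
    return True

print(elementwise_prime(7))  # True: (7) is a prime ideal of Z
print(elementwise_prime(6))  # False: 2*3 lies in (6) but neither factor does
```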
In module one (1), we demonstrated the data preparation phase of time series analysis. In module two (2), we described a few steps to calculate numerous summary statistics and verify the significance of their values. In this module, we will walk you through time series smoothing in Excel using NumXL functions and tools. For sample data, we'll use the S&P 500 weekly closing prices between January 2009 and July 2012. NumXL supports numerous smoothing functions, but each function assumes a particular characteristic about the sample data. Let's consider the S&P 500 weekly close prices time series between Jan 2009 and July 2012. The time series exhibits a trend over time. Using an equally-weighted moving average (WMA) with a window size of 4 weeks, forecasting into the next 12 weeks, we find: the WMA keeps pace with the original data, but it is lagging. Furthermore, the out-of-sample forecast is flat. Assuming the trend is deterministic (non-stochastic), we can use the Holt-Winters double exponential smoothing functions (DESMTH). The double exponential smoothing function (we used optimal values for the smoothing parameters) tracks the data pretty well, and the forecast looks in line with the original curve. Is this it? Did we find a crystal ball that tells us where the price will be each week? Not quite! Earlier, we made the assumption that the trend is deterministic (non-stochastic), but the price is more like a random-walk process, so the trend we observe is just an anomaly that can occur in a random walk. Proof! The Augmented Dickey-Fuller unit-root test (ADF Test) in NumXL can test for the presence of a unit-root (i.e. random walk) in the presence of drift and/or trend:$$(1-L)y_t=\triangledown y_t =\alpha + \gamma y_{t-1}+\beta \times t + \cdots$$ The ADF test is basically a test of $H_o:\gamma = 0$. The unit-root existence is confirmed in all 3 different formulations: no constant ($\triangledown y_t =\gamma y_{t-1}+ \epsilon_t$), constant ($\triangledown y_t =\alpha + \gamma y_{t-1}+ \epsilon_t$) and constant+trend ($\triangledown y_t =\alpha + \gamma y_{t-1}+\beta \times t+ \epsilon_t$). The time series is integrated (i.e. has a unit-root), so we need to take the first difference to stabilize (i.e. make stationary) the input data:$$z_t = (1-L)y_t=\triangledown y_t$$
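The two smoothers discussed above are easy to sketch outside of Excel. NumXL's own WMA/DESMTH functions are Excel-side; the Python below is a minimal illustrative re-implementation (parameter names and smoothing constants are my assumptions, not NumXL's), showing why the moving-average forecast goes flat while Holt's double exponential smoothing extrapolates a trend:

```python
def moving_average(y, window=4):
    """Equally-weighted trailing moving average; its forecast is flat."""
    return [sum(y[t - window + 1:t + 1]) / window
            for t in range(window - 1, len(y))]

def holt_double_exp(y, alpha=0.5, beta=0.5, horizon=12):
    """Holt's double exponential smoothing: level + trend recursions."""
    level, trend = y[0], y[1] - y[0]
    fitted = []
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        fitted.append(level)
    forecast = [level + h * trend for h in range(1, horizon + 1)]
    return fitted, forecast

# On an exactly linear series y_t = 2 + 3t the out-of-sample Holt forecast
# continues the slope, which the plain moving average cannot do:
y = [2.0 + 3.0 * t for t in range(50)]
_, fc = holt_double_exp(y)
print(fc[0], fc[11])  # one step and 12 steps ahead
```

Of course, as the ADF discussion above shows, tracking a deterministic trend this well says nothing about a random walk, where the apparent trend is spurious.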
This post is the third in a series explaining Basic Time Series Analysis. Click the link to check out the first post, which focused on stationarity versus non-stationarity, and to find a list of other topics covered. As a reminder, this post is intended to be a very applied example of how to use certain tests and models in a time-series analysis, either to get someone started learning about time-series techniques or to provide a big-picture perspective to someone taking a formal time-series class where the stats are coming fast and furious. As in the first post, the code producing these examples is provided for those who want to follow along in R. If you aren't into R, just ignore the code blocks and the intuition will follow. In this post I explain how one goes about estimating the relationship among several variables over time. This approach has natural applications in agricultural economics and finance. In ag econ there are commodities whose prices are inherently related because of substitution or complementary effects in production and/or consumption (e.g., corn and soybeans), or because of production processes (e.g., soybeans, soybean oil, and soybean meal). In finance, security prices for companies in a similar sector might be related because of common economic conditions driving profitability (e.g., Bank of America and J.P. Morgan Chase). In time-series analysis, there are two basic models typically used to estimate and evaluate the relationships between multiple variables over time: Vector Auto-regression (VAR) and Vector Error Correction (VECM). We will start with the Vector Auto-regression model, because it is the simpler one. The next post will cover VECM, which estimates how a group of variables move together in equilibrium. Also for simplicity, we will continue as in the first post using SPY (the S&P 500 exchange traded fund) and GS (Goldman Sachs) prices. 
# If you are following along, uncomment the next lines and run once to install the required packages
# install.packages('ggplot2')
# install.packages('xts')
# install.packages('quantmod')
# install.packages('broom')
# install.packages('tseries')
# install.packages("kableExtra")
# install.packages("knitr")
# install.packages("vars")
library(quantmod)
getSymbols(c('SPY', 'GS'))
## [1] "SPY" "GS"
SPYRet <- log(SPY$SPY.Adjusted) - log(lag(SPY$SPY.Adjusted))
GSRet <- log(GS$GS.Adjusted) - log(lag(GS$GS.Adjusted))
time_series <- cbind(SPYRet, GSRet)
colnames(time_series) <- c('SPY', 'GS')

Vector Autoregression Model A VAR model that estimates the relationship between SPY and GS looks like the following. \[\begin{align} SPY_t &= \beta^{spy}_0 + \beta^{spy}_1SPY_{t-1} + \beta^{spy}_2SPY_{t-2} + \beta^{spy}_3GS_{t-1} + \beta^{spy}_4GS_{t-2} + \nu_{spy} \\ GS_t &= \beta^{gs}_0 + \beta^{gs}_1SPY_{t-1} + \beta^{gs}_2SPY_{t-2} + \beta^{gs}_3GS_{t-1} + \beta^{gs}_4GS_{t-2} + \nu_{gs} \end{align}\] It consists of two linear regression equations, the first explaining the SPY price and the second explaining the GS price. Notice both equations have the exact same explanatory variables. Namely, today's (time t) price is explained by yesterday's SPY price \((SPY_{t-1})\), the day before yesterday's SPY price \((SPY_{t-2})\), yesterday's GS price (\(GS_{t-1}\)), and the day before yesterday's GS price (\(GS_{t-2}\)). This is why the VAR is sometimes called a 'reduced form' model. We haven't specified any economic theory about how these equations should be formed. We have simply written them down, giving past SPY prices an exactly equal opportunity to affect its own and GS prices, and past GS prices an exactly equal opportunity to affect its own and SPY prices. Our estimates of the \(\beta\)'s might reveal more about the economics. This is in contrast to a 'structural model', where the correct explanatory variables to include on the right hand side would be informed by economic theory. 
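Under the hood, fitting a VAR is just equation-by-equation OLS on stacked lags. The following is a numpy-only sketch of that mechanic (not the post's R code): the coefficient matrices A1 and A2 are made-up stable values, and we check that OLS recovers them from simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)
A1 = np.array([[0.20, 0.10], [0.05, 0.30]])  # assumed lag-1 coefficients
A2 = np.array([[0.05, 0.00], [0.10, 0.05]])  # assumed lag-2 coefficients
T = 5000
y = np.zeros((T, 2))
for t in range(2, T):
    y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + rng.normal(scale=0.01, size=2)

# Design matrix rows are [1, y_{t-1}, y_{t-2}]; targets are y_t.
X = np.column_stack([np.ones(T - 2), y[1:-1], y[:-2]])
B, *_ = np.linalg.lstsq(X, y[2:], rcond=None)

A1_hat = B[1:3].T  # rows 1-2 of B hold the first-lag coefficients
A2_hat = B[3:5].T  # rows 3-4 hold the second-lag coefficients
print(np.round(A1_hat, 2))
print(np.round(A2_hat, 2))
```

R's `VAR()` does this same stacking (plus standard errors, information criteria, etc.) for you.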
With the VAR, we opt for a fairly general model and let the data do the talking. This approach was perhaps most famously advocated by Sims (1980), after giving an extensive critique of the carefully (mis-)specified macro models of the day. Levels or Returns? If you read my first post in the series you should be wondering why in the world it is OK to put the SPY and GS prices into the VAR in levels. After all, we found strong evidence that both are non-stationary. This is true; the VAR model written above would have all the problems of a spurious regression we discussed in the first post. Writing the VAR using returns instead of price levels will usually remedy the situation, as noted in the first post. Then, a VAR(2) using price returns (using the \(\Delta\) notation to indicate \(\Delta SPY_t = log(SPY_t) - log(SPY_{t-1})\)) is \[\begin{align} \Delta SPY_t &= \beta^{spy}_0 + \beta^{spy}_1 \Delta SPY_{t-1} + \beta^{spy}_2 \Delta SPY_{t-2} + \beta^{spy}_3 \Delta GS_{t-1} + \beta^{spy}_4 \Delta GS_{t-2} + \epsilon_{spy} \\ \Delta GS_t &= \beta^{gs}_0 + \beta^{gs}_1 \Delta SPY_{t-1} + \beta^{gs}_2 \Delta SPY_{t-2} + \beta^{gs}_3 \Delta GS_{t-1} + \beta^{gs}_4 \Delta GS_{t-2} + \epsilon_{gs} \end{align}\] Fitting a VAR with two lags to SPY and GS returns yields the following. VAR(2) on SPY and GS Returns

library(vars)
library(stargazer)
var <- VAR(time_series[2:dim(time_series)[1],], p = 2, type = "const")
stargazer(var$varresult$SPY, var$varresult$GS, type = 'html', dep.var.labels = c("Equation 1-SPY Equation 2-GS"))

                          Dependent variable:
                  Equation 1: SPY    Equation 2: GS
                        (1)               (2)
SPY.l1                 0.003           0.216***
                      (0.028)          (0.055)
GS.l1                -0.063***        -0.127***
                      (0.014)          (0.028)
SPY.l2               -0.098***        -0.135**
                      (0.028)          (0.054)
GS.l2                  0.018            0.036
                      (0.014)          (0.028)
const                  0.0004           0.0001
                     (0.0002)          (0.0005)
Observations           2,787            2,787
R2                     0.021            0.011
Adjusted R2            0.020            0.010
Residual Std. Error (df = 2782)  0.012    0.024
F Statistic (df = 4; 2782)  14.895***  7.955***
Note: *p<0.1; **p<0.05; ***p<0.01

In the SPY returns equation (1), the first lag of GS returns and the second lag of SPY returns are statistically significant. Also, in the GS returns equation (2), the first and second lag of SPY returns are statistically significant, and the first lag of GS returns is statistically significant. What is it Used for? The VAR model is used to determine the relationship among several variables. You can use a VAR for forecasting, like we did with the ARIMA and GARCH models, but as we found with those, the forecasts are usually not precise enough to be all that informative from a practical standpoint. Instead, in practice the researcher will usually end up looking at the following three things that are derived from the fitted VAR model: Granger Causality, Impulse Response Functions, and Forecast Error Variance Decomposition, which reveal something about the nature of how these markets move together (or not). Granger Causality is most commonly implemented as an F-test of the joint significance of the other variable's lags in the equation of interest. Stated more simply in our context, it tests whether lags of SPY returns are helpful in forecasting GS returns, and vice versa. Impulse response functions show how one variable might react to sudden changes in the other variable. Finally, forecast error variance decomposition (FEVD) estimates how much of your forecast error can be attributed to unpredictability in each variable in the VAR. We will look more closely at each. Granger Causality Granger Causality is a different kind of causality than one typically runs into in cross-section econometrics, where you might have some kind of natural experiment. A typical story-line of that type might be something like the following: an unpredictable policy change gave a random subset of people more access to credit. 
An econometrician might then come along and test if greater access to credit leads to X, Y, or Z – like college enrollment, or mechanization on small-holder farms, etc. depending on the context. In that case, the econometrician will try to convince you whether or not access to credit really caused the change in outcome. Then, once causality has been established (or not), policy prescriptions might be suggested, perhaps to encourage (or not) the provision of credit. In time-series econometrics, we can seldom hope to show 'real' causality. We settle for Granger causality. Are these things correlated enough that one is useful in forecasting the other? If so, Granger Causality can be established. The following code takes the VAR regression output object var, and tests for Granger causality of SPY returns on GS returns and vice versa. The F-test is done the standard way in the lines where boot = FALSE, and using bootstrapped standard errors in the lines where boot = TRUE.[1] Since we already looked in detail and found heteroskedasticity in these return series in the previous GARCH post, we should be concerned that the standard errors of the F-test calculated in the usual way are too narrow. Note that in the output table below the code that the F-statistics are the same using both methods, but the P-values are bigger using the bootstrap. This is because the bootstrap increases the standard errors according to how bad the heteroskedasticity is. 
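The mechanics of the plain (non-bootstrap) Granger F-test can be sketched without any package: compare a restricted model (own lags only) with an unrestricted model (own lags plus the other series' lags). The simulated data below are my own construction, wired so that x genuinely helps predict y; the coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 1000
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.2 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()

Y = y[2:]
ones = np.ones(T - 2)
X_r = np.column_stack([ones, y[1:-1], y[:-2]])                   # restricted
X_u = np.column_stack([ones, y[1:-1], y[:-2], x[1:-1], x[:-2]])  # + lags of x

def ssr(X, Y):
    """Sum of squared residuals from an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return resid @ resid

q = 2                  # restrictions: the two excluded lags of x
k = X_u.shape[1]
F = ((ssr(X_r, Y) - ssr(X_u, Y)) / q) / (ssr(X_u, Y) / (len(Y) - k))
print(F)  # large F => reject "x does not Granger-cause y"
```

The bootstrap variant keeps this same F-statistic but replaces the F-distribution reference with a resampled null distribution, which is what widens the p-values in the table below.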
Tests for Granger Causality

library(broom)
library(knitr)
library(kableExtra)
causeSPY_noboot <- causality(var, cause = 'SPY', boot = FALSE, boot.runs = 5000)$Granger %>% tidy()
causeSPY_boot <- causality(var, cause = 'SPY', boot = TRUE, boot.runs = 5000)$Granger %>% tidy()
causeGS_noboot <- causality(var, cause = 'GS', boot = FALSE, boot.runs = 5000)$Granger %>% tidy()
causeGS_boot <- causality(var, cause = 'GS', boot = TRUE, boot.runs = 5000)$Granger %>% tidy()
causeSPY_noboot$parameter <- NA
causeGS_noboot$parameter <- NA
causeSPY_boot$df1 <- NA
causeSPY_boot$df2 <- NA
causeGS_boot$df1 <- NA
causeGS_boot$df2 <- NA
resultstable <- rbind(causeGS_noboot[, c(1, 2, 6, 3, 4, 5)], causeSPY_noboot[, c(1, 2, 6, 3, 4, 5)], causeGS_boot[, c(5, 6, 3, 1, 2, 4)], causeSPY_boot[, c(5, 6, 3, 1, 2, 4)])
resultstable$Bootstrap <- c('No Bootstrap', '', 'Bootstrap', '')
resultstable <- resultstable[, c(7, 1, 2, 3, 4, 5, 6)]
colnames(resultstable) <- c('Bootstrap', 'df1', 'df2', '#Bootstraps', 'Stat', 'P-Value', 'Equation')
resultstable %>% kable("html") %>% kable_styling(bootstrap_options = c("striped", "hover"))

Bootstrap      df1  df2   #Bootstraps  Stat      P-Value   Equation
No Bootstrap   2    5564  NA           10.60502  2.53e-05  Granger causality H0: GS do not Granger-cause SPY
               2    5564  NA           11.65211  8.90e-06  Granger causality H0: SPY do not Granger-cause GS
Bootstrap      NA   NA    5000         10.60502  6.86e-02  Granger causality H0: GS do not Granger-cause SPY
               NA   NA    5000         11.65211  7.22e-02  Granger causality H0: SPY do not Granger-cause GS

Without the bootstrapped standard errors we would conclude strongly that SPY Granger causes GS and GS Granger causes SPY. When we use bootstrapped standard errors we fail to reject, at the 5% significance level, both that GS returns do not Granger cause SPY returns and that SPY returns do not Granger cause GS returns (so we conclude they do not, but just barely). Or, if you like, you could say that at the 10% significance level, both SPY returns Granger cause GS returns and GS returns Granger cause SPY returns. 
Pick your narrative. Forecast Error Variance Decomposition A Forecast Error Variance Decomposition is another way to evaluate how markets affect each other using the VAR model. In a FEVD, forecast errors are considered for each equation in the fitted VAR model, then the fitted VAR model is used to determine how much of each error realization is coming from unexpected changes (forecast errors) in the other variable. We can calculate this conveniently with fevd() from the vars package.

FEVD <- fevd(var, n.ahead = 5)
plot(FEVD)

In the first plot, we see the FEVD for SPY. It appears that although we were borderline on whether or not to conclude that GS returns Granger cause SPY returns, the FEVD reveals that the magnitude of the causality is tiny anyway (we can't even see the contribution from GS returns on the FEVD graph). Conversely, it appears SPY returns account for about half of the forecast error variance in the GS equation. That seems economically significant to me. Impulse Response Function Impulse Response Functions have a similar motivation, but go about it in a slightly different way. In this exercise, you take a shock to one variable, say SPY, and propagate it through the fitted VAR model for a number of periods. You can trace this through the VAR model and see if it impacts the other variables in a statistically significant way. Also from the vars package, this is easily achieved with the irf() function. Here also confidence intervals are produced by bootstrapping.

IRF <- irf(var, impulse = 'SPY', response = 'GS', n.ahead = 5, boot = TRUE, runs = 100, ci = 0.95)
plot(IRF)
IRF <- irf(var, impulse = 'GS', response = 'SPY', n.ahead = 5, boot = TRUE, runs = 100, ci = 0.95)
plot(IRF)

It appears that a one-standard-deviation shock to SPY returns produces a statistically significant response in GS returns for one period, then becomes insignificant. 
The one-standard-deviation shock to GS returns produces a statistically significant response in SPY two periods later (also nearly statistically significant three periods later). Eyeballing the size of these effects, it looks to me like the FEVD and impulse response analysis point to similar findings. The impact of SPY returns on GS returns appears to be sizable in the IRF, while the impact of GS returns on SPY returns seems to be minimal from an economic impact perspective. That's It! That covers the basics of what a VAR model is and how to use Granger causality, FEVD, and impulse response functions to analyze how a group of markets relate to one another. There will be just a couple more posts in this series on the basics of time-series analysis. In the next post we will cover cointegration and the VECM. Sometimes, non-stationary variables move so closely together that there is a linear combination of those variables that is stationary! This case requires special consideration. The most intuitive cases are markets that are related by a production process, like the soybean complex. Soybean crushers buy soybeans and sell meal and oil. Economic theory would suggest the prices of those three commodities maintain a relationship so that the profitability of soybean crushers trends around some modest number greater than zero. More details in the next post! The final post will summarize the series and provide a 'cook-book' like decision tree on how to decide which model is appropriate for your case. References Sims, C. A. (1980). Macroeconomics and reality. Econometrica: Journal of the Econometric Society, 1-48. [1] A bootstrap is where you do a simulation experiment by randomly drawing a subset of the sample, then calculate the statistic of interest. 
By doing this a bunch of times you get natural variability in the statistic you care about, which can be used to calculate standard errors of the statistic that are more conservative than the standard method when regression assumptions are not perfectly valid (like having homoskedastic errors). Many statistics programs implement this as an option to 'bootstrap the standard errors'.
Brief introduction Let $n \in \mathbb N$. The non-equality function, denoted $ NE : \{0,1\}^n \times \{0,1\}^n \to \{0,1\} $ is defined as follows: \begin{align} \forall x,y\in \{0,1\}^n \;\;\; NE(x,y) = \begin{cases}1 & x\neq y \\ 0 & x = y\end{cases} \end{align} In the setting of two-party communication complexity, Alice receives $x$ and Bob receives $y$, and they want to compute $NE(x,y)$, while minimizing the number of bits in the communication between them. This is one of the fundamental functions discussed in the study of communication complexity, along with many others, such as the equality function and the disjointness function. It is known that its 1-non-deterministic communication complexity is $\mathcal N ^1 (NE) = \log n + 1$. A direct sum version of the question is as follows: Alice receives a list of $\ell$ inputs, $x_1,\dots,x_\ell$, Bob receives another list of $\ell$ inputs, $y_1,\dots,y_\ell$, and together they wish to compute the expression $\bigwedge_{i=1}^\ell NE(x_i,y_i)$, again, while minimizing the number of bits in the communication between them. The question It is known that the 1-non-deterministic communication complexity of $\bigwedge_{i=1}^\ell NE(x_i,y_i)$ is $O(\ell + \log n)$. An elegant argument that goes through the notion of $B^1_*$ is presented in Communication Complexity by Kushilevitz and Nisan (on page 46). However, I could not seem to find a concrete protocol that achieves this complexity. Can you please propose such a protocol? Thanks in advance.
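For context (this does not answer the direct-sum question itself), the single-instance bound $\mathcal N^1(NE)=\log n+1$ is achieved by the standard witness protocol: the nondeterministic guess is an index $i$ with $x_i\neq y_i$; Alice sends $(i, x_i)$, which costs $\log n + 1$ bits, and Bob accepts iff $y_i \neq x_i$. A small exhaustive simulation confirming that some guess accepts exactly when $NE(x,y)=1$:

```python
from itertools import product

def accepts(x, y, i):
    """Bob's check after receiving the witness (i, x[i])."""
    return y[i] != x[i]

n = 4
for x, y in product(product('01', repeat=n), repeat=2):
    exists_witness = any(accepts(x, y, i) for i in range(n))
    assert exists_witness == (x != y)  # an accepting guess exists iff NE(x,y)=1
print("protocol correct on all", 2 ** (2 * n), "input pairs")
```

The direct-sum difficulty is precisely that repeating this $\ell$ times naively costs $\ell(\log n + 1)$ bits rather than $O(\ell + \log n)$.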
The Physics of Scattering Experiments The physics of scattering can be understood according to the Huygens-Fresnel principle of interference. Incoming radiation interacts with scatterers in the specimen (electrons in case of X-rays and atomic nuclei in case of neutrons), causing these to emit secondary, nearly spherical wavefronts. The superposition of these secondary waves yields the scattered radiation in the detector placed far from the sample. The scattering intensity recorded by the detector is described by the general formula: \[I(\vec{q})=\left|\iiint \Delta\rho(\vec{r}) e^{-i\vec{q}\vec{r}}\mathrm{d}^3\vec{r}\right|^2\textrm{,}\] which is formally a Fourier transform of $\Delta\rho(\vec{r})$, the scattering length density function describing the sample. For the X-ray case, this is the same as the number density function of the electrons, apart from a constant factor. The natural variable of the Fourier transform, $\vec{q}$, describes the angle dependence of the scattering. The relation of its magnitude to the scattering angle ($2\theta$) is $q=4\pi\sin{\theta}/\lambda$, where $\lambda$ is the X-ray wavelength. From the physical point of view, $\vec{q}$ is the vector difference of the wave vectors of the scattered and incident radiation; $\hbar \vec{q}$ is thus the momentum transfer, i.e. the momentum gained by the X-ray photon upon interacting with the sample. This is shown in the next figure. The typical units of $q$ are nm$^{-1}$. The scattered intensity describes the scattering power of the sample. 
The physical quantity is the differential scattering cross-section in this case, which represents the number of particles scattered into a given solid angle, normalized by the incident flux at the sample: $$\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\left(\vec{q}\right) = \frac{\mathrm{d}N(\vec{q})}{\mathrm{d}\Omega} \frac{A}{N_\mathrm{in}}\textrm{,}$$ where $\frac{\mathrm{d}N(\vec{q})}{\mathrm{d}\Omega}$ is the number of particles scattered from the sample into an infinitesimal solid angle at the direction given by $\vec{q}$, $A$ is the irradiated cross-section of the sample and $N_\mathrm{in}$ is the number of incoming rays. For practical reasons, the differential scattering cross-section is usually normalized by the sample volume, thus yielding the commonly used cm$^{-1}$ units. In contrast to the calibration of scattering angles to $q$, obtaining the intensity on this absolute scale is more difficult. Luckily, it is not needed in most cases, only when absolute quantities, such as molecular weight, concentration, etc. are to be determined.
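The Fourier-transform formula above can be checked numerically on the classic textbook case of a uniform sphere of radius $R$, where for a spherically symmetric density the 3D integral reduces to $A(q) = 4\pi\int_0^R r^2\,\frac{\sin qr}{qr}\,\mathrm{d}r = \frac{4\pi}{q^3}\left(\sin qR - qR\cos qR\right)$ and $I(q)=|A(q)|^2$. A sketch (the radius and $q$ values are arbitrary illustrative choices):

```python
import numpy as np

R, q = 5.0, 0.7  # illustrative radius (nm) and momentum transfer (nm^-1)

# Midpoint-rule quadrature of the radial integral (avoids the r=0 point).
N = 20000
dr = R / N
r = (np.arange(N) + 0.5) * dr
A_numeric = 4 * np.pi * np.sum(r**2 * np.sin(q * r) / (q * r)) * dr

# Known closed form for the uniform sphere.
A_closed = 4 * np.pi / q**3 * (np.sin(q * R) - q * R * np.cos(q * R))

I_q = A_numeric**2  # scattering intensity at this q
print(A_numeric, A_closed)
```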
Understanding the scattering results The direct result of a small-angle scattering measurement is the scattering pattern, i.e. the two-dimensional image recorded by the position sensitive detector. This can be considered as a matrix of positive integer numbers, each matrix element corresponding to an image element of the sensitive surface of the detector. The numbers in the matrix elements are the total number of X-ray incidences registered by the given pixel during the time of the exposure. A typical scattering pattern is shown below. The image is a colour-coded representation of the scattering intensity matrix. The colour scale of this representation is found on the right side. The white crosshair marks the position of the direct beam, aka the beam center, which is the position at which the direct, non-interacting radiation would hit the detector surface. This corresponds to $q=0$. From the definition of the scattering vector, $\vec{q}$, it is clear that the momentum transfer vectors corresponding to each point on a given circle around the beam position have the same magnitude, which is in direct relation with the radius of the circle. More precisely the following holds: $$q=\frac{4\pi}{\lambda}\sin{\left(\frac{\tan^{-1}{\left(\frac{r}{L}\right)}}{2}\right)}\textrm{,}$$ where $r$ is the radius of the circle in real space length units, and $L$ is the sample-to-detector distance. The grayed-out parts are called "masked": these are not taken into account by subsequent image processing and data reduction routines. Different areas of the detector might be masked for several reasons: the scattering originating from the sample is shadowed by camera elements (beam-stop or its support); dead (unresponsive or overresponsive) pixels; insensitive areas ("gaps") between detector modules; high parasitic scattering in the region (compared to the sample scattering). The scattered intensity can be represented in a more tractable, one-dimensional form using azimuthal averaging, i.e. collecting the intensity of pixels falling into concentric rings of the scattering pattern. The following figure intends to clarify the concept. The resulting curve, the so-called radial scattering curve, is more convenient to handle, plot and employ as the initial dataset for model fitting.
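Azimuthal averaging with a mask can be sketched in a few lines: bin each pixel by its distance from the beam center and average the unmasked intensities within each ring. The synthetic pattern, beam-center position, mask region and bin count below are all illustrative assumptions; the test image decays as $1/(1+r)$ so the recovered radial curve is known in advance.

```python
import numpy as np

ny, nx = 201, 201
cy, cx = 100.0, 100.0                  # assumed beam-center pixel
yy, xx = np.indices((ny, nx))
rr = np.hypot(yy - cy, xx - cx)        # pixel distance from the beam center
pattern = 1.0 / (1.0 + rr)             # radially symmetric test image
mask = np.ones_like(pattern, dtype=bool)
mask[95:106, 95:106] = False           # pretend the beam-stop shadow is masked

nbins = 50
bins = np.linspace(0, rr.max(), nbins + 1)
which = np.digitize(rr[mask], bins) - 1  # ring index of every unmasked pixel

radial = []
for b in range(nbins):
    sel = which == b
    # rings fully covered by the mask yield no data point
    radial.append(pattern[mask][sel].mean() if sel.any() else np.nan)

print(radial[:5])  # intensity vs ring index; innermost rings are masked out
```

Real reduction pipelines additionally propagate counting statistics and convert the ring radius to $q$ via the detector-geometry formula above.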
The local escape velocity (relative to a ZAMO) of an angular momentum free test particle in the vicinity of a naked singularity with spin a and charge ℧ can be derived with the Kerr-Newman metric by setting the total conserved specific energy of the test particle (in natural units of G=M=c=k_C=1) $${\rm E} = g_{\rm t t} \ \dot {\rm t}+g_{\rm t \phi} \ \rm \dot \phi+\rm q \ A_{t}= $$ $$= \rm \left(1-\frac{2 r-\mho ^2}{\Sigma }\right) \dot{t}+ \frac{a \sin ^2 \theta \left(2 r-\mho ^2\right)}{\Sigma } \ \dot{\phi}+q \ \frac{\mho \ r }{\Sigma} = $$ $$= {\rm \sqrt{\frac{\Delta \ \Sigma}{(1- \mu^2 v^2) \ \chi}} + \omega \ L_z+ q \ \frac{\mho \ r }{\Sigma}}$$ to 1 and the angular momentum L z to 0, and solve for the local velocity v. Then we get $$\rm v_{esc}=v_{r}=|\frac{\sqrt{a^4 \cos ^4 \theta \ \zeta +2 a^2 r \cos ^2 \theta (q \chi \mho + r \ \zeta )+r^2 \left(-q^2 \chi \mho ^2+2 q r \chi \mho +r^2 \ \zeta \right)}}{\sqrt{\chi } \left(a^2 \cos ^2 \theta +r (r-q \mho )\right)}|$$ where the abbreviated terms are $$\rm \Sigma =a^2 \cos ^2 \theta +r^2, \ \Delta =a^2+r^2-2 r+\mho ^2, \ \chi =\left(a^2+r^2\right)^2-a^2 \Delta \sin ^2 \theta , \ \zeta = \Delta \Sigma -\chi $$ and q is the specific charge of the test particle. That holds for neutral and oppositely charged test particles all the way down to infinitesimally above the singularity (on the singularity itself it is, like John Rennie already mentioned, undefined due to the division by 0, but you can take the limit, which can be anything from 0 to c, depending on the spin and charge), while for test particles with the same charge as the black hole or naked singularity the optimal configuration for the test particle contains angular momentum. It might be that there are some shorter equations that can do the same; I let Mathematica solve it for me, which doesn't always give the shortest possible form. 
The analytical solution for the required vertical launch angle for charged test particles is too long to post here, but it can be found here. For neutral test particles it would simply be 90° (locally, relative to a ZAMO), while charged particles need a local velocity along the φ axis in order to have zero axial angular momentum because of the qA φ term in the equation for L z $${\rm L_z} = |g_{\phi \phi}| \ \dot \phi - |g_{\rm t \phi}| \ \rm \dot t - q \ A_{\phi}=$$ $$= {\rm \frac{\dot \phi \ \chi \sin ^2 \theta }{\Sigma }-\frac{\dot t \ a \ \sin ^2 \theta \left(2 r-\mho^2\right)}{\Sigma }+\frac{a \ r \ \mho \ q \ \sin ^2 \theta }{\Sigma }} = $$ $$= {\frac{{\rm v_{\phi}} \ \sqrt{|g_{\phi \phi}|} }{\rm \sqrt{1-\mu^2 \ v^2}}+\rm \frac{(1-\mu^2 v^2) \ a \ r \ \mho \ q \ \sin ^2 \theta }{\Sigma }}$$ where the velocity components v=√(v r²+v θ²+v φ²) are measured in the local tetrad of a ZAMO. In the case of an uncharged test particle and a nonrotating naked singularity or black hole the equations reduce to those given by John Rennie in the answer above. Here for example is a plot of the escape velocity of an angular momentum free test particle from a naked singularity with spin a=2 in the plane θ=80°, the x axis is r (the singularity is at r=0, R=a): As you can see in this scenario the escape velocity is decreasing again when you get close to the singularity (that is because the singularity can be repulsive, see d²r/dτ² in the numerical display below), here the particle escapes from r=0.001, θ=80° (it doesn't look like it though because the Boyer Lindquist radius is not the same as the cartesian radius) and gets accelerated first before it decelerates again: The local velocity components on the bottom of the last row are measured relative to local ZAMOs. The black ring represents the singularity, and the blue and red surfaces are the outer and inner ergospheres (more examples with other configurations can be found here).
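The claimed reduction to the nonrotating, uncharged case is easy to verify numerically: with a = 0, ℧ = 0 and q = 0 the boxed expression collapses to the Schwarzschild local escape velocity v = √(2/r) in these units. A sketch (the symbol U below stands in for ℧; complex square roots are used because ζ goes negative, with the outer absolute value as in the formula):

```python
import cmath
import math

def v_esc(r, theta, a=0.0, U=0.0, q=0.0):
    """|v_r| from the escape-velocity formula; U is the hole's charge, q the test charge."""
    c2 = math.cos(theta) ** 2
    s2 = math.sin(theta) ** 2
    Sigma = a * a * c2 + r * r
    Delta = a * a + r * r - 2 * r + U * U
    chi = (a * a + r * r) ** 2 - a * a * Delta * s2
    zeta = Delta * Sigma - chi
    num = cmath.sqrt(a**4 * c2**2 * zeta
                     + 2 * a * a * r * c2 * (q * chi * U + r * zeta)
                     + r * r * (-q * q * chi * U * U
                                + 2 * q * r * chi * U + r * r * zeta))
    den = math.sqrt(chi) * (a * a * c2 + r * (r - q * U))
    return abs(num / den)

for r in (3.0, 8.0, 100.0):
    print(r, v_esc(r, math.pi / 3), math.sqrt(2.0 / r))  # pairs agree
```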
Determinant The determinant of a square matrix $A = (a_{ij})$ of order $n$ over a commutative associative ring $R$ with unit 1 is the element of $R$ equal to the sum of all terms of the form $$(-1)^k a_{1i_1}\cdots a_{ni_n},$$ where $i_1,\dots,i_n$ is a permutation of the numbers $1,\dots,n$ and $k$ is the number of inversions of the permutation $1\mapsto i_1,\dots,n\mapsto i_n$, so that $(-1)^k$ is the signature of this permutation. The determinant of the matrix $$A=\begin{pmatrix}a_{11} & \dots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \dots & a_{nn} \end{pmatrix}$$ is written as $$\begin{vmatrix}a_{11} & \dots & a_{1n}\\ \vdots & \ddots & \vdots\\ a_{n1} & \dots & a_{nn} \end{vmatrix} \textrm{ or } \det A.$$ The determinant of the matrix $A$ contains $n!$ terms. When $n=1$, $\det A = a_{11}$; when $n=2$, $\det A = a_{11}a_{22} - a_{21}a_{12}$. The most important instances in practice are those in which $R$ is a field (especially a number field), a ring of functions (especially a ring of polynomials) or a ring of integers. From now on, $R$ is a commutative associative ring with 1, $\def\Mn{\textrm{M}_n(R)}\Mn$ is the set of all square matrices of order $n$ over $R$ and $E_n$ is the identity matrix over $R$. Let $A\in\Mn$, while $a_1,\dots,a_n$ are the rows of the matrix $A$. (All that is said from here on is equally true for the columns of $A$.) The determinant of $A$ can be considered as a function of its rows: $$\det A = D(a_1,\dots,a_n).$$ The mapping $$d:\Mn\to R\quad(A\mapsto \det A)$$ is subject to the following three conditions: 1) $d$ is a linear function of any row of $A$: $$\def\l{\lambda}\def\m{\mu}D(a_1,\dots,\l a_i+\m b_i,\dots,a_n) = \l D(a_1,\dots,a_i,\dots,a_n) + \m D(a_1,\dots,b_i,\dots,a_n),$$ where $\l,\m\in R$; 2) if the matrix $B$ is obtained from $A$ by replacing a row $a_i$ by a row $a_i+a_j$, $i\ne j$, then $d(B)=d(A)$; 3) $d(E_n) = 1$. Conditions 1)–3) uniquely define $d$, i.e. 
if a mapping $h:\Mn\to R$ satisfies conditions 1)–3), then $h(A) = \det(A)$. An axiomatic construction of the theory of determinants is obtained in this way. Let a mapping $f:\Mn\to R$ satisfy the condition: $1'$) if $B$ is obtained from $A$ by multiplying one row by $\l\in R$, then $f(B)=\l f(A)$. Clearly 1) implies $1'$). If $R$ is a field, the conditions 1)–3) prove to be equivalent to the conditions $1'$), 2), 3). The determinant of a diagonal matrix is equal to the product of its diagonal entries. The surjectivity of the mapping $d:\Mn\to R$ follows from this. The determinant of a triangular matrix is also equal to the product of its diagonal entries. For a matrix $$A=\begin{pmatrix}B&0\\D&C\end{pmatrix}\;,$$ where $B$ and $C$ are square matrices, $$\det A = \det B \det C.$$ It follows from the properties of transposition that $\det A^t = \det A$, where ${}^t$ denotes transposition. If the matrix $A$ has two identical rows, its determinant equals zero; if two rows of a matrix $A$ change places, then its determinant changes its sign; $$D(a_1,\dots,a_i+\l a_j,\dots,a_n) = D(a_1,\dots,a_i,\dots,a_n)$$ when $i\ne j$, $\l\in R$; for $A$ and $B$ from $\Mn$, $$\det (AB) = (\det A)(\det B).$$ Thus, $d$ is an epimorphism of the multiplicative semi-groups $\Mn$ and $R$. Let $m\le n$, let $A=(a_{ij})$ be an $(m\times n)$-matrix, let $B=(b_{ij})$ be an $(n\times m)$-matrix over $R$, and let $C=AB$. Then the Binet–Cauchy formula holds: $$\det C = \sum_{1\le j_1<\cdots<j_m\le n} \begin{vmatrix} a_{1j_1}&\dots&a_{1j_m}\\ \vdots&\ddots&\vdots\\ a_{mj_1}&\dots&a_{mj_m} \end{vmatrix} \begin{vmatrix} b_{j_11}&\dots&b_{j_1m}\\ \vdots&\ddots&\vdots\\ b_{j_m1}&\dots&b_{j_mm} \end{vmatrix} $$ Let $A=(a_{ij})\in \Mn$, and let $A_{ij}$ be the cofactor of the entry $a_{ij}$. 
The following formulas are then true: $$\begin{equation}\left.\begin{aligned} \sum_{j=1}^n a_{ij}A_{kj} &= \delta_{ik} \det A\\ \sum_{i=1}^n a_{ij}A_{ik} &= \delta_{jk} \det A \end{aligned}\right\}\label{1}\end{equation}$$ where $\delta_{ij}$ is the Kronecker symbol. Determinants are often calculated by development according to the elements of a row or column, i.e. by the formulas (1), by the Laplace theorem (see Cofactor) and by transformations of $A$ which do not alter the determinant. For a matrix $A$ from $\Mn$, the inverse matrix $A^{-1}$ in $\Mn$ exists if and only if there is an element in $R$ which is the inverse of $\det A$. Consequently, the mapping $$\def\GL{\textrm{GL}}\GL(n,K)\to K^*\quad (A\mapsto \det A),$$ where $\GL(n,K)$ is the group of all invertible matrices in $\Mn$ (i.e. the general linear group) and where $K^*$ is the group of invertible elements in $K$, is an epimorphism of these groups. A square matrix over a field is invertible if and only if its determinant is not zero. The $n$-dimensional vectors $a_1,\dots,a_n$ over a field $F$ are linearly dependent if and only if $$D(a_1,\dots,a_n) = 0.$$ The determinant of a matrix $A$ of order $n>1$ over a field is equal to 1 if and only if $A$ is the product of elementary matrices of the form $$x_{ij}(\l) = E_n+\l e_{ij},$$ where $i\ne j$, while $e_{ij}$ is a matrix whose only non-zero entry is a 1 in position $(i,j)$. The theory of determinants was developed in relation to the problem of solving systems of linear equations: $$\begin{equation}\left.\begin{aligned} a_{11}x_1+\cdots+a_{1n}x_n &=b_1\\ \cdots &\\ a_{n1}x_1+\cdots+a_{nn}x_n &=b_n\\ \end{aligned}\right\}\label{2}\end{equation}$$ where $a_{ij}, b_j$ are elements of a field. If $\det A\ne 0$, where $A=(a_{ij})$ is the matrix of the system (2), then this system has a unique solution, which can be calculated by Cramer's formulas (see Cramer rule).
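Two of the identities above — the Binet–Cauchy formula and Cramer's rule — are easy to sanity-check numerically. A minimal Python sketch (NumPy; the matrix sizes and the 3×3 system are arbitrary illustrative choices, not taken from the article):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# --- Binet-Cauchy: det(AB) as a sum over column selections j_1 < ... < j_m ---
m, n = 2, 4                                   # the formula requires m <= n
A = rng.integers(-5, 6, size=(m, n)).astype(float)
B = rng.integers(-5, 6, size=(n, m)).astype(float)
lhs = np.linalg.det(A @ B)
rhs = sum(np.linalg.det(A[:, J]) * np.linalg.det(B[J, :])
          for J in map(list, combinations(range(n), m)))
assert np.isclose(lhs, rhs)

# --- Cramer's rule: x_i = det(M_i) / det(M), column i of M replaced by b ---
def cramer_solve(M, b):
    d = np.linalg.det(M)
    if np.isclose(d, 0.0):
        raise ValueError("det = 0: Cramer's rule does not apply")
    x = np.empty(len(b))
    for idx in range(len(b)):
        Mi = M.copy()
        Mi[:, idx] = b
        x[idx] = np.linalg.det(Mi) / d
    return x

M = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
assert np.allclose(cramer_solve(M, b), np.linalg.solve(M, b))   # x = (2, 3, -1)
```

Here the Binet–Cauchy sum runs over all increasing index tuples, exactly as in the displayed formula, and Cramer's rule is checked against a general-purpose solver.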
When the system (2) is given over a ring $R$ and $\det A$ is invertible in $R$, the system also has a unique solution, also given by Cramer's formulas. A theory of determinants has also been constructed for matrices over non-commutative associative skew-fields. The determinant of a matrix over a skew-field $k$ (the Dieudonné determinant) is introduced in the following way. The skew-field $k$ is considered as a semi-group, and its commutative homomorphic image $\bar k$ is formed. $k$ consists of a group, $k^*$, with added zero 0, while the role of $\bar k$ is taken by the group $\overline{k^*}$ with added zero $\bar 0$, where $\overline{k^*}$ is the quotient group of $k^*$ by the commutator subgroup. The epimorphism $k\to \bar k$, $\l \mapsto \bar\l$, is given by the canonical epimorphism of groups $k^*\to \overline{k^*}$ and by the condition $0\mapsto \bar0$. Clearly, $\bar 1$ is the unit of the semi-group $\bar k$. The theory of determinants over a skew-field is based on the following theorem: There exists a unique mapping $$\delta:\textrm{M}_n(k)\to \bar k$$ satisfying the following three axioms: I) if the matrix $B$ is obtained from the matrix $A$ by multiplying one row from the left by $\l \in k$, then $\delta(B) = \bar\l \delta(A)$; II) if $B$ is obtained from $A$ by replacing a row $a_i$ by a row $a_i+a_j$, where $i\ne j$, then $\delta(B)=\delta(A)$; III) $\delta(E_n)=\bar1$. The element $\delta(A)$ is called the determinant of $A$ and is written as $\det A$. For a commutative skew-field, axioms I), II) and III) coincide with conditions $1'$), 2) and 3), respectively, and, consequently, in this instance ordinary determinants over a field are obtained. If $A=\textrm{diag}(a_{11},\dots,a_{nn})$, then $\det A = a_{11}\cdots a_{nn}$; thus, the mapping $\delta:\textrm{M}_n(k)\to \bar k$ is surjective. A matrix $A$ from $\textrm{M}_n(k)$ is invertible if and only if $\det A \ne 0$. The equation $\det AB = (\det A)(\det B)$ holds.
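The quaternion example quoted below, together with the $2\times2$ formula $\det\begin{pmatrix}a&b\\c&d\end{pmatrix}=\overline{ad-aca^{-1}b}$ for $a\ne0$, can be checked with explicit quaternion arithmetic. A Python sketch (hand-rolled quaternion product; we compute representatives in $k^*$ rather than classes in $\overline{k^*}$):

```python
# Quaternions as (w, x, y, z) tuples meaning w + x*i + y*j + z*k.
def qmul(p, q):
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qinv(q):
    w, x, y, z = q
    n = w*w + x*x + y*y + z*z
    return (w/n, -x/n, -y/n, -z/n)

def qsub(p, q):
    return tuple(a - b for a, b in zip(p, q))

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
neg1 = (-1, 0, 0, 0)

def det2(a, b, c, d):
    # Representative of the Dieudonne determinant of [[a, b], [c, d]]
    # for a != 0: ad - a c a^{-1} b (its class mod commutators is det).
    return qsub(qmul(a, d), qmul(qmul(qmul(a, c), qinv(a)), b))

# A = [[i, j], [k, -1]]  gives  det A = -2i ...
assert det2(i, j, k, neg1) == (0.0, -2.0, 0.0, 0.0)
# ... while the transpose [[i, k], [j, -1]] gives det A^t = 0.
assert det2(i, k, j, neg1) == (0.0, 0.0, 0.0, 0.0)
```

This reproduces the non-commutative phenomenon stated in the article: $\det A^t$ need not equal $\det A$ over a skew-field.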
As in the commutative case, $\det A$ will not change if a row $a_i$ of $A$ is replaced by a row $a_i+\l a_j$, where $i\ne j$, $\l\in k$. If $n>1$, $\det A = \bar1$ if and only if $A$ is the product of elementary matrices of the form $x_{ij}(\l) = E_n+\l e_{ij}$, $i\ne j$, $\l\in k$. If $a\ne 0$, then $$\begin{vmatrix}a&b\\c&d\end{vmatrix} = \overline{ad-aca^{-1}b},\quad \begin{vmatrix}0&b\\c&d\end{vmatrix} = -\overline{cd}. $$ Unlike the commutative case, $\det A^t$ does not have to coincide with $\det A$. For example, for the matrix $$A=\begin{pmatrix}i&j\\k&-1\end{pmatrix}$$ over the skew-field of quaternions (cf. Quaternion), $\det A = -\overline{2i}$, while $\det A^t = \bar0$. Infinite determinants, i.e. determinants of infinite matrices, are defined as the limit of the determinants of finite submatrices as their order grows without bound. If this limit exists, the determinant is called convergent; otherwise it is called divergent. The concept of a determinant goes back to G. Leibniz (1678). H. Cramer was the first to publish on the subject (1750). The theory of determinants is based on the work of A. Vandermonde, P. Laplace, A.L. Cauchy and C.G.J. Jacobi. The term "determinant" was first coined by C.F. Gauss (1801). The modern meaning was introduced by A. Cayley (1841). How to Cite This Entry: Determinant. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Determinant&oldid=39861
Updated 07.11 We can choose the model to discuss the problem, so let us choose: Model: Newtonian mechanics/Newtonian gravity, with the Universe filled with uniformly dense matter, interacting only gravitationally (in cosmology this is called "dust matter"), and at the initial time of our spaceship journey all this matter is at rest. Hence my spaceship should start accelerating toward ×. By choosing the sphere large enough, I should be able to make it accelerate arbitrarily fast, and by choosing the location of × I can make it accelerate in any direction. Absolutely! Of course this doesn't work, but why? It does work. If we assume that initially the spaceship was at rest together with the whole universe, it will reach the point × in the time needed for the ship to fall into a point mass equal to the mass of the pink sphere. The problem is that by that time all of the pink sphere also falls toward that same point, as do all other colored spheres and the rest of the universe. If our astronaut checks her distance to the point × before the spaceship falls into it, she would notice that this distance has decreased, but at the same time, if she checks her surroundings, she would notice that the spaceship is surrounded by precisely the same matter particles as when the journey started, only closer to each other and to the spaceship. This distance contraction is simply a Newtonian version of the Big Crunch event. If the universe is filled with matter interacting only gravitationally and we assume that the density of matter will stay uniform throughout the universe, then the only conclusion is that such a universe is not static. It has either a (Newtonian version of the) Big Bang in its past or a Big Crunch in its future (or, in our model, since we chose the initial moment as a turning point from expansion to contraction, it has both). It may seem that the whole Universe falling toward our chosen point × is an absurdity, since we have chosen this point arbitrarily.
But in this situation there is no paradox: the acceleration of all matter toward this point is due to the fact that in our setup there is no "absolute space", no set of outside stationary inertial observers which could give us absolute accelerations; instead we can only choose a reference point × (or rather specify an observer located at this point and at rest with respect to surrounding matter) and calculate relative accelerations toward this point. Recall that the first law of Newtonian mechanics states that every particle continues in its state of rest or uniform motion in a straight line unless it is acted upon by some exterior force. For an isolated system, for example a collection of gravitating objects of finite total mass, we could (at least in principle) place an observer at rest so far away that it could be considered an inertial object. This would allow us to define a reference frame with respect to which we would measure accelerations. But in our Newtonian cosmology matter fills the whole Universe; there is no observer on which gravity is not acting, so there is no set of reference frames defined by observers "at infinity", only observers inside the matter concentrations that are affected by the gravitational forces. While there are no absolute accelerations, the relative positions ($\mathbf{d}_{AB}(t)= \mathbf{x}_A(t)-\mathbf{x}_B(t)$ between objects $A$ and $B$ comoving with the matter of the universe) do have a meaning independent of the choice of reference point. These relative positions, relative velocities ($\dot{\mathbf{d}}_{AB}$), relative accelerations, etc. constitute the set of unambiguously defined quantities measurable within our universe. then my intuition tells me that I can just choose a sufficiently static universe.
This intuition is wrong: if there is a gravitational force that would accelerate your spaceship toward ×, then it would also be acting on nearby matter (call them dust particles or planets or stars), producing the same acceleration, so all of the universe would be falling toward ×. Note on Newtonian cosmology It may seem that the Newtonian theory of gravitation is ill suited to handle homogeneous spatially infinite distributions of matter. But one can try to separate the physics of the situation from the deficiencies of a particular formalism and possibly overcome them. As a motivation we could note that over large, cosmological distances our universe to a high degree of accuracy could be considered spatially flat, and the velocities of most massive objects relative to each other and to the frame of the CMB are very small compared with the speed of light, meaning that a Newtonian approximation may be appropriate. While we do know that general relativity provides a better description of gravitation, Newtonian gravity is computationally and conceptually much simpler. This suggests that it is worthwhile to "fix" whatever problems one encounters while attempting to formalize cosmological solutions of Newtonian gravity. The most natural approach is to "geometrize" Newtonian gravity and, instead of a "force", consider it part of the geometry, a dynamical connection representing gravity and inertia. This is done within the framework of Newton–Cartan theory. As a more detailed reference, with an emphasis on cosmology, see this paper (knowledge of general relativity is required): Newton–Cartan theory underscores conceptual similarities between Newtonian gravity and general relativity, with the Galilei group replacing the Lorentz group of GR.
The general approach is coordinate-free and is closely related to the machinery of general relativity, but a specific choice of local Galilei coordinates would produce the usual equation for the acceleration ($\mathop{\mathrm{div}} \mathbf{g} = - 4\pi G \rho$), with the gravitational acceleration now being part of the Newtonian connection. Homogeneous and isotropic cosmological solutions are straightforward lifts of FLRW cosmologies. While the equations are the same, we may already answer some conceptual questions. Since the gravitational acceleration is part of the connection, there is no reason to expect it to be an "absolute" object; there would be gauge transformations that would alter it. We can have multiple charts on which we define the physics, with the normally defined transition maps between them. We can have a closed FRW cosmology: the "space" does not have to be a Euclidean space, it could be the torus $T^3$ (the field equations require only that space is locally flat). Since the spatial volume of a closed universe varies, and tends to zero as the universe approaches the Big Crunch, this asserts that not just matter but space itself collapses during the Big Crunch (to answer one of the comments). It is quite simple to include the cosmological constant / dark energy, thus making the models more realistic. Note on answer by user105620: One can formulate a regularization procedure by introducing a window function $W(\epsilon,x_0)$ that makes the potential well behaved. This provides us with another way to "fix" the problems of our cosmological model. The acceleration of our spaceship computed with this regularization is indeed dependent on the choice of $x_0$ in the limit $\epsilon\to 0$, which is a consequence of the same freedom in choosing the reference point ×. But he/she just should not have stopped there. Divergences requiring the use of regulators and ambiguities remaining after regularization are quite normal features in developing physical models.
The next step would be identifying the physically meaningful quantities and checking that those are independent of the regulator artifacts. In our case neither the potential $\Phi$ nor the gravitational acceleration $\mathbf{g}$ is directly observable in this model. Relative positions, relative velocities and relative accelerations are observable, and those turn out to be independent of the regulator parameter $x_0$.
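The point that only relative accelerations are well defined can be illustrated in a few lines. For a uniform density $\rho$, integrating over spheres centered on an arbitrary reference point $\mathbf{x}_0$ gives the field $\mathbf{g}(\mathbf{x}) = -\frac{4\pi}{3}G\rho\,(\mathbf{x}-\mathbf{x}_0)$. A Python sketch (the density and positions are arbitrary illustrative numbers) showing that $\mathbf{g}$ itself depends on the choice of $\mathbf{x}_0$, while the relative acceleration $\ddot{\mathbf{d}}_{AB}$ does not:

```python
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
rho = 1.0e-26        # kg/m^3, an assumed cosmological-scale density

def g(x, x0):
    """Newtonian field at x in a uniform-density universe, computed
    relative to the chosen reference point x0 (the point "x" above)."""
    return -(4.0 * np.pi / 3.0) * G * rho * (x - x0)

A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0e22, 0.0, 0.0])           # a comoving neighbor of A
x0_1 = np.array([3.0e22, 0.0, 0.0])        # one choice of reference point
x0_2 = np.array([-7.0e22, 5.0e22, 0.0])    # another choice

# The "absolute" acceleration of A depends on the reference point...
assert not np.allclose(g(A, x0_1), g(A, x0_2), atol=0.0)

# ...but the relative acceleration g(A) - g(B) does not:
rel_1 = g(A, x0_1) - g(B, x0_1)
rel_2 = g(A, x0_2) - g(B, x0_2)
assert np.allclose(rel_1, rel_2, rtol=1e-9, atol=0.0)
assert np.allclose(rel_1, -(4 * np.pi / 3) * G * rho * (A - B), rtol=1e-9, atol=0.0)
```

The $x_0$-dependence cancels in any difference $\mathbf{g}(\mathbf{x}_A)-\mathbf{g}(\mathbf{x}_B)$, which is the Newtonian analogue of the regulator-independence discussed above.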
Consider two independent discrete random variables $y_1$ and $y_2$, both distributed with a Beta-Binomial distribution, with different numbers of trials $n_1$ and $n_2$ but the same parameters $a$ and $b$ $ p(y_1|n_1,a,b) = {n_1 \choose y_1} \dfrac{B(y_1 + a,n_1 -y_1 +b)}{B(a,b)} $ $ p(y_2|n_2,a,b) = {n_2 \choose y_2} \dfrac{B(y_2 + a,n_2 -y_2 +b)}{B(a,b)} $ Consider a discrete variable $Z = y_1 + y_2$. Is $Z$ also distributed as a Beta-Binomial (with parameters $n_1+n_2$, $a'$ and $b'$)? I could not prove it in analytic form so far, but I have been trying it out with some simulations, at least to check whether the assumption is wrong in some cases. Reparametrising $a$ and $b$ as $\mu = \dfrac{a}{a+b}$ and $\rho = \dfrac{1}{a+b+1}$, $y_1$ and $y_2$ have means $\mu n_1$ and $\mu n_2$ respectively and variances $\mu(1-\mu)n_1(1 + (n_1-1)\rho)$ and $\mu(1-\mu)n_2(1 + (n_2-1)\rho)$ respectively. By independence, the mean of $Z$ is $\mu(n_1+n_2)$ and the variance of $Z$ is $\mu(1-\mu)(n_1(1 + (n_1-1)\rho) + n_2(1 + (n_2-1)\rho))$.
If $Z$ were distributed according to a beta-binomial distribution, then it would have parameters $\mu'$ and $\rho'$, with $\mu'= \mu$ and $\rho' = \dfrac{\dfrac{n_1(1 + (n_1-1)\rho) + n_2(1 + (n_2-1)\rho)}{n_1+n_2} - 1}{n_1+n_2-1} = \rho \dfrac{n_1(n_1-1)+n_2(n_2-1)}{(n_1+n_2)(n_1+n_2 -1)} $ Here is some code to generate $Z$ as a sum of two independent Beta-Binomials (sorry about the code, R is not my main language):

n1 = 20
n2 = 50
mu = .6
k = 20
p1 = rbeta(1e6, mu*k, (1-mu)*k)
y1 = rbinom(1e6, n1, p1)
p2 = rbeta(1e6, mu*k, (1-mu)*k)
y2 = rbinom(1e6, n2, p2)
z = y1 + y2
rho = 1/(k+1)
rho1 = rho*(n1*(n1-1) + n2*(n2-1))/((n1+n2)*(n1+n2-1))
k1 = 1/rho1 - 1
p3 = rbeta(1e6, mu*k1, (1-mu)*k1)
z1 = rbinom(1e6, n1+n2, p3)
print(c(var(z), var(z1)))
plot(density(z, width = 3))
lines(density(z1, width = 3))

I have been trying this code for different values of $n_1$, $n_2$, $\mu$ and $k$, but in all the cases the variances using the sum of two beta-binomials or an appropriately tuned beta-binomial are very similar (and the densities look indistinguishable).
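The simulation can be complemented by an exact check: convolve the two PMFs and compare with the moment-matched Beta-Binomial. A Python sketch (SciPy's `betabinom`, same parameter values as the R snippet): the mean and variance agree by construction, but the exact PMFs differ slightly, so $Z$ is close to, yet not exactly, Beta-Binomial.

```python
import numpy as np
from scipy.stats import betabinom

n1, n2, mu, k = 20, 50, 0.6, 20
a, b = mu * k, (1 - mu) * k

# Exact pmf of Z = y1 + y2 by discrete convolution of the two pmfs.
p1 = betabinom.pmf(np.arange(n1 + 1), n1, a, b)
p2 = betabinom.pmf(np.arange(n2 + 1), n2, a, b)
pz = np.convolve(p1, p2)                     # support 0 .. n1+n2

# Moment-matched Beta-Binomial candidate (the rho' formula above).
rho = 1 / (k + 1)
rho1 = rho * (n1*(n1-1) + n2*(n2-1)) / ((n1+n2) * (n1+n2-1))
k1 = 1 / rho1 - 1
pfit = betabinom.pmf(np.arange(n1 + n2 + 1), n1 + n2, mu * k1, (1 - mu) * k1)

s = np.arange(n1 + n2 + 1)
mean_z, mean_fit = (s * pz).sum(), (s * pfit).sum()
var_z = (s**2 * pz).sum() - mean_z**2
var_fit = (s**2 * pfit).sum() - mean_fit**2

assert np.isclose(mean_z, mean_fit)          # both equal mu*(n1+n2) = 42
assert np.isclose(var_z, var_fit)            # matched by construction
assert np.abs(pz - pfit).max() > 1e-8        # ...but the pmfs are not identical
```

This explains why the simulated densities look indistinguishable: the first two moments match exactly, and the remaining discrepancy is small.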
Flux quanta, magnetic field lines, merging – some sub-microscale relations of interest in space plasma physics Treumann, R. A., R. Nakamura, and W. Baumjohann (2011), Flux quanta, magnetic field lines, merging – some sub-microscale relations of interest in space plasma physics, Annales Geophysicae, 29, 1121-1127, doi:10.5194/angeo-29-1121-2011. Abstract. We clarify the notion of magnetic field lines in plasma by referring to sub-microscale (quantum mechanical) particle dynamics. It is demonstrated that magnetic field lines in a field of strength $B$ carry single magnetic flux quanta $\Phi_0 = h/e$. The radius of a field line in the given magnetic field $B$ is calculated. It is shown that such field lines can merge and annihilate only over the length $\ell$ of their strictly anti-parallel sections, for which case we estimate the power generated. The length $\ell$ becomes a function of the inclination angle $\theta$ of the two merging magnetic flux tubes (field lines). Merging is possible only in the interval $\frac{1}{2}\pi < \theta < \pi$. This provides a sub-microscopic basis for "component reconnection" in classical macro-scale reconnection. We also find that the magnetic diffusion coefficient in plasma appears in quanta $D^m_0 = e\Phi_0/m_e = h/m_e$. This lets us conclude that the bulk perpendicular plasma resistivity is limited and cannot be less than $\eta_{0\perp} = \mu_0 e\Phi_0/m_e = \mu_0 h/m_e \approx 10^{-9}$ Ohm m. This resistance is an invariant. Keywords. Space plasma physics (Magnetic reconnection)
Last time we studied meets and joins of partitions. We observed an interesting difference between the two. Suppose we have partitions \(P\) and \(Q\) of a set \(X\). To figure out if two elements \(x , x' \in X\) are in the same part of the meet \(P \wedge Q\), it's enough to know if they're in the same part of \(P\) and the same part of \(Q\), since $$ x \sim_{P \wedge Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ and } x \sim_Q x'. $$ Here \(x \sim_P x'\) means that \(x\) and \(x'\) are in the same part of \(P\), and so on. However, this does not work for the join! $$ \textbf{THIS IS FALSE: } \; x \sim_{P \vee Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ or } x \sim_Q x' . $$ To understand this better, the key is to think about the "inclusion" $$ i : \{x,x'\} \to X , $$ that is, the function sending \(x\) and \(x'\) to themselves thought of as elements of \(X\). We'll soon see that any partition \(P\) of \(X\) can be "pulled back" to a partition \(i^{\ast}(P)\) on the little set \( \{x,x'\} \). And we'll see that our observation can be restated as follows: $$ i^{\ast}(P \wedge Q) = i^{\ast}(P) \wedge i^{\ast}(Q) $$ but $$ \textbf{THIS IS FALSE: } \; i^{\ast}(P \vee Q) = i^{\ast}(P) \vee i^{\ast}(Q) . $$ This is just a slicker way of saying the exact same thing. But it will turn out to be more illuminating! So how do we "pull back" a partition? Suppose we have any function \(f : X \to Y\). Given any partition \(P\) of \(Y\), we can "pull it back" along \(f\) and get a partition of \(X\) which we call \(f^{\ast}(P)\). Here's an example from the book: For any part \(S\) of \(P\) we can form the set of all elements of \(X\) that map to \(S\). This set is just the preimage of \(S\) under \(f\), which we met in Lecture 9. We called it $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S \}. $$ As long as this set is nonempty, we include it in our partition \(f^{\ast}(P)\).
So beware: we are now using the symbol \(f^{\ast}\) in two ways: for the preimage of a subset and for the pullback of a partition. But these two ways fit together quite nicely, so it'll be okay. Summarizing: Definition. Given a function \(f : X \to Y\) and a partition \(P\) of \(Y\), define the pullback of \(P\) along \(f\) to be this partition of \(X\): $$ f^{\ast}(P) = \{ f^{\ast}(S) : \; S \in P \text{ and } f^{\ast}(S) \ne \emptyset \} . $$ Puzzle 40. Show that \( f^{\ast}(P) \) really is a partition using the fact that \(P\) is. It's fun to prove this using properties of the preimage map \( f^{\ast} : P(Y) \to P(X) \). It's easy to tell if two elements of \(X\) are in the same part of \(f^{\ast}(P)\): just map them to \(Y\) and see if they land in the same part of \(P\). In other words, $$ x\sim_{f^{\ast}(P)} x' \textrm{ if and only if } f(x) \sim_P f(x') $$ Now for the main point: Proposition. Given a function \(f : X \to Y\) and partitions \(P\) and \(Q\) of \(Y\), we always have $$ f^{\ast}(P \wedge Q) = f^{\ast}(P) \wedge f^{\ast}(Q) $$ but sometimes we have $$ f^{\ast}(P \vee Q) \ne f^{\ast}(P) \vee f^{\ast}(Q) . $$ Proof. To prove that $$ f^{\ast}(P \wedge Q) = f^{\ast}(P) \wedge f^{\ast}(Q) $$ it's enough to prove that they give the same equivalence relation on \(X\). That is, it's enough to show $$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P) \wedge f^{\ast}(Q) } x'. $$ This looks scary but we just follow our nose. First we rewrite the right-hand side using our observation about the meet of partitions: $$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P)} x' \textrm{ and } x\sim_{f^{\ast}(Q) } x'. $$ Then we rewrite everything using what we just saw about the pullback: $$ f(x) \sim_{P \wedge Q} f(x') \textrm{ if and only if } f(x) \sim_P f(x') \textrm{ and } f(x) \sim_Q f(x'). $$ And this is true, by our observation about the meet of partitions! 
So, we're really just stating that observation in a new language. To prove that sometimes $$ f^{\ast}(P \vee Q) \ne f^{\ast}(P) \vee f^{\ast}(Q) , $$ we just need one example. So, take \(P\) and \(Q\) to be these two partitions: They are partitions of the set $$ Y = \{11, 12, 13, 21, 22, 23 \}. $$ Take \(X = \{11,22\} \) and let \(i : X \to Y \) be the inclusion of \(X\) into \(Y\), meaning that $$ i(11) = 11, \quad i(22) = 22 . $$ Then compute everything! \(11\) and \(22\) are in different parts of \(i^{\ast}(P)\): $$ i^{\ast}(P) = \{ \{11\}, \{22\} \} . $$ They're also in different parts of \(i^{\ast}(Q)\): $$ i^{\ast}(Q) = \{ \{11\}, \{22\} \} .$$ Thus, we have $$ i^{\ast}(P) \vee i^{\ast}(Q) = \{ \{11\}, \{22\} \} . $$ On the other hand, the join \(P \vee Q \) has just two parts: $$ P \vee Q = \{\{11,12,13,22,23\},\{21\}\} . $$ If you don't see why, figure out the finest partition that's coarser than \(P\) and \(Q\) - that's \(P \vee Q \). Since \(11\) and \(22\) are in the same parts here, the pullback \(i^{\ast} (P \vee Q) \) has just one part: $$ i^{\ast}(P \vee Q) = \{ \{11, 22 \} \} . $$ So, we have $$ i^{\ast}(P \vee Q) \ne i^{\ast}(P) \vee i^{\ast}(Q) $$ as desired. \( \quad \blacksquare \) Now for the real punchline. The example we just saw was the same as our example of a "generative effect" in Lecture 12. So, we have a new way of thinking about generative effects: the pullback of partitions preserves meets, but it may not preserve joins! This is an interesting feature of the logic of partitions. Next time we'll understand it more deeply by pondering left and right adjoints. But to warm up, you should compare how meets and joins work in the logic of subsets: Puzzle 41. Let \(f : X \to Y \) and let \(f^{\ast} : PY \to PX \) be the function sending any subset of \(Y\) to its preimage in \(X\). Given \(S,T \in P(Y) \), is it always true that $$ f^{\ast}(S \wedge T) = f^{\ast}(S) \wedge f^{\ast}(T ) ? 
$$ Is it always true that $$ f^{\ast}(S \vee T) = f^{\ast}(S) \vee f^{\ast}(T ) ? $$
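The whole story above can also be played out in code. A Python sketch — the particular partitions \(P\) and \(Q\) below are a hypothetical pair chosen to reproduce the lecture's numbers, since the book's picture is not reproduced in this text:

```python
from itertools import product

def meet(P, Q):
    # Blocks of P ∧ Q are the nonempty pairwise intersections of blocks.
    return {S & T for S, T in product(P, Q) if S & T}

def join(P, Q):
    # Throw all blocks of P and Q together and merge any that overlap.
    blocks = [set(S) for S in P | Q]
    merged = True
    while merged:
        merged = False
        for a in range(len(blocks)):
            for b in range(a + 1, len(blocks)):
                if blocks[a] & blocks[b]:
                    blocks[a] |= blocks.pop(b)
                    merged = True
                    break
            if merged:
                break
    return {frozenset(s) for s in blocks}

def pullback(f, P, X):
    # f*(P): nonempty preimages of the blocks of P.
    return {frozenset(x for x in X if f(x) in S) for S in P} - {frozenset()}

Y = {11, 12, 13, 21, 22, 23}
P = {frozenset({11, 12}), frozenset({13, 23}), frozenset({22}), frozenset({21})}
Q = {frozenset({12, 13}), frozenset({22, 23}), frozenset({11}), frozenset({21})}
X = {11, 22}
i = lambda x: x                      # the inclusion X -> Y

# Pullback preserves meets...
assert pullback(i, meet(P, Q), X) == meet(pullback(i, P, X), pullback(i, Q, X))
# ...but not joins: the generative effect.
assert pullback(i, join(P, Q), X) == {frozenset({11, 22})}
assert join(pullback(i, P, X), pullback(i, Q, X)) == {frozenset({11}), frozenset({22})}
```

With these \(P\) and \(Q\) the join is \(\{\{11,12,13,22,23\},\{21\}\}\), exactly as in the proof, so \(i^{\ast}(P \vee Q)\) glues \(11\) and \(22\) together while \(i^{\ast}(P) \vee i^{\ast}(Q)\) keeps them apart.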
A lot of what is written in the question is not correct. The URCA process would be cycles of beta decay followed by inverse beta decay as follows. $$n \rightarrow p + e + \bar{\nu_e}$$$$ p + e \rightarrow n + \nu_e$$ The neutrinos escape from the star, carrying away energy. The URCA process is very important during the collapse to a neutron star state, during the initial cooling of a neutron star and possibly in the most extreme density cores of neutron stars at later times. The direct URCA process can be blocked by degeneracy. In the neutron fluid, there must be a small number (of order 1 part in 100) of protons and electrons, because neutrons are unstable and undergo beta decay. However, once the electron Fermi energy reaches the maximum possible decay energy of a beta electron, the reaction ceases because there are no free energy states for the electron to occupy. Instead, inverse beta decay (or neutronisation) becomes possible, if the electron Fermi energy + proton Fermi energy equals or exceeds the neutron Fermi energy. An equilibrium is set up so that $$E_{F,n} = E_{F,p} + E_{F,e}.$$ OK, having got this you should then look at my answer to this question, where I explain why the direct URCA process is blocked unless densities are very high ($>8\times10^{17}$ kg/m$^3$), because it is impossible for the reactions to simultaneously conserve energy and momentum. At lower densities, the less efficient modified URCA process operates instead. This is similar to the URCA process but includes a bystander particle (a neutron) to enable simultaneous conservation of energy and momentum, at the expense of requiring an extra particle to be within $kT$ of its Fermi surface. I do not really understand what you are asking in the last part of your question. Muons are produced at high densities.
The higher density leads to higher neutron Fermi energies, and once this Fermi energy exceeds the Fermi energy of the protons plus the rest mass energy of the muons (105 MeV), then muons can emerge. Almost equivalently, one can demand that the Fermi energy of the electrons in the gas reaches 105 MeV, at which point muons can be created. It is the appearance of muons at $\sim 8\times10^{17}$ kg/m$^3$ that increases the proton fraction, reduces the difference between the neutron and proton Fermi momenta and makes the direct URCA process possible.
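The muon threshold density can be estimated with a back-of-the-envelope calculation: set the ultrarelativistic electron Fermi energy $E_F \simeq \hbar c\,(3\pi^2 n_e)^{1/3}$ equal to the muon rest mass energy and convert the required electron density into a total mass density. A Python sketch (the ~1% electron/proton fraction is an assumed round number from the answer above, not a computed equilibrium value):

```python
import numpy as np

hbar_c = 197.327     # MeV fm
m_mu = 105.66        # MeV, muon rest mass energy
m_n = 1.6749e-27     # kg, neutron mass
x_e = 0.01           # assumed electron (= proton) fraction per neutron

# Electron number density at which E_F = m_mu c^2 (ultrarelativistic gas):
# E_F = hbar c (3 pi^2 n_e)^(1/3)  =>  n_e = (E_F / hbar c)^3 / (3 pi^2)
n_e_fm3 = (m_mu / hbar_c) ** 3 / (3 * np.pi ** 2)   # per fm^3
n_e = n_e_fm3 * 1e45                                # per m^3

# Corresponding total mass density, dominated by the neutrons:
rho = (n_e / x_e) * m_n
print(f"rho ~ {rho:.1e} kg/m^3")
```

With these inputs the estimate comes out at a few $\times 10^{17}$ kg/m$^3$, consistent with the $\sim 8\times10^{17}$ kg/m$^3$ quoted in the answer.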
Neimark-Sacker bifurcation of a two-dimensional discrete-time predator-prey model SpringerPlus volume 5, Article number: 126 (2016) Abstract In this paper, we study the dynamics and bifurcation of a two-dimensional discrete-time predator-prey model in the closed first quadrant \(\mathbb {R}_+^2\). The existence and local stability of the unique positive equilibrium of the model are analyzed algebraically. It is shown that the model can undergo a Neimark-Sacker bifurcation in a small neighborhood of the unique positive equilibrium and an invariant circle will appear. Some numerical simulations are presented to illustrate our theoretical results, and numerically it is shown that the unique positive equilibrium of the system is globally asymptotically stable. Background As is well known, in the theory of population dynamical models there are two kinds of mathematical models: the continuous-time models described by differential equations, and the discrete-time models described by difference equations. In recent years more and more attention is being paid to discrete-time population models. The reasons are as follows: First, the discrete-time models are more appropriate than the continuous-time models when populations have non-overlapping generations, or the population size is small. Second, we can get more accurate numerical simulation results from discrete-time models. Moreover, the numerical simulations of continuous-time models are obtained by discretizing the models. Finally, the discrete-time models have rich dynamical behaviors as compared to continuous-time models. Predator-prey models have already received much attention during the last few years. For example, the stability, permanence and the existence of periodic solutions of predator-prey models are studied in Fazly (2007), Hu and Zhang (2008), Liu (2010), Xia et al. (2007) and Yang and Li (2009).
The study of the dynamical behavior of discrete systems usually focuses on boundedness and persistence, existence and uniqueness of equilibria, periodicity, and their local and global stability (see, for example, Khan and Qureshi 2014a, b, 2015a, b, c; Kalabuŝić et al. 2009; Khan 2014; Ibrahim and El-Moneam 2015; Kalabuŝić et al. 2011; Elsayed and Ibrahim 2015a, b; Garić-Demirović et al. 2009; Qureshi and Khan 2015; Kalabuŝić et al. 2011; Ibrahim 2014; Ibrahim and Touafek 2014), but there are many articles that discuss the dynamical behavior of discrete-time models for exploring the possibility of bifurcation and chaos phenomena (Hu et al. 2011; Sen et al. 2012; Chen and Changming 2008; Gakkhar and Singh 2012; Jing and Yang 2006; Zhang et al. 2010; Smith 1968). We consider the following discrete predator-prey model described by difference equations, which was proposed by Smith (1968): where \(x_n\) and \(y_n\) denote the numbers of prey and predators respectively. Moreover the parameters \(\alpha ,\ \beta \) and the initial conditions \( x_0,\ y_0\) are positive real numbers. The organization of the paper is as follows: In Sect. “Existence of equilibria and local stability”, we discuss the existence and local stability of equilibria for system (1) in \(\mathbb {R}_+^2\). This also includes the specific parametric condition for the existence of a Neimark-Sacker bifurcation of system (1). In Sect. “Neimark-Sacker bifurcation”, we study the Neimark-Sacker bifurcation by choosing \(\alpha \) as a bifurcation parameter. In Sect. “Numerical simulations”, numerical simulations are presented to verify the theoretical discussion. Finally a brief conclusion is given in Sect. “Conclusion”. Existence of equilibria and local stability In this section, we will study the existence and stability of equilibria of system (1) in the closed first quadrant \(\mathbb {R}^2_{+}\).
So, we can summarize the results about the existence of equilibria of system (1) as follows: Lemma 2.1 (i) System (1) has a unique equilibrium O(0, 0) if \(\alpha <\frac{1}{1-\beta }\) and \(\beta <1\); (ii) System (1) has two equilibria O(0, 0) and \(A\left( \beta ,\alpha (1-\beta )-1\right) \) if \(\alpha >\frac{1}{1-\beta }\) and \(\beta <1\). More precisely, system (1) has a unique positive equilibrium \(A\left( \beta ,\alpha (1-\beta )-1\right) \) if \(\alpha >\frac{1}{1-\beta }\) and \(\beta <1\). The characteristic equation of the Jacobian matrix J of the linearized system of (1) about the unique positive equilibrium \(A\left( \beta ,\alpha (1-\beta )-1\right) \) is given by where Moreover the eigenvalues of the Jacobian matrix of the linearized system of (1) about the unique positive equilibrium \(A\left( \beta ,\alpha (1-\beta )-1\right) \) are given by where Now we will state the topological classification of these equilibria as follows: Lemma 2.2 (i) For the equilibrium point O(0, 0), the following topological classification holds: (i.1) O(0, 0) is a sink if \(\alpha <1\); (i.2) O(0, 0) is a saddle if \(\alpha >1\); (i.3) O(0, 0) is non-hyperbolic if \(\alpha =1\).
Lemma 2.3 For the unique positive equilibrium \(A\left( \beta ,\alpha (1-\beta )-1\right) \) of system (1), the following topological classification holds: (i) \(A\left( \beta ,\alpha (1-\beta )-1\right) \) is a sink if one of the following parametric conditions holds: (i.1) \(\Delta \ge 0\) and \(0<\alpha <\frac{1}{1-\beta }\); (i.2) \(\Delta <0\) and \(0<\alpha <\frac{1}{1-2\beta }\); (ii) \(A\left( \beta ,\alpha (1-\beta )-1\right) \) is a source if one of the following parametric conditions holds: (ii.1) \(\Delta \ge 0\) and \(\alpha >\frac{1}{1-\beta }\); (ii.2) \(\Delta <0\) and \(\alpha >\frac{1}{1-2\beta }\); (iii) \(A\left( \beta ,\alpha (1-\beta )-1\right) \) is non-hyperbolic if one of the following parametric conditions holds: (iii.1) \(\Delta \ge 0\) and \(\alpha =\frac{1}{1-\beta }\); (iii.2) \(\Delta <0\) and \(\alpha =\frac{1}{1-2\beta }\). Theorem 2.4 (i) If \(\alpha <\frac{1}{1-\beta }\) and \(\beta <1\), then system (1) has a unique equilibrium O(0, 0), which is locally asymptotically stable; (ii) If \(\alpha >\frac{1}{1-\beta }\) and \(\beta <1\), then system (1) has two equilibria O(0, 0) and \(A\left( \beta ,\alpha (1-\beta )-1\right) \), in which \(A\left( \beta ,\alpha (1-\beta )-1\right) \) is locally asymptotically stable. In the following section, we study the Neimark-Sacker bifurcation about the unique positive equilibrium \(A\left( \beta ,\alpha (1-\beta )-1\right) \) by using bifurcation theory (Guckenheimer and Holmes 1983; Kuznetson 2004). Neimark-Sacker bifurcation From Lemma 2.3, it is established that \(A\left( \beta ,\alpha (1-\beta )-1\right) \) is non-hyperbolic when \(\alpha =\frac{1}{1-2\beta }\). Henceforth, we choose \(\alpha \) as a bifurcation parameter to study the Neimark-Sacker bifurcation of system (1) in a small neighborhood of \(A\left( \beta ,\alpha (1-\beta )-1\right) \). 
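Under the same assumed Smith-type reconstruction of system (1) (\(x_{n+1}=\alpha x_n(1-x_n)-x_ny_n\), \(y_{n+1}=x_ny_n/\beta\); the printed equations are missing, so this is a hypothesis, not the paper's verbatim model), the Jacobian at \(A\) has trace \(2-\alpha\beta\) and determinant \(\alpha(1-2\beta)\), so the threshold \(\alpha = 1/(1-2\beta)\) of condition (iii.2) is exactly where the complex-conjugate eigenvalues reach the unit circle. A sketch verifying this numerically:

```python
import numpy as np

# Jacobian of the ASSUMED map x' = a*x*(1-x) - x*y, y' = x*y/b,
# evaluated at the positive equilibrium A(b, a*(1-b)-1).
def jacobian_at_A(a, b):
    return np.array([[1.0 - a * b,               -b],
                     [(a * (1.0 - b) - 1.0) / b,  1.0]])

b = 0.23
a = 1.0 / (1.0 - 2.0 * b)       # = 1.85185..., the non-hyperbolic value
J = jacobian_at_A(a, b)
eigvals = np.linalg.eigvals(J)

print(np.linalg.det(J))         # ~1: det = alpha*(1-2*beta) = 1 at the threshold
print(np.abs(eigvals))          # both moduli ~1: eigenvalues on the unit circle
```

For \(\alpha\) slightly below this value the determinant drops below 1 and both moduli fall inside the unit circle, consistent with the sink condition (i.2).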
For simplicity, we denote the parameters satisfying the non-hyperbolic condition by Consider system (1) with arbitrary parameters \((\alpha _1,\beta _1)\in H_A\), which is described as follows: It is clear that if \(\alpha _1>\frac{1}{1-\beta _1}\) and \(\beta _1<1\), then system (4) has the unique positive equilibrium \(A\left( \beta _1,\alpha _1(1-\beta _1)-1\right) \). Consider the following perturbation of model (4): where \(|\alpha ^*|\ll 1\) is a small perturbation parameter. The characteristic equation of the Jacobian matrix of the linearized system of (5) about the unique positive equilibrium \(A\left( \beta _1,\alpha _1(1-\beta _1)-1\right) \) is given by where Moreover, when \(\alpha ^*\) varies in a small neighborhood of 0, the roots of the characteristic equation are and therefore we have Further calculation shows that \(\lambda _{1,2}^k\ne 1\) for \(\alpha _1=\frac{1}{1-2\beta _1}\) and \(k=1,2,3,4\). Now, let \(u_n=x_n-\beta _1, v_n=y_n-\alpha _1(1-\beta _1)+1\); this transforms the equilibrium \(A\left( \beta _1,\alpha _1(1-\beta _1)-1\right) \) of system (5) into the origin. By calculating we obtain where Now, let and then T is invertible. Using the translation then system (7) takes the following form: where and Furthermore, and where After calculating, we get Analyzing the above together with the Neimark-Sacker bifurcation conditions discussed in Guckenheimer and Holmes (1983), we obtain the following theorem: Theorem 3.1 If condition (10) holds, i.e., \(\Omega \not =0\), and the parameter \(\alpha ^*\) varies in a small neighborhood of 0, then system (4) passes through a Neimark-Sacker bifurcation at the unique positive equilibrium \(A\left( \beta _1,\alpha _1(1-\beta _1)-1\right) \). Moreover, if \(\Omega <0\) (respectively \(\Omega >0\)), then an attracting (respectively repelling) invariant closed curve bifurcates from the equilibrium \(A\left( \beta ,\alpha (1-\beta )-1\right) \) for \(\alpha ^*>0\) (respectively \(\alpha ^*<0\)). 
Numerical simulations In this section, we give some numerical simulations for system (1) to support our theoretical results. If we choose \(\beta =0.23\), then from the non-hyperbolic condition (iii.2) of Lemma 2.3, the value of the bifurcation parameter is \(\alpha =1.85185\). From a theoretical point of view, the unique positive equilibrium is stable if \(\alpha <1.85185\), loses its stability at \(\alpha =1.85185\), and an attracting invariant closed curve appears from the positive equilibrium when \(\alpha >1.85185\). From subfigures a and b of Fig. 1 it is clear that if \(\alpha =1.48<1.85185\), then the unique positive equilibrium is locally stable, and corresponding to Fig. 1a, b one can easily see from Fig. 2a that it is an attractor. So, Fig. 1 shows the local stability of system (1), whereas Fig. 2 shows that the unique positive equilibrium of system (1) is globally asymptotically stable. Figure 3 shows that for different choices of parameters with \(\alpha >1.85185\), the unique positive equilibrium is unstable and meanwhile an attracting invariant closed curve bifurcates from the positive equilibrium, as in Fig. 3a–i. Conclusion This work is related to the stability and bifurcation analysis of a discrete predator-prey model. We proved that system (1) has two equilibria, namely (0, 0) and \(A\left( \beta ,\alpha (1-\beta )-1\right) \). Moreover, simple algebra shows that if \(\alpha >\frac{1}{1-\beta },\ \beta <1\), then system (1) has a unique positive equilibrium \(A\left( \beta ,\alpha (1-\beta )-1\right) \). The method of linearization is used to prove the local asymptotic stability of the equilibria. Linear stability analysis shows that O(0, 0) is a sink if \(\alpha <1\), a saddle if \(\alpha >1\), and non-hyperbolic if \(\alpha =1\). 
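The stability claim illustrated by Figs. 1 and 2 (\(\beta = 0.23\), \(\alpha = 1.48 < 1.85185\)) can be reproduced in a few lines, again assuming a Smith-type form \(x_{n+1}=\alpha x_n(1-x_n)-x_ny_n\), \(y_{n+1}=x_ny_n/\beta\) for system (1), since the printed map did not survive extraction:

```python
# Assumed reconstruction of system (1); the paper's verbatim equations are missing.
a, b = 1.48, 0.23
xA, yA = b, a * (1.0 - b) - 1.0        # A = (0.23, 0.1396)

x, y = 0.24, 0.14                       # initial condition near A
for _ in range(500):
    x, y = a * x * (1.0 - x) - x * y, x * y / b

# With det J = a*(1-2b) = 0.7992 < 1 the spiral contracts by ~0.894 per step,
# so 500 iterations should land essentially on A.
print(abs(x - xA) < 1e-6, abs(y - yA) < 1e-6)
```

Replacing \(\alpha\) with a value above 1.85185 makes the orbit settle on a closed invariant curve instead of the point \(A\), matching Fig. 3.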
For the unique positive equilibrium \(A\left( \beta ,\alpha (1-\beta )-1\right) \), we obtained the different topological types for the possible parameter values and proved that it is locally asymptotically stable, and that under the condition \(\alpha =\frac{1}{1-2\beta }\) the eigenvalues of the Jacobian matrix are a pair of complex conjugates with modulus one. This means that there exists a Neimark-Sacker bifurcation when the parameters vary in a neighborhood of \(H_A\). We then presented the Neimark-Sacker bifurcation for the unique positive equilibrium point \(A\left( \beta ,\alpha (1-\beta )-1\right) \) of system (1) by choosing \(\alpha \) as a bifurcation parameter. We analyzed the Neimark-Sacker bifurcation both from a theoretical point of view and through numerical simulations. These numerical examples are experimental verifications of the theoretical discussion. References Chen Y, Changming S (2008) Stability and Hopf bifurcation analysis in a prey-predator system with stage-structure for prey and time delay. Chaos Solitons Fractals 38(4):1104–1114 Elsayed EM, Ibrahim TF (2015a) Solutions and periodicity of a rational recursive sequences of order five. Bull Malays Math Sci Soc 38(1):95–112 Elsayed EM, Ibrahim TF (2015b) Periodicity and solutions for some systems of nonlinear rational difference equations. Hacettepe J Math Stat (To Appear) Fazly M, Hesaaraki M (2007) Periodic solutions for discrete time predator-prey system with monotone functional responses. C R Acad Sci Paris Ser I 345:199–202 Gakkhar S, Singh A (2012) Complex dynamics in a prey predator system with multiple delays. Commun Nonlinear Sci Numer Simul 17(2):914–929 Garić-Demirović M, Kulenović MRS, Nurkanović M (2009) Global behavior of four competitive rational systems of difference equations in the plane. Discrete Dyn Nat Soc. doi:10.1155/2009/153058 (Article ID 153058) Guckenheimer J, Holmes P (1983) Nonlinear oscillations, dynamical systems and bifurcation of vector fields. 
Springer, New York Hu D, Zhang Z (2008) Four positive periodic solutions of a discrete time delayed predator-prey system with nonmonotonic functional response and harvesting. Comp Math Appl 56:3015–3022 Hu Z, Teng Z, Zhang L (2011) Stability and bifurcation analysis of a discrete predator-prey model with nonmonotonic functional response. Nonlinear Anal Real World Appl 12(4):2356–2377 Ibrahim TF, Touafek N (2014) Max-type system of difference equations with positive two-periodic sequences. Math Method Appl Sci 37(16):2562–2569 Ibrahim TF, El-Moneam MA (2015) Global stability of a higher-order difference equation. Iranian J Sci Technol (in Press) Ibrahim TF (2014) Solving a class of three-order max-type difference equations. Dyn Cont Discrete Impuls Syst Ser A Math Anal 21:333–342 Jing Z, Yang J (2006) Bifurcation and chaos in discrete-time predator-prey system. Chaos Solitons Fractals 27(1):259–277 Kalabusić S, Kulenović MRS, Pilav E (2011) Dynamics of a two-dimensional system of rational difference equations of Leslie-Gower type. Adv Differ Equ. doi:10.1186/1687-1847-2011-29 Kalabusić S, Kulenović MRS, Pilav E (2011) Multiple attractors for a competitive system of rational difference equations in the plane. Abstr Appl Anal. doi:10.1155/2011/295308 Kalabusić S, Kulenović MRS, Pilav E (2009) Global dynamics of a competitive system of rational difference equations in the plane. Adv Differ Equ (Article ID 132802) Khan AQ, Qureshi MN (2015c) Stability analysis of a discrete biological model. Int J Biomath 9(2):1–19 Khan AQ (2014) Global dynamics of two systems of exponential difference equations by Lyapunov function. Adv Diff Equ 1:297 Khan AQ, Qureshi MN (2014b) Global dynamics of a competitive system of rational difference equations. Math Method Appl Sci. doi:10.1002/mma.3392 Khan AQ, Qureshi MN (2015a) Dynamics of a modified Nicholson-Bailey host-parasitoid model. 
Adv Differ Equ 1:15 Khan AQ, Qureshi MN (2014a) Behavior of an exponential system of difference equations. Disc Dyn Nat Soc. doi:10.1155/2014/607281 (Article ID 607281) Khan AQ, Qureshi MN (2015b) Qualitative behavior of two systems of higher-order difference equations. Math Meth Appl Sci. doi:10.1002/mma.3752 Kuznetson YA (2004) Elements of applied bifurcation theory, 3rd edn. Springer, New York Liu X (2010) A note on the existence of periodic solution in discrete predator-prey models. Appl Math Model 34:2477–2483 Qureshi MN, Khan AQ (2015) Local stability of an open-access anchovy fishery model. Comp Eco Soft 5(1):48–62 Sen M, Banerjee M, Morozov A (2012) Bifurcation analysis of a ratio-dependent prey-predator model with the Allee effect. Ecol Complex 11:12–27 Smith J (1968) Mathematical ideas in biology. Cambridge Press, Cambridge Xia Y, Cao J, Lin M (2007) Discrete-time analogues of predator-prey models with monotonic or nonmonotonic functional responses. Nonlinear Anal RWA 8:1079–1095 Yang W, Li X (2009) Permanence for a delayed discrete ratio-dependent predator-prey model with monotonic functional responses. Nonlinear Anal RWA 10:1068–1072 Zhang CH, Yan XP, Cui GH (2010) Hopf bifurcations in a predator-prey system with a discrete delay and a distributed delay. Nonlinear Anal Real World Appl 11(5):4141–4153 Acknowledgements The author thanks the main editor and referee for their valuable comments and suggestions leading to the improvement of this paper. This work was supported by the Higher Education Commission of Pakistan. Competing interests The author declares that he has no competing interests. Keywords: Predator-prey model; Stability; Neimark-Sacker bifurcation; Bifurcation theory. Mathematics Subject Classification: 37D45; 37G35; 39A30; 39A33
In real life, we come across many incidents where a change in one quantity leads to a change in another quantity. For example, if the speed of a car is increased, then the time taken to cover the same distance will decrease. Let's take one more example: if we want the construction of a building to finish sooner, we need to deploy more workers. So, the more workers there are, the less time it will take to complete the work. By now you must have understood that there is some relation between them, as the variation in one quantity brings about a variation in the other quantity. You will learn all these concepts in Chapter 10 of ICSE Class 8 Maths. For students' convenience, we have provided a detailed solution in PDF format for ICSE Class 8 Maths Selina Solutions Chapter 10 Direct and Inverse Variations. These Selina Solutions are prepared by experienced teachers in the simplest form. By going through them, students can easily get to know how the questions are solved. Also, these solutions will help students prepare for the annual exams. Click on the link below to download the Solution PDF. The detailed solutions of the questions of ICSE Class 8 Maths Chapter 10 Direct and Inverse Variations are provided below: ICSE Class 8 Maths Selina Solutions Chapter 10 – Direct and Inverse Variations Question 1. In which of the following tables do x and y vary directly? (i) x 3 5 8 11 y 4.5 7.5 12 16.5 Solution:- \(\frac{x_{1}}{y_{1}}=\frac{3}{4.5}=\frac{1}{1.5}\), \(\frac{x_{2}}{y_{2}}=\frac{5}{7.5}=\frac{1}{1.5}\), \(\frac{x_{3}}{y_{3}}=\frac{8}{12}=\frac{1}{1.5}\), \(\frac{x_{4}}{y_{4}}=\frac{11}{16.5}=\frac{1}{1.5}\) (forming the given data in fractional form) \(\Rightarrow \frac{x_{1}}{y_{1}}=\frac{x_{2}}{y_{2}}=\frac{x_{3}}{y_{3}}=\frac{x_{4}}{y_{4}}\) Yes, x and y vary directly. 
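The tabular check above — compute every ratio x/y and see whether they are all the same constant — is mechanical, so it is easy to script. A small sketch (using Python's `Fraction` to keep the ratios exact):

```python
from fractions import Fraction

def varies_directly(xs, ys):
    # x and y vary directly iff every ratio x/y equals the same constant
    ratios = [Fraction(x) / Fraction(y) for x, y in zip(xs, ys)]
    return len(set(ratios)) == 1

# Table (i) from Question 1: every ratio is 2/3 (i.e. 1/1.5), so the answer is Yes
print(varies_directly([3, 5, 8, 11],
                      [Fraction(9, 2), Fraction(15, 2), 12, Fraction(33, 2)]))  # True
```

The same function returns `False` for tables where any single ratio differs, which is exactly the reasoning used in parts (ii) and (iii).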
(ii) x 16 30 40 56 y 32 60 80 84 Solution:- \(\frac{x_{1}}{y_{1}}=\frac{16}{32}=\frac{1}{2}\), \(\frac{x_{2}}{y_{2}}=\frac{30}{60}=\frac{1}{2}\), \(\frac{x_{3}}{y_{3}}=\frac{40}{80}=\frac{1}{2}\), \(\frac{x_{4}}{y_{4}}=\frac{56}{84}=\frac{2}{3}\) (forming the given data in fractional form) \(\Rightarrow \frac{x_{1}}{y_{1}}=\frac{x_{2}}{y_{2}}=\frac{x_{3}}{y_{3}} \neq \frac{x_{4}}{y_{4}}\) So x and y are not in direct variation. (iii) x 27 45 54 75 y 81 180 216 225 Solution:- \(\frac{x_{1}}{y_{1}}=\frac{27}{81}=\frac{1}{3}\), \(\frac{x_{2}}{y_{2}}=\frac{45}{180}=\frac{1}{4}\), \(\frac{x_{3}}{y_{3}}=\frac{54}{216}=\frac{1}{4}\), \(\frac{x_{4}}{y_{4}}=\frac{75}{225}=\frac{1}{3}\) (forming the given data in fractional form) Since the ratios are not all equal, x and y are not in direct variation. Question 2. If x and y vary directly, find the values of x, y and z. x 3 x y 10 y 36 60 96 z Solution:- Since x and y are in direct variation, \(\frac{3}{36}=\frac{x}{60}=\frac{y}{96}=\frac{10}{z}\) (forming the given data in fractional form) \(\Rightarrow \frac{3}{36}=\frac{x}{60}, \frac{3}{36}=\frac{y}{96}, \frac{3}{36}=\frac{10}{z}\) \(x=\frac{3}{36} \times 60, \quad y=\frac{3}{36} \times 96, \quad z=10 \times \frac{36}{3}\) ⇒ x = 5, y = 8, z = 120 x 3 5 8 10 y 36 60 96 120 Question 3. A truck consumes 28 liters of diesel for moving through a distance of 448 km. How much distance will it cover in 64 liters of diesel? Solution:- Let the truck cover x km in 64 liters of diesel. Diesel (in liters) 28 64 Distance (in km) 448 x It is a case of direct variation (forming the given data in fractional form) \(\Rightarrow \frac{x_{1}}{y_{1}}=\frac{x_{2}}{y_{2}} \Rightarrow \frac{28}{448}=\frac{64}{x}\), i.e., \(28x=64 \times 448\), so \(x=\frac{64 \times 448}{28}=1024 \mathrm{~km}\) Question 4. 
For 100 km, a taxi charges ₹ 1,800. How much will it charge for a journey of 120 km? Solution:- Let the taxi charge ₹ x for 120 km. Distance (in km) 100 120 Taxi charges (₹) 1800 x Since it is a case of direct variation, \(\frac{1800}{100}=\frac{x}{120} \Rightarrow x=\frac{1800 \times 120}{100}= ₹\ 2,160\) Question 5. If 27 identical articles cost ₹ 1,890, how many articles can be bought for ₹ 1,750? Solution:- Let x articles be purchased for ₹ 1,750. Cost (₹) 1890 1750 No. of articles 27 x Since it is a case of direct variation, \(\frac{1890}{27}=\frac{1750}{x}\) \(\Rightarrow x=\frac{1750 \times 27}{1890}\) = 25 articles Question 6. 7 kg of rice costs Rs. 1,120. How much rice can be bought for Rs. 3,680? Solution:- Rice : Cost :: Rice : Cost 7 kg : 1120 :: x kg : 3680 \(∴ x=\frac{7 \times 3680}{1120}=23 \mathrm{~kg}\) Question 7. 6 note-books cost ₹ 156; find the cost of 54 such note-books. Solution:- Notebooks : Cost :: Notebooks : Cost 6 : Rs. 156 :: 54 : Rs. x \(∴ x=\frac{156 \times 54}{6}=Rs.\ 1404\) Question 8. 22 men can dig a 27 m long trench in one day. How many men should be employed for digging a 135 m long trench of the same type in one day? Solution:- Men : Length of trench :: Men : Length of trench 22 : 27 m :: x : 135 m (expressing in ratios) \(x=\frac{22 \times 135}{27}=110 \text{ men}\) Question 9. If the total weight of 11 identical articles is 77 kg, how many articles of the same type would weigh 224 kg? Solution:- No. of articles : Weight :: No. of articles : Weight 11 : 77 kg :: x : 224 kg \(x=\frac{11 \times 224}{77}=32 \text{ articles}\) Question 10. A train is moving with a uniform speed of 120 km per hour. (i) How far will it travel in 36 minutes? Solution:- Distance covered in 60 minutes = 120 km, so distance covered in 1 minute \(=\frac{120}{60}=2 \mathrm{~km}\). Distance covered in 36 minutes \(=2 \times 36=72 \mathrm{~km}\) (ii) In how much time will it cover 210 km? 
Solution:- If the distance covered is 120 km, then the time taken = 60 minutes. If the distance covered is 1 km, then the time taken \(=\frac{60}{120}\) minutes. If the distance covered is 210 km, then the time taken \(=\frac{60}{120} \times 210=105\) minutes = 1 hour 45 minutes. Question 1. Check whether x and y vary inversely or not. (i) x 4 3 12 1 y 6 8 2 24 Solution:- If x and y are inversely proportional, then the products xy are all equal. \(xy=4 \times 6=24\), \(xy=3 \times 8=24\), \(xy=12 \times 2=24\), \(xy=1 \times 24=24\) (using the data in the table) Since xy is equal in each case, x and y are inversely proportional. (ii) x 30 120 60 24 y 60 30 30 75 Solution:- \(xy=30 \times 60=1800\), \(xy=120 \times 30=3600\), \(xy=60 \times 30=1800\), \(xy=24 \times 75=1800\) Since xy is not equal in each case, x and y are not inversely proportional. Question 2. If x and y vary inversely, find the values of l, m and n: (i) x 4 8 2 32 y 4 l m n Solution:- ∵ x and y are inversely proportional, ∴ xy is constant. Now, \(xy=4 \times 4=16\) \(8 \times l=16 \Rightarrow l=\frac{16}{8}=2\) \(2 \times m=16 \Rightarrow m=\frac{16}{2}=8\) \(32 \times n=16 \Rightarrow n=\frac{16}{32}=0.5\) (ii) x 24 32 m 16 y l 12 8 n Solution:- ∵ x and y are inversely proportional, ∴ xy is constant. Now, \(xy=32 \times 12=384\) \(24 \times l=384 \Rightarrow l=\frac{384}{24}=16\) \(m \times 8=384 \Rightarrow m=\frac{384}{8}=48\) \(16 \times n=384 \Rightarrow n=\frac{384}{16}=24\) Question 3. 36 men can do a piece of work in 7 days. How many men will do the same work in 42 days? Solution:- Men : Days :: Men : Days 36 : 7 :: x : 42 ∴ By inverse proportion, \(36 \times 7=x \times 42\) \(\Rightarrow x=\frac{36 \times 7}{42}=6 \text{ men}\) Question 4. 12 pipes, all of the same size, fill a tank in 42 minutes. How long will it take to fill the same tank if 21 pipes of the same size are used? Solution:- Pipes : Time :: Pipes : Time 12 : 42 :: 21 : x ∴ By inverse proportion, \(12 \times 42=21 \times x\) \(\Rightarrow x=\frac{12 \times 42}{21}=24 \text{ minutes}\) Question 5. 
In a fort, 150 men had provisions for 45 days. After 10 days, 25 men left the fort. How long would the food last at the same rate? Solution:- After 10 days: for 150 men, the provisions will last (45-10) days = 35 days. For 1 man, the provisions will last \(150 \times 35\) days, and for (150-25) = 125 men, the provisions will last \(\frac{150 \times 35}{125}=42\) days. To get the Selina Solutions for all the chapters of Class 8 Maths, visit the ICSE Class 8 Maths Selina Solutions page. The application of direct and inverse variation can be seen in daily life. Mostly, this concept is used in solving problems related to: Time, speed and distance Time and work Pipe and cistern Solving these questions will also improve your quantitative aptitude, so you must practice all the questions provided in the chapter. In case you are unable to solve any problem, refer to the solutions provided in the PDF. Moreover, you can also access the answers for all the exercises of Maths, Physics, Chemistry and Biology subjects by clicking on ICSE Class 8 Selina Solutions. We hope this information on "ICSE Class 8 Maths Selina Solutions Chapter 10 Direct and Inverse Variations" is useful for students. Keep learning and stay tuned for further updates on ICSE and other competitive exams. To access interactive Maths and Science videos, download BYJU'S App and subscribe to the YouTube channel.
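The inverse-variation test used in the exercises above (all products xy equal) and the unitary-method answers can be verified the same way:

```python
def varies_inversely(xs, ys):
    # x and y vary inversely iff every product x*y equals the same constant
    products = {x * y for x, y in zip(xs, ys)}
    return len(products) == 1

# Question 1 (i): all products are 24, so x and y vary inversely
print(varies_inversely([4, 3, 12, 1], [6, 8, 2, 24]))         # True
# Question 1 (ii): products are 1800, 3600, 1800, 1800 -> not inverse
print(varies_inversely([30, 120, 60, 24], [60, 30, 30, 75]))  # False

# Question 3: men * days is constant, so 36*7/42 = 6 men
print(36 * 7 // 42)      # 6
# Fort problem: 150 men * 35 remaining days feeds 125 men for 42 days
print(150 * 35 // 125)   # 42
```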
Let $G$ be a Lie group and $V$ be a vector space. Let $\rho_{l} : G \times V \to V$ be a left representation and $\rho_r:V \times G \to V$ be a right representation which commutes with $\rho_l$ in the sense that $\rho_l(g_1 , \rho_r(v , g_2) ) = \rho_r( \rho_l(g_1, v) , g_2)$. Then the group multiplication $(g_1,v_1) \cdot (g_2,v_2) = (g_1 \cdot g_2 \quad,\quad \rho_r(v_1,g_2) + \rho_l(g_1,v_2))$ makes the set $G \times V$ into a Lie group. The identity is $(e_G, 0)$, the inverse of $(g,v)$ is $(g^{-1}, - \rho_l( g^{-1} , \rho_r(v,g^{-1} ) ) )$, and associativity can be verified by hand. This Lie group appears to be some sort of sum of a left semi-direct product with a right semi-direct product. Does it have a name? It is isomorphic to the semidirect product of $G$ with $V$ for the left representation $g\mapsto (v\mapsto g.v.g^{-1})$, i.e. $v \mapsto \rho_l(g, \rho_r(v, g^{-1}))$.
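A concrete sanity check of both the group law and the answer's claim. The particular choice below — $G$ the invertible $2\times 2$ matrices, $V$ the $2\times 2$ matrices, $\rho_l(g,v)=gv$ and $\rho_r(v,g)=vg$ (which commute by associativity) — is my illustrative example, not part of the question. The map $\Phi(g,v)=(g,\,\rho_r(v,g^{-1}))$ then intertwines the mixed product with the ordinary semidirect product for the conjugation-type action $v\mapsto g\,v\,g^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
inv = np.linalg.inv

# Illustrative choice: G = invertible 2x2 matrices, V = 2x2 matrices,
# rho_l(g, v) = g @ v, rho_r(v, g) = v @ g (these commute by associativity).
def mul(p, q):                      # the mixed product from the question
    (g1, v1), (g2, v2) = p, q
    return g1 @ g2, v1 @ g2 + g1 @ v2

def semi_mul(p, q):                 # ordinary semidirect product, action g.v.g^-1
    (g1, w1), (g2, w2) = p, q
    return g1 @ g2, w1 + g1 @ w2 @ inv(g1)

def phi(p):                         # candidate isomorphism Phi(g, v) = (g, v g^-1)
    g, v = p
    return g, v @ inv(g)

p, q, r = [(np.eye(2) + 0.1 * rng.normal(size=(2, 2)), rng.normal(size=(2, 2)))
           for _ in range(3)]

lhs = mul(mul(p, q), r)             # associativity of the mixed product
rhs = mul(p, mul(q, r))
print(np.allclose(lhs[0], rhs[0]) and np.allclose(lhs[1], rhs[1]))

a = phi(mul(p, q))                  # Phi is a homomorphism into the semidirect product
b = semi_mul(phi(p), phi(q))
print(np.allclose(a[0], b[0]) and np.allclose(a[1], b[1]))
```

The same setup also confirms the inverse formula: multiplying $(g,v)$ by $(g^{-1}, -\rho_l(g^{-1}, \rho_r(v,g^{-1})))$ returns the identity $(I, 0)$.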
Uniqueness for Neumann problems for nonlinear elliptic equations 1. Dipartimento di Ingegneria, Università degli Studi di Napoli Parthenope, Centro Direzionale, Isola C4 80143 Napoli, Italy 2. Laboratoire de Mathématiques Raphaël Salem, UMR 6085 CNRS-Université de Rouen, Avenue de l'Université, BP.12 76801 Saint-Étienne-du-Rouvray, France 3. Dipartimento di Matematica e Applicazioni "R. Caccioppoli", Università degli Studi di Napoli Federico Ⅱ, Complesso Monte S. Angelo, Via Cintia, 80126 Napoli, Italy We consider the Neumann problem $\left\{ \begin{align} & -\text{div}({{(1+|\nabla u{{|}^{2}})}^{(p-2)/2}}\nabla u)-\text{div}(c(x)|u{{|}^{p-2}}u)=f\ \ \ \text{in}\ \Omega , \\ & \left( {{(1+|\nabla u{{|}^{2}})}^{(p-2)/2}}\nabla u+c(x)|u{{|}^{p-2}}u \right)\cdot \underset{\raise0.3em\hbox{$\smash{\scriptscriptstyle-}$}}{n}=0\ \ \ \text{on}\ \partial \Omega , \\ \end{align} \right.$ where $Ω$ is a bounded domain of $\mathbb{R}^{N}$, $N≥ 2$, $ 1 < p < N$, $\underline n$ is the outer unit normal to $\partial Ω$, the datum $f$ belongs to $L^{(p^{*})'}(Ω)$ or to $L^{1}(Ω)$ and satisfies the compatibility condition $\int_{Ω} f \, dx = 0$, and $c(x)$ is a given coefficient. Keywords: Nonlinear elliptic equations, Neumann problems, renormalized solutions, weak solutions, uniqueness results. Mathematics Subject Classification: Primary: 35J25; Secondary: 35J60. Citation: Maria Francesca Betta, Olivier Guibé, Anna Mercaldo. Uniqueness for Neumann problems for nonlinear elliptic equations. Communications on Pure & Applied Analysis, 2019, 18 (3) : 1023-1048. doi: 10.3934/cpaa.2019050
Find the value(s) of positive integer $n$ such that $n^2 + 19n + 48$ is a perfect square. I factorised it to $(n+3)(n+16)$, but that only gives the negative roots $-3$ and $-16$. What do I do? Notice that since $$n^2+19n+48=(n+3)(n+16)$$ we have $$(n+3)^2<(n+3)(n+16)<(n+16)^2$$ This means that you must check only a finite number of cases: $$n^2+19n+48=(n+k)^2 \ \ \ k\in\{4,5,...,14,15\}$$ $$19n+48=2kn+k^2$$ $$n=\frac{k^2-48}{19-2k}$$ Since $n$ is positive, clearly $k\leq 9$, and notice that we have $$19-2k\leq k^2-48 \Rightarrow k\geq 8$$ So we must check only $k=8$ and $k=9$: $$k=8 \Rightarrow n=\frac{16}{3}$$ $$k=9 \Rightarrow n=33$$ So the only solution is $n=33$ :) Another approach: $n^2+19n+48=m^2$, $4n^2+76n+192=4m^2$, $(2n+19)^2-361+192=4m^2$, $(2n+19)^2-4m^2=169$, $(2n+19+2m)(2n+19-2m)=169=13^2$, so for each way of factoring $169$ you get a system of two linear equations in $m$ and $n$. Can you take it from there? Hint: For $n\in[1,5]$, $(n+7)^2<n^2+19n+48<(n+8)^2$. For $n\in[6,33)$, $(n+8)^2<n^2+19n+48<(n+9)^2$. For $n\in(33,\infty)$, $(n+9)^2<n^2+19n+48<(n+10)^2$.
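The bounding arguments above can be double-checked by brute force; the last hint guarantees there are no solutions beyond $n=33$ (for $n>33$ the expression is strictly between consecutive squares), so a finite scan is conclusive:

```python
from math import isqrt

def is_perfect_square(m):
    r = isqrt(m)
    return r * r == m

solutions = [n for n in range(1, 10_001) if is_perfect_square(n * n + 19 * n + 48)]
print(solutions)                  # [33]
print(33 * 33 + 19 * 33 + 48)     # 1764 = 42**2, i.e. (33 + 9)**2
```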
I just had time to go over this more carefully. I know this is a year and a half old, but it's still an interesting (and unanswered) question. Here is a toy example of my approach to solving this using a regularized regression estimator like the Lasso. The only important parameters in this approach are: min_lag ($L_{min} = 100$): the assumed minimum separation between the two series. nb_lags ($\Delta L = 20$): how many lags to scan (to include in the data matrix for regression). There is also sigma_e ($\sigma_{\varepsilon} = 0.1$), which is the amount of noise in the series we will simulate. Using these parameters, the data matrix will include: Recent lags of x and y, from 1 to nb_lags. Older lags of x and y, from min_lag to min_lag + nb_lags - 1. This amounts to 80 regressors for each equation. On the other hand, the process that generates the data consists of the equations (four coefficients): $x_t = 0.9x_{t-1} + 0.1y_{t-110} + \varepsilon_t^x$ $y_t = 0.9y_{t-1} + 0.1x_{t-110} + \varepsilon_t^y$ where $\{\varepsilon_t^x, \varepsilon_t^y\}_{t\in\{1...N\}}\overset{iid}{\sim} N(0, \sigma_{\varepsilon}^2)$, which we try to discover by fitting the Lasso to $x_t = \beta_0^x + \sum_{l=1}^{\Delta L}{\beta_l^x x_{t-l}} + \sum_{l=L_{min}}^{L_{min}+\Delta L-1}{\beta_l^x x_{t-l}} + \sum_{l=L_{min}}^{L_{min}+\Delta L-1}{\gamma_l^x y_{t-l}} + u_t^x$ $y_t = \beta_0^y + \sum_{l=1}^{\Delta L}{\beta_l^y y_{t-l}} + \sum_{l=L_{min}}^{L_{min}+\Delta L-1}{\beta_l^y y_{t-l}} + \sum_{l=L_{min}}^{L_{min}+\Delta L-1}{\gamma_l^y x_{t-l}} + u_t^y$ Note that I've left out the recent lags of the opposite variable in each equation because we know beforehand that they are not relevant. I still leave in the old lags of the same variable, though. 
Using the code shown below, the estimated coefficients are non-zero only for the correct lags (1 and 110 in both regressions) and with magnitudes $\hat{\beta_1^x} = 0.86$, $\hat{\gamma_{110}^x} = 0.09$, $\hat{\beta_1^y} = 0.86$, $\hat{\gamma_{110}^y} = 0.05$, which is pretty close IMHO to the real coefficients (you would expect them to be lower than the real value because of the regularization imposed). I imagine including more data could yield a more accurate answer, but for me what's amazing is the fact that the Lasso can pick up the right lags in both series. Hope this helps.

    # seed
    set.seed(1313)

    # create data
    N = 1000
    x = numeric(N)
    y = numeric(N)
    xy_lag = 110   # position of the lag between series
    sigma_e = 0.1

    # fill first data randomly
    x[1:xy_lag] = rnorm(xy_lag, 0, sigma_e)
    y[1:xy_lag] = rnorm(xy_lag, 0, sigma_e)

    # construct the series
    for (t in (xy_lag+1):N) {
      x[t] = 0.9 * x[t - 1] + 0.1 * y[t - xy_lag] + rnorm(1, 0, sigma_e)
      y[t] = 0.9 * y[t - 1] + 0.1 * x[t - xy_lag] + rnorm(1, 0, sigma_e)
    }

    # drop warm-up data
    x = x[(xy_lag+1):N]
    y = y[(xy_lag+1):N]
    plot(x)
    plot(y)

    # create data matrix for regression
    min_lag = 100   # assumed minimum separation
    nb_lags = 20    # number of lags to scan

    # util
    multi_lag = function(x, k1, k2){
      return(sapply((k1):(k2), function(n){dplyr::lag(x, n)}))
    }

    data_mat = cbind(multi_lag(x, 1, nb_lags),                      # recent lags of x
                     multi_lag(x, min_lag, min_lag + nb_lags - 1),  # old lags of x
                     multi_lag(y, 1, nb_lags),                      # recent lags of y
                     multi_lag(y, min_lag, min_lag + nb_lags - 1))  # old lags of y

    colnames(data_mat) = c(paste0("x_l", 1:nb_lags),
                           paste0("x_l", min_lag:(min_lag + nb_lags - 1)),
                           paste0("y_l", 1:nb_lags),
                           paste0("y_l", min_lag:(min_lag + nb_lags - 1)))

    # remove first rows (NA's)
    data_mat = data_mat[-(1:(min_lag + nb_lags - 1)),]
    x = x[-(1:(min_lag + nb_lags - 1))]
    y = y[-(1:(min_lag + nb_lags - 1))]

    # fit lasso
    library(glmnet)
    fit_x_cv = cv.glmnet(x = data_mat[, -((2*nb_lags+1):(3*nb_lags))],  # don't need recent lags of y
                         y = x)
    coef(fit_x_cv)  # correct lags, coefficients are close!

    fit_y_cv = cv.glmnet(x = data_mat[, -(1:nb_lags)],  # don't need recent lags of x
                         y = y)
    coef(fit_y_cv)  # correct lags, coefficients are close!
    # END
What is the interpretation of $|\langle AB \rangle|$? Since $|\langle AB \rangle|^2 = \langle AB \rangle [\langle AB \rangle]^\dagger$, could $|\langle AB \rangle| = \sqrt{|\langle AB \rangle|^2}$? PS: feel free to correct me.

$| \langle AB \rangle|=|\langle \Psi|AB|\Psi \rangle|$ for some operators $A,B$ acting on the state $|\Psi \rangle$. Using for example a position representation of $A,B$ we can use a wave function, i.e. $\Psi(q)= \langle q|\Psi \rangle$ where $|q \rangle$ is a position eigenket (in some dimension; nothing is specified, so it is hard to know the dimensionality of the problem, 1D, 3D, etc.). Then we get: $| \langle A_{pos}B_{pos} \rangle|=|\int\Psi^* A_{pos} B_{pos}\Psi \,d^3x|$ where $A_{pos}$ is the position representation of the operator $A$. If $A$ already is in position representation, i.e. $\hat{x}=x,\hat{p}=-i\hbar \partial_x$ for 1D, then you could say $A_{pos}=A$ and ignore the first part. Clearly a lot of information is missing and this is the only answer I can provide, unfortunately.
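For a concrete toy case (my own choice, not from the question): take $A=\sigma_x$, $B=\sigma_y$ and $|\Psi\rangle=|0\rangle$. Since $\sigma_x\sigma_y=i\sigma_z$, we get $\langle AB\rangle=i$, which is complex even though $A$ and $B$ are individually Hermitian, and $|\langle AB\rangle|=1$. A few lines of plain complex arithmetic confirm this:

```python
# Toy check of |<AB>| for 2x2 operators on a qubit state.
# A = sigma_x, B = sigma_y, |psi> = |0>; these choices are illustrative only.

def matmul(M, N):
    # product of two 2x2 matrices
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expectation(M, psi):
    # <psi| M |psi> for a 2-component state vector
    Mpsi = [sum(M[i][j] * psi[j] for j in range(2)) for i in range(2)]
    return sum(psi[i].conjugate() * Mpsi[i] for i in range(2))

sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
psi = [1 + 0j, 0 + 0j]                 # the state |0>

ab = expectation(matmul(sx, sy), psi)  # sigma_x sigma_y = i sigma_z
print(ab, abs(ab))                     # i and 1.0
```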
I am aware that I am replying to an old question that has been satisfactorily answered, but I think it might be worthwhile pointing out, for the completeness of MathOverflow, that the question is discussed (in slightly greater generality, and with an example) as corollary 4.4.5 in the book An Introduction to Gröbner Bases by William Adams and Philippe Loustaunau (AMS 1994, Graduate Studies in Mathematics 3). The method is the same as in the approved answer, but formulated in a slightly different way. Their statements (very slightly reworded) are: Proposition 4.4.1 [computation of the saturation]: Let $R$ be an integral domain and $I$ a non-zero ideal of $A = R[x_1,\ldots,x_n]$. Let $g\in A$, $g\neq 0$. Let $w$ be a new variable. Consider the ideal $\langle I,wg-1\rangle$ of $A[w]$. Then $I A_g \cap A = \langle I, wg-1\rangle \cap A$ where $A_g = A[\frac{1}{g}]$. (Note that the intersection $\langle I, wg-1\rangle \cap A$ in $A[w]$ can be computed using an elimination order on the variables: it is theorem 4.3.6 in that book.) Proposition 4.4.4: Let $R$ be an integral domain with $k$ its quotient field. Let $I$ be a non-zero ideal of $A = R[x_1,\ldots,x_n]$ and let $G = \{g_1,\ldots,g_t\}$ be a Gröbner basis for $I$ with respect to some term ordering. Let $s$ be the product of the leading coefficients of $g_1,\ldots,g_t$. Then $$I k[x_1,\ldots,x_n] \cap R[x_1,\ldots,x_n] = IR_s[x_1,\ldots,x_n] \cap R[x_1,\ldots,x_n]$$ (This is essentially what the approved answer to this question explains.) Corollary 4.4.5: Let $R$ be an integral domain in which linear equations are algorithmically solvable and let $k$ be its quotient field. Let $I$ be an ideal of $A = R[x_1,\ldots,x_n]$. Then we can compute generators for the ideal $I k[x_1,\ldots,x_n] \cap R[x_1,\ldots,x_n]$. (This follows immediately from the above results.)
Class-A amplifier design overview

After watching those two videos, Adventures in a one transistor audio amplifier and Single transistor, 1 W, I had to illustrate a slightly more sane design approach. To start out, here's the basic idea about how to drive a speaker (without having to find and use a flux-limiting, gapped audio transformer -- which you will definitely need if you plan to run DC current through the primary):

simulate this circuit – Schematic created using CircuitLab

\$Q_1\$ is an emitter follower and it will be able to source current into the speaker. \$Q_2\$ is a common emitter and it will be able to sink current from the load. Together, they can do a fair job of sinking and sourcing current. A 3rd BJT provides the voltage difference required:

simulate this circuit

Note that I've shown two schematics. On the left, I've connected the speaker slightly differently (which would also work fine) in order to help segue into the transition on the right. I've also added the 3rd BJT that's needed. On the left is also a current source above \$Q_3\$ and a small resistor to sink a little excess current. On the right, I've bootstrapped the design to provide the current source indicated on the left. Capacitor \$C_1\$ will develop a voltage across it that remains fairly fixed. Also, \$Q_1\$'s \$V_\text{BE}\$ will have a relatively fixed voltage across it, too. As a consequence, resistor \$R_2\$ will have a nearly fixed voltage across it. So it will form the equivalent of a constant current source. Just what we needed.

Class-A amplifier design details

The design is already taking shape. But it's time to put some parameters into it. In this case, \$V_\text{CC}=12\:\text{V}\$ (in keeping with one of the videos.) I'd like to leave about \$1.5\:\text{V}\$ \$V_\text{CE}\$ headroom for the two BJTs, \$Q_1\$ and \$Q_2\$, to keep them out of saturation and having some decent \$\beta\$ left over.
This means that I've got about \$12\:\text{V}-2\cdot 1.5\:\text{V}=9\:\text{V}\$ of remaining swing feeding into \$C_1\$. So \$V_\text{PEAK}=\frac{9\:\text{V}}{2}=4.5\:\text{V}\$, or \$V_\text{RMS}=\frac{V_\text{PEAK}}{\sqrt{2}}\approx 3.2\:\text{V}\$. The maximum power into the speaker will then be about \$\frac{V_\text{RMS}^2}{8\:\Omega}\approx 1\frac{1}{4}\:\text{W}\$. We can estimate the peak current to the speaker now as \$\frac{4.5\:\text{V}}{8\:\Omega}\approx 560\:\text{mA}\$. \$Q_2\$ will have to sink that much current, plus some added current to stay in class-A operation. Let's set this minimum to about \$100\:\text{mA}\$. So the peak collector current for \$Q_2\$ (and the peak emitter current for \$Q_1\$) will be about \$660\:\text{mA}\$. Assuming an active \$\beta=60\$ (we can achieve that), this means the peak base currents will be on the order of \$11\:\text{mA}\$. At this point I'm going to select the D44H11 BJT in the TO-220 package, knowing that I'm going to burn a watt or two in each. From the datasheet I estimate a peak \$V_\text{BE}\approx 800\:\text{mV}\$. I'd like to sink about \$1.5\:\text{mA}\$ with \$R_1\$, so this suggests \$R_1=\frac{800\:\text{mV}}{1.5\:\text{mA}}\approx 533\:\Omega\$. So I'll set \$R_1=560\:\Omega\$. I will want the needed \$11\:\text{mA}\$, plus this added \$1.5\:\text{mA}\$, in \$R_2\$. So \$R_2=\frac{6\:\text{V}-800\:\text{mV}}{11\:\text{mA}+1.5\:\text{mA}}=416\:\Omega\$. I'll set it a little hotter to \$R_2=390\:\Omega\$. Let's update the schematic: simulate this circuit NFB will be required to linearize the output and to set the AC gain. So I've added that feedback network above, with the addition of \$R_3\$, \$R_4\$, and \$C_2\$. I'm going to choose an AC gain of \$15\$, so \$R_3=15\cdot R_4\$. Since \$Q_3\$ kind of makes up a Darlington with \$Q_2\$, the base current required for \$Q_3\$ will be on the order of \$\frac{13\:\text{mA}}{\beta=150}\approx 90\:\mu\text{A}\$. (It will often be less, but this is safe.)
So this means I want a collector current for the BJT to be about 10X more, or about \$1\:\text{mA}\$. \$C_2\$ just needs to be "big enough." I could get into the details to show that \$1\:\mu\text{F}\$ might be fine. But let's make it 10X bigger. So \$C_2=10\:\mu\text{F}\$.

The fourth BJT

Let's plug in the 4th BJT so that we can discuss the above details in a better light: simulate this circuit Now you can see that I've already put in values for the AC gain resistors and \$R_5\$ (newly added here.) I did this by recalling that I mentioned above that the collector current for \$Q_4\$ should be \$1\:\text{mA}\$. If you recall, I figure that the center voltage feeding into \$C_1\$ will be a quiescent \$6\:\text{V}\$. I'd like to drop about half that across the \$V_\text{CE}\$ of \$Q_4\$ and the rest I split evenly between \$R_3\$ and \$R_5\$. So those values are set. And since the gain is 15, the value of \$R_4\$ is also thereby set, too. As shown on the above schematic. The only remaining problem is biasing \$Q_4\$. You can see in the above schematic that I've added a few parts to achieve that. Since \$Q_4\$'s collector current is about \$1\:\text{mA}\$, the base current will be well under \$10\:\mu\text{A}\$. I decided to choose about \$80\:\mu\text{A}\$ for the biasing current in \$R_6\$ and \$R_7\$, to make it stiff enough. I need a base voltage for \$Q_4\$ of about \$6\:\text{V}-1.5\:\text{V}-700\:\text{mV}=3.8\:\text{V}\$. So \$R_6=R_7=\frac{3.8\:\text{V}}{80\:\mu\text{A}}\approx 47\:\text{k}\Omega\$. \$C_4\$ gives me an AC ground for a balanced midpoint. The only remaining thing is to compute \$R_8=\frac{12\:\text{V}-3.8\:\text{V}}{80\:\mu\text{A}}-R_6=55.5\:\text{k}\Omega\$. So \$R_8=56\:\text{k}\Omega\$. Here's the final schematic: simulate this circuit That's a class-A amplifier, educational-level not professional. (Update: I've added a necessary compensation pole capacitor, \$C_5\$, to the above circuit with a nominal value I think will be about right.
It was important in order to roll off the high frequencies. So it's included now.) I promised to add a class-AB. There are some tweaks needed to deal with discrete BJTs that haven't been included here, nor discussed. So again, this is more of an educational level -- though if you built it I expect you'd still get passable results from it. simulate this circuit It includes temperature and device \$\beta\$ and \$V_\text{CE}\$ matching compensation for the diff-amp and mirrors and a few other areas, as well. Adjustments are available for the VBE multiplier so that the quiescent current can be set up where you want it and so that the parabolic thermal response can be tweaked, as well.
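As a cross-check of the arithmetic in the class-A walk-through above (all figures are taken from the text itself; nothing new is assumed), a few lines reproduce the headline numbers:

```python
from math import sqrt

# Re-derive the class-A design numbers quoted above.
Vcc = 12.0
headroom = 1.5                   # V_CE headroom per output BJT
swing = Vcc - 2 * headroom       # 9 V of swing feeding into C1
V_peak = swing / 2               # 4.5 V
V_rms = V_peak / sqrt(2)         # ~3.2 V
P_max = V_rms ** 2 / 8.0         # ~1.27 W into the 8-ohm speaker

I_peak = V_peak / 8.0            # ~562 mA peak speaker current
I_sink = I_peak + 0.100          # + 100 mA to stay in class-A operation
I_base = I_sink / 60             # beta = 60 -> ~11 mA peak base current

# text rounds the base current to 11 mA before sizing R2
R2 = (6.0 - 0.8) / (0.011 + 0.0015)   # (6 V - V_BE) / 12.5 mA = 416 ohms
print(round(P_max, 2), round(I_peak, 4), round(R2, 1))
```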
The Bessel function $J_n(x)$ is defined, for a natural number $n$ and real number $x$, as $J_n(x) = \frac{1}{2\pi}\int_0^{2\pi}\cos(n\theta-x\sin\theta)\,d\theta.$ By using contour integration with integrand $z^{n-1}\exp(\frac{-xz}{2})\exp(\frac{x}{2z})$, or otherwise, show that $J_n(x)=\sum_{k=0}^\infty\frac{(-1)^k}{k!(n+k)!}\left(\frac{x}{2}\right)^{n+2k}$ Hi, can someone give me some hints or a simple example of how to convert the integral to the sum using the given integrand? I am not very familiar with contour integration. Many many thanks!
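Not the contour argument itself, but before attempting it, it may help to confirm numerically that the two sides agree. The sketch below (plain Python; the parameters n = 2, x = 1.5 are chosen arbitrarily; a uniform Riemann sum is very accurate here because the integrand is smooth and periodic) compares the integral with a truncated series:

```python
from math import cos, sin, pi, factorial

def J_integral(n, x, steps=2000):
    # (1/(2*pi)) * integral over [0, 2*pi) of cos(n*t - x*sin(t)) dt;
    # for a smooth periodic integrand the uniform sum converges spectrally
    h = 2 * pi / steps
    return sum(cos(n * k * h - x * sin(k * h)) for k in range(steps)) * h / (2 * pi)

def J_series(n, x, terms=30):
    # sum_{k=0}^{terms-1} (-1)^k / (k! (n+k)!) * (x/2)^(n+2k)
    return sum((-1) ** k / (factorial(k) * factorial(n + k)) * (x / 2) ** (n + 2 * k)
               for k in range(terms))

n, x = 2, 1.5
print(J_integral(n, x), J_series(n, x))  # both ~0.23209
```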
Fujimura's problem

Let [math]\overline{c}^\mu_n[/math] be the size of the largest subset of the triangular grid [math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math] which contains no equilateral triangles [math](a+r,b,c), (a,b+r,c), (a,b,c+r)[/math] with [math]r \gt 0[/math]; call such sets triangle-free. (It is an interesting variant to also allow negative r, thus allowing "upside-down" triangles, but this does not seem to be as closely connected to DHJ(3).) Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain hyper-optimistic conjecture.

n = 0, 1, 2, 3, 4, 5
[math]\overline{c}^\mu_n[/math] = 1, 2, 4, 6, 9, 12

n=0

[math]\overline{c}^\mu_0 = 1[/math]: This is clear.

n=1

[math]\overline{c}^\mu_1 = 2[/math]: This is clear.

n=2

[math]\overline{c}^\mu_2 = 4[/math]: This is clear (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]).

n=3

[math]\overline{c}^\mu_3 = 6[/math]: For the lower bound, delete (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math]. For the upper bound: observe that with only three removals, each of these (non-overlapping) triangles must have exactly one removal:

set A: (0,3,0) (0,2,1) (1,2,0)
set B: (0,1,2) (0,0,3) (1,0,2)
set C: (2,1,0) (2,0,1) (3,0,0)

Consider choices from set A:

(0,3,0) leaves the triangle (0,2,1) (1,2,0) (1,1,1)
(0,2,1) forces a second removal at (2,1,0) [otherwise there is a triangle at (1,2,0) (1,1,1) (2,1,0)], but then none of the choices for the third removal work
(1,2,0) is symmetrical with (0,2,1)

n=4

[math]\overline{c}^\mu_4=9[/math]: The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c equal to 0 has 9 elements and is triangle-free. (Note that it does contain the equilateral triangle (2,2,0),(2,0,2),(0,2,2), so would not qualify for the generalised version of Fujimura's problem in which [math]r[/math] is allowed to be negative.) Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles.
If [math](0,0,4)\in S[/math], there can only be one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] in S for [math]x=1,2,3,4[/math]. Thus there can only be 5 elements in S with [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S can have at most 4 elements in this set. So [math]|S|\leq 4+5=9[/math]. Similarly if S contains (0,4,0) or (4,0,0). So if [math]|S|\gt9[/math], S doesn't contain any of these. Also, S can't contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math]. Similarly for [math](3,0,1), (1,0,3),(1,2,1)[/math] and [math](1,3,0), (3,1,0), (1,1,2)[/math]. So now we have found 6 elements not in S, but [math]|\Delta_4|=15[/math], so [math]|S|\leq 15-6=9[/math]. Remark: curiously, the best constructions for [math]c_4[/math] use only 7 points instead of 9.

n=5

[math]\overline{c}^\mu_5=12[/math]: The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c equal to 0 has 12 elements and doesn't contain any equilateral triangles. Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], there can only be one of (0,x,5-x) and (x,0,5-x) in S for x=1,2,3,4,5. Thus there can only be 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S can have at most 6 elements in this set. So [math]|S|\leq 6+6=12[/math]. Similarly if S contains (0,5,0) or (5,0,0). So if [math]|S|\gt12[/math], S doesn't contain any of these. S can only contain 2 points in each of the following equilateral triangles:

(3,1,1),(0,4,1),(0,1,4)
(4,1,0),(1,4,0),(1,1,3)
(4,0,1),(1,3,1),(1,0,4)
(1,2,2),(0,3,2),(0,2,3)
(3,2,0),(2,3,0),(2,2,1)
(3,0,2),(2,1,2),(2,0,3)

So now we have found 9 elements not in S, but [math]|\Delta_5|=21[/math], so [math]|S|\leq 21-9=12[/math].

n=6

[math]15 \leq \overline{c}^\mu_6 \leq 17[/math]: [math]15 \leq \overline{c}^\mu_6[/math] from the bound for general n.
Note that there are eight extremal solutions to [math] \overline{c}^\mu_3 [/math]:

Solution I: remove 300, 020, 111, 003
Solution II: remove 030, 111, 201, 102
Solution III (and 2 rotations): remove 030, 021, 210, 102
Solution III' (and 2 rotations): remove 030, 120, 012, 201

Also consider the same triangular lattice with the point 020 removed, making a trapezoid. Solutions based on I-III are:

Solution IV: remove 300, 111, 003
Solution V: remove 201, 111, 102
Solution VI: remove 210, 021, 102
Solution VI': remove 120, 012, 201

Suppose we can remove all equilateral triangles on our 7×7×7 triangular lattice with only 10 removals. The triangle 141-411-114 must have at least one point removed. Remove 141, and note that because of symmetry any logic that follows also applies to 411 and 114. There are three disjoint triangles 060-150-051, 240-231-330, 042-132-033, so each must have a point removed. (Now only six removals remain.) The remainder of the triangle includes the overlapping trapezoids 600-420-321-303 and 303-123-024-006. If the solutions of these trapezoids come from V, VI, or VI', then 6 points have been removed. Suppose the trapezoid 600-420-321-303 uses solution IV (by symmetry the same logic will work with the other trapezoid). Then there are 3 disjoint triangles 402-222-204, 213-123-114, and 105-015-006. Then 6 points have been removed. Therefore the remaining six removals must all come from the bottom three rows of the lattice. Note this means the "top triangle" 060-330-033 must have only four points removed, so it must conform to either solution I or II, because of the removal of 141. Suppose the solution of the trapezoid 600-420-321-303 is VI or VI'. Both solutions I and II on the "top triangle" leave 240 open, and hence the equilateral triangle 240-420-222 remains. So the trapezoid can't be VI or VI'. Suppose the solution of the trapezoid 600-420-321-303 is V.
This leaves an equilateral triangle 420-321-330 which forces the "top triangle" to be solution I. This leaves the equilateral triangle 201-321-222. So the trapezoid can't be V. Therefore the solution of the trapezoid 600-420-321-303 is IV. Since the disjoint triangles 402-222-204, 213-123-114, and 105-015-006 must all have points removed, that means the remaining points in the bottom three rows (420, 321, 510, 501, 312, 024) must be left open. 420 and 321 force 330 to be removed, so the "top triangle" is solution I. This leaves triangle 321-024-051 open, and we have reached a contradiction.

n = 7

n = 8

[math]\overline{c}^\mu_{8} \geq 22[/math]: 008,026,044,062,107,125,134,143,152,215,251,260,314,341,413,431,440,512,521,620,701,800

n = 9

n = 10

[math]\overline{c}^\mu_{10} \geq 29[/math]: 028,046,055,064,073,118,172,181,190,208,217,235,262, 316,334,352,361,406,433,442,541,550,604,613,622, 721,730,901,1000

General n

A lower bound for [math]\overline{c}^\mu_n[/math] is 2n for [math]n \geq 1[/math], by removing (n,0,0), the triangle (n-2,1,1) (0,n-1,1) (0,1,n-1), and all points on the edges of and inside the same triangle. In a similar spirit, we have the lower bound [math]\overline{c}^\mu_{n+1} \geq \overline{c}^\mu_n + 2[/math] for [math]n \geq 1[/math], because we can take an example for [math]\overline{c}^\mu_n[/math] (which cannot be all of [math]\Delta_n[/math]) and add two points on the bottom row, chosen so that the triangle they form has third vertex outside of the original example. An asymptotically superior lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), made of all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero. A trivial upper bound is [math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math] since deleting the bottom row of a equilateral-triangle-free-set gives another equilateral-triangle-free-set.
We also have the asymptotically superior bound [math]\overline{c}^\mu_{n+2} \leq \overline{c}^\mu_n + \frac{3n+2}{2}[/math] which comes from deleting two bottom rows of a triangle-free set and counting how many vertices are possible in those rows. Another upper bound comes from counting the triangles. There are [math]\binom{n+2}{3}[/math] triangles, and each point belongs to n of them. So you must remove at least (n+2)(n+1)/6 points to remove all triangles, leaving (n+2)(n+1)/3 points as an upper bound for [math]\overline{c}^\mu_n[/math]. Asymptotics The corners theorem tells us that [math]\overline{c}^\mu_n = o(n^2)[/math] as [math]n \to \infty[/math]. By looking at those triples (a,b,c) with a+2b inside a Behrend set, one can obtain the lower bound [math]\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))[/math].
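The values in the table for small n can be confirmed by exhaustive search. The sketch below (a straightforward bitmask enumeration, my own; at n = 4 this is only 2^15 subsets, so it runs instantly) checks n ≤ 4:

```python
# Exhaustive check of the small values of c-bar_n^mu in the table above.
def max_triangle_free(n):
    # points of Delta_n, and all equilateral triangles
    # (a+r,b,c), (a,b+r,c), (a,b,c+r) with r > 0
    pts = [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]
    idx = {p: i for i, p in enumerate(pts)}
    tris = []
    for r in range(1, n + 1):
        for a in range(n - r + 1):
            for b in range(n - r - a + 1):
                c = n - r - a - b
                tris.append((idx[(a + r, b, c)], idx[(a, b + r, c)], idx[(a, b, c + r)]))
    best = 0
    for mask in range(1 << len(pts)):
        # keep the subset only if no triangle has all three vertices selected
        if all(not ((mask >> i) & (mask >> j) & (mask >> k) & 1) for i, j, k in tris):
            best = max(best, bin(mask).count("1"))
    return best

print([max_triangle_free(n) for n in range(5)])  # [1, 2, 4, 6, 9]
```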
I am trying to understand the solution to the following question. At which $c\in\mathbb{R}$ is the function $f:\mathbb{R}\rightarrow\mathbb{R}$ defined by $$f(x)=\begin{cases}x&\text{if $x$ is rational,}\\1-x&\text{if $x$ is irrational,}\end{cases}$$ continuous? The solution states that the only answer is $c=1/2$, but I am not sure why this is so. I have thought about what might happen if we suppose (for a contradiction) that $f$ is continuous at some $c\neq1/2$. To arrive at a contradiction, I am trying to use the fact that there exists an irrational number between any two distinct real numbers. However, I have not yet managed to arrive at a contradiction. Could I have some suggestions as to how to progress, please?
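One way to see what the density fact is driving at, before writing the proof: approach $c$ along rationals (where $f\to c$) and along irrationals (where $f\to 1-c$). Continuity at $c$ would force these two limits to agree, i.e. $c = 1-c$, so $c = 1/2$. A small numerical illustration (the approximating sequences here are my own ad hoc choices, and rationality is tracked by hand since floats cannot represent it):

```python
from math import sqrt

def f(x, x_is_rational):
    # evaluate f on a point whose rationality we track explicitly
    return x if x_is_rational else 1 - x

def limit_gap(c):
    # f along rationals tending to c gives c; along the irrational point
    # c + sqrt(2)/10^12 (very close to c) it gives roughly 1 - c
    rational_limit = f(c, True)
    irrational_limit = f(c + sqrt(2) / 10**12, False)
    return abs(rational_limit - irrational_limit)   # ~ |2c - 1|

print(limit_gap(0.3), limit_gap(0.5))  # ~0.4 and ~0.0
```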
Let $M'$, $M''$ be simply-connected Hausdorff compact manifolds (possibly with boundary, for another variant of the question). Let $\ f:M'\rightarrow M''\ $ be a continuous function which induces an isomorphism of the cohomological rings. Does there exist a continuous function $\ g:M''\rightarrow M'\ $ which induces the inverse cohomology ring isomorphism? The question has been answered in the comments. To justify writing this answer I'll sweeten it with some links and further details. Compact manifolds (possibly with boundary) have the homotopy type of a CW-complex, see the answers to this MO-question which also provide links to the literature. If $f:X\to Y$ induces isomorphisms on cohomology and $X$ and $Y$ are simply-connected spaces, then $f$ is a weak equivalence, discussed e.g. here. Subtleties with homological Whitehead theorems in the non-simply-connected case are discussed in this MSE-question. I suppose that the implication from cohomology isomorphism to weak equivalence needs isomorphisms on fundamental groups and cohomology with coefficients in all local systems, unless the space is nilpotent, see this paper of Gersten and the cited paper of E. Dror. If $f:X\to Y$ is a weak equivalence of CW-complexes, then it is a homotopy equivalence, and therefore has a homotopy inverse $g:Y\to X$, which induces the inverse cohomological isomorphism for $f^\ast:H^\ast(Y)\to H^\ast(X)$. By the first remark this applies to all compact manifolds with boundary. Finally, an example with homology spheres; some other examples of what can go wrong have been provided in the links above. In Milnor's paper on Brieskorn homology spheres, one can find lots of examples of homology spheres which have a contractible universal cover (they are covered by hyperbolic three-space and their fundamental groups are related to triangle groups). The plus-construction again provides a map $M\to S^3$ from the homology sphere to $S^3$.
(The plus-construction produces a CW-complex which by Whitehead's theorem again is homotopy equivalent to $S^3$.) This map $M\to S^3$ induces an isomorphism on cohomology with integral coefficients. However, there can not be an inverse, because the universal covering of $M$ is contractible and so every map $S^3\to M$ has degree $0$. This provides another example that some condition like simply-connected is necessary. Some further comments on homology spheres: First, for any closed connected orientable $n$-manifold $M$ there is always a degree $1$ map $M\to S^n$ obtained by collapsing the complement of an open ball in $M$ to a point. In the reverse direction, any degree $1$ map $f:M\to N$ of closed connected orientable $n$-manifolds must induce a surjection on $\pi_1$, for otherwise $f$ could be lifted to the covering space $\tilde N \to N$ corresponding to the proper subgroup $f_*(\pi_1M) \subset \pi_1N$. If this covering is finite-sheeted, then the degree of the projection $\tilde N\to N$ is equal to the number of sheets (which is the index of $f_*(\pi_1M)$ in $\pi_1N$) since an orientation of $N$ lifts to an orientation of $\tilde N$, making the local degrees at all points in a fiber have the same sign. Thus the degree of $f$ is divisible by the number of sheets, so it can't be $1$. If the covering is infinite-sheeted then $\tilde N$ is noncompact so $H_n(\tilde N)=0$, forcing $f$ to have degree $0$. Applying this fact to a degree $1$ map $S^n\to N$ we see that $\pi_1N=0$ so it can't be a nonsimply-connected homology sphere. (These are classical arguments, incidentally.)
The atmosphere of the Earth is mainly composed of nitrogen (N2, 78%) and oxygen (O2, 21%) molecules, which together make up about 99% of its total volume. The remaining 1% contains all sorts of other stuff like argon, water and carbon dioxide, but let's ignore those for now. As you probably know, the oxygen we breathe is produced by plants from water and carbon dioxide as a byproduct of photosynthesis. Conversely, animals (including humans) use the oxygen to burn organic compounds (like sugars, fats and proteins) back into water and carbon dioxide, obtaining energy in the process. So do many bacteria and fungi, too, and some oxygen also gets burned in abiotic processes like wildfires and the oxidization of minerals. The result is that oxygen cycles pretty rapidly in and out of the atmosphere. According to the Wikipedia article I just linked to, the average time an oxygen molecule spends in the atmosphere before being burned or otherwise removed from the air is around 4,500 years. The most recent known Homo erectus fossil dates from about 143,000 years ago, so the probability that a particular oxygen molecule hitting your face today has been around since that time is roughly $\exp(- 143000 / 4500) = \exp(-31.78) \approx 1.58 \times 10^{-14}$, i.e. basically zero. Of course, the oxygen atoms used for respiration don't disappear anywhere: they just become part of the water and carbon dioxide molecules. Those that end up in carbon dioxide usually get photosynthesized back into free oxygen pretty soon, unless they happen to get trapped in a carbonate sediment or something like that. The oxygen atoms that end up in water, on the other hand, may spend quite a long time in the oceans before being recycled back into the air; if I'm not mistaken, the total amount of oxygen in the hydrosphere is about 1000 times the amount in the atmosphere, so the mean cycle time should also be about 1000 times longer.
Still, eventually, even the oxygen in the oceans gets cycled back into the atmosphere. Thus, while the oxygen molecules you breathe might not have been around for more than a few thousand years, the atoms they consist of have been around since long before the dinosaurs. How about nitrogen, then? Perhaps a bit surprisingly, given how inert nitrogen generally is, it's also actively cycled by the biosphere. Unfortunately, the actual rate at which this cycling occurs seems to be still poorly understood, which makes estimating the mean cycle times difficult. If I'm reading these tables correctly, the annual (natural) nitrogen flux in and out of the atmosphere is estimated to be somewhere between 40 and 400 teragrams per year, while the total atmospheric nitrogen content is about 4 zettagrams. This would put the mean lifetime of a nitrogen molecule in the atmosphere somewhere between 10 million and 100 million years, well above the time since Homo erectus first appeared (about 1.8 million years ago). Thus, it seems that most of the air molecules around you have probably been around since the days of Homo erectus, and some of them might even have been present during the age of the dinosaurs, which ended about 66 million years ago.
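The back-of-the-envelope numbers above are easy to reproduce (all inputs are the figures quoted in the text):

```python
from math import exp

# Survival probability of an O2 molecule over the time since the
# most recent known Homo erectus fossil
mean_o2_lifetime = 4500.0     # years, the quoted mean atmospheric residence time
t_erectus = 143_000.0         # years
p_survive = exp(-t_erectus / mean_o2_lifetime)
print(p_survive)              # ~1.6e-14

# Mean atmospheric lifetime of N2 implied by the quoted flux range
n2_total = 4e21               # grams (~4 zettagrams of atmospheric N2)
for flux in (400e12, 40e12):  # grams/year (400 and 40 teragrams per year)
    print(n2_total / flux)    # ~1e7 and ~1e8 years
```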
Thu, 29 Jan 2009

A simple trigonometric identity

Here I put A at (0,0), B at (4,0), and C at (2,2). This is clear enough, but I wished that it were more nearly equilateral. So that night as I was waiting to fall asleep, I thought about the problem of finding lattice points that are at the vertices of an equilateral triangle. This is a sort of two-dimensional variation on the problem of finding rational approximations to surds, which is a topic that has turned up here many times over the years. Or rather, I wanted to find lattice points that are almost at the vertices of an equilateral triangle.

A simple proof that there are no equilateral lattice triangles has just now occurred to me, though, and I am really pleased with it, so we are about to have a digression. By Pick's theorem, the area of any lattice triangle is a half-integer. But the area of an equilateral triangle with side !!s!! is !!\frac{\sqrt3}{4}s^2!!, and for a lattice triangle !!s^2!! is a nonzero integer, so the area is irrational. So no lattice triangle is equilateral.

Wasn't that excellent? That is just the sort of thing that I like best.

Okay, end of digression. Back to the law of cosines. We have a triangle with sides !!a!!, !!b!!, and !!c!!, and opposite angles !!A!!, !!B!!, and !!C!!. Before I fell asleep, it occurred to me that you could take the analogous laws of cosines for the other sides and combine them. So today I got out the paper and did the thing, and came up with the very simple relation $$c = a\cos B + b\cos A$$ which holds in any triangle. But somehow I had never seen this before, or, if I had, I had completely forgotten it.

The thing is so simple that I thought that it must be wrong, or I would have known it already. But no, it checked out for the easy cases (right triangles, equilateral triangles, trivial triangles) and the geometric proof is easy: just drop a perpendicular from C. The foot of the perpendicular divides the base !!c!! into two segments of lengths !!a\cos B!! and !!b\cos A!!.

Perhaps that was anticlimactic. Have I mentioned that I have a sign on the door of my office that says "Penn Institute of Lower Mathematics"? This is the kind of thing I'm talking about. I will let you all know if I come up with anything about the almost-equilateral lattice triangles.
Clearly, you can approximate the equilateral triangle as closely as you like by making the lattice coordinates sufficiently large, just as you can approximate √3 as closely as you like with rationals by making the numerator and denominator sufficiently large. Proof: Your computer draws equilateral-seeming triangles on the screen all the time. I note also that it is important that the lattice is two-dimensional. In three or more dimensions the triangle (1,0,0,0...), (0,1,0,0...), (0,0,1,0...) is a perfectly equilateral lattice triangle with side √2.

[ Addendum 20090130: Vilhelm Sjöberg points out that the area of an equilateral triangle with side !!s!! is !!\frac{\sqrt3}{4}s^2!!, which is irrational when !!s^2!! is a nonzero integer. ]

[ Addendum 20140403: As a practical matter, one can draw a good lattice approximation to an equilateral triangle by choosing a good rational approximation to !!\sqrt3!!, say !!\frac ab!!, and then drawing the points !!(0,0), (b,a),!! and !!(2b, 0)!!. The rational approximations to !!\sqrt3!! quickly produce triangles that are indistinguishable from equilateral. For example, the rational approximation !!\frac74!! gives the isosceles triangle with vertices !!(0,0), (4,7), (8,0)!! which has one side of length 8 and two sides of length !!\sqrt{65}\approx 8.06!!, an error of less than one percent. The next such approximation, !!\frac{26}{15}!!, gives a triangle that is correct to about 1 part in 1800. (For more about rational approximations to !!\sqrt3!!, see my article on Archimedes and the square root of 3.) ]

[ Addendum 20181126: Even better ways to make 60-degree triangles on lattice points. ]
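The 20140403 recipe can be checked mechanically: given a rational approximation a/b to √3, the triangle (0,0), (b,a), (2b,0) has base 2b and two legs of length √(a²+b²), and the quality of the approximation is the relative mismatch between leg and base. A quick sketch (the helper name is mine):

```python
from math import hypot

def leg_error(a, b):
    # triangle (0,0), (b, a), (2b, 0): base 2b, legs hypot(b, a);
    # returns the relative mismatch between a leg and the base
    base = 2 * b
    leg = hypot(b, a)
    return abs(leg - base) / base

# the two rational approximations to sqrt(3) used in the addendum
print(leg_error(7, 4))    # ~0.0078, under one percent
print(leg_error(26, 15))  # ~0.00056, about 1 part in 1800
```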
Let's say that we have one area and in this area there are designated 5 points, $a, b, c, d, e.$ For each of these 5 points we count the number of occurrences of a specific feature. We capture the values for two different days, for example: $\begin{array}[t]{llllll|l} \text{date} & a & b & c & d & e & \text{sum}\\\hline \text{first day}& 10 & 20 & 30 & 50 & - & 110\\ \text{second day} & 30 & - & - & - & 30 & 60 \end{array}$ The dashes mean that we have missing data for these points. We want to find the average of the number of features for these two days. We know that when there is missing data, it is quite unlikely that the true value is zero. So, the mean $ \frac {110 + 60}{2} = 85$ would not give a very accurate result. I can think of two approaches for this. a) We can take the average for each point over both days; if for one day there is no data, we can assume the value is the same as on the other day, i.e. for point $a: \frac{10 + 30}{2}= 20$, for point $b: \frac{20 + 20}{2} = 20, \ldots$ and then we can add all the values. So, we will end up with $$ 20 + 20 + 30 + 50 + 30 = 150.$$ b) The approach I prefer (but I am not sure about its correctness, or whether I am missing some crucial points) is to take a weighted average. Because on the first day we have more data, it seems more logical to me to give a larger weight to the sum for that day, i.e. the first day. So, we could say: $$\frac{4\cdot(10 + 20 + 30 + 50) + 2\cdot (30 + 30 )} {4+2} = 93.33,$$ which means we largely discount the second day, due to its large number of missing values. I would like to know which measure is considered more appropriate in this case, i.e. which method is likely to be more accurate, if that comparison makes sense at all. If there are more elegant methods to deal with such situations, you are more than welcome to suggest them.
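To make the comparison concrete, here is how the three estimates above work out in code (the `None` encoding of missing points is mine):

```python
# The two days' counts, with None marking missing points.
days = [
    {"a": 10, "b": 20, "c": 30, "d": 50, "e": None},
    {"a": 30, "b": None, "c": None, "d": None, "e": 30},
]

# Naive mean of the raw daily sums, ignoring missingness.
naive = sum(sum(v for v in day.values() if v is not None) for day in days) / len(days)

# Approach (a): average each point over the days where it was observed,
# then sum the per-point averages.
points = days[0].keys()
per_point = {
    p: sum(day[p] for day in days if day[p] is not None)
       / sum(1 for day in days if day[p] is not None)
    for p in points
}
approach_a = sum(per_point.values())

# Approach (b): weight each day's sum by its number of observed points.
weights = [sum(1 for v in day.values() if v is not None) for day in days]
sums = [sum(v for v in day.values() if v is not None) for day in days]
approach_b = sum(w * s for w, s in zip(weights, sums)) / sum(weights)
```

This reproduces the numbers above: 85 for the naive mean, 150 for approach (a), and 93.33 for approach (b).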
Let $G$ be a semisimple algebraic group over an algebraically closed field of arbitrary characteristic. (I'm most interested in the positive characteristic case). Let $B \subseteq G$ be a Borel subgroup. Let $\cal T$ denote the tangent bundle of the flag variety $G/B$ and let $\pi : {\cal T} \to G/B$ be the projection. For each integral weight $\lambda$ we have a line bundle $\cal L(\lambda)$ on $G/B$. A lot is known about the pullbacks of these bundles to the cotangent bundle of $G/B$ (see work of Broer, Kumar, Lauritzen, Thomsen, etc). For example, it is known which of these pullbacks to the cotangent bundle are ample, and there has been a series of papers studying the $G$-module structures of the global sections of these pullbacks. On the other hand, I haven't seen analogous results regarding the pullbacks $\pi^* \cal L(\lambda)$ of these bundles to the tangent bundle $\cal T$. To be more precise, I am most interested in knowing which of these pullbacks are ample. It would also be interesting to know if anyone has studied the $G$-module structure of $H^0( \cal T, \pi^* \cal L(\lambda) )$ for various $\lambda$.
Last time we studied meets and joins of partitions. We observed an interesting difference between the two. Suppose we have partitions \(P\) and \(Q\) of a set \(X\). To figure out if two elements \(x , x' \in X\) are in the same part of the meet \(P \wedge Q\), it's enough to know if they're in the same part of \(P\) and the same part of \(Q\), since $$ x \sim_{P \wedge Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ and } x \sim_Q x'. $$ Here \(x \sim_P x'\) means that \(x\) and \(x'\) are in the same part of \(P\), and so on. However, this does not work for the join! $$ \textbf{THIS IS FALSE: } \; x \sim_{P \vee Q} x' \textrm{ if and only if } x \sim_P x' \textrm{ or } x \sim_Q x' . $$ To understand this better, the key is to think about the "inclusion" $$ i : \{x,x'\} \to X , $$ that is, the function sending \(x\) and \(x'\) to themselves thought of as elements of \(X\). We'll soon see that any partition \(P\) of \(X\) can be "pulled back" to a partition \(i^{\ast}(P)\) on the little set \( \{x,x'\} \). And we'll see that our observation can be restated as follows: $$ i^{\ast}(P \wedge Q) = i^{\ast}(P) \wedge i^{\ast}(Q) $$ but $$ \textbf{THIS IS FALSE: } \; i^{\ast}(P \vee Q) = i^{\ast}(P) \vee i^{\ast}(Q) . $$ This is just a slicker way of saying the exact same thing. But it will turn out to be more illuminating! So how do we "pull back" a partition? Suppose we have any function \(f : X \to Y\). Given any partition \(P\) of \(Y\), we can "pull it back" along \(f\) and get a partition of \(X\) which we call \(f^{\ast}(P)\). Here's an example from the book: For any part \(S\) of \(P\) we can form the set of all elements of \(X\) that map to \(S\). This set is just the preimage of \(S\) under \(f\), which we met in Lecture 9. We called it $$ f^{\ast}(S) = \{x \in X: \; f(x) \in S \}. $$ As long as this set is nonempty, we include it in our partition \(f^{\ast}(P)\). 
So beware: we are now using the symbol \(f^{\ast}\) in two ways: for the preimage of a subset and for the pullback of a partition. But these two ways fit together quite nicely, so it'll be okay. Summarizing: Definition. Given a function \(f : X \to Y\) and a partition \(P\) of \(Y\), define the pullback of \(P\) along \(f\) to be this partition of \(X\): $$ f^{\ast}(P) = \{ f^{\ast}(S) : \; S \in P \text{ and } f^{\ast}(S) \ne \emptyset \} . $$ Puzzle 40. Show that \( f^{\ast}(P) \) really is a partition using the fact that \(P\) is. It's fun to prove this using properties of the preimage map \( f^{\ast} : P(Y) \to P(X) \). It's easy to tell if two elements of \(X\) are in the same part of \(f^{\ast}(P)\): just map them to \(Y\) and see if they land in the same part of \(P\). In other words, $$ x\sim_{f^{\ast}(P)} x' \textrm{ if and only if } f(x) \sim_P f(x') $$ Now for the main point: Proposition. Given a function \(f : X \to Y\) and partitions \(P\) and \(Q\) of \(Y\), we always have $$ f^{\ast}(P \wedge Q) = f^{\ast}(P) \wedge f^{\ast}(Q) $$ but sometimes we have $$ f^{\ast}(P \vee Q) \ne f^{\ast}(P) \vee f^{\ast}(Q) . $$ Proof. To prove that $$ f^{\ast}(P \wedge Q) = f^{\ast}(P) \wedge f^{\ast}(Q) $$ it's enough to prove that they give the same equivalence relation on \(X\). That is, it's enough to show $$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P) \wedge f^{\ast}(Q) } x'. $$ This looks scary but we just follow our nose. First we rewrite the right-hand side using our observation about the meet of partitions: $$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P)} x' \textrm{ and } x\sim_{f^{\ast}(Q) } x'. $$ Then we rewrite everything using what we just saw about the pullback: $$ f(x) \sim_{P \wedge Q} f(x') \textrm{ if and only if } f(x) \sim_P f(x') \textrm{ and } f(x) \sim_Q f(x'). $$ And this is true, by our observation about the meet of partitions! 
So, we're really just stating that observation in a new language. To prove that sometimes $$ f^{\ast}(P \vee Q) \ne f^{\ast}(P) \vee f^{\ast}(Q) , $$ we just need one example. So, take \(P\) and \(Q\) to be these two partitions: They are partitions of the set $$ Y = \{11, 12, 13, 21, 22, 23 \}. $$ Take \(X = \{11,22\} \) and let \(i : X \to Y \) be the inclusion of \(X\) into \(Y\), meaning that $$ i(11) = 11, \quad i(22) = 22 . $$ Then compute everything! \(11\) and \(22\) are in different parts of \(i^{\ast}(P)\): $$ i^{\ast}(P) = \{ \{11\}, \{22\} \} . $$ They're also in different parts of \(i^{\ast}(Q)\): $$ i^{\ast}(Q) = \{ \{11\}, \{22\} \} .$$ Thus, we have $$ i^{\ast}(P) \vee i^{\ast}(Q) = \{ \{11\}, \{22\} \} . $$ On the other hand, the join \(P \vee Q \) has just two parts: $$ P \vee Q = \{\{11,12,13,22,23\},\{21\}\} . $$ If you don't see why, figure out the finest partition that's coarser than \(P\) and \(Q\) - that's \(P \vee Q \). Since \(11\) and \(22\) are in the same parts here, the pullback \(i^{\ast} (P \vee Q) \) has just one part: $$ i^{\ast}(P \vee Q) = \{ \{11, 22 \} \} . $$ So, we have $$ i^{\ast}(P \vee Q) \ne i^{\ast}(P) \vee i^{\ast}(Q) $$ as desired. \( \quad \blacksquare \) Now for the real punchline. The example we just saw was the same as our example of a "generative effect" in Lecture 12. So, we have a new way of thinking about generative effects: the pullback of partitions preserves meets, but it may not preserve joins! This is an interesting feature of the logic of partitions. Next time we'll understand it more deeply by pondering left and right adjoints. But to warm up, you should compare how meets and joins work in the logic of subsets: Puzzle 41. Let \(f : X \to Y \) and let \(f^{\ast} : PY \to PX \) be the function sending any subset of \(Y\) to its preimage in \(X\). Given \(S,T \in P(Y) \), is it always true that $$ f^{\ast}(S \wedge T) = f^{\ast}(S) \wedge f^{\ast}(T ) ? 
$$ Is it always true that $$ f^{\ast}(S \vee T) = f^{\ast}(S) \vee f^{\ast}(T ) ? $$ To read other lectures go here.
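If you like to check such things by computer, here is a small Python sketch. The lecture's pictures of \(P\) and \(Q\) aren't reproduced here, so the partitions below are one concrete choice consistent with everything stated above (11 and 22 separated in both, and the stated join); the helper functions are my own encoding, not from the book.

```python
def pullback(f, P):
    """Pull back a partition P of Y along f : X -> Y (f given as a dict)."""
    preimages = {}
    for x, y in f.items():
        S = next(frozenset(part) for part in P if y in part)  # part containing f(x)
        preimages.setdefault(S, set()).add(x)
    return {frozenset(s) for s in preimages.values()}

def join(P, Q):
    """Finest partition coarser than both P and Q, via a simple union-find."""
    parent = {x: x for S in P for x in S}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for S in list(P) + list(Q):
        members = list(S)
        for a in members[1:]:
            parent[find(a)] = find(members[0])
    classes = {}
    for x in parent:
        classes.setdefault(find(x), set()).add(x)
    return {frozenset(s) for s in classes.values()}

# One concrete choice of P and Q consistent with the lecture's facts:
P = [{11, 12}, {13, 23}, {21}, {22}]
Q = [{11}, {12, 13}, {22, 23}, {21}]
i = {11: 11, 22: 22}                        # inclusion of X = {11, 22} into Y

joined = join(P, Q)                         # P v Q
lhs = pullback(i, joined)                   # i*(P v Q)
rhs = join(pullback(i, P), pullback(i, Q))  # i*(P) v i*(Q)
```

Here `lhs` comes out as the one-part partition \(\{\{11,22\}\}\) while `rhs` is \(\{\{11\},\{22\}\}\), exhibiting the failure of \(i^{\ast}\) to preserve joins.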
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint! 
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \). 
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51. 
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}} \). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy to modern computers. But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
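Both adjunctions are easy to verify exhaustively on a small example. Here is a sketch on the poset of subsets of \(\{1,2,3\}\) ordered by inclusion, where join is union and meet is intersection (the encoding with Python frozensets is mine):

```python
from itertools import combinations, product

# The poset of subsets of {1, 2, 3}, ordered by inclusion (<= on frozensets).
Y = {1, 2, 3}
elements = [frozenset(c) for r in range(len(Y) + 1)
            for c in combinations(sorted(Y), r)]

# Join (union) is left adjoint to the diagonal:
#   a v a' <= b  iff  a <= b and a' <= b, for all a, a', b.
join_adjunction = all(
    ((a | a2) <= b) == (a <= b and a2 <= b)
    for a, a2, b in product(elements, repeat=3)
)

# Meet (intersection) is right adjoint to the diagonal:
#   a <= b ^ b'  iff  a <= b and a <= b', for all a, b, b'.
meet_adjunction = all(
    (a <= (b & b2)) == (a <= b and a <= b2)
    for a, b, b2 in product(elements, repeat=3)
)
```

Both flags come out `True`: 512 triples each, all satisfying the adjunction condition.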
Tsukuba Journal of Mathematics Tsukuba J. Math. Volume 20, Number 2 (1996), 471-478. Proper n-homotopy equivalences of locally compact polyhedra Abstract We prove the following theorem, which is a locally compact analogue of results of S. Ferry and the author. Theorem. Let $f:X\rightarrow Y$ be a proper map between finite dimensional locally compact polyhedra $X$ and $Y$. Suppose that (1) $\pi_{i}(f):\pi_{i}(X)\rightarrow\pi_{i}(Y)$ is an isomorphism for each $i\leq n$, (2) $f$ induces a surjection between the ends of $X$ and $Y$, and (3) $f$ induces an isomorphism between the $i$-th homotopy groups of the ends of $X$ and $Y$ for each $i\leq n$. Then there exist a locally compact polyhedron $Z$ and proper $UV^{n}$-maps $\alpha:Z\rightarrow X$ and $\beta:Z\rightarrow Y$ such that (4) $\dim Z\leq 2\max(\dim X,n)+3$, (5) $f\circ\alpha$ and $\beta$ are properly $n$-homotopic, and (6) $\alpha$ has at most countably many non-contractible fibres, all of which have the homotopy type of $S^{n+1}$. Article information Source: Tsukuba J. Math., Volume 20, Number 2 (1996), 471-478. Dates: First available in Project Euclid: 30 May 2017. Permanent link to this document: https://projecteuclid.org/euclid.tkbjm/1496163095 Digital Object Identifier: doi:10.21099/tkbjm/1496163095 Mathematical Reviews number (MathSciNet): MR1422634 Zentralblatt MATH identifier: 0887.55012 Citation: Kawamura, Kazuhiro. Proper n-homotopy equivalences of locally compact polyhedra. Tsukuba J. Math. 20 (1996), no. 2, 471-478. doi:10.21099/tkbjm/1496163095.
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ... Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ... Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ... Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ... 
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ... $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ... Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ... Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ... Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
Let $R$ be a finite ring with unity. Prove that every nonzero element of $R$ is either a unit or a zero-divisor. In a finite commutative ring with unity, every element is either a unit or a zero-divisor. Indeed, let $a\in R$ and consider the map on $R$ given by $x \mapsto ax$. If this map is injective then it has to be surjective, because $R$ is finite. Hence, $1=ax$ for some $x\in R$ and $a$ is a unit. If the map is not injective then there are $u,v\in R$, with $u\ne v$, such that $au=av$. But then $a(u-v)=0$ and $u-v\ne0$ and so $a$ is a zero divisor. Your question is incomplete: you say you want to prove that every nonzero element of $R$ is "either a zero-divisor". If one inserts "a unit or" before "zero-divisor" then you get a true statement, so I'll assume for now that's what you meant. First, following a comment by Gerry Myerson on a recent related answer, let me divulge that for me zero is a zero-divisor. I claim that this is just a convention that you should be able to translate back from if you see fit. Next, note that if you have a family $\{R_i\}_{i \in I}$ of rings in which every element is either a unit or a zero-divisor, the same holds in the Cartesian product $R = \prod_{i \in I} R_i$. In your case you can use the structure theorem for Artinian rings: $R$ is a finite product of local Artinian rings -- to reduce to the case in which $R$ is local Artinian. Then the maximal ideal is nilpotent, so every nonunit is nilpotent and in particular a zero divisor. HINT $\rm\ |R|<\infty\ \Rightarrow\ r^j=r^k,\: j<k\ \Rightarrow\ r^j\:(1-r^{k-j})=0\ \Rightarrow\ 1 = r^{k-j}\: $ if $\rm\:r\:$ not a zero-divisor. NOTE $\ $ The idea generalizes: if a non-zero-divisor $\rm\:r\:$ is algebraic then it divides the least degree coefficient of any polynomial of which it is a root. When said coefficient is a unit then so too is $\rm\:r\:.\:$ Hence the result holds more generally for any ring satisfying a polynomial identity whose least degree coefficient is unit, e.g. 
for Jacobson's famous rings satisfying the identity $\rm\:X^n =\: X\:.$ Since one good cannonball deserves another, I'd like to provide a solution using right Artinian rings that aren't necessarily commutative. Definitions: A ring $R$ is called strongly $\pi$-regular if for all $x\in R$, chains of the form $xR\supseteq x^2R\supseteq x^3R\supseteq\dots \supseteq x^iR\supseteq\dots$ become stationary. A ring is called Dedekind finite if $xy=1$ implies $yx=1$ for all $x,y\in R$. Strongly $\pi$-regular rings were introduced by Kaplansky in the citation at the bottom. The definition is usually given in terms of "$\forall x\exists r(x^n=x^{n+1}r)$", but this is equivalent. Moreover, it's been shown that $r$ can be chosen to commute with $x$, and so the left-hand version of this definition is equivalent to this one. It's obvious right Artinian rings are strongly $\pi$-regular, and it turns out they are Dedekind finite too. Proposition: In a strongly $\pi$-regular Dedekind finite ring (in particular, right or left Artinian rings), each element is a unit or a zero divisor. (Zero being counted as a zero divisor.) Proof: Let $x\in R$ be a nonunit, and let $n$ be minimal such that there exists $r$ that commutes with $x$ and $x^n=x^{n+1}r$. Since $x$ isn't a unit, $n\geq 1$. (Because if $1=xr$, $x$ would be a unit by Dedekind finiteness.) Rearranging, we get $x(x^{n-1}-x^nr)=0=(x^{n-1}-x^nr)x$ since $r$ commutes with $x$. By minimality of $n$, $x^{n-1}-x^nr\neq 0$. Thus, $x$ has been demonstrated to be a two-sided zero divisor. I. Kaplansky, Topological representations of algebras II, Trans. Amer. Math. Soc. 68 (1950), 62-75. MR 11:317 Let $a$ in $R$ be non-zero and suppose that $a$ is not a zero-divisor. First I will prove the cancellation property just for $a$. If $ab = ac$, then $ab-ac = 0$ and $a(b-c) = 0$. Since $a$ is not a zero-divisor, then $b-c = 0$ so $b = c$. 
Consider the set $\{a^n\mid n \in\mathbb N\}=\{1,a^1,a^2,...\}$ Since $R$ is finite, we must have $a^i = a^j$ for some $i$, $j$ with $i \gt j$. Then since we have the cancellation property for $a$ and we have $a^{i-j}a^j = 1\cdot a^j$ (remember we have unity), cancellation gives us $a^{i-j} = 1$. If $a = 1$ then $a$ is clearly a unit. If $a\ne 1$, then $i-j \gt 1$ so we can factor out one copy of $a$ to get $a^{i-j-1}a^1 = 1$. Thus the element $a^{i-j-1}$ is the multiplicative inverse of $a$, so $a$ is a unit. Thus every nonzero element of this ring that is not a zero-divisor is a unit. In other words, every nonzero element is either a zero-divisor or a unit. If we drop the finiteness condition then the result does not hold. For example, $\mathbb Z$ is a commutative ring with unity, but $2$ is neither a zero-divisor nor a unit. Let $a\ne0$ and suppose $a$ is not a zero-divisor. Because $R$ is finite, the powers of $a$ must repeat: $a^j=a^k$ for some $j<k$, so $a^j(a^{k-j}-1)=0$. Since $a$ is not a zero-divisor, neither is $a^j$, so $a^{k-j}-1=0$, i.e. $a^{k-j}=1$. Hence $a\cdot a^{k-j-1}=1$, and $a$ is a unit.
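As a sanity check (not a proof), here is a brute-force classification of the commutative example $\mathbb Z/12\mathbb Z$; the function name is my own:

```python
def units_and_zero_divisors(n):
    """Classify the nonzero elements of the finite ring Z/nZ."""
    units = {a for a in range(1, n)
             if any(a * x % n == 1 for x in range(1, n))}
    zero_divisors = {a for a in range(1, n)
                     if any(a * x % n == 0 for x in range(1, n))}
    return units, zero_divisors

units, zds = units_and_zero_divisors(12)
```

For $n=12$ the units are $\{1,5,7,11\}$ and the zero-divisors are $\{2,3,4,6,8,9,10\}$: the two sets are disjoint and together cover every nonzero element, as the theorem predicts.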
Function examples A function is a mapping from a set of inputs (the domain) to a set of possible outputs (the codomain). The definition of a function is based on a set of ordered pairs, where the first element in each pair is from the domain and the second is from the codomain. But, a metaphor that makes the idea of a function easier to understand is the function machine, where an input $x$ from the domain $X$ is fed into the machine and the machine spits out the element $y=f(x)$ from the codomain $Y$. Below, the domain is visualized as a set of spheres and the codomain as a set of cubes, so that the function machine transforms spheres into cubes. We often think of a function as taking a number as its input and producing another number as its output. But, we could make a function machine that operates on different types of objects, so a function is in no way limited to numbers. To illustrate this fact, we start with examples that operate on objects other than numbers. Then, we turn to more traditional functions where the domain and codomain are sets of numbers. Example 1: The mother machine Let the set $X$ of possible inputs to a function (the domain) be the set of all people. For the purpose of making this example simple, we will assume all people have exactly one mother (i.e., we'll ignore the problem of the origin of our species and not worry about folks such as Adam and Eve). We are going to create a function $m$ from people to people, so let the set of possible outputs of our function (the codomain) also be the set $X$ of people. (We can write this using function notation as $m: X \to X$.) We define the function $m$ so that $m(x)$ is the mother of the person $x$ for all people $x \in X$ (confused?). If, for example, we put Martin Luther King, Jr. 
into our mother function, we would get $$m(\text{Martin Luther King, Jr.})=\text{Alberta Williams King}.$$ Or if we put in Madame Curie, we'd get $$m(\text{Marie Skłodowska-Curie})=\text{Bronisława Skłodowski}.$$ This function is a well-defined function, since we assume every element $x \in X$ is mapped via the function machine to a unique element $y \in X$, i.e., every person $x$ has exactly one mother $y$. Although the codomain is the set of all people $X$, it's clear that it will be impossible for this function to output certain people. There's no way the mother function $m$ could output any males, nor could it output any childless females. In other words, the range of the function $m$ is the set of female people who have had children, which is a proper subset of the set $X$ of all people. Example 2: Number of children A function can output objects of a completely different type than the inputs, as suggested by the above picture where spheres enter the function machine and cubes come out. We could define a function where the domain $X$ is again the set of people but the codomain is a set of numbers. For example, let the codomain $Y$ be the set of whole numbers and define the function $c$ so that for any person $x$, the function output $c(x)$ is the number of children of the person $x$. Since there is an upper limit on the number of children a person could possibly have, it's clear the range of $c$ is not the entire set $Y$ of whole numbers. Putting in the same people into the child number function, we'd obtain $c(\text{Martin Luther King, Jr.})=4$ and $c(\text{Marie Skłodowska-Curie})=2.$ Example 3: Symbols The domain and codomain of a function could be sets of any type of objects. For example, the domain could be the set $A = \{\bigcirc, \bigtriangleup, \bigstar,\square \}$ and the codomain could be the set $B=\{\Diamond, \bigstar, \square, \bigcirc, \circ \}$. 
We could define a function $f$ of the form $$f: \{\bigcirc, \bigtriangleup, \bigstar,\square \} \to \{\Diamond, \bigstar, \square, \bigcirc, \circ\}$$ that maps each of the four symbols in $A$ to one of the five symbols in $B$. We could define the function by $f(\bigcirc)=\Diamond$, $f(\bigtriangleup)= \square$, $f(\bigstar)= \square$, and $f(\square)=\bigstar$. (Equivalently, using the ordered pair definition we could define $f$ by the set of ordered pairs $\{(\bigcirc, \Diamond), (\bigtriangleup, \square ), (\bigstar, \square), (\square,\bigstar) \}$.) Since $f$ never maps onto the elements $\bigcirc$ or $\circ$ of the codomain, the range of the function is the set $\{\Diamond, \bigstar, \square \}$. Example 4: Algebraic formula We can also define a function using an algebraic formula, such as $f(x)=x^2+1$. Such algebraic formulas are the way many people think of functions, though, as the above examples show, such a formula is not required. To fully define a function, we need to specify the domain and codomain. If they are not specified, it is frequently safe to assume that both are the set of real numbers. So, if we simply refer to the function $f(x)=x^2+1$, we probably mean the function $f: \R \to \R$ where $f(x)=x^2+1$. Since $f(x) \ge 1$, the range is the subset of real numbers that are 1 or larger. We could define a different function $g: \mathbf{Z} \to \mathbf{Z}$ by $g(x)=x^2+1$, where $\mathbf{Z}$ is the set of integers. Since the function $g$ takes only integers as inputs and outputs only integers, it has a different domain and range than $f$. Thus, $g$ is a different function than $f$. However, in most cases, we won't need to worry about such differences. Even for a function specified by an algebraic formula such as $f(x)=x^2+1$, we can still think of the function in terms of its definition as a set of ordered pairs. The function $f$ has an infinite number of such ordered pairs $(x,f(x))$. 
The function $g$ also has an infinite number of ordered pairs $(x,g(x))$, but this set of ordered pairs is much smaller. Most ordered pairs in $f$, such as $(1/2,5/4)$, $(\sqrt{2},3)$, or $(\pi, \pi^2+1)$, are not in the set of ordered pairs for $g$. For functions whose input is a number and output is a number, we can visualize the set of ordered pairs in terms of its graph. There's nothing sacred about using the variable $x$ in the algebraic formula defining the function. We could have also defined the function by $f(t)=t^2+1$ or $f(\bigstar) = \bigstar^2+1$, and, assuming the domain and codomain are the real numbers, all formulas indicate the same function that can take a real number as an input, square that number, add 1, and give the result as the output. Example 5: Piecewise formula An algebraic formula for a function can be much more complicated than the simple example $f(x)=x^2+1$. Any formula that unambiguously assigns an element in the codomain for each element in the domain will define a function. For example, we can use the formula \begin{align*} p(x) = \begin{cases} -4 & \text{if } x \lt -1\\ 3x & \text{if } -1 \le x \lt 4\\ x^2-x & \text{if } x \ge 4 \end{cases} \end{align*} to define a function from the real numbers to the real numbers. For any input real number $x$, it first checks if $x \lt -1$ or if $-1 \le x \lt 4$ or if $x \ge 4$, and then it assigns an output using the respective formula. Since for any real number $x$, exactly one of those three conditions is satisfied, the formula unambiguously assigns a real output value $p(x)$ for each $x$. We refer to such a formula as a piecewise formula, as it breaks the domain into pieces and uses a separate formula for each piece. For this definition of $p$, we calculate that, for example, $p(-2) = -4$, $p(-1) = 3(-1)=-3$, and $p(10)=10^2-10=90$. Other examples As suggested by the function machine metaphor, there's an endless variety to the types of functions you could define. 
You just need to come up with a collection of objects for the input, a collection of objects for the possible outputs, and decide what the function machine will spit out for each input object. The input or output objects could even be sets containing many subparts. For example, one could make a function machine that requires both an integer $i$ and a person $p$ as inputs, adds the number $i$ to the number of children of person $p$, and spits out the result as its output. Or one could make a function machine that takes a person $p$ as its input and outputs two numbers: the number of male children and the number of female children of person $p$.
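The piecewise function $p$ from Example 5 translates directly into code. Here is a minimal Python sketch (an illustration, not part of the examples above):

```python
def p(x):
    """Piecewise formula: -4 for x < -1, 3x for -1 <= x < 4, x^2 - x for x >= 4."""
    if x < -1:
        return -4
    elif x < 4:          # here -1 <= x < 4
        return 3 * x
    else:                # here x >= 4
        return x ** 2 - x

print(p(-2), p(-1), p(10))  # → -4 -3 90
```

Exactly one branch fires for each real input, which is what makes the formula a well-defined function.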
Consider a heteroskedastic model of the form $y_i|x_i \sim \mathcal{N}\left(x_i, \exp\{\boldsymbol\beta^\top\boldsymbol{x}\}\right)$, where $\boldsymbol{\beta}=\left[\beta_0,\beta_1\right]$ and $\boldsymbol{x}=\left[1,x_i\right]$. Both $x_i$ and $\boldsymbol{\beta}$ must be estimated from the data. We set a multivariate normal prior on $\boldsymbol{\beta}$, namely $\boldsymbol{\beta}\sim \mathcal{N}\left(0,I\right)$, and try to derive the posterior $p\left(\boldsymbol{\beta}|y,\boldsymbol{x}\right)$ for the purpose of sampling. It is not possible to get the posterior in closed form, so we must resort to Metropolis-Hastings. For M-H to work we sample a new value $\boldsymbol{\beta}^*$ from a multivariate normal proposal density of the form $\boldsymbol{\beta}^*\sim\mathcal{N}\left(\boldsymbol{\beta}^{t},\Sigma\right)$, where $\boldsymbol{\beta}^t$ is the value from the current iteration. The question I have is: how does one choose $\Sigma$, the proposal variance, given that there are no predictors, only latent variables? This problem is similar to Poisson regression, except that, unlike in Poisson regression, we have unobserved (latent) variables instead of predictors. Hence, the proposal variance cannot be estimated from the data. Any references are appreciated.
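To make the setup concrete, here is a minimal random-walk Metropolis sketch in Python. The target `log_post` is a hypothetical stand-in (a simple correlated Gaussian), since the true posterior is not available in closed form; the proposal covariance is $\Sigma = s^2 I$ with a hand-tuned scale $s$:

```python
import numpy as np

def log_post(beta):
    # Hypothetical stand-in for the intractable log-posterior of beta:
    # a correlated bivariate Gaussian. In the question's model this would
    # be the N(0, I) log-prior plus the heteroskedastic log-likelihood.
    prec = np.array([[2.0, 0.5], [0.5, 1.0]])
    return -0.5 * beta @ prec @ beta

def rw_metropolis(log_post, n_iter=5000, scale=1.0, seed=0):
    rng = np.random.default_rng(seed)
    beta = np.zeros(2)                     # starting value beta^0
    lp = log_post(beta)
    samples, accepted = [], 0
    for _ in range(n_iter):
        prop = beta + scale * rng.standard_normal(2)   # Sigma = scale^2 * I
        lp_prop = log_post(prop)
        # Symmetric proposal, so the M-H ratio is just the density ratio.
        if np.log(rng.random()) < lp_prop - lp:
            beta, lp = prop, lp_prop
            accepted += 1
        samples.append(beta.copy())
    return np.array(samples), accepted / n_iter

samples, rate = rw_metropolis(log_post)
print(rate)  # acceptance rate; in practice, tune `scale` until this is moderate
```

A common rule of thumb is to tune the scale during burn-in so the acceptance rate lands somewhere in the 20-40% range; adaptive Metropolis schemes automate exactly this tuning.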
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse: Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that $$ g \circ f = 1_x \quad \textrm{ and } \quad f \circ g = 1_y. $$ I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\). Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse. Now we're ready for isomorphisms! Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\). Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)?
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism. We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\). Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions. Puzzle 145.
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\). So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them. One more example: Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism. This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \quad \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \quad \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147.
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples! Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
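To play with Puzzles 145 and 146 concretely, here's a tiny Python sketch (my own illustration) treating a morphism in \(\mathbf{Set}\) between finite sets as a dict, and checking that a candidate inverse really undoes it:

```python
# A function f : {1,2,3} -> {"a","b","c"} in Set, written as a dict.
f = {1: "a", 2: "b", 3: "c"}
# Because f is a bijection, flipping the pairs gives a candidate inverse g.
g = {value: key for key, value in f.items()}
# g ∘ f = identity on the domain, and f ∘ g = identity on the codomain:
assert all(g[f[x]] == x for x in f)
assert all(f[g[y]] == y for y in g)
print("f is an isomorphism with inverse g")
```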
What is the importance of eigenvalues/eigenvectors? Short Answer Eigenvectors make understanding linear transformations easy. They are the "axes" (directions) along which a linear transformation acts simply by "stretching/compressing" and/or "flipping"; eigenvalues give you the factors by which this compression occurs. The more directions you have along which you understand the behavior of a linear transformation, the easier it is to understand the linear transformation; so you want to have as many linearly independent eigenvectors as possible associated to a single linear transformation. Slightly Longer Answer There are a lot of problems that can be modeled with linear transformations, and the eigenvectors give very simple solutions. For example, consider the system of linear differential equations \begin{align*} \frac{dx}{dt} &= ax + by\\ \frac{dy}{dt} &= cx + dy. \end{align*} This kind of system arises when you describe, for example, the growth of the populations of two species that affect one another. For example, you might have that species $x$ is a predator on species $y$; the more $x$ you have, the fewer $y$ will be around to reproduce; but the fewer $y$ that are around, the less food there is for $x$, so fewer $x$s will reproduce; but then fewer $x$s are around so that takes pressure off $y$, which increases; but then there is more food for $x$, so $x$ increases; and so on and so forth. It also arises when you have certain physical phenomena, such as a particle in a moving fluid, where the velocity vector depends on the position along the fluid. Solving this system directly is complicated.
But suppose that you could do a change of variable so that instead of working with $x$ and $y$, you could work with $z$ and $w$ (which depend linearly on $x$ and $y$; that is, $z=\alpha x+\beta y$ for some constants $\alpha$ and $\beta$, and $w=\gamma x + \delta y$, for some constants $\gamma$ and $\delta$) and the system transformed into something like \begin{align*}\frac{dz}{dt} &= \kappa z\\ \frac{dw}{dt} &= \lambda w\end{align*} that is, you can "decouple" the system, so that now you are dealing with two independent functions. Then solving this problem becomes rather easy: $z=Ae^{\kappa t}$, and $w=Be^{\lambda t}$. Then you can use the formulas for $z$ and $w$ to find expressions for $x$ and $y$. Can this be done? Well, it amounts precisely to finding two linearly independent eigenvectors for the matrix $\left(\begin{array}{cc}a & b\\c & d\end{array}\right)$! $z$ and $w$ correspond to the eigenvectors, and $\kappa$ and $\lambda$ to the eigenvalues. By taking an expression that "mixes" $x$ and $y$, and "decoupling it" into one that acts independently on two different functions, the problem becomes a lot easier. That is the essence of what one hopes to do with the eigenvectors and eigenvalues: "decouple" the ways in which the linear transformation acts into a number of independent actions along separate "directions" that can be dealt with independently. A lot of problems come down to figuring out these "lines of independent action", and understanding them can really help you figure out what the matrix/linear transformation is "really" doing. A short explanation Consider a matrix $A$, for example one representing a physical transformation (e.g. rotation). When this matrix is used to transform a given vector $x$, the result is $y = A x$. Now an interesting question is: are there any vectors $x$ that do not change direction under this transformation, but only have their magnitude scaled by some factor $\lambda$?
Such a question is of the form $$A x = \lambda x. $$ So, such special $x$ are called eigenvectors, and the change in magnitude depends on the eigenvalue $\lambda$. The behaviour of a linear transformation can be obscured by the choice of basis. For some transformations, this behaviour can be made clear by choosing a basis of eigenvectors: the linear transformation is then a (non-uniform in general) scaling along the directions of the eigenvectors. The eigenvalues are the scale factors. I think if you want a better answer, you need to tell us more precisely what you may have in mind: are you interested in theoretical aspects of eigenvalues; do you have a specific application in mind? Matrices by themselves are just arrays of numbers, which take meaning once you set up a context. Without the context, it seems difficult to give you a good answer. If you use matrices to describe adjacency relations, then eigenvalues/vectors may mean one thing; if you use them to represent linear maps, something else; etc. One possible application: In some cases, you may be able to diagonalize your matrix $M$ using the eigenvalues, which gives you a nice expression for $M^k$. Specifically, you may be able to decompose your matrix into a product $SDS^{-1}$, where $D$ is diagonal, with entries the eigenvalues, and $S$ is the matrix with the associated respective eigenvectors as columns. I hope it is not a problem to post this as a comment. I got a couple of Courics here last time for posting a comment in the answer site. Mr. Arturo: Interesting approach! This seems to connect with the theory of characteristic curves in PDEs (who knows if it can be generalized to dimensions higher than 1), which are curves along which a PDE becomes an ODE, i.e., as you so brilliantly said, curves along which the PDE decouples. When you apply transformations to the systems/objects represented by matrices and need certain characteristics of these matrices, you have to calculate eigenvectors (eigenvalues).
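As a concrete illustration of the diagonalization remark above, here is a short NumPy sketch (the matrix is just an example I picked with real, distinct eigenvalues):

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [2.0, 3.0]])
vals, S = np.linalg.eig(M)      # eigenvalues and matrix of eigenvector columns
D = np.diag(vals)
# M = S D S^{-1}: M is diagonal in the basis of its eigenvectors.
assert np.allclose(S @ D @ np.linalg.inv(S), M)
# Powers become easy: M^5 = S D^5 S^{-1}, since only the diagonal gets powered.
M5 = S @ np.diag(vals ** 5) @ np.linalg.inv(S)
assert np.allclose(M5, np.linalg.matrix_power(M, 5))
print(np.sort(vals))  # the eigenvalues of M, approximately 2 and 5
```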
"Having an eigenvalue is an accidental property of a real matrix (since it may fail to have an eigenvalue), but every complex matrix has an eigenvalue." (Wikipedia) Eigenvalues characterize important properties of linear transformations, such as whether a system of linear equations has a unique solution or not. In many applications eigenvalues also describe physical properties of a mathematical model. Some important applications - Principal Components Analysis (PCA) in object/image recognition; Physics - stability analysis, the physics of rotating bodies; Market risk analysis - to determine if a matrix is positive definite; PageRank from Google. In data analysis, the eigenvectors of a covariance (or correlation) matrix are usually calculated. Eigenvectors form the most efficient set of basis functions for describing data variability. They also give the coordinate system in which the covariance matrix becomes diagonal, so that the new variables referenced to this coordinate system are uncorrelated. The eigenvalues are a measure of the data variance explained by each of the new coordinate axes. They are used to reduce the dimension of large data sets by selecting only a few modes with significant eigenvalues, and to find new variables that are uncorrelated; very helpful for least-squares regressions of badly conditioned systems. It should be noted that the link between these statistical modes and the true dynamical modes of a system is not always straightforward because of sampling problems. Let's Go Back to the Historical Background to get the motivation! Consider the linear transformation $T: V \to V$. Given a basis of $V$, $T$ is characterized beautifully by a matrix $A$, which tells everything about $T$. The bad part about $A$ is that "$A$ changes with the change of basis of $V$". Why is that bad? Because the same linear transformation, for two different choices of basis, is given by two distinct matrices. Believe me!
You cannot relate the two matrices just by looking at their entries. Really not interesting! Intuitively, though, there exists some strong relation between two such matrices. Our aim is to capture the thing they have in common (mathematically). Now eigenvalues give a necessary condition to check, though not a sufficient one! Let me make my statement clear. "The invariance of a subspace of $V$ under a linear transformation of $V$" is such a property. That is, if $A$ and $B$ represent the same $T$, then they must have all their eigenvalues equal. But the converse is not true! Hence not sufficient, but necessary. And the relation is "$A$ is similar to $B$", i.e. $A=PBP^{-1}$ where $P$ is a non-singular matrix. I would like to direct you to an answer that I posted here: Importance of eigenvalues I feel it is a nice example to motivate students who ask this question, in fact I wish it were asked more often. Personally, I hold such students in very high regard. An eigenvector $v$ of a matrix $A$ is a direction unchanged by the linear transformation: $Av=\lambda v$. An eigenvalue of a matrix is unchanged by a change of coordinates: $\lambda v =Av \Rightarrow \lambda (Bu) = A(Bu)$ for $v = Bu$. These are important invariants of linear transformations. This made it clear for me: https://www.youtube.com/watch?v=PFDu9oVAE-g There are a lot of other linear algebra videos from 3Blue1Brown as well.
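Tying together the covariance-matrix comments above, here is a tiny NumPy sketch (synthetic data, my own example) showing that the top eigenvalue of a covariance matrix captures most of the variance of strongly correlated variables:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = 2 * x + 0.1 * rng.standard_normal(500)   # y is almost a multiple of x
cov = np.cov(np.vstack([x, y]))              # 2x2 sample covariance matrix
vals, vecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
share = vals[-1] / vals.sum()
print(share)  # close to 1: one principal direction explains nearly all variance
```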
User:Nikita2 Pages to which I am contributing and which I am watching: Analytic function | Cauchy criterion | Cauchy integral | Condition number | Continuous function | D'Alembert criterion (convergence of series) | Dedekind criterion (convergence of series) | Derivative | Dini theorem | Dirichlet-function | Ermakov convergence criterion | Extension of an operator | Fourier transform | Friedrichs inequality | Fubini theorem | Function | Functional | Generalized derivative | Generalized function | Geometric progression | Hahn-Banach theorem | Harmonic series | Hilbert transform | Hölder inequality | Lebesgue integral | Lebesgue measure | Leibniz criterion | Leibniz series | Lipschitz Function | Lipschitz condition | Luzin-N-property | Newton-Leibniz formula | Newton potential | Operator | Poincaré inequality | Pseudo-metric | Raabe criterion | Riemann integral | Series | Sobolev space | Vitali theorem TeXing I'm keen on improving the appearance of EoM articles by rewriting formulas and math symbols in TeX. Now there are 2325 (out of 15,890) articles with the Category:TeX done tag. Just type $\sum_{n=1}^{\infty}n!z^n$ $\quad \rightarrow \quad$ $\sum_{n=1}^{\infty}n!z^n$. Today you may look at Category:TeX wanted. How to Cite This Entry: Nikita2. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Nikita2&oldid=35213
The Guinier approximation The study of the shape and size distribution of nanoparticles is a traditional field of application of SAXS. The scattering intensity of a spherical particle of radius $R$ and homogeneous scattering length density $\Delta\rho$ inside it, for example, can be calculated as $$I_\mathrm{sphere}(q)=\Delta\rho^2 V_\mathrm{sphere}^2\left[\frac{3}{(qR)^3}\left(\sin\left(qR\right)-qR\cos\left(qR\right)\right)\right]^2\textrm{,}$$ where $V_\mathrm{sphere}$ is the volume of the particle. The scattering curves of spheres with different radii are plotted in the figure below. The scattering curve starts with a slight bend at small $q$ values, and levels off to a constant value as $q \to 0$. This constant is $I(0)=V_\mathrm{sphere}^2\Delta\rho^2$, i.e. the square of the total scattering length (roughly speaking, the total number of electrons) inside the particle. This enables one to determine an average molecular weight if our approach is extended to solutions of globular macromolecules. The small-$q$ limit of the scattering curve is a Gaussian curve, centered at the origin: $$I(q \gtrapprox 0) = I(0) e^{-\frac{q^2R^2}{5}}\textrm{.}$$ This formula can be generalized to any form of scattering particle, including disks, rods, and even non-homogeneous scattering length densities. It can be proven that the above equation, the so-called Guinier approximation, holds in this case, too, in the following form: $$I(q \gtrapprox 0) = I(0) e^{-\frac{q^2R_G^2}{3}}\textrm{.}$$ The only difference is that instead of $R$, we have $R_G$, the radius of gyration or Guinier radius, which is defined for an arbitrary scattering particle as $$R_G=\sqrt{\frac{\iiint \Delta\rho(\vec{r})r^2\mathrm{d}\!\vec{r}}{\iiint \Delta\rho(\vec{r})\mathrm{d}\!\vec{r}}}\textrm{,}$$ where the domain of the integrals is the whole irradiated volume. This complex formula reduces to much simpler forms once the shape of the scattering particle is known.
For instance in the case of spheres, $R_G=\sqrt{3/5}R$ holds.
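A quick numerical check of the sphere case (a small Python sketch; the radius and units are arbitrary):

```python
import math

def i_sphere(q, R):
    # Normalized sphere form factor [3 (sin(qR) - qR cos(qR)) / (qR)^3]^2,
    # i.e. the equation above with Delta-rho and V_sphere set to 1.
    x = q * R
    return (3.0 * (math.sin(x) - x * math.cos(x)) / x ** 3) ** 2

R = 5.0                        # sphere radius, arbitrary units
Rg = math.sqrt(3.0 / 5.0) * R  # radius of gyration: R_G = sqrt(3/5) R
q = 0.01                       # small q, where the Guinier regime holds
exact = i_sphere(q, R)
guinier = math.exp(-(q * Rg) ** 2 / 3.0)
rel_err = abs(exact - guinier) / exact
print(rel_err)  # tiny relative error: the Gaussian matches the exact curve
```

Note that with $R_G^2 = \frac{3}{5}R^2$, the exponent $-q^2R_G^2/3$ reduces exactly to the $-q^2R^2/5$ quoted for spheres above.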
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint!
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A \times A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}}\). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy: to modern computers: But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
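If you like to check such facts by brute force, here's a little Python sketch verifying the adjunction \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \) in the powerset of a 3-element set, where \( \le \) is inclusion and \( \vee \) is union (my own illustration - no substitute for the proof in Puzzle 45, of course!):

```python
from itertools import chain, combinations

# All 8 subsets of {1, 2, 3}, as frozensets (so <= means "is a subset of").
universe = {1, 2, 3}
subsets = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(universe), r) for r in range(len(universe) + 1))]

# Check: (a ∨ a') ≤ b  iff  a ≤ b and a' ≤ b, with ∨ as union.
for a in subsets:
    for a2 in subsets:
        for b in subsets:
            assert ((a | a2) <= b) == (a <= b and a2 <= b)
print("adjunction checked on all", len(subsets) ** 3, "triples")
```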
Revision as of 17:12, 27 August 2014

derived numbers

A concept in the theory of functions of a real variable. The upper right-hand Dini derivative $\Lambda_d$ is defined to be the limes superior of the quotient $(f(x_1)-f(x))/(x_1-x)$ as $x_1\to x$, where $x_1>x$. The lower right-hand $\lambda_d$, the upper left-hand $\Lambda_g$, and the lower left-hand Dini derivative $\lambda_g$ are defined analogously. If $\Lambda_d=\lambda_d$ ($\Lambda_g=\lambda_g$), then $f$ has at the point $x$ a one-sided right-hand (left-hand) Dini derivative. The ordinary derivative exists if all four Dini derivatives coincide. Dini derivatives were introduced by U. Dini [1]. As N.N. Luzin showed, if all four Dini derivatives are finite on a set, then the function has an ordinary derivative almost-everywhere on that set.

References

[1] U. Dini, "Grundlagen für eine Theorie der Funktionen einer veränderlichen reellen Grösse", Teubner (1892) (Translated from Italian)
[2] S. Saks, "Theory of the integral", Hafner (1952) (Translated from French)

Comments

The Dini derivatives are also called the Dini derivates, and are frequently denoted also by $D^+f(x)$, $D_+f(x)$, $D^-f(x)$, $D_-f(x)$.

How to Cite This Entry: Dini derivative. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Dini_derivative&oldid=13670 This article was adapted from an original article by T.P. Lukashenko (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
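To get a feel for the four Dini derivatives, one can probe the one-sided difference quotients numerically. A small Python sketch (an illustration only, not part of the article) for $f(x)=|x|$ at $x=0$:

```python
# One-sided difference quotients (f(x1) - f(x)) / (x1 - x) for f(x) = |x|
# at x = 0, with x1 approaching 0 from the right and from the left.
f = abs
hs = [10.0 ** (-k) for k in range(1, 8)]
right = [(f(0.0 + h) - f(0.0)) / h for h in hs]      # x1 -> 0 from the right
left = [(f(0.0 - h) - f(0.0)) / (-h) for h in hs]    # x1 -> 0 from the left
print(right)  # all 1.0:  upper and lower right-hand Dini derivatives equal 1
print(left)   # all -1.0: upper and lower left-hand Dini derivatives equal -1
```

Since the right-hand quotients agree and the left-hand quotients agree, $|x|$ has both one-sided Dini derivatives at 0; they differ, so the ordinary derivative does not exist there.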
Bragg peaks

If some sort of periodicity on the resolvable size range is present in the sample under investigation, usually many orders of relatively sharp peaks manifest themselves in the scattering pattern. This can be understood as a special case of scattering, the so-called Bragg-reflection principle. According to the classic description in wide-angle X-ray diffraction, radiation coming from the X-ray source is reflected by crystal planes and is detected by a single-channel detector placed far from the sample in such a way that the incidence and exit angles are the same (left part of the figure above). If the optical path difference between rays reflected from subsequent layers is an integer multiple of the wavelength, the two rays arrive at the detector in phase, and constructive interference occurs. This is formalized by the Bragg equation: $$2 d \sin{\theta} = n\lambda \quad \textrm{where}\quad n \in \mathbb{N}\textrm{,}$$ $d$ being the periodicity of the crystal, $\theta$ the incidence and exit angle and $\lambda$ the wavelength. This formula can be expressed in a simpler form by using the $q$ scattering variable: $$q = \frac{2\pi n}{d} \quad \textrm{where}\quad n \in \mathbb{N}\textrm{.}$$ The direct result of these equations is the occurrence of sharp peaks in the diffractograms of crystals, found at those $2\theta$ scattering angles where the criterion is fulfilled. The right side of the figure above symbolizes that to fulfill the Bragg criterion one does not need crystalline order, just some kind of discrete translational symmetry. For example, equidistantly ordered lamellae, as found in silver behenate or multilamellar phospholipid vesicles, can also yield Bragg-like peaks in the scattering curve. Silver behenate contains such lamellae, spaced ~5.8 nm apart, and as a consequence, Bragg peaks are manifested in its small-angle scattering curve.
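As a quick numerical illustration of the relation $q=2\pi n/d$ (a sketch, not part of the original text; the ~5.8 nm silver behenate spacing is taken from the paragraph above):

```python
import math

def bragg_peak_positions(d_nm, orders):
    """Peak positions q_n = 2*pi*n/d for a repeat distance d given in nm."""
    return [2 * math.pi * n / d_nm for n in orders]

# Silver behenate lamellae, d ~ 5.8 nm: equally spaced peaks near
# q ~ 1.08, 2.17, 3.25 1/nm
peaks = bragg_peak_positions(5.8, range(1, 4))
```

The equal spacing of the peak positions in $q$ is the signature of a one-dimensional lamellar repeat, as opposed to a three-dimensional crystal, whose peaks follow the full set of lattice reflections.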
Answer To feed 200 people we need: 35.7 lb of meat, 50 lb of potatoes, and 21.4 lb of carrots. Work Step by Step We know that 25 lb of meat fed 140 people. To find out how much meat is needed to feed 200 people, we set up a proportion: $\displaystyle \frac{25~lb}{140~people}=\frac{x}{200~people}$ Cross-multiply: $25*200=x*140$ $5000=x*140$ $x=5000/140$ $x\approx35.7$ lb of meat We use the same method with 35 lb of potatoes: $\displaystyle \frac{35~lb}{140~people}=\frac{x}{200~people}$ Cross-multiply: $35*200=x*140$ $7000=x*140$ $x=7000/140$ $x\approx 50$ lb of potatoes We also use the same method with 15 lb of carrots: $\displaystyle \frac{15~lb}{140~people}=\frac{x}{200~people}$ Cross-multiply: $15*200=x*140$ $3000=x*140$ $x=3000/140$ $x\approx 21.4$ lb of carrots So to feed 200 people we need: 35.7 lb of meat, 50 lb of potatoes, and 21.4 lb of carrots.
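Since all three ingredients scale by the same ratio $200/140$, the three proportions can be sketched with one helper function (illustrative code, using the quantities from the problem):

```python
def scale(amount_lb, people_fed, people_target):
    """Scale an ingredient amount by the ratio of people to be served."""
    return amount_lb * people_target / people_fed

# 25 lb meat, 35 lb potatoes, 15 lb carrots originally fed 140 people
for name, lb in [("meat", 25), ("potatoes", 35), ("carrots", 15)]:
    print(f"{name}: {scale(lb, 140, 200):.1f} lb")  # 35.7, 50.0, 21.4
```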
Here is a much simpler proof from Special Functions and Their Applications by N. N. Lebedev. We begin with Legendre's differential equation\begin{equation}[(1-x^{2})P^{\prime}_{n}(x)]^{\prime} +n(n+1)P_{n}(x) = 0,\quad n \in \mathbb{Z}_{0}^{+}\label{eq:lp1}\tag{1}\end{equation} The first step is to multiply equation \eqref{eq:lp1} by $P_{m}(x)$ and subtract it from equation \eqref{eq:lp1} written for $m$ and multiplied by $P_{n}(x)$. \begin{equation}[(1-x^{2})P^{\prime}_{m}(x)]^{\prime}P_{n}(x) \,-\, [(1-x^{2})P^{\prime}_{n}(x)]^{\prime}P_{m}(x) + [m(m+1)-n(n+1)]P_{m}(x)P_{n}(x) = 0\end{equation} Rearrangement yields\begin{equation}\{(1-x^{2})[P^{\prime}_{m}(x)P_{n}(x)-P^{\prime}_{n}(x)P_{m}(x)]\}^{\prime} + (m-n)(m+n+1)P_{m}(x)P_{n}(x) = 0\label{eq:lp2}\tag{2}\end{equation} Integrating equation \eqref{eq:lp2} from -1 to 1, the first term goes to 0 and we are left with\begin{equation}(m-n)(m+n+1) \int\limits_{-1}^{1} P_{m}(x)P_{n}(x) dx = 0\end{equation}or\begin{equation}\int\limits_{-1}^{1} P_{m}(x)P_{n}(x) dx = 0, \quad m \ne n\end{equation}
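The orthogonality relation is easy to check numerically; a sketch using NumPy's Gauss-Legendre quadrature, which is exact here because the integrands are polynomials of low degree:

```python
import numpy as np
from numpy.polynomial import legendre

# 20-point Gauss-Legendre rule: exact for polynomial integrands up to degree 39
nodes, weights = legendre.leggauss(20)

def P(n, x):
    """Evaluate the Legendre polynomial P_n at x."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return legendre.legval(x, coeffs)

def inner(m, n):
    """Integral of P_m * P_n over [-1, 1] via quadrature."""
    return float(np.sum(weights * P(m, nodes) * P(n, nodes)))

# inner(m, n) vanishes for m != n, while inner(n, n) equals 2/(2n+1)
```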
Suppose $X_1$ and $X_2$ are independent $N(0,1)$ variables. Define $$Y_1=X_1\,\text{sign}(X_2)\quad,\quad Y_2=X_2\,\text{sign}(X_1)$$ I have to show that $(Y_1,Y_2)$ is not bivariate normal and find the correlation between $Y_1$ and $Y_2$. It is easy to see that both $Y_1$ and $Y_2$ are themselves univariate normal. And the joint distribution function of $(Y_1,Y_2)$ can be derived to show that it is not jointly normal. But I am wondering if there is an alternate way to see this result, i.e. without explicitly finding the distribution of $(Y_1,Y_2)$ can I justify that the distribution is not jointly normal? I am asking this because I think I do not require the joint distribution of $(Y_1,Y_2)$ to find the correlation. Since $$Y_1=\begin{cases}X_1&,\text{ if }X_2>0\\-X_1&,\text{ if }X_2<0\end{cases}\quad\text{ and }\quad Y_2=\begin{cases}X_2&,\text{ if }X_1>0\\-X_2&,\text{ if }X_1<0\end{cases}$$ I have $$Y_1Y_2=\begin{cases}X_1X_2&,\text{ if }X_1X_2>0\\-X_1X_2&,\text{ if }X_1X_2<0\end{cases}$$ so that $$E(Y_1Y_2)=E(|X_1X_2|)=E(|X_1|)E(|X_2|)=\frac{2}{\pi},$$ thus implying that the correlation is $2/\pi$.
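Both parts can be checked by simulation. Note that $Y_1Y_2=|X_1X_2|\ge 0$ with probability one, so $(Y_1,Y_2)$ never lands in the second or fourth quadrant; a genuinely bivariate normal pair with correlation $2/\pi<1$ would do so with positive probability, which is another way to see the joint distribution is not normal. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal((2, 200_000))

y1 = x1 * np.sign(x2)
y2 = x2 * np.sign(x1)

sample_corr = np.corrcoef(y1, y2)[0, 1]       # close to 2/pi ~ 0.6366
opposite_quadrant = int(np.sum(y1 * y2 < 0))  # 0: Y1 and Y2 always share a sign
```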
I understand that in general the wave function can take complex numbers, however when talking about combining atomic orbitals to form molecular orbitals we talk about the phase of the orbital being positive or negative to create constructive or destructive interference. It seems as if we have lost the aspect of the wave function taking on complex phase; what happened to it? I understand that in general the wave function can take complex numbers [...] It is not that it can, rather it should: by the very postulates of quantum mechanics the wave function is a complex-valued function. So, both the spin orbitals (which are one-electron wave functions) and the Slater determinant built out of them are in principle complex-valued functions. However, in solving the HF equations by the SCF procedure it is quite typical to impose some constraints on the spin orbitals, for instance, to restrict them to be real-valued functions rather than complex-valued ones. As with any other constraint this restriction might lead to what are called SCF instabilities, i.e. situations when relaxing the constraints leads to a different variational solution of lower energy. To elaborate a bit more, let us mention a few other well-known (as well as lesser-known) constraints for the SCF procedure: The infamous restricted Hartree-Fock (RHF) model with the requirement that spin orbitals come in pairs: two spin orbitals corresponding to two different pure spin states are constructed from the same spatial orbital: $$ \begin{aligned} \psi_{2i-1}(1) &= \phi_{i}(1) \alpha(1) \, , \\ \psi_{2i}(1) &= \phi_{i}(1) \beta(1) \, . \end{aligned} $$ In the unrestricted Hartree-Fock (UHF) model the requirement above is relaxed and we exclusively use spatial orbitals from one set to construct $\alpha$ spin orbitals and spatial orbitals from another set to construct $\beta$ spin orbitals: $$ \begin{aligned} \psi_{2i-1}(1) &= \phi_{i}^{\alpha}(1) \alpha(1) \, , \\ \psi_{2i}(1) &= \phi_{i}^{\beta}(1) \beta(1) \, .
\end{aligned} $$ A bit less known is the fact that the unrestricted Hartree-Fock method is not so unrestricted. In fact, there is a constraint here: each and every spin orbital describes an electron in a pure spin state, either $\alpha$ or $\beta$, while in general (in accordance with the postulates of quantum mechanics) an electron can be in a superposition of these states: $$ \psi_{i}(1) = \phi_{i}^{\alpha}(1) \alpha(1) + \phi_{i}^{\beta}(1) \beta(1) $$ This most general setting is known as the general Hartree-Fock (GHF) method. And all three methods (RHF, UHF, GHF) exist in two variants: the real one, in which spin orbitals are additionally required to be real-valued functions, and the more general complex one. All this gives rise to six variants of the HF method with many possible instabilities between them, discussed in some detail in the seminal paper by Schlegel & McDouall 1: There is also a very similar earlier paper by Seeger & Pople, 2 which sadly is not freely available. It's important to realize that any of the constraints mentioned above can only raise the electronic energy: with a great deal of certainty, we may expect that if any constraint is relaxed, the variational procedure will result in a lower energy due to greater variational freedom. Thus, for instance, for the usual real variants of the Hartree-Fock method we have $$ E_\mathrm{e}(\mathrm{RGHF}) \leq E_\mathrm{e}(\mathrm{RUHF}) \leq E_\mathrm{e}(\mathrm{RRHF}) \, . $$ The same, of course, is true for the corresponding complex variants of these methods, $$ E_\mathrm{e}(\mathrm{CGHF}) \leq E_\mathrm{e}(\mathrm{CUHF}) \leq E_\mathrm{e}(\mathrm{CRHF}) \, . $$ And for any of the three formalisms $$ E_\mathrm{e}(\mathrm{C}x\mathrm{HF}) \leq E_\mathrm{e}(\mathrm{R}x\mathrm{HF}) \, , \quad \text{where} \quad x = \mathrm{G}, \mathrm{U}, \mathrm{R} \, .
$$ RHF/UHF/GHF and correct spin symmetry The possible rise of electronic energy that accompanies the introduction of more and more constraints in the GHF -> UHF -> RHF sequence seems to be quite contrary to the goal of the variational method. However, the UHF and RHF constraints are nothing but symmetry constraints: they arise by requiring an approximate electronic wave function to have the same spin symmetry as the exact non-relativistic one, i.e. to be an eigenfunction of the spin operators, the total spin-squared operator $\hat{S}^2$ and the $z$-component of the total spin operator $\hat{S}_{z}$. With respect to the $\hat{S}_{z}$ operator it is well known that any Slater determinant built out of spin orbitals corresponding to pure spin states is an eigenfunction of $\hat{S}_{z}$. Thus, RHF and UHF wave functions are eigenfunctions of $\hat{S}_{z}$, but the GHF wave function is not. The situation with $\hat{S}^{2}$ is a bit more complicated, but all in all only in the RHF (but not the UHF or GHF) formalism is it possible to construct an approximate electronic wave function which is an eigenfunction of $\hat{S}^{2}$. 3 To conclude, as Löwdin pointed out, we always face a dilemma here: should we seek a solution that is a true variational minimum, or should we seek a solution with the correct spin symmetry? 1) H. B. Schlegel and J. J. W. McDouall, Do You Have SCF Stability and Convergence Problems? in Computational Advances in Organic Chemistry: Molecular Structure and Reactivity, Springer Netherlands, 1991, pp. 167-185. DOI: 10.1007/978-94-011-3262-6_2. Free PDF from wayne.edu. 2) Seeger, R., & Pople, J. A., Self‐consistent molecular orbital methods. XVIII. Constraints and stability in Hartree–Fock theory. The Journal of Chemical Physics, 66(7), 1977, 3045-3050. DOI: 10.1063/1.434318.
3) This can be done even for an open-shell electron configuration, although it would require a linear combination of Slater determinants with constant coefficients, none of which alone is an eigenfunction of $\hat{S}^{2}$.
For discussion of specific patterns or specific families of patterns, both newly-discovered and well-known. gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm Kazyan wrote: Component found in a CatForce result: Code: Select all x = 42, y = 67, rule = LifeHistory A$.2A$2A5$7.A$8.2A$7.2A19$24.2A$24.2A2$39.A$37.A3.A$36.A$36.A4.A$36. 5A$14.2A.2D$13.A.AD.D$13.A$12.2A25$5.3A$7.A$6.A! That can be done with 4 gliders, although it's still interesting that it was found accidentally: Code: Select all x = 21, y = 30, rule = B3/S23 10b2o$11bo$11bobo$12b2o14$10bo4bo$10b2ob2o$9bobo2b2o8$2o17bo$b2o15b2o$ o17bobo! What were you looking for, exactly? A MWSS-to-herschel converter? Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm gmc_nxtman wrote:What were you looking for, exactly? A MWSS-to-herschel converter? I'd settle for any signal, but yes. The current Orthogonoids have geometry challenges that pad their size, and the limiting factor in their repeat time is the syringe. Repeat time is more important for single-channel operations than probably any other constructor design, so I'm trying to give that fire some better fuel. Tanner Jacobi mniemiec Posts: 1055 Joined: June 1st, 2013, 12:00 am gmc_nxtman wrote:4-glider trans-boat with tail edgeshoot: ... Even though it was already buildable from 4 gliders, this method improves syntheses of one still-life and 18 pseudo-objects. Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm Potential component spotted in a failed eating reaction: Code: Select all x = 21, y = 17, rule = B3/S23 o$3o$3bo$2b2o2$6bo$5bobo2$5b3o$19bo$8bo9bo$18b3o3$15bo$14b2o$14bobo! Tanner Jacobi gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm Unusual still life in 8 gliders: Code: Select all x = 18, y = 26, rule = B3/S23 11bo$10bobo$10b2o2$10bo$9b2o$9bobo5$obo$b2o$bo2$3b2o$4b2o$3bo$9bo$9b2o $8bobo2$15b3o$7b2o6bo$6bobo7bo$8bo! EDIT: This also gives 21.41458 in 9 gliders.
Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm Potentially grow out a BTS into a structure like a snorkel loop: Code: Select all x = 15, y = 25, rule = B3/S23 2b2obo$3bob3o$bobo4bo$ob2ob2obo$o4bobo$b3obo$3bob2o3$10b3o2$8bo5bo$8bo 5bo$8bo5bo2$10b3o6$11b2o$10bo2bo$11b2o$11bo! I suspect that the drifter catalyst and its variants also have odd transformations, since both objects are robust. Tanner Jacobi gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm Haven't seen a component quite like this before: Code: Select all x = 27, y = 18, rule = B3/S23 20bobo$20b2o$21bo7$15bo$15bobo$15b2o2$3o9bobo$b3o9b2o$13bo10b3o$24bo$ 25bo! EDIT: Better version: Code: Select all x = 15, y = 11, rule = B3/S23 13bo$12bo$12b3o3$7bo$6bobo$6bobo2b2o$7bo2b2o$3o9bo$b3o! Gamedziner Posts: 796 Joined: May 30th, 2016, 8:47 pm Location: Milky Way Galaxy: Planet Earth p8 c/2 derived from blinker puffer 1 : Code: Select all 2bo$o3bo$5bo$o4bo$b5o5$b2o2b2o$bob2ob2o$2b5o$3b3o$4bo$2bo3bo$7bo$2bo4bo$3b5o! Code: Select all x = 81, y = 96, rule = LifeHistory 58.2A$58.2A3$59.2A17.2A$59.2A17.2A3$79.2A$79.2A2$57.A$56.A$56.3A4$27. A$27.A.A$27.2A21$3.2A$3.2A2.2A$7.2A18$7.2A$7.2A2.2A$11.2A11$2A$2A2.2A $4.2A18$4.2A$4.2A2.2A$8.2A! mniemiec Posts: 1055 Joined: June 1st, 2013, 12:00 am This is known. It can be easily synthesized from 10 gliders: Code: Select all x = 88, y = 26, rule = B3/S23 34bobo$35boo$35bo3$45bo$bo44boo$bbo42boo$3o$20boo18boo6bo$bbo17bobo17b obo4bo$boo18bo19bo5b3o$bobo$$43boo$44boo7b3o22bo4b3o$31b3o9bo9bobbo20b 3o3bobbo$33bo19bo16b3o3boobo3bo$32bo20bo3bo12bobbobb3o4bo3bo$53bo16bo 6boo4bo$54bobo13bo3bo3bo5bobo$70bo$71bobo$77bo$78bo$77bo! gameoflifemaniac Posts: 774 Joined: January 22nd, 2017, 11:17 am Location: There too Code: Select all x = 17, y = 17, rule = B3/S23 8bo$7bobo$4bo2bobo2bo$3bobobobobobo$2bo2bobobobo2bo$3b2o2bobo2b2o$7bob o$b6o3b6o$o15bo$b6o3b6o$7bobo$3b2o2bobo2b2o$2bo2bobobobo2bo$3bobobobob obo$4bo2bobo2bo$7bobo$8bo! 
Code: Select all x = 17, y = 17, rule = B3/S23 8bo$7bobo$4bo2bobo2bo$3bobobobobobo$2bo2bobobobo2bo$3b2o2bobo2b2o$7bob o$b6o3b6o$o7bo7bo$b6o3b6o$7bobo$3b2o2bobo2b2o$2bo2bobobobo2bo$3bobobob obobo$4bo2bobo2bo$7bobo$8bo! dvgrn Moderator Posts: 5874 Joined: May 17th, 2009, 11:00 pm Location: Madison, WI Contact: While incompetently welding a tremi-Snark this evening... Code: Select all x = 23, y = 31, rule = LifeHistory $3.4B$4.4B$5.4B5.2A$6.4B4.2A$7.9B$8.6B$8.4BA3B$6.7BA2B$6.5B3A2B$6.11B $4.2AB.10B$4.2AB3.B2A4B$9.2B2A5B$10.8B$10.6B$11.5B$5.2A5.3B$5.A.A3.5B $7.A2.B2AB2A$7.2A2.2A.AB2.2A$10.B3.A.A2.A$5.10A.2A$5.A$6.12A$17.A$8. 7A$8.A5.A$11.A$10.A.A$11.A! ... I ended up with a p3 that I didn't really want. Doesn't seem worth keeping it around until people are synthesizing all the 58-bit p3's, but it seemed mildly entertaining anyway. A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: dvgrn wrote: Code: Select all x = 23, y = 31, rule = LifeHistory $3.4B$4.4B$5.4B5.2A$6.4B4.2A$7.9B$8.6B$8.4BA3B$6.7BA2B$6.5B3A2B$6.11B $4.2AB.10B$4.2AB3.B2A4B$9.2B2A5B$10.8B$10.6B$11.5B$5.2A5.3B$5.A.A3.5B $7.A2.B2AB2A$7.2A2.2A.AB2.2A$10.B3.A.A2.A$5.10A.2A$5.A$6.12A$17.A$8. 7A$8.A5.A$11.A$10.A.A$11.A! Pointless reduction: Code: Select all x = 17, y = 25, rule = LifeHistory 4B$.4B$2.4B5.2A$3.4B4.2A$4.9B$5.6B$5.4BA3B$3.7BA2B$3.5B3A2B$3.11B$.2A B.10B$.2AB3.B2A4B$6.2B2A5B$7.8B$7.6B$8.5B$9.3B$8.5B$7.B2AB2A$4.2A2.2A .AB2.2A$4.A2.B3.A.A2.A$5.7A.3A2$7.2A.4A$7.2A.A2.A! x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all Aidan F. Pierce gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm Can someone salvage this? (Look at T≈20) Code: Select all x = 16, y = 12, rule = B3/S23 bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo! 
BlinkerSpawn Posts: 1905 Joined: November 8th, 2014, 8:48 pm Location: Getting a snacker from R-Bee's gmc_nxtman wrote: Can someone salvage this? (Look at T≈20) Code: Select all x = 16, y = 12, rule = B3/S23 bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo! The red pattern inserted at gen 16 would do it: Code: Select all x = 17, y = 14, rule = LifeHistory 13.D$11.2D$.A14.D$2.A8.5D$3A7$3.2A5.2A2.2A$4.2A3.A.A.2A$3.A7.A3.A! AbhpzTa Posts: 475 Joined: April 13th, 2016, 9:40 am Location: Ishikawa Prefecture, Japan gmc_nxtman wrote: Can someone salvage this? (Look at T≈20) Code: Select all x = 16, y = 12, rule = B3/S23 bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo! Code: Select all x = 27, y = 19, rule = B3/S23 16bo$4bo9b2o$5bo9b2o$3b3o$22bo$20b2o$21b2o$bo$2bo$3o2$25b2o$24b2o$26bo 3$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo! Iteration of sigma(n)+tau(n)-n [sigma(n)+tau(n)-n : OEIS A163163] (e.g. 16,20,28,34,24,44,46,30,50,49,11,3,3, ...) : 965808 is period 336 (max = 207085118608). gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm Reduced an old synthesis from eleven (I think) down to eight gliders: Code: Select all x = 34, y = 34, rule = B3/S23 10bo$bobo7bo19bobo$2b2o5b3o19b2o$2bo29bo3$32bo$30b2o$31b2o3$12bo$12bob o$12b2o11$14b2o$14bobo$14bo12b2o$27bobo$27bo3$b2o$obo$2bo! Extrementhusiast Posts: 1796 Joined: June 16th, 2009, 11:24 pm Location: USA Glider + two-glider loaf/tub/block/blinker constellation lasts for over 10K gens: Code: Select all x = 16, y = 13, rule = B3/S23 3bobo$3b2o$4bo9bo$13bobo$4b2o8bo$4b2o3$6bo$5bobo$4bo2bo$5b2o$3o! I Like My Heisenburps! 
(and others) Entity Valkyrie Posts: 247 Joined: November 30th, 2017, 3:30 am A glider synthesis of Sawtooth 311 x = 193, y = 140, rule = B3/S23 40bo$41bo$39b3o$72bo$70b2o$71b2o19$32bobo$33b2o$33bo30bo$63bo$63b3o2$ 75bo$74b2o$24bo49bobo$25b2o$24b2o$49bo$47b2o$48b2o2$2bo$obo$b2o7$34b2o $35b2o$34bo3$67b2o$67bobo$53b2o12bo$53bobo$53bo6$27b2o93b2o$26bobo93bo bo$28bo93bo2$53b3o$53bo74bobo$54bo73b2o$4bo124bo$4b2o172b2o$3bobo171b 2o$174b2o3bo$31b2o140bobo$30b2o98b2o43bo$32bo97bobo36bobo$130bo3bobo 10bo22b2o$135b2o11b2o20bo$135bo4bobo4b2o34bobo$115b2o21bobobobo6b2o20b o9b2o$116b2o21b2ob2o6b2o22bo9bo$115bo36bo19b3o2$120bo23b2ob2o23b3o5bo$ 120b2o21bobobobo24bo4bo$119bobo13b2ob2o5bobo25bo5b3o$134bobobobo$115bo 20bobo13b2o25b3o$116b2o12b2o19b2o22bo3bo$115b2o14b2o20bo22bo3bo$130bo 35bo4bo2b3o$164bobo2b2o$113b3o49b2o3b2o2b3o4bobo$115bo60bo4b2o$114bo 60bo6bo$179bo$126bo25bo18bo7b2o$125b2o23b2o19b2o5bobo$125bobo23b2o17bo bo$146b2o$147b2o$121b2o23bo$120b2o$122bo2$156b2o$142bobo10bobo$134bobo 5b2o13bo$134b2o7bo$135bo$132bo6bo$132b2o4bo$131bobo4b3o2$138b3o43bo$ 134bo3bo22bo20b2o$135bo3bo22b2o19b2o$133b3o25b2o13bobo$174bobobobo7bo$ 133b3o5bo25bobo5b2ob2o6bobo$135bo4bo24bobobobo15b2o3bo$63b3o68bo5b3o 23b2ob2o19b2o$63bo127b2o$64bo75b3o19bo$130bo9bo22b2o6b2ob2o$130b2o9bo 20b2o6bobobobo$129bobo34b2o4bobo$144bo20b2o$143b2o22bo12b2o$143bobo33b 2o$181bo2$139bo$138bo$138b3o2$137bo$136b2o$136bobo! Entity Valkyrie Posts: 247 Joined: November 30th, 2017, 3:30 am This Simkin-Glider=Gun=like object actually produces two MWSS: Code: Select all x = 53, y = 17, rule = B3/S23 44b2o5b2o$44b2o5b2o2$47b2o$47b2o$12bo$12b3o$12bobo$14bo4$4b2o$4b2o2$2o 5b2o$2o5b2o! mniemiec Posts: 1055 Joined: June 1st, 2013, 12:00 am Entity Valkyrie wrote:A glider synthesis of Sawtooth 311 ... It's nice to have syntheses like this. Unfortunately, in this case, there are several pairs of gliders that would have had to pass through each other earlier (i.e. they would have already collided before this phase). 
To make sure this doesn't happen, it is usually a good idea to backtrack all the gliders a certain amount (e.g. far enough away that they are in four distinct clouds, one coming from each direction) and then run them to see if any unwanted interactions occur first. Rhombic Posts: 1056 Joined: June 1st, 2013, 5:41 pm This component (the reverse component would have been more useful). Found accidentally though. Code: Select all x = 12, y = 14, rule = B3/S23 11bo$9b3o$8bo$9bo$6b4o$6bo$2b2o3b3o$2b2o5bo$9bobo$2bo7b2o$bobo$bob2o$o $2bo! Code: Select all x = 13, y = 15, rule = B3/S23 7bo$7b3o$10bo$2b2ob3o2bo$o2bobo2bob2o$2o4b3o3bo$9bobo$3b2o3b2ob2o$3b2o 2$3bo$2bobo$2bob2o$bo$3bo! Extrementhusiast Posts: 1796 Joined: June 16th, 2009, 11:24 pm Location: USA Switch engine turns two rows of beehives into two rows of table on tables: Code: Select all x = 88, y = 96, rule = B3/S23 13b2o$12bo2bo$13b2o6$21b2o$8b3o9bo2bo$9bo2bo8b2o$13bo$10bobo4$29b2o$ 28bo2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$ 45b2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo $24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o $68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bo bo$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bo bo$64bobo$65bo5$73bo$72bobo$72bobo$73bo!
(and others) KittyTac Posts: 533 Joined: December 21st, 2017, 9:58 am Extrementhusiast wrote: Switch engine turns two rows of beehives into two rows of table on tables: Code: Select all x = 88, y = 96, rule = B3/S23 13b2o$12bo2bo$13b2o6$21b2o$8b3o9bo2bo$9bo2bo8b2o$13bo$10bobo4$29b2o$ 28bo2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$ 45b2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo $24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o $68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bo bo$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bo bo$64bobo$65bo5$73bo$72bobo$72bobo$73bo! And then explodes. I wonder if there's a way to eat it at the end. dvgrn Moderator Posts: 5874 Joined: May 17th, 2009, 11:00 pm Location: Madison, WI Contact: KittyTac wrote: Extrementhusiast wrote:Switch engine turns two rows of beehives into two rows of table on tables... And then explodes. I wonder if there's a way to eat it at the end. Yeah, switch engine/swimmer eaters definitely aren't a problem: Code: Select all x = 96, y = 98, rule = B3/S23 13b2o$12bo2bo$13b2o6$8b3o10b2o$20bo2bo$8bo3bo8b2o$9b4o$12bo4$29b2o$28b o2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$45b 2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo$ 24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o$ 68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bob o$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bob o$64bobo$65bo5$73bo$72bobo$72bobo17b2o$73bo18bo$93b3o$95bo! #C [[ AUTOSTART STEP 9 THEME 2 ]] kiho park Posts: 50 Joined: September 24th, 2010, 12:16 am I found this c/3 diagonal fuse while searching c/3 long barge crawler. Code: Select all x = 10, y = 11, rule = B3/S23:T40,27 8b2o$7bo2$6bobo$5bo2bo$4bobo$3bobo$2bobo$bobo$obo$bo!
Jacob, Thomas K and Mishra, Saurabh and Waseda, Yoshio (2000) Refinement of thermodynamic properties of $ReO_2$. In: Thermochimica Acta, 348 (1-2). pp. 61-68.

Abstract

The standard Gibbs energy of formation of $ReO_2$ in the temperature range from 900 to 1200 K has been determined with high precision using a novel apparatus incorporating a buffer electrode between reference and working electrodes. The role of the buffer electrode was to absorb the electrochemical flux of oxygen through the solid electrolyte from the electrode with higher oxygen chemical potential to the electrode with lower oxygen potential. It prevented the polarization of the measuring electrode and ensured accurate data. The $Re+ReO_2$ working electrode was placed in a closed stabilized-zirconia crucible to prevent continuous vaporization of $Re_2O_7$ at high temperatures. The standard Gibbs energy of formation of $ReO_2$ can be represented by the equation $$\Delta_f G^0(\mathrm{ReO}_2)/\mathrm{J\ mol^{-1}} = -451{,}510 + 295.11\,(T/\mathrm{K}) - 14.3261\,(T/\mathrm{K})\ln(T/\mathrm{K})\ (\pm 80).$$ Accurate values of low- and high-temperature heat capacity of $ReO_2$ are available in the literature. The thermal data are coupled with the standard Gibbs energy of formation obtained in this study to evaluate the standard enthalpy of formation of $ReO_2$ at 298.15 K by the 'third law' method. The value of the standard enthalpy of formation at 298.15 K is: $\Delta_f H^0_{298.15\,\mathrm{K}}(\mathrm{ReO}_2)/\mathrm{kJ\ mol^{-1}} = -445.1\ (\pm 0.2)$. The uncertainty estimate includes both random ($2\sigma$) and systematic errors.

Item Type: Journal Article
Additional Information: Copyright of this article belongs to Elsevier.
Keywords: Gibbs energy of formation; Enthalpy; Entropy; Advanced solid-state cell; Micropolarization.
Department/Centre: Division of Mechanical Sciences > Materials Engineering (formerly Metallurgy)
URI: http://eprints.iisc.ac.in/id/eprint/15606
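The fitted expression is straightforward to evaluate; a sketch using the coefficients quoted in the abstract (valid only within the measured range, 900 K to 1200 K):

```python
import math

def gibbs_formation_ReO2(T_kelvin):
    """Standard Gibbs energy of formation of ReO2 in J/mol (900-1200 K fit)."""
    T = T_kelvin
    return -451_510 + 295.11 * T - 14.3261 * T * math.log(T)

# At 1000 K the formation of ReO2 is still strongly favourable,
# roughly -255 kJ/mol
dg_1000 = gibbs_formation_ReO2(1000.0)
```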
Physics > Biological Physics

Title: Actin filaments growing against an elastic membrane: Effect of membrane tension (Submitted on 17 Mar 2018)

Abstract: We study the force generation by a set of parallel actin filaments growing against an elastic membrane. The elastic membrane tries to stay flat and any deformation from this flat state, either caused by thermal fluctuations or due to protrusive polymerization force exerted by the filaments, costs energy. We study two lattice models to describe the membrane dynamics. In one case, the energy cost is assumed to be proportional to the absolute magnitude of the height gradient (gradient model) and in the other case it is proportional to the square of the height gradient (Gaussian model). For the gradient model we find that the membrane velocity is a non-monotonic function of the elastic constant $\mu$, and reaches a peak at $\mu=\mu^\ast$. For $\mu < \mu^\ast$ the system fails to reach a steady state and the membrane energy keeps increasing with time. For the Gaussian model, the system always reaches a steady state and the membrane velocity decreases monotonically with the elastic constant $\nu$ for all nonzero values of $\nu$. Multiple filaments give rise to protrusions at different regions of the membrane and the elasticity of the membrane induces an effective attraction between the two protrusions in the Gaussian model which causes the protrusions to merge and a single wide protrusion is present in the system. In both the models, the relative time-scale between the membrane and filament dynamics plays an important role in deciding whether the shape of elasticity-velocity curve is concave or convex. Our numerical simulations agree reasonably well with our analytical calculations.

Submission history From: Raj Kumar Sadhu [view email] [v1] Sat, 17 Mar 2018 12:13:53 GMT (253kb)
Sun, 22 Mar 2015 Shortly after I posted A public service announcement about contracts Steve Bogart asked me on Twitter for examples of dealbreaker clauses. Some general types I thought of immediately were: A couple of recent specific examples: Sat, 21 Mar 2015 Every so often, when I am called upon to sign some contract or other, I have a conversation that goes like this: There is only one response you should make to this line of argument: Because if the lawyers made them put it in there, that is for a reason. And there is only one possible reason, which is that the lawyers do, in fact, envision that they might one day exercise that clause and chop off your hand. The other party may proceed further with the same argument: “Look, I have been in this business twenty years, and I swear to you that we have never chopped off anyone's hand.” You must remember the one response, and repeat it: You must repeat this over and over until it works. The other party is lazy. They just want the contract signed. They don't want to deal with their lawyers. They may sincerely believe that they would never chop off anyone's hand. They are just looking for the easiest way forward. You must make them understand that there is no easier way forward than to remove the hand-chopping clause. They will say “The deadline is looming! If we don't get this contract executed soon it will be TOO LATE!” They are trying to blame And if the other party would prefer to walk away from the deal rather than abandon their hand-chopping rights, what does that tell you about the value [ Addendum: Steve Bogart asked on Twitter for examples of unacceptable contract demands; I thought of so many that I put them in a separate article. ] [ Addendum 20150401: Chas. Owens points out that you don't have to argue about it; you can just cross out the hand-chopping clause, add your initials and date in the margin.
I do this also, but then I bring the modification to the other party's attention, because that is the honest and just thing to do. ] Fri, 20 Mar 2015 Wednesday while my 10-year-old daughter Katara was doing her math homework, she observed with pleasure that a !!6×3!! rectangle has a perimeter of 18 units and also an area of 18 square units. I mentioned that there was an infinite family of such rectangles, and, after a small amount of tinkering, that the only other such rectangle with integer sides is a !!4×4!! square, so in a sense she had found the single interesting example. She was very interested in how I knew this, and I promised to show her how to figure it out once she finished her homework. She didn't finish before bedtime, so we came back to it the following evening. This is just one of many examples of how she has way too much homework, and how it interferes with her education. She had already remarked that she knew how to write an equation expressing the condition she wanted, so I asked her to do that; she wrote $$(L×W) = ([L+W]×2).$$ I remember being her age and using all different shapes of parentheses too. I suggested that she should solve the equation for !!W!!, getting !!W!! on one side and a bunch of stuff involving !!L!! on the other, but she wasn't sure how to do it, so I offered suggestions while she moved the symbols around, eventually obtaining $$W = 2L\div (L-2).$$ I would have written it as a fraction, but getting the right answer is important, and using the same notation I would use is much less so, so I didn't say anything. I asked her to plug in !!L=3!! and observe that !!W=6!! popped right out, and then similarly that !!L=6!! yields !!W=3!!, and then I asked her to try the other example she knew. Then I suggested that she see what !!L=5!! did: it gives !!W=\frac{10}3!!. This was new, so she checked it by calculating the area and the perimeter, both !!\frac{50}3!!. She was very excited by this time.
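The integer search Katara's equation describes can also be settled by brute force; a small sketch (not from the post) that confirms the three integer rectangles:

```python
# Rectangles with integer sides whose area equals their perimeter:
# L * W == 2 * (L + W)
solutions = [
    (length, width)
    for length in range(1, 100)
    for width in range(1, 100)
    if length * width == 2 * (length + width)
]
# -> [(3, 6), (4, 4), (6, 3)]
```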
As I have mentioned earlier, algebra is magical in its ability to mechanically yield answers to all sorts of questions. Even after thirty years I find it astonishing and delightful. You set up the equations, push the symbols around, and all sorts of stuff pops out like magic. Calculus is somehow much less astonishing; the machinery is all explicit. But how does algebra work? I've been thinking about this on and off for a long time and I'm still not sure. At that point I took over because I didn't think I would be able to guide her through the next part of the problem without a demonstration; I wanted to graph the function !!W=2L\div(L-2)!! and she does not have much experience with that. She put in the five points we already knew, which already lie on a nice little curve, and then she asked an incisive question: does it level off, or does it keep going down, or what? We discussed what happens when !!L!! gets close to 2; then !!W!! shoots up to infinity. And when !!L!! gets big, say a million, you can see from the algebra that !!W!! is a hair more than 2. So I drew in the asymptotes on the hyperbola. Katara is not yet familiar with hyperbolas. (She has known about parabolas since she was tiny. I have a very fond memory of visiting Portland with her when she was almost two, and we entered Holladay park, which has fountains that squirt out of the ground. Seeing the water arching up before her, she cried delightedly “parabolas!”) Once you know how the graph behaves, it is a simple matter to see that there are no integer solutions other than !!\langle 3,6\rangle, \langle 4,4\rangle,!! and !!\langle6,3\rangle!!. We know that !!L=5!! does not work. For !!L>6!! the value of !!W!! is always strictly between !!2!! and !!3!!. For !!L=2!! there is no value of !!W!! that works at all. For !!0\lt L\lt 2!! the formula says that !!W!!
is negative, on the other branch of the hyperbola, which is a perfectly good [ Addendum 20150330: Thanks to Steve Hastings for his plot of the hyperbola, which is in the public domain. ] Thu, 19 Mar 2015 The computer is really awesome at doing quick searches for numbers with weird properties, and people with an amateur interest in recreational mathematics would do well to learn some simple programming. People appear on math.stackexchange quite often with questions about tic-tac-toe, but there are only 5,478 total positions, so any question you want to ask can be instantaneously answered by an exhaustive search. An amateur showed up last fall asking “Is it true that no prime larger than 241 can be made by either adding or subtracting 2 coprime numbers made up out of the prime factors 2, 3, and 5?” and, once you dig through the jargon, the question is easily answered by the computer, which quickly finds many counterexamples, such as !!162+625=787!! and !!2^{19}+3^4=524369!!. But sometimes the search appears too large to be practical, and then you need to apply theory. Sometimes you can deploy a lot of theory and solve the problem completely, avoiding the search. But theory is expensive, and not always available. A hybrid approach often works, which uses a tiny amount of theory to restrict the search space to the point where the search is easy. One of these I wrote up on this blog back in 2006: The programmer who gave me the problem had tried a brute-force search over all numbers, but to find all 10-digit excellent numbers, this required an infeasible search of 9,000,000,000 candidates. With the application of a tiny amount of algebra, one finds that !!a(10^k+a) = b^2-b!! and it's not hard to quickly test candidates for !!a!! to see if !!a(10^k+a)!! has this form and if so to find the corresponding value of !!b!!. (Details are in the other post.)
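To make the algebra concrete, here is a sketch of the reduced search (my reconstruction, not the code from the 2006 post). A !!2k!!-digit number with top half !!a!! and bottom half !!b!! is excellent when !!b^2-a^2!! equals the number itself, which rearranges to !!a(10^k+a)=b^2-b!!; so for each candidate !!a!! we solve a quadratic for !!b!! and keep the integer hits:

```python
from math import isqrt

# a k-digit prefix a extends to an excellent 2k-digit number exactly when
# a*(10^k + a) equals b^2 - b for some k-digit suffix b
def excellent_numbers(k):
    found = []
    for a in range(10**(k - 1), 10**k):
        t = a * (10**k + a)
        # solve b^2 - b = t, i.e. b = (1 + sqrt(1 + 4t)) / 2
        s = isqrt(1 + 4 * t)
        if s * s == 1 + 4 * t and (1 + s) % 2 == 0:
            b = (1 + s) // 2
            if 10**(k - 1) <= b < 10**k:
                found.append(a * 10**k + b)
    return found

print(excellent_numbers(1))  # → [48]   (8² − 4² = 48)
print(excellent_numbers(2))  # → [3468] (68² − 34² = 3468)
```

For 10-digit numbers this loops over the 90,000 five-digit values of !!a!! instead of 9,000,000,000 candidates, which is the reduction described below.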
This reduces the search space for 10-digit excellent numbers from 9,000,000,000 candidates to 90,000, which could be done in under a minute even with last-century technology, and is pretty nearly instantaneous on modern equipment. But anyway, the real point of this note is to discuss a different problem entirely. A recreational mathematician on stackexchange wanted to find distinct integers !!a,b,c,d!! for which !!a^2+b^2, b^2+c^2, c^2+d^2, !! and !!d^2+a^2!! were all perfect squares. You can search over all possible quadruples of numbers, but this takes a long time. The querent indicated later that he had tried such a search but lost patience before it yielded anything. Instead, observe that if !!a^2+b^2!! is a perfect square then !!a!! and !!b!! are the legs of a right triangle with integer sides; they are terms in what is known as a Pythagorean triple. The prototypical example is !!3^2 + 4^2 = 5^2!!, and !!\langle 3,4,5\rangle!! is the Pythagorean triple. (The querent was quite aware that he was asking for Pythagorean triples, and mentioned them specifically.) Here's the key point: It has been known since ancient times that if !!\langle a,b,c\rangle!! is a primitive Pythagorean triple, then there exist integers !!m!! and !!n!! such that: $$\begin{align} a & = n^2-m^2 \\ b & = 2mn \\ c & = n^2 + m^2 \end{align}$$ So you don't have to search for Pythagorean triples; you can just generate them with no searching. This builds a hash table; the table has only around 40,000 entries. Having constructed it, we then search it. The outer loop runs over each !!a!! that is known to be a member of a Pythagorean triple. (Actually the !!m,n!! formulas show that every number bigger than 2 is a member of some triple.) This runs in less than a second on so-so hardware and produces 11 solutions. Only five of these are really different. For example, the last one is the same as the second, with every element multiplied by 2; the third, seventh, and eighth are similarly the same.
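The generate-then-search program might be sketched like this (my reconstruction, not the author's original code; I use deliberately small limits so it runs instantly, and I also add scaled copies of each generated triple so non-primitive legs such as !!\langle 105,100\rangle!! are covered):

```python
from collections import defaultdict
from math import isqrt

# build the table: leg -> set of legs it forms a Pythagorean triple with.
# the m,n formulas give the primitive triples; scaled copies k*(a,b) cover
# the non-primitive ones up to LIMIT
LIMIT = 300
pairs = defaultdict(set)
for m in range(1, 20):
    for n in range(m + 1, 20):
        a, b = n * n - m * m, 2 * m * n
        k = 1
        while k * max(a, b) <= LIMIT:
            pairs[k * a].add(k * b)
            pairs[k * b].add(k * a)
            k += 1

def is_square(x):
    return isqrt(x) ** 2 == x

# search: pick b and d among the partners of a, then any c partnering both
solutions = set()
for a in pairs:
    for b in pairs[a]:
        for d in pairs[a]:
            if b == d:
                continue
            for c in pairs[b] & pairs[d]:
                if len({a, b, c, d}) == 4:
                    # all four adjacent sums of squares are perfect squares
                    assert all(is_square(p * p + q * q)
                               for p, q in [(a, b), (b, c), (c, d), (d, a)])
                    solutions.add(tuple(sorted((a, b, c, d))))

print(sorted(solutions))
```

Even with these tiny limits the search turns up the quadruple !!\langle 105,100,240,252\rangle!!, for which the four sums are !!145^2, 260^2, 348^2,!! and !!273^2!!.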
In general if !!\langle a,b,c,d\rangle!! is a solution, so is !!\langle ka, kb,kc,kd\rangle!! for any !!k!!. A slightly improved version would require that the four numbers not have any common factor greater than 1; there are few enough solutions that the cost of this test would be completely negligible. The only other thing wrong with the program is that it produces each solution 8 times; if !!\langle a,b,c,d\rangle!! is a solution, then so are !!\langle b,c,d,a\rangle, \langle d,c,b,a\rangle,!! and so on. This is easily fixed with a little post-filtering; pipe the output through a deduplicating filter or something of that sort. The corresponding run with !!m!! and !!n!! up to 2,000 instead of only 200 takes 5 minutes and finds 445 solutions, of which 101 are distinct, including !!\langle 3614220, 618192, 2080820, 574461\rangle!!. It would take a very long time to find this with a naïve search. [ For a much larger and more complex example of the same sort of thing, see When do !!n!! and !!2n!! have the same digits?. I took a seemingly-intractable problem and analyzed it mathematically. I used considerably more than an ounce of theory in this case, and while the theory was not enough to solve the problem, it was enough to reduce the pool of candidates to the point that a computer search was feasible. ] [ Addendum 20150728: Another example ]
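Both normalizations described above — dividing out the common factor and collapsing the 8 rotations and reflections of the cycle !!a\to b\to c\to d\to a!! — can be sketched in a few lines (my sketch; the original post deferred this to external post-filtering):

```python
from math import gcd
from functools import reduce

# canonicalize a quadruple: divide out the common factor, then pick the
# lexicographically smallest of the 8 rotations/reflections of the cycle
def canonical(q):
    g = reduce(gcd, q)
    q = tuple(x // g for x in q)
    variants = []
    for r in range(4):
        rot = q[r:] + q[:r]
        variants.extend([rot, rot[::-1]])
    return min(variants)

# the doubled solution and a rotated copy reduce to the same representative
print(canonical((210, 200, 480, 504)) == canonical((100, 240, 252, 105)))  # → True
```

Keeping only distinct values of `canonical` over the program's output yields the "really different" solutions directly.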
In this article I’m about to present the process of decoding a previously recorded FSK transmission with the use of Python, NumPy and SciPy. The data was gathered using SDR# software and it is stored as a wave file that contains all the IQ baseband data @ 2.4Msps. The complete source code for this project is available here: https://github.com/MightyDevices/python-fsk-decoder

Step 0: Load the data

The data provided is in the form of a ‘stereo’ wave file where the left channel contains the In-Phase data and the right one has the Quadrature data. The wave file uses 16-bit PCM samples. For the sake of further processing let’s convert the data into a list of complex numbers of the form: \[z[n] = in\_phase[n] + j \cdot quadrature[n]\] If we don’t want to deal with large numbers along the way it is also wise to scale things down to \((-1, 1)\) according to how many bits per sample there are.

import numpy as np
import scipy.signal as sig
import scipy.io.wavfile as wf
import matplotlib.pyplot as plt

# read the wave file
fs, rf = wf.read('data.wav')
# get the scale factor according to the data type (full scale is 2^15 for
# int16 and 2^31 for int32)
sf = {
    np.dtype('int16'): 2**15,
    np.dtype('int32'): 2**31,
}[rf.dtype]
# convert to complex numbers c = in_phase + 1j*quadrature and scale so that
# we are in the (-1, 1) range
rf = (rf[:, 0] + 1j * rf[:, 1]) / sf

Step 1: Center the data around DC

First of all we need to tune the system so that it receives only what will become the subject of further analysis. The initial data spectrum looks like this: In order to move the signal of interest to DC we use the concept of mixing, which is no more no less than multiplying by a sine and cosine that are tuned to the offset frequency.
Thanks to the magic of complex numbers we can combine the whole sine/cosine gig into a single operation using the equation \[e^{i\theta}=\cos(\theta) + i\sin(\theta)\] If we want to generate appropriate sine and cosine functions that represent 360kHz oscillations in a 2.4Msps system then \(\theta\) has to be as follows: \[\theta(n)=n \cdot 2\pi \cdot \frac{360\,kHz}{2.4\,MHz}\] where \(n\) is the sample number, ranging from 0 to the number of samples within the input data.

# offset frequency in Hz (read from the previous plot)
offset_frequency = 366.8e3
# baseband local oscillator
bb_lo = np.exp(1j * (2 * np.pi * (-offset_frequency / fs) * np.arange(0, len(rf))))
# complex-mix to bring the rf signal to baseband (so that it is centered
# around something around 0Hz, doesn't have to be perfect)
bb = rf * bb_lo

This is what we end up with. All is shifted, including that naaasty DC spike.

Step 2: Remove unwanted data in the frequency domain

It’s easy to see that the signal of interest occupies only a small part of the band, so the obvious next step will be to limit the sampling rate. The process is called Decimation and must be accompanied by proper filtering. Without filtering, all the out-of-band data would simply alias into the band of interest. Let’s design an appropriate filter, apply it and decimate (remove all samples except every n-th). In this example I’ve decimated by 4 even though one could go as high as 8, but the remaining samples will come in handy when we reach the point of symbol synchronization.
# limit the sampling rate using decimation, let's use decimation by 4
bb_dec_factor = 4
# get the resulting baseband sampling frequency
bb_fs = fs // bb_dec_factor
# let's prepare the low-pass decimation filter that will have a cutoff at
# half of the bandwidth after the decimation
dec_lp_filter = sig.butter(3, 1 / (bb_dec_factor * 2))
# filter the signal
bb = sig.filtfilt(*dec_lp_filter, bb)
# decimate
bb = bb[::bb_dec_factor]

Step 3: Remove unwanted data in the time domain

The overall length of the recording is much, much longer than the transmission itself. It would be wise to get rid of the moments of “radio silence” so that we can focus solely on what’s meaningful. Needless to say, the computation performance will also benefit greatly from that. The easiest way around that is to select the part of the signal that is above a certain magnitude threshold. Here’s the selection code. I’ve selected samples spanning from the first one that exceeded the 0.01 level to the last one. This is to avoid any potential discontinuities that may occur when the signal strength drops for a couple of samples.

# using the signal magnitude let's determine when the actual transmission
# took place
bb_mag = np.abs(bb)
# magnitude threshold level (as read from the chart above)
bb_mag_thrs = 0.01
# indices with magnitude higher than the threshold
bb_indices = np.nonzero(bb_mag > bb_mag_thrs)[0]
# limit the signal
bb = bb[np.min(bb_indices) : np.max(bb_indices)]

Step 4: Demodulation

Now it’s the perfect time to do the demodulation. The transmitter uses FSK, which is basically a form of digitally controlled Frequency Modulation where ones and zeros are transmitted as tones below and above the center frequency. If you take a look at the baseband spectrum from Step 2 you will clearly notice two peaks in the spectrum that are the result of the transmitter sending consecutive zeros and ones. In order to know whether a zero or a one is being transmitted at the moment we need to know the instantaneous frequency.
Since we are dealing with complex numbers this is actually easier to accomplish than one may think. During the transmission the consecutive samples form a circle when plotted on the complex plane: In order to determine the current frequency value one simply needs to calculate the rate at which the samples rotate around the circle, meaning that we need to know the angle between consecutive samples. Calculating the angle between two complex numbers is quite easy: \[angle=\arg(z_{n-1} \cdot \bar{z}_{n})\] where \( \bar{z}_{n} \) denotes the complex conjugate of the n-th sample. Here’s the code for the demodulation:

# demodulate the fm transmission using the difference between two complex
# number arguments. multiplying the consecutive complex numbers with their
# respective conjugates gives a number whose angle is the angle difference
# of the numbers being multiplied
bb_angle_diff = np.angle(bb[:-1] * np.conj(bb[1:]))
# the mean output will tell us about the frequency offset in radians per
# sample time. if the mean is not zero that means that we have some offset,
# let's get rid of it
dem = bb_angle_diff - np.mean(bb_angle_diff)

The second part of the snippet above helps to remove any frequency offset that may come from us not being able to provide the exact value for the frequency shift in Step 1. This operation ensures that both zeros and ones are now evenly spaced around the center frequency. And finally here’s the result of the demodulation:

Step 5: Guess the data rate

This is as easy as looking at the spectral plot of the demodulated data. Since we are dealing with a binary data transmission it should have a spectrum with a shape similar to \(H(f) = \frac{\sin(\pi f T)}{\pi f T}\), where \(T\) is the bit period. Such spectra have nulls (zeros in magnitude scale or dips in decibel scale) at the exact multiples of the bit frequency a.k.a. the bitrate.
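As an illustration (my sketch, not part of the original article — `dem` and `bb_fs` here are synthetic stand-ins for the values computed in the previous steps), the null near the bit rate can even be located numerically rather than by eye:

```python
import numpy as np

np.random.seed(0)
# stand-in for the demodulator output: a random NRZ stream at 100 kbit/s
# sampled at 600 ksps (6 samples per bit)
bb_fs = 600e3
bit_rate = 100e3
n = 1 << 14
bits = np.random.randint(0, 2, n // 6 + 1) * 2 - 1
dem = np.repeat(bits, 6)[:n].astype(float)

# windowed magnitude spectrum of the demodulated signal
spectrum = np.abs(np.fft.rfft(dem * np.hanning(n)))
freqs = np.fft.rfftfreq(n, d=1 / bb_fs)

# the sinc-shaped envelope should dip near the bit rate: compare the average
# magnitude around 100 kHz with the main lobe (20-60 kHz)
null_mean = spectrum[np.abs(freqs - bit_rate) < 2e3].mean()
lobe_mean = spectrum[(freqs > 20e3) & (freqs < 60e3)].mean()
print(null_mean < lobe_mean)  # → True
```

On real demodulator output the same comparison works; scanning for the frequency that minimizes the band-averaged magnitude gives a rough automatic bit-rate estimate.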
From the shape of the spectrum we can make an initial guess that the bit rate is 100kbit/s.

Step 6: Symbol synchronization – data recovery

With all the information gathered we can now proceed with data sampling. In order to determine the correct sampling time a symbol synchronization scheme must be employed. Such schemes (or algorithms, if you like) are often built around controllable clocks (oscillators) that provide the correct timing for the sampler. The clock ‘tick’ rate is controlled by an error term derived from the algorithm. In this article I’ll discuss the simplest method: Early-Late. Imagine that you were to sample the data three times per symbol, taking the middle sample as the final symbol value. You can imagine that the best moment to sample a bit is right in the middle. That means that the samples before and after the middle sample should have similar values: If the sampling time is not aligned with the incoming stream of bits then the first and last samples are no longer similar in value. If you subtract the value of the last sample from the value of the first you’ll get positive values when you are sampling too early (the sampling clock needs to be slowed down) and negative values when the sampling is late (the sampling clock needs to be sped up) with respect to the incoming data. The result of the subtraction is nothing else than the error term itself. In order to support sign changes (applying the process to negative data from the demodulator) we additionally scale the output by the middle sample value. The whole error term is then fed to the sampling clock generator implemented as a Numerically Controlled Oscillator. This oscillator is nothing more than an accumulator (a phase accumulator, to be exact) to which we add a number (the frequency word) once per every iteration of the algorithm and wait for it to reach a certain level (1.0 in my code). If the value is reached then we sample. Simple as that.
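Stripped of the error feedback, the NCO itself is only a few lines. Here is an illustrative sketch of mine (using an exact binary frequency word of 0.25, i.e. one tick every 4 samples):

```python
# minimal NCO: add the frequency word on every iteration; when the phase
# accumulator reaches 1.0, emit a 'sample now' tick and wrap around
nco_step = 0.25          # frequency word: one tick every 4 samples
nco_phase_acc = 0.0
ticks = []
for i in range(20):
    nco_phase_acc += nco_step
    if nco_phase_acc >= 1:
        nco_phase_acc -= 1
        ticks.append(i)

print(ticks)  # → [3, 7, 11, 15, 19]
```

The synchronizer in the full code simply nudges `nco_step` (and the accumulator directly) with the early-late error, so the tick spacing drifts toward the true symbol period.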
The frequency word (the pace at which the sampling clock is running) is constantly adjusted by the error term from above, so the clock will eventually synchronize to the data. The following code implements the early-late synchronizer. Keep in mind that I’ve intentionally negated the error value so that I can add it (in proportion) to the frequency word (nco_step) and the phase accumulator (nco_phase_acc).

# bitrate assumption, will be corrected for using early-late symbol sync (as
# read from the spectral content plot from above)
bit_rate = 100e3
# calculate the nco step based on the initial guess for the bit rate.
# early-late requires sampling 3 times per symbol
nco_step_initial = bit_rate * 3 / bb_fs
# use the initial guess
nco_step = nco_step_initial
# phase accumulator value
nco_phase_acc = 0
# samples queue
el_sample_queue = []
# couple of control values
nco_steps, el_errors, el_samples = [], [], []

# process all samples
for i in range(len(dem)):
    # current early-late error
    el_error = 0
    # time to sample?
    if nco_phase_acc >= 1:
        # wrap around
        nco_phase_acc -= 1
        # alpha tells us how far the current sample is from the perfect
        # sampling time: 0 means that dem[i] matches the timing perfectly,
        # 0.5 means that the real sampling time was between dem[i] and
        # dem[i-1], and so on
        alpha = nco_phase_acc / nco_step
        # linear approximation between two samples
        sample_value = alpha * dem[i - 1] + (1 - alpha) * dem[i]
        # append the sample value
        el_sample_queue += [sample_value]
        # got all three samples?
        if len(el_sample_queue) == 3:
            # get the early-late error: if this is negative we need to delay
            # the clock
            el_error = (el_sample_queue[2] - el_sample_queue[0]) / \
                -el_sample_queue[1]
            # clamp
            el_error = np.clip(el_error, -10, 10)
            # clear the queue
            el_sample_queue = []
        # store the sample
        elif len(el_sample_queue) == 2:
            el_samples += [(i - alpha, sample_value)]
    # integral term
    nco_step += el_error * 0.01
    # sanity limits: do not allow for bitrates outside the 30% tolerance
    nco_step = np.clip(nco_step, nco_step_initial * 0.7,
                       nco_step_initial * 1.3)
    # proportional term
    nco_phase_acc += nco_step + el_error * 0.3
    # append
    nco_steps += [nco_step]
    el_errors += [el_error]

As the result of the algorithm we end up with sample times and their values, which look like this when plotted over the demodulator output: As one can tell, the algorithm works nicely: there are no samples taken during the bit transitions, only when the bits are ‘stable’.

Step 7: Determining the bit value

Probably the easiest step. Just use the sampled data and produce the output value: zero or one, depending on its sign, and you’re done!
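For completeness, the slicing can be sketched like this (my sketch; `el_samples` below is a made-up stand-in for the (time, value) list produced by the synchronizer above, and the positive-deviation-means-one mapping is an assumption):

```python
# slice the sampled demodulator values into bits by their sign:
# positive frequency deviation -> 1, negative -> 0
el_samples = [(10.2, 0.31), (16.1, -0.29), (22.3, -0.33), (28.0, 0.30)]
bits = [1 if value > 0 else 0 for _, value in el_samples]
print(bits)  # → [1, 0, 0, 1]
```

If the decoded bytes come out inverted, the tone-to-bit mapping is simply the other way around and the comparison flips.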