To describe the evolution of a (in this example non-relativistic) fluid system, the evolution equations for all relevant variables, the conservation laws, the second law of thermodynamics, and an appropriate equation (or equations) of state have to be considered. The evolution equation for momentum is the Navier-Stokes equation, which in a geophysical context can be written as \[\frac{\partial u}{\partial t} + (u\cdot\nabla)u = -\frac{\nabla p}{\rho} -\nabla\Phi +\frac{1}{\rho}\nabla\cdot S\] where $\Phi$ is the geopotential and $S$ is the stress tensor. Conservation of angular momentum is taken into account by imposing the constraint that the stress tensor \[S = \rho \nu \{ \nabla \circ u + (\nabla \circ u)^T\} + \rho \eta I (\nabla \cdot u)\] where $\nu$ and $\eta$ denote the kinematic shear and bulk viscosities respectively, is symmetric. The evolution equation for internal energy (or equivalently temperature) can be written as \[\frac{d e}{d t} = \frac{p}{\rho^2}\frac{d\rho}{d t} + Q_{rad} + Q_{lat} -\frac{1}{\rho}\nabla\cdot J + \frac{1}{\rho}(S\nabla)\cdot u\] The second law of thermodynamics is taken into account by demanding that the last term in the above equation, \[\epsilon = \frac{1}{\rho} (S\nabla)\cdot u,\] which describes the frictional heating or dissipation, is positive definite. Conservation of mass is accounted for by including the continuity equation \[\frac{\partial\rho}{\partial t} +\nabla\cdot(\rho u) = 0\] in the system of equations. As you can see, this is a coupled system of equations. The kinetic energy dissipated by the friction term in the Navier-Stokes equation reappears as the dissipative heating $\epsilon$ in the internal energy (or temperature) equation, so the (kinetic) energy cannot simply disappear.
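The positivity of the dissipation term can be checked numerically. The sketch below is my own illustration (assumptions: the contraction is $S:\nabla u$, $\nu,\eta \ge 0$, and $G_{ij}$ stands for the velocity-gradient components); it verifies that $\epsilon$ built from the symmetric stress tensor above is non-negative for random velocity gradients:

```python
import random

random.seed(0)

def dissipation(G, nu, eta, rho=1.0):
    """epsilon = (1/rho) * S : grad(u) for the stress tensor
    S = rho*nu*(G + G^T) + rho*eta*I*tr(G), with G the velocity gradient."""
    tr = sum(G[i][i] for i in range(3))
    eps = 0.0
    for i in range(3):
        for j in range(3):
            S_ij = rho * nu * (G[i][j] + G[j][i]) + rho * eta * (tr if i == j else 0.0)
            eps += S_ij * G[i][j] / rho
    return eps

# closed form: eps = (nu/2)*||G + G^T||^2 + eta*(tr G)^2 >= 0 when nu, eta >= 0
nu, eta = 0.01, 0.005
for _ in range(1000):
    G = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    assert dissipation(G, nu, eta) >= 0.0
```

The closed form in the comment makes the sign manifest: only the symmetric part of $\nabla u$ contributes, squared, so $\epsilon \ge 0$ as the second law requires.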
A nice example of a study that includes the temperature equation in addition to the Navier-Stokes equation is the study by Sukorianski et al., who investigate stochastically forced turbulent flows with stable stratification using renormalization-group-like methods. They derive a coupled system of RG equations that describes the scale dependence of the anisotropic diffusivities of velocity as well as of (potential) temperature fluctuations, and, by assuming the presence of a Kolmogorov scale-invariant subrange, they are able to reproduce the correct kinetic energy cascade. By slightly extending their work, it should be possible to derive the corresponding scale dependence of the spectrum of temperature fluctuations (or available potential energy) too. If this answer is not exactly what you wanted, I hope that it helps at least a bit.
Let's assume we have a standard model singlet particle $s$ that mixes after electroweak symmetry breaking with an exotic, vectorlike neutral lepton $N$. The relevant part of the Lagrangian reads $$ L \supset h^c s N + h s N^c + M N N^c, $$ where $h$ is the standard model Higgs and $M$ is a superheavy mass. Moreover, we assume that for some reason there is (at tree level) no Majorana mass term $ M_s ss$ for the singlet $s$. The tree-level analysis now yields for the singlet $s$ a tiny seesaw-type mass: $m_s \approx v_{EW}^2 / M \ll v_{EW}$. Now, a Majorana mass term for the singlet $s$ will generically be generated at the 1-loop level through a diagram with $N$ in the loop. It was pointed out to me that this 1-loop contribution "may give rise to a much larger mass for the singlet". I would like to understand how this can happen. I think the relevant diagram looks like this My naive estimate for this one-loop contribution is $ m_s \approx \frac{1}{16\pi^2} \, m_{EW}^2 /M$, i.e. something comparable to the tree-level estimate, divided by a loop factor, possibly times some logarithm. Thus, while there is possibly some relevant correction due to the logarithm, the consequences do not seem dramatic. Is there any other possible correction that I'm missing here? Is there some diagram that potentially leads to a much heavier mass for the singlet $s$? ---------- A relevant analogous scenario The situation is similar to the usual seesaw for the left-handed neutrinos $\nu_L$, but reversed. In the usual seesaw scenario, the left-handed neutrinos $\nu_L$ are light and the singlet $\nu_R$ is heavy. The 1-loop correction to the usual seesaw formula is discussed in On the importance of the 1-loop finite corrections to seesaw neutrino masses by D. Aristizabal Sierra, Carlos E. Yaguna. (See also this summary.)
The relevant diagrams are and the result is This result yields a contribution comparable to the result of the tree-level analysis: $ m_{EW}^2 / M$, where $m_{EW}$ denotes the electroweak scale and $M$ a superheavy scale. (In addition, there is a potentially enhancing log factor.)
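For orientation, a back-of-the-envelope comparison of the two estimates (the numbers here are my own illustrative choices, not from the question: $v_{EW}\sim 174$ GeV, $M\sim 10^{14}$ GeV, loop factor $1/16\pi^2$):

```python
import math

# illustrative numbers only (my assumptions, not from the question):
v = 174.0      # electroweak scale v_EW in GeV
M = 1e14       # superheavy scale M in GeV

tree = v ** 2 / M                                    # tree-level seesaw: v_EW^2 / M
loop = tree * math.log(M / v) / (16 * math.pi ** 2)  # loop factor times the log
```

For these numbers the log (about 27) only partially offsets the $1/16\pi^2$ suppression, so the one-loop piece stays below, but within an order of magnitude of, the tree-level estimate, consistent with the naive estimate in the question.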
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Lecture: HGX208, F 9:55-11:35 Section: HGX208, T 18:30-20:10 Lecture: H5116, F 8:00-9:40 Section: HGW2403, F(e) 18:30-20:10 Textbook: https://doi.org/10.1017/CBO9781107050884 Lecture: HGX205, M 18:30-21 Section: HGW2403, F 18:30-20

Exercise 01 Prove that \(\neg\Box(\Diamond\varphi\wedge\Diamond\neg\varphi)\) is equivalent to \(\Box\Diamond\varphi\rightarrow\Diamond\Box\varphi\). What have you assumed? Define strategy and winning strategy for modal evaluation games. Prove the Key Lemma: \(M,s\vDash\varphi\) iff V has a winning strategy in \(G(M,s,\varphi)\). Prove that modal evaluation games are determined, i.e. either V or F has a winning strategy. And all exercises for Chapter 2 (see page 23, open minds).

Exercise 02 Let \(T\) with root \(r\) be the tree unraveling of some possible world model, and \(T’\) be the tree unraveling of \(T,r\). Show that \(T\) and \(T’\) are isomorphic. Prove that the union of a set of bisimulations between \(M\) and \(N\) is a bisimulation between the two models. We define the bisimulation contraction of a possible world model \(M\) to be the “quotient model”. Prove that the relation linking every world \(x\) in \(M\) to its equivalence class \([x]\) is a bisimulation between the original model and its bisimulation contraction. And exercises for Chapter 3 (see page 35, open minds): 1 (a) (b), 2.

Exercise 03 Prove that modal formulas (under possible world semantics) have the ‘Finite Depth Property’. And exercises for Chapter 4 (see page 47, open minds): 1 – 3.

Exercise 04 Prove the principle of Replacement by Provable Equivalents: if \(\vdash\alpha\leftrightarrow\beta\), then \(\vdash\varphi[\alpha]\leftrightarrow\varphi[\beta]\). Prove the following statements. “For each formula \(\varphi\), \(\vdash\varphi\) is equivalent to \(\vDash\varphi\)” is equivalent to “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable”.
“For every set of formulas \(\Sigma\) and formula \(\varphi\), \(\Sigma\vdash\varphi\) is equivalent to \(\Sigma\vDash\varphi\)” is equivalent to “for every set of formulas \(\Sigma\), \(\Sigma\) being consistent is equivalent to \(\Sigma\) being satisfiable”. Prove that “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable” using the finite version of the Henkin model. And exercises for Chapter 5 (see page 60, open minds): 1 – 5.

Exercise 05 Exercises for Chapter 6 (see page 69, open minds): 1 – 3.

Exercise 06 Show that “being equivalent to a modal formula” is not decidable for arbitrary first-order formulas. Exercises for Chapter 7 (see page 88, open minds): 1 – 6. For exercise 2 (a) – (d), replace the existential modality E with the difference modality D. In clause (b) of exercise 4, “completeness” should be “correctness”.

Exercise 07 Show that there are infinitely many non-equivalent modalities under T. Show that GL + Id is inconsistent and Un proves GL. Give a complete proof of the fact: in S5, every formula is equivalent to one of modal depth \(\leq 1\). Exercises for Chapter 8 (see page 99, open minds): 1, 2, 4 – 6.

Exercise 08 Let \(\Sigma\) be a set of modal formulas closed under substitution. Show that \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W,R,V’),w\vDash\Sigma\] holds for any valuations \(V\) and \(V’\). Define a \(p\)-morphism between \((W,R),w\) and \((W’,R’),w’\) as a “functional bisimulation”, namely a bisimulation regardless of valuation. Show that if there is a \(p\)-morphism between \((W,R),w\) and \((W’,R’),w’\), then for any valuations \(V\) and \(V’\), we have \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W’,R’,V’),w\vDash\Sigma.\] Exercises for Chapter 9 (see page 99, open minds).

Exercise the last Exercises for Chapters 10 and 11 (see pages 117 and 125, open minds).

A friend asked on Zhihu: set theory can be viewed as a theory in first-order logic; the syntactic and semantic notions of first-order logic can all be defined within set theory, and theorems about first-order logic can be viewed as theorems of set theory. Yet when first-order logic is set up, many definitions are apparently stated in terms of set theory, while set theory itself is in turn viewed as a theory in first-order logic. Isn't this suspiciously a circular definition?
My answer on Zhihu is as follows: Continue reading “The relationship between set theory and first-order logic?” (集合论和一阶逻辑的关系?)
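Returning to Exercise 01: the Key Lemma identifies truth at a world with V having a winning strategy in the evaluation game, and that game mirrors the usual recursive truth definition. A minimal sketch of that recursion (the tuple encoding of formulas and all names here are my own assumptions, purely illustrative):

```python
# minimal sketch: a model is a triple (W, R, V) where R maps each world to
# its set of successors and V maps each proposition letter to the set of
# worlds where it is true; formulas are nested tuples (my own encoding)
def holds(model, w, phi):
    W, R, V = model
    op = phi[0]
    if op == 'p':   return w in V[phi[1]]
    if op == 'not': return not holds(model, w, phi[1])
    if op == 'and': return holds(model, w, phi[1]) and holds(model, w, phi[2])
    if op == 'box': return all(holds(model, v, phi[1]) for v in R[w])  # F picks the successor
    if op == 'dia': return any(holds(model, v, phi[1]) for v in R[w])  # V picks the successor
    raise ValueError(op)

# a three-world example: 1 -> 2, 1 -> 3, and q true only at world 2
model = ({1, 2, 3}, {1: {2, 3}, 2: set(), 3: set()}, {'q': {2}})
```

The game reading is in the comments: at a diamond V (the verifier) chooses a successor, at a box F (the falsifier) does, which is exactly why truth coincides with V having a winning strategy.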
I came across Shimura's (1971) notes about coset representatives of the congruence subgroups $ \Gamma_0(N) $. He first proves that the index in the modular group $\Gamma$ is \begin{equation} [\Gamma : \Gamma_0(N)]=N \cdot \prod_{p|N} (1+p^{-1} ) \end{equation} Then he comes up with a set of coset representatives for $\Gamma_0(N)$ in $\Gamma$ made in this way: we first choose pairs $(c,d)$ of positive integers such that \begin{equation} (c,d)=1, \qquad d|N, \qquad 0 < c \le N/d \end{equation} then for each pair we fix integers $a,b$ such that $ad-bc=1$. Our list of coset representatives is made of the matrices with such entries. However, let us take for example $N=12$, where we know the index is 24 and thus this is the cardinality of the set of coset representatives. Using the rule above, I only find 22 coset representatives, namely the ones corresponding to the following $(c,d)$ pairs: $$(1,1),(2,1),\dots,(12,1),(1,2),(3,2),(5,2),(1,3),(2,3),(4,3),(1,4),(3,4),(1,6),(1,12).$$ I also tried to run SAGE and it gives me 24 coset representatives, but they seem redundant; for example $[[1, 0] [2, 1]]$ and $[[1, 2][2, 5]]$ are listed as different coset representatives, but $$\begin{pmatrix} 1 & 0 \\ 2 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}=\begin{pmatrix} 1 & 2 \\ 2 & 5 \end{pmatrix}$$ and thus it seems to me that these 2 matrices belong in fact to the same coset. Something is clearly wrong, I hope you can help me.
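One sanity check worth running (a sketch; I'm using the standard criteria that $A$ and $B$ represent the same right coset $\Gamma_0(N)g$ iff $AB^{-1}\in\Gamma_0(N)$, and the same left coset $g\,\Gamma_0(N)$ iff $A^{-1}B\in\Gamma_0(N)$) is which side the two SAGE matrices agree on:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):  # inverse of a matrix in SL_2(Z) (det = 1)
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

def in_gamma0(A, N):  # membership in Gamma_0(N): lower-left entry = 0 mod N
    return A[1][0] % N == 0

A = [[1, 0], [2, 1]]
B = [[1, 2], [2, 5]]
N = 12
same_right = in_gamma0(mul(A, inv(B)), N)  # same right coset Gamma_0(N)g ?
same_left = in_gamma0(mul(inv(A), B), N)   # same left coset g Gamma_0(N) ?
```

For these two matrices the left-coset test passes while the right-coset test fails ($AB^{-1}$ has lower-left entry 8, not divisible by 12), so whether the SAGE list is redundant depends on which side its cosets are taken on.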
Ex.14.1 Q4 Statistics Solution - NCERT Maths Class 10 Question Thirty women were examined in a hospital by a doctor and the number of heart beats per minute was recorded and summarised as follows. Find the mean heart beats per minute for these women, choosing a suitable method. Number of heart beats per minute \(65 – 68\) \(68 – 71\) \(71 – 74\) \(74 – 77\) \(77 – 80\) \(80 – 83\) \(83 – 86\) Number of women \(2\) \(4\) \(3\) \(8\) \(7\) \(4\) \(2\) Text Solution What is known? The heart beats per minute of \(30\) women. What is unknown? The mean heart beats per minute for these women. Reasoning: We will use the Step-deviation Method to solve this question, because the data values are large and the method is convenient to apply when all the \(d_i\) have a common factor. Sometimes when the numerical values of \(x_i\) and \(f_i\) are large, finding the product of \(x_i\) and \(f_i\) becomes tedious. We can do nothing about the \(f_i,\) but we can change each \(x_i\) to a smaller number so that our calculations become easy, by subtracting a fixed number from each of the \(x_i.\) The first step is to choose one among the \(x_i\) as the assumed mean and denote it by \(‘a’\). Also, to further reduce our calculation work, we may take \(‘a’\) to be that \(x_i\) which lies in the centre of \(x_1, x_2, \ldots, x_n,\) and choose \(a\) accordingly. The next step is to find the difference \(‘d_i’\) between \(a\) and each of the \(x_i,\) that is, the deviation of each \(x_i\) from \(‘a’\): \(d_i = x_i - a.\) The third step is to find \(‘u_i’\) by dividing each \(d_i\) by the class size \(h\): \(u_i = \frac{{d_i}}{h}.\) The next step is to find the product of each \(u_i\) with the corresponding \(f_i,\) and take the sum of all the \(f_iu_i.\)
Now put the values in the formula below. Mean, \(\overline x = a + \left(\frac{{\Sigma f_i u_i}}{\Sigma f_i}\right) \times h\) Steps: We know that Class mark, \( x_i = \frac{{\text{Upper class limit + Lower class limit}}}{2}\), Class size \(h=3\), and we take the assumed mean \( a= 75.5.\)

Number of heart beats per minute | No. of women \((f_i)\) | \((x_i)\) | \(d_i = x_i - a\) | \(u_i = \frac{d_i}{h}\) | \(f_iu_i\)
\(65-68\) | \(2\) | \(66.5\) | \(-9\) | \(-3\) | \(-6\)
\(68-71\) | \(4\) | \(69.5\) | \(-6\) | \(-2\) | \(-8\)
\(71-74\) | \(3\) | \(72.5\) | \(-3\) | \(-1\) | \(-3\)
\(74-77\) | \(8\) | \(75.5\,(a)\) | \(0\) | \(0\) | \(0\)
\(77-80\) | \(7\) | \(78.5\) | \(3\) | \(1\) | \(7\)
\(80-83\) | \(4\) | \(81.5\) | \(6\) | \(2\) | \(8\)
\(83-86\) | \(2\) | \(84.5\) | \(9\) | \(3\) | \(6\)
Total | \(\Sigma f_i=30\) | | | | \(\Sigma f_iu_i=4\)

From the table, we obtain \(\Sigma f_i=30\) and \(\Sigma f_i u_i=4.\) \[\begin{align} \operatorname{Mean}\,\,(\overline{{x}}) &={a}+\left(\frac{\Sigma {f}_{{i}} {u}_{{i}}}{\Sigma {f}_{{i}}}\right) {h} \\ \overline{{x}} &=75.5+\left(\frac{4}{30}\right) 3 \\ \overline{{x}} &=75.5+\frac{{2}}{5} \\\overline{{x}} &=75.5+0.4 \\ \overline {{x}}&=75.9 \end{align}\] Hence, the mean heart beats per minute for these women is \(75.9.\)
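The step-deviation arithmetic above can be checked in a few lines of Python (a sketch; the variable names are mine):

```python
f = [2, 4, 3, 8, 7, 4, 2]                        # frequencies f_i
x = [66.5, 69.5, 72.5, 75.5, 78.5, 81.5, 84.5]   # class marks x_i
a, h = 75.5, 3                                   # assumed mean and class size

u = [(xi - a) / h for xi in x]                   # u_i = (x_i - a) / h
mean = a + sum(fi * ui for fi, ui in zip(f, u)) / sum(f) * h
# mean = 75.5 + (4/30)*3 = 75.9
```

Note that the \(u_i\) come out as the small integers \(-3,\dots,3\), which is exactly the simplification the method is after.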
If you haven’t seen the previous blog, or aren’t familiar with Hoeffding bounds, I suggest you read about them. This blog goes head first into the continuation of the previous blog here. A hypothesis can be thought of as a function, though it isn’t necessarily one. A function maps something to something else, like $f: \Re^n \to \Re$. For every input in the domain, we know there exists a single output. However, in our world of unknowns, it’s not known whether a point will map to more than one value, depending on the underlying noise. Our definition of noise is the unknown influence of the outside environment that we failed to capture in our data mining. This noise makes the deterministic idea of a function fall apart, so instead of: We now take the stochastic analog: Which gives us a probability distribution over all outputs. Instead of a mapping $f: \Re^n \to \Re$, for example, it is now $X: \Omega \to \Re$, where $\Omega$ is the set of all events. A hypothesis in our case is just a mathematical model that tries to map x’s to y’s with a probability associated with it, or a transformation of a random variable. Hypotheses are motivated with reasonable assumptions: And the list goes on. But be careful, you can’t say “linear models” is a hypothesis. It’s too broad - we need to specify the specific weights, i.e. “linear model with coefficients $x_0 = 1$, $x_1 = 0.2$, etc.”. Thus, we call “linear models” a hypothesis set, containing many hypotheses. Hoeffding, as we talked about before, bounds the difference between the empirical risk and the expected risk of a hypothesis, $f$. What we managed to get was: This gives us confidence about whether our model, trained on our training data, will be able to generalize and express the underlying data distribution, $x, y \sim p_\theta$. This is great and all, but this is only one hypothesis. For a general class of hypotheses, like support vector machines, how can we make a similar argument?
The most basic way to plug Hoeffding into a general class of hypotheses, which we will denote $ \mathcal{H} $, is to assume that there exists no overlap between the bad hypotheses (I will elaborate on what overlap means). A bad hypothesis is one where: for some $\epsilon$. This event can happen to any of the hypotheses in our set of hypotheses. So if our hypothesis set has cardinality $|\mathcal{H}| = M$, then we have $M$ of these events that we have to avoid. If we define the event of the i-th hypothesis being bad as Then the event that none of the hypotheses are bad is $ \bigcap_{i=0}^{M} B_i^c $, which translates to “The 1st hypothesis is NOT bad AND the 2nd hypothesis is NOT bad AND…” If we apply De Morgan’s law to this, we get: which has a much more obvious probability: due to the assumption that the bad events are disjoint, which is a crude assumption. Assuming they are disjoint means that we add up the probability of each one without accounting for the overlap. Recall the inclusion-exclusion principle: Here, we are assuming that $P(A \cap B) = 0$. If we didn’t use this assumption, then it’d get ugly. Just for the sake of illustration, we would get $\binom{2}{1}P(B_i) - \binom{2}{2}P(B_i \cap B_j)$ for 2 events, and $\binom{3}{1}P(B_i) - \binom{3}{2}P(B_i \cap B_j) + \binom{3}{3}P(B_i \cap B_j \cap B_k)$ for 3 events… So we could exploit the overlaps to get a tighter bound, but it’s way too resource-consuming for now. Thus, we use Hoeffding on each individual disjoint event and get: Remember that we had an upper bound on $P(B_i)$, so the negative sign means we flip the inequality here. This means that the greater the cardinality of our hypothesis class, the harder it is to bound the PAC likelihood. You may be thinking: “But don’t linear models and neural networks and stuff have infinite cardinality?” And you’re right.
We will get results for that, but hold on, because we will set up the machinery for it :) Now let’s do some analysis about what the above means. Consider our previous notation, but replace $\sum_{i=0}^M P(B_i)$ with $\delta$. We get the following inequality: $1 - \delta \geq 1 - 2Me^{-2N\epsilon^2}$, or $2Me^{-2N\epsilon^2} \geq \delta$. We solve the above equation to get: Once again, solve the equation, and we get: And we get similar observations as above. So far, we’ve used the Hoeffding bound to get a good estimate of how far apart the empirical risk and the expected risk are. Let’s now denote $f$ as the hypothesis we choose to minimize the empirical risk. Can we say something about the expected risk of $f$ compared to that of the best hypothesis $f^*$? The argument is pretty subtle, so I’ll break it down into 4 parts: We string these inequalities together: since $R_N(f^*) - R_N(f) \geq 0$. Thus, we have just bounded $R(f) - R(f^*) \leq 2\epsilon$. I thought this proof was pretty magical when I first saw it, so definitely take a second look if you’re not sure. Now, it’s time for the biggest result, plugging in everything we’ve seen: We know that $R(f) - R(f^*) \leq 2\epsilon \leq 2 \sqrt{\frac{\log(\frac{2M}{\delta})}{2N}}$, so: $R(f) \leq \left[\inf_{f^* \in \mathcal{H}}R(f^*)\right] + 2 \sqrt{\frac{\log(\frac{2M}{\delta})}{2N}}$ This single equation is the holy grail of learning theory. In the next section, when we talk about Vapnik-Chervonenkis dimensions, we will still use this equation, but with a small twist. We can also observe the bias-variance tradeoff in this single equation… If we use a richer class, say a neural network over a linear model, our $\mathcal{H}$ will be a large set, and thus we can’t get a tight bound. However, the optimal risk will also hopefully be lower. We increase the complexity of our model, the set size becomes larger [increase risk], but the $\inf R(f^*)$ is smaller [decrease risk]. Thus, using a neural net by itself won’t help much.
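A quick simulation can make the union-bound argument concrete. Each “hypothesis” below is a toy of my own construction: its per-sample loss is a Bernoulli coin with unknown bias $p_i$ (its true risk), its empirical risk is the sample average, and we count how often any of the $M$ empirical risks strays more than $\epsilon = \sqrt{\log(2M/\delta)/(2N)}$ from its true risk:

```python
import math
import random

random.seed(1)
M, N, delta = 20, 1000, 0.05
eps = math.sqrt(math.log(2 * M / delta) / (2 * N))  # solve 2M e^{-2N eps^2} = delta

trials = 100
violations = 0
for _ in range(trials):
    bad = False
    for _ in range(M):
        p = random.random()                                   # a hypothesis's true risk
        emp = sum(random.random() < p for _ in range(N)) / N  # its empirical risk
        if abs(emp - p) > eps:
            bad = True                                        # this hypothesis was "bad"
    violations += bad
rate = violations / trials
# the union bound promises rate <= delta; in practice it lands far below,
# which is exactly the looseness the overlap discussion above points at
```

The observed violation rate sits well under $\delta$, illustrating that ignoring the overlaps makes the bound valid but conservative.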
Regularization saves the day here. It’s a way to control the complexity of our model [variance] while still decreasing risk. This situation is low bias, high variance. We decrease the complexity of our model, the set size becomes smaller [decrease risk], but the $\inf R(f^*)$ is larger [increase risk]. Thus, using a linear model can’t get us to the best solution. This situation is high bias, low variance. BTW, this is one way of looking at bias-variance. I learned it by completely expanding the expected risk, so if you’re lost/looking for a more elegant derivation, then Wikipedia is for you, or check out my professor’s book: Although we have retrieved something concrete here, the resulting bound is quite pessimistic for now. Why? Because it cannot be used even for linear models, let alone neural nets! The set of hypotheses for linear models in $\Re^d$ space is not just infinite, it’s uncountable! How can we ever hope to use a uniform bound on such a large hypothesis class? Enter VC dimensions! They’ll be the main topic for next time and, together with our grand equation, the single most important theorem in learning theory.
the measurement affects the position of the body

To be clear, a so-called measurement doesn't reveal some preexisting value; it is an interaction that happens continuously over time and leaves the resulting state in one of many fixed final states, each of which has a real number associated with it.

mean that the position of the radiating body isn't certain (even theoretically) only because of your measuring

This isn't what the uncertainty principle is about. I have a good answer about what the uncertainty principle is at https://physics.stackexchange.com/a/169757 and an essential aspect is that the uncertainty principle relates two uncertainties. It says that if the order in which you do two so-called measurements matters (if doing A then B then A again can give different results, compared with doing A twice then B, which always has the second A give the same result as the first A), then there might be a lower bound on the product of the uncertainties of the two measurements. If the product of the two uncertainties has a lower bound and one of the uncertainties is low, then the other one has to be high. And that is the real effect of the uncertainty principle. It tells you about a trade-off. You could choose to have a very low uncertainty for one kind of measurement, but eventually, when the uncertainty for that measurement gets low enough, if you keep decreasing it then the other measurements that aren't compatible with it can start to have high uncertainties. And to be clear, when I say decrease the uncertainty, I mean get up and walk away from that lab and find a lab where the states are different states that have less uncertainty in that measurement. This might still be confusing, so let's talk about what uncertainty is. First you need a state. Lots of copies of the state, many systems prepared to be in the same state. Then you can measure a bunch of them. You get lots of results. They might have a mean and a standard deviation.
Even better, the sample mean and sample standard deviation might come from a probability distribution with a population mean and a population standard deviation. That population standard deviation is what we call the uncertainty. It depends on the state as well as the thing you choose to measure. The uncertainty principle isn't really talking about the uncertainty of one measurement; it is talking about measurements of two different things. Look at $$\Delta x\,\Delta p\geq \hbar/2.$$ See how it has a $\Delta x$? That's the population standard deviation of measuring the position $x.$ See how it has a $\Delta p$? That's the population standard deviation of measuring the momentum $p.$ It says the product can't be too small. The $\Delta x$ and $\Delta p$ can each be small. You can pick a state where $\Delta x$ is say $10^{-20}\,m$ and you can pick a state where $\Delta p$ is say $10^{-20}\,kg\, m/s$, but you can't pick a state where both standard deviations are that small. It says that way back when you picked a state there was a trade-off. The results of one measurement could be made to have a small standard deviation, and the results of the other measurement for the same state would then have to have a larger one. Sometimes both standard deviations are large. That happens for some states. But they can't both be small. Another thing that can happen is that a state can fail to even have a standard deviation or even a mean. This happens, for instance, if one of the measurements has a zero standard deviation; then the other one won't even have a standard deviation. There is nothing wrong with not having a mean and not having a standard deviation. However, it is worth checking whether what you proposed can be done in the lab without using an infinitely big piece of equipment or one that can create arbitrarily high energies.

the uncertainty in position will always be non-zero.

The uncertainty in position can be zero.
But this requires arbitrarily high energies, and the momentum distribution has every momentum equally likely from $-\infty$ to $+\infty$, so the momentum uncertainty is in some sense infinite because every momentum is equally likely. It has no mean, and so you can't even define the standard deviation (since that depends on the mean). In a loose sense the standard deviation is infinite. Similarly, the uncertainty in momentum can be zero. But this requires an infinite space for your particle to spread out in. The position distribution has every position equally likely from $-\infty$ to $+\infty$, and so the position uncertainty is in some sense infinite because every position is equally likely. It has no mean, and so you can't even define the standard deviation (since that depends on the mean). In a loose sense the standard deviation is infinite. The real point is that when you choose states with a really low uncertainty in one thing, that same state ends up having a high uncertainty in another thing. That's the uncertainty principle.

Is there some possible way to modify this model to apply always in quantum mechanics?

You can generalize the uncertainty principle to any two measurements. If the two measurements are compatible (i.e. A then B then A then B again always gives the same results, in the sense that you get the same thing for your A measurement the second time and the same thing the second time for your B measurement), then the uncertainty in each can be as low as you want. Otherwise there is a lower bound on the two. However, the lower bound actually depends on the mean of something else. So even when they aren't compatible, sometimes they can still both be small. That doesn't happen for position and momentum, since the thing whose mean you want is something that always has the same mean for every state.

If no, is there some good model what describes the origin of the uncertainity?

Yes.
The Schrödinger equation has enough information to show you exactly where uncertainty comes from, and you can derive the uncertainty principle from it. To see uncertainty, a cheap way is to just look at a wavefunction and its Fourier transform and note that each gives the distribution for the position and momentum measurements respectively. So then you get the quantum uncertainty from the acoustic one: to have a chirp you need lots of pitches, and to have a good pitch you have to have a long tone. But that jumps over how you get probability distributions from a state. The full answer for that requires answering the measurement problem (which you did tag, so I'll mention it). I'll talk about the simplest measurement, a Stern-Gerlach measurement of a spin-1/2 particle. It is the simplest because it only gives two possible results, so it always has a mean and a standard deviation. And the results can be computed with the Schrödinger equation and nothing else. Just write down the Hamiltonian for an inhomogeneous magnetic field inside the device. So you have a device where beams come in and there are two places beams can come out: an up one and a down one. If you have three devices, you could bolt one down and then put one of the others at each of the two output ends, so each output of the bolted one is the input of another one. If you do this, then all the ones that go in the up box come out of the up end of that box, and all the ones that go in the down box come out of the down end of that box. This is an example of a measurement being compatible with itself: measuring with it twice always gives the same result the second time as you got the first time. So up the first time means you get up the second time. And down the first time means you get down the second time. The device usually splits an incoming beam into two beams, one going up and one going down.
But it changes the result for each outgoing beam so that it becomes one of the special beams that doesn't get split and comes out just one end. Why do we say it changes things? Because if we put three in a row, with the second on the up output of the first and the third on the down output of the second, then the third one never has anything come out (once it went up it always went up). So the output of the first box really is a thing with the property that it always goes up. But if you rotate that second box so it still gets the up output of the first box but now sends the beams left and right, then no matter which one (left or right) you attach the third box to, beams come out of both the up and the down parts of the third, unrotated box. So that property of always giving up was destroyed by the second box. So the rotated devices are incompatible with each other. And since the output of the first box had the property that it gives the same result (up) and then didn't have that property after it went through the second box, the second box destroyed that property and so changed the particle. Every device changes a particle unless it happens to already be in one of the few kinds of output states that device allows. So we know that so-called measurement changes things. And it changes things into one of the few states that that device doesn't change. And if there isn't a state that both devices leave alone, then the order you do them in matters, since they will change it back and forth. Like if you had two remotes for a TV and two people that really want to make the TV be on different stations. Compatible results would be like someone that only cares about volume while the other cares about the station; eventually you get to a state that neither will change. So we know that we can get more than one result. What about the standard deviations? For that you need to know the relative frequency of getting each result. We have two results.
Now you can put a detector in front of the incoming beam and it will fire at a certain rate. That rate depends on the current density and the width of the beam, so it depends on the current. Each separate beam has its own current density and its own width. And the Schrödinger equation accurately predicts the current density at each point (it predicts the whole wave, from which you can get the current density at each point), so you can get the current. So just writing the initial input state and the Hamiltonian for the device and then evolving with the Schrödinger equation gets you the rate at which a detector at any device location fires. There are shortcuts. The relative size of the square of the projection of the spin state onto the eigenstates of an operator tells you the relative frequencies of the rates. And you can predict how the detector works with the Schrödinger equation. But it works the same way, by splitting the wave; only now you have two things, the particle and the detector, so we have to be honest that for the Schrödinger equation the wave lives in configuration space. So the split happens in configuration space. And what happens is that originally just the position and spin of the particle changed; then the detector changes too, and you end up having two groups of configurations that each act like they are the only group. And forever will. These effectively separate worlds can have other devices that count the relative frequencies of repeated measurements, and the group of configurations will become almost entirely focused on a reading near the population average. And since detectors have to have a certain insensitivity to noise from other sources, they are also insensitive to the much smaller group of configurations where the reading isn't near the average. This is similar to a statistical effect of the law of large numbers.
It isn't the law of large numbers, since there isn't a fixed sample space (remember how measurements change things; this means no fixed sample space, so regular probability theory does not apply).
In order to prove a certain function to be partially computable, I need to show an $\mathbb S$-program that computes it. I could really use the predicate $X \in B$ in my program to draw my conclusion. To give you an idea of what I am dealing with, here is one of my problems: Give an infinite set $B$ such that $\Phi(x,x)\uparrow$ for all $x \in B$ and such that $$H(x) = \begin{cases}1 \text{ if } \Phi(x,x)\downarrow \\ 0 \text{ if } x \in B \\ \uparrow \text{ otherwise}\end{cases}$$ Show that $H(x)$ is partially computable. I am wondering if membership in an infinite set is decidable and can therefore be used to write $\text{IF } X \in B$ in such a program. Am I allowed? Edit: the notation $\Phi(x,x)\uparrow$ means the function is undefined.
Here's a more compact and mathematical description of what is going on. Let $a$ and $b$ be the input, already reduced modulo $m$, so $a < m$ and $b < m$. (Code-wise, this means after the b %= m line.) We want to calculate $ab \mod m$, which is to say, setting $x=ab$, we want to find $r_x$ such that $x = q_xm + r_x$ for some $q_x$.

The Non-Overflowing Case

In the non-overflowing case, we could just calculate the modulus and be done with it. Instead, the code calculates: $$y = \left\lfloor\frac{x}{m}\right\rfloor = \left\lfloor q_x + \frac{r_x}{m}\right\rfloor = q_x + \left\lfloor\frac{r_x}{m}\right\rfloor = q_x$$ Now $x - ym = q_xm + r_x - q_xm = r_x$. I elided a concern here, though. It could be that $x$ doesn't fit in the mantissa of our floating point variable. As long as $m$ fits, though, this will not be a problem, as $x = ab < m^2$. We'll end up with $x' = x + e$ where $e$ is some round-off error whose magnitude is less than $m$. We proceed as before: $$y = \left\lfloor\frac{x'}{m}\right\rfloor = \left\lfloor q_x + \frac{r_x+e}{m}\right\rfloor = q_x + \left\lfloor\frac{r_x+e}{m}\right\rfloor$$ In this case, we can't eliminate the $\left\lfloor\frac{r_x+e}{m}\right\rfloor$ because, while $r_x < m$ and $|e| < m$, it may be the case that $r_x+e > m$ or $r_x+e < 0$, though it will certainly be (strictly) between $-m$ and $2m$, so $\left\lfloor\frac{r_x+e}{m}\right\rfloor$ is either $0$ or $\pm1$. Now $$x - ym = q_x m + r_x - q_x m - \left\lfloor\frac{r_x+e}{m}\right\rfloor m = r_x - \left\lfloor\frac{r_x+e}{m}\right\rfloor m$$ Performing the modulus operation now will get rid of that extra term; however, due to the convention C chooses, (-1)%m = -1. To get a convention where we always return a positive number, we can add $m$ to the result if it is negative.

The Overflowing Case

Let's assume that we are doing the integer arithmetic mod $N$, e.g. $2^{64}$, and let's assume $ab > N$, which means $m^2 > N > m$. Now the multiplication is going to wrap. 
We'll write $x = ab = z + kN$ for an integer $k < m$. That means the computation a*b will give the number $z$. The floating point computation is as before, assuming, again, that $m$ fits in the mantissa (which may require 80-bit extended precision floats for larger $m$). For the modulus, define $z = q_z m + r_z$ and $kN = q_N m + r_N$ so $x = q_z m + q_N m + r_z + r_N$. As before, storing $x$ as a floating point variable may produce some round-off error $|e| < m$, so again we have $x' = x + e$. If we do the same calculation as before it looks like: $$\begin{align}z - \left\lfloor\frac{x'}{m}\right\rfloor m & = q_zm + r_z - q_z m - q_N m - \left\lfloor\frac{r_z+r_N+e}{m}\right\rfloor m \\& = r_z - q_N m - \left\lfloor\frac{r_z+r_N+e}{m}\right\rfloor m \\& = r_z + r_N - \left\lfloor\frac{r_z+r_N+e}{m}\right\rfloor m \\\end{align}$$ I've pulled a rabbit out of the hat here. We're doing addition mod $N$, and mod $N$ we have $q_N m + r_N = kN = 0$, so $q_N m = -r_N$. As before, we mod by $m$ to get $r_z + r_N = x \mod m$, and, as before, the C convention may lead to this being negative, which will need correcting. There's one potential remaining issue: $\left\lfloor\frac{r_z+r_N+e}{m}\right\rfloor$ could be $2$. This causes no problem unless $2m > N$, in which case you'd get the wrong answer. For $N = 2^{64}$ this means $m > 2^{63}$. (As an aside, if $2m = N$ it causes no issue since we'll end up with $0$.) If we configure the floating point hardware to round down so that $e \leq 0$, this case will not come up. (Though don't forget my restriction that the mantissa can hold $m$!)

Connecting this to most/least significant bits

To specifically address the part that you quoted, consider an integer $B$ greater than $1$, which we'll think of as a base, as in "base $10$". In the usual $B=10$ case, the number $21$ is represented as $21 = 2B+1$. In a floating point representation we'd write this as $2.1\times B^1$. 
Now let's say we wanted to multiply two (positive) single base-$B$ digit numbers; the result would require at most two digits, say $cB+d$ where (as required by a [standard] base-$B$ representation) $0 \leq c < B$ and $0\leq d < B$. If we're restricted to only being able to store one base-$B$ digit of the result, there are two obvious choices, either $c$ or $d$. Choosing $d$ is equivalent to working mod $B$, as $cB + d = d\mod B$; this is what happens with integer arithmetic. (Incidentally, at the assembly level, integer multiplication often does produce both of these digits.) Floating point arithmetic, on the other hand, effectively corresponds to choosing $c$, but compensating by incrementing the exponent. In effect, we represent the result as $c.d\times B^1$, but since we can store only one base-$B$ digit, this becomes just $c\times B^1$. (In practice, we'll consider numbers as multi-digit numbers in a small base (i.e. 2), rather than 1- or 2-digit numbers in a large base. This allows us to save some of the higher digits of $d$ if they aren't needed to store $c$, but in the worst-case scenario all of $d$ is lost. None of $c$ is lost until we start running out of room in the exponent. For the code above, this is not an issue.) As long as $m$ can be represented faithfully in the floating point format, the expression $\left\lfloor\frac{ab}{m}\right\rfloor$ can be viewed as extracting that upper digit in base-$m$. You can view the code and math above as the interplay between base-$N$ and base-$m$ representations of a number.

Practicalities

Based on section 5.2.4.2.2 of this draft, the C11 standard appears to only require long double to have a mantissa roughly 33 bits in length. (In particular, it appears to only specify the minimum number of decimal digits that can be faithfully represented.) In practice, most C compilers, when targeting general purpose CPUs and particularly x86-family CPUs, will use IEEE754 types. 
In this case double will effectively have a 53-bit mantissa. x86-family CPUs support an 80-bit format with an effectively 64-bit mantissa, and several, but not all, compilers will map long double to that format when targeting x86. The range of validity of the code depends on these implementation details.
Regularity of 3D axisymmetric Navier-Stokes equations School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China In this paper, we study the three-dimensional axisymmetric Navier-Stokes system with nonzero swirl. By establishing a new key inequality for the pair $(\frac{\omega^{r}}{r},\frac{\omega^{\theta}}{r})$, we get several Prodi-Serrin type regularity criteria based on the angular velocity, $u^{\theta}$. Moreover, we obtain the global well-posedness result if the initial angular velocity $u_{0}^{\theta}$ is appropriately small in the critical space $L^{3}(\mathbb{R}^{3})$. Furthermore, we also get several Prodi-Serrin type regularity criteria based on one component of the solutions, say $\omega^3$ or $u^3$. Mathematics Subject Classification: Primary: 35K15, 35K55; Secondary: 35Q35, 76A05. Citation: Hui Chen, Daoyuan Fang, Ting Zhang. Regularity of 3D axisymmetric Navier-Stokes equations. Discrete & Continuous Dynamical Systems - A, 2017, 37 (4): 1923-1939. doi: 10.3934/dcds.2017081 
So, here's a question and a solution to part b). I do not understand why they make $y^{1/2}$ belong to the interval $[0,1)$ and then separately to the interval $[1,3)$. You have $X\sim \mathcal U(-1;3)$ and $Y=X^2$. Now $Y\in(0;1)$ when $X\in(-1;0)$ and also when $X\in(0;1)$. So this interval for $Y$ is mapped to by two intervals for $X$. I.e., for all $0\leq y\lt 1$ we have $\{Y\leq y\} = \{-\surd y\leq X\leq\surd y\}$. However, $Y\in[1;9)$ when $X \in[1;3)$. So this interval for $Y$ is mapped to by only one interval for $X$. I.e., for all $1\leq y\lt 9$ we have $\{Y\leq y\} = \{-1\leq X\leq\surd y\}$. So clearly we find that: $$F_Y(y)=\begin{cases}0&:&\qquad y\lt 0\\F_X(\surd y)-F_X(-\surd y)&:& 0\leq y<1\\ F_X(\surd y)&:& 1\leq y\lt 9\\1 &:& 9\leq y\end{cases}$$ Comment: This is not a 1-1 transformation. Values of $Y$ in $(0,1)$ originate from values of $X$ in $(-1,0)$ and in $(0,1).$ @GrahamKemp (+1) has given you a formal derivation, in terms of $y,$ that may be easier to follow than the one in the answer key, in terms of $\sqrt{y}.$ By simulating a million values of $X$ sampled from $\mathsf{Unif}(-1,3)$ in R statistical software and squaring them, one can plot a histogram that suggests the density function of $Y,$ which is $f_Y(y) =\frac{1}{4\sqrt{y}},$ for $0 \le y \le 1,$ and $f_Y(y) = \frac{1}{8\sqrt{y}},$ for $1 \le y \le 9.$ Of course, you can get the density function by piece-wise differentiation of the CDF, $F_Y(y).$ Notice that the density function (plotted in red) is 'piece-wise' continuous, but that it is not continuous at $y=0, 1,$ or $9.$ Note: In case it is of interest, the R code for the simulation and plotting is shown below. x = runif(10^6, -1, 3); y = x^2 hist(y, prob=T, br=50, col="skyblue2") curve(.25*x^-.5, 0,1, add=T, lwd=2, col="red") curve(.125*x^-.5, 1,9, add=T, lwd=2, col="red") It is a quirk of the curve procedure in R that the function to be graphed must be expressed in terms of a variable named x. 
The reason is that the CDF is defined as a definite integral and in this case the integration area is composite, so it must be decomposed. Look at the graph: For the blue area, where $y\in [0,1)$: $$F_Y(y)=\mathbb P(X^2\le y)=\mathbb P(-\sqrt{y}\le X\le \sqrt{y})=F_X(\sqrt{y})-F_X(-\sqrt{y})=\int_{-\sqrt{y}}^{\sqrt{y}} \frac14 dx=\frac{2\sqrt{y}}{4}.$$ For the green area, where $y\in [1,9)$: $$F_Y(y)=\mathbb P(X^2\le y)=\mathbb P(-1\le X\le \sqrt{y})=F_X(\sqrt{y})-F_X(-1)=\int_{-1}^{\sqrt{y}} \frac14 dx=\frac{\sqrt{y}+1}{4}.$$
Let's assume the basic nouns of our language to describe the physical world are the members of Lie groups. Okay, this is a pompous-sounding statement and somewhat arbitrary, but my justification is that these objects describe all the continuous symmetries there can be, and almost every clarification of physics using mathematics is done either (1) by viewing a mathematical object from a different standpoint (unification of hitherto seemingly unrelated concepts) or (2) by exploiting symmetries to reduce or get rid of the redundant complexity in a statement. In our continuous manifold descriptions of the physical World, these symmetries are all continuous. So, somewhere in that list of symmetries, we meet $U(1)$, $SU(2)$, $SO(3)$, $U(N)$ and so forth. So we would needfully be doing calculations and simplifications with these objects when we exploit symmetries of a problem. Whether or not we choose to single out an object like: $$\left(\begin{array}{cc}0&-1\\1&0\end{array}\right)\in U(1), SU(2), SO(3), U(N) \cdots$$ and give it a special symbol $i$ where $i^2=-1$ is a "matter of taste", so in this sense the use of complex numbers is not essential. Nonetheless, we would needfully still meet this object and ones like it and would have to handle statements involving such objects when describing physics in a continuous manifold - there's no way around this as it belongs to any full description of symmetries of the World. So in this sense, complex numbers, quaternions, octonions and so forth are all there and essential in such description. Notice that complex numbers and their algebra are wonted to almost everyone in physics, quaternions to somewhat fewer physicists and octonions not really to that many. 
This is simply related to how often the relevant symmetry calculations come up: almost any interesting continuous symmetry involves Lie group objects for which $i^2=-1$ and so we single these out and commit all the rules of their algebra to stop ourselves going outright spare and committed to lunatic asylums writing out their full Lie theoretical representations all the time. Singling out quaternions and doing the same saves some work, but not so much, because quaternions come up in fewer symmetries. By the time we get to octonions, the symmetries wherein they come up are quite seldom, so not that many of us are very adept with their special algebra (me included): we can do the full matrix / Lie calculations without too much pain because we don't do them that often, so we don't notice their octonionhood so readily. Footnote: One can take "Lie Group members" and "Continuous Symmetries" to be the same by dint of: The solution to Hilbert's fifth problem by Montgomery, Gleason and Zippin, i.e. we don't need the concept of manifold nor the concept of analyticity ($C^\omega$) - these "build themselves" from the basic idea of a continuous topological group; The classification of all Lie algebras by Wilhelm Killing (who saw that he could do it, but botched the proof a little) and the great Elie Cartan - so we know what all continuous symmetries look like. Once we have classified all Lie algebras, we can find all possible Lie groups, since every Lie group has a Lie algebra, every Lie algebra can be exponentiated into a Lie group (e.g. through the matrix exponential, since every Lie algebra can be represented as a matrix Lie algebra (Ado's theorem)) and the (global-topological) relationships between Lie groups that have the same Lie algebra are also known.
I am trying to follow the proof of Lemma 9.2 (a certain prerequisite lemma to prove a maximum principle) of the book of Gilbarg and Trudinger, and I am having certain difficulties with an assertion in the proof. It is the following. Let's take a function $u:\Omega\subset\mathbb{R}^n\rightarrow \mathbb{R}$ belonging to the class of $C^2(\Omega)\cap C(\overline{\Omega})$ functions, that takes a positive maximum at a point $y\in \Omega$, and let $k$ be the function whose graph is the cone $K$ with vertex $(y,u(y))$ and base $\partial \Omega \times \{0\}$. Then, the book says that for each supporting hyperplane (defined later) to $K$, there exists a parallel hyperplane tangent to the graph of $u$. Moreover, as far as I understand, it is not only tangent to the graph of $u$ but also a supporting hyperplane of it. I am utterly lost in even knowing how to start proving this; any hint is welcomed. Now, the promised definition. A supporting hyperplane of the graph of a function $u$ at the point $z$ is a hyperplane given by the graph of a function $f:\Omega\rightarrow \mathbb{R}$ of the form $f(x) := u(z) + p \cdot (x-z)$ such that $u(x)\leq f(x)$ $\forall x\in\Omega$. The $\cdot$ represents the inner product in $\mathbb{R}^n$. I am purposely not giving any condition on the set $\Omega$ except possibly being bounded, because I don't know if it is necessary for this claim to be true. I have no problem in assuming it is smooth though, because I have to start somewhere. Apart from that, since I have taken the question from a PDEs book I am tagging it with the PDEs tag. I am not sure which other tags would be appropriate, but I would be thankful if any reader comes up with any and tells me in a comment. Regarding comments: I must add the assumption that $u\leq 0$ on the boundary. 
Possibly another necessary assumption: As far as I could prove below (with the help of the answers and comments given here), it may be necessary to require the cone $K$ to have a base slightly bigger than $\Omega$, for example $\Omega+B_\varepsilon(0)$. I don't have a counterexample showing this assumption is necessary yet.
When I was trying to find closed-form representations for odd zeta-values, I used $$ \Gamma(z) = \frac{e^{-\gamma z}}{z} \prod_{n=1}^{\infty} \Big( 1 + \frac{z}{n} \Big)^{-1} e^{\frac{z}{n}} $$ and rearranged it to $$ \frac{\Gamma(z)}{e^{-\gamma z}} = \prod_{n=1}^{\infty} \Big( 1 + \frac{z}{n} \Big)^{-1} e^{z/n}. $$ As we (formally) have $$\prod_{n=1}^{\infty} e^{z/n} = e^{z + z/2 + z/3 + \cdots} = e^{\zeta(1) z},$$ we can state that $$\prod_{n=1}^{\infty} \Big( 1 + \frac{z}{n} \Big) = \frac{e^{z(\zeta(1) - \gamma)}}{z\Gamma(z)}.\qquad\text{(1)}$$ I then stumbled upon the Wikipedia page of Ramanujan Summation (see the bottom of the page), which I used to set $\zeta(1) = \gamma$ (which was, admittedly, a rather dangerous move. Remarkably, things went well eventually. Please don't stop reading). The $z^3$-coefficient of both sides can now be obtained. Consider \begin{align*} (1-ax)(1-bx) &= 1 - (a+b)x + abx^2\\ &= 1-(a+b)x + (1/2)\bigl((a+b)^2-(a^2+b^2)\bigr)x^2 \end{align*} and \begin{align*}(1-ax)(1-bx)(1-cx) &= 1 - (a + b + c)x\\ &\qquad + (1/2)\Bigl((a + b + c)^2 - (a^2 + b^2 + c^2)\Bigr)x^2\\ &\qquad -(abc)x^3, \end{align*} where \begin{align*}abc &= (1/6)(a + b + c)^3 - (1/2)(a + b + c)(a^2 + b^2 + c^2)\\&\qquad + (1/3)(a^3 + b^3 + c^3).\end{align*} It can be proved by induction that the magnitude of the $x^3$-coefficient of $(1-ax)(1-bx)\cdots(1-nx)$ is equal to \begin{align*}(1/6)&(a + b + c + \cdots + n)^3 - (1/2)(a + b + c + \cdots + n)(a^2 + b^2 + c^2 + \cdots + n^2)\\&\qquad + (1/3)(a^3 + b^3 + c^3 + \cdots + n^3).\qquad\text{(2)}\end{align*} On the right side of equation (1), the $z^3$-term can be found by looking at the $z^3$-term of the Taylor series of $1/(z \Gamma(z)) = e^{\gamma z - \zeta(2)z^2/2 + \zeta(3)z^3/3 - \cdots}$, which is $(1/6)\gamma^3 - (1/2)\gamma\zeta(2) + (1/3)\zeta(3)$. 
We then use (2), with the power sums $\zeta(1)=\gamma$ (the Ramanujan assignment), $\zeta(2)$, and $\sum_{n\geq 1} 1/n^3 = -(1/2)\psi^{(2)}(1)$, to obtain the equality $$ (1/6)\gamma^3 - (1/2)\gamma \zeta(2) - (1/6) \psi^{(2)}(1) = (1/6)\gamma^3 - (1/2)\gamma\zeta(2) + (1/3)\zeta(3)$$ and find that $$\zeta(3) = - (1/2) \psi^{(2)} (1),\qquad\text{(3)}$$ which is a true result that has been known for quite a long time. The important thing here is that I used $\zeta(1) = \gamma$, which isn't really true. Ramanujan assigned a summation value to the harmonic series (again, see Ramanujan Summation), and apparently it can be used to verify results and perhaps to prove other conjectures/solve problems. My first question is: Is this a legitimate way to prove (3)? Generalizing this question: When and how are divergent series and their summation values used in mathematics? What are the 'rules' when dealing with summed divergent series and using them to (try to) find new results? Thanks, Max
Suppose $A$ and $B$ are two $R$-algebras, where $R$ is a commutative ring with $1$. There are natural algebra homomorphisms $a \mapsto a \otimes 1$ and $b \mapsto 1 \otimes b$ from $A$ and $B$ to $A \otimes_{R} B$. Are these homomorphisms injective? Consider the first homomorphism. If $a_1 \neq a_2$ then we want $(a_1 - a_2) \otimes 1 \neq 0$. If there is an $R$-bilinear map $f: A \times B \to M$, where $M$ is some $R$-module, such that $f(a_1 - a_2, 1) \neq 0$, then we are done. But how can one find such an $f$? UPDATE: As was pointed out, it is not true in general. But when is it true? It seems that if $A$ and $B$ are real or complex vector spaces, both homomorphisms are injective. Furthermore, when $A$ is a $\mathbb{Z}$-torsion-free ring and $B$ is a field of characteristic $0$, then $a \mapsto a \otimes 1$ seems to be injective.
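For what it's worth, a standard counterexample to injectivity in general (not taken from the question itself): let $R = \mathbb{Z}$, $A = \mathbb{Z}$ and $B = \mathbb{Z}/2\mathbb{Z}$. Then $A \otimes_R B \cong \mathbb{Z}/2\mathbb{Z}$, and $$2 \otimes 1 = 1 \otimes 2 = 1 \otimes 0 = 0,$$ so $a \mapsto a \otimes 1$ kills every even integer even though $2 \neq 0$ in $A$. Note this is consistent with the pattern observed in the update: here $B$ is a field of characteristic $2$, not $0$.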
I'm having trouble understanding the concept. Can you give me a math example where $P\Rightarrow Q$ is true but $P\Leftrightarrow Q$ is false? Thank you. Set theory example: let $A\subset B$ with $A\ne B$. $$x\in A\implies x\in B$$ but $$x\in B\kern.6em\not\kern -.6em \implies x\in A.$$ Almost all the examples of the other answers are particular cases of this. For real numbers: $$x > 1$$ implies that $$ x^2 > 1 $$ But $x^2 > 1$ does not imply that $x > 1$. For instance, $(-2)^2 = 4 > 1$, but $-2$ is not greater than $1$. $x = 2 \implies x \ge 2$ is true. $x \ge 2 \iff x = 2$ is false. An easy one: all squares are rectangles, but not every rectangle is a square. The fallacy of the converse is something lots of students are guilty of. Remember, to say P implies Q means that if you have P, you have Q. The presence of Q doesn't mean P holds. Another example: differentiability implies continuity. Every differentiable function is continuous on its domain's interior, but a continuous function need not have a derivative anywhere. $x=2$ implies that $x$ is even. But $x$ being even does not imply $x=2$. $$x=-1 \implies x^2 = 1$$ but $$x=-1 \not\Longleftarrow x^2=1$$ because it could be that $x=1$. We have: $$P \leftrightarrow Q \Rightarrow P \rightarrow Q$$ but not $$P \leftrightarrow Q \Leftrightarrow P \rightarrow Q$$ For example, it is always true that $$a\ge 2 \implies a^2\ge 4$$ but the following does not hold (e.g. $a=-3$): $$a\ge 2 \iff a^2\ge 4$$ Imagine the following: $$\begin{align} a&=1\\ \Leftrightarrow a^2&=a\\ \Leftrightarrow a^2-1&=a-1\\ \Leftrightarrow (a-1)(a+1)&=a-1\\ \Leftrightarrow a+1&=1\\ \Leftrightarrow a&=0 \end{align}$$ Where's the mistake? In general, $A\Rightarrow B$ means that, in order for $A$ to be true, it is necessary that $B$ is true, but nothing is necessary in order for $A$ to be false. On the other hand, $A\Leftrightarrow B$ means that, in order for $A$ to be true, it is necessary and sufficient that $B$ is true. 
So, when $A$ is true, so is $B$, and when $A$ is false, so is $B$. $$\mbox{[$(x_n)_n$ bounded sequence in $\Bbb R$] $\implies$ [$(x_n)_n$ has a convergent subsequence]}$$ but the converse fails: just take $$u_{2n}= 2 \quad\text{and}\quad u_{2n+1} = n^2.$$ The subsequence $(u_{2n})_n$ converges but $(u_{n})_n$ is unbounded. In our beginner math classes we often used real-world examples to grasp such concepts before starting with mathematical explanations. For this we used the following example: $$ snow \Rightarrow cold\\ cold \nRightarrow snow $$ So when it's snowing outside it is definitely cold*. But if it's cold outside, you can not be sure that it is snowing. * There are theoretically some exceptions to this statement, but for the sake of understanding the concept, they can be ignored. $x=0 \implies x\neq 1$, but $x=0 \not \Longleftarrow x\neq 1$, because it is possible that $x=2\neq 0$.
This shows you the differences between two versions of the page. monoidal_t-norm_logic_algebras [2010/07/29 15:46] 127.0.0.1 external edit monoidal_t-norm_logic_algebras [2010/09/04 13:43] (current) jipsen Line 4: Line 4: ====Definition==== ====Definition==== - A \emph{monoidal t-norm logic algebra} is a [[ FL$_{ew]]$-algebra } $\mathbf{A}=\langle A, \vee, \wedge, \cdot, 1, \to, 0\rangle$ such that + A \emph{monoidal t-norm logic algebra} is a [[ FLew-algebra ]] $\mathbf{A}=\langle A, \vee, \wedge, \cdot, 1, \to, 0\rangle$ such that $\cdot$ is \emph{prelinear}: $(x\to y)\vee (y\to x)=1$ $\cdot$ is \emph{prelinear}: $(x\to y)\vee (y\to x)=1$
2019-10-09 06:01 HiRadMat: A facility beyond the realms of materials testing / Harden, Fiona (CERN) ; Bouvard, Aymeric (CERN) ; Charitonidis, Nikolaos (CERN) ; Kadi, Yacine (CERN)/HiRadMat experiments and facility support teams The ever-expanding requirements of high-power targets and accelerator equipment has highlighted the need for facilities capable of accommodating experiments with a diverse range of objectives. HiRadMat, a High Radiation to Materials testing facility at CERN has, throughout operation, established itself as a global user facility capable of going beyond its initial design goals. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPRB085 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPRB085 2019-10-09 06:01 Commissioning results of the tertiary beam lines for the CERN neutrino platform project / Rosenthal, Marcel (CERN) ; Booth, Alexander (U. Sussex (main) ; Fermilab) ; Charitonidis, Nikolaos (CERN) ; Chatzidaki, Panagiota (Natl. Tech. U., Athens ; Kirchhoff Inst. Phys. ; CERN) ; Karyotakis, Yannis (Annecy, LAPP) ; Nowak, Elzbieta (CERN ; AGH-UST, Cracow) ; Ortega Ruiz, Inaki (CERN) ; Sala, Paola (INFN, Milan ; CERN) For many decades the CERN North Area facility at the Super Proton Synchrotron (SPS) has delivered secondary beams to various fixed target experiments and test beams. In 2018, two new tertiary extensions of the existing beam lines, designated “H2-VLE” and “H4-VLE”, have been constructed and successfully commissioned. [...] 2019 - 4 p. 
- Published in : 10.18429/JACoW-IPAC2019-THPGW064 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW064 详细记录 - 相似记录 2019-10-09 06:00 The "Physics Beyond Colliders" projects for the CERN M2 beam / Banerjee, Dipanwita (CERN ; Illinois U., Urbana (main)) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; Cholak, Serhii (Taras Shevchenko U.) ; D'Alessandro, Gian Luigi (Royal Holloway, U. of London) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) ; Rae, Bastien (CERN) et al. Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN’s accelerator complex up to 2040 and its scientific infrastructure through projects complementary to the existing and possible future colliders. Within the Conventional Beam Working Group (CBWG), several projects for the M2 beam line in the CERN North Area were proposed, such as a successor for the COMPASS experiment, a muon programme for NA64 dark sector physics, and the MuonE proposal aiming at investigating the hadronic contribution to the vacuum polarisation. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW063 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW063 详细记录 - 相似记录 2019-10-09 06:00 The K12 beamline for the KLEVER experiment / Van Dijk, Maarten (CERN) ; Banerjee, Dipanwita (CERN) ; Bernhard, Johannes (CERN) ; Brugger, Markus (CERN) ; Charitonidis, Nikolaos (CERN) ; D'Alessandro, Gian Luigi (CERN) ; Doble, Niels (CERN) ; Gatignon, Laurent (CERN) ; Gerbershagen, Alexander (CERN) ; Montbarbon, Eva (CERN) et al. The KLEVER experiment is proposed to run in the CERN ECN3 underground cavern from 2026 onward. 
The goal of the experiment is to measure ${\rm{BR}}(K_L \rightarrow \pi^0v\bar{v})$, which could yield information about potential new physics, by itself and in combination with the measurement of ${\rm{BR}}(K^+ \rightarrow \pi^+v\bar{v})$ of NA62. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPGW061 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPGW061 详细记录 - 相似记录 2019-09-21 06:01 Beam impact experiment of 440 GeV/p protons on superconducting wires and tapes in a cryogenic environment / Will, Andreas (KIT, Karlsruhe ; CERN) ; Bastian, Yan (CERN) ; Bernhard, Axel (KIT, Karlsruhe) ; Bonura, Marco (U. Geneva (main)) ; Bordini, Bernardo (CERN) ; Bortot, Lorenzo (CERN) ; Favre, Mathieu (CERN) ; Lindstrom, Bjorn (CERN) ; Mentink, Matthijs (CERN) ; Monteuuis, Arnaud (CERN) et al. The superconducting magnets used in high energy particle accelerators such as CERN’s LHC can be impacted by the circulating beam in case of specific failure cases. This leads to interaction of the beam particles with the magnet components, like the superconducting coils, directly or via secondary particle showers. [...] 2019 - 4 p. - Published in : 10.18429/JACoW-IPAC2019-THPTS066 Fulltext from publisher: PDF; In : 10th International Particle Accelerator Conference, Melbourne, Australia, 19 - 24 May 2019, pp.THPTS066 详细记录 - 相似记录 2019-09-20 08:41 Performance study for the photon measurements of the upgraded LHCf calorimeters with Gd$_2$SiO$_5$ (GSO) scintillators / Makino, Y (Nagoya U., ISEE) ; Tiberio, A (INFN, Florence ; U. Florence (main)) ; Adriani, O (INFN, Florence ; U. Florence (main)) ; Berti, E (INFN, Florence ; U. Florence (main)) ; Bonechi, L (INFN, Florence) ; Bongi, M (INFN, Florence ; U. Florence (main)) ; Caccia, Z (INFN, Catania) ; D'Alessandro, R (INFN, Florence ; U. Florence (main)) ; Del Prete, M (INFN, Florence ; U. Florence (main)) ; Detti, S (INFN, Florence) et al. 
The Large Hadron Collider forward (LHCf) experiment was motivated to understand the hadronic interaction processes relevant to cosmic-ray air shower development. We have developed radiation-hard detectors with the use of Gd$_2$SiO$_5$ (GSO) scintillators for proton-proton $\sqrt{s} = 13$ TeV collisions. [...] 2017 - 22 p. - Published in : JINST 12 (2017) P03023 详细记录 - 相似记录 2019-04-09 06:05 The new CGEM Inner Tracker and the new TIGER ASIC for the BES III Experiment / Marcello, Simonetta (INFN, Turin ; Turin U.) ; Alexeev, Maxim (INFN, Turin ; Turin U.) ; Amoroso, Antonio (INFN, Turin ; Turin U.) ; Baldini Ferroli, Rinaldo (Frascati ; Beijing, Inst. High Energy Phys.) ; Bertani, Monica (Frascati) ; Bettoni, Diego (INFN, Ferrara) ; Bianchi, Fabrizio Umberto (INFN, Turin ; Turin U.) ; Calcaterra, Alessandro (Frascati) ; Canale, N (INFN, Ferrara) ; Capodiferro, Manlio (Frascati ; INFN, Rome) et al. A new detector exploiting the technology of Gas Electron Multipliers is under construction to replace the innermost drift chamber of BESIII experiment, since its efficiency is compromised owing the high luminosity of Beijing Electron Positron Collider. The new inner tracker with a cylindrical shape will deploy several new features. [...] SISSA, 2018 - 4 p. 
- Published in : PoS EPS-HEP2017 (2017) 505 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.505 详细记录 - 相似记录 2019-04-09 06:05 CaloCube: a new homogenous calorimeter with high-granularity for precise measurements of high-energy cosmic rays in space / Bigongiari, Gabriele (INFN, Pisa)/Calocube The direct observation of high-energy cosmic rays, up to the PeV region, will depend on highly performing calorimeters, and the physics performance will be primarily determined by their acceptance and energy resolution.Thus, it is fundamental to optimize their geometrical design, granularity, and absorption depth, with respect to the total mass of the apparatus, probably the most important constraints for a space mission. Furthermore, a calorimeter based space experiment can provide not only flux measurements but also energy spectra and particle identification to overcome some of the limitations of ground-based experiments. [...] SISSA, 2018 - 5 p. - Published in : PoS EPS-HEP2017 (2017) 481 Fulltext: PDF; External link: PoS server In : 2017 European Physical Society Conference on High Energy Physics, Venice, Italy, 05 - 12 Jul 2017, pp.481 详细记录 - 相似记录 2019-03-30 06:08 详细记录 - 相似记录 2019-03-30 06:08 详细记录 - 相似记录
No. String theory's resolution of the old paradox is a sign of the hidden cleverness of string theory. After he read some of my essays on the electron's spin, Tom W. Larkin asked an interesting question: Does string theory resolve the paradox of (post-)classical physics that the electron, if imagined as a spinning ball of a very small radius, has to rotate faster than the speed of light for its spin to be \(\hbar/2\)? One natural, fast, legitimate, but cheap reaction is to say: the electron isn't really a rotating ball. The spin may be carried even by a point-like particle, without any violations of relativity, as QED shows, so the paradox has never been there. Of course, a string theorist is likely to answer in this way, too. Quantum field theory is a limit of string theory, so any explanation that was OK within quantum field theory may be said to be correct within string theory, too. The paradox doesn't exist because the electron isn't a classical ball that gets its mass from the electrostatic self-interaction energy. However, string theory does represent the electron (and other elementary particles) as some kind of an extended object which is qualitatively analogous to the rotating ball, so some version of the "superluminal spinning" paradox may be said to reemerge in string theory. Does it cause inconsistencies within string theory? It doesn't, but the reasons are tricky and ingenious, if you allow me to lick Nature's buttocks a little bit. The angular momentum (I will call it "spin") of a gyroscope etc. is equal to\[ \vec S = I\vec \omega \] where \(\omega\) is the angular frequency and \(I\) is the moment of inertia. Up to constants of order one, the moment of inertia is equal to\[ I \sim mr^2 \] where \(r\) is some typical distance of the points of the object from the axis of rotation and \(m\) is the mass of the spinning object. That was some elementary classical mechanics, OK?
Now, if you assume that \(E=mc^2\) for the electron where \(m\) is the rest mass and that the full latent energy \(E\) is obtained as some electrostatic energy of the electron's charge \(e\) interacting with itself, via (up to numerical constants of order one)\[ E \sim \frac{e^2}{4\pi \epsilon_0 r}, \] then you may derive the typical distance \(r\) between the "pieces of the charge" of the electron, i.e. the "classical electron radius", which is a few femtometers (that is, a few times \(10^{-15}\) meters). If you know \(r\) and \(m\), you may calculate the moment of inertia \(I\sim mr^2\) as well as the angular frequency \(\omega\sim S/I\sim \hbar/I \). When you multiply this \(\omega\) by the radius \(r\) again, you get the velocity \(v\) of the points on the spinning surface of the electron. And it will be much higher than the speed of light! I invite you to complete the steps above if you have never done so. You may evaluate all these things in the SI units, as if it were a basic school problem in mechanics. However, it's also nice to calculate it as an adult physicist, in the Planck units. In the Planck units, the electron mass is something like \(10^{-23}\). Similarly, setting the fine-structure constant to one for a while (we will have to return to this approximation), the electrostatic formula above implies that \(E\sim 1/r\) and the classical radius is therefore about \(10^{23}\) Planck lengths. The moment of inertia is roughly \(mr^2\) so there are two factors of \(10^{23}\) "up" and one factor down, so it is again \(10^{23}\) in Planck units. The angular frequency is \(\omega\sim 1 / I\) where \(1\) means \(\hbar\) and it is again of order \(10^{-23}\), but if you multiply it by the radius \(10^{23}\) to get the speed, you get something of order one (the speed of light). To quickly see (without real calculations) whether the speed on the surface is greater than the speed of light, we must be a bit more careful about the fine-structure constant.
The mass was OK, \(10^{-23}\), but the electron radius is \(137\) times smaller than we have said because this reduced \(r\) has to cancel the \(\alpha=1/137\) that appeared in the numerator. The moment of inertia is \(137^2\) times smaller than we said, because it's \(mr^2\), and the angular velocity is therefore \(137^2\) times larger than we said because \(\hbar = I\omega\) has to be kept fixed. Even when multiplied by the \(137\) times smaller radius, we still get the velocity \(137\) times larger than we said. The speed of the surface of the electron is therefore comparable to \(c/\alpha\sim 137 c\), up to numbers of order one that hopefully don't reduce \(137\) below one. You see that the speed of the classical electron is higher than the speed of light. A bummer. If you quickly and naively think about the changes that string theory makes to this calculation, string theory makes the problem worse because the electron in string theory is smaller. The energy of the electron comes from very stiff strings inside, not from the electrostatic energy, so the extended string hiding in the electron isn't \(10^{-15}\) meters large but \(10^{-35}\) meters tiny or so, not far from the Planck length (the string length, about 100 or 1,000 times longer, would be a better estimate). So the size of the electron has seemingly shrunk \(10^{20}\) times which means that the required velocity on the surface has to increase \(10^{20}\) times – hopelessly larger than the speed of light. Do the points on the string move with these excessive superluminal speeds? The answer is, of course, No. String theory reduces the "classical radius" of the electron but it changes other things, too. Most importantly, it changes the relevant mass, too. The trick is that the thing that is spinning isn't as light as the electron. It's as heavy as the Planck mass (again, more precisely, the string mass, the square root of the string tension). Why? 
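The back-of-the-envelope estimate above is easy to check numerically in SI units. Here is a minimal sketch (variable names are my own, CODATA constants rounded) that recovers the classical electron radius and the superluminal surface speed \(v \sim c/\alpha\):

```python
import math

# SI constants (CODATA values, rounded)
hbar = 1.054571817e-34      # J s
c = 2.99792458e8            # m / s
m_e = 9.1093837015e-31      # kg
e = 1.602176634e-19         # C
eps0 = 8.8541878128e-12     # F / m

# "Classical electron radius": set the electrostatic self-energy
# ~ e^2 / (4 pi eps0 r) equal to the rest energy m c^2 and solve for r.
r = e**2 / (4 * math.pi * eps0 * m_e * c**2)   # a few femtometers

# Rigid-ball estimate: S = I * omega with S ~ hbar and I ~ m r^2,
# so the equatorial speed is v = omega * r ~ hbar / (m r).
v = hbar / (m_e * r)

print(r)          # ~ 2.8e-15 m
print(v / c)      # ~ 137, i.e. roughly 1/alpha times the speed of light
```

The ratio \(v/c\) comes out as almost exactly \(1/\alpha\approx 137\), confirming the claim that the numerical factors of order one do not rescue the classical picture.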
Because the electron, like all massless and observably light particles, comes from the massless level of the string whose mass is constructed in a similar way as the massless level of the bosonic string theory which I pick as an example because of its simplicity. The massless open bosonic strings are given by\[ \ket\gamma = \alpha_{-1}^\mu \ket 0. \] I called the one-string state a "photon". A similar relationship holds for massless closed strings (the graviton, the dilaton, and the \(B\)-field) but there are two alpha excitations (one left-moving and one right-moving) in front of the tachyonic ground state \(\ket 0\). If we want to see how the string pretending to be a point-like particle is spinning, we may see that it's really the oscillator excitation \(\alpha_{-1}^\mu\) or the analogous excitations in the superstring case (that may include fermionic world sheet fields) that carries all the spin because it carries the \(\mu\) Lorentz vector index (or spinor indices, in the Green-Schwarz formalism for the superstrings). This \(\alpha\) oscillator literally corresponds to adding some relative motion to the individual points of the string, so that they move as a wave with one cycle (the subscript) around the string. The tachyonic ground state \(\ket 0\) is not spinning. The tachyon is a scalar, after all. The squared mass of the tachyonic ground state (which is filtered out in superstring theory but may be still used as a useful starting point to construct the spectrum in the RNS superstring) is equal to \(-(D-2)/24\) times \(1/\alpha'\) for the open string – so it's exactly \(-1/\alpha'\) for \(D=26\) – because the term\[ \frac 12 \left( 1+2+3+4+5+\dots \right) = -\frac{1}{24} \] is contributed by each of the \(D-2\) transverse spatial dimensions.
I've discussed this semi-heuristic explanation of the zero-point energies in string theory many times (calculations that avoid all of this heuristics exist, too, but they prove that the semi-heuristic treatment involving the sum of positive integers is at least morally right whether people like it or not). And the oscillator \(\alpha_{-1}^\mu\) is increasing the squared mass back to the massless level, by \(+1/\alpha'\). And it's the spinning part of the electron or other particles in string theory. So the relevant estimate for the mass of the "gyroscope" that we should use in string theory isn't the tiny electron mass but the string mass, \(1/\sqrt{\alpha'}\) or so. The estimate for the speed of the "stringy surface" of the electron is easy to calculate now. In the string units \(\hbar=c=\alpha'=1\), the radius is one, the spin is one, the moment of inertia is one, the angular velocity is therefore also one, and so is the speed of the surface. This estimate is compatible with the assumption that the pieces of the strings never really move faster than light although they get close – and more accurate derivations within string theory may be shown to confirm this claim in some detail. Note that no counterpart of the fine-structure constant such as the string coupling \(g_{\rm string}\) entered in our calculation based on string units. Everything was comparable to the string scale – which may differ from the Planck scale by a power of \(g_{\rm string}\) but the string scale really simplifies the calculation more than the Planck scale. For \(g_{\rm string}\sim 1\), you don't have to distinguish the string scale and the Planck scale. The "string-scale" part of the electron in perturbative string theory is very heavy and therefore it's easy for it to produce the angular momentum of order \(\hbar=1\) even with velocities that don't breach the speed-of-light limit. 
And the tachyonic, negative contribution to the squared mass cancels most of the squared mass from the "positive excitation" and it makes the particle massless (or, when subleading effects are included, very light). This tachyonic part doesn't enter the calculations of the gyroscope. It's very natural that string theory had to solve this puzzle – even though you could simply deny its existence – because string theory partly restored the assumptions that were used in the derivation of the nonsensical superluminal speed of the spinning electron. You may see that string theory is a typical unifying theory that really wants to see all quantities of fundamental objects as being close to "one" in the Planck units. And if some quantity is much smaller than the natural Planck unit, e.g. if the electron is much lighter than the Planck mass, it's due to some cancellations that are known to occur almost everywhere in string theory. But the fundamental parts of the explanations that matter – in this case, I mean the "positive-mass" part of the electron's gyroscope – universally work in the regime where "everything is of order one in some units". Whenever dimensional analysis is at all useful in string theory, except for telling you that everything is comparable to the Planck/string scale, it's always in situations where some leading Planck-scale natural contributions "mostly cancel". Quantum field theorists like to think that any precise cancellation is "unnatural" but string theory offers us tons of totally sensible, justifiable, provable cancellations like that. The cancellations resulting from supersymmetry represent a well-known example but string theory really does imply similar cancellations even in the absence of SUSY (or cancellations that don't seem to be consequences of SUSY), too.
Would a high voltage (of the order of a few score kV) drive electrolysis? Or does it require a large current and low voltage? Alternatively, does electrolysis require both a large voltage and a large current? For any electrochemical reaction, electrons are transferred from the reductant to the oxidant. By definition, any flow of charge per unit time is an electric current. However, in the case of electrolysis, the electron transfer is not spontaneous. An external energy source is required for the reaction to take place. The energy (work) provided per unit charge is called the voltage. Any electrolysis reaction therefore requires a supply of electrical energy, which means both a voltage and an electric current are needed. Specifically, the amount of work (energy) resulting from a voltage $V$ and a current $I$ applied for a time $t$ is given by: Work produced by electrical energy, $W = V\cdot I\cdot t$ For a given electrolytic reaction to proceed, the required minimum voltage which must be applied is determined from the Gibbs function, $E^\circ_\text{cell}=\frac{-\Delta G^\circ}{nF}$ where $E^\circ_\text{cell}$ is the minimum voltage needed for the electrolysis reaction to occur, $\Delta G^\circ$ is the change in Gibbs free energy under standard conditions, $n$ is the number of electrons transferred and $F$ is Faraday's constant (96,485 coulombs per mole). For example, if pure water is placed in an 'electrolytic cell' with two non-reactive electrodes (e.g. platinum), electrons forced into the 'cell' by an electrical source (e.g. a battery) will react with water molecules at the cathode, forcing them to lyse (split) into hydrogen ions and hydroxide ions.
$\ce{H2O(aq) ->[\text{elect}] H+(aq) + OH- (aq)}$ At the surface of the 'anode', hydroxide ion will be oxidised (donate electrons) to form oxygen gas. $\ce{4OH- (aq) -> 2H2O + O2 + 4e- }$ (anode half-cell reaction) Meanwhile, at the surface of the 'cathode', hydrogen ions will accept electrons to form hydrogen gas. $\ce{2H+(aq) + 2e- -> H2(g)}$ (cathode half-cell reaction) The overall cell reaction is therefore: $\ce{2H2O(aq) -> 2H2(g) + O2(g)}$ ; $\Delta G^\circ=237.2\ \mathrm{kJ/mol}$ The positive Gibbs free energy for this reaction indicates that the reaction will not occur spontaneously but requires an external energy source (e.g. electrical energy), which is reflected in the negative cell potential. The minimum (theoretical) voltage required is: $E^\circ_\text{cell}=\frac{-\Delta G^\circ}{nF}=\frac{-237.2 \times 10^{3}}{2\times 96485}=-1.23\ \mathrm{V}$ However, in practice, a slightly larger voltage of 1.48 V is required, since the enthalpy (heating) of the products results in slightly lower efficiency, which is manifested as an overpotential of about 0.25 V. Once this critical voltage level is exceeded, the electrolysis reaction proceeds at a rate determined largely by the current, since the current represents the rate at which charge is delivered to the system. Basically, the higher the current, the more molecules will react (electrolyze) and the more product (hydrogen gas) will form per unit time. Current is, by definition, a flow of charged particles, so it doesn't drive anything. Electrical potential is a store of energy (you can think of a voltage source as an ideal "battery" of sorts). One volt can produce $\pu{1 A}$ of current through a $\pu{1 \Omega}$ resistor. One ampere of current is equal to a coulomb per second. One coulomb is $6.24150965(16)\times10^{18}$ electrons. Were you to use enough voltage that oxidation at the anode released enough electrons to produce a massive current, you'd likely deplete your anode material before you ever achieved it.
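The numbers in the answer above are easy to verify. A minimal sketch (variable names are my own) that computes the minimum thermodynamic voltage from the Gibbs energy, and then uses Faraday's laws to show that the production rate is set by the current, not the voltage:

```python
# Minimum (thermodynamic) cell voltage for water electrolysis,
# |E| = delta_G / (n * F), as in the formula above.
F = 96485.0         # C/mol, Faraday's constant
n = 2               # electrons transferred per H2 molecule
delta_G = 237.2e3   # J/mol, standard Gibbs free energy change

E_min = delta_G / (n * F)   # magnitude of the minimum applied voltage
print(E_min)                # ~ 1.23 V

# At a fixed current I, the amount of hydrogen produced in time t is
# moles of H2 = I * t / (n * F), e.g. 1 A applied for one hour:
moles_H2 = 1.0 * 3600 / (n * F)
print(moles_H2)             # ~ 0.019 mol, independent of the applied voltage
```

This makes the division of labor explicit: the voltage decides *whether* electrolysis proceeds (it must exceed roughly 1.23 V, in practice about 1.48 V), while the current decides *how fast* product forms.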
High voltage and low current is claimed to be more efficient, but it must be oscillated at specific frequencies to be effective for low-cost electrolysis; on this view it is the oscillation that breaks the bond between hydrogen and oxygen, not the current or voltage. The current can be below 0.5 A at millions of volts and still be non-life-threatening, like a stun gun. But you need to experiment with the frequency of oscillation to find the most efficient electrolysis.
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the built-in PDF previewer in Firefox\documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d... @Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue.
with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first @yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not.
Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
It is known that the spiral phyllotactic pattern is common in Nature, especially in botany. It consists of two groups of clockwise and anticlockwise spirals, starting from the center. In most cases the numbers of those spirals are two consecutive Fibonacci numbers: $F_n, F_{n+1}$. There are also patterns where the numbers of spirals are doubled Fibonacci numbers, Lucas numbers, or Fibonacci ± 1. The equations for generating some sort of spiral phyllotaxis are: $$ \phi = \pi(1+\sqrt{5}) \\ \forall n \in [0, N] \\ \theta = n \phi \\ r = \sqrt{n} \\ x = r \cos{\theta} \\ y = r \sin{\theta} $$ How should these equations be changed to take into account a predefined number of spirals? For example, I want to generate a phyllotactic pattern consisting of (11, 18) spirals: 11 clockwise and 18 anticlockwise. How can that be done? UPDATE. It looks like the number of visible spirals depends on the radius of the pattern.
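The generating equations above translate directly into code. Here is a minimal sketch (function and parameter names are my own); note that the divergence angle $\phi=\pi(1+\sqrt{5})$ is, mod $2\pi$, equivalent to the familiar golden angle of about 137.5°, up to orientation:

```python
import math

def phyllotaxis(N, divergence=math.pi * (1 + math.sqrt(5))):
    """Generate N points of a phyllotactic spiral.

    Follows the equations in the question: theta = n*divergence,
    r = sqrt(n).  The default divergence is pi*(1+sqrt(5)), which is
    congruent mod 2*pi to the golden angle (~137.5 degrees, mirrored).
    """
    points = []
    for n in range(N):
        theta = n * divergence
        r = math.sqrt(n)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = phyllotaxis(400)
```

For the golden angle the visible parastichy counts come out as consecutive Fibonacci numbers; targeting a different pair such as (11, 18) amounts to choosing a different divergence angle, which is essentially what the question is asking, so the angle is deliberately left as a parameter here.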
Schützenberger promotion, studied (for example) in Richard Stanley, Promotion and Evacuation, 2009, is a permutation of the set of all linear extensions of a finite poset. Since one can identify the linear extensions of a poset with saturated chains of order ideals in that poset, this allows one to also view Schützenberger promotion as a permutation of the set of the latter. The famous promotion of standard Young tableaux is a particular case of this. Striker-Williams promotion, defined in Jessica Striker, Nathan Williams, Promotion and Rowmotion, arXiv:1108.1172v3, Definition 4.13, is a permutation of the set of all order ideals (not saturated chains of order ideals!) of a so-called "rc poset" (which is a poset with a map into $\mathbb Z^2$ satisfying certain conditions, best viewed as a way to draw its Hasse diagram on a grid; see below or §4.2 of Striker-Williams for an exact definition). Apparently people are considering these two promotions to be closely related. However, the only direct relation I am aware of is Striker-Williams Theorem 4.12, which bijects Schützenberger promotion on standard tableaux on a two-rowed Young diagram with Striker-Williams promotion on a poset which looks like a triangle grid. Questions: 1. Is this really the only relation? Is promotion of standard Young tableaux of a Young diagram with more than $2$ rows not a (known) case of Striker-Williams promotion? 2. I've seen some kind of promotion on semistandard Young tableaux being mentioned on the internet. Assuming it's not a typo, how is that defined? Appendix: Let me define the two notions involved for the sake of completeness. Probably the sources quoted give better definitions... Definition of Schützenberger promotion: Let $P$ be a finite poset. Let $\mathcal L\left(P\right)$ denote the set of all linear extensions of $P$. We define a map $\partial : \mathcal L\left(P\right)\to \mathcal L\left(P\right)$ as follows: Let $f \in \mathcal L\left(P\right)$ be a linear extension. 
We set $p=\left|P\right|$, and we view $f$ as a function $P\to\left\lbrace 1,2,...,p\right\rbrace$, i. e., as a labelling of the elements of $P$ by the numbers $1$, $2$, ..., $p$ (we get this labelling by labelling every element $v\in P$ with the number $\left| \left\lbrace w\in P \ \mid \ f\left(w\right)\leq f\left(v\right) \right\rbrace \right|$). Define a (dynamic) map $g:P\to\mathbb Z$ by $g = f$ (we will be modifying $g$, while $f$ remains static). If $p=0$, do nothing. Else, set $u$ to be the element of $P$ labelled $1$ (that is, the smallest element of $P$ with respect to $g$), and do the following loop: While there exists an element of $P$ covering $u$: let $v$ be the smallest (with respect to $g$) among the elements of $P$ covering $u$ (that is, the element $w$ of $P$ covering $u$ with smallest $g\left(w\right)$); slide the label of $v$ down to $u$ (that is, set $g\left(u\right)$ to be $g\left(v\right)$, accepting that $g$ will temporarily fail to be injective); set $u = v$. Endwhile. After the end of this loop, label $u$ with $p+1$ (that is, set $g\left(u\right) = p+1$), and then subtract $1$ from each label (i. e., replace $g$ by $g-\mathbf{1}$, where $\mathbf{1}$ is the constant function $P\to\mathbb Z,\ x\mapsto 1$). The resulting $g$ is called the promotion of $f$, and denoted by $\partial f$. (It is more common to call it $f\partial$, so that $\partial$ is seen as a map acting from the right). Definition of Striker-Williams promotion: Let $P$ be a finite poset. Let $J\left(P\right)$ denote the set of all order ideals of $P$. For every $p\in P$, define a map $t_p : J\left(P\right) \to J\left(P\right)$ as follows: Let $I \in J\left(P\right)$. If $I \bigtriangleup \left\lbrace p\right\rbrace$ (with $\bigtriangleup$ standing for "symmetric difference") is an order ideal of $P$, set $t_p\left(I\right) = I \bigtriangleup \left\lbrace p\right\rbrace$. Otherwise, set $t_p\left(I\right) = I$.
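The sliding procedure just described is easy to implement directly. A minimal sketch (function and variable names are mine, not from Stanley's paper), representing the poset by its covering relation:

```python
def promote(f, covers):
    """Schuetzenberger promotion of a linear extension.

    f: dict mapping each poset element to its label 1..p.
    covers: dict mapping each element to the list of elements covering it.
    Follows the sliding procedure described above.
    """
    p = len(f)
    g = dict(f)
    if p == 0:
        return g
    u = min(g, key=g.get)               # the element labelled 1
    while covers.get(u):
        # slide down the smallest label among the covers of u
        v = min(covers[u], key=lambda w: g[w])
        g[u] = g[v]
        u = v
    g[u] = p + 1                        # relabel the final element
    return {x: g[x] - 1 for x in g}     # shift all labels down by 1

# The "V"-shaped poset a < b, a < c has two linear extensions,
# and promotion swaps them:
covers = {"a": ["b", "c"], "b": [], "c": []}
f = {"a": 1, "b": 2, "c": 3}
print(promote(f, covers))               # {'a': 1, 'b': 3, 'c': 2}
```

Applying `promote` twice on this three-element example returns the original extension, illustrating that promotion permutes $\mathcal L\left(P\right)$.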
Let $\mathbb Z^2_{\operatorname*{ev}}$ denote the $\mathbb Z$-submodule of $\mathbb Z^2$ spanned by $\left(1,1\right)$ and $\left(2,0\right)$. In other words, let $\mathbb Z^2_{\operatorname*{ev}}$ be the set of all $\left(x,y\right)\in\mathbb Z^2$ for which $x+y$ is even. Now, let $P$ be a finite rc-poset; this means a poset along with a map $\pi : P \to \mathbb Z^2_{\operatorname*{ev}}$ such that whenever an element $p_1$ of $P$ covers an element $p_2$ of $P$, we have $\pi\left(p_1\right)-\pi\left(p_2\right) \in \left\lbrace \left(-1,1\right), \left(1,1\right) \right\rbrace$. (See §4.2 of Striker-Williams for some good pictures of what this means.) For every $p\in P$, let $\pi_1\left(p\right)$ denote the first coordinate of $\pi\left(p\right)$. Now, consider the composition of the maps $t_p$ in decreasing order of $\pi_1\left(p\right)$ (the relative order of the $t_p$ for distinct $p$ having the same $\pi_1\left(p\right)$ does not matter). This composition is Striker-Williams promotion.
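The sliding procedure in the definition of Schützenberger promotion above can be written directly in code. The following is a toy sketch (names and the encoding of the poset by its cover relations are ad hoc, not from either paper):

```python
def promotion(covers, f):
    """Schützenberger promotion of a linear extension f.

    covers: dict mapping each element of the poset to the list of
            elements covering it (empty list for maximal elements).
    f:      dict mapping each element to its label in {1, ..., p}.
    Returns the promoted linear extension as a new dict.
    """
    g = dict(f)
    p = len(g)
    if p == 0:
        return g
    u = min(g, key=g.get)           # the element labelled 1
    while covers.get(u):            # slide labels down along minimal covers
        v = min(covers[u], key=g.get)
        g[u] = g[v]                 # g temporarily fails to be injective
        u = v
    g[u] = p + 1                    # relabel the end of the sliding path
    return {w: lbl - 1 for w, lbl in g.items()}
```

On a two-element antichain, promotion swaps the two labels (so it has order 2); on a chain, the unique linear extension is fixed.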
I want to find an analog of the following two statements. Let $G$ be a discrete group and let $M$ be a representation of $G$. Local systems on $BG$ are the same as $G$-representations (because $\pi_1 (BG) = G$). Let $\mathscr{M}$ be the local system corresponding to $M$. Then $$ H^{\bullet}_{Grp} (G, M) = H^{\bullet} (BG, \mathscr{M}).$$ Let $G$ be a compact connected Lie group and $\mathfrak{g}$ the corresponding Lie algebra, with $\mathbb{R}$ the trivial representation of $\mathfrak{g}$. Then $$ H^{\bullet}_{Lie} (\mathfrak{g}, \mathbb{R}) = H_{dR}^{\bullet} (G).$$ Question: How to express $H^{\bullet}_{Lie} (\mathfrak{g}, M)$ geometrically? Here $M$ is a finite-dimensional representation of $\mathfrak{g}$ (if you wish, you can assume that it integrates to a Lie group representation). Comment 1: I am not even sure which geometric object corresponds to a representation of $\mathfrak{g}$. Is it a bi-D-module (bimodule over differential operators)? Is it a $G \times G$-equivariant bundle on $G$? Comment 2: Let me say in other words what I want. I want a geometric structure on the group $G$ considered as a manifold which computes $H^{\bullet}_{Lie} (\mathfrak{g}, M)$. It can be a sheaf, a D-module, whatever. The point is that I want to forget that $G$ is a group: I want just $G$ considered as a manifold with extra geometric structure, and a way to get my cohomology back.
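For reference (a standard definition, stated for completeness, not an answer to the question): $H^{\bullet}_{Lie}(\mathfrak{g}, M)$ here means Chevalley-Eilenberg cohomology, computed from the complex $C^k = \operatorname{Hom}(\Lambda^k \mathfrak{g}, M)$ with differential

```latex
(d\omega)(x_0,\ldots,x_k)
  = \sum_{i} (-1)^i \, x_i \cdot \omega(x_0,\ldots,\widehat{x_i},\ldots,x_k)
  + \sum_{i<j} (-1)^{i+j} \,
      \omega([x_i,x_j],x_0,\ldots,\widehat{x_i},\ldots,\widehat{x_j},\ldots,x_k).
```

For $M = \mathbb{R}$ trivial, the first sum vanishes and this is the complex of left-invariant differential forms on $G$, which for compact connected $G$ computes $H_{dR}^{\bullet}(G)$ — that is exactly the second statement above.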
We would have to know more about the ice we want to build the wall with. For example, ice in ice sheets effectively reaches the plastic region of the stress-strain curve at around $0.5\,\mathrm{MPa}$. I am not a geologist, but I believe that glaciers can only be thicker than $\sim 50\,\mathrm{m}$ thanks to their specific shape and the fact that the surroundings press the glacier back and don't let it crumble (mostly). This would not be the case for a narrow wall, where already at $50\,\mathrm{m}$ the ice from a natural ice sheet would crumble away before reaching plasticity (I have a feeling the plasticity is caused not purely by microscopic structure breakdown but also by mesoscopic ice grains breaking each other and moving around). This study by the US Geological Survey pragmatically assesses "the crushing strength" of ice and finds that the best ice you find in nature, at an ideal temperature, has a lower crushing bound of about $\sim 400\,\mathrm{lb/in^2} = 2.8\,\mathrm{MPa}$. As Jaime already stated, the main issue is the very lowest bit of the ice carrying the pressure of the whole column above it. This pressure is $$P=\rho g h \approx 1000\,\mathrm{kg\,m^{-3}} \cdot 10\,\mathrm{m\,s^{-2}} \cdot 200\,\mathrm{m} = 2\,\mathrm{MPa} \,.$$ So, if you do not take just any ice but optimize its properties, you should be able to build a wall of any thickness up to a height of $200\,\mathrm{m}$. But our intuition is rightly not satisfied with the vision of a mighty wall of ice with the thickness of a paper sheet. Such a wall would obviously tip over with the slightest blow of air, not to mention a giant riding a mammoth from far north Beyond the Wall. (I assume it is no accident the height is the same as that of the Wall from Game of Thrones.) It is obvious that a thicker wall is less prone to having a hole made in it, but let us concentrate on the issue of stability. First, the mere fact that a thicker wall is heavier makes it harder to tip over to the critical angle beyond which the wall falls on its own.
Second, the larger base means the tipping angle is much higher and the initial torque needed to start tipping the wall is much larger. If you draw a scheme of the wall with thickness $d$ and height $h$, it is easy to show that the tipping angle $\alpha_t$ satisfies $$\tan(\alpha_t) = \frac{d/2}{h/2} \to \alpha_t = \arctan \left( \frac{d}{h} \right) \,.$$ That is, the tipping angle $\alpha_t$ always grows with the ratio $d/h$. Furthermore, the initial torque needed to even start tipping the wall also grows with its thickness. The initial torque $\tau$ would have to be $$\tau = F_{proj}\, l \,,$$ where $l = \sqrt{(d/2)^2 + (h/2)^2}$ is the distance from the center of mass to the lower edge of the wall and $F_{proj} = M g \,(d/2)/l$ is the component of the weight of the wall (of mass $M$) perpendicular to the arm $l$. The torque required to even start to tip the wall over is then $$\tau = \frac{Mgd}{2} \,.$$ So the torque required to tip the wall is simply proportional both to the mass of the wall and to its thickness. Considering that the mass is itself a function of $d$, the stability of our wall grows even more steeply with its thickness. But beware! The edge over which the wall tips experiences virtually infinite pressure during the tipping. This is a consequence of the fact that the very lower part of the edge over which we are tipping the wall carries the weight of the whole wall. We must thus build the wall so that the $\tau$ required to even start tipping it is never reached by an intruder. You do not even have to use that much ice - the tipping base is made effectively larger e.g. by slightly curving the wall around. Even if we ensure a large enough base, we must be sure the intruder cannot exert enough pressure to start crumbling the lower edges of the wall. For example, a very brutal gust such as the one detected in Australia could exert a dynamic pressure of roughly $0.9\,\mathrm{MPa}$.
This blow would surely be fatal, but even weaker ones could crumble the wall due to inhomogeneities and uneven distribution of pressure. Overall, the $700\,\mathrm{ft} \sim 200\,\mathrm{m}$ tall Wall from Game of Thrones is just slightly unrealistic. It is hardly imaginable that the technology and coordination of that time could create such consistently ideal ice as we have been assuming up to now. If anything, my best guess would be an estimate of an overall maximal pressure of $\sim 2\,\mathrm{MPa}$, leading to, say, a $100\,\mathrm{m}$ tall stable wall. Taking into account that even stone structures of the medieval ages, such as cathedrals, were never higher than around $160\,\mathrm{m}$, even a $100\,\mathrm{m}$ wall of ice would be formidable. Obviously, physics is losing its breath here; reason requires that Magic had to be involved.
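The back-of-the-envelope numbers above can be checked with a short script (assumed values as in the answer: density rounded to that of water, $g = 10\,\mathrm{m/s^2}$, and the quoted USGS crushing bound of $2.8\,\mathrm{MPa}$; real ice is less dense):

```python
import math

rho = 1000.0          # kg/m^3, rounded up from ice density as in the text
g = 10.0              # m/s^2
sigma_crush = 2.8e6   # Pa, the USGS crushing bound quoted above

def base_pressure(h):
    """Hydrostatic pressure P = rho * g * h at the base of a column of height h."""
    return rho * g * h

def tipping_angle(d, h):
    """Tipping angle arctan(d/h), in degrees, for thickness d and height h."""
    return math.degrees(math.atan2(d, h))

# tallest column the best natural ice could carry before crushing
h_max = sigma_crush / (rho * g)
```

A 200 m column gives the 2 MPa base pressure from the text, comfortably below the 2.8 MPa bound, and the ideal-ice ceiling comes out at 280 m.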
Linear mixed fluid-structure interaction system

This tutorial demonstrates the use of subdomain functionality and shows how to describe a system consisting of multiple materials in Firedrake. The model consists of a fluid with a free surface and an elastic solid. We will use the notions of fluid/water and structure/solid/beam interchangeably. For simplicity (and speed of computation) we consider a model in 2D; however, it can easily be generalised to 3D. The starting point is the linearised version (the domain is fixed) of the fully nonlinear variational principle, in non-dimensional units, in which the first line contains integration over the fluid domain, the second over the fluid-structure interface, and the third over the structure domain. The following notation is used:

\(\eta\) - free surface deviation
\(\phi\) - fluid flow potential
\(\rho_0\) - structure density (in fluid density units)
\(\lambda\) - first Lame constant (material parameter)
\(\mu\) - second Lame constant (material parameter)
\({\bf X}\) - structure displacement
\({\bf U}\) - structure velocity
\(e_{ij} = \frac{1}{2} \bigl( \frac{\partial X_j }{ \partial x_i } + \frac{ \partial X_i }{ \partial x_j } \bigr)\) - linear strain tensor; \(i\), \(j\) denote vector components
\({\mathrm d} S_f\) - integration element over the fluid free surface
\({\mathrm d} s_s\) - integration element over the structure-fluid interface
\({\mathrm d} x_F\) - integration element over the fluid domain
\({\mathrm d} x_S\) - integration element over the structure domain

After numerous manipulations (described in detail in [SBK17]) and evaluation of the individual variations, the time-discrete equations, with a symplectic Euler scheme, that we would like to implement in Firedrake are given below. The underlined terms are the coupling terms. Note that the first equation, for \(\phi\) at the free surface, is solved on the free surface only, and the last equation, for \({\bf X}\), in the structure domain only, while the others are solved in both domains.
Moreover, the second and third equations, for \(\phi\) and \({\bf U}\), need to be solved simultaneously. The geometry of the system with the initial condition is shown below. Now we present the code used to solve the system of equations above. We start with the appropriate imports:

    from firedrake import *
    import math
    import numpy as np

Then, we set the parameters of the simulation:

    # parameters in SI units
    t_end = 5.     # time of simulation [s]
    dt = 0.005     # time step [s]
    g = 9.8        # gravitational acceleration
    # water
    Lx = 20.       # length of the tank [m] in x-direction; needed for computing initial condition
    Lz = 10.       # height of the tank [m]; needed for computing initial condition
    rho = 1000.    # fluid density in kg/m^2 in 2D [water]
    # solid parameters
    # - we use a sufficiently soft material to be able to see noticeable structural displacement
    rho_B = 7700.  # structure density in kg/m^2 in 2D
    lam = 1e7      # N/m in 2D - first Lame constant
    mu = 1e7       # N/m in 2D - second Lame constant
    # mesh
    mesh = Mesh("L_domain.msh")
    # these numbers must match the ones defined in the mesh file
    fluid_id = 1      # fluid subdomain
    structure_id = 2  # structure subdomain
    bottom_id = 1     # structure bottom
    top_id = 6        # fluid surface
    interface_id = 9  # fluid-structure interface
    # control parameters
    output_data_every_x_time_steps = 20  # to avoid saving data every time step
    coupling = True  # turn on coupling terms

The equations are in nondimensional units, hence we transform:

    L = Lz
    T = L/math.sqrt(g*L)
    t_end /= T
    dt /= T
    Lx /= L
    Lz /= L
    rho_B /= rho
    lam /= g*rho*L
    mu /= g*rho*L
    rho = 1.  # or equivalently rho /= rho

Let us define the function spaces, including the mixed one:

    V_W = FunctionSpace(mesh, "CG", 1)
    V_B = VectorFunctionSpace(mesh, "CG", 1)
    mixed_V = V_W * V_B

Then, we define functions.
First, in the fluid domain:

    phi = Function(V_W, name="phi")
    phi_f = Function(V_W, name="phi_f")  # at the free surface
    eta = Function(V_W, name="eta")
    trial_W = TrialFunction(V_W)
    v_W = TestFunction(V_W)

Second, in the beam domain:

    X = Function(V_B, name="X")
    U = Function(V_B, name="U")
    trial_B = TrialFunction(V_B)
    v_B = TestFunction(V_B)

And last, mixed functions in the mixed domain:

    trial_f, trial_s = TrialFunctions(mixed_V)
    v_f, v_s = TestFunctions(mixed_V)
    tmp_f = Function(V_W)
    tmp_s = Function(V_B)
    result_mixed = Function(mixed_V)

We need auxiliary indicator functions that are 0 in one subdomain and 1 in the other. They are needed both in "CG" and "DG" space. We use the fact that the fluid and structure subdomains are defined in the mesh file with an appropriate ID number that Firedrake is able to recognise. That can be used in constructing the indicator functions:

    V_DG0_W = FunctionSpace(mesh, "DG", 0)
    V_DG0_B = FunctionSpace(mesh, "DG", 0)
    # Heaviside step function in fluid
    I_W = Function(V_DG0_W)
    par_loop(('{[i] : 0 <= i < f.dofs}', 'f[i, 0] = 1.0'),
             dx(fluid_id), {'f': (I_W, WRITE)}, is_loopy_kernel=True)
    I_cg_W = Function(V_W)
    par_loop(('{[i] : 0 <= i < A.dofs}', 'A[i, 0] = fmax(A[i, 0], B[0, 0])'),
             dx, {'A': (I_cg_W, RW), 'B': (I_W, READ)}, is_loopy_kernel=True)
    # Heaviside step function in solid
    I_B = Function(V_DG0_B)
    par_loop(('{[i] : 0 <= i < f.dofs}', 'f[i, 0] = 1.0'),
             dx(structure_id), {'f': (I_B, WRITE)}, is_loopy_kernel=True)
    I_cg_B = Function(V_B)
    par_loop(('{[i, j] : 0 <= i < A.dofs and 0 <= j < 2}', 'A[i, j] = fmax(A[i, j], B[0, 0])'),
             dx, {'A': (I_cg_B, RW), 'B': (I_B, READ)}, is_loopy_kernel=True)

We use the indicator functions to construct the unit normal vector outward to the fluid domain at the fluid-structure interface:

    n_vec = FacetNormal(mesh)
    n_int = I_B("+") * n_vec("+") + I_B("-") * n_vec("-")

Now we can construct special boundary conditions that limit the solvers to the appropriate subdomains of our interest:

    class MyBC(DirichletBC):
        def __init__(self, V, value, markers):
            # Call superclass init
            # We provide a dummy subdomain id.
            super(MyBC, self).__init__(V, value, 0)
            # Override the "nodes" property which says where the boundary
            # condition is to be applied.
            self.nodes = np.unique(np.where(markers.dat.data_ro_with_halos == 0)[0])


    def surface_BC():
        # This will set nodes on the top boundary to 1.
        bc = DirichletBC(V_W, 1, top_id)
        # We will use this function to determine the new BC nodes (all those
        # that aren't on the boundary)
        f = Function(V_W, dtype=np.int32)
        # f is now 0 everywhere, except on the boundary
        bc.apply(f)
        # Now I can use MyBC to create a "boundary condition" to zero out all
        # the nodes that are *not* on the top boundary:
        return MyBC(V_W, 0, f)


    # same as above, but in the mixed space
    def surface_BC_mixed():
        bc_mixed = DirichletBC(mixed_V.sub(0), 1, top_id)
        f_mixed = Function(mixed_V.sub(0), dtype=np.int32)
        bc_mixed.apply(f_mixed)
        return MyBC(mixed_V.sub(0), 0, f_mixed)


    BC_exclude_beyond_surface = surface_BC()
    BC_exclude_beyond_surface_mixed = surface_BC_mixed()
    BC_exclude_beyond_solid = MyBC(V_B, 0, I_cg_B)

Finally, we are ready to define the solvers of our equations. First, the equation for \(\phi\) at the free surface:

    a_phi_f = trial_W * v_W * ds(top_id)
    L_phi_f = (phi_f - dt * eta) * v_W * ds(top_id)
    LVP_phi_f = LinearVariationalProblem(a_phi_f, L_phi_f, phi_f,
                                         bcs=BC_exclude_beyond_surface)
    LVS_phi_f = LinearVariationalSolver(LVP_phi_f)

Second, the equation for the beam displacement \({\bf X}\), where we also fix the beam to the bottom by applying a zero Dirichlet boundary condition:

    a_X = dot(trial_B, v_B) * dx(structure_id)
    L_X = dot((X + dt * U), v_B) * dx(structure_id)
    # no-motion beam bottom boundary condition
    BC_bottom = DirichletBC(V_B, as_vector([0., 0.]), bottom_id)
    LVP_X = LinearVariationalProblem(a_X, L_X, X,
                                     bcs=[BC_bottom, BC_exclude_beyond_solid])
    LVS_X = LinearVariationalSolver(LVP_X)

Finally, we define solvers for \(\phi\), \({\bf U}\) and \(\eta\) in the mixed domain.
In particular, the value of \(\phi\) at the free surface is used as a boundary condition. Note that avg(...) is necessary for terms in expressions containing n_int, which is built in "DG" space:

    # phi-U
    # no-motion beam bottom boundary condition in the mixed space
    BC_bottom_mixed = DirichletBC(mixed_V.sub(1), as_vector([0., 0.]), bottom_id)
    # boundary condition to set phi_f at the free surface
    BC_phi_f = DirichletBC(mixed_V.sub(0), phi_f, top_id)
    delX = nabla_grad(X)
    delv_B = nabla_grad(v_s)
    T_x_dv = lam * div(X) * div(v_s) + mu * inner(delX, delv_B + transpose(delv_B))
    a_U = rho_B * dot(trial_s, v_s) * dx(structure_id)
    L_U = (rho_B * dot(U, v_s) - dt * T_x_dv) * dx(structure_id)
    a_phi = dot(grad(trial_f), grad(v_f)) * dx(fluid_id)
    if coupling:
        a_U += dot(avg(v_s), n_int) * avg(trial_f) * dS  # avg(...) necessary here and below
        L_U += dot(avg(v_s), n_int) * avg(phi) * dS
        a_phi += - dot(n_int, avg(trial_s)) * avg(v_f) * dS
    LVP_U_phi = LinearVariationalProblem(a_U + a_phi, L_U, result_mixed,
                                         bcs=[BC_phi_f, BC_bottom_mixed])
    LVS_U_phi = LinearVariationalSolver(LVP_U_phi)
    # eta
    a_eta = trial_f * v_f * ds(top_id)
    L_eta = eta * v_f * ds(top_id) + dt * dot(grad(v_f), grad(phi)) * dx(fluid_id)
    if coupling:
        L_eta += - dt * dot(n_int, avg(U)) * avg(v_f) * dS
    LVP_eta = LinearVariationalProblem(a_eta, L_eta, result_mixed,
                                       bcs=BC_exclude_beyond_surface_mixed)
    LVS_eta = LinearVariationalSolver(LVP_eta)

Let us set the initial condition. We choose no motion at the beginning in both fluid and structure, zero displacement in the structure, and a deflected free surface in the fluid. The shape of the deflection is computed from the analytical solution:

    # initial condition in fluid based on analytical solution
    # compute analytical initial phi and eta
    n_mode = 1
    a = 0. * T / L**2  # in nondim units
    b = 5. * T / L**2  # in nondim units
    lambda_x = np.pi * n_mode / Lx
    omega = np.sqrt(lambda_x * np.tanh(lambda_x * Lz))
    x = mesh.coordinates
    phi_exact_expr = a * cos(lambda_x * x[0]) * cosh(lambda_x * x[1])
    eta_exact_expr = - omega * b * cos(lambda_x * x[0]) * cosh(lambda_x * Lz)
    bc_top = DirichletBC(V_W, 0, top_id)
    eta.assign(0.)
    phi.assign(0.)
    eta_exact = Function(V_W)
    eta_exact.interpolate(eta_exact_expr)
    eta.assign(eta_exact, bc_top.node_set)
    phi.interpolate(phi_exact_expr)
    phi_f.assign(phi, bc_top.node_set)

A file to store data for visualization:

    outfile_phi = File("results_pvd/phi.pvd")

To save data for visualization, we change the position of the nodes in the mesh, so that they represent the computed dynamic position of the free surface and the structure:

    def output_data():
        output_data.counter += 1
        if output_data.counter % output_data_every_x_time_steps != 0:
            return
        mesh_static = mesh.coordinates.vector().get_local()
        mesh.coordinates.vector().set_local(mesh_static + X.vector().get_local())
        mesh.coordinates.dat.data[:, 1] += eta.dat.data_ro
        outfile_phi.write(phi)
        mesh.coordinates.vector().set_local(mesh_static)


    output_data.counter = -1  # -1 to exclude counting print of initial state

In the end, we proceed with the actual computation loop:

    t = 0.
    output_data()
    while t <= t_end + dt:
        t += dt
        print('time = ', t * T)
        # symplectic Euler scheme
        LVS_phi_f.solve()
        LVS_U_phi.solve()
        tmp_f, tmp_s = result_mixed.split()
        phi.assign(tmp_f)
        U.assign(tmp_s)
        LVS_eta.solve()
        tmp_f, _ = result_mixed.split()
        eta.assign(tmp_f)
        LVS_X.solve()
        output_data()

The result of the computation, visualised with Paraview, is shown below. The mesh is deflected for visualization only; as the model is linear, the actual mesh used for computation is fixed. Colours indicate values of the flow potential \(\phi\). A python script version of this demo can be found here. An extended 3D version of this code is published here. The work is based on the articles [SBK17] and [SBK16].
The authors gratefully acknowledge funding from the European Commission, Marie Curie Actions - Initial Training Networks (ITN), project number 607596.

References

SBK16: Tomasz Salwa, Onno Bokhove, and Mark A. Kelmanson. Variational modelling of wave-structure interactions for offshore wind turbines. Extended paper for Int. Conf. on Ocean, Offshore and Arctic Eng., OMAE2016, Busan, South Korea, June 2016. URL: http://proceedings.asmedigitalcollection.asme.org/proceeding.aspx?articleID=2570974.

SBK17: Tomasz Salwa, Onno Bokhove, and Mark A. Kelmanson. Variational modelling of wave-structure interactions with an offshore wind-turbine mast. Journal of Engineering Mathematics, Sep 2017. doi:10.1007/s10665-017-9936-4.
Since moving to a Mac a bit over a year ago, I've had only a few reasons to look back (the business with the HP LJ1022 printer being one of them). I'm now rather close to the end of my tether, and the reason is fonts. As an academic and a computer scientist, I end up writing quite a lot of papers and presentations with maths in them. Like any sensible person, I use LaTeX for typesetting the maths; it's a lot easier to type $\sum_{i=0}^{i=n-1} i^2$ than to wrestle with the equation editor in Word. I've also been using LaTeX for rendering mathematical expressions in lecture slides; there are two tools - LaTeXiT and LaTeX Equation Editor - which make putting maths in PowerPoint or Keynote a drag-and-drop operation. However, I've spent quite a lot of time over the last week trying to debug a problem with the font rendering of TeX-generated PDF files on OS X. If I wrote a LaTeX file containing the following:

    \documentclass{article}
    \begin{document}
    \section{This is a test}
    \[e = mc^2 \rightarrow \chi \pi \ldots r^2 \]
    \end{document}

then I'd expect it to render something like this: Preview renders it like that, but not reliably - perhaps one time in eight. The rest of the time, it randomly substitutes a sans-serif font for the various Computer Modern fonts. Sometimes it looks like this (missing the italic font): Sometimes it looks like this (missing the bold and italic fonts): And sometimes it looks like this (missing the bold and symbol fonts): It isn't predictable which rendering I get. The problem also isn't limited to CM, but appears whenever you have a subset of a Type1 font embedded in a PDF (on my machine, at least); TeX isn't the problem. The problem didn't exist on 10.4. The best guess from the Mac communities is that it's a cache-corruption problem in the OS X PDF-rendering component on 10.5 (which would explain why I see the same problem in LaTeXiT, LEE and Papers, but not in Acrobat).
I really don't see how Apple could have let a release out the door with a bug like this - this is surely a critical bug for anyone in publishing. Edited to add links:

Apple forums [1] [2] [3]
Macscoop on the 10.5.2 update
Another report of the problem
Clearing the font cache
Yes, it is possible to render the process stationary regardless of whether the non-stationary cycles are related to real or complex roots. For example, the following seasonal random walk: $$y_t = y_{t-4} + \epsilon_t \quad \epsilon_t \sim NID(0, \sigma^2) \,,$$ contains four unit roots: $\pm 1, \pm i$ (i.e., 2 real and 2 complex roots). Applying the filter $(1 - L^4)$ (where $L$ is the lag operator such that $L^i y_t = y_{t-i}$) removes the non-stationary cycles from the series $y_t$, which is not surprising, since all we did was move $y_{t-4}$ to the left-hand side of the equation, so that the filtered series is $y_t - y_{t-4} = \epsilon_t$. The seasonal differencing filter is commonly used when working with ARIMA models. It is illuminating to write the seasonal differencing filter in factorized form (for a quarterly series in this example): $$(1 - L^4)y_t = (1-L)(1+L)(1+L^2)y_t = \epsilon_t \,.$$ The factor $(1-L)$ contains the root $1$, $(1+L)$ the root $-1$, and $(1+L^2)$ the complex roots $\pm i$. Thus, applying the filter $(1+L^2)$ we remove only those cycles related to the complex roots. The first graphic in the plot below shows the sample spectrum of a simulated seasonal random walk. It contains spikes at cycles of frequencies $0$ (related to the root $1$), $\pi/2$ and $3\pi/2$ (related to the roots $\pm i$) and $\pi$ (related to the root $-1$). In the second graphic, the cycles related to the complex roots have been removed by the filter $(1+L^2)$; the spike at the middle of the periodogram is no longer present, confirming that the contribution of those cycles to the filtered series is negligible. The plot can be reproduced with the following R code:

    set.seed(125)
    x <- diffinv(rnorm(100), 4)[-seq(4)]
    par(mfrow = c(2,1))
    spectrum(x, span = c(3,3),
             main = paste("Periodogram of a seasonal random walk", dQuote("x")))
    spectrum(na.omit(filter(x, filter = c(1,0,1), method = "conv", sides = 1)),
             span = c(3,3),
             main = expression(paste("Periodogram of ", (1+L^2), "x")))
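The algebra behind the $(1 - L^4)$ filter can also be verified numerically in a few lines (NumPy here rather than R; seed and series length are arbitrary):

```python
import numpy as np

# simulate a quarterly seasonal random walk y_t = y_{t-4} + eps_t
rng = np.random.default_rng(125)
eps = rng.normal(size=400)
y = np.zeros(404)               # four zero pre-sample values
for t in range(4, 404):
    y[t] = y[t - 4] + eps[t - 4]

# applying the seasonal differencing filter (1 - L^4) recovers the noise
d4 = y[4:] - y[:-4]
```

The differenced series `d4` coincides exactly with the innovations `eps`, as the rearrangement $y_t - y_{t-4} = \epsilon_t$ predicts.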
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: $(f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau$, whe... That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the $Y_{t+\tau}$? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time? Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered? @tpg2114 To reduce the data points needed for calculating the time correlation, you can run exactly the same simulation twice in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points. @DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably a historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series. Each one is equally spaced, 1e-9 seconds apart. The black line is $\frac{d T}{d t}$ and doesn't have an axis -- I don't care what the values are. The solid blue line is abs(shear strain) and is valued on the right axis. The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis. So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how, because I don't know how the result is indexed in time. Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
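The three numbered correlation questions above can be illustrated on synthetic data (a sketch with made-up signals, not the strain/temperature series; for 1-D real input, np.correlate in 'full' mode gives the same result as scipy.signal.correlate):

```python
import numpy as np

# synthetic pair: y is x delayed by 5 samples
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.roll(x, 5)

# full cross-correlation of two length-N series has 2*N - 1 points
c = np.correlate(y, x, mode='full')      # length 399 here, not 400
lags = np.arange(-len(x) + 1, len(x))    # lag axis matching the 'full' output
lag = lags[np.argmax(c)]                 # positive => y lags (trails) x
```

This answers questions 2 and 3: the 'full' output of two length-200 series has $2 \cdot 200 - 1 = 399$ points (hence the roughly 400 observed), and the lead/lag is the entry of `lags` at the argmax of the correlation.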
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question. It'd be better if we could do it quickly enough that no answers get posted until the question is clarified to satisfy the current HW policy. For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar} ~p$$ $$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$ and then define the following $$K_1=\frac 14 (p^2-q^2)$$ $$K_2=\frac 14 (pq+qp)$$ $$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$ The first part is to show that $$Q \... Okay. I guess we'll have to see what people say, but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics. Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay. @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." I.e.,
is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level' Others would argue it's not on topic because it's not conceptual How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the topvoted answers on both. Afternoon folks. 
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because it's really a math question, but I figured I'd ask anyway) @DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper)
I'm trying to write a module. Its input is a matrix (tensor), and the module should return a new tensor whose rank is increased by 1. The new tensor is defined as$$O \Rightarrow N_{\alpha \beta \ldots \sigma}^{\gamma \delta \ldots}=\frac{\partial}{\partial x^{\sigma}} O_{\alpha \beta \ldots}^{\gamma \delta \ldots}$$ where $O$ is the input ("old tensor") and $N$ is the new tensor returned by the module. Since the module should work for tensors of general rank (general number of indices), I try to obtain the rank of the input tensor, generate a list of indices representing the elements, and set them in a Do loop (see the comments in the code). The problem is in setting the values (element by element) in the Do loop: PartialDerivative[tensor_] := Module[{tens = tensor, oldorder, neworder, dlist, newtensor, \[Mu], i, doparams, \[Alpha], l, d, r},(*coordinates to differentiate with respect to*) r = {t, x, y, z};(*dimension*) d = 4;(*rank of input tensor*) oldorder = ArrayDepth[tens];(*rank of new tensor*) neworder = oldorder + 1;(*create list {d,d,...d} for creation of new tensor*) dlist = Table[d, {i, neworder}];(*create new dummy tensor with correct size *) newtensor = SparseArray[{}, dlist];(*now I try to create iterator list for last Do loop*) l = {};(*lets join all individual iterators, each is type {\[Mu][i], 1, d}e.g. 
the first one is {\[Mu][1], 1, d} and join them into one list*) Do[l = Join[l, {{\[Mu][i], 1, d}}], {i, 1, oldorder}];(*join the last iterator - the derivative index*) l = Join[l, {{\[Alpha], 1, d}}];(*now create Sequence from the list*) doparams = Delete[l, 0];(*now assign elements of the new tensor from the old tensor - Error line*) Do[newtensor[[Delete[Array[\[Mu], d], 0], \[Alpha]]] = D[tens[[Delete[Array[\[Mu], d], 0]]], r[[\[Alpha]]]], doparams]; Return[newtensor]; ](*calling example - should return identity matrix 4x4*)PartialDerivative[{t, x, y, z}] Everything seems to work fine until the Error line (see code, the line above returning the new tensor), where I get the error Argument doparams$22092 at position 2 does not have the correct form for an iterator. Do you know where the mistake is? Or is there another, simpler way to write the function? Thank you for any tips.
This shows you the differences between two versions of the page. congruence_regular [2010/08/20 20:14] jipsen created — congruence_regular [2010/08/20 20:14] (current) jipsen. An algebra is \emph{congruence regular} if each congruence relation of the algebra is determined by any one of its congruence classes, i.e. $\forall a,b\ [a]_{\theta}=[b]_{\psi} \Longrightarrow \theta =\psi$.
OpenCV 3.4.8-dev Open Source Computer Vision In the last chapter, we saw that corners are regions in the image with large variation in intensity in all directions. One early attempt to find these corners was made by Chris Harris & Mike Stephens in their paper A Combined Corner and Edge Detector in 1988, so it is now called the Harris Corner Detector. They took this simple idea to a mathematical form. It basically finds the difference in intensity for a displacement of \((u,v)\) in all directions. This is expressed as below: \[E(u,v) = \sum_{x,y} \underbrace{w(x,y)}_\text{window function} \, [\underbrace{I(x+u,y+v)}_\text{shifted intensity}-\underbrace{I(x,y)}_\text{intensity}]^2\] The window function is either a rectangular window or a Gaussian window which gives weights to the pixels underneath. We have to maximize this function \(E(u,v)\) for corner detection. That means we have to maximize the second term. Applying a Taylor expansion to the above equation and using some mathematical steps (please refer to any standard textbook for the full derivation), we get the final equation: \[E(u,v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}\] where \[M = \sum_{x,y} w(x,y) \begin{bmatrix}I_x I_x & I_x I_y \\ I_x I_y & I_y I_y \end{bmatrix}\] Here, \(I_x\) and \(I_y\) are the image derivatives in the x and y directions respectively (they can easily be found using cv.Sobel()). Then comes the main part. After this, they created a score, basically an equation, which determines whether a window can contain a corner or not: \[R = det(M) - k(trace(M))^2\] where \(det(M) = \lambda_1 \lambda_2\), \(trace(M) = \lambda_1 + \lambda_2\), and \(\lambda_1\), \(\lambda_2\) are the eigenvalues of \(M\). So the values of these eigenvalues decide whether a region is a corner, an edge, or flat. It can be represented in a nice picture as follows: So the result of Harris Corner Detection is a grayscale image with these scores. Thresholding for a suitable score gives you the corners in the image. We will do it with a simple image. OpenCV has the function cv.cornerHarris() for this purpose. 
Its arguments are: See the example below: Below are the three results: Sometimes, you may need to find the corners with maximum accuracy. OpenCV comes with a function cv.cornerSubPix() which further refines the detected corners with sub-pixel accuracy. Below is an example. As usual, we need to find the Harris corners first. Then we pass the centroids of these corners (there may be a bunch of pixels at a corner; we take their centroid) to refine them. Harris corners are marked in red pixels and refined corners are marked in green pixels. For this function, we have to define the criteria for when to stop the iteration. We stop it after a specified number of iterations or once a certain accuracy is achieved, whichever occurs first. We also need to define the size of the neighbourhood in which it searches for corners. Below is the result, where some important locations are shown in a zoomed window for visualization:
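The score $R$ described above can also be reproduced without OpenCV. The following is a minimal NumPy sketch, not the library's implementation: central differences stand in for cv.Sobel(), a flat 3×3 box sum stands in for the window function $w$, and the test image (a bright square on a dark background) is made up so that its corners sit at known positions.

```python
import numpy as np

def harris_response(img, k=0.04):
    # image gradients; np.gradient's first output is along axis 0 (y)
    Iy, Ix = np.gradient(img.astype(float))

    def box(a):
        # flat 3x3 window sum (the window function w), via shifts
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out

    # entries of M, summed over the window
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2   # R = det(M) - k*trace(M)^2

# synthetic image: bright square with corners at (5,5), (5,14), (14,5), (14,14)
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

On this image, R is large and positive at the square's corners, near zero in flat regions, and negative along the edges, matching the corner/flat/edge classification in the text.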
Consider the following properties for a subset $A$ of $\mathbb{N}$: (1) $A$ is large: $\sum_{n \in A}\frac{1}{n}=\infty$, (2) $A^\infty=\limsup \frac{|A \cap \{ 1, \dots, n\}|}{n} >0$, (3) $A_\infty=\liminf \frac{|A \cap \{ 1, \dots, n\}|}{n} >0.$ By a conjecture of Erdős–Turán, if $A$ is large, then it contains arithmetic progressions of any given (finite) length. By a result of Szemerédi, if $A^\infty >0,$ then $A$ contains arithmetic progressions of length $k$ for all positive integers $k$. Let's consider these properties for generic subsets of $\omega$ added by forcing (I assume they do not contain $0$), and let me give a few examples: (a) If $r$ is Cohen, then $r$ is large, $r^\infty=1$, $r_\infty=0$, and for any $K,L$, we can find $M$ such that $M, M+L, M+2L, \dots, M+KL$ are in $r$. (b) If $r$ is random, then $r$ is large, and $r^\infty=r_\infty=\frac{1}{2}.$ So by Szemerédi's result, it contains arbitrarily long arithmetic progressions. Question 1. Is there a direct proof, without using Szemerédi's result, that $r$ contains arbitrarily long arithmetic progressions? (c) If $r$ is Mathias, relative to an ultrafilter $U$, then the properties of $r$ depend on $U$. It is possible to state the same results for some other generic reals, but as there are many generic reals that I do not know, I would like to ask a more general question: Question 2. Suppose that $r$ is a generic real added by forcing (we assume it does not contain $0$). Discuss whether $r$ is large, and what $r^\infty$ and $r_\infty$ are. Also say whether $r$ contains arbitrarily long arithmetic progressions or not (I would prefer direct proofs instead of references to known results (say, for example: as $r^\infty>0,$ it contains arbitrarily long arithmetic progressions)).
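For intuition about case (b), here is a small Python simulation; a Bernoulli(1/2) set on $\{1,\dots,N\}$ is only a finite stand-in for a random real, and the seed and cutoff are arbitrary choices.

```python
import random

random.seed(0)
N = 200_000
# each n included independently with probability 1/2
in_A = {n for n in range(1, N + 1) if random.random() < 0.5}

# relative density |A ∩ {1,...,n}| / n as n grows
count, densities = 0, []
for n in range(1, N + 1):
    count += n in in_A
    densities.append(count / n)

# the tail of the density sequence hovers near 1/2,
# suggesting limsup = liminf = 1/2 for the random real
tail = densities[N // 2:]
print(min(tail), max(tail))
```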
In Python, objects can declare their textual representation using the __repr__ method. IPython expands on this idea and allows objects to declare other, rich representations including: A single object can declare some or all of these representations; all are handled by IPython's display system. This Notebook shows how you can use this display system to incorporate a broad range of content into your Notebooks. The display function is a general purpose tool for displaying different representations of objects. Think of it as from IPython.display import display A few points: display on an object will send If you want to display a particular representation, there are specific functions for that: from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg) To work with images (JPEG, PNG) use the Image class. from IPython.display import Image i = Image(filename='../images/ipython_logo.png') Returning an Image object from an expression will automatically display it: i Or you can pass an object with a rich representation to display: display(i) An image can also be displayed from raw data or a URL. Image(url='http://python.org/images/python-logo.gif') SVG images are also supported out of the box. from IPython.display import SVGSVG(filename='../images/python_logo.svg') By default, image data is embedded in the notebook document so that the images can be viewed offline. However it is also possible to tell the Image class to only store a link to the image. Let's see how this works using a webcam at Berkeley. 
from IPython.display import Imageimg_url = 'http://www.lawrencehallofscience.org/static/scienceview/scienceview.berkeley.edu/html/view/view_assets/images/newview.jpg'# by default Image data are embeddedEmbed = Image(img_url)# if kwarg `url` is given, the embedding is assumed to be falseSoftLinked = Image(url=img_url)# In each case, embed can be specified explicitly with the `embed` kwarg# ForceEmbed = Image(url=img_url, embed=True) Here is the embedded version. Note that this image was pulled from the webcam when this code cell was originally run and stored in the Notebook. Unless we rerun this cell, this is not today's image. Embed Here is today's image from the same webcam at Berkeley (refreshed every minute, if you reload the notebook), visible only with an active internet connection; it should be different from the previous one. Notebooks saved with this kind of image will be smaller and always reflect the current version of the source, but the image won't display offline. SoftLinked Of course, if you re-run this Notebook, the two images will be the same again. Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class. from IPython.display import HTML s = """<table><tr><th>Header 1</th><th>Header 2</th></tr><tr><td>row 1, cell 1</td><td>row 1, cell 2</td></tr><tr><td>row 2, cell 1</td><td>row 2, cell 2</td></tr></table>""" h = HTML(s) display(h) Header 1 Header 2 row 1, cell 1 row 1, cell 2 row 2, cell 1 row 2, cell 2 You can also use the %%html cell magic to accomplish the same thing. %%html<table><tr><th>Header 1</th><th>Header 2</th></tr><tr><td>row 1, cell 1</td><td>row 1, cell 2</td></tr><tr><td>row 2, cell 1</td><td>row 2, cell 2</td></tr></table> Header 1 Header 2 row 1, cell 1 row 1, cell 2 row 2, cell 1 row 2, cell 2 The Notebook also enables objects to declare a JavaScript representation. 
At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output. from IPython.display import Javascript Pass a string of JavaScript source code to the JavaScript object and then display it. js = Javascript('alert("hi")'); display(js) The same thing can be accomplished using the %%javascript cell magic: %%javascriptalert("hi"); Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs one of the d3.js examples. Javascript( """$.getScript('//cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""") %%html<style type="text/css">circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px;}.leaf circle { fill: #ff7f0e; fill-opacity: 1;}text { font: 10px sans-serif;}</style> %%javascript// element is the jQuery element we will append tovar e = element.get(0); var diameter = 600, format = d3.format(",d");var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; });var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)");d3.json("data/flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? 
"" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); });});d3.select(self.frameElement).style("height", diameter + "px"); The IPython display system also has builtin support for the display of mathematical expressions typeset in LaTeX, which is rendered in the browser using MathJax. You can pass raw LaTeX test as a string to the Math object: from IPython.display import MathMath(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx') With the Latex class, you have to include the delimiters yourself. This allows you to use other LaTeX modes such as eqnarray: from IPython.display import LatexLatex(r"""\begin{eqnarray}\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\\nabla \cdot \vec{\mathbf{B}} & = 0 \end{eqnarray}""") Or you can enter LaTeX directly with the %%latex cell magic: %%latex\begin{align}\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\\nabla \cdot \vec{\mathbf{B}} & = 0\end{align} IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. 
from IPython.display import AudioAudio(url="http://www.nch.com.au/acm/8k16bitpcm.wav") A NumPy array can be auralized automatically. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook. For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as beats occurs. This can be auralized as follows: import numpy as npmax_time = 3f1 = 220.0f2 = 224.0rate = 8000.0L = 3times = np.linspace(0,L,rate*L)signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)Audio(data=signal, rate=rate) More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: from IPython.display import YouTubeVideoYouTubeVideo('sjfsUzECqK0') Using the nascent video capabilities of modern browsers, you may also be able to display local videos. At the moment this doesn't work very well in all browsers, so it may or may not work for you; we will continue testing this and looking for ways to make it more robust. The following cell loads a local file called animation.m4v, encodes the raw video as base64 for HTTP transport, and uses the HTML5 video tag to load it. On Chrome 15 it works correctly, displaying a control bar at the bottom with a play/pause button and a location slider. from IPython.display import HTMLfrom base64 import b64encodevideo = open("../images/animation.m4v", "rb").read()video_encoded = b64encode(video).decode('ascii')video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)HTML(data=video_tag) You can even embed an entire page from another site in an iframe; for example, here is the Jupyter website: from IPython.display import IFrameIFrame('http://jupyter.org', width='100%', height=350) IPython provides builtin display classes for generating links to local files. 
Create a link to a single file using the FileLink object: from IPython.display import FileLink, FileLinksFileLink('Cell Magics.ipynb') Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner, creating links to files in all sub-directories as well. FileLinks('.') The IPython Notebook allows arbitrary code execution in both the IPython kernel and in the browser, through HTML and JavaScript output. More importantly, because IPython has a JavaScript API for running code in the browser, HTML and JavaScript output can actually trigger code to be run in the kernel. This poses a significant security risk, as it would allow IPython Notebooks to execute arbitrary code on your computer. To protect against these risks, the IPython Notebook has a security model that specifies how dangerous output is handled. Here is a short summary: A full description of the IPython security model can be found on this page. Much of the power of the Notebook is that it enables users to share notebooks with each other using http://nbviewer.ipython.org, without installing IPython locally. As of IPython 2.0, notebooks rendered on nbviewer will display all output, including HTML and JavaScript. Furthermore, to provide a consistent JavaScript environment on the live Notebook and nbviewer, the following JavaScript libraries are loaded onto the nbviewer page, before the notebook and its output is displayed: Libraries such as mpld3 use these capabilities to generate interactive visualizations that work on nbviewer.
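The display protocol described at the top of this section can be illustrated with a minimal sketch: an ordinary Python class that declares both a plain-text representation (__repr__) and a rich HTML one (_repr_html_), which IPython's display system picks up automatically. The class name and markup here are made up for illustration.

```python
class ColorSwatch:
    """Toy object declaring a plain-text and an HTML representation."""

    def __init__(self, css_color):
        self.css_color = css_color

    def __repr__(self):
        # plain-text fallback, used outside rich frontends
        return f"ColorSwatch({self.css_color!r})"

    def _repr_html_(self):
        # picked up by IPython's display system in the Notebook
        return (f'<div style="width:2em;height:2em;'
                f'background:{self.css_color}"></div> {self.css_color}')


s = ColorSwatch("rebeccapurple")
print(repr(s))            # what a plain REPL shows
print(s._repr_html_())    # what the Notebook renders
```

In a Notebook, evaluating `s` as the last expression of a cell (or calling `display(s)`) would render the HTML version rather than the text one.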
Say you consider spherical waves (momentum eigenstates) propagating outwards from some starting point $r = r_{0} $ (not defined for $r < r_{0}$) with $k \in \mathbb{R} $: \begin{equation} \psi\left(r, t\right) = \left( A {e^{ikr} \over r} + B {e^{-ikr } \over r } \right) e^{-ikt} \end{equation} We have a boundary condition at $r = r_{0} $: \begin{equation} \psi\left(r_{0}, 0\right) = {e^{ikr_{0}} \over r_{0}} \end{equation} This automatically sets $A = 1$ and $ B = 0 $. Question: Does this wave reach infinity? At first glance it looks like the answer is definitely no: \begin{equation} \lim_{r \rightarrow \infty } \psi\left(r,t \right) = \lim_{r \rightarrow \infty} {e^{ikr} \over r } e^{-ikt} = {\lim_{r \rightarrow \infty} e^{ikr- ikt} \over \lim_{r \rightarrow \infty} r} = 0 \end{equation} since $e^{ikr-ikt}$ merely oscillates for any value of $r$ and $t$. However, if we consider the energy of the wave (let $I$ be the intensity): \begin{equation} I\left(r\right) = \left \vert \psi\left(r,t\right) \right \vert^2 = {1 \over r^2} \end{equation} And so the energy in a spherical shell at distance $r$: \begin{equation} 4 \pi r^2 \cdot I\left(r\right) = 4\pi \end{equation} Notice the energy is constant, independent of $r$, so in this sense the energy of the wave reaches infinity. What is happening: is the wave physically reaching infinity or not? I am looking for an answer to this question that might generalize to more complicated situations. Thanks!
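As a quick numerical sanity check of the shell-energy claim in the question (the sample values of $k$ and $t$ are arbitrary):

```python
import numpy as np

# outgoing wave psi = e^{i(kr - kt)} / r, so |psi|^2 = 1/r^2;
# the energy in a shell of radius r is 4*pi*r^2 * I(r)
k, t = 3.0, 1.2                      # arbitrary sample values
r = np.linspace(1.0, 1e6, 7)         # radii spanning many orders of magnitude
psi = np.exp(1j * (k * r - k * t)) / r
shell_energy = 4 * np.pi * r**2 * np.abs(psi)**2
print(shell_energy)                  # constant 4*pi, independent of r
```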
Let us for simplicity discuss RHF formalism. For $2n$-electron system we have $n$ Hartree-Fock equations written for $n$ spatial orbitals $\{ \phi_{k} \}_{k=1}^{n}$ $$ \newcommand{\mat}[1]{\boldsymbol{\mathbf{#1}}} $$ \begin{equation} \hat{F}(1) \phi_{k}(1) = \varepsilon_{k} \phi_{k}(1) \, , \quad k = 1, 2, \dotsc, n \, . \end{equation} Once we introduce finite basis $\{ \chi_{q} \}_{q=1}^{m}$ and express spatial orbitals as a linear combination of basis functions $\chi_{q}$ \begin{equation} \phi_{k}(1) = \sum\limits_{q=1}^{m} c_{qk} \chi_{q}(1) \, , \quad k = 1, 2, \dotsc, n \, , \end{equation} we end up with $n$ Roothaan–Hall equations \begin{equation} \sum\limits_{q=1}^{m} F_{pq} c_{qk} = \varepsilon_{k} \sum\limits_{q=1}^{m} S_{pq} c_{qk} \, , \quad k = 1, 2, \dotsc, n \, , \end{equation} which can be rewritten in the following matrix form \begin{equation} \mat{F} \mat{c}_{k} = \varepsilon_{k} \mat{S} \mat{c}_{k} \, , \quad k = 1, 2, \dotsc, n \, . \end{equation} The Fock matrix $\mat{F}$ and the overlap matrix $\mat{S}$ are both $m \times m$ square matrices, $\mat{c}_{k}$ is a column $m \times 1$ matrix, $\varepsilon_{k}$ is just a scalar value. We can then collect all $n$ $\mat{c}_{k}$ column $m \times 1$ matrices into one $m \times n$ matrix $\mat{C}$ and all $n$ values $\varepsilon_{k}$ into $n \times n$ square matrix $\mat{\varepsilon}$ \begin{equation} \mat{F} \mat{C} = \mat{S} \mat{C} \mat{\varepsilon} \, . \end{equation} In practice, however, we extend both $\mat{C}$ and $\mat{\varepsilon}$ to $m \times m$ matrices from $m \times n$ and $n \times n$ respectively, which results in having $m-n$ virtual (unoccupied) orbitals. Taking into account that virtual orbitals are even more unphysical than their occupied counterparts, the question is: what is the point of such an extension of $\mat{C}$ and $\mat{\varepsilon}$? Why don't we just leave them at $m \times n$ and $n \times n$ sizes respectively?
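A toy numerical sketch of the extended equation F C = S C ε may make the setup concrete. The matrices below are made-up symmetric m = 3 examples, not real integrals, and the generalized eigenproblem is solved via Löwdin symmetric orthogonalization (X = S^(-1/2)); note the solver naturally returns all m columns, occupied and virtual alike.

```python
import numpy as np

# hypothetical Fock and overlap matrices for m = 3 basis functions
F = np.array([[-1.0,  0.2,  0.1],
              [ 0.2, -0.5,  0.3],
              [ 0.1,  0.3,  0.2]])
S = np.array([[ 1.0,  0.1,  0.0],
              [ 0.1,  1.0,  0.2],
              [ 0.0,  0.2,  1.0]])

# Löwdin symmetric orthogonalization: X = S^(-1/2)
w, U = np.linalg.eigh(S)
X = U @ np.diag(w ** -0.5) @ U.T

# transform to an orthonormal basis and diagonalize
Fp = X.T @ F @ X
eps, Cp = np.linalg.eigh(Fp)
C = X @ Cp          # full m x m coefficient matrix: occupied + virtual orbitals

# verify F C = S C eps column by column
print(np.allclose(F @ C, S @ C @ np.diag(eps)))   # True
print(C.shape)                                    # (3, 3) -- all m orbitals
```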
Give a convincing argument that the following inequalities are true: $$\int_1^n \log x\,\mathrm dx \leq \log 1 + \log 2 + \cdots + \log n \leq \int_1^{n+1}\log x \,\mathrm dx$$ for any $n \geq 1$. We are given the hint to observe that: $$\int_{k-1}^k \log x\,\mathrm dx \leq \log k \leq \int_k^{k+1}\log x\,\mathrm dx $$ Update 1 BRIC-Fan's argument makes sense, but I'm supposed to use the result of the above inequality to show that: $$ n^ne^{1-n} \leq n! \leq (n+1)^{n+1}e^{-n} $$ My apologies if this is trivial, but could someone please help bridge the gap?
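One way to bridge the gap, sketched from the inequalities above: evaluate both integrals explicitly and then exponentiate. Since $\int \log x\,\mathrm dx = x\log x - x$,

```latex
\int_1^n \log x\,\mathrm dx = n\log n - n + 1
\qquad\text{and}\qquad
\int_1^{n+1} \log x\,\mathrm dx = (n+1)\log(n+1) - n,
```

so the sandwich reads $n\log n - n + 1 \le \log n! \le (n+1)\log(n+1) - n$, and applying $e^{x}$ (which is increasing) to all three parts gives $n^n e^{1-n} \le n! \le (n+1)^{n+1} e^{-n}$.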
Of course, most of you will, upon reading the title, exclaim "But isn't that the definition of the continuum hypothesis?" So I need to be a little more careful about the exact definitions. Let ${\sf CH}(\frak m)$ be the statement that either $\frak m$ is a finite cardinal or there is no cardinality $\frak n$ such that ${\frak m}<{\frak n}<2^{\frak m}$. The standard continuum hypothesis is then ${\sf CH}(\aleph_0)$, and the GCH is $\forall{\frak m}\,{\sf CH}({\frak m})$. The statement I am interested in is the question of whether ${\sf CH}(\aleph_\alpha)$ implies $2^{\aleph_\alpha}=\aleph_{\alpha+1}$. I suspect the following lemma, used in the proof of $\sf GCH\to AC$, may be useful: If ${\sf CH}(\frak m)$ and $\frak m+m=m$ and ${\frak n}\le 2^{\frak m}$, then $\frak m,n$ are comparable, which is to say $\frak m\le n$ or $\frak n\le m$. One side of the inequality is easy - if $2^{\aleph_\alpha}<\aleph_{\alpha+1}$ then we would have $\aleph_\alpha<2^{\aleph_\alpha}<\aleph_{\alpha+1}$ which violates the properties of the $\aleph$ function. But usually proving $\aleph_{\alpha+1}\le 2^{\aleph_\alpha}$ is done using some kind of choice, and I don't have enough $\sf GCH$ here to prove that $2^{\aleph_\alpha}$ is well-orderable (Sierpinski's proof gives the result assuming ${\sf CH}({\cal P}^n(\aleph_\alpha))$ for $n=1,\dots,4$).
In Python, objects can declare their textual representation using the __repr__ method. IPython expands on this idea and allows objects to declare other, rich representations including: A single object can declare some or all of these representations; all are handled by IPython's display system. This Notebook shows how you can use this display system to incorporate a broad range of content into your Notebooks. The display function is a general purpose tool for displaying different representations of objects. Think of it as from IPython.display import display A few points: display on an object will send If you want to display a particular representation, there are specific functions for that: from IPython.display import ( display_pretty, display_html, display_jpeg, display_png, display_json, display_latex, display_svg) To work with images (JPEG, PNG) use the Image class. from IPython.display import Image i = Image(filename='../images/ipython_logo.png') Returning an Image object from an expression will automatically display it: i Or you can pass an object with a rich representation to display: display(i) An image can also be displayed from raw data or a URL. Image(url='http://python.org/images/python-logo.gif') SVG images are also supported out of the box. from IPython.display import SVGSVG(filename='../images/python_logo.svg') By default, image data is embedded in the notebook document so that the images can be viewed offline. However it is also possible to tell the Image class to only store a link to the image. Let's see how this works using a webcam at Berkeley. 
from IPython.display import Imageimg_url = 'http://www.lawrencehallofscience.org/static/scienceview/scienceview.berkeley.edu/html/view/view_assets/images/newview.jpg'# by default Image data are embeddedEmbed = Image(img_url)# if kwarg `url` is given, the embedding is assumed to be falseSoftLinked = Image(url=img_url)# In each case, embed can be specified explicitly with the `embed` kwarg# ForceEmbed = Image(url=img_url, embed=True) Here is the embedded version. Note that this image was pulled from the webcam when this code cell was originally run and stored in the Notebook. Unless we rerun this cell, this is not todays image. Embed Here is today's image from same webcam at Berkeley, (refreshed every minutes, if you reload the notebook), visible only with an active internet connection, that should be different from the previous one. Notebooks saved with this kind of image will be smaller and always reflect the current version of the source, but the image won't display offline. SoftLinked Of course, if you re-run this Notebook, the two images will be the same again. Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class. from IPython.display import HTML s = """<table><tr><th>Header 1</th><th>Header 2</th></tr><tr><td>row 1, cell 1</td><td>row 1, cell 2</td></tr><tr><td>row 2, cell 1</td><td>row 2, cell 2</td></tr></table>""" h = HTML(s) display(h) Header 1 Header 2 row 1, cell 1 row 1, cell 2 row 2, cell 1 row 2, cell 2 You can also use the %%html cell magic to accomplish the same thing. %%html<table><tr><th>Header 1</th><th>Header 2</th></tr><tr><td>row 1, cell 1</td><td>row 1, cell 2</td></tr><tr><td>row 2, cell 1</td><td>row 2, cell 2</td></tr></table> Header 1 Header 2 row 1, cell 1 row 1, cell 2 row 2, cell 1 row 2, cell 2 The Notebook also enables objects to declare a JavaScript representation. 
At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output. from IPython.display import Javascript Pass a string of JavaScript source code to the JavaScript object and then display it. js = Javascript('alert("hi")'); display(js) The same thing can be accomplished using the %%javascript cell magic: %%javascriptalert("hi"); Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs ones of the d3.js examples. Javascript( """$.getScript('//cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""") %%html<style type="text/css">circle { fill: rgb(31, 119, 180); fill-opacity: .25; stroke: rgb(31, 119, 180); stroke-width: 1px;}.leaf circle { fill: #ff7f0e; fill-opacity: 1;}text { font: 10px sans-serif;}</style> %%javascript// element is the jQuery element we will append tovar e = element.get(0); var diameter = 600, format = d3.format(",d");var pack = d3.layout.pack() .size([diameter - 4, diameter - 4]) .value(function(d) { return d.size; });var svg = d3.select(e).append("svg") .attr("width", diameter) .attr("height", diameter) .append("g") .attr("transform", "translate(2,2)");d3.json("data/flare.json", function(error, root) { var node = svg.datum(root).selectAll(".node") .data(pack.nodes) .enter().append("g") .attr("class", function(d) { return d.children ? "node" : "leaf node"; }) .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; }); node.append("title") .text(function(d) { return d.name + (d.children ? 
"" : ": " + format(d.size)); }); node.append("circle") .attr("r", function(d) { return d.r; }); node.filter(function(d) { return !d.children; }).append("text") .attr("dy", ".3em") .style("text-anchor", "middle") .text(function(d) { return d.name.substring(0, d.r / 3); });});d3.select(self.frameElement).style("height", diameter + "px"); The IPython display system also has builtin support for the display of mathematical expressions typeset in LaTeX, which is rendered in the browser using MathJax. You can pass raw LaTeX test as a string to the Math object: from IPython.display import MathMath(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx') With the Latex class, you have to include the delimiters yourself. This allows you to use other LaTeX modes such as eqnarray: from IPython.display import LatexLatex(r"""\begin{eqnarray}\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\\nabla \cdot \vec{\mathbf{B}} & = 0 \end{eqnarray}""") Or you can enter LaTeX directly with the %%latex cell magic: %%latex\begin{align}\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\\nabla \cdot \vec{\mathbf{B}} & = 0\end{align} IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers. 
from IPython.display import AudioAudio(url="http://www.nch.com.au/acm/8k16bitpcm.wav") A NumPy array can be auralized automatically. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook. For instance, when two sine waves with almost the same frequency are superimposed a phenomena known as beats occur. This can be auralised as follows: import numpy as npmax_time = 3f1 = 220.0f2 = 224.0rate = 8000.0L = 3times = np.linspace(0,L,rate*L)signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)Audio(data=signal, rate=rate) More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load: from IPython.display import YouTubeVideoYouTubeVideo('sjfsUzECqK0') Using the nascent video capabilities of modern browsers, you may also be able to display local videos. At the moment this doesn't work very well in all browsers, so it may or may not work for you; we will continue testing this and looking for ways to make it more robust. The following cell loads a local file called animation.m4v, encodes the raw video as base64 for httptransport, and uses the HTML5 video tag to load it. On Chrome 15 it works correctly, displaying a control bar at the bottom with a play/pause button and a location slider. from IPython.display import HTMLfrom base64 import b64encodevideo = open("../images/animation.m4v", "rb").read()video_encoded = b64encode(video).decode('ascii')video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)HTML(data=video_tag) You can even embed an entire page from another site in an iframe; for example this is today's Wikipedia page for mobile users: from IPython.display import IFrameIFrame('http://jupyter.org', width='100%', height=350) IPython provides builtin display classes for generating links to local files. 
Create a link to a single file using the FileLink object: from IPython.display import FileLink, FileLinksFileLink('Cell Magics.ipynb') Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner creating links to files in all sub-directories as well. FileLinks('.') The IPython Notebook allows arbitrary code execution in both the IPython kernel and in the browser, through HTML and JavaScript output. More importantly, because IPython has a JavaScript API for running code in the browser, HTML and JavaScript output can actually trigger code to be run in the kernel. This poses a significant security risk as it would allow IPython Notebooks to execute arbitrary code on your computer. To protect against these risks, the IPython Notebook has a security model that specifies how dangerous output is handled. Here is a short summary: A full description of the IPython security model can be found on this page. Much of the power of the Notebook is that it enables users to share notebooks with each other using http://nbviewer.ipython.org, without installing IPython locally. As of IPython 2.0, notebooks rendered on nbviewer will display all output, including HTML and JavaScript. Furthermore, to provide a consistent JavaScript environment on the live Notebook and nbviewer, the following JavaScript libraries are loaded onto the nbviewer page, before the notebook and its output are displayed: Libraries such as mpld3 use these capabilities to generate interactive visualizations that work on nbviewer.
I am trying to calculate the tension in a string in the configuration shown in the diagram below. Here is a verbal description of the setup: The string is fixed at one end with the other end connected to a spring. The spring, with constant $K$, is limited to only vertical movement. The string begins with a tension $T$, which causes the free end of the spring to extend from its resting position $S_0$ to position $S_0+S$ such that the following relationship is upheld:$$F_s=K\cdot S=T\cdot\frac{h}{\sqrt{D^2+h^2}}$$ A distance $x$ along the horizontal from the fixed end of the string, a force $F$ is applied vertically downward. This force is such that the string is forced into the configuration shown with a tension $T_1$. The spring will naturally extend to accommodate this deformation until the free end is $h_1$ above the horizontal line defined by the segment labeled $x$. It seems to me that if the string has cross-sectional area $A$ and Young's modulus $Y$ then it should be possible to determine the tension $T_1$ as a function of $A$, $Y$, $x$, $T$, $D$, and $h$. $S_0$ is not relevant to the calculation at all. I can calculate $S$ using $$S=\frac{T}{K}\cdot\frac{h}{\sqrt{D^2+h^2}}$$ I should be able to calculate $$T_1=K(S+h-h_1)\cdot\frac{\sqrt{(D-x)^2+h_1^2}}{h_1}$$ and from previous problems I know that I can calculate $$T_1=(T+AY)\frac{x+\sqrt{(D-x)^2+h_1^2}}{\sqrt{D^2+h^2}}-AY$$ However, I can't for the life of me find a way to write $h_1$ in terms of the other variables so that I can remove it from either equation for $T_1$. I think it should be possible since $h_1$ is simply an equilibrium point, but I've been unable to work it out myself. I'm probably missing something obvious, but some other eyes on this problem would be greatly appreciated! I assume the length of the rope remains constant. 
At first the length is given by: $$L=\sqrt{D^2+h^2}$$ After applying the force, the length is given by: $$L=x+\sqrt{(D-x)^2+h_1^2}$$ By assuming $L$ is constant, one can derive an expression for $h_1$: $$\sqrt{D^2+h^2}=x+\sqrt{(D-x)^2+h_1^2}$$ $$\left(\sqrt{D^2+h^2}-x\right)^2=(D-x)^2+h_1^2$$ $$h_1 = \pm \sqrt{\left(\sqrt{D^2+h^2}-x\right)^2-(D-x)^2}$$
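The algebra can be sanity-checked numerically; the sketch below uses arbitrary illustrative values for $D$, $h$ and $x$ (not from the original problem) and takes the positive root, since $h_1$ is a height:

```python
import math

# Arbitrary illustrative geometry (not from the original problem)
D, h, x = 4.0, 3.0, 1.0

L0 = math.sqrt(D**2 + h**2)               # original string length
h1 = math.sqrt((L0 - x)**2 - (D - x)**2)  # positive root of the formula above
L1 = x + math.sqrt((D - x)**2 + h1**2)    # string length after deformation

print(abs(L0 - L1) < 1e-12)  # True: the derived h1 keeps the length constant
```

With these numbers $L_0 = 5$ and $h_1 = \sqrt{7}$, and the deformed length comes back to exactly 5, confirming the constant-length constraint is satisfied by the derived $h_1$.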
Ex.4.3 Q10 Quadratic Equations Solutions - NCERT Maths Class 10 Question An express train takes \(1\) hour less than a passenger train to travel \(132 \,\rm{km}\) between Mysore and Bangalore (without taking into consideration the time they stop at intermediate stations). If the average speed of express train is \(11\, \rm{km /hr}\) more than that of passenger train, find the average speed of the two trains. Text Solution What is Known? i) Express train takes \(1\) hour less than a passenger train to travel \(132\,\rm{ km.}\) ii) Average speed of express train is \(11\,\rm{km/hr}\) more than that of passenger train. What is Unknown? Average speed of express train and the passenger train. Reasoning: Let the average speed of passenger train \(= x\,\rm{km/hr}\) Average speed of express train \(= (x + 11)\, \rm{km /hr}\) \[\begin{align}\rm{Distance} &= \rm{Speed} \times \rm{Time}\\\rm{Time}& = \frac{{{\rm{Distance}}}}{{\rm{Speed}}}\end{align}\] Time taken by passenger train to travel \( 132\,{\rm{km}}= \frac{{132}}{x}\) Time taken by express train to travel \(132 \,{\rm{km}} =\frac{{132}}{{x + 11}}\) Difference between the time taken by the passenger and the express train is \(1\) hour. Therefore, we can write: \[\frac{{132}}{x} - \frac{{132}}{{x + 11}} = 1\] Steps: Solving \(\begin{align} \frac{{132}}{x} - \frac{{132}}{{x + 11}} = 1 \end{align}\) by taking the LCM on the LHS: \[\begin{align}\frac{{132\left( {x + 11} \right) - 132x}}{{x\left( {x + 11} \right)}}& = 1\\\frac{{132x + 1452 - 132x}}{{{x^2} + 11x}} &= 1\\1452 &= {x^2} + 11x\\{x^2} + 11x - 1452 &= 0\end{align}\] By comparing \({x^2} + 11x - 1452 = 0\) with the general form of a quadratic equation \(ax^2 + bx + c = 0:\) \[a = 1,\; b = 11,\; c = - 1452\] \[\begin{align} {{b}^{2}}-4ac&={{11}^{2}}-4\left( 1 \right)\left( -1452 \right) \\ & =121+5808 \\ & =5929>0 \\ b^2 - 4ac &> 0 \end{align}\] \(\therefore\) Real roots exist.
\[\begin{align}{\rm{x}} &= \frac{{ - b \pm \sqrt {{b^2} - 4ac} }}{{2a}}\\ &= \frac{{ - 11 \pm \sqrt {5929} }}{{2(1)}}\\&= \frac{{ - 11 \pm 77}}{2}\\ x &= \frac{{ - 11 + 77}}{2} \qquad x = \frac{{ - 11 - 77}}{2}\\&= \frac{{66}}{2} \qquad \qquad \quad\;\; = \frac{{ - 88}}{2}\\&= 33 \qquad \qquad \quad\;\;\;\;= - 44\\ x &= 33 \qquad \qquad \qquad x = - 44\end{align}\] \(x\) can’t be a negative value as it represents the speed of the train. Speed of passenger train \(= 33\,\rm{ km/hr}\) Speed of express train \(=x+11 = 33 + 11 = 44\;\rm{km/hr.}\)
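The roots above can be checked with a few lines of Python (a quick verification sketch, not part of the textbook solution):

```python
import math

# Solve x^2 + 11x - 1452 = 0 with the quadratic formula, as in the steps above
a, b, c = 1, 11, -1452
disc = b**2 - 4*a*c   # 5929 > 0, so real roots exist
roots = [(-b + math.sqrt(disc)) / (2*a), (-b - math.sqrt(disc)) / (2*a)]
print(roots)          # [33.0, -44.0]

# Speed must be positive, so the passenger train does 33 km/hr
passenger = max(roots)
express = passenger + 11
# Sanity check: the time difference over 132 km is exactly 1 hour
print(132/passenger - 132/express)   # 1.0
```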
IAMCS Workshop in Large-Scale Inverse Problems and Uncertainty Quantification – Texas A&M University College Station, TX Stephen W. Hawking Auditorium George P. and Cynthia Woods Mitchell Institute for Fundamental Physics and Astronomy (MIST) Bruce Fryxell, University of Michigan Calibration and Prediction in a Radiative Shock Experiment Authors Bruce Fryxell Members of the CRASH Team Abstract The CRASH experiment uses a laser to generate a strong shock in a Be disk μm thick. The shock breaks out of the disk after about 400 ps into a Xe-filled tube and produces sufficient radiation to modify the shock structure. The shock location is predicted using two simulation codes, Hyades and CRASH. Hyades models the laser-plasma interaction at times less than 1.1 ns and can predict the shock breakout time. The CRASH code is initialized at 1.1 ns and is used to predict the shock location at later times for comparison to experiment. We use the simulation tools and experiments conducted in one region of input space to predict in a new region where no prior experiments exist. Two data sets exist on which to base predictions: shock breakout time data, and shock location data at 13, 14, and 16 ns, and we wish to predict shock locations at 20 and 26 ns to compare to subsequent experiments. We use two models of the Kennedy-O'Hagan form to combine experiments with simulations, using one to inform the other, and interpret the discrepancy in these models in a way that allows us to gain some understanding of model error separately from parameter tuning. Shock breakout times are modeled by constructing \(t=\eta_{BO}\left(x, \theta\right)+\delta_{BO}\left(x\right)+\epsilon_{BO}\) that jointly fits the field measurements \(T\) of shock breakout time \(t\) and a set of 1024 Hyades simulations over a 6 dimensional input space (4 experimental variables \(x\) and 2 calibration parameters \(\theta\)).
This model provides posterior distributions for the calibration parameters \(\pi\left(\theta \mid T\right)\), as well as for the parameters in Gaussian Process (GP) models of the emulator \(\eta_{BO}\), the discrepancy function \(\delta_{BO}\), and for the replication error \(\epsilon_{BO}\). If the discrepancy function is significant compared to measurement uncertainty, we would call this process "tuning," but if the discrepancy is small (as in our case), we refer to this as calibration. Next, we use the shock location field data at 13-16 ns along with shock locations from 1024 CRASH simulations to construct a model of the form \(z=\eta_{SL}\left(x,\theta\right)+\delta_{SL}\left(x\right)+\epsilon_{SL}\), with \(\theta\) now treated as an experimental, rather than a calibration parameter, drawn from the posterior constructed in the previous step, so \(\theta \sim \pi\left(\theta \mid T\right)\). The \(x\) are drawn from distributions representing uncertainties in the experimental parameters. This second model is used to construct the emulator \(\eta_{SL}\), its discrepancy \(\delta_{SL}\), and a best estimate of the replication error \(\epsilon_{SL}\). The discrepancy can be studied to understand the defects of the physics model. The result shows that our model tends to underpredict shock location. Finally we can use \(\eta_{SL}\left(x, \theta\right)+\delta_{SL}\left(x\right)+\epsilon_{SL}\) to predict shock location at 20 and 26 ns, times at which we had simulations but no previous measurements. In doing so we can separate the code prediction \(\eta_{SL}\left(x, \theta\right)\) and the uncertainty due to this prediction (caused by uncertainty in \(\theta\), \(x\), and in the GP modeling parameters) from the uncertainty due to discrepancy \(\delta_{SL}\left(x\right)\). The uncertainty in discrepancy is of course large, because we are extrapolating the discrepancy to a new region of input space.
The uncertainty in the emulator \(\eta_{SL}\left(x, \theta\right)\) is significantly smaller because there were simulation data in this region. Finally, comparison of the predictions with field measurements at 20 and 26 ns shows that even the smaller predictive interval from the emulator alone contains the actual field measurements.
xcorr function (of Matlab) computes the cross-correlation between two sequences $x[n]$ and $y[n]$ of length $M$ each. If $x$ and $y$ are random processes (jointly WSS) then xcorr returns an estimate $$\hat{r}_{xy}[m] = \frac{1}{M} \sum_{n=0}^{M-1} x[n]y[n+m]^*$$ computed over the two random vectors $x$ and $y$, of the theoretical cross-correlation between the two random processes $X$ and $Y$:$$r_{xy} [m] = E\{ X[n]Y[n+m]^* \} $$ (Note that the practical estimate computation (the sum's range and the scale) may be modified to make it an unbiased estimator and to deal with out of bound entries...) When the two sequences belong to deterministic signals, then it's called the deterministic cross-correlation, essentially the same thing computed from a practical point of view, but its interpretation is made carefully depending on the case. In signal processing theory and signal detection theory, the operations of correlation, convolution and matched filtering merge at some point. Without further details, one can observe that given two sequences $x$ and $y$, the computation mechanics, and hence the results, of the three operations will be the same (with slight modifications in the orientation of the arguments). Matched filtering essentially convolves a test signal with a target signal, with the test signal reversed in time. It can be shown that there'll be a unique peak at the matched filter output, if the target signal involves a copy of the test signal inside. From a theoretical point of view, intuition tells that, if the target and test signals are uncorrelated (and orthogonal, for zero mean processes, as a result) their cross-correlations should produce low values in line with inner products between orthogonal vectors. But when the target signal involves a copy of the test signal at some point, then there will be high correlation between the target and test signals close to those shifts, at the output, that will create the peak.
There are subtle details, however, such as if the test signal is contained more than once, if several test signals overlap, and if the test signal is not contained purely but through some (inevitable) transformations, which make the comparison more difficult or less reliable unless those transformations are known and rolled back properly. In most practical scenarios, the target signal will consist of noise or irrelevant signals plus the test signal. And for those lags which compare the test signal with the noise portions, the output will be arbitrary (in the ideal case, when the noise belongs to a zero mean white random process, the output should tend to zero, as the summation would sequentially add and subtract test signal samples which would produce approximately zero, when the test signal belongs to a zero mean (ergodic) process). And when the comparison is between the target signal and the test signal for a lag that corresponds to the place of the test signal in the target signal, the computation should produce a large value, as the sum is essentially computing the energy of the test signal; i.e., the sum of $x[n]^2$ values. Note that when the SNR is low, i.e. the noise portions have large amplitudes compared to the test signal embedded in the target signal, then the above paragraph becomes invalidated and the expected peak is buried within random peaks produced as the result of the irrelevant computations between the test signal and noise contained in the target signal.
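The peak-at-the-embedding-lag behavior described above is easy to demonstrate; the numpy sketch below uses made-up signal lengths, an arbitrary offset, and a low noise level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test signal embedded in a noisy target signal at offset 300
test = rng.standard_normal(64)
target = 0.1 * rng.standard_normal(1000)
offset = 300
target[offset:offset + test.size] += test

# Deterministic cross-correlation (equivalently, matched filtering with the
# time-reversed test signal): 'valid' mode slides the test signal across the
# target and computes the inner product at each lag
xc = np.correlate(target, test, mode="valid")

# The peak sits at the embedding offset, where the sum is essentially the
# energy of the test signal
print(int(np.argmax(xc)))   # 300
```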
If $C$ is any additive category then every object has a unique structure as an abelian group object so $Ab(C)=C$; but typically $C$ is not abelian. For example, this applies to the category of free abelian groups. One can also think about triangulated categories, which are usually not abelian, although a nice theorem of Freyd gives a canonical embedding in an abelian category. One example that has been studied extensively is the category of spectra in the sense of stable homotopy theory. Similarly, one can consider abelian group objects in the homotopy category of spaces, otherwise known as commutative H-spaces. The question also says: In general, I have already trouble to show that $Hom(A,B)\times Hom(B,C)\to Hom(A,C)$ is linear in the left coordinate. Surely this is formal? I have drawn the diagram but sadly I cannot get MathJax to display it. Update: Just to be clear about notation, I'll write $\mathcal{C}(X,Y)$ for morphism sets in $\mathcal{C}$, and $Hom(A,B)$ for morphism sets in $Ab(\mathcal{C})$. An object $A\in Ab(\mathcal{C})$ has a natural abelian group structure on $\mathcal{C}(T,A)$ for all $T\in\mathcal{C}$. Naturality means that $q\circ p+r\circ p=(q+r)\circ p$ for all $p:S\to T$ and $q,r:T\to A$. Now let $B$ be another object of $Ab(\mathcal{C})$. A morphism in $Ab(\mathcal{C})$ from $A$ to $B$ is just a morphism $f:A\to B$ in $\mathcal{C}$ with the property that $f\circ(p+q)=f\circ p+f\circ q$ for all $T$ and all $p,q\in\mathcal{C}(T,A)$. Now suppose we have such morphisms $f,g:A\to B$ and $h,k:B\to C$. We then have $$(f+g)\circ(p+q) = f\circ(p+q) + g\circ(p+q) = f\circ p + g\circ p + f\circ q + g\circ q = (f+g)\circ p + (f+g)\circ q$$ (using the naturality of addition, the homomorphism property of $f$ and $g$, and then naturality again). This shows that $f+g$ is again a homomorphism. A similar argument shows that $h\circ f$, $h\circ g$ and $h\circ(f+g)$ are homomorphisms.
We have $h\circ(f+g)=h\circ f+h\circ g$ by the homomorphism property of $h$. We also have $(h+k)\circ f=h\circ f+k\circ f$ by the naturality of addition.
How can I calculate the $\alpha$ and $\beta$ parameters for a Beta distribution if I know the mean and variance that I want the distribution to have? Examples of an R command to do this would be most helpful. I set$$\mu=\frac{\alpha}{\alpha+\beta}$$and$$\sigma^2=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}$$and solved for $\alpha$ and $\beta$. My results show that$$\alpha=\left(\frac{1-\mu}{\sigma^2}-\frac{1}{\mu}\right)\mu^2$$and$$\beta=\alpha\left(\frac{1}{\mu}-1\right)$$ I've written up some R code to estimate the parameters of the Beta distribution from a given mean, mu, and variance, var: estBetaParams <- function(mu, var) { alpha <- ((1 - mu) / var - 1 / mu) * mu ^ 2 beta <- alpha * (1 / mu - 1) return(params = list(alpha = alpha, beta = beta))} There's been some confusion around the bounds of $\mu$ and $\sigma^2$ for any given Beta distribution, so let's make that clear here. $\mu=\frac{\alpha}{\alpha+\beta}\in\left(0, 1\right)$ $\sigma^2=\frac{\alpha\beta}{\left(\alpha+\beta\right)^2\left(\alpha+\beta+1\right)}=\frac{\mu\left(1-\mu\right)}{\alpha+\beta+1}<\frac{\mu\left(1-\mu\right)}{1}=\mu\left(1-\mu\right)\in\left(0,0.5^2\right)$ Here's a generic way to solve these types of problems, using Maple instead of R. This works for other distributions as well: with(Statistics):eq1 := mu = Mean(BetaDistribution(alpha, beta)):eq2 := sigma^2 = Variance(BetaDistribution(alpha, beta)):solve([eq1, eq2], [alpha, beta]); which leads to the solution $$ \begin{align*} \alpha &= - \frac{\mu (\sigma^2 + \mu^2 - \mu)}{\sigma^2} \\ \beta &= \frac{(\sigma^2 + \mu^2 - \mu) (\mu - 1)}{\sigma^2}. \end{align*} $$ This is equivalent to Max's solution. In R, the beta distribution with parameters $\textbf{shape1} = a$ and $\textbf{shape2} = b$ has density $f(x) = \frac{\Gamma(a+b)}{\Gamma(a) \Gamma(b)} x^{a-1}(1-x)^{b-1}$, for $a > 0$, $b >0$, and $0 < x < 1$. 
In R, you can compute it by dbeta(x, shape1=a, shape2=b) In that parametrisation, the mean is $E(X) = \frac{a}{a+b}$ and the variance is $V(X) = \frac{ab}{(a + b)^2 (a + b + 1)}$. So, you can now follow Nick Sabbe's answer. Good work! Edit: I find $a = \left( \frac{1 - \mu}{V} - \frac{1}{\mu} \right) \mu^2$, and $b = \left( \frac{1 - \mu}{V} - \frac{1}{\mu} \right) \mu (1 - \mu)$, where $\mu=E(X)$ and $V=V(X)$. On Wikipedia for example, you can find the following formulas for mean and variance of a beta distribution given alpha and beta: $$ \mu=\frac{\alpha}{\alpha+\beta} $$ and $$ \sigma^2=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)} $$ Inverting these (fill in $\beta=\alpha(\frac{1}{\mu}-1)$ in the bottom equation) should give you the result you want (though it may take some work). For a generalized Beta distribution defined on the interval $[a,b]$, you have the relations: $$\mu=\frac{a\beta+b\alpha}{\alpha+\beta},\quad\sigma^{2}=\frac{\alpha\beta\left(b-a\right)^{2}}{\left(\alpha+\beta\right)^{2}\left(1+\alpha+\beta\right)}$$ which can be inverted to give: $$\alpha=\lambda\frac{\mu-a}{b-a},\quad\beta=\lambda\frac{b-\mu}{b-a}$$ where $$\lambda=\frac{\left(\mu-a\right)\left(b-\mu\right)}{\sigma^{2}}-1$$ Solve the $\mu$ equation for either $\alpha$ or $\beta$, solving for $\beta$, you get $$\beta=\frac{\alpha(1-\mu)}{\mu}$$ Then plug this into the second equation, and solve for $\alpha$. So you get $$\sigma^2=\frac{\frac{\alpha^2(1-\mu)}{\mu}}{(\alpha+\frac{\alpha(1-\mu)}{\mu})^2(\alpha+\frac{\alpha(1-\mu)}{\mu}+1)}$$ Which simplifies to $$\sigma^2=\frac{\frac{\alpha^2(1-\mu)}{\mu}}{(\frac{\alpha}{\mu})^2\frac{\alpha+\mu}{\mu}}$$ $$\sigma^2=\frac{(1-\mu)\mu^2}{\alpha+\mu}$$ Then finish solving for $\alpha$. I was looking for python, but stumbled upon this. So this would be useful for others like me.
Here is Python code to estimate beta parameters (according to the equations given above): # estimate parameters of beta dist.def getAlphaBeta(mu, sigma): alpha = mu**2 * ((1 - mu) / sigma**2 - 1 / mu) beta = alpha * (1 / mu - 1) return {"alpha": alpha, "beta": beta}print(getAlphaBeta(0.5, 0.1)) # {alpha: 12, beta: 12} You can verify the parameters $\alpha$ and $\beta$ with the scipy.stats.beta package.
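If scipy is available, a quick round-trip check with the same example values ($\mu=0.5$, $\sigma=0.1$) confirms that the fitted Beta distribution reproduces the requested moments:

```python
from scipy.stats import beta as beta_dist

mu, sigma = 0.5, 0.1
var = sigma**2

# Same formulas as in the thread above
alpha = mu**2 * ((1 - mu) / var - 1 / mu)
b = alpha * (1 / mu - 1)
print(alpha, b)            # both approximately 12

# Round trip: Beta(alpha, b) should have the requested mean and variance
m, v = beta_dist.stats(alpha, b, moments="mv")
print(float(m), float(v))  # approximately 0.5 and 0.01
```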
Given a million data points in say 100d, is there a way to generate an optimal filter bank of say 20 filters from an SVD of the data? Call the 100d space $F$ (as in Frequency), with coordinates $[f_1, f_2 ... f_{100}]$. There are many bases $V_i$ of $F$, that split the 1M $\times$ 100 data array $ A \approx \sum_{i=1}^{100} d_i U_i \otimes V_i $ where the $d_i$ are scalars $d_1 \ge d_2\ ...\ \ge 0$, the $U_i$ are 1M long, and the $V_i$ 100 long. I have two goals: the $d_i$ should be rapidly decreasing, so that 20 terms approximate $A$ reasonably well for a filter bank, I want a local basis, with each $V_i$ zero outside some interval $[f_a, f_b]$ How can I optimize both together? The problem is that SVD has no notion of locality in $F$. And one can optimize a filter bank with a given local basis, such as overlapping triangles in MFCC, but I see no connection to SVD. Edit 12 Jan: SVD and filter banks / local overlapping bases / dictionaries seem to me quite different: $\qquad$ SVD: orthogonal, fast and easy, very dependent on data and noise $\qquad$ local: many possibilities / objectives, robust. Nonetheless it would be nice if one could use SVD/PCA to improve a given filter bank / local basis.
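For the first goal alone (rapidly decreasing $d_i$), plain truncated SVD is the standard tool; the numpy sketch below runs on a small synthetic stand-in for the 1M × 100 array and shows the 20-term approximation, though it does nothing about locality in $F$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic low-rank data plus noise: 1000 points in 100-d
# (a small stand-in for the 1M x 100 array in the question)
n, d, k = 1000, 100, 20
A = rng.standard_normal((n, 5)) @ rng.standard_normal((5, d)) \
    + 0.01 * rng.standard_normal((n, d))

# numpy returns singular values sorted in decreasing order: d_1 >= d_2 >= ...
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep the top k terms: A ~ sum_i d_i U_i (outer) V_i
A_k = U[:, :k] * s[:k] @ Vt[:k]

rel_err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
print(rel_err < 0.05)   # True: 20 terms capture the low-rank structure well
```

The rows of `Vt[:k]` are the $V_i$; the catch, as noted above, is that nothing forces them to be zero outside an interval $[f_a, f_b]$.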
This question already has an answer here: I'm trying to understand what frequency domain is. I found general explanations on the Internet, for example: frequency-domain graph shows how much of the signal lies within each given frequency band over a range of frequencies Fourier transform refers to both the frequency domain representation and the mathematical operation that associates the frequency domain representation to a function of time. Fourier series transform a signal from time domain to frequency domain. But I could not find an example which shows how I can obtain the frequency domain graph using the formula of a signal. The frequency-domain graphs in the examples I found are sometimes discrete and sometimes continuous [plots omitted], which seems confusing. Suppose I have a signal $x:\Bbb R\to \Bbb R$ defined by the formula $x(t) = \cos(6\pi t)e^{-\pi t^2}$. It's clear how to plot its time domain graph, but how can I find its frequency domain function or graph? Should I apply the Fourier transform on it?: $$\widehat{x}(\tau)=\int_{-\infty}^\infty \cos(6\pi t)e^{-\pi t^2}e^{-2\pi t \tau i}dt$$ Does plotting this new function give the frequency domain graph? (but it is complex valued, how can it be plotted?) Or should I find the Fourier series of $x(t)$ and plot the series coefficients discretely? So my only question is: How can I mathematically obtain the frequency domain function (and then plot it to get the frequency domain graph) using the formula of $x(t)$?
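For a concrete, plottable picture, one can approximate the continuous Fourier transform numerically and look at the magnitude spectrum $|\widehat{x}(f)|$, which is real; the sampling rate and window in this sketch are arbitrary choices:

```python
import numpy as np

# Sample x(t) = cos(6*pi*t) * exp(-pi*t^2) over a window where it has decayed
fs = 100.0                      # sampling rate in Hz (an arbitrary choice)
t = np.arange(-8, 8, 1/fs)
x = np.cos(6*np.pi*t) * np.exp(-np.pi*t**2)

# Discrete approximation of the continuous Fourier transform
X = np.fft.fftshift(np.fft.fft(x)) / fs
f = np.fft.fftshift(np.fft.fftfreq(t.size, d=1/fs))

# cos(6*pi*t) oscillates at 3 Hz and modulates the Gaussian, so the
# magnitude spectrum concentrates in two bumps near f = +/- 3 Hz
peak_freq = abs(f[np.argmax(np.abs(X))])
print(round(peak_freq, 2))   # 3.0
```

Plotting `np.abs(X)` against `f` gives the continuous-looking frequency-domain graph; discrete spectra (lines) arise instead for periodic signals, whose Fourier series coefficients live at isolated frequencies.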
I have been working on this problem for almost 3 days, I would appreciate any help or idea: Let (M,h) be a Riemannian manifold. For every $ p \in M $ and every orthonormal basis $ (e_1 , ... , e_n)$ of $T_pM$, there exists an orthonormal frame $(E_1, ... , E_n)$ in a neighborhood of p, with $E_i (p) = e_i$ and $\nabla E_i (p) = 0 $ Hint: Fix an orthonormal frame $(\overline E_i)$ near p with $\overline E_i (p) = e_i$ and define $E_i = \alpha_i^j\overline E_j$ with $(\alpha_i^j(x))_{ij} \in SO(n)$ and $\alpha_i^j(p)= \delta_i^j$. What I have got: The construction of an orthonormal frame $(\overline E_i)$ near p with $\overline E_i (p) = e_i$ follows from the Gram-Schmidt process with no problem. Defining $E_i= \sum_j \alpha_i^j\overline E_j$ the frame is still orthonormal, that follows from a direct calculation and the fact that $(\alpha_i^j(x))_{ij} \in SO(n)$, and $E_i (p) = e_i$ because $\alpha_i^j(p)= \delta_i^j$. Since $h(E_i,E_j)= \delta_i^j$ then: $h(\nabla_X E_i, E_j) + h(E_i,\nabla_X E_j) = 0 $. So all I have to see is that: $h(\nabla_X E_i, E_j) = h(E_i,\nabla_X E_j) $ Writing the above equation with the Koszul formula and doing the calculations, using the antisymmetry of the Lie bracket and the fact that the inner product is commutative: $h(\nabla_X E_i, E_j) - h(E_i,\nabla_X E_j) = -h([E_i,E_j],X) $ So all I have to see is that at p: $[E_i,E_j](p) = 0$ That is, the Lie bracket is 0 at p for the basis constructed in this way! I have been trying to justify this from Lie bracket properties with no luck
Before answering the question more or less directly, I'd like to point out that this is a good question that provides an object lesson and opens a foray into the topics of singular integral equations, analytic continuation and dispersion relations. Here are some references of these more advanced topics: Muskhelishvili, Singular Integral Equations; Courant & Hilbert, Methods of Mathematical Physics, Vol I, Ch 3; Dispersion Theory in High Energy Physics, Queen & Violini; Eden et.al., The Analytic S-matrix. There is also a condensed discussion of `invariant functions' in Schweber, An Intro to Relativistic QFT Ch13d. The quick answer is that, for $m^2 \in\mathbb{R}$, there's no "shortcut." One must choose a path around the singularities in the denominator. The appropriate choice is governed by the boundary conditions of the problem at hand. The $+i\epsilon$ "trick" (it's not a "trick") simply encodes the boundary conditions relevant for causal propagation of particles and antiparticles in field theory. We briefly study the analytic form of $G(x-y;m)$ to demonstrate some of these features. Note, first, that for real values of $p^2$, the singularity in the denominator of the integrand signals the presence of (a) branch point(s). In fact, [Huang, Quantum Field Theory: From Operators to Path Integrals, p29] the Feynman propagator for the scalar field (your equation) may be explicitly evaluated:\begin{align}G(x-y;m) &= \lim_{\epsilon \to 0} \frac{1}{(2 \pi)^4} \int d^4p \, \frac{e^{-ip\cdot(x-y)}}{p^2 - m^2 + i\epsilon} \nonumber \\&= \left \{ \begin{matrix}-\frac{1}{4 \pi} \delta(s) + \frac{m}{8 \pi \sqrt{s}} H_1^{(1)}(m \sqrt{s}) & \textrm{ if }\, s \geq 0 \\ -\frac{i m}{ 4 \pi^2 \sqrt{-s}} K_1(m \sqrt{-s}) & \textrm{if }\, s < 0.\end{matrix} \right.\end{align}where $s=(x-y)^2$. The first-order Hankel function of the first kind $H^{(1)}_1$ has a logarithmic branch point at $x=0$; so does the modified Bessel function of the second kind, $K_1$. 
(Look at the small $x$ behavior of these functions to see this.) A branch point indicates that the Cauchy-Riemann conditions have broken down at $x=0$ (or $z=x+iy=0$). And the fact that these singularities are logarithmic is an indication that we have an endpoint singularity [eg. Eden et. al., Ch 2.1]. (To see this, consider $m=0$, then the integrand, $p^{-2}$, has a zero at the lower limit of integration in $dp^2$.) Coming back to the question of boundary conditions, there is a good discussion in Sakurai, Advanced Quantum Mechanics, Ch4.4 [NB: "East Coast" metric]. You can see that for large values of $s>0$ from the above expression that we have an outgoing wave from the asymptotic form of the Hankel function. Connecting it back to the original references I cited above, the $+i\epsilon$ form is a version of the Plemelj formula [Muskhelishvili]. And the expression for the propagator is a type of Cauchy integral [Musk.; Eden et.al.]. And these notions lead quickly to the topics I mentioned above -- certainly a rich landscape for research.This post imported from StackExchange Physics at 2014-07-13 04:38 (UCT), posted by SE-user MarkWayne
In my talk at 2018 Chinese Mathematical Logic Conference, I asked if \((V,\subset,P)\) is epsilon-complete, namely if the membership relation can be recovered in the reduct. Professor Joseph S. Miller approached me during the dinner and pointed out that it is epsilon-complete. Let me explain how. Theorem Let \((V,\in)\) be a structure of set theory, \((V,\subset,P)\) is the structure of the inclusion relation and the power set operation, which are defined in \((V,\in)\) as usual. Then \(\in\) is definable in \((V,\subset,P)\). Proof. Fix a set \(x\). Define \(y\) to be the \(\subset\)-least such that \[\forall z \big((z\subset x\wedge z\neq x)\rightarrow P(z)\subset y\big).\] Actually, \(y=P(x)-\{x\}\), so \(\{x\}= P(x) - y\). Since set difference can be defined from subset relation and \((V,\subset,\{x\})\) can define \(\in\), we are done. \(\Box\) Here is another argument figured out by Jialiang He and me after we heard Professor Miller's claim. Proof. Since \(\in\) can be defined in \((V,\subset,\bigcup)\) (see the slides). Fix a set \(A\), it suffices to show that we can define \(\bigcup A\) from \(\subset\) and \(P\). Let \(B\) be the \(\subset\)-least set such that there is \(c\), \(B=P(c)\) and \(A\subset B\). Note that \[ \bigcap\big\{P(d)\bigm|A\subset P(d)\big\}= P\big(\bigcap\big\{d\bigm|A\subset P(d)\big\}\big). \] Therefore, \(B\) is well-defined. Next, we show that \[ \bigcap\big\{d\bigm|A\subset P(d)\big\}=\bigcup A. \] Clearly, \(A\subset P(\bigcup A)\). This proves the direction from left to right. For the other direction, if \(x\) is in an element of \(A\), then it is in an element of \(P(d)\) given \(A\subset P(d)\), i.e. it is an element of such \(d\). Therefore \(\bigcup A\) is the unique set whose power set is \(B\). \(\Box\)
Consider the function $e^x$ on the reals. I want to show that $e^x$ is continuous at any point $t \in \mathbb{R}$. Is the following argument valid? (1) Let $\{t_n\}$ be an arbitrary sequence of reals s.t. $t_n \to t$ and $t_n \ne t$. (2) Then $\lim\limits_{t_n \to t}$ $e^{t_n} = \lim\limits_{t_n \to t}\left(1 + t_n + \frac{t_n^2}{2} + \frac{t_n^3}{6} + \ldots\right) = 1 + t + \frac{t^2}{2} + \frac{t^3}{6} + \ldots = e^t$ So that since $\{t_n\}$ is arbitrary it follows $e^x$ is continuous at $t$.
[I'll ask this in the one-dimensional setting, and (if the answer is yes) I'll leave as open what possible extensions to more general settings one might wish to discuss.] Let $K \subset \mathbb{R}$ be a compact nowhere dense set. Suppose that for each $x \in K$ we have a sequence $(U_n^x)_{n \in \mathbb{N}}$ of connected neighbourhoods of $x$ such that the length of $U_n^x$ tends to $0$ as $n \to \infty$. Does there necessarily exist a finite set $S \subset K$ and a list of integers $(n_x)_{x \in S}$ such that $K \subset \bigcup_{x \in S} U_{n_x}^x\,$ and for all distinct $x,y \in S$, $\,U_{n_x}^x \cap U_{n_y}^y = \emptyset$? If not, what about if we add the assumption that $K$ is a Lebesgue-null set? (My vague intuition on the last bit is that requiring $K$ to be a null set won't make a difference to the answer, as there's probably some homeomorphic or "nearly homeomorphic" transformation of $\mathbb{R}$ that will turn a positive-measure nowhere dense set into a null set.) If the answer is yes (with or without the null set requirement), is there a reference for this? Even just an exercise from a textbook to prove this, or prove something that easily implies this, would suffice.
I'm writing a paper and I used them as an example, but then reconsidered . . . maybe I'm not getting it right! thanks! At first, I had written that they certainly could have, given that blackbody radiation had been fairly well analyzed since the early 1900's and Wien's law would have been common knowledge among physicists at the time. But it looks like I may have been wrong. Both the observational paper (Penzias and Wilson) and the theoretical paper (Dicke, Peebles, Roll, Wilkinson) are freely accessible online. (Click on "Full refereed scanned article (GIF)" at the top.) What I didn't realize until I read these papers is that Penzias and Wilson's publication was based upon an observation at a single wavelength, $7.3\text{ cm}$, or equivalently a frequency of $4080\text{ MHz}$. They did not have the ability to measure at a large number of different wavelengths, so they weren't able to determine the full spectrum of their mystery radiation, and certainly they did not have the ability to resolve a peak in that spectrum. So in fact, it seems that they did not use Wien's law. (In fact, if you use Wien's law to calculate the peak wavelength corresponding to their $3.5\text{ K}$ detection, you get only $0.8\text{ mm}$.) Instead, if I'm understanding correctly, they would have used Planck's law, $$I = \frac{2hc^2}{\lambda^5}\frac{1}{\exp(\frac{hc}{\lambda kT}) - 1}$$ In this form, the way it's normally written, Planck's law gives the spectral intensity of radiation emitted at any particular wavelength by a blackbody at a particular temperature. But you can rearrange it to this: $$T = \frac{hc}{\lambda k\ln\Bigl(1 + \frac{2hc^2}{I\lambda^5}\Bigr)}$$ which lets you express a given spectral intensity of radiation at a particular wavelength as the temperature of the blackbody source which would have produced it.
If you keep $\lambda$ fixed and avoid small values of $I$, this is an excellent approximation to a linear relationship, $$T = \frac{\lambda^4}{2ck}I\hspace{1cm}\text{if}\quad I \gtrsim \frac{hc^2}{3\lambda^5}$$ This means that if you're making measurements at one specific wavelength, as Penzias and Wilson were, you can report your measured intensities as temperatures instead, because the temperature is proportional to the intensity. If you look at the paper, you'll see that they report as temperatures the various contributions to the intensity which they identified: $2.3\pm 0.3\text{ K}$ from atmospheric absorption, $0.8\pm 0.4\text{ K}$ from ohmic losses, and $<0.1\text{ K}$ from ground radiation. These add up to a total of $3.2\pm 0.5\text{ K}$ (assuming uncorrelated errors). But the actual measurement yielded a value of $6.7\text{ K}$. The difference, $3.5\text{ K}$, is what they postulated they were detecting from the cosmic microwave background. Now, you could plug that number back into Planck's law to calculate the intensity received from the CMB, but that's not necessary when what you really want to know is the temperature.
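To see the inversion and its linear regime in action, here is a short sketch (my own illustration; the rounded SI constants and the choice of a $3.5\text{ K}$ source are assumptions of the example, not taken from the papers):

```python
import math

# Physical constants (SI); rounded values, an assumption of this sketch.
h = 6.626e-34   # Planck constant [J s]
c = 2.998e8     # speed of light [m/s]
k = 1.381e-23   # Boltzmann constant [J/K]

def planck_intensity(T, lam):
    """Spectral intensity of a blackbody at temperature T [K] and wavelength lam [m]."""
    return (2 * h * c**2 / lam**5) / (math.exp(h * c / (lam * k * T)) - 1)

def temperature_exact(I, lam):
    """Invert Planck's law: the blackbody temperature that produces intensity I."""
    return h * c / (lam * k * math.log(1 + 2 * h * c**2 / (I * lam**5)))

def temperature_linear(I, lam):
    """Long-wavelength (Rayleigh-Jeans) limit: T is proportional to I."""
    return I * lam**4 / (2 * c * k)

lam = 0.073                      # Penzias & Wilson's wavelength, 7.3 cm
I = planck_intensity(3.5, lam)   # intensity of a 3.5 K blackbody at that wavelength

print(temperature_exact(I, lam))   # recovers 3.5 K exactly
print(temperature_linear(I, lam))  # close to 3.5 K: deep in the linear regime
```

At $7.3\text{ cm}$ the linear formula is off by only a few percent, which is why reporting intensities as temperatures was harmless.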
I am currently simulating particle trajectories in Kerr spacetime numerically with $M=1$ and $a=1$. In the picture above, I am calculating the geodesic in Boyer-Lindquist coordinates. I was messing around with the simulation a little bit and I wanted to transform to a local Lorentz frame by use of a vierbein (tetrad) $e^m_{\ \ \ \nu}$. The problem I encounter is that in the local Lorentz frame I get velocities higher than the speed of light. So far, I have checked that: $e^m_{\ \ \ \mu}e^n_{\ \ \ \nu}g^{\mu \nu}=\eta^{mn}$ $u^\mu u_\mu = -1$ which seems to imply both that 1) the $e^m_{\ \ \ \mu}$ are calculated correctly and 2) the particle is not moving faster than the speed of light. My transformation into the local frame is done via: $e^m_{\ \ \ \mu}u^\mu=u^m$ And I get the result $u^3>1$. My question would be whether I am doing the local frame transformation incorrectly / am missing something. The other possibility is numerical error.
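One thing worth noting: a spatial component of the four-velocity larger than $1$ does not by itself mean superluminal motion, since the physical velocity measured in the local frame is $u^i/u^0$, not $u^i$. A minimal flat-space sketch (my own illustration, not the asker's code; $c=1$ and $v=0.9$ are arbitrary choices):

```python
import math

# A subluminal particle in special relativity: a simple analogue of the
# local Lorentz frame. Units with c = 1; v = 0.9 is an arbitrary choice.
v = 0.9
gamma = 1 / math.sqrt(1 - v**2)

# Four-velocity components in the local (Minkowski) frame.
u = [gamma, gamma * v, 0.0, 0.0]

# Normalization with eta = diag(-1, 1, 1, 1): always -1 for a massive particle.
eta = [-1.0, 1.0, 1.0, 1.0]
norm = sum(e * ui * ui for e, ui in zip(eta, u))

print(u[1])          # spatial four-velocity component: greater than 1 here
print(norm)          # -1: the normalization check passes regardless
print(u[1] / u[0])   # the physical velocity in the frame: 0.9 < 1
```

So $u^3>1$ together with $u^m u_m=-1$ can be perfectly consistent; the quantity to compare against the speed of light is $u^3/u^0$.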
No, sadly. This is an unfortunate consequence of our notation: we write $r$ when we mean the function $(r\cdot)$ which multiplies a function by $r$, and we write $A B$ when we mean the composition of functions $A\circ B$. So when we write something like $$r^{-2}\frac{\partial}{\partial r} \left(r^2 \frac{\partial}{\partial r}\right)$$what we really mean is the function which would more unambiguously be written as $$\psi \mapsto \left\{(r,\theta,\phi)\mapsto \frac{2}{r} \cdot \psi_{(1)}(r, \theta,\phi) + \psi_{(1,1)}(r,\theta,\phi) \right\},$$where $f_{(n)}$ is the partial derivative of $f$ with respect to its $n^\text{th}$ argument holding all its other arguments constant, and $f_{(m,n)}$ is shorthand for $[f_{(m)}]_{(n)},$ the partial derivative with respect to the $n^\text{th}$ argument of the partial derivative with respect to the $m^\text{th}$ argument. So it is a function which takes a scalar field and returns another scalar field. We typically extend our understanding of addition $(+)$ and subtraction $(-)$ to such functions-from-fields-to-fields by saying that for example $$f + g = \psi \mapsto \big\{\mathbf r \mapsto f(\psi)(\mathbf r) + g(\psi)(\mathbf r)\big\},$$to add two field-transformers you construct the field-transformer which provides its argument field to the two constituent transformers and then adds the two fields together pointwise. Indeed this thing in the curly braces could be correctly denoted $f(\psi) + g(\psi)$ when we extend $+$ to fields: we add fields together pointwise, giving the same input to both fields and adding the numbers that they produce; then we add field-transformers together fieldwise, giving the same field input to both transformers and adding the fields that they produce. It's kind of the same idea.
With that idea we could correctly say that due to the product rule of normal calculus, $$\partial_r \circ (r \cdot) = (1 \cdot) + (r\cdot) \circ \partial_r,$$in other words if you multiply a field by $r$ and then take a partial derivative of the result with respect to $r$, that is related to taking the partial derivative of a field and then multiplying the result by $r$, by adding the original field at the end. But in QM we let ourselves get sloppy with notation and write this as $$\partial_x ~x = 1+ x~\partial_x.$$In some sense this is less sloppy than it sounds because it is correct if we think about $x$ and $\partial_x$ as somehow being like matrices and this as being a matrix equation.
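The operator identity $\partial_r \circ (r\cdot) = (1\cdot) + (r\cdot)\circ\partial_r$ is easy to check numerically for a sample field; a small sketch (the field $\psi(r)=\sin r$ and the sample point are arbitrary choices of mine):

```python
import math

# Numerically check the operator identity  d/dr (r * psi) = psi + r * d/dr psi
# for a sample field psi(r) = sin(r), using a central finite difference.
def psi(r):
    return math.sin(r)

def d(f, r, h=1e-6):
    """Central finite-difference derivative of f at r."""
    return (f(r + h) - f(r - h)) / (2 * h)

r0 = 1.3  # arbitrary sample point

lhs = d(lambda r: r * psi(r), r0)   # multiply by r first, then differentiate
rhs = psi(r0) + r0 * d(psi, r0)     # differentiate first, multiply by r, add psi

print(abs(lhs - rhs))  # agrees up to finite-difference error
```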
The length of a vector is defined as: $$ ||\mathbf{v}||^2=\mathbf{v}\cdot \mathbf{v} $$ In the case that $\mathbf{v}:=a\hat{x}+b\hat{y}$ is expressed in an orthogonal basis using $\hat{x}$ and $\hat{y}$ as the generators of a Clifford algebra $Cl_{2,0}(\mathbb{R})$, then $||\mathbf{v}||^2=a^2+b^2$ For a poly-vector (say $\mathbf{p}:=c+a\hat{x}$), one can also define an inner product. Then if one takes the inner product, one gets $$ ||\mathbf{p}||^2=\mathbf{p}\cdot \mathbf{p}=c^2+a^2 $$ However, I am skeptical that the inner product defines the length of the poly-vector, primarily because the scalar $1$ and the 1-vector basis $\hat{x}$ are not orthogonal. Intuitively I would think the square root of the geometric product of $\mathbf{p}$ with itself would be the length: $$ ||\mathbf{p}||=\sqrt{\mathbf{p}\mathbf{p}} $$ In the case of a pure vector, the result is the same. For example $$ \sqrt{\mathbf{v}\mathbf{v}}=\sqrt{(a\hat{x}+b\hat{y})(a\hat{x}+b\hat{y})}\\ =\sqrt{aa\hat{x}\hat{x}+ab(\hat{x}\hat{y}+\hat{y}\hat{x})+ bb\hat{y}\hat{y}}\\ =\sqrt{a^2+b^2} $$ since the cross terms cancel ($\hat{x}\hat{y}=-\hat{y}\hat{x}$). But in the case where the poly-vector is not a pure k-vector, the definition differs: $$ \sqrt{\mathbf{p}\mathbf{p}}=\sqrt{(c+a \hat{x})(c+a \hat{x})}\\ =\sqrt{c^2+2ac\hat{x}+ aa\hat{x}\hat{x}}\\ =\sqrt{c^2+a^2+2ac\hat{x}} $$ Using this definition, we conclude that in the case of a poly-vector, a scalar length cannot be defined. Thus, defining the length via the inner product seems to erase some important geometric information about the length of the poly-vector. Is this correct? Is there a standard definition for the length of a poly-vector?
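The two computations above can be checked mechanically with a tiny multiplication table for the even-dimensional case; a sketch of my own (representing a multivector of $Cl_{2,0}$ as a 4-tuple over the basis $1, \hat{x}, \hat{y}, \hat{x}\hat{y}$, with generators squaring to $+1$):

```python
# Minimal geometric product for Cl(2,0): a multivector is a 4-tuple
# (s, x, y, b) = s*1 + x*e1 + y*e2 + b*e12, with e1^2 = e2^2 = 1, e12^2 = -1.
def gp(p, q):
    s1, x1, y1, b1 = p
    s2, x2, y2, b2 = q
    return (
        s1*s2 + x1*x2 + y1*y2 - b1*b2,   # scalar part
        s1*x2 + x1*s2 - y1*b2 + b1*y2,   # e1 part
        s1*y2 + y1*s2 + x1*b2 - b1*x2,   # e2 part
        s1*b2 + b1*s2 + x1*y2 - y1*x2,   # e12 (bivector) part
    )

a, b, c = 2.0, 3.0, 5.0  # arbitrary sample coefficients

# Pure vector v = a*e1 + b*e2: v v is the scalar a^2 + b^2 (cross terms cancel).
v = (0.0, a, b, 0.0)
print(gp(v, v))   # (a*a + b*b, 0, 0, 0)

# Poly-vector p = c + a*e1: p p keeps a non-scalar e1 part 2ac.
p = (c, a, 0.0, 0.0)
print(gp(p, p))   # (c*c + a*a, 2*a*c, 0, 0)
```

This reproduces exactly the asymmetry in the question: $\mathbf{v}\mathbf{v}$ is a scalar, while $\mathbf{p}\mathbf{p}$ is not.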
When $m^2 + n^2$ is large, the contribution from $1 - \frac{\mathrm{exp}(-(m^2+n^2))}{m(m^2+n^2)}$ is roughly $1$ because the fraction goes to zero as $m^2 + n^2 \to \infty$. If your series converged, then since clearly$$\sum_{m,n \ge 1} \frac{\mathrm{exp}(-(m^2+n^2))}{m(m^2+n^2)}$$converges, it would imply that $$\sum_{m \ge 1} \sum_{n \ge 1} \sin(a_1m)\sin(a_2m)\sin(a_3 m) \sin(b_1n) \sin(b_2n) = \left( \sum_{m \ge 1} \prod_{i=1}^3 \sin(a_im) \right)\left( \sum_{n \ge 1} \prod_{i=1}^2 \sin(b_in) \right)$$would converge. I think it becomes quite believable now that this series doesn't converge in general. I leave it up to you to find out why those two remaining factors don't converge. EDIT: With this new, completely different summand, bound it this way: $$\sum_{m,n \ge 1} \left| \dots \right| \le \sum_{m,n \ge 1} \frac 1{m(m^2 + n^2)}.$$By doing a two-dimensional version of the integral test, show that the integral$$\iint_{x,y \ge 1} \frac 1{x(x^2 + y^2)} dx dy$$converges (hint: polar change of variables). Hope that helps,
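As a purely numerical sanity check of that comparison series (my own sketch; the cut-offs 100/200/400 are arbitrary), the partial sums settle down, consistent with the convergence that the polar-coordinate integral test proves rigorously:

```python
# Partial sums of sum_{m,n >= 1} 1/(m (m^2 + n^2)), the comparison series
# used in the bound above, truncated to an N-by-N block.
def partial_sum(N):
    return sum(1.0 / (m * (m * m + n * n))
               for m in range(1, N + 1) for n in range(1, N + 1))

s100, s200, s400 = partial_sum(100), partial_sum(200), partial_sum(400)
print(s100, s200, s400)  # increasing, with shrinking increments
```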
Solvers for ordinary differential equations (ODEs) belong to the best-studied algorithms of numerical mathematics. An ODE is an implicit statement about the relationship of a curve $x:\mathbb{R}_{\geq 0}\to\mathbb{R}^N$ to its derivative, in the form $x'(t) = f(x(t),t)$, where $x'$ is the derivative of the curve, and $f$ is some function. To identify a unique solution of a particular ODE, it is typically also necessary to provide additional statements about the curve, such as its initial value $x(t_0)=x_0$. An ODE solver is a mathematical rule that maps a function and initial value $(f,x_0)$ to an estimate $x(t)$ for the solution curve. Good solvers have certain analytical guarantees about this estimate, such as the fact that its deviation from the true solution is of a high polynomial order in the step size used by the algorithm to discretize the ODE. One of the main theoretical contributions of the group is the development of probabilistic versions of these solvers. In several works, we established a class of solvers for initial value problems that generalize classic solvers by taking as inputs Gaussian distributions $\mathcal{N}(x(t_0);x_0,\Psi)$, $\mathcal{GP}(f;\hat{f},\Sigma)$ over the initial value and vector field, and return a Gaussian process posterior $\mathcal{GP}(x;m,k)$ over the solution. We were able to show that these methods have the same (linear) computational complexity in the solver's step size $h$ as classic methods [ ] (they are Bayesian filters), can inherit the famous local and global polynomial convergence rates of classic solvers [ ] (i.e. $\|m-x\|\leq Ch^q$ for $q\geq 1$), and produce posterior variance estimates that are calibrated worst-case error estimates [ ] (i.e. $\|m-x\|^2\leq Ck$). In short, they produce meaningful uncertainty. Moreover, these methods are in fact a generalization of certain famous classic ODE solvers (namely, they reduce to explicit single-step Runge-Kutta methods and multi-step Nordsieck methods in the limits of an uninformative prior and steady-state operation, respectively; in practical operation, they offer a third, novel type of solver) [ ], and they can be generalized to produce non-Gaussian, nonparametric output while retaining many of the above properties [ ]. Together, these results provide a rich and reliable new theory for probabilistic simulation that current ongoing research projects are seeking to leverage to speed up structured simulation problems inside of machine learning algorithms.
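The polynomial convergence rates referred to above can be illustrated with the simplest classic solver; a minimal sketch (illustrative only, not one of the probabilistic methods described here), using forward Euler on $x'(t)=-x(t)$:

```python
import math

def euler_solve(f, x0, t0, t1, h):
    """Forward Euler, the simplest classic ODE solver; global error is O(h)."""
    n = round((t1 - t0) / h)   # number of equal steps
    x, t = x0, t0
    for _ in range(n):
        x += h * f(x, t)
        t += h
    return x

f = lambda x, t: -x            # x'(t) = -x(t), exact solution x0 * exp(-t)
exact = math.exp(-1.0)         # x(1) for x(0) = 1

err_h = abs(euler_solve(f, 1.0, 0.0, 1.0, 0.01) - exact)
err_h2 = abs(euler_solve(f, 1.0, 0.0, 1.0, 0.005) - exact)
print(err_h / err_h2)          # close to 2: halving h roughly halves the error
```

This is exactly the $\|m-x\|\leq Ch^q$ behaviour with $q=1$; higher-order methods show the same pattern with larger $q$.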
Lecture: HGX205, M 18:30-21 Section: HGW2403, F 18:30-20 Exercise 01 Prove that \(\neg\Box(\Diamond\varphi\wedge\Diamond\neg\varphi)\) is equivalent to \(\Box\Diamond\varphi\rightarrow\Diamond\Box\varphi\). What have you assumed? Define strategy and winning strategy for modal evaluation games. Prove the Key Lemma: \(M,s\vDash\varphi\) iff V has a winning strategy in \(G(M,s,\varphi)\). Prove that modal evaluation games are determined, i.e. either V or F has a winning strategy. And all exercises for Chapter 2 (see page 23, Open Minds). Exercise 02 Let \(T\) with root \(r\) be the tree unraveling of some possible world model, and \(T’\) be the tree unraveling of \(T,r\). Show that \(T\) and \(T’\) are isomorphic. Prove that the union of a set of bisimulations between \(M\) and \(N\) is a bisimulation between the two models. We define the bisimulation contraction of a possible world model \(M\) to be the “quotient model”. Prove that the relation linking every world \(x\) in \(M\) to its equivalence class \([x]\) is a bisimulation between the original model and its bisimulation contraction. And exercises for Chapter 3 (see page 35, Open Minds): 1 (a) (b), 2. Exercise 03 Prove that modal formulas (under possible world semantics) have the ‘Finite Depth Property’. And exercises for Chapter 4 (see page 47, Open Minds): 1 – 3. Exercise 04 Prove the principle of Replacement by Provable Equivalents: if \(\vdash\alpha\leftrightarrow\beta\), then \(\vdash\varphi[\alpha]\leftrightarrow\varphi[\beta]\). Prove the following statements. “For each formula \(\varphi\), \(\vdash\varphi\) is equivalent to \(\vDash\varphi\)” is equivalent to “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable”.
“For every set of formulas \(\Sigma\) and formula \(\varphi\), \(\Sigma\vdash\varphi\) is equivalent to \(\Sigma\vDash\varphi\)” is equivalent to “for every set of formulas \(\Sigma\), \(\Sigma\) being consistent is equivalent to \(\Sigma\) being satisfiable”. Prove that “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable” using the finite version of the Henkin model. And exercises for Chapter 5 (see page 60, Open Minds): 1 – 5. Exercise 05 Exercises for Chapter 6 (see page 69, Open Minds): 1 – 3. Exercise 06 Show that “being equivalent to a modal formula” is not decidable for arbitrary first-order formulas. Exercises for Chapter 7 (see page 88, Open Minds): 1 – 6. For exercise 2 (a) – (d), replace the existential modality E with the difference modality D. In clause (b) of exercise 4, “completeness” should be “correctness”. Exercise 07 Show that there are infinitely many non-equivalent modalities under T. Show that GL + Id is inconsistent and that Un proves GL. Give a complete proof of the fact: in S5, every formula is equivalent to one of modal depth \(\leq 1\). Exercises for Chapter 8 (see page 99, Open Minds): 1, 2, 4 – 6. Exercise 08 Let \(\Sigma\) be a set of modal formulas closed under substitution. Show that \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W,R,V’),w\vDash\Sigma\] holds for any valuations \(V\) and \(V’\). Define a \(p\)-morphism between \((W,R),w\) and \((W’,R’),w’\) as a “functional bisimulation”, namely a bisimulation regardless of valuation. Show that if there is a \(p\)-morphism between \((W,R),w\) and \((W’,R’),w’\), then for any valuations \(V\) and \(V’\), we have \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W’,R’,V’),w’\vDash\Sigma.\] Exercises for Chapter 9 (see page 99, Open Minds). Last exercise Exercises for Chapter 10 and 11 (see pages 117 and 125, Open Minds).
OpenCV 3.0.0 Open Source Computer Vision
This class is used to perform the non-linear non-constrained minimization of a function. More...
#include "optim.hpp"
virtual void getInitStep (OutputArray step) const =0
Returns the initial step that will be used in the downhill simplex algorithm. More...
virtual void setInitStep (InputArray step)=0
Sets the initial step that will be used in the downhill simplex algorithm. More...
Public Member Functions inherited from cv::MinProblemSolver
virtual Ptr< Function > getFunction () const =0
Getter for the optimized function. More...
virtual TermCriteria getTermCriteria () const =0
Getter for the previously set terminal criteria for this algorithm. More...
virtual double minimize (InputOutputArray x)=0
Actually runs the algorithm and performs the minimization. More...
virtual void setFunction (const Ptr< Function > &f)=0
Setter for the optimized function. More...
virtual void setTermCriteria (const TermCriteria &termcrit)=0
Set terminal criteria for the solver. More...
Public Member Functions inherited from cv::Algorithm
Algorithm ()
virtual ~Algorithm ()
virtual void clear ()
Clears the algorithm state. More...
virtual bool empty () const
Returns true if the Algorithm is empty (e.g. in the very beginning or after an unsuccessful read). More...
virtual String getDefaultName () const
virtual void read (const FileNode &fn)
Reads algorithm parameters from a file storage. More...
virtual void save (const String &filename) const
virtual void write (FileStorage &fs) const
Stores algorithm parameters in a file storage. More...
static Ptr< DownhillSolver > create (const Ptr< MinProblemSolver::Function > &f=Ptr< MinProblemSolver::Function >(), InputArray initStep=Mat_< double >(1, 1, 0.0), TermCriteria termcrit=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5000, 0.000001))
This function returns the reference to the ready-to-use DownhillSolver object. More...
Static Public Member Functions inherited from cv::Algorithm
template<typename _Tp > static Ptr< _Tp > load (const String &filename, const String &objname=String())
Loads algorithm from the file. More...
template<typename _Tp > static Ptr< _Tp > loadFromString (const String &strModel, const String &objname=String())
Loads algorithm from a String. More...
template<typename _Tp > static Ptr< _Tp > read (const FileNode &fn)
Reads algorithm from the file node. More...
This class is used to perform the non-linear non-constrained minimization of a function defined on an n-dimensional Euclidean space, using the Nelder-Mead method, also known as the downhill simplex method. The basic idea of the method can be obtained from http://en.wikipedia.org/wiki/Nelder-Mead_method. It should be noted that this method, although deterministic, is rather a heuristic and therefore may converge to a local minimum, not necessarily a global one. It is an iterative optimization technique, which at each step uses information about the values of the function evaluated only at n+1 points, arranged as a simplex in n-dimensional space (hence the second name of the method). At each step a new point is chosen at which to evaluate the function; the obtained value is compared with previous ones and, based on this information, the simplex changes its shape, slowly moving towards the local minimum. Thus this method uses only function values to make decisions, in contrast to, say, the Nonlinear Conjugate Gradient method (which is also implemented in optim). The algorithm stops when the number of function evaluations done exceeds termcrit.maxCount, when the function values at the vertices of the simplex are within a termcrit.epsilon range, or when the simplex becomes so small that it can be enclosed in a box with sides of length termcrit.epsilon, whichever comes first, for some user-defined positive integer termcrit.maxCount and positive real termcrit.epsilon.
static
This function returns the reference to the ready-to-use DownhillSolver object.
All the parameters are optional, so this procedure can be called even without parameters at all. In this case, the default values will be used. As default values for the terminal criteria are the only sensible ones, MinProblemSolver::setFunction() and DownhillSolver::setInitStep() should be called upon the obtained object, if the respective parameters were not given to create(). Otherwise, the two ways (give parameters to create() or omit them and call MinProblemSolver::setFunction() and DownhillSolver::setInitStep()) are absolutely equivalent (and will report the same errors in the same way, should invalid input be detected).
f Pointer to the function that will be minimized, similarly to the one you submit via MinProblemSolver::setFunction.
initStep Initial step, that will be used to construct the initial simplex, similarly to the one you submit via DownhillSolver::setInitStep.
termcrit Terminal criteria for the algorithm, similarly to the one you submit via MinProblemSolver::setTermCriteria.
pure virtual
Returns the initial step that will be used in the downhill simplex algorithm.
step Initial step that will be used in the algorithm. Note that although the corresponding setter accepts column-vectors as well as row-vectors, this method will return a row-vector.
pure virtual
Sets the initial step that will be used in the downhill simplex algorithm. Step, together with the initial point (given in DownhillSolver::minimize), are two n-dimensional vectors that are used to determine the shape of the initial simplex. Roughly said, the initial point determines the position of the simplex (it will become the simplex's centroid), while the step determines the spread (size in each dimension) of the simplex.
To be more precise, if \(s,x_0\in\mathbb{R}^n\) are the initial step and initial point respectively, the vertices of the simplex will be: \(v_0:=x_0-\frac{1}{2} s\) and \(v_i:=x_0+s_i\) for \(i=1,2,\dots,n\), where \(s_i\) denotes the projection of the initial step onto the \(i\)-th coordinate (the result of the projection is treated as the vector given by \(s_i:=e_i\left<e_i,s\right>\), where the \(e_i\) form the canonical basis).
step Initial step that will be used in the algorithm. Roughly said, it determines the spread (size in each dimension) of the initial simplex.
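The vertex construction just described can be sketched in a few lines; the helper below is a hypothetical illustration (my own code, not part of the OpenCV API):

```python
def initial_simplex(x0, s):
    """Build the n+1 vertices of the initial downhill simplex, following
    v_0 = x0 - s/2 and v_i = x0 + s_i (s_i: projection of s onto axis i)."""
    n = len(x0)
    v0 = [x0[j] - 0.5 * s[j] for j in range(n)]
    vertices = [v0]
    for i in range(n):
        vi = list(x0)
        vi[i] += s[i]        # only the i-th coordinate is shifted by the step
        vertices.append(vi)
    return vertices

x0 = [1.0, 2.0]   # initial point: determines the position of the simplex
s = [0.5, 0.5]    # initial step: determines the spread in each dimension
print(initial_simplex(x0, s))
```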
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
Eigenvalues are probably one of the most important quantities which can be extracted from matrices. Together with their corresponding eigenvectors, they form the fundamental basis for many applications. Calculating eigenvalues from a given matrix is straightforward and implementations exist in many libraries. But sometimes the concrete matrix is not known in advance, e.g. when the matrix values are based on some bounded input data. In this case, it may be good to give at least some estimation of the range in which the eigenvalues can lie. As the name of this article suggests, there is a theorem intended for this use case, which will be discussed here. For a square \( n \times n\) matrix \(A\), the Gershgorin circle theorem returns a range in which the eigenvalues must lie by simply using the information from the rows of \(A\). Before looking into the theorem though, let me remind the reader that eigenvalues may be complex valued (even for a matrix which contains only real numbers). Therefore, the estimation lives in the complex plane, meaning we can visualize the estimation in a 2D coordinate system with the real part as the \(x\)- and the imaginary part as the \(y\)-axis. Note also that \(A\) has a maximum of \(n\) distinct eigenvalues. For the theorem, the concept of a Gershgorin disc is relevant. Such a disc exists for each row of \(A\), is centred around the diagonal element \(a_{ii}\) (which may be complex as well), and the absolute sum of the other elements in the row constrains the radius. The disc is therefore defined as\begin{equation} \label{eq:GershgorinCircle_Disc} C_i = \left\{ z \in \mathbb{C} : \left| z - a_{ii} \right| \leq r_i \right\} \end{equation} with the corresponding row sum\begin{equation} \label{eq:GershgorinCircle_Disc_RowSum} r_i = \sum_{\substack{j=1 \\ j\not=i}}^n \left|a_{ij}\right| \end{equation} (absolute sum of all row values except the diagonal element itself). As an example, let's take the following definition for\begin{equation*} A = \begin{pmatrix} 4 & 3 & 15 \\ 1 & 1+i & 5 \\ -8 & -2 & 22 \end{pmatrix}.
\end{equation*} There are three Gershgorin discs for this matrix: \(C_1\) with centre point \(a_{11} = 4\) and radius \(r_1 = \left|3\right| + \left|15\right| = 18\), \(C_2\) with centre point \(a_{22} = 1+i\) and radius \(r_2 = \left|1\right| + \left|5\right| = 6\), and \(C_3\) with centre point \(a_{33} = 22\) and radius \(r_3 = \left|-8\right| + \left|-2\right| = 10\). We now have all the ingredients for the statement of the theorem: every eigenvalue \(\lambda \in \mathbb{C}\) of a square matrix \(A \in \mathbb{R}^{n \times n}\) lies in at least one of the Gershgorin discs \(C_i\) (\eqref{eq:GershgorinCircle_Disc}). The possible range of the eigenvalues is defined by the outer borders of the union of all discs\begin{equation*} C = \bigcup_{i=1}^{n} C_i. \end{equation*} The union, in the case of the example, is \(C = C_1 \cup C_2 \cup C_3\) and based on the previous information about the discs we can now visualize the situation in the complex plane. In the following figure, the discs are shown together with their disc centres and the actual eigenvalues (which are all complex in this case)\begin{equation*} \lambda_1 = 13.4811 - 7.48329 i, \quad \lambda_2 = 13.3749 + 7.60805 i \quad \text{and} \quad \lambda_3 = 0.14402 + 0.875241 i. \end{equation*} Indeed, all eigenvalues lie in the blue area defined by the discs. But you can also see from this example that not all discs have to contain an eigenvalue (the theorem does not state that each disc holds one eigenvalue). E.g. \(C_3\) on the right side does not contain any eigenvalue. This is why the theorem makes a statement only about the complete union and not about each disc independently. Additionally, you can also see that one disc can be completely contained inside another disc, as is the case with \(C_2\), which lies inside \(C_1\).
In this case, \(C_2\) does not give any useful information at all, since it does not expand the union \(C\) (if \(C_2\) were missing, nothing would change regarding the complete union of all discs, i.e. \(C=C_1 \cup C_2 \cup C_3 = C_1 \cup C_3\)). If we want to estimate the range in which the eigenvalues of \(A\) will lie, we can use the extreme values of the union, e.g.\begin{equation*} \left[4-18; 22+10\right]_{\operatorname{Re}} = \left[-14; 32\right]_{\operatorname{Re}} \quad \text{and} \quad \left[0 - 18; 0 + 18\right]_{\operatorname{Im}} = \left[-18; 18 \right]_{\operatorname{Im}} \end{equation*} for the real and imaginary range respectively. This defines nothing else than a rectangle containing all discs. Of course, the rectangle is an even more inaccurate estimation than the discs already are, but the ranges are easier to handle (e.g. to decide if a given point lies inside the valid range or not). Furthermore, if we have more information about the matrix, e.g. that it is symmetric and real-valued and therefore has only real eigenvalues, we can discard the imaginary range completely. In summary, with the help of the Gershgorin circle theorem, it is very easy to give an estimation of the eigenvalues of some matrix. We only need to look at the diagonal elements and the corresponding sum of the rest of the row and get a first estimate of the possible range. In the next part, I want to discuss why this estimation is indeed correct. Let's start again with a 3-by-3 matrix, called \(B\), but now with arbitrary coefficients\begin{equation*} B = \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix}.
\end{equation*} Any eigenvalue \(\lambda\) with corresponding eigenvector \(\fvec{u} = (u_1,u_2,u_3)^T\) of this matrix is defined as\begin{align*} B\fvec{u} &= \lambda \fvec{u} \\ \begin{pmatrix} b_{11} & b_{12} & b_{13} \\ b_{21} & b_{22} & b_{23} \\ b_{31} & b_{32} & b_{33} \end{pmatrix} \begin{pmatrix} u_{1} \\ u_{2} \\ u_{3} \end{pmatrix} &= \lambda \begin{pmatrix} u_{1} \\ u_{2} \\ u_{3} \end{pmatrix} \end{align*} Next, let's see what the equation for each component of \(\fvec{u}\) looks like. I select \(u_1\) and also assume that this is the largest absolute 1 component of \(\fvec{u}\), i.e. \(\max_i{\left|u_i\right|} = \left|u_1\right|\). This is a valid assumption since one component must be the maximum and there is no restriction on the component number to choose for the next discussion. For \(u_1\) this results in the following equation, which will directly be transformed a bit\begin{align*} b_{11}u_1 + b_{12}u_2 + b_{13}u_3 &= \lambda u_1 \\ \left| \lambda - b_{11} \right| \left| u_1 \right| &= \left| b_{12}u_2 + b_{13}u_3 \right| \end{align*} All \(u_1\) parts are placed on one side together with the diagonal element, and I am only interested in the absolute value. For the right side, there is an estimation possible\begin{equation*} \left| b_{12}u_2 + b_{13}u_3 \right| \leq \left| b_{12}u_2 \right| + \left| b_{13}u_3 \right| \leq \left| b_{12}u_1 \right| + \left| b_{13}u_1 \right| = \left| b_{12} \right| \left| u_1 \right| + \left| b_{13} \right| \left| u_1 \right| \end{equation*} First come two approximations: one with the help of the triangle inequality for the \(L_1\) norm 2 and one with the assumption that \(u_1\) is the largest component. Last but not least, the product is split up. In short, this results in\begin{equation*} \left| \lambda - b_{11} \right| \leq \left| b_{12} \right| + \left| b_{13} \right| = r_1 \end{equation*} where \(\left| u_1 \right|\) is cancelled completely. This states that the eigenvalue \(\lambda\) lies within the radius \(r_1\) (cf. \eqref{eq:GershgorinCircle_Disc_RowSum}) around \(b_{11}\) (the diagonal element!). For complex values, this defines the previously discussed discs. Two notes on this insight: The result is only valid for the maximum component of the eigenvector.
Note also that we usually don't know which component of the eigenvector is the maximum (if we knew, we probably wouldn't need to estimate the eigenvalues in the first place because we would already have them). In the explanation above only one eigenvector was considered. But usually, there are more (e.g. up to three in the case of matrix \(B\)). The result is therefore true for each maximum component of each eigenvector. This also implies that not every eigenvector gives new information. It may be possible that for multiple eigenvectors the first component is the maximum. In this case, one eigenvector would have been sufficient. As an example, let's look at the eigenvectors of \(A\). Their absolute values are (maximum component highlighted)\begin{equation*} \left| \fvec{u}_1 \right| = \begin{pmatrix} {\color{Aquamarine}1.31711} \\ 0.40112 \\ 1 \end{pmatrix}, \quad \left| \fvec{u}_2 \right| = \begin{pmatrix} {\color{Aquamarine}1.33013} \\ 0.431734 \\ 1 \end{pmatrix} \quad \text{and} \quad \left| \fvec{u}_3 \right| = \begin{pmatrix} 5.83598 \\ {\color{Aquamarine}12.4986} \\ 1 \end{pmatrix}. \end{equation*} As you can see, the third component is never the maximum. But this is consistent with the example from above: the third disc \(C_3\) did not contain any eigenvalue. What the theorem now does is some kind of worst-case estimate. We know that if one component of some eigenvector is the maximum, the row corresponding to this component defines a range in which the eigenvalue must lie. But since we don't know which component will be the maximum, the best thing we can do is to assume that every component is the maximum in some eigenvector. In this case, we need to consider all diagonal elements and their corresponding absolute sums of the rest of the rows. This is exactly what is done in the example from above. There is another nice feature which can be derived from the theorem when we have disjoint discs. This will be discussed in the next section.
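The disc computation for the example matrix \(A\) is easy to verify in a few lines of plain Python (the eigenvalues below are the numeric values quoted above, taken as given):

```python
# Gershgorin discs for the example matrix A: centre = diagonal element,
# radius = absolute sum of the remaining entries of the row.
A = [[4, 3, 15],
     [1, 1 + 1j, 5],
     [-8, -2, 22]]

centres = [A[i][i] for i in range(3)]
radii = [sum(abs(A[i][j]) for j in range(3) if j != i) for i in range(3)]
print(centres)  # [4, (1+1j), 22]
print(radii)    # [18, 6, 10]

# The eigenvalues quoted in the text; each must lie in at least one disc.
eigenvalues = [13.4811 - 7.48329j, 13.3749 + 7.60805j, 0.14402 + 0.875241j]
for lam in eigenvalues:
    assert any(abs(lam - c) <= r for c, r in zip(centres, radii))
print("all eigenvalues lie inside the union of the discs")
```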
Additional statements can be extracted from the theorem when we deal with disjoint disc areas 3. Consider another example with the following matrix. Using Gershgorin discs, this results in a situation like the one shown in the following figure. This time we have one disc (centred at \(d_{33}=9+10i\)) which does not share a common area with the other discs. In other words: we have two disjoint areas. The question is: does this give us additional information? Indeed, it is possible to state that there is exactly one eigenvalue in the third disc. Let \(A \in \mathbb{R}^{n \times n}\) be a square matrix with \(n\) Gershgorin discs (\eqref{eq:GershgorinCircle_Disc}). Then each joined area defined by the discs contains as many eigenvalues as discs contributed to the area. If the set \(\tilde{C}\) contains \(k\) discs which are disjoint from the other \(n-k\) discs, then \(k\) eigenvalues lie in the range defined by the union\begin{equation*} \bigcup_{C \in \tilde{C}} C \end{equation*} of the discs in \(\tilde{C}\). In the example, we have exactly one eigenvalue in the third disc and exactly two eigenvalues somewhere in the union of discs one and two 4. Why is it possible to restrict the estimation when we deal with disjoint discs? To see this, let me first remind you that the eigenvalues of any diagonal matrix are exactly the diagonal elements themselves. Next, I want to define a new function which separates the diagonal elements from the off-diagonals\begin{equation*} \tilde{D}(\alpha) = D_1 + \alpha D_2. \end{equation*} With \(\alpha \in [0;1]\) this smoothly adds the off-diagonal elements in \(D_2\) to the matrix \(D_1\) containing only the diagonal elements, by starting from \(\tilde{D}(0) = \operatorname{diag}(D) = D_1\) and ending at \(\tilde{D}(1) = D_1 + D_2 = D\).
Before we see why this step is important, let us first consider the same technique for a general 2-by-2 matrix\begin{align*} F &= \begin{pmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{pmatrix} \\ \Rightarrow \tilde{F}(\alpha) &= F_1 + \alpha F_2 \end{align*} If we now want to calculate the eigenvalues of \(\tilde{F}\), we need to find the roots of the corresponding characteristic polynomial, meaning\begin{align*} \left| \tilde{F} - \lambda I \right| &= 0 \\ \left| F_1 + \alpha F_2 - \lambda I \right| &= 0 \\ \left| \begin{pmatrix} f_{11} & 0 \\ 0 & f_{22} \end{pmatrix} + \alpha \begin{pmatrix} 0 & f_{12} \\ f_{21} & 0 \end{pmatrix} - \lambda \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \right| &= 0. \end{align*} The solution for the first root of this polynomial and therefore the first eigenvalue is\begin{equation*} \lambda(\alpha) = \frac{1}{2} \left(-\sqrt{{\color{Aquamarine}4 \alpha ^2 f_{12} f_{21}} +f_{11}^2+f_{22}^2-2 f_{11} f_{22}}+f_{11}+f_{22}\right). \end{equation*} The thing I am driving at is the fact that the eigenvalue \(\lambda(\alpha)\) changes only continuously with increasing \(\alpha\) (highlighted position), and the closer \(\alpha\) gets to 1, the more of the off-diagonals is added. In particular, \(\lambda(\alpha)\) does not suddenly jump somewhere completely different. I chose a 2-by-2 matrix because this point is easier to see here. Finding roots of higher-dimensional matrices can become much more complicated. But the statement of continuously changing eigenvalues stays true, even for matrices of higher dimensions. Now back to the matrix \(\tilde{D}(\alpha)\) from the example. We will now increase \(\alpha\) and see how this affects our discs. The principle is simple: just add both matrices together and apply the circle theorem to the resulting matrix. The following animation lets you perform the increase of \(\alpha\).
As you can see, the eigenvalues start at the disc centres because there only the diagonal elements remain, i.e. \(\tilde{D}(0) = D_1\). With increasing value of \(\alpha\), more and more of the off-diagonal elements are added, letting the eigenvalues move away from the centres. But note again that this transition is smooth: no eigenvalue suddenly jumps to a completely different position. Note also that at some point the discs for the first and second eigenvalue merge together. Now the extended theorem becomes clear: since the eigenvalues start at the disc centres and do not jump around, any disc (or union of discs) that is still disjoint from the others at \(\alpha=1\) must contain as many eigenvalues as discs contributed to it. In the example, this gives us the proof that \(\lambda_3\) must indeed lie in the disc around \(d_{33}\).
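The disjoint-disc statement is easy to check numerically. Below is a small sketch (assuming NumPy; the 3-by-3 matrix is made up in the spirit of the example above, since the original matrix only appears in the figure, with \(d_{33} = 9+10i\) placed far from the other diagonal entries):

```python
import numpy as np

# Hypothetical matrix: the disc around d33 = 9+10i is disjoint
# from the discs around d11 and d22.
A = np.array([[1.0, 0.5, 0.2],
              [0.4, 2.0, 0.3],
              [0.1, 0.2, 9 + 10j]], dtype=complex)

centres = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centres)  # row sums of off-diagonal moduli

eigenvalues = np.linalg.eigvals(A)

# Count how many eigenvalues fall into the disjoint third disc:
in_third_disc = sum(abs(ev - centres[2]) <= radii[2] for ev in eigenvalues)
print(in_third_disc)  # the theorem guarantees exactly 1
```

Here discs one and two overlap (centres 1 and 2, radii 0.7 each) while the third disc is far away, so exactly one eigenvalue must lie in it.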
The amsmath package provides a handful of options for displaying equations. You can choose the layout that best suits your document, even if the equations are really long, or if you have to include several equations in the same line. The standard LaTeX tools for equations may lack some flexibility, causing overlapping or even trimming part of the equation when it's too long. We can overcome these difficulties with amsmath. Let's check an example: \begin{equation} \label{eq1} \begin{split} A & = \frac{\pi r^2}{2} \\ & = \frac{1}{2} \pi r^2 \end{split} \end{equation} You have to wrap your equation in the equation environment if you want it to be numbered; use equation* (with an asterisk) otherwise. Inside the equation environment, use the split environment to split the equation into smaller pieces; these smaller pieces will be aligned accordingly. The double backslash works as a newline character. Use the ampersand character &, to set the points where the equations are vertically aligned. This is a simple step; if you use LaTeX frequently, surely you already know this. In the preamble of the document include the code: \usepackage{amsmath} To display a single equation, as mentioned in the introduction, you have to use the equation* or equation environment, depending on whether you want the equation to be numbered or not. Additionally, you might add a label for future reference within the document. \begin{equation} \label{eu_eqn} e^{\pi i} + 1 = 0 \end{equation} The beautiful equation \ref{eu_eqn} is known as Euler's identity. For equations longer than a line use the multline environment. Insert a double backslash to set a point for the equation to be broken. The first part will be aligned to the left and the second part will be displayed in the next line and aligned to the right. Again, the use of an asterisk * in the environment name determines whether the equation is numbered or not.
\begin{multline*} p(x) = 3x^6 + 14x^5y + 590x^4y^2 + 19x^3y^3\\ - 12x^2y^4 - 12xy^5 + 2y^6 - a^3b^3 \end{multline*} Split is very similar to multline. Use the split environment to break an equation and to align it in columns, just as if the parts of the equation were in a table. This environment must be used inside an equation environment. For an example check the introduction of this document. If there are several equations that you need to align vertically, the align environment will do it. Usually the relation symbols (>, < and =) are the ones aligned for a nice-looking document. As mentioned before, the ampersand character & determines where the equations align. Let's check a more complex example: \begin{align*} x&=y & w &=z & a&=b+c\\ 2x&=-y & 3w&=\frac{1}{2}z & a&=b\\ -4 + 5x&=2+y & w+2&=-1+w & ab&=cb \end{align*} Here we arrange the equations in three columns. LaTeX assumes that each equation consists of two parts separated by an &; also that each equation is separated from the one before by an &. Again, use * to toggle the equation numbering. When numbering is allowed, you can label each row individually. If you just need to display a set of consecutive equations, centered and with no alignment whatsoever, use the gather environment. The asterisk trick to set/unset the numbering of equations also works here. For more information see
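As a small illustration of the last point, a sketch of the gather environment (the equations themselves are just placeholders):

```latex
\begin{gather*}
2x - 5y = 8 \\
3x^2 + 9y = 3a + c
\end{gather*}
```

Each line is centered on its own row, with no column alignment between the rows.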
I was asked to calculate all the possible groups for elliptic curves and their orders over $\mathbb{F}_5$. There are $p^2-p$ curves that respect $\Delta \neq 0$, so there are $20$ groups. Some of them may be isomorphic; I have to look for the ones with the same order. For example: are the ones defined by $x^3+4x+2$ and $x^3+4x+3$ isomorphic? The points of the first group are $(3,1), (3,4), (\infty, \infty)$; those of the second group are $(2,2), (2,3), (\infty, \infty)$. How do I check if they are isomorphic or not?
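A quick way to double-check the point counts in the question is brute force; a sketch, assuming the curves are in short Weierstrass form $y^2 = x^3 + ax + b$ over $\mathbb{F}_p$:

```python
# Brute-force enumeration of affine points on y^2 = x^3 + a*x + b over F_p.
def curve_points(a, b, p=5):
    pts = [(x, y) for x in range(p) for y in range(p)
           if (y * y - (x ** 3 + a * x + b)) % p == 0]
    return pts + ["O"]  # "O" stands for the point at infinity

E1 = curve_points(4, 2)  # y^2 = x^3 + 4x + 2
E2 = curve_points(4, 3)  # y^2 = x^3 + 4x + 3
print(len(E1), len(E2))  # both groups have order 3
```

Since both groups have order 3, which is prime, both are cyclic of order 3 and hence isomorphic; in general, two finite cyclic groups are isomorphic exactly when they have the same order.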
I have a point $P_0 = [x_0, y_0, z_0]'$. I want to rotate the axes so that the new coordinates will be $P_1 = [x_1, y_1, z_1]'$. Define the following rotation matrices: $R_x = \left[\matrix{ 1 & 0 & 0\\ 0 & \cos\alpha & - \sin\alpha\\ 0 & \sin\alpha & \cos\alpha} \right]$, $R_y = \left[\matrix{ \cos\beta & 0 & \sin\beta\\ 0 & 1 & 0\\ -\sin\beta & 0 & \cos\beta} \right]$, $R_z = \left[\matrix{ \cos\gamma & -\sin\gamma & 0\\ \sin\gamma & \cos\gamma & 0\\ 0 & 0 & 1} \right]$ and $R_{xyz} = R_x R_y R_z$. I want $P_1 = R_{xyz} P_0$ and $\left[\alpha, \beta, \gamma\right]$ be such that $\alpha, \beta, \gamma = 0$ if $P_1 = P_0$; $\alpha, \beta = 0$ if the rotation can be obtained by setting only $\gamma$; $\alpha = 0$ if the rotation can be obtained by setting only $\beta$ and $\gamma$. Any hint? EDIT: Following JordiC's answer: yes, the distance to the origin is the same for $P_0$ and $P_1$; I can set $\alpha = 0$ and the rotation matrix will be simplified: $R = \left[\matrix{ \cos\beta \cos\gamma & -\cos\beta \sin\gamma & \sin\beta\\ \sin\gamma & \cos\gamma & 0\\ -\cos\gamma \sin\beta & \sin\beta \sin\gamma & \cos\beta} \right]$. I have found this: Find rotation that maps a point to its target which seems to be the same question.
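A sanity-check sketch of the convention described in the question (assuming NumPy; the angle values are illustrative, not a solution method):

```python
import numpy as np

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def Ry(b):
    return np.array([[ np.cos(b), 0, np.sin(b)],
                     [0, 1, 0],
                     [-np.sin(b), 0, np.cos(b)]])

def Rz(g):
    return np.array([[np.cos(g), -np.sin(g), 0],
                     [np.sin(g),  np.cos(g), 0],
                     [0, 0, 1]])

def Rxyz(a, b, g):
    # Composition in the order stated in the question: R_x R_y R_z
    return Rx(a) @ Ry(b) @ Rz(g)

# A rotation achievable with gamma alone (so alpha = beta = 0):
P0 = np.array([1.0, 0.0, 0.0])
P1 = Rxyz(0.0, 0.0, np.pi / 2) @ P0   # rotates the x-axis onto the y-axis
```

With $\alpha = 0$ the product reduces to $R_y R_z$ as in the edit, and for $P_1 = P_0$ all three angles can indeed be taken as zero, since Rxyz(0, 0, 0) is the identity.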
In mathematics, especially in applications of linear algebra to physics, the Einstein notation or Einstein summation convention is a notational convention that implies summation over a set of indexed terms in a formula, thus achieving notational brevity. As part of mathematics it is a notational subset of Ricci calculus; however, it is often used in applications in physics that do not distinguish between tangent and cotangent spaces. It was introduced to physics by Albert Einstein in 1916. [1] Introduction Statement of convention According to this convention, when an index variable appears twice in a single term and is not otherwise defined (see free and bound variables), it implies summation of that term over all the values of the index. So where the indices can range over the set {1, 2, 3}, y = \sum_{i=1}^3 c_i x^i = c_1 x^1 + c_2 x^2 + c_3 x^3 is reduced by the convention to: y = c_i x^i \,. The upper indices are not exponents but are indices of coordinates, coefficients or basis vectors. For example, x^2 should be read as "x-two", not "x squared", and typically (x^1, x^2, x^3) would be equivalent to the traditional (x, y, z). In general relativity, a common convention is that the Greek alphabet is used for space and time components, where indices take values 0, 1, 2, 3 (frequently used letters are μ, ν, ...), and the Latin alphabet is used for spatial components only, where indices take values 1, 2, 3 (frequently used letters are i, j, ...). In general, indices can range over any indexing set, including an infinite set.
This should not be confused with a typographically similar convention used to distinguish between tensor index notation and the closely related but distinct basis-independent abstract index notation. An index that is summed over is a summation index, in this case i. It is also called a dummy index since any symbol can replace i without changing the meaning of the expression, provided that it does not collide with index symbols in the same term. An index that is not summed over is a free index and should be found in each term of the equation or formula if it appears in any term. Compare dummy indices and free indices with free variables and bound variables. Application Einstein notation can be applied in slightly different ways. Typically, each index occurs once in an upper (superscript) and once in a lower (subscript) position in a term; however, the convention can be applied more generally to any repeated indices within a term. [2] When dealing with covariant and contravariant vectors, where the position of an index also indicates the type of vector, the first case usually applies; a covariant vector can only be contracted with a contravariant vector, corresponding to summation of the products of coefficients. On the other hand, when there is a fixed coordinate basis (or when not considering coordinate vectors), one may choose to use only subscripts; see below. Vector representations Superscripts and subscripts vs. only subscripts In terms of covariance and contravariance of vectors, upper indices represent components of contravariant vectors, and lower indices represent components of covariant vectors (covectors); they transform contravariantly, resp. covariantly, with respect to a change of basis.
In recognition of this fact, the following notation uses the same symbol both for a (co)vector and its components, as in: \, v = v^i e_i = \begin{bmatrix}e_1&e_2&\cdots&e_n\end{bmatrix}\begin{bmatrix}v^1\\v^2\\\vdots\\v^n\end{bmatrix} \, w = w_i e^i = \begin{bmatrix}w_1 & w_2 & \cdots & w_n\end{bmatrix}\begin{bmatrix}e^1\\e^2\\\vdots\\e^n\end{bmatrix} where v is the vector and v^i are its components (not the ith covector v_i), and w is the covector and w_i are its components. In the presence of a non-degenerate form (an isomorphism V \to V^*, for instance a Riemannian metric or Minkowski metric), one can raise and lower indices. A basis gives such a form (via the dual basis), hence when working on R^n with a Euclidean metric and a fixed orthonormal basis, one can work with only subscripts. However, if one changes coordinates, the way that coefficients change depends on the variance of the object, and one cannot ignore the distinction; see covariance and contravariance of vectors. Mnemonics In the above example, vectors are represented as n×1 matrices (column vectors), while covectors are represented as 1×n matrices (row covectors). When using the column vector convention: "Upper indices go up to down; lower indices go left to right." "COvariant tensors are ROW vectors that have indices that are belOW" (co-below-row). Vectors (column matrices) can be stacked side-by-side: \begin{bmatrix}v_1 & \cdots & v_k\end{bmatrix}. Hence the lower index indicates which column you are in. Covectors (row matrices) can be stacked top-to-bottom: \begin{bmatrix}w^1 \\ \vdots \\ w^k\end{bmatrix} Hence the upper index indicates which row you are in. Abstract description The virtue of Einstein notation is that it represents the invariant quantities with a simple notation. In physics, a scalar is invariant under transformations of basis. In particular, a Lorentz scalar is invariant under a Lorentz transformation. The individual terms in the sum are not.
When the basis is changed, the components of a vector change by a linear transformation described by a matrix. This led Einstein to propose the convention that repeated indices imply that the summation is to be done. As for covectors, they change by the inverse matrix. This is designed to guarantee that the linear function associated with the covector, the sum above, is the same no matter what the basis is. The value of the Einstein convention is that it applies to other vector spaces built from V using the tensor product and duality. For example, V\otimes V, the tensor product of V with itself, has a basis consisting of tensors of the form \mathbf{e}_{ij} = \mathbf{e}_i \otimes \mathbf{e}_j. Any tensor \mathbf{T} in V\otimes V can be written as: \mathbf{T} = T^{ij}\mathbf{e}_{ij}. V^*, the dual of V, has a basis \mathbf{e}^1, \mathbf{e}^2, \dots, \mathbf{e}^n which obeys the rule \mathbf{e}^i (\mathbf{e}_j) = \delta^i_j, where \delta is the Kronecker delta. As \mathrm{Hom}(V,W) = V^* \otimes W, the row-column coordinates on a matrix correspond to the upper-lower indices on the tensor product. Common operations in this notation In Einstein notation, the usual element reference A_{mn} for the mth row and nth column of matrix A becomes A^m{}_n. We can then write the following operations in Einstein notation as follows. Inner product (hence also vector dot product) Using an orthogonal basis, the inner product is the sum of corresponding components multiplied together: \mathbf{u} \cdot \mathbf{v} = u_j v^j This can also be calculated by multiplying the covector on the vector. Vector cross product Again using an orthogonal basis (in 3d) the cross product intrinsically involves summations over permutations of components: \mathbf{u} \times \mathbf{v}= u^j v^k\epsilon^i{}_{jk} \mathbf{e}_i where \epsilon^i{}_{jk}=\delta^{il}\epsilon_{ljk} and \epsilon_{ijk} is the Levi-Civita symbol.
Based on this definition of \epsilon, there is no difference between \epsilon^i{}_{jk} and \epsilon_{ijk} but the position of indices. Matrix multiplication The matrix product of two matrices A_{ij} and B_{jk} is: \mathbf{C}_{ik} = (\mathbf{A} \, \mathbf{B})_{ik} =\sum_{j=1}^N A_{ij} B_{jk} equivalent to C^i{}_k = A^i{}_j \, B^j{}_k Trace For a square matrix A^i{}_j, the trace is the sum of the diagonal elements, hence the sum over a common index A^i{}_i. Outer product The outer product of the column vector u^i by the row vector v_j yields an m×n matrix A: A^i{}_j = u^i \, v_j = (uv)^i{}_j Since i and j represent two different indices, there is no summation and the indices are not eliminated by the multiplication. Raising and lowering indices Given a tensor, one can raise an index or lower an index by contracting the tensor with the metric tensor, g_{\mu\nu}. For example, take the tensor T^{\alpha}{}_{\beta}; one can raise an index: T^{\mu\alpha}=g^{\mu\sigma}T_{\sigma}{}^{\alpha} Or one can lower an index: T_{\mu\beta}=g_{\mu\sigma}T^{\sigma}{}_{\beta} Notes This applies only for numerical indices. The situation is the opposite for abstract indices. Then, vectors themselves carry upper abstract indices and covectors carry lower abstract indices, as per the example in the introduction of this article. Elements of a basis of vectors may carry a lower numerical index and an upper abstract index. References "Einstein Summation". Wolfram Mathworld. Retrieved 13 April 2011. Bibliography Kuptsov, L.P. (2001), "Einstein rule", in Hazewinkel, Michiel, Encyclopaedia of Mathematics. External links Rawlings, Steve (2007-02-01). "Lecture 10 - Einstein Summation Convention and Vector Identities" (PDF). Oxford University.
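For readers who compute: NumPy's einsum mirrors the summation convention almost literally, which makes the operations above concrete (a sketch; the arrays are arbitrary examples):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)

inner = np.einsum('j,j->', u, v)        # u_j v^j, summed over the repeated j
matmul = np.einsum('ij,jk->ik', A, B)   # C^i_k = A^i_j B^j_k
trace = np.einsum('ii->', B[:3, :3])    # A^i_i, sum over the common index
outer = np.einsum('i,j->ij', u, v)      # A^i_j = u^i v_j, no repeated index
```

Note that einsum does not track upper versus lower index positions; it only implements the "repeated index means sum" rule, which is the fixed-orthonormal-basis case discussed above.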
Design open well staircase. Draw the plan showing flight details, mid-landings etc. Draw reinforcement details in a flight. Grade of concrete is M20 and steel Fe415. Civil Eng > Sem 8 > Design and Drawing of Reinforced Concrete structure Step 1: Floor to floor height = 3.1m (Open well) M20 Fe415 $f_{ck} = 20 \, N/mm^2$ $f_y = 415 \, N/mm^2$ $M = 0.138 \, f_{ck} \, b d^2$ Step 2: Preliminary Dimensions Floor to floor height = 3.1m height of each flight = $\frac{3.1}{2} = 1.55m$ But in this case we need to design an open well staircase $\therefore$ Assume Riser = 155mm Total risers = $\frac{3100}{155} = 20$ Since in an open well staircase there are 3 flights $\therefore$ Total treads = 20 - 3 = 17 $\therefore$ Provide 7 treads in the $1^{st}$ flight, 3 treads in the $2^{nd}$ flight, 7 treads in the $3^{rd}$ flight Assume $tread = 250mm$ Assume width of the landing to be 1m (effective) Step 3: Effective span and depth $L_{eff}$ for the $I^{st}$ and $III^{rd}$ flights: $L_{eff} = 1.75 + 1 = 2.75$ m $L_{eff}$ of the $II^{nd}$ flight: $L_{eff} = 1 + 0.75 + 1 = 2.75$ m $d = \frac{L_{eff}}{B_v \times mf}$ assume % of steel = 0.4%, mf = 1.15, $B_v$ = 20 $\frac{2750}{20 \times 1.15} = 119.6$ $\therefore$ Provide d = 130 mm effective cover = 25 mm $D_{overall} = 155 mm$ Step 4: Load calculations On going portion $\begin{aligned} \text{1) Self wt. of slab } &= \frac{25D \sqrt{R^2 + T^2}}{T} \\ &= \frac{25 \times 0.155 \times \sqrt{0.155^2 + 0.25^2} }{0.25} \\ &= 4.56 \, kN/m \\ \text{} \\ \text{2) Self wt of step } &= \frac{25 R}{2} \\ &= \frac{25 \times 0.155}{2} \\ &= 1.94 \, kN/m \\ \text{} \\ \text{3)} LL = 3 \, kN/m & \quad FF = 1 \, kN/m \end{aligned}$ Total load = 10.5 kN/m Ultimate load = 15.75 kN/m On landing portion 1) Self wt.
of slab = $25 \times D$ = 3.9 kN/m 2) LL = 3 kN/m $\quad$ FF = 1 kN/m Total = 7.9 kN/m Ultimate = 11.85 kN/m As per clause 33.2, page 63, only one-half of the load on spans common to two flights has to be considered in an open well staircase Step 5: Bending Moment For the $I^{st}$ and $III^{rd}$ flights $R_A + R_B = (5.925 \times 1) + (15.75 \times 1.75) = 33.5 \, kN$ Taking moments about B, $\sum M_B = 0$: $[5.925 \times 1 \times (0.5+1.75) ] + \frac{15.75 \times 1.75^2}{2} - R_A \times 2.75 = 0$ $R_A = 13.62 \, kN \quad R_B = 19.88 \, kN$ $x = 1.26 m$ $M_{max} = (19.88 \times 1.26) - (\frac{15.75 \times 1.26^2}{2}) = 12.55 \, kNm$ For the $II^{nd}$ flight $R_A + R_B = 23.66$ $R_A = R_B = 11.83$ Moment at mid span = $(11.83 \times 1.375) - [5.925 \times (0.5 + \frac{0.75}{2} )] - (\frac{15.75 \times 0.375^2}{2})$ $M_{max} = 10 \, kNm$ Step 6: Check for depth $d = \sqrt{\frac{M}{0.138 \, f_{ck} \, b}} = 67.43 \, mm \lt d_{provided} \quad \therefore \ Safe$ Step 7: Ast calculation Since the moments are almost the same, provide Ast for the maximum moment $A_{st} = \left( \frac{0.5 \times 20}{415} \right) \left[ 1 - \sqrt{1 - \frac{4.6 \times 12.55 \times 10^6}{20 \times 1000 \times 130^2}} \right] 1000 \times 130$ $A_{st} = 280 \, mm^2$ $\therefore$ Provide #10mm @ 275mm c/c Distribution Steel $A_{st,min} = \frac{0.12}{100} bD = 186 \, mm^2$ provide #10mm @ 265 mm c/c Step 8: Check for deflection % of steel provided = $\frac{285.6}{1000 \times 130} \times 100 = 0.22\% \lt 0.4\%$ $\therefore$ Safe Step 9: Check for Shear $V_{max} = \frac{15.75 \times 2.75}{2} = 21.66 \, kN$ $V_{uc} = \tau_c \, bd \, K = 1.15 \times 0.28 \times 1000 \times 130 = 41.86 \, kN$ $\therefore$ Safe Step 10: Check for development length $L_d \le \frac{1.3 M_1}{V} + L_0$ $L_d = \frac{\phi \ f_y \ 0.87}{4 \tau_{bd}} = \frac{10 \times 415 \times 0.87}{4 \times 1.6 \times 1.2} = 470.12 \, mm$ $M_1 = \frac{12.55}{2} = 6.275 \, kNm$ $V = 4.66 \, kN$ $\therefore \frac{1.3 \times 6.275 \times 10^6}{4.66 \times 10^3} = 476 \, mm$ $\therefore$ Safe
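The two key arithmetic checks in Steps 6 and 7 can be reproduced in a few lines (a sketch with the values from above; units are N and mm, and the steel-area expression is the standard limit-state flexure formula used in the calculation):

```python
import math

fck, fy = 20.0, 415.0      # N/mm^2 (M20 concrete, Fe415 steel)
b, d = 1000.0, 130.0       # mm (design strip width, effective depth)
M = 12.55e6                # N*mm, maximum bending moment from Step 5

# Step 6: minimum effective depth for a balanced singly reinforced section
d_req = math.sqrt(M / (0.138 * fck * b))   # ~67.4 mm < 130 mm provided, so safe

# Step 7: required area of tension steel
Ast = (0.5 * fck / fy) * (1 - math.sqrt(1 - 4.6 * M / (fck * b * d * d))) * b * d
print(round(d_req, 1), round(Ast))         # ~67.4 mm and ~280 mm^2
```

Both results agree with the hand calculation, which also confirms that the denominator in the Ast expression must be fy = 415, not a smaller constant.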
Bunuel wrote: \(a\) and \(b\) are integers. \([x]\) is an integer less than or equal to \(x\). Is \([\frac{a}{b}] \geq {1}\)? Harshgmat (1) \(ab = 64\) (2) \(a=b^2\) Yes the question is a bit confusing because we have been dealing with such questions with a bit different wordings. Had it been \([x]\) is the GREATEST integer less than or equal to \(x\), the answer would be YES. Because a/B would be 4 and thus >1. However here GREATEST is missing, so the value could be anything but not greater than a/B.. So if a/b=4, [a/b] can be 4,3,2,1,0,-1,-2..... So it could be any integer as shown above and \(\frac{a}{b}\leq{4}\) The difference is the word GREATEST not being there. Hope it helps. Yes it is helpful. Thanks. Kudos. Everything will fall into place… There is perfect timing for everything and everyone. Never doubt, But Work on improving yourself, Keep the faith and Stay ready. When it’s finally your turn, It will all make sense.
What does it mean for the ratio of the lengths of the sides of a rectangle to be rational, say $\frac{5}{3}$? It means that if the long side of the rectangle is divided into five equal parts, and if one counts out three of those parts, then the length of the resulting line segment equals the length of the short side of the rectangle. The short side can now be divided into three parts all equal to the parts of the long side. Hence the rectangle can be tiled as a $3\times5$ array of squares. Conversely an $m\times n$ array of squares forms a rectangle whose ratio of sides is the rational number $\frac{n}{m}$. So to say that the ratio of sides of a rectangle is rational is the same as to say that the rectangle can be tiled as an array of squares. The question now is, can every rectangle be tiled as an array of squares? The answer isn't obvious. One might imagine that as long as we make the squares small enough, it can always be done. Before answering this question, let's imagine that someone has told you that a rectangle can be tiled as an array of squares but hasn't told you what size square to use. How would you go about finding the size of square? To approach this, notice that if you manage to find a square that tiles a given rectangle, the same square will tile the shorter rectangle you get by chopping off a square section from the long end of the rectangle. On the other hand, if the rectangle is, in fact, a square, then the rectangle is a square tiling of itself (using only one tile). Because of these two properties, we can find the square we want by chopping square sections off of the rectangle until the remainder rectangle is itself a square. By reversing the process, this remainder square will tile the original rectangle, as the following image should make clear. At this point, the idea that every rectangle can be tiled as an array of squares may seem much less plausible than it did previously. 
In order for a tiling with squares to exist, the remainder rectangle in the chopping-off process must eventually be a square. But it is not clear that this always has to happen. Why couldn't it be the case that, for certain rectangles, the chopping-off process continues forever, never resulting in a square? After all, for the remainder rectangle to be a square, its two sides have to be precisely equal. That the two sides should always be at least slightly unequal seems much more probable, from a random starting rectangle, than that they should ever be exactly the same. These misgivings are all well-founded, but the true situation can actually be even worse than this. For certain starting rectangles, the sides of the remainder rectangle never even get close to approximate equality, much less exact equality. This happens, for example, when the sequence of remainder rectangles falls into a repeating pattern, which occurs when a remainder rectangle is geometrically similar to an earlier remainder rectangle in the sequence. The simplest example of such a rectangle is the golden rectangle, which is defined by the property that chopping a square section off of the long end of the rectangle results in a rectangle that is similar to the starting rectangle. The ratio of the long side to the short side is known as the golden ratio and has decimal expansion $1.61803\ldots$. All remainder rectangles in the sequence have side lengths in this ratio, and hence none is ever close to being square. As a consequence, the golden rectangle cannot be tiled as a rectangular array of squares, and therefore the ratio of its side lengths is not rational. The defining property of the golden ratio implies that its value is $(1+\sqrt{5})/2$. 
It turns out that similar numbers involving square roots, that is, numbers of the form $r+s\sqrt{d}$ where $d$ is a natural number that is not a perfect square and $r$ and $s$ are rational numbers, always fall into a repeating, but generally more complicated, pattern of remainder rectangles. None of these numbers are rational. These examples are but the simplest way to see the phenomenon of irrationality. An interesting irrational number has decimal expansion $1.433127\ldots$. If the chopping-off process is carried out on a rectangle whose horizontal side has this length and whose vertical side has length $1$ then, after chopping a square off the long side, the vertical side becomes the long side; after chopping two squares off the long side, the horizontal side again becomes the long side; after chopping three squares off the long side, the vertical side becomes the long side; after chopping four squares off the long side, the horizontal side becomes the long side; and so on. Hence among the remainder rectangles are rectangles that become progressively longer and longer relative to their width. Assuming that this continues, it follows that the remainder rectangle is never a square and hence that the original rectangle does not have a whole-number ratio of sides. Actually proving that this pattern continues for this particular ratio (which is $I_0(2)/I_1(2)$, where the functions $I_n(z)$ are things called modified Bessel functions of the first kind) is considerably more work than in the case of the golden ratio. A more familiar number that exhibits a similar, but more complicated pattern is $e$, which is therefore also irrational. The chopping-off pattern in this case is $[2,1,2,1,1,4,1,1,6,1,1,8,\ldots]$, where the numbers represent how many squares get chopped off with each change in orientation of the long edge. Unlike the examples we have looked at so far, however, most irrational numbers exhibit a rather unpredictable chopping-off pattern.
For example, $\pi$ has the pattern $[3,7,15,1,292,1,1,1,2,\ldots]$. In any case, it is only when the chopping-off process terminates that the side lengths have a whole-number ratio. Incidentally, the chopping-off process is usually called the Euclidean algorithm, and the sequences of numbers representing squares chopped off with each change in orientation of the long side are called the coefficients of the continued fraction. Examples of continued fraction coefficients for various numbers, some of which were discussed in this post, are listed below.$$\begin{aligned}5/3&=[1,1,2]\\22/9&=[2,2,4]\\99/34&=[2,1,10,3]\\(1+\sqrt{5})/2&=[1,1,1,\ldots]\\\sqrt{2}&=[1,2,2,2,\ldots]\\(6+\sqrt{10})/4&=[2,3,2,3,1,3,2,3,1,3,2,3,1,\ldots]\\I_0(2)/I_1(2)&=[1,2,3,4,5,6,\ldots]\\e&=[2,1,2,1,1,4,1,1,6,1,1,8,1,1,\ldots]\\\sqrt[3]{2}&=[1,3,1,5,1,1,4,1,1,8,1,14,1,10,2,1,\ldots]\\\pi&=[3,7,15,1,292,1,1,1,2,1,3,1,14,2\ldots]\end{aligned}$$
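The chopping-off process is short to code for rational inputs; a sketch (for irrational numbers one would work with floating-point approximations and stop after a fixed number of terms):

```python
# Continued fraction coefficients of n/m via the Euclidean algorithm:
# each coefficient counts how many squares get chopped off before the
# long side of the remainder rectangle changes orientation.
def continued_fraction(n, m):
    coeffs = []
    while m != 0:
        coeffs.append(n // m)
        n, m = m, n % m
    return coeffs

print(continued_fraction(5, 3))    # [1, 1, 2]
print(continued_fraction(99, 34))  # [2, 1, 10, 3]
```

The loop terminating (m reaching 0) is exactly the statement that the remainder rectangle eventually becomes a square, which is why the algorithm always halts for rational inputs.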
I am measuring voltages and currents of a 3-phase electrical machine and I need to calculate the power. Every interrupt (frequency between 30-70kHz) I get values (voltages, currents) from analogue-digital converter and I need to do a simple calculation to determine power. However, I was told to use averaging with a first order filter with a time constant of approximate 1s for the power calculation. This, I believe, means low pass filter with a frequency of approximately 1Hz? What is the simplest solution to this? Moving average? A first order lowpass filter is usually implemented like this: $$p[n] = \alpha p[n-1] + (1-\alpha) pi[n]$$ Where $p[n]$ is your filtered power estimation, $p[n-1]$ is the previous result, $pi[n]$ is your new measurement (probably the product of instantaneous voltage and current measurements), and $\alpha$ is a positive parameter just less than 1. The nearer $\alpha$ gets to 1, the larger the time constant (lower cutoff frequency) of your filter. But beware, especially in embedded systems with limited precision, that getting too near to 1 can make your filter unstable, or at least have problems due to numerical precision. The cutoff frequency for that filter is around $f_s \frac{1-\alpha}{2\pi\alpha}$, where $f_s$ is your sampling frequency.
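The recurrence above can be sketched with alpha derived from the desired time constant (the function name and parameters are mine, not from the answer; fs is the sampling rate and tau the time constant in seconds):

```python
import math

def smooth_power(samples, fs, tau=1.0):
    """First-order lowpass: p[n] = alpha*p[n-1] + (1-alpha)*x[n]."""
    alpha = math.exp(-1.0 / (fs * tau))  # discretised time constant
    p = 0.0
    out = []
    for x in samples:
        p = alpha * p + (1.0 - alpha) * x
        out.append(p)
    return out

# Step response: after one time constant (fs samples at tau = 1 s)
# the output reaches about 63.2 % of the final value, as expected.
fs = 50_000
y = smooth_power([1.0] * fs, fs)
print(round(y[-1], 3))  # 0.632
```

Choosing alpha = exp(-Ts/tau) rather than an ad-hoc constant ties the filter directly to the requested 1 s averaging time, independent of the 30-70 kHz interrupt rate.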
Hi, I have a simple question as in the title. Why is volumetric emission proportional to the absorption coefficient? I often see the volumetric emission term written as I can also see another representation, the volumetric emittance function (e.g. in Mark Pauly's thesis: Robust Monte Carlo Methods for Photorealistic Rendering of Volumetric Effects), which has the unit of radiance divided by metre (that is, W sr^-1 m^-3). Do particles that emit light not scatter light at all? Thanks I don't quite understand the question: doesn't the first term refer to a particle density at an integration position x that emits radiance L_e in viewing direction w, and that the particle density absorbs radiance w.r.t. sigma_a(x)? The emitted radiance is not proportional to the absorption, but is scaled by sigma_a(x). Let sigma_a := 1.0 be a constant for all x with respect to the density field; then your model only accounts for emission. With L_b(xb,w)+\int_xb^x L_e(x,w) sigma_a(x), b being the position of a constantly radiating background light and \int_xb^x meaning integration over the viewing ray from the backlight to the integration position, you get the classical emission+absorption model that is e.g. used for interactive DVR in SciVis. In- and out-scattering can be incorporated in the equation. See Nelson Max's '95 paper on optical models for DVR for the specifics: https://www.cs.duke.edu/courses/cps296. ... 
dering.pdf Also note that those models usually don't consider individual particles, but rather particle densities, and then derive coefficients e.g. by considering the projected area of all particles inside an infinitesimally flat cylinder projected on the cylinder cap. Emission and absorption are sometimes expressed with a single coefficient in code for practical reasons, e.g. so that a single coefficient in [0..1] can be used to look up an RGBA tuple in a single, pre-computed and optionally pre-integrated transfer function texture. Thanks for the reply. I can find the volumetric emission term that is proportional to the absorption coefficient, for example, in Jensen's Photon Mapping book, the Spectral and Decomposition Tracking paper, or Wojciech Jarosz's thesis. 
In Jarosz's thesis, there is the following sentence by equation (4.12) on page 60: "Media, such as fire, may also emit radiance, Le, by spontaneously converting other forms of energy into visible light. This emission leads to a gain in radiance expressed as:" I think it is required that emitting particles do not scatter light in order for L_e^V to be represented in the decomposed form \sigma_a(x) L_e(x, w). I'm not quite sure if I understand how you came to this assumption, or whether I totally understand your question, but I don't see why particles that emit light shouldn't also scatter light. However, the mental model behind radiance transfer is not one that considers the interaction of individual particles. The model rather derives the radiance in a density field due to emission, absorption, and scattering phenomena at certain sampling positions and in certain directions. So the question is: for position x, how much light is emitted by particles at or near x, and how much light arrives there due to other particles scattering light into direction x ("in-scattering"); and conversely: how much light is absorbed due to local absorption phenomena at x, and how much light is scattered away from x ("out-scattering", distributed w.r.t. the phase function). See e.g. Hadwiger et al., "Real-time Volume Graphics", p. 6 (https://doc.lagout.org/science/0_Comput ... aphics.pdf): "Analogously, the total emission coefficient can be split into a source term q, which represents emission (e.g., from thermal excitation), and a scattering term". Out-scattering + heat dissipation etc. 
==> total absorption at point x contributed to a viewing ray in direction w. In-scattering + emission ==> added radiance at point x along the viewing direction w. It is not about individual particles. The scattering equation is about the four effects contributing to the total radiance at a point x in direction w. There are no individual particles associated with the position x; you consider particle distributions and how they affect the radiance at x. The radiance increases if particles scatter light towards x, or if particles at (or near) x emit light. The radiance goes down due to absorption and out-scattering from the particle density at x. The point x is usually the sampling position that is encountered when marching a ray through the density field, and is not associated with individual particle positions. I didn't find a more general source and am working with this paper anyway - the paper also shows the scattering equation and states that it has a combined emission+in-scattering term: http://www.vis.uni-stuttgart.de/~amentm ... eprint.pdf (cf. Eq. 3 on page 3). Hope I'm not misreading your question? My current thinking process when reading the paper you last mentioned is as follows: 0. (Re "However, the mental model behind radiance transfer is not one that considers the interaction of individual particles.") - Yes, I know. 1. Eq. (1) says that the contribution from the source radiance Lm(x', w) is proportional to the extinction coefficient sigma_t(x'). - I can understand the RTE in this form. The probability density with which light interacts (one of scattering/absorption/emission) with the medium at x' is proportional to the particle density, that is, sigma_t(x'). 2. Eq. (3) says that once an interaction happens, it is emission with probability (1 - \Lambda) and scattering with probability \Lambda. - I can understand the latter, because the scattering albedo \Lambda is the probability that scattering happens out of some interaction. This is straightforward. However, I can't understand the former. The original question: Why is volumetric emission proportional to the absorption coefficient? 
I can understand that absorption happens with probability (1 - \Lambda), but I cannot understand why emission also happens with probability (1 - \Lambda). Now my question can be paraphrased as follows: shouldn't the probability that emission happens be independent of the absorption coefficient? I'm sorry in case the above explanation confuses you more, and thank you for your kindness in replying in such detail. I found an interesting lecture script: http://www.ita.uni-heidelberg.de/~dulle ... pter_3.pdf Section 3.3, Eq. 3.9. As I understand it, ultimately it is a matter of definition motivated by the thermodynamics of a special case. I imagine that the same particles that block light along some beam also emit light of their own. 
So it makes sense that emission and absorption strength have a common density-related prefactor. The reverse view, from the point of importance being emitted into the scene, seems more intuitive to me: importance particles have a chance to interact with particles of the medium in proportion to their cross section. If they interact, the medium transfers energy to the imaging sensor. That lecture script says: "This is Kirchhoff’s law. It says that a medium in thermal equilibrium can have any emissivity jν and extinction αν, as long as their ratio is the Planck function." Which sounds like they really CAN'T have any emissivity and extinction, but have to have them in a specific ratio. For example, for green light of wavelength 570 nm and 2000 K, that ratio is (from the Planck function) 6537. So the extinction is relatively small in comparison. Later it says: "If the temperature is constant along the ray, then the intensity will indeed exponentially approach [the Planck function]". Anyway, the real reason I am replying is so I can share this video of a "black" flame. The flame emits light but has no shadow (it seems fires don't have shadows), but can be made to have one and even appear black under single-frequency lighting: https://www.youtube.com/watch?v=5ZNNDA2WUSU This seems to contradict the notion that media have to absorb light in order to emit it... 
unless the amount absorbed is very tiny, as suggested by the lecture. Ha! Now this comes a bit late, but I appreciate you posting this experiment. It is very cool indeed. I think that, in contrast to the assumptions in that part of the lecture, the lamp is not a black body. At least, obviously, its emission spectrum does not follow Planck's law. Please don't ask when in reality the idealization as a black body is justified ... Somewhere I read that good emitters are generally also good absorbers, in the sense of material properties. The experiment displays this very well, since the sodium absorbs a lot of the light, whereas normal air and a normal flame do not.
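Coming back to the emission+absorption model from the start of the thread, here is a minimal back-to-front ray-marching sketch of $L = L_b T + \int L_e\,\sigma_a\,T\,dx$ along a single viewing ray. The fields `sigma_a` and `L_e` below are made-up example data, not taken from any real renderer or dataset.

```python
import numpy as np

def march_ray(sigma_a, L_e, L_b, dx):
    """Back-to-front ray marching: start at the background light and
    attenuate/add emission segment by segment toward the eye."""
    L = L_b
    for s, e in zip(sigma_a[::-1], L_e[::-1]):
        transmittance = np.exp(-s * dx)   # attenuation over one segment
        L = L * transmittance + e * s * dx  # emission scaled by sigma_a
    return L

xs = np.linspace(0.0, 1.0, 200)
dx = xs[1] - xs[0]
sigma_a = 0.5 + 0.5 * np.sin(6 * xs) ** 2   # hypothetical absorption field
L_e = np.ones_like(xs)                      # hypothetical emitted radiance

# When L_e == L_b everywhere, emission and absorption nearly cancel and the
# marched radiance stays close to L_b, regardless of how sigma_a varies.
print(march_ray(sigma_a, L_e, L_b=1.0, dx=dx))
```

This also illustrates the point discussed above: scaling the emission by sigma_a is what makes a medium in equilibrium with its surroundings look "invisible", consistent with the Kirchhoff's-law reading of the lecture script.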
Let $x[n]$ be a real signal and $y[n]=\exp(j3\pi n)$ be a complex signal. Would the convolution between those two signals be $$x[n] * \Re(y[n]) + jx[n]*\Im(y[n])$$? Discrete convolution is a linear operation, so yes $$\begin{align*}x[n]*y[n] &= (x*y)[n]\\ \\ &= \sum_{m=-\infty}^{\infty} x[m]y[n-m] \\ \\ &= \sum_{m=-\infty}^{\infty} x[m]\Re(y[n-m])+ jx[m]\Im(y[n-m]) \\ \\ &= \left[\sum_{m=-\infty}^{\infty} x[m]\Re(y[n-m])\right] + \left[\sum_{m=-\infty}^{\infty} jx[m]\Im(y[n-m])\right] \\ \\ &= x[n]*\Re(y[n])+jx[n]*\Im(y[n])\\ \end{align*}$$ Yes, and you can see this even more simply by distributing the convolution: $$ \begin{align} x[n] \star y[n] & = x[n] \star \left( \Re \{ y[n] \} + j ~~ \Im \{ y[n] \} \right) \\ &= x[n] \star \Re \{ y[n] \} + j ~~ x[n] \star \Im \{ y[n] \} \end{align}$$
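As a quick numerical sanity check of the linearity argument above (the signal lengths and the random test signal are arbitrary choices; note that $\exp(j3\pi n) = (-1)^n$ for integer $n$, so the imaginary part here is zero up to rounding):

```python
import numpy as np

# Verify that convolving a real signal with a complex signal splits into
# real and imaginary convolutions, by linearity of convolution.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)            # real signal x[n]
n = np.arange(8)
y = np.exp(1j * 3 * np.pi * n)         # complex signal y[n] = exp(j*3*pi*n)

full = np.convolve(x, y)
split = np.convolve(x, y.real) + 1j * np.convolve(x, y.imag)
print(np.allclose(full, split))        # True
```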
Here is a string of comments which might be helpful. UPDATE: at the end I conjecture an upper bound $a(n) \leq \lfloor (\frac{n-1}{2})^2 \rfloor$ which satisfies a stronger property. Consider instead cases of $$\prod_1^k(x_i+a)= \prod_1^k(y_i+a) \tag{*}$$ where the multisets $\{x_1,\cdots ,x_k\}$ and $\{y_1,\cdots ,y_k\}$ are disjoint. I'll assume the elements are listed in increasing order. To stick to the OP, add the requirement that the $y_i$ are distinct. For example, $a(5)\geq 2$ because there are counter-examples to $a=0$ and $a=1$: $$(2+0)(2+0)(3+0)(2+0)(5+0)=(1+0)(2+0)(3+0)(4+0)(5+0)$$$$(2+1)(2+1)(3+1)(3+1)(4+1)=(1+1)(2+1)(3+1)(4+1)(5+1)$$ Cancel out common factors to see that the sources of these counter-examples are $1\cdot 4=2 \cdot 2$ and $2 \cdot 6=3 \cdot 4.$ In the other direction, one can pad an example of $(*)$ by changing the right-hand side to $\prod_1^n(i+a)$ and adding the same new factors on the left. Here $n$ could be $\max(x_k,y_k)$ or anything larger. This final remark shows that $a(n)$ is non-decreasing. Of the values reported so far, the larger ones are somewhat close: $$a(14)=33 \lt 42=\lfloor (\frac{13}2)^2\rfloor$$ $$ a(15)=45 \lt 49$$ Here is a potential conjecture. It is false; I mention it only because the counter-examples are lovely. Suppose that the value of $\prod_{i=1}^n (a + x_i) -\prod_{i=1}^n (a + y_i)$ is independent of $a$. Does that mean that the shared value is $0$ and $x_i=y_i$? The answer is no, because of ideal solutions to the Prouhet-Tarry-Escott problem. For example, $2^k+3^k+7^k=1^k+5^k+6^k$ for $k=0,1,2.$ This explains the observation that $$(2+a)(3+a)(7+a)=42+41a+12a^2+a^3$$$$(1+a)(5+a)(6+a)=30+41a+12a^2+a^3$$ so the two always differ by $12.$ The OP asks for the first $a$ which satisfies the condition: for any set of integers $(x_1,\dotsc,x_n)$ with $1\leq x_i \leq n$, $(x_1,\dotsc,x_n)$ is a permutation of $(1,\dotsc,n)$ if and only if $(x_1+a)\dotsb(x_n+a)=(1+a)\dotsb(n+a)$. 
I will instead seek the last $a$ which fails the property. This (plus $1$) is then an upper bound on $a(n).$ I will conjecture that for fixed $n,$ this last bad $a$ is at most $(\frac{n-1}{2})^2.$ My justification is sketchy and would probably benefit from classical inequalities. By my comments above, given $n$, a particular $a$ is bad if there is a $k$-member subset of $\{a+1,\cdots ,a+n\}$ and a disjoint multiset of $k$ elements from the same set which have the same product. I think that the extreme case is $k=2$ with $a+1=s^2$ and $n=2s+1$, so $a+n=(s+1)^2.$ Then $s^2\cdot (s+1)^2=(s^2+s)\cdot (s^2+s).$ Here are plots showing that $a=18$ and $a=45$ are good for $n=11$, at least as far as $k=2.$ The first shows that there are no solutions of $18\cdot 29=u \cdot v$ with $19 \leq u,v \leq 28$: the hyperbola $xy=19\cdot 28$ (on this interval) snakes through the lattice points without hitting any of them. That isn't surprising, given that $19$ is prime. The second shows the hyperbola $xy=45\cdot 56.$ Along the diagonal are the lattice points $(x,101-x).$ The diagonal just below consists of the closest lattice points, but the hyperbola stays above that closest diagonal. Hence there are no solutions of $u \cdot v=2520$ in that range other than the endpoints. The $a$ chosen for these is larger than needed, but it makes the plots easier to see. In the cases mentioned above, such as $25\cdot 36=30 \cdot 30$, the hyperbola is tangent to the lower diagonal and the contact point is a lattice point. It will suffice to end this sketch by saying, without justification, that for larger $k$ the surface $x_1x_2\cdots x_k=y_1y_2\cdots y_k$ lies below the hyperplane $x_1+x_2+\cdots +x_k=y_1+y_2+\cdots + y_k$, which is rich in lattice points. If the numbers are large enough, then that surface stays close enough to the hyperplane that it never touches the parallel hyperplane of nearest lattice points. It seems as if “large enough” decreases with $k$. A study of the known bad $a$ values might make that clear. 
Do any of the known counter-examples use more than $k=2$? The exact value of $a(n)$ in the OP depends on the distribution of fairly composite integers in certain intervals of length $n.$ That is not very predictable. However, I think the simplifications here might make the searches easier. The values reported so far seem close to the bound.
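The small cases discussed above are easy to brute-force. The following sketch (the function name and the range of $a$ checked are my own choices) confirms that $a=0$ and $a=1$ are bad for $n=5$, matching the two counter-examples $2\cdot 2\cdot 3\cdot 2\cdot 5 = 5!$ and $3\cdot 3\cdot 4\cdot 4\cdot 5 = 2\cdot 3\cdot 4\cdot 5\cdot 6$:

```python
from itertools import combinations_with_replacement
from math import prod

def is_bad(n, a):
    """a is 'bad' for n if some multiset (x_1..x_n) with 1 <= x_i <= n that is
    NOT a permutation of (1..n) satisfies prod(x_i + a) == prod(i + a)."""
    target = prod(i + a for i in range(1, n + 1))
    perm = tuple(range(1, n + 1))  # sorted, so it matches the generator's order
    for xs in combinations_with_replacement(range(1, n + 1), n):
        if xs != perm and prod(x + a for x in xs) == target:
            return True
    return False

print([a for a in range(4) if is_bad(5, a)])
```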
A measure of concordance between two random variables based on ranks. Kendall's tau is a measure of concordance for two random variables. It is based on ranks, and has many properties in common with Spearman's rho. We say that $(y_{i,1},y_{i,2})$ and $(y_{j,1},y_{j,2})$ are concordant if: $$(y_{i,1} - y_{j,1}) \times (y_{i,2} - y_{j,2}) >0 $$ and discordant if the product is $< 0$. For a given dataset of $n$ observations, let $c$ = the number of concordant pairs and $d$ = the number of discordant pairs. Then: $$\hat{\tau} = \frac{c-d}{n \choose 2}$$ If pairs are tied (i.e. $y_{i,1} = y_{j,1}$), then $\hat{\tau}$ cannot attain the bounds $-1$ and $+1$. There are different approaches to handling ties.
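A direct translation of the definition above (a sketch; the function name is arbitrary, and no tie correction is applied, matching the plain $\hat{\tau}$ above, so tied pairs count toward neither $c$ nor $d$):

```python
from itertools import combinations

def kendall_tau(pairs):
    """Kendall's tau-hat = (c - d) / C(n, 2) over all pairs of observations."""
    c = d = 0
    for (x1, y1), (x2, y2) in combinations(pairs, 2):
        s = (x1 - x2) * (y1 - y2)
        if s > 0:
            c += 1       # concordant
        elif s < 0:
            d += 1       # discordant (ties fall through, counted in neither)
    n = len(pairs)
    return (c - d) / (n * (n - 1) / 2)

print(kendall_tau([(1, 2), (2, 4), (3, 6)]))   # perfectly concordant: 1.0
print(kendall_tau([(1, 6), (2, 4), (3, 2)]))   # perfectly discordant: -1.0
```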
A simple question about the notation of Moore, Nekrasov and Shatashvili which confuses me. On page 3, the authors rearrange the action into a novel form. For $D=3+1,\ 5+1,\ 9+1$ respectively, the path integral is $$ I_D=\left(\frac{\pi}{g}\right)^{\frac{(N^2-1)(D-3)}{2}} \frac{1}{{\rm{Vol(G)}}}\int d^DX\,d^{\frac{D}{2}-1}\Psi\, e^{-S} $$ where $S=\frac{1}{g}\left(\frac{1}{4}\sum {\rm{Tr}}[X_\mu,X_\nu]^2+\frac{i}{2}\sum {\rm{Tr}}(\bar{\Psi}\Gamma^\mu[X_\mu,\Psi])\right)$. The authors rearrange the fields into the following form: $$ \phi=X_D+iX_{D-1} \\ B_j=X_{2j-1}+iX_{2j} ~~~~~~({\rm{for}} ~~~j=1,2,\dots,D/2-1)\\ \Psi \rightarrow \Psi_a=(\psi_j,\psi_j^{\dagger}), \vec{\chi },\eta$$ The $B_j$ are often denoted as $\mathbf{X}=\{X_a,\ a=1,\dots,D-2\}$. My question is: what does the rearrangement of the fermion field mean? For $D=4$, $a=2$, $j=1$, $\chi$ has one component. Before the rearrangement we have a Dirac spinor $\Psi$; what do we have after the rearrangement? Are $\chi$ and $\eta$ Weyl spinors? If we expand formula (2.4) using the nilpotent symmetry (2.3), why are there no terms like $\chi^{\dagger}[\bullet,\bullet]$? Edit: From the related paper of Kazakov, Kostov and Nekrasov, the rearrangement is clear, but there are some other puzzles. 
From KKN, first rewrite the matrix model in complex fermion form, which is formula (2.1) of KKN: $$\frac{1}{\rm{Vol(G)}}\int dX\,d\lambda\, \exp\left(\frac{1}{2}\sum_{\mu<\nu}{\rm{Tr}}[X_\mu,X_\nu]^2+{\rm{Tr}}\,\bar{\lambda}_{\dot{\alpha}}\sigma_\mu^{\alpha \dot{\alpha}}[X_\mu,\lambda_\alpha]\right)$$ The two complex fermions $\lambda$ can be written as four real fermions $\chi$, $\eta$, $\psi_\alpha$, $\alpha=1,2$: $$ \lambda_1=\frac{1}{2}(\eta-i\chi),\\\lambda_2=\frac{1}{2}(\psi_1+\psi_2),\\\bar{\lambda}_{\dot{a}} =\sigma_2^{a\dot{a}}\lambda_a^*,~~~s=[X_1,X_2]$$ Using the following nilpotent symmetry: $$ \delta X_\alpha=\psi_\alpha,~~~\delta\psi_\alpha=[\phi,X_\alpha] ,\\\delta\bar{\phi}=\eta,~~~\delta \eta = [\phi,\bar{\phi}],\\\delta\chi=H,~~~\delta H=[\phi,\chi],\\\delta \phi=0$$ KKN claim that the action can be written as formula (2.5): $$ S=\delta \left( -i\,{\rm{Tr}}\, \chi s -\frac{1}{2} {\rm{Tr}}\,\chi H - \sum_a\psi_a[X_a,\bar{\phi}] -\frac{1}{2} \eta[\phi,\bar{\phi}]\right) $$ I found that there is some inconsistency between formula (2.1) and formula (2.5). Look at the fermionic part of (2.5) containing $X_2$, which is proportional to $i\,{\rm{Tr}}\, \chi[\psi_1,X_2] + {\rm{Tr}}\,\psi_2[X_2,\eta]$. If we start from (2.1), the fermionic part containing $X_2$ should be proportional to $ \bar{\lambda}_{\dot{1}}\sigma_2^{2\dot{1}}[X_2,\lambda_2] +\bar{\lambda}_{\dot{2}}\sigma_2^{1\dot{2}}[X_2,\lambda_1] $. Further using the definition $\bar{\lambda}_{\dot{a}}=\sigma_2^{a\dot{a}}\lambda_a^*$, we always get terms of the form $\lambda_1^*[X_2,\lambda_1]$ and $\lambda_2^*[X_2,\lambda_2]$, which cannot contain $i\,{\rm{Tr}}\, \chi[\psi_1,X_2] + {\rm{Tr}}\,\psi_2[X_2,\eta]$. Is anything wrong here? Comments and references are welcome.
Let $L$ be a finite lattice with least element $\hat{0}$, greatest element $\hat{1}$, and Möbius function $\mu$. Question 1: What class of lattices does the following property characterize? $$\mu(\hat{0},a)=\mu(\hat{0},\hat{1})\mu(a,\hat{1}), \ \forall a \in L$$ It follows that $\mu(\hat{0},\hat{1}) = \pm1$. Remark: It is satisfied by any boolean lattice, more generally by the face lattice of any convex polytope (as suggested by John Shareshian), and more generally by any Eulerian lattice. Proof: An Eulerian lattice is a graded lattice $L$ such that for any $a,b \in L$ with $a \le b$, we have $\mu(a,b)= (-1)^{|b|-|a|}$, with $a \mapsto |a|$ the rank function. The result is immediate. $\square$ Question 2: Is there a non-Eulerian lattice with the above property on the Möbius function? Yes, see the answer of John Machacek. As suggested by Sam Hopkins: Question 3: Is there a non-Eulerian atomistic lattice with the above property on the Möbius function? No for $|L| \le 13$, as checked by the following Sage program (using these lists of Martin Malandro):

from itertools import product

def relationtest(L, n):
    for l in L:
        P = Poset((range(n), l))
        b = P.bottom()
        t = P.top()
        if all(P.moebius_function(b, x) == P.moebius_function(b, t)*P.moebius_function(x, t) for x in P):
            L = LatticePoset(P)
            if L.is_atomic():
                if not L.is_graded():
                    print(P.cover_relations())
                if L.is_graded():
                    for x, y in product(P, P):
                        if P.compare_elements(x, y) == -1:
                            if not P.moebius_function(x, y) == (-1)^(P.rank(y)-P.rank(x)):
                                print(P.cover_relations())
                                break

Are the small atomistic lattices listed somewhere?
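Independently of Sage, the boolean-lattice case of the Remark can be checked in plain Python, using $\mu(x,y) = (-1)^{|y|-|x|}$ for subsets $x \subseteq y$ (the choice $n=4$ is arbitrary):

```python
from itertools import chain, combinations

# Check mu(0,a) = mu(0,1) * mu(a,1) on the Boolean lattice B_n,
# where mu(x,y) = (-1)^(|y|-|x|) for x <= y.
def mu(x, y):
    return (-1) ** (len(y) - len(x))

n = 4
top = frozenset(range(n))
bot = frozenset()
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(range(n), k) for k in range(n + 1))]
print(all(mu(bot, a) == mu(bot, top) * mu(a, top) for a in subsets))  # True
```

The identity holds because $(-1)^{n} \cdot (-1)^{n-|a|} = (-1)^{|a|}$, which is exactly the Eulerian argument in the Remark.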
I am having trouble grasping the concept of vorticity. Above is a picture of a fluid element under (rotational) deformation. Say we ignore the vertical motion, i.e. $\frac {\partial {v}}{\partial x} = 0$, which can be considered the case in the boundary layer close to a flat plate. Assume the flow is incompressible and steady. Now the text says that the above motion rotates the fluid element. The problem I am facing is that after some time $\Delta \theta_1$ would grow larger and larger... What happens after a sufficiently long time? The element would converge (approximately) to a straight line, so all rotation would cease after some time. But we know that rotation keeps happening. My question is: if the above picture is the way it is done, then after a long time every fluid element would be a straight line, so how come rotation keeps happening in the fluid? Note: You may say that reaching that asymptotic line takes time $t \to \infty$. But then the rotation is slowing down. Yet consider this experiment: say we have laminar flow along a flat plate, and consider the region close to the plate (the boundary layer). If you place a vorticity meter in the boundary layer, you will see that the meter rotates with (almost) uniform angular speed. So somehow the fluid element retains its own angular speed. Where on earth am I wrong...? This is killing me!
When dealing with wave propagation problems such as electromagnetic waves passing from one medium to another we set up boundary conditions to ensure the field is continuous and the flux is continuous as the waves pass from the first medium into the other. Similarly in the case of acoustic waves passing from, say, a region of fluid into a region of gas. We set boundary conditions so that the field is continuous and the flux is continuous. For example, if we let $k_f$ and $k_g$ represent the wavenumbers in fluid and gas, respectively, we can model a transmission problem for a fluid-gas system with the Helmholtz equation, where the fluid occupies $\mathbb{R}^3\backslash \overline{\Omega}$ and the gas occupies $\Omega$, as follows: \begin{align} \Delta p + k_f^2p = 0 & \quad x \in \mathbb{R}^3\backslash \overline{\Omega}\\ \Delta p + k_g^2p = 0 & \quad x \in \Omega\\ p_+ = p_- & \quad x \in \partial \Omega \\ \frac{1}{\rho_f}\frac{\partial p}{\partial \nu}\bigg|_+ = \frac{1}{\rho_g}\frac{\partial p}{\partial \nu}\bigg|_- & \quad x \in \partial \Omega \end{align} with the scattered wave $p^s = p - p^{inc}$ obeying the Sommerfeld radiation condition as $|x| \to \infty$, and $\rho$ represents the density in the fluid/gas. Now what about the case where we have acoustic waves in a solid-fluid system? In this case we have two fundamentally different systems in each region, as while the fluid region is governed by the Helmholtz equation like above, in the solid the wave propagation is governed by linear elasticity, which involves longitudinal waves as in the fluid case, but also shear waves. So how can we model wave propagation in this fluid-solid case in which the governing equations are different in the fluid and solid regions? And in particular, how can we handle transmission boundary conditions?
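For comparison with the fluid-gas system above, here is a sketch of how the time-harmonic fluid-solid transmission problem is commonly posed (this is not from the question itself; sign conventions and the exact scaling of the kinematic condition vary across the literature), with the Navier equations of linear elasticity in $\Omega$ replacing the interior Helmholtz equation:

```latex
\begin{align}
\Delta p + k_f^2 p = 0 & \quad x \in \mathbb{R}^3\backslash \overline{\Omega}\\
\mu_s \Delta u + (\lambda_s + \mu_s)\nabla(\nabla \cdot u) + \rho_s \omega^2 u = 0 & \quad x \in \Omega\\
\sigma(u)\nu = -p\,\nu & \quad x \in \partial \Omega \\
\rho_f\, \omega^2\, u \cdot \nu = \frac{\partial p}{\partial \nu} & \quad x \in \partial \Omega
\end{align}
```

Here $u$ is the elastic displacement, $\lambda_s,\mu_s$ are the Lamé parameters, $\sigma(u) = \lambda_s (\nabla\cdot u) I + \mu_s(\nabla u + \nabla u^T)$ is the stress tensor, and the scattered pressure again satisfies the Sommerfeld radiation condition. The third line enforces continuity of traction (an inviscid fluid exerts only pressure, no shear, on the interface), and the fourth continuity of normal displacement; together they replace the two scalar transmission conditions of the fluid-gas problem, while shear waves are accounted for automatically by the Navier operator inside $\Omega$.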
Is there a sparse representation for stationary noise and nonstationary noise? How can I learn a dictionary for each noise class? (By noise I mean the noises with which speech signals are often contaminated, such as white Gaussian noise, car noise, babble noise, impulsive noise, ...) Let's think about it in a different way - generating noise from a dictionary. Let's create a dictionary $ A \in \mathbb{R}^{m \times n} $ where each of its columns (atoms) is normalized (has Euclidean norm of $ 1 $) and generated by a Gaussian random vector. Now, let's create $ N $ random vectors $ {\left\{ {r}_{i} \right\}}_{i = 1}^{N} $ by: $$ {r}_{i} = A {g}_{i} $$ where the $ {g}_{i} $ are random vectors $ {g}_{i} \in \mathbb{R}^{n} $ with only $ k \ll n $ non-zero elements (let's say they are pre-defined and deterministic, otherwise we'd have to deal with multiplication of random variables). Now, the output vectors are valid Gaussian random variables, and given enough of them and some properties of $ A $, you'd be able to build a dictionary with relation to which they are sparse. Yet it means the dictionary is random by itself, and the next time you get a new set of random data it won't be able to represent it correctly (it doesn't generalize). While the above is not rigorous (it is meant to develop some intuition, not an analytic analysis), it tells you why, when we deal with a finite amount of data, this can be done, but for noise in general it won't work. The question of the existence of a sparse basis of noise is closely related to the question of the effective dimensionality of the noise subspace. First, it is important to realise that noise is a process, and not a signal. You can think of a full characterisation of any kind of noise by means of a function $p: S \to \mathbb{R}^+_0 $ that maps a signal $s$ to a probability $p(s)$ of finding that signal as the realisation of the noise process. This description is fully general and more useful than it may appear at first sight. 
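The "generate noise from a dictionary" construction in the first answer can be sketched numerically as follows (the sizes $m, n, k, N$ are arbitrary illustration values; I normalize the columns of $A$, i.e. the atoms, which is the usual convention in sparse representation):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k, N = 32, 64, 4, 10

# Gaussian random dictionary with unit-norm atoms (columns).
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)

# r_i = A g_i with k-sparse coefficient vectors g_i (k << n).
R = np.zeros((m, N))
for i in range(N):
    g = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    g[support] = rng.standard_normal(k)
    R[:, i] = A @ g   # r_i is k-sparse w.r.t. A by construction

print(R.shape)
```

The point of the answer survives in the sketch: each $r_i$ is sparse with respect to this particular random $A$, but a fresh batch of noise drawn independently of $A$ would not be.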
The ensemble of all possible noise signals $s$ together with their respective probability $p(s)$ is a very unhandy object to deal with mathematically. Finding a more practical description that still contains relevant information is possible. Consider the orthogonal projector $P_s$ onto a signal $s$. This projector is a non-negative symmetric operator on the signal space. If we have a fairly and sufficiently densely sampled set of possible signals $\{s_i:i\in\mathbb{N}\}$ with their respective probabilities $p(s_i)$, then the weighted convex sum of the projectors onto these sample signals $$E:=\frac{\sum_i p(s_i) P_{s_i}}{\sum_i p(s_i)\cdot\mathrm{trace}\left(\sum_i P_{s_i}\right)}$$ is also a non-negative symmetric operator on the signal space. (Note that this ad-hoc construction becomes somewhat more involved if done rigorously, but that would require a bit of measure theory on the signal space.) The operator $E$ has some interesting properties. It's relatively straightforward to see that its trace is bounded by $0$ from below and $1$ from above. The upper bound is attained exactly if $p(s)\equiv \mathrm{const}$. The trace of $E$ has an interesting interpretation as the effective relative dimensionality of the noise subspace $S_N$: $$\mathrm{trace}\left(E\right)\approx\frac{\mathrm{dim\,S_N}}{\mathrm{dim}\,S}$$ What is more, an orthogonal basis of the noise subspace can be derived from $E$ in the form of its Eigenspectrum. If you sort the Eigenvectors of $E$ in order of their descending Eigenvalue and take the first $\mathrm{dim}\,S_N\approx\mathrm{ceil}\left(\mathrm{dim}\,S \cdot \mathrm{trace}(E) \right)$ vectors, you should have a basis that spans your noise subspace reasonably well. So, how do we find $E$ in practice? Let us assume you have a noise generator that produces a single output signal. It is reasonable to assume that for such a noise generator, each received output signal is approximately equally likely. 
Therefore, generating an ensemble of noise signals with this generator samples the noise subspace evenly and with constant probability. The rest of the signal space has a probability of 0 and is not sampled. However, we do need a set of signals that sample the entire signal space to apply the definition of $E$ from above. Let us call the samples of the signal space that we do not take $\{ \bar{s}_j \}$ and the constant probability of those $s_i$ that we do take just $p$. Then the definition becomes $$E=\frac{\sum_i p\,P_{s_i}+\sum_j 0\,P_{\bar{s}_j}}{\left(\sum_i p + \sum_j 0\right)\cdot\mathrm{trace}\left(\sum_i P_{s_i}+\sum_j P_{\bar{s}_j}\right)}$$ which simplifies to $$E=\frac{\sum_i P_{s_i}}{\mathrm{trace}\left(\sum_i P_{s_i} \right) + \mathrm{trace}\left(\sum_j P_{\bar{s}_j} \right)}$$ We are stuck here, because we don't know $\mathrm{trace}\left(\sum_j P_{\bar{s}_j} \right)$, and estimating it would require the dimensionality of the noise subspace, which we are trying to find. However, we can look at the operator $$E'=\frac{\sum_i P_{s_i}}{\mathrm{trace}\left(\sum_i P_{s_i} \right)}$$ which has the same Eigenstructure as $E$ up to scaling. Because of the assumptions we have made, the Eigenvalues of $E'$ associated with the Eigenvectors outside of the noise subspace are all 0. Because the trace of $E'$ equals $1$, the Eigenvalues associated with the Eigenvectors inside of the noise subspace approximately equal $\frac{1}{\mathrm{dim}\,S_N}$. So doing an Eigendecomposition of $E'$ will result in an orthogonal basis of the noise subspace and give a natural separation between the noise subspace and its orthogonal complement. Numerically constructing $E'$ requires a bit of memory and summing of the projectors onto the generated signals. The following Eigendecomposition can be computationally challenging, but should be doable for reasonably sized signal vectors.
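A small numerical sketch of this construction (the subspace dimension, signal length and sample count are made-up test values): generate noise confined to a known $k$-dimensional subspace, build $E'$ as the normalized sum of projectors, and recover $k$ from the eigenvalue spectrum — the eigenvalues inside the noise subspace cluster near $1/k$ while the rest are zero.

```python
import numpy as np

rng = np.random.default_rng(2)
m, k, N = 20, 3, 500

# Orthonormal basis of the "true" noise subspace (ground truth for the test).
B, _ = np.linalg.qr(rng.standard_normal((m, k)))

Esum = np.zeros((m, m))
for _ in range(N):
    s = B @ rng.standard_normal(k)        # one noise realisation
    Esum += np.outer(s, s) / (s @ s)      # projector P_s onto the signal s
E_prime = Esum / np.trace(Esum)           # E' = sum_i P_{s_i} / trace(...)

w = np.linalg.eigvalsh(E_prime)           # eigenvalues (ascending)
est_dim = int(np.sum(w > 0.5 / k))        # ~1/k inside the subspace, ~0 outside
print(est_dim)  # 3
```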
Last edited: April 20th 2016 The objective of this module is to use the method of forward shooting to determine numerically the eigenenergies of the quantum harmonic oscillator in one dimension. In other words, we are looking for eigenvalues of the time-independent Schrödinger equation$$ \qquad\left[-\frac{\hbar^2}{2m}\frac{\rm{d}^2}{\rm{d} x^2}+V(x)\right] \psi(x) = E \psi(x) $$ with$$ \qquad V=\frac{1}{2}m \omega^2 x^2. $$ We will focus mainly on the three lowest eigenenergies. This will demonstrate how "an eigenvalue has just the right value" to solve the time-independent Schrödinger equation. Moreover, this example shows that the forward-shooting method also works for potentials $V(x)$ for which we do not have closed-form solutions. First, we write the Schrödinger equation in the following form$$ \qquad\psi''(x) = -\frac{2m}{\hbar^2}\left[E-\frac{1}{2}m \omega^2 x^2 \right]\psi(x), $$ and for simplicity, we set the values $m=\omega=\hbar=1$ which yields the equation$$ \qquad\psi''(x) = \left[ x^2-2E\right] \psi(x). $$ We will now compute the eigenenergies of this problem by using the forward shooting method, knowing that the wave function must tend to zero as $x\to\infty$. First, we divide the interval we are interested in, $[a,b]$, into $n$ equal intervals with length$$ \qquad \Delta x = \frac{b-a}{n} $$ which defines the coordinates $x_i = a + i\Delta x$. Hence, it is natural to write the function values of $\psi(x)$, $\psi'(x)$ and $\psi''(x)$ at a point $x = x_i$ as $\psi_i$, $\psi'_i$ and $\psi''_i$, respectively. Next, we approximate $\psi''(x)$ at a point $x_i$ by using the second-order central difference method which yields$$ \qquad \psi_i'' = \frac{\psi_{i+1}+\psi_{i-1} - 2\psi_{i}}{(\Delta x)^2}. $$ Using the simplified Schrödinger equation,$$ \qquad \psi_i'' = \left[x_i^2-2E\right]\psi_i, $$ we obtain$$ \qquad \frac{\psi_{i+1}+\psi_{i-1} - 2\psi_{i}}{(\Delta x)^2} = \left[x_i^2-2E\right]\psi_i. 
$$ By rearranging the previous equation, we find an equation for the function value $\psi_{i+1}$ based on the values of the previous two points$$ \qquad \psi_{i+1} = -\psi_{i-1} + \left[2 + (\Delta x)^2 \left(x_i^2-2E\right)\right] \psi_i. $$ By using this formula, we can calculate iteratively all function values as long as we have two initial values to start with. In this problem, we consider a potential which is symmetric about $x=0$. Therefore, the initial values $\psi_0$ and $\psi_0'$ will be given at $x=0$ and we approximate the next function value $\psi_1$ simply by$$ \qquad \psi_1 = \psi_0 + \Delta x \cdot \psi_0'. $$ This then allows us to calculate $\psi_i$ for higher values of $i$ using the formula above. We know that for a symmetric potential, the ground state, second excited state, fourth excited state etc. are symmetric, while the first excited state, third excited state etc. are anti-symmetric. Since the harmonic oscillator potential is symmetric about $x=0$, it allows us to use the method in only one direction, focusing on $x>0$ and starting with the initial values at $x = 0$, namely $\psi_0$ and $\psi_0'$. Since the ground state should be symmetric about $x=0$, we have to have $\psi_0' = 0$. We also have to choose a value for $\psi_0$, e.g. $\psi_0=1$. The particular choice of the value of $\psi_0$ will not affect the energy eigenvalue because its actual value will be determined by normalization of the wave function. Using these two initial values, we can now compute the ground state energy as follows. First, we choose an upper and a lower bound for the estimated ground state energy. This is where uncertainty arises since we do not know a priori where the eigenenergies lie. 
Subsequently, we keep shooting forward to $x_i=b$, starting at $x_0=0$, with an estimated eigenenergy E0 that is determined iteratively by the bisection method: if the value of the wave function grows to infinity as $x\to\infty$, we need to raise the value of E0; if the value of the wave function goes to negative infinity, we need to lower E0. We truncate the bisection method when a desired accuracy acc is achieved and plot the analytical solution versus the numerical solution of the wave function. The agreement is quite impressive.

```python
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

# Set common figure parameters:
newparams = {'axes.labelsize': 14, 'axes.linewidth': 1, 'savefig.dpi': 300,
             'lines.linewidth': 1.0, 'figure.figsize': (8, 3),
             'figure.subplot.wspace': 0.4, 'ytick.labelsize': 10,
             'xtick.labelsize': 10, 'ytick.major.pad': 5, 'xtick.major.pad': 5,
             'legend.fontsize': 10, 'legend.frameon': False,
             'legend.handlelength': 1.5}
plt.rcParams.update(newparams)

n = 1000            # number of points per unit on the x-axis (an integer, since it is used as an array size)
p = 10              # the number of units included on the x-axis
dx = 1.0/n          # step length
f0 = np.zeros(p*n)  # array for function values of f0
f0[0] = 1.0         # initial value for f0 at x = 0
df0_0 = 0.0         # initial value for df0/dx at x = 0
x = np.linspace(0, p, p*n, True)
acc = 1e-15         # accuracy of eigenenergy
e1 = 0.0            # lower bound, must be positive since the potential is positive for all x
e2 = 4.0            # upper bound
E0 = e1
deltaE0 = 1
while deltaE0 > acc:
    for i, x_ in enumerate(x[0:-1]):
        if i == 0:
            f0[i+1] = f0[i] + dx*df0_0
        else:
            f0[i+1] = -f0[i-1] + f0[i]*(2 - dx**2*(2*E0 - x_**2))
        # Implementation of bisection method. If the function value is out of bounds,
        # a new value for the energy is chosen. When the difference between the upper
        # and lower bound for the energy is smaller than the given accuracy,
        # the while loop stops, yielding a result for E0.
        if f0[i] > 5:
            e1 = E0
            E0 = e1 + (e2 - e1)/2
            break
        elif f0[i] < -5:
            e2 = E0
            E0 = e2 - (e2 - e1)/2
            break
    deltaE0 = e2 - e1
print(r'The energy E0 is: %s' % E0)

# Plot:
p1, = plt.plot(x, np.exp(-x**2/2), 'g:')  # analytical eigenfunction
p2, = plt.plot(x, f0, 'b')                # computed eigenfunction
p3, = plt.plot(x, 0.5*x**2, 'r')          # potential
plt.plot(-x, np.exp(-x**2/2), 'g:', -x, f0, 'b', -x, 0.5*x**2, 'r')  # same as above for negative x-values
plt.legend([p1, p2, p3], ['Analytical wavefunction', 'Calculated wavefunction', r'Potential $V(x)$'])
plt.xlabel(r'$x$')
plt.ylabel(r'$\psi(x)$')
plt.ylim([-0.5, 3])
plt.xlim([-6, 6])
plt.title('Wavefunction for Ground State of Harmonic Oscillator');
```

The energy E0 is: 0.500332214879553

For the first excited state, we have to choose different initial values. Since this is an anti-symmetric function, we must have $\psi_0=0$, while $\psi_0'$ can be chosen freely this time, e.g. $\psi_0'=1$. Again, this is only a matter of normalization. Since we are looking for the first excited state, we know that $E_1>E_0$, which means we can choose $E_0$ as our lower bound. Choosing an appropriate upper bound can be the tricky part since it should be lower than the eigenenergy of the second excited state. In any event, the agreement between the numerical and the analytical solution is again impressive.
```python
f1 = np.zeros(p*n)
f1[0] = 0.0
df1_0 = 1.0
e1 = E0
e2 = 2.0
E1 = e1
deltaE1 = 1
while deltaE1 > acc:
    for i, x_ in enumerate(x[0:-1]):
        if i == 0:
            f1[i+1] = f1[i] + dx*df1_0
        else:
            f1[i+1] = -f1[i-1] + f1[i]*(2 - dx**2*(2*E1 - x_**2))
        if f1[i] > 5:
            e1 = E1
            E1 = e1 + (e2 - e1)/2
            break
        elif f1[i] < -5:
            e2 = E1
            E1 = e2 - (e2 - e1)/2
            break
    deltaE1 = e2 - e1
print(r'The energy E1 is: %s' % E1)

p1, = plt.plot(x, x*np.exp(-x**2/2), 'g:')
p2, = plt.plot(x, f1, 'b')
p3, = plt.plot(x, 0.5*x**2, 'r')
plt.plot(-x, -x*np.exp(-x**2/2), 'g:', -x, -f1, 'b', -x, 0.5*x**2, 'r')
plt.legend([p1, p2, p3], ['Analytical wavefunction', 'Calculated wavefunction', r'Potential $V(x)$'])
plt.xlabel(r'$x$')
plt.ylabel(r'$\psi(x)$')
plt.ylim([-1, 1.5])
plt.xlim([-6, 6])
plt.title('Wavefunction for First Excited State');
```

The energy E1 is: 1.5001498587221596

For the second excited state, we know that $E_2 > E_1$, and that the wavefunction is symmetric. Hence, we can use the same initial conditions as for the ground state. Again, choosing an appropriate upper limit is critical.

```python
f2 = np.zeros(p*n)
f2[0] = 1.0
df2_0 = 0.0
e1 = E1
e2 = 3.0
E2 = e1
deltaE2 = 1
while deltaE2 > acc:
    for i, x_ in enumerate(x[0:-1]):
        if i == 0:
            f2[i+1] = f2[i] + dx*df2_0
        else:
            f2[i+1] = -f2[i-1] + f2[i]*(2 - dx**2*(2*E2 - x_**2))
        # Notice that the two conditions below are swapped compared to the ones above!
        # The reason is that the wavefunction now has roots for x != 0 and approaches
        # the x-axis from below.
        if f2[i] < -5:
            e1 = E2
            E2 = e1 + (e2 - e1)/2
            break
        elif f2[i] > 5:
            e2 = E2
            E2 = e2 - (e2 - e1)/2
            break
    deltaE2 = e2 - e1
print(r'The energy E2 is: %s' % E2)

p1, = plt.plot(x, -(2*x**2 - 1)*np.exp(-x**2/2), 'g:')
p2, = plt.plot(x, f2, 'b')
p3, = plt.plot(x, 0.5*x**2, 'r')
plt.plot(-x, -(2*x**2 - 1)*np.exp(-x**2/2), 'g:', -x, f2, 'b', -x, 0.5*x**2, 'r')
plt.legend([p1, p2, p3], ['Analytical wavefunction', 'Calculated wavefunction', r'Potential $V(x)$'])
plt.xlabel(r'$x$')
plt.ylabel(r'$\psi(x)$')
plt.ylim([-1.5, 2])
plt.xlim([-6, 6])
plt.title('Wavefunction for Second Excited State');
```

The energy E2 is: 2.5009550642267624

From the analytical solution of the harmonic oscillator potential, we know that the eigenenergies are $$ \qquad E_n^a = \hbar \omega \left(n + \frac{1}{2}\right).$$ With $\hbar = \omega = 1$, we find $E_n^a = n + 1/2$, which is in excellent agreement with the results obtained above! We also see from the plots of the eigenfunctions that they are in excellent agreement with the analytical eigenfunctions. This is all the more impressive since we used a simple numerical scheme; the small remaining deviations are mostly due to numerical errors.
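The three nearly identical shooting loops above can be folded into a single helper. The sketch below is a refactor of the notebook's logic, not part of the original: the function name, the sign argument and the divergence bound of 5 are choices made here, with sign = -1 handling the swapped divergence conditions of the second excited state.

```python
# Hypothetical refactor of the three shooting loops into one function.
# sign = +1 when the diverging tail runs off to +infinity for too-low energies,
# sign = -1 when it runs off to -infinity (as for the second excited state).
import numpy as np

def find_eigenenergy(e_lo, e_hi, psi0, dpsi0, sign=1,
                     n=1000, p=10, acc=1e-15, bound=5.0):
    """Bisect on the trial energy until the bracketing interval is
    narrower than acc; returns the eigenenergy estimate."""
    dx = 1.0 / n
    x = np.linspace(0, p, p*n, True)
    f = np.zeros(p*n)
    E = e_lo
    while e_hi - e_lo > acc:
        f[0] = psi0
        for i, x_ in enumerate(x[:-1]):
            if i == 0:
                f[1] = f[0] + dx*dpsi0
            else:
                f[i+1] = -f[i-1] + f[i]*(2 - dx**2*(2*E - x_**2))
            if sign*f[i] > bound:      # diverging "upwards": trial energy too low
                e_lo = E
                break
            elif sign*f[i] < -bound:   # diverging "downwards": trial energy too high
                e_hi = E
                break
        E = e_lo + (e_hi - e_lo)/2
    return E

E0 = find_eigenenergy(0.0, 4.0, psi0=1.0, dpsi0=0.0, sign=+1)
E1 = find_eigenenergy(E0, 2.0, psi0=0.0, dpsi0=1.0, sign=+1)
E2 = find_eigenenergy(E1, 3.0, psi0=1.0, dpsi0=0.0, sign=-1)
print(E0, E1, E2)  # close to 0.5, 1.5 and 2.5
```

The three calls reproduce the three cases in the text: symmetric start for the even states, anti-symmetric start for the odd one, each using the previously found eigenenergy as its lower bound.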
Using Gamma-ray Counts to Calculate Element Concentration

Latest revision as of 10:50, 9 July 2012. Written and developed by Prof. Tor Bjørnstad (IFE/UiO). Return to Main

In principle, it is possible to perform quantitative NAA by using the general formula for activation, where

w = mass of unknown element in sample
$R_x$ = measured count rate of unknown element (cps)
M = atomic weight of unknown element
$\lambda_x$ = decay constant of the measured radionuclide of the unknown element $= \ln 2/T_{1/2}$ (s$^{-1}$), where $T_{1/2}$ is the half-life of this radionuclide
$t_d$ = decay time = time between irradiation end and midpoint of counting period (s)
$\varepsilon$ = counting efficiency of the detected gamma energy
$\sigma$ = thermal neutron reaction cross section of the unknown element (barn = $10^{-24}$ cm$^2$)
$\varphi$ = neutron flux in irradiation position (n s$^{-1}$ cm$^{-2}$)
$N_A$ = Avogadro’s number ($6.022 \times 10^{23}$)
$I_A$ = natural occurrence of the target isotope in the actual element (as fraction in the range 0–1)
$t_i$ = irradiation time (s)

However, in practice, the parameters $\sigma$ and $\varphi$ are not easily determined exactly. Therefore, a simpler method is to carry out so-called comparative analysis, where the activity in the sample is compared to the activity in a simultaneously irradiated standard of the same element. If the unknown sample and the comparator standard are both measured on the same detector, then one needs to correct for the difference in decay between the two. One usually decay-corrects the measured counts (or activity) for both samples back to the end of irradiation using the half-life of the measured isotope. The following simple expression results:

$$\frac{w_{p}}{w_{s}}=\frac{R_{xp} \cdot e^{\lambda_{x} \cdot t_{dx}}}{R_{xs} \cdot e^{\lambda_{x} \cdot t_{ds}}}$$
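To make the comparator relation above concrete, here is a small worked sketch in Python. Every number below (half-life, count rates, decay times, standard mass) is hypothetical, chosen only to illustrate the decay correction; the subscripts p and s denote the unknown sample and the comparator standard.

```python
# Hypothetical worked example of the comparator relation:
# w_p / w_s = (R_xp * exp(lambda_x * t_dp)) / (R_xs * exp(lambda_x * t_ds)).
import math

T_half = 15.0 * 3600.0        # assumed half-life of the radionuclide: 15 h, in seconds
lam = math.log(2) / T_half    # decay constant lambda_x = ln2 / T_1/2 (s^-1)

R_xp, t_dp = 1200.0, 2.0*3600.0  # sample: count rate (cps) and decay time (s)
R_xs, t_ds = 3000.0, 1.0*3600.0  # standard: count rate (cps) and decay time (s)
w_s = 0.010                      # grams of the element in the comparator standard

# Decay-correct both count rates back to the end of irradiation, then take the ratio:
w_p = w_s * (R_xp * math.exp(lam * t_dp)) / (R_xs * math.exp(lam * t_ds))
print(w_p)  # about 0.0042 g
```

Note how the cross section and neutron flux cancel out of the ratio entirely, which is the whole point of the comparative method.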
Abstract

For an arbitrary fixed element $\beta$ in $\{1; 2; 3; ...; \omega\}$, both a sequent calculus and a natural deduction calculus which axiomatise the simple paracomplete logic $I_{2;\beta}$ are built. Additionally, a valuation semantics which is adequate to logic $I_{2;\beta}$ is constructed. For an arbitrary fixed element $\gamma$ in $\{1; 2; 3; ...\}$, a cortege semantics which is adequate to logic $I_{2;\gamma}$ is described. A number of results obtainable with the axiomatisations and semantics in question are formulated.

How to Cite: Popov V., Shangin V.O. Syntax and semantics of simple paracomplete logics // Logicheskie Issledovaniya / Logical Investigations. 2013. Vol. 19. P. 325–333.

Issue Section: Articles (Статьи)

References

Bolotov, A., Grigoryev, O., and Shangin, V., Automated Natural Deduction for Propositional Linear-time Temporal Logic, Proceedings of the 14th International Symposium on Temporal Representation and Reasoning (Time2007), Alicante, Spain, June 28–30, 2007, pp. 47–58.
Bolotov, A.E., Shangin, V., Natural Deduction System in Paraconsistent Setting: Proof Search for PCont, Journal of Intelligent Systems, 21(1):1–24, 2012.
Gentzen, G., Investigations into logical deductions, Mathematical theory of logical deduction, Nauka Publishers, M., 1967, pp. 9–74 (in Russian).
Kleene, S.C., Introduction to Metamathematics, Ishi Press International, 1952.
Popov, V.M., On the logic related to A. Arruda’s system V1, Logic and Logical Philosophy, 7:87–90, 1999.
Popov, V.M., Intervals of simple paralogics, Proceedings of the V conference ‘Smirnov Readings in Logic’, June 20–22, 2007, M., 2007, pp. 35–37 (in Russian).
Popov, V.M., Two sequences of simple paraconsistent logics, Logical investigations, Vol. 14, M., 2007, pp. 257–261 (in Russian).
Popov, V.M., Two sequences of simple paracomplete logics, Logic today: theory, history and applications. The proceedings of X Russian conference, June 26–28, 2008, St. Petersburg, SPbU Publishers, 2008, pp. 304–306 (in Russian).
Popov, V.M., Semantical characterization of paracomplete logics $I_{2,1}$, $I_{2,2}$, $I_{2,3}$, ..., Logic, methodology: actual problems and perspectives. The proceedings of conference, Rostov-on-Don, UFU Publishers, 2010, pp. 114–116 (in Russian).
Popov, V.M., Sequential characterization of simple paralogics, Logical investigations, 16:205–220, 2010 (in Russian).
Vasiliev, N.A., Imaginary (non-Aristotelian) logic, Vasiliev N.A. Imaginary logic. Selected works, M., Nauka Publishers, 1989, pp. 53–94 (in Russian).
In Python, objects can declare their textual representation using the __repr__ method. IPython expands on this idea and allows objects to declare other, rich representations (HTML, images, LaTeX, and more). A single object can declare some or all of these representations; all are handled by IPython's display system. This Notebook shows how you can use this display system to incorporate a broad range of content into your Notebooks.

The display function is a general purpose tool for displaying different representations of objects. It is imported with:

```python
from IPython.display import display
```

Calling display on an object will send its rich representations to the frontend. If you want to display a particular representation, there are specific functions for that:

```python
from IPython.display import (display_pretty, display_html, display_jpeg,
                             display_png, display_json, display_latex,
                             display_svg)
```

To work with images (JPEG, PNG) use the Image class.

```python
from IPython.display import Image
i = Image(filename='../images/ipython_logo.png')
```

Returning an Image object from an expression will automatically display it:

```python
i
```

Or you can pass an object with a rich representation to display:

```python
display(i)
```

An image can also be displayed from raw data or a URL.

```python
Image(url='http://python.org/images/python-logo.gif')
```

SVG images are also supported out of the box.

```python
from IPython.display import SVG
SVG(filename='../images/python_logo.svg')
```

By default, image data is embedded in the notebook document so that the images can be viewed offline. However, it is also possible to tell the Image class to only store a link to the image. Let's see how this works using a webcam at Berkeley.
```python
from IPython.display import Image

img_url = 'http://www.lawrencehallofscience.org/static/scienceview/scienceview.berkeley.edu/html/view/view_assets/images/newview.jpg'

# by default Image data are embedded
Embed = Image(img_url)

# if kwarg `url` is given, the embedding is assumed to be false
SoftLinked = Image(url=img_url)

# In each case, embed can be specified explicitly with the `embed` kwarg
# ForceEmbed = Image(url=img_url, embed=True)
```

Here is the embedded version. Note that this image was pulled from the webcam when this code cell was originally run and stored in the Notebook. Unless we rerun this cell, this is not today's image.

```python
Embed
```

Here is today's image from the same webcam at Berkeley (refreshed every few minutes, if you reload the notebook), visible only with an active internet connection; it should be different from the previous one. Notebooks saved with this kind of image will be smaller and always reflect the current version of the source, but the image won't display offline.

```python
SoftLinked
```

Of course, if you re-run this Notebook, the two images will be the same again.

Python objects can declare HTML representations that will be displayed in the Notebook. If you have some HTML you want to display, simply use the HTML class.

```python
from IPython.display import HTML

s = """<table>
<tr><th>Header 1</th><th>Header 2</th></tr>
<tr><td>row 1, cell 1</td><td>row 1, cell 2</td></tr>
<tr><td>row 2, cell 1</td><td>row 2, cell 2</td></tr>
</table>"""

h = HTML(s)
display(h)
```

You can also use the %%html cell magic to accomplish the same thing.

```html
%%html
<table>
<tr><th>Header 1</th><th>Header 2</th></tr>
<tr><td>row 1, cell 1</td><td>row 1, cell 2</td></tr>
<tr><td>row 2, cell 1</td><td>row 2, cell 2</td></tr>
</table>
```

The Notebook also enables objects to declare a JavaScript representation.
At first, this may seem odd as output is inherently visual and JavaScript is a programming language. However, this opens the door for rich output that leverages the full power of JavaScript and associated libraries such as d3.js for output.

```python
from IPython.display import Javascript
```

Pass a string of JavaScript source code to the JavaScript object and then display it.

```python
js = Javascript('alert("hi")')
display(js)
```

The same thing can be accomplished using the %%javascript cell magic:

```javascript
%%javascript
alert("hi");
```

Here is a more complicated example that loads d3.js from a CDN, uses the %%html magic to load CSS styles onto the page and then runs one of the d3.js examples.

```python
Javascript(
    """$.getScript('//cdnjs.cloudflare.com/ajax/libs/d3/3.2.2/d3.v3.min.js')""")
```

```html
%%html
<style type="text/css">
circle {
  fill: rgb(31, 119, 180);
  fill-opacity: .25;
  stroke: rgb(31, 119, 180);
  stroke-width: 1px;
}
.leaf circle {
  fill: #ff7f0e;
  fill-opacity: 1;
}
text {
  font: 10px sans-serif;
}
</style>
```

```javascript
%%javascript
// element is the jQuery element we will append to
var e = element.get(0);

var diameter = 600,
    format = d3.format(",d");

var pack = d3.layout.pack()
    .size([diameter - 4, diameter - 4])
    .value(function(d) { return d.size; });

var svg = d3.select(e).append("svg")
    .attr("width", diameter)
    .attr("height", diameter)
  .append("g")
    .attr("transform", "translate(2,2)");

d3.json("data/flare.json", function(error, root) {
  var node = svg.datum(root).selectAll(".node")
      .data(pack.nodes)
    .enter().append("g")
      .attr("class", function(d) { return d.children ? "node" : "leaf node"; })
      .attr("transform", function(d) { return "translate(" + d.x + "," + d.y + ")"; });

  node.append("title")
      .text(function(d) { return d.name + (d.children ? "" : ": " + format(d.size)); });

  node.append("circle")
      .attr("r", function(d) { return d.r; });

  node.filter(function(d) { return !d.children; }).append("text")
      .attr("dy", ".3em")
      .style("text-anchor", "middle")
      .text(function(d) { return d.name.substring(0, d.r / 3); });
});

d3.select(self.frameElement).style("height", diameter + "px");
```

The IPython display system also has builtin support for the display of mathematical expressions typeset in LaTeX, which is rendered in the browser using MathJax. You can pass raw LaTeX text as a string to the Math object:

```python
from IPython.display import Math
Math(r'F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx')
```

With the Latex class, you have to include the delimiters yourself. This allows you to use other LaTeX modes such as eqnarray:

```python
from IPython.display import Latex
Latex(r"""\begin{eqnarray}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{eqnarray}""")
```

Or you can enter LaTeX directly with the %%latex cell magic:

```latex
%%latex
\begin{align}
\nabla \times \vec{\mathbf{B}} -\, \frac1c\, \frac{\partial\vec{\mathbf{E}}}{\partial t} & = \frac{4\pi}{c}\vec{\mathbf{j}} \\
\nabla \cdot \vec{\mathbf{E}} & = 4 \pi \rho \\
\nabla \times \vec{\mathbf{E}}\, +\, \frac1c\, \frac{\partial\vec{\mathbf{B}}}{\partial t} & = \vec{\mathbf{0}} \\
\nabla \cdot \vec{\mathbf{B}} & = 0
\end{align}
```

IPython makes it easy to work with sounds interactively. The Audio display class allows you to create an audio control that is embedded in the Notebook. The interface is analogous to the interface of the Image display class. All audio formats supported by the browser can be used. Note that no single format is presently supported in all browsers.
```python
from IPython.display import Audio
Audio(url="http://www.nch.com.au/acm/8k16bitpcm.wav")
```

A NumPy array can be auralized automatically. The Audio class normalizes and encodes the data and embeds the resulting audio in the Notebook. For instance, when two sine waves with almost the same frequency are superimposed, a phenomenon known as beats occurs. This can be auralized as follows:

```python
import numpy as np

f1 = 220.0
f2 = 224.0
rate = 8000
L = 3  # duration in seconds
times = np.linspace(0, L, rate*L)
signal = np.sin(2*np.pi*f1*times) + np.sin(2*np.pi*f2*times)
Audio(data=signal, rate=rate)
```

More exotic objects can also be displayed, as long as their representation supports the IPython display protocol. For example, videos hosted externally on YouTube are easy to load:

```python
from IPython.display import YouTubeVideo
YouTubeVideo('sjfsUzECqK0')
```

Using the nascent video capabilities of modern browsers, you may also be able to display local videos. At the moment this doesn't work very well in all browsers, so it may or may not work for you; we will continue testing this and looking for ways to make it more robust. The following cell loads a local file called animation.m4v, encodes the raw video as base64 for http transport, and uses the HTML5 video tag to load it. On Chrome 15 it works correctly, displaying a control bar at the bottom with a play/pause button and a location slider.

```python
from IPython.display import HTML
from base64 import b64encode

with open("../images/animation.m4v", "rb") as f:
    video = f.read()
video_encoded = b64encode(video).decode('ascii')
video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)
HTML(data=video_tag)
```

You can even embed an entire page from another site in an iframe; for example this is today's Wikipedia page for mobile users:

```python
from IPython.display import IFrame
IFrame('https://jupyter.org', width='100%', height=350)
```

IPython provides builtin display classes for generating links to local files.
Create a link to a single file using the FileLink object:

```python
from IPython.display import FileLink, FileLinks
FileLink('Cell Magics.ipynb')
```

Alternatively, to generate links to all of the files in a directory, use the FileLinks object, passing '.' to indicate that we want links generated for the current working directory. Note that if there were other directories under the current directory, FileLinks would work in a recursive manner, creating links to files in all sub-directories as well.

```python
FileLinks('.')
```

The IPython Notebook allows arbitrary code execution in both the IPython kernel and in the browser, through HTML and JavaScript output. More importantly, because IPython has a JavaScript API for running code in the browser, HTML and JavaScript output can actually trigger code to be run in the kernel. This poses a significant security risk as it would allow IPython Notebooks to execute arbitrary code on your computers. To protect against these risks, the IPython Notebook has a security model that specifies how dangerous output is handled. A full description of the IPython security model can be found on this page.

Much of the power of the Notebook is that it enables users to share notebooks with each other using http://nbviewer.ipython.org, without installing IPython locally. As of IPython 2.0, notebooks rendered on nbviewer will display all output, including HTML and JavaScript. Furthermore, to provide a consistent JavaScript environment on the live Notebook and nbviewer, a set of JavaScript libraries is loaded onto the nbviewer page before the notebook and its output is displayed. Libraries such as mpld3 use these capabilities to generate interactive visualizations that work on nbviewer.
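The display protocol that all of the classes above rely on can be implemented by any Python class. As a minimal sketch (the ColorSwatch class is invented here for illustration; it is not part of IPython), defining a _repr_html_ method is enough for display() and bare cell expressions to render HTML instead of the plain repr:

```python
# Hypothetical example class that opts in to rich display via _repr_html_.
class ColorSwatch:
    def __init__(self, css_color):
        self.css_color = css_color

    def __repr__(self):
        # plain-text fallback, used by terminals and str()
        return 'ColorSwatch({!r})'.format(self.css_color)

    def _repr_html_(self):
        # rich representation, picked up by IPython's display system
        return ('<div style="width:60px;height:20px;background:{};'
                'display:inline-block"></div>'.format(self.css_color))

swatch = ColorSwatch('#ff7f0e')
swatch  # in a notebook cell, this renders as a small orange rectangle
```

The same pattern works for the other representation methods (e.g. _repr_png_, _repr_latex_), and an object may define several of them at once.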
Ex.5.3 Q13 Arithmetic Progressions Solution - NCERT Maths Class 10 Question Find the sum of first \(15\) multiples of \(8.\) Text Solution What is Known? Multiples of \(8\) What is Unknown? Sum of first \(15\) multiples of \(8,\) \({S_{15}}\) Reasoning: Sum of the first \(n\) terms of an AP is given by \({S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\) Where \(a\) is the first term, \(d\) is the common difference and \(n\) is the number of terms. Steps: The multiples of \(8\) are \(8, 16, 24, 32, \dots\) These are in an A.P., Hence, First term, \(a = 8\) Common difference, \(d = 8\) Number of terms, \(n = 15\) As we know that Sum of \(n\) terms, \[\begin{align}{S_n} &= \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\\{S_{15}} &= \frac{{15}}{2}\left[ {2 \times 8 + \left( {15 - 1} \right)8} \right]\\ &= \frac{{15}}{2}\left[ {16 + 14 \times 8} \right]\\ &= \frac{{15}}{2}\left[ {16 + 112} \right]\\& = \frac{{15}}{2} \times 128\\ &= 15 \times 64\\& = 960\end{align}\]
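The arithmetic above is easy to sanity-check in a couple of lines of Python, both via the closed-form AP sum and by direct summation:

```python
# Check S_15 for the AP of multiples of 8: a = 8, d = 8, n = 15.
a, d, n = 8, 8, 15
S_n = n * (2*a + (n - 1)*d) // 2             # closed-form sum of an AP
direct = sum(8*k for k in range(1, n + 1))   # 8 + 16 + ... + 120
print(S_n, direct)  # 960 960
```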
Our new book (NAT) Nonabelian algebraic topology: filtered spaces, crossed complexes, cubical homotopy groupoids, EMS Tracts in Mathematics vol 15 uses mainly cubical, rather than simplicial, sets. The reasons are explained in the Introduction: in strict cubical higher categories we can easily express algebraic inverse to subdivision, a simple intuition which I have found difficult to express in simplicial terms. Thus cubes are useful for local-to-global problems. This intuition is crucial for our Higher Homotopy Seifert-van Kampen Theorem, which enables new calculations of some homotopy types, and suggests a new foundation for algebraic topology at the border between homotopy and homology. A further reason for the connections is that they enabled an equivalence between crossed modules and certain double groupoids, and later, crossed complexes and strict cubical $\omega$-groupoids. Also cubes have a nice tensor product and this is crucial in the book for obtaining some homotopy classification results. See Chapter 15. I have found that with cubes I have been able to conjecture and in the end prove theorems which have enabled new nonabelian calculations in homotopy theory, e.g. of second relative homotopy groups. So I have been happy to use cubes until someone comes up with something better. ($n$-simplicial methods, in conjunction with cubical ideas, turned out, however, to be necessary for proofs in the work with J.-L. Loday.) See also some beamer presentations available on my preprint page. Here is a further emphasis on the above point on algebraic structures: consider the following diagram: From left to right pictures subdivision; from right to left pictures composition. The composition idea is well formulated in terms of double categories, and that idea is easily generalised to $n$-fold categories, and is expressed well in a cubical context. 
In that context one can conjecture, and eventually prove, higher dimensional Seifert-van Kampen Theorems, which allow new calculations in algebraic topology. Such multiple compositions are difficult to handle in globular or simplicial terms. The further advantage of cubes, as mentioned in the answers above, is that the formula $$I^m \times I^n \cong I^{m+n}$$ makes cubes very helpful in considering monoidal and monoidal closed structures. Most of the major results of the EMS book required cubical methods for their conjecture and proof. The main results of Chapter 15 of NAT have not been done simplicially. See for example Theorem 15.6.1, on a convenient dense subcategory closed under tensor product. Sept 5, 2015: The paper by Vezzani, arXiv:1405.4548, shows a use of cubical, rather than simplicial, methods in motivic theory; while the paper by I. Patchkoria, arXiv:1011.4870, Homology Homotopy Appl., Volume 14, Number 1 (2012), 133–158, gives a "Comparison of Cubical and Simplicial Derived Functors". In all these cases the use of connections in cubical methods is crucial. There is more discussion of this on MathOverflow. For us, connections arose in order to define commutative cubes in higher cubical categories: compare this paper. See also this 2014 presentation, The intuition for cubical methods in algebraic topology. April 13, 2016. I should add some further information from Alberto Vezzani: The cubical theory was better suited than the simplicial theory when dealing with (motives of) perfectoid spaces in characteristic 0. For example: degeneracy maps of the simplicial complex $\Delta$ in algebraic geometry are defined by sending one coordinate $x_i$ to the sum of two coordinates $y_j+y_{j+1}$. When one considers the perfectoid algebras obtained by taking all $p$-th roots of the coordinates, such maps are no longer defined, as $y_j+y_{j+1}$ doesn't have $p$-th roots in general. The cubical complex, on the contrary, is easily generalized to the perfectoid world.
November 29, 2016 There is more information in this paper on Modelling and Computing Homotopy Types: I which can serve as an introduction to the NAT book.
A Spearman rank correlation is a number between -1 and +1 that indicates to what extent 2 variables are monotonously related.

Spearman Correlation - Example
Spearman Rank Correlation - Basic Properties
Spearman Rank Correlation - Assumptions
Spearman Correlation - Formulas and Calculation
Spearman Rank Correlation - Software

Spearman Correlation - Example

A sample of 1,000 companies was asked about their number of employees and their revenue over 2018. To make these questions easier to answer, respondents were offered answer categories. After completing the data collection, the contingency table below shows the results. The question we'd like to answer is: is company size related to revenue? A good look at our contingency table shows the obvious: companies having more employees typically make more revenue. But note that this relation is not perfect: there are 60 companies with 1 employee making $50,000 - $99,999, while there are 89 companies with 2-5 employees making $0 - $49,999. This relation becomes clear if we visualize our results in the chart below. The chart shows an indisputable positive monotonous relation between size and revenue: larger companies tend to make more revenue than smaller companies.

Next question: how strong is the relation? The first option that comes to mind is computing the Pearson correlation between company size and revenue. However, that's not going to work because we don't have company size or revenue in our data; we only have size and revenue categories. Company size and revenue are ordinal variables in our data: we know that 2-5 employees is larger than 1 employee, but we don't know how much larger. So which numbers can we use to calculate how strongly ordinal variables are related? Well, we can assign ranks to our categories as shown below. As a last step, we simply compute the Pearson correlation between the size and revenue ranks. This results in a Spearman rank correlation (Rs) = 0.81.
This tells us that our variables are strongly monotonously related. But in contrast to a normal Pearson correlation, we do not know if the relation is linear to any extent.

Spearman Rank Correlation - Basic Properties

As we just saw, a Spearman correlation is simply a Pearson correlation computed on ranks instead of data values or categories. This results in the following basic properties:

Spearman correlations are always between -1 and +1;
Spearman correlations are suitable for all but nominal variables. However, when both variables are either metric or dichotomous, Pearson correlations are usually the better choice;
Spearman correlations indicate monotonous -rather than linear- relations;
Spearman correlations are hardly affected by outliers. However, outliers should be excluded from analyses rather than used to determine whether Spearman or Pearson correlations are preferable;
Spearman correlations serve the exact same purposes as Kendall’s tau.

Spearman Rank Correlation - Assumptions

The Spearman correlation itself only assumes that both variables are at least ordinal variables. This excludes only nominal variables. The statistical significance test for a Spearman correlation assumes independent observations or -more precisely- independent and identically distributed variables.

Spearman Correlation - Example II

A company needs to determine the expiration date for milk. They therefore take a tiny drop each hour and analyze the number of bacteria it contains. The results are shown below. For bacteria versus time, the Pearson correlation is 0.58 but the Spearman correlation is 1.00. There is a perfect monotonous relation between time and bacteria: with each hour passed, the number of bacteria grows. However, the relation is very non-linear, as shown by the Pearson correlation. This example nicely illustrates the difference between these correlations. However, I'd argue against reporting a Spearman correlation here.
Instead, model this curvilinear relation with a (probably exponential) function. This'll probably predict the number of bacteria with pinpoint precision. Spearman Correlation - Formulas and Calculation First off, an example calculation, exact significance levels and critical values are given in this Googlesheet (shown below). Right. Now, computing Spearman’s rank correlation always starts off with replacing scores by their ranks (use mean ranks for ties). Spearman’s correlation is now computed as the Pearson correlation over the (mean) ranks. Alternatively, compute Spearman correlations with $$R_s = 1 - \frac{6\cdot \Sigma \;D^2}{n^3 - n}$$ where \(D\) denotes the difference between the 2 ranks for each observation. For reasonable sample sizes of N ≥ 30, the (approximate) statistical significance uses the t distribution. In this case, the test statistic $$T = \frac{R_s \cdot \sqrt{N - 2}}{\sqrt{1 - R^2_s}}$$ follows a t-distribution with $$Df = N - 2$$ degrees of freedom. This approximation is inaccurate for smaller sample sizes of N < 30. In this case, look up the (exact) significance level from the table given in this Googlesheet. These exact p-values are based on a permutation test that we may discuss some other time. Or not. Spearman Rank Correlation - Software Spearman correlations can be computed in Googlesheets or Excel but statistical software is a much easier option. JASP -which is freely downloadable- comes up with the correct Spearman correlation and its significance level as shown below. SPSS also comes up with the correct correlation. However, its significance level is based on the t-distribution: $$t = \frac{0.77\cdot\sqrt{4}}{\sqrt{(1 - 0.77^2)}} = 2.42$$ and $$t(4) = 2.42,\;p = 0.072 $$ Again, this approximation is only accurate for larger sample sizes of N ≥ 30. For N = 6, it is wildly off as shown below. Thanks for reading.
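The D² formula and the "Pearson over ranks" description above are easy to verify against each other in a few lines of Python. The data below are made up (echoing the bacteria-versus-time idea of Example II) and contain no ties, so simple argsort-based ranking suffices:

```python
# Sketch: Spearman's Rs via the D^2 formula versus Pearson-on-ranks.
# Made-up, strictly increasing data, so there are no ties to worry about.
import numpy as np

time_hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
bacteria = np.array([10.0, 12.0, 15.0, 25.0, 70.0, 300.0])

def ranks(a):
    # rank 1 for the smallest value, n for the largest (no ties assumed)
    return np.argsort(np.argsort(a)) + 1.0

rx, ry = ranks(time_hours), ranks(bacteria)

# Rs = 1 - 6*sum(D^2)/(n^3 - n), with D the per-observation rank difference
D = rx - ry
n = len(D)
rs_formula = 1.0 - 6.0*np.sum(D**2)/(n**3 - n)

# Equivalently: the Pearson correlation computed over the ranks
rs_pearson = np.corrcoef(rx, ry)[0, 1]

print(rs_formula, rs_pearson)  # both 1.0: a perfect monotonous relation
```

With real data containing ties you would assign mean ranks instead (e.g. scipy.stats.rankdata), in which case only the Pearson-on-ranks computation remains exact.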
I am trying to understand the LSM algorithm applied to grayscale image segmentation. There are essentially 2 things that are blocking me: 1) From my point of view, the level sets - i.e. ''moving'' the 3D surface (that is, conceptually visualizing the grayscale image (say 8 bits/pixel) as a surface whose height at each point (pixel $x,y$) is the intensity value of that pixel) through a fixed plane, or the other way around - are exactly the same as performing a thresholding from 0 to 255 (i.e. each "level set" would be all the pixels at threshold value 0, 1, 2, ..., 255)? 2) Let $f(x,y)$ be the image function (associating an intensity between 0 and 255 to each point $(x,y)$). The LSM is defined as $$\Gamma = \{(x,y) | \phi(x,y)=0\}$$ However, it does not make sense to me (from my point of view, which is probably wrong, hence my question) that $$\phi(x,y)=0$$ as the surface has its own height at each point, and it would make more sense to me to consider that the plane intersecting the surface is moving from bottom to top, i.e. I can ''accept'' better the formulation given for the level sets: $$\Gamma = \{(x,y) | \phi(x,y)=c\}$$ and for the example I use, $$c \in [0,255] $$ Question 2 is more like a ''conceptual'' problem, but question 1 is more fundamental, since I cannot seem to see any difference from a simple thresholding. Rephrased: Question 1) Can we consider (loosely speaking, as indeed the li is a set of points, not a matrix) that each level curve $$l_i$$ of a level set of an image I is (in programmatic terms, Matlab syntax): li = (I==i)? (where i is the height; in grayscale 8 bits, i is a number between 1 and 255, and as Matlab uses matrices everywhere, I is an $n \times m$ matrix and the operation == checks for every pixel in I whether it is equal to i, and the result is a binary image of 0s and 1s, i.e.
this is called thresholding; hence my question: is the LSM the same as thresholding with the operator == (the single '=' being the assignment operator, of course)?) Question 2) Is it essentially (conceptually) the same to imagine moving the shape through a fixed plane, or moving a plane through a fixed 3D shape? What I was (incorrectly) trying to say is that we can visualize the level curves $$l_i$$ as the white circles on the thresholded image (although not perfectly, since it is discretized and there may be rounding errors/noise etc., but suppose they are continuous), and indeed I do understand that the result of thresholding is a matrix, so strictly speaking not equal to a level curve. But "loosely" speaking, for visualizing the level curves, do you agree?
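For what it's worth, the `I == i` idea from Question 1 can be played out in a few lines of Python (a sketch assuming $\phi(x,y) = f(x,y) - c$; the toy image and names are made up for illustration):

```python
# Toy 8-bit "image" as a nested list; I[y][x] is the intensity ("height").
I = [[  0,  10,  10,   0],
     [ 10, 200, 200,  10],
     [ 10, 200, 255,  10],
     [  0,  10,  10,   0]]

c = 200  # the "height" of the cutting plane

# Level curve as a point set: phi(x, y) = f(x, y) - c = 0
level_set = {(x, y) for y, row in enumerate(I)
                    for x, v in enumerate(row) if v == c}

# The same thing via an equality mask (MATLAB: li = (I == i))
mask = [[1 if v == c else 0 for v in row] for row in I]

print(sorted(level_set))   # [(1, 1), (1, 2), (2, 1)]
```

One distinction worth keeping in mind: thresholding in the usual sense keeps the super-level set $\{(x,y) : f(x,y) \geq c\}$, while the equality mask `I == c` picks out a single level, i.e. the discrete analogue of the level curve itself, which is the boundary of that super-level set.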
Your task is to take an array of numbers and a real number and return the value at that point in the array. Arrays start at \$\pi\$ and are counted in \$\pi\$ intervals. Thing is, we're actually going to interpolate between elements given the "index". As an example: Index: 1π 2π 3π 4π 5π 6π Array: [ 1.1, 1.3, 6.9, 4.2, 1.3, 3.7 ] Because it's \$\pi\$, we have to do the obligatory trigonometry, so we'll be using cosine interpolation using the following formula: \${\cos(i \mod \pi) + 1 \over 2} * (\alpha - \beta) + \beta\$ where: \$i\$ is the input "index" \$\alpha\$ is the value of the element immediately before the "index" \$\beta\$ is the value of the element immediately after the "index" \$\cos\$ takes its angle in radians Example Given [1.3, 3.7, 6.9], 5.3: Index 5.3 is between \$1\pi\$ and \$2\pi\$, so 1.3 will be used for before and 3.7 will be used for after. Putting it into the formula, we get: \${\cos(5.3 \mod \pi) + 1 \over 2} * (1.3 - 3.7) + 3.7\$ Which comes out to 3.165 Notes Input and output may be in any convenient format You may assume the input number is greater than \$\pi\$ and less than array length × \$\pi\$ You may assume the input array will be at least 2 elements long. Your result must have at least two decimal places of precision, be accurate to within 0.05, and support numbers up to 100 for this precision/accuracy. (single-precision floats are more than sufficient to meet this requirement) Happy Golfing!
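A straightforward (ungolfed) Python reference implementation, for checking answers against the worked example:

```python
from math import cos, pi

def pi_interp(arr, i):
    """Cosine-interpolate between array elements 'indexed' at pi intervals.

    Element k (0-based) sits at index (k + 1) * pi; i is assumed to satisfy
    pi < i < len(arr) * pi, as the challenge guarantees.
    """
    k = int(i / pi)                    # element before: arr[k-1], after: arr[k]
    before, after = arr[k - 1], arr[k]
    return (cos(i % pi) + 1) / 2 * (before - after) + after

print(round(pi_interp([1.3, 3.7, 6.9], 5.3), 3))   # 3.165
```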
setting constant of integration \chi for initial conditions Hello, I have a question about how [tex]\chi[/tex] is determined in CAMB. I know that it is set to [tex]-1[/tex], but see below. \begin{equation} \label{1} \mathcal{R} = \pm (\Delta_{\mathcal{R}})^{1/2} = \pm \sqrt{A_s} \end{equation} at Planck's pivot scale [tex]k_{\star} = 0.05 ~\mathrm{Mpc}^{-1}[/tex], and In the synchronous gauge, using the (+ - - -) signature, the comoving curvature perturbation is \begin{equation} \label{2} \mathcal{R} = \eta + \frac{\mathcal{H} v}{ k} \end{equation} where [tex]v \equiv \theta/k[/tex] using the notation of Ma and Bertschinger ({\tt arXiv:astro-ph/9506072}). For [tex]k<<\mathcal{H}[/tex] in the radiation epoch, \begin{equation} \label{3} \eta= 2C - \frac{5+4 R_{\nu}}{6(15+4R_{\nu})} C (k \tau)^2, \end{equation} and \begin{equation} \label{4} v_{rad} \equiv (1-R_{\nu}) v_{\gamma}+ R_{\nu} v_{\nu} = - \frac{C}{18} (k \tau)^3 \biggl(1-R_{\nu}+R_{\nu} \frac{23+4R_{\nu}}{15+4R_{\nu}}\biggr). \end{equation} It follows from Eqs. (\ref{1}) and (\ref{2}) that, for values of [tex]\tau[/tex] early enough during radiation domination such that [tex]k=k_{\star}[/tex] is super-horizon, \begin{equation} \label{5} C \approx \mp 2 \cdot 10^{-5} \end{equation} for [tex]\pm \sqrt{A_s}[/tex] evaluated at [tex]k=k_{\star}[/tex]. I used [tex]R_{\nu}=\rho_{\nu}/(\rho_{\gamma}+\rho_{\nu})[/tex], [tex]\rho_{\nu}/\rho_{\gamma}=(7 N_{\nu}/8)(4/11)^{4/3}[/tex], [tex]N_{\nu}=3.046[/tex], and [tex]\ln(10^{10} A_s)= 3.064[/tex], from Planck 2015. Comparing equations for initial conditions in CAMB notes, we see that [tex]C = \chi/2[/tex]. However, in CAMB, [tex]\chi[/tex] is set to [tex]-1[/tex]. Am I doing something wrong here? Why this discrepancy? I know that using [tex]\chi=-1[/tex] in CAMB gives a CMB angular power spectrum that agrees with Planck's 2015 results, and using [tex]\chi=2C[/tex] gives an angular power spectrum with amplitudes that are too small.
And [tex]A_s[/tex] is obtained from the CMB, so it makes sense to me that [tex]\chi[/tex] should be constrained observationally. Thank you for any help.
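The quoted estimate C ≈ ∓2·10^-5 can be reproduced in a couple of lines (a sketch, not from the thread: it uses the super-horizon limit R → 2C of Eqs. (2)-(4) and the quoted Planck value of A_s):

```python
from math import exp, sqrt

# Planck 2015: ln(10^10 * A_s) = 3.064  =>  A_s ≈ 2.14e-9
A_s = exp(3.064) * 1e-10

# Super-horizon limit (k*tau -> 0) of the expressions above:
#   eta   = 2C + O((k*tau)^2)
#   H*v/k = O((k*tau)^2)   since v ~ (k*tau)^3 and H = 1/tau in radiation domination
# so R -> 2C, and hence |C| = sqrt(A_s) / 2.
C = sqrt(A_s) / 2
print(C)   # ≈ 2.3e-5, i.e. the quoted "C ≈ ∓2e-5"
```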
The [tex]\eta[/tex] of the CAMB notes, e.g. in Eq 43, is not the synchronous gauge quantity, which is [tex]\eta_{sync} = -\eta/2[/tex] (see Sec 1.A). Maybe that is the confusion? Sorry if my last post was a bit confusing. The [tex]\eta[/tex] in my post is the [tex]\eta_s[/tex] from the synchronous gauge. And I'm using Equation A6 from astro-ph/0212248 for my expression for the comoving curvature perturbation [tex]\mathcal{R}[/tex] (or [tex]\chi[/tex] as CAMB uses), accounting for the relation between the [tex]\eta[/tex] and [tex]\eta_s[/tex]. (Sorry, my comment about [tex]C=\chi/2[/tex] was wrong. What CAMB does is set [tex]C=-1/2[/tex], or [tex]\chi=-1[/tex], for flat space. Bertschinger and Ma in astro-ph/9506072 set [tex]C=-1/6[/tex] for their plots.) I guess my question is more of a conceptual one: Why is the comoving curvature parameter [tex]\chi=-1[/tex] for super-horizon modes as an initial condition? In principle, it seems to me that specifying the initial conditions from the relation [tex]\chi= \pm \sqrt{A_s}[/tex] (where [tex]A_s[/tex] is the primordial scalar power spectrum amplitude) when the pivot scale is super-horizon should be correct and consistent with initial conditions that lead to the correct angular power spectrum for the CMB. But according to CAMB (I've tested this), [tex]\chi=\pm 1[/tex] outputs the correct CMB angular spectrum, but [tex]\chi= \pm \sqrt{A_s} \approx \pm 10^{-5}[/tex] does not.
CAMB evolves transfer functions, which are nicely normalized to fixed unit amplitude. The actual power spectrum goes in later when calculating the C_\ell. Oh, I see. Okay, thanks for the help!
The Warsaw circle $W$ http://en.wikipedia.org/wiki/Continuum_%28topology%29 is a counterexample for quite a number of too naive statements. The Warsaw circle can be defined as the subspace of the plane $R^2$ consisting of the graph of $y = \sin(1/x)$, for $x\in(0,1]$, the segment $[−1,1]$ in the $y$ axis, and an arc connecting $(1,\sin(1))$ and $(0,0)$ (which is otherwise disjoint from the graph and the segment). Some observations: $W$ is weakly contractible (because a map from a locally path connected space cannot ''go over the bad point''). Let $I$ denote the segment $[−1,1]$ in the $y$ axis. Then $W/I\cong S^1$ is just the usual circle, and thus we have a natural projection map $g:W \to S^1$. The point-preimages of $g$ are either points or, for a single point on $S^1$, a closed interval. Thus the assumptions of the Vietoris-Begle mapping theorem hold for $g$, proving that $g$ induces an isomorphism in Čech cohomology. Thus the Čech cohomology of $W$ is that of $S^1$, but $W$ has the singular homology of a point (by Hurewicz, since it is weakly contractible). Since $I\to W$ is an embedding of compact Hausdorff spaces, we have an induced long exact sequence in (reduced) topological $K$-theory (see, for example, Atiyah's $K$-theory, Proposition 2.4.4). Since $I$ is contractible, we get that $W$ and $S^1$ also have the same topological $K$-theory. Note that the Warsaw circle is a compact metrizable space, being a bounded closed subspace of $R^2$. By looking at points of $I$ one sees that $W$ is not locally path-connected (and, in particular, not locally contractible). The above observations imply: A map with contractible point-inverses need not be a weak homotopy equivalence, even if both source and target are compact metric spaces. Assuming that the base and the preimages are finite CW complexes does not help. The Vietoris-Begle theorem is false for singular cohomology (in particular, the Wikipedia version of that theorem is not quite correct).
The embedding $I\to W$ cannot be a cofibration in any model structure on $Top$ where the weak equivalences are the weak homotopy equivalences and the interval $I$ is cofibrant, because then we would have a cofiber sequence $I\to W\to S^1$ and thus also a long exact sequence in singular cohomology. $W$ does not have the homotopy type of a CW complex (since it is weakly contractible but not contractible). Even though the map $g$ is trivial on fundamental groups, it does not lift to the universal cover $p: \mathbb{R} \to S^1$, because a lift would make $g$ nullhomotopic, which it is not. Thus the assumption of local path-connectivity in the lifting theorem is necessary.
The speed of sound is defined as $c^2 = \left.\frac{\partial p}{\partial \rho}\right|_s$ (the derivative taken at constant entropy), which for an ideal gas becomes $c^2 = \gamma \frac{p}{\rho}$. For a real gas, the relationship to an ideal gas can be found through the compressibility factor $z$, a measure of how much a real gas deviates from an ideal gas. It shows up in the equations as: $$ P = z \rho R T$$ You can work through the math on it (or follow along on this page), but essentially for a real gas, the speed of sound uses the compressibility factor $z$ and a factor $n$: $$ n = \gamma \frac{z + T \partial z/\partial T \rvert_p}{z + T \partial z/\partial T\rvert_\rho} $$ such that: $$c^2 = zn \frac{p}{\rho}$$ which can be related to the ideal gas speed of sound as: $$c^2_{real} = z c^2_{ideal} \frac{z + T \partial z/\partial T \rvert_p}{z + T \partial z/\partial T\rvert_\rho} $$ For gases and conditions where intermolecular forces are unimportant, $z = 1$ and $\partial z/\partial T \approx 0$, and the ideal speed of sound is correct. This is true for the majority of gases at ambient conditions. At very high pressures and/or low temperatures, real gas effects become important.
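As a quick numeric sketch of these formulas (the air values below are assumed for illustration, not taken from the text above):

```python
from math import sqrt

def c_ideal(gamma, p, rho):
    """Ideal-gas speed of sound: c^2 = gamma * p / rho."""
    return sqrt(gamma * p / rho)

def c_real(gamma, p, rho, z, dzdT_p, dzdT_rho, T):
    """Real-gas speed of sound via c^2 = z * n * p / rho."""
    n = gamma * (z + T * dzdT_p) / (z + T * dzdT_rho)
    return sqrt(z * n * p / rho)

# Air at roughly ambient conditions (illustrative values).
gamma, p, T = 1.4, 101325.0, 288.15
R = 287.05                 # J/(kg K), specific gas constant of air
rho = p / (R * T)          # ideal-gas density, ~1.225 kg/m^3

print(c_ideal(gamma, p, rho))                        # ≈ 340 m/s
# With z = 1 and negligible dz/dT the real-gas value reduces to the ideal one:
print(c_real(gamma, p, rho, 1.0, 0.0, 0.0, T))       # same ≈ 340 m/s
```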
Newform invariants
Coefficients of the \(q\)-expansion are expressed in terms of a basis \(1,\beta_1,\beta_2\) for the coefficient ring described below. We also show the integral \(q\)-expansion of the trace form.
Basis of coefficient ring in terms of \(\nu = \zeta_{18} + \zeta_{18}^{-1}\):
\(\beta_0 = 1\)
\(\beta_1 = \nu\)
\(\beta_2 = \nu^2 - 2\)
and conversely:
\(1 = \beta_0\)
\(\nu = \beta_1\)
\(\nu^2 = \beta_2 + 2\)
Character Values
We give the values of \(\chi\) on generators for \(\left(\mathbb{Z}/3311\mathbb{Z}\right)^\times\):
\(n\): \(904\), \(1893\), \(2927\); \(\chi(n)\): \(-1\), \(-1\), \(-1\)
For each embedding \(\iota_m\) of the coefficient field, the values \(\iota_m(a_n)\) are shown below. For more information on an embedded modular form you can click on its label. This newform can be constructed as the intersection of the kernels of the following linear operators acting on \(S_{1}^{\mathrm{new}}(3311, [\chi])\):
\(T_2^3 - 3T_2 - 1\)
\(T_3^3 - 3T_3 - 1\)
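The basis data can be sanity-checked numerically: \(\nu = \zeta_{18} + \zeta_{18}^{-1} = 2\cos(\pi/9)\) is a root of \(x^3 - 3x - 1\), the same polynomial that annihilates \(T_2\) and \(T_3\) above. A short Python check:

```python
import cmath

# nu = zeta_18 + zeta_18^{-1} = 2*cos(pi/9) ≈ 1.8794
zeta = cmath.exp(2j * cmath.pi / 18)
nu = (zeta + 1 / zeta).real

# nu satisfies x^3 - 3x - 1 = 0 (up to floating-point error)
print(abs(nu ** 3 - 3 * nu - 1))    # ~ 0

# Basis relations: beta_2 = nu^2 - 2, so nu^2 = beta_2 + 2
beta2 = nu ** 2 - 2
print(abs(nu ** 2 - (beta2 + 2)))   # 0
```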
ExB in BICEP2 Anze Slosar Posts: 183 Joined: September 24 2004 Affiliation: Brookhaven National Laboratory I am having arguments with some colleagues of mine about the following: To me, the fact that ExB and TxB are (mostly) consistent with zero is a good argument in favour of BICEP2 seeing primordial fluctuations rather than foregrounds. One would expect that a generic foreground would push roughly the same amount of power into all cross-correlations. However, some people insist that ExB=0 by construction because of this paragraph: Paper, Section 8: wrote: Once differential ellipticity has been corrected we notice that an excess of TxB and ExB power remains at ℓ > 200 versus the ΛCDM expectation. The spectral form of this power is consistent with an overall rotation of the polarization angle of the experiment. While the detector-to-detector relative angles have been measured to differ from the design values by < 0.2◦ we currently do not have an accurate external measurement of the overall polarization angle. We therefore apply a rotation of ∼ 1◦ to the final Q/U maps to minimize the TB and EB power (Keating et al. 2013; Kaufman et al. 2013). We emphasize that this has a negligible effect on the BB bandpowers at ℓ < 200. Ok, they have one degree of freedom and they can minimize some power. Say a "loop" produces a uniform polarization contribution; that would be a k=0 mode and you could get rid of it that way. But it is one mode and you definitely cannot kill all correlations with just one d.o.f. So, is ExB=TxB=0 a good argument in favour of "It cannot all be foregrounds."? Antony Lewis Posts: 1501 Joined: September 23 2004 Affiliation: University of Sussex I think I would agree that if they've just changed a constant rotation angle by a degree, that will not significantly change any foreground argument at low L - as they say, it doesn't have a big effect at L~100 anyway, where most of their interesting signal is.
However I think having TB and EB consistent with zero is a pretty weak check - T must be dominated by primordial CMB temperature, so the sensitivity to a small foreground contamination in T must be small compared to cosmic variance from the primary signal and their B noise (might be more interesting if they correlated their B with high- and low-frequency T from Planck in the same patch of sky with CMB projected out - did they try that?). Likewise even for E: If we assume as an extreme case that observed E is [tex]E_o = E + f[/tex], and observed [tex]B_o=f[/tex], so [tex]\langle E_o B_o\rangle= \langle ff\rangle= C_f[/tex], the number of sigmas at which you could expect to see the EB in the null hypothesis of zero B is something like [tex] \sigma \sim \sqrt{\frac{n C_f^2}{C_E N}} [/tex] where [tex]n[/tex] is the number of modes observed and [tex]N[/tex] the (lensing+) noise power. If O(f)~O(E)/6 is the case you are trying to check against (where all the excess B is foregrounds), then [tex]C_f[/tex] ~ [tex]C_E/36[/tex], so for [tex]N[/tex] ~ [tex]C_E/100[/tex] you'd need at least 50 modes to be able to tell at [tex]\sigma > 2[/tex]. I don't think they say exactly how many modes they have, but for L~100 with 2% of the sky, it's not much bigger than 50 (and certainly not per L bin), and this was a very extreme back-of-the-envelope toy case (if I did it right). Anze Slosar Posts: 183 Joined: September 24 2004 Affiliation: Brookhaven National Laboratory I see, I didn't appreciate just how much more power there is in EE. BB at ell=100 is something like 0.015+-0.003, while the EB point there is around 0.01+-0.01. Averaged over all bins, you might actually get a competitive error, but then there is this 2-sigma point at l=50. Thanks.
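Plugging the toy numbers of this estimate into the formula (everything in units of C_E; this is just the back-of-the-envelope case above, not a real analysis):

```python
from math import sqrt

# Toy numbers from the estimate above, in units of C_E:
C_E = 1.0
C_f = C_E / 36      # foreground EB power if O(f) ~ O(E)/6
N = C_E / 100       # (lensing +) noise power

def sigmas(n_modes):
    """sigma ~ sqrt(n * C_f^2 / (C_E * N))"""
    return sqrt(n_modes * C_f ** 2 / (C_E * N))

# Number of modes needed to see EB at 2 sigma under the null hypothesis:
n_needed = (2 / (C_f / sqrt(C_E * N))) ** 2
print(n_needed)       # ≈ 51.8, i.e. "at least 50 modes"
print(sigmas(52))     # just above 2
```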
Here is Artin's proof: First we need to prove a lemma. Lemma: Let $\gamma$ be a path in an open set $U$. Then there exists a rectangular path $\eta$ with the same end points, and such that $\gamma,\eta$ are close together in $U$. In particular, $\gamma$ and $\eta$ are homologous in $U$, and for any holomorphic function $f$ on $U$ we have $$ \int_{\gamma}f=\int_{\eta}f. $$ Definition: We say that $\gamma,\eta$ are close together if there exists a partition $$ a = a_0\leq a_1\leq\cdots\leq a_n=b, $$ and for each $i=0,\ldots, n-1$ there exists a disc $D_i$ contained in $U$ such that the images of each segment $[a_i,a_{i+1}]$ under the two paths are contained in $D_i$. Proof of Lemma: Suppose $\gamma$ is defined on the interval $[a,b]$. Partition the interval as $a = a_0\leq a_1\leq\cdots\leq a_n=b$ such that the image $\gamma[a_i,a_{i+1}]$ is contained in a disc $D_i$ on which $f$ has a primitive (see image). Thus $\gamma,\eta$ are homologous, since they can be deformed into each other, so $\int_{\gamma}f=\int_{\eta}f$. The lemma reduces the proof of Cauchy's theorem to the case where $\gamma$ is a rectangular closed chain. Let $\gamma_i:[a_i,a_{i+1}]\to U$ be the restriction of $\gamma$ to the smaller interval. Then the chain $\gamma_1+\cdots+\gamma_n$ is a subdivision of $\gamma$. If $\eta_i$ is obtained from $\gamma_i$ by another parametrization, we have the chain $\eta_1+\cdots+\eta_n$, which is also a subdivision of $\gamma$; we regard the chains $\gamma$ and $\eta$ as not differing from each other. If $\gamma = \sum m_i\gamma_i$ is a chain and $\{\eta_{ij}\}$ is a subdivision of the $\gamma_i$, we call$$\sum_i\sum_jm_i\eta_{ij}$$a subdivision of $\gamma$. Next we need to prove the following theorem. Theorem: Let $\gamma$ be a rectangular closed chain in $U$, and assume that $\gamma$ is homologous to $0$ in $U$; that is, $W(\gamma,\alpha)=0$ for every point $\alpha$ not in $U$.
Then there exist rectangles $R_1,\ldots,R_n$ contained in $U$, such that if $\partial R_i$ is the boundary of $R_i$ oriented counterclockwise, then a subdivision of $\gamma$ is equal to $$ \sum_{i=1}^nm_i\partial R_i $$ for some integers $m_i$. Here $W(\gamma,\alpha)$ denotes the winding number. Proof of Theorem: Given the rectangular chain $\gamma$, we draw all vertical and horizontal lines passing through the sides of the chain. The lines decompose the plane into rectangles, and rectangular regions extending to infinity in the vertical and horizontal directions. Let $R_i$ be one of the rectangles, and let $\alpha_i$ be a point inside $R_i$. Let $W(\gamma,\alpha_i)=m_i$. For some rectangles we have $m_i=0$, and for some, we have $m_i\neq 0$. Let $R_1,\ldots, R_N$ be the rectangles with $m_i\neq 0$. Let $\partial R_i$ be the boundaries of these rectangles for $i=1,\ldots,N$, oriented counterclockwise. Every rectangle $R_i$ such that $m_i\neq 0$ is contained in $U$. Some subdivision of $\gamma$ is equal to $\sum_{i=1}^Nm_i\partial R_i$. Assertion $1$. By assumption, $\alpha_i$ must be in $U$, because $W(\gamma,\alpha)=0$ for all $\alpha$ outside of $U$. The winding number is constant on connected sets, so it is constant on the interior of $R_i$, and not equal to zero there. If a boundary point of $R_i$ lies on $\gamma$, then it is in $U$. If it does not lie on $\gamma$, then the winding number about it is defined and equal to $m_i\neq 0$, so it lies in $U$ as well. Thus $R_i\subset U$. Assertion $2$. Replace $\gamma$ by an appropriate subdivision. We can find a subdivision $\eta$ such that every curve occurring in $\eta$ is some side of a rectangle or the finite side of an infinite rectangular region. The subdivision $\eta$ is the sum of the sides taken with appropriate multiplicity. If a finite side of an infinite rectangle occurs in the subdivision, after inserting one more horizontal or vertical line, we may assume that this side is also the side of a finite rectangle.
WLOG, we may assume that every side of the subdivision is also a side of one of the finite rectangles in the grid formed by the horizontal and vertical lines. Suppose $\eta-\sum m_i\partial R_i$ is not the $0$ chain. Then it contains some horizontal or vertical segment $\sigma$, so that we can write$$\eta-\sum m_i\partial R_i = m\sigma + C,$$where $m$ is an integer, and $C$ is a chain of vertical or horizontal segments other than $\sigma$. Then $\sigma$ is the side of a finite rectangle $R_k$. We take $\sigma$ with the orientation arising from the counterclockwise orientation of the boundary of the rectangle $R_k$. Then the closed chain$$C=\eta-\sum m_i\partial R_i-m\partial R_k$$does not contain $\sigma$. Let $\alpha_k$ be a point interior to $R_k$, and let $\alpha'$ be a point near $\sigma$ but on the opposite side from $\alpha_k$. Since $\eta-\sum m_i\partial R_i-m\partial R_k$ does not contain $\sigma$, the points $\alpha_k$ and $\alpha'$ are connected by a line segment which does not intersect $C$. Therefore, $W(C,\alpha_k)=W(C,\alpha')$. But $W(\eta,\alpha_k)=m_k$ and $W(\partial R_i,\alpha_k)=0$ unless $i=k$, in which case $W(\partial R_k,\alpha_k)=1$. Similarly, if $\alpha'$ is inside some finite rectangle $R_j$, so $\alpha'=\alpha_j$, we have$$W(\partial R_k,\alpha_j)=\begin{cases}0, & \text{if }j\neq k\\1, & \text{if }j=k\end{cases}$$If $\alpha'$ is in an infinite rectangle, then $W(\partial R_k,\alpha')=0$. Hence\begin{alignat}{2}W(C,\alpha_k)&=W\Bigl(\eta-\sum m_i\partial R_i-m\partial R_k,\alpha_k\Bigr) &&= m_k-m_k-m &&= -m\\W(C,\alpha')&=W\Bigl(\eta-\sum m_i\partial R_i-m\partial R_k,\alpha'\Bigr)&&=0\end{alignat}Thus, $m=0$ and $\eta-\sum m_i\partial R_i=0$. Suppose $C\neq 0$. Then $$C=m\sigma+C^*$$where $\sigma$ is a horizontal or vertical segment, $m$ is an integer not equal to zero, and $C^*$ is a chain of vertical or horizontal segments other than $\sigma$. Then $\sigma$ is the side of a finite rectangle.
We take $\sigma$ with the counterclockwise orientation of the boundary of the rectangle $R$. Then the chain $C-m\partial R$ does not contain $\sigma$. Let $\alpha$ be a point inside of $R$, and let $\alpha'$ be a point near $\sigma$ but on the opposite side from $\alpha$. Then we can join $\alpha$ to $\alpha'$ by a segment which does not intersect $C-m\partial R$. By continuity and connectedness of the segment, we have$$W(C-m\partial R,\alpha)=W(C-m\partial R,\alpha')$$but $W(m\partial R,\alpha)=m$ and $W(m\partial R, \alpha')=0$. Thus $W(C,\alpha)=W(C,\alpha')=0$, so $m=0$. By the lemma and the theorem, we know that for any holomorphic function $f$ on $U$, we have $$\int_{\partial R_i}f =0$$by Goursat's theorem. Hence, the integral of $f$ over $\gamma$ is also equal to $0$.
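The winding numbers $m_i$ that drive the proof can be checked numerically as $W(\gamma,\alpha)=\frac{1}{2\pi i}\oint_\gamma \frac{dz}{z-\alpha}$; here is a small Python sketch (a naive midpoint discretization with made-up helper names, purely for intuition) for a rectangular boundary:

```python
import cmath

def winding_number(vertices, alpha, steps=2000):
    """Approximate (1/2*pi*i) * integral of dz/(z - alpha) over the closed
    polygon through 'vertices', then round to the nearest integer."""
    total = 0j
    n = len(vertices)
    for k in range(n):
        z0, z1 = vertices[k], vertices[(k + 1) % n]
        for j in range(steps):
            za = z0 + (z1 - z0) * j / steps
            zb = z0 + (z1 - z0) * (j + 1) / steps
            # Midpoint rule on each sub-segment.
            total += (zb - za) / ((za + zb) / 2 - alpha)
    return round((total / (2j * cmath.pi)).real)

# Boundary of the unit square, oriented counterclockwise:
R = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
print(winding_number(R, 0.5 + 0.5j))   # 1  (alpha inside R)
print(winding_number(R, 2.0 + 0.5j))   # 0  (alpha outside R)
```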
You get $2^\kappa$ many models for any uncountable $\kappa$. This follows immediately from the fact that $\mathsf{ZFC}$ (trivially) defines an infinite linear order, and this is all that's needed to ensure instability. (The same argument shows that already much weaker theories, such as $\mathsf{PA}$, are unstable.) This gives the result for $\kappa$ uncountable. For $\kappa$ countable you also have the maximum number of nonisomorphic models of $\mathsf{ZFC}$. To see this, use the incompleteness theorem to show that you can recursively label the nodes of the complete binary tree as $T_s$, $s\in 2^{<\omega}$, so that $T_\emptyset=\mathsf{ZFC}$ (or $\mathsf{PA}$, if you prefer), $T_s$ is consistent for each $s$, $T_s\subsetneq T_t$ for $s\subsetneq t$, and each $T_{s{}^\frown\langle i\rangle}$, for $i=0,1$, is obtained from $T_s$ by adding a single new axiom (that depends on $s$, of course). Now, for each $x\in 2^\omega$, let $T_x$ be any consistent complete theory extending $\bigcup_n T_{x\upharpoonright n}$. These are $2^{\aleph_0}$ pairwise incompatible theories, all extending $\mathsf{ZFC}$ (or $\mathsf{PA}$), and all having countable models. (Examples such as the theory of $(\mathbb Q,<)$ show that the argument must be different for countable models.) The paragraph above shows that there are $2^{\aleph_0}$ incompatible extensions of $\mathsf{ZFC}$, and therefore $2^{\aleph_0}$ non-isomorphic countable models of $\mathsf{ZFC}$. The result is also true for a fixed complete extension $T$, but the argument seems harder. A proof follows from the following, but most likely there are easier approaches: Consider first the paper Ali Enayat. Leibnizian models of set theory, J. Symbolic Logic, 69 (3), (2004), 775–789. MR2078921 (2005e:03076). In it, Enayat defines a model to be Leibnizian iff it has no pair of indiscernibles.
He shows that there is a first order statement $\mathsf{LM}$ such that any (consistent) complete extension $T$ of $\mathsf{ZF}$ admits a Leibnizian model iff $T\vdash\mathsf{LM}$. He also shows that $\mathsf{LM}$ follows from $\mathrm{V}=\mathsf{OD}$, and that any (consistent) complete extension of $\mathsf{ZF}+\mathsf{LM}$ admits continuum many countable nonisomorphic Leibnizian models. Now consider the paper Ali Enayat. Models of set theory with definable ordinals, Arch. Math. Logic, 44 (3), (2005), 363–385. MR2140616 (2005m:03098). In it, Enayat defines a model to be Paris iff all its ordinals are first order definable within the model. He shows that any (consistent) complete extension of $\mathsf{ZF}+\mathrm{V}\ne\mathsf{OD}$ admits continuum many countable nonisomorphic Paris models. These two facts together imply the result you are after (and more, of course). An earlier result of Keisler and Morley, that started the whole area of model theory of set theory, shows that any countable model of $\mathsf{ZFC}$ admits an elementary end-extension. (This fails for uncountable models.) It may well be that an easy extension of this is all that is needed to prove the existence of continuum many non-isomorphic countable models of any fixed (consistent) $T\supset\mathsf{ZFC}$, but I do not see right now how to get there. The Keisler-Morley theorem alone does not seem to suffice, in view of Joel Hamkins's beautiful answer to this question.
Protein Synthesis
Formation of protein from mRNA is called translation; it is also known as polypeptide synthesis or protein synthesis. It is a unidirectional process. The ribosomes of a polyribosome are held together by a strand of mRNA. Each eukaryotic ribosome has two parts, a smaller 40S subunit (30S in prokaryotes) and a larger 60S subunit (50S in prokaryotes). The larger subunit has a groove for protection and passage of the polypeptide, site A (acceptor or aminoacyl site), the enzyme peptidyl transferase and a binding site for tRNA. The smaller subunit has a point for attachment of mRNA. Along with the larger subunit, it forms the P-site or peptidyl transfer (donor) site. There are binding sites for initiation factors, elongation factors, translocase, GTPase, etc. The raw materials for protein synthesis are amino acids, mRNA, tRNAs and aminoacyl tRNA synthetases.
Amino acids: Twenty types of amino acids and amides constitute the building blocks of proteins.
mRNA: It carries the coded information for synthesis of one (unicistronic) or more polypeptides (polycistronic). Its codons are recognised by tRNAs.
tRNAs: They pick up specific amino acids from the amino acid pool and carry them over the mRNA strand.
Aminoacyl tRNA Synthetases: These enzymes are specific for particular amino acids and their tRNAs.
(1) Activation of Amino Acids: An amino acid combines with its specific aminoacyl tRNA synthetase (AA-activating enzyme) in the presence of ATP to form an aminoacyl adenylate enzyme complex (AA-AMP-E). Pyrophosphate is released. The amino acid present in the complex is said to be activated. Amino Acid (AA) + ATP + Aminoacyl tRNA Synthetase (E) \[\underset{\begin{smallmatrix} \text{amino acid adenylate} \\ \text{enzyme complex} \end{smallmatrix}}{\mathop{\to \,AA-AMP-E}}\,\,+PPi\] AA-AMP-E + tRNA \[\to \]AA-tRNA + AMP + Enzyme.
(2) Initiation: It is accomplished with the help of initiation factors. Prokaryotes have three initiation factors - IF3, IF2 and IF1.
Eukaryotes have nine initiation factors - eIF1, eIF2, eIF3, eIF4A, eIF4B, eIF4C, eIF4D, eIF5 and eIF6. mRNA attaches itself to the smaller subunit of the ribosome with its cap coming in contact with the 3′ end of 18S rRNA (16S rRNA in prokaryotes). It requires eIF2 (IF3 in prokaryotes). The initiation codon AUG or GUG comes to lie over the P-site. This produces the 40S-mRNA complex. The P-site now attracts Met-tRNA (depending upon the initiation codon). The anticodon of the tRNA (UAC) comes to lie opposite the initiation codon. Initiation factor eIF3 (IF2 in prokaryotes) and GTP are required. This gives rise to the \[40S-mRNA-tRN{{A}^{Met}}\] complex. Methionine is nonformylated (tRNA\(_m^{Met}\)) in the eukaryotic cytoplasm and formylated (tRNA\(_f^{Met}\)) in prokaryotes. The larger subunit of the ribosome now attaches to the \[40S-mRNA-tRN{{A}^{Met}}\] complex to form the 80S-mRNA-tRNA complex. Initiation factors eIF1 and eIF4 (A, B and C) are required in eukaryotes and IF1 in prokaryotes. \(\mathrm{Mg}^{2+}\) is essential for union of the two subunits of the ribosome. The A-site becomes operational, and the second codon of mRNA lies over it.
(3) Elongation/chain formation: A new AA-tRNA comes to lie over the A-site codon by means of GTP and an elongation factor (eEF1 in eukaryotes, EF-Tu and EF-Ts in prokaryotes). A peptide bond (-CO.NH-) is established between the carboxyl group (-COOH) of the amino acid at the P-site and the amino group (\(-NH_2\)) of the amino acid at the A-site with the help of the enzyme peptidyl transferase. The connection between the tRNA and the amino acid at the P-site breaks, and the A-site tRNA comes to bear a dipeptidyl. The freed tRNA of the P-site slips away.
By means of translocase (eEF2 in eukaryotes, EF-G in prokaryotes), the ribosome moves over the mRNA by one codon, so that the peptidyl-tRNA shifts from the A-site to the P-site and a new codon is exposed at the A-site. (4) Termination: Polypeptide synthesis stops when a nonsense or termination codon [UAA (ochre), UAG (amber) or UGA (opal)] reaches the A-site. It does not attract any AA-tRNA. The P-site tRNA separates from its amino acid in the presence of a release factor (eRF1 in eukaryotes; RF1 and RF2 in prokaryotes). (5) Modification: The formylated methionine present at the beginning of the polypeptide in prokaryotes and organelles is either deformylated (enzyme deformylase) or removed from the chain (enzyme exopeptidase). Initially the elongating polypeptide has only a primary structure. As soon as the polypeptide comes out of the groove of the larger ribosome subunit, it forms an \(\alpha\)-helix or \(\beta\)-pleated sheet (secondary structure), which folds further through a number of linkages (tertiary structure). Two or more polypeptides may then associate to produce the quaternary structure.
$$\mathrm{molarity} = \frac{\text{amount of solute}}{\text{volume of solution}} $$ and amount of substance is based on quantity (a larger mass means a larger amount), so how come it is an intensive property? Shouldn't it be an extensive property? Concentration is an intensive property. The value of the property does not change with scale. Let me give you an example: say you had a homogeneous mixture (solution) of sodium carbonate in water prepared from 112 g of sodium carbonate dissolved in 1031 g of water. The concentration (in mass percent, or mass of solute per mass of solution) is: $$c=\frac{112\text{ g solute}}{(112+1031)\text{ g solution}}=0.09799 = 9.799\%\text{ sodium carbonate by mass}$$ The concentration is the ratio of sodium carbonate to the total mass of the solution, which does not change whether you are dealing with the entire 1143 g of the solution or you dispense some of that solution into another vessel. If you dispense 11.7 g of that solution into a flask for a reaction, what is the concentration of sodium carbonate in that flask? It is still 9.799% by mass. The ratio of the mass of sodium carbonate present to the total mass present has not changed. The actual mass of sodium carbonate has changed: $$0.09799\,\dfrac{\text{g solute}}{\text{g solution}}\times 11.7\text{ g solution}=1.15\text{ g solute}$$ The concentration depends only on the composition of the solution, not on the amount of solution you have. The concentration of a solution with defined composition is independent of the size of the system. In general, any property that is a ratio of two extensive properties is an intensive property, since both extensive properties scale similarly with increasing or decreasing size of the system.
Some examples include:

- Concentration (including molarity) - ratio of the amount of solute (mass, volume, or moles) to the amount of solution (mass or volume, usually)
- Density - ratio of the mass of a sample to the volume of the sample
- Specific heat - ratio of the heat transferred to a sample to the amount of the sample (mass or moles usually, but also volume)

Each of these intensive properties is a ratio of an extensive property we care about (amount of solute, mass of sample, heat transferred) divided by the scale of the system (amount of stuff, usually). This is like finding the slope of a graph showing the relationship between two extensive properties. The graph is linear, and the value of the slope does not change based on how much stuff you have - thus the slope (the ratio) is an intensive property. Consider the following picture: break the ice block shown in the picture into two equal halves. Now I hope you would be able to answer the following questions: 1. Which physical properties of the ice block got halved? Mass, volume, etc. (These are all extensive properties.) 2. Which physical properties of the ice block remained the same? Density, etc. (These are all intensive properties.) If you wonder why the density remained the same, here is the explanation: even when the block is halved, the mass per unit volume remains the same in either of the pieces. In other words, the density (mass per unit volume) remained the same; thus it is an intensive property. Similarly, if you imagine a solution instead of an ice block, you will find that the molarity remains the same even if you divide the solution into two equal halves. Thus molarity is an intensive property.
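The ratio argument can be checked with a few lines of code. This is just a sketch using the sodium carbonate numbers from the example above; the function and variable names are illustrative:

```python
# Sketch: a ratio of two extensive quantities is intensive. Numbers follow
# the sodium carbonate example above; names are illustrative.

def mass_percent(m_solute, m_solution):
    """Mass of solute per mass of solution, as a fraction."""
    return m_solute / m_solution

m_solute = 112.0             # g sodium carbonate (extensive)
m_solution = 112.0 + 1031.0  # g solution = solute + water (extensive)

c_full = mass_percent(m_solute, m_solution)

# Dispense an 11.7 g aliquot: solute mass and solution mass scale by the
# same factor, so their ratio (the concentration) is unchanged.
m_aliquot = 11.7
solute_in_aliquot = c_full * m_aliquot
c_aliquot = mass_percent(solute_in_aliquot, m_aliquot)

print(f"{c_full:.5f}")               # 0.09799
print(f"{c_aliquot:.5f}")            # 0.09799 -> intensive (same ratio)
print(f"{solute_in_aliquot:.2f} g")  # 1.15 g  -> extensive (scaled down)
```

Both samples report the same concentration, while the mass of solute scales with the size of the sample, which is exactly the intensive/extensive distinction above.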
Despite their very good empirical performance, most of the simplex algorithm's variants require exponentially many pivot steps in terms of the problem dimensions of the given linear programming problem (LPP) in the worst case. The first to explain the large gap between practical experience and the disappointing worst case was Borgwardt (1982a,b), who could prove polynomiality on the average for a certain variant of the algorithm, the "Schatteneckenalgorithmus" (shadow vertex algorithm), using a stochastic problem simulation.

A Simple Integral Representation for the Second Moments of Additive Random Variables on Stochastic Polyhedra (1992): Let \(a_i\), \(i=1,\dots,m\), be an i.i.d. sequence taking values in \(\mathbb{R}^n\), whose convex hull is interpreted as a stochastic polyhedron \(P\). For a special class of random variables which decompose additively relative to their boundary simplices, e.g. the volume of \(P\), simple integral representations of their first two moments are given in the case of rotationally symmetric distributions, in order to facilitate estimations of variances or to quantify large deviations from the mean.

Let \(a_1,\dots,a_n\) be independent random points in \(\mathbb{R}^d\), spherically symmetrically but not necessarily identically distributed. Let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_n\), and for any \(k\)-dimensional subspace \(L\subseteq \mathbb{R}^d\) let \(Vol_L(X) := \lambda_k(L\cap X)\) be the volume of \(X\cap L\) with respect to the \(k\)-dimensional Lebesgue measure \(\lambda_k\), \(k=1,\dots,d\). Furthermore, let \(F^{(i)}(t) := \Pr(\Vert a_i \Vert_2 \leq t)\), \(t \in \mathbb{R}^+_0\), be the radial distribution function of \(a_i\). We prove that the expectation functional \(\Phi_L(F^{(1)}, F^{(2)},\dots, F^{(n)}) := E(Vol_L(X))\) is strictly decreasing in each argument, i.e.
if \(F^{(i)}(t) \le G^{(i)}(t)\), \(t \in \mathbb{R}^+_0\), but \(F^{(i)} \not\equiv G^{(i)}\), we show \(\Phi(\dots, F^{(i)}, \dots) > \Phi(\dots, G^{(i)}, \dots)\). The proof is done in the more general framework of continuous and \(f\)-additive polytope functionals.

Let \(A := \{a_i \mid i = 1,\dots,m\}\) be an i.i.d. random sample in \(\mathbb{R}^n\), which we consider as a random polyhedron, either as the convex hull of the \(a_i\) or as the intersection of the halfspaces \(\{x \mid a_i^T x \leq 1\}\). We introduce a class of polyhedral functionals we will call "additive-type functionals", which covers a number of polyhedral functionals discussed in different mathematical fields; the emphasis in our contribution is on those which arise in linear optimization theory. The class of additive-type functionals is a suitable setting in which to unify and to simplify the asymptotic probabilistic analysis of first and second moments of polyhedral functionals. We provide examples of asymptotic results on expectations and on variances.

An improved asymptotic analysis of the expected number of pivot steps required by the simplex algorithm (1995): Let \(a_1,\dots,a_m\) be i.i.d. vectors uniform on the unit sphere in \(\mathbb{R}^n\), \(m\ge n\ge 3\), and let \(X := \{x \in \mathbb{R}^n \mid a_i^T x \leq 1,\ i=1,\dots,m\}\) be the random polyhedron generated by \(a_1,\dots,a_m\). Furthermore, for linearly independent vectors \(u\), \(\bar u\) in \(\mathbb{R}^n\), let \(S_{u, \bar u}(X)\) be the number of shadow vertices of \(X\) in \(\mathrm{span}(u, \bar u)\). The paper provides an asymptotic expansion of the expectation \(E(S_{u, \bar u})\) for fixed \(n\) and \(m\to\infty\). The first terms of the expansion are given explicitly. Our investigation of \(E(S_{u, \bar u})\) is closely connected to Borgwardt's probabilistic analysis of the shadow vertex algorithm, a parametric variant of the simplex algorithm.
We obtain an improved asymptotic upper bound for the number of pivot steps required by the shadow vertex algorithm for data distributed uniformly on the sphere.

Let \((a_i)_{i\in \mathbb{N}}\) be a sequence of independent and identically distributed random vectors drawn from the \(d\)-dimensional unit ball \(B^d\), and let \(X_n := \mathrm{convhull}(a_1,\dots,a_n)\) be the random polytope generated by \(a_1,\dots,a_n\). Furthermore, let \(\Delta(X_n) := \mathrm{Vol}(B^d \setminus X_n)\) be the deviation of the polytope's volume from the volume of the ball. For uniformly distributed \(a_i\) and \(d\ge 2\), we prove that the limiting distribution of \(\frac{\Delta(X_n)}{E(\Delta(X_n))}\) for \(n\to\infty\) satisfies a 0-1-law. In particular, we provide precise information about the asymptotic behaviour of the variance of \(\Delta(X_n)\). We deliver analogous results for spherically symmetric distributions in \(B^d\) with regularly varying tail.

Let \(a_1,\dots,a_m\) be random points in \(\mathbb{R}^n\) that are independent and identically spherically symmetrically distributed. Moreover, let \(X\) be the random polytope generated as the convex hull of \(a_1,\dots,a_m\) and let \(L_k\) be an arbitrary \(k\)-dimensional subspace of \(\mathbb{R}^n\) with \(2\le k\le n-1\). Let \(X_k\) be the orthogonal projection image of \(X\) in \(L_k\). We call those vertices of \(X\) whose projection images in \(L_k\) are vertices of \(X_k\) as well shadow vertices of \(X\) with respect to the subspace \(L_k\). We derive a distribution-independent sharp upper bound for the expected number of shadow vertices of \(X\) in \(L_k\).

Let \(a_i\), \(i = 1,\dots,m\), be an i.i.d. sequence taking values in \(\mathbb{R}^n\), whose convex hull is interpreted as a stochastic polyhedron \(P\). For a special class of random variables which decompose additively relative to their boundary simplices, e.g.
the volume of \(P\), integral representations of their first two moments are given which lead to asymptotic estimations of variances for special "additive variables" known from stochastic approximation theory, in the case of rotationally symmetric distributions.

The article provides an asymptotic probabilistic analysis of the variance of the number of pivot steps required by phase II of the "shadow vertex algorithm", a parametric variant of the simplex algorithm, which has been proposed by Borgwardt [1]. The analysis is done for data which satisfy a rotationally invariant distribution law in the \(n\)-dimensional unit ball.
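The expected volume of such a random polytope can be explored numerically. The sketch below is my own illustration, not code from the papers above: it estimates \(E(\mathrm{Vol}(X_n))\) for the convex hull \(X_n\) of \(n\) points drawn uniformly from the unit disk \(B^2\), using only the Python standard library:

```python
# Sketch (illustration only): Monte Carlo estimate of E(Vol(X_n)) for the
# random polytope X_n = convhull(a_1, ..., a_n), with a_i drawn uniformly
# from the unit disk B^2. Standard library only.
import math
import random

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(hull):
    """Shoelace formula for a simple CCW polygon."""
    if len(hull) < 3:
        return 0.0
    area = 0.0
    for i in range(len(hull)):
        x1, y1 = hull[i]
        x2, y2 = hull[(i + 1) % len(hull)]
        area += x1 * y2 - x2 * y1
    return 0.5 * area

def uniform_disk(rng):
    """Uniform point in the unit disk via rejection sampling."""
    while True:
        x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            return (x, y)

def expected_hull_area(n, trials=100, seed=0):
    """Monte Carlo estimate of E(Vol(X_n)) over `trials` samples."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pts = [uniform_disk(rng) for _ in range(n)]
        total += polygon_area(convex_hull(pts))
    return total / trials

# The estimate grows toward Vol(B^2) = pi as n increases; the deficit
# pi - E(Vol(X_n)) corresponds to E(Delta(X_n)) from the abstracts above.
for n in (10, 100, 500):
    print(n, round(expected_hull_area(n), 3))
```

For uniform points in a planar disk the deficit \(\pi - E(\mathrm{Vol}(X_n))\) is classically known to shrink on the order of \(n^{-2/3}\), which a plot of the estimates against \(n\) makes visible.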
In Spivak's Calculus, he asks for a proof that $\lim_{x\to a}f(x)=\lim_{h\to 0}f(a+h)$. He first shows that the existence of $\lim_{x\to a}f(x)$ implies the existence and equivalence of $\lim_{h\to0}f(a+h)$, and then he says the argument for the other direction is "similar," but I am having a hard time replicating it (I may be getting unnecessarily bogged down in notational issues). His proof of the first direction is essentially as follows: (Spivak forward direction): Let $\ell=\lim_{x\to a}f(x)$ and define $g(h)=f(a+h)$. Then for every $\epsilon>0$ there is a $\delta>0$ such that, for all $x$, if $0<|x-a|<\delta$, then $|f(x)-\ell|<\epsilon$. Now, if $0<|h|<\delta$, then $0<|(a+h)-a|<\delta$, so $|f(a+h)-\ell|<\epsilon$, which we can write as $|g(h)-\ell|<\epsilon$. Thus, $\lim_{h\to0}g(h)=\ell$, which can also be written $\lim_{h\to 0}f(a+h)=\ell$. The same sort of argument shows that if $\lim_{h\to 0}f(a+h)=m$, then $\lim_{x\to a}f(x)=m$. So either limit exists if the other does, and in this case they are equal. My attempt at the other direction: Let $m=\lim_{h\to 0}f(a+h)$. Then for every $\epsilon>0$ there is a $\delta>0$ such that, for all $h$, if $0<|h|<\delta$ then $|f(a+h)-m|<\epsilon$. Now, if $0<|x-a|<\delta$, then $h=x-a$ satisfies $0<|h|<\delta$, so $|f(a+(x-a))-m|=|f(x)-m|<\epsilon$. Thus, $\lim_{x\to a}f(x)=m$. What am I missing here? Is my proof okay? Why does Spivak use the function $g$ in the previous direction? Is it really necessary? What would such a $g$ be in the other direction?
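The substitution $h=x-a$ at the heart of both directions can also be illustrated numerically: under it, the two limit expressions evaluate $f$ at exactly the same points. A minimal sketch (the choices of $f$ and $a$ are mine for illustration, not from Spivak):

```python
# Numerical illustration of lim_{x->a} f(x) = lim_{h->0} f(a+h): under the
# substitution h = x - a, both sides sample the same function values.
# f and a are arbitrary illustrative choices, not from Spivak.
import math

def f(x):
    # sin(x)/x has a removable singularity at 0 with limit 1.
    return math.sin(x) / x

a = 0.0  # the limit point

for delta in (1e-1, 1e-3, 1e-6):
    x = a + delta  # a point approaching a ...
    h = x - a      # ... corresponds to h approaching 0
    # Both limit expressions evaluate f at the same point:
    assert f(x) == f(a + h)
    print(delta, f(a + h))  # the values approach the limit 1
```

The assertion makes the point of the proof concrete: $f(x)$ for $x$ near $a$ and $f(a+h)$ for $h$ near $0$ are literally the same quantities, so one limit exists exactly when the other does.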