Vicky Hall 2 years, 2 months ago
The answer to part a) in the advice is wrong. There's also an issue with the answer to part b): you give all lengths and the angle to the student to two decimal places, but then you calculate the answer using more decimal places, which makes your answer slightly more accurate than theirs can ever be. In the example I just did, the two answers still rounded the same to two decimal places (although they might not always), but they differed at the five decimal places you give on the line above. This may seem like a small thing, but it will confuse the student: no matter how many times they retype the calculation, they will not get the same answer as Numbas. You could fix this by rounding your numbers to two decimal places in the variables section.
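For instance (a sketch, assuming Numbas's standard JME precround function; the variable name is just an illustration), a length could be defined in the variables section as
radius = precround(random(10..30#0.01), 2)
so that the value the student sees and the value used to compute the answer are identical.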
The advice here is far too wordy. Try to shorten it so it gets straight to the point. In part a), for example, it would sound better to say:
'We can see from the diagram that the radius of the frisbee is $22\mathrm{cm}$. Replacing the letter $r$ in the formula for the area of a circle with $22$ gives....' etc.
Try to do this with all of the sections.
I have also put all occurrences of $\mathrm{volume}$ and $\mathrm{area}$ into \mathrm and changed your American spelling to 'centimetres'.
Stanislav Duris 2 years, 2 months ago
This question is very nice; I like how there is some context in every part. The images are great and it's clear what the question asks people to do. Good job with GeoGebra and all those variables. I've noticed some tiny problems.
You use "Area =" or "Volume =" a lot in the question and sometimes this is displayed in Latex and sometimes it's not. For example, in parts a)c)d)e), before the gap, it is displayed as normal text. In part b), the font is different and much nicer. Do you want to fix this so it is consistent? In part c), "Calculate the volume of a cone given the formula for the areaof a cone is" should be "Calculate the volume of a cone given the formula for the volumeof a cone is". In part d), there is a sentence saying "Using the diagram and applying the volume of a sphere to the tennis ball." which is a bit out of place. Do you want to remove it or maybe adjust it so it makes more sense? Is it possible to align geogebra applets to the centre? If not, don't worry.
Advice
In part a), "(the perimeter of the circle) of the circle." does not read well, I think you should just keep (the perimeter) in brackets. I feel like the sentence starting with "Identifying from the diagram that the radius.." is unfinished but maybe that's just me. Maybe replacing "that" with a comma would fix this. Similar problem in part c). In part b), the first line of the equation should look like the formula you gave them in the question, with sin as a function rather than just letters s,i,n. I think your a)b)c)d)e) headings shouldn't be bold as well, because now they seem way too big in the advice. Vicky Hall 2 years, 3 months ago
I meant to link all formulae to the question so that the student understands why they have been given a particular formula. When asking them to calculate the volume of a tennis ball, tell them that you are providing the formula for the volume of a sphere. Hopefully this will help them to learn the correct formulae at the same time as practising substitution.
Ensure that in all formulae the words 'Area' and 'Volume' are in Roman and also use \displaystyle to make the formulae clearer. Use $\sin C$ instead of $sin(C)$.
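For example (a minimal illustration of these conventions, not taken from the question itself), the source
$\displaystyle \mathrm{Volume}=\frac{4}{3}\pi r^3$ and $\sin C$
renders the word upright and the fraction at full size, whereas $Volume = 4/3 \pi r^3$ and $sin(C)$ italicise the letters as if they were products of variables.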
It's not necessary to state the value of $\pi$ (and certainly not after every part of the question!) as the student will be using their calculator for these questions anyway.
Please remember full stops at the end of all sentences.
Most of these comments also apply to 'Substitution without geometry', so please have a check through that too.
Vicky Hall 2 years, 3 months ago
In part c), you have mixed up the values of $r$ and $h$ in the solution. I would also change the possible values of $r$ and $h$ so that $h$ is always greater than $r$, because the diagram makes it look as though this is the case.
Link the formulae you are stating to the questions you are asking. For example, in part a), reword the question to say: 'Calculate the area of the frisbee given that the area of a circle is $\mathrm{Area}=\pi r^2$.' Part d) needs to be reworded, as the current prompt doesn't make sense and has 'cm' randomly placed in the sentence.
Remember full stops at the end of all sentences and at the end of the statement. If the formula is the end of your question, put the full stop after that.
In the advice section there are issues with a few of the solutions. In part a), the penultimate line of working is correct but the final line gives a totally different number. In part c), $r$ and $h$ are mixed up like they were in the expected answer to the question. And in part d), the solution isn't rounded in the final line of working, as you've done in all of the other sections.
|
Mr Joseph Najnudel, Reader in Mathematics, PhD
Summary
Random matrix theory: study of random unitary matrices (in particular the Circular Unitary Ensemble), random permutation matrices and related infinite-dimensional limiting objects, link between random matrix theory and number theory, link between random matrix theory and mathematical physics.
Stochastic analysis: study of limiting measures constructed by modifying the Brownian motion or more general stochastic processes, study of the Brownian local times and polymer models.
Recent publications
Najnudel, J & Virág, B, 2019, The bead process for beta ensembles. arXiv:1904.00848.
Najnudel, J, 2019, On consecutive values of random completely multiplicative functions. arXiv:1702.01470.
Najnudel, J & Virág, B, 2019, Uniform point variance bounds in classical beta ensembles. arXiv:1904.00858.
Chhaibi, R & Najnudel, J, 2019, On the circle, $GMC^{\gamma} = C\beta E_{\infty}$ for $\gamma = \sqrt{\frac{2}{\beta}}$, ($\gamma \leq 1$). arXiv:1904.00578.
Chhaibi, R, Hovhannisyan, E, Najnudel, J, Nikeghbali, A & Rodgers, B, 2019, A limiting characteristic polynomial of some random matrix ensembles. Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, vol 20.
Maples, K, Najnudel, J & Nikeghbali, A, 2019, Strong convergence of eigenangles and eigenvectors for the Circular Unitary Ensemble. Annals of Probability, vol 47.
Najnudel, J & Pitman, J, 2019, Feller coupling of cycles and Poisson spacings. arXiv:1907.09587.
Assiotis, T & Najnudel, J, 2019, The boundary of the orbital beta process. arXiv:1905.08684.
Chhaibi, R & Najnudel, J, 2018, Rigidity of the $\mathrm{Sine}_{\beta}$ process. Electronic Communications in Probability, vol 23.
Chhaibi, R, Madaule, T & Najnudel, J, 2018, On the maximum of the $C\beta E$ field. Duke Mathematical Journal, vol 167.
|
I cannot claim to be an expert on AQFT, but the parts that I'm familiar with rely on local fields quite a bit.
First, a clarification. In your question, I think you may be conflating two ideas: local fields ($\phi(x)$, $F^{\mu\nu}(x)$, $\bar{\psi}\psi(x)$, etc) and unobservable local fields ($A_\mu(x)$, $g_{\mu\nu}(x)$, $\psi(x)$, etc).
Local fields are certainly recognizable in AQFT, even if they are not used everywhere. In the Haag-Kastler or Brunetti-Fredenhagen-Verch (aka Locally Covariant Quantum Field Theory, or LCQFT) frameworks, you can think of algebras assigned to spacetime regions by a functor, $U\mapsto \mathcal{A}(U)$. These could be causal diamonds in Minkowski space (Haag-Kastler) or globally hyperbolic spacetimes (LCQFT). You can also have a functor assigning smooth compactly supported test functions to spacetime regions, $U\mapsto \mathcal{D}(U)$. A local field is then a natural transformation $\Phi\colon \mathcal{D} \to \mathcal{A}$ between these two functors. Unwrapping the definition of a natural transformation, you find for every spacetime region $U$ a map $\Phi_U\colon \mathcal{D}(U)\to \mathcal{A}(U)$, such that $\Phi_U(f)$ behaves morally as a smeared field, $\int \mathrm{d}x\, f(x) \Phi(x)$ in physics notation.
This notion of smeared field is certainly in use in the algebraic constructions of free fields as well as in the perturbative renormalization of interacting LCQFTs (as developed in the last decade and a half by Hollands, Wald, Brunetti, Fredenhagen, Verch, etc), where locality is certainly taken very seriously.
Now, my understanding of unobservable local fields is unfortunately much murkier. But I believe that they are indeed absent from the algebras of observables that one would ideally work with. For instance, following the Haag-Kastler axioms, localized algebras of observables must commute when spacelike separated. That is impossible if you consider smeared fermionic fields as elements of your algebra. However, I think at least the fermionic fields can be recovered via the DHR analysis of superselection sectors. The issue with unobservable fields with local gauge symmetries is much less clear (at least to me) and may not be completely settled yet (though see some speculative comments on my part here).
|
The equivalence is not correct.
To see this, consider any divergent series of real numbers $\sum a_n$ such that $a_n\to 0$ as $n\to \infty$; for example $a_n=\frac1n$. Then define $x_n=a_1+\dots +a_n$. This is a non-Cauchy sequence (as any non-convergent sequence of real numbers), but $x_{n+k}-x_n=a_{n+1}+\dots +a_{n+k}\to 0$ as $n\to \infty$ for any fixed $k\in\mathbb N$, since this is the sum of $k$ terms tending to $0$; that is, $d(x_n,x_{n+k})\to 0$ as $n\to\infty$. So the implication $\Leftarrow$ does not hold.
If you prefer, you can take $x_n=\log n$ or $x_n=\sqrt{n}$, and check that $x_{n+k}-x_n\to 0$ as $n\to \infty$, for any $k\in\mathbb N$.
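If it helps, here is a quick numerical sketch (my own illustration, not part of the original argument) of the $x_n=\log n$ example: the gaps $x_{n+k}-x_n$ shrink to $0$ for each fixed $k$, even though the sequence is unbounded and hence not Cauchy.

```python
import math

k = 5
for n in (10, 100, 1000, 10000):
    # d(x_{n+k}, x_n) = log(n+k) - log(n) -> 0 as n grows, for fixed k
    print(n, math.log(n + k) - math.log(n))
# yet x_n = log(n) -> infinity, so (x_n) is not Cauchy
```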
On the other hand, your proof of the other implication is correct.
Last remark: in fact the assumption "$\forall k\in\mathbb N\; d(x_{n+k},x_n)\to 0$" is equivalent to the seemingly weaker "$d(x_{n+1},x_n)\to 0$", as you can easily check.
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
|
A brief description of the 18 electron rule
A valence shell of a transition metal contains the following: 1 $s$ orbital, 3 $p$ orbitals and 5 $d$ orbitals; 9 orbitals that can collectively accommodate 18 electrons (as either bonding or nonbonding electron pairs). This means that the combination of these nine atomic orbitals with ligand orbitals creates nine molecular orbitals that are either metal-ligand bonding or non-bonding; when a metal complex has 18 valence electrons, it is said to have achieved the same electron configuration as the noble gas of its period.
In some respects, it is similar to the octet rule for main group elements, something you might be more familiar with, and thus it may be useful to bear that in mind. So in a sense, there's not much more to it than "electron bookkeeping".
As already mentioned in the comments, the 18-electron rule is more useful in the context of organometallics.
Two methods are commonly employed for electron counting:
Neutral atom method: the metal is taken to be in the zero oxidation state for counting purposes.
Oxidation state method: we first arrive at the oxidation state of the metal by considering the number of anionic ligands present and the overall charge of the complex.
I think this website does a good job of explaining this: http://www.ilpi.com/organomet/electroncount.html (plus, they have some practice exercises towards the end)
Let's just focus on Neutral Atom Method (quote from the link above)
The major premise of this method is that we remove all of the ligands from the metal, but rather than take them to a closed shell state, we do whatever is necessary to make them neutral. Let's consider ammonia once again. When we remove it from the metal, it is a neutral molecule with one lone pair of electrons. Therefore, as with the ionic model, ammonia is a neutral two electron donor.
But we diverge from the ionic model when we consider a ligand such as methyl. When we remove it from the metal and make the methyl fragment neutral, we have a neutral methyl radical. Both the metal and the methyl radical must donate one electron each to form our metal-ligand bond. Therefore, the methyl group is a one electron donor, not a two electron donor as it is under the ionic formalism. Where did the other electron "go"? It remains on the metal and is counted there. In the covalent method, metals retain their full complement of d electrons because we never change the oxidation state from zero; i.e. Fe will always count for 8 electrons regardless of the oxidation state and Ti will always count for four.
Ligand Electron Contribution (for neutral atom method)
a. Neutral terminal (e.g. $\ce{CO}, \ce{PR_3}, \ce{NR_3}$): 2 electrons
b. Anionic terminal (e.g. $\ce{X^-}, \ce{R_2P^-}, \ce{RO^-}$): 1 electron
c. Hapto ligands (e.g. $\eta^2-\ce{C_2R_4}, \eta^1-\text{allyl}$): same as hapticity
d. Bridging neutral (e.g. $\mu_2-\ce{CO}$): 2 electrons
e. Bridging anionic, no lone pairs (e.g. $\mu_2-\ce{CH_3}$): 1 electron
f. Bridging anionic, with 1 lone pair (e.g. $\mu_2-\ce{Cl}, \mu_2-\ce{OR}$): 3 electrons
or, $\mu_2-\ce{Cl}$ with 2 lone pairs: 5 electrons
g. Bridging alkyne: 4 electrons
h. NO, linear: 3 electrons
i. NO, bent (lone pair on nitrogen): 1 electron
j. Carbene (M=C): 2 electrons
k. Carbyne (M≡C): 3 electrons
Determining # Metal-Metal bonds
Step 1: Determine the total valence electrons (TVE) in the entire molecule (that is, the number of valence electrons of the metal plus the number of electrons from each ligand and the charge) -- I'll call this T (T for total, I'm making this up).
Step 2: Subtract this number from $n \times 18$, where $n$ is the number of metals in the complex, i.e. $(n \times 18) - T$ -- call this R (R for result, nothing fancy).
(a) R divided by 2 gives the total number of M–M bonds in the complex. (b) T divided by n gives the number of electrons per metal.
If the number of electrons is 18, it indicates that there is no M–M bond; if it is 17 electrons, it indicates that there is 1 M–M bond; if it is 16 electrons, it indicates that there are 2 M–M bonds; and so on.
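The two steps above are easy to mechanise. Here is a minimal sketch (the function and the tiny ligand table are my own illustration, not a standard library) of the neutral atom count:

```python
# Donor counts for a few neutral-atom-method ligands from the list above.
LIGAND_ELECTRONS = {"CO": 2, "PR3": 2, "NR3": 2, "X": 1, "mu2-CO": 2}

def electron_count(metal_e, ligands, charge=0, n_metals=1):
    """Return (T, total number of M-M bonds, electrons per metal)."""
    T = (n_metals * metal_e
         + sum(LIGAND_ELECTRONS[name] * k for name, k in ligands.items())
         - charge)                      # a negative charge adds electrons
    R = n_metals * 18 - T
    return T, R // 2, T / n_metals

# W(CO)6: T = 6 + 6*2 = 18, so no M-M bonds.
print(electron_count(6, {"CO": 6}))                 # (18, 0, 18.0)

# Co4(CO)12: T = 4*9 + 12*2 = 60, R = 12, so 6 M-M bonds, 15 e- per metal.
print(electron_count(9, {"CO": 12}, n_metals=4))    # (60, 6, 15.0)
```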
At this point, let's apply this method to a few examples
a) Tungsten hexacarbonyl, $\ce{W(CO)_6}$
Let's use the neutral atom method: W has 6 electrons, the six carbonyls donate 12 electrons, and we get a total of 18. Of course there can be no metal-metal bonds here.
b) Tetracobalt dodecacarbonyl, $\ce{Co_4(CO)_{12}}$: here let's figure out the number of metal-metal bonds. T is 60, R is 12, the total number of M–M bonds is 6, and the number of electrons per metal is 15, so 3 M–M bonds per metal.
A few examples where the "18 electron Rule" works
I. Octahedral Complexes with strong $\pi$-acceptor ligands
eg. $\ce{[Cr(CO)_6]}$
Here, $t_{2g}$ is strongly bonding and is filled and, $e_g$ is strongly antibonding, and empty. Complexes of this kind tend to obey the 18-electron rule irrespective of their coordination number. Exceptions exist for $d^8$, $d^{10}$ systems (see below)
II. Tetrahedral Complexes
e.g. $\ce{[Ni(PPh_3)_4]}$ ($\ce{Ni^0}$, a $d^{10}$ 18-electron complex).
Tetrahedral complexes cannot exceed 18 electrons because there are no low-lying MOs that can be filled to obtain tetrahedral complexes with more than 18 electrons. In addition, a transition metal complex with the maximum of 10 d-electrons will receive 8 electrons from the ligands and end up with a total of 18 electrons.
Violations of 18 electron rule
I. Bulky ligands (e.g. $\ce{Ti(\text{neopentyl})_4}$ has 8 electrons): bulky ligands prevent a full complement of ligands from assembling around the metal to satisfy the 18-electron rule.
Additionally, for early transition metals (e.g. in $d^0$ systems), it is often not possible to fit the number of ligands necessary to reach 18 electrons around the metal (e.g. tungsten hexamethyl, see below).
II. Square Planar $d^8$ complexes (16 electrons) and Linear $d^{10}$ complexes (14 electrons)
For square planar complexes, $d^8$ metals with 4 ligands give 16-electron complexes. This is commonly seen with metals and ligands high in the spectrochemical series.
For instance, $\ce{Rh^+}, \ce{Ir^+}, \ce{Pd^2+}, \ce{Pt^2+}$ are square planar. Similarly, $\ce{Ni^2+}$ can be square planar with strong $\pi$-acceptor ligands.
Similarly, $d^{10}$ metals with 2 ligands give 14-electron complexes, commonly seen for $\ce{Ag^+}, \ce{Au^+}, \ce{Hg^{2+}}$.
III. Octahedral Complexes which disobey the 18 electron rule, but still have fewer than 18 electrons (12 to 18)
This is seen with second and third row transition metal complexes of metal ions high in the spectrochemical series, with $\sigma$-donor or $\pi$-donor ligands (low to medium in the spectrochemical series). Here $t_{2g}$ is non-bonding or weakly anti-bonding (because the ligands are either $\sigma$-donor or $\pi$-donor), and it usually contains 0 to 6 electrons. The $e_g$ orbitals, on the other hand, are strongly antibonding, and thus are empty.
IV. Octahedral Complexes which exceed 18 electrons (12 to 22)
This is observed in first row transition metal complexes of metal ions low in the spectrochemical series, with $\sigma$-donor or $\pi$-donor ligands. Here, the $t_{2g}$ is non-bonding or weakly anti-bonding, but the $e_g$ orbitals are only weakly antibonding and thus can contain electrons. Thus, 18 electrons may be exceeded.
References:
The following weblinks proved useful to me while I was writing this post (especially handy for things like MO diagrams):
http://www.chem.tamu.edu/rgroup/marcetta/chem462/lectures/Lecture%203%20%20excerpts%20from%20Coord.%20Chem.%20lecture.pdf
http://classes.uleth.ca/201103/chem4000b/18%20electron%20rule%20overheads.pdf
http://web.iitd.ac.in/~sdeep/Elias_Inorg_lec_5.pdf
http://www.ilpi.com/organomet/electroncount.html
http://www.yorku.ca/stynes/Tolman.pdf
and obviously, the wikipedia page is a helpful guide https://en.wikipedia.org/wiki/18-electron_rule
|
No, this is very false in general. For example, if $X=Y=S^n$ ($n>0$), the connected components of the space $Y^X$ of continuous maps from $X$ to $Y$ are in bijection with $\mathbb{Z}$ (the integer corresponding to a map $f:X\to Y$ is known as the degree of $f$). For $n>1$, $X$ and $Y$ are simply connected. In general, determining the connected components of such spaces $Y^X$ (when $Y$ and $X$ are reasonably nice, at least) is a deep geometric problem and is one of the central problems of the entire field of algebraic topology. Just as an example, classifying all of the connected components of $Y^X$ in the case $Y=S^m$ and $X=S^n$ for arbitrary values of $m$ and $n$ is fantastically difficult and the answer is so complex we will probably never have any satisfactory complete description of it (see https://en.wikipedia.org/wiki/Homotopy_groups_of_spheres for an overview of the problem).
One case where it is true is if $X$ or $Y$ is contractible: we say $X$ is contractible if there exists a continuous map $H:X\times [0,1]\to X$ such that $H(x,0)=x$ for all $x$ and $H(x,1)$ is constant (i.e., the same for all $x$). For example, if $X=\mathbb{R}^n$, then $X$ is contractible via the map $H(x,t)=(1-t)x$.
If $X$ is contractible, then for any $f\in Y^X$, there is a continuous map $F:[0,1]\to Y^X$ given by $F(t)(x)=f(H(x,t))$. Note that $F(0)=f$, and $F(1)$ is the constant function with value $f(H(-,0))$. It follows that $Y^X$ is path-connected, since every element of $Y^X$ can be connected to a constant function by a path, and all constant functions can be connected by paths since you have assumed $Y$ is simply connected (in particular, path-connected). A similar argument shows that when $Y$ is contractible, $Y^X$ is path-connected (in fact, more strongly, $Y^X$ is contractible, without needing any hypothesis like path-connectedness on $X$).
A version of the argument above also works if you are only considering continuous linear maps between topological vector spaces, since the contraction $H(x,t)=(1-t)x$ only passes through linear maps.
|
I'm asked to prove that the cardinality of the set of all bijections in $\mathbb{N} \to \mathbb{N}$ is $\mathfrak {c}$.
Note: $\mathfrak {c}$ is the cardinality of the real numbers.
I would appreciate some help understanding the following solution:
Let's denote this set as $A$. On the one hand, $A \subseteq \mathbb{N} \to \mathbb{N}$. Thus, according to the CSB theorem, $|A|\le \mathfrak {c}$. This part I understand.
On the other hand, we can define an injective function $f : \{0,1\}^{\mathbb{N_{even}}}\to A$ as follows:
$f=\lambda g\in \{0,1\}^{\mathbb{N_{even}}}.\lambda n\in \mathbb{N}.\begin{cases} n+1, & \text{if $n\in \mathbb{N}_{even} \land g(n)=1$ } \\[2ex] n-1, & \text{if $n\in \mathbb{N}_{odd} \land g(n-1)=1$} \\[2ex] n, & \text{otherwise } \end{cases}$
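To make the definition concrete, here is a small numerical sketch (my own illustration): on an initial segment of $\mathbb{N}$, $f(g)$ swaps $n$ and $n+1$ whenever $n$ is even and $g(n)=1$, and fixes everything else, so it is an involution of $\mathbb{N}$.

```python
def f(g, N=10):
    # Evaluate the function f(g) from the definition above on 0..N-1.
    out = []
    for n in range(N):
        if n % 2 == 0 and g(n) == 1:
            out.append(n + 1)
        elif n % 2 == 1 and g(n - 1) == 1:
            out.append(n - 1)
        else:
            out.append(n)
    return out

g = lambda n: 1 if n % 4 == 0 else 0       # an arbitrary g on the evens
values = f(g)
print(values)                              # [1, 0, 2, 3, 5, 4, 6, 7, 9, 8]
print(sorted(values) == list(range(10)))   # True: a permutation of 0..9
```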
Now, I can conclude that $|A|\ge \mathfrak {c}$. Hence, $|A|= \mathfrak {c}$.
I would appreciate it if someone could explain why $\operatorname{Im}(f)\subseteq A$. Or in other words, why is the output of $f$ necessarily a bijection?
Also, if you think there's an easier way to solve that, I would be glad to see it. Thank you.
|
Say that the exterior differential system (EDS) corresponding to a PDE system is:
$$df-f_x\,dx-f_y\,dy-f_w\,dw-f_z\,dz=0,\\ a_1\,f_x+a_2\,f_y=0,\tag{sys}$$
Of course we also require the independence condition, $dx\wedge dy\wedge dw\wedge dz\neq 0$.
Instead of (sys) can I simply use the following? $$ df +\dfrac{a_2}{a_1}f_y\,dx-f_y\,dy-f_w\,dw-f_z\,dz=0 \tag{sys$^\prime$}$$ I guess what I'm asking above is whether the ideal generated by $$\theta=df +\dfrac{a_2}{a_1}f_y\,dx-f_y\,dy-f_w\,dw-f_z\,dz$$ coincides with the pull-back of the ideal generated by the contact form ($i.e.$ the left-hand side of first line in sys) to the manifold in jet space given by the second line of sys?
I think the answer is yes because the pullback to the manifold in jet space defined by $a_1\,f_x+a_2\,f_y=0$ commutes with the exterior product and with the exterior derivative so the ideal generated by $\mathrm{sys^\prime}$ will coincide with the pullback of the ideal generated by $\mathrm{sys}$ but not 100% sure.
Sorry if all of this is obvious, but I'm not a mathematician. Besides Bryant et al. I'm using "Cartan for Beginners" and "Lie's Structural Approach to PDE Systems". I would be grateful for any other useful references.
|
Question: Using the fact that the power series, centered at x = 0, for \(\frac{1}{1-x}\) is \(\sum\limits_{n=0}^{\infty }{{{x}^{n}}}\), find the power series for the following, with center at x = c:
(a) \(\frac{1}{x+3}\), c = 5
(b) \(\frac{1}{{{\left( 1-x \right)}^{3}}}\), c = 0
|
ok, suppose we have the set $U_1=[a,\frac{a+b}{2}) \cup (\frac{a+b}{2},b]$ where $a,b$ are rational. It is easy to see that there exists a countable cover consisting of intervals that converge towards $a$, $b$ and $\frac{a+b}{2}$. Therefore $U_1$ is not compact. Now we can construct $U_2$ by taking the midpoint of each half-open interval of $U_1$, and we can similarly construct a countable cover that has no finite subcover.
By induction on the naturals, we eventually end up with the set $\Bbb{I} \cap [a,b]$. Thus this set is not compact
I am currently working under the Lebesgue outer measure, though I did not know we cannot define any measure where subsets of rationals have nonzero measure
The above workings is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{I}\cap[a,b]) = [a,b]$ where $\lambda^*$ is the Lebesgue outer measure
that is, trying to compute the Lebesgue outer measure of the irrationals using only the notions of covers, topology and the definition of the measure
What I hope from such more direct computation is to get deeper rigorous and intuitve insight on what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set
Problem: Let $X$ be some measurable space and $f,g : X \to [-\infty, \infty]$ measurable functions. Prove that the set $\{x \mid f(x) < g(x) \}$ is a measurable set. Question: In a solution I am reading, the author just asserts that $g-f$ is measurable and the rest of the proof essentially follows from that. My problem is, how can $g-f$ make sense if either function could possibly take on an infinite value?
@AkivaWeinberger For $\lambda^*$ I can think of simple examples like: If $\frac{a}{2} < \frac{b}{2} < a, b$, then I can always add some $\frac{c}{2}$ to $\frac{a}{2},\frac{b}{2}$ to generate the interval $[\frac{a+c}{2},\frac{b+c}{2}]$ which will fullfill the criteria. But if you are interested in some $X$ that are not intervals, I am not very sure
We then manipulate the $c_n$ for the Fourier series of $h$ to obtain a new $c_n$, but expressed w.r.t. $g$.
Now, I am still not understanding why by doing what we have done we're logically showing that this new $c_n$ is the $d_n$ which we need. Why would this $c_n$ be the $d_n$ associated with the Fourier series of $g$?
$\lambda^*(\Bbb{I}\cap [a,b]) = \lambda^*(C) = \lim_{i\to \aleph_0}\lambda^*(C_i) = \lim_{i\to \aleph_0} (b-q_i) + \sum_{k=1}^i (q_{n(i)}-q_{m(i)}) + (q_{i+1}-a)$. Therefore, computing the Lebesgue outer measure of the irrationals directly amounts to computing the value of this series. Therefore, we first need to check it is convergent, and then compute its value
The above workings is basically trying to compute $\lambda^*(\Bbb{I}\cap[a,b])$ more directly without using the fact $(\Bbb{I}\cap[a,b]) \cup (\Bbb{I}\cap[a,b]) = [a,b]$ where $\lambda^*$ is the Lebesgue outer measure
What I hope from such more direct computation is to get deeper rigorous and intuitve insight on what exactly controls the value of the measure of some given uncountable set, because MSE and Asaf taught me it has nothing to do with connectedness or the topology of the set
Alessandro: and typo for the third $\Bbb{I}$ in the quote, which should be $\Bbb{Q}$
(cont.) We first observed that the above countable sum is an alternating series. Therefore, we can use some machinery in checking the convergence of an alternating series
Next, we observed the terms in the alternating series is monotonically increasing and bounded from above and below by b and a respectively
Each term in brackets is also nonnegative by the Lebesgue outer measure of open intervals, and together, let the differences be $c_i = q_{n(i)}-q_{m(i)}$. These form a series that is bounded from above and below
Hence (also typo in the subscript just above): $$\lambda^*(\Bbb{I}\cap [a,b])=\sum_{i=1}^{\aleph_0}c_i$$
Consider the partial sums of the above series. Note every partial sum is telescoping since in finite series, addition associates and thus we are free to cancel out. By the construction of the cover $C$ every rational $q_i$ that is enumerated is ordered such that they form expressions $-q_i+q_i$. Hence for any partial sum by moving through the stages of the constructions of $C$ i.e. $C_0,C_1,C_2,...$, the only surviving term is $b-a$. Therefore, the countable sequence is also telescoping and:
@AkivaWeinberger Never mind. I think I figured it out alone. Basically, the value of the definite integral for $c_n$ is actually the value of the define integral of $d_n$. So they are the same thing but re-expressed differently.
If you have a function $f : X \to Y$ between two topological spaces $X$ and $Y$ you can't conclude anything about the topologies, if however the function is continuous, then you can say stuff about the topologies
@Overflow2341313 Could you send a picture or a screenshot of the problem?
nvm I overlooked something important. Each interval contains a rational, and there are only countably many rationals. This means at the $\omega_1$ limit stage, there are uncountably many intervals that contain neither rationals nor irrationals, thus they are empty and do not contribute to the sum
So there are only countably many disjoint intervals in the cover $C$
@Perturbative Okay similar problem if you don't mind guiding me in the right direction. If a function f exists, with the same setup (X, t) -> (Y,S), that is 1-1, open, and continous but not onto construct a topological space which is homeomorphic to the space (X, t).
Simply restrict the codomain so that it is onto? Making it bijective and hence invertible.
hmm, I don't understand. While I do start with an uncountable cover and using axiom of choice to well order the irrationals, the fact that the rationals are countable means I eventually end up with a countable cover of the rationals. However the telescoping countable sum clearly does not vanish, so this is weird...
In a schematic, we have the following, I will try to figure this out tomorrow before moving on to computing the Lebesgue outer measure of the cantor set:
@Perturbative Okay, last question. Think I'm starting to get this stuff now.... I want to find a topology t on R such that f: R, U -> R, t defined by f(x) = x^2 is an open map where U is the "usual" topology defined by U = {x in U | x in U implies that x in (a,b) \subseteq U}.
To do this... the smallest t can be is the trivial topology on R - {\emptyset, R}
But, we required that everything in U be in t under f?
@Overflow2341313 Also for the previous example, I think it may not be as simple (contrary to what I initially thought), because there do exist functions which are continuous, bijective but do not have continuous inverse
I'm not sure if adding the additional condition that $f$ is an open map will make a difference
For those who are not very familiar about this interest of mine, besides the maths, I am also interested in the notion of a "proof space", that is the set or class of all possible proofs of a given proposition and their relationship
An element of a proof space is a proof, which consists of steps forming a path in this space. For that I have a postulate: given two paths A and B in proof space with the same starting point and a proposition $\phi$, if $A \vdash \phi$ but $B \not\vdash \phi$, then there must exist some condition that makes the path $B$ unable to reach $\phi$, or $B$ is unprovable under the current formal system
Hi. I believe I have numerically discovered that $\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n$ as $K\to\infty$, where $c=0,\dots,K$ is fixed and $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. Any ideas how to prove that?
|
Given a topological space $X$ and a presheaf of abelian groups on $X$, $A$, we can construct the set of germs at a point $x\in X$ by taking $\mathscr{A}_x=\lim\limits_{\rightarrow} A(U)$ where $x\in U\subseteq X$ and $U$ is open. In Sheaf Theory by Bredon, he claims there is a canonical group structure on $\mathscr{A}_x$.
What is the canonical group structure of the direct limit?
I would think if $[s]_x, [t]_x$ are germs then $[s]_x+[t]_x=[s+t]_x$ but, if $s,t$ are not in the same group, would we first need to restrict their sections until they are and then add them?
|
Motivation
In the finance literature, authors seem to use returns: \begin{equation} R_{t+1}=\frac{p_{t+1}}{p_t} \end{equation} and log returns: \begin{equation} r_{t+1}=\log\left( \frac{p_{t+1}}{p_t} \right)=\log(R_{t+1}) \end{equation} interchangeably (this notation is standard, following the convention that lower case variables are the logs of upper case variables). This post shows how the choice of logs vs. levels can affect basic results in a big way.
First Order Approximation
Consider the first order Taylor expansion of $\log(1+x)$ about $x=0$: \begin{equation}\log(1+x) \approx x\end{equation} Think of $x$ as percent returns, $x=R_{t+1}-1$. Recalling that $r_{t+1}=\log(R_{t+1})$, this approximation implies that $r_{t+1} \approx R_{t+1}-1$.
Log returns are useful, as compounding can be done through addition, rather than multiplication. In levels, total return is calculated as: \begin{equation} R_{total}=R_1 \cdot R_2 \cdot \dots \cdot R_n \end{equation} This works because each $R_t$ is a price ratio, so the intermediate prices cancel: $R_1 R_2 \cdots R_n = \frac{p_1}{p_0}\frac{p_2}{p_1}\cdots\frac{p_n}{p_{n-1}} = \frac{p_n}{p_0}$. In logs, total return is calculated as: \begin{equation} r_{total}=r_1 + r_2 + \dots + r_n \end{equation} This works because $\log(ab)=\log a+\log b$, so $r_1 + \dots + r_n = \log(R_1 R_2 \cdots R_n) = \log(R_{total})$.
Approximation Error
The table below presents a range of values of $x$, the corresponding approximation $\log(1+x)$, and the approximation error.
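A minimal sketch (mine, not from the original post) that reproduces this kind of table:

```python
import math

print(f"{'x':>6}  {'log(1+x)':>9}  {'error':>8}")
for x in (-0.5, -0.2, -0.05, -0.01, 0.01, 0.05, 0.2, 0.5):
    approx = math.log(1 + x)          # log return for a percent return x
    print(f"{x:+6.2f}  {approx:+9.4f}  {approx - x:+8.4f}")
```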
Three important takeaways from this table:
1) The approximation error is small for values of $x$ close to zero, but substantial for large $|x|$. 2) The approximation error is asymmetric about zero - it is larger for $x<0$ than for $x>0$. 3) The error direction is also asymmetric - negative values of $x$ are amplified, while positive values of $x$ are dampened.
S&P 500 Data
Consider the price dividend ratio, P/D, for the S&P 500 index, and log(P/D) (Source: Robert Shiller )
During the technology "bubble", P/D in levels appears to increase exponentially, while in logs it looks almost linear (this makes sense, as exponential growth in levels is linear in logs). During the financial crisis, the drop is much more severe in logs than in levels.
This reveals a bigger point - a rising series will look more dramatic in levels, and a falling series will look more dramatic in logs.
Summary
Using logs is very convenient, as working with addition is always easier than working with multiplication. Further, the overwhelming majority of stock returns are between -0.05 and 0.05, where the approximation error is small. This breaks down during extreme events, when the approximation error causes the two methods to materially differ. When trying to describe "bubbles" and "crises", it is important to consider the impact of choosing logs vs. levels, as it could substantially alter the results.
|
I am currently working on problem that I think could be expressed as an integer lattice problem.
Given $u \in \mathbb{R}^n$ and a bounded integer lattice $L = \mathbb{Z}^n \cap [-M,M]^n$, I would like to find an integer vector $v \in L$ that minimizes the angle between $u$ and $v$. That is, I would like $$v \in \text{argmax}_{w \in L} \frac{u\cdot w}{\|u\|\|w\|}$$
Here, the objective is maximizing the cosine of the angle between $u$ and $w$ (i.e. minimizing the angle between $u$ and $w$). The vectors $u$ and $w$ are said to be "similar" if this quantity is close to 1.
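For small instances, a brute-force baseline (my own sketch, exponential in $n$, so only usable for tiny $n$ and $M$) makes the objective concrete:

```python
import itertools, math

def best_lattice_direction(u, M):
    # Enumerate all nonzero integer vectors in [-M, M]^n and keep the one
    # maximizing cosine similarity with u.
    norm_u = math.sqrt(sum(c * c for c in u))
    best, best_cos = None, -2.0
    for w in itertools.product(range(-M, M + 1), repeat=len(u)):
        if not any(w):
            continue                      # cosine is undefined for w = 0
        cos = (sum(a * b for a, b in zip(u, w))
               / (norm_u * math.sqrt(sum(c * c for c in w))))
        if cos > best_cos:
            best, best_cos = w, cos
    return best, best_cos

print(best_lattice_direction([1.0, math.sqrt(2)], M=5))  # best w and its cosine
```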
I am wondering:
Is this problem related to a well-known integer lattice problem (e.g. a closest vector problem)?
Could this problem be solved using existing lattice algorithms (e.g. the LLL algorithm?)
|
Deriving the Convolution Integral
Consider an arbitrary input $x(t)$, approximated as a sum of narrow pulses:
$$x(t) \approx \sum_n T_p\,x(nT_p)\,\frac{1}{T_p}\mathrm{rect}\left(\frac{t-nT_p}{T_p}\right)\tag{1}$$
Suppose $h_p$ is the response of the system to an input $\frac{1}{T_p}\mathrm{rect}\big(\frac{t-nT_p}{T_p}\big)$. Then $y(t)$ corresponding to $x(t)$ is $y(t)=\sum T_p\,x(nT_p)\,h_p(t-nT_p)$.
Taking the limit $T_p \rightarrow 0$, we have $\frac{1}{T_p}\mathrm{rect}(\frac{t}{T_p}) \rightarrow \delta(t)$. Let the impulse response be $h(t)$. Now, look at $y(t)=\sum T_p\,x(nT_p)\,h_p(t-nT_p)$ and let $\tau=nT_p$. Then, $y(t)=\int d\tau\, x(\tau)h(t-\tau)$. Therefore,
$$y(t)=\int_{-\infty}^{\infty} x(\tau)\,h(t-\tau)\,d\tau\tag{2}$$
Notice that
$$x(t)=\int_{-\infty}^{\infty} x(\tau)\,\delta(t-\tau)\,d\tau\tag{3}$$
We already know the above equation from the shifting property.
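A small numerical sketch (my own; the input and impulse response are arbitrary choices) of the limiting argument: the pulse-sum approximation of $y(t)$ approaches the convolution integral as $T_p \to 0$.

```python
import math

def x(t):                 # an arbitrary input signal
    return math.sin(t) if 0.0 <= t <= math.pi else 0.0

def h(t):                 # an arbitrary impulse response: exp(-t) for t >= 0
    return math.exp(-t) if t >= 0.0 else 0.0

def y_approx(t, Tp):
    # y(t) ~ sum_n Tp * x(n*Tp) * h(t - n*Tp), a Riemann sum for (2)
    return sum(Tp * x(n * Tp) * h(t - n * Tp) for n in range(int(10 / Tp)))

for Tp in (0.5, 0.1, 0.01, 0.001):
    print(Tp, y_approx(2.0, Tp))   # converges as Tp -> 0
```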
Examples
CT convolution can be best understood by the following examples:
|
My calculation shows that
$$I(\epsilon)
:= \int_{0}^{\infty} \frac{dx}{\sinh^{2}(\epsilon\sqrt{x^{2}+1})}
= \frac{\pi}{2\epsilon^{2}} - \frac{1}{\epsilon} + \pi \epsilon \sum_{n=1}^{\infty} \frac{1}{(\pi^{2}n^{2} + \epsilon^{2})^{3/2}} \tag{1}. $$
In particular, if we expand the infinite sum on the RHS, we get
$$ I(\epsilon) = \frac{\pi}{2\epsilon^{2}} - \frac{1}{\epsilon} + \sum_{n=0}^{\infty} \binom{-3/2}{n} \frac{\zeta(2n+3)}{\pi^{2n+2}} \epsilon^{2n+1}. $$
Indeed, from the following expansion
$$ \frac{1}{\sinh^{2}z}= \sum_{n=-\infty}^{\infty} \frac{1}{(z-i\pi n)^{2}}= \frac{1}{z^{2}} + 2 \sum_{n=1}^{\infty} \frac{z^{2} - \pi^{2}n^{2}}{(z^{2} + \pi^{2} n^{2})^{2}}, $$
it follows from term-wise integration that
\begin{align*}\int_{0}^{R} \frac{dx}{\sinh^{2}(\epsilon\sqrt{x^{2}+1})}= \frac{\arctan R}{\epsilon^{2}} + \sum_{n=1}^{\infty} &\Bigg( \frac{2\epsilon \arctan \left( R\epsilon \big/ \sqrt{\pi^{2}n^{2} + \epsilon^{2}} \right)}{(\pi^{2}n^{2} + \epsilon^{2})^{3/2}} \\&\quad + \frac{2}{R}\frac{1}{\pi^{2}n^{2}+\epsilon^{2}} \\&\quad - \frac{2(R^{2}+1)}{R(\pi^{2}n^{2} + (R^{2}+1)\epsilon^{2})} \Bigg). \tag{2}\end{align*}
Here, term-wise integration is possible from Fubini's theorem together with the following estimate:
$$ \int_{0}^{R} \left| \frac{\epsilon^{2}(x^{2}+1) - \pi^{2}n^{2}}{(\epsilon^{2}(x^{2}+1) + \pi^{2} n^{2})^{2}} \right| dx = \frac{\arctan \left( R\epsilon \big/ \sqrt{\pi^{2}n^{2} + \epsilon^{2}} \right)}{\epsilon\sqrt{\pi^{2}n^{2}+\epsilon^{2}}}\lesssim_{\epsilon, R} \frac{1}{n^{2}}. $$
(Notice that the arctan term also contributes to order $n^{-1}$. It means that this argument fails if we consider $R = \infty$. This is why we consider proper integral first.)
Finally, taking $R \to \infty$ to (2) yields (1). When doing this, the only non-trivial calculation is to check that
$$ \lim_{R\to\infty} \sum_{n=1}^{\infty} \frac{2(R^{2}+1)}{R(\pi^{2}n^{2} + (R^{2}+1)\epsilon^{2})} = \frac{1}{\epsilon}. $$
But this follows from the squeezing lemma combined with the following inequality
$$ C(1, R) \leq \sum_{n=1}^{\infty} \frac{2(R^{2}+1)}{R(\pi^{2}n^{2} + (R^{2}+1)\epsilon^{2})} \leq C(0, R), $$
where
\begin{align*}C(a, R)&:= \int_{a}^{\infty} \frac{2(R^{2}+1)}{R(\pi^{2}x^{2} + (R^{2}+1)\epsilon^{2})} \, dx \\&= \frac{\sqrt{R^{2}+1}}{R\epsilon} \frac{\arctan\left( \epsilon\sqrt{R^{2}+1} \big/ a\pi \right)}{\pi/2}.\end{align*}
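As a sanity check (my own addition, not part of the derivation), the closed form (1) can be compared against direct numerical quadrature:

```python
import math
from scipy import integrate   # assumes SciPy is available

def I_direct(eps):
    f = lambda x: 1.0 / math.sinh(eps * math.sqrt(x * x + 1)) ** 2
    val, _ = integrate.quad(f, 0.0, math.inf)
    return val

def I_series(eps, terms=10**5):
    s = sum((math.pi**2 * n**2 + eps**2) ** -1.5 for n in range(1, terms))
    return math.pi / (2 * eps**2) - 1.0 / eps + math.pi * eps * s

for eps in (0.5, 1.0, 2.0):
    print(eps, I_direct(eps), I_series(eps))   # the two columns should agree
```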
|
I need to find $\lim _{x \to 0} \cot(3x)\sin(4x)$. However, I am having trouble finding a way to do that. I am a Calculus 1 student and the only ways I know to handle a problem like this are by multiplying by a conjugate, or L'Hospital's Rule. Neither of which seems to work here.
I think I need to identify the correct trig identity for cotangent or sine, and then apply one of the two methods I mentioned, but I can't seem to find any trig identities that seem to allow that.
I have tried replacing $\cot(3x)\sin(4x)$ with $\frac{\cot(3x)}{\csc(4x)}$ and $\frac{\frac{\cos(3x)}{\sin(3x)}}{\csc(4x)}$ and $\frac{\cos(3x)\sin(4x)}{\sin(3x)}$, but I can't find any that work.
As I understand it, L'Hospital's rule requires that both the numerator and denominator of the limit approach either zero or infinity. I'm not entirely sure what that means, but $(0, 0)$ is a point on both the graph of $\cos(3x)\sin(4x)$ and $\sin(3x)$.
I can do,
$$\lim _{x \to 0}\frac{\frac{d}{dx}\cos(3x)\sin(4x)}{\frac{d}{dx}\sin(3x)} $$
$$=\lim_{x \to 0} \frac{[-3\sin(3x)\cdot\sin(4x)] + [\cos(3x)\cdot4\cos(4x)]}{\cos(3x)}$$
$$=\lim_{x \to 0} \frac{4\cos(3x)\cos(4x)-3\sin(3x)\sin(4x)}{\cos(3x)}$$
$$=\frac{4(1)(1) - 3(0)(0)}{1} = 4$$
But this is wrong.
How can I solve this limit?
|
There is a minor ambiguity in the term "Riemann integral": it tends to be used both for Riemann's original formulation -- which involves tagged partitions and requires convergence in a very strong sense: uniformly in the mesh (or norm) $||\mathcal{P}||$ of the partition $\mathcal{P}$ -- and also G. Darboux's later simplification in terms of upper and lower sums and upper and lower integrals, which is for most purposes technically easier to work with and thus is the one which is carefully developed in most undergraduate texts.
The ambiguity can be justified by the fact the Riemann and Darboux theories give different descriptions of what ultimately turns out to be the same linear functional: a function $f: [a,b] \rightarrow \mathbb{R}$ is Riemann integrable if and only if it is Darboux integrable (the hard part of this is to show that Darboux integrable functions are Riemann integrable) and when these conditions hold the associated real number $\int_a^b f$ is the same. For a careful exposition of the Darboux and Riemann integrals including a comparison between the two, see Chapter 8 of these notes.
I bring up the distinction between Darboux and Riemann because it is relevant to your question of boundedness of integral functions, and because of the two helpful answers already left to this question, one addresses the Darboux case and the other the Riemann case. Either way though the following simple observation lies at the heart of the matter.
For $f: [a,b] \rightarrow \mathbb{R}$, the following are equivalent:
(i) $f$ is bounded above (respectively, bounded below).
(ii) For any partition $\mathcal{P} = \{a= x_0 < x_1 < \ldots < x_{n-1} < x_n = b\}$, the restriction of $f$ to each subinterval $[x_i,x_{i+1}]$ is bounded above (respectively, bounded below).
So if $f$ is unbounded above, then for any partition $\mathcal{P}$, there is at least one subinterval $[x_i,x_{i+1}]$ on which $f$ is unbounded above, hence the upper sum $\mathcal{U}(f,\mathcal{P})$ does not exist as a real number, so we can't even define the Darboux integral. Alternately, if we want to work in the extended real numbers, we would say that if $f$ is unbounded above, $\mathcal{U}(f,\mathcal{P}) = \infty$. Similarly, if $f$ is unbounded below, $\mathcal{L}(f,\mathcal{P}) = -\infty$ (c.f. Proposition 8.2 in the linked notes). This means: if $f$ is unbounded above then $\overline{\int}_a^b f = \infty$, and if $f$ is unbounded below then $\underline{\int}_a^b f = -\infty$. With this extended definition we would define a function to be Darboux integrable if and only if its upper and lower integrals are both finite and are equal, so we see that Darboux integrable functions are bounded.
For the Riemann integral there is a similar argument: if $f$ is unbounded above, then no matter what partition we choose, then for any $M > 0$ there will be a tagging $\tau$ -- i.e., a choice of sample point $x_i^* \in [x_i,x_{i+1}]$ such that the Riemann sum $R(f,\mathcal{P},\tau) = \sum_{i=0}^{n-1} f(x_i^*)(x_{i+1}-x_i)$ is greater than $M$. This is a nice exercise: the idea is to use the above observation and choose one subinterval $[x_i,x_{i+1}]$ on which $f$ is unbounded above, choose the sample points in the other subintervals arbitrarily, and then choose $x_i^* \in [x_i,x_{i+1}]$ so that $f(x_i^*)$ is large enough to make the entire Riemann sum come out greater than $M$. Thus we see that if $f$ is unbounded above it cannot be Riemann integrable (but it's definitely a result, not a definition, in this case), and similarly if $f$ is unbounded below.
To address the rest of your question: yes, when one writes "Riemann integrable" one generally means to neglect the case of improper Riemann integrals, which of course can be finite even for some unbounded functions. This is true notwithstanding the fact that the same notation $\int_a^b f$ is used for Riemann integrals and improper Riemann integrals. Generally, anyway: in any particular case you should check to confirm that the terminology is being used in this way.
Added: The class of Riemann-Darboux integrable functions was characterized by Lebesgue (though my colleague Roy Smith has shown me a passage in Riemann's work showing that he had the result as well).
Theorem (Lebesgue Criterion). For a function $f: [a,b] \rightarrow \mathbb{R}$, the following are equivalent:
(i) $f$ is Riemann integrable.
(ii) $f$ is bounded, and the set of discontinuities of $f$ has measure zero.
Since this result was first published in the 20th century, one can correctly infer that it is not really needed in the development of the Riemann integral. From a pedagogical perspective, I would rather have students learn to use less heavy tools with more dexterity. Nevertheless I give a proof in $\S$ 8.5 of my notes which does not use any measure theory or even the theory of countable/uncountable sets. (The proof given there is not due to me; it is taken from notes of A.R. Schep.)
Note also that another answer to this question currently contains a false statement of this result. The characteristic function of the classical middle thirds Cantor set shows that "measure zero" cannot be replaced with "countable".
|
How do I compute the following trigonometric expression? $$\sqrt2\sin10 \left(\sec5+\frac{2\cos 40}{\sin5}-4\sin35\right)$$ I am having trouble solving it. I tried to use the identities $\sin10=2\sin5\cos5$, $\cos40=\cos(45-5)$, and $\sin35=\sin(30+5)$, but it became complicated and I could not simplify to a simpler term. I checked on Wolfram Alpha and it seems the answer is 4, but I cannot get it. Could anyone here help me? Any help would be appreciated. Thanks in advance.
(All the arguments are in degrees)
Let the expression to be evaluated be $x$.
Since $\sin 10=2\sin 5\cos 5$, you get: $$2\sqrt{2}(\sin 5+2\cos 40 \cos 5-4\sin 35\sin 5\cos 5)=2\sqrt{2}(\sin 5+2\cos 40 \cos 5-2\sin 35\sin 10)$$
We can write: $$2\cos 40 \cos 5=\cos 45+\cos 35$$ $$2\sin 35\sin 10=\cos 25-\cos 45$$ Hence, $$x=2\sqrt{2}(\sin 5+\cos 45+\cos 35+\cos 45-\cos 25)=2\sqrt{2}(\sqrt{2}+\sin 5 +\cos 35-\cos 25)$$ Since $$\cos 35-\cos 25=-2\sin 30\sin 5=-\sin 5$$ The final answer is $\boxed{4}$
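A quick numerical check (my own addition) of the boxed value, evaluating the original expression in degrees:

```python
import math

d = math.radians
x = math.sqrt(2) * math.sin(d(10)) * (
    1 / math.cos(d(5))                       # sec 5
    + 2 * math.cos(d(40)) / math.sin(d(5))
    - 4 * math.sin(d(35))
)
print(x)   # 3.9999999999999996 -- i.e. 4, as derived above
```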
|
Group Set
context: $G$
definiendum: $\langle G,* \rangle \in \mathrm{it}$
inclusion: $\langle G,* \rangle \in \mathrm{monoid}(G)$
let: $e$ such that $\forall a.\, e*a=a*e=a$
range: $g,g^{-1}\in G$
postulate: $\forall g.\,\exists g^{-1}.\;(g*g^{-1}=g^{-1}*g=e)$
Alternative definitions
Sharper definitions
We could just define left units and left inverses and prove from the group axioms that they are already units and inverses.
Group axioms explicitly in the first order language
Let $\langle G,* \rangle $ be a set $G$ with a binary operation.
1. $\forall (a,b\in G).\ (a*b\in G)$
2. $\forall (a,b,c\in G).\ ((a*b)*c=a*(b*c))$
3. $\exists (e\in G).\ \forall (a\in G).\ (a*e=e*a=a) $
4. $\forall (a\in G).\ \exists (a^{-1}\in G).\ (a*a^{-1}=a^{-1}*a=e)$
The first axiom is already implied if “$*$” is a binary operation $*:G\times G\to G$.
For given $G$, the set $\text{group}(G)$ is the set of all pairs $\langle G,* \rangle$, containing $G$ itself, as well as a binary operation which fulfills the group axioms. One generally calls $G$ the group, i.e. the set with respect to which the operation "$*$" is defined.
|
ä is in the extended latin block and n is in the basic latin block so there is a transition there, but you would have hoped \setTransitionsForLatin would have not inserted any code at that point as both those blocks are listed as part of the latin block, but apparently not.... — David Carlisle 12 secs ago
@egreg you are credited in the file, so you inherit the blame:-)
@UlrikeFischer I was leaving it for @egreg to trace but I suspect the package makes some assumptions about what is safe, it offers the user "enter" and "exit" code for each block but xetex only has a single insert, the interchartoken at a boundary the package isn't clear what happens at a boundary if the exit of the left class and the entry of the right are both specified, nor if anything is inserted at boundaries between blocks that are contained within one of the meta blocks like latin.
Why do we downvote to a total vote of -3 or even lower? Weren't we a welcoming and forgiving community with the convention to only downvote to -1 (except for some extreme cases, like e.g., worsening the site design in every possible aspect)?
@Skillmon people will downvote if they wish and given that the rest of the network regularly downvotes lots of new users will not know or not agree with a "-1" policy, I don't think it was ever really that regularly enforced just that a few regulars regularly voted for bad questions to top them up if they got a very negative score. I still do that occasionally if I notice one.
@DavidCarlisle well, when I was new there was like never a question downvoted to more than (or less?) -1. And I liked it that way. My first question on SO got downvoted to -infty before I deleted it and fixed my issues on my own.
@DavidCarlisle I meant the total. Still the general principle applies, when you're new and your question gets donwvoted too much this might cause the wrong impressions.
@DavidCarlisle oh, subjectively I'd downvote that answer 10 times, but objectively it is not a good answer and might get a downvote from me, as you don't provide any reasoning for that, and I think that there should be a bit of reasoning with the opinion based answers, some objective arguments why this is good. See for example the other Emacs answer (still subjectively a bad answer), that one is objectively good.
@DavidCarlisle and that other one got no downvotes.
@Skillmon yes but many people just join for a while and come form other sites where downvoting is more common so I think it is impossible to expect there is no multiple downvoting, the only way to have a -1 policy is to get people to upvote bad answers more.
@UlrikeFischer even harder to get than a gold tikz-pgf badge.
@cis I'm not in the US but.... "Describe", while it does have a technical meaning close to what you want, is almost always used more casually to mean "talk about". I think I would say "Let k be a circle with centre M and radius r".
@AlanMunn definitions.net/definition/describe gives a websters definition of to represent by drawing; to draw a plan of; to delineate; to trace or mark out; as, to describe a circle by the compasses; a torch waved about the head in such a way as to describe a circle
If you are really looking for alternatives to "draw" in "draw a circle" I strongly suggest you hop over to english.stackexchange.com and confirm to create an account there and ask ... at least the number of native speakers of English will be bigger there and the gamification aspect of the site will ensure someone will rush to help you out.
Of course there is also a chance that they will repeat the advice you got here; to use "draw".
@0xC0000022L @cis You've got identical responses here from a mathematician and a linguist. And you seem to have an idea that because a word is informal in German, its translation in English is also informal. This is simply wrong. And formality shouldn't be an aim in and of itself in any kind of writing.
@0xC0000022L @DavidCarlisle Do you know the book "The Bronstein" in English? I think that's a good example of archaic mathematician language. But it is still possible harder. Probably depends heavily on the translation.
@AlanMunn I am very well aware of the differences of word use between languages (and my limitations in regard to my knowledge and use of English as non-native speaker). In fact words in different (related) languages sharing the same origin is kind of a hobby. Needless to say that more than once the contemporary meaning didn't match a 100%.
However, your point about formality is well made. A book - in my opinion - is first and foremost a vehicle to transfer knowledge. No need to complicate matters by trying to sound ... well, overly sophisticated (?) ...
The following MWE with showidx and imakeidx:
\documentclass{book}
\usepackage{showidx}
\usepackage{imakeidx}
\makeindex
\begin{document}
Test\index{xxxx}
\printindex
\end{document}
generates the error:
! Undefined control sequence.
<argument> \ifdefequal{\imki@jobname }{\@idxfile }{}{...
@EmilioPisanty ok I see. That could be worth writing to the arXiv webmasters as this is indeed strange. However, it's also possible that the publishing of the paper got delayed; AFAIK the timestamp is only added later to the final PDF.
@EmilioPisanty I would imagine they have frozen the epoch settings to get reproducible pdfs, not necessarily that helpful here but..., anyway it is better not to use \today in a submission as you want the authoring date not the date it was last run through tex
and yeah, it's better not to use \today in a submission, but that's beside the point - a whole lot of arXiv eprints use the syntax and they're starting to get wrong dates
@yo' it's not that the publishing got delayed. arXiv caches the pdfs for several years but at some point they get deleted, and when that happens they only get recompiled when somebody asks for them again
and, when that happens, they get imprinted with the date at which the pdf was requested, which then gets cached
Does any of you on linux have issues running for foo in *.pdf ; do pdfinfo $foo ; done in a folder with suitable pdf files? My box says pdfinfo does not exist, but it clearly does when I run it on a single pdf file.
@EmilioPisanty that's a relatively new feature, but I think they have a new enough tex, but not everyone will be happy if they submit a paper with \today and it comes out with some arbitrary date like 1st Jan 1970
@DavidCarlisle add \def\today{24th May 2019} in the INITEX phase and recompile the format daily? I agree, too much overhead. They should simply add "do not use \today" to these guidelines: arxiv.org/help/submit_tex
@yo' I think you're vastly over-estimating the effectiveness of that solution
(and it would not solve the problem with 20+ years of accumulated files that do use it)
@DavidCarlisle sure. I don't know what the environment looks like on their side so I won't speculate. I just want to know whether the solution needs to be on the side of the environment variables, or whether there is a tex-specific solution
@yo' that's unlikely to help with preprints where the class itself reads the system time.
@EmilioPisanty well the environment vars do more than tex (they affect the internal id in the generated pdf or dvi and so produce reproducible output), but you could, as @yo' showed, redefine \today or the \year, \month, \day primitives on the command line
@EmilioPisanty you can redefine \year \month and \day which catches a few more things, but same basic idea
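For concreteness, two command-line variants of these suggestions (a sketch; the FORCE_SOURCE_DATE mechanism needs a reasonably recent TeX distribution):

# pin \today by redefining it before the main file is read
pdflatex "\def\today{24th May 2019}\input{paper}"
# or pin the date primitives via the reproducible-build variables
SOURCE_DATE_EPOCH=1558656000 FORCE_SOURCE_DATE=1 pdflatex paper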
@DavidCarlisle could be difficult with inputted TeX files. It really depends on at which phase they recognize which TeX file is the main one to proceed. And as their workflow is pretty unique, it's hard to tell which way is even compatible with it.
"beschreiben", engl. "describe" comes from the math. technical-language of the 16th Century, that means from Middle High German, and means "construct" as much. And that from the original meaning: describe "making a curved movement". In the literary style of the 19th to the 20th century and in the GDR, this language is used.
You can have that in English too: scribe (verb): to score a line on something with a pointed instrument, as in metalworking. https://www.definitions.net/definition/scribe
@cis Yes, as @DavidCarlisle pointed out, there is a very technical mathematical use of 'describe' which is what the German version means too, but we both agreed that people would not know this use, so using 'draw' would be the most appropriate term. This is not about trendiness, just about making language understandable to your audience.
Plan figure. The barrel circle over the median $s_b = |M_b B|$, which subtends the angle $\alpha$, also contains an isosceles triangle $M_b P B$ with base $|M_b B|$ and angle $\alpha$ at the point $P$. The altitude to the base of the isosceles triangle bisects both $|M_b B|$, at $M_{s_b}$, and the angle $\alpha$ at the apex.

The centroid $S$ divides the medians in the ratio $2:1$, with the longer part lying on the side of the corner. The point $A$ lies on the barrel circle and on a circle $\bigodot(S,\frac23 s_a)$ described by $S$ of radius…
|
First, let me introduce my little story:
I started searching information about the famous Dirichlet Divisor Problem: getting the exact asymptotic behaviour of the sum of the divisor function up to an integer $x$. More specifically, the aim of the problem is to bound $\theta$ in:
$$D(x)=\sum_{n \le x} d(n) = x \log x +x(2 \gamma -1) + O(x^{\theta + \epsilon})$$
Then, I found that this problem could be generalised to finding the sum of the number of ways an integer $n$ can be written as the product of $k$ natural numbers up to an integer $x$, which is called the Piltz Divisor Problem. Again, it is based on bounding the error term in:
$$D_k(x)=xP_k(\log x) + O(x^{\alpha_k + \epsilon})$$
where $P_k$ is a polynomial of degree $k-1$.
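As a quick numerical illustration of the Dirichlet case, $D(x)$ can be computed exactly from the hyperbola-method identity $D(x)=\sum_{k\le x}\lfloor x/k\rfloor$; a sketch in R:

x <- 1e5
Dx <- sum(floor(x / (1:x)))                     # exact D(x)
main <- x*log(x) + (2*0.5772156649 - 1)*x       # main term, with Euler's constant gamma
c(Dx, main, Dx - main)                          # the difference is the O(x^theta) error term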
After all that, I found a paper relating these two problems to the Riemann Hypothesis, which seemed quite interesting to me. It was called
The Generalized Divisor Problem and The Riemann Hypothesis, by Hideki Nakaya. It starts by quoting, in (1), a work by A. Selberg (which I have not been able to find) where he extended the Piltz Divisor Problem to all complex $k$. And I have a lot of trouble trying to understand this.
I. How can one extend Piltz's Problem to complex numbers?
II. What would that 'extension' be useful for?
III. Does this 'extension' have any geometrical interpretations such as the hyperbola method of both Dirichlet's and Piltz's Problems?
IV. Is there any direct relationship between the error bound on Piltz's Problem and the error bound for the extended problem?
I know these are a lot of questions, so thank you for your help.
|
I will illustrate with the example in the question, because a general answer is too complicated to write down.
Let $F$ be the common distribution function. We will need the distributions of the order statistics $x_{[1]} \le x_{[2]} \le \cdots \le x_{[n]}$. Their density functions $f_{[k]}$ are easy to express in terms of $F$ and its density $f=F^\prime$ because, heuristically, the chance that $x_{[k]}$ lies within an infinitesimal interval $(x, x+dx]$ is given by the trinomial distribution with probabilities $F(x)$, $f(x)dx$, and $(1-F(x+dx))$,
$$\eqalign{f_{[k]}(x)dx &= \Pr(x_{[k]} \in (x, x+dx]) \\&= \binom{n}{k-1,1,n-k} F(x)^{k-1} (1-F(x+dx))^{n-k} f(x)dx\\&= \frac{n!}{(k-1)!(1)!(n-k)!} F(x)^{k-1} (1-F(x))^{n-k} f(x)dx.}$$
Because the $x_i$ are iid, they are exchangeable: every possible ordering $\sigma$ of the $n$ indices has equal probability. $X$ will correspond to some order statistic, but
which order statistic depends on $\sigma$. Therefore let $\operatorname{Rk}(\sigma)$ be the value of $k$ for which
$$\eqalign{x_{[k]} = X = \max&\left( \min(x_{\sigma(1)},x_{\sigma(2)},x_{\sigma(3)}),\min(x_{\sigma(1)},x_{\sigma(4)},x_{\sigma(5)}), \right. \\& \left. \min(x_{\sigma(5)},x_{\sigma(6)},x_{\sigma(7)}),\min(x_{\sigma(3)},x_{\sigma(6)},x_{\sigma(8)})\right).}$$
The distribution of $X$ is a mixture over all the values of $\sigma\in\mathfrak{S}_n$. To write this down, let $p(k)$ be the number of reorderings $\sigma$ for which $\operatorname{Rk}(\sigma)=k$, whence $p(k)/n!$ is the chance that $\operatorname{Rk}(\sigma)=k$. Thus the density function of $X$ is
$$\eqalign{g(x) &= \frac{1}{n!} \sum_{\sigma \in \mathfrak{S}_n} f_{[\operatorname{Rk}(\sigma)]}(x) \\&= \frac{1}{n!}\sum_{k=1}^n p(k)\binom{n}{k-1,1,n-k} F(x)^{k-1} (1-F(x))^{n-k} f(x) \\&=\left(\sum_{k=1}^n \frac{p(k)}{(k-1)!(n-k)!}F(x)^{k-1} (1-F(x))^{n-k} \right)f(x) .}$$
I do not know of any general way to find the $p(k)$. In this example, exhaustive enumeration gives
$$\begin{array}{l|rrrrrrrrr}k & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\\\hline p(k) & 0 & 20160 & 74880 & 106560 & 92160 & 51840 & 17280 & 0 & 0\end{array}$$
The figure shows a histogram of $10,000$ simulated values of $X$ where $F$ is an Exponential$(1)$ distribution. On it is superimposed in red the graph of $g$. It fits beautifully.
The R code that produced this simulation follows.
set.seed(17)
n.sim <- 1e4
n <- 9
# p(k) counts from the exhaustive enumeration above
p <- c(0, 20160, 74880, 106560, 92160, 51840, 17280, 0, 0)
x <- matrix(rexp(n.sim*n), n)   # each column is one simulated sample of size n
X <- pmax(pmin(x[1,], x[2,], x[3,]),
pmin(x[1,], x[4,], x[5,]),
pmin(x[5,], x[6,], x[7,]),
pmin(x[3,], x[6,], x[8,]))
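# density g(x) from the formula above, specialized to F = Exponential(1)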
f <- function(x, p) {
n <- length(p)
y <- outer(1:n, x, function(k, x) {
pexp(x)^(k-1) * pexp(x, lower.tail=FALSE)^(n-k) * dexp(x) * p[k] /
(factorial(k-1) * factorial(n-k))
})
colSums(y)
}
hist(X, freq=FALSE)
curve(f(x, p), add=TRUE, lwd=2, col="Red")
|
In the Kerr metric the ring singularity is located at the coordinate radius $r=0$, which corresponds to a ring with the cartesian radius $R=a$.
So the center of the ring singularity in cartesian coordinates is at $r=-a, \ \theta=\pi/2$.
But the center in cartesian coordinates is also at $r=0, \ \theta=0$ (at $r=0$ all $\theta$ are in the equatorial plane, at least in Boyer–Lindquist and also in Kerr–Schild coordinates).
Is the diameter of the ring singularity then given by $$ (1) \ \ \ \ \theta=\pi/2 , \ \ d =2 \int_{-a}^0 \sqrt{|g_{rr}|} \ \ {\rm d}r = 2 \sqrt{(2-a) a}+4 \arcsin \left(\sqrt{\frac{a}{2}}\right)$$
in the equatorial plane, or is it rather
$$ (2) \ \ \ \ r=0 , \ \ d =\int_{-\pi/2}^{\pi/2} \sqrt{|g_{\theta \theta}|} \ \ {\rm d}\theta = 2a$$
since that should also cover the distance from one side of the ring to the opposite.
Approach $(2)$ gives exactly the diameter in cartesian coordinates, but I don't know if that's supposed to be so or only a coincidence, since otherwise the metric distance is not necessarily the same as the coordinate or cartesian distance.
So which one is it, $(1)$ or $(2)$? Or is it done in a completely different way?
The coordinates I used are Kerr–Schild coordinates, which should cover the inside, with the relevant components
$g_{r r}=-\frac{2 r}{a^2 \cos ^2 \theta +r^2}-1 \ , \ \ g_{\theta \theta }= -r^2 - a^2 \cos^2 \theta$
I guess it is approach $(2)$, since no one can enter the ring singularity from the equatorial plane, but I'd like to hear a second opinion on that.
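For what it's worth, the closed form in $(1)$ is easy to check numerically; a sketch in R with an illustrative spin $a=0.8$, using the Kerr–Schild $g_{rr}$ quoted above evaluated at $\theta=\pi/2$:

a <- 0.8
grr <- function(r) -2/r - 1                                   # g_rr at theta = pi/2
d_num <- 2 * integrate(function(r) sqrt(abs(grr(r))), -a, 0)$value
d_closed <- 2*sqrt((2 - a)*a) + 4*asin(sqrt(a/2))
c(d_num, d_closed)                                            # should agree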
|
I am confused. For static correlation, we use a combination of Slater determinants to account for electronic correlation (known as CI, or configuration interaction), and the wave function is represented as a sum of various SDs, where the HF SD is the base function and higher excited SDs contribute some part. Similarly, for dynamic electronic correlation, we use a basis set expansion to account for higher excitation orbitals. What is the fundamental difference between these two correlations? Both essentially expand the electronic space to allow more relaxation. What am I missing here? Thanks
$\newcommand{\el}{_\mathrm{e}}$In quantum chemistry, when a nomenclature is used in which one distinguishes between "static" and "dynamic" correlation, "correlation" refers to all the deficiencies of the Hartree-Fock (HF) single-determinantal approach. For instance, the
correlation energy is defined as the difference between the exact (non-relativistic) energy and the HF energy (calculated with a complete basis),$$ E_{\mathrm{corr}} = E_{\mathrm{exact}} - E_{\mathrm{HF}} \, .$$
Now, what are the deficiencies of a Hartree-Fock approach?
First, electrons in this model do not
instantaneously interact with each other, as they do in reality, but rather each and every electron interacts with the average, or mean, field created by all other electrons. Classically speaking, each electron moves in a way such that it avoids locations in close proximity to the instantaneous positions of all other electrons. And the failure of the HF model to correctly reproduce such motion of electrons is the first source of $E_{\mathrm{corr}}$. This type of correlation is called dynamic correlation, since it is directly related to electron dynamics.
Secondly, the wave function in the HF model is a single Slater determinant, which might be a rather poor representation of a many-electron system's state: in certain cases an electronic state can be well described only by a linear combination of more than one (nearly-)degenerate Slater determinants. This is the second reason why $E_{\mathrm{HF}}$ may differ from $E_{\mathrm{exact}}$, and the corresponding type of correlation is called static, or nondynamic, since it is not related to electron dynamics.
Interestingly, both static and dynamic correlation effects can be taken into account by "mixing in" more Slater determinants $\Phi_i$ to the Hartree-Fock one $\Phi_0$, $$ \Psi\el(\vec{r}\el) = c_0 \Phi_0 + \sum_i c_i \Phi_i \, . $$ Here, if $c_0$ is assumed to be close to $1$ and a large number of excited determinants $\Phi_i$ are added, each of which is assumed to give only a small contribution, then the method primarily treats dynamic correlation. And if, on the other hand, it is assumed that there are just a few excited determinants $\Phi_i$ with weights close to that of the reference determinant $\Phi_0$, then the method primarily treats static correlation.
An example of a method that recovers primarily dynamic correlation is Møller–Plesset perturbation theory (MP$n$), while multi-configurational self-consistent field (MCSCF) method primarily takes account of static correlation. Note the word "primarily" here and above. It is almost impossible in principle to keep dynamic and static correlation effects separated since they both arise from the very same physical interaction. Thus, methods that typically cover dynamical correlation effects include at high-order also some of the non-dynamical correlation effects and vice versa.
|
The following argument is essentially an application of the path lifting property for covering spaces.
Let's think about $\mathbb{R}P^2$ as being the quotient space you get by identifying antipodal points on the sphere $S^2$. That is, let $x\sim -x$, let $\mathbb{R}P^2=S^2/\sim$ and let $p\colon S^2\rightarrow\mathbb{R}P^2$ be the quotient map. Let $z$ be the base point of $S^2$ and $y$ be the base point of $\mathbb{R}P^2$.
Now, consider a non-trivial loop $\gamma\colon[0,1]\rightarrow\mathbb{R}P^2$ based at the point $y\in\mathbb{R}P^2$ (so $\gamma$ cannot be homotoped to a constant loop). Note that the preimage of $y$ under $p$ is exactly two points in $S^2$, namely $z$ and $-z$. If we lift the loop $\gamma$ up to $S^2$ through the covering map $p$, the end points of the lifted path $\tilde{\gamma}\colon[0,1]\rightarrow S^2$ will either both be at $z$, or $\tilde{\gamma}(0)=z$ and $\tilde{\gamma}(1)=-z$.
But note that if both end points are at $z$, then $\tilde{\gamma}$ is a loop and we know that $S^2$ is simply connected so such a loop can be homotoped to a constant loop. Such a homotopy induces a similar homotopy in the loop $\gamma$ and so $\gamma$ must be trivial. This is a contradiction as we asked for $\gamma$ to be non-trivial. So, $\tilde{\gamma}(0)=z$ and $\tilde{\gamma}(1)=-z$.
Now, in this case, the path $\tilde{\gamma}$ can not be homotoped to a constant loop without moving the fixed ends of the path
but if we consider the lift of the path $2\gamma$ (the loop $\gamma$ traversed twice) via $p$, then the lifted path $\widetilde{2\gamma}$ is a loop in $S^2$. Again, $S^2$ is simply connected, so such a loop can be homotoped to a constant loop; such a homotopy induces a similar homotopy of the loop $2\gamma$, and so $2\gamma$ is a trivial loop.
|
Let $\sigma_1,\sigma_2$ be the real and complex embeddings, and let $f = (\sigma_1,\sigma_2) : K \to \Bbb R \times \Bbb C$.
Then $f$ preserves addition and multiplication, and $N(x) = \sigma_1(x)\,|\sigma_2(x)|^2$.
So units of norm $1$ all lie on the surface $S = \{(y,z) \in \Bbb R \times \Bbb C \mid y|z|^2=1 \}$. Suppose that you have found a unit (of norm $1$) $u$ with $\sigma_1(u) > 1$. (If you got $\sigma_1(u)<1$, just pick $1/u$ instead.)
Then any $n$th root of $u$ is sent by $f$ onto the compact piece of the surface $S$ given by the condition $1 \le y \le \sigma_1(u)$.
Then you only have to look for possible lattice points on that compact surface.
More precisely this gives you some bounds on the coordinates of the possible roots of $u$ in whatever basis for $\mathfrak O_K$ you have, then it is a finite computation to check all of them.
The elements of norm $-1$ are $(-1)$ times an element of norm $1$ so you don't need to check for a possible square root of $u$ of norm $-1$ once you have found the $u$ with the $y$-component closest to $1$.
Here's how to check @Lubin's comment: Let $u = 2^{1/3}$ and $v = u-1$. We want to be sure that $v$ is a fundamental unit.
We have $f(a+bu+cu^2) = (a+bu+cu^2,\ a-\frac12(bu+cu^2) + i \frac{\sqrt3}2(bu-cu^2))$.
Then $f^{-1}(x,y+iz) = (\frac 13(x+2y), \frac 1{3u}(x-y+\sqrt3 z), \frac 1{3u^2}(x-y-\sqrt3 z))$
Very roughly, the surface $x|y+iz|^2=1$, restricted to $\sqrt v \le x \le 1$, is bounded by $|y|,|z| \le \frac 1{|x|} \le \frac 1 {\sqrt v}$.
Going back to $a,b,c$ we get the bounds
$-1.137698 = \frac 13 (\sqrt v -2/\sqrt v) \le a \le \frac13 (1 + 2/\sqrt v) = 1.640973$ $-1.282880 = \frac 1{3u}(\sqrt v-(1+\sqrt3)/\sqrt v) \le b \le \frac 1{3u}(1+(1+\sqrt 3)/\sqrt v) = 1.682329$ $-1.018222 = \frac 1{3u^2}(\sqrt v-(1+\sqrt3)/\sqrt v) \le c \le \frac 1{3u^2}(1+(1+\sqrt 3)/\sqrt v) = 1.335266$
Hence if $v$ has a norm $1$ root $a+bu+cu^2$, we have $a,b,c \in \{-1;0;1\}$ which leaves $27$ candidates. The elements of norm $1$ among those are $1,-1+u=v,1+u+u^2=1/v$.
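A brute-force check of those $27$ candidates in R, using the norm form $N(a+bu+cu^2)=a^3+2b^3+4c^3-6abc$ of $\Bbb Q(2^{1/3})$; a sketch:

cand <- expand.grid(a = -1:1, b = -1:1, c = -1:1)
N <- with(cand, a^3 + 2*b^3 + 4*c^3 - 6*a*b*c)   # norm form for x^3 - 2
cand[N == 1, ]    # exactly (1,0,0), (-1,1,0), (1,1,1): i.e. 1, v and 1/v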
As a reality check, we can compute the first few roots in $\Bbb R \times \Bbb C$ and then look at their coefficients in $\Bbb R \otimes_\Bbb Q K$.
The square roots of norm $1$ are $(-0.101474-0.371471u+0.679930u^2),(-0.441357+0.641236u-0.465818u^2)$
The cube roots are $(0.480750-0.480750u+0.480750u^2),(-0.605707+0.605707u+0.302853u^2),(0.763143+0.381571u-0.381571u^2)$
|
A \(\varGamma \)-magic Rectangle Set and Group Distance Magic Labeling Abstract
A \(\varGamma \)-distance magic labeling of a graph \(G = (V, E)\) with \(|V| = n\) is a bijection \(\ell \) from \(V\) to an Abelian group \(\varGamma \) of order \(n\) such that the weight \(w(x) =\sum _{y\in N_G(x)}\ell (y)\) of every vertex \(x \in V\) is equal to the same element \(\mu \in \varGamma \), called the magic constant. A graph \(G\) is called a group distance magic graph if there exists a \(\varGamma \)-distance magic labeling for every Abelian group \(\varGamma \) of order \(|V(G)|\).
A \(\varGamma \)-magic rectangle set \(MRS_{\varGamma }(a, b; c)\) of order \(abc\) is a collection of \(c\) arrays (\(a\times b\)) whose entries are elements of the group \(\varGamma \), each appearing once, with all row sums in every rectangle equal to a constant \(\omega \in \varGamma \) and all column sums in every rectangle equal to a constant \(\delta \in \varGamma \).
In the paper we show that if \(a\) and \(b\) are both even then an \(MRS_{\varGamma }(a, b; c)\) exists for any Abelian group \(\varGamma \) of order \(abc\). Furthermore we use this result to construct group distance magic labelings for some families of graphs.

Keywords: Distance magic labeling; Magic constant; Sigma labeling; Graph labeling; Cartesian product; \(\varGamma \)-magic rectangle set
|
In the chapter on Artinian rings in "Introduction to Commutative Algebra" by Atiyah and MacDonald, we have:
Proposition 8.6. Let $(A,\mathfrak{m})$ be a local Noetherian ring. Then exactly one of the following holds:
$\frak{m}^n\neq\frak{m}^{n+1}$ for all $n \in \mathbb{N}$;
$\frak{m}$ is a nilpotent ideal, in which case, $A$ is Artinian.
Proposition 8.8. Let $(A,\mathfrak{m})$ be a local Artinian ring. Then the following are equivalent:
$A$ is a principal ideal ring;
$\frak{m}$ is principal;
$\dim_K(\frak{m}/\frak{m}^2)\leq 1$ (where $K=A/\frak{m}$ is the residue field).
In the proof of 8.8, A&M quickly boil down to the case:
$$\dim_K(\mathfrak{m}/\mathfrak{m}^2)= 1 \Rightarrow A \text{ is a principal ideal ring.}$$
A&M begin by explaining that $\mathfrak{m}$ is nilpotent, which is because $\mathfrak{m}$ equals the Jacobson radical of $A$ (as $A$ is a local ring) and the Jacobson radical of an Artinian ring is nilpotent (as A&M prove earlier).
However, doesn't this also follow from 8.6 as Artinian rings are Noetherian (which A&M prove in Proposition 8.5)?
|
In Euclidean domains, such as $\mathbb Z$ and $\rm\:F[x],\:$ the gcd is often defined as a common divisor that is "greatest" as measured by the Euclidean valuation, here $\rm\:|n|\:$ and $\rm\:deg\ f(x)\:$ resp. But general integral domains may not come equipped with such structure, so in this more rarified atmosphere one is forced to rely only on the divisibility relation itself to specify the appropriate extremality property. Namely, one employs the following
universal dual definitions of LCM and GCD
Definition of LCM $\quad$ If $\quad\rm a,b\mid c \iff [a,b]\mid c \quad$ then $\,\quad\rm [a,b] \;$ is an LCM of $\:\rm a,b$
Definition of GCD $\quad$ If $\quad\rm c\mid a,b \iff c\mid (a,b) \quad$ then $\quad\rm (a,b) \;$ is a GCD of $\:\rm a,b$
Notice $\rm\,(a,b)\mid a,b\,$ by the direction $(\Rightarrow)$ for $\rm\,c=(a,b),\,$ i.e. $\rm\,(a,b)\,$ is a common divisor of $\rm\,a,b.\,$ Conversely, direction $(\Leftarrow)$ implies $\rm\,(a,b)\,$ is divisible by every common divisor $\,\rm c\,$ of $\rm\,a,b;\,$ therefore $\rm\,(a,b)$ is a "greatest" common divisor in the (partial) order given by divisibility. Similarly, dually, the lcm definition specifies it as a common multiple that is "least" in the divisibility order.
One easily checks that this universal definition is equivalent to the more specific notions employed in Euclidean domains such as $\,\Bbb Z\,$ and $\rm\,F[x].$
Such universal definitions frequently enable one to give slick bidirectional proofs that concisely unify both arrow directions, e.g. consider the following proof of the fundamental lcm $*$ gcd law.
Theorem $\rm\;\; (a,b) = ab/[a,b] \;\;$ if $\;\rm\ [a,b] \;$ exists.
Proof $\ \ \ \rm d\mid (a,b)\iff d\:|\:a,b \iff a,b\:|\:ab/d \iff [a,b]\:|\:ab/d \iff d\:|\:ab/[a,b] $
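As a quick numerical illustration of the theorem (a sketch in R; gcd computed by the Euclidean algorithm):

gcd <- function(a, b) if (b == 0) abs(a) else gcd(b, a %% b)   # Euclidean algorithm
lcm <- function(a, b) abs(a*b) / gcd(a, b)
a <- 12; b <- 18
c(gcd(a, b), a*b / lcm(a, b))    # both equal 6, as (a,b) = ab/[a,b]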
Many properties of domains are purely multiplicative so can be described in terms of monoid structure. Let R be a domain with fraction field K. Let R* and K* be the multiplicative groups of units of R and K respectively. Then G(R), the
divisibility group of R, is the factor group K*/R*.
R is a UFD $\iff$ G(R) $\:\rm\cong \mathbb Z^{\:I}\:$ is a sum of copies of $\rm\:\mathbb Z\:.$
R is a gcd-domain $\iff$ G(R) is lattice-ordered (lub{x,y} exists)
R is a valuation domain $\iff$ G(R) is linearly ordered
R is a Riesz domain $\iff$ G(R) is a Riesz group, i.e. an ordered group satisfying the Riesz interpolation property: if $\rm\:a,b \le c,d\:$ then $\rm\:a,b \le x \le c,d\:$ for some $\rm\:x\:.\:$ A domain $\rm\:R\:$ is Riesz if every element is primal, i.e. $\rm\:A\:|\:BC\ \Rightarrow\ A = bc,\ b\:|\:B,\ c\:|\:C,\:$ for some $\rm b,c\in R.$
For more on divisibility groups see the following surveys:
J.L. Mott. Groups of divisibility: A unifying concept for integral domains and partially ordered groups, Mathematics and its Applications, no. 48, 1989, pp. 80-104.
J.L. Mott. The group of divisibility and its applications, Conference on Commutative Algebra (Univ. Kansas, Lawrence, Kan., 1972), Springer, Berlin, 1973, pp. 194-208. Lecture Notes in Math., Vol. 311. MR 49 #2712
|
Let $\{(x_i, y_i), 1\le i\le n\}$ be the pairwise values of the observations and responses respectively. Let us fit the linear regression model: $y_i=b_0+b_1 x_i+\epsilon_i, \epsilon_i\sim\mathcal{N}(0,\sigma^2)$ are iid.
I'd like to find a necessary and sufficient condition for this above model to be identifiable.
Let me explain, just in case: let $\theta=(b_0, b_1, \sigma^2)$ be a vector of unknown parameters, and let $\varphi=(a_0,a_1, \nu^2)$ be another such set. Let us assume that they give rise to the same distribution; i.e., writing $y=(y_1, y_2, \ldots, y_n)$, assume that $\sum_{i=1}^{n} \frac{(y_i-b_0-b_1 x_i)^2}{\sigma^2}=\sum_{i=1}^{n} \frac{(y_i-a_0-a_1 x_i)^2}{\nu^2}$ for all $(y_1,y_2,\ldots,y_n)\in \mathbb{R}^{n}$. We need a condition on $(x_1,x_2,\ldots,x_n)$ so that this equality implies $\theta=\varphi$.
Question 1: I'm new to the whole identifiability definition, but this is exactly what we need to check, right? If not, please correct me!
Question 2: What is the condition on $(x_1,x_2,...x_n)$ in order for the model to be identifiable?
|
In his book, Algorithmic Trading: Winning Strategies and Their Rationale, Ernie Chan shows how to use a Kalman filter to improve the returns of a cointegrated portfolio. Recall that the state equation is: $$\beta_t=\alpha\cdot\beta_{t-1}+\omega_{t-1}$$ Here, $\alpha$ is the state transition matrix, $\beta_t$ is the state vector, and $\omega_t$ is the process noise vector.
In his Kalman filter code, Chan sets the state transition matrix, $\alpha$, to the identity matrix. However, I would argue that this is wrong. Recall that the state vector is used in the measurement equation: $$y_t=\beta_t\cdot x_t + \epsilon_t$$
For k cointegrating time series, the observation vector, $x_t$, the state vector, $\beta_t$, and the observable, $y_t$, are: $$x_t=(1,\;x_{2,t},\;x_{3,t},\;...\;,\;x_{k,t})$$ $$\beta_t=(\beta_{1,t},\;\beta_{2,t},\;\beta_{3,t},\;...\;,\;\beta_{k,t})$$ $$y_t = x_{1,t}$$
The state vector is related to the static weights ($w_1,w_2,...,w_k$) obtained from the Johansen procedure because these weights give a stationary time series, $y_{\text{port}}$:$$y_{\text{port}}=w_1\cdot x_1\:+\:w_2\cdot x_2\:+\:w_3\cdot x_3\:+\:...\:+\:w_k\cdot x_k$$Solving for $y=x_1$, we get:$$x_1=(y_{\text{port}}\:-\:w_2\cdot x_2\:-\:w_3\cdot x_3\:-\:...\:-\:w_k\cdot x_k)\,/\,w_1$$Therefore, from the measurement equation, the initial state vector at time $t$ must be:$$\beta_t=(y_{\text{port},t}/w_1,\;-w_2/w_1,\;-w_3/w_1,\;...\;,\;-w_k/w_1)$$The first column of matrix $\beta$ is $y_{\text{port}}/w_1$, which is stationary because $y_{\text{port}}$ is stationary. Therefore, the state estimate for this component of $\beta$ at time $t$ is:$$\beta_{1,t}=\alpha_{11}\cdot\beta_{1,t-1}$$where $\alpha_{11}$
must be less than 1 for the stationary case. This analysis indicates that the correct state transition matrix is not the identity matrix, but rather:$$\alpha=\left[ \begin{array}{ccccc} \alpha_{11} & 0 & 0 & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ 0 & 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1 \end{array} \right]$$In other words, the state transition matrix is the identity matrix except for the $(1,1)$ element, which must be less than $1$ (for stationarity).
How do we calculate $\alpha_{11}$? Right-multiply both sides of the state equation by $\beta_{t-1}^T$, take the expectation value, and solve for $\alpha_{11}$:
$$\alpha_{11}=\frac{\sum_{t=2}^n\beta_{1,t}\cdot\beta_{1,t-1}^T}{\sum_{t=2}^n \beta_{1,t-1}\cdot\beta_{1,t-1}^T}$$
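In code this is just the least-squares AR(1) coefficient of the $\beta_{1}$ series; a sketch in R, with a simulated AR(1) series standing in for the filtered state estimates:

set.seed(1)
beta1 <- as.numeric(arima.sim(model = list(ar = 0.93), n = 500))   # stand-in for beta_{1,t}
alpha11 <- sum(beta1[-1] * beta1[-500]) / sum(beta1[-500]^2)
alpha11    # close to the true 0.93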
I’m currently trading with a cointegrated triplet of ETFs that give a stationary portfolio, $y_{\text{port}}$. When I apply a unit root test (which measures stationarity) to $y_{\text{port}}$, I get the following p-values using a state transition matrix in the Kalman filter with different values for the (1,1) element of the state transition matrix, $\alpha$:
\begin{array}{|c|c|} \hline \alpha_{11} & p\\ \hline 1 & 0.00086\\ \hline 0.93 & 0.00058\\ \hline 0.007 & 3\times10^{-13}\\ \hline \end{array}
Smaller values of p suggest greater likelihood of stationarity. All of these p-values indicate stationarity, but the modified transition matrix ($\alpha_{11}=0.93$) gives better results than the identity matrix ($\alpha_{11}=1$). In the last example ($\alpha_{11}=0.007$), I iterated the Kalman filter calculation 1000 times, each time recalculating $\alpha_{11}$, as well as the process noise covariance matrix, the observation noise variance, the initial state vector, and the initial state covariance matrix, with each iteration (a procedure known as "adaptive tuning" of the Kalman filter). This gave very high stationarity for the $y_{\text{port}}$ array. (This is also reflected in higher returns in backtesting of the trading algorithm.)
I haven't seen this analysis in the literature on Kalman filter in financial time series. Can anyone find fault with it?
|
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
|
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2013-10)
Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Anisotropic flow of charged hadrons, pions and (anti-)protons measured at high transverse momentum in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2013-03)
The elliptic, $v_2$, triangular, $v_3$, and quadrangular, $v_4$, azimuthal anisotropic flow coefficients are measured for unidentified charged particles, pions, and (anti-)protons in Pb–Pb collisions at $\sqrt{s_{NN}}$ = ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76 TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
|
I'm interested because I want to show that $x^2-34y^2\equiv -1\pmod{m}$ has solutions for all integers $m$. I started by using the following reasoning:
If $3\nmid m$, then $\gcd(m,3)=1$. Then there exists a multiplicative inverse $\bar{3}$ modulo $m$. I note that $5^2-34=-(3^2)$, and thus $\bar{3}^2(5^2-34)\equiv (\bar{3}\cdot 5)^2-34(\bar{3}^2)\equiv -(\bar{3})^2(3^2) \equiv -1\pmod{m}$. And thus $(\bar{3}\cdot 5, \bar{3})$ is a solution modulo $m$.
Similarly, if $5\nmid m$, then $(m,5)=1$. Then since $3^2-34=-(5^2)$, then I also have $\bar{5}^2(3^2-34)\equiv (\bar{5}\cdot 3)^2-34(\bar{5}^2)\equiv -(\bar{5})^2(5^2)\equiv -1\pmod{m}$.
So for any $m$ not divisible by $3$ or $5$, there exists a solution. For $m$ divisible by $3$ or $5$, write the prime factorization $m=3^a5^b{p_1}^{q_1}\cdots {p_r}^{q_r}$ (with the $p_i$ distinct from $3$ and $5$). This gives the system of congruences
$x^2-34y^2 \equiv -1 \pmod{3^a}, x^2-34y^2 \equiv -1 \pmod{5^b}, x^2-34y^2 \equiv -1 \pmod{{p_i}^{q_i}}$
Then $5\nmid 3^a$, $3\nmid 5^b$, and $3\nmid {p_i}^{q_i}$ and $5\nmid {p_i}^{q_i}$, so each of the congruences has a solution. Does the Chinese Remainder Theorem then imply that there is a solution modulo $m$? I know that it holds for polynomials in one variable $x$, and that the number of solutions is the product of the number of solutions for each prime power modulus. Would the same result hold now that there are two variables in the polynomial? I haven't found any proofs to support or contradict the result. Thanks!
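For what it's worth, the claim is easy to confirm numerically for small moduli; a brute-force sketch in R:

hasSol <- function(m) {
  xy <- expand.grid(x = 0:(m-1), y = 0:(m-1))
  any((xy$x^2 - 34*xy$y^2 + 1) %% m == 0)     # x^2 - 34 y^2 = -1 (mod m)
}
all(sapply(1:60, hasSol))    # TRUE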
|
Cardinality of Set Union/2 Sets
Theorem
Let $S_1$ and $S_2$ be finite sets.
Then:
$\card {S_1 \cup S_2} = \card {S_1} + \card {S_2} - \card {S_1 \cap S_2}$

Proof
We have that Cardinality is Additive Function.
$\card {S_1 \cup S_2} + \card {S_1 \cap S_2} = \card {S_1} + \card {S_2}$
from which the result follows.
$\blacksquare$
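A one-line numerical illustration in R, for concreteness:

S1 <- c(1, 2, 3, 4); S2 <- c(3, 4, 5)
length(union(S1, S2)) == length(S1) + length(S2) - length(intersect(S1, S2))   # TRUE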
Sources
1965: J.A. Green: Sets and Groups: $\S 1.4$. Union: Example $16$
1975: T.S. Blyth: Set Theory and Abstract Algebra: $\S 1$. Sets; inclusion; intersection; union; complementation; number systems: Exercise $8$
1965: Seth Warner: Modern Algebra: Exercise $19.1$
1971: Allan Clark: Elements of Abstract Algebra: Chapter $1$: Mappings: $\S 15 \beta$
1978: Thomas A. Whitelaw: An Introduction to Abstract Algebra: Chapter $1$: Sets and Logic: Exercise $8 \ \text{(b)}$
1982: P.M. Cohn: Algebra Volume 1 (2nd ed.): Chapter $1$: Sets and mappings: $\S 1.2$: Sets: Exercise $3$
|
Coriolis acceleration
Coriolis acceleration is the acceleration due to the rotation of the earth, experienced by particles (water parcels, for example) moving along the earth's surface. Ocean currents are influenced by Coriolis acceleration.
Coriolis acceleration is generated by the eastward rotation of the earth around the N-S axis.
This acceleration can be considered a purely kinematic effect, by noting that time differentiation in a rotating frame introduces a term related to the rotation of the axes of reference [1]:
[math](d \vec r / dt)_{fixed frame} = \vec u + \vec \Omega \times \vec r ,[/math]
where [math]\vec r[/math] indicates the position of a fluid parcel on the rotating earth ([math]\vec r =0[/math] at the earth's centre), [math]\vec u \equiv d \vec r /dt[/math] is the velocity on the rotating earth, and [math]\vec \Omega[/math] is the rotation vector along the N-S rotation axis. The earth's rotation frequency is [math]\Omega \approx 7.3 \times 10^{-5} \, s^{-1}[/math]. The acceleration then follows from
[math](d^2 \vec r /dt^2)_{fixed frame} = d \vec u /dt + 2 \vec \Omega \times \vec u + \vec \Omega \times (\vec \Omega \times \vec r) .[/math]
The term [math]d \vec u /dt[/math] is the acceleration on the rotating earth, [math]2 \vec \Omega \times \vec u[/math] is the Coriolis acceleration and the term [math]\vec \Omega \times (\vec \Omega \times \vec r)[/math] is the component of the centrifugal force compensated by earth's attraction and adjustment of the equilibrium sea surface slope. If no forces are acting on the fluid and in the absence of friction, [math](d^2 \vec r /dt^2)_{fixed frame} =0 .[/math]
In order to find the more usual formulas for the Coriolis acceleration we use a system of axes on the rotating earth as indicated in figure 1, where [math]\theta[/math] is the azimuthal angle indicating longitude (expressed in radians), [math]\phi[/math] the elevation radian angle indicating latitude and [math]R[/math] is the earth radius. In this coordinate system the vector components are
[math]\vec u = (u=R \cos \phi \; d\theta / dt, v = R d \phi /dt, 0) , \; \vec \Omega = (0, \Omega \cos \phi, \Omega \sin \phi) .[/math]
The Coriolis acceleration is then given by:
[math]du/dt = f v, \; dv/dt = - fu, \; f=2 \Omega \sin \phi .[/math]
The Coriolis acceleration can be derived in a more intuitive way if we consider fluid velocities ([math]u,v[/math]) much smaller than the earth's surface rotation velocity [math]U = R \Omega \cos \phi[/math]. Due to the rotation of the earth, a centrifugal force, perpendicular to the earth's rotation axis, acts on ocean waters. This centrifugal force is only partly compensated by earth's attraction; it has a component acting along the surface of the earth towards the equator, see figure 2.
If water is at rest this tangential component of the centrifugal force is balanced by a sea surface slope. When seawater is brought into motion, this equilibrium is broken and water motion experiences Coriolis acceleration. A current [math]u[/math] in east-direction will experience an acceleration toward the equator as a result of the increased tangential component of the centrifugal force:
[math] dv/dt = - \sin \phi [(U+u)^2 - U^2]/R \cos\phi \approx - fu[/math] with [math]f = 2 \Omega \sin{\phi} . [/math]
The approximation is very accurate because in natural situations [math]|u| \ll |U|[/math].
When fluid is travelling northward along the earth's surface with velocity [math]v = R d\phi/dt ,[/math] it experiences a decrease of surface rotation velocity [math]U[/math]. The principle of angular momentum conservation implies that the fluid will accelerate in order to conserve its angular momentum. This can be expressed as
[math]d[(U + u) R \cos \phi ] / dt = d[\Omega R^2 \cos^2 \phi + u R \cos \phi ] / dt =0 .[/math]
From this equation we find [math]du/dt = fv[/math], assuming again that [math]|u| \ll |U| .[/math]
When fluid parcels are set in motion they thus experience an acceleration to the right on the northern hemisphere ([math]\phi\gt 0[/math]) and an acceleration to the left on the southern hemisphere. Because [math]f[/math] is small (typically on the order of [math]10^{-4}[/math]), earth's rotation hardly influences currents which fluctuate with higher frequencies, such as wave orbital motion. However, the influence on stationary currents or slowly varying currents, such as wind-driven flow and tidal motion, can be significant (see Coriolis and tidal motion in shelf seas). This is the case in wide basins, with width of the order of [math]|\vec u|/f[/math] or larger.
A fluid moving freely (no external or frictional forces) in a wide basin (no water surface inclination), with initial velocity [math]V[/math], will describe a circular motion due to Coriolis acceleration:
[math]u=V \sin ft , \; v= V \cos ft .[/math]
The fluid follows an inertial circle with radius [math]V/f[/math]. This flow pattern can be sometimes observed when a strong river jet enters a wide basin.
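The inertial circle is easy to visualise numerically; a sketch in R with illustrative mid-latitude values for [math]f[/math] and [math]V[/math]:

f <- 1e-4; V <- 0.5                      # Coriolis parameter (1/s) and initial speed (m/s)
t <- seq(0, 2*pi/f, length.out = 1000)   # one inertial period
x <- (V/f)*(1 - cos(f*t))                # integral of u = V sin(ft)
y <- (V/f)*sin(f*t)                      # integral of v = V cos(ft)
plot(x, y, type = "l", asp = 1)          # circle of radius V/f (here 5 km)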
Job Dronkers (2019): Coriolis acceleration. Available from http://www.coastalwiki.org/wiki/Coriolis_acceleration [accessed on 23-09-2019]
|
PDE Geometric Analysis seminar
The seminar will be held in room B115 of Van Vleck Hall on Mondays from 3:30pm - 4:30pm, unless indicated otherwise.
Seminar Schedule Fall 2013
September 9: Greg Drugan (U. of Washington), "Construction of immersed self-shrinkers" (host: Angenent)
October 7: Guo Luo (Caltech), "Potentially Singular Solutions of the 3D Incompressible Euler Equations" (host: Kiselev)
November 18: Roman Shterenberg (UAB), "Recent progress in multidimensional periodic and almost-periodic spectral problems" (host: Kiselev)
November 25: Myeongju Chae (Hankyong National University visiting UW), "On the global classical solution of the Keller-Segel-Navier-Stokes system and its asymptotic behavior" (host: Kiselev)
December 2: Xiaojie Wang, "Uniqueness of Ricci flow solutions on noncompact manifolds" (host: Wang)
December 16: Antonio Ache (Princeton), "Ricci Curvature and the manifold learning problem" (host: Viaclovsky). NOTE: Due to final exams, this seminar will be held in B231.

Seminar Schedule Spring 2014
January 14 at 4pm in B139 (TUESDAY), joint with Analysis: Jean-Michel Roquejoffre (Toulouse), "Front propagation in the presence of integral diffusion" (host: Zlatos)
February 10: Myoungjean Bae (POSTECH), "Free Boundary Problem related to Euler-Poisson system" (host: Feldman)
February 24: Changhui Tan (Maryland), "Global classical solution and long time behavior of macroscopic flocking models" (host: Kiselev)
March 3: Hongjie Dong (Brown), "Parabolic equations in time-varying domains" (host: Kiselev)
March 10: Hao Jia (University of Chicago), TBA (host: Kiselev)
March 31: Alexander Pushnitski (King's College London), TBA (host: Kiselev)
April 7: Zoran Grujic (University of Virginia), TBA (host: Kiselev)
April 21: Ronghua Pan (Georgia Tech), TBA (host: Kiselev)

Seminar Schedule Fall 2014
September 22 (joint with Analysis Seminar): Steven Hofmann (U. of Missouri), TBA (host: Seeger)

Abstracts

Greg Drugan (U. of Washington), "Construction of immersed self-shrinkers"
Abstract: We describe a procedure for constructing immersed self-shrinking solutions to mean curvature flow. The self-shrinkers we construct have a rotational symmetry, and the construction involves a detailed study of geodesics in the upper-half plane with a conformal metric. This is a joint work with Stephen Kleene.
Guo Luo (Caltech) Potentially Singular Solutions of the 3D Incompressible Euler Equations
Abstract: Whether the 3D incompressible Euler equations can develop a singularity in finite time from smooth initial data is one of the most challenging problems in mathematical fluid dynamics. This work attempts to provide an affirmative answer to this long-standing open question from a numerical point of view, by presenting a class of potentially singular solutions to the Euler equations computed in axisymmetric geometries. The solutions satisfy a periodic boundary condition along the axial direction and no-flow boundary condition on the solid wall. The equations are discretized in space using a hybrid 6th-order Galerkin and 6th-order finite difference method, on specially designed adaptive (moving) meshes that are dynamically adjusted to the evolving solutions. With a maximum effective resolution of over $(3 \times 10^{12})^{2}$ near the point of singularity, we are able to advance the solution up to $\tau_{2} = 0.003505$ and predict a singularity time of $t_{s} \approx 0.0035056$, while achieving a pointwise relative error of $O(10^{-4})$ in the vorticity vector $\omega$ and observing a $(3 \times 10^{8})$-fold increase in the maximum vorticity $\|\omega\|_{\infty}$. The numerical data is checked against all major blowup (non-blowup) criteria, including Beale-Kato-Majda, Constantin-Fefferman-Majda, and Deng-Hou-Yu, to confirm the validity of the singularity. A careful local analysis also suggests that the blowing-up solution develops a self-similar structure near the point of the singularity, as the singularity time is approached.
Xiaojie Wang (Stony Brook) Uniqueness of Ricci flow solutions on noncompact manifolds
Abstract: Ricci flow is an important evolution equation for Riemannian metrics. Since it was introduced by R. Hamilton in 1982, it has greatly changed the landscape of Riemannian geometry. One of the fundamental questions about Ricci flow is when the solution to the initial value problem is unique. On a compact manifold, with arbitrary initial metric, this was confirmed by Hamilton. On a noncompact manifold, we only know this is true when further restrictions are imposed on the solution. In this talk, we will discuss various conditions that guarantee uniqueness. In particular, we will discuss in detail the following uniqueness result. Let $(M,g)$ be a complete noncompact non-collapsing $n$-dimensional Riemannian manifold, whose complex sectional curvature is bounded from below and scalar curvature is bounded from above. Then Ricci flow with the above as its initial data, on $M\times [0,\epsilon]$ for some $\epsilon>0$, has at most one solution in the class of complete Riemannian metrics with complex sectional curvature bounded from below.
Roman Shterenberg (UAB) Recent progress in multidimensional periodic and almost-periodic spectral problems
Abstract: We present a review of the results in multidimensional periodic and almost-periodic spectral problems. We discuss some recent progress and old/new ideas used in the constructions. The talk is mostly based on the joint works with Yu. Karpeshina and L. Parnovski.
Antonio Ache (Princeton) Ricci Curvature and the manifold learning problem
Abstract: In the first half of this talk we will review several notions of coarse or weak Ricci curvature on metric measure spaces, which include the works of Lott-Villani, Sturm and Ollivier. The discussion of the notion of coarse Ricci curvature will serve as motivation for developing a method to estimate the Ricci curvature of an embedded submanifold of Euclidean space from a point cloud, which has applications to the manifold learning problem. Our method is based on combining the notion of "carré du champ" introduced by Bakry-Émery with a result of Belkin and Niyogi which shows that it is possible to recover the rough Laplacian of embedded submanifolds of Euclidean space from point clouds. This is joint work with Micah Warren.
Jean-Michel Roquejoffre (Toulouse) Front propagation in the presence of integral diffusion
Abstract: In many reaction-diffusion equations, where diffusion is given by a second order elliptic operator, the solutions will exhibit spatial transitions whose velocity is asymptotically linear in time. The situation can be different when the diffusion is of the integral type, the most basic example being the fractional Laplacian: the velocity can be time-exponential. We will explain why, and discuss several situations where this type of fast propagation occurs.
Myoungjean Bae (POSTECH) Free Boundary Problem related to Euler-Poisson system
Abstract: One-dimensional analysis of the Euler-Poisson system shows that, when the incoming supersonic flow is fixed, a transonic shock can be represented as a monotone function of the exit pressure. From this observation, we expect well-posedness of the transonic shock problem for the Euler-Poisson system when the exit pressure is prescribed in a proper range. In this talk, I will present recent progress on the transonic shock problem for the Euler-Poisson system, which is formulated as a free boundary problem with a mixed-type PDE system. This talk is based on collaboration with Ben Duan (POSTECH), Chujing Xie (SJTU) and Jingjing Xiao (CUHK).
Changhui Tan (University of Maryland) Global classical solution and long time behavior of macroscopic flocking models
Abstract: Self-organized behaviors are very common in nature and human societies. One widely discussed example is the flocking phenomenon which describes animal groups emerging towards the same direction. Several models such as Cucker-Smale and Motsch-Tadmor are very successful in characterizing flocking behaviors. In this talk, we will discuss macroscopic representation of flocking models. These systems can be interpreted as compressible Eulerian dynamics with nonlocal alignment forcing. We show global existence of classical solutions and long time flocking behavior of the system, when initial profile satisfies a threshold condition. On the other hand, another set of initial conditions will lead to a finite time break down of the system. This is a joint work with Eitan Tadmor.
Hongjie Dong (Brown University) Parabolic equations in time-varying domains
Abstract: I will present a recent result on the Dirichlet boundary value problem for parabolic equations in time-varying domains. The equations are in either divergence or non-divergence form with boundary blowup low-order coefficients. The domains satisfy an exterior measure condition.
|
These are both wrong, as they don't obey the syntactic rules of first-order logic. The rules are:
if $P$ is an $n$-ary predicate and $x_1, \dots, x_n$ are variables, then $P(x_1, \dots, x_n)$ is a formula; if $\varphi$ and $\psi$ are formulas, then $\neg \varphi$, $\varphi\wedge\psi$ and $\varphi\vee\psi$ are formulas; if $\varphi$ is a formula and $x$ is a variable, then $\exists x\,\varphi$ and $\forall x\,\varphi$ are formulas; nothing else is a formula.
(If you like, you can say that $\varphi\rightarrow\psi$ and $\varphi \leftrightarrow \psi$ are formulas, or you can say that they're just abbreviations for $\neg\varphi\vee\psi$ and $(\neg\varphi\vee\psi)\wedge(\varphi\vee\neg\psi)$, respectively.)
There's often more than one way of translating any English-language statement into a first-order sentence. Decide what your domain is going to be and the rest will follow.
Every student who likes chocolate is smart. Here, you have several choices of domain, such as the following.
Domain is the set of all students: $\forall x\,\big(\mathrm{LikesChoc}(x)\rightarrow\mathrm{Smart}(x)\big)$. Since everything in the domain is a student, the variable $x$ can only be a student so you don't need a predicate to say that.
Domain is, say, the set of all people: $\forall x\,\big(\big(\mathrm{Student}(x) \wedge \mathrm{LikesChoc}(x)\big)\rightarrow \mathrm{Smart}(x)\big)$. The domain now includes things that aren't students, so we need to enforce that with a predicate. A professor who likes chocolate isn't necessarily smart.
Domain is, say, the set of all people and all foods. $$\forall x\,\big(\big(\mathrm{Student}(x) \wedge \exists y\,\big(\mathrm{Chocolate}(y) \wedge \mathrm{Likes}(x,y)\big)\big)\rightarrow \mathrm{Smart}(x)\big).$$ That is, for every entity $x$, if $x$ is a student and there is a thing $y$ that is chocolate and is liked by the student $x$, then $x$ is smart. I've assumed that "likes chocolate" means "likes at least one thing that is chocolate"; you could interpret it to mean "likes all things that are chocolate", and you can probably figure out how to change the formula to reflect that.
There exists a barber who shaves all men if, and only if, they do not shave themselves. I'll let you have another go at this. Try taking the domain to be the set of men, with a unary predicate that says who's a barber and a binary predicate (as you have), that says who shaves whom.
|
Can it be easily proved that the following series converges or diverges?
$$\sum_{k=1}^{\infty} \frac{\tan(k)}{k}$$
I'd really appreciate your support on this problem. I'm looking for some easy proof here. Thanks.
A proof that the sequence $\frac{\tan(n)}{n}$ does not have a limit for $n\to \infty$ is given in this article
(Sequential tangents, Sam Coskey). This, of course, implies that the series does not converge.
The proof, based on this paper by Rosenholtz (*), uses the continued fraction of $\pi/2$, and, essentially, it shows that it's possible to find a subsequence such that $\tan(n_k)$ is "big enough", by taking numerators of the truncated continued fraction ("convergents").
(*)
"Tangent Sequences, World Records, π, and the Meaning of Life: Some Applications of Number Theory to Calculus", Ira Rosenholtz - Mathematics Magazine Vol. 72, No. 5 (Dec., 1999), pp. 367-376
Let $\mu$ be the
irrationality measure of $\pi^{-1}$. Then for $s < \mu$ given, we have sequences $(p_n)$ and $(q_n)$ of integers such that $0 < q_n \uparrow \infty$ and
$$\left| \frac{1}{\pi} - \frac{2p_n + 1}{2q_n} \right| \leq \frac{1}{q_n^{s}}.$$
Rearranging, we have
$$ \left| \left( q_n - \frac{\pi}{2} \right) - p_n \pi \right| \leq \frac{\pi}{q_n^{s-1}}.$$
This shows that
$$ \left|\tan q_n\right| = \left| \tan \left( \frac{\pi}{2} + \left( q_n - \frac{\pi}{2} \right) - p_n \pi \right) \right| \gg \frac{1}{\left| \left( q_n - \frac{\pi}{2} \right) - p_n \pi \right|} \gg q_n^{s-1}, $$
hence
$$ \left| \frac{\tan q_n}{q_n} \right| \geq C q_n^{s-2}.$$
Therefore the series diverges if $\mu > 2$. But as far as I know, there is no known nontrivial lower bound for $\mu$ (beyond the $\mu \geq 2$ that holds for every irrational), and indeed we cannot exclude the possibility that $\mu = 2$.
P.S. A similar consideration shows that, for $r > s > \mu$, we have
$$ \left| \frac{\tan k}{k^{r}} \right| \leq \frac{C}{k^{r+1-s}}.$$
Thus if $r > \mu$, then
$$ \sum_{k=1}^{\infty} \frac{\tan k}{k^r} $$
converges absolutely!
A simple example would be the term $\tan(121) / 121$. Noting that $\dfrac{\pi}{2}+2 \cdot 19 \cdot \pi \approx 120.95131$ shows why this term is much larger relative to the others. Here you can see a plot and spot the rogues. Those are the first 20k terms.
Here's a much more interesting take, of the first 50k terms. Note the way they align.
Basically, what we're worried about is how close an integer can get to $$\dfrac{\pi}{2}+2 k \pi$$
and that is a tough question to answer.
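To reproduce such a plot, a sketch in R:

k <- 1:20000
plot(k, tan(k)/k, pch = ".", ylim = c(-3, 3))   # rogue terms escape the band
abline(v = 121, lty = 2)    # tan(121)/121: 121 is close to pi/2 + 38*pi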
|
Consider the empty set $\emptyset$ as a topological space. Since the power set of it is just $\wp(\emptyset)=\{\emptyset\}$, this means that the only topology on $\emptyset$ is $\tau=\wp(\emptyset)$.
Anyway, we can make $\emptyset$ into a topological space and therefore talk about its homeomorphisms. But here we seem to have an annoying pathology: is $\emptyset$ homeomorphic to itself? In order for this to be true, we need to find a homeomorphism $h:\emptyset \to \emptyset$. It would be very unpleasant if such a homeomorphism did not exist.
I was tempted to think that there are no maps from $\emptyset$ into $\emptyset$, but consider the following definition of a map:
Given two sets $A$ and $B$, a map $f:A\to B$ is a subset of the Cartesian product $A\times B$ such that, for each $a\in A$, there exists exactly one pair $(a,b)\in f\subset A\times B$ (obviously, we denote this unique $b$ by $f(a)$; $A$ is called the domain of the map $f$ and $B$ is called the codomain of the map $f$).
Thinking this way, there is (a unique) map from $\emptyset$ into $\emptyset$! This is just $h=\emptyset\subset \emptyset\times \emptyset$. This is in fact a map, since I can't find any element in $\emptyset$ (domain) which contradicts the definition.
But is $h$ a homeomorphism? What does it mean for $h$ to have an inverse, since the concept of an identity map is not clear for $\emptyset$? Nevertheless, $h$ seems to be continuous, since it can't contradict (by emptiness) anything in the continuity definition ("pre-images of open sets are open")…
So is $\emptyset$ homeomorphic to itself? Which is the mathematical consensus about this?
"Homeomorphic by definition"? "We'd rather not speak about empty set homeomorphisms…" "…"?
|
The term "imaginary" is somewhat disingenuous. It's a real concept, with real (at least theoretical) application, just like all the "real" numbers.
Think back to that algebra class. You were asked to solve a polynomial equation; that is, find all the values of X for which the entire equation evaluates to zero. You learned to do this by polynomial factoring, simplifying the equation into a series of first-power terms, and then it was easy to see that if any one of those terms evaluated to zero, then everything else, no matter its value, was multiplied by zero, producing zero.
You tried this on a few quadratic equations. Sometimes you got one answer (because the equation was $y=ax^2$ and so the only possible answer was zero), sometimes you got two (when the equation boiled down to $y= (x\pm n)(x \pm m)$, and so when $x=-m$ or $x=-n$ the equation was zero), and a couple of times, you got no answers at all (when the equation refused to break down into real factors $(x+n)(x+m)$ at all).
In your algebra class, you're told this just happens sometimes, and the only way to make sure any factored term $(x\pm k)$ represents a real root is to plug in $-k$ for $x$ and solve. But, this is math. Mathematicians like things to be perfect, and don't like these "rules of thumb", where a method works sometimes but it's really just a "hint" of where to look. So, mathematicians looked for another solution.
This leads us to application of the quadratic formula: for $ax^2 + bx + c = 0$, $x=\dfrac{-b \pm \sqrt{b^2-4ac}}{2a}$. This formula is quite literally the solution of the general form of the equation for $x$, and can be derived algebraically. We can now plug in the coefficients, and find the values of $x$ where $ax^2 + bx + c=0$. Notice the square root; we're first taught, simply, that if $b^2-4ac$ is ever negative, then the roots you'd get by factoring the equation won't work, and thus the equation has no real roots. $b^2-4ac$ is called the discriminant for this reason.
But the fact that $b^2-4ac$ can be negative remains a thorn in our side; we want to solve this equation. It's sitting right in front of us. If the discriminant were positive, we would have solved it already. It's that pesky negative that's the problem.
Well, what if there was something we could do, that conforms to the rules of basic algebra, to get rid of the negative? Well, $-m = m \cdot (-1)$, so what if we took our term that, for the sake of argument, evaluated to $-36$, and made it $36 \cdot (-1)$? Now, because $\sqrt{mn} = \sqrt{m}\sqrt{n}$, $\sqrt{-36} = \sqrt{36}\sqrt{-1} = 6\sqrt{-1}$. We've simplified the expression by removing what we can't express as a real number from what we can.
Now to clean up that last little bit. $\sqrt{-1}$ is a common term whenever the determinant is negative, so let's abstract it behind a constant, like we do $\pi$ and $e$, to make things a little cleaner. $\sqrt{-1} = i$. Now, we can define some properties of $i$, particularly a curious thing that happens as you raise its power:
$$i^2 = \left(\sqrt{-1}\right)^2 = -1$$
$$i^3 = i^2 \cdot i = -i$$
$$i^4 = i^2 \cdot i^2 = (-1)(-1) = 1$$
$$i^5 = i^4 \cdot i = i$$
We see that $i^n$ cycles through four values endlessly as its power $n$ increases, and also that this cycle crosses into and then out of the real numbers. Seems almost... cyclical, rotational. As Clive N's answer so elegantly explains it, that's what imaginary numbers represent: a "rotation" of the graph through another plane, where the graph DOES cross the $x$-axis. Now, it's not actually a circular rotation onto a new linear z-plane. Complex numbers have a real part, as you'd see by solving the quadratic formula for a polynomial with imaginary roots. We typically visualize these values in their own 2-dimensional plane, the complex plane. A quadratic equation with imaginary roots can thus be thought of as a graph in four dimensions: two real, two imaginary.
Now, we call $i$ and any product of a real number and $i$ "imaginary", because what $i$ represents doesn't have an analog in our "everyday world". You can't hold $i$ objects in your hand. You can't measure anything and get $i$ inches or centimeters or Smoots as your result. You can't plug any number of natural numbers together, stick a decimal point in somewhere and end up with $i$. $i$ simply is.
As far as having use outside "ivory tower" math disciplines, a big one is in economics; many economies of scale can be described by functions of the number of units produced, with a cost term and a revenue term (the difference being profit or loss), each of these in turn defined by the per-unit sale price or cost and the number produced. This all generally simplifies to a quadratic equation, solvable by the quadratic formula. If the roots are imaginary, so are the breakeven points (and your expected profits).
Another good one is in visualizations of complex numbers, and of their interactions when multiplied. The first one I was exposed to is a well-known set, produced by taking an arbitrary complex number, squaring it ($(a+bi)^2 = (a+bi)(a+bi) = a^2 + 2abi + b^2i^2 = a^2-b^2 + 2abi$), and then adding back its original value. Repeated to infinity, the resulting sequence either remains bounded or diverges to infinity (with a few starting numbers exhibiting periodicity; they'll jump around forever between a finite number of points, much like the powers of $i$ do). The set of all complex numbers for which the sequence does not diverge is the Mandelbrot set or M-set, and while the area of the graph is finite, its perimeter is infinite, making the graph of this set a fractal (one of the most highly-studied, in fact).
The Mandelbrot set can in turn be defined as the set of all complex numbers $c$ for which the Julia set $J(f)$ of $f(z)=z^2 + c \to z$ is connected. A Julia set exists for every complex polynomial function, but usually the most interesting and useful sets are the ones for values of $c$ that belong to the M-set; Julia fractals are produced much the same way as the M-set (by repeated iteration of the function to determine if a starting $z$ converges or diverges), but $c$ is constant for all points of the set instead of being the original point being tested. You can define Julia sets with all sorts of fractal shapes. These fractals, more accurately the iterative evaluation behind them, are used for pseudorandom number generation, computer graphics (the sets can be plotted in 3-d to create landscapes, or they can be used in shaders to define complex reflective properties of things like insect shells/wings), etc.
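For readers who want to experiment, here is a minimal escape-time sketch of the $z \to z^2 + c$ iteration described above; the escape radius of $2$ and the iteration cap are the usual conventional choices, and the grid bounds are arbitrary:

import numpy as np

def in_mandelbrot(c, max_iter=100):
    """True if the iterate of z -> z*z + c has not escaped after max_iter steps."""
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| > 2, divergence is guaranteed
            return False
    return True

for im in np.linspace(1.2, -1.2, 24):     # crude ASCII plot of the M-set
    print("".join("#" if in_mandelbrot(complex(re, im)) else " "
                  for re in np.linspace(-2.0, 0.8, 72)))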
|
Trying to apply Cavalieri's method of indivisibles to calculate the volume of a cylinder with radius $R$ and height $h$, I get the following paradoxical argument.
A cylinder with radius $R$ and height $h$ can be seen as a solid obtained by rotating a rectangle with height $h$ and base $R$ about its height. Therefore, the volume of the cylinder can be thought of as made out of an infinity of areas of such rectangles of infinitesimal thickness rotated for $360^\circ$; hence, the volume $V$ of the cylinder should be the area of the rectangle $A_\text{rect} = R \cdot h$ multiplied by the circumference of the rotation circle $C_\text{circ} = 2\pi R$: \begin{align} V = A_\text{rect} \cdot C_\text{circ} = 2 \pi R^2 \cdot h \end{align}
Of course, the right volume of a cylinder with radius $R$ and height $h$ is \begin{align} V = A_\text{circ} \cdot h = \pi R^2 \cdot h \end{align} where $A_\text{circ} = \pi R^2$ is the area of the base circle of the cylinder.
Question: Where is the error in my previous argument based on infinitesimals?
|
Oozing honey through pipes
The solution below is for a very viscous fluid which has negligible inertia and large viscosity. It is wrong for water in real pipes, because it neglects the pressure drop which comes with the changing velocity of the water. This term is higher order in $v$, but it is obviously relevant for real water pipes. I leave it, because it is an interesting exercise with a direct analogy to resistive current flow; the correct solution is at the end.
The way to do this is to note that the pressure at the divergence point is equal for all 4 pipes, and that there is a given law for the pressure drop along a pipe per unit length at any given flow rate. The answer is different depending on whether you have a fixed pressure forcing the water through the pipes (as you do in a water main system) or whether you are forcing a given volume of water through per unit time, as you suggest, which is appropriate when you have a large pressure drop along a very long pipe before you get to your splitter.
I will assume that the 4 pipes have a given length, and that they empty at atmospheric pressure, which I will label as 0, and that the water flow is sufficient to keep the pipes filled until near the exit point, otherwise the problem requires more information. Consider the fixed flow rate problem first. If the imposed flow rate is F units of water per second, the first equation is the mass conservation equation
$$\sum_i f_i = F $$
Where $f_i$ are the flow rates along the pipes. Phill.Zitt gave this formula, but it is not enough--- it is analogous to the current Kirchhoff Law. You also need the analog of the voltage Kirchhoff law.
The voltage law tells you that the flow rate $f_i$ is proportional to the pressure drop along pipe i. I'll call the proportionality constant the "flow conductance" $C_i$ (it's the analog of the reciprocal of resistance in an electrical circuit):
$$ f_i = C_i \Delta P $$
For the four pipes, $\Delta P$ is equal, so that
$$ f_i \propto C_i $$
and along with the sum rule, you find:
$$ f_i = {C_i f \over \sum_i C_i } $$
So the only thing you need to know are the $C_i$, just as in a resistor network.
Two pipes with flow conductances $C_1,C_2$ connected in series have a flow conductance C given by the formula:
$$ {1\over C} = {1\over C_1} + {1\over C_2}$$
For the same two pipes in parallel,
$$ C = C_1 + C_2 $$
So that conductances add in series and parallel just like the reciprocal of the resistance (the electrical conductance) in circuits. You have a problem of 4 parallel resistors connected in series to an input resistor, just like a resistor connected to 4 resistors in parallel.
For a cylindrical pipe of length L and radius R, the laminar flow profile is exactly parabolic in the radial cylindrical coordinate r:
$$ v(r) = V(1 - {r^2\over R^2}) $$
so that the total flow as a function of R is
$$ f(R) = \int_0^R v(r) 2\pi r dr = {\pi V R^2\over 2}$$
The Navier-Stokes equations reduce to something very simple in the laminar pipe flow case--- all the terms drop out except the viscosity term, which tells you the diffusion of momentum out of the pipe, and so the pressure drop per unit length. (see here: Is there an analytical solution for fluid flow in a square duct? )
The equation is
$$ \nu \nabla^2 v = \nabla P $$
so that
$$ 4\nu {V\over R^2} = {\Delta P \over L} $$
This gives you the flow rate as a function of R and L,
$$ f = {\pi V R^2 \over 2} = {\pi R^4\over 8\nu L} \Delta P$$
so that the conductance is
$$ C(R,L) = {\pi R^4 \over 8 \nu L} $$
And this determines the flow through the i'th pipe in terms of the total flow and the geometry:
$$ f_i = {f {R_i^4\over L_i} \over \sum_k {R_k^4\over L_k}} $$
This solves the constant flow-rate problem purely geometrically.
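To make the bookkeeping concrete, here is a small Python sketch of the partition formula above; the radii and lengths are made-up illustrative numbers, and the common factor $\pi/8\nu$ cancels out of the ratio:

def flow_split(f_total, radii, lengths):
    """Split a fixed total flow among parallel laminar pipes, f_i ~ R_i^4/L_i."""
    C = [R**4 / L for R, L in zip(radii, lengths)]
    return [f_total * Ci / sum(C) for Ci in C]

print(flow_split(1.0, radii=[0.01, 0.01, 0.02, 0.01], lengths=[1.0, 2.0, 1.0, 1.0]))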
The limit of constant flow rate is achieved when there is a long pipe feeding into the whole thing with a much larger pressure drop than the pressure drop after the split. The total flow is determined by the total conductance, which is essentially equal to the conductance of the long pipe, so no matter what you attach at the end, so long as the part at the end has much more conductance than the initial pipe.
The same problem can be solved at a fixed pressure at the divergence point, the outgoing flow is just the conductance times the shared pressure. For question 2, the issue of constant pressure or constant flow rate is essential. At constant pressure, if you attach the contraption to the side of a wide water main at high pressure, closing one pipe does nothing to the flow in the other pipes. At constant flow rate, closing pipe number 4 increases the flow through the other 3 by the factor
$$ \frac{C_1 + C_2 + C_3 + C_4}{C_1 + C_2 + C_3} $$
For non-rigid pipes, you just need to know $R$ as a function of the pressure. This will be a fine approximation if the pressure drops slowly along the pipe, as usual, so that the radius changes slowly with length. In normal pipes, the radius hardly changes at all with pressure, so I didn't bother to calculate anything, but you can split the pipe into slices with radius $R(P)$, each giving a conductance, which you add according to the series rule.
Water in pipes
I will assume the flow is laminar in the pipes, but that the pipes are short, so that the pressure drop due to viscosity is negligible between the two ends. This is the correct limit for water pipes. The pressure does work on the water which is not dissipated significantly in the pipes, and comes out as kinetic energy in the water, not as heat in the pipe.
Given a pressure drop from P to atmospheric pressure 0, the water in each of the four pipes will adjust its velocity so that the Bernoulli principle is obeyed--- the work done by the pressure is the energy gained by the water. The energy flow in a cross section of the pipe is:
$$ \int {\rho v(r)^2\over 2}v(r) 2\pi r dr $$
with the laminar profile (the flow f is as before), and this gives
$$ f {\rho V^2\over 4} $$
Where V is the velocity at the center, as before. The work done by the pressure difference at the two ends is $Pf$, so you get a version of Bernoulli's equation for laminar pipes:
$$ P + {\rho V_0^2\over 2} = {\rho V^2\over 4}$$
The velocity in the pipes are then
$$ V= \sqrt{{4P\over \rho} + 2V_0^2} $$
and they are equal. So the flow rate in this limit (the right limit for water) is proportional to the cross-sectional area of the pipe, i.e. to $R^2$. If you have a fixed flow rate, the pressure rises to the point where the total outflow is equal to the inflow, and the water flow is partitioned according to the cross-sectional area:
$$ f_i = {f R_i^2\over \sum_k R_k^2 }$$
This neglects the incoming velocity $V_0$, assuming the water coming out is significantly faster than the water coming in. The answer for 2 and 3 is not changed in the water case compared to the honey.
|
Suppose a particle with mass $m_1$ and speed $v_{1i}$ undergoes an elastic collision with stationary particle of mass $m_2$. After the collision, particle of mass $m_1$ moves with speed $v_{1f}$ in a direction of angle $\theta$ above the line it was moving previously. Particle with mass $m_2$ moves with speed $v_{2f}$ in a direction of angle $\phi$ below the line which particle with mass $m_1$ was moving previously. Using equations for conservation of momentum and kinetic energy, how can we prove these two equations
$\frac{v_{1f}}{v_{1i}}=\frac{m_1}{m_1+m_2}[\cos \theta \pm \sqrt{\cos^2 \theta - \frac{m_1^2-m_2^2}{m_1^2}}]$
and
$\frac{\tan(\theta +\phi)}{\tan(\phi)}=\frac{m_1+m_2}{m_1-m_2}$ ?
EDIT. Here is what I've done:
For the first one, set the $xy$ coordinate system so that the positive direction of the $x$ axis points toward the original path of the particle with mass $m_1$. So we have three equations:
$m_1v_{1i}=m_1v_{1f}\cos \theta + m_2v_{2f} \cos \phi$
$0=m_1v_{1f}\sin \theta - m_2v_{2f}\sin \phi$
$m_1v_{1i}^2=m_1v_{1f}^2+m_2v_{2f}^2$.
From the second one, we get:
$v_{2f}=\frac{m_1v_{1f}\sin \theta}{m_2 \sin \phi}$
Plugging this into the third equation, we get
$v_{1i}^2=v_{1f}^2(1+\frac{m_1 \sin^2 \theta}{m_2 \sin^2 \phi})$ (1)
From the first equation, we have
$\cos \phi =\frac{m_1(v_{1i}-v_{1f}\cos \theta)}{m_2v_{2f}}$
which, after applying the equation we have for $v_{2f}$, becomes
$\sin^2 \phi = \frac{1}{1+\frac{(v_{1i}-v_{1f}\cos \theta)^2}{v_{1f}^2\sin^2 \theta}}$
Plugging this into equation (1) gives us an equation in terms of $m_1$, $m_2$, $v_{1f}$, $v_{1i}$ and $\theta$, but it is too far from what I expected.
For the second one, assigning the $xy$ coordinate in a way that the positive direction of the $x$ axis points toward the final path of the particle $m_2$, will give us three equations (two for conservation of linear momentum and one for conservation of kinetic energy), but I don't know what to do next.
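One way to gain confidence in the two target identities before proving them is a numerical check: pick arbitrary values, compute $v_{1f}$ from the first formula, recover $v_{2f}$ and $\phi$ from momentum conservation, and test energy conservation and the tangent identity. A sketch:

import math

m1, m2, v1i, theta = 2.0, 1.0, 3.0, 0.3   # arbitrary test values; theta is
                                          # kept below the maximum scattering
                                          # angle that exists when m1 > m2

disc = math.cos(theta)**2 - (m1**2 - m2**2) / m1**2
v1f = v1i * m1 / (m1 + m2) * (math.cos(theta) + math.sqrt(disc))  # '+' branch

# Recover v2f and phi from the two momentum equations:
px = m1 * v1i - m1 * v1f * math.cos(theta)   # = m2*v2f*cos(phi)
py = m1 * v1f * math.sin(theta)              # = m2*v2f*sin(phi)
v2f = math.hypot(px, py) / m2
phi = math.atan2(py, px)

# Kinetic energy conservation:
print(m1 * v1i**2, "=?", m1 * v1f**2 + m2 * v2f**2)

# The second identity:
print(math.tan(theta + phi) / math.tan(phi), "=?", (m1 + m2) / (m1 - m2))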
|
A Hilbert space is a kind of linear vector space.
In chemistry we encounter it in quantum mechanics, where we can represent wavefunctions by their contributions from different orthonormal single-particle states. It is these single-particle states that we build up out of our atomic orbitals (AOs).
Example:
Consider a two-level system - minimal basis $\ce{H2}$:
Each hydrogen atom has a $1s$ AO: $\phi_1(\textbf{r})$, $\phi_2(\textbf{r})$. These AOs aren't orthogonal, though:
$$S = \iiint d^3\textbf{r}\ \phi_1^*(\textbf{r}) \phi_2(\textbf{r}) \ne 0$$
but we can construct orthonormal states:
$$\psi_\pm(\textbf{r}) = \frac{1}{\sqrt{2(1\pm S)}}\left(\phi_1(\textbf{r})\pm\phi_2(\textbf{r})\right)$$
such that like functions integrate to $1$ and integrals of unlike functions vanish:
$$\iiint d^3\textbf{r}\ \psi_\pm^*(\textbf{r}) \psi_\mp(\textbf{r}) = 0, \qquad \iiint d^3\textbf{r}\ \psi_\pm^*(\textbf{r}) \psi_\pm(\textbf{r}) = 1$$
These two orthonormal functions can be represented as state vectors:
$$\psi_+(\textbf{r}) = \pmatrix{1\\0},\; \psi_-(\textbf{r}) = \pmatrix{0\\1}$$
This defines a Hilbert (vector) space, where our single-particle states $\psi_\pm$ are our basis vectors and the inner product of two vectors, $\psi_i$ and $\psi_j$, is defined by the integral over all space:
$$\langle\psi_i\vert\psi_j\rangle = \iiint d^3\textbf{r}\ \psi_i^*(\textbf{r}) \psi_j(\textbf{r})$$
and is equivalent to the dot product of two state vectors.
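As a crude numerical illustration of this construction - with one-dimensional Gaussians standing in for the two $1s$ AOs (arbitrary centres and widths) - the orthonormalisation can be verified on a quadrature grid:

import numpy as np

x = np.linspace(-10, 10, 4001)

def orbital(center, alpha=1.0):
    """A normalised Gaussian standing in for a 1s AO."""
    g = np.exp(-alpha * (x - center)**2)
    return g / np.sqrt(np.trapz(g**2, x))

phi1, phi2 = orbital(-0.7), orbital(+0.7)
S = np.trapz(phi1 * phi2, x)                 # overlap: nonzero, as claimed
print("overlap S =", S)

psi_p = (phi1 + phi2) / np.sqrt(2 * (1 + S))
psi_m = (phi1 - phi2) / np.sqrt(2 * (1 - S))

print(np.trapz(psi_p * psi_p, x))            # ~1
print(np.trapz(psi_m * psi_m, x))            # ~1
print(np.trapz(psi_p * psi_m, x))            # ~0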
In an electronic structure (specifically DFT) code, a "real space code" means using atomic orbitals as basis functions rather than plane waves.
The Hilbert space described above is complete - it describes all possible wavefunctions which can be made from the AOs included. If you included the infinite number of hydrogenic wavefunctions on each atom, you would be able to have an exact description of all possible wavefunctions for $\ce{H_2}$, as the hydrogenic wavefunctions form a complete set; instead you include as many as practical.
An alternative complete set of functions is the infinite set of all plane waves of all frequencies. In cases where you are performing calculations on a periodic system, such as a solid, plane waves can be a more natural choice as they themselves are also periodic. In a box of width $L$, the set of waves with wavelengths:$$\lambda = \frac{2L}{n}$$for all integral n, form a complete set. They are more commonly described by their wavevectors, $\textbf{k}$, a vector in what is called reciprocal space, which describes the momentum of the wave.
In DFT, when calculating integrals over density, in a plane wave code you manipulate the density in terms of wavevectors, $\textbf{k}$. In a real space code, you compose your density from Kohn-Sham orbitals in real space and calculate integrals of the density over quadrature grids in real (Euclidean) space.
Hence "real space" is used as a descriptor, for DFT codes that work with AOs and basis functions defined on atoms in Euclidean space, in contrast to "plane wave" codes which used basis functions defined in position space.
|
This blog discusses a problematic situation that can arise when we try to implement certain digital filters. Occasionally in the literature of DSP we encounter impractical digital IIR filter block diagrams, and by impractical I mean block diagrams that cannot be implemented. This blog gives examples of impractical digital IIR filters and what can be done to make them practical.
Implementing an Impractical Filter: Example 1
Reference [1] presented the digital IIR bandpass filter...
I've recently encountered a digital filter design application that astonished me with its design flexibility, capability, and ease of use. The software is called the "ASN Filter Designer." After experimenting with a demo version of this filter design software I was so impressed that I simply had to publicize it to the subscribers here on dsprelated.com.
What I Liked About the ASN Filter Designer
With typical filter design software packages the user enters numerical values for the...
This blog describes a general discrete-signal network that appears, in various forms, inside so many DSP applications.
Figure 1 shows how the network's structure has the distinct look of a digital filter—a comb filter followed by a 2nd-order recursive network. However, I do not call this useful network a filter because its capabilities extend far beyond simple filtering. Through a series of examples I've illustrated the fundamental strength of this Swiss Army Knife of digital networks...
Recently I've been thinking about the process of envelope detection. Tutorial information on this topic is readily available but that information is spread out over a number of DSP textbooks and many Internet web sites. The purpose of this blog is to summarize various digital envelope detection methods in one place.
Here I focus on envelope detection as it is applied to an amplitude-fluctuating sinusoidal signal where the positive-amplitude fluctuations (the sinusoid's envelope)...
I just discovered a useful web-based source of signal processing information that was new to me. I thought I'd share what I learned with the subscribers here on DSPRelated.com.
The Home page of the web site that I found doesn't look at all like it would be useful to us DSP fanatics. But if you enter some signal processing topic of interest, say, "FM demodulation" (without the quotation marks) into the 'Search' box at the top of the web page
and click the red 'SEARCH...
This blog discusses a not so well-known rule regarding the filtering in multistage decimation and interpolation by an integer power of two. I'm referring to sample rate change systems using half-band lowpass filters (LPFs) as shown in Figure 1. Here's the story.
Figure 1: Multistage decimation and interpolation using half-band filters.
Multistage Decimation – A Very Brief Review
Figure 2(a) depicts the process of decimation by an integer factor D. That...
Recently I've been thinking about digital differentiator and Hilbert transformer implementations and I've developed a processing scheme that may be of interest to the readers here on dsprelated.com.
This blog presents a novel method for simultaneously implementing a digital differentiator (DD), a Hilbert transformer (HT), and a half-band lowpass filter (HBF) using a single tapped-delay line and a single set of coefficients. The method is based on the similarities of the three N =...
This blog proposes a novel differentiator worth your consideration. Although simple, the differentiator provides a fairly wide 'frequency range of linear operation' and can be implemented, if need be, without performing numerical multiplications.
Background
In reference [1] I presented a computationally-efficient tapped-delay line digital differentiator whose $h_{ref}(k)$ impulse response is:$$ h_{ref}(k) = {-1/16}, \ 0, \ 1, \ 0, \ {-1}, \ 0, \ 1/16 \tag{1} $$
and...
This blog discusses a little-known filter characteristic that enables real- and complex-coefficient tapped-delay line FIR filters to exhibit linear phase behavior. That is, this blog answers the question: What is the constraint on real- and complex-valued FIR filters that guarantees linear phase behavior in the frequency domain?
I'll declare two things to convince you to continue reading.
Declaration #1: "That the coefficients must be symmetrical" is not a correct...
If you need to compute inverse fast Fourier transforms (inverse FFTs) but you only have forward FFT software (or forward FFT FPGA cores) available to you, below are four ways to solve your problem.
Preliminaries
To define what we're thinking about here, an N-point forward FFT and an N-point inverse FFT are described by:$$ Forward \ FFT \rightarrow X(m) = \sum_{n=0}^{N-1} x(n)e^{-j2\pi nm/N} \tag{1} $$ $$ Inverse \ FFT \rightarrow x(n) = {1 \over N} \sum_{m=0}^{N-1}...
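One widely used identity of this kind (a sketch; not necessarily one of the four methods the blog goes on to describe) is $\text{ifft}(X) = \text{conj}(\text{fft}(\text{conj}(X)))/N$:

import numpy as np

X = np.fft.fft(np.random.randn(16))             # some spectrum
x1 = np.conj(np.fft.fft(np.conj(X))) / len(X)   # inverse via a forward FFT
x2 = np.fft.ifft(X)                             # reference
print(np.allclose(x1, x2))                      # True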
Some time ago I was studying various digital differentiating networks, i.e., networks that approximate the process of taking the derivative of a discrete time-domain sequence. By "studying" I mean that I was experimenting with various differentiating filter coefficients, and I discovered a computationally-efficient digital differentiator. A differentiator that, for low frequency signals, has the power of George Foreman's right hand! Before I describe this differentiator, let's review a few...
This blog presents two very easy ways to test the performance of multistage cascaded integrator-comb (CIC) decimation filters [1]. Anyone implementing CIC filters should take note of the following proposed CIC filter test methods.
Introduction
Figure 1 presents a multistage decimate by D CIC filter where the number of stages is S = 3. The '↓D' operation represents downsampling by integer D (discard all but every Dth sample), and n is the time index.
If the Figure 3 filter's...
There are two code snippets associated with this blog post.
This blog discusses an accurate method of estimating time-domain sinewave peak amplitudes based on fast Fourier transform (FFT) data. Such an operation sounds simple, but the scalloping loss characteristic of FFTs complicates the process. We eliminate that complication by...
There are so many different time- and frequency-domain methods for generating complex baseband and analytic bandpass signals that I had trouble keeping those techniques straight in my mind. Thus, for my own benefit, I created a kind of reference table showing those methods. I present that table for your viewing pleasure in this blog.
For clarity, I define a complex baseband signal as follows: derived from an input analog xbp(t) bandpass signal whose spectrum is shown in Figure 1(a), or...
Recently I was on the Signal Processing Stack Exchange web site (a question and answer site for DSP people) and I read a posted question regarding Goertzel filters [1]. One of the subscribers posted a reply to the question by pointing interested readers to a Wikipedia web page discussing Goertzel filters [2]. I noticed the Wiki web site stated that a Goertzel filter: "...is marginally stable and vulnerable to numerical error accumulation when computed using low-precision arithmetic and...
I just learned a new method (new to me at least) for computing the group delay of digital filters. In the event this process turns out to be interesting to my readers, this blog describes the method. Let's start with a bit of algebra so that you'll know I'm not making all of this up.
Assume we have the N-sample h(n) impulse response of a digital filter, with n being our time-domain index, and that we represent the filter's discrete-time Fourier transform (DTFT), H(ω), in polar form...
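One standard formulation along these lines (which may or may not be the exact method the blog derives) computes the group delay as $\tau(\omega) = \mathrm{Re}\{\mathrm{DFT}\{n\,h(n)\}/\mathrm{DFT}\{h(n)\}\}$; a quick sketch, using an arbitrary symmetric 7-tap FIR whose group delay must be a constant 3 samples:

import numpy as np

h = np.array([1.0, 2, 3, 4, 3, 2, 1])   # symmetric taps -> linear phase
n = np.arange(len(h))
H = np.fft.fft(h, 512)                  # DFT of h(n)
Hn = np.fft.fft(n * h, 512)             # DFT of n*h(n)
tau = np.real(Hn / H)                   # group delay, in samples
print(tau[:8])                          # ~3.0 for every bin where H != 0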
Most of us are familiar with the process of flipping the spectrum (spectral inversion) of a real signal by multiplying that signal's time samples by $(-1)^n$. In that process the center of spectral rotation is $f_s/4$, where $f_s$ is the signal's sample rate in Hz. In this blog we discuss a different kind of spectral flipping process.
Consider the situation where we need to flip the X(f) spectrum in Figure 1(a) to obtain the desired Y(f) spectrum shown in Figure 1(b). Notice that the center of...
Earlier this year, for the Linear Audio magazine, published in the Netherlands whose subscribers are technically-skilled hi-fi audio enthusiasts, I wrote an article on the fundamentals of interpolation as it's used to improve the performance of analog-to-digital conversion. Perhaps that article will be of some value to the subscribers of dsprelated.com. Here's what I wrote:
We encounter the process of digital-to-analog...
This blog explains why, in the process of time-domain interpolation (sample rate increase), zero stuffing a time sequence with zero-valued samples produces an increased-length time sequence whose spectrum contains replications of the original time sequence's spectrum.
Background
The traditional way to interpolate (sample rate increase) an x(n) time domain sequence is shown in Figure 1.
Figure 1
The '↑ L' operation in Figure 1 means to...
This blog discusses two ways to determine an exponential averager's weighting factor so that the averager has a given 3-dB cutoff frequency. Here we assume the reader is familiar with exponential averaging lowpass filters, also called "leaky integrators", used to reduce noise fluctuations that contaminate constant-amplitude signal measurements. Exponential averagers are useful because they allow us to implement lowpass filtering at a low computational workload per output sample.
Figure 1 shows...
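As a sketch of the kind of computation involved (assuming the common one-pole averager $y[n] = \alpha x[n] + (1-\alpha)y[n-1]$, not necessarily the blog's own two methods), the 3-dB cutoff for a given weighting factor $\alpha$ can be read off the magnitude response numerically:

import numpy as np

alpha = 0.1                                    # arbitrary weighting factor
w = np.linspace(1e-4, np.pi, 100000)           # radian frequency grid
H = alpha / np.abs(1 - (1 - alpha) * np.exp(-1j * w))
idx = np.argmin(np.abs(H - 1 / np.sqrt(2)))    # closest to the -3 dB level
print("3-dB cutoff =", w[idx] / (2 * np.pi), "of the sample rate")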
If you've read about the Goertzel algorithm, you know it's typically presented as an efficient way to compute an individual kth bin result of an N-point discrete Fourier transform (DFT). The integer-valued frequency index k is in the range of zero to N-1 and the standard block diagram for the Goertzel algorithm is shown in Figure 1. For example, if you want to efficiently compute just the 17th DFT bin result (output sample X17) of a 64-point DFT you set integer frequency index k = 17 and N =...
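For reference, here is a minimal sketch of the standard Goertzel recursion for a single bin $k$, checked against a full FFT (the final zero-input iteration yields the exact DFT value):

import numpy as np

def goertzel(x, k):
    """Single-bin DFT X(k) via the Goertzel recursion."""
    N = len(x)
    w = 2 * np.pi * k / N
    coeff = 2 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x:
        s_prev2, s_prev = s_prev, sample + coeff * s_prev - s_prev2
    s = coeff * s_prev - s_prev2          # one extra iteration, zero input
    return s - np.exp(-1j * w) * s_prev

x = np.random.randn(64)
print(np.allclose(goertzel(x, 17), np.fft.fft(x)[17]))   # True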
This blog presents several interesting things I recently learned regarding the estimation of a spectral value located at a frequency lying between previously computed FFT spectral samples. My curiosity about this FFT interpolation process was triggered by reading a spectrum analysis paper written by three astronomers [1].
My fixation on one equation in that paper led to the creation of this blog.
Background
The notion of FFT interpolation is straightforward to describe. That is, for example,...
This blog is not about signal processing. Rather, it discusses an interesting topic in number theory, the magic of the number 9. As such, this blog is for people who are charmed by the behavior and properties of numbers.
For decades I've thought the number 9 had tricky, almost magical, qualities. Many people feel the same way. I have a book on number theory, whose chapter 8 is titled "Digits — and the Magic of 9", that discusses all sorts of interesting mathematical characteristics of the...
It works like this: say we have a real xR(n) input bandpass...
There have been times when I wanted to determine the z-domain transfer function of some discrete network, but my algebra skills failed me. Some time ago I learned Mason's Rule, which helped me solve my problems. If you're willing to learn the steps in using Mason's Rule, it has the power of George Foreman's right hand in solving network analysis problems.
This blog discusses a valuable analysis method (well known to our analog control system engineering brethren) to obtain the z-domain...
I just encountered what I think is an interesting technique for multiplying two integer numbers. Perhaps some of the readers here will also find it interesting.
Here's the technique: assume we want to multiply 18 times 17. We start by writing 18 and 17, side-by-side in column A and column B, as shown at the top of Figure 1. Next we divide the 18 at the top of column A by two, retaining only the integer part of the division, and double the 17 at the top of column B. The results of those two...
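The scheme (often called Russian peasant multiplication) is easy to express in code; a sketch:

def peasant_multiply(a, b):
    """Halve column A, double column B, sum the B entries beside odd A."""
    total = 0
    while a >= 1:
        if a % 2 == 1:    # this row survives the crossing-out
            total += b
        a //= 2           # halve, keeping only the integer part
        b *= 2            # double
    return total

print(peasant_multiply(18, 17))   # 306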
I have read, in some of the literature of DSP, that when the discrete Fourier transform (DFT) is used as a filter the process of performing a DFT causes an input signal's spectrum to be frequency translated down to zero Hz (DC). I can understand why someone might say that, but I challenge that statement as being incorrect. Here are my thoughts.
Using the DFT as a Filter It may seem strange to think of the DFT as being used as a filter but there are a number of applications where this is...
|
A series of the form $a b^k, a b^{k+1}, \ldots, a b^{l}$, where $a$ and $b$ can be any complex numbers, is called a geometric series with $l-k+1$ terms. For example, $1,\frac{1}{2},\frac{1}{4},\ldots$ is an infinite geometric series with $a=1$, $b=\frac12$. You may have seen these before, but in this class we will often be interested in the case when $b$ (and $a$) are complex numbers. Luckily, nothing changes from when $a$ and $b$ are just real numbers. We will particularly be interested in writing a closed form expression for the sum of consecutive terms of a geometric series. The most general result that you should memorize is $$\sum_{n=k}^{l} a b^{n} = \frac{a\left(b^{k} - b^{l+1}\right)}{1-b} \quad (b \neq 1). \tag{1}$$
A few special cases of the above general result are important. Just convince yourself that these are true. (2)
The following identity is also true, although we will not use it often in this class. (3)
Here are a couple of examples to try out:
1. For any two given integers $k$ and $M$, what is $\displaystyle{\sum_{n=0}^{M-1} e^{\frac{j 2 \pi k n}{M}}}$?
2. Just for intellectual curiosity - can you prove the results in Equation 1 and Equation 3?
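A quick numerical check of Equation 1 and of Example 1 (an arbitrary complex ratio for the former; the $M$-th roots of unity for the latter) takes only a few lines:

import numpy as np

a, b, k, l = 2.0, 0.5 + 0.3j, 3, 12                 # arbitrary complex test
lhs = sum(a * b**n for n in range(k, l + 1))
rhs = a * (b**k - b**(l + 1)) / (1 - b)
print(np.isclose(lhs, rhs))                          # True

M = 8
for k in (0, 3, 8):            # sum is M when k is a multiple of M, else 0
    s = sum(np.exp(1j * 2 * np.pi * k * n / M) for n in range(M))
    print(k, np.round(s, 10))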
|
Let me just introduce some notation first to make this easier. Let's denote the total number of data points in the full sample as $N$. The variance of the full sample is
$$\sigma_N^2 = \frac{1}{N} \sum_{i=1}^{N} (x_i - \mu_N)^2$$
for the total sample mean $\mu_N$, where you might change the normalisation factor to $N-1$ for an unbiased estimator. Similarly, we will denote the subsample variances and means as $\sigma_A^2$, $\sigma_B^2$ and $\mu_A$, $\mu_B$ and we'll say that the number of draws in subsample $A$ is $n_A$, leaving subsample $B$ with $n_B = N - n_A$.
If we expand the above expression, we get
$$\sigma_N^2 = \frac{1}{N} \sum_{i=1}^N(x_i^2 - 2\mu_Nx_i + \mu_N^2)$$
and bringing the sum through gives us
$$\sigma_N^2 = \frac{1}{N} \sum_{i=1}^{N} x_i^2 - 2\mu_N \left( \frac{1}{N} \sum_{i=1}^{N} x_i \right) +\frac{1}{N}\sum_{i=1}^{N}\mu_N^2 $$
If we add $\mu_N^2$ to both sides we are left with
$$\sigma_N^2 + \mu_N^2 = \frac{1}{N} \sum_{i=1}^N x_i^2$$
which we can do similarly with the subsamples. Splitting the sum in two over the subsamples, we now get
$$\sigma_N^2 + \mu_N^2 = \frac{1}{N} \left[\sum_{i=1}^{n_A} x_i^2 + \sum_{i=n_A + 1}^{N}x_i^2 \right]$$
and we can substitute the sums using the identity derived above, applied to each subsample:
$$\sigma_N^2 + \mu_N^2 = \frac{1}{N} \left[ n_A \left( \sigma_A^2 + \mu_A^2 \right) + n_B \left( \sigma_B^2 + \mu_B^2 \right) \right] $$
So finally, after rearranging, we can get the variance of the second subsample as a function of the means and other variances:
$$\sigma_B^2 = \frac{N\left(\sigma_N^2 + \mu_N^2\right) - n_A \left(\sigma_A^2 + \mu_A^2 \right)}{n_B} - \mu_B^2$$
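A quick numerical sketch of this formula (population, i.e. $1/N$, normalisation throughout; the data are arbitrary):

import numpy as np

def var_B(N, var_N, mu_N, nA, var_A, mu_A, mu_B):
    """Variance of subsample B from full-sample and subsample-A statistics."""
    nB = N - nA
    return (N * (var_N + mu_N**2) - nA * (var_A + mu_A**2)) / nB - mu_B**2

x = np.random.randn(1000)
A, B = x[:400], x[400:]
est = var_B(len(x), x.var(), x.mean(), len(A), A.var(), A.mean(), B.mean())
print(est, B.var())   # the two numbers should agree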
This will be slightly different if you are using unbiased estimators for the variance. You can see extra info relating to this problem in answers to the following other questions:
How to calculate the variance of a partition of variables
How to calculate pooled variance of two groups given known group variances, means, and sample sizes?
|
Why does it seem like every number $ababab$, where $a$ and $b$ are digits in $[0, 9]$, is divisible by $13$?
Ex: $747474$, $101010$, $777777$, $989898$, etc...
Note that $$ [ababab] = a\times 101010 + b \times 10101 = 13 (7770a + 777b) $$
Also noteworthy: $$ 10101 = 1 + 10^2 + 10^4 \equiv \\ 1 + 3^2 + 3^4 = 1 + 9 + 9^2 \equiv\\ 1 +(-4) + (-4)^2 = 1 - 4 + 16 = 13 \equiv 0 $$ where $\equiv$ indicates equivalence modulo $13$.
These numbers are of the form $(10a+b)\cdot 10101$, and $10101=13\cdot 777$.
Note:$$ab=a\times 10+b\times 1$$
so $$ab\times 100=ab00$$ therefore
$$abab=ab00+ab=ab(100+1)=ab\times 101$$ $$abab00=ab\times101\times100=ab\times10100$$ $$ababab=ab\times10101=ab\times3\times7\times13\times37$$
Hence it is divisible by $3$,$7,13$ and $37$.
A number $ABCDEF$ is divisible by $13$ if and only if $ABC-DEF$ is divisible by $13$.
Note that $\small{ABA-BAB=100(A-B)+10(B-A)+1(A-B)=91A-91B=13(7A-7B)}$.
Therefore $ABA-BAB$ is divisible by $13$.
Therefore $ABABAB$ is divisible by $13$.
And not only that: Each such number is also divisible by the other primes 3, 7 and 37. Just factor the number 10101! Your specimen number is any 2-digit number $\times 10101\,$.
$abcabc=abc\,(10^3+1)=abc\cdot 1001=abc\cdot 7\cdot 11\cdot 13$.
Remark: this generalises easily to $abcabc\ldots abc$ with $abc$ repeated $2n$ times.
$ababab$ is a multiple of $10101$, which in turn is a multiple of $13$.
Hint $\,\ {\rm mod}\ 13\!:\,\ 10\equiv 6^2\,\Rightarrow\, 10^6\equiv 6^{12}\equiv 1\,$ by little Fermat.
Hence $\, 0\equiv 10^6-1 \equiv (10^2\!-1)(10^4\!+10^2\!+1) \equiv 99\cdot 10101$
Thus $\,99\not\equiv 0\,\Rightarrow\,10101\equiv 0\,\Rightarrow\, ab\cdot 10101 = ababab\equiv 0,\,$ for $\, 0 \le ab \le 99$
$100^0 \equiv 1 \bmod 13$
$100^1 \equiv 9 \bmod 13$
$100^2 \equiv 3 \bmod 13$
$(ababab)_{10}=c\cdot 100^2+c\cdot 100+c = c\,(100^2+100+1) \equiv c\,(3+9+1) \equiv 0 \bmod 13$, where $c=10a+b$.
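A one-line brute-force check over all digit pairs confirms the pattern:

print(all((100000*a + 10000*b + 1000*a + 100*b + 10*a + b) % 13 == 0
          for a in range(10) for b in range(10)))   # True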
|
Summary
It turns out that even relatively low-mass ocean planets are capable of forming some of the exotic ices you name in their cores. Ice VII appears to form at the centers of planets of $0.015M_{\oplus}$ (Earth masses), while ice X forms at the centers of planets of $1.256M_{\oplus}$. Interestingly, despite the increase in mass by two orders of magnitude and the increase in central pressure by a factor of 25, these worlds have radii differing by only a factor of four. While there may be a temperature dependence, given the relative simplicity of water's phase diagram at $\sim300\text{ K}$, I suspect this should not be an issue, and the relevant equations of state are not temperature-dependent.
Theory
Since we have two competing answers (Dubukay's and Mark's) with vastly different results, I thought I would add a third method to see if I could come up with something similar. I went to Seager et al. 2008, my favorite set of models of the interiors of terrestrial exoplanets. Their setup assumes that the bodies are isothermal at low pressures - as Dubukay did - and uses equations of state of the form $$\rho(P)=\rho_0+cP^n\tag{11}$$ where $\rho$ is density, $P$ is pressure and $c$ and $n$ are composition-dependent constants; $n\approx0.5$ for most terrestrial worlds, but it does differ, which is important. This equation is essentially a modified polytrope, the one major change being that $\rho(0)\neq0$, whereas a classic polytrope has $\rho(0)=0$. For a pure $\text{H}_2\text{O}$ planet, $n=0.513$ and $c=0.00311$. When using these constants, bear in mind that pressure is in pascals, and density is in kilograms per cubic meter.
Seager et al. derive the following mass-radius relationship (I have numbered the equations as they are numbered in the paper):$$M(R)=\frac{4\pi}{3}R^3\left[\rho(P_c)-\frac{2}{5}\pi GR^2\rho_0^2f'(P_c)\right]\tag{31}$$where $f(P)=cP^n$ and $P_c$ is the central pressure. It can be shown via hydrostatic equilibrium that$$P_c=\frac{3G}{8\pi}\frac{M^2}{R^4}\tag{27}$$Given a desired central pressure, I can test various radii and corresponding masses and find the values I need.
We can check these results a different way: by numerical integration. The structure of any planet is governed by two key equations:$$\frac{dP}{dr}=-\frac{Gm\rho}{r^2}$$$$\frac{dm}{dr}=4\pi r^2\rho$$These are the equations of hydrostatic equilibrium and mass continuity. $r$ is a radial coordinate, measured from the center of the planet, and $m$ is the mass enclosed within $r$. By modeling the planet as a collection of progressively larger shells, and knowing the value of $P$ and $m$ in any given shell, we can find the value of $P$ and $m$ in the next shell via the Euler method: finding the change in these variables by multiplying their derivatives at a point by some step size $\Delta r$. This is essentially what Mark did, I think. I'm simply using a particular equation of state, rather than a bulk modulus.
Code
I wrote some fairly simple code in Python 3 to accomplish this. It only requires NumPy (as well as Matplotlib for auxiliary plots).
import numpy as np

earthMass = 5.97*10**(24) # kg
earthRadius = 6.371*10**(6) # m
G = 6.67*10**(-11) # gravitational constant, SI units

def rho(P,rho0,c,n):
    """Polytropic equation of state"""
    rho = rho0 + c*(P**n)
    return rho

def fprime(P,c,n):
    """Derivative of the first order contribution
    to the polytropic equation of state"""
    fprime = c*n*(P**(n-1))
    return fprime

def mass(R,rho0,c,n):
    """Compute planetary mass for a particular radius,
    given equation of state parameters for a particular
    composition."""
    Rscaled = R*earthRadius # convert to SI units
    Pc = (2*np.pi/3)*G*(Rscaled**2)*(rho0**2) # central pressure
    rho_mean = rho(Pc,rho0,c,n) - (2*np.pi/5)*G*(Rscaled**2)*(rho0**2)*fprime(Pc,c,n) # mean density
    Mscaled = (4*np.pi/3)*(Rscaled**3)*rho_mean
    Mp = Mscaled/earthMass # convert to Earth masses
    return Mp

def pressure(R,rho0,c,n):
    """Compute central pressure if radius is known"""
    M = mass(R,rho0,c,n)
    M = M*earthMass # convert to SI units
    R = R*earthRadius # convert to SI units
    Pc = (3*G/(8*np.pi))*(M**2)/(R**4)
    return Pc

def minimumMass(P,rho0,c,n):
    """Compute mass at which a particular central
    pressure is reached"""
    radii = np.logspace(-1,1,1000) # reasonable radius range
    i = 0
    r = radii[i]
    while pressure(r,rho0,c,n) < P:
        # Brute force check of various radii
        i += 1
        r = radii[i]
    return mass(r,rho0,c,n)

def radius(M,rho0,c,n):
    """Compute radius which yields a given mass"""
    radii = np.logspace(-1,1,1000)
    i = 0
    r = radii[i]
    while mass(r,rho0,c,n) < M:
        # Brute force check of various radii
        i += 1
        r = radii[i]
    return r

pressureList = [2,50] # central pressures to check, in GPa
for p in pressureList:
    print('Central pressure: '+str(p)+' GPa.')
    print('    The required mass is '\
        +str('%.3f'%minimumMass(p*10**9,1460,0.00311,0.513))+\
        ' Earth masses.')
    print('    The required radius is '+\
        str('%.3f'%radius(minimumMass(p*10**9,1460,0.00311,\
        0.513),1460,0.00311,0.513))+' Earth radii.')
Here is my numerical integration code. It's written specifically for water worlds, so the equation of state parameters are not function arguments. If you want to, it can be generalized easily enough for any composition.
import numpy as np

earthMass = 5.97*10**(24) # kg
earthRadius = 6.371*10**(6) # m
G = 6.67*10**(-11) # gravitational constant, SI units
rho0 = 1460
c = 0.00311
n = 0.513

def dP(M,R,P,dR):
    """Compute change in pressure via hydrostatic
    equilibrium"""
    rho = rho0 + c*(P**n) # density
    dP = -((G*M*rho)/(R**2))*dR
    return dP

def dM(R,P,dR):
    """Compute change in mass via mass continuity
    equation"""
    rho = rho0 + c*(P**n) # density
    dM = 4*np.pi*(R**2)*rho*dR
    return dM

def integrator(Pc,dR):
    """Numerically integrate differential equations
    to construct the planet"""
    P = [Pc,Pc]
    M = [0,0]
    R = [0,dR]
    # To avoid singularities at r = 0, I really
    # start the code at one step, r = dR. I assume
    # that this step is small enough that the mass
    # and pressure don't change significantly.
    while P[-1] > 0:
        # The surface of the planet is where P = 0
        m = M[-1]
        r = R[-1]
        p = P[-1]
        deltaR = dR # step size (previously hard-coded, ignoring the dR argument)
        deltaP = dP(m,r,p,deltaR)
        deltaM = dM(r,p,deltaR)
        P.append(P[-1]+deltaP)
        M.append(M[-1]+deltaM)
        R.append(R[-1]+deltaR)
    return M, R, P

centralPressures = [2,50] # central pressures to check, in GPa
for p in centralPressures:
    massList, radiusList, pressureProfile = integrator(p*(10**9),1)
    M = massList[-1]/earthMass
    R = radiusList[-1]/earthRadius
    print('Central pressure: '+str(p)+' GPa.')
    print('    The required mass is '+str('%.3f'%M)+\
        ' Earth masses.')
    print('    The required radius is '+str('%.3f'%R)+\
        ' Earth radii.')
Results
I chose a central pressure of $P_c=2\text{ GPa}$ for ice VII and $P_c=50\text{ GPa}$ for ice X, as Dubukay and Mark did. For both cases, my results agreed with Mark's to within an order of magnitude; the discrepancy with Dubukay's numbers still remains:
$$\begin{array}{|c|c|c|}\hline
 & \text{Ice VII} & \text{Ice X}\\\hline
\text{Dubukay} & M=8.327\times10^{-6}M_{\oplus},\; R=0.0313R_{\oplus} & M=0.0149M_{\oplus},\; R=0.334R_{\oplus}\\\hline
\text{Mark} & M=0.0149M_{\oplus},\; R=0.401R_{\oplus} & M=0.409M_{\oplus},\; R=0.998R_{\oplus}\\\hline
\text{Analytical models} & M=0.0154M_{\oplus},\; R=0.377R_{\oplus} & M=1.256M_{\oplus},\; R=1.525R_{\oplus}\\\hline
\text{Numerical integration} & M=0.015M_{\oplus},\; R=0.372R_{\oplus} & M=0.959M_{\oplus},\; R=1.389R_{\oplus}\\\hline
\end{array}$$
Both of my ice VII models agree very closely with Mark's, and my ice X models are only off by a factor of a few. The numerical integration does not match the analytical models, which worries me a little bit, but the discrepancy is not overly serious, and I'll do some poking around to see if I can find the problem. I'm happy enough to get within an order of magnitude in astronomy, so I'll consider all of this a victory. Here's a plot of my analytical results, with the terrestrial planets of the Solar System for comparison, as well as a curve of silicate planets ($\text{MgSiO}_3$):
What's going on?
This does shed some light on the different answers because a more detailed look at the theory rules out possible reasons for the discrepancy. The equations of state I used are isothermal; the other answers assume the same. Similarly, simple plots of density within these planets indicate that the weak dependence on pressure indeed justifies Dubukay's assumption of incompressibility. Both cases see perhaps a 10% change in density from the inner core to the surface - hardly enough to cause a discrepancy of three orders of magnitude. Indeed, at these pressures, most worlds should be quite incompressible.
I suspect that the key problem with Dubukay's answer is the assumption that the pressure-depth relationship doesn't change based on depth - and it likely does. By plotting the density inside each planet, we can see that it changes only slightly for the ice VII planet and a bit more for the ice X planet:
Now, the gravitational acceleration $g(r)$ at a radius $r$ scales as $g\propto\bar{\rho}r$, where $\bar{\rho}$ is the mean density inside $r$. The deviations from constant density are small for most regions inside the planet, so we should expect $g(r)$ to be fairly linear, and it is (closer to linear for the ice VII planet, which has a more uniform density profile):
Therefore, the simple depth-to-pressure conversion is inaccurate far from the surface. I also suspect that the core-ocean model is a little too simple.
|
The game is actually an instance of the two-person pebble game, as @HendrikJan pointed out, and as such is proven to be $EXPTIME$-complete. The following is a summary based on a proof by Kasai, Adachi and Iwata in SICOMP 8 (4).
For starters, it's pretty obvious that the game is in $EXPTIME$ - we can simply check all the possible games and see if there is a winning strategy. To prove it's $EXPTIME$-hard is a little bit more challenging.
First we need the notion of alternating Turing machines (or ATMs for short). We will further tighten the definition to get a so-called standard ATM:
We say an ATM $M$ is standard if $M$ has only one work tape, with the head initialized to the first cell of the tape; if a configuration $C$ of $M$ is existential (universal), then every configuration $C' \in Next_M(C)$ is universal (existential); the initial state is existential and the accepting state is universal; and $Next_M(C) = \emptyset$ if and only if $C$ is an accepting configuration.
Here $Next_M(C)$ denotes the set of possible configurations after one move starting from configuration $C$.
Now come two important lemmas, proven by Chandra, Kozen and Stockmeyer in Journal of the ACM 28(1):
Lemma 1
For every $S(n) \geq \log(n)$, if $L \in ASPACE(S(n))$, then $L$ is accepted by a standard ATM within space $S(n)$.
Lemma 2
$EXPTIME = APSPACE$
Having those two in mind, we now see that, given a standard ATM $M = (Q, \Sigma, \Gamma, \delta, b, q_1, q_a, U)$ such that only $p(n)$ cells are available on the work tape for some polynomial $p$ in $n$, and a word $w = w_1 w_2 ... w_n$, we need to construct, in logarithmic space, an instance of the pebble game $G$ such that $w$ is accepted by $M$ if and only if the first player has a winning strategy in $G$.
In order to do that we will need a set of fields $X$ consisting of: fields representing the state of the work tape ($\{1..p(n)\} \times \Gamma$); fields representing the current state of the machine and its heads ($Q \times \{1..n\} \times \{1..p(n)\}$); fields representing work-tape transitions ($Q \times \{1..n\} \times \{1..p(n)\} \times \Gamma^2$); and three additional fields $s_1, s_2, t$ to ensure that the correct player wins the game.
Set $R$ of rules that translates $\delta$ into our game:
For each element of $Q \times \{1..n\} \times \{1..p(n)\}$: if $\delta (q, w_i, a)$ contains $(q', a', (d', d''))$ with $a \neq a'$, then this transition can be encoded with the following rules: $([q, i, l], [l, a], [q, i, l, a, a'])$; $([l, a], [q, i, l, a, a'], [l, a'])$; $([q, i, l, a, a'], [l, a'], [q', i+d', l+d''])$. If $\delta (q, w_i, a)$ contains $(q', a, (d', d''))$, we need just one rule: $([q, i, l], [l, a], [q', i+d', l+d''])$. Finally we need the "game finisher" rules: for each $i$ and $l$ there should be a rule $([q_a, i, l], s_1, s_2)$, and we also add the rule $(s_2, s_1, t)$.
And to start the game properly we need the set $S = \{[q_1,1,1],s_1\} \cup \{[l,b] \mid 1 \leq l \leq p(n)\}$, which denotes that we're in the initial state, both heads are at the beginning of the tapes, and the work tape is empty.
From this, the proof that $w$ is accepted by $M$ if and only if the first player has a winning strategy in $G$ should be pretty straightforward.
|
Knowledge of the specific fact that $(\sin x)' = \cos x$ actually predates the general knowledge of calculus and derivatives. It was known in the following form: that for very small $\Delta x$, when you increase $x$ to $x + \Delta x$, the increase in value of the sine, from $\sin x$ to $\sin (x + \Delta x)$, is proportional to $\Delta x$ times $\cos x$. In other words, that$$\frac{\sin (x + \Delta x) - \sin x}{\Delta x} \approx \cos x$$The approximation being exact in the limit as $\Delta x \to 0$ is of course the modern definition of the derivative.
This happened historically in Indian mathematics, where Muñjala (around 932), Āryabhaṭa II (around 950), Prashastidhara (around 958) all give the above rule for calculating $\sin(x + \Delta x)$, and an explicit geometric reasoning / justification is given by Bhāskara II (around 1150) in his Siddhanta Shiromani. I have not found a perfectly good reference to these, but you can start with the following article:
Use of Calculus in Hindu Mathematics, by Bibhutibhusan Datta and Awadhesh Narayan Singh, revised by Kripa Shankar Shukla, Indian Journal of the History of Science, 19 (2): 95–104 (1984). (PDF)
It was first pointed out by Bapu Deva Shastri in
Bhaskara's knowledge of the Differential Calculus, Journal of the Asiatic Society of Bengal, Volume 27, 1858, pp. 213–6.
|
AI News: What is the difference between a Perceptron, Adaline, and neural network model?
Learning algorithms can actually be summarized by 4 simple steps – given that we use stochastic gradient descent for Adaline. We write the weight update in each iteration as $\mathbf{w} := \mathbf{w} + \Delta \mathbf{w}$, where $\Delta w_j = \eta\,(\text{target}^{(i)} - \text{output}^{(i)})\,x^{(i)}_{j}$.
Here, the activation function is not linear (like in Adaline), but we use a non-linear activation function like the logistic sigmoid (the one that we use in logistic regression) or the hyperbolic tangent, or a piecewise-linear activation function such as the rectified linear unit (ReLU).
In addition, we often use a softmax function (a generalization of the logistic sigmoid for multi-class problems) in the output layer, and a threshold function to turn the predicted probabilities (by the softmax) into class labels.
By connecting the artificial neurons in this network through non-linear activation functions, we can create complex, non-linear decision boundaries that allow us to tackle problems where the different classes are not linearly separable.
Single-Layer Neural Networks and Gradient Descent
This article offers a brief glimpse of the history and basic concepts of machine learning.
We will take a look at the first algorithmically described neural network and the gradient descent algorithm in the context of adaptive linear neurons, which will not only introduce the principles of machine learning but also serve as the basis for modern multilayer neural networks in future articles.
Thanks to machine learning, we enjoy robust email spam filters, convenient text and voice recognition, reliable web search engines, challenging chess players, and, hopefully soon, safe and efficient self-driving cars.
The perceptron is not only the first algorithmically described learning algorithm [1], but it is also very intuitive, easy to implement, and a good entry point to the (re-discovered) modern state-of-the-art machine learning algorithms: Artificial neural networks (or “deep learning” if you like).
To put the perceptron algorithm into the broader context of machine learning: The perceptron belongs to the category of supervised learning algorithms, single-layer binary linear classifiers to be more specific.
Next, we define an activation function $g(\mathbf{z})$ that takes a linear combination of the input values $\mathbf{x}$ and weights $\mathbf{w}$ as input ($\mathbf{z} = w_1x_{1} + \dots + w_mx_{m}$), and if $g(\mathbf{z})$ is greater than a defined threshold $\theta$ we predict 1, and -1 otherwise;
in this case, this activation function $g$ is an alternative form of a simple “unit step function,” which is sometimes also called the “Heaviside step function.” (Please note that the unit step is classically defined as being equal to $0$ if $z < 0$ and $1$ otherwise; here we use the sign convention, with outputs $1$ and $-1$.)
To summarize the main points from the previous section: A perceptron receives multiple input signals, and if the sum of the input signals exceeds a certain threshold it returns a signal, and otherwise remains “silent.”
What made this a “machine learning” algorithm was Frank Rosenblatt’s idea of the perceptron learning rule: The perceptron algorithm is about learning the weights for the input signals in order to draw a linear decision boundary that allows us to discriminate between the two linearly separable classes +1 and -1.
Rosenblatt’s initial perceptron rule is fairly simple and can be summarized by the following steps: The output value is the class label predicted by the unit step function that we defined earlier (output $= g(\mathbf{z})$), and the weight update can be written more formally as $w_j := w_j + \Delta w_j$.
The value for updating the weights at each increment is calculated by the learning rule $\Delta w_j = \eta\,(\text{target}^{(i)} - \text{output}^{(i)})\,x^{(i)}_{j}$, where $\eta$ is the learning rate (a constant between 0.0 and 1.0), “target” is the true class label, and “output” is the predicted class label.
If the prediction is correct, the weight update is zero: $\Delta w_j = \eta(1^{(i)} - 1^{(i)})\;x^{(i)}_{j} = 0$ and $\Delta w_j = \eta(-1^{(i)} - (-1^{(i)}))\;x^{(i)}_{j} = 0$. However, in case of a wrong prediction, the weights are “pushed” towards the direction of the positive or negative target class, respectively: $\Delta w_j = \eta(1^{(i)} - (-1^{(i)}))\;x^{(i)}_{j} = \eta(2)\;x^{(i)}_{j}$ and $\Delta w_j = \eta(-1^{(i)} - 1^{(i)})\;x^{(i)}_{j} = \eta(-2)\;x^{(i)}_{j}$.
If the two classes can’t be separated by a linear decision boundary, we can set a maximum number of passes over the training dataset (“epochs”) and/or a threshold for the number of tolerated misclassifications.
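To make the rule above concrete, here is a minimal Python/NumPy sketch of the perceptron learning loop (the function name, learning rate, and epoch cap are illustrative assumptions, not code from the original article):

import numpy as np

def perceptron_fit(X, y, eta=0.1, epochs=10):
    # X: (n_samples, n_features) array; y: labels in {-1, +1}
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            output = 1 if np.dot(w, xi) + b >= 0.0 else -1  # unit step function
            update = eta * (target - output)  # zero when the prediction is correct
            w += update * xi
            b += update
    return w, b

Note that the update is nonzero only for misclassified samples, exactly as in the weight-update equations above.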
Our intuition tells us that a decision boundary with a large margin between the classes (as indicated by the dashed line in the figure below) likely has a better generalization error than the decision boundary of the perceptron.
In contrast to the perceptron rule, the delta rule of Adaline (also known as the Widrow–Hoff rule or the Adaline rule) updates the weights based on a linear activation function rather than a unit step function;
(The fraction $\frac{1}{2}$ is just used for convenience when deriving the gradient, as we will see in the next paragraphs.) In order to minimize the SSE cost function, we will use gradient descent, a simple yet useful optimization algorithm that is often used in machine learning to find the local minimum of a cost function.
As mentioned above, each weight is updated by taking a step into the opposite direction of the gradient $\Delta \mathbf{w} = - \eta \nabla J(\mathbf{w})$; thus, we have to compute the partial derivative of the cost function for each weight in the weight vector: $\Delta w_j = - \eta \frac{\partial J}{\partial w_j}$.
The partial derivative of the SSE cost function for a particular weight can be calculated as follows (with $t$ = target, $o$ = output): $\frac{\partial J}{\partial w_j} = \frac{\partial}{\partial w_j} \frac{1}{2}\sum_i (t^{(i)} - o^{(i)})^2 = \sum_i (t^{(i)} - o^{(i)})(- x^{(i)}_{j})$. And if we plug this result back into the learning rule, we get $\Delta w_j = - \eta \frac{\partial J}{\partial w_j} = - \eta \sum_i (t^{(i)} - o^{(i)})(- x^{(i)}_{j}) = \eta \sum_i (t^{(i)} - o^{(i)})x^{(i)}_{j}$. Eventually, we can apply a simultaneous weight update similar to the perceptron rule: $\mathbf{w} := \mathbf{w} + \Delta \mathbf{w}$.
Another advantage of online learning is that the classifier can be immediately updated as new training data arrives, e.g., in web applications, and old training data can be discarded if storage is an issue.
In later articles, we will take a look at different approaches to dynamically adjust the learning rate, the concepts of “One-vs-All” and “One-vs-One” for multi-class classification, regularization to overcome overfitting by introducing additional information, dealing with nonlinear problems and multilayer neural networks, different activation functions for artificial neurons, and related concepts such as logistic regression and support vector machines.
ADALINE
ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) is an early single-layer artificial neural network and the name of the physical device that implemented this network.[1][2][3][4][5] The network uses memistors.
It was developed by Professor Bernard Widrow and his graduate student Ted Hoff at Stanford University in 1960.
It consists of a weight, a bias and a summation function.
The difference between Adaline and the standard (McCulloch–Pitts) perceptron is that in the learning phase, the weights are adjusted according to the weighted sum of the inputs (the net).
In the standard perceptron, the net is passed to the activation (transfer) function and the function's output is used for adjusting the weights.
Adaline is a single layer neural network with multiple nodes where each node accepts multiple inputs and generates one output.
Given the following variables: $x$ is the input vector, $w$ is the weight vector, $n$ is the number of inputs and $\theta$ is a constant, then we find that the output is $y = \sum_{j=1}^{n} x_j w_j + \theta$.
If we further assume that $x_0 = 1$ and $w_0 = \theta$, then the output further reduces to $y = \sum_{j=0}^{n} x_j w_j$.
Let us assume: $\eta$ is the learning rate, $d$ is the desired output and $o$ is the actual output; then the weights are updated as follows: $w \leftarrow w + \eta(d - o)x$.[6]
This update rule is in fact the stochastic gradient descent update for linear regression.[7] MADALINE (Many ADALINE[8]) is a three-layer (input, hidden, output), fully connected, feed-forward artificial neural network architecture for classification that uses ADALINE units in its hidden and output layers, i.e.
its activation function is the sign function.[9] The three-layer network uses memistors.
Three different training algorithms for MADALINE networks, which cannot be learned using backpropagation because the sign function is not differentiable, have been suggested, called Rule I, Rule II and Rule III.
The first of these dates back to 1962 and cannot adapt the weights of the hidden-output connection.[10] The second training algorithm improved on Rule I and was described in 1988.[8] The third 'Rule' applied to a modified network with sigmoid activations instead of signum;
it was later found to be equivalent to backpropagation.[10] The Rule II training algorithm is based on a principle called 'minimal disturbance'.
It proceeds by looping over training examples; for each example, it tentatively flips the sign of the hidden unit whose net input is smallest in magnitude (the minimal-disturbance choice) and keeps the flip only if it reduces the error. Additionally, when flipping single units' signs does not drive the error to zero for a particular example, the training algorithm starts flipping pairs of units' signs, then triples of units, etc.[8]
Machine Learning with scikit-learn
In this tutorial, we'll learn about another type of single-layer neural network (still a perceptron-like model) called Adaline (ADAptive LInear NEuron), whose learning rule is also known as the Widrow–Hoff rule.
The perceptron algorithm enables the model to automatically learn the optimal weight coefficients that are then multiplied with the input features in order to make the decision of whether a neuron fires or not.
The update of each weight $w_j$ in the weight vector $w$ can be written as $w_j := w_j + \Delta w_j$, where $\Delta w_j$ is calculated as $\Delta w_j = \eta\,(y^{(i)} - \hat{y}^{(i)})\,x^{(i)}_j$. One of the most critical tasks in supervised machine learning algorithms is to minimize the cost function.
Here we define the cost function $J$ to be the Sum of Squared Errors (SSE) between the calculated outcome and the true class label: $J(w) = \frac{1}{2}\sum_i \big(y^{(i)} - \phi(z^{(i)})\big)^2$. Compared with the unit step function, the advantages of this continuous linear activation function are that the cost function becomes differentiable and, since it is convex, it can be minimized with gradient descent.
Although the adaptive linear learning rule looks identical to the perceptron rule, the $\phi(z^{(i)})$ with $z^{(i)}=w^Tx^{(i)}$ is a real number and not an integer class label. Also, we can minimize the cost function by taking a step into the opposite direction of a gradient that is calculated from the whole training set; this is why this approach is also called batch gradient descent.

Implementation

Since the perceptron rule and the Adaptive Linear Neuron are very similar, we can take the perceptron implementation that we defined earlier and change the fit method so that the weights are updated by minimizing the cost function via gradient descent. Here is the relevant fragment:
errors = y - output
self.weight[0] += self.rate * errors.sum()
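For context, a complete batch-gradient-descent fit method along these lines might look as follows; this is a reconstruction using the attribute names weight and rate from the fragment above (the epoch count niter is an assumption), not the tutorial's exact listing:

import numpy as np

class AdalineGD:
    def __init__(self, rate=0.01, niter=50):
        self.rate = rate
        self.niter = niter

    def fit(self, X, y):
        self.weight = np.zeros(1 + X.shape[1])
        self.cost = []
        for _ in range(self.niter):
            output = np.dot(X, self.weight[1:]) + self.weight[0]  # linear activation
            errors = y - output
            self.weight[1:] += self.rate * X.T.dot(errors)  # gradient step, weights
            self.weight[0] += self.rate * errors.sum()      # gradient step, bias
            self.cost.append((errors ** 2).sum() / 2.0)     # SSE cost per epoch
        return self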
So, to standardize the $j$-th feature, we just need to subtract the sample mean $\mu_j$ from every training sample and divide it by its standard deviation $\sigma_j$: $x'_j = \dfrac{x_j - \mu_j}{\sigma_j}$, where $x_j$ is a vector consisting of the $j$-th feature values of all $n$ training samples.
We can standardize by using the NumPy methods mean and std. After the standardization, we will train the Adaline model again, using the not-so-small learning rate of $\eta = 0.01$.
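A minimal sketch of that standardization step, assuming X is a NumPy array of shape (n_samples, n_features):

import numpy as np

X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 35.0]])  # toy data for illustration
X_std = (X - X.mean(axis=0)) / X.std(axis=0)  # zero mean, unit variance per column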
|
This is interesting news on conduction heat transfer.
A new article reports that vanadium dioxide conducts electricity much better than it conducts heat at near room temperature:
Abstract
In electrically conductive solids, the Wiedemann-Franz law requires the electronic contribution to thermal conductivity to be proportional to electrical conductivity. Violations of the Wiedemann-Franz law are typically an indication of unconventional quasiparticle dynamics, such as inelastic scattering, or hydrodynamic collective motion of charge carriers, typically pronounced only at cryogenic temperatures. We report an order-of-magnitude breakdown of the Wiedemann-Franz law at high temperatures ranging from 240 to 340 kelvin in metallic vanadium dioxide in the vicinity of its metal-insulator transition. Different from previously established mechanisms, the unusually low electronic thermal conductivity is a signature of the absence of quasiparticles in a strongly correlated electron fluid where heat and charge diffuse independently.
Wiedemann–Franz(-Lorenz) Law
In solids, heat is transported by vibrations of the solid lattice (phonon contribution) and by the motion of free electrons (electronic contribution). In metals, thermal energy transport by electrons predominates. Thus, good electrical conductors are also good thermal conductors, as the Wiedemann–Franz law states:
\begin{equation} \frac{\lambda}{\sigma} = LT \tag{1} \end{equation}
where \(\lambda\) is the thermal conductivity, \(\sigma\) the electrical conductivity, \(T\) the absolute temperature, and \(L\) the Lorenz number (\(=2.45 \times 10^{-8}\ {\rm W\,\Omega/K^2}\)).
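As a rough numerical illustration of equation (1), here is a short Python check using copper, a metal that does obey the law reasonably well (the conductivity value is a textbook figure for roughly 293 K):

L = 2.45e-8       # Lorenz number, W*Ohm/K^2
sigma = 5.96e7    # electrical conductivity of copper, S/m
T = 293.0         # absolute temperature, K
lam = L * sigma * T
print(lam)        # ~428 W/(m*K), close to copper's measured ~400 W/(m*K)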
The news reports that vanadium dioxide does not obey this empirical law.
Related Topics and References (Japanese)
|
Equivalence of Definitions of Ordering

Theorem
Definition 1: An ordering on $S$ is a relation $\mathcal R$ on $S$ such that:
\((1)\): $\mathcal R$ is reflexive: $\forall a \in S: a \mathop {\mathcal R} a$
\((2)\): $\mathcal R$ is transitive: $\forall a, b, c \in S: a \mathop {\mathcal R} b \land b \mathop {\mathcal R} c \implies a \mathop {\mathcal R} c$
\((3)\): $\mathcal R$ is antisymmetric: $\forall a, b \in S: a \mathop {\mathcal R} b \land b \mathop {\mathcal R} a \implies a = b$
Definition 2: An ordering on $S$ is a relation $\mathcal R$ on $S$ such that:
$(1): \quad \mathcal R \circ \mathcal R = \mathcal R$
$(2): \quad \mathcal R \cap \mathcal R^{-1} = \Delta_S$
where:
$\circ$ denotes relation composition
$\mathcal R^{-1}$ denotes the inverse of $\mathcal R$
$\Delta_S$ denotes the diagonal relation on $S$.

Proof 1

Definition 1 implies Definition 2
Let $\mathcal R$ be a relation on $S$ satisfying:
\((1)\): $\mathcal R$ is reflexive: $\forall a \in S: a \mathop {\mathcal R} a$
\((2)\): $\mathcal R$ is transitive: $\forall a, b, c \in S: a \mathop {\mathcal R} b \land b \mathop {\mathcal R} c \implies a \mathop {\mathcal R} c$
\((3)\): $\mathcal R$ is antisymmetric: $\forall a, b \in S: a \mathop {\mathcal R} b \land b \mathop {\mathcal R} a \implies a = b$
Since $\mathcal R$ is reflexive and transitive, it follows that:
$\mathcal R \circ \mathcal R = \mathcal R$
Since $\mathcal R$ is antisymmetric and reflexive, by Relation is Antisymmetric and Reflexive iff Intersection with Inverse equals Diagonal Relation:
$\mathcal R \cap \mathcal R^{-1} = \Delta_S$
Thus $\mathcal R$ is an ordering by definition 2.
$\Box$
Definition 2 implies Definition 1
Let $\mathcal R$ be a relation which fulfils the conditions:
$(1): \quad \mathcal R \circ \mathcal R = \mathcal R$
$(2): \quad \mathcal R \cap \mathcal R^{-1} = \Delta_S$
By definition of set equality, it follows from $(1)$ that:
$\mathcal R \circ \mathcal R \subseteq \mathcal R$
Thus, by definition, $\mathcal R$ is transitive.
By Relation is Antisymmetric and Reflexive iff Intersection with Inverse equals Diagonal Relation, it follows from $(2)$ that $\mathcal R$ is antisymmetric and reflexive. Thus $\mathcal R$ is an ordering by definition 1.
$\blacksquare$
Proof 2

Definition 1 implies Definition 2
Let $\mathcal R$ be a relation on $S$ satisfying:
\((1)\): $\mathcal R$ is reflexive: $\forall a \in S: a \mathop {\mathcal R} a$
\((2)\): $\mathcal R$ is transitive: $\forall a, b, c \in S: a \mathop {\mathcal R} b \land b \mathop {\mathcal R} c \implies a \mathop {\mathcal R} c$
\((3)\): $\mathcal R$ is antisymmetric: $\forall a, b \in S: a \mathop {\mathcal R} b \land b \mathop {\mathcal R} a \implies a = b$

Condition $(1)$
Let $\left({x, y}\right) \in \mathcal R \circ \mathcal R$.
Then there exists a $z \in S$ such that:
$\left({x, z}\right), \left({z, y}\right) \in \mathcal R$
By $\mathcal R$ being transitive:
$\left({x, y}\right) \in \mathcal R$
Hence:
$\mathcal R \circ \mathcal R \subseteq \mathcal R$
Now let $\left({x, y}\right) \in \mathcal R$.
By $\mathcal R$ being reflexive:
$\left({y, y}\right) \in \mathcal R$
Hence by the definition of relation composition:
$\left({x, y}\right) \in \mathcal R \circ \mathcal R$
Hence:
$\mathcal R \subseteq \mathcal R \circ \mathcal R$
Hence $\mathcal R \circ \mathcal R = \mathcal R$.

Condition $(2)$
Follows immediately from Relation is Antisymmetric iff Intersection with Inverse is Coreflexive and $\mathcal R$ being reflexive.
Thus $\mathcal R$ is an ordering by definition 2.
$\Box$
Definition 2 implies Definition 1
Let $\mathcal R$ be a relation which fulfils the conditions:
$(1): \quad \mathcal R \circ \mathcal R = \mathcal R$
$(2): \quad \mathcal R \cap \mathcal R^{-1} = \Delta_S$

Reflexivity
By Intersection is Subset the condition:
$\mathcal R \cap \mathcal R^{-1} = \Delta_S$
implies:
$\Delta_S \subseteq \mathcal R$
Thus $\mathcal R$ is reflexive by definition.
Antisymmetry
By Relation is Antisymmetric iff Intersection with Inverse is Coreflexive the condition:
$\mathcal R \cap \mathcal R^{-1} = \Delta_S$
implies that $\mathcal R$ is antisymmetric.
Transitivity
Let $\left({x, y}\right), \left({y, z}\right) \in \mathcal R$.
Then by the definition of relation composition:
$\left({x, z}\right) \in \mathcal R \circ \mathcal R$
But by the condition:
$\mathcal R \circ \mathcal R = \mathcal R$
It follows that:
$\left({x, z}\right) \in \mathcal R$
Hence $\mathcal R$ is transitive.
Thus $\mathcal R$ is an ordering by definition 1.
$\blacksquare$
|
Integral
One of the central notions in mathematical analysis and all of mathematics, which arose in connection with two problems: to recover a function from its derivative (for example, the problem of finding the law of motion of a material object along a straight line when the velocity of this point is known); and to calculate the area bounded by the graph of a function $f$ on an interval $a\leq x\leq b$ and the $x$-axis (the problem of calculating the work performed by a force over an interval of time $a\leq t\leq b$ leads to this problem, as do other problems).
The two problems indicated above lead to two forms of the integral, the indefinite and the definite integral. The study of the properties and calculation of these interrelated forms of the integral constitutes the problem of integral calculus.
In the course of development of mathematics and under the influence of the requirements of natural science and technology, the notions of the indefinite and the definite integral have undergone a number of generalizations and modifications.
The indefinite integral.
A primitive of a function $f$ of the variable $x$ on an interval $a<x<b$ is any function $F$ whose derivative is equal to $f$ at each point $x$ of the interval. It is clear that if $F$ is a primitive of $f$ on the interval $a<x<b$, then so is $F_1=F+C$, where $C$ is an arbitrary constant. The converse also holds: Any two primitives of the same function $f$ on the interval $a<x<b$ can only differ by a constant. Consequently, if $F$ is one of the primitives of $f$ on the interval $a<x<b$, then any primitive of $f$ on this interval has the form $F+C$, where $C$ is a constant. The collection of all primitives of $f$ on the interval $a<x<b$ is called the indefinite integral of $f$ (on this interval) and is denoted by the symbol
$$\int f(x)dx.$$
According to the fundamental theorem of integral calculus, there exists for each continuous function $f$ on the interval $a<x<b$ a primitive, and hence an indefinite integral, on this interval (cf. also Indefinite integral).
The definite integral.
The notion of the definite integral is introduced either as a limit of integral sums (see Cauchy integral; Riemann integral; Lebesgue integral; Stieltjes integral) or, in the case when the given function $f$ is defined on some interval $[a,b]$ and has a primitive $F$ on this interval, as the difference between the values at the end points, that is, as $F(b)-F(a)$. The definite integral of $f$ on $[a,b]$ is denoted by $\int_a^bf(x)dx$. The definition of the integral as a limit of integral sums for the case of continuous functions was stated by A.L. Cauchy in 1823. The case of arbitrary functions was studied by B. Riemann (1853). A substantial advance in the theory of definite integrals was made by G. Darboux (1879), who introduced the notion of upper and lower Riemann sums (see Darboux sum). A necessary and sufficient condition for the Riemann integrability of discontinuous functions was established in final form in 1902 by H. Lebesgue.
There is the following relationship between the definitions of the definite integral of a continuous function $f$ on a closed interval $[a,b]$ and the indefinite integral (or primitive) of this function: 1) if $F$ is any primitive of $f$, then the following Newton–Leibniz formula holds:
$$\int\limits_a^bf(x)dx=F(b)-F(a);$$
2) for any $x$ in the interval $[a,b]$, the indefinite integral of the continuous function $f$ can be written in the form
$$\int f(x)dx=\int\limits_a^xf(t)dt+C,$$
where $C$ is an arbitrary constant. In particular, the definite integral with variable upper limit,
$$F(x)=\int\limits_a^xf(t)dt\tag{1}$$
is a primitive of $f$.
In order to introduce the definite integral of $f$ over $[a,b]$ in the sense of Lebesgue, the set of values of $y$ is divided by points $\ldots<y_{-1}<y_0<y_1<\dots$ into subintervals; one denotes by $M_i$ the set of all values of $x$ in the interval $[a,b]$ for which $y_{i-1}\leq f(x)<y_i$, and by $\mu(M_i)$ the measure of the set $M_i$ in the sense of Lebesgue (cf. Lebesgue measure). A Lebesgue integral sum of the function $f$ on the interval $[a,b]$ is defined by the formula
$$\sigma=\sum_i\eta_i\mu(M_i),\tag{2}$$
where $\eta_i$ are arbitrary numbers in the interval $[y_{i-1},y_i]$.
A function $f$ is said to be Lebesgue integrable on the interval $[a,b]$ if the limit of the integral sums \ref{2} exists and is finite as the maximum width of the intervals $(y_{i-1},y_i)$ tends to zero, that is, if there exists a real number $I$ such that for any $\epsilon>0$ there is a $\delta>0$ such that under the single condition $\max(y_i-y_{i-1})<\delta$ the inequality $|\sigma-I|<\epsilon$ holds. The limit $I$ is then called the definite Lebesgue integral of $f$ over $[a,b]$.
Instead of the interval $[a,b]$ one can consider an arbitrary set that is measurable with respect to some non-negative complete countably-additive measure. An alternative introduction to the Lebesgue integral can be given, when one defines this integral originally on the set of so-called simple functions (that is, measurable functions assuming at most a countable number of values), and then introduces the integral by means of a limit transition for any function that can be expressed as the limit of a uniformly-convergent sequence of simple functions (see Lebesgue integral).
Each Riemann-integrable function is Lebesgue integrable. The converse is false, since there exist Lebesgue-integrable functions that are discontinuous on a set of positive measure (for example, the Dirichlet function).
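For instance, for the Dirichlet function $D$ (equal to $1$ at rational and $0$ at irrational points of $[0,1]$), every Riemann integral sum can be made equal to $0$ or to $1$ by the choice of sample points, so the Riemann integral does not exist; the Lebesgue construction, however, gives

$$\int\limits_0^1D(x)dx=1\cdot\mu(\mathbb{Q}\cap[0,1])+0\cdot\mu([0,1]\setminus\mathbb{Q})=0,$$

since the rationals form a set of Lebesgue measure zero.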
In order that a bounded function be Lebesgue integrable, it is necessary and sufficient that this function belongs to the class of measurable functions (cf. Measurable function). The functions encountered in mathematical analysis are, as a rule, measurable. This means that the Lebesgue integral has a generality that is sufficient for the requirements of analysis.
The Lebesgue integral also covers the cases of absolutely-convergent improper integrals (cf. Improper integral).
The generality attained by the definition of the Lebesgue integral is absolutely essential in many questions in modern mathematical analysis (the theory of generalized functions, the definition of generalized solutions of differential equations, and the isomorphism of the Hilbert spaces $L_2$ and $l_2$, which is equivalent to the so-called Riesz–Fischer theorem in the theory of trigonometric or arbitrary orthogonal series; all these theories have proved possible only by taking the integral to be in the sense of Lebesgue).
The primitive in the sense of Lebesgue is naturally defined by means of equation \ref{1}, in which the integral is taken in the sense of Lebesgue. The relation $F'=f$ in this case holds everywhere, except perhaps on a set of measure zero.
Other generalizations of the notions of an integral.
In 1894 T.J. Stieltjes gave another generalization of the Riemann integral (which acquired the name of Stieltjes integral), important for applications, in which one considers the integrability of a function $f$ defined on some interval $[a,b]$ with respect to a second function defined on the same interval. The Stieltjes integral of $f$ with respect to the function $U$ is denoted by the symbol
$$I=\int\limits_a^bf(x)dU(x).\tag{3}$$
If $U$ has a bounded Riemann-integrable derivative $U'$, then the Stieltjes integral reduces to the Riemann integral by the formula
$$\int\limits_a^bf(x)dU(x)=\int\limits_a^bf(x)U'(x)dx.$$
In particular, when $U(x)=x+C$, the Stieltjes integral \ref{3} is the Riemann integral $\int_a^bf(x)dx$.
However, the interesting case for applications is when the function $U$ does not have a derivative. An example of such a $U$ is the spectral measure in the study of spectral decompositions.
The curvilinear integral
$$\int\limits_\Gamma f(x,y)dx$$
along the curve $\Gamma$ defined by the equations $x=\phi(t),y=\psi(t)$, $a\leq t\leq b$, is a special case of the Stieltjes integral, since it can be written in the form
$$\int\limits_a^bf[\phi(t),\psi(t)]d\phi(t).$$
A further generalization of the notion of the integral is obtained by integration over an arbitrary set in a space of any number of variables. In the most general case it is convenient to regard the integral as a function of the set $M$ over which the integration is carried out (see Set function), in the form
$$F(M)=\int\limits_Mf(x)dU(x),$$
where $U$ is a set function on $M$ (its measure in a particular case) and the points $x$ belong to the set $M$ over which the integration proceeds. Particular cases of this type of integration are multiple integrals and surface integrals (cf. Multiple integral; Surface integral).
Another generalization of the notion of the integral is that of the improper integral.
In 1912 A. Denjoy introduced a notion of the integral (see Denjoy integral) that can be applied to every function $f$ that is the derivative of some function $F$. This enables one to reduce the constructive definition of the integral to a degree of generality which completely answers the problem of finding a definite integral taken in the sense of a primitive.
Comments
Concerning the "simple functions" mentioned above: every real-valued measurable function is the limit of a uniformly-convergent sequence of simple functions. However, such functions need not be Lebesgue integrable.
There are many other types of integrals besides those of Riemann and Lebesgue, cf., e.g., $A$-integral; Boks integral; Burkill integral; Daniell integral; Darboux sum; Kolmogorov integral; Perron integral; Pettis integral; Radon integral; Repeated integral; Strong integral; Wiener integral.
|
MCQs with Answers
In this one PDF, MCQs of all chapters of FSc Part 1 are given. There are seven chapters. The answers to the MCQs start on page 71.
SAMPLE MCQs
$i^{13}=$ ………… (A) $i$ (B) $1$ (C) $-1$ (D) $2$
The set of all possible subsets of $S$ is called (A) equivalent sets (B) empty set (C) power set (D) subset
The cube roots of unity are (A) $1, \omega, \omega^2$ (B) $-1, \omega, \omega^2$ (C) $-1, -\omega, -\omega^2$ (D) $1, -1, 2$
The equation $ax^2+bx+c=0$ will be quadratic if (A) $a=0, b\neq 0$ (B) $a\neq 0$ (C) $a=b=0$ (D) $b=$ any real number
An open sentence formed by using a sign of '$=$' is a/an (A) equation (B) formula (C) rational fraction (D) theorem
An arrangement of numbers according to some definite rule is called (A) sequence (B) combination (C) series (D) permutation
$n!=n(n-1)(n-2)\ldots3\cdot 2\cdot 1$ is defined only when $n$ is (A) a positive integer (B) an integer (C) a real number (D) a whole number
If the angle of rotation is counter-clockwise then the angle is (A) negative (B) positive (C) non-negative (D) none of these
|
Particle number expectation value
Set

context: $w$ … grand canonical weight
definiendum: $\langle\hat N\rangle(\beta,\mu) := \sum_{N=0}^\infty w_N(\beta,\mu)\cdot N$

Discussion
The notation “$\langle\hat N\rangle$” is chosen for the function because we can also introduce the sequence of observables $\hat N$ defined to give us the particle number of each canonical ensemble, i.e. $\hat N_N=N$, and then the above coincides with the proper grand canonical expectation value of $\hat N$. Notice that this $\hat N$ is sometimes denoted by $N$, which can get a little confusing.
Theorems
$ \langle\hat N\rangle = - \frac{\partial}{\partial\mu}\Omega $
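A short sketch of this identity, assuming the standard grand potential $\Omega = -\beta^{-1}\ln Z_G$ with $Z_G = \sum_N e^{\beta\mu N} Z_N$ and $w_N = e^{\beta\mu N} Z_N / Z_G$:

$-\frac{\partial}{\partial\mu}\Omega = \frac{1}{\beta}\frac{\partial}{\partial\mu}\ln Z_G = \frac{1}{\beta Z_G}\sum_{N=0}^\infty \beta N\, e^{\beta\mu N} Z_N = \sum_{N=0}^\infty N\, w_N = \langle\hat N\rangle$

Differentiating once more with respect to $\mu$ yields the fluctuation formula below.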
$\frac{1}{\beta}\frac{\partial}{\partial\mu}\langle\hat N\rangle = \langle {\hat N}^2\rangle-\langle\hat N\rangle^2$

Context
|
I have to solve a linear system of three equations and three unknowns and can't find a way to solve it. Applying Cramer's rule I obtain $\Delta = 0$, $\Delta_x \neq 0$, $\Delta_y \neq 0$, $\Delta_z \neq 0$, so a solution may exist. How do I deal with the system in such a situation?
Just solve it directly by applying Gauss algorithm.
Assume your equation system is of the form $\begin{bmatrix}a_{11}&a_{12}&a_{13}\\a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\end{bmatrix} \begin{bmatrix} x\\y\\z\end{bmatrix} = \begin{bmatrix}b_1\\b_2\\b_3\end{bmatrix}$ then you can solve it as follows:
First make sure that $a_{11}\not=0$, otherwise you can change 2 rows / columns s.t. the new matrix has the first entry not equal to $0$.
Then you add the first row multiplied by $-\frac{a_{21}}{a_{11}}$ to the second row. The resulting system then looks something like this:
$\begin{bmatrix}a_{11}&a_{12}&a_{13}\\0 & \tilde{a}_{22} & \tilde{a}_{23}\\ a_{31} & a_{32} & a_{33}\end{bmatrix} \begin{bmatrix} x\\y\\z\end{bmatrix} = \begin{bmatrix}b_1\\\tilde{b}_2\\b_3\end{bmatrix}$
NOTE that, for example, $\tilde{a}_{22} = a_{22} - \frac{a_{21}}{a_{11}}a_{12}$.
Now repeat: add the first row multiplied by $-\frac{a_{31}}{a_{11}}$ to the third row. This makes your system
$\begin{bmatrix}a_{11}&a_{12}&a_{13}\\0 & \tilde{a}_{22} & \tilde{a}_{23}\\ 0 & \tilde{a}_{32} & \tilde{a}_{33}\end{bmatrix} \begin{bmatrix} x\\y\\z\end{bmatrix} = \begin{bmatrix}b_1\\\tilde{b}_2\\\tilde{b}_3\end{bmatrix}$
Note again, that the entries in the third row of the system have changed, so they have a tilde above them now.
Now you need one more step. Make sure that $\tilde{a}_{22}\not=0$ (if not, change the 2nd and the 3rd row or column in your system to make this the case). Now add the second row (with $\tilde{a}_{22}\not=0$) multiplied by $-\frac{\tilde{a}_{32}}{\tilde{a}_{22}}$ to the third row. This makes your system look like
$\begin{bmatrix}a_{11}&a_{12}&a_{13}\\ 0 & \tilde{a}_{22} & \tilde{a}_{23}\\ 0 & 0 & \bar{a}_{33}\end{bmatrix} \begin{bmatrix} x\\y\\z\end{bmatrix} = \begin{bmatrix}b_1\\ \tilde{b}_2\\\bar{b}_3\end{bmatrix}$
Note, that in the third row again something has changed. The modified elements have a bar above them.
Now you can see that $\bar{a}_{33} \cdot z = \bar{b}_3$, so $z = \frac{\bar{b}_3}{\bar{a}_{33}}$.
Inserting this value for $z$ in the second equation, you get the value of $y$.
Afterwards insert the values of $z$ and $y$ into the first equation, to get a value for $x$.
The vector $\begin{bmatrix}x\\y\\z\end{bmatrix}$ is then a solution to the system.
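If you want to check a hand computation, here is a minimal NumPy sketch of exactly this procedure, forward elimination followed by back substitution (no row swapping for zero pivots; the names and the test system are my own):

import numpy as np

def solve3(a, b):
    a = a.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for i in range(n):                        # forward elimination
        for k in range(i + 1, n):
            factor = a[k, i] / a[i, i]        # assumes a nonzero pivot
            a[k, i:] -= factor * a[i, i:]
            b[k] -= factor * b[i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):            # back substitution
        x[i] = (b[i] - a[i, i + 1:] @ x[i + 1:]) / a[i, i]
    return x

a = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(solve3(a, b))  # [ 2.  3. -1.]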
REMARK:
If at one point during the calculation you get to the equation $0 = 0$, this means that you can choose a variable (e.g. $z$) as you wish. As you're normally interested in ALL solutions of the system, not just a particular one, you should call it $t$. This parameter $t$ is no longer a variable, so you can bring it to the right side of your system and calculate the values of the remaining variables with respect to this parameter $t$.
If at one point during the calculation you get a contradictory equation, e.g. $1=0$, you either made a mistake or the system has no solution!
Good luck :-)
Here are hints:
This kind of system may have infinitely many solutions or no solution. Change the system into row echelon form. Separate pivot variables and free variables. Express the pivot variables in terms of the free variables.
|
Mathematical Control & Related Fields
ISSN: 2156-8472, eISSN: 2156-8499
March 2012, Volume 2, Issue 1
Abstract:
We consider a finite planar network of 1-$d$ thermoelastic rods using either Fourier's law or Cattaneo's law for heat conduction, and we show that the system is exponentially stable in both cases.
Abstract:
This paper studies the eventual regularity of a wave equation with boundary dissipation and distributed damping. The equation under consideration is rewritten as a first-order system and analyzed by semigroup methods. By a certain asymptotic expansion theorem, we prove that the associated solution semigroup is eventually differentiable. This implies the eventual regularity of the solution of the wave equation.
Abstract:
An abstract $\nu$-metric was introduced in [1], with a view towards extending the classical $\nu$-metric of Vinnicombe from the case of rational transfer functions to more general nonrational transfer function classes of infinite-dimensional linear control systems. Here we give an important concrete special instance of the abstract $\nu$-metric, namely the case when the ring of stable transfer functions is the Hardy algebra $H^\infty$, by verifying that all the assumptions demanded in the abstract set-up are satisfied. This settles the open question implicit in [2].
Abstract:
We consider the Euler-Bernoulli equation coupled with a wave equation in a bounded domain. The Euler-Bernoulli has clamped boundary conditions and the wave equation has Dirichlet boundary conditions. The damping which is distributed everywhere in the domain under consideration acts through one of the equations only; its effect is transmitted to the other equation through the coupling. First we consider the case where the dissipation acts through the Euler-Bernoulli equation. We show that in this case the coupled system is not exponentially stable. Next, using a frequency domain approach combined with the multiplier techniques, and a recent result of Borichev and Tomilov on polynomial decay characterization of bounded semigroups, we provide precise decay estimates showing that the energy of this coupled system decays polynomially as the time variable goes to infinity. Second, we discuss the case where the damping acts through the wave equation. Proceeding as in the first case, we prove that this new system is not exponentially stable, and we provide precise polynomial decay estimates for its energy. The results obtained complement those existing in the literature involving the hinged Euler-Bernoulli equation.
Abstract:
This paper deals with Pontryagin's principle for optimal control problems governed by the 2D Navier-Stokes equations with integral state constraints and coupled integral control-state constraints. As an application, the necessary conditions for the local solution in the sense of $L^r(0,T;L^2(\Omega))$ ($2 < r < \infty$) are also obtained.
Abstract:
Momentum (or trend-following) trading strategies are widely used in the investment world. To better understand the nature of trend-following trading strategies and discover the corresponding optimality conditions, we consider the cases when the market trends are fully observable. In this paper, the market follows a regime switching model with three states (bull, sideways, and bear). Under this model, a set of sufficient conditions are developed to guarantee the optimality of trend-following trading strategies. A dynamic programming approach is used to verify these optimality conditions. The value functions are characterized by the associated HJB equations and are shown to be either linear functions or infinity depending on the parameter values. The results in this paper will help an investor to identify market conditions and to avoid trades which might be unprofitable even under the best market information. Finally, the corresponding value functions will provide an upper bound for trading performance which can be used as a general guide to rule out unrealistic expectations.
|
A Markov View of the Phase Vocoder Part 1 Introduction
Hello! This is my first post on dsprelated.com. I have a blog that I run on my website, http://www.christianyostdsp.com. In order to engage with the larger DSP community, I'd like to occasionally post my more engineering heavy writing here and get your thoughts.
Today we will look at the phase vocoder from a different angle by bringing some probability into the discussion. This is the first part in a short series. Future posts will expand further upon the ideas introduced here. After learning the basic PV idea, we might wonder what would warrant an investigation into improvements. Let’s start by reminding ourselves of the informal phase vocoder algorithm description:
When we measure how much the phase changed in our current frame from its previous frame, we are looking at how far each 3D-complex sinusoid turned over the analysis time period. Then we use this information to estimate how much the corkscrew will probably turn over the next analysis time period. Here it is important to note that the phase vocoder creates an approximation to the hypothetical signal we are trying to create.
We see here that the next state of the classic phase vocoder is completely dependent on its current state. Processes that behave in such a way are known as
Markov processes. Wikipedia gives us the following definition [1]:
A stochastic process has the Markov property if the conditional probability distribution of future states of the process (conditional on both past and present states) depends only upon the present state, not on the sequence of events that preceded it. A process with this property is called a Markov process.
Now most people wouldn’t describe the phase vocoder as “stochastic”, and the signals fed into the PV certainly aren’t random; however, from the point of view of the computer, they can appear to be. In one sense, statistics is the tool people created to deal with a set of possibilities, which are not necessarily random. We can think about statistics in card gambling. The order of cards in a deck isn’t random. It is completely determined by iterations of different shuffling patterns from an initial, constant state; summing these up to the present will tell you the order. If the shuffling patterns are the same, the resultant order will be the same no matter the deck. While this process isn’t random, it is as good as random to us, because there is no way we could know all the shuffling patterns that have affected the deck. Even if you knew all shuffles but one, you couldn’t know the current order! Because of this, probability is used to get a sense of the general structure of a card deck.
Something similar can be said about musical signals, except we are the shuffling patterns, and the computer is us. We know everything that happens in the signal we feed into the phase vocoder, and the computer doesn’t. And if we go through some lengthy scheme to program every feature in the signal, we have completely sacrificed real-time control, which is perhaps the most valuable feature of the phase vocoder. Like how seeing three “face” cards on the table lets you know that face cards are much less likely to be drawn, knowing that a sinusoidal component is in a certain bin hints that it is much less likely to move to some frequency bin 10,000 Hz away. This gives us some sense of the sinusoidal structure of the signal.
Statistical Considerations
So how would we construct a general structure for a musical signal? How can we determine whether a structure exists at all? Well, it seems like some locations are far more likely destinations for a given frequency than others. And if we see that there is at least one location less preferred than another, by induction perhaps we can infer that there exists some order of location preference that we could call the signal’s sinusoidal structure. Now this obviously doesn’t suffice as a rigorous mathematical proof, but it does tell us that an investigation into the time-frequency information of a signal is likely to produce some outline of a structure within the signal. Many DSP engineers, some of whom are mentioned in post 3, have approached the phase vocoder with some signal sinusoidal structure in mind. I spent most of 2018 investigating these solutions and designing real-time implementations during my undergraduate senior project. Here I’d like to take a step back and show why so many people have looked into these solutions in the first place. The result should be some quantitative base to rest our intuitions on. Using the idea of a Markov chain, we will tease out a general structure that musical signals have. It is exactly structures like this one which engineers have used to guide their improved phase vocoder algorithms.
Since the classic phase vocoder’s future state depends entirely on the current one, and the signal it’s operating on isn’t known to the computer, we can say it behaves like a Markov Chain. This means there exists a transition graph which expresses the next location of a sinusoid in a given bin as a probability. Below, the Y-axis represents the current location of a sinusoid, and the X-axis represents the next location of a sinusoid. Table 1 illustrates the structure which the phase vocoder assumes.
This doesn’t seem like an accurate view of an input signal, so let’s start an investigation into some properties of a signal to see if we can create a better transition graph. Since we are working with probability and statistics, let’s start with some elementary equations that we will use later on in our calculations.
\begin{equation*}\label{mean} \bar{x} = \frac{\sum_{i=0}^{N-1}x_{i}}{N} \end{equation*} \begin{equation*}\label{cov} Cov(X,Y) = \frac{\sum_{i=0}^{N-1}(x_{i}-\bar{x})(y_{i}-\bar{y})}{N} \end{equation*} \begin{equation*}\label{sd} \sigma(X) = \sqrt{\frac{1}{N}\sum_{i=0}^{N-1}(x_{i}-\bar{x})^{2}} \end{equation*} \begin{equation*}\label{covxx} Cov(X,X) = \frac{\sum_{i=0}^{N-1}(x_{i}-\bar{x})^{2}}{N} = (\sigma(x))^{2} \end{equation*}
Consider the amplitude envelopes of two adjacent frequency bins. Suppose that a single sinusoidal component in some mixture occupies each frequency bin at adjacent time points. In such a case we would expect to see a similar change in the amplitude envelope of each frequency as the sinusoid moves from one bin to the other. However, the amplitude difference in the bin it is coming from should be opposite to the change in amplitude of the bin it is going to. Figure 1 illustrates this behavior.
So if there is a change in amplitude in a given frequency bin, and we observe a similar amplitude change in another frequency bin, that amplitude difference could be coming from the same source, in other words sinusoidal movement. In order to get a sense of such sinusoidal movement, for each frequency bin we will calculate the covariance and correlation between its amplitude difference envelope and the amplitude difference envelope of every other frequency bin. The results will be plotted in an $N/2 \times N/2$ matrix. We choose $N/2$ because the second half of the spectrum is just a mirror of the first. Because $Cor(X,X) = 1$, we expect there should be a diagonal line across the matrix, similar to the one the phase vocoder assumes.
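Before turning to the implementation, here is a minimal NumPy sketch of the computation just described, assuming S is a magnitude spectrogram of shape (N/2, num_frames); the names are placeholders, not the MATLAB code linked below:

import numpy as np

def amp_diff_correlation(S):
    # per-bin absolute amplitude-difference envelopes between adjacent frames
    D = np.abs(np.diff(S, axis=1))
    # (N/2 x N/2) correlation matrix; Cor(X, X) = 1 gives the bright diagonal
    return np.corrcoef(D)

S = np.abs(np.random.randn(512, 300))  # stand-in for an FFT magnitude matrix
C = amp_diff_correlation(S)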
Implementation
The linked MATLAB code will calculate this matrix. It starts by recording the spectrum of the signal, as well as the amplitude difference between two adjacent FFT frames, both as a raw value and as a change proportional to the previous amplitude. For each time point, we will calculate the mean amplitude difference and its standard deviation, again both in raw and proportional value. The frequency amplitude vs. time plots we looked at earlier will also be calculated. For each frequency bin we can see the mean and standard deviation of the amplitude and amplitude difference. Finally, the script uses this information to calculate the covariance and correlation between frequency bins. A couple of notes about the script and calculations:
1. When calculating proportional change, we set a gate so that channels whose current and previous amplitudes are below the gate are ignored. This is because many frequency bins have little to no energy, so small increases in energy can show up as up to a 25× increase, when in reality the component is inaudible.
2. From that data we calculate the mean absolute value difference as well as the standard deviation of that difference for a given frequency. The absolute value is a more desirable version of the difference. Because the signal starts and ends quietly, if we think about what the mean value theorem says, the average amplitude difference is actually zero. So by taking the absolute value, we see the average change each frequency experiences per frame. Additionally, both positive and negative changes in amplitude are signs of sinusoidal movement, as we saw above.
The input signal we will use will be 7 seconds of Bach’s Violin Sonata No. 1 in G minor [2]. This seems like a nice choice because it is only one voice, and has a fair amount of sinusoidal movement. The FFT size is N = 1024 and we will use a Hanning window. By correlating each bin’s amplitude difference envelope with that of every other bin’s, we generate the following picture. The stronger the correlation between two amplitude difference envelopes, the brighter that cell is.
We notice a couple of things here. First, as noted before, there is a diagonal line traversing the matrix. Secondly, it is symmetrical on either side of the line, since $Cor(X,Y) = Cor(Y,X)$. We also see some common frequency-domain behavior in this picture. We know sinusoidal components in the input signal are probably affecting multiple frequency bins. Because of this, bins close together tend to correlate similarly with the same frequency bins. This can be seen as a checkered pattern in the matrix. Particularly quiet sections of the spectrum are also shown in the matrix: these are the dark sections. Regions that are most likely filled with discrete spectral noise don’t correlate well with anything intentional, and thus aren’t bright. Perhaps the most important thing to take away from this picture is the width of the diagonal line. We expect this to be the brightest part of the picture, which indeed it is. For each frequency at a given point on the Y-axis, the width of the horizontal line at that point tells us how strongly the amplitude difference envelope of that frequency correlates with those of the frequencies around it. Consequently, this gives us a sense of how likely those bins are as targets for sinusoidal movement from a given location. Furthermore, the line appears to get a bit wider the further up the spectrum we go. This also makes sense given that, in musical signals, higher frequency components travel a further frequency distance than lower frequency components.
So how good of a representation is this matrix of signal structure? For comparison, let’s see what matrix is drawn for a signal with no structure.
As we can see, there is far less apparent structure in the correlation matrix. The checkered behavior as well as the dark regions we saw earlier don’t show up in this picture. Furthermore, since this input doesn't really have a sinusoidal structure like a musical one does, what the FFT perceives as sinusoidal components are affecting many frequency bins at once. This is displayed as a very wide diagonal line throughout the spectrum. So it seems like this correlation matrix is an accurate representation of how structured its input signal is.
Conclusion
We have given the motivation for why a statistical investigation into the phase vocoder is warranted, some practical considerations and methods for going about these calculations, and some promising first results when we assume the phase vocoder is a Markov chain and carry out the subsequent ideas. Looking ahead to the next post, we will investigate how we can use this correlation matrix data to help create a more accurate transition matrix, how these pictures and transition matrices are represented in phase vocoder improvements, and what they suggest for future algorithms.
Read the original post here where we approach this idea using Max/MSP.
Code References Next post by Christian Yost:
A Markov View of the Phase Vocoder Part 2
|
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Measurement of quarkonium production at forward rapidity in pp collisions at √s=7 TeV
(Springer, 2014-08)
The inclusive production cross sections at forward rapidity of J/ψ , ψ(2S) , Υ (1S) and Υ (2S) are measured in pp collisions at s√=7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
|
It's really a great question, so let us go by a simple definition.
Definition:
As per what Wikipedia says:
A yo-yo (also spelled yoyo) is a toy which in its simplest form is an
object consisting of an axle connected to two disks, and a length of
string looped around the axle, similar to a slender spool.
Working:
But what really makes it work? In simple terms, it is an energy-converting machine. At first the yo-yo has some potential energy, as it is above the ground. When we release the yo-yo, it converts that potential energy into kinetic energy; when the yo-yo spins at the bottom, this is mostly rotational kinetic energy.
In principle a yo-yo could spin up and down forever, but what makes it stop is:
the string is attached to the plastic axle, so there is a friction force acting between the two objects;
and air resistance, as the yo-yo also rubs against the air, which we are not able to see.
The image of a simple yo-yo when it is opened:
Mathematical:
It can be solved simply (I am no expert). With the diagram below:
$$-mg+T=ma \tag{1}$$
$$\tau=I \alpha, \quad Tr = I \alpha \tag{2}$$
$$v=-r \omega, \quad a=-r \alpha \tag{3}$$
Combining (1), (2) and (3):
$$a= -\frac{g}{1+I/mr^2}$$
For a uniform disk of radius $R$, $$I = \frac{1}{2} m R^2,$$ so
$$a=- \frac{g}{1+ \frac{1}{2}(R/r)^2}$$
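As a quick numerical check of this formula (with made-up dimensions: a disk of radius 3 cm on an axle of radius 5 mm):

g = 9.81                           # m/s^2
R, r = 0.03, 0.005                 # disk and axle radii, m
a = -g / (1 + 0.5 * (R / r) ** 2)
print(a)                           # ~ -0.52 m/s^2, far smaller than free fall

This is why a yo-yo descends so gently compared with a dropped disk.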
There are clutch yo-yos, which behave quite interestingly because of centrifugal force; the mechanism is called a centrifugal clutch. What makes this yo-yo different is that there is a spring mechanism attached inside. Whenever the yo-yo is rotating slowly, the springs come into contact with the axle and make it climb back up the string. If the yo-yo is rotating much faster, though, the weights (which are attached to the springs) fly out from the axle because of centrifugal force. Then there is nothing to clamp, so it is free to spin at the bottom; when it slows down, it comes back into contact with the axle and climbs back up because of friction.
Here's a photo showing this mechanism:
In this image you can see how a clutch yo-yo works: when you spin it hard, the two solid bars fly outward, but when you flick it, they clamp again and the yo-yo comes back to you automatically.
|
When finding the maximum margin separator in the primal form, we have the quadratic program
$$\min \frac{1}{2}\|\theta\|^2 \quad \text{subject to: } y^{(t)}(\theta \cdot x^{(t)} + \theta_0) \geq 1, \ t=1,\dots,n,$$
which says, basically, to find the maximum margin separator. The margin size will be:
$$\frac{1}{\|\theta\|}.$$
Does the size of the margin change if we change the constants of the constraint?
That is, if we have
$$\text{subject to: } y^{(t)}(\theta \cdot x^{(t)} + \theta_0) \geq k, \ t=1,\dots,n,$$
instead of 1?
If it does not, why doesn't it matter? How is the formulation equivalent regardless of the exact constant in the constraint?
|
I have to build the stability diagram of mercury and I have a problem with this couple:
$\ce{Hg^2+}/\ce{Hg2^2+}$ $E^\circ=0.91\ \mathrm{V}$
The exercise says that at the border the concentration is $C=0.10\ \mathrm{mol \cdot L^{-1}}$ for all ions.
So I have : $\ce{2Hg^2+ +2e^- <=> Hg2^2+}$
Then by the Nernst relation I have: $E=E^\circ+0.03\ \mathrm{V} \times \log\left(\frac{\left[\ce{Hg^2+}\right]^2}{\left[\ce{Hg2^2+}\right]}\right)$
And in the solution of the exercise they write at the border $\left[\ce{Hg^2+}\right]=\left[\ce{Hg2^2+}\right]=\frac{C}{2}$
I don’t understand why it is $C/2$.
I think that’s a “stupid” question but I really don’t understand.
|
Linear representation theory of symmetric group:S5
This article gives specific information, namely, linear representation theory, about a particular group, namely: symmetric group:S5.
This article describes the linear representation theory of symmetric group:S5, a group of order 120. We take this to be the group of permutations on the set $\{1,2,3,4,5\}$.
Summary
| Item | Value |
| Degrees of irreducible representations over a splitting field (such as $\mathbb{Q}$ or $\mathbb{C}$) | 1,1,4,4,5,5,6 (maximum: 6, lcm: 60, number: 7, sum of squares: 120) |
| Schur index values of irreducible representations | 1,1,1,1,1,1,1 (maximum: 1, lcm: 1) |
| Smallest ring of realization for all irreducible representations (characteristic zero) | $\mathbb{Z}$ -- the ring of integers |
| Smallest field of realization for all irreducible representations, i.e., smallest splitting field (characteristic zero) | $\mathbb{Q}$ -- hence it is a rational representation group |
| Criterion for a field to be a splitting field | Any field of characteristic not equal to 2, 3, or 5 |
| Smallest size splitting field | field:F7, i.e., the field of 7 elements |
Family contexts
| Family name | Parameter values | General discussion of linear representation theory of family |
| symmetric group | degree 5 | linear representation theory of symmetric groups |
| projective general linear group of degree two | field:F5, i.e., the field of size 5, so the group is $PGL(2,5)$ | linear representation theory of projective general linear group of degree two over a finite field |
Degrees of irreducible representations
Note that the linear representation theory of the symmetric group of degree four works over any field of characteristic not equal to two or three, and the list of degrees is 1,1,2,3,3.
Interpretation as symmetric group
| Common name of representation | Degree | Corresponding partition | Conjugate partition | Representation for conjugate partition |
| trivial representation | 1 | 5 | 1+1+1+1+1 | sign representation |
| sign representation | 1 | 1+1+1+1+1 | 5 | trivial representation |
| standard representation | 4 | 4+1 | 2+1+1+1 | product of standard and sign representation |
| product of standard and sign representation | 4 | 2+1+1+1 | 4+1 | standard representation |
| irreducible five-dimensional representation | 5 | 3+2 | 2+2+1 | other irreducible five-dimensional representation |
| irreducible five-dimensional representation | 5 | 2+2+1 | 3+2 | other irreducible five-dimensional representation |
| exterior square of standard representation | 6 | 3+1+1 | 3+1+1 | the same representation, because the partition is self-conjugate |
Interpretation as projective general linear group of degree two
Compare and contrast with linear representation theory of projective general linear group of degree two over a finite field
| Description of collection of representations | Parameter / how described | Degree (general odd $q$) | Degree ($q=5$) | Number (general odd $q$) | Number ($q=5$) | Sum of squares (general odd $q$) | Sum of squares ($q=5$) | Symmetric group name |
| Trivial | -- | 1 | 1 | 1 | 1 | 1 | 1 | trivial |
| Sign representation | kernel is the projective special linear group of degree two (in this case, alternating group:A5), image is $\{\pm 1\}$ | 1 | 1 | 1 | 1 | 1 | 1 | sign |
| Nontrivial component of permutation representation on the projective line over $\mathbb{F}_q$ | -- | $q$ | 5 | 1 | 1 | $q^2$ | 25 | irreducible 5D |
| Tensor product of sign representation and nontrivial component of permutation representation on projective line | -- | $q$ | 5 | 1 | 1 | $q^2$ | 25 | other irreducible 5D |
| Induced from one-dimensional representation of Borel subgroup | ? | $q+1$ | 6 | $(q-3)/2$ | 1 | $(q+1)^2(q-3)/2$ | 36 | exterior square of standard representation |
| Unclear | a nontrivial homomorphism $\varphi:\mathbb{F}_{q^2}^\ast \to \mathbb{C}^\ast$ with $\varphi(x)^{q+1} = 1$ for all $x$, taking values other than $\pm 1$; identify $\varphi$ and $\varphi^q$ | $q-1$ | 4 | $(q-1)/2$ | 2 | $(q-1)^3/2$ | 32 | standard representation, product of standard and sign |
| Total | NA | NA | NA | $q+2$ | 7 | $q^3-q$ | 120 | NA |
Character table
| Representation / conjugacy class representative and size | $()$ (size 1) | $(1,2)$ (size 10) | $(1,2)(3,4)$ (size 15) | $(1,2,3)$ (size 20) | $(1,2,3)(4,5)$ (size 20) | $(1,2,3,4,5)$ (size 24) | $(1,2,3,4)$ (size 30) |
| trivial representation | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| sign representation | 1 | -1 | 1 | 1 | -1 | 1 | -1 |
| standard representation | 4 | 2 | 0 | 1 | -1 | -1 | 0 |
| product of standard and sign representation | 4 | -2 | 0 | 1 | 1 | -1 | 0 |
| irreducible five-dimensional representation | 5 | 1 | 1 | -1 | 1 | 0 | -1 |
| irreducible five-dimensional representation | 5 | -1 | 1 | -1 | -1 | 0 | 1 |
| exterior square of standard representation | 6 | 0 | -2 | 0 | 0 | 1 | 0 |
Below are the size-degree-weighted characters, i.e., these are obtained by multiplying the character value by the size of the conjugacy class and then dividing by the degree of the representation. Note that size-degree-weighted characters are algebraic integers.
| Representation / conjugacy class representative and size | $()$ (size 1) | $(1,2)$ (size 10) | $(1,2)(3,4)$ (size 15) | $(1,2,3)$ (size 20) | $(1,2,3)(4,5)$ (size 20) | $(1,2,3,4,5)$ (size 24) | $(1,2,3,4)$ (size 30) |
| trivial representation | 1 | 10 | 15 | 20 | 20 | 24 | 30 |
| sign representation | 1 | -10 | 15 | 20 | -20 | 24 | -30 |
| standard representation | 1 | 5 | 0 | 5 | -5 | -6 | 0 |
| product of standard and sign representation | 1 | -5 | 0 | 5 | 5 | -6 | 0 |
| irreducible five-dimensional representation | 1 | 2 | 3 | -4 | 4 | 0 | -6 |
| irreducible five-dimensional representation | 1 | -2 | 3 | -4 | -4 | 0 | 6 |
| exterior square of standard representation | 1 | 0 | -5 | 0 | 0 | 4 | 0 |
GAP implementation
The degrees of irreducible representations can be computed using GAP's CharacterDegrees function:
gap> CharacterDegrees(SymmetricGroup(5));
[ [ 1, 2 ], [ 4, 2 ], [ 5, 2 ], [ 6, 1 ] ]
This means that there are 2 degree 1 irreducible representations, 2 degree 4 irreducible representations, 2 degree 5 irreducible representations, and 1 degree 6 irreducible representation.
The characters of all irreducible representations can be computed in full using GAP's CharacterTable function:
gap> Irr(CharacterTable(SymmetricGroup(5)));
[ Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 1, -1, 1, 1, -1, -1, 1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 4, -2, 0, 1, 1, 0, -1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 5, -1, 1, -1, -1, 1, 0 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 6, 0, -2, 0, 0, 0, 1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 5, 1, 1, -1, 1, -1, 0 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 4, 2, 0, 1, -1, 0, -1 ] ),
  Character( CharacterTable( Sym( [ 1 .. 5 ] ) ), [ 1, 1, 1, 1, 1, 1, 1 ] ) ]
|
Let $R$ be an order on a class $X$. A subclass $S$ of $X$ is called an $R$-segment if $$(\forall s \in S) (\forall x \in X) ((x,s) \in R \implies x \in S).$$ If $a \in X$, then the set $S_{X,R} (a) = \{x : x \in X ,\, x \lneq a \}$ is an initial $R$-segment determined by $a$.
Let $R$ be a well-ordering on set $X$. Show that the set of $R$-segments of $X$ is well-ordered by the inclusion relation.
My proof (is it correct?):
Let $A$ be the set of proper $R$-segments of $X$. Consider a non-empty subset $B$ of $A$.
Let $C = \{x: S_{X,R}(x) \in B\}$. Since every proper $R$-segment is an initial $R$-segment determined by some element, $C$ is non-empty. Since $R$ is a well-ordering, $$(\exists x_0) ((x_0 \in C ) \wedge (x \in C \implies x_0\leq x )).$$
Thus, given any $S_{X,R}(x)\in B$, we have $x \in C$ and $x_0 \leq x$.
Given any $x' \in S_{X,R}(x_0)$, we conclude that: \begin{align*} &x' \lneq x_0 \leq x \\ \therefore & x' \in S_{X,R} (x) \\ \therefore & S_{X,R}(x_0) \subseteq S_{X,R}(x) \end{align*}
|
A duality is a sort of equivalence between two objects or theories under a certain condition. Very useful in QFT and string theory.
List of some dualities in physics:
Involves the reciprocal of the coupling constant. Theory A is said to be S-dual to Theory B if Theory A with coupling constant $g$ is equivalent to Theory B with coupling constant $\frac1g$. In string theory, this is the same as negating the dilaton field, since $g=\langle e^{\phi}\rangle$.
Involves the reciprocal of the compactification radius. Theory A is said to be T-dual to Theory B if Theory A with compactification radius $R$ is equivalent to Theory B with compactification radius $\frac{\alpha'}{R}$, which is simply $\frac1R$ if we choose to use natural units where we let $\ell_s=1$.
Relates a string theory or quantum gravitational theory in a certain AdS spacetime to a conformal field theory on the conformal boundary of that AdS spacetime. Prominent examples include BFSS Matrix theory and Matrix string theories (IIA and HE).
Self-explanatory: dualities which are like electromagnetic (EM) duality. In EM, field strengths can be described by the electric field part, whereas the dual field strengths can be described by the magnetic field part. Useful even in string theory, where $Dp$-branes are EM-dual to $D(6-p)$-branes.
|
Learning Objectives
Relate the work done during a time interval to the power delivered.
Find the power expended by a force acting on a moving body.
The concept of work involves force and displacement; the work-energy theorem relates the net work done on a body to the difference in its kinetic energy, calculated between two points on its trajectory. None of these quantities or relations involves time explicitly, yet we know that the time available to accomplish a particular amount of work is frequently just as important to us as the amount itself. In the chapter-opening figure, several sprinters may have achieved the same velocity at the finish, and therefore did the same amount of work, but the winner of the race did it in the least amount of time.
We express the relation between work done and the time interval involved in doing it by introducing the concept of power. Since work can vary as a function of time, we first define average power as the work done during a time interval, divided by the interval,
$$P_{ave} = \frac{\Delta W}{\Delta t} \ldotp \label{7.10}$$
Then, we can define instantaneous power (frequently referred to as just plain power).
Definition: Power
Power is defined as the rate of doing work, or the limit of the average power for time intervals approaching zero,
$$P = \frac{dW}{dt} \ldotp \label{7.11}$$
If the power is constant over a time interval, the average power for that interval equals the instantaneous power, and the work done by the agent supplying the power is $W = P \Delta t$. If the power during an interval varies with time, then the work done is the time integral of the power,
$$W = \int Pdt \ldotp$$
The work-energy theorem relates how work can be transformed into kinetic energy. Since there are other forms of energy as well, as we discuss in the next chapter, we can also define power as the rate of transfer of energy. Work and energy are measured in units of joules, so power is measured in units of joules per second, which has been given the SI name watts, abbreviation W: 1 J/s = 1 W. Another common unit for expressing the power capability of everyday devices is horsepower: 1 hp = 746 W.
Example \(\PageIndex{1}\): Pull-Up Power
An 80-kg army trainee does 10 pull-ups in 10 s (Figure 7.14). How much average power do the trainee's muscles supply moving his body? (Hint: Make reasonable estimates for any quantities needed.)
Strategy
The work done against gravity, going up or down a distance $\Delta y$, is $mg \Delta y$. (If you lift and lower yourself at constant speed, the force you exert cancels gravity over the whole pull-up cycle.) Thus, the work done by the trainee's muscles (moving, but not accelerating, his body) for a complete repetition (up and down) is $2mg \Delta y$. Let's assume that $\Delta y$ = 2 ft ≈ 60 cm. Also, assume that the arms comprise 10% of the body mass and are not included in the moving mass. With these assumptions, we can calculate the work done for 10 pull-ups and divide by 10 s to get the average power.
Solution
The result we get, applying our assumptions, is
$$P_{ave} = \frac{10 \times 2(0.9 \times 80\; kg)(9.8\; m/s^{2})(0.6\; m)}{10\; s} = 850\; W \ldotp$$
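For readers who want to reproduce the arithmetic, here is a short Python check using the same assumptions as the Strategy (90% of an 80-kg body moving, $\Delta y$ = 0.6 m, work $2mg\Delta y$ per repetition):

m_body, g, dy = 80.0, 9.8, 0.6   # kg, m/s^2, m
reps, t = 10, 10.0               # repetitions, seconds
P_ave = reps * 2 * (0.9 * m_body) * g * dy / t
print(round(P_ave))              # 847, i.e. about 850 W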
Significance
This is typical for power expenditure in strenuous exercise; in everyday units, it’s somewhat more than one horsepower (1 hp = 746 W).
Exercise \(\PageIndex{1}\)
Estimate the power expended by a weightlifter raising a 150-kg barbell 2 m in 3 s.
Answer
Assuming the barbell is raised at roughly constant speed, $P_{ave} = \frac{mg \Delta y}{\Delta t} = \frac{(150\; kg)(9.8\; m/s^{2})(2\; m)}{3\; s} \approx 980\; W$, or about 1.3 hp.
The power involved in moving a body can also be expressed in terms of the forces acting on it. If a force \(\vec{F}\) acts on a body that is displaced d\(\vec{r}\) in a time dt, the power expended by the force is
$$P = \frac{dW}{dt} = \frac{\vec{F}\; \cdotp d \vec{r}}{dt} = \vec{F}\; \cdotp \left(\dfrac{d \vec{r}}{dt}\right) = \vec{F}\; \cdotp \vec{v}, \label{7.12}$$
where \(\vec{v}\) is the velocity of the body. The fact that the limits implied by the derivatives exist, for the motion of a real body, justifies the rearrangement of the infinitesimals.
Example \(\PageIndex{2}\): Automotive Power Driving Uphill
How much power must an automobile engine expend to move a 1200-kg car up a 15% grade at 90 km/h (Figure 7.15)? Assume that 25% of this power is dissipated overcoming air resistance and friction.
Strategy
At constant velocity, there is no change in kinetic energy, so the net work done to move the car is zero. Therefore the power supplied by the engine to move the car equals the power expended against gravity and air resistance. By assumption, 75% of the power is supplied against gravity, which equals m\(\vec{g}\; \cdotp \vec{v}\) = mgv sin \(\theta\), where \(\theta\) is the angle of the incline. A 15% grade means tan \(\theta\) = 0.15. This reasoning allows us to solve for the power required.
Solution
Carrying out the suggested steps, we find
$$0.75 P = mgv \sin(\tan^{−1} 0.15),$$
or
$$P = \frac{(1200\; kg)(9.8\; m/s^{2})\left(\frac{90}{3.6}\; m/s\right) \sin (8.53^{o})}{0.75} = 58\; kW,$$
or about 78 hp. (You should supply the steps used to convert units.)
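A short Python check of this result, including the unit conversions the text asks you to supply (km/h to m/s, and grade to incline angle):

import math

m, g = 1200.0, 9.8            # kg, m/s^2
v = 90 / 3.6                  # 90 km/h = 25 m/s
theta = math.atan(0.15)       # 15% grade means tan(theta) = 0.15
P = m * g * v * math.sin(theta) / 0.75
print(f"{P/1000:.0f} kW, {P/746:.0f} hp")   # 58 kW, 78 hp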
Significance
This is a reasonable amount of power for the engine of a small to mid-size car to supply (1 hp = 0.746 kW). Note that this is only the power expended to move the car. Much of the engine’s power goes elsewhere, for example, into waste heat. That’s why cars need radiators. Any remaining power could be used for acceleration, or to operate the car’s accessories.
Contributors
Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
|
I recently began browsing the 5e PHB when I noticed that there was no distance per round when falling under the Falling category. Is there a set fall speed and if so, what is it?
[This answer superseded by the release of Xanathar's Guide to Everything, Nov 2017, as detailed in this answer.] The rules have no explicit guidance on falling kinematics. Mostly.
Free-falling motion isn't tackled in the rules. Back to that in a moment.
Feather Fall allows one to fall at 60 ft. per round (6 sec.), or at a speed of 10 fps, without suffering damage. Free fall, which is injurious, should be faster than that. A little high-school physics will tell us that a body falling freely (assuming $g = 32\ \mathrm{ft/s^2}$) for 10 ft. will attain a final speed of ~25 fps. So this all makes sense: 10 fps = no damage, 25 fps = 1d6 damage.
To me this means it's not inherently unreasonable to use the simple classical physics in this situation, assuming acceleration due to gravity similar to that experienced at sea level on Earth and ignoring air resistance at low speeds. Distance fallen:
starting from rest: \$ d_{\text{1 round}} = 576\text{ ft} \$
starting from rest: \$ d_{n\text{ rounds}} = 576 \times n^2\text{ ft} \$
Falling speed: your average velocity during the fall would be \$\sqrt{16d}\$, in feet per second. (Your final velocity is twice that.)
For those who really want a refresher on simple kinematics, assuming uniform acceleration and starting velocities of zero:
\$ \text{distance traveled} = \frac{1}{2} \times \text{acceleration} \times \text{time}^2 \$ \$ \text{final velocity} = \sqrt{2 \times \text{acceleration} \times \text{distance traveled}} \$ \$ \text{average velocity} = \dfrac{\text{final velocity}}{2} \$ \$ \text{time of fall} = \sqrt{\dfrac{2 \times \text{distance traveled}}{\text{acceleration due to gravity}}} \$
In non-SI units the acceleration due to gravity is approximately 32 feet per second squared.
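A small Python sketch of these kinematics, for anyone who wants to tabulate falls at the table (an illustration only, with the same assumptions: g = 32 ft/s², starting from rest, no air resistance):

import math

G_FT = 32.0  # ft/s^2

def fall_distance(t):
    """Feet fallen from rest after t seconds."""
    return 0.5 * G_FT * t ** 2

def final_speed(d):
    """Speed in ft/s after falling d feet from rest."""
    return math.sqrt(2 * G_FT * d)

print(fall_distance(6))   # 576.0 ft in one 6-second round
print(final_speed(10))    # about 25.3 ft/s after a 10 ft drop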
2. (Up to) 500 feet per round, by rule. Xanathar's Guide to Everything p. 77 gives this optional rule for the rate of falling (particularly for long falls):
When you fall from a great height you instantly descend up to 500 feet. If you're still falling on your next turn you descend up to 500 feet at the end of that turn. This process continues until the fall ends.
The 500 ft rule (c. 150 m) is designed to simplify things. I also don't like the use of the word 'instantly' here. I think a more accurate representation, especially for sentient entities, would be 'before one can take an action', explaining why entities who can negate falling by taking an action, such as casting levitate or fly, cannot stop themselves in the first round of action. Why? Because they are surprised at suddenly falling and hence cannot take an action. But it seems plausible that other entities might be able to act before one has fallen 500 ft.
I'm not suggesting that new rules applying the laws of physics (in our world) or dissecting a 6-second round into sub-components are required or even desired.
Rather, I would remind everyone of the first rule of D&D: the GM is always right. The rules are meant to be guidelines, not constraints. 20d6 will kill a lot of characters, and for the sake of the story the GM may not want a character to die. On the other hand, 20d6 will not kill a lot of characters, meaning that a fall of tens of thousands of feet will not kill them, and the GM may decide that that is patently absurd and have death ensue.
The angle here is that the GM and players are telling a story, not playing a video game.
|
I'm reading the article Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information (Candes, Romberg and Tao, 2004).
In this article they are talking about recovering the function $f$, whose Fourier coefficients are known on some domain $\Omega$, by solving the following optimization problems:
$$ \min \|g\|_{TV} \quad \text{s.t.} \quad \hat{g}(\omega)=\hat{f}(\omega),\ \omega \in \Omega$$
and
$$ \min \|g\|_{L_1} \quad \text{s.t.} \quad \hat{g}(\omega)=\hat{f}(\omega),\ \omega \in \Omega$$
Can someone please give me a reference that suggests how to actually solve these optimization problems (that combines both $g$ and $\hat{g}$)?
A relevant R package would also be nice.
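Not a reference, but for prototyping: the discrete analogue of the $\ell_1$ problem can be written directly in a convex-modeling tool. A minimal Python/CVXPY sketch (all sizes and names here are illustrative, and it assumes a CVXPY version with complex-expression support):

import numpy as np
import cvxpy as cp

n = 64
rng = np.random.default_rng(0)
f = np.zeros(n)
f[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)  # sparse signal

F = np.fft.fft(np.eye(n)) / np.sqrt(n)     # unitary DFT matrix
omega = rng.choice(n, 30, replace=False)   # observed frequency set Omega
f_hat = F[omega] @ f

g = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(g)), [F[omega] @ g == f_hat])
prob.solve()
print(np.linalg.norm(g.value - f))         # near zero if recovery succeeds

The TV version would replace cp.norm1(g) with a total-variation term, e.g. cp.norm1(cp.diff(g)) in one dimension.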
|
Mathematics Colloquium
All colloquia are on Fridays at 4:00 pm in Van Vleck B239, unless otherwise indicated.
Spring 2018
| Date | Speaker | Title | Host(s) |
| January 29 (Monday) | Li Chao (Columbia) | Elliptic curves and Goldfeld's conjecture | Jordan Ellenberg |
| February 2 (Room: 911) | Thomas Fai (Harvard) | The Lubricated Immersed Boundary Method | Spagnolie, Smith |
| February 5 (Monday, Room: 911) | Alex Lubotzky (Hebrew University) | High dimensional expanders: From Ramanujan graphs to Ramanujan complexes | Ellenberg, Gurevitch |
| February 6 (Tuesday 2 pm, Room 911) | Alex Lubotzky (Hebrew University) | Groups' approximation, stability and high dimensional expanders | Ellenberg, Gurevitch |
| February 9 | Wes Pegden (CMU) | The fractal nature of the Abelian Sandpile | Roch |
| March 2 | Aaron Bertram (University of Utah) | Stability in Algebraic Geometry | Caldararu |
| March 16 (Room: 911) | Anne Gelb (Dartmouth) | Reducing the effects of bad data measurements using variance based weighted joint sparsity | WIMAW |
| April 5 (Thursday) | John Baez (UC Riverside) | Monoidal categories of networks | Craciun |
| April 6 | Edray Goins (Purdue) | Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups | Melanie |
| April 13 | Jill Pipher (Brown) | TBA | WIMAW |
| April 16 (Monday) | Christine Berkesch Zamaere (University of Minnesota) | TBA | Erman, Sam |
| April 25 (Wednesday) | Hitoshi Ishii (Waseda University) | Wasow lecture: TBA | Tran |
| May 4 | Henry Cohn (Microsoft Research and MIT) | TBA | Ellenberg |
Spring Abstracts
January 29 Li Chao (Columbia)
Title: Elliptic curves and Goldfeld's conjecture
Abstract: An elliptic curve is a plane curve defined by a cubic equation. Determining whether such an equation has infinitely many rational solutions has been a central problem in number theory for centuries, which lead to the celebrated conjecture of Birch and Swinnerton-Dyer. Within a family of elliptic curves (such as the Mordell curve family y^2=x^3-d), a conjecture of Goldfeld further predicts that there should be infinitely many rational solutions exactly half of the time. We will start with a history of this problem, discuss our recent work (with D. Kriz) towards Goldfeld's conjecture and illustrate the key ideas and ingredients behind these new progresses.
February 2 Thomas Fai (Harvard)
Title: The Lubricated Immersed Boundary Method
Abstract: Many real-world examples of fluid-structure interaction, including the transit of red blood cells through the narrow slits in the spleen, involve the near-contact of elastic structures separated by thin layers of fluid. The separation of length scales between these fine lubrication layers and the larger elastic objects poses significant computational challenges. Motivated by the challenge of resolving such multiscale problems, we introduce an immersed boundary method that uses elements of lubrication theory to resolve thin fluid layers between immersed boundaries. We apply this method to two-dimensional flows of increasing complexity, including eccentric rotating cylinders and elastic vesicles near walls in shear flow, to show its increased accuracy compared to the classical immersed boundary method. We present preliminary simulation results of cell suspensions, a problem in which near-contact occurs at multiple levels, such as cell-wall, cell-cell, and intracellular interactions, to highlight the importance of resolving thin fluid layers in order to obtain the correct overall dynamics.
February 5 Alex Lubotzky (Hebrew University)
Title: High dimensional expanders: From Ramanujan graphs to Ramanujan complexes
Abstract:
Expander graphs in general, and Ramanujan graphs in particular, have played a major role in computer science in the last 5 decades and, more recently, also in pure math. The first explicit construction of bounded-degree expanding graphs was given by Margulis in the early '70s. In the mid-'80s Margulis and Lubotzky-Phillips-Sarnak provided Ramanujan graphs, which are optimal such expanders.
In recent years a high dimensional theory of expanders has been emerging. A notion of topological expanders was defined by Gromov in 2010, who proved that the complete d-dimensional simplicial complexes are such. He raised the basic question of the existence of such bounded degree complexes of dimension d>1.
This question was answered recently in the affirmative (by T. Kaufman, D. Kazhdan and A. Lubotzky for d=2, and by S. Evra and T. Kaufman for general d) by showing that the d-skeleton of (d+1)-dimensional Ramanujan complexes provides such topological expanders. We will describe these developments and the general area of high dimensional expanders.
February 6 Alex Lubotzky (Hebrew University)
Title: Groups' approximation, stability and high dimensional expanders
Abstract:
Several well-known open questions, such as "are all groups sofic or hyperlinear?", have a common form: can all groups be approximated by asymptotic homomorphisms into the symmetric groups Sym(n) (in the sofic case) or the unitary groups U(n) (in the hyperlinear case)? In the case of U(n), the question can be asked with respect to different metrics and norms. We answer, for the first time, one of these versions, showing that there exist finitely presented groups which are not approximated by U(n) with respect to the Frobenius (=L_2) norm.
The strategy is via the notion of "stability": a higher-dimensional cohomology-vanishing phenomenon is proven to imply stability, and using high dimensional expanders, it is shown that some non-residually-finite groups (central extensions of some lattices in p-adic Lie groups) are Frobenius stable and hence cannot be Frobenius approximated.
All notions will be explained. Joint work with M. De Chiffre, L. Glebsky and A. Thom.
February 9 Wes Pegden (CMU)
Title: The fractal nature of the Abelian Sandpile
Abstract: The Abelian Sandpile is a simple diffusion process on the integer lattice, in which configurations of chips disperse according to a simple rule: when a vertex has at least 4 chips, it can distribute one chip to each neighbor.
Introduced in the statistical physics community in the 1980s, the Abelian sandpile exhibits striking fractal behavior which long resisted rigorous mathematical analysis (or even a plausible explanation). We now have a relatively robust mathematical understanding of this fractal nature of the sandpile, which involves surprising connections between integer superharmonic functions on the lattice, discrete tilings of the plane, and Apollonian circle packings. In this talk, we will survey our work in this area, and discuss avenues of current and future research.
March 2 Aaron Bertram (Utah)
Title: Stability in Algebraic Geometry
Abstract: Stability was originally introduced in algebraic geometry in the context of finding a projective quotient space for the action of an algebraic group on a projective manifold. This, in turn, led in the 1960s to a notion of slope-stability for vector bundles on a Riemann surface, which was an important tool in the classification of vector bundles. In the 1990s, mirror symmetry considerations led Michael Douglas to notions of stability for "D-branes" (on a higher-dimensional manifold) that corresponded to no previously known mathematical definition. We now understand each of these notions of stability as a distinct point of a complex "stability manifold" that is an important invariant of the (derived) category of complexes of vector bundles of a projective manifold. In this talk I want to give some examples to illustrate the various stabilities, and also to describe some current work in the area.
March 16 Anne Gelb (Dartmouth)
Title: Reducing the effects of bad data measurements using variance based weighted joint sparsity
Abstract: We introduce the variance based joint sparsity (VBJS) method for sparse signal recovery and image reconstruction from multiple measurement vectors. Joint sparsity techniques employing $\ell_{2,1}$ minimization are typically used, but the algorithm is computationally intensive and requires fine tuning of parameters. The VBJS method uses a weighted $\ell_1$ joint sparsity algorithm, where the weights depend on the pixel-wise variance. The VBJS method is accurate, robust, cost efficient and also reduces the effects of false data.
April 5 John Baez (UC Riverside)
Title: Monoidal categories of networks
Abstract: Nature and the world of human technology are full of networks. People like to draw diagrams of networks: flow charts, electrical circuit diagrams, chemical reaction networks, signal-flow graphs, Bayesian networks, food webs, Feynman diagrams and the like. Far from mere informal tools, many of these diagrammatic languages fit into a rigorous framework: category theory. I will explain a bit of how this works and discuss some applications.
April 6 Edray Goins (Purdue)
Title: Toroidal Belyĭ Pairs, Toroidal Graphs, and their Monodromy Groups
Abstract: A Belyĭ map [math] \beta: \mathbb P^1(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] is a rational function with at most three critical values; we may assume these values are [math] \{ 0, \, 1, \, \infty \}. [/math] A Dessin d'Enfant is a planar bipartite graph obtained by considering the preimage of a path between two of these critical values, usually taken to be the line segment from 0 to 1. Such graphs can be drawn on the sphere by composing with stereographic projection: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq \mathbb P^1(\mathbb C) \simeq S^2(\mathbb R). [/math] Replacing [math] \mathbb P^1 [/math] with an elliptic curve [math]E [/math], there is a similar definition of a Belyĭ map [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C). [/math] Since [math] E(\mathbb C) \simeq \mathbb T^2(\mathbb R) [/math] is a torus, we call [math] (E, \beta) [/math] a toroidal Belyĭ pair. The corresponding Dessin d'Enfant can be drawn on the torus by composing with an elliptic logarithm: [math] \beta^{-1} \bigl( [0,1] \bigr) \subseteq E(\mathbb C) \simeq \mathbb T^2(\mathbb R). [/math]
This project seeks to create a database of such Belyĭ pairs, their corresponding Dessins d'Enfant, and their monodromy groups. For each positive integer [math] N [/math], there are only finitely many toroidal Belyĭ pairs [math] (E, \beta) [/math] with [math] \deg \, \beta = N. [/math] Using the Hurwitz Genus formula, we can begin this database by considering all possible degree sequences [math] \mathcal D [/math] on the ramification indices as multisets on three partitions of N. For each degree sequence, we compute all possible monodromy groups [math] G = \text{im} \, \bigl[ \pi_1 \bigl( \mathbb P^1(\mathbb C) - \{ 0, \, 1, \, \infty \} \bigr) \to S_N \bigr]; [/math] they are the "Galois closure" of the group of automorphisms of the graph. Finally, for each possible monodromy group, we compute explicit formulas for Belyĭ maps [math] \beta: E(\mathbb C) \to \mathbb P^1(\mathbb C) [/math] associated to some elliptic curve [math] E: \ y^2 = x^3 + A \, x + B. [/math] We will discuss some of the challenges of determining the structure of these groups, and present visualizations of group actions on the torus.
This work is part of PRiME (Purdue Research in Mathematics Experience) with Chineze Christopher, Robert Dicks, Gina Ferolito, Joseph Sauder, and Danika Van Niel with assistance by Edray Goins and Abhishek Parab.
|
Terms sourced from: http://iupac.org/publications/pac/65/4/0819/
"Nomenclature for chromatography (IUPAC Recommendations 1993)", Ettre, L.S.,
Pure and Applied Chemistry 1993, 65(4), 819
adsorption chromatography, affinity chromatography, anion exchange, anion exchanger, anticircular elution, ascending elution
capillary column, cation exchange, cation exchanger, chromatogram, chromatograph (noun), chromatograph (verb), chromatographic detector, chromatography, column chromatography, column, column volume \(V_{\text{c}}\), concentration-sensitive detector, counter-ions, circular development, circular elution
dead-volume, descending elution/development, differential detector, displacement chromatography, distribution constant
effluent, elute, elution chromatography, exclusion chromatography, external standard, extra-column volume
fixed ions, flow rate, flow resistance parameter \(\mathit{\Phi}\), fraction collector, frontal chromatography, fronting
immobilized phase, impregnation, integral detector, internal standard, interparticle porosity \(\varepsilon\), ion exchanger, ion-exchange chromatography, ion-exchange isotherm, ionogenic groups, isocratic analysis, isothermal chromatography, interparticle volume of the column \(V_{0}\), ion exchange
mass (weight) of the stationary phase \(W_{\text{s}}\), mass-flow sensitive detector, mobile phase, mobile-phase velocity \(u\), modified active solid, marker
packed column, packing, partition chromatography, peak elution volume (time) \(\bar{V}_{\text{R}}\), \(\bar{t}_{\text{R}}\), peak, peak resolution \(R_{\text{s}}\), peak widths, pellicular packing, permselectivity, phase ratio \(\beta\), planar chromatography, plate height \(H_{\text{eff}}\), plate number \(N\), porous-layer open-tubular (PLOT) column, post-column derivatization, programmed-flow chromatography, programmed-pressure chromatography, programmed-temperature chromatography, pyrolysis-gas chromatography
radial elution (radial development) or circular elution (circular development), reaction chromatography, redox ion exchangers, reduced mobile phase velocity \(\nu\), relative retardation \(R_{\text{rel}}\), relative retention \(r\), retardation factor \(R\), retardation factor \(R_{\text{F}}\), retention factor \(k\), retention index \(I\), retention time, retention volumes, reversed-phase chromatography, RF value \(R_{\text{F}}\), RM value \(R_{\text{M}}\)
salt form of an ion exchanger, selectivity coefficient \(k_{\text{A/B}}\), selectivity factor, separation factor \(\alpha\), separation number \(\text{SN}\), separation temperature, solid support, sorption, sorption isotherm, specific detector, specific permeability, spot, stationary phase, stationary phase volume \(V_{\text{s}}\), stepwise elution, supercritical fluid chromatography, support-coated open-tubular (SCOT) column
tailing, total retention volume \(V_{\text{R}}\), \(t_{\text{R}}\), totally porous packing, two-dimensional chromatography
|
Keywords: Interval arithmetic, generalized coupled matrix equations, AE-solution set
Abstract
In this work, the interval generalized coupled matrix equations \begin{equation*} \sum_{j=1}^{p}{{\bf{A}}_{ij}X_{j}}+\sum_{k=1}^{q}{Y_{k}{\bf{B}}_{ik}}={\bf{C}}_{i}, \qquad i=1,\ldots,p+q, \end{equation*} are studied in which ${\bf{A}}_{ij}$, ${\bf{B}}_{ik}$ and ${\bf{C}}_{i}$ are known real interval matrices, while $X_{j}$ and $Y_{k}$ are the unknown matrices for $j=1,\ldots,p$, $k=1,\ldots,q$ and $i=1,\ldots,p+q$. This paper discusses the so-called AE-solution sets for this system. In these types of solution sets, the elements of the involved interval matrices are quantified and all occurrences of the universal quantifier $\forall$ (if any) precede the occurrences of the existential quantifier $\exists$. The AE-solution sets are characterized and some sufficient conditions under which these types of solution sets are bounded are given. Also some approaches are proposed which include a numerical technique and an algebraic approach for enclosing some types of the AE-solution sets.
Recommended Citation
Dehghani-Madiseh, Marzieh (2018), "On the Interval Generalized Coupled Matrix Equations",
Electronic Journal of Linear Algebra, Volume 34, pp. 695-717. DOI: https://doi.org/10.13001/1081-3810.3824
|
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector
(Elsevier, 2014-11-10)
This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector
(Elsevier, 2014-11-10)
Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
|
I have a separable function $f[x,y]$, and I would like to find two functions $g[x]$ and $h[y]$ with
$f[x,y]=g[x] h[y]$
where $g[x]$ doesn't depend on $y$ and $h[y]$ doesn't depend on $x$. Ideally, $g$ and $h$ should have the same magnitude, to prevent overflows/underflows. I have a hackish approach that works, but it involves a lot of manual labor.
Background: $f[x,y]$ is a filter kernel I want to apply to an image, and using two separate 1d-filters is much more efficient.
My first approach was to start with $g[x]=f[x,0]$. But that doesn't work for e.g. $f[x,y]=\frac{e^{-\frac{x^2+y^2}{2 \sigma ^2}} x y}{2 \pi \sigma ^6}$
Currently, I have a function that "removes" $x$ or $y$ from $f[x,y]$ using pattern matching:
removeSymbol[f_, s_] := f //. {s^_ + a_ -> a, s^_.*a_ -> a}
but that means I have to manually adjust this pattern for different f's.
Is there a more elegant way to do this? $f[x,y]$ is usually a derivative of a gaussian, e.g.
gaussian[x_, y_] := 1/(2 π σ^2) Exp[-((x^2 + y^2)/(2 σ^2))]
f[x_, y_] := D[gaussian[x, y], x, y]
|
It simply is probability; you can call it "predicted", as suggested by others.
I see from the discussion that you disagree with such a name, so let me prove to you that this is a probability.
First, recall that if $X$ is a Bernoulli-distributed random variable parametrized by $p$, then $E(X) = p$. Second, take an intercept-only logistic regression model; such a model will calculate the mean of your predicted $Y$ variable. This would be the same as if you calculated it simply by taking $\hat y = (1/N) \sum_{i=1}^N y_i$. This mean converges to the expected value as $N\rightarrow\infty$, i.e. to $E(Y)= p$. In fact, the sample mean is the maximum likelihood estimator of $p$ for a Bernoulli-distributed random variable. In the case of a more complicated logistic regression model, you predict conditional means, i.e. conditional probabilities.
Check also Why isn't Logistic Regression called Logistic Classification?
If this still does not convince you, below you can see a simple R example showing exactly that case:
set.seed(123)
p1 <- 0.75
Y1 <- sample(0:1, 500, replace = TRUE, prob = c(1-p1, p1))
fit1 <- glm(Y1~1, family = "binomial")
p1
## [1] 0.75
fitted(fit1)[1] # only the first one since all predictions are the same
## 1
## 0.762
mean(Y1)
## [1] 0.762
q <- 0.3
p2 <- c(0.4, 0.7)
X <- sample(0:1, 500, replace = TRUE, prob = c(1-q, q))
Y2 <- numeric(500)
Y2[X==0] <- sample(0:1, sum(X==0), replace = TRUE, prob = c(1-p2[1], p2[1]))
Y2[X==1] <- sample(0:1, sum(X==1), replace = TRUE, prob = c(1-p2[2], p2[2]))
fit2 <- glm(Y2~X, family = "binomial")
# predicted probabilities vs the true ones
table( ifelse(X==0, p2[1], p2[2]), round(fitted(fit2), 3))
##
## 0.359 0.658
## 0.4 348 0
## 0.7 0 152
# empirical conditional probabilities (conditional means)
tapply( Y2, X, mean )
## 0 1
## 0.3591954 0.6578947
|
Dataset Open Access
Radice, David; Bernuzzi, Sebastiano; Ott, Christian D.
We distribute complete gravitational-wave signals in the Advanced LIGO band (10 Hz - 8192 Hz) of the inspiral and merger of two neutron stars. These waveforms have been constructed by hybridizing numerical-relativity data obtained with the WhiskyTHC code [1] with tidal effective-one-body waveforms [2,3]. More details on the procedure used to generate these waveforms are given in [4].
The waveforms are distributed as HDF5 files containing the amplitude and phase of the -2 spin-weighted spherical harmonics multipoles of the strain:
\(( h_+ - \mathrm{i} h_\times )_{l,m} = \frac{A_{l,m}}{D_{\rm cm}} \exp(-\mathrm{i} \phi_{l,m} )\)
where \(D_{\rm cm}\) is the distance in cm from the source.
The data files include a machine-readable "/metadata" group.
We store amplitude and phase for multipole modes up to l=4 as time series sampled at 16384 Hz.
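As an illustration of that layout, here is a minimal Python/h5py sketch of reconstructing the (l,m) = (2,2) strain from one of these files. The dataset names and the source distance are hypothetical placeholders; only the sampling rate and the amplitude/phase convention come from the description above, so inspect the actual file for its group layout:

import numpy as np
import h5py

D_cm = 3.086e24  # assumed source distance: 1 Mpc in cm (illustrative)

with h5py.File("waveform.h5", "r") as f:       # hypothetical file name
    A = f["amp_l2_m2"][:]        # amplitude A_{2,2}(t)  (hypothetical name)
    phi = f["phase_l2_m2"][:]    # phase phi_{2,2}(t)    (hypothetical name)

h = (A / D_cm) * np.exp(-1j * phi)   # (h_+ - i h_x)_{2,2}
hp, hx = h.real, -h.imag
t = np.arange(len(A)) / 16384.0      # time axis, sampled at 16384 Hz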
We make these waveforms freely available in the hope that they will be useful. We kindly ask you to cite [3] and [4] in any publication resulting from the use of these waveforms.
---
[1] http://www.tapir.caltech.edu/~david_e/whiskythc.html
[2] https://eob.ihes.fr/
[3] S. Bernuzzi, A. Nagar, T. Dietrich, T. Damour; Modeling the Dynamics of Tidally Interacting Binary Neutron Stars up to the Merger; Phys. Rev. Lett. 114 (2015) 16, 161103.
[4] D. Radice, S. Bernuzzi, C. D. Ott; The One-Armed Spiral Instability in Neutron Star Mergers and its Detectability in Gravitational Waves; arXiv:1603.05726.
|
How to represent in first order logic the expression:
"there are infinitely many"
To be honest I'm confused and not even sure whether you can represent them in first order logic.
The existing answers provide examples of contexts where "there are infinitely many" can be expressed. However, there is an important sense in which "there are infinitely many" cannot be expressed in a first-order way without some additional context restrictions:
Specifically, fix your favorite first-order language $\Sigma$. Then it's a consequence of the compactness theorem that there is no $\Sigma$-sentence $\varphi$ which is true in exactly the infinite $\Sigma$-structures:
If there were, then $\neg\varphi$ would hold in exactly the finite $\Sigma$-structures.
Then the $\Sigma$-theory $\{\exists x_1...x_n(\bigwedge_{1\le i<j\le n}x_i\not=x_j): n\in\mathbb{N}\}\cup\{\neg\varphi\}$ would be consistent (since any finite fragment holds of any sufficiently large $\Sigma$-structure), so by compactness has a (presumably non-unique) model $\mathcal{M}$.
But $\mathcal{M}$ must be infinite, since any structure satisfying $\exists x_1...x_n(\bigwedge_{1\le i<j\le n}x_i\not=x_j)$ has at least $n$ many elements. By assumption on $\varphi$, this means $\mathcal{M}\not\models\neg\varphi$, and we have a contradiction.
Phrased most relevantly to your question, what we've shown is that even the sentence "There are infinitely many $x$ satisfying $x=x$" is not expressible in first-order logic.
On the other hand, when we restrict attention to specific structures we often have a better situation:
The other two answers fall into this category: they exhibit structures over which "there are infinitely many" is a first-order definable generalized quantifier.
You'll need to be a little bit more precise about what "there are infinitely many" means.
But for example, if you want to say that there are infinitely many natural numbers, you could rephrase that as "there is no largest natural number", or "for every natural number there exists a larger natural number". Either of these could be expressed in FOL.
So if you wanted to say that there are infinitely many stars in the universe, you might say "there is no furthest star from the Sun": in other words, for every star, there exists a star further from the Sun. This can also be expressed in FOL.
It depends on how you interpret the logic.
For an example on $\{x\mid x\subseteq\mathbb{N}\}$: if a predicate symbol $P(x)$ is interpreted as "the set $x$ has infinitely many numbers", then "there are infinitely many numbers in $x$" can be expressed as $P(x)$ directly.
For another example on $\{x\mid x\subseteq\mathbb{N}\text{ or }x\text{ is a function }\mathbb{N}\rightarrow\mathbb{N}\}$, assume
the predicate symbol $I(f)$ is interpreted as "$f$ is an injection", and
the predicate symbol $R(f,x)$ is interpreted as "$x$ is the range of $f$".
Then "there are infinitely many numbers in $x$" can be expressed as $\exists f( I(f)\wedge R(f, x))$.
|
This asks whether or not differentiating both sides of an equation is allowed (it isn't). However, can you integrate both sides of an equation?
If we have,
$$x^2=x+1$$
Can we apply the integral operator and get,
$${1 \over 3} \cdot x^3 \ \Big|_a^b={1 \over 2} \cdot x^2+x \ \Big|_a^b$$
Haha, just kidding. That's too easy, I actually mean can we take the anti-derivative of both sides?
$${1 \over 3} \cdot x^3+C_1={1 \over 2} \cdot x^2+x+C_2$$ $$\Rightarrow {1 \over 3} \cdot x^3-{1 \over 2} \cdot x^2-x=C$$
Where $C$ is an arbitrary constant. Can doing this ever result in an equation simpler to solve? Perhaps this be used to derive new results? I know that this can be used when the equation is functional, but I'm interested in the cases when it isn't.
Here's one use I came up with,
We have,
$$(1) \quad x^2=x+1$$
$$(2) \quad {1 \over 3} \cdot x^3+C_1={1 \over 2} \cdot x^2+x+C_2$$ $$\Rightarrow {1 \over 3} \cdot x^3-{1 \over 2} \cdot x^2-x=C$$
With $C=-\cfrac{5\cdot \sqrt{5}+7}{12}$. Therefore, a root of $(2)$ is ${{\sqrt{5}+1} \over 2}$. This could be done for an equation of any order where a solution is known. For instance, you could take a quartic equation with known solutions, and then derive a solution to a quintic equation, assuming you integrate and set $C$ to the right value. This would allow you to find specific solutions of quintic equations, which are not generally solvable.
For a concrete example, consider,
$$(3) \quad (x-1) \cdot (x-2) \cdot (x-3) \cdot (x-4)=0$$
Integrating both sides results in,
$$(4) \quad {{x^5} \over 5}-{{5 \cdot x^4} \over 2}+{{35 \cdot x^3} \over 3}-25 \cdot x^2+24 \cdot x=C$$
If we wish to retain the solution $x=1$ we set $C=251/30$ and then we have,
$$(5) \quad {{x^5} \over 5}-{{5 \cdot x^4} \over 2}+{{35 \cdot x^3} \over 3}-25 \cdot x^2+24 \cdot x-{{251} \over {30}}=0$$
Where we actually know one of the solutions!
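A quick symbolic check of both claims with Python/SymPy (verifying the golden-ratio root of $(2)$ and that $x=1$ survives in $(5)$):

import sympy as sp

x = sp.symbols('x')

# (2): with C = -(5*sqrt(5)+7)/12, the golden ratio is a root
phi = (1 + sp.sqrt(5)) / 2
C = -(5 * sp.sqrt(5) + 7) / 12
print(sp.simplify(phi**3 / 3 - phi**2 / 2 - phi - C))   # 0

# (5): x = 1 remains a root after integrating (3) with C = 251/30
p5 = (x**5 / 5 - sp.Rational(5, 2) * x**4 + sp.Rational(35, 3) * x**3
      - 25 * x**2 + 24 * x - sp.Rational(251, 30))
print(p5.subs(x, 1))                                    # 0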
|
I posted the following question on MSE, but I think it should be posted here on MO. Since I don't know how to transfer a post from MSE to MO, I have pasted the question below. Thank you in advance; I look forward to your comments/suggestions.
My question is two fold:
(1) Is it possible to "solve" (iterative convex/non-convex) optimization problems via learning (e.g., regression / classification) with an objective to "solve" (in some sense) the problem in "one-shot"?
(2) If the above answer is affirmative, do you have any preferred methods or papers that you can suggest or refer to?
ADD:
Let's consider the following convex optimization problem, whose solution is obtained iteratively via some solver (e.g., CVX):
\begin{aligned} & \underset{\mathbf{x} \in \mathbb{R}^n}{\text{minimize}} & & \left\| \mathbf{y} - \mathbf{x} \right\|_2^2 \\ & \text{subject to} & & \mathbf{A} \mathbf{x} \leq \mathbf{b}, \end{aligned} where the inequality constraint is element-wise. The matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$, the vector $\mathbf{b} \geq 0 \in \mathbb{R}_{+}^{m}$, and the vector $\mathbf{y} \in \mathbb{R}^{n}$ are given/known. Also, $n > m$, and $n$ can be a very large value.
The question is: can we utilize some "learning" to "predict" the optimal solution (non-iteratively)?
I found some papers, e.g., Link, that attempt to solve the optimization problems utilizing "learning" (e.g., neural networks). Does anyone have any feeling about this? I will try to dig into this more in the future, but would be happy to hear your experience if any.
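For concreteness, here is a minimal Python/CVXPY sketch of the baseline iterative solve; a learned one-shot solver would be trained to map inputs $\mathbf{y}$ to the solutions $\mathbf{x}^\star$ produced this way (all sizes below are illustrative):

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 20, 100                        # n > m, as in the question
A = rng.standard_normal((m, n))
b = np.abs(rng.standard_normal(m))    # b >= 0
y = rng.standard_normal(n)

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - x)), [A @ x <= b])
prob.solve()
x_star = x.value   # one (y, x_star) training pair for a learned solver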
|
Let $h[n]$ be the impulse response of a linear and time-invariant(LTI) system. If the signal $x[n]$ is input to the system, the output signal from the system is given by:$y[n]=\sum_{k=-\infty}^{\infty}x[k]h[n-k]$
This operation is called convolution, and we say that the signal $y[n]$ is the convolution of the signal $x[n]$ and the signal $h[n]$; we denote this by $y[n] = x[n] * h[n]$.
To compute the signal $y[n]$, perform the following steps:
1. Think of $x[n]$ and $h[n]$ as signals $x[k]$ and $h[k]$ respectively, i.e., with the independent variable being $k$ instead of $n$.
2. Flip $h[k]$ about the Y-axis to obtain the signal $h[-k]$.
3. To compute the signal $y[n]$ for a fixed value of $n$, shift the signal $h[-k]$ by $n$ units to the right to obtain the signal $h[n-k]$. When $n$ is negative, this amounts to shifting the signal $h[-k]$ to the left, but mathematically it is equivalent to a shift right by a negative number.
4. Compute $w_n[k] = x[k]h[n-k]$, i.e., $w_n[k]$ is the product of the signals $x[k]$ and $h[n-k]$.
5. Compute $y[n] =\sum_{k}w_n[k] =\sum_{k}x[k]h[n-k]$ by summing the values of $w_n[k]$ over all values of $k$.
This gives the value of the signal $y[n]$ for one value of $n$. Repeat this procedure for every integer value of $n$, i.e., $n \in \{\dots,-3,-2,-1,0,1,2,3,\dots\}$, to obtain the full signal $y[n]$. In practice, it is easiest to start with large negative values of $n$ and increase $n$.
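A direct Python translation of this procedure for finite-length signals (a sketch only; for such signals np.convolve computes the same result):

import numpy as np

def convolve(x, h):
    """y[n] = sum_k x[k] * h[n-k], computed exactly as in the steps above."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):      # h[n-k] defined only on its support
                y[n] += x[k] * h[n - k]
    return y

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, 1.0])
print(convolve(x, h))        # [1. 3. 5. 3.]
print(np.convolve(x, h))     # NumPy agrees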
|
I am stuck on a question in Chapter 11 of Advanced Solid State Physics by Philip Phillips, which asks to do the Cooper instability calculation for triplet pairing.
I attempt to solve the Schrödinger equation
$$\left[-\frac{\hbar^2}{2 m} (\nabla^2_1 + \nabla^2_2)+V(r_1 - r_2)\right] \psi(r_1,r_2) = E \psi(r_1,r_2)$$
with an antisymmetric wavefunction $\psi$, following the steps in the textbook.
First, express the wavefunction in center-of-mass coordinates:
$\psi(r_1,r_2) = \phi(r) e^{i Q \cdot R},$
where $r = r_1 - r_2, Q = k_1 + k_2$ and $R = (r_1 + r_2)/2$.
Expand $\phi(r)$ in a Fourier series
$\phi(r) = \displaystyle\sum_{k} \frac{e^{i k \cdot r}}{\sqrt{V}} \alpha_k$.
Spatial antisymmetry then requires that $\alpha_k = - \alpha_{-k}$, but I don't know how this requirement changes the subsequent calculations.
Any explanation or hint is appreciated.
|
Interaction of an elastic plate with a linearized inviscid incompressible fluid
1.
Department of Mechanics and Mathematics, Kharkov National University, 4 Svobody sq., 61077, Kharkov, Ukraine
Keywords: nonlinear plate, linearized 3D Euler equations, global attractor, well-posedness, fluid-structure interaction.
Mathematics Subject Classification: Primary: 74F10; Secondary: 35B41, 35Q30, 74K20.
Citation: I. D. Chueshov. Interaction of an elastic plate with a linearized inviscid incompressible fluid. Communications on Pure & Applied Analysis, 2014, 13 (5): 1759-1778. doi: 10.3934/cpaa.2014.13.1759
|
Why Time-Domain Zero Stuffing Produces Multiple Frequency-Domain Spectral Images
This blog explains why, in the process of time-domain interpolation (sample rate increase), zero stuffing a time sequence with zero-valued samples produces an increased-length time sequence whose spectrum contains replications of the original time sequence's spectrum.
Background
The traditional way to interpolate (increase the sample rate of) an $x(n)$ time-domain sequence is shown in Figure 1.
Figure 1
The '↑L' operation in Figure 1 means to insert $L-1$ zero-valued samples between each sample in $x(n)$, creating a longer-length $w(n_3)$ sequence. (The sample rate of the $x(n)$ input is $f_s/L$ samples/second.) To the end of that longer sequence we append $L-1$ zero-valued samples. Those two steps are what we call "upsampling." Next, we apply the upsampled $w(n_3)$ sequence to a lowpass filter whose output is the interpolated $y(n_3)$ sequence. We formally refer to interpolation as the two-step process of upsampling followed by lowpass filtering.
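As a quick numerical sanity check (this snippet is my own illustration, not from the original blog), we can zero stuff a two-tone sequence by L = 3 and confirm that its DFT is just the original DFT repeated L times:

import numpy as np

N, L = 8, 3                        # original length and upsampling factor
n = np.arange(N)
x = np.sin(2*np.pi*n/N) + 0.5*np.sin(2*np.pi*2*n/N)   # two sine waves

w = np.zeros(N*L)                  # the zero-stuffed sequence w(n3)
w[::L] = x                         # keep every L-th sample, zeros elsewhere

X = np.fft.fft(x)
W = np.fft.fft(w)
print(np.allclose(W, np.tile(X, L)))   # True: W(m3) = X(m3 mod N)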
An Example
An example of the Figure 1 process is given in Figure 2. In this example the input $x(n)$ time sequence is the sum of two sine waves. The discrete Fourier transform (DFT) of $x(n)$ is $X(m)$. Because the $x(n)$ sequence comprises sine waves, the real parts of $X(m)$ are zero-valued. As such, we'll plot the imaginary parts of the $X(m)$ spectral samples as the Imag[$X(m)$] sequence shown on the right side of Figure 2(a).
Figure 2
Upsampling $x(n)$ by $L = 3$ produces the $w(n_3)$ sequence shown at the left side of Figure 2(b). The imaginary parts of the $W(m_3)$ DFT spectral samples are represented by the Imag[$W(m_3)$] sequence shown on the right side of Figure 2(b). Notice that the Imag[$W(m_3)$] sequence contains replications of the Imag[$X(m)$] spectral samples.
Our Question
The question that occurs to people when they first study the topic of time-domain interpolation (the question answered in this blog) is: "Why does inserting zero-valued samples in $x(n)$ to produce $w(n_3)$ result in a $W(m_3)$ spectrum containing replications of the original $X(m)$ spectral samples?"
An Intuitive Answer
One answer to our question involves recalling how the DFT of several periods of a periodic time signal is a discrete Fourier series (DFS), that is, a DFS containing non-zero-valued spectral samples separated by zero-valued spectral samples. In our Figure 2(b) case we exchange the traditional DFS time and frequency domains: the inverse DFT of several periods of a periodic $W(m_3)$ spectrum results in a $w(n_3)$ time sequence containing non-zero-valued time samples separated by zero-valued time samples.
OK, given that hand-waving intuitive answer, we now present an alternate answer to our question by way of an example.
An Answer By Way of Example
An alternate answer to our question comes from our realization that the two sequences in Figure 2(b) are Fourier transform pairs. That is, we can show that the inverse DFT of the Imag[$W(m_3)$] sequence really does produce the zero-valued samples in the $w(n_3)$ time sequence. For example, let's show why the $w(1)$ and $w(2)$ samples are zero-valued as shown on the right side of Figure 3.
Figure 3
To compute the $w(n_3)$ time samples we perform a 24-point inverse DFT of Imag[$W(m_3)$] using
$$w(n_3) = \frac{1}{24}\sum_{m_3=0}^{23} \mathrm{Imag}[W(m_3)]\,e^{j2\pi n_3m_3/24}\tag{1}$$
To compute Figure 3(b)'s $w(1)$ time sample (the second sample in the sequence), we modify Eq. (1) by setting $n_3 = 1$ as:
$$w(1) = \frac{1}{24}\sum_{m_3=0}^{23} \mathrm{Imag}[W(m_3)]\,e^{j2\pi m_3/24}\tag{2}$$
As such, the real part of $w(1)$, $\mathrm{Real}[w(1)]$, is:
$$\mathrm{Real}[w(1)] = \frac{1}{24}\sum_{m_3=0}^{23} \mathrm{Imag}[W(m_3)]\cos(2\pi m_3/24)\tag{3}$$
Pictorially, the summation in Eq. (3) is the summation of the products of the black square dots times the blue circular dots as shown in Figure 4(a). While not immediately obvious, the sum of those products is equal to zero. We show this zero-valued summation in Figure 4(b) where the black squares that produce individual zero-valued products are omitted for clarity.
Figure 4
The imaginary part of Eq. (2)'s $w(1)$, $\mathrm{Imag}[w(1)]$, is:
$$\mathrm{Imag}[w(1)] = \frac{1}{24}\sum_{m_3=0}^{23} \mathrm{Imag}[W(m_3)]\sin(2\pi m_3/24)\tag{4}$$
Graphically, the summation in Eq. (4) is the summation of the products of the black square dots times the blue circular dots as shown in Figure 5(a). We show Eq. (4)'s zero-valued summation in Figure 5(b) where the zero-valued black squares are omitted for clarity. The numbers on the arrows in Figure 5(b) are the individual products of square and circular sample pairs.
Figure 5
So, we have shown that $w(1)$'s real and imaginary parts are both zero-valued, and now we see why $w(1) = 0$ in Figure 3(b).
Next, as promised, we show that Figure 3(b)'s $w(2)$ time sample is zero-valued. To compute it, we modify Eq. (1) by setting $n_3 = 2$ as:
$$w(2) = \frac{1}{24}\sum_{m_3=0}^{23} \mathrm{Imag}[W(m_3)]\,e^{j2\pi 2m_3/24} \tag{5}$$
$$= \frac{1}{24}\sum_{m_3=0}^{23} \mathrm{Imag}[W(m_3)]\cos(2\pi 2m_3/24) + \frac{j}{24}\sum_{m_3=0}^{23} \mathrm{Imag}[W(m_3)]\sin(2\pi 2m_3/24) \tag{6}$$
Similar to Figure 4, the first summation in Eq. (6) is the summation of the products of the black square dots and the blue circular dots shown in Figure 6(a). The second summation in Eq. (6) is the summation of the dots given in Figure 6(b). Both summations in Eq. (6) are equal to zero, as shown in the Appendix. Thus $w(2)$'s real and imaginary parts are both zero-valued, and now we see why $w(2) = 0$ in Figure 3(b).
Figure 6
If we cared to do so, we could also show that the inverse DFT of Figure 3's Imag[$W(m_3)$] sequence produces the remaining zero-valued "stuffed" samples, $w(4)$, $w(5)$, $w(7)$, $w(8)$, etc., in the $w(n_3)$ sequence. And that would, hopefully, answer this blog's question: "Why does time-domain zero stuffing produce spectral replications?"
Appendix
This Appendix shows why the $w(2)$ time sample in Figure 3(b) is zero-valued.
To compute the $w(2)$ time sample, we modify Eq. (1) by setting $n_3 = 2$ as:
$$w(2) = \frac{1}{24}\sum_{m_3=0}^{23} \mathrm{Imag}[W(m_3)]\,e^{j2\pi 2m_3/24} \tag{A-1}$$
As such, the real part of $w(2)$, $\mathrm{Real}[w(2)]$, is:
$$\mathrm{Real}[w(2)] = \frac{1}{24}\sum_{m_3=0}^{23} \mathrm{Imag}[W(m_3)]\cos(2\pi 2m_3/24) \tag{A-2}$$
Pictorially, the summation in Eq. (A-2) is the summation of the products of the black square dots times the blue circular dots as shown in Figure A1(a). While not immediately obvious, the sum of the products is equal to zero. We show this zero-valued summation in Figure A1(b) where the zero-valued black squares are omitted for clarity.
Figure A1
The imaginary part of Eq. (A-1)'s $w(2)$, $\mathrm{Imag}[w(2)]$, is:
$$\mathrm{Imag}[w(2)] = \frac{1}{24}\sum_{m_3=0}^{23} \mathrm{Imag}[W(m_3)]\sin(2\pi 2m_3/24) \tag{A-3}$$
Graphically, the summation in Eq. (A-3) is the summation of the products of the black square dots and the blue circular dots shown in Figure A2(a). We show Eq. (A-3)'s zero-valued summation in Figure A2(b), where the black squares that produce individual zero-valued products are omitted for clarity. The numbers on the arrows in Figure A2(b) are the individual products of square and circular sample pairs.
Figure A2
That concludes our proof that the Figure 3(b) $w(2)$ time sample's real and imaginary parts are both zero-valued, and thus $w(2) = 0$.
Previous post by Rick Lyons:
Complex Down-Conversion Amplitude Loss
Next post by Rick Lyons:
Handy Online Simulation Tool Models Aliasing With Lowpass and Bandpass Sampling
I was wondering what is the "z transform" of time domain zero stuffing. Could you tell me?
Am I right that the z transform of time domain zero padding is z^-m with m being the number of zeros?
Thank you in advance
پویا پاکاریان
Hi پویا پاکاریان.
I hope that answers your question.
Many thanks Prof. Lyons
Please let me ask my question this way; maybe I can explain myself better. Suppose we have a signal named lowercase $u$ whose z-transform is capital $U$.
Experiment 1: Suppose we pad the signal by placing 5 zeros before it, which pushes the signal 5 steps ahead in the time domain, to obtain lowercase $x = [0\ 0\ 0\ 0\ 0\ u]$. Then we know that the z-transform of lowercase $x$ will be $X = z^{-5}U$, with both $X$ and $U$ capital. Am I right in my explanations above?
Experiment 2: Suppose we stuff our signal $u$ with 3 zeros, i.e. we place 3 zeros after each single data point of it, to build the signal lowercase $y$. Is there a simple way to show the change that occurs in the z-transform? I.e., is there a simple relation (almost as simple as the one we saw in experiment 1 for capital $X$) for capital $Y$ in terms of capital $U$?
The simplest that I can think of is in terms of lowercase $u$, as follows: $Y = \sum_{n=0}^{N-1} u(n)\,z^{-4n}$, with $N$ the length of the signal and the 4 coming from the fact that we stuffed 3 zeros between any two consecutive data points of lowercase $u$ (i.e. $L=3$ and $4=L+1$). The equation above expresses capital $Y$ in terms of lowercase $u$, but I need an equation for capital $Y$ in terms of capital $U$ (akin to what we obtained for capital $X$ in experiment 1). Does any such equation exist? Apologies for the length of my message; and kindest regards
پویا پاکاریان
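For what it's worth, the standard upsampling identity gives exactly the relation being asked for (this is a textbook DSP fact, not taken from the original reply). If $y$ is built by inserting $L$ zeros after each sample of $u$, then

$$Y(z) = \sum_{n=0}^{N-1}u(n)\,z^{-(L+1)n} = U\!\left(z^{L+1}\right),$$

which for the $L = 3$ case above gives $Y(z) = U(z^4)$: substituting $z^{L+1}$ for $z$ in $U(z)=\sum_n u(n)z^{-n}$ reproduces the sum, and on the unit circle it compresses the spectrum, creating the $L+1$ spectral images.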
|
Working with all sorts of data, it sometimes happens that we want to predict the value of a variable that is not numerical. For those cases, a logistic regression is appropriate. It is similar to a linear regression except that it deals with the fact that the dependent variable is categorical.
Here is the formula for the linear regression, where we want to estimate the parameters beta (coefficients) that best fit our data:
\begin{equation} Y_i = \beta_0 + \beta_1 X_i + \epsilon_i \end{equation}
Well, in a logistic regression, we are still estimating the parameters beta, but for:
\begin{equation} y = \begin{cases} 1 & \text{if } \beta_0 + \beta_1 X_i + \epsilon_i > 0 \\ 0 & \text{otherwise} \end{cases} \end{equation}
Here, $y$ is not linearly dependent on $x$. Let's look at an example.
Predicting the sex from the gene expression profiles
When building a model from expression data, I usually try to build a model that can predict the sex. This serves as a great control, since it is a very easy task: looking at just a few genes from chromosome Y is enough to assess the sex.
For my example, I have decided to use The Genotype-Tissue Expression (GTEx) RPKM dataset. The GTEx Portal allows you to look at transcriptomic data and genetic associations across several tissues of healthy deceased donors. The V6p release contains 8555 samples surveying 53 tissues from 544 post-mortem donors. You should have a look at their portal, it's really interesting and user-friendly.
Figure 1. Distribution of the tissue samples in the GTEx cohort [source: http://www.gtexportal.org/home/tissueSummaryPage]
Since GTEx's dataset is quite big (56238 rows by 8555 columns), I have used Python to do the logistic regression using functions from the scikit-learn package. If you have a smaller dataset, the R function glm(formula, data=mydata, family='binomial') is what you are looking for.
So, here we go. The first steps are: 1) loading the RPKM matrix, the sample annotations and the subject annotations; 2) log-transforming the RPKM values (log10(RPKM+0.01)); and 3) filtering the genes to keep the 10,000 most variable ones. I'm keeping a little less than 20% of the genes for the example, but that is an arbitrary decision. A minimal sketch of these steps is shown below.
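Here is a minimal pandas/numpy sketch of these three steps; the file names are hypothetical placeholders, not the actual GTEx file names.

import numpy as np
import pandas as pd

# 1) load the RPKM matrix (genes x samples) and the annotations
rpkm = pd.read_csv('gtex_rpkm.tsv', sep='\t', index_col=0)        # hypothetical name
samples = pd.read_csv('sample_annotations.tsv', sep='\t', index_col=0)
subjects = pd.read_csv('subject_annotations.tsv', sep='\t', index_col=0)

# 2) log-transform the RPKM values
logged = np.log10(rpkm + 0.01)

# 3) keep the 10,000 most variable genes
top = logged.var(axis=1).nlargest(10000).index
X = logged.loc[top].T.values     # samples x genes, as scikit-learn expects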
| Sample | SMCENTER | SMTS | SMTSD | SUBJECT |
| GTEX-1117F-0003-SM-58Q7G | B1 | Blood | Whole Blood | GTEX-1117F |
| GTEX-1117F-0003-SM-5DWSB | B1 | Blood | Whole Blood | GTEX-1117F |
| GTEX-1117F-0226-SM-5GZZ7 | B1 | Adipose Tissue | Adipose – Subcutaneous | GTEX-1117F |
| GTEX-1117F-0426-SM-5EGHI | B1 | Muscle | Muscle – Skeletal | GTEX-1117F |
| GTEX-1117F-0526-SM-5EGHJ | B1 | Blood Vessel | Artery – Tibial | GTEX-1117F |
Table 1. Example of the data.
When trying to predict something, it is recommended to always work with a training set and a test set. You build your model using the training set and then you assess its performance on the test set. All scikit-learn functions are designed to work in this fashion.
from sklearn.linear_model import LogisticRegression
# train_test_split and cross_val_score now live in sklearn.model_selection
# (the old sklearn.cross_validation module has been removed)
from sklearn.model_selection import train_test_split, cross_val_score
# X is the sample by gene matrix,
# Y is a vector containing the gender of each sample
# 0.3 means I want 30% of the data in the test set and 70% in the training set
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=0)
# Those parameters try to restrain the variables used in the model
model = LogisticRegression(penalty='l2', C=0.2)
model.fit(X_train, y_train)
# Apply the model on the test set
predicted = model.predict(X_test)
probs = model.predict_proba(X_test)
That's it! Once the model has converged, I can look at the predictions on the training and test sets to see how well it did. Since 62.3% of the samples in the test set were male (which is close to the proportion of males in the whole dataset), the model needs to be right more than 62.3% of the time for us to say that it is performing well. Otherwise, it could just say 'male' all the time (without using the actual gene expression data) and it would be right 62.3% of the time. For this example, the model had an accuracy of 100%: it made the right call for all the samples.
Then, by looking at the beta coefficients, I can identify which genes are important for predicting the sex. Those would be the genes having the largest or smallest coefficients. The top four genes include RPS4Y1 (coefficient of 1.34), KDM5D (1.27), DDX3Y (1.25) and XIST (-1.25). Note that XIST is the only one that looks specific to females.
Figure 2. Distribution of the expression of the top genes having the most important coefficients.
Note that you should always do cross-validation to be sure that the model is not specific to your dataset (that is, that it does not overfit). The goal of building a predictive model is, after all, to be able to reuse it on new data, so it needs to be able to generalize. Cross-validation implies building the model several times using different training and test sets, and averaging the scores to get a sense of its real performance. In the code below, the model is built 10 times, each time using a different training set and test set. The mean score over the 10 iterations tells us whether the model is good at generalizing.
# evaluate the model using 10-fold cross-validation
scores = cross_val_score(LogisticRegression(penalty='l2', C=0.2), X, Y, scoring='accuracy', cv=10)
Predicting the tissue from the gene expression profiles
Now, for fun, I can try to derive a model that will predict the tissue. I say for fun because I'll only look at one model (not cross-validated) and because of the very different number of samples in each tissue category: one category has 1259 samples and another only 6. This imbalance might need to be addressed.
So, there are 31 high-level tissue types in the dataset, the two most represented being Brain and Skin, with respectively 14.7% and 10.4% of the samples. Using the same code and matrix as before, only changing the vector Y (the dependent variable that I want to predict), I get a good model which is right 99.3% of the time. It made 18 errors on a total of 2567 predictions. Interestingly, a number of the errors involved female tissues, which may be related to the fact that female samples are under-represented in the dataset!
| Predicted | Real |
| Stomach | Vagina |
| Nerve | Adipose Tissue |
| Esophagus | Stomach |
| Colon | Small Intestine |
| Esophagus | Stomach |
| Esophagus | Stomach |
| Colon | Esophagus |
| Skin | Vagina |
| Uterus | Fallopian Tube |
| Adipose Tissue | Breast |
| Ovary | Fallopian Tube |
| Skin | nan |
| Blood Vessel | Uterus |
| Adipose Tissue | Stomach |
| Stomach | Colon |
| Esophagus | Stomach |
| Heart | Blood Vessel |
| Adipose Tissue | Fallopian Tube |
Table 2. Errors made by the model.
|
I'm trying to understand, at least intuitively, why the derivative of a function at a point is the tangent vector at this point.
If we consider functions of the form $f:\mathbb R\to \mathbb R$, we see clearly that
$$f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}$$
is the slope of the tangent of $f$ at the point $a\in \mathbb R$, because the angular coefficient of a line is $\frac{\Delta y}{\Delta x}$.
However, I couldn't understand why the derivative is the tangent vector in higher dimensions as clearly as in my example above.
In order to illustrate what I said above, let's take for example the helix $$\alpha:\mathbb R\to \mathbb R^3,\quad \alpha(t)=(a\cos t,a\sin t,bt)$$
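For concreteness (this computation is standard, added here for illustration), differentiating the helix componentwise gives
$$\alpha'(t)=(-a\sin t,\ a\cos t,\ b),$$
which is the vector one usually draws tangent to the helix; the question is how this fits the general definition below.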
If we take the definition of derivative in higher dimensions in Spivak's book we have:
A function $f:\mathbb R^n\to \mathbb R^m$ is differentiable at $a\in \mathbb R^n$ if there is a linear transformation $\lambda: \mathbb R^n\to \mathbb R^m$ such that
$$\lim_{h\to 0}\frac{|f(a+h)-f(a)-\lambda(h)|}{|h|}=0$$
I can't see why $\alpha'(t)$ is the tangent vector at the point $\alpha(t)$, and I also tried to see the derivative as the Jacobian, without success.
So my question is: why is the derivative the tangent vector?
Thanks in advance.
|
'perturbative' and 'real' particles
(following text is taken - slightly modified - from: O.Buss, PhD thesis, pdf, Appendix B.1)
For some calculations, e.g. low-energy πA or γA collisions, it is a good assumption that the target nucleus stays very close to its ground state. Hence, one keeps as an approximation the target nucleus constant in time. This basically means that the phase-space density of the target is not allowed to change during the run. The test-particles which represent this constant target nucleus are called real test-particles. However, one also wants to consider the final-state particles. Thus one defines another type of test-particles, which are called perturbative. The perturbative test-particles are propagated and may collide with real ones; the products are perturbative particles again. However, perturbative particles may not scatter among each other. Furthermore, they are neglected in the calculation of the actual densities. One can simulate in this fashion the effects of the almost constant target on the outgoing nucleons without modifying the target. E.g. in πA collisions we initialize all initial-state pions as perturbative test-particles. Thus the target stays automatically constant and all products of the collisions of pions and target nucleons are assigned to the perturbative regime.
Furthermore, since the perturbative particles do not react among each other or modify the real particles in a reaction, one can also split a perturbative particle into \(N_{test}\) pieces (several perturbative particles) during a run. Each piece is given a corresponding weight \(1/N_{test}\), and one simulates in this way \(N_{test}\) possible final-state scenarios of the same perturbative particle during one run.
The perturbative weight 'perWeight'
Usually, in the cases mentioned above, where one uses the separation into real and perturbative particles like this, one wants to calculate some final quantity like \(d\sigma^A_{tot}=\int_{nucleus}d^3r\int \frac{d^3p}{(2\pi)^3} d\sigma^N_{tot}\,\times\,\dots \). Here we are hiding all medium modifications, e.g. Pauli blocking, flux corrections or medium modifications of the cross section, in the part "\(\,\times\,\dots \)". Now, solving this via the test-particle ansatz (with \(N_{test}\) being the number of test particles), this quantity is calculated as \(d\sigma^A_{tot}=\frac{1}{N_{test}}\sum_{j=1}^{N_{test}\cdot A}d\sigma^j_{tot}\,\times\,\dots \), with \(d\sigma^j_{tot}=d\sigma^N_{tot}(\vec r_j,\vec p_j)\) standing for the cross section of the \(j\)-th test-particle.
The internal implementation of calculations like this in GiBUU is that a loop runs over all \(N_{test}\cdot A\) target nucleons and creates some event. Thus all these events have the same probability. But since they should be weighted according to \(d\sigma^j_{tot}\), this is corrected by giving all (final-state) particles coming out of event \(j\) the weight \(d\sigma^j_{tot}\).
This information is stored in the variable perWeight in the definition of the particle type.
Thus, in order to get the correct final cross section, one has to sum the perWeight values, and not count the particles.
As an example: if you want to calculate the inclusive pion production cross section, you have to loop over all particles and sum the perWeights of all pions. Simply taking the number of all pions would give false results.
|
When writing down the action of the RNS superstring in superspace, all of the sources I have checked (BBS, GSW, Polchinski) seem to just write down the action in conformal gauge, that is$$S_{\text{RNS}}:=\mathrm{i}\, \frac{T}{4}\int _W\mathrm{d}^2\sigma \mathrm{d}^2\theta \, \bar{D}Y\cdot DY,$$where $W$ is the superworldsheet, $Y$ is a superfield on $W$:$$Y:=X+\bar{\theta}\psi +\frac{1}{2}\bar{\theta}\theta B,$$$D$ is the 'supercovariant derivative':$$D_A:=\frac{\partial}{\partial \bar{\theta}^A}+(\rho ^\alpha \theta )_A\partial _\alpha,$$$\rho ^\alpha$ are generators of the $(-,+)$ Clifford algebra:$$\{ \rho ^\alpha ,\rho ^\beta \}=2\eta ^{\alpha \beta},$$and the bar denotes the Dirac conjugate.
On the other hand, for the bosonic string, we have the Polyakov action:$$S_{\text{P}}:=-\frac{T}{2}\int _W\mathrm{d}^2\sigma \, \sqrt{-h}\nabla _\alpha X\cdot \nabla ^\alpha X,$$where $h_{\alpha \beta}$ is the metric on the worldsheet $W$ and $\nabla _\alpha$ the corresponding Levi-Civita covariant derivative (which for scalar fields happens to agree with just the usual partial derivative). If we take $h_{\alpha \beta}=\eta _{\alpha \beta}$ (conformal gauge), then this reduces to the Bosonic part (ignoring the auxilary field $B$) of $S_{\text{RNS}}$.
I was wondering: what is the appropriate generalization of $S_{\text{RNS}}$ to a theory defined on a supermanifold with a 'supermetric'? For that matter, what is the right notion of a supermetric on a supermanifold and do we have an analogous Fundamental Theorem of Super-Riemannian Geometry that, given a supemetric, gives us a canonical supercovariant$^1$ derivative? This generalization should be analogous to the pre-gauge-fixed form of $S_{\text{P}}$ given above in which the metric and all covariant derivatives appear explicitly.
$^1$ While $D$ is called the "supercovariant derivative", it clearly cannot be the right notion, at least not in general, because it makes no reference to a supermetric.This post imported from StackExchange Physics at 2014-08-23 05:00 (UCT), posted by SE-user Jonathan Gleason
|
Let $z_1,z_2$ be two complex numbers with $\operatorname{Re}(z_1)\leq0$ and $\operatorname{Re}(z_2)\leq0$. I want to prove: $$\big|e^{z_2}-e^{z_1}\big|\leq\big|z_2-z_1\big|$$
I began by using the reverse triangle inequality: $\big|e^{z_2}-e^{z_1}\big|\geq\bigg|\big|e^{z_2}\big|-\big|e^{z_1}\big|\bigg|$
So, it must be shown that: $$\frac{\bigg|\big|e^{z_2}\big|-\big|e^{z_1}\big|\bigg|}{\big|z_2-z_1\big|}=\bigg|\frac{e^{\operatorname{Re}(z_2)}-e^{\operatorname{Re}(z_1)}}{z_2-z_1}\bigg|\leq1$$
Why is this true?
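(A standard route to the inequality in the title, added here for completeness rather than taken from the original post: write the difference as an integral along the segment joining $z_1$ and $z_2$.)
$$e^{z_2}-e^{z_1}=\int_0^1 \frac{d}{dt}\,e^{z_1+t(z_2-z_1)}\,dt=(z_2-z_1)\int_0^1 e^{z_1+t(z_2-z_1)}\,dt$$
Since $\operatorname{Re}\big(z_1+t(z_2-z_1)\big)\leq 0$ for all $t\in[0,1]$, the integrand has modulus at most $1$, and $\big|e^{z_2}-e^{z_1}\big|\leq\big|z_2-z_1\big|$ follows.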
|
I am reading the following paper: Takáč, Peter On the Fredholm alternative for the p-Laplacian at the first eigenvalue. Indiana Univ. Math. J. 51 (2002), no. 1, 187–237.
I need help to understand the following argument (page 193 section 2.1):
$\Omega\subset\mathbb{R}^N$ is a bounded regular domain, $p\in (2,\infty)$. Let $\phi_1$ be the first eigenfunction of the Dirichlet $p$-Laplacian $-\Delta_p$ on $W_0^{1,p}(\Omega)$, i.e. $$\int|\nabla\phi_1|^{p-2}\nabla\phi_1\cdot\nabla v=\lambda_1\int|\phi_1|^{p-2}\phi_1v,\ \forall\ v\in W_0^{1,p}$$
where $\lambda_1>0$ is the first eigenvalue. We can assume that $\phi_1\in C^1(\overline{\Omega})$, $\phi_1>0$ in $\Omega$ and $\frac{\partial\phi_1}{\partial\eta}<0$ on $\partial\Omega$, where $\frac{\partial\phi_1}{\partial\eta}$ denotes the derivative in the normal direction.
Define in $W_0^{1,p}$ the semi-norm $$\|u\|_{\phi_1}=\Big(\int|\nabla\phi_1|^{p-2}|\nabla u|^2\Big)^{\frac{1}{2}}$$
The author says that, in fact, $\|\cdot\|_{\phi_1}$ is a norm because of the following argument: if $v\in W_0^{1,p}$, then
\begin{eqnarray} \lambda_1\int\phi_1^{p-1}v^2 &=& \int|\nabla\phi_1|^{p-2}\nabla\phi_1\nabla(v^2) \nonumber \\ &\leq& 2\int|\nabla\phi_1|^{p-1}|\nabla v||v| \nonumber \\ &\le& 2\|v\|_{\phi_1}\Big(\int|\nabla\phi_1|^pv^2\Big)^{\frac{1}{2}} \end{eqnarray}
I can understand the last two inequalities, and I can use them to prove that $\|\cdot\|_{\phi_1}$ is a norm. The problem is the first equality. To use the characterization of the eigenvalue, the test function has to be in $W_0^{1,p}$. Why is $v^2\in W_0^{1,p}$? I think that he is using this fact; is this true?
|
Consider the model $\mathbf{y} = f(\mathrm{X}) + \epsilon$. Here $\mathrm{X}$ is a fixed $n \times d$ data matrix, and $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$ is iid Gaussian noise. Assume that $\sigma^2$ is known.
First, consider modeling this using a Gaussian process i.e. $f \sim \mathcal{GP}(0, k)$. Then it can be shown that for a new point $x_\ast$, the predictive distribution is Gaussian with mean and variance given by
$ \mu_p = k(\mathrm{X}, x_\ast)^T(k(\mathrm{X}, \mathrm{X}) + \sigma^2 I)^{-1}\mathbf{y} $,
$ V_p = k(x_\ast, x_\ast) - k(\mathrm{X}, x_\ast)^T(k(\mathrm{X}, \mathrm{X}) + \sigma^2 I)^{-1}k(\mathrm{X}, x_\ast) $,
respectively. Now, consider modeling the same data using kernel ridge regression (with regularization parameter $\lambda$). In this case, we estimate $f$ (assumed to be in the RKHS corresponding to kernel $k$), and get predictions given by
$ \hat{f}(x_\ast) = k(\mathrm{X}, x_\ast)^T(k(\mathrm{X}, \mathrm{X}) + \lambda I)^{-1}\mathbf{y} $,
which is of course the same as the posterior Gaussian process mean (with $\lambda = \sigma^2$), because the two models are just different ways of looking at the same thing.
Now, here is where my confusion arises. Based on this equivalence, it seems to me that the variance of the ridge prediction should match the posterior Gaussian process variance. But this does not seem to be the case. We have,
$\mathbb{V}[\hat{f}(x_\ast)] = k(\mathrm{X}, x_\ast)^T(k(\mathrm{X}, \mathrm{X}) + \lambda I)^{-1} \mathbb{V}[\mathbf{y}] (k(\mathrm{X}, \mathrm{X}) + \lambda I)^{-1}k(\mathrm{X}, x_\ast) = \sigma^2 k(\mathrm{X}, x_\ast)^T(k(\mathrm{X}, \mathrm{X}) + \lambda I)^{-2}k(\mathrm{X}, x_\ast) $.
Using the Woodbury identity, this can be re-written as
$ \frac{\sigma^2}{\lambda}\Big(\phi(x_\ast)^T(\phi(\mathrm{X})^T\phi(\mathrm{X}) + \lambda I)^{-1} \phi(\mathrm{X})^T\phi(\mathrm{X}) \phi(x_\ast) - \phi(x_\ast)^T(\phi(\mathrm{X})^T\phi(\mathrm{X}) + \lambda I)^{-1}\phi(\mathrm{X})^T\phi(\mathrm{X})\phi(\mathrm{X})^T(k(\mathrm{X}, \mathrm{X}) + \lambda I)^{-1}\phi(\mathrm{X})\phi(x_\ast)\Big) $,
where $\phi$ is the feature map corresponding to $k$. This is similar to the posterior Gaussian process variance, but not equal (with $\lambda = \sigma^2$). We can get approximate equality by taking $(\phi(\mathrm{X})^T\phi(\mathrm{X}) + \lambda I)^{-1}\phi(\mathrm{X})^T\phi(\mathrm{X}) \approx I$, but it is not clear to me why there isn't a strict equality like in the case of the mean.
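As a quick numerical check of the mean equivalence (my own sketch; the RBF kernel and data are arbitrary), the GP posterior mean and the KRR prediction coincide when $\lambda = \sigma^2$, while the GP posterior variance is a separate quantity:

import numpy as np

def rbf(A, B, ls=1.0):
    # squared-exponential kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = rng.normal(size=20)
xs = rng.normal(size=(1, 2))
lam = sigma2 = 0.1                       # lambda = sigma^2

K = rbf(X, X)
ks = rbf(X, xs)[:, 0]
gp_mean = ks @ np.linalg.solve(K + sigma2 * np.eye(20), y)
krr_pred = ks @ np.linalg.solve(K + lam * np.eye(20), y)   # same formula
gp_var = rbf(xs, xs)[0, 0] - ks @ np.linalg.solve(K + sigma2 * np.eye(20), ks)
print(np.isclose(gp_mean, krr_pred), gp_var)   # True, plus the GP variance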
|
'perturbative' and 'real' particles
(following text is taken - slightly modified - from: O.Buss, PhD thesis, pdf, Appendix B.1)
Reactions which are so violent that they disassemble the whole target nucleus can be treated only by explicitly propagating all particles, the ones in the target and the ones produced in the collision, on the same footing.
For reactions which are not violent enough to disrupt the whole target nucleus, e.g. low-energy πA, γA or neutrino-A collisions at not too high energies, the target nucleus stays very close to its ground state. Hence, one keeps as an approximation the phase-space density of the target nucleus constant in time (the 'frozen approximation'). In GiBUU this is controlled by the switch freezeRealParticles. The test-particles which represent this constant target nucleus are called real test-particles. However, one also wants to consider the final-state particles. Thus one defines another type of test-particles, which are called perturbative. The perturbative test-particles are propagated and may collide with real ones; the products are perturbative particles again. However, perturbative particles may not scatter among each other. Furthermore, their feedback on the actual densities is neglected. One can simulate in this fashion the effects of the almost constant target on the outgoing particles without modifying the target. E.g. in πA collisions we initialize all initial-state pions as perturbative test-particles. Thus the target automatically remains frozen and all products of the collisions of pions and target nucleons are assigned to the perturbative regime.
Furthermore, since the perturbative particles do not react among themselves or modify the real particles in a reaction, one can also split a perturbative particle into \(N_{test}\) pieces (several perturbative particles) during a run. Each piece is given a corresponding weight \(1/N_{test}\), and one simulates in this way \(N_{test}\) possible final-state scenarios of the same perturbative particle during one run.
The perturbative weight 'perWeight'
Usually, in the cases mentioned above, where one uses the separation into real and perturbative particles, one wants to calculate some final quantity like \(d\sigma^A_{tot}=\int_{nucleus}d^3r\int \frac{d^3p}{(2\pi)^3} d\sigma^N_{tot}\,\times\,\dots \). Here we are hiding all medium modifications, e.g. Pauli blocking, flux corrections or medium modifications of the cross section, in the part "\(\,\times\,\dots \)". Now, solving this via the test-particle ansatz (with \(N_{test}\) being the number of test particles), this quantity is calculated as \(d\sigma^A_{tot}=\frac{1}{N_{test}}\sum_{j=1}^{N_{test}\cdot A}d\sigma^j_{tot}\,\times\,\dots \), with \(d\sigma^j_{tot}\) standing for the cross section of the \(j\)-th test-particle.
The internal implementation of calculations like this in GiBUU is that a loop runs over all \(N_{test}\cdot A\) target nucleons and creates some event. Thus all these events have the same probability. But since they should be weighted according to \(d\sigma^j_{tot}\), this is corrected by giving all (final-state) particles coming out of event \(j\) the weight \(d\sigma^j_{tot}\).
This information is stored in the variable perWeight in the definition of the particle type.
Thus, in order to get the correct final cross section, one has to sum the perWeight values, and not count the particles.
As an example: if you want to calculate the inclusive pion production cross section, you have to loop over all particles and sum the perWeights of all pions. Simply taking the number of all pions would give false results.
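As a minimal illustration of this bookkeeping (hypothetical Python pseudocode, not GiBUU's actual analysis code; the particle records are made up):

# each perturbative particle carries its event's weight in perWeight
particles = [
    {"id": "pion",    "perWeight": 0.8},
    {"id": "pion",    "perWeight": 1.3},
    {"id": "nucleon", "perWeight": 0.8},
]

# correct: sum the perWeights of the pions ...
sigma_pion = sum(p["perWeight"] for p in particles if p["id"] == "pion")

# ... wrong: counting the pions ignores the per-event weights
n_pion = sum(1 for p in particles if p["id"] == "pion")
print(sigma_pion, n_pion)   # 2.1 versus 2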
|
Integrals of Complex Functions and Integration by Parts
In this class, we will deal only with integrals of complex functions of a real variable, integrated with respect to the real variable. This is identical to integration of real functions of real variables; the $j$ is simply treated as a constant in all these cases. In this class, we are typically interested in time $t$ or frequency $\omega$ being the independent variable. Hence, the integrals will be with respect to $t$ or $\omega$.
For example,
$\int_0^{\pi/4} e^{j 2 t} \ dt = \left[\frac{e^{j 2 t}}{2 j}\right]_{0}^{\pi/4} = \frac{j-1}{2j}$
Sometimes we will have to use integration by parts to evaluate integrals. The main result to recall is
$$\int u \, dv = uv - \int v \, du \tag{1}$$
Example:
Evaluate $\displaystyle{\int_0^1 t e^{-j \omega t} dt}$, where $\omega$ is any complex number
We choose $u = t$ and $dv = e^{-j\omega t}\,dt$, so that $du = dt$ and $v = \dfrac{e^{-j\omega t}}{-j\omega}$ (assuming $\omega \neq 0$).
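(The rest of this example is filled in here, since the original cuts off; it is the standard computation given the choice above, still assuming $\omega \neq 0$.)
$$\int_0^1 t e^{-j\omega t}\,dt = \left[\frac{t\,e^{-j\omega t}}{-j\omega}\right]_0^1 + \frac{1}{j\omega}\int_0^1 e^{-j\omega t}\,dt = \frac{j\,e^{-j\omega}}{\omega} + \frac{e^{-j\omega}-1}{\omega^2}$$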
|
Let $G$ be a compact Lie group acting on a manifold $M$. So for each vector $X \in T_eG$ we have $X^{\#}$, the vector field on $M$ defined at each point $p \in M$ by the curve $\exp(tX) \cdot p$. If the action is free, we have that $X^{\#}_p=0 \iff X=0$; I get the intuitive idea but I cannot prove it rigorously.
Define $\phi_t(p)=\exp(tX)\cdot p$; then $\phi_t$ is the flow of $X^{\#}$, and we deduce that $X^{\#}_p=0$ implies $\phi_t(p)=\exp(tX)\cdot p=p$ for every $t$. By freeness of the action, $\exp(tX)=e$ for every $t$, and differentiating at $t=0$ gives $X=0$.
N.B. The flow $\psi_t$ of a vector field $Y$ satisfies the differential equation
(1) ${d\over {dt}}\psi_t=Y(\psi_t)$,
so if $Y(x)=0$, the constant curve $\psi_t(x)=x$ satisfies $(1)$, and by uniqueness of solutions the flow is $\psi_t(x)=x$.
|
You can see a full exposition of the Completeness Theorem for propositional logic in every good mathematical logic textbook, like:
The proof system used is Natural Deduction; here is a sketch of the proof.
Lemma 2.5.1 (Soundness) If $Γ \vdash \varphi$, then $Γ \vDash \varphi$.
The proof of it needs the rules of the proof system.
Definition 2.5.2 A set $\Gamma$ of propositions is consistent if $\Gamma \nvdash \bot$ [$\bot$ is the logical constant for the falsum, used in ND].
Let us call $Γ$ inconsistent if $Γ \vdash \bot$.
Lemma 2.5.4 If there is a valuation $v$ such that $v(\psi) = 1$ for all $\psi \in \Gamma$, then $\Gamma$ is consistent.
Lemma 2.5.5
(a) $Γ \cup \{ ¬\varphi \}$ is inconsistent iff $Γ \vdash \varphi$;
(b) $Γ \cup \{ \varphi \}$ is inconsistent iff $Γ \vdash ¬\varphi$.
For (a): by assumption (see the definition of consistency) we have $\Gamma, \lnot \varphi \vdash \bot$. Now apply the (RAA) rule [i.e. if we have a derivation of $\bot$ from $\lnot \varphi$, we can infer $\varphi$, "discharging" the assumption $\lnot \varphi$] to conclude: $\Gamma \vdash \varphi$.
Lemma 2.5.7 Each consistent set $Γ$ is contained in a maximally consistent set $Γ^*$.
Lemma 2.5.8 If $Γ$ is maximally consistent, then $Γ$ is closed under derivability (i.e. if $Γ \vdash \varphi$, then $\varphi \in Γ$).
[...]
Lemma 2.5.11 If $Γ$ is consistent, then there exists a valuation $v$ such that $v(\psi) = 1$, for all $\psi \in Γ$.
Corollary 2.5.12 $Γ \nvdash \varphi$ iff there is a valuation $v$ such that $v(\psi) = 1$ for all $\psi \in Γ$ and $v(\varphi) = 0$.
Proof $Γ \nvdash \varphi$ iff $Γ \cup \{ ¬ \varphi \}$ is consistent iff there is a valuation $v$ such that $v(\psi) = 1$ for all $\psi \in Γ \cup \{ ¬ \varphi \}$, or $v(\psi) = 1$ for all $\psi \in Γ$ and $v(\varphi) = 0$.
Theorem 2.5.13 (Completeness Theorem) $Γ \nvdash \varphi$ iff $Γ \nvDash \varphi$.
Proof if $Γ \nvdash \varphi$, then $Γ \nvDash \varphi$ by Corollary 2.5.12. The converse holds by Lemma 2.5.1.
|
I've recently started studying differential geometry and I'm a bit unsure of the notion of a tangent vector on a manifold. Is the point that we can no longer think of a vector as an arrow (a straight line) extending between two points (in general we cannot compare two points on a manifold), as there is no well-defined concept of an origin, or indeed of what a straight line is, and hence we need a new definition of a vector?
Given this, we can define a curve $\gamma :[0,1]\subset\mathbb{R}\rightarrow M$. (Question: doesn't this implicitly define a 1-D coordinate system, or is the point that a curve on the manifold exists independently of any coordinate system, and hence we can choose a real parameter $t$ such that each value of $t$ maps to a point on the manifold, therefore tracing out a curve on the manifold?) We then define a tangent vector in terms of a differentiable function $f$ by stating that the tangent vector at a point $p\in M$ is the directional derivative of $f$ along the curve $\gamma$ at this point, i.e. $$\mathbf{v}_{p}[f]=\frac{d}{dt}(f\circ\gamma)\bigg\vert_{t=0}$$ where $\gamma (0)=p$. Clearly this definition is independent of coordinates, and it can also be shown to be independent of the curve passing through $p\in M$ (also, it is independent of the differentiable function $f$, as this was chosen arbitrarily). A tangent vector at $p$ is then defined to be the equivalence class of curves that have the same derivative at $p$, i.e. $$\mathbf{v}_{p}=[\gamma]=\lbrace \tilde{\gamma}\vert\;(\phi\circ\tilde{\gamma})'(0)=(\phi\circ\gamma)'(0)\rbrace$$ I'm really trying to understand the motivation and intuition behind this definition.
I'm not sure how you understand a vector, but to me it's simply an element of some vector space. The vector space is more important.
Here is a very conceptual answer to what I personally consider a motivation behind the definition of tangent vectors.
If you think of a manifold embedded in $\mathbb{R}^n$, intuitively the tangent space at a point is the hyperplane tangent to the manifold at that point. Any vector in this plane gives you a directional derivative (you differentiate "stuff" along that direction).
However, without being embedded in $\mathbb{R}^n$, it is not clear how to differentiate functions on a manifold. What you can do is utilize a curve $\gamma : [-1,1]\to M$ and get a function $f\circ \gamma :\mathbb{R}\to \mathbb{R}$, which you know how to differentiate. One then fiddles around and realizes that the right condition to distinguish "different" curves (in the sense that they give different directional derivatives) is the one you gave in your question.
But if one thinks abstractly about what a directional derivative really is, it's something that takes a function $f\in C^\infty(M)$ as an input and gives you a number. Furthermore, it should be a linear operator and should satisfy the Leibniz rule. It then turns out these are exactly the "nice conditions". These are called derivations, and they give an equivalent way of defining the tangent space.
Whitney’s Embedding Theorem says that we can smoothly embed any smooth manifold into $ \mathbb{R}^{n} $ for $ n $ large enough. In other words, any smooth manifold can be viewed as a smooth sub-manifold of a Euclidean space of sufficiently high dimension. Once we have this picture of a smooth manifold, we can draw a tangent plane to any point $ p $ on the smooth manifold and view tangent vectors as vectors in this hyperplane, where the origin is taken to be $ p $.
If we have a smooth curve $ \gamma: (-1,1) \to M $ in the original abstract manifold $ M $, we can transfer it to a smooth curve $ \gamma': (-1,1) \to \mathbb{R}^{n} $ whose image lies in the corresponding sub-manifold of $ \mathbb{R}^{n} $. As you know, the time derivative $ \dfrac{d}{dt} [\gamma'(t)] \Bigg|_{t = 0} $ of this curve in $ \mathbb{R}^{n} $ is a vector that is tangent to $ \gamma' $ at $ t = 0 $.
The reason why we need an abstract definition of a tangent vector is that we must be able to discuss differential geometry in a coordinate-free language. Manifolds have a reality that is independent of any choice of coordinates.
|
I am trying to find a good proof of the invertibility of strictly diagonally dominant matrices (defined by $|m_{ii}|>\sum_{j\ne i}|m_{ij}|$). There is a proof of this in this paper, but I'm wondering whether there is a better proof, such as one using determinants, to show that the matrix is nonsingular.
The proof in the PDF (Theorem 1.1) is very elementary. The crux of the argument is that if $M$ is strictly diagonally dominant and singular, then there exists a vector $u \neq 0$ with $$Mu = 0.$$
$u$ has some entry $u_i$ of largest magnitude; replacing $u$ by $-u$ if necessary, we may assume $u_i > 0$. Then
\begin{align*} \sum_j m_{ij} u_j &= 0\\ m_{ii} u_i &= -\sum_{j\neq i} m_{ij}u_j\\ m_{ii} &= -\sum_{j\neq i} \frac{u_j}{u_i}m_{ij}\\ |m_{ii}| &\leq \sum_{j\neq i} \left|\frac{u_j}{u_i}m_{ij}\right|\\ |m_{ii}| &\leq \sum_{j\neq i} |m_{ij}|, \end{align*} a contradiction.
I'm skeptical you will find a significantly more elementary proof. Incidentally, though, the Gershgorin circle theorem (also described in your PDF) is very beautiful and gives geometric intuition for why no eigenvalue can be zero.
I would prove it a bit tangentially, not because it is simpler, but because it gives an excuse to show an application. I would take an iterative method, like Jacobi's, and show that it converges in this case, and that it converges to a unique solution. This incidentally implies the matrix is nonsingular.
How does it work exactly?
For the system $Ax=b$, Jacobi's method consists in writing $A=D+R$, where $D$ is diagonal and $R$ has zeros in the diagonal. Then you define the recurrence
$$x_{n+1}=D^{-1}(b-Rx_{n}).$$
Now we can show that it converges.
We have
\begin{align}\|x_m-x_n\|&=\Big\|\sum_{k=n}^{m-1}(-D^{-1}R)^kD^{-1}b+\big((-D^{-1}R)^{m}-(-D^{-1}R)^{n}\big)x_0\Big\|\\ &\leq\sum_{k=n}^{m-1}\|D^{-1}R\|^k\|D^{-1}b\|+\left(\|D^{-1}R\|^m+\|D^{-1}R\|^n\right)\|x_0\| \end{align}
For the norm $\|\cdot\|:=\|\cdot\|_{\infty}$, the induced matrix norm is the maximum over rows of the sums of the absolute values of the entries. Therefore $\|D^{-1}R\|$ is some number less than $1$: this is exactly the strict diagonal dominance condition. For this reason the sum above can be made as small as we want for $n,m$ large. This shows that the sequence is Cauchy, and hence convergent.
It is clear, too, that the sequence converges to a solution of the system $Ax=b$: passing to the limit in the recurrence gives $x=D^{-1}(b-Rx)$, i.e. $Ax=b$. Moreover, repeating the argument above with a solution $x$ in place of $x_m$ shows that $x_n$ converges to every solution. Since a convergent sequence has only one limit, there is only one solution to the system.
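Here is a minimal numpy sketch of this iteration (my own illustration, with a made-up strictly diagonally dominant matrix):

import numpy as np

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])     # strictly diagonally dominant
b = np.array([6.0, 8.0, 4.0])

D = np.diag(A)                      # diagonal entries of A
R = A - np.diag(D)                  # off-diagonal remainder
x = np.zeros_like(b)

for _ in range(100):                # x_{n+1} = D^{-1} (b - R x_n)
    x = (b - R @ x) / D

print(x, np.allclose(A @ x, b))     # converged Jacobi solution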
|