Current Electricity: Electric Current and Drift of Electrons

The time rate of flow of charge through any cross-section is called current, i = q/t. Current has a direction, but it is a scalar. The conventional direction of current is taken to be the direction of flow of positive charge.

If n particles, each carrying a charge q, pass through a given area in time t, then i = \frac{nq}{t}.

If a point charge q is moving in a circle of radius r with speed v, then i = \frac{qv}{2 \pi r}.

Drift velocity (v_d) is the average velocity acquired by the free electrons inside a metal due to an electric field. Drift velocity depends on the electric field and decreases with increasing temperature.

The mobility of a charge carrier is the average drift velocity resulting from the application of a unit electric field: \mu = \frac{v_d}{E}. The mobility of charge carriers can be determined experimentally via the Hall effect.

Current density (J) is the amount of charge flowing per unit cross-sectional area per second: J = \frac{i}{A}. The direction of the current density is that of the flow of positive charge.

Relation between electric current and drift velocity: i = neAv_d. Relation between current density and drift velocity: J = nev_d.

The property of a material that opposes the flow of current through it is called electric resistance (R): R = \frac{V}{i}. The unit of resistance is the ohm. Resistance depends on the length and area of cross-section, the nature of the material, and the temperature.
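A quick numeric check of i = \frac{nq}{t} and i = \frac{qv}{2 \pi r}; this is a sketch with illustrative numbers, the circular-orbit values being the standard Bohr-model electron:

```python
import math

e = 1.6e-19                        # elementary charge, C
n, t = 1.0e19, 2.0                 # 1e19 charges crossing a section in 2 s
print(n * e / t)                   # i = nq/t = 0.8 A

q, v, r = e, 2.2e6, 5.3e-11        # electron speed (m/s) and orbit radius (m)
print(q * v / (2 * math.pi * r))   # i = qv/(2*pi*r) ≈ 1.06e-3 A
```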
1. Free electron density in a metal is given by n = \frac{N_{A}x d}{A}, where N_A = Avogadro's number, x = number of free electrons per atom, d = density of the metal and A = atomic weight of the metal.

2. Electric Current (I): The rate of flow of charge through any cross-section of a wire is called the electric current flowing through it. Electric current (I) = \frac{q}{t}.

3. Drift Velocity of Free Electrons: When a potential difference is applied across the ends of a conductor, the free electrons in it move with an average velocity opposite to the direction of the electric field, which is called the drift velocity of the free electrons.
Drift velocity v_{d} = \frac{eE \tau}{m} = \frac{eV \tau}{ml}
4. Current Density: The electric current flowing per unit area of cross-section of a conductor is called the current density.
Current density (J) = \frac{I}{A} = nev_{d}
5. Mobility: The drift velocity of an electron per unit electric field applied is called the mobility of the electron.

Mobility of electron (\mu) = \frac{v_{d}}{E}
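A worked example of these formulas for copper, assuming one free electron per atom and standard handbook values:

```python
N_A = 6.022e23              # Avogadro's number, atoms per mol
x, d, A = 1, 8.96, 63.5     # free electrons/atom, density g/cm^3, atomic weight g/mol
n = N_A * x * d / A         # ≈ 8.5e22 free electrons per cm^3
n_SI = n * 1e6              # per m^3

e, I, area = 1.6e-19, 1.0, 1.0e-6   # charge (C), current (A), cross-section (m^2)
v_d = I / (n_SI * e * area)          # from I = n e A v_d
print(n_SI, v_d)                     # ≈ 8.5e28 m^-3 and ≈ 7.4e-5 m/s
```

The tiny drift speed (a fraction of a millimetre per second) is the standard takeaway: the signal propagates fast, but the electrons themselves drift slowly.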
How many homomorphisms $\Psi : S_3 \rightarrow S_3$ exist?
Attempt: I found $16$ homomorphisms in total.
$S_3 =\{(1), (12),(13),(23), (123),(132)\}$

There are three normal subgroups of $S_3$: $\{(1)\}$, $A_3 = \langle (123)\rangle$, and $S_3$ itself.
Let $\Psi : S_3 \rightarrow S_3$ be a homomorphism. Then $Ker~ \Psi$ can be any of the normal subgroups of $S_3$. When:

(a) $Ker ~\Psi= S_3 \rightarrow$ this is the trivial homomorphism mapping everything to the identity. There is one homomorphism in this case.
(b) $Ker ~\Psi= \{(1)\} \rightarrow$ if $\Psi(x) = y$, then $O(y)$ must divide $O(x)$. Hence, $(123)$ can get mapped only to $(132)$ or $(123)$.

$(12), (13), (23)$ can be mapped among themselves in $3! = 6$ ways.

Hence $6 \cdot 2 = 12$ homomorphisms are possible in this case.
(c) $Ker ~\Psi= \langle (123) \rangle \rightarrow$ now $\langle (123) \rangle = \{ (1), (123),(132) \}$. All elements of this set are mapped to $(1)$ in just one way. We have to think about how the other elements are mapped.

In this case, the image of $\Psi$ is a subgroup of order $2$, since $S_3/ Ker~\Psi \approx \Psi(S_3) \implies |\Psi (S_3)| = 2$.
$ \implies \Psi (S_3) = \{1, (12)\}$ or $\{1,(13)\}$ or $\{1,(23)\}$
There are $3$ possible homomorphisms in this case
Hence total number of homomorphisms possible = $16$.
Is my attempt correct?
Thank you for your help.
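Since $S_3$ has only six elements, the count can be sanity-checked by brute force; here is a minimal Python sketch, with elements represented as permutation tuples:

```python
from itertools import product, permutations

S3 = list(permutations(range(3)))            # the 6 elements of S3

def compose(p, q):
    return tuple(p[q[x]] for x in range(3))  # (p ∘ q)(x) = p(q(x))

count = 0
for images in product(S3, repeat=6):         # all 6^6 maps S3 -> S3, runs in seconds
    phi = dict(zip(S3, images))
    if all(phi[compose(g, h)] == compose(phi[g], phi[h])
           for g in S3 for h in S3):
        count += 1
print(count)                                 # prints 10
```

The script prints $10$, not $16$, which points at case (b): once the images of the three transpositions are fixed, the image of $(123)$, being a product of two transpositions, is already determined, so the extra factor of $2$ double-counts. That gives $6 + 3 + 1 = 10$ homomorphisms in total.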
Journal of High Energy Physics, ISSN 1126-6708, 12/2016, Volume 2016, Issue 12, pp. 1 - 59
A combination of measurements sensitive to the CKM angle gamma from LHCb is performed. The inputs are from analyses of time-integrated B+ -> DK+, B-0 ->...
B physics | CKM angle gamma | CP violation | Hadron-Hadron scattering (experiments) | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | EXTRACTION | IMPACT | PHASE | PARAMETRIZATION | CP-VIOLATION | DECAYS | PHYSICS, PARTICLES & FIELDS | Intervals | Confidence intervals | Time dependence | Uncertainty | Compatibility | Decay | Physics - High Energy Physics - Experiment | High Energy Physics - Experiment
Journal Article
2. Measurement of the CKM angle gamma using B-+/- -> DK+/- with D -> K-S(0)pi(+)pi(-), K-S(0)K+K- decays
JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 08/2018, Issue 8
A binned Dalitz plot analysis of B-+/- -> DK+/- decays, with D -> K-S(0)pi(+)pi(-) and D -> K-S(0)K+K-, is used to perform a measurement of the CP-violating...
SYSTEM | B physics | CKM angle gamma | CP violation | Hadron-Hadron scattering (experiments) | Flavor physics | CONSTRAINTS | CP-VIOLATION | TOOL | PHYSICS, PARTICLES & FIELDS
Journal Article
3. Measurement of the CKM angle gamma using B-+/- -> DK+/- with D -> K-S(0)pi(+)pi(-), K-S(0)K+K- decays
Journal of High Energy Physics, ISSN 1029-8479, 10/2014, Issue 10, pp. 1 - 52
A binned Dalitz plot analysis of B-+/- -> DK+/- decays, with D -> K-S(0)pi(+)pi(-) and D -> K-S(0)K+K-, is performed to measure the CP-violating...
HADRON COLLIDERS | CP violation | DETECTOR | Flavor physics | Hadron-Hadron Scattering | DALITZ PLOT ANALYSIS | MONTE-CARLO | PHI | B physics | CKM angle gamma | MEASURING MASSES | PLUS PLUS | TOOL | PHYSICS, PARTICLES & FIELDS
Journal Article
4. CP violation and material interaction of neutral kaons in measurements of the CKM angle gamma using B-+/- -> DK +/- decays where D -> K-s(0)pi(+)pi(-)
JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 07/2019, Issue 7
As measurements of the CKM angle gamma in decays of b-hadrons become increasingly precise, it is important to consider the impact of processes that affect...
B physics | CKM angle gamma | CP violation | Hadron-Hadron scattering (experiments) | REGENERATION | Flavor physics | ABSORPTION | SCATTERING | PHYSICS, PARTICLES & FIELDS
Journal Article
Journal of High Energy Physics, ISSN 1029-8479, 08/2016, Issue 8
A model-dependent amplitude analysis of the decay B(0) -> D(K-S(0)pi(+)pi(-))K*(0) is performed using proton-proton collision data corresponding to...
PHI | MATRIX | B physics | FACTORIES | CKM angle gamma | CP violation | Hadron-Hadron scattering (experiments) | Flavor physics | PHYSICS | DALITZ PLOT ANALYSIS | CP-VIOLATION | PHYSICS, PARTICLES & FIELDS
Journal Article
6. Quantum-correlated measurements of D -> K-S(0)pi(+)pi(-)pi(0) decays and consequences for the determination of the CKM angle gamma
JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 01/2018, Issue 1
Journal Article
7. Measurement of the CKM angle $\gamma$ using $B^\pm\to DK^\pm$ with $D\to K_\text{S}^0\pi^+\pi^-$, $K_\text{S}^0K^+K^-$ decays
06/2018
The LHCb collaboration, Aaij, R., Adeva, B. et al. J. High Energ. Phys. (2018) 2018: 176; erratum: The LHCb collaboration, Aaij, R., Adeva, B. et al. J. High...
Physics - High Energy Physics - Experiment
Journal Article
JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 12/2016, Volume 2016, Issue 12
A combination of measurements sensitive to the CKM angle γ from LHCb is performed. The inputs are from analyses of time-integrated B+ → DK+, B0 → DK∗0, B0 →...
Journal Article
9. Model-independent measurement of the CKM angle gamma using B-0 -> DK*0 decays with D -> K-S(0)pi(+)pi(-) and K-S(0)K+K-
Journal of High Energy Physics, ISSN 1029-8479, 06/2016, Issue 6
A binned Dalitz plot analysis of the decays B (0) -> DK*(0), with D -> K (S) (0) pi(+)pi(-) and D -> K (S) (0) K+K-, is performed to measure the observables...
B physics | CKM angle gamma | CP violation | Hadron-Hadron scattering (experiments) | Flavor physics | DALITZ ANALYSIS | PHYSICS, PARTICLES & FIELDS
Journal Article
10. Measurement of the CKM angle \gamma using B^\pm --> D K^\pm with D-->K_S\pi^+\pi^-, K_SK^+K^- decays
Journal of High Energy Physics, ISSN 1126-6708, 08/2014, Volume 2014, Issue 10, pp. 97 - 52
JHEP 10 (2014) 097 A binned Dalitz plot analysis of $B^\pm \to D K^\pm$ decays, with $D \to K_S \pi^+\pi^-$ and $D \to K_S K^+ K^-$, is performed to measure...
Physics - High Energy Physics - Experiment | Physics | High Energy Physics - Experiment | Particle Physics - Experiment | CP violation; CKM angle gamma; Hadron-Hadron Scattering; B physics; Flavor physics | CP violation | 12.15.Hh | LHCb - Abteilung Hofmann | Hadronic decays of bottom mesons | Flavor physics | Charge conjugation parity time reversal and other discrete symmetries | Settore FIS/04 - Fisica Nucleare e Subnucleare | Science & Technology | CKM angle gamma | Determination of Cabibbo-Kobayashi & Maskawa (CKM) matrix elements | LHCb | Física nuclear | 13.25.Hw | 13.25.Ft | Bottom mesons (|B|>0) | Hadron-Hadron Scattering | 14.40.Nd | Physical Sciences | B physics | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | Hadronic decays of charmed mesons | Physics, Particles & Fields | 11.30.Er
Journal Article
11. Measurement of $CP$ violation and constraints on the CKM angle $\gamma$ in $B^{\pm}\rightarrow D K^{\pm}$ with $D \rightarrow K_S^0 \pi^+ \pi^-$ decays
ISSN 0550-3213, 01/2014, Volume 888, pp. 169 - 193
28 pages, 7 figures; International audience; A model-dependent amplitude analysis of $B^{\pm} \rightarrow D K^{\pm}$ with $D \rightarrow K_S^0 \pi^+ \pi^-$...
Física nuclear | 13.25.Hw | Particle Physics - Experiment | Nuclear and High Energy Physics | 12.15.Hh | Bottom mesons (|B|>0) | LHCb - Abteilung Hofmann | 530 Physics | Hadronic decays of bottom mesons | Charge conjugation parity time reversal and other discrete symmetries | Nuclear and particle physics. Atomic energy. Radioactivity | High Energy Physics - Experiment | Physics | Settore FIS/04 - Fisica Nucleare e Subnucleare | Science & Technology | 14.40.Nd | Physical Sciences | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | Determination of Cabibbo-Kobayashi & Maskawa (CKM) matrix elements | Physics, Particles & Fields | Physics Institute | hep-ex | LHCb | 11.30.Er
Journal Article
12. Observation of direct CP violation in the measurement of the Cabibbo-Kobayashi-Maskawa angle gamma with $B^\pm\to D^{(*)}K^{(*)\pm}$ decays
Physical Review D, ISSN 1550-7998, 2013, Volume 87, Issue 5, p. 052015
We report the determination of the Cabibbo-Kobayashi-Maskawa CP-violating angle γ through the combination of various measurements involving B
Physics - High Energy Physics - Experiment | Física de partícules | Experiments | Particle physics | Physics | High Energy Physics - Experiment | Experiment-HEP,HEPEX
Journal Article
13. Quantum-correlated measurements of D → K S 0 π + π − π 0 decays and consequences for the determination of the CKM angle γ
Journal of High Energy Physics, ISSN 1126-6708, 01/2018, Volume 2018, Issue 1, pp. 1 - 22
Journal Article
14. Model-independent measurement of the CKM angle γ using B0 → DK*0 decays with D → KS0 π+π− and KS0 K+K−
Journal of High Energy Physics, ISSN 1126-6708, 06/2016, Volume 2016, Issue 6
Journal Article
15. Model-independent determination of the strong phase difference between D 0 and D¯ 0→ π+π−π+π− amplitudes
Journal of High Energy Physics, ISSN 1126-6708, 01/2018, Volume 2018, Issue 1
Journal Article
Journal of High Energy Physics, ISSN 1126-6708, 8/2018, Volume 2018, Issue 8, pp. 1 - 36
A binned Dalitz plot analysis of B ± → DK ± decays, with D → K S 0 π + π − and D → K S 0 K + K −, is used to perform a measurement of the CP-violating...
B physics | CKM angle gamma | CP violation | Hadron-Hadron scattering (experiments) | Flavor physics | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Luminosity | Uncertainty | Large Hadron Collider | Particle collisions | Decay
Journal Article
JHEP, ISSN 1029-8479, 12/2016
Journal Article
18. Model-independent determination of the CKM phase gamma using input from D0-D0bar mixing
JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 03/2015, Issue 3
Journal Article
JOURNAL OF HIGH ENERGY PHYSICS, ISSN 1029-8479, 08/2016, Volume 2016, Issue 8, pp. 1 - 30
A model-dependent amplitude analysis of the decay B0 → D(KS 0π+π−)K∗0 is performed using proton-proton collision data corresponding to an integrated luminosity...
B physics | CKM angle gamma | CP violation | Hadron-Hadron scattering (experiments) | Flavor physics | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory
Journal Article
I am trying to make that phrase (Δελτιο Υλης) a little more spectacular; it is the headline of a handout for my students. I could of course accomplish this with GIMP or a graphics program such as Inkscape.

But what I want is to make a somewhat artistic distortion of the letters with TikZ, to show my students the possibilities of LaTeX without using external programs. My question: is there a way to warp letters, and how?

Currently this is what I've done ... and I certainly don't have an artistic vein ...
\documentclass[b5paper,svgnames,10pt]{book}
\usepackage[utf8x]{inputenc}
\usepackage{tikz}
\usepackage{xcolor}
\begin{document}
\begin{tikzpicture}
\draw[step=0.5cm,gray,very thin] (-1cm,-1cm) grid (9cm,1.5cm);
\node[scale=5.,color=MidnightBlue] at (0,0) {$\Delta$};
\node[scale=5.,color=MidnightBlue,rotate=30] at (0.84cm,0.1cm) {$\varepsilon$};
\node[scale=5.,color=MidnightBlue,rotate=-15] at (1.3cm,0.0cm) {$\lambda$};
\node[scale=5.,color=MidnightBlue,rotate=+15] at (1.95cm,-0.2cm) {$\tau$};
\node[scale=5.,color=MidnightBlue] at (2.5cm,0) {$\iota$};
\node[scale=5.,color=MidnightBlue] at (2.6cm,0.4cm) {$'$};
\node[scale=5.,color=MidnightBlue] at (3.0cm,0) {$o$};
\node[scale=6.,color=MidnightBlue,rotate=-20] at (5.3cm,0.08cm) {$\Upsilon$};
\node[scale=5.,color=MidnightBlue] at (6.cm,-0.2cm) {$\lambda$};
\node[scale=5.,color=MidnightBlue] at (7.cm,-0.1cm) {$\eta$};
\node[scale=5.,color=MidnightBlue] at (8.cm,0) {$\varsigma$};
\end{tikzpicture}
\end{document}
Answer
$\frac{8x}{9}$
Work Step by Step
$\frac{x}{3}\div\frac{3}{8}$ Invert the divisor and multiply. $\frac{x}{3}\times\frac{8}{3}$ Multiply numerator by numerator and denominator by denominator. $\frac{8x}{9}$
Which metric is better to use in different scenarios while using decision trees?
Gini impurity and Information Gain Entropy are pretty much the same. And people do use the values interchangeably. Below are the formulae of both:
$\textit{Gini}: \mathit{Gini}(E) = 1 - \sum_{j=1}^{c}p_j^2$ $\textit{Entropy}: H(E) = -\sum_{j=1}^{c}p_j\log p_j$
Given a choice, I would use the Gini impurity, as it doesn't require me to compute logarithmic functions, which are computationally intensive. The closed form of its solution can also be found.
Which metric is better to use in different scenarios while using decision trees ?
The Gini impurity, for reasons stated above.
So, they are pretty much the same when it comes to CART analytics.
Generally, your performance will not change whether you use Gini impurity or Entropy.
Laura Elena Raileanu and Kilian Stoffel compared both in "Theoretical comparison between the gini index and information gain criteria". The most important remarks were:
It only matters in 2% of the cases whether you use gini impurity or entropy. Entropy might be a little slower to compute (because it makes use of the logarithm).
I was once told that both metrics exist because they emerged in different disciplines of science.
Gini is intended for continuous attributes; entropy is for attributes that occur in classes.
Gini aims to minimize misclassification; entropy is for exploratory analysis.
Entropy is a little slower to compute.
To add to the fact that they are more or less the same, consider also the fact that: $$ \begin{split} \forall \; 0 < u < 1,\; \log (1-u) &= -u - u^2/2 - u^3/3 \, - \, \cdots\\ \forall \; 0 < p < 1,\; \log (p) &= p-1 - (1-p)^2/2 - (1-p)^3/3 \, - \, \cdots\\ \end{split} $$ so that: $$ \forall \; 0 < p < 1,\; -p \log (p) = p(1-p) + p(1-p)^2/2 + p(1-p)^3/3 \, + \, \cdots $$ See the following plot of the two functions, normalised to have 1 as maximum value: the red curve is for Gini while the black one is for entropy.

In the end, as explained by @NIMISHAN, Gini is more suitable for minimising misclassification, as it is symmetric about 0.5, while entropy penalises small probabilities more heavily.
Entropy takes slightly more computation time than the Gini index because of the log calculation; maybe that's why the Gini index has become the default option for many ML algorithms. But, from the book Introduction to Data Mining by Tan et al.:
"Impurity measure are quite consistent with each other... Indeed, the strategy used to prune the tree has a greater impact on the final tree than the choice of impurity measure."
So, it looks like the selection of impurity measure has little effect on the performance of single decision tree algorithms.
Also. "Gini method works only when the target variable is a binary variable." - Learning Predictive Analytics with Python.
I've been doing optimizations on binary classification for the past week or more, and in every case entropy significantly outperforms Gini. This may be data-set specific, but it seems that trying both while tuning hyperparameters is a rational choice, rather than making assumptions about the model ahead of time.
You never know how data will react until you've run the statistics.
As per the parsimony principle, Gini outperforms entropy in terms of computational ease (the logarithm obviously involves more computation than plain multiplication at the processor/machine level).

But entropy definitely has an edge in some data cases involving high imbalance.

Since entropy uses the log of probabilities, multiplied by the probabilities of the events, what happens in the background is that the values of lower probabilities are scaled up.

If your data probability distribution is exponential or Laplace (as in deep learning cases where we need the probability distribution at a sharp point), entropy outperforms Gini.

To give an example, take two events, one with probability .01 and the other with probability .99.

With Gini, the sum of squared probabilities is .01^2 + .99^2 = .0001 + .9801, meaning the lower probability plays no role, as everything is governed by the majority probability.

With entropy, .01*log(.01) + .99*log(.99) = .01*(-2) + .99*(-.00436) = -.02 - .00432, so in this case the lower probabilities are clearly given better weight.
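A small NumPy sketch comparing the two measures on such skewed distributions (log base 2 here; the example above uses base 10, which only changes the scale):

```python
import numpy as np

def gini(p):
    p = np.asarray(p, dtype=float)
    return 1.0 - np.sum(p ** 2)

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # convention: 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

for p in ([0.5, 0.5], [0.9, 0.1], [0.99, 0.01]):
    print(p, round(gini(p), 4), round(entropy(p), 4))
# [0.5, 0.5]   -> gini 0.5,    entropy 1.0
# [0.9, 0.1]   -> gini 0.18,   entropy 0.469
# [0.99, 0.01] -> gini 0.0198, entropy 0.0808
```

Relative to its maximum, entropy stays noticeably larger as the distribution becomes skewed, which is the weighting effect described above.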
For the other sciences it's easy to point to the most important equations that ground the discipline. If I want to explain economics to a physicist, say, what are considered to be the most important equations underlying the subject which I should introduce and attempt to explain?
Instead of proposing specific equations, I will point to two concepts that lead to specific equations for specific theoretical set-ups:

A) Equilibrium

The most fundamental and the most misunderstood concept in economics. People look around and see constant movement; how more irrelevant can a concept be than "equilibrium"? So the job here is to convey that economics models the observation that things most of the time tend to "settle down", so by characterizing this "fixed point", it gives us an anchor to understand the movements outside and around this equilibrium (which may itself be changing, of course).
It is not the case that "quantity supplied equals quantity demanded" (here is a foundational equation)

$$Q_d = Q_s$$

but it is the case that supply tends to equal demand (of anything), for reasons that any economist should be able to convincingly present to anyone interested in listening (and deep down they all have to do with finite resources).
Also, by determining the conditions for equilibrium, we can understand, when we observe divergence, which conditions were violated.
B) Marginal optimization under constraints

In a static environment, it leads to the equating of marginal quantities/first derivatives of functions. Goods market: marginal revenue equals marginal cost. Inputs market: marginal revenue product equals marginal reward (rent, wage). Etc. (I left "utility maximization" out of the picture on purpose because here one would first have to present what this "utility index" is all about, and how crazy we are (not), by trying to model human "enjoyment" through the concept of utility.)
Perhaps you could cover it all under the umbrella "marginal benefit equal marginal cost" as other questions suggested:
$$MB = MC$$
Economists live in marginal optimization and most consider it self-evident. But if you try to explain it to an outsider, there is a respectable probability that he will object or remain unconvinced, instead usually proposing "average optimization" as "more realistic", since "people do not calculate derivatives" (we don't argue that they do, only that their thought processes can be modeled as if they were). So one has to get his story straight about marginal optimization, with convincing examples, and a discussion about "why not average optimization".
In an intertemporal setting, it leads to the discounted trade-off between "the present and the future", again "at the margin", starting with the "Euler equation in consumption", which in its discrete deterministic version reads
$$u'(c_{t})=\beta(1+r_{t+1})u'(c_{t+1})$$
...and one cannot avoid the theme of utility after all: $u'()$ is marginal utility from consumption, $0<\beta<1$ is a discount factor and $r_{t+1}$ is the interest rate (don't consult the Wikipedia article on the Euler equation in consumption; the concept behind it is much more generally applicable and foundational than the specific application that article discusses).
Interestingly, although dynamic economics are more technically demanding, I find this more intuitively appealing since people seem to understand way better "what you save today will determine what you will consume tomorrow", than "your wage rate will be the marginal revenue product of all labor employed".
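A numeric sketch of this Euler equation in a two-period deterministic setting with CRRA utility, $u'(c) = c^{-\gamma}$ (all numbers hypothetical):

```python
# Euler: u'(c_t) = beta*(1+r)*u'(c_{t+1})  =>  c_{t+1}/c_t = (beta*(1+r))**(1/gamma)
beta, r, gamma = 0.96, 0.05, 2.0
growth = (beta * (1 + r)) ** (1 / gamma)

w = 100.0                         # lifetime resources, hypothetical
c_t = w / (1 + growth / (1 + r))  # budget constraint: c_t + c_{t+1}/(1+r) = w
c_t1 = growth * c_t
print(c_t, c_t1)                  # ≈ 51.12 today, ≈ 51.32 tomorrow
```

With $\beta(1+r)$ barely above one, consumption is almost flat: exactly the "what you save today determines what you consume tomorrow" intuition.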
As has already been said, the MOST fundamental equation is surely: $$\text{MB}=\text{MC}$$
EDIT: This equation is fundamental in terms of the way economists think. As pointed out in the comments below, in terms of fundamental equations of economic models, the most fundamental equations describe equivalences between the uses and supplies of items (money, goods, etc.). These provide the tension of the marginal cost side of this equation.
I would add equations related to comparative statics:
Envelope theorem: $$V^\prime(y)=f_y(x,y)$$

"Delta" analysis, as described in Samuelson's Foundations of Economic Analysis: $$\Delta p\Delta y-\Delta w\Delta x\geq0$$ (this examines responses of price-taking producers, in terms of vectors of production $y$ and uses of inputs $x$, to their prices $p$ and $w$; essentially revealed preference for producers)

Revealed preference
If we can claim game theorists or mathematicians whose equations we use constantly:
Karush-Kuhn-Tucker conditions, especially complementary slackness. There's no single equation for linear programming, but I think econ has a claim to Kantorovich too.

Stationarity: $$\nabla f(x^*) = \sum_{i=1}^m \mu_i \nabla g_i(x^*) + \sum_{j=1}^l \lambda_j \nabla h_j(x^*)$$

Primal feasibility: $$g_i(x^*) \le 0, \mbox{ for all } i = 1, \ldots, m$$ $$h_j(x^*) = 0, \mbox{ for all } j = 1, \ldots, l$$

Dual feasibility: $$\mu_i \ge 0, \mbox{ for all } i = 1, \ldots, m$$

Complementary slackness: $$\mu_i g_i (x^*) = 0, \mbox{ for all } i = 1,\ldots,m.$$

Nash equilibrium: $$\theta_{i}^\star = \arg \max_{\theta_i} u_i(\theta_i ,\theta_{-i}^\star)$$

Revelation principle: which, to be fair, isn't so much an equation as a theorem...

Bellman equation: $$V(x)=\max_{c\in\Omega(x)} U(x,c)+\beta V(x^\prime)$$
Most of intro econ is intersecting lines. Specifically,
$$\dfrac{MU_x}{p_x}=\dfrac{MU_y}{p_y}.$$
Marginal utility per unit cost should always be equalized across goods.
Economics is about the logic of human behavior, how we make decisions in a world of scarcity. These equations describe constrained optimization under some usual assumptions like continuity, convex preferences, and no corner solutions. I'd also give prominence to consumer theory over producer. Most of undergrad producer theory can be understood with the same tools used in consumer theory.
I think one of the most important equations (at least within macroeconomics) is:
$$E\left[ m R \right] = 1$$
This equation has been used to derive many foundational results. This equation motivated the Hansen–Jagannathan bound. It is fundamental for asset pricing as well.
Also, here is something interesting I once saw from Tom Sargent. If you use the stochastic discount factor for a standard model, $m = \beta \frac{u'(c_{t+1})}{u'(c_t)}$, then depending on which piece of the equation you allow to be exogenous you can get some fundamental results of macro:
Permanent Income Hypothesis: let $\beta R = 1$; then we get $c_t = E [c_{t+1}]$.

Lucas Asset Pricing Model: let the process for consumption be given; then the price of an asset can be described by $R_t^{-1} = p_t = E \left[ \frac{u'(c_{t+1})}{u'(c_t)} \right]$.
I once heard Roger Myerson talk about why he thought Economics has, as a Social Science, been so successful in applying (or has so readily incorporated) mathematics. He suggested that perhaps it was due to some of the fundamental linearities within the world. Two examples would be the flow-balance constraints of scarce goods (commodity constraints) and no-arbitrage conditions. These are fundamentally linear constraints.
It's important to emphasize the importance of these because we can get a surprising amount out of the two. For example, a lot of people think that the law of demand is a consequence of assuming rationality (specifically, preferences that exhibit a diminishing marginal rate of substitution). A result due to Gary Becker shows that the law of demand (albeit just a slightly weaker version) can be derived from the budget constraint alone. (See Becker 1962, "Irrational Behavior and Economic Theory.") That is, this fundamental economic result can be derived from the reality of scarce resources alone, without assuming rationality.
The no-arbitrage condition is an application of the linear duality theorem (Farkas' lemma). A lot of economics and finance (asset pricing) can be done just by the assumption that in economic equilibrium there is no arbitrage.
Extra Notes:
Gary Becker made a lot of advances in the field by studying the way constraints affect human behavior. One famous quote, taken from his Nobel prize lecture, is the remark that "different constraints are decisive for different situations, but the most fundamental constraint is limited time." (Some discussion here.) More resources about his work in this regard can be found here and here.
Linear duality can be used to describe the no arbitrage condition. More generally, this theorem is typically proved with the Hyperplane Separation Theorem, which is mathematical tool that shows up a lot in economics textbooks.
Also, keep in mind that it's enough just to assume that in economic equilibrium, there is approximately no arbitrage.
Although I agree with Jyotirmoy Bhattacharya that the most interesting ideas in economics are not always best expressed through equations, I still want to mention the Slutsky or compensated law of demand from consumer theory
$$ (p' -p) \Big[ x\big(p', p' x(p,w)\big) - x\big(p,w\big)\Big]^T \leq 0,$$
where $p',p \in \mathbb{R}_{++}^n$ are any two price vectors, $w \in \mathbb{R}_+$ is any level of income, and $x(\cdot,\cdot) \in \mathbb{R}^n$ is the demand function.
The underlying relation is a couple of orders of certitude away from fundamental equations in other fields. Also, it does not ground the discipline, in the sense that it is not used all that often.
However, I tend to view it as fundamental because
It is an absolutely non-trivial consequence of three simple and fundamental assumptions in consumer theory, namely: that the demand function $x(\cdot,\cdot)$ is homogeneous of degree zero (no money illusion); Walras' law (people do not burn money); and the weak axiom of revealed preferences (if you chose A when B was available "today", you will not choose B "tomorrow" if A remains available). Therefore testing the inequality is equivalent to testing these three assumptions jointly. The three assumptions are used in the vast majority (maybe more than 90%?) of the models including consumers in economic theory. Their validity (at least as approximations) is therefore crucial to the validity of most models in economic theory. Although it is not always obvious how to relate the notions of prices, goods and income to observables, all the elements in the equation are observable in principle (as opposed to utility levels, for instance) and the validity of the inequality can therefore be tested empirically.
I don't think there are any economics equations with the same status as, say, Maxwell's equations in physics. In its place we have concepts like the equimarginal principle, competitive equilibrium or Nash equilibrium which are at the core of the "economist's approach". But I think that the real worth of economics is not even in these ideas themselves but in what we know about concrete problems in specific areas of applications: for example what we know about business cycles in macro. In this economics may be more like medicine than physics.
For me, one of the most important ones is the budget constraint. It might seem too obvious, but a lot of laypersons (though maybe not physicists) don't get it!
$p \cdot x \leq w$
A bit late to the game, but I'm surprised no one has named the equation to calculate OLS estimates: $$ \hat\beta=(X'X)^{-1}X'y $$
Whilst not as foundational as, for example, the Slutsky equation, the condition on the Lerner index that a profit maximising firm with price $p$, cost $c$, and price elasticity of demand $\eta$ has $$\frac{p-c}{p}=-\frac{1}{\eta}$$ is an important equation in industrial organisation.
This is not only an elegant formulation of the solution of the firm's problem, but it is also practically useful:
A firm that estimates its $\eta$ and knows its $c$ can use this formula to calculate the profit-maximising price. A regulator that observes $p$ and estimates $\eta$ can use the formula to calculate $c$, which is important in many forms of regulation. (See the sketch below.)
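For instance (hypothetical numbers), rearranging $(p-c)/p = -1/\eta$ gives $p = c/(1 + 1/\eta)$:

```python
c, eta = 10.0, -2.5        # marginal cost and price elasticity, hypothetical
p = c / (1.0 + 1.0 / eta)  # rearranged Lerner condition (p - c)/p = -1/eta
print(p)                   # ≈ 16.67, i.e. a 67% markup over marginal cost
```

The closer $\eta$ is to $-1$ (the more inelastic the demand at the optimum), the larger the implied markup.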
It has already been written, but the Euler equation in continuous time yields
$$\frac{\dot{C}}{C}=\sigma(r-\rho)$$
where $\sigma$ is the intertemporal elasticity of substitution, $r$ the interest rate and $\rho$ the discount rate (impatience level).
The foundation of intertemporal economics is the net present value equation: the net present value of a future income stream is the sum of the yearly incomes, each divided by an appropriate discount factor based on the prevailing interest rate $r$, taken to the $n$-th power, where $n$ is the number of years until that income arrives.
Well, for microeconomics there are several; however, they all follow the same pattern.

Here I'll attempt to teach an entire intermediate microeconomics course in one post. Most microeconomics problems follow this format:

Though this leaves out some minor details, if you do enough microeconomics problem sets, the problems end up looking the same after a while. This is what I have to share.
Production/Utility functions
There are three main types of utility/production functions you will be exposed to in an intermediate microeconomics course. They are:

Cobb-Douglas: $$f(x_1,x_2)=x_1^ax_2^b$$

Leontief / perfect complements: $$f(x_1,x_2)=\min\{x_1,x_2\}$$

Perfect substitutes: $$f(x_1,x_2)=x_1+x_2$$

Budget lines and Cost functions
In consumer theory, you have a budget line represented by the formula:
$$m=p_1x_1+p_2x_2$$
In producer theory we call it a cost function. $$C(x_1,x_2)=w_1x_1+w_2x_2$$
We either want to maximize utility given a budget constraint, or minimize costs holding the utility/output level constant. To do this we use another equation:
The Lagrangian Multiplier:
Though not a tool exclusive to economics per se, it is the primary tool of every intermediate microeconomics student.
$$\mathcal{L}=f(x_1,x_2)\pm\lambda(H-g(x_1,x_2))$$
where $H-g(x_1,x_2)$ is either a budget line/cost function or a utility/production function, set equal to zero.
We use this for calculating utility/profit-maximizing consumption bundles/inputs, or for minimizing costs holding output/utility constant; a worked Cobb-Douglas example follows.
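For the Cobb-Douglas case, the first-order conditions of this Lagrangian give closed-form demands; a sketch with hypothetical prices and income:

```python
def cobb_douglas_demand(a, b, p1, p2, m):
    # FOCs of L = x1**a * x2**b + lam*(m - p1*x1 - p2*x2) imply spending
    # the income share a/(a+b) on good 1 and b/(a+b) on good 2.
    x1 = (a / (a + b)) * m / p1
    x2 = (b / (a + b)) * m / p2
    return x1, x2

print(cobb_douglas_demand(a=1, b=1, p1=2.0, p2=4.0, m=100.0))  # (25.0, 12.5)
```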
And that's a wrap!*

*Though there is more to say on Marshallian and Hicksian demands, I'll leave that for others to fill in.
I noticed that the generator $b$ of the second stable stem is the square of the generator $a$ of the first stable stem, in the sense that if you take two copies of $a$ and smash them together you get $b$. I'm wondering if there are any (many) other examples of this. Which elements in the stable homotopy groups of spheres are squares in the above sense?
Appendix 3 of Ravenel's Green Book, http://www.math.rochester.edu/u/faculty/doug/mu.html#repub, has a chart of stable homotopy groups including much of the multiplicative structure. Figure A3.1 depicts some of this structure visually, while Table A3.3 lists the elements out by name and degree.
The next example of a square after $\eta^2$ is the element $\nu^2$ in the sixth stable stem, which is the square of the Hopf map $\nu$ in the third stable stem.
Though stable homotopy is not multiplicatively finitely generated, you can consider Toda brackets, which are a form of higher multiplication in homotopy analogous to Massey products in cohomology, and it is known that the entire stable homotopy groups of spheres are generated by Toda brackets on the Hopf elements $2$, $\eta$, $\nu$, and $\sigma$.
See the "ring structure" section for the stable homotopy groups of spheres: http://en.wikipedia.org/wiki/Homotopy_groups_of_spheres
This ring structure is well understood in the range where the homotopy groups of spheres are known. There are further operations called Toda brackets. The story goes on.
Side of an equilateral triangle: \(a\)
Angle of an equilateral triangle: \(\alpha = 60^\circ\)
Perimeter: \(P\)
Altitude: \(h\)
Radius of the circumscribed circle: \(R\)
Radius of the inscribed circle: \(r\)
Area: \(S\)
An equilateral triangle is a triangle in which all three sides are equal. All angles in an equilateral triangle are equal to \(60^\circ\). In an equilateral triangle, the altitude, angle bisector, median and perpendicular bisector drawn from any vertex coincide.

Relationship between the altitude (median, angle bisector or perpendicular bisector) and the side
\(h = {\large\frac{{a\sqrt 3 }}{2}\normalsize}\)

Radius of the circumscribed circle (circumradius) of an equilateral triangle
\(R = {\large\frac{{2h}}{3}\normalsize} = {\large\frac{{a\sqrt 3 }}{3}\normalsize}\)

Radius of the inscribed circle (inradius) of an equilateral triangle
\(r = {\large\frac{{h}}{3}\normalsize} = {\large\frac{{a\sqrt 3 }}{6}\normalsize}\)

Relation between the circumradius and inradius in an equilateral triangle
\(R = 2r\)

Perimeter of an equilateral triangle
\(P = 3a = 6\sqrt 3 r = 3\sqrt 3 R\)

Area of an equilateral triangle
\(S = {\large\frac{{ah}}{2}\normalsize} = {\large\frac{{{a^2}\sqrt 3 }}{4}\normalsize} = {\large\frac{{3{R^2}\sqrt 3 }}{4}\normalsize} = 3\sqrt 3 {r^2}\)
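A small script evaluating these formulas from the side \(a\) (and checking \(R = 2r\)), as a sketch:

```python
import math

def equilateral(a):
    h = a * math.sqrt(3) / 2       # altitude (= median = angle bisector)
    R = a * math.sqrt(3) / 3       # circumradius, R = 2h/3
    r = a * math.sqrt(3) / 6       # inradius,    r = h/3, so R == 2*r
    P = 3 * a                      # perimeter
    S = a * a * math.sqrt(3) / 4   # area
    return h, R, r, P, S

print(equilateral(2.0))  # ≈ (1.732, 1.155, 0.577, 6.0, 1.732)
```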
Side of a rhombus: \(a\)
Diagonals of a rhombus: \({d_1}\), \({d_2}\)
Consecutive angles: \(\alpha\), \(\beta\)
Altitude: \(h\)
Radius of the inscribed circle: \(r\)
Perimeter: \(P\)
Area: \(S\)
A rhombus is a parallelogram in which all four sides are equal. The sum of the angles adjacent to any side of a rhombus is \(180^\circ:\) \(\alpha + \beta = 180^\circ\). The diagonals of a rhombus are perpendicular and bisect each other. If a rhombus has one right angle, it is a square.

Relation between the sides and the diagonals of a rhombus
\(d_1^2 + d_2^2 = 4{a^2}\)

Altitude of a rhombus
\(h = a\sin \alpha = {\large\frac{{{d_1}{d_2}}}{{2a}}\normalsize}\)

Radius of the inscribed circle
\(r = {\large\frac{h}{2}\normalsize} = {\large\frac{{a\sin \alpha }}{2}\normalsize} = {\large\frac{{{d_1}{d_2}}}{{4a}}\normalsize} = {\large\frac{{{d_1}{d_2}}}{{2\sqrt {d_1^2 + d_2^2} }}\normalsize}\)

Perimeter of a rhombus
\(P = 4a\)

Area of a rhombus
\(S = ah = {a^2}\sin \alpha = {\large\frac{{{d_1}{d_2}}}{2}\normalsize}\)
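A sketch evaluating these formulas from the side \(a\) and one angle \(\alpha\); the diagonal expressions \(d_1 = 2a\sin(\alpha/2)\), \(d_2 = 2a\cos(\alpha/2)\) are an added assumption here (standard trigonometry), consistent with \(d_1^2 + d_2^2 = 4a^2\):

```python
import math

def rhombus(a, alpha):
    h = a * math.sin(alpha)            # altitude
    r = h / 2                          # inradius
    d1 = 2 * a * math.sin(alpha / 2)   # shorter diagonal when alpha < pi/2
    d2 = 2 * a * math.cos(alpha / 2)
    assert abs(d1**2 + d2**2 - 4 * a**2) < 1e-9   # relation above
    return h, r, d1, d2, 4 * a, a * h  # ..., perimeter, area

print(rhombus(2.0, math.radians(60)))  # ≈ (1.732, 0.866, 2.0, 3.464, 8.0, 3.464)
```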
Let $A$ be a commutative ring and $I$ a (finitely generated) ideal in $A$. We denote by $\hat{A}$ the $I$-adic completion of $A$, i.e. $\hat{A} = \varprojlim(A/I \leftarrow A/I^2 \leftarrow \ldots)$.
When do we have $I \otimes_A \hat{A} \cong I\hat{A}$?
It is well known that this is true when $A$ is noetherian (e.g. see Atiyah-MacDonald), but in general $\hat{A}$ is not even flat.
However, when $I$ is finitely generated then we have at least isomorphisms $I^n\hat{A} \cong \widehat{I^nA}$, $A/I^nA \cong \hat{A}/I^n\hat{A}$ and $\hat{A}$ is complete.
In the special case where $I=(f)$ and $f$ is a non-zerodivisor in $A$, we can tensorize
$0 \rightarrow A \stackrel{\cdot f}\rightarrow A \rightarrow A/fA \rightarrow 0$
by $\hat{A}$ to see that $(f) \otimes_A \hat{A} \cong f\hat{A}$.
Thus, I hope that for $I$ finitely generated by non-zerodivisors (or at least not containing any zerodivisors) I could obtain a similar result. But I am not sure if this is true. If I try to do a similar trick as above, I get
$0 \rightarrow IA \rightarrow A \rightarrow A/IA \rightarrow 0$
and obtain
$IA \otimes_A \hat{A} \rightarrow \hat{A} \rightarrow \hat{A}/I\hat{A} \rightarrow 0$
using $(A/IA) \otimes_A \hat{A} \cong A/IA \cong \hat{A}/I\hat{A}$. However, it is not clear that this is exact on the left.
Edit: I think I can show by an easy snake lemma argument, that $I \otimes_A \hat{A} \rightarrow I\hat{A}$ is surjective in the setting above, but I still do not see what I need for it to be injective. The problem is that even non-zerodivisors from $IA$ can become zero-divisors in $\hat{A}$, but maybe it is always possible to find a set of generators of $I$ which does not become zero-divisors in $\hat{A}$ and I then could use that? At least in the case $I=(f)$ I could prove that $f$ does not become a zero-divisor.
Edit2: Here my precise question
If $I$ is finitely generated and does not contain any zerodivisors in $A$, is it true that $I \otimes_A \hat{A} \rightarrow I \hat{A}$ is an isomorphism?
If not, what properties can we ask for $I$ and $A$ to have which are weaker than "$A$ noetherian or $I$ principal", such that $I \otimes_A \hat{A} \rightarrow I \hat{A}$ is an isomorphism (e.g. $I$ flat over $A$, ...)?
We consider a connection-level model of Internet congestion control, introduced by Massouli\'{e} and Roberts [Telecommunication Systems 15 (2000) 185--201], that represents the randomly varying number of flows present in a network. Here, bandwidth is shared fairly among elastic document transfers according to a weighted $\alpha$-fair bandwidth sharing policy introduced by Mo and Walrand [IEEE/ACM Transactions on Networking 8 (2000) 556--567] [$\alpha\in (0,\infty)$]. Assuming Poisson arrivals and exponentially distributed document sizes, we focus on the heavy traffic regime in which the average load placed on each resource is approximately equal to its capacity. A fluid model (or functional law of large numbers approximation) for this stochastic model was derived and analyzed in a prior work [Ann. Appl. Probab. 14 (2004) 1055--1083] by two of the authors. Here, we use the long-time behavior of the solutions of the fluid model established in that paper to derive a property called multiplicative state space collapse, which, loosely speaking, shows that in diffusion scale, the flow count process for the stochastic model can be approximately recovered as a continuous lifting of the workload process. ; Comment: Published in at http://dx.doi.org/10.1214/08-AAP591 the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
We consider the stepping stone model on the torus of side $L$ in $\mathbb{Z}^2$ in the limit $L\to\infty$, and study the time it takes two lineages tracing backward in time to coalesce. Our work fills a gap between the finite range migration case of [Ann. Appl. Probab. 15 (2005) 671--699] and the long range case of [Genetics 172 (2006) 701--708], where the migration range is a positive fraction of $L$. We obtain limit theorems for the intermediate case, and verify a conjecture in [Probability Models for DNA Sequence Evolution (2008) Springer] that the model is homogeneously mixing if and only if the migration range is of larger order than $(\log L)^{1/2}$. ; Comment: Published in at http://dx.doi.org/10.1214/09-AAP639 the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
The generalized Fleming-Viot processes were defined in 1999 by Donnelly and Kurtz using a particle model and by Bertoin and Le Gall in 2003 using stochastic flows of bridges. In both methods, the key argument used to characterize these processes is the duality between these processes and exchangeable coalescents. A larger class of coalescent processes, called distinguished coalescents, was set up recently to incorporate an immigration phenomenon in the underlying population. The purpose of this article is to define and characterize a class of probability-measure valued processes called the generalized Fleming-Viot processes with immigration. We consider some stochastic flows of partitions of Z_{+}, in the same spirit as Bertoin and Le Gall's flows, replacing, roughly speaking, composition of bridges by coagulation of partitions. Identifying at any time a population with the integers $\mathbb{N}:=\{1,2,\ldots\}$, the formalism of partitions is effective in the past as well as in the future, especially when there are several simultaneous births. We show how a stochastic population may be directly embedded in the dual flow. An extra individual 0 will be viewed as an external generic immigrant ancestor, with a distinguished type, whose progeny represents the immigrants. The "modified" lookdown construction of Donnelly-Kurtz is recovered when no simultaneous multiple births nor immigration are taken into account. In the last part of the paper we give a sufficient criterion for the initial types extinction. ; Comment: typos and corrections in references
Branching processes and Fleming-Viot processes are two main models in stochastic population theory. Incorporating an immigration in both models, we generalize the results of Shiga (1990) and Birkner et al. (2005) which respectively connect the Feller diffusion with the classical Fleming-Viot process and the alpha-stable continuous state branching process with the Beta(2-alpha, alpha)-generalized Fleming-Viot process. In a recent work, a new class of probability-measure valued processes, called M-generalized Fleming-Viot processes with immigration, has been set up in duality with the so-called M-coalescents. The purpose of this article is to investigate the links between this new class of processes and the continuous-state branching processes with immigration. In the specific case of the $\alpha$-stable branching process conditioned to be never extinct, we get that its genealogy is given, up to a random time change, by a Beta(2-alpha, alpha-1)-coalescent. ; Comment: 21 pages
We introduce a notion of intervention for stochastic differential equations and a corresponding causal interpretation. For the case of the Ornstein-Uhlenbeck SDE, we show that the SDE resulting from a simple type of intervention again is an Ornstein-Uhlenbeck SDE. We discuss criteria for the existence of a stationary distribution for the solution to the intervened SDE. We illustrate the effect of interventions by calculating the mean and variance in the stationary distribution of an intervened process in a particularly simple case. ; Comment: Extended version of article to be presented at the 18th EYSM
Let $X_1, X_2,\ldots$ be random elements of the Skorokhod space $D(\mathbb{R})$ and $\xi_1, \xi_2, \ldots$ positive random variables such that the pairs $(X_1,\xi_1), (X_2,\xi_2),\ldots$ are independent and identically distributed. We call the random process $(Y(t))_{t \in \mathbb{R}}$ defined by $Y(t):=\sum_{k \geq 0}X_{k+1}(t-\xi_1-\ldots-\xi_k)1_{\{\xi_1+\ldots+\xi_k\leq t\}}$, $t\in\mathbb{R}$ random process with immigration at the epochs of a renewal process. Assuming that $X_k$ and $\xi_k$ are independent and that the distribution of $\xi_1$ is nonlattice and has finite mean we investigate weak convergence of $(Y(t))_{t\in\mathbb{R}}$ as $t\to\infty$ in $D(\mathbb{R})$ endowed with the $J_1$-topology. The limits are stationary processes with immigration. ; Comment: 20 pages, accepted for publication in Bernoulli
Let $(X_1, \xi_1), (X_2,\xi_2),\ldots$ be i.i.d.~copies of a pair $(X,\xi)$ where $X$ is a random process with paths in the Skorokhod space $D[0,\infty)$ and $\xi$ is a positive random variable. Define $S_k := \xi_1+\ldots+\xi_k$, $k \in \mathbb{N}_0$ and $Y(t) := \sum_{k\geq 0} X_{k+1}(t-S_k) 1_{\{S_k \leq t\}}$, $t\geq 0$. We call the process $(Y(t))_{t \geq 0}$ random process with immigration at the epochs of a renewal process. We investigate weak convergence of the finite-dimensional distributions of $(Y(ut))_{u>0}$ as $t\to\infty$. Under the assumptions that the covariance function of $X$ is regularly varying in $(0,\infty)\times (0,\infty)$ in a uniform way, the class of limiting processes is rather rich and includes Gaussian processes with explicitly given covariance functions, fractionally integrated stable L\'evy motions and their sums when the law of $\xi$ belongs to the domain of attraction of a stable law with finite mean, and conditionally Gaussian processes with explicitly given (conditional) covariance functions, fractionally integrated inverse stable subordinators and their sums when the law of $\xi$ belongs to the domain of attraction of a stable law with infinite mean. ; Comment: 46 pages, accepted for publication in Bernoulli
An environmental random effect on a deterministic population model is introduced for a resource (e.g., a fish stock). It is assumed that the harvest activity is concentrated at a non-predetermined sequence of instants, at which the abundance reaches a certain predetermined level and then falls abruptly by a constant capture quota (pulse harvesting). The abundance is thus modeled by a stochastic impulsive differential equation, incorporating a standard Brownian motion in the per capita rate of growth. With this random effect, the pulse times are images of a random variable; more precisely, they are "stopping times" of the stochastic process. The proof of the finite expectation of the next access time, i.e., the feasibility of the regulation, is the main result.
We study the number of white balls in a classical P\'olya urn model with the additional feature that, at random times, a black ball is added to the urn. The number of draws between these random times are i.i.d. and, under certain moment conditions on the inter-arrival distribution, we characterize the limiting distribution of the (properly scaled) number of white balls as the number of draws goes to infinity. The possible limiting distributions obtained in this way vary considerably depending on the inter-arrival distribution and are difficult to describe explicitly. However, we show that the limits are fixed points of certain probabilistic distributional transformations, and this fact provides a proof of convergence and leads to properties of the limits. The model can alternatively be viewed as a preferential attachment random graph model where added vertices initially have a random number of edges, and from this perspective, our results describe the limit of the degree of a fixed vertex. ; Comment: 23 pages
A superprocess with dependent spatial motion and interactive immigration is constructed as the pathwise unique solution of a stochastic integral equation carried by a stochastic flow and driven by Poisson processes of one-dimensional excursions.
We study certain consistent families $(F_\lambda)_{\lambda\ge 0}$ of Galton-Watson forests with lifetimes as edge lengths and/or immigrants as progenitors of the trees in $F_\lambda$. Specifically, consistency here refers to the property that for each $\mu\le\lambda$, the forest $F_\mu$ has the same distribution as the subforest of $F_\lambda$ spanned by the black leaves in a Bernoulli leaf colouring, where each leaf of $F_\lambda$ is coloured in black independently with probability $\mu/\lambda$. The case of exponentially distributed lifetimes and no immigration was studied by Duquesne and Winkel and related to the genealogy of Markovian continuous-state branching processes. We characterise here such families in the framework of arbitrary lifetime distributions and immigration according to a renewal process, related to Sagitov's (non-Markovian) generalisation of continuous-state branching renewal processes, and similar processes with immigration. ; Comment: 31 pages, 2 figures
In this paper, by the singular-perturbation technique, we investigate the heavy-traffic behavior of a priority polling system consisting of three M/M/1 queues with threshold policy. It turns out that the scaled queue-length of the critically loaded queue is exponentially distributed, independent of that of the stable queues. In addition, the queue lengths of stable queues possess the same distributions as a priority polling system with N-policy vacation. Based on this fact, we provide the exact tail asymptotics of the vacation polling system to approximate the tail distribution of the queue lengths of the stable queues, which shows that it has the same prefactors and decay rates as the classical M/M/1 preemptive priority queues. Finally, a stochastic simulation is taken to test the results aforementioned.
This is a survey of recent progress in the study of branching processes with immigration, generalized Ornstein-Uhlenbeck processes and affine Markov processes. We mainly focus on the applications of skew convolution semigroups and the connections between those processes.
Numerical Approximation of Gradient and Gradient Checking
How to make sure the implementation of Backpropagation is correct?
Solution: Implement gradient checking
Representation: Let f($\theta$) be a function of $\theta$ for which we need to calculate the derivative f'($\theta$).
Without using calculus: Let $\epsilon$ be a very small number = $10^{-7}$
Approx. Slope of f($\theta$) at $\theta$ = $\frac{f(\theta + \epsilon) - f(\theta - \epsilon)}{2\epsilon}$
Gradient Checking
Take all the parameters of the cost function J(): $W^1, b^1, W^2, b^2,\ldots,W^L,b^L$ and reshape them into a single vector $\theta$
J($W^1, b^1, W^2, b^2,…..W^L,b^L$ ) = J($\theta$)
Take the parameters of the gradient $dW^1, db^1,\ldots,dW^L,db^L$ and reshape them into a single vector $d\theta$
Question: Is $d\theta$ really the gradient/slope of J($\theta$)?
Implementation:
for each i:
{
$d\theta_{approx}[i] = \frac{J(\theta_1,\theta_2,…,\theta_i+\epsilon,….\theta_L)-J(\theta_1,\theta_2,…,\theta_i-\epsilon,….\theta_L)}{2\epsilon}$
}
After completing the loop, check if $d\theta$ ~= $d\theta_{approx}$
How: Compute the normalized Euclidean distance
distance = $\frac{\lVert d\theta_{approx} - d\theta \rVert_2}{\lVert d\theta_{approx}\rVert_2 +\lVert d\theta\rVert_2}$
The distance should be very low: if distance ~= $10^{-7}$, the implementation is correct; if it is around $10^{-3}$, the implementation needs to be checked for bugs.
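A minimal NumPy sketch of this check (the function and variable names here are illustrative, not from the course; J is any callable returning the scalar cost for a flattened parameter vector):

import numpy as np

def grad_check(J, theta, dtheta, eps=1e-7):
    # theta: flattened parameters; dtheta: gradient from backpropagation
    dtheta_approx = np.zeros_like(theta)
    for i in range(theta.size):
        plus, minus = theta.copy(), theta.copy()
        plus[i] += eps
        minus[i] -= eps
        dtheta_approx[i] = (J(plus) - J(minus)) / (2 * eps)
    # normalized Euclidean distance between the two gradients
    return (np.linalg.norm(dtheta_approx - dtheta)
            / (np.linalg.norm(dtheta_approx) + np.linalg.norm(dtheta)))

# sanity check on J(theta) = sum(theta^2), whose gradient is 2*theta
theta = np.array([1.0, -2.0, 3.0])
print(grad_check(lambda th: np.sum(th ** 2), theta, 2 * theta))  # ~1e-10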
Implementation Tips:
Don't use gradient checking in training; use it only to debug.
If the algorithm fails grad check, look at the components to try and identify the bug.
Remember the regularization terms, if regularization is used.
Doesn't work with Dropout.
Run at random initialization, and perhaps again after some training.
Source material from Andrew NG's awesome course on Coursera. The material in the video has been written in a text form so that anyone who wishes to revise a certain topic can go through this without going through the entire video lectures. |
The main goal of the
on-line characterization is the determination of the thicknesses of already deposited layers. On-line characterization functions of OptiReOpt DLL can also be used for monitoring deposition processes by generating a command for the termination of the current layer deposition.
The on-line characterization of thicknesses of already deposited layers requires
in situ measurements of spectral transmittance and/or spectral reflectance just after the end of deposition of each coating layer. If OptiReOpt DLL is used for the monitoring purposes, then in situ spectral scans of coating transmittance and/or reflectance should be done during layer depositions in small time intervals that can provide a sufficient accuracy of layer thickness monitoring.
\(\hat{T}^j_1,...,\hat{T}^j_L\)
in situ spectral monitoring data collected after the deposition of j-th layer (\(L\) – total number of data points in one spectral scan)
There are two computational modes of the OptiReOpt characterization routines. These modes are referred to as
SEQUENTIAL and TRIANGLE. In the case of SEQUENTIAL mode only thickness of the last deposited layer is determined by the characterization routine. It is advisable to use SEQUENTIAL mode only in the case of very stable deposition processes and high-accuracy in situ measurement data.
Discrepancy function (SEQUENTIAL algorithm):
\[DF(d_i)=\left(\frac 1L \sum_{j=1}^L\left[ \frac{\hat{T}(\hat{d}_1,...,\hat{d}_{i-1}, d_i;\lambda_j)-\hat{T}^{(i)}(\lambda_j)}{\Delta T_j}\right]^2\right)^{1/2},\]
where \(\hat{d}_1,...,\hat{d}_{i-1}\) are the thicknesses of the previously deposited layers that were determined at the previous steps of the algorithm. The discrepancy functions are minimized with respect to only one variable:
\[ DF(d_i)\rightarrow \min \]
Because of this fact, this algorithm usually works faster than the triangular algorithm. This is of course an attractive feature of the sequential algorithm. At the same time, this algorithm may suffer from
the effect of accumulation of thickness errors.
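As an illustration only (this is not OptiReOpt's actual API), the sequential step reduces to a one-dimensional minimization; here T_model is a hypothetical stand-in for the coating transmittance model and the thickness bounds are arbitrary:

import numpy as np
from scipy.optimize import minimize_scalar

def sequential_thickness(T_model, T_meas, d_prev, wavelengths, dT):
    # T_model(d, lam): model transmittance for thickness vector d (assumed)
    # T_meas: measured scan after the current layer; dT: tolerances
    def df(d_i):
        d = np.append(d_prev, d_i)
        resid = [(T_model(d, lam) - t) / e
                 for lam, t, e in zip(wavelengths, T_meas, dT)]
        return np.sqrt(np.mean(np.square(resid)))
    res = minimize_scalar(df, bounds=(0.0, 1000.0), method='bounded')
    return res.x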
In the case of TRIANGLE mode thicknesses of all already deposited layers are determined at each characterization step. Operational time of OptiReOpt DLL in SEQUENTIAL mode is faster than in TRIANGLE mode. At the same time, in general, TRIANGLE mode provides more reliable thicknesses of all already deposited layers.
Discrepancy function (TRIANGULAR algorithm):
\[DF(i)=\left(\frac 1i \sum_{k=1}^i \frac 1L\sum_{j=1}^L\left[ \frac{\hat{T}(d_1,..., d_i;\lambda_j)-\hat{T}^{(k)}(\lambda_j)}{\Delta T_j}\right]^2\right)^{1/2},\]
The discrepancy function is minimized with respect to the thicknesses of all \(i\) already deposited layers:
\[ DF(d_1,...,d_i)\rightarrow \min \]
Example: a 15-layer quarter-wave mirror was deposited with imposed errors in layer thicknesses. Thickness errors of +7%, -7%, -5% and +5% were introduced in layers 3, 8, 14 and 15 (empty bars on the right panel).
The sequential algorithm provides error estimates that do not correlate with the imposed errors (grey bars).
The triangular algorithm determines the thickness errors with high accuracy (black bars).
There are other types of algorithms, for example, hybrid algorithms where at each step errors in several previous layers are determined.
The Automation tool of OptiLayer can help you find the algorithm appropriate for your production environment. You can easily create your own program which calls OptiRE with various settings/models and develop the most appropriate optical coating model. This model can later be used in on-line characterization with OptiReOpt.
References:
T.V. Amotchkina, M.K. Trubetskov, V. Pervak, S.Schlichting, H. Ehlers, D. Ristau, and A.V. Tikhonravov. "Comparison of algorithms used for optical characterization of multilayer optical coatings," Appl. Opt. 50, 3389-3395 (2011).
A. V. Tikhonravov, M. K. Trubetskov, "Online characterization and reoptimization of optical coatings", Proc. SPIE. 5250, Advances in Optical Thin Films 406 (2004)
S. Wilbrandt, O. Stenzel, N. Kaiser, M.K. Trubetskov, and A.V. Tikhonravov, "In situ optical characterization and reengineering of interference coatings," Appl. Opt. 47, C49-C54 (2008).
S. Wilbrandt, O. Stenzel, N. Kaiser, M. K. Trubetskov, and A. V. Tikhonravov, "On-line Re-engineering of Interference Coatings," in Optical Interference Coatings, OSA Technical Digest (CD) (Optical Society of America, 2007), paper WC10.
J. Oliver, A. Tikhonravov, M. Trubetskov, I. Kochikov, and D. Smith, "Real-Time characterization and optimization of e-beam evaporated optical coatings," in Optical Interference Coatings, OSA Technical Digest Series (Optical Society of America, 2001), paper ME8. |
Augmented Dickey-Fuller Test
Performs the Augmented Dickey-Fuller test for the null hypothesis of a unit root of a univariate time series x (equivalently, x is a non-stationary time series).
Usage
adf.test(x, nlag = NULL, output = TRUE)
Arguments:
x: a numeric vector or univariate time series.
nlag: the lag order used to calculate the test statistic; see Details for the default.
output: a logical value indicating whether to print the test results in the R console. The default is TRUE.
Details
The Augmented Dickey-Fuller test incorporates three types of linear regression models. The first type (type1) is a linear model with no drift and no linear trend with respect to time: $$dx[t] = \rho*x[t-1] + \beta[1]*dx[t-1] + ... + \beta[nlag - 1]*dx[t - nlag + 1] + e[t],$$ where $d$ is the operator of first-order difference, i.e., $dx[t] = x[t] - x[t-1]$, and $e[t]$ is an error term.
The second type (type2) is a linear model with drift but no linear trend: $$dx[t] = \mu + \rho*x[t-1] + \beta[1]*dx[t-1] + ... + \beta[nlag - 1]*dx[t - nlag + 1] + e[t].$$
The third type (type3) is a linear model with both drift and linear trend: $$dx[t] = \mu + \beta*t + \rho*x[t-1] + \beta[1]*dx[t-1] + ... + \beta[nlag - 1]*dx[t - nlag + 1] + e[t].$$
We use the default nlag = floor(4*(length(x)/100)^(2/9)) to calculate the test statistic. The Augmented Dickey-Fuller test statistic is defined as $$ADF = \rho.hat/S.E(\rho.hat),$$ where $\rho.hat$ is the coefficient estimate and $S.E(\rho.hat)$ is its corresponding estimated standard error for each type of linear model. The p.value is calculated by interpolating the test statistics from the corresponding critical value tables (see Table 10.A.2 in Fuller (1996)) for each type of linear model with given sample size $n$ = length(x). The Dickey-Fuller test is a special case of the Augmented Dickey-Fuller test when nlag = 2.
Value: A list containing the following components:
type1: a matrix with three columns: lag, ADF, p.value, where ADF is the Augmented Dickey-Fuller test statistic.
type2: same as above for the second type of linear model.
type3: same as above for the third type of linear model.
Note
Missing values are removed.
References
Fuller, W. A. (1996). Introduction to Statistical Time Series, second ed., New York: John Wiley and Sons.
See Also Aliases adf.test Examples
# ADF test for AR(1) process
x <- arima.sim(list(order = c(1,0,0), ar = 0.2), n = 100)
adf.test(x)
# ADF test for co2 data
adf.test(co2)
Documentation reproduced from package aTSA, version 3.1.2, License: GPL-2 | GPL-3 |
For baseband real signals the frequency spectrum is conjugate symmetric; i.e., $$X(f) = X(-f)^*$$ This translates to an even magnitude spectrum; i.e., $$|X(f)| = |X(-f)|$$ and an odd phase spectrum.
Because of the symmetry it's also called the double side band spectrum in communications terminology. For a real baseband signal, a single side band is also sufficient to represent it. There are different approaches to obtain the single side band, as either the upper side band or the lower side band.
One method of obtaining the upper side band is to convert the signal into the
analytic signal which is $x_+(t) = x(t) + j \hat{x}(t)$ where $\hat{x}(t)$ is the Hilbert transform of $x(t)$. The resulting spectrum is:$$ X_+(f) = \begin{cases}2 X(f) ~~, &\text{ for} ~~ f > 0 \\0 ~~, &\text{ for} ~~ f < 0 \\\end{cases}$$
Therefore for the given signal $x(t) = \cos(2\pi 20000 t)$ whose spectrum is $$X(f) = 0.5 \delta(f-20000) + 0.5 \delta(f+20000)$$ the upper side band will be given by $$X_+(f) = \delta(f-20000)$$
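A quick numerical illustration with SciPy (a sketch; the sampling parameters are chosen arbitrarily):

import numpy as np
from scipy.signal import hilbert

fs, fc = 1_000_000, 20_000             # sample rate and tone frequency in Hz
t = np.arange(0, 0.01, 1 / fs)
x = np.cos(2 * np.pi * fc * t)

x_a = hilbert(x)                        # analytic signal x + j*Hilbert{x}
X = np.fft.fft(x) / len(x)              # peaks of ~0.5 at -20 kHz and +20 kHz
X_a = np.fft.fft(x_a) / len(x_a)        # single peak of ~1 at +20 kHz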
Note that you can adjust the amplitude of single side band if you wish to do so. |
Let $(X,F,\mu)$ be a probability space. Let $f:X\rightarrow [0,\infty)$ be a random variable (so measurable). The integral $$E=\int_X f \, d\mu$$ is the expectation value of $f$ and $$V=\int_X (f-E)^2 \, d\mu$$ is the variance of $f$.
Show that, if the variance of $f$ is small, $f$ deviates from its expectation value with very small probability. Explicitly, show that the probability that $f$ deviates by $\varepsilon$ from $E$ ($\mu(\{x\in X:|f(x)-E|>\varepsilon\})$) is less than or equal to $\frac{V}{\varepsilon^2}$.
Consider the "random" series $1\pm \frac{1}{2}\pm\frac{1}{4}\pm\frac{1}{8}\pm\cdots$ with the assignment of a $+$ or $-$ in the $n$th term decided by the toss of a coin. Compute its expectation value and variance. (Hint: first show that $2t-1=\sum\limits_{k=1}^\infty \frac{R_k(t)}{2^k}$)
Here's what I have so far:
By Chebyshev's inequality, $\mu(\{x\in X:|f(x)-E|>\varepsilon\})\leq\frac{1}{\varepsilon}\int_X|f(x)-E| \, d\mu$. Edit: then $\mu(\{x\in X:(f(x)-E)^2>\varepsilon^2\})\leq\frac{1}{\varepsilon^2}\int_X(f(x)-E)^2 \, d\mu$.
I'm not sure how to prove the statement in the hint, but given that, we can let $f=2t$, then define simple functions $s_1,s_2$ such that $s_1\leq f\leq s_2$. Somehow I need to choose these functions so their expectation value and variance will be the same, but I don't know how to do that. |
"If $a \equiv b \pmod m$ and $a \equiv b \pmod n$ and $\gcd(m,n)=1$, then $a \equiv b \pmod {mn}$ "
Is that a true theorem? I can't find it in my textbook!
This theorem can be seen as a particular instance of the Chinese remainder theorem. So yes, it is true.
To counter a bit the many answers suggesting the contrary, this is true
independently of the Chinese remainder theorem, which states that if $\gcd(m,n)=1$ then any pair of congruences $x\equiv a\pmod m$, $x\equiv b\pmod n$ has a solution for $x$. Of course the question bears some relation to CRT, but the result is more a preliminary to it: it shows that the solutions to such a pair of congruences occupy at most one congruence class modulo$~nm$. Indeed one can deduce the CRT from it by a counting argument: since there are $nm$ possible pairs of congruences, and as many classes modulo$~nm$, it must be that every pair of congruences admits a single class as solution. (Other proofs are more helpful for finding that solution.)
Given that $a\equiv b\pmod m$ and $a\equiv b\pmod n$, in other words that $a-b$ is a common multiple of $m$ and $n$, the essential property of the least common multiple (namely that it divides all other common multiples) says that $\def\lcm{\operatorname{lcm}}\lcm(n,m)\mid a-b$, in other words $a\equiv b\pmod{\lcm(n,m)}$. So all that is needed here is to check that $\lcm(m,n)=mn$, which clear from the general rule $\gcd(n,m)\lcm(n,m)=nm$ (or directly: since $nm$ is a common multiple of $n,m$ one has $\lcm(n,m)\mid nm$, and $nm/\lcm(n,m)$ then is a common divisor of $n,m$, hence divides $\gcd(n,m)=1$). This is the same as what Bill Dubuque says more succinctly.
To make the point that this result is really independent of the CRT, all the above (and therefore the statement of the question) is valid more generally than in $\Bbb Z$. Namely, it is valid in all integral domains where a $\gcd$ and $\lcm$ of two elements always exist, so-called GCD domains; in particular it holds in any Unique Factorisation Domain like a ring of multivariate polynomials, even though the Chinese Remainder Theorem does not hold there. Of course this generality is superfluous if you are only interested in the arithmetic of the integers, but the only way I can argue convincingly that this is independent of CRT is to look at a situation where the statement makes sense, but where CRT fails.
The mentioned essential property of $\lcm(n,m)$ is clear in $\Bbb Z$, since the set of common multiples of $n,m$ is closed under addition and subtraction, and therefore equal to the set of multiples of its least positive element, which is $\lcm(n,m)$. In more general settings where "least" has no obvious meaning, the property serves as
definition of $\lcm(n,m)$, just like being a common divisor divisible by all other common divisors becomes the definition of $\gcd(n,m)$. Note that without any assumption other than working in an integral domain, these definitions do not ensure that $\lcm(n,m)$ or $\gcd(n,m)$ always exist; this is why I mentioned that the result does depend on the assumption that $\lcm(n,m)$ exists.
See the Chinese Remainder Theorem.
Yes. This is just a special case of the Chinese Remainder Theorem
Yes, $\ n,m\mid a\!-\!b\,\iff\, \color{#c00}{{\rm lcm}(n,m)}\mid a\!-\!b,\,$ and $\ \color{#c00}{{\rm lcm}(n,m) = nm}\ $ by $\ \gcd(n,m)=1.$
Remark $\ $ Generally $\ {\rm lcm}(n,m) = \dfrac{nm}{\gcd(n,m)}.\, $ While this can be deduced as a special case of the Chinese Remainder Theorem (CRT), generally lcm laws are more applicable, so it's essential to be familiar with basic lcm laws (the first $\iff$ above is the universal property of the lcm).
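A quick numerical illustration of the lcm argument (a throwaway Python check; math.lcm needs Python 3.9+):

from math import gcd, lcm

m, n = 9, 20                 # coprime moduli
a, b = 193, 13               # a - b = 180
assert gcd(m, n) == 1 and lcm(m, n) == m * n
assert (a - b) % m == 0 and (a - b) % n == 0   # a = b mod m and mod n
assert (a - b) % (m * n) == 0                  # hence a = b mod mn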
It is equivalent to saying that if $c$ is a multiple of $n$ and if $c$ is a multiple of $o$ and if $\gcd(n,o)=1$, then $c$ is a multiple of $no$.
So yes, it is true |
I'm dealing with a doubly charged scalar singlet that interacts only with theright-handed muon as follows,$$\mathcal{L} = \lambda \psi_{R}C\psi_{R} \phi^{++},$$where $\lambda$ is the coupling, $\...
How do I find the Feynman rules for a general Lagrangian density?For example the Lagrangian $$L = \partial_\mu \psi \partial^\mu \psi +a \psi\partial_\mu \psi \partial^\mu \psi+b \psi^2 \partial_\mu ...
I'm new to particle physics, and I'm reading chapter 5 of Prof. Mark A. Thompson's "Modern Particle Physics", which talks about Time-ordered perturbation theory vs QED. However, in page 119 he wrote:...
One of the momentum space Feynman rules in $\phi^{4}$ theory (for correlation functions) is that for an external point with 4-momentum $p$ (with direction headed towards the external point), we need a ...
For a an interaction term like $g(\overline{\psi} \gamma^\mu \psi) \partial_\mu \phi$ in which $\psi$ is a Dirac spinor and $\phi$ a scalar field (d=4), should we expect this vertex to have a momentum ...
Given the interaction term with $N$ scalars $\phi_i$, each massless, what would be the Feynman rules for an interaction term in the action as$$ \int d^dx (\partial^2 \phi^i)\phi_i(\partial_\mu \phi^...
What are the properties of the Feynman diagrams in Fermi's effective interaction theory and how can one draw a Feynman diagram in this theory in relation to the Feynman diagram in the standard model ...
I work with this interaction Lagrangian density$$\mathcal{L}_{int} = \mathcal{L}_{int}^{(1)} + \mathcal{L}_{int}^{(2)} + {\mathcal{L}_{int}^{(2)}}^\dagger = ia\bar{\Psi}\gamma^\mu\Psi Z_\mu +ib(\phi^... |
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ... |
Optimization Algorithms - RMSprop and Adam
Gradient Descent is widely used as an optimization algorithm for minimizing cost functions. There are various improved versions of it, such as Stochastic Gradient Descent, Gradient Descent with Momentum, RMSprop and Adam.
RMSprop : Root Mean Square Prop
Intuition: Suppose we have two parameters, one in the vertical direction (say $b$) and another in the horizontal direction (say $W$). We want to speed up learning in the horizontal direction ($W$) and slow it down to prevent oscillations in the vertical direction ($b$).
$\beta_2$: Parameter for RMSprop (similar to the momentum $\beta$, but written as $\beta_2$ to prevent confusion)
$\epsilon$: Parameter added to $S_{dW}$ and $S_{db}$ to prevent division by zero in case any of this is equal to zero.
Implementation:
On iteration t:
{
*Compute dW, db in the current mini batch
*$S_{dW} = \beta_2 S_{dW} + (1-\beta_2)dW^2$
*$S_{db} = \beta_2 S_{db} + (1-\beta_2)db^2$
*Update Parameters
**$W = W-\alpha\frac{dW}{\sqrt{S_{dW}+\epsilon}}$
**$b = b-\alpha\frac{db}{\sqrt{S_{db}+\epsilon}}$
}
We can use large $\alpha$ to speed up learning
$S_{dW}$ is relatively small which means $W$ is divided by a small number speeding up learning in the $W$ direction
$S_{db}$ is relatively large which means $b$ is divided by a large number preventing movement in vertical direction
In reality, there are multiple parameters in horizontal direction (say, $W_1, W_3, W_7…$) and multiple in the vertical direction (say $W_2, W_4, W_6,..$).
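A minimal sketch of one RMSprop step in NumPy (names are illustrative; epsilon is placed inside the square root to match the update rule above):

import numpy as np

def rmsprop_step(params, grads, cache, alpha=0.001, beta2=0.999, eps=1e-8):
    # params, grads, cache: dicts of NumPy arrays keyed by parameter name
    for key in params:
        cache[key] = beta2 * cache[key] + (1 - beta2) * grads[key] ** 2
        params[key] -= alpha * grads[key] / np.sqrt(cache[key] + eps)
    return params, cache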
Adam
Adam stands for Adaptive Moment Estimation. It is essentially a combination of RMSprop and Gradient Descent with momentum algorithms.
Implementation:
Initialize: $v_{dW} = 0, S_{dW}=0, v_{db}=0, S_{db}=0 $
On iteration t,
{
Compute dW, db on current minibatch
Momentum:
$v_{dW} = \beta_1 v_{dW} + (1-\beta_1)dW$ $v_{db} = \beta_1 v_{db} + (1-\beta_1)db$
RMSProp
$S_{dW} = \beta_2 S_{dW} + (1-\beta_2)dW^2$ $S_{db} = \beta_2 S_{db} + (1-\beta_2)db^2$
Applying Bias Correction: $v^c$ stands for $v^{corrected}$, similarly for S
$v_{dW}^c = \frac{v_{dW}}{1-\beta_1^t}$ $v_{db}^c = \frac{v_{db}}{1-\beta_1^t}$ $S_{dW}^c = \frac{S_{dW}}{1-\beta_2^t}$ $S_{db}^c = \frac{S_{db}}{1-\beta_2^t}$
Weight Update:
$W = W - \alpha \frac{v_{dW}^c}{\sqrt{S_{dW}^c + \epsilon}}$ $b = b - \alpha \frac{v_{db}^c}{\sqrt{S_{db}^c + \epsilon}}$
}
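A minimal sketch of one Adam step for a single parameter array (illustrative names; t starts at 1 so the bias corrections are well defined):

import numpy as np

def adam_step(p, g, v, s, t, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    v = beta1 * v + (1 - beta1) * g           # momentum (first moment)
    s = beta2 * s + (1 - beta2) * g ** 2      # RMSprop (second moment)
    v_c = v / (1 - beta1 ** t)                # bias correction
    s_c = s / (1 - beta2 ** t)
    p = p - alpha * v_c / np.sqrt(s_c + eps)  # epsilon as in the rule above
    return p, v, s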
Hyperparameters:
$\alpha$: Needs to be tuned. $\beta_1$: Typical value: 0.9 (not tuned very often). $\beta_2$: Typical value: 0.999 (not tuned very often). $\epsilon$: $10^{-8}$ (doesn't matter much).
Additional Material: Learning Rate Decay
To speed up an optimization algorithm, the learning rate $\alpha$ needs to be reduced over time. The reason is that bigger steps can be taken in the beginning, but as the cost function approaches its minimum, the learning rate needs to be decreased.
In the first figure, if the learning rate stays constant, the algorithm will never converge and will always oscillate around the minimum (BLUE). If the learning rate is decreased over time, it will converge to the minimum in the later epochs (GREEN).
Implementation:
decay_rate: Hyperparameter epoch: 1 pass through the data
$\alpha = \frac{1}{1+decayRate*epochNum}.\alpha_0$
For example, let $\alpha_0$ = 0.2 and decayRate = 1. Then: epoch = 1; $\alpha$ = 0.1
epoch = 2; $\alpha$ ≈ 0.067 epoch = 3; $\alpha$ = 0.05 epoch = 4; $\alpha$ = 0.04
Other Types of Decay:
Exponential decay: $\alpha$ = $0.95^{epochNum}\alpha_0$
Others: $\alpha = \frac{K}{\sqrt{epochNum}}\alpha_0$
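The decay schedules above collected in a small helper (a sketch; the constant 0.95 and the choice K = alpha0 are just the examples from the text):

def decayed_alpha(alpha0, decay_rate, epoch_num, kind="inverse"):
    if kind == "inverse":        # alpha0 / (1 + decayRate * epochNum)
        return alpha0 / (1 + decay_rate * epoch_num)
    if kind == "exponential":    # 0.95^epochNum * alpha0
        return 0.95 ** epoch_num * alpha0
    return alpha0 / epoch_num ** 0.5   # K / sqrt(epochNum) with K = alpha0

print([round(decayed_alpha(0.2, 1, e), 3) for e in range(1, 5)])
# [0.1, 0.067, 0.05, 0.04]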
Additional Material: Local Optima
What if there are multiple local optima present for the cost function as shown in the figure below?
It turns out that this is typically not the case in a high-dimensional space. In reality, most points of zero gradient are saddle points, where some directions curve up and some curve down.
The real problem is the plateau, which can really slow down learning.
RMSprop, Adam and GD with Momentum can be used to mitigate these issues.
Source material from Andrew NG’s awesome course on Coursera. The material in the video has been written in a text form so that anyone who wishes to revise a certain topic can go through this without going through the entire video lectures. |
What is the difference between a $\pi/4$-QPSK modulation and a QPSK differential modulation (DQPSK)?
Are these modulations identical?
No, the modulations are not identical; they are quite different.
Differential QPSK uses one of 4 different signals $$\sqrt{2}\cos\left(2\pi f_c t + \frac{\pi}{4}\right), ~ \sqrt{2}\cos\left(2\pi f_c t + \frac{3\pi}{4}\right), ~ \sqrt{2}\cos\left(2\pi f_c t + \frac{5\pi}{4}\right), ~\sqrt{2}\cos\left(2\pi f_c t + \frac{7\pi}{4}\right)$$
with phase angles $\displaystyle \frac{\pi}{4},\frac{3\pi}{4},\frac{5\pi}{4},$ and $\displaystyle \frac{7\pi}{4}$ to transmit two information bits, but the information is carried in the
change of phase from the previous signaling interval, and not in the absolute value of the phase. That is, the information bits are $(0,0), (1,0), (1,1), (0,1)$ according as the phase changed by $\displaystyle0, \frac{\pi}{2}, \pi,$ or $\displaystyle\frac{3\pi}{2}$ in comparison to the previous signaling interval. Note that it is possible to have no phase change or a phase change of $\pi$ (inversion of polarity of the signal), both of which can be undesirable for different reasons.
In contrast, $\displaystyle \frac{\pi}{4}$-QPSK uses the four signals shown above
and the same signals shifted in phase by $\displaystyle \frac{\pi}{4}$, namely, $$\sqrt{2}\cos(2\pi f_c t), ~ -\sqrt{2}\sin(2\pi f_c t), ~ -\sqrt{2}\cos(2\pi f_c t),~\sqrt{2}\sin(2\pi f_c t)$$ but it uses them in alternation: in one signaling interval, the signal constellation is $$\sqrt{2}\cos\left(2\pi f_c t + \frac{\pi}{4}\right), ~ \sqrt{2}\cos\left(2\pi f_c t + \frac{3\pi}{4}\right), ~ \sqrt{2}\cos\left(2\pi f_c t + \frac{5\pi}{4}\right),~\sqrt{2}\cos\left(2\pi f_c t + \frac{7\pi}{4}\right)$$ and in the next signaling interval, the constellation is $$\sqrt{2}\cos(2\pi f_c t), ~ -\sqrt{2}\sin(2\pi f_c t), ~ -\sqrt{2}\cos(2\pi f_c t),~\sqrt{2}\sin(2\pi f_c t),$$ followed by $$\sqrt{2}\cos\left(2\pi f_c t + \frac{\pi}{4}\right), ~ \sqrt{2}\cos\left(2\pi f_c t + \frac{3\pi}{4}\right), ~ \sqrt{2}\cos\left(2\pi f_c t + \frac{5\pi}{4}\right),~\sqrt{2}\cos\left(2\pi f_c t + \frac{7\pi}{4}\right)$$ again, and so on, switching back and forth between the two choices. Effectively, the overall constellation is that of $8$-PSK but only four points can be used during any given signaling interval. Note that it is guaranteed that the phase will change at the transition from one signaling interval to the next, and also that the phase will not change by $\pi$, and so the undesirable phase transitions mentioned in differential QPSK (they also exist in plain vanilla QPSK) are avoided. Of course, the $\displaystyle \frac{\pi}{4}$-QPSK receiver (and transmitter) is more complicated than a QPSK receiver or transmitter. Whether it is a case of "What you gain on the swings, you lose on the roundabouts" is something that needs to be considered carefully.
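A small NumPy sketch of the alternating constellations (illustrative only; the differential bit mapping is omitted):

import numpy as np

odd_set = np.exp(1j * np.array([1, 3, 5, 7]) * np.pi / 4)   # 45-degree QPSK
even_set = np.exp(1j * np.array([0, 2, 4, 6]) * np.pi / 4)  # 0-degree QPSK

rng = np.random.default_rng(0)
idx = rng.integers(0, 4, size=12)
symbols = np.array([(odd_set if k % 2 == 0 else even_set)[i]
                    for k, i in enumerate(idx)])
dphi = np.angle(symbols[1:] / symbols[:-1])
# every transition is an odd multiple of pi/4: never 0 and never pi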
For the signals mentioned above, see also
this answer of mine. |
A t-test is a family of statistical hypothesis tests in which the test statistic follows a Student's t-distribution under the null hypothesis. The most widely used t-tests include the one-sample t-test, t-test for paired samples and independent two-sample t-test.
The one-sample t-test is used to test the null hypothesis that the population mean \(\mu\) is equal to a specified value \(\mu_0\), which is often 0. Therefore, the null hypothesis is
\[H_0: \mu = \mu_0.\]
Depending on the alternative hypothesis, we can carry out either a one-tailed or two-tailed test. If the sign for the difference between \(\mu\) and \(\mu_0\) is not known, a two-tailed test should be used and the alternative hypothesis is
\[H_1: \mu \neq \mu_0. \]
Otherwise, a one-tailed test is used and the alternative hypothesis is
\[H_a: \mu > \mu_0 \]
if it is expected the population mean is greater than \(\mu_0\), or
\[H_a: \mu < \mu_0 \]
if it is expected the population mean is less than \(\mu_0\).
For the one-sample t-test, the statistic is
\[t={\frac {{\overline {x}}-\mu _{0}}{s/{\sqrt {n}}}}\]
where \(\overline{x}\) is the sample mean, \(s\) is the sample standard deviation and \(n\) is the sample size. This is also called the t-statistic; it follows a \(t\)-distribution with \(n - 1\) degrees of freedom, under the assumption that the data follow a normal distribution. The p-value is then calculated from this \(t\)-distribution.
Using the ACTIVE data, we want to test whether the education level of people older than 65 years is above high school (years of education greater than 12). From the t-test output below, we have t-value 3.7234. Comparing that with a t-distribution with 1574 degrees of freedom, we get the \(p\)-value ≈ 0.0001. We therefore reject the null hypothesis.
> usedata('active') > attach(active) > > t.test(edu, mu=12, alternative = "greater" ) One Sample t-test data: edu t = 3.7234, df = 1574, p-value = 0.0001017 alternative hypothesis: true mean is greater than 12 95 percent confidence interval: 13.30691 Inf sample estimates: mean of x 14.34222 >
For the one-sample t-test, the effect size is defined as
\[\delta={\frac {\mu-\mu _0}{\sigma}},\]
where \(\sigma\) is the population standard deviation. The sample effect size is
\[d={\frac {{\overline {x}}-\mu _{0}}{s}}.\]
Whether an effect size should be interpreted small, medium, or large depends on its substantive context and its operational definition. Some guidelines were provided in the literature as shown in the table below. However, they should be used with extraordinary caution.
Effect size d: Very small 0.01; Small 0.20; Medium 0.50; Large 0.80; Very large 1.20; Huge 2.0
For the ACTIVE example, the estimated effect size \(d=0.047\), indicating a small effect. Note that even though the result based on the t-test was significant, the difference was actually quite small in practice.
> usedata('active') > attach(active) > > (14.23233 - 13)/sd(edu) [1] 0.04656958 >
This test is used when we have paired samples where two samples can be matched or "paired". A common example is a pre- and post-test design or repeated measures. For example, suppose we want to assess the effect of an intervention method designed to reduce depression. We can enroll 100 participants and measure each participant's depression level. Then all the participants are given the intervention, after which their depression levels are measured again. Our interest is in whether the intervention has any effect on mean depression levels.
To answer the question, we can first calculate the difference in depression level before and after intervention: \(d_i = y_{i2} - y_{i1}\). Then \(d_i\) can be analyzed using the one-sample t-test.
\[t = \frac{\overline{d}-\mu_0}{s_d/\sqrt(n)},\]
where \(\overline{d}\) is the average and \(s_d\) is the standard deviation of the differences. Under the null hypothesis \(H_0:\mu=\mu_0\), the t-statistic follows a t-distribution with \(n-1\) degrees of freedom, where \(n\) represents the number of pairs.
Note that although the mean difference is the same for the paired and unpaired samples, their statistical significance levels can be very different. This is because the variance of \(d\) is
\[\text{var}(d) = \text{var}(y_2 - y_1) = \text{var}(y_1) + \text{var}(y_2) - 2\rho \sqrt{\text{var}(y_1) \text{var}(y_2)}\]
where \(\rho\) is the correlation between the measurements before and after the treatment. Since the correlation is often positive, the variance for the paired samples is often smaller than for unpaired samples.
In the ACTIVE data, we have measures of a verbal test at 4 time points. As an example, we want to test whether there is any difference in the score between the first time and the last time. The input and output can be seen below. Note that the same t.test() function is used, but with the option paired=TRUE.
> usedata('active') > attach(active) > > t.test(hvltt, hvltt4, paired=TRUE) Paired t-test data: hvltt and hvltt4 t = -0.45061, df = 1811, p-value = 0.6523 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: -0.2658514 0.1665137 sample estimates: mean of the differences -0.04966887 >
The independent samples t-test is used when two separate sets of independent and identically distributed samples are obtained, one from each of the two populations being compared. For example, in evaluating the effect of an intervention, we enroll 100 participants and randomly assign 50 to the treatment group and the other 50 to the control group. In this case, we have two independent samples and should use the independent two-sample t-test.
When the two population variances of the two groups are not equal (the two sample sizes may or may not be equal), the \(t\) statistic to test whether the population means are different is calculated as:
\[t=\frac{\bar{x}_{1}-\bar{x}_{2}}{s_{\overline{\Delta}}}\]
where
\[s_{\overline{\Delta}}=\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}.\]
Here, \(s_{1}^{2}\) and \(s_{2}^{2}\) are the unbiased estimators of the variances of the two samples with \(n_{k}\) = number of participants in group \(k\) = 1 or 2. For use in significance testing, the distribution of the test statistic is approximated as an ordinary Student's \(t\) distribution with the degrees of freedom calculated as
\[\mathrm{d.f.}=\frac{(s_{1}^{2}/n_{1}+s_{2}^{2}/n_{2})^{2}}{(s_{1}^{2}/n_{1})^{2}/(n_{1}-1)+(s_{2}^{2}/n_{2})^{2}/(n_{2}-1)}\]
This is known as the Welch-Satterthwaite equation. The true distribution of the test statistic actually depends (slightly) on the two unknown population variances.
In R, the function
t.test() can be used to conduct a t test. The following code conducts the Welch's t test. Note that
alternative = "greater" sets the alternative hypothesis. The other options include
two.sided and
less.
> usedata('active') > attach(active) > > training<-hvltt2[group==1] > control<-hvltt2[group==4] > > mean(training, na.rm=T)-mean(control, na.rm=T) [1] 1.538577 > > t.test(training, control, alternative = 'greater') Welch Two Sample t-test data: training and control t = 4.6022, df = 1272.7, p-value = 2.299e-06 alternative hypothesis: true difference in means is greater than 0 95 percent confidence interval: 0.9882856 Inf sample estimates: mean of x mean of y 25.15493 23.61635 >
When the two samples have the same population variance, the \(t\) statistic can be calculated as follows:
\[t=\frac{\bar{x}_{1}-\bar{x}_{2}}{s_{p}\cdot\sqrt{\frac{1}{n_{1}}+\frac{1}{n_{2}}}}\]
where
\[s_{p}=\sqrt{\frac{(n_{1}-1)s_{1}^{2}+(n_{2}-1)s_{2}^{2}}{n_{1}+n_{2}-2}}\]
is an estimator of the pooled standard deviation of the two samples. \(n_{k}-1\) is the degrees of freedom for each group, and the total sample size minus two (\(n_{1}+n_{2}-2\)) is the total number of degrees of freedom, which is used in significance testing.
The pooled two independent sample t test can also be conducted using the
t.test() function by setting the option
var.equal=T or
TRUE.
> usedata('active') > attach(active) > > training<-hvltt2[group==1] > control<-hvltt2[group==4] > > t.test(training, control, var.equal=T, alternative = 'greater') Two Sample t-test data: training and control t = 4.602, df = 1273, p-value = 2.301e-06 alternative hypothesis: true difference in means is greater than 0 95 percent confidence interval: 0.9882598 Inf sample estimates: mean of x mean of y 25.15493 23.61635 >
For the two-sample t-test, Cohen's d is defined as the difference between two means divided by a standard deviation.
\[d=\frac{\bar{x}_{1}-\bar{x}_{2}}{s_{p}}\]
Cohen defined \(s_{p}\), the pooled standard deviation, as\[s_{p}=\sqrt{\frac{(n_{1}-1)s_{x_{1}}^{2}+(n_{2}-1)s_{x_{2}}^{2}}{n_{1}+n_{2}-2}}\] For the ACTIVE data analysis example, the effect size is calculated as below.
> usedata('active') > attach(active) > > training<-hvltt2[group==1] > control<-hvltt2[group==4] > > mean1=mean(training,na.rm=T) > mean2=mean(control,na.rm=T) > meandiff=mean1-mean2 > > n1=length(training)-sum(is.na(training)) > n2=length(control)-sum(is.na(control)) > > v1=var(training,na.rm=T) > v2=var(control,na.rm=T) > s=sqrt(((n1-1)*v1+(n2-1)*v2)/(n1+n2-2)) > s [1] 5.968894 > > cohend=meandiff/s > cohend [1] 0.2577658 > |
I am modeling a lime slaker which is basically a stirred tank reactor with two separate feeds:
1. CaO feed screw (powder) 2. H2O binary valve (liquid)
These react according to the reaction:
$\ce{CaO(s) + H2O(l) -> Ca(OH)2(s)}$
The slaker is operated in a semibatch fashion: water and lime are fed until the desired amounts have been added, and when the reaction has finished a new batch can be started. Water is added in excess, with a mass ratio of approximately 1:4. The addition of the reactants takes about six minutes, and after that some minutes are required for the temperature to rise due to the reaction. The reactor is
not emptied between the batches but the contents exit to a secondary tank through internal piping on an overflow basis. An example of the development of the reactor temperature is given below.
Because of the overflow the volume of the reactor (and the contents) stays constant. The output feed is not measured. In general, the material balance of a model is:
the rate of accumulation of mass in the system = rate of mass entering the system - rate of mass leaving the system
or
$\frac{dM}{dt}=\dot{m}_{in}-\dot{m}_{out}$
or
$\frac{dV\rho}{dt}=\dot{F}_{in}\rho_{in}-\dot{F}_{out}\rho$.
For the lime addition weighing data is available to determine the mass of the added quicklime. The lime feed screw is operated at a constant speed so the feed rate can be determined. The water feed has a flow measurement.
What would be the best way to form the material balances as the reaction happens between a liquid and a solid? How to determine how much material is displaced by the addition of lime? How does the formation of calcium hydroxide affect the balance? |
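As a starting point (a sketch only, with an assumed first-order rate law, constant feeds over the integration window, and the overflow stream omitted), the component balances could be integrated like this:

import numpy as np
from scipy.integrate import solve_ivp

M_CAO, M_H2O, M_CAOH2 = 0.0561, 0.0180, 0.0741   # molar masses, kg/mol

def balances(t, y, m_cao_in, m_h2o_in, k):
    # y = [CaO, H2O, Ca(OH)2] in kg; k*m_cao/M_CAO is an assumed rate (mol/s)
    m_cao, m_h2o, m_caoh2 = y
    r = k * m_cao / M_CAO
    return [m_cao_in - r * M_CAO,     # CaO: feed in, consumed by reaction
            m_h2o_in - r * M_H2O,     # H2O: feed in, consumed 1:1 molar
            r * M_CAOH2]              # Ca(OH)2: produced

sol = solve_ivp(balances, (0, 360), [0.0, 0.0, 0.0], args=(0.05, 0.2, 0.01))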
In the clamped case, things get a lot easier! Some modifications to the previous post:
Let: \[u(x,t) = X(x)T(t)\] in \[u_{tt}(x,t) + K u_{xxxx} = 0, \quad K > 0, \quad u(0,t) = u_{xx}(0,t)=u(l,t)=u_{xx}(l,t)=0\]
Then: \[u_{tt}(x,t) = X(x)T''(t), \quad u_{xxxx}(x,t) = X''''(x)T(t)\]
\[u_{tt}(x,t) + K u_{xxxx} = X(x)T''(t) + K X''''(x)T(t) = 0\]
\[\frac{X''''(x)}{X(x)} = \frac{-T''(t)}{K T(t)} = \lambda\]
for some constant \(\lambda\), as each side is independent of the other's variable. Now, we have by assumption that \[\lambda = c^4 > 0, \quad K = k^2 > 0\]
So we are left with two ODEs of the form: \[X''''(x) = c^4 X(x), \quad T''(t) = -c^4 k^2 T(t)\]
Which yield solutions: \[X(x) = A \cosh(c x) + B \sinh(c x) + C \cos(c x) + D \sin(c x)\]
\[T(t) = E \cos(c^2 k t) + F \sin(c^2 k t)\] (renaming the constants in \(T\) to avoid clashing with those in \(X\))
Using the first two boundary conditions: \[u(0,t) = 0 = u_{xx}(0,t)\]
\[X(0) = A \cosh(0) + B \sinh(0) + C \cos(0) + D \sin(0) = A + C = 0, \quad A = -C\]
\[X''(0) = A c^2 \cosh(0) + B c^2 \sinh(0) - C c^2 \cos(0) - D c^2 \sin(0) = A c^2 - C c^2 = 0, \quad A = C\]
\[A = C = -C \implies A = C = 0\]
as we disregard the case where \(X(0) \ne 0\) or \(X''(0) \ne 0\), which would force \(T(t) \equiv 0\), the trivial solution.
OK, nice. We plug into the 3rd and 4th boundary conditions, using our new function for X:
\[X(x) = B \sinh(c x) + D \sin(c x)\]
\[u(l,t) = 0 = u_{xx}(l,t)\]
\[X(l) = B \sinh(c l) + D \sin(c l) = 0\]
\[X''(l) = B c^2 \sinh(c l) - D c^2 \sin(c l) = 0 \implies B \sinh(c l) - D \sin(c l) = 0\]
as \(c \ne 0\). Adding the equations gives us: \[B \sinh(c l) + D \sin(c l) + B \sinh(c l) - D \sin(c l) = 2 B \sinh(c l) = 0\]
Now, for real arguments, \(\sinh\) has a root only at \(0\), and \(cl = 0\) would yield an eigenvalue of 0, which we disregard by assumption. So:
\[2 B \sinh(c l) = 0 \implies B = 0\]
Now we cannot have \(D = 0\), or our solution is trivial. So, substituting \(B = 0\), we're left with only:
\[B \sinh(c l) + D \sin(c l) = D \sin(c l) = 0 \implies \sin(c l) = 0\]
And our eigenvalues are those \(c\) which satisfy \[\sin(c l) = 0 \ \blacksquare\]
Which we can provide a closed form for in this case since our solution is so nice:
\[\sin(c l) = 0 \implies c = \frac{\pi n}{l} \implies \lambda = \frac{\pi^4 n^4}{l^4}\]
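Since the eigenfunctions come out as plain sines, the orthogonality shown in the next section is easy to spot-check numerically (a throwaway sketch):

import numpy as np
from scipy.integrate import quad

l = 1.0
X = lambda n, x: np.sin(n * np.pi * x / l)

val, _ = quad(lambda x: X(2, x) * X(5, x), 0, l)   # distinct eigenvalues
print(val)   # ~0, on the order of 1e-16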
These next two sections need basically no modification, as the only change is which boundary terms vanish during integration by parts.
\[\int_0^l X_n(x)X_m(x)\, dx, \quad n \ne m, \quad \lambda_n \ne \lambda_m\]
Then \[(\lambda_n - \lambda_m) \int_0^l X_n(x)X_m(x)\, dx = \int_0^l \lambda_n X_n(x)X_m(x) - \lambda_m X_n(x)X_m(x)\, dx\]
\[\int_0^l X''''_n(x)X_m(x) - X_n(x)X''''_m(x)\, dx = \Big[X'''_n(x)X_m(x) - X_n(x)X'''_m(x)\Big]_{x=0}^{x=l} - \int_0^l X'''_n(x)X'_m(x) - X'_n(x)X'''_m(x)\, dx\]
\[= 0 - \Big[X''_n(x)X'_m(x) - X'_n(x)X''_m(x)\Big]_{x=0}^{x=l} + \int_0^l X''_n(x)X''_m(x) - X''_n(x)X''_m(x)\, dx = 0 + 0 = 0\]
using integration by parts, and the fact that the boundary terms vanish because:
\[X(0) = 0, \quad X''(0) = 0, \quad X(l) = 0, \quad X''(l) = 0\]
Then we have \[(\lambda_n - \lambda_m) \int_0^l X_n(x)X_m(x)\, dx = 0, \quad \lambda_n \ne \lambda_m \implies \int_0^l X_n(x)X_m(x)\, dx = 0\]
And our eigenfunctions are orthogonal ∎.
Let our differential operator be \(\mathcal{I}\), s.t. \[\mathcal{I} X = \lambda X, \quad \mathcal{I} Y = \lambda Y\]
We show that: \[\langle \mathcal{I}X,Y\rangle = \langle X,\mathcal{I}^*Y\rangle\]
\[\langle \mathcal{I}X,Y\rangle = \int_0^l \mathcal{I}X(x)Y(x)^* dx = \int_0^l \lambda X(x)Y(x)^* dx = \int_0^l X(x)\lambda^* Y(x)^* dx = \int_0^l X(x)\mathcal{I}^*Y(x)^* dx = \langle X,\mathcal{I}^*Y\rangle\]
as we had by assumption that our eigenvalues are real. Then our operator is Hermitian, so the eigenfunctions of different eigenvalues are linearly independent, and the eigenfunctions of the same eigenvalue are linearly dependent ∎
The following fact, which I've heard being called "soft version of Moschovakis's lemma" (see top answer here) is the following:
Under AD, if there is a surjection $\Bbb R\rightarrow\alpha$, then there is a surjection $\Bbb R\rightarrow\mathcal{P}(\alpha)$.
I remember that a few days ago I saw a really nice proof of this result, which directly used AD (and possibly DC, I can't remember) and not some complex workaround using scales and things like that. However, when I wanted to show the proof to a friend of mine yesterday, I couldn't find it anywhere, and I would be very thankful if anyone pointed out where I could have seen this proof.
The rough idea of the proof is the following:
Fix a surjection $f:\Bbb R\rightarrow\alpha$. For each $X\subseteq\alpha$, using $f$, define a game $G(X)$ on $\omega$. Show that, for any $X\neq Y$, no winning strategy for $G(X)$ is a winning strategy for $G(Y)$. Thus, the function $F:\Bbb R\rightarrow\mathcal{P}(\alpha)$, defined as $F(r)=X$ if $r$ codes a winning strategy for $G(X)$, and $F(r)=\varnothing$ if $r$ codes no winning strategy, is well-defined. By AD, every $G(X)$ has a winning strategy for some player, so $F$ is surjective.
Hope this helps to find the proof I meant. Thanks in advance. |
An electron is confined to a finite square well whose “walls” are $8.0$ eV high. If the ground-state energy is $0.5$ eV, estimate the width of the well.
My solution:
$E_1=0.5 \ eV=8\times10^{-20} \ J$
Electron mass: $m=9.11\times10^{-31} \ kg$
$\hbar=1.055\times 10^{-34} \ Js$
$E_1=\frac{\pi^2 \hbar^2}{2mL^2} \ \Rightarrow \ L=\frac{\pi\hbar}{\sqrt{2mE_1}}=\frac{\pi(1.055\times 10^{-34} \ Js)}{\sqrt{2(9.11\times10^{-31} \ kg)(8\times10^{-20} \ J)}}=8.6813\times 10^{-10} \ m\approx 0.87 \ nm$
This is a problem from Modern Physics, Paul A. Tipler / Ralph A. Llewellyn, and when I checked the solution in the Answers section at the end of the book, I found that it is the same answer I got.
But my T.A. told me I can't do this, because I'm applying concepts for the Infinite Square Well (the equation for $E_1$). I thought I could do this because $E_1<V_0$, and it doesn't matter whether $V_0\rightarrow\infty$; $E_1$ is still below $V_0$.
What am I understanding wrong?
How can I find the answer without using the equation for $E_1$ that I already used? |
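A sketch of the finite-well route the T.A. presumably intends, assuming the standard even-parity quantization condition $z\tan z=\sqrt{z_0^2-z^2}$ for a well of width $L$ (so $\tan z = \sqrt{V_0/E - 1}$ for the ground state):

import numpy as np
from scipy.optimize import brentq

hbar, m = 1.055e-34, 9.11e-31
E, V0 = 0.5 * 1.602e-19, 8.0 * 1.602e-19     # energies in joules

# ground state: tan(z) = sqrt(V0/E - 1), with z = (L/2)*sqrt(2*m*E)/hbar
z = brentq(lambda z: np.tan(z) - np.sqrt(V0 / E - 1), 0.01, np.pi / 2 - 0.01)
L = 2 * z * hbar / np.sqrt(2 * m * E)
print(L)   # ~7.3e-10 m, somewhat below the infinite-well estimate of 0.87 nm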
Tagged: invertible matrix Problem 583
Consider the $2\times 2$ complex matrix
\[A=\begin{bmatrix} a & b-a\\ 0& b \end{bmatrix}.\] (a) Find the eigenvalues of $A$. (b) For each eigenvalue of $A$, determine the eigenvectors. (c) Diagonalize the matrix $A$.
(d) Using the result of the diagonalization, compute and simplify $A^k$ for each positive integer $k$. Problem 582
A square matrix $A$ is called
nilpotent if some power of $A$ is the zero matrix. Namely, $A$ is nilpotent if there exists a positive integer $k$ such that $A^k=O$, where $O$ is the zero matrix.
Suppose that $A$ is a nilpotent matrix and let $B$ be an invertible matrix of the same size as $A$.
Is the matrix $B-A$ invertible? If so prove it. Otherwise, give a counterexample. Problem 562
An $n\times n$ matrix $A$ is called
nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$. Using the definition of a nonsingular matrix, prove the following statements. (a) If $A$ and $B$ are $n\times n$ nonsingular matrix, then the product $AB$ is also nonsingular. (b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then: The matrix $B$ is nonsingular. The matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.) Problem 552
For each of the following $3\times 3$ matrices $A$, determine whether $A$ is invertible and find the inverse $A^{-1}$ if exists by computing the augmented matrix $[A|I]$, where $I$ is the $3\times 3$ identity matrix.
(a) $A=\begin{bmatrix} 1 & 3 & -2 \\ 2 &3 &0 \\ 0 & 1 & -1 \end{bmatrix}$ (b) $A=\begin{bmatrix} 1 & 0 & 2 \\ -1 &-3 &2 \\ 3 & 6 & -2 \end{bmatrix}$. Problem 548
An $n\times n$ matrix $A$ is said to be
invertible if there exists an $n\times n$ matrix $B$ such that $AB=I$, and $BA=I$,
where $I$ is the $n\times n$ identity matrix.
If such a matrix $B$ exists, then it is known to be unique and called the
inverse matrix of $A$, denoted by $A^{-1}$.
In this problem, we prove that if $B$ satisfies the first condition, then it automatically satisfies the second condition.
So if we know $AB=I$, then we can conclude that $B=A^{-1}$.
Let $A$ and $B$ be $n\times n$ matrices.
Suppose that we have $AB=I$, where $I$ is the $n \times n$ identity matrix.
Prove that $BA=I$, and hence $A^{-1}=B$.
Problem 546
Let $A$ be an $n\times n$ matrix.
The $(i, j)$
cofactor $C_{ij}$ of $A$ is defined to be \[C_{ij}=(-1)^{i+j}\det(M_{ij}),\] where $M_{ij}$ is the $(i,j)$ minor matrix obtained from $A$ by removing the $i$-th row and $j$-th column.
Then consider the $n\times n$ matrix $C=(C_{ij})$, and define the $n\times n$ matrix $\Adj(A)=C^{\trans}$.
The matrix $\Adj(A)$ is called the adjoint matrix of $A$.
When $A$ is invertible, then its inverse can be obtained by the formula \[A^{-1}=\frac{1}{\det(A)}\Adj(A).\]
For each of the following matrices, determine whether it is invertible, and if so, then find the invertible matrix using the above formula.
(a) $A=\begin{bmatrix} 1 & 5 & 2 \\ 0 &-1 &2 \\ 0 & 0 & 1 \end{bmatrix}$. (b) $B=\begin{bmatrix} 1 & 0 & 2 \\ 0 &1 &4 \\ 3 & 0 & 1 \end{bmatrix}$. Problem 506
Let $A$ be an $n\times n$ invertible matrix. Then prove the transpose $A^{\trans}$ is also invertible and that the inverse matrix of the transpose $A^{\trans}$ is the transpose of the inverse matrix $A^{-1}$.
Namely, show that \[(A^{\trans})^{-1}=(A^{-1})^{\trans}.\] Problem 500
10 questions about nonsingular matrices, invertible matrices, and linearly independent vectors.
The quiz is designed to test your understanding of the basic properties of these topics.
You can take the quiz as many times as you like.
The solutions will be given after completing all the 10 problems.
Problem 452
Let $A$ be an $n\times n$ complex matrix.
Let $S$ be an invertible matrix. (a) If $SAS^{-1}=\lambda A$ for some complex number $\lambda$, then prove that either $\lambda^n=1$ or $A$ is a singular matrix. (b) If $n$ is odd and $SAS^{-1}=-A$, then prove that $0$ is an eigenvalue of $A$.
(c) Suppose that all the eigenvalues of $A$ are integers and $\det(A) > 0$. If $n$ is odd and $SAS^{-1}=A^{-1}$, then prove that $1$ is an eigenvalue of $A$. Problem 438
Determine whether each of the following statements is True or False.
(a) If $A$ and $B$ are $n \times n$ matrices, and $P$ is an invertible $n \times n$ matrix such that $A=PBP^{-1}$, then $\det(A)=\det(B)$. (b) If the characteristic polynomial of an $n \times n$ matrix $A$ is \[p(\lambda)=(\lambda-1)^n+2,\] then $A$ is invertible. (c) If $A^2$ is an invertible $n\times n$ matrix, then $A^3$ is also invertible. (d) If $A$ is a $3\times 3$ matrix such that $\det(A)=7$, then $\det(2A^{\trans}A^{-1})=2$. (e) If $\mathbf{v}$ is an eigenvector of an $n \times n$ matrix $A$ with corresponding eigenvalue $\lambda_1$, and if $\mathbf{w}$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda_2$, then $\mathbf{v}+\mathbf{w}$ is an eigenvector of $A$ with corresponding eigenvalue $\lambda_1+\lambda_2$.
(
Stanford University, Linear Algebra Exam Problem) |
Take $n = 12$
$12$'s prime factorization is $2^1\times2^1\times3^1$
So then, the number of factors by UFT is $(1+1)(1+1)(1+1) = 8$
But there's only $1,2,3,4,6,12 = 6$ factors!!
Where are the other two?
The formula you've written assumes that all factors in the product are distinct. The reason for the product of $(e+1)$ over each factor $p^e$ is because we are multiplying $p^k$ for all $0\le k\le e$, for each factor $p^e$ in the product. Thus, the decomposition $2^12^13^1$ as you've given gives the following $8$ factors:
\begin{align} 2^02^03^0&=1\\ 2^12^03^0&=2\\ 2^02^13^0&=2\\ 2^12^13^0&=4\\ 2^02^03^1&=3\\ 2^12^03^1&=6\\ 2^02^13^1&=6\\ 2^12^13^1&=12 \end{align}
And now you can see that the double-counted factors come from the fact that $2^12^0=2^02^1$. As long as the prime bases are different, this can't happen, which is why we demand that the decomposition use distinct prime factors in the factorization that yields that formula.
In your example you get $2^2\cdot 3$ so the exponents are $2$ and $1$, not three $1$s. This gives $(2+1)\cdot (1+1)=6$, the correct answer.
The way the theorem is proven is by noting that you can choose exactly how many of a specific prime to include. That is, if
$$n=p_1^{e_1}\cdot\ldots\cdot p_r^{e_r}$$
then there are $e_i+1$ choices for how many factors of $p_i$ are in there. $0,1,2,3,4\ldots, e_i$. That's where the factor comes from. If you tried to do, as you do in this instance $2^1\cdot 2^1\cdot 3^1$, then you are saying by putting in the first two $(1+1)$ factors (the ones that go with the $2$s) that you can take $4$ cases: no $2$s at all, $0$ of the first two and $1$ of the second, $1$ of the first two and $0$ of the second, or $1$ of each $2$. Notice that you count the case of having a single $2$ twice! It doesn't matter
which factor of $2$ you grab, it still produces the same factor of $12$. If you are allowed to break up the number, you will always double count something, so you must keep all of the same primes together. |
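A quick check in Python (using sympy.factorint for the distinct-prime factorization; math.prod needs Python 3.8+):

from math import prod
from sympy import factorint

def num_divisors(n):
    # multiply (e + 1) over the factorization n = p1^e1 * p2^e2 * ...
    return prod(e + 1 for e in factorint(n).values())

print(factorint(12), num_divisors(12))   # {2: 2, 3: 1} 6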
Differential and Integral Equations, Volume 22, Number 7/8 (2009), 669-678.
Multiple positive solutions for a class of $p - q$-Laplacian systems with multiple parameters and combined nonlinear effects
Abstract
In this work, we prove a multiplicity result for a class of quasilinear elliptic equation involving the subcritical Hardy-Sobolev exponent, and singularities both in the operator and in the non-linearity. Precisely, we study the problem $$ \begin{cases} {-\operatorname{div} \big[ |x_N|^{-ap} | \nabla u |^{p-2} \nabla u \big] + \lambda|x_N|^{-(a+1-c)p} |u|^{p-2}u } & \\ \ \ \ = |x_N|^{-bq} |u|^{q-2} u + f & \mbox{in }\mathbb R_+^N \\ {u} = 0 &\mbox{on } \partial \mathbb R_+^N, \end{cases} $$ where we denote $ x=(x_1,x_2,\dots,x_N)=(x',x_N) \in \mathbb R^{N-1}\times \mathbb R $, $ \mathbb R_+^N= \left\{ x \in \mathbb R^N : x_N > 0 \right\} $, $ \partial \mathbb R_+^N= \left\{ x \in \mathbb R^N : x_N = 0 \right\} $, and we consider $ 1 < p < N $, $ 0 \leqslant a < (N-p)/p $, $ a < b < a+1 $, $c=0 $, $ d \equiv a+1-b $, $ q = q(a,b) \equiv Np/(N - pd) $ (the Hardy-Sobolev critical exponent), $ \lambda \in \mathbb R $ is a parameter, and $ f \in \big( L_b^q(\mathbb R_+^N) \big)^{*} $, the dual space of the weighted Lebesgue space. We prove an existence result for the case $ f \equiv 0 $ and a multiplicity result in the case $ \lambda = 0 $ for non-autonomous perturbations~$ f \not\equiv 0.$
Article information
Source: Differential Integral Equations, Volume 22, Number 7/8 (2009), 669-678.
Dates: First available in Project Euclid: 20 December 2012
Permanent link to this document: https://projecteuclid.org/euclid.die/1356019543
Mathematical Reviews number (MathSciNet): MR2532116
Zentralblatt MATH identifier: 1240.35153
Subjects: Primary: 35J57: Boundary value problems for second-order elliptic systems. Secondary: 35J60: Nonlinear elliptic equations; 35J62: Quasilinear elliptic equations; 35J70: Degenerate elliptic equations; 47J15: Abstract bifurcation theory [See also 34C23, 37Gxx, 58E07, 58E09]
Citation
Ali, Jaffar; Shivaji, R. Multiple positive solutions for a class of $p - q$-Laplacian systems with multiple parameters and combined nonlinear effects. Differential Integral Equations 22 (2009), no. 7/8, 669--678. https://projecteuclid.org/euclid.die/1356019543 |
Circle: a two-dimensional figure where all points are at a fixed equal distance from a center point.
Center of a circle: the point from which all points on the circumference are at an equal distance.
Chord of a circle: a line segment whose endpoints lie on the circle, passing through its interior.
Diameter of a circle: any straight line segment that passes through the center of the circle and whose endpoints are on the circle. The diameters are the longest chords of the circle. In the process industry, the diameter is typically used to describe the size of pipe that the process is flowing through. Unless explicitly specified, the diameter is assumed to mean the nominal pipe size (NPS). The inside diameter of a pipe is the longest distance between the two inside walls of the pipe. The outside diameter is the distance between the two outside walls. To find the wall thickness of the pipe, subtract the inside diameter from the outside diameter and divide by two. When sizing flow meters or impact tees, a certain straight run may be required. This is typically specified in terms of diameters. For example, a 10" orifice meter with a 10 diameter upstream requirement will require 100" of unobstructed straight run upstream of the orifice plate.
Radius of a circle: a line segment between the center point and a point on a circle or sphere.
Sector of a circle: a fraction of the area of a circle, bounded by a radius on each side and an arc.
Segment of a circle: the area of a sector of a circle minus the triangular piece of that sector.
Area of a Circle formula
\(\large{ A =\pi \; r^2 }\)
Where:
\(\large{ A }\) = area
\(\large{ r }\) = radius
\(\large{ \pi }\) = Pi
Circumference of a Circle formula
\(\large{ C= 2 \; \pi \; r }\)
Where:
\(\large{ C }\) = circumference (perimeter)
\(\large{ r }\) = radius
\(\large{ \pi }\) = Pi
Chord Arc Length of a Circle Formula
\(\large{ l = \frac { \theta} { 360 } \; 2 \; \pi \; r }\)
Where:
\(\large{ l }\) = length
\(\large{ r }\) = radius
\(\large{ \theta }\) = angle
\(\large{ \pi }\) = Pi
Chord Length of a Circle formula
\(\large{ c = 2 \; r \; \sin \; \frac {\theta}{2} }\)
\(\large{ c = 2 \; \sqrt{r^2-h^2} }\)
Where:
\(\large{ c }\) = chord
\(\large{ h, h' }\) = height
\(\large{ r }\) = radius
\(\large{ \theta }\) = angle
Diameter of a Circle formula
\(\large{ d = 2 \; r }\)
\(\large{ d = \frac {C} {\pi} }\)
\(\large{ d = \sqrt {\frac {4 \; A} {\pi} } }\)
Where:
\(\large{ d }\) = diameter
\(\large{ A }\) = area
\(\large{ C }\) = circumference
\(\large{ r }\) = radius
\(\large{ \pi }\) = Pi
Distance from Centroid of a Circle formula
\(\large{ C_x = r}\)
\(\large{ C_y = r}\)
Where:
\(\large{ C_x, C_y }\) = distance from centroid
\(\large{ r }\) = radius
Elastic Section Modulus of a Circle formula
\(\large{ S = \frac { \pi \; r^3 } { 4 } }\)
Where:
\(\large{ S }\) = elastic section modulus
\(\large{ r }\) = radius
\(\large{ \pi }\) = Pi
Plastic Section Modulus of a Circle formula
\(\large{ Z = \frac { d^3 } { 6 } }\)
Where:
\(\large{ Z }\) = plastic section modulus
\(\large{ d }\) = diameter
Polar Moment of Inertia of a Circle formula
\(\large{ J_{z} = \frac { \pi \; r^4 } { 2 } }\)
\(\large{ J_{z1} = \frac { 5 \; \pi \; r^4 } { 2 } }\)
Where:
\(\large{ J }\) = torsional constant
\(\large{ r }\) = radius
\(\large{ \pi }\) = Pi
Radius of a Circle formula
\(\large{ r = \frac {d} {2} }\)
\(\large{ r = \frac {C} {2 \; \pi} }\)
\(\large{ r = \sqrt {\frac {A} {\pi} } }\)
Where:
\(\large{ r }\) = radius
\(\large{ A }\) = area
\(\large{ C }\) = circumference
\(\large{ d }\) = diameter
\(\large{ \pi }\) = Pi
Sector Area of a Circle formula
\(\large{ A = \frac { \theta } { 360 } \; \pi \; r^2 \;\; }\)
\(\large{ A = \frac { \theta \; \pi } { 360 } \; r^2 \;\; }\)
Where:
\(\large{ A }\) = area
\(\large{ r }\) = radius
\(\large{ \theta }\) = angle
\(\large{ \pi }\) = Pi
Segment Area of a Circle formula
\(\large{ A = \frac { 1 } { 2 } \; r^2 \; \left( \; \frac {\pi} {180} \theta \;-\; \sin \theta \; \right) \;\; }\)
\(\large{ A = \left( \frac { \theta \; \pi } { 360 } \;-\; \frac { \sin \theta } { 2 } \right) r^2 \;\; }\)
Where:
\(\large{ A }\) = area
\(\large{ r }\) = radius
\(\large{ \theta }\) = angle
\(\large{ \pi }\) = Pi
Radius of Gyration of a Circle formula
\(\large{ k_{x} = \frac { r } { 2 } }\)
\(\large{ k_{y} = \frac { r } { 2 } }\)
\(\large{ k_{z} = \frac { \sqrt {2} } { 2 } \; r }\)
\(\large{ k_{x1} = \frac { \sqrt {5} } { 2 } \; r }\)
\(\large{ k_{y1} = \frac { \sqrt {5} } { 2 } \; r }\)
\(\large{ k_{z1} = \frac { \sqrt {10} } { 2 } \; r }\)
Where:
\(\large{ k }\) = radius of gyration
\(\large{ r }\) = radius
Second Moment of Area of a circle formula
\(\large{ I_{x} = \frac { \pi \; r^4}{ 4 } }\)
\(\large{ I_{y} = \frac { \pi \; r^4}{ 4 } }\)
\(\large{ I_{x1} = \frac { 5 \; \pi \; r^4}{ 4 } }\)
\(\large{ I_{y1} = \frac {5 \; \pi \; r^4}{ 4 } }\)
Where:
\(\large{ I }\) = moment of inertia
\(\large{ r }\) = radius
\(\large{ \pi }\) = Pi
Torsional Constant of a Circle formula
\(\large{ J = \frac { \pi \; r^4 } { 2 } }\)
\(\large{ J = \frac { \pi \; d^4 } { 32 } }\)
Where:
\(\large{ J }\) = torsional constant
\(\large{ d }\) = diameter
\(\large{ r }\) = radius
\(\large{ \pi }\) = Pi |
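For convenience, the formulas in this section can be collected into a small Python sketch of my own (the function names are not from the text); angles are in degrees, matching the sector and segment conventions above.

```python
import math

def area(r): return math.pi * r**2
def circumference(r): return 2 * math.pi * r
def arc_length(r, theta_deg): return (theta_deg / 360) * 2 * math.pi * r
def chord_length(r, theta_deg): return 2 * r * math.sin(math.radians(theta_deg) / 2)
def sector_area(r, theta_deg): return (theta_deg / 360) * math.pi * r**2
def segment_area(r, theta_deg):
    t = math.radians(theta_deg)
    return 0.5 * r**2 * (t - math.sin(t))
def elastic_section_modulus(r): return math.pi * r**3 / 4
def plastic_section_modulus(d): return d**3 / 6
def second_moment(r): return math.pi * r**4 / 4
def torsional_constant(r): return math.pi * r**4 / 2

# Internal consistency checks: S = I / c with c = r, and J = I_x + I_y.
r = 2.0
assert math.isclose(elastic_section_modulus(r), second_moment(r) / r)
assert math.isclose(torsional_constant(r), 2 * second_moment(r))
```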
I have often seen a statement that we can model only a short rate process $r(t)$ and then use it to derive a term structure $R(t,T)$ for every $t$. Could someone please elaborate? Say, I’ve simulated $r(t)$ up to time $t$, what would I use to derive $R(t,T)$?
This is indeed a standard result. You can convince yourself by noticing
The bank account grows from 1 at $t=\tau$ to $\exp\left(\int_\tau^T r(u)\,du\right)$ at time $T$. The price at $t=\tau$ of a security paying $X$ at time $T$ is then $E\left[X \exp\left(-\int_\tau^T r(u)\,du\right)\Big|\mathscr{F}_\tau\right]$. Hence the price of a credit risk-free zero coupon bond, which pays 1 at $T$, is $$ B(\tau,T)=E\left[\exp\left(-\int_\tau^T r(u)\,du\right)\Big|\mathscr{F}_\tau\right],$$ which defines the yield curve at $t=\tau$ via $R(\tau,T)=-\ln B(\tau,T)/(T-\tau)$.
So the only challenge remaining is to go from $r|\mathscr{F}_\tau$ to $\int_\tau^T r(u)du|\mathscr{F}_\tau$. This can either be done by approximating the integral (e.g. by Riemann sums) or, in Gaussian models, by avoiding discretisation (and its errors) using that $r|\mathscr{F}_\tau$ and $\int_\tau^T r(u)du|\mathscr{F}_\tau$ are jointly Gaussian and simulating from the joint normal distribution. Every textbook on short rate models will probably explain this. Look for example in Chapter 3 of Glasserman's "Monte Carlo Methods in Financial Engineering".
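To make the Riemann-sum route concrete, here is a hedged Monte Carlo sketch. The Vasicek dynamics $dr=\kappa(\theta-r)\,dt+\sigma\,dW$ and all parameter values are my own illustrative choices, not implied by the question.

```python
import numpy as np

def zero_coupon_yield(r0, tau, T, kappa=0.5, theta=0.03, sigma=0.01,
                      n_steps=200, n_paths=50_000, seed=0):
    """Estimate R(tau, T) from B(tau, T) = E[exp(-int_tau^T r du) | F_tau]."""
    rng = np.random.default_rng(seed)
    dt = (T - tau) / n_steps
    r = np.full(n_paths, r0)              # r(tau) is known at time tau
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += r * dt                # left Riemann sum of int r(u) du
        r += kappa * (theta - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    B = np.exp(-integral).mean()          # zero coupon bond price B(tau, T)
    return -np.log(B) / (T - tau)         # continuously compounded yield R(tau, T)

print(zero_coupon_yield(r0=0.02, tau=0.0, T=5.0))
```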
I'm programming an orbital simulator and need some help modeling hyperbolic orbits (or trajectories, whatever you want to call them). So far I can model elliptical orbits with the standard orbital elements (perigee, eccentricity, semi-major axis, mean anomaly, etc); I can calculate the mean anomaly in a few seconds from now, and from that derive the craft's position and velocity. Is there a way to do this for hyperbolic orbits? If you start with just the velocity and position vector of a craft can you predict where it will be every second from now while it is in a hyperbolic orbit/trajectory?
Yes. (It sounds like you are dealing with a two-body problem, which has analytic solutions.)
First off, given the position, $\vec{r}$, the velocity, $\vec{v}$, both in a coordinate system with the central body at the origin at rest, and the GM of the central body, $\mu$, you can readily compute the specific energy (energy per unit mass), which will tell you whether the trajectory is elliptical (negative energy) or hyperbolic (positive energy). Or perhaps parabolic (exactly zero energy), which is however only of mathematical interest. From the energy, you can get the semi-major axis, $a$. Here $v=|\vec{v}|$ and $r=|\vec{r}|$:
$$\mathcal{E}={v^2\over 2}-{\mu\over r}$$
$$a={\mu\over 2\mathcal{E}}$$
You are probably propagating elliptical orbits with sines and cosines. A hyperbolic trajectory is done in a similar way with hyperbolic sines and hyperbolic cosines. You can compute the specific angular momentum from just the position and velocity, and from that get the eccentricity, $e$.
$$\vec{\mathcal{M}}=\vec{r}\times\vec{v}$$
$$e=\sqrt{1+{\vec{\mathcal{M}}\cdot\vec{\mathcal{M}}\over\mu a}}$$
Then a simple form of the trajectory in the $xy$ plane, with closest approach on the $+x$ axis is:
$$x=a\left(e-\cosh\tau\right)$$ $$y=a\sqrt{e^2-1}\sinh\tau$$
where $\tau$ is the eccentric anomaly, related to time, where $\tau=0$ and $t=0$ is at closest approach:
$$t=\sqrt{a^3\over\mu}\left(e\sinh\tau-\tau\right)$$ |
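A hedged numerical sketch of this recipe: solve $t\sqrt{\mu/a^3}=e\sinh\tau-\tau$ for $\tau$ by Newton's method, then evaluate the $x,y$ formulas above. The function name and tolerances are mine.

```python
import numpy as np

def propagate_hyperbolic(a, e, mu, t, tol=1e-12, max_iter=50):
    """Position on the hyperbolic trajectory at time t (t = 0 at closest approach)."""
    M = t * np.sqrt(mu / a**3)
    tau = np.arcsinh(M / e)            # reasonable starting guess
    for _ in range(max_iter):
        f = e * np.sinh(tau) - tau - M
        fp = e * np.cosh(tau) - 1.0    # > 0 since e > 1, so Newton is well behaved
        step = f / fp
        tau -= step
        if abs(step) < tol:
            break
    x = a * (e - np.cosh(tau))
    y = a * np.sqrt(e**2 - 1.0) * np.sinh(tau)
    return x, y

# At t = 0 this returns the periapsis (a(e-1), 0), as expected.
print(propagate_hyperbolic(a=1.0, e=1.5, mu=1.0, t=0.0))
```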
Products are omnipresent in physics – and even in less quantitative sciences – from Day One. The product $A=XY$ of two numbers may be visualized as the area $A$ of the rectangle whose sides are the two factors $X$ and $Y$. When embedded in physics, this simple formula $A=XY$ already includes units: the sides are in meters and the area is in squared meters. You may divide the rectangle into unit squares. That's probably how children learn products when they are 7 or 8.
The height of the rectangle is six units (imagine they are meters), the width is five units, and it contains thirty squares – units of area (thirty squared meters).
But the optimum interpretation depends on the precise quantity. The way to think about every product depends on the quantities we multiply.
Take $W=\vec F\cdot \vec s$. It's the work that the force $\vec F$ does when it moves the object by the distance $\vec s$ (including directions). Why is the formula a product? Because the work is the change of some energy and the following argument works for energy. In the following argument, we move objects and we will do so in a clever way that makes it obvious how much energy we spend (how much work we do). But because the energy is conserved, the work we do must be independent of the precise way how we moved the objects.
Imagine that you have a piece of metal that is attracted to the earth by the force 1 newton (the mass is about 0.1 kilograms). To vertically lift this object by 1 meter (increase the altitude) requires us to use and fully discharge a battery of a particular size. It's always the same and let's just call the energy contained in the battery "1 newton meter" or, more precisely, "1 joule". It is just a name. If you are uncomfortable with the name, call it "one battery of mine" and we will return to this naming issue.
Now, the question is how many of these batteries one needs to buy and fully "consume" for a metallic object of weight (the force) $F=5\,{\rm N}$, five newtons, to be lifted by $s=6\,{\rm m}$, six meters. Equivalently, what work do these batteries do?
It's easy to answer. We may cut the object into 5 smaller ones. Each of them is attracted by the force 1 newton, as previously. And we may lift these parts separately. We may lift the first piece by one meter. Then another meter... Six times. We lift it by 6 meters. We consume 6 batteries.
Then we do the same with the second object, extra 6 batteries are gone. Then with the third piece, 6 batteries gone. Fourth piece, 6 batteries. Fifth piece, 6 batteries. In total, we will consume $$ 6+6+6+6+6 = 6\times 5 = 30$$ thirty batteries. Each of them was said to have the energy 1 joule. So we will need 30 joules. Note that the numerical part of the calculation is obvious and the picture above (which computed areas) may also show how many batteries we need. Each column corresponds to one fifth of the metallic object; each row corresponds to one meter of the distance by which the objects have to be lifted. The only "controversial" thing about the calculation is that I called the energy in one battery "one joule" which is equal to "one newton-meter".
But this has the advantage that the formula$$ W = F\cdot s $$whose numerical part was fine (it's clear why we got a product, right?) will work including units. If we just use units for $W$ that are naturally "products" of the units for $F$ and for $s$, then we may multiply the numbers as well as the units and both of them come out correctly. So if we want the formula to work including the units, we see that the unit of energy stored in the battery I started with should be said to be the product of one meter and one newton – this product is also called one joule.
So physicists aren't afraid of multiplying quantities with units. They routinely multiply their powers, too. For example, the accelerated motion with acceleration $a$ changes the distance by $s=at^2/2$ after time $t$. The distance is the average speed times time and the average speed is $\vec v = at/2$ (zero at the beginning, $at$ at the end). |
Introduction: General Formalism
We look at a Hamiltonian with \(V(t)\) some time-dependent perturbation
\[H=H^0+V(t) \label{eq1}\]
so now the wavefunction will have perturbation-induced time dependence. Our starting point is the set of eigenstates \(|n\rangle\) of the unperturbed Hamiltonian \(H^0|n\rangle =E_n|n\rangle\), notice we are not labeling with a zero, no \(E^0_n\), because with a time-dependent Hamiltonian, energy will not be conserved, so it is pointless to look for energy corrections. What happens instead, provided the perturbation is not too large, is that the system makes transitions between the eigenstates \(|n\rangle\) of \(H^0\).
Of course, even for \(V=0\), the wavefunctions have the usual time dependence,
\[ |\psi(t)\rangle =\sum_nc_ne^{-iE_nt/\hbar} |n\rangle \label{9.5.1}\]
with the \(c_n\) ’s constant. What happens on introducing \(V(t)\) is that the \(c_n\)’s
themselves acquire time dependence,
\[ |\psi(t)\rangle =\sum_nc_n(t)e^{-iE_nt/\hbar} |n\rangle \label{9.5.2}\]
and this time dependence is determined by Schrödinger’s equation with the Hamiltonian in Equation \ref{eq1}
\[ i\hbar \dfrac{\partial}{\partial t}\sum_nc_n(t)e^{-iE_nt/\hbar} |n\rangle =(H^0+V(t))\sum_nc_n(t)e^{-iE_nt/\hbar} |n\rangle \label{9.5.3}\]
so \[ i\hbar \sum_n\dot{c_n}(t)e^{-iE_nt/\hbar} |n\rangle =V(t)\sum_nc_n(t)e^{-iE_nt/\hbar} |n\rangle \label{9.5.4}\]
Taking the inner product with the bra \(\langle m|e^{iE_mt/\hbar}\) , and introducing \(\omega_{mn}=\dfrac{E_m-E_n}{\hbar}\),
\[ i\hbar \dot{c}_m=\sum_n\langle m|V(t)|n\rangle c_ne^{i\omega_{mn}t} =\sum_n V_{mn}e^{i\omega_{mn}t}c_n \label{9.5.5}\]
This is a matrix differential equation for the \(c_n\)’s :
\[ i\hbar \begin{pmatrix} \dot{c}_1\\ \dot{c}_2\\ \dot{c}_3\\ .\\ . \end{pmatrix}=\begin{pmatrix} V_{11}& V_{12}e^{i\omega_{12}t}&.&.&.\\ V_{21}e^{i\omega_{21}t}& V_{22}&.&.&.\\ .&.&V_{33}&.&.\\ .&.&.&.&.\\ .&.&.&.&.\end{pmatrix}\begin{pmatrix} c_1\\ c_2\\ c_3\\ .\\ .\end{pmatrix} \label{9.5.6}\]
and solving this set of coupled equations will give us the \(c_n(t)\)’s, and hence the probability of finding the system in any particular state at any later time.
If the system is in initial state \(|i\rangle\) at \(t=0\), the probability amplitude for it being in state \(|f\rangle\) at time \(t\) is, to leading order in the perturbation, \[ c_f(t)=\delta_{fi}-\dfrac{i}{\hbar} \int_0^t V_{fi}(t′)e^{i\omega_{fi}t′}dt′. \label{9.5.7}\]
The probability that the system is in fact in state \(|f\rangle\) at time \(t\) is therefore \[|c_f(t)|^2=\dfrac{1}{\hbar^2}\left| \int_0^t V_{fi}(t′)e^{i\omega_{fi}t′}dt′\right|^2. \label{9.5.8}\]
Obviously, this is only going to be a good approximation if it predicts that the probability of transition is small—otherwise we need to go to higher order, using the Interaction Representation (or an exact solution like that in the next section).
Example \(\PageIndex{1}\): kicking an oscillator
Suppose a simple harmonic oscillator is in its ground state \(|0\rangle\) at \(t=-\infty\). It is perturbed by a small time-dependent potential \(V(t)=-eExe^{-t^2/\tau^2}\). What is the probability of finding it in the first excited state \(|1\rangle\) at \(t=+\infty\)?
Solution
Here
\[V_{fi}(t′)=-eE\langle 1|x|0\rangle e^{-t′^2/\tau^2}\]
and
\[x=\sqrt{\hbar /2m\omega}(a+a^{\dagger})\]
from which the probability can be evaluated. It is
\[(e^2E^2/\hbar^2)(\hbar /2m\omega )\pi \tau^2e^{-\omega^2\tau^2/2}.\]
It’s worth thinking through the physical interpretations for very long and for very short times, and explaining the significance of the time for which the probability is a maximum.
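One way to take up that suggestion is to check the closed form numerically. Below is a sketch in illustrative units \(\hbar=m=eE=1\) (my choice), comparing a direct quadrature of \(|c_1(\infty)|^2\) from Equation \ref{9.5.8} against the stated result.

```python
import numpy as np
from scipy.integrate import quad

omega, tau = 1.3, 0.7                      # illustrative values
# Fourier transform of the Gaussian pulse, real and imaginary parts separately.
re, _ = quad(lambda t: np.exp(-t**2 / tau**2) * np.cos(omega * t), -np.inf, np.inf)
im, _ = quad(lambda t: np.exp(-t**2 / tau**2) * np.sin(omega * t), -np.inf, np.inf)
x10_sq = 1.0 / (2 * omega)                 # |<1|x|0>|^2 with hbar = m = 1
P_numeric = x10_sq * (re**2 + im**2)       # Equation (9.5.8) with e*E = 1
P_closed = x10_sq * np.pi * tau**2 * np.exp(-omega**2 * tau**2 / 2)
print(P_numeric, P_closed)                 # agree to quadrature accuracy
```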
The Two-State System: an Exact Solution
For the particular case of a two-state system perturbed by a periodic external field, the matrix equation above can be solved exactly. Of course, real physical systems have more than two states, but in fact for some important cases two of the states may be strongly coupled to each other, but only weakly coupled to other states, and the analysis then becomes relevant. A famous example, the ammonia maser, is discussed at the end of the section.
For a two-state system, then, the most general wavefunction is \[ |\psi(t)\rangle =c_1(t)e^{-iE_1t/\hbar} |1\rangle +c_2(t)e^{-iE2t/\hbar} |2\rangle \label{9.5.9}\]
and the differential equation for the \(c_n(t)\)’s is: \[ i\hbar \begin{pmatrix}\dot{c}_1\\ \dot{c}_2\end{pmatrix}=\begin{pmatrix} 0&Ve^{i\omega t}e^{i\omega_{12}t}\\ Ve^{-i\omega t}e^{-i\omega_{12}t}&0 \end{pmatrix}\begin{pmatrix} c_1\\ c_2\end{pmatrix}. \label{9.5.10}\]
Writing \(\omega +\omega_{12}=\alpha\) for convenience, the coupled equations are: \[ \begin{matrix} i\hbar \dot{c}_1=Ve^{i\alpha t}c_2\\ i\hbar \dot{c}_2=Ve^{-i\alpha t}c_1. \end{matrix} \label{9.5.11}\]
These two first-order equations can be transformed into a single second-order equation by differentiating the second one, then substituting \(\dot{c}_1\) from the first one and \(c_1\) from the second one to give \[ \ddot{c}_2=-i\alpha \dot{c}_2-\dfrac{V^2}{\hbar^2}c_2. \label{9.5.12}\]
This is a standard second-order differential equation, solved by putting in a trial solution \(c_2(t)=c_2(0)e^{i\Omega t}\) . This satisfies the equation if \[ \Omega =-\dfrac{\alpha}{2} \pm \sqrt{\dfrac{\alpha^2}{4}+\dfrac{V^2}{\hbar^2}}, \label{9.5.13}\]
so, reverting to the original \(\omega +\omega_{12}=\alpha\) , the general solution is:
\[ c_2(t)=e^{-i\dfrac{(\omega -\omega_{21})}{2}t} \left( Ae^{i\sqrt{\left(\dfrac{\omega -\omega_{21}}{2}\right)^2+\dfrac{V^2}{\hbar^2}} t}+Be^{-i\sqrt{\left(\dfrac{\omega -\omega_{21}}{2}\right)^2+\dfrac{V^2}{\hbar^2}} t} \right) \label{9.5.14}.\]
Taking the initial state to be \(c_1(0)=1,\; c_2(0)=0\) gives \(A=-B\).
To fix the overall constant, note that at \(t = 0\), \[ \dot{c}_2(0) = \dfrac{V}{i\hbar} c_1(0) = \dfrac{ V}{i\hbar} . \label{9.5.15}\]
Therefore \[ |c_2(t)|^2=\dfrac{\dfrac{V^2}{\hbar^2}}{\left(\dfrac{\omega -\omega_{21}}{2}\right)^2+\dfrac{V^2}{\hbar^2}} \sin^2 \left( \sqrt{\left(\dfrac{\omega -\omega_{21}}{2}\right)^2+\dfrac{V^2}{\hbar^2}} t\right) . \label{9.5.16}\]
Note in particular the result if \(\omega =\omega_{12}\): \[ |c_2(t)|^2=\sin^2\left( \dfrac{Vt}{\hbar} \right). \label{9.5.17}\]
Assuming \(E_2>E_1\), and the two-state system to be initially in the ground state \(|1\rangle\), this means that after a time \(h/4V\) the system will certainly be in state \(|2\rangle\), and will oscillate back and forth between the two states with period \(h/2V\).
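As a sanity check (not part of the original text), the coupled equations \ref{9.5.11} can be integrated numerically and compared against Equation \ref{9.5.16}; the units \(\hbar=1\) and the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

V, alpha = 0.4, 0.6                      # alpha = omega - omega_21, hbar = 1

def rhs(t, c):                           # i c1' = V e^{i a t} c2, i c2' = V e^{-i a t} c1
    c1, c2 = c
    return [-1j * V * np.exp(1j * alpha * t) * c2,
            -1j * V * np.exp(-1j * alpha * t) * c1]

sol = solve_ivp(rhs, (0, 30), [1 + 0j, 0 + 0j], rtol=1e-10, atol=1e-12,
                dense_output=True)
t = np.linspace(0, 30, 7)
Omega = np.sqrt((alpha / 2)**2 + V**2)   # generalized Rabi frequency
closed = V**2 / Omega**2 * np.sin(Omega * t)**2
numeric = np.abs(sol.sol(t)[1])**2
print(np.max(np.abs(numeric - closed)))  # ~0: the exact solution checks out
```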
That is to say, a precisely timed period spent in an oscillating field can drive a collection of molecules all in the ground state to be all in an excited state. The ammonia maser works by sending a stream of ammonia molecules, traveling at known velocity, down a tube having an oscillating field for a definite length, so the molecules emerging at the other end are all (or almost all, depending on the precision of ingoing velocity, etc.) in the first excited state. Application of a small amount of electromagnetic radiation of the same frequency to the outgoing molecules will cause some to decay, generating intense radiation and therefore a much shorter period for all to decay, emitting coherent radiation.
A “Sudden” Perturbation
A sudden perturbation is defined here as a sudden switch from one time-independent Hamiltonian \(H_0\) to another one \(H′_0\), the time of switching being much shorter than any natural period of the system. In this case, perturbation theory is irrelevant: if the system is initially in an eigenstate \(|n\rangle\) of \(H_0\), one simply has to write it as a sum over the eigenstates of \(H′_0\), \(|n\rangle =\sum_{n′}|n′\rangle \langle n′|n\rangle \). The nontrivial part of the problem is in establishing that the change is sudden enough, by estimating the actual time taken for the Hamiltonian to change, and the periods of motion associated with the state \(|n\rangle\) and with its transitions to neighboring states.
(We discussed one example last semester—an electron in the ground state in a one-dimensional box that suddenly doubles in size. Other favorite examples include an atom with spin-orbit coupling in a magnetic field that suddenly reverses (Messiah p 743), and the reaction of orbiting electrons to nuclear \(\alpha\) - or \(\beta\) -decay.)
Harmonic Perturbations: Fermi’s Golden Rule
Let us consider a system in an initial state \(|i\rangle\) perturbed by a periodic potential \(V(t)=Ve^{-i\omega t}\) switched on at \(t=0\). For example, this could be an atom perturbed by an external oscillating electric field, such as an incident light wave.
What is the probability that at a later time \(t\) the system be in state \(|f\rangle\)?
Recall the matrix differential equation for the \(c_n\)’s (Equation \ref{9.5.6})
\[ i\hbar \begin{pmatrix} \dot{c}_1\\ \dot{c}_2\\ \dot{c}_3\\ .\\ . \end{pmatrix}=\begin{pmatrix} V_{11}& V_{12}e^{i\omega_{12}t}&.&.&.\\ V_{21}e^{i\omega_{21}t}& V_{22}&.&.&.\\ .&.&V_{33}&.&.\\ .&.&.&.&.\\ .&.&.&.&.\end{pmatrix}\begin{pmatrix} c_1\\ c_2\\ c_3\\ .\\ .\end{pmatrix} \nonumber \]
Since the system is definitely in state \(|i\rangle\) at \(t=0\), the ket vector on the right is initially \(c_i=1,\; c_{j\neq i}=0\).
The first-order approximation is to keep the vector \(c_i=1,\; c_{j\neq i}=0\) fixed on the right, that is, to solve the equations \[ i\hbar \dot{c}_f(t)=V_{fi}e^{i\omega_{fi}t}. \label{9.5.18}\]
Integrating this equation, the probability amplitude for an atom in initial state \(|i\rangle\) to be in state \(|f\rangle\) after time \(t\) is, to first order:
\[ \begin{align} c_f(t) &=-\dfrac{i}{\hbar} \int_0^t \langle f| V|i\rangle e^{i(\omega_{fi}-\omega )t′}dt′ \\[5pt] &=-\dfrac{i}{\hbar} \langle f|V|i\rangle \dfrac{e^{i(\omega_{fi}-\omega )t}-1}{i(\omega_{fi}-\omega )}. \end{align} \label{9.5.19}\]
The probability of transition is therefore
\[ \begin{align} P_{i\to f}(t) &=|c_f|^2 \\[5pt] &=\dfrac{1}{\hbar^2}|\langle f|V|i\rangle |^2\left( \dfrac{\sin((\omega_{fi}-\omega )t/2)}{(\omega_{fi}-\omega )/2}\right)^2 \label{9.5.20} \end{align}\]
and we are interested in the large \(t\) limit.
Writing \(\alpha =(\omega_{fi}-\omega )/2\), our function has the form \(\dfrac{\sin^2\alpha t}{\alpha^2}\). This function has a peak at \(\alpha =0\), with maximum value \(t^2\), and width of order \(1/t\), so a total weight of order \(t\). The function has more peaks at \(\alpha t=(n+1/2)\pi\). These are bounded by the denominator at \(1/\alpha^2\). For large \(t\) their contribution comes from a range of order \(1/t\) also, and as \(t\to \infty\) the function tends to a \(\delta\) function at the origin, but multiplied by \(t\).
This divergence is telling us that there is a finite probability rate for the transition, so the likelihood of transition is proportional to time elapsed. Therefore, we should divide by \(t\) to get the transition rate.
To get the quantitative result, we need to evaluate the weight of the \(\delta\) function term. We use the standard result
\[\int_{-\infty}^{\infty} \left( \dfrac{\sin\xi}{\xi}\right)^2d\xi =\pi\]
to find
\[\int_{-\infty}^{\infty} \left(\dfrac{\sin\alpha t}{\alpha} \right)^2d\alpha =\pi t\]
and therefore
\[ \lim_{t\to \infty} \dfrac{1}{t}\left(\dfrac{\sin\alpha t}{\alpha} \right)^2=\pi \delta (\alpha ). \label{9.5.21}\]
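A quick numerical sanity check of this weight (my addition, using a finite integration range whose tail contributes less than \(10^{-3}\)):

```python
import numpy as np
from scipy.integrate import quad

# np.sinc(x/pi) = sin(x)/x, safely handling x = 0.
val, _ = quad(lambda x: np.sinc(x / np.pi)**2, -2000, 2000, limit=2000)
print(val, np.pi)   # ~3.141, consistent with int (sin xi / xi)^2 d xi = pi
```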
Now, the transition rate is the probability of transition divided by \(t\) in the large \(t\) limit, that is,
\[ \begin{align} R_{i\to f}(t)&=\lim_{t\to \infty} \dfrac{P_{i\to f}(t)}{t} \\&=\lim_{t\to \infty} \dfrac{1}{t}\dfrac{1}{\hbar^2}|\langle f|V|i\rangle |^2\left[ \dfrac{\sin((\omega_{fi}-\omega )t/2)}{(\omega_{fi}-\omega )/2}\right]^2 \\ &=\dfrac{1}{\hbar^2}|\langle f|V|i\rangle |^2\pi \delta \left(\dfrac{1}{2}(\omega_{fi}-\omega )\right) \\ &=\dfrac{2\pi}{\hbar^2}|\langle f|V|i\rangle |^2\delta (\omega_{fi}-\omega ) \label{9.5.22} \end{align} \]
This last line is Fermi’s Golden Rule: we shall be using it a lot. You might worry that in the long time limit we have taken the probability of transition is in fact diverging, so how can we use first order perturbation theory? The point is that for a transition with \(\omega_{fi}\neq \omega\), “long time” means \((\omega_{fi}-\omega )t\gg 1\); this can still be a very short time compared with the mean transition time, which depends on the matrix element. In fact, Fermi’s Rule agrees extremely well with experiment when applied to atomic systems.
Another Derivation of the Golden Rule
Actually, when light falls on an atom, the full periodic potential is not suddenly switched on, on an atomic time scale, but builds up over many cycles (of the atom and of the light). Baym re-derives the Golden Rule assuming the limit of a very slow switch on, \[ V(t)=e^{\varepsilon t}Ve^{-i\omega t} \label{9.5.23}\]
with \(\varepsilon\) very small, so \(V\) switched on very gradually in the past, and we are looking at times much smaller than \(1/\varepsilon\) . We can then take the initial time to be \(-\infty\) , that is, \[ c_f(t)=-\dfrac{i}{\hbar} \int_{-\infty}^{t} \langle f| V|i\rangle e^{i(\omega_{fi}-\omega -i\varepsilon )t′} dt′=-\dfrac{1}{\hbar} \dfrac{e^{i(\omega_{fi}-\omega -i\varepsilon )t}}{\omega_{fi}-\omega -i\varepsilon} \langle f|V|i\rangle \label{9.5.24}\]
so \[ |c_f(t)|^2=\dfrac{1}{\hbar^2}\dfrac{e^{2\varepsilon t}}{(\omega_{fi}-\omega )^2+\varepsilon^2} |\langle f|V|i\rangle |^2 \label{9.5.25}\]
and the time rate of change
\[ \dfrac{d}{dt}|c_f(t)|^2=\dfrac{1}{\hbar^2}\dfrac{2\varepsilon e^{2\varepsilon t}}{(\omega_{fi}-\omega )^2+\varepsilon^2}|\langle f|V|i\rangle |^2 . \label{9.5.26}\]
In the limit \(\varepsilon \to 0\), the function
\[ \dfrac{2\varepsilon}{(\omega_{fi}-\omega )^2+\varepsilon^2}\to 2\pi \delta (\omega_{fi}-\omega ) \label{9.5.27}\]
giving the Golden Rule again (Equation \ref{9.5.22}).
Harmonic Perturbations: Second-Order Transitions
Sometimes the first order matrix element \(\langle f|V|i\rangle\) is identically zero (parity, Wigner-Eckart, etc.) but other matrix elements are nonzero—and the transition can be accomplished by an indirect route. In the notes on the interaction representation, we derived the probability amplitude for the second-order process,
\[ c^{(2)}_f(t)=\left(\dfrac{1}{i\hbar}\right)^2\sum_n\int_0^t \int_0^{t′}dt′dt′′e^{-i\omega_f(t-t′)}\langle f|V_S(t′)|n\rangle e^{-i\omega_n(t′-t′′)}\langle n|V_S(t′′)|i\rangle e^{-i\omega_it′′}. \label{9.5.28}\]
Taking the gradually switched-on harmonic perturbation
\[V_S(t)=e^{\varepsilon t}Ve^{-i\omega t}\]
and the initial time \(-\infty\), as above,
\[ c^{(2)}_f(t)=\left(\dfrac{1}{i\hbar}\right)^2 \sum_n\langle f|V|n\rangle \langle n|V|i\rangle e^{-i\omega_ft}\int_{-\infty}^{t}dt′\int_{-\infty}^{t′}dt′′ e^{i(\omega_f-\omega_n-\omega -i\varepsilon )t′}e^{i(\omega_n-\omega_i-\omega -i\varepsilon )t′′}. \label{9.5.29}\]
Exactly as in the first-order Golden Rule, we can find the transition rate:
\[ \dfrac{d}{dt}|c^{(2)}_f(t)|^2=\dfrac{2\pi}{\hbar^4}\left| \sum_n\dfrac{\langle f|V|n\rangle \langle n|V|i\rangle}{\omega_n-\omega_i-\omega -i\varepsilon} \right|^2\delta (\omega_f-\omega_i-2\omega ). \label{9.5.30}\]
(The \(\hbar^4\) in the denominator goes to \(\hbar\) on replacing the frequencies \(\omega\) with energies \(E\), both in the denominator and the delta function, remember that if \(E=\hbar \omega ,\; \delta (\omega )=\hbar \delta (E)\). )
This is a transition in which the system gains energy \(2\hbar \omega\) from the beam, in other words two photons are absorbed, the first taking the system to the intermediate energy \(\omega_n\), which is short-lived and therefore not well defined in energy—there is no energy conservation requirement into this state, only between initial and final states.
Of course, if an atom in an arbitrary state is exposed to monochromatic light, other second order processes in which two photons are emitted, or one is absorbed and one emitted (in either order) are also possible. |
I am reviewing some concepts in statistical mechanics and am becoming confused with how to calculate probabilities when a system has $N$ non-interacting particles.
For instance, let's say we have $N$ electrons with magnetic moment $\vec{\mu} = (g e/2 m)\vec{S}$. If we apply a strong magnetic field parallel to $\vec{S}$, then
$$ E = - \vec{\mu} \cdot \vec{B} = \pm \frac{g e \hbar}{4 m} = E_{\pm}$$
depending on the orientation of the spin of the electron. Therefore, the partition function for one electron is simply
$$ \xi = 2 \cosh \left(\frac{g e \hbar B}{4 m k T}\right) $$
And the probability to find the electron with spin parallel to the magnetic field is simply $e^{-\beta E_{+}}/Z$. So far, so good.
However, what happens when we have $N$ such electrons? Statistical mechanics says that the partition function for the system is now
$$ Z = \frac{\xi^N}{N!} $$
if we assume that the electrons do not interact with each other.
This is where I get confused. Now, if we want to find the probability that 75% of the electrons have energy $E_+$, then the Boltzmann argument doesn't hold anymore:
$$ \frac{0.75 N }{N} \neq \frac{e^{-\beta E_{+}}}{Z} $$
If the Boltzmann ratio doesn't hold, how can one proceed to calculate the aforementioned probability? |
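A hedged sketch of one standard way to proceed (an assumption of mine, not stated in the question): for $N$ independent spins the number with energy $E_+$ is binomially distributed with single-spin probability $p=e^{-\beta E_{+}}/\xi$, so the Boltzmann ratio gives the mean fraction while fluctuations around it are binomial.

```python
from math import comb, exp, cosh

def prob_fraction(N, frac, eps, T):
    """P(a fraction `frac` of N spins has energy E_+ = +eps), with k_B = 1."""
    p = exp(-eps / T) / (2 * cosh(eps / T))   # single-spin Boltzmann probability
    k = round(frac * N)
    return comb(N, k) * p**k * (1 - p)**(N - k)

# Illustrative numbers only: eps stands for g*e*hbar*B/(4m).
print(prob_fraction(N=100, frac=0.75, eps=0.5, T=1.0))
```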
This is a bordism problem, and as such can be answered using algebraic topology. I'll answer in the unoriented setting, then indicate how to modify things if $M$ and $W$ are required to be oriented.
Complex line bundles $\mathcal{L}$ over $W$ are classified by maps $f:W\to BU(1)\simeq \mathbb{C}P^{\infty}$. We want to decide if there is a $4$-manifold $M$ with a map $F:M\to BU(1)$ such that $\partial M=W$ and $F|_{\partial M}=f$.
We can define an equivalence relation called bordism on the set of pairs $(W,f)$ where $W$ is a closed $3$-manifold and $f:W\to BU(1)$ is a continuous map. Two such pairs $(W_0,f_0)$ and $(W_1,f_1)$ are bordant if there is a pair $(M,F)$ consisting of a compact $4$-manifold $M$ with $\partial M = W_0\sqcup W_1$ and a map $F:M\to BU(1)$ satisfying $F|_{\partial M} = f_0\sqcup f_1$.
The set of equivalence classes $[W,f]$, denoted $\mathfrak{M}_3(BU(1))$, becomes an abelian group under the operation of disjoint union. The zero element is represented by the empty $3$-manifold. This group is a homotopy invariant, and so $\mathfrak{M}_3(BU(1))\cong \mathfrak{M}_3(\mathbb{C}P^\infty)$.
All of this is fairly standard, and can of course be generalised. A classic reference is Conner and Floyd's Differentiable periodic maps.
Eventually we see that Witten's claim is equivalent to the group $\mathfrak{M}_3(BU(1))$ being trivial. There may be more elementary ways to see this, but an algebraic topologist would use the following spectral sequence argument. Let $\mathfrak{M}_q$ denote the group of closed $q$-manifolds up to bordism (the same equivalence relation as above, but without the maps to $BU(1)$). These groups have been computed by Thom and others. All we need to know here is that $\mathfrak{M}_q\cong \mathbb{Z}/2,0,\mathbb{Z}/2,0$ for $q=0,1,2,3$.
There is a spectral sequence, called the Atiyah-Hirzebruch spectral sequence or the unoriented bordism spectral sequence, whose $E^2$-term is $E^2_{p,q} = H_p(BU(1),\mathfrak{M}_q)$ and which converges to $\mathfrak{M}_{p+q}(BU(1))$. Since $BU(1)\simeq \mathbb{C}P^\infty$ has homology concentrated in even degrees, we see that the groups $H_p(BU(1),\mathfrak{M}_q)$ are zero for $p+q=3$, and it follows that $\mathfrak{M}_3(BU(1))\cong 0$ as claimed.
The same argument works for oriented bordism, since the low dimensional oriented bordism groups are $\Omega_q\cong \mathbb{Z}, 0 , 0, 0$ for $q=0,1,2,3$. It also works for oriented rank $2$ real bundles, since $BSL(2,\mathbb{R})\simeq BU(1)\simeq \mathbb{C}P^\infty$. |
In this paper http://arxiv.org/abs/hep-th/9705122 Section 2
We have $$S_A = \frac{1}{4g^2} \int{d^4x F_{\mu\nu}(A)F^{\mu\nu}(A)}$$
where $F_{\mu\nu}(A) = \partial_{[\mu}A_{\nu]}$. Its Bianchi identity is $\partial_\mu {*F}^{\mu\nu}=0$ (note that $*$ represents the Hodge dual).
Great. Now the author went to parent action:
$$S_{F,\Lambda} = \int{d^4x(\frac{1}{4g^2} F_{\mu\nu} F^{\mu\nu} +a \Lambda_\mu \partial_\nu *F^{\nu\mu}} )$$
He first varied it w.r.t $\Lambda_\mu$ and then w.r.t $F_{\mu\nu}$.
1)He got in the first case, $\partial_\mu *F^{\mu\nu} = 0$ and thus he mentioned that our parent action reduces to $S_A$.
2)He got in the second case, $$\frac{1}{2g^2} F^{\mu\nu} = \frac{a}{2} \partial_\rho \Lambda_\sigma \epsilon^{\rho \sigma \mu \nu}= \frac{a}{2} *G^{\mu\nu}$$ and thus he mentioned that now
plugging this back into the action:$$S_{F,\Lambda} \rightarrow S_{\Lambda} = \frac{-g^2a^2}{4} \int{d^4x *G_{\mu\nu} *G^{\mu\nu}}$$He then said knowing that $*G_{\mu\nu} *G^{\mu\nu}=-2G_{\mu\nu} G^{\mu\nu}$ We obtain perfectly $$S_\Lambda = \frac{g^2}{4} \int{d^4x G_{\mu\nu}(\Lambda)G^{\mu\nu}(\Lambda)}$$
And so this is duality with the coupling constants inversed. Perfect.
My questions:
A) When he said plugging this back into the action above (in italic). He plugged it in the first term of the parent action. What about the second term? Did he throw it away?
B) $\frac{1}{2g^2} F^{\mu\nu} = \frac{a}{2} \partial_\rho \Lambda_\sigma \epsilon^{\rho \sigma \mu \nu}= \frac{a}{2} *G^{\mu\nu}$ Where did this relation come from (The first and the second equality)? |
PRIMA-PARC-PIMS meeting on PDEs
Start Date: 12/07/2010
End Date: 12/08/2010
Sun Sig Byun, Seoul National University
Alessio Figalli, University of Texas - Austin (lecturer for two expository talks)
Stephen Gustafson, University of British Columbia
Lami Kim, Seoul National University
Seick Kim, Yonsei University
Ki-Ahm Lee, Seoul National University
Yong Jung Kim, KAIST, Korea
Tai-Peng Tsai, University of British Columbia
University of British Columbia
This meeting is to stimulate the communication between PDE research groups affiliated with PARC (Korea) and PIMS (Canada/US).
Abstracts:
Speaker: Sun Sig Byun (Seoul National University)
Title: Gradient estimates for nonlinear parabolic systems of p-Laplacian type
Abstract: We discuss nonlinear gradient estimates for parabolic systems of p-Laplacian type with measurable coefficients in irregular domains.
/////
Speaker: Alessio Figalli (University of Texas - Austin)
Note: He will give two expository lectures
Title: DiPerna-Lions theory for ordinary differential equations and applications to semiclassical limits
ABSTRACT: At the beginning of the 90's, DiPerna and Lions developed a well-posedness theory for ordinary differential equations with Sobolev vector fields, which (roughly speaking) states the following: if $b(t)$ is a time-dependent Sobolev vector field then, for a.e. $x_0$, there exists a unique solution to the ODE $\dot x=b(t,x)$ starting from $x_0$ (this is a kind of a.e. version of the classical Cauchy-Lipschitz result). In 2004, Ambrosio extended this result to BV vector fields. The aim of these lectures is to review these results, and to show recent applications to semiclassical limits for the Schrodinger equation.
/////
Speaker: Stephen Gustafson (UBC)
Title: Singularities and asymptotics for some geometric nonlinear Schroedinger equations
Abstract: I will describe some recent results on singularity (non-)formation and stability, in the energy-critical 2D setting, for some nonlinear Schroedinger-type systems of geometric origin -- the Schroedinger map and Landau-Lifshitz equation -- which model dynamics in ferromagnets and liquid crystals.
/////
Speaker: Ki-Ahm Lee (Seoul National University)
Title: Regularity theory for Nonlinear Nonlocal equations with non-symmetric kernels
Abstract: Nonlocal equations come from the infinitesimal generators of given purely jump processes, and nonlinear nonlocal equations can be derived from stochastic control theory or game theory based on the jump process. Luis A. Caffarelli and Luis Silvestre showed various regularities when the kernel is symmetric. In this talk, we will discuss the main difficulties caused by the fact that the kernel is non-symmetric. Several different versions of A-B-P estimates will be discussed to overcome the difficulties in various ranges of the parameter $\sigma$ related to weights of kernels.
/////
Speaker: Seick Kim (Yonsei University)
Title: Elliptic systems with measurable coefficients of the type of Lam\'e system in three dimensions
Abstract: We study the $3 \times 3$ elliptic systems $\nabla\times (a(x) \nabla\times u)-\nabla (b(x) \nabla \cdot u)=f$, where the coefficients $a(x)$ and $b(x)$ are positive scalar functions that are measurable and bounded away from zero and infinity. We prove that weak solutions of the above system are H\"older continuous under some minimal conditions on the inhomogeneous term $f$. We also present some applications and discuss several related topics including estimates of the Green's functions and the heat kernels of the above systems.
/////
Speaker: Lami Kim (Seoul National University)
Title: Evolution of hypersurfaces under the scalar curvature flow
Abstract: We study the evolution of hypersurfaces whose deforming rate in the normal direction at each point is proportional to the scalar curvature. We will present the $C^{1,1}$-regularity and the convexity of a convex hypersurface deforming under the scalar curvature flow before collapsing, and we also discuss the preservation of ellipticity of a 3-dimensional non-convex hypersurface in $R^4$ evolved under the same flow.
/////
Speaker: Yong-Jung Kim (KAIST)
Title: Generalization of Oleinik and Aronson-Benilan type one-sided inequalities
Abstract: The one-sided Oleinik inequality provides the uniqueness and a sharp regularity of solutions to a scalar conservation law if its flux is convex. The Aronson-Benilan type inequalities are also one-sided and play a similar role for solutions to the porous medium or $p$-Laplacian type equations. In this talk we will discuss that these inequalities reflect the common feature of nonnegative solutions to a wide class of evolutionary equations in the form of
$$ u_t=\sigma(t,u,u_x,u_{xx}),\quad u(x,0)=u^0(x)\ge0,\quad t>0,\,x\in\mathbb{R}, $$ where $u^0(x)\ge0$ is bounded and ${\partial\over\partial q} \sigma(t,u,p,q)\ge0$.
In this talk we will see that the zero level set $A(t,x_0,m):=\{x:\rho_m(x-x_0,t)-u(x,t)\ge0\}$ is connected for all $t,m>0$ and $x_0\in\mathbb{R}$, where $\rho_m$ is the fundamental solution of mass $m>0$. It will be discussed that this geometric structure gives a generalization of the previously mentioned one-sided inequalities.
/////
Speaker: Tai-Peng Tsai (UBC)
Title: Small solutions of nonlinear Schr\"odinger equations near first excited states
Abstract: Consider a nonlinear Schr\"odinger equation in $\mathbb{R}^3$ whose linear part has three or more eigenvalues satisfying some resonance conditions. Solutions which are initially small in $H^1 \cap L^1(\mathbb{R}^3)$ and inside a neighborhood of the first excited state family are shown to converge to either a first excited state or a ground state at time infinity. An essential part of our analysis is on the linear and nonlinear estimates near nonlinear excited states, around which the linearized operators have eigenvalues with nonzero real parts and their corresponding eigenfunctions are not uniformly localized in space. This is a joint work with Kenji Nakanishi and Tuoc Van Phan.
Venue: MATX 1100
December 7th, 2010 (Tue)
09:00 – 10:00 Alessio Figalli
10:00 – 10:30 break
10:30 – 11:30 Ki-Ahm Lee
11:30 – 12:30 Seick Kim
12:30 – 14:00 Lunch
14:00 – 15:00 Lami Kim
15:00 – 15:30 break
15:30 – 16:30 Stephen Gustafson
18:00 Banquet
December 8th, 2010 (Wed)
09:00 – 10:00 Alessio Figalli
10:00 – 10:30 break
10:30 – 11:30 Sun Sig Byun
11:30 – 12:30 free time
12:30 – 14:00 lunch
14:00 – 15:00 Yong Jung Kim
15:00 – 15:30 break
15:30 – 16:30 Tai-Peng Tsai
Nassif Ghoussoub (University of British Columbia)
Tai-Peng Tsai (University of British Columbia)
Stephen Gustafson (University of British Columbia)
Young-Heon Kim (University of British Columbia) |
Problem 343
Let $G$ be a finite group and let $N$ be a normal abelian subgroup of $G$.
Let $\Aut(N)$ be the group of automorphisms of $N$.
Suppose that the orders of groups $G/N$ and $\Aut(N)$ are relatively prime.
Then prove that $N$ is contained in the center of $G$.
Problem 332
Let $G=\GL(n, \R)$ be the
general linear group of degree $n$, that is, the group of all $n\times n$ invertible matrices. Consider the subset of $G$ defined by \[\SL(n, \R)=\{X\in \GL(n,\R) \mid \det(X)=1\}.\] Prove that $\SL(n, \R)$ is a subgroup of $G$. Furthermore, prove that $\SL(n,\R)$ is a normal subgroup of $G$. The subgroup $\SL(n,\R)$ is called the special linear group.
Problem 322
Let $\R=(\R, +)$ be the additive group of real numbers and let $\R^{\times}=(\R\setminus\{0\}, \cdot)$ be the multiplicative group of real numbers.
(a) Prove that the map $\exp:\R \to \R^{\times}$ defined by \[\exp(x)=e^x\] is an injective group homomorphism.
(b) Prove that the additive group $\R$ is isomorphic to the multiplicative group \[\R^{+}=\{x \in \R \mid x > 0\}.\] |
there are n derivative filters: $f_i$, and denote $f_i^r$ as $f_i$'s reverse filter such that$$f_i(x,y)=f_i^r(-x, -y)$$$r_i, f_i$ given, to find $r$ from the equations:$$f_i * r = r_i, (1 \leq i \...
This article here talks about a room impulse response generator which takes the source and destination coordinates, sound speed, sampling rate, room dimension and wall reflection coefficients as input ...
I have particle activity as shown in the left pane of animation below. The activity is clustered and it moves slowly. Sometimes these clusters merges together. On the right side of it, its shuffled ...
I have the measured signal $y(t)$ that can be modeled in the frequency domain as $Y(f)$:$$Y(f) = X(f)\cdot A(f) - [X(f)\cdot B(f)] \ast C(f)$$where $\ast$ is the convolution.I know $A(f)$, $B(f)$,...
Is there any way to synchronise the phase of the output of the inverse DFT in each buffer? When I send the output of the inverse DFT to the speaker it sounds nasty. Of course I know the "windowing functions" but it ...
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin \theta=x=\pm \sqrt{R^2-b^2}\, \sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is.
Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$; the point $z = 0$ is: a removable singularity, a pole, an essential singularity, or a non-isolated singularity? Since $\cos(\frac{1}{z}) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \dots$ $$ = (1-y), where\ \ y=\frac{1}{2z^2}+\frac{1}{4!...
I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand, it is when: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would $...
No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...
The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?
mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function on it
Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true:
(1) If $x=y$ then $x\sim y$.
(2) If $x=y$ then $y\sim x$.
(3) If $x=y$ and $y=z$ then $x\sim z$.
Basically, I think that all the three properties follows if we can prove (1) because if $x=y$ then since $y=x$, by (1) we would have $y\sim x$ proving (2). (3) will follow similarly.
This question arised from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$.
I don't know whether this question is too much trivial. But I have yet not seen any formal proof of the following statement : "Let $X$ be any nonempty set and $∼$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."
That is definitely a new person, not going to classify as RHV yet as other users have already put the situation under control it seems...
(comment on many many posts above)
In other news:
> C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000 C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999 C -3.6485676800000002 0.0734728100000000 -1.4738058999999999 C -2.9689624299999999 0.9078326800000001 -0.5942069900000000 C -2.0858929200000000 0.3286240400000000 0.3378783500000000 C -1.8445799400000003 -1.0963522200000000 0.3417561400000000 C -0.8438543100000000 -1.3752198200000001 1.3561451400000000 C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I have ever seen, with so many 000000s and 999999s
But I think that to prove the implication for transitivity the inference rule an use of MP seems to be necessary. But that would mean that for logics for which MP fails we wouldn't be able to prove the result. Also in set theories without Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?
@AlessandroCodenotti A precise formulation would help in this case because I am trying to understand whether a proof of the statement which I mentioned at the outset depends really on the equality axioms or the FOL axioms (without equality axioms).
This would allow in some cases to define an "equality like" relation for set theories for which we don't have the Axiom of Extensionality.
Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$. The context is Taylor polynomials, so when $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.
@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.
@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.
Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdot\cdot\cdot+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1.$ I have to prove that $f(z)=z^{n}.$I tried it asAs $|f(z)|\leq 1$ for $|z|\leq 1$ we must have coefficient $a_{0},a_{1}\cdot\cdot\cdot a_{n}$ to be zero because by triangul...
@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested when $x\to 0$? When we do a taylor approximation cantered at x=0, aren't we interested in all the values of our approximation, even those not near 0?
Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$. |
Inequality constraints in an SDP can be turned into equality constraints by introducing non-negative slack variables. If your original problem is
$\max \Sigma_{i=1}^{n} z_{i}$
subject to
$\mbox{tr}(P_{i}X) \geq z_{i}\;\;$ $i=1, 2, \ldots, n$
$ X \succeq 0$
You can rewrite the constraints as
$\mbox{tr}(P_{i}X) - z_{i} \geq 0\;\;$ $i=1, 2, \ldots, n.$
You can then introduce slack variables $w_{i}$, $i=1, 2, \ldots, n$ and write the constraints as
$\mbox{tr}(P_{i}X) - z_{i} -w_{i} = 0\;\;$ $i=1, 2, \ldots, n.$
where $w_{i} \geq 0$, $i=1, 2, \ldots, n$.
I'll assume for this answer that the $z_{i}$ variables are also nonnegative. If they're free to be negative, you could handle this by writing $z_{i}=r_{i}-s_{i}$, where $r, s \geq 0$.
Let $Z=\mbox{diag}(z)$ and $W=\mbox{diag}(w)$. Let
$V=\left[\begin{array}{ccc}X & 0 & 0 \\0 & Z & 0 \\0 & 0 & W \\\end{array}\right]$.
Let
$C=\left[\begin{array}{ccc}0 & 0 & 0 \\0 & I & 0 \\0 & 0 & 0 \\\end{array}\right]$.
Note that $\mbox{tr}(CV)=\sum_{i=1}^{n} z_{i}$.
Let
$A_{i}=\left[\begin{array}{ccc}P_{i} & 0 & 0 \\0 & -E_{i,i} & 0 \\0 & 0 & -E_{i,i} \\\end{array}\right]\;\;$ $i=1, 2, \ldots, n$.
Here $E_{i,i}$ is the 0 matrix with a single $1$ in the $(i,i)$ position.
Now the original problem can be written in standard form as
$\max \mbox{tr}(CV)$
subject to
$\mbox{tr}(A_{i}V)=b_{i}\;\;$ $i=1, 2, \ldots, n$
$V \succeq 0$.
Note that $V$ is a block diagonal matrix, and that furthermore, $Z$ and $W$ are diagonal blocks. In the SDPA sparse matrix format it is easy to encode $X$ as an $n$ by $n$ positive semidefinite block and $Z$ and $W$ as $n$ by $1$ vectors of non-negative variables. The additional storage required for $Z$ and $W$ is minimal.
The constraint $V \succeq 0$ ensures that all of the blocks on the diagonal of $V$ are positive semidefinite. In particular, $X \succeq 0$, $z \geq 0$, and $w \geq 0$.
I assume that you probably have additional linear constraints on $X$ and $z$. These can easily be added to the formulation. |
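A minimal numpy sketch of this block construction (assembly only; the solver call and any extra linear constraints are left out, and the function name is mine):

```python
import numpy as np

def standard_form_data(P):
    """Build C, A_i, b for max tr(CV) s.t. tr(A_i V) = b_i, V >= 0,
    with V = diag(X, Z, W) as described above. P has shape (n, m, m)."""
    n, m = P.shape[0], P.shape[1]
    dim = m + 2 * n
    C = np.zeros((dim, dim))
    C[m:m + n, m:m + n] = np.eye(n)        # tr(CV) = sum_i z_i
    A, b = [], np.zeros(n)
    for i in range(n):
        Ai = np.zeros((dim, dim))
        Ai[:m, :m] = P[i]                  # tr(P_i X) block
        Ai[m + i, m + i] = -1.0            # -z_i
        Ai[m + n + i, m + n + i] = -1.0    # -w_i slack
        A.append(Ai)
    return C, A, b

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 4, 4))
P = (P + P.transpose(0, 2, 1)) / 2         # symmetrize each P_i
C, A, b = standard_form_data(P)
```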
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV are studied as a function of the ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP). Such an exotic state of strongly interacting ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum $\langle p_T \rangle$ versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ... |
In the book Schutz on general relativity, I have come across the dot product between vectors, the action of a dual vector on a vector (or also a tensor on vectors) and the tensor product between dual vectors and vectors. I am not able to understand the difference between the three distinctively. Kindly help. Try to keep it simple and not too mathematical. Still a beginner.
Unfortunately this is mathematical because it is linear algebra. Still, I'll try to make it as simple as possible.
Dot Product, also known as Inner Product
The dot product is the usual product from basic geometry. Let us work on $\mathbb{R}^3$, the euclidean three-dimensional space. Given two vectors sitting on the origin, $v,w\in \mathbb{R}^3$ we define the dot product between them to be:
$$v\cdot w = v_xw_x+v_yw_y+v_zw_z$$
The reason is that this gives information about lengths and angles. First, the length of a vector, which is the distance between the origin and the point it reaches, is simply $|v|=\sqrt{v\cdot v}$; the angle between two vectors is characterized by $v\cdot w = |v| |w| \cos\theta$.
The inner product ends up being the name of the generalization of this. First one can readily generalize to an euclidean space of arbitrary dimension $\mathbb{R}^n$ by creating the inner product between $v,w\in \mathbb{R}^n$ as
$$\langle v,w\rangle=\sum_{i} v_i w_i$$
and defining the length and angle by the corresponding formulas. The inner product can be further abstracted. One can define it by axioms: saying what properties it is expected to have. And this generalizes even further to more abstract spaces. The bottom line is: the inner product gives geometric information such as angles and lengths of vectors.
Important to understand also, the inner product gives you the idea of projecting one vector in the direction of the other. Actually, if $w$ is a unit vector, $|w|=1$, then $\langle v,w\rangle$ is the projection of $v$ in the direction of $w$. You can check this in $\mathbb{R}^3$ to be according to our intuition.
The action of a covector on a vector
Now, forget $\mathbb{R}^n$. Consider instead one vector space $V$ over the real numbers. Associated to every such vector space, there is the so-called dual space denoted $V^\ast$. It is defined as the space of all linear functions that take vectors and deliver numbers. In symbols, a member of $V^\ast$ is a function $f : V \to \mathbb{R}$; it takes an element of the set to the left of the arrow and gives one element to the right of the arrow. Furthermore, $f$ is linear, meaning that $f(\alpha v + \beta w) = \alpha f(v) + \beta f(w)$. All such functions comprise $V^\ast$.
It turns out that if $V$ has an inner product $\langle,\rangle$ defined on it, you can build elements of $V^\ast$ with it. You just fix one arbitrary vector $w\in V$, and define $f_w : V\to \mathbb{R}$ to be
$$f_w(v)=\langle v,w\rangle$$
What is $f_w$? It is a function that takes a vector and dots it with the fixed $w$, so it projects. Actually, independently of the existence of an inner product we can think of all elements of $V^\ast$ as things that take vectors and extract projections. This is the way to understand $V^\ast$ and covectors in general.
By the way, if $V$ is finite dimensional and has one inner product, as it will be in general relativity, this works the other way around too. All covectors are of this form: the inner product with a certain vector.
Importantly enough, if $\{e_i\}$ is a basis for $V$, there is one special basis for $V^\ast$ called dual basis of $\{e_i\}$ defined as all elements $\varphi^i : V\to \mathbb{R}$ such that $\varphi^i(e_j)=\delta^i_j$.
The Tensor Product
The tensor product is altogether different. There is one very general and abstract definition which depends on the so-called universal property. It states basically the following: we want the most general way to multiply vectors together and manipulate these products obeying some reasonable assumptions.
We won't follow this route. For differential geometry (which is the language of general relativity), there is another definition which ends up being equivalent to this one for the case of interest.
Given a vector space $V$, we know $V^\ast$, the space of all linear functions $f : V\to \mathbb{R}$. We also know the inner product, which is a function that takes two vectors and delivers a number. We symbolically write $\langle,\rangle : V\times V\to \mathbb{R}$ because it takes two vectors to a number.
Furthermore, if we fix a vector $w\in V$ into either entry of $\langle,\rangle$, the other slot is linear. Such a map is called bilinear. It is linear in each entry with the others held fixed with something in them.
We define a tensor of type $(r,0)$ to be the generalization to $r$ vectors. It takes $r$ vectors to a number, and is linear in each entry with the others held fixed. We write $T : V\times\cdots \times V \to \mathbb{R}$ with $r$ copies of $V$. The space of tensors of type $(r,0)$ is denoted $T_r^0 (V)$ and obviously $T_1^0(V)=V^\ast$.
Now, if you have $f,g\in V^\ast$ it is quite obvious that $T(v,w)=f(v)g(w)$ is an element of $T_2^0(V)$. If $f\in V^\ast$ and $h\in T_2^0(V)$ for example, we also have $T(v_1,v_2,v_3)=f(v_1)h(v_2,v_3)$ one element of $T_3^0(V)$.
The generalization is the tensor product. It is one operation that concatenates together two tensors in this way. It can be written as $\otimes : T_r^0(V)\times T_s^0(V)\to T_{r+s}^0(V)$ and is defined so that if $T\in T_r^0(V)$ is a $(r,0)$ type tensor and $S\in T_s^0(V)$ is an $(s,0)$ type tensor, then
$$T\otimes S(v_1,\dots,v_r,w_1,\dots,w_s)=T(v_1,\dots,v_r)S(w_1,\dots,w_s).$$
Now a theorem says that given a basis $\{e_i\}$ of $V$ and the dual basis $\{\varphi^j\}$ of $V^\ast$, then for every $r\in \mathbb{N}$, the set of all products of $r$ elements $\varphi^j$ is a basis of $T_r^0(V)$. For example, $\{\varphi^i\otimes \varphi^j\}$ is a basis of $T_2^0(V)$.
Differences and relations
Each product has its use. Inner products give geometric ideas like projections, angles and lengths. Covectors when applied to vectors can also be thought of as giving some sort of projection, but they don't give alone a notion of angles and lengths. While inner products must be postulated (a vector space can carry an inner product or not), all vector spaces come with covectors together in the dual.
The tensor product is a more general multiplication of vectors that allows one to build a tensor algebra. But for differential geometry, tensors are to be thought as multilinear maps of a number of vectors. In this setting the tensor products allows to build higher types tensors by putting together other ones of lower types.
Interestingly, since the inner product itself is a $(2,0)$ tensor, and since we can form a basis for $T_2^0(V)$ using covectors and the tensor product, we see that the inner product ends up being a linear combination of products of covectors. It is, thus, build out of covectors really.
The tensor product combines two lower rank tensors into a higher rank one. For example, you can put two vectors $v^a$ and $w^b$ together to create a rank-2 tensor $v^aw^b$, which can be thought as a matrix. In this particular example, the tensor product is essentially the direct product of two vectors. You can generalize this idea to higher rank tensors straightforwardly.
The dot product combines two vectors into a scalar (a number). It is actually the inner product. You need a metric $g_{ab}$ to do so.
The action of a dual vector $\omega_a$ on a vector $v^a$ also results in a scalar. The difference is that the metric tensor is not needed.
If a metric tensor $g_{ab}$ exists, the above two operations are related. If a vector field $w^a$ is given, you can construct a dual vector $\omega_a=g_{ab}w^b$, and the action of $\omega_a$ on a vector $v^a$ is actually the inner product between $v^a$ and $w^a$: $\omega_av^a=g_{ba}w^bv^a$. |
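These relations are easy to verify numerically; here is a minimal sketch (assuming numpy, with an arbitrary made-up metric and vectors) showing the lowering of an index, the agreement between $\omega_a v^a$ and the inner product, and the outer-product form of $v^a w^b$.

```python
import numpy as np

# A (symmetric, positive-definite) metric g_ab and two vectors.
g = np.array([[2.0, 0.5],
              [0.5, 1.0]])
v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])

omega = g @ w            # covector omega_a = g_ab w^b ("lowering the index")
print(omega @ v)         # action of the covector on v ...
print(v @ g @ w)         # ... equals the inner product g_ab v^a w^b

# The tensor product v^a w^b is just the outer product (a rank-2 tensor):
T = np.outer(v, w)
print(T)
```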
Well, these are very fundamental questions, and I think they are answered in any good book. But maybe it's hard to extract from them the exact information you ask for. So, let me briefly answer:
In wireless, the channel is usually modelled as an FIR filter with impulse response $h(\tau)$. But this channel might change over time (due to e.g. mobility), making it a time-variant impulse response $h(t,\tau)$. Moreover, the channel is often modeled as a tapped delay line, $h(t,\tau)=\sum_ia_i(t)\delta(\tau-\tau_i(t))$, where $a_i(t)$ are the time-varying gains of the taps and $\tau_i(t)$ are the time-varying delays of the taps. The fact that $a_i(t)$ has complex values does not make it Rayleigh fading. Rayleigh fading describes the temporal distribution of $a_i(t)$: if over a long time range each path gain is complex Gaussian distributed, its magnitude is Rayleigh distributed and we speak of Rayleigh fading. The Doppler spread of the Rayleigh fading describes how quickly the value $a_i(t)$ changes over time. In the special case of block fading, $a_i(t)$ is constant (the channel remains static) for a whole transmission; when a new transmission starts, a new channel realization is generated. If the realizations $a_i$ are drawn from a complex Gaussian distribution, this is block Rayleigh fading.
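A minimal sketch of block Rayleigh fading tap generation (assuming Python with numpy; the number of taps, the block count and the exponential power-delay profile are made-up choices): each tap gain is drawn once per block from a circularly-symmetric complex Gaussian, so its magnitude is Rayleigh distributed.

```python
import numpy as np

rng = np.random.default_rng(0)

n_taps, n_blocks = 3, 1000
# Average power of each tap (an assumed exponential power-delay profile).
tap_power = np.exp(-np.arange(n_taps))
tap_power /= tap_power.sum()

# Block Rayleigh fading: one complex circularly-symmetric Gaussian
# realization a_i per tap per block; |a_i| is then Rayleigh distributed.
a = (rng.normal(size=(n_blocks, n_taps)) +
     1j * rng.normal(size=(n_blocks, n_taps))) * np.sqrt(tap_power / 2)

print(np.mean(np.abs(a)**2, axis=0))  # estimated tap powers ~ tap_power
```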
A multipath channel corresponds to the tapped-delay line model. Have a look at one of my articles about multipath propagation, maybe it helps. The number of inputs and outputs depends on the number of TX and RX antennas of your system (talking about MIMO systems here) or the number of users. For single-antenna systems there is just one input and one output.
The input to the physical channel is analog, if you model it as a tapped delay line. However, you can assume transmission in passband, where all signals are real; or you can do an equivalent analysis in baseband, where all signals are complex but the math is easier to handle. For a description of the transition from baseband to passband, you can look at another article of mine.
There are also other channel models such as the BSC or BEC, but these are not used to model the transmission of an analog signal over the wireless physical channel; they abstract away from it.
Modulation is performed after encoding ;-). Modulation means to generate a signal which is adapted to the channel, as a function of the transmit bits. It can e.g. include mapping the bits to complex QAM constellation points.
Encoding takes the bits you want to transmit (the payload) and adds redundancy via a channel code. This way, if a bit is lost at the receiver side, it can be recovered by appropriate decoding.
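A toy end-to-end sketch of this ordering (assuming numpy; the repetition code and Gray-mapped QPSK are deliberately simple stand-ins, not what any particular standard uses): encoding first adds redundancy, then modulation maps the coded bits to complex constellation points.

```python
import numpy as np

def encode(bits, r=3):
    """Toy rate-1/r repetition code: adds redundancy to the payload."""
    return np.repeat(bits, r)

def modulate_qpsk(bits):
    """Map bit pairs to Gray-coded QPSK points (unit average energy)."""
    b = bits.reshape(-1, 2)
    return ((1 - 2*b[:, 0]) + 1j*(1 - 2*b[:, 1])) / np.sqrt(2)

payload = np.array([1, 0, 1, 1])
coded = encode(payload)          # 12 coded bits
symbols = modulate_qpsk(coded)   # 6 complex QPSK symbols
print(symbols)
```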
Determine truth value of following:
$(1)\;\forall x P(x)$
$(2)\;\exists x Q(x)$
$(3)\;\forall x\, \exists y\;R(x,y)$
$(4)\;\exists x \,\forall y\;R(x, y)$
$(5)\;\forall x\,(\lnot Q(x))$
For $x, y \in \mathbb Z^+$ (meaning $x, y$ are positive integers):
Let $P(x): x$ is even; $\quad\;Q(x): x$ is a prime number; $\quad \;R(x, y): x+y$ is even.
My understanding: $P(x)$ holds for $x = 2, 4, 8, 10, \dots$ and $Q(x)$ holds for $x = 3, 7, 11, 13, 17, \dots$; I am not sure about $R(x,y)$.
Answers so far:
(1) False, as $x$ ranges over all positive integers and not all of them are even.
(2) True: there is at least one $x$ which is prime.
(3) ?
(4) ?
(5) False: $x$ ranges over the positive integers, and the negation of $P(x)$ gives the odd integers.
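A brute-force check over a finite range cannot prove a statement about all positive integers, but it is a quick sanity check and suggests the answers for (3) and (4); a minimal sketch, assuming Python with sympy available:

```python
from sympy import isprime

N = 200  # finite range standing in for the positive integers
xs = range(1, N + 1)

P = lambda x: x % 2 == 0            # x is even
Q = isprime                         # x is prime
R = lambda x, y: (x + y) % 2 == 0   # x + y is even

print(all(P(x) for x in xs))                        # (1) False
print(any(Q(x) for x in xs))                        # (2) True
print(all(any(R(x, y) for y in xs) for x in xs))    # (3) True (take y = x)
print(any(all(R(x, y) for y in xs) for x in xs))    # (4) False
print(all(not Q(x) for x in xs))                    # (5) False
```

For (3), taking $y = x$ makes $x+y = 2x$ even, so the statement is true outright; for (4), no single $x$ can work because $x+1$ and $x+2$ have different parities.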
I have a doubt as to in which frame of reference the Schrödinger equation is written? I think it is inertial but can't reason it out.
I'd like to add to Michael Brown's comment that "You can write Schrödinger's equation in any frame you like": it is quite literally true, and Schrödinger's equation has a meaning that is far more basic than frames of reference and so forth. As Trimok and Michael Brown say, Schrödinger's equation for certain physical systems does depend on frame if indeed transformation between reference frames makes sense for the physical system under consideration, but its fundamental "shape" is always the same, as discussed below. Actually, one has to specify quite a bewildering gathering of information about the scenario one is doing quantum mechanics in to give a full description: "pictures" (whether "Schrödinger", "Heisenberg", "Interaction", or otherwise), "coordinates" or "space" (whether "position" or "momentum" and so forth), and, if it is even at all relevant, the frame of reference in that space. This information is not always altogether clear from a discussion and specifications can be sloppy (especially, sadly, in some elementary texts), so your question is a good one.
Schrödinger's equation is simply the first order differential equation describing the time evolution of a "vector", representing the state of the quantum system, in some Hilbert space. That state can represent anything; it doesn't have to have anything to do with positions or motions in space. For example, if I want to make a quantum model of an inductor-capacitor resonant circuit, I would end up describing the system's state as a discrete sequence of complex numbers $\Psi = \{\psi_0, \psi_1, \cdots\}$, such that $\sum_j |\psi_j|^2 = 1$. $\psi_0$ is the probability amplitude that the system is in its quantum ground state, i.e. as close as one can get to "unenergised" without violating the Heisenberg inequality, $\psi_1$ the probability amplitude that the oscillator is in a one photon state, i.e. its energy is $\frac{\hbar}{\sqrt{L\,C}}$, $\psi_2$ the amplitude that it is in a two photon state, and in general $\psi_N$ the amplitude that it is in an $N$-photon state; or, if you like, the amplitude that it has had $N$ photons added to its ground state from somewhere outside the oscillator system. In this quantised resonant circuit, spatial positions are irrelevant. "Frame of reference" has no meaning here. Naturally, here the inductance and capacitance are respectively $L$ and $C$.
The Schrödinger equation is very general: it simply says that a quantum system's makeup and working is in some sense "constant" when the system is sundered from the rest of the World. This vague statement makes more sense in symbols: the mathematical description has to be invariant with respect to time shifts: if I begin with a quantum state at 12 o'clock and evolve it until 1 o'clock, then my state evolution is going to be the same as if I began with the same state at 4 o'clock and waited until five. Now, we assume linearity, so that our state vector (now written as a column vector) is going to evolve following some matrix equation: $\psi(t) = U(t) \psi(0)$, where state transition matrix $U(t)$ must:
1. Fulfil $U(t+s) = U(t) U(s) = U(s) U(t)$ for any time intervals $t$ and $s$. This is simply our discussion about time shift invariance above. Straight away we know $U(t) = \exp(A t)$ for some constant matrix $A$, as the exponential is the only continuous function with this time shift invariance property.
2. Be unitary: this means it must conserve norms, so that $\sum_j |\psi_j|^2 = 1$ holds at all times. This simply says that the system has to be in some state, owing to the probability interpretation of the squared magnitudes.
So the most general state evolution possible is $\psi(t) = \exp(-\hbar^{-1} i\, \hat{H}\, t)\,\psi(0)$, where $\hat{H}$ is a constant, Hermitian matrix (this is equivalent to the unitaryhood statement). This in turn is equivalent to:
$i\,\hbar\,\mathrm{d}_t \psi = \hat{H}\,\psi$
which is the Schrödinger equation (see footnote about the mysterious constants $\hbar$ and $i$). Hopefully the Schrödinger's equation's essential nature should now be clear:
The Schrödinger equation for a quantum system asserts the system's time shift invariance when that system is sundered from the rest of the World.
It is no more and no less than this idea, which you can see is much more basic than and independent of either "coordinates" or "frames of reference" (if indeed the latter is even meaningful). As you can see I have said nothing about space, let alone frames of reference. Different coordinates and frames of reference give rise to different constant matrices $\hat{H}$, but they're all constant and they're all Hermitian: I'll give a few examples when I go back to the inductor-capacitor example below.
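As a concrete check of the two properties above, here is a minimal numerical sketch (assuming Python with numpy and scipy; the 2-level Hamiltonian is an arbitrary made-up example, not tied to any particular physical system):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
# Any Hermitian matrix works as a Hamiltonian; a made-up 2-level example.
H = np.array([[1.0, 0.3 - 0.2j],
              [0.3 + 0.2j, -0.5]])

def U(t):
    """State transition matrix U(t) = exp(-i H t / hbar)."""
    return expm(-1j * H * t / hbar)

psi0 = np.array([1.0, 0.0])
psi_t = U(2.0) @ psi0

print(np.allclose(U(1.5) @ U(0.5), U(2.0)))   # time-shift property U(t+s) = U(t)U(s)
print(np.vdot(psi_t, psi_t).real)             # norm conserved: 1.0
```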
Schrödinger's equation is not the only way to make the above assertion of time shift invariance, which leads me to the discussion of "pictures", sometimes, highly unhelpfully, called "frames" or "frameworks". What you might be thinking about when you say "frame" is that sometimes it is easier to analyse a system's evolution in what is called the Heisenberg picture. In quantum mechanics the only "real" things are measurements, represented by observables, which are Hermitian matrices (operators). So the only "real" quantities are the moments of the probability distribution for the measured quantity: if the quantity is measured by an observable $\hat{M}$, then the $n$th moment of the probability distribution for the value of that measurement when the system state is $\psi$ is $\psi^\dagger \hat{M}^n\psi$ in matrix notation, or in bra-ket notation $\left<\psi|\hat{M}^n|\psi\right>$. One can think of the system's state as constant when the system is isolated and of the observables themselves as evolving in time. Since only the measurements matter, this is altogether acceptable as long as the values of measurements don't change: the measurement evolves with first time derivative $\mathrm{d}_t\left<\psi|\hat{M}^n|\psi\right>$ and, if we use Leibniz's rule and plug in the time evolution of $\psi$ described by the Schrödinger equation, we get:
$\mathrm{d}_t \psi^\dagger \hat{M} \psi = \frac{i}{\hbar} \psi^\dagger [\hat{H}, \hat{M}]\psi $
You can now see that the measurements will evolve exactly the same way as they would if the system evolved as described by Schrödinger's equation if we think of the state $\psi$ as constant and if the observables instead evolve following:
$\mathrm{d}_t \hat{M} = \frac{i}{\hbar} [\hat{H}, \hat{M}]$
This is the equation of motion for an observable in the Heisenberg picture ("frame"). I'm pretty sure that somewhere in his lecture series Feynman says the Heisenberg picture is like doing quantum mechanics in a rotating frame; he is of course being metaphorical. Also note that, because we want the Heisenberg equation to hold for any observable, its form is very constrained. In particular, the operation on the right has to be a derivation (something which fulfils the Leibniz product rule, which the Lie bracket does), so that if observables $\hat{A}$ and $\hat{B}$ fulfil the Heisenberg equation, so too do $\hat{A}^n$, $\hat{B}^n$ and $i [\hat{A}, \hat{B}]$, which can also be (Hermitian) observables.
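To see numerically that the two pictures agree on every measurement, one can compare $\langle\psi(t)|\hat{M}|\psi(t)\rangle$ with $\langle\psi(0)|U^\dagger \hat{M}\,U|\psi(0)\rangle$; a minimal sketch (assuming numpy/scipy, with made-up Hamiltonian, observable and state):

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3], [0.3, -0.5]])   # Hermitian "Hamiltonian"
M = np.array([[0.0, 1.0], [1.0, 0.0]])    # some observable
psi0 = np.array([0.6, 0.8])               # normalized state

t = 1.7
U = expm(-1j * H * t / hbar)

# Schroedinger picture: evolve the state, keep the observable fixed.
psi_t = U @ psi0
exp_S = np.vdot(psi_t, M @ psi_t).real

# Heisenberg picture: keep the state fixed, evolve the observable.
M_t = U.conj().T @ M @ U
exp_H = np.vdot(psi0, M_t @ psi0).real

print(np.isclose(exp_S, exp_H))   # True: both pictures give the same measurement
```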
Now if you're thinking of position observables, then when one solves the Schrödinger equation for, say, the hydrogen atom, one always sits in a frame stationary relative to the hydrogen atom. Inertial forces are often taken to be utterly negligible if such a small system happens to be accelerating, but see the paper that @Trimok has cited.
From the Heisenberg equation especially, one can readily see that any observable that commutes with the constant matrix $\hat{H}$ defines an observable whose measurements are constant with time. So $\hat{H}$ is hypothesised to be the energy observable, since in classical physics energy is the conserved "current" corresponding, by Noether's theorem, to invariance with respect to time shifts.
The Stone-von Neumann theorem has more to say about the unitary equivalence between the Heisenberg and Schrödinger pictures.
For the sake of completeness, and to give an idea for how many possibilities there are for the Schrödinger equation, Let's go back to our inductor-capacitor resonant circuit example (I've taken this unwonted but delightfully simple example of a system to quantise from a little known book by Dietrich Marcuse (formerly of Bell Labs), "Engineering Quantum Electrodynamics"). This is, of course, deliberately chosen as a physical system which spatial "frame of reference" has no meaning for.
For the voltage and current sign conventions shown in the drawing of our LC Tank circuit, the classical equations of state evolution are:
$\mathrm{d}_t\left(\begin{array}{c}V\\I\end{array}\right) = \left(\begin{array}{cc}0&C^{-1}\\-L^{-1}&0\end{array}\right)\left(\begin{array}{c}V\\I\end{array}\right);\quad H = \frac{1}{2}\,L\,I^2 + \frac{1}{2}\,C\,V^2$
which are wholly analogous to those for an oscillating mass $m$ on a spring with spring constant $k = m\,\omega_0^2$:
$\mathrm{d}_t\left(\begin{array}{c}x\\p\end{array}\right) = \left(\begin{array}{cc}0&m^{-1}\\-k&0\end{array}\right)\left(\begin{array}{c}x\\p\end{array}\right);\quad H = \frac{p^2}{2\,m} + \frac{1}{2}\,k\,x^2$
when $x$ and $p$ stand for the mass's linear position and momentum, respectively. Here $H$ is the classical Hamiltonian (total energy). To cut a long story short, we can quantise this system by transferring known results for the quantum harmonic oscillator. In energy or, equivalently, photon-number co-ordinates, the system's state is the normalized discrete $\ell^2$ sequence $\Psi = \{\psi_0, \psi_1, \cdots\}$ I talked about at the beginning, with the interpretation that $\psi_n$ is the probability amplitude that the system has been raised by $n$ equal energy quanta $\hbar\,\omega_0$ from the ground state. Schrödinger's equation is defined as above with the countably infinite square matrix $\hat{H} =\frac{\hbar\,\omega_0}{2} I + \hbar\,\omega_0\,\mathrm{diag}\left(0,\,1,\,2,\,3,\,\cdots\right)$, where $I$ stands for the identity operator and $\omega_0 = \frac{1}{\sqrt{L\,C}}$. The two conjugate observables (i.e. fulfilling the canonical commutation relationship), which represent, in the sense talked about above, physical measurements on the system, are the voltage and current observables, respectively:
$\hat{V} = \sqrt{\frac{\hbar}{2\,C\,\sqrt{L\,C}}}\left(A^\dagger + A\right);\quad \hat{I} = i\,\sqrt{\frac{\hbar}{2\,L\,\sqrt{L\,C}}}\left(A^\dagger - A\right)$
where:
$A = \left(\begin{array}{ccccc}0&\sqrt{1}&0&0&\cdots\\0&0&\sqrt{2}&0&\cdots\\0&0&0&\sqrt{3}&\cdots\\0&0&0&0&\cdots\\\cdots\end{array}\right);\quad A^\dagger = \left(\begin{array}{cccc}0&0&0&\cdots\\\sqrt{1}&0&0&\cdots\\0&\sqrt{2}&0&\cdots\\0&0&\sqrt{3}&\cdots\\\cdots\end{array}\right)$

so that $A$ lowers the photon number ($A\,e_n = \sqrt{n}\,e_{n-1}$) and $A^\dagger$ raises it ($A^\dagger e_n = \sqrt{n+1}\,e_{n+1}$),
and the canonical commutation relationship is:
$\left[\hat{V},\,\hat{I}\right] = \frac{i\,\hbar}{L\,C}\,I$
In these energy co-ordinates the state vector is a discrete sequence, all the observables are discrete (albeit infinite) matrices and the Schrödinger equation's solution is quite straightforward, to wit:
$\Psi(t) = \left(e^{-i\,\omega_0\,\frac{t}{2}}\,\psi_0(0),\;e^{-3\,i\,\omega_0\,\frac{t}{2}}\,\psi_1(0),\;e^{-5\,i\,\omega_0\,\frac{t}{2}}\,\psi_2(0),\,\cdots\right)$
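A truncated numerical version of this construction is easy to set up; the following minimal sketch (assuming numpy; the truncation size $N$ and the component values are arbitrary choices) builds the ladder operators above, checks the commutator away from the truncation edge, and applies the phase evolution just written down:

```python
import numpy as np

hbar, L, C = 1.0, 1.0, 1.0
w0 = 1.0 / np.sqrt(L * C)
N = 40                                         # truncate the number basis at N states

A = np.diag(np.sqrt(np.arange(1, N)), k=1)     # lowering operator (truncated)
Ad = A.conj().T                                # raising operator

# Voltage and current observables as defined above.
V = np.sqrt(hbar / (2 * C * np.sqrt(L * C))) * (Ad + A)
I = 1j * np.sqrt(hbar / (2 * L * np.sqrt(L * C))) * (Ad - A)

comm = V @ I - I @ V
print(comm[0, 0], 1j * hbar / (L * C))         # agree away from the truncation edge

# Time evolution in the energy basis is just phases, as in the solution above:
psi0 = np.zeros(N, dtype=complex); psi0[0] = psi0[1] = 1 / np.sqrt(2)
E = hbar * w0 * (np.arange(N) + 0.5)
t = 0.7
psi_t = np.exp(-1j * E * t / hbar) * psi0
print(np.vdot(psi_t, psi_t).real)              # norm preserved: 1.0
```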
One can define the quantum harmonic oscillator and its observables to be a quantum system with two observables that (i) fulfil a canonical commutation relationship and (ii) yield measurements with time-harmonic expected (mean) values. One can, with a little work, prove that this definition uniquely defines the quantum system to within the energy spacing $\hbar\,\omega_0$ between its evenly spaced, discrete energy spectrum (the fact of a discrete, evenly spaced spectrum also follows from the definition just given and does not have to be assumed). The spectra of the conjugate voltage and current variables are both continuous, and now one can do the Dirac "ladder operator procedure" backwards and find a co-ordinate system where the voltage observable takes the simple form $\hat{V} \Psi(v, \,t) = v\, \Psi(v,\,t)$ and the current observable is $\hat{I} \Psi(v, \,t)= -i\,\frac{\hbar}{L\,C}\,\partial_v \Psi(v,\,t)$. This procedure is actually not trivial (well, not for me at least) and the answer is that the Hamiltonian is now a more complicated, but more wonted, continuous operator and the full Schrödinger equation is now:
$i\,\hbar\,\partial_t \Psi(v, \,t) = \frac{1}{2}\left(C\,v^2 - \frac{\hbar^2}{L\,C^2} \partial_v^2\right) \Psi(v, \,t)$
$\Psi(v, \,t)$ is now the probability amplitude of measuring voltage $v$ across the tank circuit at time $t$. Furthermore, one can transform the voltage coordinate line with a one-dimensional, continuous Fourier transform to get to a new co-ordinate system (analogous to momentum space for a quantised harmonic mass on a spring oscillator) wherein the current observable takes on the particularly simple form $\hat{I} \Psi(\iota, \,t) = \iota \Psi(\iota, \,t)$ and wherein the voltage observable is now $\hat{V} \Psi(\iota, \,t) = i\,\frac{\hbar}{L\,C}\, \partial_\iota\Psi(\iota, \,t)$ and the full Schrödinger equation is now:
$i\,\hbar\,\partial_t \Psi(\iota, \,t) = \frac{1}{2}\left(L\,\iota^2 - \frac{\hbar^2}{L^2\,C}\, \partial_\iota^2\right) \Psi(\iota, \,t)$
$\Psi(\iota, \,t)$ is now the probability amplitude of measuring current $\iota$ through the tank circuit at time $t$. As you can see from the above, simple example wherein space and time reference frames have no relevance, one can use a vast number of different co-ordinate systems for the Schrödinger equation.
I say more about the uniqueness theorem just cited and the backwards Dirac ladder procedure in the Journal of the Optical Society of America B, Vol. 24, No. 4, April 2007, p. 928.
Footnote on constants in Schrödinger's equation: For the purposes of this answer, one can think of the mysterious constant $\hbar$ as just some arbitrary constant I've pulled out of the constant matrix; it also keeps the exponential's argument dimensionless, having SI units $[\hbar] = \mathrm{J\,s}$, because it turns out that $\hat{H}$ has units of energy. $i$ is also kind of arbitrary: it makes observables (see below) Hermitian rather than skew-Hermitian, and makes the measurements derived from those observables real rather than purely imaginary, as they would be if the observables were skew-Hermitian, as would seem most natural to many mathematicians thinking of observables as belonging to the Lie algebra of the group of unitary state transition matrices. But, in (somewhat awkward) principle, $i$ could be dropped, and the unit system can be redefined to make $\hbar = 1$ (the latter is done in practice in "Planck" units).
s-Block Elements: Preparation and properties of sodium carbonate, sodium hydroxide and sodium hydrogen carbonate
Preparation of sodium hydroxide: (1) Nelson's cell, (2) Castner–Kellner process.
Nelson cell method: a 'U'-shaped perforated steel vessel. Cathode: steel vessel; Anode: graphite rod.
At anode: 2Cl^{-} \rightarrow Cl_{2} + 2e^{-}
At cathode: 2Na^{+} + 2e^{-} + 2H_{2}O \rightarrow 2NaOH + H_{2}\uparrow
Castner–Kellner process:
Outer compartment: Cathode: Hg; Anode: graphite; Electrolyte: brine solution.
At anode: 2Cl^{-} \rightarrow Cl_{2} + 2e^{-}
At cathode: 2Na^{+} + 2e^{-} + Hg \rightarrow Na_{2}Hg (sodium amalgam)
Middle compartment: Cathode: Fe rod; Anode: Hg; Electrolyte: dil. NaOH.
At anode: Na_{2}Hg \rightarrow 2Na^{+} + 2e^{-} + Hg
At cathode: 2Na^{+} + 2e^{-} + 2H_{2}O \rightarrow 2NaOH + H_{2}
Chemical properties of NaOH:
With non-metals: Cl_{2}, Br_{2} and I_{2} react with cold, dilute NaOH and with hot, concentrated NaOH, giving NaOCl (sodium hypochlorite) and NaClO_{3} (sodium chlorate) respectively.
Chlorine, bromine, iodine and phosphorus give disproportionation reactions with NaOH.
With metals: aluminium reacts with cold, dilute NaOH to give NaAlO_{2}; with hot, concentrated NaOH it gives Na_{3}AlO_{3} + H_{2}.
Preparation of sodium peroxide (Na_{2}O_{2})
Preparation: 2Na_{2}O + O_{2} \xrightarrow[]{\Delta} 2Na_{2}O_{2}
Na_{2}O_{2} + CO \rightarrow Na_{2}CO_{3}
Na_{2}O_{2} + CO_{2} \rightarrow Na_{2}CO_{3} + \frac{1}{2}O_{2}
Preparation of Na_{2}CO_{3} (sodium carbonate)
Washing soda: Na_{2}CO_{3}\cdot 10H_{2}O; soda ash: Na_{2}CO_{3}.
It is prepared by 1) Leblanc process 2) Solvay process
Leblanc process: raw materials are NaCl, coke and limestone.
2NaCl + H_{2}SO_{4}\rightarrow Na_{2}SO_{4} + 2HCl
Na_{2}SO_{4} + 4C \rightarrow Na_{2}S + 4CO
Na_{2}S + CaCO_{3} \rightarrow Na_{2}CO_{3} + CaS
Solvay process:
NH_{3} + H_{2}O + CO_{2} \rightarrow NH_{4}HCO_{3}
NH_{4}HCO_{3} + NaCl \rightarrow NaHCO_{3}\downarrow + NH_{4}Cl
2NaHCO_{3} \xrightarrow[]{\Delta} Na_{2}CO_{3} + H_{2}O + CO_{2}
2NH_{3}+ H_{2}O + CO_{2} \rightarrow (NH_{4})_{2}CO_{3}
(NH_{4})_{2}CO_{3} + MgCl_{2} \rightarrow MgCO_{3}\downarrow + 2NH_{4}Cl
Physical properties: efflorescent substance, i.e. it loses its water of crystallisation when kept in open air.
Chemical properties:
Preparation of sodium bicarbonate (NaHCO_{3}):
Na_{2}CO_{3} + H_{2}O + CO_{2} \rightarrow 2NaHCO_{3}
Chemical properties :
2NaHCO_{3} \xrightarrow[]{\Delta} Na_{2}CO_{3} + H_{2}O + CO_{2}
CO_3^{2-} + H_{2}O \rightarrow HCO_3^{-} + OH^{-}
HCO_3^{-} + H_{2}O \rightarrow H_{2}CO_{3} + OH^{-}
It is a mild antiseptic for skin infections and is used in fire extinguishers.
General properties of IA group elements:
Atomic size: increases down the group.
Density: increases down the group, but Na > K.
Melting and boiling points: decrease down the group.
Conductivity: in the free state Li^{+} > Na^{+} > K^{+} > Rb^{+} > Cs^{+}; in aqueous state Li^{+} < Na^{+} < K^{+} < Rb^{+} < Cs^{+}, because lithium has the highest hydration energy.
Ionisation enthalpy: decreases down the group.
Hydration enthalpy: decreases down the group.
Metallic radius: increases down the group.
Ionic radius: increases down the group.
Reducing nature: due to its small size, lithium has the highest hydration enthalpy and the most negative electrode potential, so lithium is a good reducing agent.
Flame colouration: Li → crimson red; Na → golden yellow; K → pale violet; Rb → red-violet; Cs → violet.
Reactivity towards air:
M + O_{2} \rightarrow M_{2}O(oxide) \xrightarrow[]{H_{2}O}MOH \xrightarrow[]{CO_{2}}M_{2}CO_{3}
4Na + O_{2} \rightarrow 2Na_{2}O
2Na + O_{2} \rightarrow Na_{2}O_{2}(peroxide)
4Li + O_{2} \rightarrow 2Li_{2}O(oxide)
M + O_{2}(excess) \rightarrow MO_{2}(superoxide), M = K, Rb, Cs
2M + 2H_{2}O\rightarrow 2M^{+} + 2OH^{-}+ H_{2}
2M + H_{2} \xrightarrow[]{673 K} 2M^{+}H^{-}
Reactivity with NH_{3}
Aqueous NH_{3}:
M + NH_{3} \rightarrow Ammoniated \ ions + Ammoniated \ electrons
Conductivity is due to the ammoniated ions and ammoniated electrons.
Alkali metals dissolve in NH_{3} giving a deep blue coloured solution. In concentrated solution the blue colour changes to bronze and the solution becomes diamagnetic. Li_{2}CO_{3} on heating gives Li_{2}O and CO_{2}: Li_{2}CO_{3} \xrightarrow[]{\Delta} Li_{2}O + CO_{2}. The carbonates of the other alkali metals are not decomposed on heating. Part1: View the Topic in this Video from 0:08 to 3:55 Part2: View the Topic in this Video from 0:06 to 11:14
The idea of the argument is this: an integrable function that does not tend to zero may have thinner and thinner spikes, and the integral over each spike needs to tend to zero. If the function is uniformly continuous, these spikes can't get thinner and thinner, and the function can't be integrable, as the integral over the spikes does not tend to zero.
I'll consider $\Bbb R^+$ here, but the same argument works with $\Bbb R^-$, hence $\Bbb R$.
Suppose $f$ is uniformly continuous on $\Bbb R^+$ and $f(x)$ does not tend to zero as $x\to+\infty$. Let's prove it's not integrable.
Since $f$ does not converge to zero, there is an $\varepsilon>0$ such that you can define an increasing sequence $x_n\to+\infty$ such that $|f(x_n)|>\varepsilon$.
Since $f$ is uniformly continuous, for this $\varepsilon$ there is a $\delta>0$ such that $|x-y|<\delta\implies|f(x)-f(y)|<\varepsilon/2$. That means that on every interval $]x_n-\delta,x_n+\delta[$ we have $|f(x)|>\varepsilon/2$. Hence the area under the curve on this interval is larger than $\delta\varepsilon>0$, and that bound does not depend on $n$.
Therefore $\int_{x_n-\delta}^{x_n+\delta}f(u)\,du$ does not tend to zero as $n\to+\infty$, so $h(x)=\int_0^{x}f(u)\,du$ fails the Cauchy criterion and can't converge as $x\to+\infty$; hence the function $f$ is not integrable.
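For contrast, a standard example of an integrable continuous function that does not tend to zero (and which is not uniformly continuous, since its spikes get arbitrarily thin) is
$$f(x)=\sum_{n=1}^{\infty}\max\bigl(0,\;1-2^{n}\,|x-n|\bigr),$$
which has $f(n)=1$ for every integer $n\ge 1$, while $\int_0^{+\infty}f(u)\,du=\sum_{n\ge 1}2^{-n}=1<\infty$.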
Problem 115
Express the vector $\mathbf{b}=\begin{bmatrix} 2 \\ 13 \\ 6 \end{bmatrix}$ as a linear combination of the vectors \[\mathbf{v}_1=\begin{bmatrix} 1 \\ 5 \\ -1 \end{bmatrix}, \mathbf{v}_2= \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}, \mathbf{v}_3= \begin{bmatrix} 1 \\ 4 \\ 3 \end{bmatrix}.\] (The Ohio State University, Linear Algebra Exam)
Problem 113
Let $A$, $B$ be groups. Let $\phi:B \to \Aut(A)$ be a group homomorphism.
The semidirect product $A \rtimes_{\phi} B$ with respect to $\phi$ is a group whose underlying set is $A \times B$ with group operation \[(a_1, b_1)\cdot (a_2, b_2)=(a_1\phi(b_1)(a_2), b_1b_2),\] where $a_i \in A, b_i \in B$ for $i=1, 2$.
Let $f: A \to A'$ and $g:B \to B'$ be group isomorphisms. Define $\phi': B'\to \Aut(A')$ by sending $b' \in B'$ to $f\circ \phi(g^{-1}(b'))\circ f^{-1}$.
\[\require{AMScd}
\begin{CD} B @>{\phi}>> \Aut(A)\\ @A{g^{-1}}AA @VV{\sigma_f}V \\ B' @>{\phi'}>> \Aut(A') \end{CD}\] Here $\sigma_f:\Aut(A) \to \Aut(A')$ is defined by $\alpha \in \Aut(A) \mapsto f\alpha f^{-1}\in \Aut(A')$. Then show that \[A \rtimes_{\phi} B \cong A' \rtimes_{\phi'} B'.\]
Problem 108
Let $\F_p$ be the finite field of $p$ elements, where $p$ is a prime number.
Let $G_n=\GL_n(\F_p)$ be the group of $n\times n$ invertible matrices with entries in the field $\F_p$. As usual in linear algebra, we may regard the elements of $G_n$ as linear transformations on $\F_p^n$, the $n$-dimensional vector space over $\F_p$. Therefore, $G_n$ acts on $\F_p^n$.
Let $e_n \in \F_p^n$ be the vector $(1,0, \dots,0)$.
(The so-called first standard basis vector in $\F_p^n$.)
Find the size of the $G_n$-orbit of $e_n$, and show that $\Stab_{G_n}(e_n)$ has order $|G_{n-1}|\cdot p^{n-1}$.
Conclude by induction that
\[|G_n|=p^{n^2}\prod_{i=1}^{n} \left(1-\frac{1}{p^i} \right).\] Problem 104
Test your understanding of basic properties of matrix operations.
There are 10 True or False Quiz Problems.
These 10 problems are very common and essential.
So make sure to understand these, and don't lose a point if any of these appears among your exam problems. (These are actual exam problems at the Ohio State University.)
You can take the quiz as many times as you like.
The solutions will be given after completing all the 10 problems.
Click the View question button to see the solutions. Problem 102
Determine whether the following systems of equations (or matrix equations) described below has no solution, one unique solution or infinitely many solutions and justify your answer.
(a) \[\left\{ \begin{array}{c} ax+by=c \\ dx+ey=f, \end{array} \right. \] where $a, b, c, d, e, f$ are scalars satisfying $a/d=b/e=c/f$.
(b) $A \mathbf{x}=\mathbf{0}$, where $A$ is a singular matrix.
(c) A homogeneous system of $3$ equations in $4$ unknowns.
(d) $A\mathbf{x}=\mathbf{b}$, where the row-reduced echelon form of the augmented matrix $[A|\mathbf{b}]$ looks as follows: \[\begin{bmatrix} 1 & 0 & -1 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.\]
(The Ohio State University, Linear Algebra Exam)
Problem 98
Let $A$ and $B$ be $n\times n$ matrices. Suppose that the matrix product $AB=O$, where $O$ is the $n\times n$ zero matrix.
Is it true that the matrix product with opposite order $BA$ is also the zero matrix?
If so, give a proof. If not, give a counterexample. |
Circle
The functions \(\sin (\theta)\) and \(\cos (\theta)\) are defined such that, if you draw a circle of radius \(r\) with a triangle inscribed as in the figure below, the lengths of the triangle's sides have the listed values:
As the picture suggests, \(\sin \theta\) and \(\cos \theta\) have values that repeat when you increase or decrease \(\theta\) by increments of \(2 \pi\).
Using the figure above and the Pythagorean Theorem, we can write
\[(r\sin\theta)^2 + (r\cos\theta)^2 = r^2\]
Dividing the whole equation by \(r^2\), we obtain the useful result:
\[\sin^2\theta + \cos^2\theta = 1\]
where \(\sin^2\theta\) is just a short way of writing \((\sin\theta)^2\). The above equation is true for any value of \(\theta\).
Using the above results, we can also derive these equations:
\[\tan\theta = \dfrac{\sin\theta}{\cos\theta}\]
\[\sin\theta = \pm\sqrt{1-\cos^2\theta}\]
\[\cos\theta = \pm\sqrt{1-\sin^2\theta}\]
Trigonometric Identities
\[\sin A + \sin B = 2\sin\left(\dfrac{A + B}{2}\right)\cos\left(\dfrac{A - B}{2}\right)\]
\[\cos A + \cos B = 2\cos\left(\dfrac{A + B}{2}\right)\cos\left(\dfrac{A - B}{2}\right)\]
\[\sin(A + B) = \sin A \cos B + \sin B \cos A\]
\[\cos(A + B) = \cos A \cos B - \sin A \sin B\]
\[\tan(A+B) = \dfrac{\tan A + \tan B}{1 - \tan A\tan B}\]
Small-Angle Approximation
If \(\theta\), expressed in radians, is close to zero, then we can approximate:
\[\sin\theta \approx \theta\]
\[\cos\theta \approx 1\]
\[\tan\theta \approx \theta\] |
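A quick numerical check of these approximations (a minimal sketch assuming Python with numpy; the sample angles are arbitrary):

```python
import numpy as np

theta = np.array([0.05, 0.1, 0.2, 0.5])   # radians, near zero

print(np.sin(theta) - theta)    # error of sin(theta) ~ theta  (O(theta^3))
print(np.cos(theta) - 1)        # error of cos(theta) ~ 1      (O(theta^2))
print(np.tan(theta) - theta)    # error of tan(theta) ~ theta  (O(theta^3))
```

As expected, the errors are tiny for \(\theta \lesssim 0.1\) and grow quickly by \(\theta = 0.5\).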
We can use a qubit as a particle detector that doesn't change the particle's energy. This can be implemented as follows. We start out with a qubit initialized in the state $\left|0\right>$ and we apply the Hadamard gate $U$ that acts as follows:
$$\begin{split}U\left|0\right> &= \frac{1}{\sqrt{2}}\left[\left|0\right\rangle + \left|1\right\rangle\right]\\U\left|1\right> &= \frac{1}{\sqrt{2}}\left[\left|0\right\rangle - \left|1\right\rangle\right]\end{split}$$
Note that $U$ is its own inverse, so applying $U$ again will bring the qubit back to the state $\left|0\right\rangle$ we started out with. But now consider what happens if during the time the qubit spends being a superposition of $\left|0\right\rangle$ and $\left|1\right\rangle$ a particle collides with it, but such that no energy is exchanged. Then the qubit will get entangled with the particle, so the qubit-particle system will be in a state of the form:
$$\left|\psi\right\rangle = \frac{1}{\sqrt{2}}\left[\left|0\right\rangle \left|D_0\right\rangle + \left|1\right\rangle\left|D_1\right\rangle\right]$$
where the states $\left|D_{i}\right\rangle$ are the particle states after scattering off the qubit in state $\left|i\right\rangle$. You might think that because the qubit was not affected by the interaction at all, we cannot perform a measurement on the qubit's state to find out that it has interacted with a particle. But watch what happens if we apply the Hadamard gate again to the qubit:
$$U\left|\psi\right\rangle =\left|0\right\rangle\left|D^{+}\right\rangle+\left|1\right\rangle \left|D^{-}\right\rangle$$
where $D^{\pm} = \frac{1}{2}\left[\left|D_0\right\rangle\pm\left|D_1\right\rangle\right]$
So, had there been no interaction, the qubit would have returned to the initial state $\left|0\right\rangle$. Instead we end up with an entangled state of the qubit and the particle such that there is now a finite probability of finding the qubit in the state $\left|1\right\rangle$, despite the fact that the collision with the particle was purely elastic and did not affect the physical state of the qubit in any way at the time of the collision. The probability of finding the qubit in the state $\left|1\right\rangle$ is $\frac{1}{2}\left[1-\operatorname{Re}\langle D_0 | D_1\rangle\right]$, so it depends on the overlap between the two particle states corresponding to scattering off the qubit in the two states of the superposition.
If the states $\left|D_i\right\rangle$ are orthogonal, then you have 50% probability of finding the qubit in the states $\left|0\right\rangle$ and $\left|1\right\rangle$; the density matrix after tracing out the particle state is $\frac{1}{2}\left[\left|0\right\rangle\langle 0| + \left|1\right\rangle\langle 1|\right]$. |
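A minimal numerical sketch of this protocol (assuming numpy; the two-dimensional particle space and the rotation angle $\phi$ are made-up modelling choices, with $W_1$ standing in for the elastic scattering when the qubit is $\left|1\right\rangle$):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate
I2 = np.eye(2)

# Particle scattering conditioned on the qubit state: a rotation by angle
# phi when the qubit is |1>, identity when it is |0> (a toy model).
phi = 0.8
W1 = np.array([[np.cos(phi), -np.sin(phi)],
               [np.sin(phi),  np.cos(phi)]])
ctrl = np.kron(np.diag([1, 0]), I2) + np.kron(np.diag([0, 1]), W1)

d = np.array([1.0, 0.0])                          # incoming particle state
psi = np.kron(np.array([1.0, 0.0]), d)            # qubit |0>, particle |d>

psi = np.kron(H, I2) @ psi                        # first Hadamard
psi = ctrl @ psi                                  # elastic collision entangles
psi = np.kron(H, I2) @ psi                        # second Hadamard

p1 = np.linalg.norm(psi.reshape(2, 2)[1])**2      # P(qubit measured as |1>)
D0, D1 = d, W1 @ d
print(p1, 0.5 * (1 - np.vdot(D0, D1).real))       # matches the formula
```

Here $\langle D_0|D_1\rangle=\cos\phi$, so the printed probability matches $\tfrac12(1-\cos\phi)$.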
Yes, the nucleus is composed of subatomic particles that have probability clouds. Protons and neutrons fill orbitals in the nucleus just like electrons in the atom do. What's more, every proton or neutron is a complex particle itself and the quarks inside have their very own probability cloud. (Quarks are simple objects that have no internal structure as far as ...
Very often indeed the nucleus is assumed motionless. It is then assumed that the motion of the nuclei and the electrons can be treated separately. This is known as the Born-Oppenheimer approximation. The reason is that solving the equations simultaneously is very difficult and would not be very efficient. Note that for the hydrogen atom this approximation ...
The nucleus does have a probability cloud. As the simplest example, consider the hydrogen-1 atom. Conservation of momentum requires that the center of mass of the electron and proton remain fixed. Therefore we have$$|\Psi_p(\textbf{x})|=|\Psi_e(-\alpha\textbf{ x})|,$$where $\Psi_p$ is the wavefunction of the proton, $\Psi_e$ is the wavefunction of the ...
To build upon Bruce's answer, what happens according to Schrödinger's equation is the following:The initial quantum state of the system (after putting the cat in the box) is$$|\text{cat alive}⟩ |\text{you don't see the cat}⟩$$As the Geiger counter measures whether the radioactive atom decayed or not, and kills or spares the cat accordingly, this ...
The answer is that the protons in the nucleus are quantum particles and don't have a well-defined position, but the uncertainty isn't a big factor in determining the potential experienced by the orbiting electrons, so we can just treat them as a fixed source of potential. That does, also, make the calculations much, much easier.
You use the phrase phosphorescent molecule but phosphorescence is normally seen in solids where there are many interacting molecules. When you excite electrons in a solid they will generally decay in a radiationless manner, usually by transferring their energy to vibrations of the lattice (i.e. heat). It is unusual for excited states in solids to decay by ...
Hint:$$x^2+y^2+xy = \frac34 (x+y)^2 + \frac14 (x-y)^2.$$This means that if you transform to the new variables\begin{align}\xi & = \frac1{\sqrt{2}}(x+y) \\\eta & = \frac1{\sqrt{2}}(x-y),\end{align}and with a similar transformation from $p_x,p_y$ to $p_\xi,p_\eta$ to make sure that $[\xi,p_\xi] = i = [\eta,p_\eta]$ and $[\xi,p_\eta] = 0 = [\...
No. Take, for instance, the fully depolarizing channel, where $A_a=\{I,X,Y,Z\}$. Since$\mathcal E(\rho)=\tfrac12 I$, your condition $(1)$ holds for all $H$. On the other hand, there is no operator which commutes with all $A_a$.(Let me take the opportunity to advertise my list of canonical counterexamples for quantum channels ;) ).
Photons are elementary particles, part of the SM; they travel at speed c in vacuum when measured locally.The world line (or worldline) of an object is the path that object traces in 4-dimensional spacetime. It is an important concept in modern physics, and particularly theoretical physics.https://en.wikipedia.org/wiki/World_lineNow ...
There is no particular reason to expect that energy and angular momentum should have a one-to-one coupling.If we take the case of classical two-body orbits as a rough guide, we see that each (bound) orbital energy is associated with an upper limit on angular momentum but that the actual angular momentum can range from that limit all the way down to zero....
I believe that you can do all of physics apart from QM without using complex numbers: complex numbers are a convenience (generally because $e^{ix} = \cos x + i \sin x$), but they are only a convenience.However if you want to do QM you either end up using complex numbers or creating mathematical objects which have all the properties of complex numbers: you ...
One could say that all numbers are imaginary in that sense: they are abstract concepts that we impose on nature just to give us some descriptive power over our surroundings. The fact that natural numbers seem more "natural" or easy to relate to has nothing to do with what they really are; abstract ideas without physical meaning until we,...
The term "imaginary" for referring to the so-called "imaginary numbers" (which, by the way, are only a one-dimensional sliver of the full complex numbers that are actually used here) is a historical artifact that has caused way more confusion than it should, now. Let me say one thing that is crucially important here:There is no ontological difference ...
Writing the requirement explicitly$$\mathcal{E}(U\rho U^\dagger)=U\mathcal{E}(\rho)U^\dagger $$in terms of Kraus operators$$\sum_a A_aU\rho U^\dagger A_a^\dagger=\sum_a U A_a\rho A_a^\dagger U^\dagger $$Hence we want the channel with Kraus operators $ A'_a=A_aU $ and $A_a''=UA_a$ to be equal. We know that two channels are equal if and only if their ...
Your expansion is not quite the most general and should more generally read$$\Psi(x,t)=\sum_m c_m e^{-i E_m t/\hbar} \psi_m(x)$$so that\begin{align}\frac{d\rho}{dt}=0\quad\Rightarrow 0= \sum_{mn} c_m c_n^* (E_m-E_n) e^{-i(E_m-E_n)t/\hbar}\psi_n(x)^*\psi_m(x)\tag{1}\end{align}(up to a factor of $i\hbar$). For simplicity order the eigenvalues so ...
There are two aspects to this.Firstly, as long as the interaction energies are well below the energies necessary to create new particles the Schrodinger equation is usually an excellent approximation. For the majority of atoms and molecules it gives an excellent description of the electronic structure. (There is also the pragmatic consideration that bound ...
It's important to understand that we don't directly observe quarks and gluons in scattering experiments. What we observe is an unruly shower of well known and understood particles hitting our detectors. To try and understand what happens in the collision we use a mathematical model called the Standard Model. That is, we have equations that describe how ...
Are you referring to hydrogen? In other atoms different $l$ values with the same principal quantum number $n$ will have different energies. It is only for hydrogen with its single electron that all states with the same $n$ have the same energy. In this special case there is an extra, hidden, $O(4)$ symmetry possessed by the Kepler $V(r)=k/r$ potential. ...
How can the annihilation of an electron and a positron create a quark-antiquark pair or a muon-anti muon pair?....But the total rest energy of the electron and positron (1.022 MeV) is less than the total energy required to produce either the quark-antiquark pair or the muon-anti muon pair (211.4 MeV)Certainly if the annihilation happens at rest, i....
It is possible to simulate quantum computers with classical computers. However, the time (and typically memory) needed for such a simulation will scale exponentially with the time a quantum computer needs. Thus, if a quantum computer needs $10$ times the computation time for a more complicated case, the simulation would need $2^{10}$ times more time, and so ...
Quantum mechanics as a theory was invented in order to explain why classical mechanics and classical electrodynamics did not work in certain situations: small scales, spectra of atoms , black body radiation. Classical mechanics and classical electrodynamics cannot explain or predict the behavior of atoms (their interactions) , neither that of ...
This is a partial answer.There is a class of proteins called enzymes. Enzymes facilitate chemical reactions. Many enzymes facilitate in the following way: a particular small region of the enzyme has a high affinity for a particular molecule. That is, when that molecule comes in contact with that active region it tends to get stuck there. The fit is good, ...
Insofar as your question regarding whether or not it is a statement about "actual" particles, as @knzhou suggests, we don't really have access to "actual" reality in general in any kind of science (or perhaps we do - this really more depends on your philosophical viewpoint). You see, a scientific theory is perhaps best understood as what I'd call a "useful ...
Your question is a good one, though it's somewhat unclear. I think what you're asking, in essence, is whether one needs quantum mechanics to accurately model the activity of proteins, or whether classical physics is sufficient. If one takes the proteins as already existing, the answer, in the vast majority of cases, is that classical mechanics is sufficient. ...
$d\tau$ is the element of volume $dxdydz$, which in spherical coordinates is $r^2dr\sin{\theta}d\theta d\phi$. When integrating a function that doesn’t depend on the two angles, the integration over those angles gives $4\pi$.
The existence of a minimum energy follows from the wave-like aspects of matter.The allowed values of energy are those corresponding to stationary wave states, so the trite answer to your question is that where you have a set of allowed energy values one of them has to be a minimum.To get some physical insight into why the minimum level ends up where it ...
To simulate means to mimic or model the characteristics or appearance of something. Cloning means to make an exact replica- another actual instance of the subject being cloned. The two actions are utterly different in essence. So yes, you could straightforwardly simulate the two slits experiment, and no that would not violate the no-cloning theorem.
The double slit experiment, regardless of the size of the particles (electrons, neutrons, molecules) does not prove that those particles exist in two places at once, as claimed by the SciAm article. The difficulty of understanding this experiment in classical physics is caused by the use of an unsuitable classical model, rigid body Newtonian mechanics with ...
There are no forces acting between photons, so they do not interact with each other directly (eg attract, repel, scatter, etc).If one photon interacts with a charged particle to change its energy in some way, it is possible for that particle to interact with another photon as a consequence of its interaction with the first. There can therefore be a chain ... |
How to solve the optimization problem written below?
$$\begin{align} &\operatorname{argmax}\limits_{a}\; a^T b - \frac{1}{2} a^T X a\\ &\text{subject to } \sum_i |a_i|=4,\; \sum_i a_i = 0 \end{align}$$
where $a$, $b$ are $n$-vectors and $X$ is a $n\times n$ matrix. Also, $b$ and $X$ are constants.
My main issue is the absolute values. Without them, there is actually an analytic solution. I guess with the absolute values I have to use an iterative approach such as quadratic programming, but I am still not sure how to express the problem in a form suitable for the relevant optimization procedures.
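One pragmatic route is to hand the problem to a general-purpose constrained solver and treat $\sum_i|a_i|=4$ as an equality constraint. A minimal sketch follows (assuming Python with numpy/scipy; $b$ and $X$ are random placeholders, and SLSQP formally assumes smooth constraints, so with the nonsmooth $|a_i|$ this is only a heuristic, best run from several starting points). An alternative is the standard splitting $a = p - q$ with $p, q \ge 0$, which turns $\sum_i |a_i|$ into $\sum_i (p_i + q_i)$ at any solution where $p_i q_i = 0$.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 6
b = rng.normal(size=n)
M = rng.normal(size=(n, n))
X = M @ M.T + n * np.eye(n)        # a PSD X makes the objective concave

def neg_obj(a):                    # minimize the negative of a^T b - a^T X a / 2
    return -(a @ b - 0.5 * a @ X @ a)

cons = [{'type': 'eq', 'fun': lambda a: np.sum(np.abs(a)) - 4.0},
        {'type': 'eq', 'fun': lambda a: np.sum(a)}]

# Feasible starting point: half the entries at +4/n, half at -4/n.
a0 = np.concatenate([np.full(n // 2, 4.0 / n), np.full(n - n // 2, -4.0 / n)])
res = minimize(neg_obj, a0, constraints=cons, method='SLSQP')
print(res.x, np.sum(np.abs(res.x)), np.sum(res.x))
```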
31
(a) Determine all critical points of the given system of equations.
(b) Find the corresponding linear system near each critical point.
(c) Find the eigenvalues of each linear system. What conclusions can you then draw about the nonlinear system?
(d) Draw a phase portrait of the nonlinear system to confirm your conclusions, or to extend them in those cases where the linear system does not provide definite information about the nonlinear system.
$$\left\{\begin{aligned}
&\frac{dx}{dt} = (1 + x) \sin (y), \\
&\frac{dy}{dt} = 1 - x - \cos (y).
\end{aligned}\right.$$
Bonus: computer-generated picture
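For the bonus part, a minimal matplotlib sketch (assuming Python; the plotting window is an arbitrary choice) that draws the phase portrait and marks the critical points, which solve $(1+x)\sin y=0$ together with $1-x-\cos y=0$, i.e. $(0,\,2k\pi)$ and $(2,\,(2k+1)\pi)$ (the branch $x=-1$ would require $\cos y=2$):

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-3, 4, 400), np.linspace(-4, 8, 400))
dx = (1 + x) * np.sin(y)
dy = 1 - x - np.cos(y)

plt.streamplot(x, y, dx, dy, density=1.4, linewidth=0.6)
# Mark the critical points lying inside the plotting window.
plt.plot([0, 0, 2, 2], [0, 2*np.pi, np.pi, -np.pi], 'ko')
plt.xlabel('x'); plt.ylabel('y'); plt.title('Phase portrait')
plt.show()
```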
Event detail Topology Seminar (Main Talk): Computational complexity and 3-manifolds and zombies
Seminar | January 25 | 4:10-5 p.m. | 3 Evans Hall
Eric Samperton, UC Davis
We consider the computational complexity of counting homomorphisms from 3-manifold groups to fixed finite groups $G$. Let $G$ be either non-abelian simple or $S_m$, where $m \geq 5$. Then counting homomorphisms from fundamental groups of 3-manifolds to $G$ is $\mathsf{\#P}$-complete. In particular, for fixed $m \geq 5$, it is $\mathsf{NP}$-complete to decide when a 3-manifold admits a connected $m$-sheeted cover.
These results follow from an analysis of the action of the pointed mapping class group $\text{Mod}_*(\Sigma_g)$ on the set of homomorphisms $X_g := \{\pi_1(\Sigma_g) \to G\}$. We build on ideas of Dunfield-Thurston that were originally used in the context of random 3-manifolds. In particular, we show that when $g$ is large enough, there exists a subgroup of $\text{Mod}_*(\Sigma_{2g})$ that acts on $X_g^2$ in a manner that allows us to produce gadgets encoding reversible logic gates. Our construction can be considered as a classical analogue of topological quantum computing. This is joint work with Greg Kuperberg.
\(\text{FIGURE V.2A}\)
Consider a disc of surface density (mass per unit area) \(σ\), radius \(a\), and a point \(\text{P}\) on its axis at a distance \(z\) from the disc. The contribution to the field from an elemental annulus, radii \(r\), \(r + δr\), mass \(2πσ \ r \ δr\) is (from equation 5.4.1)
\[δg = 2 \pi G σ \frac{zrδr}{\left( z^2 + r^2 \right)^{3/2}}. \label{5.4.6} \tag{5.4.6}\]
To find the field from the entire disc, just integrate from \(r = 0\) to \(a\), and, if the disc is of uniform surface density, \(σ\) will be outside the integral sign. It will be easier to integrate with respect to \(θ\) (from \(0\) to \(α\)), where \(r = z \tan θ\). You should get
\[g = 2 \pi G σ (1 - \cos α), \label{5.4.7} \tag{5.4.7}\]
or, with \(M = \pi a^2 σ\),
\[g = \frac{2GM (1-\cos α)}{a^2}. \label{5.4.8} \tag{5.4.8}\]
Now \(2\pi (1 − \cos α)\) is the solid angle \(ω\) subtended by the disc at \(\text{P}\). (Convince yourself of this – don’t just take my word for it.) Therefore
\[g = G σ ω . \label{5.4.9} \tag{5.4.9}\]
This expression is also the same for a uniform plane lamina of any shape, for the downward component of the gravitational field. For, consider figure \(\text{V.3}\).
The downward component of the field due to the element \(δ\text{A}\) is \(\frac{Gσδ A \cos θ}{r^2} = G σ δω\). Thus, if you integrate over the whole lamina, you arrive at \(Gσω\).
\(\text{FIGURE 5.3}\)
Returning to equation \(\ref{5.4.8}\), we can write the equation in terms of \(z\) rather than \(α\). If we express \(g\) in units of \(GM/a^2\) and \(z\) in units of \(a\), the equation becomes
\[g = 2 \left( 1 - \frac{z}{\sqrt{1+z^2}} \right) . \label{5.4.10} \tag{5.4.10}\]
This is illustrated in figure \(\text{V.4}\).
\(\text{FIGURE V.4}\)
The field is greatest immediately above the disc. On the opposite side of the disc, the field changes direction. In the plane of the disc, at the centre of the disc, the field is zero. For more on this, see Subsection 5.4.7.
If you are calculating the field on the axis of a disc that is not of uniform surface density, but whose surface density varies as \(σ(r)\), you will have to calculate
\[M = 2 \pi \int_0^a σ(r) r dr \label{5.4.11} \tag{5.4.11}\]
and \[g = 2 \pi G z \int_0^a \frac{σ (r) r dr}{\left( z^2 + r^2 \right)^{3/2}}. \label{5.4.12} \tag{5.4.12}\]
You could try, for example, some of the following forms for \(σ(r)\):
\[σ_0 \left( 1 - \frac{kr}{a} \right), \quad σ_0 \left( 1 - \frac{kr^2}{a^2} \right), \quad σ_0 \sqrt{1-\frac{kr}{a}}, \quad σ_0 \sqrt{1- \frac{kr^2}{a^2}}. \]
If you are interested in galaxies, you might want to try modelling a galaxy as a central spherical bulge of density \(ρ\) and radius \(a_1\), plus a disc of surface density \(σ(r)\) and radius \(a_2\), and from there you can work your way up to more sophisticated models.
In this section we have calculated the field on the axis of a disc. As soon as you move off axis, it becomes much more difficult.
Exercise. Starting from equations 5.4.1 and \(\ref{5.4.10}\), show that at very large distances along the axis, the fields for a ring and for a disc each become \(GM/z^2\). All you have to do is to expand the expressions binomially in \(a/z\). The field at a large distance \(r\) from any finite object will approach \(GM/r^2\).
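Equation 5.4.12 is also easy to check numerically against the closed form 5.4.7 for the uniform case; a minimal sketch (assuming Python with numpy/scipy, in units where \(G = σ = 1\)):

```python
import numpy as np
from scipy.integrate import quad

G, sigma, a, z = 1.0, 1.0, 1.0, 0.5   # units chosen so G = sigma = 1

# Equation 5.4.12 integrated numerically for uniform surface density.
integrand = lambda r: r / (z**2 + r**2)**1.5
g_numeric = 2 * np.pi * G * sigma * z * quad(integrand, 0, a)[0]

# Equation 5.4.7: g = 2 pi G sigma (1 - cos(alpha)), with cos(alpha) = z / sqrt(z^2 + a^2).
g_closed = 2 * np.pi * G * sigma * (1 - z / np.sqrt(z**2 + a**2))

print(g_numeric, g_closed)   # the two agree
```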
Relay selection based on social relationship prediction and information leakage reduction for mobile social networks
1. School of Computer Science and Engineering, Changshu Institute of Technology, Changshu, China
2. Provincial Key Laboratory for Computer Information Processing Technology, Soochow University, Suzhou, China
3. The School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
4. Computer Science, The George Washington University, Washington DC, USA
Despite the extensive study of relay selection in mobile social networks (MSNs), little work has taken both transmission latency (i.e. efficiency) and information leakage probability (i.e. security) into consideration. We therefore target designing an efficient and secure relay selection algorithm that enables communication among legitimate users while reducing the probability of information leakage to other users. In this paper, we propose a novel mobility model for MSN users considering both the randomness and the sociality of the movements, based on which the social relationships among users, i.e. the meeting probabilities among the users, are predicted. Taking both efficiency and security into consideration, we design a network formation game based relay selection algorithm by defining the payoff functions of the users, designing the game's evolution rules, and proving the stability of the formed network structure. Extensive simulation is conducted to validate the performance of the relay selection algorithm using both a synthetic trace and a real-world trace. The results show that our algorithm outperforms other algorithms by striking a balance between efficiency and security.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35. Citation: Xiaoshuang Xing, Gaofei Sun, Yong Jin, Wenyi Tang, Xiuzhen Cheng. Relay selection based on social relationship prediction and information leakage reduction for mobile social networks. Mathematical Foundations of Computing, 2018, 1 (4): 369-382. doi: 10.3934/mfc.2018018
Simulation parameters:
$k_r$ (exponent of the power law distribution for $\zeta_{i\tau}$): 1.7
$k_l$ (exponent of the power law distribution for $p_i$): 3
$C_l$ (maximum value of $p_i$): 0.6
$R_d$ (radius of the communication range): 6 m
$\epsilon$ (length of the time interval within which users keep their moving direction and speed unchanged): 30 s
$\mu$ (mean of the normal distribution for users' speed): 1.4
$\sigma$ (standard deviation of the normal distribution for users' speed): $\frac{\mu}{3}$
 | ESRS | Relation | Leakage | Rand
A-Latency | 16.2 | 15.4 | 17.9 | 30.4
A-MLP | 0.38 | 0.63 | 0.35 | 0.72
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector
(Elsevier, 2014-11-10)
This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector
(Elsevier, 2014-11-10)
Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ...
By the “equator” of a magnet I mean a plane normal to its magnetic moment vector, passing through the mid-point of the magnet.
The magnetic field at a point at a distance r on the equator of a magnet may be expressed as a series of terms of successively higher powers of \(1/r\) (the first term in the series being a term in \(r^{-3}\)), and the higher powers decrease rapidly with increasing distance. At large distances, the higher powers become negligible, so that, at a large distance from a small magnet, the magnitude of the magnetic field produced by the magnet is given approximately by
\[B = \frac{\mu_0}{4 \pi} \frac{p}{r^3}.\]
For example, if the surface magnetic field on the equator of a planet has been measured, and the magnetic properties of the planet are being modelled in terms of a small magnet at the centre of the planet, the dipole moment can be calculated by multiplying the surface equatorial magnetic field by \(4 \pi/\mu_0\) times the cube of the radius of the planet (i.e. by solving the equation above for \(p\)). If \(B\) and \(\mu_0\) are expressed in SI units, the dipole moment will be in A m\(^2\).
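As a rough numerical illustration (the field and radius values below are assumed, order-of-magnitude figures for the Earth, not taken from the text above):

# Sanity check of p = (4*pi/mu_0) * B * r^3 with rough Earth-like values.
from math import pi

mu_0 = 4 * pi * 1e-7          # permeability of free space, T m / A
B = 3.0e-5                    # assumed equatorial surface field, T
r = 6.4e6                     # assumed planetary radius, m

p = (4 * pi / mu_0) * B * r**3
print(f"dipole moment p = {p:.2e} A m^2")   # ~8e22, the usual order quoted for Earth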
If we rotate the set of axes counter-clockwise through the 3 Euler angles to get the transformation matrix, then what about the direction of rotation for the direct (single) transformation instead of the previously performed 3 transformations? Is the rotation of the set of axes in the direct transformation clockwise or counter-clockwise?
You have the 3×3 rotation matrix and you want to get the axis and angle that correspond to the same rotation.
Look up Determining the axis angle of a rotation. The matrix trace is used to find the angle $${\rm tr}(R) = 1 + 2 \cos \theta\;\Longrightarrow\; \theta = \cos^{-1}\left( \frac{{\rm tr}(R)-1}{2} \right)$$
Then the axis is $${\bf n} = \frac{1}{2\sin\theta} \pmatrix{ R_{3,2}-R_{2,3} \\ R_{1,3}-R_{3,1} \\ R_{2,1}-R_{1,2} }$$
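For what it's worth, here is a small NumPy sketch of that extraction (my addition; note it uses 0-based indices, unlike the 1-based $R_{i,j}$ above, and leaves the degenerate $\sin\theta \approx 0$ cases unhandled since those need the eigenvector route):

import numpy as np

def axis_angle(R):
    """Recover (axis, angle) from a 3x3 rotation matrix via the trace formula."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(np.sin(theta), 0.0):      # theta near 0 or pi
        raise ValueError("degenerate case: take an eigenvector of R for eigenvalue 1")
    n = np.array([R[2, 1] - R[1, 2],
                  R[0, 2] - R[2, 0],
                  R[1, 0] - R[0, 1]]) / (2.0 * np.sin(theta))
    return n, theta

# Example: rotation by 0.3 rad about the z-axis.
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
print(axis_angle(Rz))   # axis ~ [0, 0, 1], angle ~ 0.3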
Instead of updating my previous answer, I've decided to add a new answer in order to keep it short(ish).
In the comments following his original question, TonyS added the extra assumption that $R$ is finitely generated over $A$. This is a strong condition, since it makes $R$ very close to being commutative. Moreover $R$ is known to be an integral domain, and a maximal order in its division ring of fractions $B$.
Under these assumptions the $A$-torsion-free module $M$ is actually $R$-torsion-free. To see this, let $F$ be the field of fractions of $A$; then since $R$ is finitely generated over $A$, the central localisation $F \otimes_A R$ is an integral domain which is finite dimensional over the field $F$, and is therefore a division ring; thus $F \otimes_A R = B$. Now $F \otimes_A M = F \otimes_A (R \otimes_R M) = (F\otimes_A R) \otimes_R M = B \otimes_R M$ so the kernels of the localisation maps $M \to F \otimes_A M$ and $M \to B \otimes_R M$ coincide. Thus $M$ is $A$-torsion-free if and only if $M$ is $R$-torsion-free.
Now let $N$ be a finitely generated $R$-bimodule which is torsion-free on both sides (for example $N$ could be $\omega_R$). Then $N \otimes_R M$ is a finitely generated $R$-module and therefore a finitely generated $A$-module. To study the torsion $T$ in this module, we study its support $Supp(T)$ in $Spec(A)$, or equivalently, the primes above the annihilator $Ann_A(T)$. Clearly $0$ is not in this support because $T$ is by definition a torsion $A$-module.
I claim that there are no primes in $Supp(T)$ of height $1$. Suppose for a contradiction that $P \in Supp(T)$ has height $1$; then localising $A$ and $R$ at $P$ produces a new maximal order $R_P$ which is free and finitely generated as an $A_P$-module. But $A$ is a commutative regular local ring, hence a UFD by Auslander-Buchsbaum, so $A_P$ is a discrete valuation ring. Since $R_P$ is finitely generated over $A_P$, it must be semilocal; since $R_P$ is also a maximal order, under these conditions it is known that $R_P$ is actually a right and left principal ideal domain: see Proposition 2.9 and Theorem 2.8 of the book "Ordres Maximaux au Sens de K.Asano" by Guy Maury and Jacques Raynaud.
Therefore the module $N_P$ is actually free over $R_P$ and hence $N_P\otimes_{R_P} M_P \cong M_P$ has no torsion. But this module is just $(N \otimes_R M)_P$ and by the exactness of localisation, $T_P$ is a torsion submodule of $(N \otimes_R M)_P$ and is therefore zero: thus $P \notin Supp(T)$, proving the claim.
Now $A$ was assumed to be of dimension at most $2$, so we see that $Supp(T) \subseteq \{ \mathfrak{m} \}$ where $\mathfrak{m}$ is the maximal ideal of $A$. This is the best possible result, because $N \otimes_R M$ can easily have $\mathfrak{m}$-torsion, as the following (commutative!) example shows.
Let $R = A$ and $N = M = \mathfrak{m}$. Pick a regular sequence $x,y$ in $\mathfrak{m}$, so that $0 \to A \stackrel{\alpha}{\to} A^2 \to \mathfrak{m} \to 0$ is a projective resolution of $\mathfrak{m}$, where $\alpha(a) = (ay, -ax)$. Then it's easy to see that $\mathfrak{m} \otimes_A \mathfrak{m} \cong \mathfrak{m}^2 / \alpha(\mathfrak{m})$. Now the image of the element $(y,-x) \in \mathfrak{m}^2$ in $\mathfrak{m}^2 / \alpha(\mathfrak{m})$ is non-zero and is killed by $\mathfrak{m}$, so the $\mathfrak{m}$-torsion submodule of $\mathfrak{m} \otimes_A \mathfrak{m}$ is non-zero.
So to show that $\omega_R \otimes_R M$ has no torsion, one would have to show that $\omega_R$ doesn't "look like" $\mathfrak{m}$ (or perhaps a finite direct sum of copies of $\mathfrak{m}$) as an $A$-module. One way to ensure this is to perhaps try to show that $\omega_R$ is reflexive as an $R$-module, since this would help you to show that there are no essential extensions $E$ of $\omega_R$ such that $E/\omega_R$ is $\mathfrak{m}$-torsion.
In the clamped case, things get a lot easier! Some modifications to the previous post:
Let $u(x,t) = X(x)T(t)$ in
$$u_{tt}(x,t) + K u_{xxxx} = 0, \qquad K > 0, \qquad u(0,t) = u_{xx}(0,t) = u(l,t) = u_{xx}(l,t) = 0.$$
Then $u_{tt}(x,t) = X(x)T''(t)$ and $u_{xxxx}(x,t) = X''''(x)T(t)$, so
$$u_{tt} + K u_{xxxx} = X(x)T''(t) + K X''''(x)T(t) = 0$$
$$\frac{X''''(x)}{X(x)} = \frac{-T''(t)}{K\,T(t)} = \lambda$$
for some constant $\lambda$, since each side is independent of the other side's variable. Now, we have by assumption that $\lambda = c^4 > 0$ and $K = k^2 > 0$.
So we are left with two ODEs of the form
$$X''''(x) = c^4 X(x), \qquad T''(t) = -c^4 k^2 T(t),$$
which yield the solutions
$$X(x) = A \cosh(c x) + B \sinh(c x) + C \cos(c x) + D \sin(c x)$$
$$T(t) = A \cos(c^2 k t) + B \sin(c^2 k t).$$
Using the first two boundary conditions, $u(0,t) = 0 = u_{xx}(0,t)$:
$$X(0) = A \cosh(0) + B \sinh(0) + C \cos(0) + D \sin(0) = A + C = 0 \implies A = -C$$
$$X''(0) = A c^2 \cosh(0) + B c^2 \sinh(0) - C c^2 \cos(0) - D c^2 \sin(0) = (A - C)c^2 = 0 \implies A = C$$
$$A = -C \text{ and } A = C \implies A = C = 0,$$
where we disregard the alternative of satisfying these conditions with $T(t) \equiv 0$, the trivial solution.
OK, nice. We plug into the 3rd and 4th boundary conditions, using our new function for $X$:
$$X(x) = B \sinh(c x) + D \sin(c x)$$
$$u(l,t) = 0 = u_{xx}(l,t)$$
$$X(l) = B \sinh(c l) + D \sin(c l) = 0$$
$$X''(l) = B c^2 \sinh(c l) - D c^2 \sin(c l) = 0 \implies B \sinh(c l) - D \sin(c l) = 0,$$
since $c \ne 0$. Adding the two equations gives
$$\big(B \sinh(c l) + D \sin(c l)\big) + \big(B \sinh(c l) - D \sin(c l)\big) = 2 B \sinh(c l) = 0.$$
Now, for real arguments, $\sinh$ vanishes only at $0$, and $c = 0$ would yield the eigenvalue $0$, which we disregard by assumption. So
$$2 B \sinh(c l) = 0 \implies B = 0.$$
Now we cannot have $D = 0$, or our solution is trivial. So, substituting $B = 0$, we are left with only
$$D \sin(c l) = 0 \implies \sin(c l) = 0,$$
and our eigenvalues are those $c$ which satisfy $\sin(c l) = 0$. $\blacksquare$
We can provide a closed form for the eigenvalues in this case since our solution is so nice:
$$\sin(c l) = 0 \implies c = \frac{\pi n}{l} \implies \lambda = \frac{\pi^4 n^4}{l^4}$$
These next two sections need basically no modification, as only which boundary terms vanish during integration by parts changes. Consider
$$\int_0^l X_n(x)X_m(x)\, dx, \qquad n \ne m, \quad \lambda_n \ne \lambda_m.$$
Then
$$(\lambda_n - \lambda_m) \int_0^l X_n X_m\, dx = \int_0^l \lambda_n X_n X_m - \lambda_m X_n X_m\, dx = \int_0^l X''''_n X_m - X_n X''''_m\, dx$$
$$= \Big[X'''_n X_m - X_n X'''_m\Big]_{x=0}^{x=l} - \int_0^l X'''_n X'_m - X'_n X'''_m\, dx$$
$$= 0 - \Big[X''_n X'_m - X'_n X''_m\Big]_{x=0}^{x=l} + \int_0^l X''_n X''_m - X''_n X''_m\, dx = 0 + 0 = 0,$$
using integration by parts and the fact that our boundary conditions vanish:
$$X(0) = 0, \quad X''(0) = 0, \quad X(l) = 0, \quad X''(l) = 0.$$
Then we have
$$(\lambda_n - \lambda_m) \int_0^l X_n X_m\, dx = 0, \quad \lambda_n \ne \lambda_m \implies \int_0^l X_n(x) X_m(x)\, dx = 0,$$
and our eigenfunctions are orthogonal. $\blacksquare$
Let our differential operator be $\mathcal{I}$, such that $\mathcal{I} X = \lambda X$ and $\mathcal{I} Y = \lambda Y$. We show that $\langle \mathcal{I}X,Y\rangle = \langle X,\mathcal{I}^*Y\rangle$:
$$\langle \mathcal{I}X,Y\rangle = \int_0^l \mathcal{I}X(x)Y(x)^* dx = \int_0^l \lambda X(x)Y(x)^* dx = \int_0^l X(x)\lambda^* Y(x)^* dx = \int_0^l X(x)\big(\mathcal{I}Y(x)\big)^* dx = \langle X,\mathcal{I}^*Y\rangle,$$
as we had by assumption that our eigenvalues are real. Then our operator is hermitian, so the eigenfunctions of different eigenvalues are linearly independent, and the eigenfunctions of the same eigenvalue are linearly dependent. $\blacksquare$
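A quick numerical spot-check of this orthogonality (my own addition, assuming $l = 1$ so that $X_n(x) = \sin(n\pi x)$):

import numpy as np
from scipy.integrate import quad

l = 1.0  # assumed beam length
for n, m in [(1, 2), (2, 5), (3, 3)]:
    val, _ = quad(lambda x: np.sin(n * np.pi * x / l) * np.sin(m * np.pi * x / l), 0.0, l)
    print(n, m, round(val, 12))   # ~0 for n != m, l/2 for n == m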
Wave Optics Interference of Light Waves and Young's Experiment Fringe width is the distance between any two consecutive maxima or minima. Fringe width \tt \beta=\frac{\lambda D}{d} Fringe width is independent of the order of the fringe. Fringe width depends on the wavelength (λ) of light used. Fringe width depends upon the distance between the slits and the screen, and the distance between the slits. Fringe width in a medium of refractive index μ is \tt \beta'=\frac{\lambda D}{\mu d} Angular fringe width \tt \theta=\tan\theta=\frac{\beta}{D}=\frac{\lambda}{d} In YDSE, if \tt N_{1} fringes are visible with light of wavelength \tt \lambda_{1} and \tt N_{2} fringes with \tt \lambda_{2}, then \tt N_{1}\lambda_{1}= N_{2}\lambda_{2}. Number of fringes and fringe width are related as \tt N_{1}\beta_{1}= N_{2}\beta_{2} Fringe visibility \tt V=\frac{I_{max}-I_{min}}{I_{max}+I_{min}}=\frac{2\sqrt{I_{1}I_{2}}}{\left(I_{1}+I_{2}\right)} If \tt I_{min}= 0 then V = 1, maximum visibility. If \tt I_{max}= I_{min} then V = 0 and bright and dark fringes are not distinguishable. When a transparent plate of thickness 't' and refractive index 'μ' is introduced in the path of one of the beams, then shift = \tt \frac{D}{d}\left(\mu-1\right)t=\frac{\beta}{\lambda}\left(\mu-1\right)t The effective path in air is increased by an amount (μ − 1)t. The shift of fringes is independent of the order of the fringe. Condition for maxima in interference in thin films \tt 2\ \mu t \cos r=\left(2n-1\right)\frac{\lambda}{2} Condition for minima in interference in thin films 2μt cos r = nλ. Diffraction is the process of bending of light around the corners of an obstacle. View the Topic in this video From 00:15 To 3:45 Young's Experiment View the Topic in this video From 00:15 To 20:30
1. The fringe width β, both for bright and dark fringes is given by
\beta = \frac{D}{d}\lambda
2. Intensity of Fringes :
I = 4I_{0} \cos^{2}\frac{\delta}{2}
3. Fringe visibility \tt V = \left[\frac{I_{max} - I_{min}}{I_{max} + I_{min}}\right]
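As a quick worked example of the fringe-width formula in item 1 (the wavelength and slit geometry below are assumed, typical classroom values):

# beta = lambda * D / d for assumed YDSE values.
lam = 600e-9   # wavelength, m
D = 1.0        # slit-to-screen distance, m
d = 0.5e-3     # slit separation, m

beta = lam * D / d
print(f"fringe width = {beta * 1e3:.2f} mm")   # 1.20 mm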
I just started self-studying measure theory and had a question about the "generalised density" (as it appears on Wikipedia):
A random variable X with values in a measurable space $({\mathcal{X}},{\mathcal {A}})$ (usually $\mathbb {R} ^{n}$ with the Borel sets as measurable subsets) has as probability distribution the measure $X_*P$ on $({\mathcal {X}},{\mathcal {A}})$: the density of $X$ with respect to a reference measure $\mu$ on $({\mathcal {X}},{\mathcal{A}})$ is the Radon–Nikodym derivative:
$$f={\frac {dX_{*}P}{d\mu }}.$$
For a random variable $X$, if we had a cumulative distribution function:
$$F(X) = \begin{cases} 0 & if \; X < 0 \\ X^2 & if \; 0 \leq X < 0.8 \\ 1 & if \; X \geq 0.8 \end{cases}$$
... whose graph rises as $X^2$ on $[0, 0.8)$ and jumps to $1$ at $X = 0.8$ (figure omitted).
Then, does a "generalised density" or a Radon-Nikodym derivative as defined above exist?
I think that it does... If $\mathcal X = [0, 0.8) \cup \{0.8\}$, for an appropriate $\sigma$-algebra $\mathcal A$, could one use the "reference measure":
$$\mu = l + \delta_{0.8}$$
... where $l$ is the Lebesgue measure and $\delta_{0.8}$ is the Dirac measure on $\{0.8\}$? I'd also love some help clarifying what $X_*P$ is (is this a "pushforward measure", and if so, is this the same as the "measure induced by the CDF"?)
If one
can use the reference measure above, why can we use it? And would the Radon-Nikodym derivative be:
$$f = \frac{dX_*P}{d\mu} = 2X * \mathcal I(X \in [0, 0.8)) + 0.36 * \mathcal I(X = 0.8)$$
where $\mathcal I$ is the indicator function? (The mass at $0.8$ should be the jump of $F$ there, $1 - 0.8^2 = 0.36$.)
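If it helps, here is a small numeric check (my own, under the reference measure proposed above) that the continuous part and the atom together carry total mass 1:

from scipy.integrate import quad

cont, _ = quad(lambda x: 2.0 * x, 0.0, 0.8)   # density part against Lebesgue measure
atom = 1.0 - 0.8**2                           # density value times the Dirac mass at 0.8
print(cont, atom, cont + atom)                # 0.64, 0.36, 1.0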
What are the Christoffel symbols for the Schwarzschild spacetime, expressed in the Kruskal-Szekeres coordinates?
I worked these out recently, and it was a fair amount of computation to get them, so I thought they might be useful to others.
Throughout the following, let $m=1/2$, so the horizon is at $r=1$, i.e., $r$ is in units of the Schwarzschild radius. The exterior regions of the maximal extension of the Schwarzschild spacetime are at $r>1$, which is regions I and III. The interior is $r<1$, regions II and IV.
The version of the Kruskal-Szekeres coordinates I'll use are null coordinates $(V,W)$, equivalent to Hawking and Ellis's $(v'/\sqrt2,w'/\sqrt2)$.
Even when working in the Kruskal-Szekeres coordinates, it's convenient to express some things in terms of the Schwarzschild $r$ coordinate, which can be found using \begin{equation} r = 1+W(-VW/e). \end{equation} Here the function $W$ (not to be confused with the coordinate $W$ inside the parentheses) is the principal real branch of the Lambert W function, and $e$ is the base of natural logarithms. It's also convenient to define \begin{equation} B = \frac{4}{re^r}. \end{equation}
The metric is \begin{equation} ds^2 = B dVdW-r^2 d\Omega^2. \end{equation}
The Christoffel symbols are as follows: \begin{align} \Gamma^V_{VV} &= (r^{-1}+r^{-2})We^{-r} \\ \Gamma^W_{WW} &= (r^{-1}+r^{-2})Ve^{-r} \\ \Gamma^\theta_{V\theta} = \Gamma^\phi_{V\phi} &= -WB/4r \\ \Gamma^\theta_{W\theta} = \Gamma^\phi_{W\phi} &= -VB/4r \\ % \Gamma^V_{\theta\theta} &= -Vr/2 \\ \Gamma^W_{\theta\theta} &= -Wr/2 \\ \Gamma^V_{\phi\phi} &= -(Vr/2) \sin^2\theta \\ \Gamma^W_{\phi\phi} &= -(Wr/2) \sin^2\theta \\ % \Gamma^\theta_{\phi\phi} &= -\sin\theta \cos\theta \\ \Gamma^\phi_{\theta\phi} &= \cot\theta \\ \end{align} I got these by calculating them in the computer algebra system Maxima and then cleaning up the resulting expressions by hand. I checked my cleaned-up versions numerically against the raw output from Maxima to make sure they were right. They are implemented in an open-source software project called karl.
The metric and the Christoffel symbols misbehave at the $r=0$ singularities, and also at the coordinate singularities at $\theta=0$ and $\pi$.
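For anyone wanting to re-derive these without Maxima, here is a rough SymPy sketch of the same computation (my own, not the author's code; it assumes SymPy's principal-branch LambertW matches the $W$ above, and simplify may not land on exactly the hand-cleaned forms):

import sympy as sp

V, W, th, ph = sp.symbols('V W theta phi')
r = 1 + sp.LambertW(-V * W * sp.exp(-1))      # r(V, W) from the relation above
B = 4 / (r * sp.exp(r))

x = [V, W, th, ph]
# ds^2 = B dV dW - r^2 dOmega^2, so g_VW = g_WV = B/2.
g = sp.Matrix([[0, B / 2, 0, 0],
               [B / 2, 0, 0, 0],
               [0, 0, -r**2, 0],
               [0, 0, 0, -r**2 * sp.sin(th)**2]])
ginv = g.inv()

def christoffel(a, b, c):
    """Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})."""
    return sp.simplify(sum(sp.Rational(1, 2) * ginv[a, d] *
                           (sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c])
                            - sp.diff(g[b, c], x[d]))
                           for d in range(4)))

print(christoffel(0, 0, 0))   # should agree with (1/r + 1/r^2) * W * exp(-r)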
If I have two balls with masses and charges $m_1, q_1^{+}$, $m_2, q_2^{+}$, initially held at distance $d$ and then released, how can I know the kinetic energy of each ball at infinite separation? I'm quite stuck, because both balls share the same initial potential energy, and it does not decrease as if one of the balls were stationary: the potential does not simply fall off like $1/R$ from a fixed source, because the other ball, which causes this potential energy, is itself being repelled. So how can I really find the energies? I tried to apply the conservation of energy law, because I know that at infinite distance from each other they'll have zero potential energy, so all of the initial potential energy was transformed into kinetic form. However, I'm stuck on the initial potential energy (they both have it, so should I put $2U_p$?), and even so, I can't find their kinetic energies separately without having another equation.
Potential energy is a property of the system, not any one object. Thus there should only be one copy of the typical $1/r$ potential energy between two charges (plus an analogous gravitational term if that can't be neglected).
The easiest way to see this is to start from "infinite" separation. Instead of pushing the two charges together, hold one fixed and move the other toward it. The moving charge must fight the standard Coulomb force (with a little help from gravity) to get closer to the stationary one, so the potential energy obtained here is just the integral of this force over the distance traversed ($d$ to $\infty$).
But what about the stationary object? Well, sure, we need to exert a force on it to keep it from being repelled by the approaching charge.
But it is not moving, so the change in $\vec{F} \cdot \vec{x}$ energy vanishes.
The fact that at some point in the future we will let both objects move doesn't change the potential energy, so you should get the same potential energy as if the problem were stated:
A point mass $m_1$ with charge $q_1$ is fixed at the origin. Another point mass $m_2$ with charge $q_2$ is brought in from infinity. What is the potential energy of the system?
It may also help to remember that "$2\infty = \infty$." Moving objects from $x = -\infty$ and $x = \infty$ to the origin covers the same distance as moving one object from $x = \infty$ to the origin.
I would suggest you to use this equation: $$ W=\int_C{Fdx}, $$ where $F$ is the force on an object and $W$ the work done by this force.
In this case there are two types of forces acting on the two objects, gravitation and Coulomb force: $$ F_{result}=\frac{1}{4\pi\epsilon_0}\frac{q_1q_2}{r^2}-G\frac{m_1m_2}{r^2}=\left(\frac{q_1q_2}{4\pi\epsilon_0}-Gm_1m_2\right)\frac{1}{r^2}, $$ with $r$ the distance between the two objects. So:$$ W_{total}=\left(\frac{q_1q_2}{4\pi\epsilon_0}-Gm_1m_2\right)\int^\infty_{d}{\frac{1}{r^2}dr}=\left(\frac{q_1q_2}{4\pi\epsilon_0}-Gm_1m_2\right)\frac{1}{d} $$
Edit: This isn't entirely correct, since I am assuming that this is a symmetric situation, so $m_1=m_2$ and therefore $W_1=W_2=\frac{W_{total}}{2}$. Unequal masses will affect the ratio of the distances each object travels from the origin and therefore the amount of work done on each object.
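Neither answer spells out how to split the total between the two balls, so here is a short sketch of the general, not necessarily symmetric, case (my addition, using standard conservation arguments rather than anything stated above): released from rest, the total momentum stays zero, so $m_1 v_1 = m_2 v_2$ and hence $KE_1/KE_2 = m_2/m_1$.

K = 8.9875517923e9     # Coulomb constant, N m^2 / C^2
G = 6.674e-11          # gravitational constant, N m^2 / kg^2

def final_kinetic_energies(m1, q1, m2, q2, d):
    """KEs at infinite separation for two balls released from rest at distance d."""
    U = (K * q1 * q2 - G * m1 * m2) / d   # total energy available to the pair
    ke1 = U * m2 / (m1 + m2)              # the lighter ball carries the larger share
    ke2 = U * m1 / (m1 + m2)
    return ke1, ke2

print(final_kinetic_energies(1.0, 1e-6, 2.0, 1e-6, 0.1))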
Answer
$S_6=\dfrac{1365}{256} \approx 5.332$
Work Step by Step
Here, we have $\sum_{i=1}^6 4(1/4)^{i-1}$. We know that $S_{n}=a_1\left(\dfrac{1-r^{n}}{1-r}\right)$. Now,
$S_6=4\left[\dfrac{1-(1/4)^{6}}{1-(1/4)}\right] = 4 \times \dfrac{4095/4096}{3/4} = 4 \times \dfrac{1365}{1024}$.
Hence, $S_6=\dfrac{1365}{256} \approx 5.332$.
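A one-line check of the arithmetic with exact fractions:

from fractions import Fraction

S6 = sum(4 * Fraction(1, 4)**(i - 1) for i in range(1, 7))
print(S6, float(S6))   # 1365/256 = 5.33203125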
Background
The use of block (displayed) equations (`$$...$$`) in question titles has been disallowed on Mathematics Stack Exchange. Titles are meant to be short descriptions of the question, and using block equations generally causes your equations to be larger: compare `$$\lim_{n \to \infty} \frac{1}{n}$$` to `$\lim_{n \to \infty} \frac{1}{n}$`, which produce $$\lim_{n \to \infty} \frac{1}{n}$$ and $\lim_{n \to \infty} \frac{1}{n}$, respectively.
More importantly, block equations introduce line breaks, and these line-breaks also appear on the front-page (and in all pages listing questions by title). Together, this means that titles containing block equations usually take up significantly more vertical space than question titles which do not. While not as much an issue when viewing the site on traditional monitors, on mobile devices with significantly smaller displays titles with block equations take up too much screen real estate.
What to do
Instead, all mathematics in titles should use inline equations (`$...$`), and should additionally be designed to take up as little vertical space as possible.
More guidelines for the proper use of MathJax in question titles may be found here.
False positives
It is possible that you received this message because your title includes two immediately adjacent inline equations `$...$$...$`. Often, you will be able to simply remove the inner `$$` and retain the desired formatting of the title.
In cases where this is not possible (e.g., your title contains `$\bf f$$(x)$`, which produces $\bf f$$(x)$, while `$\bf f(x)$` produces $\bf f(x)$), additionally placing braces `{...}` around the first part should produce the desired formatting (in the example above, `${\bf f}(x)$` produces ${\bf f}(x)$).
What is the correct series expansion for the $U(1)$ Faddeev-Popov ghosts?
I know that the $U(1)$ ghosts are only a phase such that they can be neglected in most cases but it turns out that this is not true in curved spaces even for $U(1)$ theories so please don't answer this...
In this thread Faddeev-Popov ghost propagator in canonical quantization I found that $c$ is hermitian and $\bar{c}$ anti-hermitian which makes sense since $\bar{c} = c^\dagger \gamma_0$.
But in the $U(1)$ case the ghosts are Grassmann variables, such that $\bar{c} = c^\dagger \gamma_0$ doesn't make sense, does it?
For those willing to help me even more. I think that the source of my problem is a poor understanding of the Faddeev-Popov mechanism. More precisely, what happens when $\det(\square)$ is written as a path integral? What exactly do the $c$ and $\bar{c}$ fields mean? Why is it said that one is a ghost and the other an anti ghost?
When quantizing them I obtain $\{ c_k , \bar{c}_{k'}\} = -\delta(k-k')$; how does this tell us anything regarding the norm of these ghosts?
I read Peskin and Schroeder but they do not answer this question (or I missed it).
Finally, my sincere apologies for this "all over the place" type question. I fail to pinpoint the exact sources of my confusion, which is why my question is rather broad. I hope that someone more experienced can pinpoint it with the above information.
What is the sum total of all individual digits from 1 to 10^100? In other words: 1+2+3+4+5+6+7+8+9+[1+0]+[1+1]+[1+2]+[1+3].......and so on to 10^100? Any help or hint would be greatly appreciated. thank you.
HINT:
1 through 9 is obviously 45.
So look from 10 to 99
the tens digit runs 1 through 9, each appearing 10 times: 45 * 10 = 450
and the ones digit runs 0 through 9, each appearing 9 times: 45 * 9 = 405
So the digit sum of 1 to 99 is
45 for the single digit numbers
plus 450 (tens digits) and 405 (ones digits) for the double digit numbers, i.e. 900 in total
Do you see the pattern?
Now just keep going; you might have to do some exponent work. The answer will be big.
that was me
____________________________________________________
sorry, the sum of the digits from 1 - 100 is actually
45 for the single digit numbers +
450 (tens digits) and 405 (ones digits) for the double digit numbers + 1 (for 100)
____________________________________________________
So the sum of the digits from 1 - 100 is actually 901
____________________________________________________
The sum of the digits from 1 - 1000
____________________________________________________
Is 900 +
4500 (hundreds digits) + 4050 (tens digits) + 4050 (ones digits) + 1 (for 1000)
So 1 - 1000 is 13,501
____________________________________________________
So try to get 1 - 10^100
Since 1 to 99 =[99+1] * 2 * 4.5 =900. [4.5 being the average value of all 10 digits 0 - 9 =45 / 10 = 4.5 ]
Since 1 to 999 = [999+1] * 3 * 4.5 = 13,500
Since 1 to 9,999 = [9,999+1] * 4 * 4.5 = 180,000
. .
Since 1 to 999,999 = [999,999+1] * 6 * 4.5 = 27,000,000
Now, you can clearly see a pattern. Therefore: 1 to 10^100 - 1 = [10^100 - 1 + 1] * 100 * 4.5 = 4.5 x 10^102, and adding the leading 1 of 10^100 itself gives 4.5 x 10^102 + 1.
This short computer code confirms the above answer:
n=1E100;s=0;
cycle: if(n<10, return s=s+n*(n+1)/2, 0);
place=0;p=1;
loop: place=place*10+45*p; p=p*10; if(p*10<=n, goto loop, 0);
msd=int(n/p); n=n%p;
s=s+msd*place + (msd*(msd-1)/2)*p + msd*(1+n);
goto cycle
Answer: 4.5000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 0000000000 01 E+102
\(\text{Let $ \mathbf{b} = $ numeral system } \\ \text{Let $ \mathbf{s} = $ sum of all digits from $1$ to $b^n$ }\)
\(\begin{array}{|rcll|} \hline \mathbf{s} &=& \mathbf{1 + n\cdot b^n \cdot \left( \dfrac{b-1}{2} \right) } \\ \hline \end{array}\)
\(\begin{array}{|rcll|} \hline b &=& 10 \quad (\text{decimalism}) \\ n &=& 100 \\ && \text{from $1$ to $10^{100}$ } : \\\\ \mathbf{s} &=& \mathbf{1 + n\cdot b^n \cdot \left( \dfrac{b-1}{2} \right) } \\ s &=& 1 + 100\cdot 10^{100} \cdot \left( \dfrac{10-1}{2} \right) \\ &=& 1 + 10^2\cdot 10^{100} \cdot \left( \dfrac{9}{2} \right) \\ &=& 1 + \left( \dfrac{9}{2} \right)\cdot 10^{102} \\ \mathbf{s} &=& \mathbf{ 1 + 4.5 \cdot 10^{102} } \\ \hline \end{array}\)
For problems like these, I like to see if we can discover a pattern
Note that the sum of the digits from 1 -9 inclusive = 45
And adding the "1" in the next integer, 10, gives us 46
And we can write this sum as
Sum of the individual digits from 1 - 10^1 inclusive = 1* 45 + 1
Next....the sum of the digits of the integers from 1-99 inclusive = 900
We can see this because we have the digits 1-9 in the ones place repeated 10 times = 10(45) = 450
And we have the sum of the digits in the tens place as 10 ( 1 + 2 +3 +....9) = 10 (45) = 450
And adding the "1" in the next integer (100) gives us this sum = 900 + 1 = 20*45 + 1
Following this, the sum of the individual digits of the integers from 1 -999 = 13,500 [check this for yourself]
And adding the "1" in the next integer (1000) gives us this sum:
13500 + 1 = 300* 45 + 1
Note the pattern that seems to be emerging......the sum of the digits of the integers from 1 - 10^n inclusive =
(the digit n followed by (n - 1) zeroes) * 45 + 1
So....for instance .....the sum of the digits of the integers from 1 -10^1 inclusive =
(1) followed by ( 1 - 1) zeroes * 45 + 1 =
(1) followed by no zeroes * 45 + 1 =
1*45 + 1
And the sum of the digits of the integers from 1 - 10^2 inclusive =
(2)followed by (2-1) zeroes * 45 + 1 =
(2) followed by 1 zero * 45 + 1 =
20*45 + 1
Note that the sum of the individual digits of the integers from 1 - 10^3 =
300*45 + 1......which follows our pattern
This seems to imply that the sum of the individual digits in the integers from 1 - 10^100 inclusive should be :
(100) followed by (100-1) zeroes * 45 + 1 =
100 followed by 99 zeroes * 45 + 1 =
10^101 * 45 + 1 =
4.5 * 10^102 + 1 as found by the Guest and heureka !!!!
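A brute-force check of the pattern for small n (my own verification code, separate from the calculator snippet above):

def digit_sum_upto(limit):
    """Sum of all decimal digits of 1, 2, ..., limit."""
    return sum(int(d) for k in range(1, limit + 1) for d in str(k))

for n in range(1, 6):
    brute = digit_sum_upto(10**n)
    formula = 45 * n * 10**(n - 1) + 1     # the pattern found above
    assert brute == formula, (n, brute, formula)
    print(n, brute)
# Extrapolating to n = 100 gives 45 * 100 * 10^99 + 1 = 4.5 * 10^102 + 1.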
SageMath
Sage is a program for numerical and symbolic mathematical computation that uses Python as its main language. It is meant to provide an alternative for commercial programs such as Maple, Matlab, and Mathematica.
Sage provides support for the following:
Calculus: using Maxima and SymPy.
Linear Algebra: using the GSL, SciPy and NumPy.
Statistics: using R (through RPy) and SciPy.
Graphs: using matplotlib.
An interactive shell using IPython.
Access to Python modules such as PIL, SQLAlchemy, etc.
Usage
Sage mainly uses Python as a scripting language with a few modifications to make it better suited for mathematical computations.
Sage command-line
Sage can be started from the command-line:
$ sage
For information on the Sage command-line see this page.
Note, however, that it is not very comfortable for some uses such as plotting. When you try to plot something, for example:
sage: plot(sin,(x,0,10))
Sage opens a browser window with the Sage Notebook.
Sage Notebook
A better suited interface for advanced usage in Sage is the Notebook. To start the Notebook server from the command-line, execute:
$ sage -n
The notebook will be accessible in the browser from http://localhost:8080 and will require you to login.
However, if you only run the server for personal use, and not across the internet, the login will be an annoyance. You can instead start the Notebook without requiring login, and have it automatically pop up in a browser, with the following command:
$ sage -c "notebook(automatic_login=True)"
If you want to auto start the Sage notebook on your server, then have a look at this systemd service file example for sage-mathematics.
For a more comprehensive tutorial on the Sage Notebook see the [http://www.sagemath.org/doc/reference/notebook.html Sage documentation]. For more information on the {{ic|notebook()}} command see [http://www.sagemath.org/doc/reference/sagenb/notebook/notebook_object.html this page].
Cantor
Cantor is an application included in the KDE Edu Project. It acts as a front-end for various mathematical applications such as Maxima, Sage, Octave, Scilab, etc. See the Cantor page on the Sage wiki for more information on how to use it with Sage.
Cantor can be installed from the official repositories.
Documentation
For local documentation, one can compile it into multiple formats such as HTML or PDF. To build the whole Sage reference, execute the following command (as root):
# sage --docbuild reference html
This builds the HTML documentation for the whole reference tree (may take longer than an hour). An option is to build a smaller part of the documentation tree, but you would need to know what it is you want. Until then, you might consider just browsing the online reference.
For a list of documents see sage --docbuild --documents, and for a list of supported formats see sage --docbuild --formats.
Optional additions
SageTeX
If you have installed TeX Live on your system, you may be interested in using SageTeX, a package that makes the inclusion of Sage code in LaTeX files possible. TeX Live is made aware of SageTeX automatically so you can start using it straight away.
As a simple example, here is how you include a Sage 2D plot in your TeX document (assuming you use pdflatex):
1. Include the sagetex package in the preamble of your document with the usual \usepackage{sagetex}.
2. Create a sagesilent environment in which you insert your code:
\begin{sagesilent}
dob(x) = sqrt(x^2 - 1) / (x * arctan(sqrt(x^2 - 1)))
dpr(x) = sqrt(x^2 - 1) / (x * log( x + sqrt(x^2 - 1)))
p1 = plot(dob,(x, 1, 10), color='blue')
p2 = plot(dpr,(x, 1, 10), color='red')
ptot = p1 + p2
ptot.axes_labels(['$\\xi$','$\\frac{R_h}{\\max(a,b)}$'])
\end{sagesilent}
3. Create the plot, e.g. inside a float environment:
\begin{figure} \begin{center} \sageplot[width=\linewidth]{ptot} \end{center} \end{figure}
4. Compile your document with the following procedure: $ pdflatex <doc.tex> $ sage <doc.sage> $ pdflatex <doc.tex>
5. You can now have a look at your output document.
The full documentation of SageTeX is available on CTAN.
Troubleshooting
TeX Live does not recognize SageTeX
If your TeX Live installation does not find the SageTeX package, you can try the following procedure (as root, or use a local folder):
Copy the files to the texmf directory:
# cp /opt/sage/local/share/texmf/tex/* /usr/share/texmf/tex/
Refresh TeX Live:
# texhash /usr/share/texmf/
texhash: Updating /usr/share/texmf/.//ls-R...
texhash: Done.
So, under this generality, it seems to me that we can choose $A$ in a clever way: if we make $A$ smaller, then it gets "easier" to satisfy the condition. So why not take $A$ to be the scalar multiples of the identity-- then $D$ commutes with all of $A$ on the nose.
Now let $H=\ell^2$ and let $D$ be multiplication by a sequence of real numbers $(d_n)$. If $z\in\mathbb C$ is not an accumulation point of the $(d_n)$ then $(zI-D)^{-1}$ exists and is the multiplication operator by the sequence $((z-d_n)^{-1})$. If $d_n\rightarrow\infty$ then $(z-d_n)^{-1}\rightarrow 0$, so $(zI-D)^{-1}$ is compact and $D$ has compact resolvent.
Similarly, $e^{-tD^2}$ is the multiplication operator by the sequence $(\exp(-td_n^2))$. This will be trace class if and only if $$\sum_n \exp(-td_n^2) < \infty.$$ So you just need to let $(d_n)$ grow very slowly. For example, set $$d_n = \big( \log(1/e_n) \big)^{1/2} \implies \exp(-td_n^2)= e_n^{t},$$ where we now just need that $e_n\rightarrow 0$. Let $e_n = 1/2$ for the first $N_2$ terms, then $e_n=1/3$ for the next $N_3$ terms, and so on. Then $$\sum_n \exp(-td_n^2) = \sum_{k\geq 2} \frac{N_k}{k^t}.$$ Pick $N_k \geq k^k$ so that for any $t>0$, if $K>t$ then $$\sum_n \exp(-td_n^2) \geq \sum_{k\geq K} \frac{N_k}{k^t}\geq \sum_{k\geq K} 1 = \infty.$$
In this example, you could also take $A=c_0$ for a less trivial algebra.
The Context-Free tree grammar has rules of the form:
$A\rightarrow t$ or $A(x_1,\dots,x_n)\rightarrow t_x$,
where $A\in N$, $t\in T(N\cup T)$, $t_x\in T(N\cup T\cup \{x_1,\dots,x_n\})$, $T(Z)$ means a set of all possible trees with labels from $Z$.
where $N$ is a finite unranked set of non-terminals and $T$ is a finite unranked set of terminals, $x_i$ are free-variables.
It is clear that this form of rules is definitely context-free.
The thing which I doubt about is: would the following form of rules
$x_1(A)\rightarrow x_1(A_1,\dots,A_n)$, where $A_i\in(N\cup T)$,
be context-free?
(More generally: $x_1(A)\rightarrow x_1(t_1,\dots,t_n)$, where $t_i\in T(N\cup T)$).
(Obviously it is not context-free according to the definition, but why not?) I don't see any context here. This kind of rule may be useful for describing a change of a branch without growing the tree downward. $x_1$ here is a free variable, and it points to the arbitrary parent node of the non-terminal $A$.
The only theoretical objection here against the "context-freeness" of this form of rules is that this kind of rule implies that the non-terminal $A$ is not the root of the current derivation tree. On the other hand, conventional rules of the form $A(x_1)\rightarrow\dots$ cannot be applied in the case when $A$ is a leaf in the current derivation.
UPD: For those who asked me for an example of a context-free tree grammar, please refer to this link: http://research.nii.ac.jp/~kanazawa/Courses/2011/Kyoto/cft.pdf
Though it describes CFTG for ranked trees, obviously the same rules with the same semantics can be applied in the case of unranked trees.
Motivating the classical momentum $\mathbf{p} = m\mathbf{v}$ is quite easy: it is meant to represent the quantity of motion of the particle, and since the mass is one measure of the quantity of matter, it should be proportional to mass (how much stuff is moving) and proportional to velocity (how fast, and where, it is moving).
Now, in Special Relativity the momentum changes. The new quantity of motion becomes
$$\mathbf{p} = \dfrac{m\mathbf{v}}{\sqrt{1-\dfrac{v^2}{c^2}}}$$
Or, using $\gamma$ the Lorentz factor $\mathbf{p} = \gamma(v) m\mathbf{v}$ where I write $\gamma(v)$ to indicate that the velocity is that of the particle relative to the frame in which the movement is being observed.
The need for this new momentum is because the old one fails to be conserved and because using the old one in Newton's second law leads to a law which is not invariant under Lorentz transformations. So the
need for a new momentum is perfectly well motivated.
What I would like to know is how one can motivate that the correct choice for $\mathbf{p}$ is $\gamma(v)m\mathbf{v}$. There are some arguments using the mass: consider a collision, require momentum to be conserved, transform the velocities, and then find how the mass should transform. Although this works, it doesn't seem natural, and it is derived from one particular example.
In my book there is even something that Einstein wrote saying that he didn't think it was a good idea to try transforming the mass from $m$ to $M = \gamma(v)m$, and that it was better to simply keep $\gamma$ in the new momentum without trying to combine it with the mass.
So I would like to know: without resorting to arguments based on transformation of the mass, how can one motivate the new form of momentum that works for special relativity?
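One way to make the motivation concrete (a numerical sketch of my own, with $c = 1$ and arbitrary collision numbers): build a perfectly inelastic collision whose outcome conserves the four-momentum built from $\gamma m v$, then check, frame by frame under relativistic velocity addition, that $\sum \gamma m v$ is conserved while the Newtonian $\sum m v$ is not.

from math import sqrt

def gamma(v):
    return 1.0 / sqrt(1.0 - v * v)

def boost(v, u):
    """Velocity v as seen from a frame moving at u (c = 1)."""
    return (v - u) / (1.0 - v * u)

m1, v1, m2, v2 = 1.0, 0.6, 2.0, -0.3
# Perfectly inelastic: total (E, p) fixes the final lump's velocity and rest mass.
E = gamma(v1) * m1 + gamma(v2) * m2
p = gamma(v1) * m1 * v1 + gamma(v2) * m2 * v2
V, M = p / E, sqrt(E * E - p * p)

for u in (0.0, 0.5):   # lab frame and a boosted frame
    w1, w2, Wv = boost(v1, u), boost(v2, u), boost(V, u)
    rel = gamma(w1) * m1 * w1 + gamma(w2) * m2 * w2 - gamma(Wv) * M * Wv
    newt = m1 * w1 + m2 * w2 - (m1 + m2) * Wv
    print(f"u={u}: relativistic mismatch {rel:+.2e}, Newtonian mismatch {newt:+.2e}")
# The gamma*m*v sum balances in every frame; the Newtonian m*v sum does not.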
In my previous article "Better Insight into DSP: Learning about Convolution", I discussed convolution and its two important applications in the signal processing field. There, the signals were presumably considered to be one-dimensional in the spatial domain. However, the process of convolution can be carried on for multi-dimensional signals too.
In this article, we'll try to better understand the process and consequences of two-dimensional convolution, used extensively in the field of image processing.
The Definition of 2D Convolution
Convolution involving one-dimensional signals is referred to as 1D convolution or just convolution. Otherwise, if the convolution is performed between two signals spanning along two mutually perpendicular dimensions (i.e., if signals are two-dimensional in nature), then it will be referred to as 2D convolution. This concept can be extended to involve multi-dimensional signals due to which we can have multi-dimensional convolution.
In the digital domain, convolution is performed by multiplying and accumulating the instantaneous values of the overlapping samples corresponding to two input signals, one of which is flipped. This definition of 1D convolution is applicable even for 2D convolution except that, in the latter case, one of the inputs is flipped twice.
This kind of operation is extensively used in the field of digital image processing wherein the 2D matrix representing the image will be convolved with a comparatively smaller matrix called 2D kernel.
An Example of 2D Convolution
Let's try to compute the pixel value of the output image resulting from the convolution of the 5×5 sized image matrix x with the kernel h of size 3×3, shown below in Figure 1.
Figure 1: Input matrices, where x represents the original image and h represents the kernel. Image created by Sneha H.L.
To accomplish this, the step-by-step procedure to be followed is outlined below.
Step 1: Matrix inversion
This step involves flipping of the kernel along, say, rows followed by a flip along its columns, as shown in Figure 2.
Figure 2: Pictorial representation of matrix inversion. Image created by Sneha H.L.
As a result, every (i, j)th element of an M × N kernel ends up as the (M − i + 1, N − j + 1)th element of the new matrix; the combined effect is a 180° rotation of the kernel (despite the name, no matrix inverse or transpose is involved).
Step 2: Slide the kernel over the image and perform MAC operation at each instant
Overlap the inverted kernel over the image, advancing pixel-by-pixel.
For each case, compute the product of the mutually overlapping pixels and calculate their sum. The result will be the value of the output pixel at that particular location. For this example, non-overlapping pixels will be assumed to have a value of ‘0’. We'll discuss this in more detail in the next section on “Zero Padding”.
In the present example, we'll start sliding the kernel column-wise first and then advance along the rows.
Pixels Row by Row
First, let's span the first row completely and then advance to the second, and so on and so forth.
During this process, the first overlap between the kernel and the image pixels would result when the pixel at the bottom-right of the kernel falls on the first-pixel value at the top-left of the image matrix. Both of these pixel values are highlighted and shown in dark red color in Figure 3a. So, the first pixel value of the output image will be 25 × 1 = 25.
Next, let us advance the kernel along the same row by a single pixel. At this stage, two values of the kernel matrix (0, 1 – shown in dark red font) overlap with two pixels of the image (25 and 100 depicted in dark red font) as shown in Figure 3b. So, the resulting output pixel value will be 25 × 0 + 100 × 1 = 100.
Figure 3a, 3b: Convolution results obtained for the output pixels at locations (1,1) and (1,2). Image created by Sneha H.L.
Figure 3c, 3d: Convolution results obtained for the output pixels at locations (1,4) and (1,7). Image created by Sneha H.L.
Advancing similarly, all the pixel values of the first row in the output image can be computed. Two such examples corresponding to fourth and seventh output pixels of the output matrix are shown in the figures 3c and 3d, respectively.
If we further slide the kernel along the same row, none of the pixels in the kernel overlap with those in the image. This indicates that we are done along the present row.
Move Down Vertically, Advance Horizontally
The next step would be to advance vertically down by a single pixel before restarting to move horizontally. The first overlap which would then occur is as shown in Figure 4a and by performing the MAC operation over them; we get the result as 25 × 0 + 50 × 1 = 50.
Following this, we can slide the kernel in horizontal direction till there are no more values which overlap between the kernel and the image matrices. One such case corresponding to the sixth pixel value of the output matrix (= 49 × 0 + 130 × 1 + 70 × 1 + 100 × 0 = 200) is shown in Figure 4b.
Figure 4a, 4b: Convolution results obtained for the output pixels at locations (2,1) and (2,6). Image created by Sneha H.L.
This process of moving one step down followed by horizontal scanning has to be continued until the last row of the image matrix. Three random examples concerned with the pixel outputs at the locations (4,3), (6,5) and (8,6) are shown in Figures 5a-c.
Figure 5a: Convolution results obtained for the output pixel at (4,3). Image created by Sneha H.L.
Figure 5b: Convolution results obtained for the output pixel at (6,5). Image created by Sneha H.L.
Figure 5c: Convolution results obtained for the output pixel at (8,6). Image created by Sneha H.L.
Step 3: Assemble the output matrix
Hence the resultant output matrix will be:
Figure 6: Our example's resulting output matrix. Image created by Sneha H.L.
Zero Padding
The mathematical formulation of 2-D convolution is given by
$$ y\left[i,j\right]=\sum_{m=-\infty}^\infty\sum_{n=-\infty}^\infty h\left[m,n\right] \cdot x\left[i-m,j-n\right] $$
where,
x represents the input image matrix to be convolved with the kernel matrix h to result in a new matrix y, representing the output image. Here, the indices i and j are concerned with the image matrices while those of m and n deal with that of the kernel. If the size of the kernel involved in convolution is 3 × 3, then the indices m and n range from -1 to 1. For this case, an expansion of the presented formula results in
$$ y\left[i,j\right]=\sum_{m=-\infty}^\infty h\left[m,-1\right] \cdot x\left[i-m,j+1\right] + h\left[m,0\right] \cdot x\left[i-m,j-0\right] \\ + h\left[m,1\right] \cdot x\left[i-m,j-1\right] $$
$$ y\left[i,j\right]= h\left[-1,-1\right] \cdot x\left[i+1,j+1\right] + h\left[-1,0\right] \cdot x\left[i+1,j\right] + h\left[-1,1\right] \cdot x\left[i+1,j-1\right] \\ + h\left[0,-1\right] \cdot x\left[i,j+1\right] + h\left[0,0\right] \cdot x\left[i,j\right] + h\left[0,1\right] \cdot x\left[i,j-1\right] \\ + h\left[1,-1\right] \cdot x\left[i-1,j+1\right] + h\left[1,0\right] \cdot x\left[i-1,j\right] + h\left[1,1\right] \cdot x\left[i-1,j-1\right] $$
This indicates that to obtain every output pixel, there have to be 9 multiplications performed, whose factors are the overlapping pixel elements of the image and the kernel. However, while we computed the value for our first output pixel, we performed only a single multiplication (Figure 3a, replicated as Figure 7a). What does this mean? Does it imply an inconsistency with the equation form of 2-D convolution?
No, not really. The result obtained by the summation of nine product terms can equal a single product term if the collective effect of the other eight terms is zero. One such way is the case in which each of the other eight products evaluates to zero. In the context of our example, this means that all the product terms corresponding to the non-overlapping (between image and kernel) pixels must become zero in order to make the result of the formula computation equal to that of the graphical computation.
From our elementary knowledge of mathematics, we know that if at least one of the factors involved in a multiplication is zero, then the resulting product is also zero. By this analogy, we can state that, in our example, we need to have a zero-valued image pixel corresponding to each non-overlapping pixel of the kernel matrix. A pictorial representation of this would be the one shown in Figure 7b. One important thing to be noticed here is that such an addition of zeros to the image does not alter the image in any sense except its size.
Figure 7: Zero-padding shown for the first pixel of the image. (Drawn by me)
This process of adding extra zeros is known as zero padding and is required to be done in each case where there are no image pixels to overlap the kernel pixels. For our example, zero padding requires to be carried on for each and every pixel which lies along the first two rows and columns as well as those which appear along the last two rows and columns (these pixels are shown in blue font in Figure 8). In general, the number of rows or columns to be zero-padded on each side of the input image is given by (number of rows or columns in the kernel – 1).
Figure 8
One important thing to be mentioned is the fact that zero padding is not the only way to deal with the edge effects brought about by convolution. Other padding techniques include replicate padding, periodic extension, mirroring, etc. (Digital Image Processing Using Matlab 2E, Gonzalez, Tata McGraw-Hill Education, 2009).
Summary
This article aims at explaining the graphical method of 2-D convolution and the concept of zero padding with respect to digital image processing.
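To tie the graphical procedure together, here is a short NumPy sketch of full 2-D convolution with explicit zero padding (my own illustration with made-up values, not the article's exact matrices), checked against SciPy:

import numpy as np
from scipy.signal import convolve2d

def conv2d_full(x, h):
    """Full 2-D convolution: pad with zeros, flip the kernel, slide and MAC."""
    hr, hc = h.shape
    xp = np.pad(x, ((hr - 1, hr - 1), (hc - 1, hc - 1)))   # zero padding
    hf = h[::-1, ::-1]                                     # flip rows, then columns
    out = np.empty((x.shape[0] + hr - 1, x.shape[1] + hc - 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(xp[i:i + hr, j:j + hc] * hf)  # multiply-accumulate
    return out

x = np.arange(25.0).reshape(5, 5)                  # a made-up 5x5 "image"
h = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1]])  # a made-up 3x3 kernel
assert np.allclose(conv2d_full(x, h), convolve2d(x, h))   # mode='full' by default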
Category: Group Theory
Group Theory Problems and Solutions.
Popular posts in Group Theory are:
Problem 625
Let $G$ be a group and let $H_1, H_2$ be subgroups of $G$ such that $H_1 \not \subset H_2$ and $H_2 \not \subset H_1$.
(a) Prove that the union $H_1 \cup H_2$ is never a subgroup in $G$.
(b) Prove that a group cannot be written as the union of two proper subgroups.

Problem 616
Suppose that $p$ is a prime number greater than $3$. Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$.
(a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$.
(b) Determine the index $[G : S]$.
(c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$.

Problem 613
Let $m$ and $n$ be positive integers such that $m \mid n$.
(a) Prove that the map $\phi:\Zmod{n} \to \Zmod{m}$ sending $a+n\Z$ to $a+m\Z$ for any $a\in \Z$ is well-defined.
(b) Prove that $\phi$ is a group homomorphism.
(c) Prove that $\phi$ is surjective.
(d) Determine the group structure of the kernel of $\phi$.

If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order
Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Problem 497
Let $G$ be an abelian group.
Let $a$ and $b$ be elements in $G$ of order $m$ and $n$, respectively. Prove that there exists an element $c$ in $G$ such that the order of $c$ is the least common multiple of $m$ and $n$.
Also determine whether the statement is true if $G$ is a non-abelian group.
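As a quick empirical sanity check of Problem 616 (my own snippet, brute force over a few small primes), the squares do form a subgroup of index $2$:

for p in (5, 7, 11, 13, 101):
    G = set(range(1, p))                 # (Z/pZ)* for p prime
    S = {x * x % p for x in G}           # the set of squares
    assert len(G) // len(S) == 2         # index [G : S] = 2
    print(p, len(G), len(S))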
Communication Systems Amplitude Modulation, Production and Detection of Amplitude Modulated Wave During PHASE MODULATION, the phase of the CW (carrier wave) is changed in accordance with the amplitude variations of the signal. The extent to which the modulation is to be taken up is called the modulation factor (m_a): \tt m_{a} = \frac{Amplitude \ change \ in \ carrier \ wave}{Amplitude \ of \ unmodulated \ CW} If a carrier wave is modulated by different audio waves to different strengths, then the effective modulation factor is given by \tt \sqrt{m_{1}^{2} + m_{2}^{2} + ....} Carrier wave power \tt P_{c} = \frac{V_{c}^{2}}{2R} Power of each side band \tt P_{1} = \frac{m_{a}^{2} V_{c}^{2}}{8R} Total power of side bands \tt P_{s} = \frac{m_{a}^{2} V_{c}^{2}}{4R} Total power carried by the modulated wave \tt P_{T} = \frac{V_{c}^{2}}{2R} + \frac{m_{a}^{2} V_{c}^{2}}{4R} = \frac{V_{c}^{2}}{2R} \left[\frac{2+m_{a}^{2}}{2}\right] Fractional power carried by the side bands \tt \frac{P_{S}}{P_{T}} = \frac{m_{a}^{2}}{2 + m_{a}^{2}} and \tt \frac{P_{C}}{P_{T}} = \left(\frac{I_{c}}{I_{t}}\right)^{2} For 100% modulation (m_a = 1): \tt \frac{P_{S}}{P_{T}} = \frac{m_{a}^{2}}{2 + m_{a}^{2}} = \frac{1}{3} and \tt \frac{P_{C}}{P_{T}} = \frac{2}{2 + m_{a}^{2}} = \frac{2}{3} The modulated output contains the terms y(t) = \tt BV_{m}\sin \omega_{m}t + BV_{c}\sin \omega_{c}t + \frac{CV_{m}^{2}}{2} + \frac{CV_{c}^{2}}{2} - \frac{CV_{m}^{2}}{2} \cos 2 \omega_{m}t - \frac{CV_{c}^{2}}{2} \cos 2 \omega_{c}t + CV_{m}V_{c}\cos (\omega_{c} - \omega_{m})t - CV_{m}V_{c}\cos (\omega_{c} + \omega_{m})t
Detection:
View the Topic in this video From 00:24 To 23:24 View the Topic in this video From 00:11 To 2:19
1. Power in Amplitude Modulation waves Power dissipated in any circuit, P = V^{2}_{rms}/R. Hence,
Carrier power, P_{c} = \frac{(E_{c}/\sqrt{2})^{2}}{R} = \frac{E_{c}^{2}}{2R}
2. Total power of Amplitude modulation wave,
P_{t} = P_{c} + P_{sb} = \frac{E_{c}^{2}}{2R}\left[1 + \frac{m_{a}^{2}}{2}\right]
3. A sinusoidal carrier wave can be represented as
c(t) = A_{c} \sin(\omega_{c}t + \Phi)
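A tiny numeric check of the power-fraction formulas above (for instance, at 100% modulation the side bands carry one third of the total power):

# Ps/Pt = ma^2 / (2 + ma^2) and Pc/Pt = 2 / (2 + ma^2) for a few modulation depths.
for ma in (0.3, 0.5, 1.0):
    ps_over_pt = ma**2 / (2 + ma**2)
    pc_over_pt = 2 / (2 + ma**2)
    print(ma, round(ps_over_pt, 4), round(pc_over_pt, 4))   # sums to 1; 1/3 and 2/3 at ma = 1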
1 Introduction
In the theory of ordinary differential equations, and in particular in the theory of Hamiltonian systems, the existence of first integrals is important, because they allow one to lower the dimension where the Hamiltonian system is defined. Furthermore, if we know a sufficient number of first integrals, these allow us to solve the Hamiltonian system explicitly, and we say that the system is integrable. Almost until the end of the 19th century, most mathematicians and physicists believed that the equations of classical mechanics were
graph for every independent set $I$ of $G$, a graph is a fractional independent-set-deletable $(a,b,m)$-deleted graph (shortly, fractional ID-$(a,b,m)$-deleted graph). If $a = b = k$, then a fractional ID-$(a,b,m)$-deleted graph is a fractional ID-$(k,m)$-deleted graph. If $m = 0$, then a fractional ID-$(a,b,m)$-deleted graph is just a fractional ID-$(a,b)$-factor-critical graph. If $G$ has a fractional $(g,f)$-factor containing a Hamiltonian cycle, it is said that $G$ includes a Hamiltonian fractional $(g,f)$-factor. A graph $G$ is called an ID-Hamiltonian
1 Introduction The use of computer algebra systems for normal form computations is considered at present a routine operation. As a general reference see e.g. Sanders et al. [36] and Meyer et al. [32]. Nevertheless, when we deal with special classes of differential equations, like Poisson or Hamiltonian systems, which is our case, it is advisable to employ specific transformations as well as tailored variables for those problems [32], mostly connected with the symmetries that those systems might possess. More precisely, we are interested in
the longest path or cycle is required, the problem is closely related to well-known Hamiltonian problems in graph theory. In the rest of this paper, we will use standard terminology in graphs (see ref. [2]). It is very difficult to determine whether a graph is Hamiltonian or not. Readers may refer to [4, 5, 6]. 2 Definitions and Notation We follow [2] for graph-theoretical terminology and notation not defined here. A graph G = (V, E) always means a simple graph (without loops and multiple edges), where V = V(G) is the vertex set and E = E(G
1 Introduction The rotation of a triaxial rigid body in the absence of external torques is known to be integrable [1, 2]. In particular, the canonical transformation to Andoyer variables [3] reduces the free rigid body rotation to an integrable, one degree of freedom Hamiltonian, which immediately shows the preservation of the total angular momentum and allows for the representation of the possible solutions by contour plots of the reduced Hamiltonian [4]. However, because the solution to the torque-free motion depends on elliptical integrals and
coupling with the Korteweg-de Vries equation, which is associated with non-semisimple matrix Lie algebras. In the references [10] and [11], its Lax pair and bi-Hamiltonian formulation were presented respectively. It should be noted that its bi-Hamiltonian structure is the first example of local bi-Hamiltonian structures, which lead to hereditary recursion operators in (2+1) dimensions. Several methods have been developed to find exact solutions of the NLPDEs. Some of these are the homogeneous balance method [12], the ansatz method [13], the inverse scattering
Poincaré and Arnold, we split the Hamiltonian into two terms: $$\mathcal{H} = \mathcal{H}_0 + \mathcal{H}_1,$$ where the intermediary $\mathcal{H}_0$ defines a non-degenerate and simplified model of the problem at hand, which includes the Kepler and free rigid-body as particular cases, and $\mathcal{H}_1$ is usually dubbed the perturbation. A special realization of an intermediary occurs for the case in which it is an integrable 1-DOF system. The work of Hill on the Moon's motion [32] is, perhaps, the best known example. The
1 Introduction For motivation and background to this work see [1]. In this paper, we consider only finite and simple graphs. Let G = (V(G), E(G)) be a graph, where V(G) denotes its vertex set and E(G) denotes its edge set. A graph is Hamiltonian if it admits a Hamiltonian cycle. For each x ∊ V(G), the neighborhood N_G(x) of x is the set of vertices of G adjacent to x, and the degree d_G(x) of x is |N_G(x)|. For S ⊆ V(G), we write N_G(S) = ∪_{x∊S} N_G(x). G[S] denotes the subgraph of G
, and we denote it by div(x, y), as the function $$\mathrm{div}(x,y) = \frac{\partial X}{\partial x}(x,y) + \frac{\partial Y}{\partial y}(x,y).$$ System 1 is said to be Hamiltonian if div(x, y) ≡ 0. In such a case there exists a neighborhood of the origin U and an analytic function H : U ⊆ ℝ² → ℝ, called the Hamiltonian, such that $$X(x,y) = -\frac{\partial H}{\partial y} \quad \text{and} \quad Y(x,y) = \frac{\partial H}{\partial x}.$$
It is obvious that a question which is too short will almost certainly lack context, and that a question which is too long may run the risk of readers never finding out what the question actually is (Some may not want to read the full question).
Therefore, I was wondering:
What is the optimal length in characters for a question? I would like to evaluate this statistically, using the Site Analytics (Data.SE) by Stack Exchange.
To evaluate the community response, I suggest using the following metric:
$$\text{Percentage of upvotes}=\frac{\text{Total number of upvotes}}{\text{Total number of votes}}\cdot 100$$
From experience, I think there would be a global maximum around $3000$ characters.
According to this post, the maximum number of characters per question is $30000$ (with spacing). Since most questions seem to be approximately $500$ characters long, I suggest that we represent the data on two different bar charts: one going from $0$ to $2500$ characters in intervals of $50$ characters, the other going from $0$ to $30000$ characters in intervals of $500$. Here is an example of what I mean by "intervals" (except that here I represented it in a table instead of a bar chart). Obviously, this data is made up:
$$\small\begin{array}{c|c}\text{number of characters}&\text{Percentage of Upvotes}\\\hline1-50&10\%\\51-100&15\%\\101-150&25\%\\ \vdots&\vdots\\2451-2500&90\% \end{array} \qquad \begin{array}{c|c}\text{number of characters}&\text{Percentage of Upvotes}\\\hline1-500&30\%\\501-1000&60\%\\1001-1500&77.5\%\\ \vdots&\vdots\\29501-30000&85\% \end{array}$$
I suggest that we let the
Number of characters (with spacing) be on the horizontal axis and the Percentage of upvotes be on the vertical axis of the bar chart.
Of course, if you can think of a better way of representing this (Rather than a bar chart), feel free to write an answer. Similarly, if you can think of a better metric to evaluate community response, feel free to suggest one in the comments or write an answer using that metric.
Since I lack experience in programming and I have not seen any query which does this, I would appreciate it if you could show us the statistics and conclude with an optimal length. |
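Not an answer with real data, but the statistics themselves are mechanical once the lengths and vote counts are exported from a Data.SE query. A minimal sketch in Python, where the file name posts.csv and the columns Length, UpVotes, DownVotes are hypothetical placeholders for whatever the query returns:

```python
import pandas as pd

# Hypothetical export: one row per question with body length and vote counts.
df = pd.read_csv("posts.csv")  # assumed columns: Length, UpVotes, DownVotes

# Fine-grained chart: intervals of 50 characters up to 2500.
df["bucket"] = pd.cut(df["Length"], bins=range(0, 2550, 50))

g = df.groupby("bucket", observed=True)[["UpVotes", "DownVotes"]].sum()
g["pct_upvotes"] = 100 * g["UpVotes"] / (g["UpVotes"] + g["DownVotes"])
print(g["pct_upvotes"])  # the suggested metric, one value per length interval
```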
This is a three-dimensional problem and the wave equation is
\[c^2 \nabla^2 \Psi = \ddot \Psi. \label{7.6.1}\]
The wave-function here, \(\Psi\), which is a function of the coordinates and the time, can be thought of as the density of the sphere; it describes how the density of the sphere is varying in space and with time.
In problems of spherical symmetry it is convenient to write this in spherical coordinates. The expression for \(\nabla^2\) in spherical coordinates is fairly lengthy. You have probably not memorized it, but you will have seen it or know where to look it up. Stationary solutions are of the form
\[\Psi (r, \theta, \phi ; t) = \psi ( r, \theta, \phi ) . \chi ( t ) . \label{7.6.2}\]
Is it necessary to know the mathematical details of these functions? Probably not if it is not your intention to have a career in theoretical spectroscopy. If you
are going to have a career in theoretical spectroscopy, it probably wouldn't hurt to do the detailed calculation - but in practice a great deal of calculation can be and is done without reference to the detailed algebra; all that is necessary to know and become familiar with are the properties of the functions and how they react to the several operators encountered in quantum mechanics. At this stage, we are just looking at some general principles, and need not worry about the details of the algebra, which can be fairly involved.
The spherical coordinates \(r\), \(\theta\), \(\phi\) are independent variables, and consequently the time-independent part of the wave function can be written as the product of three functions:
\[ \psi ( r, \theta, \phi ) = R(r). \Theta ( \theta) . \Phi ( \phi ) . \label{7.6.3}\]
Again, it is not immediately necessary to know the detailed forms of these functions - you will find them in various books on physics or chemistry or spectroscopy. The simplest is \(\Phi\) − it is a simple sinusoidal function, usually written as \(e^{- i m \phi}\) . The function \(\Theta\) is a bit more complicated and it involves things called Legendre polynomials. The function \(R\) involves somewhat less-familiar polynomials called Laguerre polynomials. But there are
boundary conditions. Thus \(\phi\) goes only from \(0\) to \(2\pi\), and in that interval there can only be an integral number of half waves. Likewise, \(\theta\) goes only from \(0\) to \(\pi\), and \(r\) goes only from \(0\) to \(a\), where \(a\) is the radius of the sphere. All three of these functions have integral "quantum numbers" associated with them. There is nothing mysterious about this, nor is it necessary to see the detailed algebra to see why this must be so; it is one of the inevitable constraints of fixed boundary conditions. The function \(R\), the radial part of the wavefunction, has associated with it an integer \(n\) , which can take only integral values \(1, \ 2. \ 3,…\). The function \(\Theta\), the meridional wavefunction, has associated with it an integer \(l\), which, for a given radial function (i.e. a given value of \(n\)) can have only the n different integral values \(0, \ 1, \ 2, \ … \ …n−1\). Finally, the function \(\Phi\), the azimuthal wavefunction, has associated with it an integer m, which, for a given meridional function (i.e. a given value of \(l\)) can have only the \(2l+1\) different integral values \(−l, \ −l+1, \ … \ 0, \ …l−1, \ l\). Thus for a given \(n\), the number of possible wavefunctions is
\[ \sum_{l=0}^{n-1} (2l + 1). \nonumber\]
You will have to remember how to sum arithmetic series in order to evaluate this, so please do so. The answer is \(n^2\).
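For completeness, the arithmetic series works out as

\[ \sum_{l=0}^{n-1} (2l+1) = 2 \cdot \frac{n(n-1)}{2} + n = n^2. \nonumber\]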
When I first came across these quantum numbers, it was in connection with the wave mechanics of the hydrogen atom, and I thought there was something very mysterious about atomic physics. I was reassured only rather later - as I hope you will be reassured now - that the introduction of these quantum numbers is nothing to do with some special mysterious properties of atoms, but comes quite naturally out of the classical theory of vibrating spheres. Indeed, if you come to think about it, it would be very difficult indeed to believe that the wavefunctions of vibrating spheres did not involve numbers that had to be integers with restrictions on them.
The time-independent part of the wavefunction can be written:
\[\psi_{lmn} ( r, \theta, \phi ) = R_{nl} ( r ) . \Theta_{lm} ( \theta ) . \Phi_m ( \phi ) . \label{7.6.4}\]
Often the angular parts of the wavefunction are combined into a single function:
\[ Y_{lm} ( \theta, \phi ) = \Theta_{lm} ( \theta ) . \Phi_m ( \phi ) . \label{7.6.5}\]
The functions \(Y_{lm}\) are known as
spherical harmonics.
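If you want to experiment with these functions numerically rather than algebraically, SciPy ships them ready-made (a minimal sketch; note that SciPy's sph_harm calls the degree n rather than l, and takes the azimuthal angle before the polar one):

```python
import numpy as np
from scipy.special import sph_harm

l, m = 2, 1
theta = np.pi / 3   # polar (meridional) angle, 0..pi
phi = np.pi / 4     # azimuthal angle, 0..2*pi

# SciPy's argument order is sph_harm(m, n, azimuthal, polar).
Y = sph_harm(m, l, phi, theta)
print(f"Y_({l},{m}) = {Y:.4f}")

# Counting the wavefunctions for a given n: l = 0..n-1, m = -l..l.
n = 4
print(sum(2 * ell + 1 for ell in range(n)), n**2)  # 16 16
```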
When performing various manipulations on the wavefunctions, very often all that happens is that you end up with a similar function but with different values of the integers ("quantum numbers"). Anyone who does a lot of such calculations soon gets used to this, and consequently, rather than write out the rather lengthy functions in full, what is done is merely to list the quantum number of a function between two special symbols known as a "ket". (That's one half of a bracket.) Thus a wavefunction may be written merely as \(|lmn \rangle \). This is done frequently in the quantum mechanics of atoms. I have never seen it done in other, more "classical" branches of physics, but I see no reason why it should not be done, and I dare say that it is in some circles.
The problem of a vibrating sphere of uniform density is fairly straightforward. Similar problems face geophysicists or planetary scientists when discussing the interiors of the Earth or the Moon or other planets, or astrophysicists studying "helioseismology" or "asteroseismology" - except that you have to remember there that you are not dealing with
uniform spheres. |
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... |
Denote by $i \in \{1, \ldots, n\}$ an economic agent. Let $\mathbf x \in \mathbb R^n$ denote a vector of actions and $x_i \in \mathbf x$ a typical element. Let further $f_i : \mathbb R^n \to \mathbb R$ denote the objective function. The vector $\mathbf x^*$ constitutes a Nash equilibrium if the following holds \begin{align} \forall i : f_i(\mathbf x^*) = \max_{x_i} f_i(x_i, \mathbf x_{-i}^*) \end{align} where $\mathbf x_{-i}^* := (\ldots, x_{i-1}^*, x_{i+1}^*, \ldots)$ are the equilibrium actions of the opponents. The $n$ first order conditions read \begin{align} \frac{\partial f_i(x_i, \mathbf x_{-i}^*)}{\partial x_i}\Bigg|_{x_i = x_i^*} = 0. \end{align} The second order conditions read \begin{align} \frac{\partial^2 f_i(x_i, \mathbf x_{-i}^*)}{\partial x_i^2}\Bigg|_{x_i = x_i^*} < 0. \end{align} Assume symmetry such that $x_j^* = x^* ~ \forall j$. Since the problem is quite complex, I cannot derive the second order partial derivative. I was thus thinking of defining the first derivative in a single argument (the symmetric equilibrium actions) \begin{align} \forall i : f'(x^*) = f'_i(x^*) := \frac{\partial f_i(x_i,\mathbf x_{-i}^*)}{\partial x_i} \Bigg|_{x_j^* = x^* ~ \forall j} \end{align} and then take the derivative of the simpler first order derivative and check whether it's negative. In order to make the simplification in between, I must somehow make sure that no information gets lost, and that's why I'm wondering whether the following relation holds in general \begin{align} \text{sign}\left[f''(x^*)\right] ~~ \substack{? \\ =} ~~ \text{sign}\left[\frac{\partial^2 f_i(x_i,\mathbf x_{-i}^*)}{\partial x_i^2} \Bigg|_{x_j^* = x^* ~ \forall j}\right]. \end{align}
I was thinking that there may exists a general theorem from optimization for symmetric actions.
Example: Consider the textbook Cournot duopoly with $i,j \in \{1,2\}$, $i \neq j$ and \begin{align}f_i(x_i, x_j) = (1-x_i-x_j)x_i.\end{align} Now we readily get the following partial derivatives \begin{align}&\frac{\partial f_i(x_i, x_j^*)}{\partial x_i}\Bigg|_{x_i = x_i^*} = 1 - 2x_i^* - x_j^* \\[2mm]&\frac{\partial^2 f_i(x_i, x_j^*)}{\partial x_i^2}\Bigg|_{x_i = x_i^*} = -2\\[2mm]&f'(x^*) = 1-3x^*\\[2mm]&f''(x^*) = -3\\\end{align} So the sign is the same, but the values differ. Solution: It seems that the idea was published in the latest Theoretical Economics and the approach is valid. The author actually discusses the super simple Cournot example as well.
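For what it's worth, the Cournot example can be checked symbolically; a minimal sketch with SymPy:

```python
import sympy as sp

x_i, x_j, x = sp.symbols("x_i x_j x")
f = (1 - x_i - x_j) * x_i           # firm i's profit

# Second-order condition: differentiate twice, then impose symmetry.
soc = sp.diff(f, x_i, 2).subs({x_i: x, x_j: x})

# Reduced approach: impose symmetry on the first derivative, then differentiate.
f_prime = sp.diff(f, x_i).subs({x_i: x, x_j: x})   # 1 - 3*x
f_double_prime = sp.diff(f_prime, x)

print(soc, f_double_prime)   # -2 and -3: different values, same negative sign
```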
The following statement is adapted from Theorem 1 of Hefti (2017): "Equilibria in symmetric games: Theory and applications", Theoretical Economics 12, pp. 979–1002
Suppose $f_i(x_i, \mathbf x_{-i})$ is strongly quasiconcave in $x_i$. Define $f'(x)$ as above.
Let $x^* \in \mathbb R$ solve $f'(x^*) = 0$. If $f''(x^*) < 0$ then there exists a unique symmetric equilibrium.
Further Issues In footnote 7 the author argues
The assumption of a strongly quasiconcave payoff function in own strategies is mainly for convenience, because it is a sufficient condition for the existence of a (possibly differentiable) best-reply function. However, many results require only differentiability of best replies (and do not otherwise hinge on quasiconcavity), and some results do not require that best replies are everywhere differentiable.
This statement is particularly useful, because I have a problem in mind where I cannot say anything about (quasi)concavity of the objective function. I'm, however, not sure if I can apply Theorem 1, because the author does not specify when quasiconcavity is imposed for a result.
How would I know whether the theorem is applicable?
Note that I cannot(!) derive a best response function $x_i^* = \phi_i(\mathbf x_{-i}^*)$, but only the symmetric equilibrium actions $x_j = x^* ~ \forall j$. |
Without knowing the mass of the Earth, calculating the gravitational constant is impossible from $g$ and the acceleration of the Moon. The best you can do is calculate the product of the gravitational constant and the Earth's mass (GM). This is why Cavendish's experiments with the gravity of lead weights was important, since the mass of the body providing the gravitational force was known. Once $G$ was calculated from this experiment, the Earth could then be weighed from using either $g$ or the Moon's acceleration (both hopefully yielding the same answer).
Newton's Principia can be downloaded here:https://archive.org/stream/newtonspmathema00newtrich#page/n0/mode/2up
Follow up questions copied from the comments (in case the comment-deletion strike force shows up):
So how exactly did Newton express his universal gravitational law. Was it like this "$F_g$ is equal to $GMm/r^2,$ but I must avow that I doth not know neither $G$ nor big $M$". Or did he just assign some number "$X$" to the gravitational effect due to the Earth, which ended up being $GM$?
Philip Wood: I'm pretty sure that Newton never wrote his law of gravitation in algebraic form, nor thought in terms of a gravitational constant. In fact the Principia looks more like geometry than algebra. Algebra was not the trusted universal tool that it is today. Even as late as the 1790s, Cavendish's lead balls experiment was described as 'weighing [finding the mass of] the Earth', rather than as determining the gravitational constant. Interestingly, Newton estimated the mean density of the Earth pretty accurately (how, I don't know) so he could have given a value for G if he'd thought algebraically
Mark H: Philip Wood is correct. Newton wrote Principia in sentences, not equations. The laws of gravity were described in two parts (quoting from a translation): "In two spheres mutually gravitating each towards the other, ... the weight of either sphere towards the other will be reciprocally as the square of the distance between their centres." And, "That there is a power of gravity tending to all bodies, proportional to the several quantities of matter which they contain." This is the full statement of the behavior of gravity. No equations or constants used.
Who first measured the standard gravitational acceleration 9.80 m/s/s? I assume that was well known by the time of Newton?
After a quick search, I can't find who first measured $g=9.8m/s^2$. It's not a difficult measurement, but would require accurate clocks with subsecond accuracy. This is an interesting article: https://en.wikipedia.org/wiki/Standard_gravity
Actually, on page 520, Newton lists the acceleration due to gravity at Earth's surface like so: "the same body, ... falling by the impulse of the same centripetal force as before [Earth's gravity], would, in one second of time, describe 15 1/12 Paris feet." So, the value was first measured sometime between Galileo's experiments and Newton's Principia.
Was Newton (and therefore all of us!) just a tiny bit lucky that the ratios worked out so nicely? I'm not putting down Sir Isaac (perhaps the smartest bloke who's ever drawn breath in tights), but even I might notice that
$\frac{g(Earth)}{a_c(Moon)}=3600=\left(\frac{r(Earth−to−Moon)}{r(Earth)}\right)^2$.
If the ratio had been a little messier, say one to 47½, it might have been a little harder to spot the connection.
Newton knew that the moon was not exactly 60 earth-radii distant. He quotes a number of measurements in Principia: "The mean distance of the moon from the centre of the earth, is, in semi-diameters of the earth, according to Ptolemy, Kepler in his Ephemerides, Bullialdus, Hevelius, and Riccioli, 59; according to Flamsted, 59 1/3; according to Tycho, 56 1/2; to Vendelin, 60; to Copernicus, 60 1/3; to Kircher, 62 1/2 (p. 391, 392, 393)." He used 60 as an average, which results in an easily calculable square, but squaring isn't a difficult calculation anyway.
The inverse square law was already being talked about by many scientists at the time, including Robert Hooke. Newton used the Moon as a confirmation of the inverse square law, not to discover it. He already knew what the answer should be if the inverse square law was true. In fact, it was the orbital laws discovered by Johannes Kepler--especially the constant ratio of the cube of the average distance from the central body and the square of the orbital period--that provided the best evidence for the inverse square law.
In "The System of the World" part of Newton's Principia, he uses astronomical data to show that gravity is a universal phenomena: the planets around the Sun, the moons around Jupiter, the moons around Saturn, and the Moon around Earth. For the last, in order to establish the ratio of forces and accelerations, you need at least two bodies. Since Earth only has one moon, he made the comparison with terrestrial acceleration.
I would love to read a proof (requiring less mathematical nous than Sir Isaac had at his disposal) for the connection from Kepler's 3rd law to Newton's inverse square. Do you know of one?
A simple version of Kepler's Third Law to the inverse square law can be shown for circular orbits pretty easily. Define $r$ as the constant radius of the orbit, $T$ as the time period of the orbit, $v$ as the planet's velocity, $m$ as the mass of the orbiting planet, $F$ as the gravitational force, and $k$ as some constant. \begin{align}\frac{r^3}{T^2} = k &\iff r^3 = k\left(\frac{2πr}{v}\right)^2 \\ &\iff r = \frac{4\pi^2k}{v^2} \\ &\iff \frac{v^2}{r} = \frac{4\pi^2k}{r^2} \\ &\iff \frac{mv^2}{r} = \frac{4\pi^2km}{r^2} \\ &\iff F = \frac{4\pi^2km}{r^2}\end{align}
The quantity $v^2/r$ is the centripetal acceleration necessary for constant speed circular motion. |
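Newton's Moon test from the earlier comments is easy to reproduce numerically (a sketch; 9.8 m/s², an Earth radius of 6.37×10⁶ m, a lunar distance of 3.84×10⁸ m and a 27.3-day sidereal month are standard modern round figures):

```python
import math

g = 9.8                  # surface gravity, m/s^2
R_earth = 6.37e6         # Earth's radius, m
r_moon = 3.84e8          # Earth-Moon distance, m
T = 27.3 * 24 * 3600     # sidereal month, s

a_moon = 4 * math.pi**2 * r_moon / T**2   # centripetal acceleration of the Moon
print(f"a_moon = {a_moon:.2e} m/s^2")     # ~2.7e-3
print(f"g / a_moon = {g / a_moon:.0f}")   # ~3600
print(f"(r/R)^2 = {(r_moon / R_earth)**2:.0f}")  # ~3600, i.e. ~60^2
```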
I thought this result was a bit interesting. Mahlon M. Day in the paper [1] showed that the amenable groups are precisely the groups where the Markov-Kakutani theorem holds.
If $(X,\mathcal{M})$ is an algebra of sets, then a function $\mu:\mathcal{M}\rightarrow[0,1]$ is said to be a finitely additive probability measure if $\mu(\emptyset)=0,\mu(X)=1$ and $\mu(A\cup B)=\mu(A)+\mu(B)$ whenever $A,B\in\mathcal{M}$ and $A\cap B=\emptyset$. If $G$ is a group, then a finitely additive probability measure $\mu:P(G)\rightarrow[0,1]$ on the algebra of sets $(G,P(G))$ is said to be left-invariant if $\mu(aR)=\mu(R)$ for each $a\in G$ and each $R\subseteq G$.
A group $G$ is said to be amenable if there exists a left-invariant finitely additive probability measure $\mu:P(G)\rightarrow[0,1]$. For example, every finite group is amenable, and every abelian group is amenable. Furthermore, the class of amenable groups is closed under taking quotients, subgroups, direct limits, and finite products.
Let $C$ be a convex subset of a real vector space. Then a function $f:C\rightarrow C$ is said to be an affine map if $f(\lambda x+(1-\lambda)y)=\lambda f(x)+(1-\lambda)f(y)$ for each $\lambda\in[0,1]$ and $x,y\in C$.
$\textbf{Theorem}$ (Day) Let $G$ be a group. Then the following are equivalent.
$G$ is amenable.
Let $X$ be a Hausdorff topological vector space and let $C\subseteq X$ be a compact convex subset. Let $\phi:G\rightarrow C^{C}$ be a group action such that each $\phi(g)$ is a continuous affine map. Then there is a point in $C$ fixed by every element of $G$.
Let $X$ be a locally convex topological vector space and let $C\subseteq X$ be a compact convex subset. Let $\phi:G\rightarrow C^{C}$ be a group action such that each $\phi(g)$ is a continuous affine map. Then there is a point in $C$ fixed by every element in $G$.
[1] Mahlon M. Day, Fixed-point theorems for compact convex sets, Illinois J. Math. 5 (4) (1961), 585–590.
[2] Ceccherini-Silberstein, Tullio, and M. Coornaert. Cellular Automata and Groups. Heidelberg: Springer, 2010. |
I am trying to estimate the required shielding masses to maintain a radiation level at or below Earth's average radiation of 3 mSv per year at different Earth orbits (LEO, MEO, GEO and 50000 km+).
Initially I reverse engineered the radiation estimation method used to compute the estimated received radiation for the Apollo mission, to determine the required shielding in the highest (50000 km+) Earth orbit.
Unfortunately this document has been removed.
and find a dose rate experienced in the most intense radiation (in mSv/hr), link it to the value with 3*10^8 [units?] and then use the other 10^x values as a scaling factor for the dose rate experienced at those orbit heights in mSv/hr.
And then dividing that by the 7 cm of water required to halve the radiation, as mentioned by KeithS in this StackExchange question.
However, this method
Does not take into account the different composition of radiation particles in the different orbits.
Does not take into account the effect of the radiation particles' composition on the shielding effectiveness.
Does not take into account the effects of Bremsstrahlung.
Is highly inaccurate due to the inaccurate data (1 static picture).
Needs verification of the units of the picture used as a data source.
So a 3D model that converts the radiation measurements into either mSv or Gy for a volume (sphere) with 1 kg of material, as a function of the additional shielding mass with density ρ and shielding of y grams/cm², would greatly improve the accuracy of the estimate.
The data is available, as appears from the figures, but I cannot find such a model (I understand the actual radiation is time-dependent, but even an average or a single instance of the data would significantly increase the estimate accuracy).
Do you know any such models?
Yielding
$ {(10 \cdot 9.33 \cdot 365 \cdot 24)}\cdot {(\frac{1}{2})}^{n_{worst}}=3$ $ {(1 \cdot 9.33 \cdot 365 \cdot 24)}\cdot {(\frac{1}{2})}^{n_{best}}=3$
Resulting in
- $n_{worst} = 18.06 \Rightarrow t_{worst} = 18.06 \cdot 7 = 126.5$ grams/cm²
- $n_{best} = 14.8 \Rightarrow t_{best} = 14.8 \cdot 7 = 103.6$ grams/cm²
Assuming gold as a shielding material with density $\rho_{gold} = 19.3$ g/cm³, for the 1 L sphere with radius given by $\frac{4}{3}\pi r^3 = 0.001$ m³ $\Rightarrow r_v = 0.062035$ m $= 6.2035$ cm, the shielding mass (minus the 1 litre unshielded sphere) becomes:
- worst case mass $= \frac{4}{3}\pi\left(r_v + 0.01\cdot\frac{t_{worst}}{\rho_{gold}}\right)^3 \cdot 19300 - 19.3 = \frac{4}{3}\pi\left(0.062035 + 0.01\cdot\frac{126.5}{19.3}\right)^3 \cdot 19300 - 19.3 = 148.6$ kg
- best case mass $= \frac{4}{3}\pi\left(r_v + 0.01\cdot\frac{t_{best}}{\rho_{gold}}\right)^3 \cdot 19300 - 19.3 = \frac{4}{3}\pi\left(0.062035 + 0.01\cdot\frac{103.6}{19.3}\right)^3 \cdot 19300 - 19.3 = 106$ kg
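The halving-thickness arithmetic above is easy to script so the assumptions can be varied (a sketch reproducing the numbers above to within rounding; the 9.33 mSv/hr dose rate, its ×10 worst case, and the 7 g/cm² water halving thickness are the assumptions from the earlier steps):

```python
import math

TARGET = 3.0          # allowed dose, mSv/year
HALVING = 7.0         # halving thickness, g/cm^2 (water-equivalent, assumed)
RHO_GOLD = 19.3       # gold density, g/cm^3
R_V = 0.062035        # radius of the 1 L sphere, m

def shield(dose_rate):                    # dose_rate in mSv/hr, unshielded
    annual = dose_rate * 365 * 24         # mSv/year
    n = math.log2(annual / TARGET)        # halvings needed
    t = n * HALVING                       # areal density, g/cm^2
    r_out = R_V + 0.01 * t / RHO_GOLD     # outer radius, m (0.01: cm -> m)
    mass = (4 / 3) * math.pi * r_out**3 * 19300  # solid gold sphere, kg
    return t, mass - 19.3                 # subtract the unshielded 1 L core

for label, rate in [("best", 9.33), ("worst", 10 * 9.33)]:
    t, m = shield(rate)
    print(f"{label}: t = {t:.1f} g/cm^2, shield mass = {m:.0f} kg")
```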
Doubts:
Validity of assumption 1: the conversion of Roentgen to mSv is estimated at 10 to 100 [Roentgen/hour] = 0.01 to 0.04 Gy/hour according to van Allen in 1958, where 0.01 to 0.04 Gy/hour would convert to 0.01 mSv per 10 Roentgen instead of 0.01 mSv per 1 Roentgen as assumed.
Applicability of assumption 2: the required shielding will in reality be optimized for different orbits, since, for example, Bremsstrahlung is most efficiently shielded differently than the high energy protons, yielding different values than the simple 7 grams/cm².
Properties of the algorithm:
Sequential complexity: [math]O(n^3)[/math]
Height of the parallel form: [math]O(n)[/math]
Width of the parallel form: [math]O(n^2)[/math]
Amount of input data: [math]\frac{n (n + 1)}{2}[/math]
Amount of output data: [math]\frac{n (n + 1)}{2}[/math]
1 Properties and structure of the algorithm
1.1 General description
The Cholesky decomposition algorithm was first proposed by André-Louis Cholesky (October 15, 1875 - August 31, 1918) at the end of the First World War, shortly before he was killed in battle. He was a French military officer and mathematician. The idea of this algorithm was published in 1924 by his fellow officer and, later, was used by Banachiewicz in 1938 [7]. In the Russian mathematical literature, the Cholesky decomposition is also known as the square-root method [1-3] due to the square root operations used in this decomposition and not used in Gaussian elimination.
Originally, the Cholesky decomposition was used only for dense real symmetric positive definite matrices. At present, the application of this decomposition is much wider. For example, it can also be employed for the case of Hermitian matrices. In order to increase the computing performance, its block versions are often applied.
In the case of sparse matrices, the Cholesky decomposition is also widely used as the main stage of a direct method for solving linear systems. In order to reduce the memory requirements and the profile of the matrix, special reordering strategies are applied to minimize the number of arithmetic operations. A number of reordering strategies are used to identify the independent matrix blocks for parallel computing systems.
1.2 Mathematical description
Input data: a symmetric positive definite matrix [math]A[/math] whose elements are denoted by [math]a_{ij}[/math].
Output data: the lower triangular matrix [math]L[/math] whose elements are denoted by [math]l_{ij}[/math].
The Cholesky algorithm can be represented in the form
[math]\begin{align}l_{11} & = \sqrt{a_{11}}, \\l_{j1} & = \frac{a_{j1}}{l_{11}}, \quad j \in [2, n], \\l_{ii} & = \sqrt{a_{ii} - \sum_{p = 1}^{i - 1} l_{ip}^2}, \quad i \in [2, n], \\l_{ji} & = \left (a_{ji} - \sum_{p = 1}^{i - 1} l_{ip} l_{jp} \right ) / l_{ii}, \quad i \in [2, n - 1], j \in [i + 1, n].\end{align}[/math]
There exist block versions of this algorithm; however, here we consider only its “dot” version.
In a number of implementations, the division by the diagonal element [math]l_{ii}[/math] is made in the following two steps: the computation of [math]\frac{1}{l_{ii}}[/math] and, then, the multiplication of the result by the modified values of [math]a_{ji}[/math] . Here we do not consider this computational scheme, since this scheme has worse parallel characteristics than that given above.
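A direct transcription of the dot version above into NumPy might look as follows (a minimal sketch: dense storage, no pivoting or blocking, and it simply raises on a matrix that is not positive definite):

```python
import numpy as np

def cholesky_dot(A):
    """Lower triangular L with A = L L^T (A symmetric positive definite)."""
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        d = A[i, i] - np.dot(L[i, :i], L[i, :i])
        if d <= 0:
            raise ValueError("matrix is not positive definite")
        L[i, i] = np.sqrt(d)                       # l_ii
        for j in range(i + 1, n):
            L[j, i] = (A[j, i] - np.dot(L[j, :i], L[i, :i])) / L[i, i]  # l_ji
    return L

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 5.0, 3.0],
              [2.0, 3.0, 6.0]])
L = cholesky_dot(A)
print(np.allclose(L @ L.T, A))                    # True
print(np.allclose(L, np.linalg.cholesky(A)))      # True
```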
Read more… |
Hi, Can someone provide me some self reading material for Condensed matter theory? I've done QFT previously for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you; however, I think the major challenge of such a simulator is the computational cost. GR calculations, with their highly nonlinear nature, might be more costly than a computation of a protein.
However I can see some ways of approaching it. Recall how Slereah was building some kind of spacetime database; that could be the first step. Next, one might be looking for machine learning techniques to help on the simulation by using the classifications of spacetimes, as machines are known to perform very well on sign problems, as a recent paper has shown
Since GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible that the solution strategy has some relation to the class of spacetime under consideration, which might help heavily reduce the parameters needed to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I always can talk about things in a non joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream has yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurrence dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder: is the space of all coordinate choices larger than the space of all possible moves in Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect that a region where spacetime is frame dragged in the clockwise direction, superimposed on a spacetime that is frame dragged in the anticlockwise direction, will result in a spacetime with no frame drag? (one possible physical scenario where I can envision this occurring may be when two massive rotating objects with opposite angular velocities are on course to merge)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have poor knowledge yet.
So, what I meant by a "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes GWs would interfere just like light wave.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to paint a "silly picture" here: imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, will we see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference space-time would have a flat geometry, and then if we put a spherical object in this region the metric would become Schwarzschild-like.
if**
Pardon, I just spend some naive-phylosophy time here with these discussions**
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found: revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times, as I have observed in this semester at least, there is nothing exciting to do. This system of tortuous panic, followed by a reward, is obviously very satisfying.
My opinion is that I need you Kaumudi to decrease the probability of h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. though back in high school, regardless of code, our teacher taught us to always indent our code to allow easy reading and troubleshooting. We are also taught the 4 spacebar indentation convention
@JohnRennie I wish I could just tab because I am also lazy, but sometimes tab inserts 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if-then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write matlab code online (for free)? Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's pcs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university (the program isn't installed on the pcs in the computer room, but if I connect to the server of the university, which means remotely running another environment, I found an older version of matlab). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc. |
Structure of Atom Heisenberg uncertainty principle Heisenberg's Uncertainty Principle:
\triangle x.\triangle p \geq \frac{h}{4\pi}
Δx = uncertainty in the position Δp = uncertainty in momentum Δp = Δ(mv) = mΔv \triangle x.\triangle v \geq \frac{h}{4\pi m} View the Topic in this Video from 0:40 to 11:55
1. In terms of uncertainty in position, ΔX, and uncertainty in momentum, ΔP, this principle is written as \tt \triangle X.\triangle P \geq \frac{h}{4\pi}
2. In terms of uncertainty in energy, ΔE, and uncertainty in time, Δt, this principle is written as \tt \triangle E.\triangle t \geq \frac{h}{4\pi}
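As a worked instance of relation 1 (a minimal sketch; the electron mass and the 1 Å position uncertainty are illustrative choices):

```python
import math

h = 6.626e-34      # Planck constant, J s
m_e = 9.109e-31    # electron mass, kg
dx = 1e-10         # uncertainty in position: 1 angstrom (illustrative)

dv_min = h / (4 * math.pi * m_e * dx)   # from dx * dv >= h / (4 pi m)
print(f"minimum velocity uncertainty: {dv_min:.2e} m/s")   # ~5.8e5 m/s
```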
Considering the top answer to the question “If xor-ing a one way function with different input, is it still a one way function?”…
The function is no longer one-way.
we build a counterexample in the following way. Assume $g$ is a one-way function that preserves size, and define $f$ on input $w=bx_1x_2$ in the following way, $$f(bx_1x_2) = \begin{cases} g(x_1)\,x_2 & b=0 \\ x_1\, g(x_2) & b=1 \end{cases}$$ (assuming $b\in\{0,1\}$ and $|x_1|=|x_2|$.) It is easy to see that $f$ is also one-way — to invert it, you need to either invert $g$ on the first half or invert $g$ on the second half.
Now we show how to invert $h$. Assume you are given $h(u,v)=Z$, we write it as $h(u,v)= z_1z_2$ with $|z_1|=|z_2|=n$. Then a possible preimage of $Z$ is $$u=0 \,0^n \,\langle g(0^n)\oplus z_2\rangle$$ $$v=1 \, \langle g(0^n)\oplus z_1\rangle \, 0^n$$
because $f(u) = g(0^n)\, \langle g(0^n)\oplus z_2\rangle$ and $f(v) = \langle g(0^n)\oplus z_1\rangle \, g(0^n)$ thus their XOR gives exactly $z_1\,z_2$ as required.
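The preimage construction can be checked mechanically with a toy stand-in for $g$ (a sketch; SHA-256 truncated to the input length plays the size-preserving one-way function, purely for illustration):

```python
import hashlib, os

n = 16  # half-length in bytes

def g(x: bytes) -> bytes:               # toy size-preserving OWF stand-in
    return hashlib.sha256(x).digest()[:len(x)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(p ^ q for p, q in zip(a, b))

def f(w: bytes) -> bytes:               # w = b || x1 || x2
    b, x1, x2 = w[:1], w[1:1 + n], w[1 + n:]
    return g(x1) + x2 if b == b"\x00" else x1 + g(x2)

def h(u: bytes, v: bytes) -> bytes:
    return xor(f(u), f(v))

Z = os.urandom(2 * n)                   # value to invert
z1, z2 = Z[:n], Z[n:]
zero = b"\x00" * n

u = b"\x00" + zero + xor(g(zero), z2)
v = b"\x01" + xor(g(zero), z1) + zero
print(h(u, v) == Z)                     # True: (u, v) is a preimage of Z
```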
Wouldn't this counter-example imply that we've inverted $f$?
Consider the reduction where we take in $f(x_1)$ and $f(x_2)$: then we could compute $f(x_1) \oplus f(x_2)$, invert this to $x_1x_2$, and then we have inverted $f$ as well.
Is the quoted answer correct? If so,
why, given my considerations outlined above? |
Question
Four fair six-sided dice are rolled. The probability that the sum of the results is $22$ can be written as $$\frac{X}{1296}.$$ What is the value of $X$?
My Approach
I simplified it to the equation of the form:
$x_{1}+x_{2}+x_{3}+x_{4}=22, 1\,\,\leq x_{i} \,\,\leq 6,\,\,1\,\,\leq i \,\,\leq 4 $
Solving this equation results in:
$x_{1}+x_{2}+x_{3}+x_{4}=22$
I first removed the restriction $x_{i} \geq 1$ as follows:
$\Rightarrow x_{1}^{'}+1+x_{2}^{'}+1+x_{3}^{'}+1+x_{4}^{'}+1=22$
$\Rightarrow x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$
$\Rightarrow \binom{18+4-1}{18}=1330$
Now I removed the restriction $x_{i} \leq 6$ by calculating the number of bad cases and then subtracting it from $1330$.
Calculating the bad combinations, i.e. those with $x_{i} \geq 7$:
$\Rightarrow x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$
We can distribute $7$ to $2$ of $x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$, i.e. $\binom{4}{2}$ ways.
We can distribute $7$ to $1$ of $x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$, i.e. $\binom{4}{1}$ ways, and then distribute the rest among all of them, i.e.
$$\binom{4}{1} \binom{14}{11}$$
Therefore, the number of bad combinations equals $$\binom{4}{1} \binom{14}{11} - \binom{4}{2}$$
Therefore, the solution should be:
$$1330-\left( \binom{4}{1} \binom{14}{11} - \binom{4}{2}\right)$$
However, I am getting a negative value. What am I doing wrong?
EDIT
I am asking about my approach because, if the question involved a larger number of dice and a higher target sum, then guessing the dice values directly would not work.
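For checking any counting argument on a problem this small, a brute-force enumeration of all $6^4 = 1296$ outcomes is cheap (a minimal sketch):

```python
from itertools import product

# Count the 4-dice rolls whose faces sum to 22.
X = sum(1 for dice in product(range(1, 7), repeat=4) if sum(dice) == 22)
print(X)   # the numerator any correct counting argument must reproduce
```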
how many brilliant users liked the new brilliant look?
Note by Rishabh Jain 4 years, 12 months ago
I think the new Home Page needs to be redone. It never changes, and it seems to change the focus of Brilliant from "challenging problems" to self-guided math education. And where are all the great Wiki pages now? I think if the Home Page is not going to be Community (which now I have to keep clicking every time), it should take us to some portal on Wiki pages, with the best or latest featured. You know, like what happens when we open up a news website. Keeps things lively. Otherwise, it feels like I'm back in school, and I don't want that.
I think it's a mistake, because when people not familiar with Brilliant.Org first click on to see what it's about, they're not going to see anything exciting. First impressions are the most important.
We agree that it's not ideal that the homepage is static. More homepage features are on the way, including featured new problems and wikis.
yeah it's right that brilliant has turned from a "challenging question producing website" to "self-guided math education"
A better suggestion: on opening www.brilliant.org, the community tab should be opened as default. :)
a better option !!
It is Superb!!
It's good. But unfamiliar with it . So it will take some time to adapt according to it. Change is always done for the better. Also I believe that change is the only constant.
That would work great for a cool status..
@Dinesh Chavan is right...Community must be made the default home page @Calvin Lin ..Just a suggestion...since we're very used to it ...
Yes it should be..
Deepanshu and Krishna, your feedback on what should go on the new homepage is welcome on the new discussion happening here.
Also it needs to push harder to click something. It is very uncomfortable. :(
yes, i'm too finding it difficult to use. if i want to open community problems, we used to click on the home tab but now it's the community tab.
I cant find the community problems on android
@Sualeh Asif – Please upgrade your Android app to the latest version in the Google Play store (v1.8). This version has the new Community tab.
@Anton Kriksunov – Yes i found it Thank you
@Anton Kriksunov – When i open someone's profile on the brilliant android app it does not open all the problems posted by him...
@Aman Sharma – Yes, the load more button never works, and similarly with the sets: when I am seeing others' sets I can only see a few (2-4)
@Anton Kriksunov – I have a request to make. Could you please make the Android app more fluid and functional? I mean selecting the options is not as smooth as in other android apps. Plus the loading bar could be redesigned to match the latest material design ui by Google? I mean the entire app could use a lot of material design! I would spend all my nights on the brilliant.org app if this could be included! I hope this is a reasonable request which could benefit the entire wonderful community on this website!
Same here on android
This one is good but the previous one was better. Here the community page should be stored as default under home button.
I happened to visit the site today after about 8 days, and was completely perplexed. I think this model is not that user friendly. According to me, unfamiliar users may find it odd to see problems and interesting notes under the community tab. As pointed out by others, Community may be made the default home page since it, at the beginning itself, introduces the user to lots of questions.
It's just my opinion. I have not considered the long term effects or anything of that sort.
it is not good. the old look was better.
I am looking for a formula (Fourier series) to generate an impulse train waveform - a spike-wave with amplitude and period both $1$ – so that $f(x)$ has value $1$ at $x = 1,2,3,4...$ and $f(x)$ has value $0$ at all non-integer values of $x$.
Someone very helpfully gave me:
$$S = \frac 1 N \sum_{k=0}^{N-1} e ^{j2\pi\frac{kn}{N}}$$
The same equation can be found on this site: Equation for impulse train as sum of complex exponentials. However...
I have two questions:
Is there an equivalent trigonometric function? If so, what is it?
Sadly, my maths A level is such ancient history that I am struggling with what the terms in the above equation mean. Specifically:
$S$ = series, i.e. the equivalent of $f(x)$ - yes?
$e$ = famous irrational number - yes?
$j$ = square root of $-1$ - yes?
$N$ and $n$ ... Now, here I get muddled. 30 years of doing no maths at all has left me less than fluent... I assume the lowercase ($n$) is the period/frequency of the impulse train. So that leaves uppercase ($N$) as... The number on the $x$-axis that we are solving for...? The duration of the signal...? Some factor that compensates for $\pi$ to create integer values...?
Sorry. I know this is basic stuff. But I'd really appreciate some help... |
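On question 1: if the sum is instead taken symmetrically over $k = -M,\ldots,M$ (with $N = 2M+1$), the $\pm k$ terms pair into cosines, giving the equivalent trigonometric form $\frac{1}{N}\left(1 + 2\sum_{k=1}^{M} \cos 2\pi k x\right)$, whose closed form is the Dirichlet kernel $\frac{\sin(N\pi x)}{N \sin(\pi x)}$, equal to $1$ at every integer $x$ and oscillating near $0$ in between. A numeric sketch (the value of $M$ and the $x$ grid are arbitrary choices):

```python
import numpy as np

M = 20
N = 2 * M + 1
x = np.linspace(-2.5, 2.5, 2001)

# Trigonometric form: the +k and -k exponentials pair into cosines.
S = (1 + 2 * sum(np.cos(2 * np.pi * k * x) for k in range(1, M + 1))) / N

# Closed form (Dirichlet kernel), patched at the integers where 0/0 occurs.
with np.errstate(divide="ignore", invalid="ignore"):
    D = np.sin(N * np.pi * x) / (N * np.sin(np.pi * x))
D = np.where(np.isclose(x, np.round(x)), 1.0, D)

print(np.allclose(S, D))        # True: the two forms agree
print(S[np.isclose(x, 1.0)])    # ~1.0 at the integer x = 1
```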
Introduction
Let’s begin with the
definition of gravitational field: The gravitational field at any point P in space is defined as the gravitational force felt by a tiny unit mass placed at P.
So, to visualize the gravitational field, in this room or on a bigger scale such as the whole Solar System, imagine drawing a vector representing the gravitational force on a one kilogram mass at many different points in space, and seeing how the pattern of these vectors varies from one place to another (in the room, of course, they won’t vary much!). We say “a tiny unit mass” because we don’t want the gravitational field from the test mass itself to disturb the system. This is clearly not a problem in discussing planetary and solar gravity.
To build an intuition of what various gravitational fields look like, we’ll examine a sequence of progressively more interesting systems, beginning with a simple point mass and working up to a hollow spherical shell, this last being what we need to understand the Earth’s own gravitational field, both outside and inside the Earth.
Field from a Single Point Mass
This is of course simple: we know this field has strength
GM/r², and points towards the mass—the direction of the attraction. Let's draw it anyway, or, at least, let's draw in a few vectors showing its strength at various points:
This is a rather inadequate representation: there’s a lot of blank space, and, besides, the field attracts in three dimensions, there should be vectors pointing at the mass in the air above (and below) the paper. But the picture does convey the general idea.
A different way to represent a field is to draw “field lines”, curves such that at every point along the curve’s length, its direction is the direction of the field at that point. Of course, for our single mass, the field lines add little insight:
The arrowheads indicate the direction of the force, which points the same way all along the field line. A shortcoming of the field lines picture is that although it can give a good general idea of the field, there is no precise indication of the field’s
strength at any point. However, as is evident in the diagram above, there is a clue: where the lines are closer together, the force is stronger. Obviously, we could put in a spoke-like field line anywhere, but if we want to give an indication of field strength, we'd have to have additional lines equally spaced around the mass.
Gravitational Field for Two Masses
The next simplest case is two equal masses. Let us place them symmetrically above and below the
x-axis:
Recall Newton’s Universal Law of Gravitation states that any two masses have a mutual gravitational attraction \( \dfrac {Gm_1m_2}{r^2} \). A point mass
m = 1 at P will therefore feel gravitational attraction towards both masses M, and a total gravitational field equal to the vector sum of these two forces, illustrated by the red arrow in the figure.
The Principle of Superposition
The fact that the total gravitational field is just given by adding the two vectors together is called the
Principle of Superposition. This may sound really obvious, but in fact it isn't true for every force found in physics: the strong forces between elementary particles don't obey this principle, neither do the strong gravitational fields near black holes. But just adding the forces as vectors works fine for gravity almost everywhere away from black holes, and, as you will find later, for electric and magnetic fields too. Finally, superposition works for any number of masses, not just two: the total gravitational field is the vector sum of the gravitational fields from all the individual masses. Newton used this to prove that the gravitational field outside a solid sphere was the same as if all the mass were at the center by imagining the solid sphere to be composed of many small masses—in effect, doing an integral, as we shall discuss in detail later. He also invoked superposition in calculating the orbit of the Moon precisely, taking into account gravity from both the Earth and the Sun.
Exercise: For the two mass case above, sketch the gravitational field vector at some other points: look first on the x-axis, then away from it. What do the field lines look like for this two mass case? Sketch them in the neighborhood of the origin.
Field Strength at a Point Equidistant from the Two Masses
It is not difficult to find an exact expression for the gravitational field strength from the two equal masses at an equidistant point
P.
Choose the
x, y axes so that the masses lie on the y-axis at (0, a) and (0,- a).
By symmetry, the field at
P must point along the x-axis, so all we have to do is compute the strength of the x-component of the gravitational force from one mass, and double it.
If the distance from the point
P to one of the masses is s, the gravitational force towards that mass has strength \( \dfrac{GM}{s^2} \). This force has a component along the x-axis equal to, \( \dfrac {GM}{s^2}cos \alpha \) where \( \alpha \) is the angle between the line from P to the mass and the x-axis, so the total gravitational force on a small unit mass at P is \( \dfrac {2GM}{s^2}cos\alpha \) directed along the x-axis.
From the diagram, \( cos \alpha = \dfrac {x}{s} \), so the force on a unit mass at
P from the two masses M is \[F = - \dfrac {2GMx}{(x^2 + a^2)^{3/2}} \]
in the
x-direction. Note that the force is exactly zero at the origin, and everywhere else it points towards the origin.
Gravitational Field from a Ring of Mass
Now, as long as we look
only on the x-axis, this identical formula works for a ring of mass 2 M in the y, z plane! It’s just a three-dimensional version of the argument above, and can be visualized by rotating the two-mass diagram above around the x-axis, to give a ring perpendicular to the paper, or by imagining the ring as made up of many beads, and taking the beads in pairs opposite each other.
Bottom line: the field from a ring of total mass M, radius a, at a point P on the axis of the ring distance x from the center of the ring is \[F = - \dfrac {GMx}{(x^2 + a^2)^{3/2}}. \]
*Field Outside a Massive Spherical Shell
This is an optional section: you can safely skip to the result on the last line. In fact, you will learn an easy way to derive this result using Gauss's Theorem when you do Electricity and Magnetism. I just put this section in so you can see that this result can be derived by the straightforward, but quite challenging, method of adding the individual gravitational attractions from all the bits making up the spherical shell.
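That add-up-the-bits strategy is easy to try numerically for the ring formula above (a sketch: the ring is approximated by K point masses whose force contributions are summed; G, M, a, x are arbitrary test values):

```python
import numpy as np

G, M, a, x = 1.0, 2.0, 1.5, 0.7     # arbitrary test values
K = 10_000                          # point masses approximating the ring

phi = 2 * np.pi * np.arange(K) / K
ring = np.stack([np.zeros(K), a * np.cos(phi), a * np.sin(phi)], axis=1)
P = np.array([x, 0.0, 0.0])         # field point on the axis of the ring

vec = ring - P                      # vectors from P towards each bead
d = np.linalg.norm(vec, axis=1)
F = (G * (M / K) * vec / d[:, None]**3).sum(axis=0)

print(F)                                     # y and z components cancel
print(-G * M * x / (x**2 + a**2) ** 1.5)     # matches the x component
```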
What about the gravitational field from a hollow spherical
shell of matter? Such a shell can be envisioned as a stack of rings.
To find the gravitational field at the point
P, we just add the contributions from all the rings in the stack.
In other words, we divide the spherical shell into narrow “zones”: imagine chopping an orange into circular slices by parallel cuts, perpendicular to the axis—but of course our shell is just the
skin of the orange! One such slice gives a ring of skin, corresponding to the surface area between two latitudes, the two parallel lines in the diagram above. Notice from the diagram that this "ring of skin" will have radius \( \alpha \, \sin \theta \), therefore circumference \( 2\pi \alpha \, \sin \theta \) and breadth \(\alpha \, d \theta\), where we're taking \( d \theta \) to be very small. This means that the area of the ring of skin is \[ \text{length} \times \text{breadth} = 2\pi \alpha \sin \theta \cdot \alpha \, d \theta. \]
So, if the shell has mass \( \rho \) per unit area, this ring has mass \( 2\pi a^2 \rho \sin\theta\,d\theta \), and by the ring result above the gravitational force at P from this ring is \[ F = -\dfrac{2G\pi a^2 \rho\, x \sin\theta\,d\theta}{s^3}, \] where s is the distance from P to the ring and x is the distance from P to the plane of the ring, so that \( s^2 = x^2 + (a\sin\theta)^2 \).
Now, to find the total gravitational force at P from the entire shell we have to add the contributions from each of these "rings" which, taken together, make up the shell. In other words, we have to integrate the above expression in \( \theta \) from \( \theta = 0 \) to \( \theta = \pi \).

So the gravitational field is: \[ F = -\int_0^{\pi} \dfrac{2G\pi a^2 \rho\, x \sin\theta\,d\theta}{s^3}. \]
In fact, this is quite a tricky integral: \( \theta \), x and s are all varying! It turns out to be easiest to do by switching variables from \( \theta \) to s.
Label the distance from P to the center of the sphere by r. Then, from the diagram, \( s^2 = r^2 + a^2 - 2ar\cos\theta \), and a, r are constants, so \( s\,ds = ar\sin\theta\,d\theta \), and \[ F = -\int_0^{\pi} \dfrac{2G\pi a^2 \rho\, x \sin\theta\,d\theta}{s^3} = -\int_{r-a}^{r+a} \dfrac{2G\pi a^2 \rho\, x}{s^3} \cdot \dfrac{s\,ds}{ar} = -\dfrac{2G\pi a \rho}{r} \int_{r-a}^{r+a} \dfrac{x\,ds}{s^2}. \]
Now \( x = s\cos\alpha \), where \( \alpha \) here is the angle at P between the axis and the line of length s, and from the diagram \( a^2 = s^2 + r^2 - 2sr\cos\alpha \), so \( x = \dfrac{s^2 + r^2 - a^2}{2r} \), and, writing \( 4\pi a^2 \rho = M \) for the total mass of the shell,
\[ F = -\dfrac{GM}{4ar^2} \int_{r-a}^{r+a} \left( 1 + \dfrac{r^2 - a^2}{s^2} \right) ds = -\dfrac{GM}{4ar^2} \left( 2a + (r^2 - a^2)\left( \dfrac{1}{r-a} - \dfrac{1}{r+a} \right) \right) = -\dfrac{GM}{r^2} \]
The derivation was rather lengthy, but the answer is simple:

The gravitational field outside a uniform spherical shell is \( GM/r^2 \), directed towards the center.
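The \( \theta \)-integral can also be checked numerically before the change of variables. A minimal Python sketch using a simple midpoint rule; the sample values are arbitrary and not from the text.

import math

G, rho, a = 1.0, 1.0, 1.0      # hypothetical sample values
M = 4 * math.pi * a**2 * rho   # total mass of the shell

def shell_force(r, n=20000):
    """Midpoint-rule sum of the ring contributions for a point at distance r > a."""
    total, dtheta = 0.0, math.pi / n
    for k in range(n):
        theta = (k + 0.5) * dtheta
        s = math.sqrt(r**2 + a**2 - 2 * a * r * math.cos(theta))  # P-to-ring distance
        x = r - a * math.cos(theta)  # distance from P to the plane of the ring
        total += -2 * G * math.pi * a**2 * rho * x * math.sin(theta) / s**3 * dtheta
    return total

r = 2.5
print(shell_force(r), -G * M / r**2)  # should agree to several digits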
And there's a bonus: for the ring, we only found the field along the axis, but for the spherical shell, once we've found it in one direction, the whole problem is solved: by symmetry, the field from the spherical shell must be the same in all directions.

Field Outside a Solid Sphere
Once we know the gravitational field outside a shell of matter is the same as if all the mass were at a point at the center, it's easy to find the field outside a solid sphere: that's just a nested set of shells, like spherical Russian dolls. Adding them up:

The gravitational field outside a uniform sphere is \( GM/r^2 \), directed towards the center.
There's an added bonus: since we found this result by adding uniform spherical shells, it is still true if the shells have different densities, provided the density of each shell is the same in all directions. The inner shells could be much denser than the outer ones, as in fact is the case for the Earth.
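A small numerical sketch of this "Russian dolls" argument, with made-up shell radii and densities: since each shell acts as a point mass at the center, only the total mass enters the field outside.

import math

G = 1.0  # hypothetical sample value
# (radius, surface density) for each shell; the numbers are arbitrary
shells = [(0.2, 9.0), (0.5, 4.0), (0.8, 1.5), (1.0, 0.3)]

r = 3.0  # field point, outside every shell
total_mass = sum(4 * math.pi * a**2 * sigma for a, sigma in shells)

# Superpose the point-mass fields of the individual shells.
field_from_shells = sum(-G * 4 * math.pi * a**2 * sigma / r**2 for a, sigma in shells)
print(field_from_shells, -G * total_mass / r**2)  # equal, whatever the density profile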
Field Inside a Spherical Shell

This turns out to be surprisingly simple! We imagine the shell to be very thin, with a mass density \( \rho \) kg per square meter of surface. Begin by drawing a two-way cone radiating out from the point P, so that it includes two small areas of the shell on opposite sides: these two areas will exert gravitational attraction on a mass at P in opposite directions. It turns out that they exactly cancel.
This is because the ratio of the areas \( A_1 \) and \( A_2 \) at distances \( r_1 \) and \( r_2 \) is given by \( \dfrac{A_1}{A_2} = \dfrac{r_1^2}{r_2^2} \): since the cones have the same angle, if one cone has twice the height of the other, its base will have twice the diameter, and therefore four times the area. Since the masses of the bits of the shell are proportional to the areas, the ratio of the masses of the cone bases is also \( \dfrac{r_1^2}{r_2^2} \). But the gravitational attraction at P from a small mass m at distance r goes as \( \dfrac{Gm}{r^2} \), and that \( r^2 \) cancels the one in the areas, so the two opposite areas exert equal and opposite gravitational forces at P.
In fact, the gravitational pull from every small part of the shell is balanced by a part on the opposite side; you just have to construct a lot of cones going through P to see this. (There is one slightly tricky point: the line from P to the sphere's surface will in general cut the surface at an angle. However, it will cut the opposite bit of sphere at the same angle, because any line passing through a sphere hits the two surfaces at the same angle, so the effects balance, and the base areas of the two opposite small cones are still in the ratio of the squares of the distances \( r_1, r_2 \).)
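The cancellation can be confirmed with the same ring decomposition used for the exterior field, now with the field point inside the shell. A sketch with arbitrary sample values; any r < a should give essentially zero.

import math

G, rho, a = 1.0, 1.0, 1.0  # hypothetical sample values

def shell_force_inside(r, n=20000):
    """Ring-by-ring sum for a point at distance r < a from the center."""
    total, dtheta = 0.0, math.pi / n
    for k in range(n):
        theta = (k + 0.5) * dtheta
        s = math.sqrt(r**2 + a**2 - 2 * a * r * math.cos(theta))
        # signed axial distance: rings on opposite sides of P pull opposite ways
        x = r - a * math.cos(theta)
        total += -2 * G * math.pi * a**2 * rho * x * math.sin(theta) / s**3 * dtheta
    return total

print(shell_force_inside(0.6))  # ~0, up to the quadrature error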
Field Inside a Sphere: How Does g Vary on Going Down a Mine?

This is a practical application of the results for shells. On going down a mine, if we imagine the Earth to be made up of shells, we will be inside a shell of thickness equal to the depth of the mine, so we will feel no net gravity from that part of the Earth. However, we will be closer to the remaining shells, so the force from them will be intensified.
Suppose we descend from the Earth's radius \( r_E \) to a point at distance r from the center of the Earth. What fraction of the Earth's mass is still attracting us towards the center? Let's make life simple for now and assume the Earth's density is uniform, call it \( \rho \) kg per cubic meter.

Then the fraction of the Earth's mass that is still attracting us (because it's closer to the center than we are: inside the red sphere in the diagram) is \( \dfrac{V_{\text{red}}}{V_{\text{blue}}} = \dfrac{\frac{4}{3}\pi r^3}{\frac{4}{3}\pi r_E^3} = \dfrac{r^3}{r_E^3} \).
The gravitational attraction from this mass at the bottom of the mine, at distance r from the center of the Earth, is proportional to \( \text{mass}/r^2 \). We have just seen that the mass is itself proportional to \( r^3 \), so the actual gravitational force felt must be proportional to \( \dfrac{r^3}{r^2} = r \).
That is to say, the gravitational force on going down inside the Earth is linearly proportional to the distance from the center. Since we already know that the gravitational force on a mass m at the Earth's surface \( r = r_E \) is mg, it follows immediately that in the mine the gravitational force must be \[ F = \dfrac{mgr}{r_E}. \]
So there's no force at all at the center of the Earth; as we would expect, the masses are attracting equally in all directions.
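Putting the inside and outside results together gives the whole profile of g for a uniform-density Earth. A short sketch; the surface values are illustrative round numbers.

g_surface, r_E = 9.8, 6.371e6  # illustrative values (m/s^2, m)

def g(r):
    """Gravitational acceleration at distance r from the center (uniform density)."""
    if r <= r_E:
        return g_surface * r / r_E       # linear inside: the shells above cancel
    return g_surface * (r_E / r) ** 2    # inverse-square outside

for r in (0.0, 0.5 * r_E, r_E, 2 * r_E):
    print(f"r = {r:.3e} m, g = {g(r):.2f} m/s^2")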
I am trying to make my first LaTeX file and have been reading up on the syntax for a bit, but there is one thing I haven't been able to figure out and can't seem to find anything about online, so I thought I might just ask. :)
How is it possible to place an image next to an equation?
I tried wrapping the figure, putting it into a table, and similar approaches, but I can't get it working; of course, I may just have used the syntax wrong, since I'm new to the subject. :) It's for a summary for my exams and the pages are limited, so I would like to save some space, plus it would look a lot better.

Additionally, if anyone could give me a link to a tutorial that is a bit more extensive than just the basics, I would be very thankful as well!
\documentclass[11pt, a4paper]{article}
\usepackage[a4paper, left=5mm, right=5mm, top=0mm, bottom=5mm]{geometry}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{graphicx}
\usepackage{wrapfig} % not needed for the tabular approach below, but harmless

\title{\huge\textbf{Lineare Algebra}}
\author{David Wright}
\date{\today}

\begin{document}
\maketitle

% \maketitle may only be used once, for the title; headings should use
% (starred) sectioning commands instead.
\section*{Vektorgeometrie}

\subsection*{Skalarprodukt}
Das Skalarprodukt ergibt die L\"{a}nge der Projektion von $\vec{a}$ auf $\vec{b}$.

% Side-by-side layout: math in the left column, image in the right.
% align* cannot be used inside a tabular cell; aligned (inside $...$) can.
\noindent
\begin{tabular}{p{10cm}l}
$\begin{aligned}
  \vec{a} \bullet \vec{b} &= |\vec{a}| \cdot |\vec{b}| \cdot \cos(\phi)\\
                          &= a_x \cdot b_x + a_y \cdot b_y + a_z \cdot b_z\\
  \cos(\phi) &= \frac{\vec{a} \bullet \vec{b}}{|\vec{a}| \cdot |\vec{b}|}
              = \frac{a_x \cdot b_x + a_y \cdot b_y + a_z \cdot b_z}
                     {\sqrt{a_x^2 + a_y^2 + a_z^2}\sqrt{b_x^2 + b_y^2 + b_z^2}}
\end{aligned}$
&
\includegraphics[keepaspectratio, scale=1]{dotp.jpg}
\end{tabular}

\vspace{5mm}
\subsection*{Vektorprodukt}
TextTextTextTextTextTextTextTextTextText
\end{document}
This is an exercise from Apostol (p.285) that I'm having trouble with (in fact, I'm having trouble with the whole section):
Prove that $\displaystyle{\int_0^1 \frac{1+x^{30}}{1+x^{60}}\,dx = 1 + \frac{c}{31}}, \qquad \text{where } 0 < c < 1.$
This comes from the section of Exercises following Taylor expansions, and Taylor's formula with error term. It seems like the approach should involve getting this as the error term of the Taylor expansion of a function that we know something about? I'm having trouble making much more progress than that.
Updated Progress: From the definition of the error term of the Taylor expansion we have:
$$E_n (x) = \frac{1}{n!} \int_a^x (x-t)^n f^{(n+1)} (t) dt$$
Alternatively, we may express this in the Lagrange form of the error term (derived from the weighted mean value theorem of integrals):
$$E_n (x) = \frac{f^{(n+1)}(c)}{(n+1)!} (x-a)^{n+1} \qquad \text{where } a < c < x.$$
Now, let $a = 0$, $x = 1$, $n = 30$, $f^{(n+1)}(t) = \dfrac{1+t^{30}}{(1+t^{60})(1-t)^{30}}$, and set the two alternative expressions of $E_n(x)$ equal to each other:
$$\begin{align*} \frac{1}{n!} \int_a^x (x-t)^n f^{(n+1)}(t)\,dt &= \frac{f^{(n+1)}(c)}{(n+1)!} (x-a)^{n+1}\\ \frac{1}{30!} \int_0^1 (1-t)^{30}\, \frac{1+t^{30}}{(1+t^{60})(1-t)^{30}}\,dt &= \frac{1}{31!}\, \frac{1+c^{30}}{(1+c^{60})(1-c)^{30}}\\ \int_0^1 \frac{1+t^{30}}{1+t^{60}}\,dt &= \frac{1}{31}\, \frac{1+c^{30}}{(1+c^{60})(1-c)^{30}}. \end{align*}$$
I cannot seem to get from here to $=1 + \dfrac{c}{31}$.
If someone can show me pretty explicitly, step by step, how to tackle this problem, I'd appreciate it. Dealing with the error term in Taylor expansions is giving me quite a bit of trouble, so hopefully seeing a full solution will help clear things up.
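One route that avoids juggling the remainder formula directly (a sketch, and possibly not the approach Apostol intended) starts from the algebraic identity
$$\frac{1+x^{30}}{1+x^{60}} = 1 + x^{30}\cdot\frac{1-x^{30}}{1+x^{60}},$$
so that
$$\int_0^1 \frac{1+x^{30}}{1+x^{60}}\,dx = 1 + \int_0^1 x^{30} g(x)\,dx, \qquad g(x) = \frac{1-x^{30}}{1+x^{60}}.$$
Since $0 < g(x) < 1$ on $(0,1)$ and $\int_0^1 x^{30}\,dx = \frac{1}{31}$, the remaining integral is strictly between $0$ and $\frac{1}{31}$, so $c = 31\int_0^1 x^{30} g(x)\,dx$ satisfies $0 < c < 1$.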
A problem in section 1.3 of Hatcher's Algebraic Topology is
Let $\tilde{X}$ and $\tilde{Y}$ be simply-connected covering spaces of the path-connected, locally path-connected spaces $X$ and $Y$. Show that if $X \simeq Y$ then $\tilde{X} \simeq \tilde{Y}$. [Exercise 11 in Chapter 0 may be helpful.]
Exercise 11 in chapter 0 says
Show that $f:X \to Y$ is a homotopy equivalence if there exist maps $g, h:Y \to X$ such that $fg \simeq 1$ and $hf \simeq 1$. More generally, show that $f$ is a homotopy equivalence if $fg$ and $hf$ are homotopy equivalences.
What I have so far: Given universal covering maps $p: \tilde{X} \to X$ and $q: \tilde{Y} \to Y$, and homotopy inverses $f:X \to Y$ and $g: Y \to X$, we can find a lift $\tilde{f}: \tilde{X} \to \tilde{Y}$ such that $q\tilde{f} = fp$. (In fact, I think there are as many such $\tilde{f}$ as there are elements of $q^{-1}(y)$ for any basepoint $y \in Y$). Similarly we can find $\tilde{g}$ such that $gq = p\tilde{g}$.
From the homotopy $gf \simeq 1$ we have a unique lift of a homotopy $p\tilde{g}\tilde{f} = gfp \simeq p$ starting at $\tilde{g}\tilde{f}$, but how do we know it ends at $1_{\tilde{X}}$?
I notice I haven't used exercise 11...
I'm guessing these are unbased homotopies, since that's generally how Hatcher uses the term.
It seems as if $$\lim_{x\to 0^+} \sqrt{x+\sqrt[3]{x+\sqrt[4]{\cdots}}}=1$$
I really am at a loss for a proof here. This doesn't come from anywhere in particular, just curiosity. Graphing supports the result fairly well.
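Numerically, one can truncate the tower at the $m$-th root and fold it up from the inside. A quick Python sketch (the function name and the cutoff $m$ are arbitrary choices):

def nested_radical(x, m=60):
    """Evaluate sqrt(x + cbrt(x + ... + x**(1/m))) by folding from the inside out."""
    t = x ** (1.0 / m)             # innermost term
    for n in range(m - 1, 1, -1):  # at level n the term is (x + inner)**(1/n)
        t = (x + t) ** (1.0 / n)
    return t

for x in (1e-1, 1e-3, 1e-6):
    print(x, nested_radical(x))    # the values approach 1 as x -> 0+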
For any $2 \le n \le m$, let $\phi_{n,m}(x) = \sqrt[n]{x + \sqrt[n+1]{x + \sqrt[n+2]{x + \cdots \sqrt[m]{x}}}}$. I will interpret the expression we have as following limit.
$$\sqrt{x + \sqrt[3]{x + \sqrt[4]{x + \cdots }}}\; = \phi_{2,\infty}(x) \stackrel{def}{=}\;\lim_{m\to\infty} \phi_{2,m}(x)$$
For any $x \in (0,1)$, we have $\lim\limits_{m\to\infty}(1-x)^m = 0$. This implies the existence of an $N$ so that for all $m > N$, we have
$$(1-x)^m < x \implies 1 - x < \sqrt[m]{x} \implies \phi_{m-1,m}(x) = \sqrt[m-1]{x + \sqrt[m]{x}} > 1$$ It is clear for such $m$, we will have $\phi_{2,m}(x) \ge 1$.
Recall for any $k > 1$ and $t > 0$, $\sqrt[k]{1 + t} < 1 + \frac{t}{k}$.
Starting from $\phi_{m,m}(x) = \sqrt[m]{x} \le 1$, we have
$$\begin{align} & \phi_{m-1,m}(x) = \sqrt[m-1]{x + \phi_{m,m}(x)} \le \sqrt[m-1]{x + 1} \le 1 + \frac{x}{m-1}\\ \implies & \phi_{m-2,m}(x) = \sqrt[m-2]{x + \phi_{m-1,m}(x)} \le \sqrt[m-2]{x + 1 + \frac{x}{m-1}} \le 1 + \frac{1}{m-2}\left(1 + \frac{1}{m-1}\right)x\\ \implies & \phi_{m-3,m}(x) = \sqrt[m-3]{x + \phi_{m-2,m}(x)} \le 1 + \frac{1}{m-3}\left(1 + \frac{1}{m-2}\left(1 + \frac{1}{m-1}\right)\right)x\\ & \vdots\\ \implies & \phi_{2,m}(x) \le 1 + \frac12\left( 1 + \frac13\left(1 + \cdots \left(1 + \frac{1}{m-1}\right)\right)\right)x \le 1 + (e-2)x \end{align} $$
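The nested coefficient in the last line telescopes to $\frac{1}{2!} + \frac{1}{3!} + \cdots = e - 2$, which can be sanity-checked in a few lines (an illustrative sketch):

import math

s, term = 0.0, 1.0
for n in range(2, 20):
    term /= n   # term is now 1/n!
    s += term
print(s, math.e - 2)  # the partial sums converge to e - 2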
Notice that for fixed $x$, viewed as a sequence in $m$, $\phi_{2,m}(x)$ is monotonically increasing. By the arguments above, this sequence is ultimately sandwiched between $1$ and $1 + (e-2)x$. As a result, $\phi_{2,\infty}(x)$ is defined for this $x$ and satisfies
$$1 \le \phi_{2,\infty}(x) \le 1 + (e-2) x$$
Taking $x \to 0^{+}$, we get
$$1 \le \liminf_{x\to 0^+} \phi_{2,\infty}(x) \le \limsup_{x\to 0^+}\phi_{2,\infty}(x) \le \limsup_{x\to 0^+}(1 + (e-2)x) = 1$$ This implies that $\lim\limits_{x\to 0^+} \phi_{2,\infty}(x)$ exists and is equal to $1$.
Edit: there was a silly mistake in the original post; this only proves an upper bound (on the limit, provided it exists), not the actual limit.
For $x > 0$ you have, for any integers $0 \leq n \leq m$, $$1 \leq x^{\frac{1}{m}} \leq x^{\frac{1}{n}}.$$ Denoting your expression $f(x)$ (I'm sweeping under the rug the question of showing it is well-defined for $x > 0$, but that should be the very first step; you are dealing with an infinite notation, after all): $$\sqrt{x} \leq f(x) \leq \sqrt{x+\sqrt{x+\sqrt{x+\dots}}} \stackrel{\rm def}{=} g(x)$$ (where we used the above remark for the upper bound). By squeezing, it is sufficient to show that $g(x)\xrightarrow[x\to0^+]{} 1$
[Not true -- one obviously needs a lower bound that also goes to 1]. But note that $g$ satisfies (under the same caveat -- how is it well-defined?) the functional equation $$g(x)^2-x = g(x)\qquad\forall x > 0.$$ Solving the quadratic equation (in $g(x)$), we get $$g(x) = \frac{1\pm \sqrt{1+4x}}{2}.$$ Since one must have $g>0$ (by its definition as a square root), we can eliminate the spurious solution, getting $$g(x) = \frac{1+ \sqrt{1+4x}}{2}, \quad\forall x >0.$$ It only remains to show that $\lim_{x\to 0^+} g(x) = 1$, which is immediate.
Consider a body attached to a horizontal spring and resting on a surface inclined at an angle $\theta$ from the ground.

The spring constant is $k$. Initially the spring was kept at its natural length while the body was held still by some external agent. When the external agent was removed, the body slid $x$ units down the inclined plane to reach equilibrium. The coefficient of (kinetic) friction between the body and the surface is $\mu$.
To solve this problem, I can use two methods:
1. Work-Energy Theorem: This yields $$0=mg(x\sin\theta)-\frac12kx^2-\mu (mgx\cos\theta)$$

2. Equating forces at equilibrium: This yields $$kx+\mu mg\cos\theta-mg\sin\theta=0$$ (along the inclined plane)
On solving the equations, the two methods give different answers, while they should give the same. Is there anything I am missing here? Please help me solve this problem.
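Solving each equation for $x$ makes the disagreement concrete: the two answers differ by exactly a factor of two (the work-energy $x$ is where the body momentarily stops, which need not be the force-balance point). A quick sketch with made-up numbers:

import math

m, g, k = 1.0, 9.8, 100.0  # hypothetical sample values
theta, mu = math.radians(30), 0.2

drive = m * g * (math.sin(theta) - mu * math.cos(theta))  # net downhill force at x = 0

x_equilibrium = drive / k      # from  kx + mu*m*g*cos(theta) - m*g*sin(theta) = 0
x_work_energy = 2 * drive / k  # from  m*g*x*sin(theta) - k*x**2/2 - mu*m*g*x*cos(theta) = 0

print(x_equilibrium, x_work_energy, x_work_energy / x_equilibrium)  # ratio is exactly 2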