Time limit: 1 second · Memory limit: 256 MB
In the study of social networks, we commonly use graph theory to explain certain social phenomena.
Let's look at a problem regarding this. There are
n people in a social group. Between people, there are different levels of relationships. We represent this relationship network using an undirected graph with
n nodes. If two different people are familiar with each other, then there will exist an undirected edge between their corresponding nodes on the graph. Additionally, the edge will have a positive weight
c – the smaller the value of
c, the closer the relationship between the two people.
We can use the shortest path between the corresponding nodes of two people
s and
t to measure the closeness of their relationship. Note that other nodes on the shortest path provide some level of benefit to the relationship between
s and
t, and that these nodes have a certain level of importance with respect to
s and
t's relationship. Through analysis of the shortest paths passing through node
v, we can measure
v's importance level to the entire social network.
Observing that there may be multiple shortest paths between two nodes, we revise our definition of the importance level as follows. Let
$C_{s,t}$ denote the number of shortest paths between nodes $s$ and $t$, and let $C_{s,t}(v)$ denote the number of those shortest paths that pass through node $v$. Then
$I(v) = \sum_{s \neq v, t \neq v}{\frac{C_{s, t}(v)}{C_{s, t}}}$
Since the graph is guaranteed to be connected, $C_{s,t} > 0$ for every pair of nodes, so $I(v)$ is always well defined.
Given such a weighted, undirected graph representing the social network, please calculate the importance level of each node.
The first line of input contains two integers
n and
m, representing the number of nodes and the number of undirected edges in the social network. The nodes in the graph are numbered consecutively from 1 to
n.
For the next
m lines, each line contains three integers
a,
b, and
c describing an undirected edge connecting nodes
a and
b with weight
c. Note that there will be at most one undirected edge between any pair of nodes, and the graph contains no self-loops (edges whose two endpoints are the same node).
Output
n lines, each line containing a single real number, accurate to 3 digits after the decimal point. Line
i represents the importance of node
i to the social network. Your outputted numbers will be considered correct if the absolute differences between them and the correct values do not exceed 0.001.
For 50% of the test cases:
n ≤ 10,
m ≤ 45
For 100% of the test cases:
n ≤ 100,
m ≤ 4500, and the weight
c of any edge will be a positive integer satisfying 1 ≤
c ≤ 1000.
It is guaranteed that the input describes a connected, undirected graph, and that the number of shortest paths between any two nodes will not exceed $10^{10}$.

Sample input:
4 4
1 2 1
2 3 1
3 4 1
4 1 1

Sample output:
1.000
1.000
1.000
1.000
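The sample above can be reproduced with a straightforward sketch (not an official reference solution): Floyd–Warshall with shortest-path counting, then summing path fractions over ordered pairs. Within the stated limits ($n \le 100$) the $O(n^3)$ passes are cheap, and Python integers absorb counts up to $10^{10}$ without overflow.

```python
# Sketch: Floyd-Warshall with shortest-path counting, then I(v) as the sum,
# over ordered pairs (s, t) with s, t != v, of the fraction of s-t shortest
# paths passing through v.
def importance(n, edges):
    INF = float("inf")
    dist = [[INF] * n for _ in range(n)]
    cnt = [[0] * n for _ in range(n)]
    for i in range(n):
        dist[i][i], cnt[i][i] = 0, 1
    for a, b, c in edges:                      # 0-based endpoints, weight c
        dist[a][b] = dist[b][a] = c
        cnt[a][b] = cnt[b][a] = 1
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d = dist[i][k] + dist[k][j]
                if d < dist[i][j]:
                    dist[i][j], cnt[i][j] = d, cnt[i][k] * cnt[k][j]
                elif d == dist[i][j] and k not in (i, j):
                    # k is a new intermediate node on equally short paths
                    cnt[i][j] += cnt[i][k] * cnt[k][j]
    imp = [0.0] * n
    for v in range(n):
        for s in range(n):
            for t in range(n):
                if v in (s, t) or s == t:
                    continue
                # v lies on some s-t shortest path iff distances add up
                if dist[s][v] + dist[v][t] == dist[s][t]:
                    imp[v] += cnt[s][v] * cnt[v][t] / cnt[s][t]
    return imp

# Sample from the statement: a 4-cycle with unit weights
print(importance(4, [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]))
# → [1.0, 1.0, 1.0, 1.0], matching the sample output
```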
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Name: ______________________________
Section: _____________________________
Student ID#:__________________________
In chemistry and physics, selection rules define the transition probability from one eigenstate to another eigenstate. In this Workgroup activity, we are going to discuss the transition moment, which is the key to understanding intrinsic transition probabilities. Selection rules are commonly divided into electronic, vibrational, and rotational spectroscopy.
Transition Moment Integrals Couple Eigenstates
In an atom or molecule, an electromagnetic wave (for example, visible light) can induce an oscillating electric or magnetic moment. The amplitude of this (electric or magnetic) moment is called the transition moment and the
probability of promoting (or demoting) a molecule from one eigenstate \(|Ψ_1 \rangle \) to a different eigenstate \(|Ψ_2 \rangle \) is proportional to \(| \vec{M}_{21} |^2\), where \(\vec{M}_{21}\) is called the transition dipole moment, or transition moment, from \(|Ψ_1 \rangle \) to \(|Ψ_2 \rangle \):
\[\vec{M}_{21}= \langle \Psi_2 | \vec{\mu} | \Psi_1 \rangle \tag{transition moment integral}\]
where \(\vec{\mu}\) is the electric dipole moment operator. For a system of \(n\) particles with charges \(q_n\), the electric dipole moment operator is:
\[ \vec{\mu}=\sum_{n}q_n\vec{x}_n \tag{electric dipole moment operator}\]
where \(\vec{x}_{n}\) is the position vector operator of particle \(n\).
If \(| \vec{M}_{21} |\) is large, then the transition has a high probability of occurring (i.e., of being observed in a spectrum). If \(| \vec{M}_{21} |\) is small, then the transition has a weak probability of occurring. If \(| \vec{M}_{21} |\) is zero, then the transition has zero probability of occurring.
Symmetry to the Rescue
To evaluate \(| \vec{M}_{21} |\), we need to know the functional forms of \(|Ψ_1 \rangle \), \(|Ψ_2 \rangle \), and \(\vec{\mu} \), which results in an integral that is difficult to solve numerically or analytically. However, symmetry can be used to decide whether \(| \vec{M}_{21} |\) must be zero.

Microwave Spectroscopy Selection Rules
The solutions to the rigid rotor Schrödinger equation are
spherical harmonic functions (see Table M4).
\[ | \psi _{l, m_l } (\theta , \phi) \rangle = Y^{m_l}_l (\theta , \phi)\]
These depend upon the two variables \(\theta\), and \(\phi \) with two quantum numbers \(l\), and \(m_l\). These are the angular part of atomic orbitals that have been discussed before (see below).
[Figure: the spherical harmonics as commonly displayed, sorted by increasing energy and aligned for symmetry.]

Q1
Evaluate this integral
\[ \langle Y^{m_l'}_{l'} (\theta , \phi) | Y^{m_l}_l (\theta , \phi) \rangle \]
What is its value if \(m_l' \neq m_l\) or \(l' \neq l\)? What is its value if \(m_l' = m_l\) and \(l' = l\)?
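The Q1 integral can be sanity-checked numerically for a couple of sample harmonics; below is a small sketch using the explicit \(m_l = 0\) harmonics \(Y_1^0\) and \(Y_2^0\) (the function names and the simple quadrature are illustrative choices, not part of the worksheet).

```python
import math

# Explicit m_l = 0 spherical harmonics (real, phi-independent)
def Y10(theta):
    return 0.5 * math.sqrt(3 / math.pi) * math.cos(theta)

def Y20(theta):
    return 0.25 * math.sqrt(5 / math.pi) * (3 * math.cos(theta) ** 2 - 1)

def overlap(f, g, n=20000):
    # ∫0^π f(θ) g(θ) sinθ dθ  ×  ∫0^2π dφ   (φ integral is trivial for m_l = 0)
    h = math.pi / n
    s = sum(f(i * h) * g(i * h) * math.sin(i * h) for i in range(n + 1))
    return 2 * math.pi * s * h

print(round(overlap(Y10, Y10), 3))  # normalization: ≈ 1.0
print(round(overlap(Y20, Y10), 3))  # orthogonality: ≈ 0.0
```

The same check with any other pair illustrates the general pattern asked about in Q1: the overlap vanishes unless both quantum numbers match.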
Q2

Vibrational Spectroscopy Selection Rules
The harmonic oscillator wavefunctions are
\[ | \psi(r) \rangle = N_vH_v(\alpha^{1/2}r) e^{-\alpha r^2/2}\]
where \(H_v(\alpha^{1/2}r)\) is a Hermite polynomial and \(\alpha = (km/\hbar^2)^{1/2}\).
This is a follow-up question to
How to uniformly sample vertices from a large graph with given distance from a fixed vertex?.
Suppose I took a set $B$ of $n$ samples from a large but finite universe $U\supset B$. Each element $x\in U$ has a bias value $b(x)>0$ such that for any elements $x,y\in U$ we have $\frac{P[x\in B]}{P[y\in B]}=\frac{b(x)}{b(y)}$, i.e. $b(x)$ is proportional to $P[x\in B]$.
Knowing $b(x)$ for all $x\in B$, I would like to take a subsample $S\subset B$ of size $|S|=\frac{n}{c}$ for some $c>0$ such that $S$ is a uniform random sample of $U$. To achieve this, my idea was to take a weighted random sample with weights $w(x)=b(x)^{-1}$ from $B$, cancelling out the existing bias.
You may assume that $n$ is reasonably large (in my case $n\ge 1000000$) and that it can be taken as large as needed.
Does this method work? What value for $c$ can I choose to not run into artifacts from this method? If the method does not work, is there a different way to achieve the desired result? |
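A quick empirical sanity check of the inverse-weight idea on a toy universe (the bias values are illustrative; the subsample here is drawn with replacement, which sidesteps the without-replacement subtleties behind the choice of $c$):

```python
import random
from collections import Counter

rng = random.Random(0)
U = list(range(10))                      # toy universe
b = {x: x + 1 for x in U}                # illustrative bias values b(x) > 0

# Biased sample B: P[x in B] proportional to b(x)
B = rng.choices(U, weights=[b[x] for x in U], k=200_000)

# Proposed debiasing: weighted subsample with weights w(x) = 1/b(x)
S = rng.choices(B, weights=[1.0 / b[x] for x in B], k=50_000)

freq = Counter(S)
for x in U:
    print(x, round(freq[x] / len(S), 3))  # each frequency should be near 0.1
```

With replacement, the per-element probability in S is proportional to (copies of x in B) × 1/b(x) ∝ b(x)/b(x), i.e. constant, so the subsample comes out uniform up to sampling noise.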
1. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s=13 TeV with the ATLAS detector
European Physical Journal C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4
Journal Article
2. Measurement of the ZZ production cross section in proton-proton collisions at √s = 8 TeV using the ZZ → ℓ−ℓ+ℓ′−ℓ′+ and ZZ → ℓ−ℓ+νν̄ decay channels with the ATLAS detector
Journal of High Energy Physics, ISSN 1126-6708, 2017, Volume 2017, Issue 1, pp. 1 - 53
A measurement of the ZZ production cross section in the ℓ−ℓ+ℓ′ −ℓ′ + and ℓ−ℓ+νν¯ channels (ℓ = e, μ) in proton-proton collisions at s=8TeV at the Large Hadron...
Hadron-Hadron scattering (experiments) | Fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Subatomär fysik | Natural Sciences
Journal Article
3. Search for new resonances decaying to a W or Z boson and a Higgs boson in the ℓ+ℓ−bb̄, ℓνbb̄, and νν̄bb̄ channels with pp collisions at √s=13 TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 02/2017, Volume 765, Issue C, pp. 32 - 52
Journal Article
4. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton–proton collisions at √s = 13 TeV with the ATLAS detector
The European Physical Journal. C, Particles and Fields, ISSN 1434-6044, 04/2018, Volume 78, Issue 4, pp. 1 - 34
Journal Article
5. Search for heavy ZZ resonances in the $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$ final states using proton–proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector
The European Physical Journal C, ISSN 1434-6044, 4/2018, Volume 78, Issue 4, pp. 1 - 34
A search for heavy resonances decaying into a pair of $Z$ bosons leading to $\ell^+\ell^-\ell^+\ell^-$ and $\ell^+\ell^-\nu\bar{\nu}$...
Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology
Journal Article
6. ZZ → ℓ+ℓ−ℓ′+ℓ′− cross-section measurements and search for anomalous triple gauge couplings in 13 TeV pp collisions with the ATLAS detector
PHYSICAL REVIEW D, ISSN 2470-0010, 02/2018, Volume 97, Issue 3
Measurements of ZZ production in the ℓ+ℓ−ℓ′+ℓ′− channel in proton-proton collisions at 13 TeV center-of-mass energy at the Large Hadron Collider are...
PARTON DISTRIBUTIONS | EVENTS | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Particle data analysis | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
7. Measurement of exclusive γγ→ℓ+ℓ− production in proton–proton collisions at s=7 TeV with the ATLAS detector
Physics Letters B, ISSN 0370-2693, 10/2015, Volume 749, Issue C, pp. 242 - 261
Journal Article
8. Measurement of the ZZ production cross section in proton-proton collisions at $\sqrt{s}=8$ TeV using the $ZZ \to \ell^-\ell^+\ell'^-\ell'^+$ and $ZZ \to \ell^-\ell^+\nu\bar{\nu}$ decay channels with the ATLAS detector
Journal of High Energy Physics, ISSN 1029-8479, 1/2017, Volume 2017, Issue 1, pp. 1 - 53
A measurement of the ZZ production cross section in the $\ell^-\ell^+\ell'^-\ell'^+$ and $\ell^-\ell^+\nu\bar{\nu}$ channels (ℓ = e, μ) in...
Quantum Physics | Quantum Field Theories, String Theory | Hadron-Hadron scattering (experiments) | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Nuclear Experiment
Journal Article
9. Measurement of event-shape observables in Z→ℓ+ℓ− events in pp collisions at √s = 7 TeV with the ATLAS detector at the LHC
European Physical Journal C, ISSN 1434-6044, 2016, Volume 76, Issue 7, pp. 1 - 40
Journal Article
10. Search for heavy ZZ resonances in the ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states using proton-proton collisions at √s=13 TeV with the ATLAS detector
EUROPEAN PHYSICAL JOURNAL C, ISSN 1434-6044, 04/2018, Volume 78, Issue 4
A search for heavy resonances decaying into a pair of Z bosons leading to ℓ+ℓ−ℓ+ℓ− and ℓ+ℓ−νν̄ final states, where ℓ stands for...
DISTRIBUTIONS | BOSON | DECAY | MASS | TAUOLA | TOOL | PHYSICS, PARTICLES & FIELDS | Fysik | Physical Sciences | Naturvetenskap | Natural Sciences
Journal Article
11. Comparison of the pharmacokinetics between L-BPA and L-FBPA using the same administration dose and protocol: A validation study for the theranostic approach using [ 18 F]-L-FBPA positron emission tomography in boron neutron capture therapy
BMC Cancer, ISSN 1471-2407, 11/2016, Volume 16, Issue 1, p. 859
Background: Boron neutron capture therapy (BNCT) is a cellular-level particle radiation therapy that combines the selective delivery of boron compounds to...
F]-L-FBPA | Boron concentration | [ | FBPA | L-BPA | Boron neutron capture therapy | BORONOPHENYLALANINE | BRAIN-TUMOR | ONCOLOGY | [F-18]-L-FBPA | TISSUE | COMPOUND | IMPROVEMENT | 4-BORONO-2-FLUORO-D,L-PHENYLALANINE | Tissue Distribution | Animals | Fluorine Radioisotopes - pharmacokinetics | Fluorine Radioisotopes - administration & dosage | Boron Neutron Capture Therapy | Female | Neoplasms - radiotherapy | Mice | Positron-Emission Tomography | Fluorine Radioisotopes - chemistry | Neoplasms - diagnostic imaging | Disease Models, Animal | Usage | PET imaging | Boron-neutron capture therapy | Analysis | Radiation | Pharmacokinetics | Health aspects
Journal Article
Five days ago, Stephen Hawking – or someone who has hacked his computerized speech generator – told us that Donald Trump is a supervillain who will transform the Earth into another Venus, with temperatures of 250 °C and sulfuric acid rains.
Wow. Now, every intelligent 10-year-old kid must know why this possibility is non-existent, why the statement is nonsense. Some scientists, including Roy Spencer, have pointed out how absurd Hawking's statements were from a scientific viewpoint. But lots of the scientists who have paid lip service to the lies about the so-called global warming or climate change in the past have remained silent, confirming that their scientific dishonesty has no limits. I despise all the climate alarmists who know that statements like that are absurd but who hide this fact because a lie like that could be helpful for their profits or political causes. You know, what these jerks and the people who tolerate these jerks' existence haven't quite appreciated is that it is only lies that may be helpful for them. Now, there are exceptions. Zeke Hausfather, a Berkeley climatologist, has been an alarmist but he has pointed out that he realizes that Hawking's statement is just junk:
A good example that even brilliant scientists sometimes say silly things when it's outside their field of expertise (see Nobel disease) https://t.co/QPsmB1bsv0— Zeke Hausfather (@hausfath) July 2, 2017
However, I disagree with Hausfather's assertion that this statement is outside Hawking's field of expertise. It is some rather basic physics combined with basic knowledge of outer space that should be known to 10-year-old boys who attend physics lectures at elementary school. It isn't, or shouldn't be, outside Stephen Hawking's expertise, because Hawking is a physicist and one who has studied outer space. I think it's right to say that Stephen Hawking has shown rudimentary ignorance about his field, physics.
A reader has asked me "why Venus is special". But Venus isn't special in any general sense. Or if we said that Venus is special, almost every planet would be special. A more sensible assertion is that every planet is completely different. It has a completely different chemistry than others. It has a completely different temperature than others, mostly due to the completely different distance from the Sun.
I really think that it's a shame that kids and even adults don't reliably know these basic things.
First, look at the distances of the planets from the Sun, e.g. in this table. Mercury, Venus, and Mars have 38%, 73%, and 152% of the Earth's distance while Neptune, the most distant planet from the Sun, has 3,000% of the Earth's distance.
Planets are just rocks that ended up there. But the positions have consequences. The greater the distance is, the cooler the planet will be, at least approximately. Why? Because the amount of solar radiation per unit area goes down as \(1/R^2\). This incoming radiation has to be equal to the outgoing one which scales like \(\sigma T^4\) where \(T\) is the absolute temperature of the planetary surface (i.e. temperature in kelvins). I am neglecting albedo and greenhouse effects and other details. You may see that \(T\sim 1/\sqrt{R}\).
So Venus whose distance from the Sun is 0.73 times greater than the Earth's ("\(k\) times greater" means "smaller" for \(k\lt 1\)) should have the temperature that is \(1/\sqrt{0.73}\sim 1.17\) times the Earth's. If the albedo and greenhouse effect were the same, that would be \(1.17\times 288=336\) kelvins or so. That would be 63 °C or so on the surface and Venus could be a bit warmer but habitable. But the composition of the atmosphere and the albedo etc. are different so Venus ends up much higher than that, well above the boiling point of water. Due to the chemistry and the greenhouse effects etc. that result from it, it's largely unavoidable.
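The $T \sim 1/\sqrt{R}$ estimate above is easy to reproduce; a sketch using the same inputs as the text (288 K for Earth, albedo and greenhouse effects ignored, as stated):

```python
# Rough check of the T ∝ R^(-1/2) equilibrium scaling used above.
# Inputs are the text's own round numbers, not a climate model.
T_earth = 288.0                                             # K, mean surface temp
distances = {"Mercury": 0.38, "Venus": 0.73, "Mars": 1.52}  # in AU
for name, R in distances.items():
    T = T_earth / R ** 0.5
    print(f"{name}: {T:.0f} K ({T - 273.15:.0f} degC)")
# Venus comes out near 337 K (~64 degC), the text's "336 kelvins or so"
```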
That's why we say Venus is barely out of the habitable zone. People usually conclude that Mars is barely inside the habitable zone. The habitable zone is the region of the parameter space, mostly but not necessarily only as a function of the distance from the Sun or another star, where liquid water survives on the surface. Water is good for life.
While Earth and Venus may look like siblings (the radii and distances from the Sun are comparable) and they're sometimes described in this way, they differ in all the details – especially chemistry – dramatically. In particular, the atmosphere of Venus is almost 100 times denser. Around 95% of it is carbon dioxide so the total mass of Venus' carbon dioxide is almost 200,000 times greater than the mass of carbon dioxide in the Earth's atmosphere. There's just no way to pump this much CO2 into the Earth's atmosphere because there's not enough burnable carbon we could access in any imaginable way. At most, if we tried really hard, we could perhaps quintuple the CO2 concentration in the air – which would be a good thing for life on Earth and our economies – but it would be extremely difficult.
Again, the extra greenhouse effect on Venus that adds over 100 °C to the planetary temperature results from an amount of CO2 that is almost 200,000 times greater than that on Earth. Even if we doubled the CO2 in the atmosphere relative to now, the ratio would still be almost 100,000. Note that the greenhouse effect due to CO2 on Earth contributes of order several °C, which is significantly greater than 1/200,000 of the greenhouse effect from CO2 on Venus. That's because the dependence isn't linear. It's sublinear, approximately logarithmic: the more greenhouse gas you have, the less another molecule matters.
If you have never studied the diverse temperatures of the Sun's eight planets, you are encouraged to spend at least a few minutes looking at the Wikipedia pages about these atmospheres:
You see that there are numerous planets – both the distant ones as well as Mercury, the closest one to the Sun (it may be surprising to get it at both extremes) – whose atmospheres are dominated by hydrogen followed by helium – it's like the early elements in the Cosmos. But the precise compositions are totally different, the trace constituents that follow are different, and the overall pressures of the atmospheres differ by many orders of magnitude.

Mercury: hydrogen, helium, oxygen, sodium, ...
Venus: carbon dioxide, nitrogen, sulfur dioxide, argon, ...
Earth: nitrogen, oxygen, argon, water vapor, ...
Mars: carbon dioxide, argon, nitrogen, ...
Jupiter: hydrogen, helium, methane, ammonia, ...
Saturn: hydrogen, helium, traces of volatiles, ...
Uranus: hydrogen, helium, water, ammonia, methane, ...
Neptune: hydrogen, helium, methane, ...
The atmospheres of our neighbors, Venus and Mars, happen to have carbon dioxide as the dominant contribution to the atmosphere. But even though some alarmists are working hard to make people forget the composition of the Earth's atmosphere, Earth is not a planet where CO2 is important in the atmosphere. It doesn't even make it into the top three. On Earth, the main gases are nitrogen, oxygen, argon, and water vapor – although the concentration of water vapor is highly variable, sometimes above and sometimes below that of argon. Carbon dioxide is only the fifth gas.
Carbon dioxide also fails to be the dominant greenhouse gas on Earth. By far, it is water vapor that plays this role. Water's greenhouse effect is stronger than that of carbon dioxide by more than one order of magnitude.
There are lots of differences. One may run some reconstructions of the chemistry and behavior of the planets to find out why these particular gases ended up dominant in the atmosphere. These explanations have lots of parts. In particular, you could ask why Venus has no water. Well, solar wind has probably stolen water and even hydrogen in any form from Venus' atmosphere.
But I don't want to drown in the detailed mechanisms especially because I am no planetary expert. But there are some basic qualitative conclusions. And one of them is that the chemistry of planets is naturally extremely diverse. Why is it diverse? Because the atmospheres above planets look e.g. like a distillation equipment. You heat a mixture of gases and liquids to some temperature, some of them get up, some of them condense and drop down, some of them may chemically change and react with something else, and so on. What you end up with on the surface sensitively depends on the temperature of your distillation equipment you started with and other choices.
It's enough for the temperature to drop beneath the boiling point of some gas and the gas drops down as a liquid and may be removed from the atmosphere. And vice versa. New gases may appear in the atmosphere if the temperature jumps above a threshold. Some reactions may get vastly faster when the temperature moves in some interval. Some elements may escape from the planet or be stolen by the solar wind. Now, add the geological activity similar to one we know on the Earth. Different rocks have diverse composition. The processes are complex and numerous and so are the possible outcomes.
Despite the complexity, we know certain things, and one of them surely is that CO2 isn't a major gas in the Earth's atmosphere. The alarmist fraudsters have clearly succeeded in brainwashing a big, stupid part of the human population, which must have accepted the faith that CO2 is a very important gas in the Earth's atmosphere. But CO2 represents just 400 ppm, i.e. 0.04% of the volume of the Earth's atmosphere, or 1/2,500 of it. You just can't get any qualitative change of the Earth if such a minor gas were doubled, because 0.08% would still be tiny. A doubling of CO2 – which takes a century if the economy keeps on growing – only adds some 1 °C to the temperature of the planet. Even if it were 2 or 3 °C, it's still nothing compared to the hundreds of Celsius degrees that you would need to make the Earth a bit more similar to Venus.
But the connection between Venus and Donald Trump is yet another level of Hawking's stunning stupidity. Donald Trump may be the U.S. president but he's not a dictator controlling life on Earth, not even life in the U.S. The Americans are increasing or decreasing their consumption of fossil fuels in various ways – some people grow the economy, others are unhinged green lunatics, and so on – in ways that don't depend on the identity of a guy in the White House much.
What one U.S. president may do is to change the U.S. emissions by 5% in one direction or another during his 8-year tenure. But the U.S. is just about 1/5 of the world, so this would amount to a change of the world emissions by 1% during these 8 years. During these 8 years, 4 ppm per year times 8 = 32 ppm is being emitted by mankind into the atmosphere. The 1% of that which Trump may affect, as I just explained, is just 0.32 ppm. The greenhouse effect from the 120 ppm that we've added since the industrial revolution could have been 0.7 °C of warming. But 0.32 ppm is 375 times less than 120 ppm, so you expect 375 times less warming than 0.7 °C from that, about 0.002 °C.
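The arithmetic in this paragraph can be checked in a few lines; all inputs below are the post's own rough assumptions, not measured data:

```python
# Back-of-the-envelope check of the numbers in the post.
annual_emissions_ppm = 4.0          # ppm of CO2 added per year
tenure_years = 8
us_share = 0.2                      # U.S. fraction of world emissions
president_leverage = 0.05           # fraction of U.S. emissions a president shifts

tenure_ppm = annual_emissions_ppm * tenure_years            # 32 ppm over 8 years
affected_ppm = tenure_ppm * us_share * president_leverage   # 0.32 ppm

added_since_industrial_ppm = 120.0
warming_since_industrial_C = 0.7
# Linear pro-rating, as in the post (an overestimate, since the real
# dependence is sublinear/logarithmic)
effect_C = warming_since_industrial_C * affected_ppm / added_since_industrial_ppm
print(round(effect_C, 4))  # ≈ 0.0019 degC
```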
A U.S. president like Donald Trump has the capacity to change the temperature of the Earth by 0.002 °C in one way or another, not by the hundreds of degrees that would be needed to make Earth more similar to Venus. Can you see the difference between 0.002 °C and 200 °C? It is the same five damn orders of magnitude that I have mentioned as the ratio of the CO2 amounts in the atmospheres. Is Stephen Hawking or the hacker of his computer unable to distinguish the numbers 200 and 0.002?
I think that just a few decades ago, a scientist who said something like that would mock himself so much that he would completely lose all credibility and become a joke that everyone laughs at. But these days, things like that are apparently normal. What you say may be arbitrarily insane and arbitrarily contradict the things that 10-year-old kids should reliably know. Nevertheless, if this plainly absurd statement of yours is positively correlated with the interests of the far left movement that has hijacked an important part of the public discourse, you won't be finished. In most cases, you won't even be criticized. In some cases, you will even be rewarded and praised.
It's the deceitful far left intellectual contamination that is turning us into brain-dead structures similar to those on Venus, and not carbon dioxide, that needs to be removed from the face of Earth.
And that's the memo.
In the following picture $\kappa$ is the key length and $n$ is the block size.
I understand that $\{0,1\}^\kappa$ means all possible combination of keys and $\{0,1\}^n$ means all possible combination of a plain-text block.
But I don't understand the overall equation, especially the meaning of the arrow $\to$. Can anyone explain it briefly? (Note: I am very weak in mathematics.)
Let $E: \{0,1\}^\kappa \times \{0,1\}^n \to \{0,1\}^n$ be a block cipher with a $\kappa$-bit key and an $n$-bit state.
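In general, $A \times B \to C$ just names a function's domain and codomain: $E$ consumes a pair (a $\kappa$-bit key, an $n$-bit block) and outputs an $n$-bit block. A toy sketch with $\kappa = n = 8$, using XOR purely to illustrate the shape of the map (XOR is NOT a real block cipher):

```python
# The arrow says E is a function: it takes a pair from {0,1}^κ × {0,1}^n
# and returns an element of {0,1}^n. Toy illustration with κ = n = 8.
KAPPA, N = 8, 8

def E(key: int, block: int) -> int:
    # key is a κ-bit string, block is an n-bit string (here both as ints)
    assert 0 <= key < 2 ** KAPPA and 0 <= block < 2 ** N
    return key ^ block          # the output is again an n-bit block

c = E(0b10110001, 0b01010101)
print(format(c, "08b"))         # an element of {0,1}^8
```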
I have developed my own 3D Finite Volume Navier-Stokes solver based on projection method for nonuniform grid. I am looking to incorporate automatic timestep adjustment at each time step based on velocity. Is there any simple technique or procedure to calculate the size of timestep at each step for this problem?
Yes! Normally what's done is called the Method of Lines. Essentially, you discretize in space to get all of your operators, but instead of discretizing the time component, you leave that derivative alone. Now you have a system of ODEs. Then you call an ODE solver like SUNDIALS or DifferentialEquations.jl, which have tools for handling the sparsity of a PDE-derived ODE system (sparse factorization, preconditioned GMRES, IMEX integrators, etc.).
For flow solvers, the general rule is that the time step needs to satisfy some kind of "CFL condition", named after Courant, Friedrichs, and Lewy. This means that $$ \Delta t \le C \min_{K} \frac{h_K}{\|\mathbf u\|_{L^\infty(K)}} $$ In other words, the time step must be proportional to the (minimum over all cells $K$) of the ratio of the mesh size $h_K$ and the velocity on that cell.
For
explicit time stepping methods, this is a theoretical requirement: If you violate the condition (i.e., choose the time step too large) then the solution will become unstable. Each time stepping method leads to a particular value of the constant $C$.
For
implicit time stepping methods, you can violate the condition without becoming unstable, but you will generally obtain a not-very-accurate solution if you choose the time step too large. That's because, if you think of it in a space-time diagram, a large time step with a small mesh size results in very elongated space-time cells. So people choose $C=1$ or $C=2$, or maybe even moderately larger values, but nothing crazy.
The condition above gives you a way to compute a time step adaptively.
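As a concrete illustration of the rule above, a minimal sketch of $\Delta t = C \min_K h_K / \|\mathbf u\|_{K}$ (the safety factor `C` and the guards `u_floor`/`dt_max` are illustrative choices, not part of any particular solver):

```python
# Minimal CFL-based adaptive time step: dt = C * min_K h_K / |u|_K,
# capped so a near-zero velocity field does not produce an unbounded step.
def cfl_timestep(h, u, C=0.5, u_floor=1e-12, dt_max=1.0):
    # h[k]: cell size of cell k; u[k]: velocity magnitude on cell k
    dt = dt_max
    for hk, uk in zip(h, u):
        dt = min(dt, C * hk / max(abs(uk), u_floor))
    return dt

print(cfl_timestep([0.1, 0.05, 0.2], [1.0, 2.0, 0.5]))  # 0.5 * 0.05 / 2 = 0.0125
```

In a real solver this would be recomputed at the start of each step from the current velocity field, with `C` chosen per the time-stepping scheme as discussed above.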
So suppose I have a vector space, $V$ which is all continuous functions on $[0,1]$. Additionally, we have an inner product over $V$ where $\langle f,g \rangle = \int_{0}^{1}f(x)g(x)dx$.
Now suppose I have a subspace, $U \subset V$, defined to be the functions where $f(0) = 0$.
I wish to find $U^\perp$, the orthogonal complement to U.
My attempts so far are unsuccessful: I tried to use the definition of $U^\perp$ ($=\{g \in V \mid \langle f,g \rangle=0 \ \forall f \in U\}$) to get somewhere, but when I plug in the inner product I can't deduce any further.
Any help would be appreciated.
Thank you.
I have three questions about the BRST symmetry in Polchinski's String Theory, Vol. I, pp. 126-127, which come up together.
Given a path integral $$ \int [ d\phi_i dB_A db_A d c^{\alpha}] \exp(-S_1-S_2-S_3) \tag{4.2.3}$$ with $$ S_2 =-iB_A F^A (\phi) \tag{4.2.4}$$ $$ S_3 = b_A c^{\alpha} \delta_{\alpha} F^A(\phi) \tag{4.2.5} $$
the BRST transformation $$ \delta_B \phi_i = -i \epsilon c^{\alpha} \delta_{\alpha} \phi_i \tag{4.2.6a} $$ $$ \delta_B B_A=0 \tag{4.2.6b} $$ $$ \delta_B b_A = \epsilon B_A \tag{4.2.6c} $$ $$ \delta_B c^{\alpha} = \frac{i}{2} \epsilon f^{\alpha}_{\beta \gamma} c^{\beta} c^{\gamma} \tag{4.2.6d} $$
It is said
There is a conserved ghost number which is $+1$ for $c^{\alpha}$, $-1$ for $b_A$ and $\epsilon$, and 0 for all other fields.
How to see that?
The variation of $S_2$ cancels the variation of $b_A$ in $S_3$
I could get $i B_A \delta F^A$ in $\delta S_2$ and $(\delta_B b_A) c^{\alpha} \delta_{\alpha} F^A= \epsilon B_A c^{\alpha} \delta_{\alpha} F^A $ in $\delta S_3$. But there is a $c^{\alpha}$ in $\delta S_3$
the variations of $\delta_{\alpha} F^A$ and $c^{\alpha}$ in $S_3$ cancel.
How to see that?
What exactly happens in the case of equi-spaced points?
Why does increase in polynomial order cause the error to rise after a certain point?
This is similar to Runge's phenomenon, where, with equispaced nodes, the interpolation error can go to infinity as the polynomial degree (i.e., the number of points) increases.
One of the roots of this problem can be found in the Lebesgue constant, as noted in @Subodh's comment on @Pedro's answer. This constant relates the interpolation error to the best-approximation error.
Some notation
We have a function $f \in C([a,b])$ to interpolate at the nodes $x_k$. For Lagrange interpolation, the Lagrange basis polynomials are defined as:
$$L_k(x) = \prod_{i=0,\ i\neq k}^{n}\frac{x-x_i}{x_k-x_i}$$
With these, the interpolation polynomial $p_n \in P_n$ through the pairs $(x_k, f(x_k))$ (written $(x_k, f_k)$ for brevity) is defined as
$$p_n(x) = \sum_{k=0}^n f_kL_k(x)$$
Now consider a perturbation of the data, for example due to rounding, giving values $\tilde{f}_k$. The corresponding polynomial $\tilde{p}_n$ is:
$$\tilde{p}_n(x) = \sum_{k=0}^n \tilde{f}_k L_k(x)$$
The error estimates are:
$$p_n(x) - \tilde{p}_n(x) = \sum_{k=0}^n (f_k - \tilde{f}_k) L_k(x)$$
$$| p_n(x) - \tilde{p}_n(x) | \leq \sum_{k=0}^n |f_k - \tilde{f}_k| |L_k(x)|\leq \left ( \max_k |f_k - \tilde{f}_k| \right) \sum_{k=0}^n |L_k(x)|$$
Now it is possible to define the Lebesgue constant $\Lambda_n$ as:
$$\Lambda_n = \max_{x \in [a,b]} \sum_{k=0}^n |L_k(x)|$$
With this, the final estimate is:
$$|| p_n - \tilde{p}_n ||_{\infty} \leq \left ( \max_k |f_k - \tilde{f}_k| \right) \Lambda_n $$
(Marginal note: we consider only the $\infty$-norm, in part because we are on a space of finite measure, so $L^{\infty} \subseteq \dots \subseteq L^1$.)
From the above calculation we see that $\Lambda_n$ is: independent of the data (it depends only on the node distribution), and an indicator of stability (the smaller it is, the better).
It is also the norm of the interpolation operator with respect to the $\| \cdot \|_\infty$ norm.
With the following theorem we can get an estimate of the interpolation error in terms of the Lebesgue constant:
Let $f$ and $p_n$ be as above. Then
$$ || f - p_n ||_{\infty} \leq (1 + \Lambda_n) d_n(f) $$
where
$$ d_n(f) = \inf_{q_n \in P_n} || f - q_n ||_{\infty} $$
is the error of the best uniform polynomial approximation.
That is, if $\Lambda_n$ is small, the interpolation error is not far from the error of the best uniform approximation; the theorem compares the interpolation error with the smallest possible error, namely that of best uniform approximation.
For this reason the behavior of the interpolation depends on the node distribution. There is a lower bound for $\Lambda_n$: for any node distribution there exists a constant $c$ such that $$\Lambda_n \geq \frac{2}{\pi} \log(n) - c,$$ so the constant always grows, but how it grows is what matters.
For equi-spaced nodes: $$\Lambda_n \approx \frac{2^{n+1}}{en \log(n)}$$ I omitted some details, but we see that the growth is exponential.
For Chebyshev nodes: $$\Lambda_n \leq \frac{2}{\pi} \log(n) + 4$$ Here too I omitted some details; there are more accurate and more complicated estimates (see [1] for details). Note that nodes of the Chebyshev family give logarithmic growth, which by the previous lower bound is near the best obtainable.
For other node distributions see for example Table 1 of this article.
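These growth rates are easy to observe numerically. The sketch below is my own (the helper name `lebesgue_constant` and the grid size are arbitrary choices); it approximates $\Lambda_n = \max_x \sum_k |L_k(x)|$ on a dense grid for equi-spaced and Chebyshev nodes on $[-1,1]$:

```python
import numpy as np

def lebesgue_constant(nodes, grid_size=2000):
    """Approximate Lambda_n = max over [-1,1] of sum_k |L_k(x)|.

    Assumes the nodes lie in [-1, 1]; a dense grid stands in for the max.
    """
    x = np.linspace(-1.0, 1.0, grid_size)
    lebesgue_fn = np.zeros_like(x)
    for k, xk in enumerate(nodes):
        others = np.delete(nodes, k)
        # |L_k(x)| = prod_{i != k} |x - x_i| / |x_k - x_i|
        lebesgue_fn += np.abs(np.prod((x[:, None] - others) / (xk - others), axis=1))
    return lebesgue_fn.max()

for n in (5, 10, 15):
    equi = np.linspace(-1, 1, n + 1)
    cheb = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
    print(n, lebesgue_constant(equi), lebesgue_constant(cheb))
```

Already at $n=15$ the equi-spaced constant is in the hundreds while the Chebyshev one stays below $3$.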
There are many references in books about interpolation. Online, these slides are a nice summary.
See also this open-access article ([1]), A Numerical Comparison of Seven Grids for Polynomial Interpolation on the Interval, for various comparisons.
I am given a few functions and I have to study the following aspects:
Continuity at the point $(0,0)$; whether the derivative exists at $(0,0)$; continuity of the partial derivatives at $(0,0)$; directional derivatives at $(0,0)$.
One of the functions is, for example: $$ f(x,y) = \begin{cases} \frac{x^2y^2}{\sqrt{x^2+y^2}}, & \text{if } (x,y) \neq (0,0) \\ 0, & \text{if } (x,y) = (0,0) \end{cases}$$
I was able to prove the continuity of the function via an epsilon-delta proof (I showed that $\lim\limits_{(x,y) \to (0,0)} f(x,y) = 0$), but my question is: do I always have to do this to prove the continuity of a function? And do I have to do the same for the continuity of the partial derivatives?
Quadratic Formula (deterministic---no guess and check about it)
The QF yields that $-{\frac13}$ and $5$ are roots. So $$3x^2-14x-5=c\left(x+\frac13\right)(x-5)$$ Comparing leading coefficients, $c$ must be $3$: $$\begin{align}3x^2-14x-5&=3\left(x+\frac13\right)(x-5)\\&=(3x+1)(x-5)\end{align}$$
Use Parabola Vertex Form (deterministic---no guess and check about it)
The $x$-coordinate of the vertex of the parabola $y=3x^2-14x-5$ is $-{\frac{b}{2a}}=-{\frac{-14}{2\cdot3}}={\frac73}$. The $y$-coordinate is $3\left(\frac73\right)^2-14\left(\frac73\right)-5=\frac{49}{3}-\frac{2\cdot49}{3}-5=-{\frac{49}{3}}-\frac{15}{3}=-{\frac{64}{3}}$.
So $y=c\left(x-\frac73\right)^2-\frac{64}{3}$. Comparing leading coefficients, $c=3$, so $$\begin{align}y&=3\left(x-\frac73\right)^2-\frac{64}{3}\\&=\frac{1}{3}\left(9\left(x-\frac73\right)^2-64\right)\\&=\frac{1}{3}\left(3\left(x-\frac73\right)-8\right)\left(3\left(x-\frac73\right)+8\right)\\&=\frac{1}{3}\left(3x-15\right)\left(3x+1\right)\\&=\left(x-5\right)\left(3x+1\right)\end{align}$$
Complete the Square (deterministic---no guess and check about it)
Starting with $3x^2-14x-5$, always multiply and divide by $4a$ to avoid fractions:$$\begin{align}&3x^2-14x-5\\&=\frac{4\cdot3}{4\cdot3}\left(3x^2-14x-5\right)\\&=\frac{1}{12}\left(36x^2-12\cdot14x-60\right)\\&=\frac{1}{12}\left(\left(6x\right)^2-2(6x)(14)-60\right)\\&=\frac{1}{12}\left(\left(6x\right)^2-2(6x)(14)+14^2-14^2-60\right)\\&=\frac{1}{12}\left((6x-14)^2-196-60\right)\\&=\frac{1}{12}\left((6x-14)^2-256\right)\\&=\frac{1}{12}(6x-14-16)(6x-14+16)\\&=\frac{1}{6\cdot2}(6x-30)(6x+2)\\&=(x-5)(3x+1)\\\end{align}$$
AC Method (involves integer factorization and a list of things to inspect)
$$3x^2-14x-5$$
Take $3\cdot(-5)=-15$. List pairs that multiply to $-15$:
$$(-15,1),(-5,3),(-3,5),(-1,15)$$
We could have stopped at the first pair, because $-15+1=-14$, the middle coefficient. Use this to replace the $-14$:
$$3x^2-15x+x-5$$
Group two terms at a time and factor out the GCF:
$$3x(x-5)+1(x-5)$$$$(3x+1)(x-5)$$
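The pair search at the heart of the AC method is easy to mechanize. Below is a small Python sketch of my own (the helper name `ac_factor_pairs` is made up) that lists the integer pairs multiplying to $ac$ and flags the one summing to $b$:

```python
def ac_factor_pairs(a, b, c):
    """For a*x^2 + b*x + c: list integer pairs (p, q) with p*q == a*c,
    flagging those with p + q == b (the pair used to split the middle term)."""
    ac = a * c
    pairs = []
    for p in range(-abs(ac), abs(ac) + 1):
        if p != 0 and ac % p == 0:
            q = ac // p
            pairs.append((p, q, p + q == b))
    return pairs

# For 3x^2 - 14x - 5: a*c = -15, and the flagged pair gives -14x = -15x + x,
# so grouping yields 3x(x-5) + 1(x-5) = (3x+1)(x-5).
print([(p, q) for p, q, ok in ac_factor_pairs(3, -14, -5) if ok])
```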
Prime Factor what you can version 1 (involves integer factorization and a list of things to inspect)
If $3x^2-14x-5$ factors, then prime factoring $3$, it factors as $$(3x+?)(x+??)$$And $(?)(??)=-5$. There are only four possibilities. $(?,??)$ is one of $$(1,-5),(-1,5),(5,-1),(-5,1)$$Multiplying out $(3x+?)(x+??)$ for each of the four cases reveals $3x^2-14x-5=(3x+1)(x-5)$.
Rational Root Theorem (involves integer factorization and a list of things to inspect)
If $3x^2-14x-5$ factors, there are rational roots. They must be of the form $\pm\frac{a}{b}$ where $a\mid5$ and $b\mid3$. The only options are $\pm5,\pm{\frac53},\pm1,\pm{\frac13}$. Check these eight inputs to $3x^2-14x-5$ and find that $-{\frac13}$ and $5$ are roots. So $$3x^2-14x-5=c\left(x+\frac13\right)(x-5)$$ Comparing leading coefficients, $c$ must be $3$.
Prime Factor what you can version 2 (using Rational Root Theorem to speed up version 1)
If $3x^2-14x-5$ factors, then prime factoring $3$, it factors as $$(3x+?)(x+??)$$The latter factor reveals that if the thing factors at all, one of its roots is an integer. Considering the RRT, check if any of $\pm5,\pm1$ are roots, and discover that $5$ is. Conclude $$(3x+?)(x-5)$$ and then conclude $$(3x+1)(x-5)$$
Graphing to improve efficiency of the Rational Root Theorem method
Using the vertex formula again, locate the vertex at $\left(\frac73,-{\frac{64}{3}}\right)$. Since $a=3$, consider the sequence $\{3\cdot1,3\cdot3,3\cdot5,3\cdot7,\ldots\}$. Extend horizontally outward from the vertex by $1$ in each direction, move up $3$ and plot a point. Extend horizontally outward again by $1$, move up $9$ and plot a point. Continue until you've plotted points that cross over the $x$-axis.
Now you have a rough idea where the roots are. Returning to the rational root theorem approach, you can eliminate many of the potential roots from the initial list, speeding up that approach.
Given the following recurrence relation,
$T(n) = 2 T(\frac{n}{2}) + f(n)$,
where $f(n) = \Omega(n^2)$, I'm asked to prove or disprove that $T(n) = O(f(n))$.
If I'm allowed to restrict my discussion to the special case in which $n = 2^k$ for a positive integer $k$, how can I prove or disprove the proposed bound, $T(n) = O(f(n))$?
As side information: if $f(n) = \Theta(n^2)$, then we can show that $T(n) = \Theta(f(n))$ by the Master Theorem. But how should I handle the subtlety of this case, with big-Omega in the hypothesis and big-O in the claim?
Thanks for the help.
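To make the $\Theta$ case above concrete, here is a quick numeric sanity check (my own sketch; it assumes $f(n) = n^2$ exactly and the base case $T(1) = 1$): the ratio $T(n)/n^2$ settles toward a constant, consistent with $T(n) = \Theta(f(n))$.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Recurrence T(n) = 2*T(n/2) + f(n) with f(n) = n^2,
    # evaluated only at powers of two; base case T(1) = 1.
    if n == 1:
        return 1
    return 2 * T(n // 2) + n * n

for k in (4, 8, 12):
    n = 2 ** k
    print(n, T(n) / n ** 2)  # ratio approaches 2 as n grows
```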
I am trying to build a model for reactions on a lattice in the Doi-Peliti formalism. Suppose there exists a lattice of $N$ sites indexed by $i$. Each site can be either occupied or unoccupied. Assuming there exists a single type of particle, I can use $SU(2)$ fermionic operators: $a^\dagger$ and $a$ to denote creation and annihilation operators that obey the anti-commutation rules: (subscript indicates lattice site) $$\{a_i,a^\dagger_j\} = \delta_{i,j}$$ $$\{a_i,a_j\}= \{a^\dagger_i, a^\dagger_j\} = 0$$
Now suppose there is more than one type of fermion (say $a^{(1)}$ and $a^{(2)}$), however, each lattice site can either be unoccupied or be occupied by either exactly one $a^{(1)}$ or $a^{(2)}$ but not both.
First question, what would be the appropriate commutation rules in this case,
I assume the following are still valid: $$\{a^{(x)}_i,a^{(x)\dagger}_i\} = 1$$ $$[a^{(x)}_i,a^{(y)\dagger}_j] = 0 \qquad \text{if } x \neq y\ \text{and}\ i \neq j $$
However, what about $$[a^{(x)}_i,a^{(y)\dagger}_i] = ? \qquad \text{if } x \neq y\ \text{and}\ i = j $$
Again, I want each site to be only singly occupied (either by $a^{(1)}$ or $a^{(2)}$) or unoccupied.
Second, would these commutators be enough to characterise the system or do I need something more?
Third, am I correct to assume that the number operators for $a^{(1)}$, $a^{(2)}$ and vacancies would be given by $N_i^{(1)} = a^{(1)\dagger}_ia^{(1)}_i$ $N_i^{(2)} = a^{(2)\dagger}_ia^{(2)}_i$ and $N_i^{(\text{vac})} = 1 - N_i^{(1)}- N_i^{(2)}$
I suspect this problem might be vaguely connected to parastatistics and Green ansatz, but I am not certain.
Fourth: in the Doi-Peliti formalism, a reaction where the particle at site $i$ interacts with its neighbour at $j$ and is turned into $C$, $$A_i + B_j \rightarrow C_i+ B_j, $$ would be given by the following Hamiltonian ($j(i)$ indicates summing over sites neighbouring $i$). Typically I am familiar with the situation of unrestricted occupation numbers, where the operators are bosonic; however, would this still hold in the case of restricted occupation numbers using the fermionic operators described above?
$$H = k \sum_{j(i)}b^{\dagger}_jb_j(c^\dagger_ia_i-a^\dagger_ia_i)$$ Now, consider the case wherein a vacancy is created instead of a new particle.
$$A_i + B_j \rightarrow \emptyset + B_j $$
Should the vacancy be treated just like a particle in this case? Or is the hamiltonian simply
$$H = k \sum_{j(i)} b^{\dagger}_j b_j(a_i-a^\dagger_ia_i)$$
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash wat did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located.The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a} Make a small html file that looks like <!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
This is a statement from a finance textbook - I find it pretty clear everywhere else, but this particular part I am clueless about. Hopefully you guys can figure it out.
The problem is solving: $$\sup_{(C_t,M_t)} E[\int_0^\infty e^{-\delta t} \, u(C_t,M_t/I_t) \,dt]$$ $$s.t. E[ \int_0^\infty \zeta_t \cdot (C_t+\frac{M_t}{I_t}r_t) \, dt]\leq W_0$$
where $W_0, \delta$ are constants, $u$ is some function (as nice as it needs to be), and in general $(C_t), (I_t), (M_t), (r_t)$ are all stochastic processes. It is just stated, without further ado, that the first order conditions are $$ e^{-\delta t} u_C(C_t,M_t/I_t) = \psi \zeta_t $$ $$ e^{-\delta t} u_M(C_t,M_t/I_t) = \psi \zeta_t r_t$$ where $\psi$ is a Lagrange multiplier, which is set such that the constraint holds. $u_C$ and $u_M$ are derivatives with respect to $C$ and $M$ respectively. Then he continues, using this result as given, and never speaks of it again.
To be honest I've got no idea - I've seen Lagrange used before but I don't see it giving me anything here. (P.S. sorry - I couldn't think of a better title, feel very free to change it)
Edit: Fixed some typos - it should be clear from the answer which.
Functiones et Approximatio Commentarii Mathematici (Funct. Approx. Comment. Math.), Volume 36 (2006), 45-70. Estimates of the approximation error for abstract sampling type operators in Orlicz spaces. Abstract:
We get some inequalities concerning the modular distance $I^\varphi_G[Tf -f]$ for bounded functions $f:G\rightarrow \mathbb{R}.$ Here $G$ is a locally compact Hausdorff topological space provided with a regular and $\sigma$-finite measure $\mu_G,$ $I^\varphi_G$ is the modular functional generating the Orlicz spaces $L^\varphi(G)$ and $T$ is a nonlinear integral operator of the form $$(Tf)(s) = \int_H K(s,t, f(t)) d\mu_H(t),$$ where $H$ is a closed subset of $G$ endowed with another regular and $\sigma$-finite measure $\mu_H.$ As a consequence we obtain a convergence theorem for a net of such operators. Some applications to discrete operators are given.
Article information Source Funct. Approx. Comment. Math., Volume 36 (2006), 45-70. Dates First available in Project Euclid: 18 December 2008 Permanent link to this document https://projecteuclid.org/euclid.facm/1229616441 Digital Object Identifier doi:10.7169/facm/1229616441 Mathematical Reviews number (MathSciNet) MR2296638 Subjects Primary: 47G10: Integral operators [See also 45P05] Secondary: 47H30: Particular nonlinear operators (superposition, Hammerstein, Nemytskiĭ, Uryson, etc.) [See also 45Gxx, 45P05] 26D15: Inequalities for sums, series and integrals 46E30: Spaces of measurable functions (Lp-spaces, Orlicz spaces, Köthe function spaces, Lorentz spaces, rearrangement invariant spaces, ideal spaces, etc.) Citation
Bardaro, Carlo; Mantellini, Ilaria. Estimates of the approximation error for abstract sampling type operators in Orlicz spaces. Funct. Approx. Comment. Math. 36 (2006), 45--70. doi:10.7169/facm/1229616441. https://projecteuclid.org/euclid.facm/1229616441
Preprint Series 1999 1999:14 V. Buslaev
The paper is dedicated to generalizations and applications of Poincaré's theorem on recurrence equations with limit constant coefficients. In particular, applications in the theory of continued fractions, mainly to problems related to Van Vleck's theorem on regular C-fractions with limit constant coefficients, are considered. Special attention is ...
1999:13 R. Howard, G. Károlyi, and L. Székely
We study the possibility of the existence of a Katona type proof for the Erdős-Ko-Rado theorem for 2- and 3-intersecting families of sets. An Erdős-Ko-Rado type theorem for 2-intersecting integer arithmetic progressions and a model theoretic argument show that such an approach works in the 2-intersecting case.
1999:12 S. Dilworth, R. Howard, and J. Roberts
Let $X$ be a normed space. A set $A\subseteq{X}$ is approximately convex if $d(ta+(1-t)b,A)\leq1$ for all $a,b\in{A}$ and $t\in[0,1]$. We prove that every $n$-dimensional normed space contains approximately convex sets $A$ with $H(A,\textrm{Co ...
1999:11 A. Fokas and L. Sung
A general approach to solving boundary value problems for two-dimensional and integrable nonlinear PDEs was announced in [2] and further developed in [3,4]. This method can be applied to linear PDEs with constant coefficients and to integrable non-linear PDEs. It involves (a) formulating the given PDE as the ...
1999:10 R. DeVore and G. Petrova
Averaging lemmas deduce smoothness of velocity averages, such as $$\bar{f}(x):=\int _ \Omega{f}(x,v)\,dv,\ \ \Omega\subset{\mathbb{R}^d},$$ from properties of $f$. A canonical example is that $\bar{f}$ is in the Sobolev space $W^{\frac{1}{2}}(L _ 2(\mathbb{R}^d ...
1999:09 A. Cohen, W. Dahmen, I. Daubechies, and R. DeVore
Tree approximation is a new form of nonlinear approximation which appears naturally in some applications such as image processing and adaptive numerical methods. It is somewhat more restrictive than the usual n-term approximation. We show that the restrictions of tree approximation cost little in terms of rates of approximation. We ...
1999:08 V. Temlyakov
We suggest a three step strategy to find a good basis (dictionary) for nonlinear $m$-term approximation. The first step consists of solving an optimization problem for a given function class $F$, where we optimize over a collection $D$ of bases (dictionaries). The second step is devoted to finding a ...
1999:07 S. Brenner Convergence of the multigrid V-cycle algorithm for second order boundary value problems without full elliptic regularity (file not available) (Math. Comp. 71 (2002), 507-525)
The multigrid V-cycle algorithm using the Richardson relaxation scheme as the smoother is studied in this paper. For second-order elliptic boundary value problems, the contraction number of the V-cycle algorithm is shown to improve uniformly with the increase of the number of smoothing steps, without ...
1999:06 G. Kyriazis and P. Petrushev
We give a new method for the construction of unconditional bases for general classes of Triebel-Lizorkin and Besov spaces. These include the $L _ p$, $H _ p$, potential, and Sobolev spaces. The main feature of our method is that the character of the basis functions can be prescribed in a ...
1999:05 É. Czabarka, G. Konjevod, M. Marathe, A. Percus, and D. Torney Algorithms for optimizing production DNA sequencing (file not available)
We discuss the problem of optimally "finishing" a partially sequenced, reconstructed DNA segment. At first sight, this appears to be computationally hard. We construct a series of increasingly realistic models for the problem and show that all of these can in fact be solved to optimality in polynomial time, with ...
1999:04 A. Cohen, R. DeVore, G. Kerkyacharian, and D. Picard
In recent years, various nonlinear methods have been proposed and deeply investigated in the context of nonparametric estimation: shrinkage methods [21], locally adaptive bandwidth selection [16] and wavelet thresholding [7].
One way of comparing the performances of two different methods is to fix a class of functions to be estimated ...
1999:03 V. Temlyakov
Theoretical greedy type algorithms are studied: a Weak Greedy Algorithm, a Weak Orthogonal Greedy Algorithm, and a Weak Relaxed Greedy Algorithm. These algorithms are defined by weaker assumptions than their analogs, the Pure Greedy Algorithm, the Orthogonal Greedy Algorithm, and the Relaxed Greedy Algorithm. The weaker assumptions make these new ...
1999:02 M. Steel and L. Székely
In this paper we study how to invert random functions under different criteria. The motivation for this study is phylogeny reconstruction, since the evolution of biomolecular sequences may be considered as a random function from the set of possible phylogenetic trees to the set of collections of biomolecular sequences of ...
1999:01 R. Getsadze
A complement to A.M. Olevskii's fundamental inequality on the logarithmic growth of Lebesgue functions of an arbitrary uniformly bounded orthonormal system on a set of positive measure is made. Namely, the index where the Lebesgue functions have growth slightly weaker than logarithmic can be chosen independent of the variable ...
Here is what I know:
A space form is defined as a manifold admitting a complete Riemannian metric of constant sectional curvature.
A classical result of Cartan states that a manifold is a space form if and only if it is a quotient of $S^n$, $\mathbb{R}^n$, or $\mathbb{H}^n$ with their usual metrics by a discrete group of isometries $\Gamma$ acting properly discontinuously; furthermore $\Gamma$ is isomorphic to the fundamental group of the space form. This reduces the space form problem to a problem in group theory.
Such a discrete group is referred to as crystallographic in the case of $\mathbb{R}^n$ and Kleinian in the case of $\mathbb{H}^n$.
The sphere and the real projective space are the only even-dimensional spherical space forms.
I have often seen papers in differential geometry, for example convergence results for Ricci flow [cf. Hamilton 1982 & 1986, Böhm-Wilking, Brendle-Schoen], show that manifolds admitting metrics of a certain type (e.g. of positive Ricci curvature, positive curvature operator, positive isotropic curvature) are space forms, "which have been completely classified" [cf. Wolf's book
Spaces of constant curvature], seeming to imply that this gives a complete understanding of such manifolds.
However, even having seen such a list of groups, I don't feel that it gives me a much better understanding of space forms than what I listed above. How can I understand it better? Some sample questions: (I'm happy to restrict to the three-dimensional case for simplicity.)
Are two space forms diffeomorphic if their fundamental groups are isomorphic? If this is not true, what if we require more of their homotopy groups to be isomorphic?
Maybe a natural class of 3-manifolds to consider are those with locally homogeneous metrics (i.e. for any $p,q\in M$ there are neighborhoods $U_p\ni p$ and $V_q\ni q$ and an isometry $U_p\to V_q$); by a result of Singer [CPAM 1960] the universal cover of such a manifold with the pulled back metric is homogeneous and so is one of the eight geometries. If this geometry is $S^2\times\mathbb{R}$, $\mathbb{H}^2\times\mathbb{R}$ or $\widetilde{\operatorname{SL}}(2,\mathbb{R})$ then it is immediate from the second bullet point above that $M$ is not a space form; however I think (?) Nil and Sol have the topology of $\mathbb{R}^3$; so is it possible for a locally homogeneous manifold of one of these types to be a space form?
More generally, given any closed 3-manifold, are there any generally applicable (but not too trivial) necessary or sufficient conditions to see if it is a space form, perhaps on the level of homotopy, homology, or cohomology?
Given any compact 3-manifold $M$, when can we make $M\sharp N$ a space form for some choice of compact 3-manifold $N$?
These are just a handful of things I'm curious about.
My main question is the one in the title.
Let $M^n$ be a connected, closed manifold. It has Poincaré duality with $\mathbb{Z}/2$ coefficients $H^k(M;\mathbb{Z}/2)\cong H_{n-k}(M;\mathbb{Z}/2)$, induced by cap product with the fundamental class $[M]_2\in H_n(M;\mathbb{Z}/2)$. It also has Poincaré duality with twisted integral coefficients, induced by cap product with a twisted fundamental class. It has Poincaré duality with $\mathbb{Z}$ coefficients if and only if it's orientable.
Let us say that $M$ satisfies "almost Poincaré duality in dimension $k$" if for every $x\in H^k(M;\mathbb{Z})$ there exists $a\in H_{n-k}(M;\mathbb{Z})$ such that $$ \rho(x)\cap[M]_2 = \rho(a), $$ where $\rho$ denotes reduction mod $2$ in both cohomology and homology. That is, the Poincaré dual of the reduction of an integral cohomology class admits an integral lift.
Examples:
An orientable manifold satisfies almost Poincaré duality in all dimensions (because reduction commutes with the duality isomorphisms). A non-orientable manifold cannot satisfy almost Poincaré duality in dimension $0$, since $[M]_2$ is not the reduction of an integral class. $\mathbb{R}P^{2m}$ satisfies almost Poincaré duality in dimension $k$ if and only if $k$ is odd or $k=2m$.
Question: Are there conditions we can place on the (co)homology of $M$ which imply that it satisfies almost Poincaré duality in a certain dimension? I am particularly interested in closed $4$-manifolds which satisfy almost Poincaré duality in dimension $2$.
More generally, I would be interested to know if this notion has come up in the literature anywhere before.
Not sure about the QR decomposition (see edit), but I found a way to do it with an eigendecomposition (derivation below).
Formulating the problem
Let $n \times (k+1)$ matrix $V$ be the Vandermonde matrix, as defined in the original question. We want to factor $V$ into a product of two matrices: 1) An $n \times (k+1)$ matrix $A$, where $A_{ij}$ contains the value of the $j$th orthogonal polynomial evaluated at the $i$th data point, and 2) A $(k+1) \times (k+1)$ matrix $B$:
$$A B = V$$
The columns of $A$ must be orthogonal in the sense defined in the original question. That is:
$$A^T W A = I$$
where $I$ is the identity matrix and $W$ is a diagonal $n \times n$ matrix containing the weights. The solution below actually applies for any symmetric, positive definite $W$; it doesn't have to be diagonal.
Solution
One possible solution is:
$$A = V U \Lambda^{-\frac{1}{2}} \qquad B = \Lambda^{\frac{1}{2}} U^T$$
where $U \Lambda U^T$ is the eigendecomposition of $V^T W V$
The solution is actually only specified up to unitary transformations (i.e. rotations and sign flips), so there are infinitely many. That is, given a solution $(A,B)$ and any matrix $M$ where $M^T M = I$, then $(A M, M^T B)$ is also a solution.
Derivation
Since $AB = V$ we can write:
$$(A B)^T W (A B)\enspace = \enspace B^T A^T W A B\enspace = \enspace V^T W V$$
Since $A^T W A = I$ this simplifies to:
$$B^T B = V^T W V$$
Let $U$ be an orthogonal matrix containing the eigenvectors of $V^T W V$ on the columns, and let diagonal matrix $\Lambda$ contain the corresponding eigenvalues. So $U \Lambda U^T = V^T W V$. Then a solution to the above equation is:
$$B = \Lambda^\frac{1}{2} U^T$$
Plugging this back into $A B = V$ gives:
$$A \Lambda^{\frac{1}{2}} U^T = V$$
Since $U$ is orthogonal and $\Lambda$ is diagonal, we can solve for $A$ by right-multiplying both sides by $U \Lambda^{-\frac{1}{2}}$:
$$A = V U \Lambda^{-\frac{1}{2}}$$
Edit: Solution based on QR decomposition
The OP proposed the following solution (I've changed some variable names): Let $\tilde{Q} \tilde{R}$ be the QR decomposition of $W^{\frac{1}{2}} V$. Then:
$$A = W^{-\frac{1}{2}} \tilde{Q}$$
Here's a proof that the columns are orthogonal:
$$A^T W A\tag{This must equal the identity matrix}$$
$$= \tilde{Q}^T (W^{-\frac{1}{2}})^T W W^{-\frac{1}{2}} \tilde{Q}\tag{Substitute in expression for $A$}$$
$$= \tilde{Q}^T W^{-\frac{1}{2}} W W^{-\frac{1}{2}} \tilde{Q}\tag{$W$ is symmetric}$$
$$= \tilde{Q}^T \tilde{Q}\tag{$W^{-\frac{1}{2}} W W^{-\frac{1}{2}} = I$}$$
$$= I\tag{By definition of the QR decomposition}$$
We can find the corresponding $B$ by solving $A B = V$, which gives:
$$B = \tilde{Q}^T W^{\frac{1}{2}} V$$
Therefore, this is also a valid solution. The solution above (based on the eigendecomposition; call it $A_{eig}$) and the OP's solution (based on the QR decomposition; call it $A_{QR}$) are related as:
$$A_{eig} = A_{QR} M \quad B_{eig} = M^T B_{QR} \quad \text{where} \quad M = \tilde{Q}^T W^{\frac{1}{2}} V U \Lambda^{-\frac{1}{2}}$$
One can show that $M^T M = I$, so this is a unitary transformation (see the note above about infinitely many solutions). |
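Both constructions are easy to check numerically. Here is a sketch of the eigendecomposition-based factorization (the data points, weights, and degree are arbitrary example values, not from the original question):

```python
import numpy as np

# Sketch of the eigendecomposition-based factorization above.
# x: data points, w: positive weights, k: polynomial degree (example values).
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 12)          # n = 12 distinct data points
w = rng.uniform(0.5, 2.0, size=12)      # positive weights -> W is SPD
k = 3

V = np.vander(x, k + 1, increasing=True)   # n x (k+1) Vandermonde matrix
W = np.diag(w)

# Eigendecomposition of the symmetric positive definite V^T W V = U Lambda U^T
lam, U = np.linalg.eigh(V.T @ W @ V)

A = V @ U @ np.diag(lam ** -0.5)   # A = V U Lambda^{-1/2}
B = np.diag(lam ** 0.5) @ U.T      # B = Lambda^{1/2} U^T

# Verify the two defining properties
assert np.allclose(A @ B, V)                     # A B = V
assert np.allclose(A.T @ W @ A, np.eye(k + 1))   # A^T W A = I
```

The same check with `np.linalg.qr` on $W^{1/2}V$ verifies the QR-based solution; the two answers differ only by the unitary $M$ above.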
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W); 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
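The conversion factors quoted in the two passages above can be collected into a small sketch (values taken from the text itself, not from a standards body; the function names are hypothetical):

```python
# Unit conversions quoted in the passages above.
RSI_PER_TOG = 0.1          # 1 tog = 0.1 m^2*K/W
RSI_PER_CLO = 0.155        # 1 clo = 0.155 m^2*K/W
KG_PER_STONE = 6.35029318  # 1 stone = 14 lb

def clo_to_tog(clo):
    """Convert clo to tog via the common RSI unit."""
    return clo * RSI_PER_CLO / RSI_PER_TOG

def stones_to_kg(st):
    """Convert stones to kilograms."""
    return st * KG_PER_STONE

print(clo_to_tog(1.0))    # ~1.55 tog, matching the quoted equivalence
print(stones_to_kg(1.0))  # 6.35029318 kg
```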
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site, but then I decided to learn what is wrong and see if I ca...
I am a bit confused about angular momentum in classical physics. For the orbital motion of a point mass: if we pick a new coordinate system (one that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved; there is an additional term that varies with time.)
in the new coordinate system: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the first term varies with time ($\vec{R}$ is the shift of the coordinate origin; $\vec{R}$ is constant while $\vec{p}$ is, roughly, rotating).
Would anyone be kind enough to shed some light on this for me?
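A quick numerical illustration of the computation in the question (a sketch with assumed example values: a circular orbit about the origin and a constant origin shift $\vec R$). $\vec L = \vec r \times \vec p$ is constant, while the shifted $\vec{L'} = \vec L + \vec R \times \vec p$ oscillates, because the central force exerts a nonzero torque about the new origin:

```python
import numpy as np

# Point mass on a circular orbit about the origin (central force at the origin).
m, r0, omega = 1.0, 1.0, 2.0
t = np.linspace(0.0, 3.0, 7)

r = np.stack([r0 * np.cos(omega * t), r0 * np.sin(omega * t)], axis=1)
p = m * np.stack([-r0 * omega * np.sin(omega * t),
                  r0 * omega * np.cos(omega * t)], axis=1)

# z-component of a cross product of 2D vectors
cross = lambda a, b: a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]

L = cross(r, p)               # constant: m * r0^2 * omega at every sample time
R = np.array([[2.0, 0.0]])    # constant shift of the origin (hypothetical value)
L_shift = cross(r + R, p)     # L' = R x p + L

print(np.ptp(L))        # ~0: conserved about the force center
print(np.ptp(L_shift))  # clearly nonzero: the R x p term oscillates in time
```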
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet
Is it possible to make a time machine ever? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
General case. In relativistic thermodynamics, inverse temperature $\beta^\mu$ is a vector field, namely the multipliers of the 4-momentum density in the exponent of the density operator specifying the system in terms of statistical mechanics, using the maximum entropy method, where $\beta^\mu p_\mu$ (in units where $c=1$) replaces the term $\beta H$ of the nonrelativistic canonical ensemble. This is done in
C.G. van Weert, Maximum entropy principle and relativistic hydrodynamics, Annals of Physics 140 (1982), 133-162.
for classical statistical mechanics and for quantum statistical mechanics in
T. Hayata et al., Relativistic hydrodynamics from quantum field theory on the basis of the generalized Gibbs ensemble method, Phys. Rev. D 92 (2015), 065008. https://arxiv.org/abs/1503.04535
For an extension to general relativity with spin see also
F. Becattini, Covariant statistical mechanics and the stress-energy tensor, Phys. Rev. Lett 108 (2012), 244502. https://arxiv.org/abs/1511.05439
Conservative case. One can define a scalar temperature $T:=1/k_B\sqrt{\beta^\mu\beta_\mu}$ and a velocity field $u^\mu:=k_BT\beta^\mu$ for the fluid; then $\beta^\mu=u^\mu/k_BT$, and the distribution function for an ideal fluid takes the form of a Jüttner distribution $e^{-u\cdot p/k_BT}$.
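As a small consistency check of these definitions, the velocity field defined this way is automatically normalized (a short derivation, signature conventions aside, in units with $c=1$):

```latex
u^\mu u_\mu \;=\; (k_B T)^2\,\beta^\mu \beta_\mu
           \;=\; \frac{\beta^\mu \beta_\mu}{\beta^\nu \beta_\nu} \;=\; 1,
\qquad\text{since } (k_B T)^2 = \frac{1}{\beta^\nu \beta_\nu}
\text{ by the definition } T = \frac{1}{k_B\sqrt{\beta^\mu\beta_\mu}} .
```

So $u^\mu$ is a unit timelike vector, as required for a fluid four-velocity.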
For an ideal fluid (i.e., assuming no dissipation, so that all conservation laws hold exactly), one obtains the format commonly used in relativistic hydrodynamics (see Chapter 22 in the book Misner, Thorne, Wheeler, Gravitation). It amounts to treating the thermodynamics nonrelativistically in the rest frame of the fluid.
Note that the definition of temperature consistent with the canonical ensemble needs a distribution of the form $e^{-\beta H - terms~ linear~ in~ p}$, conforming with the identification of the noncovariant $\beta^0$ as the inverse canonical temperature. Essentially, this is due to the frame dependence of the volume that enters the thermodynamics. This is in agreement with the noncovariant definition of temperature used by Planck and Einstein and was the generally agreed upon convention until at least 1968; cf. the discussion in
R. Balescu, Relativistic statistical thermodynamics, Physica 40 (1968), 309-338.
In contrast, the covariant Jüttner distribution has the form $e^{-u_0 H/k_BT - terms~ linear~ in~ p}$. Therefore the covariant scalar temperature differs from the canonical one by a velocity-dependent factor $u_0$. This explains the different transformation law. The covariant scalar temperature is simply the canonical temperature in the rest frame, turned covariant by redefinition.
Quantum general relativity. In quantum general relativity, accelerated observers interpret temperature differently. This is demonstrated for the vacuum state in Minkowski space by the Unruh effect, which is part of the thermodynamics of black holes. This seems inconsistent with the assumption of a covariant temperature.
Dissipative case. The situation is more complicated in the more realistic dissipative case. Once one allows for dissipation, amounting to going from Euler to Navier-Stokes in the nonrelativistic case, trying to generalize this simple formulation runs into problems. Thus it cannot be completely correct. In a gradient expansion at low order, the velocity field defined above from $\beta^\mu$ can be identified in the Landau-Lifschitz frame with the velocity field proportional to the energy current; see (86) in Hayata et al.. However, in general, this identification involves an approximation as there is no reason for these velocity fields to be exactly parallel; see, e.g.,
P. Van and T.S. Biró, First order and stable relativistic dissipative hydrodynamics, Physics Letters B 709 (2012), 106-110. https://arxiv.org/abs/1109.0985
There are various ways to patch the situation, starting from a kinetic description (valid for dilute gases only): The first reasonable formulation by Israel and Stewart based on a first order gradient expansion turned out to exhibit acausal behavior and not to be thermodynamically consistent. Extensions to second order (by Romatschke, e.g., https://arxiv.org/abs/0902.3663) or third order (by El et al., https://arxiv.org/abs/0907.4500) remedy the problems at low density, but shift the difficulties only to higher order terms (see Section 3.2 of Kovtun, https://arxiv.org/abs/1205.5040).
A causal and thermodynamically consistent formulation involving additional fields was given by Mueller and Ruggeri in their book Extended Thermodynamics 1993 and its 2nd edition, called Rational extended Thermodynamics 1998.
Paradoxes. Concerning the paradoxes mentioned in the original post:
Note that the formula $\langle E\rangle = \frac32 k_B T$ is valid only under very special circumstances (nonrelativistic ideal monatomic gas in its rest frame), and does not generalize. In general there is no simple relationship between temperature and velocity.
One can say that your paradox arises because in the three scenarios, three different concepts of temperature are used. What temperature is and how it transforms is a matter of convention, and the dominant convention changed some time after 1968; after Balescu's paper mentioned above, which shows that until 1963 it was universally defined as being frame-dependent. Today both conventions are alive, the frame-independent one being dominant.
This post imported from StackExchange Physics at 2016-06-24 15:03 (UTC), posted by SE-user Arnold Neumaier |
The rotation group SO(3) can be viewed as the group that preserves our old friends the delta tensor $\delta^{ab}$ and $\epsilon^{abc}$ (the totally antisymmetric tensor). In equations, this says:
$R^i_{\ a}R^j_{\ b} \delta^{ab} = \delta^{ij}$, a.k.a. $RR^T = I$, and $R^i_{\ a}R^j_{\ b}R^k_{\ c} \epsilon^{abc} = \epsilon^{ijk}$, a.k.a. $\det(R) = 1$.
When we derive the Lie algebra by taking $R$ infinitesimally different from $I$, $R = I + \delta R$, the first condition gives us
1B $\delta R_{ab} = -\delta R_{ba}$.
This is all very familiar, the antisymmetric matrices. Since we are near the identity matrix, the determinant of $R$ is automatically one, and we don't need to plug $R = I + \delta R$ into condition 2.
But if we do, and combine with 1B, we derive an identity between $\delta$ and $\epsilon$ that looks like this:$\iota^{abijk} = \delta^{ak} \epsilon^{ijb}+\delta^{aj} \epsilon^{ibk}+\delta^{ai} \epsilon^{bjk} -\delta^{bk} \epsilon^{ija}-\delta^{bj} \epsilon^{iak}-\delta^{bi} \epsilon^{ajk}=0$. (I take the big expression to be the definition of $\iota$.)
This identity seems to work numerically. The issue is that when computing explicit commutators for spinors in the Lorentz algebra in QFT, this identity keeps popping up in various forms (typically making it hard to simplify an answer into a desired form until you recognize it.)
The question is whether there is a good way to understand this identity and others that might arise with the Lorentz group invariant tensors $\epsilon ^{\mu \nu \rho \sigma}$, or even with more general Lie groups. I've looked at the diagrammatic form of the identity and didn't see any enlightenment there (although it helped me prove to myself that it should be true numerically.)
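The numerical check mentioned above can be sketched as follows (numpy; indices run over $0,1,2$ rather than $1,2,3$):

```python
import numpy as np
from itertools import permutations

# Totally antisymmetric tensor epsilon^{abc} in 3 dimensions
eps = np.zeros((3, 3, 3))
for perm in permutations(range(3)):
    a, b, c = perm
    # sign of the permutation: determinant of the row-permuted identity
    sgn = np.linalg.det(np.eye(3)[list(perm)])
    eps[a, b, c] = sgn

delta = np.eye(3)

# iota^{abijk} as defined in the question
iota = (np.einsum('ak,ijb->abijk', delta, eps)
      + np.einsum('aj,ibk->abijk', delta, eps)
      + np.einsum('ai,bjk->abijk', delta, eps)
      - np.einsum('bk,ija->abijk', delta, eps)
      - np.einsum('bj,iak->abijk', delta, eps)
      - np.einsum('bi,ajk->abijk', delta, eps))

print(np.abs(iota).max())  # vanishes componentwise: the identity holds
```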
Edit: For reasons of avoiding circular logic, in the following I'll assume that some group is defined by $\delta$ and $\epsilon$ being invariant tensors, but I don't know which tensors they are, i.e. they are not necessarily kronecker delta and the totally antisymmetric tensors. I define the lower index version $R_{ab}$ of a group element $R^a_b$ using the $\delta$ tensor. I think I can do this consistently, and that these are the only facts about $\epsilon$ and $\delta$ I have actually used.
Explicitly, plugging into 2 gives $\delta^i_a \delta^j_b \delta R^k_c \epsilon^{abc} + \delta^i_a \delta^k_c \delta R^j_b \epsilon^{abc} +\delta^k_c \delta^j_b \delta R^i_a \epsilon^{abc} = 0$. Factoring out the $\delta R$ gives 1C:
$\delta R_{ab} (\delta^{ak} \epsilon^{ijb}+\delta^{aj} \epsilon^{ibk}+\delta^{ai} \epsilon^{bjk})=0$.
However, by condition 1B, $\delta R_{ab}$ is antisymmetric, so only the antisymmetric part of the factor multiplying $\delta R_{ab}$ contributes. Therefore we get $\delta R_{ab} \iota^{abijk} =0$. Logically, this could be a further condition on $\delta R$, but for SO(3) at least it seems that $\iota^{abijk}$ is just always $0$, which I argued for above based on the fact that condition 2 is equivalent to $\det(R) = 1$. I do not know what happens in general.
This post has been migrated from (A51.SE)
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\mathbf{\sqrt{s_{{\rm NN}}} = 5.02}$ TeV
(Elsevier, 2015-01)
We report on the production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV at the LHC. The measurement is performed with the ALICE detector at backward ($-4.46< y_{{\rm ...
Elliptic flow of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Springer, 2015-06-29)
The elliptic flow coefficient ($v_{2}$) of identified particles in Pb--Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV was measured with the ALICE detector at the LHC. The results were obtained with the Scalar Product ...
Measurement of electrons from semileptonic heavy-flavor hadron decays in pp collisions at $\sqrt{s}$ = 2.76 TeV
(American Physical Society, 2015-01-07)
The pT-differential production cross section of electrons from semileptonic decays of heavy-flavor hadrons has been measured at midrapidity in proton-proton collisions at $\sqrt{s}=2.76$ TeV in the transverse momentum range ...
Multiplicity dependence of jet-like two-particle correlations in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2015-02-04)
Two-particle angular correlations between unidentified charged trigger and associated particles are measured by the ALICE detector in p–Pb collisions at a nucleon–nucleon centre-of-mass energy of 5.02 TeV. The transverse-momentum ... |
Let $\phi(n)$ be Euler's totient function. For $n=4m$ what is the smallest $n$ for which
$$\phi(n) \ne \phi(k) \textrm{ for any } k<n \textrm{ ?} \quad (1)$$
When $n=4m+1$ and $n>1$ the smallest is $n=5$ and when $n=4m+3$ it's $n=3.$ However, when $n=4m+2$ the above condition can never be satisfied since
$$\phi(4m+2) = (4m+2) \prod_{p | 4m+2} \left( 1 - \frac{1}{p} \right)$$ $$ = (2m+1) \prod_{p | 2m+1} \left( 1 - \frac{1}{p} \right) = \phi(2m+1).$$
In the case $n=4m,$ $n=2^{33}$ is a candidate and $\phi(2^{33})=2^{32}.$ This value satisfies $(1)$ because $\phi(n)$ is a power of $2$ precisely when $n$ is the product of a power of $2$ and any number of distinct Fermat primes:
$$2^1+1,2^2+1,2^4+1,2^8+1 \textrm{ and } 2^{16}+1.$$
Note that $n=2^{32}$ does not satisfy condition $(1)$ because the product of the above Fermat primes is $2^{32}-1$ and so $\phi(2^{32})=2^{31}=\phi(2^{32}-1)$ and $2^{32}-1 < 2^{32}.$
The only solutions to $\phi(n)=2^{32}$ are given by numbers of the form $n=2^a \prod (2^{x_i}+1)$ where $x_i \in \lbrace 1,2,4,8,16 \rbrace $ and $a+ \sum x_i = 33$ (note that the product could be empty), so all these numbers are necessarily $ \ge 2^{33}.$
Why don't many "small" multiples of $4$ satisfy condition $(1)$? Well, note that for $n=2^a(2m+1)$ we have
$$\phi(2^a(2m+1))= 2^a(2m+1) \prod_{p | 2^a(2m+1)} \left( 1 - \frac{1}{p} \right)$$ $$ = 2^{a-1}(2m+1) \prod_{p | 2m+1} \left( 1 - \frac{1}{p} \right) = 2^{a-1}\phi(2m+1),$$
and so, for $a \ge 2,$ if $2^{a-1}\phi(2m+1)+1$ is prime we can take this as our value of $k<n$ and we have $\phi(n)=\phi(k).$ This, together with the existence of the Fermat primes, seems to be why it's difficult to satisfy when $n=4m.$
I have only made hand calculations so far, so I would not be too surprised if the answer is much smaller than my suggestion. The problem is well within the reach of a computer, and possibly further analysis without the aid of a computer. But, anyway, I've decided to ask here as many of you have ready access to good mathematical software and I'm very intrigued to know whether there is a smaller solution than $2^{33}.$
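The small cases discussed above are easy to verify by brute force. The following sketch computes the totient by trial factorization; the helper name `smallest_new_totient` is hypothetical:

```python
# Brute-force check of the small cases: smallest n in a residue class mod 4
# such that phi(n) differs from phi(k) for every k < n.
def phi(n):
    """Euler's totient by trial factorization."""
    result, p, m = 1, 2, n
    while p * p <= m:
        if m % p == 0:
            pk = 1
            while m % p == 0:
                m //= p
                pk *= p
            result *= pk - pk // p   # phi(p^k) = p^k - p^(k-1)
        p += 1
    if m > 1:
        result *= m - 1              # leftover prime factor
    return result

def smallest_new_totient(residue, modulus, limit):
    """Smallest n > 1 with n = residue (mod modulus) and phi(n) != phi(k) for all k < n."""
    seen = set()
    for n in range(1, limit):
        v = phi(n)
        if n % modulus == residue and n > 1 and v not in seen:
            return n
        seen.add(v)
    return None

print(smallest_new_totient(1, 4, 100))  # 5, as claimed for n = 4m+1
print(smallest_new_totient(3, 4, 100))  # 3, as claimed for n = 4m+3

# phi(4m+2) = phi(2m+1), so n = 4m+2 can never satisfy condition (1):
assert all(phi(4 * m + 2) == phi(2 * m + 1) for m in range(1, 200))
```

Running the same search for `residue = 0` with a much larger limit is the computation the question asks for.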
Some background information:
This question arose in my search to bound the function $\Phi(n)$ defined as follows.
Let $\Phi(n)$ be the number of distinct values taken on by $\phi(k)$ for $1 \le k \le n.$ For example, $\Phi(13)=6$ since $\phi(k)$ takes on the values $\lbrace 1,2,4,6,10,12 \rbrace$ for $1 \le k \le 13.$
It is clear that $\Phi(n)$ is increasing and increases by $1$ at each prime value of $n,$ except $n=2,$ but it also increases at other values as well. For example, $\Phi(14)=6$ and $\Phi(15)=7.$
Currently, for an upper bound, I'm hoping to do better than $\Phi(n) \le \lfloor (n+1)/2 \rfloor .$
But this is not the issue at the moment, although it may well become a separate question.
This work originates from this stackexchange problem. |
Difference between revisions of "Cole equation of state"
m (Slight rewrital)
m (minor syntax)
Line 5:
:<math>p = B \left[ \left( \frac{\rho}{\rho_0} \right)^\gamma -1 \right]</math>
In it, <math>\rho_0</math> is a reference density around which the density varies,
<math>\gamma</math> is the [[Heat capacity#Adiabatic index | adiabatic index]] and <math>B</math> is a pressure parameter.
Usually, the equation is used to model a nearly incompressible system. In this case,
Line 21:
Therefore, if <math>B=100 \rho_0 v^2 / \gamma</math>, the relative density fluctuations
− will be
+ will be about 0.01.
Revision as of 13:54, 17 October 2012
:<math>p = B \left[ \left( \frac{\rho}{\rho_0} \right)^\gamma -1 \right]</math>
In it, <math>\rho_0</math> is a reference density around which the density varies, <math>\gamma</math> is the [[Heat capacity#Adiabatic index | adiabatic index]], and <math>B</math> is a pressure parameter.
Usually, the equation is used to model a nearly incompressible system. In this case, the exponent <math>\gamma</math> is often set to a value of 7, and <math>B</math> is large, in the following sense. The fluctuations of the density are related to the speed of sound as
:<math>\frac{|\delta\rho|}{\rho_0} \sim \frac{v^2}{c^2}, \qquad c^2 = \frac{\gamma B}{\rho_0}.</math>
Therefore, if <math>B=100 \rho_0 v^2 / \gamma</math>, the relative density fluctuations will be about 0.01.
If the fluctuations in the density are indeed small, the equation of state may be approximated by the simpler:
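A numeric sketch of the estimate above, assuming the squared sound speed at the reference density is $c^2 = \left.dp/d\rho\right|_{\rho_0} = \gamma B/\rho_0$ and that relative density fluctuations scale as $v^2/c^2$ (all example values are arbitrary):

```python
# Sound speed and density-fluctuation estimate for the Cole equation of state.
rho0 = 1000.0   # reference density (example value)
v = 2.0         # typical flow speed (example value)
gamma = 7       # usual exponent for nearly incompressible systems

B = 100.0 * rho0 * v**2 / gamma   # the choice of B discussed in the article

def pressure(rho):
    """Cole equation of state: p = B * ((rho/rho0)^gamma - 1)."""
    return B * ((rho / rho0) ** gamma - 1.0)

c2 = gamma * B / rho0   # squared speed of sound at rho = rho0
print(v**2 / c2)        # relative density fluctuation, ~0.01 by construction
```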
I'm asked to find if the fixed-point iteration
$$x_{k+1} = g(x_k)$$
converges for the
fixed points of the function $$g(x) = x^2 + \frac{3}{16}$$ which I found to be $\frac{1}{4}$ and $\frac{3}{4}$.
In this short video by Wen Shen,
it's explained how to find these fixed points and to see if a fixed-point iteration converges. My doubt is related to finding whether a fixed-point iteration converges for a certain fixed point.
At more or less half of the video, she comes up with the following relation for the error
$$e_{k+1} = |g'(\alpha)| e_k$$
where $\alpha \in (x_k, r)$, by the mean value theorem, and because $g$ is continuous and differentiable.
If $|g'(\alpha)| < 1$, then the fixed-point iteration converges.
I think I agree with this last statement, but when she tries to see if the fixed-point iteration converges for a certain root of a certain function, she simply finds the derivative of that function and plugs the root into it.
I don't understand why this is equivalent to $$e_{k+1} = |g'(\alpha)| e_k$$
Can someone fire up some light on my brain? (lol, can I say this?) |
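A short sketch of the criterion applied to this $g$: since $g'(x) = 2x$, we get $|g'(1/4)| = 1/2 < 1$ (the iteration converges near $1/4$) while $|g'(3/4)| = 3/2 > 1$ (it diverges from $3/4$):

```python
# Fixed-point iteration x_{k+1} = g(x_k) for g(x) = x^2 + 3/16.
g = lambda x: x * x + 3.0 / 16.0
dg = lambda x: 2.0 * x   # g'(x)

# |g'(1/4)| = 0.5 < 1 -> 1/4 attracts; |g'(3/4)| = 1.5 > 1 -> 3/4 repels
print(abs(dg(0.25)), abs(dg(0.75)))

x = 0.5                  # start between the two fixed points
for _ in range(60):
    x = g(x)
print(x)                 # converges to 0.25, not to 0.75
```

Each step roughly multiplies the error by $|g'(\alpha)| \approx 0.5$ near $1/4$, which is exactly the error relation $e_{k+1} = |g'(\alpha)|\, e_k$ from the video.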
This form is taken from a talk by Seiberg that I was listening to.
Take the Kahler potential ($K$) and the supersymmetric potential ($W$) as,
$K = \vert X\vert ^2 + \vert \phi _1 \vert ^2 + \vert \phi_2\vert ^2 $
$W = fX + m\phi_1 \phi_2 + \frac{h}{2}X\phi_1 ^2 $
This notation looks a bit confusing to me. Are the fields $X$, $\phi_1$ and $\phi_2$ real or complex? The form of $K$ seems to suggest that they are complex - since I would be inclined to read $\vert \psi \vert ^2$ as $\psi ^* \psi$ - but then the form of $W$ looks misleading - it seems that $W$ could be complex. Is that okay?
Now he looks at the potential $V$ defined as $V = \frac{\partial ^2 K}{\partial \psi_m \partial \psi_n} \left ( \frac {\partial W}{\partial \psi_m} \right )^* \frac {\partial W}{\partial \psi_n}$
(where $\psi_m$ and $\psi_n$ run over all the fields in the theory)
For this case this will give, $V = \vert \frac{h}{2}\phi_1^2 + f\vert ^2 + \vert m\phi_1 \vert ^2 + \vert hX\phi_1 + m\phi_2 \vert ^2 $
Though for the last term Seiberg seemed to have a "-" sign, as $\vert hX\phi_1 - m\phi_2 \vert ^2$, which I could not understand.
I think the first point he was making is that it is clear by looking at the above expression for $V$ that it can't go to $0$ anywhere and hence supersymmetry is not broken at any value of the fields.
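A numeric sketch of this F-term computation for the canonical Kähler potential above, treating the couplings as example numbers (the function name and values are hypothetical; the derivatives of the holomorphic $W$ are written out by hand):

```python
# F-term potential V = sum_n |dW/dpsi_n|^2 for
# W = f X + m phi1 phi2 + (h/2) X phi1^2, canonical Kahler potential.
f, m, h = 0.7, 1.3, 0.9   # example couplings (arbitrary)

def V(X, p1, p2):
    dW_dX  = f + 0.5 * h * p1**2     # dW/dX
    dW_dp1 = m * p2 + h * X * p1     # dW/dphi1
    dW_dp2 = m * p1                  # dW/dphi2
    return abs(dW_dX)**2 + abs(dW_dp1)**2 + abs(dW_dp2)**2

# V is strictly positive everywhere: the first term vanishes only if
# phi1^2 = -2f/h, but then |m phi1|^2 > 0, so V > 0 and SUSY is broken.
print(V(0, 0, 0))           # equals f^2 at the origin
print(V(1 + 2j, 0.3j, -1))  # positive at a generic complex point
```

Note the fields are complex here, consistent with reading $\vert\psi\vert^2$ as $\psi^*\psi$ in $K$; the mixed term comes out with a "+" sign in this sketch.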
I would like to hear of some discussion as to why this particular function $V$ is important for the analysis - after all this is one among several terms that will appear in the Lagrangian with this Kahler potential and the supersymmetry potential.
He seemed to say that if *"$\phi_1$ and $\phi_2$ are integrated out then in terms of the massless field $X$ the potential is just $f^2$"* - I would be glad if someone can elaborate on the calculation that he is referring to - I would naively think that in the limit of $h$ and $m$ going to $0$ the potential looks like just $f^2$.
With reference to the above case when the potential is just $f^2$ he seemed to be referring to the case when $\phi_2 = -\frac{hX\phi_1}{m}$. I could not get the significance of this. The equations of motion from this $V$ are clearly much more complicated.
He said that one can work out the spectrum of the field theory by
"diagonalizing the small fluctuations" - what did he mean? Was he meaning to drop all terms cubic or higher in the fields $\phi_1, \phi_2, X$? In that case, what would the "mass matrix" be defined as?
The confusion arises because of the initial doubt about whether the fields are real or complex. It seems that $V$ will have terms like $\phi^*\phi^*$ and $\phi \phi$ and also a constant term $f^2$ - these features are confusing me as to what diagonalizing will mean.
Normally with complex fields, say $\psi_i$, the "mass-matrix" would be defined as the $M$ in the terms $\psi_i ^* M_{ij}\psi_j$. But here I can't see that structure!
The point he wanted to make is that once the mass-matrix is diagonalized it will have the same number of bosonic and fermionic masses and also the super-trace of its square will be $0$ - I can't see from where will fermionic masses come here!
If the mass-matrix is $M$ then he seemed to claim - almost magically out of the top of his hat! - that the 1-loop effective action is $\frac{1}{64\pi^2} {\rm STr} \left ( M^4 \log \frac{M^2}{M_{\rm cutoff}^2} \right )$ - he seemed to be saying that it follows from something else and he didn't need to do any loop calculation for that!
I would be glad if someone can help with these.
First, let $m$,$n$ be coprime.
Suppose we want to find:
$x\equiv y\text{ mod }m$
$x\equiv z\text{ mod }n$
As $m,n$ are coprime, $\exists a,b\in\mathbb{Z}:am+bn=1$ (Bézout's identity; $\gcd(m,n)=1$, basically)
Then, for $y\in\mathbb{Z}$ we have $ybn\equiv y\text{ mod }m$ and $ybn \equiv 0\text{ mod }n$
Then, for $z\in\mathbb{Z}$ we have $zam\equiv z\text{ mod }n$ and $zam \equiv 0\text{ mod }m$
Thus $$ybn+zam\equiv y\text{ mod }m$$ and $$ybn+zam\equiv z\text{ mod }n$$
I am happy with this. I am then asked to find $x$ where $x\equiv 2\text{ mod }7$ and $x\equiv -2\text{ mod }11$. This is easy: you just choose $y,b,n,z,a,m$ so the forms match, which explains why the first part wanted an explicit solution to $7a+11b=1$
The actual question
Part 2 is different, it wants me to give an explicit solution to $$77a+13b=1$$ (easy enough, $a=-1,b=6$ works) and to find $k$ such that:
$k\equiv 2\text{ mod }7$
$k\equiv -2\text{ mod }11$
$k\equiv -1\text{ mod }13$
I'm not sure how to do this in the way I'd like to / the question wants. I may not simply say $k\equiv \text{(something) mod }77$, but $77 = 7\times 11$, and the last part was about a 7 and an 11!
How do I do this in the spirit of the question?
I have tagged this as "abstract algebra" and "ring theory" because it follows on from some work about defining a bijection between rings that are cyclic groups of coprime orders. I have titled it Chinese Remainder Theorem because that theorem can be phrased as "$\mathbb{Z}_{mn}$ and $\mathbb{Z}_m\times\mathbb{Z}_n$ are isomorphic if and only if $m, n$ are coprime", and then it finds them.
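For what it's worth, the two-step construction "in the spirit of the question" can be carried out mechanically: first glue the mod-7 and mod-11 congruences into a single mod-77 congruence using $7a+11b=1$, then glue that with the mod-13 congruence using $77a+13b=1$. A sketch (the Bézout coefficients are found by brute force here purely for transparency):

```python
def bezout(m, n):
    """Find (a, b) with a*m + b*n == 1, assuming gcd(m, n) == 1."""
    for a in range(-n, n + 1):
        if (1 - a * m) % n == 0:
            return a, (1 - a * m) // n
    raise ValueError("m and n are not coprime")

def glue(y, m, z, n):
    """Return x mod m*n with x = y (mod m) and x = z (mod n),
    via x = y*b*n + z*a*m as in the construction above."""
    a, b = bezout(m, n)
    return (y * b * n + z * a * m) % (m * n)

x = glue(2, 7, -2, 11)   # first part: x = 2 (mod 7), x = -2 (mod 11)
k = glue(x, 77, -1, 13)  # second part, using 77a + 13b = 1
print(x, k)              # -> 9 779
```

So $x\equiv 9 \pmod{77}$, and gluing that with $k\equiv -1\pmod{13}$ gives $k\equiv 779\pmod{1001}$.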
I'm playing with the electromagnetic field tensor. Heard about it? Yes, it's the very tensor that, well, pretty much makes Maxwell's equations redundant.
We start with an arbitrary smooth vector field, $A^\mu$, in four dimensions, one for time, three for space. In the language of differential forms, this vector field is also called a 1-form. (Well, strictly speaking, it's the dual vector field, defined as $A_\mu=g_{\mu\nu}A^\nu$, that can serve as a 1-form, but that in no way detracts from the argument presented here.) To make a 2-form from the 1-form, we can apply the derivative operator:
\[d{\bf\mathrm{A}}=\partial_\mu A_\nu-\partial_\nu A_\mu=F_{\mu\nu}= \begin{pmatrix} 0&\partial_1A_0-\partial_0A_1&\partial_2A_0-\partial_0A_2&\partial_3A_0-\partial_0A_3\\ \partial_0A_1-\partial_1A_0&0&\partial_2A_1-\partial_1A_2&\partial_3A_1-\partial_1A_3\\ \partial_0A_2-\partial_2A_0&\partial_1A_2-\partial_2A_1&0&\partial_3A_2-\partial_2A_3\\ \partial_0A_3-\partial_3A_0&\partial_1A_3-\partial_3A_1&\partial_2A_3-\partial_3A_2&0 \end{pmatrix}=g_{\mu\kappa}g_{\nu\lambda}F^{\kappa\lambda}.\]
This tensor is antisymmetric, with 6 independent components. Let's label these components:
\[\begin{pmatrix} 0&E^1&E^2&E^3\\ -E^1&0&-B^3&B^2\\ -E^2&B^3&0&-B^1\\ -E^3&-B^2&B^1&0 \end{pmatrix}\]
Looks familiar? Of course through these labels, we defined the components of the electric and magnetic field.
What can we do with this tensor now? Why, we can apply the differential operator to it one more time. There are two ways to do so: we can compute the interior and the exterior derivative. Let's start with the interior derivative:
\[\partial_\nu F^{\mu\nu}=\begin{pmatrix} \partial_1E^1+\partial_2E^2+\partial_3E^3\\ -\partial_0E^1+\partial_3B^2-\partial_2B^3\\ -\partial_0E^2+\partial_1B^3-\partial_3B^1\\ -\partial_0E^3+\partial_2B^1-\partial_1B^2 \end{pmatrix}=\begin{pmatrix}\nabla\cdot{\bf\mathrm{E}}\\~\\\nabla\times{\bf\mathrm{B}}-\partial{\bf\mathrm{E}}/\partial t\\~\end{pmatrix}=\begin{pmatrix}\rho\\~\\{\bf\mathrm{j}}\\~\end{pmatrix}.\]
Yes, this is just the charge density and current. Computing the inner product with the derivative operator one more time tells us why these quantities are special: it is easy to check that the continuity equation applies, i.e., $\partial_\mu\partial_\nu F^{\mu\nu}=\partial\rho/\partial t+\nabla\cdot\vec{j}=0$, meaning that charge is conserved.
With the exterior derivative things get just as interesting, because we know from the calculus of differential forms that repeated application of the exterior derivative produces a null result, i.e., ${\rm d}^2A=0$. Can this be used to extract new and useful identities? Let's see. ${\rm d}^2A={\rm d}F$, which is a 4×4×4 totally antisymmetric matrix. Let's call it $M$. It has only four independent components:
\begin{align}M^{012}&=\partial_0F^{12}-\partial_0F^{21}+\partial_1F^{20}-\partial_1F^{02}+\partial_2F^{01}-\partial_2F^{10}=2(-\partial_0B^3-\partial_1E^2+\partial_2E^1),\\ M^{013}&=\partial_0F^{13}-\partial_0F^{31}+\partial_1F^{30}-\partial_1F^{03}+\partial_3F^{01}-\partial_3F^{10}=2(\partial_0B^2-\partial_1E^3+\partial_3E^1),\\ M^{023}&=\partial_0F^{23}-\partial_0F^{32}+\partial_2F^{30}-\partial_2F^{03}+\partial_3F^{02}-\partial_3F^{20}=2(-\partial_0B^1-\partial_2E^3+\partial_3E^2),\\ M^{123}&=\partial_1F^{23}-\partial_1F^{32}+\partial_2F^{31}-\partial_2F^{13}+\partial_3F^{12}-\partial_3F^{21}=2(-\partial_1B^1-\partial_2B^2-\partial_3B^3). \end{align}
All these should be identically zero of course. Now is the time to notice that $M^{012}=0$, $-M^{013}=0$, $M^{023}=0$ together express the following equation:
\[-\frac{\partial{\bf\mathrm{B}}}{\partial t}+\nabla\times{\bf\mathrm{E}}=0,\]
while the fourth equation translates into this:
\[\nabla\cdot{\bf\mathrm{B}}=0.\]
So what have we got? Basically we discovered that the first two of Maxwell's equations are merely the defining equations of the charge density and current, whereas the second pair are identities that hold for all smooth vector fields $A^\mu$.
In sum, the theory of electromagnetism is really just the geometric theory of an arbitrary smooth vector field.
As an added bonus, we can discover something new. Neither the charge density and current, nor the electric and magnetic fields, would change if we were to modify $A$ by adding to it the gradient of a scalar field: $A\rightarrow A+\nabla f$. That is because a scalar field is nothing but a 0-form; its gradient is a 1-form; and when the derivative operator is applied a second time to form $F^{\mu\nu}$, the contribution of $f$ is identically zero (${\rm d}(A + {\rm d}f)={\rm d}A+{\rm d}^2f={\rm d}A$), so the expression for the electromagnetic field tensor will not change.
Ah, and one more thing. The way the exterior derivative is defined, were you to compute it in curved space, all Christoffel-symbols would drop out automatically. In other words, the values you get are independent of the differential operator you use: in particular, you are allowed to use the ordinary differential operator. The physical significance of this is that in the case of the electromagnetic field, both the equations used to define $\vec{E}$ and $\vec{B}$ and the field equations will automatically be satisfied in curved spacetime, under the conditions of general relativity.
Indeed, I believe this, namely that their definition is not dependent on the geometry of the underlying manifold, is one of the major strengths of differential forms.
One other observation that is evident from the above is worth mentioning. It concerns the somewhat arbitrary relabeling of the components of $F^{\mu\nu}$. Though I chose to call $\vec{E}$ and $\vec{B}$ vectors, it should be evident that they decidedly do NOT transform as 4-vectors under a change of coordinates. Indeed, it is possible at any point in space to choose a coordinate system in which either $\vec{E}$ or $\vec{B}$ (but usually not both) vanishes altogether. I have often seen $\vec{E}$ and $\vec{B}$ described as "real" physical quantities, as opposed to $A$, which is not "real" because it is undetermined to the extent that you can add $\nabla f$ to it. But what we see here suggests to me that $\vec{E}$ and $\vec{B}$ are far less "real" than $A$! (Arguably, though, the electromagnetic field tensor $F$ is real in the sense that it is a "proper" tensorial quantity.)
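The claim that the homogeneous pair of Maxwell's equations holds identically for any smooth $A_\mu$ is easy to verify symbolically. The sketch below (assuming `sympy` is available) builds $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$ from four completely arbitrary component functions and checks that every independent component of ${\rm d}F$ vanishes, simply because mixed partial derivatives commute:

```python
import sympy as sp
from itertools import combinations

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)

# four arbitrary smooth component functions of the 1-form A
A = [sp.Function(f'A{mu}')(*coords) for mu in range(4)]

# F_{mu nu} = d_mu A_nu - d_nu A_mu  (antisymmetric by construction)
F = [[sp.diff(A[nu], coords[mu]) - sp.diff(A[mu], coords[nu])
      for nu in range(4)] for mu in range(4)]

# dF: the cyclic sum d_mu F_{nu rho} + d_nu F_{rho mu} + d_rho F_{mu nu}
# must vanish identically, since partial derivatives commute.
for mu, nu, rho in combinations(range(4), 3):
    M = (sp.diff(F[nu][rho], coords[mu])
         + sp.diff(F[rho][mu], coords[nu])
         + sp.diff(F[mu][nu], coords[rho]))
    assert sp.simplify(M) == 0
print("d^2 A = 0 for arbitrary smooth A")
```

No property of $A$ beyond smoothness is used, which is exactly the point made above: the second pair of Maxwell's equations are identities, not dynamical laws.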
Is the language $L = \{ a^ib^j \mid i\ \nmid\ j \ \} $ context free ?
If we fix $n \in N$ then we know that the language $L = \{ a^ib^j \mid \ \forall \ 1 \le k \le n \ , \ \ j\neq ki \} $ is context free (as it can be presented as a finite union of context free languages in a similar way to the example here: Is $L= \{ a^ib^j \mid j\neq i \ and \ j\neq2i \ \} $ context free?)
I think that it's not context free but have failed to prove it. By reading other questions on this site I noticed this interesting observation: CFL's in $a^*b^*$ are closed under complement as can be seen here: Are context-free languages in $a^*b^*$ closed under complement?
So our language $L$ is context free if and only if $ \bar L = \{ a^ib^j \mid \ \ i\ \mid\ j \ \} $ is context free. I tried using the pumping lemma but to no avail.
Thanks in advance
This is essentially a variation on the answer of denesp that requires slightly fewer assumptions.
Assume there are $l$ commodities and $m$ agents. An allocation is then a point in $\mathbb{R}^{lm}_+$. If the aggregate endowment is $e\in\mathbb{R}^l_+$, an allocation is a point in $\sum^{-1}(\{e\})$, where $\sum:\mathbb{R}^{lm}_+\to\mathbb{R}^l$ is the continuous "summation function". Since this function is continuous and the set $\{e\}\subseteq\mathbb{R}^l$ is closed, the space of allocations is closed too. It is also clearly bounded, so the space of allocations is compact. Let $A\subseteq\mathbb{R}^{lm}_+$ be the nonempty compact space of feasible allocations.
Define the relation $\succeq$ on $A$ such that for allocations $x=(x_1,\ldots,x_m)$ and $y=(y_1,\ldots,y_m)$, we have $x\succeq y$ if and only if $x_i\succeq_i y_i$ for every agent $i$. Now $x^*\in A$ is a maximal Pareto improvement over $x$ exactly if $x^*\succeq x$ and there is no $y\in A$ such that $y\succeq x^*$ but not $x^*\succeq y$.
Assume now that for all $a\in\mathbb{R}^l_+$ and every agent $i$, the "weakly-better-set" $\{b\in\mathbb{R}^l\mid b\succeq_i a\}$ is closed. Then the set $A_x=\{y\in A\mid y\succeq x\}$ is closed and, as the closed subset of a compact set, compact. Our problem reduces to showing that there exists a $\succeq$-maximal element $x^*\in A_x$.
Let $\succ$ be the asymmetric part of $\succeq$. It is transitive and irreflexive and therefore acyclic. Also, the "upper sections" of $\succeq$ are closed, and therefore the lower sections of $\succ$ are open. The existence of a $\succ$-maximal element then follows from what is sometimes referred to as the Walker-Bergstrom theorem (first proven by Sloss....). For the sake of completeness, I give the easy proof here.
Let $L_z=\{y\in A_x\mid y\prec z\}$ be the lower section of $\succ$ at $z$. Assume for the sake of contradiction that there is no $\succ$-maximal element in $A_x$. Then every point in $A_x$ lies in some $L_z$ with $z\in A_x$. Also, the $L_z$ are relatively open in the compact space $A_x$. So $\{L_z\mid z\in A_x\}$ is an open cover of $A_x$ and, by compactness, there is a finite set $F\subseteq A_x$ such that $\{L_z\mid z\in F\}$ is still an open cover of $A_x$. In particular, for each $z\in F$, there is some $z'\in F$ such that $z\in L_{z'}$ or, equivalently, $z'\succ z$. So the relation $\succ$ has no maximal element on the finite set $F$. This means there exists an infinite sequence $\langle z_n\rangle$ in $F$ such that $z_{n+1}\succ z_n$ for all $n$. Since $\succ$ is acyclic, the sequence consists of infinitely many distinct elements. Since $F$ is finite, this is impossible.
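The finite-set step at the heart of this argument (an acyclic strict relation on a finite set has a maximal element) can also be seen constructively: keep replacing the current point by any strictly better one; since a repeat would create a cycle, the process must stop, and it stops precisely at a maximal element. A toy sketch of my own, using strict Pareto dominance on utility profiles (the example data is made up):

```python
def maximal_element(points, better):
    """Greedy search for a maximal element of a strict relation `better`
    on a finite list. Terminates whenever `better` has no cycles, since
    each step moves to a point that cannot have been visited before."""
    current = points[0]
    improved = True
    while improved:
        improved = False
        for y in points:
            if better(y, current):
                current, improved = y, True
                break
    return current

def pareto_better(a, b):
    """a strictly Pareto-dominates b: at least as good for every agent
    and different (hence strictly better for at least one agent)."""
    return all(ai >= bi for ai, bi in zip(a, b)) and a != b

# utility profiles of feasible allocations for two agents (made-up data)
allocations = [(1, 1), (2, 1), (1, 3), (2, 2), (0, 4)]
best = maximal_element(allocations, pareto_better)
print(best)   # -> (2, 2)
```

The compactness argument in the answer plays the role that finiteness plays here: it reduces the infinite problem to a finite open cover, where this elementary search succeeds.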
Forgive me if this topic is too much in the realm of philosophy. John Baez has an interesting perspective on the relative importance of dimensionless constants, which he calls fundamental like alpha, versus dimensioned constants like $G$ or $c$ [ http://math.ucr.edu/home/baez/constants.html ]. What is the relative importance or significance of one class versus the other and is this an area that physicists have real concerns or expend significant research?
First of all, the question you are asking is very important and you may master it completely.
Dimensionful constants are those that have units - like $c, \hbar, G$, or even $k_{\rm Boltzmann}$ or $\epsilon_0$ in SI. The units - such as meter, kilogram, second, Ampere, kelvin - have been chosen partially arbitrarily. They're results of random cultural accidents in the history of mankind. A second was originally chosen as 1/86,400 of a solar day, one meter as 1/40,000,000 of the average meridian, one kilogram as the mass of 1/1,000 cubic meters (a liter) of water or later the mass of a randomly chosen prototype, one Ampere so that $4\pi \epsilon_0 c^2$ is a simple power of 10 in SI units, one kelvin as 1/100 of the difference between the melting and boiling points of water.
Clearly, the circumference of the Earth, the solar day, a platinum prototype brick in a French castle, or phase transitions of water are not among the most "fundamental" features of the Universe. There are lots of other ways how the units could be chosen. Someone could choose 1.75 meters - an average man's height - to be his unit of length (some weird people in the history have even used their feet to measure distances) and he could still call it "one meter". It would be his meter. In those units, the numerical values of the speed of light would be different.
Exactly those products or ratios of powers of fundamental constants that are dimensionless - i.e., that don't have any units - are, by definition, independent of all the random cultural choices of the units. So all civilizations in the Universe - despite the absence of any interactions between them in the past - will agree about the numerical value of the proton-electron mass ratio - which is about $6\pi^5\approx 1836.15$ (the formula is just a teaser I noticed when I was 10!) - and about the fine-structure constant, $\alpha\sim 1/137.036$, and so on.
In the Standard Model of particle physics, there are about 19 such dimensionless parameters that "really" determine the character of physics; all other constants such as $\hbar,c,G,k_{\rm Boltzmann}, \epsilon_0$ depend on the choice of units, and the number of independent units (meter, kilogram, second, Ampere, Kelvin) is actually exactly large enough that all those constants, $\hbar,c,G,k_{\rm Boltzmann},\epsilon_0$, may be set equal to one which simplifies all fundamental equations in physics where these fundamental constants appear frequently. By changing the value of $c$, one only changes social conventions (what the units mean), not the laws of physics.
The units where all these constants are numerically equal to 1 are called the Planck units or natural units, and Max Planck understood that this was the most natural choice already 100 years ago. $c=1$ is being set in any "mature" analysis that involves special relativity; $\hbar=1$ is used everywhere in "adult" quantum mechanics; $G=1$ or $8\pi G=1$ is sometimes used in the research of gravity; $k_{\rm Boltzmann}=1$ is used whenever thermal phenomena are studied microscopically, at a professional level; $4\pi\epsilon_0$ is just an annoying factor that may be set to one (and in Gaussian 19th century units, such things are actually set to one, with a different treatment of the $4\pi$ factor); instead of one mole in chemistry, physicists (researchers in a more fundamental discipline) simply count the molecules or atoms and they know that a mole is just a package of $6.022\times 10^{23}$ atoms or molecules.
The 19 (or 20?) actual dimensionless parameters of the Standard Model may be classified as: the three fine-structure constants $g_1,g_2,g_3$ of the $U(1)\times SU(2)\times SU(3)$ gauge group; the Higgs vacuum expectation value divided by the Planck mass (the only thing that brings in a mass scale, and this mass scale only distinguishes different theories once we also take gravity into account); and the Yukawa couplings with the Higgs that determine the quark and lepton masses and their mixing. One should also count the strong CP-angle of QCD and a few others.
Once you choose a modified Standard Model that appreciates that the neutrinos are massive and oscillate, 19 is lifted to about 30. New physics of course inflates the number. SUSY described by soft SUSY breaking has about 105 parameters in the minimal model.
The original 19 parameters of the Standard Model may be expressed in terms of more "fundamental" parameters. For example, $\alpha$ of electromagnetism is not terribly fundamental in high-energy physics because electromagnetism and weak interactions get unified at higher energies, so it's more natural to calculate $\alpha$ from $g_1,g_2$ of the $U(1)\times SU(2)$ gauge group. Also, these couplings $g_1,g_2$ and $g_3$ run - depend on the energy scale approximately logarithmically. The values such as $1/137$ for the fine-structure constant are the low-energy values, but the high-energy values are actually more fundamental because the fundamental laws of physics are those that describe very short-distance physics while long-distance (low-energy) physics is derived from that.
I mentioned that the number of dimensionless parameters increases if you add new physics such as SUSY with soft breaking. However, more complete, unifying theories - such as grand unified theories and especially string theory - also imply various relations between the previously independent constants, so they reduce the number of independent dimensionless parameters of the Universe. Grand unified theories basically set $g_1=g_2=g_3$ (with the right factor of $\sqrt{3/5}$ added to $g_1$) at their characteristic "GUT" energy scale; they may also relate certain Yukawa couplings.
String theory is perfectionist in this job. In principle, all dimensionless continuous constants may be calculated from any stabilized string vacuum - so all continuous uncertainty may be removed by string theory; one may actually prove that it is the case. There is nothing to continuously adjust in string theory. However, string theory comes with a large discrete class of stabilized vacua - which is at most countable and possibly finite but large. Still, if there are $10^{500}$ stabilized semi-realistic stringy vacua, there are only 500 digits to adjust (and then you may predict everything with any accuracy, in principle) - while the Standard Model with its 19 continuous parameters has 19 times infinity of digits to adjust according to experiments.
Only dimensionless quantities are important. They are just pure numbers and there can't be any ambiguity about their value. This is not so with dimensionful quantities. E.g. if I tell you my speed $v$ relative to you is $0.5\, \rm speedons$, that doesn't give you much information, as I have the freedom to define my $\rm speedon$ unit any way I want. The only way I can give you real information is with a dimensionless quantity like $v/c = 0.5$.
Now, what we need to make dimensionful quantities dimensionless is some reference scale (in the previous example it was $c$). We can in principle choose any scale we want, but usually it will be something from day-to-day experience. E.g. you choose the meter to be what it is so that stuff you usually encounter (other people, houses, trees, etc.) is of the order $\sim 1$ with respect to a meter. This is how all our units originated. Naturally, there's nothing particularly special about humans and the scales they usually work with. We know there are lots of important scales as we go down to atomic and nuclear sizes. We also know there is a more important speed scale (namely, the ultra-relativistic $v/c \to 1$). And so on.
Still, we need to choose some units to work with to be able to compute anything, and it would be nice to choose units that don't suffer from the above-mentioned arbitrariness. It turns out we are in luck, because Nature has given us a few special constants. Each of them is related to some fundamental theory ($c$ in special relativity, $G$ in gravity, $\hbar$ in quantum mechanics, etc.). It would be silly not to exploit this generous gift. So we can talk about speeds being 0.9 (meaning actually $v/c$), action of 20 ($=S/\hbar$), and so on. This system of units is called Planck units, and while it's not used in day-to-day life for obvious reasons, it's very useful anytime we deal with fundamental physics.
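As a concrete illustration, the natural scales Planck singled out follow directly from $\hbar$, $c$, $G$; the numerical constants below are standard CODATA-style approximations:

```python
import math

hbar = 1.054571817e-34   # J*s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2

l_planck = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
t_planck = l_planck / c                 # Planck time,   ~5.4e-44 s
m_planck = math.sqrt(hbar * c / G)      # Planck mass,   ~2.2e-8 kg

# In these units hbar = c = G = 1 by construction, so any measured
# length, time, or mass becomes a pure (dimensionless) number.
print(l_planck, t_planck, m_planck)
```

Dividing any physical quantity by the corresponding Planck scale gives exactly the kind of convention-free dimensionless number the answer is describing.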
(...) is this an area that physicists have real concerns or expend significant research?
Interestingly, Paul Dirac did some research on cosmology, based on the consideration of dimensionless combinations of numbers close to unity that are built from fundamental physical quantities. The combinations mix micro-physical quantities like the electron charge with cosmological parameters like the Hubble constant. This is an example, extracted from the Coles/Lucchin Cosmology book (Wiley, 2nd ed 2002):
$ \frac{e^{4}H_{0}}{Gm_{p}m_{e}^{2}c^{3}} \simeq 1$
Assuming the validity of this relation has interesting implications: since $H_{0}$ evolves with time, one or more of the so-called fundamental constants that appear in the equation must vary in time too. This led to some attempts to build theories with different past values of the gravitational constant.
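One can plug in present-day values to see how close the combination actually is to unity. A rough check of my own in cgs-Gaussian units (the constant values are approximate, and $H_0\approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$ is an assumed round number):

```python
# Rough numerical check of Dirac's coincidence in cgs-Gaussian units.
e   = 4.8032e-10    # electron charge, esu
H0  = 2.27e-18      # Hubble constant, 1/s  (~70 km/s/Mpc)
G   = 6.674e-8      # cm^3 g^-1 s^-2
m_p = 1.6726e-24    # g
m_e = 9.1094e-28    # g
c   = 2.9979e10     # cm/s

ratio = e**4 * H0 / (G * m_p * m_e**2 * c**3)
print(ratio)   # of order 10^-2 to 1
```

The individual factors span roughly forty orders of magnitude, so landing within a couple of orders of magnitude of 1 is the coincidence Dirac found remarkable.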
The theory is almost forgotten. It is still not fully clear whether he opened a Pandora's box of numerological speculation, or whether something with a deep, still-unveiled physical meaning is hidden there. The current explanation for these numerical coincidences(?) is the Weak Anthropic Principle, which seems to me at least as speculative and philosophical as Dirac's original idea.
Here is a link to the full text of a Dirac paper about the question, in 1974: http://www.jstor.org/discover/10.2307/78591?uid=3737952&uid=2&uid=4&sid=21101428637013
The universe can be described within a formal mathematical framework; all physical quantities can therefore be described using equations that contain only dimensionless numbers. Now, given any set of equations, you are always free to introduce scaling variables allowing you to study certain scaling limits of the theory. The universe as we experience it can be accurately described as a degenerate scaling limit that requires introducing 3 scaling variables and then taking a scaling limit in the right order. That degenerate limit is what we call "classical physics".
Since we are not exactly at the scaling limit, the scaling variables are not actually at their limiting values (infinite or zero). But to obtain classical physics exactly, you do need to send these variables to their appropriate limits. Since we started out with almost zero knowledge of the laws of physics several centuries ago, we needed to find out how the universe works by doing experiments. But since we live almost at the scaling limit, certain relations between observables are very difficult to observe (exactly at the scaling limit you can end up with singular equations, and you then lose relations between physical variables). It then looks as if a complete description of the Universe requires a few independent physical variables that cannot be related to each other.
We then developed a mathematical formalism that imposes this incompatibility via the introduction of "dimensions". When we later learned how these supposedly incompatible quantities are actually related, we found these relations with the scaling variables appearing as dimensionful constants that, when expressed in the old units, have a very large or small magnitude.
Speaking of the electron-proton mass ratio (which is about 1/1836), Lubosh found it might be connected with $\pi$, and I think it is kind of a coupling constant in the Hydrogen atom.
The atom has the center of inertia variables and internal motion variables. When an external force is applied to the atomic nucleus, the atom is accelerated as a whole and its internal motion can also be excited. The ratio $m_e/m_p$ determines efficiency of "pumping" the internal degrees of freedom of an atom with an external force acting on the nucleus.
EDIT: Seeing so many downvotes, I changed my mind. I agree with Lubosh: $m_p/m_e = 6\pi^5$ and has nothing to do with physics :-(.
A block cipher is a bijective map from the set of possible plaintexts to the set of ciphertexts, which are the same size and might as well be considered the same thing: $\theta: S\to S$. In this there must exist a fixed point $m \in S$ such that $\theta(m) = m$ (alternatively you could consider encryption to be a binary operator acting on the set $S$; then you could write encryption under key $k$ as $m \odot k = m$, with $m,k \in S$. Unfortunately this cannot form a group since there is no unit $e$ that makes $e \odot k = e$ for all $k \in S$).
It is clear that when the block size is equal to the key size, a single fixed point will exist when you hold the plaintext message constant and permute the key. The key allows you to select one of all the possible unique mappings, of which one must surely have a fixed point (am I right about my use of the word unique here?).
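Incidentally, the premise that some key *must* yield a fixed point deserves a caveat. If each key's permutation is modeled as uniformly random (the usual idealization of a good block cipher, and an assumption on my part), a given permutation has no fixed point with probability about $1/e \approx 0.37$ (the derangement fraction), so a fixed point for a held plaintext under some particular key is very likely, but not guaranteed. A quick empirical sketch with arbitrary toy parameters:

```python
import random

def has_fixed_point(perm):
    """True if the permutation maps some index to itself."""
    return any(i == p for i, p in enumerate(perm))

random.seed(0)
n, trials = 256, 5000   # toy "cipher": random permutations of 256 points
with_fp = sum(
    has_fixed_point(random.sample(range(n), n)) for _ in range(trials)
)
frac = with_fp / trials  # expected to be near 1 - 1/e = 0.632...
print(frac)
```

The fraction of permutations with at least one fixed point converges to $1-1/e$ as $n$ grows, independent of the block size.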
What I don't understand is how a key that is longer than a block size provides any extra security. From what I understand this would suggest the existence of many fixed points, or equivalently many different keys that will decrypt a ciphertext.
Note: I would also be interested to see articles that apply algebra to the study of block ciphers - specifically, constructions of groups to aid analysis that also take into account the fact that a block cipher is a composition of several round functions.
The Annals of Probability, Volume 43, Number 2 (2015), 528-571.
Planar Ising magnetization field I. Uniqueness of the critical scaling limit
Abstract
The aim of this paper is to prove the following result. Consider the critical Ising model on the rescaled grid $a\mathbb{Z}^{2}$, then the renormalized magnetization field
\[\Phi^{a}:=a^{15/8}\sum_{x\in a\mathbb{Z}^{2}}\sigma_{x}\delta_{x},\]
seen as a random distribution (i.e., generalized function) on the plane, has a unique scaling limit as the mesh size $a\searrow0$. The limiting field is conformally covariant.
Article information
Source: Ann. Probab., Volume 43, Number 2 (2015), 528-571.
First available in Project Euclid: 2 February 2015
Permanent link: https://projecteuclid.org/euclid.aop/1422885569
Digital Object Identifier: doi:10.1214/13-AOP881
Mathematical Reviews (MathSciNet): MR3305999
Zentralblatt MATH: 1332.82012
Subjects: Primary 82B20 (lattice systems: Ising, dimer, Potts, etc., and systems on graphs); 82B27 (critical phenomena); 60K35 (interacting random processes; statistical mechanics type models; percolation theory); 60G20 (generalized stochastic processes); 60G60 (random fields)
Citation:
Camia, Federico; Garban, Christophe; Newman, Charles M. Planar Ising magnetization field I. Uniqueness of the critical scaling limit. Ann. Probab. 43 (2015), no. 2, 528--571. doi:10.1214/13-AOP881. https://projecteuclid.org/euclid.aop/1422885569
Because a lot of really practical problems are the halting problem in disguise. A solution to them solves the halting problem.
You want a compiler that finds the fastest possible machine code for a given program? Actually the halting problem.
You have JavaScript, with some variables at a high security level, and some at a low security level. You want to make sure that an attacker can't get at the high security information. This is also just the halting problem.
You have a parser for your programming language. You change it, but you want to make sure it still parses all the programs it used to. Actually the halting problem.
You have an anti-virus program, and you want to see if it ever executes a malicious instruction. Actually just the halting problem.
As for the Wikipedia example, yes, you could model a modern computer as a finite-state machine. But there are two problems with this.
Every computer would be a different automaton, depending on the exact number of bits of RAM. So this isn't useful for examining a particular piece of code, since the automaton is dependent on the machine on which it can run.
You'd need $2^n$ states if you have $n$ bits of RAM. So for your modern 8GB computer, that's $2^{64{,}000{,}000{,}000}$ states. This is a number so big that Wolfram Alpha doesn't even know how to interpret it. When I do $2^{10^9}$ it says that the result has about $300{,}000{,}000$ decimal digits. This is clearly much too large to store in a normal computer.
The Halting problem lets us reason about the relative difficulty of algorithms. It lets us know that there are some algorithms that don't exist - that sometimes, all we can do is guess at a problem, and never know if we've solved it.
If we didn't have the halting problem, we would still be searching for Hilbert's magical algorithm which inputs theorems and outputs whether they're true or not. Now we know we can stop looking, and we can put our efforts into finding heuristics and second-best methods for solving these problems.
UPDATE: Just to address a couple of issues raised in the comments.
@Tyler Fleming Cloutier: The "nonsensical" problem arises in the proof that the halting problem is undecidable, but what's at the core of undecidability is really having an infinite search space. You're searching for an object with a given property, and if one doesn't exist, there's no way to know when you're done.
The difficulty of a problem can be related to the number of quantifiers it has. Trying to show that there exists ($\exists$) an object with an arbitrary property, you have to search until you find one. If none exists, there's no way (in general) to know this. Proving that all objects ($\forall$) have a property is hard, but you can search for an object without the property to disprove it. The more alternations there are between forall and exists, the harder a problem is.
For more on this, see the Arithmetic Hierarchy. Anything above $\Sigma^0_0=\Pi^0_0$ is undecidable, though level 1 is semi-decidable.
It's also possible to show that there are undecidable problems without using a nonsensical paradox like the Halting problem or Liars paradox. A Turing Machine can be encoded using a string of bits, i.e. an integer. But a problem can be encoded as a language, i.e. a subset of the integers. It's known that there is no bijection between the set of integers and the set of all subsets of the integers. So there must be some problems (languages) which don't have an associated Turing machine (algorithm).
@Brent: yes, this admits that halting is decidable for modern computers. But it's decidable only for a specific machine. If you add a USB drive with disk space, or the ability to store on a network, or anything else, then the machine has changed and the result no longer holds.
It also has to be said that there are going to be many times where the algorithm says "this code will halt" because it the code will fail and run out of memory, and that adding a single extra bit of memory would cause the code to succeed and give a different result.
The thing is, Turing machines don't have an infinite amount of memory. There's never a time where an infinite amount of symbols are written to the tape. Instead, a Turing machine has "unbounded" memory, meaning that you can keep getting more sources of memory when you need it. Computers are like this. You can add RAM, or USB sticks, or hard drives, or network storage. Yes, you run out of memory when you run out of atoms in the universe. But having unlimited memory is a much more useful model.
One can in fact use fractions in modular arithmetic, as long as one only uses fractions with denominator
coprime to the modulus. For these fractions the usual grade school arithmetic of fractions holds true. For example, let's consider your problem.
$\quad {\rm mod}\ 3n\!+\!1\!:\,\ 3n\!+\!1\equiv 0\ $ so $\ 1 \equiv 3(-n)\ $ therefore $\ \dfrac{1}3 \equiv -n \equiv 2n\!+\!1.\ $
$\quad$ In your case $\ 3n\!+\!1 = 3016\,$ so $\,n=\dfrac{3015}3 = 1005,\,$ so $\,\dfrac{1}3\equiv 2n\!+\!1 = 2011$
The notation $\,1/3\,$ means $\,3^{-1},\,$ i.e. a root of $\,3x\equiv 1\pmod{3n\!+\!1}.\,$ The inverse exists and is unique because $\,\gcd(3,3n\!+\!1)=\gcd(3,1)=1,\,$ so by Bezout's identity for the gcd we have
$\quad \text{for some } j,k\!:\ \ 3j+(3n\!+\!1)k = 1\ \Rightarrow\ {\rm mod}\ 3n\!+\!1\!:\ 3j\equiv 1\ \ {\rm so}\ \ j\equiv 3^{-1}\! \equiv 1/3$
and inverses are always unique. Hence the notation $\,1/3\, :=\, 3^{-1}\,$ is well-defined.
Remark $\ $ Generally we can use the extended Euclidean algorithm to compute modular inverses. The above is essentially an optimization for the case when it terminates in a single step, i.e. inverting $\,a\,$ modulo $\,m = an+1,\,$ i.e. when $\,a\mid m-1.$
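As a supplement (not part of the answer above), the general-case computation via the extended Euclidean algorithm can be sketched in a few lines of Python; the helper names are my own. It reproduces the worked value $1/3 \equiv 2011 \pmod{3016}$.

```python
def ext_gcd(a, b):
    """Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inv_mod(a, m):
    """Inverse of a modulo m; exists iff gcd(a, m) = 1 (Bezout's identity)."""
    g, x, _ = ext_gcd(a, m)
    if g != 1:
        raise ValueError("a is not invertible mod m")
    return x % m

print(inv_mod(3, 3016))  # 2011, matching 1/3 = 2n+1 with n = 1005
```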
I am working on the following problem
Given the solution to the Geometric Brownian Motion $$S_t=S(0)\exp\Big[(\mu-\frac{1}{2}\sigma^2)t+\sigma B_t\Big]$$ Where $\{B_t:t\geq 0\}$ is a Brownian motion.
a) Show that for $\mu>\frac{1}{2}\sigma^2$, we have that $S(t)\rightarrow\infty$ as $t\rightarrow\infty$.
b) Show that for $\mu<\frac{1}{2}\sigma^2$, we have that $S(t)\rightarrow 0$ as $t\rightarrow\infty$.
I know multiple things that will probably help me in finding the solution, but I'm not able to connect the dots. For question a), we know that the term in the exponent can be split into an exponent with a strictly positive term, and an exponent with the Brownian Motion.
I can't figure out why the exponent with the Brownian motion would go to infinity in question a), but not in question b). In other words, I cannot see why we would have that $$S(0)\exp\Big[ct+\sigma B_t\Big]\rightarrow\infty ,\quad c>0$$ but also $$S(0)\exp\Big[dt+\sigma B_t\Big]\rightarrow 0 ,\quad d<0.$$ What is the convergence behavior of Brownian Motion in the exponent?
Moreover, we know that the definition of convergence in probability of some sequence $(X_n)$ is that $$\forall\epsilon>0:\mathbb{P}(|X_n-X|>\epsilon)\rightarrow 0$$
Any tips are highly appreciated!
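Not part of the original question, but a quick simulation illustrates the key fact: $B_t/t \to 0$ almost surely, so in $\exp\big[t(c + \sigma B_t/t)\big]$ the sign of $c$ eventually decides the limit. The parameters below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 10_000.0          # final time
n = 1_000_000         # number of time steps
dt = T / n

# One Brownian path: cumulative sum of independent N(0, dt) increments
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))

# B_t / t -> 0 a.s. (strong law of large numbers for the increments),
# so exp(ct + sigma*B_t) = exp(t*(c + sigma*B_t/t)) is governed by c.
ratio = B[-1] / T
print(abs(ratio))  # small compared with any fixed c > 0
```

Here $B_T \sim N(0, T)$, so $B_T/T \sim N(0, 1/T)$ has standard deviation $0.01$ for $T = 10^4$; the drift term $ct$ dominates for any fixed $c \neq 0$.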
I have been trying for 2-3 days now to get L2 regularized logistic regression to work in Matlab (CVX) and Python (CVXPY) but no success. I am fairly new to convex optimization so I am quite frustrated. Following is the equation that I am trying to solve using CVX/CVXPY. I have taken this equation from the paper https://intentmedia.github.io/assets/2013-10-09-presenting-at-ieee-big-data/pld_js_ieee_bigdata_2013_admm.pdf
In the case of L2 regularized logistic regression the problem becomes: $$ \text{minimize} \frac{1}{m}\sum_{i=1}^{m}\log[1 + \exp(-b_i\mathbf{A}_i^Tx)] + \lambda\Vert x\Vert_2^2$$ where $\lambda$ is the regularization factor.
My Matlab (CVX) code is
function L2
m = 800; N = 5; lambda = 0.000001;
A = load('/path/to/training/file');
b = A(:,6);      % Label matrix (800x1)
A = A(:,1:5);    % Feature matrix (800x5)
cvx_begin
    variable x(N)
    minimize( (1/m * sum( log(1+ exp(-1* A' * (b * x')) ) ) ) + lambda*(norm(x,2)) )
cvx_end
CVX returns an error saying

Your objective function is not a scalar.

which makes sense, but the paper mentions the above equation. How can I solve it?
After trying on Matlab, I tried on CVXPY. Here is the python code
from cvxopt import solvers, matrix, log, exp, mul
from cvxopt.modeling import op, variable
import numpy as np

n = 5
m = 800
data = np.ndarray(shape=(m, n), dtype=float)
bArray = []
file = open('/path/to/training/file')
i = 0
j = 0
for line in file:
    for num in line.split():
        if j == 5:
            bArray.append(float(num))
        else:
            data[i][j] = num
        j = j + 1
    j = 0
    i = i + 1
A = matrix(data)
b_mat = matrix(bArray)
m, n = A.size
lamb_default = 0.000001
x = variable(n)
b = -1 * b_mat
w = exp(A.T * b * x)
f = (1/m) + sum(log(1 + w)) + lamb_default * mul(x, x)
lp1 = op(f)
lp1.solve()
lp1.status
print(lp1.objective.value())
I get the error
TypeError: incompatible dimensions
So, my question is: What am I doing wrong in the code for calculation of the L2 problem in CVX/CVXPY?
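Independent of the CVX syntax issue, the objective itself is easy to sanity-check with plain NumPy gradient descent; the synthetic data, learning rate, and iteration count below are my own illustrations, not taken from the question.

```python
import numpy as np

def l2_logreg(A, b, lam=1e-6, lr=0.1, iters=500):
    """Minimize (1/m) * sum_i log(1 + exp(-b_i * A_i . x)) + lam * ||x||_2^2
    by plain gradient descent -- an illustrative stand-in for the CVX model."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        z = -b * (A @ x)                         # per-sample margins, shape (m,)
        s = 1.0 / (1.0 + np.exp(-z))             # sigmoid(z) = d/dz log(1 + e^z)
        grad = (A.T @ (-b * s)) / m + 2.0 * lam * x
        x -= lr * grad
    return x

# Synthetic data in place of '/path/to/training/file' (labels must be +/-1)
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))
true_x = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = np.sign(A @ true_x + 0.1 * rng.standard_normal(200))
x_hat = l2_logreg(A, b)
```

Note the likely culprit in the Matlab model: `A' * (b * x')` is a 5x5 matrix, so the objective is not a scalar; the per-sample margins are `b .* (A*x)` with `x` a column vector (and in CVX, `log(1+exp(z))` must be written with a DCP-compliant atom such as `log_sum_exp`).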
The motion of ions in solution is mainly random
The conductance of an electrolytic solution results from the movement of the ions it contains as they migrate toward the appropriate electrodes. But the picture we tend to have in our minds of these ions moving in an orderly, direct march toward an electrode is wildly mistaken. The thermally-induced random motions of molecules are known as diffusion. The term migration refers specifically to the movement of ions due to an externally-applied electrostatic field.
The average thermal energy at temperatures within water's liquid range (given by RT) is sufficiently large to dominate the movement of ions even in the presence of an applied electric field. This means that the ions, together with the water molecules surrounding them, are engaged in a wild dance as they are buffeted about by thermal motions (which include Brownian motion).
If we now apply an external electric field to the solution, the chaotic motion of each ion is supplemented by an occasional jump in the direction dictated by the interaction between the ionic charge and the field. But this is really a surprisingly tiny effect:
It can be shown that in a typical electric field of 1 volt/cm, a given ion will experience only about one field-directed (non-random) jump for every 10^5 random jumps. This translates into an average migration velocity of roughly 10^-7 m sec^-1 (10^-4 mm sec^-1). Given that the radius of the H2O molecule is close to 10^-10 m, it follows that about 1000 such jumps are required to advance beyond a single solvent molecule!

The ions migrate independently
All ionic solutions contain at least two kinds of ions (a cation and an anion), but may contain others as well. In the late 1870's, the physicist Friedrich Kohlrausch noticed that the limiting equivalent conductivities of salts that share a common ion exhibit constant differences.
electrolyte   Λ0 (25°C)   difference      electrolyte   Λ0 (25°C)   difference
KCl           149.9       34.9            HCl           426.2       4.9
LiCl          115.0                       HNO3          421.1
KNO3          145.0       34.9            LiCl          115.0       4.9
LiNO3         110.1                       LiNO3         110.1
These differences represent the differences in the conductivities of the ions that are not shared between the two salts. The fact that these differences are identical for two pairs of salts such as KCl/LiCl and KNO3/LiNO3 tells us that the mobilities of the non-common ions K+ and Li+ are not affected by the accompanying anions.

Kohlrausch's law greatly simplifies estimates of Λ0
This principle is known as Kohlrausch's law of independent migration, which states that in the limit of infinite dilution, each ionic species makes a contribution to the conductivity of the solution that depends only on the nature of that particular ion, and is independent of the other ions present.

Kohlrausch's law can be expressed as

Λ0 = Σ λ0+ + Σ λ0−

This means that we can assign a limiting equivalent conductivity λ0 to each kind of ion:
cation    H3O+     NH4+    K+      Ba2+    Ag+     Ca2+    Sr2+    Mg2+     Na+       Li+
λ0        349.98   73.57   73.49   63.61   61.87   59.47   59.43   53.93    50.89     38.66

anion     OH−      SO4^2−  Br−     I−      Cl−     NO3−    ClO3−   CH3COO−  C2H5COO−  C3H7COO−
λ0        197.60   80.71   78.41   76.86   76.30   71.80   67.29   40.83    35.79     32.57
Just as a compact table of thermodynamic data enables us to predict the chemical properties of a very large number of compounds, this compilation of equivalent conductivities of twenty different species yields reliable estimates of the Λ0 values for five times that number of salts.

We can now estimate weak electrolyte limiting conductivities
One useful application of Kohlrausch's law is to estimate the limiting equivalent conductivities of weak electrolytes which, as we observed above, cannot be found by extrapolation. Thus for acetic acid CH3COOH ("HAc"), we combine the λ0 values for H3O+ and CH3COO− given in the above table:

Λ0 (HAc) = λ0 (H3O+) + λ0 (Ac−)

How fast do ions migrate in solution?
Movement of a migrating ion through the solution is brought about by a force exerted by the applied electric field. This force is proportional to the field strength and to the ionic charge. Calculations of the frictional drag are based on the premise that the ions are spherical (not always true) and the medium is continuous (never true) as opposed to being composed of discrete molecules. Nevertheless, the results generally seem to be realistic enough to be useful.
According to Newton's law, a constant force exerted on a particle will accelerate it, causing it to move faster and faster unless it is restrained by an opposing force. In the case of electrolytic conductance, the opposing force is frictional drag as the ion makes its way through the medium. The magnitude of this force depends on the radius of the ion and its primary hydration shell, and on the viscosity of the solution.
Eventually these two forces come into balance and the ion assumes a constant average velocity, which is reflected in the values of λ0 tabulated in the table above.
The relation between λ0 and the velocity (known as the ionic mobility μ0) is easily derived, but we will skip the details here and simply present the results:
Anions are conventionally assigned negative μ0 values because they move in opposite directions to the cations; the values shown here are absolute values |μ0|. Note also that the units are cm/sec per volt/cm, hence the cm² term.
cation    H3O+     NH4+     K+       Ba2+     Ag+      Ca2+     Sr2+     Mg2+     Na+       Li+
μ0        0.362    0.0762   0.0762   0.0659   0.0642   0.0616   0.0616   0.0550   0.0520    0.0388

anion     OH−      SO4^2−   Br−      I−       Cl−      NO3−     ClO3−    CH3COO−  C2H5COO−  C3H7COO−
μ0        0.2050   0.0827   0.0812   0.0796   0.0791   0.0740   0.0705   0.0461   0.0424    0.0411
As with the limiting conductivities, the trends in the mobilities can be roughly correlated with the charge and size of the ion. (Recall that negative ions tend to be larger than positive ions.)
Cations and anions carry different fractions of the current
In electrolytic conduction, ions having different charge signs move in opposite directions. Conductivity measurements give only the sum of the positive and negative ionic conductivities according to Kohlrausch's law, but they do not reveal how much of the charge is carried by each kind of ion. Unless their mobilities are the same, cations and anions do not contribute equally to the total electric current flowing through the cell.
Recall that an electric current is defined as a flow of electric charges; the current in amperes is the number of coulombs of charge moving through the cell per second. Because ionic solutions contain equal quantities of positive and negative charges, it follows that the current passing through the cell consists of positive charges moving toward the cathode, and negative charges moving toward the anode. But owing to mobility differences, cations and anions do not usually carry identical fractions of the charge.
Transference numbers are often referred to as transport numbers; either term is acceptable in the context of electrochemistry. The fraction of charge carried by a given kind of ion is known as the transference number \(t_{\pm}\). For a solution of a simple binary salt,
\[ t_+ = \dfrac{\lambda_+}{\lambda_+ + \lambda_-}\]
and
\[ t_- = \dfrac{\lambda_-}{\lambda_+ + \lambda_-}\]
By definition,
\[t_+ + t_- = 1.\]
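As a quick illustration (the λ0 values are taken from the table above; the choice of HCl is mine, not the text's), the transference numbers of a simple binary electrolyte follow directly from the two formulas:

```python
# Limiting equivalent conductivities from the table above (HCl as an example)
lam0 = {"H3O+": 349.98, "Cl-": 76.30}

total = lam0["H3O+"] + lam0["Cl-"]
t_plus = lam0["H3O+"] / total    # fraction of charge carried by the cation
t_minus = lam0["Cl-"] / total    # fraction of charge carried by the anion

print(round(t_plus, 3), round(t_minus, 3))  # 0.821 0.179
```

The very mobile hydronium ion carries over 80% of the current in hydrochloric acid, a direct consequence of its anomalously high conductivity.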
To help you visualize the effects of non-identical transference numbers, consider a solution of M+ X− in which t+ = 0.75 and t− = 0.25. Let the cell be divided into three [imaginary] sections as we examine the distribution of cations and anions at three different stages of current flow.
Initially, the concentrations of M + and X – are the same in all parts of the cell.
After 4 faradays of charge have passed through the cell, 3 eq of cations and 1 eq of anions have crossed any given plane parallel to the electrodes. Note that 3 anions are discharged at the anode, exactly balancing the number of cations discharged at the cathode. In the absence of diffusion, the ratio of the ionic concentrations near the electrodes equals the ratio of their transport numbers.
Transference numbers can be determined experimentally by observing the movement of the boundary between electrolyte solutions having an ion in common, such as LiCl and KCl:
In this example, K+ has a higher transference number than Li+, but don't try to understand why the KCl boundary moves to the left; the details of how this works are rather complicated and not important for the purposes of this course.

H+ and OH− ions "migrate" without moving, and rapidly!
You may have noticed from the tables above that the hydrogen- and hydroxide ions have extraordinarily high equivalent conductivities and mobilities. This is a consequence of the fact that unlike other ions, which need to bump and nudge their way through the network of hydrogen-bonded water molecules, these ions are participants in this network. By simply changing the H2O partners they hydrogen-bond with, they can migrate "virtually". In effect, what migrates is the hydrogen bonds, rather than the physical masses of the ions themselves.
This process is known as the Grotthuss mechanism. The shifting of the hydrogen bonds occurs when the rapid thermal motions of adjacent molecules bring a particular pair into a more favorable configuration for hydrogen bonding within the local molecular network. Bear in mind that what we refer to as "hydrogen ions" H+(aq) are really hydronium ions H3O+. It has been proposed that the larger aggregates H5O2+ and H9O4+ are important intermediates in this process.
It is remarkable that this virtual migration process was proposed by Theodor Grotthuss in 1805 — just five years after the discovery of electrolysis, and he didn't even know the correct formula for water; he thought its structure was H–O–O–H.
These two diagrams will help you visualize the process. The successive downward rows show the first few "hops" made by the virtual H+ and OH− ions as they move in opposite directions toward the appropriate electrodes. (Of course, the same mechanism is operative in the absence of an external electric field, in which case all of the hops will be in random directions.)
Covalent bonds are represented by black lines, and hydrogen bonds by gray lines.
Let $T_{n}$ be an arc-colored tournament of order $n$. The maximum monochromatic indegree $\Delta^{-mon}(T_{n})$ (resp. outdegree $\Delta^{+mon}(T_{n})$) of $T_{n}$ is the maximum number of in-arcs (resp. out-arcs) of a same color incident to a vertex of $T_{n}$. The irregularity $i(T_{n})$ of $T_{n}$ is the maximum difference between the indegree and outdegree of a vertex of $T_{n}$. A subdigraph $H$ of an arc-colored digraph $D$ is called rainbow if each pair of arcs in $H$ have distinct colors. In this paper, we show that each vertex $v$ in an arc-colored tournament $T_{n}$ with $\Delta^{-mon}(T_n)\leq\Delta^{+mon}(T_n)$ is contained in at least $\frac{\delta(v)(n-\delta(v)-i(T_n))}{2}-[\Delta^{-mon}(T_{n})(n-1)+\Delta^{+mon}(T_{n})d^+(v)]$ rainbow triangles, where $\delta(v)=\min\{d^+(v), d^-(v)\}$. We also give some maximum monochromatic degree conditions for $T_{n}$ to contain rainbow triangles, and to contain rainbow triangles passing through a given vertex. Finally, we present some examples showing that some of the conditions in our results are best possible. Keywords: arc-colored tournament, rainbow triangle, maximum monochromatic indegree (outdegree), irregularity
Let $k$ be a positive integer. Bermond and Thomassen conjectured in 1981 that every digraph with minimum outdegree at least $2k-1$ contains $k$ vertex-disjoint cycles. It is famous as one of the one hundred unsolved problems selected in [Bondy, Murty, Graph Theory, Springer-Verlag London, 2008]. Lichiardopol, Por and Sereni proved in [SIAM J. Discrete Math. 23 (2) (2009) 979-992] that the above conjecture holds for $k=3$. Let $g$ be the girth, i.e., the length of the shortest cycle, of a given digraph. Bang-Jensen, Bessy and Thomass\'{e} conjectured in [J. Graph Theory 75 (3) (2014) 284-302] that every digraph with minimum outdegree at least $\frac{g}{g-1}k$ contains $k$ vertex-disjoint cycles. In this note, we first present a new shorter proof of the Bermond-Thomassen conjecture for the case of $k=3$, and then we disprove the conjecture proposed by Bang-Jensen, Bessy and Thomass\'{e} by constructing a family of counterexamples.
In 1995, Stiebitz asked the following question: For any positive integers $s,t$, is there a finite integer $f(s,t)$ such that every digraph $D$ with minimum out-degree at least $f(s,t)$ admits a bipartition $(A, B)$ such that $A$ induces a subdigraph with minimum out-degree at least $s$ and $B$ induces a subdigraph with minimum out-degree at least $t$? We give an affirmative answer for tournaments, multipartite tournaments, and digraphs with bounded maximum in-degrees. In particular, we show that for every $\epsilon$ with $0<\epsilon<1/2$, there exists an integer $\delta_0$ such that every tournament with minimum out-degree at least $\delta_0$ admits a bisection $(A, B)$, so that each vertex has at least $(1/2-\epsilon)$ of its out-neighbors in $A$, and in $B$ as well.
For an arc-colored digraph $D$, define its {\em kernel by rainbow paths} to be a set $S$ of vertices such that (i) no two vertices of $S$ are connected by a rainbow path in $D$, and (ii) every vertex outside $S$ can reach $S$ by a rainbow path in $D$. In this paper, we show that it is NP-complete to decide whether an arc-colored tournament has a kernel by rainbow paths, where a {\em tournament} is an orientation of a complete graph. In addition, we show that every arc-colored $n$-vertex tournament with all its strongly connected $k$-vertex subtournaments, $3\leq k\leq n$, colored with at least $k-1$ colors has a kernel by rainbow paths, and the number of colors required cannot be reduced.
A {\em kernel by properly colored paths} of an arc-colored digraph $D$ is a set $S$ of vertices of $D$ such that (i) no two vertices of $S$ are connected by a properly colored directed path in $D$, and (ii) every vertex outside $S$ can reach $S$ by a properly colored directed path in $D$. In this paper, we conjecture that every arc-colored digraph with all cycles properly colored has such a kernel and verify the conjecture for unicyclic digraphs, semi-complete digraphs and bipartite tournaments, respectively. Moreover, weaker conditions for the latter two classes of digraphs are given.
For nonnegative integers $k$ and $l$, let $\mathscr{D}(k,l)$ denote the family of digraphs in which every vertex has either indegree at most $k$ or outdegree at most $l$. In this paper we prove that the edges of every digraph in $\mathscr{D}(3,3)$ and $\mathscr{D}(4,4)$ can be covered by at most five directed cuts and present an example in $\mathscr{D}(3,3)$ showing that this result is best possible.
General case. In relativistic thermodynamics, inverse temperature $\beta^\mu$ is a vector field, namely the multipliers of the 4-momentum density in the exponent of the density operator specifying the system in terms of statistical mechanics, using the maximum entropy method, where $\beta^\mu p_\mu$ (in units where $c=1$) replaces the term $\beta H$ of the nonrelativistic canonical ensemble. This is done in
C.G. van Weert, Maximum entropy principle and relativistic hydrodynamics, Annals of Physics 140 (1982), 133-162.
for classical statistical mechanics and for quantum statistical mechanics in
T. Hayata et al., Relativistic hydrodynamics from quantum field theory on the basis of the generalized Gibbs ensemble method, Phys. Rev. D 92 (2015), 065008. https://arxiv.org/abs/1503.04535
For an extension to general relativity with spin see also
F. Becattini, Covariant statistical mechanics and the stress-energy tensor, Phys. Rev. Lett 108 (2012), 244502. https://arxiv.org/abs/1511.05439
Conservative case. One can define a scalar temperature $T:=1/k_B\sqrt{\beta^\mu\beta_\mu}$ and a velocity field $u^\mu:=k_BT\beta^\mu$ for the fluid; then $\beta^\mu=u^\mu/k_BT$, and the distribution function for an ideal fluid takes the form of a Jüttner distribution $e^{-u\cdot p/k_BT}$.
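A small numerical sketch (my own illustration, working in units with $k_B=1$ and metric signature $(+,-,-,-)$) checks that these definitions make $u^\mu$ a unit timelike vector:

```python
import numpy as np

kB = 1.0                                  # units with k_B = 1 (illustrative)
eta = np.diag([1.0, -1.0, -1.0, -1.0])    # Minkowski metric, signature (+,-,-,-)

beta = np.array([2.0, 0.5, 0.3, 0.1])     # a sample timelike inverse-temperature vector
norm = np.sqrt(beta @ eta @ beta)         # sqrt(beta^mu beta_mu)
T = 1.0 / (kB * norm)                     # scalar temperature T = 1/(k_B sqrt(beta.beta))
u = kB * T * beta                         # velocity field u^mu = k_B T beta^mu

print(u @ eta @ u)                        # approximately 1.0: u is a unit timelike vector
```

Since $u^\mu = \beta^\mu/\sqrt{\beta\cdot\beta}$, the normalization $u\cdot u = 1$ holds identically, so $\beta^\mu = u^\mu/k_BT$ is consistent with $T$ as defined.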
For an ideal fluid (i.e., assuming no dissipation, so that all conservation laws hold exactly), one obtains the format commonly used in relativistic hydrodynamics (see Chapter 22 in the book Misner, Thorne, Wheeler, Gravitation). It amounts to treating the thermodynamics nonrelativistically in the rest frame of the fluid.
Note that the definition of temperature consistent with the canonical ensemble needs a distribution of the form $e^{-\beta H - terms~ linear~ in~ p}$, conforming with the identification of the noncovariant $\beta^0$ as the inverse canonical temperature. Essentially, this is due to the frame dependence of the volume that enters the thermodynamics. This is in agreement with the noncovariant definition of temperature used by Planck and Einstein and was the generally agreed upon convention until at least 1968; cf. the discussion in
R. Balescu, Relativistic statistical thermodynamics, Physica 40 (1968), 309-338.
In contrast, the covariant Jüttner distribution has the form $e^{-u_0 H/k_BT - terms~ linear~ in~ p}$. Therefore the covariant scalar temperature differs from the canonical one by a velocity-dependent factor $u_0$. This explains the different transformation law. The covariant scalar temperature is simply the canonical temperature in the rest frame, turned covariant by redefinition.
Quantum general relativity. In quantum general relativity, accelerated observers interpret temperature differently. This is demonstrated for the vacuum state in Minkowski space by the Unruh effect, which is part of the thermodynamics of black holes. This seems inconsistent with the assumption of a covariant temperature.
Dissipative case. The situation is more complicated in the more realistic dissipative case. Once one allows for dissipation, amounting to going from Euler to Navier-Stokes in the nonrelativistic case, trying to generalize this simple formulation runs into problems. Thus it cannot be completely correct. In a gradient expansion at low order, the velocity field defined above from $\beta^\mu$ can be identified in the Landau-Lifschitz frame with the velocity field proportional to the energy current; see (86) in Hayata et al.. However, in general, this identification involves an approximation as there is no reason for these velocity fields to be exactly parallel; see, e.g.,
P. Van and T.S. Biró, First order and stable relativistic dissipative hydrodynamics, Physics Letters B 709 (2012), 106-110. https://arxiv.org/abs/1109.0985
There are various ways to patch the situation, starting from a kinetic description (valid for dilute gases only): The first reasonable formulation by Israel and Stewart based on a first order gradient expansion turned out to exhibit acausal behavior and not to be thermodynamically consistent. Extensions to second order (by Romatschke, e.g., https://arxiv.org/abs/0902.3663) or third order (by El et al., https://arxiv.org/abs/0907.4500) remedy the problems at low density, but shift the difficulties only to higher order terms (see Section 3.2 of Kovtun, https://arxiv.org/abs/1205.5040).
A causal and thermodynamically consistent formulation involving additional fields was given by Mueller and Ruggeri in their book Extended Thermodynamics 1993 and its 2nd edition, called Rational extended Thermodynamics 1998.
Paradoxes. Concerning the paradoxes mentioned in the original post:
Note that the formula $\langle E\rangle = \frac32 k_B T$ is valid only under very special circumstances (nonrelativistic ideal monatomic gas in its rest frame), and does not generalize. In general there is no simple relationship between temperature and velocity.
One can say that your paradox arises because in the three scenarios, three different concepts of temperature are used. What temperature is and how it transforms is a matter of convention, and the dominant convention changed some time after 1968; cf. Balescu's paper mentioned above, which shows that until 1963 it was universally defined as being frame-dependent. Today both conventions are alive, the frame-independent one being dominant.
This post imported from StackExchange Physics at 2016-06-24 15:03 (UTC), posted by SE-user Arnold Neumaier
Since TMs are equivalent to algorithms, they must be able to perform algorithms like, say, mergesort. But the formal definition allows only for decision problems, i.e., acceptance of languages. So how can we cast the performance of mergesort as a decision problem?
Usually, Turing machines are explained to calculate functions $f:A \rightarrow B$, of which decision problems are a special case where $B = \mathbb{B}$.
You can define two kinds of Turing Machines, transducers and acceptors. Acceptors have two final states (accept and reject) while transducers have only one final state and are used to calculate functions.
Let $\Sigma$ be the alphabet of the Turing Machine.
Transducers take an input $x \in \Sigma^*$ on an input tape and compute a function $f(x) \in \Sigma^*$ that is written on another tape (called the output tape) when (and if) the machine halts.
There are various results that link together acceptors and transducers. For example:
Let $\Sigma=\{0, 1\}$. Given a language $L \subseteq \Sigma^*$ you can always define $f : \Sigma^* \to \{0,1\}$ to be the characteristic function of $L$, i.e. $$ f(x) = \begin{cases} 1 & \mbox{if $x \in L$} \\ 0 & \mbox{if $x \not\in L$} \end{cases} $$
In this case an acceptor machine for $L$ is essentially the same as a transducer machine for $f$ and vice versa.
For more details you can see Introduction to the Theory of Complexity by Crescenzi (download link at the bottom of the page). It is licensed under Creative Commons.
We focus on studying the decision problems in undergrad complexity theory courses because they are simpler and also questions about many other kinds of computations problems can be reduced to questions about decision problems.
However, there is nothing in the definition of a Turing machine by itself that restricts it to dealing with decision problems. Take, for example, function computation problems: we want to compute some function $f:\{0,1\}^*\to\{0,1\}^*$. We say that a Turing machine computes this function if on every input $x$ the machine halts and what is left on the tape is equal to $f(x)$.
It is easy to define what it means for a Turing Machine to compute a function: The input is written on the tape at the beginning, and the output is whatever is on the tape when the machine halts (if it halts).
Most complexity classes we introduce initially, like $\mathsf{P}$, are defined for decision problems. However, there are also complexity classes for function problems, like $\mathsf{FP}$.
We can convert between a decision problem and function problem pretty easily in most cases. In a function problem, you are given input $x$ and want to compute $f(x) = y$. In the equivalent decision problem, you are given input $(x,y)$, you want to decide whether $f(x) = y$ or not. (Or even whether $f(x) \leq y$, or so on.) Often it turns out that we can solve the first in polynomial time if and only if we can solve the second in polynomial time, so complexity-wise, the distinction is not very important. |
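To make this conversion concrete, here is a small Python sketch (an illustration, not from any textbook): given only a hypothetical black-box decider for "$f(x) \leq y$", we can recover the function value $f(x)$ by binary search, using only polynomially many oracle calls when $f(x)$ has polynomially many bits.

```python
def function_from_decider(decides_leq, x, max_bits=64):
    """Recover f(x) from a decider for 'f(x) <= y' by binary search.

    `decides_leq(x, y)` is a hypothetical black box returning True iff
    f(x) <= y.  Each probe costs one call, so the search makes O(max_bits)
    calls -- a polynomial overhead if f(x) fits in polynomially many bits.
    """
    lo, hi = 0, 2 ** max_bits
    while lo < hi:
        mid = (lo + hi) // 2
        if decides_leq(x, mid):
            hi = mid          # f(x) <= mid: answer lies in [lo, mid]
        else:
            lo = mid + 1      # f(x) > mid: answer lies in [mid+1, hi]
    return lo

# Example: pretend f(x) = x**2, but we only have the decision oracle.
f = lambda x: x * x
oracle = lambda x, y: f(x) <= y
print(function_from_decider(oracle, 12))  # -> 144
```

This is the usual sense in which the function problem is "no harder" than the decision problem, up to a polynomial factor.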
This question already has an answer here:
Hubble's law and conservation of energy 5 answers
If a photon (wave package) redshifts (stretches) travelling in our expanding universe, is its energy reduced?
If so, where does that energy go?
Since you say you're talking about what happens locally (in a small volume), I'll answer from that point of view. The usual formulation of energy conservation in such a volume is that energy is conserved in an inertial reference frame. In general relativity, there are no truly inertial frames, but in a sufficiently small volume, there are reference frames that are approximately inertial to any desired level of precision. If you restrict your attention to such a frame, there is no cosmological redshift. The photon's energy when it enters one side of the frame is the same as the energy when it exits the other side. So there's no problem with energy conservation.
The (apparent) failure of energy conservation arises only when you consider volumes that are too large to be encompassed by a single inertial reference frame.
To be slightly more precise, in some small volume $V=L^3$ of a generic expanding Universe, imagine constructing the best possible approximation to an inertial reference frame. In that frame, observers near one edge will be moving with respect to observers near the other edge, at a speed given by Hubble's Law (to leading order in $L$). That is, in such a frame, the observed redshift is an ordinary Doppler shift, which causes no problems with energy conservation.
If you want more detail, David Hogg and I wrote about this at considerable (perhaps even excessive!) length in an AJP paper.
It goes into the work needed to expand the universe against the forces of gravity and inertia. This is like an adiabatically expanding volume of gas: the gas becomes cooler as the volume increases. So where does the energy go?
This answer was intended to stay in this question.
Conservation of energy is (or at least used to be) a cornerstone of the physics framework. Without it, anything can happen.
Let's see how energy can be conserved.
Galaxies are moving, dragged by the space expansion. When atoms are in motion the Doppler effect will shift the spectra of the emitted photons, as @anna's answer showed in the link above.
The proton-to-electron mass ratio, $\frac{m_p}{m_e}$, has been measured to be constant throughout the history of the universe, but nothing can be said about the constancy of the electron's mass (to the downvoters: a reference is welcome).
The photon energies obey the Sommerfeld relation, $E_{jn}=-m_e\, f(j,n,\alpha,c)$, as seen here, and it is evident that a redshifted spectrum is obtained with a larger $m_e$.
The spectral lines are not only due to the hydrogen atom; there are other spectral lines due to molecular interactions, electric/magnetic dipoles, etc., and so the electromagnetic interaction, Coulomb's law, $F=\frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2}{d^2}$, must be analyzed.
If we scale the mass $m_e$ by a factor $\alpha(t)$ (not related to the fine-structure constant above), where $t$ is time (past), we should also scale the charge and the distance by the same factor, giving exactly the same value $F=\frac{1}{4\pi\varepsilon_0}\,\frac{q_1 q_2\,\alpha^2(t)}{d^2\,\alpha^2(t)}$. Thus the system behaves in the same manner with or without the transformation. The same procedure shows that the universal law of gravitation is also insensitive to the scaling of the atom. This should not be a complete surprise, because the scaling of masses, charges, time units and distances is routinely used in computer simulations that mimic the universe in a consistent way. The conclusion is that there is no way to distinguish between the spectrum of an atom in motion and that of a scaled atom.
The photons that were emitted by a larger atom in the past are received now without any change in their wavelength and, thus, with energy conservation. The mainstream viewpoint, unaware that scaling the atom gives the same observational results, adopted the receding interpretation a long time ago. As a consequence, the models derived from that interpretation (BB, Inflation, DE, DM) do not obey the general laws of the universe, namely the energy conservation principle.
My viewpoint offers a cause for the space expansion. You can think about that, unless you are comfortable with "space expands", period, without a known cause. Physics is about causes and whys, backed by proper references. I used the most basic laws to show that another viewpoint is inscribed in the laws of nature. I have only used basic laws that do not need to be peer-reviewed, as they are mainstream physics.
When I graduated as an electronic engineer, a long time ago, I naively accepted that the fields (electrostatic and gravitational) are sourced by the particles and expand at speed $c$ without being drained. But now, older but not senile, I assume without exception that there are no "free lunches" in the universe, and thus the energy must be transferred from the particles (shrinking) to the fields (growing).
This new viewpoint is formalized and compared to the $\Lambda CDM$ model, with the derivation of the scale relation $\alpha(t)$ that corresponds to the universe's evolution, in a rigorous document: A self-similar model of the Universe unveils the nature of dark energy, preceded by older documents at arXiv: Cosmological Principle and Relativity - Part I, and A relativistic time variation of matter/space fits both local and cosmic data. PS: Can someone provide a way to distinguish between the spectrum of an atom in motion and that of a scaled atom? (Maybe by probing the atom's nucleus and finding the isotope abundance ratios (D/H evolution and others), as Mr Webb has done.)
The Poisson bracket is the bracket of a Lie algebra defined by the symplectic 2-form.
That's a lot to unpack so let's go through it slowly. A 2-form $\omega$ is an anti-symmetric two-tensor $\omega_{\mu\nu}$. If $\omega_{\mu\nu}x^\nu \neq 0$ at points where $x^\nu \neq 0$, then $\omega$ is said to be non-degenerate. So $\omega$ is like the metric tensor, except it's anti-symmetric instead of symmetric. To be symplectic, the "curl" ("exterior derivative") of $\omega$ should also be zero, $(d\omega)_{\mu\nu\rho} = \partial_{[\mu} \omega_{\nu\rho]} = 0$, where the brackets indicate complete anti-symmetrization.
Since $\omega_{\mu\nu}$ is non-degenerate, like the usual metric tensor, it defines an isomorphism between vectors (index up) and one-forms (index down). If $f$ is a scalar function, then $\partial_\mu f$ is naturally a one-form. With this isomorphism, we can define an associated vector field $X_f^\mu$. Note that in terms of concrete components, what this does is quite different from the usual operation of raising and lowering indices in relativity. Viz., in 2 dimensions, any symplectic 2-form can be represented by the matrix $\begin{pmatrix}0 & 1 \\ -1 & 0 \end{pmatrix}$ so that if $\partial_\mu f$ has components $(a, b)$, $X_f^\mu$ has components $(b,-a)$.
Because we can get one-forms from scalars by taking the gradient, we can define an operation on scalars $f,g$ as $(f,g) \mapsto \omega(X_f, X_g)$. One can verify that this operation is linear in both arguments, anti-symmetric, and satisfies the Jacobi identity (because of the requirement that the exterior derivative $d\omega$ vanishes), so it defines a Lie algebra.
If you use the matrix representation of $\omega$ above and that in coordinates $p,q$, the components of $\partial_\mu f$ are $(\partial f/\partial p, \partial f/\partial q)$, then you can work out that this coincides with the usual definition of the Poisson bracket in coordinates. The extension to $2n$ dimensions with coordinates $p_i, q_i,\, i = 1,\ldots,n$ is found by replacing $1$ by the $n\times n$ identity matrix in the matrix above. A theorem by Darboux says that this can always be done locally.
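As an illustration (not part of the original answer), here is a small Python sketch that builds the bracket from the matrix representation of $\omega$ above and checks it against the coordinate formula. Sign conventions vary; the choices below give $\{p,q\} = 1$. Derivatives are taken numerically, so the checks are only up to floating-point tolerance.

```python
OMEGA = ((0.0, 1.0), (-1.0, 0.0))   # matrix of the symplectic form in (p, q)

def grad(f, p, q, h=1e-6):
    """Numerical components (df/dp, df/dq) of the one-form df at (p, q)."""
    return ((f(p + h, q) - f(p - h, q)) / (2 * h),
            (f(p, q + h) - f(p, q - h)) / (2 * h))

def hamiltonian_field(f, p, q):
    """X_f: if df has components (a, b), X_f has components (b, -a)."""
    a, b = grad(f, p, q)
    return (b, -a)

def bracket(f, g, p, q):
    """{f, g}(p, q) = omega(X_f, X_g)."""
    xf, xg = hamiltonian_field(f, p, q), hamiltonian_field(g, p, q)
    jxg = (OMEGA[0][0] * xg[0] + OMEGA[0][1] * xg[1],
           OMEGA[1][0] * xg[0] + OMEGA[1][1] * xg[1])
    return xf[0] * jxg[0] + xf[1] * jxg[1]

f = lambda p, q: p**2 * q       # sample observables
g = lambda p, q: p + q**2
p0, q0 = 0.7, -1.3

# agrees with the coordinate formula df/dp dg/dq - df/dq dg/dp
fp, fq = grad(f, p0, q0); gp, gq = grad(g, p0, q0)
assert abs(bracket(f, g, p0, q0) - (fp * gq - fq * gp)) < 1e-8
assert abs(bracket(f, g, p0, q0) + bracket(g, f, p0, q0)) < 1e-8   # antisymmetry
assert abs(bracket(lambda p, q: p, lambda p, q: q, 0.0, 0.0) - 1.0) < 1e-8  # {p,q} = 1
```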
The canonical reference for this is V. I. Arnold, Mathematical Methods of Classical Mechanics. |
I'm using Tex Live 2010. Here's the example I want to discuss.
\documentclass{article}
\RequirePackage{amsmath}
\RequirePackage{unicode-math}
\setmainfont{Linux Libertine O}
\setmathfont{xits-math.otf}
\setmathfont[range=\mathit/{latin, Latin, greek, Greek}]{Linux Libertine O}
\begin{document}
Transfinite induction will reveal that $\kappa$ is $\alpha$-Mahlo for each
$\alpha < \kappa$. We have proved elsewhere that $\kappa$ is 1-Mahlo, and hence
0-Mahlo. If $\kappa$ is $\alpha$-Mahlo for all $\alpha$ below some limit ordinal
$\lambda < \kappa$, then $\kappa$ is $\lambda$-Mahlo by definition.
\end{document}
My problem is that the latin, Latin, greek, and Greek characters are set in XITS, not Libertine. Now fontspec issues the warning,
fontspec Warning: Font 'Linux Libertine O' does not contain script 'Math'.
I guess Libertine isn't a math font. Nevertheless, similar markup seemed to yield the desired result in June at this blog. Has something changed dramatically since then?
I'm unwilling to abandon unicode-math. But I desperately want to use, within math mode, Latin and Greek glyphs that were designed for text mode. Since (as far as I know) mathspec is no longer compatible with unicode-math, I'm not sure what to do. Any suggestions? |
Stokes flow through a complex porous medium
The medium is periodic and described using embedded boundaries.
This tests mainly the robustness of the representation of embedded boundaries and the convergence of the viscous and Poisson solvers.
We will vary the maximum level of refinement, starting from 5.
int maxlevel = 5;
The porous medium is defined by the union of a random collection of disks. The number of disks can be varied to vary the porosity.
void porous (scalar cs, face vector fs)
{
  int ns = 800;
  double xc[ns], yc[ns], R[ns];
  srand (0);
  for (int i = 0; i < ns; i++)
    xc[i] = 0.5*noise(), yc[i] = 0.5*noise(), R[i] = 0.01 + 0.02*fabs(noise());
Once we have defined the random centers and radii, we can compute the levelset function \phi representing the embedded boundary.
Since the medium is periodic, we need to take into account all the disk images using periodic symmetries.
    for (double xp = -L0; xp <= L0; xp += L0)
      for (double yp = -L0; yp <= L0; yp += L0)
	for (int i = 0; i < ns; i++)
	  phi[] = intersection (phi[], (sq(x + xp - xc[i]) + sq(y + yp - yc[i]) - sq(R[i])));
    phi[] = -phi[];
  }
  boundary ({phi});
  fractions (phi, cs, fs);
  fractions_cleanup (cs, fs);
}
The domain is the periodic unit square centered on the origin.
We turn off the advection term. The choice of the maximum timestep and of the tolerance on the Poisson and viscous solves is not trivial. This was adjusted by trial and error to minimize (possibly) splitting errors and optimize convergence speed.
We define the porous embedded geometry.
porous (cs, fs);
The gravity vector is aligned with the channel and viscosity is unity.
const face vector g[] = {1.,0.};
a = g;
mu = fm;
The boundary condition is zero velocity on the embedded boundary.
u.n[embed] = dirichlet(0); u.t[embed] = dirichlet(0);
We initialize the reference velocity.
foreach()
  un[] = u.x[];
}
We check for a stationary solution.
event logfile (i++; i <= 500)
{
  double avg = normf(u.x).avg, du = change (u.x, un)/(avg + SEPS);
  fprintf (ferr, "%d %d %d %d %d %d %d %d %.3g %.3g %.3g %.3g %.3g\n",
	   maxlevel, i,
	   mgp.i, mgp.nrelax, mgp.minlevel,
	   mgu.i, mgu.nrelax, mgu.minlevel,
	   du, mgp.resa*dt, mgu.resa, statsf(u.x).sum, normf(p).max);
If the relative change of the velocity is small enough we stop this simulation.
if (i > 1 && (avg < 1e-9 || du < 1e-2)) {
We are interested in the permeability k of the medium, which is defined by \displaystyle U = \frac{k}{\mu}\nabla p = \frac{k}{\mu}\rho g with U the average fluid velocity.
We output fields and dump the simulation.
    scalar nu[];
    foreach()
      nu[] = sqrt (sq(u.x[]) + sq(u.y[]));
    boundary ({nu});

    view (fov = 19.3677);
    draw_vof ("cs", "fs", filled = -1, fc = {1,1,1});
    squares ("nu", linear = true, spread = 8);
    char name[80];
    sprintf (name, "nu-%d.png", maxlevel);
    save (name);

    draw_vof ("cs", "fs", filled = -1, fc = {1,1,1});
    squares ("p", linear = false, spread = -1);
    sprintf (name, "p-%d.png", maxlevel);
    save (name);

    draw_vof ("cs", "fs", filled = -1, fc = {1,1,1});
    squares ("level");
    sprintf (name, "level-%d.png", maxlevel);
    save (name);

    sprintf (name, "dump-%d", maxlevel);
    dump (name);
We stop at level 10.
if (maxlevel == 10) return 1; /* stop */
We refine the converged solution to get the initial guess for the finer level. We also reset the embedded fractions to avoid interpolation errors on the geometry.
    maxlevel++;
#if 0
    refine (level < maxlevel && cs[] > 0. && cs[] < 1.);
#else
    adapt_wavelet ({cs,u}, (double[]){1e-2,2e-6,2e-6}, maxlevel);
#endif
    porous (cs, fs);
    boundary (all); // this is necessary since BCs depend on embedded fractions
  }
}
set xlabel 'Level'
set grid
set ytics format '%.1e'
set logscale y
plot 'out' w lp t ''
set xlabel 'Iterations'
set logscale y
set ytics format '%.0e'
set yrange [1e-10:]
plot '../porous.ref' u 2:9 w l t '', '' u 2:10 w l t '', \
     '' u 2:11 w l t '', '' u 2:12 w l t '', '' u 2:13 w l t '', \
     'log' u 2:9 w p t 'du', '' u 2:10 w p t 'resp', \
     '' u 2:11 w p t 'resu', '' u 2:12 w p t 'u.x.sum', '' u 2:13 w p t 'p.max' |
Revision as of 18:30, 16 February 2009

Upper and lower bounds for [math]c_n[/math] for small values of n.
[math]c_n[/math] is the size of the largest subset of [math][3]^n[/math] that does not contain a combinatorial line. A spreadsheet for all the latest bounds on [math]c_n[/math] can be found here. In this page we record the proofs justifying these bounds.
n                    0   1   2   3    4    5     6     7
[math]c_n[/math]     1   2   6   18   52   150   450   [1302,1350]

Basic constructions
For all [math]n \geq 1[/math], a basic example of a mostly line-free set is
[math]D_n := \{ (x_1,\ldots,x_n) \in [3]^n: \sum_{i=1}^n x_i = 0 \ \operatorname{mod}\ 3 \}[/math]. (1)
This has cardinality [math]|D_n| = 2 \times 3^{n-1}[/math]. The only lines in [math]D_n[/math] are those with
A number of wildcards equal to a multiple of three; The number of 1s equal to the number of 2s modulo 3.
One way to construct line-free sets is to start with [math]D_n[/math] and remove some additional points. We also have the variants [math]D_{n,0}=D_n, D_{n,1}, D_{n,2}[/math] defined as
[math]D_{n,j} := \{ (x_1,\ldots,x_n) \in [3]^n: \sum_{i=1}^n x_i = j \ \operatorname{mod}\ 3 \}[/math]. (1')
When n is not a multiple of 3, then [math]D_{n,0}, D_{n,1}, D_{n,2}[/math] are all cyclic permutations of each other; but when n is a multiple of 3, then [math]D_{n,0}[/math] plays a special role (though [math]D_{n,1}, D_{n,2}[/math] are still interchangeable).
Another useful construction proceeds by using the slices [math]\Gamma_{a,b,c} \subset [3]^n[/math] for [math](a,b,c)[/math] in the triangular grid
[math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c = n \},[/math]. (2)
where [math]\Gamma_{a,b,c}[/math] is defined as the strings in [math][3]^n[/math] with [math]a[/math] 1s, [math]b[/math] 2s, and [math]c[/math] 3s. Note that
[math]|\Gamma_{a,b,c}| = \frac{n!}{a! b! c!}.[/math] (3)
Given any set [math]B \subset \Delta_n[/math] that avoids equilateral triangles [math] (a+r,b,c), (a,b+r,c), (a,b,c+r)[/math], the set
[math]\Gamma_B := \bigcup_{(a,b,c) \in B} \Gamma_{a,b,c}[/math] (4)
is line-free and has cardinality
[math]|\Gamma_B| = \sum_{(a,b,c) \in B} \frac{n!}{a! b! c!},[/math] (5)
and thus provides a lower bound for [math]c_n[/math]:
[math]c_n \geq \sum_{(a,b,c) \in B} \frac{n!}{a! b! c!}.[/math] (6)
All lower bounds on [math]c_n[/math] have proceeded so far by choosing a good set of B and applying (6). Note that [math]D_n[/math] is the same as [math]\Gamma_{B_n}[/math], where [math]B_n[/math] consists of those triples [math](a,b,c) \in \Delta_n[/math] in which [math]a \neq b\ \operatorname{mod}\ 3[/math].
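As a quick sanity check of (5)-(6) (an illustrative script, not part of the original page), one can compute [math]|\Gamma_{B_n}|[/math] directly from the multinomial coefficients and verify that it equals [math]|D_n| = 2 \times 3^{n-1}[/math]:

```python
from math import factorial

def gamma_size(B, n):
    # |Gamma_B| as in (5): sum of n!/(a! b! c!) over (a,b,c) in B
    return sum(factorial(n) // (factorial(a) * factorial(b) * factorial(c))
               for (a, b, c) in B)

def B_n(n):
    # triples (a,b,c) in Delta_n with a != b (mod 3), so Gamma_{B_n} = D_n
    return [(a, b, n - a - b)
            for a in range(n + 1) for b in range(n + 1 - a)
            if a % 3 != b % 3]

for n in range(1, 10):
    assert gamma_size(B_n(n), n) == 2 * 3 ** (n - 1)   # |D_n| = 2 * 3^(n-1)
print(gamma_size(B_n(6), 6))   # -> 486 = 2 * 3^5
```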
Note that if one takes a line-free set and permutes the alphabet [math]\{1,2,3\}[/math] in any fashion (e.g. replacing all 1s by 2s and vice versa), one also gets a line-free set. This potentially gives six examples from any given starting example of a line-free set, though in practice there is enough symmetry that the total number of examples produced this way is less than six. (These six examples also correspond to the six symmetries of the triangular grid [math]\Delta_n[/math] formed by rotation and reflection.)
Another symmetry comes from permuting the [math]n[/math] indices in the strings of [math][3]^n[/math] (e.g. replacing every string by its reversal). But the sets [math]\Gamma_B[/math] are automatically invariant under such permutations and thus do not produce new line-free sets via this symmetry.
The basic upper bound
Because [math][3]^{n+1}[/math] can be expressed as the union of three copies of [math][3]^n[/math], we have the basic upper bound
[math]c_{n+1} \leq 3 c_n.[/math] (7)
Note that equality only occurs if one can find an [math]n+1[/math]-dimensional line-free set such that every n-dimensional slice has the maximum possible cardinality of [math]c_n[/math].
n=0 [math]c_0=1[/math]:
This is clear.
n=1 [math]c_1=2[/math]:
The three sets [math]D_1 = \{1,2\}[/math], [math]D_{1,1} = \{2,3\}[/math], and [math]D_{1,2} = \{1,3\}[/math] are the only two-element sets which are line-free in [math][3]^1[/math], and there are no three-element sets.
n=2 [math]c_2=6[/math]:
There are four six-element sets in [math][3]^2[/math] which are line-free, which we denote [math]x = D_{2,2}[/math], [math]y=D_{2,1}[/math], [math]z=D_2[/math], and [math]w[/math] and are displayed graphically as follows.
13 .. 33 .. 23 33 13 23 .. 13 23 .. x = 12 22 .. y = 12 .. 32 z = .. 22 32 w = 12 .. 32 .. 21 31 11 21 .. 11 .. 31 .. 21 31
Combining this with the basic upper bound (7) we see that [math]c_2=6[/math].
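These small cases can also be verified by brute force. The following Python sketch (illustrative, not part of the original page) enumerates all combinatorial lines in [math][3]^2[/math] and confirms that the maximum size of a line-free set is 6, attained by exactly the four sets above:

```python
from itertools import combinations, product

points = list(product(range(1, 4), repeat=2))   # the 9 points of [3]^2

# A combinatorial line comes from a template over {1,2,3,*} with at least
# one wildcard; all wildcards take the same value 1, 2, 3 simultaneously.
lines = []
for template in product((1, 2, 3, '*'), repeat=2):
    if '*' in template:
        lines.append([tuple(v if t == '*' else t for t in template)
                      for v in (1, 2, 3)])

def line_free(S):
    return not any(all(pt in S for pt in line) for line in lines)

best = max(k for k in range(len(points) + 1)
           if any(line_free(set(c)) for c in combinations(points, k)))
maximal = [set(c) for c in combinations(points, best) if line_free(set(c))]
print(best, len(maximal))   # -> 6 4
```

Note that the "anti-diagonal" {(1,3),(2,2),(3,1)} is not a combinatorial line, which is why w is admissible.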
n=3 [math]c_3=18[/math]:
We describe a subset [math]A[/math] of [math][3]^3[/math] as a string [math]abc[/math], where [math]a, b, c \subset [3]^2[/math] correspond to strings of the form [math]1**[/math], [math]2**[/math], [math]3**[/math] in [math][3]^3[/math] respectively. Thus for instance [math]D_3 = xyz[/math] is an 18-element line-free set, and combining this with (7) we have [math]c_3=18[/math].
Lemma 1. The only 18-element line-free subset of [math][3]^3[/math] is [math]D_3 = xyz[/math]. The only 17-element line-free subsets of [math][3]^3[/math] are formed by removing a point from [math]D_3=xyz[/math], or by removing either 111, 222, or 333 from [math]D_{3,2} = yzx[/math] or [math]D_{3,3}=zxy[/math].

Proof. We prove the second claim. As [math]17=6+6+5[/math], and [math]c_2=6[/math], at least two of the slices of a 17-element line-free set must be from x, y, z, w, with the third slice having 5 points. If two of the slices are identical, the last slice can have only 3 points, a contradiction. If one of the slices is a w, then the 5-point slice will contain a diagonal, a contradiction. By symmetry we may now assume that two of the slices are x and y, which forces the last slice to be z with one point removed. Now one sees that the slices must be in the order xyz, yzx, or zxy, because any other combination has too many lines that need to be removed. The sets yzx, zxy contain the diagonal {111,222,333} and so one additional point needs to be removed.
The first claim follows by a similar argument to the second. [math]\Box[/math]
n=4 [math]c_4=52[/math]:
Indeed, divide a line-free set in [math][3]^4[/math] into three blocks [math]1***, 2***, 3***[/math] of [math][3]^3[/math]. If two of them are of size 18, then they must both be xyz, and the third block can have at most 6 elements, leading to an inferior bound of 42. So the best one can do is [math]18+17+17=52[/math] which can be attained by deleting the diagonal {1111,2222,3333} from [math]D_{4,1} = xyz yzx xzy[/math], [math]D_4 = yzx zxy xyz[/math], or [math]D_{4,2} = zxy xyz yxz[/math]. In fact,
Lemma 2. The only 52-element line-free sets in [math][3]^4[/math] are formed by removing the diagonal {1111,2222,3333} from [math]D_{4,j}[/math] for some j=0,1,2. The only 51-element line-free sets in [math][3]^4[/math] are formed by removing the diagonal and one further point from [math]D_{4,j}[/math] for some j=0,1,2.

Proof. It suffices to prove the second claim. Suppose first that we can slice this set into three slices of 17 points. Each of the slices is then formed by removing one point from xyz, yxz, and zxy. Arguing as before we obtain the claim. If one of the xyz, yzx, zyx patterns is used twice, then the third block can have at most 8 points, a contradiction, so each pattern must be used exactly once. A pattern such as xyz zxy yzx (with three points removed) cannot occur since the columns xzy, yxz, zyx of this pattern can contain at most 16 points each. So we must remove three points from [math]D_{4,1} = xyz yxz zyx[/math] or a cyclic permutation thereof. Let's say we are removing three points from [math]D_{4,1}[/math]. If 3333 is not removed, then we must remove one point from each of the sets {1113,2223}, {1131,2232}, {1311,2322}, {3111,3222}, causing four points to be removed in all, a contradiction; thus 3333 must be removed, and similarly 2222, and the claim follows.
If there is no such slicing available, then every slicing of the 51-point set must slice into an 18-point set, an 17-point set, and a 16-point set. By symmetry we may assume the 18-point slice is the first one, and the 17 point set is the next one:
xyz ??? ???
Looking at the vertical slices, we see that the first column must also be an xyz:
xyz y?? z??
this forces the second slice, which has 17 points, to be yzx with one point removed; in fact, the point removed must be either 2222 or 2333. This forces the third slice to be contained in zxy. Now we are back in the situation of [math]D_{4,1}[/math] with three points removed, and the claim follows from the previous argument. [math]\Box[/math]
n=5 [math]c_5=150[/math]:
We have the upper bound [math]c_5 \leq 154[/math]
Suppose for contradiction that we had a pattern with [math]155 = 3 \times 52 - 1[/math] points, then two of the [math][3]^3[/math] slices must have 52 points and the third has 51, no matter how one slices. Using the previous theorem, we see up to permutation that one is now removing seven points from
yzx zxy xyz
zxy xyz yzx
xyz yzx zxy
Now the major diagonal of the cube is yyy, and six points must be removed from that. Four of the off-diagonal cubes must also lose points. That leaves 152 points, which contradicts the 155 points we started with.
We have the lower bound [math]c_5 \geq 150[/math]
One way to get 150 is to start with [math]D_5[/math] and remove the slices [math]\Gamma_{0,4,1}, \Gamma_{0,5,0}, \Gamma_{4,0,1}, \Gamma_{5,0,0}[/math].
Another pattern of 150 points is this: Take the 450 points in [math]{}[3]^6[/math] which are (1,2,3), (0,2,4) and permutations, then select the 150 whose final coordinate is 1. That gives this many points in each cube:
17 18 17
17 17 18
12 17 17
An integer programming method has established the upper bound [math]c_5\leq 150[/math], with 12 extremal solutions.
This file contains the extremisers, one point per line, with different extremisers separated by a line containing "—".
This is the linear program, readable by GNU's glpsol linear programming solver, which also quickly proves that 150 is the optimum.
Each variable corresponds to a point in the cube, numbered according to the lexicographic ordering. If a variable is 1 then the point is in the set; if it is 0 then it is not in the set. There is one linear inequality for each combinatorial line, stating that at least one point must be missing from the line.
n=6 [math]c_6=450[/math]:
The upper bound follows since [math]c_6 \leq 3 c_5[/math]. The lower bound can be formed by gluing together all the slices [math]\Gamma_{a,b,c}[/math] where (a,b,c) is a permutation of (0,2,4) or (1,2,3).
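The lower bound can be checked with a few lines of Python (illustrative, not part of the original page), summing the multinomial coefficients (3) over the permutations of (0,2,4) and (1,2,3):

```python
from itertools import permutations
from math import factorial

def multinomial(a, b, c):
    # |Gamma_{a,b,c}| = (a+b+c)! / (a! b! c!), eq. (3)
    return factorial(a + b + c) // (factorial(a) * factorial(b) * factorial(c))

# 6 permutations of (0,2,4), each of size 15, and 6 of (1,2,3), each of size 60
B = set(permutations((0, 2, 4))) | set(permutations((1, 2, 3)))
print(sum(multinomial(*t) for t in B))   # -> 450 = 6*15 + 6*60
```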
n=7 [math]1302 \leq c_7 \leq 1350[/math]:
The upper bound follows since [math]c_7 \leq 3 c_6[/math]. The lower bound can be formed by removing 016,106,052,502,151,511,160,610 from [math]D_7[/math].
Larger n
The following construction gives lower bounds for the number of triangle-free points. It gives on the order of [math]2.7 \sqrt{\log(N)/N}\,3^N[/math] points for large N (N ~ 5000).
It applies when N is a multiple of 3.
For N=3M-1, restrict the first digit of a 3M sequence to be 1. So this construction has exactly one-third as many points for N=3M-1 as it has for N=3M. For N=3M-2, restrict the first two digits of a 3M sequence to be 12. This leaves roughly one ninth of the points for N=3M-2 as for N=3M.
The current lower bounds for [math]c_{3m}[/math] are built like this, with abc being shorthand for [math]\Gamma_{a,b,c}[/math]:
[math]c_3[/math] from (012) and permutations [math]c_6[/math] from (123,024) and perms [math]c_9[/math] from (234,135,045) and perms [math]c_{12}[/math] from (345,246,156,02A,057) and perms (A=10) [math]c_{15}[/math] from (456,357,267,13B,168,04B,078) and perms (B=11)
To get the triples in each row, add 1 to the triples in the previous row; then include new triples that have a zero.
A general formula for these points is given below. I think that they are triangle-free. (For N<21, ignore any triple with a negative entry.)
There are thirteen groups of points in the centre, that are the same for all N=3M:
(M-7, M-3, M+10) and perms
(M-7, M, M+7) and perms
(M-7, M+3, M+4) and perms
(M-6, M-4, M+10) and perms
(M-6, M-1, M+7) and perms
(M-6, M+2, M+4) and perms
(M-5, M-1, M+6) and perms
(M-5, M+2, M+3) and perms
(M-4, M-2, M+6) and perms
(M-4, M+1, M+3) and perms
(M-3, M+1, M+2) and perms
(M-2, M, M+2) and perms
(M-1, M, M+1) and perms
There is also a string of points, that is slightly different for odd and even N.
For N=6K:
(2x, 2x+2, N-4x-2) and permutations (x=0..K-4)
(2x, 2x+5, N-4x-5) and perms (x=0..K-4)
(2x, 3K-x-4, 3K+x+4) and perms (x=0..K-4)
(2x, 3K-x-1, 3K+x+1) and perms (x=0..K-4)
(2x+1, 2x+5, N-4x-6) and perms (x=0..K-5)
(2x+1, 2x+8, N-4x-9) and perms (x=0..K-5)
(2x+1, 3K-x-1, 3K-x) and perms (x=0..K-5)
(2x+1, 3K-x-4, 3K-x+3) and perms (x=0..K-5)
For N=6K+3: the thirteen points mentioned above, and:
(2x, 2x+4, N-4x-4) and perms, x=0..K-4
(2x, 2x+7, N-4x-7) and perms, x=0..K-4
(2x, 3K+1-x, 3K+2-x) and perms, x=0..K-4
(2x, 3K-2-x, 3K+5-x) and perms, x=0..K-4
(2x+1, 2x+3, N-4x-4) and perms, x=0..K-4
(2x+1, 2x+6, N-4x-7) and perms, x=0..K-4
(2x+1, 3K-x, 3K-x+2) and perms, x=0..K-4
(2x+1, 3K-x-3, 3K-x+5) and perms, x=0..K-4
An alternate construction:
First define a sequence of all positive numbers which, in base 3, do not contain a 1; then add 1 to all multiples of 3 in this sequence. This sequence does not contain a length-3 arithmetic progression.
It starts 1,2,7,8,19,20,25,26,55, …
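For illustration (not part of the original page), the sequence can be generated in a few lines of Python, treating 0 as one of the base-3 numbers without a 1 (which produces the initial term 1), and its freedom from 3-term progressions can be checked by brute force on an initial segment:

```python
def sequence(limit):
    """Numbers below `limit` with no digit 1 in base 3 (0 included),
    with 1 added to the multiples of 3."""
    out = []
    for n in range(limit):
        digits, m = [], n
        while m:
            digits.append(m % 3)
            m //= 3
        if 1 not in digits:
            out.append(n + 1 if n % 3 == 0 else n)
    return out

s = sequence(60)
print(s[:9])   # -> [1, 2, 7, 8, 19, 20, 25, 26, 55]

# no 3-term arithmetic progression among these terms (s is increasing)
assert not any(2 * b == a + c
               for i, a in enumerate(s)
               for j, b in enumerate(s[i+1:], i + 1)
               for c in s[j+1:])
```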
Second, list all the (abc) triples for which the larger two entries differ by a number from the sequence, excluding the case when the smaller two differ by 1, but then including the case when (a,b,c) is a permutation of N/3+(-1,0,1).
Asymptotics
DHJ(3) is equivalent to the upper bound
[math]c_n \leq o(3^n)[/math]
In the opposite direction, observe that if we take a set [math]S \subset [3n][/math] that contains no 3-term arithmetic progressions, then the set [math]\bigcup_{(a,b,c) \in \Delta_n: a+2b \in S} \Gamma_{a,b,c}[/math] is line-free. From this and the Behrend construction it appears that we have the lower bound
[math]c_n \geq 3^{n-O(\sqrt{\log n})}[/math]
though this has to be checked.
Numerics suggest that the first large n construction given above gives a lower bound of roughly [math]2.7 \sqrt{\log(n)/n} \times 3^n[/math], which would asymptotically be inferior to the Behrend bound.
The second large n construction had numerical asymptotics for [math]\log(c_n/3^n)[/math] close to [math]1.2-\sqrt{\log(n)}[/math] between n=1000 and n=10000, consistent with the Behrend bound.
Numerical methods
A greedy algorithm was implemented here. The results were sharp for [math]n \leq 3[/math] but were slightly inferior to the constructions above for larger n. |
I would like to write an algorithm that lists all the ways a natural number $m \in \mathbb{N}$ can be written as the sum of three squares $m = x^2 + y^2 + z^2$ with integers $x,y,z \in \mathbb{Z}$.
Legendre's theorem says $m$ has such a representation if $m \neq 4^a(8b+7)$ with $a,b \in \mathbb{N}$. See on MathOverflow:
Legendre and sums of three squares Efficient computation of integer representation as sum of three squares
If I just wanted to count the number of representations as the sum of squares, we could find formulas in the Online Encyclopedia of Integer Sequences: A005875 - Theta series of simple cubic lattice; also number of ways of writing a nonnegative integer n as a sum of 3 squares (zero being allowed). A074590 - Number of primitive solutions to $n = x^2 + y^2 + z^2$ (i.e. with $\gcd(x,y,z) = 1$).
How do I
list the sum of squares representations for each $m$? I would take a slower algorithm if it were easy to implement.
I considered just writing: $\boxed{m -x^2 -y^2 = z^2 }$ and looping over $0 \le x \le y \le \sqrt{m} $ and checking whether $m - x^2 - y^2$ is a perfect square.
Another possibility is keeping an array
square writing
square[x*x+y*y+z*z] += [[x,y,z]] storing triples in the appropriate place as I find them. That is something like $m^3$ time since I loop over $0 < x,y,z < m$.
Do any solutions jump out at you?
10/12/17 Here I take a naive approach:
N = int(M**0.5)
z = [(a, b, c) for a in range(N + 1) for b in range(N + 1) for c in range(N + 1) if a**2 + b**2 + c**2 == M]
This is rather slow, but I could get solutions in the range
N=10000 in about 20 seconds. Here I run it 100 times within a half-hour.
Sometimes, there is no solution. How can I improve this to work quickly at $N \approx 10^6$ ? |
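One way to speed up the search above is to loop only over the two smaller components and test the remainder with an integer square root, so the work is roughly linear in $m$ rather than cubic. A sketch (my own, using Python's `math.isqrt`):

```python
from math import isqrt

def three_squares(m):
    """List all (x, y, z) with 0 <= x <= y <= z and x^2 + y^2 + z^2 == m.

    Only O(m) pairs (x, y) are tried; each is checked with an integer
    square root, avoiding the full cubic scan.
    """
    solutions = []
    for x in range(isqrt(m // 3) + 1):                   # x <= y <= z forces x^2 <= m/3
        for y in range(x, isqrt((m - x * x) // 2) + 1):  # y <= z forces y^2 <= (m - x^2)/2
            r = m - x * x - y * y
            z = isqrt(r)
            if z * z == r and z >= y:
                solutions.append((x, y, z))
    return solutions
```

For example, `three_squares(7)` returns an empty list, consistent with Legendre's theorem, since $7 = 4^0(8 \cdot 0 + 7)$.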
I have read a paper that performs Channel Estimation of Wireless Channel as follows.
A training sequence of length $N$, let's call it $a_N(i)$ for $i\in[0:N-1]$, is repeated twice and then sent out over the channel.
Assume the transmit sequence formed of these two sequences is denoted by $s_{CE}(t)$.
The received signal (assuming a channel of length $T_{CH}$) is given by $$r_{CE}(t) = \sum_{t'=0}^{T_{CH}-1}h(t')s_{CE}(t-t')+n(t)$$
The authors claim that if one takes the autocorrelation between a sequence $a_N(i)$ and the received signal, then we can mathematically write the autocorrelation as
$$R(t)= \frac{1}{N}\sum_{d=0}^{N-1}r_{CE}(t+d\underbrace{-N+1}_{????}) a_{N}^*(d) $$
My question is: why is there a need for the term I have underbraced, $-N+1$? I thought that the autocorrelation is in general expressed as
$$R(t)= \frac{1}{N}\sum_{d=0}^{N-1}r_{CE}(t+d) a_{N}^*(d) $$
Thanks, looking forward to your views!
Update: The following is the reference
In particular, (25) and (26) are my main concerns. |
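For what it's worth, a toy numerical sketch (my own construction, not from the paper; noiseless channel, made-up binary training sequence and channel taps) suggests the $-N+1$ term is purely a time-alignment convention: it delays the correlator output by $N-1$ samples but changes nothing else.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
a = rng.choice([-1.0, 1.0], size=N)   # toy binary training sequence
s = np.concatenate([a, a])            # the sequence sent twice
h = np.array([1.0, 0.5, 0.25])        # toy 3-tap channel
r = np.convolve(s, h)                 # noiseless received signal

def corr(r, a, offset):
    """(1/N) * sum_d r[t + d + offset] * conj(a[d]), for every valid t."""
    N = len(a)
    out = {}
    for t in range(len(r) + N):
        idx = [t + d + offset for d in range(N)]
        if all(0 <= i < len(r) for i in idx):
            out[t] = sum(r[i] * np.conj(a[d]) for d, i in enumerate(idx)) / N
    return out

R0 = corr(r, a, 0)            # correlator without the -N+1 term
Rs = corr(r, a, -(N - 1))     # correlator with the -N+1 term

# the extra -N+1 only delays the output: R_shift(t) == R_0(t - (N-1))
assert all(abs(Rs[t] - R0[t - (N - 1)]) < 1e-12 for t in Rs)
```

In other words, with the $-N+1$ term the correlation peak lands at the time index of the *last* sample of the training block rather than the first, which is a common indexing choice when timing is referenced to the end of the preamble.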
I have a hard time wrapping my head around pressure in the Navier-Stokes equation! It may sound ridiculous, but I still cannot understand the true meaning of pressure in the Navier-Stokes equation. Let's do some math to explain my purpose more accurately! Let's start from the basics of physics, and in my opinion that would be the equation of state from classical thermodynamics. We assume there is a fluid which has an equation of state:
$$\rho = \rho(P,T)$$
Where $\rho$ is the density of the fluid, $P$ is the pressure, and $T$ is the temperature. Let's take a derivative from this equation to have:
$$d\rho = (\frac{\partial \rho}{\partial P})_{T} dP + (\frac{\partial \rho}{\partial T})_{P} dT$$
Let's assume that our fluid is in the thermal equilibrium and its temperature will not change, as a result: $d T = 0$
So, we have:
$$d \rho = (\frac{\partial \rho}{\partial P})_{T} dP$$
I know it's a lot of assumptions, but again let's assume that the density change due to a pressure change is linear and our fluid in fact behaves like an ideal gas. As a result, I call $(\frac{\partial \rho}{\partial P})_{T}$ the inverse square of the speed of sound, which is a constant number:
$$(\frac{\partial \rho}{\partial P})_{T} = c_{s}^{-2}$$
So, finally we have:
$$d \rho = c_{s}^{-2} d P$$
Or:
$$\Delta \rho = c_{s}^{-2} \Delta P$$
Or again:
$$(\rho - \rho_{f}) = c_{s}^{-2} (P - P_{0})$$
Where $\rho_{f}$ is the density of the fluid at the rest or reference, which is a tabulated value for each fluid, and $P_{0}$ is the reference pressure.
Now, I would assume my fluid is an incompressible fluid and it means (density is constant and it is really constant!):
$$\rho = \rho_{f}$$
As a result, because, every fluid regardless of its compressibility or incompressibility has a finite speed of sound, I would argue that:
$$P = P_{0}$$
Or in other words, strictly speaking the pressure should be equal to the reference pressure.
Now, I have proved that for an incompressible fluid, as long as the density is constant, the pressure should also be constant. So in the incompressible Navier-Stokes equation we have:
$$\rho_{f} \frac{\partial \mathbf{u}}{\partial t} + \rho_{f} (\mathbf{u} \cdot \nabla)\mathbf{u} = -\nabla P + \nabla \cdot \tau$$
And I showed that for an incompressible fluid, $P$ is just a constant, so $\nabla P = 0$!
As a result, I could simplify the Navier-Stokes equation as:
$$\rho_{f} \frac{\partial \mathbf{u}}{\partial t} + \rho_{f} (\mathbf{u} \cdot \nabla)\mathbf{u} = \nabla \cdot \tau$$
Now let's back to my original question:
Based on these calculations I would say that pressure in the incompressible Navier-Stokes equation is just a dummy variable, which does not have any physical meaning! I would appreciate it if someone could explain this to me! |
Imagine a long, thin current carrying conductor carrying a current $I$ and moving through space with a velocity $\mathbf v$. If there exists a magnetic field such that there is a force on the current carrying wire in a direction opposite to that of its velocity shouldn't the work done by the magnetic force on the current carrying conductor be non zero as the conductor is being displaced along the direction of its velocity?
Consider the well known "sliding rod in a magnetic field" setup:
There is an electric current 'up' (electron current 'down') through the conductor that is moving to the right, and there is a force to the left acting to slow the conductor down.
The magnetic force on the mobile electrons, due to the motion of the conductor is downward while the magnetic force due to the electric current is leftward.
Note that the vector sum of these force components is always orthogonal to the velocity vector of the mobile electrons, thus no work is done by this magnetic force on the mobile electrons.
However, due to this leftward force, the mobile electron density is greater on the trailing side of the moving conductor. The resulting electric field from right to left (within the moving conductor) produces a rightward electric force on the mobile electrons that just balances the leftward magnetic force.
But there is also a leftward electric force on the lattice ions on the leading side of the moving conductor with no balancing magnetic force.
It is this electric force that does work, not the magnetic force.
Adapted from my answer here:
The Lorentz force on a point charge is $$\vec{F} = q(\vec{E}+\vec{v}\times\vec{B}). $$ The force due to the magnetic field is $$ \vec{F}_{mag} = q(\vec{v}\times\vec{B}) .$$ The work done on $q$ due to the magnetic force per unit time is $$P_{mag} = \vec{F}_{mag}·\vec{v} = q(\vec{v}\times\vec{B})·\vec{v} = q(\vec{v}\times\vec{v})·\vec{B} = 0. $$ This is saying that the work done per unit time is zero because the magnetic force $\vec{F}_{mag}$ is orthogonal to the velocity $\vec{v}$.
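The vanishing of $P_{mag}$ can also be checked numerically; a small sketch (my own illustration, with random made-up charges, velocities, and fields):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    q = rng.normal()          # point charge
    v = rng.normal(size=3)    # particle velocity
    B = rng.normal(size=3)    # magnetic field
    F_mag = q * np.cross(v, B)
    # power delivered by the magnetic force: always zero,
    # since v x B is orthogonal to v
    assert abs(np.dot(F_mag, v)) < 1e-12
```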
Work done per unit time on an extended charge distribution by magnetic forces can be expressed as an integral, with the integrand (power per unit length, area or volume) again vanishing because of the magnetic force per unit length, area or volume being orthogonal to velocity. Thus, quite generally, forces due to magnetic fields do no work.
Why is it assumed that magnetic forces arising from magnetic fields do not do work on a current carrying conductor?
The reason that it is assumed that magnetic forces do no work is not really an assumption at all; it can be proven directly from Maxwell's equations. This is known as Poynting's theorem. My favorite derivation is found at section 11.2 here: http://web.mit.edu/6.013_book/www/book.html
$$-\nabla \cdot (\mathbf E \times \mathbf H) = \frac{\partial}{\partial t} \left(\frac{1}{2}\epsilon_0 \mathbf E \cdot \mathbf E\right)+ \mathbf E \cdot \frac{\partial}{\partial t}\mathbf P + \frac{\partial}{\partial t}\left(\frac{1}{2} \mu_0 \mathbf H \cdot \mathbf H\right) + \mathbf H \cdot \frac{\partial}{\partial t} \mathbf M + \mathbf E \cdot \mathbf J$$
In your case, with just a conductor, we have $\mathbf P=0$ and $\mathbf M=0$ so the equation simplifies to a more commonly recognizable form
$$-\nabla \cdot (\mathbf E \times \mathbf H) = \frac{\partial}{\partial t} \left(\frac{1}{2}\epsilon_0 \mathbf E \cdot \mathbf E\right) + \frac{\partial}{\partial t}\left(\frac{1}{2} \mu_0 \mathbf H \cdot \mathbf H\right) + \mathbf E \cdot \mathbf J$$
Where the term on the left is the flow of energy from one region of the field to another and the first two terms on the right are the change in the energy density of the electric and magnetic fields respectively. Those are purely field terms; the only term involving an interaction with matter is the last term $\mathbf E \cdot \mathbf J$ where $\mathbf J$ is the free current. This means that the only way that work can be done on a conductor is via the $\mathbf E$ field.
Note, this is a derivation based on the macroscopic Maxwell's equations. A similar derivation can be done based on the microscopic Maxwell's equations, and again the only term involving an interaction with the matter of a conductor is $\mathbf E \cdot \mathbf J$. So regardless of if you are talking about the macroscopic or microscopic Maxwell's equations, for a conductor, the conclusion is the same: all work is done by the $\mathbf E$ field and the amount of work is given by $\mathbf E \cdot \mathbf J$.
It is not an assumption, it is a theorem. It holds as long as Maxwell's equations hold.
As @Alfred Centauri described in his answer whenever you have a situation that looks like there is a magnetic field doing work, you can always "dig deeper" and find where it is actually the $\mathbf E$ field doing the work.
However, instead of digging deeper, let's say that we want to step back a bit. Is there anything else we can learn? The term $\mathbf E \cdot \mathbf J$ includes not only the mechanical work, but also the non-mechanical work. Usually we want to maximize the mechanical work and we want to minimize the non-mechanical work. So suppose, instead of trying to find where the $\mathbf E$ field is, we try to separate out the non-mechanical work from the mechanical work.
To do so, we will transform to the rest-frame of the conductor, since in that frame there is no mechanical work so all of the work in that frame is non-mechanical. Assuming that $v<<c$ the transformation equations are: $$\mathbf E' = \mathbf E + \mathbf v \times \mathbf B$$ $$\mathbf J' = \mathbf J - \rho \mathbf v$$ where the primed quantities are quantities in the rest frame of the conductor. Substituting those into $\mathbf E \cdot \mathbf J$ and simplifying, we get: $$\mathbf E \cdot \mathbf J = \mathbf E' \cdot \mathbf J' + \mathbf v \cdot (\rho \mathbf E + \mathbf J \times \mathbf B)$$
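The algebra behind that identity is easy to verify numerically; here is a quick sketch (my own check, with arbitrary made-up field and current values):

```python
import numpy as np

rng = np.random.default_rng(2)
E = rng.normal(size=3)    # electric field in the lab frame
B = rng.normal(size=3)    # magnetic field
J = rng.normal(size=3)    # free current density
v = rng.normal(size=3)    # conductor velocity (v << c assumed)
rho = rng.normal()        # charge density

E_p = E + np.cross(v, B)  # E field in the conductor's rest frame
J_p = J - rho * v         # current density in the rest frame

lhs = np.dot(E, J)
rhs = np.dot(E_p, J_p) + np.dot(v, rho * E + np.cross(J, B))
assert abs(lhs - rhs) < 1e-10   # E.J = E'.J' + v.(rho E + J x B)
```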
So, that means that the $\mathbf E \cdot \mathbf J$ term itself contains within it the mechanical work due to the magnetic field that you are interested in. If you pull out the non-mechanical work, then the mechanical work is exactly what you would expect including a term from the magnetic field.
This result may seem a little surprising or confusing, as it appears to contradict the above, but it does not. The thing is that all of the fields in electromagnetism are closely related to each other. You can often express the same thing in multiple ways, or dig out hidden dependencies. So although Poynting's theorem holds and although it clearly states that the total work is always $\mathbf E \cdot \mathbf J$, it is not a mere coincidence that formulas describing only the mechanical work correctly include the $\mathbf B$ field and show that it does mechanical work.
Yes. This is how generators work. You move a wire through a magnetic field. The field generates an EMF, i.e. an electric field parallel to the wire. The EMF/field causes a current to flow in the wire. The current, together with the magnetic field, produces a force directed opposite to the motion of the wire. Consequently, whatever is pushing the wire has to do work to move the wire. The work-rate is equal to the product of the current and the total EMF, and this is the power that the generator is supplying to the external user.
A stationary magnetic field does not perform work but a time dependent one does, by virtue of the Maxwell-Faraday equation.
Yes, the macroscopic magnetic force on a wire, given by the familiar formula $BIL$, can often do non-zero work (this is how a DC electric motor gets spinning, due to magnetic forces acting on current-carrying wires). The idea that the magnetic force cannot do work comes from the microscopic theory, where the magnetic part of the Lorentz force on a charged particle indeed does no work, but this does not translate into the macroscopic theory, because there the magnetic force means something different.
In the macroscopic theory, the magnetic force usually means the macroscopic force due to an external magnetic field acting on the body as a whole, not just on the current-forming mobile charges or individual charged particles. This macroscopic force, in terms of the microscopic theory as Alfred described in his answer, is actually an internal (possibly electric, but that is not important) force due to the electrons pushing on the rest of the wire. This push occurs because the electrons are pushed by the magnetic part of the Lorentz force towards the wire boundary, but since they are bound to the wire, they cannot jump out, so they transmit the push to the rest of the wire. |
“Superb!” — often, a one word review like this one encapsulates the singularly superlative experience that one of our diners had at an OpenTable restaurant.
While we do see a few of these extremely terse declarations of satisfaction pop up here and there, the typical OpenTable review is more verbose. Our reviewing customers often hit the 2000-character upper limit to pour out their hearts. They are passionate about every aspect of fine dining, and leave detailed, nuanced and constructive reviews of their journey through their dining experience.
The level of care we see in these reviews makes them unquestionably one of the most important sources of insight into the ecosystem of restaurants and diners. Potential diners will often go through hundreds of reviews to help them decide where to dine next. A restaurateur, on the other hand, will always keep a sharp eye out for reviews to gauge how their business is doing, and what, if anything, needs more work.
Reviews can also be a great way to familiarize oneself with the dining scene in a new neighborhood, city, or country. Reviews can inform the diner about aspects of a restaurant that are not obvious from its description — is it a
local gem or a tourist trap? Does this restaurant have a view? If so, a view of what, and is the view particularly stunning during sunset? Is the service friendly?
Mining Reviews
Mining reviews for insights is an obvious thing to do, but it is not necessarily easy. As is usually the case with unstructured data, there is a lot of information buried in a lot of noise.
Once you have read through a few reviews you start getting the sense that there is only a handful of broad categories that people are writing about. These may range from food and drink related comments, to sentences devoted to ambiance, service, value for money, special occasions, and so on. Within a category like, say, ambiance, you would come across distinct themes such as live music, decor, and views. One review may contain only one or two of these themes (say seafood and views) while another may contain several.
It is therefore only natural that one of the first things that one would like to extract from a corpus of reviews are all the themes that occur across reviews. In more technical parlance, what we have been calling themes are known as topics, and the technique of learning these topics from a corpus of documents is called Topic Modeling.
Suppose all of our reviews are generated from a fixed vocabulary of, say, 100,000 words, and we learn 200 topics from this corpus. Each topic is a distribution over this vocabulary of words. The way one topic is different from another is in the weights with which each word occurs in them.
In the image above, we display a sample of six topics learned from our review corpus. We show the top 25 words from each topic, with the size of each word scaled proportionally to the importance of that word in that topic. It does not take any effort to see how tightly knit the topics are, and what they are about. Just by looking at them, we know that the first one is about
steakhouse food, the second one is about wine, the third about live music, and the next three about desserts, bar scene, and views.
When the modeling is performed, the topics basically just fall out. That this can be achieved is an amazing fact, given that we did not have to label or annotate the reviews beforehand, or tell the algorithm that we are working in the space of restaurant reviews. We basically throw all the reviews into the mix, and out come these topics.
A byproduct of topic modeling is the set of weights with which each topic is associated with each review. For example, consider the following three reviews:
1. “They had an extensive wine list to choose from, and we each ordered a glass of the 1989 Opus One to pair with our NY strip steaks. We sat near the live jazz band.”
2. “The view of the sunset over the ocean was spectacular, while we sat there savoring the dark chocolate pudding meticulously paired with the wine by our very knowledgeable sommelier.”
3. “The restaurant was crowded so we sat at the bar. The bartender whipped up some amazing cocktails for us. There was blues playing in the background.”
It is easy to see that Review 1 mainly draws from the
wine, steakhouse and live music topics, while the other topics like desserts or view have zero weight in this review. Review 2, on the other hand, is about the view topic, a bit about the desserts topic, and again the wine topic. Review 3 draws mostly from the bar scene and the live music topics.
The intuition here is that documents, in our case reviews, are composed of multiple topics. The share of topics in each review is different. Each word in each review comes from a topic, where that topic is one of the topics in the per-review topic distribution.
Next, we discuss how in practice we learn topics from a review corpus.
From Reviews to Topics using Matrix Factorization
A popular approach for topic modeling is what is known as the Latent Dirichlet Allocation (LDA). A very approachable and comprehensive review of LDA is found in this article by David Blei. Here, I am going to use an alternative method to model topics, based on Non-negative Matrix Factorization (NMF).
Bag-of-words
To see how reviews can be put in a matrix form, consider again the three reviews above. A usual first step is to remove
stop words — terms that are too common, such as “a”, “and”, “of”, “to”, “that”, “was” etc. Now consider all the tokens left in these three reviews — we have 39 of them. So we can express the reviews as a 3 by 39 matrix, where the entries are the counts or term-frequencies (tf) for a certain token in a review. The matrix looks like the following:
TF-IDF
Note that while a word like
bartender is unique to only one review, the word sat is in all three reviews, and should have less weight in the matrix as it is less distinctive. To achieve this, one usually multiplies these term frequencies with an inverse-document-frequency (idf), which is defined as $$$\log\left(\frac{n}{1+m(t)}\right)$$$ where $$$n$$$ is the number of documents in the corpus and $$$m(t)$$$ is the number of documents in which the token $$$t$$$ occurs. If a token occurs in all documents, the ratio within the brackets is almost equal to unity, which makes its logarithm almost equal to zero.
Here is what the matrix looks like after tf-idf. Note that the word “sat” now has much lower importance relative to other words.
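The idf definition above can be sketched in a few lines of Python (the corpus sizes here are made-up numbers, purely for illustration):

```python
import math

def idf(n_docs, n_docs_with_term):
    # inverse document frequency, exactly as defined in the text
    return math.log(n_docs / (1 + n_docs_with_term))

# a hypothetical corpus of one million reviews
assert idf(1_000_000, 999_999) == 0.0                 # ubiquitous term: zero weight
assert idf(1_000_000, 12) > idf(1_000_000, 500_000)   # rarer terms weigh more
```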
NMF
In practice, the document-term matrix $$$\bf{D}$$$ can be quite big: $$$n$$$ documents tall and $$$v$$$ tokens wide, where $$$n$$$ can be several million and $$$v$$$ several hundred thousand. One more step that is usually performed to precondition the matrix is to normalize each row, such that the squares of the elements add up to unity (other normalizations are also possible).
Matrix Factorization (MF) takes such a matrix $$$\bf{D}$$$ of dimension $$$[n \times v]$$$, and approximates it as a product of two low-rank matrices: a $$$[n\times k]$$$ matrix $$$\bf{W}$$$, and a $$$[k\times v]$$$ matrix $$$\bf T$$$, where $$$k$$$ is a small number, typically a few tens to a few hundreds. This is shown schematically below:
NMF is a variant of MF where we start with a matrix $$$\bf D$$$ with non-negative entries, like our document-term matrix, and also constrain the elements of $$$\bf W$$$ and $$$\bf T$$$ to be non-negative.
Everything being non-negative lets us interpret the factorization in an additive sense, and interpret each row of the $$$\bf T$$$ matrix as a topic. This is how it works:
Let’s take the first row of $$$\bf D$$$. That is essentially our first review, expressed as a vector of length v. Remembering how matrix multiplication works, what the above relation tells us is that we can reconstruct this review approximately by linearly combining the $$$k$$$ rows of the matrix $$$\bf T$$$ with weights taken from the first row of $$$\bf W$$$ – the first element of the first row of $$$\bf W$$$ multiplying the first row of $$$\bf T$$$, the second element of the first row of $$$\bf W$$$ multiplying the second row of $$$\bf T$$$, and so on.
Each row of $$$\bf T$$$ is a distribution over the $$$v$$$ terms in a vocabulary, and easily interpreted as the topics described in the earlier section. What this factorization says is that each of the $$$n$$$ reviews (rows in $$$\bf D$$$) can be built up by a different linear combination of the $$$k$$$ topics (rows in $$$\bf T$$$).
So there we have it, $$$\bf W$$$ expresses the share of topics in each review, while each row of $$$\bf T$$$ represents a topic.
Code
Here is some Python code to perform these steps:
import sklearn.feature_extraction.text as text
import numpy as np

# This step performs the vectorization,
# tf-idf, stop word extraction, and normalization.
# It assumes docs is a Python list,
# with reviews as its elements.
cv = text.TfidfVectorizer(stop_words='english')
doc_term_matrix = cv.fit_transform(docs)

# The tokens can be extracted as:
vocab = cv.get_feature_names()

# Next we perform the NMF with 20 topics
from sklearn import decomposition
num_topics = 20

# doctopic is the W matrix
decomp = decomposition.NMF(n_components=num_topics, init='nndsvd')
doctopic = decomp.fit_transform(doc_term_matrix)

# Now, we loop through each row of the T matrix,
# i.e. each topic,
# and collect the top 25 words from each topic.
n_top_words = 25
topic_words = []
for topic in decomp.components_:
    idx = np.argsort(topic)[::-1][0:n_top_words]
    topic_words.append([vocab[i] for i in idx])

A note about initialization
The standard random initialization of NMF does not guarantee a unique factorization, or interpretability of topics. Note that we specified the initialization with ‘nndsvd’ above, which stands for Nonnegative Double Singular Value Decomposition (Boutsidis & Gallopoulos, 2008). It chooses the initial k factors based on positive components of the first k dimensions of the SVD, and makes the factorization deterministic.
Topics at OpenTable
At OpenTable, we have performed topic modeling on tens of millions of textual reviews, and these topics are being used in various innovative applications that will be the subject of subsequent posts.
We have already seen six topics depicted as word clouds. Here are six more just to show how nicely they touch upon different aspects of dining.
I leave you with a teaser of how topics can be used to extract regional nuances. As an experiment we ran topic analysis separately on US cities and UK cities. We found that while in the US
Sunday brunch is a common thing, the UK has its Sunday roasts. Also, the concept of wine pairing somewhat loses ground to wine matching while transitioning from the US to the UK!
More in future posts! |
Consider a simple Hamiltonian for the Helium atom (where $e'^2 = e^2/4\pi \epsilon_0)$:
$H=\frac{P_1^2}{2\mu}+\frac{P_2^2}{2\mu}-\frac{Ze'^2}{R_1}-\frac{Ze'^2}{R_2}+\frac{e'^2}{|\vec{R}_1-\vec{R}_2|}$
I understand the first two kinetic terms are positive because they contribute to the ionization; the second two are negative because they correspond to the
attractive potential of the nucleus. But is the third term positive? If I calculate the ground state energy ($-\frac{\mu Z^2e'^4}{2\hbar^2}\sim-Z^2$Ry) without electron interactions, I obtain $\sim-4$ Ry, whereas the experimental value is $\sim-5.8$ Ry, which leads me to conclude that to correct this value the interaction term must be attractive (in order to "attract" the system towards its center), so it should have a minus sign.
However, intuitively, I know the interaction between electrons should be repulsive, and have a positive sign! |
Claim 1:
The divisibility rule for a number '$a$' to be divided by '$n$' is as follows. Express the number '$a$' in base '$n+1$'. Let '$s$' denote the sum of digits of '$a$' expressed in base '$n+1$'. Now $n|a \iff n|s$. More generally, $a \equiv s \pmod{n}$.
Example:
Before setting out to prove this, we will see an example. Say we want to check if $13|611$. Express $611$ in base $14$.$$611 = 3 \times 14^2 + 1 \times 14^1 + 9 \times 14^0 = (319)_{14}$$where $(319)_{14}$ denotes the decimal number $611$ expressed in base $14$. The sum of the digits is $s = 3 + 1 + 9 = 13$. Clearly, $13|13$. Hence, $13|611$, which is indeed true since $611 = 13 \times 47$.
Proof:
The proof for this claim writes itself out. Let $a = (a_ma_{m-1} \ldots a_0)_{n+1}$, where $a_i$ are the digits of '$a$' in the base '$n+1$'.$$a = a_m \times (n+1)^m + a_{m-1} \times (n+1)^{m-1} + \cdots + a_0$$Now, note that\begin{align}n+1 & \equiv 1 \pmod n\\(n+1)^k & \equiv 1 \pmod n \\a_k \times (n+1)^k & \equiv a_k \pmod n\end{align}\begin{align}a & = a_m \times (n+1)^m + a_{m-1} \times (n+1)^{m-1} + \cdots + a_0 \\& \equiv (a_m + a_{m-1} \cdots + a_0) \pmod n\\a & \equiv s \pmod n\end{align}Hence proved.
Claim 2: The divisibility rule for a number '$a$' to be divided by '$n$' is as follows. Express the number '$a$' in base '$n-1$'. Let '$s$' denote the alternating sum of digits of '$a$' expressed in base '$n-1$' i.e. if $a = (a_ma_{m-1} \ldots a_0)_{n-1}$, $s = a_0 - a_1 + a_2 - \cdots + (-1)^{m-1}a_{m-1} + (-1)^m a_m$. Now $n|a$ if and only if $n|s$. More generally, $a \equiv s \pmod{n}$.
Example:
Before setting out to prove this, we will see an example. Say we want to check if $13|611$. Express $611$ in base $12$.$$611 = 4 \times 12^2 + 2 \times 12^1 + B \times 12^0 = (42B)_{12}$$where $(42B)_{12}$ denotes the decimal number $611$ expressed in base $12$, with $A$ standing for the digit ten and $B$ for the digit eleven. The alternating sum of the digits is $s = 11 - 2 + 4 = 13$. Clearly, $13|13$. Hence, $13|611$, which is indeed true since $611 = 13 \times 47$.
Proof:
The proof for this claim writes itself out just like the one above. Let $a = (a_ma_{m-1} \ldots a_0)_{n-1}$, where $a_i$ are the digits of '$a$' in the base '$n-1$'.$$a = a_m \times (n-1)^m + a_{m-1} \times (n-1)^{m-1} + \cdots + a_0$$Now, note that\begin{align}n-1 & \equiv (-1) \pmod n\\(n-1)^k & \equiv (-1)^k \pmod n \\a_k \times (n-1)^k & \equiv (-1)^k a_k \pmod n\end{align}\begin{align}a & = a_m \times (n-1)^m + a_{m-1} \times (n-1)^{m-1} + \cdots + a_0 \\ & \equiv ((-1)^m a_m + (-1)^{m-1} a_{m-1} \cdots + a_0) \pmod n\\a & \equiv s \pmod n\end{align}Hence proved.
Pros and Cons:
The one obvious advantage of the above divisibility rules is that it is a generalized divisibility rule that can be applied for any '$n$'.
However, the major disadvantage of these divisibility rules is that if a number is given in the decimal system, we need to first express the number in a different base. Expressing it in base $n-1$ or $n+1$ may turn out to be more expensive. (We might as well try direct division by $n$ instead of this procedure!) However, if the number given is already expressed in base $n+1$ or $n-1$, then checking for divisibility becomes a trivial issue. |
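Both claims are easy to turn into code; a small sketch (my own, with digits stored least-significant-first):

```python
def digits(a, base):
    """Digits of a in the given base, least significant first."""
    ds = []
    while a:
        a, r = divmod(a, base)
        ds.append(r)
    return ds or [0]

def digit_sum_rule(a, n):
    """Claim 1: a mod n via the digit sum of a written in base n+1."""
    return sum(digits(a, n + 1)) % n

def alternating_rule(a, n):
    """Claim 2: a mod n via the alternating digit sum in base n-1."""
    s = sum(d if i % 2 == 0 else -d for i, d in enumerate(digits(a, n - 1)))
    return s % n

# the worked example: 13 | 611 by both rules
assert digit_sum_rule(611, 13) == 0
assert alternating_rule(611, 13) == 0
```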
I'm going through Andrew Ng's lecture notes on Machine Learning and I just learnt about softmax regression there.
We see that, for softmax regression, the conditional distribution of $y$ given $x$ is given as: $$p(y = i \mid x; \theta) = \frac{e^{\theta_i^T x}}{\sum_{j} e^{\theta_j^T x}}$$
This formula contains terms of the form $e^{\theta^Tx}$. I was just wondering if there is an intuitive explanation for this? Or, why isn't the derived formula for probability something simpler like:
$$\frac{\theta^Tx}{\sum_j\theta_{j}^Tx}$$
And is there an intuitive explanation for what that would mean? |
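One piece of the intuition (my own illustration, not from the notes) is that exponentiation guarantees every term is positive, so the normalized ratios are always valid probabilities regardless of the sign of $\theta^T x$, whereas the naive ratio in the question can go negative. A sketch:

```python
import numpy as np

def softmax(scores):
    # exponentiation makes every term positive, so the normalized
    # ratios are valid probabilities even for negative scores
    e = np.exp(scores - np.max(scores))  # shift for numerical stability
    return e / e.sum()

scores = np.array([2.0, -1.0, 0.5])    # made-up values of theta_j^T x
p = softmax(scores)
assert np.all(p > 0) and abs(p.sum() - 1.0) < 1e-12

# the naive ratio from the question breaks down with negative scores:
naive = scores / scores.sum()
assert naive.min() < 0                 # a negative "probability"
```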
I don't think you need to create the term "cross-sectional dispersion". From your code, it seems that you are seeking the first two moments of the sample variance when the sample values are not iid draws from a normal distribution but rather are correlated draws (a vector drawn from a multivariate normal).
You have $(X_1, ..., X_N) \sim MVN$ with known mean vector and covariance matrix. You calculate the sample variance, $S^2 = \sum (X_i-\bar{X})^2/(N-1)$ and want to know the expected value and variance of $S^2$.
You can write $S^2$ as a function of $\sum X_i^2$ and $(\sum X_i)^2$; the latter can be written as $\sum_{i,j} X_i X_j$.
The expected value will be relatively simple to obtain using the means, variances, and covariances.
The variance will be a bit tedious and will require you to look at the 3rd and 4th moments of the normal distribution (see the table in the Moments section of the Wikipedia page on the normal distribution), but it shouldn't be too difficult.
Let me spell out how to calculate the expected value:
$$\mathbb{E}(S^2) = \mathbb{E}\left[\frac{1}{N-1} \sum (X_i - \bar{X})^2\right] = \frac{1}{N-1}\left\{ \mathbb{E}\left[\sum X_i^2\right] - \frac{1}{N}\mathbb{E}\left[\left(\sum X_i\right)^2\right]\right\}$$
And note that $\mathbb{E}\left[\sum X_i^2\right] = \sum \mathbb{E}\left(X_i^2\right) = \sum (\mu_i^2 + \sigma_i^2)$
And then $\mathbb{E}\left[\left( \sum X_i \right)^2\right] = \mathbb{E}\left[ \sum_{i,j} X_i X_j \right] = \sum_{i,j} \mathbb{E}\left( X_i X_j \right) = \sum_{i,j} (\mu_i \mu_j + \sigma_{ij}) = \left(\sum_i \mu_i\right)^2 + \sum_{i,j} \sigma_{i,j}$
Thus we obtain$$\mathbb{E}(S^2) = \frac{1}{N-1} \left\{ \sum_i \; \mu_i^2 + \sum_i \; \sigma_i^2 - \frac{1}{N}\left(\sum_i \; \mu_i\right)^2 - \frac{1}{N} \sum_{i,j} \; \sigma_{ij} \right\}$$
Here $\sigma_i^2$ is the $i$th diagonal element of the covariance matrix and $\sigma_{ij}$ is the $(i,j)$th element, so $\sigma^2_i = \sigma_{ii}$.
For your example code, I calculate an expected value of 0.0725:
(sum(m1^2) + sum(diag(S1)) - sum(m1)^2/3 - sum(S1)/3)/2
Note that you could also write this as:
n <- nrow(S1)
var(m1) + sum(diag(S1) - mean(S1)) / (n-1) |
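The closed-form expression can also be sanity-checked by simulation. A Python sketch (the mean vector and covariance matrix here are made up, since the original m1 and S1 aren't shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# made-up mean vector and covariance matrix for illustration
mu = np.array([0.1, 0.3, -0.2])
Sigma = np.array([[0.20, 0.05, 0.02],
                  [0.05, 0.15, 0.03],
                  [0.02, 0.03, 0.25]])
N = len(mu)

# closed-form E(S^2) from the derivation above
expected = (np.sum(mu**2) + np.trace(Sigma)
            - np.sum(mu)**2 / N - np.sum(Sigma) / N) / (N - 1)

# Monte Carlo: draw many correlated vectors, average their sample variances
draws = rng.multivariate_normal(mu, Sigma, size=200_000)
mc = draws.var(axis=1, ddof=1).mean()
assert abs(mc - expected) < 0.01
```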
Here is a neat problem that I spent a lot of time trying to solve when I was around ten. Around forty years ago.
I already learned about simple electronic circuits. In particular, I learned the rules to calculate resistance when resistors are connected in series:
$$R=R_1+R_2.$$
or, when they are connected in parallel:
$$R=\frac{1}{\displaystyle\frac{1}{R_1}+\displaystyle\frac{1}{R_2}}.$$
But... what about a resistor bridge?
For the life of me, I could not figure it out. No matter what clever combination I tried, I could not reduce this circuit to a combination of series- and parallel-connected resistors.
Today, of course, I know better: it just cannot be done. What needs to be done, instead, is an application of Kirchhoff's laws for circuits. Kirchhoff's current law says that the sum of currents entering and exiting a node (junction) will always be zero. Kirchhoff's voltage law says that the sum of voltages over any closed circuit will be zero.
So then, remembering that $U=RI$, here are five equations for the five currents flowing through the five resistors:
\begin{align*}
R_1I_1-R_2I_2+R_5I_5&=0,\\ R_3I_3-R_4I_4-R_5I_5&=0,\\ R_1I_1-R_2I_2+R_3I_3-R_4I_4&=0,\\ I_1-I_3-I_5&=0,\\ I_2-I_4+I_5&=0. \end{align*}
Unfortunately this set of equations is not independent. We need one more equation to make the system solvable. For instance, we may assume that we know the total voltage $U$ applied to the circuit, which yields:
$$R_1I_1+R_3I_3=U.$$
Ultimately, what we are interested in is $I_1+I_2$ as, in combination with $U$, it will yield the resistance of this circuit:
$$R=\frac{U}{I_1+I_2}.$$
So then, let's do a little algebra. First, eliminate $I_3=(U-R_1I_1)/R_3$:
\begin{align*}
R_1I_1-R_2I_2+R_5I_5&=0,\\ R_1I_1+R_4I_4+R_5I_5&=U,\\ (R_1+R_3)I_1-R_3I_5&=U,\\ I_2-I_4+I_5&=0. \end{align*}
Next, use the last equation to eliminate $I_2=I_4-I_5$:
\begin{align*}
R_1I_1-R_2I_4+(R_2+R_5)I_5&=0,\\ R_1I_1+R_4I_4+R_5I_5&=U,\\ (R_1+R_3)I_1-R_3I_5&=U. \end{align*}
Using the last equation we can eliminate $I_5=[(R_1+R_3)I_1-U]/R_3$:
\begin{align*}
[R_1R_3+(R_2+R_5)(R_1+R_3)]I_1-R_2R_3I_4&=(R_2+R_5)U,\\ [R_1R_3+(R_1+R_3)R_5]I_1+R_3R_4I_4&=(R_3+R_5)U. \end{align*}
Multiplying the first of these equations by $R_4$, the second by $R_2$, and adding them eliminates $I_4$:
\begin{align*}
I_1&=\frac{(R_2+R_5)R_4+(R_3+R_5)R_2}{R_1R_3(R_2+R_4)+(R_1+R_3)(R_2+R_5)R_4+(R_1+R_3)R_2R_5}U. \end{align*}
Now,
\begin{align*}
I_2=\frac{R_1I_1+R_5I_5}{R_2}=\frac{R_1I_1+R_5[(R_1+R_3)I_1-U]/R_3}{R_2}=\frac{[R_1R_3+(R_1+R_3)R_5]I_1-R_5U}{R_2R_3}, \end{align*}
thus
\begin{align*}
I_1+I_2&=\frac{[(R_1+R_2)R_3+(R_1+R_3)R_5]I_1-R_5U}{R_2R_3}\\ &=\frac{\dfrac{[(R_1+R_2)R_3+(R_1+R_3)R_5][(R_2+R_5)R_4+(R_3+R_5)R_2]}{R_1R_3(R_2+R_4)+(R_1+R_3)(R_2+R_5)R_4+(R_1+R_3)R_2R_5}-R_5}{R_2R_3}U\\ &=\frac{\dfrac{[(R_1+R_2)R_3+(R_1+R_3)R_5][(R_3+R_4)R_2+(R_2+R_4)R_5]}{R_1R_3(R_2+R_4)+(R_1+R_3)(R_2+R_4)R_5+R_2R_4(R_1+R_3)}-R_5}{R_2R_3}U\\ &=\frac{(R_1+R_2+R_3+R_4)R_5+(R_1+R_2)(R_3+R_4)}{(R_1+R_3)(R_2+R_4)R_5+(R_1+R_2)R_3R_4+R_1R_2(R_3+R_4)}U, \end{align*}
where the final form demonstrates the symmetry of the result. Therefore,
$$R=\frac{U}{I_1+I_2}=\frac{(R_1+R_3)(R_2+R_4)R_5+(R_1+R_2)R_3R_4+R_1R_2(R_3+R_4)}{(R_1+R_2+R_3+R_4)R_5+(R_1+R_2)(R_3+R_4)}.$$
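As a quick numerical cross-check (a sketch with arbitrarily chosen resistor values), one can solve the five independent Kirchhoff equations directly and compare with the closed-form result:

```python
import numpy as np

R1, R2, R3, R4, R5, U = 1.0, 2.0, 3.0, 4.0, 5.0, 1.0

# Unknowns: I1, I2, I3, I4, I5 (same equations as in the derivation).
A = np.array([
    [R1, -R2, 0.0, 0.0, R5],      # R1*I1 - R2*I2 + R5*I5 = 0
    [0.0, 0.0, R3, -R4, -R5],     # R3*I3 - R4*I4 - R5*I5 = 0
    [1.0, 0.0, -1.0, 0.0, -1.0],  # I1 - I3 - I5 = 0
    [0.0, 1.0, 0.0, -1.0, 1.0],   # I2 - I4 + I5 = 0
    [R1, 0.0, R3, 0.0, 0.0],      # R1*I1 + R3*I3 = U
])
b = np.array([0.0, 0.0, 0.0, 0.0, U])
I1, I2, I3, I4, I5 = np.linalg.solve(A, b)

R_solved = U / (I1 + I2)
R_formula = (((R1 + R3) * (R2 + R4) * R5 + (R1 + R2) * R3 * R4
              + R1 * R2 * (R3 + R4))
             / ((R1 + R2 + R3 + R4) * R5 + (R1 + R2) * (R3 + R4)))
print(R_solved, R_formula)  # both 170/71 ≈ 2.3944
```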
What a beautiful result! There was a time (again, back forty years ago) when I actually believed that with series resistors, parallel resistors, and the resistor bridge, all circuits are solvable. Alas, that is not the case: there are circuits one can construct that cannot be broken down into combinations of these three elementary circuits. In any case, the formulae get progressively more complicated, so rather than working them out in the general case, it is easier to just use Kirchhoff's laws and solve each circuit individually. Of course nowadays, with circuit analysis software readily available, this is not something people need to do by hand very often...
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay. The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m²K/W. The name comes from the informal word "togs" for clothing, which itself was probably derived from the word toga, a Roman garment. The basic unit of insulation coefficient is the RSI (1 m²K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg). England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused about classical physics's angular momentum. For the orbital motion of a point mass: if we pick a new coordinate system (that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a quite absurd result: it is no longer conserved, with an additional term that varies with time.)
in the new coordinate system: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the 1st term varies with time ($\vec{R}$ is the constant shift of the coordinate origin, and $\vec{p}$ is, roughly, rotating).
Would anyone be kind enough to shed some light on this for me?
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching the Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet
Is it possible to ever make a time machine? Please give an easy answer, a simple one. A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto1047 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|_\Sigma$ by specifying $u(\cdot, 0)$ and differentiating along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules, meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape, but this leads me to another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
Let $F: \mathbb{R} \rightarrow \mathbb{R}$ be a linear map. I want to evaluate an expression of the type $$F((ax+b)^k)$$ in terms of $F(x)$ for some fixed value of $x$ (I already know $F(x)^r$ for $r=1,...,k$).
$x$ is typically small ($0<x<1$), and $k$ is typically about $15$. $a$ is always positive (about $30$) and $b$ is always negative (about $-30$) so that $ax+b$ is always between $0$ and $1$. Further it is also known that the range of the map $F$ is $[0,1]$.
Using the binomial theorem to expand $(ax+b)^k$ involves very large numbers, which cause errors due to overflow (in Matlab).
Since it is already known that the argument $(ax+b)^k$ as well as its image $F((ax+b)^k)$ are always small, I am interested to know if there are methods to evaluate $F((ax+b)^k)$ in terms of $F(x)$ stably or without involving large numbers. Any help will be much appreciated.
EDIT: The actual problem I want to solve is not quite the same. I am describing it below.
I have two functions $f,g:\mathbb{R} \to \mathbb{R}$. $g$ is positive and unit normalized, i.e. $\int_{-\infty}^{\infty} g(x)dx=1$ The function $f$ is not known, but it is known that $f(x) \in [0,1]$ for all $x$. Also the quantity $\int_{-\infty}^{\infty} g(x) f(x)^r dx$ is known for $r=1,...,k$ (which is, of course, in $[0,1]$). I want to calculate the quantity $$I = \int_{-\infty}^{\infty} g(x) (a f(x) + b)^k dx$$ where $a,b \in \mathbb{R}$. It is known that $a>0,b<0$ and $a,b$ are chosen such that $af(x)+b \in [0,1]$. Hence $I \in [0,1]$. $a$ is typically about $30$, and $k$ is about $15$.
Since only $\int_{-\infty}^{\infty} g(x) f(x)^r dx$ is known, I have no option but to expand $(a f(x) + b)^k = \sum_{r=0}^k \binom{k}{r} a^r f(x)^r b^{k-r}$ using the binomial theorem. However this involves the product of large numbers like $\binom{k}{r}$ and powers of $a$ and $b$. Since $I \in [0,1]$, computing $I$ by adding and subtracting such large numbers does not seem like a good idea.
In the original question I intended the linear map $F$ to correspond to the map $f \mapsto \int_{-\infty}^{\infty} g(x) f(x) dx$, but that was clearly not a good example.
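The catastrophic cancellation in the binomial expansion is easy to demonstrate. Below is a sketch with made-up values $a = 30$, $b = -29.8$ and a degenerate $f \equiv 0.995$ (so the moments are just powers of $0.995$ and the exact answer $(a \cdot 0.995 + b)^k$ is known in closed form):

```python
import math

a, b, k = 30.0, -29.8, 15
x0 = 0.995                      # degenerate f(x) = x0, so the r-th moment is x0**r
moments = [x0**r for r in range(k + 1)]

exact = (a * x0 + b) ** k       # = 0.05**15 ≈ 3e-20, clearly in [0, 1]

# Binomial-expansion evaluation: a sum of huge alternating terms.
naive = sum(math.comb(k, r) * a**r * moments[r] * b**(k - r)
            for r in range(k + 1))

print(exact)   # ~3e-20
print(naive)   # the terms reach ~1e25, so float64 cancellation
               # destroys every significant digit of the tiny true value
```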
A Hadamard matrix is a matrix with all elements equal to \(+1\) or \(-1\), and for which the rows are
mutually orthogonal. If you pick two rows from the matrix and write it as vectors \(\bf x\) and \(\bf y\), then these are orthogonal if their dot product is zero, written as \({\bf x}\cdot{\bf y}=0\). For a Hadamard matrix, this is true for each combination of two rows. The dot product itself is defined using the elements of the vector. With \({\bf x}=(x_1,\ldots,x_n)\) and \({\bf y}=(y_1,\ldots,y_n)\), it is given by
\[{\bf x}\cdot{\bf y}=\sum_{i=1}^n\,x_iy_i.\]
An example of a \(4\times 4\) Hadamard matrix is
\[\begin{pmatrix}
1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1\end{pmatrix}.\]
Any two rows of this matrix are orthogonal. For example, the dot product of the third and fourth row is
\[1\!\cdot\!1\,+\,1\!\cdot\!(-1)\,+\,(-1)\!\cdot\!(-1)\,+\,(-1)\!\cdot\!1=0.\]
How to construct a Hadamard matrix?
Defining what a Hadamard matrix is, is one thing. However, constructing one is not necessarily trivial. There is one particularly straightforward method that constructs Hadamard matrices of size \(2^n\times 2^n\) for \(n\in\mathbb{N}_{>0}\). This method is called Sylvester’s construction. Incidentally, Sylvester is the person who actually discovered Hadamard matrices in 1867, 26 years before Hadamard started working with them. The method works as follows. You start with the basic Hadamard matrix
\[{\bf H}_1=
\begin{pmatrix} 1 \end{pmatrix}.\]
The \(2\times 2\) matrix is then given by
\[{\bf H}_2=
\begin{pmatrix} {\bf H}_1 & {\bf H}_1 \\ {\bf H}_1 & -{\bf H}_1 \end{pmatrix}= \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.\]
If you apply the same step again and write it out, this method produces exactly the \(4\times 4\) example that is given above. In general, Sylvester’s construction is given by
\[{\bf H}_{2^n}=
\begin{pmatrix} {\bf H}_{2^{n-1}} & {\bf H}_{2^{n-1}} \\ {\bf H}_{2^{n-1}} & -{\bf H}_{2^{n-1}} \end{pmatrix}.\]
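Sylvester’s construction translates directly into a few lines of code. The sketch below builds \({\bf H}_{2^n}\) iteratively and checks the defining property: all pairwise row dot products are zero, i.e. \({\bf H}{\bf H}^T = 2^n {\bf I}\).

```python
import numpy as np

def sylvester(n):
    """Return the 2**n x 2**n Hadamard matrix from Sylvester's construction."""
    H = np.array([[1]])
    for _ in range(n):
        # H_{2m} = [[H, H], [H, -H]]
        H = np.block([[H, H], [H, -H]])
    return H

H4 = sylvester(2)
print(H4)  # the 4x4 example matrix from above
# Rows are mutually orthogonal: H @ H.T equals 4 times the identity.
assert np.array_equal(H4 @ H4.T, 4 * np.eye(4, dtype=int))
```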
Of course, this method only constructs the \(2^n\times 2^n\) Hadamard matrices. There are many different methods for other sizes, but there is no known general method that creates all sizes. And, moreover, there is the point of the next section.
Which sizes of Hadamard Matrices Exist?
This is an open question. The Hadamard conjecture states that a Hadamard matrix exists for matrices of size \(4n\times 4n\) with \(n\in\mathbb{N}_{>0}\). Currently, the smallest \(n\) for which no Hadamard matrix is known is \(668\). This happens to be the house number of the neighbor of the beast, but that must be a coincidence. It is known that an \(n\times n\) Hadamard matrix must have \(n=1\), \(2\), or a multiple of 4.
In astronautics and aerospace engineering, the bi-elliptic transfer is an orbital maneuver that moves a spacecraft from one orbit to another and may, in certain situations, require less delta-v than a Hohmann transfer maneuver.
The bi-elliptic transfer consists of two half elliptic orbits. From the initial orbit, a first burn expends delta-v to boost the spacecraft into the first transfer orbit with an apoapsis at some point r_b away from the central body. At this point a second burn sends the spacecraft into the second elliptical orbit with periapsis at the radius of the final desired orbit, where a third burn is performed, injecting the spacecraft into the desired orbit.
While it requires one more engine burn than a Hohmann transfer and generally requires a greater travel time, some bi-elliptic transfers require a lower amount of total delta-v than a Hohmann transfer when the ratio of final to initial semi-major axis is 11.94 or greater, depending on the intermediate semi-major axis chosen.[1]
The idea of the bi-elliptical transfer trajectory was first published by Ary Sternfeld in 1934.[2]
Calculation
Delta-v
A bi-elliptic transfer from a low circular starting orbit (dark blue), to a higher circular orbit (red).
The three required changes in velocity can be obtained directly from the vis-viva equation,
v^2 = \mu \left( \frac{2}{r} - \frac{1}{a} \right)
where:
v is the speed of an orbiting body,
\mu = GM is the standard gravitational parameter of the primary body,
r is the distance of the orbiting body from the primary,
a is the semi-major axis of the body's orbit.
r_b is the common apoapsis distance of the two transfer ellipses and is a free parameter of the maneuver. a_1 and a_2 are the semi-major axes of the two elliptical transfer orbits, which are given by
a_1 = \frac{r_0+r_b}{2}
a_2 = \frac{r_f+r_b}{2}
Starting from the initial circular orbit with radius r_0 (dark blue circle in the figure to the right), a prograde burn (mark 1 in the figure) puts the spacecraft on the first elliptical transfer orbit (aqua half ellipse). The magnitude of the required delta-v for this burn is:
\Delta v_1 = \sqrt{ \frac{2 \mu}{r_0} - \frac{\mu}{a_1}} - \sqrt{\frac{\mu}{r_0}}
When the apoapsis of the first transfer ellipse is reached at a distance r_b from the primary, a second prograde burn (mark 2) raises the periapsis to match the radius of the target circular orbit, putting the spacecraft on a second elliptic trajectory (orange half ellipse). The magnitude of the required delta-v for the second burn is:
\Delta v_2 = \sqrt{ \frac{2 \mu}{r_b} - \frac{\mu}{a_2}} - \sqrt{ \frac{2 \mu}{r_b} - \frac{\mu}{a_1}}
Lastly, when the final circular orbit with radius r_f is reached, a retrograde burn (mark 3) circularizes the trajectory into the final target orbit (red circle). The final retrograde burn requires a delta-v of magnitude:
\Delta v_3 = \sqrt{ \frac{2 \mu}{r_f} - \frac{\mu}{a_2}} - \sqrt{\frac{\mu}{r_f}}
If r_b=r_f, then the maneuver reduces to a Hohmann transfer (in that case \Delta v_3 can be verified to become zero). Thus the bi-elliptic transfer constitutes a more general class of orbital transfers, of which the Hohmann transfer is a special two-impulse case.
The maximum savings possible can be computed by assuming that r_b=\infty, in which case the total \Delta v simplifies to \left(\sqrt 2 - 1\right) \left(\sqrt{\mu/r_0} + \sqrt{\mu/r_f}\right).
Transfer time
Like the Hohmann transfer, both transfer orbits used in the bi-elliptic transfer constitute exactly one half of an elliptic orbit. This means that the time required to execute each phase of the transfer is half the orbital period of each transfer ellipse.
Using the equation for the orbital period and the notation from above:
T = 2 \pi \sqrt{\frac{a^3}{\mu}}
The total transfer time t is the sum of the time required for each half orbit. Therefore:
t_1 = \pi \sqrt{\frac{a_1^3}{\mu}} \quad \text{and} \quad t_2 = \pi \sqrt{\frac{a_2^3}{\mu}}
And finally:
t = t_1 + t_2
Example
To transfer from a circular low Earth orbit with r_0 = 6700 km to a new circular orbit with r_1 = 93800 km using a Hohmann transfer orbit requires a Δv of 2825.02 + 1308.70 = 4133.72 m/s. However, because r_1 = 14 r_0 > 11.94 r_0, it is possible to do better with a bi-elliptic transfer. If the spaceship first accelerated 3061.04 m/s, thus achieving an elliptic orbit with apogee at r_2 = 40 r_0 = 268,000 km, then at apogee accelerated another 608.825 m/s to a new orbit with perigee at r_1 = 93800 km, and finally at perigee of this second transfer orbit decelerated by 447.662 m/s, entering the final circular orbit, then the total Δv would be only 4117.53 m/s, which is 16.19 m/s (0.4%) less.
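The numbers in this example can be reproduced directly from the burn formulas above. Here is a sketch (assuming Earth's gravitational parameter μ = 398600.4418 km³/s²; radii in km, results converted to m/s):

```python
from math import sqrt

mu = 398600.4418          # Earth's gravitational parameter, km^3/s^2 (assumed)
r0, rf = 6700.0, 93800.0  # initial and final circular orbit radii, km

def hohmann(r0, rf):
    """Total delta-v (km/s) for a two-burn Hohmann transfer."""
    a = (r0 + rf) / 2
    dv1 = sqrt(mu * (2 / r0 - 1 / a)) - sqrt(mu / r0)   # prograde at r0
    dv2 = sqrt(mu / rf) - sqrt(mu * (2 / rf - 1 / a))   # prograde at rf
    return dv1 + dv2

def bielliptic(r0, rb, rf):
    """Total delta-v (km/s) for a three-burn bi-elliptic transfer via rb."""
    a1, a2 = (r0 + rb) / 2, (rf + rb) / 2
    dv1 = sqrt(mu * (2 / r0 - 1 / a1)) - sqrt(mu / r0)               # prograde
    dv2 = sqrt(mu * (2 / rb - 1 / a2)) - sqrt(mu * (2 / rb - 1 / a1))  # prograde
    dv3 = sqrt(mu * (2 / rf - 1 / a2)) - sqrt(mu / rf)               # retrograde
    return dv1 + dv2 + dv3

print(hohmann(r0, rf) * 1000)              # ≈ 4133.7 m/s
print(bielliptic(r0, 40 * r0, rf) * 1000)  # ≈ 4117.5 m/s, slightly cheaper
```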
The Δv saving could be further improved by increasing the intermediate apogee, at the expense of longer transfer time. For example, an apogee of 75.8 r_0 = 507,688 km (1.3 times the distance to the Moon) would result in a 1% Δv saving over a Hohmann transfer, but a transit time of 17 days. As an impractical extreme example, an apogee of 1757 r_0 = 11,770,000 km (30 times the distance to the Moon) would result in a 2% Δv saving over a Hohmann transfer, but the transfer would require 4.5 years (and, in practice, be perturbed by the gravitational effects of other solar system bodies). For comparison, the Hohmann transfer requires 15 hours and 34 minutes.
Δv for various orbital transfers
Type         | Hohmann | Bi-elliptic | Bi-elliptic | Bi-elliptic | Bi-elliptic
Apogee (km)  | 93800   | 268000      | 507688      | 11770000    | ∞
Burn 1 (m/s) | 2825.02 | 3061.04     | 3123.62     | 3191.79     | 3194.89
Burn 2 (m/s) | 1308.70 | 608.825     | 351.836     | 16.9336     | 0
Burn 3 (m/s) | 0       | 447.662     | 616.926     | 842.322     | 853.870
Total (m/s)  | 4133.72 | 4117.53     | 4092.38     | 4051.04     | 4048.76
Percentage   | 100%    | 99.6%       | 99.0%       | 98.0%       | 97.94%
(Burns 1 and 2 are applied prograde; burn 3 is applied retrograde.)
Evidently, the bi-elliptic orbit spends more of its delta-v early on (in the first burn). This yields a higher contribution to the specific orbital energy and, due to the Oberth effect, is responsible for the net reduction in required delta-v.
References
1. Vallado, David Anthony (2001). Fundamentals of Astrodynamics and Applications. Springer. p. 318.
2. Sternfeld, Ary J. (1934).
$$ \newcommand{\bsth}{{\boldsymbol\theta}} \newcommand{\va}{\textbf{a}} \newcommand{\vb}{\textbf{b}} \newcommand{\vc}{\textbf{c}} \newcommand{\vd}{\textbf{d}} \newcommand{\ve}{\textbf{e}} \newcommand{\vf}{\textbf{f}} \newcommand{\vg}{\textbf{g}} \newcommand{\vh}{\textbf{h}} \newcommand{\vi}{\textbf{i}} \newcommand{\vj}{\textbf{j}} \newcommand{\vk}{\textbf{k}} \newcommand{\vl}{\textbf{l}} \newcommand{\vm}{\textbf{m}} \newcommand{\vn}{\textbf{n}} \newcommand{\vo}{\textbf{o}} \newcommand{\vp}{\textbf{p}} \newcommand{\vq}{\textbf{q}} \newcommand{\vr}{\textbf{r}} \newcommand{\vs}{\textbf{s}} \newcommand{\vt}{\textbf{t}} \newcommand{\vu}{\textbf{u}} \newcommand{\vv}{\textbf{v}} \newcommand{\vw}{\textbf{w}} \newcommand{\vx}{\textbf{x}} \newcommand{\vy}{\textbf{y}} \newcommand{\vz}{\textbf{z}} \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator\mathProb{\mathbb{P}} \renewcommand{\P}{\mathProb} % need to overwrite stupid paragraph symbol \DeclareMathOperator\mathExp{\mathbb{E}} \newcommand{\E}{\mathExp} \DeclareMathOperator\Uniform{Uniform} \DeclareMathOperator\poly{poly} \DeclareMathOperator\diag{diag} \newcommand{\pa}[1]{ \left({#1}\right) } \newcommand{\ha}[1]{ \left[{#1}\right] } \newcommand{\ca}[1]{ \left\{{#1}\right\} } \newcommand{\norm}[1]{\left\| #1 \right\|} \newcommand{\nptime}{\textsf{NP}} \newcommand{\ptime}{\textsf{P}} \newcommand{\R}{\mathbb{R}} \newcommand{\card}[1]{\left\lvert{#1}\right\rvert} \newcommand{\abs}[1]{\card{#1}} \newcommand{\sg}{\mathop{\mathrm{SG}}} \newcommand{\se}{\mathop{\mathrm{SE}}} \newcommand{\mat}[1]{\begin{pmatrix} #1 \end{pmatrix}} \DeclareMathOperator{\var}{var} \DeclareMathOperator{\cov}{cov} \newcommand\independent{\perp\kern-5pt\perp} \newcommand{\CE}[2]{ \mathExp\left[ #1 \,\middle|\, #2 \right] } \newcommand{\disteq}{\overset{d}{=}} $$
Numpy Gems 1: Approximate Dictionary Encoding and Fast Python Mapping
Welcome to the first installment of Numpy Gems, a deep dive into a library that probably shaped python itself into the language it is today, numpy.
I’ve spoken extensively on numpy (HN discussion), but I think the library is full of delightful little gems that enable perfect instances of API-context fit, the situation where interfaces and algorithmic problem contexts fall in line oh-so-nicely and the resulting code is clean, expressive, and efficient.
What is dictionary encoding?
A dictionary encoding is an efficient way of representing data with lots of repeated values. For instance, take the MovieLens dataset, which contains a list of ratings for a variety of movies.
But the dataset only has around 27K distinct movies for over 20M ratings. If the average movie is rated around 700 times, then it doesn’t make much sense to represent the list of movies for each rating as an array of strings. There’s a lot of needless copies. If we’re trying to build a recommendation engine, then a key part of training is going to involve iterating over these ratings. With so much extra data being transferred between RAM and cache, we’re just asking for our bandwidth to be saturated. Not to mention the gross overuse of RAM in the first place.
That’s why this dataset actually comes with movieIds, and then each rating refers to a movie through its identifier. Then we store a “dictionary” mapping movie identifiers to movie names and their genre metadata. This solves all our problems: no more duplication, no more indirection, much less memory use.
That’s basically it. It’s a very simple encoding, which makes it easy to integrate efficiently in many algorithms. So much so, that many, many libraries natively support dictionary encoding your data–see factors in R and pandas.
Why approximate?
Let’s run with our example. Suppose we have a list of our movie titles, and we’re doing some NLP on them for better recommendations. Usually, that means each of these movies corresponds to some kind of encoding.
Let’s use the built-in pandas categorical dtype, which is a dictionary encoding.
len(titles)  # ---> 20000263
cat_titles = titles.astype(
    pd.api.types.CategoricalDtype(pd.unique(titles)))
len(cat_titles.cat.categories)  # ---> 9260
len(cat_titles.cat.codes)  # ---> 20000263
This stores our data into a densely packed array of integers, the codes, which index into the categories array, which is now a much smaller array of 9K deduplicated strings. But still, if our movie titles correspond to giant floating-point encodings, we’ll still end up shuffling a bunch of memory around. Maybe 9K doesn’t sound so bad to you, but what if we had a larger dataset? Bear with this smaller one for demonstration purposes.
A key observation is that, like most datasets, we’ll observe a power-law like distribution of popularity:
What this means is that we have a long tail of obscure movies that we just don’t care about. In fact, if we’re OK dropping 5% coverage, which won’t affect our performance too much, we can save a bunch of space.
cdf = counts_desc.cumsum() / counts_desc.sum()
np.searchsorted(cdf, [.95, .99, .999, 1])
# ---> array([3204, 5575, 7918, 9259])
Indeed, it looks like dropping the 5% least-popular movies corresponds to needing to support only 1/3 as many movies overall! This can be a huge win, especially if your model considers higher-order interactions (if you like movie X and movie Y, then you might like movie Z). In such models that 1/3 becomes a 1/27th!
How to approximate?
However, if we’re being asked to serve model predictions online or want to train a “catch-all” encoding, then we still need to have a general catch-all “movie title” corresponding to the unknown situation. We have a bunch of dictionary indices in [0, d), like [1, 3, 5, 2, 6, 1, 0, 11]. In total we have n of these. We also have a list of e items we actually care about in our approximate dictionary, say [5, 8, 10, 11], but this might not be a contiguous range.
What we want is an approximate dictionary encoding with a catch-all, namely we want to get a list of n numbers between 0 and e, with e being the catch-all.
In the above example, n = 8, d = 12, e = 4, and the correct result array is [4, 4, 0, 4, 4, 4, 4, 3]. For something like embeddings, it’s clear how this is useful in greatly reducing the number of things we need to represent.
The Gem
The above is actually an instance of a translation problem, in the sense that we have some translation mapping from [0, d) into [0, e] and we’d like to apply it to every item in the array. Like many things in python, this is most efficient when pushed to C. Indeed, for strings, there’s translate that does this.
We’ll consider two dummy distributions, which will either be extremely sparse (d > n) or more typical (d <= n). Both kinds show up in real life. We extract the most popular e of these items (or maybe we have some other metric, not necessarily popularity, that extracts these items of interest). There are more efficient ways of doing the below, but we’re just setting up.
if d < n:
    dindices = np.random.geometric(p=0.01, size=(n - d)) - 1
    dindices = np.concatenate([dindices, np.arange(d)])
    dcounts = np.bincount(dindices)
    selected = dcounts.argsort()[::-1][:e]
else:
    dindices = np.random.choice(d, n // 2)
    frequent = np.random.choice(n, n - n // 2)
    dindices = np.concatenate([dindices, frequent])
    c = Counter(dindices)
    selected = np.asarray(sorted(c, key=c.get, reverse=True)[:e])
Let’s look at the obvious implementation. We’d like to map contiguous integers, so let’s implement a mapping as an array, where the array value at an index is the mapping’s value for that index as input. This is the implementation that pandas uses under the hood when you ask it to change its categorical values.
mapping = np.full(d, e)
mapping[selected] = np.arange(e)
result = np.take(mapping, dindices)
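Plugging the tiny example from earlier into this dense mapping (a quick self-contained sketch with d = 12, e = 4) reproduces the expected codes:

```python
import numpy as np

d, e = 12, 4
selected = np.array([5, 8, 10, 11])
dindices = np.array([1, 3, 5, 2, 6, 1, 0, 11])

mapping = np.full(d, e)           # every index defaults to the catch-all e
mapping[selected] = np.arange(e)  # popular indices get compact codes 0..e-1
result = np.take(mapping, dindices)
print(result)  # [4 4 0 4 4 4 4 3]
```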
As can be seen from the code, we’re going to get burned when d is large, and we can’t take advantage of the fact that e is small. These benchmarks, performed with %%memit and %%timeit jupyter magics on fresh kernels each run, back this sentiment up.
e    | d    | n    | memory (MiB) | time (ms)
10^3 | 10^4 | 10^8 | 763          | 345
10^3 | 10^6 | 10^6 | 11           | 9.62
10^3 | 10^8 | 10^4 | 763          | 210
10   | 10^4 | 10^8 | 763          | 330
10   | 10^6 | 10^6 | 11           | 9.66
10   | 10^8 | 10^4 | 763          | 210
This brings us to our first puzzle and numpy gem. How can we re-write this to take advantage of small e? The trick is to use a sparse representation of our mapping, namely just selected. We can look in this mapping very efficiently, thanks to np.searchsorted. Then with some extra tabulation (using -1 as a sentinel value), all we have to ask is where in selected a given index from dindices was found.
searched = np.searchsorted(selected, dindices)
selected2 = np.append(selected, [-1])
searched[selected2[searched] != dindices] = -1
searched[searched == -1] = e
result = searched
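Running the same tiny example through the searchsorted version (a quick check; note that selected must be sorted for np.searchsorted to work) gives identical codes, without ever materializing a length-d array:

```python
import numpy as np

e = 4
selected = np.array([5, 8, 10, 11])            # must be sorted
dindices = np.array([1, 3, 5, 2, 6, 1, 0, 11])

searched = np.searchsorted(selected, dindices)
selected2 = np.append(selected, [-1])          # sentinel for "not found"
searched[selected2[searched] != dindices] = -1
searched[searched == -1] = e                   # misses map to the catch-all
print(searched)  # [4 4 0 4 4 4 4 3]
```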
A couple interesting things happen up there: we switch our memory usage from linear in d to linear in n, and completely adapt our algorithm to being insensitive to a high number of unpopular values. Certainly, this performs horribly where d is small enough that the mapping above is the clear way to go, but the benchmarks expose an interesting tradeoff frontier:
e    | d    | n    | memory (MiB) | time (ms)
10^3 | 10^4 | 10^8 | 1546         | 5070
10^3 | 10^6 | 10^6 | 13           | 31
10^3 | 10^8 | 10^4 | 0.24         | 0.295
10   | 10^4 | 10^8 | 1573         | 1940
10   | 10^6 | 10^6 | 13           | 17
10   | 10^8 | 10^4 | 0.20         | 0.117
In response to a Quora question, I wrote the following:
[Jordan-]Brans-Dicke theory is actually not consistent with solar system observations unless its dimensionless coupling constant is given an unreasonably large value.
How come, you might ask? After all, the Schwarzschild solution remains a legitimate solution in Brans-Dicke theory! Well... true, but. Whereas in general relativity, there is only one spherically symmetric, static vacuum solution of the theory, in Brans-Dicke theory, there is a whole family of solutions. And not all of them are consistent with the presence of matter. If you take spherically symmetric solutions in the presence of, say, dust and then reduce the dust density to zero, keeping only the central singularity, the solution you get will not be the Schwarzschild solution.
To be more specific, if we write Brans-Dicke theory in the standard form, with the Lagrangian
$$I=\frac{1}{2\kappa}\int d^4x\left(\phi R-\omega\frac{\partial_\mu\phi\partial^\mu\phi}{\phi}\right),$$
(where $\kappa=8\pi G/c^4$, $R$ is the Ricci scalar and $\phi$ is the scalar field), we find that in the first post-Newtonian approximation, the metric will be given by:
\begin{align}g_{00}&=1-2U+2\beta U^2,\\
g_{ij}&=-(1+2\gamma U)\delta_{ij},\end{align}
where $U$ is the Newtonian potential divided by $c^2$ and the Eddington parameters are given by $\beta=1$, $\gamma=(1+\omega)/(2+\omega)$.
In general relativity, $\gamma=1$. Observations (e.g., radio-metric observations from the Cassini probe presently orbiting Saturn) tell us that $|\gamma -1|<2.3\times 10^{-5}$. This is only possible if $|\omega| > 40,000$. Such a large dimensionless parameter is always considered suspect. Not just that, but if you make $|\omega|$ big enough for the theory to work, its predictions become indistinguishable from the predictions of general relativity... and then, as Leo C. Stein suggests, Occam's razor prevails, as we prefer the simpler of two theories (i.e., the one with a smaller parameter space).
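The quoted bound on $\omega$ follows directly from $\gamma=(1+\omega)/(2+\omega)$: since $1-\gamma=1/(2+\omega)$ for positive $\omega$, the Cassini constraint pins down the minimum coupling. A two-line check:

```python
# Cassini constraint: |gamma - 1| < 2.3e-5, with gamma = (1 + w)/(2 + w).
# For positive omega, 1 - gamma = 1/(2 + omega), so the bound requires
# omega > 1/2.3e-5 - 2.
bound = 2.3e-5
omega_min = 1 / bound - 2
print(omega_min)  # ≈ 43476, consistent with the quoted |omega| > 40,000

gamma = (1 + omega_min) / (2 + omega_min)
print(1 - gamma)  # ≈ 2.3e-5, exactly saturating the bound
```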
Answer
Estimated answer = 5 Exact answer = $4\frac{5}{24}$
Work Step by Step
$\frac{5}{8}$ + $3\frac{7}{12}$

Estimate Calculation: $\frac{5}{8} \approx 1$ and $3\frac{7}{12} \approx 4$, therefore $\frac{5}{8} + 3\frac{7}{12} \approx 1 + 4 = 5$.

Exact Calculation:
$\frac{5}{8}$ + $3\frac{7}{12}$
= $\frac{5\times3}{8\times3}$ + $3\frac{7\times2}{12\times2}$ (24 is the least common denominator)
= $\frac{15}{24}$ + $3\frac{14}{24}$
= $3$ + $\frac{15}{24}$ + $\frac{14}{24}$ (taking the whole part and the fractional part separately)
= $3$ + $\frac{15 + 14}{24}$
= $3$ + $\frac{29}{24}$
= $3$ + $\frac{24 + 5}{24}$
= $3$ + $\frac{24}{24} + \frac{5}{24}$
= $3 + 1 + \frac{5}{24}$
= $4\frac{5}{24}$

The estimate was 5, so the exact answer of $4\frac{5}{24}$ is reasonable.
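The exact sum can be double-checked with Python's fractions module (a quick sketch, not part of the original solution):

```python
# Double-checking the exact arithmetic with Python's fractions module.
from fractions import Fraction

result = Fraction(5, 8) + (3 + Fraction(7, 12))
whole = result.numerator // result.denominator   # integer part
frac = result - whole                            # fractional part
print(result, whole, frac)  # 101/24, i.e. 4 and 5/24
```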
I am studying different models of computation and how algorithms can be interpreted under different models.
Here is a math(?) question that has been bugging me.
Suppose we have $n = \Theta(N\log N)$ bits as the size of input and there are N elements in the input. (The implied assumption is each element has $\Theta(\log N)$ bits).
Now we know Insertion Sort has running time of $\Theta(N^2)$ in the worst case with respect to N, the number of elements in the input.
Under the RAM model, the input is $n = \Theta(N\log N)$ bits. So we can say $N = \Theta\left(\frac{n}{\log N}\right)$. I want to show that the running time $T = \Theta\left(\frac{n^2}{\log^2n}\right)$ by first showing that $N = \Theta\left(\frac{n}{\log n}\right)$.
Since $N \leq n$, $N = \Theta\left(\frac{n}{\log N}\right)=\Omega\left(\frac{n}{\log n}\right)$. I am having trouble showing $ N = \Theta\left(\frac{n}{\log N}\right)= O\left(\frac{n}{\log n}\right)$ to be able to use the $\Theta$ notation.
In general, I am not so sure what kinds of manipulations are allowed inside $\Theta(\cdot)$ so that its definition stays intact.
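As a numerical illustration of the claim in question (a sketch, not a proof): taking $n = N\log_2 N$, the ratio $N / (n/\log_2 n)$ settles toward a constant as $N$ grows, consistent with $N = \Theta(n/\log n)$:

```python
# A numerical illustration (not a proof) that N = Theta(n / log n) when
# n = N * log2(N): the ratio N / (n / log2(n)) settles toward a constant.
import math

ratios = []
for N in [2**10, 2**20, 2**40, 2**80]:
    n = N * math.log2(N)          # input size in bits
    ratios.append(N / (n / math.log2(n)))

print(ratios)  # decreasing toward 1
```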
If I have a normally distributed variable $N(\mu,\sigma^2)$ with fixed $\mu$, the conjugate prior for $\lambda:=\frac{1}{\sigma^2}$ is given by the gamma distribution $\propto \lambda^{\alpha-1}\exp(-\lambda\beta)$ (therefore $\sigma^2\sim\sigma^{2(1-\alpha)}\exp\left(\frac{-\beta}{\sigma^2}\right)$). And the posterior is given by $Ga(\lambda\mid\alpha_n,\beta_n)$ with $\alpha_n=\alpha+\frac{n}{2}$ and $\beta_n=\beta+\frac{1}{2}\sum^n_{i=1}(x_i-\mu)^2$.
That means with increasing sample size the $\sigma^2$ decreases, right? Or is that observation wrong?
In my special case I do not expect a decrease in $\sigma^2$ for some reasons. I expect the opposite for $\sigma^2$. That means $\sigma^2$ should converge against a value greater than zero ($\approx 50$).
Is there a trick or a similar prior which reflects the described behavior? That the variance can be far away from zero?
(2) I forgot... I have a lot of prior knowledge so I can simplify the issue as follows:
$z\sim N(\mu,\tau\cdot\sigma_{fix}^2)$. In this notation I "just" have to estimate $\tau$. But it is enough to estimate it on a logarithmic scale $(0.001, 0.1, 1, 10, 100, 1000)$. What could be the prior distribution for $\tau$ in this case?
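A minimal simulation sketch of the conjugate update may help clarify the first question (the setup here, with $\mu=0$, true $\sigma^2=50$, and a weak Gamma(1, 1) prior, is assumed for illustration and not taken from the question): the posterior mean of $\sigma^2$ concentrates near the sample variance, not near zero, so a larger sample does not by itself drive $\sigma^2$ toward zero.

```python
# Conjugate Gamma prior on the precision lambda = 1/sigma^2, with known mu.
# Assumed illustration values: mu = 0, true sigma^2 = 50, Gamma(1, 1) prior.
import random

random.seed(0)
mu, true_var = 0.0, 50.0
alpha, beta = 1.0, 1.0   # assumed weak prior hyperparameters
x = [random.gauss(mu, true_var ** 0.5) for _ in range(10000)]

# Conjugate update for the precision
alpha_n = alpha + len(x) / 2
beta_n = beta + 0.5 * sum((xi - mu) ** 2 for xi in x)

# The implied posterior of sigma^2 is inverse-gamma with mean beta_n / (alpha_n - 1)
post_mean_var = beta_n / (alpha_n - 1)
print(post_mean_var)  # approximately 50
```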
Edit:
Hilbert derived the EFE before Einstein, but using the EH action as a postulate. Einstein took the EFE as a postulate.
I think the matter is only the formulation of General Relativity; the postulates made.
Hilbert's postulate was elegantly simple. It was only that
$$\mathcal L_G=\lambda R$$
I.e. that the Lagrangian density of gravity was proportional to the Ricci scalar. From then on, he applied Hamilton's principle to this Lagrangian and found that the field equations for gravity are $$G_{\mu\nu}=\kappa T_{\mu\nu}$$ And then, he showed how this is a better theory of gravity etc. etc.
Einstein's postulate was not so elegant or simple, in my opinion. However, many hold that it was simpler. His postulate was the field equation itself! I.e that
$$G_{\mu\nu}=\kappa T_{\mu\nu}$$
The reason why people hold this to be very elegant is that $\nabla^\mu G_{\mu\nu}=0$ and $\nabla^\mu T_{\mu\nu}=0$.
So, it is no different than whether Einstein or Minkowski should get the credit. Minkowski did it the elegant way.
Similarly, Hilbert did it the elegant way.
Edit:
That Einstein's postulate was the EFE has its reference in Einstein's original GR paper "On the Foundations of the General Theory of Relativity". As for Hilbert's postulate being the EH Action, I think it was from some book... Can't remember. Note:
P.S. The fact that Einstein's postulate itself was the EFE means that he first wrote down the EFE. The fact that Hilbert's postulate itself was the EH Action means that he first wrote down the EH Action, (that is, 5 days before Einstein wrote down the EFE).
P.P.S. By "wrote down", I mean "published"; who knows whether Einstein/Hilbert actually wrote down the action/FE, or forced someone else to write it for them? Who knows...
P.P.P.S. Another important point of this answer is that there is no controversy, since it is just the formulation that matters, like Einstein&Minkowski, Newton&Leibniz, Heisenberg&Schrodinger&Feynman&somerandompersonwhodiscoveredthevariationalformulation&..., etc. Because the OP seems to think there's a big controversy... But there's not, at least there shouldn't be.
Here is how we can recover Newton's laws for gravitation from General Relativity.
We begin with a metric $g_{\mu\nu}$ that is a perturbation $h_{\mu\nu}$ (with $|h_{\mu\nu}|\ll 1$) of the Minkowski-metric $\eta_{\mu\nu}$:
$$g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}.$$
Further, we assume that the gravitational field is approximately static, hence time derivatives are zero.
The starting point is Einstein's field equations for gravity:
$$R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi GT_{\mu\nu},$$
with the Ricci-tensor given by $R_{\mu\nu}=\partial_\alpha\Gamma^\alpha_{\mu\nu}-\partial_\nu\Gamma^\alpha_{\mu\alpha}+\Gamma^\alpha_{\mu\nu}\Gamma^\beta_{\alpha\beta}-\Gamma^\alpha_{\mu\beta}\Gamma^\beta_{\alpha\nu}$, $\Gamma_{\mu\nu}^\alpha=\frac{1}{2}g^{\alpha\beta}(\partial_\mu g_{\nu\beta}+\partial_\nu g_{\mu\beta}-\partial_\beta g_{\mu\nu})$ are the Christoffel-symbols associated with $g_{\mu\nu}$, and the metric signature is $[+,-,-,-]$.
In the weak field, the Ricci-tensor simplifies to
$$R_{\mu\nu}\simeq\partial_\alpha\Gamma^\alpha_{\mu\nu}-\partial_\nu\Gamma^\alpha_{\mu\alpha}.$$
Moreover,
$$\Gamma_{\mu\nu}^\alpha\simeq \frac{1}{2}\eta^{\alpha\beta}(\partial_\mu h_{\nu\beta}+\partial_\nu h_{\mu\beta}-\partial_\beta h_{\mu\nu}),$$
and
$$R\simeq\eta^{\mu\nu}R_{\mu\nu}.$$
With these approximations, the field equations now read
$$2\partial_\alpha\Gamma^\alpha_{\mu\nu}-2\partial_\nu\Gamma^\alpha_{\mu\alpha}-\eta_{\mu\nu}\eta^{\kappa\lambda}\partial_\alpha\Gamma^\alpha_{\kappa\lambda}+\eta_{\mu\nu}\eta^{\kappa\lambda}\partial_\lambda\Gamma^\alpha_{\kappa\alpha}=16\pi GT_{\mu\nu},$$
or
$$\eta^{\alpha\beta}(\partial_\alpha\partial_\mu h_{\nu\beta}+\partial_\alpha\partial_\nu h_{\mu\beta}-\partial_\alpha\partial_\beta h_{\mu\nu}-\partial_\mu\partial_\nu h_{\alpha\beta}-\eta_{\mu\nu}\eta^{\kappa\lambda}\partial_\alpha\partial_\kappa h_{\lambda\beta}+\eta_{\mu\nu}\eta^{\kappa\lambda}\partial_\alpha\partial_\beta h_{\kappa\lambda})=16\pi GT_{\mu\nu}.$$
For simplicity, we introduce
$$\bar{h}_{\mu\nu}=h_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}\eta^{\alpha\beta}h_{\alpha\beta}.$$
Then we get
$$
\eta^{\alpha\beta}\partial_\alpha\partial_\mu\bar{h}_{\nu\beta} +\eta^{\alpha\beta}\partial_\alpha\partial_\nu\bar{h}_{\mu\beta} -\eta^{\alpha\beta}\partial_\alpha\partial_\beta\bar{h}_{\mu\nu} -\eta^{\alpha\beta}\eta_{\mu\nu}\eta^{\kappa\lambda}\partial_\alpha\partial_\kappa\bar{h}_{\lambda\beta} =16\pi GT_{\mu\nu}.$$
We know that without loss of generality, we can impose the Lorenz-gauge:
$$\eta^{\alpha\beta}\partial_\alpha\bar{h}_{\nu\beta}=0.$$
When we do so, the field equations become just
$$-\eta^{\alpha\beta}\partial_\alpha\partial_\beta\bar{h}_{\mu\nu}=16\pi GT_{\mu\nu}.$$
When the metric is static ($\partial_0\bar{h}_{\mu\nu}=0$), we get
$$\nabla^2\bar{h}_{\mu\nu}=16\pi GT_{\mu\nu},$$
where $\nabla^2$ is the three-dimensional Laplace operator.
Now let $h_{\mu\nu}=\mathrm{diag}[2\phi,2\phi,2\phi,2\phi]$. In this case, $\bar{h}_{\mu\nu}=h_{\mu\nu}+2\phi\eta_{\mu\nu}=\mathrm{diag}[4\phi,0,0,0]$, and recognizing that $T_{00}=\rho$ is just the matter density, the field equations turn into Poisson's equation for gravity:
$$\nabla^2\phi=4\pi G\rho.$$
All other components of $T_{\mu\nu}$ are zero. The equations $T_{0i}=0$, $T_{ij}=0~(i,j=1..3)$ state that in this approximation, insofar as gravity is concerned, momenta, pressure and stresses are negligible.
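The trace-reversal step can be checked with a few lines of arithmetic (a sketch; the value of $\phi$ is arbitrary): with $\eta=\mathrm{diag}(1,-1,-1,-1)$ and $h_{\mu\nu}=\mathrm{diag}(2\phi,2\phi,2\phi,2\phi)$, the quantity $\bar h_{\mu\nu}=h_{\mu\nu}-\frac12\eta_{\mu\nu}\eta^{\alpha\beta}h_{\alpha\beta}$ indeed comes out as $\mathrm{diag}(4\phi,0,0,0)$:

```python
# A small pure-Python check of the trace-reversal algebra:
# hbar_{mu nu} = h_{mu nu} - (1/2) * eta_{mu nu} * (eta^{ab} h_{ab})
# should equal diag(4*phi, 0, 0, 0) for h = diag(2*phi, 2*phi, 2*phi, 2*phi).
phi = 0.1  # arbitrary test value for the potential

eta = [1.0, -1.0, -1.0, -1.0]  # diagonal Minkowski metric (equal to its inverse)
h = [2 * phi] * 4              # diagonal of the perturbation h_{mu nu}

trace = sum(eta[a] * h[a] for a in range(4))          # eta^{ab} h_{ab} = -4*phi
hbar = [h[m] - 0.5 * eta[m] * trace for m in range(4)]
print(hbar)  # approximately [0.4, 0.0, 0.0, 0.0]
```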
We can also derive the Newtonian acceleration law directly. We begin with the geodesic equation of motion, itself a direct consequence of Einstein's field equations:
$$\frac{d^2x^\alpha}{d\tau^2}+\Gamma_{\mu\nu}^\alpha\frac{dx^\mu}{d\tau}\frac{dx^\nu}{d\tau}=0,$$
where $x^\alpha$ is the 4-vector describing a test particle and $\tau$ is proper time. When the motion is slow, we can neglect $dx^i/d\tau$ ($i=1..3$) compared to $dt/d\tau$ ($t=x^0$):
$$\frac{d^2x^\alpha}{d\tau^2}+\Gamma_{00}^\alpha\left(\frac{dt}{d\tau}\right)^2=0.$$
In a stationary gravitational field, all time derivatives vanish, thus $\Gamma^\mu_{00}=-\frac{1}{2}g^{\mu\nu}\partial g_{00}/\partial x^\nu$. Once again writing the metric in the form $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$, we have, approximately,
$$\Gamma^\mu_{00}=-\frac{1}{2}\eta^{\mu\nu}\frac{\partial h_{00}}{\partial x^\nu}.$$
The temporal term in this equation amounts to $dt/d\tau$ being constant in time, which is as it should be, given the static metric. Let $h_{\mu\nu}=\mathrm{diag}(2\phi,2\phi,2\phi,2\phi)$. Then the rest of the geodesic equation reads
$$\frac{d^2\vec{x}}{dt^2}=-\nabla\phi,$$
which is just the Newtonian gravitational acceleration law in a potential given by $\phi$.
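As a closing numerical sketch (assuming a point-mass potential $\phi=-GM/r$, which is an illustrative choice and not part of the derivation above), the recovered law $d^2\vec{x}/dt^2=-\nabla\phi$ reproduces the familiar inverse-square attraction:

```python
# Central differences of phi = -GM/r reproduce a_i = -GM * x_i / r^3.
GM = 1.0  # assumed units with GM = 1

def phi(x, y, z):
    return -GM / (x * x + y * y + z * z) ** 0.5

def grad_phi(x, y, z, h=1e-6):
    # central finite differences for each component of grad(phi)
    return [
        (phi(x + h, y, z) - phi(x - h, y, z)) / (2 * h),
        (phi(x, y + h, z) - phi(x, y - h, z)) / (2 * h),
        (phi(x, y, z + h) - phi(x, y, z - h)) / (2 * h),
    ]

p = (1.0, 2.0, 2.0)                      # a point at radius r = 3
a_numeric = [-g for g in grad_phi(*p)]   # -grad(phi)
a_exact = [-GM * c / 27.0 for c in p]    # -GM * x_i / r^3 with r^3 = 27
print(a_numeric, a_exact)
```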
Some input:
Kimball assumes a constant-returns-to-scale (CRS) production of the final good $Y$ from the intermediate goods (and no other inputs are involved in this function). Turn to discrete space for a moment, and this means that we would have something like
$$Y = F(y_1,...,y_l,...,y_m)$$
Since this is a CRS function we have
$$1 = F\left(\frac {y_1}{Y},...,\frac {y_l}{Y},...\frac {y_m}{Y}\right)$$
But also, from Euler's theorem for homogeneous functions we have
$$Y = \sum_{i=1}^m \frac {\partial F}{\partial y_i}\cdot y_i \implies 1 = \sum_{i=1}^m \frac {\partial F}{\partial y_i}\cdot\frac{ y_i}{Y}$$
Combining and manipulating the index into $[0,1]$-continuity we get something like
$$1 = F\left(\frac {y_1}{Y},...,\frac {y_l}{Y},...\frac {y_m}{Y}\right) =\sum_{i=1}^m \frac {\partial F}{\partial y_i}\cdot\frac{ y_i}{Y}\rightarrow \int_0^1\left[\frac {\partial F}{\partial y_l}\cdot\frac{ y_l}{Y}\right]{\rm d}l$$
In a sense, $G(y_l/Y)$ is the elasticity of final output with respect to the $l$-th intermediate good. Given the assumptions on $G()$, it rules out a Cobb-Douglas CRS production function, where the elasticities not only sum to unity but are constant, and it points instead toward, say, a CES production function with constant returns to scale, where the elasticities are variable but always sum to unity.
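Euler's theorem can be illustrated numerically (a sketch with an assumed CES aggregator and made-up share parameters, not Kimball's general $G$): for any CRS production function, the elasticities $\frac{\partial F}{\partial y_i}\frac{y_i}{Y}$ sum to one.

```python
# For an assumed CES aggregator F(y) = (sum_i a_i * y_i^rho)^(1/rho), the
# output elasticities (dF/dy_i) * (y_i / Y) sum to one (Euler's theorem).
rho = 0.5
a = [0.2, 0.3, 0.5]   # assumed share parameters
y = [1.0, 2.0, 4.0]   # assumed intermediate-good quantities

def F(q):
    return sum(ai * qi ** rho for ai, qi in zip(a, q)) ** (1 / rho)

Y = F(y)
h = 1e-7
elasticities = []
for i in range(len(y)):
    bumped = list(y)
    bumped[i] += h
    dF_dyi = (F(bumped) - Y) / h        # forward-difference derivative
    elasticities.append(dF_dyi * y[i] / Y)

print(sum(elasticities))  # approximately 1
```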
I'm experimenting with two coils. I drive them with two MOSFETs; one consumes around 1 A and the second ~0.7 A (12 V power supply able to deliver 15 A). While the coils interact with permanent magnets, repelling and attracting them as expected, I observe no interaction when I bring the two coils close together. I mean not even a hint of movement. Unfortunately I don't have a way to take a picture of these; the first one is around 1 cm in diameter with 50 turns and the second around 2 cm in diameter with 50 turns. Is the power very limited? How should I estimate the electromagnet's field strength in teslas, and how can I measure a permanent magnet's (small neodymium) magnetic force?
There will be some interaction but very small compared to using a permanent magnet and a coil. The magnetic field produced by the coils will be individually weak hence no apparent interaction but the magnetic field strength from a permanent magnet will be massive in comparison and this is enough to produce a noticeable effect.
Have you tried winding the coils around an iron core to increase the flux density? Here's a formula that should give you some general idea about the force from an electromagnet acting on a piece of magnetizable metal: -
Force = \$(N\cdot I)^2\cdot 4\pi 10^{-7}\cdot \dfrac{A}{2g^2}\$
F = force, I = current, N = number of turns, g = length of the gap between the solenoid and the magnetizable metal, A = area
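To get a feel for the numbers, here is a rough estimate using the formula above (a sketch: the turn count and current match the question, but the core face area and the 5 mm gap are assumed values for illustration):

```python
# Rough force estimate for a small air-core electromagnet.
import math

def electromagnet_force(N, I, area, gap):
    # F = (N*I)^2 * mu0 * A / (2 * g^2), with mu0 = 4*pi*1e-7 H/m
    mu0 = 4 * math.pi * 1e-7
    return (N * I) ** 2 * mu0 * area / (2 * gap ** 2)

area = math.pi * 0.005 ** 2   # ~1 cm diameter coil face (assumed)
F = electromagnet_force(N=50, I=1.0, area=area, gap=0.005)
print(F)  # a few millinewtons: far too weak to notice between two bare coils
```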
Detail above taken from here
As of November, 2018, I have been working at Quansight. Quansight is a new startup founded by the same people who started Anaconda, which aims to connect companies and open source communities, and offers consulting, training, support and mentoring services. I work under the heading of Quansight Labs. Quansight Labs is a public-benefit division of Quansight. It provides a home for a "PyData Core Team" which consists of developers, community managers, designers, and documentation writers who build open-source technology and grow open-source communities around all aspects of the AI and Data Science workflow.
My work at Quansight is split between doing open source consulting for various companies, and working on SymPy. SymPy, for those who do not know, is a symbolic mathematics library written in pure Python. I am the lead maintainer of SymPy.
In this post, I will detail some of the open source work that I have done recently, both as part of my open source consulting, and as part of my work on SymPy for Quansight Labs.
Bounds Checking in Numba
As part of work on a client project, I have been working on contributing code to the numba project. Numba is a just-in-time compiler for Python. It lets you write native Python code and, with the use of a simple @jit decorator, the code will be automatically sped up using LLVM. This can result in code that is up to 1000x faster in some cases:
In [1]: import numba

In [2]: import numpy

In [3]: def test(x):
   ...:     A = 0
   ...:     for i in range(len(x)):
   ...:         A += i*x[i]
   ...:     return A
   ...:

In [4]: @numba.njit
   ...: def test_jit(x):
   ...:     A = 0
   ...:     for i in range(len(x)):
   ...:         A += i*x[i]
   ...:     return A
   ...:

In [5]: x = numpy.arange(1000)

In [6]: %timeit test(x)
249 µs ± 5.77 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [7]: %timeit test_jit(x)
336 ns ± 0.638 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

In [8]: 249/.336
Out[8]: 741.0714285714286
Numba only works for a subset of Python code, and primarily targets code that uses NumPy arrays.
Numba, with the help of LLVM, achieves this level of performance through many optimizations. One thing that it does to improve performance is to remove all bounds checking from array indexing. This means that if an array index is out of bounds, instead of receiving an IndexError, you will get garbage, or possibly a segmentation fault.
>>> import numpy as np
>>> from numba import njit
>>> def outtabounds(x):
...     A = 0
...     for i in range(1000):
...         A += x[i]
...     return A
>>> x = np.arange(100)
>>> outtabounds(x)  # pure Python/NumPy behavior
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 4, in outtabounds
IndexError: index 100 is out of bounds for axis 0 with size 100
>>> njit(outtabounds)(x)  # the default numba behavior
-8557904790533229732
In numba pull request #4432, I am working on adding a flag to @njit that will enable bounds checks for array indexing. This will remain disabled by default for performance purposes, but you will be able to enable it by passing boundscheck=True to @njit, or by setting the NUMBA_BOUNDSCHECK=1 environment variable. This will make it easier to detect out of bounds issues like the one above. It will work like
>>> @njit(boundscheck=True)
... def outtabounds(x):
...     A = 0
...     for i in range(1000):
...         A += x[i]
...     return A
>>> x = np.arange(100)
>>> outtabounds(x)  # numba behavior in my pull request #4432
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: index is out of bounds
The pull request is still in progress, and many things such as the quality of the error message reporting will need to be improved. This should make debugging issues easier for people who write numba code once it is merged.
removestar
removestar is a new tool I wrote to automatically replace import * in Python modules with explicit imports.
For those who don't know, Python's import statement supports so-called "wildcard" or "star" imports, like
from sympy import *
This will import every public name from the sympy module into the current namespace. This is often useful because it saves on typing every name that is used in the import line. This is especially useful when working interactively, where you just want to import every name and minimize typing.
However, doing from module import * is generally frowned upon in Python. It is considered acceptable when working interactively at a python prompt, or in __init__.py files (removestar skips __init__.py files by default).
Some reasons why import * is bad:
It hides which names are actually imported. It is difficult both for human readers and static analyzers such as pyflakes to tell where a given name comes from when import * is used. For example, pyflakes cannot detect unused names (for instance, from typos) in the presence of import *.
If there are multiple import * statements, it may not be clear which names come from which module. In some cases, both modules may have a given name, but only the second import will end up being used. This can break people's intuition that the order of imports in a Python file generally does not matter.
import * often imports more names than you would expect. Unless the module you import defines __all__ or carefully dels unused names at the module level, import * will import every public (doesn't start with an underscore) name defined in the module file. This can often include things like standard library imports or loop variables defined at the top level of the file. For imports from modules (from __init__.py), from module import * will include every submodule defined in that module. Using __all__ in modules and __init__.py files is also good practice, as these things are also often confusing even for interactive use where import * is acceptable.
In Python 3, import * is syntactically not allowed inside of a function definition.
Here are some official Python references stating not to use import * in files:
In general, don’t use from modulename import *. Doing so clutters the importer’s namespace, and makes it much harder for linters to detect undefined names.
PEP 8 (the official Python style guide):
Wildcard imports (from <module> import *) should be avoided, as they make it unclear which names are present in the namespace, confusing both readers and many automated tools.
Unfortunately, if you come across a file in the wild that uses import *, it can be hard to fix it, because you need to find every name in the file that is imported from the * and manually add an import for it. Removestar makes this easy by finding which names come from * imports and replacing the import lines in the file automatically.
As an example, suppose you have a module mymod like
mymod/
  __init__.py
  a.py
  b.py
with
# mymod/a.py
from .b import *

def func(x):
    return x + y
and
# mymod/b.py
x = 1
y = 2
Then removestar works like:
$ removestar -i mymod/
$ cat mymod/a.py
# mymod/a.py
from .b import y

def func(x):
    return x + y
The -i flag causes it to edit a.py in-place. Without it, it would just print a diff to the terminal.
For implicit star imports and explicit star imports from the same module, removestar works statically, making use of pyflakes. This means none of the code is actually executed. For external imports, it is not possible to work statically, as external imports may include C extension modules, so in that case, it imports the names dynamically.
removestar can be installed with pip or conda:
pip install removestar
or if you use conda
conda install -c conda-forge removestar

sphinx-math-dollar
In SymPy, we make heavy use of LaTeX math in our documentation. For example, in our special functions documentation, most special functions are defined using a LaTeX formula, like
However, the source for this math in the docstring of the function uses RST syntax:
class besselj(BesselBase):
    """
    Bessel function of the first kind.

    The Bessel `J` function of order `\nu` is defined to be the function
    satisfying Bessel's differential equation

    .. math ::
        z^2 \frac{\mathrm{d}^2 w}{\mathrm{d}z^2}
        + z \frac{\mathrm{d}w}{\mathrm{d}z} + (z^2 - \nu^2) w = 0,

    with Laurent expansion

    .. math ::
        J_\nu(z) = z^\nu \left(\frac{1}{\Gamma(\nu + 1) 2^\nu} + O(z^2) \right),

    if :math:`\nu` is not a negative integer. If :math:`\nu=-n \in \mathbb{Z}_{<0}`
    *is* a negative integer, then the definition is

    .. math ::
        J_{-n}(z) = (-1)^n J_n(z).
Furthermore, in SymPy's documentation we have configured it so that text between `single backticks` is rendered as math. This was originally done for convenience, as the alternative way is to write :math:`\nu` every time you want to use inline math. But this has led to many people being confused, as they are used to Markdown where `single backticks` produce code.
A better way to write this would be if we could delimit math with dollar signs, like $\nu$. This is how things are done in LaTeX documents, as well as in things like the Jupyter notebook.
With the new sphinx-math-dollar Sphinx extension, this is now possible. Writing $\nu$ produces $\nu$, and the above docstring can now be written as
class besselj(BesselBase):
    """
    Bessel function of the first kind.

    The Bessel $J$ function of order $\nu$ is defined to be the function
    satisfying Bessel's differential equation

    .. math ::
        z^2 \frac{\mathrm{d}^2 w}{\mathrm{d}z^2}
        + z \frac{\mathrm{d}w}{\mathrm{d}z} + (z^2 - \nu^2) w = 0,

    with Laurent expansion

    .. math ::
        J_\nu(z) = z^\nu \left(\frac{1}{\Gamma(\nu + 1) 2^\nu} + O(z^2) \right),

    if $\nu$ is not a negative integer. If $\nu=-n \in \mathbb{Z}_{<0}$
    *is* a negative integer, then the definition is

    .. math ::
        J_{-n}(z) = (-1)^n J_n(z).
We also plan to add support for $$double dollars$$ for display math, so that .. math:: is no longer needed either.
For end users, the documentation on docs.sympy.org will continue to render exactly the same, but for developers, it is much easier to read and write.
This extension can be easily used in any Sphinx project. Simply install it with pip or conda:
pip install sphinx-math-dollar
or
conda install -c conda-forge sphinx-math-dollar
Then enable it in your
conf.py:
extensions = ['sphinx_math_dollar', 'sphinx.ext.mathjax']

Google Season of Docs
The above work on sphinx-math-dollar is part of work I have been doing to improve the tooling around SymPy's documentation. This has been to assist our technical writer Lauren Glattly, who is working with SymPy for the next three months as part of the new Google Season of Docs program. Lauren's project is to improve the consistency of our docstrings in SymPy. She has already identified many key ways our docstring documentation can be improved, and is currently working on a style guide for writing docstrings. Some of the issues that Lauren has identified require improved tooling around the way the HTML documentation is built to fix. So some other SymPy developers and I have been working on improving this, so that she can focus on the technical writing aspects of our documentation.
Lauren has created a draft style guide for documentation at https://github.com/sympy/sympy/wiki/SymPy-Documentation-Style-Guide. Please take a moment to look at it and if you have any feedback on it, email me or write to the SymPy mailing list. |
How do I use a covering map $p \times p : \mathbb{R}\times \mathbb{R} \to S^1 \times S^1$, where $p = (\cos2\pi x, \sin2\pi x)$ to compute the fundamental group of a torus?
You certainly know $\pi_1(S^1\times S^1)=\mathbb{Z}\times\mathbb{Z}$. This is what you want to show. First you should prove that your map gives a universal covering. Now there are at least two possibilities:
1.) Use:
If $p\times p:\mathbb{R}\times\mathbb{R}\to S^1\times S^1$ is the universal cover, then $\pi_1(S^1\times S^1)=Homeo_{S^1\times S^1}(\mathbb{R}\times \mathbb{R})$, where the last term describes the group of homeomorphisms of $\mathbb{R}\times \mathbb{R}$ commuting with $p\times p$ (not fixing a basepoint $x_0$).
This has something to do with the deck transformation group (cf. https://en.wikipedia.org/wiki/Covering_space#Deck_transformation_group.2C_regular_covers).
2.) Try to adapt the proof of the fundamental group of $S^1$ to your problem, i.e. you should prove: the map $\mathbb{Z}\times\mathbb{Z}\to\pi_1(S^1\times S^1,(1,1))$, $(m,n)\mapsto [t\mapsto(e^{2\pi im t},e^{2\pi i n t})]$ is an isomorphism of groups. The proof can be seen in Hatcher's textbook. As in that proof, you should work with the lifting property. (By the way: the loops given above are called torus knots.)
Hint:
Use the fact that $\dfrac d{dx}\left(\ln u\right) = \dfrac {u'}u$. And so we have
$$\int \frac{du}{u} = \ln|u| + c$$
$$\int\frac{dt}{1+t}$$ Correctly, you let $u = 1 + t,\quad \,du = dt$.
This gives us $$\int \frac{du}{u}$$
I trust you can take it from here?!
Note: You can either
change the bounds of integration by replacing the lower bound with $u$ evaluated at $t = 0$ and replacing the upper bound with $u$ evaluated at $t = e^3 - 1$, thereby keeping all subsequent work in terms of $u$, or you can integrate (as you would an indefinite integral) with respect to $u$, back-substitute by replacing $u$ in the result with $1 + t$, and then evaluate that at the original bounds.
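As a quick numeric sanity check (a sketch assuming the bounds $0$ to $e^3-1$ mentioned above), the integral evaluates to $\ln(e^3)=3$:

```python
# Midpoint rule for the integral of 1/(1 + t) on [0, e^3 - 1]; the result
# should match ln(1 + t) evaluated at the bounds, i.e. exactly 3.
import math

a, b = 0.0, math.e ** 3 - 1
n = 200_000
dx = (b - a) / n
total = sum(1.0 / (1.0 + a + (k + 0.5) * dx) for k in range(n)) * dx

print(total, math.log(1 + b))  # both approximately 3
```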
LEARNING OBJECTIVES
By the end of this module, you will be able to:
Describe the properties, preparation, and uses of the noble gases
The elements in group 18 are the noble gases (helium, neon, argon, krypton, xenon, and radon). They earned the name "noble" because they were assumed to be nonreactive since they have filled valence shells. In 1962, Dr. Neil Bartlett at the University of British Columbia proved this assumption to be false.
These elements are present in the atmosphere in small amounts. Some natural gas contains 1–2% helium by mass. Helium is isolated from natural gas by liquefying the condensable components, leaving only helium as a gas. The United States possesses most of the world’s commercial supply of this element in its helium-bearing gas fields. Argon, neon, krypton, and xenon come from the fractional distillation of liquid air. Radon comes from other radioactive elements. More recently, it was observed that this radioactive gas is present in very small amounts in soils and minerals. Its accumulation in well-insulated, tightly sealed buildings, however, constitutes a health hazard, primarily lung cancer.
The boiling points and melting points of the noble gases are extremely low relative to those of other substances of comparable atomic or molecular masses. This is because only weak London dispersion forces are present, and these forces can hold the atoms together only when molecular motion is very slight, as it is at very low temperatures. Helium is the only substance known that does not solidify on cooling at normal pressure. It remains liquid close to absolute zero (0.001 K) at ordinary pressures, but it solidifies under elevated pressure.
Helium is used for filling balloons and lighter-than-air craft because it does not burn, making it safer to use than hydrogen. Helium at high pressures is not a narcotic like nitrogen. Thus, mixtures of oxygen and helium are important for divers working under high pressures. Using a helium-oxygen mixture avoids the disoriented mental state known as nitrogen narcosis, the so-called rapture of the deep. Helium is important as an inert atmosphere for the melting and welding of easily oxidizable metals and for many chemical processes that are sensitive to air.
Liquid helium (boiling point, 4.2 K) is an important coolant to reach the low temperatures necessary for cryogenic research, and it is essential for achieving the low temperatures necessary to produce superconduction in traditional superconducting materials used in powerful magnets and other devices. This cooling ability is necessary for the magnets used for magnetic resonance imaging, a common medical diagnostic procedure. The other common coolant is liquid nitrogen (boiling point, 77 K), which is significantly cheaper.
Neon is a component of neon lamps and signs. Passing an electric spark through a tube containing neon at low pressure generates the familiar red glow of neon. It is possible to change the color of the light by mixing argon or mercury vapor with the neon or by utilizing glass tubes of a special color.
Argon was useful in the manufacture of gas-filled electric light bulbs, where its lower heat conductivity and chemical inertness made it preferable to nitrogen for inhibiting the vaporization of the tungsten filament and prolonging the life of the bulb. Fluorescent tubes commonly contain a mixture of argon and mercury vapor. Argon is the third most abundant gas in dry air.
Krypton-xenon flash tubes are used to take high-speed photographs. An electric discharge through such a tube gives a very intense light that lasts only [latex]\frac{1}{50,000}[/latex] of a second. Krypton forms a difluoride, KrF2, which is thermally unstable at room temperature.
Stable compounds of xenon form when xenon reacts with fluorine. Xenon difluoride, XeF2, forms after heating an excess of xenon gas with fluorine gas and then cooling. The material forms colorless crystals, which are stable at room temperature in a dry atmosphere. Xenon tetrafluoride, XeF4, and xenon hexafluoride, XeF6, are prepared in an analogous manner, with a stoichiometric amount of fluorine and an excess of fluorine, respectively. Compounds with oxygen are prepared by replacing fluorine atoms in the xenon fluorides with oxygen.
When XeF6 reacts with water, a solution of XeO3 results and the xenon remains in the 6+ oxidation state:
[latex]{\text{XeF}}_{6}\left(s\right)+{\text{3H}}_{2}\text{O}\left(l\right)\rightarrow{\text{XeO}}_{3}\left(aq\right)+\text{6HF}\left(aq\right)[/latex]
Dry, solid xenon trioxide, XeO3, is extremely explosive—it will spontaneously detonate. Both XeF6 and XeO3 disproportionate in basic solution, producing xenon, oxygen, and salts of the perxenate ion, [latex]{\text{XeO}}_{6}{}^{4-},[/latex] in which xenon reaches its maximum oxidation state of 8+.
Radon apparently forms RnF2—evidence of this compound comes from radiochemical tracer techniques.
Unstable compounds of argon form at low temperatures, but stable compounds of helium and neon are not known.
Key Concepts and Summary
The most significant property of the noble gases (group 18) is their inactivity. They occur in low concentrations in the atmosphere. They find uses as inert atmospheres, neon signs, and as coolants. The three heaviest noble gases react with fluorine to form fluorides. The xenon fluorides are the best characterized as the starting materials for a few other noble gas compounds.
Chemistry End of Chapter Exercises

1. Give the hybridization of xenon in each of the following. You may wish to review the chapter on the advanced theories of covalent bonding.
XeF2, XeF4, XeO3, XeO4, XeOF4, XeF

2. What is the molecular structure of each of the following molecules? You may wish to review the chapter on chemical bonding and molecular geometry.
XeF2, XeF4, XeO3, XeO4, XeOF4, XeF

3. Indicate whether each of the following molecules is polar or nonpolar. You may wish to review the chapter on chemical bonding and molecular geometry.
XeF2, XeF4, XeO3, XeO4, XeOF4, XeF

4. What is the oxidation state of the noble gas in each of the following? You may wish to review the chapter on chemical bonding and molecular geometry.
XeO2F2, KrF2, [latex]{\text{XeF}}_{3}{}^{+}[/latex], [latex]{\text{XeO}}_{6}{}^{4-}[/latex], XeO3, XeO

5. A mixture of xenon and fluorine was heated. A sample of the white solid that formed reacted with hydrogen to yield 81 mL of xenon (at STP) and hydrogen fluoride, which was collected in water, giving a solution of hydrofluoric acid. The hydrofluoric acid solution was titrated, and 68.43 mL of 0.3172 M sodium hydroxide was required to reach the equivalence point. Determine the empirical formula for the white solid and write balanced chemical equations for the reactions involving xenon.
Step 1: The white solid that formed must be a xenon fluoride, so write a tentative set of equations describing the reactions:
[latex]\begin{array}{l}\\ \\ \\ \text{Xe}+m{\text{F}}_{2}\stackrel{\phantom{\rule{0.4em}{0ex}}\Delta\phantom{\rule{0.4em}{0ex}}}{\to }{\text{XeF}}_{2m}\\ {\text{XeF}}_{2m}+m{\text{H}}_{2}\rightarrow 2m\text{HF}+\text{Xe}\end{array}[/latex]
where
m = stoichiometric coefficient of F2.
Step 2: Calculate the moles of HF from the titration data:
[latex]0.06841\cancel{\text{L NaOH}}\times \frac{0.3172\cancel{\text{mol NaOH}}}{\cancel{\text{L NaOH}}}\times \frac{\text{1 mol HF}}{1\cancel{\text{mol NaOH}}}=\text{0.02171 mol HF}[/latex]
Step 3: Calculate the moles of Xe from the ideal gas law:
[latex]n=\frac{PV}{RT}=\frac{\text{(1 atm)}\text{(0.081 L)}}{\left(\text{0.08206 L atm}{\text{K}}^{-1}{\text{mol}}^{-1}\right)\text{(273.15 K)}}=3.614\times {\text{10}}^{-3}\text{mol}[/latex]
Step 4: Divide the moles of F by the moles of Xe to find the molar ratio.
[latex]\frac{\text{mol F}}{\text{mol Xe}}=\frac{0.02170}{3.614\times {10}^{-3}}=6.004[/latex]
So the empirical formula is XeF6. Therefore, m in the initial equations is 3, and the balanced reactions are:
[latex]\begin{array}{l}\\ \\ \text{Xe}\left(g\right)+{\text{3F}}_{2}\left(g\right)\stackrel{\phantom{\rule{0.4em}{0ex}}\Delta\phantom{\rule{0.4em}{0ex}}}{\to }{\text{XeF}}_{6}\left(s\right)\\ {\text{XeF}}_{6}\left(s\right)+{\text{3H}}_{2}\left(g\right)\rightarrow\text{6HF}\left(g\right)+\text{Xe}\left(g\right)\end{array}[/latex]
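The arithmetic of Steps 2–4 can be re-checked in a few lines (a sketch; the variable names are mine, the numeric values come from the problem statement):

```python
# Re-checking Steps 2-4 of the worked example above.
mol_hf = 0.06841 * 0.3172                    # L NaOH x mol/L; NaOH:HF is 1:1
mol_xe = (1.0 * 0.081) / (0.08206 * 273.15)  # n = PV/RT at STP
ratio = mol_hf / mol_xe                      # mol F per mol Xe
print(round(mol_hf, 5), round(mol_xe, 6), round(ratio, 2))
```

The ratio comes out very close to 6, consistent with the empirical formula XeF6.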
Basic solutions of Na4XeO6 are powerful oxidants. What mass of Mn(NO3)2·6H2O reacts with 125.0 mL of a 0.1717 M basic solution of Na4XeO6 that contains an excess of sodium hydroxide if the products include Xe and a solution of sodium permanganate?
Selected Answers
1. (a) sp3d hybridized; (b) sp3d2 hybridized; (c) sp3 hybridized; (d) sp3 hybridized; (e) sp3d2 hybridized;
3. (a) XeF2 is nonpolar due to its linear geometry. (b) XeF4 is nonpolar due to its square planar geometry. (c) XeO3 is polar due to its pyramidal geometry. (d) XeO4 is nonpolar due to its tetrahedral geometry. (e) XeOF4 is polar due to its square pyramidal geometry.
5. Step 1: The white solid that formed must be a xenon fluoride, so write a tentative set of equations describing the reactions:
[latex]\begin{array}{l}\\ \\ \\ \text{Xe}+m{\text{F}}_{2}\stackrel{\phantom{\rule{0.4em}{0ex}}\Delta\phantom{\rule{0.4em}{0ex}}}{\to }{\text{XeF}}_{2m}\\ {\text{XeF}}_{2m}+m{\text{H}}_{2}\rightarrow 2m\text{HF}+\text{Xe}\end{array}[/latex]
where
m = stoichiometric coefficient of F2.
Step 2: Calculate the moles of HF from the titration data:
[latex]0.06841\cancel{\text{L NaOH}}\times \frac{0.3172\cancel{\text{mol NaOH}}}{\cancel{\text{L NaOH}}}\times \frac{\text{1 mol HF}}{1\cancel{\text{mol NaOH}}}=\text{0.02171 mol HF}[/latex]
Step 3: Calculate the moles of Xe from the ideal gas law:
[latex]n=\frac{PV}{RT}=\frac{\text{(1 atm)}\text{(0.081 L)}}{\left(\text{0.08206 L atm}{\text{K}}^{-1}{\text{mol}}^{-1}\right)\text{(273.15 K)}}=3.614\times {\text{10}}^{-3}\text{mol}[/latex]
Step 4: Divide the moles of F by the moles of Xe to find the molar ratio.
[latex]\frac{\text{mol F}}{\text{mol Xe}}=\frac{0.02170}{3.614\times {10}^{-3}}=6.004[/latex]
So the empirical formula is XeF6. Therefore, m in the initial equations is 3, and the balanced reactions are:
[latex]\begin{array}{l}\\ \\ \text{Xe}\left(g\right)+{\text{3F}}_{2}\left(g\right)\stackrel{\phantom{\rule{0.4em}{0ex}}\Delta\phantom{\rule{0.4em}{0ex}}}{\to }{\text{XeF}}_{6}\left(s\right)\\ {\text{XeF}}_{6}\left(s\right)+{\text{3H}}_{2}\left(g\right)\rightarrow\text{6HF}\left(g\right)+\text{Xe}\left(g\right)\end{array}[/latex]
Glossary halide compound containing an anion of a group 17 element in the 1- oxidation state (fluoride, F –; chloride, Cl –; bromide, Br –; and iodide, I –) interhalogen compound formed from two or more different halogens |
Say that $a_1a_2\ldots a_n$ and $b_1b_2\ldots b_n$ are two strings of the same length. An
anagramming of two strings is a bijective mapping $p:[1\ldots n]\to[1\ldots n]$ such that $a_i = b_{p(i)}$ for each $i$.
There might be more than one anagramming for the same pair of strings. For example, if $a=$ `abcab` and $b=$ `cabab` we have $p_1[1,2,3,4,5]\to[4,5,1,2,3]$ and $p_2[1,2,3,4,5] \to [2,5,1,4,3]$, among others.
We'll say that the weight $w(p)$ of an anagramming $p$ is the number of cuts one must make in the first string to get chunks that can be rearranged to obtain the second string. Formally, this is the number of values of $i\in[1\ldots n-1]$ for which $p(i)+1\ne p(i+1)$. That is, it is the number of points at which $p$ does not increase by exactly 1. For example, $w(p_1) = 1$ and $w(p_2) = 4$, because $p_1$ cuts
12345 once, into the chunks
123 and
45, and $p_2$ cuts
12345 four times, into five chunks.
Suppose there exists an anagramming for two strings $a$ and $b$. Then at least one anagramming must have least weight. Let's say this one is lightest. (There might be multiple lightest anagrammings; I don't care, because I am interested only in the weights.)
Question
I want an algorithm which, given two strings for which an anagramming exists, efficiently
yields the exact weight of the lightest anagramming of the two strings. It is all right if the algorithm also yields a lightest anagramming, but it need not.
It is a fairly simple matter to generate all anagrammings and weigh them, but there may be many, so I would prefer a method that finds light anagrammings directly.
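For what it's worth, the generate-and-weigh approach can at least be pruned. Here is a sketch (the function name is mine) that backtracks over letter placements and abandons any partial anagramming that is already at least as heavy as the best one found so far:

```python
from collections import defaultdict

def lightest_weight(a, b):
    """Minimum weight over all anagrammings of a and b (backtracking with pruning)."""
    pos = defaultdict(list)              # positions of each letter in b
    for j, ch in enumerate(b):
        pos[ch].append(j)
    best = None

    def place(i, used, prev, cuts):
        nonlocal best
        if best is not None and cuts >= best:
            return                       # prune: cannot beat the best found so far
        if i == len(a):
            best = cuts
            return
        for j in pos[a[i]]:              # candidate images p(i) for position i
            if j not in used:
                extra = 0 if (prev is None or j == prev + 1) else 1
                used.add(j)
                place(i + 1, used, j, cuts + extra)
                used.remove(j)

    place(0, set(), None, 0)
    return best                          # None if no anagramming exists
```

On the examples above, `lightest_weight("abcab", "cabab")` gives 1 and `lightest_weight("coastline", "sectional")` gives 8, but in the worst case this is still exponential, which is exactly the problem.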
Motivation
The reason this problem is of interest is as follows. It is very easy to make the computer search the dictionary and find anagrams, pairs of words that contain exactly the same letters. But many of the anagrams produced are uninteresting. For instance, the longest examples to be found in Webster's Second International Dictionary are:
cholecystoduodenostomy
duodenocholecystostomy
The problem should be clear: these are uninteresting because they admit a very light anagramming that simply exchanges the cholecysto, duodeno, and stomy sections, for a weight of 2. On the other hand, this much shorter example is much more surprising and interesting:
coastline
sectional
Here the lightest anagramming has weight 8.
I have a program that uses this method to locate interesting anagrams, namely those for which all anagrammings are of high weight. But it does this by generating and weighing all possible anagrammings, which is slow. |
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
If \(x^2 = x + 1\), then \(x = 1 + \sqrt{5}\) or \(x = 1 - \sqrt{5}\)
Prove the following: If \(x^3\) is irrational then \(x\) is irrational.
Let \(n = ab\) be the product of positive integers \(a\) and \(b\). Prove that either \(a \leq \sqrt{n}\) or \(b \leq \sqrt{n}\).
Show that \( (p \to q) \to r\) is not logically equivalent to \( p \to (q \to r)\)
Simplify the following if possible: \( (p \to (q \to r)) \to ((p \to q) \to (p \to r))\)
Let \(a\) be an integer. Prove that there exists an integer \(k\) such that \(a^2 = 3k\) or \(a^2 = 3k + 1\).
Let \(x,a\) be integers with \(a \geq 2 \) such that \(a \mid 11x + 3\) and \(a \mid 55x + 52\). Find \(a\).
Tell me how many natural numbers \( 1 \leq k \leq 14\) smaller than 15 have the following property \( \gcd{(k, 15)} = 1 \). Just for your information, we say two numbers \(a, b\) are
relatively prime when \( \gcd{(a,b)} = 1\). Also (for your information only) the number of relatively prime natural numbers smaller than \(a\) is called \( \phi(a) \) the Euler Totient Function. So this question is asking you to calculate \(\phi(15)\).
Determine whether 9833 is prime.
Show that for every natural number \(k\), \(6^k\) will never end in a 0 (using a decimal representation).
Prove that \( 6 | n^3 + 5n\) for all \( n \geq 1 \)
Find a (closed-form) formula for the following sum: $$ (0 - 0) + (1 - 1) + (8-4) + (27-9) + \cdots + (n^3 - n^2) $$ Use induction to prove your result.
Find this sum: $$ S = -56 - 49 - 42 - \cdots + 623 + 630 + 637 $$
Which, if any, of the following three sets are equal? $$ A = \{ k \mid k \in \mathbb{N}, 3 \nmid k, k \leq 12 \} $$ $$ B = \{ 3k + 1 \mid k \in \mathbb{N}, k \leq 4 \} $$ $$ C = \{ k \mid k \in \mathbb{N}, \gcd{(k, 3)}=1, k \leq 11 \} $$
Given the set \(A = \{1, 2, 3\}\) create a binary relation on \(A\) (that is, a subset of \(A \times A\)) which is
reflexive, symmetric, and is not transitive
Prove that \(\overline{a} + \overline{b} = \overline{a+b}\) where \(\overline{a}\) is the equivalence class of \(a\) under the relation on \(\mathbb{Z}\) given by \(x \sim y\) when \(p | (x-y)\). Here \(\overline{a} + \overline{b}\) means the set \(\{ x + y \mid x \sim a \land y \sim b \}\), made by summing anything in \(\overline{a}\) and anything in \(\overline{b}\).
This is an elaborate way of asking whether computing \( (a + b) \textrm{ mod } p = ((a \textrm{ mod } p) + (b \textrm{ mod } p)) \textrm{ mod } p \).
None, happy spring break.
Let \(S = \{ 1,2,3,4\}\) and \(f,g : S \to S\) by \(f = \{(1,3),(2,2),(3,4),(4,1)\}\) and \(g = \{(1,4), (2,3), (3,1), (4,2)\}\). Find \(g \circ f \circ g^{-1} \).
Find \(x\) such that \(79x \equiv 15 \mod{ 722 }\)
Find \(6^{128} \mod{ 13 }\)
Problem 5 from math.prof.ninja/417, the in-class problems from Friday.
Problem 8 from math.prof.ninja/420, the in-class problems from Monday.
Problem 6 from math.prof.ninja/422
The last problem from 424
Give a method for detecting if a graph is bipartite.
How many 6 digit integers have 1 digit repeated 3 times, another distinct digit repeated twice, and a third digit? For instance 292129 or 877666.
Prove that the Petersen graph is not Hamiltonian
Create a graph in which any walk on the graph of length \(k\) is a length \(k\) word made from 'a' and 'b' with at most two 'b's in a row. Use this graph to decide the number of words of length 4 with this property.
Problem 4 from Monday's problem set
What is the worst case complexity of insertion sort?
Make a list of the super-powers you developed (or are developing) in this class. |
I believe that the word problem is the problem to decide whether two different expressions denote the same element of a suitably defined algebraic structure. For simplicity, let us focus on free groups here. (Because I'm only interested in free algebras, and for groups one might indeed call this a word problem.) The expressions $(b^{-1}c)^{-1}b^{-1}(ab^{-1})^{-1}$, $(ab^{-1}c)^{-1}$, and $a^{-1}bc^{-1}$ are examples of such expressions. The first and second expression denote the same element of the free group, while the third expression denotes a different element.
The straight line program encoding is basically the same concept as arithmetic circuits, without implicit commutativity. It is one of the natural encodings of elements for a free algebra. One way to define the straight line program encoding is like in definition 1.1 from one of the google results for straight line program: The straight line program encoding of $f$ is an evaluation circuit $\gamma$ for $f$, where the only operations allowed belong to $\{()^{-1},\cdot\}$. More precisely: $\gamma=(\gamma_{1-n},\dots,\gamma_0,\gamma_1,\dots,\gamma_L)$ where $f=\gamma_L$, $\gamma_{1-n}:=x_1,\dots,\gamma_0:=x_n$ and for $k>0$, $\gamma_k$ is one of the following forms: $\gamma_k=(\gamma_j)^{-1}$ or $\gamma_k=\gamma_i\cdot\gamma_j$ where $i,j<k$.
The application of the operation $()^{-1}$ can easily be restricted to $\gamma_k=(\gamma_j)^{-1}$ for $j\leq 0$ without increasing $L$ to more than $2L+n$. This means that we are basically talking about words over the alphabet $\{x_1,\dots,x_n,(x_1)^{-1},\dots,(x_n)^{-1}\}$, hence the name "word problem" makes sense. But it seems a bad name for the general problem to decide whether two elements of a free algebra given by straight line programs are identical. It might be called
identity testing.
Does the problem (to decide whether two elements of a free algebra given by straight line programs are identical) already have an established name, or is there a good name for this problem?
Maybe a better idea would be to give a name to the complementary problem, i.e. the problem to distinguish two different elements of a free algebra. So calling it
slp distinction problem for free groups (commutative rings, commutative inverse rings, Boolean rings, ...) could work, because straight line program (slp) is a long name (but good and descriptive nevertheless). The advantage of naming the complementary problem is that we get problems in RP and NP, instead of problems in co-RP and co-NP.
The computational complexity of this problem is not worse than that of identity testing of constant polynomials over $\mathbb Z$ in straight line program encoding (no variables, i.e. $n=0$, but the straight line programs allow to compactly encode huge numbers): Using the same approach as in the dlog-space algorithm for the normal word problem, the problem can be reduced to deciding whether the product of integer 2x2 matrices equals the identity matrix. (The word problem over $n$ letters easily embeds into the word problem over $2$ letters, for example you can replace $a$, $b$, $c$, $d$ by $aa$, $ab$, $ba$, and $bb$.) So the problem is in randomized polynomial time (RP) (or rather co-RP). However, I didn't manage to show that it is actually equivalent (in complexity) to identity testing of (constant) polynomials over $\mathbb Z$, as I initially hoped. (This is unrelated to the answer by D.W., which rather shows that the significance of straight line encoding is currently not widely appreciated.) |
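The matrix reduction mentioned above can be sketched concretely: embed the free group on two generators into SL(2, ℤ) (here via the Sanov generators [[1,2],[0,1]] and [[1,0],[2,1]], whose image is free; any faithful embedding would do) and test a word by multiplying matrices and comparing with the identity:

```python
# Free group on {a, b} embedded in SL(2, Z) via Sanov's generators.
A  = ((1, 2), (0, 1))
B  = ((1, 0), (2, 1))
Ai = ((1, -2), (0, 1))   # inverse of A
Bi = ((1, 0), (-2, 1))   # inverse of B

def mat_mul(x, y):
    """2x2 integer matrix product."""
    return tuple(tuple(sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def is_identity(word):
    """word: string over a, b, A, B, where uppercase denotes the inverse letter.
    Returns True iff the word denotes the identity of the free group."""
    gens = {'a': A, 'b': B, 'A': Ai, 'B': Bi}
    m = ((1, 0), (0, 1))
    for ch in word:
        m = mat_mul(m, gens[ch])
    return m == ((1, 0), (0, 1))
```

For straight line programs the same idea applies, except that the matrix entries can grow doubly exponentially, which is why one falls back on randomized (modular) evaluation, as with identity testing of integers in slp encoding.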
Your problem is: given $t$, find a set $S$ of primes whose sum is minimized, such that the product is at least $t$.
This is equivalent to: given $u$, find a set $S$ of primes whose sum is minimized, such that $\sum_{p \in S} \log p \ge u$. (Simply take $u=\log t$.) This is a knapsack problem, so it can be solved exactly using standard dynamic programming algorithms, or you can find good (but not necessarily optimal) solutions using standard approximation algorithms. I'll summarize a few options below, but the knapsack problem is well-studied, so you can probably adapt any of the standard methods to your setting.
Dynamic programming
There is a straightforward dynamic programming algorithm that solves your problem in $O((\log n)^3)$ time. We will construct an array $A[\cdot,\cdot]$ such that
$$A[s,q] = \max \{\prod_{p \in S} p : \sum_{p \in S} p = s \text{ and } p \le q \text{ for all } p \in S\},$$
i.e., $A[s,q]$ stores the largest possible product that is attainable from a set of primes that are all at most $q$ and that sum to $s$. You can fill in the contents of the array $A$ using a straightforward recursion:
$$A[s,q] = \max(A[s,q^*], q \times A[s-q,q^*]),$$
where $q^*$ is the previous prime before $q$. We'll fill in this array only for values $q$ that are prime. If we fill in the array in order of increasing $s$ and increasing $q$, each entry will be known before it is first used.
Finally, once we've filled in the $A$ array, we can find the smallest $s$ such that there exists a prime $q$ such that $A[s,q] \ge n$ by iterating over all the elements of the array.
How much of the array do we need to fill in? In particular, what's the largest value of $s$ and $q$ we need to fill in? Well, we can show that $s \le (\lg n)^2$ suffices. Also, we certainly have $q \le s$. Therefore, we only need to fill in $(\lg n)^3$ elements of $A$, and the algorithm has running time $O((\log n)^3)$.
I suspect that it can be done faster, and in particular, we can probably restrict to smaller values of $q$ (in particular, I suspect $q= O(\log s) = O(\log \log n)$ probably suffices)... but this gives a simple algorithm that is easy to show correct. Also, it is possible to find a tighter bound on the largest possible value of $s$, but that's not really necessary: if you fill in all of $A[1,\cdot]$, then $A[2,\cdot]$, then $A[3,\cdot]$, etc., and check each one to find the first that has an entry that is $\ge n$, this will enable the algorithm to terminate as soon as it finds the smallest attainable $s$.
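A compact sketch of this dynamic program (a one-dimensional variant of the same recurrence: a 0/1 knapsack over the sum $s$, sized by the bound $s \le (\lg n)^2$ discussed above; the names are mine):

```python
def min_prime_sum(n):
    """Smallest sum of a set of distinct primes whose product is at least n."""
    if n <= 1:
        return 0
    limit = max(4, n.bit_length() ** 2)   # upper bound on the optimal sum s
    # Sieve of Eratosthenes up to `limit`.
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    primes = [p for p in range(2, limit + 1) if sieve[p]]
    # best[s] = largest product over sets of distinct primes summing exactly to s.
    best = [0] * (limit + 1)
    best[0] = 1
    for p in primes:                      # 0/1 knapsack: scan s downward
        for s in range(limit, p - 1, -1):
            if best[s - p]:
                best[s] = max(best[s], best[s - p] * p)
    return next(s for s in range(limit + 1) if best[s] >= n)
```

For example, `min_prime_sum(10)` returns 7, achieved by the set {2, 5}.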
Approximation algorithms
There are many possible approximation algorithms. A simple one is to use a greedy strategy: take the first $k$ primes, choosing $k$ as small as possible such that their product is at least $n$.
Why is this a greedy strategy? Well, we're trying to minimize $\sum_{p \in S} p$ subject to $\sum_{p \in S} \log p \ge \log n$. So, the greedy algorithm for that is to sort the primes in order of increasing value of $p/\log p$, and take the first $k$, for some $k$. But sorting the primes in increasing value of $p/\log p$ is equivalent to taking the primes in increasing order ($p=2,3,5,7,11,\dots$).
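The greedy strategy above, sketched (trial division is fine at this scale; the helper name is mine):

```python
def greedy_prime_sum(n):
    """Greedy sketch: take primes 2, 3, 5, ... until their product reaches n."""
    total, product, p = 0, 1, 2
    while product < n:
        # advance p to the next prime by naive trial division
        while any(p % d == 0 for d in range(2, int(p ** 0.5) + 1)):
            p += 1
        total += p
        product *= p
        p += 1
    return total
```

Note that this is only an approximation: for $n = 10$ the greedy answer is $2+3+5 = 10$, while the optimum from the dynamic program is $2+5 = 7$.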
There are other standard ways to improve on this.
Another approach is to formulate this as an instance of integer linear programming and feed it to an ILP solver. We want to minimize $\sum_{p \in Q} p x_p$ subject to $\sum_{p \in Q} (\log p) x_p \ge \log n$, where $x_2,x_3,x_5,x_7,\dots$ are the unknowns and each is constrained to be zero or one, where $Q$ is the set of primes $\le \lg n$. That's an ILP instance, so you can give it to an ILP solver and let it grind away at it.
I was trying to learn Coq using the famous book Software Foundations. In it I found the following:
Theorem mult_0_r : forall n:nat, n * 0 = 0.
Proof.
  induction n as [| n IHn].
  - simpl. reflexivity.
  - simpl. rewrite -> IHn. reflexivity.
Qed.
which I understand perfectly but find rather unintuitive. I follow how each step works, but it would never have occurred to me to use induction to prove such a trivial fact. In fact, in the mathematical proof I had in mind, that would have been a fact/property (or I guess an axiom) of 0, i.e. $\forall n \in N, n \cdot 0 = 0$ is true by definition. I guess in Coq (or the way we set up numbers there) that's not true.
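A Python model of how I understand multiplication to be defined on nat (recursion on the first argument, which seems to be why $0 \cdot m$ reduces by computation alone while $n \cdot 0$ for an abstract $n$ does not):

```python
def mult(n, m):
    """Model of Coq's nat multiplication: recursion on the FIRST argument."""
    if n == 0:
        return 0                 # mult 0 m = 0, holds by computation alone
    return m + mult(n - 1, m)    # mult (S n') m = m + mult n' m

print(mult(0, 7))   # reduces immediately by the first equation
print(mult(7, 0))   # only reaches 0 after unwinding all seven recursive calls
```

For a concrete n the recursion eventually bottoms out, but for a symbolic n there is no bound on the unwinding, and induction is what replaces it.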
My biggest complaint or worry is that if such a trivial thing requires induction I feel now I am unable to recognize what needs induction (at least in Coq). I know it needs it here because I am in the induction chapter. But in normal maths its usually quite obvious because the problem is obviously recursive. But I wouldn't have really thought of that proposition as recursive. For example it goes on to prove more things as exercises:
Theorem plus_n_Sm : ∀ n m : nat, S (n + m) = n + (S m).
Proof. (* FILL IN HERE *) Admitted.

Theorem plus_comm : ∀ n m : nat, n + m = m + n.
Proof. (* FILL IN HERE *) Admitted.

Theorem plus_assoc : ∀ n m p : nat, n + (m + p) = (n + m) + p.
Proof. (* FILL IN HERE *) Admitted.
which I am sure are not too difficult, but my worry is that in isolation I would never have thought such trivial statements required something as sophisticated as induction. I didn't even learn induction until the last 2 years of high school, and didn't really do it seriously until college. So now I see trivial statements requiring what seems to me sophisticated mathematics.
I just feel my intuition got really lost. When I do Coq proofs (in isolation), how do I know when to use induction and on what? I doubt there is a general procedure (of course) but proofs do exist. So there must be something guiding us to use induction in Coq. |
The Belle-2 experiment is preparing itself for initial data collection in the upcoming months. The anticipated physics program is wide and diverse, ranging from dark photon searches to anomalies in precision measurements of B meson decays. A brief overview of the Belle-2 experiment, detector and collider, and its current status will be presented.
Measurements of top production in the LHCb acceptance have particular sensitivity to high values of Bjorken-x, and offer complementary PDF constraints to measurements at the central detectors. In addition, the higher contribution from quark-initiated production to top pair production in the forward region leads to a larger expected charge asymmetry at LHCb than at the other experiments.
The...
While Jarlskog-like flavor invariants are adequate for estimating CP-violation from closed fermion loops, non-invariant structures arise from rainbow-like processes.
For the CKM contributions to the quark EDM, or the PMNS contributions to lepton EDMs, the dominant diagrams have a rainbow topology whose flavor structure does not collapse to flavor invariants. Numerically, they are found...
The measurements of differential (in the fiducial phase space) and production mode cross sections are presented in the $H\rightarrow\gamma\gamma$ decay channel using $36~\text{fb}^{-1}$ data collected by the ATLAS detector at a centre of mass energy of $\sqrt{s}=13~\text{TeV}$. These characterise $pp\rightarrow H\rightarrow\gamma\gamma$ processes in a variety of ways; production mode cross... |
What is the most efficient and numerically stable algorithm for computing the inverse CDF $F^{-1}(y)$ of a probability function, assuming that both the PDF $f(x)$ and the CDF $F(x)$ are known analytically but the inverse CDF is not?
Clearly, this is the same as finding the root of the nonlinear function $G(x) \equiv F(x) - y$, with $x \in \mathbb{R}$ for a given $y \in (0, 1)$. We know/assume that $G(x)$ is:
- monotonically non-decreasing (since $F(x)$ is a CDF);
- differentiable at least $k \ge 1$ times (its first derivative is $G^\prime(x) = f(x)$);
- continuous (we assume that $f(x)$ is not a delta function).

I am particularly interested in the case in which the shape of the PDF is a generic mixture of Gaussian distributions multiplied by a polynomial of arbitrary degree (in this case, the CDF can be computed analytically).
There are several standard methods for root-finding of generic nonlinear functions (see for example Chapter 9 of Numerical Recipes). The method I would use is Brent's method, with perhaps a couple of steps of Newton-Raphson in the end to refine the root (if at all); but I wonder if there are better ways given the assumptions above.
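A minimal sketch of the bracket-then-polish strategy just described (plain bisection stands in for Brent's method, followed by a few Newton-Raphson steps; the standard normal is used as a stand-in for the analytic CDF/PDF pair, and all names are mine):

```python
import math

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def inverse_cdf(y, cdf, pdf, lo=-10.0, hi=10.0):
    """Solve F(x) = y: bisection to shrink the bracket, then Newton to polish."""
    for _ in range(60):               # bisection: monotonicity of F guarantees progress
        mid = 0.5 * (lo + hi)
        if cdf(mid) < y:
            lo = mid
        else:
            hi = mid
    x = 0.5 * (lo + hi)
    for _ in range(4):                # Newton-Raphson: x <- x - (F(x) - y) / f(x)
        fx = pdf(x)
        if fx <= 0.0:                 # guard against flat regions of the PDF
            break
        x -= (cdf(x) - y) / fx
    return x
```

For the $\sim 10^5$-CDF use case, the per-quantile cost is dominated by CDF evaluations, so reducing the bisection count in favor of more Newton steps (or Brent's method proper) is the natural tuning knob.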
Also, note that I need to compute the inverse CDF for a large number ($\sim 10^5$) of distinct CDFs within this class; a lookup table or pre-computation is not feasible. |
Difference between revisions of "Kakeya problem"
Latest revision as of 00:35, 5 June 2009
A Kakeya set in [math]{\mathbb F}_3^n[/math] is a subset [math]E\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]e\in{\mathbb F}_3^n[/math] such that [math]e,e+d,e+2d[/math] all lie in [math]E[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math].
Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.
Basic Estimates
Trivially, we have
[math]k_n\le k_{n+1}\le 3k_n[/math].
Since the Cartesian product of two Kakeya sets is another Kakeya set, the upper bound can be extended to
[math]k_{n+m} \leq k_m k_n[/math];
this implies that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity.
Lower Bounds
To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence
[math]k_n\ge 3^{(n+1)/2}.[/math]
One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math].
The better estimate
[math]k_n\ge (9/5)^n[/math]
is obtained in a paper of Dvir, Kopparty, Saraf, and Sudan. (In general, they show that a Kakeya set in the [math]n[/math]-dimensional vector space over the [math]q[/math]-element field has at least [math](q/(2-1/q))^n[/math] elements).
A still better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from a paper of Katz-Tao, we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus,
[math]k_n \ge 3^{6(n-1)/11}.[/math]
Upper Bounds
We have
[math]k_n\le 2^{n+1}-1[/math]
since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set.
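This construction is easy to verify mechanically for small [math]n[/math]; a minimal Python sketch checking both the size count and the Kakeya property for [math]n=3[/math]:

```python
# Sanity check (n = 3) of the construction: E = all vectors of F_3^n in
# which the digit 1 or the digit 2 does not occur among the coordinates.
from itertools import product

n = 3
pts = list(product(range(3), repeat=n))
E = {p for p in pts if 1 not in p or 2 not in p}
# |E| = 2^n (no 1s) + 2^n (no 2s) - 1 (the all-zero vector counted twice)
assert len(E) == 2 ** (n + 1) - 1

def contains_line(e, d):
    return all(tuple((ei + t * di) % 3 for ei, di in zip(e, d)) in E
               for t in range(3))

is_kakeya = all(any(contains_line(e, d) for e in pts)
                for d in pts if d != (0,) * n)
print(len(E), is_kakeya)  # 15 True
```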
This estimate can be improved using an idea due to Ruzsa (seems to be unpublished). Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]r/3+O(\sqrt r)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2r/3+O(\sqrt r)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{r/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: for, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]).
Putting all this together, we seem to have
[math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math]
or
[math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math] |
I have the following limit:
$$\lim_{n\to\infty} \left[\frac{\cos(n)+t\sin(n)}{e^{tn}}\right]$$
I wish to show that the limit equals $0$ rigorously. I have a good sense that it equals zero by taking the Taylor series expansion of the top, and comparing with that of the bottom. (The magnitudes of the terms are the same, but the numerator has alternating signs, whereas the denominator is strictly positive. I should say: assume that $t>0$.)
I'm not sure however, whether that approach can be made rigorous. Other than that I am at a loss to prove it by a theorem or definition. |
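A quick numeric check of the squeeze bound $|\cos(n)+t\sin(n)|\le\sqrt{1+t^2}$ (which, for $t>0$, traps the quotient between $\pm\sqrt{1+t^2}\,e^{-tn}\to 0$; the bound itself follows from Cauchy-Schwarz):

```python
# Numeric check of the squeeze: |cos(n) + t*sin(n)| <= sqrt(1 + t^2)
# for all n, so |quotient| <= sqrt(1 + t^2) * e^(-t*n) -> 0 when t > 0.
import math

t = 1.0
bound = math.sqrt(1 + t * t)  # amplitude of cos(n) + t*sin(n)
for n in range(1, 200):
    num = math.cos(n) + t * math.sin(n)
    assert abs(num) <= bound + 1e-12

print(abs((math.cos(50) + t * math.sin(50)) / math.exp(t * 50)))
```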
Quickchecks
1. Overview
Quickchecks are small problems embedded in other documents. The user types the answer into one or more input fields, and clicks a "Check" button to see whether the answer is correct or not. The user can also click a "Show solution" button to see the solution and, if it exists, an explanation. In contrast to normal problems, no data is stored on the server: neither personalized data, nor answers, nor corrections.
The reader of this document is supposed to have a basic knowledge of generic problems.
2. TeX description
The TeX description of quickchecks is very similar to that of generic TeX problems. Here is an example:
\begin{quickcheck}
  \type{input.number}
  \displayprecision{3}
  \correctorprecision{3}
  \begin{variables}
    \randint{num}{1}{6}
    \function[calculate]{deg}{num/6*180}
    \function[calculate]{rad}{deg/180*pi}
  \end{variables}
  \text{
    What is the angle $\var{deg}^\circ$ in radians? Round your result
    to three decimal places. Answer: \ansref
  }
  \explanation{
    Radians = degrees / 180 $\cdot$ $\pi$, i.e. $\var{deg}/180\cdot\pi$;
    since $\pi = 3.14159\ldots$, the result rounded to three places is
    $\var{rad}$.
  }
  \begin{answer}
    \solution{rad}
  \end{answer}
\end{quickcheck}
In the browser, this looks as follows:
If you type in the answer and click the check button, the system will mark correct and wrong answers as follows:
Correct answer:
Wrong answer:
Clicking the "Show solution" button will show the correct answer and, if it exists, the explanation:
The "Show solution" button then changes to a "Hide solution" button, which hides the solution again.
3. Pools
Quickchecks provide a functionality similar to random question pools in generic problems. You can define a "pool" of quickchecks from which one or more are selected randomly. All quickchecks of the pool must be enclosed in a quickcheckcontainer environment. The selection is controlled by one or more \randomquickcheckpool commands, which are completely analogous to the \randomquestionpool commands in generic problems:
\begin{quickcheckcontainer}
  \randomquickcheckpool{1}{5}
  \randomquickcheckpool{6}{10}
  \begin{quickcheck}
    ...
  \end{quickcheck}
  \begin{quickcheck}
    ...
    \explanation{...}
  \end{quickcheck}
  ...
\end{quickcheckcontainer}
4. Multiple Choice Questions
As an alternative to input fields (e.g. input.number) that the user has to fill out, you can use multiple choice questions. The syntax is similar to multiple choice questions in generic problems.
Use the choices environment with the parameter multiple or unique. You don't need the \type command like in 'regular' quickchecks; it will be completely ignored.
\begin{quickcheck}
  \begin{variables}
    \randint[Z]{c1}{1}{12}
    \randint[Z]{c2}{1}{12}
    \function[calculate]{cSum1}{c1+c2}
    \function[calculate]{cSum2}{c1-c2}
  \end{variables}
  \text{Select all correct statements:}
  \begin{choices}{multiple}
    \begin{choice}
      \text{$\var{c1}+\var{c2}=\var{cSum1}$}
      \solution{true}
    \end{choice}
    \begin{choice}
      \text{$\var{c1}+\var{c2}=\var{cSum2}$}
      \solution{false}
    \end{choice}
    \begin{choice}
      \text{$\var{c1}+\var{c2}=\abs{\var{c1}+\var{c2}}$}
      \solution[compute]{c1+c2 > 0}
    \end{choice}
    \begin{choice}
      \text{$\var{c1}-\var{c2}=\abs{\var{c1}-\var{c2}}$}
      \solution[compute]{cSum1 > 0}
    \end{choice}
  \end{choices}
\end{quickcheck}
The \solution command is mandatory in a choice environment. You have the following three options:
\solution{true} or
\solution{false} or
\solution[compute]{<expression>}
You can use arithmetic and logical expressions (including variables) to compute a solution. Make sure that if you use the type unique, one and only one choice is true.
5. List of TeX commands and environments
There are five quickcheck environments:
quickcheck -- Top-level environment of a quickcheck
quickcheckcontainer -- Container for quickchecks that should form a pool
variables -- Container for variables; same as in generic problems
choices -- Multiple choice questions
choice -- One choice in a multiple choice question
The quickcheck commands are:
\ansref
\checkAsFunction[<options>]{<variable>}{<min>}{<max>}{<steps>}
\correctorprecision{<number>}
\derivative[<actions>]{<variable>}{<expression>}{<free_variable>}
\displayprecision{<number>}
\explanation{...}
\field{<field>}
\function[<actions>]{<variable>}{<expression>}
\number{<variable>}{<value>}
\drawFromSet[<options>]{variable}{set}
\precision{<number>}
\randadjustIf{<variables>}{<condition>}
\randdouble{<name>}{<min>}{<max>}
\randint[<nonzero>]{<name>}{<min>}{<max>}
\randomquickcheckpool{<min>}{<max>}
\randrat{<name>}{<minNumerator>}{<maxNumerator>}{<minDenominator>}{<maxDenominator>}
\solution{<solution>}
\solution[compute]{<solution>}
\substitute[<actions>]{<variable>}{<expression>}{<free_variable>}{<substitute>}
\text{...}
\type{<type>}
\var{<variable>}
All commands have the same meaning as in generic problems.
The content of \text{...} and \explanation{...} may contain arbitrary JMmtex code, including tables, lists, images, etc.
Note: type=string for \drawFromSet is not very well supported for Quickchecks at the moment. That's due to the fact that variables outside a math environment are not supported at the moment in a Quickcheck environment.
Supported input types
Currently, the input types "input.number" and "input.function" are supported. Alternatively, you can use multiple choice questions (see Multiple Choice Questions).
Let us have a simplicial mesh and a continuous function $u$ which is piecewise linear and non-constant on every cell. Then the normal vector to the level sets of $u$ is given by $$\mathbf{n}=\frac{\nabla u}{|\nabla u|}$$ and is a piecewise constant function. What, then, is a possible definition of the curvature $\kappa$ which, for $u\in{\cal C}^2$, would reduce to $$\kappa = \mathrm{div}\,\mathbf{n}?$$ The direct definition is not suitable, as it yields $\kappa$ being zero in every cell and a multiple of the Dirac $\delta$ on every facet.
In conclusion:
what is a suitable finite-element space for $\kappa$ and a weak formulation for $\kappa = \mathrm{div}\,\mathbf{n}$ with given piecewise constant $\mathbf{n}$? EDIT: I got another idea: first project $\mathbf{n}$ onto some $H(\mathrm{div})$-conforming space; then there is no problem calculating $\mathrm{div}\,\mathbf{n}$ directly. I did some numerical experiments with $u$ being the projection of $$u^\mathrm{exact} = (1+x)(1+y)$$ to CG1 (continuous, piecewise linear) on the domain $\Omega=(0,1)\times(0,1)$. The exact curvature for $u^\mathrm{exact}$ is $$\kappa^\mathrm{exact} = \frac{-2(1+x)(1+y)}{[(1+x)^2+(1+y)^2]^{3/2}}.$$ Taking the Raviart-Thomas degree 1 space (piecewise linear, continuous at facet midpoints) or the CG1 space as the space for $\mathbf{n}$, the method seems to converge in norms but with annoying wiggles on $\partial\Omega$. Here is the calculated $\kappa$
and error $\kappa-\kappa^\mathrm{exact}$
Can you see the problem? |
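For reference, one candidate weak formulation I am considering (a sketch, using the standard integration-by-parts trick from level-set curvature computations; not claimed to cure the boundary wiggles): seek $\kappa_h$ in CG1 such that

```latex
% Weak form of kappa = div(n): move the derivative onto the test function v.
% nu denotes the outward unit normal of the domain boundary.
\int_\Omega \kappa_h\, v \,\mathrm{d}x
  = -\int_\Omega \mathbf{n}\cdot\nabla v \,\mathrm{d}x
  + \int_{\partial\Omega} (\mathbf{n}\cdot\boldsymbol{\nu})\, v \,\mathrm{d}s
  \qquad \text{for all } v \in \mathrm{CG1}.
```

The facet $\delta$-contributions of $\mathrm{div}\,\mathbf{n}$ are absorbed by moving the derivative onto the test function $v$; note that the boundary term involves $\mathbf{n}\cdot\boldsymbol{\nu}$ on $\partial\Omega$, which is exactly where the wiggles appear.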
Search
Now showing items 1-10 of 18
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ... |
$$\large \displaystyle \prod_{n=2}^{\infty} \dfrac {n^3-1}{n^3 + 1}$$
If the value of the above expression can be expressed as $\dfrac{a}{b}$, where $a$ and $b$ are coprime positive integers, find $a+b$.
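A quick numeric sanity check (the closed form for the partial products below is an assumption used for illustration, obtained from the telescoping factorization $\frac{n^3-1}{n^3+1}=\frac{n-1}{n+1}\cdot\frac{n^2+n+1}{n^2-n+1}$; it is not part of the problem statement):

```python
# Partial products of the infinite product: both telescoping factors give
# the closed form P_N = (2/3) * (N^2 + N + 1) / (N * (N + 1)) -> 2/3.
from fractions import Fraction

def partial(N):
    p = Fraction(1)
    for n in range(2, N + 1):
        p *= Fraction(n**3 - 1, n**3 + 1)
    return p

for N in (2, 5, 10, 50):
    assert partial(N) == Fraction(2, 3) * Fraction(N*N + N + 1, N*(N + 1))

print(float(partial(1000)))  # 0.666667332667...
```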
I am going to consider Freivalds' algorithm in the field mod 2. In this algorithm we want to check whether $$AB = C$$ and be correct with high probability.
The algorithm chooses a random $n$-bit vector $r$, and if $$A(Br) = Cr$$ then outputs YES, otherwise outputs NO.
I want to show that it has one-sided success probability 1/2. For that I want to show that when $AB = C$ the algorithm is correct with probability 1, and when $AB \neq C$ the probability of it being correct is at least 1/2.
When $AB = C$ it's clear that the algorithm is always correct because:
$$ A(Br) = (AB)r = Cr$$
So, if $AB=C$, then $A(Br)$ is always equal to $Cr$.
The case when $AB \neq C$ is a little trickier. Let $D = AB - C$. When the products are not equal, $D \neq 0$. Call $r$ good if it discovers the incorrect multiplication (i.e. $D \neq 0$ and $Dr \neq 0$). Call $r$ bad if it makes us mess up, i.e. conclude the multiplication is correct when it is not: $D \neq 0$ (so in fact $AB \neq C$), but $Dr = 0$, so the check wrongly passes.
The high-level idea of the proof is: if we can show that for every bad $r$ there is a distinct good $r$, then at least half the $r$'s are good, so our algorithm is correct at least half of the time. This high-level idea makes sense to me; however, what is not clear to me is the precise detail of the inequality (whether $\Pr[\text{error}] \geq \frac{1}{2}$ or the other way round).
So consider the case where $D \neq 0$, i.e. $AB \neq C$. Then at least one entry $(i,j)$ of $D$ is not zero (it is 1, since we are working mod 2). Let that entry be $d_{i,j}$. Let $v = e_j$ be the standard basis vector that picks up that entry, so that $(Dv)_{i} = d_{i,j} \neq 0$ and hence $Dv \neq 0$. In this case, if we have a bad $r$ (i.e. an $r$ such that $D \neq 0$ but $Dr = 0$), then we can make it into a good vector $r'$ by flipping its $j$-th coordinate, i.e.
$$ r' = r + v.$$
Indeed, $Dr' = Dr + Dv = Dv \neq 0$. This mapping from bad to good is clearly one-to-one: the equation $r' = r + v$ maps each $r$ to one unique $r'$, so it cannot be one-to-many. Now let's see that it's not many-to-one either. If there were another $\tilde r$ mapping to the same $r'$, then that would mean
$$r' = r + v = \tilde r + v \;\; (\operatorname{mod} \; 2) \implies r = \tilde r \;\; (\operatorname{mod} \; 2)$$
So it's one-to-one. Therefore, for each bad $r$ there is a good $r'$. However, it is not clear to me why that would imply:
$$ \Pr[A(Br) \neq Cr] = \Pr[Dr \neq 0] \geq \frac{1}{2}$$
I see why it would be exactly equal to $1/2$ if the map were a bijection, but it is not clear to me at all why it implies the above inequality.
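As a sanity check of the claimed bound, one can verify it exhaustively for small matrices over $\mathbb{F}_2$ (a minimal sketch, written for this question; it checks that whenever $D \neq 0$, at least half of all $r$ satisfy $Dr \neq 0$, because the $r$ with $Dr = 0$ form a proper subspace of $\mathbb{F}_2^n$):

```python
# Exhaustive check over F_2: whenever D != 0, the vectors r with Dr = 0
# form a proper subspace of F_2^n, so at least half of all r give Dr != 0.
from itertools import product
import random

def matvec(D, r):
    return tuple(sum(d * x for d, x in zip(row, r)) % 2 for row in D)

n = 3
random.seed(0)
for _ in range(200):
    D = [[random.randint(0, 1) for _ in range(n)] for _ in range(n)]
    if all(v == 0 for row in D for v in row):
        continue  # D = 0 corresponds to AB = C; nothing to detect
    hits = sum(1 for r in product((0, 1), repeat=n) if any(matvec(D, r)))
    assert hits >= 2 ** (n - 1)  # at least half the r detect the error
print("ok")
```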
Summary: This article shows how to implement a low-pass single-pole IIR filter. The article is complemented by a Filter Design tool that allows you to create your own custom versions of the example filter that is shown below.
The low-pass single-pole IIR filter is a very useful tool to have in your DSP toolbox. Its performance in the frequency domain may not be stellar, but it is very computationally efficient.
Definition
A low-pass single-pole IIR filter has a single design parameter, which is the decay value \(d\). It is customary to define parameters \(a=d\) and \(b=1-d\) (the logic behind this follows from the general case below). For a typical value of \(d=0.99\), we have that \(a=0.99\) and \(b=0.01\). The recurrence relation is then given by
\[y[n]=bx[n]+ay[n-1],\]
where the sequence \(x[n]\) is the input and \(y[n]\) is the output of the filter.
The recurrence relation directly shows the effect of the filter. The previous output value of the filter, \(y[n-1]\), is decreased with the decay factor \(a=d\). The current input value, \(x[n]\), is taken into account by adding a small fraction \(b=1-d\) of it to the output value.
Substituting \(b=1-a\) in the given recurrence relation and rewriting leads to the expression
\[y[n]=y[n-1]+b(x[n]-y[n-1]).\]
This then leads to compact update expressions such as
y += b * (x - y), in programming languages that support the
+=-operator (see the Python code below for an example).
Impulse Response
For windowed-sinc filters (see, e.g., How to Create a Simple Low-Pass Filter), the impulse response
is the filter. To apply the filter, you convolve the impulse response of the filter with the data. This is different for the single-pole IIR filter. Its action is essentially defined on a sample-by-sample basis, as described by the recurrence relation given above. The impulse response of a filter with \(d=0.9\) (\(b=0.1\)) is shown in Figure 1.
Of course, this impulse response is actually
infinite. I’ve plotted the first 50 samples here, and at that point it is quite close to zero, but it never actually reaches zero.
Properties
The response of this filter is completely analogous to the response of an electronic low-pass filter consisting of a single resistor and a single capacitor.
The decay value \(d\) is related to the
time constant \(\tau\) of the filter with the relation
\[d=e^{-1/\tau}.\]
Hence, if \(d\) is given, the value of \(\tau\) can be computed as \(\tau=-1/\ln(d)\). As for an electronic RC-filter, the time constant \(\tau\) gives the time (in samples for the discrete case) for the output to decrease to 36.8% (\(1/e\)) of the original value.
Another useful relation is that between \(d\) and the (-3 dB)
cutoff frequency \(f_c\), which is
\[d=e^{-2\pi f_c}.\]
Hence, if \(d\) is given, the value of \(f_c\) can be computed as \(f_c=-\ln(d)/2\pi\).
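As a quick sanity check of these two relations, here is the computation for the \(d=0.9\) filter of Figure 1 (a small sketch; values rounded):

```python
# Sanity check of the two relations for the d = 0.9 filter of Figure 1.
import math

d = 0.9
tau = -1.0 / math.log(d)            # time constant, in samples
fc = -math.log(d) / (2 * math.pi)   # -3 dB cutoff, in cycles per sample

print(round(tau, 2), round(fc, 4))  # 9.49 0.0168
```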
The frequency response of the filter with the impulse response of Figure 1 is given in Figure 2.
Python code
In Python and in most other programming languages, the recurrence relation can be implemented through the already mentioned expression
y += b * (x - y). Below is a small Python class that implements this expression in its
filter() member.
decay = 0.9  # Decay between samples (in (0, 1)).

class LowPassSinglePole:
    def __init__(self, decay):
        self.b = 1 - decay
        self.reset()

    def reset(self):
        self.y = 0

    def filter(self, x):
        self.y += self.b * (x - self.y)
        return self.y

low_pass_single_pole = LowPassSinglePole(decay)
This filter can then be applied by calling the
filter() member for each new input sample
x, resulting in a new output sample
y:
y = low_pass_single_pole.filter(x)
Filter Design Tool
This article is complemented with a Filter Design tool. Experiment with different values for \(d\), visualize the resulting filters, and download the filter code. Try it now!
General Case
This last section is mainly here to have everything in one place for now. I plan to add a separate article on the Z-transform later.
A single-pole low-pass infinite impulse response (IIR) filter is given by the
Z-transform
\[H[z]=\frac{bz}{z-a}=\frac{b}{1-az^{-1}},\]
where \(a+b=1\) results in a filter with unity gain at DC.
The general form of this equation is
\[H[z]=\frac{b_0+b_1z^{-1}+b_2z^{-2}+\ldots}{1-a_1z^{-1}-a_2z^{-2}-\ldots}=\frac{\sum\limits_{n=0}^{\infty}b_nz^{-n}}{1-\sum\limits_{n=1}^{\infty}a_nz^{-n}}.\]
question_answer1) Name the two components of peripheral nervous system.
question_answer2) A charge of 150 coulomb flows through a wire in one minute. Find the electric current flowing through it.
question_answer3) What are hot spots inside earth's crust?
question_answer4) Explain why, an aqueous solution of sodium sulphate is neutral while an aqueous solution of sodium carbonate is basic in nature.
question_answer5) When hydrogen gas is passed over heated copper (II) oxide, copper and steam are formed. Write the balanced chemical equation for this reaction and state (i) the substance oxidized and (ii) the substance reduced in the reaction.
question_answer6) Why do herbivores have a longer small intestine than carnivores?
question_answer7) State reasons for the following: (i) Lemon is used for restoring the shine of tarnished copper vessels. (ii) A metal sulphide is converted into its oxide to extract the metal from the sulphide ore. (iii) Copper wires are used in electrical connections.
question_answer8) Select (i) combination reaction, (ii) decomposition reaction and (iii) displacement reaction from the following chemical equations: (i) \[ZnC{{O}_{3}}(s)\to ZnO(s)+C{{O}_{2}}(g)\] (ii) \[Pb(s)+CuC{{l}_{2}}(aq)\to PbC{{l}_{2}}(aq)+Cu(s)\] (iii) \[NaBr(aq)+AgN{{O}_{3}}(aq)\to AgBr(s)+NaN{{O}_{3}}(aq)\] (iv) \[{{H}_{2}}(g)+C{{l}_{2}}(g)\to 2HCl(g)\] (v) \[F{{e}_{2}}{{O}_{3}}+2Al\to A{{l}_{2}}{{O}_{3}}+2Fe\] (vi) \[3{{H}_{2}}(g)+{{N}_{2}}(g)\to 2N{{H}_{3}}(g)\] (vii) \[CaC{{O}_{3}}(s)\xrightarrow{Heat}CaO(s)+C{{O}_{2}}(g)\]
question_answer9) State reasons for the following: (i) Dry HCl gas does not change the colour of dry blue litmus paper. (ii) Alcohol and glucose also contain hydrogen, but do not conduct electricity. (iii) The concentration of \[{{H}_{3}}{{O}^{+}}\] ions is affected when a solution of an acid is diluted.
question_answer10) State the kind of chemical reactions in the following examples: (i) Digestion of food in the stomach (ii) Combustion of coal in air (iii) Heating of limestone
question_answer11) The rate of breathing in aquatic organisms is much faster than that seen in terrestrial organisms. Give reason. State the pathway of air from nostrils to the lungs in human beings.
question_answer12) Mention three characteristic features of hormonal secretions in human beings.
question_answer13) (a) State the purpose of formation of urine. (b) What will happen if there is no tubular reabsorption in the nephrons of the kidney?
question_answer14) A circuit has a line of 5 A. How many lamps of rating 40 W; 220 V can simultaneously run on this line safely?
question_answer15) The resistance of a wire of 0.01 cm radius is \[10\,\Omega \]. If the resistivity of the material of the wire is \[50\times {{10}^{-8}}\,ohm\] metre, find the length of the wire.
question_answer16) Show four different ways in which four resistors of r ohm each may be connected in a circuit. In which case is the equivalent resistance of the combination (i) maximum (ii) minimum?
question_answer17) Amit lives in Delhi and is much concerned about the increasing electricity bill of his house. He took some steps to save electricity and succeeded in doing so. (i) Mention any two steps that Amit might have taken to save electricity. (ii) Amit fulfilled his duty towards the environment by saving electricity. How? (iii) Which alternative source of energy would you suggest Amit use?
question_answer18) List any three qualities of an ideal source of energy.
question_answer19) (a) Define corrosion. (b) What is corrosion of iron called? (c) How will you recognise the corrosion of silver? (d) Why is corrosion of iron a serious problem? (e) How can we prevent corrosion?
question_answer20) Write balanced chemical equations for the following statements: (i) NaOH solution is heated with zinc granules. (ii) Excess of carbon dioxide gas is passed through lime water. (iii) Dilute sulphuric acid reacts with sodium carbonate. (iv) Egg shells are dropped in hydrochloric acid. (v) Copper (II) oxide reacts with dilute hydrochloric acid.
question_answer21) (a) Write three main functions of the nervous system. (b) In the absence of muscle cells, how do plant cells show movement?
question_answer22) (a) Draw magnetic field lines of a bar magnet. "Two magnetic field lines never intersect each other." Why? (b) An electric oven of 1.5 kW is operated in a domestic circuit (220 V) that has a current rating of 5 A. What result do you expect in this case? Explain.
question_answer23) What is meant by resistance of a conductor? Name and define its SI unit. List the factors on which the resistance of a conductor depends. How is the resistance of a wire affected if (i) its length is doubled, (ii) its radius is doubled?
question_answer24) (i) Establish a relationship to determine the equivalent resistance R of a combination of three resistors having resistances \[{{\mathbf{R}}_{\mathbf{1}}}\mathbf{,}\,{{\mathbf{R}}_{\mathbf{2}}}\] and \[{{\mathbf{R}}_{\mathbf{3}}}\] connected in parallel. (ii) Three resistors are connected in an electrical circuit as shown. Calculate the resistance between A and B.
question_answer25) Four students studied reactions of zinc and sodium carbonate with dilute hydrochloric acid and dilute sodium hydroxide solutions and presented their results as follows. The \[(\uparrow )\] shows evolution of gas and \[(-)\] shows no reaction. The right set is: (a) HCl: Zn √, Na2CO3 √; NaOH: Zn √, Na2CO3 √ (b) HCl: Zn -, Na2CO3 √; NaOH: Zn √, Na2CO3 √ (c) HCl: Zn √, Na2CO3 √; NaOH: Zn √, Na2CO3 - (d) HCl: Zn √, Na2CO3 -; NaOH: Zn √, Na2CO3 √
question_answer26) Dilute NaOH solution and solid sodium carbonate: (a) react only on heating (b) react very slowly (c) do not react (d) react vigorously
question_answer27) The colour of Cu metal is: (a) reddish brown (b) blue (c) green (d) grey
question_answer28) Shashank was asked to carry out a displacement reaction which would show the following: (i) formation of a colourless solution (ii) black deposits. The reactants he should use are: (a) \[Fe(s)\] and \[A{{l}_{2}}{{(S{{O}_{4}})}_{3}}(aq)\] (b) \[Al(s)\] and \[FeS{{O}_{4}}(aq)\] (c) \[Zn(s)\] and \[CuS{{O}_{4}}(aq)\] (d) \[Fe(s)\] and \[ZnS{{O}_{4}}(aq)\]
question_answer29) Mrignayani was doing the experiment of comparing reactivity of metals in the laboratory. She was given aluminium metal and was told to check reactivity by using four solutions as shown below. She would observe that reaction takes place in: (A) (B) (C) (D) (a) A and B (b) B, C and D (c) A, C and D (d) C and D
question_answer30) In an experiment to find the equivalent resistance of a series combination of two resistances of \[3\,\Omega \] and \[4\,\Omega \] in the circuit diagram given, the circuit will give: (a) Incorrect reading for current I and correct reading for voltage V (b) Incorrect readings for both current I and voltage V (c) Correct reading for current I and incorrect reading for voltage V (d) Correct readings for both voltage V and current I
question_answer31) A student joined three resistances as shown in the circuit below. The current recorded by the ammeter (A) is: (a) 0.25 A (b) 0.5 A (c) 0.75 A (d) 1 A
question_answer32) The iodine solution is: (a) pure iodine dissolved in water (b) potassium iodide in water (c) iodine dissolved in potassium iodide (d) potassium iodide dissolved in iodine
question_answer33) (A) (B) (C) (D) Choose the correct set-up to demonstrate that \[C{{O}_{2}}\] is given out during respiration: (a) A (b) B (c) C (d) D
question_answer34) An iron nail is dipped in a solution of copper sulphate for about 30 minutes. State the change in colour observed. Give the reason for the change.
question_answer35) A student while verifying Ohm's law calculated the value of resistance of the resistor for each set of observation. However, the values of resistance were slightly different from the actual value. Is this experiment wrong? Justify your answer:
question_answer36) Draw a labelled diagram of stomatal apparatus with closed stomatal pore.
A result of Cai and Ellis (see Theorem 5 in http://www.sciencedirect.com/science/article/pii/0166218X9190010T) implies that deciding whether a cubic perfect line-graph is $3$-edge-colorable is NP-complete. Counter-examples to Conjecture 2 can be built from their argument as follows:
First, notice that every cubic bridgeless graph $G$ satisfies $\chi_f'(G)=3$. This is easily obtained using the following formula for $\chi_f'(G)$, which is derived from Edmonds' inequalities for the matching polytope of $G$:$$\chi_f'(G)=\max\left(\Delta(G),\max_{U\subseteq V(G), |U|\geq 3\, \text{odd}}\frac{|E(U)|}{\frac{|U|-1}{2}}\right).$$
Now, consider the following construction: let $H$ be a bridgeless cubic graph and $S(H)$ be the graph obtained from $H$ by subdividing each edge exactly once. Let $G$ be the line graph of $S(H)$.
It is straightforward to check that $G$ is cubic, bridgeless and that: $\chi'(G)=3$ if and only if $\chi'(H)=3$. Furthermore, $G$ is perfect because $S(H)$ is bipartite.
Therefore, if $H$ is a cubic bridgeless graph with $\chi'(H)=4$ (for example the Petersen graph or any other snark, see http://en.wikipedia.org/wiki/Snark_(graph_theory)), then $G$ is a cubic bridgeless perfect line-graph with $\chi'(G)>\lceil\chi_f'(G)\rceil$.
Search
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Is there any solver for convex optimization in C++ (or some dedicated scheme if no solver is yet available) that could solve a convex optimization problem whose objective function value is given by an oracle? Thank you.
My specific problem is this:
\[\mathop {\max }\limits_\lambda \mathop {\min }\limits_{\sigma \in {{\{ 0,1\} }^N}} {E_{\sigma ,\,\lambda }} \]
where $\lambda$ is a vector and, for each \[{\sigma \in {{\{ 0,1\} }^N}} \], the quantity \[{E_{\sigma ,\,\lambda }} \] is a linear function of $\lambda$.
In words: it is maximizing over $\lambda$ the piecewise-linear concave function defined as the minimum of an exponential number of linear functions. Given $\lambda$, I have an effective scheme to obtain $\sigma$ and thus calculate \[\mathop {\min }\limits_{\sigma \in {{\{ 0,1\} }^N}} {E_{\sigma ,\,\lambda }} \]. So my problem is effectively a convex optimization with the objective function given by my oracle (maximizing a concave function), and I am wondering whether there are solvers suitable for this type of problem, or whether there is any dedicated procedure for it if no solver is available.
Thank you:D |
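While waiting for solver suggestions: absent a dedicated solver, projected subgradient ascent works with exactly this kind of oracle, since the minimizing $\sigma$ returned by the oracle yields a supergradient of the concave objective (namely $\nabla_\lambda E_{\sigma^*,\lambda}$). A minimal Python sketch on a made-up two-piece instance (illustrative only; a C++ version would be analogous, and Kelley-style cutting planes with an LP solver are the usual faster alternative):

```python
# Subgradient ascent on F(lam) = min_i (a_i . lam + b_i), a concave
# piecewise-linear function. The oracle returns the minimizing piece,
# whose gradient a_i is a supergradient of F at lam.
import math

# Made-up toy instance: F(x) = min(x, 1 - x), maximized at x = 0.5.
pieces = [((1.0,), 0.0), ((-1.0,), 1.0)]

def oracle(lam):
    # Plays the role of "given lambda, obtain the minimizing sigma":
    # returns (F(lam), supergradient at lam).
    return min((sum(a * l for a, l in zip(ai, lam)) + bi, ai)
               for ai, bi in pieces)

lam, best = [0.0], -math.inf
for k in range(1, 5001):
    val, g = oracle(lam)
    best = max(best, val)
    lam = [l + gi / math.sqrt(k) for l, gi in zip(lam, g)]  # diminishing steps

print(best)  # close to the optimum 0.5
```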