Methods Funct. Anal. Topology
18 (2012), no. 4, 305-331
In the paper, we investigate realizations of a $p$-dimensional regular weak stationary discrete time stochastic process $y(t)$ as the output data of a passive linear bi-stable discrete time dynamical system. The state $x(t)$ is assumed to tend to zero as $t$ tends to $-\infty$, and the input data is the $m$-dimensional white noise. The results are based on the author's development of the Darlington method for passive impedance systems with losses of the scattering channels. Here we establish that such a realization of a discrete time process is possible if the spectral density $\rho(e^{i\mu})$ of the process is a nontangential boundary value of a matrix valued meromorphic function $\rho(z)$ of rank $m$ with bounded Nevanlinna characteristic in the open unit disk. A parameterization of all such realizations is given, and minimal, optimal minimal, and *-optimal minimal realizations are obtained. The last two coincide with those obtained by Kalman filters. This is a further development of the Lindquist-Picci realization theory.
Methods Funct. Anal. Topology
18 (2012), no. 4, 332-342
We propose the power moment approach to the investigation of double-infinite Toda lattices that was developed in the author's article [6]. As a result, we give the main theorem from [6] in a more effective form.
Methods Funct. Anal. Topology
18 (2012), no. 4, 343-359
Graph Laplacians on finite compact metric graphs are considered under the assumption that the matching conditions at the graph vertices are of either $\delta$ or $\delta'$ type. In either case, an infinite series of trace formulae which link together two different graph Laplacians provided that their spectra coincide is derived. Applications are given to the problem of reconstructing matching conditions for a graph Laplacian based on its spectrum.
Methods Funct. Anal. Topology
18 (2012), no. 4, 360-372
In the present work we consider the Schrödinger operator $\mathrm{H_{X,\alpha}}=-\mathrm{\frac{d^2}{dx^2}}+\sum_{n=1}^{\infty}\alpha_n\delta(x-x_n)$ acting in $L^2(\mathbb{R}_+)$. We investigate and complete the conditions of self-adjointness and nontriviality of deficiency indices for $\mathrm{H_{X,\alpha}}$ obtained in [13]. We generalize the conditions found earlier in the special case $d_n:=x_{n}-x_{n-1}=1/n$, $n\in \mathbb{N}$, to a wider class of sequences $\{x_n\}_{n=1}^\infty$. Namely, for $x_n=\frac{1}{n^{\gamma}\ln^\eta n}$ with $\langle\gamma,\eta \rangle\in(1/2,\,1)\!\times\!(-\infty,+\infty)\:\cup\:\{1\}\!\times\!(-\infty,1]$, the description of asymptotic behavior of the sequence $\{\alpha_n\}_{n=1}^{\infty}$ is obtained for $\mathrm{H_{X,\alpha}}$ either to be self-adjoint or to have nontrivial deficiency indices.
Methods Funct. Anal. Topology
18 (2012), no. 4, 373-386
Irreducible representations of $*$-algebras $A_q$ generated by relations of the form $a_i^*a_i+a_ia_i^*=1$, $i=1,2$, $a_1^*a_2=qa_2a_1^*$, where $q\in (0,1)$ is fixed, are classified up to the unitary equivalence. The case $q=0$ is considered separately. It is shown that the $C^*$-algebras $\mathcal{A}_q^F$ and $\mathcal{A}_0^F$ generated by operators of Fock representations of $A_q$ and $A_0$ are isomorphic for any $q\in (0,1)$. A realisation of the universal $C^*$-algebra $\mathcal{A}_0$ generated by $A_0$ as an algebra of continuous operator-valued functions is given.
Methods Funct. Anal. Topology
18 (2012), no. 4, 387-400
In this paper we obtain a Nevanlinna-type formula for the matrix Hamburger moment problem. We only assume that the problem is solvable and has more than one solution. We express the matrix coefficients of the corresponding linear fractional transformation in terms of the prescribed moments. Necessary and sufficient conditions for the determinacy of the moment problem in terms of the given moments are obtained.
A strictly increasing and continuous function $f(x)$ intersects its inverse $f^{-1}(x)$ at the points $x=a$ and $x=b$, where $a$ and $b$ are integers.
If $\displaystyle \int_a^b \left[f(x) + f^{-1}(x)\right] \, dx = 17$, find $|a \times b|$.
Direct FSI Approach to Computing the Acoustic Radiation Force
In an earlier blog post, we considered the computation of acoustic radiation force using a perturbation approach. This method has the advantage of being both robust and fast; however, it relies heavily on the theoretical evaluation of correct perturbation terms. The idea behind the method presented here is to solve the problem by deducing the radiation force from the solution of the full nonlinear set of Navier-Stokes equations, interacting with a solid, elastic microparticle.
Fluid-Structure Interaction (FSI) Formulation
The problem that we want to solve involves the motion of a fluid due to the acoustic wave, which in turn exerts force on a solid particle. The particle responds to the applied force by deformation and net motion and also applies reaction forces on the fluid. This means that in addition to solving the equations describing the fluid dynamics, you would also need to account for the deformation of the particle and the resulting deformation of the space occupied by the fluid. The pre-built
Fluid-Structure Interaction physics interface in COMSOL Multiphysics® software allows you to solve this problem by coupling fluid flow, structural stress analysis, and mesh deformation.
The acoustic radiation force is a nonlinear effect, where the nonlinearity is inherent to the flow and stems from the convective term (\mathbf u \cdot \nabla )\mathbf u in the Navier-Stokes equations rather than from material nonlinearity. To support acoustic waves, the fluid has to be compressible. The fluid compressibility can be introduced by modifying the constitutive relation of the fluid. We assume a linearly elastic fluid where p = c_0^2(\rho-\rho_0) and, for water, we set c_0 = 1500 m/s and \rho_0 = 1000 kg/m^3. Because we initially want to compare the method with classical, analytical models that neglect the effect of viscosity, we assume an arbitrary, small viscosity value.
The acoustic radiation force is much higher in the standing wave fields, and most practical applications utilizing this effect involve standing waves. Let us examine this case.
Because the problem is nonlinear, it must be solved in the time domain. Since the solution can be quite time consuming, we solve it in a 2D axisymmetric geometry. To create a standing wave in a time-dependent solution, we consider a resonant box two wavelengths high and initiate the standing wave by setting up the corresponding initial conditions. In such a box, the standing wave solution is p(r,z,t) = p_0 \cos(k_0 z) \cos(\omega t) so, at time t = 0, the initial conditions should read p(r,z,t=0) = p_0 \cos(k_0 z) and \mathbf u(r,z,t=0) = 0. This is illustrated below for a simulation box that is 3 mm high with a wavelength of \lambda = 1.5 mm (the corresponding frequency is 1 MHz).
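As a quick sanity check of these numbers, the wavelength, sound speed, and box height quoted above pin down the driving frequency and wavenumber; the pressure amplitude below is the 0.1 MPa used later in the post.

```python
import math

# Values taken from the post: water sound speed and box geometry.
c0 = 1500.0          # speed of sound in water, m/s
wavelength = 1.5e-3  # standing-wave wavelength, m
height = 3.0e-3      # simulation box height (two wavelengths), m

f = c0 / wavelength              # driving frequency, Hz
k0 = 2 * math.pi / wavelength    # wavenumber, rad/m

# Initial pressure profile p(z, t=0) = p0 * cos(k0 * z) for the standing wave
p0 = 1.0e5  # pressure amplitude, Pa (0.1 MPa)
def p_init(z):
    return p0 * math.cos(k0 * z)

print(f / 1e6)              # frequency in MHz -> 1.0
print(height / wavelength)  # box height in wavelengths -> 2.0
```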
As noted in the previous blog post, we expect the nonlinear force term to be a few orders of magnitude smaller than the linear force, such that the particle will appear to be bouncing up and down in the acoustic field. However, every time the particle moves, it will go a little further in one direction than the other. This is a result of the nonlinear force component that does not change its direction when the field changes its sign. Because the nonlinear force is so small, it is very hard to extract the value of the nonlinear force from the time-domain solution, unless a very simple trick is used.
The essence of this trick is to utilize the very fact that the nonlinear force does not change direction when the excitation changes sign. Let us denote x_\textrm p (t) as the average displacement of the particle obtained when p_0 is positive and x_\textrm m (t) when p_0 is negative. The nonlinear displacement component will be given by the combination of the two, x_\textrm{nl}(t) = 1/2 \left [ x_\textrm p (t) +x_\textrm m (t) \right]. This method was first used here. The solution corresponding to the pressure amplitude of 0.1 MPa and a nylon particle of 100-μm radius is depicted in the plot below. You can see that x_\textrm p (t) and x_\textrm m (t) have opposite signs and otherwise appear identical; however, their sum is non-zero, albeit very small in comparison.
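The sign-flip trick can be illustrated with a toy model (synthetic data, not a COMSOL result): assume the linear response flips sign with the drive amplitude while a small nonlinear drift does not; averaging the two runs then isolates the drift exactly. The amplitude and drift values are arbitrary illustrative numbers.

```python
import math

A = 1.0e-6      # linear oscillation amplitude, m (assumed)
drift = 2.0e-9  # nonlinear drift rate, m/s (assumed)
omega = 2 * math.pi * 1.0e6  # 1 MHz drive

def x_p(t):  # displacement with +p0
    return A * math.sin(omega * t) + drift * t

def x_m(t):  # displacement with -p0: linear part flips, nonlinear part doesn't
    return -A * math.sin(omega * t) + drift * t

def x_nl(t):  # recovered nonlinear component, 1/2 (x_p + x_m)
    return 0.5 * (x_p(t) + x_m(t))

t = 3.3e-6
print(x_nl(t))  # the large linear part cancels; only drift * t remains
```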
Here, the particle is shown in motion.
Notes on Implementation in COMSOL Multiphysics® Software
The method outlined above considers nonlinear acoustic phenomena. Therefore, the usual rules of acoustic modeling apply. They are:
- The mesh has to be sufficiently fine to resolve the shortest wavelength. Here, the basic wavelength is that of the externally imposed standing wave. Shorter wavelengths, however, are produced by the particle vibrating at its resonant frequency; these are captured by the model because it is solved in the time domain.
- A suitable time-stepping method for resolving wave phenomena in COMSOL Multiphysics is the generalized alpha method.
- To make sure that the solution is stable, a Courant–Friedrichs–Lewy (CFL) criterion has to be imposed manually by setting a manual step size with \delta t < 0.5\ h_\textrm{min} / c_0.
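For the parameters used in this post, the CFL bound translates into a concrete maximum step size; the minimum mesh element size below is an assumed illustrative value, not one taken from the model.

```python
# Stability step from the CFL criterion quoted above: dt < 0.5 * h_min / c0.
c0 = 1500.0    # speed of sound in water, m/s
h_min = 15e-6  # assumed minimum mesh element size, m

dt_max = 0.5 * h_min / c0
print(dt_max)  # 5e-09 s, i.e. 200 steps per 1 MHz acoustic period
```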
Postprocessing
The last step of the analysis is computing the force from the average nonlinear displacement given by the finite element model. We can observe the effect of the acoustic radiation force as a small offset of the displacement from the otherwise perfect oscillatory motion of the particle. Judging by the computed linear and nonlinear components of the displacement, the nonlinear force component is about three orders of magnitude smaller than the linear one. The effect of the acoustic radiation force is therefore negligible during one acoustic cycle and, to evaluate it correctly, around five to ten acoustic cycles must be computed.
With this data, we can export the results to Excel® spreadsheet software and find the average acceleration by fitting a second-degree polynomial to the displacement curve and computing the force as F = m \ddot x_\textrm{nl}. The graph below shows the results of such an analysis.
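The fitting step can be sketched as follows, using hypothetical exported displacement data with a known constant acceleration baked in; the particle mass is likewise an assumed illustrative number, and the fit stands in for the Excel® analysis described above.

```python
import numpy as np

m = 4.7e-9       # particle mass, kg (assumed for illustration)
a_true = 2.0e-3  # constant acceleration baked into the fake data, m/s^2

t = np.linspace(0.0, 1.0e-5, 101)  # ten 1 MHz acoustic cycles
x_nl = 0.5 * a_true * t**2         # idealized nonlinear drift, x = a t^2 / 2

coeffs = np.polyfit(t, x_nl, 2)    # fit x(t) = c2 t^2 + c1 t + c0
a_fit = 2.0 * coeffs[0]            # second derivative of the fitted quadratic
F = m * a_fit                      # F = m * x''
print(F)
```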
Here, the maximum force at a λ/8 distance from the pressure node of the standing wave is evaluated as a function of the particle’s radius. A good agreement is obtained in the limit of the small particle k_0R_0 \ll 1 for which the analytical model is valid, and the deviation appears to increase as we depart from this limit.
Conclusion
We have outlined a direct method for evaluating nonlinear acoustic radiation force. The same approach can be applied to other nonlinear acoustic effects such as acoustic streaming. The advantage of this method is that it does not rely on any theoretical models (e.g., the perturbation method) to express the nonlinear terms. Note that because the fluid-structure interaction approach requires the model to be solved in the time domain, it is much slower than the perturbation method.
Excel is a registered trademark of Microsoft Corporation in the United States and/or other countries.
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
I've seen a number of news reports indicating there is likely a 9th planet in our Solar System, something with an orbital period of between 10k-20k years, that is 10 times Earth's mass. I haven't seen any real indication of where this object might be. If I had access to a sufficient telescope, would I be able to find this planet, and what way would I point a telescope to find it? How far is it likely to be, or is that not well known?
It's too dim to be seen in a normal survey during the majority of its orbit. Update: Scientists at the University of Bern have modeled a hypothetical 10 Earth mass planet in the proposed orbit to estimate its detectability with more precision than my attempt below.
The takeaway is that NASA's WISE mission would probably have spotted a planet of at least 50 Earth masses in the proposed orbit, and that none of our current surveys would have had a chance to find one below 20 Earth masses in most of its orbit. They put the planet's temperature at 47 K due to residual heat from formation, which would make it 1000x brighter in infrared than in the visible light it reflects from the sun.
It should, however, be within reach of the LSST once it is completed (first light 2019, normal operations beginning 2022), so the question should be resolved within a few more years even if it's far enough from Batygin and Brown's proposed orbit that their search with the Subaru telescope comes up empty.
My original attempt to handwave an estimate of detectability is below. The paper gives potential orbital parameters of $400-1500~\textrm{AU}$ for the semi-major axis and $200-300~\textrm{AU}$ for perihelion. Since the paper doesn't give a most-likely case for the orbital parameters, I'm going to go with the extreme case that makes the planet most difficult to find: an orbit with a $1500~\textrm{AU}$ semi-major axis and a $200~\textrm{AU}$ perihelion, which has a $2800~\textrm{AU}$ aphelion.
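The aphelion figure follows from the defining relation $r_\textrm{min} + r_\textrm{max} = 2a$; a one-line check under the extreme parameters assumed above:

```python
# Aphelion of the assumed extreme orbit: r_max = 2a - r_min.
a_sma = 1500.0  # semi-major axis, AU
r_min = 200.0   # perihelion, AU

r_max = 2 * a_sma - r_min
print(r_max)  # 2800.0 AU
```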
To calculate the brightness of an object shining with reflected light, the proper scaling factor is not the $1/r^2$ falloff one might naively assume. That is correct for an object radiating its own light, but not for one shining by reflected light; for that case, the same $1/r^4$ scaling as in a radar return is appropriate. That this is the correct scaling factor can be sanity checked by the fact that Neptune, though similar in size to Uranus, is $\sim 6x$ dimmer despite being only $50\%$ farther away: $1/r^4$ scaling predicts a factor of $5x$ dimmer vs. $2.25x$ for $1/r^2$.
Using that gives a dimming of 2400x at $210~\textrm{AU}\;.$ That puts us $8.5$ magnitudes down from Neptune at perihelion, or $16.5$ magnitude. $500~\textrm{AU}$ gets us to $20$th magnitude, while a $2800~\textrm{AU}$ aphelion dims the reflected light by nearly $20$ magnitudes to $28$th magnitude. That's equivalent to the faintest stars visible from an 8 meter telescope, making its non-discovery much less surprising.
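These magnitude estimates can be reproduced with the $1/r^4$ scaling, taking Neptune at roughly 30 AU and apparent magnitude ~8 as the (rounded, assumed) reference:

```python
import math

r_neptune = 30.0  # Neptune's distance, AU (rounded)
m_neptune = 8.0   # Neptune's apparent magnitude (approximate)

def apparent_mag(r_au):
    # brightness falls as 1/r^4 for reflected light; convert to magnitudes
    dimming = (r_au / r_neptune) ** 4
    return m_neptune + 2.5 * math.log10(dimming)

print(round(apparent_mag(210.0), 1))   # ~16.5 near perihelion
print(round(apparent_mag(500.0), 1))   # ~20th magnitude
print(round(apparent_mag(2800.0), 1))  # ~27.7, nearly 20 magnitudes down
```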
This is something of a fuzzy boundary in both directions. Residual energy from formation/radioactive material in its core will be giving it some innate luminosity; at extreme distances this might be brighter than reflected light. I don't know how to estimate this. It's also possible that the extreme cold of the Oort Cloud may have frozen its atmosphere out. If that happened, its diameter would be much smaller and the reduction in reflecting surface could dim it another order of magnitude or two.
Not knowing what sort of adjustment to make here, I'm going to assume the two factors cancel out completely and leave the original assumptions that it reflects as much light as Neptune and reflective light is the dominant source of illumination for the remainder of my calculations.
For reference, data from NASA's WISE experiment has ruled out a Saturn-sized body within $10,000~\textrm{AU}$ of the sun.
It's also likely too faint to have been detected via proper motion; although if we can pin its orbit down tightly Hubble could confirm its motion.
Orbital eccentricity can be calculated as:
$$e = \frac{r_\textrm{max} - r_\textrm{min}}{2a}$$
Plugging in the numbers gives:
$$e = \frac{2800~\textrm{AU} - 200~\textrm{AU}}{2\cdot 1500~\textrm{AU}} = 0.867$$
Plugging $200~\textrm{AU}$ and $e = 0.867$ into a cometary orbit calculator gives a $58,000$ year orbit.
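The eccentricity, period, and average proper motion quoted here follow directly from the assumed orbit; Kepler's third law for a solar orbit gives the period in years as $a^{3/2}$ with $a$ in AU.

```python
# Eccentricity, period, and average proper motion for the assumed extreme orbit.
a_sma = 1500.0  # semi-major axis, AU
r_min = 200.0   # perihelion, AU
r_max = 2800.0  # aphelion, AU

e = (r_max - r_min) / (2 * a_sma)        # orbital eccentricity
period_years = a_sma ** 1.5              # Kepler's third law: P[yr] = a[AU]^(3/2)
avg_pm = 360.0 * 3600.0 / period_years   # average proper motion, arcsec/year

print(round(e, 3))          # ~0.867
print(round(period_years))  # ~58,000 years
print(round(avg_pm, 1))     # ~22 arcsec/year on average
```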
While that gives an average proper motion of $ 22~\textrm{arc-seconds/year}\;,$ because the orbit is highly eccentric its actual proper motion varies greatly, but it spends a majority of its time far from the sun where its values are at a minimum.
Kepler's laws tell us that the velocity at aphelion is given by:
$$v_a^2 = \frac{ 8.871 \times 10^8 }{ a } \frac{ 1 - e }{ 1 + e }$$
where $v_a$ is the aphelion velocity in $\mathrm{m/s}\;,$ $a$ is the semi-major axis in $\mathrm{AU},$ and $e$ is orbital eccentricity.
$$v_a = \sqrt{\frac{ 8.871 \times 10^8 }{ 1500 } \cdot \frac{ 1 - 0.867 }{ 1 + 0.867 }} = 205~\mathrm{m/s}\;.$$
To calculate the proper motion we first need to convert the velocity into units of $\textrm{AU/year}:$
$$205 \mathrm{\frac{m}{s}}\; \mathrm{\frac{3600 s}{1 h}} \cdot \mathrm{\frac{24 h}{1 d}} \cdot \mathrm{\frac{365 d}{1 y}} \cdot \mathrm{\frac{1\; AU}{1.5 \times 10^{11}m}} = 0.043~\mathrm{\frac{AU}{year}}$$
To get proper motion from this, create a triangle with a hypotenuse of $2800~\textrm{AU}$ and a short side of $0.043~\textrm{AU}$ and then use trigonometry to get the narrow angle.
$$\sin \theta = \frac{0.043}{2800}\\ \implies \theta = {8.799×10^{-4}}^\circ = 3.17~\textrm{arc seconds}\;.$$
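Putting the vis-viva expression and the narrow-angle triangle together, with the same rounded constants used in the text:

```python
import math

a_sma, e = 1500.0, 0.867
GM_CONST = 8.871e8           # GM_sun in units giving a in AU and v in m/s
AU_M = 1.5e11                # meters per AU (rounded, as in the post)
SECONDS_PER_YEAR = 3600 * 24 * 365

# aphelion speed from vis-viva: v_a^2 = (GM/a) * (1-e)/(1+e)
v_a = math.sqrt(GM_CONST / a_sma * (1 - e) / (1 + e))
au_per_year = v_a * SECONDS_PER_YEAR / AU_M  # ~0.043 AU/yr

# narrow-angle triangle: hypotenuse 2800 AU, short side ~0.043 AU
theta_rad = math.asin(au_per_year / 2800.0)
theta_arcsec = math.degrees(theta_rad) * 3600

print(round(v_a))              # ~205 m/s
print(round(theta_arcsec, 1))  # ~3.2 arcsec/year at aphelion
```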
This is well within Hubble's angular resolution of $0.05~\textrm{arc seconds}$, so if we knew exactly where to look, we could confirm its orbit even if it's near its maximum distance from the sun. However, its extreme faintness in most of its orbit means that it's unlikely to have been found in any survey. If we're lucky and it's within $\sim 500~\textrm{AU}$, it would be bright enough to be seen by the ESA's GAIA spacecraft, in which case we'll locate it within the next few years. Unfortunately, it's more likely that all the GAIA data will do is constrain its minimum distance slightly.
Its parallax movement would be much larger; however the challenge of actually seeing it in the first place would remain.
The position of the hypothetical object is not known with any certainty, so it's hard to know where to point your telescope.
The paper proposes a wide range of orbital distances anywhere from 400 to 1500 AU semi-major axis, with a perihelion (closest approach to the sun) of 200-300AU. This is 8 times as far as Neptune. (I didn't read the article closely enough to determine whether the body would be near perihelion or not at present; it could be over 1000 AU away, 30 times Neptune's distance.)
With a mass of 10 Earths, we would expect the body to be something like 2-5 times Earth's radius -- somewhat smaller than Neptune.
The combination of distance and size suggests the body would be far fainter than Neptune, no brighter than magnitude 16.5 at perihelion, and likely much dimmer.
Citing the original article:
We find that the observed orbital alignment can be maintained by a distant eccentric planet with mass $\gtrsim 10\, m_\oplus$ whose orbit lies in approximately the same plane as those of the distant KBOs, but whose perihelion is 180° away from the perihelia of the minor bodies.
and
As already alluded to above, the precise range of perturber parameters required to satisfactorily reproduce the data is at present difficult to diagnose. Indeed, additional work is required to understand the tradeoffs between the assumed orbital elements and mass, as well as to identify regions of parameter space that are incompatible with the existing data.
So, finding out likely orbital parameters is work in progress.
Batygin and Brown made a website which describes the search for the 9th planet in clear terms. They specifically note the following:
perihelion (its closest approach to the sun) at around a Right Ascension in the sky of 16 hours, which means that the perihelion position is straight overhead in late May. Conversely, the orbit comes to aphelion (the furthest point from the sun) at about 4 hours, or straight overhead in late November.
So to look for it, one should look along the ecliptic, concentrating mostly on the area directly overhead in late November. Note that this is the part of the sky where the galactic center also appears. The inclination is estimated to be 30 degrees, plus or minus 20, so that distance from the ecliptic should be searched as well.
If you had access to a sufficient telescope, you could theoretically see it, if you looked in the right place (although no one knows where the right place might be). But if it's anywhere near aphelion there are only a handful of sufficient telescopes in the world (let's say an 8m mirror or larger), so I think it highly unlikely that you have access to one of them. |
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the ground lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it is multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
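A quick numerical illustration, assuming for concreteness a symmetric matrix with eigenvalues 2 and 1/2 (the actual matrix in the exercise isn't shown here): the operator (spectral) norm is the largest singular value, which for a symmetric matrix equals the largest absolute eigenvalue.

```python
import numpy as np

# Build a symmetric matrix with assumed eigenvalues 2 and 1/2
P = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # orthogonal eigenbasis
D = np.diag([2.0, 0.5])
A = P @ D @ P.T

op_norm = np.linalg.norm(A, 2)  # largest singular value
print(op_norm)  # 2.0: vectors are stretched by at most a factor of 2
```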
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
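One low-effort alternative to grinding through the algebra is a numerical sanity check of associativity for this multiplication rule over exact rationals, representing $a+b\sqrt{\delta}$ as the pair $(a,b)$; $\delta = 5$ here is an arbitrary assumed non-square.

```python
from fractions import Fraction as F
import random

DELTA = F(5)  # assumed non-square delta

def mult(x, y):
    # (a + b sqrt(d)) * (c + d' sqrt(d)) = (ac + bd'*delta) + (bc + ad') sqrt(d)
    a, b = x
    c, d = y
    return (a * c + b * d * DELTA, b * c + a * d)

random.seed(0)
def rand_elem():
    return (F(random.randint(-9, 9)), F(random.randint(-9, 9)))

for _ in range(100):
    alpha, beta, gamma = rand_elem(), rand_elem(), rand_elem()
    assert mult(mult(alpha, beta), gamma) == mult(alpha, mult(beta, gamma))
print("associativity holds on 100 random triples")
```

This is only evidence, not a proof, but it catches transcription errors in the multiplication rule immediately.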
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible field
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example: CH is independent of ZFC, meaning you can neither prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH or that imply CH, thus if your set of axioms contains those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity, however, is not good enough, as implicitly pointed out by the many users who engaged with my rambles and always managed to find counterexamples escaping every definition of an infinite object I proposed; which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
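The standard dynamic program for the unbounded variant described above ("the number of each item") can be sketched in a few lines; the item list is an arbitrary example.

```python
def unbounded_knapsack(items, capacity):
    # items: list of (weight, value) pairs; each item may be taken any number
    # of times. dp[w] = best value achievable with total weight <= w.
    dp = [0] * (capacity + 1)
    for w in range(1, capacity + 1):
        for weight, value in items:
            if weight <= w:
                dp[w] = max(dp[w], dp[w - weight] + value)
    return dp[capacity]

items = [(2, 3), (3, 4), (4, 5), (5, 8)]
print(unbounded_knapsack(items, 7))  # 11: one (2,3) item plus one (5,8) item
```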
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $$0 < \left|x - \frac{p}{q}\right| < \frac{1}{q^n}.$$
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
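The partial sums can be computed exactly with rational arithmetic; a sketch for $b = 10$, where the limit is Liouville's constant:

```python
from fractions import Fraction
import math

b = 10
def partial_sum(M):
    # sum_{k=1}^{M} 1 / b^(k!), computed exactly as a rational number
    return sum(Fraction(1, b ** math.factorial(k)) for k in range(1, M + 1))

sums = [partial_sum(M) for M in range(1, 6)]
assert all(s < t for s, t in zip(sums, sums[1:]))  # strictly increasing

print(float(sums[-1]))  # ~0.110001, with further 1s at digit positions k!
```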
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, this means we can construct a number that cannot be reached by any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion
The whole point is that the limit of the difference quotient is the
definition of the derivative at a point. In general, there is no independent standard against which to check this definition. As a result, it does not make sense to ask "does the value of the limit really give the slope of the tangent line?" -- precisely because we have no other definition of "slope of the tangent line," in general. Put more informally, there can be no question of whether the limit is "exactly" equal to something else, because we have no definition of that something else.
However, perhaps you do
think you have an independent definition of "tangent line." If so, you probably have in mind the intuitive idea that a tangent line ought to intersect the curve "at one point." Of course this is wrong in general (think about tangent lines to $y=x^3$ away from $x=0$), but you can check that the limit definition gives the "right" answer (in this sense) for the simplest possible case of $y=x^2$ by a standard precalculus argument. Suppose the line $y=ax+b$ intersects the parabola $y=x^2$ at exactly one point, $(x_0,y_0)$. Then the equation
$$x^2-ax-b=0$$
ought to have precisely one solution (namely $x_0$). But the solutions are $x=\frac{a\pm\sqrt{a^2+4b}}{2}$. Since we have only one solution, the discriminant vanishes, giving $x_0=\frac{a}{2}$. In other words, $a=2x_0$, so the slope of the tangent line $y=ax+b$ is indeed the same as the limit of the difference quotient.
The problem is that this simple geometric picture won't carry you very far: it fails, as I said, even for the nice curve $y=x^3$. (Things can get a whole lot worse: consider the problem of making sense of the tangent line to the curve $y=x^2\sin(1/x)$ at $x=0$.) So we must take a different approach to defining "slope of the tangent line" for general functions. The approach we take is to say a function has a tangent line if the limit of the secant slopes exists.
Why is this a good approach? The short answer is that it allows us to prove many important theorems. But note, as well, that it does generalize
some of our geometric intuition, which says that the secant lines ought to slide continuously through the tangent line as $h$ changes from positive to negative. In other words, the limit definition tells us how to define the function $$g(h):=\begin{cases}\frac{f(x+h)-f(x)}{h}& h\neq0\\??&h=0\end{cases}$$if we want $g$ to be continuous at zero. |
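As a numerical illustration of this continuity picture (my own sketch, not part of the quoted answer), take $f(x)=x^2$ at $x=1$: the secant slope is exactly $2+h$, so it slides through the tangent slope $2$ as $h$ passes through zero from either side:

```python
def secant_slope(f, x, h):
    """Slope of the secant line through (x, f(x)) and (x + h, f(x + h))."""
    return (f(x + h) - f(x)) / h

f = lambda x: x * x
# for f(x) = x^2 at x = 1 the secant slope is exactly 2 + h, so it approaches
# the tangent slope 2 continuously from both sides as h -> 0
for h in (0.1, 0.01, 0.001, -0.001, -0.01, -0.1):
    assert abs(secant_slope(f, 1.0, h) - 2.0) <= abs(h) + 1e-9
```

Filling in $g(0)=2$ is then precisely the choice that makes $g$ continuous at zero.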
I agree with most of what's already been said so far, but wanted to add one point---that the answer is surely going to depend on your audience. Of course, this is going to be people who have never learnt calculus before so their mathematical maturity and exposure to such things is definitely going to be limited, but depending on where you're teaching the ...
I think that there may be advantages to introducing the idea of asymptotics early, and I think that it might be interesting to experiment a bit with curriculum by bringing in asymptotics in calculus. However, there are caveats:I think that students need to have a solid understanding of limits and continuity, first. I don't necessarily mean that the ...
I've never known or needed that notation in my life. And I've used lots of calculus for science and engineering. So one of your assumptions is wrong. I think it would be more efficient to teach a normal calculus class. Leave the esoteric abstractions for later, and for the kids who will really need them (a small subset of the typical calculus population, ...
When I was an undergraduate, the big and little oh notations were taught to me in a first-year math course ostensibly aimed at physics students. The class was strong - several of us are now mathematics and physics professors in universities - and the attempt was, to my mind, moderately unsuccessful (although one could make the counterpoint that I still ...
You could start with two tables of values for x (the input variable) and y (the output value) in both. To start, each should represent a permutation on, say, the set {1, 2, 3, 4, 5}, but don't use the word "permutation." Label one "Table A", the other "Table B". For each line in Table A, introduce the notation A(1)=, A(2)=, etc. Similarly, for Table B....
Many answers already, so I'll keep this one short: it has been realized by researchers in didactics that one difficulty in the concept of function is that it changes status: at first each function is considered as a process (a verb in @ΦDev's answer); they meet several of them, each being akin to a (unitary) operation, not very different from addition or ...
In the Yoruba Language, we use the word òǹkà to refer to tokens for representing numbers. Thus, it would correspond to the word numeral. The word òǹkà literally translates as 'that which is used for reckoning.' The prefix on (the n is a nasal) means thing, and the other part, ka, means to reckon, count, calculate, etc.On the other hand the word number ...
It's the whole subject of Bourbaki's book Integration. First generalize your notion of derivative: a derivation is any function $d:A \rightarrow A$ on the space $A=\mathcal{C}^\infty(\mathbb{R})$ such that

$d(f+g) = d(f)+d(g)$,
$d(\lambda f) = \lambda\, d(f)$ for constant $\lambda \in \mathbb{R} \subset A$,
$d(fg) = f\, d(g) + d(f)\, g$.

The first two mean the ...
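As a quick sanity check that the ordinary derivative satisfies these three axioms, here is a sketch of mine (the central-difference approximation, the sample functions, and the tolerance are illustrative choices, not from the answer above):

```python
import math

def d(f, x, h=1e-6):
    """Central-difference approximation to the derivative of f at x."""
    return (f(x + h) - f(x - h)) / (2 * h)

f, g = math.sin, math.exp
x, lam = 0.7, 2.5

# additivity: d(f + g) = d(f) + d(g)
assert abs(d(lambda t: f(t) + g(t), x) - (d(f, x) + d(g, x))) < 1e-6
# homogeneity over constants: d(lam * f) = lam * d(f)
assert abs(d(lambda t: lam * f(t), x) - lam * d(f, x)) < 1e-6
# Leibniz rule: d(f * g) = f * d(g) + d(f) * g
assert abs(d(lambda t: f(t) * g(t), x) - (f(x) * d(g, x) + d(f, x) * g(x))) < 1e-6
```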
I'm working on a parameter study of Duffing's equation
$\ddot x + \delta \dot x + \alpha x + \beta x^3 = \gamma \cos{\omega t},$
where $\delta, \alpha, \beta, \gamma$ and $\omega$ are real parameters and $\dot x := dx/dt$. Substituting $x \rightarrow u, \dot x\rightarrow v$ in the usual manner, the idea is to integrate the system
$\dot u = v$,
$\dot v = \gamma \cos{\omega t} - \delta v - \alpha u - \beta u^3$
using the fourth order Runge-Kutta method. I've chosen this pretty much because it's standard. My questions are:
1) How do I determine whether this method is at all applicable to the problem? Is the problem stiff, and should I choose a different method?
This of course is complicated by the fact that Duffing's equation doesn't have an analytical solution for most parameters.
2) How do I determine what step size to use - ie. how is the error resulting from a given step size estimated? (I would very much like to avoid implementing an adaptive step size.)
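A minimal sketch of both pieces, assuming nothing beyond the equations above: classical RK4 for the system $(u,v)$, plus the standard step-doubling device for question 2 (take one step of size $h$ and two steps of size $h/2$; for a fourth-order method, the difference of the two results divided by $2^4-1=15$ estimates the local error). The parameter values in the check at the end are mine, chosen only for testing:

```python
import math

def duffing(t, y, delta, alpha, beta, gamma, omega):
    """Right-hand side of the first-order Duffing system y = (u, v)."""
    u, v = y
    return [v, gamma * math.cos(omega * t) - delta * v - alpha * u - beta * u ** 3]

def rk4_step(f, t, y, h, *args):
    """One classical fourth-order Runge-Kutta step."""
    def shift(a, k):  # y + a*k, elementwise
        return [yi + a * ki for yi, ki in zip(y, k)]
    k1 = f(t, y, *args)
    k2 = f(t + h / 2, shift(h / 2, k1), *args)
    k3 = f(t + h / 2, shift(h / 2, k2), *args)
    k4 = f(t + h, shift(h, k3), *args)
    return [yi + h / 6 * (a + 2 * b + 2 * c + e)
            for yi, a, b, c, e in zip(y, k1, k2, k3, k4)]

def local_error_estimate(f, t, y, h, *args):
    """Step doubling: one step of size h vs. two of size h/2; for a 4th-order
    method the difference over 2**4 - 1 = 15 estimates the local error."""
    y_big = rk4_step(f, t, y, h, *args)
    y_half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2, *args), h / 2, *args)
    return max(abs(a - b) for a, b in zip(y_big, y_half)) / 15.0

# sanity check: delta = beta = gamma = 0, alpha = 1 reduces the system to a
# harmonic oscillator, so x(t) = cos(t) for x(0) = 1, v(0) = 0
y, t, h = [1.0, 0.0], 0.0, 0.01
for _ in range(100):
    y = rk4_step(duffing, t, y, h, 0.0, 1.0, 0.0, 0.0, 1.0)
    t += h
assert abs(y[0] - math.cos(1.0)) < 1e-8
```

Halving $h$ until the step-doubling estimate falls below your tolerance gives a cheap manual substitute for a fully adaptive scheme.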
Being pointed towards good references would be much appreciated! Best regards, \T
Mentor: Bjoern Muetzel
If you are an undergraduate interested in a reading course, independent study or working on a research project, feel free to contact me. I am particularly interested in the following topics.
The hyperbolic plane is a space of constant negative curvature $-1$, where different rules than in Euclidean space apply for geodesics, the geometry of polygons, and the area of disks. A hyperbolic surface can be seen as a polygon in the hyperbolic plane with identified sides; we call such a surface a Riemann surface. Many questions about Riemann surfaces are still open or under study. Hyperbolic geometry is used in the theory of special relativity, particularly Minkowski spacetime.
A systole of a surface is a shortest non-contractible loop on the surface. Every surface has a genus \( g \), where informally \( g \) denotes the number of holes. Surprisingly, for any surface of fixed genus \( g \) and area one, the systole cannot take a value larger than \(c \cdot \frac{\log(g)}{ \sqrt{g}} \), where \( c \) is a constant. A large number of families of short curves on surfaces satisfy this upper bound, and example surfaces can be found among the hyperbolic Riemann surfaces.
Advisor: Prof. Orellana
I have a number of projects accessible to undergraduate students in Combinatorics, Algebra and Graph Theory. These projects can lead to a senior thesis for honors or high honors. The ideal student should have taken Math 24 and preferably (although not required) Math 28, 31 (71), and 38, and have some programming skills. For more details, schedule an appointment.
Advisor: Prof. Voight
Classical unsolved problems often serve as the genesis for the formulation of a rich and unified mathematical fabric. Diophantus of Alexandria first sought solutions to algebraic equations in integers almost two thousand years ago. For instance, he stated that if two numbers can each be written as a sum of two squares, then their product is also a sum of two squares: since $5=2^2+1^2$ and $13=3^2+2^2$, the product $13\cdot 5=65$ can also be written as a sum of two squares, indeed $65=8^2+1^2$. Equations in which only integer solutions are sought are now called Diophantine equations in his honor.
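Diophantus's multiplication rule is the identity $(a^2+b^2)(c^2+d^2)=(ac-bd)^2+(ad+bc)^2$, often called the Brahmagupta–Fibonacci identity. A one-line check (my own illustration, not part of the text above):

```python
def product_as_two_squares(a, b, c, d):
    """Given n1 = a^2 + b^2 and n2 = c^2 + d^2, return (x, y) with
    n1 * n2 = x^2 + y^2, via the Brahmagupta-Fibonacci identity."""
    return (a * c - b * d, a * d + b * c)

# 5 = 2^2 + 1^2 and 13 = 3^2 + 2^2, so 65 = 4^2 + 7^2 ...
x, y = product_as_two_squares(2, 1, 3, 2)
assert x * x + y * y == 65
# ... and the sign-swapped pair (a*c + b*d, a*d - b*c) gives 65 = 8^2 + 1^2
assert (2 * 3 + 1 * 2) ** 2 + (2 * 2 - 1 * 3) ** 2 == 65
```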
Diophantine equations may seem perfectly innocuous, but in fact within them can be found the deep and wondrously complex universe of number theory. Pierre de Fermat, a seventeenth-century French lawyer and mathematician, famously wrote in his copy of Diophantus’ treatise “Arithmetica” that “it is impossible to separate a power higher than two into two like powers”, i.e., if $n>2$ then the equation $x^n+y^n=z^n$ has no solution in integers $x,y,z\ge 1$; and, provocatively, that he had “discovered a truly marvelous proof of this, which this margin is too narrow to contain.” This deceptively simple mathematical statement, known as “Fermat’s last ‘theorem’”, remained without proof until the pioneering work of Andrew Wiles, who in 1995 (building on the work of many others) used the full machinery of modern algebra to exhibit a complete proof. Over 300 years, attempts to prove there are no solutions to this innocent equation gave birth to some of the great riches of modern number theory.
Even before the work of Wiles, mathematicians recognized that geometric properties often govern the behavior of arithmetic objects. For example, Diophantus may have asked if there is a cube which is one more than a square, i.e., is there a solution in integers x,y to the equation $E : x^3-y^2=1$? This equation describes a curve in the plane called an elliptic curve, and a property of elliptic curves known as modularity was the central point in Wiles’s proof. One sees visibly the solution $(x,y)=(1,0)$ to the equation—but are there any others? What happens if 1 is replaced by 2 or another number?
Computational tools provide a means to test conjectures and can sometimes furnish partial solutions; for example, one can check in a fraction of a second on a desktop computer that there is no integral point on E other than $(1,0)$ with the coordinates x,y at most a million. Although this experiment does not furnish a proof, it is strongly suggestive. (Indeed, one can prove there are no other solutions following an argument of Leonhard Euler.) At the same time, theoretical advances fuel dramatic improvements in computation, allowing us to probe further into the Diophantine realm.
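The experiment described above is easy to reproduce; here is a sketch (the function name is mine, and the search bound is reduced so it runs instantly):

```python
from math import isqrt

def integral_points_on_E(bound):
    """Integral points (x, y) on E: x^3 - y^2 = 1 with 1 <= x <= bound."""
    pts = []
    for x in range(1, bound + 1):
        s = x ** 3 - 1          # we need s = y^2, a perfect square
        y = isqrt(s)
        if y * y == s:
            pts.append((x, y))
            if y != 0:
                pts.append((x, -y))
    return sorted(pts)

# the quoted experiment, with a smaller bound so it finishes instantly
assert integral_points_on_E(10_000) == [(1, 0)]
```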
My research falls into this area of computational arithmetic geometry: I am concerned with algorithmic aspects of the problem of finding rational and integral solutions to polynomial equations, and I investigate the arithmetic of moduli spaces and elliptic curves. My work blends number theory with the explicit methods in algebra, analysis, and geometry in the exciting context of modern computation. This research is primarily theoretical, but it has potential applications in the areas of cryptography and coding theory. The foundation of modern cryptography relies upon the apparent difficulty of certain computational problems in number theory, in particular, the factorization of integers (in RSA) or the discrete logarithm problem (in elliptic curve cryptography).
I have several problems in the area of computational and explicit methods in number theory suitable for experimentation and possible resolution by motivated students. These problems can be tailored to the student based on interests, background, and personality, so there is little need to present the details here; but they all will feature an explicit mathematical approach and, very likely, some computational aspects. Mathematical maturity and curiosity are essential; some background (at the level of MATH 71) is desirable.
Eilenberg-MacLane space
A space, denoted by $K(\pi,n)$, representing the functor $X\to H^n(X;\pi)$, where $n$ is a non-negative integer, $\pi$ is a group which is commutative for $n>1$, and $H^n(X;\pi)$ is the $n$-dimensional cohomology group of a cellular space $X$ with coefficients in $\pi$. It exists for any such $n$ and $\pi$.
The Eilenberg–MacLane space $K(\pi,n)$ can also be characterized by the condition: $\pi_i(K(\pi,n))=\pi$ for $i=n$ and $\pi_i(K(\pi,n))=0$ for $i\neq n$, where $\pi_i$ is the $i$-th homotopy group. Thus, $K(\pi,n)$ is uniquely defined up to a weak homotopy equivalence. An arbitrary topological space can, up to a weak homotopy equivalence, be decomposed into a twisted product of Eilenberg–MacLane spaces (see Postnikov system). The cohomology groups of $K(\pi,1)$ coincide with those of the group $\pi$. Eilenberg–MacLane spaces were introduced by S. Eilenberg and S. MacLane [1a], [1b].
References
[1a] S. Eilenberg, S. MacLane, "Relations between homology and homotopy groups of spaces", Ann. of Math. 46 (1945), 480–509
[1b] S. Eilenberg, S. MacLane, "Relations between homology and homotopy groups of spaces. II", Ann. of Math. 51 (1950), 514–533
[2] R.E. Mosher, M.C. Tangora, "Cohomology operations and applications in homotopy theory", Harper & Row (1968)
[4] E.H. Spanier, "Algebraic topology", McGraw-Hill (1966)
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
For comparing two images, one can use the Peak Signal-to-Noise Ratio (PSNR) metric, defined as follows:
$\mathrm{PSNR} = 10 \cdot \log_{10}\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right) = 20 \cdot \log_{10}(\mathrm{MAX}) - 10 \cdot \log_{10}(\mathrm{MSE})\ \mathrm{dB},$
with $\mathrm{MAX}$ being the maximum gray-scale value ($2^8-1 = 255$ in my case), and $\mathrm{MSE}$ being the mean squared error.
My question is, among the definition with division ($\mathrm{PSNR}_{\div}$) and subtraction ($\mathrm{PSNR}_{-}$), which one is the most numerically accurate? And which one is the most computationally efficient?
I made an experiment with MATLAB R2015a, and for some MSE values, I get different scores. For instance, knowing that the machine epsilon is $\epsilon \approx 2.2204\times 10^{-16}$, for $\mathrm{MSE} = 1708.25$ and $\mathrm{MSE} = 1710.00$, I have $\left| \mathrm{PSNR}_{\div} - \mathrm{PSNR}_{-} \right| = 1.42109\times 10^{-14}$ and $7.10543\times 10^{-15}$ respectively. What can I conclude?
Note: in $\mathrm{PSNR}_{-}$, $20 \cdot \log_{10}(\mathrm{MAX})$ can be reduced to a pre-computed constant. |
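For what it's worth, the two formulations are algebraically identical, so any discrepancy is pure floating-point rounding. A sketch in Python double precision (not the MATLAB original; the MSE values are the ones quoted above):

```python
import math

MAX = 255.0  # 8-bit gray-scale maximum

def psnr_div(mse):
    return 10.0 * math.log10(MAX * MAX / mse)

def psnr_sub(mse):
    # 20*log10(MAX) can be precomputed once and reused
    return 20.0 * math.log10(MAX) - 10.0 * math.log10(mse)

for mse in (1708.25, 1710.00):
    assert abs(psnr_div(mse) - psnr_sub(mse)) < 1e-13  # a few ulps of a ~15.8 dB result
```

A difference of order $10^{-14}$ on a result of order $15$ dB is a few ulps, so neither form is meaningfully more accurate; the subtraction form merely saves a division once the constant is precomputed.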
Assume that a matrix $\mathbf{R}$ of size $N\times N$ is given such that $r_{i,j} \ge r_{i,j+1}$ for any $i$ and $j$, where $r_{i,j}$ is the $i$th row and $j$th column element of $\mathbf{R}$.
I will solve the following problem: $$ \begin{array}{cl} \displaystyle \max_{K_i, i=1,\ldots,N} & \displaystyle f(K_1,\ldots,K_N)=\sum_{i=1}^N \sum_{j=1}^{K_i}r_{i,j}\\ \text{subject to} & \displaystyle \sum_{i=1}^N K_i = L, \end{array} $$ where $L$ is not greater than $N$.
Literally speaking, the above problem is to select $L$ items out of $N$ items, allowing duplicate selection, where the score obtained by selecting an item decreases as the number of times selected increases.
My greedy algorithm is as follows:
Given r[i,j] for all i and j.
for i = 1:N
    K[i] = 1
end
repeat
    i_star = argmax over i of r[i, K_i]
    K[i_star] = K[i_star] + 1
until sum of K[i] over all i equals L
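A runnable transcription of the pseudocode (my own Python; note that starting every $K_i$ at 1 presumes $L \ge N$), together with an exhaustive check on small random instances, which is often a useful warm-up before attempting an exchange-argument proof:

```python
import random

def greedy_select(r, L):
    """Greedy from the pseudocode: start with K[i] = 1 and repeatedly increment
    the K[i] whose next row entry r[i][K[i]] (0-indexed) is largest."""
    N = len(r)
    K = [1] * N
    while sum(K) < L:
        i_star = max(range(N), key=lambda i: r[i][K[i]])
        K[i_star] += 1
    return K

def objective(r, K):
    return sum(r[i][j] for i in range(len(r)) for j in range(K[i]))

def brute_force(r, L):
    """Best objective over all K with K_i >= 1 summing to L (small cases only)."""
    N = len(r)
    best = float("-inf")
    def rec(i, left, K):
        nonlocal best
        if i == N - 1:
            best = max(best, objective(r, K + [left]))
            return
        for k in range(1, left - (N - 1 - i) + 1):
            rec(i + 1, left - k, K + [k])
    rec(0, L, [])
    return best

random.seed(0)
for _ in range(20):
    N, L = 3, 6
    # rows sorted in non-increasing order, as the problem assumes
    r = [sorted((random.randint(0, 9) for _ in range(L)), reverse=True)
         for _ in range(N)]
    assert objective(r, greedy_select(r, L)) == brute_force(r, L)
```

The check passes because, with each row non-increasing, the marginal gain of incrementing $K_i$ never increases, which is exactly the property an exchange argument exploits.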
Intuitively, the above greedy algorithm gives an optimal solution to the above problem; however, I want to prove that the solution it derives is indeed optimal.
I am trying to prove it using mathematical induction, but my proof is not as rigorous as I would like, since I failed to prove the optimal substructure of the problem. Optimal substructure here means that an optimal solution for $L=k+1$ contains an optimal solution for $L=k$.
How can I prove it mathematically rigorously? |
Fujimura's problem
Revision as of 20:19, 6 March 2009
Let [math]\overline{c}^\mu_n[/math] be the size of the largest subset of the triangular grid
[math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math]
which contains no equilateral triangles [math](a+r,b,c), (a,b+r,c), (a,b,c+r)[/math] with [math]r \gt 0[/math]; call such sets
triangle-free. (It is an interesting variant to also allow negative r, thus allowing "upside-down" triangles, but this does not seem to be as closely connected to DHJ(3).) Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain hyper-optimistic conjecture.
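For small [math]n[/math], [math]\overline{c}^\mu_n[/math] can be verified by exhaustive search; the following sketch (mine, and feasible only for small [math]n[/math]) reproduces the first few values:

```python
from itertools import combinations

def delta(n):
    """Points of the triangular grid Delta_n."""
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

def has_triangle(S, n):
    """Does S contain an equilateral triangle with r > 0?"""
    for r in range(1, n + 1):
        for (a, b, c) in delta(n - r):  # (a,b,c) is the triangle's "base" point
            if (a + r, b, c) in S and (a, b + r, c) in S and (a, b, c + r) in S:
                return True
    return False

def fujimura(n):
    """Size of the largest triangle-free subset of Delta_n (brute force)."""
    pts = delta(n)
    for k in range(len(pts), 0, -1):
        if any(not has_triangle(set(S), n) for S in combinations(pts, k)):
            return k
    return 0

assert [fujimura(n) for n in range(5)] == [1, 2, 4, 6, 9]
```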
n: 0 1 2 3 4 5
[math]\overline{c}^\mu_n[/math]: 1 2 4 6 9 12

n=0
[math]\overline{c}^\mu_0 = 1[/math]:
This is clear.
n=1
[math]\overline{c}^\mu_1 = 2[/math]:
This is clear.
n=2
[math]\overline{c}^\mu_2 = 4[/math]:
This is clear (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]).
n=3

[math]\overline{c}^\mu_3 = 6[/math]:
For the lower bound, delete (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math].
For the upper bound: observe that with only three removals each of these (non-overlapping) triangles must have one removal:
set A: (0,3,0) (0,2,1) (1,2,0)
set B: (0,1,2) (0,0,3) (1,0,2)
set C: (2,1,0) (2,0,1) (3,0,0)
Consider choices from set A:
(0,3,0) leaves triangle (0,2,1) (1,2,0) (1,1,1)
(0,2,1) forces a second removal at (2,1,0) [otherwise there is a triangle at (1,2,0) (1,1,1) (2,1,0)], but then none of the choices for the third removal work
(1,2,0) is symmetrical with (0,2,1)

n=4

[math]\overline{c}^\mu_4=9[/math]:
The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c =0, has 9 elements and is triangle-free. (Note that it does contain the equilateral triangle (2,2,0),(2,0,2),(0,2,2), so would not qualify for the generalised version of Fujimura's problem in which [math]r[/math] is allowed to be negative.)
Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles. If [math](0,0,4)\in S[/math], then only one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] can be in S for each [math]x=1,2,3,4[/math]. Thus there can only be 5 elements in S with [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S can have at most 4 elements in this set. So [math]|S|\leq 4+5=9[/math]. Similarly if S contains (0,4,0) or (4,0,0). So if [math]|S|\gt9[/math], S doesn’t contain any of these. Also, S can’t contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math]. Similarly for [math](3,0,1), (1,0,3),(1,2,1)[/math] and [math](1,3,0), (3,1,0), (1,1,2)[/math]. So now we have found 6 elements not in S, but [math]|\Delta_4|=15[/math], so [math]|S|\leq 15-6=9[/math].
Remark: curiously, the best construction for [math]c_4[/math] uses only 7 points instead of 9.

n=5

[math]\overline{c}^\mu_5=12[/math]:
The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c=0 has 12 elements and doesn’t contain any equilateral triangles.
Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], then only one of (0,x,5-x) and (x,0,5-x) can be in S for each x=1,2,3,4,5. Thus there can only be 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S can have at most 6 elements in this set. So [math]|S|\leq 6+6=12[/math]. Similarly if S contains (0,5,0) or (5,0,0). So if |S|>12, S doesn’t contain any of these. S can only contain 2 points in each of the following equilateral triangles:
(3,1,1),(0,4,1),(0,1,4)
(4,1,0),(1,4,0),(1,1,3)
(4,0,1),(1,3,1),(1,0,4)
(1,2,2),(0,3,2),(0,2,3)
(3,2,0),(2,3,0),(2,2,1)
(3,0,2),(2,1,2),(2,0,3)
So now we have found 9 elements not in S, but [math]|\Delta_5|=21[/math], so [math]|S|\leq 21-9=12[/math].
n=6

[math]15 \leq \overline{c}^\mu_6 \leq 17[/math]:
[Incomplete: need to add rotations of solution II.]
[math]15 \leq \overline{c}^\mu_6[/math] from the bound for general n.
Note that there are ten extremal solutions to [math] \overline{c}^\mu_3 [/math]:
Solution I: remove 300, 020, 111, 003
Solution II (and 2 rotations): remove 030, 111, 201, 102 Solution III (and 2 rotations): remove 030, 021, 210, 102 Solution III' (and 2 rotations): remove 030, 120, 012, 201
Also consider the same triangular lattice with the point 020 removed, making a trapezoid. Solutions based on I-III are:
Solution IV: remove 300, 111, 003
Solution V: remove 201, 111, 102 Solution VI: remove 210, 021, 102 Solution VI': remove 120, 012, 201
Suppose we can remove all equilateral triangles from our 7×7×7 triangular lattice with only 10 removals.
The triangle 141-411-114 must have at least one point removed. Remove 141, and note because of symmetry any logic that follows also applies to 411 and 114.
There are three disjoint triangles 060-150-051, 240-231-330, 042-132-033, so each must have a point removed.
(Now only six removals remaining.)
The remainder of the triangle includes the overlapping trapezoids 600-420-321-303 and 303-123-024-006. If the solutions of these trapezoids come from V, VI, or VI', then 6 points have been removed. Suppose the trapezoid 600-420-321-303 uses the solution IV (by symmetry the same logic will work with the other trapezoid). Then there are 3 disjoint triangles 402-222-204, 213-123-114, and 105-015-006. Then 6 points have been removed. Therefore the remaining six removals must all come from the bottom three rows of the lattice.
Note this means the "top triangle" 060-330-033 must have only four points removed, so it must conform to either solution I or II, because of the removal of 141.
Suppose the solution of the trapezoid 600-420-321-303 is VI or VI'. Both solutions I and II on the "top triangle" leave 240 open, and hence the equilateral triangle 240-420-222 remains. So the trapezoid can't be VI or VI'.
Suppose the solution of the trapezoid 600-420-321-303 is V. This leaves an equilateral triangle 420-321-330 which forces the "top triangle" to be solution I. This leaves the equilateral triangle 201-321-222. So the trapezoid can't be V.
Therefore the solution of the trapezoid 600-420-321-303 is IV. Since the disjoint triangles 402-222-204, 213-123-114, and 105-015-006 must all have points removed, that means the remaining points in the bottom three rows (420, 321, 510, 501, 312, 024) must be left open. 420 and 321 force 330 to be removed, so the "top triangle" is solution I. This leaves triangle 321-024-051 open, and we have reached a contradiction.
[math]15 \leq \overline{c}^\mu_6 \leq 16[/math]:
Here, "upper triangle" means the first four rows (with 060 at top) and "lower trapezoid" means the bottom three rows.
Suppose 11 removals leave a triangle-free set.
First, suppose that 5 removals come from the upper triangle and 6 come from the lower trapezoid.
Suppose the trapezoid 600-420-321-303 used solution IV. There are three disjoint triangles 402-222-204, 213-123-114, and 105-015-006. The remainder of the points in the lower trapezoid (420, 321, 510, 501, 402, 312, 024) must be left open. 024 being open forces either 114 or 015 to be removed.
Suppose 114 is removed. Then 213 is open, and with 312 open that forces 222 to be removed. Then 204 is open, and with 024 that forces 006 to be removed. So the bottom trapezoid is a removal configuration of 600-411-303-222-114-006, and the rest of the points in the bottom trapezoid are open. All 10 points in the upper triangle form equilateral triangles with bottom trapezoid points, hence 10 removals in the upper triangle would be needed, so 114 being removed doesn't work.
Suppose 015 is removed. Then 006-024 forces 204 to be removed. Regardless of where the removal in 123-213-114 occurs, the points 420, 321, 222, 024, 510, 312, 501, 402, 105, and 006 must be open. This forces upper triangle removals at 330, 231, 042, 060, 051, 132, which is more than the 5 allowed, so 015 being removed doesn't work; so the trapezoid 600-420-321-303 doesn't use solution IV.
Suppose the trapezoid 600-420-321-303 uses solution VI. The trapezoid 303-123-024-006 can't be IV (already eliminated by symmetry) or VI' (it leaves the triangle 402-222-204). Suppose the trapezoid 303-123-024-006 is solution VI. The removals from the lower trapezoid are then 420, 501, 312, 123, 204, and 015, leaving the remaining points in the lower trapezoid open. The remaining open points force 10 upper triangle removals, so the trapezoid 303-123-024-006 can't be solution VI. Therefore the trapezoid 303-123-024-006 is solution V. The removals from the lower trapezoid are then 420, 510, 312, 204, 114, and 105. The remaining points in the lower trapezoid are open and force 9 upper triangle removals, hence the trapezoid 303-123-024-006 can't be V either, and so the solution for 600-420-321-303 can't be VI.
The solution VI' for the trapezoid 600-420-321-303 can be eliminated by the same logic by symmetry.
Therefore it is impossible for 5 removals to come from the upper triangle and 6 from the lower trapezoid. Therefore 4 removals come from the upper triangle and 7 from the lower trapezoid.
At this point note the triangle 141-411-114 must have one point removed, so let it be 141, and note that any logic that follows is also true for a removal of 411 or 114 by symmetry.
This implies the upper triangle must have either solution I or II.
Suppose it has solution II. Note there are five disjoint triangles 600-510-501, 411-321-312, 402-222-204, 213-123-114, and 105-015-006.
Suppose 420 and 024 are removed. Then, noting 303 must be open, 600 must be removed, leaving 510 open. 510-240 forces 213 to be removed, and 510-150 forces 114 to be removed. But 213 and 114 are in the same disjoint triangle. Hence 420 and 024 can't both be removed.
So at least one of 420 and 024 is open. Let it be 420, noting that by symmetry identical logic applies if 024 is the open one. Then 321, 222, and 123 are removed based on 420 and the open spaces in the upper triangle. This leaves four disjoint triangles 600-501-510, 402-303-312, 213-033-015, 204-114-105. So 411 and 420 are open, forcing the removal of 510. This leaves 501 open, and 501-411 forces the removal of 402. 600, 303, and 330 are then open, forming an equilateral triangle. Therefore 420 isn't open, and therefore the upper triangle can't have solution II.
Therefore the upper triangle has solution I.
Suppose 222 is open. 222 with open points in the upper triangle force 420, 321, 123, and 024 to be removed. This leaves four disjoint triangles 411-501-402, 213-303-204, 015-105-006, and 132-312-114. This would force 8 removals in the lower trapezoid, so 222 must be closed.
Therefore 222 is removed. There are six disjoint triangles 150-420-123, 051-321-024, 231-501-204, 132-402-105, 510-150-114, and 312-042-015. So 600, 411, 303, 114, and 006 are open. 600-240 open forces 204 to be removed, and 600-150 open forces 105 to be removed. This forces 501 and 402 to be open, but 411 is open, so there is the equilateral triangle 501-411-402.
Therefore the solution of the upper triangle is not I, and we have a contradiction. So [math] \overline{c}^\mu_6 \neq 17 [/math].
n = 7
[math]\overline{c}^\mu_{7} \leq 22[/math]:
Using the same ten extremal solutions to [math] \overline{c}^\mu_3 [/math] as previous proofs:
Solution I: remove 300, 020, 111, 003
Solution II (and 2 rotations): remove 030, 111, 201, 102 Solution III (and 2 rotations): remove 030, 021, 210, 102 Solution III' (and 2 rotations): remove 030, 120, 012, 201
Suppose the 8×8×8 lattice can be made triangle-free with only 13 removals.
Slice the lattice into region A (070-340-043), region B (430-700-403), and region C (034-304-007). Each region must have at least 4 points removed. Note there is an additional disjoint triangle 232-322-223 that also must have a point removed. Therefore the points 331, 133, and 313 are open. 331-313 open means 511 must be removed, 331-133 open means 151 must be removed, and 133-313 open means 115 must be removed. Given these three removals, the solutions for regions A, B, and C must each be either I or II. All possible combinations of these solutions leave several triangles open (for example 160-520-124). So we have a contradiction, and [math] \overline{c}^\mu_7 \leq 22 [/math].
n = 8
[math]\overline{c}^\mu_{8} \geq 22[/math]:
008,026,044,062,107,125,134,143,152,215,251,260,314,341,413,431,440,512,521,620,701,800
n = 9
[math]\overline{c}^\mu_{9} \geq 26[/math]:
027,045,063,081,126,135,144,153,207,216,252,270,315,342,351,360,405,414,432,513,522,531,603,630,720,801
n = 10
[math]\overline{c}^\mu_{10} \geq 29[/math]:
028,046,055,064,073,118,172,181,190,208,217,235,262, 316,334,352,361,406,433,442,541,550,604,613,622, 721,730,901,1000
General n
A lower bound for [math]\overline{c}^\mu_n[/math] is 2n for [math]n \geq 1[/math], by removing (n,0,0), the triangle (n-2,1,1) (0,n-1,1) (0,1,n-1), and all points on the edges of and inside the same triangle. In a similar spirit, we have the lower bound
[math]\overline{c}^\mu_{n+1} \geq \overline{c}^\mu_n + 2[/math]
for [math]n \geq 1[/math], because we can take an example for [math]\overline{c}^\mu_n[/math] (which cannot be all of [math]\Delta_n[/math]) and add two points on the bottom row, chosen so that the triangle they form has third vertex outside of the original example.
An asymptotically superior lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), made of all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero.
A trivial upper bound is
[math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math]
since deleting the bottom row of an equilateral-triangle-free set gives another equilateral-triangle-free set. We also have the asymptotically superior bound
[math]\overline{c}^\mu_{n+2} \leq \overline{c}^\mu_n + \frac{3n+2}{2}[/math]
which comes from deleting two bottom rows of a triangle-free set and counting how many vertices are possible in those rows.
Another upper bound comes from counting the triangles. There are [math]\binom{n+2}{3}[/math] triangles, and each point belongs to n of them. So you must remove at least (n+2)(n+1)/6 points to remove all triangles, leaving (n+2)(n+1)/3 points as an upper bound for [math]\overline{c}^\mu_n[/math].
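The counting in this bound is easy to double-check by enumeration (a sketch of mine):

```python
from collections import Counter
from math import comb

def triangles(n):
    """All equilateral triangles with r > 0 in Delta_n."""
    tris = []
    for r in range(1, n + 1):
        m = n - r
        for a in range(m + 1):
            for b in range(m + 1 - a):
                c = m - a - b
                tris.append(((a + r, b, c), (a, b + r, c), (a, b, c + r)))
    return tris

for n in range(1, 8):
    tris = triangles(n)
    assert len(tris) == comb(n + 2, 3)           # (n+2 choose 3) triangles
    counts = Counter(p for t in tris for p in t)
    assert all(v == n for v in counts.values())  # each point lies in n triangles
```

The second assertion also has a one-line proof: the point (a,b,c) is the raised vertex of a triangle in each coordinate direction r = 1,...,a (resp. b, c), giving a + b + c = n triangles in total.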
Asymptotics
The corners theorem tells us that [math]\overline{c}^\mu_n = o(n^2)[/math] as [math]n \to \infty[/math].
By looking at those triples (a,b,c) with a+2b inside a Behrend set, one can obtain the lower bound [math]\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))[/math]. |
And I think people said that reading the first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah, here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class rather than do well in it
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 are all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to be doing more serious things, and that seems to be becoming a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when it is multiplied with a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
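The chat never gives $A$ explicitly, so as a stand-in here is how one could check the operator norm numerically, using a hypothetical symmetric matrix whose eigenvalues are the 2 and 1/2 mentioned above:

```python
import numpy as np

# Stand-in matrix (my assumption): symmetric, with eigenvalues 2 and 1/2.
A = np.array([[1.25, 0.75],
              [0.75, 1.25]])

# For a symmetric matrix the operator (spectral) norm is the largest
# absolute eigenvalue; numpy computes it as the largest singular value.
op_norm = np.linalg.norm(A, 2)
assert abs(op_norm - 2.0) < 1e-12
```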
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
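Along those lines, here is a quick sanity check of associativity (my sketch, in exact rational arithmetic so there is no floating-point doubt), using the multiplication rule stated above:

```python
from fractions import Fraction as F
import random

def mul(p, q, delta):
    # (x1 + y1*sqrt(delta)) * (x2 + y2*sqrt(delta))
    # = (x1*x2 + y1*y2*delta) + (y1*x2 + x1*y2)*sqrt(delta)
    (x1, y1), (x2, y2) = p, q
    return (x1 * x2 + y1 * y2 * delta, y1 * x2 + x1 * y2)

random.seed(0)
rnd = lambda: F(random.randint(-9, 9), random.randint(1, 9))
for _ in range(200):
    delta = rnd()
    p, q, r = (rnd(), rnd()), (rnd(), rnd()), (rnd(), rnd())
    assert mul(mul(p, q, delta), r, delta) == mul(p, mul(q, r, delta), delta)
```

Of course this only tests random instances; the symbolic argument via the ring $\Bbb{Q}$ is still needed for a proof.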
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems; that is why non-associative algebras are more complicated and need machinery like Lie algebras and morphisms to make sense of them
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, it is the largest possible, e.g. the surreals are the largest possible field
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example: CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many axioms equivalent to CH, or which derive CH; thus if your set of axioms contains those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put another way, an equivalent formulation of that (possibly open) problem is:
> Does there exist a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity, however, is not good enough, as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples escaping every definition of an infinite object I proposed; which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = c\prod_{k=1}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, minimising $|P(s)|$ proceeds as follows:
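The chat thread moves on before spelling this out, but a naive brute-force version of the minimisation (my sketch; the degree and coefficient bounds and the choice $s=\pi$ are illustrative assumptions) might look like:

```python
from itertools import product
import math

s = math.pi  # transcendental evaluation point (illustrative choice)

# Smallest |P(s)| over nonzero integer polynomials a + b*x + c*x^2
# with coefficients in {-3, ..., 3}.
best = min(
    abs(a + b * s + c * s * s)
    for a, b, c in product(range(-3, 4), repeat=3)
    if (a, b, c) != (0, 0, 0)
)
# Since s is transcendental, no nonzero integer polynomial vanishes at it.
assert best > 0
```

With these bounds the minimiser turns out to be the familiar $|\pi - 3|$; enlarging the coefficient range is where the knapsack-like search explodes.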
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each $M$ form a monotonically increasing sequence, and the series converges by the ratio test
therefore by induction there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
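That construction is concrete enough to compute directly; here is a sketch with exact rationals (the base $b=10$, giving Liouville's constant, is my illustrative choice):

```python
from fractions import Fraction
from math import factorial

def liouville_partial(b, M):
    # Partial sum sum_{k=1}^{M} 1/b^{k!} in exact rational arithmetic.
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

sums = [liouville_partial(10, M) for M in range(1, 5)]
assert all(x < y for x, y in zip(sums, sums[1:]))  # monotonically increasing
assert sums[-1] < Fraction(1, 9)                   # bounded by the geometric series
```

Each partial sum is a perfectly finitist object; only the limit $L$ requires the extra induction step described above.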
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached by any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else I need to finish that book before commenting
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome |
For any set $X$, let $[X]^2 = \big\{\{a,b\}:a,b \in X, a\neq b\big\}$.
We call a finite, simple, undirected graph $G=(V,E)$ an $n$-
Erdős–Faber–Lovász (EFL-) graph if there are $n$ subsets $S_1,\ldots, S_n$ of $V$ such that each $S_k$ has $n$ elements for $k\in\{1,\ldots, n\}$, $|S_k\cap S_j|\leq 1$ for $k\neq j\in \{1,\ldots, n\}$, and $V = \bigcup_{k=1}^n S_k$, and $E = \bigcup_{k=1}^n [S_k]^2$.
The Erdős–Faber–Lovász conjecture says that if $G$ is an $n$-EFL-graph, then $\chi(G) \le n$.
Given any finite, simple, undirected graph $G=(V,E)$, the
Hadwiger number $\eta(G)$ is the maximum $n\in\mathbb{N}$ such that $K_n$ is a minor of $G$. Question. Is there $C\in\mathbb{N}$ such that for all $n\in\mathbb{N}$, if $G$ is an $n$-EFL-graph, then $\eta(G) \leq Cn$? |
I am following along with several (geophysics) textbooks that help me understand the in-depth physics behind the magnetic field of a dipole magnet. I understand that the basic magnetic potential (when observing a dipole magnet where one of the poles is far enough away to have negligible effect on its partner) is:
$$W= -\int_r^{\infty}B \space dr = \frac{u_op}{4 \pi r}$$
Then the magnetic potential at point P due to both poles is:
$$W(\theta, r)=\frac{u_om\space \cos \theta}{4 \pi r^2}$$
Now to find the magnetic field strength at point P, I know it's the vector addition of both $B_r$ and $B_\theta$ (radial and tangential respectively), and to find both I need to differentiate the potential with respect to $r$ and $\theta$
Now here is finally where I get to ask my question. To find $B_r$ is easy enough by:
$$B_r=\frac{\partial W}{\partial r} = -\frac{2 u_o m\space \cos\theta}{4 \pi r^3}$$
But when I take the potential and differentiate with respect to $\theta$ I should get:
$$B_\theta = \frac{\partial W}{\partial \theta} = -\frac{u_o m\space \sin\theta}{4 \pi r^2}$$
but in ALL the textbooks they multiply by $\frac{1}{r}$ to get:
$$B_\theta = \frac{1}{r}\frac{\partial W}{\partial \theta} = -\frac{u_o m\space \sin\theta}{4 \pi r^3}$$
which is needed to find the total magnetic field strength ($B_r + B_\theta$).
Can anyone please explain where they got that extra $\frac{1}{r}?$
I have a feeling it has something to do with a unit vector, but I can't seem to connect the dots.
|
I'm reading a paper and it says that in a no-arbitrage market the Sharpe ratio is the same for all bonds. I'm guessing that a difference in two bonds' Sharpe ratios would open the possibility of arbitrage, but why is that?
Regards
This was proved by Vasicek in his 1977 paper. If you suppose that the price of a pure discount bond depends only on a Markovian short rate $r(t)$ with SDE \begin{equation} dr(t)=\mu(t,r(t))dt + \sigma(t,r(t))dW(t) \end{equation}
then you can assume that $P(t,T)=F(t,r(t);T)$. Now, with arguments similar to those used in the derivation of the Black-Scholes formula, he made a self-financing portfolio consisting of a $T$-bond and an $S$-bond. Say your portfolio value has SDE: \begin{equation} dV(t)=\theta_T(t)dF(t,r(t);T) + \theta_S(t)dF(t,r(t);S) \end{equation} where $(\theta_T,\theta_S)$ is your self-financing strategy. Now for simplicity write $F(t,r(t);T)=F^T(t,r(t))$, and since $P(t,T)>0$ for all $t\le T$ we can use Itô's lemma to write its differential in this way: \begin{align} dF^T(t,r(t))&=\alpha^TF^Tdt + \beta^TF^TdW(t) \\ dF^S(t,r(t))&=\alpha^SF^Sdt + \beta^SF^SdW(t) \end{align} Substituting into the self-financing portfolio SDE, you now look for the strategy $(\theta_T,\theta_S)$ that makes this portfolio riskless. If the market doesn't allow for arbitrage, then a riskless asset must earn the same rate of return as the bank account: \begin{equation} dV(t)=r(t)V(t)dt \end{equation} After substituting you will find that this is equivalent to the condition \begin{equation} \frac{\alpha^S(t) - r(t)}{\beta^S(t)}=\frac{\alpha^T(t) - r(t)}{\beta^T(t)} \end{equation} This means that bonds with different maturities have the same Sharpe ratio. You will find a clearer derivation in the book by Björk; however, this only works for short rate models. Actually I don't know if there are more general derivations of this result.
I would guess you mean that all the
expected Sharpe ratios are equal. Here is why.
Consider a market with $d$ assets $(S^1, \dots, S^d)$ which is free of arbitrage. Let $B$ denote the numeraire. According to the fundamental theorem of asset pricing there exists a martingale measure $\Bbb Q$. In particular: $$ \Bbb E_{\Bbb Q} \Bigg[\frac{S_{1}^j - S_0^j} {S_0^j} \Biggr] = \frac{1}{S_0^j}\Bbb E_{\Bbb Q} \bigl[S_{1}^j\bigr] - 1 = \frac{B_1}{S_0^j}\frac{S_0^j}{B_0} - 1 = \frac{B_1 - B_0}{B_0}, \quad \ \text{for all} \ j \in \{1,\dots, d\}. $$
This is the well known result that under the absence of arbitrage the expected return of each asset is given by the expected return of the bank account.
In particular all Sharpe ratios have to be equal. |
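As a toy illustration of that computation (the numbers here are entirely my own, not from the post): in a one-period binomial market, the martingale measure forces the stock's expected return to match the bank account's.

```python
# One-period binomial market (illustrative numbers).
S0, Su, Sd = 100.0, 120.0, 90.0   # stock: today, up state, down state
B0, B1 = 1.0, 1.05                # bank account: today, tomorrow

# Risk-neutral up-probability q makes S/B a martingale under Q.
q = (B1 / B0 * S0 - Sd) / (Su - Sd)
assert 0 < q < 1  # equivalent to the absence of arbitrage here

expected_return = (q * Su + (1 - q) * Sd) / S0 - 1
bank_return = B1 / B0 - 1
assert abs(expected_return - bank_return) < 1e-12
```

With equal expected returns (and excess returns measured against the same numeraire), the expected Sharpe ratios coincide as claimed.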
Question 1: Is it known that $\mathrm{QP}=\cup_{k}\mathrm{TIME}(2^{\mathrm{log}^k(n)})$ has any complete problem? Question 2: Can this be used to simplify the computational complexity theory at all?
Question 1: No.
Question 2: No.

Question 1: Assume to the contrary that $\mathrm{QP}$ has a complete problem $\mathrm{L}\in\mathrm{QP}$.
Then, there exists $k$ such that $\mathrm{L}\in\mathrm{TIME}(2^{\mathrm{log}^k(n)})$. Consider an arbitrary problem $\mathrm{L}'\in\mathrm{QP}$; by completeness of $\mathrm{L}$, we have that $\mathrm{L}'\leq_p\mathrm{L}$, i.e., there exists a many-one polynomial-time reduction $f$ from $\mathrm{L}'$ to $\mathrm{L}$. So, given any instance $x\in\Sigma^*$ with $|x|=n$, we first reduce $x$ to $f(x)$. Since $f$ is polynomial-time computable, we have that $|f(x)|=n^{O(1)}$. Now, we run the $2^{\mathrm{log}^k(n)}$ time-bounded TM deciding $\mathrm{L}$ on $f(x)$, and accept or reject accordingly.
We now estimate the time complexity of this TM for $\mathrm{L}'$. We have: $$2^{\mathrm{log}^k(n^{O(1)})}=2^{(O(1)\cdot\mathrm{log}(n))^k}=2^{O(\mathrm{log}^k(n))}$$
So, the whole class $\mathrm{QP}$ collapses to $\mathrm{TIME}(2^{O(\mathrm{log}^k(n))})$, violating the Time Hierarchy Theorem.
Thus, $\mathrm{QP}$ does not have any complete problem.
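The growth estimate used in the collapse step, $\mathrm{log}^k(n^{O(1)}) = (O(1)\cdot\mathrm{log}\,n)^k$, can be sanity-checked numerically (the constants $c$ and $k$ below are illustrative):

```python
import math

# For constant c, log^k(n^c) = (c*log n)^k, i.e. 2^{O(log^k n)}.
c, k = 3, 2
for n in (10, 100, 10**6):
    lhs = math.log(n ** c) ** k
    rhs = (c * math.log(n)) ** k
    assert abs(lhs - rhs) < 1e-6 * rhs
```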
Question 2: The unconditional lack of a complete problem for $\mathrm{QP}$ unconditionally separates it from $\mathrm{NP}$, $\mathrm{co}$-$\mathrm{NP}$, $\mathrm{PP}$, $\oplus P$, $\mathrm{US}$, $\mathrm{PSPACE}$. But that is of no help for computational complexity theory, since the interesting nasty, stubborn classes such as $\mathrm{ZPP}$, $\mathrm{RP}$, $\mathrm{co}$-$\mathrm{RP}$, $\mathrm{BPP}$, $\mathrm{MA}$, $\mathrm{AM}$ all remain elusive as to whether any of them has a complete problem. |
You are given a multigraph $G$ with $n$ vertices as follows:
$V := (v_1, v_2, \dots ,v_n)$ $C := \{c_1, c_2, \dots\}$, be an infinite set of colors. $f: V \rightarrow \mathbb{P}_{\le m}(C) $, a function that maps each vertex to a subset of colors of size $\le m$. We define the set of edges of the graph as: $E := \big\lbrace \small\{ \normalsize v_i, v_j, c_k \small\} \normalsize \mid c_k \in f(v_i) \cap f(v_j) \big\rbrace$, i.e. there is an edge colored with a given color between every pair of vertices that share that color. Note that a pair of vertices may have more than one edge if they share multiple colors.
It is guaranteed that every pair of vertices has at least one edge in common. i.e. $\forall v_i, v_j \in V$ there exists $c_k \in C$ such that $\{v_i, v_j, c_k\} \in E$.
Define $F(n,m) = s$ such that for any graph with $n$ vertices, each vertex colored with $\le m$ colors as above, there is a clique of order $s$ where each edge is colored with the same color. Another way of phrasing this is that there is a subset of size $s$, $S \subseteq V$, $|S| = s$, all vertices of which share a color. i.e. $\exists c_k \in C$ such that $\forall v_i \in S, c_k \in f(v_i)$.
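Under that rephrasing, finding the largest clique whose edges all carry one color is just finding the most frequent color; a small sketch (function name and example data are mine):

```python
from collections import Counter

def largest_monochromatic_clique(f):
    """f maps each vertex to its set of colors (|f(v)| <= m).
    A clique whose edges all carry one color c is exactly a set of
    vertices all containing c, so the answer is the max color count."""
    counts = Counter(c for colors in f.values() for c in colors)
    return max(counts.values(), default=0)

# m = 2 example on 4 vertices: every pair shares a color,
# and the best monochromatic clique has size 3.
f = {1: {'a', 'b'}, 2: {'a', 'c'}, 3: {'a', 'b'}, 4: {'b', 'c'}}
assert largest_monochromatic_clique(f) == 3
```

The combinatorial difficulty is of course in the worst case over all valid colorings, which this helper does not address.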
So a few simple examples: $F(n,1)=n$, clearly, because every vertex has to have the same color. It is easy to show that $F(n,2)=n-1$ and $F(n,n-1)=2$.
One can show that $F(9,3)=5$ by applying the pigeonhole principle repeatedly.
What method can be used to solve this for general $n$ and $m$? Or how can this problem be represented as a known problem in graph theory/combinatorics?
Note: This function can be represented as an inverse to resemble the Ramsey numbers as follows:$G(s, m) = n$, where $n$ is the smallest number where any graph with $n$ vertices, each vertex colored with at most $m$ colors as above contains a clique of order $s$. But that is harder since $F$ only gives us bounds on $G$. |
Customers arrive at a server with rate $\lambda$ and are served at rate $\mu$. The server breaks down with rate $\gamma$, which causes all customers to leave. New customers can only arrive once the server has been repaired, which happens at rate $\delta$. Model this as a Markov chain, give the balance equations, and rewrite them so that they can be recursively solved (i.e., express for instance $\pi_k$ in terms of $\pi_{k-n}$ ($n=1,\dots, k$), $k=1,2,\dots$). You don't have to solve the equations.
My approach. Let $X(t)$ be the number of customers in the system and let the state space be $A=\{d, 0, 1, \dots\}$ where $d$ denotes a state in which the server is down.
The transition rates are then: $q_{d,0}=\delta$, $q_{i,d}=\gamma$ for $i \geq 0$, $q_{i,i+1}=\lambda$ for $i \geq 0$, and $q_{i,i-1}=\mu$ for $i \geq 1$.
Then \begin{align*}\delta \pi_d &= \gamma \quad[\text{is this correct or should it be } \delta \pi_d = \gamma \pi_i, \ i =0,1,\dots?]\\ \lambda \pi_0 &= \mu \pi_1 + \delta \pi_d\\ (\lambda + \mu)\pi_i &= \lambda \pi_{i-1} + \mu \pi_{i+1}, \quad i \geq 2\\ \sum_i \pi_i&=1\end{align*}
Is my definition of the Markov chain correct and are the balance equations correct (specifically the first equation)? Also, how can I then rewrite the system so that it can be recursively solved like the question asks?
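Not an answer, but one way to sanity-check a candidate set of balance equations is to truncate the infinite chain at some level $K$ and compute the stationary distribution numerically. All rate values below are hypothetical, chosen just for the experiment:

```python
# Hypothetical rates, chosen only for this sanity check.
lam, mu, gam, delta = 1.0, 2.0, 0.5, 1.0
K = 40                                    # truncation level for the infinite chain
states = ['d'] + list(range(K + 1))
idx = {s: n for n, s in enumerate(states)}
N = len(states)

# Generator matrix Q of the truncated chain.
Q = [[0.0] * N for _ in range(N)]
def rate(s, t, r):
    Q[idx[s]][idx[t]] += r
    Q[idx[s]][idx[s]] -= r

rate('d', 0, delta)                       # repair
for i in range(K + 1):
    rate(i, 'd', gam)                     # breakdown: all customers leave
    if i < K:
        rate(i, i + 1, lam)               # arrival
    if i > 0:
        rate(i, i - 1, mu)                # service completion

# Stationary distribution via uniformization and power iteration.
Lam = max(-Q[n][n] for n in range(N)) + 1.0
pi = [1.0 / N] * N
for _ in range(2000):
    pi = [pi[n] + sum(pi[m] * Q[m][n] for m in range(N)) / Lam for n in range(N)]

# Balance at state d: outflow delta*pi_d, inflow gam times the sum over all up states.
lhs = delta * pi[idx['d']]
rhs = gam * sum(pi[idx[i]] for i in range(K + 1))
```

For these (hypothetical) rates the computed distribution satisfies $\delta \pi_d = \gamma \sum_{i \geq 0} \pi_i$, i.e., the inflow into $d$ sums over all up states rather than being a single term.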
It is well known that any element $\rho$ of the symmetric group $S_n$ with $n-p$ cycles admits a unique presentation as a product of a sequence of transpositions $\{(a_i\,b_i)\}_{i = 1}^p$ with $a_i < b_i$ and $b_i < b_{i+1}$ for any $i$. Another way to say it is that the sum $\sum_{j = 1}^n e_j(J_2,\ldots,J_n)$ of elementary symmetric polynomials evaluated on Jucys-Murphy elements is a central element of the group algebra $\mathbb C[S_n]$ that can be presented as $\sum_{\tau \in S_n} \tau$. We refer to a sequence of transpositions $\{(a_i\,b_i)\}_{i = 1}^p$ with $a_i < b_i$ and $b_i < b_{i+1}$ for any $i$ as a strictly monotone factorization.
Thus, there is a natural action of $S_n$ on the set of strictly monotone factorizations, isomorphic to the action of $S_n$ on itself by conjugation. This action can be completely described in terms of strictly monotone factorizations only.
We may also consider the set $M_p$ of weakly monotone factorizations, i.e. the set of sequences $\{(a_i\,b_i)\}_{i = 1}^p$ with $a_i < b_i$ and $b_i \le b_{i+1}$. This set is naturally related to the complete homogeneous symmetric polynomial $h_p$ evaluated on the Jucys-Murphy elements. The set $M_p$ admits a natural map $\pi\colon M_p \to S_n$, sending a sequence to the product of its elements in the prescribed order.
Is there a natural structure of $S_n$-space on $M_p$, such that $\pi$ is a morphism of $S_n$-spaces, where $S_n$ acts on itself by conjugations?
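For what it's worth, the uniqueness statement for strictly monotone factorizations is easy to check by brute force for small $n$. The sketch below uses 0-based indices and one particular multiplication convention (right-multiplication, in order of increasing $b_i$); it counts all strictly monotone sequences and verifies they hit each permutation exactly once:

```python
from itertools import product as cartesian
from math import factorial

n = 4
identity = tuple(range(n))

def times_transposition(p, a, b):
    # Right-multiply the permutation p (a tuple of images) by the transposition (a b).
    q = list(p)
    q[a], q[b] = q[b], q[a]
    return tuple(q)

# A strictly monotone sequence chooses, for each b, at most one transposition
# (a, b) with a < b; since the b's strictly increase, the choices are independent.
options = [[None] + [(a, b) for a in range(b)] for b in range(1, n)]
products = set()
count = 0
for choice in cartesian(*options):
    p = identity
    for t in choice:
        if t is not None:
            p = times_transposition(p, *t)
    products.add(p)
    count += 1
# count == n! sequences, and they reach each of the n! permutations exactly once.
```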
I use Wald's notation: $I^+$ is the chronological future and $J^+$ is the causal future.
My confusion arises from the following passage in Wald (1984):
Now, let $S$ be a closed, achronal set (possibly with edge). We define the
future domain of dependence of $S$, denoted $D^+(S)$, by $$D^+(S)=\{p\in M|\, \text{Every past inextendible causal curve through $p$ intersects $S$}\}$$ Note that we always have $S\subset D^+(S)\subset J^+(S)$.
I have to disagree with the last statement. We know that $S$ is achronal, i.e. $I^+(S)\cap S=\emptyset$. The relation $S\subset D^+(S)\subset J^+(S)$ implies $S\subset J^+(S)$, i.e. $J^+(S)\cap S\ne\emptyset.$ But I cannot see how a set can be both achronal and contained in its causal future. Hence the title of my question.
I think Wald meant to write $S\subset D^+(S)\subset \overline{J^+(S)}$. [EDIT: Disregard this statement.]
I know that when a body rotates as well as translates, the IAR or ICR shouldn't be used, but I am not able to understand why.
closed as unclear what you're asking by ja72, Jim, Ryan Unger, ACuriousMind♦, John Rennie Aug 12 '15 at 9:49
Any rigid body in motion can be described as rotating about an instantaneous axis of rotation (IAR) and translating along the same axis at the same time.
Example/Proof
A rigid body is moving, and at some time instant a point A riding on the rigid body has position vector $\vec{r}_A$ and instantaneous linear velocity $\vec{v}_A$. The whole body is rotating with $\vec{\omega}$. We are given six motion parameters and three location parameters.
The motion is decomposed as:
The direction of the IAR is given by $$\vec{e} = \frac{\vec{\omega}}{| \vec{\omega} |}$$ The rotational speed (magnitude of rotation) is $$\omega = |\vec{\omega}|$$ The IAR passes through a point $$ \vec{r}_C = \vec{r}_A + \frac{\vec{\omega}\times \vec{v}_A}{|\vec{\omega}|^2}$$ The motion parallel to the IAR is described by the pitch $h$, which is the ratio of the linear speed along the axis to the angular speed $$ h = \frac{\vec{\omega} \cdot \vec{v}_A}{|\vec{\omega}|^2}$$ If the above is true then the linear velocity at C is zero and thus $$\vec{v}_A = (\vec{r}_C-\vec{r}_A) \times \vec{\omega} + h \vec{\omega}$$ Let's back substitute and see if we can go full circle: $$ \begin{align} \vec{v}_A & = \frac{\vec{\omega}\times \vec{v}_A}{|\vec{\omega}|^2} \times \vec{\omega} + \frac{\vec{\omega} \cdot \vec{v}_A}{|\vec{\omega}|^2} \vec{\omega} \\ & = \frac{\left(\vec{\omega}\times\vec{v}_{A}\right)\times\vec{\omega}+\left(\vec{\omega}\cdot\vec{v}_{A}\right)\vec{\omega}}{\left|\vec{\omega}\right|^{2}} \\ & = \frac{\vec{v}_{A}\left(\vec{\omega}\cdot\vec{\omega}\right)-\vec{\omega}\left(\vec{v}_{A}\cdot\vec{\omega}\right)+\left(\vec{\omega}\cdot\vec{v}_{A}\right)\vec{\omega}}{\left|\vec{\omega}\right|^{2}} \\ & = \frac{\vec{v}_{A}\left|\vec{\omega}\right|^{2}}{\left|\vec{\omega}\right|^{2}}=\vec{v}_{A} \end{align} $$
Summary
Given the motion parameters $\vec{\omega}$ and $\vec{v}_A$ of any point A on a moving rigid body, the instantaneous axis of rotation and parallel velocity ratio are uniquely defined by the unit direction vector $\vec{e}$, the closest point on the axis $\vec{r}_C$, the scalar pitch $h$, and the rotation magnitude $\omega$.
Given the axis of rotation position $\vec{r}_C$, direction $\vec{e}$, the pitch $h$, and the magnitude $\omega$, the motion of any point A located at $\vec{r}_A$ is fully defined by $\vec{\omega} = \omega \vec{e}$ and $\vec{v}_A =\omega \left((\vec{r}_C-\vec{r}_A) \times \vec{e} + h \vec{e}\right)$
Appendix
In robotics there is talk about a joint screw axis $\mathbf{s}_A$ such that
$$ \mathbf{v}_A =\mathbf{s}_A \, \omega $$ $$ \begin{bmatrix} \vec{v}_A \\ \vec{\omega} \end{bmatrix} = \begin{bmatrix} (\vec{r}_C-\vec{r}_A) \times \vec{e} + h \vec{e} \\ \vec{e} \end{bmatrix} \,\omega $$ The coordinates of $\mathbf{s}_A$ are the Plücker coordinates of a line (or screw) in space.
NOTE: $\times$ is the vector cross product and $\cdot$ the vector dot product.
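The decomposition above is easy to exercise numerically. The values of $\vec{\omega}$ and $\vec{v}_A$ below are arbitrary hypothetical inputs, and the check is the "full circle" identity derived above:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Arbitrary hypothetical motion parameters of a point A (taken at the origin).
w  = (0.3, -1.0, 2.0)       # angular velocity vector
vA = (1.0, 0.5, -0.25)      # linear velocity at A
rA = (0.0, 0.0, 0.0)

w2 = dot(w, w)
rC = tuple(rA[i] + cross(w, vA)[i] / w2 for i in range(3))   # point on the IAR
h  = dot(w, vA) / w2                                          # pitch

# Reconstruct v_A = (r_C - r_A) x w + h w and compare with the original.
d = tuple(rC[i] - rA[i] for i in range(3))
vA_rec = tuple(cross(d, w)[i] + h * w[i] for i in range(3))
```

By the algebraic identity in the derivation, `vA_rec` reproduces `vA` for any choice of $\vec{\omega}\ne 0$ and $\vec{v}_A$.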
A particle is submitted to a time dependent force $$F(x,t)=\dfrac{k}{x^2}e^{-t/\tau}$$
What is the Lagrangian of the particle?
I think that the force is derived from a potential $V$, and this potential has no explicit dependence on $\dot x$. So I can write
$$ \dfrac{d}{dt}\dfrac{\partial \mathcal L}{\partial \dot x} = m \ddot x$$
$$\mathcal L = T-\int \dfrac{\partial \mathcal L}{\partial x} dx$$ Then the Lagrangian is $$\mathcal L = \dfrac{m}{2}\dot x^2 + \dfrac{k}{x}e^{-t/\tau}$$
Am I right?
Methods Funct. Anal. Topology
19 (2013), no. 2, 97-107
The form of the Jacobi type matrix related to the strong Hamburger moment problem is known \cite{N5,BD}, i.e., the zero elements of the corresponding matrix are known. We describe the relations between the non-zero elements of such matrices, i.e., we describe ``the inner structure'' of the Jacobi-Laurent matrices related to the strong Hamburger moment problem.
Methods Funct. Anal. Topology
19 (2013), no. 2, 108-126
We consider Vlasov-type scaling for Markov evolution of birth-and-death type in continuum, which is based on a proper scaling of corresponding Markov generators and has an algorithmic realization in terms of related hierarchical chains of equations for correlation functions. The existence of rescaled and limiting evolutions of correlation functions and convergence to the limiting evolution are shown. The obtained results enable us to derive a non-linear Vlasov-type equation for the density of the limiting system.
Methods Funct. Anal. Topology
19 (2013), no. 2, 127-145
In the present paper we construct an expansion of the set of Lorentz transforms, which allows for the velocity of the reference frame to be greater than the speed of light. For maximum generality we investigate this tachyon expansion in the case of Minkowski space-time over any real Hilbert space.
Methods Funct. Anal. Topology
19 (2013), no. 2, 146-160
We give an application of interpolation with a function parameter to parabolic differential operators. We introduce a refined anisotropic Sobolev scale that consists of some Hilbert function spaces of generalized smoothness. The latter is characterized by a real number and a function varying slowly at infinity in Karamata's sense. This scale is connected with anisotropic Sobolev spaces by means of interpolation with a function parameter. We investigate a general initial--boundary value parabolic problem in the refined Sobolev scale. We prove that the operator corresponding to this problem establishes isomorphisms between appropriate spaces pertaining to this scale.
Methods Funct. Anal. Topology
19 (2013), no. 2, 161-167
In this paper, an asymmetric generalization of the Glazman-Povzner-Wienholtz theorem is proved for one-dimensional Schrödinger operators with strongly singular matrix potentials from the space $H_{loc}^{-1}(\mathbb{R}, \mathbb{C}^{m\times m})$. This result is new in the scalar case as well.
Methods Funct. Anal. Topology
19 (2013), no. 2, 168-186
The indefinite moment problem was considered by M. G. Krein and H. Langer in 1979. In the present paper the general indefinite moment problem is associated with an abstract interpolation problem in generalized Nevanlinna classes. To prove the equivalence of these two problems we investigate the structure of the de Branges space $H(m)$ associated with a generalized Nevanlinna function $m$. A general formula for the description of the set of solutions of the indefinite moment problem is found. It is shown that the Krein-Langer description can be derived from this formula by a special choice of a biorthogonal system of polynomials.
Methods Funct. Anal. Topology
19 (2013), no. 2, 187-190
In this paper we investigate ideal amenability of $\ell^{1}(G_{p})$, where $G_{p}$ is a maximal subgroup of an inverse semigroup $S$ with a uniformly locally finite idempotent set. We also find some conditions for ideal amenability of Rees matrix semigroups.
Methods Funct. Anal. Topology
19 (2013), no. 2, 191-196
We prove that for all $m_1,m_2,m_3 \in \mathbb{N},~ \frac{1}{m_1}+\frac{1}{m_2}+\frac{1}{m_3} \leq 1$, every unitary scalar operator $\gamma I$ on a complex infinite-dimensional Hilbert space is a product $\gamma I = U_1 U_2 U_3$ where $U_i$ is a unitary operator such that $U_i^{m_i} = I$.
Unit 2.4.3 Details¶
In this unit, we give details regarding the partial code in Figure 2.4.3, which is illustrated in Figure 2.4.4.
To use the intrinsic functions, we start by including the header file immintrin.h.
#include <immintrin.h>
The declaration
__m256d gamma_0123_0 = _mm256_loadu_pd( &gamma( 0,0 ) );
creates gamma_0123_0 as a variable that references a vector register with four double precision numbers and loads it with the four numbers that are stored starting at address &gamma( 0,0 ).
In other words, it loads that vector register with the original values
\begin{equation*}\left(\begin{array}{c c c c}\gamma_{0,0} \\\gamma_{1,0} \\\gamma_{2,0} \\\gamma_{3,0}\end{array}\right).\end{equation*}
This is repeated for the other three columns of \(C \text{:}\)
__m256d gamma_0123_1 = _mm256_loadu_pd( &gamma( 0,1 ) );
__m256d gamma_0123_2 = _mm256_loadu_pd( &gamma( 0,2 ) );
__m256d gamma_0123_3 = _mm256_loadu_pd( &gamma( 0,3 ) );
The loop in Figure 2.4.2 implements
\begin{equation*}\begin{array}{l}{\bf for~} p = 0, \ldots, k-1 \\~~~\left( \begin{array}{c | c | c | c} \begin{array}{c c c c}\gamma_{0,0} +:= \alpha_{0,p} \times \beta_{p,0} \\\gamma_{1,0} +:= \alpha_{1,p} \times \beta_{p,0} \\\gamma_{2,0} +:= \alpha_{2,p} \times \beta_{p,0} \\\gamma_{3,0} +:= \alpha_{3,p} \times \beta_{p,0}\end{array}&\begin{array}{c c c c}\gamma_{0,1} +:= \alpha_{0,p} \times \beta_{p,1} \\\gamma_{1,1} +:= \alpha_{1,p} \times \beta_{p,1} \\\gamma_{2,1} +:= \alpha_{2,p} \times \beta_{p,1} \\\gamma_{3,1} +:= \alpha_{3,p} \times \beta_{p,1}\end{array}&\begin{array}{c c c c}\gamma_{0,2} +:= \alpha_{0,p} \times \beta_{p,2} \\\gamma_{1,2} +:= \alpha_{1,p} \times \beta_{p,2} \\\gamma_{2,2} +:= \alpha_{2,p} \times \beta_{p,2} \\\gamma_{3,2} +:= \alpha_{3,p} \times \beta_{p,2}\end{array}&\begin{array}{c c c c}\gamma_{0,3} +:= \alpha_{0,p} \times \beta_{p,3} \\\gamma_{1,3} +:= \alpha_{1,p} \times \beta_{p,3} \\\gamma_{2,3} +:= \alpha_{2,p} \times \beta_{p,3} \\\gamma_{3,3} +:= \alpha_{3,p} \times \beta_{p,3}\end{array}\end{array}\right)\\{\bf end}\end{array}\end{equation*}
leaving the result in the vector registers. Each iteration starts by declaring vector register variable alpha_0123_p and loading it with the contents of
\begin{equation*}\left(\begin{array}{c c c c}\alpha_{0,p} \\\alpha_{1,p} \\\alpha_{2,p} \\\alpha_{3,p}\end{array}\right).\end{equation*}
__m256d alpha_0123_p = _mm256_loadu_pd( &alpha( 0,p ) );
Next, \(\beta_{p,0} \) is loaded into a vector register, broadcasting (duplicating) that value to each entry in that register:
beta_p_j = _mm256_broadcast_sd( &beta( p, 0 ) );
This variable is declared earlier in the routine as beta_p_j because it is reused for \(j =0,1,\ldots,\text{.}\)
The command
gamma_0123_0 = _mm256_fmadd_pd( alpha_0123_p, beta_p_j, gamma_0123_0 );
then performs the computation
\begin{equation*}\left(\begin{array}{c c c c}\gamma_{0,0} +:= \alpha_{0,p} \times \beta_{p,0} \\\gamma_{1,0} +:= \alpha_{1,p} \times \beta_{p,0} \\\gamma_{2,0} +:= \alpha_{2,p} \times \beta_{p,0} \\\gamma_{3,0} +:= \alpha_{3,p} \times \beta_{p,0}\end{array}\right)\end{equation*}
illustrated in Figure 2.4.3. Notice that we use beta_p_j for \(\beta_{p,0}\) because that same vector register will be used for \(\beta_{p,j} \) with \(j = 0,1,2,3\text{.}\)
We leave it to the reader to add the commands that compute
\begin{equation*}\left(\begin{array}{c c c c}\gamma_{0,1} +:= \alpha_{0,p} \times \beta_{p,1} \\\gamma_{1,1} +:= \alpha_{1,p} \times \beta_{p,1} \\\gamma_{2,1} +:= \alpha_{2,p} \times \beta_{p,1} \\\gamma_{3,1} +:= \alpha_{3,p} \times \beta_{p,1}\end{array}\right),\left(\begin{array}{c c c c}\gamma_{0,2} +:= \alpha_{0,p} \times \beta_{p,2} \\\gamma_{1,2} +:= \alpha_{1,p} \times \beta_{p,2} \\\gamma_{2,2} +:= \alpha_{2,p} \times \beta_{p,2} \\\gamma_{3,2} +:= \alpha_{3,p} \times \beta_{p,2}\end{array}\right), \mbox{and} \left(\begin{array}{c c c c}\gamma_{0,3} +:= \alpha_{0,p} \times \beta_{p,3} \\\gamma_{1,3} +:= \alpha_{1,p} \times \beta_{p,3} \\\gamma_{2,3} +:= \alpha_{2,p} \times \beta_{p,3} \\\gamma_{3,3} +:= \alpha_{3,p} \times \beta_{p,3}\end{array}\right).\end{equation*}
in Assignments/Week2/C/Gemm_4x4Kernel.c. Upon completion of the loop, the results are stored back into the original arrays with the commands
_mm256_storeu_pd( &gamma(0,0), gamma_0123_0 );
_mm256_storeu_pd( &gamma(0,1), gamma_0123_1 );
_mm256_storeu_pd( &gamma(0,2), gamma_0123_2 );
_mm256_storeu_pd( &gamma(0,3), gamma_0123_3 );
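The net effect of the loop and these final stores can be modeled in plain Python, with scalar loops standing in for the four-lane vector operations (the matrices below are small hypothetical examples, with \(k = 3\)):

```python
# Hypothetical 4x3 A and 3x4 B, stored as lists of rows.
A = [[1, 0, 0],
     [0, 1, 0],
     [0, 0, 1],
     [1, 1, 1]]
B = [[1,  2,  3,  4],
     [5,  6,  7,  8],
     [9, 10, 11, 12]]
k = 3

C = [[0.0] * 4 for _ in range(4)]
for p in range(k):
    for j in range(4):            # beta_p_j: broadcast of beta(p, j)
        for i in range(4):        # one fused multiply-add per vector lane
            C[i][j] += A[i][p] * B[p][j]
```

Each pass of the two inner loops corresponds to one _mm256_broadcast_sd plus one _mm256_fmadd_pd acting on a four-entry register.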
Homework 2.4.3.1.
Complete the code in Assignments/Week2/C/Gemm_4x4Kernel.c and execute it with
make JI_4x4Kernel
View its performance with
Assignments/Week2/C/data/Plot_register_blocking.mlx
We are starting to see some progress towards higher performance.
2014-01-01
Pipage Rounding, Pessimistic Estimators and Matrix Concentration. Presented at the ACM-SIAM Symposium on Discrete Algorithms.
Pipage rounding is a dependent random sampling technique that has several interesting properties and diverse applications. One property that has been particularly useful is negative correlation of the resulting vector. Unfortunately negative correlation has its limitations, and there are some further desirable properties that do not seem to follow from existing techniques. In particular, recent concentration results for sums of independent random matrices are not known to extend to a negatively dependent setting. We introduce a simple but useful technique called concavity of pessimistic estimators. This technique allows us to show concentration of submodular functions and concentration of matrix sums under pipage rounding. The former result answers a question of Chekuri et al. (2009). To prove the latter result, we derive a new variant of Lieb's celebrated concavity theorem in matrix analysis. We provide numerous applications of these results. One is to spectrally-thin trees, a spectral analog of the thin trees that played a crucial role in the recent breakthrough on the asymmetric traveling salesman problem. We show a polynomial time algorithm that, given a graph where every edge has effective conductance at least $\kappa$, returns an $O(\kappa^{-1} \cdot \log n / \log \log n)$-spectrally-thin tree. There are further applications to rounding of semidefinite programs, to the column subset selection problem, and to a geometric question of extracting a nearly-orthonormal basis from an isotropic distribution.
Additional Metadata: Theme: Other (theme 6). Publisher: SIAM. Editor: C. Chekuri. Journal: Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms. Conference: ACM-SIAM Symposium on Discrete Algorithms. Citation:
Harvey, N.J.A., & Olver, N.K. (2014). Pipage Rounding, Pessimistic Estimators and Matrix Concentration. In C. Chekuri (Ed.), Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms. SIAM.
We will now discuss how to solve a triangle in Case 3: two sides and the angle between them. First, let us see what happens when we try to use the Law of Sines for this case.
Example \(\PageIndex{1}\): Case 3 - Two sides and the angle between them Known
Solve the triangle \(\triangle\,ABC \) given \(A = 30^\circ \), \(b = 4 \), and \(c = 5 \).
Solution
Using the Law of Sines, we have
\[\nonumber \dfrac{a}{\sin\;30^\circ} ~=~ \dfrac{4}{\sin\;B} ~=~ \dfrac{5}{\sin\;C} ~, \]
where each of the equations has two unknown parts, making the problem impossible to solve. For example, to solve for \(a \) we could use the equation
\[\dfrac{4}{\sin\;B} = \dfrac{5}{\sin\;C} \nonumber\]
to solve for \(\sin\;B \) in terms of \(\sin\;C \) and substitute that into the equation
\[\dfrac{a}{\sin\;30^\circ} = \dfrac{4}{\sin\;B}. \nonumber\]
But that would just result in the equation
\[\dfrac{a}{\sin\;30^\circ} = \dfrac{5}{\sin\;C} ,\nonumber\]
which we already knew and which still has two unknowns! Thus, this problem cannot be solved using the Law of Sines.
To solve the triangle in the above example, we can use the
Law of Cosines:
Theorem \(\PageIndex{1}\): Law of Cosines
If a triangle has sides of lengths \(a \), \(b \), and \(c \) opposite the angles \(A \), \(B \), and \(C \), respectively, then
\[\begin{align}
a^2 ~ &= b^2 + c^2 - 2bc\;\cos\;A ~,\label{2.9}\\[4pt] b^2 ~ &= c^2 + a^2 - 2ca\;\cos\;B ~,\label{2.10}\\[4pt] c^2 ~ &= a^2 + b^2 - 2ab\;\cos\;C ~.\label{2.11} \\[4pt] \end{align}\]
Proof
To prove the Law of Cosines, let \(\triangle\,ABC \) be an oblique triangle. Then \(\triangle\,ABC \) can be acute, as in Figure \(\PageIndex{1a}\), or it can be obtuse, as in Figure \(\PageIndex{1b}\). In each case, draw the altitude from the vertex at \(C \) to the side \(\overline{AB} \). In Figure \(\PageIndex{1a}\), the altitude divides \(\overline{AB} \) into two line segments with lengths \(x \) and \(c-x \), while in Figure \(\PageIndex{1b}\) the altitude extends the side \(\overline{AB} \) by a distance \(x \). Let \(h \) be the height of the altitude.
For each triangle in Figure \(\PageIndex{1}\), we see by the Pythagorean Theorem that
\[h^2 ~=~ a^2 ~-~ x^2\label{2.12}\]
and likewise for the acute triangle in Figure \(\PageIndex{1a}\) we see that
\[b^2 ~=~ h^2 ~+~ (c-x)^2 ~.\label{2.13}\]
Thus, substituting the expression for \(h^2 \) in Equation \ref{2.12} into Equation \ref{2.13} gives
\[\begin{align*} b^2 ~&=~ a^2 ~-~ x^2 ~+~ (c-x)^2\nonumber \\
&=~ a^2 ~-~ x^2 ~+~ c^2 ~-~ 2cx ~+~ x^2\nonumber \\ &=~ a^2 ~+~ c^2 ~-~ 2cx ~.\nonumber \\ \end{align*}\]
But we see from Figure \(\PageIndex{1a}\) that \(x = a\;\cos\;B \), so
\[ b^2 ~=~ a^2 ~+~ c^2 ~-~ 2ca\;\cos\;B ~.\label{2.14} \]
And for the obtuse triangle in Figure \(\PageIndex{1b}\) we see that
\[ b^2 ~=~ h^2 ~+~ (c+x)^2 ~.\label{2.15}\]
Thus, substituting the expression for \(h^2 \) in Equation \ref{2.12} into Equation \ref{2.15} gives
\[ \begin{align*}b^2 ~&=~ a^2 ~-~ x^2 ~+~ (c+x)^2\nonumber \\
&=~ a^2 ~-~ x^2 ~+~ c^2 ~+~ 2cx ~+~ x^2\nonumber \\ &=~ a^2 ~+~ c^2 ~+~ 2cx ~.\nonumber \\ \end{align*}\]
But we see from Figure \(\PageIndex{1b}\) that \(x = a\;\cos\;(180^\circ - B) \), and we know from Section 1.5 that \(\cos\;(180^\circ - B) = -\cos\;B \). Thus, \(x = -a\;\cos\;B\) and so
\[ b^2 ~=~ a^2 ~+~ c^2 ~-~ 2ca\;\cos\;B ~.\label{2.16} \]
So for both acute and obtuse triangles we have proved Equation \ref{2.10} in the Law of Cosines. Notice that the proof was for \(B \) acute and obtuse. By similar arguments for \(A \) and \(C\) we get the other two formulas.
\(\square\)
Note that we did not prove the Law of Cosines for right triangles, since it turns out (see Exercise 15) that all three formulas reduce to the Pythagorean Theorem for that case. The Law of Cosines can be viewed as a generalization of the Pythagorean Theorem.
Also, notice that it suffices to remember just one of the three Equations \ref{2.9}-\ref{2.11}, since the other two can be obtained by "cycling'' through the letters \(a \), \(b \), and \(c \). That is, replace \(a \) by \(b \), replace \(b \) by \(c \), and replace \(c \) by \(a \) (likewise for the capital letters). One cycle will give you the second formula, and another cycle will give you the third.
The angle between two sides of a triangle is often called the
included angle. Notice in the Law of Cosines that if two sides and their included angle are known (e.g. \(b \), \(c \), and \(A\)), then we have a formula for the square of the third side. We will now solve the triangle from Example \(\PageIndex{1}\).
Example \(\PageIndex{2}\): Case 3 - Two sides and the angle between them Known
Solve the triangle \(\triangle\,ABC \) given \(A = 30^\circ \), \(b = 4 \), and \(c = 5 \).
Solution
We will use the Law of Cosines to find \(a \), use it again to find \(B \), then use \(C = 180^\circ - A - B \). First, we have
\[\nonumber \begin{align*}
a^2 ~ &= ~ b^2 ~ &+ ~ c^2 ~ &- ~ 2bc\;\cos\;A\\ \nonumber &= ~ 4^2 ~ &+ ~ 5^2 ~ &- ~ 2(4)(5)\;\cos\;30^\circ ~=~ 6.36 \quad\Rightarrow\quad \boxed{a ~=~ 2.52} ~.\\ \end{align*}\]
Now we use the formula for \(b^2 \) to find \(B\):
\[\nonumber \begin{align*}
b^2 ~ = ~ c^2 ~ + ~ a^2 ~ - ~ 2ca\;\cos\;B \quad&\Rightarrow\quad \cos\;B ~=~ \dfrac{c^2 ~ + ~ a^2 ~-~ b^2}{2ca}\\ \nonumber &\Rightarrow\quad \cos\;B ~=~ \dfrac{5^2 ~ + ~ (2.52)^2 ~-~ 4^2}{2(5)(2.52)} ~=~ 0.6091\\ \nonumber &\Rightarrow\quad \boxed{B ~=~ 52.5^\circ}\\ \end{align*}\]
Thus, \(C = 180^\circ - A - B = 180^\circ - 30^\circ - 52.5^\circ \Rightarrow \fbox{\(C = 97.5^\circ \; \)}\).
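The arithmetic in this example is easy to reproduce with a few lines of code (the rounded values match the boxed answers):

```python
from math import acos, cos, degrees, radians, sqrt

A, b, c = 30.0, 4.0, 5.0
a = sqrt(b**2 + c**2 - 2*b*c*cos(radians(A)))      # Law of Cosines for a
B = degrees(acos((c**2 + a**2 - b**2) / (2*c*a)))  # Law of Cosines for B
C = 180.0 - A - B                                  # angles sum to 180 degrees
```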
Notice in Example \(\PageIndex{2}\) that there was only one solution. For Case 3 this will
always be true: when given two sides and their included angle, the triangle will have exactly one solution. The reason is simple: when joining two line segments at a common vertex to form an angle, there is exactly one way to connect their free endpoints with a third line segment, regardless of the size of the angle.
You may be wondering why we used the Law of Cosines a second time in Example \(\PageIndex{2}\), to find the angle \(B \). Why not use the Law of Sines, which has a simpler formula? The reason is that using the cosine function eliminates any ambiguity: if the cosine is positive then the angle is acute, and if the cosine is negative then the angle is obtuse. This is in contrast to using the sine function; as we saw in Section 2.1, both an acute angle and its obtuse supplement have the same positive sine.
To see this, suppose that we had used the Law of Sines to find \(B \) in Example \(\PageIndex{2}\):
\[\nonumber
\sin\;B ~=~ \dfrac{b\;\sin\;A}{a} ~=~ \dfrac{4\;\sin\;30^\circ}{2.52} ~=~ 0.7937 \quad\Rightarrow\quad B ~=~ 52.5^\circ ~\text{or}~ 127.5^\circ \]
How would we know which answer is correct? We could not immediately rule out \(B = 127.5^\circ \) as too large, since it would make \(A + B = 157.5^\circ < 180^\circ \) and so \(C = 22.5^\circ \), which seems like it could be a valid solution. However, this solution is impossible. Why? Because the largest side in the triangle is \(c = 5 \), which (as we learned in Section 2.1) means that \(C \) has to be the largest angle. But \(C = 22.5^\circ \) would not be the largest angle in this solution, and hence we have a contradiction.
It remains to solve a triangle in Case 4, i.e. given three sides. We will now see how to use the Law of Cosines for that case.
Example \(\PageIndex{3}\): Case 4 - Three sides Known
Solve the triangle \(\triangle\,ABC \) given \(a = 2 \), \(b = 3 \), and \(c = 4 \).
Solution:
We will use the Law of Cosines to find \(B \) and \(C \), then use \(A = 180^\circ - B - C \). First, we use the formula for \(b^2 \) to find \(B\):
\[\nonumber \begin{align*}
b^2 ~ = ~ c^2 ~ + ~ a^2 ~ - ~ 2ca\;\cos\;B \quad&\Rightarrow\quad \cos\;B ~=~ \dfrac{c^2 ~ + ~ a^2 ~-~ b^2}{2ca}\\ \nonumber &\Rightarrow\quad \cos\;B ~=~ \dfrac{4^2 ~ + ~ 2^2 ~-~ 3^2}{2(4)(2)} ~=~ 0.6875\\ \nonumber &\Rightarrow\quad \boxed{B ~=~ 46.6^\circ} \\ \end{align*}\]
Now we use the formula for \(c^2 \) to find \(C\):
\[\nonumber \begin{align*}
c^2 ~ = ~ a^2 ~ + ~ b^2 ~ - ~ 2ab\;\cos\;C \quad&\Rightarrow\quad \cos\;C ~=~ \dfrac{a^2 ~ + ~ b^2 ~-~ c^2}{2ab}\\ \nonumber &\Rightarrow\quad \cos\;C ~=~ \dfrac{2^2 ~ + ~ 3^2 ~-~ 4^2}{2(2)(3)} ~=~ -0.25\\ \nonumber &\Rightarrow\quad \boxed{C ~=~ 104.5^\circ} \\ \end{align*}\]
Thus, \(A = 180^\circ - B - C = 180^\circ - 46.6^\circ - 104.5^\circ \Rightarrow \boxed{A = 28.9^\circ}\; \).
It may seem that there is always a solution in Case 4 (given all three sides), but that is not true, as the following example shows.
Example \(\PageIndex{4}\): Case 4 - Three sides Known
Solve the triangle \(\triangle\,ABC \) given \(a = 2 \), \(b = 3 \), and \(c = 6 \).
Solution:
If we blindly try to use the Law of Cosines to find \(A \), we get
\[\nonumber
a^2 ~ = ~ b^2 ~ + ~ c^2 ~ - ~ 2bc\;\cos\;A \quad\Rightarrow\quad \cos\;A ~=~ \dfrac{b^2 ~ + ~ c^2 ~-~ a^2}{2bc} ~=~ \dfrac{3^2 ~ + ~ 6^2 ~-~ 2^2}{2(3)(6)} ~=~ 1.139 ~, \]
which is impossible since \(| \cos\;A | \le 1 \). Thus, there is \(\fbox{no solution}\).
We could have saved ourselves some effort by recognizing that the length of one of the sides (\(c=6\)) is greater than the sum of the lengths of the remaining sides (\(a=2 \) and \(b=3\)), which (as the picture below shows) is impossible in a triangle.
The Law of Cosines can also be used to solve triangles in Case 2 (two sides and one opposite angle), though it is less commonly used for that purpose than the Law of Sines. The following example gives an idea of how to do this.
Example \(\PageIndex{5}\): Case 2 - Two sides and one opposite angle Known
Solve the triangle \(\triangle\,ABC \) given \(a = 18 \), \(A = 25^\circ \), and \(b = 30 \).
Solution
In Example 2.2 from Section 2.1 we used the Law of Sines to show that there are two sets of solutions for this triangle: \(B = 44.8^\circ \), \(C = 110.2^\circ \), \(c = 40 \) and \(B = 135.2^\circ \), \(C = 19.8^\circ \), \(c = 14.4 \). To solve this using the Law of Cosines, first find \(c \) by using the formula for \(a^2\):
\[ \nonumber \begin{align*}
a^2 ~ = ~ b^2 ~ + ~ c^2 ~ - ~ 2bc\;\cos\;A \quad&\Rightarrow\quad 18^2 = ~ 30^2 ~ + ~ c^2 ~ - ~ 2(30)c\;\cos\;25^\circ\\ \nonumber &\Rightarrow\quad c^2 ~-~ 54.38\,c ~+~ 576 ~ = ~ 0 ~, \end{align*}\]
which is a quadratic equation in \(c \), so we know that it can have either zero, one, or two real roots (corresponding to the number of solutions in Case 2). By the quadratic formula, we have
\[\nonumber
c ~=~ \dfrac{54.38 ~\pm~ \sqrt{(54.38)^2 ~-~ 4(1)(576)}}{2(1)} ~=~ 40 ~~\text{or}~~ 14.4 ~. \]
Note that these are the same values for \(c \) that we found before. For \(c=40 \) we get
\[\nonumber
\cos\;B ~=~ \dfrac{c^2 ~ + ~ a^2 ~-~ b^2}{2ca} ~=~ \dfrac{40^2 ~ + ~ 18^2 ~-~ 30^2}{2(40)(18)} ~=~ 0.7111 \quad\Rightarrow\quad B ~=~ 44.7^\circ \quad\Rightarrow\quad C ~=~ 110.3^\circ ~, \]
which is close to what we found before (the small difference being due to different rounding). The other solution set can be obtained similarly.
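The quadratic step can likewise be reproduced numerically; the two roots match the two solution sets up to rounding:

```python
from math import cos, radians, sqrt

a, A, b = 18.0, 25.0, 30.0
# The formula a^2 = b^2 + c^2 - 2 b c cos A, viewed as a quadratic in c:
#   c^2 - (2 b cos A) c + (b^2 - a^2) = 0
p = 2 * b * cos(radians(A))
q = b**2 - a**2
disc = p**2 - 4 * q          # positive discriminant: two solutions for c
c1 = (p + sqrt(disc)) / 2
c2 = (p - sqrt(disc)) / 2
```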
Like the Law of Sines, the Law of Cosines can be used to prove some geometric facts, as in the following example.
Example \(\PageIndex{6}\): Parallelogram Diagonals
Use the Law of Cosines to prove that the sum of the squares of the diagonals of any parallelogram equals the sum of the squares of the sides.
Solution:
Let \(a \) and \(b \) be the lengths of the sides, and let the diagonals opposite the angles \(C \) and \(D \) have lengths \(c \) and \(d \), respectively, as in Figure \(\PageIndex{2}\). Then we need to show that
\[c^2 ~+~ d^2 ~=~ a^2 ~+~ b^2 ~+~ a^2 ~+~ b^2 ~=~ 2\,( a^2 ~+~ b^2 ) ~.\nonumber \]
By the Law of Cosines, we know that
\[\nonumber \begin{align*}
c^2 ~ &= ~ a^2 ~ + ~ b^2 ~ - ~ 2ab\;\cos\;C ~,~\text{and}\\ \nonumber d^2 ~ &= ~ a^2 ~ + ~ b^2 ~ - ~ 2ab\;\cos\;D ~. \\ \end{align*}\]
By properties of parallelograms, we know that \(D = 180^\circ - C \), so
\[\nonumber \begin{align*} d^2 ~ &= ~ a^2 ~ + ~ b^2 ~ - ~ 2ab\;\cos\;(180^\circ - C)\\ \nonumber &=~ a^2 ~ + ~ b^2 ~ + ~ 2ab\;\cos\;C ~, \\ \end{align*}\]
since \(\;\cos\;(180^\circ - C) = -\cos\;C \). Thus,
\[\nonumber \begin{align*} c^2 ~+~ d^2 ~&=~ a^2 ~ + ~ b^2 ~ - ~ 2ab\;\cos\;C ~+~ a^2 ~ + ~ b^2 ~ + ~ 2ab\;\cos\;C\\ \nonumber
&=~ 2\,( a^2 ~+~ b^2 ) ~. \quad \\ \end{align*}\]
Theorem 2.13. If $X$ is a space and $A$ is a nonempty closed subspace that is a deformation retract of some neighborhood in $X$, then there is an exact sequence $$\cdots \longrightarrow \tilde H_n (A)\longrightarrow \tilde H_n(X) \longrightarrow \tilde H_n(X/A)\longrightarrow \tilde H_{n-1}(A)\longrightarrow \cdots $$
What's bugging me is the requirement $A$ be closed - isn't it enough to ask $\overline A\subset \mathring V$?
I ask the same question for the theorem stated in this homework sheet:
Theorem 0.2. Let $A\subset X$ be a closed subset such that $A$ is a deformation retract of some open set $V\subset X$. Then there is an isomorphism $$H(X,A)\cong H(X/A,\text{pt})$$
Isn't it enough to ask $\overline A\subset \mathring V$?
Arzelà variation
A generalization to functions of several variables of the Variation of a function of one variable, proposed by C. Arzelà in [Ar] (see also [Ha], p. 543). However, the modern theory of functions of bounded variation uses a different generalization (see Function of bounded variation and Variation of a function). Therefore the Arzelà variation is seldom used nowadays.
Consider a rectangle $R:= [a_1, b_1]\times \ldots \times [a_n, b_n]\subset \mathbb R^n$ and denote by
$\Gamma$ the class of continuous $\gamma= (\gamma_1, \ldots, \gamma_n):[0,1]\to R$ such that each component $\gamma_j$ is nondecreasing and maps $[0,1]$ onto $[a_j, b_j]$;
$\Pi$ the family of $(N+1)$-tuples of points $0\leq t_1 < \ldots < t_{N+1}\leq 1$.
Definition. The Arzelà variation of a function $f:R\to\mathbb R$ is then defined as\[V_A (f):= \sup_{\gamma\in \Gamma}\; TV (f\circ \gamma) = \sup_{\gamma\in \Gamma}\;\left(\sup \left\{ \sum_{i=1}^N |f(\gamma (t_{i+1})) - f (\gamma (t_i))| : (t_1, \ldots , t_{N+1})\in \Pi\right\}\right)\]($TV (f\circ \gamma)$ is then the classical total variation of the function of one real variable $f\circ \gamma$, see Variation of a function).
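As a small illustration of the inner total variation, take the hypothetical example $R=[0,1]^2$, $f(x,y)=x-y$ and $\gamma(t)=(t,t^2)$ (both components are continuous, nondecreasing and onto $[0,1]$). Then $f\circ\gamma(t)=t-t^2$ rises to $1/4$ at $t=1/2$ and falls back to $0$, so $TV(f\circ\gamma)=1/2$, which a discrete approximation recovers:

```python
N = 1000                                  # partition 0 = t_0 < ... < t_N = 1
f = lambda x, y: x - y                    # hypothetical f on R = [0,1]^2
gamma = lambda t: (t, t * t)              # components nondecreasing, onto [0,1]

g = [f(*gamma(i / N)) for i in range(N + 1)]
tv = sum(abs(g[i + 1] - g[i]) for i in range(N))  # discrete total variation of f o gamma
```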
A function $f$ has finite Arzelà variation if and only if it can be written as the difference of two functions $f^+-f^-$ with the property that \[ f^{\pm} (x_1, \ldots, x_n) \leq f^{\pm} (y_1, \ldots, y_n) \qquad \mbox{if } x_i\leq y_i \, \forall i\, . \] This statement generalizes the Jordan decomposition of functions of bounded variation of one variable.
The class of functions with finite Arzelà variation contains the class of functions with finite Hardy variation.
References
[Ar] C. Arzelà, Rend. Accad. Sci. Bologna, 9 : 2 (1905) pp. 100–107.
[Ha] H. Hahn, "Theorie der reellen Funktionen", 1, Springer (1921). JFM Zbl 48.0261.09
Source: Encyclopedia of Mathematics, http://www.encyclopediaofmath.org/index.php?title=Arzel%C3%A0_variation&oldid=30099 |
The Distance Formula
Definition: Distance
Recall that for two points \((a,b)\) and \((c,d)\) in a plane, the distance is found by the formula
\[\text{Distance}=\sqrt{(c-a)^2+(d-b)^2}.\]
Example \(\PageIndex{1}\)
Find the distance between the points \((1,1)\) and \((-4,3)\).
Solution
\[\begin{align*} \text{Distance} &=\sqrt{(-4-1)^2+(3-1)^2} \\[4pt] &=\sqrt{25+4}\\ [5pt] &=\sqrt{29}. \end{align*}\]
The Midpoint Formula
Definition: Midpoint
For points \((a,b)\) and \((c,d)\) the midpoint of the line segment formed by these points has coordinates:
\[M=\left(\dfrac{a+c}{2},\dfrac{b+d}{2}\right). \]
Example \(\PageIndex{2}\)
Suppose that you have a boat at one side of the lake with coordinates \((3,4)\) and your friend has a boat at the other side of the lake with coordinates \((18,22)\). If you want to meet half way, at what coordinates should you meet?
Solution:
\[\begin{align*} M &= \left(\dfrac{3+18}{2}, \dfrac{4+22}{2}\right) \\[4pt] &=(10.5,13). \end{align*}\]
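Both formulas are easy to check numerically. Here is a short Python sketch (the function names `distance` and `midpoint` are ours, introduced just for illustration) that reproduces the two examples above:

```python
import math

def distance(p, q):
    # distance between points p = (a, b) and q = (c, d)
    return math.sqrt((q[0] - p[0])**2 + (q[1] - p[1])**2)

def midpoint(p, q):
    # midpoint of the segment joining p and q
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

print(distance((1, 1), (-4, 3)))   # sqrt(29), approximately 5.385
print(midpoint((3, 4), (18, 22)))  # (10.5, 13.0)
```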
Exercises
Show that the points \((-5,14)\), \((1,4)\), and \((11,10)\) are vertices of an isosceles triangle.
Show that the triangle with vertices \((1,1)\), \((-1,-1)\), and \((\sqrt{3},-\sqrt{3})\) is equilateral. (All three side lengths equal \(2\sqrt{2}\).)
Graphing on a Calculator
We will graph the equations:
\(y = 2x - 3\) (use GRAPH, then y(x) =)
\(y = 5x^2 + 4\)
\(y = |x + 1|\) (to enter absolute value, use CATALOG, then hit ENTER)
\(y = 2x + \{-1,0,1,2,3,5\}\) (to find the curly braces "{" and "}", use the list feature)
Contributors Larry Green (Lake Tahoe Community College)
Integrated by Justin Marshall. |
We have seen how integration can be used to find an area between a curve and the \(x\)-axis. With very little change we can find some areas between curves; indeed, the area between a curve and the \(x\)-axis may be interpreted as the area between the curve and a second "curve'' with equation \(y=0\). In the simplest of cases, the idea is quite easy to understand.
Example 9.1.1
Find the area below \(f(x)= -x^2+4x+3\) and above \( g(x)=-x^3+7x^2-10x+5\) over the interval \(1\le x\le2\).
Solution
In figure 9.1.1 we show the two curves together, with the desired area shaded, then \(f\) alone with the area under \(f\) shaded, and then \(g\) alone with the area under \(g\) shaded.
Figure 9.1.1. Area between curves as a difference of areas.
It is clear from the figure that the area we want is the area under \(f\) minus the area under \(g\), which is to say $$\int_1^2 f(x)\,dx-\int_1^2 g(x)\,dx = \int_1^2 f(x)-g(x)\,dx.$$ It doesn't matter whether we compute the two integrals on the left and then subtract or compute the single integral on the right. In this case, the latter is perhaps a bit easier: $$\eqalign{ \int_1^2 f(x)-g(x)\,dx&=\int_1^2 -x^2+4x+3-(-x^3+7x^2-10x+5)\,dx\cr &=\int_1^2 x^3-8x^2+14x-2\,dx\cr &=\left.{x^4\over4}-{8x^3\over3}+7x^2-2x\right|_1^2\cr &={16\over4}-{64\over3}+28-4-({1\over4}-{8\over3}+7-2)\cr &=23-{56\over3}-{1\over4}={49\over12}.\cr }$$
It is worth examining this problem a bit more. We have seen one way to look at it, by viewing the desired area as a big area minus a small area, which leads naturally to the difference between two integrals. But it is instructive to consider how we might find the desired area directly. We can approximate the area by dividing the area into thin sections and approximating the area of each section by a rectangle, as indicated in figure 9.1.2. The area of a typical rectangle is \(\Delta x(f(x_i)-g(x_i))\), so the total area is approximately $$\sum_{i=0}^{n-1} (f(x_i)-g(x_i))\Delta x.$$ This is exactly the sort of sum that turns into an integral in the limit, namely the integral $$\int_1^2 f(x)-g(x)\,dx.$$ Of course, this is the integral we actually computed above, but we have now arrived at it directly rather than as a modification of the difference between two other integrals. In that example it really doesn't matter which approach we take, but in some cases this second approach is better.
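The rectangle sum just described is easy to try on a computer. A short Python sketch (our own, not part of the text) using left endpoints approaches the exact value \(49/12\) as the number of rectangles grows:

```python
def f(x): return -x**2 + 4*x + 3
def g(x): return -x**3 + 7*x**2 - 10*x + 5

a, b, n = 1.0, 2.0, 100000
dx = (b - a) / n
# left-endpoint rectangles of height f(x_i) - g(x_i), width dx
area = sum((f(a + i*dx) - g(a + i*dx)) * dx for i in range(n))
print(area)  # close to 49/12, about 4.0833
```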
Figure 9.1.2. Approximating area between curves with rectangles.
Example 9.1.2
Find the area below \(f(x)= -x^2+4x+1\) and above \(g(x)=-x^3+7x^2-10x+3\) over the interval \(1\le x\le2\); these are the same curves as before but lowered by 2.
Solution
In figure 9.1.3 we show the two curves together. Note that the lower curve now dips below the \(x\)-axis. This makes it somewhat tricky to view the desired area as a big area minus a smaller area, but it is just as easy as before to think of approximating the area by rectangles. The height of a typical rectangle will still be \(f(x_i)-g(x_i)\), even if \(g(x_i)\) is negative. Thus the area is $$ \int_1^2 -x^2+4x+1-(-x^3+7x^2-10x+3)\,dx =\int_1^2 x^3-8x^2+14x-2\,dx. $$ This is of course the same integral as before, because the region between the curves is identical to the former region---it has just been moved down by 2.
Figure 9.1.3. Area between curves.
Example 9.1.3
Find the area between \(f(x)= -x^2+4x\) and \(g(x)=x^2-6x+5\) over the interval \(0\le x\le 1\); the curves are shown in figure 9.1.4.
Solution
Generally we should interpret "area'' in the usual sense, as a necessarily positive quantity. Since the two curves cross, we need to compute two areas and add them. First we find the intersection point of the curves: $$\eqalign{ -x^2+4x&=x^2-6x+5\cr 0&=2x^2-10x+5\cr x&={10\pm\sqrt{100-40}\over4}={5\pm\sqrt{15}\over2}.\cr }$$ The intersection point we want is \(x=a=(5-\sqrt{15})/2\). Then the total area is $$\eqalign{ \int_0^a x^2-6x+5-(-x^2+4x)\,dx&+\int_a^1 -x^2+4x-(x^2-6x+5)\,dx\cr &=\int_0^a 2x^2-10x+5\,dx+\int_a^1 -2x^2+10x-5\,dx\cr &=\left.{2x^3\over3}-5x^2+5x\right|_0^a + \left.-{2x^3\over3}+5x^2-5x\right|_a^1\cr &=-{52\over3}+5\sqrt{15}, }$$ after a bit of simplification.
Figure 9.1.4. Area between curves that cross.
Example 9.1.4
Find the area between \(f(x)= -x^2+4x\) and \(g(x)=x^2-6x+5\); the curves are shown in figure 9.1.5.
Solution
Here we are not given a specific interval, so it must be the case that there is a "natural'' region involved. Since the curves are both parabolas, the only reasonable interpretation is the region between the two intersection points, which we found in the previous example: \({5\pm\sqrt{15}\over2}.\) If we let \(a=(5-\sqrt{15})/2\) and \( b=(5+\sqrt{15})/2\), the total area is $$\eqalign{ \int_a^b -x^2+4x-(x^2-6x+5)\,dx &=\int_a^b -2x^2+10x-5\,dx\cr &=\left.-{2x^3\over3}+5x^2-5x\right|_a^b\cr &=5\sqrt{15},\cr }$$ after a bit of simplification.
Figure 9.1.5. Area bounded by two curves. |
We will assume that you have already been exposed to neural network modeling. This section is designed to quickly help you recap the basics that you will need in order to create and experiment with neural networks in Pyrobot.
See the Pyrobot website for more info.
We will concentrate mostly on backpropagation networks here. A typical backprop network is a three-layer network containing input, hidden, and output layers. Each layer contains a collection of nodes. Typically, the nodes in a layer are fully connected to the next layer. For instance, every input node will have a weighted connection to every hidden node. Similarly, every hidden node will have a weighted connection to every output node.
Processing in a backprop network works as follows. Input is propagated forward from the input layer through the hidden layer and finally through the output layer to produce a response. Each node, regardless of the layer it is in, uses the same transfer function in order to propagate its information forward to the next layer. This is described next.
Each node maintains an activation value that depends on the activation values of its incoming neighbors, the weights from its incoming neighbors, and its own default bias value. To compute this activation value, we first calculate the node's net input.
The net input is a weighted sum of all the incoming activations plus the node's bias value:
$net_i = \sum\limits_{j=1}^n w_{ij} x_j + b_i$
Here is some corresponding Python code to compute this function for each node:
toNodes = range(3, 5)
fromNodes = range(0, 2)
bias = [0.2, -0.1, 0.5, 0.1, 0.4, 0.9]
activation = [0.8, -0.3, -0.8, 0.1, 0.5]
netInput = [0, 0, 0, 0, 0]
weight = [[ 0.1, -0.8],
          [-0.3,  0.1],
          [ 0.2, -0.1],
          [ 0.0,  0.1],
          [ 0.8, -0.8],
          [ 0.4,  0.5]]
for i in toNodes:
    netInput[i] = bias[i]
    for j in fromNodes:
        netInput[i] += (weight[i][j] * activation[j])
where
weight[i][j] is the weight $w_{ij}$, or connection strength, from the $j^{th}$ node to the $i^{th}$ node,
activation[j] is the activation signal $x_j$ of the $j^{th}$ input node, and
bias[i] is the bias value $b_i$ of the $i^{th}$ node.
After computing the net input, each node has to compute its output activation. The value that results from applying the activation function to the net input is the signal that will be sent as output to all the nodes in the next layer. The activation function used in backprop networks is generally:
$a_i = \sigma(net_i)$
where $\sigma(x) = \dfrac{1}{1 + e^{-x}}$
import math

def activationFunction(netInput):
    return 1.0 / (1.0 + math.exp(-netInput))

for i in toNodes:
    activation[i] = activationFunction(netInput[i])
This $\sigma$ is the activation function, as shown in the plot below. Notice that the function is monotonically increasing and bounded between 0.0 and 1.0, approaching these values as the net input tends to negative infinity and positive infinity, respectively.
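As a quick sanity check of the two formulas, here is a toy single-node example (the weights, bias, and inputs are made-up illustration values, not Pyrobot code):

```python
import math

def sigmoid(x):
    # logistic activation: maps any net input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# one node with two incoming connections
weights = [0.5, -0.25]
bias = 0.1
inputs = [1.0, 2.0]

# net input: weighted sum of incoming activations plus the bias
net = bias + sum(w * x for w, x in zip(weights, inputs))
print(net)           # 0.1 + 0.5 - 0.5 = 0.1
print(sigmoid(net))  # about 0.525, just above the midpoint 0.5
```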
import math
pts = [(x, activationFunction(x)) for x in range(-10, 10)]
calico.ScatterChart(['x', 'activation'], pts,
                    {'width': 600, 'height': 400,
                     'legend': {'position': 'in'},
                     'lineWidth': 1, 'pointSize': 3})
Google's Graphing API can be found here: https://developers.google.com/chart/interactive/docs/gallery/linechart
from ai.conx import *
net = Network()
net.addLayers(2, 3, 1)
print(net)
Conx using seed: 1398275934.28
Layer 'output': (Kind: Output, Size: 1, Active: 1, Frozen: 0)
Target    : 0.00
Activation: 0.00
Layer 'hidden': (Kind: Hidden, Size: 3, Active: 1, Frozen: 0)
Activation: 0.00 0.00 0.00
Layer 'input': (Kind: Input, Size: 2, Active: 1, Frozen: 0)
Activation: 0.00 0.00 |
Answer
No
Work Step by Step
A rational number is a number that can be expressed as $\frac{a}{b}$, where $a$ and $b$ are integers and $b\ne0$. This is not the case for $\sqrt 7$, so it is not a rational number.
Methods Funct. Anal. Topology
16 (2010), no. 1, 1-5
In this paper, we investigate hereditary properties of hyperspaces. Our basic cardinals are the Suslin hereditary number, the hereditary $\pi$-weight, the Shanin hereditary number, the hereditary density, and the hereditary cellularity. We prove that, for any Eberlein compact and any Danto space, each of these cardinal invariants coincides with that of its hyperspace.
Methods Funct. Anal. Topology
16 (2010), no. 1, 6-16
Using a general approach that covers the cases of Gaussian, Poissonian, Gamma, Pascal and Meixner measures on an infinite- dimensional space, we construct a general integration by parts formula for analysis connected with each of these measures. Our consideration is based on the constructions of the extended stochastic integral and the stochastic derivative that are connected with the structure of the extended Fock space.
Methods Funct. Anal. Topology
16 (2010), no. 1, 17-27
Let $E$ be either $\ell_1$ or $L_1$. We consider $E$-unattainable continuous linear operators $T$ from $L_1$ to a Banach space $Y$, i.e., those operators which do not attain their norms on any subspace of $L_1$ isometric to $E$. It is not hard to see that if $T: L_1 \to Y$ is $\ell_1$-unattainable then it is also $L_1$-unattainable. We find some equivalent conditions for an operator to be $\ell_1$-unattainable and construct two operators, the first $\ell_1$-unattainable and the second $L_1$-unattainable but not $\ell_1$-unattainable. Some open problems remain unsolved.
Dimension stabilization effect for the block Jacobi-type matrix of a bounded normal operator with the spectrum on an algebraic curve
Methods Funct. Anal. Topology
16 (2010), no. 1, 28-41
Under some natural assumptions, any bounded normal operator in an appropriate basis has a three-diagonal block Jacobi-type matrix. Just as in the case of classical Jacobi matrices (e.g., of self-adjoint operators), such a structure can be effectively used. There are two sources of difficulties: rapid growth of blocks in the Jacobi-type matrix of such operators (they act in $\mathbb C^1\oplus\mathbb C^2\oplus\mathbb C^3\oplus\cdots$) and the potentially complicated spectral structure of the normal operators. The aim of this article is to show that these two aspects are closely connected: a simple structure of the spectrum can effectively bound the complexity of the matrix structure. The main result of the article claims that if the spectrum is concentrated on an algebraic curve, then the dimensions of the Jacobi-type matrix blocks do not grow, starting from some value.
Methods Funct. Anal. Topology
16 (2010), no. 1, 42-50
We give necessary and sufficient conditions for a one-dimensional Schrödinger operator to have the number of negative eigenvalues equal to the number of negative intensities in the case of $\delta$ interactions.
Methods Funct. Anal. Topology
16 (2010), no. 1, 51-56
It was proved in~\cite{Pop09b} that a $*$-algebra is $C^*$-representable, i.e., $*$-isomorphic to a self-adjoint subalgebra of bounded operators acting on a Hilbert space if and only if there is an algebraically admissible cone in the real space of Hermitian elements of the algebra such that the algebra unit is an Archimedean order unit. In the present paper we construct such cones in free products of $C^*$-representable $*$-algebras generated by unitaries. We also express the reducing ideal of any algebraically bounded $*$-algebra with corepresentation $\mathcal F/\mathcal J$ where $\mathcal F$ is a free algebra as a closure of the ideal $\mathcal J$ in some universal enveloping $C^*$-algebra.
Methods Funct. Anal. Topology
16 (2010), no. 1, 57-68
In this paper we consider decompositions of the identity operator into a linear combination of $k\ge 5$ orthogonal projections with real coefficients. It is shown that if the sum $A$ of the coefficients is close to an integer between $2$ and $k-2$ then such a decomposition exists. If the coefficients are almost equal to each other, then the identity can be represented as a linear combination of orthogonal projections for $\frac{k-\sqrt{k^2-4k}}{2} < A < \frac{k+\sqrt{k^2-4k}}{2}$. In the case where some coefficients are sufficiently close to $1$ we find necessary conditions for the existence of the decomposition.
Inverse theorems in the theory of approximation of vectors in a Banach space with exponential type entire vectors
Methods Funct. Anal. Topology
16 (2010), no. 1, 69-82
An arbitrary operator $A$ on a Banach space $X$ which is a generator of a $C_0$-group with a certain growth condition at infinity is considered. A relationship between its exponential type entire vectors and its spectral subspaces is found. Inverse theorems on the connection between the degree of smoothness of a vector $x\in X$ with respect to the operator $A$, the rate of convergence to zero of the best approximation of $x$ by exponential type entire vectors for operator $A$, and the $k$-module of continuity with respect to $A$ are established. Also, a generalization of the Bernstein-type inequality is obtained. The results allow to obtain Bernstein-type inequalities in weighted $L_p$ spaces.
Methods Funct. Anal. Topology
16 (2010), no. 1, 83-100
We study positive definite kernels $K = (K_{n,m})_{n,m\in A}$, $A=\mathbb Z$ or $A=\mathbb Z_+$, which satisfy a difference equation of the form $L_n K = \overline L_m K$, or of the form $L_n \overline L_m K = K$, where $L$ is a linear difference operator (here the subscript $n$ ($m$) means that $L$ acts on columns (respectively rows) of $K$). In the first case, we give new proofs of Yu.M. Berezansky results about integral representations for $K$. In the second case, we obtain integral representations for $K$. The latter result is applied to strengthen one our result on abstract stochastic sequences. As an example, we consider the Hamburger moment problem and the corresponding positive matrix of moments. Classical results on the Hamburger moment problem are derived using an operator approach, without use of Jacobi matrices or orthogonal polynomials. |
Methods Funct. Anal. Topology
19 (2013), no. 4, 293-300
We study a class of measures on the space $\Gamma _{X}$ of locally finite configurations in $X=\mathbb{R}^{d}$, obtained as images of "lattice" Gibbs measures on $X^{\mathbb{Z}^{d}}$ with respect to an embedding $\mathbb{Z}^{d}\subset \mathbb{R}^{d}$. For these measures, we prove the integration by parts formula and the log-Sobolev inequality.
Methods Funct. Anal. Topology
19 (2013), no. 4, 301-309
A blanket version of the non-Gaussian analysis under the so-called biorthogonal approach uses the Kondratiev spaces of test functions with orthogonal bases given by a generating function $Q\times H \ni (x,\lambda)\mapsto h(x;\lambda)\in\mathbb C$, where $Q$ is a metric space, $H$ is some complex Hilbert space, and $h$ satisfies certain assumptions (in particular, $h(\cdot;\lambda)$ is a continuous function and $h(x;\cdot)$ is holomorphic at zero). In this paper we consider the construction of the Kondratiev spaces of test functions with orthogonal bases given by a generating function $\gamma(\lambda)h(x;\alpha(\lambda))$, where $\gamma :H\to\mathbb C$ and $\alpha :H\to H$ are holomorphic at zero, and study some properties of these spaces. The results of the paper make it possible to extend the area of possible applications of the above mentioned theory.
Methods Funct. Anal. Topology
19 (2013), no. 4, 310-318
We consider the minimal non-negative Jacobi operator with $p\times p$ matrix entries. Using the technique of boundary triplets and the corresponding Weyl functions, we describe the Friedrichs and Krein extensions of the minimal Jacobi operator. Moreover, we parametrize the set of all non-negative extensions in terms of boundary conditions.
Methods Funct. Anal. Topology
19 (2013), no. 4, 319-326
We consider the minimal differential operator $A$ generated in $L^2(0,\infty)$ by the differential expression $l(y) = (-1)^n y^{(2n)}$. Using the technique of boundary triplets and the corresponding Weyl functions, we find explicit form of the characteristic matrix and the corresponding spectral function for the Friedrichs and Krein extensions of the operator $A$.
Methods Funct. Anal. Topology
19 (2013), no. 4, 327-345
We study spectral properties of energy-dependent Sturm--Liouville equations, introduce the notion of norming constants and establish their interrelation with the spectra. One of the main tools is the linearization of the problem in a suitable Pontryagin space.
Methods Funct. Anal. Topology
19 (2013), no. 4, 346-363
We consider the self-adjoint Dirac operators on a finite interval with summable matrix-valued potentials and general boundary conditions. For such operators, we study the inverse problem of reconstructing the potential and the boundary conditions of the operator from its eigenvalues and suitably defined norming matrices.
Methods Funct. Anal. Topology
19 (2013), no. 4, 364-375
We reconsider the norm resolvent limit of $-\Delta + V_\ell$ with $V_\ell$ tending to a point interaction in three dimensions. We are mainly interested in potentials $V_\ell$ modelling short range interactions of cold atomic gases. In order to ensure stability the interaction $V_\ell$ is required to have a strong repulsive core, such that $\lim_{\ell \to 0} \int V_\ell >0$. This situation is not covered in the previous literature.
Methods Funct. Anal. Topology
19 (2013), no. 4, 376-388
The Lie algebra of planar vector fields with coefficients from the field of rational functions over an algebraically closed field of characteristic zero is considered. We find all finite-dimensional Lie algebras that can be realized as subalgebras of this algebra. |
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is on the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the ground lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two dictinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied to a given vector (x,y) and how will the magnitude of that vector changed?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
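Since all the coefficients live in $\Bbb{Q}$, the multiplication rule is also easy to sanity-check numerically. A small Python sketch (our own, with a sample $\delta$ chosen for illustration) confirming associativity on one triple:

```python
from fractions import Fraction as F

DELTA = F(2)  # a sample non-square delta, chosen for illustration

def mul(x, y):
    # (a + b*sqrt(DELTA)) times (c + d*sqrt(DELTA)), pairs stored as (a, b)
    (a, b), (c, d) = x, y
    return (a*c + b*d*DELTA, b*c + a*d)

alpha, beta, gamma = (F(1), F(2)), (F(3), F(-1)), (F(-2), F(5))
lhs = mul(mul(alpha, beta), gamma)
rhs = mul(alpha, mul(beta, gamma))
print(lhs == rhs)  # True
```

Exact rational arithmetic via `Fraction` avoids any floating-point doubt, though of course one triple is a check, not a proof.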
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions are such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest possible field
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder if one can show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
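As an aside, the 0/1 variant of the problem described in that excerpt has a standard dynamic-programming solution. A minimal Python sketch (the item data is made up for illustration):

```python
def knapsack(items, capacity):
    # items: list of (weight, value) pairs; returns the max total value
    # achievable without exceeding capacity, each item used at most once
    best = [0] * (capacity + 1)
    for w, v in items:
        for c in range(capacity, w - 1, -1):  # iterate downward so each item is used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([(2, 3), (3, 4), (4, 5), (5, 8)], 5))  # 8
```

Here taking the single (5, 8) item beats the (2, 3) + (3, 4) combination worth 7.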
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $0<\left|x-\frac{p}{q}\right|<\frac{1}{q^{n}}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual, thus transcendental numbers can be constructed in a finitist framework
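For what it's worth, those partial sums are easy to compute exactly with rational arithmetic; for $b = 10$ (an assumed base, chosen for illustration) they are the truncations of Liouville's constant:

```python
from fractions import Fraction
from math import factorial

b = 10

def partial_sum(M):
    # S_M = sum_{k=1}^{M} 1 / b^(k!)
    return sum(Fraction(1, b**factorial(k)) for k in range(1, M + 1))

sums = [partial_sum(M) for M in range(1, 6)]
print(float(sums[2]))  # 0.110001, the first three terms of Liouville's constant
assert all(s < t for s, t in zip(sums, sums[1:]))  # strictly increasing
```

Using `Fraction` keeps the huge denominators $b^{k!}$ exact, where floats would underflow to zero.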
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. Anything else I need to finish that book to comment
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
I recently came across this in a textbook (NCERT class 12 , chapter: wave optics , pg:367 , example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of interference pattern is$$\frac{s}{S} < \frac{\lambda}{d}$$Where $s$ is the size of ...
The accepted answer is clearly wrong. The OP's textbook refers to $s$ as the "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes $s$ to be the "fringe width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being the accepted answer) only to realize it proved something entirely different and trivial.
This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ...
I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component.
Vertex: $ie(P_A+P_B)^{\mu}$
External boson: $1$
Photon: $\epsilon_{\mu}$
Multiplying these will give the inv...
As I am now studying the history of the discovery of electricity, I am searching for each scientist on Google, but I am not getting good answers on some of them. So I want to ask you to suggest a good app for studying the history of these scientists.
I am working on correlation in quantum systems. Consider an arbitrary finite-dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under the assumption that continuity is fulfilled. My question is whether it would be possib...
@EmilioPisanty Sup. I finished Part I of Q is for Quantum. I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc.
Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/…
You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago
So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball.
@ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why?
@AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially...
@vzn for physics/simulation, you may use Blender, which is very accurate. If you want to experiment with lenses and optics, then you may use Mistibushi Renderer; those are made for accurate scientific purposes.
@RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians*, but I haven't read it myself
@AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that?
@ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions...
When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former.
@RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that
And that is what I mean by "the basics".
Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers
@RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... Kurzgesagt optimistic nihilismyoutube.com/watch?v=MBRqu0YOH14
The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for...
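The counting rule above translates directly into code; a small Python sketch (the `days_in_month` helper is mine, not part of any calendar library):

```python
def days_in_month(month, leap=False):
    """Knuckle rule: knuckles are 31-day months, valleys are 30 days
    (28/29 for February); after July, restart at the first knuckle."""
    if month == 2:
        return 29 if leap else 28
    pos = month if month <= 7 else month - 7  # August restarts the count
    return 31 if pos % 2 == 1 else 30         # odd positions are knuckles

# January and July are both knuckles, as is August right after the restart:
assert days_in_month(1) == days_in_month(7) == days_in_month(8) == 31
```

The restart after July is what makes July and August consecutive 31-day months, the one place the simple alternation breaks.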
@vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world.
@Slereah It's like the brain has a limited capacity on math skills it can store.
@NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life"
I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money
It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge
Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how they are used for quantum error correction? I just want to have an overview as I might have the possibility of doing a master's thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it |
energy_budget
class climlab.process.energy_budget.EnergyBudget(**kwargs)
A parent class for explicit energy budget processes.
This class solves equations that include a heat capacity term like \(C \frac{dT}{dt} = \textrm{flux convergence}\).
In an Energy Balance Model with model state \(T\) this equation will look like this:
\[C \frac{dT}{dt} = R\downarrow - R\uparrow - H\]
\[\frac{dT}{dt} = \frac{R\downarrow}{C} - \frac{R\uparrow}{C} - \frac{H}{C}\]
Every EnergyBudget object has a heating_rate dictionary with items corresponding to each state variable. The heating rate accounts for the actual heating of a subprocess, namely the contribution to the energy budget of \(R\downarrow\), \(R\uparrow\) and \(H\) in this case. The temperature tendencies for each subprocess are then calculated by dividing the heating rate by the heat capacity \(C\).
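As a rough illustration of the heating-rate-to-tendency step (not the climlab API itself; the flux values and heat capacity below are made-up numbers):

```python
import numpy as np

# Illustrative numbers only: fluxes in W/m^2, heat capacity in J/(m^2 K).
C = 4.0e8                    # roughly 100 m of water per unit area
R_down = np.array([340.0])   # absorbed shortwave
R_up = np.array([240.0])     # outgoing longwave
H = np.array([20.0])         # other heat loss

heating_rate = R_down - R_up - H   # contribution to the energy budget, W/m^2
tendency = heating_rate / C        # dT/dt in K/s

# A forward step then updates the state, as step_forward() does internally:
timestep = 86400.0                 # one day, in seconds
T = np.array([288.0])
T_new = T + tendency * timestep
```

With these numbers the net heating of 80 W/m^2 warms the layer by about 0.017 K per day.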
Initialization parameters
An instance of EnergyBudget is initialized with the forwarded keyword arguments **kwargs of the corresponding child classes.
Object attributes
In addition to the parent class TimeDependentProcess, the following object attributes are generated or modified during initialization:
Variables Attributes
depth
Depth at grid centers (m)
depth_bounds
Depth at grid interfaces (m)
diagnostics
Dictionary access to all diagnostic variables
input
Dictionary access to all input variables
lat
Latitude of grid centers (degrees North)
lat_bounds
Latitude of grid interfaces (degrees North)
lev
Pressure levels at grid centers (hPa or mb)
lev_bounds
Pressure levels at grid interfaces (hPa or mb)
lon
Longitude of grid centers (degrees)
lon_bounds
Longitude of grid interfaces (degrees)
timestep
The amount of time over which step_forward() is integrating, in unit seconds.
Methods
add_diagnostic(name[, value])
Create a new diagnostic variable called name for this process and initialize it with the given value.
add_input(name[, value])
Create a new input variable called name for this process and initialize it with the given value.
add_subprocess(name, proc)
Adds a single subprocess to this process.
add_subprocesses(procdict)
Adds a dictionary of subprocesses to this process.
compute()
Computes the tendencies for all state variables given current state and specified input.
compute_diagnostics([num_iter])
Compute all tendencies and diagnostics, but don’t update model state.
declare_diagnostics(diaglist)
Add the variable names in diaglist to the list of diagnostics.
declare_input(inputlist)
Add the variable names in inputlist to the list of necessary inputs.
integrate_converge([crit, verbose])
Integrates the model until model states are converging.
integrate_days([days, verbose])
Integrates the model forward for a specified number of days.
integrate_years([years, verbose])
Integrates the model by a given number of years.
remove_diagnostic(name)
Removes a diagnostic from the process.diagnostics dictionary and also deletes the associated process attribute.
remove_subprocess(name[, verbose])
Removes a single subprocess from this process.
set_state(name, value)
Sets the variable name to a new state value.
set_timestep([timestep, num_steps_per_year])
Calculates the timestep in unit seconds and calls the setter function of
timestep()
step_forward()
Updates state variables with computed tendencies.
to_xarray([diagnostics])
Convert process variables to xarray.Dataset format.
class climlab.process.energy_budget.ExternalEnergySource(**kwargs)
A fixed energy source or sink to be specified by the user.
Object attributes
In addition to the parent class EnergyBudget, the following object attribute is modified during initialization:
Variables: heating_rate (dict) – the energy share dictionary for this subprocess is set to zero for every model state.
After initialization the user should modify the fields in the heating_rate dictionary, which contain heating rates in unit \(\textrm{W}/\textrm{m}^2\) for all state variables.
Example
Creating an Energy Balance Model with a uniform external energy source of \(10 \ \textrm{W}/ \textrm{m}^2\) for all latitudes:
>>> import climlab
>>> from climlab.process.energy_budget import ExternalEnergySource
>>> import numpy as np
>>> # create model & external energy subprocess
>>> model = climlab.EBM(num_lat=36)
>>> ext_en = ExternalEnergySource(state=model.state, **model.param)
>>> # modify external energy rate
>>> ext_en.heating_rate.keys()
['Ts']
>>> np.squeeze(ext_en.heating_rate['Ts'])
Field([-0., -0., -0., -0., -0., -0., -0., -0., -0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., -0., -0., -0., -0., -0., -0., -0., -0., -0.])
>>> ext_en.heating_rate['Ts'][:] = 10
>>> np.squeeze(ext_en.heating_rate['Ts'])
Field([ 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10., 10.])
>>> # add subprocess to model
>>> model.add_subprocess('ext_energy', ext_en)
>>> print model
climlab Process of type <class 'climlab.model.ebm.EBM'>.
State variables and domain shapes:
  Ts: (36, 1)
The subprocess tree:
top: <class 'climlab.model.ebm.EBM'>
   diffusion: <class 'climlab.dynamics.diffusion.MeridionalDiffusion'>
   LW: <class 'climlab.radiation.AplusBT.AplusBT'>
   ext_energy: <class 'climlab.process.energy_budget.ExternalEnergySource'>
   albedo: <class 'climlab.surface.albedo.StepFunctionAlbedo'>
      iceline: <class 'climlab.surface.albedo.Iceline'>
      cold_albedo: <class 'climlab.surface.albedo.ConstantAlbedo'>
      warm_albedo: <class 'climlab.surface.albedo.P2Albedo'>
   insolation: <class 'climlab.radiation.insolation.P2Insolation'>
Attributes
(The attribute and method tables for ExternalEnergySource are identical to those of the parent class EnergyBudget listed above.)
I want to solve this recursion:
$$T(n) = 5T\left(\frac{n}{5}\right) + \frac{n}{\lg n}$$
My attempt and issue:
None of the cases for master theorem apply here. I tried using Akra-Bazzi method (https://en.wikipedia.org/wiki/Akra%E2%80%93Bazzi_method) with
$f(n) = \frac{n}{\lg n}$
The derivative of $f(n)$ satisfies the condition for Akra-Bazzi: $$\frac{d}{dn} f(n) = \frac{1}{\lg n} - \frac{1}{\ln(2)\,\lg^2 n} = O(n)$$ Also, I found that $p=1$ satisfies the method's condition that $\frac{a}{b^p} = 1$.
Now the solution is given by this formula:
$$T(n) = \Theta\left(n^p\left(1+\int_{1}^{n} \frac{f(x)}{x^{p+1}}\, dx\right)\right)$$
So I calculated the integral: $$\int\frac{f(x)}{x^{p+1}}\, dx = \lg(\lg x) + C,$$ but with one of the limits being $1$ it diverges!
The weird thing is that according to the Wolfram Alpha calculator, $T(n) = \Theta(n\lg\lg n)$.
So why is it true if the integral diverges? What am I getting wrong here? |
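A quick numerical sanity check (my own Python sketch, with the arbitrary base case T(5) = 1) evaluates the recurrence on powers of 5 and compares it to the conjectured $n \lg\lg n$ growth:

```python
from math import log2

def T(k):
    """T(n) at n = 5**k, using the (arbitrary) base case T(5) = 1."""
    t = 1.0
    for j in range(2, k + 1):
        n = 5.0 ** j
        t = 5.0 * t + n / log2(n)
    return t

def ratio(k):
    """T(n) divided by the conjectured n * lg lg n growth."""
    n = 5.0 ** k
    return T(k) / (n * log2(log2(n)))

# If T(n) = Theta(n lg lg n), this ratio should settle near a constant:
print(ratio(20), ratio(40))  # both around 0.24
```

The ratio levels off, which is consistent with the $\Theta(n\lg\lg n)$ answer: dividing the recurrence by $n$ turns the driving term into a harmonic-like sum, which grows as $\ln k \approx \ln\lg n$.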
To write some documentation, I would like the ability to define a command such as
\showcase{x^{y}\in\Omega}
which expands to something roughly equivalent to
\verb|x^{y}\in\Omega| & $x^{y}\in\Omega$ \\
so that I can show what the LaTeX code looks like and how it shows on the document.
Now I know that it isn't really possible to use \verb for this because it's really fragile and you can't pass arguments from a macro to it. But, assuming that I'm not trying to write anything crazy in the argument to \showcase (e.g. braces are always balanced, I don't care much if spaces are collapsed, no %'s inside), is there a way to define such a command?
Update
\detokenize does (almost) exactly what I want. Take the following example
\newcommand\showcase[1]{{\ttfamily\detokenize{#1}} & $#1$}
\begin{tabular}{cc|cc}
\showcase{x^{y}} & \showcase{\hat{x}, \bar{x}} \\
\showcase{x_{y}} & \showcase{f\colon X \to Y} \\
\showcase{x'} & \showcase{\sqrt{x+2}} \\
\showcase{x''_{2}} & \showcase{2 < x \leq 4} \\
\showcase{A^{1}_{2}} & \showcase{\frac{a+b}{c+d}} \\
\showcase{3\pi/4} & \showcase{\int_{0}^{1} x^{2} \,dx} \\
\showcase{x\in\Omega} & \showcase{A \cup B \subseteq C \cap D}
\end{tabular}
Which produces the output:
My minor quibbles are that:
The double '' seems to have been collapsed into a ".
I find mildly annoying the spaces after control sequences. I know why they are there, but could there be a way to remove the space if it is not followed by a letter or number? (Or, conversely, only if followed by { or _?)
For details about the tensor notation, see “tensors in SMART+.pdf” in the Continuum Mechanics library.
The Continuum Mechanics Library: contimech.hpp The Constitutive Library: constitutive.hpp The Damage Library: damage.hpp
The Continuum Mechanics Library: contimech.hpp
tr(vec)
Provides the trace of a second order tensor written as a vector v in the SMART+ formalism.
Return a double. Example: vec v = randu(6); double trace = tr(v);
dev(vec)
Provides the deviatoric part of a second order tensor written as a vector v in the SMART+ formalism.
Return a vec. Example: vec v = randu(6); vec deviatoric = dev(v);
Mises_stress(vec)
Provides the Von Mises stress \(\sigma^{Mises}\) of a second order stress tensor written as a vector v in the SMART+ formalism.
Return a double. vec v = randu(6); double Mises_sig = Mises_stress(v);
Mises_strain(vec)
Provides the Von Mises strain \(\varepsilon^{Mises}\) of a second order strain tensor written as a vector v in the SMART+ formalism.
Return a double. vec v = randu(6); double Mises_eps = Mises_strain(v);
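As an illustration of these invariants, here is a NumPy sketch (not the SMART+ code itself; it assumes the Voigt vector stores the three normal components first, followed by the three shear components):

```python
import numpy as np

def tr(v):
    """Trace of a second order tensor stored as a 6-component vector."""
    return v[0] + v[1] + v[2]

def dev(v):
    """Deviatoric part: subtract one third of the trace from the
    normal components."""
    d = v.copy()
    d[:3] -= tr(v) / 3.0
    return d

def mises_stress(v):
    """sqrt(3*J2), with each shear component counted twice in J2."""
    d = dev(v)
    j2 = 0.5 * np.sum(d[:3] ** 2) + np.sum(d[3:] ** 2)
    return np.sqrt(3.0 * j2)

# Sanity check: a uniaxial stress of 100 has a von Mises stress of 100.
sigma = np.array([100.0, 0, 0, 0, 0, 0])
assert np.isclose(mises_stress(sigma), 100.0)
```

The uniaxial check is the standard consistency test for any von Mises implementation.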
eta_stress(vec)
Provides the stress flow \(\eta_{stress}=\frac{3}{2}\frac{\sigma_{dev}}{\sigma^{Mises}}\) from a second order stress tensor written as a vector v in the SMART+ formalism (i.e. the shear terms are multiplied by 2, providing shear angles).
Return a vec. vec v = randu(6); vec sigma_f = eta_stress(v);
eta_strain(vec)
Provides the strain flow \(\eta_{strain}=\frac{2}{3}\frac{\varepsilon_{dev}}{\varepsilon^{Mises}}\) from a second order strain tensor written as a vector v in the SMART+ formalism (i.e. the shear terms are multiplied by 2, providing shear angles).
Return a vec. vec v = randu(6); vec eps_f = eta_strain(v);
v2t_strain(vec)
Converts a second order strain tensor written as a vector v in the SMART+ formalism into a second order strain tensor written as a matrix m.
Return a mat. vec v = randu(6); mat m = v2t_strain(v);
t2v_strain(mat)
Converts a second order strain tensor written as a matrix m in the SMART+ formalism into a second order strain tensor written as a vector v.
Return a vec. mat m = randu(3,3); vec v = t2v_strain(m);
v2t_stress(vec)
Converts a second order stress tensor written as a vector v in the SMART+ formalism into a second order stress tensor written as a matrix m.
Return a mat. vec v = randu(6); mat m = v2t_stress(v);
t2v_stress(mat)
Converts a second order stress tensor written as a matrix m in the SMART+ formalism into a second order stress tensor written as a vector v.
Return a vec. mat m = randu(3,3); vec v = t2v_stress(m);
J2_strain(vec)
Provides the second invariant of a second order strain tensor written as a vector v in the SMART+ formalism.
Return a double. vec v = randu(6); double J2 = J2_strain(v);
J2_stress(vec)
Provides the second invariant of a second order stress tensor written as a vector v in the SMART+ formalism.
Return a double. vec v = randu(6); double J2 = J2_stress(v);
J3_strain(vec)
Provides the third invariant of a second order strain tensor written as a vector v in the SMART+ formalism.
Return a double. vec v = randu(6); double J3 = J3_strain(v);
J3_stress(vec)
Provides the third invariant of a second order stress tensor written as a vector v in the SMART+ formalism.
Return a double. vec v = randu(6); double J3 = J3_stress(v);
normal_ellipsoid(double, double, double, double, double)
Provides the normalized normal vector to an ellipsoid with semi-principal axes of length \(a_{1}\), \(a_{2}\), \(a_{3}\). The direction of the normal vector is set by the angles u and v. These 2 angles correspond to rotations in the planes defined by the center of the ellipsoid and the a1 and a2 directions for u, and the a1 and a3 directions for v. u = 0 corresponds to the a1 direction and v = 0 corresponds to the a3 one. So the normal vector is set at the parametrized position :
\[ \begin{align} x & = a_{1} cos(u) sin(v) \\ y & = a_{2} sin(u) sin(v) \\ z & = a_{3} cos(v) \end{align} \]
Return a vec.
const double Pi = 3.14159265358979323846; double u = 2.*Pi*(double)rand()/(double)RAND_MAX; double v = Pi*(double)rand()/(double)RAND_MAX; double a1 = (double)rand(); double a2 = (double)rand(); double a3 = (double)rand(); vec n = normal_ellipsoid(u, v, a1, a2, a3);
sigma_int(vec , double, double, double, double, double)
Provides the normal and tangent components of a stress vector \(\sigma_{in}\) in accordance with the normal direction
n to an ellipsoid with axes \(a_{1}\), \(a_{2}\), \(a_{3}\). The normal vector is set at the parametrized position : \[ \begin{align} x & = a_{1} cos(u) sin(v) \\ y & = a_{2} sin(u) sin(v) \\ z & = a_{3} cos(v) \end{align} \] Return a vec. vec sigma_in = randu(6); double u = 2.*Pi*(double)rand()/(double)RAND_MAX; double v = Pi*(double)rand()/(double)RAND_MAX; double a1 = (double)rand(); double a2 = (double)rand(); double a3 = (double)rand(); vec sigma_i = sigma_int(sigma_in, a1, a2, a3, u, v);
p_ikjl(vec)
Provides the Hill interfacial operator according to a normal
a (see the papers of Siredey and the Entemeyer Ph.D. dissertation). Return a mat. vec v = randu(6); mat H = p_ikjl(v); The Constitutive Library: constitutive.hpp
Ireal()
Provides the fourth order identity tensor written in Voigt notation \(I_{real}\), where :
\[I_{real} = \left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0.5 \end{array} \right)\] Return a mat. Example: mat Ir = Ireal();
Ireal2()
Provides the fourth order identity tensor \(\widehat{I}\) written in the SMART+ formalism. So :
\[\widehat{I} = \left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 \end{array} \right)\]
For example, this tensor allows one to obtain : \(L*\widehat{M}=I\) or \(\widehat{L}*M=I\), where a matrix \(\widehat{A}\) is set by \(\widehat{A}=\widehat{I}A\widehat{I}\)
Return a mat. Example: mat Ir2 = Ireal2();
Ivol()
Provides the volumic part of the identity tensor \(I_{real}\) written in the SMART+ formalism. So :
\[I_{vol} = \left( \begin{array}{cccccc} 1/3 & 1/3 & 1/3 & 0 & 0 & 0 \\ 1/3 & 1/3 & 1/3 & 0 & 0 & 0 \\ 1/3 & 1/3 & 1/3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right)\] Return a mat. Example: mat Iv = Ivol();
Idev()
Provides the deviatoric part of the identity tensor \(I_{real}\) written in the SMART+ formalism. So :
\[I_{dev} = \left( \begin{array}{cccccc} 2/3 & -1/3 & -1/3 & 0 & 0 & 0 \\ -1/3 & 2/3 & -1/3 & 0 & 0 & 0 \\ -1/3 & -1/3 & 2/3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0.5 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.5 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0.5 \end{array} \right)\] Return a mat. Example: mat Id = Idev();
Idev2()
Provides the deviatoric part of the identity tensor \(\widehat{I}\) written in the SMART+ formalism. So :
\[I_{dev2} = \left( \begin{array}{cccccc} 2/3 & -1/3 & -1/3 & 0 & 0 & 0 \\ -1/3 & 2/3 & -1/3 & 0 & 0 & 0 \\ -1/3 & -1/3 & 2/3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 \end{array} \right)\] Return a mat. Example: mat Id2 = Idev2();
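The volumic and deviatoric operators are complementary projectors, which is easy to verify numerically; a small NumPy sketch (independent of SMART+ itself):

```python
import numpy as np

Ireal = np.diag([1.0, 1.0, 1.0, 0.5, 0.5, 0.5])
Ivol = np.zeros((6, 6))
Ivol[:3, :3] = 1.0 / 3.0     # the 1/3 block of I_vol
Idev = Ireal - Ivol          # reproduces the I_dev matrix above

# I_vol and I_dev split I_real and are orthogonal projectors:
assert np.allclose(Ivol + Idev, Ireal)
assert np.allclose(Ivol @ Ivol, Ivol)      # idempotent
assert np.allclose(Ivol @ Idev, 0.0)       # mutually orthogonal
assert np.isclose(Idev[0, 0], 2.0 / 3.0)
```

This decomposition is what makes isotropic stiffness tensors diagonal in the (volumic, deviatoric) basis.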
Ith()
Provides the vector \(I_{th} = \left( \begin{array}{c} 1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \end{array} \right)\). Return a vec. Example: vec It = Ith();
Ir2()
Provides the vector \(I_{r2} = \left( \begin{array}{c} 1 \\ 1 \\ 1 \\ 2 \\ 2 \\ 2 \end{array} \right)\). Return a vec. Example: vec I2 = Ir2();
Ir05()
Provides the vector \(I_{r05} = \left( \begin{array}{c} 1 \\ 1 \\ 1 \\ 0.5 \\ 0.5 \\ 0.5 \end{array} \right)\). Return a vec. Example: vec I05 = Ir05();
L_iso(double, double, string)
Provides the elastic stiffness tensor for an isotropic material.
The first two arguments are a couple of elastic properties. The third argument specifies which couple has been provided and the nature and order of the coefficients. Exhaustive list of possible third arguments: 'Enu', 'nuE', 'Kmu', 'muK', 'KG', 'GK', 'lambdamu', 'mulambda', 'lambdaG', 'Glambda'. Return a mat. Example : double E = 210000; double nu = 0.3; mat Liso = L_iso(E,nu,'Enu');
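For the 'Enu' couple specifically, the isotropic stiffness can be assembled from the volumic and deviatoric operators; a NumPy sketch (an illustration of the standard formula L = 3K·Ivol + 2μ·Idev, not SMART+'s implementation):

```python
import numpy as np

def L_iso_Enu(E, nu):
    """Isotropic stiffness, 'Enu' couple: L = 3*K*Ivol + 2*mu*Idev."""
    K = E / (3.0 * (1.0 - 2.0 * nu))   # bulk modulus
    mu = E / (2.0 * (1.0 + nu))        # shear modulus
    Ireal = np.diag([1.0, 1.0, 1.0, 0.5, 0.5, 0.5])
    Ivol = np.zeros((6, 6))
    Ivol[:3, :3] = 1.0 / 3.0
    return 3.0 * K * Ivol + 2.0 * mu * (Ireal - Ivol)

L = L_iso_Enu(210000.0, 0.3)
# L[0,0] = lambda + 2*mu = E*(1 - nu) / ((1 + nu)*(1 - 2*nu)):
assert np.isclose(L[0, 0], 210000.0 * 0.7 / (1.3 * 0.4))
assert np.isclose(L[3, 3], 210000.0 / 2.6)  # shear diagonal = mu
```

The same two-projector construction gives the compliance M_iso by replacing 3K and 2μ with their reciprocals.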
M_iso(double, double, string)
Provides the elastic compliance tensor for an isotropic material.
The first two arguments are a couple of elastic properties. The third argument specifies which couple has been provided and the nature and order of the coefficients. Exhaustive list of possible third arguments: 'Enu', 'nuE', 'Kmu', 'muK', 'KG', 'GK', 'lambdamu', 'mulambda', 'lambdaG', 'Glambda'. Return a mat. Example : double E = 210000; double nu = 0.3; mat Miso = M_iso(E,nu,'Enu');
L_cubic(double, double, double)
Provides the elastic stiffness tensor for a cubic material.
Arguments are the stiffness coefficients C11, C12 and C44.
Return a mat.
Example : double C11 = (double)rand(); double C12 = (double)rand(); double C44 = (double)rand(); mat Lcubic = L_cubic(C11,C12,C44);
M_cubic(double, double, double)
Provides the elastic compliance tensor for a cubic material.
Arguments are the stiffness coefficients C11, C12 and C44.
Return a mat.
Example : double C11 = (double)rand(); double C12 = (double)rand(); double C44 = (double)rand(); mat Miso = M_cubic(C11,C12,C44);
L_ortho(double, double, double, double, double, double, double, double, double, string)
Provides the elastic stiffness tensor for an orthotropic material.
Arguments can be either all the stiffness coefficients or the material parameters. For an orthotropic material the material parameters should be : Ex, Ey, Ez, nuxy, nuyz, nuxz, Gxy, Gyz, Gxz.
The last argument must be set to “Cii” if the inputs are the stiffness coefficients or to “EnuG” if the inputs are the material parameters.
Return a mat.
Example : double C11 = (double)rand(); double C12 = (double)rand(); double C13 = (double)rand(); double C22 = (double)rand(); double C23 = (double)rand(); double C33 = (double)rand(); double C44 = (double)rand(); double C55 = (double)rand(); double C66 = (double)rand(); mat Lortho = L_ortho(C11, C12, C13, C22, C23, C33, C44, C55, C66,"Cii");
M_ortho(double, double, double, double, double, double, double, double, double,string)
Provides the elastic compliance tensor for an orthotropic material.
Arguments can be either all the stiffness coefficients or the material parameters. For an orthotropic material the material parameters should be : Ex, Ey, Ez, nuxy, nuyz, nuxz, Gxy, Gyz, Gxz.
The last argument must be set to “Cii” if the inputs are the stiffness coefficients or to “EnuG” if the inputs are the material parameters.
Return a mat.
Example : double C11 = (double)rand(); double C12 = (double)rand(); double C13 = (double)rand(); double C22 = (double)rand(); double C23 = (double)rand(); double C33 = (double)rand(); double C44 = (double)rand(); double C55 = (double)rand(); double C66 = (double)rand(); mat Mortho = M_ortho(C11, C12, C13, C22, C23, C33, C44, C55, C66,"Cii");
L_isotrans(double, double, double, double, double, int)
Provides the elastic stiffness tensor for a transversely isotropic material.
Arguments are the longitudinal Young modulus EL, the transverse Young modulus ET, Poisson's ratio for loading along the longitudinal axis nuTL, Poisson's ratio for loading along the transverse axis nuTT, the shear modulus GLT and the axis of symmetry.
Return a mat.
Example : double EL = (double)rand(); double ET = (double)rand(); double nuTL = (double)rand(); double nuTT = (double)rand(); double GLT = (double)rand(); double axis = 1; mat Lisotrans = L_isotrans(EL, ET, nuTL, nuTT, GLT, axis);
M_isotrans(double, double, double, double, double, int)
Provides the elastic compliance tensor for a transversely isotropic material.
Arguments are the longitudinal Young modulus EL, the transverse Young modulus ET, Poisson's ratio for loading along the longitudinal axis nuTL, Poisson's ratio for loading along the transverse axis nuTT, the shear modulus GLT and the axis of symmetry.
Return a mat.
Example : double EL = (double)rand(); double ET = (double)rand(); double nuTL = (double)rand(); double nuTT = (double)rand(); double GLT = (double)rand(); double axis = 1; mat Misotrans = M_isotrans(EL, ET, nuTL, nuTT, GLT, axis);
IsoParam(vec, v
Christoffel(mat, vec)
Provides the Christoffel tensor in the anisotropic case.
Parameters are the elastic stiffness matrix L and the wavenumber vector k. Return a mat. Example : mat L = randu(6,6); vec k = randu(3); mat Christ = Christoffel(L, k); The Damage Library: damage.hpp
weibull(vec, double, double, double, double, string)
Provides the damage evolution \(\delta D\) considering a Weibull damage law.
It is given by : \(\delta D = (1-D_{old})\Big(1-\exp\big(-\big(\frac{crit}{\beta}\big)^{\alpha}\big)\Big)\) Parameters of this function are: the stress vector \(\sigma\), the old damage \(D_{old}\), the shape parameter \(\alpha\), the scale parameter \(\beta\), the time increment \(\Delta T\) and the criterion (which is a string).
The criterion possibilities are :
“vonmises” : \(crit = \sigma_{Mises}\) “hydro” : \(crit = tr(\sigma)\) “J3” : \(crit = J3(\sigma)\). Default value of the criterion is “vonmises”. Return a scalar. Example: double varD = damage_weibull(stress, damage, alpha, beta, DTime, criterion);
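The increment formula itself is easy to sketch outside the library (illustration only; `weibull_increment` is a hypothetical helper, and the criterion value is assumed to have been evaluated into `crit` beforehand):

```python
from math import exp

def weibull_increment(crit, D_old, alpha, beta):
    """delta_D = (1 - D_old) * (1 - exp(-(crit/beta)**alpha))."""
    return (1.0 - D_old) * (1.0 - exp(-((crit / beta) ** alpha)))

# Starting from D_old = 0.2, the increment can never exceed the
# remaining (1 - D_old) = 0.8, so total damage stays bounded by 1:
dD = weibull_increment(crit=150.0, D_old=0.2, alpha=2.0, beta=200.0)
assert 0.0 < dD < 0.8
```

The (1 - D_old) prefactor is what keeps the accumulated damage from ever exceeding 1, whatever the criterion value.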
damage_kachanov(vec, vec, double, double, double, string)
Provides the damage evolution \(\delta D\) considering a Kachanov’s creep damage law.
It is given by : \(\delta D = \Big(\frac{crit}{A_0(1-D_{old})}\Big)^r\) Parameters of this function are: the stress vector \(\sigma\), the strain vector \(\varepsilon\), the old damage \(D_{old}\), the material properties characteristic of creep damage \((A_0,r)\) and the criterion (which is a string).
The criterion possibilities are :
“vonmises” : \(crit = (\sigma*(1+\varepsilon))_{Mises}\) “hydro” : \(crit = tr(\sigma*(1+\varepsilon))\) “J3” : \(crit = J3(\sigma*(1+\varepsilon))\). Here, the criterion has no default value. Return a scalar. Example: double varD = damage_kachanov(stress, strain, damage, A0, r, criterion);
damage_miner(double, double, double, double, double, double, double)
Provides the constant damage evolution \(\Delta D\) considering a Woehler-Miner damage law.
It is given by: \(\Delta D = \big(\frac{S_{Max}-S_{Mean}+Sl_0(1-b\,S_{Mean})}{S_{ult}-S_{Max}}\big)\big(\frac{S_{Max}-S_{Mean}}{B_0(1-b\,S_{Mean})}\big)^\beta\). Parameters of this function are: the maximum stress value \(\sigma_{Max}\), the mean stress value \(\sigma_{Mean}\), the ultimate stress value \(\sigma_{ult}\), and the parameters \(b\), \(B_0\), \(\beta\) and \(Sl_0\).
Default value of \(Sl_0\) is 0.0.
Returns a scalar. Example: double varD = damage_miner(S_max, S_mean, S_ult, b, B0, beta, Sl_0);
damage_manson(double, double, double)
Provides the constant damage evolution \(\Delta D\) considering a Coffin-Manson’s damage law.
It is given by: \(\Delta D = \big(\frac{\sigma_{Amp}}{C_{2}}\big)^{\gamma_2}\). Parameters of this function are: the stress amplitude \(\sigma_{Amp}\), \(C_{2}\) and \(\gamma_2\).
Return a scalar.
Example: double varD = damage_manson(S_amp, C2, gamma2);
“Ne pleure pas, Alfred ! J’ai besoin de tout mon courage pour mourir à vingt ans!”
We all remember the last words of Évariste Galois to his brother Alfred. Less well known are the mathematical results contained in his last letter, written to his friend Auguste Chevalier on the eve of his fatal duel. Here are the final sentences:
Tu prieras publiquement Jacobi ou Gauss de donner leur avis, non sur la vérité, mais sur l'importance des théorèmes.
Après cela, il se trouvera, j'espère, des gens qui trouveront leur profit à déchiffrer tout ce gâchis. Je t'embrasse avec effusion. E. Galois, le 29 Mai 1832
A major result contained in this letter concerns the groups $L_2(p)=PSL_2(\mathbb{F}_p) $, that is the group of $2 \times 2 $ matrices with determinant equal to one over the finite field $\mathbb{F}_p $ modulo its center. $L_2(p) $ is known to be simple whenever $p \geq 5 $. Galois writes that $L_2(p) $ cannot have a non-trivial permutation representation on fewer than $p+1 $ symbols whenever $p > 11 $ and indicates the transitive permutation representation on exactly $p $ symbols in the three ‘exceptional’ cases $p=5,7,11 $.
Let $\alpha = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} $ and consider for $p=5,7,11 $ the involutions on $\mathbb{P}^1_{\mathbb{F}_p} = \mathbb{F}_p \cup \{ \infty \} $ (on which $L_2(p) $ acts via Moebius transformations)
$\pi_5 = (0,\infty)(1,4)(2,3) \quad \pi_7=(0,\infty)(1,3)(2,6)(4,5) \quad \pi_{11}=(0,\infty)(1,6)(3,7)(9,10)(5,8)(4,2) $
(in fact, Galois uses the involution $~(0,\infty)(1,2)(3,6)(4,8)(5,10)(9,7) $ for $p=11 $), then $L_2(p) $ leaves invariant the set consisting of the $p $ involutions $\Pi = \{ \alpha^{-i} \pi_p \alpha^i~:~1 \leq i \leq p \} $. After mentioning these involutions Galois merely writes :
Ainsi, pour le cas de $p=5,7,11 $, l'équation modulaire s'abaisse au degré $p $.
En toute rigueur, cette réduction n'est pas possible dans les cas plus élevés.
Alternatively, one can deduce these permutation representations from group isomorphisms. As $L_2(5) \simeq A_5 $, the alternating group on 5 symbols, $L_2(5) $ clearly acts transitively on 5 symbols.
Similarly, for $p=7 $ we have $L_2(7) \simeq L_3(2) $ and so the group acts as automorphisms of the projective plane over the field with two elements $\mathbb{P}^2_{\mathbb{F}_2} $, aka the Fano plane, as depicted on the left.
This finite projective plane has 7 points and 7 lines and $L_3(2) $ acts transitively on them.
For $p=11 $ the geometrical object is a bit more involved. The set of non-zero squares (quadratic residues) in $\mathbb{F}_{11} $ is
$\{ 1,3,4,5,9 \} $
and if we translate this set using the additive structure of $\mathbb{F}_{11} $ (writing 11 for the class of 0), one obtains the following 11 five-element sets
$\{ 1,3,4,5,9 \}, \{ 2,4,5,6,10 \}, \{ 3,5,6,7,11 \}, \{ 1,4,6,7,8 \}, \{ 2,5,7,8,9 \}, \{ 3,6,8,9,10 \}, $
$ \{ 4,7,9,10,11 \}, \{ 1,5,8,10,11 \}, \{ 1,2,6,9,11 \}, \{ 1,2,3,7,10 \}, \{ 2,3,4,8,11 \} $
and if we regard these sets as ‘lines’ we see that two distinct lines intersect in exactly 2 points and that any two distinct points lie on exactly two ‘lines’. That is, intersection sets up a bijection between the 55-element set of all pairs of distinct points and the 55-element set of all pairs of distinct ‘lines’. This is called the
biplane geometry.
The subgroup of $S_{11} $ (acting on the eleven elements of $\mathbb{F}_{11} $) stabilizing this set of 11 5-element sets is precisely the group $L_2(11) $ giving the permutation representation on 11 objects.
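These combinatorial claims are easy to machine-check. The sketch below (plain C++, written for this post, not taken from it) rebuilds the eleven translates, with 11 standing for 0 as in the lists above, and verifies that every pair of distinct points lies on exactly two 'lines':

```cpp
#include <cassert>

// Build the 11 translates of {1,3,4,5,9} modulo 11 (representative 11 for 0)
// and check the biplane axiom: each pair of distinct points of {1,...,11}
// lies in exactly two of the eleven blocks.
bool is_biplane() {
    int base[5] = {1, 3, 4, 5, 9};
    int blocks[11][5];
    for (int t = 0; t < 11; ++t)
        for (int j = 0; j < 5; ++j) {
            int v = (base[j] + t) % 11;       // translate mod 11
            blocks[t][j] = (v == 0) ? 11 : v; // write 11 instead of 0
        }
    for (int a = 1; a <= 11; ++a)
        for (int b = a + 1; b <= 11; ++b) {
            int count = 0;
            for (int t = 0; t < 11; ++t) {
                bool hasA = false, hasB = false;
                for (int j = 0; j < 5; ++j) {
                    hasA |= (blocks[t][j] == a);
                    hasB |= (blocks[t][j] == b);
                }
                if (hasA && hasB) ++count;
            }
            if (count != 2) return false; // biplane axiom violated
        }
    return true;
}
```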
An alternative statement of Galois’ result is that for $p > 11 $ there is no subgroup of $L_2(p) $
complementary to the cyclic subgroup
$C_p = \{ \begin{bmatrix} 1 & x \\ 0 & 1 \end{bmatrix}~:~x \in \mathbb{F}_p \} $
That is, there is no subgroup $F $ such that set-theoretically $L_2(p) = F \times C_p $ (note that this is of course
not a group product; all it says is that any element can be written as $g=f.c $ with $f \in F, c \in C_p $).
However, in the three exceptional cases we do have complementary subgroups. In fact, set-theoretically we have
$L_2(5) = A_4 \times C_5 \qquad L_2(7) = S_4 \times C_7 \qquad L_2(11) = A_5 \times C_{11} $
and it is a truly amazing fact that the three groups appearing are precisely the three Platonic groups!
Recall that there are 5 Platonic (or Scottish) solids coming in three sorts when it comes to rotation-automorphism groups: the tetrahedron (group $A_4 $), the cube and octahedron (group $S_4 $) and the dodecahedron and icosahedron (group $A_5 $). The “4” in the cube refers to the four body diagonals and the “5” in the dodecahedron to the five inscribed cubes.
That is, our three ‘exceptional’ Galois-groups correspond to the three Platonic groups, which in turn correspond to the three exceptional Lie algebras $E_6,E_7,E_8 $ via McKay correspondence (wrt. their 2-fold covers). Maybe I’ll detail this latter connection another time. It sure seems that surprises often come in triples…
Finally, it is well known that $L_2(5) \simeq A_5 $ is the automorphism group of the icosahedron (or dodecahedron) and that $L_2(7) $ is the automorphism group of the Klein quartic.
So, one might ask : is there also a nice curve connected with the third group $L_2(11) $? Rumour has it that this is indeed the case and that the curve in question has genus 70… (to be continued).
Functions as Power Series
A power series $\displaystyle\sum_{n=0}^\infty c_n x^n$ can be thought of as a function of $x$ whose domain is the interval of convergence.
We begin by looking at the
most basic examples, found by manipulating the geometric series .
Consider $\displaystyle\sum_{n=0}^\infty r^n$. This
geometric series converges when $|r|<1$ and diverges otherwise. When it converges, its value is $\dfrac{1}{1-r}$; taking $r=x$ gives
$$f(x)=\frac{1}{1-x} = \sum_{n=0}^\infty x^n = 1+x+x^2+x^3+x^4+\cdots\qquad\text{ as long as }\lvert x\rvert<1$$
We get a different function by replacing $x$ with $-x$: $$\displaystyle{g(x)=\frac{1}{1+x} = \sum_{n=0}^\infty (-x)^n = 1-x+x^2+\cdots}\quad\text{ as long as } \lvert -x\rvert=\lvert x\rvert<1.$$
The video will look at these and other examples arising from the geometric series.
DO: Work on the following two examples before reading ahead.

Example 1: Find a power series representation of the function $\displaystyle\frac{x}{1+x^2}$, and determine for which $x$ it would be defined.

Example 2: Find a power series representation of the function $\displaystyle\frac{1}{7+2x}$, and determine for which $x$ it would be defined.

Solution 1: Replace $x$ (in our original $f(x)$ before the video) by $-x^2$, and multiply the expression by $x$. $$\displaystyle{\frac{x}{1+x^2} = x\sum_{n=0}^\infty (-x^2)^n =\sum_{n=0}^\infty x(-x^2)^n=\sum_{n=0}^\infty (-1)^n x^{2n+1}= x-x^3+x^5-x^7+\cdots}.$$For convergence, we need $|-x^2|<1$, which simplifies to $\lvert x\rvert<1$.

Solution 2: Divide out a $7$ in the denominator, in order to make the constant equal to 1: $\displaystyle\frac{1}{7+2x}=\frac{1}{7}\cdot\frac{1}{1+\left(\frac{2x}7\right)}.$ Now we can see that we replace our original $x$ by $-\frac{2x}{7}$ and multiply the expression by $\frac17$.
$$\displaystyle \frac{1}{7+2x}=\left(\frac17\right)\sum_{n=0}^\infty\left(-\frac{2x}{7}\right)^n=\left(\frac17\right)\sum_{n=0}^\infty(-1)^n\left(\frac{2x}7 \right)^n=\sum_{n=0}^\infty(-1)^n\frac{(2x)^n}{7^{n+1}}=\frac17-\frac{2}{7^2}x+\frac{2^2}{7^3}x^2-\frac{2^3}{7^4}x^3+\cdots .$$For convergence, we need $\lvert\frac{2x}{7}\rvert<1$. This simplifies to $|x|<\frac72$.
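As a quick numerical sanity check (a sketch written for these notes, not part of the course text), one can compare a partial sum of the series from Solution 2 with the closed form at points inside the interval of convergence:

```cpp
#include <cassert>
#include <cmath>

// Partial sum of sum_{n>=0} (-1)^n (2x)^n / 7^{n+1}, which should approach
// 1/(7+2x) for |x| < 7/2.
double series_1_over_7_plus_2x(double x, int terms) {
    double sum = 0.0;
    double term = 1.0 / 7.0; // the n = 0 term
    for (int n = 0; n < terms; ++n) {
        sum += term;
        term *= -2.0 * x / 7.0; // ratio between consecutive terms
    }
    return sum;
}
```

With 60 terms and $x=\pm 1$ the partial sum agrees with $\frac{1}{7+2x}$ to machine precision, as the ratio between terms has magnitude $2/7$.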
In this tutorial, we will implement the Heap Data Structure from scratch and see how we can perform various operations with the desired running time complexity.
In the previous tutorial, we described how heaps
can be used for choosing the minimum/maximum item in each iteration, and also promised an explanation of their inner workings, due to which we can achieve the performance stated there. Before moving forward, let's see the definition of Heaps from Wikipedia once again:
In computer science, a
heap is a specialized tree-based data structure that satisfies the heap property: if A is a parent node of B, then the key of node A is ordered with respect to the key of node B, with the same ordering applying across the heap. A heap can be classified further as either a "max heap" or a "min heap".
It states that the
Heap is a tree-based data structure, and in practice it is implemented as a Complete Binary Tree; if the complete binary tree follows the heap property stated above, then we call it a Heap. Consider the following two statements:
A binary tree is full if each node is
either a leaf or possesses exactly two child nodes.
A binary tree
is complete if all levels except possibly the last are completely full, and the last level has all its nodes to the left side.
The following diagram should clarify the distinction between a complete and a full binary tree:
Now, a Complete Binary Tree can be implemented using the simplest inbuilt data structure:
an Array. The illustration below shows a complete binary tree represented as an array.
In the illustration above, we can see that the node at index \(i\) has its parent at \(\lfloor {(i-1)/2}\rfloor\) (floor value), its left child at \(2*i +1\) and its right child at \(2*i + 2\)
(following 0-based indexing). So, for element 4 located at index 2, the parent is at index 0 (element 5) and the left child at index 5 (element 3).
So now we know how to represent the
Heap in the form of an array; the only thing left is that it should follow the heap property. For example, if we take the case of a Min-Heap, then every parent must have a key less than its children. Now we will create the Min-Heap from the set of elements taken from an array, while ensuring that the Min-Heap property stays intact, and in the process we formulate our first utility function, insertElement. First, let's take an example:
Let the input array from which elements are taken one by one be \(A[6] = [5,2,4,3,1,6]\). Initially, the Min-Heap is empty, with capacity 6: \(H[6] = [-,-,-,-,-,-]\). For insertion of elements, the following conditions must hold:
It must remain a Complete Binary Tree. (For this we need to insert new node in the last array index)
It must retain the heap property. (If property is violated, we should swap the child and parent)
While maintaining above two invariants, the following illustration shows changes in heap:
As evident from the illustration, insertion of element 2 violates the heap property, and in order to restore it we need to swap the elements 2 and 5; the same is the case with the insertion of element 3 (violations are marked with red circles). We can see that in the case of insertion of element 1, it takes two swaps before the heap property is restored. The procedure of exchanging child and parent nodes until the heap property is restored is formally called
BubbleUp.
So in our first utility function, insertElement, we will just insert the element at the first available array index and then decide, based on the parent key, whether or not to apply the
BubbleUp operation. Following is the functional code:
void MinHeap::bubbleUp(int index)
{
while(index != 0 && heap[parent(index)] > heap[index])
{
swap(&heap[parent(index)],&heap[index]);
index = parent(index); //----------------------------- (1)
}
}
void MinHeap::insertElement(int key) {
if(heap_size == capacity)
{
cout<<"Capacity full\n";
return;
}
heap_size++;
heap[heap_size -1] = key;
bubbleUp(heap_size -1);
}
Now that we have a working code of
insertElement, let us analyze the performance of this code and find out whether we fulfilled the expectations raised in our previous tutorial. These were the computational complexities that we claimed to achieve:
Insertion - insert an element with cost \(O (\log N)\)
Deletion - delete an element at given index with cost \(O (\log N)\)
extractMin - Delete minimum key element with cost \(O (\log N)\)
getMin - get the minimum key element with cost \(O(1)\)
InsertBulk - insert \(N \) elements in bulk with cost \(O (N)\)
The main work is done in the while loop of the
BubbleUp function, and it ends when the index is 0 or the parent has a lower key than its child. In the worst case, the loop ends when the index reaches 0. Now, if we look at the line marked as (1), we realize that the index decreases at the rate of \((i-1)/2\). So if we start at index \(n\), in the worst case we will travel \(\log_2 n\) levels. This is easily seen if we write n as the nearest power of two and note that each division by 2 takes one exponent away; this way \(\log_{2}n = \log_{2}2^h = h\). This is exactly what we promised previously.
The Min-Heap property states that each parent has its key value lower than that of its children. So if the tree is fully heapified, then the minimum key is always at the root; thus, the function
getMin has complexity of order \(O(1)\), as we can access the first element of an array in constant time.
What about the
extractMin and
Deletion functions? In fact,
extractMin is a special case of
Deletion, in which the index value passed is 0. We'll understand deletion via an example, but before that, let us first reiterate the invariants of the delete operation and note some salient points:
It must remain a Complete Binary Tree. Deletion is somewhat tricky; there is a difference between removing the value in a node and removing the node itself. Here, we can't remove any random node, as that would break the tree, and that is not what we want. So what we do is remove the value of the given node, copy the value of the last node of the Heap into the given node, and then delete the last node. In this way, the Heap remains a Complete Binary Tree.
It must retain the heap property. In order to maintain the first invariant we may have violated the second, so we must swap child and parent until the property is restored.
Following diagram shows working of
extractMin on the Heap that we formed in the insertion step above:
It should be evident from the illustration above that in one
extractMin operation, we traverse the tree at most as many times as its height. This means we traverse \(\log_2 n\) levels for n elements in the array; the same time complexity can be seen from the working code as follows:
void MinHeap::bubbleDown(int index)
{
int l = left(index);
int r = right(index);
int smallest = index;
if(l < heap_size && heap[l] < heap[index])
{
smallest = l;
}
if(r < heap_size && heap[r] < heap[smallest])
{
smallest = r;
}
if(smallest != index)
{
swap(&heap[index],&heap[smallest]);
bubbleDown(smallest);
}
}
int MinHeap::extractMin() {
if(heap_size == 0)
return 0;
else if(heap_size == 1)
{
heap_size--;
return heap[0];
}
int data = heap[0];
heap[0] = heap[heap_size-1];
heap_size--;
bubbleDown(0);
return data;
}
In the
BubbleDown function, recursion happens for the smallest element, which is calculated as the smaller of the two children of the current element. Since the index increases at the rate of \(2*i\), for n elements we reach the end after \(\log_2 n\) iterations.
We have seen that the individual element insertion takes \(\log_2 n\) time. This is pretty good in case of dynamic element insertion but still a heavy price if we are performing this operation for each of the elements we have in our static array; this is where
bulkInsert plays its role. The idea is that we copy all the elements from the source array to the Heap. In this way we satisfy the first invariant, that it should be a Complete Binary Tree; now all we need to do is restore the heap property. The heap property can be restored by running
bubbleDown on each of the parents from bottom to the top. But how will this operation restore the heap property?
bubbleDown called with an index ensures that the node at the given index is in the right position with respect to its children and follows the heap property, provided all of its children follow the same. So if we call
bubbleDown for all the parents from bottom to top, then the whole tree will be heapified. For visual clarification, look at the example:
Let us say that heap has following elements after copying- \(H[6] = [5,2,4,6,1,3]\) . Following diagram shows the sequence of operations:
Now let's see the working code and analyze its running time complexity:
void MinHeap::bulkInsert(int* a,int size)
{
memcpy(heap,a,size * sizeof(int)); //---------------------------(1)
heap_size = size;
for(int i = parent(heap_size-1);i >= 0; i--) //--------------(2)
{
bubbleDown(i);
}
}
At point (1), we are copying the elements from the given array. So if it has \(n\) elements, then our running time complexity can't be less than \(O(n)\). Now, the time complexity is governed by the processing of the for loop at point (2).
It is evident that we start at index roughly \(n/2\) and go down to 0. We can say that the
bubbleDown method is called about \(n/2\) times, and the time complexity of
bubbleDown is \(\log_2 n\), so the total time complexity should be \(O(n \log_2 n)\). But in the earlier tutorial, we claimed a time complexity of \(O(n)\) for the
bulkInsert operation, and indeed its time complexity is \(O(n)\). So where did we go wrong in our assessment?
The bubbleDown method has worst-case time complexity \(\log_2 n\) only if it is called with index 0. But in the
function bulkInsert,
bubbleDown is called for index 0 only once, and for all other nodes the depth to which an element can descend is smaller. Following is the exact assessment of the
bubbleDown cost for all the levels of the Heap [1]:
Let's say that our tree is a complete tree that is \(n = 2^{h+1} -1\) where \(h \geq 0\) is the height of Tree and its bottommost level is full. Now let's analyze the amount of work done for each level:
(Bottom level 0) It will have \(2^h\) nodes, all of which are leaves, so we don't call bubbleDown.
(Bottom level 1) It will have \(2^{h-1}\) nodes and each of them can go down by 1 level.
(Bottom level 2) It will have \(2^{h-2}\) nodes and each of them can go down by 2 levels.
(Bottom level i) It will have \(2^{h-i}\) nodes and each of them can go down by \(i\) levels.
so calculating level by level, the time complexity will be proportional to \(T(n) = \displaystyle\sum_{i=0}^{h} Nodes_i * bubbleDown_i = \sum_{i=0}^{h} 2^{h-i} * i\).
Taking the factor \(2^h\) outside the summation:
\( \begin{equation} T(n) = \displaystyle 2^h \sum_{i=0}^{h} \frac {i}{2^i} \end{equation} \) - (1)
Note that the equation (1) resembles an Arithmetico-geometric Progression. To solve equation (1) let us consider the sum of elements of a geometric series in which \(r < 1\) is given by
\(\displaystyle \sum_{i=0}^{\infty} r^i = \frac {1} {1-r}\)
Now,
differentiating this equation with respect to \(r\), and then multiplying the result by \(r\), we get:
\(\displaystyle \sum_{i=0}^{\infty} i*r^{i-1} = \frac {1} {(1-r)^2}\)
\(\begin{equation} \displaystyle \sum_{i=0}^{\infty} i*r^{i} = \frac {r} {(1-r)^2} \end{equation}\) - (2)
Now, if we look at equations (1) and (2) closely, we can obtain the summation appearing in equation (1) from equation (2) by putting \(r = 1/2\). The difference is that ours is a sum up to h, while (2) goes on to infinity.
\(\displaystyle \sum_{i=0}^{\infty} \frac{i}{2^i} = \frac {1/2} {(1-1/2)^2} = \frac{4}{2} = 2\)
We note that (2) contains all the (positive) terms of the finite sum in (1), and more, yet it converges to the finite value 2. Hence the finite summation in equation (1) is always smaller than or equal to 2: the infinite sum in (2) gives us an upper bound. So we can say that:
\(T(n) = \displaystyle 2^h \sum_{i=0}^{h} \frac {i}{2^i} \leq2^{h+1}\)
Now according to our assumption \(n = 2^{h+1} -1\) i.e. \(2^{h+1} = n +1\) so
\(T(n) \leq n+1 \in O(n)\)
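The bound is easy to check numerically; the following sketch (written for this post) evaluates \(T(n)=\sum_{i=0}^{h} 2^{h-i}\, i\) exactly and compares it with \(2^{h+1}\):

```cpp
#include <cassert>

// Exact value of T(n) = sum_{i=0}^{h} 2^{h-i} * i for a full tree of
// height h; the derivation above bounds this by 2^{h+1}.
long long heapify_work(int h) {
    long long total = 0;
    for (int i = 0; i <= h; ++i)
        total += (1LL << (h - i)) * i;
    return total;
}
```

In fact the exact closed form is \(2^{h+1}-(h+2)\), so the bound is never tight but always holds.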
With this proof, we wrap up our discussion of Heap implementation. So in this post, we implemented
insertElement,
extractMin,
getMin and
bulkInsert for a Min-Heap data structure,
and also analyzed their running time complexity. All these methods can be changed easily if you want to implement a Max-Heap instead.
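For readers who want to compile and run the snippets, here is a self-contained sketch assembling the methods above into one class. The class skeleton (members, index helpers, constructor) is an assumption, since the tutorial does not list it, and memcpy is given a byte count rather than an element count:

```cpp
#include <cassert>
#include <cstring>

// Self-contained sketch of the MinHeap discussed in this tutorial.
class MinHeap {
    int *heap;
    int capacity;
    int heap_size;

    static int parent(int i) { return (i - 1) / 2; }
    static int left(int i) { return 2 * i + 1; }
    static int right(int i) { return 2 * i + 2; }
    static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    // Restore the heap property upwards after an insertion.
    void bubbleUp(int index) {
        while (index != 0 && heap[parent(index)] > heap[index]) {
            swap(&heap[parent(index)], &heap[index]);
            index = parent(index);
        }
    }
    // Restore the heap property downwards after a removal.
    void bubbleDown(int index) {
        int l = left(index), r = right(index), smallest = index;
        if (l < heap_size && heap[l] < heap[index]) smallest = l;
        if (r < heap_size && heap[r] < heap[smallest]) smallest = r;
        if (smallest != index) {
            swap(&heap[index], &heap[smallest]);
            bubbleDown(smallest);
        }
    }

public:
    explicit MinHeap(int cap) : heap(new int[cap]), capacity(cap), heap_size(0) {}
    ~MinHeap() { delete[] heap; }

    void insertElement(int key) {
        if (heap_size == capacity) return; // capacity full
        heap[heap_size++] = key;
        bubbleUp(heap_size - 1);
    }
    void bulkInsert(const int *a, int size) {
        std::memcpy(heap, a, size * sizeof(int)); // bytes, not element count
        heap_size = size;
        for (int i = parent(heap_size - 1); i >= 0; --i)
            bubbleDown(i);
    }
    int extractMin() {
        if (heap_size == 0) return 0;
        int data = heap[0];
        heap[0] = heap[--heap_size];
        if (heap_size > 0) bubbleDown(0);
        return data;
    }
    int size() const { return heap_size; }
};
```

Repeatedly extracting the minimum after a bulkInsert yields the elements in sorted order, which doubles as a test of all the methods at once.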
[1] CMSC 351 Algorithms, Fall 2011, course lecture notes
Definition:Skewes' Number
Definition
Skewes' number is: $10^{10^{10^{34} } }$
In Knuth notation this can be presented as:
$10 \uparrow \paren {10 \uparrow \paren {10 \uparrow 34} }$
Also known as
The name can also be seen presented as:
Skewes Number
Skewes's Number
Source of Name
This entry was named for Stanley Skewes.
Stanley Skewes deduced the number which now bears his name in $1933$.
Godfrey Harold Hardy referred to it as:
the largest number which has ever served any definite purpose in mathematics.
By way of comparison, the number of particles in the universe has been estimated at somewhere between $10^{80}$ and $10^{87}$.
However, Skewes' estimate has been reduced somewhat more recently.
For example, Hermanus Johannes Joseph te Riele has shown that there are many $n$ between $6 \cdotp 62 \times 10^{370}$ and $6 \cdotp 69 \times 10^{370}$ for which $\map \pi n$ exceeds $\displaystyle \int_2^n \frac {\d x} {\ln x}$.
Sources
1933: S. Skewes: On the difference $\map \pi x - \map \Li x$ (I) (J. London Math. Soc. Vol. 8, no. 4: 277 – 283)
Trivial Zeroes of Riemann Zeta Function
Theorem
Let $s \in \C$ be a zero of the Riemann zeta function $\zeta$ which does not lie in the critical strip $0 \le \map \Re s \le 1$.
Then:
$s \in \set {-2, -4, -6, \ldots}$
These are called the
trivial zeros of $\zeta$.
Proof
Therefore, the completed Riemann zeta function:
$\map \xi s = \dfrac 1 2 s \paren {s - 1} \pi^{-s/2} \map \Gamma {\dfrac s 2} \, \map \zeta s$
has the same zeroes as $\zeta$.
Additionally by Functional Equation for Riemann Zeta Function, we have $\map \xi s = \map \xi {1 - s}$ for all $s \in \C$.
Therefore if $\map \zeta s \ne 0$ for all $s$ with $\map \Re s > 1$ then also $\map \zeta s \ne 0$ for all $s$ with $\map \Re s < 0$.
Let us consider $\map \Re s > 1$.
We have:
$\displaystyle \map \zeta s = \prod_p \frac 1 {1 - p^{-s} }$
where here and in the following $p$ ranges over the primes.
Therefore, we have:
$\displaystyle \map \zeta s \prod_p \paren {1 - p^{-s} } = 1$
All of the factors of this infinite product can be found in the product:
$\displaystyle \prod_{n \mathop = 2}^\infty \paren {1 - n^{-s} }$
Hence:
$\displaystyle \prod_p \paren {1 - p^{-s} }$
converges absolutely, and so by the fact that:
$\displaystyle \map \zeta s \prod_p \paren {1 - p^{-s} } = 1$
we know $\map \zeta s$ cannot possibly be zero for any point in the region in question.
$\blacksquare$
Also see
Riemann Hypothesis: all zeroes in the critical strip fall on the critical line $\map \Re s = \tfrac 1 2$.
Latest revision as of 00:35, 5 June 2009
A
Kakeya set in [math]{\mathbb F}_3^n[/math] is a subset [math]E\subset{\mathbb F}_3^n[/math] that contains an algebraic line in every direction; that is, for every [math]d\in{\mathbb F}_3^n[/math], there exists [math]e\in{\mathbb F}_3^n[/math] such that [math]e,e+d,e+2d[/math] all lie in [math]E[/math]. Let [math]k_n[/math] be the smallest size of a Kakeya set in [math]{\mathbb F}_3^n[/math].
Clearly, we have [math]k_1=3[/math], and it is easy to see that [math]k_2=7[/math]. Using a computer, it is not difficult to find that [math]k_3=13[/math] and [math]k_4\le 27[/math]. Indeed, it seems likely that [math]k_4=27[/math] holds, meaning that in [math]{\mathbb F}_3^4[/math] one cannot get away with just [math]26[/math] elements.
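The values for small [math]n[/math] can be confirmed by brute force; the sketch below (plain C++, written for this page, not part of the wiki) exhausts all subsets of [math]{\mathbb F}_3^2[/math] and recovers [math]k_2=7[/math]:

```cpp
#include <bitset>
#include <cassert>

// Exhaustive search over all 2^9 subsets of F_3^2 (point (x,y) <-> bit 3x+y):
// returns the smallest size of a subset containing a line {e, e+d, e+2d} in
// each of the (3^2-1)/2 = 4 directions, i.e. the Kakeya number k_2.
int kakeya_k2() {
    const int dirs[4][2] = {{0, 1}, {1, 0}, {1, 1}, {1, 2}};
    int best = 9; // all of F_3^2 is trivially a Kakeya set
    for (int mask = 0; mask < (1 << 9); ++mask) {
        bool ok = true;
        for (int d = 0; d < 4 && ok; ++d) {
            bool found = false;
            for (int e = 0; e < 9 && !found; ++e) { // base point e = (e/3, e%3)
                bool line = true;
                for (int t = 0; t < 3; ++t) {
                    int x = (e / 3 + t * dirs[d][0]) % 3;
                    int y = (e % 3 + t * dirs[d][1]) % 3;
                    if (!(mask & (1 << (3 * x + y)))) line = false;
                }
                found = line;
            }
            ok = found; // need a full line in every direction
        }
        if (ok) {
            int size = (int)std::bitset<9>(mask).count();
            if (size < best) best = size;
        }
    }
    return best;
}
```

The same search scheme extends to [math]n=3,4[/math], though the subset enumeration must then be replaced by something smarter than brute force.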
Basic Estimates
Trivially, we have
[math]k_n\le k_{n+1}\le 3k_n[/math].
Since the Cartesian product of two Kakeya sets is another Kakeya set, the upper bound can be extended to
[math]k_{n+m} \leq k_m k_n[/math];
this implies, by Fekete's lemma applied to [math]\log k_n[/math], that [math]k_n^{1/n}[/math] converges to a limit as [math]n[/math] goes to infinity.
Lower Bounds
To each of the [math](3^n-1)/2[/math] directions in [math]{\mathbb F}_3^n[/math] there correspond at least three pairs of elements in a Kakeya set, determining this direction. Therefore, [math]\binom{k_n}{2}\ge 3\cdot(3^n-1)/2[/math], and hence
[math]k_n\ge 3^{(n+1)/2}.[/math]
One can derive essentially the same conclusion using the "bush" argument, as follows. Let [math]E\subset{\mathbb F}_3^n[/math] be a Kakeya set, considered as a union of [math]N := (3^n-1)/2[/math] lines in all different directions. Let [math]\mu[/math] be the largest number of lines that are concurrent at a point of [math]E[/math]. The number of point-line incidences is at most [math]|E|\mu[/math] and at least [math]3N[/math], whence [math]|E|\ge 3N/\mu[/math]. On the other hand, by considering only those points on the "bush" of lines emanating from a point with multiplicity [math]\mu[/math], we see that [math]|E|\ge 2\mu+1[/math]. Comparing the two last bounds one obtains [math]|E|\gtrsim\sqrt{6N} \approx 3^{(n+1)/2}[/math].
The better estimate
[math]k_n\ge (9/5)^n[/math]
is obtained in a paper of Dvir, Kopparty, Saraf, and Sudan. (In general, they show that a Kakeya set in the [math]n[/math]-dimensional vector space over the [math]q[/math]-element field has at least [math](q/(2-1/q))^n[/math] elements).
A still better bound follows by using the "slices argument". Let [math]A,B,C\subset{\mathbb F}_3^{n-1}[/math] be the three slices of a Kakeya set [math]E\subset{\mathbb F}_3^n[/math]. Form a bipartite graph [math]G[/math] with the partite sets [math]A[/math] and [math]B[/math] by connecting [math]a[/math] and [math]b[/math] by an edge if there is a line in [math]E[/math] through [math]a[/math] and [math]b[/math]. The restricted sumset [math]\{a+b\colon (a,b)\in G\}[/math] is contained in the set [math]-C[/math], while the difference set [math]\{a-b\colon (a,b)\in G\}[/math] is all of [math]{\mathbb F}_3^{n-1}[/math]. Using an estimate from [http://front.math.ucdavis.edu/math.CO/9906097 a paper of Katz-Tao], we conclude that [math]3^{n-1}\le\max(|A|,|B|,|C|)^{11/6}[/math], leading to [math]|E|\ge 3^{6(n-1)/11}[/math]. Thus,
[math]k_n \ge 3^{6(n-1)/11}.[/math]
Upper Bounds
We have
[math]k_n\le 2^{n+1}-1[/math]
since the set of all vectors in [math]{\mathbb F}_3^n[/math] such that at least one of the numbers [math]1[/math] and [math]2[/math] is missing among their coordinates is a Kakeya set.
This estimate can be improved using an idea due to Ruzsa (which seems to be unpublished). Namely, let [math]E:=A\cup B[/math], where [math]A[/math] is the set of all those vectors with [math]n/3+O(\sqrt n)[/math] coordinates equal to [math]1[/math] and the rest equal to [math]0[/math], and [math]B[/math] is the set of all those vectors with [math]2n/3+O(\sqrt n)[/math] coordinates equal to [math]2[/math] and the rest equal to [math]0[/math]. Then [math]E[/math], being of size just about [math](27/4)^{n/3}[/math] (which is not difficult to verify using Stirling's formula), contains lines in a positive proportion of directions: indeed, a typical direction [math]d\in {\mathbb F}_3^n[/math] can be represented as [math]d=d_1+2d_2[/math] with [math]d_1,d_2\in A[/math], and then [math]d_1,d_1+d,d_1+2d\in E[/math]. Now one can use the random rotations trick to get the rest of the directions in [math]E[/math] (losing a polynomial factor in [math]n[/math]).
Putting all this together, we seem to have
[math](3^{6/11} + o(1))^n \le k_n \le ( (27/4)^{1/3} + o(1))^n[/math]
or
[math](1.8207+o(1))^n \le k_n \le (1.8899+o(1))^n.[/math]
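As a quick numerical sanity check of the two bases in the last display:

```python
lower = 3 ** (6 / 11)         # base of the slices-argument lower bound
upper = (27 / 4) ** (1 / 3)   # base of the Ruzsa-construction upper bound
print(round(lower, 4), round(upper, 4))  # 1.8207 1.8899
```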
Here are some examples that I will go over in the midterm review. If you notice any typos or errors, feel free to let me know in the comments. You can add math symbols to your comments using LaTeX syntax surrounded by dollar signs. For example, the line
D\lim_{x \to \infty} f(x) = \frac{1}{2}D
with the D’s replaced by dollar signs becomes
$\lim_{x \to \infty} f(x) = \frac{1}{2}$.
1. Geometry of Systems of Equations
We know that for two-by-two linear systems of equations, the geometry is that of two lines that either intersect, are parallel, or are the same line. If they intersect then there is exactly one solution, if they are parallel then there are no solutions, and if they are the same line then there are infinitely many solutions. For three-by-three systems the situation is different: the solution set is either the empty set, a point, a line, or a whole plane. For four-by-four systems the geometry becomes four-dimensional and is hard to visualize, but the method is still useful.
2. The Algebra of Linear Systems
In this class we will perform algebra on linear systems in a new way. For example, for a three-by-three system, we line the equations up to form three rows and manipulate the rows to simplify the equations. There are three operations, called row operations, that we can perform:
Row Operations
1. We can multiply an entire row by a nonzero constant
\[cR_i \rightarrow R_i \]
2. We can interchange two rows.
\[R_i \leftrightarrow R_j\]
3. We can replace one row with that row + a multiple of another row
\[cR_j + R_i \rightarrow R_i\]
We follow these steps, which use row operations, to solve a system of equations:
1. Interchange rows so that the top left entry is nonzero (preferably a 1).
2. Multiply row one by the appropriate number to make the top left entry a 1.
3. Replace rows two and three with the appropriate multiples of row one added to rows two and three, so that the two bottom rows begin with 0.
4. Interchange the two bottom rows, if necessary, so that the middle entry of row two is nonzero.
5. Multiply row two appropriately so that its middle entry is a 1.
6. Use row two to make the middle entries of the top and bottom rows zero.
7. Multiply row three appropriately so that the bottom right entry is a 1.
8. Use row three to make the third entries of the top and middle rows zero.
Example
\[\begin{align} &2x &+y & &&= 4 \\ &x &- 2y &&+ z &= 0\\ &3x &-4y &&+ 2z &= 1 \end{align}\]
Solution:
1. \[\begin{align} &x &-2y &&+z &= 0 \\ &2x &+y &&&= 4 \\ &3x &-4y &&+2z &= 1 \end{align}\]
2. Already done.
3. We replace row two with row two - twice row one and replace row three with row three - three times row one.
\[\begin{align} &x &- 2y &&+ z &= 0 \\ && 5y &&-2z&= 4 \\ && 2y &&- z &= 1 \end{align}\]
4. Not needed
5. We divide row two by 5:
\[\begin{align} &x &-2y &&+z &= 0 \\ &&y &&-\dfrac{2}{5}z&= \dfrac{4}{5} \\ &&2y &&-z &= 1 \end{align}\]
6. We replace row 1 with row 1 + 2 row 2 and replace row three with row three - twice row two.
\[\begin{align} &x &&&+\dfrac{1}{5}z &= \dfrac{8}{5} \\ &&y &&-\dfrac{2}{5}z&= \dfrac{4}{5} \\ && &&- \dfrac{1}{5}z &= -\dfrac{3}{5} \end{align}\]
7. We multiply row three by -5
\[\begin{align} &x & &&+\dfrac{1}{5}z &= \dfrac{8}{5} \\ &&y &&-\dfrac{2}{5}z&= \dfrac{4}{5} \\ & & &&z &= 3 \end{align}\]
8. We replace row one with row one - \(\dfrac{1}{5}\) row three and replace row two with row two + \(\dfrac{2}{5}\) row three.
\[\begin{align} &x & && &= 1 \\ & &y && &= 2 \\ & & && z &= 3 \end{align}\]
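The eight steps above can be automated. The following sketch is a generic Gauss-Jordan elimination with partial pivoting (so its row interchanges may differ from the hand computation), applied to the same example:

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to reduced row-echelon form
    using the three row operations, then read off the solution."""
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = len(b)
    for col in range(n):
        # interchange rows so the pivot entry is nonzero (steps 1 and 4)
        pivot = col + int(np.argmax(np.abs(M[col:, col])))
        M[[col, pivot]] = M[[pivot, col]]
        # scale the pivot row so the pivot entry is 1 (steps 2, 5, and 7)
        M[col] = M[col] / M[col, col]
        # zero out the rest of the column (steps 3, 6, and 8)
        for row in range(n):
            if row != col:
                M[row] = M[row] - M[row, col] * M[col]
    return M[:, -1]

# the example system: 2x + y = 4, x - 2y + z = 0, 3x - 4y + 2z = 1
A = np.array([[2, 1, 0], [1, -2, 1], [3, -4, 2]])
b = np.array([4, 0, 1])
print(gauss_jordan(A, b))  # [1. 2. 3.]
```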
3. Exercises
Exercise
\[\begin{align} &4x &-3y &&-z & = 0 \\ &x &-3y &&+2z & = 7 \\ &3x &+9y &&-z & = -2 \end{align}\]
Exercise
\[\begin{align} &3x &-2y &&+z & = 3 \\ &4x &+y &&& = 1 \\ & &11y &&-4z & = -9 \end{align}\]
4. Application
Example
Your breakfast consists of orange juice, cereal, and eggs with the following nutritional information:
            OJ     Cereal   Eggs
Protein     0%     10%      20%
Vitamin C   20%    15%      0%
Calories    100    120      100
If you must have 30% protein, 30% Vitamin C, and 300 calories for your breakfast, how many servings of OJ, cereal, and eggs should you have?
Solution
Let
\(x = \text{ the number of servings of OJ}\)
\(y = \text{ the number of servings of Cereal}\)
\(z = \text{ the number of servings of eggs}\)
Then
\[\begin{align} &&10y &&+20z & = 30 \\ &20x &+15y && & = 30 \\ &100x &+120y &&+100z & = 300 \end{align}\]
We conclude, from the row reduction carried out in steps 1 through 8 below, that the breakfast should consist of 1.5 servings of OJ, no cereal, and 1.5 servings of eggs.
1.
\[\begin{align} &20x &+15y && & = 30 \\ &&10y &&+20z & = 30 \\ &100x &+120y &&+100z & = 300 \end{align}\]
2.
\[\begin{align} &x &+\dfrac{3}{4}y && & = \dfrac{3}{2} \\ &&10y &&+20z & = 30 \\ &100x &+120y &&+100z & = 300 \end{align}\]
3.
\[\begin{align} &x &+\dfrac{3}{4}y && & = \dfrac{3}{2} \\ &&10y &&+20z & = 30 \\ &&45y &&+100z & = 150 \end{align}\]
4. Already Done
5.
\[\begin{align} &x &+\dfrac{3}{4}y && & = \dfrac{3}{2} \\ &&y &&+2z & = 3 \\ &&45y &&+100z & = 150 \end{align}\]
6.
\[\begin{align} &x & && -\dfrac{3}{2}z& = -\dfrac{3}{4} \\ &&y &&+2z & = 3 \\ && &&10z & = 15 \end{align}\]
7.
\[\begin{align} &x & && -\dfrac{3}{2}z& = -\dfrac{3}{4} \\ &&y &&+2z & = 3 \\ && &&z & = \dfrac{3}{2} \end{align}\]
8.
\[\begin{align} &x & && & = \dfrac{3}{2} \\ &&y && & = 0 \\ && &&z & = \dfrac{3}{2} \end{align}\]
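The result can be double-checked with a linear solver (a sketch using NumPy):

```python
import numpy as np

# columns: OJ, cereal, eggs; rows: protein, vitamin C, calories
A = np.array([[0.0, 10.0, 20.0],
              [20.0, 15.0, 0.0],
              [100.0, 120.0, 100.0]])
b = np.array([30.0, 30.0, 300.0])
print(np.linalg.solve(A, b))  # servings: 1.5 OJ, 0 cereal, 1.5 eggs
```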
Larry Green (Lake Tahoe Community College)
Integrated by Justin Marshall.
This thread originated from MSE: Compact Approximation. It is meant as a lemma for: Approximation Property.
Given a Banach space $E$.
Denote compact domains by $\mathcal{C}$.
Denote compact operators by $\mathcal{C}(E)$.
Then there is a compact approximate identity: $$\forall C\in\mathcal{C}\,\exists C_N\in\mathcal{C}(E):\quad\|C_N-1\|_C:=\sup_{x\in C}\|C_Nx-x\|\stackrel{N\to\infty}\to0$$
How can one construct such compact operators?
During our knot-theory lecture we talked about the following theorem:
Given two finite presentations of the same group, one can be obtained from the other by a finite sequence of Tietze transformations.
This theorem is also given in the book by
Gilbert & Porter, Knots and Surfaces. There they give a proof with some exercises (which I also write down here), but I cannot get any further. Can someone help me solve the exercises so I can continue with the proof? Here are the questions:
1. Let $P=(X:R)$ and $Q=(X:R\cup S)$ be presentations of the groups $G$ and $H$ resp. with $f:P\rightarrow Q$ given by $f=id:F(X)\rightarrow F(X)$. Show that $f$ is a presentation mapping. What is the corresponding homomorphism $\theta:G\rightarrow H$? When is $\theta$ an isomorphism? (From my point of view this is trivial, right? The corresponding $\theta$ exists, is just the induced map, and it is an isomorphism if $S$ is a consequence of $R$?)
2. Let $Y$ be a set, $y\in Y$ and $X=Y-\{y\}$. Suppose $P=(X:R)$ and $Q=(Y:S)$ are presentations and $f:F(X)\rightarrow F(Y)$ is the homomorphism induced by the inclusion of $X$ into $Y$, with $f(R)\subseteq S$. Describe the corresponding group homomorphism $\theta$. Now suppose $S=f(R)\cup\{yw^{-1}\}$ ($w$ a word in the image of $f$). Prove that $\theta$ is an isomorphism. Hint: Define $g:F(Y)\rightarrow F(X)$ by $g(x)=x$ if $x\in X\subset Y$, $g(y)=w$. Prove that $g$ is a presentation mapping from $Q$ to $P$ and that the corresponding homomorphism is the inverse of $\theta$. (I have no idea how to start with this exercise.)
3. Let $P=(X:R)$ be a presentation of a group $G$ and let $\phi:F(X)\rightarrow G$ be an epimorphism corresponding to the presentation. Suppose $Z$ is a finite set disjoint from $X$. Pick a homomorphism $\theta:F(X\cup Z)\rightarrow F(X)$ such that $\theta(x)=x\ \forall x\in X$. Try to describe a set of normal generators for $Ker(\phi\theta)$. Of course $R\subset Ker(\phi\theta)$, but what other elements do we need? The case $Z=\{y\}$ is a good place to start (compare with the previous exercise). (Here I also don't know how to begin, because I don't know what I am supposed to do.)
I know that this is quite a lot of information, but can someone help me and/or give solutions? Thanks :)
Yes, this is perfectly possible.
The way you do this is by choosing a charge distribution which is 'completely dipolar' in some suitable sense, and this will produce an electric field which is also a pure dipole. In more technical terms, all you need to do is use a charge distribution with a separable angular dependence (i.e. $\rho(\mathbf r) = f(r) g(\theta,\phi)$) and then set that angular dependence to a suitable spherical harmonic.
The simplest example of this is a surface charge distribution spread over a sphere, of radius $a$, say, which has a dipolar dependence, i.e. the simple-as-day surface charge density$$\sigma(\theta,\phi) = \sigma_0 \cos(\theta),$$which looks like the image below, where red is positive and blue is negative, and the darker mesh lines are contours at constant increments of the surface charge density.
(I'm also partial to dipolar gaussian charges, with the volume charge density $\rho(\mathbf r) = A\, z\, e^{-r^2/a^2}$, but with that one the charge is never quite fully zero when away from the system.)
Once you do decide to look at this charge distribution, there are two main nice things that happen:
The electric field outside the sphere is exactly that of a pure point dipole, and the electric field inside the sphere is exactly uniform.
This can be shown via a few simple steps:
The point-dipole electric field $$\mathbf E_\mathrm{out}(\mathbf r) = {\frac {1}{4\pi \epsilon _{0}}}\frac {3(\mathbf {p} \cdot {\hat {\mathbf {r} }}){\hat {\mathbf {r} }}-\mathbf {p} }{r^{3}}$$ is a valid electric field (conservative and a solution of the Laplace equation) when away from the boundary. The uniform electric field $\mathbf E_\mathrm{in} (\mathbf r) = \mathbf E_0$ is obviously a valid electric field inside the sphere.
We take these two for granted (though if required they can be checked by direct differentiation), and this means that to show that the fields as claimed above correspond to our surface density we only need to show that they obey the right matching at the boundary. Thus, we have
The two fields match in a conservative fashion at the boundary, i.e. their components tangential to the surface are equal (so you cannot harvest free energy by doing small loops around the surface). To show this, we have$$\hat{\boldsymbol \theta}\cdot \mathbf E_\mathrm{in}(\mathbf r) = \hat{\boldsymbol \theta}\cdot \mathbf E_0 = (\hat{\boldsymbol \theta}\cdot \hat{\mathbf z})E_0 = -E_0 \sin(\theta),$$where the azimuthal component along $\hat{\boldsymbol \phi}$ is in both cases obviously zero, and\begin{align}\hat{\boldsymbol \theta}\cdot \mathbf E_\mathrm{out}(\mathbf r) &=\hat{\boldsymbol \theta}\cdot {\frac {1}{4\pi \epsilon _{0}}}\frac {3(\mathbf {p} \cdot {\hat {\mathbf {r} }}){\hat {\mathbf {r} }}-\mathbf {p} }{r^{3}}\\&=-\frac {1}{4\pi \epsilon_0}\frac{(\hat{\boldsymbol \theta}\cdot\hat{\mathbf z})\,p }{a^{3}}\\&=\frac {1}{4\pi \epsilon_0}\frac{p \sin(\theta) }{a^{3}},\end{align}using $\hat{\boldsymbol \theta}\cdot\hat{\mathbf z} = -\sin(\theta)$. Since the angular dependence matches, all we need to do is set $E_0 = -p/ 4\pi\epsilon_0a^3$.
Moreover, now is a good time to calculate the dipole moment in terms of the surface charge density: the $x$ and $y$ components vanish by symmetry, and the axial component gives\begin{align}p_z &= \int z \sigma(\theta,\phi) \mathrm dA= \int_0^\pi \int_0^{2\pi} a\cos(\theta) \sigma_0\cos(\theta) a^2 \sin(\theta) \mathrm d\phi \mathrm d\theta\\&= 2\pi a^3\sigma_0\int_0^{\pi} \cos^2(\theta) \sin(\theta)\mathrm d\theta= 2\pi a^3\sigma_0\int_{-1}^{1} u^2 \mathrm du\\&= \frac{4\pi}{3} a^3\sigma_0.\end{align}
The normal components of the fields differ by the surface charge density (or, in plainer terms, if you zoom in enough, the surface charge density works like an infinite plane of charge, pushing the electric field by $\sigma/2\epsilon_0$ away from it in either direction). To see this, we calculate$$\hat{\mathbf r}\cdot \mathbf E_\mathrm{in}(\mathbf r) = \hat{\mathbf r}\cdot \mathbf E_0 = (\hat{\mathbf r}\cdot \hat{\mathbf z})E_0 = E_0 \cos(\theta)$$and\begin{align}\hat{\mathbf r}\cdot \mathbf E_\mathrm{out}(\mathbf r) &=\hat{\mathbf r}\cdot {\frac {1}{4\pi \epsilon _{0}}}\frac {3(\mathbf {p} \cdot {\hat {\mathbf {r} }}){\hat {\mathbf {r} }}-\mathbf {p} }{r^{3}}\\&={\frac {1}{4\pi \epsilon _{0}}}\frac {2\mathbf {p} \cdot {\hat {\mathbf {r} }}}{a^{3}}\\&={\frac {1}{2\pi \epsilon _{0}}}\frac {p\cos(\theta)}{a^{3}},\end{align}and we confirm that $\hat{\mathbf r}\cdot \mathbf E_\mathrm{out}(\mathbf r)-\hat{\mathbf r}\cdot \mathbf E_\mathrm{in}(\mathbf r) =\frac{1}{\epsilon_0}\sigma(\theta,\phi)$, where we have\begin{align}\hat{\mathbf r}\cdot \mathbf E_\mathrm{out}(\mathbf r)-\hat{\mathbf r}\cdot \mathbf E_\mathrm{in}(\mathbf r)& = {\frac {1}{2\pi \epsilon _{0}}}\frac {p\cos(\theta)}{a^{3}}-E_0 \cos(\theta)\\ & =\left(\frac {p}{2\pi \epsilon_0a^3}- E_0\right)\cos(\theta)\\ & =\left(\frac {p}{2\pi \epsilon_0a^3}+\frac {p}{4\pi \epsilon_0a^3}\right)\cos(\theta)\\ & =\frac {3p}{4\pi \epsilon_0a^3}\cos(\theta)\\ & =\frac{1}{\epsilon_0}\sigma(\theta,\phi).\end{align}
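The dipole-moment integral can also be spot-checked numerically (a sketch; the radius and amplitude are arbitrary illustrative values):

```python
import numpy as np

a, sigma0 = 2.0, 3.0          # arbitrary sphere radius and charge amplitude
theta = np.linspace(0.0, np.pi, 20001)
# integrand of p_z = surface integral of z*sigma dA,
# with z = a cos(theta), sigma = sigma0 cos(theta), dA = a^2 sin(theta) dtheta dphi
f = (a * np.cos(theta)) * (sigma0 * np.cos(theta)) * a**2 * np.sin(theta)
dtheta = theta[1] - theta[0]
p_z = 2.0 * np.pi * np.sum((f[:-1] + f[1:]) / 2.0) * dtheta  # trapezoid rule
print(p_z, 4.0 * np.pi / 3.0 * a**3 * sigma0)  # the two values agree
```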
Thus, since the electric fields as given are full solutions everywhere for this charge distribution, they are the fields it generates.
The above text has a bit more detail than strictly necessary, in the interest of providing an essentially self-contained resource, but what did we learn? Well, it shows that whenever one is bothered by the presence of a point dipole field$$\mathbf E_\mathrm{out}(\mathbf r) = {\frac {1}{4\pi \epsilon _{0}}}\frac {3(\mathbf {p} \cdot {\hat {\mathbf {r} }}){\hat {\mathbf {r} }}-\mathbf {p} }{r^{3}},$$one can always conceptualize it as arising from this charge density, and all the conceptual issues fall away, without the need of tricky limits or unclear approximations.
Moreover, the method extends directly to any arbitrary multipolarity: do you want to conceptualize quadrupoles or octupoles? No problem: just plop in an appropriate solid harmonic as your $\sigma(\theta,\phi)$, and you will automatically have a charge density that produces the relevant multipolar field. This extends all the way to hexadecapoles and beyond (as showcased in Hexadecapole potential using point particles?), where methods like point-charge hypercubes give out.
The Mathematica code used to produce the image in this answer is available through
Import["http://halirutan.github.io/Mathematica-SE-Tools/decode.m"]["http://i.stack.imgur.com/IsKL8.png"]. |
I'm simulating a simple 3-node bar with convection BCs at the edges to validate my FEM code. The following data was used:
Initial temperature = 25 ºC
Temperature surrounding the rod = 10 ºC
Thermal conductivity = 0
Specific heat capacity = Density = Convection coefficient = 1
For time -> infinity the two nodes at the edges converged to 10 ºC (which was expected). However, the middle node should have maintained its initial value of 25 ºC, since there is no thermal conductivity; instead it rose to a higher temperature (32.5 ºC), and I have no clue why this is happening.
Edit1: I followed these steps:
1) Assuming the standard transient heat-equation:
$$\rho.c_p\ \frac{\partial T}{\partial t} - k\frac{\partial^2 T}{\partial x^2} = 0$$
2) After the formal FEM procedure and using backward-Euler time integration I arrived at the following formulation:
$$ \rho.c_p \int N_i\ N_j\ d\Omega\ \frac{T_j^{t+1}-T_j^t}{\Delta t} +\ k\int N'_i\ N'_j\ d\Omega\ \ T_j^{t+1}\ + h\int N_i\ N_j\ d\Gamma\ \ T_j^{t+1}\\= h\ T_{surrounding} \int N_i\ d\Gamma$$
$\rho:\ $Density
$c_p:\ $Specific heat capacity
$k:\ $ Thermal conductivity
$h:\ $Convective heat transfer coefficient
If $k\rightarrow \ 0$ then the problem described above appears.
Edit 2: The step-by-step procedure I followed to arrive at the equation presented at point 2 of Edit 1 was the following: Step 1) Multiply the PDE at point 1 by the weight function (w) and integrate over the domain:
$$\rho c_p \int \frac{\partial T}{\partial t} w\ d\Omega + \int \overrightarrow{\nabla}.(-k\overrightarrow{\nabla} T) w\ d\Omega = 0$$
Step 2) Apply divergence theorem on the second term at the Left Hand-Side of the equation:
$$\rho c_p \int \frac{\partial T}{\partial t} w\ d\Omega - k\oint (\overrightarrow{\nabla} T .\overrightarrow{n}) w\ d\Gamma + k\int \overrightarrow{\nabla} T . \overrightarrow{\nabla} w\ d\Omega = 0$$
or
$$\rho c_p \int \frac{\partial T}{\partial t} w\ d\Omega + k\int \overrightarrow{\nabla} T . \overrightarrow{\nabla} w\ d\Omega = \oint k(\overrightarrow{\nabla} T .\overrightarrow{n}) w\ d\Gamma$$
where $\overrightarrow{n}$ and $\overrightarrow{\nabla}$ denotes the outward normal vector at the boundary and the gradient vector, respectively.
Step 3) Using the following relation
$$k\overrightarrow{\nabla}T.\overrightarrow{n}\ + h(T-T_{\infty}) = 0$$
at the Right Hand-Side of the latter equation at Step 2, one arrives at:
$$\rho c_p \int \frac{\partial T}{\partial t} w\ d\Omega + k\int \overrightarrow{\nabla} T . \overrightarrow{\nabla} w\ d\Omega = \oint -h(T-T_{\infty}) w\ d\Gamma$$
Where $$T_{\infty} = T_{surrounding}$$ is the surrounding temperature.
Step 4) Rearranging the Right Hand-Side
$$\rho c_p \int \frac{\partial T}{\partial t} w\ d\Omega + k\int \overrightarrow{\nabla} T . \overrightarrow{\nabla} w\ d\Omega + \oint h\ T\ w\ d\Gamma = \oint h\ T_{\infty}\ w\ d\Gamma$$
and using $T = N_jT_j$, $w = N_i$ and assuming a Backward-Euler time integration scheme we'll have
$$\rho c_p \int N_iN_j\ d\Omega\ \frac{T_j^{t+1}-T_j^t}{\Delta t} + k\int \overrightarrow{\nabla}N_i.\overrightarrow{\nabla} N_j\ d\Omega\ T_j^{t+1} + h\oint N_i N_j\ d\Gamma\ T_j^{t+1} = \\h\oint T_{\infty} N_i\ d\Gamma$$
Or in matrix form
$$[M]\frac{T^{t+1}-T^t}{\Delta t} + [K]T^{t+1} + [H]T^{t+1} = F$$
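A minimal numerical sketch of this 3-node setup (assuming two unit-length elements and $\rho = c_p = h = 1$, so the assembled consistent mass matrix is $\frac{1}{6}\begin{pmatrix}2&1&0\\1&4&1\\0&1&2\end{pmatrix}$) reproduces the observed behavior. With $k=0$, the middle row of the semi-discrete system above reads $\frac{1}{6}\frac{d}{dt}(T_1+4T_2+T_3)=0$, so $T_1+4T_2+T_3=150$ is conserved exactly; as the edge nodes relax to 10 ºC, the middle node is forced to $(150-20)/4 = 32.5$ ºC. The consistent (non-diagonal) mass matrix couples the middle node to the edges even without conduction; a lumped (diagonal) mass matrix would keep it at 25 ºC.

```python
import numpy as np

# Two linear 1-D elements of unit length: nodes at x = 0, 1, 2.
# Element consistent mass matrix (1/6)[[2, 1], [1, 2]], assembled:
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 2.0]]) / 6.0
K = np.zeros((3, 3))                        # k = 0: no conduction
H = np.diag([1.0, 0.0, 1.0])                # h = 1, convection at the two end nodes
F = 1.0 * 10.0 * np.array([1.0, 0.0, 1.0])  # h * T_surrounding on the boundary

T = np.full(3, 25.0)                        # initial temperature
dt = 0.1
A = M / dt + K + H                          # backward-Euler system matrix
for _ in range(2000):                       # march to (numerical) steady state
    T = np.linalg.solve(A, M @ T / dt + F)

print(T)  # edges -> 10, middle -> 32.5
```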
In physics and astronomy, an N-body simulation is a simulation of a dynamical system of particles, usually under the influence of physical forces, such as gravity (see n-body problem). N-body simulations are widely used tools in astrophysics, from investigating the dynamics of few-body systems like the Earth-Moon-Sun system to understanding the evolution of the large-scale structure of the universe. [1] In physical cosmology, N-body simulations are used to study processes of non-linear structure formation such as galaxy filaments and galaxy halos from the influence of dark matter. Direct N-body simulations are used to study the dynamical evolution of star clusters.
Nature of the particles
The 'particles' treated by the simulation may or may not correspond to physical objects which are particulate in nature. For example, an N-body simulation of a star cluster might have a particle per star, so each particle has some physical significance. On the other hand, a simulation of a gas cloud cannot afford to have a particle for each atom or molecule of gas, as this would require on the order of 10^23 particles for each mole of material (see Avogadro constant), so a single 'particle' would represent some much larger quantity of gas (often implemented using Smoothed Particle Hydrodynamics). This quantity need not have any physical significance, but must be chosen as a compromise between accuracy and manageable computer requirements.
Direct gravitational N-body simulations
N-body simulation of 400 objects with parameters close to those of Solar System planets.
In direct gravitational N-body simulations, the equations of motion of a system of N particles under the influence of their mutual gravitational forces are integrated numerically without any simplifying approximations. These calculations are used in situations where interactions between individual objects, such as stars or planets, are important to the evolution of the system. The first direct N-body simulations were carried out by Sebastian von Hoerner at the Astronomisches Rechen-Institut in Heidelberg, Germany. Sverre Aarseth at the University of Cambridge (UK) has dedicated his entire scientific life to the development of a series of highly efficient N-body codes for astrophysical applications which use adaptive (hierarchical) time steps, an Ahmad-Cohen neighbour scheme and regularization of close encounters. Regularization is a mathematical trick to remove the singularity in the Newtonian law of gravitation for two particles which approach each other arbitrarily close. Sverre Aarseth's codes are used to study the dynamics of star clusters, planetary systems and galactic nuclei.
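As a toy illustration of the direct approach (a sketch only, unrelated to the production codes mentioned above; the kick-drift-kick leapfrog scheme and the softening length `eps`, a crude stand-in for regularization, are standard but illustrative choices):

```python
import numpy as np

def leapfrog(pos, vel, mass, dt, steps, G=1.0, eps=1e-3):
    """Direct-summation N-body integration: O(N^2) force evaluation per step,
    kick-drift-kick time stepping, softening length eps for close encounters."""
    def accel(p):
        d = p[None, :, :] - p[:, None, :]        # d[i, j] = p[j] - p[i]
        r2 = np.sum(d**2, axis=-1) + eps**2      # softened squared distances
        np.fill_diagonal(r2, np.inf)             # no self-force
        return G * np.sum(mass[None, :, None] * d / r2[..., None]**1.5, axis=1)

    a = accel(pos)
    for _ in range(steps):
        vel += 0.5 * dt * a                      # half kick
        pos += dt * vel                          # drift
        a = accel(pos)
        vel += 0.5 * dt * a                      # half kick
    return pos, vel
```

For example, two unit masses on a circular orbit of separation 1 (orbital speed sqrt(1/2) with G = 1) return to their starting positions after one period.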
Many simulations are large enough that the effects of general relativity in establishing a Friedmann-Lemaitre-Robertson-Walker cosmology are significant. This is incorporated in the simulation as an evolving measure of distance (or scale factor) in a comoving coordinate system, which causes the particles to slow in comoving coordinates (as well as due to the redshifting of their physical energy). However, the contributions of general relativity and the finite speed of gravity can otherwise be ignored, as typical dynamical timescales are long compared to the light crossing time for the simulation, and the space-time curvature induced by the particles and the particle velocities are small. The boundary conditions of these cosmological simulations are usually periodic (or toroidal), so that one edge of the simulation volume matches up with the opposite edge.
Calculation optimizations
N-body simulations are simple in principle, because they merely involve integrating the 6N ordinary differential equations defining the particle motions in Newtonian gravity. In practice, the number N of particles involved is usually very large (typical simulations include many millions; the Millennium simulation included ten billion) and the number of particle-particle interactions needing to be computed increases on the order of N^2, and so direct integration of the differential equations can be prohibitively computationally expensive. Therefore, a number of refinements are commonly used.
One of the simplest refinements is that each particle carries with it its own timestep variable, so that particles with widely different dynamical times don't all have to be evolved forward at the rate of that with the shortest time.
There are two basic approximation schemes to decrease the computational time for such simulations. These can reduce the computational complexity to O(N log N) or better.
Tree methods
In tree methods, such as a Barnes-Hut simulation, an octree is usually used to divide the volume into cubic cells, so that only particles from nearby cells need to be treated individually, while particles in distant cells can be treated as a single large particle centered at the cell's center of mass (or as a low-order multipole expansion). This can dramatically reduce the number of particle pair interactions that must be computed. To prevent the simulation from becoming swamped by computing particle-particle interactions, the cells must be refined to smaller cells in denser parts of the simulation which contain many particles per cell. For simulations where particles are not evenly distributed, the well-separated pair decomposition methods of Callahan and Kosaraju yield optimal O(n log n) time per iteration with fixed dimension.
Particle mesh method
Another possibility is the particle mesh method in which space is discretised on a mesh and, for the purposes of computing the gravitational potential, particles are assumed to be divided between the nearby vertices of the mesh. Finding the potential energy Φ is easy, because the Poisson equation
$$\nabla^2\Phi = 4\pi G\rho,$$
where $G$ is Newton's constant and $\rho$ is the density (number of particles at the mesh points), is trivial to solve by using the fast Fourier transform to go to the frequency domain where the Poisson equation has the simple form
$$\hat{\Phi} = -4\pi G\frac{\hat{\rho}}{k^2},$$
where $\vec{k}$ is the comoving wavenumber and the hats denote Fourier transforms. The gravitational field can now be found by multiplying by $\vec{k}$ and computing the inverse Fourier transform (or computing the inverse transform and then using some other method). Since this method is limited by the mesh size, in practice a smaller mesh or some other technique (such as combining with a tree or simple particle-particle algorithm) is used to compute the small-scale forces. Sometimes an adaptive mesh is used, in which the mesh cells are much smaller in the denser regions of the simulation.
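The Fourier-space step can be sketched as follows (assuming a periodic cubic mesh with the density already interpolated onto it; `poisson_fft` is an illustrative helper, and the zero mode is set to zero since a periodic box determines the potential only up to a constant):

```python
import numpy as np

def poisson_fft(rho, box_size, G=1.0):
    """Solve nabla^2 Phi = 4 pi G rho on a periodic cubic mesh via FFT."""
    n = rho.shape[0]
    rho_hat = np.fft.fftn(rho)
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)   # angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid division by zero at k = 0
    phi_hat = -4 * np.pi * G * rho_hat / k2
    phi_hat[0, 0, 0] = 0.0                 # zero-mean potential (DC mode)
    return np.real(np.fft.ifftn(phi_hat))
```

For a single Fourier mode the answer is exact: a density rho = cos(x) on a box of side 2 pi gives Phi = -4 pi G cos(x).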
Special-case optimizations
Several different gravitational perturbation algorithms are used to get fairly accurate estimates of the path of objects in the solar system.
People often decide to put a satellite in a frozen orbit. The path of a satellite closely orbiting the Earth can be accurately modeled starting from the 2-body elliptical orbit around the center of the Earth, and adding small corrections due to the oblateness of the Earth, gravitational attraction of the Sun and Moon, atmospheric drag, etc. It is possible to find a frozen orbit without calculating the actual path of the satellite.
The path of a small planet, comet, or long-range spacecraft can often be accurately modeled starting from the 2-body elliptical orbit around the sun, and adding small corrections from the gravitational attraction of the larger planets in their known orbits.
Some characteristics of the long-term paths of a system of particles can be calculated directly. The actual path of any particular particle does not need to be calculated as an intermediate step. Such characteristics include Lyapunov stability, Lyapunov time, various measurements from ergodic theory, etc.
Two-particle systems
Although there are millions or billions of particles in typical simulations, each simulation particle typically corresponds to a real mass that is very large, typically 10^9 solar masses. This can introduce problems with short-range interactions between the particles such as the formation of two-particle binary systems. As the particles are meant to represent large numbers of dark matter particles or groups of stars, these binaries are unphysical. To prevent this, a softened Newtonian force law is used, which does not diverge as the inverse-square radius at short distances. Most simulations implement this quite naturally by running the simulations on cells of finite size. It is important to implement the discretization procedure in such a way that particles always exert a vanishing force on themselves.
Incorporating baryons, leptons and photons into simulations
Many simulations simulate only cold dark matter, and thus include only the gravitational force. Incorporating baryons, leptons and photons into the simulations dramatically increases their complexity and often radical simplifications of the underlying physics must be made. However, this is an extremely important area and many modern simulations are now trying to understand processes that occur during galaxy formation which could account for galaxy bias.
Computational Complexity
Reif et al. [2] prove that the n-body reachability problem, defined as follows, is in PSPACE: given n bodies satisfying a fixed electrostatic potential law, determine whether a body reaches a destination ball within a given time bound, where poly(n) bits of accuracy are required and the target time is poly(n).
On the other hand, if the question is whether the body eventually reaches the destination ball, the problem is PSPACE-hard. These bounds are based on similar complexity bounds obtained for ray tracing.
See also
References
^ Trenti, Michele; Hut, Piet. "N-body simulations (gravitational)". Scholarpedia. Retrieved 25 March 2014.
^ "The Complexity of N-body Simulation".
Further reading
Bertschinger, Edmund (1998). "Simulations of structure formation in the universe". Annual Review of Astronomy and Astrophysics 36 (1): 599-654.
Binney, James; Tremaine, Scott (1987). Galactic Dynamics. Princeton University Press.
Callahan, Paul B.; Kosaraju, Sambasiva Rao (1992). "A decomposition of multidimensional point sets with applications to k-nearest-neighbors and n-body potential fields (preliminary version)". STOC '92: Proc. ACM Symp. Theory of Computing. ACM.
Equivalence of Definitions of Prime Ideal of Commutative and Unitary Ring

Theorem
Definition 1: A prime ideal of $R$ is a proper ideal $P$ such that: $\forall a, b \in R : a \circ b \in P \implies a \in P$ or $b \in P$
Definition 2: A prime ideal of $R$ is a proper ideal $P$ of $R$ such that: $I \circ J \subseteq P \implies I \subseteq P \text { or } J \subseteq P$ for all ideals $I$ and $J$ of $R$.
Definition 3: A prime ideal of $R$ is a proper ideal $P$ of $R$ such that: $\forall a, b \in R \setminus P: a \circ b \in R \setminus P$, that is, such that $R \setminus P$ is closed under $\circ$.

Proof
Let $\struct {R, +, \circ}$ be a commutative and unitary ring throughout.
$(1)$ implies $(2)$
Let $P$ be a prime ideal of $R$ by definition 1.
Then by definition:
$\forall a, b \in R : a \circ b \in P \implies a \in P$ or $b \in P$
Let $I \circ J \subseteq P$.
Aiming for a contradiction, suppose that both $I \not \subseteq P$ and $J \not \subseteq P$.
Then by definition of subset:
$\exists a \in I \setminus P, b \in J \setminus P$
But by definition of subset product
$a \circ b \in P$ as $I \circ J \subseteq P$
Thus we have $a, b \in R$ such that:
$a \circ b \in P$ where $a \notin P$ and $b \notin P$
which contradicts the defining property of $P$ as a prime ideal by definition 1.
Thus by Proof by Contradiction, either $I \subseteq P$ or $J \subseteq P$.
Thus $P$ is a prime ideal of $R$ by definition 2.
$\Box$
$(2)$ implies $(1)$
Let $P$ be a prime ideal of $R$ by definition 2.
Then by definition:
$I \circ J \subseteq P \implies I \subseteq P \text { or } J \subseteq P$
for all ideals $I$ and $J$ of $R$.
Aiming for a contradiction, suppose there exist $a, b \in R$ with $a \circ b \in P$ such that $a \notin P$ and $b \notin P$.
Let $I$ and $J$ be the ideals of $R$ generated by $a$ and $b$ respectively.
Every element of $I \circ J$ is of the form $(r \circ a) \circ (s \circ b) = (r \circ s) \circ (a \circ b)$ for some $r, s \in R$.
Since $a \circ b \in P$ and $P$ is an ideal, it follows that $I \circ J \subseteq P$.
Hence by definition 2, $I \subseteq P$ or $J \subseteq P$, and so $a \in P$ or $b \in P$.
But this contradicts the assertion that $a \notin P$ and $b \notin P$.
Thus by Proof by Contradiction:
$\forall a, b \in R : a \circ b \in P \implies a \in P$ or $b \in P$
Thus $P$ is a prime ideal of $R$ by definition 1.
$\Box$
$(1)$ implies $(3)$
Let $P$ be a prime ideal of $R$ by definition 1.
Then by definition:
$\forall a, b \in R : a \circ b \in P \implies a \in P$ or $b \in P$
That is, $R \setminus P$ is closed under $\circ$:
Aiming for a contradiction, suppose there exist $a, b \in R \setminus P$ such that $a \circ b \notin R \setminus P$.
Then $a \circ b \in P$, and so by definition 1, $a \in P$ or $b \in P$.
But this contradicts the assertion that $a, b \in R \setminus P$.
Thus for all $a, b \in R \setminus P$: $a \circ b \in R \setminus P$.
Thus $P$ is a prime ideal of $R$ by definition 3.
$\Box$
$(3)$ implies $(1)$
Let $P$ be a prime ideal of $R$ by definition 3.
Then by definition:
$\forall a, b \in R \setminus P: a \circ b \in R \setminus P$
Aiming for a contradiction, suppose there exist $a, b \in R$ with $a \circ b \in P$ such that $a \notin P$ and $b \notin P$.
Then:
$a, b \in R \setminus P$
by definition of relative complement.
That means, by definition 3:
$a \circ b \in R \setminus P$
But this contradicts the assertion that $a \circ b \in P$.
Thus by Proof by Contradiction either $a \in P$ or $b \in P$ (or both).
Thus $P$ is a prime ideal of $R$ by definition 1.
$\blacksquare$ |
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt {s_{NN}}$ = 5.02TeV with ALICE at the LHC
(Elsevier, 2017-11)
ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...
ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV
(Elsevier, 2017-11)
ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_T$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ... |
For any binomial distribution $B(n, p)$, the probability of $x$ successes in $n$ Bernoulli trials is $P(X = x) = {}^{n}C_x \, p^x q^{n-x}$, where $x = 0, 1, 2, \dots, n$ and $q = 1 - p$.

Given a binomial distribution $B\left(6, \frac{1}{2}\right) \rightarrow n = 6$ and $p = \frac{1}{2} \rightarrow q = \frac{1}{2}$

$P(X = x) = {}^{6}C_x \left(\frac{1}{2}\right)^x \left(\frac{1}{2}\right)^{6-x} = {}^{6}C_x \left(\frac{1}{2}\right)^6$
The most likely outcome is when $\large^{6}C_x$ is maximum.
We can calculate $\large^{6}C_x$ for $x = 0,1,2,3...6$ to see which yields the maximum value:
$\begin{matrix} x & 0 &1 &2 &3 &4 &5&6 \\ ^{6}C_x& 1&6 &15 &20 &15 &6 &1 \end{matrix}$
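The row of coefficients above can be checked with a few lines of Python:

```python
from math import comb

# B(6, 1/2): since p = q = 1/2, P(X = x) = C(6, x) * (1/2)^6
n, p = 6, 0.5
pmf = {x: comb(n, x) * p**n for x in range(n + 1)}
print([comb(n, x) for x in range(n + 1)])  # [1, 6, 15, 20, 15, 6, 1]

# Maximizing the probability is the same as maximizing C(6, x)
most_likely = max(pmf, key=pmf.get)
print(most_likely, pmf[most_likely])  # 3 0.3125
```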
The most likely outcome is the outcome whose probability is the highest.
Therefore, $X=3$ has the maximum of all the above values $\rightarrow P(X=3) = 20 \left(\frac{1}{2}\right)^6 = \frac{20}{64} = \frac{5}{16}$ |
I have an elementary question about equivariant loop spaces that I feel it should be well known.
Given a finite group $G$ and a finite $G$-set $J$ let $S^J=\mathbb{R}[J]^+$ be the permutation representation sphere, and let $\Omega^J$ and $\Sigma^J$ be the corresponding loop and suspension functors on pointed $G$-spaces. If $\alpha\colon K\to J$ is a surjective $G$-map of finite $G$-sets, there is a suspension map $$\alpha^\ast\colon \Omega^J\Sigma^JX\to \Omega^K\Sigma^KX$$ It is defined by smashing with the sphere of the canonical orthogonal complement of the inclusion $\alpha^\ast\colon \mathbb{R}[J]\to \mathbb{R}[K]$. My question is:
$$\mbox{When does the restriction of $\alpha^\ast$ on $G$-fixed points } \alpha^\ast\colon (\Omega^J\Sigma^JX)^G\to (\Omega^K\Sigma^KX)^G \mbox{ have a retraction? }$$
I'd be happy to know what happens when $J$ and $K$ are transitive, as these retractions should give the tom Dieck splitting.
Here's what I know: if $\alpha\colon K\to K/H$ is the projection onto the quotient of $K$ by a
normal subgroup $H\leq G$, a retraction exists. It is constructed as follows:$$(\Omega^K\Sigma^KX)^G=((\Omega^K\Sigma^KX)^H)^G\stackrel{res^H}{\longrightarrow}(\Omega^{K/H}\Sigma^{K/H}X^H)^G=(\Omega^{K/H}\Sigma^{K/H}X)^G$$where the middle map restricts an equivariant loop to the $H$-fixed-points. We are using the canonical isomorphism $\mathbb{R}[K]^H\cong\mathbb{R}[K/H]$. |
See edit and comments, this response might not be applicable to the question:
When performing regression you would tend to want your regressors to be of similar type, or at the very least range. Assuming you use log return for price changes I would recommend using the untransformed interest rate. The reason for this is that they are the same type of entity, rate of returns.
$R_t = \ln\frac{P_t}{P_{t-1}}$
$R_{t+1} = \theta_0 + \theta_1R_{t} + \theta_2I_{t} + \epsilon$
You can of course use more fancy transformations, but this would be the natural starting point. Personally I use an evolutionary algorithm to evolve the regressor transformations.
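As a sketch of the starting-point regression described above, with synthetic price and rate series standing in for real data (the series below are invented for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for real data: a price series and an interest rate series
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))
rates = 0.03 + rng.normal(0, 0.001, 500)

# Log returns: R_t = ln(P_t / P_{t-1})
R = np.diff(np.log(prices))

# Regress R_{t+1} on an intercept, R_t and the untransformed rate I_t
y = R[1:]
X = np.column_stack([np.ones(len(y)), R[:-1], rates[1:len(R)]])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta)  # [theta_0, theta_1, theta_2]
```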
Don't worry about the interest rate being always positive. If this matters at all it will be pushed in to the intercept weight.
Edit:
Given that the interest rate and data resolution you are looking at display tendencies to be non-stationary, I would retract my recommendation above. However, this does make me wonder if you have enough data, since I would intuitively expect interest rates not to trend in the long run.
In your shoes I might try evolutionary symbolic regression to transform the interest rate data, as discussed in the comment section. When doing this you could use your ADF test results as a fitness measure. The resulting transformation function can be used prior to your linear regression model. Remember to split into test and training datasets in order to detect overfitting. |
In this exercise, you will get familiar with the XAS_TDP method in CP2K. We will simulate the Aluminium K-edge XANES of trimethylaluminium with chemical formula Al$_2$(CH$_3$)$_6$:
The molecule is made of 26 atoms and we will use medium-quality double-zeta basis sets. This way, most of the exercises can be done with little to no parallelization of the calculations. All the concepts discussed below remain valid for larger systems, though.
You should create a new directory for each of the 3 parts of the exercise. Remember that the LR-TDDFT code is not yet included in the main CP2K code, hence you have to modify the
module load line in the
cp2k.sh file in order to load the correct version:
module load CP2K/xas_tdp_libint2
To make sure you are using the proper version, type
module unload CP2K/6.1-foss-2017b before launching any calculation.
The first step to all atomistic simulations is to get hold of the molecule/material structure. The latter can come from different sources: a careful geometry optimization, frames of a molecular dynamics run or experiments. In order to keep the focus on XANES, a geometry optimization was run ahead of time and the structure stored in the following
geometry.xyz file:
26 Trimethylaluminium structure Al 4.2675935634 5.5407111338 4.7089610588 Al 6.4608798572 4.2997143768 5.3967873091 C 5.1793535458 3.8631556857 3.7238051743 H 5.6698412406 4.2214449576 2.8092452108 H 4.1254815388 3.6491894229 3.4661776259 C 5.5489970130 5.9771338270 6.3820518382 H 5.1739704174 6.9992805163 6.1899010533 H 5.0579905439 5.6189992181 7.2963828304 H 5.5549255131 2.8411776275 3.9158135330 H 6.6028509125 6.1906122100 6.6401744970 C 8.1577533120 4.8462637633 4.5673348260 H 8.0626764249 5.7568375257 3.9554443238 H 8.5548879421 4.0566420550 3.9092516157 H 8.9349946502 5.0418279813 5.3241717501 C 6.3493279327 2.8490785738 6.7195500278 H 5.3140757011 2.6112574534 7.0097068947 H 6.8924547937 3.1044799443 7.6434085417 H 6.8018718123 1.9194670696 6.3386170476 C 4.3795530867 6.9915820838 3.3864761343 H 5.4149333409 7.2291739602 3.0966185981 H 3.9272669851 7.9212012690 3.7677052718 H 3.8365057581 6.7367174672 2.4624343308 C 2.5706506209 4.9941975061 5.5382826840 H 2.6657281632 4.0834902522 6.1499880352 H 1.7933806464 4.7988174956 4.7814409011 H 2.1735548094 5.7836967331 6.1965426058
Download/copy the above file into your working directory. This is an XYZ file, whose structure is quite simple: the first line states the number of atoms in the structure, the second line is reserved for a description and the rest contains the Cartesian coordinates in Angstroms.
To accurately describe the core states from which electrons are excited in XAS, all electron calculations are required. In CP2K, this means using the GAPW method. The following
part1_gs.inp input file contains all required keywords for such a calculation. Pay attention to the comments (starting with exclamation marks) explaining what is going on.
!This input file is meant for a ground state calculation of Al2(CH3)6 before XAS_TDP
&GLOBAL
  PROJECT part1            !project name, useful to keep track of produced files
  RUN_TYPE ENERGY
&END GLOBAL
&FORCE_EVAL
  METHOD QS
  &DFT
    !where to find all-electron basis sets and potentials
    BASIS_SET_FILE_NAME EMSL_BASIS_SETS
    POTENTIAL_FILE_NAME POTENTIAL
    &QS
      METHOD GAPW          !GAPW for all-electron
    &END QS
    &POISSON               !Necessary to define POISSON section in non-periodic
      PERIODIC NONE        !boundary conditions as the default assumes PBCs
      POISSON_SOLVER MT
    &END POISSON
    &SCF
      MAX_SCF 50
      EPS_SCF 1.0E-08      !high quality ground state required for XAS_TDP
      SCF_GUESS RESTART    !to avoid recomputing when we can
    &END SCF
    &MGRID
      CUTOFF 600
    &END
    &XC
      &XC_FUNCTIONAL       !the standard PBE functional, the LIBXC way
        &LIBXC
          FUNCTIONAL GGA_C_PBE
        &END LIBXC
        &LIBXC
          FUNCTIONAL GGA_X_PBE
        &END LIBXC
      &END XC_FUNCTIONAL
    &END XC
  &END DFT
  &SUBSYS
    &CELL
      ABC 10.0 10.0 10.0   !Big enough cell, we don't want density to spread out of it
      PERIODIC NONE        !Need to specify non-periodicity here too
    &END CELL
    &TOPOLOGY              !define the structure externally => reuse the same file all the way
      COORD_FILE_NAME geometry.xyz
      COORD_FILE_FORMAT xyz
      &CENTER_COORDINATES
      &END CENTER_COORDINATES
    &END TOPOLOGY
    &KIND H
      BASIS_SET Ahlrichs-pVDZ !Kinds are described by all-electron potentials and
      POTENTIAL ALL           !double-zeta quality all-electron basis sets
    &END KIND
    &KIND C
      BASIS_SET Ahlrichs-pVDZ
      POTENTIAL ALL
    &END
    &KIND Al
      BASIS_SET Ahlrichs-pVDZ
      POTENTIAL ALL
    &END
  &END SUBSYS
&END FORCE_EVAL
Download this input and place it into your working directory. Make sure that the
geometry.xyz file is also there. Because it is a relatively small system, we do not need to run CP2K in parallel. To launch the calculation, write in the terminal (after modifying the paths in the
cp2k.sh file):
sbatch cp2k.sh
Where
cp2k.sopt is the serial version of CP2K, the
-i flag indicates the input file and the
-o flag the output. You can follow the progress of your calculation by typing the following in the terminal (press
ctrl+
C to stop):
tail -f part1_gs.out
Once the calculation finishes, you will notice that a new file named
part1-RESTART.wfn was created. This file contains information about the converged ground state wave function, such that further calculations sharing the same project name will be able to skip the SCF convergence step and be much faster.
Note that in principle, you should make sure your ground state calculation is converged. For this exercise, we assume that it has been done ahead of time.
The next step in the workflow consists in verifying that we can find the proper donor core states, which are 2 Aluminium 1s in our case. To do that, we need to trigger a XAS_TDP calculation on top of our previously computed ground state with the
CHECK_ONLY keyword. The latter makes sure that the program only analyzes the potential donor molecular orbitals (MOs) and stops before doing any expensive operation. The next input file does just that:
&GLOBAL
  PROJECT part1     !This is important to keep the same project name to be able to RESTART
  RUN_TYPE ENERGY
&END GLOBAL
&FORCE_EVAL
  METHOD QS
  &DFT
    BASIS_SET_FILE_NAME EMSL_BASIS_SETS
    POTENTIAL_FILE_NAME POTENTIAL
    &QS
      METHOD GAPW
    &END QS
    &POISSON
      PERIODIC NONE
      POISSON_SOLVER MT
    &END POISSON
    &SCF
      MAX_SCF 50
      EPS_SCF 1.0E-08
      SCF_GUESS RESTART
    &END SCF
    &MGRID
      CUTOFF 600
    &END
    &XC
      &XC_FUNCTIONAL
        &LIBXC
          FUNCTIONAL GGA_C_PBE
        &END LIBXC
        &LIBXC
          FUNCTIONAL GGA_X_PBE
        &END LIBXC
      &END XC_FUNCTIONAL
    &END XC
    !The section controlling the XAS calculations
    &XAS_TDP
      CHECK_ONLY    !in order to verify our donor MOs
      &DONOR_STATES
        DEFINE_EXCITED BY_KIND
        KIND_LIST Al
        STATE_TYPES 1s
        N_SEARCH 2  !We know that Al 1s are the 2 MOs with the lowest energy
        #LOCALIZE   !remove the # to uncomment LOCALIZE
      &END DONOR_STATES
    &END XAS_TDP
  &END DFT
  &SUBSYS
    &CELL
      ABC 10.0 10.0 10.0
      PERIODIC NONE
    &END CELL
    &TOPOLOGY
      COORD_FILE_NAME geometry.xyz
      COORD_FILE_FORMAT xyz
      &CENTER_COORDINATES
      &END CENTER_COORDINATES
    &END TOPOLOGY
    &KIND H
      BASIS_SET Ahlrichs-pVDZ
      POTENTIAL ALL
    &END KIND
    &KIND C
      BASIS_SET Ahlrichs-pVDZ
      POTENTIAL ALL
    &END
    &KIND Al
      BASIS_SET Ahlrichs-pVDZ
      POTENTIAL ALL
    &END
  &END SUBSYS
&END FORCE_EVAL
You can download/copy the above file in your working directory. Note that the project name is the same as in
part1_gs.inp; this ensures that we can restart and skip the ground state calculation. Only a minimal &XAS_TDP section was added, with the CHECK_ONLY keyword and the &DONOR_STATES subsection. Note that the LOCALIZE keyword is commented out for now. Adapt the paths in
cp2k.sh and run this input.
Once the calculation is over, open
part1_check.out and look for
XAS_TDP. This will lead you to the relevant part of the output file, in which you can check the quality of the donor MOs.
Remember: for the theory to apply correctly, the donor states need to be local in space and of 1s character. Look at the
overlap and
charge indicators: are we good to proceed?
No, we are not, because the Aluminium atoms are equivalent under symmetry and the 2 lowest energy MOs will be arbitrary linear combinations of the 1s states. To get the needed donor state quality, uncomment the LOCALIZE keyword in the input file and rerun.
Now that we have a proper ground state and we know that the donor MOs are of good quality, we can run an actual XANES calculation on top. To do so, we have to complete the &XAS_TDP section to include information about the integration grid, the dipole form and, most importantly, the kernel:
&GLOBAL
  PROJECT part1
  RUN_TYPE ENERGY
&END GLOBAL
&FORCE_EVAL
  METHOD QS
  &DFT
    BASIS_SET_FILE_NAME EMSL_BASIS_SETS
    POTENTIAL_FILE_NAME POTENTIAL
    AUTO_BASIS RI_XAS MEDIUM    !automatically generates an RI basis
    &QS
      METHOD GAPW
    &END QS
    &POISSON
      PERIODIC NONE
      POISSON_SOLVER MT
    &END POISSON
    &SCF
      MAX_SCF 50
      EPS_SCF 1.0E-08
      SCF_GUESS RESTART
    &END SCF
    &MGRID
      CUTOFF 600
    &END
    &XC
      &XC_FUNCTIONAL
        &LIBXC
          FUNCTIONAL GGA_C_PBE
        &END LIBXC
        &LIBXC
          FUNCTIONAL GGA_X_PBE
        &END LIBXC
      &END XC_FUNCTIONAL
    &END XC
    !The section controlling the XAS calculations
    &XAS_TDP
      CHECK_ONLY F              !set CHECK_ONLY to false for actual computation
      &DONOR_STATES
        DEFINE_EXCITED BY_KIND
        KIND_LIST Al
        STATE_TYPES 1s
        N_SEARCH 2
        LOCALIZE
      &END DONOR_STATES
      TAMM_DANCOFF              !For a full-TDDFT calculation, set TAMM_DANCOFF F
      DIPOLE_FORM LENGTH        !LENGTH or VELOCITY, it does not really matter
      GRID Al 100 100           !100x100 grid rather small but enough here
      &KERNEL
        !In principle, one could use a different functional for the kernel,
        !but to be consistent one usually uses the same as for the ground
        !state. Only LIBXC functionals are available.
        &XC_FUNCTIONAL
          &LIBXC
            FUNCTIONAL GGA_C_PBE
          &END LIBXC
          &LIBXC
            FUNCTIONAL GGA_X_PBE
          &END LIBXC
        &END XC_FUNCTIONAL
      &END KERNEL
    &END XAS_TDP
  &END DFT
  &SUBSYS
    &CELL
      ABC 10.0 10.0 10.0
      PERIODIC NONE
    &END CELL
    &TOPOLOGY
      COORD_FILE_NAME geometry.xyz
      COORD_FILE_FORMAT xyz
      &CENTER_COORDINATES
      &END CENTER_COORDINATES
    &END TOPOLOGY
    &KIND H
      BASIS_SET Ahlrichs-pVDZ
      POTENTIAL ALL
    &END KIND
    &KIND C
      BASIS_SET Ahlrichs-pVDZ
      POTENTIAL ALL
    &END
    &KIND Al
      BASIS_SET Ahlrichs-pVDZ
      POTENTIAL ALL
    &END
  &END SUBSYS
&END FORCE_EVAL
You can run this XAS calculation the usual way.
Note that again, the program reuses the previous ground state calculation. The spectral data can be found in the
part1.spectrum file. The excitation energies are given in eV and the oscillator strengths are within the dipole approximation (arbitrary units). There are two blocks of data, one for each Al 1s. Note that the values are practically the same since the Al atoms are indistinguishable.
If you wish, you can plot the XANES spectrum using this python script. Make sure the script is located in the same directory as your
part1.spectrum file and type:
Note: before using this script, you have to load a proper python installation by typing:
module load Anaconda3/
python plot_spectrum.py part1.spectrum Al 1s 1.0 1500 1525 1000
Where
Al and
1s tell the script which data to read in
part1.spectrum,
1.0 is the width for the gaussian broadening, 1500 and 1525 are the x-axis (energy) bounds and 1000 is the resolution (1000 points). You can read the first few lines of the script for documentation.
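The plot_spectrum.py script itself is not reproduced here, but the core operation such a script performs is a Gaussian broadening of the discrete excitation lines. Below is a minimal sketch; treating the width argument as a FWHM is an assumption, as the real script may define it differently:

```python
import numpy as np

def broaden(energies, strengths, width, emin, emax, npts):
    """Sum of Gaussians centred on each excitation line, scaled by its strength."""
    grid = np.linspace(emin, emax, npts)
    sigma = width / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> standard deviation (assumption)
    spectrum = np.zeros_like(grid)
    for e, f in zip(energies, strengths):
        spectrum += f * np.exp(-((grid - e) ** 2) / (2 * sigma**2))
    return grid, spectrum

# Toy lines standing in for a parsed part1.spectrum file
grid, spec = broaden([1505.0, 1510.0], [1.0, 0.5], 1.0, 1500, 1525, 1000)
print(grid[spec.argmax()])  # peak sits at the strongest line, ≈ 1505 eV
```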
Remember that TDDFT results usually need to be shifted to match experiments (either by an empirical amount or by a ΔSCF calculation). If you have time, feel free to:
It is generally accepted that standard GGA-level TDDFT is not enough for accurately describing X-ray absorption spectroscopy. Instead, most calculations in the literature are done at the hybrid level. In this case, a fraction of the exchange part of the XC functional is replaced by the so-called exact exchange or Hartree-Fock exchange:
$$ E_{xc}[n] = E_c^{DFT}[n] + (1-s_x)E_x^{DFT}[n] + s_x E_x^{HF}[n] $$
where $s_x$ is the fraction of Hartree-Fock exchange. This exact exchange notably allows to catch non-local effects, which cannot be described by a semi-local description such as GGA.
In this part of the exercise, we will simulate the Aluminium K-edge of Al$_2$(CH$_3$)$_6$ at the hybrid level, using the BHandHLYP functional. Note that such calculations are significantly more expensive and it is recommended that you run CP2K in parallel.
&GLOBAL
  PROJECT hybrid
  RUN_TYPE ENERGY
&END GLOBAL
&FORCE_EVAL
  METHOD QS
  &DFT
    BASIS_SET_FILE_NAME EMSL_BASIS_SETS
    POTENTIAL_FILE_NAME POTENTIAL
    AUTO_BASIS RI_XAS LARGE     !want high precision
    &QS
      METHOD GAPW
    &END QS
    &POISSON
      PERIODIC NONE
      POISSON_SOLVER MT
    &END POISSON
    &SCF
      MAX_SCF 50
      EPS_SCF 1.0E-08
      SCF_GUESS RESTART
    &END SCF
    &MGRID
      CUTOFF 600
    &END
    &XC
      &XC_FUNCTIONAL            !The BHandHLYP functional, popular with XAS TDDFT
        &LIBXC
          FUNCTIONAL HYB_GGA_XC_BHandHLYP
        &END LIBXC
      &END XC_FUNCTIONAL
      &HF
        FRACTION 0.5            !the functional requires 50% of exact exchange
      &END HF
    &END XC
    !The section controlling the XAS calculations
    &XAS_TDP
      &DONOR_STATES
        DEFINE_EXCITED BY_KIND
        KIND_LIST Al
        STATE_TYPES 1s
        N_SEARCH 2
        LOCALIZE
      &END DONOR_STATES
      TAMM_DANCOFF
      DIPOLE_FORM LENGTH
      GRID Al 300 300           !Denser grid for integration
      &KERNEL
        NPROCS_GRID 4           !Use 4 procs per grid, ignored if serial run
        RI_RADIUS 3.0           !Use also neighbor's RI basis functions for density projection
        &XC_FUNCTIONAL
          &LIBXC
            FUNCTIONAL HYB_GGA_XC_BHandHLYP
          &END LIBXC
        &END XC_FUNCTIONAL
        &EXACT_EXCHANGE
          SCALE 0.5
        &END EXACT_EXCHANGE
      &END KERNEL
    &END XAS_TDP
  &END DFT
  &SUBSYS
    &CELL
      ABC 10.0 10.0 10.0
      PERIODIC NONE
    &END CELL
    &TOPOLOGY
      COORD_FILE_NAME geometry.xyz
      COORD_FILE_FORMAT xyz
      &CENTER_COORDINATES
      &END CENTER_COORDINATES
    &END TOPOLOGY
    &KIND H
      BASIS_SET Ahlrichs-def2-TZVP !moved to higher quality triple zeta basis sets
      POTENTIAL ALL
    &END KIND
    &KIND C
      BASIS_SET Ahlrichs-def2-TZVP
      POTENTIAL ALL
    &END
    &KIND Al
      BASIS_SET Ahlrichs-def2-TZVP
      POTENTIAL ALL
    &END
  &END SUBSYS
&END FORCE_EVAL
Create a new directory for this part of the exercise and download the above input file into it. Make also sure that you copy the
geometry.xyz file from the previous part in there. Have a look at the changes made in the input and their corresponding comments.
In particular, notice how the new keyword RI_RADIUS was added in the &KERNEL subsection. This defines a sphere around the excited atoms in which all RI basis functions contribute to the projection of the density. Remember:
$$ n(\mathbf{r}) \approx \sum_{pq}\sum_{\mu\nu} P_{pq} \ \big(\varphi_p \varphi_q \chi_\mu \big) \ S_{\mu\nu}^{-1} \ \chi_\nu(\mathbf{r}) $$
where $P_{pq}$ is the density matrix, $\varphi_p,\varphi_q$ are atomic orbitals and $\chi_\mu, \chi_\nu$ are RI basis functions. Extending RI_RADIUS allows for a greater number of RI basis functions and a higher-quality projection of the density, to which hybrid kernels have been shown to be very sensitive.
With the keyword NPROCS_GRID set to 4, you can run the calculation on 8 cores and have simultaneous integration of the XC kernel for both Al atoms using all available resources:

cp2k.popt -i hybrid.inp -o hybrid.out &
If you copy the
plot_spectrum.py script from part one in your working directory, you can use it the same way to plot the XANES spectrum:
python plot_spectrum.py hybrid.spectrum Al 1s 1.0 1545 1570 1000

If you have time, feel free to:
If you have enough time or a particular interest in L-edge spectroscopy, you can try and setup a calculation for it.
You should first create a new directory for this part. Then, starting from either
part1_xas.inp or
hybrid.inp, you can tweak the input file to look for 2s and/or 2p donor states by modifying the &DONOR_STATES subsection:
&DONOR_STATES
  DEFINE_EXCITED BY_KIND
  KIND_LIST Al
  STATE_TYPES 2s  !Note that the STATE_TYPES keyword can be used multiple times such
  STATE_TYPES 2p  !that both 2s and 2p excitations are done in a single calculation
  N_SEARCH -1     !We do not know exactly where are the 2s/2p states in terms of
  LOCALIZE        !energy => N_SEARCH -1 will look among all occupied MOs
&END DONOR_STATES
Once you have copied the above in your input file, you can launch your calculation as usual. Make sure that the
geometry.xyz file is present in the same directory too. The
plot_spectrum.py script can be used again.
Note that if you chose to do the hybrid calculation, you might have to increase the integration grid density and the RI_RADIUS even further to obtain converged results. |
Defining parameters
Level: \( N \) = \( 4000 = 2^{5} \cdot 5^{3} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 4000.bk (of order \(20\) and degree \(8\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 400 \)
Character field: \(\Q(\zeta_{20})\)
Newforms: \( 0 \)
Sturm bound: \(600\)
Trace bound: \(0\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{1}(4000, [\chi])\).
                   Total   New   Old
Modular forms        160     0   160
Cusp forms             0     0     0
Eisenstein series    160     0   160
The following table gives the dimensions of subspaces with specified projective image type.
\(D_n\) \(A_4\) \(S_4\) \(A_5\) Dimension 0 0 0 0 |
I want to use $ \log(x^3) < x^2 $ so I compute:
$$\lim_{x\to+\infty}\frac{\log(x^3)}{x^2}$$
and with L'Hôpital showed it is equal to $0$; this implies that from a certain point on, the function $x^2$ is "stronger". Is that enough?
Do you only need the asymptotic behaviour? See comments.
If you want to show that $\log(x^3) < x^2$ holds
for all $x$, checking the behaviour for $x \to +\infty$ is not enough (*).
Consider the function ($x>0$): $$f(x)=x^2-\log(x^3) = x^2-3\log x$$ The derivative becomes $0$ at $x = \sqrt{3/2}$ and switches from negative to positive, so $f$ attains a minimum there. It is a global minimum so you have for all $x>0$: $$x^2-\log(x^3) \ge \underbrace{f(\sqrt{3/2})}_{\approx 0.89} > 0 \Rightarrow x^2 > \log(x^3)$$
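The location and value of this minimum can also be checked numerically; a small sketch:

```python
import numpy as np

# f(x) = x^2 - 3 ln x; its minimum should sit at x = sqrt(3/2) with value ≈ 0.89
x = np.linspace(0.01, 10, 200_000)
f = x**2 - 3 * np.log(x)
i = f.argmin()
print(x[i], f[i])  # ≈ 1.2247, ≈ 0.8918 — positive, so x^2 > log(x^3) everywhere
```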
(*) E.g. $\tfrac{x+2}{x^2} \to 0$ so $x^2 > x+2$ for sufficiently large $x$, but $x^2 \le x+2$ for $-1 \le x \le 2$. |
Given a recurrence of the form $\forall n,m.\ \ T(n,m)=\begin{cases}1 & m=1\\\sum_i T(n_i,m_i) & \text{else}\end{cases}$
Note: both $n_i$ and $m_i$ are dependent on $n,m$ so they should have been written above as $n_{i,n,m}$ and $m_{i,n,m}$, but they are written above as $n_i$ and $m_i$ as an abbreviation for more readability.
Let us assume that $T$ is non-decreasing w.r.t $n$, and non-increasing w.r.t $m$.
I am looking for a
useful, necessary and sufficient condition so that $\exists k. T(n,m)=O\bigg(\Big(\log(m)+n\Big)^k\bigg)$
The importance of such conditions stems from their applicability in concluding super-polynomial lower bounds on complex recurrences of the above-mentioned form. |
I have the equation of an ellipse centred at the origin and inclined to the coordinate axes: $$ \frac{(x\cos\theta + y\sin\theta)^2}{a^2} + \frac{(y\cos\theta - x\sin\theta)^2}{b^2} = 1 $$ In order to find the rotation angle I know that: $$ \tan(2\theta)=\frac B{A-C}\ $$ Now my problem is to obtain the forms of A, B and C for the latter equation, from the former. Any help will be very welcome.
All you need is to explicitly write your first equation by expanding the squares and grouping terms for $x'^2$, $y'^2$, and $x'y'$. Their coefficients are $A$, $C$, and $B$. You will be surprised to see $$\tan(2\theta)=\frac{2\sin\theta\cos\theta}{\cos^2\theta-\sin^2\theta}$$ :)
In the world coordinates you have $$A x^2 + B y^2 + C x y - 1 = 0$$ and you want to find the parameters of the ellipse in a coordinate system that is aligned with the major/minor axes. We rotate the local coordinates by $\theta$ with:
$$ x = x' \cos\theta -y' \sin\theta \\ y = x' \sin\theta + y' \cos\theta $$
If you plug these into the ellipse equation, the terms multiplying $x' y'$ give $$2 C \cos^2 \theta + 2 (B-A) \sin \theta \cos\theta - C = 0$$ $$C \cos(2 \theta)+(B-A) \sin(2 \theta) = 0$$ $$\tan (2 \theta) = \frac{C}{A-B}$$
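As a numerical sanity check of this relation (using this answer's naming, $A x^2 + B y^2 + C x y = 1$, with hypothetical values for $a$, $b$, $\theta$):

```python
import numpy as np

a, b, theta = 3.0, 1.5, 0.4  # hypothetical ellipse parameters

# Expand (x cosθ + y sinθ)²/a² + (y cosθ − x sinθ)²/b² = 1 and
# collect the coefficients of x², y² and xy:
A = np.cos(theta) ** 2 / a**2 + np.sin(theta) ** 2 / b**2
B = np.sin(theta) ** 2 / a**2 + np.cos(theta) ** 2 / b**2
C = 2 * np.sin(theta) * np.cos(theta) * (1 / a**2 - 1 / b**2)

# Recover the rotation angle from tan(2θ) = C / (A − B)
# (valid as written since 2θ lies in (−π/2, π/2) here)
recovered = 0.5 * np.arctan(C / (A - B))
print(recovered)  # 0.4
```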
Use this angle to get the aligned ellipse
$$ \left( \tfrac{\sqrt{ (A-B)^2+C^2}+A+B}{2} \right) x'^2 + \left( \tfrac{-\sqrt{(A-B)^2+C^2}+A+B}{2} \right) y'^2 = 1 $$ |
Let us assume that $N$ stars have ever been born in the Milky Way galaxy, with masses between $0.1$ and $100 M_{\odot}$. Next, assume that stars are born with a mass distribution that approximates the Salpeter mass function, $n(m) \propto m^{-2.3}$. Then assume that all stars with mass $m>25M_{\odot}$ end their lives as black holes.
So, if $n(m) = Am^{-2.3}$, then$$N = \int^{100}_{0.1} A m^{-2.3}\ dm$$and thus $A=0.065N$.
The number of black holes created will be $$N_{BH} = \int^{100}_{25} Am^{-2.3}\ dm = 6.4\times10^{-4} N,$$ i.e. about 0.06% of stars in the Galaxy become black holes. NB: The finite lifetime of the Galaxy is irrelevant here because it is much longer than the lifetimes of black hole progenitors.
Now, I follow the other answers by scaling to the number of stars in the solar neighbourhood, which is approximately 1000 in a sphere of 15 pc radius $\simeq 0.07$ pc$^{-3}$. I assume that as stellar lifetime scales as $M^{-2.5}$ and the Sun's lifetime is about the age of the Galaxy, almost all the stars ever born are still alive. Thus, the black hole density is $\simeq 4.5 \times 10^{-5}$ pc$^{-3}$ and so there is one black hole within 18 pc.
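These numbers are easy to reproduce, since the Salpeter integrals are analytic:

```python
import math

# Salpeter integrals are analytic: ∫ m^-2.3 dm = -m^-1.3 / 1.3
def salpeter_integral(lo, hi):
    return (lo**-1.3 - hi**-1.3) / 1.3

A = 1 / salpeter_integral(0.1, 100)        # normalisation, per star ever born
f_bh = A * salpeter_integral(25, 100)      # fraction of stars that become black holes
n_bh = f_bh * 0.07                         # black holes per pc^3, with 0.07 stars/pc^3
r = (3 / (4 * math.pi * n_bh)) ** (1 / 3)  # radius of a sphere containing one black hole
print(A, f_bh, r)  # ≈ 0.065, ≈ 6.4e-4, ≈ 17.5 pc
```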
OK, so why might this number be wrong? Although the number is very
insensitive to the assumed upper mass limit of stars, it is very sensitive to the assumed lower mass limit. This could be higher or lower depending on the very uncertain details of the late stellar evolution and mass-loss from massive stars. This could drive our answer up or down.
Some fraction $f$ of these black holes will merge with other black holes or will escape the Galaxy due to "kicks" from a supernova explosion or interactions with other stars in their dense, clustered birth environments (though not all black holes require a supernova explosion for their creation). We don't know what this fraction is, but it
increases our answer by a factor $(1-f)^{-1/3}$.
Even if they don't escape, it is highly likely that black holes will have a much higher velocity dispersion and hence spatial dispersion above and below the Galactic plane compared with "normal" stars. This is especially true considering most black holes will be very old, since most star formation (including massive star formation) occurred early in the life of the Galaxy, and black hole progenitors die very quickly. Old stars (and black holes) have their kinematics "heated" so that their velocity and spatial dispersions increase.
I conclude that black holes will therefore be under-represented in the solar neighbourhood compared with the crude calculations above and so you should treat the 18pc as a lower limit to the
expectation value, although of course it is possible (though not probable) that a closer one could exist. |
What is a Blackbody?
A black body is an idealization in physics: a body that absorbs all electromagnetic radiation incident on it, irrespective of its frequency or angle of incidence. By the second law of thermodynamics, a body always tends toward thermal equilibrium with its surroundings.
To stay in thermal equilibrium, a black body must emit radiation at the same rate as it absorbs it, and so it must also be a good emitter of radiation, emitting electromagnetic waves at as many frequencies as it can absorb, i.e. all frequencies.
What is Blackbody Radiation?
The radiation emitted by a blackbody is known as blackbody radiation. The blackbody spectrum plots wavelength on the x-axis against the spectral distribution on the y-axis; a separate curve is obtained for each temperature.
Characteristics of Blackbody Radiation
The characteristics of the blackbody radiation are explained with the help of the following laws:
Wien’s displacement law
Planck’s law
Stefan-Boltzmann law

Wien’s Displacement Law
Wien’s displacement law states that
The blackbody radiation curve for different temperatures peaks at a wavelength that is inversely proportional to the temperature.
Wien’s Law Formula

\(\lambda_{max}=\frac{b}{T}\)

where \(b \approx 2.898 \times 10^{-3}\ \text{m·K}\) is Wien’s displacement constant.
Planck’s Law
Using Planck’s law of blackbody radiation, the spectral density of the emission is determined for each wavelength at a particular temperature.
Planck’s Law Formula
\(E_{\lambda }=\frac{8\pi hc}{\lambda ^{5}\left(e^{\frac{hc}{\lambda kT}}-1\right)}\)

where \(h\) is Planck’s constant, \(c\) is the speed of light, \(k\) is the Boltzmann constant, and \(T\) is the absolute temperature.
Stefan-Boltzmann Law
The Stefan-Boltzmann law explains the relationship between total energy emitted and the absolute temperature.
Stefan-Boltzmann Law Formula
E ∝ T⁴, i.e. E = σT⁴ per unit area, where σ is the Stefan-Boltzmann constant.
Wien’s Displacement Law Examples

We can easily deduce that a wood fire, at approximately 1500 K, gives out peak radiation at around 2000 nm. This means that the majority of the radiation from the wood fire is beyond the human eye’s visibility, which is why a campfire is an excellent source of warmth but a very poor source of light.

The temperature of the sun’s surface is 5700 K. Using Wien’s displacement law, we can calculate that the peak radiation output is at a wavelength of about 500 nm, which lies in the green portion of the visible light spectrum. It turns out our eyes are highly sensitive to this particular wavelength, and we should be appreciative that a rather unusually large portion of the sun’s radiation falls within the fairly small visible spectrum.

When a piece of metal is heated, it first becomes ‘red hot’; red is the longest visible wavelength. On further heating, it moves from red to orange and then yellow. At its hottest, the metal glows white, as the shorter wavelengths come to dominate the radiation.
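The examples above are easy to check numerically. Here is a minimal Python sketch (the function names are mine; the constants are the standard SI values of Wien’s displacement constant and the Stefan-Boltzmann constant):

```python
import math

# Wien's displacement constant and Stefan-Boltzmann constant (SI units)
b = 2.898e-3      # m·K
sigma = 5.670e-8  # W·m^-2·K^-4

def peak_wavelength_nm(T):
    """Wien's displacement law: lambda_max = b / T, returned in nanometres."""
    return b / T * 1e9

def radiated_power(T, area=1.0):
    """Stefan-Boltzmann law: total emitted power E = sigma * A * T^4."""
    return sigma * area * T**4

print(round(peak_wavelength_nm(1500)))  # 1932 -> wood fire peaks in the infrared
print(round(peak_wavelength_nm(5700)))  # 508  -> the Sun peaks near green light
```

Note how the T⁴ scaling makes the emitted power grow sixteen-fold when the temperature doubles.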
|
A quite common idea to provide "gravitation" in space stations is to make them rotate, so the centrifugal force gives an effective gravitation. A possible design is a ring-shaped space station.
Now of course it is easy to calculate how fast a space station has to rotate, as function of the radius, in order to provide a given g-value. Of course, the smaller the radius, the larger the needed angular velocity to provide the gravitation:
$$\omega = \sqrt{\frac{g}{r}}$$
However, a small, fast-rotating space station has two disadvantages:
Due to the dependence of the centrifugal force on the radius, there's a difference in gravitational strength between head and feet. For a human of height $h$, standing on the floor at radius $r$ (assuming $r>h$, of course), you get
$$\Delta g = \frac{g h}{r}$$
This difference might give problems; but I doubt they will be the main problem. Also, it falls off quite quickly with $r$, so with any halfway reasonable size of the space station, I guess it should be no issue.
Due to the rotation you get a Coriolis force. This can mess with the human sense of balance. Moreover, since it is proportional to $\omega$ (more exactly, for running perpendicular to the rotation axis, it's $2\omega v$), it falls off only with the square root of the radius, so I guess that is the determining factor in deciding which radius is needed.
Also an effect is that when moving, the direction of "gravitation" will change as you walk around the ring. This I guess will tend to cause you to stumble as soon as you walk, let alone run, if the radius is too low. I have no idea how well humans can adapt to this (or if that question has even been studied).
So my question is:
What would be the minimal radius for a space station, if there should be no problematic effects for humans?
Assume aiming for earth-like gravity ($g=10\,\rm m/s^2$), normal size humans (height up to not much more than 2 meters) and people may run (from 100 meter sprint times, one may assume a maximum speed of about 10 m/s). Also, we can assume there's only one floor (no "upstairs" with different radius).
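To get a feel for the numbers, here is a small Python sketch (my own illustration, using the stated assumptions $g = 10\,\rm m/s^2$, height $h = 2\,\rm m$, running speed $v = 10\,\rm m/s$) that evaluates the three effects for a few radii:

```python
import math

g = 10.0   # target artificial gravity, m/s^2
h = 2.0    # human height, m
v = 10.0   # sprinting speed, m/s

def station_numbers(r):
    """For floor radius r: spin rate omega = sqrt(g/r), the head-to-foot
    gravity difference dg/g = h/r, and the Coriolis acceleration 2*omega*v
    (for running perpendicular to the axis) relative to g."""
    omega = math.sqrt(g / r)
    dg_over_g = h / r
    coriolis_over_g = 2 * omega * v / g
    return omega, dg_over_g, coriolis_over_g

for r in (10, 100, 1000):
    w, dg, cor = station_numbers(r)
    print(f"r={r:5d} m: omega={w:.3f} rad/s, dg/g={dg:.1%}, a_cor/g={cor:.2f}")
```

As expected, the head-to-foot difference dies off quickly with radius while the Coriolis term lingers: even at r = 1000 m a sprinter still feels a sideways push of a fifth of a g.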
Note that this is not really a question about physics (how to calculate the physical quantities is clear to me) but more a question about human physiology (how weak do we have to make the effects to not cause problems), thus the tag. |
Notes for Chapter 2: Probability and Distributions (概率与分布)
Some notation:

$\forall$ — the "for all" symbol: $\forall x,\ P(x)$ means that $P(x)$ is true for all $x$.

Probability (概率)

Properties of a probability space (概率空间的性质)
The probability space is the foundation of probability theory; the rigorous definition of probability is based on this concept.
Inclusion-exclusion formula (容斥原理)

$P(A \cup B) = P(A) + P(B) - P(A \cap B)$; in particular, $P(A \cup B) = P(A) + P(B)$ if $A \cap B = \emptyset$.

More generally,

$$P\Big(\bigcup_{i=1}^{n} A_i\Big) = \sum_{i} P(A_i) - \sum_{i<j} P(A_i \cap A_j) + \cdots + (-1)^{n+1} P(A_1 \cap \cdots \cap A_n).$$
Law of total probability 全概公式

Let $\{B_1, B_2, \ldots, B_n\}$ be a partition of the sample space $\Omega$ with $P(B_i) > 0$. Then

$$P(A) = \sum_{i=1}^{n} P(A \mid B_i)\, P(B_i)$$

is called the law of total probability.
Bayes’ Theorem:

$$P(B_j \mid A) = \frac{P(A \mid B_j)\, P(B_j)}{\sum_{i=1}^{n} P(A \mid B_i)\, P(B_i)}.$$
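As a quick illustration, here is a Python sketch of the law of total probability and Bayes' theorem on a made-up two-event partition (all the numbers are hypothetical):

```python
# Hypothetical partition {B1, B2} of the sample space, with priors
# P(B1)=0.3, P(B2)=0.7 and likelihoods P(A|B1)=0.8, P(A|B2)=0.1.
priors = {"B1": 0.3, "B2": 0.7}
likelihood = {"B1": 0.8, "B2": 0.1}

# Law of total probability: P(A) = sum_i P(A|B_i) P(B_i)
p_a = sum(likelihood[b] * priors[b] for b in priors)

# Bayes' theorem: P(B_j|A) = P(A|B_j) P(B_j) / P(A)
posterior = {b: likelihood[b] * priors[b] / p_a for b in priors}

print(round(p_a, 2))              # 0.31
print(round(posterior["B1"], 4))  # 0.7742
```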
Distributions

Bernoulli experiment (伯努利试验) and Bernoulli Distribution (伯努利分布)
A Bernoulli experiment (伯努利试验) is a random experiment repeated independently under identical conditions, whose distinguishing feature is that each trial has only two possible outcomes: the event occurs ("success") or it does not ("failure").

Suppose the trial is repeated independently $n$ times; this sequence of repeated independent trials is called an $n$-fold Bernoulli experiment, or a Bernoulli scheme. A single Bernoulli trial is of little interest on its own.
Let $X$ be a random variable associated with a Bernoulli trial, defined as follows: $X = 1$ if the trial is a success and $X = 0$ otherwise.

The pmf of $X$ can be written as

$$p(x) = p^x (1-p)^{1-x}, \quad x = 0, 1.$$
The Bernoulli distribution is the familiar 0-1 distribution, i.e., the two-point distribution.
Binomial Distribution (二项分布)
Let $X$ equal the number of observed successes in $n$ Bernoulli trials; the possible values of $X$ are $0, 1, 2, \ldots, n$. We say that $X$ follows a binomial distribution and write $X \sim b(n, p)$. The pmf of $X$ is

$$p(x) = \binom{n}{x} p^x (1-p)^{n-x},$$

where $x = 0, 1, 2, \ldots, n$.
Geometric distribution (几何分布)
Let $X$ be the number of the Bernoulli trial on which the first "yes" appears, so that $P(X = k) = (1-p)^{k-1} p$ for $k = 1, 2, \ldots$ Let $Y$ be the number of "no"s before the first "yes"; then $Y = X - 1$ and $P(Y = k) = (1-p)^{k} p$ for $k = 0, 1, 2, \ldots$
Multinomial Distribution (多项分布)

This is an extension of the binomial distribution. Let a random experiment be repeated $n$ independent times. Each trial results in exactly one of $k$ mutually exclusive and exhaustive outcomes, say $C_1, C_2, \ldots, C_k$. Let $p_i$ be the probability that the outcome is an element of $C_i$, and let $X_i$ be the number of trials whose outcome is an element of $C_i$, so that $X_1 + \cdots + X_k = n$ and $p_1 + \cdots + p_k = 1$. The pmf of $(X_1, \ldots, X_k)$ is

$$p(x_1, \ldots, x_k) = \frac{n!}{x_1! \cdots x_k!}\, p_1^{x_1} \cdots p_k^{x_k}, \quad x_1 + \cdots + x_k = n.$$

Poisson Distribution (泊松分布)
A random variable $X$ that has the pmf

$$p(x) = \frac{\lambda^x e^{-\lambda}}{x!}, \quad x = 0, 1, 2, \ldots,$$

is said to have a Poisson distribution with parameter $\lambda > 0$.
Suppose $X_1, \ldots, X_n$ are independent random variables and suppose $X_i$ has a Poisson distribution with parameter $\lambda_i$. Then $Y = X_1 + \cdots + X_n$ has a Poisson distribution with parameter $\lambda_1 + \cdots + \lambda_n$.
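The additivity property can be checked numerically by convolving two Poisson pmfs; a small Python sketch:

```python
import math

def poisson_pmf(x, lam):
    # p(x) = lam^x e^{-lam} / x!
    return lam**x * math.exp(-lam) / math.factorial(x)

# If X1 ~ Poisson(2) and X2 ~ Poisson(3) are independent, then
# X1 + X2 ~ Poisson(5).  Compare the convolution with the direct pmf.
lam1, lam2, n = 2.0, 3.0, 4
conv = sum(poisson_pmf(k, lam1) * poisson_pmf(n - k, lam2) for k in range(n + 1))
direct = poisson_pmf(n, lam1 + lam2)
print(abs(conv - direct) < 1e-12)  # True
```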
Exponential Distribution (指数分布)
The exponential distribution, with pdf

$$f(x) = \frac{1}{\theta}\, e^{-x/\theta}, \quad x > 0,$$

is one of the most important continuous distributions in reliability theory, queueing theory, and telephone systems.
Gamma Distribution (伽玛分布)

The Gamma Function

The integral $\int_0^{\infty} t^{\alpha-1} e^{-t}\, dt$ converges for $\alpha > 0$; it is called the gamma function of $\alpha$, and we write

$$\Gamma(\alpha) = \int_0^{\infty} t^{\alpha-1} e^{-t}\, dt.$$
Properties:

$\Gamma(\alpha) = (\alpha-1)\,\Gamma(\alpha-1)$ for $\alpha > 1$; $\Gamma(1) = 1$; $\Gamma(n) = (n-1)!$ if $n$ is a positive integer.

The $\Gamma$-distribution
A random variable $X$ that has a pdf of the form

$$f(x) = \frac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta}, \quad x > 0,$$

is said to have a gamma distribution with parameters $\alpha$ and $\beta$, where $\alpha > 0$ and $\beta > 0$. We will write $X \sim \Gamma(\alpha, \beta)$ or $X \sim \operatorname{gamma}(\alpha, \beta)$.

We have $E[X] = \alpha\beta$ and $\operatorname{Var}(X) = \alpha\beta^2$;

the mgf is $M(t) = (1-\beta t)^{-\alpha}$ for $t < 1/\beta$, obtained by using a transformation of variables in the integral defining $M(t)$.
The $\Gamma$-distribution includes many useful distributions:

The standard $\Gamma$-distribution, with $\beta = 1$;

The exponential distribution, with pdf $f(x) = \frac{1}{\theta} e^{-x/\theta}$, $x > 0$, which is $\Gamma(1, \theta)$;

The $\chi^2$-distribution with $r$ degrees of freedom, which is $\Gamma(r/2,\, 2)$.
Example. If has the pdf
Corollary:

Let $X_1, \ldots, X_n$ be independent with $X_i \sim \Gamma(\alpha_i, \beta)$. Then

$$Y = X_1 + \cdots + X_n \sim \Gamma(\alpha, \beta), \quad \text{where } \alpha = \alpha_1 + \cdots + \alpha_n.$$
The $B$-distribution (贝塔分布)

The beta function

$$B(\alpha, \beta) = \int_0^1 x^{\alpha-1} (1-x)^{\beta-1}\, dx, \quad \alpha, \beta > 0.$$

Properties

$$B(\alpha, \beta) = B(\beta, \alpha) = \frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}.$$
The $B$-distribution

Let $X_1$ and $X_2$ be two independent random variables, where $X_1 \sim \Gamma(\alpha, 1)$ and $X_2 \sim \Gamma(\beta, 1)$. The distribution of $B = \frac{X_1}{X_1 + X_2}$ is called the $B$-distribution with parameters $\alpha$ and $\beta$, and we write $B \sim B(\alpha, \beta)$ or $B \sim \operatorname{beta}(\alpha, \beta)$.
Properties of the $B$-distribution: its pdf is

$$f(x) = \frac{1}{B(\alpha, \beta)}\, x^{\alpha-1} (1-x)^{\beta-1}, \quad 0 < x < 1.$$

The $B$-distribution includes:

The uniform distribution ($\alpha = \beta = 1$), with pdf $1$ on $(0,1)$ and $0$ elsewhere;

The inverse sine (arcsine) distribution ($\alpha = \beta = 1/2$), with pdf $f(x) = \frac{1}{\pi\sqrt{x(1-x)}}$, $0 < x < 1$;

The power distribution ($\beta = 1$), with pdf $f(x) = \alpha x^{\alpha-1}$, $0 < x < 1$.
Normal Distribution (正态分布)
Definition. A random variable $X$ that has the pdf

$$f(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, \exp\Big(-\frac{(x-\mu)^2}{2\sigma^2}\Big), \quad -\infty < x < \infty,$$

is said to have a normal distribution with parameters $\mu$ and $\sigma^2$, and we write $X \sim N(\mu, \sigma^2)$. When $\mu = 0$ and $\sigma = 1$, we say that $X$ follows a standard normal distribution.

Assume the random variable $X \sim N(\mu, \sigma^2)$; then the random variable $Z = \frac{X-\mu}{\sigma} \sim N(0, 1)$.
The $t$-distribution ($t$ 分布)

Let the random variables $W \sim N(0, 1)$ and $V \sim \chi^2(r)$, and let $W$ and $V$ be independent.

Define a new random variable $T$ by writing

$$T = \frac{W}{\sqrt{V/r}}.$$

We say that $T$ follows a $t$-distribution with $r$ degrees of freedom.
The $F$-distribution ($F$ 分布)

Let $U \sim \chi^2(r_1)$ and $V \sim \chi^2(r_2)$ be independent. Then

$$F = \frac{U/r_1}{V/r_2}$$

has the pdf

$$f(x) = \frac{\Gamma\big(\frac{r_1+r_2}{2}\big)\,(r_1/r_2)^{r_1/2}}{\Gamma\big(\frac{r_1}{2}\big)\,\Gamma\big(\frac{r_2}{2}\big)}\, \frac{x^{r_1/2-1}}{\big(1+\frac{r_1}{r_2}x\big)^{(r_1+r_2)/2}}, \quad x > 0.$$

We say that $F$ follows an $F$-distribution with $r_1$ and $r_2$ degrees of freedom, and write $F \sim F(r_1, r_2)$. |
Static pressure will be the same, even if one pipe were only half as long as the other. That's due to Pascal's law:
$ \Delta P = \rho \cdot g \cdot \Delta h $
So the pressure difference is proportional to the difference in height, and for all points at the same height the pressure will be the same.
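As a quick numeric illustration of Pascal's law (assuming water, with $\rho = 1000\,\rm kg/m^3$):

```python
rho = 1000.0  # density of water, kg/m^3
g = 9.81      # gravitational acceleration, m/s^2

def pressure_difference(dh):
    """Pascal's law: delta P = rho * g * delta h, in pascals."""
    return rho * g * dh

# Same height difference -> same static pressure, regardless of pipe shape
print(pressure_difference(2.0))  # about 1.96e4 Pa for a 2 m column of water
```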
Dynamic pressure will be lower for pipe #2, since it does not suffer the same pressure loss from flow resistance as pipe #1.
A fluid with sufficiently high viscosity (most liquids) will have laminar flow in pipe #2, whereas the bends in #1 will disrupt the laminar flow and cause turbulence at the bends, which increases the flow resistance. |
Some tricks I've seen:
Tricks with notable products
$(a + b)^2 = a^2 + 2ab + b^2$
This formula can be used to compute squares. Say that we want to compute $46^2$. We use $46^2 = (40+6)^2 = 40^2+2\cdot40\cdot6 +6^2 = 1600 + 480 + 36 = 2116$. You can also use this method for negative $b$:$ 197^2 = (200 - 3)^2 = 200^2 - 2\cdot200\cdot3 + 3^2 = 40000 - 1200 + 9 = 38809 $
The last subtraction can be kind of tricky: remember to do it right to left, and take out the common multiples of 10:$ 40000 - 1200 = 100(400-12) = 100(398-10) = 100(388) = 38800 $The hardest thing here is to keep track of the amount of zeroes, this takes some practice!
Also note that if we're computing $(a+b)^2$, where $a$ is a multiple of $10^k$ and $b$ is a single-digit number, we already know the last $k$ digits of the answer: they are the digits of $b^2$, padded with zeroes. We can use this even if $a$ is only a multiple of 10: the last digit of $(10a + b)^2$ (where $a$ and $b$ are single digits) is the last digit of $b^2$. So we can write that down (or just make a mental note that we have the final digit) and worry about the more significant digits.
Also useful for things like $46\cdot47 = 46^2 + 46 = 2116 + 46 = 2162$. When both numbers are even or both numbers are odd, you might want to use:
$(a+b)(a-b) = a^2 - b^2$Say, for example, we want to compute $23 \cdot 27$. We can write this as $(25 - 2)(25 + 2) = 25^2 - 2^2 = (20 + 5)^2 = 20^2 + 2\cdot20\cdot5 + 5^2 - 4 = 400 + 200 + 25 - 4 = 621$.
Divisibility checks
Already covered by Theodore Norvell. The basic idea is that if you represent numbers in a base $b$, you can easily tell if numbers are divisible by $b - 1$, $b + 1$ or prime factors of $b$, by some modular arithmetic.
Vedic math
A guy in my class gave a presentation on Vedic math. I don't really remember everything and there probably are more cool things in the book, but I remember an algorithm for multiplication that you can use to multiply numbers in your head.
This picture shows a method called lattice or gelosia multiplication and is just a way of writing our good old-fashioned multiplication algorithm (the one we use on paper) in a nice way. Please notice that the picture and the Vedic algorithm are not tied: I added the picture because I think it helps you appreciate and understand the pattern that is used in the algorithm. The gelosia notation shows this in a much nicer way than the traditional notation.
The algorithm the guy explained is essentially the same algorithm as we would use on paper. However, it structures the arithmetic in such a way that we never have remember too many numbers at the same time.
Let's illustrate the method by multiplying $456$ with $128$, as in the picture. We work from right to left: we first compute the least significant digits and work our way up.
We start by multiplying the least significant digits:
$6 \cdot 8 = 48$: the least significant digit is $8$, remember the $4(0)$ for the next round (of course, I don't mean zero times four here but four, or forty, whatever you prefer: be consistent though, if you include the zero here to make forty, you've got to do it everywhere).$ 8 \cdot 5(0) = 40(0) $
$ 2(0) \cdot 6 = 12(0) $ $ 4(0) + 40(0) + 12(0) = 56(0) $: our next digit (to the left of the $8$) is $6$: remember the $5(00)$
$ 8 \cdot 4(00) = 32(00) $
$ 2(0) \cdot 5(0) = 10(00) $ $ 1(00) \cdot 6 = 6(00) $ $ 5(00) + 32(00) + 10(00) + 6(00) = 53(00) $: our next digit is a $3$, remember the $5(000)$
Pfff... starting with 2-digit numbers is a better idea, but I wanted to do this longer one to make the structure of the algorithm clear. You can do this much faster once you have practiced, since you don't have to write it all down.
$ 2(0) \cdot 4(00) = 8(000) $
$ 1(00) \cdot 5(0) = 5(000)$ $ 5(000) + 8(000) + 5(000) = 18(000)$: next digit is an $8$, remember the $1(0000)$
$ 1(00) \cdot 4(00) = 4(0000) $
$ 1(0000) + 4(0000) = 5(0000) $: the most significant digit is a $5$.
So we have $58368$.
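The digit-by-digit scheme above translates directly into code; here is a Python sketch of the same right-to-left, carry-as-you-go multiplication (the function name is mine):

```python
def mental_multiply(a, b):
    """Multiply two numbers column by column, least significant digit first,
    carrying the surplus to the next column -- the pattern used above."""
    da = [int(d) for d in str(a)][::-1]  # least significant digit first
    db = [int(d) for d in str(b)][::-1]
    digits = []
    carry = 0
    for col in range(len(da) + len(db) - 1):
        # sum all digit products that land in this column, plus the carry
        total = carry + sum(da[i] * db[col - i]
                            for i in range(len(da))
                            if 0 <= col - i < len(db))
        digits.append(total % 10)
        carry = total // 10
    while carry:  # flush any remaining carry into leading digits
        digits.append(carry % 10)
        carry //= 10
    return int("".join(map(str, digits[::-1])))

print(mental_multiply(456, 128))  # 58368
```

Each loop iteration is exactly one "round" of the mental procedure: one output digit plus a carry.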
Quadratic equations
There are multiple ways to solve a quadratic equation in your head. The easiest are quadratic with integer coefficients. If we have $x^2 + ax + c = 0$, try to find $r_{1, 2}$ such that $r_1 + r_2 = -a$ and $r_1r_2 = c$. It is also possible to solve for non-integer solutions this way, but it is usually too hard to actually come up with solutions this way.
Another way is just to try divisors of the constant term. By the rational root theorem (google it, I can't link anymore
sigh) all rational solutions to $x^n + ... + c = 0$ with integer coefficients need to be divisors of $c$. If $c$ is a fraction $\frac{p}{q}$, the solutions need to be of the form $\frac{a}{b}$ where $a$ divides $p$ and $b$ divides $q$.
If this all fails, we can still put the abc-formula in a much easier form:
$ ux^2 + vx + w = 0 $
$ x^2 + \frac{v}{u}x + \frac{w}{u} = 0 $, i.e. $ x^2 - ax - b = 0 $ with $a = -\frac{v}{u}$ and $b = -\frac{w}{u}$
$ x^2 = ax + b $
(This is the form that I found easiest to use!) $ (x - \frac{a}{2})^2 = (\frac{a}{2})^2 + b $ $ x = \frac{a\pm\sqrt{a^2 + 4b}}{2} = \frac{a}{2} \pm \sqrt{(\frac{a}{2})^2 + b} $
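That "easy form" $x^2 = ax + b$ translates into a one-liner; a small Python sketch (the function name is mine):

```python
import math

def solve_depressed(a, b):
    """Solve x^2 = a*x + b via x = a/2 +/- sqrt((a/2)^2 + b),
    the mental-arithmetic form described above."""
    h = a / 2
    d = math.sqrt(h * h + b)  # assumes real roots, i.e. (a/2)^2 + b >= 0
    return h + d, h - d

# x^2 = 2x + 8  <=>  x^2 - 2x - 8 = 0, which factors as (x-4)(x+2)
print(solve_depressed(2, 8))  # (4.0, -2.0)
```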
I'm sure there are also a lot of techniques for estimating products and the like, but I'm not really familiar with them.
Tricks that aren't really usable but still pretty cool
See this excerpt from Feynman's "Surely you're joking, Mr. Feynman!" about how he managed to amaze some of his colleagues, and also this video from Numberphile. |
Being a bit late for the party, here is nevertheless a small answer. In fact, there is a rather explicit way to compute adjoints of every differential operator (any order) between (smooth, compactly supported) sections of vector bundles. Of course, this is (in general) not the Hilbert space adjoint, here you need a bit more analysis ;)
The main idea is to use a symbol calculus based on a covariant derivative. I explain here the scalar version (trivial vector bundles) but the whole thing can be done in full generality as well. First, you choose a covariant derivative, say torsion-free and you choose a density on your manifold in order to have an integration measure. As you probably know, the symmetrized covariant derivative allows you to establish a $C^\infty(M)$-linear bijection between symbols, i.e. smooth functions on the cotangent bundle being polynomial in the fiber directions, and differential operators. Note that this is a real bijection, not just taking into account the leading symbol. Of course, the symbol depends on the chosen covariant derivative.
In a second step you compute once and for all the adjoint of a differential operator $D$ with symbol $f$ by zillions of integrations by parts. The funny thing is that there is a fairly simple way how the symbol of the adjoint looks like. You need two ingredients for that:
First, the covariant derivative allows you to define a horizontal lift which in turn determines a maximally indefinite pseudo Riemannian metric on the cotangent bundle (horizontal spaces are in bijection to tangent spaces at the base point, vertical spaces are in bijection to the cotangent space, thus there is a natural pairing). This metric has a Laplace operator (better: d'Alembert operator) $\Delta$ with which you can act on the symbol $f$. In the flat situation this is just\begin{equation}\Delta_{\mathrm{flat}} = \frac{\partial^2}{\partial q^i \partial p_i}\end{equation}for a Darboux chart on $T^*M$ induced by a chart on $M$. In general, there are a couple of Christoffel symbols needed to make this globally defined ;)
Second, the density $\mu$ of your integration might not be covariantly constant. In any case, it defines a one-form by\begin{equation}\alpha(X) = \frac{\nabla_X \mu}{\mu},\end{equation}which is now used to cook up a new differential operator on $T^*M$. You can lift $\alpha$ vertically to a vector field $F(\alpha)$ on $T^*M$, completely canonically.
Having these two ingredients, the adjoint $D^*$ of $D$ has the following symbol\begin{equation}f^* = \exp\left(\frac{1}{2i}(\Delta + F(\alpha))\right) \overline{f}.\end{equation}The prefactor in the exponential depends a bit on your conventions concerning the assignment of symbol to operator. With this formula it is typically really just a computation to get adjoints of all kinds of operators. In many cases, you can choose your density to be covariantly constant, so that $\alpha = 0$.
You can find all this in much detail in my book on Poisson geometry, based on some old papers on quantization of cotangent bundles in the late 90s (together with Bordemann and Neumaier).
In the case of interesting bundles, the formula is essentially the same: you only have to choose covariant derivatives for the two vector bundles in question and modify the Laplace operator accordingly. Then you can proceed in the same way. This generalization is in a paper of mine with Bordemann, Neumaier and Pflaum. |
You have three different questions here:
is 8th Avenue and 14th Street at 3pm the same place as 8th and 14th at 4pm?
why can't we move backwards in time
is the answer to (1) connected to the answer to (2)
The answer to (1) is unambiguously
NO. If you remember back to Pythagoras' theorem as learnt by generations of schoolchildren, the distance $s$ between two points $(x_1, y_1)$ and $(x_2, y_2)$ is given by:
$$ s^2 = \Delta x^2 + \Delta y^2 $$
where I used $\Delta x$ as shorthand for $x_2 - x_1$ and likewise for $y$. For spacetime we define a distance in a similar way, but the equation is now:
$$ s^2 = -c^2\Delta t^2 + \Delta x^2 + \Delta y^2 + \Delta z^2 $$
All the weird stuff, like time dilation and length contraction, can be derived from this equation for the distance so it's absolutely fundamental to relativity (general as well as special). Two points are only the same if $s = 0$, and it clearly isn't for the same street corner at different times. This sounds a bit like a mathematician making an artificial distinction, but I must emphasise that all of SR depends on this distinction and we know SR works because we test it every day in particle accelerators.
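As a quick check of this distinction, here is a small Python sketch (my own illustration) evaluating $s^2$ for the same spatial point at two different times:

```python
c = 299_792_458.0  # speed of light, m/s

def interval_squared(dt, dx=0.0, dy=0.0, dz=0.0):
    """Spacetime interval s^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2."""
    return -(c * dt) ** 2 + dx**2 + dy**2 + dz**2

# The same street corner, one hour apart: dt = 3600 s, dx = dy = dz = 0
s2 = interval_squared(dt=3600.0)
print(s2 != 0)  # True: the two events are distinct spacetime points
```

The interval is hugely negative (timelike), which is exactly why the two events cannot be identified as "the same point".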
On to question (2) and the answer is unambiguously
NO-ONE KNOWS.
And finally question (3) and the answer is that there is no obvious connection between (1) and (2). The equations of special relativity are time symmetric, so they do not dictate that you cannot move backwards in time.
Feel free to Google around for the answer to your question (2). There are lots of interesting articles out there, but in the final analysis the answer is the two word answer I gave above.
Response to comment
You say in your comment:
I was rather more interested in why we believe we can move backward in space. I know mathematically we can but in reality going forward two paces and then back two does not get you back to the same place
Your net displacement is the vector $(\Delta t, \Delta x, \Delta y, \Delta z)$, so the direction you have moved in is the direction of this vector. In your example of two steps forward and two back the vector will be $(\tau, 0, 0, 0)$, where $\tau$ is the time you took to make the four steps. So if you define the word
place to refer only to the spatial coordinates (which is after all what most of us mean by place) then you are back in the same place but not at the same spacetime point.
If we assume your steps carried you along the $x$ axis then the displacement for the first two steps was $(\tau/2, 2X, 0, 0)$ and the displacement for the return two steps was $(\tau/2, -2X, 0, 0)$, where $X$ is your step length. Note that the $x$ displacement can be positive or negative. When we say we cannot move backwards in time we are saying that the $t$ displacement can never be negative; it can only be positive.
In fact we can make a stronger statement than that. An observer who is moving relative to you will disagree about your displacement. Suppose you measure your displacement as $(t, x, y, z)$ and the moving observer measures it as $(t', x', y', z')$ then $t \ne t'$, $x \ne x'$ and so on. You and the moving observer will disagree about your displacement (though you will both calculate the same value for $s$ as described above). However, both you and all observers moving (slower than light) with respect to you will agree that the time displacement cannot be less than zero. |
Your hovercraft device has a mass of 2.25 kg and at max power can travel a speed of 0.2 m/s. You also learn that the target distance is 180 cm and you are trying to get there in 15 seconds.
How many rolls of pennies should you place on the hovercraft in order to get as close to 15 seconds as possible?
Assuming there is no change in lift (which affects friction) and the hovercraft gets to max speed instantly:

One penny roll is around the mass of 50 (new) pennies, 50 * 3.11 g, or 0.156 kg.
Required speed is 1.8 m / 15 s = 0.12 m/s.
(2.25 kg)(0.2 m/s) = (2.25 kg + n * 0.156 kg)(0.12 m/s)
n = 9.6
10
Oh sorry, I forgot to mention that you should assume that each roll of pennies has a mass of 125 grams (it's mentioned in the rules manual). However, your answer is technically correct, so you can go ahead with the next one!
"The fault, dear Brutus, is not in our stars,But in ourselves, that we are underlings."
University of Texas at Austin '23Seven Lakes High School '19
Derive the formula for the distance a projectile travels given the angle of elevation of its launch, theta, and the initial speed, v. Assume the projectile encounters no air resistance and starts at ground level.
First, find the individual components of the 2D velocity vector:

[math]v_x = v\cos\theta[/math]
[math]v_y = v\sin\theta[/math]

The amount of time the ball will be in the air can be found using

[math]\Delta y = v_{y}t + \frac{at^2}{2}[/math]

Since [math]\Delta y = 0[/math] and [math]a = -9.8[/math]:

[math]0 = t(v\sin\theta - 4.9t)[/math]
[math]t = 0,\ \frac{v\sin\theta}{4.9}[/math]

To then get the total distance traveled, we simply use [math]v_{x}t = \Delta x[/math].

[math]v\cos\theta\cdot\frac{v\sin\theta}{4.9} = \Delta x[/math]
[math]\frac{v^2\sin(2\theta)}{9.8} = \Delta x[/math]

is the final equation.
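The derived range formula can be sanity-checked numerically against the intermediate steps; a small Python sketch (assuming $g = 9.8\,\rm m/s^2$ as in the derivation):

```python
import math

g = 9.8  # m/s^2, as used in the derivation

def projectile_range(v, theta_deg):
    """Range on level ground: R = v^2 sin(2*theta) / g."""
    return v**2 * math.sin(2 * math.radians(theta_deg)) / g

def projectile_range_by_steps(v, theta_deg):
    """Same result via the intermediate steps: flight time t = 2*v*sin(theta)/g,
    then R = v*cos(theta) * t."""
    th = math.radians(theta_deg)
    t = 2 * v * math.sin(th) / g
    return v * math.cos(th) * t

print(round(projectile_range(20, 45), 2))  # 40.82
```

Both routes agree, and the 45-degree launch gives the maximum range since sin(2θ) peaks there.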
Conant 19' UIUC 23' Member of The Builder CultPhysics is the only real scienceChange my mind
There is a 20kg ball heading east at 26m/s. A 36kg ball is traveling southwest at 38m/s. If both balls undergo an inelastic collision, what is the speed and the direction of the balls after the collision?
It doesn't seem like there's enough information given to solve the problem, but given that the balls "stick together", conservation of momentum gives, with east positive and north positive:

p_x = (20 kg)(26 m/s) + (36 kg)(−38 cos 45° m/s) ≈ −447.3 kg·m/s
p_y = (36 kg)(−38 sin 45° m/s) ≈ −967.3 kg·m/s

v = √(p_x² + p_y²) / (20 kg + 36 kg) ≈ 19.0 m/s,

directed about 65° south of west.
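For reference, the momentum bookkeeping for this collision can be carried out in a few lines of Python (a sketch, with east as $+x$ and north as $+y$):

```python
import math

# 20 kg ball heading east at 26 m/s; 36 kg ball heading southwest at 38 m/s;
# perfectly inelastic collision (the balls stick together).
m1, v1 = 20.0, 26.0
m2, v2 = 36.0, 38.0

# Southwest = equal west and south components at 45 degrees
px = m1 * v1 + m2 * (-v2 * math.cos(math.radians(45)))
py = m2 * (-v2 * math.sin(math.radians(45)))

speed = math.hypot(px, py) / (m1 + m2)
angle = math.degrees(math.atan2(-py, -px))  # measured south of west

print(round(speed, 1))  # 19.0 (m/s)
print(round(angle, 1))  # 65.2 (degrees south of west)
```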
That’s correct. Your turn!
1. A uniform disk is rolled down a hill of height 11 m. What is the speed of the disk when it reaches the bottom of the hill?
2. As soon as the disk reaches the bottom of the hill, it hits a horizontal surface of frictional coefficient 0.13. What is the acceleration of the disk?
3. A ball of mass 2 kg is attached to a string of length 0.75 m and rotated vertically with an angular velocity of 7 rad/s. What is the ratio of the string tension at the top of the loop to the string tension at the bottom?

Ignore significant figures.
i wish i was goodEvents 2019: Expd, Water, HerpRip states 2019 |
This example explores the physics of the damped harmonic oscillator by solving the equations of motion in the case of no driving forces and investigating the cases of under-, over-, and critical damping.
Derive Equation of Motion
Solve the Equation of Motion (F = 0)
Underdamped Case ($\zeta < 1$)
Overdamped Case ($\zeta > 1$)
Critically Damped Case ($\zeta = 1$)
Conclusion
Consider a forced harmonic oscillator with damping shown below. Model the resistance force as proportional to the speed with which the oscillator moves.
Define the equation of motion $m\ddot{x} + c\dot{x} + kx = F(t)$, where

$m$ is the mass

$c$ is the damping coefficient

$k$ is the spring constant

$F(t)$ is a driving force

syms x(t) m c k F(t)
eq = m*diff(x,t,t) + c*diff(x,t) + k*x == F
Rewrite the equation using $\gamma = c/m$ and $\omega_0^2 = k/m$.

syms gamma omega_0
eq = subs(eq, [c k], [m*gamma, m*omega_0^2])
Divide out the mass $m$. Now we have the equation in a convenient form to analyze.

eq = collect(eq, m)/m
Solve the equation of motion using
dsolve in the case of no external forces, where $F(t) = 0$. Use the initial conditions of unit displacement and zero velocity, $x(0) = 1$ and $\dot{x}(0) = 0$.

vel = diff(x,t);
cond = [x(0) == 1, vel(0) == 0];
eq = subs(eq,F,0);
sol = dsolve(eq, cond)
Examine how to simplify the solution by expanding it.

sol = expand(sol)
Notice that each term has a factor of $e^{-\gamma t/2}$; use
collect to gather these terms.

sol = collect(sol, exp(-gamma*t/2))
The term $\sqrt{\gamma^2 - 4\omega_0^2}$ appears in various parts of the solution. Rewrite it in a simpler form by introducing the damping ratio $\zeta = \frac{\gamma}{2\omega_0}$.

Substituting $\zeta$ into the term above gives $2\omega_0\sqrt{\zeta^2 - 1}$:

syms zeta;
sol = subs(sol, ...
    sqrt(gamma^2 - 4*omega_0^2), ...
    2*omega_0*sqrt(zeta^2-1))
Further simplify the solution by substituting $\gamma$ in terms of $\zeta$ and $\omega_0$, namely $\gamma = 2\zeta\omega_0$.

sol = subs(sol, gamma, 2*zeta*omega_0)
We have derived the general solution for the motion of the damped harmonic oscillator with no driving forces. Next, we'll explore three special cases of the damping ratio where the motion takes on simpler forms. These cases are called underdamped ($\zeta < 1$), overdamped ($\zeta > 1$), and critically damped ($\zeta = 1$).
If $\zeta < 1$, then $\sqrt{\zeta^2 - 1}$ is purely imaginary:

solUnder = subs(sol, sqrt(zeta^2-1), 1i*sqrt(1-zeta^2))
Notice the $e^{i\omega t}$ terms in the above equation and recall the identity $\cos x = \frac{e^{ix} + e^{-ix}}{2}$.

Rewrite the solution in terms of $\cos$.
solUnder = coeffs(solUnder, zeta);
solUnder = solUnder(1);
c = exp(-omega_0 * zeta * t);
solUnder = c * rewrite(solUnder / c, 'cos')
solUnder(t, omega_0, zeta) = solUnder
The system oscillates at a natural frequency of $\omega_0\sqrt{1-\zeta^2}$ and decays at an exponential rate of $\zeta\omega_0$.
Plot the solution with
fplot as a function of $t$ and $\zeta$.
z = [0 1/4 1/2 3/4];
w = 1;
T = 4*pi;
lineStyle = {'-','--',':k','-.'};
fplot(@(t)solUnder(t, w, z(1)), [0 T], lineStyle{1});
hold on;
for k = 2:numel(z)
    fplot(@(t)solUnder(t, w, z(k)), [0 T], lineStyle{k});
end
hold off;
grid on;
xticks(T*linspace(0,1,5));
xticklabels({'0','\pi','2\pi','3\pi','4\pi'});
xlabel('t / \omega_0');
ylabel('amplitude');
lgd = legend('0','1/4','1/2','3/4');
title(lgd,'\zeta');
title('Underdamped');
If $\zeta > 1$, then $\sqrt{\zeta^2 - 1}$ is purely real and the solution can be rewritten as

solOver = sol;
solOver = coeffs(solOver, zeta);
solOver = solOver(1)
Notice the $e^{x}$ terms and recall the identity $\cosh x = \frac{e^{x} + e^{-x}}{2}$.

Rewrite the expression in terms of $\cosh$.
c = exp(-omega_0*t*zeta);
solOver = c*rewrite(solOver / c, 'cosh')
solOver(t, omega_0, zeta) = solOver
Plot the solution to see that it decays without oscillating.
z = 1 + [1/4 1/2 3/4 1];
w = 1;
T = 4*pi;
lineStyle = {'-','--',':k','-.'};
fplot(@(t)solOver(t, w, z(1)), [0 T], lineStyle{1});
hold on;
for k = 2:numel(z)
    fplot(@(t)solOver(t, w, z(k)), [0 T], lineStyle{k});
end
hold off;
grid on;
xticks(T*linspace(0,1,5));
xticklabels({'0','\pi','2\pi','3\pi','4\pi'});
xlabel('\omega_0 t');
ylabel('amplitude');
lgd = legend('1+1/4','1+1/2','1+3/4','2');
title(lgd,'\zeta');
title('Overdamped');
If $\zeta = 1$, then the solution simplifies to $x(t) = (1 + \omega_0 t)\,e^{-\omega_0 t}$:

solCritical(t, omega_0) = limit(sol, zeta, 1)
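As an independent check outside MATLAB, the critically damped solution $x(t) = (1+\omega_0 t)e^{-\omega_0 t}$ can be verified against the ODE $\ddot x + 2\omega_0\dot x + \omega_0^2 x = 0$ with a small Python sketch using finite differences:

```python
import math

def x(t, w0=2.0):
    # Critically damped solution with x(0) = 1, x'(0) = 0 (zeta = 1)
    return (1 + w0 * t) * math.exp(-w0 * t)

def residual(t, w0=2.0, h=1e-5):
    # Finite-difference check of x'' + 2*w0*x' + w0^2*x = 0
    xp = (x(t + h, w0) - x(t - h, w0)) / (2 * h)
    xpp = (x(t + h, w0) - 2 * x(t, w0) + x(t - h, w0)) / h**2
    return xpp + 2 * w0 * xp + w0**2 * x(t, w0)

print(abs(residual(0.7)) < 1e-5)  # True: the ODE is satisfied to rounding error
```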
Plot the solution for the critically damped case.
w = 1;
T = 4*pi;
fplot(solCritical(t, w), [0 T])
xlabel('\omega_0 t');
ylabel('x');
title('Critically damped, \zeta = 1');
grid on;
xticks(T*linspace(0,1,5));
xticklabels({'0','\pi','2\pi','3\pi','4\pi'});
We have examined the different damping states for the harmonic oscillator by solving the ODE that represents its motion, using the damping ratio $\zeta$. Plot all three cases together to compare and contrast them.
zOver = pi;
zUnder = 1/zOver;
w = 1;
T = 2*pi;
lineStyle = {'-','--',':k'};
fplot(@(t)solOver(t, w, zOver), [0 T], lineStyle{1},'LineWidth',2);
hold on;
fplot(solCritical(t, w), [0 T], lineStyle{2},'LineWidth',2)
fplot(@(t)solUnder(t, w, zUnder), [0 T], lineStyle{3},'LineWidth',2);
hold off;
textColor = lines(3);
text(3*pi/2, 0.3 , 'over-damped' ,'Color',textColor(1,:));
text(pi*3/4, 0.05, 'critically-damped','Color',textColor(2,:));
text(pi/8 , -0.1, 'under-damped');
grid on;
xlabel('\omega_0 t');
ylabel('amplitude');
xticks(T*linspace(0,1,5));
xticklabels({'0','\pi/2','\pi','3\pi/2','2\pi'});
yticks((1/exp(1))*[-1 0 1 2 exp(1)]);
yticklabels({'-1/e','0','1/e','2/e','1'});
lgd = legend('\pi','1','1/\pi');
title(lgd,'\zeta');
title('Damped Harmonic Oscillator'); |
Question:
Let $B(t)$ be the standard Brownian motion, $\mu(t,x)$ and $\sigma(t,x)$ are continuous functions, and $$dr(t) = \mu(t,r(t))dt+\sigma(t,r(t))dB(t).$$ $(\mu,\sigma)$ obeys the linear growth condition $$\left|\mu(t,x)\right|+\left|\sigma(t,x)\right|<C(1+|x|),\ \forall t\in[0,T],\, x\in\mathbf R$$ for some positive constant $C$. Is it true that $$\mathbf E\Big[\exp\Big(-\int_0^t r(s)ds\Big)\Big]=\exp\big(-r(0)t+O(t^2)\big)$$ as $t\to 0^+$?
What I have obtained so far:
By the Cauchy-Schwarz inequality and the Gronwall inequality, $$\mathbf E[r(t)^2]<3\mathbf E[r(0)^2]e^{a(1+T)t},\ \forall t\in[0,T]$$ for some positive constant $a$. We conclude that $$\mathbf E \Big[\Big(\int_0^t r(s)ds\Big)^2\Big]\le t\int_0^t \mathbf E[r(s)^2]ds\le 3\mathbf E[r(0)^2]e^{a(1+T)T}t^2 = O(t^2). \tag{1}$$
I have tried Taylor expanding $e^{x}$ around $x=0$ in the following way. $$I:=\exp\Big(-\int_0^t r(s)ds\Big)=1-\int_0^t r(s)ds+\frac{e^{\theta(x)}}{2}\Big(\int_0^t r(s)ds\Big)^2 \tag{2}$$ for some $\theta(x)\in[0,x]$ and $x:=-\int_0^t r(s)ds$. Because $r(u)$ is continuous, so is $\int_0^u r(s)ds$; hence there exists a stopping time $\tau(t,\omega)$ such that $-\int_0^{\tau(t,\omega)} r(s)ds=\theta(x)$, where $\omega$ is the sample under consideration. Taking the expectation of Equation (2), we have $$\mathbf E[I] = 1-\int_0^t\mathbf E[r(s)]ds+\frac12\mathbf E\Big[\exp\Big(-\int_0^{\tau(t,\omega)} r(s)ds\Big)\Big(\int_0^t r(s)ds\Big)^2\Big].$$ I intend to use Equation (1). In the case $r\ge 0$, $\exp\Big(-\int_0^{\tau(t,\omega)} r(s)ds\Big)\le 1$ and we can proceed easily. What do we do when $r$ can assume both signs?
Perhaps bounding the second moment is not enough and we need a more accurate estimate of the probability distribution. I am considering using the heat kernel expansion to estimate the probability distribution of $r$. But I suspect there is a more elegant solution for these short-time asymptotics.
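As a sanity check on the conjectured expansion (not a proof), one can look at the degenerate case $\sigma\equiv 0$, $\mu(t,x)=-x$, where $r(s)=r(0)e^{-s}$ is deterministic and the expectation is computable in closed form; the remainder after subtracting $-r(0)t$ is visibly $O(t^2)$:

```python
import math

def log_discount(r0, t):
    # With sigma = 0 and mu(t, x) = -x we have r(s) = r0 * exp(-s), so
    # log E[exp(-int_0^t r ds)] = -r0 * (1 - exp(-t)) with no randomness left.
    return -r0 * (1.0 - math.exp(-t))

r0 = 2.0
for t in (0.1, 0.05, 0.025):
    # The remainder log E[...] + r0*t equals r0*(t - 1 + exp(-t)) = r0*t^2/2 + O(t^3),
    # so rem / t^2 should approach r0/2 = 1.0 as t -> 0.
    rem = log_discount(r0, t) + r0 * t
    print(t, rem / t**2)
```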
Now showing items 1-10 of 234
The determination of a coefficient in a parabolic differential equation
(1961)
Abstract Not Available.
Critical Riemannian metrics
(1990)
Let $(M,g)$ be a compact oriented n-dimensional smooth Riemannian manifold. Consider the following quadratic Riemannian functional $$SR(g) = \int_{M} \vert R_{ijkl}(g)\vert^{2}\,d\mu$$ which is homogeneous of degree ...
Energy minimizers, gradient flow solutions, and computational investigations in the theory of biharmonic maps
(1998)
We present the definitions, derive the relevant Euler-Lagrange equations, and establish various properties concerning biharmonic maps. We investigate several classes of examples exhibiting singular behavior. Existence of ...
INTEGRAL EQUATIONS' APPROACH TO SCATTERING PROBLEMS
(1982)
In the present thesis, the classical potential theory is used to derive systems of second kind integral equations corresponding to scattering of acoustic and elastic waves from both fluid and solid inclusions. These systems ...
Some domain decomposition and multigrid preconditioners for hybrid mixed finite elements
(1994)
Discretizations of self-adjoint, linear, second-order, uniformly elliptic partial differential equations by hybrid mixed finite elements lead to large, ill-conditioned saddle-point problems. By eliminating the flux variable, ...
A minimization of a curvature functional on fiber bundles
(1998)
Let B be a smooth compact orientable surface without boundary and with $\chi(B) < 0.$ We examine two types of fiber bundles M over B with fiber F. The first is a principal fiber bundle with a two-torus fiber and the second ...
I. A mathematical theory of competition. II. Generalized Lagrange problems
(1926)
Abstract Not Available.
Integral invariants and stability
(1925)
Abstract Not Available.
The transforms of Fuchsian groups
(1933)
Abstract Not Available.
Surgery, bordism and equivalence of 3-manifolds
(1997)
We call two closed, oriented 3-manifolds, $M_0$ and $M_1,$ HTS-equivalent if there exists a sequence $M_{j_i},\ i = 1,\dots,m$ where $M_0 = M_{j_1}$ and $M_1 = M_{j_m}$ and $M_{j_i}$ is obtained ...
We have briefly mentioned the inverse trigonometric functions before, for example in Section 1.3 when we discussed how to use the \(\fbox{\(\sin^{-1}\)}\), \(\fbox{\(\cos^{-1}\)}\), and \(\fbox{\(\tan^{-1}\)}\) buttons on a calculator to find an angle that has a certain trigonometric function value. We will now define those inverse functions and determine their graphs.
Recall that a
function is a rule that assigns a single object \(y \) from one set (the range) to each object \(x \) from another set (the domain). We can write that rule as \(y = f(x) \), where \(f \) is the function (see Figure 5.3.1). There is a simple vertical rule for determining whether a rule \(y=f(x) \) is a function: \(f \) is a function if and only if every vertical line intersects the graph of \(y=f(x) \) in the \(xy\)-coordinate plane at most once (see Figure 5.3.2).
Recall that a function \(f \) is
one-to-one (often written as \(1-1\)) if it assigns distinct values of \(y \) to distinct values of \(x \). In other words, if \(x_1 \ne x_2 \) then \(f(x_1 ) \ne f(x_2 ) \). Equivalently, \(f \) is one-to-one if \(f(x_1 ) = f(x_2 ) \) implies \(x_1 = x_2 \). There is a simple horizontal rule for determining whether a function \(y=f(x) \) is one-to-one: \(f \) is one-to-one if and only if every horizontal line intersects the graph of \(y=f(x) \) in the \(xy\)-coordinate plane at most once (see Figure 5.3.3).
If a function \(f \) is one-to-one on its domain, then \(f \) has an
inverse function, denoted by \(f^{-1} \), such that \(y=f(x) \) if and only if \(f^{-1}(y) = x \). The domain of \(f^{-1} \) is the range of \(f \).
The basic idea is that \(f^{-1} \) "undoes'' what \(f \) does, and vice versa. In other words,
\[\nonumber \begin{alignat*}{3} f^{-1}(f(x)) ~&=~ x \quad&&\text{for all \(x \) in the domain of \(f \), and}\\ \nonumber f(f^{-1}(y)) ~&=~ y \quad&&\text{for all \(y \) in the range of \(f \).} \end{alignat*}\]
We know from their graphs that none of the trigonometric functions are one-to-one over their entire domains. However, we can restrict those functions to
subsets of their domains where they are one-to-one. For example, \(y=\sin\;x \) is one-to-one over the interval \(\left[ -\frac{\pi}{2},\frac{\pi}{2} \right] \), as we see in the graph below:
For \(-\frac{\pi}{2} \le x \le \frac{\pi}{2} \) we have \(-1 \le \sin\;x \le 1 \), so we can define the
inverse sine function \(y=\sin^{-1} x \) (sometimes called the arc sine and denoted by \(y=\arcsin\;x\)) whose domain is the interval \([-1,1] \) and whose range is the interval \(\left[ -\frac{\pi}{2},\frac{\pi}{2} \right] \). In other words:
\[ \begin{alignat}{3}
\sin^{-1} (\sin\;y) ~&=~ y \quad&&\text{for \(-\tfrac{\pi}{2} \le y \le \tfrac{\pi}{2}\)}\label{eqn:arcsin1}\\ \sin\;(\sin^{-1} x) ~&=~ x \quad&&\text{for \(-1 \le x \le 1\)}\label{eqn:arcsin2} \end{alignat}\]
Example 5.13
Find \(\sin^{-1} \left(\sin\;\frac{\pi}{4}\right) \).
Solution:
Since \(-\frac{\pi}{2} \le \frac{\pi}{4} \le \frac{\pi}{2} \), we know that \(\sin^{-1} \left(\sin\;\frac{\pi}{4}\right) = \boxed{\frac{\pi}{4}}\; \), by Equation \ref{eqn:arcsin1}.
Example 5.14
Find \(\sin^{-1} \left(\sin\;\frac{5\pi}{4}\right) \).
Solution:
Since \(\frac{5\pi}{4} > \frac{\pi}{2} \), we cannot use Equation \ref{eqn:arcsin1}. But we know that \(\sin\;\frac{5\pi}{4} = -\frac{1}{\sqrt{2}} \). Thus, \(\sin^{-1} \left(\sin\;\frac{5\pi}{4}\right) = \sin^{-1} \left( -\frac{1}{\sqrt{2}} \right) \) is, by definition, the angle \(y \) such that \(-\frac{\pi}{2} \le y \le \frac{\pi}{2} \) and \(\sin\;y = -\frac{1}{\sqrt{2}} \). That angle is \(y=-\frac{\pi}{4} \), since
\[\sin\;\left( -\tfrac{\pi}{4} \right) ~=~ -\sin\;\left( \tfrac{\pi}{4} \right) ~=~
-\tfrac{1}{\sqrt{2}} ~. \nonumber \]
Thus, \(\sin^{-1} \left(\sin\;\frac{5\pi}{4}\right) = \boxed{-\tfrac{\pi}{4}}\; \).
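Examples 5.13 and 5.14 are easy to confirm numerically, e.g. in Python, where `math.asin` plays the role of \(\sin^{-1}\):

```python
import math

# pi/4 lies inside [-pi/2, pi/2], so it comes back unchanged:
print(math.asin(math.sin(math.pi / 4)))      # ~ 0.7853981 = pi/4

# 5*pi/4 lies outside [-pi/2, pi/2], so we get -pi/4 instead:
print(math.asin(math.sin(5 * math.pi / 4)))  # ~ -0.7853981 = -pi/4
```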
Example 5.14 illustrates an important point: \(\sin^{-1} x \) should
always be a number between \(-\frac{\pi}{2} \) and \(\frac{\pi}{2} \). If you get a number outside that range, then you made a mistake somewhere. This is why in Example 1.27 in Section 1.5 we got \(\sin^{-1}(-0.682) = -43^\circ \) when using the \(\fbox{\(\sin^{-1}\)}\) button on a calculator. Instead of an angle between \(0^\circ \) and \(360^\circ \) (i.e. \(0 \) to \(2\pi \) radians) we got an angle between \(-90^\circ \) and \(90^\circ \) (i.e. \(-\frac{\pi}{2} \) to \(\frac{\pi}{2} \) radians).
In general, the graph of an inverse function \(f^{-1} \) is the reflection of the graph of \(f \) around the line \(y=x \). The graph of \(y=\sin^{-1} x \) is shown in Figure 5.3.5. Notice the symmetry about the line \(y=x \) with the graph of \(y=\sin\;x \).
The
inverse cosine function \(y=\cos^{-1} x \) (sometimes called the arc cosine and denoted by \(y=\arccos\;x\)) can be determined in a similar fashion. The function \(y=\cos\;x \) is one-to-one over the interval \([0,\pi] \), as we see in the graph below:
Thus, \(y=\cos^{-1} x \) is a function whose domain is the interval \([-1,1] \) and whose range is the interval \([0,\pi] \). In other words:
\[\begin{alignat}{3}
\cos^{-1} (\cos\;y) ~&=~ y \quad&&\text{for \(0 \le y \le \pi\)}\label{eqn:arccos1}\\ \cos\;(\cos^{-1} x) ~&=~ x \quad&&\text{for \(-1 \le x \le 1\)}\label{eqn:arccos2} \end{alignat}\]
The graph of \(y=\cos^{-1} x \) is shown below in Figure 5.3.7. Notice the symmetry about the line \(y=x \) with the graph of \(y=\cos\;x \).
Example 5.15
Find \(\cos^{-1} \left(\cos\;\frac{\pi}{3}\right) \).
Solution:
Since \(0 \le \frac{\pi}{3} \le \pi \), we know that \(\cos^{-1} \left(\cos\;\frac{\pi}{3}\right) = \boxed{\frac{\pi}{3}}\; \), by Equation \ref{eqn:arccos1}.
Example 5.16
Find \(\cos^{-1} \left(\cos\;\frac{4\pi}{3}\right) \).
Solution:
Since \(\frac{4\pi}{3} > \pi \), we cannot use Equation \ref{eqn:arccos1}. But we know that \(\cos\;\frac{4\pi}{3} = -\frac{1}{2} \). Thus, \(\cos^{-1} \left(\cos\;\frac{4\pi}{3}\right) = \cos^{-1} \left( -\frac{1}{2} \right) \) is, by definition, the angle \(y \) such that \(0 \le y \le \pi \) and \(\cos\;y = -\frac{1}{2} \). That angle is \(y=\frac{2\pi}{3} \) (i.e. \(120^\circ\)). Thus, \(\cos^{-1} \left(\cos\;\frac{4\pi}{3}\right) = \boxed{\tfrac{2\pi}{3}}\; \).
Examples 5.14 and 5.16 may be confusing, since they seem to violate the general rule for inverse functions that \(f^{-1}(f(x)) = x \) for all \(x \) in the domain of \(f \). But that rule only applies when the function \(f \) is one-to-one over its
entire domain. We had to restrict the sine and cosine functions to very small subsets of their entire domains in order for those functions to be one-to-one. That general rule, therefore, only holds for \(x \) in those small subsets in the case of the inverse sine and inverse cosine.
The
inverse tangent function \(y=\tan^{-1} x \) (sometimes called the arc tangent and denoted by \(y=\arctan\;x\)) can be determined similarly. The function \(y=\tan\;x \) is one-to-one over the interval \(\left( -\frac{\pi}{2},\frac{\pi}{2} \right) \), as we see in Figure 5.3.8:
The graph of \(y=\tan^{-1} x \) is shown below in Figure 5.3.9. Notice that the vertical asymptotes for \(y=\tan\;x \) become horizontal asymptotes for \(y=\tan^{-1} x \). Note also the symmetry about the line \(y=x \) with the graph of \(y=\tan\;x \).
Thus, \(y=\tan^{-1} x \) is a function whose domain is the set of all real numbers and whose range is the interval \(\left( -\frac{\pi}{2},\frac{\pi}{2} \right) \). In other words:
\[\begin{alignat}{3}
\tan^{-1} (\tan\;y) ~&=~ y \quad&&\text{for \(-\tfrac{\pi}{2} < y < \tfrac{\pi}{2}\)}\label{eqn:arctan1}\\ \tan\;(\tan^{-1} x) ~&=~ x \quad&&\text{for all real \(x\)}\label{eqn:arctan2} \end{alignat}\]
Example 5.17
Find \(\tan^{-1} \left(\tan\;\frac{\pi}{4}\right) \).
Solution:
Since \(-\tfrac{\pi}{2} < \tfrac{\pi}{4} < \tfrac{\pi}{2} \), we know that \(\tan^{-1} \left(\tan\;\frac{\pi}{4}\right) = \boxed{\frac{\pi}{4}}\; \), by Equation \ref{eqn:arctan1}.
Example 5.18
Find \(\tan^{-1} \left(\tan\;\pi\right) \).
Solution:
Since \(\pi > \tfrac{\pi}{2} \), we cannot use Equation \ref{eqn:arctan1}. But we know that \(\tan\;\pi = 0 \). Thus, \(\tan^{-1} \left(\tan\;\pi\right) = \tan^{-1} 0 \) is, by definition, the angle \(y \) such that \(-\tfrac{\pi}{2} < y < \tfrac{\pi}{2} \) and \(\tan\;y = 0 \). That angle is \(y=0 \). Thus, \(\tan^{-1} \left(\tan\;\pi \right) = \boxed{0}\; \).
Example 5.19
Find the exact value of \(\cos\;\left(\sin^{-1}\;\left(-\frac{1}{4}\right)\right) \).
Solution:
Let \(\theta = \sin^{-1}\;\left(-\frac{1}{4}\right) \). We know that \(-\tfrac{\pi}{2} \le \theta \le \tfrac{\pi}{2} \), so since \(\sin\;\theta = -\frac{1}{4} < 0 \), \(\theta \) must be in QIV. Hence \(\cos\;\theta > 0 \). Thus,
\[ \cos^2 \;\theta ~=~ 1 ~-~ \sin^2 \;\theta ~=~ 1 ~-~ \left( -\frac{1}{4} \right)^2 ~=~\frac{15}{16}
\quad\Rightarrow\quad \cos\;\theta ~=~ \frac{\sqrt{15}}{4} ~. \nonumber \]
Note that we took the positive square root above since \(\cos\;\theta > 0 \). Thus, \(\cos\;\left(\sin^{-1}\;\left(-\frac{1}{4}\right)\right) = \boxed{\frac{\sqrt{15}}{4}}\; \).
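A quick numeric check of this exact value in Python:

```python
import math

theta = math.asin(-0.25)    # theta is in QIV, so cos(theta) > 0
print(math.cos(theta))      # ~ 0.9682458
print(math.sqrt(15) / 4)    # ~ 0.9682458, the exact value sqrt(15)/4
```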
Example 5.20
Show that \(\tan\;(\sin^{-1} x) = \dfrac{x}{\sqrt{1 - x^2}} \) for \(-1 < x < 1 \).
Solution:
When \(x=0 \), the Equation holds trivially, since
\[\nonumber \tan\;(\sin^{-1} 0) ~=~ \tan\;0 ~=~ 0 ~=~ \dfrac{0}{\sqrt{1 - 0^2}} ~.\]
Now suppose that \(0 < x < 1 \). Let \(\theta = \sin^{-1} x \). Then \(\theta \) is in QI and \(\sin\;\theta = x \). Draw a right triangle with an angle \(\theta \) such that the opposite leg has length \(x \) and the hypotenuse has length \(1 \), as in Figure 5.3.10 (note that this is possible since \(0 < x < 1\)). Then \(\sin\;\theta = \frac{x}{1} = x \). By the Pythagorean Theorem, the adjacent leg has length \(\sqrt{1 - x^2} \). Thus, \(\tan\;\theta = \frac{x}{\sqrt{1 - x^2}} \).
If \(-1 < x < 0 \) then \(\theta = \sin^{-1} x \) is in QIV. So we can draw the same triangle except that it would be "upside down'' and we would again have \(\tan\;\theta = \frac{x}{\sqrt{1 - x^2}} \), since the tangent and sine have the same sign (negative) in QIV. Thus, \(\tan\;(\sin^{-1} x) = \dfrac{x}{\sqrt{1 - x^2}} \) for \(-1 < x < 1 \).
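The identity can be spot-checked numerically over a grid of points in \((-1,1)\) (a Python sketch):

```python
import math

for k in range(-9, 10):
    x = k / 10.0                       # x in {-0.9, -0.8, ..., 0.9}
    lhs = math.tan(math.asin(x))
    rhs = x / math.sqrt(1.0 - x * x)
    assert abs(lhs - rhs) < 1e-9, (x, lhs, rhs)
print("tan(asin(x)) = x / sqrt(1 - x^2) holds on the grid")
```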
The inverse functions for cotangent, cosecant, and secant can be determined by looking at their graphs. For example, the function \(y=\cot\;x \) is one-to-one in the interval \((0,\pi) \), where it has a range equal to the set of all real numbers. Thus, the
inverse cotangent \(y=\cot^{-1} x \) is a function whose domain is the set of all real numbers and whose range is the interval \((0,\pi) \). In other words:
\[\begin{alignat}{3}
\cot^{-1} (\cot\;y) ~&=~ y \quad&&\text{for \(0 < y < \pi\)}\label{eqn:arccot1}\\ \cot\;(\cot^{-1} x) ~&=~ x \quad&&\text{for all real \(x\)}\label{eqn:arccot2} \end{alignat}\]
The graph of \(y=\cot^{-1} x \) is shown below in Figure 5.3.11.
Similarly, it can be shown that the
inverse cosecant \(y=\csc^{-1} x \) is a function whose domain is \(|x| \ge 1 \) and whose range is \(-\frac{\pi}{2} \le y \le \frac{\pi}{2} \), \(y \ne 0 \). Likewise, the inverse secant \(y=\sec^{-1} x \) is a function whose domain is \(|x| \ge 1 \) and whose range is \(0 \le y \le \pi \), \(y \ne \frac{\pi}{2} \).
\[\begin{alignat}{3}
\csc^{-1} (\csc\;y) ~&=~ y \quad&&\text{for \(-\frac{\pi}{2} \le y \le \frac{\pi}{2} \), \(y \ne 0\)}\label{eqn:arccsc1}\\ \csc\;(\csc^{-1} x) ~&=~ x \quad&&\text{for \(|x| \ge 1\)}\label{eqn:arccsc2} \end{alignat}\]
\[\begin{alignat}{3}
\sec^{-1} (\sec\;y) ~&=~ y \quad&&\text{for \(0 \le y \le \pi \), \(y \ne \frac{\pi}{2}\)}\label{eqn:arcsec1}\\ \sec\;(\sec^{-1} x) ~&=~ x \quad&&\text{for \(|x| \ge 1\)}\label{eqn:arcsec2} \end{alignat}\]
It is also common to call \(\cot^{-1} x \), \(\csc^{-1} x \), and \(\sec^{-1} x \) the
arc cotangent, arc cosecant, and arc secant, respectively, of \(x \). The graphs of \(y=\csc^{-1} x \) and \(y=\sec^{-1} x \) are shown in Figure 5.3.12:
Example 5.21
Prove the identity \(\tan^{-1} x \;+\; \cot^{-1} x ~=~ \frac{\pi}{2} \).
Solution:
Let \(\theta = \cot^{-1} x \). Using relations from Section 1.5, we have
\[\nonumber \tan\;\left( \tfrac{\pi}{2} - \theta \right) ~=~ -\tan\;\left( \theta - \tfrac{\pi}{2} \right)
~=~ \cot\;\theta ~=~ \cot\;(\cot^{-1} x) ~=~ x ~,\]
by Equation \ref{eqn:arccot2}. So since \(\tan\;(\tan^{-1} x) = x \) for all \(x \), this means that \(\tan\;(\tan^{-1} x) = \tan\;\left( \tfrac{\pi}{2} - \theta \right) \). Thus, \(\tan\;(\tan^{-1} x) = \tan\;\left( \tfrac{\pi}{2} - \cot^{-1} x \right) \). Now, we know that \(0 < \cot^{-1} x < \pi \), so \(-\tfrac{\pi}{2} < \tfrac{\pi}{2} - \cot^{-1} x < \tfrac{\pi}{2} \), i.e. \(\tfrac{\pi}{2} - \cot^{-1} x \) is in the restricted subset on which the tangent function is one-to-one. Hence, \(\tan\;(\tan^{-1} x) = \tan\;\left( \tfrac{\pi}{2} - \cot^{-1} x \right)\) implies that \(\tan^{-1} x = \tfrac{\pi}{2} - \cot^{-1} x \), which proves the identity.
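Most programming languages lack a built-in inverse cotangent, but one matching the range \((0,\pi)\) used here is easy to define, after which the identity can be verified numerically (a Python sketch):

```python
import math

def acot(x):
    # Inverse cotangent with range (0, pi), matching the convention above
    if x > 0:
        return math.atan(1.0 / x)
    if x < 0:
        return math.pi + math.atan(1.0 / x)
    return math.pi / 2.0

for x in (-5.0, -1.0, -0.3, 0.0, 0.3, 1.0, 5.0):
    assert abs(math.atan(x) + acot(x) - math.pi / 2.0) < 1e-12
print("tan^-1 x + cot^-1 x = pi/2 verified on sample points")
```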
Example 5.22
Is \(\;\tan^{-1} a \;+\; \tan^{-1} b ~=~ \tan^{-1} \left( \dfrac{a+b}{1-ab} \right)\; \) an identity?
Solution:
In the tangent addition Equation \(\tan\;(A+B) = \dfrac{\tan\;A \;+\; \tan\;B}{1 \;-\; \tan\;A~\tan\;B} \), let \(A = \tan^{-1} a \) and \(B = \tan^{-1} b \). Then
\[\nonumber \begin{align*}
\tan\;(\tan^{-1} a \;+\; \tan^{-1} b ) ~&=~ \dfrac{\tan\;(\tan^{-1} a) \;+\; \tan\;(\tan^{-1} b)}{1 \;-\; \tan\;(\tan^{-1} a)~\tan\;(\tan^{-1} b)}\\ \nonumber &=~ \dfrac{a+b}{1-ab}\qquad\text{by Equation \ref{eqn:arctan2}, so it seems that we have}\\ \nonumber \tan^{-1} a \;+\; \tan^{-1} b ~&=~ \tan^{-1} \left( \dfrac{a+b}{1-ab} \right) \end{align*}\]
by definition of the inverse tangent. However, recall that \(-\tfrac{\pi}{2} < \tan^{-1} x < \tfrac{\pi}{2} \) for all real numbers \(x \). So in particular, we must have \(-\tfrac{\pi}{2} < \tan^{-1} \left( \frac{a+b}{1-ab} \right) < \tfrac{\pi}{2} \). But it is possible that \(\tan^{-1} a \;+\; \tan^{-1} b \) is
not in the interval \(\left(-\tfrac{\pi}{2},\tfrac{\pi}{2}\right) \). For example,
\[ \tan^{-1} 1 \;+\; \tan^{-1} 2 ~=~ 1.892547 ~>~ \tfrac{\pi}{2} \approx 1.570796 ~.\nonumber \]
And we see that \(\tan^{-1} \left( \frac{1+2}{1-(1)(2)} \right) = \tan^{-1} (-3) = -1.249045 \ne \tan^{-1} 1 \;+\; \tan^{-1} 2 \). So the Equation is only true when \(-\tfrac{\pi}{2} < \tan^{-1} a \;+\; \tan^{-1} b < \tfrac{\pi}{2} \).
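The counterexample is easy to reproduce numerically in Python; note that for \(a,b>0\) with \(ab>1\) the two sides differ by exactly \(\pi\):

```python
import math

a, b = 1.0, 2.0
lhs = math.atan(a) + math.atan(b)          # ~ 1.892547, outside (-pi/2, pi/2)
rhs = math.atan((a + b) / (1.0 - a * b))   # = atan(-3) ~ -1.249046
print(lhs, rhs)
print(lhs - rhs)                           # exactly pi: the branch the formula drops
```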
I am trying to simulate heating and melting of a steel plate by means of FEM. The model is based on the nonlinear heat conduction equation in the axially symmetric case.
The problem statement is as follows: $$\rho c_{eff}\frac{\partial T}{\partial t}= \frac{1}{r}\frac{\partial}{\partial r}\left(r\lambda \frac{\partial T}{\partial r} \right) + \frac{\partial}{\partial z}\left(\lambda \frac{\partial T}{\partial z} \right),\\0\leq r\leq L_r,~0\leq z\leq L_z,~0\leq t\leq t_f$$$$\lambda \frac{\partial T}{\partial z}\Bigg|_{z=L_z}=q_{0}\exp(-a r^2),~~\frac{\partial T}{\partial r}\Bigg|_{r=L_r}=0,~~ T|_{z=0}=T_0\\T(0,r,z)=T_0$$
To take into account latent heat of fusion $L$ the effective heat capacity is introduced $c_{eff}=c_{s}(1-\phi)+c_{l}\phi+ L\frac{d \phi}{dT} $, where $\phi$ is a fraction of liquid phase, $ c_s, c_l $ are the heat capacity of solid and liquid phase respectively. Smoothed Heaviside function
$$h(x,\delta)=\left\{\begin{array}{lll} 0,& x<-\delta\\ 0.5\left(1+\frac{x}{\delta}+\frac{1}{\pi}\sin\left(\frac{\pi x}{\delta}\right) \right), &\mid x\mid\leq \delta\\ 1,& x>\delta \end{array} \right.$$
is employed to describe mushy zone so that $\phi(T)=h(T-T_m,\Delta T_{m}/2)$, where $T_m$ and $\Delta T_m$ are melting temperature and melting range respectively. FE approximation is used for spatial discretization of PDE whereas time derivative is approximated by means of first order finite difference scheme: $$\left.\frac{\partial T}{\partial t}\right|_{t=t^{k}} \approx \frac{T(t^k,r,z)-T(t^{k-1},r,z)}{\tau}$$
where $\tau$ is the time step size. For the calculation of $c_{eff}$ at the $k$-th time step, the temperature field from the $(k-1)$-th time step is utilized. After discretization in time one can rewrite the equation:
$$c_{eff}\left(T(t^{k-1},r,z)\right) \frac{T(t^k,r,z)-T(t^{k-1},r,z)}{\tau}=\frac{1}{r}\frac{\partial}{\partial r}\left(r\lambda \frac{\partial T(t^k,r,z)}{\partial r} \right) + \frac{\partial}{\partial z}\left(\lambda \frac{\partial T(t^k,r,z)}{\partial z} \right)$$
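The lagged-coefficient scheme itself is independent of the FEM machinery. Below is a minimal 1-D finite-difference sketch of the same idea (implicit Euler in time, $c_{eff}$ evaluated at the previous step); the grid, material law, and boundary handling here are illustrative assumptions, not the actual model above:

```python
import numpy as np

# Illustrative parameters (not the ones from the question)
n, L, lam, rho = 51, 1.0, 1.0, 1.0
dx = L / (n - 1)
tau = 1e-3

def c_eff(T):
    # Placeholder effective heat capacity, smooth in T (stands in for the
    # c_s / c_l mixture and the latent-heat bump of the question)
    return 1.0 + 0.5 * np.tanh(T - 1.0)

# Tridiagonal Laplacian with insulated ends (homogeneous Neumann, ghost points)
A_lap = np.zeros((n, n))
for i in range(n):
    A_lap[i, i] = -2.0
    if i > 0:
        A_lap[i, i - 1] = 1.0
    if i < n - 1:
        A_lap[i, i + 1] = 1.0
A_lap[0, 1] = A_lap[-1, -2] = 2.0
A_lap *= lam / dx**2

T = np.linspace(0.0, 2.0, n)      # initial temperature profile
for step in range(200):
    c = rho * c_eff(T)            # coefficients lagged to step k-1
    # Implicit Euler: (diag(c)/tau - A_lap) T_new = (c/tau) * T_old
    M = np.diag(c / tau) - A_lap
    T = np.linalg.solve(M, c / tau * T)
print(T.min(), T.max())           # profile relaxes toward a constant
```

The expensive part in the question, reassembling the damping matrix each step from the lagged field, corresponds to recomputing `np.diag(c / tau)` here; the stiffness part (`A_lap`) never changes and need not be rebuilt.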
At each time step the DampingCoefficients entry is updated in InitializePDECoefficients[], using an interpolation for $c_{eff}$. This approach leads to a significant growth of computational time in comparison with the solution of the linear problem where $c_{eff}$ = const. I also tried to use ElementMarker to set a certain value of $c_{eff}$ in each element. That approach avoids the interpolation, but the computation time becomes even larger. This last fact I cannot understand at all: as I see it, the duration of the FE matrix assembly should decrease when the interpolation for $c_{eff}$ is avoided.
Needs["NDSolve`FEM`"];
Needs["DifferentialEquations`NDSolveProblems`"];
Needs["DifferentialEquations`NDSolveUtilities`"];
Setting of the computational domain dimensions and mesh generation:
Lr = 2*10^-2; (*dimension of computational domain in r-direction*)
Lz = 10^-2; (*dimension of computational domain in z-direction*)
mesh = ToElementMesh[FullRegion[2], {{0, Lr}, {0, Lz}}, MaxCellMeasure -> {"Length" -> Lr/50}, "MeshOrder" -> 1]
mesh["Wireframe"]
Input parameters of the model:
lambda = 22; (*heat conductivity*)
density = 7200; (*density*)
Cs = 700; (*specific heat capacity of solid*)
Cl = 780; (*specific heat capacity of liquid*)
LatHeat = 272*10^3; (*latent heat of fusion*)
Tliq = 1812; (*melting temperature*)
MeltRange = 100; (*melting range*)
To = 300; (*initial temperature*)
SPow = 1000; (*source power*)
R = Lr/4; (*radius of heat source spot*)
a = Log[100]/R^2;
qo = (SPow*a)/Pi;
q[r_] := qo*Exp[-r^2*a]; (*heat flux distribution*)
tau = 10^-3; (*time step size*)
ProcDur = 0.2; (*process duration*)
Smoothed Heaviside function:
Heviside[x_, delta_] := Module[{res}, res = Piecewise[ { {0, x < -delta}, {0.5*(1 + x/delta + 1/Pi*Sin[(Pi*x)/delta]), Abs[x] <= delta}, {1, x > delta} } ]; res ]
Smoothed Heaviside function derivative:
HevisideDeriv[x_, delta_] := Module[{res}, res = Piecewise[ { {0, Abs[x] > delta}, {1/(2*delta)*(1 + Cos[(Pi*x)/delta]), Abs[x] <= delta} } ]; res ]
Effective heat capacity:
EffectHeatCapac[tempr_] := Module[{phase}, phase = Heviside[tempr - Tliq, MeltRange/2]; Cs*(1 - phase) + Cl*phase +LatHeat*HevisideDeriv[tempr - Tliq, 0.5*MeltRange] ]
Numerical solution of PDE:
ts = AbsoluteTime[];
vd = NDSolve`VariableData[{"DependentVariables" -> {u}, "Space" -> {r, z}, "Time" -> t}];
sd = NDSolve`SolutionData[{"Space", "Time"} -> {ToNumericalRegion[mesh], 0.}];
DirichCond = DirichletCondition[u[t, r, z] == To, z == 0];
NeumCond = NeumannValue[q[r], z == Lz];
initBCs = InitializeBoundaryConditions[vd, sd, {{DirichCond, NeumCond}}];
methodData = InitializePDEMethodData[vd, sd];
discreteBCs = DiscretizeBoundaryConditions[initBCs, methodData, sd];
xlast = Table[{To}, {methodData["DegreesOfFreedom"]}];
TemprField = ElementMeshInterpolation[{mesh}, xlast];
NumTimeStep = Floor[ProcDur/tau];
For[i = 1, i <= NumTimeStep, i++,
 (* (*Setting of PDE coefficients for linear problem*)
 pdeCoefficients = InitializePDECoefficients[vd, sd,
   "ConvectionCoefficients" -> {{{{-lambda/r, 0}}}},
   "DiffusionCoefficients" -> {{-lambda*IdentityMatrix[2]}},
   "DampingCoefficients" -> {{Cs*density}}]; *)
 (*Setting of PDE coefficients for nonlinear problem*)
 pdeCoefficients = InitializePDECoefficients[vd, sd,
   "ConvectionCoefficients" -> {{{{-(lambda/r), 0}}}},
   "DiffusionCoefficients" -> {{-lambda*IdentityMatrix[2]}},
   "DampingCoefficients" -> {{EffectHeatCapac[TemprField[r, z]]*density}}];
 discretePDE = DiscretizePDE[pdeCoefficients, methodData, sd];
 {load, stiffness, damping, mass} = discretePDE["SystemMatrices"];
 DeployBoundaryConditions[{load, stiffness, damping}, discreteBCs];
 A = damping/tau + stiffness;
 b = load + damping.xlast/tau;
 x = LinearSolve[A, b, Method -> {"Krylov", Method -> "BiCGSTAB", "Preconditioner" -> "ILU0", "StartingVector" -> Flatten[xlast, 1]}];
 TemprField = ElementMeshInterpolation[{mesh}, x];
 xlast = x;
]
te = AbsoluteTime[];
te - ts
Visualization of the calculation results
ContourPlot[TemprField[r, z], {r, z} \[Element] mesh, AspectRatio -> Lz/Lr, ColorFunction -> "TemperatureMap", Contours -> 50, PlotRange -> All, PlotLegends -> Placed[Automatic, After], FrameLabel -> {"r", "z"}, PlotPoints -> 50, PlotLabel -> "Temperature field", BaseStyle -> 16]
On my laptop the computation times are 63 sec and 2.17 sec for the nonlinear and linear problems, respectively. This question can be generalized to the case $\lambda=\lambda(T)$. I would appreciate it if anyone could show me a good way that leads to time savings. Thanks in advance for your help.
Now showing items 1-2 of 2
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
@mickep I'm pretty sure that malicious actors knew about this long before I checked it. My own server gets scanned by about 200 different people for vulnerabilities every day and I'm not even running anything with a lot of traffic.
@JosephWright @barbarabeeton @PauloCereda I thought we could create a golfing TeX extension, it would basically be a TeX format, just the first byte of the file would be an indicator of how to treat input and output or what to load by default. I thought of the name: Golf of TeX, shortened as GoT :-)
@PauloCereda Well, it has to be clever. You for instance need quick access to defining new cs, something like (I know this won't work, but you get the idea) \catcode`\@=13\def@{\def@##1\bgroup} so that when you use @Hello #1} it expands to \def@#1{Hello #1}
If you use the d'Alembert operator as well, you might find it pretty to use the symbol \bigtriangleup for your Laplace operator, in order to get a look similar to the \Box symbol that is used for the d'Alembertian. In the following, a tricky construction with \mathop and \mathbin is used to get the...
LaTeX exports. I am looking for a hint on this. I've tried everything I could find but no solution yet. I read equations from files generated by CAS programs. I can't edit these or modify them in any way. Some of these are too long; some are not. To make them fit in the page width, I tried resizebox. The problem is that this resizes the small equations as well as the long ones to fit the page width, which is not what I want. I want to resize only the ones that are longer than the page width and keep the others as is. Is there a way in LaTeX to do this? Again, I do not know beforehand the size of…
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\begin{document}
\begin{equation*}
\resizebox{\textwidth}{!}{$\begin{split}y &= \sin^2 x + \cos^2 x\\x &= 5\end{split}$}
\end{equation*}
\end{document}
The above will resize the small equation, which I do not want. But since I do not know beforehand how long the equation is, I apply the resize to every one. Is there a way to find out in LaTeX, using some command, whether an equation "will fit" the page width, or how long it is? If so, I can add logic to resize only when needed.
What I mean is: I want to resize DOWN only if needed, and not resize UP. Also, if you think I should ask this on the main board, I can, but I thought to check here first.
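One way to do exactly this (a sketch, not tested against Maple's actual output): measure the formula first with \settowidth, then compare against \linewidth with \ifdim, shrinking only when it overflows. The macro name \fitmath and the length name \eqboxwidth are inventions for illustration:

```latex
\documentclass[12pt]{article}
\usepackage{amsmath,graphicx}

% \fitmath{<math material>}: typeset at natural size if it fits the line,
% otherwise scale it down to \linewidth. Only shrinks, never enlarges.
\newlength{\eqboxwidth}
\newcommand{\fitmath}[1]{%
  \settowidth{\eqboxwidth}{$\displaystyle #1$}%
  \ifdim\eqboxwidth>\linewidth
    \resizebox{\linewidth}{!}{$\displaystyle #1$}%
  \else
    $\displaystyle #1$%
  \fi}

\begin{document}
\begin{equation*}
  \fitmath{x = 5} % short: left at natural size
\end{equation*}
\end{document}
```

The CAS output could then be wrapped as \fitmath{...} mechanically, without editing the formulas themselves. Multi-line material such as split environments would need extra care, since \settowidth boxes the argument as a single line.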
@egreg what other options do I have? Sometimes the CAS generates an equation which does not fit the page. Now it overflows the page and one can't see the rest of it at all. Since in the pdf one can zoom in a little, at least one can see it if needed. It is impossible to edit or modify these by hand, as this is all done using a program.
@UlrikeFischer I do not generate unreadable equations. These are solutions of ODEs. The LaTeX is generated by Maple. Some of them are longer than the page width. That is all. So what is your suggestion? Let the long solutions flow out of the page? I can't edit these by hand; this is all generated by a program. I can add LaTeX code around them, that is all. But editing them is out of the question. I tried the breqn package, but that did not work. It broke many things as well.
@egreg That was just an example, something I added by hand to make up a long equation for illustration. It was not a real solution of an ODE. Again, thanks for the effort, but I can't edit the generated LaTeX by hand at all. It would take me a year to do. And I run the program many times each day; each time, all the LaTeX files are overwritten anyway.
CAS programs do not generate good LaTeX either. That is why breqn did not work: many times they add {} around large expressions, which makes breqn unable to break them. Also breqn has many other problems, so I no longer use it at all.
2019-09-04 12:06
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019 Detailed record - Similar records 2019-08-15 17:39
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018 Detailed record - Similar records 2019-08-15 17:36
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018 Detailed record - Similar records 2019-05-15 16:57 Detailed record - Similar records 2019-02-12 14:01
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The latest years have observed a resurrection of interest in searches for exotic states motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018 Detailed record - Similar records 2019-01-21 09:59
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018 Detailed record - Similar records 2019-01-15 14:22
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018 Detailed record - Similar records 2019-01-10 15:54 Detailed record - Similar records 2018-12-20 16:31
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of "online" and "offline" analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that Online data are immediately available offline for physics analysis (Turbo analysis). The computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}~1~\text{MeV}~n_{eq}~\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
Asymptotic expansion of the mean-field approximation
1.
CMLS, Ecole polytechnique, CNRS, Université Paris-Saclay, 91128 Palaiseau Cedex, France
2.
International Research Center on the Mathematics and Mechanics of Complex Systems, MeMoCS, University of L'Aquila, Italy
We consider the $ N $-body quantum evolution of a particle system in the mean-field approximation. We show that the $ j $th order marginals $ F^N_j(t) $, for factorized initial data $ F(0)^{\otimes N} $, are explicitly expressed, modulo $ N^{-\infty} $, in terms of the solution $ F(t) $ of the corresponding non-linear mean-field equation and the solution of its linearization around $ F(t) $. The result is valid for all times $ t $, uniformly in $ j = O(N^{\frac12-\alpha}) $ for any $ \alpha>0 $. We establish and estimate the full asymptotic expansion in integer powers of $ \frac1N $ of $ F^N_j(t) $, $ j = O(\sqrt N) $, whose computation at order $ n $ involves a finite number of operations depending on $ j $ and $ n $ but not on $ N $. Our results are also valid for more general models including Kac models. As a by-product we get that the rate of convergence to the mean-field limit in $ \frac1N $ is optimal in the sense that the first correction to the mean-field limit does not vanish.
Keywords: Mean-field limit, quantum mechanics, Kac model, asymptotic expansion, mathematical physics.
Mathematics Subject Classification: 35Q83, 35Q20, 35Q40, 34E05.
Citation: Thierry Paul, Mario Pulvirenti. Asymptotic expansion of the mean-field approximation. Discrete & Continuous Dynamical Systems - A, 2019, 39 (4) : 1891-1921. doi: 10.3934/dcds.2019080
I would like to solve a dense linear system of the form $$L\left(\boldsymbol{x}\right):=\left[\gamma^+\left[\boldsymbol{A}+\frac{1}{2}\boldsymbol{B}^{-1}\right]+\gamma^-\left[\boldsymbol{A}-\frac{1}{2}\boldsymbol{B}^{-1}\right]\right]\cdot\boldsymbol{x}=\boldsymbol{b}$$ in Python. I thought it would be a good idea not to explicitly compute the inverse of $\boldsymbol{B}$. Therefore, I implemented the operator $L$ and solve $L\left(\boldsymbol{x}\right)=\boldsymbol{b}$ using Krylov methods like CG or GMRES. I am using the scipy.sparse.linalg.LinearOperator class for the operator (see the docs here). The product $\boldsymbol{B}^{-1}\cdot\boldsymbol{x}$ is computed by another Krylov iteration or an LU decomposition, depending on the system size.
However, for larger problems I would like to improve the rate of convergence of the outer iteration. I have neither sparse matrices nor an explicit representation of my matrix. Therefore, as far as I know, the classical preconditioners for Krylov methods, like ILU or Jacobi's method, are not applicable.
Are there other methods which can be used? And are there Python libraries for these methods?
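For reference, one workable matrix-free pattern is to factor $\boldsymbol{B}$ once and build the outer operator around that factorization. The sketch below is not from the question: the matrices, sizes, and $\gamma^{\pm}$ values are placeholders chosen only so the example runs.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 50
# placeholder symmetric positive definite A and B (stand-ins for the real operators)
A = rng.standard_normal((n, n)); A = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)
gp, gm = 2.0, 0.5  # placeholder values of gamma^+ and gamma^-

B_lu = lu_factor(B)  # factor B once; each B^{-1} x is then a cheap triangular solve

def L_matvec(x):
    """Apply L(x) = gamma^+ (A + B^{-1}/2) x + gamma^- (A - B^{-1}/2) x."""
    Binv_x = lu_solve(B_lu, x)
    return gp * (A @ x + 0.5 * Binv_x) + gm * (A @ x - 0.5 * Binv_x)

L = LinearOperator((n, n), matvec=L_matvec)
b = rng.standard_normal(n)
x, info = gmres(L, b)  # info == 0 signals convergence
```

On preconditioning: `gmres` and `cg` accept a preconditioner through the `M` argument, which can itself be a `LinearOperator`, so any cheap approximate solve for $L$ (for example, ignoring the $\boldsymbol{B}^{-1}$ term, or a diagonal estimate obtained by probing the operator) can be supplied without ever forming a matrix.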
In the last section, we saw how to determine if a real number was a zero of a polynomial. In this section, we will learn how to find good candidates to test using synthetic division. In the days before graphing technology was commonplace, mathematicians discovered a lot of clever tricks for determining the likely locations of zeros. Technology has provided a much simpler approach to narrow down potential candidates, but it is not always sufficient by itself. For example, the function shown to the right does not have any clear intercepts.
There are two results that can help us identify where the zeros of a polynomial are. The first gives us an interval on which all the real zeros of a polynomial can be found.
Definition: Cauchy’s Bound
Given a polynomial \(f(x)=a_{n} x^{n} +a_{n-1} x^{n-1} +\cdots +a_{1} x+a_{0},\) let \(M\) be the largest of the coefficients in absolute value. Then all the real zeros of \(f(x)\) lie in the interval
\(\left[-\frac{M}{\left|a_{n} \right|} -1,\quad \frac{M}{\left|a_{n} \right|} +1\right] \label{Cauchy}\)
Example 1
Let \(f(x)=2x^{4} +4x^{3} -x^{2} -6x-3\). Determine an interval which contains all the real zeros of \(f\).
Solution
To find the \(M\) from Cauchy’s Bound, we take the absolute value of the coefficients and pick the largest, in this case \(\left|-6\right|=6\). Divide this by the absolute value of the leading coefficient, 2, to get 3. All the real zeros of \(f\) lie in the interval
\[\left[-\frac{6}{\left|2\right|} -1,\quad \frac{6}{\left|2\right|} +1\right]=\left[-3-1,\quad 3+1\right]=[-4,\; 4]. \nonumber\]
Knowing this bound can be very helpful when using a graphing calculator, since we can use it to set the display bounds. This helps avoid missing a zero because it is graphed outside of the viewing window.
Exercise
Determine an interval which contains all the real zeros of \(f(x)=3x^{3} -12x^{2} +6x-8\).
Answer
The maximum coefficient in absolute value is 12. Cauchy’s Bound for all real zeros is
\(\left[-\frac{12}{\left|3\right|} -1,\quad \frac{12}{\left|3\right|} +1\right]=[-5,5]\)
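The bound is mechanical enough to compute programmatically. A short sketch (not part of the original text), with the coefficient list ordered from leading to constant:

```python
def cauchy_bound(coeffs):
    """Interval [-r, r] containing all real zeros of the polynomial.
    coeffs = [a_n, a_{n-1}, ..., a_0], with a_n != 0."""
    M = max(abs(c) for c in coeffs)  # largest coefficient in absolute value
    r = M / abs(coeffs[0]) + 1       # M / |a_n| + 1
    return (-r, r)

print(cauchy_bound([2, 4, -1, -6, -3]))  # Example 1: (-4.0, 4.0)
print(cauchy_bound([3, -12, 6, -8]))     # the exercise above: (-5.0, 5.0)
```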
Now that we know where we can find the real zeros, we still need a list of possible real zeros. The Rational Roots Theorem provides us a list of potential integer and rational zeros.
Rational Roots Theorem
Given a polynomial \(f(x)=a_{n} x^{n} +a_{n-1} x^{n-1} +\cdots +a_{1} x+a_{0}\) with integer coefficients, if \(r\) is a rational zero of \(f\), then \(r\) is of the form \(r=\pm \frac{p}{q}\), where \(p\) is a factor of the constant term \(a_{0}\), and \(q\) is a factor of the leading coefficient, \(a_{n}\).
This gives us a list of numbers to try in our synthetic division, which is a nicer place to start than simply guessing. If none of the numbers in the list are zeros, then either the polynomial has no real zeros at all, or all the real zeros are irrational numbers.
Example 2
Let \(f(x)=2x^{4} +4x^{3} -x^{2} -6x-3\). Use the Rational Roots Theorem to list all the possible rational zeros of \(f(x)\).
Solution
To generate a complete list of rational zeros, we need to take each of the factors of the
constant term, \(a_{0} =-3\), and divide them by each of the factors of the leading coefficient \(a_{4} =2\). The factors of -3 are \(\pm 1\) and \(\pm 3\). Since the Rational Roots Theorem tacks on a \(\pm\) anyway, for the moment, we consider only the positive factors 1 and 3. The factors of 2 are 1 and 2, so the Rational Roots Theorem gives the list
\(\left\{\pm \frac{1}{1} ,\pm \frac{1}{2} ,\pm \frac{3}{1} ,\pm \frac{3}{2} \right\}\), or \(\left\{\pm 1,\pm \frac{1}{2} ,\pm 3,\pm \frac{3}{2} \right\}\)
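Generating this list is a small exercise in enumerating divisors. A sketch (not part of the original text) using exact fractions so no candidates are lost to rounding:

```python
from fractions import Fraction

def divisors(n):
    """Positive divisors of |n|."""
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_candidates(leading, constant):
    """Possible rational zeros p/q: p divides the constant term,
    q divides the leading coefficient; both signs included."""
    pos = {Fraction(p, q) for p in divisors(constant) for q in divisors(leading)}
    return sorted(pos | {-r for r in pos})

# f(x) = 2x^4 + 4x^3 - x^2 - 6x - 3: leading coefficient 2, constant term -3
print(rational_candidates(2, -3))  # the eight candidates ±1/2, ±1, ±3/2, ±3
```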
Now we can use synthetic division to test these possible zeros. To narrow the list first, we could use graphing technology to help us identify some good possibilities.
Example 3
Find the horizontal intercepts of \(f(x)=2x^{4} +4x^{3} -x^{2} -6x-3\).
Solution
From Example 1, we know that the real zeros lie in the interval [-4, 4]. Using a graphing calculator, we could set the window accordingly and get the graph below.
In Example 2, we learned that any rational zero must be on the list \(\left\{\pm 1,\pm \frac{1}{2} ,\pm 3,\pm \frac{3}{2} \right\}\). From the graph, it looks like -1 is a good possibility, so we try that using synthetic division.
Success! Remembering that \(f\) was a fourth degree polynomial, we know that our quotient is a third degree polynomial. If we can do one more successful division, we will have knocked the quotient down to a quadratic, and, if all else fails, we can use the quadratic formula to find the last two zeros. Since there seem to be no other rational zeros to try, we continue with -1. Also, the shape of the crossing at \(x = -1\) leads us to wonder if the zero \(x = -1\) has multiplicity 3.
Success again! Our quotient polynomial is now \(2x^{2} -3\). Setting this to zero gives \(2x^{2} -3=0\), so \(x=\pm \sqrt{\frac{3}{2} } =\pm \frac{\sqrt{6} }{2}\). Since a fourth degree polynomial can have at most four zeros, counting multiplicities, the intercept \(x = -1\) must have multiplicity 2, which we had found through division, and not 3 as we had guessed.
It is interesting to note that we could greatly improve on the graph of \(y=f(x)\) in the previous example given to us by the calculator. For instance, from our determination of the zeros of \(f\) and their multiplicities, we know the graph crosses at \(x=-\frac{\sqrt{6} }{2} \approx -1.22\) then turns back upwards to touch the \(x\)-axis at \(x = -1\). This tells us that, despite what the calculator showed us the first time, there is a relative maximum occurring at \(x = -1\) and not a "flattened crossing" as we originally believed.
After resizing the window, we see not only the relative maximum but also a relative minimum just to the left of \(x = -1\)
In this case, mathematics helped reveal something that was hidden in the initial graph.
Example 4
Find the real zeros of \(f(x)=4x^{3} -10x^{2} -2x+2\).
Solution
Cauchy’s Bound tells us that the real zeros lie in the interval \(\left[-\frac{10}{\left|4\right|} -1,\quad \frac{10}{\left|4\right|} +1\right]=[-3.5,\; 3.5]\).
Graphing on this interval reveals no clear integer zeros. Turning to the rational roots theorem, we need to take each of the factors of the constant term, \(a_{0} =2\), and divide them by each of the factors of the leading coefficient \(a_{3} =4\). The factors of 2 are 1 and 2. The factors of 4 are 1, 2, and 4, so the Rational Roots Theorem gives the list
\(\left\{\pm \frac{1}{1} ,\pm \frac{1}{2} ,\pm \frac{1}{4} ,\pm \frac{2}{1} ,\pm \frac{2}{2} ,\pm \frac{2}{4} \right\}\), or \(\left\{\pm 1,\pm \frac{1}{2} ,\pm \frac{1}{4} ,\pm 2\right\}\)
The two most likely candidates are \(\pm \frac{1}{2}\).
Trying \(\frac{1}{2}\),
The remainder is not zero, so this is not a zero. Trying \(-\frac{1}{2}\),
Success! This tells us \(4x^{3} -10x^{2} -2x+2=\left(x+\frac{1}{2} \right)\left(4x^{2} -12x+4\right)\), and that the graph has a horizontal intercept at \(x=-\frac{1}{2}\).
To find the remaining two intercepts, we can use the quadratic formula, setting \(4x^{2} -12x+4=0\). First, we might pull out the common factor, \(4\left(x^{2} -3x+1\right)=0\).
\(x=\frac{3\pm \sqrt{(-3)^{2} -4(1)(1)} }{2(1)} =\frac{3\pm \sqrt{5} }{2} \approx 2.618,\; \; 0.382\)
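Synthetic division itself is a three-line loop, which makes testing a long candidate list painless. A sketch (not part of the original text), with coefficients ordered from leading to constant:

```python
from fractions import Fraction

def synthetic_division(coeffs, r):
    """Divide the polynomial by (x - r).
    Returns (quotient_coeffs, remainder); r is a zero exactly when the
    remainder is 0."""
    row = [coeffs[0]]                # bring down the leading coefficient
    for c in coeffs[1:]:
        row.append(c + r * row[-1])  # multiply by r, add the next coefficient
    return row[:-1], row[-1]

# Example 4: f(x) = 4x^3 - 10x^2 - 2x + 2 at the candidate x = -1/2
q, rem = synthetic_division([4, -10, -2, 2], Fraction(-1, 2))
# quotient 4x^2 - 12x + 4 with remainder 0, matching the factorization above
```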
Exercise
Find the real zeros of \(f(x)=3x^{3} -x^{2} -6x+2\)
Answer
Cauchy’s Bound tells us the zeros lie in the interval \(\left[-\frac{6}{\left|3\right|} -1,\quad \frac{6}{\left|3\right|} +1\right]=[-3,3]\). The rational roots theorem tells us the possible rational zeros of the polynomial are on the list
\(\left\{\pm \frac{1}{1} ,\pm \frac{1}{3} ,\pm \frac{2}{1} ,\pm \frac{2}{3} \right\}=\left\{\pm 1,\pm \frac{1}{3} ,\pm 2,\pm \frac{2}{3} \right\}\).
Looking at a graph, the only likely candidate is \(\frac{1}{3}\).
Using synthetic division,
\(3x^{3} -x^{2} -6x+2=\left(x-\frac{1}{3} \right)\left(3x^{2} -6\right)=3\left(x-\frac{1}{3} \right)\left(x^{2} -2\right)\).
Solving \(x^{2} -2=0\) gives zeros \(x=\pm \sqrt{2}\).
The real zeros of the polynomial are \(x=\sqrt{2} ,\; -\sqrt{2} ,\; \frac{1}{3}\).
Important Topics of this Section Cauchy’s Bound for all real zeros of a polynomial Rational Roots Theorem Finding real zeros of a polynomial |
I'm looking for the complete set (x, y, z components) of the Navier-Stokes equations under the eddy viscosity hypothesis to model turbulent fluid flow.
I found the following, but I have a really hard time believing the transition from the next to last set of equations to the last set of equations. I used the eddy viscosity hypothesis but I could not get terms to cancel to give the last set of equations. Using the eddy viscosity hypothesis: \begin{equation} - \overline{u'_i u'_j} = \nu_t \, \left( \frac{\partial \overline{u_i}}{\partial x_j} + \frac{\partial \overline{u_j}}{\partial x_i} \right) - \frac{2}{3} k \delta_{ij} = \nu_t \, \left( \frac{\partial \overline{u_i}}{\partial x_j} + \frac{\partial \overline{u_j}}{\partial x_i} \right) -\frac{1}{3} \left( \overline{u'^2} + \overline{v'^2} \right)\delta_{ij} \end{equation}
$\overline{u'^2} = -2\nu_T \frac{\partial \overline{u}}{\partial x}$, $\overline{v'^2} = -2\nu_T \frac{\partial \bar{v}}{\partial y}$ and $\overline{u'v'} = -\nu_T ( \frac{\partial \bar{u}}{\partial y} + \frac{\partial \bar{v}}{\partial x} ) = \overline{v'u'}$
For example, in two-dimensional flow, for the x-momentum equation
\begin{equation} -\frac{\partial \overline{u'^2}}{\partial x} = 2\frac{\partial \nu_T}{\partial x}\frac{\partial \bar{u}}{\partial x} + 2\nu_T \frac{\partial^2 \bar{u}}{\partial x^2} \end{equation}
\begin{equation} -\frac{\partial \overline{u'v'} }{\partial y} = \frac{\partial \nu_T}{\partial y} ( \frac{\partial \bar{u}}{\partial y} + \frac{\partial \bar{v}}{\partial x} ) + \nu_T ( \frac{\partial^2 \bar{u}}{\partial y^2} + \frac{\partial^2 \bar{v}}{\partial y \partial x} ) \end{equation}
I can't see how the sum of these two terms simplifies to give the first equation in the last set of equations on page 2 where the viscosities are added in each term. |
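One way to see where the extra terms go is to carry out the substitution symbolically. The sketch below (my own check, not from the referenced text) assumes a constant eddy viscosity and 2-D incompressible mean flow; it shows that the difference between the turbulent-stress divergence and $\nu_T \nabla^2 \bar{u}$ is exactly $\nu_T\,\partial_x(\partial_x\bar{u}+\partial_y\bar{v})$, which vanishes by continuity:

```python
import sympy as sp

x, y = sp.symbols('x y')
nu = sp.symbols('nu_T', positive=True)   # eddy viscosity, assumed constant here
u = sp.Function('u')(x, y)               # mean velocity components
v = sp.Function('v')(x, y)

# eddy-viscosity closures (deviatoric part) as written in the question
uu = -2 * nu * sp.diff(u, x)                  # \overline{u'^2}
uv = -nu * (sp.diff(u, y) + sp.diff(v, x))    # \overline{u'v'}

# divergence of the turbulent stresses appearing in the x-momentum equation
div_x = -sp.diff(uu, x) - sp.diff(uv, y)

# candidate simplified form: nu_T times the Laplacian of the mean u
laplacian_u = nu * (sp.diff(u, x, 2) + sp.diff(u, y, 2))

# the leftover is nu_T * d/dx (du/dx + dv/dy), which is zero for
# incompressible mean flow by the continuity equation
leftover = sp.simplify(div_x - laplacian_u)
continuity_term = nu * sp.diff(sp.diff(u, x) + sp.diff(v, y), x)
assert sp.simplify(leftover - continuity_term) == 0
```

With a spatially varying $\nu_T$, the gradient-of-viscosity terms do not cancel the same way, which may be the source of the discrepancy with the referenced derivation.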
The first observation of top quark production in proton-nucleus collisions is reported using proton-lead data collected by the CMS experiment at the CERN LHC at a nucleon-nucleon center-of-mass energy of $\sqrt{s_\mathrm{NN}} =$ 8.16 TeV. The measurement is performed using events with exactly one isolated electron or muon and at least four jets. The data sample corresponds to an integrated luminosity of 174 nb$^{-1}$. The significance of the $\mathrm{t}\overline{\mathrm{t}}$ signal against the background-only hypothesis is above five standard deviations. The measured cross section is $\sigma_{\mathrm{t}\overline{\mathrm{t}}} =$ 45$\pm$8 nb, consistent with predictions from perturbative quantum chromodynamics.
Measurements of two- and multi-particle angular correlations in pp collisions at $\sqrt{s}$ = 5, 7, and 13 TeV are presented as a function of charged-particle multiplicity. The data, corresponding to integrated luminosities of 1.0 pb$^{-1}$ (5 TeV), 6.2 pb$^{-1}$ (7 TeV), and 0.7 pb$^{-1}$ (13 TeV), were collected using the CMS detector at the LHC. The second-order ($v_2$) and third-order ($v_3$) azimuthal anisotropy harmonics of unidentified charged particles, as well as $v_2$ of $\mathrm{K^0_S}$ and $\Lambda/\overline{\Lambda}$ particles, are extracted from long-range two-particle correlations as functions of particle multiplicity and transverse momentum. For high-multiplicity pp events, a mass ordering is observed for the $v_2$ values of charged hadrons (mostly pions), $\mathrm{K^0_S}$, and $\Lambda/\overline{\Lambda}$, with lighter particle species exhibiting a stronger azimuthal anisotropy signal below $p_\mathrm{T} \approx$ 2 GeV/c. For 13 TeV data, the $v_2$ signals are also extracted from four- and six-particle correlations for the first time in pp collisions, with comparable magnitude to those from two-particle correlations. These observations are similar to those seen in pPb and PbPb collisions, and support the interpretation of a collective origin for the observed long-range correlations in high-multiplicity pp collisions.
Measurements are presented of the associated production of a W boson and a charm-quark jet (W + c) in pp collisions at a center-of-mass energy of 7 TeV. The analysis is conducted with a data sample corresponding to a total integrated luminosity of 5 inverse femtobarns, collected by the CMS detector at the LHC. W boson candidates are identified by their decay into a charged lepton (muon or electron) and a neutrino. The W + c measurements are performed for charm-quark jets in the kinematic region $p_T^{jet} \gt$ 25 GeV, $|\eta^{jet}| \lt$ 2.5, for two different thresholds for the transverse momentum of the lepton from the W-boson decay, and in the pseudorapidity range $|\eta^{\ell}| \lt$ 2.1. Hadronic and inclusive semileptonic decays of charm hadrons are used to measure the following total cross sections: $\sigma(pp \to W + c + X) \times B(W \to \ell \nu)$ = 107.7 +/- 3.3 (stat.) +/- 6.9 (syst.) pb ($p_T^{\ell} \gt$ 25 GeV) and $\sigma(pp \to W + c + X) \times B(W \to \ell \nu)$ = 84.1 +/- 2.0 (stat.) +/- 4.9 (syst.) pb ($p_T^{\ell} \gt$ 35 GeV), and the cross section ratios $\sigma(pp \to W^+ + \bar{c} + X)/\sigma(pp \to W^- + c + X)$ = 0.954 +/- 0.025 (stat.) +/- 0.004 (syst.) ($p_T^{\ell} \gt$ 25 GeV) and $\sigma(pp \to W^+ + \bar{c} + X)/\sigma(pp \to W^- + c + X)$ = 0.938 +/- 0.019 (stat.) +/- 0.006 (syst.) ($p_T^{\ell} \gt$ 35 GeV). Cross sections and cross section ratios are also measured differentially with respect to the absolute value of the pseudorapidity of the lepton from the W-boson decay. These are the first measurements from the LHC directly sensitive to the strange quark and antiquark content of the proton. Results are compared with theoretical predictions and are consistent with the predictions based on global fits of parton distribution functions.
A search for narrow resonances in the dijet mass spectrum is performed using data corresponding to an integrated luminosity of 2.9 inverse pb collected by the CMS experiment at the LHC. Upper limits at the 95% confidence level (CL) are presented on the product of the resonance cross section, branching fraction into dijets, and acceptance, separately for decays into quark-quark, quark-gluon, or gluon-gluon pairs. The data exclude new particles predicted in the following models at the 95% CL: string resonances, with mass less than 2.50 TeV, excited quarks, with mass less than 1.58 TeV, and axigluons, colorons, and E_6 diquarks, in specific mass intervals. This extends previously published limits on these models.
The production of jets associated to bottom quarks is measured for the first time in PbPb collisions at a center-of-mass energy of 2.76 TeV per nucleon pair. Jet spectra are reported in the transverse momentum (pt) range of 80-250 GeV, and within pseudorapidity abs(eta) < 2. The nuclear modification factor (R[AA]) calculated from these spectra shows a strong suppression in the b-jet yield in PbPb collisions relative to the yield observed in pp collisions at the same energy. The suppression persists to the largest values of pt studied, and is centrality dependent. The R[AA] is about 0.4 in the most central events, similar to previous observations for inclusive jets. This implies that jet quenching does not have a strong dependence on parton mass and flavor in the jet pt range studied.
A search for neutral Higgs bosons in the minimal supersymmetric extension of the standard model (MSSM) decaying to tau-lepton pairs in pp collisions is performed, using events recorded by the CMS experiment at the LHC. The dataset corresponds to an integrated luminosity of 24.6 fb$^{−1}$, with 4.9 fb$^{−1}$ at 7 TeV and 19.7 fb$^{−1}$ at 8 TeV. To enhance the sensitivity to neutral MSSM Higgs bosons, the search includes the case where the Higgs boson is produced in association with a b-quark jet. No excess is observed in the tau-lepton-pair invariant mass spectrum. Exclusion limits are presented in the MSSM parameter space for different benchmark scenarios, m$_{h}^{max}$ , m$_{h}^{mod +}$ , m$_{h}^{mod −}$ , light-stop, light-stau, τ-phobic, and low-m$_{H}$. Upper limits on the cross section times branching fraction for gluon fusion and b-quark associated Higgs boson production are also given.
Measurements of the differential production cross sections in transverse momentum and rapidity for B0 mesons produced in pp collisions at sqrt(s) = 7 TeV are presented. The dataset used was collected by the CMS experiment at the LHC and corresponds to an integrated luminosity of 40 inverse picobarns. The production cross section is measured from B0 meson decays reconstructed in the exclusive final state J/Psi K-short, with the subsequent decays J/Psi to mu^+ mu^- and K-short to pi^+ pi^-. The total cross section for pt(B0) > 5 GeV and y(B0) < 2.2 is measured to be 33.2 +/- 2.5 +/- 3.5 microbarns, where the first uncertainty is statistical and the second is systematic.
The Upsilon production cross section in proton-proton collisions at sqrt(s) = 7 TeV is measured using a data sample collected with the CMS detector at the LHC, corresponding to an integrated luminosity of 3.1 +/- 0.3 inverse picobarns. Integrated over the rapidity range |y|<2, we find the product of the Upsilon(1S) production cross section and branching fraction to dimuons to be sigma(pp to Upsilon(1S) X) B(Upsilon(1S) to mu+ mu-) = 7.37 +/- 0.13^{+0.61}_{-0.42}\pm 0.81 nb, where the first uncertainty is statistical, the second is systematic, and the third is associated with the estimation of the integrated luminosity of the data sample. This cross section is obtained assuming unpolarized Upsilon(1S) production. If the Upsilon(1S) production polarization is fully transverse or fully longitudinal the cross section changes by about 20%. We also report the measurement of the Upsilon(1S), Upsilon(2S), and Upsilon(3S) differential cross sections as a function of transverse momentum and rapidity.
A search for Z bosons in the mu^+mu^- decay channel has been performed in PbPb collisions at a nucleon-nucleon centre-of-mass energy of 2.76 TeV with the CMS detector at the LHC, in a 7.2 inverse microbarn data sample. The number of opposite-sign muon pairs observed in the 60--120 GeV/c^2 invariant mass range is 39, corresponding to a yield per unit of rapidity (y) and per minimum bias event of (33.8 ± 5.5 (stat) ± 4.4 (syst)) × 10^{-8}, in the |y|<2.0 range. Rapidity, transverse momentum, and centrality dependencies are also measured. The results agree with next-to-leading order QCD calculations, scaled by the number of incoherent nucleon-nucleon collisions.
A measurement of the J/psi and psi(2S) production cross sections in pp collisions at sqrt(s)=7 TeV with the CMS experiment at the LHC is presented. The data sample corresponds to an integrated luminosity of 37 inverse picobarns. Using a fit to the invariant mass and decay length distributions, production cross sections have been measured separately for prompt and non-prompt charmonium states, as a function of the meson transverse momentum in several rapidity ranges. In addition, cross sections restricted to the acceptance of the CMS detector are given, which are not affected by the polarization of the charmonium states. The ratio of the differential production cross sections of the two states, where systematic uncertainties largely cancel, is also determined. The branching fraction of the inclusive B to psi(2S) X decay is extracted from the ratio of the non-prompt cross sections to be: BR(B to psi(2S) X) = (3.08 +/- 0.12(stat.+syst.) +/- 0.13(theor.) +/- 0.42(BR[PDG])) × 10^{-3}
Isolated photon production is measured in proton-proton and lead-lead collisions at nucleon-nucleon centre-of-mass energies of 2.76 TeV in the pseudorapidity range |eta|<1.44 and transverse energies ET between 20 and 80 GeV with the CMS detector at the LHC. The measured ET spectra are found to be in good agreement with next-to-leading-order perturbative QCD predictions. The ratio of PbPb to pp isolated photon ET-differential yields, scaled by the number of incoherent nucleon-nucleon collisions, is consistent with unity for all PbPb reaction centralities.
The prompt D0 meson azimuthal anisotropy coefficients, v[2] and v[3], are measured at midrapidity (abs(y) < 1.0) in PbPb collisions at a center-of-mass energy sqrt(s[NN]) = 5.02 TeV per nucleon pair with data collected by the CMS experiment. The measurement is performed in the transverse momentum (pT) range of 1 to 40 GeV/c, for central and midcentral collisions. The v[2] coefficient is found to be positive throughout the pT range studied. The first measurement of the prompt D0 meson v[3] coefficient is performed, and values up to 0.07 are observed for pT around 4 GeV/c. Compared to measurements of charged particles, a similar pT dependence, but smaller magnitude for pT < 6 GeV/c, is found for prompt D0 meson v[2] and v[3] coefficients. The results are consistent with the presence of collective motion of charm quarks at low pT and a path length dependence of charm quark energy loss at high pT, thereby providing new constraints on the theoretical description of the interactions between charm quarks and the quark-gluon plasma.
The transverse momentum (pt) spectrum of prompt D0 mesons and their antiparticles has been measured via the hadronic decay channels D0 to K- pi+ and D0-bar to K+ pi- in pp and PbPb collisions at a centre-of-mass energy of 5.02 TeV per nucleon pair with the CMS detector at the LHC. The measurement is performed in the D0 meson pt range of 2-100 GeV and in the rapidity range of abs(y)<1. The pp (PbPb) dataset used for this analysis corresponds to an integrated luminosity of 27.4 inverse picobarns (530 inverse microbarns). The measured D0 meson pt spectrum in pp collisions is well described by perturbative QCD calculations. The nuclear modification factor, comparing D0 meson yields in PbPb and pp collisions, was extracted for both minimum-bias and the 10% most central PbPb interactions. For central events, the D0 meson yield in the PbPb collisions is suppressed by a factor of 5-6 compared to the pp reference in the pt range of 6-10 GeV. For D0 mesons in the high-pt range of 60-100 GeV, a significantly smaller suppression is observed. The results are also compared to theoretical calculations.
A search for supersymmetry is presented based on proton-proton collision events containing identified hadronically decaying top quarks, no leptons, and an imbalance pTmiss in transverse momentum. The data were collected with the CMS detector at the CERN LHC at a center-of-mass energy of 13 TeV, and correspond to an integrated luminosity of 35.9 fb−1. Search regions are defined in terms of the multiplicity of bottom quark jet and top quark candidates, the pTmiss, the scalar sum of jet transverse momenta, and the mT2 mass variable. No statistically significant excess of events is observed relative to the expectation from the standard model. Lower limits on the masses of supersymmetric particles are determined at 95% confidence level in the context of simplified models with top quark production. For a model with direct top squark pair production followed by the decay of each top squark to a top quark and a neutralino, top squark masses up to 1020 GeV and neutralino masses up to 430 GeV are excluded. For a model with pair production of gluinos followed by the decay of each gluino to a top quark-antiquark pair and a neutralino, gluino masses up to 2040 GeV and neutralino masses up to 1150 GeV are excluded. These limits extend previous results.
A measurement of the exclusive two-photon production of muon pairs in proton-proton collisions at sqrt(s)= 7 TeV, pp to p mu^+ mu^- p, is reported using data corresponding to an integrated luminosity of 40 inverse picobarns. For muon pairs with invariant mass greater than 11.5 GeV, transverse momentum pT(mu) > 4 GeV and pseudorapidity |eta(mu)| < 2.1, a fit to the dimuon pT(mu^+ mu^-) distribution results in a measured cross section of sigma(pp to p mu^+ mu^- p) = 3.38 [+0.58 -0.55] (stat.) +/- 0.16 (syst.) +/- 0.14 (lumi.) pb, consistent with the theoretical prediction evaluated with the event generator Lpair. The ratio to the predicted cross section is 0.83 [+0.14 -0.13] (stat.) +/- 0.04 (syst.) +/- 0.03 (lumi.). The characteristic distributions of the muon pairs produced via photon-photon fusion, such as the muon acoplanarity, the muon pair invariant mass and transverse momentum, agree with those from the theory.
A Belyi-extender (or dessinflateur) is a rational function $q(t) = \frac{f(t)}{g(t)} \in \mathbb{Q}(t)$ that defines a map
\[ q : \mathbb{P}^1_{\mathbb{C}} \rightarrow \mathbb{P}^1_{\mathbb{C}} \] unramified outside $\{ 0,1,\infty \}$, and has the property that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$.
An example of such a Belyi-extender is the power map $q(t)=t^n$, which is totally ramified in $0$ and $\infty$ and we clearly have that $q(0)=0,~q(1)=1$ and $q(\infty)=\infty$.
The composition of two Belyi-extenders is again an extender, and we get a rather mysterious monoid $\mathcal{E}$ of all Belyi-extenders.
Very little seems to be known about this monoid. Its units form the symmetric group $S_3$, which is the automorphism group of $\mathbb{P}^1_{\mathbb{C}} \setminus \{ 0,1,\infty \}$, and mapping an extender $q$ to its degree gives a monoid map $\mathcal{E} \rightarrow \mathbb{N}_+^{\times}$ to the multiplicative monoid of positive natural numbers.
If one relaxes the condition of $q(t) \in \mathbb{Q}(t)$ to being defined over its algebraic closure $\overline{\mathbb{Q}}$, then such maps/functions have been known for some time under the name of
dynamical Belyi-functions, for example in Zvonkin’s Belyi Functions: Examples, Properties, and Applications (section 6).
Here, one is interested in the complex dynamical system of iterations of $q$, that is, the limit-behaviour of the orbits
\[ \{ z,q(z),q^2(z),q^3(z),… \} \] for all complex numbers $z \in \mathbb{C}$.
In general, the 2-sphere $\mathbb{P}^1_{\mathbb{C}} = S^2$ has a finite number of open sets (the Fatou domains) where the limit behaviour of the series is similar, and the union of these open sets is dense in $S^2$. The complement of the Fatou domains is the Julia set of the function, of which we might expect a nice fractal picture.
Let’s take again the power map $q(t)=t^n$. For a complex number $z$ lying outside the unit circle, the sequence $\{ z,z^n,z^{n^2},\ldots \}$ has limit point $\infty$, and for those lying inside the unit circle this limit is $0$. So, here we have two Fatou domains (the interior and the exterior of the unit circle) and the Julia set of the power map is the (boring?) unit circle.
Fortunately, there are indeed dynamical Belyi-maps having a more pleasant looking Julia set, such as this one
But then, many dynamical Belyi-maps (and Belyi-extenders) are systems of an entirely different nature: they are completely chaotic, meaning that their Julia set is the whole $2$-sphere! Nowhere do we find an open region where points share the same limit behaviour… (the butterfly effect).
There’s a nice sufficient condition for chaotic behaviour, due to Dennis Sullivan, which is pretty easy to check for dynamical Belyi-maps.
A periodic point for $q(t)$ is a point $p \in S^2 = \mathbb{P}^1_{\mathbb{C}}$ such that $p = q^m(p)$ for some $m \geq 1$. A critical point is one such that either $q(p) = \infty$ or $q'(p)=0$.
Sullivan’s result is that $q(t)$ is completely chaotic when all its critical points $p$ become eventually periodic, that is some $q^k(p)$ is periodic,
but $p$ itself is not periodic.
For a Belyi-map $q(t)$ the critical points are either complex numbers mapping to $\infty$ or the inverse images of $0$ or $1$ (that is, the black or white dots in the dessin of $q(t)$) which are not leaf-vertices of the dessin.
Let’s do an example, already used by Sullivan himself:
\[ q(t) = (\frac{t-2}{t})^2 \] This is a Belyi-function, and in fact a Belyi-extender as it is defined over $\mathbb{Q}$ and we have that $q(0)=\infty$, $q(1)=1$ and $q(\infty)=1$. The corresponding dessin is (inverse images of $\infty$ are marked with an $\ast$)
The critical points $0$ and $2$ are not periodic, but they become eventually periodic:
\[ 2 \xrightarrow{q} 0 \xrightarrow{q} \infty \xrightarrow{q} 1 \xrightarrow{q} 1 \] and $1$ is periodic.
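This orbit computation is easy to check mechanically. Below is a small Python sketch (my own; it treats $\infty$ as a formal marker) that iterates $q(t) = \left(\frac{t-2}{t}\right)^2$ exactly over the rationals:

```python
from fractions import Fraction

INF = "inf"  # formal marker for the point at infinity on the Riemann sphere

def q(z):
    """One iteration of q(t) = ((t-2)/t)^2, with the pole at 0 sent to INF
    and q(INF) = 1 (the limit of q(t) as t -> infinity)."""
    if z == INF:
        return Fraction(1)
    if z == 0:
        return INF
    return ((z - 2) / z) ** 2

def orbit(z, steps):
    """The forward orbit [z, q(z), q^2(z), ...] of length steps+1."""
    out = [z]
    for _ in range(steps):
        out.append(q(out[-1]))
    return out

print(orbit(Fraction(2), 4))  # critical point 2: 2 -> 0 -> inf -> 1 -> 1
print(orbit(Fraction(0), 3))  # critical point 0: 0 -> inf -> 1 -> 1
```

Both critical orbits fall onto the fixed point $1$ after finitely many steps, exactly as claimed.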
For a general Belyi-extender $q$, we have that the image under $q$ of any critical point is among $\{ 0,1,\infty \}$ and because we demand that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$, every critical point of $q$ eventually becomes periodic.
If we want the corresponding dynamical system not to be completely chaotic, we have to ensure that at least one of the periodic points among $\{ 0,1,\infty \}$ (and there is always at least one of those) is critical.
Let’s consider the very special Belyi-extenders $q$ having the additional property that $q(0)=0$, $q(1)=1$ and $q(\infty)=\infty$; then all three of these points are periodic.
So, the system is always completely chaotic unless at least one of these three points is critical: that is, unless the black dot at $0$ is not a leaf-vertex of the dessin, or the white dot at $1$ is not a leaf-vertex, or the degree of the region determined by the starred $\infty$ is at least two.
Going back to the mysterious Manin-Marcolli sub-monoid of $\mathcal{E}$: it might explain why it is a good idea to restrict to very special Belyi-extenders having as associated dessin a $2$-coloured tree, for then the periodic point $\infty$ is critical (the degree of the outside region is at least two), and therefore the conditions of Sullivan’s theorem are not satisfied. So, these Belyi-extenders do not necessarily have to be completely chaotic. (tbc)
I am just curious: is there a published proof of the compactness of the Hilbert cube that does not use the Axiom of Choice, or is it well known?
If by the Hilbert cube you mean only $[0,1]^\mathbb N$ then the answer is yes. There is such a proof; you can find it in Herrlich's The Axiom of Choice, as Theorem 3.13.
If you mean the general case of $[0,1]^I$ then the answer is no: the statement that all Hilbert cubes are compact is equivalent to BPIT/the ultrafilter lemma/Tychonoff for Hausdorff spaces. A proof can be found in the same book, as Theorem 4.70.
The compactness of the Hilbert cube follows (without AC) from the compactness of $2^\omega$, since $[0,1]$ as well as $[0,1]^\omega$ are continuous images of $2^\omega$.
(Conversely, $2^\omega$ is a closed subset of the Hilbert cube.)
The compactness of $2^\omega$ is just König's lemma for trees of binary sequences, which is easy to prove (hence certainly well-known) without AC. (I think this is called "weak König's lemma", an important principle in reverse mathematics.)
Some of the comments in Goldstern's answer look like they express doubt as to whether choice is required. Here is a proof without choice, in gory detail, just to make sure. The trick is to notice that the construction of an infinite branch $\alpha$ in an infinite binary tree $T$ requires no appeal to the axiom of choice, because we can specify a concrete choice: go left if you can, otherwise go right.
The Hilbert cube is a continuous image of the Cantor space $2^\omega$ of infinite binary sequences with the product topology. Thus it suffices to show that $2^\omega$ is compact. Given a finite binary sequence $a = [a_1, \ldots, a_n]$, denote by $|a| = n$ its length, and let $B_a = \lbrace \alpha \in 2^\omega \mid a = [\alpha_1, \ldots, \alpha_{|a|}] \rbrace$ be the basic open subset of those sequences that start with $a$.
Consider any cover $(B_{a_i})_{i \in I}$ of $2^\omega$. We build a binary tree $T$ which consists of those finite binary sequences $a$ for which $B_a$ is not contained in any $B_{a_i}$, $$T = \lbrace a \in 2^{*} \mid \forall i \in I . B_a \not\subseteq B_{a_i} \rbrace.$$In other words, we put in $T$ any finite sequence $a$ none of whose prefixes is among the $a_i$, $i \in I$. Let us show that $T$ has bounded height, i.e., there is $n$ such that every branch in $T$ has length at most $n$.
Suppose on the contrary that the height of $T$ is unbounded. Then we can build an infinite path $\alpha$ in $T$ by recursion as follows. (This is König's lemma, saying that an unbounded binary tree has an infinite path.) We make sure that at each stage $n$ the subtree of $T$ at $[\alpha_1, \ldots, \alpha_n]$ has unbounded height. Start with the empty sequence $[]$. The tree at $[]$ is all of $T$, which has unbounded height by assumption. If $[\alpha_1, \ldots, \alpha_n]$ has been constructed, let $T'$ be the subtree of $T$ at $[\alpha_1, \ldots, \alpha_n]$. One or both of the trees $$T_0' = \lbrace b \in T' \mid b_{n+1} = 0 \rbrace$$ and $$T_1' = \lbrace b \in T' \mid b_{n+1} = 1 \rbrace$$ have unbounded height. If $T_0'$ does, set $\alpha_{n+1} = 0$, otherwise set $\alpha_{n+1} = 1$. (At this point we did not appeal to the axiom of choice, but we did appeal to excluded middle.) This concludes the construction of $\alpha$. Now we have a problem, since $\alpha$ is covered by some $B_{a_i}$, and so $a_i$ is a prefix of $\alpha$, but this contradicts the definition of $T$.
Now we know that the height of $T$ is bounded by some $n$. Consider the subset $J \subseteq I$ of those indices $j \in I$ for which $|a_j| \leq n + 1$. As there are only finitely many binary sequences of length at most $n+1$, the set $J$ is finite. But every sequence $a$ of length $n+1$ lies outside $T$, so $B_a \subseteq B_{a_j}$ for some $a_j$ which is a prefix of $a$, hence of length at most $n+1$, i.e., some $j \in J$. Therefore $(B_{a_j})_{j \in J}$ is a finite cover of $2^\omega$.
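When the cover is presented concretely as a finite list of basic open sets, the argument above is fully effective. Here is a Python sketch of the subcover extraction (the function name and the finiteness assumption are mine):

```python
def finite_subcover(prefixes):
    """Given a finite list of binary prefixes a (encoding basic opens B_a)
    that covers 2^omega, return a finite subcover, mirroring the tree
    argument above.  Raises if the input does not actually cover 2^omega."""
    cover = set(prefixes)
    bound = max(len(a) for a in cover)
    used = []

    def walk(a):
        for k in range(len(a) + 1):   # is some prefix of a in the cover?
            if a[:k] in cover:
                used.append(a[:k])    # then B_a is covered by that basic open
                return
        # No prefix of a is in the cover, so a is in the tree T.  If a is
        # longer than every cover element, B_a is entirely uncovered.
        if len(a) > bound:
            raise ValueError("not a cover: the branch %r escapes" % a)
        walk(a + "0")                 # split the node, as in the proof
        walk(a + "1")

    walk("")
    return sorted(set(used))

print(finite_subcover(["0", "10", "110", "111", "1101"]))
# the redundant prefix "1101" is dropped from the subcover
```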
Thanks for all the answers and sorry about a silly question. I have also figured out that it can be proved using the usual complete metric on the usual (countable product) Hilbert cube and finite $\epsilon$-nets.
Update. Here is a proof.
Let $X = [-\frac{1}{2},\frac{1}{2}]\times[-\frac{1}{4},\frac{1}{4}]\times[-\frac{1}{8},\frac{1}{8}]\times\dotsb$ be a Hilbert cube endowed with its $\ell_\infty$ metric. For every positive integer $k$, let $N_k$ be the “natural” $\frac{1}{2^k}$-net for $X$.
Let $\mathcal U$ be a given family of open sets such that no finite subfamily of $\mathcal U$ covers $X$. Then let $S_k$ be the set of those elements of $N_k$ which are within the distance of $\frac{1}{2^k}$ from the complement of every finite union of elements of $\mathcal U$. Each $S_k$ is nonempty. For all $m$ and $n$, the distance from any point of $S_m$ to the set $S_n$ is at most $\frac{1}{2^m}+\frac{1}{2^n}$.
Assume that the points of $X$ are ordered by the lexicographic order of their coordinates. Take the “first” $x_1\in S_1$ (i.e. the smallest in the order), then the “first” $x_2\in S_2$ that is within the distance of $\frac{3}{4}$ from $x_1$, then the “first” $x_3\in S_3$ that is within the distance of $\frac{3}{8}$ from $x_2$, and so forth. The obtained sequence $\lbrace x_k \rbrace_{k=1}^\infty$ is Cauchy. Its limit is not in any element of $\mathcal U$.
I have found this paper by Peter Loeb:
Peter A. Loeb, A new proof of the Tychonoff Theorem, The American Mathematical Monthly 72(1965), no. 7, 711--717.
Here is the theorem from this paper that implies that the usual Hilbert cube is compact without using the AC.
Theorem 1.Let $\{\,X_\nu\mid\nu\in I\,\}$ be a family of compact spaces which is indexed by a set $I$ on which there is a well-ordering $\ge$. If $I$ is an infinite set, let there also be a choice function $F$ on the collection $\{\,C\mid\text{$C$ is closed},\ C\ne\varnothing,\ \text{$C\subset X_\nu$ for some $\nu$}\,\}$. Then the product space $\prod_{\nu\in I}X_\nu$ is compact in the product topology.
For the usual Hilbert cube $[0,1]^{\mathbb N}$, the function $F$ can, for example, select the least element in every compact subset of $[0,1]$.
Mensuration
Category : 6th Class
Learning Objectives
Mensuration is the branch of mathematics which deals with the measurement of lengths, area and volume of the plane and solid figures.
Perimeter of a plane figure: The distance all round a plane figure is called the perimeter of the figure; in other words, the length of the boundary of a plane figure is known as its perimeter. Example: Find the perimeter of the quadrilateral ABCD.
Perimeter of the quadrilateral \[ABCD=AB+BC+CD+DA\]
\[=40\text{ }cm+10\text{ }cm+40\text{ }cm+10\text{ }cm\]
\[=100\text{ }cm\]
Perimeter of the hexagon \[ABCDEF=AB+BC+CD+DE+EF+FA\]
\[=100\text{ }m+120\text{ }m+90\text{ }m+45\text{ }m+60\text{ }m+80\text{ }m\]
\[=495\text{ }m\]
Perimeter of a scalene triangle = Sum of all the three sides of the triangle. Example: Find the perimeter of the triangle ABC.
Perimeter of the triangle \[ABC=AB+BC+CA\]
\[=4\text{ }cm+12\text{ }cm+8\text{ }cm\]
\[=24cm\]
Perimeter of a rectangle \[=2\times \left( length+breadth \right).\] Example: Find the perimeter of the following rectangle ABCD.
Perimeter of rectangle \[ABCD=2\times \left( AB+BC \right)\]
\[=2\times \left( 15cm+9cm \right)\]
\[=48\text{ }cm\]
Perimeter of regular shapes
Perimeter of an equilateral triangle \[=3\times \text{length of one side}.\] Example: Find the perimeter of the given triangle.
Perimeter of the triangle \[=3\times 4\text{ }cm\]
\[=12\text{ }cm\]
Perimeter of a square: \[\text{4 }\!\!\times\!\!\text{ length of one side}\text{.}\] Example: Find the perimeter of the given square.
Perimeter of the square \[=4\times 1\text{ }m\]
\[=4\text{ }m\]
Perimeter of regular pentagon \[=5\text{ }\times \text{ }length\text{ }of\text{ }one\text{ }side.\] Example: Find the perimeter of the given pentagon.
Perimeter of the pentagon \[=5\times 4\text{ }cm\]
\[=20\text{ }cm\]
Perimeter of the regular hexagon \[=6\times length\text{ }of\text{ }one\text{ }side.\]
Perimeter of the hexagon \[=6\times 5\text{ }cm\]
\[=30\text{ }cm.\]
Area of a plane figure: The measurement of the region enclosed by a plane figure is called area of the figure or area is the amount of surface covered by the shape. Area of triangle \[=\frac{1}{2}\times base\times height.\] Example: Find the area of the given triangle.
Area of the triangle \[=\frac{1}{2}\times 4cm\times 6cm=12c{{m}^{2}}\]
Area of rectangle: \[length\times breadth.\] Example: Find the area of the given rectangle.
Area of the rectangle \[=8\text{ }cm\times 6\text{ }cm=48\text{ }c{{m}^{2}}\]
Area of square = \[side\times side\] Example: Find the area of the given square.
Area of the square \[=6cm\times 6cm=36c{{m}^{2}}.\]
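All of the formulas in this section can be collected into a few lines of Python and checked against the worked examples above (the function names are mine):

```python
# Perimeter and area formulas from this section (all lengths in one unit)
def perimeter_rectangle(length, breadth):
    return 2 * (length + breadth)

def perimeter_regular_polygon(sides, side_length):
    # equilateral triangle: sides=3, square: 4, pentagon: 5, hexagon: 6
    return sides * side_length

def area_triangle(base, height):
    return base * height / 2

def area_rectangle(length, breadth):
    return length * breadth

def area_square(side):
    return side * side

print(perimeter_rectangle(15, 9))       # rectangle example above: 48 cm
print(perimeter_regular_polygon(5, 4))  # pentagon example above: 20 cm
print(area_triangle(4, 6))              # triangle example above: 12 sq cm
print(area_square(6))                   # square example above: 36 sq cm
```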
Commonly Asked Questions
Find the area of a square park whose perimeter is 320 m.
(a) \[6300{{m}^{2}}\] (b) \[6500{{m}^{2}}\]
(c) \[6400{{m}^{2}}\] (d) \[6200{{m}^{2}}\]
(e) None of these
Answer (c) Explanation: Let the length of each side of the square park be $a$ metres. Then, perimeter = 320 m
\[\Rightarrow 4a=320\]
\[\Rightarrow a=\frac{320}{4}=80m\] [\[\therefore \] Perimeter of a square\[=4\times Side\]]
\[\therefore Area={{a}^{2}}=\left( 80\times 80 \right){{m}^{2}}=6400{{m}^{2}}.\]
Find the breadth of a rectangular plot of land, if its area is 440 sq. m and length is 22 m. Also, find its perimeter.
(a) 20 m, 84 m (b) 2 m, 46 m
(c) 50 m, 42 m. (d) 4 m, 40 m
(e) None of these
Answer (a) Explanation: We have,
l = Length of the plot = 22 m. Area of the plot = 440 sq. metre
Let the breadth of the plot be b metres. Then,
\[Breadth=\frac{Area}{Length}\Rightarrow b=\frac{440}{22}=20m\]
Perimeter \[=2\left( l+b \right)=2\left( 22+20 \right)m=2\times 42m=84m.\]
Hence, the breadth of the plot is 20 m and the perimeter is 84 m.
The carpet for a room 6.6 m by 5.6 m costs Rs. 3960 and it was made from a roll 70 cm wide. Find the cost of the carpet per metre.
(a) Rs. 70 (b) Rs. 78
(c) Rs. 75 (d) Rs. 70
(e) None of these
Answer (c) Explanation: We have,
Area of the carpet \[=6.6\times 5.6=36.96{{m}^{2}},\] Width of the roll = 70 cm = 0.7 m
\[\therefore \]Length of the roll \[=\frac{Area}{Width}=\frac{36.96}{0.7}m=52.8m\]
Cost of the carpet \[=Rs.\,3960\]
\[\therefore \]Cost of the carpet per metre \[=Rs.\,\frac{3960}{52.8}=Rs.\,75\]
Hence, the carpet costs \[Rs.\,75\] per metre.
A rectangular lawn of length 40 m and breadth 25 m is to be surrounded all around by a path which is 2 m wide. Find the area of the path.
(a) \[276\text{ }{{m}^{2}}\]
(b) \[345\text{ }{{m}^{2}}\]
(c) \[308\text{ }{{m}^{2}}\]
(d) All of these
(e) None of these
Answer (a) Explanation: Area of the path \[=2\left( 44\times 2 \right)+2\left( 25\times 2 \right)=176+100=276{{m}^{2}}.\]
A garden is 60 m by 30 m. It has two paths at the centre as shown in the figure. If the width of the path is 2 m, how much area is left for gardening?
(a) \[1441\text{ }{{m}^{2}}\]
(b) \[1624\,{{m}^{2}}\]
(c) \[1328\,{{m}^{2}}\]
(d) All of these
(e) None of these
Answer (b) Explanation: The two central paths divide the garden into four equal rectangles, each measuring 29 m by 14 m. Area for gardening \[=4\left( 14\times 29 \right)=1624{{m}^{2}}.\]
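The five worked answers above can be re-verified with a short Python check (a sketch; all numbers are taken from the questions):

```python
# Quick check of the five worked answers above
side = 320 / 4                         # Q1: square park with perimeter 320 m
assert side * side == 6400             # answer (c)

breadth = 440 / 22                     # Q2: plot of area 440 sq m, length 22 m
assert breadth == 20
assert 2 * (22 + breadth) == 84        # answer (a)

roll_length = (6.6 * 5.6) / 0.7        # Q3: carpet area / roll width
assert abs(roll_length - 52.8) < 1e-9
assert abs(3960 / roll_length - 75) < 1e-9   # answer (c): Rs. 75 per metre

outer = (40 + 4) * (25 + 4)            # Q4: lawn plus a 2 m path on each side
assert outer - 40 * 25 == 276          # answer (a)

assert (60 - 2) * (30 - 2) == 1624     # Q5: two 2 m central paths, answer (b)
print("all five answers check out")
```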
Below are my 2 cents only, but this was too long for a comment.
As he shows in the next lines (see also
the Variance Swaps chapter of Bergomi's book)$$ \sigma_{VS}^2(T) = \int_{-\infty}^{+\infty} \tilde{\sigma}^2(z,T) \phi(z) dz \tag{0} $$where $\sigma_{VS}(T)$ denotes the volatility of a fresh-start variance swap of maturity $T$; $\phi(\cdot)$ the standard Gaussian pdf; $\sigma(k,T)$ the implied volatility smile in log-forward moneyness and time-to-expiry space; and $\tilde{\sigma}(\cdot,T)$ (the modified smile) is directly related to the true smile $\sigma(\cdot,T)$ as follows:$$ f: (k,t) \rightarrow -\frac{k}{\sigma(k,t)\sqrt{t}} + \frac{\sigma(k,t)\sqrt{t}}{2} $$$$ \tilde{\sigma} : (z,t) \to (\sigma \circ f^{-1})(z,t) $$
Equation $(0)$ is equivalent to writing that$$ \sigma_{VS}^2(T) = \Bbb{E} \left[ \tilde{\sigma}^2(z,T) \right],\,z \sim N(0,1)$$I think that he is then referring to the fact that$$ \sigma_{VS}(T) = \sqrt{ \Bbb{E} \left[ \tilde{\sigma}^2(z,T) \right] } \geq \Bbb{E} \left[ \tilde{\sigma}(z,T) \right] $$by Jensen's inequality (square root is a concave function). Now if you parametrise $\tilde{\sigma}$ as $$\tilde{\sigma}(z) = \tilde{\sigma}_0 + \alpha z \tag{1}$$You indeed have that$$ \sigma_{VS}(T) \geq \tilde{\sigma}_0 $$which shows that $\sigma_{VS}(T) $ is greater than $\tilde{\sigma}_0$ if the "modified" smile $\tilde{\sigma}$ can be parametrised as given by $(1)$. He concludes that skew does not contribute to this result.
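The Jensen step is easy to verify numerically. Here is a sketch (the values $\tilde{\sigma}_0 = 0.20$ and $\alpha = 0.05$ are arbitrary illustrative choices of mine) using Gauss-Hermite quadrature to evaluate the Gaussian expectation in $(0)$ under the affine parametrisation $(1)$:

```python
import numpy as np

# Check of the Jensen step: with sigma_tilde(z) = s0 + a*z and z ~ N(0,1),
# E[(s0 + a z)^2] = s0^2 + a^2, hence sigma_VS = sqrt(E[...]) >= s0.
s0, a = 0.20, 0.05
z, w = np.polynomial.hermite_e.hermegauss(50)  # probabilists' Gauss-Hermite
w = w / w.sum()                                # normalise to E[.] under N(0,1)
var_vs = float(np.sum(w * (s0 + a * z) ** 2))  # sigma_VS^2(T) per equation (0)
print(var_vs, s0**2 + a**2)                    # both ~ 0.0425
print(np.sqrt(var_vs) >= s0)                   # Jensen: sigma_VS >= s0
```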
Now of course, the problem is that $(1)$ is certainly too rigid in practice (it could be argued that close to the forward moneyness it could be a decent approximation though) and $\tilde{\sigma}_0 \ne \sigma_0$ the genuine ATMF vol. So IMO his assertion cannot be made in general.
Note that Bergomi & Guyon managed to derive accurate approximations tying VS volatilities and ATMF volatilities in very general stochastic volatility models, see here. If you look at equation (12) of their paper you'll see that, already at first order, skew is the only thing which contributes to the discrepancy between ATMF vol and VS vol, which goes against what Gatheral obtains.
At the end of the day, I think that his assertion holds in the modified smile space $\tilde{\sigma}(\cdot,T)$ but not in the genuine smile space $\sigma(\cdot,T)$.
Good analysis by the others, but I want to add in some math here, because I'm really that nerdy.
We can model the growth of a black hole by the matter it accretes. Normally, a black hole accretes matter via a (surprise, surprise) accretion disk. Analysis of this type of object is nice because it's two-dimensional, for most practical purposes. Here, though, the accretion is decidedly three-dimensional. To analyze this, we have to model a phenomenon known as Bondi accretion.
For accretion onto a spherical body of mass $M$ in a medium of density $\rho$, the rate of accretion is$$\frac{dM}{dt}=\frac{4 \pi \rho G^2M^2}{c_s^3}$$$G$ is the familiar universal gravitational constant, while $c_s$ is the speed of sound in the medium, a quantity that is actually pretty ubiquitous in studying astrophysical mediums.
Anyway, we can then write$$\int_{0.01 M_{\text{Moon}}}^{M_{\odot}} \frac{1}{M^2}\, dM=\int \frac{4 \pi \rho G^2}{c_s^3}\, dt$$$$\frac{1}{0.01\, M_{\text{Moon}}}-\frac{1}{M_{\odot}}=\frac{4 \pi \rho G^2}{c_s^3}t$$and then, solving for $t$, we find$$t=\frac{\left(M_{\odot}-0.01\, M_{\text{Moon}}\right)c_s^3}{M_{\odot} \times 0.01\, M_{\text{Moon}} \times 4 \pi \rho G^2}$$Of course, $0.01\, M_{\text{Moon}}\ll M_{\odot}$, but that's okay here.
Now, we know that $M_{\odot}=(1.98855\pm 0.00025)\times 10^{30} \text{ kg}$, $V_{\odot}=\frac{4}{3} \pi r_{\odot}^3=1.41 \times10^{18} \text{ km}^3$, and $\rho=0.1403 \text{ kg/m}^3$, and that $M_{\text{Moon}}=7.3477\times 10^{22} \text{ kg}$. As per ckersch's link, $c_s \approx 2.5\times 10^6 \text{ m/s}$. This means that$$t=\frac{(1.98855\times 10^{30}-7.3477\times 10^{20})\times(2.5\times 10^6)^3}{1.98855\times 10^{30} \times 7.3477\times 10^{20} \times 4 \pi \times 0.1403 \times 4.4528929 \times 10^{-21}}$$$$\approx 2.709 \times 10^{18} \text{ seconds}\approx 86 \text{ billion years}$$
There are some things that were neglected here. For example, the black hole will lose some mass due to Hawking radiation, and the Sun can fail even if it doesn't lose all (or even the majority of) its mass. Still, though, this analysis should show you that we've got not a lot to worry about if a Moon-sized black hole decides to take a jaunt through the Sun.
Note: There may be an error here somewhere along the line (which I can't find just yet), but it appears to be around where I started plugging stuff in. At any rate, until I'm able to fix this, know that you can use Bondi accretion to figure out how long the Sun has to live.
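For what it's worth, the estimate is easy to re-run numerically. The sketch below (variable names are mine) uses the inputs quoted above, and also a density recomputed as $M_{\odot}/V_{\odot} \approx 1.41\times 10^{3}\ \mathrm{kg/m^3}$ from the quoted mass and volume, since the quoted $0.1403\ \mathrm{kg/m^3}$ figure looks like it carries a unit slip:

```python
import math

# Re-running the Bondi estimate.  All inputs are the values quoted above.
G      = 6.6730e-11            # m^3 kg^-1 s^-2 (G^2 ~ 4.4529e-21, as quoted)
M_sun  = 1.98855e30            # kg
V_sun  = 1.41e18 * 1e9         # quoted solar volume, km^3 -> m^3
M_moon = 7.3477e22             # kg
c_s    = 2.5e6                 # m/s, speed of sound in the medium

def bondi_time(rho, M0=0.01 * M_moon, M1=M_sun):
    """Time for a black hole to grow from M0 to M1 by Bondi accretion
    in a medium of density rho (the integrated formula above)."""
    return (c_s**3 / (4 * math.pi * rho * G**2)) * (1 / M0 - 1 / M1)

t_quoted_rho = bondi_time(0.1403)         # with the density as quoted
t_mean_rho   = bondi_time(M_sun / V_sun)  # with the mean density M_sun/V_sun
print(t_quoted_rho, t_mean_rho)           # seconds
```

With the quoted density the integral gives roughly $2.7\times 10^{18}$ s (about 86 billion years); with the mean solar density it gives roughly $2.7\times 10^{14}$ s (about 8.5 million years). Either way, the conclusion that the Sun is in no imminent danger stands.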
Atmospheric dispersion modeling is the mathematical simulation of how air pollutants disperse in the ambient atmosphere. It is performed with computer programs that solve the mathematical equations and algorithms which simulate the pollutant dispersion. The dispersion models are used to estimate the downwind ambient concentration of air pollutants or toxins emitted from sources such as industrial plants, vehicular traffic or accidental chemical releases. They can also be used to predict future concentrations under specific scenarios (i.e. changes in emission sources). Therefore, they are the dominant type of model used in air quality policy making. They are most useful for pollutants that are dispersed over large distances and that may react in the atmosphere. For pollutants that have a very high spatio-temporal variability (i.e. a very steep decay of concentration with distance from the source, such as black carbon) and for epidemiological studies, statistical land-use regression models are also used.
Dispersion models are important to governmental agencies tasked with protecting and managing the ambient air quality. The models are typically employed to determine whether existing or proposed new industrial facilities are or will be in compliance with the National Ambient Air Quality Standards (NAAQS) in the United States and other nations. The models also serve to assist in the design of effective control strategies to reduce emissions of harmful air pollutants. During the late 1960s, the Air Pollution Control Office of the U.S. EPA initiated research projects that would lead to the development of models for the use by urban and transportation planners.[1] A major and significant application of a roadway dispersion model that resulted from such research was applied to the Spadina Expressway of Canada in 1971.
Air dispersion models are also used by public safety responders and emergency management personnel for emergency planning of accidental chemical releases. Models are used to determine the consequences of accidental releases of hazardous or toxic materials. Accidental releases may result in fires, spills or explosions that involve hazardous materials, such as chemicals or radionuclides. The results of dispersion modeling, using worst-case accidental release source terms and meteorological conditions, can provide estimates of impacted areas and ambient concentrations, and be used to determine protective actions appropriate in the event a release occurs. Appropriate protective actions may include evacuation or shelter in place for persons in the downwind direction. At industrial facilities, this type of consequence assessment or emergency planning is required under the Clean Air Act (United States) (CAA) codified in Part 68 of Title 40 of the Code of Federal Regulations.
The dispersion models vary depending on the mathematics used to develop the model, but all require the input of data that may include:
Many of the modern, advanced dispersion modeling programs include a pre-processor module for the input of meteorological and other data, and many also include a post-processor module for graphing the output data and/or plotting the area impacted by the air pollutants on maps. The plots of areas impacted may also include isopleths showing areas of minimal to high concentrations that define areas of the highest health risk. The isopleths plots are useful in determining protective actions for the public and responders.
The atmospheric dispersion models are also known as atmospheric diffusion models, air dispersion models, air quality models, and air pollution dispersion models.
Discussion of the layers in the Earth's atmosphere is needed to understand where airborne pollutants disperse in the atmosphere. The layer closest to the Earth's surface is known as the troposphere. It extends from sea-level to a height of about 18 km and contains about 80 percent of the mass of the overall atmosphere. The stratosphere is the next layer and extends from 18 km to about 50 km. The third layer is the mesosphere which extends from 50 km to about 80 km. There are other layers above 80 km, but they are insignificant with respect to atmospheric dispersion modeling.
The lowest part of the troposphere is called the atmospheric boundary layer (ABL) or the planetary boundary layer (PBL) and extends from the Earth's surface to about 1.5 to 2.0 km in height. The air temperature of the atmospheric boundary layer decreases with increasing altitude until it reaches what is called the inversion layer (where the temperature increases with increasing altitude) that caps the atmospheric boundary layer. The upper part of the troposphere (i.e., above the inversion layer) is called the free troposphere and it extends up to the 18 km height of the troposphere.
The ABL is the most important layer with respect to the emission, transport and dispersion of airborne pollutants. The part of the ABL between the Earth's surface and the bottom of the inversion layer is known as the mixing layer. Almost all of the airborne pollutants emitted into the ambient atmosphere are transported and dispersed within the mixing layer. Some of the emissions penetrate the inversion layer and enter the free troposphere above the ABL.
In summary, the layers of the Earth's atmosphere from the surface of the ground upwards are: the ABL made up of the mixing layer capped by the inversion layer; the free troposphere; the stratosphere; the mesosphere and others. Many atmospheric dispersion models are referred to as boundary layer models because they mainly model air pollutant dispersion within the ABL. To avoid confusion, models referred to as mesoscale models have dispersion modeling capabilities that extend horizontally up to a few hundred kilometres. It does not mean that they model dispersion in the mesosphere.
The technical literature on air pollution dispersion is quite extensive and dates back to the 1930s and earlier. One of the early air pollutant plume dispersion equations was derived by Bosanquet and Pearson.[2] Their equation did not assume Gaussian distribution nor did it include the effect of ground reflection of the pollutant plume.
Sir Graham Sutton derived an air pollutant plume dispersion equation in 1947[3] which did include the assumption of Gaussian distribution for the vertical and crosswind dispersion of the plume and also included the effect of ground reflection of the plume.
Under the stimulus provided by the advent of stringent environmental control regulations, there was an immense growth in the use of air pollutant plume dispersion calculations between the late 1960s and today. A great many computer programs for calculating the dispersion of air pollutant emissions were developed during that period of time and they were called "air dispersion models". The basis for most of those models was the Complete Equation For Gaussian Dispersion Modeling Of Continuous, Buoyant Air Pollution Plumes shown below:[4][5]
$$C = \frac{Q}{u}\cdot\frac{f}{\sigma_y\sqrt{2\pi}}\cdot\frac{g_1 + g_2 + g_3}{\sigma_z\sqrt{2\pi}}$$
The above equation not only includes upward reflection from the ground, it also includes downward reflection from the bottom of any inversion lid present in the atmosphere.
The sum of the four exponential terms in $g_3$ converges to a final value quite rapidly. For most cases, the summation of the series with $m = 1$, $m = 2$ and $m = 3$ will provide an adequate solution.
$\sigma_z$ and $\sigma_y$ are functions of the atmospheric stability class (i.e., a measure of the turbulence in the ambient atmosphere) and of the downwind distance to the receptor. The two most important variables affecting the degree of pollutant emission dispersion obtained are the height of the emission source point and the degree of atmospheric turbulence. The more turbulence, the better the degree of dispersion.
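A minimal Python sketch of this equation, keeping the crosswind factor $f$ and the first two vertical terms ($g_1$: direct plume, $g_2$: ground reflection) while dropping the inversion-lid series $g_3$; the numeric inputs in the example call are illustrative only, not taken from any stability-class table:

```python
import math

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration at a receptor.
    Q: emission rate (g/s); u: wind speed (m/s); y: crosswind offset (m);
    z: receptor height (m); H: effective plume centerline height (m);
    sigma_y, sigma_z: dispersion parameters (m), already evaluated at the
    receptor's downwind distance.  Returns concentration in g/m^3."""
    f  = math.exp(-y**2 / (2 * sigma_y**2))        # crosswind dispersion
    g1 = math.exp(-(z - H)**2 / (2 * sigma_z**2))  # direct term, no reflection
    g2 = math.exp(-(z + H)**2 / (2 * sigma_z**2))  # reflection from the ground
    return (Q / u) * f * (g1 + g2) / (2 * math.pi * sigma_y * sigma_z)

# Ground-level centreline concentration for an illustrative source
print(gaussian_plume(Q=100.0, u=5.0, y=0.0, z=0.0, H=50.0,
                     sigma_y=80.0, sigma_z=40.0))
```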
The resulting calculations for air pollutant concentrations are often expressed as an air pollutant concentration contour map in order to show the spatial variation in contaminant levels over a wide area under study. In this way the contour lines can overlay sensitive receptor locations and reveal the spatial relationship of air pollutants to areas of interest.
Whereas older models rely on stability classes (see air pollution dispersion terminology) for the determination of $\sigma_y$ and $\sigma_z$, more recent models increasingly rely on the Monin-Obukhov similarity theory to derive these parameters.
The Gaussian air pollutant dispersion equation (discussed above) requires the input of H, which is the pollutant plume's centerline height above ground level—and H is the sum of Hs (the actual physical height of the pollutant plume's emission source point) plus ΔH (the plume rise due to the plume's buoyancy).
To determine ΔH, many if not most of the air dispersion models developed between the late 1960s and the early 2000s used what are known as "the Briggs equations." G.A. Briggs first published his plume rise observations and comparisons in 1965.[6] In 1968, at a symposium sponsored by CONCAWE (a Dutch organization), he compared many of the plume rise models then available in the literature.[7] In that same year, Briggs also wrote the section of the publication edited by Slade[8] dealing with the comparative analyses of plume rise models. That was followed in 1969 by his classical critical review of the entire plume rise literature,[9] in which he proposed a set of plume rise equations which have become widely known as "the Briggs equations". Subsequently, Briggs modified his 1969 plume rise equations in 1971 and in 1972.[10][11]
Briggs divided air pollution plumes into these four general categories:
Briggs considered the trajectory of cold jet plumes to be dominated by their initial velocity momentum, and the trajectory of hot, buoyant plumes to be dominated by their buoyant momentum to the extent that their initial velocity momentum was relatively unimportant. Although Briggs proposed plume rise equations for each of the above plume categories, it is important to emphasize that "the Briggs equations" which became widely used are those that he proposed for bent-over, hot buoyant plumes.
In general, Briggs's equations for bent-over, hot buoyant plumes are based on observations and data involving plumes from typical combustion sources such as the flue gas stacks from steam-generating boilers burning fossil fuels in large power plants. Therefore the stack exit velocities were probably in the range of 20 to 100 ft/s (6 to 30 m/s) with exit temperatures ranging from 250 to 500 °F (120 to 260 °C).
A logic diagram for using the Briggs equations[4] to obtain the plume rise trajectory of bent-over buoyant plumes is presented below:
The above parameters used in the Briggs equations are discussed in Beychok's book.[4]
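For concreteness, here is a sketch of one commonly quoted final-rise form of the Briggs equations for bent-over, hot buoyant plumes, using the buoyancy flux parameter F. The function names are mine, and the two-branch constants (21.425 and 38.71) are the values usually cited in this context; treat this as an illustration rather than a definitive implementation.

```python
G = 9.807  # gravitational acceleration, m/s^2

def buoyancy_flux(v_s, d_s, T_s, T_a):
    """Briggs buoyancy flux parameter F (m^4/s^3).

    v_s: stack exit velocity (m/s), d_s: stack exit diameter (m),
    T_s: stack gas temperature (K), T_a: ambient temperature (K).
    """
    return G * v_s * d_s**2 * (T_s - T_a) / (4.0 * T_s)

def briggs_final_rise(F, u):
    """Final plume rise ΔH (m) for a hot, bent-over buoyant plume.

    u is the wind speed (m/s) at stack height; the branch point at
    F = 55 is the commonly quoted form.
    """
    if F < 55.0:
        return 21.425 * F**0.75 / u
    return 38.71 * F**0.6 / u

# A stack in the typical range quoted above: 20 m/s exit velocity,
# 3 m diameter, 400 K exit gas into 290 K ambient air, 5 m/s wind.
F = buoyancy_flux(v_s=20.0, d_s=3.0, T_s=400.0, T_a=290.0)
delta_h = briggs_final_rise(F, u=5.0)
```

As expected, the computed rise decreases with wind speed, since a stronger wind bends the plume over sooner.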
The same "argument" would show that $\mathbb{N}$ is finite since it's the union of finite sets $$\mathbb{N}=\{0\}\cup\{0,1\}\cup\{0,1,2\}\cup ...$$ The point is that knowing that a given property is preserved under a given operation does not mean that it's preserved under "infinite iterations" of that operation.
Your exam question makes very little sense. The obvious reading would be this: Let $M$ and $N$ be two Turing machines. Why is it not possible to prove that $M$ and $N$ compute the same function? More precisely: It is not the case that for all Turing machines $M$ and $N$ it is provable that $M$ and $N$ compute the same function. Well, this is quite ...
Clearly not. Let $A=\{a\}$ and $B=\{aa\}$. Now, $A\cap B = \emptyset$, so $(A\cap B)^* = \{\epsilon\}$, but $A^*\cap B^* = B^* = \{a^{2i} : i \in \mathbb{N}\}$ (all strings consisting of an even number of $a$'s).
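This counterexample is easy to verify by brute force up to a length bound (a small sketch; the helper name is mine):

```python
def star_up_to(L, n):
    """All strings in L* of length <= n, for a finite set of strings L."""
    result = {""}
    frontier = {""}
    while frontier:
        new = {s + w for s in frontier for w in L
               if len(s + w) <= n and s + w not in result}
        result |= new
        frontier = new
    return result

A, B = {"a"}, {"aa"}
lhs = star_up_to(A & B, 4)                    # (A ∩ B)* restricted to length <= 4
rhs = star_up_to(A, 4) & star_up_to(B, 4)     # A* ∩ B* restricted to length <= 4
```

Here `lhs` is just `{""}` while `rhs` contains `"aa"` and `"aaaa"`, confirming that the star does not distribute over intersection.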
Just pump up $(M+1)$ $y$'s. Now you get $xy^{M+1}z=a^{(M+1)j+M-j}=a^{M(j+1)}$. Since $M$ is a product of two primes, $M(j+1)$ is a product of at least 3 primes, so $a^{M(j+1)}\notin L_1$, which proves $L_1$ is not regular by the pumping lemma.
I'm not sure what $L_1, \dots, L_k$ are, since you did not define them. The easiest way is probably to start with a DFA for $L$ and construct an NFA for $drop(L)$ (hint in the spoiler below). Then it should be easy to show that: If $w \in drop(L)$ then the NFA accepts $w$: use the definition of the function $drop$ to conclude that there must be a ...
Can you use a rule that looks like $S \rightarrow A_1A_2\dots A_iS \mid B_1B_2\dots B_iS \mid Z_i$, where $i \ge 2$? No. Grammar rules consist of explicitly given finite strings of terminals and non-terminals on each side of the arrow, and a grammar may contain only finitely many rules. The first restriction rules out the "for all $i\ge 2$" part of your ...
The idea is to start with a grammar for the related language $L'_2 = \{a^ib^j \mid 2j \leq i \leq 3j\}$:$$ S \to a^2Sb \mid a^3Sb \mid \epsilon. $$We want to force at least one production of the form $a^2Sb$ and at least one of the form $a^3Sb$. There are many ways of doing that. The simplest, probably, is to force one of these productions to be the first, ...
You need to keep doing it an infinite number of times before you reach any infinite languages, so your proof will involve transfinite induction. As Wikipedia says: Transfinite induction is an extension of mathematical induction to well-ordered sets, for example to sets of ordinal numbers or cardinal numbers. Let $P(\alpha)$ be a property defined for all ...
Here is yet another proof. It is known that the number of integers at most $n$ which are the product of two primes is $o(n)$, see for example this answer, which gives the asymptotic $\frac{n\log\log n}{\log n}$. This means that your language is infinite yet has vanishing asymptotic density. This is impossible for a regular unary language.
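The vanishing-density claim in this answer is easy to illustrate numerically (a sketch; the sieve-based counter is my own):

```python
def semiprime_count(n):
    """Count integers in [2, n] that are a product of exactly two primes,
    counted with multiplicity (so 4 = 2*2 and 6 = 2*3 both qualify)."""
    # Smallest-prime-factor sieve.
    spf = list(range(n + 1))
    for p in range(2, int(n**0.5) + 1):
        if spf[p] == p:                      # p is prime
            for q in range(p * p, n + 1, p):
                if spf[q] == q:
                    spf[q] = p

    def omega(k):
        """Number of prime factors of k, with multiplicity."""
        c = 0
        while k > 1:
            k //= spf[k]
            c += 1
        return c

    return sum(1 for k in range(2, n + 1) if omega(k) == 2)

# Density of semiprimes up to 10^2, 10^3, 10^4: it keeps shrinking,
# consistent with the asymptotic n*loglog(n)/log(n) quoted above.
densities = [semiprime_count(10**e) / 10**e for e in (2, 3, 4)]
```

A regular unary infinite language is eventually periodic and therefore has positive density, which is exactly what this decreasing sequence rules out.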
The subset of all palindromes in $L$ is usually not regular; take the simple example $a^*ba^*$, where the subset of palindromes $a^nba^n$ is not regular. Assume you have an FSM for $L$ (that is, an FSM describing and defining $L$). You can take that FSM and use a simple algorithm to determine if $w$ is in $M$: Given a state $S$, define $succ(S, a)$ as the state ...
As all words of length $>1$ consisting only of a's should be contained in L2, there is a simple finite automaton that recognizes it. So your attempt at using the pumping lemma is futile: the pumping lemma only helps you prove that a language is irregular if it actually is, and tells you nothing about languages that are regular. Maybe I'm also ...
According to the Fundamental Theorem of Arithmetic, any integer $>1$ can be written as a product of one or more primes (in a unique way). So, it seems that your language can be simplified as $\{a^n\mid n\geq 2\}$.
There is an alternative to the "pumping" lemma which I find easier: after each possible input, determine the set of continuations that would complete a string of the language. You can use each of those sets as a state in the finite state machine for the language, so if there is a finite number of those sets then the language is regular; if there are ...
Here is one result in the direction you're looking into: Suppose that $A,B,C$ are languages such that $A$ is a non-regular subset of the regular language $C$, and $B$ is disjoint from $C$. Then $A \cup B$ is non-regular. For the proof, consider the intersection of $A \cup B$ and $C$. Details left to the reader. The question you link to concerns the ...
Building on the accepted answer by @David Richerby: I think what we have to do is modify the DFAs that recognize L1 and L2. Let L1 have alphabet Σ1 and L2 have alphabet Σ2, and let Σ = Σ1 ∪ Σ2. Say we have a DFA for L1 called M. To M, add an extra state called y, and for every letter in Σ but not in Σ1, add a transition from each state of M to state y. Then ...
Pushdown automata do not necessarily halt. They are not forced to read input at each step; they can also perform so-called $\lambda$-instructions, where the tape is not advanced. Then we can have an infinite loop at a certain tape position. Likewise, linear bounded automata may loop and do not always halt. However, they can be simulated by a Turing machine, which ...
Let $S$ be the list of all prefixes of words in $L$. Create a DFA with a state $q_s$ for each $s \in S$, and an additional sink state $q_\bot$. The starting state is $q_\epsilon$, and a state is accepting if it corresponds to a word in $L$. When at a non-sink state $q_s$, upon reading $\sigma$, move to $q_{s\sigma}$ if $s\sigma \in S$, and to $q_\bot$ ...
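The prefix-tree construction described in this answer can be sketched directly (a minimal Python sketch; the names are mine, and `None` plays the role of the sink state $q_\bot$):

```python
def finite_language_dfa(L):
    """Build a DFA for a finite language L (a set of strings): one state
    per prefix of a word in L, plus a sink state, as described above.
    Returns an acceptance-test function."""
    prefixes = {w[:i] for w in L for i in range(len(w) + 1)}
    alphabet = {c for w in L for c in w}
    SINK = None
    delta = {}
    for s in prefixes:
        for c in alphabet:
            delta[(s, c)] = s + c if s + c in prefixes else SINK
    accepting = set(L)   # a state is accepting iff it is a word of L

    def accepts(w):
        state = ""       # the starting state is the empty prefix
        for c in w:
            # Unknown letters (not in the alphabet) also lead to the sink.
            state = delta.get((state, c), SINK)
        return state in accepting

    return accepts

accepts = finite_language_dfa({"ab", "aab", "b"})
```

Since a finite language has only finitely many prefixes, this always yields a finite automaton, re-proving that every finite language is regular.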
Given that, I would expect that for any reasonable model of computation, if $f : A \rightarrow B$ and $g : B \rightarrow C$ are computable, then $g \circ f : A \rightarrow C$ should be as well. Let's say our model is quadratic time computation. If $f$ is the function which maps a string of length $n$ to a string of $n^2$ zeroes, then $f$ is computable in ...
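The length blow-up behind this answer is easy to see concretely (a minimal sketch; the function name is mine):

```python
def f(s):
    """Map a string of length n to n^2 zeroes; computable in quadratic time."""
    return "0" * (len(s) ** 2)

# Composing f with itself produces output of length n^4, so the composition
# cannot even *write* its output within time quadratic in the input length.
n = 10
out = f(f("x" * n))
```

So quadratic-time computability is not closed under composition, which is the point the answer is building toward.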
I'm assuming that by $\lambda$ you mean the empty word and by $n_0(w)$ the length of a word. The proof for the first part is not correct: you argue that every word in $L^2$ has length at least $2$ and every word in $L^3$ has length at least $3$. From that it does not follow that every word in $L^3$ is longer than every word in $L^2$, because there could also be a ...
This site is intended as an addendum to the paper "Quantisation spaces of cluster algebras",
joint with Philipp Lampe.
It can be found here on the arXiv.
Please note that this site was created using
MathJax and the
Sage Cell Server.
If you experience issues with the displayed LaTeX, or if the compute cells do not appear,
feel free to
contact me.
General compatibility requirements can be found
here and
here
for MathJax and the Sage Cell Server, respectively.
The precise definition of quantum cluster algebras can be found in the aforementioned article or the initial paper of Berenstein and Zelevinsky. Thus we focus on our main constructions outlined below.
An initial cluster seed of a given cluster algebra carries, along with its cluster variables,
an $m\times n$ matrix
\[
\tilde{B}=\left[\begin{smallmatrix}B\\C\\\end{smallmatrix}\right]
\]
with the $n\times n$ principal part $B$. By definition, a skew-symmetric
$m\times m$ integer matrix $\Lambda = (\lambda_{i,j})$ is called
compatible to $\tilde{B}$ if there exists a diagonal
$n\times n$ matrix $D'=\textrm{diag}(d'_1,d'_2,\ldots,d'_n)$
with positive integers $d'_1,d'_2,\ldots,d'_n$ such that
\[
\tilde{B}^T\Lambda =\left[\begin{matrix}D' & 0 \end{matrix}\right]
\]
as an $n\times n$ plus $n\times (m-n)$ block matrix.
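As a quick illustration of this definition (in plain Python rather than Sage; the matrices and helper names below are my own), the compatibility condition can be checked mechanically. Skew-symmetry of $\Lambda$ is assumed; only the block equation is verified here.

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def is_compatible(B_tilde, Lam):
    """Check B̃^T Λ = [D' 0], where D' must be a diagonal n x n matrix
    with positive entries (B_tilde is m x n, Lam is m x m)."""
    m, n = len(B_tilde), len(B_tilde[0])
    P = matmul(transpose(B_tilde), Lam)          # n x m product
    D_prime = [row[:n] for row in P]             # candidate diagonal block
    zero_block = [row[n:] for row in P]          # must vanish
    diag_ok = all(
        (D_prime[i][j] > 0 if i == j else D_prime[i][j] == 0)
        for i in range(n) for j in range(n)
    )
    return diag_ok and all(x == 0 for row in zero_block for x in row)
```

For example, with $\tilde{B} = \left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]$ (so $n=1$, $m=2$) the skew-symmetric matrix $\Lambda = \left[\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\right]$ is compatible, while its negative is not, since the resulting $D'$ would have a negative entry.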
Our paper then discusses the question, "When does a quantisation for a given cluster algebra $\mathcal{A}(\tilde{B})$ exist and how unique is it?"
The existence of a compatible matrix pair $(\tilde{B}, \Lambda)$ depends on the
rank of $\tilde{B}$: if $\mathrm{rk}(\tilde{B}) < n$, then no quantisation exists.
Otherwise, one can construct a compatible $\Lambda$ by viewing the
columns of $\tilde{B}$ as vectors, completing their set to a basis of
$\mathbb{Q}^m$ and obtaining the following result.
Theorem:
Let $D$ be a skew-symmetriser of $B$.
There exists a skew-symmetric $m\times m$-matrix $\Lambda$ with integer coefficients
and a multiple $D'=\lambda D$ with $\lambda\in\mathbb{Q}^{+}$ such that
$\tilde{B}^T\Lambda =\left[\begin{matrix}D'&0\end{matrix}\right]$.
We illustrate the construction using Sage; details on the relationship between the choice of added basis vectors and $\Lambda$ can be found in the article.
After loading the new function, we can now compute a $\Lambda$ from the theorem above.
The construction of $\Lambda$ above depends on a choice of basis completion. We reformulate this ambiguity by giving a generating set of integer matrices for the equation \[ \tilde{B}^T\Lambda = \left[\begin{matrix}0 &0\end{matrix}\right]. \] The stated dependency does not occur for $0$ or $1$ frozen vertices, hence we start with the case $m=n+2$. We only provide the Sage functions at this point; the general construction can be found in the paper.
Let us check that what we did so far has worked out. First, we are interested in whether $B^T \cdot \mathrm{minorBlock}(B)$ equals the zero matrix. The second entry of the list gives the product $B^T \cdot \mathrm{inhomSolution}(B)$:
Next, if a given $\tilde{B}$ is associated to a quiver with more than two
frozen vertices, we need to use the
minor blocks and compose them.
This can be achieved via the following code, see Section 4.2 in the paper.
Again, after loading these definitions, we can compute the composed minor blocks and retrieve a list of matrices satisfying the equation $\tilde{B}^T\Lambda = \left[\begin{smallmatrix}0 &0\end{smallmatrix}\right]$:
Lastly, we can again check the correctness of our computations. For each element in the list of results returned from compBlock, compare with the appropriate 0-matrix.
Note that all cells with an 'Evaluate' button underneath can be altered: feel free to test the functions on skew-symmetrisable matrices of your own choosing. Please contact me with comments or questions regarding our results, or about how we used Sage in the process of developing our findings.
Now that we understand what double integrals are, we can use them to compute areas of regions and volumes of solids of revolution. The following video shows how, and the main formulas are repeated below the video for convenience.
Likewise, if $R$ lies to the right of the $y$ axis and we rotate around the $y$ axis, then the volume of the solid of revolution is $\displaystyle{\iint_R 2\pi x\, dA}$. Integrating $dx \, dy$ gives volume by washers and integrating $dy\,dx$ gives volume by cylindrical shells.
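As a sanity check of the shell formula (a minimal numerical sketch; the region and helper name are my own choices), rotate the region under $y=x^2$, $0\le x\le 1$, about the $y$ axis: $\iint_R 2\pi x\,dA = \int_0^1 2\pi x\cdot x^2\,dx = \pi/2$.

```python
import math

def shell_volume(f, a, b, n=2000):
    """Approximate the double integral of 2*pi*x over the region under
    y = f(x), a <= x <= b, via a midpoint Riemann sum in x (the inner
    dy integral just evaluates to f(x))."""
    h = (b - a) / n
    return sum(2 * math.pi * (a + (i + 0.5) * h) * f(a + (i + 0.5) * h)
               for i in range(n)) * h

V = shell_volume(lambda x: x**2, 0.0, 1.0)   # exact value is pi/2
```

The numerical answer matches $\pi/2$, confirming that integrating $dy\,dx$ reproduces the cylindrical-shell formula $\int_a^b 2\pi x\, f(x)\,dx$.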
The theorem states that
$$ H(P)\leq\mathrm{MinACL}(P)<H(P)+1 $$
where $\mathrm{MinACL}(P)$ denotes the minimum average code word length of a given information source, i.e., the average code word length of any Huffman coding, and $H$ denotes the entropy of the probability distribution $P$.
Now, the problem is: how does one show that for any $\epsilon>0$ there is a probability distribution $P$ such that $\mathrm{MinACL}(P) - H(P)\geq 1-\epsilon$?
(I was given a hint that I can start with a source s.t. $H(P)=\mathrm{MinACL}(P)$ and try to change the probabilities in order to skew the code.)
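One standard way to follow that hint (a sketch of the idea, not a full answer): for a two-symbol source $P=(p,1-p)$, any prefix code, in particular the Huffman code, needs one bit per symbol, so $\mathrm{MinACL}(P)=1$; meanwhile $H(P)\to 0$ as $p\to 1$, so the gap approaches $1$.

```python
import math

def entropy(probs):
    """Shannon entropy of a distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two-symbol source: MinACL(P) = 1 regardless of p, since each of the
# two codewords must be at least one bit long.
p = 0.999
gap = 1.0 - entropy([p, 1.0 - p])   # MinACL(P) - H(P)
```

Starting from $p = 1/2$ (where $H(P) = \mathrm{MinACL}(P) = 1$) and skewing $p$ toward $1$ drives the gap as close to $1$ as desired, which is exactly the $1-\epsilon$ bound asked for.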