I'm working on solving the coupled one-dimensional poroelasticity equations (Biot's model), given as:
$$-(\lambda+ 2\mu) \frac{\partial^2 u}{\partial x^2} + \frac{\partial p}{\partial x} = 0$$ $$\frac{\partial}{\partial t} \left[ \gamma p + \frac{\partial u}{\partial x}\right] -\frac{\kappa}{\eta}\left[\frac{\partial^2 p}{\partial x^2}\right] =q(x,t)$$ on the domain $\Omega=(0,1)$ and with the boundary conditions:
$p=0, (\lambda + 2\mu)\frac{\partial u}{\partial x}=-u_0$ at $x=0$ and $u=0, \frac{\partial p}{\partial x} = 0$ at $x=1$.
I discretized these equations using a centered finite difference scheme:
$$(\lambda + 2\mu)\frac{u^{t+1}_{i+1}-2u^{t+1}_i+u^{t+1}_{i-1}}{\Delta x^2} - \frac{p^{t+1}_{i+1}-p^{t+1}_{i-1}}{2\Delta x} = 0$$ $$\gamma \frac{p^{t+1}_i-p^t_i}{\Delta t} + \frac{u^{t+1}_{i+1}-u^{t+1}_{i-1}}{2\Delta x \Delta t} -\left[ \frac{u^t_{i+1}-u^t_{i-1}}{2\Delta x \Delta t}\right] - \frac{\kappa}{\eta}\left[ \frac{p^{t+1}_{i+1} -2p^{t+1}_i + p^{t+1}_{i-1}}{\Delta x^2}\right]=q^{t+1}_i$$
I'm currently working out the details of the scheme's convergence by analyzing its consistency and stability. The consistency part seems fairly straightforward to me, but I'm already foreseeing some difficulties with the stability analysis. First, there are two variables and two equations. Second, there is also a mixed spatiotemporal derivative term in the second equation. I'm familiar with von Neumann stability analysis and can see that it will be very tough to establish stability with this method. Are there any alternatives to von Neumann analysis that I can use?
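One practical alternative to von Neumann analysis is the matrix method: assemble the full one-step update $M_1 w^{n+1} = M_2 w^n$ for $w=(u,p)$ on the grid, boundary conditions included, and check that the spectral radius of $M_1^{-1}M_2$ stays at or below one. A minimal NumPy sketch, assuming (for simplicity, not the mixed conditions of the actual problem) homogeneous Dirichlet conditions on both fields, with illustrative parameter values:

```python
# Matrix-method stability check: build M1 w^{n+1} = M2 w^n for the coupled
# Biot scheme on interior nodes (w = [u; p]), with homogeneous Dirichlet
# boundaries assumed on both u and p, then inspect rho(M1^{-1} M2).
import numpy as np

def spectral_radius(N=50, lam2mu=1.0, gamma=0.1, kappa_eta=1.0, dt=0.01):
    dx = 1.0 / (N + 1)
    I = np.eye(N)
    Z = np.zeros((N, N))
    # discrete Laplacian and central first difference on interior nodes
    L = (-2.0 * I + np.eye(N, k=1) + np.eye(N, k=-1)) / dx**2
    D = (np.eye(N, k=1) - np.eye(N, k=-1)) / (2.0 * dx)
    # row 1: -(lam+2mu) L u^{n+1} + D p^{n+1} = 0       (no old-time terms)
    # row 2: (D/dt) u^{n+1} + (gamma/dt I - kappa/eta L) p^{n+1}
    #        = (D/dt) u^{n} + (gamma/dt) p^{n}  (+ source, irrelevant here)
    M1 = np.block([[-lam2mu * L, D],
                   [D / dt, gamma / dt * I - kappa_eta * L]])
    M2 = np.block([[Z, Z],
                   [D / dt, gamma / dt * I]])
    G = np.linalg.solve(M1, M2)   # amplification matrix: w^{n+1} = G w^n
    return np.abs(np.linalg.eigvals(G)).max()

print(spectral_radius())  # expected <= 1 for this implicit scheme
```

Because the elasticity equation carries no time derivative, its rows of $M_2$ are zero, so half the eigenvalues vanish identically; the remaining ones govern stability and, for this fully implicit discretization, one expects them to lie in $(0,1)$.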
I would like to know if somebody is aware of some result that looks like the following.
Let us consider the space $C_b(X)$ of continuous bounded functions over a measurable space $X$.
Suppose that:
$f_n \uparrow_n f$, i.e. $(f_n)$ converge monotonically towards $f$, and moreover $f_n\geq 0 \,\forall n$. Suppose $\mu_n\xrightarrow{w*}\mu$, with $(\mu_n)$ and $\mu$ $\sigma$-additive measures over $X$ (where the convergence is w.r.t. the weak-* topology i.e. $\mu_n\xrightarrow{w*}\mu$ iff $\forall g\in C_b(X), \int g d\mu_n \rightarrow \int g d\mu$).
Do we have that $\int f_n d\mu_n \rightarrow \int f d\mu$?
Find the smallest integer $n$ such that
$$(x^2 + y^2 + z^2)^2 \leq n(x^4 + y^4 + z^4)$$for all real numbers $x, y,$ and $z.$
How should I manipulate this inequality? I am stuck and don't know how to proceed. All solutions are greatly appreciated!
Let's work with $a=x^2$, $b=y^2$, and $c=z^2$ which are all nonnegative. Then $$ (a+b+c)^2\leq 3(a^2+b^2+c^2) \tag{$*$} $$ (either by Cauchy-Schwarz or by expanding both sides) so $n\leq 3$. But ($*$) is an equality when $a=b=c>0$ so $n\geq 3$. We infer that $n=3$.
$$(x^2 + y^2 + z^2)^2 \leq n(x^4 + y^4 + z^4)\iff (xy)^2+(xz)^2+(yz)^2\le \frac {n-1}{2}(x^4+y^4+z^4)$$ If $n=3-\epsilon$ where $\epsilon\gt 0$ and $x=y=z$ then $2\le 2-\epsilon$, absurd. Thus the smallest $n$ is $3$.
I want to solve the following expression (used to obtain an analytic solution to a current distribution inside a workpiece):
$$a_{mn} = -\frac{\frac{4}{ab} \int_0^a \int_0^b f(x',y')\sin(px')\sin(qy')\mathrm{d}x'\mathrm{d}y'}{t\sinh(tc)}$$
Here, $a$, $b$ are scalar constants and $p = \frac{m\pi}{a}$, $q = \frac{n\pi}{b}$, $t=\sqrt{p^2+q^2}$.
The function $f$ is a Gaussian distribution: $f(x,y) = \frac{I_0d}{\pi\sigma^2} \exp(-\frac{r^2d}{\sigma^2})$
I was wondering if Simpson's rule is the smartest way to evaluate the double integral. Are there alternative approaches that would be more computationally efficient?
Additional information: The resulting matrix $a_{mn}$ is used in the following infinite series to obtain the final result: $\sum_{m=1}^\infty \sum_{n=1}^\infty a_{mn}p\cos(px)\sin(qy)\cosh(tz)$
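Since the Gaussian factorizes into a function of $x'$ times a function of $y'$, the double integral splits into a product of two 1-D integrals, which is usually much cheaper than a 2-D Simpson grid. A sketch with Gauss-Legendre quadrature; all parameter values ($I_0$, $d$, $\sigma$, the Gaussian centre, $c$, $m$, $n$) are illustrative assumptions, and $r^2$ is taken as $(x-x_0)^2+(y-y_0)^2$:

```python
# Separable evaluation of a_mn: the 2-D integral of the Gaussian times
# sin(px')sin(qy') becomes Ix * Iy, each done by 1-D Gauss-Legendre.
import numpy as np

a_len, b_len, c = 1.0, 1.0, 0.1          # domain sizes and c (assumed)
I0, d, sigma, x0, y0 = 1.0, 1.0, 0.2, 0.5, 0.5   # assumed source parameters
m, n = 1, 1
p, q = m * np.pi / a_len, n * np.pi / b_len
t = np.hypot(p, q)

def gauss_1d(lo, hi, g, order=60):
    """Integrate g on [lo, hi] with Gauss-Legendre quadrature."""
    xi, wi = np.polynomial.legendre.leggauss(order)
    x = 0.5 * (hi - lo) * xi + 0.5 * (hi + lo)
    return 0.5 * (hi - lo) * np.sum(wi * g(x))

Ix = gauss_1d(0.0, a_len,
              lambda x: np.exp(-(x - x0)**2 * d / sigma**2) * np.sin(p * x))
Iy = gauss_1d(0.0, b_len,
              lambda y: np.exp(-(y - y0)**2 * d / sigma**2) * np.sin(q * y))
amn = -(4.0 / (a_len * b_len)) * (I0 * d / (np.pi * sigma**2)) * Ix * Iy \
      / (t * np.sinh(t * c))
print(amn)
```

For a smooth Gaussian integrand, a 60-node Gauss-Legendre rule per axis is far more accurate per function evaluation than Simpson's rule, and the separable form reduces the cost from $O(N^2)$ to $O(N)$ evaluations per coefficient.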
Preprint Series 2012 2012:09 K. Kazarian and V. Temlyakov
We study Hilbert spaces $\mathfrak{L}^{2}(E,G)$, where $E\subset \mathbb{R}^{d}$ is a measurable set, $|E|>0$, and for almost every $t\in E$ the matrix $G(t)$ (see (3)) is a Hermitian positive-definite matrix. We find necessary and sufficient conditions for which the projection operators ...
2012:08 G. Kerkyacharian and P. Petrushev
Classical and nonclassical Besov and Triebel-Lizorkin spaces with complete range of indices are developed in the general setting of Dirichlet space with a doubling measure and local scale-invariant Poincaré inequality. This leads to Heat kernel with small time Gaussian bounds and Hölder continuity, which play a central role in this ...
2012:07 P. Binev, A. Cohen, W. Dahmen, and R. DeVore
Algorithms for binary classification based on adaptive partitioning are formulated and analyzed for both their risk performance and their friendliness to numerical implementation. The algorithms can be viewed as generating a set approximation to the Bayes set and thus fall into the general category of set estimators. A general ...
2012:06 T. Coulhon, G. Kerkyacharian, and P. Petrushev
Wavelet bases and frames consisting of band limited functions of nearly exponential localization on $\mathbb{R}^d$ are a powerful tool in harmonic analysis by making various spaces of functions and distributions more accessible for study and utilization, and providing sparse representation of natural function spaces (e.g. Besov spaces ...
2012:05 V. Temlyakov
This paper is a follow-up to the author's previous paper on convex optimization. In that paper we began the process of adjusting greedy-type algorithms from nonlinear approximation for finding sparse solutions of convex optimization problems. There we modified three of the most popular algorithms in nonlinear approximation in Banach spaces ...
2012:04 J. Nelson and V. Temlyakov
We study rate of convergence of expansions of elements in a Hilbert space $H$ into series with regard to a given dictionary ${\cal D}$.
The primary goal of this paper is to study representations of an element $f\in H$ by a series $$f\sim \displaystyle\sum\limits _ {j ...
2012:03 V. Temlyakov
We study sparse approximate solutions to convex optimization problems. It is known that in many engineering applications researchers are interested in an approximate solution of an optimization problem as a linear combination of elements from a given system of elements. There is an increasing interest in building such sparse approximate ...
2012:02 S. Dilworth, M. Soto-Bajo, and V. Temlyakov
We study Lebesgue-type inequalities for greedy approximation with respect to quasi-greedy bases. We mostly concentrate on this study in the $L _ p$ spaces. The novelty of the paper is in obtaining better Lebesgue-type inequalities under extra assumptions on a quasi-greedy basis than known Lebesgue-type inequalities for quasi-greedy bases. We ...
2012:01 D. Savu and V. Temlyakov
We study sparse representations and sparse approximations with respect to incoherent dictionaries. We address the problem of designing and analyzing greedy methods of approximation. A key question in this regard is: How to measure efficiency of a specific algorithm? Answering this question we prove the Lebesgue-type inequalities for algorithms under ...
Consider the following traditional integer knapsack problem:
$\max \sum_{i=1}^k p_i \cdot x_i\\ \text{s.t.} \sum_{i=1}^k w_i \cdot x_i \leq W \\ x_i \in \{0,\ldots,k_i\} \text{ for each } i$
Now consider a laminar family $\mathcal{I}$ of sets, i.e., a collection of subsets of $\{1,\ldots,k\}$ such that for each pair $I_1,I_2 \in \mathcal{I}$ it either holds that $I_1 \subseteq I_2$ or $I_1 \supseteq I_2$ or $I_1 \cap I_2 = \emptyset$. For each of these subsets $I_j \in \mathcal{I}$, we insert an additional cardinality constraint of the form
$\sum_{i \in I_j} x_i \leq \mu_j$
for some number $\mu_j$.
Clearly, this problem is weakly $NP$-complete in general since it contains the traditional (integer) knapsack problem. The question is what happens if the profits $p_i$ are polynomially bounded. Does the problem remain $NP$-complete, or is it easy to solve now? We have been thinking about this problem for months now.
What we know so far:
If we omit the cardinality constraints or restrict ourselves to only one cardinality constraint that involves each of the variables, the problem becomes easy to solve (this isn't as trivial as it sounds since the maximum amounts $k_i$ are still not polynomially bounded). Accordingly, if the maximum amounts $k_i$ are polynomially bounded, the problem becomes easy to solve as well.
The above results imply that, in order to prove $NP$-completeness, a possible reduction would need to make use of decision variables that are not polynomially bounded and must incorporate suitable laminar cardinality constraints with values $\mu_j$ that aren't polynomially bounded as well.
So, does anyone have a clue on how to prove or disprove the $NP$-completeness of the problem?
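For contrast with the open question, the "easy" case mentioned above (polynomially bounded amounts $k_i$, no laminar constraints) can be handled by the classical pseudo-polynomial bounded-knapsack DP. A hedged sketch using binary splitting of the bounds:

```python
# Bounded knapsack via binary splitting: each item with bound k_i is split
# into O(log k_i) 0/1 pieces, then the standard 0/1 knapsack DP is run.
def bounded_knapsack(profits, weights, bounds, W):
    items = []
    for p, w, k in zip(profits, weights, bounds):
        chunk = 1
        while k > 0:                    # split bound k into 1, 2, 4, ..., rest
            take = min(chunk, k)
            items.append((p * take, w * take))
            k -= take
            chunk *= 2
    dp = [0] * (W + 1)                  # dp[c] = best profit with capacity c
    for p, w in items:
        for cap in range(W, w - 1, -1): # reverse order: each piece used once
            dp[cap] = max(dp[cap], dp[cap - w] + p)
    return dp[W]

print(bounded_knapsack([3, 5], [2, 3], [2, 1], 7))  # → 11
```

This runs in $O(W \sum_i \log k_i)$ time, so it is pseudo-polynomial in $W$; it does not handle the laminar cardinality constraints, which are exactly what makes the question above hard.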
Evaporation-Driven Assembly of Colloidal Particles
Wiki entry by Emily Gehrels, Fall 2012
Based on the article: Lauga, E., Brenner, M.P. (2004). Evaporation-Driven Assembly of Colloidal Particles. Physical Review Letters,
93, 238301. Background
Vinny Manoharan wrote a paper describing how clusters of small (d=844nm) polystyrene spheres could be formed in reproducible configurations by dispersing them in a toluene-water emulsion and then preferentially evaporating the toluene. The geometries that formed minimized the second moment of the cluster as defined by <math>\mathcal{M}=\sum_{i}||r_i-r_o||^2</math> where <math>r_o</math> is the position of the center of the droplet. This paper examines the theoretical basis for these findings.
Theories and Simulations
For a droplet with particles attached to the surface, the particles arrange themselves in such a way as to minimize the total surface energy between the particles and the drop, the drop and the surrounding medium, and the particles and the surrounding medium. A program (Brakke's Surface Evolver) was used to simulate the equilibrium positions of particles in a droplet. The system was initialized with a certain number of particles positioned randomly on the droplet and then the volume of the drop was slowly decreased in steps. At each step the particles were moved to the positions that minimized the energy. The final packings (when the drop volume went to zero) in the simulation agreed with the experimentally observed results as seen in the figure below, where in each pair of images the left image is the experimentally observed packing and the right image is the packing obtained from the numerical simulation.
To understand why these particular packings arise, the authors then study the theoretical basis for the packing selection. For large enough volumes, the minimum energy configuration is a spherical droplet where the particles do not interact. However, when the volume of the droplet decreases to the critical volume (<math>V_C</math>), the droplet cannot remain spherical as the particles come close together. At this critical volume the particles reach a critical packing that is determined by the "cone of influence" of each particle. The cone of influence is a cone that starts at the center of the drop and extends out, tangent to the particle, to intersect with the surface of the droplet. The interactions between the particles on the surface of the droplet are directly related to the interactions between the intersections of the cones of influence of the particles with the surface of the droplet. So, the problem of packing particles on the droplet at the critical volume is like packing circles on a sphere. Most circle packings are unique for a given number of circles. However, there are some numbers of circles (5, 19, 20, 23, ...) where there is a degree of freedom that arises from one of the circles being free to 'rattle' around. There are also some numbers of circles (such as 15) where there are two (or more) different configurations that work.
Below <math>V_C</math> we need to find a way of theoretically determining the particle configuration that minimizes the energy. If we decrease the volume by a small amount, the forces on each particle are capillary forces from the droplet interface and contact forces between the particles. To determine whether a packing will be unique, we will look at the number of degrees of freedom involved in these systems. Each particle has 3 degrees of freedom and the droplet itself has one (arising from the pressure). This means that the system has a total of 3N+1 degrees of freedom. Now we have constraints on the system. The first is that any rotation of the entire droplet does not change the packing. This removes three degrees of freedom. The fact that the particles cannot overlap removes an additional <math>n_c</math>, where <math>n_c</math> denotes the number of contacts between different particles when they are critically packed. The final constraint is that the capillary and interparticle forces have to balance on each particle at equilibrium.
<math>F_i+\sum_{j\in C(i)}f_{ij} =0</math>
This last constraint is not as straightforward as the other two. In order to satisfy this equation, the capillary forces on each of the particles must satisfy compatibility relations. This constrains the rearrangement of the particles such that only <math>n_f</math> of the N capillary forces can be chosen independently. This condition imposes an additional <math>N-n_f</math> constraints on the problem. This leaves a total of
<math>3N+1-3-n_c-(N-n_f)=2N-2+n_f-n_c\equiv n_m</math>
degrees of freedom in the problem.
For most of the cases examined in the experimental study, there is only one final degree of freedom (<math>n_m=1</math>). The only cases where this is not one are those where a rattler is present. This means that for the cases studied, there is exactly one set of capillary forces that is consistent with the constraints and the final packing is unique.
The exact equation for the capillary force on each particle as the radius of the droplet changes from R to <math>R+\delta R</math> is:
<math>F_i=-2\pi \gamma_D \cos\beta \left(\delta r_i + \frac{A\cos\beta}{4\pi R^2-NA} \sum_i \delta r_i\right)</math>
where <math>\alpha</math> is the dry angle of the particle on the critical packing, <math>\theta</math> is the equilibrium contact angle, and <math>\beta=\alpha-\theta</math>.
This formula is valid only for small changes in volume, but can be applied iteratively as the droplet volume is decreased by small amounts. The results for the second moment given by this theoretical analysis (<math>M_m</math>), found in the experiment (<math>M_{exp}</math>) and the minimum second moment (<math>M_2</math>) are given in the table below.
As can be seen, there is excellent agreement between all three values.
Conclusions
This study has shown that unique packings arise from unique initial circular packing at the critical volume, and highly constrained evolution of the packing upon decrease in droplet size. This suggests that it should be possible to create different particle packings by adjusting the size or wettability of the particles so that the circle packing at the critical volume is changed.
Is this something found in the POH? I know there is an IAS and CAS chart.
Short Answer
The short answer:
From TAS to IAS $IAS=f(TAS)$:
$$IAS = a_0 \sqrt{5\left[\left(\frac{\frac{1}{2} \rho {TAS}^2}{P_0} + 1 \right)^{\frac{2}{7}}-1\right]} + K_i $$
From IAS to TAS $TAS=f(IAS)$:
$$TAS = \sqrt{\frac{2 P_0}{\rho}\left[\left( \frac{1}{5}\left(\frac{IAS - K_i}{a_0}\right)^2 + 1 \right)^{\frac{7}{2}} - 1 \right]} $$
WARNING: the units should be taken as SI ($\frac{m}{s}$, $\frac{kg}{m^3}$, $Pa$).
In particular:
$a_0$: speed of sound at sea level in ISA conditions = $340.3 \; \frac{m}{s}$
$P_0$: static pressure at sea level in ISA conditions = $101325 \; Pa$
$\rho$: density of the air in which you are flying $\frac{kg}{m^3}$
$IAS$: Indicated Air Speed $\frac{m}{s}$
$K_i$: a correction factor typical of your aircraft. You should find it in the POH.
How to get to this formula, see the long answer below.
Long Answer
From the definition of dynamic pressure:
$$ q_c = \frac{1}{2} \rho v^2 $$
Where $v = TAS$. I am assuming you are interested in subsonic speeds (cross-country flight), so we don't consider compressibility effects for the CAS:
$$CAS = a_0 \sqrt{5\left[\left(\frac{q_c}{P_0} + 1 \right)^{\frac{2}{7}}-1\right]} $$
Substituting the dynamic pressure definition in the $CAS$ as a function of $TAS$
$$CAS = a_0 \sqrt{5\left[\left(\frac{\frac{1}{2} \rho {TAS}^2}{P_0} + 1 \right)^{\frac{2}{7}}-1\right]} $$
Where $a_0$ is $340.3 \; \frac{m}{s}$ and $P_0$ is $101325 \; Pa$. The density $\rho$ is the density at your altitude that day; you can get it from an International Standard Atmosphere calculator or from tables/formulas. If you measure it from your aircraft instruments, $\rho = \frac{P}{RT}$, with $R=287.058 \; J\,kg^{-1}K^{-1}$. You should give pressure in pascals (not $hPa$) and, most importantly, temperature in kelvin ($K$). Reverting the formula above (a double-check is appreciated):
$$TAS = \sqrt{\frac{2 P_0}{\rho}\left[\left( \frac{1}{5}\left(\frac{CAS}{a_0}\right)^2 + 1 \right)^{\frac{7}{2}} - 1 \right]} $$
Being $$ IAS = CAS + K_i \\ CAS = IAS - K_i $$
Where $K_i$ is a correction factor typical of your aircraft. You should find it in the POH.
Finally we get TAS as a function of IAS $TAS=f(IAS)$
So: $$TAS = \sqrt{\frac{2 P_0}{\rho}\left[\left( \frac{1}{5}\left(\frac{IAS - K_i}{a_0}\right)^2 + 1 \right)^{\frac{7}{2}} - 1 \right]} $$
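A quick numerical sketch of the pair of conversions above, starting from $q_c = \frac{1}{2}\rho\,TAS^2$ and its inversion $q_c = P_0[((CAS/a_0)^2/5+1)^{7/2}-1]$. The density and $K_i$ values are illustrative assumptions:

```python
# Round-trip TAS -> IAS -> TAS using the subsonic CAS relation.
import math

A0 = 340.3       # speed of sound at sea level, ISA [m/s]
P0 = 101325.0    # sea-level static pressure, ISA [Pa]

def tas_to_ias(tas, rho, k_i=0.0):
    qc = 0.5 * rho * tas**2                      # dynamic pressure [Pa]
    cas = A0 * math.sqrt(5.0 * ((qc / P0 + 1.0)**(2.0 / 7.0) - 1.0))
    return cas + k_i                             # IAS = CAS + K_i

def ias_to_tas(ias, rho, k_i=0.0):
    cas = ias - k_i
    qc = P0 * (((cas / A0)**2 / 5.0 + 1.0)**3.5 - 1.0)
    return math.sqrt(2.0 * qc / rho)

# round trip at an assumed density of 1.0 kg/m^3
print(ias_to_tas(tas_to_ias(80.0, 1.0), 1.0))    # ≈ 80.0
```

The round trip recovering the original TAS is a useful check that the inversion of the formula was done correctly.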
Complementing GHB’s answer, an exact formula for converting CAS to TAS that takes compressibility effects, indicated altitude, and static air temperature into account is$$ \text{TAS}= \sqrt{ \frac{7 R T}{M} \left[ \left( \left( 1 - \frac{L h}{T_{0}} \right)^{- \frac{g M}{R L}} \left[ \left( \frac{\text{CAS}^{2}}{5 a_{0}^{2}} + 1 \right)^{\frac{7}{2}} - 1 \right] + 1 \right)^{\frac{2}{7}} - 1 \right] }.$$In this formula (which is valid only for
subsonic speeds), the inputs are $ \text{CAS} $ — the calibrated airspeed ($ \text{m}/\text{s} $), $ h $ — the indicated altitude ($ \text{m} $) up to $ 11,000 ~ \text{m} $, $ T $ — the static air temperature ($ \text{K} $);
the output is
$ \text{TAS} $ — the true airspeed ($ \text{m}/\text{s} $);
and the various physical constants are
$ a_{0} = 340.3 ~ \text{m}/\text{s} $ is the speed of sound at sea level in the ISA, $ g = 9.80665 ~ \text{m}/\text{s}^{2} $ is the standard acceleration due to gravity, $ L = 0.0065 ~ \text{K}/\text{m} $ is the standard ISA temperature lapse rate, $ M = 0.0289644 ~ \text{kg}/\text{mol} $ is the molar mass of dry air, $ R = 8.3144598 ~ \text{J}/(\text{mol} \cdot \text{K}) $ is the universal gas constant, $ T_{0} = 288.15 ~ \text{K} $ is the static air temperature at sea level in the ISA.
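Transcribing the exact formula into code makes a sanity check easy: at $h=0$ and $T=T_0$ it should collapse to $\text{TAS}\approx\text{CAS}$ (since $a_0^2 = 1.4\,R\,T_0/M$). The input values below are illustrative:

```python
# Exact CAS -> TAS conversion with compressibility, altitude and temperature.
import math

A0, G, LAPSE = 340.3, 9.80665, 0.0065      # ISA constants as listed above
M, R, T0 = 0.0289644, 8.3144598, 288.15

def cas_to_tas(cas, h, T):
    # impact-pressure ratio from the CAS definition
    inner = (cas**2 / (5.0 * A0**2) + 1.0)**3.5 - 1.0
    # barometric pressure ratio (troposphere, h <= 11 000 m)
    pressure_ratio = (1.0 - LAPSE * h / T0)**(-G * M / (R * LAPSE))
    return math.sqrt(7.0 * R * T / M *
                     ((pressure_ratio * inner + 1.0)**(2.0 / 7.0) - 1.0))

print(cas_to_tas(100.0, 0.0, T0))  # ≈ 100 at sea level in ISA conditions
```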
It's a common question... you get a TAS from your POH based on an RPM setting in your cruise performance chart. Some navlogs have a TAS/IAS box. If you can determine the IAS then you can look at the airspeed indicator to ensure everything is correct (from an airspeed perspective). You still need to do a ground speed check because the TAS/IAS question doesn't help you with navigation and confirming the forecast winds. But this is a commonly asked question.
And yes, using your E6B and working backwards to your CAS and the chart in the POH for you IAS is how you do it. It's faster with an electronic E6B for sure.
PS - thanks for the math above, which will be way more accurate than the E6B, but I doubt I'll be doing the math! haha
You read your TAS from your POH. Then you get to your CAS by using a flight computer, such as the E6-B. Then you use your POH to convert from CAS to IAS.
Suppose we're given data from three different classes which are normally distributed with the following means and variances:
$C_1: \mu_1=(1,2)^T, \Sigma_1^{-1}=( \begin{array}{ccc}2 & 1 \\1 & 2 \end{array})$
$C_2: \mu_2=(2,-2)^T, \Sigma_2^{-1}=( \begin{array}{ccc}1 & 0 \\0 & 2 \end{array})$ $C_3: \mu_3=(1,-1)^T, \Sigma_3^{-1}=( \begin{array}{ccc}7 & 5 \\5 & 6 \end{array})$
And the loss function of those three classes is $L=\left (\begin{array}{ccc}0 & 1 &3 \\2 & 0 & 2\\ 4&3&0\end{array}\right)$
How should the decision criterion be modeled to make optimal decisions for such a problem? And to which class must the point $(0,0)^T$ be assigned?
I think I can solve the problem without considering the loss function (from prior and posterior probabilities), but I have no idea how to take the loss function into account when making optimal decisions.
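A sketch of the standard minimum-expected-risk rule, assuming equal priors (the problem statement does not specify them): compute the class posteriors at $x=(0,0)^T$ from the Gaussian densities, then pick the action minimizing $R(\alpha_i\mid x)=\sum_j L_{ij}\,P(C_j\mid x)$:

```python
# Minimum-risk classification: posteriors from Gaussian densities with the
# given precision matrices (Sigma^{-1}), equal priors assumed.
import numpy as np

mus = [np.array([1.0, 2.0]), np.array([2.0, -2.0]), np.array([1.0, -1.0])]
precs = [np.array([[2.0, 1.0], [1.0, 2.0]]),
         np.array([[1.0, 0.0], [0.0, 2.0]]),
         np.array([[7.0, 5.0], [5.0, 6.0]])]
L = np.array([[0.0, 1.0, 3.0],
              [2.0, 0.0, 2.0],
              [4.0, 3.0, 0.0]])

def gaussian_density(x, mu, prec):
    # 2-D Gaussian density written with the precision matrix prec = Sigma^{-1}
    diff = x - mu
    return np.sqrt(np.linalg.det(prec) / (2 * np.pi)**2) \
        * np.exp(-0.5 * diff @ prec @ diff)

x = np.array([0.0, 0.0])
post = np.array([gaussian_density(x, m, P) for m, P in zip(mus, precs)])
post /= post.sum()          # equal priors cancel in the normalization
risks = L @ post            # expected loss R(a_i | x) of each action
print(post, risks, int(np.argmin(risks)))
```

Without the loss matrix one would just take the maximum posterior; the loss matrix replaces that by the minimum expected loss, which can change the decision when misclassification costs are asymmetric.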
TensorFlow 1 version | View source on GitHub
Batch normalization.
Aliases:
tf.compat.v1.nn.batch_normalization
tf.compat.v2.nn.batch_normalization
tf.nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None)
Normalizes a tensor by mean and variance, and applies (optionally) a scale \(\gamma\) to it, as well as an offset \(\beta\): \(\frac{\gamma(x-\mu)}{\sigma}+\beta\)
mean, variance, offset and scale are all expected to be of one of two shapes:
In all generality, they can have the same number of dimensions as the input x, with identical sizes as x for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keep_dims=True) during training, or running averages thereof during inference.
In the common case where the 'depth' dimension is the last dimension in the input tensor x, they may be one-dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common [batch, depth] layout of fully-connected layers, and [batch, height, width, depth] for convolutions. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keep_dims=False) during training, or running averages thereof during inference.
Args:
x: Input Tensor of arbitrary dimensionality.
mean: A mean Tensor.
variance: A variance Tensor.
offset: An offset Tensor, often denoted \(\beta\) in equations, or None. If present, will be added to the normalized tensor.
scale: A scale Tensor, often denoted \(\gamma\) in equations, or None. If present, the scale is applied to the normalized tensor.
variance_epsilon: A small float number to avoid dividing by 0.
name: A name for this operation (optional).
Returns:
The normalized, scaled, offset tensor.
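Since the op is just an affine rescaling, here is a NumPy sketch of the computation it performs (shapes follow the common [batch, depth] case; the data and epsilon are arbitrary):

```python
# NumPy sketch of what tf.nn.batch_normalization computes:
# out = scale * (x - mean) / sqrt(variance + variance_epsilon) + offset
import numpy as np

def batch_normalization(x, mean, variance, offset, scale, variance_epsilon):
    inv = scale / np.sqrt(variance + variance_epsilon)
    return inv * (x - mean) + offset

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))                 # [batch, depth]
mean = x.mean(axis=0)                       # per-depth statistics
var = x.var(axis=0)
out = batch_normalization(x, mean, var, offset=np.zeros(4),
                          scale=np.ones(4), variance_epsilon=1e-3)
print(out.mean(axis=0), out.std(axis=0))    # ≈ 0 and slightly below 1
```

With unit scale and zero offset the output has zero mean per depth channel, and a standard deviation just under one because of the epsilon in the denominator.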
In another answer I saw, there is this expression
$$\lim _{ x\rightarrow \infty }{ x{ \left( 1-\frac { 1 }{ x } \right) }^{ x } } =\lim _{ x\rightarrow \infty }{ \frac { x }{ e } } =\infty$$
Can anyone show me how to get to this result? There's a multiplication by $x$, a division by $x$, and an exponent of $x$, so it seems like a good example for figuring out how all of these interplay when taking $x$ to infinity.
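One standard route (a sketch): write the power as an exponential and expand the logarithm.

```latex
x\left(1-\frac{1}{x}\right)^{x}
  = x\, e^{x\log\left(1-\frac{1}{x}\right)}
  = x\, e^{x\left(-\frac{1}{x}-\frac{1}{2x^{2}}-O\left(x^{-3}\right)\right)}
  = x\, e^{-1-\frac{1}{2x}-O\left(x^{-2}\right)}
  \sim \frac{x}{e} \xrightarrow[x\to\infty]{} \infty
```

The factor $(1-1/x)^x$ tends to the finite constant $e^{-1}$, so the leading factor of $x$ drives the product to infinity.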
I'm working on the problem $$I=\int_0^\infty \frac{\sin^2(x)}{1+x^4} dx$$ I found 4 singularities and I would like to use the singularities in the 1st and 2nd quadrants to solve this integral, i.e. $z_0=e^{i\frac{\pi}{4}}$ and $e^{i\frac{3\pi}{4}}$. My prof. said we should split this into two integrals using the trig identity $\sin^2(x)=\frac{1-\cos(2x)}{2}$, so we have $$I=\frac{1}4 \int_{-\infty}^{\infty}\frac{dx}{1+x^4}- \frac{1}4\int_{-\infty}^{\infty}\frac{\cos(2x)}{1+x^4}dx$$ Or $$I=\frac{2\pi i}{4}Res\left[\frac{1}{1+z^4}, z_0=e^{\frac{\pi i}{4}}, e^{\frac{3\pi i}{4}}\right]-\frac{2\pi i}{4}Res\left[\frac{e^{2iz}}{1+z^4},z_0=e^{\frac{\pi i}{4}},e^{\frac{3\pi i}{4}}\right]$$ I didn't have any trouble with the 1st residue but I'm a bit unsure how to handle the second residue. Should I let $e^{2ix}=e^{2iz}$ and then substitute $z=z_0$? In that case I seem to get a really messy answer. Is there a cleaner way to deal with that second residue? I should add that I am a physics major and this is a problem for my Mathematical Methods class. I haven't taken any complex variable courses. Thank you.
Let's start at the beginning, because it appears that, although you have the correct formulation, you do not quite see what is going on. (For example, why not use the singularities in the 3rd and 4th quadrants?)
Consider the contour integral:
$$\oint_C dz \frac{1-e^{i 2 z}}{1+z^4} $$
where $C$ is a semicircle of radius $R$ in the upper half plane. The contour integral is equal to
$$\int_{-R}^R dx \frac{1-e^{i 2 x}}{1+x^4} + i R \int_0^{\pi} d\theta \, e^{i \theta} \, \frac{1-e^{i 2 R e^{i \theta}}}{1+R^4 e^{i 4 \theta}}$$
The first integral is equal to
$$\int_{-R}^R dx \frac{1-\cos{2 x}}{1+x^4} $$
We now show that the magnitude the second integral vanishes as $R \to \infty$. We do this by using the inequality $\sin{\phi} \ge 2 \phi/\pi$ when $\phi \in [0,\pi/2]$. The magnitude of the second integral is then bounded by
$$\begin{align}\left |i R \int_0^{\pi} d\theta \, e^{i \theta} \, \frac{1-e^{i 2 R e^{i \theta}}}{1+R^4 e^{i 4 \theta}} \right | &\le R \int_0^{\pi} d\theta \, \left | \frac{1}{1+R^4 e^{i 4 \theta}} \right | + R \int_0^{\pi} d\theta \, \left | \frac{e^{i 2 R e^{i \theta}}}{1+R^4 e^{i 4 \theta}} \right |\\ &\le \frac{\pi R}{R^4-1} + \frac{2 R}{R^4-1} \int_0^{\pi/2} d\theta \, e^{-2 R \sin{\theta}} \\ &\le \frac{\pi R}{R^4-1} + \frac{2 R}{R^4-1} \int_0^{\pi/2} d\theta \, e^{-2 R (2 \theta/\pi)}\\ &\le \frac{\pi (R+1/2)}{R^4-1}\end{align} $$
which vanishes as $R \to \infty$.
By the residue theorem, the contour integral is also equal to $i 2 \pi$ times the sum of the residues at the poles of the integrand inside $C$, or $z=e^{i \pi/4}$ and $z=e^{i 3 \pi/4}$. Thus,
$$\begin{align}\int_{-\infty}^{\infty} dx \frac{1-\cos{2 x}}{1+x^4} &= i 2 \pi \left [\frac{1-e^{i 2 e^{i \pi/4}}}{4 e^{i 3 \pi/4}} + \frac{1-e^{i 2 e^{i 3 \pi/4}}}{4 e^{i 9 \pi/4}} \right ] \\ &= i \frac{\pi}{2} \left [-i \sqrt{2} - e^{-i 3 \pi/4} e^{i \sqrt{2}} e^{-\sqrt{2}} - e^{-i \pi/4} e^{-i \sqrt{2}} e^{-\sqrt{2}} \right ] \\ &= i \frac{\pi}{2} \left [-i \sqrt{2} + e^{i \pi/4} e^{i \sqrt{2}} e^{-\sqrt{2}} - e^{-i \pi/4} e^{-i \sqrt{2}} e^{-\sqrt{2}} \right ] \\ &= i \frac{\pi}{2} \left [-i \sqrt{2} + i 2 \operatorname{Im}{\left ( e^{i \pi/4} e^{i \sqrt{2}} e^{-\sqrt{2}}\right )} \right ] \\ &= \frac{\pi}{\sqrt{2}}-\pi e^{-\sqrt{2}} \sin{\left (\sqrt{2} + \frac{\pi}{4} \right )} \end{align}$$
Thus,
$$\int_0^{\infty} dx \frac{\sin^2{x}}{1+x^4} = \frac{\pi}{4 \sqrt{2}}-\frac{\pi}{4} e^{-\sqrt{2}} \sin{\left (\sqrt{2} + \frac{\pi}{4} \right )}$$
which can also be written as
$$\int_0^{\infty} dx \frac{\sin^2{x}}{1+x^4} = \frac{\pi}{4 \sqrt{2}} \left ( 1 -e^{-\sqrt{2}} \left [ \sin{\left (\sqrt{2} \right )} +\cos{\left (\sqrt{2} \right )} \right ] \right )$$
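A quick numerical cross-check of the closed form (the cutoff at $x=50$ and the grid size are arbitrary choices; since the integrand decays like $x^{-4}$, the discarded tail contributes less than about $10^{-6}$):

```python
# Compare a brute-force quadrature of the integral with the closed form.
import numpy as np

x = np.linspace(0.0, 50.0, 500_001)
integrand = np.sin(x)**2 / (1.0 + x**4)
trap = getattr(np, "trapezoid", getattr(np, "trapz", None))  # NumPy 1.x/2.x
numeric = trap(integrand, x)

s = np.sqrt(2.0)
closed = np.pi / (4.0 * s) * (1.0 - np.exp(-s) * (np.sin(s) + np.cos(s)))
print(numeric, closed)  # both ≈ 0.401
```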
Description: Test - 2 Number of Questions: 20 Created by: Yashbeer Singh Tags: Test - 2 Soil Mechanics Civil Engineering - CE
The vertical stress at some depth below the corner of a 2 m x 3 m rectangular footing due to a certain load intensity is 100 kN/m2. What is the vertical stress in kN/m2 below the centre of a 4 m x 6 m rectangular footing at the same depth and same load intensity?
A field vane shear testing instrument (shown alongside) was inserted completely into a deposit of soft, saturated silty clay with the vane rod vertical such that the top of the blades were 500 mm below the ground surface. Upon application of a rapidly increasing torque about the vane rod, the soil was found to fail when the torque reached 4.6 Nm. Assuming mobilisation of undrained shear strength on all failure surfaces to be uniform and the resistance mobilised on the surface of the vane rod to be negligible, what would be the peak undrained shear strength (rounded off to the nearest integer value of kPa) of the soil?
In an aquifer extending over 150 hectares, the water table was 21 m below ground level. Over a period of time, the water table dropped to 23 m below the ground level. If the porosity of aquifer is 0.40 and the specific retention is 0.15, what is the change in ground water storage of the aquifer?
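A worked sketch for this one: the change in unconfined storage is area × water-table drop × specific yield, with specific yield = porosity − specific retention.

```python
# Change in groundwater storage for the aquifer described above.
area = 150 * 1e4               # 150 hectares in m^2
drop = 23 - 21                 # water-table drop in m
specific_yield = 0.40 - 0.15   # porosity minus specific retention
change = area * drop * specific_yield
print(change)                  # ≈ 750000 m^3 (i.e. 75 ha-m)
```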
Identical surcharges are placed at ground surface at sites X and Y, with soil conditions shown alongside and water table at ground surface. The silty clay layers at X and Y are identical. The thin sand layer at Y is continuous and free-draining with a very large discharge capacity. If primary consolidation at X is estimated to complete in 36 months, what would be the corresponding time for completion of primary consolidation at Y?
A sand layer found at sea floor under 20 m water depth is characterised with relative density = 40%, maximum void ratio = 1.0, minimum void ratio = 0.5 and specific gravity of soil solids = 2.67. Assume the specific gravity of sea water to be 1.03 and the unit weight of fresh water to be 9.81 kN/m3.
What would be the effective stress (rounded off to the nearest integer value of kPa) at 30 m depth into the sand layer?
A sand layer found at sea floor under 20 m water depth is characterised with relative density = 40%, maximum void ratio = 1.0, minimum void ratio = 0.5 and specific gravity of soil solids = 2.67. Assume the specific gravity of sea water to be 1.03 and the unit weight of fresh water to be 9.81 kN/m3.
What would be the change in the effective stress (rounded off to the nearest integer value of kPa) at 30 m depth into the sand layer if the sea water level permanently rises by 2 m?
Effective stress is essentially the stress transmitted between soil particles; hence a change in the water level alone does not cause any change in the effective stress here.
$$\triangle\sigma_1 = 0$$
The vertical stress at point P1 due to the point load Q on the ground surface as shown in figure is s2. According to Boussinesq’s equation, the vertical stress at point P2 shown in the figure will be
An open ended steel barrel of 1 m height and 1 m diameter is filled with saturated fine sand having coefficient of permeability of 10−2m/ s. The barrel stands on a saturated bed of gravel. The time required for the water level in the barrel to drop by 0.75 m is
$k = 10^{-2}\,\text{m/s}$; time required for the water level to drop by 0.75 m $= \frac{0.75\,\text{m}}{10^{-2}\,\text{m/s}} = 75\,\text{s}$.
Direction: The unconfined compressive strength of a saturated clay sample is 54 kPa.
The value of cohesion for the clay is
$q_u = 54\ \text{kPa}$, $c_u = \frac{q_u}{2} = \frac{54}{2} = 27\ \text{kPa}$
Direction: The unconfined compressive strength of a saturated clay sample is 54 kPa.
If a square footing of size 4 m x 4 m is resting on the surface of a deposit of the above clay, the ultimate bearing capacity of the footing (as per Terzaghi’s equation) is
The laboratory test results of a soil sample are given below:
Percentage finer than 4.75 mm = 60 Percentage finer than 0.075 mm = 30 Liquid Limit = 35% Plastic Limit = 27% The soil classification is
Direction: Examine the test arrangement and the soil properties given below.
The maximum resistance offered by the soil through skin friction while pulling out the pile from the ground is
A well of diameter 20 cm fully penetrates a confined aquifer. After a long period of pumping at a rate of 2720 litres per minute, the observations of drawdown taken at 10 m and 100 m distances from the centre of the well are found to be 3 m and 0.5 m respectively. The transmissivity of the aquifer is
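A hedged sketch using Thiem's steady-state equation for a confined aquifer, $T = \frac{Q\ln(r_2/r_1)}{2\pi(s_1-s_2)}$; the main trap is converting 2720 L/min into m³/s:

```python
# Transmissivity from two-observation-well steady-state data (Thiem).
import math

Q = 2720 / 1000 / 60    # discharge: 2720 L/min -> m^3/s
r1, r2 = 10.0, 100.0    # observation distances [m]
s1, s2 = 3.0, 0.5       # drawdowns at r1 and r2 [m]
T = Q * math.log(r2 / r1) / (2 * math.pi * (s1 - s2))
print(T)                # ≈ 0.00664 m^2/s (≈ 574 m^2/day)
```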
A plate load test is carried out on a 300 mm x 300 mm plate placed at 2 m below the ground level to determine the bearing capacity of a 2 m x 2 m footing placed at same depth of 2 m on a homogeneous sand deposit extending 10 m below ground. The ground water table is 3 m below the ground level. Which of the following factors does not require a correction to the bearing capacity determined based on the load test?
Direction: Examine the test arrangement and the soil properties given below.
The maximum pressure that can be applied with a factor of safety of 3 through the concrete block, ensuring no bearing capacity failure in soil using Terzaghi’s bearing capacity equation without considering the shape factor, depth factor and inclination factor is
A saturated undisturbed sample from a clay stratum has moisture content of 22.22% and specific gravity of 2.7. Assuming $\gamma_w = 10\,\text{kN/m}^3$, the void ratio and the saturated unit weight of the clay, respectively, are
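For a saturated soil, $S e = w G_s$ with $S = 1$, and $\gamma_{sat} = \frac{G_s + e}{1 + e}\gamma_w$; a sketch of the arithmetic (names mine):

```python
w, Gs, gamma_w = 0.2222, 2.7, 10.0     # moisture content, specific gravity, kN/m^3

e = w * Gs                              # S*e = w*Gs with S = 1 (fully saturated)
gamma_sat = (Gs + e) / (1 + e) * gamma_w

print(round(e, 2), round(gamma_sat, 1))  # 0.6 and 20.6 (kN/m^3)
```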
The ultimate load capacity of a 10 m long concrete pile of square cross-section 500 mm x 500 mm driven into a homogeneous clay layer having undrained cohesion value of 40 kPa is 700 kN. If the cross-section of the pile is reduced to 250 mm x 250 mm and the length of the pile is increased to 20 m, the ultimate load capacity will be
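One way to work this, assuming the usual static capacity $Q_u = 9 c_u A_b + \alpha c_u A_s$ for piles in clay and back-solving the adhesion factor $\alpha$ from the first pile (a sketch; names mine):

```python
cu = 40.0                           # undrained cohesion (kPa)

# 500 mm square pile, 10 m long, with known capacity 700 kN
Ab1, As1 = 0.5 * 0.5, 4 * 0.5 * 10  # base area, shaft surface area (m^2)
end1 = 9 * cu * Ab1                 # end bearing = 90 kN
alpha = (700 - end1) / (cu * As1)   # back-solved adhesion factor

# 250 mm square pile, 20 m long: same cu and alpha
Ab2, As2 = 0.25 * 0.25, 4 * 0.25 * 20
Q2 = 9 * cu * Ab2 + alpha * cu * As2
print(Q2)                           # 632.5 kN
```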
The liquid limit (LL), plastic limit (PL) and shrinkage limit (SL) of a cohesive soil satisfy the relation
Using the properties of the clay layer derived from the above question, the consolidation settlement of the same clay layer under a square footing (neglecting its self weight) with additional data shown in the figure below (assume the stress distribution as 1H : 2V from the edge of the footing and$y_{\omega}$= 10kN/m3) is
A singly under-reamed, 8-m long, RCC pile (shown in the adjoining figure) weighing 20 kN with 350 mm shaft diameter and 750 mm under-ream diameter is installed within stiff, saturated silty clay (undrained shear strength is 50 kPa, adhesion factor is 0.3, and the applicable bearing capacity factor is 9.0) to counteract the impact of soil swelling on a structure constructed above. Neglecting suction and the contribution of the under-ream to the adhesive shaft capacity, what would be the estimated ultimate tensile capacity (rounded off to the nearest integer value of kN) of the pile? |
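One plausible decomposition of the uplift resistance, per my reading of the statement (adhesion along the shaft, bearing on the under-ream annulus, plus the pile weight — this is a sketch, not a definitive solution):

```python
import math

cu, alpha, Nc = 50.0, 0.3, 9.0        # kPa, adhesion factor, bearing capacity factor
d, D, L, W = 0.35, 0.75, 8.0, 20.0    # shaft dia, under-ream dia (m), length (m), weight (kN)

shaft = alpha * cu * math.pi * d * L               # adhesive shaft resistance (kN)
annulus = Nc * cu * math.pi / 4 * (D**2 - d**2)    # bearing on under-ream annulus (kN)
total = shaft + annulus + W
print(round(total))                                # ~307 kN
```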
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Approximation algorithms are only for optimization problems, not for decision problems. Why don't we define the approximation ratio to be the fraction of mistakes an algorithm makes when trying to solve some decision problem? Because "the approximation ratio" is a term with a well-defined, standard meaning, one that means something else, and it would be ...
As you said, there is no decision to make, so new complexity classes and new types of reductions are needed to arrive at a suitable definition of NP-hardness for optimization problems. One way of doing this is to have two new classes NPO and PO that contain optimization problems, and they mimic, of course, the classes NP and P for decision problems. New ...
Usually what's shown is the NP-hardness of a "Gap" version of the problem. For example, suppose that you want to show that it's hard to approximate SET COVER to within a factor of 2. You define the following "promise" instance of SET COVER that we'll call 2-GAP-SET-COVER: Fix some number $\ell$. 2-GAP-SET-COVER consists of all instances of set cover where ...
The expression "$A(L) \le (1 + \varepsilon) \text{Opt}(L) + O(1/\varepsilon^2)$" is, as usual, shorthand for the following:There exist constants $c>0$ and $\varepsilon_0>0$ such that for all $\varepsilon$ with $0<\varepsilon<\varepsilon_0$, the inequality $A(L) \le (1 + \varepsilon) \text{Opt}(L) + c/\varepsilon^2$ holds.
There is actually a stronger result; A problem is in the class $\mathrm{FPTAS}$ if it has an fptas1: an $\varepsilon$-approximation running in time bounded by $(n+\frac{1}{\varepsilon})^{\mathcal{O}(1)}$ (i.e. polynomial in both the size and the approximation factor). There's a more general class $\mathrm{EPTAS}$ which relaxes the time bound to $f(\frac{1}{\...
One reason that we see different approximation complexities for NP-complete problems is that the necessary conditions for NP-complete constitute a very coarse grained measure of a problem's complexity. You may be familiar with the basic definition of a problem $\Pi$ being NP-complete:$\Pi$ is in NP, andFor every other problem $\Xi$ in NP, we can turn an ...
One way to consider the difference between decision version and optimization version is by considering different optimization versions of the same decision version. Take for example the MAX-CLIQUE problem, which is very hard to approximate in terms of the usual parameter – the size of the clique. If we change the optimization parameter to the logarithm of ...
Let me answer your questions in order:By definition, a problem has an FPTAS if there is an algorithm which on instances of length $n$ gives an $1+\epsilon$-approximation and runs in time polynomial in $n$ and $1/\epsilon$, that is $O((n/\epsilon)^C)$ for some constant $C \geq 0$. A running time of $2^{1/\epsilon}$ doesn't belong to $O((n/\epsilon)^C)$ for ...
The reason you don't see things like approximation ratios in decision making problems is that they generally do not make sense in the context of the questions one typically asks about decision making problems. In an optimization setting, it makes sense because it's useful to be "close." In many environments, it doesn't make sense. It doesn't make sense to ...
I believe that NVidia GPUs use a table lookup, followed by a quadratic interpolation. I think they are using an algorithm similar to the one described in: Oberman, Stuart F; Siu, Michael Y: "A High-Performance Area-Efficient Multifunction Interpolator," IEEE Int'l Symp Comp Arithmetic (ARITH-17):272-279, 2005. The table lookup is indexed with the $...
I'll expand on the answer by Yuval Filmus by providing an interpretation based on multi-objective optimization problems. Single-objective optimization and approximation: In computer science we often study optimization problems with a single objective (for example, minimize f(x) subject to some constraint). When proving, say, NP-completeness, it is common to ...
A heuristic is essentially a hunch, i.e., the case you describe ("I noticed it is near", you don't have a proof it is so) is a heuristic. As is solving the traveling salesman problem by starting at a random vertex and going to the nearest not-yet-visited vertex at each step. It is a plausible idea that should not give too bad a solution. In this case, it can be shown ...
In theoretical computer science, an approximation algorithm is an algorithm that guarantees a certain approximation ratio $\rho$, and an approximation scheme is a (uniform) collection of algorithms that guarantees several different approximation ratios. Since the collection is uniform (all the algorithms look the same but with different parameters), you can ...
Typically, we use $\alpha < 1$ for maximization problems, and $\alpha > 1$ for minimization problems, where $\alpha$ is the approximation guarantee. So, a $2$-approximation algorithm returns a solution whose cost is at most twice the optimal. But as always, to be absolutely sure, go back to the definitions of the text you are reading (if a definition ...
I guess one possible answer to your question is this: Take a pseudorandom number generator $G$. Try to chose a generator which has some powerful attacks against it: a random number generator attack for $G$ is (for our purposes), an algorithm $A$ which, when given an imput string $s$, determines a seed $A(s)$, such that $G(A(s))=s$. Then approximate the KC of ...
No. Counting independent sets in a graph is #P-hard, even for 4-regular graphs, but Dror Weitz gave a PTAS for counting independent sets of $d$-regular graphs for any $d \leq 5$ [3]. (In the model he writes about, counting independent sets corresponds to taking $\lambda=1$.) Computing the permanent of a 0-1 matrix is also #P-hard (this is in Valiant's original #...
In fact, something stronger is true: if you can approximate maximum clique within $n^{1-\epsilon}$ for some $\epsilon > 0$ then P=NP. This is because for every $\epsilon > 0$ there is a polytime reduction $f_\epsilon$ that takes an instance $\varphi$ of SAT and returns an instance $(G,cn)$ of maximum clique such that:If $\varphi$ is satisfiable then $...
AD supports arbitrary computer programs, including branches and loops, but with one caveat: the control flow of the program must not depend on the contents of variables whose derivatives are to be calculated (or variables depending on them). Here is an example:if x = 3 then 9 else x * xAt close inspection you will recognize that the above is really just ...
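The caveat can be made concrete with a minimal forward-mode AD sketch (dual numbers; all names here are mine): when the control flow branches on the value being differentiated, the derivative silently comes out wrong.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Forward-mode AD value: primal value plus derivative (tangent)."""
    val: float
    dot: float
    def __mul__(self, other):
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val, self.dot * other.val + self.val * other.dot)

def f(x):
    # control flow depends on the differentiated value -- the AD caveat
    if x.val == 3:
        return Dual(9.0, 0.0)   # constant branch: derivative information is lost
    return x * x

print(f(Dual(3.0, 1.0)).dot)    # 0.0, although d(x*x)/dx at x=3 is 6
print(f(Dual(3.001, 1.0)).dot)  # ~6.002, the correct derivative nearby
```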
In his famous paper, Håstad shows that it is NP-hard to approximate MAX2SAT better than $21/22$. This likely means that it is NP-hard to distinguish instances which are $\leq \alpha$ satisfiable and instances which are $\geq (22/21) \alpha$ satisfiable, for some $\alpha \geq 1/2$. Now imagine padding an instance so that it becomes a $p$-fraction of a new ...
Following up on a comment by sdcvvc I checked out example 11.7 in Computational Complexity: A Modern Approach by Arora and Barak (Draft of 2008). There, the authors describe a "PCP algorithm" for the problem Graph Non-Isomorphism (GNI):Input: Two graphs $G_0$ and $G_1$ with $n$ vertices each.Output: Accept if and only if $G_0$ and $G_1$ are not ...
In general when you want to bound the approximation ratio of an algorithm you look for an easy lower bound on the optimal value. The most straightforward is often the LP relaxation of a (suitably chosen) ILP formulation of the problem. Sometimes other things are used, for TSP for example you can also use the weight of a MST (the optimal tour minus one edge ...
You made a crucial change to the question. I've updated my answer to respond to the new question; I'll keep my original answer below for posterity as well.To answer the latest iteration of the question: If the problem you really want to solve is a decision problem, and you've shown that it is NP-complete, then you might be in a tough spot. Here are some ...
I ran across this question while researching a similar problem: optimum additions of liquids to reduce stratification. It seems like my solution would be applicable to your situation, as well.If you want to mix liquids A, B, and C in the proportion 30,20,10 (that is, 30 units of A, 20 units of B, and 10 units of C), you end up with stratification if you ...
Optimization problems come in two flavors: minimization and maximization. For definiteness, in this answer we consider minimization problems; for maximization problems the situation is completely analogous.Generally speaking, when we say that a minimization problem $\Pi$ is $c$-hard to approximate, we mean the following:If there is a polynomial time ...
In addition to the existing answers, let me point out that there are situations where it makes sense to have an approximate solution for a decision problem, but it works different than you might think.With these algorithms, only one of the two outcomes is determined with certainty, while the other might be incorrect. Take the Miller-Rabin test for prime ...
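A sketch of the test mentioned above (names mine): Miller-Rabin answers "composite" with certainty, while "prime" is only probabilistic, with one-sided error.

```python
import random

def miller_rabin(n, rounds=20):
    """Return 'composite' (certain) or 'probably prime' (error prob <= 4**-rounds)."""
    if n < 2:
        return 'composite'
    if n in (2, 3):
        return 'probably prime'
    if n % 2 == 0:
        return 'composite'
    # write n-1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return 'composite'   # a is a witness: n is definitely composite
    return 'probably prime'      # no witness found; the error is one-sided

print(miller_rabin(561))   # Carmichael number, still detected: 'composite'
print(miller_rabin(97))    # 'probably prime'
```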
A short trip to wikipedia will tell you that there is no known better approximation algorithm for vertex cover (at least when by "better" we require an improvement by a constant independent of the input).This is what is known so far:The best known approximation achieves an approximation factor of $2-\Theta\left(\frac{1}{\sqrt{\log V}}\right)$ [1].VC is ...
Any probability distribution. If you have a computable probability distribution that gives your data probability $p(x)$, then by the Kraft inequality, there's a computable compressor that compresses it in $-\log p(x)$ bits (round up if you object to fractional bits). This means pretty much any generative machine learning algorithm can be used.This is why ...
Blache et al., On Approximation Intractability of the Bandwidth Problem, 1997 confirm there is no PTAS for the problem unless $\text{P} = \text{NP}$, even for (binary) trees. Unger W, The Complexity of the Approximation of the Bandwidth Problem, 1998 shows that for any constant $k \in \mathbb{N}$ there is no polynomial time approximation algorithm with an ...
Kann's online compendium of NPO problems is a good place to start. Feedback Arc Set (the "Directed" part is redundant when you use "arc") is: APX-hard, and approximable within $\mathcal{O}(\log n \log \log n)$ (where $n$ is the number of vertices). The problem is also fixed-parameter tractable, so it might make more sense to solve the problem exactly, rather ... |
I'm having trouble understanding the math behind a step in an explanation of BCS theory.
At one point the superconductor gap $\Delta$ is defined as
\begin{equation} 1 = V \sum_q \frac{1}{\sqrt{\xi_q^2+\Delta^2}}, \end{equation} where $V$ is the potential and $\xi_q \equiv \epsilon_q - \mu$. The tricky part is when the summation is then recast as an integral:
\begin{equation} \boxed{\sum_q \frac{1}{\sqrt{(\epsilon_q - \mu)^2+\Delta^2}} \mapsto \int \frac{1}{\sqrt{(\epsilon - \mu)^2+\Delta^2}}\color{red}{\rho(\epsilon)}d\epsilon.} \end{equation}
I didn't write the limits of summation/integration, for clarity; I don't think they matter in this instance. I am missing two points:
How do we justify going from a summation to an integral in this case? Most importantly: why does the density of states $\rho(\epsilon)$ appear? I would (intuitively) say that the same integral without the density of states would look right.
In a way this looks more like a math problem than a physics problem, but I chose to post it here since there could also be some physical argument that comes into play for its solution. |
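For what it's worth, one standard way to see where $\rho(\epsilon)$ comes from (assuming the sum runs over single-particle states $q$ with energies $\epsilon_q$) is to insert a delta function and swap the sum and the integral:

```latex
\sum_q F(\epsilon_q - \mu)
  = \sum_q \int d\epsilon \,\delta(\epsilon - \epsilon_q)\, F(\epsilon - \mu)
  = \int d\epsilon \underbrace{\sum_q \delta(\epsilon - \epsilon_q)}_{\equiv\, \rho(\epsilon)} F(\epsilon - \mu).
```

On this reading, the density of states is exactly the bookkeeping factor counting how many states fall in each energy window $d\epsilon$; dropping it would amount to assuming one state per unit energy.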
Examples. An indefinite integral (or antiderivative) of $\cos$ is $\sin$:
$$\int \cos = \sin.$$
Edit: There has been much unexpected confusion with the above statement. I define the above statement to mean precisely that an antiderivative of the cosine function (which has domain $\mathbb R$) is the sine function (which has domain $\mathbb R$). Or equivalently, the derivative of the sine function is the cosine function. This notation though is not what the question is about, so I hope you will not dwell on this. If you want to question the merits of this notation, that is perhaps for another question. If it really bothers you, you can just pretend the above is statement instead $\int \cos x = \sin x$ or $\int \cos x \, \textrm dx = \sin x$ or $\int \cos x \, \textrm dx = \sin x + C$. (I personally assign to these latter statements slightly different meanings. And yes, I do explain all of this very clearly and repeatedly to my students.)
The definite integral (or "area function") of $\cos$ from $0$ to $\pi/2$ is $1$:
$$\int^{\pi/2}_0 \cos = 1.$$
These are distinct concepts. But by custom, we use the same symbol for them, because the Fundamental Theorem of Calculus tells us that
$$\int^{\pi/2}_0 \cos = \left(\int \cos\right)\left(\frac{\pi}{2}\right)-\left(\int \cos\right)(0) = \sin \frac{\pi}{2}-\sin 0 = 1- 0=1.$$
This often causes students a great deal of confusion (see e.g. below "Related (but distinct) discussions"). In particular, students
don't see any difference between the indefinite and definite integrals; believe that integration is, by definition, the inverse of differentiation; and thus fail to understand the significance of the FTC, or to see that it is more than a mere definition.
To reduce the above confusion, I have already tried to follow @PeteL.Clark's recommendation (in this answer) of using the term antiderivative (rather than indefinite integral) as much as possible.
I am now thinking of also trying the following approach:
First introduce antidifferentiation, never once mentioning the word integral. Don't use $\int$ as our symbol for the antidifferentiation operator---instead, use some other symbol like $\square$, in which case we'd write things like $\square \cos=\sin$. Introduce integration (i.e. the definite integral) as usual. Go through the FTC, explaining that integration turns out to be, in a certain precise sense, the "same thing" as antidifferentiation. Explain that because of the FTC (and custom), we'll now discard our $\square$ symbol for the antidifferentiation operator and replace it with $\int$.
I don't think I've ever seen any writer using this approach (but please let me know if you have). So, what disadvantages might the above approach have?
Pete L. Clark also writes: "At some point I slip up because after all we are using the same integral sign for both: I almost wish we didn't. (Using almost identical notation for two a priori incredibly different things which in the setting of the FTC become almost the same is one of the brilliant notational innovations of calculus.)"
I don't quite understand the above quote. I understand that today, after centuries of use, it would be difficult and foolish to go against tradition and attempt to completely replace the $\int$ symbol for the indefinite integral/antiderivative with something else altogether.
But why wouldn't it have been better if mathematicians had just decided to use a different symbol from the start? What is so brilliant about using the same symbol $\int$ for "two a priori incredibly different things"?
It seems that this has simply resulted in a great deal of confusion (at least among average students of calculus), without any obvious gain in convenience or understanding.
Related (but distinct) discussions: |
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... |
Group cohomology of dihedral group:D8
Latest revision as of 00:27, 29 May 2013
This article gives specific information, namely, group cohomology, about a particular group, namely: dihedral group:D8. View group cohomology of particular groups | View other specific information about dihedral group:D8

Family contexts
Family name: dihedral group | Parameter value: degree 4, order 8 | Information on group cohomology of family: group cohomology of dihedral groups

Homology groups for trivial group action

FACTS TO CHECK AGAINST (homology group for trivial group action):
First homology group: first homology group for trivial group action equals tensor product with abelianization.
Second homology group: formula for second homology group for trivial group action in terms of Schur multiplier and abelianization (Hopf's formula for Schur multiplier).
General: universal coefficients theorem for group homology; homology group for trivial group action commutes with direct product in second coordinate; Künneth formula for group homology.

Over the integers
The homology groups over the integers are given as follows:

$H_q(D_8;\mathbb{Z}) = \left\lbrace \begin{array}{rl} \mathbb{Z}, & q = 0 \\ (\mathbb{Z}/2\mathbb{Z})^{(q+3)/2}, & q \equiv 1 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{(q+1)/2} \oplus \mathbb{Z}/4\mathbb{Z}, & q \equiv 3 \pmod 4 \\ (\mathbb{Z}/2\mathbb{Z})^{q/2}, & q \text{ even}, q > 0 \end{array}\right.$

(This agrees with the GAP computations of the first eight homology groups below.)
The first few homology groups are given below:
Over an abelian group
The homology groups over an abelian group are given as follows:
The first few homology groups with coefficients in an abelian group are given below:
Cohomology groups for trivial group action

FACTS TO CHECK AGAINST (cohomology group for trivial group action):
First cohomology group: first cohomology group for trivial group action is naturally isomorphic to group of homomorphisms.
Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization.
In general: dual universal coefficients theorem for group cohomology, relating cohomology with arbitrary coefficients to homology with coefficients in the integers; cohomology group for trivial group action commutes with direct product in second coordinate; Künneth formula for group cohomology.

Over the integers
The cohomology groups over the integers are given as follows:
The first few cohomology groups are given below:
Over an abelian group
The cohomology groups over an abelian group are given as follows:
The first few cohomology groups with coefficients in an abelian group are:
Cohomology ring with coefficients in integers

PLACEHOLDER FOR INFORMATION TO BE FILLED IN

Second cohomology groups and extensions

Schur multiplier
The Schur multiplier of dihedral group:D8 is cyclic group:Z2. This has implications for projective representation theory of dihedral group:D8.
Schur covering groups
The three possible Schur covering groups for dihedral group:D8 are: dihedral group:D16, semidihedral group:SD16, and generalized quaternion group:Q16. For more, see second cohomology group for trivial group action of D8 on Z2, where these correspond precisely to the stem extensions.
Second cohomology groups for trivial group action
Group acted upon: cyclic group:Z2 | Order: 2 | Second part of GAP ID: 1 | Second cohomology group (as an abstract group): elementary abelian group:E8 | Order of second cohomology group: 8 | Extensions: direct product of D8 and Z2, SmallGroup(16,3), nontrivial semidirect product of Z4 and Z4, dihedral group:D16, semidihedral group:SD16, generalized quaternion group:Q16 | Number of extensions up to pseudo-congruence: 6 | Cohomology information: second cohomology group for trivial group action of D8 on Z2

Group acted upon: cyclic group:Z4 | Order: 4 | Second part of GAP ID: 1 | Second cohomology group (as an abstract group): elementary abelian group:E8 | Order of second cohomology group: 8 | Extensions: direct product of D8 and Z4, nontrivial semidirect product of Z4 and Z8, SmallGroup(32,5), central product of D16 and Z4, SmallGroup(32,15), wreath product of Z4 and Z2 | Number of extensions up to pseudo-congruence: 6 | Cohomology information: second cohomology group for trivial group action of D8 on Z4

Group acted upon: Klein four-group | Order: 4 | Second part of GAP ID: 2 | Second cohomology group (as an abstract group): elementary abelian group:E64 | Order of second cohomology group: 64 | Extensions: [SHOW MORE] | Number of extensions up to pseudo-congruence: 11 | Cohomology information: second cohomology group for trivial group action of D8 on V4

Baer invariants
Subvariety of the variety of groups | General name of Baer invariant | Value of Baer invariant for this group
abelian groups | Schur multiplier | cyclic group:Z2
groups of nilpotency class at most two | 2-nilpotent multiplier |
groups of nilpotency class at most three | 3-nilpotent multiplier |
any variety of groups containing all groups of nilpotency class at most three | -- |

GAP implementation

Computation of integral homology
The homology groups for trivial group action with coefficients in the integers can be computed in GAP using the GroupHomology function in the
HAP package, which can be loaded by the command
LoadPackage("hap"); if it is installed but not loaded. The function outputs the orders of cyclic groups for which the homology or cohomology group is the direct product of these (more technically, it outputs the elementary divisors for the homology or cohomology group that we are trying to compute).
Here are computations of the first few homology groups:
Computation of first homology group:

gap> GroupHomology(DihedralGroup(8),1);
[ 2, 2 ]
The way this is to be interpreted is that the first homology group (the abelianization) is the direct sum of cyclic groups of the orders listed, so in this case we get the direct sum of two cyclic groups of order 2, which is the Klein four-group.
Computation of second homology group:

gap> GroupHomology(DihedralGroup(8),2);
[ 2 ]

Computation of first few homology groups
To compute the first eight homology groups, do:
gap> List([1,2,3,4,5,6,7,8],i->[i,GroupHomology(DihedralGroup(8),i)]);
[ [ 1, [ 2, 2 ] ], [ 2, [ 2 ] ], [ 3, [ 2, 2, 4 ] ], [ 4, [ 2, 2 ] ],
  [ 5, [ 2, 2, 2, 2 ] ], [ 6, [ 2, 2, 2 ] ], [ 7, [ 2, 2, 2, 2, 4 ] ],
  [ 8, [ 2, 2, 2, 2 ] ] ] |
I want to solve the following integral $$\int_0^{\infty}y^3\theta e^{-\theta y} dy$$ so I chose two approaches, a direct one and one with substitution. The direct one is just integrating by parts three times, which leads to $\frac{6}{\theta^3}$. The one with substitution uses $y\theta = t$. Now we have $y = \frac{t}{\theta}$ and $dy = \frac{dt}{\theta}$. Substituting and integrating by parts, we have $$\frac{1}{\theta^3}\int_0^{\infty}t^3e^{-t}dt$$ and we get again $\frac{6}{\theta^3}$.
However this is a result in $t$, not $y$ (the original variable). So I thought I had to find it in terms of $y$, i.e. $y = \frac{t}{\theta} = \frac{6}{\theta^4}$ which is different from the previous result!
So my question is
When I solve an integral using substitution as above, do I have to bring the result back in terms of the original variable? Because I recall loads of examples (e.g. the sine substitution) where we had to bring the result back. But here, obviously, I get the same result if I don't bring it back to the original variable. Can you help me clarify this? |
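A numeric sanity check (plain Python; $\theta = 2$ is an arbitrary choice of mine) that the substituted integral, left in terms of $t$, already equals the direct integral, because the limits were transformed along with the variable:

```python
import math

theta = 2.0   # arbitrary positive test value

def trapezoid(f, a, b, n=200_000):
    """Composite trapezoid rule on [a, b]."""
    h = (b - a) / n
    return h * (sum(f(a + i * h) for i in range(1, n)) + (f(a) + f(b)) / 2)

# direct integral in the original variable y (the tail beyond y=40 is negligible)
direct = trapezoid(lambda y: y**3 * theta * math.exp(-theta * y), 0.0, 40.0)

# substituted integral in t = theta*y, evaluated WITHOUT back-substituting,
# since the limits 0..infinity were transformed along with the variable
substituted = trapezoid(lambda t: t**3 * math.exp(-t), 0.0, 80.0) / theta**3

print(direct, substituted, 6 / theta**3)   # all three agree: 0.75
```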
Sessile drop
A sessile drop is a drop of liquid at rest on a solid surface. In the absence of gravity, the shape of the drop is controlled by surface tension only. An important parameter is the "contact angle" $\theta$ between the solid surface and the interface. In the absence of gravity, the drop is hemispherical and it is easy to show that the relation between the radius of the drop $R$ and its volume $V$ is (for two-dimensional drops) $$V = R^2 (\theta - \sin\theta\cos\theta)$$
To test this relation, a drop is initialised as a half-disk (i.e. the initial contact angle is $90^\circ$) and the contact angle is varied between $15^\circ$ and $165^\circ$. The drop oscillates and eventually relaxes to its equilibrium position. This equilibrium is exact to within machine accuracy. The curvature along the interface is constant.
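The relation above can be spot-checked independently of Basilisk: with $R_0 = \sqrt{V/\pi}$, the equilibrium ratio is $R/R_0 = \sqrt{\pi/(\theta - \sin\theta\cos\theta)}$, and for a $90^\circ$ angle the drop is a half-disk, so $V = \pi R^2/2$ and $R/R_0 = \sqrt{2}$ (a sketch; function name is mine):

```python
import math

def R_over_R0(theta_deg):
    """Equilibrium radius over R0 = sqrt(V/pi), from V = R^2 (theta - sin(theta)cos(theta))."""
    t = math.radians(theta_deg)
    return math.sqrt(math.pi / (t - math.sin(t) * math.cos(t)))

assert abs(R_over_R0(90) - math.sqrt(2)) < 1e-12   # half-disk case
for a in (15, 90, 165):
    print(a, round(R_over_R0(a), 3))
```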
Note that shallower angles are not accessible yet.
set term push
set term @SVG size 640,180
set size ratio -1
unset key
unset xtics
unset ytics
unset border
plot 'out' w l, '' u (-$1):2 w l lt 1, 0 lt -1
set term pop
#include "grid/multigrid.h"#include "navier-stokes/centered.h"#include "contact.h"#include "vof.h"#include "tension.h"scalar f[], * interfaces = {f};
To set the contact angle, we allocate a height-function field and set the contact angle boundary condition on its tangential component.
vector h[];
double theta0 = 30;
h.t[bottom] = contact_angle (theta0*pi/180.);

int main()
{
  size (2);
We use a constant viscosity.
const face vector muc[] = {.1,.1};
mu = muc;
We must associate the height function field with the VOF tracer, so that it is used by the relevant functions (curvature calculation in particular).
f.height = h;
We set the surface tension coefficient and run for the range of contact angles.
f.sigma = 1.;
for (theta0 = 15; theta0 <= 165; theta0 += 15)
  run();
}
The initial drop is a quarter of a circle.
At equilibrium (t = 10 seems sufficient), we output the interface shape and compute the (constant) curvature.
We compare $R/R_0$ to the analytical expression, with $R_0=\sqrt{V/\pi}$.
reset
set xlabel 'Contact angle (degrees)'
set ylabel 'R/R_0'
set arrow from 15,1 to 165,1 nohead dt 2
set xtics 15,15,165
plot 1./sqrt(x/180. - sin(x*pi/180.)*cos(x*pi/180.)/pi) t 'analytical', \
     'log' u 2:3 pt 7 t 'numerical' |
Jansen, Maurice ; Qiao, Youming ; Sarma M.N., Jayalal
Deterministic Black-Box Identity Testing $\pi$-Ordered Algebraic Branching Programs
Abstract. In this paper we study algebraic branching programs (ABPs) with restrictions on the order and the number of reads of variables in the program. An ABP is given by a layered directed acyclic graph with source $s$ and sink $t$, whose edges are labeled by variables taken from the set $\{x_1, x_2, \ldots, x_n\}$ or field constants. It computes the sum of weights of all paths from $s$ to $t$, where the weight of a path is defined as the product of edge-labels on the path.
Given a permutation $\pi$ of the $n$ variables, for a $\pi$-ordered ABP ($\pi$-OABP), for any directed path $p$ from $s$ to $t$, a variable can appear at most once on $p$, and the order in which variables appear on $p$ must respect $\pi$. One can think of OABPs as being the arithmetic analogue of ordered binary decision diagrams (OBDDs). We say an ABP $A$ is of read $r$, if any variable appears at most $r$ times in $A$.
Our main result pertains to the polynomial identity testing problem, i.e. the problem of deciding whether a given $n$-variate polynomial is identical to the zero polynomial or not. We prove that over any field $\mathbb{F}$, and in the black-box model, i.e. given only query access to the polynomial, read $r$ $\pi$-OABP computable polynomials can be tested in $\mathrm{DTIME}[2^{O(r\log r \cdot \log^2 n \log\log n)}]$. In case $\mathbb{F}$ is a finite field, the above time bound holds provided the identity testing algorithm is allowed to make queries to extension fields of $\mathbb{F}$. To establish this result, we combine some basic tools from algebraic geometry with ideas from derandomization in the Boolean domain.
Our next set of results investigates the computational limitations of OABPs. It is shown that any OABP computing the determinant or permanent requires size $\Omega(2^n/n)$ and read $\Omega(2^n/n^2)$. We give a multilinear polynomial $p$ in $2n+1$ variables over some specifically selected field $\mathbb{G}$, such that any OABP computing $p$ must read some variable at least $2^n$ times. We prove a strict separation for the computational power of read $(r-1)$ and read $r$ OABPs. Namely, we show that the elementary symmetric polynomial of degree $r$ in $n$ variables can be computed by a size $O(rn)$ read $r$ OABP, but not by a read $(r-1)$ OABP, for any $0 < 2r-1 \leq n$. Finally, we give an example of a polynomial $p$ and two variable orders $\pi \neq \pi'$, such that $p$ can be computed by a read-once $\pi$-OABP, but where any $\pi'$-OABP computing $p$ must read some variable at least $2^n$ times.
BibTeX - Entry
@InProceedings{jansen_et_al:LIPIcs:2010:2872,
author = {Maurice Jansen and Youming Qiao and Jayalal Sarma M.N.},
title = {{Deterministic Black-Box Identity Testing $\pi$-Ordered Algebraic Branching Programs}},
booktitle = {IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2010)},
pages = {296--307},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-23-1},
ISSN = {1868-8969},
year = {2010},
volume = {8},
editor = {Kamal Lodaya and Meena Mahajan},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2010/2872},
URN = {urn:nbn:de:0030-drops-28728},
doi = {10.4230/LIPIcs.FSTTCS.2010.296},
annote = {Keywords: ordered algebraic branching program, polynomial identity testing}
}
Quasirandomness

Latest revision as of 06:08, 8 July 2010

Introduction
Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the density increment method or on some kind of generalization of Szemerédi's regularity lemma.
In general, one has some kind of parameter associated with a set, which in our case will be the number of combinatorial lines it contains, and one would like a deterministic definition of the word "quasirandom" with the following key property. Every quasirandom set $\mathcal{A}$ has roughly the same value of the given parameter as a random set of the same density.

Needless to say, this is not the only desirable property of the definition, since otherwise we could just define $\mathcal{A}$ to be quasirandom if it has roughly the same value of the given parameter as a random set of the same density. The second key property is this. Every set $\mathcal{A}$ that fails to be quasirandom has some other property that we can exploit.
These two properties are already discussed in some detail in the article on the density increment method: this article concentrates more on examples of quasirandomness in other contexts, and possible definitions of quasirandomness connected with the density Hales-Jewett theorem.
A possible definition of quasirandom subsets of $[3]^n$
As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function.
Here, first, is a possible definition of a quasirandom function from $[2]^n\times [2]^n$ to $[-1,1].$ We say that f is c-quasirandom if $\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.$ However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of $[n].$ Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute $[n]$ using a permutation $\pi$. Then we let A, A', B and B' be four random intervals in $\pi([n]),$ where we allow our intervals to wrap around mod n. (So, for example, a possible set A is $\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.$)
As ever, it is easy to prove positivity. To apply this definition to subsets $\mathcal{A}$ of $[3]^n,$ define f(A,B) to be 0 if A and B intersect, $1-\delta$ if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to $\mathcal{A},$ and $-\delta$ otherwise. Here, $\delta$ is the probability that (A,B) belongs to $\mathcal{A}$ if we choose (A,B) randomly by taking two random intervals in a random permutation of $[n]$ (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that $\mathbb{E}f=0$ (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect).
Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation $\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)$ is small (if the distribution on these "set-theoretic corners" is appropriately defined).
Errors and residuals
In statistics and optimization, statistical errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "theoretical value". The error (or disturbance) of an observed value is the deviation of the observed value from the (unobservable) true function value, while the residual of an observed value is the difference between the observed value and the estimated function value.

Introduction
Suppose there is a series of observations from a univariate distribution and we want to estimate the mean of that distribution (the so-called location model). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean.
A statistical error (or disturbance) is the amount by which an observation differs from its expected value, the latter being based on the whole population from which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being the mean of the entire population, is typically unobservable, and hence the statistical error cannot be observed either.
A residual (or fitting error), on the other hand, is an observable estimate of the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample of n people. The sample mean could serve as a good estimator of the population mean. Then we have:

The difference between the height of each man in the sample and the unobservable population mean is a statistical error, whereas the difference between the height of each man in the sample and the observable sample mean is a residual.
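A minimal numeric sketch of the height example (the sample values are simulated; the population mean is known only to the simulation, mimicking its unobservability in practice):

```python
# Errors vs. residuals in the location model.  The population mean mu is
# "unobservable" to the analyst; the simulation knows it, so we can compute
# both quantities side by side.
import random

random.seed(0)
mu = 1.75                                     # true population mean (meters)
sample = [random.gauss(mu, 0.07) for _ in range(10)]
xbar = sum(sample) / len(sample)              # observable sample mean

errors    = [x - mu   for x in sample]        # deviations from the true mean
residuals = [x - xbar for x in sample]        # deviations from the sample mean

sum_residuals = sum(residuals)                # zero, up to floating-point rounding
sum_errors    = sum(errors)                   # almost surely nonzero
```

With the seed fixed, `sum_errors` is a deterministic nonzero value, while `sum_residuals` vanishes by construction of the sample mean.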
Note that the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarily not independent. The statistical errors, on the other hand, are independent, and their sum within the random sample is almost surely not zero.

Example

Suppose the observations are drawn as
$$X_1, \dots, X_n\sim N(\mu,\sigma^2)$$
and the sample mean
$$\overline{X}=\frac{X_1 + \cdots + X_n}{n}$$
is a random variable distributed thus:
$$\overline{X}\sim N(\mu, \sigma^2/n).$$
The statistical errors are then
$$\varepsilon_i=X_i-\mu,$$
whereas the residuals are
$$\widehat{\varepsilon}_i=X_i-\overline{X}.$$
(As is often done, the "hat" over the letter $\varepsilon$ indicates an observable estimate of an unobservable quantity called $\varepsilon$.) The sum of squares of the statistical errors, divided by $\sigma^2$, has a chi-squared distribution with $n$ degrees of freedom:
$$\sum_{i=1}^n \left(X_i-\mu\right)^2/\sigma^2\sim\chi^2_n.$$
This quantity, however, is not observable. The sum of squares of the residuals, on the other hand, is observable. The quotient of that sum by $\sigma^2$ has a chi-squared distribution with only $n-1$ degrees of freedom:
$$\sum_{i=1}^n \left(X_i-\overline{X}\right)^2/\sigma^2\sim\chi^2_{n-1}.$$
This difference between n and n − 1 degrees of freedom results in Bessel's correction for the estimation of the sample variance of a population with unknown mean and unknown variance, though if the mean is known, no correction is necessary.
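A quick simulation (illustrative parameters, not from the text) showing why the n − 1 denominator is the right one:

```python
# Monte-Carlo check of Bessel's correction: dividing the residual sum of
# squares by (n - 1) estimates sigma^2 without bias; dividing by n
# underestimates it by the factor (n - 1)/n.
import random

random.seed(1)
mu, sigma, n, trials = 0.0, 2.0, 5, 20000
avg_biased, avg_corrected = 0.0, 0.0
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)     # residual sum of squares
    avg_biased    += ss / n
    avg_corrected += ss / (n - 1)
avg_biased    /= trials
avg_corrected /= trials
# avg_corrected is close to sigma^2 = 4.0; avg_biased is close to 3.2,
# i.e. short by the factor (n - 1)/n = 0.8.
```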
It is remarkable that the sum of squares of the residuals and the sample mean can be shown to be independent of each other, using, e.g., Basu's theorem. That fact, and the normal and chi-squared distributions given above, form the basis of calculations involving the quotient
$$\frac{\overline{X}_n - \mu}{S_n/\sqrt{n}}.$$
The probability distributions of the numerator and the denominator separately depend on the value of the unobservable population standard deviation $\sigma$, but $\sigma$ appears in both the numerator and the denominator and cancels. That is fortunate because it means that even though we do not know $\sigma$, we know the probability distribution of this quotient: it has a Student's t-distribution with $n-1$ degrees of freedom. We can therefore use this quotient to find a confidence interval for $\mu$.

Regressions
In regression analysis, the distinction between errors and residuals is subtle and important, and leads to the concept of studentized residuals. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from the fitted function are the residuals.
However, a terminological difference arises in the expression mean squared error (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed residuals, and not of the unobservable errors. If that sum of squares is divided by n, the number of observations, the result is the mean of the squared residuals. Since this is a biased estimate of the variance of the unobserved errors, the bias is removed by multiplying the mean of the squared residuals by n / df, where df is the number of degrees of freedom (n minus the number of parameters being estimated). This latter formula serves as an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error. [1]
Another way to calculate the mean square of the error arises when analyzing the variance of linear regression using a technique like that used in ANOVA (they are the same because ANOVA is a type of regression): the sum of squares of the residuals (aka sum of squares of the error) is divided by the degrees of freedom, where the degrees of freedom equal n − p − 1, with p the number of parameters or predictors used in the model (i.e. the number of variables in the regression equation). One can then also calculate the mean square of the model by dividing the sum of squares of the model by its degrees of freedom, which is just the number of parameters. Then the F value can be calculated by dividing MS(model) by MS(error), and we can then determine significance (which is why you want the mean squares to begin with). [2]
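The bookkeeping described above, sketched on a tiny invented dataset for a simple linear regression (so p = 1):

```python
# ANOVA-style decomposition for a least-squares simple regression:
# SST = SSM + SSE,  MSE = SSE/(n - p - 1),  MSM = SSM/p,  F = MSM/MSE.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.1, 1.9, 3.2, 3.9, 5.1, 5.8]          # invented data
n, p = len(xs), 1                            # p = number of predictors

xbar = sum(xs) / n
ybar = sum(ys) / n
beta = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
       sum((x - xbar) ** 2 for x in xs)
alpha = ybar - beta * xbar
fitted = [alpha + beta * x for x in xs]

sse = sum((y - f) ** 2 for y, f in zip(ys, fitted))   # residual sum of squares
ssm = sum((f - ybar) ** 2 for f in fitted)            # model sum of squares
sst = sum((y - ybar) ** 2 for y in ys)                # total sum of squares

mse = sse / (n - p - 1)       # unbiased estimate of the error variance
msm = ssm / p                 # mean square of the model
F = msm / mse                 # large F => the model explains real variance
```

The identity SST = SSM + SSE holds exactly for least squares with an intercept, which is what makes this decomposition meaningful.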
However, because of the behavior of the process of regression, the distributions of residuals at different data points (of the input variable) may vary even if the errors themselves are identically distributed. Concretely, in a linear regression where the errors are identically distributed, the variability of residuals of inputs in the middle of the domain will be higher than the variability of residuals at the ends of the domain: linear regressions fit endpoints better than the middle. This is also reflected in the influence functions of various data points on the regression coefficients: endpoints have more influence.
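The endpoint effect can be seen directly from the leverage formula for simple regression, $h_i = 1/n + (x_i-\bar x)^2/S_{xx}$, with $\operatorname{Var}(\hat\varepsilon_i) = \sigma^2(1-h_i)$ — a standard result, not derived in the text:

```python
# Leverage of each design point in a simple linear regression: endpoints have
# the largest h_i, hence the smallest residual variance sigma^2 * (1 - h_i),
# which is why raw residuals are not directly comparable across the domain.
xs = [float(i) for i in range(1, 10)]            # design points 1..9
n = len(xs)
xbar = sum(xs) / n
sxx = sum((x - xbar) ** 2 for x in xs)
leverage = [1.0 / n + (x - xbar) ** 2 / sxx for x in xs]
resid_var_factor = [1.0 - h for h in leverage]   # proportional to Var(residual_i)
```

Studentizing divides each residual by an estimate of its own standard deviation, $s\sqrt{1-h_i}$, putting residuals from different inputs on a common scale.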
Thus to compare residuals at different inputs, one needs to adjust the residuals by the expected variability of residuals, which is called studentizing. This is particularly important in the case of detecting outliers: a large residual may be expected in the middle of the domain, but considered an outlier at the end of the domain.

Stochastic error
The stochastic error in a measurement is the error that is random from one measurement to the next. Stochastic errors tend to be Gaussian (normal) in their distribution. That is because the stochastic error is most often the sum of many random errors, and when many random errors are added together, the distribution of their sum looks Gaussian, as shown by the central limit theorem. A stochastic error is added to a regression equation to introduce all the variation in Y that cannot be explained by the included Xs. It is, in effect, a symbol of our inability to model all the movements of the dependent variable.
Other uses of the word "error" in statistics
The use of the term "error" as discussed in the sections above is in the sense of a deviation of a value from a hypothetical unobserved value. At least two other uses also occur in statistics, both referring to observable prediction errors:
Mean square error or mean squared error (abbreviated MSE) and root mean square error (RMSE) refer to the amount by which the values predicted by an estimator differ from the quantities being estimated (typically outside the sample from which the model was estimated).

Sum of squared errors, typically abbreviated SSE or SS_e, refers to the residual sum of squares (the sum of squared residuals) of a regression; this is the sum of the squares of the deviations of the actual values from the predicted values, within the sample used for estimation.

Likewise, the sum of absolute errors (SAE) refers to the sum of the absolute values of the residuals, which is minimized in the least absolute deviations approach to regression.

See also

Absolute deviation, Consensus forecasts, Deviation (statistics), Error detection and correction, Explained sum of squares, Innovation (signal processing), Innovations vector, Lack-of-fit sum of squares, Margin of error, Mean absolute error, Propagation of error, Regression dilution, Root mean square deviation, Sampling error, Studentized residual, Type I and type II errors

References

Steel, Robert G. D.; Torrie, James H. (1960). Principles and Procedures of Statistics, with Special Reference to Biological Sciences. McGraw-Hill. p. 288.
Zelterman, Daniel (2010). Applied Linear Models with SAS. Cambridge: Cambridge University Press.
Cook, R. Dennis; Weisberg, Sanford (1982). Residuals and Influence in Regression (Repr. ed.). New York.
Weisberg, Sanford (1985). Applied Linear Regression (2nd ed.). New York: Wiley.
Hazewinkel, Michiel, ed. (2001), "Errors, theory of".
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor but otherwise I don't see why would it be something natural to come up with. The only places I have used it is deriving the poisson bracket of two one forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. $E$ be a vector bundle on $M$ and $\nabla$ be a connection $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's a pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$ not the full vector field. That makes sense, right? You can take directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok first but basically think of it as currying. Making a billinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphsm $TM \to E$ but contracting $s$ in $\nabla s$ only gave as a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking derivative of a section of $E$ wrt a vector field on $M$ means, taking the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. It's exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seen very elegant. Is there a better way?
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
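Returning to the subgroup question above: the claim that $S_4$ contains a subgroup of every order dividing 24 can be brute-forced by closing two-element generating sets — a throwaway check, not an elegant proof:

```python
# Enumerate the orders of all subgroups of S_4 generated by two permutations.
# Every divisor of 24 shows up, and by Lagrange's theorem nothing else can.
from itertools import permutations, product

def compose(p, q):                        # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def closure(gens):
    """Subgroup generated by gens (the group is finite, so products suffice)."""
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

perms = list(permutations(range(4)))
orders = {len(closure([a, b])) for a, b in product(perms, repeat=2)}
```

Every subgroup of $S_4$ happens to be 2-generated, so pairs of generators already realize all eight divisors of 24.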
I am solving a nonlinear second order implicit initial value problem using the finite difference method, but my results do not converge. Please guide me, with an example, how we can apply the finite difference method with good accuracy. The problem is $$(A+By(t)-C(-y'(t))^{-1/3})\,y''(t)=-(By'(t)^2+C),\qquad y(0)=0,$$ using FDM; time runs from zero to 2 minutes. A, B and C are constants.
You also have to specify which scheme you want to use.
Let's say that you want to use the simple Backward-Euler (BE) scheme. For BE scheme, your governing equation becomes
$\left(A+By_{n+1}-C(y'_{n+1})^{-1/3}\right)y''_{n+1}=-\left(B(y'_{n+1})^2+C\right) \tag{1}$ with $y'_{n+1}=\frac{y_{n+1}-y_n}{\Delta t} \tag{2}$
$y''_{n+1}=\frac{y'_{n+1}-y'_n}{\Delta t} \tag{3}$
Using (2) in (3), $y''_{n+1}=\frac{1}{\Delta t^2}[y_{n+1}-y_n] - \frac{y'_n}{\Delta t} \tag{4}$
Now, using (2) and (4) in (1), you will get a nonlinear equation of the form
$f(y_n,y'_n, y_{n+1})=0 \tag{5}$ which can be solved for $y_{n+1}$ using an iterative scheme, for example, Newton-Raphson scheme. Once you get $y_{n+1}$ you can compute velocity and acceleration from (2) and (4), respectively.
You can follow the same procedure for other schemes. Equations (1), (2), (3) and (4) will be different for different schemes.
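A sketch of this Backward-Euler + Newton procedure in code. The constants, the initial slope, the use of a finite-difference derivative inside the Newton iteration, and the $(y')^{-1/3}$ sign convention (with parameters chosen so $y' > 0$) are all illustrative assumptions, not values from the question:

```python
# Backward-Euler step for (A + B*y - C*(y')**(-1/3)) * y'' = -(B*(y')**2 + C),
# solving the nonlinear equation f(y_{n+1}) = 0 at each step with a
# Newton-Raphson iteration (slope approximated by a finite difference).
def backward_euler(A, B, C, y0, v0, dt, steps):
    y, v = y0, v0                          # v = y'
    path = [y]
    for _ in range(steps):
        def resid(yp):                     # eq. (5): f(y_n, y'_n, y_{n+1}) = 0
            vp = (yp - y) / dt                     # eq. (2)
            ap = (yp - y) / dt**2 - v / dt         # eq. (4)
            return (A + B*yp - C*vp**(-1.0/3.0)) * ap + B*vp**2 + C
        yp = y + v * dt                    # explicit predictor as starting guess
        for _ in range(50):                # Newton iteration
            h = 1e-8
            step = resid(yp) * h / (resid(yp + h) - resid(yp))
            yp -= step
            if abs(step) < 1e-12:
                break
        v = (yp - y) / dt                  # update velocity via eq. (2)
        y = yp
        path.append(y)
    return path

path = backward_euler(A=2.0, B=0.1, C=0.1, y0=0.0, v0=1.0, dt=0.01, steps=10)
```

With an analytic Jacobian the Newton step converges quadratically; the finite-difference slope used here is just to keep the sketch short.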
Since you have a second order ODE you must provide 2 initial conditions, say: $$y'(0) = y'_0 \qquad y(0) = y_0\tag{1}$$
One thing you can do is transform the original second order equation: $$\left(A+By-Cy'^{-1/3}\right)y''=-\left(By'^2+C\right) \tag{2}$$ into a system of first order ODEs $$\begin{cases} v' = -\dfrac{Bv^2+C}{A+By-Cv^{-1/3}} \\[4pt] y' = v \end{cases} \tag{3}$$ with initial conditions: $$ v(0) = y'_0 \qquad y(0) = y_0$$
System $(3)$ can be written as one general vector equation, with $\vec{x} = [x_1,x_2]^{T}$ where $x_1=v$ and $x_2=y$, and $\vec{f} = \left[-\frac{Bv^2+C}{A+By-Cv^{-1/3}},\,v\right]^T$: $$\frac{d \vec{x}}{dt} = \vec{f}(\vec{x}) $$
You can solve it easily with MATLAB using any ODE solver that supports stiff equations (i.e. an implicit numerical method).
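To illustrate the reduction (3), here is a sketch using a hand-rolled classical RK4 stepper with made-up constants and initial data; for a genuinely stiff parameter regime you would swap this for an implicit solver (MATLAB's ode15s, SciPy's Radau/BDF), as the answer recommends:

```python
# First-order system x = (v, y), dx/dt = f(x), from eq. (3).
# Constants A, B, C and the initial slope are illustrative assumptions
# chosen so that v stays positive (keeping v**(-1/3) real).
A, B, C = 2.0, 0.1, 0.1

def f(x):
    v, y = x
    dv = -(B * v**2 + C) / (A + B * y - C * v**(-1.0/3.0))
    return (dv, v)                      # (v', y')

def rk4_step(x, h):
    k1 = f(x)
    k2 = f(tuple(xi + 0.5*h*ki for xi, ki in zip(x, k1)))
    k3 = f(tuple(xi + 0.5*h*ki for xi, ki in zip(x, k2)))
    k4 = f(tuple(xi + h*ki for xi, ki in zip(x, k3)))
    return tuple(xi + h/6.0*(a + 2*b + 2*c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

x = (1.0, 0.0)                          # (v0, y0) = (y'(0), y(0)), assumed
for _ in range(100):                    # integrate to t = 1
    x = rk4_step(x, 0.01)
```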
A well known representation of Dirac's delta-distribution is via the Fourier transform of distributions: \begin{equation} \delta[f]:=f(0)=\frac{1}{2\pi}\int_{\mathbb{R}}\int_{\mathbb{R}} e^{\mathrm{i}xk}f(k)\,\mathrm{d}k\,\mathrm{d}x. \end{equation} Can this be used to define the delta distribution composed with a function $\phi:\mathbb{R}\to\mathbb{R}$ via \begin{equation} (\delta\circ\phi)[f]:=\frac{1}{2\pi}\int_{\mathbb{R}}\int_{\mathbb{R}} e^{\mathrm{i}x\phi(k)}f(k)\,\mathrm{d}k\,\mathrm{d}x? \end{equation} Does this make sense? Heuristically, and in a more physics-style notation, I would argue that \begin{equation} \frac{1}{2\pi}\int\int e^{\mathrm{i}x\phi(k)}f(k)\,\mathrm{d}k\,\mathrm{d}x "=" \frac{1}{2\pi}\int\left(\int e^{\mathrm{i}x\phi(k)}\,\mathrm{d}x\right)f(k)\,\mathrm{d}k "=" \int\delta(\phi(k))f(k)\,\mathrm{d}k, \end{equation} using $\int e^{\mathrm{i}xu}\,\mathrm{d}x = 2\pi\delta(u)$. If it does not make sense, what can I say about the above expression for general functions $\phi$? E.g. I would expect the above integral to be positive if $f$ is point-wise positive.
What you want to do is the pull-back of distributions. And there is a theorem (cf. Hörmander 1, Theorem 8.2.4) saying that if the set $\{(\phi(x),\eta) \colon \phi'(x) \eta = 0\}$ and $\operatorname{WF}(\delta) = \{ (0,\eta) \colon \eta \not = 0\}$ have empty intersection, then the pull-back is well-defined.
If you are only interested in the delta-distribution, then there is also Theorem 6.1.5, saying that for any smooth function $\phi : X \to \mathbb{R}$ with $|\phi'| \not = 0$ on $\{\phi = 0\}$, one has that $\phi^*\delta = \frac{dS}{|\phi'|}$, where $dS$ is the Euclidean surface measure on $\{\phi = 0\}$. Even though I mentioned no integrals, this has quite a strong flavour of oscillatory integrals (cf. Shubin, Chapter 1) and FIOs (Hörmander 4) to it.
Literature:
L. Hörmander - The Analysis of Linear Partial Differential Operators 1-4
M. Shubin - Pseudodifferential Operators and Spectral Theory
If $\phi(k)$ vanishes at $k=k_n$, $n=1,2,\ldots$, and $\phi'(k_n)\neq 0$ for all $n$, then \begin{equation} \int\int e^{\mathrm{i}x\phi(k)}f(k)\,\mathrm{d}k\mathrm{d}x = 2 \pi \int\delta(\phi(k))f(k)\,\mathrm{d}k=2\pi\sum_{n}\frac{f(k_n)}{|\phi'(k_n)|}, \end{equation} so yes, the integral is positive for point-wise positive $f$. |
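The formula $\int\delta(\phi(k))f(k)\,\mathrm{d}k=\sum_n f(k_n)/|\phi'(k_n)|$ can be sanity-checked numerically by smearing $\delta$ into a narrow Gaussian; the choices of $\phi$ and $f$ below are arbitrary examples with simple zeros (the extra $2\pi$ in the answer above comes from the un-normalized Fourier pair):

```python
# Check  integral of delta(phi(k)) f(k) dk  =  sum_n f(k_n)/|phi'(k_n)|
# for phi(k) = k^2 - 1 (simple zeros at k = +-1) and f(k) = exp(-k^2),
# approximating delta by a Gaussian of width eps.
import math

phi  = lambda k: k*k - 1.0            # zeros at k = +-1, phi'(+-1) = +-2
dphi = lambda k: 2.0*k
f    = lambda k: math.exp(-k*k)

eps = 1e-3
def delta_eps(x):                     # nascent delta: narrow normalized Gaussian
    return math.exp(-x*x / (2*eps*eps)) / (eps * math.sqrt(2*math.pi))

N, L = 200000, 3.0                    # grid fine enough to resolve the spikes
h = 2*L / N
smeared = sum(delta_eps(phi(-L + i*h)) * f(-L + i*h) for i in range(N + 1)) * h

exact = sum(f(k) / abs(dphi(k)) for k in (-1.0, 1.0))   # = exp(-1)
```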
I have a question regarding the timing of treatment effects and how one could use the difference-in-difference estimator on a panel data set.
Let me begin by saying that I have a large firm-level unbalanced panel dataset with large N (7000-ish) and small T (varying from 3 to 28). I'm interested in analysing the effects of a particular policy intervention; however, the problem is that the treatment timing is different for different firms, and I'm wondering how (and if) one can account for this in the DiD framework.
As I understand it, the general DiD panel data setup with simultaneous treatment would look something like this: $$ y_{it} = \alpha_0 + \alpha_1 \text{Treat}_{i} + \alpha_2 \text{After}_{t} + \delta (\text{Treat*After})_{it}+ x_{it}'\beta + \text{FFE}+\text{TFE} + \varepsilon_{it}, $$ where:
Treat = 1 if in the treatment group, 0 otherwise
After = 1 if after the policy intervention, 0 otherwise
$x$ is a vector of controls, $\alpha_i$ are the parameters/constants and $\delta$ is the treatment effect
FFE are the firm fixed effects and TFE are the time fixed effects
After searching the site I haven't been able to find an answer I fully understand; however, this post, as I understand it, suggests running a model along the lines of:
$$ y_{it} = \alpha_0 + \alpha_1 \text{Treat}_{i} + \delta \text{Policy}_{it}+ \sum^T_{t=2} \alpha_t \text{year}_t+x_{it}'\beta + \text{FFE} + \varepsilon_{it}, $$ where:
" ...policy is a dummy for each individual that equals 1 if the individual is in the treatment group after the policy intervention/treatment..." (from the post linked above) yearare a set of time dummies
I guess I have two things I don't understand with this approach;
1. The construction of the dummy(s) $\text{Policy}_{it}$. Is this one dummy variable just taking the value one for each treated firm after its time-varying treatment takes place? Or does the author of the post mean one dummy for each firm indicating the timing of treatment?
2. My second question relates to the first but is more conceptual. To my understanding, the difference-in-difference approach is about using the non-treated as a counterfactual outcome for the treated (assuming parallel trends) in the absence of treatment. However, when the treatment timing differs across firms, there is no clear "after period" for the control group, and I believe this is the cause of my confusion here. What is the conceptual idea behind the approach suggested in the previous link? Is this approach even remotely possible, or should one apply some different identification strategy? In that case, what would be appropriate given the circumstances?
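To make my first question concrete, here is how I currently picture constructing such a dummy in pandas (toy data; the column names are my own invention):

```python
import pandas as pd

# Toy unbalanced firm-year panel; 'treat_year' is NaN for never-treated firms.
df = pd.DataFrame({
    'firm':       [1, 1, 1, 2, 2, 2, 3, 3],
    'year':       [2000, 2001, 2002, 2000, 2001, 2002, 2001, 2002],
    'treat_year': [2001, 2001, 2001, 2002, 2002, 2002, None, None],
})

# Treat_i: firm is ever treated; Policy_it: treated AND at/after its own
# (firm-specific) treatment date. NaN comparisons evaluate to False, so
# never-treated firms get Policy = 0 in every year.
df['treat'] = df['treat_year'].notna().astype(int)
df['policy'] = ((df['treat'] == 1) & (df['year'] >= df['treat_year'])).astype(int)

print(df['policy'].tolist())  # [0, 1, 1, 0, 0, 1, 0, 0]
```

So there would be a single time-varying dummy that switches on at a different date for each treated firm — is that the intended construction?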
Any answers, references to papers or books working with panel data sets with different treatment timings (preferably of econometric nature) would be greatly appreciated.
//Billy |
Functiones et Approximatio Commentarii Mathematici
Funct. Approx. Comment. Math. Volume 45, Number 2 (2011), 165-205.
Fleck's congruence, associated magic squares and a zeta identity
Abstract
Let the \emph{Fleck numbers}, $C_n(t,q)$, be defined by \[ C_n(t,q)=\sum_{k\equiv q \pmod{n}}(-1)^k\binom{t}{k}. \] For prime $p$, Fleck obtained the result $C_p(t,q)\equiv 0 \pmod{p^{\lfloor (t-1)/(p-1)\rfloor}}$, where $\lfloor\cdot\rfloor$ denotes the usual floor function. This congruence was extended 64 years later by Weisman, in 1977, to include the case $n=p^\alpha$. In this paper we show that the Fleck numbers occur naturally when one considers a symmetric $n\times n$ matrix, $M$, and its inverse under matrix multiplication. More specifically, we take $M$ to be a symmetrically constructed $n\times n$ associated magic square of odd order, and then consider the reduced coefficients of the linear expansions of the entries of $M^t$ with $t\in \mathbb{Z}$. We also show that for any odd integer, $n=2m+1$, $n\geq 3$, there exist geometric polynomials in $m$ that are linked to the Fleck numbers via matrix algebra and $p$-adic interaction. These polynomials generate numbers that obey a reciprocal type of congruence to the one discovered by Fleck. As a by-product of our investigations we observe a new identity between values of the zeta function at even integers. Namely \[ \zeta{(2j)}=(-1)^{j+1}\left (\frac{j\pi^{2j}}{(2j+1)!}+\sum_{k=1}^{j-1}\frac{(-1)^k\pi^{2j-2k}}{(2j-2k+1)!}\zeta{(2k)}\right ). \] We conclude with examples of combinatorial congruences, Vandermonde type determinants and Number Walls that further highlight the symmetric relations that exist between the Fleck numbers and the geometric polynomials.
Article information
Source: Funct. Approx. Comment. Math., Volume 45, Number 2 (2011), 165-205.
Dates: First available in Project Euclid: 12 December 2011
Permanent link to this document: https://projecteuclid.org/euclid.facm/1323705813
Digital Object Identifier: doi:10.7169/facm/1323705813
Mathematical Reviews number (MathSciNet): MR2895154
Zentralblatt MATH identifier: 1246.11043
Subjects: Primary: 05A10: Factorials, binomial coefficients, combinatorial functions [See also 11B65, 33Cxx]; 11B65: Binomial coefficients; factorials; $q$-identities [See also 05A10, 05A30]. Secondary: 05A15: Exact enumeration problems, generating functions [See also 33Cxx, 33Dxx]; 05A19: Combinatorial identities, bijective combinatorics; 11C20: Matrices, determinants [See also 15B36]; 11E95: $p$-adic theory; 11S05: Polynomials
Citation
Lettington, Matthew C. Fleck's congruence, associated magic squares and a zeta identity. Funct. Approx. Comment. Math. 45 (2011), no. 2, 165--205. doi:10.7169/facm/1323705813. https://projecteuclid.org/euclid.facm/1323705813 |
Osaka Journal of Mathematics
Osaka J. Math. Volume 46, Number 2 (2009), 411-440.
Subelliptic harmonic morphisms
Abstract
We study subelliptic harmonic morphisms i.e. smooth maps $\phi\colon \Omega \to \tilde{\Omega}$ among domains $\Omega \subset \mathbb{R}^{N}$ and $\tilde{\Omega} \subset \mathbb{R}^{M}$, endowed with Hörmander systems of vector fields $X$ and $Y$, that pull back local solutions to $H_{Y} v = 0$ into local solutions to $H_{X} u = 0$, where $H_{X}$ and $H_{Y}$ are Hörmander operators. We show that any subelliptic harmonic morphism is an open mapping. Using a subelliptic version of the Fuglede-Ishihara theorem (due to E. Barletta, [5]) we show that given a strictly pseudoconvex CR manifold $M$ and a Riemannian manifold $N$ for any heat equation morphism $\Psi\colon M \times (0, \infty) \to N \times (0, \infty)$ of the form $\Psi (x,t) = (\phi (x), h(t))$ the map $\phi\colon M \to N$ is a subelliptic harmonic morphism.
Article information
Source: Osaka J. Math., Volume 46, Number 2 (2009), 411-440.
Dates: First available in Project Euclid: 19 June 2009
Permanent link to this document: https://projecteuclid.org/euclid.ojm/1245415677
Mathematical Reviews number (MathSciNet): MR2549594
Zentralblatt MATH identifier: 1175.58005
Subjects: Primary: 32V20: Analysis on CR manifolds; 53C43: Differential geometric aspects of harmonic maps [See also 58E20]. Secondary: 35H20: Subelliptic equations; 58E20: Harmonic maps [See also 53C43]
Citation
Dragomir, Sorin; Lanconelli, Ermanno. Subelliptic harmonic morphisms. Osaka J. Math. 46 (2009), no. 2, 411--440. https://projecteuclid.org/euclid.ojm/1245415677 |
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass the class rather than do well in it
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues, meaning that $A$ can be diagonalised to a diagonal matrix with these two values on its diagonal. What will that mean when multiplied with a given vector $(x,y)$, and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$ we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
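fwiw you can also let sympy grind through the expansion for you — a quick sketch of checking associativity of that multiplication rule symbolically:

```python
from sympy import symbols, expand

a, b, c, d, e, f, delta = symbols('a b c d e f delta')

def mul(p, q):
    """Multiplication rule for pairs (rational part, sqrt(delta) coefficient)."""
    (a1, b1), (a2, b2) = p, q
    return (expand(a1*a2 + b1*b2*delta), expand(b1*a2 + a1*b2))

lhs = mul(mul((a, b), (c, d)), (e, f))   # (alpha x beta) x gamma
rhs = mul((a, b), mul((c, d), (e, f)))   # alpha x (beta x gamma)

ok = all(expand(l - r) == 0 for l, r in zip(lhs, rhs))
print(ok)  # True
```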
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extensions is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. Oh wait, there are maximal algebraic structures such that given some ordering, it is the largest possible, e.g. the surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wonder whether one can show that it is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
Oh great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number x with the property that, for every positive integer n, there exist integers p and q with q > 1 and such that0<|x−pq|<1qn.{\displaystyle 0<\left|x-{\frac {p}...
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each M form a monotonically increasing sequence, which converges by ratio test
therefore by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
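the partial sums are even exactly computable finite objects, e.g. with rationals (a quick sketch for $b=2$):

```python
from fractions import Fraction
from math import factorial

# Partial sums of sum_{k=1}^M 1/b^{k!} for b = 2: every partial sum is an
# exact rational, i.e. a finite, explicitly constructible object.
b = 2
partial_sums, s = [], Fraction(0)
for k in range(1, 6):
    s += Fraction(1, b ** factorial(k))
    partial_sums.append(s)

# Monotone increasing, bounded above by 1, and the increments shrink
# factorially fast, so the sequence converges.
print([float(x) for x in partial_sums])
```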
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else, I need to finish that book before commenting
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
> are there palindromes such that the explosion of palindromes is a palindrome nonstop palindrome explosion palindrome prime square palindrome explosion palirome prime explosion explosion palindrome explosion cyclone cyclone cyclone hurricane palindrome explosion palindrome palindrome explosion explosion cyclone clyclonye clycone mathphile palirdlrome explosion rexplosion palirdrome expliarome explosion exploesion |
I'm curious about two things.
When we define the class called "probabilistic polynomial-time algorithm" in computer science, does it include polynomial-time algorithms with exponential space? For example, when an algorithm is given an input from the domain $\{0,1\}^n$, what if the algorithm internally queries an exponential-sized table (e.g., $0^n\to0,0^{n-1}1\to1$ and so on) and outputs the result? Is it still a polynomial-time algorithm?
In theoretical cryptography, one-way function $f:\{0,1\}^*\to\{0,1\}^*$ has a requirement, which is related with
hard-to-invert property, as in the following block. If the answer to the above question is yes, is it possible to construct an algorithm $A'$ that simulates $f$ exactly for every value in $\{0,1\}^n$ using an exponential table as described in the above question? If so, it would imply that it's impossible to design a one-way function, which is definitely not true. So what have I missed?
For every probabilistic polynomial-time algorithm $A'$, every positive polynomial $p(\cdot)$, and all sufficiently large $n$'s,
$Pr[A'(f(U_n),1^n)\in f^{-1}(f(U_n))]<\frac{1}{p(n)}$
where $U_n$ is random variable uniformly distributed over $\{0,1\}^n$ |
Let
$G = (<e_i,e_j>)_{i,j=1,\ldots,3} :=\left( \begin{matrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{matrix} \right)$
be a symmetric bilinear form $< \cdot, \cdot>$ with regard to the canonical basis of $\mathbb R^3$,
$H:= \left\{ x \in \mathbb R^3 | <x,x> = -1 \right\}$
and for $a \in H$ denote by $T_aH$ the orthogonal complement of $\mathbb Ra$ with regard to the above bilinear form.
I need to show that the restriction $< \cdot, \cdot>|_{T_aH \times T_aH}$ is positive definite with the help of Sylvester's law of inertia.
How can I do this? I know that we have $\mathbb R^3 = V = V_0 \oplus V_+ \oplus V_-$ and in this case $V_0 = 0$, $V_+= <\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}>$ and $V_-= <\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}>$ as these are the eigenvectors corresponding to the eigenvalues 1 and -1. But that's neither the orthogonal complement nor $H$, so how can I use this? (Besides, I do not really know how to compute the orthogonal complement in this case. If that's necessary, I would be pleased if someone gave me a hint.)
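Not a proof, but to convince myself I checked a concrete point of $H$ numerically (a sketch; the point $a$ is an arbitrary choice on $H$):

```python
import numpy as np

# The bilinear form <x,y> = x^T G y from the question
G = np.diag([1.0, 1.0, -1.0])

# A point on H: <a,a> = sinh(t)^2 - cosh(t)^2 = -1 (t is an arbitrary choice)
t = 0.7
a = np.array([0.0, np.sinh(t), np.cosh(t)])
assert np.isclose(a @ G @ a, -1.0)

# T_a H is the <.,.>-orthogonal complement of R a, i.e. the Euclidean null
# space of the row vector (G a)^T; rows 1,2 of Vt from the SVD span it.
_, _, Vt = np.linalg.svd((G @ a).reshape(1, 3))
basis = Vt[1:]

# Gram matrix of the restricted form: positive definite iff both eigenvalues > 0
gram = basis @ G @ basis.T
print(np.linalg.eigvalsh(gram))
```

Both eigenvalues come out positive, which is what Sylvester's law should give in general.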
Thanks for help! |
Automated Identification of Microtubules in Cellular Electron Tomography
Danyiar Nurgaliev, Timur Gatanov, and Daniel J. Needleman, Automated Identification of Microtubules in Cellular Electron Tomography. Methods in Cell Biology, 2010, 97.
Robin Kirkpatrick, Week 1 Commentary
Introduction
Computer automated image segmentation can be a difficult problem, and requires careful combination of prefiltering and selection of the optimization procedures and parameters used. Although it can be challenging, automation of analysis is highly desired when handling data sets, as manual segmentation is time consuming. In this paper, the authors present their computer automated approach for segmenting microtubules from cellular electron tomography data taken from frozen cells. This technique renders a 3D image, from which the authors attempt to segment each microtubule in 3D.
Successful Image Segmentation
Their successful approach involved two steps, namely preprocessing and tracking. Preprocessing was done to find sets of points that were likely to be part of the wall of a microtubule, whereas the "tracking" step linked the final set of data into segments that likely correspond to the same microtubule. An example of an original image is shown below in Figure 1.
Figure 1- Example of an XY slice of the original electron tomography data. Black spots correspond to ribosomes, and the white long structures with black outlines are microtubules.
Note that the microtubules are bright in comparison to the background (i.e., they hold a high pel value). However, the boundaries are dark (low pel value).
Preprocessing
To remove noise, the initial image was low-pass filtered using a Gaussian kernel. The authors then employed a method that allows them to determine pels that most likely lie within a microtubule. Their assumption was as follows. Microtubules are approximately (locally) linear structures, and appear as white with dark outlines corresponding to the walls of microtubules. The authors note that a dark line in 2D is a local minimum in the direction perpendicular to the line, and is constant in the direction of the line. Consequently, the first and second derivatives in the direction of the line should be zero (plus or minus noise, which is amplified by differentiation), while in the perpendicular direction the first derivative should be high at the edge and the second derivative should be zero at the edge.
It should be noted that numerical differentiation amplifies noise, therefore it is critical to prefilter the image. Mathematically, we can define the output image, Io, in terms of the convolution kernel G, and the initial image, I.
<math>{Io(x,y) = I(x,y) * G(x,y)}</math>
Following directly from calculus, the derivatives can be computed directly by convolution with the derivative of the convolution kernel (which has a closed analytical form for a gaussian kernel) as shown below (rather than computing the convolution, followed by a numerical derivative).
<math>{[\frac{\partial Io(x,y)}{\partial x}, \frac{\partial Io(x,y)}{\partial y}] = [I(x,y) * G_x(x,y), I(x,y) * G_y(x,y)]}</math>
Where <math>{G_y(x,y)}</math> and <math>{G_x(x,y)}</math> are the partial derivatives of the convolution kernel which is trivial to compute analytically for a gaussian kernel.
Similarly, a matrix of second derivatives can be computed
<math>{[ \left( \begin{array}{cc} Io_{xx} & Io_{xy} \\ Io_{xy} & Io_{yy} \end{array} \right)]= [ \left( \begin{array}{cc} (Io * G_{xx}) & (Io * G_{xy}) \\ (Io * G_{xy}) & (Io * G_{yy}) \end{array} \right)]}</math>
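This derivative-of-Gaussian idea can be reproduced with off-the-shelf tools; the following sketch (not the authors' code; the synthetic image and all parameters are illustrative) computes the same first- and second-derivative images with scipy:

```python
import numpy as np
from scipy import ndimage

# Synthetic test image: a bright horizontal band (rows 30-33) on Gaussian noise.
img = np.random.default_rng(0).normal(0.0, 0.1, (64, 64))
img[30:34, :] += 1.0

sigma = 2.0
# gaussian_filter with a derivative order per axis convolves the image with the
# corresponding derivative of the Gaussian kernel -- smoothing and
# differentiation in a single pass, as described above. Axis 0 is y, axis 1 is x.
Ix  = ndimage.gaussian_filter(img, sigma, order=(0, 1))
Iy  = ndimage.gaussian_filter(img, sigma, order=(1, 0))
Ixx = ndimage.gaussian_filter(img, sigma, order=(0, 2))
Iyy = ndimage.gaussian_filter(img, sigma, order=(2, 0))
Ixy = ndimage.gaussian_filter(img, sigma, order=(1, 1))

# The band varies across y and is constant along x, so the second derivative
# across the band dominates the one along it.
print(np.abs(Iyy).max() > np.abs(Ixx).max())
```

As in the text, the kernel width sigma sets the scale of structures that survive in the derivative images.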
Conveniently, the width of the convolution kernel determines the width of structures that remain in the resulting mask. The authors note that the intensity can be expanded in the vicinity of the current pixel using a Taylor series expansion. Directly following this analysis for $I_x = 0$ (not shown here; done for an angle of 0) results in the relation:
<math>{[x = -(I_x/I_{xx})]} </math>
Therefore, if I_{xx} > 0 (concave up) and the ratio above is less than half a pixel, then the current pixel is considered to be at a local minimum. This analysis can be done in a straightforward manner for a given angle with respect to the x axis. The result is a binary mask, which is an initial guess of the locations of the edges of the microtubules. However, their method at this stage still has many false positives, due to the algorithm finding local linear structures that may not be part of a microtubule wall. To search for extended lines, the authors convolve this mask with a binary mask that consists of a large number of 1's along the orthogonal direction of the original angle used (i.e., in the guessed direction of the line). This is followed by thresholding and a dilation operation (to reduce the shortening effect induced by the convolution step near the edges). These steps are done for a large number of different angles. The authors then noted that a microtubule is always surrounded by two lines, and thus walls form pairs at the same orientation. The current mask was then convolved with pairs of lines at a set spacing and thresholded.
Tracking
To discern global properties of the microtubules in the image (length, number, etc.), it is important to link the various segments that belong to the same microtubule. The preprocessing step has some issues, as it will yield false breaks, false fuses (especially near clusters of microtubules), etc. To circumvent this issue, the authors use the method of active contours. Briefly, they seek a contour <math>{\vec x(l)}</math> which minimizes an energy function E, discussed below. Noting that global search algorithms are laborious, the authors employ simulated annealing (SA), a Monte-Carlo technique. SA is intended to find the global minimum in significantly fewer moves than a global search algorithm. However, SA allows for random moves up the energy landscape. Allowing random moves uphill in energy allows the solution to navigate a complex energy landscape, and thus avoid being trapped in local energy wells, so it can find the global minimum. The algorithm makes a random perturbation from the guess and computes E(x_perturbed). The algorithm will keep this solution as the current guess if E(x_perturbed) < E(x_guess). The perturbed guess will also be kept with probability exp(-deltaE/T), allowing the algorithm to go uphill in energy. The temperature is slowly decreased throughout the algorithm so that eventually no additional uphill moves are made. The energy function the authors defined was analogous to an elastic string in an external potential.
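The annealing loop itself is generic and independent of the particular contour energy; a minimal sketch (not the authors' implementation — the one-dimensional energy and move set here are toy placeholders) looks like:

```python
import math
import random

def simulated_annealing(energy, perturb, x0, t0=5.0, cooling=0.999, steps=20000):
    """Generic SA loop: always accept downhill moves, accept uphill moves
    with probability exp(-dE/T), and slowly lower the temperature T."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    temp = t0
    rng = random.Random(0)
    for _ in range(steps):
        x_new = perturb(x, rng)
        e_new = energy(x_new)
        if e_new < e or rng.random() < math.exp((e - e_new) / temp):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        temp *= cooling  # cooling schedule: uphill moves become rarer over time
    return best_x, best_e

# Toy stand-in for the contour energy: a double well whose global minimum
# (near x = -2) is separated by a barrier from the starting point (near x = +2).
toy_energy = lambda x: (x**2 - 4)**2 + x
best_x, best_e = simulated_annealing(toy_energy,
                                     lambda x, r: x + r.uniform(-0.5, 0.5),
                                     x0=2.0)
print(best_x, best_e)
```

In the paper, the state is instead a set of contours, and the moves grow, shrink, or displace contour vertices.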
<math> {E_{contour} = E_{potential} + E_{bending} + E_{length} = \int \! U(\vec x(l)) dl + \int \! (k/R(l))dl - \mu L }</math>
L is the contour length, and <math>{\mu}</math> determines the contribution of the length term to the energy. 1/R(l) is the curvature of the contour, and k is the bending modulus. High values of k restrict the amount of bending allowed in the final contour. The potential function, U, was set to Io for the preprocessed images. For the unprocessed images, <math>{ U = I(\vec x(l) - (w/2)\vec n(l)) + I(\vec x(l) + (w/2)\vec n(l))}</math>, where w is the predicted width of the microtubule, which allows the algorithm to search for pairs of lines a width w apart. The perturbations made in the algorithm include shrinking and growing segments, and moving one vertex in the contour. Note that the energies discussed above were for a single contour. The algorithm searches instead for N curves. The total energy is the sum of the individual contour energies, plus an interaction energy due to contours being near one another, and a chemical-potential energy due to the creation of additional contours. Mathematically, this can be expressed as
<math>{E_{total} = \sum_{i=0}^N E_{i,contour} + E_{interaction} + \nu N}</math>
Final Comments
The authors compare their results with manual results and find that there are still some problems with the current methodology. These remaining issues include missing microtubules, determining a shorter length than what is actually present, and identifying long microtubules as a large number of short ones.
The reader wonders whether the form of the energy functional used here would be useful for trajectory linking of distance-time data (an analogous problem). Techniques typically minimize variances and intensity fluctuations; however, this method seems much more robust. The reader also wonders if the gaps could be easily filled in using a closing (or analogous) operation either pre- or post-processing in the direction of the microtubule (i.e., orthogonal to the picked angle). The parameters that work best for the optimization (mu, nu, etc.) are highly dependent on the region properties in the image. The authors are currently working on improving their technique by using machine learning to train their algorithm to handle a larger number of cases. The reader also wonders if the likelihood model could be verified using the training sets given. That would be illuminating, and may yield a better understanding of what energy functions should be used.
All of the interactions you have drawn do in fact contribute to the interaction between the electron and the proton. This is the sense in which it is a weak "force." The size of the neutral current interaction between the proton and the electron has recently been measured, with preliminary and final analyses published in 2013 and 2018. (I'm an author on both papers.)
The neutral current interaction,
$${\rm pe} \to {\rm p}Z\rm e \to pe,$$
has the same structure as the electromagnetic force mediated by the photon, but different behavior under parity symmetry and with a Yukawa potential
$$U_\text{weak} \sim \frac{q^\mathrm{weak}_\mathrm e q^\mathrm{weak}_\mathrm p}{r} \cdot e^{ - m_Z r}$$
rather than the $1/r$ Coulomb potential of electromagnetism. The mass of the $Z$ boson makes this interaction vanish exponentially with a distance scale $r_Z = \hbar c / m_Zc^2 \approx 0.0022\,\rm fm$, three orders of magnitude smaller than the size of the proton. So for the hydrogen atom bound states, there is a small correction to the energy from the part of the wavefunction where the electron overlaps with the proton. This correction would be "large" in $s$-wave states compared to $p$-wave states, since their overlap with the nucleus is different, but not large enough to contribute to the current uncertainty of the Rydberg constant, $E_1 = -13.6\,\mathrm{eV} \times (1 \pm 290\,\mathrm{ppb})$. That few hundred parts per billion is the size of the weak-interaction asymmetry in the papers I linked above, where the interaction energy was $1\,\rm GeV$ and the short-range weak interaction is less inaccessible.
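As a quick numerical check of that distance scale (constants rounded; this back-of-envelope line is mine, not from the papers above):

```python
# Reduced Compton wavelength of the Z boson, r_Z = hbar*c / (m_Z c^2)
hbar_c = 197.327   # MeV*fm (hbar*c)
m_Z = 91187.6      # MeV (Z boson mass, ~91.19 GeV)

r_Z = hbar_c / m_Z  # in fm
print(round(r_Z, 4))  # 0.0022 fm -- ~1000x smaller than the ~0.84 fm proton
```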
The charged-current force between a proton and an electron would have its largest contribution from the loop diagram
$${\rm pe}\to {\rm n}W{\rm e} \to {\rm n}\nu \to {\rm p}W\nu \to {\rm pe}$$
and its permutations. This force has a shorter effective range because now you have to borrow enough energy for
two heavy virtual bosons. That makes it even weaker at long range than the weak interaction from the single heavy boson. I can't remember how much weaker, but it wasn't a consideration at all in the experiment we did. (We did have about five years where the theory community had a disagreement about the $\gamma Z$ box diagram.)
As far as beta decay from the virtual $\rm n\nu$ state in the charged-current box diagram that doesn't matter, that would look from the outside like
$$\rm pe \to pe\bar\nu\nu$$
which violates conservation of energy unless the proton-electron system releases some binding energy. A fun calculation would be predict the branching ratio for $\bar\nu\nu$ emission from one of the Lyman or Balmer transitions.
In fact, that's a really interesting calculation. Suppose that the branching ratio for $\rm H^*\to H\bar\nu\nu$ is any tiny fraction you like: $10^{-20}$ or $10^{-50}$ or whatever. Then the photospheres of stars are diffuse emitters of few-eV neutrinos, which would effectively be a continuous source of quasi-relativistic dark matter. I think the evidence is that most dark matter is "cold," or nonrelativistic, but there's a good paper in an estimate of how rapidly these Lyman neutrinos would accumulate around a galaxy.
ISSN: 1930-8337
eISSN: 1930-8345
Inverse Problems & Imaging
February 2013 , Volume 7 , Issue 1
Abstract:
In this paper we construct a shape space of medial ball representations from given shape training data using methods of Computational Geometry and Statistics. The ultimate goal is to employ the shape space as prior information in supervised segmentation algorithms for complex geometries in 3D voxel data. For this purpose, a novel representation of the shape space (i.e., medial ball representation) is worked out and its implications on the whole segmentation pipeline are studied. Such algorithms have wide applications for industrial processes and medical imaging, when data are recorded under varying illumination conditions, are corrupted with high noise or are occluded.
Abstract:
It has been shown in [10] that on a simple, compact Riemannian 2-manifold the attenuated geodesic ray transform, with attenuation given by a connection and Higgs field, is injective on functions and 1-forms modulo the natural obstruction. Furthermore, the scattering relation determines the connection and Higgs field modulo a gauge transformation. We extend the results obtained therein to the case of magnetic geodesics. In addition, we provide an application to tensor tomography in the magnetic setting, along the lines of [11].
Abstract:
In order to prevent influx of highly enriched nuclear material through border checkpoints, new advanced detection schemes need to be developed. Typical issues faced in this context are sources with very low emission against a dominating natural background radiation. Sources are expected to be small and shielded and hence cannot be detected from measurements of radiation levels alone. We consider collimated and Compton-type measurements and propose a detection method that relies on the geometric singularity of small sources to distinguish them from the more uniform background. The method is characterized by high sensitivity and specificity and allows for assigning confidence probabilities of detection. The validity of our approach can be justified using properties of related techniques from medical imaging. Results of numerical simulations are presented for collimated and Compton-type measurements. The 2D case is considered in detail.
Abstract:
The full application of Bayesian inference to inverse problems requires exploration of a posterior distribution that typically does not possess a standard form. In this context, Markov chain Monte Carlo (MCMC) methods are often used. These methods require many evaluations of a computationally intensive forward model to produce the equivalent of one independent sample from the posterior. We consider applications in which approximate forward models at multiple resolution levels are available, each endowed with a probabilistic error estimate. These situations occur, for example, when the forward model involves Monte Carlo integration. We present a novel MCMC method called $MC^3$ that uses low-resolution forward models to approximate draws from a posterior distribution built with the high-resolution forward model. The acceptance ratio is estimated with some statistical error; then a confidence interval for the true acceptance ratio is found, and acceptance is performed correctly with some confidence. The high-resolution models are rarely run and a significant speed up is achieved.
Our multiple-resolution forward models themselves are built around a new importance sampling scheme that allows Monte Carlo forward models to be used efficiently in inverse problems. The method is used to solve an inverse transport problem that finds applications in atmospheric remote sensing. We present a path-recycling methodology to efficiently vary parameters in the transport equation. The forward transport equation is solved by a Monte Carlo method that is amenable to the use of $MC^3$ to solve the inverse transport problem using a Bayesian formalism.
Abstract:
This article is concerned with the representation of curves by means of integral invariants. In contrast to the classical differential invariants they have the advantage of being less sensitive with respect to noise. The integral invariant most common in use is the circular integral invariant. A major drawback of this curve descriptor, however, is the absence of any uniqueness result for this representation. This article serves as a contribution towards closing this gap by showing that the circular integral invariant is injective in a neighbourhood of the circle. In addition, we provide a stability estimate valid on this neighbourhood. The proof is an application of Riesz--Schauder theory and the implicit function theorem in a Banach space setting.
Abstract:
The aim of our work is to reconstruct an inclusion $\omega$ immersed in a fluid flowing in a larger bounded domain $\Omega$ via a boundary measurement on $\partial\Omega$. Here the fluid motion is assumed to be governed by the Stokes equations. We study the inverse problem of reconstructing $\omega$ thanks to the tools of shape optimization by minimizing a Kohn-Vogelius type cost functional. We first characterize the gradient of this cost functional in order to make a numerical resolution. Then, in order to study the stability of this problem, we give the expression of the shape Hessian. We show the compactness of the Riesz operator corresponding to this shape Hessian at a critical point which explains why the inverse problem is ill-posed. Therefore we need some regularization methods to solve numerically this problem. We illustrate those general results by some explicit calculus of the shape Hessian in some particular geometries. In particular, we solve explicitly the Stokes equations in a concentric annulus. Finally, we present some numerical simulations using a parametric method.
Abstract:
We study the inverse problem of the simultaneous identification of two discontinuous diffusion coefficients for a one-dimensional coupled parabolic system with the observation of only one component. The stability result for the diffusion coefficients is obtained by a Carleman-type estimate. Results from numerical experiments in the one-dimensional case are reported, suggesting that the method makes possible to recover discontinuous diffusion coefficients.
Abstract:
We investigate two inverse scattering problems for the nonlinear Schrödinger equation $$ -\Delta u(x) + h(x,|u(x)|)u(x) = k^{2}u(x), \quad x \in \mathbb{R}^2, $$ where $h$ is a very general and possibly singular combination of potentials. The method of Born approximation is applied for the recovery of local singularities and jumps from fixed angle scattering and backscattering data.
Abstract:
In this paper we integrate the SART (Simultaneous Algebraic Reconstruction Technique) algorithm into a general iterative method, introduced in [8]. This general method offers us the possibility of achieving a new convergence proof of the SART method and prove the convergence of the constrained version of SART. Systematic numerical experiments, comparing SART and Kaczmarz-like algorithms, are made on two phantoms widely used in image reconstruction literature.
Abstract:
Electrical impedance tomography (EIT) aims to reconstruct the electric conductivity inside a physical body from current-to-voltage measurements at the boundary of the body. In practical EIT one often lacks exact knowledge of the domain boundary, and inaccurate modeling of the boundary causes artifacts in the reconstructions. A novel method is presented for recovering the boundary shape and an isotropic conductivity from EIT data. The first step is to determine the minimally anisotropic conductivity in a model domain reproducing the measured EIT data. Second, a Beltrami equation is solved, providing shape-deforming reconstruction. The algorithm is applied to simulated noisy data from a realistic electrode model, demonstrating that approximate recovery of the boundary shape and conductivity is feasible.
Abstract:
We study the spherical mean transform on $\mathbb{R}^n$. The transform is characterized by the Euler-Poisson-Darboux equation. By looking at the spherical harmonic expansions, we obtain a system of $1+1$-dimensional hyperbolic equations. Using these equations, we discuss two known problems. The first one is a local uniqueness problem investigated by M. Agranovsky and P. Kuchment [Memoirs on Differential Equations and Mathematical Physics, 52 (2011), 1--16]. We present a proof which only involves simple energy arguments. The second problem is to characterize the kernel of the spherical mean transform on annular regions, which was studied by C. Epstein and B. Kleiner [Comm. Pure Appl. Math., 46(3) (1993), 441--451]. We present a short proof that simultaneously provides the necessity and sufficiency for the characterization. As a consequence, we derive a reconstruction procedure for the transform with additional interior (or exterior) information.
We also discuss how the approach works for the hyperbolic and spherical spaces.
Abstract:
Photoacoustic tomography is a rapidly developing medical imaging technique that combines optical and ultrasound imaging to exploit the high contrast and high resolution of the respective individual modalities. Mathematically, photoacoustic tomography is divided into two steps. In the first step, one solves an inverse problem for the wave equation to determine how tissue absorbs light as a result of a boundary illumination. The second step is generally modeled by either diffusion or transport equations, and involves recovering the optical properties of the region being imaged.
In this paper we address the second step of photoacoustics, and in particular, we show that the absorption coefficient in the stationary transport equation can be recovered given certain internal information about the solution. We will consider the variable index of refraction case, which will correspond to an inverse transport problem on a Riemannian manifold with internal data and a known metric. We will prove a stability estimate for a functional of the absorption coefficient of the medium by finding a singular decomposition for the distribution kernel of the measurement operator. Finally, we will use this estimate to recover the desired absorption properties.
Abstract:
The X-ray phase contrast imaging technique relies on the measurement of the Fresnel diffraction intensity patterns associated to a phase shift induced by the object. The simultaneous recovery of the phase and of the absorption is an ill-posed nonlinear inverse problem. In this work, we investigate the resolution of this problem with nonlinear Tikhonov regularization and with a joint sparsity constraint regularization. The regularization functionals are minimized with a Gauss-Newton method and with a fixed point iterative method based on a surrogate functional. The algorithms are evaluated using simulated noisy data. The joint sparsity regularization gives better reconstructions for high noise levels.
Abstract:
Source extraction in audio is an important problem in the study of blind source separation (BSS) with many practical applications. It is a challenging problem when the foreground sources to be extracted are weak compared to the background sources. Traditional techniques often do not work in this setting. In this paper we propose a novel technique for extracting foreground sources. This is achieved by an interval of silence for the foreground sources. Using this silence interval one can learn the background information, allowing the removal or suppression of background sources. Very effective optimization schemes are proposed for the case of two sources and two mixtures.
Abstract:
We consider an interior inverse scattering problem of reconstructing the shape of a cavity. The measurements are the scattered fields on a curve inside the cavity due to one point source. We employ the decomposition method to reconstruct the cavity and present some convergence results. Numerical examples are provided to show the viability of the method.
Abstract:
In the paper "Strongly Convex Programming for Exact Matrix Completion and Robust Principal Component Analysis", an explicit lower bound of $\tau$ is strongly based on Theorem 3.4. However, a coefficient is missing in the proof of Theorem 3.4, which leads to an improper result. In this paper, we correct this error and provide the right bound of $\tau$.
The Lanczos tensor is an interesting animal. It can be thought of as the source of the Weyl curvature tensor, the traceless part of the Riemann curvature tensor.
The Weyl tensor and the Ricci tensor together fully determine the Riemann tensor, i.e., the intrinsic curvature of a spacetime. Crudely put, whereas the Ricci tensor tells you how the volume of, say, a cloud of dust changes in response to gravity, the Weyl tensor tells you how that cloud of dust is distorted in response to the same gravitational field. (For instance, consider a cloud of dust in empty space falling towards the Earth. In empty space, the Ricci tensor is zero, so the volume of the cloud does not change. But its shape becomes distorted and elongated in response to tidal forces. This is described by the Weyl tensor.)
Because the Ricci tensor is absent, the Weyl tensor fully describes gravitational fields in empty space. In a sense, the Weyl tensor is analogous to the electromagnetic field tensor that fully describes electromagnetic fields in empty space. The electromagnetic field tensor is sourced by the four-dimensional electromagnetic vector potential (meaning that the electromagnetic field tensor can be expressed using partial derivatives of the electromagnetic vector potential.) The Weyl tensor has a source in exactly the same sense, in the form of the Lanczos tensor.
The electromagnetic field does not uniquely determine the electromagnetic vector potential. This is basically how integrals vs. derivatives work. For instance, the derivative of the function $y=x^2$ is given by $y'=2x$. But the inverse operation is not unambiguous: $\int 2x~ dx=x^2+C$ where $C$ is an arbitrary integration constant. This is a recognition of the fact that the derivative of any function in the form $y=x^2+C$ is $y'=2x$ regardless of the value of $C$; so knowing only the derivative $y'$ does not fully determine the original function $y$.
In the case of electromagnetism, this freedom to choose the electromagnetic vector potential is referred to as the gauge freedom. (And this gauge freedom can be used to deduce the electromotive force, but that is another story.) A similar gauge freedom also exists for the Lanczos tensor. And this gauge freedom is what led to an interesting question during a recent e-mail conversation I had with someone: could the Lanczos tensor be used in some way by an observer to determine if he is at or near the event horizon of a black hole? Conventional wisdom says no, since for a freely falling observer, the event horizon is not in any way special. But could conventional wisdom be wrong?
The Weyl-Lanczos equation is given by
\begin{align}C_{abcd}&=H_{abc;d}+H_{cda;b}+H_{bad;c}+H_{dcb;a} \\ &\, \, \, \, \, + (H^e{}_{(ac);e} + H_{(a|e|}{}^e{}_{;c)})g_{bd} + (H^e{}_{(bd);e} + H_{(b|e|}{}^e{}_{;d)})g_{ac} \\ &\, \, \, \, \, - (H^e{}_{(ad);e} + H_{(a|e|}{}^e{}_{;d)})g_{bc} - (H^e{}_{(bc);e} + H_{(b|e|}{}^e{}_{;c)})g_{ad} \\ &\, \, \, \, \, -\frac{2}{3} H^{ef}{}_{f;e}(g_{ac}g_{bd}-g_{ad}g_{bc}),\end{align}
where $C_{abcd}$ is the Weyl tensor, $H_{abc}$ is the Lanczos tensor, and $g_{ab}$ is the metric. Covariant derivatives with respect to the metric are indicated by the semicolon, while round brackets in indices indicate symmetrization.
For the Schwarzschild metric $ds^2=(1-2m/r)dt^2-(1-2m/r)^{-1}dr^2-r^2d\theta^2-r^2\sin^2\theta d\phi^2$, the simplest solution is given by
\begin{align}H_{trt}=-H_{rtt}=\frac{m}{r^2},\end{align}
where $m$ is the source mass, $r$ is the radial Schwarzschild coordinate, and we choose units such that the universal gravitational constant and the speed of light are both 1. All other components of the Lanczos tensor are zero.
However, because of the gauge freedom, other solutions exist. A notable solution is the solution given in the "Lanczos algebraic gauge", a gauge choice that greatly simplifies the Weyl-Lanczos equation. This solution is given by
\begin{align}H_{trt}&=-H_{rtt}=\frac{2m}{3r^2},\\
H_{r\theta\theta}&=-H_{\theta r\theta}=-\frac{m}{3(1-2m/r)},\\ H_{r\phi\phi}&=-H_{\phi r\phi}=-\frac{m~\sin^2\theta}{3(1-2m/r)}.\end{align}
There is one property shared by these two solutions. The invariant scalar quantity $H_{abc}H^{abc}$ in both cases is infinite at the horizon where $r=2m$ (the value is $-2m^2/(r^3(r-2m))$ and $-4m^2/(3r^3(r-2m))$, respectively, and it is the $r-2m$ factor in the denominator that causes trouble.) So the question is, then, would this be true for all solutions of the Weyl-Lanczos equation in the Schwarzschild metric?
The answer is no. The gauge freedom of the Lanczos tensor is expressed by the fact that if $H_{abc}$ is a solution of the Weyl-Lanczos equation, then so is
$$H'_{abc} = H_{abc} + \Phi_{[a}g_{b]c},$$
where $\Phi_a$ is an arbitrary vector field and the square brackets indicate antisymmetrization.
So if we start with the solution given by $H_{trt}=-H_{rtt}=m/r^2$, the condition for $H_{abc}H^{abc}$ to be zero at the horizon is given by
$$\Phi_t^2 = -\frac{2m(r-2m)}{3r^3}\Phi_r + \frac{(r-2m)^2}{r^2}\Phi_r^2 + \frac{r-2m}{r^3}\Phi_\theta^2+\frac{r-2m}{r^3\sin^2\theta}\Phi_\phi^2+\frac{m^2}{3r^4}.$$
In particular, the solution given by $\Phi_t = m/(\sqrt{3}\,r^2)$ with all other components of $\Phi$ being 0 is such a solution. The nontrivial components of this solution are:
\begin{align}H_{trt}&= \frac{m}{r^2},\\
H_{trr}&= \frac{m}{\sqrt{3}r(2m-r)},\\ H_{t\theta\theta}&= -\frac{m}{\sqrt{3}},\\ H_{t\phi\phi}&= -\frac{m~\sin^2\theta}{\sqrt{3}}.\end{align}
This can be verified by direct substitution as a valid solution of the Weyl-Lanczos equation (and furthermore, the above equation for $\Phi_t$ provides an infinite 3-parameter set of such solutions) for which $H_{abc}H^{abc} = 0$ everywhere.
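As a cross-check (my own, independent of the Maxima session below), both invariants can be verified with sympy, using only the components listed above, the antisymmetry of the Lanczos tensor in its first two indices, and the diagonal Schwarzschild metric:

```python
import sympy as sp

# Verify H_abc H^abc for the two Lanczos-tensor solutions quoted above,
# in the Schwarzschild metric with coordinates (t, r, theta, phi).
m, r, th = sp.symbols('m r theta', positive=True)
f = 1 - 2*m/r
# inverse metric for ds^2 = f dt^2 - dr^2/f - r^2 dth^2 - r^2 sin^2(th) dphi^2
ginv = [1/f, -f, -1/r**2, -1/(r**2*sp.sin(th)**2)]
T, R, TH, PH = range(4)

def invariant(components):
    """H_abc H^abc; `components` maps (a, b, c) -> H_abc for a < b,
    and the antisymmetric partners H_bac = -H_abc are filled in."""
    H = {}
    for (a, b, c), v in components.items():
        H[(a, b, c)] = v
        H[(b, a, c)] = -v
    return sp.simplify(sum(v**2*ginv[a]*ginv[b]*ginv[c]
                           for (a, b, c), v in H.items()))

# the simple solution H_trt = -H_rtt = m/r^2
simple = {(T, R, T): m/r**2}
print(invariant(simple))   # equals -2 m^2/(r^3 (r - 2m)): diverges at r = 2m

# the gauge-transformed solution with Phi_t = m/(sqrt(3) r^2)
s3 = sp.sqrt(3)
regular = {(T, R, T): m/r**2,
           (T, R, R): m/(s3*r*(2*m - r)),
           (T, TH, TH): -m/s3,
           (T, PH, PH): -m*sp.sin(th)**2/s3}
print(invariant(regular))  # 0
```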
I used the following Maxima code to study the Lanczos tensor:
load(ctensor);
load(itensor);
ct_coordsys(exteriorschwarzschild);
lg:-lg;
cmetric();
christof(false);
lriemann(false);
riemann(false);
weyl(false);
imetric(g);
components(H([a,m],[]),g([],[r,s])*(covdiff(H([a,s,m],[]),r)-covdiff(H([a,s,r],[]),m)));
components(g([a,b,c,d],[]),g([a,c],[])*g([b,d],[])-g([a,d],[])*g([b,c],[]));
components(C([b,c,d,a],[]),
  (covdiff(H([a,b,c],[]),d)-covdiff(H([a,b,d],[]),c)+
   covdiff(H([c,d,a],[]),b)-covdiff(H([c,d,b],[]),a))+
  1/2*(H([a,d],[])*g([b,c],[])+H([d,a],[])*g([b,c],[])+
       H([b,c],[])*g([a,d],[])+H([c,b],[])*g([a,d],[])-
       H([a,c],[])*g([b,d],[])-H([c,a],[])*g([b,d],[])-
       H([b,d],[])*g([a,c],[])-H([d,b],[])*g([a,c],[]))+
  2/3*g([],[e,i])*g([],[f,j])*covdiff(H([i,j,e],[]),f)*g([a,b,c,d],[]));
EQ:W([a,b,c,d],[])=C([a,b,c,d],[])$
SOL:ic_convert(EQ)$
for i thru 4 do for j thru 4 do for k thru 4 do HH[i,j,k]:0;
HH[1,2,1]:m/r^2;
HH[2,1,1]:-m/r^2;
EHH:ic_convert(H([a,b,c],[])=HH([a,b,c],[])+F([a],[])*g([b,c],[])-F([b],[])*g([a,c],[]));
ev(EHH);
for i thru dim do for j thru dim do for k thru dim do
  if H[i,j,k]#0 then ldisplay(H[i,j,k]);
ev(SOL);
for i thru 4 do for j thru 4 do for k thru 4 do for l thru 4 do
  weyl[i,j,k,l]:factor(weyl[i,j,k,l]);
for i thru 4 do for j thru 4 do for k thru 4 do for l thru 4 do
  W[i,j,k,l]:factor(W[i,j,k,l]);
for i thru dim do for j from i+1 thru dim do for k from i thru dim do for l thru dim do
  if W[i,j,k,l] # 0 then ldisplay(W[i,j,k,l]);
for i thru dim do for j from i+1 thru dim do for k from i thru dim do for l thru dim do
  if weyl[i,j,k,l] # 0 then ldisplay(weyl[i,j,k,l]);
for i thru 4 do for j thru 4 do for k thru 4 do for l thru 4 do
  Z[i,j,k,l]:factor(W[i,j,k,l]-weyl[i,j,k,l]);
for i thru 4 do for j thru 4 do for k thru 4 do for l thru 4 do
  if Z[i,j,k,l] # 0 then ldisplay(Z[i,j,k,l]);
H0:0;
for i thru 4 do for j thru 4 do for k thru 4 do
  H0:factor(H0+H[i,j,k]*H[i,j,k]*ug[i,i]*ug[j,j]*ug[k,k]);
H0:factor(H0);
solve(H0=0,F[1]^2);
factor(H0),%[1];
In my articles on filter design I mostly focus on a rather specific subset of all possible filters, namely
symmetrical FIR filters with an odd number of coefficients. Each of these properties (1. FIR, 2. symmetrical, and 3. an odd number of coefficients) was chosen for the purpose of making the filters easier to work with. The three properties each have clear advantages, as described below.
FIR
Finite Impulse Response (FIR) filters are inherently stable, in the sense that they guarantee BIBO stability, where BIBO stands for bounded-input, bounded-output. Mathematically, BIBO stability is defined as follows. A signal \(x[n]\) is bounded if there exists a real number \(b\in\mathbb{R}\) such that \(|x[n]|\leq b\) for all \(n\in\mathbb{Z}\). A filter is BIBO stable if every bounded input signal results in a bounded output signal. In other words, you don't need to worry about the output of your filter "blowing up" if the input is okay.
Symmetrical FIR
Symmetrical FIR filters are linear phase. This means that all frequencies in the input signal are delayed in the same way, because the filter does not introduce phase distortion (also see The Phase Response of a Filter). For example, if you use a FIR filter to change some characteristic of an audio signal, it will not also introduce a delay in some of the frequencies. Of course, this also implies that you cannot use symmetrical FIR filters to correct or pre-compensate frequency delay…
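The BIBO property is easy to see numerically. A sketch (my own illustration, not from the article): any FIR output is bounded by \(\sum_k |h[k]| \cdot \max_n |x[n]|\), while an IIR recursion with a pole outside the unit circle diverges even for a constant bounded input.

```python
import numpy as np

# FIR: output of a bounded input is always bounded by sum(|h|) * max|x|.
x = np.ones(100)                 # bounded input, |x[n]| <= 1

h = np.ones(5) / 5               # 5-tap moving-average FIR filter
y_fir = np.convolve(x, h)
assert np.max(np.abs(y_fir)) <= np.sum(np.abs(h)) + 1e-12

# Unstable IIR: y[n] = x[n] + 1.1*y[n-1], i.e. a pole at z = 1.1.
y_iir = np.zeros_like(x)
for n in range(1, len(x)):
    y_iir[n] = x[n] + 1.1*y_iir[n - 1]
print(y_iir[-1])                 # grows like 1.1**n; already above 1e4 here
```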
A major effect of nonlinear phase delay is that rising and falling edges are affected differently. This is illustrated in Figure 1. The plot on the left shows a (slightly noisy) block pulse filtered with a low-pass moving-average filter, with the original signal in blue and the filtered signal in red. The effect of the filter on the rising and falling edges is exactly symmetrical. The plot on the right shows the same block pulse filtered with a low-pass single-pole IIR filter. The effect on both edges is quite different. Hence, for applications where the edges in the signal are important, this filter is often unsuitable.
Symmetrical FIR with an Odd Number of Coefficients
A symmetrical FIR filter with an odd number of coefficients has a delay of an
integer number of samples. This might be a minor advantage in most cases, but it does let you compare the original signal with the filtered one simply by shifting samples. Figure 2 shows the signal of Figure 1, with the moving-average-filtered signal shifted over 5 samples. The length of the filter is 11 samples, so that its delay is exactly 5 samples, according to the formula
\[d=\frac{N-1}{2},\]
with \(d\) the delay and \(N\) the length of the filter.
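The formula is easy to check numerically; a small sketch (my own code, mirroring the moving-average example of Figure 2):

```python
import numpy as np

# An 11-tap moving average has d = (N - 1) / 2 = 5 samples of delay, so
# shifting the output left by 5 samples realigns it with the input.
N = 11
h = np.ones(N) / N            # symmetric FIR, odd number of coefficients
d = (N - 1) // 2              # delay: 5 samples

x = np.zeros(100)
x[30:60] = 1.0                # block pulse
y = np.convolve(x, h)         # filtered signal

y_aligned = y[d:d + len(x)]   # shift by d samples to undo the delay
# away from the edges, the aligned output sits exactly on the pulse
assert np.allclose(y_aligned[40:50], x[40:50])
```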
The conclusion is that, if your application allows it, you should strongly consider going for a symmetrical FIR filter with an odd number of coefficients. |
Let $L$ be a regular language over $\Sigma=\{a,b,c\}$. Build a finite automaton for $L/\{a\}$.
Because $L$ is regular, a DFA exists for it: $A=(\Sigma, Q, q_0, F, \delta)$.
Let $M$ be a finite automaton, $L(M)=L/\{a\}$.
$M=(\Sigma, Q\times\{0,1\}, (q_0,0), \delta', F\times\{1\})$ with the transition function defined below for all $q\in Q, \sigma \in \Sigma$: $$ \delta'((q,0),\sigma)=(\delta(q,\sigma),0)\\ \delta'((q,0),\epsilon)=(\delta(q,a),1) $$
The reasoning is that in state $0$ we just read letters in $M$ because those exact letters also exist in $L$. But when we reach end of input in $M$ via $\epsilon$ we still need to read $a$ in $L$. I'm not sure if it's valid to assume that $\epsilon$ means end of input? |
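For comparison, the textbook construction for the right quotient by a single letter needs no $\epsilon$-moves at all: keep $Q$, $q_0$ and $\delta$, and only change the accepting set to $F' = \{q : \delta(q,a) \in F\}$. A sketch of this idea, on a hypothetical example DFA (my own, accepting strings over $\{a,b\}$ that end in $ba$):

```python
# Hypothetical 3-state DFA accepting strings ending in "ba".
delta = {('q0', 'a'): 'q0', ('q0', 'b'): 'q1',
         ('q1', 'a'): 'q2', ('q1', 'b'): 'q1',
         ('q2', 'a'): 'q0', ('q2', 'b'): 'q1'}
Q, q0, F = {'q0', 'q1', 'q2'}, 'q0', {'q2'}

def run(accepting, w):
    q = q0
    for s in w:
        q = delta[(q, s)]
    return q in accepting

# Quotient automaton for L/{a}: accept q iff reading one more 'a' would accept.
F_quot = {q for q in Q if delta[(q, 'a')] in F}

# Sanity check: w is in L/{a} iff w + 'a' is in L.
for w in ['', 'b', 'ab', 'ba', 'bb', 'aba']:
    assert run(F_quot, w) == run(F, w + 'a')
```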
Although it is relatively simple to see that integer linear programming is NP-hard, whether it lies in NP is a bit harder. Therefore, I'm wondering whether the following reasoning shows that $ILP\in NP$ implies $NP=CoNP$.
To be precise, I am considering the decision variant of ILP over a non-unary input encoding, so the problem is: given an integer matrix $A$ and an integer vector $b$, does there exist an integer vector $y \in \mathbb{Z}^n$ with $Ay \leq b$?
The main issue here is that if we take such a $y$ as an NP-certificate, it might be of exponential size in terms of the input, but this of course does not mean a certificate cannot exist.
My idea was that if we assume ILP in NP, we can use a discrete version of the Farkas lemma (see here) to transform an ILP instance with the answer NO to an ILP instance with the answer YES, which then would imply that ILP is in coNP. So we get an NP-complete problem in coNP, which implies that $NP=coNP$. Since most people think $NP\neq coNP$, this is 'evidence' for ILP not being in NP.
However, I'm not sure if my reasoning is correct. In particular, does this 'discrete Farkas lemma' work in polynomial time and create an ILP of size polynomial in the original one?
I have been working the afternoon on an intertemporal exercise where I'm blocking on something very basic. Have been looking on previous posts but didn't find a similar question.
We have the utility function of the household : $ U(C_1,C_2,n_2)= \log(c_1) + \beta \log(c_2) - \beta n_2 $
We have an allocation $ \bar y $ for period 1 and none for period 2, the household can only work in period two.
Here is the budget constraint : $ c_1 p_1 + c_2 p_2 = p_1 ( \bar y - x ) + qx + wn_2 + \pi $
And the production function of the firm : $ f(k) = k^\alpha n_2^{1-\alpha} $
What I did until now is replacing $ c_2 $ in the utility function to have it depending on $c_1$, $n_2$ and $x$ only. Then I derive it for each variable and I can get the optimal condition for consumption: $ \frac{c_2}{c_1} = \beta \frac{p_1}{p_2} $; the other derivatives give me $ w = p_2 c_2 $ and $ q = p_1 $. By replacing $c_2$ in the budget constraint, I can determine $c_1$ and then $c_2$, and $x$ is given by the market clearing condition $ \bar y = c_1 + x $.
But I can't find anyway to determine $n_2$, it's usually in a logarithm so we can find it in our First Order Conditions...
Edit : $ x $ is the amount of capital given to the firm by the household, so we should have $ x=k $ on the capital market
$ q $ is the price of capital per unit
$ \pi $ is the profit of the firm , i.e. $ \pi(k,n_2) = p_2 k^\alpha n_2^{1-\alpha} - kq - wn_2 $
Edit 2 : Rereading my question, I realize I should clarify: the household can only consume the firm's production in period 2, which is why it contributes capital and labor to the firm, earning wages and profits. Therefore, we should also have $ f(k,n_2) = c_2 $ I believe
Edit 3 : My profit maximisation gives me : $ \frac{\alpha}{1-\alpha} n_2 w = k q $
Edit 4 after @X recommendation : So I do the lagrangian and derivatives, which gets me :
$ \frac{1}{c_1} = \lambda p_1 $
$ \frac{\beta}{c_2} = \lambda p_2 $
$ \beta = \lambda w $
$ p_1 = q $
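For what it's worth, these four conditions can be reproduced mechanically with sympy (a sketch; the symbol names mirror the notation of the exercise, with `prof` standing for the profit $\pi$):

```python
import sympy as sp

# First-order conditions of the household problem from the Lagrangian,
# treating prices, wage, profit and the endowment as parameters.
c1, c2, n2, x, lam = sp.symbols('c_1 c_2 n_2 x lambda', positive=True)
p1, p2, q, w, prof, beta, ybar = sp.symbols('p_1 p_2 q w pi beta ybar',
                                            positive=True)

U = sp.log(c1) + beta*sp.log(c2) - beta*n2
budget = p1*(ybar - x) + q*x + w*n2 + prof - p1*c1 - p2*c2
L = U + lam*budget

# each condition matches one line above
assert sp.simplify(sp.diff(L, c1) - (1/c1 - lam*p1)) == 0   # 1/c1 = lam*p1
assert sp.simplify(sp.diff(L, c2) - (beta/c2 - lam*p2)) == 0  # beta/c2 = lam*p2
assert sp.simplify(sp.diff(L, n2) - (lam*w - beta)) == 0    # beta = lam*w
assert sp.simplify(sp.diff(L, x) - lam*(q - p1)) == 0       # q = p1
```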
After computing, I find : $ c_1 = \frac{p_1 \bar y + w n_2 - \pi}{p_1(1+\beta)} $ and $ x = k = \bar y - c_1 = \frac{\beta p_1 \bar y - w n_2 + \pi}{p_1(1+\beta)} $
But still having trouble regarding my original issue, $ n_2 $ : I did , according to your advices : $ \frac{\alpha}{1-\alpha} n_2 w = k q $
$ \frac{\alpha}{1-\alpha} n_2 \frac{\beta}{\lambda} = \frac{\beta p_1 \bar y - w n_2 + \pi}{p_1(1+\beta)} p_1 $
$ \frac{\alpha}{1-\alpha} n_2 \beta p_1 c_1 = \frac{\beta p_1 \bar y - w n_2 + \pi}{(1+\beta)} $
$ \frac{\alpha}{1-\alpha} n_2 \frac{\beta}{1+\beta}(p_1 \bar y + w n_2 - \pi) = \frac{\beta p_1 \bar y - w n_2 + \pi}{(1+\beta)} $
$ \frac{\alpha}{1-\alpha} n_2 \beta(p_1 \bar y + w n_2 - \pi) = \beta p_1 \bar y - w n_2 + \pi $
$ n_2 \beta(p_1 \bar y + w n_2 - \pi) = \frac{1-\alpha}{\alpha} \frac{\beta p_1 \bar y - w n_2 + \pi}{\beta(p_1 \bar y + w n_2 - \pi)} $
Still pretty stupid as results ... I also used the central planner way and found $ n_2 = 1 - \alpha $ , which is weird.. I'd still like to go at the end of this way if someone has an idea.
Any idea ? Thanks. |
Background:
The Quantum Dimer Model is a lattice model, where each configuration is a covering of the lattice with nearest-neighbour bonds, like in the figure on the left:
Each of the configurations is a basis state in a Hilbert space and they are all mutually orthogonal. A 2x2 patch of spins is called a plaquette. A plaquette is flippable if it contains two parallel bonds (like the green plaquette in the picture; the red one is not flippable).
For each state, one can define two winding numbers, by considering an arbitrary cut through the torus (dashed line in the figure) and counting how many dimers cross the line with a black dot on one side, minus the number of dimers that cross the line with a white dot on that same side. In the example above, the winding number in both the horizontal and vertical direction is zero. This decomposes the Hilbert space (on the $L \times L$ torus) into $(L+1)^2$ sectors (each winding number can take values from $-\frac{L}{2}$ to $+\frac{L}{2}$).
The Hamiltonian of the system is
The first term (the potential) gives an energy $V$ to flippable plaquettes, while the latter (kinetic) term flips them. One can check that the Hamiltonian does not change the winding number, i.e.
$$ \langle b \rvert H \lvert a \rangle = 0$$
for all $a$ and $b$ with different winding numbers.
Within each of the sectors though, it is commonly stated that $H$ is ergodic, meaning that for any $c$ and $d$ with the same pair of winding numbers, there exists an $n \in \mathbb{Z}$, such that
$$ \langle c \rvert H^n \lvert d \rangle \neq 0$$
This is stated for example in the original paper:
Any two dimer configurations with the same winding numbers can be obtained from each other by repeated application of the Hamiltonian; no local operator has matrix elements between states of different winding number. The winding numbers therefore label the disconnected sectors of Hilbert space.
One can also find the statement in this (very-well written) review:
For the square lattice, the kinetic term is believed to be ergodic in each topological sector. In this case, there is a unique ground state for each topological sector given by the equal amplitude superposition of all dimer coverings in that sector.
I don't think this statement is true: Consider the two configurations in the middle and on the right hand side of the first figure and convince yourself of the fact that both of them have winding numbers 2 (horizontal) and 2 (vertical). However, there is no flippable plaquette in either of them, meaning the Hamiltonian is identically zero on both! Moreover, I can create many such isolated configurations by swapping around the shaded staircases; in fact, the right configuration was obtained from the middle by swapping the yellow and blue staircases. This way I can create $2^{L/2}$ isolated states (because there are $\frac{L}{2}$ staircases and for each of them I can choose the vertical or horizontal orientation). They won't all have the same winding number, but they are definitely isolated from all other states in the same sector.
The reason for me to ask this question is that the above Hamiltonian, at the point $t=V=1$, naturally came up in my research. At that point, which is also called the Rokhsar-Kivelson point, there is one ground state for each set of states connected by the Hamiltonian in the sense I described above. By the above construction, the ground state is exponentially degenerate. My question:
People seem to ignore those isolated states and claim that the real degeneracy is $(L+1)^2$. Are those states physically not relevant? Why can we neglect them?
When analyzing algorithms we want to reason (quickly) about their running time. We use the RAM model, in which the following are considered to cost 1 CPU operation:
Other attributes:
The goal is to allow you to count operations and CPU cost without needing to know the specific architecture you are running on.
Some notes on algorithm analysis.
Big-oh, \( \mathcal{O}( \cdot ) \), has nothing to do with CPU cost per se; it is just a way to fuzzily compare functions. We say that \( f(n) = \mathcal{O}(g(n)) \) if and only if there is some cutoff \( N\) and positive real \(c\) such that for all \(n \geq N\) we have \(|f(n)| \leq c \cdot |g(n)| \).
The official definition is there to get you out of trouble when you feel lost, but it looks more complicated than the idea really is: it's just a tool for comparing the growth of functions.
For \(f, g, f_1, g_1\) functions from \(\mathbb{N} \to \mathbb{R}\) we know that:
So to use big-oh in algorithms we ballpark the cost of loops and multiplications ignoring specific details.
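As a quick sanity check, the definition can be tested directly in code (a toy sketch; the functions and the constants $c$ and $N$ below are my own choices, not from the notes):

```python
# Check the Big-O definition: f(n) = O(g(n)) iff |f(n)| <= c*|g(n)| for all n >= N.
def is_bounded(f, g, c, N, upto=10000):
    """Spot-check |f(n)| <= c*|g(n)| for n in [N, upto)."""
    return all(abs(f(n)) <= c * abs(g(n)) for n in range(N, upto))

f = lambda n: 3*n**2 + 10*n + 7   # claim: f(n) = O(n^2)
g = lambda n: n**2

# With c = 4 and cutoff N = 11: 3n^2 + 10n + 7 <= 4n^2 once n^2 >= 10n + 7.
print(is_bounded(f, g, c=4, N=11))   # True
print(is_bounded(f, g, c=3, N=11))   # False: c = 3 is too small for any cutoff
```

Of course this only spot-checks finitely many $n$; the point is to see the quantifiers in the definition at work.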
Sample algorithms to practice on: sorting!
This is insert sort done via Folk Dance!
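For reference, here is a minimal insertion sort in Python (my own sketch; the dance performs the same comparisons and shifts), which costs \( \mathcal{O}(n^2) \) operations in the worst case:

```python
def insertion_sort(a):
    """Sort list a in place; O(n^2) worst case, O(n) on nearly-sorted input."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # Shift larger elements one slot right until key's position is found.
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```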
If two operators commute, then both corresponding quantities can be measured at the same time; if not, there is a tradeoff between the accuracy of the measurement of one quantity and that of the other.
Introduction
Operators are commonly used to perform a specific mathematical operation on another function. The operation can be to take the derivative or integrate with respect to a particular variable, or to multiply, divide, add or subtract a number or term involving the initial function. Operators are commonly used in physics, mathematics and chemistry, often to simplify complicated equations, as with the Hamiltonian operator used to solve the Schrödinger equation.
Operators
Operators are generally characterized by a hat. Thus they generally appear in equations like the following, with \(\hat{E}\) being the operator operating on \(f(x)\):
\(\hat{E} f(x) = g(x)\)
For example if \(\hat{E} = d/dx\) then:
\(g(x) = f'(x)\)
Commuting Operators
One property of operators is that the order of operation matters. Thus:
\(\hat{A}\hat{E}f(x) \not= \hat{E}\hat{A}f(x)\)
unless the two operators commute. Two operators commute if the following equation is true:
\(\left[\hat{A},\hat{E}\right] = \hat{A}\hat{E} - \hat{E}\hat{A} = 0\)
To determine whether two operators commute, first apply \(\hat{A}\hat{E}\) to a function \(f(x)\), then apply \(\hat{E}\hat{A}\) to the same function. If the same answer is obtained, subtracting the two results gives zero and the two operators commute.
Example 1
Do \(\hat{A}\) and \(\hat{E} \) commute if \(\hat{A} = \frac{d}{dx}\) and \(\hat{E} = x^2\)?
SOLUTION
This requires evaluating \(\left[\hat{A},\hat{E}\right]\).
\[\hat{A}\hat{E} f(x) = \hat{A}\left[x^2 f(x) \right] = \dfrac{d}{dx} \left[x^2 f(x)\right] = 2xf(x) + x^2 f'(x)\]
\[\hat{E}\hat{A}f(x) = \hat{E}\left[f'(x)\right] = x^2 f'(x)\]
\[\left[\hat{A},\hat{E}\right] = 2x f(x) + x^2 f'(x) - x^2f'(x) = 2x f(x) \not= 0\]
Therefore the two operators do not commute.
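The result \(\left[\hat{A},\hat{E}\right]f(x) = 2xf(x)\) can be spot-checked numerically; in this sketch (the test function, evaluation point, and step size are arbitrary choices of mine) \(d/dx\) is approximated by a central difference:

```python
import math

h = 1e-6  # finite-difference step

def A(f):
    """Central-difference approximation of the operator d/dx."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

def E(f):
    """The operator 'multiply by x^2'."""
    return lambda x: x**2 * f(x)

f = math.sin
x = 1.3
# [A, E]f evaluated at x; should approximate 2*x*f(x)
commutator = A(E(f))(x) - E(A(f))(x)
print(commutator, 2 * x * f(x))  # both ~2.505
```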
Example 2
Do \(\hat{B}\) and \(\hat{C} \) commute if \(\hat{B}= \frac {h} {x}\) and \(\hat{C}f(x) = f(x) + 3\)?
SOLUTION
This requires evaluating \(\left[\hat{B},\hat{C}\right]\)
\[\hat{B}\hat{C}f(x) = \hat{B}\left[f(x) + 3\right] = \dfrac {h}{x} (f(x) +3) = \dfrac {h f(x)}{x} + \dfrac{3h}{x} \]
\[\hat{C}\hat{B}f(x) = \hat{C}\left[\dfrac {h} {x} f(x)\right] = \dfrac {h f(x)} {x} + 3\]
\[\left[\hat{B},\hat{C}\right]f(x) = \dfrac {h f(x)} {x} + \dfrac {3h} {x} - \dfrac {h f(x)} {x} - 3 = \dfrac{3h}{x} - 3 \not= 0\]
The two operators do not commute.
Example 3
Do \(\hat{J}\) and \(\hat{O} \) commute if \(\hat{J} = 3x\) and \(\hat{O} = x^{-1}\)?
SOLUTION
This requires evaluating \(\left[\hat{J},\hat{O}\right]\)
\(\hat{J}\hat{O}f(x) = \hat{J}\left(\dfrac{f(x)}{x}\right) = 3x\,\dfrac{f(x)}{x} = 3f(x) \)
\(\hat{O}\hat{J}f(x) = \hat{O}\left(3xf(x)\right) = \dfrac{3xf(x)}{x} = 3f(x) \)
\(\left[\hat{J},\hat{O}\right] = 3f(x) - 3f(x) = 0\)
Because the difference is zero, the two operators commute.
Applications in Physical Chemistry
Operators are very common in Physical Chemistry with a variety of purposes. They are used to figure out the energy of a wave function using the Schrödinger Equation.
\[\hat{H}\psi = E\psi\]
They also help to explain observations made experimentally. An example of this is the relationship between the magnitude of the angular momentum and its components.
\[\left[\hat{L}^2, \hat{L}_x\right] = \left[\hat{L}^2, \hat{L}_y\right] = \left[\hat{L}^2, \hat{L}_z\right] = 0 \]
However, the components do not commute with each other. An additional property of operators that commute is that both corresponding quantities can be measured simultaneously. Thus, the magnitude of the angular momentum and ONE of the components (usually \(z\)) can be known at the same time; however, NOTHING is then known about the other two components.
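These relations can be checked concretely in the smallest case, spin-1/2, where (taking \(\hbar = 1\)) the angular momentum components are represented by the Pauli matrices divided by 2; the sketch below uses plain 2×2 complex matrices:

```python
def matmul(A, B):
    """2x2 complex matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comm(A, B):
    """Commutator [A, B] = AB - BA for 2x2 matrices."""
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

# Spin-1/2 angular momentum components (hbar = 1): L_i = sigma_i / 2.
Lx = [[0, 0.5], [0.5, 0]]
Ly = [[0, -0.5j], [0.5j, 0]]
Lz = [[0.5, 0], [0, -0.5]]

# [L_x, L_y] = i*L_z: the components do not commute with each other.
c = comm(Lx, Ly)
print(c)  # entries equal those of i*Lz

# L^2 = Lx^2 + Ly^2 + Lz^2 (here (3/4)*Identity) commutes with Lz.
L2 = [[matmul(Lx, Lx)[i][j] + matmul(Ly, Ly)[i][j] + matmul(Lz, Lz)[i][j]
       for j in range(2)] for i in range(2)]
print(comm(L2, Lz))  # the zero matrix
```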
References
Gohberg, I. Basic Operator Theory; Birkhäuser: Boston, 2001.
McQuarrie, D. A. Quantum Chemistry, 2nd Edition; University Science Books: Sausalito, 2008.
Schechter, M. Operator Methods in Quantum Mechanics; Dover Publications, 2003.
Problems
Determine whether the following combination of operators commute.
\(\left[\hat{K},\hat{H}\right]\) where \(\hat{K} = \alpha \displaystyle \int_{1}^{\infty} dx\) and \(\hat{H} = d/dx\)
\(\left[\hat{I},\hat{L}\right]\) where \(\hat{I} = 5\) and \(\hat{L} = \displaystyle \int_{1}^{\infty} dx\)
Show that the components of the angular momentum do not commute.
\[\hat{L}_x = -i \hbar \left[ -\sin \phi \dfrac {\partial} {\partial \theta} - \cot \theta \cos \phi \dfrac {\partial} {\partial \phi} \right] \]
\[\hat{L}_y = -i \hbar \left[ \cos \phi \dfrac {\partial} {\partial \theta} - \cot \theta \sin \phi \dfrac {\partial} {\partial \phi} \right] \]
\(\hat{L}_z = -i\hbar \dfrac {\partial} {\partial\phi} \)
Solution
\(\left[\hat{L}_z,\hat{L}_x\right] = i\hbar \hat{L}_y \)
\(\left[\hat{L}_x,\hat{L}_y\right] = i\hbar \hat{L}_z \)
\(\left[\hat{L}_y,\hat{L}_z\right] = i\hbar \hat{L}_x \) |
I am reading and trying to understand https://jeremykun.com/2012/02/23/p-vs-np-a-primer-and-a-proof-written-in-racket/ and https://jeremykun.com/2011/07/04/turing-machines-a-primer/, and the author often talks about languages and problems interchangeably.
Example:
Definition: If a Turing machine halts on a given input, either accepting or rejecting, then it decides the input. We call an acceptance problem decidable if there exists some Turing machine that halts on every input for that problem. If no such Turing machine exists, we call the problem undecidable over the class of Turing machines.
I think I understand the definition of decidability for languages:
Language $L$ is decidable iff there is a Turing Machine that accepts all strings in $L$ and rejects all strings not in $L$.
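One way to make the language/problem correspondence concrete (an illustrative sketch of mine, not from the cited posts): the problem "is this binary string a palindrome?" corresponds to the language of palindromic strings, and a decider is any always-halting procedure that accepts exactly those strings:

```python
def decides_palindrome(w):
    """A decider for L = {w in {0,1}* : w is a palindrome}.
    Always halts: returns True (accept) or False (reject)."""
    return w == w[::-1]

print(decides_palindrome("0110"))  # True:  "0110" is in L
print(decides_palindrome("01"))    # False: "01" is not in L
```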
But what is a problem and how is a problem converted to language? What is an input for a problem? What is an acceptance problem?
there exists some Turing machine that halts on every input for that problem
I don't understand what this means.
Another example:
Definition: Given two languages $A$, $B$, we say $A \leq_p B$, or $A$ is polynomial-time reducible to $B$ if there exists a computable function $f: \Sigma^* \to \Sigma^*$ such that $w \in A$ if and only if $f(w) \in B$, and $f$ can be computed in polynomial time.
We have seen this same sort of idea with mapping reducibility in our last primer on Turing machines. Given a language B that we wanted to show as undecidable, we could show that if we had a Turing machine which decided B, we could solve the halting problem. This is precisely the same idea:
given a solution for B and an input for A, we can construct in polynomial time an input for B, use a decider for B to solve it, and then output accordingly. The only new thing is that the conversion of the input must happen in polynomial time.
Again, what is an input for a problem for a Turing machine? What is a problem for a Turing machine? |
I am attempting to solve the following ODE problem:
$$-u''+ u = x$$ $$u(0) = 0$$ $$u'(1) = -u(1)$$
The exact solution is: $u(x) = e^{-x-1} - e^{x-1} + x$
I have a Dirichlet at $x = 0$ and a Robin condition at $x = 1$. I refer to the exact solution with "little" $u$, and the approximate solution with "big" $U_i$, where the subscript $i$ refers to "point number": if I have $m$ sub-intervals, then I will have $m+1$ points indexed from $0$ to $m$.
In order to implement the Dirichlet condition, I eliminated $u_0$ from the finite difference matrix and modified the right hand side; in this case the RHS is effectively unchanged, since $u(0) = 0$ and thus $U_0 = 0$.
$$u''(0+h) \approx \frac{U_0 - 2U_1 + U_2}{h^2} = \frac{- 2U_1 + U_2}{h^2}$$
In order to implement the Robin condition, I decided to use the "ghost point" method along with a centred difference approximation of the first derivative ($U_m \approx u(1)$, if there are $m$ sub-intervals).
$$u'(1) = -u(1),\;\therefore u'(1) \approx \frac{U_{m+1} - U_{m-1}}{2h} = -U_m$$ $$U_{m+1} = U_{m-1} - 2hU_{m}$$ $$\therefore u''(1) \approx \frac{U_{m-1} - 2U_m + U_{m+1}}{h^2} = \frac{2U_{m-1} + (-2 - 2h)U_m}{h^2}$$
So, if I have 4 sub-intervals (5 points), where $U_0$ and $U_4$ correspond to $u(0)$ and $u(1)$ respectively, I should have the following matrix equation:
$$\left(\frac{-1}{h^2}\begin{bmatrix}-2 & 1 & 0 & 0\\ 1 & -2 & 1 & 0\\ 0 & 1 & -2 & 1\\ 0 & 0 & 2 & -2 - 2h \end{bmatrix} + \mathbf{I}\right)\begin{bmatrix}U_1 \\ U_2 \\ U_3 \\ U_4\end{bmatrix} = \begin{bmatrix}F_1 \\ F_2 \\ F_3 \\ F_4\end{bmatrix}$$
I then implemented the above approximation using some simple Python code (you should be able to run it too), and I have included comments liberally to aid in readability:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt

# Want to solve ODE: -u'' + u = x, u(0) = 0, u'(1) = -u(1)
# Exact solution: u(x) = exp(-x-1) - exp(x-1) + x

# a and b mark the interval over which we want to solve our ODE
a = 0
b = 1
num_subintervals = 4
num_points = num_subintervals + 1

# size of each subinterval
h = (b - a)/num_subintervals
X = np.linspace(a, b, num_points)

def f(x):
    return x

vectorized_f = np.vectorize(f)

# exact solution of -u'' + u = x, u(0) = 0, u'(1) = -u(1)
u = np.exp(-X-1) - np.exp(X-1) + X

# Implementing Dirichlet condition at u(0)
F = vectorized_f(X)
u_0 = 0
F[1] = F[1] - (u_0/(h**2))

# We have eliminated U_0 from our equations due to implementation of BCs
F = F[1:]

# eliminated U_0, but U_n is still in the equation, thus num_points-1 to solve for
N = num_points - 1

# -2 along main diagonal, 1 along +1 and -1 diagonals
D2 = np.diag((-2/(h**2))*np.ones(N), 0) + np.diag((1/(h**2))*np.ones(N-1), 1) + np.diag((1/(h**2))*np.ones(N-1), -1)

# D1(U_n) = (U_n+1 - U_n-1)/2h = -U_n
#   => U_n+1 = U_n-1 - 2h*U_n
# D2(U_n) = (U_n-1 - 2*U_n + U_n+1)/h**2 = (U_n-1 - 2*U_n - 2*h*U_n + U_n-1)/h**2
# D2(U_n) = (-2*U_n - 2*h*U_n + 2*U_n-1)/h**2
D2[N-1, N-2] = 2/(h**2)            # Modifying U_n-1 coefficient (index N-2) in equation for U''_n
D2[N-1, N-1] = (-2 - 2*h)/(h**2)   # Modifying U_n coefficient (index N-1) in equation for U''_n

I = np.ones((num_points-1, num_points-1))

# LHS
A = -1*D2 + I

U = np.concatenate(([u_0], np.linalg.solve(A, F)))
error = np.linalg.norm(U - u, ord=np.inf)
print "error: ", error

plt.plot(X, U, label="approximate")
plt.plot(X, u, label="exact")
plt.legend(loc="best")
plt.show()
I get the following image:
And the error does not decrease as I increase number of sub-intervals. So, clearly, I have an error, but I am not sure where. Could you help? |
I am trying to solve the following exercise from this book:
Show that the CLIQUE PROBLEM, parameterized by the solution size $k$, is fixed-parameter tractable (FPT) on $r$-regular graphs for every fixed integer $r$.
Here, the CLIQUE PROBLEM is: given an instance $(G, k)$, decide whether $G$ has a clique of size $k$ or not.
First of all, for an instance $(G, k)$, if $k > r+1$, then the answer is NO: since each vertex is connected to exactly $r$ other vertices, the maximum size of a clique is $r + 1$ (a vertex plus its $r$ neighbours). So, we can assume that $k \le r+1$.
Let $N(v)$ be the set of neighbours of $v$.
I thought of that simple algorithm
.... for each vertex $v \in V(G)$
........ check if for any subset $X \subset N(v)$, such that $|X| = k - 1$, $X \cup \{v\}$ is a clique.
Since there are only $\binom{r}{k-1}$ such subsets $X$ for each vertex and it takes time polynomial in $k$ to check whether $X \cup \{v\}$ is a clique, this algorithm is already FPT, with running time of the form $\left( k^{O(1)}\binom{r}{k-1} \right)n$.
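For what it's worth, here is a sketch of that algorithm in Python (names and graph representation are my own; the graph is a dict mapping each vertex to its neighbour set):

```python
from itertools import combinations

def has_clique(adj, k):
    """adj: dict vertex -> set of neighbours (e.g. an r-regular graph).
    For each v, checks every (k-1)-subset X of N(v).
    Assumes a non-empty graph for k <= 1."""
    if k <= 1:
        return True
    for v, nbrs in adj.items():
        for X in combinations(nbrs, k - 1):
            # X + {v} is a clique iff all pairs inside X are adjacent
            # (each x in X is already adjacent to v by construction).
            if all(b in adj[a] for a, b in combinations(X, 2)):
                return True
    return False

# 2-regular example: the 5-cycle has cliques of size 2 (edges) but none of size 3.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(has_clique(cycle5, 2))  # True
print(has_clique(cycle5, 3))  # False
```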
If everything is right, then I have solved the exercise. However, the next thing I have to do in the exercise is to show that this problem is also FPT with respect to the parameter $k + r$ (so $r$ is no longer seen as a constant), and the same algorithm works in this case. Since I was expecting to face a harder exercise in the case of $k + r$, I started to think my solution is not right.
So, what is wrong? |
$$S_{N(t)+1} = \sum_{i=1}^{N(t)+1}X_i$$
where $X_i \sim \text{Uniform}(0,1)$ are iid. Since $N(t)+1$ is a stopping time and $\mathbb{E}[N(t)+1] = m(t)+ 1$ where $m(t) = \mathbb{E}[N(t)]$, if $m(t) < \infty$, we can use Wald's equation to conclude that,
$$\mathbb{E}[S_{N(t)+1}] = \mathbb{E}[X_1]\mathbb{E}[N(t)+1] = \frac{1}{2}(m(t)+1)$$
The only thing left is to compute $m(t)$ and simultaneously show that $m(t) < \infty$. It is easy to show that,
$$m(t) = \sum_{n=1}^{\infty}F_{n}(t)$$
where $F_{n}(t)$ is the $n$-fold convolution of the cdf of the uniform random variable with itself. It follows from the above,
$$m(t) = F(t) + (F*\sum_{k=1}^{\infty}F_{k})(t) = F(t) + (F*m)(t) = F(t) + \int_{0}^{t}m(t-x)dF(x)$$
Now, for $0 \le t \le 1$, use $F(t)=t$ and $dF(x) = 1\cdot dx$ to show that,
$$m(t) = t + \int_{0}^{t}m(t-x)dx = t+\int_{0}^{t}m(x)dx \implies \frac{dm(t)}{dt} = 1+m(t) \implies m(t) = e^{t}-1$$
Therefore, $m(t) < \infty$ when $t < \infty$ and $\mathbb{E}[S_{N(t)+1}] = e^t/2$.
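The closed form can be sanity-checked by a quick Monte Carlo sketch (valid for $t \le 1$, where $m(t) = e^t - 1$ holds):

```python
import random

def first_passage_sum(t, rng):
    """S_{N(t)+1}: keep adding Uniform(0,1) terms until the sum first exceeds t."""
    s = 0.0
    while s <= t:
        s += rng.random()
    return s

rng = random.Random(0)
t = 1.0
n_trials = 200000
mean = sum(first_passage_sum(t, rng) for _ in range(n_trials)) / n_trials
print(mean)  # ~ e/2 = 1.3591...
```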
P.S. Please comment if something is not clear. |
T KUNDU
Articles written in Pramana – Journal of Physics
Volume 50 Issue 5 May 1998 pp 419-432 Research Articles
Two-photon optogalvanic transitions in Ar glow discharge with Nd:YAG laser pumped dye laser excitation in the frequency range 13520–16520 cm−1 have been studied using linear and circular polarization. The intensities of two-photon optogalvanic transitions are very sensitive to changes in the incident laser power, which is not the case with one-photon transitions. Intensity ratio for circular and linear polarized light for two-photon transitions 6
Volume 87 Issue 4 October 2016 Article ID 0056 Regular
We have synthesized, characterized and studied the third-order nonlinear optical properties of two different nanostructures of polydiacetylene (PDA), PDA nanocrystals and PDA nanovesicles, along with silver nanoparticles-decorated PDA nanovesicles. The second molecular hyperpolarizability $\gamma (−\omega; \omega,−\omega,\omega$) of the samples has been investigated by antiresonant ring interferometric nonlinear spectroscopic (ARINS) technique using femtosecond mode-locked Ti:sapphire laser in the spectral range of 720–820 nm. The observed spectral dispersion of $\gamma$ has been explained in the framework of three-essential states model and a correlation between the electronic structure and optical nonlinearity of the samples has been established. The energy of two-photon state, transition dipole moments and linewidth of the transitions have been estimated. We have observed that the nonlinear optical properties of PDA nanocrystals and nanovesicles are different because of the influence of chain coupling effects facilitated by the chain packing geometry of the monomers. On the other hand, our investigation reveals that the spectral dispersion characteristic of $\gamma$ for silver nanoparticles-coated PDA nanovesicles is qualitatively similar to that observed for the uncoated PDA nanovesicles but bears no resemblance to that observed in silver nanoparticles. The presence of silver nanoparticles increases the $\gamma$ values of the coated nanovesicles slightly as compared to that of the uncoated nanovesicles, suggesting a definite but weak coupling between the free electrons of the metal nanoparticles and $\pi$ electrons of the polymer in the composite system. Our comparative studies show that the arrangement of polymer chains in polydiacetylene nanocrystals is more favourable for higher nonlinearity.
Having come across this discussion I'm raising the question on the back-transformed confidence intervals conventions.
According to this article the nominal coverage back-transformed CI for the mean of a log-normal random variable is:
$\ UCL(X)= \exp\left(Y+\frac{\text{var}(Y)}{2}+z\sqrt{\frac{\text{var}(Y)}{n}+\frac{\text{var}(Y)^2}{2(n-1)}}\right)$ $\ LCL(X)= \exp\left(Y+\frac{\text{var}(Y)}{2}-z\sqrt{\frac{\text{var}(Y)}{n}+\frac{\text{var}(Y)^2}{2(n-1)}}\right)$
(and not the naive $\exp(Y+z\sqrt{\text{var}(Y)})$)
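For concreteness, the interval above can be coded directly (a sketch assuming that formula; `logx` is the log-transformed sample $Y = \log X$ and $z = 1.96$ gives nominal 95% coverage):

```python
import math

def lognormal_mean_ci(logx, z=1.96):
    """Back-transformed (Cox-style) CI for the mean of a log-normal variable.
    logx: the log-transformed sample Y = log(X)."""
    n = len(logx)
    ybar = sum(logx) / n
    s2 = sum((y - ybar) ** 2 for y in logx) / (n - 1)  # sample variance of Y
    half = z * math.sqrt(s2 / n + s2 ** 2 / (2 * (n - 1)))
    centre = ybar + s2 / 2
    return math.exp(centre - half), math.exp(centre + half)

lcl, ucl = lognormal_mean_ci([0.1, 0.5, 0.3, 0.9, 0.4, 0.2])
print(lcl, ucl)  # ~1.23, ~1.96
```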
Now, what are such CIs for the following transformations:
$\sqrt{x}$ and $x^{1/3}$
$\arcsin(\sqrt{x})$
$\log\left(\frac{x}{1-x}\right)$
$1/x$
How about the tolerance interval for the random variable itself (I mean a single sample value randomly drawn from the population)? Is there the same issue with the back-transformed intervals, or will they have the nominal coverage? |
Global solvability in a two-dimensional self-consistent chemotaxis-Navier-Stokes system
School of Science, Xihua University, Chengdu 610039, China
$\left\{ \begin{array}{ll} n_t + u \cdot \nabla n = \Delta n^m - \nabla \cdot (n\nabla c) + \nabla \cdot (n\nabla \phi), & (x, t) \in \Omega \times (0, T), \\ c_t + u \cdot \nabla c = \Delta c - nc, & (x, t) \in \Omega \times (0, T), \\ u_t + (u \cdot \nabla )u + \nabla P = \Delta u - n\nabla \phi + n\nabla c, & (x, t) \in \Omega \times (0, T), \\ \nabla \cdot u = 0, & (x, t) \in \Omega \times (0, T), \end{array} \right.$
where $Ω\subset \mathbb{R}^2$ and $m>1$. This improves a previous result (Discrete Cont. Dyn. Syst. A 28 (2010)) which asserts global existence of weak solutions under the constraint $m∈(\frac{3}{2}, 2]$.
Mathematics Subject Classification: 35K55, 35Q92, 35Q35, 92C17.
Citation: Yulan Wang. Global solvability in a two-dimensional self-consistent chemotaxis-Navier-Stokes system. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020019
This was really driving me nuts. The so-called "barometric formula". Here's what it is all about: an ideal gas in a container of a given volume would fill that volume at constant pressure $p$ and temperature $T$ so that the ideal gas law applies:
\[pV=RnT,\]
where $T$ is the temperature, $n$ is the number of gas molecules (or number of moles, it's just a matter of picking a consistent set of units) and $R$ is the universal gas constant.
So what happens if you put this container in a homogeneous gravitational field characterized by the acceleration constant $g$? Something has to change.
Assuming that the temperature of the gas remains constant, the difference in pressure is simply the difference in the weight of a column of gas, which is expressed by the so-called aerostatic equation:
\[dp=-g\rho~dz,\]
where $\rho=\rho(z)$ is the density of the gas at height $z$. The infinitesimal version of the ideal gas law is $p~dV=RT~dn$ but $dn=(dn/dV)dV=(\mu dn/dV)/\mu~dV=\rho/\mu~dV$ where $\mu$ is the particle (or molar) mass, so
\begin{align}p~dV&=\frac{RT}{\mu}\rho~dV,\\\rho&=\frac{\mu p}{RT}.\end{align}
Putting this expression into the aerostatic equation gives
\begin{align}dp&=-\frac{g\mu p}{RT}dz,~~~~{\rm or}\\\frac{1}{p}dp&=-\frac{g\mu}{RT}dz.\end{align}
This last equation can be trivially integrated, producing the barometric formula:
\[p=p_0e^{-g\mu z/RT}.\]
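As a quick numerical check (with illustrative constants roughly appropriate for air at 288 K), integrating the differential equation $dp = -(g\mu p/RT)\,dz$ step by step reproduces the exponential profile:

```python
import math

g, mu, R, T = 9.81, 0.029, 8.314, 288.0   # SI units; mu = molar mass of air
p0, H = 101325.0, 5000.0                  # surface pressure (Pa), column height (m)

# Forward-Euler integration of dp/dz = -(g*mu/(R*T)) * p
n = 100000
dz = H / n
p = p0
for _ in range(n):
    p += -(g * mu / (R * T)) * p * dz

exact = p0 * math.exp(-g * mu * H / (R * T))
print(p, exact)  # both ~5.6e4 Pa; the two agree closely
```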
This looks nice, but why should we believe that $T$ remains constant with altitude, in this case? Indeed, Feynman himself says, when he introduces the barometric formula, that it really doesn't apply to our atmosphere since the temperature of air varies with altitude. There are at least two intuitive reasons to think that temperature would vary with altitude even in an equilibrium. First, if one imagines a gas as a bunch of little rubber balls that are being shaken, it stands to reason that only the faster balls reach beyond a certain altitude; so while high up, there will be fewer balls, they'd also tend to be "hotter" (keeping in mind that temperature is just the average kinetic energy of a particle.) Another way of looking at it is to think about a house in the summer: it's hot in the attic, cold in the basement, and there really is no reason to believe that, if the house were wrapped with a perfect insulating material, the temperatures would necessarily equalize. Or would they?
So let me rephrase the question. Instead of asking about the change in pressure in an isothermal column of gas, let me ask this: What is the equilibrium state of a column of gas in a homogeneous gravitational field? Equilibrium in this case need not mean isothermal. It may mean isothermal, but that has to be proven. What equilibrium means is that the system is in a state of maximum entropy.
How can we find the equilibrium state? We have two differential equations. One is the ideal gas law, the other is the aerostatic equation, but now both need to be modified to account for the variability of $T$. Two equations are not sufficient to solve for three unknown functions ($T$, $p$ and $\rho$), but we also have a third condition: in the equilibrium state, entropy is maximal.
Using a fundamental equation from thermodynamics,
\[T~dS=dU+p~dV,\]
we can express the infinitesimal change in entropy, $dS$ (here, $dU$ is the infinitesimal potential energy, $dV$ is the volume element, both of which are calculable from $\rho$ and $z$). The condition for equilibrium is that the integral $\int~dS$ be maximal, which is a problem from the calculus of variations. Unfortunately, I was not able to obtain a solution this way. To get to this point, I'd have to be able to express $p$, $\rho$ and $T$ using a function that has appropriate boundary conditions (e.g., $m(z)$ representing the mass of gas in the column up to a height $z$, so that $m(0)=0$ and $m(H)=M$ is the total mass of gas) so that I could have an equation like
\[\int~dS=\int L(m,dm/dz,z)~dz,\]
from which an Euler-Lagrange equation can be obtained and solved for $m$. Well, I couldn't do it, and it is possible that no such nice, clean solution exists.
Fortunately, shortly before I was ready to go bonkers, I came across a paper$^1$ that addressed precisely this problem, i.e., why is $T={\rm const.}$?
Here's the essence of the argument: The particles of the gas would indeed be bouncing about, and only those with more kinetic energy will bounce up high enough to get to the upper reaches of the column. But,
by the time they get up there, they'll have lost much of their kinetic energy! So whatever particles get all the way up there will have the same velocity distribution (hence, the same temperature) as the particles at ground level. So, the barometric formula is, in fact, the correct formula, because an equilibrium system will be isothermal.
Unlike my failed attempt, which was to use the axioms of thermodynamics, this reasoning starts with statistical physics. The starting point is the idea that the number of particles located at altitude between $z_1$ and $z_1+dz_1$, and moving at velocities $v_1$ and $v_1+dv_1$ can be expressed using a distribution function $f(z_1,v_1,t_1)$:
\[N=f(z_1,v_1,t_1)~dz_1dv_1.\]
Let us denote the area element $dz_1dv_1$ as $d\tau_1$.
As the system evolves from $t_1$ to $t_2$, the particles move from $z_1$ to $z_2$ and their velocities change from $v_1$ to $v_2$, in accordance with the dynamical equations of the system. However, the particle number remains unchanged, therefore
\[f(z_1,v_1,t_1)d\tau_1=f(z_2,v_2,t_2)d\tau_2.\]
If we could somehow show that $d\tau_1=d\tau_2$, that would mean that the distribution function is independent of time.
Let us take the special case of the homogeneous gravitational field. Initially, the area element is located in the $[z,v]$ phase space at coordinates $[z_1,v_1]$, $[z_1+dz_1,v_1]$, $[z_1,v_1+dv_1]$, $[z_1+dz_1,v_1+dv_1]$. As the system evolves and $t$ time elapses ($t=t_2-t_1$), the four corners of this rectangle move in accordance with the equations
\begin{align}z_2&=z_1+v_1t+\frac{1}{2}gt^2,\\v_2&=v_1+gt.\end{align}
Therefore, the four corners of the area element are mapped as follows (see also Fig. 1):
\begin{align}[z_1,v_1]&\rightarrow[z_2,v_2],\\ [z_1+dz_1,v_1]&\rightarrow[z_2+dz_1,v_2],\\ [z_1,v_1+dv_1]&\rightarrow[z_2+dv_1t,v_2+dv_1],\\ [z_1+dz_1,v_1+dv_1]&\rightarrow[z_2+dz_1+dv_1t,v_2+dv_1].\end{align}
These corners define a parallelogram with area $dz_1dv_1$, hence
\[d\tau_2=dz_1dv_1=d\tau_1,\]
which means that
\[f(z_1,v_1,t_1)=f(z_2,v_2,t_2).\]
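The area-preservation claim is easy to check numerically. The following sketch (my own illustration, with arbitrary values for the initial point, the rectangle size, and the elapsed time) maps three corners of the initial rectangle under the free-fall equations from above and compares the parallelogram area with the initial area:

```python
# Numerically check that the free-fall phase-space map preserves area.
# The map, with the same sign convention as in the text:
#   z2 = z1 + v1*t + g*t**2/2,   v2 = v1 + g*t

g = 9.81

def evolve(z, v, t):
    """Advance a phase-space point (z, v) by time t under constant g."""
    return z + v * t + 0.5 * g * t**2, v + g * t

def parallelogram_area(p0, p1, p2):
    """Area spanned by the vectors p1-p0 and p2-p0 (2D cross product)."""
    return abs((p1[0] - p0[0]) * (p2[1] - p0[1])
               - (p2[0] - p0[0]) * (p1[1] - p0[1]))

z1, v1, dz, dv, t = 2.0, -3.0, 0.01, 0.02, 1.7   # arbitrary illustrative values

# Map three corners of the initial rectangle and compare areas.
corners = [evolve(z1, v1, t),
           evolve(z1 + dz, v1, t),
           evolve(z1, v1 + dv, t)]
area_before = dz * dv
area_after = parallelogram_area(*corners)
print(area_before, area_after)  # the two areas agree up to rounding
```

The agreement is exact in exact arithmetic: the mapped side vectors are $(dz_1,0)$ and $(dv_1 t, dv_1)$, whose cross product is $dz_1\,dv_1$, exactly as in the text.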
Now let us make an assumption, a very reasonable one on thermodynamical grounds: namely that at some point $t_1$ in time, at some height $z_1$, the distribution function is the Maxwell-distribution:
\[f(z_1,v_1,t_1)=Ce^{-mv_1^2/2RT}.\]
Here, $C$ is a constant, $T$ is the temperature, and $m$ is the mass of a particle.
If the particles move in a collisionless manner, energy for each particle is conserved. The particle energy is a sum of its potential and kinetic energies:
\[E=mgz+\frac{1}{2}mv^2.\]
Energy conservation means that
\[mgz_1+\frac{1}{2}mv_1^2=mgz_2+\frac{1}{2}mv_2^2.\]
From this,
\[\frac{1}{2}mv_1^2=mgz_2-mgz_1+\frac{1}{2}mv_2^2.\]
Putting this expression into the Maxwell distribution equation, we get
\[f(z_1,v_1,t_1)=Ce^{-mg(z_2-z_1)/RT}e^{-mv_2^2/2RT}.\]
As per the equality established earlier, this also means that
\[f(z_2,v_2,t_2)=Ce^{-mg(z_2-z_1)/RT}e^{-mv_2^2/2RT}.\]
The first part of the right-hand side is a function of positions only. The velocity-dependent part of the distribution (hence, the relative probabilities of different velocities) is the same as before. In other words, the temperature is the same, regardless of the altitude; only the density of the medium varies with height.
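The substitution above can also be checked numerically. In the sketch below (my own illustration; the values of $m$, $g$, $R$, $T$, and the trajectory are arbitrary), a rising particle decelerates, and energy conservation makes the Maxwell factor at $(z_1,v_1)$ equal to a purely $z$-dependent weight times the Maxwell factor at $v_2$:

```python
import math

# Sanity check: for a collisionless particle in a uniform field,
# E = m*g*z + m*v**2/2 is conserved, so
#   exp(-m*v1**2/(2*R*T)) == exp(-m*g*(z2-z1)/(R*T)) * exp(-m*v2**2/(2*R*T)).
# All numeric values below are assumed, purely illustrative.

m, g, R, T = 0.029, 9.81, 8.314, 300.0

z1, v1, t = 0.0, 40.0, 2.5
z2 = z1 + v1 * t - 0.5 * g * t**2   # rising particle decelerates
v2 = v1 - g * t

lhs = math.exp(-m * v1**2 / (2 * R * T))
rhs = math.exp(-m * g * (z2 - z1) / (R * T)) * math.exp(-m * v2**2 / (2 * R * T))
print(lhs, rhs)  # identical up to rounding
```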
One obvious question is whether this can be generalized to an arbitrary potential function, not just a homogeneous potential. A key step in the reasoning was the proof that the area element $d\tau$ remains unchanged over time. We proved this in the special case of a homogeneous field, but that method doesn't work in the general case.
There is, however, a very powerful theorem in mechanics that is exactly about this issue: Liouville's theorem, which states that a volume element in phase space is a constant of the motion, hence it remains unchanged as we advance time and move along a particle's trajectory. This is exactly what we need: $d\tau_1=d\tau_2$.
In the general case, our distribution function will therefore take the form,
\[f(z_2,v_2,t_2)=Ce^{-[V(z_2)-V(z_1)]/RT}e^{-mv_2^2/2RT},\]
where $V$ is a potential function. So long as $V$ is a function of only the coordinates, not velocities or time, the result stands: a column of ideal gas in equilibrium will be isothermal.
And, although we only considered the one-dimensional case here, the reasoning can be easily extended to more dimensions, and fields such as the central gravitational field of a planet like the Earth.
So why is the Earth's atmosphere not isothermal, then? Mainly because it is not an equilibrium system! It is constantly heated by the Earth itself from below, it exchanges heat with the oceans, it is heated by the Sun during the day, cooled radiatively during the night... a very complex system indeed, which is why global warming, for instance, remains such a contentious issue.
[1] Charles A. Coombes and Hans Laue, "A paradox concerning the temperature distribution of a gas in a gravitational field", Am. J. Phys. 53(3), March 1985.
The role of holomorphic functions (and their generalizations in the form of holomorphic sections of vector bundles) in physics is invaluable. Please see for example the following review by B.C. Hall, discussing holomorphic methods in mathematical physics, especially in quantum mechanics.
It should be emphasized that these theories cover important parts of quantum theory but they are not exclusive. Still more general function spaces are needed to describe other systems in quantum mechanics.
The Hilbert spaces describing the states of many very important systems appearing in almost all areas of physics have a holomorphic realization as (reproducing kernel) Hilbert spaces of holomorphic functions (or sections). This is true for the harmonic oscillator, the electron spin, the hydrogen atom and even in very advanced applications like the Hilbert spaces corresponding to Chern-Simons theories.
The intuition behind the major role played by holomorphic functions in quantum theory is the following:
Imagine $\mathbb{R}^2$ to be the phase space of a particle moving on a line, $(x, p) \in \mathbb{R}^2$, where $x$ is the position and $p$ is the momentum. Since $\mathbb{R}^2 \equiv \mathbb{C}$ is a complex manifold, we can use the "shorthand":
$$z = x+ ip$$
Consider the Hilbert space of functions on $ \mathbb{C}$ corresponding to the Gaussian inner product:
$$(\psi, \phi) = \int \overline{\psi(z)} \phi(z) e^{-\bar{z}z} \, d\mathrm{Re}\,z \, d\mathrm{Im}\,z$$
Taking the whole Hilbert space would allow construction of wave functions arbitrarily concentrated around any point $(x_0, p_0)$ in phase space; this is in contradiction with the Heisenberg uncertainty principle. On the other hand, restricting the Hilbert space to holomorphic functions
$$ \frac{\partial \psi}{\partial \bar{z}} = 0$$
will restrict the "width" of the functions due to the square-integrability condition.
This space (of holomorphic functions) is the Hilbert space of the harmonic oscillator. Another way to look at the holomorphicity restriction is to notice that without this restriction the full Hilbert space corresponds to an infinite number of copies of the harmonic oscillator, each closed under the action of the position and momentum operators.
The meaning of the restriction in this respect is that the restricted Hilbert space carries an irreducible representation of the observable algebra.
This is a basic property required by Dirac in his axiomatic formulation of quantum theory. An irreducible representation which describes a single system is the right choice because we started classically from a single system.
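As a concrete sanity check of the Gaussian inner product above, one can verify numerically that the holomorphic monomials $z^n$ (which, up to normalization, are the harmonic-oscillator number states in this realization) are orthogonal, with squared norm $\pi\, n!$. The sketch below is my own illustration; the grid extent and resolution are ad hoc choices:

```python
import numpy as np

# Approximate (psi, phi) = ∫ conj(psi) phi exp(-|z|^2) dRe(z) dIm(z)
# on a finite grid (the Gaussian weight makes the truncation harmless).
L, N = 6.0, 601
u = np.linspace(-L, L, N)
dx = u[1] - u[0]
X, Y = np.meshgrid(u, u)
Z = X + 1j * Y
w = np.exp(-np.abs(Z)**2) * dx * dx    # Gaussian weight per grid cell

def inner(m, n):
    """Gaussian inner product of the monomials z^m and z^n."""
    return np.sum(np.conj(Z**m) * Z**n * w)

# Monomials of different degree are orthogonal; the squared norm of z^n
# is pi * n!  (here n = 2, so pi * 2! = 2*pi).
print(abs(inner(1, 2)))    # ~0
print(inner(2, 2).real)    # ~2*pi
```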
Here's a neat one from optimization: the Alternating Direction Method of Multipliers (ADMM) algorithm.
Given an uncoupled and convex objective function of two variables (the variables themselves could be vectors) and a linear constraint coupling the two variables:
$$\min f_1(x_1) + f_2(x_2) \quad \text{s.t.} \quad A_1 x_1 + A_2 x_2 = b$$
The augmented Lagrangian function for this optimization problem would then be $$ L_{\rho}(x_1, x_2, \lambda) = f_1(x_1) + f_2(x_2) + \lambda^T (A_1 x_1 + A_2 x_2 - b) + \frac{\rho}{2}\| A_1 x_1 + A_2 x_2 - b \|_2^2 $$
The ADMM algorithm roughly works by performing a "Gauss-Seidel" splitting on the augmented Lagrangian function for this optimization problem by minimizing $L_{\rho}(x_1, x_2, \lambda)$ first with respect to $x_1$ (while $x_2, \lambda$ remain fixed), then by minimizing $L_{\rho}(x_1, x_2, \lambda)$ with respect to $x_2$ (while $x_1, \lambda$ remain fixed), then by updating $\lambda$. This cycle goes on until a stopping criterion is reached.
(Note: some researchers such as Eckstein discard the Gauss-Seidel splitting view in favor of proximal operators; for example see http://rutcor.rutgers.edu/pub/rrr/reports2012/32_2012.pdf )
For convex problems, this algorithm has been proven to converge for two blocks of variables. This is not the case for three blocks. For example, consider the optimization problem
$$\min f_1(x_1) + f_2(x_2) + f_3(x_3) \quad \text{s.t.} \quad A_1 x_1 + A_2 x_2 + A_3 x_3 = b$$
Even if all the $f_i$ are convex, the ADMM-like approach (minimizing the augmented Lagrangian with respect to each variable $x_i$ in turn, then updating the dual variable $\lambda$) is NOT guaranteed to converge, as was shown in this paper:
https://web.stanford.edu/~yyye/ADMM-final.pdf |
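The two-block cycle described above can be sketched in a few lines on a toy problem of my own choosing: $\min \tfrac12(x_1-a)^2 + \tfrac12(x_2-b)^2$ subject to $x_1 - x_2 = 0$ (so $A_1 = 1$, $A_2 = -1$, $b_{\text{constraint}} = 0$), whose solution is $x_1 = x_2 = (a+b)/2$. Both subproblems have closed-form minimizers here, so each "minimize $L_\rho$ over one block" step is a single assignment:

```python
# Two-block ADMM on a toy consensus problem (illustrative sketch):
#   min 0.5*(x1 - a)**2 + 0.5*(x2 - b)**2   s.t.  x1 - x2 = 0
# The minimizer is x1 = x2 = (a + b) / 2.
a, b = 1.0, 3.0
rho = 1.0
x1 = x2 = lam = 0.0

for _ in range(200):
    # Minimize L_rho over x1 with (x2, lam) fixed (closed form here):
    x1 = (a - lam + rho * x2) / (1 + rho)
    # Minimize L_rho over x2 with (x1, lam) fixed:
    x2 = (b + lam + rho * x1) / (1 + rho)
    # Dual ascent step on the multiplier:
    lam = lam + rho * (x1 - x2)

print(x1, x2)  # both approach (a + b) / 2 = 2.0
```

In real applications the per-block minimizations are of course not scalar assignments, but the structure of the cycle is exactly this.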
Question 1
Simplify $(3\tfrac{1}{2}+4\tfrac{1}{3})\div (\tfrac{5}{6}-\tfrac{2}{3})$
Question 2
In a particular year, the exchange rate of the Naira (N) varies directly with the dollar. If N1,222 is equivalent to 8 dollars, find the Naira equivalent to 36 dollars
Question 3
If $\log 2=x,\log 3=y\text{ and }\log 7=z$, find, in terms of
Question 4
Arrange the following numbers in descending order of magnitude 22
Question 5
Find the value to which N3000.00 will amount to in 5 years at 6% per annum simple interest
Question 6
Express the square root of 0.000144 in the standard form
Question 7
Two sets are disjoint if
Question 8
Solve the following equations $\begin{align} & 2x+3y=7 \\ & x+5y=0 \\\end{align}$
Question 9
If $\tfrac{3}{2x}-\tfrac{2}{3x}=4$ Solve for
Question 10
Find the coordinates of
Question 11
Find the value of
Question 12
Five bottles of Fanta and two packets of biscuits cost Gh¢ 6.00. Three bottles of Fanta and four packets of biscuits cost Gh¢ 5.00. Find the cost of a bottle of Fanta.
Question 13
If ${{(2x+3)}^{3}}=125$, find the value of
Question 14
Factorize $({{x}^{2}}-2x-3xy+6y)$ completely
Question 15
Which of the following illustrates the solution of $\left( \frac{3x+4}{2} \right)<x+3$
Question 16
The diameter of a bicycle wheel is 42cm. If the wheel makes 16 complete revolutions, what will be the total distance covered by the wheel? (Take $\pi =\tfrac{22}{7}$)
Question 17
Calculate the perpendicular distance between the parallel sides
Question 18
Calculate; correct to the nearest cm
Question 19
In the diagram, $\left| TP \right|=12cm$ and it is 6cm from
Question 20
The interior angles of a pentagon are (2
Question 21
In the diagram,
Question 22
If
Question 23
Find the value of
Question 24
In the diagram $\left| SQ \right|=4cm,\text{ }\left| PT \right|=7cm\text{, }\left| TR \right|=5cm$ and $ST\parallel QR$ If $\left| SP \right|=xcm$. Find the value of
Question 25
Given that $\sin {{60}^{\circ }}=\tfrac{\sqrt{3}}{2}\text{ and }\cos {{60}^{\circ }}=\tfrac{1}{2}$ evaluate $\frac{1-\sin {{60}^{\circ }}}{1+\cos {{60}^{\circ }}}$
Question 26
Find the bearing of
Question 27
If $\left| XY \right|=50m$ how far east of
Question 28
In an examination, Kofi scored
Question 29
What is the median of the following scores: 22, 41, 35, 63, 82, 74?
Question 30
Express 2.7864 × 10
Question 31
If ${{9}^{2x}}=\tfrac{1}{3}({{27}^{x}})$ find
Question 32
If 85% of
Question 33
The capacity of a water tank is 1,800 litres. If the tank is in the form of a cuboid with base 600cm by 150cm, find the height of the tank
Question 34
An arc of a circle subtends an angle of 60
Question 35
Solve $\frac{2x+1}{6}-\frac{3x-1}{4}=0$
Question 36
If
Question 37
If
Question 38
In the diagram, angle 200
Question 39
Each of the interior angles of a regular polygon is 140
Question 40
A bucket holds 10 litres of water. How many buckets of water will fill a reservoir of size 8m × 7m × 5m? (1 litre = 1000cm
Measurement of the inclusive jet cross-section in proton-proton collisions at \( \sqrt{s}=7 \) TeV using 4.5 fb−1 of data with the ATLAS detector
(Springer, 2015-02-24)
The inclusive jet cross-section is measured in proton-proton collisions at a centre-of-mass energy of 7 TeV using a data set corresponding to an integrated luminosity of 4.5 fb−1 collected with the ATLAS detector at the ...
Measurement of the production and lepton charge asymmetry of $\textit{W}$ bosons in Pb+Pb collisions at $\sqrt{s_{\mathrm{\mathbf{NN}}}}=$2.76 TeV with the ATLAS detector
(Springer, 2015-01-22)
A measurement of $\textit{W}$ boson production in lead-lead collisions at $\sqrt{s_{\mathrm{NN}}}=$2.76 TeV is presented. It is based on the analysis of data collected with the ATLAS detector at the LHC in 2011 corresponding ...
Search for heavy long-lived multi-charged particles in $pp$ collisions at $\sqrt{s}$=8 TeV using the ATLAS detector
(Springer, 2015-08)
A search for heavy long-lived multi-charged particles is performed using the ATLAS detector at the LHC. Data collected in 2012 at $\sqrt{s}$=8 TeV from $pp$ collisions corresponding to an integrated luminosity of $20.3$ ...
Performance of the ATLAS muon trigger in pp collisions at $\sqrt{s}=$8 TeV
(Springer, 2015-03-13)
The performance of the ATLAS muon trigger system has been evaluated with proton--proton collision data collected in 2012 at the Large Hadron Collider at a centre-of-mass energy of 8 TeV. The performance was primarily ...
Search for invisible particles produced in association with single-top-quarks in proton-proton collisions at $\sqrt{s}$ = 8 TeV with the ATLAS detector
(Springer, 2015-02-18)
A search for the production of single-top-quarks in association with missing energy is performed in proton--proton collisions at a centre-of-mass energy of $\sqrt{s}$ = 8 TeV with the ATLAS experiment at the Large Hadron ...
Search for invisible decays of the Higgs boson produced in association with a hadronically decaying vector boson in $pp$ collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector
(Springer, 2015-07)
A search for Higgs boson decays to invisible particles is performed using 20.3 fb$^{-1}$ of $pp$ collision data at a centre-of-mass energy of 8 TeV recorded by the ATLAS detector at the Large Hadron Collider. The process ...
Search for Higgs Boson Pair Production in the $\gamma\gamma b\bar{b}$ Final State using $pp$ Collision Data at $\sqrt{s}=8$TeV from the ATLAS Detector
(American Physical Society, 2015-02)
Searches are performed for resonant and non-resonant Higgs boson pair production in the $hh \rightarrow \gamma\gamma b \overline b$ final state using 20 fb$^{-1}$ of proton--proton collisions at a center-of-mass energy of ...
Measurement of the top quark mass in the $t\bar t \to {\rm lepton+jets}$ and $t\bar t \to {\rm dilepton}$ channels using $\sqrt{s}=7$ TeV ATLAS data
(Springer, 2015-07)
The top quark mass was measured in the channels $t\bar{t} \to \mathrm{lepton+jets}$ and $t\bar{t} \to \mathrm{dilepton}$ (lepton=$e, \mu$) based on ATLAS data recorded in 2011. The data were taken at the LHC with a ...
Constraints on the off-shell Higgs boson signal strength in the high-mass $ZZ$ and $WW$ final states with the ATLAS detector
(Springer, 2015-07)
Measurements of the $ZZ$ and $WW$ final states in the mass range above the $2m_Z$ and $2m_W$ thresholds provide a unique opportunity to measure the off-shell coupling strength of the Higgs boson. This paper presents a ...
Search for heavy lepton resonances decaying to a $Z$ boson and a lepton in $pp$ collisions at $\sqrt{s}=8$ TeV with the ATLAS detector
(Springer, 2015-09)
A search for heavy leptons decaying to a $Z$ boson and an electron or a muon is presented. The search is based on $pp$ collision data taken at $\sqrt{s}=8$ TeV by the ATLAS experiment at the CERN Large Hadron Collider, ... |
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ... |
OK, let me try to expand on Chaum and van Antwerpen's proofs a bit. Please let me know if there's still something that doesn't convince you.
I'll start by rephrasing Chaum and van Antwerpen's theorem 1 using your notation:
Theorem 1: Even with infinite computing power, the probability that Sally can verify an invalid signature $s \ne m^k \bmod p$ is at most $1/q$.
To verify $s$, Victor chooses two random numbers $a$ and $b$ from $\mathbb Z_q = \{0, 1, \dotsc, q-1\}$ (see note below). However, the important bit is that Sally doesn't know $a$ and $b$; she only sees $c = s^a y^b \in G$.
Now, it's not hard to see that every value of $c$ seen by Sally corresponds to $q$ possible $(a,b)$ pairs: in particular, for any $a$ and $c$, we may solve for $b$ as follows:
$$\begin{aligned}s^a y^b &= c \\y^b &= s^{-a} c \\b &= \operatorname{dlog}_y(s^{-a} c)\\\end{aligned}$$
(This works because, in a cyclic group of prime order, the discrete logarithm is unique for any base not equal to 1; equivalently, $b \mapsto y^b$ is a bijection, and in fact a group homomorphism, from $\mathbb Z_q^+$ to $G$ for any $y \in G$, $y \ne 1$. The case $y = 1$ cannot occur, since $k \ne 0$ and $g \ne 1$ by construction.)
Now, assume that $s \in G$ and $m \ne 1$ (which Chaum and van Antwerpen's paper mandates). Then there exists some $x \in \mathbb Z_q$ such that $s = m^x$. If $x = k$, then the signature is valid, so an invalid signature must have $x \ne k$.
I'll now show that, if $x \ne k$, then for each of the $q$ different $(a,b)$ pairs corresponding to the $c$ sent by Victor, Sally would have to reply with a different $r$ to have it accepted. Like Chaum and van Antwerpen, I will do so using proof by contradiction. Specifically, assume that there exist two pairs $(a,b)$ and $(a^*,b^*)$ corresponding to the same $c$ and the same $r$. Then:
$$\begin{aligned}m^{xa} g^{kb} =& c = m^{xa^*} g^{kb^*} &&&m^{a} g^{b} =& r = m^{a^*} g^{b^*} \\ m^{xa} m^{-xa^*} =& g^{kb^*} g^{-kb} & \text{and} &&m^{a} m^{-a^*} =& g^{b^*} g^{-b} \\ m^{x(a-a^*)} =& g^{k(b^*-b)} &&&m^{a-a^*} =& g^{b^*-b} \\ \end{aligned}$$
Taking the base-$g$ discrete logarithm and letting $\mu = \operatorname{dlog}_g(m)$, we get $\mu x(a-a^*) = k(b^*-b)$ and $\mu (a-a^*) = b^*-b$. Substituting the latter into the former, we get $\mu x(a-a^*) = k \mu (a-a^*)$, and thus $x = k$. But this is only possible if the signature is, in fact, valid!
Thus, we've shown that, if $s \ne m^k$, any value of $r$ that Sally returns can only be valid for one $(a,b)$ pair. But since there are $q$ possible such pairs, each of which is equally likely, Sally has only a $1/q$ chance of guessing the correct one.
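The confirmation protocol itself is easy to exercise with toy parameters. The sketch below is my own illustration with $p=23$, $q=11$ (far too small for any security, purely for tracing the algebra): Victor sends $c = s^a y^b$, Sally replies $r = c^{k^{-1} \bmod q}$, and Victor accepts iff $r = m^a g^b$; a forged $s \ne m^k$ fails for the chosen challenge pair:

```python
# Toy parameters (NOT secure, purely illustrative): p = 2q + 1 with q prime,
# g of order q in Z_p^*.
p, q, g = 23, 11, 2          # 2^11 = 2048 ≡ 1 (mod 23), so g has order 11

k = 3                        # Sally's private key
y = pow(g, k, p)             # public key y = g^k
m = pow(g, 2, p)             # a message in G \ {1}
s_valid = pow(m, k, p)       # valid signature: s = m^k
s_forged = pow(g, 5, p)      # some other element of G, s != m^k

def confirm(s, a, b):
    """Run one round of the confirmation protocol for challenge pair (a, b)."""
    c = (pow(s, a, p) * pow(y, b, p)) % p       # Victor's challenge
    r = pow(c, pow(k, -1, q), p)                # Sally's response: c^(1/k)
    return r == (pow(m, a, p) * pow(g, b, p)) % p

a, b = 5, 7                  # a fixed challenge pair for the demo
print(confirm(s_valid, a, b))   # True
print(confirm(s_forged, a, b))  # False
```

(The modular inverse `pow(k, -1, q)` requires Python 3.8+.)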
Note: This proof involves a few corner cases in which the version of the Chaum–van Antwerpen scheme given in the Handbook of Applied Cryptography differs from that in Chaum and van Antwerpen's original paper.
First, Chaum and van Antwerpen explicitly require that $m \ne 1$. This makes sense, since if $m = 1$, $s = m^k = 1$ regardless of $k$! I assume this is most likely just an oversight in the Handbook, but it may be important if the message $m$ might be chosen by an adversary.
Second, the Handbook says that $a$ and $b$ should be chosen at random from $\{1,2, \dotsc, q-1\}$, whereas Chaum and van Antwerpen also allow $a = 0$ and $b = 0$. Admittedly, these are somewhat peculiar cases — in particular, $a = 0$ implies that $c$ (and thus $r$, if Sally is honest) is independent of $s$ (and thus $m$)! Even so, these cases don't actually present a weakness, since they each occur only with probability $1/q$, and, importantly,
Sally can't tell when they occur.
In fact, excluding $a = 0$ and $b = 0$ means that, at least in principle, Sally can exclude two of the possible $(a,b)$ pairs (or one, if $c=1$). Thus, the maximum probability of verifying an invalid signature may actually be as high as $1/(q-2)$. Of course, for practical values of $q$, that makes no real difference.
Also note that, in practice, one would usually use the compact representation recommended by Chaum and van Antwerpen, in which $G$ is mapped to $\{1, 2, \dotsc, q\} \subset \mathbb Z_p^*$ by the map $f(x) = \min(x, p-x)$, and the map $f$ is reapplied after any sequence of arithmetic operations in $\mathbb Z_p^*$. (This works because $p-1 \equiv -1 \bmod p$ is the generator of the order-2 subgroup of $\mathbb Z_p^*$.) Not only is this representation more compact than naively storing elements of $G$, but it makes it easy to map arbitrary messages to elements of $G \setminus \{1\}$ and avoids issues with bogus signatures outside $G$. |
Given $m+1$ integers $\alpha_0,\ldots,\alpha_m\geq 1$, I was trying to get a nice closed formula for the integral $$ \int_0^\pi\cos(\alpha_1\theta)\cdots\cos(\alpha_m\theta)d\theta. $$ More precisely, I was trying to count the number of $m$-tuples $(\alpha_1,\ldots,\alpha_m)$ with $1\leq \alpha_1\leq\ldots\leq\alpha_m\leq n$ such that the latter integral is non-zero.
When $m=1$, this is easy because of the orthogonality relation $$ \frac{2}{\pi}\int_0^\pi\cos(\alpha_0\theta)\cos(\alpha_1\theta)d\theta=\;\boldsymbol 1_{\alpha_0=\alpha_1}, $$ and thus the number of such solutions is $\boldsymbol 1_{\alpha_0\leq n}$, that is the number of ways to partition $\alpha_0$ into one positive integer smaller than $n$.
If one looks instead at the integral $$ \frac{1}{2\pi}\int_0^{2\pi}e^{i\alpha_0\theta}e^{-i\alpha_1\theta}\cdots e^{-i\alpha_m\theta} d\theta $$ then the number of solutions is the number of ways to partition $\alpha_0$ into $m$ integers smaller than $n$. I was somehow hoping that a similar type of solution would appear for the cosine integral, but maybe that is too optimistic ...
Any idea for the general case with the cosines? Or a reference? I have tried to use the trigonometric formula $\cos(a)\cos(b)=\frac{1}{2}(\cos(a+b)+\cos(a-b))$ inductively, but I don't see any structure appearing.
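One way to experiment (my own sketch, not a full answer): writing each $\cos(\alpha_j\theta)=(e^{i\alpha_j\theta}+e^{-i\alpha_j\theta})/2$ expands the product into $2^{-m}\sum_\varepsilon e^{ik\theta}$ with $k=\sum_j \varepsilon_j\alpha_j$, $\varepsilon_j=\pm1$. Over $[0,\pi]$, terms with $k=0$ contribute $\pi$, even $k\neq 0$ contribute $0$, and odd-$k$ terms cancel in $\pm$-pairs (the patterns $\varepsilon$ and $-\varepsilon$ give $2i/k$ and $-2i/k$), so the integral equals $\pi\,2^{-m}\,\#\{\varepsilon : \sum_j \varepsilon_j\alpha_j=0\}$ and is non-zero exactly when some signed sum vanishes. The code below brute-forces this count and cross-checks against quadrature:

```python
import math
from itertools import product

def cosine_integral_exact(alphas):
    """pi * 2^(-m) * #{sign vectors with zero signed sum}."""
    m = len(alphas)
    hits = sum(1 for eps in product((1, -1), repeat=m)
               if sum(e * a for e, a in zip(eps, alphas)) == 0)
    return math.pi * hits / 2**m

def cosine_integral_numeric(alphas, n=200_000):
    """Midpoint-rule quadrature of ∫_0^π ∏ cos(α_j θ) dθ."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        total += math.prod(math.cos(a * theta) for a in alphas)
    return total * h

print(cosine_integral_exact((1, 2, 3)))    # pi/4: 1 + 2 - 3 = 0 (two sign patterns)
print(cosine_integral_exact((1, 2, 4)))    # 0: every signed sum is odd
```

So the counting question becomes: how many ordered tuples admit a vanishing signed sum — a signed subset-sum condition rather than a plain partition.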
If you use
NumPy for numerical computations, which I recommend if you program in Python, one of the key things to do is to process entire arrays in one go as much as possible (this is also true for MATLAB). Using these so-called vectorized operations makes sure that the operation is run using compiled code instead of interpreted Python code, since the array and matrix operations behind NumPy (and MATLAB) are implemented in compiled languages such as C.
As an example, I’ll create a Python program that produces the classical Mandelbrot fractal. This fractal is computed by defining, for each \(c\in\mathbb{C}\), a polynomial \(f_c(z)=z^2+c\). This polynomial is then
iterated starting with \(z=0\), that is, \(f_c^n(0)\) is computed, and any point \(c\) for which \(f_c^n(0)\) does not “escape to infinity” for \(n\to\infty\) is part of the Mandelbrot set (there are some example values of \(c\) in the article Mandelbrot Set).
In order to compute this polynomial for all pixels of an image at the same time, we create matrices \({\bf Z}\) and \({\bf C}\), and then do the computation
\[{\bf Z}^{(t+1)}={\bf Z}^{(t)}*\,{\bf Z}^{(t)}+\,{\bf C},\]
where \(*\) means
element-wise matrix multiplication. The matrix \({\bf Z}^{(0)}\) must be initialized to all zeros, so that’s easy. The Matrix C
Each element of the matrix \({\bf C}\) must be initialized to the coordinate of the pixel in the complex plane that it represents, i.e., the pixel \((x,y)\) must get the value \(x+yi\). This makes sure that the correct function \(f_c\) is computed at each point. A straightforward way to do that is to create a row of x coordinates and a column of y coordinates. For example, for the points with integer coordinates in the range \([-2,2]\), this would be
\[\begin{pmatrix}
-2 & -1 & 0 & 1 & 2 \end{pmatrix}\]
and
\[\begin{pmatrix}
2 \\ 1 \\ 0 \\ -1 \\ -2 \end{pmatrix}.\]
By replicating both to a full \(5\times5\) matrix and adding them, with the second one multiplied by \(i\), we get
\[\begin{pmatrix}
-2+2i & -1+2i & 2i & 1+2i & 2+2i \\ -2+i & -1+i & i & 1+i & 2+i \\ -2 & -1 & 0 & 1 & 2 \\ -2-i & -1-i & -i & 1-i & 2-i \\ -2-2i & -1-2i & -2i & 1-2i & 2-2i \end{pmatrix}.\]
In practice, a basic plot of the Mandelbrot fractal can be made on a grid that extends from \(-2\) to \(1\) in direction of the x-axis, and \(-1\) to \(1\) in the direction of the y-axis. The magnitude of the elements of a \(480\times320\) matrix with this range is shown in Figure 1.
One way to compute \({\bf C}\) in Python is the following.
import numpy as np

m = 480
n = 320
x = np.linspace(-2, 1, num=m).reshape((1, m))
y = np.linspace(-1, 1, num=n).reshape((n, 1))
C = np.tile(x, (n, 1)) + 1j * np.tile(y, (1, m))
This code literally translates the vector and matrix operations from the example above. There are alternatives such as mgrid() that could be used, but I think that the construction using tile() explicitly is clearer in this case.
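For what it's worth, NumPy's broadcasting rules can build the same matrix without tile(): adding arrays of shapes (1, m) and (n, 1) automatically expands both to (n, m). A small sketch (my own addition) confirming the two constructions agree:

```python
import numpy as np

m, n = 480, 320
x = np.linspace(-2, 1, num=m).reshape((1, m))
y = np.linspace(-1, 1, num=n).reshape((n, 1))

# tile() version from the text:
C_tile = np.tile(x, (n, 1)) + 1j * np.tile(y, (1, m))
# Broadcasting does the replication implicitly: (1, m) + (n, 1) -> (n, m).
C_bcast = x + 1j * y

print(np.array_equal(C_tile, C_bcast))  # True
```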
Iterating the Polynomial
We now have the matrices \({\bf Z}\) and \({\bf C}\) ready to start iterating. This is where it gets a bit tricky. The basic code is completely trivial:
Z = np.zeros((n, m), dtype=complex)
for i in range(100):
    Z = Z * Z + C
However, there’s a problem. The values that “escape to infinity” grow so quickly that they overflow the maximum float value in no time, which first results in Inf values and then quickly in NaN values. To avoid this, I’ll add a mask M of pixels that are potentially in the Mandelbrot set, but from which we will remove pixels if we discover that they are not. Mathematically, it can be proven that pixel values with a magnitude larger than 2 will escape and cannot be part of the set. Hence, the code can be adapted as follows.
Z = np.zeros((n, m), dtype=complex)
M = np.full((n, m), True, dtype=bool)
for i in range(100):
    Z[M] = Z[M] * Z[M] + C[M]
    M[np.abs(Z) > 2] = False
There are two important constructs here. The first one is the notation Z[M]. This is so-called boolean indexing: it selects the elements of the matrix Z for which M contains True. This enables you to only continue iterating on those pixels that have not escaped yet. The second construct is the line that updates M itself. After each iteration, we determine all pixels that have escaped, using the expression np.abs(Z) > 2. We then use boolean indexing again to remove these pixels from M, using the assignment M[np.abs(Z) > 2] = False.
Hence, using array operations allows computing the Mandelbrot set in only a few lines of code, and with much better performance than by iterating over all pixels with Python for loops. Figure 2 shows the result of running the above code (so, for 100 iterations) and then plotting M.
This black-and-white image shows the Mandelbrot set in black. For more information on the difference between this and the usual colorful images that you possibly expected here, see again the article Mandelbrot Set.
Python Code
For completeness, a complete Python program follows.
import numpy as np
from imageio import imwrite

m = 480
n = 320
x = np.linspace(-2, 1, num=m).reshape((1, m))
y = np.linspace(-1, 1, num=n).reshape((n, 1))
C = np.tile(x, (n, 1)) + 1j * np.tile(y, (1, m))
Z = np.zeros((n, m), dtype=complex)
M = np.full((n, m), True, dtype=bool)
for i in range(100):
    Z[M] = Z[M] * Z[M] + C[M]
    M[np.abs(Z) > 2] = False
imwrite('mandelbrot.png', np.uint8(np.flipud(1 - M) * 255))
The call to flipud() makes sure that the plot is not upside down due to the matrix indexing that NumPy uses. This does not make any difference for this image, since the Mandelbrot set is symmetrical around the x-axis, but it becomes important when you zoom in on different regions of the set.
Yes, propellers have problems at high speed, but if done right, they still have an advantage over turbofans at speeds up to Mach 0.8. Look at the inner engine gondolas of the Tu-95: They are elongated and thicker aft of the trailing edge. This was done to stow the landing gear in them, but also to area-rule the aircraft. The Tu-95 applies knowledge that was gained only in the early jet age. This, of course, also explains the swept wing.
Next, it uses contra-rotating propellers which spin very slowly (just 750 RPM). By having two coaxial propellers spinning in opposite directions, the efficiency at high speed is improved: the first propeller pre-swirls the flow so the flow conditions on the second propeller are more favorable for thrust creation.
The tips of the fan blades of a modern turbofan also move at supersonic speed, so the supersonic propellers on the Tu-95 do not create a direct disadvantage. By keeping the relative thickness of the blade near the tip low, the drag increase can be kept at tolerable levels. But make no mistake: Supersonic flow adds wave drag, and especially around Mach 1 the zero-lift drag coefficient of everything which moves through air has a maximum. It would make the Tu-95 even more efficient if it would fly at a lower cruise speed where the propeller tips are still subsonic, but Tupolev wanted to push the design to the highest useable cruise Mach number.
What you learned about propellers and jets is not wrong, but it is not a black and white world, either. Airliners use jet engines to fly at the highest cruise speed possible, but at the cost of higher fuel consumption. If they would restrict themselves to lower speeds, plenty of fuel could be saved. But too few people would book these flights, because on intercontinental routes they will take noticeably longer. Note that turboprops are still used in regional air traffic, and even the regional jets have lower flight speeds than the intercontinental jets.
Now to the efficiencies of engine types:
Piston engines are the most fuel efficient aviation engines. Their drawback is a constant power output over speed, so that thrust is inverse to speed. This helps for acceleration at take-off, but limits maximum speed. A modern piston engine uses 240 g of fuel for providing 1 kW of power over one hour: 240 g/kW-h. Diesel engines use as little as 220 g/kW-h. This number is already true for the old Jumo 205, among the first aviation diesel engines in operation 80 years ago. Turboprop engines are next, and their power increases a little over speed due to ram pressure (which will raise the internal pressure in the engine by approx. 30% at Mach 0.8). Their power-specific consumption is about 300 g/kW-h. Jet engines are less efficient than both, but are better for flying fast and high. Their thrust drops even less with speed, so the better basis for expressing consumption is thrust, not power. The typical fuel consumption of a modern jet engine (GE-90) is 30 grams of fuel per Newton of thrust over one hour (30 g/N-h) when run stationary, and twice that in cruise at Mach 0.85. Modern Military jet engines achieve 80 g/N-h at take-off and have roughly constant thrust and specific consumption over speed.
In all cases, thrust is created by accelerating a mass of air backwards. The general equation for propulsive efficiency $\eta$ is $$\eta = \frac{v_{\infty}}{v_{\infty}+\frac{\Delta v}{2}},$$ where $\Delta v$ is the speed increase of the mass of air due to that acceleration. This formula shows that it is better to accelerate a big mass of air only a little than a smaller mass by a lot. Propellers do this and for that reason offer the best efficiency. Turboprops use less efficient, but lighter gas turbines for creating power, but retain the efficient propeller. Civilian turbofans try to increase the mass of air by increasing their bypass ratio, and only the military uses the least efficient types with bypass ratios below 1, because they are the best choice at supersonic speed.
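The effect of the formula can be made concrete with a short calculation; the numbers below are purely illustrative, not engine data:

```python
def propulsive_efficiency(v_inf, delta_v):
    """eta = v_inf / (v_inf + delta_v/2), with delta_v the speed increase
    imparted to the accelerated mass of air."""
    return v_inf / (v_inf + delta_v / 2)

v = 250.0  # flight speed in m/s (illustrative)
# For comparable thrust (~ mass flow * delta_v), the propeller-like case
# moves a large mass of air a little, the jet-like case a small mass a lot.
eta_prop_like = propulsive_efficiency(v, 50.0)
eta_jet_like = propulsive_efficiency(v, 500.0)
```

The propeller-like case comes out well ahead, which is exactly the point made in the text.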
Below you see a plot of the thrust-specific fuel consumption in cruise condition of different engine types over their bypass ratio. The inverse relation is easily visible.
Plot of the thrust specific fuel consumption in lb of fuel per lb of thrust per hour of different engines over the logarithm of their bypass ratio (picture source).
To make a comparison between piston and turbofan engines possible, let's compare fuel consumption at take-off. The formula for static thrust of a propeller is $$T_0 = \sqrt[3]{P^2\cdot\eta_{Prop}^2\cdot\pi\cdot d_P^2\cdot\rho},$$ where $P$ is the shaft power, $d_p$ the propeller diameter and $\rho$ the air density. For our example, we use a four-bladed prop of 3.4 m diameter and an engine with 1111 kW power. Its static thrust is 10.727 kN when we assume standard atmospheric conditions and a prop efficiency of 85%. The fuel flow will be 266.6 kg per hour, and relative to thrust this is 24.8 g/N-h or just 80% of that of a modern turbofan.
I wonder if even the enthusiasts could guess what airplane I used, because I obfuscated it by using those unfamiliar metric units. I guess nobody will argue that it is not optimized for fast flight, so this comparison should also hold for the Tu-95, for which I have less data available.
Here follows the requested expansion on propeller tip speeds. Thanks to the excellent comment by @JanHudec not much is left to say: The propeller diameter is 5.6 m and their speed is 750 RPM, so the circumferential component is $5.6 \cdot \pi \cdot 750 / 60 = 220 m/s$. Add to this the cruise speed of Mach 0.67 (taken from this site — others list quite incredible numbers) at altitude, where the speed of sound is 295 m/s. Mach 0.67 equates there to 197.65 m/s, and vector addition gives 295 m/s for the propeller tips, exactly Mach 1.0. This means that the propeller is subsonic over its whole span.
But the top speed is quite a bit higher. Thanks to the excellent work of Ferdinand Brandner and his team back in the Fifties the NK-12 engines developed already 12,000 horsepower back then, and their power output has since been raised to 14,795 HP. This allows a top speed of Mach 0.82, and now the tip speed comes out at 327 m/s or Mach 1.08 — mildly supersonic. This means that the outer 30% of the propeller experience supersonic flow.
I fail to find a source for the range figures you list in your question. Again I refer to this site: It is 7,800 miles or 12,552 km at a cruise speed of 400 knots or 179 m/s which equals Mach 0.606 at altitude, resulting in Mach 0.96 for the propeller tips. Therefore, it seems the best range is reached with subsonic propeller tips. |
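The tip-speed arithmetic in the last three paragraphs can be reproduced directly (numbers as given above; speed of sound at altitude taken as 295 m/s):

```python
from math import pi, sqrt

a = 295.0                                # speed of sound at altitude, m/s
circumferential = 5.6 * pi * 750 / 60    # 5.6 m diameter at 750 RPM, ~220 m/s

def tip_mach(cruise_mach):
    """Vector sum of the rotational and forward velocity components,
    expressed as a Mach number at altitude."""
    return sqrt(circumferential**2 + (cruise_mach * a)**2) / a

mach_cruise = tip_mach(0.67)    # close to Mach 1.0
mach_top = tip_mach(0.82)       # about Mach 1.08, mildly supersonic
mach_range = tip_mach(0.606)    # about Mach 0.96, subsonic
```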
This is problem 2.44 from Introduction to the theory of computation by Michael Sipser.
If $A$ and $B$ are languages, define $A \diamond B = \{xy: x \in A \land y \in B \land |x| = |y|\}$
Show that if $A$ and $B$ are regular languages, then $A \diamond B$ is a $CFL$.
My try:
Let us define the following languages:
$$L_1 = \{ x\#y : |x| = |y| \}$$
$$L_2 = \{ x\#y : x \in A \land y\in B \}$$ $L_1$ is context-free, which can be proven similarly to what is done here
$L_2$ is concatenation of regular languages, and hence regular.
Context-free languages are closed under intersection with regular languages, and hence $L_1 \cap L_2 = \{x\#y: x \in A \land y \in B \land |x| = |y|\}$ is context free.
Let us define the homomorphism $h$ such that $h(\#)=\epsilon$ and as the identity homomorphism for all other symbols.
$h(L_1 \cap L_2)=A \diamond B$, and since Context-free languages are closed under homomorphism, we conclude the requested result.
Does my proof make sense? |
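As a quick sanity check of the construction (not a proof), one can instantiate it with the toy case $A = a^*$ and $B = b^*$, for which $A \diamond B = \{a^n b^n : n \geq 0\}$; a brute-force sketch:

```python
def intersection_strings(max_n):
    """L1 ∩ L2 for the toy instance A = a*, B = b*: the strings x#y
    with x in a*, y in b* and |x| = |y|."""
    return {"a" * n + "#" + "b" * n for n in range(max_n + 1)}

def h(w):
    """The homomorphism erasing # and fixing every other symbol."""
    return w.replace("#", "")

# Applying h to L1 ∩ L2 should yield exactly { a^n b^n }.
image = {h(w) for w in intersection_strings(5)}
```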
Is it possible to solve an equation with only a single derivative such as:
$$\frac{\partial U(x,t)}{\partial t} = A - BU(x,t)$$
with finite difference methods?
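For concreteness, here is the kind of forward-difference (explicit Euler) update I mean for this simpler equation — since there is no spatial derivative, each grid point evolves as an independent ODE (made-up constants):

```python
import math

A, B = 1.0, 2.0        # constants of dU/dt = A - B*U
dt, steps = 1e-3, 1000  # integrate to t = 1

U = 0.0                 # initial condition U(0) = 0, same at every x
for _ in range(steps):
    U += dt * (A - B * U)   # forward Euler update; stable for dt < 2/B

# Closed-form solution for comparison: U(t) = A/B + (U(0) - A/B) e^{-Bt}
exact = A / B + (0.0 - A / B) * math.exp(-B * steps * dt)
```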
I ask as I am trying to solve the below equation using a finite difference method.
$$\frac{\partial \vec{m}(x,t)}{\partial t} = -\frac{\partial \vec{j}_{m}}{\partial x} - \frac{\vec{m}(x,t) \times \hat{M}(x)}{\lambda_{J}^2} - \frac{\hat{M}(x) \times \left( \vec{m}(x,t) \times \hat{M}(x) \right)}{\lambda_{\phi}} - \frac{\vec{m}(x,t) - m_\infty}{\lambda_{sf}^2}$$
I have implemented this with a forward difference scheme and it is unstable (even when $dt \ll \frac{dx}{2}$). I have attached an outline of the implementation if anyone wants to check this out!
Edit:
The $\vec{j}_m$ term is called the spin current and is defined in the document. It is calculated from the gradient of the spin accumulation $\frac{ \partial \vec{m}(x,t)}{\partial x}$ as in the document (bottom of the page).
I have used Dirichlet boundary conditions for now, although will probably use Neumann BCs at a later stage. The set up would involve using a magnetic material ($\hat{M} > 0$) surrounded by enough non-magnetic material either side to allow the solution to decay to $0$. |
Topics Proportions Percentages Unit Conversions
In Essential Skills 1 in Chapter 1 Section 1.8 , we introduced you to some of the fundamental mathematical operations you need to successfully manipulate mathematical equations in chemistry. Before proceeding to the problems in Chapter 7 you should become familiar with the additional skills described in this section on proportions, percentages, and unit conversions.
Proportions
We can solve many problems in general chemistry by using ratios, or proportions. For example, if the ratio of some quantity \( A \) to some quantity \( B \) is known, and the relationship between these quantities is known to be constant, then any change in \( A \) (from \( A_1 \) to \( A_2 \)) produces a proportional change in \( B \) (from \( B_1 \) to \( B_2 \)) and vice versa. The relationship between \( A_1 \), \( B_1 \), \( A_2 \), and \( B_2 \) can be written as follows:
\( \dfrac{A_{1}}{B_{1}}=\dfrac{A_{2}}{B_{2}}=constant \)
To solve this equation for \( A_2 \), we multiply both sides of the equality by \( B_2 \), thus canceling \( B_2 \) from the denominator:
\( \begin{matrix}
\left ( B_{2} \right )\dfrac{A_{1}}{B_{1}}=\left ( \cancel{B_{2}} \right )\dfrac{A_{2}}{\cancel{B_{2}}} \\ \\ \dfrac{B_{2}A_{1}}{B_{1}}=A_{2} \end{matrix} \)
Similarly, we can solve for \( B_2 \) by multiplying both sides of the equality by \( 1/A_2 \), thus canceling \( A_2 \) from the numerator:

\( \begin{matrix}
\left ( \dfrac{1}{A_{2}} \right )\dfrac{A_{1}}{B_{1}} & = & \left ( \dfrac{1}{\cancel{A_{2}}} \right )\dfrac{\cancel{A_{2}}}{B_{2}} \\ & & \\ \dfrac{A_{1}}{A_{2}B_{1}} & = & \dfrac{1}{B_{2}} \end{matrix} \)
If the values of \( A_1 \), \( A_2 \), and \( B_1 \) are known, then we can solve the left side of the equation and invert the answer to obtain \( B_2 \):

\( \begin{matrix}
\dfrac{1}{B_{2}} & = & numerical\; value \\ & & \\ B_{2} & = & \dfrac{1}{numerical\; value} \end{matrix} \)
Alternatively, we can solve for \( B_2 \) directly by inverting both sides of the equality:
\( B_{2}=\dfrac{A_{2}B_{1}}{A_{1}} \)
When you manipulate equations, remember that any operation carried out on one side of the equality must also be carried out on the other.
Skill Builder ES1 illustrates how to find the value of an unknown by using proportions.
Skill Builder ES1
If 38.4 g of element A are needed to combine with 17.8 g of element B, then how many grams of element A are needed to combine with 52.3 g of element B?
Solution
We set up the proportions as follows:
\( \begin{matrix}
A_{1} & = & 38.4 \; g\\ & & \\ B_{1} & = & 17.8 \; g\\ & & \\ A_{2} & = & ?\\ & & \\ B_{2} & = & 52.3 \; g\\ & & \\ \frac{A_{1}}{B_{1}} & = & \dfrac{A_{2}}{B_{2}}\\ & & \\ \frac{38.4 \; g}{17.8 \; g} & = & \dfrac{A_{2}}{52.3 \; g} \end{matrix} \)
Multiplying both sides of the equation by 52.3 g gives
\( \begin{matrix}
\dfrac{\left (38.4 \; g \right )\left ( 52.3\; g \right )}{17.8 \; g} & = & \dfrac{A_{2}\left ( \cancel{52.3\; g} \right )}{\cancel{52.3\; g}} \\ & & \\ A_{2} & = & 113\; g \end{matrix} \)
Notice that grams cancel to leave us with an answer that is in the correct units.
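The same cross-multiplication can be checked numerically; a small sketch:

```python
# Known values from Skill Builder ES1, in grams.
A1, B1, B2 = 38.4, 17.8, 52.3
A2 = B2 * A1 / B1   # cross-multiplication from A1/B1 = A2/B2
```

Rounding to three significant figures reproduces the 113 g given above.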
Always check to make sure that your answer has the correct units.

Skill Builder ES2
Solve to find the indicated variable.
1. \( \dfrac{16.4 \; g}{41.2 \; g} = \dfrac{x}{18.3\; g} \)
2. \( \dfrac{2.65 \; m}{4.02 \; m} = \dfrac{3.28\; m}{y} \)
3. \( \dfrac{3.27\times 10^{-3} \; g }{x} = \dfrac{5.0\times 10^{-1} \; g }{3.2\; g} \)
4. Solve for \( V_1 \): \( \dfrac{P_{1} }{P_{2}} = \dfrac{V_{2} }{V_{1}} \)
5. Solve for \( T_1 \): \( \dfrac{P_{1}V_{1} }{T_{1}} = \dfrac{P_{2}V_{2} }{T_{2}} \)
Solution
1. Multiply both sides of the equality by 18.3 g to remove this measurement from the denominator:

\( \left ( 18.3\; g \right )\dfrac{16.4\; \cancel{g}}{41.2\; \cancel{g}} = \left ( \cancel{18.3\; g} \right )\dfrac{x}{\cancel{18.3\; g}} \)
\( 7.28\; g=x \)
2. Multiply both sides of the equality by 1/3.28 m, solve the left side of the equation, and then invert to solve for y:
\( \begin{matrix}
\left ( \dfrac{1}{3.28\; m} \right )\dfrac{2.65\; m}{4.02\; m} & = & \left ( \dfrac{1}{\cancel{3.28\; m}} \right )\dfrac{\cancel{3.28\; m}}{y} &=& \dfrac{1}{y} \\ & & \\ y& = & \dfrac{\left (4.02 \right )\left ( 3.28 \right )}{2.65}&=&4.98\; m \end{matrix} \)
3. Multiply both sides of the equality by \( 1/(3.27\times 10^{-3}\; g) \), solve the right side of the equation, and then invert to find x:
\( \begin{matrix}
\left ( \dfrac{1}{\cancel{3.27\times 10^{-3}\; g}} \right )\dfrac{\cancel{3.27\times 10^{-3}\; g}}{x}&=& \left ( \dfrac{1}{3.27\times 10^{-3}\; g} \right )\dfrac{5.0\times 10^{-1}\; \cancel{g}}{3.2\; \cancel{g}}&=&\dfrac{1}{x} \\ & & \\ x& = & \dfrac{\left (3.2\; g \right )\left ( 3.27\times 10^{-3}\; g\right )}{5.0\times 10^{-1}\; g}&=&2.1\times 10^{-2}\; g \end{matrix} \)
4. Multiply both sides of the equality by \( 1/V_2 \):

\( \left ( \dfrac{1}{V_{2}} \right )\dfrac{P_{1}}{P_{2}}=\left ( \dfrac{1}{\cancel{V_{2}}} \right )\dfrac{\cancel{V_{2}}}{V_{1}} \)

and then invert both sides to obtain \( V_1 \):

\( V_{1}=\dfrac{P_{2}V_{2}}{P_{1}} \)

5. Multiply both sides of the equality by \( 1/(P_1 V_1) \):

\( \left ( \dfrac{1}{\cancel{P_{1}V_{1}}} \right )\dfrac{\cancel{P_{1}V_{1}}}{T_{1}}= \left ( \dfrac{1}{P_{1}V_{1}} \right )\dfrac{P_{2}V_{2}}{T_{2}} \)

and then invert both sides to obtain \( T_1 \):

\( \dfrac{1}{T_{1}}=\dfrac{P_{2}V_{2}}{T_{2}P_{1}V_{1}} \qquad T_{1}=\dfrac{T_{2}P_{1}V_{1}}{P_{2}V_{2}} \)

Percentages
Because many measurements are reported as percentages, many chemical calculations require an understanding of how to manipulate such values. You may, for example, need to calculate the mass percentage of a substance, as described in Section 11.2 , or determine the percentage of product obtained from a particular reaction mixture.
You can convert a percentage to decimal form by dividing the percentage by 100:
\( 52.8\%=\dfrac{52.8}{100}=0.528 \)
Conversely, you can convert a decimal to a percentage by multiplying the decimal by 100:
\( 0.356\times 100=35.6\% \)
Suppose, for example, you want to determine the mass of substance \( A \), one component of a sample with a mass of 27 mg, and you are told that the sample consists of 82% \( A \). You begin by converting the percentage to decimal form:
\( 82\%=\dfrac{82}{100}=0.82 \)
The mass of \( A \) can then be calculated from the mass of the sample:
\( 0.82 \times 27\; mg = 22\; mg \)
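The two steps (convert the percentage to a decimal, then multiply) translate directly into code; a small sketch:

```python
def mass_of_component(sample_mass, percent):
    """Convert the percentage to a decimal, then multiply by the sample mass."""
    return sample_mass * percent / 100

mass_A = mass_of_component(27, 82)   # in mg; 22 mg at two significant figures
```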
Skill Builder ES3 provides practice in converting and using percentages.
Skill Builder ES3
Convert each number to a percentage or a decimal.
1. 29.4%
2. 0.390
3. 101%
4. 1.023
Solution
1. \( \frac{29.4}{100} = 0.294 \)
2. \( 0.390\times 100 = 39.0\% \)
3. \( \frac{101}{100}=1.01 \)
4. \( 1.023\times 100 = 102.3\% \)

Skill Builder ES4
Use percentages to answer the following questions, being sure to use the correct number of significant figures (see Essential Skills 1 in Chapter 1 Section 1.8 ). Express your answer in scientific notation where appropriate.
1. What is the mass of hydrogen in 52.83 g of a compound that is 11.2% hydrogen?
2. What is the percentage of carbon in 28.4 g of a compound that contains 13.79 g of that element?
3. A compound that is 4.08% oxygen contains 194 mg of that element. What is the mass of the compound?
Solution
1. \( 52.83\; g\times \frac{11.2}{100}=52.83\; g\times 0.112=5.92\; g \)
2. \( \frac{13.79\; g\; carbon}{28.4\; g}\times 100=48.6\%\; carbon \)
3. This problem can be solved by using a proportion: \( \dfrac{4.08\%\; oxygen}{100\%\; compound}=\dfrac{194\; mg}{x\; mg} \), which gives \( x= 4.75\times 10^{3}\; mg\; \left ( or\; 4.75\; g \right ) \)

Unit Conversions
As you learned in Essential Skills 1 in Chapter 1, all measurements must be expressed in the correct units to have any meaning. This sometimes requires converting between different units (Table 1.8.1). Conversions are carried out using conversion factors, which are ratios constructed from the relationships between different units or measurements. The relationship between milligrams and grams, for example, can be expressed as either 1 g/1000 mg or 1000 mg/1 g. When making unit conversions, use arithmetic steps accompanied by unit cancellation.
Suppose you have measured a mass in milligrams but need to report the measurement in kilograms. In problems that involve SI units, you can use the definitions of the prefixes given in Table 1.8.2 to get the necessary conversion factors. For example, you can convert milligrams to grams and then convert grams to kilograms:
\( \text{milligrams}\rightarrow \text{grams}\rightarrow \text{kilograms} \qquad 1000\; mg\rightarrow 1\; g \qquad 1000\; g\rightarrow 1\; kg \)
If you have measured 928 mg of a substance, you can convert that value to kilograms as follows:
\( 928\; \cancel{mg}\times \dfrac{1\; g}{1000\; \cancel{mg}}=0.928\; g \)
\( 0.928\; \cancel{g}\times \dfrac{1\; kg}{1000\; \cancel{g}}=0.000928\; kg=9.28\times 10^{-4}\; kg \)
In each arithmetic step, the units cancel as if they were algebraic variables, leaving us with an answer in kilograms. In the conversion to grams, we begin with milligrams in the numerator. Milligrams must therefore appear in the denominator of the conversion factor to produce an answer in grams. The individual steps may be connected as follows:
\( 928\; \cancel{mg}\times \dfrac{1\; \cancel{g}}{1000\; \cancel{mg}}\times \dfrac{1\; kg}{1000\; \cancel{g}}=\dfrac{928\; kg}{10^{6}}=928\times 10^{-6}\; kg =9.28\times 10^{-4}\; kg \)
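The chained conversion factors translate directly into multiplication; a small sketch of the milligrams-to-kilograms example:

```python
MG_PER_G = 1000.0
G_PER_KG = 1000.0

mass_mg = 928.0
# Each factor is arranged so the unit to be removed cancels:
mass_g = mass_mg / MG_PER_G    # mg -> g
mass_kg = mass_g / G_PER_KG    # g -> kg
```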
Skill Builder ES5 provides practice converting between units.
Skill Builder ES5
Use the information in Table 1.8.2 to convert each measurement. Be sure that your answers contain the correct number of significant figures and are expressed in scientific notation where appropriate.
59.2 cm to decimeters 3.7 × 10 5mg to kilograms 270 mL to cubic decimeters 2.04 × 10 3g to tons 9.024 × 10 10s to years
Solution
1. \( 59.2\; \cancel{cm}\times \dfrac{1\; \cancel{m}}{100\; \cancel{cm}}\times \dfrac{10\; dm}{1\; \cancel{m}}=5.92\; dm \)
2. \( 3.7\times 10^{5}\; \cancel{mg}\times \dfrac{1\; \cancel{g}}{1000\; \cancel{mg}}\times \dfrac{1\; kg}{1000\; \cancel{g}}=3.7\times 10^{-1}\; kg \)
3. \( 270\; \cancel{mL}\times \dfrac{1\; \cancel{L}}{1000\; \cancel{mL}}\times \dfrac{1\; dm^{3}}{1\; \cancel{L}}=270\times 10^{-3}\; dm^{3}=2.70\times 10^{-1}\; dm^{3} \)
4. \( 2.04\times 10^{3}\; \cancel{g}\times \dfrac{1\; \cancel{lb}}{453.6\; \cancel{g}}\times \dfrac{1\; tn}{2000\; \cancel{lb}}=0.00225\; tn= 2.25\times 10^{-3}\; tn \)
5. \( 9.024\times 10^{10}\; \cancel{s}\times \dfrac{1\; \cancel{min}}{60\; \cancel{s}}\times \dfrac{1\; \cancel{h}}{60\; \cancel{min}}\times \dfrac{1\; \cancel{d}}{24\; \cancel{h}}\times \dfrac{1\; yr}{365\; \cancel{d}}=2.86\times 10^{3}\; yr \)

Contributors

Anonymous; modified by Joshua Halpern.
I need some help with explaining why a grammar is not LL(1).
Let us take the following grammar:
$$ \begin{align} S \rightarrow & aB \mid bA \mid \varepsilon \\ A \rightarrow & aS \mid bAA \\ B \rightarrow & b \\ \end{align} $$
This is my attempt:
For the grammar to be LL(1) it is a necessary condition that for any strings $c_1\gamma$ and $c_2\beta$, derivable from $S \rightarrow aB$ and $A \rightarrow aS$ respectively, we have $c_1 \ne c_2$.
But, $S \rightarrow aB$ and $A \rightarrow aS$, hence $c_1 = c_2$ and the grammar is not LL(1).
Is my reasoning right?
Thanks in advance. |
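For cross-checking, the LL(1) conditions can also be tested mechanically by computing FIRST and FOLLOW sets; a rough fixed-point sketch (uppercase letters are nonterminals, `''` stands for $\varepsilon$, `'$'` for end of input):

```python
grammar = {'S': ['aB', 'bA', ''], 'A': ['aS', 'bAA'], 'B': ['b']}
NT = set(grammar)

first = {X: set() for X in NT}
follow = {X: set() for X in NT}
follow['S'].add('$')   # S is the start symbol

def first_of(alpha):
    """FIRST of a sentential form alpha ('' represents epsilon)."""
    out = set()
    for s in alpha:
        f = first[s] if s in NT else {s}
        out |= f - {''}
        if '' not in f:
            return out
    return out | {''}

changed = True
while changed:          # iterate to a fixed point
    changed = False
    for X, prods in grammar.items():
        for p in prods:
            for t in first_of(p):
                if t not in first[X]:
                    first[X].add(t); changed = True
            for i, s in enumerate(p):
                if s in NT:
                    tail = first_of(p[i + 1:])
                    new = (tail - {''}) | (follow[X] if '' in tail else set())
                    for t in new:
                        if t not in follow[s]:
                            follow[s].add(t); changed = True

# S has an epsilon-alternative, so S -> aB and S -> bA clash with S -> ''
# whenever 'a' or 'b' also lies in FOLLOW(S):
ll1_conflict = bool({'a', 'b'} & follow['S'])
```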
I have already solved this problem using trig; however, I feel that there must be an easier way to solve this problem using some theorem or property of quadrilaterals that I am forgetting.
Initially I tried solving this problem using strictly algebra and the angle sum of a triangle theorem. However, I quickly found that this just leads me to $x + \angle BDC = 106$. I tried extending the original shape and setting up the problem as a system of equations and I still couldn't get a singular answer.

When I couldn't find a singular answer with these methods I figured that I must be missing a required theorem relating the angles in a quadrilateral to their diagonals.
I was able to solve for $x$ using the law of sines and cosines, however I feel like this isn't the method that they wanted me to use when solving this problem.
$$\begin{matrix} \frac{DE}{\sin(48^\circ)} = \frac{AD}{\sin(74^\circ)} & \frac{AE}{\sin(58^\circ)} = \frac{AD}{\sin(74^\circ)} \\ \frac{EC}{\sin(30^\circ)} = \frac{DE}{\sin(44^\circ)} & \frac{BE}{\sin(16^\circ)} = \frac{AE}{\sin(58^\circ)} \\ \end{matrix}$$
$$BC^2 = BE^2 + CE^2 - 2 \times BE \times CE \times \cos(74^\circ) $$ $$\frac{\sin(74^\circ)}{BC} = \frac{\sin(x)}{BE} $$ $$x = \sin^{-1} \left( \frac{BE \times \sin(74^\circ)}{BC} \right) $$
So my general question is "What theorem or method should I use to solve this problem?"
Also, as a side question: "Can this problem be solved using just the angle sum theorem and algebra?"
I am trying to answer problem 3.3 part b in Schwartz's QFT book:
(b) Show that the total energy $Q=\int \mathcal{T}^{00} d^3 x$ is invariant under the addition of a total derivative to the Lagrangian, $\mathcal{L}\rightarrow \mathcal{L}+\partial_{\mu} X^{\mu}$.
When I try to show this, I find that it is in general not invariant, meaning I am making an error somewhere. Here is my attempt:
The (canonical) energy-momentum tensor $\mathcal{T}^{\mu \nu}$ is defined by:
$$\mathcal{T}^{\mu \nu}\equiv \sum_{i} \frac{\partial \mathcal{L}}{\partial (\partial_{\mu} \phi_i)}\partial^{\nu}\phi_i - \eta^{\mu\nu}\mathcal{L}$$
Since a total derivative $\partial_{\mu}X^{\mu}$ doesn't change the action, and $\mathcal{T}^{\mu \nu}$ is defined through the action, the new energy-momentum tensor will simply be the old one with $\mathcal{L}$ replaced with $\mathcal{L}+\partial_{\mu}X^{\mu}$.
$$\begin{align} \mathcal{T'}^{\mu \nu}&=\mathcal{T}^{\mu \nu}+ \sum_{i} \frac{\partial \left(\partial_{\rho}X^{\rho}\right)}{\partial (\partial_{\mu} \phi_i)}\partial^{\nu}\phi_i - \eta^{\mu\nu}\left(\partial_{\rho}X^{\rho}\right)\\ &=\mathcal{T}^{\mu \nu}+ \sum_{i} \partial_{\rho}\left(\frac{\partial X^{\rho}}{\partial (\partial_{\mu} \phi_i)}\right)\partial^{\nu}\phi_i - \eta^{\mu\nu}\partial_{\rho}X^{\rho}\\ \end{align}$$
The new energy $Q'$ is given by:
$$\begin{align} Q'&=\int d^3 x \mathcal{T}'^{00}\\ &=Q+\int d^3 x \, \left[ \sum_{i} \partial_{\rho}\left(\frac{\partial X^{\rho}}{\partial \dot{\phi}_i}\right)\dot{\phi}_i -\partial_{\rho} X^{\rho}\right] \end{align}$$
So I need to show that the second term vanishes, but I don't know how. That's a 4-derivative sitting inside the integral, so if it was $d^4 x$ I'd just use the divergence theorem and say "oh well assuming $X^{\rho}$ vanishes at infinity we get our desired result". But it's an integral over $d^3 x$.
What do I do? |
Bouncing Saint-Venant bump
This test case is identical to bump2D.c but using the generic solver for systems of conservation laws rather than the Saint-Venant solver.
#include "conservation.h"
The only conserved scalar is the water depth h and the only conserved vector is the flow rate q.

scalar h[];
vector q[];
scalar * scalars = {h};
vector * vectors = {q};
Using these notations, the Saint-Venant system of conservation laws (assuming a flat topography) can be written \displaystyle \partial_t\left(\begin{array}{c} h\\ q_x\\ q_y\\ \end{array}\right) + \nabla\cdot\left(\begin{array}{cc} q_x & q_y\\ \frac{q_x^2}{h} + \frac{Gh^2}{2} & \frac{q_xq_y}{h}\\ \frac{q_yq_x}{h} & \frac{q_y^2}{h} + \frac{Gh^2}{2} \end{array}\right) = 0 with G the acceleration of gravity.
This system is entirely defined by the flux() function called by the generic solver for conservation laws. The parameter passed to the function is the array s which contains the state variables for each conserved field, in the order of their definition above (i.e. scalars then vectors). In the function below, we first recover each value (h, qx and qy) and then compute the corresponding fluxes (f[0], f[1] and f[2]). The minimum and maximum eigenvalues for the Saint-Venant system are the characteristic speeds u \pm \sqrt{Gh}.
double G = 1.;

void flux (const double * s, double * f, double e[2])
{
  double h = s[0], qx = s[1], u = qx/h, qy = s[2];
  f[0] = qx;
  f[1] = qx*u + G*h*h/2.;
  f[2] = qy*u;
  // min/max eigenvalues
  double c = sqrt(G*h);
  e[0] = u - c; // min
  e[1] = u + c; // max
}
The solver is now fully defined and we proceed with initial conditions etc… as when using the standard Saint-Venant solver (see bump2D.c for details).
#define LEVEL 7

int main()
{
  origin (-0.5, -0.5);
  init_grid (1 << LEVEL);
  run();
}

event init (i = 0)
{
  theta = 1.3; // tune limiting from the default minmod
  foreach()
    h[] = 0.1 + 1.*exp(-200.*(x*x + y*y));
}

event logfile (i++) {
  stats s = statsf (h);
  fprintf (stderr, "%g %d %g %g %.8f\n", t, i, s.min, s.max, s.sum);
}

event outputfile (t <= 2.5; t += 2.5/8) {
  static int nf = 0;
  printf ("file: eta-%d\n", nf);
  output_field ({h}, stdout, N, linear = true);

  scalar l[];
  foreach()
    l[] = level;
  printf ("file: level-%d\n", nf++);
  output_field ({l}, stdout, N);

  /* check symmetry */
  foreach() {
    double h0 = h[];
    point = locate (-x, -y);
    // printf ("%g %g %g %g %g\n", x, y, h0, h[], h0 - h[]);
    assert (fabs(h0 - h[]) < 1e-12);
    point = locate (-x, y);
    assert (fabs(h0 - h[]) < 1e-12);
    point = locate (x, -y);
    assert (fabs(h0 - h[]) < 1e-12);
  }
}

#if TREE
event adapt (i++) {
  astats s = adapt_wavelet ({h}, (double[]){1e-3}, LEVEL);
  fprintf (stderr, "# refined %d cells, coarsened %d cells\n", s.nf, s.nc);
}
#endif
Results
The results are comparable to that of bump2D.c. They are not identical mainly because the standard Saint-Venant solver applies slope-limiting to velocity rather than flow rate in the present case. |
I have to study the solvability of the equation $$ 7^x -5x^3 \equiv 0 \quad \pmod{33} $$ and determine its integer solutions $ x $ with $ 0 \le x \le 110 $.
I started dividing this equation into two equations $$\cases {7^x -5x^3 \equiv 0 \quad \pmod{3} \\ 7^x -5x^3 \equiv 0 \quad \pmod{11}}.$$ For the first one I tried to substitute the values $0,1,2$ and found that $x \equiv 2 \pmod{3}$ is the only possibility.
Then I tried to solve the second one with the method of indices: $$ x \cdot\mbox{ind}_{11}(7)-\mbox{ind}_{11}(5)-3\cdot\mbox{ind}_{11}(x) \equiv 0 \pmod{\phi(11)}.$$ I noticed that $2$ is a primitive root $\pmod{11}$ and computed its powers which lead to $$2^4 \equiv 5 \pmod{11} \\2^7 \equiv 7 \pmod{11}.$$ The equation became $$ x \cdot 7 - 4- 3 \cdot \mbox{ind}_{11}(x) \equiv 0 \pmod{\phi(11)} \quad \longrightarrow \quad 7 x - 3 \cdot \mbox{ind}_{11}(x) \equiv 4 \quad \pmod{\phi(11)}.$$
Then I stopped because I had no clues on how to continue. Have you any ideas?
Thank you in advance. |
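Since the range is finite, the outcome of the index computation can be cross-checked by brute force (this only verifies; it does not replace the index method):

```python
# Brute force over the requested range 0 <= x <= 110.
solutions = [x for x in range(111) if (pow(7, x, 33) - 5 * x**3) % 33 == 0]
# -> [14, 38, 83, 86, 95, 101]; note every solution is congruent to 2
# modulo 3, matching the mod-3 analysis above.
```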
Let $(M,\omega, \mathbb{T})$ be a symplectic toric manifold. It is well-known that the properties of $M$ can be retrieved by looking at the moment polytope $\Delta$, the image of the momentum map $$ \mu : M \to \text{Lie}(\mathbb{T})^*, \quad \Delta := \mu(M) $$ associated with the $\mathbb{T}$-action on $M$. If $\Delta$ has $n$ facets, it is given by $$ \Delta = \lbrace x \in \text{Lie}(\mathbb{T})^* \ | \ \langle x, v_j \rangle + a_j \geq 0, \ \text{for all} \ j=1,...,n \rbrace, $$ where $\langle.,.\rangle$ denotes the natural pairing between $\text{Lie}(\mathbb{T})$ and its dual $\text{Lie}(\mathbb{T})^*$, the $v_j$'s are primitive vectors in the integer lattice $\text{Lie}(\mathbb{T})_{\mathbb{Z}} := \ker(\exp : \text{Lie}(\mathbb{T}) \to \mathbb{T})$, and $a = (a_1,...,a_n) \in \mathbb{R}_{\geq 0}^n \setminus \{0\}$. For instance, we have:
$M$ is smooth if and only if each $k$-codimensional face of $\Delta$ is the intersection of exactly $k$ facets; $M$ is compact if and only if the $k$ conormals associated to any such $k$-codimensional face can be extended to an integer basis of the lattice $\text{Lie}(\mathbb{T})_{\mathbb{Z}}$. The integral cohomology of $M$ can be described by means of the fan associated with the polytope $\Delta$. It is defined as follows: Let $\Gamma$ be a face of $\Delta$. Its associated cone is defined by $$ \sigma_{\Gamma} := \underset{r > 0} \bigcup r(\Delta - x), $$ where $x$ is any point in the interior of $\Gamma$. The dual cone of $\sigma_{\Gamma}$ is $$ \sigma_{\Gamma}^* = \lbrace v \in \text{Lie}(\mathbb{T}) \ | \ \langle x, v \rangle \geq 0, \ \text{for all} \ x \in \Gamma \rbrace. $$ The fan $\Sigma(\Delta)$ associated with $\Delta$ is the set of all dual cones $\sigma_{\Gamma}^*$, for $\Gamma$ a face of $\Delta$. One can then show that the cohomology of $M$ is given by $$ H^*(M; \mathbb{Z}) = \mathbb{Z}[u_1,...,u_n] / I + J, $$ where $I$ and $J$ are ideals computed in terms of the fan $\Sigma(\Delta)$. I refer to the books of Audin "Torus actions on symplectic manifolds", or "Toric varieties" of Cox for more details on that.
The moment polytope $\Delta$ can be identified with another polytope through Delzant's construction. The standard $(S^1)^n$-action on $\mathbb{C}^n$ has momentum map $$ P : \mathbb{C}^n \to \mathbb{R}^{n*}, \quad (z_1,...,z_n) \mapsto \pi(|z_1|^2,...,|z_n|^2). $$ Endow $\mathbb{R}^n$ with its standard basis $(e_1,...,e_n)$, and consider the following surjective linear map $$ \beta : \mathbb{R}^n \to \text{Lie}(\mathbb{T}), \quad e_i \mapsto v_i. $$ It induces a map $$ [\beta] : \mathbb{R}^n / \mathbb{Z}^n \to \mathbb{T}. $$ Denote by $\mathbb{K} \subset \mathbb{R}^n / \mathbb{Z}^n$ its kernel. It has $\text{Lie}(\mathbb{K}) = \ker \beta$ as Lie algebra, and if $\iota : \text{Lie}(\mathbb{K}) \hookrightarrow \mathbb{R}^n$ denotes the inclusion, the momentum map associated with the $\mathbb{K}$-action on $\mathbb{C}^n$ is given by $$ P_{\mathbb{K}} := \iota^* \circ P : \mathbb{C}^n \to \text{Lie}(\mathbb{K})^*. $$ Then $\mathbb{K}$ acts freely on the regular level set $$ P_{\mathbb{K}}^{-1}(p), \quad p := \iota^*(a), $$ and one can show that the standard symplectic form on $\mathbb{C}^n$ induces a well-defined symplectic form $\omega_p$ on the quotient $P_{\mathbb{K}}^{-1}(p) / \mathbb{K}$. By Delzant's theorem, there is an equivariant symplectomorphism $$ (M, \omega, \mathbb{T}) \simeq (P_{\mathbb{K}}^{-1}(p) / \mathbb{K}, \omega_p, (S^1)^n / \mathbb{K}). $$
Through this isomorphism, there is a natural identification $$ \Delta \simeq (\iota^*)^{-1}(p) \cap \Pi, $$ where $\Pi := \mathbb{R}_{\geq 0}^{n*}$ is the first orthant. Indeed, a point $x \in \text{Lie}(\mathbb{T})^*$ lies in $\Delta$ if and only if $\langle x, v_j \rangle + a_j \geq 0$ for all $j = 1,...,n$. But $$ \langle x, v_j \rangle + a_j = \langle x, \beta(e_j) \rangle + a_j = \langle \beta^*(x) + a, e_j \rangle, $$ and we have $\text{Im} \beta + a = \ker \iota^*$.
I am trying to understand how to interpret the compactness and smoothness of $M$ in terms of the polytope $(\iota^*)^{-1}(p) \cap \Pi$. More precisely, I would like to show the following:
$M$ is compact if and only if $\ker \iota^* \cap \Pi = \{0\}$; $M$ is smooth if and only if the projections of any two $k$-dimensional faces of $\Pi$ which cover $p$ when projected to $\text{Lie}(\mathbb{K})^*$ are isomorphic over $\mathbb{Z}$; The integral cohomology of $M$ is given by $$ H^*(M; \mathbb{Z}) \simeq \mathbb{Z}[u_1,...,u_n] / I + J, $$ where $I$ is the ideal generated by polynomials which vanish on the lattice $Lie({\mathbb{K}})_{\mathbb{Z}} := \ker (\exp: \text{Lie}(\mathbb{K}) \to \mathbb{K})$, and $J$ is generated by monomials $u_1^{m_1}...u_n^{m_n}$ such that $m = (m_1,...,m_n) \in \mathbb{R}_{\geq 0}^n$, considered as a function on $\mathbb{R}_{\geq 0}^{n*}$, assumes strictly positive values on the (vertices) of $(\iota^*)^{-1}(p) \cap \Pi$.
The first two points seem rather intuitive, but I am having trouble proving them properly. As for the third one, it requires studying the relation between the fan associated with the polytope $\Delta$ and the faces of $(\iota^*)^{-1}(p) \cap \Pi$, which I don't understand.

This description appears in a paper of Alexander Givental called "A fixed point theorem for toric manifolds", but I haven't seen it explained properly anywhere. Any help will be appreciated.
Thanks a lot! |
The linear bounded automata (LBA) is defined as follows:
A linear bounded automata is a nondeterministic Turing machine $M=(Q,\Sigma,\Gamma,\delta,q_0,\square,F)$ (as in the definition of TM) with the restriction that $\Sigma$ must contain two symbols $[$ and $]$, such that $\delta(q_i,[)$ can contain only elements of the form $(q_j,[,R)$ and $\delta(q_i,])$ can contain only elements of the form $(q_j,],L)$
Informally this can be interpreted as follows:
In linear bounded automata, we allow the Turing machine to use only that part of the tape occupied by the input. The input can be envisioned as bracketed by left end marker $[$ and right end marker $]$. The end markers cannot be rewritten, and RW head cannot move to the left of $[$ or to the right of $]$.
Now I read that context sensitive grammar imitates the function of LBA and is defined as follows:
A grammar is CSG if all productions in context sensitive grammar takes form $$x\rightarrow y,$$ where $x,y\in(V\cup T)^+$ and $|x|\leq|y|$
Now people say that a CSG cannot contain a lambda (empty) production (of the form $x\rightarrow \lambda$), as it would make it impossible to meet the requirement $|x|\leq|y|$, and this can be understood.
However, what I don't understand is how the informal interpretation of the working of the LBA given above explains why an LBA cannot accept the empty string (which is why a CSG does not have lambda productions). Can anyone explain?
This question is not so much related to the physics as it is the integral itself. In terms of the canonical coherent states, $|z\rangle$, one encounters a resolution of the identity of the form:
$$\hat{I}=\displaystyle\int \dfrac{d^2z}{\pi} |z\rangle\langle{z}|$$
Where the integral is taken over the entire complex plane. My question is, often this is re-expressed in the forms:
$$\displaystyle\int \dfrac{dzd\bar{z}}{2\pi i}|z\rangle\langle{z}|$$
$$\displaystyle\int \dfrac{dxdy}{\pi} |z\rangle\langle{z}| \qquad \text{where } x=\Re z, \ y=\Im z$$
How does one show that the $z-\bar{z}$ integral is equivalent to $x-y$ integral? How does one show that either of these are equivalent to the original integral? I presume this has something to do with the properties of differential forms inducing the area element but I'm unsure how to formulate these in terms of differential forms (and back again) in the first place.
For instance, it's clear to me that
if I assume a correspondence of the form:
$$\int dzd\bar{z} = \int dz \wedge d\bar{z}$$
Then I could use the standard properties of the exterior product to arrive at the relation $dz \wedge d\bar{z} = -2i\,dx \wedge dy$. Whilst close, this is still not quite right, even if one were to use the above unjustified correspondence to return to the element $dxdy$. I presume this issue is related to orientation, but I have yet to see a satisfactory explanation of what's going on here.
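As an algebraic sanity check of that relation, here is a minimal sympy sketch (my own throwaway encoding of 1-forms, not a library API): a 1-form $a\,dx + b\,dy$ is stored as the coefficient pair $(a,b)$, and the wedge of two 1-forms is reduced to its coefficient on $dx\wedge dy$.

```python
import sympy as sp

# Encode the 1-form a*dx + b*dy as the coefficient pair (a, b).
# For two 1-forms, (a1 dx + b1 dy) ∧ (a2 dx + b2 dy) = (a1*b2 - a2*b1) dx∧dy,
# using dx∧dx = dy∧dy = 0 and dy∧dx = -dx∧dy.
def wedge_coeff(f1, f2):
    return sp.simplify(f1[0]*f2[1] - f1[1]*f2[0])

dz    = (sp.Integer(1),  sp.I)   # dz    = dx + i dy
dzbar = (sp.Integer(1), -sp.I)   # dzbar = dx - i dy

coeff = wedge_coeff(dz, dzbar)   # coefficient of dx∧dy in dz ∧ dzbar
print(coeff)
```

This only checks the algebra $dz\wedge d\bar z = -2i\,dx\wedge dy$; it says nothing about the orientation question itself.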
How does one relate the area element $dzd\bar{z}$ to the differential 2-form $dz\wedge d\bar{z}$ in the context of the integral? Why is this a valid step to take?
Thanks in advance! |
Academic is designed to give technical content creators a seamless experience. You can focus on the content and Academic handles the rest.
Highlight your code snippets, take notes on math classes, and draw diagrams from textual representation.
On this page, you'll find some examples of the types of technical content that can be rendered with Academic.
Examples Code
Academic supports a Markdown extension for highlighting code syntax. You can enable this feature by toggling the `highlight` option in your `config/_default/params.toml` file.
```python
import pandas as pd
data = pd.read_csv("data.csv")
data.head()
```
renders as
import pandas as pd
data = pd.read_csv("data.csv")
data.head()
Math
Academic supports a Markdown extension for $\LaTeX$ math. You can enable this feature by toggling the `math` option in your `config/_default/params.toml` file and adding `markup: mmark` to your page front matter.
To render inline or block math, wrap your LaTeX math with `$$...$$`.
Example math block:
$$\gamma_{n} = \frac{ \left | \left (\mathbf x_{n} - \mathbf x_{n-1} \right )^T \left [\nabla F (\mathbf x_{n}) - \nabla F (\mathbf x_{n-1}) \right ] \right |}{\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2}$$
renders as
\[\gamma_{n} = \frac{ \left | \left (\mathbf x_{n} - \mathbf x_{n-1} \right )^T \left [\nabla F (\mathbf x_{n}) - \nabla F (\mathbf x_{n-1}) \right ] \right |}{\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2}\]
Example inline math $$\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2$$ renders as \(\left \|\nabla F(\mathbf{x}_{n}) - \nabla F(\mathbf{x}_{n-1}) \right \|^2\).
Example multi-line math using the `\\` math linebreak:
$$f(k;p_0^*) = \begin{cases} p_0^* & \text{if }k=1, \\1-p_0^* & \text {if }k=0.\end{cases}$$
renders as
\[f(k;p_0^*) = \begin{cases} p_0^* & \text{if }k=1, \\ 1-p_0^* & \text {if }k=0.\end{cases}\]
Diagrams
Academic supports a Markdown extension for diagrams. You can enable this feature by toggling the
diagram option in your
config/_default/params.toml file or by adding
diagram: true to your page front matter.
An example flowchart:
```mermaid
graph TD;
A-->B;
A-->C;
B-->D;
C-->D;
```
renders as
graph TD;
A-->B;
A-->C;
B-->D;
C-->D;
An example sequence diagram:
```mermaid
sequenceDiagram
participant Alice
participant Bob
Alice->John: Hello John, how are you?
loop Healthcheck
John->John: Fight against hypochondria
end
Note right of John: Rational thoughts <br/>prevail...
John-->Alice: Great!
John->Bob: How about you?
Bob-->John: Jolly good!
```
renders as
sequenceDiagram
participant Alice
participant Bob
Alice->John: Hello John, how are you?
loop Healthcheck
John->John: Fight against hypochondria
end
Note right of John: Rational thoughts <br/>prevail...
John-->Alice: Great!
John->Bob: How about you?
Bob-->John: Jolly good!
An example Gantt diagram:
```mermaid
gantt
dateFormat YYYY-MM-DD
section Section
A task          :a1, 2014-01-01, 30d
Another task    :after a1, 20d
section Another
Task in sec     :2014-01-12, 12d
another task    :24d
```
renders as
gantt
dateFormat YYYY-MM-DD
section Section
A task          :a1, 2014-01-01, 30d
Another task    :after a1, 20d
section Another
Task in sec     :2014-01-12, 12d
another task    :24d
Todo lists
You can even write your todo lists in Academic too:
- [x] Write math example
- [x] Write diagram example
- [ ] Do something else
renders as
Write math example
Write diagram example
Do something else

Tables
Represent your data in tables:
| First Header  | Second Header |
| ------------- | ------------- |
| Content Cell  | Content Cell  |
| Content Cell  | Content Cell  |
renders as
First Header | Second Header
Content Cell | Content Cell
Content Cell | Content Cell

Asides
Academic supports a Markdown extension for asides, also referred to as notices or hints. By prefixing a paragraph with `A>`, it will render as an aside. You can enable this feature by adding `markup: mmark` to your page front matter, or alternatively by using the Alert shortcode.
A> A Markdown aside is useful for displaying notices, hints, or definitions to your readers.
renders as |
Let $K \supset F$ be a finite normal extension. Show in detail that, for every $\alpha \in K$, the polynomial
$$f(x)=\prod_{\sigma \in Aut_F(K)}(x-\sigma(\alpha))\in F[x];$$
My attempt:
Since $K$ is a finite extension of $F$, $K$ is an algebraic extension, so $\alpha$ is algebraic over $F$.
If $\alpha \in F$, then $\sigma(\alpha)=\alpha$ for every $\sigma \in$ Aut$_F(K)$, and, obviously, $f(x)=(x-\alpha)^k \in F[x]$, where $k$ $=$ |Aut$_F(K)$|.
If $\alpha \notin F$, we know that $\alpha$ is algebraic over $F$, so there is $p(x)=irr(\alpha,F) \in F[x]$. Since the extension is normal, every root of $p(x)$ is in $K$, and it's known that any automorphism in Aut$_F(K)$ takes a root of an irreducible polynomial to another root; so if $\{\alpha=\alpha_1,\ldots,\alpha_m\}$ is the set of roots of $p(x)$, any automorphism will permute these roots.
We can write $p(x)=(x-\alpha_1)^{n_1}\ldots(x-\alpha_m)^{n_m} \in K[x]$.
I can't go any further than that... My initial idea was to show that $f(x)=p(x)$; however, I see that this is not true, because the number of automorphisms in Aut$_F(K)$ can be bigger than $p$'s degree, so I can't see how to guarantee that $f(x) \in F[x]$.
I know that $f(x)=(x-\alpha_1)^{n'_1}\ldots(x-\alpha_m)^{n'_m}$ , $0\leq n'_i\leq k=$|Aut$_F(K)$|. So if $n'_i \geq n_i$, we will have that $f(x)=p(x)g(x)$, $g(x)=(x-\alpha_1)^{n'_1-n_1}\ldots(x-\alpha_m)^{n'_m-n_m}$. So I'd have to show that $g(x)\in F[x]$, but I can't see how to do that... And, even if I knew, it doesn't solve the entire problem..
Does someone have a better idea to solve this?
Thanks :) |
I know that the Euler Lagrange equation (here only in 1D)
$$ \left(\frac{d}{dt}\frac{\partial}{\partial\dot{x}}-\frac{\partial}{\partial x}\right)L\left(x,\dot{x},t\right)=0 $$
is invariant under (invertible) coordinate transformations of the kind $q=q\left(x,t\right)$ — most simply because the derivation via the principle of least action can be performed in any coordinate system. Suppose, however, that I wish to show explicitly that if the EL equation is satisfied for $x$, then it is also satisfied for $q$, by actually changing variables in the equation.
I begin by rewriting $\frac{\partial}{\partial x}$ and $\frac{\partial}{\partial\dot{x}}$ as
\begin{eqnarray*} \frac{\partial}{\partial x} & = & \left(\frac{\partial q}{\partial x}\right)\frac{\partial}{\partial q}+\left(\frac{\partial\dot{q}}{\partial x}\right)\frac{\partial}{\partial\dot{q}}\\ \frac{\partial}{\partial\dot{x}} & = & \left(\frac{\partial q}{\partial\dot{x}}\right)\frac{\partial}{\partial q}+\left(\frac{\partial\dot{q}}{\partial\dot{x}}\right)\frac{\partial}{\partial\dot{q}} \end{eqnarray*} so that my EL now reads
$$ \left(\frac{d}{dt}\left[\left(\frac{\partial q}{\partial\dot{x}}\right)\frac{\partial}{\partial q}+\left(\frac{\partial\dot{q}}{\partial\dot{x}}\right)\frac{\partial}{\partial\dot{q}}\right]-\left[\left(\frac{\partial q}{\partial x}\right)\frac{\partial}{\partial q}+\left(\frac{\partial\dot{q}}{\partial x}\right)\frac{\partial}{\partial\dot{q}}\right]\right)L\left(q,\dot{q},t\right)=0 $$
I then let $\frac{d}{dt}$ act from the right and collect terms; at some point I should presumably use that $\dot{q}\left(x,\dot{x},t\right)=\frac{\partial q}{\partial t}+\frac{\partial q}{\partial x}\dot{x}$ and $\dot{x}\left(q,\dot{q},t\right)=\frac{\partial x}{\partial t}+\frac{\partial x}{\partial q}\dot{q}$, to obtain in the end
$$ \left(\mbox{some function}\right)\left(\frac{d}{dt}\frac{\partial}{\partial\dot{q}}-\frac{\partial}{\partial q}\right)L\left(q,\dot{q},t\right)=0 $$
However, expanding the action of $\frac{d}{dt}$ gives a horrible mess that I will not reproduce here.
The question then is: Is the setup I'm trying above correct (albeit ugly), or is there some neater way, without making use of the principle of least action?
I've found some questions related to this, such as Euler Lagrange equation in different frames, but I'm not sure how to make use of them.
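For what it's worth, the invariance can at least be verified mechanically for a concrete transformation. Below is a sympy sketch (the Lagrangian $L=\tfrac12\dot x^2 - x$ and the change of variables $x=q-t^2$ are assumptions chosen purely for illustration) checking that substituting the transformation into the EL equation for $x$ reproduces the EL equation for $q$:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
x, q = sp.Function('x'), sp.Function('q')

# A concrete Lagrangian (particle in a uniform field, chosen for illustration)
L = sp.Rational(1, 2) * sp.Derivative(x(t), t)**2 - x(t)
eq_x = euler_equations(L, x(t), t)[0]           # EL equation in x

# Invertible, time-dependent coordinate change x = q - t**2
Lq = L.subs(x(t), q(t) - t**2).doit()
eq_q = euler_equations(Lq, q(t), t)[0]          # EL equation in q

# Substituting x = q - t**2 into the x-equation reproduces the q-equation
diff = sp.simplify(eq_x.lhs.subs(x(t), q(t) - t**2).doit() - eq_q.lhs)
print(diff)
```

Of course this only demonstrates the invariance for one particular transformation, not the general mess of collecting terms above.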
The dispersion relation you mention is the dispersion relation for O-waves: electromagnetic waves in an unmagnetized cold plasma, or waves where the electric field vector is parallel to the background magnetic field.
You can see that the dispersion relation has a characteristic frequency, $\omega_p$. That is the plasma frequency, and electromagnetic waves with a frequency lower than the plasma frequency cannot penetrate the plasma; they are reflected (like rf-waves being reflected at the ionosphere). For this reason, the plasma frequency is also referred to as the cut-off frequency. If the wave frequency is higher than the plasma frequency, the wave can penetrate the plasma (like microwaves in the GHz range, which are used for satellite communication since they can easily pass through the ionosphere).
The plasma frequency depends on the electron density of the plasma: $$\omega_p = \sqrt{\frac{n_e e^2}{m_e \epsilon_0}},$$with $n_e$ the electron density, $e$ the elementary charge, $m_e$ the mass of the electron, and $\epsilon_0$ the free space permittivity.
If you vary the plasma density, you change the plasma frequency.
Consider an electromagnetic wave emitted onto a plasma whose frequency is slightly below the plasma frequency. It will be reflected by the plasma. If you are able to periodically vary the plasma density and reduce it such that the resulting plasma frequency is below the wave frequency, your wave can penetrate the plasma. Since you do this periodically, you will get a pulsed transmission. |
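To put numbers on this, here is a short sketch computing the plasma frequency from the formula above; the electron density is an assumed, roughly ionosphere-like value:

```python
import math

e    = 1.602176634e-19   # elementary charge [C]
m_e  = 9.1093837015e-31  # electron mass [kg]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]

n_e = 1e12               # assumed electron density [m^-3], ionosphere-like

omega_p = math.sqrt(n_e * e**2 / (m_e * eps0))  # plasma (angular) frequency [rad/s]
f_p = omega_p / (2 * math.pi)                   # plasma frequency [Hz]
print(f"f_p ≈ {f_p/1e6:.2f} MHz")  # waves below this frequency are reflected
```

For this density the cut-off lands near 9 MHz, which is why HF radio reflects off the ionosphere while GHz signals pass through.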
We ended the post on PageRank [1] by reaching the following final equation:
$$\mathbf{r}^{(t+1)} = \beta M \cdot \mathbf{r}^{(t)} + (1-\beta) \left[\frac{1}{N}\right]_N$$
where the random walker with probability \(\beta\) decides to follow a random outgoing link of a node, and with probability \(1-\beta\) it teleports itself to another random node of the graph. We also see that this is equivalent to adding, to the standard PageRank formulation, a vector of \(N\) evenly distributed probabilities.
Topic-Specific PageRank
However, we may want the random walker to teleport itself not to any random node of the graph but to a specific set of nodes, for example the ones that are related to a specific topic. In this way we compute a PageRank that is not generic but related to a specific topic: we call this kind of PageRank Topic-Specific. Why do we need to alter the random teleport? For example, to make the walker teleport to a set of trusted pages, in order to combat web spam (see TrustRank [2]), or simply to give a PageRank to pages in relation to a particular topic.

The new formulation
In general, we express the teleport set, namely the set of nodes to which the walker teleports itself, as \(S\). Suppose the following graph:
we can define, for example, a teleport set as:
$$S = \{1,3\}$$
that we can translate to a personalization vector \(\mathbf{p}\)
$$p = \begin{pmatrix}0.5 \\ 0 \\ 0.5\end{pmatrix}$$
Again, this is a stochastic vector [3] which describes the probabilities of a random walker teleporting randomly to any node in the teleport set. You will agree with me that this kind of formulation is a generalization of the previous one:
$$\mathbf{r}^{(t+1)} = \beta M \cdot \mathbf{r}^{(t)} + (1-\beta) \mathbf{p}$$
since the standard PageRank is a Topic-Specific PageRank with a teleport set equal to all the nodes and with the teleport probability evenly distributed to all the nodes.
Example
Let's see a SageMath
[4] concrete example for the graph in the figure:
# Number of the nodes
n_nodes = 3
# Transition probabilities matrix
M = matrix([[0,1,0],[0.5,0,1],[0.5,0,0]])
b = 0.9
# Personalization vector
p = vector([0.5,0,0.5])
# Init the rank vector with evenly distributed probabilities
r = vector([1/n_nodes for i in range(n_nodes)])
for i in range(1000):
    r = (b*M)*r + (1-b)*p
print("r = " + str(r))
It replies with:
r = (0.392624728850325, 0.380694143167028, 0.226681127982646)
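As a cross-check in plain numpy (rather than SageMath), the fixed point of the iteration can also be obtained directly by solving the linear system \((I-\beta M)\mathbf{r} = (1-\beta)\mathbf{p}\):

```python
import numpy as np

M = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 1.0],
              [0.5, 0.0, 0.0]])
b = 0.9
p = np.array([0.5, 0.0, 0.5])

# Power iteration, as in the SageMath snippet above
r = np.full(3, 1/3)
for _ in range(1000):
    r = b * M @ r + (1 - b) * p

# Closed form: the fixed point of r = b M r + (1-b) p
r_direct = np.linalg.solve(np.eye(3) - b * M, (1 - b) * p)

print(r)   # ≈ [0.3926 0.3807 0.2267]
```

Both agree with the SageMath output above, which is a nice check that the iteration has converged.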
Dead ends? No problem
What if we had dead ends in the graph? As with the standard PageRank [1:1], dead ends leak probability: since they have no outgoing links, they do not pass on the \(\beta\) probability. So either we manually add the \(\mathbf{p}\) vector to every dead-end column, or we divide the algorithm into the following two steps:
Step 1: compute the standard PageRank
$$\mathbf{r}^{(t+1)} = \beta M \cdot \mathbf{r}^{(t)}$$
Step 2: redistribute the leaked probability
$$\mathbf{r}^{(t+1)} = \mathbf{r}^{(t+1)} + (1 - \beta) \cdot \left(1 - \sum_i r^{(t+1)}_i\right) \mathbf{p}$$
Example
Here's a running example with SageMath
[4:1]:
# Number of the nodes
n_nodes = 3
# Transition probabilities matrix
M = matrix([[0,1,0],[0.5,0,1],[0.5,0,0]])
b = 0.9
# Personalization vector
p = vector([0.5,0,0.5])
# Init the rank vector with evenly distributed probabilities
r = vector([1/n_nodes for i in range(n_nodes)])
for i in range(1000):
    # Step 1
    r = (b*M)*r
    # Step 2 - distribute the remaining probability
    r = r + (1 - sum(r)) * p
print("r = " + str(r))
It replies with:
r = (0.392624728850325, 0.380694143167028, 0.226681127982646)
Composability
Suppose now that we have a personalization vector for each of \(k\) different topics, and we want to build a final personalization vector for a specific user who has particular preferences for every topic. From the equation:
$$\mathbf{r}^{(t+1)} = \beta M \cdot \mathbf{r}^{(t)} + (1-\beta) \mathbf{p}$$
since \(\mathbf{p}\) is a stochastic vector (its coordinates sum to 1), we can also obtain it as a composition of multiple topics. I think that showing a concrete example now is the best way of approaching this topic.
Suppose that we have the personalization vectors of 2 topics: cars and bikes. We have two stochastic vectors of \(N\) dimensions (where \(N\) is always the number of nodes):

\(\mathbf{q}_{\textrm{cars}} = \begin{pmatrix}0.2 \\ 0 \\0.8\end{pmatrix}\) is the personalization vector specific to the cars topic; \(\mathbf{q}_{\textrm{bikes}} = \begin{pmatrix}0 \\ 0.7 \\0.3\end{pmatrix}\) is the personalization vector for the bikes topic.
Suppose now that we want to compute a personalized PageRank for a user. To do this, it's sufficient to know how much the user likes cars and bikes, and compose a final personalization vector as a combination of the cars and bikes vectors, weighted by the user's preferences. Suppose that the user's normalized preferences are \(0.30\) for bikes and \(0.70\) for cars; then we compose the vector \(\mathbf{p}\) as follows:
$$\mathbf{p} = 0.70 \cdot \mathbf{q}_{\textrm{cars}} + 0.30 \cdot \mathbf{q}_{\textrm{bikes}}$$
It's easy to prove that \(\mathbf{p}\) is still a stochastic vector. So in general, given a column stochastic vector \(\mathbf{s}\) of \(k\) topic coefficients, in this case:
$$\mathbf{s} = \begin{pmatrix}0.7 \\ 0.3\end{pmatrix}$$
and a general \(n\times k\) matrix \(Q\) that contains all the personalization vectors for all the topics, in this case, for example:
$$\mathbf{Q} = \begin{pmatrix} \mathbf{q}_{\textrm{cars}} & \mathbf{q}_{\textrm{bikes}} \end{pmatrix} = \begin{pmatrix} 0.2 & 0 \\ 0 & 0.7 \\ 0.8 & 0.3 \end{pmatrix}$$
it follows that the final formula for personalized PageRank is:
$$\mathbf{r}^{(t+1)} = \beta M \cdot \mathbf{r}^{(t)} + (1-\beta) Q \cdot \mathbf{s}$$
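A quick numeric check (numpy) that the composed teleport vector \(Q\cdot\mathbf{s}\) is indeed still stochastic:

```python
import numpy as np

Q = np.array([[0.2, 0.0],
              [0.0, 0.7],
              [0.8, 0.3]])   # columns: q_cars, q_bikes
s = np.array([0.7, 0.3])     # user preferences for cars and bikes

p = Q @ s
print(p)   # ≈ [0.14 0.21 0.65]
```

Since each column of \(Q\) sums to 1 and \(\mathbf{s}\) sums to 1, \(Q\mathbf{s}\) is a convex combination of stochastic vectors and therefore sums to 1 as well.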
Example
Let's see the same example in SageMath
[4:2]:
# Number of the nodes
n_nodes = 3
# Transition probabilities matrix
M = matrix([[0,0,1],[0.5,0,0],[0.5,1,0]])
b = 0.9
# Personalization vectors for cars and bikes
P = matrix([[0.2, 0],[0, 0.7],[0.8, 0.3]])
# Vector of user likes for cars and bikes
s = vector([0.7, 0.3])
# Init the rank vector with evenly distributed probabilities
r = vector([1/n_nodes for i in range(n_nodes)])
for i in range(1000):
    r = (b*M)*r + (1-b)*P*s
print("r = " + str(r))
It replies with:
r = (0.388329718004339, 0.195748373101952, 0.415921908893709)
The TrustRank is explained in the book Mining of Massive Datasets, Jure Leskovec, Anand Rajaraman, Jeff Ullman ↩︎
There's no need that the probabilities must be evenly distributed to all the nodes in the teleport set, we can also specify arbitrary ones ↩︎ |
Learning Outcomes Conduct and interpret hypothesis tests of two variances
Another of the uses of the F distribution is testing two variances. It is often desirable to compare two variances rather than two averages. For instance, college administrators would like two college professors grading exams to have the same variation in their grading. In order for a lid to fit a container, the variation in the lid and the container should be the same. A supermarket might be interested in the variability of check-out times for two checkers.
In order to perform an F test of two variances, it is important that the following are true: The populations from which the two samples are drawn are normally distributed. The two populations are independent of each other.
Unlike most other tests in this book, the F test for equality of two variances is very sensitive to deviations from normality. If the two distributions are not normal, the test can give higher p-values than it should, or lower ones, in ways that are unpredictable. Many texts suggest that students not use this test at all, but in the interest of completeness we include it here.
Suppose we sample randomly from two independent normal populations. Let [latex]\displaystyle{{\sigma}_{{1}}}^{{2}},{{\sigma}_{{2}}}^{{2}}[/latex] be the population variances and [latex]\displaystyle{{s}_{{1}}}^{{2}},{{s}_{{2}}}^{{2}}[/latex] the sample variances. Let the sample sizes be n1 and n2. Since we are interested in comparing the two sample variances, we use the F ratio:
[latex]\displaystyle{F}=\frac{\left[\frac{({s}_{1})^{2}}{({\sigma}_{1})^{2}}\right]}{\left[\frac{({s}_{2})^{2}}{({\sigma}_{2})^{2}}\right]}[/latex]
F has the distribution F ~ F(n1 – 1, n2 – 1), where n1 – 1 are the degrees of freedom for the numerator and n2 – 1 are the degrees of freedom for the denominator.
If the null hypothesis is [latex]\displaystyle{\sigma_{{1}}^{{2}}}={\sigma_{{2}}^{{2}}}[/latex], then the F ratio becomes [latex]\displaystyle{F}=\frac{\left[\frac{({s}_{1})^{2}}{({\sigma}_{1})^{2}}\right]}{\left[\frac{({s}_{2})^{2}}{({\sigma}_{2})^{2}}\right]}=\frac{({s}_{1})^{2}}{({s}_{2})^{2}}[/latex]
The F ratio could also be [latex]\displaystyle\frac{{({s}_{2})}^{{2}}}{{({s}_{1})}^{{2}}}[/latex]. It depends on Ha and on which sample variance is larger. If the two populations have equal variances, then [latex]\displaystyle{s}_{{1}}^{{2}}[/latex] and [latex]\displaystyle{s}_{{2}}^{{2}}[/latex] are close in value and F = [latex]\displaystyle\frac{{({s}_{1})}^{{2}}}{{({s}_{2})}^{{2}}}[/latex] is close to one. But if the two population variances are very different, [latex]\displaystyle{s}_{{1}}^{{2}}[/latex] and [latex]\displaystyle{s}_{{2}}^{{2}}[/latex] tend to be very different, too. Choosing [latex]\displaystyle{s}_{{1}}^{{2}}[/latex] as the larger sample variance causes the ratio [latex]\displaystyle\frac{{({s}_{1})}^{{2}}}{{({s}_{2})}^{{2}}}[/latex] to be greater than one. If [latex]\displaystyle{s}_{{1}}^{{2}}[/latex] and [latex]\displaystyle{s}_{{2}}^{{2}}[/latex] are far apart, then F = [latex]\displaystyle\frac{{({s}_{1})}^{{2}}}{{({s}_{2})}^{{2}}}[/latex] is a large number.
Therefore, if F is close to one, the evidence favors the null hypothesis (the two population variances are equal). But if F is much larger than one, then the evidence is against the null hypothesis. A test of two variances may be left-, right-, or two-tailed.

Example
Two college instructors are interested in whether or not there is any variation in the way they grade math exams. They each grade the same set of 30 exams. The first instructor’s grades have a variance of 52.3. The second instructor’s grades have a variance of 89.9. Test the claim that the first instructor’s variance is smaller. (In most colleges, it is desirable for the variances of exam grades to be nearly the same among instructors.) The level of significance is 10%.
Solution:
Calculate the test statistic: by the null hypothesis ([latex]\displaystyle{\sigma_{{1}}^{{2}}}={\sigma_{{2}}^{{2}}}[/latex]), the F statistic is:

[latex]\displaystyle{F}=\frac{\left[\frac{({s}_{1})^{2}}{({\sigma}_{1})^{2}}\right]}{\left[\frac{({s}_{2})^{2}}{({\sigma}_{2})^{2}}\right]}=\frac{({s}_{1})^{2}}{({s}_{2})^{2}}=\frac{52.3}{89.9}={0.5818}[/latex]
Distribution for the test: F29,29 where n1 – 1 = 29 and n2 – 1 = 29.
Graph: This test is left-tailed.
Draw the graph labeling and shading appropriately.
Probability statement: p-value = P(F < 0.5818) = 0.0753
Compare α and the p-value: α = 0.10, so α > p-value.
Make a decision: Since α > p-value, reject H0.
Conclusion: With a 10% level of significance, from the data, there is sufficient evidence to conclude that the variance in grades for the first instructor is smaller.
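The same numbers can be reproduced in Python; scipy has no built-in two-sample F test, so the statistic and the left-tail p-value are computed by hand here:

```python
from scipy import stats

s1_sq, s2_sq = 52.3, 89.9   # sample variances of the two instructors
n1, n2 = 30, 30

F = s1_sq / s2_sq                          # test statistic under H0
p_value = stats.f.cdf(F, n1 - 1, n2 - 1)   # left-tailed test
print(round(F, 4), round(p_value, 4))      # ≈ 0.5818, 0.0753

alpha = 0.10
print("reject H0" if p_value < alpha else "do not reject H0")
```

This matches the calculator output below.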
Press STAT and arrow over to TESTS. Arrow down to D:2-SampFTest. Press ENTER. Arrow to Stats and press ENTER. For Sx1, n1, Sx2, and n2, enter [latex]\displaystyle\sqrt{52.3}[/latex], 30, [latex]\displaystyle\sqrt{89.9}[/latex], and 30, pressing ENTER after each. Arrow to σ1: and select <σ2. Press ENTER. Arrow down to Calculate and press ENTER. F = 0.5818 and p-value = 0.0753. Do the procedure again and try Draw instead of Calculate.
try it
The New York Choral Society divides male singers up into four categories from highest voices to lowest: Tenor1, Tenor2, Bass1, Bass2. In the table are heights of the men in the Tenor1 and Bass2 groups. One suspects that taller men will have lower voices, and that the variance of height may go up with the lower voices as well. Do we have good evidence that the variance of the heights of singers in each of these two groups (Tenor1 and Bass2) are different?
69 72 71 74 75
| Tenor 1 | Bass 2 | Tenor 1 | Bass 2 | Tenor 1 | Bass 2 |
| --- | --- | --- | --- | --- | --- |
| 69 | 72 | 67 | 72 | 68 | 67 |
| 72 | 75 | 70 | 74 | 67 | 70 |
| 71 | 67 | 65 | 70 | 64 | 70 |
| 66 | 75 | 72 | 66 | 76 | 74 |
| 70 | 68 | 74 | 72 | 68 | 75 |
| 71 | 72 | 64 | 68 | 66 | 74 |
| 73 | 70 | 68 | 72 | 66 | 72 |
The histograms are not as normal as one might like. Plot them to verify. However, we proceed with the test in any case.
Subscripts: T1 = Tenor 1 and B2 = Bass 2
The standard deviations of the samples are [latex]s_{T1} = 3.3302[/latex] and [latex]s_{B2} = 2.7208[/latex].
The hypotheses are [latex]\displaystyle{H}_{{o}}:{\sigma}_{{T1}}^{{2}}={\sigma}_{{B2}}^{{2}}[/latex] and [latex]\displaystyle{H}_{{a}}:{\sigma}_{{T1}}^{{2}}\neq{\sigma}_{{B2}}^{{2}}[/latex] (two-tailed test)
The F statistic is 1.4894 with 20 and 25 degrees of freedom.
The p-value is 0.3430. If we assume alpha is 0.05, then we cannot reject the null hypothesis.
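Here is a sketch of the same computation from the summary statistics above (the result differs slightly from the quoted values, which were presumably computed from the raw data):

```python
from scipy import stats

s_T1, s_B2 = 3.3302, 2.7208   # sample standard deviations
n_T1, n_B2 = 21, 26           # sample sizes (df = 20 and 25)

F = s_T1**2 / s_B2**2
# two-tailed p-value: double the smaller tail beyond the observed statistic
p = 2 * min(stats.f.cdf(F, n_T1 - 1, n_B2 - 1),
            stats.f.sf(F, n_T1 - 1, n_B2 - 1))
print(F, p)   # F ≈ 1.498, p well above 0.05 — cannot reject H0
```

Either way the conclusion is the same: no good evidence of unequal variances at alpha = 0.05.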
We have no good evidence from the data that the heights of Tenor1 and Bass2 singers have different variances (despite there being a significant difference in mean heights of about 2.5 inches.) |
Before obtaining the hazard function of $T=\min\{T_1,...,T_n\}$, let's first derive its distribution and density function, i.e. the CDF and PDF of the first-order statistic from a sample of independent but not identically distributed random variables.
The distribution of the minimum of $n$ independent random variables is
$$F_T(t) = 1-\prod_{i=1}^n[1-F_i(t)]$$
(see the reasoning in this CV post, if you don't know it already)
We differentiate to obtain its density function:
$$f_T(t) =\frac {\partial}{\partial t}F_T(t) = f_1(t)\prod_{i\neq 1}[1-F_i(t)]+...+f_n(t)\prod_{i\neq n}[1-F_i(t)]$$
Using $h_i(t) = \frac {f_i(t)}{1-F_i(t)} \Rightarrow f_i(t) = h_i(t)(1-F_i(t))$ and substituting in $f_T(t)$, we have
$$f_T(t) = h_1(t)(1-F_1(t))\prod_{i\neq 1}[1-F_i(t)]+...+h_n(t)(1-F_n(t))\prod_{i\neq n}[1-F_i(t)]$$
$$=\left(\prod_{i=1}^n[1-F_i(t)]\right)\sum_{i=1}^nh_i(t),\;\;\; h_i(t) = \frac {f_i(t)}{1-F_i(t)} \tag{1}$$
which is the density function of the minimum of $n$ independent but not identically distributed random variables.
Then the hazard rate of $T$ is
$$h_T(t) = \frac {f_T(t)}{1-F_T(t)} = \frac {\left(\prod_{i=1}^n[1-F_i(t)]\right)\sum_{i=1}^nh_i(t)}{\prod_{i=1}^n[1-F_i(t)]} = \sum_{i=1}^nh_i(t) \tag{2}$$ |
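A quick numeric check of $(1)$ and $(2)$, using two arbitrary (assumed) Weibull lifetime distributions:

```python
import numpy as np
from scipy import stats

# Two independent, non-identical lifetime distributions (arbitrary choices)
dists = [stats.weibull_min(1.5, scale=2.0), stats.weibull_min(0.8, scale=1.0)]
t = 1.3

S_T = np.prod([d.sf(t) for d in dists])   # survival of the minimum: prod_i [1 - F_i(t)]

# density of the minimum via the product rule (the sum form preceding eq. (1))
f_T = sum(di.pdf(t) * np.prod([dj.sf(t) for j, dj in enumerate(dists) if j != i])
          for i, di in enumerate(dists))

h_direct = f_T / S_T                             # hazard from its definition
h_sum = sum(d.pdf(t) / d.sf(t) for d in dists)   # eq. (2): sum of the hazards
print(h_direct, h_sum)
```

The two values agree, as the algebra in $(2)$ predicts.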
I decided to do a "back-of-the-envelope" calculation to see for myself why dark matter can explain, at least qualitatively, the anomalous rotation of galaxies.
The problem: if you move around the center of a galaxy in an approximately circular orbit, the farther you are from the center, the slower you should be moving. Observational evidence contradicts this: the velocities of stars in distant galaxies do not appear to fall off with distance.
Numerically, in the first approximation, if you assume that most of the mass of a galaxy is concentrated in its center, in the outlying regions, the gravitational potential should be proportional to $-1/r$. The kinetic energy of a star orbiting the galactic center should be proportional to this, but kinetic energy is also proportional to $v^2$, where $v$ is the star's velocity. If $v^2\sim 1/r$, then $v\sim 1/\sqrt{r}$, so stellar velocities should be proportional to the inverse square root of radial distance. Actual observations, however, reveal velocity curves that are nearly flat.
One possibility is that the galaxy is surrounded by "dark matter", characterized by the following important properties:
It interacts with normal matter only gravitationally It is not present in substantial quantities in regions dominated by normal matter
Unfortunately, it is not sufficient to simply propose a homogeneous distribution of dark matter with spherical "cavities" in which galaxies are situated. That is because a well known result from classical physics shows that inside such a spherical cavity, the gravitational potential is constant, and there is no net gravitational force (from the surrounding matter), so such a dark matter distribution would not influence galaxy rotation.
To prove this result, we need to compute the gravitational potential at a point $\vec{P}$, by integrating over all points $\vec{R}$ that lie within a spherical shell of inner radius $R_1$, outer radius $R_2$. The integral can be computed without much difficulty if we find a suitable volume element. One such volume element can be obtained by setting up a coordinate system as depicted in Figure 1.
Figure 1: A suitable coordinate system for integrating the volume of a spherical shell.
Using coordinates $R$, $\alpha$ and $\beta$, the distance between points $\vec{P}$ and $\vec{R}$ can be calculated as $r=\sqrt{P^2+R^2-2PR\cos\alpha}$. The volume element at point $\vec{R}$ would be $dV=R^2\sin\alpha~dRd\alpha d\beta$. The gravitational potential at point $\vec{P}$ due to the volume element at point $\vec{R}$ will be proportional to $-1/r$. So all we need to calculate is the integral $\int-1/r~dV$, using integration limits of $\alpha=0...\pi$, $\beta=0...2\pi$ and $R=R_1...R_2$. This definite integral can be computed exactly for the case when $P\gt 0$ and $R_1\gt P$, i.e., when $\vec{P}$ lies inside the cavity. The result is $2\pi(R_1^2-R_2^2)$. As the result does not depend on the value of $P$, it is constant throughout the interior of the cavity. A constant potential means zero gradient, and no net gravitational force.
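This constancy is easy to confirm numerically. The sketch below (in units with $G=\rho=1$) integrates $-1/r$ over a shell with $R_1=1$, $R_2=2$ for two different interior points and compares against $2\pi(R_1^2-R_2^2)=-6\pi$:

```python
import numpy as np
from scipy.integrate import dblquad

R1, R2 = 1.0, 2.0

def potential(P):
    # integrate -R^2 sin(alpha) / r over alpha in [0, pi] and R in [R1, R2];
    # the beta integral just contributes a factor of 2*pi
    val, _ = dblquad(
        lambda a, R: -R**2 * np.sin(a) / np.sqrt(P**2 + R**2 - 2*P*R*np.cos(a)),
        R1, R2, 0.0, np.pi)
    return 2 * np.pi * val

phi1, phi2 = potential(0.2), potential(0.7)
expected = 2 * np.pi * (R1**2 - R2**2)   # = -6*pi
print(phi1, phi2, expected)
```

Both interior points give the same value, matching the closed-form result.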
But this is rather counterproductive, isn't it! This result basically proves that dark matter cannot offer a solution to the galaxy rotation issue. So why do so many clever people insist on the opposite?
To answer that question, consider a different kind of dark matter distribution. Rather than having a cavity with sharp boundary, imagine a dark matter distribution that is smooth, for instance characterized by the function $1-e^{-r^2}$ (Figure 2).
Figure 2: The assumed peculiar distribution of dark matter, as a function of radial distance.
So now the potential function will be $(1-e^{-r^2})/r$, and we need to compute the integral $\int-(1-e^{-r^2})/r~dV$. This integral can also be evaluated, but the result is no longer independent of the value of $P$. On the contrary, when plotted as a function of $P$, the potential would look like that shown in Figure 3.
Figure 3: The gravitational potential of the dark matter "halo" shown in Figure 2.
Contrast this curve with the gravitational "potential well" of a point mass, shown in Figure 4.
Figure 4: The gravitational potential well of a point mass.
What is remarkable when you compare these two curves is that just when the "potential well" flattens at greater radial distance, the gravitational potential due to the surrounding dark matter becomes steeper. This is precisely what it takes to "flatten out" the rotation curves of a galaxy, causing stars far from the center to move faster than they would if influenced only by the gravitational attraction of the galaxy itself.
Qualitatively this all makes sense, but to verify if these ideas are valid, it is necessary to get some quantitative results. Specifically, one needs to compare the actual mass and rotational curve of a galaxy, and estimate what kind of dark matter distribution it would take to produce that rotational curve.
Taking a visible galactic mass of $4\times 10^{41}~{\rm kg}$ (approximately $2\times 10^{11}$ solar masses), and a galactic radius of $5\times 10^{20}$ meters (roughly 50,000 light-years), and assuming a dark matter "halo" with a characteristic radius of twice the galactic radius and a mass distribution of $1-e^{-r^2}$, as discussed above, I get a gravitational potential that "flattens out" very nicely if the asymptotic value of the halo mass density is on the order of $10^{-20}~{\rm kg}/{\rm m}^3$, as shown in Figure 5.
Figure 5: The gravitational potential, as a function of radial distance, around a point mass that is surrounded by a dark matter halo.
This is a very high value compared to the critical density of the universe ($\sim 10^{-26}~{\rm kg}/{\rm m}^3$), but how much mass would it represent? Integrating $1-e^{-r^2}$ over the unit sphere I get 1.81, or about 0.43 times the volume of the unit sphere. For a sphere with a radius of $10^{21}$ meters and a matter density of $10^{-20}~{\rm kg}/{\rm m}^3$, the mass is about $4.2\times 10^{43}~{\rm kg}$, and 0.43 times that amount is about $1.8\times 10^{43}~{\rm kg}$, or about 45 times the (visible) galactic mass.
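This back-of-the-envelope arithmetic is easy to sanity-check numerically; here is a short sketch using the radius, density, and visible mass quoted above (the midpoint-rule integration is my own, just to confirm the numbers):

```python
from math import pi, exp

# Integrate (1 - e^{-r^2}) over the unit sphere with spherical shells,
# using a simple midpoint rule.
n = 10000
integral = sum(
    4 * pi * ((i + 0.5) / n) ** 2 * (1 - exp(-((i + 0.5) / n) ** 2))
    for i in range(n)
) / n
print(integral)                    # ~1.81, i.e. ~0.43 of the unit-sphere volume

# Scale to the quoted numbers: sphere of radius 1e21 m, density 1e-20 kg/m^3.
R = 1e21        # meters
rho = 1e-20     # kg/m^3
halo_mass = integral * R**3 * rho  # unit-sphere integral rescaled to radius R
print(halo_mass / 4e41)            # ~45 visible galactic masses
```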
This is very interesting, because it shows that even a "back of the envelope" calculation can reproduce, well within an order of magnitude, the amount of "missing mass" that dark matter theory predicts.
So if it works out so well, why is dark matter controversial? One very simple reason: Figure 2. What kind of matter would assume this peculiar distribution? What force is responsible for removing dark matter from regions dominated by ordinary matter, even though the two types of matter are gravitationally attracted to each other?
It is these questions that motivate researchers like John Moffat, who are busy developing alternate theories that do not involve dark matter.
Then of course there are also people who believe that the problem doesn't even exist in the first place: that once you take relativity properly into account, the rotation of galaxies turns out to be perfectly normal! I'm not sure, but upon first reading, the paper certainly appeared convincing. |
Let $G$ be a group.
It is written in my text that there is a homomorphism $\phi:G\rightarrow Aut(H)$ where $H$ is a normal subgroup of $G$ and $\ker(\phi)=C_G(H)$.
From this, I realize that $C_G(H)$ is normal when $H$ is normal.
Is it a trivial result?
That is, can one deduce this from the definition using basic set operations, by showing $gC_G(H)g^{-1}\subset C_G(H)$?
What about $N_G(H)$? Is this normal? |
I am writing some Calculus content, and I would like a "big list" of useful functions which are defined by definite integrals, but are not elementary functions.
Two examples of such functions are
$$ \mathrm{Erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\mathrm{d} t $$
which is fundamentally important to statistics, and
$$ \mathrm{Si}(x) = \int_0^x \frac{\sin(t)}{t} \mathrm{d} t $$
which comes up all the time in signal processing.
I would like to be able to sketch such functions, express some definite integrals (like $\int_0^1 e^{-4t^2} dt$) in terms of such functions, etc.
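For instance, the example just mentioned reduces to $\mathrm{Erf}$ by the substitution $u=2t$: $\int_0^1 e^{-4t^2}\,dt = \frac12\int_0^2 e^{-u^2}\,du = \frac{\sqrt{\pi}}{4}\mathrm{Erf}(2)$. A quick numerical check (my own sketch):

```python
from math import erf, exp, pi, sqrt

# Closed form via the substitution u = 2t.
closed = sqrt(pi) / 4 * erf(2)

# Crude midpoint-rule integral of e^{-4 t^2} on [0, 1] to confirm.
n = 100000
numeric = sum(exp(-4 * ((i + 0.5) / n) ** 2) for i in range(n)) / n

print(closed, numeric)   # both ≈ 0.441
```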
So what other functions are important enough to have their own name, and are given as integrals of elementary functions? |
What is the theoretical convergence rate for an FFT Poisson solver?
I am solving a Poisson equation: $$\nabla^2 V_H(x, y, z) = -4\pi n(x, y, z)$$ with $$n(x, y, z) = {3\over\pi} ((x-1)^2 + (y-1)^2 + (z-1)^2 - 1)$$ on the domain $[0, 2] \times [0, 2] \times [0, 2]$ with periodic boundary condition. This charge density is net neutral. The solution is given by: $$ V_H({\bf x}) = \int {n({\bf y})\over |{\bf x}-{\bf y}|} d^3 y $$ where ${\bf x}=(x, y, z)$. In reciprocal space $$ V_H({\bf G}) = 4\pi {n({\bf G})\over G^2} $$ where ${\bf G}$ are the reciprocal space vectors. I am interested in the Hartree energy: $$ E_H = {1\over 2} \int {n({\bf x}) n({\bf y})\over |{\bf x}-{\bf y}|} d^3 x d^3 y = {1\over 2} \int V_H({\bf x}) n({\bf x}) d^3 x $$ In reciprocal space this becomes (after discretization): $$ E_H = 2\pi \sum_{{\bf G}\ne 0} {|n({\bf G})|^2\over G^2} $$ The ${\bf G}=0$ term is omitted, which effectively makes the charge density net neutral (and since it is already neutral, then everything is consistent).
For the test problem above, this can be evaluated analytically and one gets: $$ E_H = {128\over 35\pi} = 1.16410... $$ How fast should this energy converge?
Here is a program using NumPy that does the calculation.
from numpy import empty, pi, meshgrid, linspace, sum
from numpy.fft import fftn, fftfreq

E_exact = 128/(35*pi)
print "Hartree Energy (exact): %.15f" % E_exact
f = open("conv.txt", "w")
for N in range(3, 384, 10):
    print "N =", N
    L = 2.
    x1d = linspace(0, L, N)
    x, y, z = meshgrid(x1d, x1d, x1d)
    nr = 3 * ((x-1)**2 + (y-1)**2 + (z-1)**2 - 1) / pi
    ng = fftn(nr) / N**3
    G1d = N * fftfreq(N) * 2*pi/L
    kx, ky, kz = meshgrid(G1d, G1d, G1d)
    G2 = kx**2+ky**2+kz**2
    G2[0, 0, 0] = 1  # omit the G=0 term
    tmp = 2*pi*abs(ng)**2 / G2
    tmp[0, 0, 0] = 0  # omit the G=0 term
    E = sum(tmp) * L**3
    print "Hartree Energy (calculated): %.15f" % E
    f.write("%d %.15f\n" % (N, E))
f.close()
And here is a convergence graph (just plotting the
conv.txt from the above script, here is a notebook that does it if you want to play with this yourself):
As you can see, the convergence is linear, which was a surprise to me; I thought that FFT converges much faster than that.
Update:
The solution has a cusp at the boundary (I didn't realize this before). In order for FFT to converge fast, the solution must have all derivatives smooth. So I also tried the following right hand side:
nr = 3*pi*sin(pi*x)*sin(pi*y)*sin(pi*z)/4
Which you can just put into the script above (updated script). The exact solution is $V_H=\sin(\pi x)\sin(\pi y)\sin(\pi z)$, which should be infinitely differentiable. The exact integral in this case is $E_H = \frac{3\pi}{8}$. Yet the FFT solver still converges only linearly towards this exact solution, as can be checked by running the script above and plotting the convergence (updated notebook with plots).
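One thing I would double-check (an assumption on my part, not verified against the full 3D script) is the grid: for a periodic FFT the sample points should exclude the duplicated endpoint, since `linspace(0, L, N)` includes both $x=0$ and $x=L$, which are the same periodic point. Here is a minimal 1D sketch of the same spectral solve on such a grid, where a smooth periodic right-hand side is resolved to machine precision:

```python
import numpy as np

# Solve u'' = -f with f = sin(pi*x) on [0, 2], periodic; exact u = f / pi^2.
L = 2.0
errs = []
for N in (8, 16, 32):
    x = np.linspace(0, L, N, endpoint=False)    # N distinct periodic points
    f = np.sin(np.pi * x)                       # smooth, periodic RHS
    fg = np.fft.fft(f)
    G = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # angular wavenumbers 2*pi*k/L
    G2 = G ** 2
    G2[0] = 1.0                                 # dodge division by zero
    ug = fg / G2                                # -G^2 u_G = -f_G  =>  u_G = f_G / G^2
    ug[0] = 0.0                                 # drop the G = 0 mode
    u = np.fft.ifft(ug).real
    exact = np.sin(np.pi * x) / np.pi ** 2
    errs.append(np.max(np.abs(u - exact)))
print(errs)   # all near machine precision
```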
Does anyone know any benchmark in 3D so that I can see faster convergence than linear? |
Circuit requirements:
DC voltage gain: 50 dB
Unity gain bandwidth: 50 MHz
Phase margin: 45 deg (60 deg is recommended).
The circuit is completely symmetric, so M1=M2, M3=M4, and M5=M6. Therefore the DC gain of the first stage without the buffer is
\$A_{V_o}=-g_{m_1}R_{out}, \ R_{out}=\frac{1}{1/r_{o_1}+1/r_{o_3}+1/r_{o_5}+g_{m_5}-g_{m_3}}\$.
Since we need a relatively high DC gain I can choose \$g_{m_5}=g_{m_3}\$ to make \$R_{out}\$ maximum. So \$R_{out}\$ becomes \$R_{out}=r_{o_1}||r_{o_3}||r_{o_5}\$.
From now on let's follow two different approaches in order to satisfy the above requirements.
Approach 1:
The circuit without the source follower M7 and the compensation network.
In this case the load capacitor, CL(=10pF), is directly connected to the drain of M2. Let's first try to find an equation for the unity gain frequency, \$f_u\$. The circuit has two poles with the dominant pole situated at the output node. If we assume that the next dominant pole is located far from the dominant pole, the transfer function can be approximated as
\$H(s)=\frac{A_{V_o}}{1+s/\omega_{p_1}}, \ \omega_{p_1}=\frac{1}{C_LR_{out}}\$
Since \$|H(j2\pi f_u)|=1\$, the equation for \$f_u\$ becomes roughly
\$f_u= \frac{g_{m_1}}{2\pi C_L}\$.
With \$f_u\$=50MHz, \$C_L\$=10pF, the above equation gives 3.14mA/V for \$g_{m_1}\$ (Neglecting parasitic capacitances). So that's for \$g_{m_1}\$.
Now with the \$g_{m_1}\$ already determined and the DC gain 50dB (or equivalently ~320V/V), it's just left to figure out \$R_{out}\$, which, as per equation for DC gain, should be ~100kohms. So I can easily modify \$r_{o_1}\$,\$r_{o_3}\$, and \$r_{o_5}\$ in order to make the parallel combination of them 100kohms. That's for \$R_{out}\$.
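The two hand calculations above can be reproduced in a couple of lines (a sketch with the numbers stated in the requirements):

```python
from math import pi

# Approach-1 sizing from f_u = g_m1 / (2*pi*C_L) and A_v = g_m1 * R_out.
f_u = 50e6      # unity-gain bandwidth, Hz
C_L = 10e-12    # load capacitance, F
g_m1 = 2 * pi * f_u * C_L
print(g_m1 * 1e3)     # ≈ 3.14 mA/V

A_v = 10 ** (50.0 / 20.0)   # 50 dB ≈ 316 V/V
R_out = A_v / g_m1
print(R_out / 1e3)    # ≈ 100 kΩ
```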
Phase margin is 90 deg since the other high frequency pole doesn't affect the phase margin. A 90-deg PM exhibits no ringing or overshoot, but trades off speed. But I think that's fine.
So it seems that the first approach did work well without the need for the source follower and the compensation network.
Approach 2
The complete circuit.
Here comes my confusion. I do not know as to why we need to consider those added parts while approach 1 worked out fine. Could anyone please explain as to how the second approach can be better than the first (if it's better at all)?
I think compensation is only required if the phase margin has dropped below 45 degrees as a result of subsequent stages, or if we want good unity-gain bandwidth (?). But I do not see any good reason to invoke compensation in the above circuit.
Your best bet is to keep track of two things for both sides: how likely
this particular result is, and how likely anything less than this result is. Given these we can multiply them together relatively easily.
$$\begin{array}{r|rr|rr|r}x & 3\text{d}6 = x & 3\text{d}6 < x & 1\text{d}20 = x & 1\text{d}20 < x & 1\text{d}20 < x \cap 3\text{d}6 = x\\\hline 1 & 0 & 0 & 1 & 0 & 0 \\2 & 0 & 0 & 1 & 1 & 0 \\3 & 1 & 0 & 1 & 2 & 2 \\4 & 3 & 1 & 1 & 3 & 9 \\5 & 6 & 4 & 1 & 4 & 24 \\6 & 10 & 10 & 1 & 5 & 50 \\7 & 15 & 20 & 1 & 6 & 90 \\8 & 21 & 35 & 1 & 7 & 147 \\9 & 25 & 56 & 1 & 8 & 200 \\10 & 27 & 81 & 1 & 9 & 243 \\11 & 27 & 108 & 1 & 10 & 270 \\12 & 25 & 135 & 1 & 11 & 275 \\13 & 21 & 160 & 1 & 12 & 252 \\14 & 15 & 181 & 1 & 13 & 195 \\15 & 10 & 196 & 1 & 14 & 140 \\16 & 6 & 206 & 1 & 15 & 90 \\17 & 3 & 212 & 1 & 16 & 48 \\18 & 1 & 215 & 1 & 17 & 17 \\19 & 0 & 216 & 1 & 18 & 0 \\20 & 0 & 216 & 1 & 19 & 0 \\\hline\text{total} & 216 & & 20 & & 2052\end{array}$$
Total up everything in the rightmost column, divide by the totals for the $3\text{d}6 = x$ and $1\text{d}20 = x$ columns, and you win: Bob wins $\frac{2052}{4320} = \frac{19}{40} = 0.475$ of the time.
To get ties, or where Bill wins, change what pairs you multiply: both $=$ ones for ties, and $3\text{d}6 < x$ and $1\text{d}20 = x$ for Bill's wins.
This particular one is interesting: because both distributions are symmetrical, and they have the same mean, Bill and Bob will both win the same proportion of the time.
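The table can be verified by brute force over all $6^3 \times 20$ simple events; a quick sketch:

```python
from itertools import product
from fractions import Fraction

# Enumerate all outcomes and count those where Bill's 1d20 comes up
# strictly below Bob's 3d6 total.
wins = sum(
    1
    for a, b, c, d in product(range(1, 7), range(1, 7), range(1, 7), range(1, 21))
    if d < a + b + c
)
total = 6 ** 3 * 20
prob = Fraction(wins, total)
print(wins, total, prob)  # 2052 4320 19/40
```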
It occurs to me that there's another part to this question: how do we efficiently calculate result probabilities for combinations of dice?
The answer to that is an operation called
convolution, which I'll present here in discrete form.
Given two functions $f(x)$ and $g(x)$, the convolution is $$(f * g)(x) = \sum_{k = -\infty}^\infty f(k)g(x-k)$$
This can be interpreted in probability theory as the following: we have two random variables $f$ and $g$, with probability functions $f(x)$ and $g(x)$. The probability function for $f + g$ -- adding the two results together -- is equal to $(f * g)(x)$.
Obviously with those infinities in there, we have to fiddle with it a little to actually get anything done. In our case, because we're dealing with dice, our functions have what's called
limited support: they're only non-zero in a small area, so we only need to cover that small area.
Let's do a specific example. Say I want to calculate the probability that I'll get a $7$ on $3\text{d}6$. $3\text{d}6$ is the same as $1\text{d}6$ + $2\text{d}6$, so I can convolve these two. I'll call their functions $f$ and $g$ respectively.
$f$ here has limited support: the only values it is non-zero for are $1$ through $6$. This allows us to change the limits of our summation to the bounds of $f$'s support.
$$\begin{align}(f * g)(7) &= \sum_{k=1}^6 f(k)g(7 - k)\\&= f(1)g(6) + f(2)g(5) + f(3)g(4) + f(4)g(3) + f(5)g(2) + f(6)g(1) \\&= \frac{1}{6}\cdot\frac{5}{36} + \frac{1}{6}\cdot\frac{4}{36} + \frac{1}{6}\cdot\frac{3}{36} + \frac{1}{6}\cdot\frac{2}{36} + \frac{1}{6}\cdot\frac{1}{36} + \frac{1}{6}\cdot\frac{0}{36} \\&= \frac{15}{216}\end{align}$$
Using convolution, then, we can calculate the probabilities of the summed results of multiple dice, without necessarily considering every single simple event: to calculate, say, the probability distribution of $5\text{d}6$, we can take the distribution of $4\text{d}6$ and the distribution of $1\text{d}6$ and convolve them. And to get $4\text{d}6$'s distribution we can convolve $3\text{d}6$ and $1\text{d}6$, etc. So instead of counting out $7776$ possibilities, we instead handle $21\cdot6 + 16\cdot6 + 11\cdot6 + 6\cdot6 = 324$ total multiplications and a similar number of additions. |
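The procedure above can be sketched in a few lines (working with counts of ways rather than probabilities, so everything stays integer):

```python
# Discrete convolution of two dice distributions given as lists of counts.
def convolve(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

d6 = [1] * 6            # one way each for faces 1..6; index 0 <-> face 1
dist = d6
for _ in range(2):      # build 3d6 by convolving in 1d6 twice
    dist = convolve(dist, d6)

# dist[k] = number of ways to roll a total of k + 3 on 3d6
print(dist[7 - 3], sum(dist))   # 15 216
```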
This seems to be a bit of an odd one. I have worked out a possible answer, but I have a feeling I am going about this the wrong way. Help would be appreciated.
Find $m,M\in \mathbb{R}$ so that for every $x\in \mathbb{R}^2$,
$$m||x||_2 \leq ||x||_\infty \leq M||x||_2$$
I tried plugging in the $||x||_2$ norm and solving for $m$, which got me
$m\leq \max\{|x_1|,|x_2|\}\,(x_1^2+x_2^2)^{-1/2}$
This just seems off to me... |
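For what it's worth, the ratio in that last line is bounded between $1/\sqrt{2}$ and $1$ over all nonzero $x$, which suggests the candidates $m = 1/\sqrt{2}$ and $M = 1$. A quick numerical spot-check of those candidates (my own sketch, not a proof):

```python
from math import sqrt
import random

# Check m*||x||_2 <= ||x||_inf <= M*||x||_2 on random points of R^2
# for the candidate constants m = 1/sqrt(2), M = 1.
random.seed(0)
m, M = 1 / sqrt(2), 1.0
for _ in range(100000):
    x1, x2 = random.uniform(-10, 10), random.uniform(-10, 10)
    two = sqrt(x1 * x1 + x2 * x2)
    inf = max(abs(x1), abs(x2))
    assert m * two <= inf + 1e-12
    assert inf <= M * two + 1e-12
print("ok")
```

Equality on the left is reached at $x=(1,1)$ and on the right at $x=(1,0)$, so these constants cannot be improved.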
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in tag-wiki. Feel free to create new tag and retag the two questions if you have better name. I do not plan on adding other questions to that tag until tommorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract your downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.) Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x. A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences). |
Think about it like the following:
$z_i \sim Mult(\pi)$ where $\pi$ is a $\textit{vector}$ of probabilities of size $K$. The multinomial distribution is a generalization of the categorical distribution, which the poster below correctly mentions. The categorical distribution is equivalent to considering the multinomial when we have
one trial and more than 2 categories for consideration. In other words, the author of the document could have replaced the multinomial specification with the categorical one, and it would have been more easily interpreted.
As a very concrete example, let's consider the case where you have just completed step (1) above and have drawn all the means, $\mu_k$ for $k = 1, \ldots, K$. Now imagine we have a $K$-sided die that you can roll, where each side has probability $\pi_k$ of being chosen for $k = 1, \ldots, K$. (Normally each side of a die is equally likely, but here they may not be.) These $\pi_k$ can be collected into a vector $\pi = (\pi_{1}, \ldots, \pi_K)$, which is what we put into the "Mult" specification in part (a) above. Then, the "action":
$$z _ { i } \sim \operatorname { Mult } ( \pi )$$
is the same as rolling the $K$-sided die and setting $z_i$ equal to the result of the die roll. Here $z_i$ can take values in the range of $\{1, \ldots, K\}$. Then, $z_i$ will serve as the
index of which of the means $\mu_k$ you had generated earlier will be used to now serve as the mean of the sampling distribution of $x_i$. Now repeat this process for each $i \in \{1, \ldots, n\}$.
Finally, let's consider a real-life example. Suppose $K=3$ and $n=2$, then we will first in step (1) draw out three means $\mu_1, \mu_2, \mu_3$. Then, we have a 3-sided die for which we may roll a $1$ with probability $\pi_1$, a $2$ with probability $\pi_2$, and a $3$ with probability $\pi_3$. Then, $\pi = (\pi_1, \pi_2, \pi_3)$. Then we want to do parts (a) and (b) twice since we set $n=2$.
(a) Roll the 3-sided die and record what we get. Let's say for the $n=1$ case we rolled a $2$. Then, we set $z_1 = 2$. For the $n=2$ case let's say we rolled a $3$. Then we set $z_2 = 3$. Now we have $z_1 = 2$ and $z_2 = 3$.
(b) Take the $z_1$ and $z_2$ values to serve as the index of which of the three means we want to use for sampling $x_1$ and $x_2$. This entails sampling:
$$x _ { 1 } \sim \mathcal { N } \left( \mu _ { 2 } , \sigma ^ { 2 } \right) $$
and
$$x _ { 2 } \sim \mathcal { N } \left( \mu _ { 3 } , \sigma ^ { 2 } \right) $$
and plugging in the generated $\mu_2$ and $\mu_3$ from part (1) above, since $\mu_{z_1} = \mu_{2}$ and $\mu_{z_2} = \mu_{3}$. |
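The whole generative story above can be sketched in a few lines (all the concrete numbers below are made up for illustration):

```python
import random

# Toy simulation of the generative process: K = 3 components, n = 2 draws.
random.seed(42)
K, n, sigma = 3, 2, 1.0
pi_vec = [0.2, 0.5, 0.3]                         # die probabilities, pi
mu = [random.gauss(0, 10) for _ in range(K)]     # step (1): draw the K means

samples = []
for i in range(n):
    z = random.choices(range(K), weights=pi_vec)[0]  # (a): roll the K-sided die
    x = random.gauss(mu[z], sigma)                   # (b): sample x_i ~ N(mu_z, sigma^2)
    samples.append((z + 1, x))                       # 1-based index, as in the text

print(samples)
```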
The perfect camera is
diffraction limited. This article is written strictly from an optical point of view, so don’t expect a Nikon / Canon / Leica comparison here… Optically, a camera (or telescope or other optical instrument) can be described by its point spread function (PSF), which combines the optical characteristics of any mirrors, lenses, sensors, etc. that it contains. The point spread function is another name, typically used in image processing, for the impulse response of a system. It describes how the system images a point source. The point spread function is relatively easy to determine in astrophotography. Since stars are essentially point sources, each individual star gets imaged as the PSF of the camera / telescope combination.
A “perfect” camera should have a PSF that is a single point. In practice, most optical systems typically contain imperfections, which result in aberrations in the image. A typical example of this is
chromatic aberration, caused by light of different frequencies (colors) being refracted differently by a lens. A “perfect” system, however, which does not have any imperfections that cause aberrations, cannot escape the effects of diffraction. Diffraction is a basic physical effect that is caused by the interaction of light waves with objects. It cannot be avoided, which is why optically perfect systems are called diffraction limited.
Airy pattern
The classical approach for determining how diffraction limits the resolution of a camera is the
Airy pattern, which is the theoretical PSF of a circular aperture. The Airy pattern is named after Sir George Biddell Airy (1801–1892), an English mathematician and astronomer. It is given by
\[I(x)=I_0\left(\frac{2J_1(x)}{x}\right )^2,\]
where \(I_0\) is the intensity in the center of the pattern and \(J_1\) is the Bessel function of the first kind of order 1. The variable \(x\) is defined by
\[x=\frac{\pi r}{\lambda N},\]
where \(\lambda\) is the wavelength of the light, \(r\) is the radial distance from the center of the image, and \(N\) is the f-number of the camera. Below is an example of an Airy pattern.
This image has, of course, been gamma corrected for viewing on your screen. The central spot is called the
Airy disk. The size of the Airy disk is typically used to determine the (theoretical) resolution of a camera, where the rings that surround it are often ignored. In practice, the size of the Airy disk has only meaning when coupled with the actual pixel size of the sensor of the camera. If the Airy disk is larger than those pixels, then the resolution will be limited by diffraction.
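A quick way to explore the Airy pattern numerically (my own sketch; \(J_1\) is evaluated from its standard integral representation so the code needs no external libraries):

```python
from math import pi, cos, sin

def J1(x, n=2000):
    # Bessel J1 via its integral representation
    # J1(x) = (1/pi) * integral_0^pi cos(theta - x*sin(theta)) d(theta),
    # approximated with a midpoint rule.
    return sum(cos((k + 0.5) * pi / n - x * sin((k + 0.5) * pi / n))
               for k in range(n)) / n

def airy(x):
    # Normalized Airy intensity I(x)/I0 = (2 J1(x) / x)^2, with I(0)/I0 = 1.
    return 1.0 if x == 0 else (2 * J1(x) / x) ** 2

# The first zero of J1 sits at x ≈ 3.8317; through x = pi*r/(lambda*N) this
# gives the familiar Airy-disk radius r ≈ 1.22 * lambda * N.
print(airy(3.8317))   # ≈ 0 (edge of the Airy disk)
print(3.8317 / pi)    # ≈ 1.22
```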
In the follow-up article The PSF of a Pinhole Camera, I work out the PSF of my pinhole camera. To do that, I had to use at least
Fresnel diffraction, since the Airy pattern is based on Fraunhofer diffraction, which is an approximation that isn’t valid for the close distance between the detector and the pinhole of my camera. In the end, I have used the full Rayleigh–Sommerfeld diffraction integral. |
Let
$$f_{\alpha,n}=\operatorname{argmin}_f\|Hf-y\|^2_2+\alpha\|Cf\|_2^2,$$
be the Tikhonov regularised solution and $f^\dagger=H^\dagger y^\dagger$ be the solution we are trying to approximate, where $y^\dagger=y-n$ and $H^\dagger:\operatorname{range}H\oplus\operatorname{range}H^\perp\to\ker H^\perp$ is the Moore-Penrose pseudo-inverse of $H$.
The optimal parameter $\alpha_{\text{opt}}$ can then be defined as
$$\alpha_{\text{opt}}:=\operatorname{argmin}_\alpha\|f_{\alpha,n}-f^\dagger\|.$$
Of course, this is not a parameter choice rule per se, as it requires knowledge of $f^\dagger$. There are a number of parameter choice rules, however, which you can use to estimate the optimal parameter depending on how much knowledge you have. Parameter choice rules can essentially be split into three categories: a-priori, a-posteriori and heuristic.
An a-priori parameter choice rule
If you know $\delta:=\|n\|$ and also the smoothness of $f^\dagger$, i.e., you know $\mu$, where $f^\dagger\in\operatorname{range}(H^\ast H)^\mu$, then you can select the parameter as
$$\alpha_\ast\sim\delta^\frac{2}{2\mu+1},$$
which yields optimal convergence rates for Tikhonov regularisation if $\mu\le 1$.
An a-posteriori parameter choice rule
If you do not know $\mu$, but you have knowledge of the noise-level, $\delta$, then you can use Morozov's discrepancy principle:
$$\alpha_\ast=\sup\{\alpha:\|Hf_{\alpha,n}-y\|\le\tau\delta\},$$
for a constant $\tau\ge 1$.
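Here is a toy sketch of the discrepancy principle in action (the operator, grid, and noise level below are my own made-up example, not from the problem at hand; Tikhonov with $C=I$ is computed through the SVD filter factors $s/(s^2+\alpha)$):

```python
import numpy as np

# Ill-posed toy problem: a smoothing (Gaussian-kernel) operator on [0, 1].
rng = np.random.default_rng(1)
m = 40
t = np.linspace(0, 1, m)
H = np.exp(-10 * (t[:, None] - t[None, :]) ** 2) / m
f_true = np.sin(2 * np.pi * t)
noise = 1e-3 * rng.standard_normal(m)
y = H @ f_true + noise
delta, tau = np.linalg.norm(noise), 1.1

U, s, Vt = np.linalg.svd(H)

def f_alpha(a):
    # Tikhonov solution with C = I via SVD filter factors.
    return Vt.T @ (s / (s ** 2 + a) * (U.T @ y))

# Discrepancy principle: largest alpha whose residual stays below tau*delta.
alphas = np.logspace(-12, 2, 100)
ok = [a for a in alphas if np.linalg.norm(H @ f_alpha(a) - y) <= tau * delta]
a_star = max(ok) if ok else alphas[0]
print(a_star, np.linalg.norm(f_alpha(a_star) - f_true))
```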
A couple of heuristic parameter choice rules
If you have neither knowledge of $\mu$ nor $\delta$, then you can use one of the so-called heuristic parameter choice rules. I will list two:
The quasi-optimality rule selects the parameter as
$$\alpha_\ast=\operatorname{argmin}_\alpha\alpha\|\frac{\mathrm{d}}{\mathrm{d}\alpha}f_{\alpha,n}\|$$
and the heuristic discrepancy rule (also known as the Hanke-Raus rule) chooses
$$\alpha_\ast=\operatorname{argmin}_\alpha\frac{\|Hf_{\alpha,n}-y\|}{\sqrt{\alpha}}.$$
Note, however, that the drawback of the aforementioned heuristic rules is that they only work whenever $n\in\mathcal{N}_p$, where
$$\mathcal{N}_p:=\left\{n:\alpha^{2p}\int_\alpha^{\|H\|^2+}\lambda^{-1}\,\mathrm{d}\|F_\lambda n\|^2\le C\int_0^\alpha\lambda^{2p-1}\,\mathrm{d}\|F_\lambda n\|^2\text{ for all }\alpha>0\right\},$$
where $\{F_\lambda\}_\lambda$ denotes the spectral family of $HH^\ast$, with $p=1$ for the heuristic discrepancy rule and $p=2$ for the quasi-optimality rule. In particular, $\mathcal{N}_2\subset\mathcal{N}_1$, although the quasi-optimality rule tends to perform better when it works. It's important to note that the above noise condition is not particularly restrictive and excludes the
worst case (in which heuristic rules are known not to converge).
Actually, I would recommend the quasi-optimality rule. The L-curve rule, mentioned in the comments, tends not to perform as well and moreover, does not have the theoretical backing of the other rules. For numerical comparisons, I recommend a paper by Bauer and Lukas, as well as the PhD thesis of Palm.
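To get a feel for the quasi-optimality rule, here is a tiny diagonal toy problem (entirely my own construction), using the standard discrete form of the rule: minimize $\|f_{\alpha_{k+1},n}-f_{\alpha_k,n}\|$ over a geometric grid of $\alpha$'s, as a stand-in for $\alpha\|\frac{\mathrm{d}}{\mathrm{d}\alpha}f_{\alpha,n}\|$:

```python
import numpy as np

# Diagonal "operator" with geometrically decaying singular values.
rng = np.random.default_rng(0)
s = 0.9 ** np.arange(60)
f_true = np.ones(60)
y = s * f_true + 1e-4 * rng.standard_normal(60)   # data in SVD coordinates

def f_alpha(a):
    # Tikhonov filter applied componentwise to the data.
    return s / (s ** 2 + a) * y

# Quasi-optimality on a geometric grid: minimize successive differences.
alphas = np.logspace(-10, 0, 80)
jumps = [np.linalg.norm(f_alpha(alphas[k + 1]) - f_alpha(alphas[k]))
         for k in range(len(alphas) - 1)]
a_star = alphas[int(np.argmin(jumps))]
print(a_star, np.linalg.norm(f_alpha(a_star) - f_true))
```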
References: Regularization of Inverse Problems, Engl, Hanke, Neubauer - https://books.google.at/books/about/Regularization_of_Inverse_Problems.html?id=VuEV-Gj1GZcC&redir_esc=y
Convergence analysis of minimization-based noise level-free parameter choice rules for linear ill-posed problems, Kindermann - https://etna.ricam.oeaw.ac.at/volumes/2011-2020/vol38/abstract.php?vol=38&pages=233-257
Comparing parameter choice methods for regularization of ill-posed problems, Bauer, Lukas - https://www.sciencedirect.com/science/article/abs/pii/S0378475411000607
Numerical comparison of regularization algorithms, Palm - http://dspace.ut.ee/handle/10062/14623?locale-attribute=en |
Johann Wolfgang von Goethe: "By seeking and blundering we learn." Original German, 1825. Albert Einstein: "Anyone who has never made a mistake has never tried anything new." (However, attribution to Einstein is weak. See quoteinvestigator.com.) Jo Boaler: "When I have tutored people in math, I've always started by saying, 'By the way, I just want you to know ...
Perhaps: The discovery a year ago of a new tiling of the plane by a convex polygonal tile, found by Mann, McLoud, and Von Derau (the latter of whom was an undergraduate at the time of the discovery): Here is a nice article on the discovery in The Guardian, by Alex Bellos. As Alex says, the problem has been studied for $100$ years now, since Reinhardt in ...
"I have not failed. I've just found 10,000 ways that won't work." "Our greatest weakness lies in giving up. The most certain way to succeed is always to try just one more time." "Many of life's failures are people who did not realize how close they were to success when they gave up." Thomas A. Edison
“There is no man,” he began, “however wise, who has not at some period of his youth said things, or lived in a way the consciousness of which is so unpleasant to him in later life that he would gladly, if he could, expunge it from his memory. And yet he ought not entirely to regret it, because he cannot be certain that he has indeed become a wise ...
Some example types: Minimizing potential energy of any realistic physical system. Examples: 0D: The point where a rolling ball/flowing water might* come to rest (*might not, if momentum carries it past the low point). 1D: The curve described by a hanging chain/flexible rope (in equilibrium). 2D: The surface of a soap film (in equilibrium). Generally, ...
Here are some optimization problems that were harder than a simple homework problem: Given the pavilion angle of a diamond, what crown angle produces optimal light return? What percentage of the lignin should be removed from wood, if one wishes to squeeze the wood into a cheap structure similar to carbon fiber with an optimum strength-to-weight ratio? When ...
There was a post over at academia where somebody essentially said that after starting by investigating other people's mistakes: Eventually I got better — I started making my own mistakes. As I said over there, I liked that a lot and it stuck with me. Making your own mistakes is a sign of growing up.
A possibility, requiring one definition: What is a tiling of the plane with an infinite supply of congruent copies of a single tile (technically, a monohedral tiling). This can go as deep as you'd like, perhaps in stringing together several mini-sessions. Can every triangle tile the plane? (Yes.) Form parallelograms, then argue that a parallelogram tiles ...
I had a lot of luck when running a primary school math club (ages 9-10) with the game of Nim. It's not much harder than tic-tac-toe to play, but there is a lot more math underneath that sounds "hard" but doesn't actually have any prerequisites. It's not as quick of a payoff as your ideas in the question, but the outline goes like this: Show them how to ...
19 public data sets, from Springborg blog, curated by T.J. DeGroat. Summaries and links for each in DeGroat's page. United States Census Data, FBI Crime Data, CDC Cause of Death, Medicare Hospital Quality, SEER Cancer Incidence, Bureau of Labor Statistics, Bureau of Economic Analysis, IMF Economic Data, Dow Jones Weekly Returns, Data.gov.uk, Enron Emails, Google ...
The 2016 result about Unexpected biases in the distribution of consecutive primes.This is really pretty simple to understand - the distribution of primes had been supposed to be unconditionally random, but this interesting bias is suddenly discovered. The amazing thing about it is that the observation could have been made by pretty well any beginner ...
The one I'm using is: "An expert is a person who has found out by his own painful experience all the mistakes that one can make in a very narrow field." Attributed to Niels Bohr, quoted by Edward Teller, in Dr. Edward Teller's Magnificent Obsession by Robert Coughlan, in LIFE magazine (6 September 1954), p. 62.
Some physics examples: --Given that the range of a projectile is $R=(v^2/2g)\sin\theta\cos\theta$, prove that the maximum range is achieved for $\theta=45$ degrees.An electrical transmission line of resistance $x$ is in series with a load of resistance $y$. For a fixed voltage $V$, the useful power dissipated in the load is $P= V^2y(x+y)^{-2}$. Show that ...
I customarily use: Gillison, Maura L., et al. "Prevalence of oral HPV infection in the United States, 2009-2010." JAMA 307.7 (2012): 693-703. This has a nice mix of CIs, P-values, and statements about distribution shapes (bimodal) in the abstract. At one point I used this article: Peck, Peggy, "Long Work Hours Increase Hypertension Risk", MedPage Today. ...
There are many volume-of-a-box questions. I like this one, simpler than what the OP cites: given a rectangle, cut out squares from the corners so you can fold it up to a box, without a top, of maximal volume. The rectangle might be specialized to a square, as below. See also The Math Forum....
Perhaps logic puzzles would work in this case. Some classic examples are: You're traveling along a road and arrive at a fork. Two guides are posted, but one always lies and the other always tells the truth, and you don't know which one is which. What one question can you ask to find out which path you should take? Three boxes are labeled "Apples", "...
Since Benjamin Dickman mentioned The Tokyo Puzzles in the comments, I'll include a couple of the questions from that book here that I thought fit the prompt nicely; neither requires a pen and paper to think about, and a student can just ponder them mentally. Two brothers decided to run a 100-meter race. The older brother won by 3 meters. In other words,...
Not a single answer but rather a resource: the snapshots of modern mathematics from the mathematical research institute Oberwolfach (http://www.mfo.de/math-in-public/snapshots/) aim to provide pretty much exactly what you ask for. Research mathematicians present some "recent developments" in their field in a way that is understandable for high school ...
I did something very similar as a project when I taught AP Statistics a few years ago. It was relatively effective, and my students left that class with that project as one of the ones they felt was the most impactful, since it showed them how statistics was actually used in real life. I believe that the most recent topics for this project were linear ...
I always enjoy bringing in Bortkiewicz’s data on the annual deaths by horse kicks in the Prussian Army from 1875-1894. This is what inspired the discovery of the Poisson distribution. Sometimes, students enjoy thinking about how to collect data about rare events: How often has the anaesthetist fallen asleep during surgery? And other medical “Never ...
"Human is the only animal that trips twice over the same stone" - Anonymous"Next time you trip over a stone, instead of stepping over it, place a big flashing sign that will remind you where you fell to the ground. The next time you travel the same path you will remember your past mistake and be able to avoid the hazard" - MeThis is trying to be funny to ...
For a much lower-level topic, consider explaining to beginning algebra students why "like terms" can be combined. On a few occasions, I have resorted to reasoning with students that adding algebraic expressions is like adding quantities with units. [Our curriculum begins with units and geometry before algebra, so this is usually safe ground in my class.] If ...
A point (say, in $\mathbb{R}^n$) is a vector. Vectors and points are really no different. They are both $n$-tuples in $\mathbb{R}^n$. The difference between two points (in $\mathbb{R}^n$) is a vector, but a vector has no fixed position. Points are positions in space. Vectors are displacements. It makes no sense to add two points, but it does make sense to ...
Edit (June 2019): I used this final project with minor tweaks again this year. You can find links to some of the output for Spring 2019 - both students' graphs and write-ups - here and here.I used a Desmos Make-A-Graph prompt for an Algebra 2 class' final project last year. The results were quite good; so, I expect to incorporate at least one similar ...
Another math circle activity that is fine for kids (warning: I have not tried this specifically on primary school, so the patience required might be too high): Modular arithmetic modulo small numbers is required, but kids are more than happy to accept it. Write on a piece of paper that (only on this piece of paper) 3 = 0. Then talk about it: if 3 = 0, then ...
There are elementary results in mathematics being produced regularly, either in the sense that the ideas don't need a lot of background to understand them or sometimes that even the "details" of the proofs are comprehensible. An example is that of the de Bruijn graph, which has ties to the theory of Eulerian circuits in directed graphs and has found many ...
I'd really love your help with the following:
For any fixed $L_2$, I need to decide whether there is closure under the following operators:
$A_r(L)=\{x \mid \exists y \in L_2 : xy \in L\}$
$A_l(L)=\{x \mid \exists y \in L : xy \in L_2\}$.
The relevant options are:
(1) Regular languages are closed under $A_l$ resp. $A_r$, for any language $L_2$.
(2) For some languages $L_2$, regular languages are closed under $A_l$ resp. $A_r$, and for some languages $L_2$, regular languages are not closed under $A_l$ resp. $A_r$.
I believed that the answer should be (2), because given a word $w \in L$ with $w=xy$, I can build an automaton that guesses where $x$ ends and $y$ begins; but then it needs to verify that $y$ belongs to $L_2$, and if $L_2$ is not regular, how would it do that?
The correct answer, however, is (1).
What should I do in order to analyze those operators correctly and to determine if the regular languages are closed under them or not? |
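One way to see why the answer is (1): the automaton for $A_r(L)$ is the very same automaton that recognizes $L$, only with a different set of accepting states $F' = \{q \mid \exists y \in L_2 : \hat\delta(q, y) \in F\}$. That set exists as a finite object whether or not membership in $L_2$ is even decidable, so closure holds for every $L_2$; the proof is non-constructive, which is exactly where the "guess and verify" intuition breaks down. Here is a sketch of the construction with a finite $L_2$ chosen purely for illustration (the DFA and $L_2$ are my own example, not from the exercise):

```python
def run(delta, q, w):
    """Follow the DFA's transitions from state q on word w."""
    for ch in w:
        q = delta[(q, ch)]
    return q

def right_quotient_accepting(states, delta, F, L2):
    """New accepting set F' = { q : some y in L2 drives q into F }."""
    return {q for q in states if any(run(delta, q, y) in F for y in L2)}

# Example DFA over {a, b} for L = "words ending in ab"
# (state 0: no progress, 1: just read 'a', 2: just read 'ab').
states = {0, 1, 2}
delta = {(0, 'a'): 1, (0, 'b'): 0,
         (1, 'a'): 1, (1, 'b'): 2,
         (2, 'a'): 1, (2, 'b'): 0}
F = {2}

F2 = right_quotient_accepting(states, delta, F, {'b'})
print(sorted(F2))                    # [1]: exactly the "just read 'a'" state

# x is in A_r(L) iff reading x lands in F2, i.e. iff x ends in 'a'.
print(run(delta, 0, 'ba') in F2)     # True:  'ba' + 'b' = 'bab' is in L
print(run(delta, 0, 'ab') in F2)     # False: 'ab' + 'b' = 'abb' is not in L
```

The same argument (existence of $F'$, no computation required) is what makes the closure go through for arbitrary, even non-recursive, $L_2$.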
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash wat did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but th eoverlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located.The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html with latexml or tex4ht then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit, but any text editor will do for that. Given x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}, make a small html file that looks like <!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals. |
Consider a $C^k$, $k\ge 2$, Lorentzian manifold $(M,g)$ and let $\Box$ be the usual wave operator $\nabla^a\nabla_a$. Given $p\in M$, $s\in\Bbb R,$ and $v\in T_pM$, can we find a neighborhood $U$ of $p$ and $u\in C^k(U)$ such that $\Box u=0$, $u(p)=s$ and $\mathrm{grad}\, u(p)=v$?
The tog is a measure of thermal resistance of a unit area, also known as thermal insulance. It is commonly used in the textile industry and often seen quoted on, for example, duvets and carpet underlay.The Shirley Institute in Manchester, England developed the tog as an easy-to-follow alternative to the SI unit of m2K/W. The name comes from the informal word "togs" for clothing which itself was probably derived from the word toga, a Roman garment.The basic unit of insulation coefficient is the RSI, (1 m2K/W). 1 tog = 0.1 RSI. There is also a clo clothing unit equivalent to 0.155 RSI or 1.55 tog...
The stone or stone weight (abbreviation: st.) is an English and imperial unit of mass now equal to 14 pounds (6.35029318 kg).England and other Germanic-speaking countries of northern Europe formerly used various standardised "stones" for trade, with their values ranging from about 5 to 40 local pounds (roughly 3 to 15 kg) depending on the location and objects weighed. The United Kingdom's imperial system adopted the wool stone of 14 pounds in 1835. With the advent of metrication, Europe's various "stones" were superseded by or adapted to the kilogram from the mid-19th century on. The stone continues...
Can you tell me why this question deserves to be negative? I tried to find faults and I couldn't: I did some research, I did all the calculations I could, and I think it is clear enough. I had deleted it and was going to abandon the site but then I decided to learn what is wrong and see if I ca...
I am a bit confused about angular momentum in classical physics. For the orbital motion of a point mass: if we pick a new coordinate origin (one that doesn't move w.r.t. the old one), angular momentum should still be conserved, right? (I calculated a rather absurd result: it is no longer conserved, with an additional term that varies with time.)
In the new coordinates: $\vec {L'}=\vec{r'} \times \vec{p'}$
$=(\vec{R}+\vec{r}) \times \vec{p}$
$=\vec{R} \times \vec{p} + \vec L$
where the first term varies with time ($\vec{R}$ is the constant shift of the coordinate origin, while $\vec{p}$ is, roughly, rotating).
Would anyone be kind enough to shed some light on this for me?
From what we discussed, your literary taste seems to be classical/conventional in nature. That book is inherently unconventional in nature; it's not supposed to be read as a novel, it's supposed to be read as an encyclopedia
@BalarkaSen Dare I say it, my literary taste continues to change as I have kept on reading :-)
One book that I finished reading today, The Sense of An Ending (different from the movie with the same title) is far from anything I would've been able to read, even, two years ago, but I absolutely loved it.
I've just started watching The Fall - it seems good so far (after 1 episode)... I'm with @JohnRennie on the Sherlock Holmes books and would add that the most recent TV episodes were appalling. I've been told to read Agatha Christie but haven't got round to it yet
"Is it possible to make a time machine ever? Please give an easy answer, a simple one." A simple answer, but a possibly wrong one, is to say that a time machine is not possible. Currently, we don't have either the technology to build one, nor a definite, proven (or generally accepted) idea of how we could build one. — Countto10, 47 secs ago
@vzn if it's a romantic novel, which it looks like, it's probably not for me - I'm getting to be more and more fussy about books and have a ridiculously long list to read as it is. I'm going to counter that one by suggesting Ann Leckie's Ancillary Justice series
Although if you like epic fantasy, Malazan book of the Fallen is fantastic
@Mithrandir24601 lol it has some love story but its written by a guy so cant be a romantic novel... besides what decent stories dont involve love interests anyway :P ... was just reading his blog, they are gonna do a movie of one of his books with kate winslet, cant beat that right? :P variety.com/2016/film/news/…
@vzn "he falls in love with Daley Cross, an angelic voice in need of a song." I think that counts :P It's not that I don't like it, it's just that authors very rarely do anywhere near a decent job of it. If it's a major part of the plot, it's often either eyeroll worthy and cringy or boring and predictable with OK writing. A notable exception is Stephen Erikson
@vzn depends exactly what you mean by 'love story component', but often yeah... It's not always so bad in sci-fi and fantasy where it's not in the focus so much and just evolves in a reasonable, if predictable way with the storyline, although it depends on what you read (e.g. Brent Weeks, Brandon Sanderson). Of course Patrick Rothfuss completely inverts this trope :) and Lev Grossman is a study on how to do character development and totally destroys typical romance plots
@Slereah The idea is to pick some spacelike hypersurface $\Sigma$ containing $p$. Now specifying $u(p)$ is trivial because the wave equation is invariant under constant perturbations. So that's whatever. But I can specify $\nabla u(p)|\Sigma$ by specifying $u(\cdot, 0)$ and differentiate along the surface. For the Cauchy theorems I can also specify $u_t(\cdot,0)$.
Now take the neighborhood to be $\approx (-\epsilon,\epsilon)\times\Sigma$ and then split the metric like $-dt^2+h$
Do forwards and backwards Cauchy solutions, then check that the derivatives match on the interface $\{0\}\times\Sigma$
Why is it that you can only cool down a substance so far before the energy goes into changing its state? I assume it has something to do with the distance between molecules meaning that intermolecular interactions have less energy in them than making the distance between them even smaller, but why does it create these bonds instead of making the distance smaller / just reducing the temperature more?
Thanks @CooperCape but this leads me another question I forgot ages ago
If you have an electron cloud, is the electric field from that electron just some sort of averaged field from some centre of amplitude or is it a superposition of fields each coming from some point in the cloud? |
Notice:
If you happen to see a question you know the answer to, please do chime in and help your fellow community members. We encourage our forum members to be more involved: jump in and help out your fellow researchers with their questions. The GATK forum is a community forum, and helping each other with using GATK tools and research is the cornerstone of our success as a genomics research community. We appreciate your help!
Test-drive the GATK tools and Best Practices pipelines on Terra: check out this blog post to learn how you can get started with GATK and try out the pipelines in preconfigured workspaces (with a user-friendly interface!) without having to install anything.

Phred-scaled Quality Scores
You may have noticed that a lot of the scores that are output by the GATK are in Phred scale. The Phred scale was originally used to represent base quality scores emitted by the Phred program in the early days of the Human Genome Project (see this Wikipedia article for more historical background). Now they are widely used to represent probabilities and confidence scores in other contexts of genome science.
Phred scale in context
In the context of sequencing, Phred-scaled quality scores are used to represent how confident we are in the assignment of each base call by the sequencer.
In the context of variant calling, Phred-scaled quality scores can be used to represent many types of probabilities. The most commonly used in GATK is the QUAL score, or variant quality score. It is used in much the same way as the base quality score: the variant quality score is a Phred-scaled estimate of how confident we are that the variant caller correctly identified that a given genome position displays variation in at least one sample.
Phred scale in practice
In today’s sequencing output, by convention, most usable Phred-scaled base quality scores range from 2 to 40, with some variations in the range depending on the origin of the sequence data (see the FASTQ format documentation for details). However, Phred-scaled quality scores in general can range anywhere from 0 to infinity. A higher score indicates a higher probability that a particular decision is correct, while conversely, a lower score indicates a higher probability that the decision is incorrect.
The Phred quality score (Q) is logarithmically related to the error probability (E).
$$ Q = -10 \log_{10} E $$
So we can interpret this score as an estimate of error, where the error is e.g. the probability that the base is called incorrectly by the sequencer, but we can also interpret it as an estimate of accuracy, where the accuracy is e.g. the probability that the base was identified correctly by the sequencer. Depending on how we decide to express it, we can make the following calculations:
If we want the probability of error (E), we take:
$$ E = 10 ^{-\left(\frac{Q}{10}\right)} $$
And conversely, if we want to express this as the estimate of accuracy (A), we simply take
$$
\begin{eqnarray}
A &=& 1 - E \nonumber \\
  &=& 1 - 10 ^{-\left(\frac{Q}{10}\right)} \nonumber
\end{eqnarray}
$$
Here is a table of how to interpret a range of Phred Quality Scores. It is largely adapted from the Wikipedia page for Phred Quality Score.
For many purposes, a Phred Score of 20 or above is acceptable, because this means that whatever it qualifies is 99% accurate, with a 1% chance of error.
Phred Quality Score    Error                   Accuracy (1 - Error)
10                     1/10      = 10%         90%
20                     1/100     = 1%          99%
30                     1/1000    = 0.1%        99.9%
40                     1/10000   = 0.01%       99.99%
50                     1/100000  = 0.001%      99.999%
60                     1/1000000 = 0.0001%     99.9999%
And finally, here is a graphical representation of the Phred scores showing their relationship to accuracy and error probabilities.
The red line shows the error, and the blue line shows the accuracy. Of course, as error decreases, accuracy increases symmetrically.
Note: You can see that below Q20 (which is how we usually refer to a Phred score of 20), the curve is really steep, meaning that as the Phred score decreases, you lose confidence very rapidly. In contrast, above Q20, both of the graphs level out. This is why Q20 is a good cutoff score for many basic purposes. |
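The conversions described above are easy to sanity-check in code. The following short sketch (my own, not part of the GATK toolkit) reproduces the rows of the table:

```python
import math

def phred_to_error(q):
    # E = 10^(-Q/10)
    return 10 ** (-q / 10)

def phred_to_accuracy(q):
    # A = 1 - E
    return 1 - phred_to_error(q)

def error_to_phred(e):
    # Q = -10 * log10(E)
    return -10 * math.log10(e)

for q in (10, 20, 30, 40, 50, 60):
    print(q, phred_to_error(q), phred_to_accuracy(q))

print(error_to_phred(0.01))   # a 1% error rate corresponds to Q20
```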
Note. I asked the question below on Math Stack Exchange (link), but didn't get a really satisfactory answer there, so I'm posting it here too.
I am looking for high school algebra/mathematics textbooks targeted at talented students, as preparation for fully rigorous calculus à la Spivak. I am interested in the best materials available in English, French, German or Hebrew.
Ideally, the book(s) should provide a comprehensive introduction to algebra at this level, starting from the most basic operations on polynomials. It should include necessary theory (e.g., Bezout's remainder theorem on polynomials, proof of the fundamental theorem of arithmetic, Euclid's algorithm, a more honest discussion of real numbers than usual, proofs of the properties of rational exponents, etc., and a general attitude that all statements are to be proved, with few exceptions). It should also have problems that range from exercises acquainting students with the basic algebraic manipulations on polynomials to much more difficult ones.
Specifically, I am looking for something similar in spirit to a series of excellent Russian books by Vilenkin for students in so-called "mathematical schools" from grades 8 to 11, although I am only looking for the equivalent of the grade 8 and 9 books, which are at precalculus level. To give you an idea, here are a sample of typical problems from the grade-8 book.
Perform the indicated operations. $\frac{3p^2mq}{2a^2 b^2} \cdot \frac{3abc}{8x^2 y^2} : \frac{9a^2 b^2 c^3}{28pxy}$
Prove that when $a \ne 0$, the polynomial $x^{2n} + a^{2n}$ is divisible neither by $x + a$ nor by $x - a$.
Prove that if $a + b + c = 0$, then $a^3 + b^3 + c^3 + 3(a + b)(a + c) (b + c) = 0$.
Prove that if $a > 1$, then $a^4 + 4$ is a composite number.
Prove that if $n$ is relatively prime to $6$, then $n^2 - 1$ is divisible by 24.
Simplify $\sqrt{36x^2}$.
Simplify $\sqrt{12 + \sqrt{63}}$.
Prove that the difference of the roots of the equation $5x^2 -2(5a + 3)x + 5a^2 + 6a + 1 = 0$ does not depend on $a$.
Solve the inequality $|x - 6| \leq |x^2 - 5x + 2|$.
And here are the chapter titles for the grade 8 and 9 books.
Grade 8: Fractions. Polynomials. Divisibility; prime and composite numbers. Real numbers. Quadratic equations; systems of nonlinear equations; resolution of inequalities.
Grade 9: Elements of set theory. Functions. Powers and roots. Equations and inequalities, and systems thereof. Sequences. Elements of trigonometry. Elements of combinatorics and probability theory.
Broadly similar questions have been asked elsewhere, however the suggestions made there are not satisfactory for my purposes.
The English translations of Gelfand's books are good; however they are not a sufficiently broad introduction to high school algebra, and do not have enough material on computational technique. They are more in the nature of supplements to an ordinary textbook.
Some 19th century books like Hall and Knight have been suggested. On conceptual material, these tend to be too old in language and outlook.
Basic Mathematics by Serge Lang seems more to dabble in various topics than to provide a thorough introduction to algebra.
I am not inclined towards books with a very strong "New Math" orientation (1971-1983 France, for example). I don't think a student should need to understand the group of affine transformations of $\mathbb{R}$ to know what a line is.
Also, previous questions have perhaps focused implicitly on material in English. I have in mind a student who can also easily read French, German or Hebrew if something better can be found in those languages.
Edit. I'd like to clarify that I'm not asking for something identical to these books, just something as close as possible to their spirit. Fundamentally, this means: 1. It is a substitute for, rather than just a complement to, a regular school algebra textbook. 2. It is directed at the most able students. 3. It conveys the message that proofs and creative problem-solving are central to mathematics. |
Suppose we're given an infinite stream of integers, $x_1, x_2, \dots$.
a) Show that we can compute whether the sum of all integers seen so far is divisible by some fixed integer $N$ using $O(\log N)$ bits of memory.
b) Let $N$ be an arbitrary number, and suppose we're given $N$'s prime factorization: $N = p_1 ^{k_1} p_2 ^{k_2} \dots p_r ^{k_r}$. How would you check whether $N$ divides the product of all integers $x_i$ seen so far, using as few bits of memory as possible? Write down the number of bits used in terms of $k_1, ..., k_r$.
For part a), we know that for any prime $p \ne 2, 5$, there is an integer $r$ such that in order to see if $p$ divides a decimal number $n$, we break $n$ into $r$-tuples of decimal digits, add up these $r$-tuples, and check if the sum is divisible by $p$. But $N$ is a fixed integer, and not necessarily a prime. Is there some way to connect the theorem above to any arbitrary integer?
For part b), if the product of the $x_i$ (call it $y$) is divisible by $N$, then $y$ must be divisible by each $p_i ^{k_i}$ (call it $a_i$). Since we're given the prime factorization, we can just check if $y$ is divisible by $N$ by dividing $y$ by each $a_i$ and halting when it fails, right? Would this result in using $k_1 \times \dots \times k_r$ bits?
Am I at least on the right track, or am I completely wrong? Any help understanding this problem would be tremendously helpful. |
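For what it's worth, one standard approach (a hedged sketch of a common solution, not necessarily the intended model answer) is much simpler than digit-sum tricks: for part (a), keep only the running sum modulo $N$, which is $O(\log N)$ bits; for part (b), keep, for each prime $p_i$, a counter of the exponent of $p_i$ accumulated in the product so far, capped at $k_i$. Then $N$ divides the product iff every counter has reached its cap, which costs roughly $\sum_i \log_2(k_i + 1)$ bits rather than $k_1 \times \dots \times k_r$.

```python
class SumDivisibleByN:
    """Part (a): store only the running sum modulo N -- O(log N) bits."""
    def __init__(self, N):
        self.N = N
        self.s = 0

    def push(self, x):
        self.s = (self.s + x) % self.N

    def divisible(self):
        return self.s == 0

class ProductDivisibleByN:
    """Part (b): per prime p_i, count its exponent so far, capped at k_i."""
    def __init__(self, factorization):          # {p_i: k_i}
        self.caps = factorization
        self.count = {p: 0 for p in factorization}

    def push(self, x):
        for p, k in self.caps.items():
            while x % p == 0 and self.count[p] < k:
                self.count[p] += 1
                x //= p

    def divisible(self):
        return all(self.count[p] == k for p, k in self.caps.items())

s = SumDivisibleByN(7)
for x in (3, 4, 7):
    s.push(x)
print(s.divisible())          # 3 + 4 + 7 = 14 is divisible by 7 -> True

p = ProductDivisibleByN({2: 3, 3: 1})   # N = 2^3 * 3 = 24
for x in (4, 6):
    p.push(x)
print(p.divisible())          # 4 * 6 = 24 -> True
```

Note the counters never need to exceed their caps, which is what keeps the memory bounded regardless of how many stream items arrive.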
The usual Lorentz force expression I am familiar with is this:
$$\vec F=q(\vec E+\vec v \times \vec B)$$
I have seen some other versions lately that include an extra factor $1/c$:
$$\vec F=q\left(\vec E+ \frac{1}{c} \vec v \times \vec B\right)$$
What is this $c$ and how is it included? I guess other parameters in the expression are also different from the top expression for this to fit?
Example of a text snippet where I have run across this extra parameter: |
Let us write the equation in state-space form
\begin{equation}\frac{d}{dt}\begin{bmatrix}x_{1}(t)\\x_{2}(t)\end{bmatrix}+\begin{bmatrix}\eta/2 & -\omega\\\omega & \eta/2\end{bmatrix}\begin{bmatrix}x_{1}(t)\\x_{2}(t)\end{bmatrix}=0 \tag1\end{equation}
where the natural frequency is $\omega \triangleq \sqrt{1-\eta^2/4}$. It is easy to verify that the characteristic polynomial is indeed $p(\lambda) = \lambda^2 + \eta \lambda + 1$ by construction. When $\eta \ll 1$, we have $\omega \approx 1$, so the solution can always be approximated to great accuracy as
\begin{equation}\begin{bmatrix}x_{1}(t)\\x_{2}(t)\end{bmatrix}=\exp\left(-t\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}\right)\begin{bmatrix}y_{1}\\y_{2}\end{bmatrix}\tag2\end{equation}
with $y_{1}=x_{1}(0), y_{2}=x_{2}(0)$. Indeed this is the exact solution when $\eta=0$. Now, with a finite $\eta$, this solution is not completely correct, so we let $y_{1},y_{2}$ vary with time to yield
\begin{equation}\begin{bmatrix}x_{1}(t)\\x_{2}(t)\end{bmatrix}=\exp\left(-t\begin{bmatrix}0 & -1\\1 & 0\end{bmatrix}\right)\begin{bmatrix}y_{1}(t)\\y_{2}(t)\end{bmatrix},\tag3\end{equation}
with $y_{1}(0)=x_{1}(0)$, $y_{2}(0)=x_{2}(0)$. These variables can be intuitively understood as “corrections” to the initial guess above. Then using the laws of matrix exponentials, we derive an ODE for the correction terms
\begin{equation}\frac{d}{dt}\begin{bmatrix}y_{1}(t)\\y_{2}(t)\end{bmatrix}+\begin{bmatrix}\eta/2 & -(\omega-1)\\(\omega-1) & \eta/2\end{bmatrix}\begin{bmatrix}y_{1}(t)\\y_{2}(t)\end{bmatrix}=0.\tag4\end{equation}
This latter ODE is non-stiff, and can be solved using any simple integration method. With $y_{1}(t)$ and $y_{2}(t)$ known, $x(t)$ is easily recovered.
To summarize, the principal idea is to start with an excellent initial guess, and to write the true solution as a time-dependent correction multiplied by this initial guess. Solving for the correction is then a far easier problem (read: less stiff) than solving for the solution directly.
The particular strategy above is known as an “exponential integrator” (more specifically, of the Lawson type), and assumes that the nonlinear dynamics can be well approximated by a linear one. For this very specific problem of a damped oscillation, it is also known as a phasor transform.
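The scheme can be sketched numerically in a few lines. The following is my own minimal illustration (parameter values and step size are arbitrary choices, not from the answer): forward Euler applied to the slow correction ODE (4) stays accurate at a step size where forward Euler applied directly to (1) is unstable.

```python
import numpy as np

eta = 0.01                         # light damping
omega = np.sqrt(1 - eta**2 / 4)    # natural frequency

J = np.array([[0.0, -1.0], [1.0, 0.0]])
A = (eta / 2) * np.eye(2) + omega * J        # matrix of eq. (1)
B = (eta / 2) * np.eye(2) + (omega - 1) * J  # matrix of the correction ODE (4)

def rot(theta):
    # exp(theta * J) is a plane rotation
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def lawson_euler(x0, T, h):
    """Forward Euler on the slow correction y of eq. (4), then undo eq. (3)."""
    y = np.asarray(x0, dtype=float)
    steps = int(round(T / h))
    for _ in range(steps):
        y = y - h * (B @ y)
    return rot(-steps * h) @ y               # x(t) = exp(-t J) y(t)

def exact(x0, t):
    # closed form: A = (eta/2) I + omega J, so exp(-t A) = e^{-eta t/2} exp(-omega t J)
    return np.exp(-eta * t / 2) * (rot(-omega * t) @ np.asarray(x0, dtype=float))

x0, T, h = [1.0, 0.0], 50.0, 0.5

naive = np.asarray(x0, dtype=float)          # plain forward Euler on eq. (1)
for _ in range(int(round(T / h))):
    naive = naive - h * (A @ naive)

print(np.abs(lawson_euler(x0, T, h) - exact(x0, T)).max())  # small
print(np.abs(naive - exact(x0, T)).max())    # enormous: plain Euler is unstable here
```

The contrast is the whole point: the eigenvalues of $B$ have tiny magnitude, so the correction equation tolerates step sizes that the original oscillatory system cannot.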
I have a joint probability distribution as given in the figure:
In this figure, variables in circles are random variables and variables in squares are constants. So, I can write the joint distribution over the data $y$ and the model parameters as:
$$ P(y, w, \lambda, \phi) = P(y|w, \phi) \times P(w|\lambda) \times P(\lambda) \times P(\phi) $$
Now $P(w|\lambda)$ is modelled using a multivariate normal distribution with 0 mean and a covariance matrix scaled by $\lambda$. $P(\lambda)$ and $P(\phi)$ are modelled using gamma distributions. Also, the likelihood term $P(y|x, w, \phi)$ is a Gaussian likelihood given by:
$$ P(y|x, w, \phi) = \left(\frac{\phi}{2\pi}\right)^{0.5} \exp\left(-0.5\, e \phi e\right) $$
The model noise is independent and identically distributed.
Now, I am interested in $P(w, \lambda, \phi|y)$ which is given by the joint distribution above normalised appropriately by $P(y)$
My question regards the use of expectation propagation (EP) to perform inference on this model. I have been trying to understand EP with little success.
Can someone help me understand what approximations I need to make to this model to use EP on it? The prior $P(w|\lambda)$ is modelled using a zero mean multivariate normal distribution. $P(\lambda)$ is modelled using a Gamma distribution and $P(\phi)$ is also a gamma distribution. So, to infer the posterior $P(w, \lambda, \phi|y)$, I am confused as to where to start. Should I start with the mean field like approach to assume independence among the posterior distributions of these parameters?
$$ P(w, \lambda, \phi|y) \approx q(w) \times q(\lambda) \times q(\phi) $$
I have been struggling with this for weeks now. Any suggestion/reference etc. would be really appreciated. Looking at Minka's lecture notes, I am having issues figuring out whether some factor graph representation for this as that is usually the structure he uses in his examples. |
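Not an EP derivation, but one practical starting point: write out the exact unnormalized log-posterior implied by the factorization above, so that whatever approximation you settle on (mean-field or factor-by-factor EP) can be checked against it numerically. The concrete shapes below are all illustrative assumptions, since the question leaves them open: an isotropic prior covariance $I/\lambda$, Gamma(shape $a_0$, rate $b_0$) priors on both precisions, and a hypothetical design matrix X.

```python
import math
import numpy as np

# Hypothetical concrete shapes: y = X w + noise, prior w ~ N(0, I / lam),
# and Gamma(shape a0, rate b0) priors on the precisions lam and phi.
a0, b0 = 2.0, 1.0

def log_gamma_pdf(z, a, b):
    # log of a Gamma(shape a, rate b) density
    return a * math.log(b) - math.lgamma(a) + (a - 1) * math.log(z) - b * z

def log_joint(y, X, w, lam, phi):
    n, d = X.shape
    r = y - X @ w                                                        # residual e
    ll = 0.5 * n * math.log(phi / (2 * math.pi)) - 0.5 * phi * (r @ r)   # P(y|w,phi)
    lw = 0.5 * d * math.log(lam / (2 * math.pi)) - 0.5 * lam * (w @ w)   # P(w|lam)
    return ll + lw + log_gamma_pdf(lam, a0, b0) + log_gamma_pdf(phi, a0, b0)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -0.5, 0.25])
y = X @ w_true + 0.1 * rng.normal(size=20)

print(log_joint(y, X, w_true, lam=1.0, phi=100.0))
```

Since $P(w,\lambda,\phi|y) \propto P(y,w,\lambda,\phi)$, this log joint is exactly the target (up to the constant $\log P(y)$) that the moments of any $q(w)\,q(\lambda)\,q(\phi)$ approximation should be compared against.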