How to Calculate the Statical or First Moment of Area of Beam Sections
The statical or first moment of area (Q) simply measures the distribution of a beam section's area relative to an axis. It is calculated by taking the summation of all areas, each multiplied by its centroid's distance from a particular axis (area times distance).
In fact, you may not have realised it, but if you've calculated the centroid of a beam section then you have already calculated the first moment of area. This property, often denoted by Q, is most commonly used when determining the shear stress of a beam section.
Since beam sections are usually made up of many geometries, we first need to split the section into segments. After this, the area and centroid of each segment are calculated to find the overall statical moment of area.
Consider the I-beam section shown below. In our previous tutorial we already found the centroid to be 216.29 mm from the bottom of the section. To calculate the statical moment of area relative to the horizontal x-axis, the section can be split into 4 segments as illustrated:
Remember that the first moment of area is the summation of the areas multiplied by the distance from the axis. So the formula for the statical moment of area relative to the horizontal x-axis is:
$$\begin{align} Q_x &=\sum{y}_{i}{A}_{i}\text{ where:}\\ {A}_{i} &= \text{the individual segment's area}\\ {y}_{i} &= \text{the individual segment's centroid distance from a reference line or datum} \end{align}$$
Now for a result such as shear stress we often want the statical moment of either the TOP or BOTTOM of the section relative to the Neutral Axis (NA) XX. Let's start with the TOP portion of the section (i.e. Segments 1 and 2). We'll find $A_i$ and $y_i$ for each segment of the I-beam section above the neutral axis and then compute the statical moment of area ($Q_x$). Remember we measure the distances from the neutral axis!
$$\text{Segment 1:}\\ \begin{align} {A}_{1} &= 250\times38 = 9500 {\text{ mm}}^{2}\\ {y}_{1} &= 159.71-\tfrac{38}{2} = 140.71 \text{ mm} \end{align}$$
$$\text{Segment 2:}\\ \begin{align} {A}_{2} &= (159.71-38)\times25 = 3042.75 {\text{ mm}}^{2}\\ {y}_{2} &= \tfrac{159.71-38}{2} = 60.86 \text{ mm} \end{align}$$
$$\begin{align} {Q}_{x,top} &= \sum{y}_{i}{A}_{i}\\ {Q}_{x,top} &={y}_{1}{A}_{1} + {y}_{2}{A}_{2}\\ {Q}_{x,top} &=(140.71\times9500)+(60.86\times3042.75)\\ {Q}_{x,top} &\approx 1,521,900\text{ mm}^{3} \end{align}$$
Similarly, we can calculate the statical moment of area of the BOTTOM portion of the section. This involves Segments 3 and 4, which are below the neutral axis.
$$\text{Segment 3:}\\ \begin{align} {A}_{3} &= (216.29-38)\times25 = 4457.25 {\text{ mm}}^{2}\\ {y}_{3} &= \tfrac{216.29-38}{2} = 89.15 \text{ mm} \end{align}$$
$$\text{Segment 4:}\\ \begin{align} {A}_{4} &= 150\times38 = 5700 {\text{ mm}}^{2}\\ {y}_{4} &= 216.29-\tfrac{38}{2} = 197.29 \text{ mm} \end{align}$$
$$\begin{align} {Q}_{x,bottom} &=\sum{y}_{i}{A}_{i} \\ {Q}_{x,bottom} &={y}_{3}{A}_{3} + {y}_{4}{A}_{4} \\ {Q}_{x,bottom} &=(89.15\times4457.25)+(197.29\times5700) \\ {Q}_{x,bottom} &\approx 1,521,900\text{ mm}^{3} \end{align}$$
What you’ll notice is that the statical moment of area above the neutral axis is equal to that below the neutral axis!
$${Q}_{x,top}={Q}_{x,bottom}$$
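If you prefer to check these numbers programmatically, here is a minimal Python sketch that recomputes both values from the segment areas and centroid distances used above (the dimensions are the ones assumed for this example I-beam, and it is not the calculator mentioned below):

```python
# Recompute Q_x for the top and bottom of this example I-beam from the segment
# areas and the 159.71 mm / 216.29 mm centroid distances quoted above.
segments_top = [
    (250 * 38, 159.71 - 38 / 2),              # top flange: (area, centroid distance from NA)
    ((159.71 - 38) * 25, (159.71 - 38) / 2),  # web portion above the NA
]
segments_bottom = [
    ((216.29 - 38) * 25, (216.29 - 38) / 2),  # web portion below the NA
    (150 * 38, 216.29 - 38 / 2),              # bottom flange
]

def first_moment(segments):
    """Q = sum of (area * centroid distance) over all segments."""
    return sum(area * y for area, y in segments)

print(first_moment(segments_top))     # ~1.52e6 mm^3
print(first_moment(segments_bottom))  # ~1.52e6 mm^3
```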
Of course you don’t need to do all these calculations manually because you can use our fantastic Free Moment of Inertia Calculator to find the statical moment of area of beam sections.
Visit the next step: How to Calculate the Moment of Inertia of a Beam Section.
|
Let $S$ be a finite set of integers (this set contains about 200000 elements). Let $T \subset S$ be a particular subset of $S$ called
target. $S$ keeps growing. So does $T$. Each new element of $S$ might or might not be in $T$.
No (known, or practical) algorithm can determine if an element $s \in S$ is in the
target set: a human being must give the final word (ie, it is subjective). It is estimated that $T$ has about 30000-35000 elements. I already know $T_1$, a first approximation of $T$, with about 25000 elements. I also already know some thousands of elements of $S$ that are certainly not in $T$.
What I want is a way to approximate $T$ as closely as possible, and present only those elements to a human being. Also, for each new element of $S$, I want to determine if it has high probability of being in $T$ -- and present only those with high probability to a human being.
Now, I describe what I can use to try to approximate $T$.
Each integer $s \in S$ has some
labels associated. These can be represented as subsets $L_i \subset S, \forall i \in \{1, ..., n\}$ ($n$ is about 250). These subsets are known, determined by algorithms (i.e., I have functions $l_i : S \to \{in,out\}$ such that $l_i(s) = in \iff s \in L_i$).
Some label algorithms are very fast, some are slow. Anyways, these labels (ie, the sets $L_i$) have already been determined. Some of these labels contain very few (1-100) elements, some contain a lot (100000-150000). Many labels are independent, some are closely related (ie, I know that some labels are subsets of others, I know that some are disjoint, etc).
So, given this framework, what kind of algorithms can I use to approximate $T$? They can be interactive, ie, they could get better after each new approximation of $T$, if this makes the problem easier.
I thought about using a
genetic algorithm to determine which labels, when intersected, give good approximations of $T$. However, this can get slow with a naïve intersection algorithm (i.e., suppose $L_1, L_2, L_3$ are to be intersected; if they are all "big" (50000-150000 elements), it can be quite time consuming to calculate the intersection -- now imagine a gene that would require intersecting, say, 50 labels...).
How can I speed this up, without sacrificing too much precision?
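For reference, one standard way to make the repeated intersections cheap, independent of which search algorithm runs on top, is to encode every label as a bitset over a fixed ordering of $S$, so that intersecting even large labels becomes a single bitwise AND. The snippet below is only a generic sketch of that encoding (the elements and labels are made up for illustration):

```python
# Hypothetical illustration: represent each label L_i as one big Python int used
# as a bitset over a fixed ordering of S; intersections then cost one bitwise AND.
elements = list(range(200_000))                 # stand-in for S
index = {s: i for i, s in enumerate(elements)}  # position of each element

def to_bitset(members):
    """Pack a collection of elements of S into a single integer bitset."""
    bits = bytearray((len(elements) + 7) // 8)
    for s in members:
        i = index[s]
        bits[i >> 3] |= 1 << (i & 7)
    return int.from_bytes(bits, "little")

# Made-up labels, just to show the mechanics.
L1 = to_bitset(range(0, 150_000))
L2 = to_bitset(range(50_000, 200_000, 2))
L3 = to_bitset(range(0, 200_000, 3))

intersection = L1 & L2 & L3              # cheap even when the labels are "big"
print(bin(intersection).count("1"))      # size of the intersection
```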
|
I have a problem verifying the following equation (in three dimensions)
$$\epsilon_{abc} e^a\wedge R^{bc}=\sqrt{|g|}Rd^3 x$$
where $R$ is the Ricci scalar and $R^{bc}$ is the Ricci curvature
Attempt at a solution:
$$\epsilon_{abc} e^a\wedge R^{bc}=\epsilon_{abc} e_\mu^ae_\alpha^be_\beta^c R^{\alpha\beta}_{\nu\rho} dx^\mu\wedge dx^\nu\wedge dx^\rho$$
Now the idea is that the number of dimensions and the Levi-Civita tensor and the antisymmetry of the three-form forces the set $\{\alpha,\beta\}=\{\nu,\rho\}$. This will give the expression
\begin{align}\epsilon_{abc} e^a\wedge R^{bc}&=\epsilon_{abc} e_0^ae_1^be_2^c R^{12}_{12} dx^0\wedge dx^1\wedge dx^2+\\&\epsilon_{abc} e_0^ae_1^be_2^c R^{12}_{21} dx^0\wedge dx^2\wedge dx^1+\\&\epsilon_{abc} e_0^ae_2^be_1^c R^{21}_{21} dx^0\wedge dx^2\wedge dx^1+\\&\epsilon_{abc} e_0^ae_2^be_1^c R^{21}_{12} dx^0\wedge dx^1\wedge dx^2+({\rm cyclic\,permutations})\end{align}
The problem now is that the Ricci scalar is $R^{12}_{12}+R^{21}_{21}+({\rm cyclic\,permutations})$, so when counting the number of terms I obtain $2\sqrt{|g|}Rd^3 x$ which is wrong by a factor of 2. Can anyone see where I made a mistake?
This post imported from StackExchange Physics at 2015-10-11 18:32 (UTC), posted by SE-user user2133437
|
I have a time series that has been measured after convolution with a moving average filter. Knowing the parameters of the moving average filter, is it possible to reconstruct/constrain the values of ...
I have 2 oversampling ADC's running parallelly, each to process data in a specific range of the input as shown below:Each ADC can process only half cycle range of a sine wave. Each ADC adds its own ...
Suppose we want to approximate the instantaneous power $z(n) = |y(n)|^2$ of the discrete-time signal $y(n)$, where $y(n)$ is the result of filtering $x(n)$ with a given window $h(n)$, $y(n) = x(n) \...
The measure of a given frequency $\omega$ in a signal $x(t)$ is:$\frac{1}{N}\sum\limits^N_{t=0}x\left(t\right)e^{^{-i \omega t}}$This is basically an average of the correlation between the signal ...
After filtering my noisy input signal using an anti-aliasing and FIR filter, I now wish to get the basic signal information (peak voltage and impedance; $R$ and $X$) from the pre-filtered as well as ...
I'm trying to figure out where exactly to draw the confidence levels for the autocorrleation function (ACF) and the partial autocorrelation function (PACF) for an ARMA model.For PACF I found that a ...
Suppose in one case I convolve Gaussian kernel with FWHM=10 samples. I would like to compare the result with moving average. My question: should I take the moving average window also 10 samples? In ...
In order to know if my signal is increasing or decreasing, I'm using the discrete derivative $y[n] = x[n] - x[n-1]$or a smoothed version of it (for example Exponential Weight Moving Average of $y[n]$ ...
I'm implementing a 80-72-64-48 multi pass moving average filter for a embedded system in C and in fixed point. The implementation is a circular buffer where i'm keeping a running sum and calculating...
On a stationary signal a sine-weighted moving average is calculated (SWMA: the coefficient vector looks like the first (>0) part of a sinusoid). The SWMA looks like this: very smooth:The future is ...
I have 80 seconds of data and I have to score my data by taking the average or median (or some other method) every 10 seconds. What's the best way to do this ? Should I just use a regular rectangular ...
|
Since moving to a Mac a bit over a year ago, I've had only a few reasons to look back (the business with the HP LJ1022 printer being one of them). I'm now rather close to the end of my tether, and the reason is fonts.
As an academic and a computer scientist, I end up writing quite a lot of papers and presentations with maths in them. Like any sensible person, I use LaTeX for typesetting the maths; it's a lot easier to type $\sum_{i=0}^{i=n-1} i^2$ than to wrestle with the equation editor in Word. I've also been using LaTeX for rendering mathematical expressions in lecture slides; there are two tools - LaTeXit and LaTeX Equation Editor - which make putting maths in Powerpoint or KeyNote a drag-and-drop operation.
However, I've spent quite a lot of time over the last week trying to debug a problem with the font rendering of TeX-generated PDF files on OS X. If I wrote a LaTeX file containing the following:
\documentclass{article} \begin{document} \section{This is a test} \[e = mc^2 \rightarrow \chi \pi \ldots r^2 \] \end{document}
then I'd expect it to render something like this:
Preview renders it like that, but not reliably - perhaps one time in eight. The rest of the time, it randomly substitutes a sans serif font for the various Computer Modern fonts. Sometimes it looks like this (missing the italic font):
Sometimes it looks like this (missing the bold and italic fonts):
And sometimes it looks like this (missing the bold and symbol fonts):
It isn't predictable which rendering I get. The problem also isn't limited to CM, but appears whenever you have a subset of a Type1 font embedded in PDF (on my machine, at least); TeX isn't the problem. The problem didn't exist on 10.4. The best guess from the Mac communities is that it's a cache corruption problem with the OS X PDF-rendering component on 10.5 (which would explain why I see the same problem in LaTeXit, LEE and Papers, but not in Acrobat).
I really don't see how Apple could have let a release out of the door with a bug like this - this is surely a critical bug for anyone in publishing.
Edited to add links: Apple forums [1] [2] [3] Macscoop on 10.5.2 update Another report of the problem Clearing the font cache
|
Expanding on
physicsphile's answer, there is an alternative way of computing the expectation value of $f(q)$, and that is to sum the possible values of $f(q)$ weighted by their respective probabilities as follows.
If $f(q)$ represents a physical dynamical variable, then it is a Hermitian (self-adjoint) operator and can therefore be diagonalized by some set of eigenfunctions $\phi_i(q)$ such that
$$f(q)\phi_i(q) = \lambda_i\phi_i(q),$$
where $\lambda_i$ is the eigenvalue. This set of eigenfunctions is an orthonormal (read: they are orthogonal and normalized) basis for the Hilbert space, and therefore any wave function can be expanded as a linear combination of these eigenfunctions,
$$\psi(q) = \sum_i a_i \phi_i(q).$$
Using your formula for the expected value, we have
$$\langle f(q)\rangle = \int dq~\psi(q)^*f(q)\psi(q) = \int dq~\sum_i a_i^*\phi_i(q)^*f(q)\sum_j a_j\phi_j(q)=\sum_{i,j}a_i^*a_j\int dq~\phi_i(q)^*f(q)\phi_j(q)$$
Now, $f(q)$ acting on $\phi_j(q)$ yields $\lambda_j\phi_j(q)$, and since this eigenvalue is just a number, it can be pulled out of the integral, yielding
$$\langle f(q)\rangle =\sum_{i,j}a_i^*a_j\lambda_j\int dq~\phi_i(q)^*\phi_j(q).$$
Now, since the eigenfunctions are orthonormal, the integral evaluates to the Kronecker-delta
$$\delta_{ij} = \begin{cases} 1 & i=j\\ 0 & i\neq j \end{cases},$$
in which case the sum collapses to a single sum, yielding
$$\langle f(q)\rangle =\sum_{i}|a_i|^2\lambda_i.$$
When we recognize $|a_i|^2$ as exactly the expression in
physicsphile's answer, you can see that you can interpret the integral you've written as a sum over the eigenvalues (i.e. the possible measured values of $f(q)$!) weighted by their probabilities $|a_i|^2$ in the state $\psi(q)$.
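The same bookkeeping can be checked numerically in a finite-dimensional analogue, where the operator is a Hermitian matrix, the integral becomes an inner product, and the eigenfunction expansion is an eigenvector expansion. This is only an illustrative sketch with a randomly generated matrix, not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random Hermitian "observable" and a normalized "state" in C^4.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
F = (A + A.conj().T) / 2
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Direct expectation value <psi|F|psi>.
direct = np.vdot(psi, F @ psi)

# Eigen-expansion: coefficients a_i = <phi_i|psi>, then sum |a_i|^2 * lambda_i.
lam, phi = np.linalg.eigh(F)            # columns of phi are eigenvectors
a = phi.conj().T @ psi
weighted = np.sum(np.abs(a) ** 2 * lam)

print(direct.real, weighted)            # the two agree (imaginary part ~ 0)
```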
|
I am trying to intuitively understand why equation $Q=CV$ should work. I can understand why $V$ should be proportional to $Q$ but not why $Q$ should be proportional to $V$. In the end, $C$ is constant so $\frac{1}{C}$ will be a constant too. So the equation should work. But intuitively I am not able to understand
A capacitor in an electrical circuit is similar to a spring in a mechanical system, with $Q$ being like the amount of stretching $x$ of the spring, $V$ is like the force $F$ produced by the spring, and $1/C$ is like the spring constant $k$.
Now you say you understand the equation $V=\dfrac{1}{C} Q$, which makes sense because the charge is creating the voltage, so if you have twice the charge, it must make twice the voltage. According to what I said in the first paragraph, this is analogous to the equation $F=kx$. This also makes sense: the more you stretch the spring, the more force you get.
Now let's look at the mechanical version of the equation you don't understand. The mechanical version is $x=F/k$. This says that if the spring exerts a force $F$, then it must have been stretched by a proportional amount. For example, if the spring force is ten percent greater, then the spring must have been stretched ten percent farther: if it had stretched any farther, the force would be more than ten percent greater, and if it had stretched less, the force would be less. Another way of looking at things is to say $k=F/x$. In this equation it is obvious that $F$ and $x$ have a constant proportion.
You can apply this same logic to the equation $Q=CV$: the only way to increase the voltage by ten percent is to increase the charge by ten percent. Put another way, the ratio of voltage to charge has to be a constant: $\dfrac{V}{Q}=\dfrac{1}{C}$.
What about: If you double the charge, you double the voltage, thus V is proportional to Q. If you double the voltage, you also double the charge, so Q is also proportional to V.
Look at the way capacitance is defined: $C=\frac{Q}{V}$
Capacitance is defined as the amount of charge required to raise the potential by $1\text{ volt}$. So, for a capacitor of high capacitance, more charge is required to raise the potential by $1\text{ volt}$.
For example, if the capacitance of a capacitor is $50\,\mu\text{F}$, then $50\,\mu\text{C}$ of charge is required to raise the potential of the capacitor by $1\text{ volt}$.
That's just the way capacitance is defined.
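As a concrete plug-in of the definition (the numbers here are chosen purely for illustration):
$$Q = CV = (50\ \mu\text{F})\times(12\ \text{V}) = 600\ \mu\text{C},\qquad V = \frac{Q}{C} = \frac{600\ \mu\text{C}}{50\ \mu\text{F}} = 12\ \text{V}.$$
Doubling either quantity while keeping the same capacitor doubles the other, which is exactly the proportionality discussed above.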
|
Now showing items 1-10 of 55
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
|
Generalized Truth Functions
Logic is all about choices. An idea is true or false, a student is present or absent, a switch is on or off. All the truth tables and specialized language of propositional calculus are geared to exploring these choices and their consequences.
In this section I will lay out basic concepts and use them to explore logic in general. With these simple ideas we'll look at topics familiar to logic and some that aren't usually thought of as logic.
It should be noted that some, but not all, notation I use is my own. I have tried to use as much existing notation and vocabulary as I can while still leaving myself free to explore.
The Holy Trinity: Core Concepts
$$(\textbf{A}^n),\ (\textbf{A}^n)_\text{CV},\ (\textbf{A}^n)_\alpha$$
At its core, logic has three parts: domains, condition values, and logical functions. With these basic parts one can explore the certainty of orthodox formal logic, the uncertainty of quantifiers, the expanse of sets, and the insight of theory.
More Of The Same, With A Twist: Function Parses and Operators
$$(A,B)_\alpha\big|_{(A,B)_\text{CV}=i}=\alpha_i,\ \vec{\alpha}^4=(\alpha_\mathtt{0x0},\alpha_\mathtt{0x1},\alpha_\mathtt{0x2},\alpha_\mathtt{0x3})$$
Continuing where "The Holy Trinity" left off, this section explores the output of truth functions as individual variables.
I Swear It Was Here A Minute Ago: Term Neglect
$$\big<(\textbf{A}^n,\textbf{B}^m)_\alpha\big>^{(\textbf{A}^n)}_\phi=\big((\textbf{A}^n,\textbf{B}^m)_\alpha\big|_{(\textbf{A}^n)_\text{CV}=0},\ldots,(\textbf{A}^n,\textbf{B}^m)_\alpha\big|_{(\textbf{A}^n)_\text{CV}=2^n-1}\big)_\phi$$
Sometimes you need to test a truth function's mettle on a domain. A neglect function is just such a test. This special-case truth function systematically compares all semi-parses of a function, across a neglected domain, using a specific standard.
In this section neglect is generally defined, and two even more special forms are presented.
Setting Standards: Standard Operator Functions
$$ (\textbf{A}^n)_{\sigma^n_i}\big|_{(\textbf{A}^n)_\text{CV}=j}=A_{i-1}\ \forall j\in[0,2^n-1]\ni i\in[1,n] $$
Among the many strange and complex truth functions that are possible when the concept is generalized, there are several very boring functions that only ever return the value of one operand. This section defines those functions, and provides notation to invoke them.
|
Truth Function Algebra
Algebra is all about keeping a predicate true while rearranging the variables that define its shape. Identities introduce new variables or repeat existing ones, commutativity switches the location of variables, and associativity changes which functions variables are in.
This section will explore these three algebraic concepts as they pertain to generalized truth functions. In order to accommodate this exploration, I also introduce a method for calculating new operators.
Bit By Bit: Bitwise Operator Calculation
$$\big(A,(B,C)_\alpha\big)_\beta=(A,B,C)_\gamma\ \ni\ \gamma=\big[\mathtt{0xAA},[\mathtt{0xCC},\mathtt{0xF0}]_\alpha\big]_\beta$$
Algebra is mostly performed in abstraction; variables are never really given value. But eventually, someone will have to put rubber to the road, and reify the algebraic result. In generalized truth functions, that method is the bitwise calculation.
This section derives the method for calculating a truth function's operator from a composition of truth functions, and provides some examples.
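A minimal sketch of the idea, assuming each operator is stored as an integer truth table indexed by condition value (so that for three variables $A$, $B$, $C$ the masks are $\mathtt{0xAA}$, $\mathtt{0xCC}$, $\mathtt{0xF0}$; the encodings and bit ordering below are illustrative assumptions on my part, not fixed by the notation above):

```python
# Sketch: operators as truth-table bitmasks over 2^n condition values.
# apply_op evaluates an operator (its table indexed by the operands' condition
# value) bitwise across the rows of its operand masks.

def apply_op(op_table, operands, n_rows):
    """op_table: int whose bit i gives the output for operand condition value i.
    operands: list of int masks, one bit per row; returns the composed mask."""
    out = 0
    for row in range(n_rows):
        cv = 0
        for k, mask in enumerate(operands):
            cv |= ((mask >> row) & 1) << k   # build this row's condition value
        out |= ((op_table >> cv) & 1) << row
    return out

A, B, C = 0xAA, 0xCC, 0xF0      # 3-variable masks (8 rows)
AND, OR = 0b1000, 0b1110        # 2-ary truth tables: output bit for CV 0..3

# gamma = [0xAA, [0xCC, 0xF0]_AND]_OR, i.e. A or (B and C)
inner = apply_op(AND, [B, C], 8)
gamma = apply_op(OR, [A, inner], 8)
print(f"{gamma:#04x}")          # 0xea -> truth table of A or (B and C)
```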
Redundancy At Work: Redundant Variables
$$(\textbf{A}^n,X,X)_{\alpha'}=(\textbf{A}^n,X)_\alpha$$
There's nothing saying a variable can't repeat in a domain. In fact, repeating a variable can be very useful. In later exercises we'll see precisely how useful these stutters can be, but for now, we'll just look at some basic consequences.
So, What Do You Do Here? Trivial Truth Functions
$$(\textbf{A}^n,\textbf{B}^m)_\alpha=(\textbf{A}^n)_\beta\ \forall\ (\textbf{A}^n,\textbf{B}^m)_\text{CV}$$
Often we're confronted with superfluous information, some concept that really doesn't matter. This also happens in truth functions. In the notation I use, functions with unnecessary input variables are called
trivial.
No, You First: Commutativity
$$(\textbf{A}^n,X,\textbf{B}^m,Y,\textbf{C}^p)=(\textbf{A}^n,Y,\textbf{B}^m,X,\textbf{C}^p)$$
Commutativity is the term-mobility tool that swaps the position of variables in the queue of an expression. Normally, in algebra, commutativity is restricted to functions that commute variables without changing the function. However, by allowing the truth function to change, commutativity is guaranteed.
Singled Out
$$ (X^n,\textbf{A}^m)_\alpha=\big(X^n,(\textbf{A}^m)_\beta\big)_\gamma\iff(X^n,\textbf{A}^m)_\alpha\big|_{(\textbf{A}^m)_\text{CV}=i}\in\big\{(X^n)_{\theta_0},(X^n)_{\theta_1}\big\}\ \forall\ i $$
where $(X^n,0)_\gamma=(X^n)_{\theta_0}$, $(X^n,1)_\gamma=(X^n)_{\theta_1}$, and each parse of $\beta$ is used to select which $\theta$ will react to $X^n$ for each $(\textbf{A}^m)_\text{CV}$.
|
I got this problem:
Prove that if $f:[a,b]\to[a,b]$ is a nondecreasing function then $\exists x_0\in[a,b]$ such that $f(x_0)=x_0$ (i.e. $f$ has a fixed point).
(Hint: set $A=\{x\in[a,b]\mid x\leq f(x)\}$ and show that $x_0=\sup A$ exists and that $f(x_0)=x_0$.)
I tried to show that $A\neq\emptyset$ by supposing that $A=\emptyset$ and trying to reach a contradiction, but I got stuck.
Thanks for any help.
|
It is not quite true that we don't obtain any useful information. The relativistic particle action is indeed
$$ S = -m_0c^2 \int dt_{\rm proper} $$
When you substitute your correct formula for $dt_{\rm proper}$ and Taylor expand the Lorentz factor in it, the integral has the factor of $dt_{\rm coordinate}\,(1-v^2/2c^2+\dots)$. The first term proportional to $1$ is constant and the second term gives you the usual $mv^2/2$ part of the non-relativistic action.
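Spelled out (this expansion is standard and is added here only for readability):
$$-m_0c^2\sqrt{1-\frac{v^2}{c^2}} = -m_0c^2 + \frac{1}{2}m_0v^2 + \frac{1}{8}\frac{m_0v^4}{c^2} + \dots$$
so the rest energy appears as an irrelevant constant and the familiar kinetic term is the first nontrivial piece.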
At least formally, the relativistic action above may be used to deduce the propagators for a relativistic spinless particle – such as the Higgs boson. The addition of a spin isn't straightforward. However, the "proper time" action above may be easily generalized to the "proper area of a world sheet" action (times $-T$, the negative string tension) for string theory, the so-called Nambu-Goto action, and this action admits spin, interactions, many strings/particles, and is fully consistent. The "proper time" action is therefore the usual starting point to motivate the stringy actions (see e.g. Polchinski's String Theory, initial chapters).
The reason why a single relativistic particle isn't consistent without the whole machinery of quantum fields is a physical one and it may be seen in the operator formalism just like in the path integral formalism. Any valid formalism has to give the right answers to physical questions and the only right answer to the question whether a theory of interacting relativistic particles without particle production may be consistent is No.
In the path integral formalism, we could say that it brings extra subtleties to have a path integral with a square root such as $\sqrt{1-v^2/c^2}$. To know how to integrate such nonlinear functions in an infinite-dimensional functional integral, you have to do some substitutions to convert them to a Gaussian i.e. $\exp(-X^2)$ path integral.
This may be done by the introduction of an auxiliary time-like parameter along the world line, $\tau$, agreeing with the variable in the aforementioned paper. With a condition relating $\tau$ and $t_{\rm coordinate}$, it may be guaranteed that the new action in the $\tau$ language looks like$$ - m\int d\tau \,e(\tau) \left( \frac{d X^\mu}{d\tau}\cdot \frac{dX_\mu}{d\tau}\right) $$which is nicely bilinear and the square root disappear. However, this clever substitution or any similar substitution has the effect of allowing the negative-energy solutions, too.
While the non-relativistic $p^2/2m$ is positive semidefinite, $m/\sqrt{1-v^2/c^2}$ can really have both signs. We may manually try to "forbid" the negative sign of the square root but this solution will always reappear whenever we try to define the path integral (or another piece of formalism) rigorously.
This implies that we have states with energy unbounded from below, an instability of the theory because the particle may roll down to minus infinity in energy. Alternatively, the squared norms of these negative-energy states may be (and, in fact, should be) taken to be negative, traded for the negative energy, which brings an even worse inconsistency: negative probabilities.
The only consistent way to deal with these negative-norm solutions is to "occupy" all the states with negative energies so that any change in the states with negative energy means to add a hole – an antiparticle such as the positron – whose energy is positive (above the physical vacuum) again. At least, this description (the "Dirac sea") is valid for fermions. For bosons, we use an approach that is the direct mathematical counterpart of the Dirac sea but only in some other variables.
It's important to realize that any attempt to ban the negative-energy solutions by hand will lead to an inconsistent theory. The consistent theory has to allow the antiparticles (which may be identical to the particles in some "totally real field" cases, however), and it must allow the particle-antiparticle pairs to be created and destroyed. It's really an inevitable consequence of the combination of assumptions "special relativity" plus "quantum mechanics". Quantum field theory is the class of minimal theories that obey both sets of principles; string theory is a bit more general one (and the only other known, aside from QFT, that does solve the constraints consistently).
Why quantum field theory predicts a theory equivalent to multibody relativistic particles (which are indistinguishable) is the #1 most basic derivation in each quantum field theory course. A quantum field is an infinite-dimensional harmonic oscillator and each raising operator $a^\dagger(\vec k)$ increases the energy (eigenvalue of the free Hamiltonian $H$) by $\hbar\omega$ which is calculable and when you calculate it, you simply get $+\sqrt{m_0^2+|\vec k|^2}$ in the $c=1$ units. So $a^\dagger(\vec k_1)\cdots a^\dagger(\vec k_n)|0\rangle$ may be identified with the basis vector $|\vec k_1,\dots,\vec k_n\rangle$ (anti)symmetrized over the momenta (with the right normalization factor added) in the usual multiparticle quantum mechanics. This has various aspects etc. that are taught in basic quantum field theory courses. If you don't understand something about those things, you should probably ask a more specific question about some step you don't understand. Quantum field theory courses often occupy several semesters so it's unproductive to try to preemptively answer every question you could have.
This post imported from StackExchange Physics at 2014-04-21 15:14 (UCT), posted by SE-user Luboš Motl
|
Ex.5.3 Q1 Arithmetic progressions Solutions - NCERT Maths Class 10 Question
Find the sum of the following APs.
(i) \(2, 7, 12 ,\,\dots,\) to \(10\) terms.
(ii) \(- 37, - 33, - 29,\,\dots, \)to \(12\) terms
(iii) \(0.6, 1.7, 2.8 ,\,\dots,\) to \(100\) terms
(iv) \(\begin{align}\frac{1}{{15}},\frac{1}{{12}},\frac{1}{{10}},\end{align}\)........., to \(11\) terms
Text Solution
(i) \(2, 7, 12 ,\,\dots,\) to \(10\) terms.
What is Known?
The AP \(2,7,12,\, \dots\)
What is Unknown?
Sum upto \(10\) terms of the AP.
Reasoning:
Sum of the first \(n\) terms of an AP is given by \({S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\)
Where \(a\) is the first term, \(d\) is the common difference and \(n\) is the number of terms.
Steps: Given, First term, \(\begin{align}{a = 2}\end{align}\) Common Difference, \(d = 7 - 2 = 5\) Number of Terms, \(\begin{align}n = 10\end{align}\)
We know that Sum up to \(n^\rm{th}\) term of AP,
\[\begin{align}{S_n} &= \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\\{S_{10}} &= \frac{{10}}{2}\left[ {2\left( 2 \right) + \left( {10 - 1} \right)5} \right]\\ &= 5\left[ {4 + 9 \times 5 } \right]\\ &= 5\left[ {4 + 45} \right]\\ &= 5 \times 49 \\&= 245\end{align}\]
(ii) \(- 37, - 33, - 29,\,\dots, \)to \(12\) terms
What is Known?
The AP \(-37, -33, -29, \dots\)
What is Unknown?
Sum up to \(12\) terms
Reasoning:
Sum of the first \(n\) terms of an AP is given by \({S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\)
Where \(a\) is the first term, \(d\) is the common difference and \(n\) is the number of terms.
Steps: Given, First term, \(\begin{align}a = -37\end{align}\) Common Difference, \(d = ( - 33) - ( - 37) = 4\) Number of Terms, \(\begin{align}n = 12\end{align}\)
We know that Sum up to \(n^\rm{th}\) term of AP,
\[\begin{align}{S_n} &= \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\\{S_{12}} &= \frac{{12}}{2}\left[ {2\left( { - 37} \right) + \left( {12 - 1} \right)4} \right]\\ &= 6\left[ { - 74 + 11 \times 4} \right]\\& = 6\left[ { - 74 + 44} \right]\\& = 6\times\left( { - 30} \right)\\& = - 180\end{align}\]
(iii) \(0.6, 1.7, 2.8 ,\,\dots,\) to \(100\) terms
What is Known?
The AP \(0.6, 1.7, 2.8 ,\,\dots,\)
What is Unknown?
Sum up to \(100\) terms of the AP.
Reasoning:
Sum of the first \(n\) terms of an AP is given by \({S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\)
Where \(a\) is the first term, \(d\) is the common difference and \(n\) is the number of terms.
Steps: Given, First term, \(\begin{align}a = 0.6\end{align}\) Common difference, \(d = 1.7 - 0.6 = 1.1\) Number of Terms, \(\begin{align}n = 100\end{align}\)
We know that Sum up to \(n^\rm{th}\) term of AP,
\[\begin{align}{S_n} &= \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\\{S_{100}} &= \frac{{100}}{2}\left[ {2 \times 0.6 + \left( {100 - 1} \right)1.1} \right]\\ &= 50\left[ {1.2 + {99} \times {1.1} } \right]\\ &= 50\left[ {1.2 + 108.9} \right]\\ &= 50\left[ {110.1} \right]\\ &= 5505\end{align}\]
(iv) \(\begin{align}\frac{1}{{15}},\frac{1}{{12}},\frac{1}{{10}},\end{align}\)........., to \(11\) terms
What is Known?
The AP \(\begin{align}\frac{1}{{15}},\frac{1}{{12}},\frac{1}{{10}}, \end{align}\)
What is Unknown?
Sum up to \(11\) terms of the AP.
Reasoning:
Sum of the first \(n\) terms of an AP is given by \({S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\)
Where \(a\) is the first term, \(d\) is the common difference and \(n\) is the number of terms.
Steps: Given, First term, \(\begin{align}a = \frac{1}{{15}}\end{align}\) Common difference, \(\begin{align}d = \frac{1}{{12}} - \frac{1}{{15}} = \frac{1}{{60}}\end{align}\) Number of Terms, \(\begin{align}n=11\end{align}\)
We know that Sum up to \(n^\rm{th}\) term of AP,
\[\begin{align}{S_n} &= \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\\{S_{11}} &= \frac{{11}}{2}\left[ {2 \times \frac{1}{{15}} + \left( {11 - 1} \right)\frac{1}{{60}}} \right]\\ &= \frac{{11}}{2}\left[ {\frac{2}{{15}} + \frac{1}{6}} \right]\\& = \frac{{11}}{2}\left[ {\frac{{4 + 5}}{{30}}} \right]\\ &= \frac{{11}}{2} \times \frac{3}{{10}}\\& = \frac{{33}}{{20}}\end{align}\]
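If you want to double-check the arithmetic, the same sum formula can be evaluated in a few lines of Python (this sketch is only a verification aid, not part of the NCERT solution):

```python
from fractions import Fraction

def ap_sum(a, d, n):
    """S_n = n/2 * [2a + (n - 1) d] for an AP with first term a and common difference d."""
    return Fraction(n, 2) * (2 * a + (n - 1) * d)

print(ap_sum(2, 5, 10))                                        # 245
print(ap_sum(-37, 4, 12))                                      # -180
print(float(ap_sum(Fraction(6, 10), Fraction(11, 10), 100)))   # 5505.0
print(ap_sum(Fraction(1, 15), Fraction(1, 60), 11))            # 33/20
```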
|
Proprieties of FBK UFSDs after neutron and proton irradiation up to $6*10^{15}$ neq/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60-$\mu$m thick Ultra-Fast Silicon Detectors (UFSD) detectors manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr $\beta$-source. [...] arXiv:1804.05449. - 13 p. Preprint - Full text
Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453
Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperatures can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576
Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363
Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders as LHC and its upgrades in energy and luminosity Super-LHC and Very-LHC respectively, as well as the requirements for detectors in the conditions of possible scenarios for radiation environments are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and conduces to the increase of the leakage current of the detector, decreases the satisfactory Signal/Noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys.: 57 (2005) , no. 3, pp. 342-348 External link: RORPE
Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable mean to improve the radiation hardness of silicon detectors up to fluencies of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976
Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365
Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594
|
The rest of these docs assume that you’re familiar with the basics of Clojure, and have a working copy of Leiningen (version >= 2) installed. If you’re not yet familiar with Leiningen then you should head over to the Leiningen website and get it installed first. It’s a really nice tool that makes things very easy!
Gorilla is packaged as a Leiningen plugin. To use Gorilla in one of your Leiningen projects, add the following to the
:plugins section of that project’s
project.clj file:
[lein-gorilla "0.4.0"]
Your completed
project.clj file might look something like this:
(defproject gorilla-test "0.1.0-SNAPSHOT"
  :description "A test project for Gorilla REPL."
  :dependencies [[org.clojure/clojure "1.6.0"]]
  :main ^:skip-aot gorilla-test.core
  :target-path "target/%s"
  :plugins [[lein-gorilla "0.4.0"]]
  :profiles {:uberjar {:aot :all}})
That’s it. You should now be able to run
lein gorilla from within the project directory and get started.
When you run
lein gorilla it will start up the REPL server, and print a web-link to the console. Point your web-browser at this link to get going (hint for Mac users: try ctrl-clicking the link). You can open as many browser windows as you like with this link, each will get its own nREPL session to work in, but share the same nREPL instance (in case you're not familiar with nREPL's terminology: this means all windows will share definitions etc, but each window will separately keep track of which namespace you're working in - try it, you'll see it's quite natural).
Once you’ve got a web-browser pointed at Gorilla you can use it just like a REPL. Type some clojure code in, and hit
shift+enter to evaluate it. The results are displayed below the code, along with any console output or errors that were generated. Gorilla offers nREPL's autocomplete function, hit
ctrl+space to see what nREPL has to suggest (unless you’re using Firefox on Mac - see the commands section below for more info.)
One of the most handy features of Gorilla is the ability to plot graphs. The plotting library gorilla-plot is integrated into Gorilla and is always available without explicitly including it as a dependency in your
project.clj. Full documentation for gorilla-plot is available at the plotting page, but to get you started, let's give a short summary.
There are five functions that should cover many plotting needs. These functions are in the
gorilla-plot.core namespace, so you will need to
use or
require it before starting. The functions are:
(list-plot data) where
data can either be a sequence of y-values, or a sequence of
(x y) pairs.
(plot func [start end]) which will evaluate and plot
func over the given range.
(histogram data) where
data is a list of values.
(bar-chart categories values) where
categories are the category names, and
values their values.
(compose plot1 plot2 & more) which tries to compose together the given plots. Note that composing a bar-chart with other plots will give odd results, as it's not obvious how to compose category-scales.
These functions take many options, look at the detailed docs for more help.
There’s a short video talking a little more about how the plots work, and how they try and fit nicely with the Clojure way of thinking (plots are values) which might interest you.
Plots aren’t the only way that Gorilla can give you a more useful view into your Clojure values. There are a number of other built in functions to view data:
table-view in the
gorilla-repl.table namespace lets you view lists-of-lists as tables. You can supply an optional vector as a
:columns argument to label the columns of the table.
latex-view in the
gorilla-repl.latex namespace lets you view a string as its rendered LaTeX form.
html-view in the
gorilla-repl.html namespace lets you view a string rendered as HTML.
These built-in view functions are just the beginning though. Gorilla REPL has a very flexible, extensible renderer so you can plug in new ways of viewing values. If your favourite library doesn’t have the viewers for the data that you want, then file a feature request, or even better write some code and contribute it!
A feature worth mentioning here is value-copy-and-paste. Although Gorilla shows some values (like graphs, tables) with fancy formatting, the underlying Clojure value is always there to be used. If you alt-click on any output in Gorilla it will give you the readable Clojure value to work with (if it exists).
Before we can go much further we will need to introduce
editor commands. You don't need them right now, but you can see all of the editor commands by hitting
ctrl+g twice in succession (
alt-g on Windows and Linux, see below), or clicking on the faint menu icon in the top right hand corner of the Gorilla window. Hopefully they are all self-explanatory, or at least you can figure them out easily enough.
You'll probably want to use keyboard shortcuts to issue the editor commands if you use Gorilla much: they are usually a sequence of two keypresses. On Windows these are of the form
alt+a alt+b and on other platforms they are of the form
ctrl+a ctrl+b. This document writes them in the format suitable for Mac - if you're on Windows/Linux replace the
ctrls with
alts. An attempt has been made to make sure the commands work across popular browsers and operating systems (which is not as easy as you might think). The one exception is the autocomplete command which doesn't work on Firefox on the Mac, as it steals
ctrl+space for its own use, somewhat controversially. You can instead use
ctrl+g ctrl+a if you’re using Firefox.
Note that you can auto-indent code in the editor by selecting it and hitting shift-tab. This can be a nice way to spot syntax errors :-)
So far we've used Gorilla as a fancy REPL, but we can also think of it as a tool for making documents, which we call 'worksheets'. As well as including snippets of Clojure code, a Gorilla worksheet can include notes, which are written in Markdown format. To add notes you need to first tell Gorilla that it should interpret a snippet of text as notes, rather than Clojure code. To do this place the cursor in the segment (a segment being one of the boxes containing a snippet of code/notes) that you want to use for notes and hit
ctrl+g ctrl+m (g for Gorilla, m for Markdown). You can then feel free to put any Markdown you like in there. The notes segments also support LaTeX formulae. To write a formula you simply surround the latex code with $$, or @@ if you want the formula to appear inline. So for instance, the contents of a Markdown segment could be:
This is an inline formula, @@\sin(x)@@, and this is on its own line:$$\int_0^{2\pi}\sin^2(x) \textrm{d}x$$
Note: currently you will need to be online in order for LaTeX to render properly.
Due to limitations of the underlying CodeMirror component, spell checking of the notes is currently not possible. A workaround is to open the worksheet file (see below) with a Clojure editor which supports spell checking. Emacs works very well for this, with its 'ispell-comments-and-strings' command which ignores code blocks during the spell check. The document should be saved without the output. Gorilla has a command to clear the output of all the segments, which makes this easy.
You can save the contents of a window to a worksheet file. This will include everything you see, the code, the output, graphs, notes and mathematics, the lot. To save a file just hit
ctrl+g ctrl+s. If you haven't already saved the file it will prompt for a filename, which is given relative to the project. To load a file, use
ctrl+g ctrl+l (note that Gorilla files must end in
.clj or
.cljw to be loaded). By convention, I often find it convenient to store my worksheets in a directory called
ws at the root of the project (alongside
src etc) but you, of course, can store them wherever you want. A neat feature is that these worksheet files are just plain Clojure files with some magic comments. This means it's really easy to interactively develop your code in Gorilla, and then turn it into a library when it stabilises.
You might be used to using
doc and
source at the command-line REPL. By default these are not imported into the
user namespace when Gorilla starts, but if you’d like to use them then you just need to run
(use 'clojure.repl) to bring them into scope.
Gorilla works well alongside other editors and there’s a page with hints on setting up your favourite environment to work well with Gorilla.
Many people find it useful to run Gorilla on a server. As all of Gorilla’s communication happens over HTTP and websockets, this is usually very easy. A couple of notes:
You can see what can be customised in Gorilla on the configuration page.
Copyright © 2014-, Jony Hudson and contributors. Privacy policy.
|
Given a planar graph $G$ with $N$ nodes, 4 colors are enough to color the nodes so that adjacent nodes have different colors.
Let $k > 4$. Is there an algorithm to color the nodes with $k$ colors, so that the colors are distributed the most equally possible?
In other words:
Let's call $c_1, \dots, c_k$ the colors. For each $i\in\{1,\dots,k\}$ we define $f(c_i)=\textrm{the number of nodes of color } c_i$. Obviously, $\sum_{i=1}^{k} f(c_i) = N$. Find an algorithm to color the $N$ nodes with the $k$ colors, with adjacent nodes having different colors, that maximizes the value $\min_i f(c_i)$.
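No claim of optimality is made here, but a simple heuristic illustrates the kind of algorithm being asked about: color the nodes greedily and, whenever several colors are legal for a node, pick the one used least so far. The sketch below (with a made-up toy graph) is such a heuristic, not a known optimal algorithm:

```python
# Heuristic sketch: balanced greedy coloring with k colors.
# Assumes k is large enough for this greedy order; otherwise it raises.
from collections import Counter

def balanced_greedy_coloring(adj, k):
    """adj: dict node -> iterable of neighbours; returns dict node -> color in 0..k-1."""
    color = {}
    usage = Counter({c: 0 for c in range(k)})
    # Color high-degree nodes first; they are the most constrained.
    for node in sorted(adj, key=lambda n: -len(adj[n])):
        forbidden = {color[nb] for nb in adj[node] if nb in color}
        legal = [c for c in range(k) if c not in forbidden]
        if not legal:
            raise ValueError("k colors were not enough for this greedy order")
        choice = min(legal, key=lambda c: usage[c])   # least-used legal color
        color[node] = choice
        usage[choice] += 1
    return color

# Tiny example: a 4-cycle with a chord, colored with k = 3.
adj = {1: [2, 3, 4], 2: [1, 3], 3: [1, 2, 4], 4: [1, 3]}
print(balanced_greedy_coloring(adj, 3))
```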
|
Before we can even think of trying to derive it from the Schwarzschild metric, why not first have a go at it with
Newton's law of universal gravitation in its full form, that is, using classical mechanics to its fullest extent before we pull in modern physics? I'd think one is rather hastily skipping a step here ... That will already be much closer to the Schwarzschild metric than the usual pendulum equation, which is based on the idea of a uniform gravitational field, and thus while it doesn't go all the way there, it nonetheless goes part way and so will provide a good deal of the insights.
So this won't directly answer your question as phrased, but I think it will, much better, answer to the
gist of what you're after. Remember that in the Newtonian mechanics you're thinking of, the gravity field is being approximated as uniform - the immediate next step up from there is still to use Newtonian mechanics, but to now drop the uniformity, and switch for the full Newton's law of universal gravitation, since it seems what you're really interested in is the behavior in a more realistic gravitational field taking into account that the field of a real gravitator like the Earth is not uniform: you say "what a pendulums movement would be like from a point mass considering the changes in acceleration." In that case, general relativity is quite overkill if we're talking anything close to an everyday situation.
In that case, we start with a pendulum of length $L$ whose pivot sits at some distance $h$ from a gravitating center of mass $M$, with the angle $\theta$ measured from the straight-down position. Then the force on the pendulum is
$$\mathbf{F} = -\frac{GMm}{||\mathbf{r}||^2} \hat{\mathbf{r}}$$
where $m$ is the mass of the pendulum. However, analyzing this in this form is difficult, so to make the problem easier we will use the Lagrangian formalism. The potential energy is
$$U = -\frac{GMm}{||\mathbf{r}||}$$
and the kinetic energy is
$$K = \frac{1}{2} m \dot{\mathbf{r}}^2$$
so
$$\mathfrak{L} = K - U = \frac{1}{2} m \dot{\mathbf{r}}^2 + \frac{GMm}{||\mathbf{r}||}$$
It is possible to determine with geometry that
$$||\mathbf{r}||^2 = (L^2 + h^2) - 2Lh \cos(\theta)$$
and furthermore we have
$$\begin{align}\dot{\mathbf{r}}^2 &= [L \cos(\theta) \dot{\theta}]^2 + [L \sin(\theta) \dot{\theta}]^2\\&= L^2 \dot{\theta}^2\end{align}$$
Thus
$$\mathfrak{L} = \frac{1}{2} m L^2 \dot{\theta}^2 + \frac{GMm}{\sqrt{(L^2 + h^2) - 2Lh \cos(\theta)}}$$
is the system Lagrangian in the single angular coordinate $\theta$. Now we can set up the Euler-Lagrange equation
$$\frac{\partial \mathfrak{L}}{\partial \theta} = \frac{d}{dt} \frac{\partial \mathfrak{L}}{\partial \dot{\theta}}$$
And we get
$$\frac{\partial \mathfrak{L}}{\partial \theta} = -\frac{GMm}{2} \left[(L^2 + h^2) - 2Lh \cos(\theta)\right]^{-3/2} \cdot [2Lh \sin(\theta)]$$
$$\frac{\partial \mathfrak{L}}{\partial \dot{\theta}} = mL^2 \dot{\theta}$$
with the last equation being recognized as the angular momentum $I \omega$ of the point-mass bob. Thus the full equation of motion is
$$mL^2 \ddot{\theta} = -\frac{GMm}{2} \frac{2Lh \sin(\theta)}{[(L^2 + h^2) - 2Lh \cos(\theta)]^{3/2}}$$
This is the equation of motion of a pendulum in a Newtonian spherically symmetric gravitational potential. As you can see, it is considerably worse than the equation for the pendulum in a uniform field, and is definitely not solvable analytically. We should, however, check to see if it makes sense. If we take the pendulum very very small, that is, $L << h$, we get that $L^2 \approx 0$ and $2Lh \approx 0$ as well so the denominator is practically just $h^3$ and thus
$$mL^2 \ddot{\theta} = -\frac{GMm}{2} \frac{2L}{h^2} \sin(\theta)$$
$$mL^2 \ddot{\theta} = -\frac{GMm}{h^2} L \sin(\theta)$$
If you eliminate the mass from both sides, divide by $L$, and recognize that $g(h) = \frac{GM}{h^2}$, you get
$$L \ddot{\theta} = -g \sin(\theta)$$
or
$$\ddot{\theta} = -\frac{g}{L} \sin(\theta)$$
which is the usual equation for a pendulum; thus we can be confident in our derivation.
To attempt to understand the equation, as said, we cannot solve it analytically any more than the simpler pendulum one (arguably in some sense this should be "even worse"!), but we can nonetheless do the same trick where we consider the
small-angle approximation with $\theta \approx 0$, and thus $\sin(\theta) \approx \theta$ and $\cos(\theta) \approx 1$, which causes the equation to simplify to
$$mL^2 \ddot{\theta} = -\frac{GMm}{2} \frac{2Lh \theta}{[L^2 - 2Lh + h^2]^{3/2}}$$
which then becomes, recognizing that $(L^2 - 2Lh + h^2) = (h - L)^2$ (if we use $(L - h)^2$ we will get bad results below in that the square root will have a negative input in realistic situations, so that is not an accident to make this (equivalent) choice),
$$mL^2 \ddot{\theta} = -\frac{GMm}{2} \frac{2Lh}{(h - L)^3} \theta$$
and cancelling and collecting everything, we get
$$m \ddot{\theta} = -\left[\frac{GMm}{L} \frac{h}{(h - L)^3}\right] \theta$$
and thus taking this in the form $m\ddot{\theta} = -k\theta$ for the simple harmonic oscillator we see we again have simple harmonic motion but now with angular frequency
$$\omega = \sqrt{\frac{k}{m}} = \sqrt{\frac{GMh}{L(h - L)^3}}$$
versus the usual
$$\omega = \sqrt{\frac{g}{L}}$$.
Again, note that if $L << h$ you get $(h - L)^3 \approx h^3$ and the latter equation is recovered just same with recognition that $g = \frac{GM}{h^2}$. Indeed, using this relation for $g$ at the altitude $h$ above the gravitator we can get the more comparatively useful form
$$\omega = \sqrt{\frac{g}{L}} \left[\frac{h}{h - L}\right]^{3/2}$$
For $h > L$ the term on the right is easily seen to be larger than $1$, thus the effect to first order is that for very small oscillations the non-uniform Newtonian gravitational field causes the pendulum's frequency of oscillation to increase. This should make intuitive sense: near the bottom the gravitational force on the bob is stronger as it's closer to the gravitating mass, so there is more force "preferring" it point down and thus will want to wiggle around more vigorously near there. You can think of it as being "pinched" by the gravitational field lines near the bottom of its swing.
As a benchmark of the effect, consider near the Earth's surface with a pendulum of length $L = 1\ \mathrm{m}$ (was perhaps the original inspiration for the metre unit) and $h = 6371\ \mathrm{km}$. With these values you can easily figure the deviational term $\left[\frac{h}{h - L}\right]^{3/2}$ as about 1.000000235. So the pendulum's frequency (about, but not quite, 1 Hz - note that while we gave the above in angular frequency $\omega$, this is directly proportional to frequency, so we need not convert anything for the amplification term) is only slightly higher by about 235 parts per billion due to this correction. Arguably, we could expect that even more complex inhomogeneities in the field due to the Earth not being a uniform sphere but a whole, complex planet, will be even worse. These will, however, require a supercomputer loaded with a very accurate model of the Earth, to calculate to any precision, and will depend strongly on precisely where on Earth you are seeking to envision a pendulum (and it's for this reason that the pendulum was not used as a standard to define the metre, but instead the rather coincidentally similar 1/10,000,000 of a meridian arc from the Equator to the North Pole was used instead, as this has rather less deviation.).
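As a sanity check on that number, the two small-angle frequencies are easy to compare numerically; this little sketch just plugs rounded Earth values into the formulas above and is not part of the derivation:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M = 5.972e24         # kg, Earth mass
h = 6.371e6          # m, pivot distance from Earth's centre
L = 1.0              # m, pendulum length

g = G * M / h**2
omega_uniform = math.sqrt(g / L)                        # uniform-field result
omega_full = math.sqrt(G * M * h / (L * (h - L)**3))    # small-angle result above

print(omega_uniform, omega_full)
print(omega_full / omega_uniform - 1)   # ~2.35e-7, i.e. ~235 parts per billion
```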
I do not have enough expertise yet to do a full mathematical workup in general relativity as you ask for, but I figured I'd post this answer because it seems to be what you'd be after and at least answers it part-way, inasmuch as Newton's law is a better approximation of the real situation. But intuitively I'd guess from the non-linear effect that it will show an even greater shift of the angular frequency near the equilibrium point.
|
I am trying to optimize this cost function by using the Gauss-Newton method. $$f = \sum_{i = 1}^n \mathrm{Tr}(Z_i^T Z_i)$$ where each $Z_i$ is a $4\times4$ matrix and is a function of a real vector $\vec{a}\in\mathbb{R}^5$. Because I am using the Gauss-Newton approach, I define $g \in\mathbb{R}^{n}$ componentwise by $g_i = \sqrt{\mathrm{Tr}(Z^T_iZ_i)} = \sqrt{p_i}$, where $i = 1,2,\ldots,n$.
This is my attempt to find the gradient and the Hessian $H$. The gradient of the cost function is $2J^Tg$ and the Hessian is $2(J^TJ + \sum_i g_iH_i)$, where $J$ is the Jacobian matrix of $g$ and $H_i$ is the Hessian of $g_i$.
$$\frac{\partial g}{\partial\epsilon}|_{\epsilon = 0} = \frac{\partial g}{\partial p}\frac{\partial p}{\partial Z}\frac{\partial Z}{\partial\epsilon}|_{\epsilon = 0}$$ $$\frac{\partial g}{\partial\epsilon}|_{\epsilon = 0} = \frac{1}{2\sqrt{p_i}}Z(\epsilon\vec{a})\frac{\partial Z(\epsilon\vec{a})}{\partial\epsilon}|_{\epsilon = 0}$$
Apparently, the expression above is going to be a $4\times4$ matrix. How do I proceed to find the gradient of the cost function, given that we cannot multiply $J$ and $g$ right away? I have tried the Hessian matrix and it is even worse. I am not sure whether I am on the right path, so I did not show my attempt at the Hessian here.
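For what it's worth, here is a small Python sketch of how the Gauss-Newton pieces fit together numerically. This is an illustration, not the poster's actual code: residuals(a) is a hypothetical user-supplied function returning the vector $g(\vec a)\in\mathbb{R}^n$ with $g_i=\sqrt{\mathrm{Tr}(Z_i^TZ_i)}$, and the Jacobian is built row by row by finite differences, which sidesteps the matrix-valued chain rule entirely.
import numpy as np

def numerical_jacobian(residuals, a, eps=1e-6):
    """Finite-difference Jacobian J with J[i, j] = d g_i / d a_j."""
    g0 = residuals(a)
    J = np.zeros((g0.size, a.size))
    for j in range(a.size):
        da = np.zeros_like(a)
        da[j] = eps
        J[:, j] = (residuals(a + da) - g0) / eps
    return J

def gauss_newton_step(residuals, a):
    """One Gauss-Newton step for the cost f(a) = sum_i g_i(a)^2."""
    g = residuals(a)                      # g_i = sqrt(Tr(Z_i^T Z_i)), an n-vector
    J = numerical_jacobian(residuals, a)  # n x 5 Jacobian
    grad = 2.0 * J.T @ g                  # gradient of f
    H_gn = 2.0 * J.T @ J                  # Gauss-Newton approximation of the Hessian
    return a - np.linalg.solve(H_gn, grad)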
|
Edit: My original post assumed $p$-arity rather than $p$-regularity. If the $p$ child trees of the root, rather than being (infinite) trees similar to the $p$-regular parent, are of $(p-1)$-arity, then the recursion given needs to be adapted accordingly.
Note however this previous Question and Answer, which appears to give a closed form solution.
The recursion required here is a bit messy but seems to be fairly straightforward.
Let $T_p(m)$ denote the number of rooted (labelled) subtrees of the rooted infinite $p$-arity tree which have $m$ edges and share the same root $v_0$.
Note that the Question asks about an infinite $p$-regular tree, which has arity $p$ for root $v_0$ but all other nodes, having degree $p$, have arity $p-1$. We let $\widetilde{T}_p(m)$ denote this slightly different count and express it in terms of $T_p(m)$.
Essential idea of recursion: Since the root $v_0$ must appear in each subtree, we can choose the number $k$ of the $p$ edges from $v_0$ that will appear in the subtree, and then count possible subtrees extending from those edges.
This gives a recursion on $m$ involving the set $\mathscr{W}(m-k,k)$ of weak compositions of $m-k$ with $k$ summands.
For the basis case, define $T_p(0) = 1$. Then for $m \gt 0$:
$$ T_p(m) = \sum_{k = 1}^{\min(m,p)} \binom{p}{k} \sum_{\vec{w}\in \mathscr{W}(m-k,k)} T_p(w_1)\cdot T_p(w_2) \cdot \ldots \cdot T_p(w_k) $$
Here the inner summation is indexed by weak compositions $\vec{w} = (w_1,w_2,\ldots,w_k)$ of $m-k$ with $k$ summands:
$$ w_1 + w_2 + \ldots + w_k = m-k $$
where the summands are nonnegative integers.
Finally we express the desired $\widetilde{T}_p(m)$ in terms of $T_{p-1}(m)$:
$$ \widetilde{T}_p(m) = \sum_{k = 1}^{\min(m,p)} \binom{p}{k} \sum_{\vec{w}\in \mathscr{W}(m-k,k)} T_{p-1}(w_1)\cdot T_{p-1}(w_2) \cdot \ldots \cdot T_{p-1}(w_k) $$
Added:
Do the cases $m\lt p$ and $m\gt p$ make a difference?
They do in the immediate sense that when $m\gt p$ we are restricted at the root vertex $v_0$ from using up all the edges there (there simply aren't enough to exhaust the $m$ edges of our sought-after subtrees). This shows up in the recursion as the upper limit of the outer summation being given by $\min(m,p)$ rather than depending only on $m$.
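A minimal Python sketch of the recursion above, with memoisation; the helper weak_compositions and the cache are implementation choices, not part of the original answer.
from math import comb, prod

def weak_compositions(total, parts):
    """Yield all tuples of `parts` nonnegative integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in weak_compositions(total - first, parts - 1):
            yield (first,) + rest

def T(p, m, _cache={}):
    """T_p(m): rooted subtrees with m edges of the infinite p-arity tree, sharing the root."""
    if m == 0:
        return 1
    if (p, m) not in _cache:
        _cache[(p, m)] = sum(
            comb(p, k) * prod(T(p, wi) for wi in w)
            for k in range(1, min(m, p) + 1)
            for w in weak_compositions(m - k, k))
    return _cache[(p, m)]

def T_regular(p, m):
    """The p-regular variant: arity p at the root, arity p-1 at every other node."""
    if m == 0:
        return 1
    return sum(
        comb(p, k) * prod(T(p - 1, wi) for wi in w)
        for k in range(1, min(m, p) + 1)
        for w in weak_compositions(m - k, k))

print(T_regular(3, 2))   # 9: choose 2 of the 3 root edges, or 1 root edge plus one of its 2 children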
|
Benney-Luke equations: a reduced water wave model
The work is based on the article “Variational water wave modelling: from continuum to experiment” by Onno Bokhove and Anna Kalogirou [BK16]. The authors gratefully acknowledge funding from EPSRC grant no. EP/L025388/1 with a link to the Dutch Technology Foundation STW for the project “FastFEM: behavior of fast ships in waves”.
The Benney-Luke-type equations consist of a reduced potential flow water wave model based on the assumptions of small amplitude parameter \(\epsilon\) and small dispersion parameter \(\mu\) (defined by the square of the ratio of the typical depth over a horizontal length scale). They describe the deviation from the still water surface, \(\eta(x,y,t)\), and the free surface potential, \(\phi(x,y,t)\). A modified version of the Benney-Luke equations can be obtained by the variational principle:
where the spatial domain is assumed to be \(\Omega\) with natural boundary conditions, namely Neumann conditions on all the boundaries. In addition, suitable end-point conditions at \(t=0\) and \(t=T\) are used. Note that the introduction of the auxiliary function \(q\) is performed in order to lower the highest derivatives. This is advantageous in a \(C^0\) finite element formulation and motivated the modification of the “standard” Benney-Luke equations. The partial variations in the last line of the variational principle can be integrated by parts in order to get expressions that only depend on \(\delta\eta,\,\delta\phi,\,\delta q\) and not their derivatives:
Since the variations \(\delta\eta,\,\delta\phi,\,\delta q\) are arbitrary, the modified Benney-Luke equations then arise for functions \(\eta,\phi,q\in V\) from a suitable function space \(V\) and are given by:
We can either directly use the partial variations in the variational principle above (last line) as the fundamental weak formulation (with \(\delta\phi,\, \delta\eta,\, \delta q\) playing the role of test functions), or multiply the equations by a test function \(v\in V\) and integrate over the domain in order to obtain a weak formulation in a classic manner
Note that the Neumann boundary conditions have been used to remove every surface term that resulted from the integration by parts. Moreover, the variational form of the system requires the use of a symplectic integrator for the time-discretisation. Here we choose the 2nd-order Stormer-Verlet scheme [EHW06], which requires two half-steps to update \(\phi\) in time (one implicit and one explicit in general) and one (implicit) step for \(\eta\):
Furthermore, we note that the Benney-Luke equations admit asymptotic solutions (correct up to order \(\epsilon\)). The “exact” solutions can be found by assuming one-dimensional travelling waves of the type
The Benney-Luke equations then become equivalent to a Korteweg-de Vries (KdV) equation for \(\eta\) at leading order in \(\epsilon\). The soliton solution of the KdV [DJ89] travels with speed \(c\) and is reflected when reaching the solid wall. The initial propagation before reflection matches the asymptotic solution for the surface elevation \(\eta\) well. The asymptotic solution for the surface potential \(\phi\) can be found by using \(\eta=\phi_{\xi}\) (correct at leading order), giving
Finally, before implementing the problem in Firedrake, we calculate the total energy defined by the sum of potential and kinetic energy. The system is then stable if the energy is bounded and shows no drift. The expression for total energy is given by:
The implementation of this problem in Firedrake requires solving two nonlinear variational problems and one linear problem. The Benney-Luke equations are solved in a rectangular domain \(\Omega=[0,10]\times[0,1]\), with \(\mu=\epsilon=0.01\), time step \(dt=0.005\) and up to the final time \(T=2.0\). Additionally, the domain is split into 50 cells in the x-direction using a quadrilateral mesh. In the y-direction only 1 cell is enough since there are no variations in y:
from firedrake import *
Now we move on to defining parameters:
T = 2.0
dt = 0.005
Lx = 10
Nx = 50
Ny = 1
c = 1.0
mu = 0.01
epsilon = 0.01
m = UnitIntervalMesh(Nx)
mesh = ExtrudedMesh(m, layers=Ny)
coords = mesh.coordinates
coords.dat.data[:,0] = Lx*coords.dat.data[:,0]
The function space chosen consists of degree 2 continuous Lagrange polynomials, and the functions \(\eta,\,\phi\) are initialised to take the exact soliton solutions for \(t=0\), centered around the middle of the domain, i.e. with \(x_0=\frac{1}{2}L_x\):
V = FunctionSpace(mesh,"CG",2)
eta0 = Function(V, name="eta")
phi0 = Function(V, name="phi")
eta1 = Function(V, name="eta_next")
phi1 = Function(V, name="phi_next")
q1 = Function(V)
phi_h = Function(V)
q_h = Function(V)
ex_eta = Function(V, name="exact_eta")
ex_phi = Function(V, name="exact_phi")
q = TrialFunction(V)
v = TestFunction(V)
x = SpatialCoordinate(mesh)
x0 = 0.5 * Lx
eta0.interpolate(1/3.0*c*pow(cosh(0.5*sqrt(c*epsilon/mu)*(x[0]-x0)),-2))
phi0.interpolate(2/3.0*sqrt(c*mu/epsilon)*(tanh(0.5*sqrt(c*epsilon/mu)*(x[0]-x0))+1))
Firstly, \(\phi\) is updated to a half-step value using a nonlinear variational solver to solve the implicit equation:
Fphi_h = ( v*(phi_h-phi0)/(0.5*dt) + 0.5*mu*inner(grad(v),grad((phi_h-phi0)/(0.5*dt))) + v*eta0 + 0.5*epsilon*inner(grad(phi_h),grad(phi_h))*v )*dx
phi_problem_h = NonlinearVariationalProblem(Fphi_h,phi_h)
phi_solver_h = NonlinearVariationalSolver(phi_problem_h)
followed by a calculation of a half-step solution \(q\), performed using a linear solver:
aq = v*q*dx
Lq_h = 2.0/3.0*inner(grad(v),grad(phi_h))*dx
q_problem_h = LinearVariationalProblem(aq,Lq_h,q_h)
q_solver_h = LinearVariationalSolver(q_problem_h)
Then the nonlinear implicit equation for \(\eta\) is solved:
Feta = ( v*(eta1-eta0)/dt + 0.5*mu*inner(grad(v),grad((eta1-eta0)/dt)) - 0.5*((1+epsilon*eta0)+(1+epsilon*eta1))*inner(grad(v),grad(phi_h)) - mu*inner(grad(v),grad(q_h)) )*dx
eta_problem = NonlinearVariationalProblem(Feta,eta1)
eta_solver = NonlinearVariationalSolver(eta_problem)
and finally the second half-step (explicit this time) for the equation of \(\phi\) is performed and \(q\) is computed for the updated solution:
Fphi = ( v*(phi1-phi_h)/(0.5*dt) + 0.5*mu*inner(grad(v),grad((phi1-phi_h)/(0.5*dt))) + v*eta1 + 0.5*epsilon*inner(grad(phi_h),grad(phi_h))*v )*dx
phi_problem = NonlinearVariationalProblem(Fphi,phi1)
phi_solver = NonlinearVariationalSolver(phi_problem)
Lq = 2.0/3.0*inner(grad(v),grad(phi1))*dx
q_problem = LinearVariationalProblem(aq,Lq,q1)
q_solver = LinearVariationalSolver(q_problem)
What is left before iterating over all time steps, is to find the initial energy \(E_0\), used later to evaluate the energy difference \(\left|E-E_0\right|/E_0\):
t = 0
E0 = assemble( (0.5*eta0**2 + 0.5*(1+epsilon*eta0)*abs(grad(phi0))**2 + mu*(inner(grad(q1),grad(phi0)) - 0.75*q1**2))*dx )
E = E0
and define the exact solutions, which need to be updated at every time-step:
t_ = Constant(t)
expr_eta = 1/3.0*c*pow(cosh(0.5*sqrt(c*epsilon/mu)*(x[0]-x0-t_-epsilon*c*t_/6.0)),-2)
expr_phi = 2/3.0*sqrt(c*mu/epsilon)*(tanh(0.5*sqrt(c*epsilon/mu)*(x[0]-x0-t_-epsilon*c*t_/6.0))+1)
eta_interpolator = Interpolator(expr_eta, ex_eta)
phi_interpolator = Interpolator(expr_phi, ex_phi)
phi_interpolator.interpolate()
eta_interpolator.interpolate()
For visualisation, we save the computed and exact solutions to an output file. Note that the visualised data will be interpolated from piecewise quadratic functions to piecewise linears:
output = File('output.pvd')
output.write(phi0, eta0, ex_phi, ex_eta, time=t)
We are now ready to enter the main time iteration loop:
while t < T:
    print(t, abs((E-E0)/E0))
    t += dt
    t_.assign(t)
    eta_interpolator.interpolate()
    phi_interpolator.interpolate()
    phi_solver_h.solve()
    q_solver_h.solve()
    eta_solver.solve()
    phi_solver.solve()
    q_solver.solve()
    eta0.assign(eta1)
    phi0.assign(phi1)
    output.write(phi0, eta0, ex_phi, ex_eta, time=t)
    E = assemble( (0.5*eta1**2 + 0.5*(1+epsilon*eta1)*abs(grad(phi1))**2 + mu*(inner(grad(q1),grad(phi1)) - 0.75*q1**2))*dx )
The output can be visualised using paraview.
A python script version of this demo can be found here.
The Benney-Luke system and weak formulations presented in this demo have also been used to model extreme waves that occur due to Mach reflection through the intersection of two obliquely incident solitary waves. More information can be found in [GBK17].
References
[BK16] O. Bokhove and A. Kalogirou. Lectures on the Theory of Water Waves, chapter "Variational water wave modelling: from continuum to experiment". LMS Lecture Note Series. Cambridge University Press, 2016. URL: http://www1.maths.leeds.ac.uk/~matak/documents/lms-cup2015.pdf
[DJ89] P.G. Drazin and R.S. Johnson. Solitons: an Introduction. Cambridge University Press, 1989.
[EHW06] E. Hairer, C. Lubich, and G. Wanner. Geometric Numerical Integration. Springer, 2006.
[GBK17] F. Gidel, O. Bokhove, and A. Kalogirou. Variational modelling of extreme waves through oblique interaction of solitary waves: application to Mach reflection. Nonlinear Processes in Geophysics, 24:43–60, 2017. doi:10.5194/npg-24-43-2017.
|
I'm familiar with using the Calculus of variations to find the condition for which first order variations of a functional wrt a function are zero:
We start with a functional $J[x]= \int_{t_i}^{t_f}L(x(t),\dot x(t),t)\,dt$, where $t_i, t_f$ are constants and $\dot x(t)= dx/dt$
If $J[f]$ is stationary at $f$, we add to $f$ another continuous function $\epsilon\eta(t)$ where $\eta(t)$ is arbitrary and $\epsilon$ is a positive number
For any small number $\epsilon$ close to 0, the first order variation of the functional is zero around $f$, giving the Euler-Lagrange equations as the necessary condition.
If $\delta(t)$ is what physicists/engineers call the Dirac-Delta function, even though mathematicians wouldn't define it as a function:
Can the above procedure be applied to $\delta(t-t_0)$ variations of f at $t = t_0$ to again yield the Euler-Lagrange equations?
I've had a go by adding $\epsilon\delta(t-t_0)$ to $f$ as the variation, getting the differential of $L$ in terms of the usual partial derivatives and differentials of the independent variables. But the differential of $x$, for example, then becomes $dx = \epsilon\delta(t-t_0)$, which appears to be infinite for any $\epsilon$, making my expression for $dL$ nonsense.
Again, according to an answer given on PSE that motivated me to ask my question here, the change in $L$ to an $\epsilon\delta(t-t_0)$ variation in $\dot x$ is:
$$dL= {\partial L \over \partial \dot{x_1}} \delta \dot{x_1} = {\partial L \over \partial \dot{x_1}}\epsilon\delta(t-t_0)$$
Is this correct?
|
Given two positive matrices $A,B$. For simplicity, let's assume that $Tr A=Tr B=1$. Assume that $\|A-B\|_1\leq\varepsilon$ for some small $\varepsilon>0$, where $\|\cdot\|_1$ is the $l_1$-norm, namely the sum of the singular values.
My question is whether there is a generic transformation to move $A$ to $B$ for arbitrary such $A$ and $B$. The following are some simple observations. There are two simple transformations moving $A$ to its neighbours.
(1) Find an eigensystem (eigenvalues with corresponding eigenvectors) of $A$ and replace the eigenvalues $(\lambda_1(A),\cdots,\lambda_k(A))$ by a new sequence $(\theta_1,\cdots,\theta_k)\in[0,1]^k$ where $\sum_i\theta_i=1$ and $\sum_i|\lambda_i(A)-\theta_i|\leq\varepsilon$. Here we need the eigensystem of $A$ because the eigenvalues may not be unique.
(2) Choose a unitary matrix $U$ satisfying $\|U-I\|\leq\varepsilon$, where $\|\cdot\|$ is the spectral norm and replace $A$ by $UAU^*$.
It is easy to see neither single transformation (1) nor single transformation (2) is enough to cover all the neighbors.
Consider a naive example
$$ A=\left(\begin{array}{cc} \frac{1+\varepsilon}{2} & 0 \\ 0 & \frac{1-\varepsilon}{2} \end{array}\right) $$
And
$$B=(\frac{1}{2}+\varepsilon) \left(\begin{array}{c}\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}}\end{array}\right)\left(\begin{array}{cc}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{array}\right)+(\frac{1}{2}-\varepsilon) \left(\begin{array}{c}\frac{1}{\sqrt{2}}\\ -\frac{1}{\sqrt{2}}\end{array}\right)\left(\begin{array}{cc}\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\end{array}\right).$$
$\|A-B\|_1\leq\varepsilon$ because $\|A-I/2\|_1\leq\varepsilon/2$ and $\|B-I/2\|_1\leq\varepsilon/2$, where $I$ is the $2\times 2$ identity matrix. Transformation (2) cannot bring $A$ to $B$ because the eigenvectors of $A$ and $B$ are far from each other. Also transformation (1) is not enough because the eigenvectors are different. But we can first apply transformation (1) moving $A$ to $I/2$ and then apply transformation (2) moving $I/2$ to $B$ (there is freedom to choose the eigenvectors for $I/2$).
A more specific question. Can we move $A$ to all its $\varepsilon$-close (in $\|\cdot\|_1$-distance) neighbours by the composition of constant number of transformation (1) and (2)? Even more, can we do it with one transformation (1) and one transformation (2)?
I am not sure the question is research-level. Do not hesitate to close if it is not. Thank you.
|
...no strings attached... In recent years, I got used to the fact that Sean Carroll is confused about some very basic physics – the postulates of quantum mechanics as well as thermodynamics (and the very basic insight due to Boltzmann and others that its laws are microscopically explained by statistical physics and not, for example, by cosmology). And I won't even threaten your stomach by memories of the Boltzmann brains, doomsdays, and similar delusions. But I thought he could rationally think at least about classical general relativity. His book was pretty good, I thought, although I have never read the whole volume. However, I don't think so anymore after I finished reading Carroll's insane tirade called
His crusade is made even more paradoxical given the apparent fact that he knows the equation and other key pieces needed to understand why it's accurate to say that the acceleration is caused by negative pressure. But like a schoolkid who has just mindlessly memorized an equation but can't understand what it means, he just can't sort out what the basic implications of the equations are. So he wants to "ban" the fact that the negative pressure is the reason for the acceleration from expositions of cosmology. You may imagine that a progressive (i.e. Stalinist) like himself thinks that such a ban would be "a great step forward". Bans aren't a good step forward, especially not bans of key scientific insights.
(Brian Greene would be among those whose books would be banned; he wrote a crisp explanation of these matters in
The Hidden Reality. Tony Zee's GR book would be on the black list, too. Zee mentions beginners' i.e. Carroll's confusion of the velocity and acceleration in 2nd paragraph on page 500 – and more generally, between pages 499 and 507.)
Since the late 1990s, we've known that the Universe was not only expanding but the rate of the expansion was increasing. It was a surprise for many because most people were expecting that the rate was slowing down. The substance driving this expansion is "dark energy" – the cosmological constant with \(p=-\rho\) is the simplest and most natural "subtype" or "more detailed explanation" of dark energy that is so far compatible with all statistically significant experimental results.
The property that allows dark energy or the cosmological constant to accelerate the expansion is its negative pressure \(p\lt 0\). Why is that? Well, it is because of the so-called second Friedmann equation\[
\frac{\ddot a}{a} =-\frac{4\pi G}{3} (\rho + 3p)
\] The numerator on the left hand side contains the second derivative of the scale factor \(a\). You may literally imagine that in some units, \(a\) is nothing else than a distance between two particular galaxies (well, the proper length of a line that connects them through the \(t={\rm const}\) slice which is, let's admit, not a geodesic, but it is some coordinate distance, anyway).
The second derivative of this distance is fully analogous to the acceleration \(a_{\rm acc}=-\ddot h\) of a ball that you threw somewhere. Note that Earth's gravity implies \(a_{\rm acc}=-\ddot h=g\) which means that the ball will ultimately fall down (unless its speed exceeds the escape velocity: we would have to modify the equation if the ball could reach substantial distances from the surface) towards the Earth.
Note that we use the convention in which a positive acceleration \(a_{\rm acc}\gt 0\) means that the ball is attracted to the Earth i.e. the second derivative of its height is negative. That's why we had to insert the minus sign.
The second Friedmann equation is
completely analogous. It's not just some vague popular analogy; it is a mathematical isomorphism. The distance between two galaxies is fully analogous to the distance between the ball and the Earth's surface. In both cases, they are attracted by the gravitational force (of a sort). In the second Friedmann case, the gravity follows somewhat more accurate laws imposed by the general theory of relativity – the Friedmann equations are what Einstein's equations of GR boil down to if we assume a uniform, isotropic Universe.
You may see that the role of the Earth's gravitational acceleration \(g\) is being played by\[
\frac{4\pi G}{3} (\rho + 3p)
\] The minus sign in front of the right hand side is there for the same reason as in the case of the ball: attraction (deceleration of the outward speed) is identified with a negative second derivative of "the" quantity (the height of the ball or the distance between two galaxies).
So the total force is "attractive" if\[
\rho + 3p \gt 0.
\] For example, if the Universe were filled with the dust only, and the dust has \(p=0\), this expression would surely be positive and we would get an attraction i.e. decelerated expansion. A positive energy density implies attraction for the same reason why the Earth's positive energy (and energy density) is able to attract the ball. Ordinary gravity is simply attractive. If the Universe were filled with radiation and nothing else, \(p=+\rho/3\) (with the plus sign) and the two terms would actually have the same sign and double: an even clearer deceleration.
The type of matter that has \(p=-\rho/3\) is actually "cosmic strings". If the Universe were filled with cosmic strings only (in chaotic directions), they would contribute nothing to the acceleration. Cosmic domain walls (membranes of a sort) would have \(p=-2\rho/3\) and the expression would already be negative. The domain walls would make the expansion accelerate.
Similarly, the cosmological constant – the most motivated type of dark energy – has \(p=-\rho\) so \(\rho+3p\) is negative. In general, you see that you get an accelerated expansion if \(p\) is not only negative but smaller than \(-\rho/3\),\[
p \lt -\frac{\rho}{3}.
\] This is the only refinement of the claim that "a negative pressure is the cause of the acceleration". In fact, we need a "sufficiently negative pressure", one obeying the inequality above. But otherwise the statement is 100% accurate – and not just at the level of popular presentations. It's also completely accurate to say that the total gravitational force operating in between the galaxies becomes repulsive – you may even call it "antigravity" – when the pressure in between is sufficiently negative.
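As a minimal sketch of the bookkeeping in this paragraph, the following Python snippet just evaluates the sign of \(\rho + 3p\) for the equations of state mentioned above (arbitrary units with \(\rho = 1\)); the classification of each case matches the discussion in the text.
# Sign of rho + 3p decides deceleration (positive) vs. acceleration (negative)
# in the second Friedmann equation.
rho = 1.0
equations_of_state = {
    "dust (p = 0)":                     0.0,
    "radiation (p = +rho/3)":           rho / 3.0,
    "cosmic strings (p = -rho/3)":      -rho / 3.0,
    "domain walls (p = -2rho/3)":       -2.0 * rho / 3.0,
    "cosmological constant (p = -rho)": -rho,
}

for name, p in equations_of_state.items():
    source = rho + 3.0 * p
    verdict = ("decelerates" if source > 0 else
               "no acceleration" if source == 0 else
               "accelerates")
    print(f"{name:38s} rho + 3p = {source:+.2f} -> expansion {verdict}")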
Carroll tries to claim that there is something wrong with the proposition that "the acceleration is caused by a [sufficiently] negative pressure" but his argumentation seems utterly irrational. Well, the core of his would-be argument is probably the following:
But, while that’s a perfectly good equation — the “second Friedmann equation” — it’s not the one anyone actually uses to solve for the evolution of the universe. It’s much nicer to use the first Friedmann equation, which involves the first derivative of the scale factor rather than its second derivative (spatial curvature set to zero for convenience):\[ H^2 \equiv \left( \frac{\dot a}{a} \right)^2 = \frac{8\pi G}{3} \rho \]So Carroll told us that we should switch to this equation because "people use it more often" and it is "nicer". The problem with this would-be justification is that it is no justification at all. If an equation is used more often or looks "nicer" to someone (for other irrational reasons), it does not imply that this equation is the right equation to explain a pattern or to answer a question.
In this case, we want to explain why the acceleration is negative, and the acceleration is simply related to the second derivative of the height or the second derivative of the scale factor \(a\). The last displayed equation above, the first Friedmann equation, doesn't include the second derivative \(\ddot a\) at all, so it can't possibly be the right equation that tells us whether the acceleration is positive or negative!
I am stunned that Carroll isn't capable of figuring this simple point out.
So the mathematical formalization of the reason why the expansion is accelerating is the second Friedmann equation and it doesn't matter a single bit whether this equation is used more often or less often to calculate other things or answer other questions.
What is the alternative proposition that Carroll proposed instead of the correct one? It isn't quite clear but it seems that it's the bold face sentence below:
Second, a constant energy density straightforwardly implies a constant expansion rate \(H\). So no problem at all: a persistent source of energy causes the universe to accelerate.
But this sentence is just incorrect. The energy density carried by dust or anything else is "persistent" in the sense that it remains nonzero forever but the dust implies a decelerating expansion (much like most other known types of energy density). If the word "persistent" were interpreted as "constant", the sentence above would be marginally correct but it would completely obscure the reason why the energy density is able to stay constant in an expanding Universe. The reason for this is the negative pressure, too; Sean has only offered a sleight-of-hand to mask the actual reason, the negative pressure. Even a German blogger knows that.
The relevant quantity for the sign of the acceleration is \(\rho+3 p\) and not just \(\rho\) (or \(\dot\rho\)) as Carroll incorrectly suggests. This influence of the pressure on the curvature of the spacetime (in this particular case, the acceleration of its expansion) is one of the "refinements" that general relativity brought us relatively to Newton's gravity, where only the total energy density mattered for all gravitational fields. In GR, the whole stress-energy tensor (not just the energy density but also the pressure and the density of momentum etc.) matters for various aspects of the spacetime curvature.
Carroll correctly states that the first Friedmann equation and the second Friedmann equation are consistent with one another because one may be derived from the other using the general relativistic form of the energy conservation law (which does depend on the pressure as well). All this stuff is OK but it changes nothing about the fact that he gave a completely incorrect answer to the key question which of the equations is the right one to calculate the sign of the acceleration of the expansion of the Universe.
These sentences of mine are no "popular presentations" and surely not "misleading popular presentations" and whoever understands them really understands what drives the acceleration etc. – it is not just an illusion of the understanding – while Carroll apparently does not understand these basic facts. He does not understand that the pressure has become relevant for some questions about the spacetime curvature – in particular, for the question whether the expansion is accelerating.
The relevant equation is unquestionably the second Friedmann equation, whether a pervert finds it nicer or not, and I urge all writers to keep on writing the absolutely valid claim that the negative pressure is the reason and notice that Sean Carroll is just [being?] an idiot.
And that's the memo.
|
Search
Now showing items 1-10 of 17
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Multi-strange baryon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2014-01)
The production of ${\rm\Xi}^-$ and ${\rm\Omega}^-$ baryons and their anti-particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been measured using the ALICE detector. The transverse momentum spectra at ...
Measurement of charged jet suppression in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2014-03)
A measurement of the transverse momentum spectra of jets in Pb–Pb collisions at √sNN = 2.76TeV is reported. Jets are reconstructed from charged particles using the anti-kT jet algorithm with jet resolution parameters R ...
Two- and three-pion quantum statistics correlations in Pb-Pb collisions at √sNN = 2.76 TeV at the CERN Large Hadron Collider
(American Physical Society, 2014-02-26)
Correlations induced by quantum statistics are sensitive to the spatiotemporal extent as well as dynamics of particle-emitting sources in heavy-ion collisions. In addition, such correlations can be used to search for the ...
Exclusive J /ψ photoproduction off protons in ultraperipheral p-Pb collisions at √sNN = 5.02TeV
(American Physical Society, 2014-12-05)
We present the first measurement at the LHC of exclusive J/ψ photoproduction off protons, in ultraperipheral proton-lead collisions at √sNN=5.02 TeV. Events are selected with a dimuon pair produced either in the rapidity ...
Measurement of quarkonium production at forward rapidity in pp collisions at √s=7 TeV
(Springer, 2014-08)
The inclusive production cross sections at forward rapidity of J/ψ , ψ(2S) , Υ (1S) and Υ (2S) are measured in pp collisions at s√=7 TeV with the ALICE detector at the LHC. The analysis is based on a data sample corresponding ...
|
De Bruijn-Newman constant
For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula
[math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math]
where [math]\Phi[/math] is the super-exponentially decaying function
[math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math]
It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as
[math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math]
or
[math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math]
In the notation of [KKL2009], one has
[math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math]
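A quick sketch of how one can evaluate [math]H_t[/math] numerically from the integral definition above (Python; the truncation of the sum over [math]n[/math] and of the integral over [math]u[/math] is an implementation choice, harmless because of the super-exponential decay of [math]\Phi[/math]):
import numpy as np
from scipy.integrate import quad

def Phi(u, nmax=20):
    """Truncated super-exponentially decaying weight in the definition of H_t."""
    n = np.arange(1, nmax + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u)) * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, z, ucut=6.0):
    """H_t(z) for real z via numerical quadrature of the cosine integral."""
    return quad(lambda u: np.exp(t * u**2) * Phi(u) * np.cos(z * u), 0.0, ucut, limit=200)[0]

# H_0(z) vanishes where zeta(1/2 + iz/2) does, e.g. near z = 2*14.1347 = 28.2695.
print(H(0.0, 28.2695))   # should be close to zero
print(H(0.0, 0.0))       # positive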
De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).
The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients:
Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math].
Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-zero whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math].
Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math].
[math]t=0[/math]
When [math]t=0[/math], one has
[math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math]
where
[math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{s/2} \Gamma(s/2) \zeta(s)[/math]
is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives
[math]\displaystyle |N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{7}{8})| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math]
for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T.
The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real.
[math]t\gt0[/math]
For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. In fact, assuming the Riemann hypothesis, all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2].
Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-decreasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have
[math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math]
for any [math]t[/math].
The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE
[math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math]
where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as
[math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math]
where the dependence on [math]t[/math] has been omitted for brevity.
In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic
[math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math]
as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that
[math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k(t)|} = (1+o(1)) \frac{4\pi}{\log k} [/math]
as [math]k \to +\infty[/math].
See asymptotics of H_t for asymptotics of the function [math]H_t[/math].
Threads
Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018.
Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018.
Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018.
Other blog posts and online discussion
Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017.
The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018.
Lehmer pairs and GUE, Terence Tao, Jan 20, 2018.
A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018.
Code and data
Wikipedia and other references
Bibliography
[B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226.
[CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
[G2004] X. Gourdon, The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height, 2004.
[KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306. Citeseer
[N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
[P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449–2467.
[P1992] G. Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992.
[RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914
[T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986. pdf
|
Consider two states of the type $|\alpha,\xi \rangle = \hat{D}(\alpha) \hat{S}(\xi) |0\rangle$, where $D$ and $S$ are the displacement and squeeze operators, respectively, and $|0\rangle$ is a 1D harmonic oscillator vacuum state.
My question is: Is there a closed formula for $\langle \alpha, \xi | \beta, \eta \rangle$?
I know how to calculate this for two coherent states ($\xi = \eta = 0$), but since the commutator of $[a^2,a] \neq I$ (which comes from $S$) the same strategy I use in that case does not work (i.e., using the Zassenhaus formula).
I saw that there is a way to express the wave function of this state in position representation, so I could calculate this as $\int dx \langle \alpha, \xi | x \rangle \langle x | \beta, \eta \rangle$, but this seems really unwieldy. Is there a simpler way analogous to the coherent case?
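If a closed form proves elusive, one can at least check candidate formulas numerically in a truncated Fock space. The sketch below uses plain numpy/scipy; the truncation dimension N and the operator conventions $D(\alpha)=e^{\alpha a^\dagger-\bar\alpha a}$ and $S(\xi)=e^{(\bar\xi a^2-\xi a^{\dagger 2})/2}$ are assumptions to be matched against your own.
import numpy as np
from scipy.linalg import expm

N = 80                                    # Fock-space truncation (assumed large enough)
a = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator
ad = a.conj().T
vac = np.zeros(N, dtype=complex)
vac[0] = 1.0

def D(alpha):
    """Displacement operator exp(alpha a^dag - conj(alpha) a)."""
    return expm(alpha * ad - np.conj(alpha) * a)

def S(xi):
    """Squeeze operator exp((conj(xi) a^2 - xi a^dag^2)/2)."""
    return expm(0.5 * (np.conj(xi) * a @ a - xi * ad @ ad))

def state(alpha, xi):
    return D(alpha) @ S(xi) @ vac

alpha, xi = 0.7 + 0.2j, 0.3
beta, eta = -0.4j, 0.1 + 0.05j
overlap = np.vdot(state(alpha, xi), state(beta, eta))   # <alpha,xi | beta,eta>
print(overlap)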
|
Ex.7.2 Q10 Coordinate Geometry Solution - NCERT Maths Class 10 Question
Find the area of a rhombus if its vertices are \((3, 0)\), \((4, 5)\), \((-1, 4)\) and \((-2, -1)\) taken in order.
[Hint: Area of a rhombus \(=\frac{1}{2}\times\) (product of its diagonals)]
Text Solution Reasoning:
A rhombus has all sides of equal length and opposite sides are parallel.
What is the known?
The \(x\) and \(y\) co-ordinates of the vertices of the rhombus.
What is the unknown?
The area of the rhombus
Steps:
From the Figure,
Given,
Let \(A(3, 0)\), \(B(4, 5)\), \(C(-1, 4)\) and \(D(-2, -1)\) are the vertices of a rhombus \(ABCD\).
We know that the distance between the two points is given by the Distance Formula,
\[\sqrt {\left( {x_1} - {x_2} \right)^2 + \left( {y_1} - {y_2} \right)^2} \qquad \ldots\;\text{Equation (1)}\]
Therefore, distance between \(A\;(3, 0)\) and \(C \;(-1, 4)\) is given by
Length of diagonal
\[\begin{align} AC &= \sqrt {{{[3 - ( - 1)]}^2} + {{(0 - 4)}^2}} \\ &= \sqrt {16 + 16} = 4\sqrt 2 \end{align}\]
Similarly, the distance between \(B\;(4, 5)\) and \(D\;(-2, -1)\) is given by
Length of diagonal
\[\begin{align} BD &= \sqrt {{{[4 - ( - 2)]}^2} + {{[5 - ( - 1)]}^2}} \\& = \sqrt {36 + 36} \\&= 6\sqrt 2 \end{align}\]
\[\begin{align} \text{Area of the rhombus } ABCD &= \frac{1}{2} \times \rm{(Product\; of \;lengths\; of \;diagonals)} \\&= \frac{1}{2} \times {\text{AC}} \times {\text{BD}}\end{align}\]
Therefore, area of rhombus
\[\begin{align}ABCD &= \frac{1}{2} \times 4\sqrt 2 \times 6\sqrt 2 \\ &= 24 \;\rm square\; units\end{align}\]
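A quick numerical cross-check in Python (a sketch; the shoelace formula provides an independent computation of the same area):
import math

A, B, C, D = (3, 0), (4, 5), (-1, 4), (-2, -1)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

# Half the product of the diagonals
area_diagonals = 0.5 * dist(A, C) * dist(B, D)

# Shoelace formula over the vertices taken in order
pts = [A, B, C, D]
area_shoelace = 0.5 * abs(sum(x1 * y2 - x2 * y1
                              for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))

print(area_diagonals, area_shoelace)   # both print 24.0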
|
To open the application about the topic, open the following post: Interactive demonstration of intensity transforms
Image processing often requires transforming the intensity values, for example to make the image easier to process or to highlight certain objects. In this post an effective way of applying some simple transforms is explained and an interactive demonstration is also available.
Basic intensity transforms
Intensity transforms – according to the name – change the intensity values, most often in the range of [0, 255]. A basic t transform is a function, mapping all the possible input values to output values also in the range of [0, 255]. The simplest t transform is the identity, which maps all input values to themselves:
$$ t_{identity}(x) = x $$
The function looks like this:
Inverting the image is very similar:
$$ t_{invert}(x) = 255 - x $$
The transform – marked by red – looks like this:
Brightness
Changing the brightness of an image is as simple as shifting the values by a given constant.
$$ t_{brightness}(x) = x + c $$
If c is positive, the image gets brighter, otherwise it will be darker. Of course the output of t is limited to the range [0, 255], so the exceeding values will be saturated. Also we have to take into consideration that this may cause some loss of information, as you can see on the following picture:
Here we increased the brightness by a constant of 64; the transform is marked by red. On the upper part more input values get the output value of 255: here we lose information, since the transform cannot be inverted.
Here you can see the result of the transform on the classic cameraman picture, the image has been brightened.
Threshold
Thresholding is a binary operation, where we set a limit l: all intensities below l will be false and true otherwise.
$$ t_{threshold}(x) = \begin{cases} 0, & x < l \\ 1, & x \ge l \end{cases} $$
In the following transform we set l to 128, and the true value is 255, because we are dealing in the [0, 255] range.
Here is the output for the cameraman image:
Gamma correction
Gamma-correction transforms intensities in a nonlinear way:
$$ t_{gamma}(x) = A\cdot x^\gamma, \quad x \in [0, 1] $$
A often equals 1. Before applying gamma correction, we have to normalize our intensities into the [0, 1] range, then transform the values, and then scale again into the [0, 255] range. Here you can see the transform for γ = ⅓:
As you can see, only the very dark regions remain dark, other intensities get brighter, as the sample output for the cameraman image shows. This transform turns some regions of the image more visible, for example have a look at the pants of the man:
Contrast
Increasing the contrast can be achieved in many ways; for example a sigmoid-like function is great for this task. Basically increasing the contrast means making the dark regions darker and the bright regions brighter, as you can see in the following transform:
By shifting the sigmoid function left or right, the tone of the resulting image can be modified. If we want to decrease the contrast, the amplitude of the sigmoid can be decreased. Here you can see an example of the increased contrast:
Applying transforms in MATLAB
Most of the transforms above can be achieved by a single line of MATLAB code. For example if we want to invert the image or brighten it, we write:
% read the image
image = imread('cameraman.png');
% invert it
invert = 255 - image;
% brighten it
bright = image + 64;
Even thresholding is very easy:
% set the threshold at 128
binary = image > 128;
Applying gamma correction is much harder, if we want to use the classic equation:
% normalize first (convert to double so the division is not rounded to integers)
gamma = double(image) / 255;
% gamma correction of gamma = 1/3
gamma = gamma .^ (1/3);
% get back to the [0, 255] range
gamma = gamma * 255;
% convert the values to uint8
gamma = uint8(gamma);
Of course this could also be written in one single line, but it would make the code much harder to understand. You can imagine how difficult it would be to apply a contrast transform this way. So let us take another point of view.
Instead of calculating the transformed value for each pixel individually – as we did in the examples before – we now calculate the output values for all possible inputs, and store them in an array. Some examples:
% all possible inputs
x = 0 : 255;
% invert
y_i = 255 - x;
% brightness: we do the saturation manually
y_b = min(255, 64 + x);
% gamma
y_g = uint8(255 * (x / 255) .^ (1/3));
% contrast: generate a sigmoid function
y_c = uint8(255 ./ (1 + exp(-12 * (x ./ 255 - 0.5))));
% plot them
plot(x, y_i, 'b', 'linewidth', 2); % blue
axis('square');
hold on;
plot(x, y_b, 'g', 'linewidth', 2); % green
plot(x, y_g, 'r', 'linewidth', 2); % red
plot(x, y_c, 'k', 'linewidth', 2); % black
Here you can see the result and have a look at our mapping functions:
Here comes the trick: use 1-dimensional interpolation to transform the image! As the MATLAB documentation of interp1 says, it is basically a table lookup, having the following form:
vq = interp1(x, v, xq)
Where x and v contain point pairs of the function, xq holds the query points and the results are returned in vq. Have a look at the following simple example:
% describes three points of an invert-like function
% in the range of [0, 255]
x = [0, 127, 255];
v = [255, 127, 0];
% display the function
plot(x, v);
% interpolate for some points
xq = [0, 64, 8, 200, 210, 1];
vq = uint8(interp1(x, v, xq))
% display the results
plot(xq, vq, '+');
The output is:
vq = 255 190 247 55 45 254
It works fast and great. We can use it to transform our image in the same way:
% invert an image using interpolation
result = interp1(x, y_i, image);
The advantage of this method is that we only have to calculate the mapping function once, and then simply apply the transform. In addition, we can describe our transform with just a few points, because interpolation calculates all the intermediate values for us. This way we can work efficiently even with complex transforms, since this approach is a generalized one.
|
One piece of sloppiness that many teachers are guilty of is teaching this misleading thing:
The position operator $\hat{x}$ has eigenvectors $|x_0\rangle$ that obey
$$\hat{x} |x_0\rangle = x_0|x_0\rangle$$
and are represented by distributions on the domain of $x$: $\delta(x-x_0)$ for different $x_0$. (WRONG)
The incorrect predictions come when a student uses this "representing function" as a simple initial condition to find out how a localized psi function spreads out in time, or to calculate the expected average of position.
Let me demonstrate the latter case: calculating the expected average of position in such a state $|x_0\rangle$ using the standard algorithm, we get
$$\langle x \rangle = \langle x_0|\hat{x}|x_0\rangle = x_0 \langle x_0|x_0\rangle$$It is tempting to put $\langle x_0|x_0\rangle = 1$ now, but this is not correct, because we already said that $|x_0\rangle$ is represented by a delta distribution. The expression is just not defined, as the integral$$\int \delta(x-x_0)\delta(x-x_0)dx$$is not defined (or is sometimes said to be infinite). So here the sloppiness of assuming the position operator has eigenvectors leads us to the incorrect prediction that there is no expected average of position. Such a result would be correct for, say, the Cauchy distribution, but it is incorrect for the localized one we implicitly assume to be describing here. For any well-localized psi function around $x_0$, the correct answer is close to $x_0$.
The correct way to handle this is to teach that the position operator has no eigenfunctions, but that we can assign it improper eigenvectors $|x_0\rangle$ which are, however, not realizable psi functions. Then the fact that the very position operator used to define such kets has no expected average for such kets is no problem, because physical kets can never be equal to such kets.
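A small numerical illustration of the point (a Python sketch; we replace the improper ket by a normalised Gaussian of width sigma centred at x0, and watch the expectation value stay finite and equal to x0 as sigma shrinks, while the peak probability density of the ever-narrower spike blows up):
import numpy as np

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
x0 = 1.5

for sigma in (1.0, 0.1, 0.01):
    psi = np.exp(-(x - x0) ** 2 / (4.0 * sigma ** 2))   # Gaussian packet of width sigma
    psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)       # normalise to unit probability
    mean_x = np.sum(x * np.abs(psi) ** 2) * dx          # <x>, stays finite and close to x0
    peak = np.max(np.abs(psi) ** 2)                     # grows without bound as sigma -> 0
    print(f"sigma={sigma:5.2f}  <x>={mean_x:.6f}  peak density={peak:.2f}")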
|
Say a deck of cards is dealt out equally to four players (each player receives 13 cards).
A friend of mine said he believed that if one player is dealt four-of-a-kind (for instance), then the likelihood of another player having four-of-a-kind is increased - compared to if no other players had received a four-of-a-kind.
Statistics isn't my strong point but this sort of makes sense given the pigeonhole principle - if one player gets AAAAKKKKQQQQJ, then I would think other players would have a higher likelihood of having four-of-a-kinds in their hand compared to if the one player was dealt AQKJ1098765432.
I wrote a Python program that performs a Monte Carlo evaluation to validate this theory, which found:
But counter-intuitively, four-of-a-kind frequencies appear to decrease as more players are dealt those hands:
The result is non-intuitive and I'm all sorts of confused - not sure if my friend's hypothesis was incorrect, or if I'm asking my program the wrong questions.
First off: there are
$$\frac{1}{4!} \cdot \left(\begin{array}{c}52\\13\end{array}\right) \cdot \left(\begin{array}{c}39\\13\end{array}\right) \cdot \left(\begin{array}{c}26\\13\end{array}\right) = 22,351,974,068,953,663,683,015,600,000$$
distinct combinations of four thirteen-card hands drawn from a deck of 52 cards. If each combination takes just one nanosecond to examine, it would take thousands of years to examine any significant fraction of the possibilities -- and about 700 billion years to examine every combination. It seems likely that your Monte Carlo simulation just didn't have enough time to get an accurate statistic here. Did you get the same values from running the simulation multiple times?
Since the probability space is so large, we'll probably need to approach this based on theory. Inconveniently, analyzing four-of-a-kinds with thirteen-card hands is complicated, so let's simplify to a smaller deck. Consider a deck of cards with just two ranks (A and 2) and two players. In that case, if one player has a four-of-a-kind, then the other player automatically does as well. This extreme case suggests that your friend's hypothesis is correct.
To test the idea, let's step it up a little: three ranks (A, 2, and 3) but still two players. Then the probability of one player having a four-of-a-kind is given by
$$\frac{3}{11}\cdot\frac{2}{10}\cdot\frac19 \cdot \left(\begin{array}{c}6\\4\end{array}\right) = \frac{1}{11}$$
but the probability of the second player having a four-of-a-kind given that the first player does is $3/7$ (whenever the leftover two cards in the first player's hand are of the same rank - which happens with probability $3/7$ - the second player automatically has the other four-of-a-kind).
This pretty clearly highlights the trend: one player having a four-of-a-kind does increase the likelihood of other players having four-of-a-kinds. It's just a difficult phenomenon to see in a large deck.
You have to be careful what question you are asking. What your friend seems to be claiming is if you draw a hand of $13$ cards for one player and it has four of a kind, the chance of a second hand you draw from the rest of the deck having four of a kind is increased. My intuition agrees with that. Distributing all the cards and asking the chance of two four of a kinds given there is one four of a kind is a different question. It seems likely you could modify your program to ask the first question.
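A sketch of a simulation asking that first question (hypothetical helper names; cards are 0-51 with rank given by card % 13, and the conditional frequency is estimated only over deals where the first hand already has a four-of-a-kind):
import random
from collections import Counter

def has_four_of_a_kind(hand):
    return any(count == 4 for count in Counter(card % 13 for card in hand).values())

def conditional_frequency(trials=200_000):
    deck = list(range(52))
    hits = second_hits = 0
    for _ in range(trials):
        random.shuffle(deck)
        first, second = deck[:13], deck[13:26]
        if has_four_of_a_kind(first):
            hits += 1
            second_hits += has_four_of_a_kind(second)
    return hits, (second_hits / hits if hits else float("nan"))

hits, p_second_given_first = conditional_frequency()
print(f"conditioning deals: {hits},  P(second has quads | first has quads) ~ {p_second_given_first:.4f}")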
For a comparison, I imagined four players each flipping a coin with $0.05$ chance to come up heads. Clearly here your friend's question has player $2$ getting heads with chance $0.05$ regardless of what player $1$ does. The chance at least one player gets heads is about $0.1855$. The chance at least two players get heads, given that at least one did, is about $0.07558$, which is less than the chance of at least one getting heads.
Without even doing the calculations you can see intuitively that this must be true. If one player has four-of-a-kind then they have taken all four cards with the same number/face out of the deck, which means that the remaining deck now has heavier concentrations of each of the remaining numbers/faces. Conditioning on this event clearly increases the probability that another player will have four-of-a-kind. The increase in probability will not be huge, and it will still be a rare event, but your friend is correct.
|
OpenCV 3.3.0
Open Source Computer Vision
In this chapter, we will see the basics of the ORB feature detector and descriptor.
As an OpenCV enthusiast, the most important thing about ORB is that it came from "OpenCV Labs". This algorithm was brought up by Ethan Rublee, Vincent Rabaud, Kurt Konolige and Gary R. Bradski in their paper ORB: An efficient alternative to SIFT or SURF in 2011. As the title says, it is a good alternative to SIFT and SURF in computation cost, matching performance and, mainly, the patents. Yes, SIFT and SURF are patented and you are supposed to pay for their use. But ORB is not!
ORB is basically a fusion of the FAST keypoint detector and the BRIEF descriptor with many modifications to enhance the performance. First it uses FAST to find keypoints, then applies the Harris corner measure to find the top N points among them. It also uses a pyramid to produce multiscale features. But one problem is that FAST doesn't compute the orientation. So what about rotation invariance? The authors came up with the following modification.
It computes the intensity-weighted centroid of the patch with the located corner at the center. The direction of the vector from this corner point to the centroid gives the orientation. To improve the rotation invariance, moments are computed with x and y restricted to a circular region of radius \(r\), where \(r\) is the size of the patch.
Now for descriptors, ORB uses BRIEF descriptors. But we have already seen that BRIEF performs poorly with rotation. So what ORB does is to "steer" BRIEF according to the orientation of the keypoints. For any feature set of \(n\) binary tests at locations \((x_i, y_i)\), define a \(2 \times n\) matrix \(S\) which contains the coordinates of these pixels. Then using the orientation of the patch, \(\theta\), its rotation matrix is found and used to rotate \(S\) to get the steered (rotated) version \(S_\theta\).
ORB discretizes the angle in increments of \(2 \pi /30\) (12 degrees), and constructs a lookup table of precomputed BRIEF patterns. As long as the keypoint orientation \(\theta\) is consistent across views, the correct set of points \(S_\theta\) will be used to compute its descriptor.
BRIEF has an important property that each bit feature has a large variance and a mean near 0.5. But once it is oriented along the keypoint direction, it loses this property and becomes more distributed. High variance makes a feature more discriminative, since it responds differently to inputs. Another desirable property is for the tests to be uncorrelated, since then each test contributes to the result. To achieve all this, ORB runs a greedy search among all possible binary tests to find the ones that have both high variance and means close to 0.5, as well as being uncorrelated. The result is called
rBRIEF.
For descriptor matching, multi-probe LSH, which improves on the traditional LSH, is used. The paper says ORB is much faster than SURF and SIFT, and the ORB descriptor works better than SURF. ORB is a good choice on low-power devices, e.g. for panorama stitching.
As usual, we have to create an ORB object with the function,
cv2.ORB() or using the feature2d common interface. It has a number of optional parameters. The most useful ones are nFeatures, which denotes the maximum number of features to be retained (by default 500), and scoreType, which denotes whether the Harris score or the FAST score is used to rank the features (by default, the Harris score). Another parameter, WTA_K, decides the number of points that produce each element of the oriented BRIEF descriptor. By default it is two, i.e. it selects two points at a time. In that case, for matching, the NORM_HAMMING distance is used. If WTA_K is 3 or 4, which takes 3 or 4 points to produce the BRIEF descriptor, then the matching distance is defined by NORM_HAMMING2.
Below is a simple code which shows the use of ORB.
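(The code listing itself did not survive extraction; the following is a minimal sketch of the usual ORB workflow with the OpenCV 3.x Python API, where cv2.ORB_create is the 3.x factory function and 'simple.jpg' stands in for whatever test image you use.)
import cv2
from matplotlib import pyplot as plt

img = cv2.imread('simple.jpg', 0)          # load the test image as grayscale

# Initiate the ORB detector
orb = cv2.ORB_create()

# Find the keypoints with ORB
kp = orb.detect(img, None)

# Compute the descriptors with ORB
kp, des = orb.compute(img, kp)

# Draw only the keypoint locations, not size and orientation
img2 = cv2.drawKeypoints(img, kp, None, color=(0, 255, 0), flags=0)
plt.imshow(img2), plt.show()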
See the result below:
ORB feature matching will be covered in another chapter.
|
Trisect the sides of a quadrilateral and connect the points to obtain nine quadrilaterals, as can be seen in the figure. Prove that the area of the middle quadrilateral is one ninth of the area of the whole quadrilateral.
Consider all occurring points as vectors, as in @Calvin Lin's answer, and write $\mu$ for ${1\over3}$. Then $$p=(1-\mu)a+\mu b,\quad h=(1-\mu)d+\mu c,\quad n=(1-\mu)a+\mu d,\quad e=(1-\mu) b+\mu c\ .$$ It follows that $$(1-\mu)p+\mu h=(1-\mu)n+\mu e\quad(=:w')\ ,$$ which shows that in fact $$w=w'=(1-\mu)^2 a +\mu(1-\mu)(b+d)+\mu^2 c\ .$$ Interchanging $a$ and $c$ here gives $$y=(1-\mu)^2 c +\mu(1-\mu)(b+d)+\mu^2 a\ ,$$ so that we arrive at $$w-y=(1-2\mu)(a-c)\ .$$ Appealing to symmetry again we conclude that we also have $$x-z=(1-2\mu)(b-d)\ .$$ It follows that $${\rm area}[WXYZ]=(1-2\mu)^2\ {\rm area}[ABCD]\ ,$$ and this holds for any $\mu\in[0,{1\over2}[\ $.
This is most easily done using vectors. Let the points $A, B, C, D$ be represented by the vectors $a, b, c, d$. The area $[ABCD]$ is equal to $\frac{1}{2}(a-c) \times (b-d) $.
If you are unfamiliar with this, consider triangulation using the origin, and sum up the 4 triangle areas, to get
$$\begin{align} [ABCD] = & \frac{1}{2} a \times b + \frac{1}{2} b \times c + \frac{1}{2} c \times d + \frac{1}{2} d \times a \\ = & (a-c) \times \frac{1}{2} b + (a-c ) \times (-\frac{1}{2} d) \\= & \frac{1}{2}(a-c) \times (b-d) \end{align}$$
It is easy to show that $W= \frac{4a+2b+c+2d}{9}, X = \frac{2a+4b+2c+d}{9}, Y = \frac{a+2b+4c+2d}{9}, Z = \frac{2a+b+2c+4d}{9}$. Hence the area is
$$ [WXYZ] = \frac{1}{2} \left( \frac{3a-3c}{9} \right) \times \left( \frac{3b-3d}{9} \right) = \frac{1}{9} \times \frac{1}{2} (a-c)\times(b-d) = \frac{1}{9} [ABCD]$$
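As a quick numerical sanity check of these formulas (not part of the original answer), one can evaluate the signed shoelace area for a sample convex quadrilateral; the coordinates below are arbitrary.
import numpy as np

def signed_area(pts):
    # Shoelace formula: 0.5 * (sum of cross products of consecutive vertices)
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * (np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

a, b, c, d = map(np.array, [(0.0, 0.0), (6.0, 1.0), (7.0, 5.0), (1.0, 4.0)])

# Vertices of the middle quadrilateral, as derived above
W = (4*a + 2*b + c + 2*d) / 9
X = (2*a + 4*b + 2*c + d) / 9
Y = (a + 2*b + 4*c + 2*d) / 9
Z = (2*a + b + 2*c + 4*d) / 9

ratio = signed_area(np.array([W, X, Y, Z])) / signed_area(np.array([a, b, c, d]))
print(ratio)   # 0.111... = 1/9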
Claim. $W$ and $Z$ trisect $\overline{PH}$. Likewise elsewhere.
Proof. Left to the reader (for now).
Given the claim, we can make this illustrated argument:
Here, we have $\triangle ABC \sim \triangle PBF$, with $$\frac{|\overline{PB}|}{|\overline{AB}|} = \frac{|\overline{FB}|}{|\overline{CB}|} = \frac{2}{3} = \frac{|\overline{PF}|}{|\overline{AC}|} \qquad \text{and} \qquad \overline{PF} \parallel \overline{AC}$$ and $\triangle PZF \sim \triangle WZY$, with $$\frac{|\overline{WZ}|}{|\overline{PZ}|} = \frac{|\overline{YZ}|}{|\overline{FZ}|} = \frac{1}{2} = \frac{|\overline{WY}|}{|\overline{PF}|} \qquad \text{and} \qquad \overline{WY} \parallel \overline{PF}$$ so that $$|\overline{WY}|= \frac13 |\overline{AC}| \qquad \text{and} \qquad \overline{WY} \parallel \overline{AC}$$ and likewise $$|\overline{XZ}|= \frac13 |\overline{BD}| \qquad \text{and} \qquad \overline{XZ} \parallel \overline{BD}$$
By the Diagonal-Diagonal-Angle formula for quadrilateral area, $$|\square WXYZ| = \frac{1}{2}|\overline{WY}||\overline{XZ}|\sin\theta = \frac12 \cdot \frac{1}{3}|\overline{AC}| \cdot \frac13 |\overline{BD}|\cdot \sin\theta = \frac19 |\square ABCD|$$
Stretch the figure in a direction and by an amount which makes the top and bottom edges parallel. Such a transformation preserves relative areas. Now look at each trapezoid in the center row. It is clear that the area is equal to the average of the areas of the trapezoids above and below. Similarly, stretch to make the sides parallel, and look at the trapezoids in the center column. The area of each is equal to the average of the areas of the figures to the right and left. It follows that the area of the center quadrilateral is equal to the average of the areas of the outer eight quadrilaterals.
|
In my talk at the 2018 Chinese Mathematical Logic Conference, I asked whether \((V,\subset,P)\) is epsilon-complete, namely whether the membership relation can be recovered in this reduct. Professor Joseph S. Miller approached me during the dinner and pointed out that it is. Let me explain how.
Theorem
Let \((V,\in)\) be a structure of set theory, and let \((V,\subset,P)\) be the structure of the inclusion relation and the power set operation, both defined in \((V,\in)\) as usual. Then \(\in\) is definable in \((V,\subset,P)\).
Proof.
Fix a set \(x\). Define \(y\) to be the \(\subset\)-least set such that
\[\forall z \big((z\subset x\wedge z\neq x)\rightarrow P(z)\subset y\big).\]
Actually, \(y=P(x)-\{x\}\), so \(\{x\}= P(x) - y\). Since set difference can be defined from the subset relation, and \(\in\) is definable once we can form singletons (because \(x\in w\) if and only if \(\{x\}\subset w\)), we are done.
\(\Box\)
Here is another argument figured out by Jialiang He and me after we heard Professor Miller’s Claim.
Proof. Since \(\in\) can be defined in \((V,\subset,\bigcup)\) (see the slides), it suffices to show that for any fixed set \(A\) we can define \(\bigcup A\) from \(\subset\) and \(P\).
Let \(B\) be the \(\subset\)-least set such that there is \(c\), \(B=P(c)\) and \(A\subset B\). Note that
\[ \bigcap\big\{P(d)\bigm|A\subset P(d)\big\}= P\big(\bigcap\big\{d\bigm|A\subset P(d)\big\}\big). \] Therefore, \(B\) is well-defined. Next, we show that \[ \bigcap\big\{d\bigm|A\subset P(d)\big\}=\bigcup A. \] Clearly, \(A\subset P(\bigcup A)\), so \(\bigcup A\) is one of the sets over which the intersection is taken; this proves the inclusion from left to right. For the other direction, if \(x\) lies in some element \(a\) of \(A\), then for every \(d\) with \(A\subset P(d)\) we have \(a\in P(d)\), i.e. \(a\subset d\), and hence \(x\in d\).
Therefore \(\bigcup A\) is the unique set whose power set is \(B\).
\(\Box\)
|
The first hep-th paper today, by Halverson and Morrison, deals with one of the five main classes of promising string/M-theory compactifications:
- heterotic \(E_8\times E_8\) strings on Calabi-Yau three-folds, string theorists' oldest promising horse
- heterotic Hořava-Witten M-theory limit with the same gauge group, on Calabi-Yaus times a line interval
- type IIB flux vacua – almost equivalently, F-theory on Calabi-Yau four-folds – with the notorious \(10^{500}\) landscape
- type IIA string theory with D6-branes plus orientifolds or similar braneworlds
- M-theory on \(G_2\) holonomy manifolds
Halverson and Morrison focus on the last group, the \(G_2\) compactifications, although they don't consider "quite" realistic compactifications. To have non-Abelian gauge groups like the Standard Model's \(SU(3)\times SU(2)\times U(1)\), one needs singular seven-dimensional \(G_2\) holonomy manifolds: the singularities are needed for the non-Abelian enhanced group.
They are satisfied with smooth manifolds whose gauge group in \(d=4\) is Abelian, namely \(U(1)^3\).
Recall that \(G_2\) is the "smallest" among five simple exceptional Lie groups – the others are \(F_4,E_6,E_7,E_8\). \(G_2\) is a subgroup of \(SO(7)\), the group rotating a 7-dimensional Euclidean space, but instead of allowing all 21 \(SO(7)\) generators, \(G_2\) only allows 2/3 of them, namely 14, those that preserve the multiplication table between the 7 imaginary units in the Cayley algebra (also known as octonions).
It's a beautiful structure. The preservation of the multiplication table, the antisymmetric tensor \(m_{ijk}\) where \(i,j,k\in \{1,2,\dots , 7\}\), is equivalent to the preservation of a spinor \(s\) in the 8-dimensional real spinor representation of \(Spin(7)\). After all,\[
m_{ijk} = s^T \gamma_{[i} \gamma_j \gamma_{k]} s.
\] And it's this conservation of "one spinor among eight" that is responsible for preserving one eighth of the original 32 real supercharges in M-theory. We are left with 4 unbroken supercharges or \(\mathcal{N}=1\) in \(d=4\).
Pretty much all the other groups deal with six-dimensional compact manifolds of the "hidden dimensions". In the M-theory case, we have eleven dimensions in total which is why the \(G_2\) holonomy manifolds are seven-dimensional. So the dimensionality is higher than for the 6-dimensional manifolds in string theory.
You may say that having a "higher number of dimensions", like in M-theory, means to "do a better job in translating the physics to geometry". We are geometrizing a higher percentage of the physical properties of the compactification – which some people could consider to be a "clear aesthetic advantage". And the \(G_2\) compactifications treat this maximum number of (seven) compactified dimensions on equal footing, which may be said to be "nice", too. More physical properties deciding about the particle spectrum are encoded in the geometric shapes of the compactified 7 dimensions; fewer of them are carried by "matter fields" or "branes" living on top of the compactified dimensions. All these comments are mine but I guess that string theorists including the authors of this paper would generally endorse my observations.
(The type IIB vacua may also be viewed as "12-dimensional" F-theory on 8-dimensional manifolds, Calabi-Yau four-folds, and in some sense, because of these 8 extra dimensions, F-theory geometrizes even a "higher fraction of physics" than M-theory. It may translate some fluxes to a topology change of the 8-dimensional manifold. But unlike M-theory's 7 dimensions, the 8 dimensions in F-theory are not treated on completely equal footing – two of them have to be an infinitesimal toroidal fiber.)
These differences have an impact on the counting of the number of vacua. You have heard that the type IIB flux vacua lead to \(10^{500}\) different solutions of string theory. They are built by adding fluxes and branes to the compactified dimensions. The fluxes and branes are "decorations" of a geometry that is given to start with. But the number of topologically distinct 6-dimensional manifolds used in these games is of order 30,000 (at least if we assume that each choice of the Hodge numbers \(h^{1,1}\) and \(h^{1,2}\) produces a unique topology which I believe is close to the truth because if there were a huge excess, almost all arrangements of these small enough Hodge numbers would be realized by a known topology which is known not to be the case), even though this upper class may be built in many ways, sometimes from millions of building blocks. On the other hand, the decoration may be added on top of the geometry in "googol to the fifth or so" different ways.
As I said, M-theory has "more dimensions of the underlying geometry" and "fewer decorations". Instead of 30,000 different topologies, they show that some recent construction produces something like 500 million different topologies, i.e. half a billion of allowed seven-dimensional manifolds that are so qualitatively different that they can't be connected with each other continuously, through non-singular intermediate manifolds. But there's nothing much to add (matter fields' backgrounds, fluxes, branes) here, so this is pretty much the final number of the vacua. (The four-form fluxes \(G_4\) over 4-cycles of the manifold may be nonzero but for any allowed compactification, its cousin with \(G_4\) equal to zero is allowed, too. And nonzero values of \(G_4\) qualitatively change the story on moduli stabilization – by adding a superpotential term that most researchers seem to find unattractive, at least now.)
The 500-million class of seven-dimensional \(G_2\) compactifications was constructed by Kovalev (and those who fixed some of his errors and extended the method). The method is known as TCS, the "twisted connected sum". One starts with two Calabi-Yau three-folds times a circle, twists them, and glues them in such a way that the final result is guaranteed to have \(G_2\) holonomy. It's probably no coincidence that 500 million is very close to the number of "subsets with two elements" of a set with 30,000 elements. The information carried by the topology of a \(G_2\) holonomy manifold
could be very close (the same? Probably not) to the information carried by two Calabi-Yau three-folds.
It seems to me that this method betrays some dependence on the complex geometry and Calabi-Yaus. This is sort of an undemocratic situation. The laymen often dislike complex numbers and fail to realize that complex numbers are more fundamental in natural mathematics (e.g. calculus and higher-dimensional geometry) than e.g. real numbers. However, professional mathematicians do not suffer from this problem, of course. They do realize that complex numbers are more natural. And I would argue that the complex geometries and other things may even be "overstudied" relatively to other things.
My feeling is that the pairing of the dimensions into "complex dimensions", something that is a part of the Calabi-Yau manifolds, is intrinsically linked to the supersymmetry as realized in perturbative string theory because the field \(B_{\mu\nu}\) coupled to the fundamental strings' world sheets has two indices, just like the metric tensor. That's why they're naturally combined into some complex tensors with two indices and why the spacetime dimensions end up paired. The "complex-like" character of D-brane gauge groups, like \(U(N)\), is probably a sign of the same character of perturbative string theory that loves bilinear invariants and therefore complex numbers. Lots of things are known and solvable.
On the other hand, M-theory likes compactifications with holonomies like \(G_2\) that also has a cubic (or quartic, it's equivalent) invariant, a higher-than-bilinear one. Enhanced gauge groups from singularities are not just \(U(N)\)-like, membranes are coupled to a 3-form gauge potential \(C_{\lambda\mu\nu}\), not a 2-form gauge potential \(B_{\mu\nu}\), and this is one of the sources of the cubic and higher-order structures. That's a heuristic argument why exceptional Lie groups and other things that go "beyond complex numbers" are more frequently encountered in M-theory but not in perturbative string theory.
The exceptional Lie groups and cubic invariants are perhaps "more exceptional" and "harder to solve" which is why our knowledge of the M-theory compactifications and \(G_2\) holonomy manifolds is probably less complete than in the case of the Calabi-Yau manifolds. Perturbative methods are usually inapplicable because there's no solvable "zeroth order approximation": M-theory wants couplings of order one and the dilaton or the string coupling is no longer adjustable (because the extra dimension whose size dictates the dilaton has been used as one dimension, some material to construct the 7-dimensional geometry from which it cannot be separated).
And we usually reduce the \(G_2\) manifolds – which are odd-dimensional and therefore obviously not complex – to some complex manifolds. Couldn't we discuss these manifolds without any reference to the complex ones? One may have a feeling that it should be possible, for the sake of democracy – the classes of the vacua have the same \(\mathcal{N}=1\) supersymmetry and may be expected to be treated more democratically – but this feeling may be wrong, too. Maybe the exceptional groups and M-theory should be viewed as some "strange derivatives" of the classical groups and string theory based on bilinear things, after all, and the democracy between the descriptions is an illusion.
Back to the paper.
They discuss these 500 million \(G_2\) manifolds and various membrane instantons and topological transitions in between them, along with the spectra of the models and the Higgs vs Coulomb branches. Most of this work deals with non-singular \(G_2\) manifolds that produce Abelian gauge groups in \(d=4\) only but in this context, it's natural to expect that insights about the compactifications that allow the Standard Model or GUT gauge groups, for example, are "special cases" of the constructions above – or "special cases of a generalization" of the TCS construction that also allows singularities.
Concerning the right compactification, I think it is "very likely" that our Universe allows a description in terms of one of the five compactifications mentioned at the top. The anthropic people think that the class with the "overwhelming majority of the solutions", the type IIB flux vacua, is almost inevitably the most relevant one. But I completely disagree. There's no rational argument linking "the number of solutions in a class" with the "probability that the class contains the right compactification".
I think it is much more rational to say that each of the five classes above has about 20% probability of being relevant. That is my idea about the fair Bayesian inference: each comparably simple, qualitatively different hypothesis (in this case, each of the 5 classes of stringy compactifications) should be assigned the same or comparable prior probability, otherwise we are biased. (The correct vacuum may also allow two or many different dual descriptions.) If the "last 20%" are relevant and our world is a \(G_2\) compactification of M-theory, it may ultimately be sufficient to eliminate about 500 million compactifications in total and pick the right one that passes some tests. It's plausible that the right compactification could be found if that's true. It may end up looking very special for some reason, too.
We don't know yet but I think it's obviously right to try to get as far as we can because the 5 descriptions of superstring/M-theory mentioned at the top seem to be the contemporary physicists' only strong candidates for a theory of Nature that goes beyond an effective quantum field theory and this situation hasn't changed for a decade or two.
|
=====Epimorphisms are surjective=====

A morphism $h$ in a category is an \emph{epimorphism} if it is right-cancellative, i.e. for all morphisms $f$, $g$ in the category, $f\circ h=g\circ h$ implies $f=g$.

A function $h:A\to B$ is \emph{surjective} (or \emph{onto}) if $B=h[A]=\{h(a): a\in A\}$, i.e., for all $b\in B$ there exists $a\in A$ such that $h(a)=b$.

\emph{Epimorphisms are surjective} in a (concrete) category of structures if the underlying function of every epimorphism is surjective.

If a concrete category has the [[amalgamation property]] and all epimorphisms are surjective, then it has the [[strong amalgamation property]][(E. W. Kiss, L. Márki, P. Pröhle, W. Tholen, \emph{Categorical algebraic properties. A compendium on amalgamation, congruence extension, epimorphisms, residual smallness, and injectivity}, Studia Sci. Math. Hungar., \textbf{18}, 1982, 79-140 [[http://www.ams.org/mathscinet-getitem?mr=85k:18003|MR review]])]
|
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The latest years have observed a resurrection of interest in searches for exotic states motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows / LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that Online data are immediately available offline for physics analysis (Turbo analysis). In addition, the computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first-level trigger, asynchronous second-level trigger, and Monte Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1\,\text{MeV}\ n_\text{eq}\,\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
|
Basically 2 inputs, $a>b$, which go into the first box, which does division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, and otherwise feeds $b,r$ back into the division box.
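In code, a literal transcription of that box picture (added for illustration, not taken from the chat) is just Euclid's algorithm:
def gcd(a, b):
    # assumes a > b > 0; the "division box" produces q, r with a = b*q + r, r < b
    while True:
        q, r = divmod(a, b)
        if r == 0:
            return b          # the "check box": r == 0 means b is the gcd
        a, b = b, r           # otherwise feed b, r back into the division box

print(gcd(252, 105))          # 21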
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator
|
The first equation actually contains the definition of the standard equilibrium constant:$$K^\circ = \exp\left\{\frac{−\Delta_r G^\circ}{R T}\right\}$$With this definition the equilibrium constant is dimensionless.
Under standard conditions the van't Hoff equation is$$\frac{\mathrm{d} \ln K^\circ}{\mathrm{d}T} = \frac{\Delta H^\circ}{R T^2},$$and therefore uses the same constant. The integrated variant is therefore already an approximation and may remain approximately correct when a different definition of the equilibrium constant is used.$$\ln \left( \frac{K_{T_2}}{K_{T_1}} \right) = -\frac{\Delta H^\circ}{R} \left( \frac{1}{T_2} - \frac{1}{T_1} \right)$$
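As an illustration of these two relations, here is a small sketch with invented numbers (the values of $\Delta_r G^\circ$ and $\Delta H^\circ$ are placeholders, not taken from the question):
import math

R = 8.314  # J/(mol K)

def K_standard(dG, T):
    # K° = exp(-Δr G° / (R T))
    return math.exp(-dG / (R * T))

def K_at_T2(K_T1, dH, T1, T2):
    # integrated van't Hoff: ln(K_T2 / K_T1) = -(ΔH°/R) (1/T2 - 1/T1)
    return K_T1 * math.exp(-dH / R * (1.0 / T2 - 1.0 / T1))

dG, dH = -20e3, -50e3          # hypothetical Δr G°(298 K) and ΔH°, in J/mol
K298 = K_standard(dG, 298.15)
print(K298, K_at_T2(K298, dH, 298.15, 350.0))
# For this exothermic example (ΔH° < 0) the equilibrium constant decreases at higher T.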
Now the ordinary equilibrium constant may be defined in various forms:$$K_x = \prod_B x_B^{\nu_B}.$$
Probably one of the best representations for the standard equilibrium constant involves relative activities, for an arbitrary reaction, $$\ce{\nu_{A}A + \nu_{B}B -> \nu_{C}C + \nu_{D}D},$$this resolves in $$K^\circ = \frac{a^{\nu_{\ce{C}}}(\ce{C})\cdot{}a^{\nu_{\ce{D}}}(\ce{D})}{a^{\nu_{\ce{A}}}(\ce{A})\cdot{}a^{\nu_{\ce{B}}}(\ce{B})}.$$
The concentration is connected to the activity via$$a(\ce{A})= \gamma_{c,\ce{A}}\cdot{}\frac{c(\ce{A})}{c^\circ},$$where the standard concentration is $c^\circ = \pu{1 mol/L}$. At reasonable concentrations it is therefore fair to assume that activities can be substituted by concentrations, as$$\lim_{c(\ce{A})\to\pu{0 mol/L}}\left(\gamma_{c,\ce{A}}\right)=1.$$See also a very detailed answer of Philipp.
The partial pressure is connected to the activity via$$a(\ce{A}) = \frac{f(\ce{A})}{p^{\circ}} = \phi_{\ce{A}} y_{\ce{A}} \frac{p}{p^{\circ}},$$with the fugacity $f$ and the fugacity coefficient $\phi$ and the fraction occupied by the gas $y$, the total pressure $p$, as well as the standard pressure $p^\circ=\pu{1 bar}$ or traditional use of $p^\circ=\pu{1 atm}$.For low pressures it is also fair to assume that you can rewrite the activity with the partial pressure $p(\ce{A})$, since\begin{align}\lim_{p\to\pu{0 bar}}\left(\phi_{\ce{A}}\right) &=1, &p(\ce{A}) &= y_{\ce{A}}\cdot{}p.\end{align}
Of course concentrations and partial pressures are connected via the ideal gas law\begin{aligned}pV\ &=nRT\\p\ &\propto \text{const.} \cdot c,\end{aligned}and therefore it is valid to write:$$K_c\propto \text{const.} \cdot K_p.$$It is important to note that the two expressions are not necessarily equal; they can differ by a factor of $(RT)^{\sum\nu}$.
While using these equations it is always necessary to keep in mind, that there are a lot of approximations involved, so it depends very much on what you are looking for. Either use might be fine, as all these functions are related - some might lead to simple, some may lead to complicated solutions.
|
One disadvantage of the fact that you have posted 5 identical answers (1, 2, 3, 4, 5) is that if other users have some comments about the website you created, they will post them in all these places. If you have some place online where you would like to receive feedback, you should probably also add a link to that. — Martin Sleziak 1 min ago
BTW your program looks very interesting, in particular the way to enter mathematics.
One thing that seems to be missing is documentation (at least I did not find it).
This means that it is not explained anywhere: 1) How a search query is entered. 2) What the search engine actually looks for.
For example upon entering $\frac xy$ will it find also $\frac{\alpha}{\beta}$? Or even $\alpha/\beta$? What about $\frac{x_1}{x_2}$?
*******
Is it possible to save a link to particular search query? For example in Google I am able to use link such as: google.com/search?q=approach0+xyz Feature like that would be useful for posting bug reports.
When I try to click on "raw query", I get curl -v https://approach0.xyz/search/search-relay.php?q='%24%5Cfrac%7Bx%7D%7By%7D%24' But pasting the link into the browser does not do what I expected it to.
*******
If I copy-paste search query into your search engine, it does not work. For example, if I copy $\frac xy$ and paste it, I do not get what would I expect. Which means I have to type every query. Possibility to paste would be useful for long formulas. Here is what I get after pasting this particular string:
I was not able to enter integrals with bounds, such as $\int_0^1$. This is what I get instead:
One thing which we should keep in mind is that duplicates might be useful. They improve the chance that another user will find the question, since with each duplicate another copy with somewhat different phrasing of the title is added. So if you spent reasonable time by searching and did not find...
In comments and other answers it was mentioned that there are some other search engines which could be better when searching for mathematical expressions. But I think that as nowadays several pages uses LaTex syntax (Wikipedia, this site, to mention just two important examples). Additionally, som...
@MartinSleziak Thank you so much for your comments and suggestions here. I have taken a brief look at your feedback; I really love it and will seriously look into those points and improve approach0. Give me just some minutes, I will answer/reply to your feedback in our chat. — Wei Zhong 1 min ago
I still think that it would be useful if you added to your post where do you want to receive feedback from math.SE users. (I suppose I was not the only person to try it.) Especially since you wrote: "I am hoping someone interested can join and form a community to push this project forward, "
BTW those animations with examples of searching look really cool.
@MartinSleziak Thanks to your advice, I have appended more information on my posted answers. Will reply to you shortly in chat. — Wei Zhong29 secs ago
We are an open-source project hosted on GitHub (http://github.com/approach0). You are welcome to send any feedback on our GitHub issue page!
@MartinSleziak Currently it has only documentation for developers (approach0.xyz/docs); hopefully this project will accelerate its release process when people get involved. But I will list this as an important TODO before publishing approach0.xyz. At that time I hope there will be a helpful guide page for new users.
@MartinSleziak Yes, $x+y$ will find $a+b$ too; IMHO this is the very basic requirement for a math-aware search engine. Actually, approach0 looks into expression structure and symbolic alpha-equivalence too. But for now, $x_1$ will not match $x$ because approach0 considers them not structurally identical; however, you can use a wildcard to match $x_1$ just by entering a question mark "?" or \qvar{x} in a math formula. As for your example, entering $\frac{\qvar{x}}{\qvar{y}}$ is enough to match it.
@MartinSleziak As for the query link, it needs more explanation. Technologically, the way you mentioned that Google uses is an HTTP GET method, but for mathematics a GET request may not be appropriate since a query has structure; usually a developer would alternatively use an HTTP POST request with the query JSON-encoded. This makes development much easier because JSON is richly structured and it is easy to separate math keywords.
@MartinSleziak Right now there are two solutions for "query link" problem you addressed. First is to use browser back/forward button to navigate among query history.
@MartinSleziak Second is to use a computer command line 'curl' to get search results from particular query link (you can actually see that in browser, but it is in developer tools, such as the network inspection tab of Chrome). I agree it is helpful to add a GET query link for user to refer to a query, I will write this point in project TODO and improve this later. (just need some extra efforts though)
@MartinSleziak Yes, if you search \alpha, you will get all \alpha document ranked top, different symbols such as "a", "b" ranked after exact match.
@MartinSleziak Approach0 plans to add a "Symbol Pad" just like what www.symbolab.com and searchonmath.com are using. This will help user to input greek symbols even if they do not remember how to spell.
@MartinSleziak Yes, you can get, greek letters are tokenized to the same thing as normal alphabets.
@MartinSleziak As for integrals with upper bounds, I think it is a problem in a JavaScript plugin approach0 is using; I also observe this issue. The only thing you can do is to use the arrow keys to move the cursor to the rightmost position and hit '^' so it goes to the upper-bound edit.
@MartinSleziak Yes, it has a threshold now, but this is easy to adjust from source code. Most importantly, I have ONLY 1000 pages indexed, which means only 30,000 posts on math stackexchange. This is a very small number, but will index more posts/pages when search engine efficiency and relevance is tuned.
@MartinSleziak As I mentioned, the index is too small currently. You probably will get what you want when this project develops to the next stage, which is to enlarge the index and publish.
@MartinSleziak Thank you for all your suggestions. Currently I just hope more developers get to know this project; indeed, this is my side project and development progress can be very slow due to my time constraints. But I believe in its usefulness and will spend my spare time on development until it is published.
So, we would not have polls like: "What is your favorite calculus textbook?" — GEdgar2 hours ago
@GEdgar I'd say this goes under "tools." But perhaps it could be made explicit. — quid1 hour ago
@quid I think that the type of question mentioned in GEdgar's comment is closer to book-recommendations which are valid questions on the main. (Although not formulated like that.) I also think that his comment was tongue-in-cheek. (Although it is a bit more difficult for me to detect sarcasm, as I am not a native speaker.) — Martin Sleziak57 mins ago
"What is your favorite calculus textbook?" is opinion based and/or too broad for main. If at all it is a "poll." On tex.se they have polls "favorite editor/distro/fonts etc" while actual questions on these are still on-topic on main. Beyond that it is not clear why a question which software one uses should be a valid poll while the question which book one uses is not. — quid7 mins ago
@quid I will reply here, since I do not want to digress in the comments too much from the topic of that question.
Certainly I agree that "What is your favorite calculus textbook?" would not be suitable for the main. Which is why I wrote in my comment: "Although not formulated like that".
Book recommendations are certainly accepted on the main site, if they are formulated in the proper way.
If there is a community poll and somebody suggests the question from GEdgar's comment, I will be perfectly OK with it. But I thought that his comment was simply a playful remark pointing out that there are plenty of "polls" of this type on the main (although there should not be). I guess some examples can be found here or here.
Perhaps it is better to link search results directly on MSE here and here, since in the Google search results it is not immediately visible that many of those questions are closed.
Of course, I might be wrong - it is possible that GEdgar's comment was meant seriously.
I have seen such a poll for the first time on TeX.SE. The poll there was concentrated on the TeXnical side of things. If you look at the questions there, they are asking about TeX distributions, packages, tools used for graphs and diagrams, etc.
Academia.SE has some questions which could be classified as "demographic" (including gender).
@quid From what I heard, it stands for Kašpar, Melichar and Baltazár, as the answer there says. In Slovakia you would see G+M+B, where G stand for Gašpar.
But that is only anecdotal.
And if I am to believe Slovak Wikipedia it should be Christus mansionem benedicat.
From the Wikipedia article: "Nad dvere kňaz píše C+M+B (Christus mansionem benedicat - Kristus nech žehná tento dom). Toto sa však často chybne vysvetľuje ako 20-G+M+B-16 podľa začiatočných písmen údajných mien troch kráľov."
My attempt to write English translation: The priest writes on the door C+M+B (Christus mansionem benedicat - Let the Christ bless this house). A mistaken explanation is often given that it is G+M+B, following the names of three wise men.
As you can see there, Christus mansionem benedicat is translated to Slovak as "Kristus nech žehná tento dom". In Czech it would be "Kristus ať žehná tomuto domu" (I believe). So K+M+B cannot come from initial letters of the translation.
It seems that they have also other interpretations in Poland.
"A tradition in Poland and German-speaking Catholic areas is the writing of the three kings' initials (C+M+B or C M B, or K+M+B in those areas where Caspar is spelled Kaspar) above the main door of Catholic homes in chalk. This is a new year's blessing for the occupants and the initials also are believed to also stand for "Christus mansionem benedicat" ("May/Let Christ Bless This House").
Depending on the city or town, this will happen sometime between Christmas and the Epiphany, with most municipalities celebrating closer to the Epiphany."
BTW in the village where I come from the priest writes those letters on houses every year during Christmas. I do not remember seeing them on a church, as in Najib's question.
In Germany, the Czech Republic and Austria the Epiphany singing is performed at or close to Epiphany (January 6) and has developed into a nationwide custom, where the children of both sexes call on every door and are given sweets and money for charity projects of Caritas, Kindermissionswerk or Dreikönigsaktion[2] - mostly in aid of poorer children in other countries.[3]
A tradition in most of Central Europe involves writing a blessing above the main door of the home. For instance if the year is 2014, it would be "20 * C + M + B + 14". The initials refer to the Latin phrase "Christus mansionem benedicat" (= May Christ bless this house); folkloristically they are often interpreted as the names of the Three Wise Men (Caspar, Melchior, Balthasar).
In Catholic parts of Germany and in Austria, this is done by the Sternsinger (literally "Star singers"). After having sung their songs, recited a poem, and collected donations for children in poorer parts of the world, they will chalk the blessing on the top of the door frame or place a sticker with the blessing.
On Slovakia specifically it says there:
The biggest carol singing campaign in Slovakia is Dobrá Novina (English: "Good News"). It is also one of the biggest charity campaigns by young people in the country. Dobrá Novina is organized by the youth organization eRko.
|
where the left part of the equal is the circuit representation of the Toffoli gate where the first two lines are the control qubits and the third one is the target. On the right side we have the CNOT gate whose matrix representation is
$$ CNOT=\left(\begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0 \end{array}\right)$$
T, S and H are the $\dfrac{\pi}{8}$ gate, phase gate and Hadamard gate, respectively, represented by the matrices
$$H=\dfrac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 & 1\\ 1 & -1 \end{array}\right), S=\left(\begin{array}{cc} 1 & 0\\ 0 & i \end{array}\right), T=\left(\begin{array}{cc} 1 & 0\\ 0 & exp(i\pi/4) \end{array}\right)$$
My problem is that I don't know how the circuit acts on a given input, for example the three qubits each in the state |1>. For instance, on the second control qubit there is a CNOT gate, so do I have to calculate the sum mod 2 ($\oplus$) of |1> and the qubit below it once it has passed through the H gate? That is, before the target qubit reaches $T^{\dagger}$, we have the state $|1>|1\oplus H|1>>=-\dfrac{1}{\sqrt{2}}|1>(|0>-|1>)$
Is this correct?
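One way to check a step like this numerically (added here as a sketch; it only reproduces the partial circuit discussed above, i.e. H on the target followed by the CNOT controlled on the second qubit):
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ket1 = np.array([0.0, 1.0])
psi = np.kron(ket1, np.kron(ket1, ket1))      # |1>|1>|1>; the third qubit is the target

psi = np.kron(np.kron(I2, I2), H) @ psi       # Hadamard on the target
psi = np.kron(I2, CNOT) @ psi                 # CNOT: control = qubit 2, target = qubit 3

print(np.round(psi, 3))
# amplitudes: -0.707 on |110> and +0.707 on |111>,
# i.e. the state -(1/sqrt(2)) |1>|1>(|0>-|1>) computed above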
|
For an FIR filter, with symmetrical tap values $h[N-1-n]=h[n]$,
why is the group delay $\frac{N-1}{2} T$ (where $N$ is the number of taps of the FIR filter and $T$ is the sampling time)? Why is linear phase so important for the filter response?
To be precise, the group delay of a linear-phase FIR filter is $(N-1)/2$ samples, where $N$ is the filter length (i.e. the number of taps). The group delay is constant for all frequencies because the filter has a linear phase, i.e. its impulse response is symmetric (or anti-symmetric). A linear phase means that all frequency components of the input signal experience the same delay, i.e. there are no phase distortions. So for a frequency-selective filter (e.g., a low pass filter), if the input signal is in the passband of the filter, the output signal is approximately equal to the input signal delayed by the group delay of the filter. Note that in general FIR filters do not have a linear phase response. In this case, the group delay is a function of frequency.
EDIT:
Consider an odd-length FIR filter with even symmetry (a "type I" filter). In this case the group delay is an integer number of samples, $K=(N-1)/2$. The filter has a linear phase response (the group delay is the negative derivative of the phase, so a constant group delay corresponds to a linear phase) and the filter's frequency response can be written as
$$H(\omega)=A(\omega)e^{-jK\omega}$$
where $A(\omega)$ is the real-valued amplitude function, and $\phi(\omega)=-K\omega$ is the (linear) phase. If the input signal's Fourier transform is $X(\omega)$, then the Fourier transform of the output signal is given by
$$Y(\omega)=H(\omega)X(\omega)=A(\omega)X(\omega)e^{-jK\omega}\tag{1}$$
Now assume that the frequency content of the input signal is in the passband of the filter, i.e. in the frequency region where $A(\omega)\approx 1$ holds. Then from (1) the output signal's spectrum can be written as
$$Y(\omega)\approx X(\omega)e^{-jK\omega}\tag{2}$$
and, from the shift property of the Fourier transform, the output signal is (approximately) given by
$$y(n)\approx x(n-K)$$
i.e. $y(n)$ is a delayed version of the input signal, where the delay is given by the group delay $K$.
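A quick numerical check of the constant group delay (a sketch using SciPy's group_delay; the symmetric taps here are arbitrary):
import numpy as np
from scipy.signal import group_delay

h = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0, 1.0])   # h[N-1-n] = h[n], N = 9
w, gd = group_delay((h, [1.0]))

print(gd.min(), gd.max())   # both ~ 4.0 = (N-1)/2 samples, at every frequency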
Why is the group delay constant and close to half of the number of taps in this symmetric-tap filter ($h[N−1−n]=h[n]$)? The answer lies in it being symmetric. Note that the $n^{th}$ tap (coefficient) of an FIR filter can be seen as a delay of $n$ "sampling intervals" (or $n$ samples).
Thus, if you pick a pair of "symmetric" taps/coefficients of this filter (say at $n_1$ and $N-n_1-1$), and pass a sinusoidal signal, $\sin(\omega t)$, through this pair, you would see that the output is a sinusoid of the same frequency $\omega$ but with a delay which is the average of the two tap delays, i.e., $(N-n_1-1+n_1)/2 = (N-1)/2$. It is simple math: try adding $\sin(\omega t-t_0-t_1)$ and $\sin(\omega t-t_0+t_1)$; the sum is $K\sin(\omega t-t_0)$ with $K=2\cos(\omega t_1)$. Do this for all $N/2$ pairs and you will find that all of them have a time delay of $t_0 = (N-1)/2$ samples. Please excuse my extreme brevity and delay in replying. I hope I got the point across.
To answer your second question, why linear phase is so important for the filter response: consider the filter as a channel. When you pass your signal through a channel, what you want is for all frequency components to be delayed by an equal number of samples; if different frequency components get delayed by different amounts, there will be distortion in your signal, which is difficult to compensate, while a constant delay can be compensated by some simple techniques at the receiver. Linear phase, or constant group delay, indicates that all frequencies (for which the filter response is linear) are delayed by an equal amount. Therefore your output signal is just a shifted version of your input signal.
|
I would like to calculate an observable's expectation value of a state, the ground state, or time evolution of a finite system with $N$ spins under an Hamiltonian $H$.
For the sake of discussion assume $N=16$ so we can use IBM QC.
How do I translate a given Hamiltonian into quantum logic gates in order to simulate the system's evolution or its statistics?
If it makes life easier, assume a local Hamiltonian or any lattice-based model such as an Ising model Hamiltonian:
$$
H(\sigma) = - \sum_{\langle i~j\rangle} J_{ij} \sigma_i \sigma_j -\mu \sum_{j} h_j\sigma_j $$
As a side note, I was intrigued by this question, which mentions Andrew Lucas' paper
Ising formulations of many NP problems, and thought that it would be nice to know how to translate a Hamiltonian into a QC circuit.
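Not an answer to the gate-translation question, but a small classical sketch (added here, with made-up couplings) that assembles the Ising Hamiltonian for a handful of spins and computes a ground-state expectation value and exact time evolution; it is only feasible for small N, and can serve as a reference for whatever circuit one builds:
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def op_on(single, site, n):
    # Embed a single-spin operator at position `site` of an n-spin register.
    out = np.array([[1.0]])
    for k in range(n):
        out = np.kron(out, single if k == site else I2)
    return out

def ising_hamiltonian(J, h, mu=1.0):
    # H = - sum_{i<j} J_ij Z_i Z_j - mu * sum_j h_j Z_j
    # (all pairs here; restrict the double loop to neighbours <i j> as needed)
    n = len(h)
    H = np.zeros((2**n, 2**n))
    for i in range(n):
        for j in range(i + 1, n):
            H -= J[i, j] * op_on(Z, i, n) @ op_on(Z, j, n)
        H -= mu * h[i] * op_on(Z, i, n)
    return H

n = 4                                   # kept small on purpose: dense 2^n matrices
rng = np.random.default_rng(1)
J = rng.normal(size=(n, n))
h = rng.normal(size=n)

H = ising_hamiltonian(J, h)
evals, evecs = np.linalg.eigh(H)
ground = evecs[:, 0]
print(ground @ op_on(Z, 0, n) @ ground)              # <Z_0> in the ground state

psi0 = np.ones(2**n) / np.sqrt(2**n)                 # |+...+> initial state
psi_t = expm(-1j * H * 0.5) @ psi0                   # evolve to t = 0.5 (hbar = 1)
print(np.vdot(psi_t, op_on(X, 0, n) @ psi_t).real)   # <X_0> at time t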
|
Say there is a population of mass 1 in which individuals can choose one of two traits (1 and 2). The population share with trait 1 at time $t+1$ is $$p_{t+1}=\frac{1}{1+e^{-\beta\left(u_1-u_2+J(2p_t-1)\right)}}$$ for constants $u_1,u_2,\beta,J$, with $u_1>u_2$. I can't calculate the stationary points analytically, but it can be shown that for $\beta J>1$, there are at most three stationary points (of which the smallest and largest are stable) and at least one stationary point (which will be in the neighbourhood of $p^*\approx 1$).
Assume $p_{t=0}=0$. Consider two cases defined by two sets of parameters $u_1,u_2,\beta,J$ such that in each, there is one unique stationary point $p^*\approx 1$. How can I work out which system will reach its stationary point faster? In general I would like to know how to calculate the convergence rates analytically, if it's possible, but if not, knowing how to compare two convergent systems and work out the faster one would suffice.
For example, my hunch is that a higher $J$ slows the system down. For given $\beta,u_1,u_2$, if we take two environments with $J_1$ and $J_2$, respectively, in each of which there exists one stationary point, then the curve of the map with the lower $J$ more closely "wraps" around the 45-degree line $p_{t+1}=p_t$, which should lead to more transition steps. (I don't know how to explain that better, but hopefully you understand what I mean.)
Even just knowing what terms to search for would be a big help. I've tried searching for literature on convergence rates but given the implicit transition expression, I'm not sure whether my case is applicable to (e.g.) Markov chain theory, etc. As you can tell, this is not my area :) Any search terms would be appreciated.
(Postscript: Ultimately I'm investigating a three-trait system, but I thought this would a good start to get the tools to solve it. However, if your answer also generalises, all the better!)
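A small numerical sketch for comparing two such systems (added here; the two parameter sets are purely hypothetical). It also prints $2\beta J\,p^*(1-p^*)$, which is $|f'(p^*)|$ for this map and, by the usual linearization argument for one-dimensional maps, governs the asymptotic speed of convergence when it lies strictly between 0 and 1:
import numpy as np

def step(p, u1, u2, beta, J):
    return 1.0 / (1.0 + np.exp(-beta * (u1 - u2 + J * (2.0 * p - 1.0))))

def iterations_to_converge(u1, u2, beta, J, p0=0.0, tol=1e-10, max_iter=100000):
    p = p0
    for k in range(max_iter):
        p_next = step(p, u1, u2, beta, J)
        if abs(p_next - p) < tol:
            return k + 1, p_next
        p = p_next
    return max_iter, p

for params in [(1.0, 0.5, 2.0, 0.3), (1.0, 0.5, 2.0, 0.45)]:   # (u1, u2, beta, J), hypothetical
    n_steps, p_star = iterations_to_converge(*params)
    beta, J = params[2], params[3]
    rate = 2 * beta * J * p_star * (1 - p_star)     # |f'(p*)|: asymptotic contraction factor
    print(params, n_steps, round(p_star, 6), round(rate, 4))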
|
As described in the Rich Output tutorial, the IPython display system can display rich representations of objects in the following formats:
This Notebook shows how you can add custom display logic to your own classes, so that they can be displayed using these rich representations. There are two ways of accomplishing this:
1. Implementing special display methods such as _repr_html_ when you define your class.
2. Registering a display function for an existing, third-party type that you cannot modify.
This Notebook describes and illustrates both approaches.
Import the IPython display functions.
from IPython.display import ( display, display_html, display_png, display_svg)
Parts of this notebook need the matplotlib inline backend:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
The main idea of the first approach is that you have to implement special display methods when you define your class, one for each representation you want to use. Here is a list of the names of the special methods and the values they must return:
_repr_html_: return raw HTML as a string
_repr_json_: return a JSONable dict
_repr_jpeg_: return raw JPEG data
_repr_png_: return raw PNG data
_repr_svg_: return raw SVG data as a string
_repr_latex_: return LaTeX commands in a string surrounded by "$".
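For instance, a minimal class that implements only _repr_html_ (a small sketch, not from the original notebook) could look like this:
class Color(object):
    """Toy object that renders itself as a colored swatch in the notebook."""
    def __init__(self, css_color):
        self.css_color = css_color

    def _repr_html_(self):
        return ('<div style="width:60px; height:20px; background:%s;"></div>'
                % self.css_color)

Color('salmon')   # the notebook frontend picks the HTML representation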
As an illustration, we build a class that holds data generated by sampling a Gaussian distribution with given mean and standard deviation. Here is the definition of the
Gaussian class, which has a custom PNG and LaTeX representation.
from IPython.core.pylabtools import print_figure
from IPython.display import Image, SVG, Math

class Gaussian(object):
    """A simple object holding data sampled from a Gaussian distribution.
    """
    def __init__(self, mean=0.0, std=1, size=1000):
        self.data = np.random.normal(mean, std, size)
        self.mean = mean
        self.std = std
        self.size = size
        # For caching plots that may be expensive to compute
        self._png_data = None

    def _figure_data(self, format):
        fig, ax = plt.subplots()
        ax.hist(self.data, bins=50)
        ax.set_title(self._repr_latex_())
        ax.set_xlim(-10.0,10.0)
        data = print_figure(fig, format)
        # We MUST close the figure, otherwise IPython's display machinery
        # will pick it up and send it as output, resulting in a double display
        plt.close(fig)
        return data

    def _repr_png_(self):
        if self._png_data is None:
            self._png_data = self._figure_data('png')
        return self._png_data

    def _repr_latex_(self):
        return r'$\mathcal{N}(\mu=%.2g, \sigma=%.2g),\ N=%d$' % (self.mean,
                                                                 self.std, self.size)
Create an instance of the Gaussian distribution and return it to display the default representation:
x = Gaussian(2.0, 1.0)
x
You can also pass the object to the
display function to display the default representation:
display(x)
Use
display_png to view the PNG representation:
display_png(x)
Note the difference between display and display_png: the former computes all available representations of the object and lets the frontend decide which one to show, while the latter computes and displays only the PNG representation.
Create a new Gaussian with different parameters:
x2 = Gaussian(0, 2, 2000)
x2
You can then compare the two Gaussians by displaying their histograms:
display_png(x)
display_png(x2)
Note that, like display, you can call the display_png function multiple times in a cell to show more than one object.
When you are directly writing your own classes, you can adapt them for display in IPython by following the above approach. But in practice, you often need to work with existing classes that you can't easily modify. We now illustrate how to add rich output capabilities to existing objects. We will use the NumPy polynomials and change their default representation to be a formatted LaTeX expression.
First, consider how a NumPy polynomial object renders by default:
p = np.polynomial.Polynomial([1,2,3], [-10, 10])
p
Polynomial([ 1., 2., 3.], [-10., 10.], [-1, 1])
Next, define a function that pretty-prints a polynomial as a LaTeX string:
def poly_to_latex(p):
    terms = ['%.2g' % p.coef[0]]
    if len(p) > 1:
        term = 'x'
        c = p.coef[1]
        if c!=1:
            term = ('%.2g ' % c) + term
        terms.append(term)
    if len(p) > 2:
        for i in range(2, len(p)):
            term = 'x^%d' % i
            c = p.coef[i]
            if c!=1:
                term = ('%.2g ' % c) + term
            terms.append(term)
    px = '$P(x)=%s$' % '+'.join(terms)
    dom = r', $x \in [%.2g,\ %.2g]$' % tuple(p.domain)
    return px+dom
This produces, on our polynomial
p, the following:
poly_to_latex(p)
'$P(x)=1+2 x+3 x^2$, $x \\in [-10,\\ 10]$'
You can render this string using the
Latex class:
from IPython.display import Latex
Latex(poly_to_latex(p))
However, you can configure IPython to do this automatically by registering the
Polynomial class and the
poly_to_latex function with an IPython display formatter. Let's look at the default formatters provided by IPython:
ip = get_ipython()
for mime, formatter in ip.display_formatter.formatters.items():
    print('%24s : %s' % (mime, formatter.__class__.__name__))
               image/png : PNGFormatter
         application/pdf : PDFFormatter
               text/html : HTMLFormatter
              image/jpeg : JPEGFormatter
              text/plain : PlainTextFormatter
           text/markdown : MarkdownFormatter
        application/json : JSONFormatter
  application/javascript : JavascriptFormatter
              text/latex : LatexFormatter
           image/svg+xml : SVGFormatter
The
formatters attribute is a dictionary keyed by MIME types. To define a custom LaTeX display function, you want a handle on the
text/latex formatter:
ip = get_ipython()
latex_f = ip.display_formatter.formatters['text/latex']
The formatter object has a couple of methods for registering custom display functions for existing types.
help(latex_f.for_type)
Help on method for_type in module IPython.core.formatters:

for_type(typ, func=None) method of IPython.core.formatters.LatexFormatter instance
    Add a format function for a given type.

    Parameters
    ----------
    typ : type or '__module__.__name__' string for a type
        The class of the object that will be formatted using `func`.
    func : callable
        A callable for computing the format data.
        `func` will be called with the object to be formatted,
        and will return the raw data in this formatter's format.
        Subclasses may use a different call signature for the `func` argument.

        If `func` is None or not specified, there will be no change,
        only returning the current value.

    Returns
    -------
    oldfunc : callable
        The currently registered callable.
        If you are registering a new formatter,
        this will be the previous value (to enable restoring later).
help(latex_f.for_type_by_name)
Help on method for_type_by_name in module IPython.core.formatters:

for_type_by_name(type_module, type_name, func=None) method of IPython.core.formatters.LatexFormatter instance
    Add a format function for a type specified by the full dotted module and
    name of the type, rather than the type of the object.

    Parameters
    ----------
    type_module : str
        The full dotted name of the module the type is defined in, like ``numpy``.
    type_name : str
        The name of the type (the class name), like ``dtype``
    func : callable
        A callable for computing the format data.
        `func` will be called with the object to be formatted,
        and will return the raw data in this formatter's format.
        Subclasses may use a different call signature for the `func` argument.

        If `func` is None or unspecified, there will be no change,
        only returning the current value.

    Returns
    -------
    oldfunc : callable
        The currently registered callable.
        If you are registering a new formatter,
        this will be the previous value (to enable restoring later).
In this case, we will use
for_type_by_name to register
poly_to_latex as the display function for the
Polynomial type:
latex_f.for_type_by_name('numpy.polynomial.polynomial', 'Polynomial', poly_to_latex)
Once the custom display function has been registered, all NumPy
Polynomial instances will be represented by their LaTeX form instead:
p
p2 = np.polynomial.Polynomial([-20, 71, -15, 1])
p2
Rich output special methods and functions can only display one object or MIME type at a time. Sometimes this is not enough if you want to display multiple objects or MIME types at once. An example of this would be to use an HTML representation to put some HTML elements in the DOM and then use a JavaScript representation to add events to those elements.
IPython 2.0 recognizes another display method,
_ipython_display_, which allows your objects to take complete control of displaying themselves. If this method is defined, IPython will call it, and make no effort to display the object using the above described
_repr_*_ methods for custom display functions. It's a way for you to say "Back off, IPython, I can display this myself." Most importantly, your
_ipython_display_ method can make multiple calls to the top-level
display functions to accomplish its goals.
Here is an object that uses
display_html and
display_javascript to make a plot using the Flot JavaScript plotting library:
import json
import uuid
from IPython.display import display_javascript, display_html, display

class FlotPlot(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y
        self.uuid = str(uuid.uuid4())

    def _ipython_display_(self):
        json_data = json.dumps(list(zip(self.x, self.y)))
        display_html('<div id="{}" style="height: 300px; width:80%;"></div>'.format(self.uuid),
                     raw=True)
        display_javascript("""
        require(["//cdnjs.cloudflare.com/ajax/libs/flot/0.8.2/jquery.flot.min.js"], function() {
          var line = JSON.parse("%s");
          console.log(line);
          $.plot("#%s", [line]);
        });
        """ % (json_data, self.uuid), raw=True)
import numpy as np
x = np.linspace(0, 10)
y = np.sin(x)
FlotPlot(x, np.sin(x))
|
Support for Windows Products
How To Fix Error In The Midpoint Rule
If you have Error In The Midpoint Rule then we strongly recommend that you download and run this (Error In The Midpoint Rule) repair tool.
Symptoms & Summary
Error In The Midpoint Rule and other critical errors can occur when your Windows operating system becomes corrupted. Opening programs will be slower and response times will lag. When you have multiple applications running, you may experience crashes and freezes. There can be numerous causes of this error including excessive startup entries, registry errors, hardware/RAM decline, fragmented files, unnecessary or redundant program installations and so on.
Resolution
In order to fix your error, it is recommended that you download the
'Error In The Midpoint Rule Repair Tool'. This is an advanced optimization tool that can repair all the problems that are slowing your computer down. You will also dramatically improve the speed of your machine when you address all the problems just mentioned.
Recommended: In order to repair your system and Error In The Midpoint Rule, download and run Reimage. This repair tool will locate, identify, and fix thousands of Windows errors. Your computer should also run faster and smoother after using this software.
File Size: 746 KB. Compatible: Windows XP, Vista, 7 (32/64 bit), 8 (32/64 bit), 8.1 (32/64 bit), Windows 10 (32/64 bit). Downloads: 361,927.
From Mathematics Stack Exchange: "The error of the midpoint rule for quadrature" (http://math.stackexchange.com/questions/15242/the-error-of-the-midpoint-rule-for-quadrature)
Question (asked by Xodarap): Wikipedia says the midpoint formula for numerical integration has error of order $h^3 f''(\xi)$. I am trying to replicate this result. I'm guessing that I want to use Lagrange's formulation of the remainder for Taylor series. Let $x_0=\frac{a+b}{2}$ (i.e. the midpoint). The midpoint method says $\int_a^b f(x)dx \approx (b-a)f(\frac{a+b}{2})$, so to get the error I find $(b-a) f(\frac{a+b}{2}) - \int_a^bf(x)dx$. If I expand this using Taylor's theorem I get: $ \begin{aligned} error & =(b-a) f(x_0) - \int_a^bf(x_0)+\frac{f'(\xi)(x-x_0)}{2}dx \\ & =\frac{f'(\xi)}{2}\int_a^b(x-x_0)dx \\ & =\frac{f'(\xi)}{2}\int_a^b(x-\frac{a+b}{2})dx \\ & = 0 \end{aligned}$ So apparently I have just proven that it has zero error? Any hints as to what I did wrong? (I realize that since Wikipedia gives it in terms of $f''$ I probably want to take the expansion one level further to match them, but I don't understand why this doesn't work.)
Comment (Hans Lundmark): Since $\xi$ depends on $x$, you can't take $f'(\xi)$ outside of the integral.
Accepted answer (truncated in this copy): You are right that you need one m
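For reference, a sketch of how the standard derivation is usually completed (this is the common textbook argument, not a reconstruction of the truncated answer above): expand to second order, keeping the Lagrange remainder inside the integral,
$$f(x)=f(x_0)+f'(x_0)(x-x_0)+\tfrac{1}{2}f''(\xi_x)(x-x_0)^2,$$
and integrate over $[a,b]$. The linear term vanishes by symmetry about $x_0=\tfrac{a+b}{2}$, and since $(x-x_0)^2\ge 0$ the mean value theorem for integrals gives
$$\int_a^b f(x)\,dx-(b-a)f(x_0)=\tfrac{1}{2}f''(\xi)\int_a^b (x-x_0)^2\,dx=\frac{(b-a)^3}{24}f''(\xi)$$
for some $\xi\in(a,b)$, which is the $h^3 f''(\xi)$ behaviour quoted from Wikipedia.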
There are many reasons why Error In The Midpoint Rule happen, including having malware, spyware, or programs not installing properly. You can have all kinds of system conflicts, registry errors, and Active X errors. Reimage specializes in Windows repair. It scans and diagnoses, then repairs, your damaged PC with technology that not only fixes your Windows Operating System, but also reverses the damage already done with a full database of replacement files.
A FREE Scan (approx. 5 minutes) into your PC's Windows Operating System detects problems divided into 3 categories - Hardware, Security and Stability. At the end of the scan, you can review your PC's Hardware, Security and Stability in comparison with a worldwide average. You can review a summary of the problems detected during your scan. Will Reimage fix my Error In The Midpoint Rule problem? There's no way to tell without running the program. The state of people's computers varies wildly, depending on the different specs and software they're running, so even if reimage could fix Error In The Midpoint Rule on one machine doesn't necessarily mean it will fix it on all machines. Thankfully it only takes minutes to run a scan and see what issues Reimage can detect and fix.
A Windows error is an error that happens when an unexpected condition occurs or when a desired operation has failed. When you have an error in Windows, it may be critical and cause your programs to freeze and crash or it may be seemingly harmless yet annoying.
A
stop error screen or bug check screen, commonly called a blue screen of death (also known as a BSoD, bluescreen), is caused by a fatal system error and is the error screen displayed by the Microsoft Windows family of operating systems upon encountering a critical error, of a non-recoverable nature, that causes the system to "crash".
One of the biggest causes of DLL's becoming corrupt/damaged is the practice of constantly installing and uninstalling programs. This often means that DLL's will get overwritten by newer versions when a new program is installed, for example. This causes problems for those applications and programs that still need the old version to operate. Thus, the program begins to malfunction and crash.
Computer hanging or freezing occurs when either a program or the whole system ceases to respond to inputs. In the most commonly encountered scenario, a program freezes and all windows belonging to the frozen program become static. Almost always, the only way to recover from a system freeze is to reboot the machine, usually by power cycling with an on/off or reset button.
Once your computer has been infected with a virus, it's no longer the same. After removing it with your anti-virus software, you're often left with lingering side-effects. Technically, your computer might no longer be infected, but that doesn't mean it's error-free. Even simply removing a virus can actually harm your system.
Reimage repairs and replaces all critical Windows system files needed to run and restart correctly, without harming your user data. Reimage also restores compromised system settings and registry values to their default Microsoft settings. You may always return your system to its pre-repair condition.
Reimage patented technology, is the only PC Repair program of its kind that actually reverses the damage done to your operating system. The online database is comprised of over 25,000,000 updated essential components that will replace any damaged or missing file on a Windows operating system with a healthy version of the file so that your PC's performance, stability & security will be restored and even improve. The repair will deactivate then quarantine all Malware found then remove virus damage. All System Files, DLLs, and Registry Keys that have been corrupted or damaged will be replaced with new healthy files from our continuously updated online database.
To Fix (Error In The Midpoint Rule) you need to follow the steps below:
Step 1: Download Error In The Midpoint Rule Repair Tool Step 2: Click the " Scan" button Step 3: Click ' Fix All' and the repair is complete. Windows Operating Systems:
Compatible with Windows XP, Vista, Windows 7 (32 and 64 bit), Windows 8 & 8.1 (32 and 64 bit), Windows 10 (32/64 bit).
|
It is demonstrated that the trace of the square of the electromagnetic tensor satisfies: $$ \mathrm{Tr}\,{F}^2_{\mu\nu}=\frac{2}{c^{2}}(E^2-c^2B^2). $$ Proof: $F_{\mu\nu}=-F_{\nu\mu}$, hence $$ \mathrm{Tr}\,{F}^2_{\mu\nu}=\sum_{\mu}\left(F^{2}\right)_{\mu\mu}=-\sum_{\mu\nu}F_{\mu\nu}F_{\nu\mu}=-\sum_{\mu\nu}F_{\mu\nu}^{2}= $$ $$ =-2\left[B_{1}^{2}+B_{2}^{2}+B_{3}^{2}-\frac{1}{c^{2}}\left(E_{1}^{2}+E_{2}^{2}+E_{3}^{2}\right)\right]= $$
$$=-\frac{2}{c^{2}}\left(B^2-\frac{E^2}{c^{2}}\right)=\frac{2}{c^{2}}\left(E^2-c^{2}B^2\right)$$
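A quick numerical sanity check of this identity (a sketch, not from the original post; it assumes SI units, the metric signature $(+,-,-,-)$, the conventions $F_{0i}=E_i/c$ and $F_{ij}=-\epsilon_{ijk}B_k$, and that the trace is taken of the mixed tensor $F^\mu{}_\nu$, i.e. one index is raised with the metric before squaring):
import numpy as np

c = 3.0e8
E = np.array([1.0, -2.0, 0.5])            # arbitrary test electric field (V/m)
B = np.array([2e-9, 1e-9, -3e-9])         # arbitrary test magnetic field (T)

eta = np.diag([1.0, -1.0, -1.0, -1.0])    # metric, signature (+,-,-,-)

# Covariant field tensor F_{mu nu}
F = np.zeros((4, 4))
F[0, 1:] = E / c
F[1:, 0] = -E / c
F[1, 2], F[1, 3], F[2, 3] = -B[2], B[1], -B[0]
F[2, 1], F[3, 1], F[3, 2] = B[2], -B[1], B[0]

M = eta @ F                               # mixed tensor F^mu_nu
lhs = np.trace(M @ M)
rhs = 2.0 / c**2 * (E @ E - c**2 * (B @ B))
print(lhs, rhs, np.isclose(lhs, rhs))     # the two values agree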
I have also seen this explanation of the Lorentz invariant $E^2-c^2B^2$:
Further, on the page "Why is this invariant in Relativity: $E^2−c^2B^2$?" there is only limited mathematical and physical information about the following relationships:
$E^2-c^2B^2=0$
$E^2-c^2B^2>0$
$E^2-c^2B^2<0$
For item 2: $E^2-c^2B^2>0$ in $\Sigma$. Then there is a reference frame $\Sigma'$ in which $\overline{B}'=\textbf{0}$, i.e. the interaction is purely electric. Why? For item 1: $E^2-c^2B^2=0$ in $\Sigma$ is the case of a plane wave: why? We can also say that if we have a plane wave in an inertial frame $\Sigma$, we will still find a plane wave in any other inertial frame $\Sigma'$. For item 3: $E^2-c^2B^2<0$ in $\Sigma$. Both $\overline{E}$ and $\overline{B}$ are different from zero in every reference frame (otherwise both would have to be null and there would be no electromagnetic wave). Is a current-carrying wire an example? Is that correct, and why?
|
I'm trying to better understand the causes for the equation of time by deriving an approximation from first principles.
My naive approach, $EOT_{NAIVE}$, is to take the difference between the right ascension of the mean sun, $\alpha_M(t)$, and the right ascension of the "real" sun $$EOT_{NAIVE}(t)=\langle\dot{\alpha }\rangle\cdot(t-t_0)-\alpha(t)$$ where $\alpha(t)$ is simply the "actual" position of the sun at time $t$ from ephemeris data (e.g. either of the RA values reported by JPL HORIZONS, or a similar source) and where $\langle\dot{\alpha }\rangle = 24/365.242$ and $t_0=t:\alpha_M(t_0)=0$.
To confirm that this is about right, I compare my result with what I get using the USNO definition of GMST $$EOT_{GMST}(t)=GMST(t)-(t-t_{noon})-\alpha(t)$$ and with $EOT_{POLY}$, a standard polynomial expression from Dershowitz & Reingold as described in The Clock of the Long Now documentation.
But the values I get for $EOT_{NAIVE}$ (blue dashed line) are consistently about $7.4$ minutes greater than these reference methods (exactly $7.4537$ for $EOT_{GMST}$, red line; and within a couple seconds of that for $EOT_{POLY}$, gray outline) give:
Surprisingly, I can fix this by simply changing $t_0$ from the date, $t_E$, on which the most recent vernal equinox occurred (2012-03-20T05:14:33Z) — which I had assumed would give a "starting" RA of 0 — to 2012-03-22T02:20:32.41Z, which brings my approximation (blue line) exactly in register with the GMST approach (gray band), and within a couple seconds of the polynomial approach (red line), as can be seen by taking the difference between the GMST approach (gray band) and each of these approaches (note the change of scale, this figure essentially takes the difference between $EOT_{GMST}$ and each of the plots in the first figure above, and "zooms in" on the gray band):
Why should changing the date in this way "fix" my approximation? Why doesn't my approximation work with $t_0=t_E$? Why does it work perfectly with a different date? Are the approaches I'm using as references ($EOT_{GMST}$ and $EOT_{POLY}$) the right ones for this kind of exploration?
Note that the reported RA of the sun at 2012-03-22T02:20:32.41Z is $0.1146^h$, and that the fit between $EOT_{GMST}$ and $EOT_{NAIVE}$ using that date is quite good (this figure zooms in further on the gray band in the second figure above):
Note also that despite the documentation for the ephemeris I use claiming that "light time delay is not included", the values I get match those from JPL's "airless apparent right ascension and declination of the target center with respect to the Earth's true-equator and the meridian containing the Earth's true equinox-of-date. Corrected for light-time, the gravitational deflection of light, stellar aberration, precession and nutation."
|
Help:Wikitext examples For basic information see Help:Editing. Contents Basic text formatting[edit]
You can format the page using Wikitext special characters.
What it looks like What you type
You can
3 apostrophes will
(Using 4 apostrophes doesn't do anything special --
You can ''italicize'' text by putting 2 apostrophes on ''each'' side. 3 apostrophes will '''bold''' the text. 5 apostrophes will '''''bold and italicize''''' the text. (Using 4 apostrophes doesn't do anything special -- <br /> 3 of them '''bold''' the text as usual; the others are ''''just'''' apostrophes around the text.)
A single newline generally has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the diff function (used internally to compare different versions of a page).
But an empty line starts a new paragraph.
When used in a list, a newline does affect the layout (see below).
A single newline generally has no effect on the layout. These can be used to separate sentences within a paragraph. Some editors find that this aids editing and improves the ''diff'' function (used internally to compare different versions of a page). But an empty line starts a new paragraph. When used in a list, a newline ''does'' affect the layout ([[#lists|see below]]).
You can break lines without starting a new paragraph. Please use this sparingly.
Please do not start a link or italics or bold text on one line and end it on the next.
You can break lines<br/> without a new paragraph.<br/> Please use this sparingly. Please do not start a link or ''italics'' or '''bold''' text on one line and end on the next. You should "sign" your comments on talk pages: You should "sign" your comments on talk pages: * Three tildes gives your signature: ~~~ * Four tildes give your signature plus date/time: ~~~~ * Five tildes gives the date/time alone: ~~~~~ [edit]
You can use some HTML tags, too. However, you should avoid HTML in favor of Wiki markup whenever possible.
What it looks like What you type
Put text in a
Put text in a <kbd>monospace ('typewriter') font</kbd>. The same font is generally used for <code> computer code</code>. <strike>Strike out</strike> or <u>underline</u> text, or write it <span style= "font-variant:small-caps"> in small caps</span>.
Superscripts and subscripts: X², H₂O
Superscripts and subscripts: X<sup>2</sup>, H<sub>2</sub>O <center>Centered text</center> * Please note the American spelling of "center". * This is how to {{Font color||yellow|highlight part of a sentence}}. <blockquote> The '''blockquote''' command ''formats'' block quotations, typically by surrounding them with whitespace and a slightly different font. </blockquote>
Invisible comments to editors (<!-- -->) appear only while editing the page.
Invisible comments to editors (<!-- -->) appear only while editing the page. <!-- Note to editors: blah blah blah. --> Organizing your writing[edit] See also: w:Picture tutorial
What it looks like What you type Subsection
Using more "equals" (=) signs creates a subsection.
A smaller subsection
Don't skip levels, like from two to four equals signs.
Start with 2 equals signs, not 1. If you use only 1 on each side, it will be the equivalent of h1 tags, which should be reserved for page titles.
== Section headings == ''Headings'' organize your writing into sections. The ''Wiki'' software can automatically generate a [[help:Section|table of contents]] from them. === Subsection === Using more "equals" (=) signs creates a subsection. ==== A smaller subsection ==== Don't skip levels, like from two to four equals signs. Start with 2 equals signs, not 1. If you use only 1 on each side, it will be the equivalent of h1 tags, which should be reserved for page titles.
A newline in a list marks the end of the list.
* ''Unordered lists'' are easy to do: ** Start every line with a asterisk. *** More asterisks indicate a deeper level. *: Previous item continues. ** A newline * in a list marks the end of the list. *Of course you can start again.
A newline marks the end of the list.
# ''Numbered lists'' are: ## Very organized ## Easy to follow A newline marks the end of the list. # New numbering starts with 1.
Here's a definition list:
Begin with a semicolon. One item per line; a newline can appear before the colon, but using a space before the colon improves parsing.
Here's a ''definition list'': ; Word : Definition of the word ; A longer phrase needing definition : Phrase defined ; A word : Which has a definition : Also a second definition : And even a third Begin with a semicolon. One item per line; a newline can appear before the colon, but using a space before the colon improves parsing. * You can even do mixed lists *# and nest them *# inside each other *#* or break lines<br>in lists. *#; definition lists *#: can be *#:; nested : too
A newline starts a new paragraph.
: A colon (:) indents a line or paragraph. A newline starts a new paragraph. Should only be used on talk pages. For articles, you probably want the blockquote tag. : We use 1 colon to indent once. :: We use 2 colons to indent twice. ::: 3 colons to indent 3 times, and so on.
You can make horizontal dividing lines to separate text.
But you should usually use sections instead, so that they go in the table of contents.
You can make ''horizontal dividing lines'' (----) to separate text. ---- But you should usually use sections instead, so that they go in the table of contents.
You can add footnotes to sentences using the ref tag -- this is especially good for citing a source.
You can add footnotes to sentences using the ''ref'' tag -- this is especially good for citing a source. :There are over six billion people in the world.<ref>CIA World Factbook, 2006.</ref> References: <references/> For details, see [[Wikipedia:Footnotes]] and [[Help:Footnotes]]. Links[edit]
You will often want to make clickable
links to other pages.
What it looks like What you type Here's a link to a page named [[Official positions|Official position]]. You can even say [[official positions]] and the link will show up correctly.
You can put formatting around a link. Example:
You can put formatting around a link. Example: ''[[Wikipedia]]''. The ''first letter'' of articles is automatically capitalized, so [[wikipedia]] goes to the same place as [[Wikipedia]]. Capitalization matters after the first letter.
Intentionally permanent red link is a page that doesn't exist yet. You could create it by clicking on the link.
[[Intentionally permanent red link]] is a page that doesn't exist yet. You could create it by clicking on the link.
You can link to a page section by its title:
If multiple sections have the same title, add a number. #Example section 3 goes to the third section named "Example section".
You can link to a page section by its title: * [[Doxygen#Doxygen Examples]]. If multiple sections have the same title, add a number. [[#Example section 3]] goes to the third section named "Example section".
You can make a link point to a different place with a piped link. Put the link target first, then the pipe character "|", then the link text.
Or you can use the "pipe trick" so that a title that contains disambiguation text will appear with more concise link text.
You can make a link point to a different place with a [[Help:Piped link|piped link]]. Put the link target first, then the pipe character "|", then the link text. * [[Help:Link|About Links]] * [[List of cities by country#Morocco|Cities in Morocco]] Or you can use the "pipe trick" so that a title that contains disambiguation text will appear with more concise link text. * [[Spinning (textiles)|]] * [[Boston, Massachusetts|]]
You can make an external link just by typing a URL: http://www.nupedia.com
You can give it a title: Nupedia
Or leave the title blank: [1]
External link can be used to link to a wiki page that cannot be linked to with [[page]]: http://meta.wikimedia.org/w/index.php?title=Fotonotes&oldid=482030#Installation
You can make an external link just by typing a URL: http://www.nupedia.com You can give it a title: [http://www.nupedia.com Nupedia] Or leave the title blank: [http://www.nupedia.com] External link can be used to link to a wiki page that cannot be linked to with <nowiki>[[page]]</nowiki>: http://meta.wikimedia.org/w/index.php?title=Fotonotes &oldid=482030#Installation Linking to an e-mail address works the same way: mailto:someone@example.com or [mailto:someone@example.com someone]
You can redirect the user to another page.
#REDIRECT [[Official positions|Official position]]
Category links do not show up in line but instead at page bottom, and cause the page to be listed in the category.
Add an extra colon to link to a category in line without causing the page to be listed in the category:
[[Help:Category|Category links]] do not show up in line but instead at page bottom ''and cause the page to be listed in the category.'' [[Category:English documentation]] Add an extra colon to ''link'' to a category in line without causing the page to be listed in the category: [[:Category:English documentation]]
The Wiki reformats linked dates to match the reader's date preferences. These three dates will show up the same if you choose a format in your Preferences:
The Wiki reformats linked dates to match the reader's date preferences. These three dates will show up the same if you choose a format in your [[Special:Preferences|]]: * [[1969-07-20]] * [[July 20]], [[1969]] * [[20 July]] [[1969]] Just show what I typed[edit]
A few different kinds of formatting will tell the Wiki to display things as you typed them.
What it looks like What you type
The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: →
<nowiki> The nowiki tag ignores [[Wiki]] ''markup''. It reformats text by removing newlines and multiple spaces. It still interprets special characters: → </nowiki> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → <pre> The pre tag ignores [[Wiki]] ''markup''. It also doesn't reformat text. It still interprets special characters: → </pre>
Leading spaces are another way to preserve formatting.
Putting a space at the beginning of each line stops the text from being reformatted. It still interprets Wiki markup and special characters: →
Leading spaces are another way to preserve formatting. Putting a space at the beginning of each line stops the text from being reformatted. It still interprets [[Wiki]] ''markup'' and special characters: →
Source code[edit]
If the syntax highlighting extension is installed, you can display programming language source code in a manner very similar to the HTML
<pre> tag, except with the type of syntax highlighting commonly found in advanced text editing software.
List of supported languages: http://pygments.org/languages/
Here's an example of how to display some C# source code:
<source lang="csharp"> // Hello World in Microsoft C# ("C-Sharp"). using System; class HelloWorld { public static int Main(String[] args) { Console.WriteLine("Hello, World!"); return 0; } } </source>
Results in:
// Hello World in Microsoft C# ("C-Sharp").
using System;

class HelloWorld
{
    public static int Main(String[] args)
    {
        Console.WriteLine("Hello, World!");
        return 0;
    }
}
Images, tables, video, and sounds[edit] This is a very quick introduction. For more information, see: Help:Images and other uploaded files, for how to upload files; w:en:Wikipedia:Extended image syntax, for how to arrange images on the page; Help:Table, for how to create a table.
After uploading, just enter the filename, highlight it and press the "embedded image"-button of the edit_toolbar.
This will produce the syntax for uploading a file
[[Image:filename.png]]
What it looks like What you type
A picture, including alternate text:
You can put the image in a frame with a caption:
A picture, including alternate text: [[Image:Wiki.png|This is Wiki's logo]] You can put the image in a frame with a caption: [[Image:Wiki.png|frame|This is Wiki's logo]]
A link to Wikipedia's page for the image: Image:Wiki.png
Or a link directly to the image itself: Media:Wiki.png
A link to Wikipedia's page for the image: [[:Image:Wiki.png]] Or a link directly to the image itself: [[Media:Wiki.png]] Use media: links to link
directly to sounds or videos: A sound file
Use '''media:''' links to link directly to sounds or videos: [[media:Classical guitar scale.ogg|A sound file]]
Provide a spoken rendition of some text in a template: {{listen |title = Flow, my tears |filename = Flow, my tears.ogg |filesize = 1.41 MB }}
|<span style="border:5px double black">'''Text In a Box'''</span> |
{| border="10" cellspacing="5" cellpadding="10" align="center" |- ! This ! is |- | a | table |} Galleries[edit] Main article: w:Gallery tag
Images can also be grouped into galleries using the
<gallery> tag, such as the following:
Links can be put in captions.
Mathematical formulae[edit]
You can format mathematical formulae with TeX markup.
What it looks like What you type
<math>\sum_{n=0}^\infty \frac{x^n}{n!}</math>
Templates[edit]
Templates are segments of Wiki markup that are meant to be copied automatically ("transcluded") into a page. You add them by putting the template's name in {{double braces}}. It is also possible to transclude other pages by using {{:colon and double braces}}.
Some templates take
parameters, as well, which you separate with the pipe character.
What it looks like What you type {{Transclusion demo}} {{Help:Transclusion Demo}}
This template takes two parameters, and creates underlined text with a hover box for many modern browsers supporting CSS:
Go to this page to see the H:title template itself: {{H:title}}
This template takes two parameters, and creates underlined text with a hover box for many modern browsers supporting CSS: {{H:title|This is the hover text| Hover your mouse over this text}} Go to this page to see the H:title template itself: {{tl|H:title}}
|
I just read a topic about ergodicity, but I am still unsure about its intuitive meaning. What does it mean that (for the mean) the statistical average equals the time average? Could you please explain it in detail? Thanks.
If you sample a random process for a specific t, you will get one realization of a random variable. For another t, you get another realization of that random variable. This random variable has its statistics which is almost impossible to learn in real world because not all sample paths are observable. See the brown rectangle in the figure below.
That becomes possible in ergodic processes. An ergodic process is the one where the time average of a sample path is the same as the statistical mean. See the purple line in the figure below, which was the observed sample path such as a particular realization of noise in a communications receiver.
An eternal, well-balanced die has probability 1/6 for each face $f$ on every throw. This uniform probability law yields an expected value of 3.5: $\sum_{f=1}^{6} \frac{1}{6}\times f$.
Each cast of the die carries this same expectation. Of course, for each throw you'll only get an integer 1, 2, 3, 4, 5, or 6, never a decimal like 3.5. So there is an apparent mismatch between what you can expect (in probability) and what you get (actually), a mismatch that relates the (theoretical) probability space and the (real) time space.
The hypothesis of ergodicity may reconcile the two aspects: it tells you that, averaging over a sufficient number of trials in time, you can get the same results as if you were capable of throwing an infinity of dice at the same time.
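A quick numerical illustration of this reconciliation (a sketch using NumPy; the sample sizes are arbitrary):
import numpy as np

rng = np.random.default_rng(0)

# Time average: one "die" thrown many times, averaged along its single path.
one_path = rng.integers(1, 7, size=100_000)
time_average = one_path.mean()

# Ensemble (statistical) average: many dice thrown once each, averaged across paths.
many_dice = rng.integers(1, 7, size=100_000)
ensemble_average = many_dice.mean()

print(time_average, ensemble_average)  # both approach the theoretical mean 3.5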
But remember that it is a hypothesis about processes, and that non-ergodic phenomena exist.
|
I really can't figure out how to do this at all. I've been trying to show this for nearly 4 hours now. I've tried working from $\tanh(z)=\frac{\sinh(z)}{\cosh(z)}$ and expanding the top and bottom, but that just becomes a mess that, after trying so hard to put into the desired form, didn't work. I also tried working from the identity
$$\tanh(z)=\tanh(x+iy)=\frac{\tanh(x)+\tanh(iy)}{1+\tanh(x)\tanh(iy)}$$
but I wasn't able to put that into the desired form as well. I've tried working from the right side to the left, using every formula for $\sin(2y)$, $\sinh(2x)$, $\cosh(2x)$, and $\cos(2y)$ I could derive/find. I even tried multiplying the right-side by $\coth(z)$ and working it out to show that it's equal to $1$, but that didn't work.
Could you please give me some hints?
|
Ex.14.1 Q1 Statistics Solution - NCERT Maths Class 10 Question
A survey was conducted by a group of students as a part of their environment awareness programme, in which they collected the following data regarding the number of plants in \(20\) houses in a locality. Find the mean number of plants per house.
Number of plants \(0 - 2\) \(2 - 4\) \(4 - 6\) \(6 - 8\) \(8 - 10\) \(10 -12\) \(12 - 14\)
Number of houses \(1\) \(2\) \(1\) \(5\) \(6\) \(2\) \(3\)
Which method did you use for finding the mean, and why?
Text Solution What is known?
The number of plants in \(20\) houses in a locality.
What is unknown?
The mean number of plants per house and the method used for finding the mean.
Reasoning:
We can solve this question by any method of finding mean but here we will use direct method to solve this question because the data given is small.
The mean (or average) of observations, as we know, is the sum of the values of all the observations divided by the total number of observations.
We know that if \(\ x_1, x_2, \ldots, x_n \) are observations with respective frequencies \(f_{1}, f_{2}, \ldots, f_{n}\) then this means observation \(\ x_1 \text { occurs } f_1 \text { times, } x_2 \text { occurs } f_2\) times, and so on.
\(\ x\) is the class mark for each interval, you can find the value of \(\ x\) by using
\[\begin{align} \text{class mark} \left(x_i\right) =\frac{\text { upper limit + lower limit }}{2}\end{align}\]
Now, the sum of the values of all the observations \(=\ f_1 \ x_1+\ f_2 x_2+\ldots+\ f_n \ x_n\) and the number of observations \(=f_1+f_2+\ldots+f_n\).
So, the mean of the data is given by
\[\begin{align}\ \overline x=\frac{f_{1} x_{1}+f_{2} x_{2}+\cdots \ldots \ldots+f_{n} x_{n}}{f_{1}+f_{2}+\cdots \ldots \ldots+f_{n}}\end{align}\]
\(\begin{align}\overline x=\frac{\sum f_{i} x_{i}}{\Sigma f_{i}}\end{align}\) where \(i\) varies from \(1\) to \(n\)
Steps:
Number of plants | Number of houses \((f_i)\) | \(x_i\) | \(f_ix_i\)
\(0 - 2\) | \(1\) | \(1\) | \(1\)
\(2 - 4\) | \(2\) | \(3\) | \(6\)
\(4 - 6\) | \(1\) | \(5\) | \(5\)
\(6 - 8\) | \(5\) | \(7\) | \(35\)
\(8 - 10\) | \(6\) | \(9\) | \(54\)
\(10 - 12\) | \(2\) | \(11\) | \(22\)
\(12 - 14\) | \(3\) | \(13\) | \(39\)
Total | \(\Sigma f_i = 20\) | | \(\Sigma f_ix_i = 162\)
From the table it can be observed that,
\[\begin{align}\Sigma f_i=20, \\ \Sigma f_ix_i =162\end{align}\]
\[\begin{align} \text { Mean } \overline x &=\frac{\Sigma f_i x_i}{\Sigma f_i} \\ &=\frac{162}{20} \\ &=8.1 \end{align}\]
Thus, the mean number of plants per house is \(8.1.\)
Here, we have used the direct method because the value of \(x_i\) and \(f_i\) are small.
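As a cross-check of the arithmetic, here is a small Python sketch of the direct method using the frequencies and class marks from the table above (the variable names are mine):
# Direct method: mean = sum(f_i * x_i) / sum(f_i)
intervals = [(0, 2), (2, 4), (4, 6), (6, 8), (8, 10), (10, 12), (12, 14)]
frequencies = [1, 2, 1, 5, 6, 2, 3]                     # number of houses f_i

class_marks = [(lo + hi) / 2 for lo, hi in intervals]   # x_i
total_fx = sum(f * x for f, x in zip(frequencies, class_marks))
total_f = sum(frequencies)

print(total_fx, total_f, total_fx / total_f)            # 162, 20, 8.1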
|
Electronic Journal of Probability Electron. J. Probab. Volume 22 (2017), paper no. 99, 23 pp. Harmonic moments and large deviations for a supercritical branching process in a random environment Abstract
Let $(Z_n)_{n\geq 0}$ be a supercritical branching process in an independent and identically distributed random environment $\xi =(\xi _n)_{n\geq 0}$. We study the asymptotic behavior of the harmonic moments $\mathbb{E} \left [Z_n^{-r} | Z_0=k \right ]$ of order $r>0$ as $n \to \infty $, when the process starts with $k$ initial individuals. We exhibit a phase transition with the critical value $r_k>0$ determined by the equation $\mathbb E p_1^k(\xi _0) = \mathbb E m_0^{-r_k},$ where $m_0=\sum _{j=0}^\infty j p_j (\xi _0)$, $(p_j(\xi _0))_{j\geq 0}$ being the offspring distribution given the environment $\xi _0$. Contrary to the constant environment case (the Galton-Watson case), this critical value is different from that for the existence of the harmonic moments of $W=\lim _{n\to \infty } Z_n / \mathbb E (Z_n|\xi ).$ The aforementioned phase transition is linked to that for the rate function of the lower large deviation for $Z_n$. As an application, we obtain a lower large deviation result for $Z_n$ under weaker conditions than in previous works and give a new expression of the rate function. We also improve an earlier result about the convergence rate in the central limit theorem for $W-W_n,$ and find an equivalence for the large deviation probabilities of the ratio $Z_{n+1} / Z_n$.
Article information
Source: Electron. J. Probab., Volume 22 (2017), paper no. 99, 23 pp.
Dates: Received 26 August 2016; Accepted 26 May 2017; First available in Project Euclid 16 November 2017
Permanent link: https://projecteuclid.org/euclid.ejp/1510802253
Digital Object Identifier: doi:10.1214/17-EJP71
Mathematical Reviews number (MathSciNet): MR3724567
Zentralblatt MATH identifier: 06827076
Subjects: Primary 60J80 (Branching processes), 60K37 (Processes in random environments), 60J05 (Discrete-time Markov processes on general state spaces); Secondary 60J85 (Applications of branching processes), 92D25 (Population dynamics)
Citation
Grama, Ion; Liu, Quansheng; Miqueu, Eric. Harmonic moments and large deviations for a supercritical branching process in a random environment. Electron. J. Probab. 22 (2017), paper no. 99, 23 pp. doi:10.1214/17-EJP71. https://projecteuclid.org/euclid.ejp/1510802253
|
I assume from your solution that your time series closely fits a sine wave. The method you are using is quite good if you restrict yourself to the set of data which is near the peaks.
The more standard way is to find a best fit sinusoid using linear algebra techniques. You have stated the period, so you know the frequency. Let's assume you don't know where the zero crossings are. The problem then becomes to find the best values of $(a,b)$ so that
$$ Y = a C + b S $$
Where $Y$ is your signal values as a vector, $C$ is a cosine curve, and $S$ is a sine curve over one period. Now dot this equation with $C$ and then $S$ to get:
$$ Y \cdot C = a C \cdot C + b S \cdot C $$
$$ Y \cdot S = a C \cdot S + b S \cdot S $$
The dot products are scalars. Since $S$ and $C$ are orthogonal, their dot product is zero. The dot product of $S$ or $C$ with itself is $N/2$ where $N$ is the sample count. The equations then become
$$ Y \cdot C = a \cdot N/2 $$
$$ Y \cdot S = b \cdot N/2 $$
Solve for $a$ and $b$.
$$ a = \frac{ Y \cdot C }{ N/2 } $$
$$ b = \frac{ Y \cdot S }{ N/2 } $$
Your amplitude can now be found from $a$ and $b$.
$$ A = \sqrt{a^2+b^2} $$
This method is equivalent to a single bin of a Discrete Fourier Transform (DFT), so if you follow this then you understand how a DFT works.
To calculate the dot product, you take a summation:
$$ Y \cdot C = \sum_{n=0}^{N-1} { Y[n] C[n] } $$
$$ Y \cdot S = \sum_{n=0}^{N-1} { Y[n] S[n] } $$
That's all there is to it.
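Here is a short Python sketch of the procedure above (the synthetic test signal, its amplitude and noise level are assumptions made for illustration; the variable names follow the notation above):
import numpy as np

# Synthetic test signal: one period of a sinusoid with known amplitude plus noise
N = 1000
n = np.arange(N)
true_amplitude, true_phase = 2.5, 0.7
Y = true_amplitude * np.cos(2 * np.pi * n / N + true_phase) + 0.1 * np.random.randn(N)

# Reference cosine and sine over exactly one period
C = np.cos(2 * np.pi * n / N)
S = np.sin(2 * np.pi * n / N)

# Solve for a and b via the dot products (S and C are orthogonal over one period)
a = np.dot(Y, C) / (N / 2)
b = np.dot(Y, S) / (N / 2)

A = np.sqrt(a**2 + b**2)   # estimated amplitude
print(A)                   # should be close to 2.5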
|
There are a few things that confuse me in your comments and question. Instead of "I would like to find the posterior density of the parameters to conduct Gibbs sampling," I assume you mean that you would like to conduct Gibbs sampling in order to sample from $p(x_{1:t},\theta|y_{1:t})$ and thereby have draws from $p(\theta|y_{1:t})$.
Gibbs amounts to alternately drawing $\theta^i \sim p(\theta|x_{1:t},y_{1:t})$ and then $x_{1:t} \sim p(x_{1:t}|\theta,y_{1:t})$. Doing this gives you draws from $p(x_{1:t},\theta|y_{1:t})$. You can integrate out $x_{1:t}$ if you're only interested in $p(\theta|y_{1:t})$ (throw away parts of the joint samples). You don't want to deal with $p(\theta|y_{1:t})$ directly at all, really; it's intractable.
Both parts are relatively easy, though, because both of these distributions are available in closed form. Drawing paths can be accomplished by taking means and covariances from the Kalman smoother (not the filter, another thing that was mistaken in your comments) and using those to draw from a big multivariate normal. Page 391 of the book I linked below mentions the forward-backward algorithm of Frühwirth-Schnatter, S. (1994), "Data Augmentation and Dynamic Linear Models," for doing this.
The other part you need to draw from $p(\theta|x_{1:t},y_{1:t}) \propto \pi(\theta)p(y_{1:t},x_{1:t}|\theta)$. I am assuming it's available in closed form, although I haven't worked it out because you haven’t given the priors, yet.
Check out page 390 and 391 of http://www.springer.com/us/book/9781441978646 for more details.
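To make the alternation concrete, here is a deliberately tiny Python sketch (a single latent Gaussian state $x$ and a single mean parameter $\theta$ with known variances, chosen so that both conditionals are available in closed form; this illustrates only the Gibbs structure, not the forward-backward path sampling described above):
import numpy as np

rng = np.random.default_rng(1)

# Toy model (all variances known, chosen for illustration only):
#   theta ~ N(0, tau2),  x | theta ~ N(theta, 1),  y | x ~ N(x, 1)
tau2 = 4.0
y = 1.5                       # one observed data point

n_iter = 5000
theta, x = 0.0, 0.0           # arbitrary starting values
theta_draws = np.empty(n_iter)

for i in range(n_iter):
    # 1) draw theta | x   (normal-normal conjugacy)
    var_t = tau2 / (tau2 + 1.0)
    theta = rng.normal(var_t * x, np.sqrt(var_t))
    # 2) draw x | theta, y (two normal "observations" of x: theta and y)
    x = rng.normal((theta + y) / 2.0, np.sqrt(0.5))
    theta_draws[i] = theta

print(theta_draws[500:].mean())   # approximate posterior mean of theta given y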
|
Lecture: HGX205, M 18:30-21
Section: HGW2403, F 18:30-20 Exercise 01 Prove that \(\neg\Box(\Diamond\varphi\wedge\Diamond\neg\varphi)\) is equivalent to \(\Box\Diamond\varphi\rightarrow\Diamond\Box\varphi\). What have you assumed? Define strategy and winning strategy for modal evaluation games. Prove the Key Lemma: \(M,s\vDash\varphi\) iff V has a winning strategy in \(G(M,s,\varphi)\). Prove that modal evaluation games are determined, i.e. either V or F has a winning strategy.
And all exercises for Chapter 2 (see page 23,
open minds) Exercise 02 Let \(T\) with root \(r\) be the tree unraveling of some possible world model, and \(T’\) be the tree unraveling of \(T,r\). Show that \(T\) and \(T’\) are isomorphic. Prove that the union of a set of bisimulations between \(M\) and \(N\) is a bisimulation between the two models. We define the bisimulation contraction of a possible world model \(M\) to be the “quotient model”. Prove that the relation that links every world \(x\) in \(M\) to its equivalence class \([x]\) is a bisimulation between the original model and its bisimulation contraction.
And exercises for Chapter 3 (see page 35,
open minds): 1 (a) (b), 2. Exercise 03 Prove that modal formulas (under possible world semantics) have ‘Finite Depth Property’.
And exercises for Chapter 4 (see page 47,
open minds): 1 – 3. Exercise 04 Prove the principle of Replacement by Provable Equivalents: if \(\vdash\alpha\leftrightarrow\beta\), then \(\vdash\varphi[\alpha]\leftrightarrow\varphi[\beta]\). Prove the following statements. “For each formula \(\varphi\), \(\vdash\varphi\) is equivalent to \(\vDash\varphi\)” is equivalent to “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable”. “For every set of formulas \(\Sigma\) and formula \(\varphi\), \(\Sigma\vdash\varphi\) is equivalent to \(\Sigma\vDash\varphi\)” is equivalent to “for every set of formulas \(\Sigma\), \(\Sigma\) being consistent is equivalent to \(\Sigma\) being satisfiable”. Prove that “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable” using the finite version of Henkin model.
And exercises for Chapter 5 (see page 60,
open minds): 1 – 5. Exercise 05
Exercises for Chapter 6 (see page 69,
open minds): 1 – 3. Exercise 06 Show that “being equivalent to a modal formula” is not decidable for arbitrary first-order formulas.
Exercises for Chapter 7 (see page 88,
open minds): 1 – 6. For exercise 2 (a) – (d), replace the existential modality E with the difference modality D. In clause (b) of exercise 4, “completeness” should be “correctness”. Exercise 07 Show that there are infinitely many non-equivalent modalities under T. Show that GL + Id is inconsistent and Un proves GL. Give a complete proof of the fact: in S5, every formula is equivalent to one of modal depth \(\leq 1\).
Exercises for Chapter 8 (see page 99,
open minds): 1, 2, 4 – 6. Exercise 08 Let \(\Sigma\) be a set of modal formulas closed under substitution. Show that \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W,R,V’),w\vDash\Sigma\] holds for any valuations \(V\) and \(V’\). Define a \(p\)-morphism between \((W,R),w\) and \((W’,R’),w’\) as a “functional bisimulation”, namely a bisimulation regardless of valuation. Show that if there is a \(p\)-morphism between \((W,R),w\) and \((W’,R’),w’\), then for any valuations \(V\) and \(V’\), we have \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W’,R’,V’),w\vDash\Sigma.\]
Exercises for Chapter 9 (see page 99,
open minds). Exercise the last
Exercises for Chapter 10 and 11 (see page 117 and 125,
open minds).
|
J/Ψ production and nuclear effects in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at root √s=7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at s√ = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
|
According to me, $\ce{Fe^{2+}}$ should be a better reducing agent because $\ce{Fe^2+}$ - after being oxidized - will attain a stable $\ce{d^5}$ configuration, whereas $\ce{Cr^2+}$ will attain a $\ce{d^3}$ configuration. I think the half filled $\ce{d^5}$ configuration is more stable than the $\ce{d^3}$ configuration. Why is this not so?
The short answer is thermodynamics. Reduction with $\ce{Cr^2+}$ must be more exergonic than reduction with $\ce{Fe^2+}$, we'll get to some numbers in a bit, but let's deal with the concept.
It is tricky to compare the "stability" of two possible products that occur from different pathways. What is more important for the spontaneity of the reaction is the change in (free) energy that occurs from start to finish.
To put it another way, it may be that $\ce{Fe^3+}$ is more stable than $\ce{Cr^3+}$ on an absolute scale, but what we really care about is how much more stable $\ce{Cr^3+}$ is to $\ce{Cr^2+}$ compared to how much more stable $\ce{Fe^3+}$ is to $\ce{Fe^2+}$.
Let's examine all four using electron configuration as you have done:
$\ce{Fe^2+}$ is $\ce{d^6}$ or more probably $\ce{[Ar] 4s^1 3d^5}$ - two half-filled half shells! $\ce{Fe^3+}$ is $\ce{d^5}$, which is $\ce{[Ar] 3d^5}$ - one filled half shell.
The difference between the two iron ions might not be that large.
$\ce{Cr^2+}$ is $\ce{d^4}$, which is $\ce{[Ar] 3d^4}$ or $\ce{[Ar] 4s^2 3d^2}$ $\ce{Cr^3+}$ is $\ce{d^3}$, which is $\ce{[Ar] 3d^3}$ or $\ce{[Ar] 4s^2 3d^1}$
The energy difference between chromium ions might be larger. Represented graphically, the reaction coordinate energy diagram for the two process might be:
Now the number part.
We can go looking for some standard thermodynamic data to help make our case. The best data are for standard reduction potentials, because these data are for exactly what we want!
Taking the reduction potential data and writing the equations in the direction we care about, we have:
$$\begin{align} &\ce{Fe^2+ -> Fe^3+} &&&E^\circ=\pu{-0.77 V}\\ &\ce{Cr^2+ -> Cr^3+} &&&E^\circ=\pu{+0.44 V} \end{align}$$
Spontaneous reactions produce positive potential differences, so we can see right now that $\ce{Cr^2+}$ is a better reducing agent.
Let's go a step farther to free energy.
$$\Delta G^\circ =-nFE^\circ$$
However, to deal with free energy, we need a full reaction. Technically, comparing the two half-reactions in isolation is just as bad. However, the definition of the standard electrode potential and the standard free energy come with a common zero point in terms of half-reaction: $\ce{2H+ + 2e- -> H2}$.
The two full reactions (net ionic equations anyway) are:
$$\begin{align} &\ce{2Fe^2+ + 2H+ -> 2Fe^3+ + H2} &&E^\circ=\pu{-0.77 V}&&&\Delta G^\circ=\pu{+74kJ/mol}\\ &\ce{2Cr^2+ + 2H+ -> 2Cr^3+ + H2} &&E^\circ=\pu{+0.44 V}&&&\Delta G^\circ = \pu{-42 kJ/mol} \end{align}$$
And the winner is chromium, by a whopping $\pu{116 kJ/mol}$. In fact, the data might show that $\ce{Fe^2+}$ is more stable than $\ce{Fe^3+}$, while $\ce{Cr^2+}$ is less stable than $\ce{Cr^3+}$.
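A quick numerical check of these figures (my own sketch, not part of the original answer; it assumes one electron transferred per mole of metal ion oxidized and F = 96485 C/mol):

# Check that the quoted potentials reproduce the quoted free energies.
F = 96485      # Faraday constant, C per mol of electrons
n = 1          # electrons per mole of metal ion oxidized (assumption)

for label, E in [("Fe2+ -> Fe3+", -0.77), ("Cr2+ -> Cr3+", +0.44)]:
    dG = -n * F * E / 1000    # Delta G in kJ per mole of metal ion
    print(f"{label}: E = {E:+.2f} V, dG = {dG:+.0f} kJ/mol")
# Expected output: about +74 kJ/mol for iron and -42 kJ/mol for chromium,
# a difference of roughly 116 kJ/mol in chromium's favour.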
In water both $\ce{Cr^{3+}}$ and $\ce{Fe^{3+}}$ are in octahedral configurations. This means that the $d$-orbitals become unequal in energy; specifically, 3 of them are lower and 2 are higher. This is the premise of 'crystal field theory'.
Depending on the nearest neighbourhood, the splitting may be strong enough to force electron pairing, or it may not. For water it usually isn't. This means that the 'first half-filled shell' here is 3 electrons: $\ce{V^{2+}}$, $\ce{Cr^{3+}}$ or $\ce{Mn^{4+}}$. The second half-filled shell in a low field (water ligand) is $\ce{Mn^{2+}}$ or $\ce{Fe^{3+}}$ (5 $d$-orbitals), both surprisingly stable. In a strong field (say, a $\ce{CN^-}$ ligand), it is 6 electrons (3 doubly occupied lower orbitals), like in $\ce{[Co(NH3)_{6}]^{3+}}$ and $\ce{[Fe(CN)_{6}]^{4-}}$. The next "subshell" is 8 electrons (6 in the 3 lower orbitals and 2 in the higher ones), like in $\ce{Ni^{2+}}$. Depending on the strength of the ligands, an unusual square planar coordination may become preferable, with the two higher orbitals also splitting. It is typical for the $\ce{Ni}$ subgroup in the +2 oxidation state and the $\ce{Cu}$ subgroup in the +3 oxidation state.
TL;DR : invest some time into reading about crystal field theory.
It is quite easy: a good reducing agent is one that can be oxidised easily. For:
$\ce{Cr^2+ -> Cr^3+}$, we have a $\mathrm d^3$ configuration for $\ce{Cr^3+}$, which we can say is stable, as all three electrons are in the $\mathrm{t_{2g}}$ level, hence it is half-filled. On the other hand, for:
$\ce{Fe^2+ -> Fe^3+}$, we have a $\mathrm d^5$ configuration for $\ce{Fe^3+}$, which is again stable due to half-filled d orbitals. So, which is more stable? We can figure this out from the $\ce{M^3+/M^2+}$ standard electrode potentials: for $\ce{Cr^3+/Cr^2+}$ it is $\pu{-0.41 V}$, and for $\ce{Fe^3+/Fe^2+}$ it is $\pu{+0.77 V}$.
The positive value indicates that it is difficult to remove an electron from $\ce{Fe^2+}$, and the negative value indicates that it is easier to remove an electron from $\ce{Cr^2+}$. Hence, $\ce{Cr^2+}$ is the stronger reducing agent.
$\ce{Cr^2+}$ is a better reducing agent, as it attains a $\ce d^3$ configuration on losing an electron while $\ce{Fe^2+}$ attains a $\ce d^5$ configuration. In an aqueous medium $\ce d^3$ (half-filled $\ce{t_{2g}}$ orbitals) is more stable than $\ce d^5$.
|
Proof:
a, f has a limit everywhere, and it is $0$.
It is enough to show it on $(-1,1)$. Let $a \in (-1,1)$ be arbitrary.
First, we need that for every $\epsilon > 0$ there exists a $\delta > 0$ such that $0 < |x-a| < \delta$ implies $|f(x)-0| < \epsilon$. Choose $N > \frac{1}{\epsilon}$; then it is enough to ensure $|f(x)| \le \frac{1}{N}$. If $x$ is irrational this holds trivially, since $f(x)=0$. If $x$ is rational, $x=\frac{p}{q}$ in lowest terms, we need $q > N$. For each $q \le N$ the points $\frac{p}{q}$ divide $(-1,1)$ into intervals of length $\frac{1}{q}$, so there are only finitely many rationals in $(-1,1)$ with denominator at most $N$; choose $\delta$ smaller than the distance from $a$ to the nearest of them (other than $a$ itself).
b, f is continuous at every irrational point.
If $a$ is irrational, $f(a)=\lim_{x\rightarrow a} f(x) = 0.$
c, f isn't continuous at any rational point.
If $a$ is rational, $f(a) \ne\lim_{x\rightarrow a} f(x) = 0.$
Consequence: Our function is bounded and its set of discontinuities is $\mathbb{Q}$. Since $\mathbb{Q}$ has Lebesgue measure $0$, the function is Riemann-integrable. It can be proven that there is no function which is continuous at the rational points and discontinuous at the irrational points.
Hint: This function is called the Riemann function (also known as Thomae's function).
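A small numerical sanity check of part a (my own sketch; it assumes the usual definition $f(p/q)=1/q$ in lowest terms and $f(x)=0$ for irrational $x$, and the names below are mine):

# Near an irrational point a, all rationals p/q with small denominator q <= N
# stay a positive distance away, so f(x) <= 1/N on a small neighbourhood of a.
from fractions import Fraction
from math import sqrt

a, N = sqrt(2) - 1, 10          # an irrational point in (-1, 1), and a cutoff 1/N
low_denominator = [Fraction(p, q) for q in range(1, N + 1)
                   for p in range(-q, q + 1)]   # all p/q in [-1, 1] with q <= N
delta = min(abs(a - float(r)) for r in low_denominator)
# Within distance delta of a, every rational has denominator > N, so f(x) <= 1/N.
print(delta > 0)                 # True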
|
Let $X(f)$ denote the Fourier transform of $x(t)$, where $$\begin{align*}X(f) &= \int_{-\infty}^{\infty} x(t) \exp(-j2\pi ft) \mathrm dt\\x(t) &= \int_{-\infty}^{\infty} X(f) \exp(+j2\pi ft) \mathrm df\end{align*}$$ which I will denote via $x(t) \leftrightarrow X(f)$. The following transform pairs will be needed in what follows. $$\begin{align*}\delta(t) &\leftrightarrow 1\\\delta(t-t_0) &\leftrightarrow \exp(-j2\pi f \,t_0)\\\sum_{n=-\infty}^{\infty}\delta(t-nT) &\leftrightarrow\frac{1}{T}\sum_{k=-\infty}^{\infty}\delta\left(f-\frac{k}{T}\right)\\\end{align*}$$
Given a signal $x(t)$, its sampled pulse train (at intervals of $T$ seconds) is $$x(t) \sum_{n=-\infty}^{\infty}\delta(t-nT) = \sum_{n=-\infty}^{\infty} x(t)\delta(t-nT) = \sum_{n=-\infty}^{\infty} x(nT)\delta(t-nT).$$ Since multiplication in the time domain corresponds to convolution in the frequency domain, we have $$\begin{align*}x(t)\sum_{n=-\infty}^{\infty}\delta(t-nT) &\leftrightarrow X(f)\circledast\frac{1}{T}\sum_{k=-\infty}^{\infty}\delta\left(f-\frac{k}{T}\right)\\&\leftrightarrow \frac{1}{T}\sum_{k=-\infty}^{\infty}X(f)\circledast\delta\left(f-\frac{k}{T}\right)\\&\leftrightarrow \frac{1}{T}\sum_{k=-\infty}^{\infty}\int_{-\infty}^{\infty} X(f-w)\delta\left(w-\frac{k}{T}\right) \mathrm dw\\x(t)\sum_{n=-\infty}^{\infty}\delta(t-nT)&\leftrightarrow \frac{1}{T}\sum_{k=-\infty}^{\infty}X\left(f-\frac{k}{T}\right) = \hat{X}(f)\end{align*}$$ Thus, the Fourier transform of the impulse train formed by sampling $x(t)$ at $T$ second intervals is $\hat{X}(f)$, which is obtained by repeating $X(f)$ along the $f$ axis at intervals of $T^{-1}$ Hz and summing the result. Furthermore, $\hat{X}(f)$ is a periodic function of the frequency variable $f$ with period $T^{-1}$ Hz. That is, for all $f$, $$\hat{X}\left(f + \frac{1}{T}\right) = \hat{X}(f).$$ Note that all of this holds regardless of what $X(f)$ is: $X(f)$ could be nonzero for all $f$, and the result would still be valid.
Now suppose that $X(f)$ is zero if $f < a$ or $f > b$. Then $\hat{X}(f)$, obtained by repeating $X(f)$ periodically along the $f$ axis, is nonzero only in the intervals $$\ldots, \left[a-\frac{2}{T}, b-\frac{2}{T}\right], \left[a-\frac{1}{T}, b-\frac{1}{T}\right], [a, b], \left[a+\frac{1}{T}, b+\frac{1}{T}\right], \left[a+\frac{2}{T}, b+\frac{2}{T}\right], \ldots$$ and so if $$b -\frac{1}{T} < a \Rightarrow b-a < \frac{1}{T},$$ that is, if the support $b-a$ of $X(f)$ is smaller than the repetition interval $T^{-1}$ Hz, then the repetitions of $X(f)$ do not overlap. Indeed, as the OP's figures show, for a real-valued signal whose spectrum extends from $a = -f_0$ to $b = f_0$ (support of $X(f)$ is of length $2f_0$), sampling $x(t)$ at intervals of $T = (3f_0)^{-1}$ (and thus repeating $X(f)$ at intervals of $3f_0$ Hz on the frequency axis) leads to no overlap, while sampling $x(t)$ at intervals of $T = (1.5f_0)^{-1}$ leads to overlap of the repetitions of $X(f)$.
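A short numerical illustration of this overlap criterion (my own sketch, not part of the original answer; the triangular spectrum is just a convenient stand-in for a band-limited $X(f)$, and the function names are mine):

# Build hat{X}(f) = (1/T) * sum_k X(f - k/T) for a spectrum supported on
# [-f0, f0], comparing 1/T = 3*f0 (no overlap) with 1/T = 1.5*f0 (overlap).
import numpy as np

f0 = 1.0
X = lambda f: np.maximum(1 - np.abs(f)/f0, 0.0)     # triangular spectrum on [-f0, f0]
f = np.linspace(-4*f0, 4*f0, 4001)

def replicate(fs, K=20):
    """(1/T) * sum_k X(f - k*fs), truncated to |k| <= K, with fs = 1/T."""
    return fs * sum(X(f - k*fs) for k in range(-K, K + 1))

for fs in (3*f0, 1.5*f0):
    Xhat = replicate(fs)
    band = np.abs(f) <= f0
    # With no overlap, Xhat/fs restricted to [-f0, f0] equals X exactly.
    err = np.max(np.abs(Xhat[band]/fs - X(f[band])))
    print(f"1/T = {fs}: max in-band distortion = {err:.3f}")
# Expected: ~0 for 1/T = 3*f0, and a nonzero value (about 0.5) for 1/T = 1.5*f0.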
Finally, the OP asks about negative frequencies and the interpretation of the pictures which show positive frequencies only. For a real-valued signal, $X(f)$ has conjugate symmetry, meaning that $X(-f) = X^*(f)$, and so specifying $X(f)$ for positive values of $f$ suffices. In any case, the pictures he is looking at are of $|X(f)|$ and $|\hat{X}(f)|$, which are even functions of $f$, and so showing only the positive axis saves space, though it does make the pictures look a bit lopsided since only half the lobe is shown at low frequencies. For complex-valued signals, $X(f)$ does not have conjugate symmetry and $|X(f)|$ need not be an even function of $f$, and so the whole axis would need to be shown. But the general development above is still applicable, and we still need to sample at a rate exceeding $b-a$ Hz, and it helps to keep in mind that in this general case, each sample is actually two real numbers, not one, since we are sampling a complex-valued signal.
|
Preprints (red series) of the Department of Mathematics. Keyword: Brownian motion (2 entries)
303
We show that the intersection local times \(\mu_p\) on the intersection of \(p\) independent planar Brownian paths have an average density of order three with respect to the gauge function \(r^2\pi\cdot (log(1/r)/\pi)^p\), more precisely, almost surely, \[ \lim\limits_{\varepsilon\downarrow 0} \frac{1}{log |log\ \varepsilon|} \int_\varepsilon^{1/e} \frac{\mu_p(B(x,r))}{r^2\pi\cdot (log(1/r)/\pi)^p} \frac{dr}{r\ log (1/r)} = 2^p \mbox{ at $\mu_p$-almost every $x$.} \] We also show that the lacunarity distributions of \(\mu_p\), at \(\mu_p\)-almost every point, is given as the distribution of the product of \(p\) independent gamma(2)-distributed random variables. The main tools of the proof are a Palm distribution associated with the intersection local time and an approximation theorem of Le Gall.
296
We show that the occupation measure on the path of a planar Brownian motion run for an arbitrary finite time interval has an average density of order three with respect to the gauge function t^2 log(1/t). This is a surprising result as it seems to be the first instance where gauge functions other than t^s and average densities of order higher than two appear naturally. We also show that the average density of order two fails to exist and prove that the density distributions, or lacunarity distributions, of order three of the occupation measure of a planar Brownian motion are gamma distributions with parameter 2.
|
Your reasoning is correct, but you need to go a little further to deduce the answer.
The question does not say but we must assume that the mean position O is the same for P and Q. The phase difference between them remains the same, but the separation changes.
Suppose that, at some instant, P and Q are moving in the same direction. When the separation is maximum they are moving with the same speed. If the speed of one were greater than that of the other, they would be getting closer together or further apart. For each particle, speed decreases with distance from O in the same manner, because they have the same amplitude. Whichever is closer to O moves faster. They will have the same speed when they are the same distance from O. So at maximum separation they will be positioned symmetrically about O.
If the maximum separation is $A\sqrt2$ then each is $\frac{A\sqrt2}{2}=\frac{A}{\sqrt2}$ from O. The phase of each is then $\phi$ where
$A\sin\phi=\frac{A}{\sqrt2}$, so $\sin\phi=\frac{1}{\sqrt2}=\sin45^{\circ}$ and $\phi=45^{\circ}$.
So the phase difference at maximum separation (and at all separations) is $2\phi=90^{\circ}$.
Alternative solution :
$x_1=A\sin(\omega t+\phi)$
$x_2=A\sin(\omega t)$
$x_1-x_2=2A\cos(\frac{\omega t+\phi+\omega t}{2})\sin(\frac{\omega t+\phi - \omega t}{2})=2A\cos(\omega t+\frac12 \phi)\sin(\frac12 \phi)$.
$\sin(\frac12 \phi)$ is a constant, so the maximum possible value of $x_1-x_2$ occurs when $\cos(\omega t+\frac12 \phi)=1$. Then
$2A\sin(\frac12 \phi)= A\sqrt2$, so $\sin(\frac12 \phi)=\frac{\sqrt2}{2}=\frac{1}{\sqrt2}=\sin 45^{\circ}$, giving $\frac12\phi=45^{\circ}$ and hence $\phi=90^{\circ}$.
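A quick numerical check of this result (my own sketch, assuming $A = 1$ and $\omega = 1$):

# Verify that a 90 degree phase difference gives a maximum separation of A*sqrt(2).
import numpy as np

A, omega, phi = 1.0, 1.0, np.pi/2
t = np.linspace(0, 2*np.pi, 100001)
x1 = A*np.sin(omega*t + phi)
x2 = A*np.sin(omega*t)
print(np.max(np.abs(x1 - x2)), A*np.sqrt(2))   # both approximately 1.41421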
|
The cost of boundary controllability for a parabolic equation with inverse square potential
Institut de Mathématiques de Toulouse, UMR CNRS 5219, Université Paul Sabatier Toulouse Ⅲ, 118 route de Narbonne, 31 062 Toulouse Cedex 4, France
The paper concerns the $1$-D singular parabolic equation
$$ u_t -u_{xx} - \frac{\mu}{x^2} u = 0, \qquad x\in (0,1), \ t \in (0,T), $$
with parameter $\mu \leq 1/4$ (equivalently $\mu \in (-\infty, 1/4]$), boundary control acting at $x = 1$, and an arbitrary time $T>0$.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C35.
Citation: Patrick Martinez, Judith Vancostenoble. The cost of boundary controllability for a parabolic equation with inverse square potential. Evolution Equations & Control Theory, 2019, 8 (2): 397-422. doi: 10.3934/eect.2019020
|
I am struggling with proving this question
Show that if $a_n \leq x$ for all $n \geq 1$, then (the formal limit when n goes to infinity) $\lim_{n\rightarrow \infty} a_n \leq x$.
I am assuming by contradiction that $\lim_{n \rightarrow \infty}a_n > x$.
Then there exists a rational $q$ between them s.t. $x < q < \lim_{n \rightarrow \infty}a_n$. Then $x = \lim x < q = \lim q < \lim a_n$, but I could not reach a contradiction from this. On the other hand, I am thinking of proving it by writing $x = \lim b_n$, and then if $a_n > b_n$ then we will have $x = \lim b_n < \lim a_n \leq \lim x = x$, which is a contradiction. So what is the better way to prove it? Thanks for any help.
|
Several recent hep-ph papers studied grand unification without supersymmetry, an approach that seems intriguing to a whole group of researchers. Grand unification theories (GUT) work nicely with supersymmetry (SUSY) – it's the supersymmetric version of the \(SU(5)\) and larger models that achieves the accurate unification of gauge couplings of the \(U(1)_Y\), \(SU(2)_W\), and \(SU(3)_c\) factors of the gauge group in the simplest, most natural way.
Super-Kamiokande has looked for a proton decay, a flagship prediction of grand unified theories. However, SUSY hasn't been experimentally proven yet so it's a legitimate possibility – at least from a phenomenologist's viewpoint – that SUSY isn't relevant for any low-energy phenomena (or isn't relevant for Nature at all). The Standard Model is a bit awkward, with its diverse groups and fragmented representations, so one should better unify those structures a little bit. The people behind the papers below tend to assume that Nature needs a grand unified theory, the symmetry breaking is achieved by the ordinary Higgs mechanism (pure field theory, no stringy Wilson lines etc.), and a precision gauge coupling unification is achieved in some way, too. They typically require a dark matter candidate, too: it's usually an axion. GUT theories may imply lots of unobserved decays of particles (especially the proton decay) so the null results of all these experiments kill many GUT models and constrain the parameters of others. Yesterday, C.P. Martin of Madrid released the paper
It seems to me that these people love to write the phrase "Clifford algebra" or the Greek letter \(\Gamma\) and that's it. It doesn't seem to me that anything linked to this mathematical structure is "exploited" in any physical way at all. At most, Martin noticed that the representations used in the model he promotes may be found in the decomposition of a bispinor of \(SO(10)\) – I mean the tensor product of a Dirac spinor with itself. That's great, it's a justification making these representations natural – but why is there so much ado about the "Clifford algebra" there? I have no clue.
Martin's paper uses the May 2013 model by Guido Altarelli and Davide Meloni (AM),
AM are extending the Standard Model, and a possible strategy for presenting this model is the bottom-up approach, i.e. one starting at low energies.
First, let us look at the gauge groups and their breaking patterns.
Below the electroweak scale \(v=246\GeV\), we have the gauge group\[
SU(3)_c\times U(1)_{\rm em},
\] the QCD color group and the electromagnetic Abelian group. As we know very well, this is just an unbroken subgroup of a larger group operating above \(v=246\GeV\), namely the Standard Model group\[
G_{SM} = SU(3)_c\times SU(2)_W\times U(1)_Y
\] which includes the electroweak \(SU(2)\) and the hypercharge \(Y\). The breaking from the Standard Model group to the QCD+electromagnetism group is achieved by a Higgs doublet. This statement is pretty much an experimental fact by now, a Nobel-prize-winning one. In this grand unified model building, we ultimately want an \(SO(10)\) symmetry at really high scales so all fields, including the Higgs fields, have to be embedded into full representations of \(SO(10)\).
The simplest representation of \(SO(10)\) that provides us with the Nobel-prize-winning Higgs doublets is a \({\bf 10}\), the fundamental vector of \(SO(10)\), decomposing as\[
{\bf 10}_H \to ({\bf 1},{\bf 2},{\bf 2}) \oplus ({\bf 6},{\bf 1},{\bf 1})
\] under the Standard Model group. The symbol for the singlets and doublets is obvious; the 6-dimensional representation is the symmetric tensor of \(SU(3)\) (note that due to the complexity of the group, the "trace" cannot be separated like in \(SO(3)\)).
Great. Now we're above \(v=246\GeV\) and continue to raise the energy. AM are telling us that there is an intermediate scale, something like \(10^{10}\GeV\), where a larger group gets restored. The intermediate energy scale may be adjusted so that the gauge coupling unification is restored with the same precision we know from the simple SUSY GUTs. This is a bit ugly – we have to adjust one real parameter (the energy scale) to guarantee one real condition (the second coupling and the third coupling unify with the first one at the same energy scale) – but there's no guarantee that all types of unification that Nature manages are "breathtakingly beautiful and rigid".
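As a rough illustration of why such an adjustment is needed (my own back-of-the-envelope sketch, not from the AM paper; the one-loop coefficients, the approximate inputs at the Z mass, and the neglect of all thresholds are simplifying assumptions), one can run the three inverse couplings in the pure Standard Model and see that they fail to meet at a single scale:

# One-loop SM running: alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - (b_i/2pi)*ln(mu/M_Z),
# with GUT-normalized coefficients b = (41/10, -19/6, -7).
import numpy as np

b = np.array([41/10, -19/6, -7.0])
alpha_inv_MZ = np.array([59.0, 29.6, 8.5])   # approximate alpha_{1,2,3}^{-1}(M_Z)
MZ = 91.2                                    # GeV

for mu in (1e10, 2e16):
    alpha_inv = alpha_inv_MZ - b/(2*np.pi)*np.log(mu/MZ)
    print(f"mu = {mu:.0e} GeV: alpha^-1 =", np.round(alpha_inv, 1))
# The three values never all coincide with pure SM running, which is why the
# intermediate Pati-Salam scale has to be adjusted in this class of models.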
Fine. So above the intermediate scale, the gauge group becomes larger, namely\[
G_{PS} = SU(4)_c \times SU(2)_L \times SU(2)_R.
\] We have enhanced the color group to \(SU(4)\), i.e. added the fourth basic color (it's like going from RGB to CMYK, if you like silly analogies). The electroweak \(SU(2)_W\) group was kept and just renamed to \(SU(2)_L\) because it couples to the left-handed fermions. And the hypercharge was extended to an \(SU(2)_R\) group where "R" stands for "right".
The group above has been known as the Pati-Salam group. It's a "partial unification" group because the number of factors is the same as it is in the Standard Model but at least the smaller two factors became isomorphic to one another and may be related by a discrete symmetry.
This Pati-Salam group is broken to the Standard Model by some Higgs fields analogous to those in the Standard Model. The quartic and quadratic interactions are analogous, just the group theory is a bit more complicated. The required Higgses transform as \[
\overline{\bf 126}.
\] What is this representation? Well, consider the antisymmetric tensor field with five indices, \({\bf 5}\wedge {\bf 5}\wedge {\bf 5}\wedge {\bf 5}\wedge {\bf 5}\), if you wish. Its dimension is \[
\frac{10\times 9\times 8\times 7\times 6}{5\times 4\times 3\times 2\times 1} = 252
\] but one may also "Hodge-dualize" this tensor field with a 10-index "epsilon symbol" which allows us to split this 252-dimensional representation to two 126-dimensional ones, the self-dual and the anti-self-dual ones, which are complex conjugate to each other.
This tensor field \(T_{abcde}\) may remember which 5 of the 10 (complexified) directions in the \(SO(10)\) vector are the "holomorphic" directions of the fundamental 5-dimensional representation of \(SU(5)\) and which of them are the antifundamental ones. Consequently, this field might break \(SO(10)\) to \(SU(5)\).
However, when we start with the Pati-Salam group instead of \(SO(10)\), it's also able to break the group to the Standard Model group. The group theory is not too complicated and we actually need a Higgs field in another representation, \({\bf 45}\), as well. This is the antisymmetric tensor with two indices, \({\bf 10}\wedge {\bf 10}\), if you wish, with dimension \(10\times 9/2\times 1 = 45\). Such a field simply picks a preferred complex 2-plane inside the 4-dimensional space symmetric under \(SO(4)=SU(2)\times SU(2)\) and this complex 2-plane is the one that preserves the \(SU(2)_L\) symmetry.
Finally, there's another scale, the highest one: the GUT scale. Above the scale, \(SO(10)\) is restored. Beneath the scale, it is broken to the Pati-Salam group. The breaking is done by \({\bf 210}_H\), an antisymmetric tensor with four indices whose dimension is understandably\[
\frac{10\times 9\times 8\times 7}{4\times 3\times 2\times 1} = 210.
\] Now, there is no Hodge duality. Note that all these antisymmetric tensor products of copies of \({\bf 10}\) may be found in the tensor product of two spinors, \({\bf 16}\otimes {\bf 16}\), of \(SO(10)\), except that sometimes we need the same chirality and sometimes we need the opposite chirality of the two 16-dimensional spinors. That's also the case of the 120-dimensional antisymmetric tensor with three indices that is not used in this construction.
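As a small aside (my own illustration, not from the papers), the dimensions quoted above are just binomial coefficients, which a few lines of Python confirm:

# The rank-k antisymmetric tensor representation of SO(10) has dimension C(10, k);
# the k = 5 case splits into self-dual and anti-self-dual halves of 126 each.
from math import comb

for k in (2, 3, 4, 5):
    d = comb(10, k)
    note = " -> 126 + 126 (self-dual + anti-self-dual)" if k == 5 else ""
    print(f"rank-{k} antisymmetric tensor of SO(10): dim = {d}{note}")
# rank-2: 45, rank-3: 120, rank-4: 210, rank-5: 252 -> 126 + 126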
It's easy to understand why \(SO(10)\) may be broken to the Pati-Salam group by the antisymmetric tensor with four indices. It defines a volume form for 4-dimensional submanifolds of the 10-dimensional space so it splits "ten" to "four plus six" and \(SO(6)=SU(4)\) while \(SO(4)=SU(2)\times SU(2)\) which produce all the factors of the Pati-Salam group.
So the representations used in the AM paper have dimensions \(10,16,45,126,210\) where \(16\) is the usual "single generation of fermions" including an active right-handed neutrino. The 120-dimensional representation isn't used in this particular grand unified model (it is used in other \(SO(10)\) models, however) while the 45-dimensional representation is used and produces an axion, a dark matter candidate. If dark matter is composed of axions only, the underground experiments will probably find nothing, at least for quite some time.
They discuss many features of the model and show it is viable. They also identify 15 high-scale parameters that are pretty much manifested as 15 parameters of the Standard Model. (My understanding is that the Higgs quartic couplings etc. are assumed to be irrelevant in all this fitting; they don't seem to know anything about their values.) Their best fit (see an appendix in that paper) leaves no freedom and you may see that the Standard Model with the required values of the parameters may be realized within their model. Some of the "best fit" values are extremely unnatural (tiny) numbers, highlighting the fact that grand unification without SUSY seems more awkward than one with SUSY.
However, it seems that aside from the gauge coupling unification and a dark matter candidate, they also manage to suppress all the dangerous decays of various particles that rule out many other generic GUT models.
Today, a Spanish-Portuguese-Czech collaboration (Carolina Arbeláez, Martin Hirsch, Michal Malinský, Jorge C. Romão: AHMR; I have never met Michal Malinský, I believe) released another non-supersymmetric \(SO(10)\) grand unified paper,
AM adjusted the Pati-Salam scale to be an intermediate one; it was needed for gauge coupling unification. AHMR do something else. They show that in many models, the Pati-Salam scale may be brought down, close to the LHC scale so that there's no continuous adjustment needed to restore the gauge coupling unification.
However, what's needed as an extra price to pay are new multiplets of matter – which should or might be accessible by the LHC. They seem to conclude that to guarantee a long enough lifetime of the proton that is compatible with the current lower bound of order \(10^{34}\,{\rm years}\), or at least to respect this lower bound "safely", most of their models need to add new colored states at the LHC scale. That would be very exciting, of course (although arguably less exciting than an experimental discovery of SUSY), if such new GUT-predicted states were found.
|
The Problem
Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A
combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]
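As a concrete illustration (my own sketch, not part of the wiki page; the function names are mine), here is a brute-force way to enumerate the combinatorial lines of [math][3]^n[/math] and test small sets for line-freeness:

# Enumerate combinatorial lines via templates over {1,2,3,x} with >= 1 wildcard.
from itertools import product

def combinatorial_lines(n):
    """Yield each combinatorial line of [3]^n as a frozenset of 3 strings."""
    for template in product("123x", repeat=n):
        if "x" not in template:
            continue
        yield frozenset(
            "".join(c if c != "x" else fill for c in template)
            for fill in "123"
        )

def is_line_free(subset, n):
    """True if `subset` (a set of strings over {1,2,3}) contains no line."""
    s = set(subset)
    return not any(line <= s for line in combinatorial_lines(n))

# The template "xx" produces the line {"11", "22", "33"}, so that set is not line-free.
assert not is_line_free({"11", "22", "33"}, 2)
# A two-element set cannot contain a line of three points.
assert is_line_free({"11", "22"}, 2)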
The original proof of DHJ used arguments from ergodic theory. The basic problem to be consider by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.
Threads
(1-199) A combinatorial approach to density Hales-Jewett (inactive)
(200-299) Upper and lower bounds for the density Hales-Jewett problem (active)
(300-399) The triangle-removal approach (inactive)
(400-499) Quasirandomness and obstructions to uniformity (final call)
(500-599) TBA
(600-699) A reading seminar on density Hales-Jewett (active)
A spreadsheet containing the latest lower and upper bounds for [math]c_n[/math] can be found here.
Unsolved questions
Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, “Can an approach of the following kind work?”) and write them up in a single post. If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose.
IP-Szemeredi (a weaker problem than DHJ)
Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn’t work here, then it won’t work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by d numbers by taking all the [math]2^d[/math] possible sums. So, if the n numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^n[/math].)
Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any c-dense subset of the Cartesian product of an IP_d set (it is a two dimensional pointset) has a corner.
The statement is true. One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for [math]k=4[/math]. (I will sketch the simple proof later) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines, having intersection with the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our c-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma.
Finally, let me prove that there is a square if [math]d[/math] is large enough compared to [math]c[/math]. Every point of the Cartesian product has two coordinates, a 0,1 sequence of length [math]d[/math]. It has a one to one mapping to [math][4]^d[/math]; Given a point [math]((x_1,…,x_d),(y_1,…,y_d))[/math] where [math]x_i,y_j[/math] are 0 or 1, it maps to [math](z_1,…,z_d)[/math], where [math]z_i=0[/math] if [math]x_i=y_i=0[/math], [math]z_i=1[/math] if [math]x_i=1[/math] and [math]y_i=0[/math], [math]z_i=2[/math] if [math]x_i=0[/math] and [math]y_i=1[/math], and finally [math]z_i=3[/math] if [math]x_i=y_i=1[/math]. Any combinatorial line in [math][4]^d[/math] defines a square in the Cartesian product, so the density HJ implies the statement.
Gowers.7: With reference to Jozsef’s comment, if we suppose that the d numbers used to generate the set are indeed independent, then it’s natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d. Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product.
This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do.
I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler.
Gowers.22: A slight variant of the problem you propose is this. Let’s take as our ground set the set of all pairs (U,V) of subsets of \null [n], and let’s take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. It seems to be a nice combination of Sperner’s theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U, (V\cup D)\setminus C), where D is disjoint from both U and V and C is contained in both U and V. That is your original problem I think.
I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of \null [n]. We join U\in X to V\in Y if (U,V)\in A.
Ah, I see now that there’s a problem with what I’m suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it’s not really a set-theoretic reformulation after all.
O'Donnell.35: Just to confirm I have the question right…
There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the 6 bits
[ x_i x'_i ]
[ y_i y'_i ]
[ z_i z'_i ]
are equal to one of the following (each column read top to bottom):
[ 0 0 ]  [ 0 0 ]  [ 0 1 ]  [ 1 0 ]  [ 1 1 ]  [ 1 1 ]
[ 0 0 ], [ 0 1 ], [ 0 1 ], [ 1 0 ], [ 1 0 ], [ 1 1 ],
[ 0 0 ]  [ 1 0 ]  [ 0 1 ]  [ 1 0 ]  [ 0 1 ]  [ 1 1 ]
?
McCutcheon.469: IP Roth:
Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$.
Presumably, this should be (perhaps much) simpler than DHJ, k=3.
High-dimensional Sperner
Kalai.29: There is an analogue for Sperner but with high-dimensional combinatorial spaces instead of "lines", but I do not remember the details (Kleitman(?) Katona(?), those are the usual suspects.)
Fourier approach
Kalai.29: A sort of generic attack one can try with Sperner is to look at f=1_A and express using the Fourier expansion of f the expression \int f(x)f(y)1_{x<y} where x<y is the partial order (=containment) for 0-1 vectors. Then one may hope that if f does not have a large Fourier coefficient then the expression above is similar to what we get when A is random and otherwise we can raise the density for subspaces. (OK, you can try it directly for the k=3 density HJ problem too but Sperner would be easier;) This is not unrelated to the regularity philosophy.
Gowers.31: Gil, a quick remark about Fourier expansions and the k=3 case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again.
The problem was that the natural Fourier basis in \null [3]^n was the basis you get by thinking of \null [3]^n as the group \mathbb{Z}_3^n. And if that’s what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that n is a multiple of 7, and you look at the set A of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that’s a multiple of 7, from which it follows that the third point automatically lies in the line. So this set A has too many combinatorial lines. But I’m fairly sure — perhaps you can confirm this — that A has no large Fourier coefficient.
You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset W of \null[n] and just ask that the numbers of 0s, 1s and 2s inside W are multiples of 7.
DHJ for dense subsets of a random set
Tao.18: A sufficiently good Varnavides type theorem for DHJ may have a separate application from the one in this project, namely to obtain a “relative” DHJ for dense subsets of a sufficiently pseudorandom subset of {}[3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of “dual functions” for Hales-Jewett), and so this is probably a bit off-topic.
Bibliography H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119.
|
Steady-state continuity equation on an extruded mesh¶
This demo showcases the use of extruded meshes, including the new regions of integration and the construction of sophisticated finite element spaces.
We now consider the equation
in a domain \(\Omega\), where \(\vec{u}\) is a prescribed vector field, and \(q\) is an unknown scalar field. The value of \(q\) is known on the ‘inflow’ part of the boundary \(\Gamma\), where \(\vec{u}\) is directed towards the interior of the domain. \(q\) can be interpreted as the steady-state distribution of a passive tracer carried by a fluid with velocity field \(\vec{u}\).
We apply an upwind DG method, as we saw in the previous example. Denoting the upwind value of \(q\) on interior facets by \(\widetilde{q}\), the full set of equations are then
We will take the domain \(\Omega\) to be the cuboid \(\Omega = [0,1] \times [0,1] \times [0,0.2]\). We will use the uniform velocity field \(\vec{u} = (0, 0, 1)\). \(\Gamma_\mathrm{inflow}\) is therefore the base of the cuboid, while \(\Gamma_\mathrm{outflow}\) is the top. The four vertical sides can be ignored, since \(\vec{u} \cdot \vec{n} = 0\) on these faces.
We use an
extruded mesh, where the base mesh is a 20 by 20 unit square, divided into triangles, with 10 evenly-spaced vertical layers. This gives prism-shaped cells.
from firedrake import *
m = UnitSquareMesh(20, 20)
mesh = ExtrudedMesh(m, layers=10, layer_height=0.02)
We will use a simple piecewise-constant function space for the unknown scalar \(q\):
V = FunctionSpace(mesh, "DG", 0)
Our velocity will live in a low-order Raviart-Thomas space. The construction of this is more complicated than element spaces that have appeared previously. The horizontal and vertical components of the field are specified separately. They are combined into a single element which is used to build a FunctionSpace.
# RT1 element on a prism
W0_h = FiniteElement("RT", "triangle", 1)
W0_v = FiniteElement("DG", "interval", 0)
W0 = HDivElement(TensorProductElement(W0_h, W0_v))
W1_h = FiniteElement("DG", "triangle", 0)
W1_v = FiniteElement("CG", "interval", 1)
W1 = HDivElement(TensorProductElement(W1_h, W1_v))
W_elt = W0 + W1
W = FunctionSpace(mesh, W_elt)
As an aside, since our prescribed velocity is purely in the vertical direction, a simpler space would have sufficed:
# Vertical part of RT1 element
# W_h = FiniteElement("DG", "triangle", 0)
# W_v = FiniteElement("CG", "interval", 1)
# W_elt = HDivElement(TensorProductElement(W_h, W_v))
# W = FunctionSpace(mesh, W_elt)
Or even:
# Why can't everything in life be this easy?
# W = VectorFunctionSpace(mesh, "CG", 1)
Next, we set the prescribed velocity field:
velocity = as_vector((0.0, 0.0, 1.0))
u = project(velocity, W)
# if we had used W = VectorFunctionSpace(mesh, "CG", 1), we could have done
# u = Function(W)
# u.interpolate(velocity)
Next, we will set the boundary value on our scalar to be a simple indicator function over part of the bottom of the domain:
x, y, z = SpatialCoordinate(mesh)
inflow = conditional(And(z < 0.02, x > 0.5), 1.0, -1.0)
q_in = Function(V)
q_in.interpolate(inflow)
Now we will define our forms. We use the same trick as in the previous example of defining un to aid with the upwind terms:
n = FacetNormal(mesh)
un = 0.5*(dot(u, n) + abs(dot(u, n)))
We define our trial and test functions in the usual way:
q = TrialFunction(V)
phi = TestFunction(V)
Since we are on an extruded mesh, we have several new integral types at our disposal. An integral over the cells of the domain is still denoted by dx. Boundary integrals now come in several varieties: ds_b denotes an integral over the base of the mesh, while ds_t denotes an integral over the top of the mesh. ds_v denotes an integral over the sides of a mesh, though we will not use that here.
Similarly, interior facet integrals are split into dS_h and dS_v, over horizontal interior facets and vertical interior facets respectively. Since our velocity field is purely in the vertical direction, we will omit the integral over vertical interior facets, since we know \(\vec{u} \cdot \vec{n}\) is zero for these.
a1 = -q*dot(u, grad(phi))*dx
a2 = dot(jump(phi), un('+')*q('+') - un('-')*q('-'))*dS_h
a3 = dot(phi, un*q)*ds_t  # outflow at top wall
a = a1 + a2 + a3
L = -q_in*phi*dot(u, n)*ds_b  # inflow at bottom wall
Finally, we will compute the solution:
out = Function(V)
solve(a == L, out)
By construction, the exact solution is quite simple:
exact = Function(V)
exact.interpolate(conditional(x > 0.5, 1.0, -1.0))
We finally compare our solution to the expected solution:
assert max(abs(out.dat.data - exact.dat.data)) < 1e-10
This demo can be found as a script in extruded_continuity.py.
|
Main Page Contents The Problem
Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A
combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]
The original proof of DHJ used arguments from ergodic theory. The basic problem to be consider by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.
Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (active) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (final call) (500-599) TBA (600-699) A reading seminar on density Hales-Jewett (active)
A spreadsheet containing the latest lower and upper bounds for [math]c_n[/math] can be found here.
Unsolved questions Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, “Can an approach of the following kind work?”) and write them up in a single post. If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose. IP-Szemeredi (a weaker problem than DHJ) Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn’t work here, then it won’t work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by d numbers by taking all the [math]2^d[/math] possible sums. So, if the n numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^n[/math].)
Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any [math]c[/math]-dense subset of the Cartesian product of an IP_d set (it is a two dimensional pointset) has a corner.
The statement is true. One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for [math]k=4[/math]. (I will sketch the simple proof later) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines, having intersection with the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our [math]c[/math]-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma.
Finally, let me prove that there is square if [math]d[/math] is large enough compare to [math]c[/math]. Every point of the Cartesian product has two coordinates, a 0,1 sequence of length [math]d[/math]. It has a one to one mapping to [math][4]^d[/math]; Given a point [math]((x_1,…,x_d),(y_1,…,y_d))[/math] where [math]x_i,y_j[/math] are 0 or 1, it maps to [math](z_1,…,z_d)[/math], where [math]z_i=0[/math] if [math]x_i=y_i=0[/math], [math]z_i=1[/math] if [math]x_i=1[/math] and [math]y_i=0, z_i=2[/math] if [math]x_i=0[/math] and [math]y_i=1[/math], and finally [math]z_i=3[/math] if [math]x_i=y_i=1[/math]. Any combinatorial line in [math][4]^d[/math] defines a square in the Cartesian product, so the density HJ implies the statement.
Gowers.7: With reference to Jozsef’s comment, if we suppose that the d numbers used to generate the set are indeed independent, then it’s natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d. Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product.
This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do.
I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler.
Gowers.22: A slight variant of the problem you propose is this. Let’s take as our ground set the set of all pairs (U,V) of subsets of \null [n], and let’s take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. It seems to be a nice combination of Sperner’s theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U, (V\cup D)\setminus C, where D is disjoint from both U and V and C is contained in both U and V. That is your original problem I think.
I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of \null [n]. We join U\in X to V\in Y if (U,V)\in A.
Ah, I see now that there’s a problem with what I’m suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it’s not really a set-theoretic reformulation after all.
O'Donnell.35: Just to confirm I have the question right…
There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the 6 bits
[ x_i x'_i ] [ y_i y'_i ] [ z_i z'_i ]
are equal to one of the following:
[ 0 0 ] [ 0 0 ] [ 0, 1 ] [ 1 0 ] [ 1 1 ] [ 1 1 ] [ 0 0 ], [ 0 1 ], [ 0, 1 ], [ 1 0 ], [ 1 0 ], [ 1 1 ], [ 0 0 ] [ 1 0 ] [ 0, 1 ] [ 1 0 ] [ 0 1 ] [ 1 1 ]
?
McCutcheon.469: IP Roth:
Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$.
Presumably, this should be (perhaps much) simpler than DHJ, k=3.
High-dimensional Sperner
Kalai.29: There is an analogous result for Sperner but with high-dimensional combinatorial spaces instead of "lines", but I do not remember the details (Kleitman(?) Katona(?); those are the usual suspects).
Fourier approach
Kalai.29: A sort of generic attack one can try with Sperner is to look at f=1_A and express using the Fourier expansion of f the expression \int f(x)f(y)1_{x<y} where x<y is the partial order (=containment) for 0-1 vectors. Then one may hope that if f does not have a large Fourier coefficient then the expression above is similar to what we get when A is random and otherwise we can raise the density for subspaces. (OK, you can try it directly for the k=3 density HJ problem too but Sperner would be easier;) This is not unrelated to the regularity philosophy.
Gowers.31: Gil, a quick remark about Fourier expansions and the k=3 case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again.
The problem was that the natural Fourier basis in \null [3]^n was the basis you get by thinking of \null [3]^n as the group \mathbb{Z}_3^n. And if that’s what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that n is a multiple of 7, and you look at the set A of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that’s a multiple of 7, from which it follows that the third point automatically lies in the line. So this set A has too many combinatorial lines. But I’m fairly sure — perhaps you can confirm this — that A has no large Fourier coefficient.
You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset W of \null[n] and just ask that the numbers of 0s, 1s and 2s inside W are multiples of 7.
DHJ for dense subsets of a random set
Tao.18: A sufficiently good Varnavides type theorem for DHJ may have a separate application from the one in this project, namely to obtain a “relative” DHJ for dense subsets of a sufficiently pseudorandom subset of {}[3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of “dual functions” for Hales-Jewett), and so this is probably a bit off-topic.
Bibliography
H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241.
R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished.
H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119.
|
My homework problem reads:
Consider a free particle in one dimension. Write an expression for the wavefunction $\psi(x, t)$ given an initial state $\psi_0(x) = Ae^{-ax^2}$ at $t = 0$, where $A$ is a normalization constant (that you do not need to calculate).
This is my attempt:
$\begin{align} \left<p|\psi_0\right> &= \int \left<p|x\right>\left<x|\psi_0\right> dx\\\\ &= \frac{A}{\sqrt{2\pi\hbar}}\int \exp\left(-ipx/\hbar - ax^2\right) dx, \end{align}$
and
$\begin{align} \psi\left(x, t\right) = \left<x|\psi\right> &= \int \exp\left(-iEt/\hbar\right) \left<x|p\right>\left<p|\psi_0\right> dp\\\\ &= \frac{A}{2\pi\hbar}\int\exp\left(ipx/\hbar-iEt/\hbar\right)\int \exp\left(-ipx'/\hbar - ax'^2\right) dx'dp. \end{align}$
I'm not sure I'm moving in the right direction and if I am, I don't know how to proceed. I'd like some guidance please.
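For reference, the standard Gaussian integral identity (valid for $\operatorname{Re}(a) > 0$) that evaluates both integrals above is
$$\int_{-\infty}^{\infty} e^{-a x^2 + b x}\, dx = \sqrt{\frac{\pi}{a}}\, e^{b^2/4a},$$
applied first with $b = -ip/\hbar$, and then again for the $p$ integral once the free-particle relation $E = p^2/2m$ is inserted.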
|
This is a tricky question because it asks about the meaning of words. People use the word "particle" to refer to various, not always well defined, notions in physics.
In the end, I think the simplest and most correct single way to categorize the terms is to interpret "particle" as "excitation of a field". For example, if someone says
"There are two electrons in this box"
I would mentally translate that to
The electron field in this box has two units of excitation.
This is all much easier to think about if you're familiar with the so-called "second quantization".$^{[1]}$
Second quantization
Consider a one-dimensional infinite wall potential (i.e. "particle in a box"). The system has a set of discrete energy levels, which we can index as
$$\left\{ A, B, C, D, \ldots \right\}$$
If we have only one particle, we can denote its state as e.g. $|\Psi \rangle_1 = |B\rangle + |D\rangle$.$^{[2]}$ This is the so-called first quantization. If we have two particles, the situation is significantly more complex because, as you have probably learned, quantum particles are indistinguishable. You probably learned that you have to symmetrize (bosons) or antisymmetrize (fermions) the state vector to account for the fact that the particles are indistinguishable. For example, if you say that particle #1 is in state $|\Psi\rangle_1$ as written above, and particle #2 is in state $|\Psi\rangle_2=|C\rangle$, then the total system state is (assuming boson particles):
\begin{align}\left \lvert \Phi \right \rangle&= (|B\rangle_1 + |D\rangle_1)|C\rangle_2 + |C\rangle_1 (|B\rangle_2 + |D\rangle_2) \\&= |B\rangle_1 |C\rangle_2 + |D\rangle_1 |C\rangle_2 + |C\rangle_1 |B\rangle_2 + |C\rangle_1 |D\rangle_2 \, .\end{align}
This notation is horrible. In symmetrization/antisymmetrization you are basically saying:
"My notation contains information that it shouldn't, namely the independent states of particles which are actually indistinguishable, so let me add more terms to my notation to effectively remove the unwanted information."
This should seem really awkward and undesirable, and it is.
Let us consider an analogy for why the symmetrized state is such a bad representation. Consider a violin string with a set of vibrational modes. If we want to specify the state of the string, we enumerate the modes and specify the amplitude of each one, i.e. we write a Fourier series
$$\text{string displacement}(x) = \sum_{\text{mode }n=0}^{\infty}c_n \,\,\text{[shape of mode }n](x).$$
The vibrational modes are like the quantum eigenstates, and the amplitudes $c_n$ are like the number of particles in each state. With this analogy, the first quantization notation, in which we index over the particles and specify each one's state, is like indexing over units of amplitude and specifying each one's mode. That's obviously backwards. In particular, you now see why particles are indistinguishable. If a particle is just a unit of excitation of a quantum state, then just like units of amplitude of a vibrating string, it doesn't make any sense to say that the particle has identity. Units of excitation have no identity because they're just mathematical constructs to keep track of how excited a particular mode is.
A better way to specify a quantum state is to list each possible state and say how excited it is. In quantum mechanics, excitations come in discrete units $^{[3]}$, so we could specify a state like this:
$$|n_A\rangle_A |n_B\rangle_B |n_C\rangle_C |n_D\rangle_D$$
where $n_i$ is an integer. In this notation, the state $|\Psi\rangle_1$ from before is written
$$|\Psi\rangle_1 = |0\rangle_A |1\rangle_B |0\rangle_C |0\rangle_D +|0\rangle_A |0\rangle_B |0\rangle_C |1\rangle_D.$$
For compactness this would often be written $|\Psi\rangle_1=|0100\rangle + |0001\rangle$. The more complex two particle state would be
$$\left \lvert \Phi \right \rangle = |0\rangle_A |1\rangle_B |1\rangle_C |0\rangle_D + |0\rangle_A |0\rangle_B |1\rangle_C |1\rangle_D$$
or, more compactly,
$$\left \lvert \Phi \right \rangle = |0110\rangle + |0011\rangle \, .$$
This is the so-called second quantization notation. Note that it has fewer terms than the first quantized version. This is because it doesn't need to undo information that it's not supposed to have.
Back to fields vs. particles
The second quantized notation is far better because it naturally accounts for the "indistinguishable" particles. But, what we really learned, is that particles are actually units of excitation of quantum states. In the field theory language, we'd say that the particle is a unit of excitation of the various modes of the field. I won't say that either fields or particles are more fundamental because one has little meaning without the other, but now that we understand what "particle" really means, the whole situation is hopefully much clearer to you.
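As a tiny illustration of the bookkeeping above, here is a minimal Python sketch (the mode labels and helper functions are invented for this example) that collapses symmetrized first-quantized terms into occupation-number kets:

from collections import Counter
from itertools import permutations

MODES = "ABCD"

def occupation(modes_occupied):
    """Map a multiset of occupied modes, e.g. ('B', 'C'), to an
    occupation-number string like '0110' over the modes A..D."""
    counts = Counter(modes_occupied)
    return "".join(str(counts[m]) for m in MODES)

def symmetrized_terms(modes_occupied):
    """All distinct orderings that appear in the (bosonic) symmetrized
    first-quantized state for the given occupied modes."""
    return sorted(set(permutations(modes_occupied)))

# The two-particle example from the text: one particle in B or D, the other in C.
for combo in [("B", "C"), ("D", "C")]:
    print(symmetrized_terms(combo), "->", occupation(combo))
# ('B','C') and ('C','B') collapse to the single ket |0110>,
# ('C','D') and ('D','C') collapse to |0011>, matching |Phi> above.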
P.S. I do hope you'll ask for clarification as needed.
[1] The term "second quantization" is stupid, so don't try to interpret it.
[2] We ignore normalization.
[3] Hence the term "quantum".
|
I don't think it's the case that all $n>0$ terms vanish, because the mode expansion of $\phi$ has a zero mode $\phi_0$. Its expansion is
\begin{equation}\phi \left(z,\bar{z}\right) = \phi_0 - i\pi_0 \log\left(z\bar{z}\right) +i \sum_{n\neq 0} \frac{1}{n} \left(a_n z^{-n} + \bar{a}_n \bar{z}^{-n}\right)\end{equation}
Computing $\langle:\phi^n:\rangle$ for $n>0$, the only term that contributes when we take the vacuum expectation value is $\phi_0^n$. This is because $a_n$ and $\bar{a}_n$ annihilate the vacuum for $n>0$, and $\pi_0|0\rangle=0$ as well. Any cross-terms involving $a_n$ and $a_{-m}$ will be zero due to the normal ordering, as will any terms involving $\phi_0$ and $\pi_0$ (as $\pi_0$ is placed to the right).
As a result, we just get \begin{equation}\langle V_\alpha \left(z\right) \rangle =\langle \sum_{n} \frac{\left(i\alpha \phi_0\right)^n}{n!} \rangle= \langle e^{i\alpha \phi_0} \rangle.\end{equation} Because of the commutation relations between $\pi_0$ and $\phi_0$, $e^{i\beta \phi_0} |\alpha\rangle = |\alpha+\beta\rangle$, so the vacuum expectation value is $\langle e^{i\alpha \phi_0}\rangle = \delta_{\alpha,0}$; this is just the charge neutrality condition.
It's easier to obtain this result by using the definition of normal ordering [see e.g. Di Francesco]; \begin{equation}V_\alpha = \exp\left(i\alpha \phi_0 + \alpha \sum_{n>0} \frac{1}{n}\left(a_{-n}z^n + \bar{a}_{-n} \bar{z}^n\right)\right) \exp \left(\alpha \pi_0 \log\left(z\bar{z}\right) - \alpha \sum_{n>0}\frac{1}{n} \left(a_{n}z^{-n} + \bar{a}_{n} \bar{z}^{-n}\right)\right).\end{equation}The last exponential acts trivially on $|0\rangle$, and the $a_{-n},\bar{a}_{-n}$ with $n>0$ map $|0\rangle$ on to its descendants, which are orthogonal to $|0\rangle$. So when taking the vacuum expectation value, the operator is just $e^{i\alpha \phi_0}$ as before.
Alternatively, one can use the Ward identities; the Ward identity for translational invariance $\partial_z \langle V_{\alpha} \left(z\right)\rangle = 0$ means the correlator is constant. The Ward identity $ \left(z\partial_z + h_{\alpha}\right) \langle V_{\alpha}\left(z\right)\rangle =0$ then implies that $h_\alpha \langle V_{\alpha}\left(z\right)\rangle = 0$: since $h_\alpha = \alpha^2/2$ is non-zero for $\alpha \neq 0$, the correlator must be zero. If $\alpha=0$, $V_{\alpha} = 1$ and the correlator is just 1.
|
The $x$ component of vector $A$ is $-25.0$ m and the $y$ component is 40.0 m.
(a) What is the magnitude of $A$?
(b) What is the angle between the direction of $A$ and the positive direction of x?
For (b) I tried using the formula $\tan \theta = \frac{a_y}{a_x} = \frac{40}{-25} = -1.6$, thus $\arctan(-1.6)=58$ degrees which does not match the answer key: $122$ degrees.
Any help is appreciated.
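For a quick numerical check of the quadrant issue, a small Python sketch (atan2 keeps the signs of both components, while the plain arctangent of the ratio cannot distinguish the second quadrant from the fourth):

import math

ax, ay = -25.0, 40.0                 # x and y components of A

magnitude = math.hypot(ax, ay)       # sqrt(ax**2 + ay**2), about 47.2 m
naive = math.degrees(math.atan(ay / ax))     # about -58 deg: wrong quadrant
correct = math.degrees(math.atan2(ay, ax))   # about 122 deg: matches the answer key

print(magnitude, naive, correct)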
|
This is probably more of a model specification question than a technical question about PyMC3 itself, but I’m hoping someone here can help. Basically, multi-group logistic ANOVA in PyMC3 is giving different results than other Bayesian models and frequentist methods, and I’d like to figure out why.
I’m providing as much background as I can, but you can probably skip the reference analyses if you prefer.
Experimental background
I have a roughly split-plot design for a memory experiment. Participants (~40) from two different conditions perform a repeated measures task (~100 trials), where each trial has two different categories. On each trial, the participant needs to recall the presence/absence of a marker at each of ~40 grid locations (typically 10-15 markers per trial).
I want to predict the probability of a participant forgetting to place a marker at a grid position, given the participant's condition and the type of trial they saw. Therefore there are several ways to represent the target quantity: as binary indicators at each grid location (~160000 observations), as counts per trial (~4000 observations), or as counts per participant (~80 observations).
Analyses
Reference analyses
TLDR: these all indicate large effect of trial type, weak effect of condition.
Permutation tests (original frequentist analysis)
- Not ideal, as it treats condition and trial type as totally independent, and uses per-subject point estimates instead of the full data
- Results: trial type has a large and reliable (p << .001) effect, condition has a small and unreliable (p ~= .3) effect
MLE logistic ANOVA
- Uses sum coding, matching Kruschke's sum-to-zero constraint. Consistent with permutation tests
Notes:
Using statsmodels for MLE on the regression equation gives roughly \beta_0 = -2.2, \beta_{condition} = .07, and \beta_{trial\_type} = .21.
No \beta_{subject} are estimated well (confidence interval is extreme to either side of 0), as is \beta_{condition}.
When excluding \beta_{subject}, \beta_{condition} has a similar value but much tighter confidence interval. Everything else remains the same.
\beta_{interaction} is identical and tiny regardless of whether \beta_{subject} is included.
Hierarchical binomial models
- Structurally similar to permutation tests; assumes independence of trial type and condition, but doesn't –
- Unlike permutation tests, however, can be done at the level of individual trials or grid locations within trials. See Kruschke Ch 9 (p252 for model and JAGS)
- Result is similar to permutation tests and MLE: small effect of condition, with some non-negligible posterior mass on the opposite side of 0; large effect of trial type, with all posterior mass on one side of 0
Bayesian logistic ANOVA
- Model from Kruschke DBDA 2e Ch 21 (p642) – Rough example also in this notebook.
- I want to eventually report this analysis, but I'm not sure it's working correctly, as it gives qualitatively different results from the other analyses.
Model
y \sim Binomial(\theta, n)
\theta \sim Beta(a, b)
a = \omega \cdot (\kappa - 2) + 1
b = (1 - \omega) \cdot (\kappa - 2) + 1
\omega = \mathit{logistic}(\mu)
\mu = \beta_{0} + \beta_{condition}[x_{condition}] + \beta_{trial\_type}[x_{trial\_type}] + \beta_{interaction}[x_{interaction}] + \beta_{subject}[x_{subject}]
\beta_0 \sim N(0, \tau=\frac{1}{4})
\beta_i \sim N(0, \sigma_i)
\sigma_i \sim Gamma(1.64, .32)
Sample code
# Standard split-plot BANOVA design
# More or less follows
# https://github.com/JWarmenhoven/DBDA-python/blob/master/Notebooks/Chapter%2021.ipynb
# I've tried a bunch of variations on this, including changing to sum-to-zero encoding for inputs
# but these changes mostly don't seem to have that much impact
import pymc3 as pm
import theano.tensor as T


def create_coeff(name, shape):
    # Shared variance, sample coefficients
    sigma = pm.Gamma(f'sigma_{name}', 1.64, .32)
    a = pm.Normal(f'a_{name}', mu=0, tau=1/sigma**2, shape=shape)
    return a


def create_model(trial_index, condition_index, interaction_index, subject_index,
                 y, n, batch_size, num_trial_levels, num_condition_levels,
                 num_interaction_levels, num_subjects):
    """Create variables and return context manager.

    index args are integers indicating which level of variable
    y is error counts, n is number of possible errors
    """
    with pm.Model() as model:
        a0 = pm.Normal('intercept', 0, tau=1/2**2)
        a_position = create_coeff('trial', num_trial_levels)
        a_condition = create_coeff('condition', num_condition_levels)
        a_interaction = create_coeff('interaction', num_interaction_levels)
        a_subject = create_coeff('subject', num_subjects)

        mu = a0
        mu += a_position[trial_index]
        mu += a_condition[condition_index]
        mu += a_interaction[interaction_index]
        mu += a_subject[subject_index]
        mu = pm.Deterministic('mu', mu)

        omega = pm.Deterministic('omega', T.nnet.sigmoid(mu))
        kappa = pm.Gamma('beta_variance', .01, .01)
        alpha = omega * (kappa - 2) + 1
        beta = (1 - omega) * (kappa - 2) + 1
        theta = pm.Beta('theta', alpha=alpha, beta=beta, shape=batch_size)
        y = pm.Binomial('targets', p=theta, n=n, observed=y)

        # Recenter coefficients by subtracting the overall mean (Kruschke-style correction)
        a = T.concatenate([a_position, a_condition, a_interaction, a_subject])
        m = pm.Deterministic('m', a0 + a)
        bb0 = pm.Deterministic('bb0', T.mean(m))
        bb = pm.Deterministic('bb', m - bb0)
        position_contrast = pm.Deterministic('bb_pos', bb[1] - bb[0])
        condition_contrast = pm.Deterministic('bb_con', bb[3] - bb[2])
    return model
Results
The results deviate from the other three analyses. No reliable effect of trial type, and condition sometimes has a reliable and large effect depending on model alterations. Some subject coefficients are estimated with tight posteriors around relatively large values.
Further details
There are two ways to implement this model. Kruschke uses an overparametrized scheme, where each level of each variable gets a coefficient, and these are corrected by subtracting the level-wise mean from each coefficient. The alternative is to use sum-to-zero coding. There are advantages and drawbacks to both approaches.
In Krushcke’s approach, no dot product is necessary, which helps to accelerate sampling, especially when using trial or within-trial results as the target outcome. (Theano seems inefficient at these dot products, and PyMC3 doesn’t work very well with GPU compute for me, so this becomes a problem.) In practice, however, it seems to perform poorly when sampling.
In the sum-to-zero coding approach, the dot product is computationally limiting, but it simplifies the computations downstream by avoiding the need to adjust all regression params by level-wise means. Computing condition/trial type contrasts is as simple as doubling the coefficient value for a given level, since each has only two levels.
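For concreteness, a minimal NumPy sketch of what the sum-to-zero coding means for a two-level factor (the numbers below are only illustrative):

import numpy as np

# Two-level factor (e.g. condition): level 0 -> +1, level 1 -> -1.
# The single coded column replaces the two overparametrized level coefficients.
condition_levels = np.array([0, 1, 0, 1, 1, 0])
x_condition = np.where(condition_levels == 0, 1.0, -1.0)

# With mu = b0 + b_condition * x_condition the two cell means are
# b0 + b_condition and b0 - b_condition, so the condition contrast is
# simply 2 * b_condition.
b0, b_condition = -2.2, 0.07          # illustrative values only
cell_means = b0 + b_condition * np.array([1.0, -1.0])
print(x_condition)
print(cell_means, cell_means[0] - cell_means[1])   # contrast == 2 * b_condition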
(The model variant in which experiment condition has a large effect uses sum-to-zero coding with an additional correction. That correction subtracts the per-condition means of all \beta_{subject_i} and adds them to the corresponding \beta_{condition}. This made sense to me because each participant belonged to only one condition, so any tendency across participants within a condition can be attributed to condition instead of the participant coefficient.)
In either approach, I do notice that the sigma_i tend to be quite large.
Questions
Main question is, why am I getting a radically different result from the Bayesian ANOVA approach? I wouldn’t be that surprised if coefficients were just on different scales, but to find the relative presence/absence of effects basically reversed seems unusual. (Yes, I checked that I was passing the data correctly ).
My suspicion is that this has to do with prior on per-variable variance \sigma_i being on the wrong scale, but I’m not too sure - changing it hasn’t really changed the posteriors too much. I’ve also tried initializing the coefficient means to the outcomes from MLE point estimates, which helps some if the variance prior is also tightened. But inference seems to pull these towards the same values as zero initialization regardless.
Anyway, I’ve been working on this for weeks with no real progress - any suggestions, hints, or clues would be welcome!
|
I want to design a filter with a custom phase delay related to frequency. As frequency increases, phase delay should increase.
The time delay as a function of frequency can be expressed as: $t_d = \frac{L}{C_{ph}} - \frac{1}{f}$
$L$ and $C_{ph}$ are constants of $0.0017$ and $2628$ respectively.
The range of $f$ is $500\mathrm{kHz}$ to $1 \mathrm{MHz}$, the sampling frequency, $f_s$ is $78.39\mathrm{MHz}$.
Thus, when the frequency is $500 \mathrm{kHz}$, the delay should be $1.37\mathrm{\mu s}$ or $107$ samples.
For the pass band, $f$, I have used a filter response of $e^{-i2\pi fd}$, where $d$ is the delay in samples required.
I have designed stop bands with a filter response of zero between $0 \mathrm{Hz}-100\mathrm{kHz}$ and $1.4 \mathrm{MHz}-f_s$.
In MATLAB my code looks like this:
n = 50;
fs = double(fs);
L = double(L);
Cph = 2628;
f1 = linspace(500e3/fs, 1e6/fs, 100);
f1d = L/Cph - (1./(f1*fs));
% Our array is backwards though innit
f1d = f1d * -1;
f1dz = f1d * fs;
h1 = exp(-1i*2*pi*f1.*f1dz);
fstop1 = linspace(0, 100e3/fs, 10);
hstop1 = zeros(size(fstop1));
fstop2 = linspace(1.4e6/fs, 1, 10);
hstop2 = zeros(size(fstop2));
d = fdesign.arbmagnphase('n,b,f,h', n, 3, fstop1, hstop1, f1, h1, fstop2, hstop2);
% d = fdesign.arbmagnphase('n,b,f,h', n, 1, f1, h1);
D = design(d, 'equiripple');
fvtool(D, 'Analysis', 'phasedelay');
This is what I get:
Markers are at 500k and 1M.
What am I doing wrong?
|
Communications in Mathematical Analysis
Commun. Math. Anal. Volume 17, Number 2 (2014), 131-150.
Commutators of Convolution Type Operators with Piecewise Quasicontinuous Data
Abstract
Applying the theory of Calderon-Zygmund operators, we study the compactness of the commutators $[aI,W^0(b)]$ of multiplication operators $aI$ and convolution operators $W^0(b)$ on weighted Lebesgue spaces $L^p({\mathbb R},w)$ with $p\in(1,\infty)$ and Muckenhoupt weights $w$ for some classes of piecewise quasicontinuous functions $a\in PQC$ and $b\in PQC_{p,w}$ on the real line ${\mathbb R}$. Then we study two $C^*$-algebras $Z_1$ and $Z_2$ generated by the operators $aW^0(b)$, where $a,b$ are piecewise quasicontinuous functions admitting slowly oscillating discontinuities at $\infty$ or, respectively, quasicontinuous functions on ${\mathbb R}$ admitting piecewise slowly oscillating discontinuities at $\infty$. We describe the maximal ideal spaces and the Gelfand transforms for the commutative quotient $C^*$-algebras $Z_i^\pi=Z_i/{\mathcal K}$ $(i=1,2)$ where ${\mathcal K}$ is the ideal of compact operators on the space $L^2({\mathbb R})$, and establish the Fredholm criteria for the operators $A\in Z_i$.
Article information
Source: Commun. Math. Anal., Volume 17, Number 2 (2014), 131-150.
Dates: First available in Project Euclid: 18 December 2014
Permanent link to this document: https://projecteuclid.org/euclid.cma/1418919760
Mathematical Reviews number (MathSciNet): MR3292964
Zentralblatt MATH identifier: 1319.47033
Subjects: Primary: 47B47: Commutators, derivations, elementary operators, etc.
Citation
De la Cruz-Rodriguez, I.; Karlovich, Yu. I.; Loreto-Hernandez, I. Commutators of Convolution Type Operators with Piecewise Quasicontinuous Data. Commun. Math. Anal. 17 (2014), no. 2, 131--150. https://projecteuclid.org/euclid.cma/1418919760
|
I keep reading about the Unification Algorithm.
What is it and why is so important to Inference Engines? Why is it so important to Computer Science?
Unification is such a fundamental concept in computer science that perhaps at time we even take it for granted. Any time we have a rule or equation or pattern and want to apply it to some data, unification is used to specialize the rule to the data. Or if we want to combine two general but overlapping rules, unification provides us with the most general combined rule. Unification is at the core of
Proof assistants such as Isabelle/HOL work on a syntactical level on a logical calculus. Imagine you have the modus ponens rule (MP)
$\qquad \displaystyle P\to Q, P\ \Longrightarrow\ Q$
and the proof goal
$\qquad \displaystyle (a \lor b) \to (c \land d), a \lor b \ \overset{!}{\Longrightarrow} c\land d$
We humans see immediately that this follows with modus ponens, but the machine has to match goal to rule syntactically (whether you do apply rule mp or apply simp), and this is what unification does. The algorithm finds $\varphi$ with $\varphi(P) = a\lor b$ and $\varphi(Q) = c \land d$, instantiates the rule and applies it.
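To make the matching step concrete, here is a minimal sketch of first-order syntactic unification in Python (Robinson-style, without the usual efficiency refinements; the term encoding and names are invented for this example, with variables written as capitalized strings):

def is_var(t):
    # Convention for this sketch: variables are capitalized strings.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings until we hit a non-variable or an unbound variable.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    """Return a substitution making s and t equal, or None if impossible."""
    subst = {} if subst is None else dict(subst)
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return unify(t, s, subst)
    if isinstance(s, tuple) and isinstance(t, tuple) and len(s) == len(t) and s[0] == t[0]:
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# Match the modus ponens premise P -> Q against the goal (a or b) -> (c and d):
rule = ('imp', 'P', 'Q')
goal = ('imp', ('or', 'a', 'b'), ('and', 'c', 'd'))
print(unify(rule, goal))   # {'P': ('or', 'a', 'b'), 'Q': ('and', 'c', 'd')}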
The good thing about assistants' methods like simp now is that if your goal is
$\qquad \displaystyle (a \lor b) \to (c \land d), a \ \overset{!}{\Longrightarrow} d$
that they will find a suitable sequence of applications of rules MP, $P \land Q \Longrightarrow P$ and $P \Longrightarrow P \lor Q$ with compatible unifications for the respective steps and solve the goal.
Notation: With $\Gamma = \{\varphi_1, \dots, \varphi_n\}$ a set of logical formulae, the notation
$\qquad \Gamma \Longrightarrow \psi$
means the following:
If I have derived/proven all formulae in $\Gamma$ (i.e. they are valid) then this rule asserts that $\psi$ is also valid.
In a sense, the rule $\Gamma \Longrightarrow \psi$ is the last step in a (long) proof for $\psi$. Proofs are nothing but chains of such rule applications.
Note that rules usually contain schematic variables ($P$ and $Q$ in the above) that can be replaced by arbitrary formulae as long as the same variable is replaced with the same formula in all instances; the result of that format is the concrete rule instance (or intuitively, a proof step). This replacement is above denoted by $\varphi$ which was found by unification.
Often people use $\models$ instead of $\Longrightarrow$.
I don't think it is important to inference engines. The unification algorithm is however very helpful for type inference. These are two very different kinds of inference.
Type inference is important to computer science because types are important in the theory of programming languages, which is a significant part of computer science. Types are also close to logic and are intensively used in automated theorem proving. There are implementations of unification algorithms in many, if not all, proof assistants and SMT solvers.
Inference engines are related to artificial intelligence, which is also important but very different. (I've seen links between learning and logic but this seems far-fetched.)
|
Mixed formulation for the Poisson equation¶
We’re considering the Poisson equation \(\nabla^2 u = -f\) using a mixed formulation on two coupled fields. We start by introducing the negative flux \(\sigma = \nabla u\) as an auxiliary vector-valued variable. This leaves us with the PDE on a unit square \(\Omega = [0,1] \times [0,1]\) with boundary \(\Gamma\)
\[
\begin{aligned}
\sigma - \nabla u &= 0 &&\text{in } \Omega,\\
\nabla \cdot \sigma &= -f &&\text{in } \Omega,\\
u &= u_0 &&\text{on } \Gamma_D,\\
\sigma \cdot n &= g &&\text{on } \Gamma_N,
\end{aligned}
\]
for some known function \(f\). The solution to this equation will be some functions \(u\in V\) and \(\sigma\in \Sigma\) for some suitable function spaces \(V\) and \(\Sigma\) that satisfy these equations. We multiply by arbitrary test functions \(\tau \in \Sigma\) and \(\nu \in V\), integrate over the domain and then integrate by parts to obtain a weak formulation of the variational problem: find \(\sigma\in \Sigma\) and \(u\in V\) such that:
\[
\begin{aligned}
\int_\Omega (\sigma \cdot \tau + \nabla \cdot \tau \, u)\,\mathrm{d}x &= \int_\Gamma \tau \cdot n \, u\,\mathrm{d}s &&\quad \forall\ \tau \in \Sigma,\\
\int_\Omega \nabla \cdot \sigma \, \nu\,\mathrm{d}x &= -\int_\Omega f\,\nu\,\mathrm{d}x &&\quad \forall\ \nu \in V.
\end{aligned}
\]
The flux boundary condition \(\sigma \cdot n = g\) becomes an essential boundary condition to be enforced on the function space, while the boundary condition \(u = u_0\) turns into a natural boundary condition which enters into the variational form, such that the variational problem can be written as: find \((\sigma, u)\in \Sigma_g \times V\) such that
\[
a(\sigma, u; \tau, \nu) = L(\tau, \nu)
\]
with the variational forms \(a\) and \(L\) defined as
\[
\begin{aligned}
a(\sigma, u; \tau, \nu) &= \int_\Omega \left(\sigma \cdot \tau + \nabla \cdot \tau \, u + \nabla \cdot \sigma \, \nu\right)\,\mathrm{d}x,\\
L(\tau, \nu) &= -\int_\Omega f\,\nu\,\mathrm{d}x + \int_\Gamma u_0\,\tau \cdot n\,\mathrm{d}s.
\end{aligned}
\]
The essential boundary condition is reflected in function spaces \(\Sigma_g = \{ \tau \in H({\rm div}) \text{ such that } \tau \cdot n|_{\Gamma_N} = g \}\) and \(V = L^2(\Omega)\).
We need to choose a stable combination of discrete function spaces \(\Sigma_h \subset \Sigma\) and \(V_h \subset V\) to form a mixed function space \(\Sigma_h \times V_h\). One such choice is Brezzi-Douglas-Marini elements of polynomial order \(k\) for \(\Sigma_h\) and discontinuous elements of polynomial order \(k-1\) for \(V_h\).
For the remaining functions and boundaries we choose:
To produce a numerical solution to this PDE in Firedrake we proceed as follows:
The mesh is chosen as a \(32\times32\) element unit square.
from firedrake import *
mesh = UnitSquareMesh(32, 32)
As argued above, a stable choice of function spaces for our problem is the combination of order \(k\) Brezzi-Douglas-Marini (BDM) elements and order \(k - 1\) discontinuous Galerkin elements (DG). We use \(k = 1\) and combine the BDM and DG spaces into a mixed function space W.
BDM = FunctionSpace(mesh, "BDM", 1)
DG = FunctionSpace(mesh, "DG", 0)
W = BDM * DG
We obtain test and trial functions on the subspaces of the mixed function spaces as follows:
sigma, u = TrialFunctions(W)
tau, v = TestFunctions(W)
Next we declare our source function f over the DG space and initialise it with our chosen right hand side function value.
x, y = SpatialCoordinate(mesh)
f = Function(DG).interpolate(
    10*exp(-(pow(x - 0.5, 2) + pow(y - 0.5, 2)) / 0.02))
After dropping the vanishing boundary term on the right hand side, the bilinear and linear forms of the variational problem are defined as:
a = (dot(sigma, tau) + div(tau)*u + div(sigma)*v)*dx
L = - f*v*dx
The strongly enforced boundary conditions on the BDM space on the top and bottom of the domain are declared as:
bc0 = DirichletBC(W.sub(0), as_vector([0.0, -sin(5*x)]), 3)
bc1 = DirichletBC(W.sub(0), as_vector([0.0, sin(5*x)]), 4)
Note that it is necessary to apply these boundary conditions to the first subspace of the mixed function space using W.sub(0). This way the association with the mixed space is preserved. Declaring it on the BDM space directly is not the same and would in fact cause the application of the boundary condition during the later solve to fail.
Now we’re ready to solve the variational problem. We define w to be a function to hold the solution on the mixed space.
w = Function(W)
Then we solve the linear variational problem a == L for w under the given boundary conditions bc0 and bc1. Afterwards we extract the components sigma and u on each of the subspaces with split.
solve(a == L, w, bcs=[bc0, bc1])
sigma, u = w.split()
Lastly we write the component of the solution corresponding to the primal variable on the DG space to a file in VTK format for later inspection with a visualisation tool such as ParaView
File("poisson_mixed.pvd").write(u)
We could use the built-in plot function of firedrake by calling plot to plot a surface graph. Before that, matplotlib.pyplot should be installed and imported:
try:
    import matplotlib.pyplot as plt
except:
    warning("Matplotlib not imported")

try:
    plot(u)
except Exception as e:
    warning("Cannot plot figure. Error msg '%s'" % e)
Don’t forget to show the image:
try:
    plt.show()
except Exception as e:
    warning("Cannot show figure. Error msg '%s'" % e)
|
For a configuration manifold $M$ we expect there to be a bracket on $\mathcal C^\infty (T^*M)$ endowing the space with a Lie algebra structure. Here we assume that $T^*M$ is also equipped with a non-degenerate symplectic 2-form $\omega_H$ and that the bracket can be written $\omega_H(X_f,X_g)$ for two Hamiltonian vector fields $X_f,X_g\in \chi(T^*M)$. We note that there exists a 1-form $\theta_H$ on $T^*M$ such that $\omega_H=-d\theta_H$ and that $\theta_H$ can be pulled back to $TM$ to give $\theta_L$. In charts we may write,
\begin{equation}\theta_L=\frac{\partial L}{\partial \dot q^i}dq^i\end{equation}
Hence we expect the exterior derivative to give an induced 2-form on $TM$.
\begin{equation}\omega_L=\frac{\partial ^2 L}{\partial q\partial v}dq\wedge dq +\frac{\partial ^2L}{\partial v\partial v}dq\wedge dv \end{equation}
At this point I am interested in evaluating $\omega _L(X_f,X_g)$ in order to see what expression is generated. My attempt is to assume that $X_f$ and $X_g$ are now Lagrangian vector fields on $TM$ and hence can be written,
\begin{equation}X_f=\dot q\frac{\partial f}{\partial q}+\dot v\frac{\partial f}{\partial v}\end{equation}
This approach however does not successfully lead anywhere (could be my shoddy maths skills to blame however, so I would be extremely interested to see if it did in fact lead somewhere?). If we cannot develop a bracket on $TM$ does this mean that $f,g\in\mathcal C^\infty (TM)$ do not have a Lie algebra structure, and at what point would they inherit such a structure during their transition to the cotangent bundle?
Thank you for your time and I hope this question is appropriate for this forum (?) apologies if not!
|
Despite the comments to the contrary, this circuit does have a steady state solution since the voltage source produces 20V for \$t \ge 0\$.
My best guess is that because there is a parallel branch which by KCL should equal 100ix and would be zero because of the open circuit provided by the capacitor.
That's correct. The steady state KCL at the node in question is:
$$i_x + 99i_x = i_C(\infty) = 0 \rightarrow i_x = 0$$
However, this seems counter intuitive because wouldn't the electricity want to go around the outer loop.
It may seem counter intuitive but that's because your intuition hasn't fully developed yet. Once you come to fully understand the implication of that current source, the result will seem obvious.
What you must fully appreciate is that a current source completely determines the current through its branch. If there is a current source in a branch and you set its value to zero, the branch is open, i.e., there can be no current through for any voltage across.
And in this case, how do you deal with a loop that has a dependent current source dependent on its own current? Is that even possible?
But this isn't the case here*. There are two meshes (loops), one with current \$i_x\$ and the other with current \$99i_x\$. So the controlling variable of the dependent current source is not "its own current".
But, if it were the case, then the only way for the source to produce a non-zero current is for the current gain to be precisely 1:
$$i_x = ki_x \rightarrow i_x = 0$$
unless \$k=1\$ in which case you have
$$i_x = i_x$$
Since any value of \$i_x\$ satisfies the equation, the current is indeterminate. For example:
simulate this circuit – Schematic created using CircuitLab
In this circuit, the voltage and current are not determined. The only equation one can write is:
$$V_{CCCS1} = I_{CCCS1} \cdot 100\Omega$$
But, we cannot determine what the current or voltage actually is since we have two unknowns and just one equation.
*Yes, in steady state, one might argue that it is the case here thus the remainder of the answer.
The equivalent circuit to the right of the resistor
It is straightforward to show that the equivalent circuit looking to the right of the resistor is:
simulate this circuit
In other words, for the purposes of calculating \$i_x(t)\$, one can replace the circuit to the right of the resistor with the above equivalent. Now, one can see by inspection that \$i_x(\infty) = 0\$
|
Given a DeSitter-space metric from the line element:
$$ ds^2=\left(1-\frac{r^2}{R^2}\right)dt^2-\left(1-\frac{r^2}{R^2}\right)^{-1}dr^2-r^2d\Omega^2 $$
Where $R=\sqrt{\frac{3}{\Lambda}}$, and $\Lambda$ is a positive cosmological constant, I am trying to derive the equations for radial null geodesics. I derived the geodesic equations from the definition and Christoffel symbols, but I'm a little suspicious of my solution to these equations. So, the differential equations I derived are ($\lambda$ is an "affine" parameter):
$$ \frac{d^2r}{d\lambda^2}-\frac{r}{R^2}\left(1-\frac{r^2}{R^2}\right)\left(\frac{dt}{d\lambda}\right)^2+\frac{r}{R^2-r^2}\left(\frac{dr}{d\lambda}\right)^2=0 $$
$$ \frac{d^2t}{d\lambda^2}+\frac{2r}{r^2-R^2}\frac{dt}{d\lambda}\frac{dr}{d\lambda}=0 $$
Now, I tried to use the identity $\textbf{u}\cdot\textbf{u}=0$ where $\bf u$ is the four velocity (or I guess a vector tangent to the light ray's world line if I'm thinking about this correctly). Thus, because $u^2=u^3=0$:
$$g_{\mu\nu}u^{\mu}u^{\nu}=\left(1-\frac{r^2}{R^2}\right)(u^0)^2-\left(1-\frac{r^2}{R^2}\right)^{-1}(u^1)^2=0 $$
$$\implies -\frac{r}{R^2}\left(1-\frac{r^2}{R^2}\right)(u^0)^2+\frac{r}{R^2-r^2}(u^1)^2=0$$
Then, substituting this into the first (radial) geodesic equation, I get:
$$\frac{d^2r}{d\lambda^2}=0$$
This is what I am suspicious of. It seems too easy and simple. Do you think this is correct? If I carry on anyways and integrate to find $u^0(r)$ then substitute $r(\lambda)$ I get the following for $t(\lambda)$:
$$ r(\lambda)=A\lambda+B $$
$$ t(\lambda)=\frac{D}{AR}\tanh^{-1}\left[\frac{A\lambda+B}{R} \right]+E $$
Where $A,B,D,E$ are constants of integration. To sum up, is this a viable way to solve these equations or am I missing something? If this is correct, how does one define initial conditions for these solutions? Specifically, the derivative conditions $t'(0)$ and $r'(0)$.
This post imported from StackExchange Physics at 2014-08-09 08:48 (UCT), posted by SE-user TylerHG
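For what it's worth, the cancellation can be checked symbolically; a small sympy sketch (u0 stands for $u^0$, and the null condition is used to eliminate $(u^1)^2$):

import sympy as sp

r, R, u0 = sp.symbols('r R u0', positive=True)

# Null condition solved for (u^1)^2:  (1 - r^2/R^2) (u^0)^2 = (1 - r^2/R^2)^(-1) (u^1)^2
u1_sq = (1 - r**2/R**2)**2 * u0**2

# The two non-derivative terms of the radial geodesic equation:
expr = -r/R**2 * (1 - r**2/R**2) * u0**2 + r/(R**2 - r**2) * u1_sq

print(sp.simplify(expr))   # prints 0, so d^2 r / d lambda^2 = 0 for null rays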
|
We find the sum of series of the form $$\sum\limits_{i = 1}^\infty {\frac{{f(i)}}{{{i^r}}}} $$ for some special functions f. The above series is a generalization of the Riemann zeta function. In…
|
This is the question as posed to the DSP folks. Electrical engineers like to use particular definitions of the Fourier Transform (using Hz, rather than angular frequency) and the sinc function.
Out of consideration to you kind folks, I will try to eliminate a few symbols, like I will, without loss of generality, set the sample rate, $f_\text{s}$, to 1 and I try to use angular frequency.
So let $$ 0 < \omega_0 < \pi $$
and for any real $W$ such that
$$ \omega_0 < W < 2\pi - \omega_0 $$
please prove that
$$ \cos(\omega_0 t + \phi) = \sum\limits_{n=-\infty}^{\infty} \cos\left(\omega_0 n + \phi \right) \, \frac{\sin\big( W(t - n) \big)}{\pi(t - n)} $$
without the use of the Fourier Transform.
I s'pose one of the "Whittaker–Nyquist–Kotelnikov–Shannon" folks did this, but I can't see exactly how this gets extracted out of the Poisson summation formula.
I spent all of my rep on a bounty of a previous question. Sorry, I don't have much rep to spend here.
well,
21 hours left to get the bounty!! don't let it go to waste.
UPDATE: the 100 rep bounty has expired. so i guess it's wasted.
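A quick numerical sanity check of the claimed identity (not a proof; the parameter values below are arbitrary choices satisfying $\omega_0 < W < 2\pi - \omega_0$, and the doubly infinite sum is truncated symmetrically):

import numpy as np

w0, W, phi = 1.0, 2.5, 0.7          # needs w0 < W < 2*pi - w0
t = 0.3                              # arbitrary non-integer time
N = 200000                           # symmetric truncation of the sum

n = np.arange(-N, N + 1)
terms = np.cos(w0 * n + phi) * np.sin(W * (t - n)) / (np.pi * (t - n))
print(np.sum(terms), np.cos(w0 * t + phi))   # the two numbers should be close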
|
Baire's lemma says:
Let $X$ be a complete metric space. Let $(X_n)_n$ be a sequence of closed subsets. Suppose $Int(X_n)=\emptyset$ for all $n$. Then $Int(\bigcup_{n\in\mathbb N}X_n)=\emptyset$.
And here is an equivalent form:
Let $X$ be a non-empty complete metric space. Let $(X_n)_n$ be a sequence of closed sets s.t. $\bigcup_{n=1}^\infty X_n=X$. Then there is an $n_0$ s.t. $Int(X_{n_0})\neq \emptyset$.
For example, $\{1\}$ with the induced topology from $\mathbb R$ is a complete metric space, we can take $X_n=\{1\}$ for all $n$, we have that $\bigcup_{n=1}^\infty X_n=X$, but $Int(X_n)=\emptyset$ for all $n$. Where is my mistake here?
|
I am having trouble understanding my problem and what to calculate.
I have been given the subspace $U=\{x=$$ \begin{vmatrix} x_1\\ x_2\\ x_3\\ \end{vmatrix} \in F^3 | x_1 + x_2 + x_3 = 0\} \subset F^3 $
and the linear transformation $f: U \rightarrow F^2 $ $f\begin{vmatrix} x_1\\ x_2\\ x_3\\ \end{vmatrix} =\begin{vmatrix} x_1\\ x_2+x_3\\ \end{vmatrix}$
The question is to determine a matrix A that represent $f: U \rightarrow F^2 $ with respect to the basis for U and standard basis $(e_1,e_2)$ for $F^2$
My attempt: I have calculated the basis $B=\{\begin{vmatrix} 1\\ 0\\ -1\\ \end{vmatrix},\begin{vmatrix} 0\\ 1\\ -1\\ \end{vmatrix}\}$
And my matrix A calculated from the linear transformation
$\begin{vmatrix} 1&0&0\\ 0&1&1\\ \end{vmatrix}$
I'm aware the standard basis are $e_1=\begin{vmatrix} 1\\ 0\\ \end{vmatrix} , e_2=\begin{vmatrix} 0\\ 1\\ \end{vmatrix}$
I'm not sure about the next step. Do I calculate: $f\begin{vmatrix} 1\\ 0\\ -1\\ \end{vmatrix} =\begin{vmatrix} 1\\ -1\\ \end{vmatrix}$
and do the same thing for the other basis or am I suppose to find a matrix A that's going to give me $f\begin{vmatrix} 1\\ 0\\ -1\\ \end{vmatrix} =\begin{vmatrix} 1\\ 0\\ \end{vmatrix}$
and
$f\begin{vmatrix} 0\\ 1\\ -1\\ \end{vmatrix} =\begin{vmatrix} 0\\ 1\\ \end{vmatrix}$
I just don't understand the question. How am I suppose to find a matrix A with respect to the basis of U and the standard basis for $F^2$?
|
Liouville theorems for stable weak solutions of elliptic problems involving Grushin operator
Division of Computational Mathematics and Engineering, Institute for Computational Science, Ton Duc Thang University, Ho Chi Minh City, Vietnam, Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam
$\begin{equation*} \begin{cases} -{\rm div}_G(w_1\nabla_G u) = w_2f(u) &\text{ in } \Omega,\\ u=0 &\text{ on } \partial\Omega, \end{cases}\end{equation*}$
where $\Omega$ is a $C^1$ domain of $\mathbb{R}^N$, $w_1, w_2 \in L^1_{\rm loc}(\Omega)\setminus\{0\}$, $f$ is a nonlinearity, and $\nabla_G$, ${\rm div}_G$ denote the Grushin gradient and divergence; assumptions are placed on $\Omega$, $w_1$, $w_2$ and $f$, and the case $\Omega=\mathbb{R}^N$ is also treated. (The remaining abstract text did not survive extraction.)
Mathematics Subject Classification: Primary: 35J25, 35H20; Secondary: 35B53, 35B35.
Citation: Phuong Le. Liouville theorems for stable weak solutions of elliptic problems involving Grushin operator. Communications on Pure & Applied Analysis, 2020, 19 (1): 511-525. doi: 10.3934/cpaa.2020025
|
This shows you the differences between two versions of the page.
kleene_algebras [2010/07/29 18:30]
127.0.0.1 external edit
kleene_algebras [2010/09/04 17:00] (current)
jipsen
Line 16: Line 16:
  ==Morphisms==
  Let $\mathbf{A}$ and $\mathbf{B}$ be Kleene algebras.
- A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h: Aarrow B$ that is a
+ A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h: A\to B$ that is a
  homomorphism: $h(x\vee y)=h(x)\vee h(y)$, $h(x\cdot y)=h(x)\cdot h(y)$, $h(x^{\ast })=h(x)^{\ast }$, $h(0)=0$, and $h(1)=1$.
Line 24: Line 24:
  ====Basic results====
- [[Kleene algebras (otter)]]
Line 46: Line 45:
  ^[[Strong amalgamation property]] | |
  ^[[Epimorphisms are surjective]] | |
+
  ====Finite members====
Line 63: Line 63:
  [[Kleene lattices]]
+
  ====Superclasses====
|
Healy, MW, Darnley, MJ, Copperwheat, CM, Filippenko, AV, Henze, M, Hestenes, JC, James, PA, Page, KL, Williams, SC and Zheng, W (2019)
AT 2017fvz: a nova in the dwarf irregular galaxy NGC 6822. Monthly Notices of the Royal Astronomical Society, 486 (3). pp. 4334-4347. ISSN 0035-8711
AT 2017fvz a nova in the dwarf irregular galaxy NGC 6822.pdf - Published Version
Abstract
A transient in the Local Group dwarf irregular galaxy NGC 6822 (Barnard's Galaxy) was discovered on 2017 August 2 and is only the second classical nova discovered in that galaxy. We conducted optical, near-ultraviolet, and X-ray follow-up observations of the eruption, the results of which we present here. This 'very fast' nova had a peak $V$-band magnitude in the range $-7.41>M_V>-8.33$ mag, with decline times of $t_{2,V} = 8.1 \pm 0.2$ d and $t_{3,V} = 15.2 \pm 0.3$ d. The early- and late-time spectra are consistent with an Fe II spectral class. The H$\alpha$ emission line initially has a full width at half-maximum intensity of $\sim 2400$ km s$^{-1}$ - a moderately fast ejecta velocity for the class. The H$\alpha$ line then narrows monotonically to $\sim1800$ km s$^{-1}$ by 70 d post-eruption. The lack of a pre-eruption coincident source in archival Hubble Space Telescope imaging implies that the donor is a main sequence, or possibly subgiant, star. The relatively low peak luminosity and rapid decline hint that AT 2017fvz may be a 'faint and fast' nova.
Item Type: Article
Additional Information: This article has been accepted for publication in Monthly Notices of the Royal Astronomical Society ©: 2019 The Authors. Published by Oxford University Press on behalf of the Royal Astronomical Society. All rights reserved.
Uncontrolled Keywords: astro-ph.SR
Subjects: Q Science > QB Astronomy; Q Science > QC Physics
Divisions: Astrophysics Research Institute
Publisher: Oxford University Press
Date Deposited: 18 Apr 2019 10:19
Last Modified: 27 Jun 2019 10:01
DOI or Identification number: 10.1093/mnras/stz1108
URI: http://researchonline.ljmu.ac.uk/id/eprint/10573
|
Context: I have a geometric algorithm that is sensitive to collinear points and receives as input a list of points in 2D generated randomly. Suppose that I have a Boolean function nonCollinear(x,y,z) that, given three points, returns true if they are not collinear and false otherwise. Is there an efficient algorithm to generate a list of random points such that no 3 points in the list are collinear? The point coordinates x, y are integers, the number of points is N and the grid size is RxC. I know that there is a restriction between N and R, C, so I am fine if the algorithm relies on some assumption (e.g. RxC is way bigger than N).
For simplicity, assume the grid is a square $N \times N$ grid and $N$ is a prime.
It's easy to see that from each row we can pick $\leq 2$ points only, so the maximum number of points we can choose is $2N$.
Now consider the set of points $\{(i,i^2\ mod\ n)\ |\ 0\leq i \leq n-1 \}$.
For any set of 3 points to be collinear (let's call them $(x_1,y_1),(x_2,y_2),(x_3,y_3)$, $x_1 < x_2 < x_3$) we must have $\frac{y_3 - y_2}{x_3 - x_2} = \frac{y_2 - y_1}{x_2 - x_1}$; putting in $y_i = x_i^2\ mod\ n$, we obtain $x_3 + x_2 \equiv x_2 + x_1 \mod n$ and this is only possible when $x_1 = x_3$.
Hence we have a set of $N$ points where no 3 points are collinear and it is the best we can obtain up to a constant factor.
Since this is a deterministic construction , to get a random family of points , instead of choosing $i^2$ , we can take any random quadratic polynomial $p$ and take the set of points $\{(i , p(i)) \ mod\ n \ |\ 0 \leq i \leq n-1 \}$.
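A small Python sketch of this construction, brute-force checking the no-3-collinear property with a cross-product test (the prime and the random quadratic below are arbitrary choices):

import random
from itertools import combinations

n = 53                                   # a prime; the grid is n x n
a = random.randrange(1, n)               # quadratic coefficient, nonzero mod n
b, c = random.randrange(n), random.randrange(n)

points = [(i, (a * i * i + b * i + c) % n) for i in range(n)]

def collinear(p, q, r):
    # Zero cross product <=> the three points lie on a common line.
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]) == 0

assert not any(collinear(p, q, r) for p, q, r in combinations(points, 3))
print(f"{len(points)} points on a {n}x{n} grid, no 3 collinear")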
|
If we convolve 2 signals we get a third signal. What does this third signal represent in relation to the input signals?
There's not particularly any "physical" meaning to the convolution operation. The main use of convolution in engineering is in describing the output of a linear, time-invariant (LTI) system. The input-output behavior of an LTI system can be characterized via its impulse response, and the output of an LTI system for any input signal $x(t)$ can be expressed as the convolution of the input signal with the system's impulse response.
Namely, if the signal $x(t)$ is applied to an LTI system with impulse response $h(t)$, then the output signal is:
$$ y(t) = x(t) * h(t) = \int_{-\infty}^{\infty}x(\tau)h(t - \tau)d\tau $$
Like I said, there's not much of a physical interpretation, but you can think of a convolution qualitatively as "smearing" the energy present in $x(t)$ out in time in some way, dependent upon the shape of the impulse response $h(t)$. At an engineering level (rigorous mathematicians wouldn't approve), you can get some insight by looking more closely at the structure of the integrand itself. You can think of the output $y(t)$ as the sum of an infinite number of copies of the impulse response, each shifted by a slightly different time delay ($\tau$) and scaled according to the value of the input signal at the value of $t$ that corresponds to the delay: $x(\tau)$.
This sort of interpretation is similar to taking discrete-time convolution (discussed in Atul Ingle's answer) to a limit of an infinitesimally-short sample period, which again isn't fully mathematically sound, but makes for a decently intuitive way to visualize the action for a continuous-time system.
A particularly useful intuitive explanation that works well for discrete signals is to think of convolution as a "weighted sum of echoes" or "weighted sum of memories."
For a moment, suppose the input signal to a discrete LTI system with transfer function $h(n)$ is a delta impulse $\delta(n-k)$. The convolution is \begin{eqnarray} y(n) &=& \sum_{m=-\infty}^{\infty} \delta(m-k) h(n-m) \\ &=& h(n-k). \end{eqnarray} This is just an echo (or memory) of the transfer function with delay of k units.
Now think of an arbitrary input signal $x(n)$ as a sum of weighted $\delta$ functions. Then the output is a weighted sum of delayed versions of h(n).
For example, if $x(n) = \{1, 2, 3\}$, then write $x(n) = \delta(n) + 2 \delta(n-1) + 3 \delta(n-2)$.
The system output is a sum of the echoes $h(n)$, $h(n-1)$ and $h(n-2)$ with appropriate weights 1, 2, and 3, respectively.
So $y(n) = h(n) + 2h(n-1)+3h(n-2)$.
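The same bookkeeping in a few lines of NumPy (here $h$ is an arbitrary made-up impulse response):

import numpy as np

x = np.array([1.0, 2.0, 3.0])            # input: delta + 2*delta(n-1) + 3*delta(n-2)
h = np.array([1.0, 0.5, 0.25, 0.125])    # an arbitrary impulse response

# Weighted sum of delayed echoes of h ...
echoes = (1 * np.pad(h, (0, 2)) +
          2 * np.pad(h, (1, 1)) +
          3 * np.pad(h, (2, 0)))

# ... equals the convolution of x with h.
print(np.allclose(echoes, np.convolve(x, h)))   # True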
A good intuitive way of understanding convolution is to look at the result of convolution with a point source.
As an example, the 2D convolution of a point with the flawed optics of Hubble Space Telescope creates this image:
Now imagine what happens if there are two (or more) stars in a picture: you get this pattern twice (or more), centered on each star. The luminosity of the pattern is related to the luminosity of a star. (Note that a star is practically always a point source.)
These patterns are basically the multiplication of the point source with the convoluted pattern, with the result stored at the pixel such that it reproduces the pattern when the resulting picture is viewed in its entirety.
My personal way of visualizing a convolution algorithm is that of a loop on every pixel of the source image. On each pixel, you multiply by the value of the convoluted pattern, and you store the result on the pixel whose relative position corresponds to the pattern. Do that on every pixel (and sum the results on every pixel), and you get the result.
Think of this: imagine a drum that you beat repeatedly to make music. When the drum stick first lands on the membrane, the impact makes the membrane vibrate; by the time you strike it a second time, the vibration due to the first impact has already decayed to some extent. So the sound you hear is the current beat plus the sum of the decayed responses of the previous impacts. So if $x(k)$ is the impact force at the $k$-th moment, then the impulse delivered will be force times impact time,
which is
$x(k)dk$
where $dk$ is the infinitesimally small time of impact,
and if you are hearing the sound at $t$, then the elapsed time will be $t-k$. Suppose the membrane of the drum has a decay effect, defined by a function $h(u)$, where $u$ is the elapsed time, in our case $t-k$; so the response to the impact at $k$ will be $h(t-k)$. So the effect of $x(k)dk$ at time $t$ will be the multiplication of both, i.e. $x(k)h(t-k)dk$.
So the overall effect of the music we hear will be the integrated effect of all the impacts, from negative infinity to plus infinity. Which is what is known as convolution.
You can also think of convolution as smearing/smoothing of one signal by another. If you have a signal with pulses and another of, say, a single square pulse, the result will be the smeared or smoothed out pulses.
Another example is two square pulses convolved come out as a flattened trapezoid.
If you take a picture with a camera with the lens defocused, the result is a convolution of the focused image with the point spread function of the defocus.
The probability distribution of the sum of a pair of dice is the convolution of the probability distributions of the individual dice.
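For instance, with two fair six-sided dice (a quick NumPy check):

import numpy as np

die = np.full(6, 1/6)                 # P(face) for faces 1..6
sums = np.convolve(die, die)          # P(total) for totals 2..12

for total, p in zip(range(2, 13), sums):
    print(total, round(p, 4))         # peaks at 7 with probability 6/36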
Long multiplication is convolution, if you don't carry from one digit to the next (and if you flip one of the numbers): {2, 3, 7} convolved with {9, 4} is {8, 30, 55, 63}.
          2   3   7
    x         4   9
    ---------------
         18  27  63
     8   12  28
    ---------------
     8   30  55  63
(You could finish out the multiplication by carrying the "6" from 63 into the 55, and so on.)
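The same computation with a convolution routine (a sketch; the flip of {9, 4} is done explicitly):

import numpy as np

a = [2, 3, 7]                        # digits of 237
b = [9, 4]                           # digits of 49
cols = np.convolve(a, b[::-1])       # column sums without carrying: [8, 30, 55, 63]

# carrying the columns recovers ordinary multiplication: 237 * 49 = 11613
value = sum(c * 10**k for k, c in enumerate(cols[::-1]))
print(cols, value)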
In signals and systems, convolution is usually applied to an input signal and an impulse response to get an output signal (a third signal). It is natural to see convolution as a "weighted sum of past inputs", because past inputs also influence the current output.
I'm not sure if this is the answer you were looking for, but I made a video on it recently because it bothered me for a long time. https://www.youtube.com/watch?v=1Y8wHa3fCKs&t=14s Here's a short video. Please excuse my English lol.
Another way to look at convolution is to consider that you have two things:
DATA: quantities, typically corrupted by some noise, located at random positions (in time, space, you name it);
PATTERN: some knowledge of what the information should look like.
The convolution of the DATA with (the mirror image of) the PATTERN is another quantity that evaluates, knowing the PATTERN, how likely it is that the pattern sits at each position within the DATA.
Technically, at every position this quantity is the correlation (hence the mirroring of the PATTERN) and thus measures the log-likelihood under some general assumptions (independent Gaussian noise). Convolution allows one to compute it at every position (in space, time, ...) in parallel.
A convolution is an integral that expresses the amount of overlap of one function (say $g$) as it is shifted over another function (say $f$); it is written $g*f$.
The physical meaning is that of a signal passing through an LTI system. Convolution is defined as: flip (one of the signals), shift, multiply, and sum. I am going to explain my intuition about each step.
1. Why do we flip one of the signals in convolution, and what does it mean?
Because the last point in the representation of the input signal is actually the first one to enter the system (notice the time axis). Convolution is defined for Linear Time-Invariant systems; it is all related to time and how we represent it mathematically. There are two signals in a convolution: one represents the input signal and one represents the system response. So the first question here is: what is the system-response signal? The system response is the output of the system at a given time $t$ to an input with only one non-zero element (an impulse signal shifted by $t$).
2. Why are the signals multiplied point by point?
Again, let us refer to the definition of the system-response signal. As said, it is the signal formed by shifting an impulse function by $t$ and plotting the output for each of these $t$'s. We can also view the input signal as a sum of impulse functions with different amplitudes (scales) and phases. So the system response to the input signal at any given time is the system response itself, multiplied by (or scaled by) the amplitude of the input at that time.
3. What does shifting mean?
Having said that (1 & 2), shifting is performed to get the output of the system for every point of the input signal at any time $t$.
I hope it helps you folks!
[As the question keeps bumping, a short edit]
The output is the joint filtering of the two input signals or functions. In other words, how $x_1$ is smoothed by $x_2$ considered as a filter, and symmetrically how $x_2$ is smoothed by $x_1$ considered as a smoothing function. To some extent, this convolution is a kind of "Least common multiple" between two signals (instead of numbers).
A longer "system view" follows: Think of an ideal (Platonist) vision of a point. The head of a pin, very thin, somewhere in the empty space. You can abstract it like a Dirac (discrete or continuous).
Look at it from afar, or like a short-sighted person (as I am), it gets blurred. Now imagine the point is looking at you, too. From the point "point of view", you can be a singularity, too. The point can be short-sighted as well, and the medium between you both (you as a singularity and the point) can be non-transparent.
So, convolution is like A bridge over troubled water. I never thought I could quote Simon and Garfunkel here. Two phenomena trying to seize each other. The result is the blur of one blurred by the other, symmetrically. The blurs don't have to be the same. Your short-sighted blurring combines evenly with the fuzziness of the object. The symmetry is such that if the fuzziness of the object becomes your eye-impairment, and vice-versa, the overall blur remains the same. If one of them is ideal, the other is untouched. If you can see perfectly, you see the exact blurriness of the object. If the object is a perfect point, one gets the exact measure of your short-sightedness.
All that under some linearity assumptions.
The convolution is a complicated operation. In the Fourier domain, you can interpret it as a product of blurs. Or in the $\log$-Fourier domain, it can be interpreted as a sum of blurs.
You can check But Why? Intuitive Mathematics: Convolution
The way you hear sound in a given environment (room, open space etc) is a convolution of audio signal with the impulse response of that environment.
In this case the impulse response represents the characteristics of the environment like audio reflections, delay and speed of audio which varies with temperature.
To rephrase the answers:
For signal processing it is the weighted sum of the past into the present. Typically one term is the voltage history at the input to a filter and the other term is a filter, or some such thing, that has "memory". Of course, in video processing all of the adjacent pixels take the place of the "past".
For probability it is the combined probability of an event built from other events; the number of ways to get a 7 in craps comes from rolling 6 and 1, 3 and 4, or 2 and 5, i.e. the sum of products of probabilities $P(k)P(7-k)$: $P(7-1)P(1)+P(7-2)P(2)+\cdots$
Convolution is a mathematical way of combining two signals to form a third signal. It is one of the most important techniques in DSP… why? Because using this mathematical operation, together with the system's impulse response, you can compute the system's output for any input. If you do not know why the system impulse response is important, read about it in http://www.dspguide.com/ch6.htm. Using the strategy of impulse decomposition, systems are described by a signal called the impulse response.
Convolution is important because it relates the three signals of interest: the input signal, the output signal, and the impulse response. It is a formal mathematical operation, just as multiplication, addition, and integration. Addition takes two numbers and produces a third number, while convolution takes two signals and produces a third signal. In linear systems, convolution is used to describe the relationship between three signals of interest: the input signal, the impulse response, and the output signal (from Steven W. Smith). Again, this is highly bound to the concept of impulse response which you need to read about it.
An impulse causes an output sequence that captures the dynamics of the system (its future). By flipping this impulse response over, we use it to compute the output as the weighted combination of all previous input values. This is an amazing duality.
In simple terms, it means transferring the problem from one domain to another domain where we find it easier to work. Convolution is tied to the Laplace transform, and it is sometimes easier to work in the $s$ domain, where combining signals reduces to simple algebra. Also, as the Laplace transform is a one-to-one map, we do not lose information about the input. Before trying to understand what the general convolution theorem means physically, we should start in the frequency domain. Addition and scalar multiplication follow the usual rules because the Laplace transform is a linear operator: $c_1\,\mathcal{L}\{f(x)\} + c_2\,\mathcal{L}\{g(x)\} = \mathcal{L}\{c_1 f(x) + c_2 g(x)\}$. But what $\mathcal{L}\{f(x)\}\cdot\mathcal{L}\{g(x)\}$ corresponds to in the time domain is exactly what the convolution theorem tells us.
|
I'm trying to get the AdS solution for the circular Wilson loop. The standard AdS metric is:
$ds^2 = \frac{L^2}{z^2}(\eta_{\mu \nu} dx^{\mu} dx^{\nu} + dz^2)$
If I take the circle of radius $R$ in the $x_1,x_2$ plane, I can choose polar coordinates:
$x_1 = r \cos\theta$, $x_2 = r \sin\theta$
$ds^2 = \frac{L^2}{z^2}(-dt^2 + dr^2 + r^2d\theta^2 + dx_3^2 + dz^2)$
Now I want to find the area that minimizes the Nambu-Goto action:
$S_{NG} = \int d\sigma d\tau \sqrt{g}$
Where $g$ is the usual pullback: $g_{ab} = G_{\mu \nu} \partial_a X^{\mu} \partial_b X^{\nu}$. Now my fields are $X^{\mu} = (t,r,\theta,x_3,z(r))$ and I choose the gauge $\sigma = r$, $\tau = \theta$, from which I get:
$S_{NG} = \int dr d\theta \frac{L^2 r}{z^2} \sqrt{1 + z'^2}$
From where I see that the Hamiltonian is conserved and we get:
$H = \frac{-L^2 r}{z^2} \frac{1}{\sqrt{1+z'^2}}$
But the answer is $S_{NG}= \sqrt{\lambda} (\frac{R}{z_0}-1)$ and I don't know where the problem is. This post imported from StackExchange Physics at 2016-07-07 18:18 (UTC), posted by SE-user Jasimud
|
Method of sub-super solutions for fractional elliptic equations
1. School of Mathematics, Hunan University, Changsha 410082, Hunan, China
2. School of Mathematics and Applied Statistics, University of Wollongong, Wollongong 2522, NSW, Australia
3. School of Mathematics (Zhuhai), Sun Yat-sen University, Zhuhai 519082, Guangdong, China
$\left\{ \begin{array}{ll} (-\Delta)^{s}u = f(x,u), & \text{in } \Omega, \\ u = 0, & \text{in } \mathbb{R}^{N}\setminus\Omega, \end{array} \right.$
$f:\Omega \times \mathbb{R}\to \mathbb{R}$
$\nu$
$\left\{ \begin{array}{ll} (-\Delta)^{s}u = f(x,u) + \nu, & \text{in } \Omega, \\ u = 0, & \text{in } \mathbb{R}^{N}\setminus\Omega. \end{array} \right.$
Mathematics Subject Classification: Primary: 35J60, 35J67; Secondary: 58J05.
Citation: Yanqin Fang, De Tang. Method of sub-super solutions for fractional elliptic equations. Discrete & Continuous Dynamical Systems - B, 2018, 23 (8): 3153-3165. doi: 10.3934/dcdsb.2017212
|
I am working on a problem which would possibly relate the Fourier transform/series to the jump singularities of a function, i.e. the points where the function itself or one of its derivatives jumps (some kind of logarithmic blow-ups too, possibly as a corollary).
Consider a BV function $f(t)$ in $L^2(\mathbb{R})$ such that $f(t) =0, t<0$. Let $F(\omega)$ be its Fourier transform.
Consider the family of curves $\alpha_t(\omega) \equiv (x_t(\omega),y_t(\omega)) $ given as $$x_t(\omega) = \int_0^{\omega}R(\Omega)\cos(\Omega t + \Phi(\Omega))d\Omega$$ and $$y_t(\omega) = \int_0^{\omega}R(\Omega)\sin(\Omega t + \Phi(\Omega))d\Omega,$$ defined only for $\omega \ge 0$, where $R(\omega) = |F(\omega)|$ and $\Phi(\omega) = \angle F(\omega)$.
Let $A_t(s) \equiv (X_t(s),Y_t(s))$ be the arc length parametrization of the above mentioned curves. It can be seen that the parameter change is $s(\omega) = \int_0^{\omega}R(\Omega)\,d\Omega$. We define the moment of inertia about the center of mass of a segment of this curve corresponding to $t$, between $s_0$ and $s_1$ in the arc length parametrization, as $$I_{s_0,s_1}(t) = \int_{s_0}^{s_1} ((X_t(s)-X_{cm})^2 + (Y_t(s)-Y_{cm})^2) ds, $$ where $X_{cm} = \frac{1}{s_1-s_0}\int_{s_0}^{s_1}X_t(s)ds$ and $Y_{cm} = \frac{1}{s_1-s_0}\int_{s_0}^{s_1}Y_t(s)ds$.
The moment of inertia about the center of mass of the curve segment (corresponding to $t$) between $\omega_0$ and $\omega_1$ is denoted $$MI_{\omega_0,\omega_1}(t) = I_{s(\omega_0),s(\omega_1)}(t).$$
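For concreteness, here is a rough discrete approximation of this construction (not the exact continuum definition; the function name, the FFT-based discretization, and the band limits are my own assumptions):

import numpy as np

def curve_moment_of_inertia(f, t, fs, w0=0.0, w1=None):
    F = np.fft.rfft(f)
    omega = 2 * np.pi * np.fft.rfftfreq(len(f), d=1.0 / fs)   # angular frequencies
    R, Phi = np.abs(F), np.angle(F)
    if w1 is None:
        w1 = omega[-1]
    sel = (omega >= w0) & (omega <= w1)
    dw = omega[1] - omega[0]
    # x_t(omega), y_t(omega) approximated as cumulative sums over the selected band
    x = np.cumsum(R[sel] * np.cos(omega[sel] * t + Phi[sel])) * dw
    y = np.cumsum(R[sel] * np.sin(omega[sel] * t + Phi[sel])) * dw
    w = R[sel] * dw                     # arc-length element ds = R(Omega) dOmega
    s = w.sum()
    x_cm, y_cm = (x * w).sum() / s, (y * w).sum() / s
    return (((x - x_cm) ** 2 + (y - y_cm) ** 2) * w).sum()

Evaluating this for a grid of $t$ values and looking for maxima is how I would probe the statement numerically.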
Assumption : Assume $f(t)$ only has jump singularities in the form of the function itself or one of its derivatives jumping at that point. For example $t_0$ is considered as a singularity if any derivative, say the tenth derivative $f^{(10)}(t)$ jumps at $t_0$.
Statement: Given that there is a jump singularity at $t = t_0 > 0$, we can always find an $\omega_{oc}$ such that, for all $\omega_0 > \omega_{oc}$ and any arbitrarily small $\epsilon$, we can find a sufficiently large $\omega_{0,\epsilon}$ such that for all $\omega > \omega_{0,\epsilon}$ the function of $t$, $MI_{\omega_0,\omega}(t)$, has a maximum in $(t_0-\epsilon,t_0+\epsilon)$. PS: Clarification: If the function $f$ is continuous at $t_0$ but, say, the tenth derivative jumps at $t_0$, then $t_0$ is also defined as a jump singularity of $f$ in this problem. The function may have multiple jump singularities, like the third derivative jumping at $t_1$ and the second derivative jumping at $t_2$, etc. Clues I had:
I am trying to use this result and this answer, which I think is the key, but given my limited ability with the heavier math and my lack of sharp ideas, I am not able to make further progress. So I give up and post it here in this forum, where I hope to find fresh ideas and a solution.
Things look interesting once we start looking from the geometric perspective of the plane where our curves live. Also note that $f\cos(\theta) + f_h \sin(\theta)$ ($f_h$ being the Hilbert transform of $f$), for different $\theta$, all have singularities at the same places (see here), the only difference being partial blow-up and partial jump, depending on $\theta$ (the blow-up always being logarithmic). This is in sync with, and follows from, the translation and rotation invariance of our moment of inertia about the center of mass. Some non-technical details:
I have been trying to formulate and prove this relation for the past 3.5 years. Most of my activity on math.SE and here was indirectly related to solving this. In fact I bumped into math.SE and mathoverflow when I started on this. This question in particular was an attempt to learn of any existing theorems. If proven, this can be extended to functions in $\mathbb{R}^N$ using Clifford algebra.
I guess this problem is very important for applied math. As far as I know, definitely for signal processing.
PS2: This concept exhibits duality. For example, consider the real part of the Fourier transform as the function to begin with; then we can construct exactly similar statements about the singularities of this real-part function in the frequency domain.
Motivation: For math greats like Terry and the like, and also for newbies like me, here is a motivation as to why this problem is so important.
Let $f(t)$ be an audio signal. We can safely assume it to be bandlimited to 0-20kHz, as we cannot hear anything above that. Capture this signal in a digital computer with an appropriate sampling frequency and denote it $f[n]$.
Now take Discrete Hilbert transform of $f[n]$ to get $f_h[n]$, (using the code $f_h$ = imag(hilbert(f)); in Matlab).
Compute the signal $f_{\theta}[n] = f[n]\cos\theta + f_h[n]\sin\theta$ for any value of $\theta$, then listen to the signal with different values for $\theta$.
They all sound exactly identical.
Similarly our $MI_{\omega_0,\omega_1}(t)$ is same for all $f_{\theta} = f\cos\theta + f_h\sin\theta$, for any value of $\theta$.
Just try it. $\langle f, f_h\rangle = 0$, so why do they produce the same effect on the listener?
MATLAB code :
[f,fs] = wavread('audio_file.wav');    % load the audio signal and its sampling rate
fh = imag(hilbert(f));                  % discrete Hilbert transform of f
theta = pi/4;
f_tht = f*cos(theta) + fh*sin(theta);   % rotate between f and its Hilbert transform
wavplay(f,fs);                          % play the original
wavplay(f_tht,fs);                      % play the rotated version -- it sounds the same
Some illustrations for the problem (these are discrete approximations):
The function $f(t)$ (discrete version) is as follows :
The corresponding moment of inertia $MI(t)$ (segment from zero to the highest frequency) is as follows (interesting to observe that there is no ringing!):
Here is a plot of curves from $t = 0$ to $t = 800$. We can see that at $t = 400$, the curve is almost straight, making MI highest. $x-$axis is $f(t)$ and $y-$axis is $f_h(t)$.
|
There are proofs that treat the cases of real and non-real $\chi$ on an equal footing. One proof is in Serre's Course in Arithmetic, which the answers by Pete and David are basically about. That method uses the (hidden) fact that the zeta-function of the $m$-th cyclotomic field has a simple pole at $s = 1$, just like the Riemann zeta-function. Here is another proof which focuses only on the $L$-function of the character $\chi$ under discussion, the $L$-function of the conjugate character, and the Riemann zeta-function.
Consider the product$$H(s) = \zeta(s)^2L(s,\chi)L(s,\overline{\chi}).$$This function is analytic for $\sigma > 0$, with the possible exception of a pole at $s = 1$. (As usual I write $s = \sigma + it$.)
Assume $L(1,\chi) = 0$. Then also $L(1,\overline{\chi}) = 0$. So in the product defining $H(s)$, the double pole of $\zeta(s)^2$ at $s = 1$ is cancelled and $H(s)$ is therefore analytic throughout the half-plane $\sigma > 0$.
For $\sigma > 1$, we have the exponential representation $$H(s) = \exp\left(\sum_{p, k} \frac{2 + \chi(p^k) + \overline{\chi}(p^k)}{kp^{ks}}\right),$$where the sum is over $k \geq 1$ and primes $p$. If $p$ does not divide $m$, then we write $\chi(p) = e^{i\theta_p}$ and find
$$\frac{2 + \chi(p^k) + \overline{\chi}(p^k)}{k} = \frac{2(1 + \cos(k\theta_p))}{k} \geq 0.$$ If $p$ divides $m$ then this sum is $2/k > 0$. Either way, inside that exponential is a Dirichlet series with nonnegative coefficients, so when we exponentiate and rearrange terms (on the half-plane of abs. convergence, namely where $\sigma > 1$), we see that $H(s)$ is a Dirichlet series with nonnegative coefficients. A lemma of Landau on Dirichlet series with nonnegative coefficients then assures us that the Dirichlet series representation of $H(s)$ is valid on any half-plane where $H(s)$ can be analytically continued.
To get a contradiction at this point, here are several methods.
[Edit: The answer by J.H.S., due to Bateman, contains the slickest argument I have seen, so let me put it here. The idea is to look at the coefficient of $1/p^{2s}$ in the Dirichlet series for $H(s)$. By multiplying out the $p$-part of the Euler product, the coefficient of $1/p^s$ is $2 + \chi(p) + \overline{\chi}(p)$, which is nonnegative, but the coefficient of $1/p^{2s}$ is $(\chi(p) + \overline{\chi}(p) + 1)^2 + 1$, which is not only nonnegative but in fact greater than or equal to 1. Therefore if $H(s)$ has an analytic continuation along the real line out to the number $\sigma$, then for real $s \geq \sigma$ we have $H(s) \geq \sum_{p} 1/p^{2s}$. The hypothesis that $L(1,\chi) = 0$ makes $H(s)$ analytic for all complex numbers with positive real part, so we can take $s = 1/2$ and get $H(1/2) \geq \sum_{p} 1/p$, which is absurd since that series over the primes diverges. QED!]
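As a quick check of that coefficient claim (a worked expansion, not part of the original argument): with $X = p^{-s}$ and $\chi(p)\overline{\chi}(p) = 1$ for $p \nmid m$, the $p$-Euler factor of $H(s)$ is
$$\frac{1}{(1-X)^2(1-\chi(p)X)(1-\overline{\chi}(p)X)} = \left(1 + 2X + 3X^2 + \cdots\right)\left(1 + (\chi(p)+\overline{\chi}(p))X + (\chi(p)^2 + 1 + \overline{\chi}(p)^2)X^2 + \cdots\right),$$
so the coefficient of $X^2$ is
$$3 + 2(\chi(p)+\overline{\chi}(p)) + \chi(p)^2 + 1 + \overline{\chi}(p)^2 = (\chi(p)+\overline{\chi}(p)+1)^2 + 1 \geq 1.$$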
If you are willing to accept that $L(s,\chi)$ (and therefore $L(s,\overline{\chi})$) has an analytic continuation to the whole plane, or at least out to the point $s = -2$, then $H(s)$ extends to $s = -2$. The Dirichlet series representation of $H(s)$ is convergent at $s = -2$ by our analytic continuation hypothesis and it shows $H(-2) > 1$, or the exponential representation implies that at least $H(-2) \not= 0$. But $\zeta(-2) = 0$, so $H(-2) = 0$. Either way, we have a contradiction.
There is a similar argument, pointed out to me by Adrian Barbu, that does not require analytic continuation of $L(s,\chi)$ beyond the half-plane $\sigma > 0$. If you are willing to accept that $\zeta(s)$ has zeros in the critical strip $0 < \sigma < 1$ (which is a region that the Dirichlet series and exponential representations of $H(s)$ are both valid since $H(s)$ is analytic on $\sigma > 0$), we can evaluate the exponential representation of $H(s)$ at such a zero to get a contradiction. Of course the amount of analysis that lies behind this is more substantial than what is used to continue $L(s,\chi)$ out to $s = -2$.
We consider $H(s)$ as $s \rightarrow 0^{+}$. We need to accept that $H$ is bounded as $s \rightarrow 0^{+}$. (It's even holomorphic there, but we don't quite need that.) For real $s > 0$ and a fixed prime $p_0$ (not dividing $m$, say), we can bound $H(s)$ from below by the sum of the $p_0$-power terms in its Dirichlet series. The sum of these terms is exactly the $p_0$-Euler factor of $H(s)$, so we have the lower bound $$H(s) > \frac{1}{(1 - p_0^{-s})^2(1 - \chi(p_0)p_0^{-s})(1 - \overline{\chi}(p_0)p_0^{-s})} = \frac{1}{(1 - p_0^{-s})^2(1 - (\chi(p_0)+ \overline{\chi}(p_0))p_{0}^{-s} + p_0^{-2s})}$$ for real $s > 0$. The right side tends to $\infty$ as $s \rightarrow 0^{+}$. We have a contradiction. QED
These three arguments at some point use knowledge beyond the half-plane $\sigma > 0$ or a nontrivial zero of the zeta-function. Granting any of those lets you see easily that $H(s)$ can't vanish at $s = 1$, but that "granting" may seem overly technical. If you want a proof for the real and complex cases uniformly which does not go outside the region $\sigma > 0$, use the method in the answer by Pete or David [edit: or use the method I edited in as the first one in this answer].
|
Let $k$ be a field. Denote by $A_{mn}=k[\{X_{ij}\}_{1\le i\le m,1\le j\le n}]$ the polynomial ring in $mn$ variables. Given $m,n,r>0$, we have a natural homomorphism $\phi\colon A_{mn}\to A_{mr}\otimes_kA_{rn}$ induced by matrix multiplication: $X_{ij}\mapsto\sum_{l=1}^rX_{il}\otimes Y_{lj}$, where on the right $X_{il}$ and $Y_{lj}$ denote the variables of $A_{mr}$ and $A_{rn}$ respectively. Let $I\subseteq A=A_{mn}$ be the ideal generated by all $(r+1)$-minors of the matrix $(X_{ij})_{1\le i\le m,1\le j\le n}$.
Is it true that $I$ is a prime ideal in $A$? Is it true that $I=\ker\phi$?
When $r=1$, this is just the Segre embedding and both statements are true. A proof can be found here. The essential point is that $\phi$ induces a surjective homomorphism $\overline\phi\colon A/I\to B$, where $B$ is the subring of $A_{m1}\otimes_kA_{1n}$ generated by the elements $a\otimes b$ with $\deg a=\deg b$; $\overline\phi$ is in fact an isomorphism, and we can easily construct an inverse of $\overline\phi$.
Any help is welcome. Thanks!
(If it's not considered a research-level problem, moderators can tacitly migrate this post to MSE.)
|
Optimization problems arising in intelligent systems are similar to those studied in other fields (such as operations research, control, and computational physics), but they have some prominent features that set them apart, and which are not addressed by classic optimization methods. Numerical optimization is a domain where probabilistic numerical methods offer a particularly interesting theoretical contribution.
One key issue is computational noise. Big Data problems often have the property that computational precision can be traded off against computational cost. One of the most widely occurring problem structures is that one has to find a (local) optimum of a function $L$ that is the sum of many similar terms, each arising from an individual data point $y_i$:
$$L(x) = \frac{1}{N}\sum_{i = 1} ^N \ell(y_i,x) $$
Examples of this problem include the training of neural networks, of logistic regressors, and of many other linear and nonlinear regression/classification algorithms. If the dataset is very large or even infinite, it is impossible, or at least inefficient, to evaluate the entire sum. Instead, one draws $M\ll N$ (hopefully representative) samples $y_j$ from some distribution and computes the approximation
$$\hat{L}(x) = \frac{1}{M} \sum_{j=1} ^M \ell(y_j,x) \approx L(x)$$
If the representers $y_j$ are drawn independently and identically from some distribution, then this approximation deviates, relative to the true $L(x)$, by an approximately Gaussian disturbance.
Classic optimizers like quasi-Newton methods are unstable under these disturbances, hence the popularity of first-order methods, like stochastic gradient descent (sgd), in deep learning. But even such simple methods become harder to use in the stochastic domain. In particular, sgd and its variants exhibit free parameters (e.g. step-sizes / learning rates) in the stochastic setting, even though such parameters can be easily tuned automatically in the noise-free case. Thus, even at the world's leading large AI companies, highly trained engineers spend their work time hand-tuning parameters by repeatedly running the same training routine on high-performance hardware: a very wasteful use of both human and hardware resources. A NeurIPS workshop organized by us in 2016 highlighted the urgency of this issue.
The probabilistic perspective offers a clean way to capture this issue: it simply amounts to changing the likelihood term of the computation from a point measure on $L(x)$ to a Gaussian distribution $p(\hat{L}\mid L) = \mathcal{N}(\hat{L};L,\Sigma)$. This seemingly straightforward formulation immediately offers an analytical avenue to understand why existing optimizers fundamentally require hand-tuning: while a point measure only has a single parameter (the location), a Gaussian has two parameters, mean and (co-)variance. But the latter does not feature in classic analysis, and is simply unknown to the algorithm. It is possible to show [ ] that this lack of information can make certain parameters (such as step sizes) fundamentally unidentified. Identifying them requires not just new algorithms, but also concretely computing a new object: in addition to batch gradients, also batch square gradients, to empirically estimate the variance. Doing so is not free, but it has low and bounded computational cost [ ], because it can re-use the back-prop pass, the most expensive part of deep learning training steps.
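A minimal sketch of this variance estimate on a toy least-squares problem (the model, data, and batch size are illustrative assumptions; real implementations hook into the back-prop pass instead):

import numpy as np

rng = np.random.default_rng(0)
N, M, d = 10000, 64, 5
A = rng.normal(size=(N, d))
y = A @ rng.normal(size=d) + 0.1 * rng.normal(size=N)
x = np.zeros(d)                                       # current parameters

batch = rng.choice(N, size=M, replace=False)
residual = A[batch] @ x - y[batch]
per_sample_grads = 2 * residual[:, None] * A[batch]   # gradient of each ell(y_j, x)
g_hat = per_sample_grads.mean(axis=0)                 # batch gradient estimate
g_var = per_sample_grads.var(axis=0) / M              # estimated variance of g_hat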
Over the years we have built a series of tools that use this additional quantity to tune various learning hyperparameters such as the learning rate [ ] [ ] [ ], batch size [ ] and stopping criterion [ ]. We have also contributed theoretical analysis for some of the most popular deep learning optimizers [ ] and are now working towards a new code platform for the automation of deep learning in the inner loop, to free practitioners' hands to build models, rather than tune algorithms.
|
This one can be done with "residue at infinity" calculation. This method is shown in the Example VI of http://en.wikipedia.org/wiki/Methods_of_contour_integration .
First, we use $z^z = \exp ( z \log z )$ where $\log z$ is defined for $-\pi\leq \arg z < \pi$.
For $(1-z)^{1-z} = \exp ( (1-z)\log (1-z) )$, we use $\log (1-z)$ defined for $0\leq \arg(1-z) <2\pi$.
Then, let $f(z)= \exp( i\pi z + z \log z + (1-z) \log (1-z) )$.
As shown in the Ex VI in the wikipedia link, we can prove that $f$ is continuous on $(-\infty, 0)$ and $(1,\infty)$, so that the cut of $f(z)$ is $[0,1]$.
We use a contour consisting of an upper segment slightly above $[0,1]$, a lower segment slightly below $[0,1]$, a circle of small radius enclosing $0$, and a circle of small radius enclosing $1$, so that it looks like a dumbbell with knobs at $0$ and $1$. (Can someone edit this and include a picture of it, please? In fact, this is the same contour as in Ex VI, with different endpoints.)
On the upper segment, the function $f$ gives, for $0\leq r \leq 1$, $$\exp(i\pi r) r^r (1-r)^{1-r} \exp( (1-r) 2\pi i ).$$
On the lower segment, the function $f$ gives, for $0\leq r \leq 1$, $$\exp(i\pi r) r^r(1-r)^{1-r}. $$
Since the functions are bounded, the integrals over the circles vanish as the radii tend to zero.
Thus, the integral of $f(z)$ over the contour, is the integral over the upper and lower segments, which contribute to
$$\int_0^1 \exp(i\pi r) r^r (1-r)^{1-r} dr - \int_0^1 \exp(-i\pi r) r^r(1-r)^{1-r} dr$$
which is $$2i \int_0^1 \sin(\pi r) r^r (1-r)^{1-r} dr.$$
By the Cauchy residue theorem, the integral over the contour is $$-2\pi i \,\textrm{Res}_{z=\infty} f(z) = 2\pi i \,\textrm{Res}_{z=0} \frac{1}{z^2} f\left(\frac 1 z\right).$$
From a long and tedious calculation of the residue, it turns out that the value on the right is $$2i \frac{\pi e}{24}.$$ Then we have the result: $$ \int_0^1 \sin(\pi r) r^r (1-r)^{1-r} dr = \frac{\pi e}{24}.$$
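A quick numerical sanity check of this closed form (a sketch using standard quadrature):

import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda r: np.sin(np.pi * r) * r**r * (1 - r)**(1 - r), 0, 1)
print(val, np.pi * np.e / 24)     # both approximately 0.35582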
|
Ok, here is how I see these issues now after taking a bit more time to think about them:
In the first paper it is said that the Euler equations emerge as the infinite-Reynolds-number limit of the Navier-Stokes equations, which means that, according to the definition of the Reynolds number
$$ RE = \frac{V L}{\nu} $$
molecular diffusion can be neglected in this limit. Since in fully developed turbulence all scales are expected to contribute (there is no characteristic scale present) and molecular diffusion can be neglected at large scales, the fixed point corresponding to the Euler equations is, from an RG point of view, a critical IR fixed point. However, as mentioned here, when looking at LES dynamic subgrid-scale parameterizations from an RG point of view, speaking of a (scale-invariant) fixed point is not exactly justified because the rescaling is missing in the renormalization step, and the IR limit the system approaches when repeating this modified renormalization step is more accurately called a limit point.
Looking at the non-uniqueness of this fixed (or limit) point mentioned in the first paper from the point of view explained around p. 67 of the second paper, this non-uniqueness does not mean that the N+1 parameters span some kind of higher-dimensional generalization of a line of fixed points, as the corresponding operators are relevant rather than redundant or marginal. What is meant instead is that, when doing a linear analysis of the RG flow around the fixed (limit) point such that the nearby action is given by
$$ S_t(\phi) = S_{*}(\phi) + \sum\limits_i \alpha_i e^{\lambda_i t}O_i(\phi) $$
where $S_{*}(\phi)$ is the action at the fixed (limit) point and the $\alpha_i$ are integration constants, there are N+1 relevant operators $O_i(\phi)$ for a system with N mixing species, with eigenvalues $\lambda_i > 0$ determined from the eigenvalue equation
$$ M O_i(\phi) = \lambda_i O_i(\phi) $$
The non-uniqueness alluded to in the first paper corresponds to the fact that the integration constants $\alpha_i$ are not determined by the renormalization procedure itself but as explained in the second paper have to be determined by the bare action or perfect action which lies on a renormalized trajectory.
In the context of Large Eddy Simulations (LES) that make use of dynamic subgrid scale parameterizations for turbulent diffusion for example, it is possible to dispense with the non-uniqueness by calculating the corresponding integration constant directly from the resolved scale by making use of the Germano identity Eq. (4.2) in the second paper and application of the Smagorinsky scheme to calculate a dynamic mixing length.
|
Ex.12.1 Q4 Areas Related to Circles Solution - NCERT Maths Class 10 Question
The wheels of a car are of diameter \(80\, \rm{cm}\) each. How many complete revolutions does each wheel make in \(10\) minutes when the car is traveling at a speed of \(66\, \rm{km}\) per hour?
Text Solution
What is known?
Diameter of the wheel of the car and the speed of the car.
What is unknown?
Revolutions made by each wheel.
Reasoning:
Distance travelled by the wheel in one revolution is nothing but the circumference of the wheel itself.
Steps:
Diameter of the wheel of the car \(= 80\,\rm{cm}\)
Radius \((r)\) of the wheel of the car \(= 40\,\rm{cm}\)
Distance travelled in \(1\) revolution \(=\) Circumference of wheel
Circumference of wheel
\[\begin{align}&= 2\pi \,{ r}\\& = 2\pi \left( {{\text{40}}} \right)\\&= 80\pi\, \rm{cm}\end{align}\]
Speed of car\(= 66\, \text{km/hour}\)
\[\begin{align}&= \frac{{66 \times {\text{ }}100000}}{{60}}\,{\text{cm/}}\,{\text{min}}\\&= 110000 {\text{ cm/min}}\end{align}\]
Distance travelled by the car in \(10\) minutes
\[\begin{align}&= {\text{ }}110000{\text{ }} \times {\text{ }}10{\text{ }}\\&= {\text{ }}1100000{\text{ cm}}\end{align}\]
Let the number of revolutions of the wheel of the car be \(n\)
\(\rm{n} \times\) Distance travelled in \(1\) revolution \(=\) Distance travelled in \(10\) minutes
\[\begin{align}\\\rm{n} \times 80\pi &= 1100000\\{\text{n}} &= \frac{{1100000 \times 7}}{{80 \times 22}}\\ &= \frac{{35000}}{8}\\&= 4375\end{align}\]
Therefore, each wheel of the car will make \(4375\) revolutions.
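A quick check of the arithmetic (working in centimetres and minutes, with \(\pi = 22/7\) as in the solution):

radius_cm = 40
speed_cm_per_min = 66 * 100000 / 60        # 110000 cm/min
distance_cm = speed_cm_per_min * 10        # 1100000 cm in 10 minutes
circumference_cm = 2 * (22 / 7) * radius_cm
print(distance_cm / circumference_cm)      # 4375 (up to floating-point rounding)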
|
When someone says a real valued function $f(x)$ on $\mathbb{R}$ is finite, does it mean that $|f(x)| \leq M$ for all $x \in \mathbb{R}$ with some $M$ independent of $x$?
I am not familiar with the term finite in this context. One possible definition would be this.
A function $f: A \to B$ is finite if and only if $f(A) \subseteq B$ is finite.
However, I would not use this definition because the relation $f \subseteq A \times B$ is still an infinite set (if $A$ is infinite).
No, it means there are only finitely many values $y$ such that $f(x)=y$ for some $x$. For example, a function taking only the values $0,1,2,\dots,n$ with $n\in\mathbb{N}$ is finitely valued; it happens to be bounded as well, but not all bounded functions are finitely valued: for example $f(x)=x$ restricted to a bounded interval of $\mathbb{R}$ (or of $\mathbb{Q}$) is bounded yet takes infinitely many values, even within any window $|x_i-x_j|<\epsilon$.
Since an extended-real-valued function may have $\mathbb R \cup \{\infty\}$ as its target, it is possible that "finite function" refers to the case where $f(x) \neq \infty$ for all $x \in \mathbb R$, as for $f(x)=x$ or $f(x)= \frac x {x^2+6}$, while $f(x)=\frac 1x$, for example, is not finite in this sense because $f(0)=\infty$ (by definition or convention).
A function is finite if it never assigns infinity to any element of its domain. Note that this is different from bounded, as $f:\mathbb R \to \mathbb R \cup\{\infty\}$, $f(x)=x^2$, is not bounded since $\lim_{x \to \infty}f(x)=\infty$. However, $f$ is finite since it does not assign $\infty$ to any real number.
In Elias Stein's Real Analysis, at the beginning of Chapter 4.1, it reads: "We shall say that $f$ is finite-valued if $-\infty<f(x)<\infty$ for all $x$."
A convex function $f$ is said to be proper if its epigraph is non-empty and contains no vertical lines, i.e., if $f(x)<+\infty$ for at least one $x$ and $f(x)>-\infty$ for every x. (Section 4, Chapter 1, Convex analysis, Rockafellar, 1997)
|
Shinichi Mochizuki of Kyoto divided the steps needed to prove the 1985 conjecture by Oesterlé and Masser into four papers listed at the bottom of the Nature article above.
Up to a few exceptions to be proved separately, a strengthening of Fermat's Last Theorem.
Four days ago, Nature described a potentially exciting development in mathematics, namely number theory: the newly revealed proof works with mathematical structures such as Hodge theaters (a theater with Patricia Hodge is above, I hope it's close enough) and with canonical splittings of the log-theta lattice (yes, the word "splitting" is appropriate above, too).
What is the conjecture about and why it's important, perhaps more important than Fermat's Last Theorem itself?
First, before I tell you what it is about, let me say that, as shown by Goldfeld in 1996, it is "almost stronger" than Fermat's Last Theorem (FLT) i.e. it "almost implies" FLT. What does "almost" mean? It means that it only implies a weakened FLT in which the exponent has to be larger than a certain large finite number.
I am not sure whether all the exponents for which the \(abc\) theorem doesn't imply FLT have been proved before Wiles or whether the required Goldfeld bound is much higher. Please tell me if you know the answer: what's the minimum Goldfeld exponent?
Recall that Wiles proved Fermat's Last Theorem in 1995 and his complicated proof is based on elliptic curves. That's also true for Mochizuki's new (hopefully correct) proof. However, Mochizuki also uses Teichmüller theory, Hodge-Arakelov theory, log-volume computations and log-theta lattices, and various sophisticated algebraic structures generalizing simple sets, permutations, topologies and matrices. To give an example, one of these objects is the "Hodge theater" which sounds pretty complicated and cultural. ;-)
I am not gonna verify the proof although I hope that some readers will try to do it. But let me just tell everybody what the FLT theorem and the \(abc\) theorem are.
Fermat's Last Theorem says that if positive integers \(a,b,c,n\) obey\[
a^n+b^n = c^n,
\] then it must be that \(n\leq 2\). Indeed, you can find solutions with powers \(1,2\) such as \(2+3=5\) and \(3^2+4^2=5^2\) but you will fail for all higher powers. Famous mathematicians have been trying to prove the theorem for centuries but at most, they were able to prove it for individual exponents \(n\), not the no-go theorem for all values of \(n\).
The \(abc\) conjecture says the following.
For any (arbitrarily small) \(\epsilon\gt 0\), there exists a (large enough but fixed) constant \(C_\epsilon\) such that for every triplet of relatively prime (i.e. having no common divisor) integers \(a,b,c\) that satisfies \[
a+b=c
\] the following inequality holds:\[
\Large \max (\abs a, \abs b, \abs c) \leq C_\epsilon \prod_{p|(abc)} p^{1+\epsilon}.
\] That's it. I used larger fonts because it's a key inequality of this blog entry.
In other words, we're trying to compare the maximum of the three numbers \(\abs a,\abs b,\abs c\) with the "square-free part" of their product (the product in which we eliminate all copies of primes that appear more than once in the decomposition). The comparison is such that if the "square-free part" is raised to a power \(1+\epsilon\), slightly greater than one but arbitrarily close to it, the result will typically still be smaller than \(abc\), and often smaller than \(a,b,c\) themselves, but the factor by which it falls short of \(\max(\abs a,\abs b,\abs c)\) never exceeds a bound, \(C_\epsilon\), that may be chosen to depend on \(\epsilon\) but is not allowed to depend on \(a,b,c\).
See e.g. Wolfram Mathworld for a longer introduction.
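To play with the inequality numerically, here is a small sketch (the helper function and the sample triple are my own illustrative choices):

def rad(n):
    # product of the distinct prime factors of n (trial division; fine for small n)
    out, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            out *= p
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        out *= n
    return out

a, b = 1, 8                            # 1 + 8 = 9, a well-known high-quality triple
c = a + b
print(max(a, b, c), rad(a * b * c))    # 9 versus rad(72) = 6: the bare radical can lose

The conjecture says that once the radical is raised to the power \(1+\epsilon\) and multiplied by \(C_\epsilon\), such losses are bounded uniformly over all triples.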
Now, you may have been motivated to jump to the hardcore maths and verify all the implications and proofs that have been mentioned above or construct your own.
|
Electronic Journal of Probability (Electron. J. Probab.), Volume 23 (2018), paper no. 6, 48 pp.
Pinning of a renewal on a quenched renewal
Abstract
We introduce the pinning model on a quenched renewal, which is an instance of a (strongly correlated) disordered pinning model. The potential takes value 1 at the renewal times of a quenched realization of a renewal process $\sigma $, and $0$ elsewhere, so nonzero potential values become sparse if the gaps in $\sigma $ have infinite mean. The “polymer” – of length $\sigma _N$ – is given by another renewal $\tau $, whose law is modified by the Boltzmann weight $\exp (\beta \sum _{n=1}^N \mathbf{1} _{\{\sigma _n\in \tau \}})$. Our assumption is that $\tau $ and $\sigma $ have gap distributions with power-law-decay exponents $1+\alpha $ and $1+\tilde \alpha $ respectively, with $\alpha \geq 0,\tilde \alpha >0$. There is a localization phase transition: above a critical value $\beta _c$ the free energy is positive, meaning that $\tau $ is
pinned on the quenched renewal $\sigma $. We consider the question of relevance of the disorder, that is, to know when $\beta _c$ differs from its annealed counterpart $\beta _c^{\mathrm{ann} }$. We show that $\beta _c=\beta _c^{\mathrm{ann} }$ whenever $ \alpha +\tilde \alpha \geq 1$, and $\beta _c=0$ if and only if the renewal $\tau \cap \sigma $ is recurrent. On the other hand, we show $\beta _c>\beta _c^{\mathrm{ann} }$ when $ \alpha +\frac 32\, \tilde \alpha <1$. We give evidence that this should in fact be true whenever $ \alpha +\tilde \alpha <1$, providing examples for all such $ \alpha ,\tilde \alpha $ of distributions of $\tau ,\sigma $ for which $\beta _c>\beta _c^{\mathrm{ann} }$. We additionally consider two natural variants of the model: one in which the polymer and disorder are constrained to have equal numbers of renewals ($\sigma _N=\tau _N$), and one in which the polymer length is $\tau _N$ rather than $\sigma _N$. In both cases we show the critical point is the same as in the original model, at least when $ \alpha >0$.
Article information
Source: Electron. J. Probab., Volume 23 (2018), paper no. 6, 48 pp.
Dates: Received: 27 April 2017; Accepted: 3 January 2018; First available in Project Euclid: 12 February 2018
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1518426053
Digital Object Identifier: doi:10.1214/18-EJP136
Mathematical Reviews number (MathSciNet): MR3771743
Zentralblatt MATH identifier: 1390.60341
Subjects: Primary: 60K35: Interacting random processes; statistical mechanics type models; percolation theory [See also 82B43, 82C43]. Secondary: 60K05: Renewal theory; 60K37: Processes in random environments; 82B27: Critical phenomena; 82B44: Disordered systems (random Ising models, random Schrödinger operators, etc.)
Citation
Alexander, Kenneth S.; Berger, Quentin. Pinning of a renewal on a quenched renewal. Electron. J. Probab. 23 (2018), paper no. 6, 48 pp. doi:10.1214/18-EJP136. https://projecteuclid.org/euclid.ejp/1518426053
|