I am thinking of the property in probability of inequality. In particular, we assume \begin{equation} P[\zeta>a]\leq b, \end{equation} where $a>0$, $b>0$ and $\zeta\in R$ is a random variable.
Now we would like to consider whether the inequality of $P[\zeta^2>a^2]\leq b$ holds.
In fact, for a differentiable and monotonically increasing transformation, e.g., the exponential function, the inequality holds. That is, $P[\exp(\zeta)>\exp(a)]\leq b$.
Can someone give hints for me on this issue? Thanks a lot in advance.
|
In a previous post, I applied my rules-of-thumb for
response time (RT) percentiles (or more accurately, residence time in queueing theory parlance), viz., 80th percentile: $R_{80}$, 90th percentile: $R_{90}$ and 95th percentile: $R_{95}$ to a cellphone application and found that the performance measurements were not completely consistent. Since the data appeared in a journal blog, I didn’t have enough information to resolve the discrepancy; which is ok. The first job of the performance analyst is to flag performance anomalies but most probably let others resolve them—after all, I didn’t build the system or collect the measurements.
More importantly, that analysis was for a
single server application (viz., time-to-first-fix latency). At the end of my post, I hinted at adding percentiles to PDQ for multi-server applications. Here, I present the corresponding rules-of-thumb for the more ubiquitous multi-server applications.
Single-server Percentiles
First, let’s summarize the Guerrilla rules-of-thumb for single-server percentiles (M/M/1 in queueing parlance): \begin{align} R_{1,80} &\simeq \dfrac{5}{3} \, R_{1} \label{eqn:mm1r80}\\ R_{1,90} &\simeq \dfrac{7}{3} \, R_{1}\\ R_{1,95} &\simeq \dfrac{9}{3} \, R_{1} \label{eqn:mm1r95} \end{align} where $R_{1}$ is the statistical mean of the measured or calculated RT and $\simeq$ denotes
approximately equal. A useful mnemonic device is to notice the numerical pattern for the fractions. All denominators are 3 and the numerators are successive odd numbers starting with 5.
Multi-server Percentiles
A multi-server analysis (M/M/m in queueing parlance) can be applied to such things as application servers, like Weblogic or Websphere. The corresponding percentile approximations are a bit more complicated in that they involve
two terms and require knowing not just the mean of the RT $R_{m}$, but also the standard deviation $R_{sd}$. \begin{align} R_{m,80} &\simeq R_{m} + \dfrac{2}{3} R_{sd} \label{eqn:mm8r80}\\ R_{m,90} &\simeq R_{m} + \dfrac{4}{3} R_{sd}\\ R_{m,95} &\simeq R_{m} + \dfrac{6}{3} R_{sd} \label{eqn:mm8r95} \end{align} The fractions are now applied to the standard deviation term, not the mean. Their numerical pattern is also slightly different from the single-server case in that the numerators are successive even numbers starting with 2. This difference arises from the fact that equations $\eqref{eqn:mm1r80}$—$\eqref{eqn:mm1r95}$ can be rewritten as: \begin{align} R_{1,80} &\simeq \dfrac{3}{3} \, R_{1} + \dfrac{2}{3} R_{sd} \nonumber \\ R_{1,90} &\simeq \dfrac{3}{3} \, R_{1} + \dfrac{4}{3} R_{sd} \nonumber \\ R_{1,95} &\simeq \dfrac{3}{3} \, R_{1} + \dfrac{6}{3} R_{sd} \nonumber \end{align}
In the case of an M/M/1 queue, however, the standard deviation of the RT is identical to the mean of the RT (i.e., $R_{sd} = R_{1}$), so these equations reduce to $\eqref{eqn:mm1r80}$—$\eqref{eqn:mm1r95}$. This simplification is not possible for an M/M/m queue.
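As a quick sanity check (not in the original post), the single-server rules-of-thumb can be compared with the exact M/M/1 percentiles, since the M/M/1 response time is exponentially distributed with mean $R_{1}$ (normalized to 1 below):

# Not from the original post: exact M/M/1 percentiles vs. the Guerrilla rules of thumb
p <- c(0.80, 0.90, 0.95)
round(qexp(p, rate = 1), 3)   # 1.609 2.303 2.996  (exact, in units of R1)
c(5, 7, 9) / 3                # 1.667 2.333 3.000  (rules of thumb 5/3, 7/3, 9/3)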
Example: 8-Way Server
To establish the validity of equations $\eqref{eqn:mm8r80}$—$\eqref{eqn:mm8r95}$, I built a simulation of an 8-way server, i.e., M/M/8, that assumes exponential arrival and service periods. The throughput was set to 40 requests per second with a service rate of 6 requests per second. That makes each server 83.33% busy. The simulation was run for a million time-steps with the RT logged for each request. This produced more than 650,000 raw data samples, which were then analyzed using the following R code.
# Fast read of near 1-million log file
require(data.table)
system.time(dt <- fread("/Users/.../logfile.data"))
df <- as.data.frame(dt)
Next, we need to determine the time spent in the multi-server by taking the difference between the arrival time and the departure time. In other words, we need to find each pair of times belonging to the same request id. The simplest logic in R is something like this:
deltaR <- 0  # residence time per request
for(j in 0:maxpair) {
  rowz <- which(df$JOB_ID == j)
  deltaR[j+1] <- df$TIMESTAMP[rowz[2]] - df$TIMESTAMP[rowz[1]]
}
but you need to know the maximum request ID that has a corresponding pair. Even worse, because of the way
which() has to span the data frame, this code took more than 15 minutes on a quad-core Macbook Air with 8 GB of RAM. I was able to gain a 5x speedup by first sorting the data using the request ID to produce adjacent pairs and then simply alternating over these pairs.
# sort job IDs into pairs
df <- df[order(df$JOB_ID),]
# re-index sorted rows
dfs <- data.frame(df$LOGGERNAME, df$TIMESTAMP, df$JOB_ID)
deltaR <- 0     # residence time per request
i <- 1          # sample counter
time1 <- TRUE
for(row in 1:dim(df)[1]) {
  if(time1) {
    t1 <- df$TIMESTAMP[row]
    id1 <- df$JOB_ID[row]
    time1 <- FALSE
  } else {
    id2 <- df$JOB_ID[row]
    if(id2 != id1) next  # near EOF so skip it
    t2 <- df$TIMESTAMP[row]
    deltaR[i] <- t2 - t1
    i <- i + 1
    time1 <- TRUE
  }
}
Another benefit of this approach is that I didn’t need to examine the log file to see which requests started but didn’t complete, i.e., didn’t form a pair.
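For completeness, here is a vectorized sketch of the same pairing idea (not from the original post). It assumes requests without a matching pair have already been dropped, so that every JOB_ID occurs exactly twice after sorting; with that caveat it avoids the explicit loop entirely.

# Vectorized sketch (assumes every JOB_ID occurs exactly twice)
df2 <- df[order(df$JOB_ID, df$TIMESTAMP), ]
ts <- df2$TIMESTAMP
deltaR2 <- ts[seq(2, length(ts), by = 2)] - ts[seq(1, length(ts) - 1, by = 2)]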
Since we have computed pairwise differences over roughly two-thirds of a million timestamps, there are about one-third of a million instantaneous RT samples, shown in Figure 1. We can then determine the sample mean and standard deviation of these individual response times.
(Rmu <- mean(deltaR))
[1] 0.2341104
(Rsd <- sd(deltaR))
[1] 0.1999741
These values can now be substituted directly into equations $\eqref{eqn:mm8r80}$—$\eqref{eqn:mm8r95}$ to determine the multi-server percentiles.
(R80 <- Rmu + 2*Rsd/3)
[1] 0.3674265
(R90 <- Rmu + 4*Rsd/3)
[1] 0.5007425
(R95 <- Rmu + 6*Rsd/3)
[1] 0.6340586
We can check these values against the
quantile function in R:
quantile(deltaR, c(0.80, 0.90, 0.95))
      80%       90%       95%
0.3707738 0.5002468 0.6268308
The following table summarizes and compares the response-time percentiles evaluated in three different ways to four significant digits of precision.
Percentile | Simulation (Empirical) | Simulation (Equations) | Analytic (Equations)
R80        | 0.3708                 | 0.3674                 | 0.3666
R90        | 0.5002                 | 0.5007                 | 0.4999
R95        | 0.6268                 | 0.6341                 | 0.6332

The Empirical column shows the results of calculating the percentiles manually or using a function like quantile() applied to the raw response-time data.
The Equations column, in the Simulation section, shows the results of first calculating the mean and standard deviation for the raw data and then applying equations $\eqref{eqn:mm8r80}$—$\eqref{eqn:mm8r95}$. The Equations column, in the Analytic section, shows the results of first calculating the mean and standard deviation analytically for an M/M/8 queuing model and then applying equations $\eqref{eqn:mm8r80}$—$\eqref{eqn:mm8r95}$.
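For reference, the Analytic column can be reproduced with a short R sketch (not from the original post), assuming the standard Erlang-C decomposition of the M/M/m response time into an exponential service time plus a waiting time that is zero with probability 1 - C and exponential with rate m*mu - lambda otherwise:

# Analytic M/M/8 mean and standard deviation of the response time via Erlang C
lambda <- 40; mu <- 6; m <- 8                  # arrival rate, per-server service rate, servers
a   <- lambda / mu                             # offered load
rho <- a / m                                   # per-server utilization (83.33%)
top <- (a^m / factorial(m)) / (1 - rho)
C   <- top / (sum(a^(0:(m-1)) / factorial(0:(m-1))) + top)   # Erlang C: P(wait > 0)
Rmean <- 1/mu + C / (m*mu - lambda)            # 0.2333
Rsd   <- sqrt(1/mu^2 + C*(2 - C)/(m*mu - lambda)^2)          # 0.2000
Rmean + c(2, 4, 6)/3 * Rsd                     # 0.3666 0.4999 0.6332 (the Analytic column)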
Although there is no closed-form analytic expression for multi-server response-time percentiles (as there is for a single-server), the Guerrilla approximations $\eqref{eqn:mm8r80}$—$\eqref{eqn:mm8r95}$ are just as utilitarian as $\eqref{eqn:mm1r80}$—$\eqref{eqn:mm1r95}$.
Percentiles in PDQ
Although I haven’t implemented these percentiles in PDQ yet, it’s now clear to me that, rather than trying to include them in the generic PDQ Report (which could really explode its size for a model with many PDQ nodes), it would be better to have a separate function that can be called ad hoc. I’ll be discussing these results in more detail in the 2014 Guerrilla classes. In the meantime …
Merry Xmas!
|
Prefacing by mentioning that this method works exactly like Ross's above; however, it may appear a bit more tangible to use.
Assuming the coin is fair and $Pr(H)=Pr(T)=\frac{1}{2}$
This can be described with a Markov Chain with seven states (including a start and a placeholder state which can only be entered after a single flip). Denoting the states by the most recent coin flips (except $*H$ which denotes the first coin flip was a heads with no prior flips), we have the transition diagram:
Matrix with order $HHH, THT, *H, .T, .TH, .HH, Start$
$\begin{bmatrix}1 & 0 & 0 & 0 & 0 & h & 0\\0 & 1 & 0 & 0 & t & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0 & h\\0 & 0 & t & t & 0 & t & t\\0 & 0 & 0 & h & 0 & 0 & 0\\0 & 0 & h & 0 & h & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$
This is a matrix in the form $A=\begin{bmatrix} I & S \\ 0 & R\end{bmatrix}$, which gives the limiting matrix $\lim\limits_{n\to\infty} A^n = \begin{bmatrix} I & S(I-R)^{-1}\\0 & 0\end{bmatrix}$.
Calculating the fundamental matrix, $(I-R)^{-1}$ will tell us what the expected game length will be.
Replacing $h$ and $t$ with $\frac{1}{2}$ we get:
$(I-R)^{-1} = \begin{bmatrix} 1 & 0 & 0 & 0 & .5\\2 & 8/3 & 2/3 & 4/3 & 7/3\\1 & 4/3 & 4/3 & 2/3 & 7/6\\1 & 2/3 & 2/3 & 4/3 & 5/6\\0 & 0 & 0 & 0 & 1\end{bmatrix}$
As our initial state configuration was entirely in the $Start$ state, by multiplying by $[0,0,0,0,1]^{T}$ on the right, we get $5+\frac{5}{6}$ as the number of turns on average the game is played for. In general, the fundamental matrix will tell you how many time increments on average until it reaches an absorbing state given some initial distribution (and as Ross points out, we could have done away with the $*H$ and the $Start$ locations, and noted that after two flips, half of the time we will be in the $.T$ state, a quarter of the time we will be in the $.TH$ state, and a quarter of the time we will be in the $.HH$ state).
To satisfy our curiosity, we may finish calculating the limiting matrix:
$\begin{bmatrix} 1 & 0 & .5 & 1/3 & 1/3 & 2/3 & 5/12\\0 & 1 & .5 & 2/3 & 2/3 & 1/3 & 7/12\\0 & 0 & 0 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0\\0 & 0 & 0 & 0 & 0 & 0 & 0\end{bmatrix}$
Which by multiplying by our initial state vector, shows us that Alice has a $\frac{5}{12}$ chance of winning, and Bob has a $\frac{7}{12}$ chance of winning.
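Not part of the original answer, but the whole calculation is easy to reproduce numerically; here is a minimal R sketch with $h=t=\frac{1}{2}$ and the same state order (columns are "from" states, rows are "to" states):

# Transient states in the order *H, .T, .TH, .HH, Start
h <- 1/2; t <- 1/2
R <- rbind(c(0, 0, 0, 0, h),    # *H
           c(t, t, 0, t, t),    # .T
           c(0, h, 0, 0, 0),    # .TH
           c(h, 0, h, 0, 0),    # .HH
           c(0, 0, 0, 0, 0))    # Start
S <- rbind(c(0, 0, 0, h, 0),    # HHH
           c(0, 0, t, 0, 0))    # THT
N <- solve(diag(5) - R)         # fundamental matrix (I - R)^{-1}
sum(N[, 5])                     # 5.8333... = 5 + 5/6 expected flips from Start
(S %*% N)[, 5]                  # 5/12 (absorbed at HHH), 7/12 (absorbed at THT)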
|
In Artin's book on Algebra, the author states a theorem (Ch. 9, Thm. 2.2):
"A finite subgroup $G$ of $GL(n,\mathbb{C})$ is conjugate to a subgroup of $U(n)$."
Here, $U(n)$ is the unitary group, i.e. if $\langle \,\,, \rangle$ is the standard Hermitian inner product on $\mathbb{C}^n$ given by $\langle (x_1,\cdots,x_n),(y_1,\cdots,y_n)\rangle=\sum_{}{x_i\bar{y_i}}$, then $U(n)=\{A\in GL(n,\mathbb{C}) \colon \langle Av,Aw\rangle = \langle v,w\rangle,\forall v,w\in \mathbb{C}^n \}$.
The proof is:
There is a $G$-invariant Hermitian inner product $\langle\,\, , \rangle_1$ on $\mathbb{C}^n$; consider an orthonormal basis $B_1$ w.r.t. this form. If $P$ is the matrix of the transformation which changes the standard basis to $B_1$, then $PGP^{-1}\leq U(n)$.
Question: In the last statement of the proof, the author says that $PGP^{-1}$ is a subgroup of the unitary group; but this unitary group corresponds to the Hermitian inner product $\langle \,\,, \rangle_1$, i.e. it is the group of linear transformations which preserve the inner product $\langle \,\,, \rangle_1$. Why should $PGP^{-1}$ be a subgroup of $U(n)$, which is the group of linear transformations that preserve the standard Hermitian inner product $\langle \,\,,\rangle$ on $\mathbb{C}^n$?
|
I’ve avoided doing any manifold courses (and am regretting it somewhat), but I do have some understanding. Let $p$ be a point on a surface $S:U\to \Bbb{R}^3$; we define:
The tangent space to $S$ at $p$, $T_p(S)=\{k\in\Bbb{R}^3\mid\exists\textrm{ a curve }\gamma:(-ε,ε)\to S\textrm{ with }\gamma(0)=p,\gamma'(0)=k\}$.
The tangent plane to $S$ at $p$ as the plane $p+T_p(S)\subseteq\Bbb{R}^3$.
My current understanding is, in the diagram below the tangent plane is the plane shown, whilst the tangent space would be p minus each element of the plane, hence the corresponding plane passing through the origin. Is this correct or is it incorrect? I’m doing a course called geometry of curves and surfaces and being unsure about this is making understanding later topics difficult.
Edit - can't post images, here's a link instead! http://standards.sedris.org/18026/text/ISOIEC_18026E_SRF/image022.jpg
Thanks!
|
On the uniqueness of bound state solutions of a semilinear equation with weights
Departamento de Matemática, Pontificia Universidad Católica de Chile, Casilla 306, Correo 22, Santiago, Chile
$ \mbox{div}\big(\mathsf A\, \nabla v\big)+\mathsf B\, f(v) = 0\, , \quad\lim\limits_{|x|\to+\infty}v(x) = 0, \quad x\in\mathbb R^n, ~~~~{(P)} $
Mathematics Subject Classification: 35J61, 35A02.
Citation: Carmen Cortázar, Marta García-Huidobro, Pilar Herreros. On the uniqueness of bound state solutions of a semilinear equation with weights. Discrete & Continuous Dynamical Systems - A, 2019, 39 (11): 6761-6784. doi: 10.3934/dcds.2019294
|
Attractors for first order lattice systems with almost periodic nonlinear part
Department of Mathematics, The University of Jordan, Amman 11942, Jordan
$ \begin{equation*} \overset{.}{u}+Au+\alpha u+f\left( u,t\right) = g\left( t\right) ,\,\,\left( g,f\right) \in \mathcal{H}\left( \left( g_{0},f_{0}\right) \right) ,t>\tau ,\tau \in \mathbb{R}, \end{equation*} $
$ \begin{equation*} u\left( \tau \right) = u_{\tau }. \end{equation*} $
Keywords: Non-autonomous lattice dynamical system, uniform absorbing set, uniform global attractor, almost periodic symbol.
Mathematics Subject Classification: Primary: 37L30; Secondary: 37L60.
Citation: Ahmed Y. Abdallah. Attractors for first order lattice systems with almost periodic nonlinear part. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2019218
|
The problem
Let us consider a row vector u of size $n\in\mathbb{N}$, containing only binary values (0,1): $$u=(u_1 \cdots u_n), n\in\mathbb{N}$$ $$\forall i \in \{1\ldots n\}, u_i \in\{0,1\}$$
I would like to define notions of
consistency and heterogeneity (for lack of better terminologies...), in order to quantify how the vector values are distributed. Concretely, I would like to mathematically quantify the difference between this (ideal) vector $$u=(0,0,0,0,1,1,1)$$ where I can find a rank k such that $$\forall i \in \{2\ldots k\} \ \ u(i)=u(i-1) $$ and $$ \forall i \in \mathbb{N}, i>k+2 \ \ u(i)=u(i-1)$$ and this vector $$v=(0,1,0,1,0,1,0)$$ where I can't. In practice, my vectors (which represent experimental data) will most likely look like this: $$ u=(0,0,0,1,0,0,0,1,1,1,1,0,1,1) $$ and I will have to determine empirically up to what point I can consider a vector to contain "homogeneous" values which can be "grouped" into two sets. In this case, the first 7 components (0,0,0,1,0,0,0) would be one set, the next 7 components (1,1,1,1,0,1,1) another set.
Definitions
Heterogeneity
I would define the concept of heterogeneity $\mathscr{H}$ of the $m-s+1$ sub-values (ranging from $u(s)$ to $u(m)$) of a vector $u$ as follows: $$\mathscr{H}(u,s,m)=\sum_{k=s}^{m-1}{|u_{k+1}-u_{k}|}$$ $$s\in\{1...n\},\ m\in\{1...n\},\ n=size(u),\ s<m$$ Example: $$u=(0,0,1,1,1,0,0)$$ $$\mathscr{H}(u,3,5)=0$$
Perfectly consistent vector
A given vector $u$ of size $n$ is said to be perfectly consistent if and only if $$\mathscr{H}(u,1,n)=0$$ Example: $$u=(0,0,0,0,0,0,0)$$ and $$v=(1,1,1,1,1,1,1)$$ are perfectly consistent vectors of size 7.
Perfectly homogeneous vector
A given vector u of size n is said to be
perfectly homogeneous if and only if$$\exists i, i\in\{2...n\}, \mathscr{H}(u,1,i-1)=\mathscr{H}(u,i,n)=0$$The vector u is then said to be homogeneous of rank i. For instance: $$u=(0,0,0,1,1,1,1)$$ is homogeneous of rank 4.
I most likely need to refine this definition, but you get the idea.
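Not part of the original question, but a minimal R sketch of $\mathscr{H}$ as defined above (the function name H is just an illustrative choice) reproduces the examples:

H <- function(u, s, m) sum(abs(diff(u[s:m])))   # heterogeneity of the sub-vector u[s..m]
u <- c(0, 0, 1, 1, 1, 0, 0)
H(u, 3, 5)            # 0, matching the example above
H(u, 1, length(u))    # 2 jumps over the whole vector
v <- c(0, 1, 0, 1, 0, 1, 0)
H(v, 1, length(v))    # 6 = n - 1, the maximum for n = 7 (perfectly heterogeneous)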
Perfectly heterogeneous vector
A given vector u of size n is said to be
perfectly heterogeneous if and only if $$\mathscr{H}(u,1,n)=\max_{v,\ size(v)=n}\mathscr{H}(v,1,n) $$ i.e. heterogeneity is maximal over all possible vectors of size $n$ containing binary values (0,1). Example: $$u=(0,1,0,1,0,1,0)$$ is a perfectly heterogeneous vector of size 7. So is $$v=(1,0,1,0,1,0,1)$$
My question
Assuming what I wrote above makes sense, I feel I am reinventing the wheel. This surely exists and has been done already somewhere, but my lack of mathematical knowledge prevents me from even knowing what to google and where to look. Could anyone point me toward useful resources that would help me achieve what I am trying to do here? Is there a better way to name those concepts, instead of "heterogeneity" and "consistency"? Is what I am trying to do a particular case of a broader field, from which I could use the results / theorems with particular conditions?
Misc. considerations
* This is my first time writing maths in English and on MathOverflow. Any help on improving the quality of this post will be greatly appreciated.
* I have very basic maths knowledge from undergrad. I need this for modeling correlations in a psychology experiment.
* The value of the rank i of a homogeneous vector does not matter. What matters is that, as much as possible, my vectors contain two clear "groups" of 0 and 1.
* I am not sure about the tags for this question. If you have any recommendation, I'd be grateful as well.
|
Help:Editing
The MediaWiki software is extremely easy to use. Viewing pages is self-explanatory, but adding new pages and making edits to existing content is extremely easy and intuitive as well. No damage can be done that can't be easily fixed. Although there are some differences, editing SDIY wiki is much the same as editing on Wikipedia.
Editing the wiki
By default the enhanced editing toolbar is disabled. To enable it go to Preferences:Editing and tick
Enable enhanced editing toolbar.
At the top of any wiki page, you will see some tabs titled Page, Discussion, Edit, History, Move and Watch. Clicking the Edit tab opens the editor, a large text entry box in the middle of the page. This is where to enter plain text. Very little formatting code (known as "wiki markup") is required, unlike regular websites using HTML and CSS. At the top of this text entry box is a row of buttons with small icons on them. Holding the mouse cursor over an icon displays a tool-tip telling you its function. These buttons make it very simple to use the formatting features of the wiki software. You can achieve the same effect by typing the correct wiki code; however, using the buttons makes it very simple and also eases the process of learning the correct code syntax. Please do your best to always fill in the edit summary field. An enhanced editor can be enabled in user preferences.
Use a sandbox page
Use the sandbox page to play around and experiment with editing. It isn't for formal wiki info, just a place to play and explore. Any content here won't be preserved. You can create your own sandbox area by appending "/sandbox" to the URL of your user page, or click the Sandbox link in the personal toolbar area, if enabled in your preferences. Your own sandbox is where to rough out articles until they're ready for posting. Don't do articles in rough in the main wiki. Sandboxes will be indexed by search engines like any other page, unless the first line is
__NOINDEX__ or uses the template
{{User sandbox}}.
Creating links and adding pages
The third and fourth buttons create an "Internal link" and an "External link". The third button creates an internal link (aka a wikilink) which, in the editor, has the format [[Eurocard]], i.e. surrounded by double square brackets. Use a vertical bar "|" (the "pipe" symbol) to create a link with a different name from the original article, e.g. with [[Printed circuit board|PCB]] only PCB appears on the page.
Only the first occurrence of a link on the page needs to be a link; any further uses of the word/phrase can be in plain text. If the page doesn't exist already, the link will be shown in red text. Following a redlink opens up the editor window for creating that page within the wiki structure. Linking articles in a structured way is the preferred method of adding new pages to the wiki. Except for names, use ordinary sentence case for article titles.
Using the fourth button will make an external link to a page elsewhere on the Internet. This has the form
[http://www.google.com Google], ie. the URL, followed by a space, followed by linking text in single square brackets.
Every article is part of a network of connected topics. Establishing such connections via internal links is a good way to establish context. Each article should be linked from more general subjects, and contain links where readers might want to use them to help clarify a detail. Only create relevant links. When you write a new article, make sure that one or more other pages link to it. There should always be an unbroken chain of links leading from the Main Page to every article in the wiki.
Always preview your edits before saving them and also check any links you have made to confirm that they do link to where you expect.
See also Wikipedia:Manual of Style/Linking
Headings
Headings help clarify articles and create a structure shown in the table of contents.
Headings are hierarchical. The article's title uses a level 1 heading, so you should start with level 2 heading (
==Some heading==) and follow it with a level 3 (
===A sub-heading===, and just use
'''Text made bold''' after that). Whether extensive subtopics should be kept on one page or moved to individual pages is a matter of personal judgment.
Headings should not be links. This is because headings in themselves introduce information and let the reader know what subtopics will be presented; links should be incorporated in the text of the section.
Except for names, use ordinary sentence case for headings, also don't have all words with initial caps.
Lists
In an article, significant items should normally be mentioned naturally within the text rather than merely listed. Where a "bulleted list" is required, each item/line of the list is preceded by an asterisk (*); for indenting a sublist use two asterisks (**). For numbered lists use a hash sign (#) and further hash signs for subsections. Lists of links are usually bulleted, giving each link on a new line.
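For example, the following markup (not taken from an existing article) produces a two-level bulleted list followed by a numbered list:

* First item
* Second item
** A sub-item of the second item
# Step one
# Step two
## A sub-step of step two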
Definition lists
Are useful for more than just terms and definitions. Use semi-colons and colons:
Some term - this line starts with ;
And then a definition - this line starts with a :
Inserting files
The sixth button enables you to insert an image (or other media type) into the text. Relevant images add interest to the article. Currently there are limitations on the allowable size of uploads.
<mp3>Synth_filter_sweep.mp3</mp3> An example of a classic analog synthesizer
sound - a sawtooth bass filter sweep with
gradually increasing resonance.
You can also insert MP3 clips by using the tag <mp3>, but it needs to be put in an inline styled table to format it neatly. Use <br> tags to format any caption.
File names should be clear and descriptive, without being excessively long. It is helpful to have descriptive names for editors. Very generic filenames should not be used when uploading, as sooner or later someone else will use the same name and this will overwrite the first file.
For a large selection of freely usable media see Wikimedia Commons.
Hotlinking from Wikimedia Commons is allowed. You can first upload your file there, but be sure to use a long descriptive or unique file name. This is to avoid name clashes. When files have the same name, some other file might be displayed locally instead of the one expected.
Hotlinking is not recommended because anyone could change, vandalise, rename or delete a hotlinked image. There is no control over what is served locally. If you do hotlink, then it is still necessary to follow any licensing conditions.
Generally hotlinking is wrong because it exploits another server's bandwidth to supply the files. For files on sites other than Wikimedia, don't link directly to those files without permission. Either download a copy from the other site and then upload it to the wiki, or link to the other site's page on which the file can be found.
Schematics
For quickly illustrating articles with simple schematics and sketches. There's some suggestions listed at Wikipedia:Project Electronics/Programs and at StackExchange EE:Good Tools for Drawing Schematics.
Tables
Use wiki markup, not HTML or images. The easiest way to work with tables is to use Microsoft Excel. Paste and edit or create a table in Excel, then copy the table into the tab-delimited-string-to-wiki-markup converter. Other methods are described at Commons:Convert tables and charts to wiki code.
Enabling the enhanced editor in user preferences gives an Insert a table button. Clicking this produces the following.
{| class="wikitable"
|-
!header1!!header 2!!header 3
|-
| row 1, cell 1|| row 1, cell 2|| row 1, cell 3
|-
| row 2, cell 1|| row 2, cell 2|| row 2, cell 3
|}
Which displays as
header1         header 2        header 3
row 1, cell 1   row 1, cell 2   row 1, cell 3
row 2, cell 1   row 2, cell 2   row 2, cell 3
For more in depth information on table markup see Wikipedia:Help:Table.
Formatting
Be sure to keep your content meaningful. Relying on styling to indicate meaning is a bad practice (e.g. for machine readability such as by search engines, screen readers using text-to-speech, and text browsers).
Inline styling
Some HTML tags and inline styling are allowed, for example
<code>,
<div>,
<span> and
<font>. These apply anywhere you insert them - depending upon which fonts are installed. Here is an example using <span style="font-family:Courier;font-size:100%;color:#0000ff;background-color:#dddddd"></span>. For further information see Mediawiki:Help:Formatting.
Indenting text
Use a colon
: to indent text.
Subscript and superscript
Foo<sub>Bar</sub> renders "Foo" with "Bar" as a subscript, and Bar<sup>Baz</sup> renders "Bar" with "Baz" as a superscript.
Inserting symbols
Symbols and other special characters can be inserted through HTML entities. For example &Omega; will show Ω and &gt; will show >. These are case-sensitive. For a list of HTML entities see Wikipedia:List of HTML entities
Text boxes
For preformatted text (in a dashed box) simply indent it by one space. Inline styling allows more options e.g.
<div style="background: #eeffff; border: 1px solid #999999; padding: 1em; width:88%;">
LaTeX formulae
SDIY wiki supports embedding mathematical formulas using TeX syntax. Use
<m> tags.
For example (use edit to see the source): <m>\operatorname{erfc}(x) = \frac{2}{\sqrt{\pi}} \int_x^{\infty} e^{-t^2}\,dt = \frac{e^{-x^2}}{x\sqrt{\pi}}\sum_{n=0}^\infty (-1)^n \frac{(2n)!}{n!(2x)^{2n}}</m>
Categories
Add one or more categories to pages or uploaded file, by simply adding eg.
[[Category:Whatever]]. Categories themselves need to be categorised to create a hierarchy for navigating through the wiki.
Standard appendices
Information that can follow after the body of the article should follow in this order:
* A list of works created by the subject of the article
* See also, a list of internal links to related articles
* Notes and references
* Further reading, a list of recommended relevant books, articles, or other publications that have not been used as sources
* External links, a list of recommended relevant websites that have not been used as sources
Templates
A template is a page that gets included in another page (this is called transclusion). This is useful for text that is often repeated. For example, create a page called "Template:Main article" with the text
''The main article for this is
[[{{{1}}}]].''
and then to use the template insert "{{Main article|Whatever}}" where you want that text to appear.
Talk pages
Don't leave visible notes and comments in the article. At the top of every article, the second tab, entitled Discussion, opens the article's "Talk page". This is where to discuss the article or leave notes for other editors. Remember to sign your posts on talk pages (second from last button). To leave notes or explanations in articles, use HTML commenting; these will be hidden from everyone except other editors. An HTML comment, which has the form:
<!--This is a comment.-->, will work fine in Mediawiki.
See also
Further reading
References
Convert from Microsoft Word to Media Wiki markup, stackoverflow
|
Trimester Seminar
Venue: HIM, Poppelsdorfer Allee 45, Lecture Hall
Thursday, December 13th, 2:30 p.m. Conjugacy classes and centralisers in classical groups
Speaker: Giovanni de Franceschi (Auckland)
Abstract
We discuss conjugacy classes and associated centralisers in classical groups, giving descriptions which underpin algorithms to list these explicitly.
Thursday, December 13th, 2 p.m. PFG, PRF and probabilistic finiteness properties of profinite groups
Speaker: Matteo Vannacci (Düsseldorf)
Abstract
A profinite group G equipped with its Haar measure is a probability space and one can talk about "random elements" in G. A profinite group G is said to be
positively finitely generated (PFG) if there is an integer k such that k Haar-random elements generate G with positive probability. I will talk about a variation of PFG, called "positive finite relatedness" (PFR) for profinite groups. Finally I will survey some recent work-in-progress defining higher probabilistic homological finiteness properties (PFP_n), building on PFG and PFR. This is joint work with Ged Corob Cook and Steffen Kionke.
Thursday, December 13th, 11 a.m. Strong Approximation for Markoff Surfaces and Product Replacement Graphs
Speaker: Alex Gamburd (Graduate Centre, CUNY)
Abstract
Markoff triples are integer solutions of the equation $x^2+y^2+z^2=3xyz$ which arose in Markoff's spectacular and fundamental work (1879) on diophantine approximation and has been henceforth ubiquitous in a tremendous variety of different fields in mathematics and beyond. After reviewing some of these, in particular the intimate relation with product replacement graphs, we will discuss joint work with Bourgain and Sarnak on the connectedness of the set of solutions of the Markoff equation modulo primes under the action of the group generated by Vieta involutions, showing, in particular, that for almost all primes the induced graph is connected. Similar results for composite moduli enable us to establish certain new arithmetical properties of Markoff numbers, for instance the fact that almost all of them are composite.
Wednesday, December 12th, 11 a.m. McKay graphs for simple groups
Speaker: Martin Liebeck (Imperial College)
Abstract
Let V be a faithful module for a finite group G over a field k, and let Irr(kG) denote the set of irreducible kG-modules. The McKay graph M(G,V) is a directed graph with vertex set Irr(kG), having edges from any irreducible X to the composition factors of the tensor product of X and V. These graphs were first defined by McKay in connection with the well-known McKay correspondence. I shall discuss McKay graphs for simple groups.
Tuesday, December 11th, 11 a.m. Groups, words and probability
Speaker: Aner Shalev (Hebrew University of Jerusalem)
Abstract
I will discuss probabilistic aspects of word maps on finite and infinite groups. I will focus on solutions of some probabilistic Waring problems for finite simple groups, obtained in a recent work with Larsen and Tiep. Various applications will also be given.
Thursday, December 6th, 1:30 p.m. Conjugacy growth in groups
Speaker: Alex Evetts
Thursday, December 6th, 2:10 p.m. Zeta functions of groups: theory and computations
Speaker: Tobias Rossmann
Abstract
I will give a brief introduction to the theory of zeta functions of (infinite) groups and algebras. I will describe some of the techniques used to investigate these functions and give an overview of recent work on practical methods for computing them.
Thursday, December 6th, 3:10 p.m. Zeta functions of groups and model theory
Speaker: Michele Zordan
Abstract
In this talk we shall explore the connections between rationality questions regarding zeta functions of groups and model theory of valued fields.
Wednesday, December 5th, 11 a.m. Towards Short Presentations for Ree Groups
Speaker: Alexander Hulpke (Colorado State University, Fort Collins)
Abstract
The Ree groups $^2G_2(3^{2m+1})$ are the only class of groups for which no short presentations (that is, of length polynomial in log(q)) are known. I will report on work of Ákos Seress and myself that found a likely candidate for such a short presentation, as well as the obstacles that lie in the way of proving that it is a presentation for the group.
Tuesday, December 4th, 11 a.m. Finding involution centralisers efficiently in classical groups of odd characteristic
Speaker: Cheryl Praeger (University of Western Australia)
Abstract
Bray's involution centraliser algorithm plays a key role in recognition algorithms for classical groups over finite fields of odd order. It has always performed faster than the time guaranteed/justified by complexity analyses. Work of Dixon, Seress and me published this year gives a satisfactory analysis for SL(n,q). And we are slowly making progress with the other classical groups. The "we" are Colva Roney-Dougal, Stephen Glasby and me - and we have conquered the unitary groups so far.
Thursday, November 29th, 2 p.m. Density of small cancellation presentations
Speaker: Michal Ferov
Thursday, November 29th, 2 p.m. Constructing Grushko and JSJ decompositions: a combinatorial approach
Speaker: Suraj Krishna
Abstract
The class of graphs of free groups with cyclic edge groups constitutes an important source of examples in geometric group theory, particularly of hyperbolic groups. In this talk, I will focus on groups of this class that arise as fundamental groups of certain nonpositively curved square complexes. The square complexes in question, called tubular graphs of graphs, are obtained by attaching tubes (a tube is a Cartesian product of a circle with the unit interval) to a finite collection of finite graphs. I will explain how to obtain two canonical decompositions, the Grushko decomposition and the JSJ decomposition, for the fundamental groups of tubular graphs of graphs. While our algorithm to obtain the Grushko decomposition is of polynomial time-complexity, the algorithm for the JSJ decomposition is of double exponential time-complexity and is the first such algorithm with a bound on its time-complexity.
Thursday, November 29th, 2 p.m. On the Burnside variety of groups
Speaker:Rémi Coulon
Thursday, November 22th, 3 p.m. From the Principle conjecture towards the Algebraicity conjecture
Speaker:Ulla Karhumäki
Abstract
It was proven by Hrushovski that, if true, the Algebraicity conjecture implies that if an infinite simple group of finite Morley rank has a generic automorphism, then the fixed point subgroup of this automorphism is pseudofinite. I will state some results suggesting that the converse is true as well, and further, present a possible strategy for proving that the Principle conjecture and the Algebraicity conjecture are actually equivalent.
Thursday, November 15th, 3 p.m. Separating cyclic subgroups in the pro-p topology
Speaker: Michal Ferov
Thursday, November 15th, 3 p.m. Refinements and filters for groups
Speaker: Josh Maglione
Thursday, November 8th, 2 p.m. On spaces of Lipschitz functions on finitely generated and Carnot groups
Speaker: Michal Doucha
Abstract
The motivation for this work comes from functional analysis, namely to study the normed spaces of Lipschitz functions defined on metric spaces, however certain natural restrictions lead us to focus on finitely generated and Lie groups as metric spaces in question. We show that whenever $\Gamma$ is a finitely generated nilpotent torsion-free group and $G$ is its Mal'cev closure which is Carnot, then the spaces of Lipschitz functions defined on $\Gamma$ and $G$ are isomorphic as Banach spaces. This applies e.g. to the pairs $(\mathbb{Z}^d, \mathbb{R}^d)$ or $(H_3(\mathbb{Z}), H_3(\mathbb{R}))$. I will focus on the group-theoretic content of the results and on the relations between finitely generated nilpotent torsion-free groups and their asymptotic cones (which are Carnot groups) and Mal'cev closures. Based on joint work with Leandro Candido and Marek Cuth.
Thursday, November 8th, 2 p.m. The Probability Distribution of Word Maps on Finite Groups
Speaker: Turbo Ho
Thursday, November 8th, 2 p.m. How to construct short laws for finite groups
Speaker: Henry Bradford
Friday, November 2nd, 2 - 4 p.m. Groups, boundaries and Cannon--Thurston maps
Speaker: Giles Gardam
On the isomorphism problem for one-relator groups
Speaker: Alan Logan
String C-group representations for symmetric and alternating groups
Speaker: Dimitri Leemans
Abstract
A string C-group representation of a group G is a pair (G,S) where S is a set of involutions generating G and satisfying an intersection property as well as a commuting property. String C-group representations are in one-to-one correspondance with abstract regular polytopes. In this talk, we will talk about what is known on string C-group representations for the symmetric and alternating groups. We will also explain some open questions in that area that involve group theory, graph theory and combinatorics.
Monday, October 29th, 3:15 p.m. Product set growth in groups and hyperbolic geometry
Speaker: Markus Steenbock
Abstract
We discuss product theorems in groups acting on hyperbolic spaces:
for every hyperbolic group there exists a constant $a>0$ such that for every finite subset U that is not contained in a virtually cyclic subgroup, $|U^3|>(a|U|)^2$. We also discuss the growth of $|U^n|$ and conclude that the entropy of $U$ (the limit of $\frac{1}{n}\log|U^n|$ as $n$ goes to infinity) exceeds $\frac{1}{2}\log(a|U|)$. This generalizes results of Razborov and Safin, and answers a question of Button. We discuss similar estimates for groups acting acylindrically on trees or hyperbolic spaces. This talk is on a joint work with T. Delzant.
Thursday, October 18th, 2 p.m. A quick introduction to homogeneous dynamics
Speaker: Guan Lifan
Thursday, October 18th, 2:30 p.m. Scale subgroups of automorphism groups of trees
Speaker: George Willis
Thursday, October 18th, 3 p.m. Searching for random permutation groups
Speaker: Robert Gilman
Abstract
It is well known that two random permutations generate the symmetric or alternating group with asymptotic probability 1. In other words the collection of all other permutation groups has asymptotic density 0. This is bad news if you want to sample random two-generator permutation groups. However, there is another notion of density, defined in terms of Kolmogorov complexity, with respect to which the asymptotic density of every infinite computable set is positive. For the usual reasons the corresponding search algorithm cannot be implemented, but one may try a heuristic variation. Perhaps surprisingly, it seems to work. We present some experimental results.
Thursday, October 11th, 3 p.m. Canonical conjugates of finite permutation groups Robin Candy (Australian National University)
Abstract
Given a finite permutation group $G \le \operatorname{Sym}(\Omega)$ we discuss a way to find a canonical representative of the equivalence class of conjugate groups $G^{\operatorname{Sym}(\Omega)}=\left\{ s^{-1} G s \,\middle|\, s \in \operatorname{Sym}(\Omega) \right\}$. As a consequence the subgroup conjugacy and symmetric normaliser problems are introduced and addressed. The approach presented is based on an adaptation of Brendan McKay's graph isomorphism algorithm and is heavily related to Jeffrey Leon's partition backtrack algorithm.
Thursday, October 4th, 3 p.m. Enumerating characters of Sylow p-subgroups of finite groups of Lie type $G(p^f)$ Alessandro Paolini (TU Kaiserslautern)
Abstract
Let q=p^f with p a prime. The problem of enumerating characters of subgroups of a finite group of Lie type G(q) plays an important role in various research problems, from random walks on G(q) to cross-characteristics representations of G(q). O'Brien and Voll have recently determined a formula for the generic number of irreducible characters of a fixed degree of a Sylow p-subgroup U(q) of G(q), provided p>c where c is the nilpotency class of G(q).
We discuss in this talk the situation in the case $p \le c$. In particular, we describe an algorithm for the parametrization of the irreducible characters of U(q) which replaces the Kirillov orbit used in the case p>c. Moreover, we present connections with a conjecture of Higman and we highlight a departure from the case of large p. This is based on joint works with Goodwin, Le and Magaard.
Thursday, October 4th, 3 p.m. Rationality of the representation zeta function for compact FAb $p$-adic analytic groups. Michele Zordan (KU Leuven)
Abstract
Let $\Gamma$ be a topological group such that the number $r_n(\Gamma)$ of its irreducible continuous complex characters of degree $n$ is finite for all $n\in\mathbb{N}$. We define the {\it representation zeta function} of $\Gamma$ to be the Dirichlet generating function \[\zeta_{\Gamma}(s) = \sum_{n\ge 1} r_n(\Gamma)n^{-s} \,\,\,(s\in\mathbb{C}).\] One goal in studying a sequence of numbers is to show that it has some sort of regularity. Working with zeta functions, this amounts to showing that $\zeta_{\Gamma}(s)$ is rational. Rationality results for the representation zeta function of $p$-adic analytic groups were first obtained by Jaikin-Zapirain for almost all $p$. In this talk I shall report on a new proof (joint work with Stasinski) of Jaikin-Zapirain's result without restriction on the prime.
Thursday, September 27th, 3:30p.m. Hyperbolicity is preserved under elementary equivalence Simon Andre (University of Rennes)
Abstract
Zlil Sela proved that any finitely generated group which satisfies the same first-order properties as a torsion-free hyperbolic group is itself torsion-free hyperbolic. This result is striking since hyperbolicity is defined in a purely geometric way. In fact, Sela's theorem remains true for hyperbolic groups with torsion, as well as for subgroups of hyperbolic groups, and for hyperbolic and cubulable groups. I will say a few words about these results.
Thursday, September 6th, 3 p.m. Universal minimal flows of the homeomorphism groups of Ważewski dendrites Aleksandra Kwiatkowska (Universität Münster)
Abstract
For each P ⊆ {3,4,...,ω}, we consider the Ważewski dendrite $W_P$, which is a compact connected metric space that we can construct in the framework of the Fraïssé theory. If P is finite, we prove that the universal minimal flow of the homeomorphism group $H(W_P)$ is metrizable and we compute it explicitly. This answers a question of Duchesne. If $P$ is infinite, we show that the universal minimal flow of $H(W_P)$ is not metrizable. This provides examples of topological groups which are Roelcke precompact and have a non-metrizable universal minimal flow with a comeager orbit.
|
Hi PF!
Given the ODE $$f'' = -\lambda f : f(0)=f(1)=0$$ we know ##f_n = \sin (n\pi x), \lambda_n = (n\pi)^2##. Estimating eigenvalues via Rayleigh quotient implies $$\lambda_n \leq R_n \equiv -\frac{(\phi''_n,\phi_n)}{(\phi_n,\phi_n)}$$ where ##\phi_n## are the trial functions. Does the quotient hold for all ##n\in\mathbb N##? It seems like it should (I haven't seen the proof so maybe not), but if I let ##\phi_n=x(1-x^n)## then ##R_2 = 10.5## which is larger than ##(2\pi)^2##. What am I doing (understanding) wrong? Also, the Rayleigh quotient only holds for admissible functions, right (i.e. functions satisfying ##\phi(0)=\phi(1)=0## which are sufficiently smooth)?
|
When choosing the public exponent $e$, if the value chosen is the first coprime after $\phi(n)/2$ then the resulting public and private exponents are equal.
Well, yeah, that'll always be true.
Why does this happen?
We have $e=d$ whenever we have both of the following true:
$$e^2 \equiv 1 \pmod{p-1}$$
$$e^2 \equiv 1 \pmod{q-1}$$
Now, if $e = (p-1)(q-1)/2 + 1$ (which is always the first coprime after $\phi(n)/2$), then if we denote $k = (q-1)/2$ (which is an integer),
$e^2 = ((p-1)(q-1)/2 + 1)^2 = ((p-1) k + 1)^2 \equiv 1^2 = 1 \pmod{p-1}$
By symmetry, we also have $e^2 \equiv 1 \pmod{q-1}$, and so $e=d$ works in this case.
Furthermore, whenever we have both the following hold:
$$e \equiv 1 \pmod{p-1}$$$$e \equiv 1 \pmod{q-1}$$
then we'll have $M^e \equiv M \pmod{N}$ (for all $M$), that is, the RSA operation will always give us the original plaintext. These are also both true in the case of $e = \phi(N)/2 + 1$, and so such an $e$ will also always have plaintext=ciphertext, which is what you observed.
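As a quick sanity check of the two congruences above, here is a small Python sketch with toy primes (the numbers are made up and far too small for real use; pow(e, -1, phi) needs Python 3.8+):

from math import gcd

p, q = 11, 23                      # toy primes, not from the question
n, phi = p*q, (p-1)*(q-1)
e = phi//2 + 1                     # first coprime after phi(n)/2 in this example
assert gcd(e, phi) == 1
d = pow(e, -1, phi)                # private exponent
print(e == d)                      # True: e equals d
m = 42
print(pow(m, e, n) == m)           # True: "encrypting" returns the plaintext unchanged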
Is this even a problem?
If you intend to use $\phi(n)/2 + 1$ as your public exponent, yeah, that's a problem. Exposing such a value also makes $n$ easy to factor; however the attacker doesn't need to factor to break the system in this case.
However, if you use a more normal public exponent, say, 3 or 65537, it's pretty irrelevant.
Selecting only public exponents that are primes might eliminate the possibility of this occurring, but I have neither tested nor seen that it's a requirement in the RSA algorithm.
Well, what's most common for RSA implementations is to pick $e$ first (and it makes sense to pick it as a small value), and then select primes $p$ and $q$ such that $p-1$ and $q-1$ are relatively prime to $e$. If you do that, then $e=d$ cannot happen, because if $1 < e < \sqrt{p-1}$, then we trivially have $e^2 \not\equiv 1 \pmod{p-1}$.
However, even if you pick $e$ large (for whatever reason), as long as you do it randomly, then the probability that both $e^2 \equiv 1 \pmod{p-1}$ and $e^2 \equiv 1 \pmod{q-1}$ both hold is negligible.
|
So one thing that I find really interesting is that if I have a vector $\vec V = V_x \hat i + V_y \hat j$ its length is just:
$$ |V|^2 = V_x^2 + V_y^2 $$
That is all well and good, but then if I transform the vector into a new basis, I can rewrite the vector in terms of a covariant basis as:
$$ \vec V = V^1 \vec b_1 + V^2 \vec b_2 $$
Now of course, it is obvious that since $\vec b_1$ and $\vec b_2$ are not necessarily orthonormal, $|V| \ne \sqrt{(V^1)^2 + (V^2)^2}$ in general; that is all well and good.
Contravariant Basis
Now the usual way this goes is that we then define a new set of basis vectors: we define $b^1$ to be orthogonal to all $b_i$ with $i\ne 1$, but we define, somewhat strangely, that $b_1 \cdot b^1 = 1$. Then we rinse and repeat for all the other vectors.
We can then represent v in terms of this new basis directly as:
$$ \vec V = V_1 \vec b^1 + V_2 \vec b^2 $$
Now this is also fine, but then something totally out of the blue happens:
The Dot Product
If we take the dot product of these two representations, we can get an alternative formula for the length:
$$ |V| ^2 = (V^1 \vec b_1 + V^2 \vec b_2) \cdot (V_1 \vec b^1 + V_2 \vec b^2) = V_x^2 + V_y^2 $$
Now I can verify this by calculation, but I now realise that I have absolutely no understanding of why this should be true.
Any help would be most appreciated :) I don't see the connection here, why does defining this new basis with the rule that $b_j \cdot b^k = \delta_{jk}$ lead to such an elegant formula for length?
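For what it's worth, the verification by calculation mentioned above is easy to script; a minimal numpy sketch (the vector and the covariant basis below are arbitrary choices, not from the question):

import numpy as np

V = np.array([3.0, 4.0])                    # vector in the standard basis, |V|^2 = 25
B = np.array([[1.0, 0.0], [1.0, 2.0]])      # rows are the covariant basis b_1, b_2 (not orthonormal)
Bdual = np.linalg.inv(B).T                  # rows are the dual basis b^1, b^2, so b_i . b^j = delta
print(np.allclose(B @ Bdual.T, np.eye(2)))  # True
v_contra = np.linalg.solve(B.T, V)          # components V^i with V = V^1 b_1 + V^2 b_2
v_co = B @ V                                # components V_i = V . b_i, so V = V_1 b^1 + V_2 b^2
print(v_contra @ v_co, V @ V)               # both print 25.0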
|
I am in the process of designing a reaction wheel to be used in a cubesat for my final year project. I have already chosen my motor ( Faulhaber 2610T006B ) , and am currently in the process of sizing my flywheel. But I have gotten stuck halfway through. The following is my working:
$$Desired \space slew \space rate: 3° per \space sec \approx 0.0523599 \space rad \space per \space sec$$
$$\theta = \frac12 \frac\tau J t^2$$
$$For \space 90° rotation \space at \space 3° per \space sec: \space Time \space taken = \frac{90}{3} = 30s$$
$$\therefore \tau_{min} = \frac{2\theta J}{t^2} = \frac{2\times0.5 \times \pi \times 0.03}{30^2} = 1.047 \times 10^{-4}Nm,$$
$$where \space \tau_{min} = Minimum \space torque \space required \space to \space achieve \space 90° rotation \space within \space 30s$$
$$For \space reaction \space wheel, \space \tau = I\alpha,$$
$$where \space I = moment \space of \space inertia \space of \space flywheel,$$
$$\alpha_{max} = max. \space angular \space acceleration \space of \space the \space motor(\frac{rad}{s^2})$$
$$Set \space \tau_{min} = \tau , \therefore I = \frac{\tau_{min}}{\alpha_{max}}$$
$$Finding \space \alpha_{max},$$
$$F = ma, \space a = radius \times \alpha_{max},$$
$$where \space m = mass \space of \space flywheel = 50g \approx 0.05kg, \space a = linear \space acceleration \space in \space \frac{m}{s^2}$$
$$\require{extpfeil}\Newextarrow{\xRightarrow}{5,5}{0x21D2}\tau = Fdsin(\theta) \xRightarrow[\theta = 90°]{} Fd, where \space d = length \space of \space the \space rotor = 0.006meters$$
$$\therefore \tau \space should \space be \space the \space torque \space required \space to \space move \space the \space load, what \space should \space \tau \space be?$$
As can be seen from the working above, I am trying to achieve a slew rate of 3 degrees per second. But I am having trouble finding alpha max, that is, the maximum angular acceleration when my flywheel is attached to my motor. I also forgot to mention that J is the principal moment of inertia about one axis and is 0.03kgm^2.
My flywheel will be a cylindrical shape, once I am able to find alpha max, I can find my required I(moment of inertia of flywheel) and then begin sizing it.
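For reference, the torque side of the working above can be scripted together with a candidate solid-disc flywheel; a minimal Python sketch (the disc mass and radius below are placeholder values, not from the post):

import math

J = 0.03                       # spacecraft moment of inertia about the slew axis, kg m^2
theta = math.radians(90)       # 90 degree slew
t = 30.0                       # allowed slew time, s
tau_min = 2 * theta * J / t**2
print(f"tau_min = {tau_min:.3e} N m")              # ~1.047e-4 N m, as in the working above

m_wheel, r_wheel = 0.05, 0.02                      # hypothetical flywheel: 50 g solid disc, 20 mm radius
I_wheel = 0.5 * m_wheel * r_wheel**2               # moment of inertia of a solid disc
alpha_needed = tau_min / I_wheel                   # wheel spin-up acceleration needed to react tau_min
print(f"I_wheel = {I_wheel:.2e} kg m^2, alpha = {alpha_needed:.1f} rad/s^2")

The point of the sketch is only that, once a wheel geometry is assumed, the required angular acceleration follows from the post's own relation tau = I*alpha; whether the chosen motor can actually deliver that acceleration is a separate check against its torque data.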
Am I approaching this the right way? And how should I find alpha max?
I would also like to ask if I were to factor in all three reaction wheels operating at once, how much would my requirements for momentum storage change by?
Thanks!
|
May 9th, 2017, 08:13 AM
# 1
Building an intuition for infinite-dimensional points, lines, surface etc.
I come from a javascript background. My goal is to build simulations of "dream" stuff. Not in javascript
Yesterday my mind was blown when I was listening to a lecture on youtube and the professor brought up "a point with infinite dimensions". In that moment I asked myself "how could you do that?" and then I figured that you could just define the value on every axis with a function, if you wanted to know the 8th dimension's value then just plug in 8 to the function.
The interesting thing about this is that a simple point that is 1 at every axis is infinitely distant from the center. I'm sensing an interesting computational shift when one works with purely infinite dimensions.
I don't know, but it would almost seem like everything is either infinitely distant from each other, or the distance between two of these points is always divergent.
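As a concrete illustration of that last point, here is a tiny Python sketch (not from the original post) showing how the distance between two random points grows as more and more dimensions are included:

import random

for d in (10, 100, 10_000, 1_000_000):
    dist2 = sum((random.random() - random.random())**2 for _ in range(d))
    print(d, round(dist2 ** 0.5, 1))   # the truncated distance keeps growing with d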
I feel like I am absorbing some stuff just by brute-force reading the threads and playing YouTube lectures like talk radio/podcasts. I also feel that because I'm trying to apply all features of math to this simulation I'm getting a very real hands-on experience, and it can really go wherever my imagination wants to. The interesting thing is, the more I discover about math, the more its properties turn out to be useful for what I'm trying to do, and I can't quite explain it, but my efforts are converging on something.
So back to the concepts. If I can have a point with infinite dimensions, what does my axis value defining function mean at 1.5? Because I'm sure that there is some value between two natural numbers that may indicate that one point is within a certain distance or distance definition to some other point, and I feel like when we have infinite dimensions the algebra/calculus is already going to look at all real numbers in-between natural numbers. Can I have partial dimensions? What happens to the distance formula? Can only the tail end dimension be partial? Or can any dimension be partial? If so, then are dimensions independent; so my axis-value defining function really hasn't covered all dimensions because dimensions are defined independently of a sequence or without name? And if there is some type of proof to figure out what the distance to something is, even if it is divergent, then what about negative dimensions? Negative dimensions feel like they're gonna be big if I can find a way to figure out distance with points that have infinite dimensions.
What can I call any given point, line, surface, manifolds? Is there a name to group all three? Substances?
Oh yeah, and I heard about infinitely dense curves, and these are great for representing distortions in space (topology)...
There's so many properties that naturally come from pure geometry.
There's more I want to dive into.
-Continuous vector definitions throughout a "substance".
And finally. None of these really means much of anything unless I can find out how to fluidly (by plugging in T) find the x & y coordinates of two particles pulling on each other for any length of time.
I think I've asked this question here before, but I didn't understand the answer or maybe the answer wasn't enough. I think they showed me some Lagrangian or Hamiltonian stuff. What confused me was that the value being returned was only the distance between 2 points, when my real issue is when I have more than 2 points and they are all acting on each other in continuous space and time. If there was a way to get the actual x, y and z coordinates of 2 or more particles, I don't think you could plug in T and get infinite values returned with a single algebraic trick. I'm guessing it would involve calculating it to some kind of limit and then running the function again at that limit "key-frame".
I COMPLETELY forgot calculus could define the area under a curve, which I believe is a similar challenge to what I'm trying to do. If what I'm theorizing is correct, then ... yeah, let's do this.
May 9th, 2017, 05:51 PM
# 2
Relax... Keep working on your sim, but make yourself some clear objectives of what you want it to do, predict the results, and then go from there..
May 10th, 2017, 05:22 AM
# 3
The things I'm reaching for can only be 100% implemented in a 2.0 version. Exploring higher concepts has helped me solidify what needs to be done with a simpler version.
I do have code, it works. I promise you I'm not entirely crazy.
Am I just wasting my time? Maybe.
The fact that I know the 4th dimension is a weird place for spheres, tells me something I should look out for when casting shadows down to the 3rd dimension. I think it's valuable to have these nuggets of information.
This is the projection involving a 4-dimensional object called a dodecaplex. Image created by Paul Nylander.
May 10th, 2017, 07:03 AM
# 4
Quote:
It's about a kid that can see and to some extent interact with the 4th dimension.
May 11th, 2017, 06:56 AM
# 5
Infinite dimensional point:
$\displaystyle p=\lim_{n\rightarrow \infty}(x_{1},x_{2},x_{3},...,x_{n})$
Infinite dimensional line:
$\displaystyle p=p_{1}+k(p_{2}-p_{1})$
Infinite dimensional plane:
$\displaystyle \lim_{n\rightarrow \infty}a_{0}(p-p_{0})+a_{1}(p-p_{1})+a_{2}(p-p_{2})+...+a_{n}(p-p_{n})=0$
Infinite dimensional circle:
$\displaystyle \lim_{n\rightarrow \infty}(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+...+x_{n}^{2})=r^{2}$
There is nothing mysterious about a 4-dimensional circle. It is simply a definition:
$\displaystyle x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}=r^{2}$
If $\displaystyle (x_{1}, x_{2}, x_{3})$ are space coordinates and $\displaystyle x_{4}$ is time t, then it describes a spherical shell contracting from r.
$\displaystyle x_{i}, k, r, a_{i}$, real
May 12th, 2017, 04:48 AM
# 6
Quote:
https://plus.maths.org/content/richard-elwes
They say that they don't know how many "spheres" the 4th dimension has. It just has a question mark.
15 dimensions has 16256 spheres. 7 dimensions, 28. There's something weird about the 4th dimension I don't quite understand.
|
I'm attempting to numerically solve the following in order to get a function of 2 variables, just looking at the real part of
$$\psi(x,t)=\frac{1}{\pi\sqrt{2}}\int_{-100}^{100}\frac{\sin{(k)}}{k}e^{i\left[kx-\frac{k^2}{2}t\right]}\,dk$$
where I have specific values for $t=0,2,4$ and I'd like to plot the function from $x=-10$ to $x=10$. The way I've tried so far is
func[x_, t_] := Re[Sin[k]/k*Exp[I*(k*x - k^2/2*t)]]
y = Table[NIntegrate[func[x, 0], {k, -100, 100}], {x, -10, 10, 0.01}]
I get several errors when running this but still get some results out
The plotted results aren't what I was expecting. I've done this numerical integration in Matlab where I specified in the integration function to expect an array:
x = -10:0.01:10;
func = @(k,c) sin(k)./(pi*sqrt(2).*k).*cos(k.*c-k.^2./2*0);
real_0 = integral(@(k)func(k,x),-100,100,'ArrayValued',true);
I'm pretty new to Mathematica and don't know what the equivalent to this Matlab expression would be.
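For comparison only (and not a Mathematica answer), the same integral can be evaluated with Python/scipy; a minimal sketch of the t = 0 case:

import numpy as np
from scipy.integrate import quad

def psi_re(x, t):
    # real part of the integrand, integrated over k in [-100, 100]
    f = lambda k: np.sinc(k / np.pi) * np.cos(k * x - 0.5 * k**2 * t) / (np.pi * np.sqrt(2))
    return quad(f, -100, 100, limit=400)[0]

xs = np.linspace(-10, 10, 401)
vals = [psi_re(x, 0.0) for x in xs]   # plot xs against vals to compare with the Matlab result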
Here are the results I get from Mathematica
and what I get from Matlab
|
So in Julie Bergner's work on $(\infty, 1)$-categories arXiv:0610239, she considers several model categories which model $(\infty, 1)$-categories, which are known to be equivalent. I'm guessing that there is another model which is equivalent to these. It is probably known to experts, and probably exists for easy reasons, but I'm not seeing it so I'm asking here.
Background
In Julie's survey, which reviews some of her own work in arXiv:0504334, the work of Joyal and Tierney arXiv:0607820, and the work of others, she compares four models of $(\infty, 1)$-categories. They are model categories of Segal Categories, Complete Segal spaces, Quasi-categories, and Simplicial Categories.
All of these models are related in multiple ways, but some are more closely aligned than others. In particular the Complete Segal spaces and the Segal categories are very similar objects. The underlying categories for these two model categories are almost the same. They are the category of simplicial spaces and the category of simplicial spaces whose zeroeth space $X_0$ is discrete, respectively.
Among the simplicial spaces we have those which satisfy the Segal condition. These are the (Reedy fibrant) simplicial spaces such that the Segal map $$ X_n \to X_1 \times_{X_0} \cdots \times_{X_0} X_1 $$ is a weak equivalence. These are called Segal spaces. There is a model structure on the category of simplicial spaces such that the Segal spaces are the fibrant objects. It is a localization of the Reedy model structure. But in this model structure the weak equivalences between Segal spaces are just the level-wise weak equivalences.
The model category of complete Segal spaces is a further localization of this. The weak equivalences between fibrant objects (complete Segal spaces) in this model category are pretty easy. They are the level-wise weak equivalences. More generally the weak equivalences between Segal spaces in this new model structure can also be identified, without too much trouble. They are called Dwyer-Kan equivalences, or DK-equivalences for short.
A Segal category is a Segal space where the space of objects $X_0$ is discrete. There is a Quillen equivalent model structure on the category of those simplicial spaces whose zeroeth space is discrete. In this model category, a map between two Segal categories is a weak equivalence if and only if it is a DK-equivalence.
Question
I'm wondering if there is an intermediary model category which is equivalent to both of the above model categories (hopefully in an obvious way) and which has the following properties: it should be a model category on the category of simplicial spaces; the fibrant objects should be the Segal spaces (not necessarily complete); and the weak equivalences should be the DK-equivalences.
Does such a model category exist? Is it well known?
|
This is likely a simple question that I'm just missing, but nothing immediately came to mind.
When dealing with topological monoids, it is necessary to prove that the monoid operation is continuous. I.e. if we have a monoid $(G,\ast,e)$ with some topology $\tau$, then it is necessary that $-\ast-: G\times G\rightarrow G$ be continuous with respect to $\tau$ in order for $G$ to be a topological monoid.
Slightly related to this is in the consideration of the function $f_g:G\rightarrow G$ defined by $f_g(g')=g\ast g'$. Now, if the monoid operation is continuous, each $f_g$ will clearly be continuous by pre-composition with the product of the constant function sending all elements of $G$ to $g$ (which is always continuous) and the identity on $G$ (again, always continuous), and using the fact that the product of continuous maps is continuous. In the case where $G$ is a topological group, then $f_g$ is furthermore a homeomorphism.
However, it doesn't seem to me that the converse would necessarily hold true: if for every element $g$ of $G$, $f_g:G\rightarrow G$ defined as above is continuous, then $\ast$ is continuous.
This gets me to my question:
Is the converse true? If not, what are some counterexamples?
|
Stacking
The final step to do with Siril is to stack the images. Go to the "stacking" tab, indicate if you want to stack all images, only selected images or the best images regarding the value of FWHM previously computed. Siril proposes several algorithms for stacking computation.
Sum Stacking
This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to [math]\sqrt{N}[/math], where [math]N[/math] is the number of images. Because of the lack of normalisation, this method should only be used for planetary processing.
Average Stacking With Rejection
* Percentile Clipping: this is a one step rejection algorithm ideal for small sets of data (up to 6 images).
* Sigma Clipping: this is an iterative algorithm which will reject pixels whose distance from the median is farther than two given values in sigma units ([math]\sigma_{low}[/math], [math]\sigma_{high}[/math]).
* Median Sigma Clipping: this is the same algorithm except that the rejected pixels are replaced by the median value of the stack.
* Winsorized Sigma Clipping: this is very similar to the Sigma Clipping method but it uses an algorithm based on Huber's work [1] [2].
* Linear Fit Clipping: this is an algorithm developed by Juan Conejero, main developer of PixInsight [2]. It fits the best straight line ([math]y=ax+b[/math]) to the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations.
These algorithms are very efficient to remove satellite/plane tracks.
Median Stacking
This method is mostly used for dark/flat/offset stacking. The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account shifts computed during registration. The increase in SNR is proportional to [math]0.8\sqrt{N}[/math].
Pixel Maximum Stacking
This algorithm is mainly used to construct long exposure star-trails images. Pixels of the image are replaced by pixels at the same coordinates if intensity is greater.
In the case of NGC7635 sequence, we first used the "Winsorized Sigma Clipping" algorithm in "Average stacking with rejection" section, in order to remove satellite tracks ([math]\sigma_{low}=4[/math] and [math]\sigma_{high}=3[/math]).
The output console thus gives the following result:
22:26:06: Pixel rejection in channel #0: 0.215% - 1.401%
22:26:06: Pixel rejection in channel #1: 0.185% - 1.273%
22:26:06: Pixel rejection in channel #2: 0.133% - 1.150%
22:26:06: Integration of 12 images:
22:26:06: Normalization ............. additive + scaling
22:26:06: Pixel rejection ........... Winsorized sigma clipping
22:26:06: Rejection parameters ...... low=4.000 high=3.000
22:26:09: Saving FITS: file NGC7635.fit, 3 layer(s), 4290x2856 pixels
22:26:19: Background noise value (channel: #0): 10.013 (1.528e-04)
22:26:19: Background noise value (channel: #1): 6.755 (1.031e-04)
22:26:19: Background noise value (channel: #2): 6.621 (1.010e-04)
Noise estimation is a good indicator of the quality of your stacking process. In our example, the red channel has almost 2 times more noise than green or blue. That probably means that the DSLR is unmodified: most of the red photons are stopped by the original filter, leading to a noisier channel. We also note that in this example the high rejection seems to be a bit strong. Setting the high rejection to [math]\sigma_{high}=4[/math] could produce a better image. And this is what we have in the image below.
After that, the result is saved in the file named below the buttons, and is displayed in the grey and colour windows. You can adjust levels if you want to see it better, or use the different display modes. In our example the file is the stack result of all files, i.e., 12 files.
The images above picture the result in Siril using the Histogram Equalization rendering mode. Note the improvement of the signal-to-noise ratio compared with the result given for one frame in the previous step (take a look at the sigma value). The increase in SNR is [math]19.7/6.4 = 3.08[/math], to be compared with the ideal [math]\sqrt{12} = 3.46[/math], and you should try to improve this result by adjusting [math]\sigma_{low}[/math] and [math]\sigma_{high}[/math].
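The square-root-of-N behaviour quoted here is easy to reproduce on synthetic data; a small Python/numpy sketch (plain averaging of simulated frames, no rejection), just for illustration:

import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pixels = 12, 100_000
frames = 100.0 + rng.normal(0.0, 10.0, size=(n_frames, n_pixels))   # 12 frames, noise sigma = 10
ratio = frames[0].std() / frames.mean(axis=0).std()                 # single-frame noise / stacked noise
print(ratio, np.sqrt(n_frames))                                     # both are close to 3.46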
Now the processing of the image should start, with cropping, background extraction (to remove the gradient), and some other processes to enhance your image. To see the processes available in Siril please visit this page. Here is an example of what you can get with Siril:
[1] Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley
[2] Juan Conejero, ImageIntegration, PixInsight Tutorial
|
Genus 2 curves in isogeny class 448.a
Label 448.a.448.2: \(y^2 + (x^3 + x)y = -2x^4 + 7\)
Label 448.a.448.1: \(y^2 + (x^3 + x)y = x^4 - 7\)
L-function data
Analytic rank: \(0\)
See L-function page for more information
\(\mathrm{ST} =\) $N(G_{1,3})$, \(\quad \mathrm{ST}^0 = \mathrm{U}(1)\times\mathrm{SU}(2)\)
Of \(\GL_2\)-type over \(\Q\)
Smallest field over which all endomorphisms are defined:
Galois number field \(K = \Q (a) \simeq \) \(\Q(\sqrt{-1}) \) with defining polynomial \(x^{2} + 1\)
\(\End (J_{\overline{\Q}}) \otimes \Q \) \(\simeq\) \(\Q\) \(\times\) \(\Q(\sqrt{-1}) \) \(\End (J_{\overline{\Q}}) \otimes \R\) \(\simeq\) \(\R \times \C\)
More complete information on endomorphism algebras and rings can be found on the pages of the individual curves in the isogeny class.
|
NEW CONJECTURE: There is no general upper bound.
Wadim Zudilin suggested that I make this a separate question. This follows representability of consecutive integers by a binary quadratic form where most of the people who gave answers are worn out after arguing over indefinite forms and inhomogeneous polynomials. Some real effort went into this, perhaps it will not be seen as a duplicate question.
So the question is, can a positive definite integral binary quadratic form $$ f(x,y) = a x^2 + b x y + c y^2 $$ represent 13 consecutive numbers?
My record so far is 8: the form $$6x^2+5xy+14y^2 $$ represents the 8 consecutive numbers from 716,234 to 716,241. Here we have discriminant $ \Delta = -311,$ and 2,3,5,7 are all residues $\pmod {311}.$ I do not think it remotely coincidental that $$6x^2+xy+13 y^2 $$ represents the 7 consecutive numbers from 716,235 to 716,241.
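For readers who want to reproduce this kind of search, here is a rough Python sketch (a brute-force scan, with no claim of matching the program actually used):

from math import isqrt

def represented(a, b, c, limit):
    """Integers in [1, limit] represented by the positive definite form a x^2 + b x y + c y^2."""
    disc = b*b - 4*a*c                       # negative for a positive definite form
    xmax = isqrt(4*c*limit // -disc) + 1
    ymax = isqrt(4*a*limit // -disc) + 1
    reps = set()
    for x in range(-xmax, xmax + 1):
        for y in range(ymax + 1):            # (x, y) and (-x, -y) represent the same value
            v = a*x*x + b*x*y + c*y*y
            if 0 < v <= limit:
                reps.add(v)
    return reps

reps = represented(6, 5, 14, 800_000)
print(all(m in reps for m in range(716234, 716242)))   # True: the run of 8 mentioned above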
I have a number of observations. There is a congruence obstacle $\pmod 8$ unless, with $ f(x,y) = a x^2 + b x y + c y^2 $ and $\Delta = b^2 - 4 a c,$ we have $\Delta \equiv 1 \pmod 8,$ or $ | \Delta | \equiv 7 \pmod 8.$ If a prime $p | \Delta,$ then the form is restricted to either all quadratic residues or all nonresidues $ \pmod p$ among numbers not divisible by $p.$
In what could be a red herring, I have been emphasizing $\Delta = -p$ where $p \equiv 7 \pmod 8$ is prime, and where there is a very long string of consecutive quadratic residues $\pmod p.$ Note that this means only a single genus with the same $\Delta = -p,$ and any form is restricted to residues. I did not anticipate that long strings of represented numbers would not start at 1 or any predictable place and would be fairly large. As target numbers grow, the probability of not being represented by any form of the discriminant grows ( if prime $q \parallel n$ with $(-p| q) = -1$), but as the number of prime factors $r$ with $(-p| r) = 1$ grows so does the probability that many forms represent the number if any do. Finally, on the influence of taking another $\Delta$ with even more consecutive residues, the trouble seems to be that the class number grows as well. So everywhere there are trade-offs.
EDIT, Monday 10 May. I had an idea that the large values represented by any individual form ought to be isolated. That was naive. Legendre showed that for a prime $q \equiv 7 \pmod 8$ there exists a solution to $u^2 - q v^2 = 2,$ and therefore infinitely many solutions. This means that the form $x^2 + q y^2$ represents the triple of consecutive numbers $q v^2, 1 + q v^2, u^2$ and then represents $4 + q v^2$ after perhaps skipping $3 + q v^2$. Taking $q = 8 k - 1,$ the form $ x^2 + x y + 2 k y^2$ has no restrictions $\pmod 8,$ while an explicit formula shows that it represents every number represented by $x^2 + q y^2.$ Put together, if $8k-1 = q$ is prime, then $ x^2 + x y + 2 k y^2$ represents infinitely many triples. If, in addition, $ ( 3 | q) = 1,$ it seems plausible to expect infinitely many quintuples. It should be admitted that the recipe given seems not to be a particularly good way to jump from length 3 to length 5, although strings of length 5 beginning with some $q t^2$ appear plentiful.
EDIT, Tuesday 11 May. I have found a string of 9, the form is $6 x^2 + x y + 13 y^2$ and the numbers start at $1786879113 = 3 \cdot 173 \cdot 193 \cdot 17839$ and end with $1786879121$ which is prime. As to checking, I have a separate program that shows me the particular $x,y$ for representing a target number by a positive binary form. Then I checked those pairs using my programmable calculator, which has exact arithmetic up to $10^{10}.$
EDIT, Saturday 15 May. I have found a string of 10, the form is $9 x^2 + 5 x y + 14 y^2$ and the numbers start at $866988565 = 5 \cdot 23 \cdot 7539031$ and end with $866988574 = 2 \cdot 433494287.$
EDIT, Thursday 17 June. Wadim Zudilin has been running one of my programs on a fast computer. We finally have a string of 11, the form being $ 3 x^2 + x y + 26 y^2$ of discriminant $-311.$ The integrally represented numbers start at 897105813710 and end at 897105813720. Note that the maximum possible for this discriminant is 11. So we now have this conjecture: For discriminants $\Delta$ with absolute values in this sequence http://www.oeis.org/A000229 some form represents a set of $N$ consecutive integers, where $N$ is the first quadratic nonresidue. As a result, we conjecture that there is no upper bound on the number of consecutive integers that can be represented by a positive quadratic form.
|
If a given heap has $k$ inversions, what is the complexity of making it into a valid min heap? We could define an inversion as a tuple (node, descendant), where the node has a key value strictly higher than its descendant (all nodes have distinct key values).
This isn't an interview question, but is something that I thought of while learning about heaps. Searching for it on google gives zero hits.
Back when we learnt arrays and how to sort them, we were told that sorting arrays requires $\mathcal{O}(n + k)$ ($k$ = number of inversions) time complexity in insertion sort. I wonder if something similar exists for making a heap out of a given array.
I attempted to approach this problem in the following way. Assume a full binary tree (one with number of nodes equal to $2^h - 1$ ($h$ = height)). Note that we perform the sift down operation only when there exists an inversion. In the best case, the input array is already sorted, so the complexity is just order $n$ and no sift downs are performed. In the worst case, the input array is reverse sorted, so each internal node needs to be sifted down. That's a summation of $\frac {(n + 1)}4 + \frac {(n + 1)}8\cdot2+\frac {(n + 1)}{16}\cdot3+\ldots= n-1$ siftdowns (solving it as an arithmetico-geometric series) in the worst case, plus having to loop through the entire array.
In a more general case, I would say that we are visiting each internal node once, performing a comparison with its children for cost $c$, and, if it is out of place, performing a siftdown for additional cost $d$. If the $i$-th internal node undergoes siftdown $k_i$ times, then we can formulate the total complexity as:
$$=\sum_{i=1}^{(n + 1) / 2}(c\cdot(k_i+1) +d\cdot k_i)$$
(as you always perform comparisons one more time than total number of siftdowns, except when you reach leaf node, which I approximated away)
This gives us:
$$=(c+d)k+\frac{(n+1)}2\cdot c$$
This suggests to me that that in worst case reverse sorted array, the order should be $n^2$, as the number of inversions $k$ is of that order. However, that we already know is not the case. Where is my analysis going wrong then?
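For concreteness, the swap count can be measured directly; a small Python sketch (counting sift-down swaps while heapifying, purely to compare against the analysis above):

import heapq, random

def count_siftdown_swaps(a):
    """Bottom-up heapify of a copy of a, returning the number of element moves (swaps)."""
    h, swaps, n = a[:], 0, len(a)
    for i in range(n // 2 - 1, -1, -1):
        j = i
        while True:
            small = j
            for child in (2*j + 1, 2*j + 2):
                if child < n and h[child] < h[small]:
                    small = child
            if small == j:
                break
            h[j], h[small] = h[small], h[j]
            swaps += 1
            j = small
    assert h == [heapq.heappop(sorted_copy) for sorted_copy in [h[:]] for _ in ()] or True  # h is a valid heap
    return swaps

data = list(range(127, 0, -1))            # reverse-sorted full tree, n = 2^7 - 1
print(count_siftdown_swaps(data))         # close to n, as the worst-case sum suggests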
|
Consider the scalar PDE for $u$ with Dirichlet boundary conditions:
$\mathrm{div}(\mathcal{K}\nabla u) = f\; \forall x\; \in \Omega \subset R^2$,
$u = 0 \; \forall \; x\;\in \partial\Omega$
where $\mathcal{K} \in R^{2\times 2}$ is positive definite symmetric.
For a start I assume
$\Omega$ is the unit square with a uniform rectangular mesh.
EDIT: I edited the question following Jed's comment to make it more specific.
If $\mathcal{K}$ is diagonal, it is simple enough. When it is not, I see that it will require a computation of the "secondary gradient" (the gradient along the face). How is that usually done? And what should I do when I am near the boundary in this case?
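For a constant full tensor $\mathcal{K}$ on a uniform grid, one simple finite-difference option (a hedged sketch, not the face-based "secondary gradient" construction the question asks about) is to discretize $k_{xx}u_{xx} + 2k_{xy}u_{xy} + k_{yy}u_{yy}$ directly, which shows where the diagonal neighbours enter through the cross term:

import numpy as np

def div_K_grad(u, h, kxx, kxy, kyy):
    """Apply div(K grad u) at interior points for constant symmetric K on a uniform grid."""
    out = np.zeros_like(u)
    uxx = (u[2:, 1:-1] - 2.0*u[1:-1, 1:-1] + u[:-2, 1:-1]) / h**2
    uyy = (u[1:-1, 2:] - 2.0*u[1:-1, 1:-1] + u[1:-1, :-2]) / h**2
    uxy = (u[2:, 2:] - u[2:, :-2] - u[:-2, 2:] + u[:-2, :-2]) / (4.0*h**2)
    out[1:-1, 1:-1] = kxx*uxx + 2.0*kxy*uxy + kyy*uyy
    return out

With homogeneous Dirichlet data the boundary nodes are known (zero), so the cross-derivative stencil in the first interior layer only touches known values; for a spatially varying or discontinuous $\mathcal{K}$ this shortcut no longer applies, which is exactly where the face-gradient reconstruction becomes necessary.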
|
If $\lambda_n,\mu_n \in \mathbb{R}$, $\lambda_n \sim \mu_n$ as $n \to +\infty$, and $\mu_n \to +\infty$ as $n \to +\infty$, is it true that $$ \sum_{n=1}^{\infty} \exp(-\lambda_n x) \sim \sum_{n=1}^{\infty} \exp(-\mu_n x) $$ as $x \to 0^{+}$?
In other words, is it true that $$ \lim_{x \to 0^+} \frac{\sum_{n=1}^{\infty} \exp(-\lambda_n x)}{\sum_{n=1}^{\infty} \exp(-\mu_n x)} = 1? $$
Note that since $\mu_n \to +\infty$ we must also have $\lambda_n \to +\infty$ to ensure that $\lambda_n \sim \mu_n$ as $n \to +\infty$, i.e. that
$$ \lim_{n \to +\infty} \frac{\lambda_n}{\mu_n} = 1. $$
We also assume that each series converges for all $x>0$.
I believe this is true (and some numerical examples agree), but I can't see how to prove it. Using the idea presented in this answer we have an upper bound like
$$ \sum_{n=1}^{\infty} e^{-\lambda_n x} \leq \sum_{n=1}^{\infty} e^{-(1-\epsilon)\mu_n x} + O(1) $$
with an analogous lower bound, where the $O(1)$ term is bounded independently of $x$ (but does depend on $\epsilon$). So, dividing through by $\sum_n e^{-\mu_n x}$, we're really interested in whether
$$ \lim_{\epsilon \to 0} \lim_{x \to 0^+} \frac{\sum_{n=1}^{\infty} e^{-(1-\epsilon)\mu_n x}}{\sum_{n=1}^{\infty} e^{-\mu_n x}} = 1. $$
If this were true the result would follow.
Sometimes it's possible to show this a posteriori if we know an elementary closed form or asymptotic for $\sum_n \exp(-C\lambda_n x)$ valid for all $C$ in some neighborhood of $1$ and small $x > 0$, as was the case in the second half of this answer. In this question I am interested in the case when we do not.
It was noted by PavelM in the comments that it may very well be false when $\lambda_n$ is almost $\log n$.
I am definitely interested in the general question. However, I am specifically interested in the special case where
$$ \lambda_n \sim a n $$
as $n \to \infty$ for some constant $a > 0$. Any help with this specific problem would likewise be much appreciated.
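As a numerical illustration (not a proof) of this special case, take $\lambda_n = n + \sqrt{n}$ and $\mu_n = n$; a small numpy sketch:

import numpy as np

n = np.arange(1, 2_000_000, dtype=float)
lam, mu = n + np.sqrt(n), n                     # lambda_n ~ mu_n, both tend to infinity
for x in (1e-1, 1e-2, 1e-3, 1e-4):
    ratio = np.exp(-lam * x).sum() / np.exp(-mu * x).sum()
    print(x, ratio)                             # the ratio creeps up toward 1 as x -> 0+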
|
The intersection of two groups $(U,\odot_U)$ and $(V,\odot_V)$ is first, and foremost, the set $$ G = U \cap V \text{.}$$ To turn this into a group, one would need to define a suitable operation $\odot_G$ on $G$. But where is that operation supposed to come from? Since $U$ and $V$ can be completely different groups, which just happen to be constructed over two non-disjoint sets $U$ and $V$, it's not at all obvious how $\odot_G$ is supposed to be defined. Thus, the intersection of two groups is merely a set, not a group. The same holds of course for the union of two groups - again, where would the operation come from that turns the union into a group?
In fact, since we generally consider groups only up to isomorphism, i.e. we treat two groups $G_1,G_2$ as the same group $G$ if the only difference between the two is the names of the elements, the union or intersection of two groups isn't even well-defined. For any pair of groups $U,V$ we can find some set-theoretic representation of $U$ and $V$ such that $U \cap V = \emptyset$, and another such that $U \cap V \neq \emptyset$.
Now contrast this with the situation of two subgroups $U,V$ of some group $(H,\odot_H)$. In this case, we know that $\odot_U$ and $\odot_V$ are simply the restrictions of $\odot_H$ to $U$ respectively $V$, and the two operations will therefore agree on the intersection of $U$ and $V$. So we can very naturally endow the set $$ G = U \cap V$$ with the operation $$ \odot_G = \odot_H\big|_{U \cap V} = \odot_U\big|_{U \cap V} = \odot_V\big|_{U \cap V} \text{.}$$
|
There is no best general way to check if any two trigonometric expressions are equal. One can use TrigReduce, TrigExpand, TrigFactor, TrigToExp, Together and Apart (especially with the Trig->True option), Simplify, FullSimplify, etc. All these functions have their advantages and we discuss some of them. For more complicated examples, when using Simplify and FullSimplify we can encounter problems with timings and/or memory allocation; we give an appropriate example later.
In the case of your example I recommend evaluating TrigReduce on the difference of both expressions.
Sometimes you needn't use any simplifications, and quite inevident expressions are simplified automatically by built-in rewrite rules.
First we consider a few examples where no Mathematica functions are needed.
ex 1.
$$\sum_{\small{k = 1}}^{\small{k = m-1}} \;\frac{1}{\sin^{2}(\frac{k\; \pi}{m})} = \frac{m^{2} -1}{3} $$
Sum[ 1/Sin[ k Pi/m]^2, {k, 1, m - 1}]
1/3 (-1 + m^2)
if we set e.g. m = 21, the above doesn't simplify and we need e.g. FullSimplify
m = 21;
FullSimplify @ Sum[ 1/Sin[ k Pi/m]^2, {k, 1, m - 1}]
440/3
The larger one sets m, the more time it takes.
Here is another example where we need no simplifications and the result is obtained with help of built-in rewrite rules :
ex 2.
$$\sum_{\small{k = 0}}^{\small{k = m-1}} \; \cos(a + k b) = \frac{\sin(\frac{m b}{2})}{\sin(\frac{b}{2})} \cos(\frac{2 a+(m-1) b}{2})$$
(Sum[ Cos[ a + k b], {k, 0, m - 1}] -
Sin[ (m b)/2]/Sin[ b/2] Cos[ (2 a + (m - 1) b)/2]) /. m -> 137
0
but if we make no substitution we'll need e.g. Simplify
Simplify[ (Sum[ Cos[ a + k b], {k, 0, m - 1}] -
Sin[ (m b)/2]/Sin[ b/2] Cos[ (2 a + (m - 1) b)/2]) ]
0
It works even if we add the option Simplify[ expr, Trig -> False].
ex 3.
$$\sum_{\small{k = 0}}^{\small{k = n-2}} \; 2^{k}\tan(\frac{\pi}{2^{n - k}}) = \cot(\frac{\pi}{2^{n}})$$
Here we define e.g.
f[n_] := Sum[2^k Tan[Pi/2^(n - k)], {k, 0, n - 2}] - Cot[Pi/2^n]
In this case the following functions work equally well if we set e.g. n == 15 :
Together[ f[n], Trig -> True] == Simplify[ f[n]] == TrigReduce[ f[n]] == 0
True
however for larger n we can easily see advantages of various approaches, e.g. for n == 21 we observe that Together[ expr, Trig->True] is the best, while Simplify cannot tackle such an expression :
TrigReduce[ f[21]] // AbsoluteTiming
Together[f[21], Trig -> True] // AbsoluteTiming
{0.1404000, 0}
{0.0312000, 0}
while Simplify[f[21]] // AbsoluteTiming does not manage to reduce the expression to 0.
If we evaluate e.g. f[n], where we haven't assigned to n any value, we get ComplexInfinity.
At last we consider two simple examples :
ex 4.
expr1 = -1 + 50 Cos[x]^2 - 400 Cos[x]^4 + 1120 Cos[x]^6 - 1280 Cos[x]^8;
expr2 = -512 Cos[x]^10 + Cos[10 x];
In this case one can use
TrigReduce[ expr1 - expr2]
0
as well as one of these
Together[ expr1 - expr2, Trig -> True]
or
Apart[ expr1 - expr2, Trig -> True]
Let's consider your example :
ex 5.
exprA = - 1/15 Cos[4 x] + (1/15 + 6) Cos[x] + 11 Sin[x];
exprB = 1/30 ( 182 Cos[x] - 5 Cos[x] Cos[3 x] + 3 Cos[x] Cos[5 x] + 330 Sin[x]
+ 5 Sin[x] Sin[3 x] + 3 Sin[x] Sin[5 x] );
compare various ways (TrigReduce and TrigExpand are optimal here) :
TrigReduce[exprA - exprB] // AbsoluteTiming
TrigExpand[exprA - exprB] // AbsoluteTiming
{0., 0}
{0., 0}
but FullSimplify and TrigFactor also work, though they are a bit slower :
TrigFactor[exprA - exprB] // AbsoluteTiming
FullSimplify[exprA - exprB] // AbsoluteTiming
{0.0156000, 0}
{0.0312000, 0}
Thus using TrigReduce seems to be more appropriate for this type of trigonometric expression. We can observe that the timing is a multiple of 0.0156000 for TrigFactor and FullSimplify.
|
Axicons are conical prisms that are defined by their alpha angle and apex angle. As the distance from the Axicon to the image increases, the diameter of the ring increases, while the line thickness remains constant.
Given the input is a collimated beam, you can calculate the outer ring diameter and the line thickness an Axicon will produce. The half fan angle calculation will be an approximation.
The figure above demonstrates the original beam entering the Axicon. It then shows how the beam exits the Axicon. The figure has arrows leading the user to the different variables used in the Axicon's formulas.
The calculator uses three equations to determine the diameter of the ring, the line thickness of the ring, and the half fan angle.
$$ d_r = 2L \cdot \tan{\left[ \left( n - 1 \right) \alpha \right]} $$
$$ \beta = \sin^{-1}{\left( n \, \sin{\alpha} \right)} - \alpha $$
$$ t = \frac{1}{2} d_b $$
d_r: Outer diameter of the ring that the beam forms
d_b: Diameter of the beam that enters the lens
t: Thickness of the line that the beam forms
β: Half fan angle that beam forms
L: Length from Axicon to image formed
n: Refractive index of the Axicon
α: Axicon angle
The calculator functions by taking the known variables and using the equations to determine the unknowns of the Axicon.
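A minimal Python sketch of such a calculator, directly implementing the three equations above (the example inputs at the bottom are arbitrary placeholders, and lengths must share one unit):

import math

def axicon_ring(alpha_deg, n, length, beam_diameter):
    """Ring diameter, half fan angle (degrees) and line thickness for a collimated input beam."""
    alpha = math.radians(alpha_deg)
    d_r = 2.0 * length * math.tan((n - 1.0) * alpha)      # outer ring diameter (approximation)
    beta = math.asin(n * math.sin(alpha)) - alpha         # half fan angle
    t = 0.5 * beam_diameter                               # line thickness
    return d_r, math.degrees(beta), t

print(axicon_ring(alpha_deg=5.0, n=1.45, length=100.0, beam_diameter=2.0))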
|
This problem can be recast in terms of the famous problem of the number of ways to represent a positive integer as a sum of squares. With this perspective, we can see that the following more general statement is true for any $p > 1$ (so that each of the infinite series actually converges):$$\sum_{n=1}^{\infty} \frac{1}{(n^2)^p} + \sum_{m,n = 1}^{\infty} \frac{1}{(m^2+n^2)^p} = \left(\sum_{n=1}^{\infty} \frac{1}{n^p}\right) \left(\sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)^p}\right).$$
The left-hand side is$$\sum_{s=1}^{\infty} \frac{n_2(s)}{s^p},$$ where $n_2(s)$ is the number of ways of representing $s$ as the sum of one or of two squares of positive integers, in which order is distinguished (i.e., $1^2 + 2^2$ is counted separately from $2^2+1^2$).
It is known that $n_2(s) = d_1(s) - d_3(s)$ (see eq. 24 in the site linked above), where $d_k(s)$ is the number of divisors of $s$ congruent to $k \bmod 4$.
The first sum on the right side of the equation has (the $p$th powers of) all positive integers as denominators and the second sum on the right has (the $p$th powers of) all odd numbers as denominators. After multiplying those sums together, then, $1/s^p$ (ignoring signs) appears on the right as many times as there are odd divisors of $s$. Each odd divisor of $s$ congruent to $1 \bmod 4$ contributes a $+1/s^p$, and each odd divisor of $s$ congruent to $3 \bmod 4$ contributes a $-1/s^p$. Thus the coefficient of $1/s^p$ on the right side is exactly $d_1(s) - d_3(s)$. Therefore, the right-hand side is also $$\sum_{s=1}^{\infty} \frac{n_2(s)}{s^p}.$$
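The divisor identity $n_2(s) = d_1(s) - d_3(s)$ used above is easy to spot-check; a small Python sketch:

from math import isqrt

def n2(s):
    # ordered representations of s as a^2, or as a^2 + b^2 with a, b >= 1
    count = 1 if isqrt(s)**2 == s else 0
    count += sum(1 for a in range(1, isqrt(s) + 1) for b in range(1, isqrt(s) + 1)
                 if a*a + b*b == s)
    return count

def d1_minus_d3(s):
    return sum(1 if d % 4 == 1 else -1
               for d in range(1, s + 1) if s % d == 0 and d % 4 in (1, 3))

print(all(n2(s) == d1_minus_d3(s) for s in range(1, 500)))   # True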
|
I was asked to find the minimum and maximum values of the functions:
$y=\sin^2x/(1+\cos^2x)$; $y=\sin^2x-\cos^4x$.
What I did so far:
$y' = 2\sin(2x)/(1+\cos^2x)^2$
How do I check whether these are candidate extremum points? After all, this function is periodic, and therefore only on an interval that is not $(-\infty,\infty)$ can there be a local minimum/maximum.
$y' = \sin(2x)+4\cos^3(x)\cdot\sin(x)$
Any suggestions?
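A quick numerical check of what the extrema should come out to (a dense sampling over one period in Python, not a substitute for the calculus):

import numpy as np

x = np.linspace(0.0, 2.0*np.pi, 200_001)
y1 = np.sin(x)**2 / (1.0 + np.cos(x)**2)
y2 = np.sin(x)**2 - np.cos(x)**4
print(y1.min(), y1.max())   # approximately 0 and 1
print(y2.min(), y2.max())   # approximately -1 and 1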
|
In the above question we asked for a strategy so you could guarantee you won as much money as possible. This week we explore this game a little further, introducing the concept of Expected Value and showing how expected values relate to our card game!
Let's consider some simple examples first.
Example: Pretend you usually get $\$10$ a week for your allowance. One day your mom comes to you with a proposal. Instead of your current $\$10$ a week, each week you will flip a (fair) coin. If the result is heads, you get $\$20$, while if the result is tail you get $\$0$. Of course this new way of getting your allowance is risky, but is it really better or worse than your old allowance in the long run?
The concept of expected values can explain why the new allowance proposal is very similar in the long run to your previous allowance.
Originally you got $\$10$ each week, therefore each week you expect $\$10$! Under the new proposal there is a $50\%$ chance you get $\$20$ and a $50\%$ chance you get $\$0$. Thus we say the expected value of your allowance is
$$50\%\times \$20 + 50\%\times \$0 = \$10,$$
the same as before!
Example: Enjoying the chance involved in your mom's proposal, you suggest your own method for determining your allowance! Each week you'll roll a fair six-sided die. If the result is $2$, $3$, or $4$ you'll get $\$10$, if the roll is $5$ or $6$ you'll get $\$20$, but, if the roll is $1$ you'll give your mom $\$10$! What is the expected value in this case?
With this proposal there is a $\dfrac{1}{6}$ chance you lose $\$10$, a $\dfrac{3}{6}$ chance you get $\$10$, and a $\dfrac{2}{6}$ chance you get $\$20$. Hence the expected value is
$$\frac{1}{6}\times (-\$10) + \frac{1}{2}\times \$10 + \frac{1}{3}\times \$20 = \$10,$$
still the same as before!
Let's now use expected values to explore the card game above in more detail when $N = 2$. Recall this means, when shuffled, the $2$ black ($B$) cards and $2$ red cards ($R$) can be arranged in $6$ ways, shown below:
$$BBRR, BRBR, BRRB, RBBR, RBRB, RRBB.$$
Each of these $6$ orderings is equally likely, so has a $\dfrac{1}{6}$ chance of occurring.
You're still trying to figure out how to play the game (as asked last week) to make sure you always win the same amount of money each time you play. However, your two friends Rick and Mark have other plans.
Rick wants to go all or nothing. He always thinks that the cards will come in the order $BBRR$, so will always bet $B$ for the 1st card, $B$ for the 2nd card, etc, risking ALL of his money each time.
Mark plays it a little safer. He only bets on the 2nd and 4th card. He still bets all his money, but bases his guess on earlier cards. For example, if the 1st card is $B$, then he bets all on $R$ for the 2nd. (Note Mark will always get the 4th card correct.)
Rick has the chance to win up to $\$16 = \$1\times 2\times 2\times 2\times 2$ while Mark has the chance to win up to $\$4 = \$1\times 2\times 2$.
However, the expected values of what they win are actually equal. What is this expected value?
Further, the amount you can guarantee you win is equal to this expected value as well. Try to use this fact to help you come up with a strategy! See if you can extend the reasoning about expected values to help with $N = 3$ or above in the game as well!
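If you want to check your answer, the six orderings can simply be enumerated; a small Python sketch of Rick's and Mark's strategies as described above:

from fractions import Fraction
from itertools import permutations

decks = sorted(set(permutations("BBRR")))        # the 6 equally likely orderings

def rick(deck):                                  # bets everything on B for every card
    money = Fraction(1)
    for card in deck:
        money = money * 2 if card == "B" else Fraction(0)
    return money

def mark(deck):                                  # bets everything, but only on cards 2 and 4
    money = Fraction(1)
    guess = "R" if deck[0] == "B" else "B"       # the colour in the majority of the remaining cards
    money = money * 2 if deck[1] == guess else Fraction(0)
    return money * 2                             # the 4th card is always called correctly

print(sum(rick(d) for d in decks) / len(decks),
      sum(mark(d) for d in decks) / len(decks))  # the two expected values agree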
Please share any thoughts or questions you have below. We'll monitor the responses and give our thoughts as well! Have your own request, idea, or feedback for the Brain Potion series? Share with us in our Request and Idea Thread available here.
|
The Annals of Applied Probability (Ann. Appl. Probab.), Volume 23, Number 6 (2013), 2161-2211.
Limit theory for point processes in manifolds
Abstract
Let $Y_{i}$, $i\geq1$, be i.i.d. random variables having values in an $m$-dimensional manifold $\mathcal{M}\subset\mathbb{R}^{d}$ and consider sums $\sum_{i=1}^{n}\xi(n^{1/m}Y_{i},\{n^{1/m}Y_{j}\}_{j=1}^{n})$, where $\xi$ is a real valued function defined on pairs $(y,\mathcal{Y} )$, with $y\in\mathbb{R}^{d}$ and $\mathcal{Y}\subset\mathbb{R}^{d}$ locally finite. Subject to $\xi$ satisfying a weak spatial dependence and continuity condition, we show that such sums satisfy weak laws of large numbers, variance asymptotics and central limit theorems. We show that the limit behavior is controlled by the value of $\xi$ on homogeneous Poisson point processes on $m$-dimensional hyperplanes tangent to $\mathcal{M} $. We apply the general results to establish the limit theory of dimension and volume content estimators, Rényi and Shannon entropy estimators and clique counts in the Vietoris–Rips complex on $\{Y_{i}\}_{i=1}^{n}$.
Article information Source Ann. Appl. Probab., Volume 23, Number 6 (2013), 2161-2211. Dates First available in Project Euclid: 22 October 2013 Permanent link to this document https://projecteuclid.org/euclid.aoap/1382447685 Digital Object Identifier doi:10.1214/12-AAP897 Mathematical Reviews number (MathSciNet) MR3127932 Zentralblatt MATH identifier 1285.60021 Subjects Primary: 60F05: Central limit and other weak theorems Secondary: 60D05: Geometric probability and stochastic geometry [See also 52A22, 53C65] Citation
Penrose, Mathew D.; Yukich, J. E. Limit theory for point processes in manifolds. Ann. Appl. Probab. 23 (2013), no. 6, 2161--2211. doi:10.1214/12-AAP897. https://projecteuclid.org/euclid.aoap/1382447685
|
Homework Statement A yo-yo is placed on a conveyor belt accelerating ##a_C = 1 m/s^2## to the left. The end of the rope of the yo-yo is fixed to a wall on the right. The moment of inertia is ##I = 200 kg \cdot m^2##. Its mass is ##m = 100kg##. The radius of the outer circle is ##R = 2m## and the radius of the inner circle is ##r = 1m##. The coefficient of static friction is ##0.4## and the coefficient of kinetic friction is ##0.3##. Find the initial tension in the rope and the angular acceleration of the yo-yo. Homework Equations ##T - f = ma## ##\tau_P = -fr## ##\tau_G = Tr## ##I_P = I + mr^2## ##I_G = I + mR^2## ##a = \alpha R##
First off, I was wondering if the acceleration of the conveyor belt can be considered a force. And I'm not exactly sure how to use Newton's second law if the object of the forces is itself on an accelerating surface.
Also, I don't know whether it rolls with or without slipping. I thought I could use ##a_C = \alpha R## for the angular acceleration, but the acceleration of the conveyor belt is not the only source of acceleration, since the friction and the tension also play a role. I can't find a way to combine these equations to get the
|
The inverse of an (invertible) upper triangular matrix is always upper triangular and the inverse of an (invertible) lower triangular matrix is always lower triangular. Why?
I don't have any work to show and this isn't homework.
A flagged vector space of dimension $n$ is a vector space $V$ equipped with a filtration $$0\subseteq V_1 \subseteq \dots \subseteq V_n,$$ where the dimension increases by $1$ at each step. A morphism $V \to W$ of flagged vector spaces of the same dimension is a linear transformation which respects the flags.
A triangular matrix is nothing but an endomorphism of the flagged vector space
$$0\subseteq \mathbf F^1 \subseteq \dots \subseteq \mathbf F^n.$$
It is immediate that among these, the isomorphisms correspond to invertible triangular matrices, which therefore form a group under matrix multiplication.
You can find the inverse of $A$ by row reduction in the extended matrix $(A|I)$. You should just trace what is happening to the identity matrix $I$ on the right while doing the row reduction process.
Some hints and proofs can be found on the linked post in the comments above.
Another simple proof can be made by induction. It is clear that the statement is true for $n=1$. Assume that it is true as well for matrices of dimension $n-1$ and consider an $n\times n$ upper triangular and nonsingular matrix $U$ in the block form $$ U = \begin{bmatrix}\nu & u^* \\ 0 & \tilde{U}\end{bmatrix}, $$ where $\tilde{U}$ is $(n-1)\times (n-1)$. Since $U$ is assumed to be nonsingular and so $\nu\neq 0$ and $\tilde{U}$ is nonsingular. Since the statement is true for $(n-1)\times(n-1)$ matrices, $\tilde{U}^{-1}$ is upper triangular. Now let $$ U^{-1}=\begin{bmatrix}\alpha&b^*\\c&D\end{bmatrix} $$ be the partitioning of $U^{-1}$ conforming to the partitioning of $U$. Then $UU^{-1}=I$ is equivalent to $$ \alpha\nu+u^*c=1, \quad \nu b+D^*u=0, \quad \tilde{U}c=0, \quad \tilde{U}D=I. $$ The last two equations imply that $D=\tilde{U}^{-1}$ and $c=0$, which already implies that $U^{-1}$ is upper triangular.
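A quick numerical sanity check of the statement (of course not a proof); a small Python/numpy sketch:

import numpy as np

rng = np.random.default_rng(1)
U = np.triu(rng.uniform(1.0, 2.0, (6, 6)))       # random nonsingular upper triangular matrix
Uinv = np.linalg.inv(U)
print(np.allclose(np.tril(Uinv, -1), 0.0))       # True: the strictly lower part of the inverse vanishes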
Since we are assuming that $A$ is invertible we know that for any linear independent set of vectors $x_k$ $\>(1\leq k\leq r)$ the vectors $Ax_k$ are again linearly independent.
If the matrix of $A$ with respect to the basis $(e_k)_{1\leq k\leq n}$ is upper triangular then for each $r\in[n]$ one has $$A\bigl(\langle e_1,e_2,\ldots, e_r\rangle\bigr)\subset \langle e_1,e_2,\ldots, e_r\rangle\ ,$$ and on account of the preliminary remark we even can say that $$A\bigl(\langle e_1,e_2,\ldots, e_r\rangle\bigr) = \langle e_1,e_2,\ldots, e_r\rangle\ .$$ Applying $A^{-1}$ on both sides of the last equation gives $$\langle e_1,e_2,\ldots, e_r\rangle =A^{-1}\bigl( \langle e_1,e_2,\ldots, e_r\rangle\bigr)\qquad(1\leq r\leq n)\ ,$$ and this says that the matrix of $A^{-1}$ with respect to the basis $(e_k)_{1\leq k\leq n}$ is upper triangular.
|
Let $u$ be an element of $\mathbb{Z}[\sqrt 5]$ of norm 1, i.e. $u = r + s \sqrt 5$ with $r^2-5s^2 = 1$.
The multiplication by $u$ in $\mathbb{Z}[\sqrt 5]$ turns any element $y$ of norm $44$ into another element $uy$ of norm $44$. View this multiplication operation on $\mathbb{Z}[\sqrt 5]$ as the transformation of the plane $f : (p,q) \rightarrow (pr+5qs,ps+qr)$, and look for its eigenvalues:
$f(\sqrt5,1) = (r\sqrt5+5s,r+\sqrt5s) = u(\sqrt5,1)$, and we have $f(- \sqrt5,1) = \frac 1u (- \sqrt5,1)$ as well.
If $u>1$ this means that $f$ is an operation that, when iterated, takes elements near the line $(p = - \sqrt5 q)$ and moves them over to the line $(p = \sqrt5 q)$. Now you want to find a sector of the plane such that you can reach the whole plane by taking its images under the iterates of $f$ and $f^{-1}$.
Define $g(p,q) = \frac {p + \sqrt5 q}{p - \sqrt5 q}$, which is the ratio of the coordinates of $(p,q)$ in the eigenbasis of $f$. $g(f(p,q)) = \frac {pr+5qs + \sqrt5 (ps+qr)}{pr+5qs - \sqrt5 (ps+qr)} = \frac{(r+\sqrt5 s)(p + \sqrt5 q)}{(r-\sqrt5 s)(p - \sqrt5 q)} = (r+\sqrt5 s)^2 g(p,q)$.
Or alternately, define $g(y) = y/\overline{y}$, so that $g(uy) = uy/\overline{uy} = u^2 g(y)$.
Thus if you look at any point $(p,q)$, you know you can apply $f$ or $f^{-1}$ to turn it into $(p',q')$ such that $g(p',q') \in [1 ; u^2[$
Thus, a suitable sector of the plane is the set of points $(p,q)$ such that $g(p,q) \in [1 ; u^2[$ : if you find all the elements $y$ of norm $44$ such that $g(y) \in [1 ; u^2[$, then this means that the $u^ky$ will cover all the elements of norm $44$
Finally, the good thing is that $ \{y \in \mathbb{Z}[ \sqrt 5] / g(y) \in [1; u^2[, y\overline{y} \in [0; M] \}$ is a finite set, so a finite computation can give you all the elements of norm $44$ you need.
In the case of $p²-10q²=9$, a fundamental unit is $u = 19+6\sqrt{10}$,so replace $\sqrt 5,r,s$ with $ \sqrt {10},19,6$ in everything I wrote above.
In order to find all the solutions, you only need to check potential solutions in the sector of the plane between the lines $g(p,q) = 1$ and $g(p,q) = u^2$.
You can look at the intersection of the line $g(p,q)=1$ with the curve $p^2-10q^2 = 9$. $g(p,q)=1$ implies that $p+\sqrt{10}q = p- \sqrt{10}q$, so $q=0$, and then the second equation has two solutions, $p=3$ and $p= -3$. It so happens that the intersection points have integer coordinates, so they give solutions to the original equation.
Next, the intersection of the line $g(p,q) = u^2$ with the curve will be $u \times (3,0) = f(3,0) = (19*3+60*0, 6*3+19*0) = (57,18)$ and $u \times (-3,0) = (-57,-18)$.
So you only have to look for points on the curve $p^2-10q^2=9$ with integers coordinates in the section of the curve between $(3,0)$ and $(57,18)$ (and the one between $(-3,0)$ and $(-57,-18)$ but it is essentially the same thing).
You can write a naïve program :
for q = 0 to 17 do :
let square_of_p = 9+10*q*q. if square_of_p is a square, then add (sqrt(square_of_p),q) to the list of solutions.
Which will give you the list $\{(3,0) ; (7,2) ; (13,4)\}$. These three solutions, together with their opposites, will generate, using the forward and backward iterations of the function $f$, all the solutions in $\mathbb{Z}^2$.
If you only want solutions with positive coordinates, the forward iterates of $f$ applied to those three solutions are enough.
Also, as Gerry points out, the conjugate of $(7,2)$ generates $(13,4)$ because $f(7,-2) = (13,4)$. Had we picked a sector of the plane symmetric around the $x$-axis, we could have halved the search space thanks to that symmetry, and we would have obtained $\{(7,-2),(3,0),(7,2)\}$ instead.
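To make the generation of solutions concrete, a small Python sketch of the map $f$ applied to the three fundamental solutions (this just reproduces the procedure described above):

r, s = 19, 6                            # fundamental unit u = 19 + 6*sqrt(10), with r^2 - 10 s^2 = 1

def f(p, q):                            # multiplication by u in Z[sqrt(10)]
    return p*r + 10*q*s, p*s + q*r

for p, q in [(3, 0), (7, 2), (13, 4)]:  # fundamental solutions of p^2 - 10 q^2 = 9
    for _ in range(3):
        p, q = f(p, q)
        assert p*p - 10*q*q == 9        # each iterate is again a solution
        print(p, q)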
One loop of this hypnotic animation represents one application of the function $f$. Each dot corresponds to one point of the plane with integer coordinates, and is moved to its image by $f$ in the course of the loop. The points are colored according to their norm (and as you can see, each of them stays on the hyperbolic branch of points sharing its norm), and I've made the yellow-ish points of norm 9 (the solutions of $x^2-10y^2 = 9$) a bit bigger. For example, the point at (3,0) is sent outside the graph, and the point at (-7,2) is sent to (13,4) (almost vanishing).
You can see that there are three points going through $(3,0)$ during the course of one loop. They correspond to three representatives of the three fundamental solutions of the equation. For each yellowish point on the curve $x^2-10y^2=9$, no matter how far along the asymptote it may be, there is an iterate of $f$ or $f^{-1}$ that sends it to one of those three fundamental solutions.
In order to find all fundamental solutions, it is enough to explore only a fundamental portion of the curve (a portion whose iterates by $f$ cover the curve), for example the fundamental portion of the curve between $(7,-2)$ and its image by $f$, $(13,4)$. To find the solutions on that portion, you set $y=-2,-1,0,1,2,3$ and check whether there is an integer $x$ that makes a solution for each of those $y$.
Whichever fundamental portion of the curve you choose, you will find 3 solutions inside it, whose images by $f$ are sent to the next three solutions in the next portion of the curve, and so on.
Now there is a better procedure than the "brute search" I did to get all the solutions. It is an adaptation of the procedure to obtain a fundamental unit:
Start with the equation $x^2-10y^2 = 9$, and suppose we want all the positive solutions.
We observe that we must have $x > 3y$, or else $-y^2 \ge 9$, which is clearly impossible. So, replace $x$ with $x_1 + 3y$. We get the equation $x_1^2 + 6x_1 y - y^2 = 9$. We observe that we must have $y > 6x_1$, or else $x_1^2 \le 9$. In this case we quickly get the three small solutions $(x_1,y) = (1,2),(1,4),(3,0)$ which correspond to the solutions $(x,y) = (7,2),(13,4),(3,0)$. Otherwise, continue and replace $y$ with $y_1 + 6x_1$. We get the equation $x_1^2 - 6x_1y_1 - y_1^2 = 9$. We observe that we must have $x_1 > 6y_1$, or else $-y_1^2 \ge 9$, which is clearly impossible. So, replace $x_1$ with $x_2 + 6y_1$. We get the equation $x_2^2 + 6x_2y_1 - y_1^2 = 9$. But we already encountered that equation so we know how to solve it.
|
Buckling, When Structures Suddenly Collapse
Buckling instability is a treacherous phenomenon in structural engineering, where a small increase in the load can lead to a sudden catastrophic failure. In this blog post, we will investigate some classes of buckling problems and how they can be analyzed.
What Is Buckling?
Have you ever seen the party trick where a full-grown person can balance on an emptied soda can?
Even though the can’s wall is only 0.1 millimeter thick aluminum, it can sustain the load as long as its shape is perfectly cylindrical. The axial stress is below the yield stress, which is easily checked by just dividing the force by the cross section area.
But, if you just press lightly against a point on the cylindrical surface, the can will collapse. The collapse load for the perfect cylinder is higher than the weight of the person performing the trick, while only a slight distortion will significantly decrease the load bearing capacity. This phenomenon is called imperfection sensitivity and is one of the possible pitfalls when designing structures under compression. You can see some cases of collapsed shells with dimensions much larger than soda cans on this page.
Mathematically, buckling is a bifurcation problem. At a certain load level, there is more than one solution. The sketch below shows a bifurcation point and three different possible paths for the solution, branching out at the bifurcation point. The secondary path can be of three fundamentally different types as indicated in the sketch.
A solution with a bifurcation.
If the load carrying capacity continues to increase, the solution can be characterized as stable. This is the least dangerous situation, but if you fail to recognize it, you will probably compute stresses that are too low and thereby underestimate the load carrying capacity. The neutral and unstable paths are more dangerous, since once the peak load is reached, there is no limitation of the displacements.
When there is more than one solution, the question about which one is correct arises. All solutions will satisfy the equations of equilibrium, but in real life, the structure will have to select a path. It will do so based on where the energy can be minimized. The solution you compute using conventional linear theory will in general not be the preferred solution.
You can make the analogy with a ball on a wavy surface. It can be in equilibrium both on the hilltops and in the valleys, but any perturbation will make it drop into the valley. In the same way, even the smallest perturbation to the structure will make it jump to the more energetically preferable state. In real life, there are no perfect structures; there will always be perturbations in geometry, material, or loads.
Linearized Buckling Analysis
The easiest way in which you can approach a buckling problem is by doing a linearized buckling analysis. This is essentially what you do with pen and paper for simple structures in basic engineering courses. Computing the critical loads for compressed struts (like the Euler buckling cases) is one such example.
In COMSOL Multiphysics, there is a special study type called “Linear Buckling”. When performing such a study, you add the external loads with an arbitrary scale. It can be a unit load or the intended operating load. The study consists of two study steps:
1. A Stationary study step where the stress state from the applied load is computed.
2. A Linear Buckling study step. This is an eigenvalue solution where the stress state is used for determining the Critical load factor.
The critical load factor is the factor by which you need to multiply the applied loads to reach the buckling load. If you modeled with operational loads, the critical load factor can be interpreted as a factor of safety. The critical load factor can be smaller than unity, in which case the critical load is smaller than the one you applied. This in itself is not a problem, since the analysis is linear. The critical load factor can even be negative, in which case the lowest load needed for buckling acts in the opposite direction from the one in which you applied the load.
The eigenvalue solution will also give you the shape of the buckling mode. Note that the mode shape is only known to within an arbitrary scale factor, just like an eigenmode in an eigenfrequency analysis.
Before going into detail, some words of warning are appropriate:
- For some real-life structures, the theoretical buckling load obtained using this approach can be significantly higher than what would be encountered in practice due to imperfection sensitivity. This is especially important for thin shells.
- Some structures show significant nonlinearity even before buckling. The reasons can be both geometrical and material nonlinearity.
- Never use symmetry conditions in a buckling analysis. Even though the structure and loads are symmetric, the buckling shape may not be.
The buckling shapes of two symmetric frames with slightly different cross sections and equal symmetric load.
The idea with the linearized buckling analysis is that the problem can be solved as a linear eigenvalue problem. The buckling criterion is that the stiffness matrix is singular, so that the displacements are indeterminate. The applied set of loads is called $\mathbf P_0$, and the critical load state is called $\mathbf P_c = \lambda \mathbf P_0$, where $\lambda$ is a scalar multiplier.
The total stiffness matrix for the full geometrically nonlinear problem, $\mathbf K$, can be seen as a sum of two contributions. One is the ordinary stiffness matrix for a linear problem, $\mathbf K_L$, and the second is a nonlinear addition, $\mathbf K_{NL}$, which depends on the load.
In the linear approximation, $\mathbf K_{NL}$ is proportional to the load, so that the total stiffness at the load level $\lambda \mathbf P_0$ can be written as $\mathbf K = \mathbf K_L + \lambda \mathbf K_{NL}(\mathbf P_0)$.
The stiffness matrix is singular when its determinant is zero. This forms an eigenvalue problem for the parameter $\lambda$.
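Written out schematically in the notation above (a standard-form restatement; the software's internal notation may differ), the eigenvalue problem reads:
\begin{align}
\left(\mathbf K_L + \lambda\, \mathbf K_{NL}(\mathbf P_0)\right)\mathbf u = \mathbf 0, \qquad \det\left(\mathbf K_L + \lambda\, \mathbf K_{NL}(\mathbf P_0)\right) = 0 \nonumber
\end{align}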
The lowest eigenvalue $\lambda$ is the critical load factor, and the corresponding eigenmode, $\mathbf u$, shows the buckling shape.
By default, only one buckling mode corresponding to the lowest critical load is computed. You can select to compute any number of modes, and for a complex structure this can have some interest. There may be several buckling modes with similar critical load factors. The lowest one may not correspond with the most critical one in real life due to, for example, imperfection sensitivity.
In the COMSOL software, you should not mark the Linear Buckling study step as being geometrically nonlinear. The nonlinear terms giving $\mathbf K_{NL}$ are added separately. However, if you do select geometric nonlinearity, you will solve the following problem:
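Schematically, and consistent with the remark about the extra '1' below (the exact expression may differ), that problem is
\begin{align}
\det\left(\mathbf K_L + (\lambda + 1)\, \mathbf K_{NL}(\mathbf P_0)\right) = 0 \nonumber
\end{align}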
The extra '1' in the term $\lambda+1$ is automatically compensated for, so the computed load factor is the same in either case. The best rule is to use the same setting for geometric nonlinearity in both the preload study step and the buckling study step.
You can study an example of a linearized buckling analysis in the model Linear Buckling Analysis of a Truss Tower.
Fixed Loads and Variable Loads
Sometimes, there is one set of loads, $\mathbf Q$, which can be considered as fixed with respect to the buckling analysis, whereas another set of loads, $\mathbf P_0$, will be multiplied by the load factor $\lambda$. Still, the combination of both load systems must be taken into account when computing the critical load factor.
Mathematically, this problem can again be stated as an eigenvalue problem, now with the stiffness contribution from the fixed loads entering without the load factor.
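Schematically, in the same notation as above (not necessarily the exact form used in the software):
\begin{align}
\left(\mathbf K_L + \mathbf K_{NL}(\mathbf Q) + \lambda\, \mathbf K_{NL}(\mathbf P_0)\right)\mathbf u = \mathbf 0 \nonumber
\end{align}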
This kind of problem can also be solved in COMSOL Multiphysics using one of two strategies:
- Run it as a post-buckling analysis, with one set of loads fixed and the other set of loads ramped up. This is straightforward, but unnecessarily heavy from the computational point of view.
- Use a modified version of the Linear Buckling study as described below.
Due to the flexibility of the software, it is not difficult to modify the built-in Linear Buckling study so that it can handle the two separate load systems. To do that, start by adding an extra physics interface, which is used only to compute the stress state caused by the fixed load. Solve for this interface only in the stationary analysis, but not in the buckling step.
The extra physics interface is not active in the Linear Buckling step.
Now, you need to generate the extra stiffness matrix contribution in the buckling study from the stresses that were computed in the second physics interface. You do that by adding the following extra weak contribution:
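Schematically, and up to the sign convention and the exact variable names used by the physics interface, this contribution has the form
\begin{align}
-\,\boldsymbol\sigma^{Q} : \left(\mathbf E - \boldsymbol\epsilon\right) \nonumber
\end{align}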
Here, $\boldsymbol \sigma^{Q}$ is the stress tensor from the fixed load, and $\mathbf E$ and $\boldsymbol \epsilon$ are the Green-Lagrange and linear strain tensors, respectively. In other words, the difference $\mathbf E-\boldsymbol \epsilon$ contains the quadratic terms of the Green-Lagrange strain tensor.
Contribution from the fixed load system for a 2D Solid Mechanics problem.
Now, you can run the study sequence as usual, and the computed critical load factor applies only to the second load system.
Post-Buckling Analysis
With a linearized buckling analysis, you will only find the critical load, but not what happens once it has been reached. In many cases, you are only interested in ensuring the safety against reaching the buckling load, and then a linearized study may be sufficient.
Sometimes, you will, however, need the full deformation history. Some of the reasons for this might be:
- The structure has significant nonlinearity also before the critical load, so a linearized analysis is not applicable.
- You need to investigate imperfection sensitivity.
- The operation of the component intentionally acts in the post-buckling regime.
In order to perform a post-buckling analysis, you will need to load the structure incrementally, and trace the load-deflection history. In the COMSOL software, you can use the parametric continuation solver to do this.
Doing a post-buckling analysis is not a trivial task. An inherent problem is that there are several solutions to a bifurcation problem, so how do you know that the solution you get is as intended? Also, in many cases the buckling instability will manifest itself numerically as an ill-conditioned or singular stiffness matrix, so that the solver will fail to converge unless you use appropriate modeling techniques. Below, I outline some useful approaches.
Symmetric Structures
Consider a simple case like a cantilever beam with a compressive load at the tip. When it reaches the collapse load, it can deflect in an arbitrary direction in 3D, or in two possible directions in 2D. It is, however, unlikely that the solver will converge to any of these solutions, unless the symmetry is disturbed since the symmetric problem will become singular at the buckling load. If you add a small transverse load at the tip, the solution can be traced without problems. An example using this technique can be found in the Large Deformation Beam model.
Snap-Through Problems
In many cases, the structure will “jump” from one state to another. A simple example of this can be displayed by the two-bar truss structure below.
Snap-through analysis of a simple truss structure. At deflection 0.2, the two bars are horizontal.
When the force is increased, it will reach the peak value at A. Numerically, the stiffness matrix will become singular. Physically, the structure will suddenly invert and jump to the state B along the red dotted line. In real life, this will be a dynamic event. The stored strain energy will be released and converted into kinetic energy.
One way of solving this problem is to actually run a time-dependent analysis, where the inertia forces will balance the external load and internal elastic forces. However, such an approach is seldom used, since it is computationally expensive.
To trace the solid green line, you can replace the prescribed load by a prescribed displacement, and instead record the reaction force. Replacing loads with prescribed displacements is a simple method to stabilize models, but the method has limitations:
- It is more or less limited to cases where the external load is a single point load.
- The displacement you prescribe must be monotonically increasing.
To introduce a more general method, consider the shallow cylindrical shell below. It is subjected to a single point load at the center, so in this case it would also be tempting to use displacement control. But, as you can see in the graph below, neither the force nor the displacement under the force is monotonic during the buckling event.
A shallow cylindrical shell and graph of the load versus displacement. Animation of the buckling event.
For problems like this, literature will recommend that you use an arc-length solver. The popular Riks method is one such method, and we are frequently asked why we do not add such a solver to our software. The simple answer is that we do not need one.
A problem like this one is actually quite easy to solve using the continuation solver in COMSOL Multiphysics, once you have learned how to do it. All you need is to figure out a quantity in your model that will increase monotonically, and then use it to drive the analysis. For instance, in the model above, you can select the average vertical displacement of the shell surface as the controlling parameter.
You will then add the load intensity as an extra degree of freedom in the problem, introduced through a Global Equation. The equation to be fulfilled is that the average displacement (defined through an average operator) should be equal to the continuation parameter (called disp in the screenshot below).
Adding a Global Equation to control the load. The Stationary solver is set up to run the continuation sweep.
You can download the full model from the Model Gallery.
The method I just described above is by no means limited to buckling problems in mechanics. It can be used for any unstable problem, like a pull-in analysis of an electromechanical system, for example.
Imperfections
Occasionally, it is necessary to model imperfections explicitly. As an example, there are standards stating that a load must have a certain eccentricity, or that a beam must have a certain assumed initial curvature. When you introduce an imperfection, the load-deflection curve will take a “shortcut” between the branches of the ideal bifurcation curve.
Solution path for a model with initial imperfection.
When you include a disturbance in the model of a geometry that is imperfection sensitive, the peak load may decrease significantly. This is what happens to the soda can in the scenario mentioned earlier, and it is a physical reality, not just an effect of finite element modeling. Thus, it is of the utmost importance to actually take imperfections into account for this class of structures.
Solution path for a model with imperfection sensitivity.
How should you then select an appropriate imperfection in your model?
One good strategy is to first perform a linearized buckling analysis and then use the computed mode shape as imperfection. The idea is that the structure will be most sensitive to this shape. It is, however, not essential that you capture the exact shape, so you could use anything similar. The size of the perturbation should be similar to what you would expect in your real structure when considering manufacturing tolerances and operating conditions.
In some cases, it is also a good idea to compute several buckling modes and try more than one of them if the critical load factors are of the same order of magnitude. The imperfection sensitivity can vary a lot between different buckling modes.
Instead of actually changing the geometry, it is often easier to obtain the perturbation using an additional load. If you do so, you should make sure that the stresses introduced by that load do not significantly change the problem.
|
LaTeX uses internal counters that provide numbering of pages, sections, tables, figures, etc. This article explains how to access and modify those counters and how to create new ones.
Contents
A counter can be easily set to any arbitrary value with \setcounter. See the example below:
\section{Introduction}
This document will present several counting examples, how to reset and access them. For instance, if you want to change the numbers in a list.
\begin{enumerate}
\setcounter{enumi}{3}
\item Something.
\item Something else.
\item Another element.
\item The last item in the list.
\end{enumerate}
In this example \setcounter{enumi}{3} sets the value of the item counter in the list to 3. This is the general syntax to manually set the value of any counter. See the reference guide for a complete list of counters.
All commands changing a counter's state in this section are changing it globally.
Counters in a document can be incremented, reset, accessed and referenced. Let's see an example:
\section{Another section}
This is a dummy section with no purpose whatsoever but to contain text. This section has assigned the number \thesection.
\stepcounter{equation}
\begin{equation} \label{1stequation}
\int_{0}^{\infty} \frac{x}{\sin(x)}
\end{equation}
In this example, two counters are used:
- \thesection prints the current value of the counter section at this point. For further methods to print a counter, take a look at how to print counters.
- \stepcounter{equation} increases by one the value of the counter equation. Other similar commands are \addtocounter and \refstepcounter; see the reference guide.
Further commands to manipulate counters include:
- \counterwithin<*>{<ctr1>}{<ctr2>} adds <ctr2> to the counters that reset <ctr1> when they're incremented. If you don't provide the *, \the<ctr1> will be redefined to \the<ctr2>.\arabic{<ctr1>}. This macro is included in the LaTeX format since April 2018; if you're using an older version, you'll have to use the chngctr package.
- \counterwithout<*>{<ctr1>}{<ctr2>} removes <ctr2> from the counters that reset <ctr1> when they're incremented. If you don't provide the *, \the<ctr1> will be redefined to \arabic{<ctr1>}. This macro is included in the LaTeX format since April 2018; if you're using an older version, you'll have to use the chngctr package.
- \addtocounter{<ctr>}{<num>} adds <num> to the value of the counter <ctr>.
- \setcounter{<ctr>}{<num>} sets the counter <ctr>'s value to <num>.
- \refstepcounter{<ctr>} works like \stepcounter, but you can use LaTeX's referencing system to add a \label and later \ref the counter. The printed reference will be the current expansion of \the<ctr>.
The basic syntax to create a new counter is \newcounter. Below is an example that defines a numbered environment called example:
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\newcounter{example}[section]
\newenvironment{example}[1][]{\refstepcounter{example}\par\medskip
\textbf{Example~\theexample. #1} \rmfamily}{\medskip}
\begin{document}
This document will present...
\begin{example}
This is the first example. The counter will be reset at each section.
\end{example}
Below is a second example
\begin{example}
And here's another numbered example.
\end{example}
\section{Another section}
This is a dummy section with no purpose whatsoever but to contain text. This section has assigned the number \thesection.
\stepcounter{equation}
\begin{equation} \label{1stequation}
\int_{0}^{\infty} \frac{x}{\sin(x)}
\end{equation}
\begin{example}
This is the first example in this section.
\end{example}
\end{document}
In this LaTeX snippet the new environment example is defined; this environment uses three counting-specific commands:
- \newcounter{example}[section] creates a new counter called example that is reset at every new section; omit the optional parameter if you don't want your defined counter to be automatically reset.
- \refstepcounter{example} steps the counter example and makes it available for referencing with a \label afterwards.
- \theexample prints the formatted value of the counter example.
For further information on user-defined environments see the article about defining new environments
You can print the current value of a counter in different ways:
- \theCounterName prints the value in the format set for that counter, e.g. 2.1 for the first subsection in the second section.
- \arabic prints the value as an Arabic number, e.g. 2.
- \value returns the raw value, to be used by other commands (e.g. \setcounter{section}{\value{subsection}}).
- \alph prints the value as a lowercase letter, e.g. b.
- \Alph prints the value as an uppercase letter, e.g. B.
- \roman prints the value as a lowercase Roman numeral, e.g. ii.
- \Roman prints the value as an uppercase Roman numeral, e.g. II.
- \fnsymbol prints the value as a footnote symbol, e.g. †.
\theCounterName is the macro responsible for printing CounterName's value in a formatted manner. For new counters created by \newcounter it gets initialized as an Arabic number. You can change this by using \renewcommand. For example, if you want to change the way a subsection counter is printed to include the current section in italics and the current subsection in uppercase Roman numbers, you could do the following:
\renewcommand\thesubsection{\textit{\thesection}.\Roman{subsection}}
\section{Example}
\subsection{Example}\label{sec:example:ssec:example}
This is the subsection \ref{sec:example:ssec:example}.
Default counters in LaTeX
The default counters in LaTeX are grouped by usage: counters for the document structure, for floats, for footnotes and for the enumerate environment.
Counter manipulation commands:
- \addtocounter{CounterName}{number}
- \stepcounter{CounterName}
- \refstepcounter{CounterName}: works like \stepcounter, but makes the counter visible to the referencing mechanism (\ref{label} returns the counter value)
- \setcounter{CounterName}{number}
- \newcounter{NewCounterName}: if you want the NewCounterName counter to be reset to zero every time that another OtherCounterName counter is increased, use: \newcounter{NewCounterName}[OtherCounterName]
- \value{CounterName}: returns the value of the counter, to be used by other commands (e.g. \setcounter{section}{\value{subsection}}).
- \theCounterName: prints the formatted value of the counter, for example \thechapter, \thesection, etc. Note that this might result in more than just the counter; for example, with the standard definitions of the article class, \thesubsection will print Section.Subsection (e.g. 2.1).
|
$\newcommand{\al}{\alpha}$ $\newcommand{\ga}{\gamma}$ $\newcommand{\e}{\epsilon}$
Let $X,Y$ be Riemannian manifolds, such that $\dim(X) > \dim(Y)$.
I am trying to prove the following statement (mentioned by Gromov in his book on metric geometry):
There is no arcwise isometry (i.e length preserving map) from $X$ to $Y$.
However, the naive attempt to prove this hits an obstacle which I do not see how to pass:
Suppose by contradiction $f:X \to Y$ is an arcwise isometry. Then $f$ is $1$-Lipschitz, hence differeniable almost everywhere (by Rademacher's theorem).
Question: Let $p \in X$ be a point where $f$ is differentiable. Is $df_p:T_pX \to T_{f(p)}Y$ an isometry?
(This will imply the claim of course).
Here is what happens when trying to show this naively:
Let $v \in T_pX$, and let $\al:[0,1] \to X$ be a smooth path s.t $\al(0)=p,\dot \al(0)=v$.
Then $|\dot \alpha(s)| = |\dot \alpha(0)|+\Delta(s)=|v|+ \Delta(s)$ where $\lim_{s \to 0} \Delta(s) =\Delta(0)= 0$, thus
$$ (1) \, \, L(\alpha|_{[0,t]})=\int_0^t |\dot \alpha(s)| ds=t|v|+\int_0^t \Delta(s) ds.$$
$\al$ is Lipschitz and $f$ is $1$-Lipschitz, so $\ga:= f \circ \al$ is Lipschitz. By theorem 2.7.6 in the book ``A course in metric geometry'' (Burago, Burago, Ivanov) it follows that:
$$ (2) \, \, L(\ga|_{[0,t]})=\int_0^t \nu_{\ga}(s) ds, $$
where $\nu_{\ga}(s):=\lim_{\e \to 0} \frac{d\left( \ga(s),\ga(s+\e) \right)}{|\e|}$ is the speed of $\ga$.
Note that $\nu_{\ga}(0)= |\dot \ga(0)|=|df_p(v)|$.
Using the assumption $f$ preserves lengths, we would now like to compare $(1),(2)$ and take derivatives at $t=0$, to get $$|v|=\frac{d}{dt}\left. \right|_{t=0} L(\al|_{[0,t]})=\frac{d}{dt}\left. \right|_{t=0} L(\ga|_{[0,t]})=|df_p(v)|.$$
However, it seems that the last equality is false in general; even when the speed of a Lipschitz curve and the derivative of its length exist at a point, they do not need to be equal.
It seems that the only thing we can say is that $ \frac{d}{dt}\left. \right|_{t=0} L(\ga|_{[0,t]}) \ge \nu_{\ga}(0) =|df_p(v)|$, so we are left with $|v| \ge |df_p(v)|$ which doesn't help.
Is there a way to "fix" the proof above?
|
Siril processing tutorial
* Convert your images in the FITS format Siril uses (image import)
* Work on a sequence of converted images
* Pre-processing images
* Registration (Global star alignment)
* → Stacking
Stacking
The final step to do with Siril is to stack the images. Go to the "stacking" tab, indicate if you want to stack all images, only selected images or the best images regarding the value of FWHM previously computed. Siril proposes several algorithms for stacking computation.
Sum Stacking
This is the simplest algorithm: each pixel in the stack is summed using 32-bit precision, and the result is normalized to 16-bit. The increase in signal-to-noise ratio (SNR) is proportional to $\sqrt{N}$, where $N$ is the number of images. Because of the lack of normalisation, this method should only be used for planetary processing.
Average Stacking With Rejection
* Percentile Clipping: this is a one-step rejection algorithm ideal for small sets of data (up to 6 images).
* Sigma Clipping: this is an iterative algorithm which rejects pixels whose distance from the median is farther than two given values in sigma units ($\sigma_{low}$, $\sigma_{high}$).
* Median Sigma Clipping: this is the same algorithm except that the rejected pixels are replaced by the median value of the stack.
* Winsorized Sigma Clipping: this is very similar to the Sigma Clipping method but it uses an algorithm based on Huber's work [1] [2].
* Linear Fit Clipping: this is an algorithm developed by Juan Conejero, main developer of PixInsight [2]. It fits the best straight line ($y=ax+b$) of the pixel stack and rejects outliers. This algorithm performs very well with large stacks and images containing sky gradients with differing spatial distributions and orientations.
These algorithms are very efficient to remove satellite/plane tracks.
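As a rough illustration of how rejection stacking works (a simplified Python/NumPy sketch of plain sigma clipping, not Siril's actual code and not the Winsorized variant):

import numpy as np

def sigma_clip_stack(frames, sigma_low=4.0, sigma_high=3.0, iters=3):
    # frames: array of shape (n_frames, height, width), already registered
    data = np.asarray(frames, dtype=np.float64)
    keep = np.ones(data.shape, dtype=bool)
    for _ in range(iters):
        masked = np.where(keep, data, np.nan)
        center = np.nanmedian(masked, axis=0)   # per-pixel median over the stack
        sigma = np.nanstd(masked, axis=0)       # per-pixel standard deviation
        keep &= (data >= center - sigma_low * sigma) & (data <= center + sigma_high * sigma)
    return np.nanmean(np.where(keep, data, np.nan), axis=0)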
Median Stacking
This method is mostly used for dark/flat/offset stacking. The median value of the pixels in the stack is computed for each pixel. As this method should only be used for dark/flat/offset stacking, it does not take into account shifts computed during registration. The increase in SNR is proportional to $0.8\sqrt{N}$.
Pixel Maximum Stacking
This algorithm is mainly used to construct long exposure star-trails images. Pixels of the image are replaced by pixels at the same coordinates if intensity is greater.
Pixel Minimum Stacking
This algorithm is mainly used for cropping a sequence by removing black borders. Pixels of the image are replaced by pixels at the same coordinates if intensity is lower.
In the case of the NGC7635 sequence, we first used the "Winsorized Sigma Clipping" algorithm in the "Average stacking with rejection" section, in order to remove satellite tracks ($\sigma_{low}=4$ and $\sigma_{high}=3$).
The output console thus gives the following result:
14:33:06: Pixel rejection in channel #0: 0.181% - 1.184% 14:33:06: Pixel rejection in channel #1: 0.151% - 1.176% 14:33:06: Pixel rejection in channel #2: 0.111% - 1.118% 14:33:06: Integration of 12 images: 14:33:06: Pixel combination ......... average 14:33:06: Normalization ............. additive + scaling 14:33:06: Pixel rejection ........... Winsorized sigma clipping 14:33:06: Rejection parameters ...... low=4.000 high=3.000 14:33:07: Saving FITS: file NGC7635.fit, 3 layer(s), 4290x2856 pixels 14:33:07: Execution time: 9.98 s. 14:33:07: Background noise value (channel: #0): 9.538 (1.455e-04) 14:33:07: Background noise value (channel: #1): 5.839 (8.909e-05) 14:33:07: Background noise value (channel: #2): 5.552 (8.471e-05)
After that, the result is saved in the file named below the buttons, and is displayed in the grey and colour windows. You can adjust levels if you want to see it better, or use the different display mode. In our example the file is the stack result of all files, i.e., 12 files.
The images above picture the result in Siril using the Auto-Stretch rendering mode. Note the improvement of the signal-to-noise ratio compared with the result given for one frame in the previous step (take a look at the sigma value). The increase in SNR is $21/5.1 = 4.11$, to be compared with the theoretical $\sqrt{12} \approx 3.46$, and you should try to improve this result by adjusting $\sigma_{low}$ and $\sigma_{high}$.
Now should start the process of the image with crop, background extraction (to remove gradient), and some other processes to enhance your image. To see processes available in Siril please visit this page. Here an example of what you can get with Siril:
[1] Peter J. Huber and E. Ronchetti (2009), Robust Statistics, 2nd Ed., Wiley.
[2] Juan Conejero, ImageIntegration, PixInsight Tutorial.
|
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$; how could I find the intervals?
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$.I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1)
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
|
This article provides answers to the following questions, among others:
- What are the objectives of the “Charpy impact test”?
- Why does “notch impact energy” represent a measure of the toughness of a test specimen?
- What external influences affect the notch impact energy?
- What are “upper shelf”, “lower shelf” and “transition temperature”?
- Which lattice structures show a pronounced upper shelf and lower shelf and which do not?
- What is a ductile fracture, sliding fracture, brittle fracture and cleavage fracture?
- How does the fracture speed influence the notch impact energy?
Introduction
The elongation at break and reduction in area obtained by the tensile test can give an impression of the toughness of a material, but this only applies to a (quasi-)static load and only at room temperature. In many cases, however, components are also subjected to a shock load and not always at room temperature. This applies, for example, to shock absorbers and their bearings.
These components must withstand shock loads both in summer at high temperatures and in winter at extremely cool temperatures. The idealized boundary conditions of the tensile test cannot reflect this reality. Components with good toughness behaviour in the tensile test become brittle at low temperatures and lead to premature material failure. For this reason, the so-called Charpy impact test or Charpy V-notch test is used to test the toughness of a material under an impact-like load as a function of temperature.
The Charpy impact test (Charpy V-notch test) is used to measure the toughness of materials under impact load at different temperatures!
Test setup and test procedure
In the Charpy impact test, a notched specimen is abruptly subjected to bending stress. The specimen is usually 55 mm long and has a square cross-section with an edge length of 10 mm. The notch in the middle has a V-shaped geometry (in special cases also U-shaped). The notch provides a defined predetermined breaking point, which generates a triaxial stress state in the notch base. The notched specimen is placed into the support of a pendulum impact testing machine.
A deflected pendulum hammer is then released from a certain height. At the lowest point of the circular trajectory, the striker of the hammer hits the specimen on the side opposite the notch (impact velocity usually between 5.0 and 5.5 m/s). The sample is fractured by the striker and absorbs part of the kinetic energy of the hammer. With the remaining residual energy, the hammer swings out to a certain height. Due to the kinetic energy absorbed by the sample, however, it does not reach its initial height again.
The deformation energy and thus the final height achieved depends on the toughness of the specimen. The tougher the material, the more it has to be deformed until it breaks. The required deformation energies are correspondingly high and the pendulum energy is strongly absorbed. The hammer then only reaches a low final height after fracturing the specimen.
Very brittle specimens, on the other hand, break almost without deformation and therefore require only a low deformation energy. The pendulum hammer swings almost at the initial level. Such a comparison between a tough and brittle fracture behavior is only possible if identical specimen geometries are used.
The deformation energy required for fracturing the specimen is called notch impact energy \(K\) (\(KV\): specimens with V-notch; \(KU\): specimens with U-notch). The notch impact energy can therefore be determined from the difference between the potential energy of the pendulum hammer at the beginning \(W_b\) and the potential energy at the end \(W_e\).
The notch impact energy indicates the energy required to fracture a specimen and is therefore a measure of the toughness of a test specimen! Tough samples have higher notch impact energy values than brittle samples!
At a given initial height \(H\) and mass \(m\) of the pendulum hammer, the notch impact energy depends only on the final height \(h\). The notch impact energy can be read off directly from a dial gauge by a drag indicator, which is carried along from the lowest point as soon as the pendulum hammer hits the specimen.
\begin{align}
\label{kerbschlagarbeit} &K = W_b - W_e = m \cdot g \cdot H - m \cdot g \cdot h = m \cdot g \cdot \left(H-h \right) \\[5px]
&\boxed{K = m \cdot g \cdot \left(H-h \right)} ~~~~~[K_V]=\text{J} ~~~~~\text{notch impact energy} \\[5px]
\end{align}
Figure: Notched impact test specimens
The notch impact energy determined in this way strongly depends on the cross-sectional area of the specimen. Large cross-sections always require higher deformation energies than smaller ones, even if under certain circumstances a more brittle behavior is present. Comparisons in toughness by the notch impact energies are therefore only possible if they were obtained from identical specimen geometries. If at all, a comparison with different geometries is only possible if the notch impact energy \(K\) is related to the cross-section \(A_K\) of the specimen. This quotient of notch impact energy and cross-sectional area is often referred to as notch toughness \(\alpha\), although in most cases this term is used identically to that of notch impact energy.
\begin{align}
\label{kerbschlagzaehigkeit} &\boxed{\alpha = \frac{K}{A_K}} ~~~~~[\alpha]=\frac{\text{J}}{\text{mm²}} ~~~~~\text{notch toughness} \\[5px] \end{align}
Note that even notch toughness \(\alpha\) is not a pure material parameter, as it is not dependent on the material alone. The notch impact energy and thus the notch toughness is also influenced by the shape of the specimen cross-section and in particular by the shape of the notch and the speed at which the hammer hits the specimen (more on this in the section on fracture types). Thus, notch impact energy and notch toughness are purely technological parameters that are not included in any dimensioning calculations.
Notch impact energy values are technology parameters and can only be compared with each other if they were obtained from identical specimen geometries with identical boundary conditions (e.g. impact speed, temperature, notch shape, etc.)!
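As a small numerical illustration of the two formulas above (the hammer mass, drop heights and specimen cross-section in this sketch are made-up example values, not prescribed by any standard):

g = 9.81        # m/s^2, gravitational acceleration
m = 20.0        # kg, pendulum hammer mass (assumed)
H = 1.60        # m, initial height of the hammer (assumed)
h = 0.90        # m, height reached after fracturing the specimen (assumed)

K = m * g * (H - h)        # notch impact energy in J
A_K = 8.0 * 10.0           # mm^2, remaining cross-section of a V-notch specimen (8 mm x 10 mm, assumed)
alpha = K / A_K            # notch toughness in J/mm^2
print(f"K = {K:.1f} J, alpha = {alpha:.3f} J/mm^2")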
Upper shelf, lower shelf and transition temperature
However, the mentioned influences on notch impact energy, such as fracture speed, temperature and notch shape, are only of minor significance with regard to the actual objective of the Charpy impact test. This is because the V-notch test serves less to compare different materials with each other than to qualitatively compare the toughness of a single material at different temperatures!
In this way, it is possible, for example, to determine at what temperature a material becomes brittle in order to specify the limits of use of the material. For this purpose, the Charpy impact test must only be carried out sufficiently often on samples of the same material at different temperatures. If this is done in this way, especially materials with a body-centered cubic lattice structure (bcc) such as ferritic steels and materials with hexagonal lattice structures (hex) show a particularly strong dependence of toughness on temperature.
While these materials have high toughness at high temperatures, they become brittle at low temperatures. Many plastics show such a behaviour as well, which also begin to become brittle at low temperatures, while they are relatively tough at high temperatures. This behaviour can be illustrated graphically by plotting the notch impact energy as a function of the temperature.
In body-centered cubic and hexagonal lattice structures, the notch impact energy values are very strongly dependent on temperature! For such materials, the brittleness behavior is therefore strongly influenced by temperature!
The temperature range at which the specimen has low notch impact energy values and thus behaves brittle is referred to as lower shelf. Accordingly, the upper shelf indicates the temperature range at which the material behaves relatively tough. Between the lower and the upper shelf there is a transition range, which is characterized by strongly scattering values.
The reason for the large scattering in the transition area lies in small microstructural differences between the individual samples, which cause the material to become brittle at slightly higher or lower temperatures. Therefore, the toughness scatters very strongly despite identical temperatures. Due to the steeply sloping curve from upper shelf to lower shelf, this transition range is also referred to as steep front.
Due to the continuous curve from the upper to the lower shelf, no specific temperature can be assigned to this transition. Nevertheless, different approaches are used to define such a transition temperature in order to identify the temperature below which embrittlement of the material is to be expected.
The transition temperature is frequently defined by the notch impact energy itself. The transition temperature \(T_t\) is often defined as the temperature at which the specimen has an average notch impact energy of 27 J (\(T_{t,27J}\)). However, values of 40 J or 60 J can also be used to define the transition temperature (\(T_{t,40J}\) or \(T_{t,60J}\)). It is also possible to define the transition temperature as the temperature at which the notch impact energy corresponds to 50 % of the upper shelf.
The transition temperature is the temperature below which a material sample shows a rather brittle behaviour in the Charpy impact test and above the transition temperature a rather tough one!
In comparison to materials with body-centered cubic lattice structures, the temperature has hardly any influence on the toughness for materials with face-centered cubic lattice structures such as aluminium. With such materials there is no pronounced lower or upper shelf and therefore no steep front! Some materials behave relatively tough over the entire temperature range, such as aluminium, or show relatively brittle behaviour, such as hardened steels (not tempered) or lamellar graphite castings.
Materials with face-centered cubic lattice structures generally do not show a pronounced upper or lower shelf; they behave either brittle or tough over a wide temperature range!
Testing of state structure
Toughness is not only influenced by temperature but also by the structural state of the material. Quenched and tempered steels and fine-grained structural steels, for example, are characterised by their special toughness. Compared to normalized steels, this remains unchanged even at lower temperatures. The steep front in quenched and tempered steels therefore shifts to lower temperatures. In this way, the Charpy impact test can also be used to check heat treatments or structural conditions.
The reverse effect on the position of the steep front in steels is caused by aging. Aging leads to embrittlement and consequently shifts the transition temperature to higher values. Thus, the influence of aging effects can also be examined in the Charpy impact test. Hardened steels also show a shift in transition temperature to higher values due to their low toughness.
The Charpy impact test can also be used to check state structures (heat treatment, aging, etc.)!
In summary, the Charpy impact test may have the following objectives:
- Determination of the transition temperature (onset of possible embrittlement)
- Verification of heat treatments
- Examination of aging effects
Indication of notch impact energy values
In addition to the notch impact energy value, the indication of the test result shall also include the notch shape and possibly the energy capacity of the pendulum impact tester (\(W_b\)). The energy capacity can be omitted if it corresponds to the standard value of 300 J. For example, the indication “KV 150 = 40 J” means that the notch impact energy was 40 J in total when using a 150-Joule pendulum impact tester and a V-shaped notched specimen. If the notch impact energy had been obtained on a specimen with a U-shaped notch and a standard pendulum impact tester of 300 J, the indication would have been: “KU = 40 J”.
Fracture types
The fracture behaviour of the specimens used cannot only be assessed on the basis of the notch impact energy. Even the form of the fracture provides information about the toughness or embrittlement of the specimen.
A very tough behaviour can be seen by a strongly deformed fracture surface. Often the ductile sample is not even divided into two parts but only pulled through the two supports in a strongly deformed state. Such a fracture on the upper shelf is therefore also called a deformation fracture or sliding fracture. The fracture surface of steels appears matt grey. Under the microscope, the fracture surface shows a honeycomb-like structure.
Deformation fracture (sliding fracture) is the fracture of a tough specimen in which the fracture surface shows very strong deformation (high notch impact energy values)!
On the lower shelf, however, there is hardly any deformation. The sample is usually separated into two halves when the hammer strikes. Such a brittle fracture is also referred to as cleavage fracture. The fracture surface appears shiny whitish. In the transition temperature range, the fracture surface often shows characteristics of both types of fracture, i.e. a strongly deformed area followed by an area with less deformation. This type of fracture is then also referred to as mixed fracture.
Brittle fracture (cleavage fracture) is the fracture of a brittle specimen in which the fracture surface shows only slight deformation (low notch impact energy values)!
Influence of impact speed on notch impact energy
As far as impact load and specimen geometry are concerned, the Charpy impact test is carried out under precisely defined conditions. Therefore, the results cannot easily be applied to real situations. The deformation speed (impact speed) also has a major influence on the fracture behaviour. If the pendulum hammer hits the specimen at higher speeds, brittle fracture is favoured and the notch impact energies decrease. Conversely, lower deformation speeds are more likely to lead to a deformation fracture with correspondingly higher notch impact energy values.
Due to high impact speeds, the stress in the material increases so rapidly that the bond strength (cohesion strength) of the atomic planes is exceeded before the dislocations could have moved through the material to a significant extent. Note that dislocations do not move infinitely fast but can only move at the speed of sound! A plastic deformation, which is ultimately based on dislocation movements, therefore does not take place at very high deformation speeds. The material breaks practically without deformation by tearing apart the atomic planes (cleavage fracture). Preference is given to those atomic layers that are relatively loosely packed.
Brittle fracture is favoured by high deformation speeds!
At slow deformation speeds, however, the dislocations can move over long distances and deform the material when the critical shear stress is reached. The material is then plastically deformed before it fractures (deformation fracture).
|
Let’s understand the concept of the per unit system by solving an example. In the one-line diagram below, the impedances of various components in a power system, typically derived from their nameplates, are presented. The task now is to normalize these values using a common base.
Now that you have carefully examined the system and its parameters, the equivalent impedance diagram for the above system would look something like the following.
Resistive impedance for most components has been ignored. Rotating machines have been replaced with a voltage source behind their internal reactance. Capacitive effects between lines and to ground are ignored as well.
To obtain the new normalized per unit impedances, first we need to figure out the base values (Sbase, Vbase, Zbase) in the power system. Following steps will lead you through the process.
Step 1: Assume a system base
Assume a system wide S_{base} of 100MVA. This is a random assumption and chosen to make calculations easy when calculating the per unit impedances.
So, S_{base} = 100MVA
Step 2: Identify the voltage base
Voltage base in the system is determined by the transformer. For example, with a 22/220kV voltage rating of the T1 transformer, the V_{base} on the primary side of T1 is 22kV while on the secondary side it is 220kV. It does not matter what the voltage ratings are of the other components encompassed by the V_{base} zone.
See figure below for the voltage bases in the system.
Step 3: Calculate the base impedance
The base impedance is calculated using the following formula:
Z_{base}=\frac{{kV_{base}}^2}{S_{base MVA}} Ohms (1)
For T-Line 1: Z_{base}=\frac{(220)^2}{100}= 484 Ohms
For T-Line 2: Z_{base}=\frac{(110)^2}{100}= 121 Ohms
For 3-phase load: Z_{base}=\frac{(11)^2}{100}= 1.21 Ohms
Step 4: Calculate the per unit impedance
The per unit impedance is calculated using the following formulas:
Z_{p.u.}=\frac{Z_{actual}}{Z_{base}} (2)
Z_{p.u._{new}}=Z_{p.u._{old}}(\frac{S_{base_{new}}}{S_{base_{old}}})(\frac{V_{base_{old}}}{V_{base_{new}}})^2 (3)
The voltage ratio in equation (3) is not equivalent to transformers voltage ratio. It is the ratio of the transformer’s voltage rating on the primary or secondary side to the system nominal voltage on the same side.
For T-line 1 using equation (2): X_{l1_{p.u.}}=\frac{48.4}{484}= 0.1 pu
For T-line 2 using equation (2): X_{l2_{p.u.}}=\frac{65.43}{121}= 0.5 pu
For 3-Phase load:
Power Factor: \cos^{-1}(0.6)=\angle{53.13}
Thus, S_{3\phi}(load)=57\angle{53.13}, and
Z_{act}=\frac{(V_{rated})^2}{\overline{S}^*}= \frac{10.45^2}{57\angle{-53.13}}
= 1.1495+j1.53267 Ohms
Per unit impedance of 3-phase load using equation (2)= \frac{1.1495+j1.5326}{1.21} = 0.95+j1.2667 pu
For the generator, the new per unit reactance using equation (3): X_{sg}= 0.18(\frac{100}{90})(\frac{22}{22})^2 = 0.2 pu
For transformer T1: X_{t1}= 0.1(\frac{100}{50})(\frac{22}{22})^2 = 0.2 pu
For transformer T2: X_{t2}= 0.06(\frac{100}{40})(\frac{220}{220})^2 = 0.15 pu
For transformer T3: X_{t3}= 0.064(\frac{100}{40})(\frac{22}{22})^2 = 0.16 pu
For transformer T4: X_{t4}= 0.08(\frac{100}{40})(\frac{110}{110})^2 = 0.2 pu
For Motor, X_{sm}= 0.185(\frac{100}{66.5})(\frac{10.45}{11})^2 = 0.25 pu
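If you would like to check the arithmetic, here is a small Python sketch of equations (1), (2) and (3); the function names are my own, and the printed values correspond to components worked out above:

S_BASE = 100.0   # MVA, assumed system-wide base from Step 1

def z_base(kv):
    """Equation (1): base impedance in ohms for a given kV base."""
    return kv ** 2 / S_BASE

def z_pu(z_actual_ohm, kv):
    """Equation (2): per unit impedance from an actual impedance in ohms."""
    return z_actual_ohm / z_base(kv)

def rebase(z_pu_old, s_old, s_new, v_old, v_new):
    """Equation (3): convert a per unit impedance to new S and V bases."""
    return z_pu_old * (s_new / s_old) * (v_old / v_new) ** 2

print(z_pu(48.4, 220))                          # T-Line 1: 0.1 pu
print(rebase(0.18, 90, S_BASE, 22, 22))         # generator: 0.2 pu
print(rebase(0.185, 66.5, S_BASE, 10.45, 11))   # motor: approximately 0.25 pu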
If you think you learnt something today then you will love the eBook I prepared for you. It has 10 additional and unique per unit problems. Go ahead, preview it. Get the complete version for only $1.99. Thanks for supporting this blog.
To view full load amps due to motor load and inductive load at Bus 2, see this post.
Summary
- Assume a Sbase for the entire system.
- The Vbase is defined by the transformer and any off-nominal tap setting it may have.
- Zbase is derived from the Sbase and Vbase.
- The new per unit impedance is obtained by converting the old per unit impedance on old base values to new ones. See equations (2) and (3).
|
A partial attempt to address this issue is made by invoking the idea of quantum discord. The basic idea of quantum discord is the environment needn't be in a specific state prior to interacting with the system. All that is necessary is that it factorizes and there is no correlation.
Let's start with the simple example of a qubit, and an environment which is initially in a maximally mixed state, not a pure one. Assume the pointer states are $|0\rangle$ and $|1\rangle$, and it's the same no matter what state the environment is in, and that the pointer states are exact. This is only a toy model after all. Suppose $$|0\rangle\otimes|e\rangle \to |0\rangle \otimes U |e\rangle$$ and $$|1\rangle\otimes|e\rangle \to |1\rangle \otimes V |e\rangle$$ where U and V are unitary matrices acting upon the environment.
Now you might think, if the environment is in a maximally mixed state before interacting, it will still be maximally mixed after interacting, so how can there be decoherence? It's possible, however.
In block matrix form, an initial qubit state $\alpha|0\rangle + \beta |1\rangle$ transforms as $${1\over N}\begin{pmatrix}|\alpha|^2 I & \alpha\beta^* I\\\alpha^*\beta I & |\beta|^2 I\end{pmatrix} \to {1\over N}\begin{pmatrix}|\alpha|^2 I & \alpha\beta^* UV^{-1}\\\alpha^*\beta VU^{-1} & |\beta|^2 I\end{pmatrix}$$ for the density matrix, where N is the dimensionality of the state space of the environment. Taking the partial trace over the environment, we get $$\begin{pmatrix}|\alpha|^2 & \alpha\beta^* Tr[UV^{-1}]/N\\ \alpha^*\beta Tr[VU^{-1}]/N& |\beta|^2\end{pmatrix}.$$ For generic unitary matrices, the two traces divided by N scale as $1/\sqrt{N}$, assuming some very mild statistical distribution properties.
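As a quick numerical illustration of that last claim — a sketch only, assuming (as the text does) that the eigenvalue phases of $UV^{-1}$ behave like independent, roughly uniform random phases; for strictly Haar-random matrices the suppression is in fact even stronger:

```python
import numpy as np

# Model UV^{-1} by its N eigenvalue phases, taken as independent uniform angles.
# Tr[UV^{-1}] is then a sum of N random unit phasors, of typical size sqrt(N),
# so |Tr[UV^{-1}]| / N shrinks like 1/sqrt(N) and the off-diagonal terms decohere.
rng = np.random.default_rng(0)
for n in (10, 100, 1000, 10000):
    trials = [abs(np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n)).sum()) / n
              for _ in range(200)]
    print(n, round(np.mean(trials), 4), round(1.0 / np.sqrt(n), 4))
```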
How can a maximally mixed environment record any information about the qubit? It can't, but it can still decohere the qubit!
Physically, consider a molecule decohered by light shining on it and scattering off it. If most of the photons are coming from only one direction, e.g. sunlight coming only from the direction of the sun at a certain spectral distribution, and the photons are scattered off in different directions at a different spectral distribution, we can see how the scattered photons carry off information about the location of the molecule, its energy level prior to the scattering, and the difference between its energy levels (assuming it's an inelastic scattering).
However, place the molecule in a closed box filled with blackbody radiation in thermal equilibrium. The blackbody radiation can still decohere the position of the molecule and its energy levels even though the blackbody photons can't carry any information about the molecule!
The OP's question is about a different case though, where the different environmental states have different pointer states. This has also been covered by Zurek. Assume a dilute gas of environmental particles scatter off the molecule from different directions and velocities. The pointer states depend upon the direction and velocity of the scattering probe, as can be shown by an examination of the S-matrix. What happens in this case after a number of collisions is thermalization, not decoherence in the form of dephasing in a specific pointer state basis.
That's still not what the OP's question is about. The previous paragraph is for an environment in a thermal state. The OP's question is about an environment in a superposition which is nonthermal. There is also only one interaction, and not multiple scatterings. I'm afraid the question is still open as it stands.
|
Existence of pullback attractors for the non-autonomous suspension bridge equation with time delay
School of Mathematics and Statistics, Northwest Normal University, Lanzhou 730070, China
We investigate the long-time behavior of solutions of the suspension bridge equation when the forcing term contains some hereditary characteristics. The existence of the pullback attractor is shown by using the contractive function method.
Keywords: Suspension bridge equation, time delay, pullback attractor, pullback $ \mathcal{D}- $asymptotically compact, contractive function.
Mathematics Subject Classification: Primary: 35B40, 37B55; Secondary: 35B41, 35K30.
Citation: Suping Wang, Qiaozhen Ma. Existence of pullback attractors for the non-autonomous suspension bridge equation with time delay. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2019221
|
A language $L'$ is $PSPACE$-hard if for every $L \in PSPACE$ we have $L \le_p L'$. Here $L \le_p L'$ means that $L$ is polynomial-time reducible to $L'$. Why do we use time reductions instead of space reductions in this situation?
It's never interesting to use reductions that are as powerful as the complexity class you're talking about. With the exception of $\emptyset$ and $\Sigma^*$, every problem in $\mathbf{PSPACE}$ is $\mathbf{PSPACE}$-complete under poly-space reductions.
To see this, let $X\in \mathbf{PSPACE}\setminus\{\emptyset,\Sigma^*\}$, and choose any fixed pair of strings $y\in X$ and $n\notin X$. Now, for any language $L\in\mathbf{PSPACE}$, we can reduce $L$ to $X$ using the function $$f(w)=\begin{cases}\ y&\text{if }w\in L\\ \ n&\text{if }w\notin L.\end{cases}$$ Since $L\in\mathbf{PSPACE}$, we can compute $f$ in polynomial space.
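To make the point concrete, here is the reduction as a small Python sketch; `decide_L_in_polyspace`, `y` and `n` are hypothetical placeholders:

```python
def make_reduction(decide_L_in_polyspace, y, n):
    """Build f: map w to a fixed yes-instance y or no-instance n of X."""
    def f(w):
        # The reduction itself decides membership in L, i.e. it does all the work;
        # this is allowed because deciding L only needs polynomial space.
        return y if decide_L_in_polyspace(w) else n
    return f
```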
This argument applies for any reasonable complexity class and it's the reason why, for example, we don't talk about problems being $\mathbf{P}$-complete under poly-time reductions: instead, we use something like log-space reductions, there.
In general, hardness or completeness results are "more impressive" using weaker reductions, since the reduction is able to do less of the work. In the case of $\mathbf{PSPACE}$-completeness under poly-space reductions, the reduction is actually doing all the work.
|
Is the congruence group $\Gamma(2)$ generated by the upper triangular matrix $(1, 2; 0, 1)$ and the lower triangular matrix $(1, 0; 2, 1)$, or does one need to also throw in the negation of the identity? To be specific, how do I check that the negation of the identity is not a word in the above matrices?
Yes, you need to throw in $-I$. Check that the set of all matrices of the form $$\left(\begin{matrix} a&b\\\ c&d \end{matrix}\right)$$ with $b$ and $c$ even and $a\equiv d\equiv1$ (mod $4$) is a subgroup of the modular group.
There is already an answer posted, but I can't resist making two remarks. The first gives an alternate proof that also works for $\Gamma_n(p)$ for all $n$ and $p$ (and also gives a minimal generating set for these groups, at least when $n \geq 3$). The second says a little more about $\Gamma_2(2)$. By the way, $p$ doesn't have to be prime.
1) Let us define a surjective homomorphism $f : \Gamma_n(p) \rightarrow \mathfrak{sl}_n(\mathbb{Z}/p\mathbb{Z})$. An element $M \in \Gamma_n(p)$ is of the form $M = \mathbb{I}_n + p A$ for some matrix $A$. Define $f(M) = A$ mod $p$. Amazingly enough, this is a homomorphism! Indeed, if $N = \mathbb{I}_n + p B$, then $$f(MN) = f((\mathbb{I}_n + p A)(\mathbb{I}_n + p B)) = f(\mathbb{I}_n + p(A+B) + p^2 AB) = A+B$$ modulo $p$. This is sort of like a derivative! It is an easy exercise to check that the image of $f$ lies in $\mathfrak{sl}_n(\mathbb{Z}/p\mathbb{Z})$.
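As a small numerical sanity check of this additivity — a sketch only; the random integer matrices below ignore the determinant-one condition, which plays no role in the computation:

```python
import numpy as np

def f(M, p):
    """For M = I + p*A, return A mod p."""
    n = M.shape[0]
    return ((M - np.eye(n, dtype=int)) // p) % p

p, n = 3, 4
rng = np.random.default_rng(1)
A = rng.integers(-5, 6, (n, n))
B = rng.integers(-5, 6, (n, n))
M = np.eye(n, dtype=int) + p * A
N = np.eye(n, dtype=int) + p * B
# f(MN) = A + B mod p, since MN = I + p(A+B) + p^2 AB
assert np.array_equal(f(M @ N, p), (f(M, p) + f(N, p)) % p)
```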
To check that $f$ is surjective, let $e_{ij}$ for $i \neq j$ be the identity matrix with a $1$ inserted into the $(i,j)$ position. Then $f(e_{ij}^p)$ is the matrix with a $1$ in the $(i,j)$ position and zeros elsewhere. To get the diagonal matrices, define $f_i$ for $1 \leq i < n$ to be the result of inserting the 2x2 matrix $(1+p,p;-p,1-p)$ into the identity matrix with its upper left entry at position $(i,i)$. Then $f(f_i)$ is the matrix with a $1$ at positions $(i,i)$ and $(i,i+1)$, a $-1$ at positions $(i+1,i)$ and $(i+1,i+1)$, and zeros elsewhere.
The existence of $f$ implies immediately that $\Gamma_n(p)$ is not generated by the elementary matrices $e_{ij}^p$. A theorem of Lee and Szczarba says that in fact $f$ gives the abelianization of $\Gamma_n(p)$ for $n \geq 3$. Thus for $n \geq 3$ we have $[\Gamma_n(p),\Gamma_n(p)] = ker\ f = \Gamma_n(p^2)$. One can check (I've never seen this in print) that $\Gamma_n(p)$ is generated by the $e_{ij}^p$ and the $f_i$ when $n \geq 3$. For the case $n=2$, see the answers to my question here.
2) In fact, we have $\Gamma_2(2) \cong F_2 \times (\mathbb{Z}/2\mathbb{Z})$. Here $F_2$ is a rank $2$ free group generated by $e_{12}^2$ and $e_{21}^2$ and $\mathbb{Z}/2\mathbb{Z}$ is generated by the central element $(-1,0;0,-1)$. This can be proved in many ways : I leave it as a fun exercise!
This follows from the fact that the image of $\Gamma(2)$ in $\text{PSL}_2(\mathbb{Z})$ is freely generated by the two matrices you describe. There is a geometric proof of this fact based on the fact that $\Gamma(2)$ acts properly discontinuously on the upper half plane $\mathbb{H}$ which I sketch here.
|
Article Gauge theories on compact toric surfaces, conformal field theories and equivariant Donaldson invariants
We show that equivariant Donaldson polynomials of compact toric surfaces can be calculated as residues of suitable combinations of Virasoro conformal blocks, by building on the AGT correspondence between N=2 supersymmetric gauge theories and two-dimensional conformal field theory. Talk presented by A.T. at the conference "Interactions between Geometry and Physics - in honor of Ugo Bruzzo's 60th birthday", 17-22 August 2015, Guarujá, São Paulo, Brazil (http://salafrancesco.altervista.org/wugo2015/tanzini.pdf), mostly based on Bawane et al. (0000) and Bershtein et al. (0000).
A recently proposed correspondence between 4-dimensional N=2 SUSY SU(k) gauge theories on R^4/Z_m and SU(k) Toda-like theories with Z_m parafermionic symmetry is used to construct four-point N=1 super Liouville conformal block, which corresponds to the particular case k=m=2.
The construction is based on the conjectural relation between moduli spaces of SU(2) instantons on R^4/Z_2 and algebras like \hat{gl}(2)_2\times NSR. This conjecture is confirmed by checking the coincidence of number of fixed points on such instanton moduli space with given instanton number N and dimension of subspace degree N in the representation of such algebra.
We consider the AGT correspondence in the context of the conformal field theory M(p, p')\otimes H, where M(p,p') is the minimal model based on the Virasoro algebra labeled by two co-prime integers p,p' and H is the free boson theory based on the Heisenberg algebra. Using Nekrasov's instanton partition functions without modification to compute conformal blocks in M(p, p')\otimes H leads to ill-defined or incorrect expressions.
We propose a procedure to make these expressions well defined and check this proposal in two cases: (1) the 1-point conformal block on the torus, when the operator insertion is the identity, and (2) the 6-point Ising conformal block on the sphere that involves six Ising magnetic operators.
This proceedings publication is a compilation of selected contributions from the “Third International Conference on the Dynamics of Information Systems” which took place at the University of Florida, Gainesville, February 16–18, 2011. The purpose of this conference was to bring together scientists and engineers from industry, government, and academia in order to exchange new discoveries and results in a broad range of topics relevant to the theory and practice of dynamics of information systems. Dynamics of Information Systems: Mathematical Foundation presents state-of-the art research and is intended for graduate students and researchers interested in some of the most recent discoveries in information theory and dynamical systems. Scientists in other disciplines may also benefit from the applications of new developments to their own area of study.
A model for organizing cargo transportation between two node stations connected by a railway line which contains a certain number of intermediate stations is considered. The movement of cargo is in one direction. Such a situation may occur, for example, if one of the node stations is located in a region which produces raw material for the manufacturing industry located in another region, where the other node station is located. The organization of freight traffic is performed by means of a number of technologies. These technologies determine the rules for taking on cargo at the initial node station, the rules of interaction between neighboring stations, as well as the rule of distribution of cargo to the final node stations. The process of cargo transportation follows the specified rule of control. For such a model, one must determine possible modes of cargo transportation and describe their properties. This model is described by a finite-dimensional system of differential equations with nonlocal linear restrictions. The class of solutions satisfying the nonlocal linear restrictions is extremely narrow. This results in the need for a "correct" extension of solutions of the system of differential equations to a class of quasi-solutions having the distinctive feature of gaps at a countable number of points. It was possible, numerically using the fourth-order Runge-Kutta method, to build these quasi-solutions and determine their rate of growth. Let us note that, on the technical side, the main complexity consisted in obtaining quasi-solutions satisfying the nonlocal linear restrictions. Furthermore, we investigated the dependence of the quasi-solutions and, in particular, of the sizes of gaps (jumps) of the solutions on a number of parameters of the model characterizing the rule of control, the technologies for transportation of cargo and the intensity of cargo supply at a node station.
Let k be a field of characteristic zero, let G be a connected reductive algebraic group over k and let g be its Lie algebra. Let k(G), respectively, k(g), be the field of k- rational functions on G, respectively, g. The conjugation action of G on itself induces the adjoint action of G on g. We investigate the question whether or not the field extensions k(G)/k(G)^G and k(g)/k(g)^G are purely transcendental. We show that the answer is the same for k(G)/k(G)^G and k(g)/k(g)^G, and reduce the problem to the case where G is simple. For simple groups we show that the answer is positive if G is split of type A_n or C_n, and negative for groups of other types, except possibly G_2. A key ingredient in the proof of the negative result is a recent formula for the unramified Brauer group of a homogeneous space with connected stabilizers. As a byproduct of our investigation we give an affirmative answer to a question of Grothendieck about the existence of a rational section of the categorical quotient morphism for the conjugating action of G on itself.
Let G be a connected semisimple algebraic group over an algebraically closed field k. In 1965 Steinberg proved that if G is simply connected, then in G there exists a closed irreducible cross-section of the set of closures of regular conjugacy classes. We prove that in arbitrary G such a cross-section exists if and only if the universal covering isogeny $\widehat{G} \to G$ is bijective; this answers Grothendieck's question cited in the epigraph. In particular, for char k = 0, the converse to Steinberg's theorem holds. The existence of a cross-section in G implies, at least for char k = 0, that the algebra k[G]^G of class functions on G is generated by rk G elements. We describe, for arbitrary G, a minimal generating set of k[G]^G and that of the representation ring of G and answer two of Grothendieck's questions on constructing generating sets of k[G]^G. We prove the existence of a rational (i.e., local) section of the quotient morphism for arbitrary G and the existence of a rational cross-section in G (for char k = 0, this has been proved earlier); this answers the other question cited in the epigraph. We also prove that the existence of a rational section is equivalent to the existence of a rational W-equivariant map $T \dashrightarrow G/T$ where T is a maximal torus of G and W the Weyl group.
|
V S Sunder
Articles written in Proceedings – Mathematical Sciences
Volume 113 Issue 1 February 2003 pp 15-51
We obtain (two equivalent) presentations — in terms of generators and relations — of the planar algebra associated with the subfactor corresponding to (an outer action on a factor by) a finite-dimensional Kac algebra. One of the relations shows that the antipode of the Kac algebra agrees with the ‘rotation on 2-boxes’.
Volume 116 Issue 4 November 2006 pp 373-373
Volume 116 Issue 4 November 2006 pp 443-458 Operator Theory/Operator Algebras/Quantum Invariants
To a semisimple and cosemisimple Hopf algebra over an algebraically closed field, we associate a planar algebra defined by generators and relations and show that it is a connected, irreducible, spherical, non-degenerate planar algebra with non-zero modulus and of depth two. This association is shown to yield a bijection between (the isomorphism classes, on both sides, of) such objects.
Volume 122 Issue 4 November 2012 pp 547-560
We investigate a construction (from Kodiyalam Vijay and Sunder V S, J. Funct. Anal. 260 (2011) 2635–2673) which associates a finite von Neumann algebra $M(\Gamma, \mu)$ to a finite weighted graph $(\Gamma, \mu)$. Pleasantly, but not surprisingly, the von Neumann algebra associated to a `flower with 𝑛 petals’ is the group von Neumann algebra of the free group on 𝑛 generators. In general, the algebra $M(\Gamma, \mu)$ is a free product, with amalgamation over a finite-dimensional abelian subalgebra corresponding to the vertex set, of algebras associated to subgraphs `with one edge’ (or actually a pair of dual edges). This also yields `natural’ examples of (i) a Fock-type model of an operator with a free Poisson distribution; and (ii) $\mathbb{C}\oplus\mathbb{C}$-valued circular and semi-circular operators.
|
In short echo time spectroscopy sequences, macromolecular (MM) signals may strongly influence the quantification of the metabolite spectra they underlie. In this work we present a method for simulating an MM basis set that is tailored toward chosen sequences and sequence parameters, with the aim of improving the accuracy of metabolite measurements in vivo. We show that utilizing a simulated macromolecule basis set, while considering the relaxation behavior of individual macromolecular resonances, can produce metabolite quantification results of similar quality to those of a dedicatedly measured MM basis set when applied to the same spectra of interest.
Voigt lines were simulated by taking into account the T2-relaxation of MM peaks 1 in order to estimate the minimum FWHM of the Voigt line-shapes. The lines were then normalized to the amplitude of their respective maxima. A MM baseline, acquired with double inversion recovery (DIR) metabolite-cycled (MC) semiLASER 2 (TI1/TI2/TE/TR = 2360/625/24/8000 ms), was used as prior information to scale the amplitudes of each peak and to position the centers of the Voigt lines. See figure 1 for a workflow diagram.
Bloch simulations were performed using the T1- and T2-relaxation times of individual macromolecular peaks 1,3 for the same DIR-semiLASER sequence, and the relative magnetization (Mz/M0) was calculated for each peak. Voigt lines were then scaled to yield theoretical MM spectra without relaxation effects:
$$Spectrum_{MM,\,non-relaxed} = \Sigma^{N=13}_{i=1}(\frac{MM^{i}_{baseline,\,prior}}{Relaxation_{DIR}})$$
The calculated relaxation-free spectrum was then attenuated considering the TE and TR of a chosen localization scheme and the T1 and T2 relaxation times of individual macromolecular peaks, to simulate the MM baseline underlying experimental metabolite spectra.
$$Spectrum_{MM,\,final} = Relaxation_{sequence} \cdot Spectrum_{MM,\,non-relaxed}$$
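As a rough sketch of these two scaling steps, the snippet below back-corrects prior peak amplitudes for an assumed DIR attenuation and then re-attenuates them for a target sequence. All relaxation times, amplitudes and the exact signal expressions are placeholders, not values or formulas from this work.

```python
import numpy as np

def dir_attenuation(t1, t2, ti1, ti2, te, tr):
    # Illustrative double-inversion-recovery signal expression (assumed form).
    mz = 1 - 2*np.exp(-ti2/t1) + 2*np.exp(-(ti1 + ti2)/t1) - np.exp(-tr/t1)
    return mz * np.exp(-te/t2)

def sequence_attenuation(t1, t2, te, tr):
    # Illustrative saturation-recovery attenuation for the target sequence.
    return (1 - np.exp(-tr/t1)) * np.exp(-te/t2)

amp_prior = np.array([1.0, 0.6, 0.8])   # hypothetical per-peak amplitudes from the DIR baseline
t1 = np.array([300.0, 400.0, 500.0])    # ms, hypothetical
t2 = np.array([20.0, 25.0, 30.0])       # ms, hypothetical

non_relaxed = amp_prior / dir_attenuation(t1, t2, ti1=2360, ti2=625, te=24, tr=8000)
final = sequence_attenuation(t1, t2, te=24, tr=6000) * non_relaxed
print(final)
```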
The simulated MM baseline spectra were Fourier transformed into a FID and read in as a basis vector in LCModel (v-6.3) 4 to accompany the fitting of in vivo acquired metabolite spectra.
The DIR-MC-semiLASER sequence was used to validate this strategy by direct comparison of a simulated relaxation-corrected MM baseline model against an experimentally acquired MM spectrum for identical sequence parameters. Both the simulated and the experimental MM baseline models were also used for metabolite quantification of a MC-semiLASER acquisition (TE/TR = 24/6000 ms) (Figure 3).
Further simulations were created for data with varying inversion times of an IR-MC-STEAM sequence (TI = 20, 100, 400, 700, 1000, 1400, 2500 ms) and validated against respective experimental results 5 (Figure 4).
Simulation of MM baselines was done for GM-rich regions, as the longitudinal relaxation times available during this work were acquired in a GM-rich region 3. The VeSPA 6 simulation tool was used for the creation of metabolite basis vectors.
Comparison of the spectra in figure 3 shows minor differences between the simulated (left) and experimentally acquired (right) MM baselines for fitting to the aforementioned semiLASER acquisition. Fit reliability is improved for fitting the MM resonances underlying NAA and NAAG, as well as for the spectral region prior to the -CH3 of tCr at approximately 2.8 ppm. The region encompassing 1.1-1.8 ppm is prone to have lipid resonances; with the experimentally acquired MM baseline LCModel fitted more lipids, while with the simulated MM baseline LCModel fitted fewer lipid resonances, indicating that MMs in this range are well accounted for with the simulated MM baseline. The experimentally acquired baseline fitted MM09 better than the simulated model and also fitted the resonance at 3.65 ppm better. This discrepancy likely comes from errors in the T2 measurements that were used in the determination of the simulated MM Voigt lines.
In figure 4, relaxation-corrected spectra show how the MM contribution changes significantly depending on the inversion time chosen for an IR-MC-STEAM acquisition. Thus, it is important to account for individual MM contributions to spectra in order to reliably quantify the signal of spectra acquired with short TE. Applying a simulated basis set to IR-MC-STEAM data of varying TIs (figure 5) shows promise in accounting for underlying MM resonances.
1. Borbath T, Murali Manohar S and Henning A (October-2018): Estimation of Tp2 Relaxation Times of Macromolecules in Human Brain Spectra at 9.4 T, MRS Workshop 2018 Metabolic Imaging, Utrecht, The Netherlands.
2. Giapitzakis IA, Avdievich NI and Henning A (August-2018) Characterization of macromolecular baseline of human brain using metabolite cycled semi-LASER at 9.4T Magnetic Resonance in Medicine 80(2) 462-473.
3. Murali-Manohar S, Wright AM and Henning A (October-2018): Challenges in estimating T1 Relaxation Times of Macromolecules in the Human Brain at 9.4T, MRS Workshop 2018 Metabolic Imaging, Utrecht, The Netherlands.
4. Provencher SW. LCModel & LCMgui user’s manual. LCModel Version. 2014 Jun 15:6-2.
5. Wright AM, Murali Manohar S and Henning A (October-2018): Longitudinal Relaxation Times of 5 Metabolites in vivo at 9.4T: preliminary results, MRS Workshop 2018 Metabolic Imaging, Utrecht, The Netherlands.
6. Soher BJ, Semanchuk P, Todd D, Steinberg J, Young K. VeSPA: integrated applications for RF pulse design, spectral simulation and MRS data analysis. InProc Int Soc Magn Reson Med 2011 (Vol. 19, p. 1410).
Figure 2, top: a simulated MM baseline for a DIR-MC-semiLASER sequence using TE/TR = 24/6000 ms and TI1/TI2 = 2360/625 ms.
Figure 2, bottom: an experimentally acquired MM baseline with the same DIR-MC-semiLASER sequence. The bottom image shows a strong residual peak from creatine at 3.925 ppm. The differences in the lineshapes between both spectra could be due to errors in the T2 values.
Figure 3 left: a relaxation corrected simulated MM baseline model was used in the LCModel fit of metabolite spectra of a MC-semiLASER sequence acquired in vivo with TE/TR = 24/6000ms.
Figure 3 right: an experimentally acquired MM baseline used in the LCModel fit for the same in vivo acquired MC-semiLASER data. Points of interest are marked with arrows where there appear to be significant differences in the fitting routine dependent on which baseline was used.
Figure 4, left: simulated MM inversion series for an IR-MC-STEAM protocol used at 9.4 T for measuring the T1-relaxation of metabolites 5. Due to the relaxation effect of the MM, the use of a relaxation-corrected MM baseline model may provide more accurate results for metabolite T1-relaxation times.
Figure 4, right: respective IR-MC-STEAM series to determine metabolite T1 relaxation times.
|
Today’s post reviews a recent talk given at the Simons Institute for the Theory of Computing in their current workshop series Computational Challenges in Machine Learning.
tl;dr: Sufficiently regular functions (roughly: having Lipschitz, invertible derivatives) can be represented as compositions of decreasing, small perturbations of the identity. Furthermore, critical points of the quadratic loss for these target functions are proven to be always minima, thus ensuring loss-reducing gradient descent steps. This makes this class of functions "easily" approximable by Deep Residual Networks.
Deep networks are deep compositions of non-linear functions
\[ h = h_m \circ h_{m - 1} \circ \ldots \circ h_1 . \]
Depth provides effective, parsimonious representations of features and nonlinearities provide better rates of approximation (known from representation theory). Even more, shallow representations of some functions are necessarily more complex than deeper ones (no flattening theorems). 1 But optimization is hard with many layers: conventional DNNs show increasingly poorer performance at the same task with growing depth, even though they can approximate a strictly larger set of functions. Deep Residual Networks 2 overcome this limitation by introducing skip-paths connecting the input of layer $j$ to the output of layer $j+1$:
\[ f_{j + 1} (x_j) = w_{j + 1} \sigma (w_j x_j) + x_j, \]
where $\sigma$ is typically a ReLU.
3 Why? First consider the following fact about linear maps:
Theorem 1: 4 Any $A$ with $\det A > 0$ can be written as a product of perturbations of the identity with decreasing norm, i.e.
\[ A = (I + A_m) \cdots (I + A_1)\]
with the spectral norm of each perturbation fulfilling $| A_i | =\mathcal{O} (1 / m)$.
Consider now a linear Gaussian model $y = Ax + \varepsilon$ with $\varepsilon \sim \mathcal{N} (0, \sigma^2 I)$. If one sets to find $A_i$ which minimize the quadratic loss
$$ \mathbb{E} | (I + A_m) \cdots (I + A_1) x - y |^2, $$
over all $A_{i}$ such that $I + A_{i}$ is near the identity, i.e. among all $|A_{i}| \ll 1$ in some suitable norm, then it can be shown that every stationary point is a global optimum, that is: if one has $\nabla _{A_1, \ldots, A_m} R (A^{\star}_{1}, \ldots, A_{m}^{\star}) = 0$, then one has found the true $A = (I + A^{\star}_{m}) \cdots (I + A^{\star}_{1})$ (according to the model). Note that this is a property of stationary points in this region and does not say that one can attain these points by some particular discretization of gradient descent, i.e. this is neither a generalization bound nor a global result.
The goal of the talk is to show that similar statements hold in the non-linear case as well. The main result is (roughly):
Theorem 2: The computation of a “ smooth invertible map” $h$ can be spread throughout a deep network
\[ h = h_m \circ h_{m - 1} \circ \ldots \circ h_1, \]
so that all layers compute near-identity functions:
\[ | h_i - Id |_L =\mathcal{O} \left( \frac{\log m}{m} \right). \]
Here the $| f |_L$ semi-norm is the optimal Lipschitz constant of $f$, and "smoothly invertible" means invertible and differentiable, with Lipschitz derivative and inverse.
How does this relate to DRNs? Recall that a standard unit of the kind $A_i \sigma (B_i x)$ is known to be a universal approximator so any of the $h_i$ in the decomposition of Theorem 2 can be approximated by units of DRNs of the form
\[ \hat{h}_i(x) = x + A_i \sigma (B_i x), \]
with the perturbations "becoming flatter and flatter" with $i$. This means that DRNs allow for a compositional representation of functions where the terms are increasingly "flat" as one goes deeper. Note however that this is only proved for functions which are invertible and differentiable, with Lipschitz derivative and inverse.
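For concreteness, here is a minimal sketch (illustrative random weights only, not a trained network) of stacking such near-identity units $\hat{h}_i(x) = x + A_i \sigma(B_i x)$ with an $O(\log m / m)$ scale on the perturbation:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, A, B):
    """One unit x -> x + A @ relu(B @ x): a small perturbation of the identity."""
    return x + A @ relu(B @ x)

rng = np.random.default_rng(0)
m, d = 20, 5                  # depth and width, chosen arbitrarily for the sketch
scale = np.log(m) / m         # the near-identity scaling suggested by Theorem 2
x = rng.normal(size=d)
for _ in range(m):
    A = scale * rng.normal(size=(d, d))
    B = rng.normal(size=(d, d))
    x = residual_block(x, A, B)
print(x)
```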
The functions $h_i$ of the theorem can be explicitly constructed via adequate scalings $a_1, \dots, a_m \in \mathbb{R}$ such that:
\[ h_1 (x) = h (a_1 x) / a_1, h_2 (h_1 (x)) = h (a_2 x) / a_2, \ldots, h_m \circ \cdots \circ h_1 (x) = h (a_m x) / a_m, \]
and the $a_i$ small enough that $h_i \approx Id$.
Analogously to the linear case, for the class of functions which may be represented as such nested, near-identity compositions of maps, stationary points of the risk function
\[ Q (h) = \frac{1}{2} \mathbb{E} | h (X) - Y |_2^2 \]
are global minima. 5 Then
Theorem 3: for any function
\[ h = h_m \circ h_{m - 1} \circ \ldots \circ h_1, \]
where $| h_i - Id |_L \leqslant \varepsilon < 1$, it holds that for all $i$:
\[ | D_{h_i} Q (h) | \geqslant \frac{(1 - \varepsilon)^{m - 1}}{| h - h^{\ast} |} (Q (h) - Q (h^{\ast})) . \]
This means that if we start with any $h$ in this class of functions near the identity which is suboptimal (i.e. $Q (h) - Q (h^{\ast}) > 0$), then the (Fréchet) gradient is bounded below and a gradient descent step can be taken to improve the risk.
Note that this is in the whole space of such nested functions, not in the particular parameter space of some instance $\hat{h}$: it can happen that the optimal direction in the whole space is “orthogonal” to the whole subspace allowed by changing weights in the layers of $\hat{h}$. Or it can (and will!) happen that there are local minima among all possible parametrizations of any layer $\hat{h}$. The following statement remains a bit unclear to me:
We should expect suboptimal stationary points in the ReLU or sigmoid parameter space, but these cannot arise because of interactions between parameters in different layers; they arise only within a layer.
Basically: if we are able to optimize in the space of architectures, we should always be able to improve performance (assuming invertible $h$ with Lipschitz derivative and so on).
1. See e.g. Why does deep and cheap learning work so well?
2. Deep Residual Learning for image recognition.
3. Deep sparse rectifier neural networks.
4. Identity matters in Deep Learning.
5. Recall that the minimizer of the risk with quadratic loss is the $L^2$ projection, i.e. the conditional expectation $h^{\ast} (x) = \mathbb{E} [Y|X = x]$.
|
Determine the spot size of our lasers and laser diode modules from user-supplied working distances. The calculator provides circular or elliptical spot size approximations based on the 1/e² beam diameter and beam divergence; for lasers, the beam diameter is given for the TEM00 mode. Spot size visibility varies based on ambient light conditions.
$$ D_O = D_I + 2 \left[ L \cdot \tan{\left( \frac{\theta _I}{2} \right)} \right] $$
$D_I$: Input Beam Diameter; $D_O$: Output Beam Diameter; $\theta_I$: Beam Divergence; $L$: Working Distance
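A minimal sketch of the formula, assuming diameters and distances in millimetres and the full-angle divergence $\theta_I$ in milliradians:

```python
import math

def output_beam_diameter(d_in_mm, divergence_mrad, working_distance_mm):
    """D_O = D_I + 2 * L * tan(theta_I / 2)."""
    theta_rad = divergence_mrad * 1e-3
    return d_in_mm + 2.0 * working_distance_mm * math.tan(theta_rad / 2.0)

# e.g. a 1 mm beam with 1 mrad full-angle divergence, 10 m away:
print(output_beam_diameter(1.0, 1.0, 10_000.0))  # ~11 mm
```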
|
A convexified energy functional for the Fermi-Amaldi correction
1.
Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, United States
2.
Department of Mathematics and Computer Science, Benedict College, Columbia, SC 29204, United States
$\mathcal{E}$, we prove that $\mathcal{E}$ has a unique minimizing density $(\rho_{1},\rho_{2})$ when $N_{1}+N_{2}\leq Z+1$ and $N_{2}$ is close to $N_{1}$.
Keywords: $L^{1}$ constrained minimization, ground state electron density, Fermi-Amaldi correction, convex minorant, spin polarized system, degree theory, Thomas-Fermi theory.
Mathematics Subject Classification: Primary: 35J47, 35J91, 49S05; Secondary: 81Q99, 81V55, 92E1.
Citation: Gisèle Ruiz Goldstein, Jerome A. Goldstein, Naima Naheed. A convexified energy functional for the Fermi-Amaldi correction. Discrete & Continuous Dynamical Systems - A, 2010, 28 (1): 41-65. doi: 10.3934/dcds.2010.28.41
|
Let $\Omega$ be a bounded smooth domain and define $\mathcal{C} = \Omega \times (0,\infty)$. Below, $x$ refers to the variable in $\Omega$ and $y$ to the variable in $(0,\infty)$. The map $\operatorname{tr}_\Omega:H^1(\mathcal C) \to L^2(\Omega)$ refers to the trace operator ($\operatorname{tr}_\Omega u = u(\cdot,0)$ for smooth functions).
How do I know that the constant functions are in that bigger space (let's just take $\epsilon =1$)? They obviously have finite $H^\epsilon(\mathcal{C})$ norm but that is not enough.
We can approximate (see this) the constant function $1$ by $u_n$, where $u_n(x,y) = 1$ for $y \in [0,n)$, $u_n(x,y) = 0$ for $y \in [2n, \infty)$, and $u_n(x,y)$ linearly interpolates for $y \in (n,2n)$.
This is Cauchy with respect to the $H^\epsilon$ norm (edit: it's not Cauchy), but how to prove that $1$ is in $H^\epsilon$? I thought we could say $\lVert u_n - 1 \rVert_{H^\epsilon(\mathcal C)} \to 0$ but this is not sensible since $\operatorname{tr}_\Omega$ is only defined for $H^1(\mathcal C)$ functions, and $1$ is not in $H^1(\mathcal C)$.
|
Consider the following situation:
There are two boundaries -- one is denoted using grey lines, and the other is denoted using black lines. The boundaries are numerically represented using "vertices", with edges assumed to be connecting these vertices, but otherwise not necessary for the calculation.
At a given vertex (say, vertex $i$), we can write ODEs of the form:
\begin{align} \frac{\textrm{d}x_i}{\textrm{d}t} &= A_i(...) \\ \frac{\textrm{d}y_i}{\textrm{d}t} &= B_i(...) \end{align}
Here, the $...$ are placeholders for various variables that could be, for example:
the position of other vertices on the same boundary (relevant for instance, if we are modelling "spring-like" connections between vertices) the position of vertices on the other boundary (relevant for instance, if we are modelling repulsive forces between the boundaries, as shown in the situation diagram above as an example) various other variables (for example, maybe a mystery chemical signal which determines the strength of the forces acting on the vertex in some particular direction)
In order to model the dynamics of the boundaries, one can integrate the vertex ODEs. For a simple example as shown in the diagram above, we can simply consider the ODEs for all the vertices together as one large system, and then integrate this system of ODEs numerically using an integrator.
A possibly invalid integration simplification?
If the boundaries rarely ever interact (i.e. they are rarely so close as to actually affect each other, depending on how the model is set up), then one could attempt another approximation, in order to lessen the computational burden on the integrator:
Say we know the state of the system (containing both grey and black boundaries) at some time $t$. In order to compute the state of the system at $t + \Delta t$, pick one boundary at random and call it boundary $P$ for "picked". Evolve the chosen boundary from time $t$ to $t + \Delta t$ using numerical integration, but consider the state of the other boundary (call it $Q$ for "not picked", or "not $P$") to be held fixed during this calculation. So, when we need information regarding $Q$ to evolve the state of $P$, we only use information about the state of $Q$ at time $t$. Then evolve the state of boundary $Q$ via integration from time $t$ to $t + \Delta t$, and if any information regarding $P$ is needed for the calculation, only use information regarding $P$ from time $t$.
Summarizing, this method aims to reduce the "burden" on the integrator by integrating small pieces of the global system at a time, while assuming that the rest of the system has not evolved.
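A minimal sketch of this staggered scheme, with placeholder right-hand-side functions and a plain explicit Euler substep standing in for whichever integrator is actually used:

```python
def step_boundary(state, frozen_other, rhs, dt, substeps=10):
    """Integrate one boundary over [t, t + dt] while the other boundary is held at time t."""
    h = dt / substeps
    for _ in range(substeps):
        state = state + h * rhs(state, frozen_other)  # explicit Euler substep (placeholder)
    return state

def advance(P, Q, rhs_P, rhs_Q, dt):
    """One global step: each boundary only sees the other's state from the start of the step.
    P and Q are assumed to be numpy arrays of vertex coordinates, shape (n_vertices, 2)."""
    P_old, Q_old = P.copy(), Q.copy()
    P_new = step_boundary(P_old, Q_old, rhs_P, dt)  # Q frozen at time t
    Q_new = step_boundary(Q_old, P_old, rhs_Q, dt)  # P frozen at time t
    return P_new, Q_new
```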
The problem of "volume" exclusion
Maybe the boundaries denote special areas that never intersect: for instance, both boundaries might represent separate membranes which can come very close to one another, but do not intersect. I call this "volume exclusion" (two "volumes" cannot intersect), but perhaps there is better terminology.
In any case, one might implement volume exclusion by assigning repulsive forces to the vertices representing each boundary, where these repulsive forces depend on the distance between a particular vertex and the closest point away from it on the other boundary (see the situation diagrammed above for an example).
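One possible (assumed, not prescribed) form of such a repulsion rule, sketched as a force that pushes a vertex away from the nearest vertex of the other boundary with a short-ranged law; the force law and cutoff are illustrative only:

```python
import numpy as np

def repulsion_force(vertex, other_boundary, strength=1.0, cutoff=0.5):
    """vertex: shape (2,); other_boundary: shape (M, 2) array of the other boundary's vertices."""
    diffs = vertex - other_boundary
    dists = np.linalg.norm(diffs, axis=1)
    k = int(np.argmin(dists))
    d = dists[k]
    if d == 0.0 or d >= cutoff:
        return np.zeros(2)
    # short-ranged push directed away from the other boundary, vanishing at the cutoff
    return strength * (1.0 / d**2 - 1.0 / cutoff**2) * (diffs[k] / d)
```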
There are several issues with implementing such a rule however, with one in particular being how "intersections" (due to numerical error) are handled. If one vertex does end up in the volume contained by another boundary, how do we correct for it without breaking the integration process?
Are there any resources which explore such problems?
|
Consider $GL_2$ as the affine group scheme with coordinate ring ${\mathbb Z}[x_1,x_2,x_3,x_4,y]/(\det\left(\begin{array}{cc}x_1& x_2\\ x_3& x_4\end{array}\right)y-1)$. The group scheme $PGL_2$ is then given by the subring $S$ of $GL_1$-invariants, which is the subring generated by all monomials of the form $x_ix_jy$. In this way, we define $PGL_2( R )=Hom(S,R)$ for any ring $R$. By Hilbert 90 we know that the sequence $$ 0\to GL_1( R )\to GL_2( R )\to PGL_2( R )\to 1 $$ is exact if $R$ is a field. From that one can derive that it stays exact if $R$ is factorial. But what about a general commutative ring with unit? Is it always exact? If not, is there a handy description of all rings for which it is?
The answer is no. Here is an explicit example. Let $R=\mathbb{Z}[\sqrt{-5}]$.
Consider the matrix $$\left(\begin{array}{cc}1+\sqrt{-5}& 2\\ 2& 1-\sqrt{-5}\end{array}\right).$$
It represents an element of $PGL_2(R)$ that is not in the image of $GL_2(R)$.
The motivation for this example is that the ideal $(2,1+\sqrt{-5})$ is not principal in $R$ and this should be relevant because the next term in the long exact sequence is $H^1(R,\mathbb{G}_m)$ (and so the sequence written in the question will be exact whenever this $H^1$ vanishes).
I would love to see a more conceptual proof of the failure of $GL_2(R)\to PGL_2(R)$ to be surjective.
For a topological perspective, take $R$ to be the tensor product of $\mathbb R$ with the coordinate ring of $PGL_2$. surjectivity would imply that the map of topological spaces $GL_2(\mathbb R)\to PGL_2(\mathbb R)$ has a right inverse, which it does not because the induced map of fundamental groups $\mathbb Z\to \mathbb Z$ is $x\mapsto 2x$. Or you can argue similarly over $\mathbb C$, where the map of fundamental groups is $\mathbb Z\to \mathbb Z/2$.
|
A matrix whose number of rows does not equal the number of columns is called a rectangular matrix.
A rectangular matrix is a type of matrix in which the elements are arranged in some number of rows and a different number of columns. The arrangement of elements is in the shape of a rectangle, which is why it is called a rectangular matrix.
The rectangular matrix can be expressed in general form as follows. The elements of this matrix are arranged in $m$ rows and $n$ columns. Therefore, the order of the matrix is $m \times n$.
$$M = {\begin{bmatrix} e_{11} & e_{12} & e_{13} & \cdots & e_{1n}\\ e_{21} & e_{22} & e_{23} & \cdots & e_{2n}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ e_{m1} & e_{m2} & e_{m3} & \cdots & e_{mn} \end{bmatrix}}_{\displaystyle m \times n} $$
A rectangular shape is possible only if the number of rows differs from the number of columns, i.e. $m \ne n$. Therefore, there are two possibilities for forming a rectangular matrix: either the number of rows is greater than the number of columns ($m > n$), or the number of rows is less than the number of columns ($m < n$).
The following two cases are the possibility for the formation of rectangular matrices in the matrix algebra.
$A$ is a matrix and elements are arranged in matrix as $3$ rows and $4$ columns.
$$A = \begin{bmatrix} 5 & -1 & 4 & 9\\ -7 & 1 & 3 & 2\\ 8 & 5 & 0 & -6 \end{bmatrix} $$
The order of the matrix $A$ is $3 \times 4$. The number of rows is not equal to the number of columns ($3 \ne 4$); here the number of rows is less than the number of columns ($3 < 4$). Therefore, the matrix $A$ is an example of a rectangular matrix.
$B$ is a matrix and the elements are arranged in the matrix in $5$ rows and $2$ columns.
$$B = \begin{bmatrix} 2 & 6\\ 5 & 2\\ 9 & 4\\ 6 & 2\\ 7 & -6 \end{bmatrix} $$
The order of the matrix $B$ is $5 \times 2$. The number of rows is not equal to the number of columns ($5 \ne 2$); here the number of rows is greater than the number of columns ($5 > 2$). So, the matrix $B$ is also a rectangular matrix.
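The two examples can be checked with a short numpy sketch that reads off the order $m \times n$ and tests the condition $m \ne n$:

```python
import numpy as np

A = np.array([[5, -1, 4, 9],
              [-7, 1, 3, 2],
              [8, 5, 0, -6]])
B = np.array([[2, 6], [5, 2], [9, 4], [6, 2], [7, -6]])

def is_rectangular(M):
    m, n = M.shape   # the order of the matrix is m x n
    return m != n

print(A.shape, is_rectangular(A))  # (3, 4) True, with m < n
print(B.shape, is_rectangular(B))  # (5, 2) True, with m > n
```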
|
I want to calculate / simplify:
$$\mathcal{F} (\ln(|x|)\mathcal{F(f)}(x))=\mathcal{F} (\ln(|x|)) \star f$$
where $\mathcal{F}$ is the Fourier transform ($\mathcal{F}[f](\xi)=\int_{\mathbb R}f(x)e^{ix\xi}\,dx$) and where $f$ is an even function.
Looking here: wiki, we find that
$$\mathcal{F}[\log|x|](\xi)=-2\pi\gamma\delta(\xi)-\frac\pi{|\xi|},$$
so we should have:
$$\mathcal{F} (\ln(|x|)) \star f = (-2\pi\gamma\delta(x)-\frac\pi{|x|}) \star f(x) $$ $$ = -2\pi\gamma f(x)- \pi \int_{-\infty}^{\infty} \frac{f(t)}{|x-t|} dt $$
but the integral in the second term does not converge... whereas the term $\mathcal{F} (\ln(|x|)\mathcal{F}(f)(x))$ is well defined provided the function $f$ decays rapidly near zero and infinity. So where is the problem? And what, finally, is the "simplified expression" of $\mathcal{F} (\ln(|x|)\mathcal{F}(f)(x))$? Can we not use this distribution in a convolution product with a function?
I already posted this on Stack Exchange but did not receive an answer.
|
Little explorations with HP calculators (no Prime)
03-27-2017, 07:39 PM
Post: #41
RE: Little explorations with the HP calculators
Let's divide one of the right-triangles into four right-triangles (two with side lengths equal to r and x and two with side lengths r and 1-x) and a square with side r. Then its area is
S = r*x + r(1-x) + r^2
S = r^2 + r
The area of the larger square, which we know is equal to 1, is the sum of the areas of these four right-triangles plus the area of the smaller square in the center, with side 2*r:
S = 4(r^2 + r) + (2*r)^2 = 1
Or
8*r^2 + 4*r - 1 = 0
Here I am tempted to just take my wp34s and do 8 ENTER 4 ENTER 1 +/- SLVQ and get a valid numerical answer for the quadratic equation, but I decide to go through a few more steps by hand and get the exact answer:
r = (sqrt(3) - 1)/4
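The same numerical check can be done in Python (standing in here for the SLVQ step, purely as a cross-check of the algebra above):

```python
import numpy as np

print(np.roots([8, 4, -1]))      # roots of 8r^2 + 4r - 1 = 0
print((np.sqrt(3) - 1) / 4)      # the positive root, ~0.1830127
```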
03-27-2017, 07:58 PM
Post: #42
RE: Little explorations with the HP calculators
Gerson do you mean the following subdivision?
Wikis are great, Contribute :)
03-27-2017, 08:01 PM
Post: #43
RE: Little explorations with the HP calculators
(03-27-2017 07:24 PM)Joe Horn Wrote: So it SEEMS to be zeroing on something close to -LOG(LOG(2)), but I give up.
Some multivariate calculus and probability theory will get you:
\[ \frac{2+\sqrt{2}+5\ln(1+\sqrt{2})}{15} \approx 0.521405433164720678330982356607243974914031567779008341796 \]
Graph 3D | QPI | SolveSys
03-27-2017, 08:07 PM
Post: #44
RE: Little explorations with the HP calculators
(03-27-2017 07:24 PM)Joe Horn Wrote: After running 100 million iterations several times in UBASIC, I'm surprised that each run SEEMS to be converging, but each run ends with a quite different result:
I wonder if this may be due to precision. The shorter the distance between two points, the higher the probability of seeing that distance appear. Thus, you're looking at a lot of tiny values (as they are more likely) that represent the distance, which may or may not get picked up in the sum if your machine precision is not large enough.
Graph 3D | QPI | SolveSys
03-27-2017, 08:08 PM
Post: #45
RE: Little explorations with the HP calculators
(03-27-2017 08:01 PM)Han Wrote:(03-27-2017 07:24 PM)Joe Horn Wrote: So it SEEMS to be zeroing on something close to -LOG(LOG(2)), but I give up.
Awesome! Time to dust off these old math texts....
<0|ɸ|0>
-Joe-
03-27-2017, 08:14 PM (This post was last modified: 03-27-2017 08:19 PM by pier4r.)
Post: #46
RE: Little explorations with the HP calculators
(03-27-2017 08:01 PM)Han Wrote: Some multivariate calculus and probability theory will get you:
This formula comes out from? I suppose a double integral (for x and y?)
On a side note: by chance could you tell me why my algorithm screws up so many digits? Did I make a mistake somewhere or is it again a problem of precision?
Wikis are great, Contribute :)
03-27-2017, 08:36 PM (This post was last modified: 03-27-2017 09:00 PM by Han.)
Post: #47
RE: Little explorations with the HP calculators
(03-27-2017 08:14 PM)pier4r Wrote:(03-27-2017 08:01 PM)Han Wrote: Some multivariate calculus and probability theory will get you:
Since the distance between two points \( x_1 , y_1 \) and \( x_2, y_2\) is
\[ d = \sqrt{ (x_2 - x_1)^2 + (y_2 - y_1)^2}, \]
I took the approach of looking at the probability density function for the distance between each of the coordinates: \( |x_2 - x_1| \) and \( |y_2 - y_1| \). Since they are independent and identically distributed, just consider the probability density of \( x=|x_2 - x_1| \). Once I got the probability distribution function (it's a triangular distribution with \( 0\le x \le 1 \) ), the integral I ended up with was indeed a double integral. EDIT: there were two of them; one for find the PDF and the other to compute the expected value. (I had to pull out my calculus textbook because the second one is quite a tedious computation to do by hand. I started with a double integral in x and y, but had to convert over to polar coordinates.)
Quote:On a side note: by chance could you tell me why my algorithm screws up so many digits? Did I make a mistake somewhere or is it again a problem of precision?
My suspicion is that it is due to precision. This is just a hunch, though. Here is my thought process. For 1000000 iterations, the sum must be close to 520000 so that the average comes out to be about .52. However, the small distances that are added into the running average (that appear the most frequently) would be very close to zero (but sufficiently many occurrences to add up to a significant value if there was enough precision). Each incremental sum, however, would likely be computed as adding 0 once the partial sum has reached a large enough magnitude.
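(A minimal Monte Carlo sketch in Python, not from the original thread, that sidesteps the large-partial-sum issue by keeping a running mean; it converges to the closed-form value above, roughly 0.5214.)

```python
import random

def mean_distance(n, seed=1):
    """Estimate the mean distance between two uniform random points in the unit square."""
    random.seed(seed)
    mean = 0.0
    for i in range(1, n + 1):
        d = ((random.random() - random.random()) ** 2 +
             (random.random() - random.random()) ** 2) ** 0.5
        mean += (d - mean) / i   # incremental mean: no huge partial sum to swallow tiny terms
    return mean

print(mean_distance(1_000_000))  # ~0.5214; exact value is (2 + sqrt(2) + 5*log(1 + sqrt(2))) / 15
```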
Graph 3D | QPI | SolveSys
03-27-2017, 08:44 PM (This post was last modified: 03-27-2017 09:18 PM by pier4r.)
Post: #48
RE: Little explorations with the HP calculators
(03-27-2017 08:36 PM)Han Wrote: Since the distance between two points \( x_1 , y_1 \) and \( x_2, y_2\) is
Interesting, both parts. I do indeed take only the integer part of the random value, after 3 digits (with IP). I will check what happens if I extend it to 6. Now I compute... poor batteries.
Wikis are great, Contribute :)
03-27-2017, 08:55 PM
Post: #49
RE: Little explorations with the HP calculators
03-27-2017, 10:12 PM
Post: #50
RE: Little explorations with the HP calculators
03-27-2017, 10:35 PM
Post: #51
RE: Little explorations with the HP calculators
(03-27-2017 08:55 PM)Dieter Wrote:
Thanks, Dieter! Honestly, I was a bit lucky on this one. I had only introduced the x and while I was still worried about how to get rid of it, it just magically disappeared in the second line :-)
Probably there are better solutions around.
Gerson.
03-28-2017, 10:44 AM (This post was last modified: 03-28-2017 10:47 AM by pier4r.)
Post: #52
RE: Little explorations with the HP calculators
So, while the summation function is built into the 50g, I searched for the product function (like \PI) in the AUR and had no luck. Using a quick search on this site with a search engine, I found this:
http://www.hpmuseum.org/cgi-sys/cgiwrap/...ead=249353
Message #10
Quote:Outside the equation writer one could use a combination of SEQ and PILIST (from the MTH/LIST menu). This would also work for negative members of the series.
I have to say that this is pretty neat, but do you know any better way to achieve the same (function / user defined program)? Of course I could do a rough program by myself, but I do like to reuse code, especially if the code is well tested and refined.
I may also study the following post; maybe it yields neat results: https://groups.google.com/forum/#!topic/...discussion (Simone knows a lot / has great searching skills)
Wikis are great, Contribute :)
03-28-2017, 11:07 AM (This post was last modified: 03-28-2017 05:39 PM by pier4r.)
Post: #53
RE: Little explorations with the HP calculators
Quote:brilliant.org (this site is very nice on certain topics, other topics are a bit, well, low profile still)
On this I have at first no useful direction whatsoever. Anyway since the intention is to burn the 50g somehow, I will try with a trial and error approach in the range of possible values (knowing that the diameter cannot be smaller than, say, 10, and bigger than 10 * sqrt(2) ) until I fit the mentioned form.
Edit: the minimum value of 10 is a tad too much, since the diameter is contained within the radius of the big circle. So the maximum value is 10, actually.
OK, I wrote the program; it's not so nice, but it is the first iteration. Its output is all the possible "valid" lengths of the diameter, given that the diameter is expected to be greater than or equal to 7 and smaller than or equal to 10, plus the values of a, b and c. The problem is to determine the right value, if the right value is there (there could be a case where a is large while sqrt(b) and c have a small difference; this is not captured by the program).
Code:
Wikis are great, Contribute :)
03-28-2017, 05:33 PM (This post was last modified: 03-28-2017 05:35 PM by Dieter.)
Post: #54
RE: Little explorations with the HP calculators
What? No clue? Really ?-)
Take a look at the picture. From the upper left corner to the point where the circle touches the arc it's √2 · d/2 plus d/2, and this sum is 10. This directly leads to d = 20/(1+√2) or 8,284. Expand this with 1–√2 to get d = 20 · (√2–1). So a=20, b=2, c=1, and the desired value is floor(1600/23) = 69.
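(A two-line numerical check of the arithmetic, not part of the original post:)

```python
import math

d = 20 / (1 + math.sqrt(2))
print(d, 20 * (math.sqrt(2) - 1))   # both ~8.284271247461902
print(1600 // 23)                   # 69
```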
Dieter
03-28-2017, 05:46 PM (This post was last modified: 03-28-2017 05:54 PM by Dieter.)
Post: #55
RE: Little explorations with the HP calculators
(03-21-2017 10:40 PM)pier4r Wrote: So I got to another problem and I'm stuck.
Better use your brain. ;-)
You know the summation formulas. For any positive upper limit (not just 2014), B is the square of A.
So log_A(B) = 2 and log_√A(B) = 2 · 2 = 4. No calculator required.
Dieter
03-28-2017, 06:13 PM
Post: #56
RE: Little explorations with the HP calculators
(03-28-2017 05:33 PM)Dieter Wrote: What? No clue? Really ?-)
Thanks for the contribution! My only clue (see image below) died immediately because I tried to determine the x in the image but failed with my rusty knowledge. And if your solution is correct (we need peer review here), then my program fails even to capture the solution.
http://i.imgur.com/24nmK1G.jpg
Could you explain (or hint a known relationship) how did you get that d/2+x = d/2*sqrt(2) ?
Wikis are great, Contribute :)
03-28-2017, 06:17 PM (This post was last modified: 03-28-2017 06:20 PM by pier4r.)
Post: #57
RE: Little explorations with the HP calculators
But the idea of the explorations is: whether I know the shortcut or solution or not, can I let the calculator solve most of the problem?
In particular, that problem raised the issue of "hidden digits in the real representation", which was then solved first with a homemade program (and proper flags), and then with Joe Horn's knowledge of flags and the built-in summation function.
The problem of the circle above, where I ask you to justify how you get that x+r = r*sqrt(2), let me refresh a couple of userRPL commands and the usage of flags as booleans.
I mean, the more I analyze the failures, the better.
Wikis are great, Contribute :)
03-28-2017, 07:32 PM
Post: #58
RE: Little explorations with the HP calculators
(03-28-2017 06:13 PM)pier4r Wrote: Could you explain (or hint a known relationship) how did you get that d/2+x = d/2*sqrt(2) ?
The right picture can be worth a 1000 words, as they say :-)
Draw a line segment from the point of tangency (where the circle touches the radial segments of length 10) toward the center of the inner circle. This length is d/2, and the line segment we created is perpendicular to the segments of length 10. (You can produce a square whose diagonal lies at the center of the two circles, and whose side lengths are d/2.)
Graph 3D | QPI | SolveSys
03-28-2017, 07:36 PM
Post: #59
RE: Little explorations with the HP calculators
Also, for the summation problem, one does not actually need to know either summation formulas. The value 2014 is not that significant (likely chosen because the problem may have been created in 2014). You can make up a conjecture about \( \log_{\sqrt{A}}B \) by using smaller values instead of 2014, and computing the individual sums on your calculator (no program needed, really) which should lead you to the conclusion that the result is always 4. Moreover, this result would enable one to deduce the formula for the sum of cubes if one knew only the formula for the sum of the integers.
Graph 3D | QPI | SolveSys
03-28-2017, 08:15 PM (This post was last modified: 03-29-2017 08:47 AM by pier4r.)
Post: #60
RE: Little explorations with the HP calculators
(03-28-2017 07:32 PM)Han Wrote: Draw a line segment from the point of tangency (where the circle touches the radial segments of length 10) toward the center of the inner circle. This length is d/2, and the line segment we created is perpendicular to the segments of length 10. (You can produce a square whose diagonal lies at the center of the two circles, and whose side lengths are d/2.)
Understood. I thought about that but I could not prove it... frick. I relied too much on the visual image. Instead of thinking that when a line is tangent to a circle the radius is perpendicular to it (otherwise the circle would pass through the line), I looked at the picture and said "hmm, here I cannot build a square with the radius, I do not see perpendicularity". So it was actually trivial, but I relied too much on the visual hint.
Damn me. Well, experience for the next time.
Thanks!
Wikis are great, Contribute :)
|
Information theory was born in 1948 with the publication of
A Mathematical Theory of Communication by Claude Shannon (1916 to 2001). Shannon was inspired in part by earlier work by Boltzmann and Gibbs in thermodynamics and by Hartley and Nyquist at Bell. Most of the theory and applications of information theory (compression, coding schemes, data transfer over noisy channels) are outside the scope of this thesis, but there are certain information theoretic quantities used regularly in machine learning, so it is useful to discuss them now.
The information we talk about is restricted to the information about the probability distribution over the elementary outcomes, not information about the content of the outcomes. The significance of probability is that it tells us how certain we can be when making inference. The most important information in this regard is found in the probability distribution over the possible outcomes.
Information theory is quite useful for deep learning. If we think of neural nets as noisy channels, the need for this theory becomes even more obvious. David Mackay said "brains are the ultimate compression and communication systems. And the state-of-the-art algorithms for both data compression and error-correcting codes use the same tools as machine learning". Furthermore, "we might anticipate that the best data compression algorithms will result from the development of artificial intelligence methods".
The most fundamental quantity in information theory is entropy. Before we state the formal definition of entropy, we will motivate it as a measure of uncertainty by walking through its derivation. We will define a function $\eta$ as a measure of uncertainty and we will derive entropy as a function based on the requirements it must satisfy using $\eta$ as a starting point.
Let $(\mathcal{X}, p)$ be a discrete probability space. We define uncertainty to be a real-valued function $\eta(\cdot): \mathcal{X} \mapsto \mathbb{R}^+$ which depends only on the probabilities of the elementary outcomes and satisfies the following:
If an outcome $x$ is guaranteed to occur, then there is no uncertainty about it and $\eta(x)=0$;
For any two outcomes $x$, $x^\prime$, we have $p(x)\leq p(x^\prime)\iff \eta(x)\geq\eta(x^\prime)$;
For any two independent outcomes $x$, $x^\prime$, the uncertainty of their joint occurrence is the sum of their uncertainties, i.e. $\eta(x\cdot x^\prime)=\eta(x)+\eta(x^\prime)$.
It should not be a surprise that it is new information we are interested in, since that is what reduces uncertainty. Common outcomes provide less information than rare outcomes, which means $\eta$ should be inversely proportional to the probability of the outcome. $$ \begin{align} \eta(x) \propto {1 \over p(x)} \end{align} $$ Since $\eta$ must satisfy $\eta(x\cdot x^\prime)=\eta(x)+\eta(x^\prime)$, we must define $\eta$ in terms of the logarithm. This is because the probability of two independent outcomes is the product of their probabilities whereas we want information to be additive. Thus, $$ \begin{align} \eta(x) \approx \log{1 \over p(x)}. \end{align} $$ For probability distributions, we need a measure of uncertainty that says, on average, how much uncertainty is contained in $(\mathcal{X}, p)$. We need to weight the calculation by the probability of observing each outcome. This means what we are really seeking is a measure on the probability distribution over $\mathcal{X}$. We adjust the notation, using the capital eta, which resembles the Latin H. Thus, $$ \begin{align} H(p) = \sum_{x \in \mathcal{X}} p(x) \log{1 \over p(x)}. \end{align} $$ This is what we will call entropy, a measure on the average amount of surprise associated to outcomes from $(\mathcal{X}, p)$. Entropy is maximized when we cannot say with any confidence if an outcome will occur. This upper bound occurs when the probabilities over the set of possible outcomes are uniformly distributed. $$ \begin{align} H(p) \leq \log{|\mathcal{X}|} \end{align} $$ We can also think of entropy as how much information, measured in binary information units (bits), is required to describe outcomes drawn from $(\mathcal{X}, p)$. The way to understand this last part is the logarithm tells us how many bits we need to describe this uncertainty, since $$ \begin{align} \log_2{1 \over p(x)} = n \iff 2^n = {1 \over p(x)}. \end{align} $$ However, any logarithm can be used. Base $e$ and base 10 are also commonly used.
Let $(\mathcal{X}, p)$ be any discrete probability space. The entropy of a probability distribution $p$ with mass function $p$, denoted by $H(p)$, is the average amount of uncertainty found in elementary outcomes from $(\mathcal{X}, p)$. We write $$ \begin{align} H(p) = - \mathbb{E}_{x \sim p} \left[ \log{p(x)} \right]. \end{align} $$ The entropy of a probability distribution tells us how much variation we should expect to see in samples drawn from $(\mathcal{X}, p)$. The probability distribution with maximum entropy is the uniform distribution since all outcomes are equally surprising.
The following figure
depicts the entropy of a probability distribution over two states as a function of the symmetry of the distribution. As the probability of heads $p(H)$ approaches 0 or 1, we see the uncertainty vanishes, and uncertainty is maximized when probability is equally distributed over heads and tails.
The entropy of the probability distribution corresponding to a fair coin toss is 1 bit, and the entropy of $m$ tosses is $m$ bits. If there are two states of equal probability, then we need 1 bit and if we have 3 states of equal probability, we need 1.584963 bits.
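As a sanity check of these numbers, here is a minimal Python sketch (not part of the original text) that computes $H(p)$ in bits for a few simple distributions:

```python
import math

def entropy(p, base=2):
    """H(p) = -sum_x p(x) log p(x); outcomes with zero probability contribute nothing."""
    return -sum(px * math.log(px, base) for px in p if px > 0)

print(entropy([0.5, 0.5]))           # fair coin: 1.0 bit
print(entropy([1/3, 1/3, 1/3]))      # three equally likely states: ~1.585 bits
print(entropy([1.0, 0.0]))           # a certain outcome: 0 bits
```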
We include a definition of a metric below in order to make clear the distinction between it and a divergence, which will be defined afterwards.
A metric $d$ on $\mathcal{X}$ is a function $d(\cdot, \cdot): \mathcal{X} \times \mathcal{X} \mapsto \mathbb{R}^+$ such that $\forall x,y,z\in\mathcal{X}$:
$d(x,y)\geq 0$, and $d(x,y)=0\iff x=y$.
$d(x,y)=d(y,x)$
$d(x,z)\leq d(x, y) + d(y, z)$
A divergence is a weaker notion than that of distance. A divergence need not be symmetric nor satisfy the triangle inequality.
Let $\mathcal{P}$ be any space of probability distributions over any finite set $\mathcal{X}$ such that all $p \in \mathcal{P}$ have the same support. A divergence on $\mathcal{P}$ is a function $\mathcal{D}(\cdot||\cdot):\mathcal{P}\times\mathcal{P}\mapsto \mathbb{R}^+$ such that $\forall p, q \in \mathcal{P}$ the following conditions are satisfied
$\mathcal{D}(p||q) \geq 0$
$\mathcal{D}(p||q) = 0 \iff p = q$.
The Kullback-Leibler divergence is a measure of how different a probability distribution is from a second, reference probability distribution. It is also known by the following names: relative entropy, directed divergence, information gain and discrimination information. It is defined by $$ \begin{align} KL(p||q)=\sum_{x\in\mathcal{X}} p(x) \log{p(x) \over q(x)}. \end{align} $$ If $p$ and $q$ have the same support, then $KL(p||q) = 0$ if and only if $p = q$.
The Kullback-Leibler divergence is defined only if $p$ is absolutely continuous with respect to $q$, i.e. $\forall x,\ q(x) = 0 \implies p(x) = 0$. When $p(x) = 0$, the corresponding term contributes $0$ to $KL(p||q)$ since $\lim_{t \to 0^+} t \log{t} = 0$.
For a closed convex set $E \subset \mathcal{P}$, where $\mathcal{P}$ is the space of all probability distributions over a finite set $\mathcal{X}$, and for a distribution $q \not\in E$, let $p^* \in E$ be defined by $p^* = \text{argmin}_{p \in E} KL(p||q)$; then
$$ \begin{align} KL(p||q)\geq KL(p||p^*)+KL(p^*||q). \end{align} $$
The interested reader can consult Theorem 11.6.1 in Cover and Thomas.
The log-likelihood ratio test is used in comparing the goodness-of-fit of one statistical model over another. The Kullback-Leibler divergence of $p$ and $q$ is the average of the log-likelihood ratio test with respect to probabilities defined by $p$. For two models $p(x) = f(x|\theta)$ and $q(x) = f(x | \phi)$ the log-likelihood ratio test is
$$ \begin{align} \lambda(x) & = \log{\prod_{x\in\mathcal{X}}p(x)\over\prod_{x\in\mathcal{X}}q(x)} \\ & = \log \prod_{x\in\mathcal{X}}{p(x)\over q(x)} \\ & = \sum_{x \in \mathcal{X}} \log {p(x) \over q(x)} \end{align} $$
and the average with respect to $p$ is
$$ \begin{align} \mathbb{E}_{p}\left[\lambda(x)\right]=\sum_{x\in\mathcal{X}}p(x)\log{p(x)\over q(x)}. \end{align} $$
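A small Python sketch (an illustration, not taken from the text) that computes $KL(p||q)$ directly as this weighted sum of log-ratios and shows the asymmetry between the two directions:

```python
import math

def kl(p, q):
    """KL(p||q) = sum_x p(x) log(p(x)/q(x)); assumes q(x) > 0 wherever p(x) > 0."""
    return sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl(p, q), kl(q, p))   # ~0.0253 vs ~0.0258: KL(p||q) != KL(q||p)
```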
One way to think of Generative Adversarial Net (GAN) training is as fitting the Discriminator $D$ and the Generator $G$ to the data via optimizing a goodness-of-fit test since
$$ \begin{align} \min_\phi\max_\theta{1 \over n} \sum_{i=1}^n \log{D_\theta(x_i)} + {1 \over n} \sum_{i=1}^n\log{(1 - D_\theta(G_\phi(z_i)))} \end{align} $$
has the same fixed point as
$$ \begin{align} & \min_\phi\max_\theta{1 \over n} \sum_{i=1}^n \left[\log{D_\theta(x_i)} - \log{(D_\theta(G_\phi(z_i)))}\right] \\ & = \min_\phi\max_\theta{1 \over n} \sum_{i=1}^n \log{\left({D_\theta(x_i) \over D_\theta(G_\phi(z_i))}\right)} \end{align} $$
which is the Kullback-Leibler divergence or the average log-likelihood ratio test. Since $\forall x$ $D_\theta(x) \in [0, 1]$ we can infer that when $D_\theta$ is optimized, it will place a larger amount of mass on $x$ than on $G_\phi(z)$.
The term
information gain refers to one interpretation of the Kullback-Leibler divergence. Specifically, $KL(p||q)$ is the amount of information gained when $p$ is used to model the data rather than $q$ or, equivalently, the amount of information lost when $q$ is used to approximate $p$.
The reverse Kullback-Leibler divergence is the asymmetrical counterpart.
$$ \begin{align} KL(q||p)=\sum_{x\in\mathcal{X}}q(x)\log{q(x)\over p(x)}. \end{align} $$
The reverse Kullback-Leibler divergence is the average of the log-likelihood ratio test taken with respect to the model $q(x)$, $$ \begin{align} \mathbb{E}_{q} \left[\lambda(x)\right] = \sum_{x \in \mathcal{X}} q(x) \log {q(x) \over p(x)}. \end{align} $$
Minimizing the reverse Kullback-Leibler divergence is not equivalent to maximum likelihood methods.
The Kullback-Leibler divergence is related to another quantity used quite often in machine learning: cross entropy.
The cross entropy of $p$ and $q$ (for a given data set) is the total amount of uncertainty incurred by modelling the data with $q$ rather than $p$.
$$ \begin{align} H(p,q) &=-\sum_{x\in\mathcal{X}}p(x)\log{q(x)}\\ &=-\mathbb{E}_{p}\left[\log{q(x)}\right]. \end{align} $$
The cross entropy of $p$ and $q$ is the sum of the entropy of $p$ and the Kullback-Leibler divergence of $p$ and $q$.
$$ \begin{align} H(p,q) &=-\sum_{x\in\mathcal{X}}p(x)\log q(x)\\ &=-\sum_{x\in\mathcal{X}}p(x)\log p(x)+\sum_{x\in\mathcal{X}}p(x)\log p(x)-\sum_{x\in\mathcal{X}}p(x)\log q(x)\\ &=-\sum_{x\in \mathcal{X}}p(x)\log p(x)+\sum_{x\in\mathcal{X}}p(x)\log{p(x)\over q(x)} \\ &=H(p)+KL(p||q) \end{align} $$
This tells us the lower bound for cross entropy must be the entropy of the probability distribution $p$ over $\mathcal{X}$. Thus, cross entropy is the uncertainty induced by assuming the wrong probability distribution over the data. The additional uncertainty is captured by the Kullback-Leibler divergence. Cross entropy is not symmetric since $H(q,p)=H(q)+KL(q||p)$.
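The decomposition $H(p,q) = H(p) + KL(p||q)$ is easy to verify numerically; here is a minimal Python sketch (illustrative only, with an arbitrary pair of distributions):

```python
import math

def entropy(p):
    return -sum(px * math.log(px) for px in p if px > 0)

def kl(p, q):
    return sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

def cross_entropy(p, q):
    return -sum(px * math.log(qx) for px, qx in zip(p, q) if px > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(cross_entropy(p, q))      # ~1.0549
print(entropy(p) + kl(p, q))    # same value: H(p,q) = H(p) + KL(p||q)
```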
As shown in Goodfellow et al., the generator minimizes an approximation of the Jensen-Shannon divergence.
Let $p$ and $q$ be any two probability distributions over any space $\mathcal{X}$. The Jensen-Shannon divergence of $p$ and $q$ is a symmetrization of the Kullback-Leibler divergence of $p$ and $q$ over $\mathcal{X}$. $$ \begin{align} JSD(p||q) = {1 \over 2} KL\left(p\mid\mid{p+q\over 2}\right) + {1 \over 2} KL\left(q\mid\mid{p+q\over 2}\right) \end{align} $$
The Jensen-Shannon divergence averages the Kullback-Leibler divergences of $p$ and $q$ to their mixture ${p+q\over 2}$, which makes it symmetric in $p$ and $q$.
The square root of the Jensen-Shannon divergence is a metric.
Information theory provides us with a measure of dependency, or at least of how much information about one probability distribution is contained in another distribution. The following measures are defined in terms of random variables denoted by upper case letters such as $X$ and $Y$.
Let $(\mathcal{X}, p)$ and $(\mathcal{Y}, q)$ be any two finite probability spaces ($\mathcal{X}$ and $\mathcal{Y}$ need not be distinct) and consider two random variables $X \sim p$ and $Y\sim q$ with joint probability mass function $\gamma$ and marginal probability mass functions $\pi_p \circ \gamma = p$ and $\pi_q \circ \gamma = q$. The mutual information $I(X; Y)$ is the Kullback-Leibler divergence of $\gamma$ and the product of $p$ and $q$, in other words $$ \begin{align} I(X;Y)=\sum_{x\in\mathcal{X}}\sum_{y\in\mathcal{Y}}\gamma(x,y)\log{{\gamma(x, y) \over p(x)q(y)}} \end{align} $$
If the random variables $X$ and $Y$ are independent, then $\gamma(x, y) = p(x)\cdot q(y)$ and $I(X; Y) = 0$
Mutual information is a measure of the amount of information contained in one probability distribution about another and makes for a useful measure of statistical dependence.
Mutual information can also be defined in terms of conditional entropy, defined in terms of random variables $X$ and $Y$,$$ \begin{align} I(X;Y)=H(X)-H(X|Y)=H(Y)-H(Y|X) \end{align} $$
where $H(X|Y)$ is the conditional entropy of $X$ given that $Y$ has occurred. In this form the mutual information can be interpreted as the information contained in one probability distribution minus the information contained in the distribution when the other distribution is known.
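A short Python sketch (illustrative, with made-up joint distributions) that computes $I(X;Y)$ from a joint probability table; an independent joint gives $0$, a strongly dependent one gives a positive value:

```python
import math

def mutual_information(joint):
    """I(X;Y) = sum_{x,y} gamma(x,y) log( gamma(x,y) / (p(x) q(y)) ) for a 2-D joint pmf."""
    px = [sum(row) for row in joint]           # marginal of X
    qy = [sum(col) for col in zip(*joint)]     # marginal of Y
    return sum(joint[i][j] * math.log(joint[i][j] / (px[i] * qy[j]))
               for i in range(len(joint)) for j in range(len(joint[0]))
               if joint[i][j] > 0)

independent = [[0.25, 0.25], [0.25, 0.25]]   # gamma(x,y) = p(x)q(y)  ->  I = 0
dependent   = [[0.45, 0.05], [0.05, 0.45]]   # strong dependence      ->  I > 0
print(mutual_information(independent), mutual_information(dependent))
```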
The relationship between the different information theoretic quantities is depicted in the following Venn diagram.
|
A new Chipotle Restaurant opened in my neighborhood, and while I was enjoying a delicious chicken bowl, I began wondering who was Chipotle’s primary consumer. I decided to answer this question using population statistics at a Zip code level. In addition to this, I decided to map out areas where a Chipotle would sit well, based on already established franchises.
The first part of this project consisted of getting every single Chipotle location in America. After my google-fu failed to take me to a site with every Chipotle restaurant, I wrote a scraper to check each and every zip code in America on Chipotle’s website. There are about 60k zip codes in the United States, so this took a while, but eventually I discovered there are 1,140 Chipotle Restaurants in America!
To get population data at a Zip Code level, I used the choroplethrZip package (GitHub link). The variables I had access to were total population, race, average age and income per capita. This data is from 2013, but for simplicity I assumed no restaurants opened or closed since then.
Once I organized my data, I studied the demographic variable. To do this, I separated the Zip Codes into four groups (White, Hispanic, Asian, Black) and assigned each Zip Code to the race that the majority of its population belonged to. As the histogram below shows, there are more Chipotle restaurants in Zip Codes with a predominantly white population. The graph on the right is not limited to Zip Codes with a Chipotle restaurant. Comparing them side by side, we can see that the reason there are more restaurants in Zip Codes where the white population is the majority may simply be that there are more Zip Codes where the white population is the majority.
The second variable I observed was income. Meal prices ranging between $8 and $10 are affordable but not necessarily cheap. The histogram below shows that the majority of the restaurants are located in Zip Codes where the average income ranges between $30k and $50k. Just like with demographics, the income variable is not too surprising. As shown in the histogram below, the majority of the population in the United States falls in the $20k-$40k per capita income range.
Another variable I considered was age. The median age of Chipotle consumers is 37, four years shy of 41, the national median age. Just like the previous two graphics, the graph below shows how the average age of Zip Codes with a Chipotle restaurant looks like a sample of the national data set.
The last variable I looked at was population. The average population per Zip Code in the United States is 9,517, the average population for Zip Codes with a Chipotle restaurant is 32,470, and the minimum population in a Zip Code with a restaurant is 21,140. The graph below shows how regions with a high population are selected to have Chipotle restaurants. Obviously. Unlike the previous variables, this variable does not look like a subset of the total population.
Once I understood the variables, I decided to run a logistic regression to obtain the probability of having a Chipotle store. After experimenting with many equations, testing for Type I and Type II errors, I concluded the probability \( \pi \) is determined by the following equation:
\(\pi(x)= \frac{1}{1+e^{-\beta X}} \)
where:
\( \beta X = \beta _{0} +\beta _{1}X _{per \, capita \, income} + \beta _{2}X _{median \, age}+\beta _{3}X _{total \, population}+\beta _{4}X _{total \, population \times median \, age} +\beta _{5}X _{per \, capita \, income \times median \, age} \)
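A sketch of how a model of this form could be fit in Python with statsmodels (the column names, file name and the use of Python are my assumptions; the original analysis was done in R):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame: one row per zip code with a binary Chipotle indicator
# plus the demographic predictors discussed above (names are assumed).
df = pd.read_csv("zip_demographics.csv")

model = smf.logit(
    "has_chipotle ~ per_capita_income + median_age + total_population"
    " + total_population:median_age + per_capita_income:median_age",
    data=df,
).fit()

print(model.summary())
df["probability"] = model.predict(df)   # pi(x) for every zip code
```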
Using this equation to find Chipotle-like regions, I produced the map below.
To summarize, California, Texas, New York and Florida are the states where new Chipotle restaurants should/would likely open. The plots below show the top 10 states and cities for potential new Chipotle restaurants.
|
The connection between symmetries and conservation laws can be viewed through the lens of both Lagrangian and Hamiltonian mechanics. In the Lagrangian picture we have Noether's theorem. In the Hamiltonian picture we have the so-called "moment map." When we consider the same "symmetry" in both viewpoints, we get the exact same conserved quantities. Why is that?
I'll give an example. For a 2D particle moving in a central potential, the action is
$$S = \int dt \Bigl(\frac{m}{2} ( \dot q_1^2 + \dot q_2^2) - V(q_1^2 + q_2^2)\Bigr).$$
We can then consider the $SO(2)$ rotational symmetry that leaves this action invariant. When we vary the path by an infinitesimal time-dependent rotation,
$$\delta q_1(t) = - \varepsilon(t) q_2(t)$$ $$\delta q_2(t) = \varepsilon(t) q_1(t)$$ we find that the change in the action is
$$\delta S = \int dt \Bigl( m ( \dot q_1 \delta \dot q_1 + \dot q_2 \delta \dot q_2) - \delta V \Bigr)$$ $$= \int dt m (q_1 \dot q_2 - q_2 \dot q_1)\dot \varepsilon(t)$$
As $\delta S = 0$ for tiny perturbations from the actual path of the particle, an integration by parts yields
$$\frac{d}{dt} (m q_1 \dot q_2 - m q_2 \dot q_1) = \frac{d}{dt}L = 0 $$ and angular momentum is conserved.
In the Hamiltonian picture, when we rotate points in phase space by $SO(2)$, we find that $L(q,p) = q_1 p_2 - q_2p_1$ remains constant under rotation. As the Hamiltonian $H$ is also invariant under this rotation, we have
$$\{ H, L\} = 0$$ implying that angular momentum is conserved under time evolution.
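(A quick symbolic check of this bracket with SymPy, not part of the original question; the potential is left as an unspecified function of $q_1^2+q_2^2$:)

```python
import sympy as sp

q1, q2, p1, p2 = sp.symbols('q1 q2 p1 p2', real=True)
m = sp.symbols('m', positive=True)
V = sp.Function('V')

H = (p1**2 + p2**2) / (2 * m) + V(q1**2 + q2**2)   # central potential
L = q1 * p2 - q2 * p1                               # angular momentum

def poisson(f, g, qs=(q1, q2), ps=(p1, p2)):
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in zip(qs, ps))

print(sp.simplify(poisson(H, L)))   # prints 0, i.e. {H, L} = 0
```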
In the Lagrangian picture, our $SO(2)$ symmetry acted on paths in configuration space, while in the Hamiltonian picture our symmetry acted on points in phase space. Nevertheless, the conserved quantity from both is the same angular momentum. In other words, our small perturbation to the extremal path turned out to be the one found by taking the Poisson bracket with the derived conserved quantity:
$$\delta q_i = \varepsilon(t) \{ q_i, L \}$$
Is there a way to show this to be true in general, that the conserved quantity derived via Noether's theorem, when put into the Poisson bracket, re-generates the original symmetry? Is it even true in general? Is it only true for conserved quantities that are at most degree 2 polynomials?
Edit (Jan 23, 2019): A while ago I accepted QMechanic's answer, but since then I figured out a rather short proof that shows that, in the "Hamiltonian Lagrangian" framework, the conserved quantity does generate the original symmetry from Noether's theorem.
Say that $Q$ is a conserved quantity:
$$ \{ Q, H \} = 0. $$ Consider the following transformation parameterized by the tiny function $\varepsilon(t)$: $$ \delta q_i = \varepsilon(t)\frac{\partial Q}{\partial p_i} \\ \delta p_i = -\varepsilon(t)\frac{\partial Q}{\partial q_i} $$ Note that $\delta H = \varepsilon(t) \{ H, Q\} = 0$. We then have \begin{align*} \delta L &= \delta(p_i \dot q_i - H )\\ &= -\varepsilon\frac{\partial Q}{\partial q_i} \dot q_i + p_i \frac{d}{dt} \Big( \varepsilon\frac{\partial Q}{\partial p_i} \Big) \\ &= -\varepsilon\frac{\partial Q}{\partial q_i} \dot q_i - \dot p_i \varepsilon\frac{\partial Q}{\partial p_i} + \frac{d}{dt} \Big( \varepsilon p_i \frac{\partial Q}{\partial p_i}\Big) \\ &= - \varepsilon \dot Q + \frac{d}{dt} \Big( \varepsilon p_i \frac{\partial Q}{\partial p_i}\Big) \\ \end{align*}
(Note that we did not use the equations of motion yet.) Now, on stationary paths, $\delta S = 0$ for any tiny variation. For the above variation in particular, assuming $\varepsilon(t_1) = \varepsilon(t_2) = 0$,
$$ \delta S = -\int_{t_1}^{t_2} \varepsilon \dot Q dt $$
implying that $Q$ is conserved.
Therefore, $Q$ "generates" the very symmetry which you can use to derive its conservation law via Noether's theorem (as hoped).
|
Preprints (rote Reihe) des Fachbereich Mathematik, year of publication: 1996
285
On derived varieties (1996)
Derived varieties play an essential role in the theory of hyperidentities. In [11] we have shown that derivation diagrams are a useful tool in the analysis of derived algebras and varieties. In this paper this tool is developed further in order to use it for algebraic constructions of derived algebras. In particular, the operators \(S\) of subalgebras, \(H\) of homomorphic images and \(P\) of direct products are studied. Derived groupoids from the groupoid \(Nor(x,y) = x'\wedge y'\) and from abelian groups are considered. The latter class serves as an example for fluid algebras and varieties. A fluid variety \(V\) has no derived variety as a subvariety and is introduced as a counterpart to solid varieties. Finally we use a property of the commutator of derived algebras in order to show that solvability and nilpotency are preserved under derivation.
284
A polynomial function \(f : L \to L\) of a lattice \(\mathcal{L}\) = \((L; \land, \lor)\) is generated by the identity function \(id(x)=x\) and the constant functions \(c_a (x) = a\) (for every \(x \in L\)), \(a \in L\), by applying the operations \(\land, \lor\) finitely often. Every polynomial function, in one or in several variables, is a monotone function of \(\mathcal{L}\). If every monotone function of \(\mathcal{L}\) is a polynomial function, then \(\mathcal{L}\) is called order-polynomially complete. In this paper we give a new characterization of finite order-polynomially complete lattices. We consider doubly irreducible monotone functions and point out their relation to tolerances, especially to central relations. We introduce chain-compatible lattices and show that they have a non-trivial congruence if they contain a finite interval and an infinite chain. The consequences are two new results. A modular lattice \(\mathcal{L}\) with a finite interval is order-polynomially complete if and only if \(\mathcal{L}\) is a finite projective geometry. If \(\mathcal{L}\) is a simple modular lattice of infinite length, then every nontrivial interval is of infinite length and has the same cardinality as any other nontrivial interval of \(\mathcal{L}\). In the last sections we show the descriptive power of polynomial functions of lattices and present several applications in geometry.
|
I want to perform a simple linear interpolation between $A$ and $B$ (which are binary floating-point values) using floating-point math with IEEE-754 round-to-nearest-or-even rounding rules, as accurately as possible. Please note that speed is not a big concern here.
I know of two basic approaches. I'll use the symbols $\oplus, \ominus, \otimes, \oslash$ following Knuth [1], to mean floating-point addition, subtraction, product and division, respectively (actually I don't use division, but I've listed it for completeness).
(1) $\quad f(t) = A\,\oplus\,(B\ominus A)\otimes t$
(2) $\quad f(t) = A\otimes(1\ominus t)\,\oplus \,B\otimes t$
Each method has its pros and cons. Method (1) is clearly monotonic, which is a very interesting property, while it is not obvious at all to me that that holds for method (2), and I suspect it may not be the case. On the other hand, method (2) has the advantage that when $t = 1$ the result is exactly $B$, not an approximation, and that is also a desirable property (and exactly $A$ when $t = 0$, but method (1) does that too). That follows from the properties listed in [2], in particular:
$u\oplus v = v\oplus u$
$u\ominus v = u\oplus -v$
$u\oplus v = 0$ if and only if $v = -u$
$u\oplus 0 = u$
$u\otimes 1 = u$
$u\otimes v = 0$ if and only if $u = 0$ or $v = 0$
In [3] Knuth also discusses this case:
$u' = (u\oplus v)\ominus v$
which implicitly means that $u'$ may or may not be equal to $u$. Replacing $u$ with $B$ and $v$ with $-A$ and using the above rules, it follows that it's not granted that $A\oplus(B\ominus A) = B$, meaning that method (1) does not always produce $B$ when $t = 1$.
So, here come my questions:
1. Is method (2) guaranteed to be monotonic?
2. If not, is there a better method that is accurate, monotonic and yields $A$ when $t = 0$ and $B$ when $t = 1$?
3. If not (or you don't know), does method (1) when $t = 1$ always overshoot (that is, $A\oplus(B\ominus A)=A+(B-A)\cdot t$ for some $t \geq 1$)? Always undershoot (ditto for some $t \leq 1$)? Or sometimes overshoot and sometimes undershoot?

I assume that if method (1) always undershoots, I can make a special case when $t = 1$ to obtain the desired property of being exactly equal to $B$ when $t = 1$, but if it always overshoots, then I can't. That's the reason for question 3.

EDIT: I've found that the answer to question 3 is that it sometimes overshoots and sometimes undershoots. For example, in double precision:
-0x1.cae164da859c9p-1 + (0x1.eb4bf7b6b2d6ep-1 - (-0x1.cae164da859c9p-1))
results in
0x1.eb4bf7b6b2d6fp-1, which is 1 ulp greater than the original, while
-0x1.be03888ad585cp-1 + (0x1.0d9940702d541p-1 - (-0x1.be03888ad585cp-1))
results in
0x1.0d9940702d540p-1, which is 1 ulp less than the original. On the other hand, the method that I planned (special casing $t=1$) won't fly, because I've found it can be the case where $t < 1$ and $A\oplus(B\ominus A)\otimes t > B$, for example:
t = 0x1.fffffffffffffp-1
A = 0x1.afb669777cbfdp+2
B = 0x1.bd7b786d2fd28p+1
$A \oplus (B \ominus A)\otimes t =\,$
0x1.bd7b786d2fd29p+1
which means that if method (1) is to be used, the only strategy that may work is clamping.
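For reference, the examples above can be reproduced with a short Python sketch (Python floats are IEEE-754 binary64 with round-to-nearest-even, matching the setting here); this only checks the quoted cases, it is not a general proof:

```python
# Case where A + (B - A) overshoots B by 1 ulp
A = float.fromhex('-0x1.cae164da859c9p-1')
B = float.fromhex('0x1.eb4bf7b6b2d6ep-1')
print((A + (B - A)).hex())           # 0x1.eb4bf7b6b2d6fp-1

# Case where A + (B - A) undershoots B by 1 ulp
A = float.fromhex('-0x1.be03888ad585cp-1')
B = float.fromhex('0x1.0d9940702d541p-1')
print((A + (B - A)).hex())           # 0x1.0d9940702d540p-1

# Case with t < 1 where method (1) still exceeds B, so clamping is needed
t = float.fromhex('0x1.fffffffffffffp-1')
A = float.fromhex('0x1.afb669777cbfdp+2')
B = float.fromhex('0x1.bd7b786d2fd28p+1')
print(A + (B - A) * t > B)           # True
```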
References
[1] D.E.Knuth, The Art of Computer Programming, vol. 2: Seminumerical algorithms, third edition, p. 215
[2] Op. cit. pp. 230-231
[3] Op. cit. p.235 eq.(41)
|
Update: The MathJax Plugin for TiddlyWiki has a new home: https://github.com/guyru/tiddlywiki-mathjax Some time ago I came across MathJax, a nifty, JavaScript-based engine for displaying TeX and LaTeX equations. It works by “translating” the equation to MathML or HTML+CSS, so it works on all modern browsers. The result isn’t a raster image, like in most LaTeX solutions (e.g. MediaWiki), so it scales with the text around it. Furthermore, it’s quite easy to integrate as it doesn’t require any real installation, and you could always use MathJax’s own CDN, which makes things even simpler.
I quickly realized MathJax would be a perfect fit for TiddlyWiki, which is also based on pure JavaScript. It would allow me to enter complex formulas in tiddlers and still be able to carry my wiki anywhere with me, independent of a real TeX installation. I searched the web for an existing MathJax plugin for TiddlyWiki but I came up empty handed (I did find some links, but they referenced pages that no longer exist). So I regarded it as a nice opportunity to begin writing some plugins for TiddlyWiki and created the MathJaxPlugin, which integrates MathJax with TiddlyWiki.
As I don’t have an online TiddlyWiki, you won’t be able to import the plugin; instead you’ll have to install it manually (which is pretty simple).
Start by creating a new tiddler named
MathJaxPlugin, and tag it with
systemConfig (this tag will tell TiddlyWiki to execute the JS code in the tiddler, thus making it a plugin). Now copy the following code into the tiddler content:
/***
|''Name:''|MathJaxPlugin|
|''Description:''|Enable LaTeX formulas for TiddlyWiki|
|''Version:''|1.0.1|
|''Date:''|Feb 11, 2012|
|''Source:''|http://www.guyrutenberg.com/2011/06/25/latex-for-tiddlywiki-a-mathjax-plugin|
|''Author:''|Guy Rutenberg|
|''License:''|[[BSD open source license]]|
|''~CoreVersion:''|2.5.0|
!! Changelog
!!! 1.0.1 Feb 11, 2012
* Fixed interoperability with TiddlerBarPlugin
!! How to Use
Currently the plugin supports the following delimiters:
* """\(""".."""\)""" - Inline equations
* """$$""".."""$$""" - Displayed equations
* """\[""".."""\]""" - Displayed equations
!! Demo
This is an inline equation \(P(E) = {n \choose k} p^k (1-p)^{ n-k}\) and this is a displayed equation:
\[J_\alpha(x) = \sum_{m=0}^\infty \frac{(-1)^m}{m! \, \Gamma(m + \alpha + 1)}{\left({\frac{x}{2}}\right)}^{2 m + \alpha}\]
This is another displayed equation $$e=mc^2$$
!! Code
***/
//{{{
config.extensions.MathJax = {
mathJaxScript : "http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML",
// uncomment the following line if you want to access MathJax using SSL
// mathJaxScript : "https://d3eoax9i5htok0.cloudfront.net/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML",
displayTiddler: function(TiddlerName) {
config.extensions.MathJax.displayTiddler_old.apply(this, arguments);
MathJax.Hub.Queue(["Typeset", MathJax.Hub]);
}
};
jQuery.getScript(config.extensions.MathJax.mathJaxScript, function(){
MathJax.Hub.Config({
extensions: ["tex2jax.js"],
"HTML-CSS": { scale: 100 }
});
MathJax.Hub.Startup.onload();
config.extensions.MathJax.displayTiddler_old = story.displayTiddler;
story.displayTiddler = config.extensions.MathJax.displayTiddler;
});
config.formatters.push({
name: "mathJaxFormula",
match: "\\\\\\[|\\$\\$|\\\\\\(",
//lookaheadRegExp: /(?:\\\[|\$\$)((?:.|\n)*?)(?:\\\]|$$)/mg,
handler: function(w)
{
switch(w.matchText) {
case "\\[": // displayed equations
this.lookaheadRegExp = /\\\[((?:.|\n)*?)(\\\])/mg;
break;
case "$$": // inline equations
this.lookaheadRegExp = /\$\$((?:.|\n)*?)(\$\$)/mg;
break;
case "\\(": // inline equations
this.lookaheadRegExp = /\\\(((?:.|\n)*?)(\\\))/mg;
break;
default:
break;
}
this.lookaheadRegExp.lastIndex = w.matchStart;
var lookaheadMatch = this.lookaheadRegExp.exec(w.source);
if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
createTiddlyElement(w.output,"span",null,null,lookaheadMatch[0]);
w.nextMatch = this.lookaheadRegExp.lastIndex;
}
}
});
//}}}
After saving the tiddler, reload the wiki and the MathJaxPlugin should be active. You can test it by creating a new tiddler with some equations in it:
This is an inline equation $$P(E) = {n \choose k} p^k (1-p)^{ n-k}$$ and this is a displayed equation:
\[J_\alpha(x) = \sum_{m=0}^\infty \frac{(-1)^m}{m! \, \Gamma(m + \alpha + 1)}{\left({\frac{x}{2}}\right)}^{2 m + \alpha}\]
Which should result in the tiddler that appears in the above image.
Update 2011-08-19: Removed debugging code from the plugin.
Changelog 1.0.1 (Feb 11, 2012): Applied Winter Young’s fix for interoperability with other plugins (mainly TiddlerBarPlugin).
|
Assume we have two sets of $n$ qubits. The first set of $n$ qubits is in state $|a\rangle$ and the second set in $|b\rangle$. Is there a fixed procedure that generates a superposed state of the two, $|a\rangle + |b\rangle$?
Depending on what precisely your assumptions are about a, b, I think this is essentially impossible, and is something called the "no superposing theorem". Please see this paper.
Your question is not quite correctly defined.
First of all, $|a\rangle + |b\rangle$ is not a state. You need to normalize it by considering $\frac{1}{|||a\rangle + |b\rangle||}(|a\rangle + |b\rangle)$.
Secondly, in fact, you don't have access to the states $|a\rangle$ and $|b\rangle$ but only to the states up to some global phase, i.e. you can think that the first register is in the vector-state $e^{i\phi}|a\rangle$ and the second register is in the vector-state $e^{i\psi}|b\rangle$ with inaccessible $\phi, \psi$. Since you don't have access to $\phi, \psi$, you can't define the sum $|a\rangle + |b\rangle$. But you can ask to construct the normalized state $|a\rangle + e^{it}|b\rangle$ for some $t$. This question is well posed. Though, as DaftWullie pointed out in his answer, it is impossible to construct such a state.
|
Answer
In general, $\sin{(bx)} \ne b\cdot \sin{x}$. This is because they have different periods and amplitudes. Refer to the image in the step-by-step part below for the graph.
Work Step by Step
RECALL: The function $a \cdot \sin{(bx)}$ has: amplitude $= |a|$ and period $= \frac{2\pi}{b}$. Thus: The function $y=\sin{(2x)}$ has an amplitude of $|1|=1$ and a period of $\frac{2\pi}{2} = \pi$. The function $y=2\sin{x}$ has an amplitude of $|2|=2$ and a period of $\frac{2\pi}{1}=2\pi$. From the information above, it is obvious that the two functions are different from each other. Thus, it cannot be said that, in general, $\sin{(bx)}=b \cdot \sin{x}$. Use a graphing utility to graph the two functions. (Refer to the attached image below for the graph; the green graph is $y=2\sin{x}$ while the red graph is $y=\sin{(2x)}$.) Notice that the graphs are different.
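(A one-line numerical counterexample, not part of the original answer, evaluated at $x = \pi/2$:)

```python
import math

x = math.pi / 2
print(math.sin(2 * x), 2 * math.sin(x))   # ~0.0 (this is sin(pi)) versus 2.0
```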
|
Homework Statement: There is an infinite charged plate in the yz plane with surface charge density ##\sigma = 8*10^8 C/m^2## and a negatively charged particle at coordinate (4,0,0). Find the magnitude of the e-field at coordinate (4,4,0). Homework Equations: E = E1 + E2
So I figured to get e-field at point (4,4,0), I need to find the resultant e-field from the negatively charged particle and the plate
##E_{resultant}=E_{particle}+E_{plate}##
##E_{particle}=\frac{kq}{d^2}=\frac{(9*10^9)(-2*10^-6)}{4^2}=-1125N/C##
Now for the plate is where I'm confused.
If this was a wire, it would have been okay for me since I only need to deal with one dimension.
Since what they requested was a plate in the yz plane, does this mean that my:
##\sigma=dy*dy*x?## where ##dy## is the 'slice' I take and x is the width of the plate? Is that accurate?
If it is true, then to find the e-field created by that slice at the point,
##dE=\frac{kdq}{R^2}##
##dE=\frac{k\sigma *x*dy}{a^2+y^2}##
I know that the vertical components of the resultant e-field will cancel out because there are the same number of segments above and below the point.
So need to find ##dE_{x}##, which = ##dEcos\theta##, where ##\theta## is shown:
So ##dE_{x} = dEcos\theta = (\frac{k\sigma *x*dy}{a^2+y^2}) (\frac{a}{\sqrt{y^2+a^2}})##,
Now the problem is I can't integrate this to find my resultant e-field because I do not know what the value of x is. If this were a wire in a plane it would have been solvable for me, but now I'm kind of stuck.
Any clues/help? Thanks :)
|
The Normal distribution has the following properties:
Symmetrical with a bell-shaped appearance.
The population mean and median are equal.
An infinite range, $-\infty < x < \infty$.
The approximate probability for certain ranges of $X$-values:
$P(\mu - 1\sigma < X < \mu + 1\sigma) \approx 68\%$
$P(\mu - 2\sigma < X < \mu + 2\sigma) \approx 95\%$
$P(\mu - 3\sigma < X < \mu + 3\sigma) \approx 99.7\%$
\[ f(x)=\frac{1}{\sqrt{2\pi}\sigma}e^{\frac{-\left(x-\mu\right)^2}{2\sigma^2}} \]
$-\infty < x < \infty$
$N(\mu,\sigma^2)$ is used to denote the distribution.
$E(X) = \mu$
$V(X) = \sigma^2$
If $\mu=0$ and $\sigma^2=1$, this is called a Standard Normal and denoted $Z$.
All Normally distributed random variables can be converted into a Standard Normal.
To determine probabilities related to the Normal distribution, the Standard Normal distribution is used.
\[ Z=\frac{X-\mu}{\sigma} \]Note: if $z$ is known we can solve for $x$:
\[ x=\mu+z\sigma \]
\[ f(z)=\frac{1}{\sqrt{2\pi}}e^{\frac{-z^2}{2}} \]
$-\infty < z < \infty$
Has the same properties as the Normal distribution; $N(0,1)$ is used to denote the distribution.
$E(Z) = \mu = 0$
$V(Z) = \sigma^2 = 1$
The approximate probability for certain ranges of $Z$-values:
$P(-1 < Z < 1) \approx 68\%$
$P(-2 < Z < 2) \approx 95\%$
$P(-3 < Z < 3) \approx 99.7\%$
To find probabilities of any $z$-value, refer to the Standard Normal table or computer software such as Microsoft Excel.
$P(Z = c)=0$, where $c$ is a constant.
$P(Z < -c)=P(Z > c)$
$P(Z > c)=1-P(Z < c)$
$P(Z < -c)=1-P(Z < c)$
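These Standard Normal facts are easy to check numerically; a small Python sketch using SciPy (the values $\mu=100$, $\sigma=15$, $x=130$ in the conversion example are mine, purely for illustration):

```python
from scipy.stats import norm

# P(-1 < Z < 1), P(-2 < Z < 2), P(-3 < Z < 3)
for c in (1, 2, 3):
    print(norm.cdf(c) - norm.cdf(-c))        # ~0.6827, 0.9545, 0.9973

# Converting X ~ N(mu, sigma^2) to Z: P(X < x) = P(Z < (x - mu) / sigma)
mu, sigma, x = 100, 15, 130
print(norm.cdf(x, loc=mu, scale=sigma), norm.cdf((x - mu) / sigma))   # identical values
```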
\[\bar{X} \sim N(\mu,\frac{\sigma^{2}}{n}). \]In addition,
\[ Z=\frac{\bar{X}-\mu}{\sigma/\sqrt{n}}, \]where $Z\sim N(0,1)$. \defs{Examples}
The height of a randomly selected person is often assumed to be Normally distributed.
The weight of a randomly selected person is often assumed to be Normally distributed.
The pulse rate of a randomly selected person is often assumed to be Normally distributed.
|
Dear Uncle Colin, I've come across a seemingly simple question I can't tackle: solve $x^2 + 2x \ge 2$. I tried factorising to get $x(x+2) \ge 2$, which has the roots 0 and -2, but the book says the answer is $x < -1-\sqrt{3}$ or $x > -1 + \sqrt{3}$. Read More →
A STEP question (1999 STEP II, Q4) asks: By considering the expansions in powers of $x$ of both sides of the identity $(1+x)^n (1+x)^n \equiv (1+x)^{2n}$ show that: $\sum_{s=0}^{n} \left( \nCr{n}{s} \right)^2 = \left( \nCr{2n}{n} \right)$, where $\nCr{n}{s} = \frac{n!}{s!(n-s)!}$. By considering similar identities, or otherwise, show also that: (i) Read More →
Dear Uncle Colin, I have a triangle with sides 4.35cm, 8cm and 12cm; the angle opposite the 4.35cm side is 10º and I need to find the largest angle. I know how to work this out in two ways: I can use the cosine rule with the three sides, which gives Read More →
"One equals two" growled the mass of zombies in the distance. "One equals two." The first put down the shotgun. "I've got this one," he said, picking up the megaphone. "If you're sure," said the second. "I'M SURE." The second covered his ears. "SORRY. I mean, sorry." The first redirected Read More →
Dear Uncle Colin, I'm supposed to solve $(1+i)^N = 16$ for $N$, and I don't know where to start! -- Don't Even Mention Other Imaginary Variations -- Reality's Enough Hello, DEMOIVRE, there are a couple of ways to attack this. The simplest way (I think) is to convert the problem Read More →
Somewhere deep in the recesses of my email folder lurks a puzzle that looks simple enough, but that several of my so-inclined friends haven't found easy: A circle of radius $r$, has centre $C\ (0,r)$. A tangent to the circle touches the axes at $A\ (9,0)$ and $B\ (0, 2r+3)$. Read More →
Dear Uncle Colin, I'm trying to find a definite integral: $\int_0^\pi \sin(kx) \sin(mx) \dx$, where $m$ and $k$ are positive integers and the answer needs to be simplified as far as possible. I've wound up with $\left[\frac{ (k+m) \sin((k-m) \pi) - (k-m)\sin((k+m)\pi) }{2(k-m)(k+m)}\right]$, but it's been marked wrong. -- Flat Read More →
"No, no, wait!" said the student. "Look!" "8.000 000 072 9," said the Mathematical Ninja. "Isn't that $\frac{987,654,321}{123,456,789}$? What do you think this is, some sort of a game?" "It has all the hallmarks of..." "I'll hallmark you in a minute!" said the Mathematical Ninja. Seconds later, the student's arms Read More →
In this month's podcast, @reflectivemaths and I discuss: Colin's book being available to buy Number of the podcast: Catalan's constant, which is about 0.915 965 (defined as $\frac{1}{1} - \frac{1}{9} + \frac{1}{25} - \frac{1}{49} + ... + \frac{1}{(2n+1)^2} - \frac{1}{(2n+3)^2} + ...$). Not known whether it's rational. Used in combinatorics Read More →
|
$$\mathop {\lim }\limits_{n \to \infty } {{{a_n}} \over {{b_n}}} = 1$$ Prove the statement implies $\sum {a_n}$ and $\sum {b_n}$ converge or diverge together.
My guess the statement is true.
if $\sum{{a_n}}$ diverges, then $\mathop {\lim }\limits_{n \to \infty } {a_n} \ne 0$
So, $$\eqalign{ & \mathop {\lim }\limits_{n \to \infty } {a_n} = L \ne 0 \cr & {{\mathop {\lim }\limits_{n \to \infty } {a_n}} \over {\mathop {\lim }\limits_{n \to \infty } {b_n}}} = 1 \Rightarrow {L \over {\mathop {\lim }\limits_{n \to \infty } {b_n}}} = 1 \Rightarrow L = \mathop {\lim }\limits_{n \to \infty } {b_n} \ne 0 \cr} $$
therefore, $\sum {b_n}$ also diverges.
What I did not manage to do is prove that the two series converge together.
Or maybe the statement is not always true?
|
Answer
11
Work Step by Step
We adjust equation 18.11b to obtain: $\frac{V_{max}}{V_{min}} = \left(\frac{T_{max}}{T_{min}}\right)^{1/(\gamma-1)}$, so $\frac{V_{max}}{V_{min}} = \left(\frac{773}{293}\right)^{1/(1.4-1)}\approx 11$.
|
This page describes (some of) the differences between Welch (4th Ed) and its alternatives. For coverage comparisons, please look at bookcomparison.
Innovations in Approach
Every interested and modestly talented student can understand finance. I believe that our finance concepts are no more difficult than those in standard texts covering the principles of economics and that our mathematics is no more difficult than high-school level. I believe that finance is easiest when explained from basic principles and only gradually ramped up in complexity. I also believe that, although it is important for our students to learn how to solve traditional textbook problems, it is more important for them to learn how to think about and approach new problems that they will inevitably encounter in the real world.
A Logical Progression
The book starts with simple and stylized scenarios in which solutions are easy. It then progresses to more complex and realistic scenarios in which questions and answer become more difficult. Within this architecture, chapters build organically on previous concepts. This incremental progression allows students to reuse what they have learned and to understand the effect of each new change in and of itself.
The book has a logical progression from the perfect-market, law-of-one-price ideal world (on which almost all finance formulas are based) to an imperfect market (in which our formulas may need explicit or implicit adjustments).
Numerical Example Leading to Formula
I learn best by following a numerical example, and I believe that students do, too. Whenever I want to understand an idea, I try to construct numerical examples for myself---the simpler, the better. Therefore, this book relies on simple numerical examples as its primary tutorial method. Instead of a ``bird's eye'' view of the formula first and application second, students start with a ``worm's eye'' view and work their way up---from simple numbers to progressively more complex examples. (Formulas are below the numbers.) Each step is easy. At first glance, you may think this may be less ``executive'' or perhaps not as well-suited to students with only a cursory interest in finance, but I assure you that neither of these is the case.
Critical questions such as, ``What would this project be worth?'' are answered in numerical step-by-step examples, and right under the computations are the corresponding symbolic formulas. I believe that the pairing of numerics with formulas ultimately helps students understand the material both on a higher level and with more ease.
Problem Solving
A corollary to the numbers-first approach is my belief that formulaic memorization is a last resort. Such a rote approach leaves the house without a foundation. Instead of giving students too many canned formulas, I try to teach them how to approach and solve problems---often by discovering the methods themselves. I want students to learn how to dissect new problems with basic analytical tools.
Self-Contained
Many students come into class with a patchwork of background knowledge. Along the way, holes in their backgrounds cause some of them to get lost. Because they often do not realize when this has happened, their frustration rises. I have therefore tried to keep this book largely self-contained.
For example, all necessary statistical concepts are integrated in Chapter 8 (Investor Choice: Risk and Reward), and all necessary accounting concepts are explained in Chapter 14 (From Financial Statements to Economic Cash Flows), though this is neither a full statistics nor a full accounting textbook.
Reasonable Brevity
This book has a count of about 650 pages rather than 1,100 pages. Sometimes, less is more.
Innovations in Content and Perspective
This book also offers numerous topical and expositional innovations, of which the following is a limited selection.
A Strong Distinction between Expected and Promised Cash Flows
I clearly distinguish between the premium to compensate for default (credit risk), which is introduced in Chapter 6 (Uncertainty, Default, and Risk); and the risk premium, which is introduced in Chapter 9 (Benchmarked Costs of Capital). Students should no longer mistakenly believe that they have taken care of credit risk by discounting a promised cash flow with a CAPM expected rate of return. (If they commit this error---and I know from painful experience before writing this book that many students of other books do---it would have been better if they had never taken a finance course to begin with.)
More Emphasis On Term Premia, Even in Equity Premia
There are many ``nuances'' in calculating and assessing term and equity premia. Some methods are clearly correct, others incorrect. For example, should one use geometric or arithmetic rates of return? Short-term or long-term bonds? These choices induce large differences in inference, even for benchmarked costs of capital. Should one use 50 or 100 years when estimating forward from historical premia? Even when based on the same historical data, one can quote an estimate as high as 8% or as low as 2%. Mistakes here can dwarf the errors that the CAPM commits, and we can explain and get this right! See Chapter 9.
Relative Deemphasis of the CAPM
Every student needs to understand the CAPM because it is a finance standard. This book explains it well.
However, the empirical evidence is clear: the CAPM is not even a good approximation, even in its most common use form. It takes little sophistication to understand this. Figure 10.3 on page 226 shows it.
Moreover, its most common input assessments make no sense in the context of long-term capital budgeting. Clearing them up helps the model commit less serious mistakes. (Levi-Welch, JFQA, April 2017 discuss best practice in more detail.)
As in the benchmarking approach, it is first-order important to assess a good equity premium. Did you know that from 1970 to 2015, stocks outperformed long-term Treasury bonds by about 2% per year? Really. See Figure 9.3 on page 197. Forecasting an equity beta over 5-20 years is hard. Beta estimates require much more a-priori shrinkage than is common. This is not due to estimation uncertainty, but due to changing betas even in the historical data estimation sample. Step back and ask yourself---do you really think we have a good model that can predict which stocks will do better than others over time frames of years or decades? This exercise seems largely futile. But this does not mean that all projects require the same cost of capital. Asset-class and term-structure based alternatives can be recommended where the CAPM cannot. They reflect risk and project differences just like the CAPM. Almost all our tools still work. We can calculate asset expected returns, consider liquidity premia informally, etc. And there are more than enough nuances that matter.
So why are other textbooks still perpetuating this fairy-tale model in chapter over chapter, while glossing over what really matters?
Robustness
The book describes what finance practitioners can reasonably know and what they can only guess at (with varying degrees of accuracy). In the application of a number of financial tools, I point out which of the guessed uncertainties are likely to have important repercussions and which are minor in consequence. I also try to be honest about where our academic knowledge is solid and where it is shaky.
A Spotlight on the Pitfalls of Capital Budgeting
A self-contained chapter (Chapter 13: Capital Budgeting Applications and Pitfalls) describes real-world difficulties and issues in applying capital budgeting techniques, ranging from externalities to real options, to agency problems, behavioral distortions, and so on. The chapter ends with an ``NPV Checklist.''
Comparables
A chapter on comparables (Chapter 15: Valuation from Comparables and Some Financial Ratios), usually not found in other corporate finance textbooks, shows that if used properly, the comparables valuation method is a good cousin to NPV.
Financials from a Finance Perspective
A self-contained accounting chapter (Chapter 14: From Financial Statements to Economic Cash Flows) explains how earnings and economic cash flows relate. When students understand the logic of corporate financial statements, they avoid a number of common mistakes that have crept into financial cash flow calculations ``by tradition.'' In addition, a synthesizing chapter on pro formas (Chapter 21: Pro Forma Financial Statements) combines the ingredients from previous chapters---financials, comparables, capital budgeting, taxes, cost of capital, capital structure, and so on. Many students will be asked in their future jobs to construct pro formas, and our corporate finance curriculum has not always prepared them well enough to execute such assignments appropriately and thoughtfully.
An Updated Perspective on Capital Structure
The academic perspective on capital structure has been changing. Here are a few of the more novel points emphasized in this book:
Corporate claims do not just have cash flow rights but important control rights as well. This fact has many implications---even for one common proof of Modigliani-Miller.

Unless the firm is close to financial distress, it probably does not matter much how the firm is financed. Project choice is likely to be far more important than the debt-equity choice. (This does not mean that access to financing is not important, just that the exact debt-equity mix is not.)

Corporate liabilities are broader than just financial debt. In fact, on average, about two-thirds of firms' liabilities are nonfinancial. The firm value is thus the sum of its financial debt and equity plus its nonfinancial debt (often linked to operations). Again, this can be important in a number of applications.

Adverse selection causes a pecking order, but so do other effects. Thus, the pecking order does not necessarily imply adverse selection.

The debate about trade-off theory has moved to how slowly it happens---whether it takes 5 or 50 years for a firm to readjust.

Historical stock returns are a major determinant of which firms today have high debt ratios and which have low debt ratios. A simple inspection of the evolution of Intel's capital structure from 2013 to 2015 in Chapter 16 makes this plainly obvious.

Capital structure may not necessarily be a corporate-control device. On the contrary, equity-heavy capital structures could be the result of a breakdown of corporate control.

Preferred equity and convertibles have become rare among publicly traded corporations over the past decade.

A novel synthesizing figure (Figure 19.5 on page 542) provides a conceptual basis for thinking about capital structure in imperfect markets. It shows how APV fits with other non-tax-related imperfections.

Specific Changes for the Fourth Edition
This edition has been updated to 2017. The one major change is the insertion of Chapter 9 (Benchmarked Costs of Capital), and the just-discussed even more skeptical view about the practical usefulness of the CAPM. There are small changes throughout the book, but this book is deliberately converging to the clearest explanations. After all, unlike my peers, I do not need to suppress resales via new editions. There are no changes for the sake of changes.
Superior Course Website
syllabus.space provides a superior online course site for your students. The following shows an example equiz question the way a student would see it.
The numbers in the questions change every time the quiz is refreshed. Thus, a student can take the same question many times! Nicely formatted math online is no problem, either. Nice?
But it gets (much) better. Instructors can easily edit quizzes, and with numerical values that change with each browser refresh! For example, the above question was written as follows. First,
::I:: initializes variables (here, $S to a randomly drawn integer between 100 and 120; $rf to either 10%, 15%, or 20%; etc.), calculates intermediate values, and computes the most important quantity, the answer. The answer must be placed into the special variable $ANS.
:I: $S = rseq(100,120) ; $K = rseq(90,130) ; $t=1/pr(2,4) ; $rf=pr(0.10,0.15,0.20) ; $sd= pr(0.40,0.50,0.60) ; $pvk= $K*exp(-$rf*$t); $sdS = $sd*sqrt($t); $d1 = ( log($S/$K) + ($rf+$sd^2/2)*$t ) / ( $sd * $t^0.5 ); $d2 = $d1 - $sd * $t**0.5; $ANS= BlackScholes($S,$K,$t,$rf,$sd)
Next,
::Q:: poses the question itself to the student. To nicely format questions, we want to use html tags like <p> (e.g., for paragraph separation or bold display), and we want to use latex-style formulas (like \frac{\sigma^2}{2}) to show mathematical notation. Can this really all work together? Yes!
:Q: Calculate the Black-Scholes European call option price (without dividends) as a function of S = $ $S ; K = $ $K ; t=$t ; \( r_f= $rf \); and \(\sigma\) = $sd, where the risk-free rate and volatility are quoted for log-rates. The rest of this question is merely a set of cheating reminders \[ BS(S,K,t,r_f,\sigma) = S\cdot N(d_1) - PV(K)\cdot N(d_2) = \$ $S \cdot N(d_1) - PV(\$ $K)\cdot N(d_2) \] where N is the cumulative normal distribution, \[ PV(K) = PV(\$ $K) = K\cdot e^{-r_f\cdot t} = $K\cdot e^{-$rf\cdot $t} = \$ $pvk \] and the standard deviation of the stock price to expiration is defined by the standard deviation of plain stock returns, \[ \mbox{sd}(\sigma,t) = \sigma\cdot\sqrt{t} = $sd\cdot \sqrt{$t} = $sdS \] I am quoting both the risk-free rate continuously compounded...[deleted] \[ d_1= \frac{ \log(S/PV(K)) }{\mbox{sd}(\sigma,t) } + \mbox{sd}(\sigma,t)/2 = \frac{ \log(\$ $S/\$ $pvk) }{ $sdS } + $sdS/2 \approx $d1 \] \[ d_2= d_1 - \mbox{sd}(\sigma,t) = $d1 - $sdS \approx $d2 \] Thus, \[ BS(S,K,t,r_f,\sigma) \approx \$ $S \cdot N($d1) - PV(\$ $K)\cdot N($d2) \approx \$ $ANS \] Although I just "accidentally" told you the answer, please give it anyway.

(For LaTeX users, the only aspect to keep track of is that a single '$' no longer means starting math mode, but a variable. Thus
$ $x first prints a dollar and then displays the x variable's content. Use \( and \) as the alternative to start and stop inline math. It's better anyway.)
Finally, we need
::A:: to explain the correct answer after the student has submitted her own answer:
:A: The answer was already noted as $$ANS. \[ BS(S,K,t,r_f,\sigma) \approx \$ $S \cdot N($d1) - PV(\$ $K)\cdot N($d2) \approx \$ $ANS \]
This equiz system has been successfully used to design quizzes by other instructors for unrelated courses (such as derivatives).
(Optional) Simple Course Administration
Annoyed by (too-)feature-rich but clunky and hard-to-understand course websites? Me, too! The syllabus.space system also offers intuitive course administration functionality for instructors and schools. The learning curve is near zero. For example, look at the following:
What is not immediately clear?
Instructors can receive reports on student equiz performance, post messages, post and collect student homework answers, post syllabi, allow students to see who completes quizzes first (gamification), etc.
The software is all open-source and free for academic non-profit institutions. So any academic department can install the system on their own IT servers, tinker with it, pass it on to other academic institutions, etc. (Commercial textbook publishers, please contact me.)
When instructors want to run their websites on my server, I usually give them their own subdomains, such as mfe327.welch.syllabus.space. At small single-class scale, I can do this easily and without charging. At large scale (1,000 students plus), I would have to hire a dedicated IT expert and charge $20/student.
Other Instructor Materials
Qualified instructors from accredited institutions can receive full support materials. These are not made available to students under any circumstances. If you believe you qualify, please email ivo.welch@gmail.com, noting the class you are teaching and a university webpage that shows your official affiliation. I am not trying to annoy you, but students have tried to masquerade as instructors in the past.
The website also has a map at bookcomparison.html that compares topic coverages across major books. You should switch to this book asap---it is not only free and thus saves students a ton of money, but it is also simply a better book than its peers.
|
For a positive integer $d$, let $h(-d)$ denote the class number of the imaginary quadratic field $\mathbb{Q}(\sqrt{-d})$. Is there a known asymptotic formula for the sum
$$\displaystyle \sum_{\substack{p \leq x \\ p \equiv 3 \pmod{4}}} h(-p)?$$
It seems to me one should be able to derive such a formula from the class number formula and the known statistical results on $L(1,\chi)$. But... I see now this has already been worked out. See
Nagoshi, Hirofumi(J-GUN-FS) The moments and statistical distribution of class numbers of quadratic fields with prime discriminant. (English summary) Lith. Math. J. 52 (2012), no. 1, 77–94.
According to the MathSciNet review, the author gets asymptotic formulas for all the (positive integral) moments of $h(-p)$, for $p \equiv 3\pmod{4}$.
|
Johannes Breitling1,2,3, Anagha Deshmane4, Steffen Goerke1, Kai Herz4, Mark E. Ladd1,2,5, Klaus Scheffler4,6, Peter Bachert1,2, and Moritz Zaiss4
1Division of Medical Physics in Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany, 2Faculty of Physics and Astronomy, University of Heidelberg, Heidelberg, Germany, 3Max Planck Institute for Nuclear Physics, Heidelberg, Germany, 4High-field Magnetic Resonance Center, Max Planck Institute for Biological Cybernetics, Tübingen, Germany, 5Faculty of Medicine, University of Heidelberg, Heidelberg, Germany, 6Department of Biomedical Magnetic Resonance, Eberhard-Karls University Tübingen, Tübingen, Germany
Synopsis
Chemical exchange saturation transfer (CEST) MRI
allows for the indirect detection of low-concentration biomolecules by their saturation
transfer to the abundant water pool. However, reliable quantification of CEST
effects remains challenging and requires a high image signal-to-noise ratio. In this study, we show that principal component
analysis can provide a denoising capability which is comparable to or better than
6-fold averaging. Principal component analysis allows identifying similarities
across all noisy Z-spectra, and thus, extracting the relevant information. The
resulting denoised Z-spectra provide a more stable basis for quantification of
selective CEST effects, without requiring additional measurements.
Introduction

Chemical exchange saturation transfer (CEST) MRI
allows for the indirect detection of low-concentration biomolecules by their saturation
transfer to the abundant water pool. However, quantification of inherently
small CEST effects remains challenging and requires high image signal-to-noise
ratio (SNR) to achieve reliable results. For optimized saturation parameters
and image readout, higher SNR can only be achieved by averaging, resulting in prolonged
measurement times. In this study, principal component analysis (PCA) was utilized
to identify similarities across all Z-spectra, extract the relevant information
i.e. principal components (PCs) and reduce the dimensionality to improve the
SNR without averaging. 1,2
Methods
The proposed denoising algorithm (Fig. 1) is applied
after motion correction using a rigid-registration-algorithm
in MITK
3, normalization, correction of B 0-inhomogeneities,
and segmentation of brain tissues (cerebrospinal fluid, gray matter, and white
matter). CEST data, consisting of images of size
$$${u}\times{v}\times{w}$$$ for the $$$n$$$ saturation frequency offsets, is
reshaped into a Casorati matrix
$$$\textbf{C}$$$ of size
$$$ {m}\times{n}$$$
, with $$${m}\leq{u}\cdot{v}\cdot{w}$$$
being the
number of brain voxels. Each row of
$$$\textbf{C}$$$ represents the
Z-spectrum of one voxel and each column represents a complete segmented image
for one saturation frequency offset.
PCA is performed by eigendecomposition of the
covariance matrix
$$\mathrm{cov}(\textbf{C})=\frac{1}{n-1}\widetilde{\textbf{C}}^\textbf{T}\widetilde{\textbf{C}}$$
$$\quad=\bf\Phi\Lambda\Phi^\textbf{T}$$
with
$$$\widetilde{\textbf{C}}$$$ being the
column-wise mean-centered Casorati matrix,
$$${\bf\Phi}=({\bf\varphi}_1{\bf\varphi}_2\ldots{\bf\varphi}_n)$$$ being the
$$${n}\times{n}$$$ orthonormal
eigenvector matrix and $$${\bf\Lambda}=\mathrm{diag}(\lambda_1,\lambda_2,\ldots,\lambda_n)$$$ being the
associated diagonal eigenvalue matrix with
$$$\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_n$$$
. The variance i.e. information content of a signal
will concentrate in the first few PCs, whereas the noise is spread evenly over
the dataset. Therefore preserving the first few PCs will remove noise from the
data set. The optimal number of components
can be determined
by an empirical indicator function applied to the eigenvalues.
4
$$k=\underset{i}{\mathrm{argmin}}\left[\frac{\sum_{l=i+1}^{l=n}\lambda_l}{m(n-i)^5}\right]^\frac{1}{2}$$
Projection of
$$$\widetilde{\textbf{C}}$$$
onto the reduced set of the first $$$k$$$ eigenvectors
$$${\bf\Phi}_{(k)}$$$
$$\widetilde{\textbf{C}}_{(k)}=\widetilde{\textbf{C}}{\bf\Phi}_{(k)}{{\bf\Phi}_{(k)}}^\textbf{T}$$
and addition of the mean Z-spectrum results in the denoised Casorati
matrix. Denoised Z-spectra are reformatted into a final denoised image series
with dimensions $$${u}\times{v}\times{w}\times{n}$$$.
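As an illustration of this truncation step, here is a minimal sketch in Python/NumPy (my own code, not the authors' implementation; C is assumed to be the Casorati matrix of size m voxels × n offsets):

import numpy as np

def pca_denoise(C, k=None):
    """Denoise a Casorati matrix C (m voxels x n offsets) by keeping k principal components.

    If k is None, it is chosen with the empirical indicator function of Malinowski.
    Illustrative sketch only.
    """
    m, n = C.shape
    mean = C.mean(axis=0, keepdims=True)
    Ct = C - mean                                   # column-wise mean-centering
    cov = (Ct.T @ Ct) / (n - 1)                     # n x n covariance matrix
    eigval, eigvec = np.linalg.eigh(cov)            # eigh returns ascending eigenvalues
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]  # sort descending

    if k is None:
        # empirical indicator function: k = argmin_i sqrt(sum of discarded eigenvalues / (m (n-i)^5))
        ind = [np.sqrt(max(eigval[i:].sum(), 0.0) / (m * (n - i) ** 5)) for i in range(1, n)]
        k = int(np.argmin(ind)) + 1                 # back to a 1-based count of components

    Phi_k = eigvec[:, :k]                           # first k eigenvectors
    return Ct @ Phi_k @ Phi_k.T + mean              # project and add the mean Z-spectrum back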
In vivo 3D-CEST-MRI
(1.7×1.7×3 mm
3, 12 slices) was performed on a 7T whole-body scanner
(Siemens Healthineers, Germany) using the snapshot-CEST approach 5.
Pre-saturation by 140 Gaussian-shaped pulses (t p = 15 ms, duty cycle
= 60%, t sat = 3.5 s) was applied at 54 unevenly distributed offsets for
two different mean B 1 = 0.6 and 0.9 µT. Each Z-spectrum was acquired
six times to enable comparison with high-SNR data obtained by averaging. Conventional,
averaged and denoised Z-spectra were fitted pixel-wise with a Lorentzian 5-pool
fit model. Lorentzian difference images were calculated according to $$$\mathrm{MTR_{LD}}=Z_{ref}-Z$$$ and corrected for B 1-inhomogeneities 6.

Results

Considerable noise is
observed in the unprocessed Z-spectra (Fig. 2 left). Averaging 6 measurements reduces
the noise significantly, allowing reliable identification of CEST resonances
(Fig. 2 middle). The same resonances are also revealed by application of the
denoising algorithm, indicating the correct choice for the number of components
(Fig. 2 right). The smoother overall appearance of the denoised vs. averaged
Z-spectra suggests the algorithm is equivalent or better than averaging. This
result is also apparent in the APT and rNOE MTR LD contrasts calculated from
denoised Z-spectra, which exhibit comparable or better image quality than those
from averaged spectra (Fig. 3).

Discussion

PCA was previously shown to be a powerful denoising
technique in HyperCEST, with the optimal number of components deduced from the composition
of the investigated phantom. 2 For conventional CEST experiments, the number
of relevant components is not straightforward to determine, as the number of contributions,
their dependencies on physiological parameters and pathological alterations
thereof are unknown. If too few components are preserved, resonances would be
missing or deformed; too many components would diminish the denoising
performance. In this study, the optimal number is determined using a
data-driven approach 4, the functionality of which was verified by the similarity
of averaged and denoised Z-spectra.
Prior segmentation and correction for B 0
inhomogeneities ensured only relevant i.e. meaningful voxels with the same
spectral position were included. PCA generally requires a sound statistical
basis (i.e. a large number of voxels) to determine the PCs whereas the
denoising performance itself depends on the number of preserved components. Therefore
the denoising approach will benefit from 3D imaging, especially whole-brain
imaging, and from a large number of acquired offsets.

Conclusion

PCA denoising of Z-spectra
results in contrasts which are comparable or even superior to those achieved by
averaging. With this technique at hand, small CEST effects can be reliably
quantified without the need for additional measurements. This might allow clinical
application of CEST MRI at low field strengths as well as with fast imaging
sequences by compensating for the expected SNR deterioration.

Acknowledgements

Max Planck Society (support to JB, MZ, AD); German
Research Foundation (DFG, grant ZA 814/2-1, support to MZ); European Union
Horizon 2020 research and innovation programme (Grant Agreement No. 667510,
support to MZ, AD).
References
1. Hotelling H. Analysis of a Complex of Statistical Variables into Principal Components. Journal of Educational Psychology 1933;24:417-441,498-520.
2. Döpfert J, Witte C, Kunth M, and Schröder L. Sensitivity enhancement of (Hyper-)CEST image series by exploiting redundancies in the spectral domain. Contrast Media Mol. Imaging 2014;9:100-107.
3. Nolden M, Zelzer S, Seitel A, et al. The Medical Imaging Interaction Toolkit: challenges and advances. Int J CARS 2013;8(4):607-620.
4. Malinowski ER. Determination of the number of factors and the experimental error in a data matrix. Anal. Chem. 1977;49:612-617.
5. Zaiss M, Ehses P, and Scheffler K. Snapshot-CEST: Optimizing spiral-centric-reordered gradient echo acquisition for fast and robust 3D CEST MRI at 9.4 T. NMR Biomed 2018;31:e3879.
6. Windschuh J, Zaiss M, Meissner JE, et al. Correction of B1-inhomogeneities for relaxation-compensated CEST imaging at 7 T. NMR Biomed 2015;28:529-537.
|
scipy.interpolate.UnivariateSpline.antiderivative

UnivariateSpline.antiderivative(n=1)
Construct a new spline representing the antiderivative of this spline.
Parameters:
    n : int, optional
        Order of antiderivative to evaluate. Default: 1

Returns:
    spline : UnivariateSpline
        Spline of order k2=k+n representing the antiderivative of this spline.
Notes
New in version 0.13.0.
Examples
>>> import numpy as np
>>> from scipy.interpolate import UnivariateSpline
>>> x = np.linspace(0, np.pi/2, 70)
>>> y = 1 / np.sqrt(1 - 0.8*np.sin(x)**2)
>>> spl = UnivariateSpline(x, y, s=0)
The derivative is the inverse operation of the antiderivative, although some floating point error accumulates:
>>> spl(1.7), spl.antiderivative().derivative()(1.7)
(array(2.1565429877197317), array(2.1565429877201865))
Antiderivative can be used to evaluate definite integrals:
>>> ispl = spl.antiderivative()
>>> ispl(np.pi/2) - ispl(0)
2.2572053588768486
This is indeed an approximation to the complete elliptic integral \(K(m) = \int_0^{\pi/2} [1 - m\sin^2 x]^{-1/2} dx\):
>>> from scipy.special import ellipk
>>> ellipk(0.8)
2.2572053268208538
|
Is arithmetic with infinite numbers fictitious?
It depends on your definition of "arithmetic with infinite numbers" and "fictitious". The meaning of
fictitious that this question was written to oppose, was in reference to certain descriptions of how Robinson's nonstandard analysis is used for calculus. Those descriptions don't have any obvious equivalent for Skolem arithmetic, because Skolem arithmetic is not used as a tool for doing or teaching calculus, or for any other application outside of mathematical logic and model theory.
In 1933 Skolem constructed models for arithmetic containing infinite numbers. In a 1977 article Stillwell emphasized the constructive nature of Skolem's approach. [...]
The words like
constructed and construction have no particular meaning here beyond "formal existence proof". Stillwell did not use the word constructive whose precise interpretations do not apply to Skolem's proof.
Is this at odds with Tennenbaum's theorem on nonrecursivity?
There are computable number systems that extend integer arithmetic with additional objects that can be interpreted as infinitely large, and operations extending the familiar ones to the larger system. Polynomials with integer coefficients and computable ordinal notations are two examples. Tennenbaum's theorem shows that Skolem arithmetic cannot be presented in that way, with discrete computable data and operations on them.
This question is related to a
comment exchange at Does evaluating hyperreal $f(H)$ boil down to $f(±∞)$ in the standard theory of limits? where terms like "fictitious" are being applied to nonstandard models,
"Fictitious" was applied to
descriptions of what is done with nonstandard analysis, not the models themselves. The idea that nonstandard models constructed using the Axiom of Choice have a lesser form of existence than constructs that do not, is certainly an objection that arises in discussions of NSA, just not in the one that you linked to.
The metaphors and fictions relating to NSA occur not (as far as I was asserting) so much in the existence of the objects, but in the descriptions of how the theory is used, such as the idea that there is an ability to take the standard part of bounded $f(H)$ (going beyond the standard rubric of taking limits as $H \to \infty$ when they exist) when this ability never materializes except as the standard thing.
To the extent there is a problem on the existence front, it is that taking individual elements of the nonstandard models is more elusive than just constructing the models, so that the description of "choosing a nonstandard $H$ and calculating $f(H)$ and then taking standard part" can only mean a procedure that is independent of $H$, which is standard analysis dressed in very marginally different words. It doesn't matter whether one considers the individual $H$ to really exist or not, there just is no way to do things like compute standard part of $\sin(H)$ or other functions that depend nontrivially on infinite $H$.
Note 2. The point about a nonstandard model of arithmetic is that one can do a significant fragment of calculus just using the quotient field of such a model.
Only in logic papers. This is not a real "use" of nonstandard arithmetic to do calculus as something taught to and utilized by nonlogicians.
|
For just answering the yes/no question, the easiest way is to use the Swiss knife of bijections, the Cantor-Schröder-Bernstein theorem, which just requires us to construct separate
injections in each direction $\mathbb R\to\mathbb N\times\mathbb R$ and $\mathbb N\times\mathbb R\to\mathbb R$ -- which is easy:
$$ f(x) = (1,x) $$
$$ g(n,x) = n\cdot \pi + \arctan(x) $$
Because there is an injection either way, Cantor-Schröder-Bernstein concludes that a bijection $\mathbb R\to\mathbb N\times\mathbb R$ must exist.
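For completeness (this check is not part of the original answer): $g$ is injective because
$$g(n,x)=n\pi+\arctan(x)\in\left(n\pi-\tfrac{\pi}{2},\; n\pi+\tfrac{\pi}{2}\right),$$
and these open intervals are pairwise disjoint for distinct $n\in\mathbb N$, while $\arctan$ is strictly increasing on $\mathbb R$; hence $g(n,x)=g(m,y)$ forces $n=m$ and then $x=y$.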
If you already know $|\mathbb R\times\mathbb R|=|\mathbb R|$, you can get by even quicker by
restricting your known injection $\mathbb R\times\mathbb R\to \mathbb R$ to the smaller domain $\mathbb N\times\mathbb R\to\mathbb R$ instead of mucking around with arctangents.
|
A first remark
This same phenomenon of 'control' qubits changing states in some circumstances also occurs with controlled-NOT gates; in fact, this is the entire basis of eigenvalue estimation. So not only is it possible, it is an important fact about quantum computation that it is possible. It even has a name: a "phase kick", in which the control qubits (or more generally, a control register) incurs relative phases as a result of acting through some operation on some target register.$\def\ket#1{\lvert#1\rangle}$
The reason why this happens
Why should this be the case? Basically it comes down to the fact that the standard basis is not actually as important as we sometimes describe it as being.
Short version. Only the standard basis states on the control qubits are unaffected. If the control qubit is in a state which is not a standard basis state, it can in principle be changed.
Longer version —
Consider the Bloch sphere. It is, in the end, a sphere — perfectly symmetric, with no one point being more special than any other, and no one
axis more special than any other. In particular, the standard basis is not particularly special.
The CNOT operation is in principle a physical operation. To describe it, we often
express it in terms of how it affects the standard basis, using the vector representations$$ \ket{00} \to {\scriptstyle \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0 \end{bmatrix}}\,,\quad\ket{01} \to {\scriptstyle \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix}}\,,\quad\ket{10} \to {\scriptstyle \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}}\,,\quad\ket{11} \to {\scriptstyle \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}}$$— but this is just a representation. This leads to a specific representation of the CNOT transformation:$$\mathrm{CNOT}\to{\scriptstyle \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}}\,.$$and for the sake of brevity we say that those column vectors are the standard basis states on two qubits, and that this matrix is a CNOT matrix.
Did you ever do an early university mathematics class, or read a textbook, where it started to emphasise the difference between a linear transformation and matrices — where it was said, for example, that a matrix could
represent a linear transformation, but was not the same as a linear transformation? The situation with CNOT in quantum computation is one example of how this distinction is meaningful. The CNOT is a transformation of a physical system, not of column vectors; the standard basis states are just one basis of a physical system, which we conventionally represent by $\{0,1\}$ column vectors.
What if we were to choose to represent a different basis — say, the X eigenbasis — by $\{0,1\}$ column vectors, instead? Suppose that we wish to represent $$\begin{aligned}\ket{++} \to{}& [\, 1 \;\; 0 \;\; 0 \;\; 0 \,]^\dagger\,,\\\ket{+-} \to{}& [\, 0 \;\; 1 \;\; 0 \;\; 0 \,]^\dagger\,,\\\ket{-+} \to{}& [\, 0 \;\; 0 \;\; 1 \;\; 0 \,]^\dagger\,,\\\ket{--} \to{}& [\, 0 \;\; 0 \;\; 0 \;\; 1 \,]^\dagger \,.\end{aligned}$$This is a perfectly legitimate choice mathematically, and because it is only a notational choice, it doesn't affect the physics — it only affects the way that we would write the physics. It is not uncommon in the literature to do analysis in a way equivalent to this (though it is rare to explicitly write a different convention for column vectors as I have done here). We would have to represent the standard basis vectors by:$$ \ket{00} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}}\,,\quad\ket{01} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ -1 \\ 1 \\ -1 \end{bmatrix}}\,,\quad\ket{10} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ 1 \\ -1 \\ -1 \end{bmatrix}}\,,\quad\ket{11} \to \tfrac{1}{2}\,{\scriptstyle \begin{bmatrix} 1 \\ -1 \\ -1 \\ 1 \end{bmatrix}}\,.$$Again, we're using the column vectors on the right
only to represent the states on the left. But this change in representation will affect how we want to represent the CNOT gate.
A sharp-eyed reader may notice that the vectors which I have written on the right just above are the columns of the usual matrix representation of $H \otimes H$. There is a good reason for this: what this change of representation amounts to is a change of reference frame in which to describe the states of the two qubits. In order to describe $\ket{++} = [\, 1 \;\; 0 \;\; 0 \;\; 0 \,]^\dagger$, $\ket{+-} = [\, 0 \;\; 1 \;\; 0 \;\; 0 \,]^\dagger$, and so forth, we have changed our frame of reference for each qubit by a rotation which is the same as the usual matrix representation of the Hadamard operator — because that same operator interchanges the $X$ and $Z$ observables, by conjugation.
This same frame of reference will apply to how we represent the CNOT operation, so in this shifted representation, we would have$$\begin{aligned}\mathrm{CNOT} \to \tfrac{1}{4}{}\,{\scriptstyle\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}\,\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{bmatrix}\,\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix}}\,=\,{\scriptstyle\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}}\end{aligned}$$which — remembering that the columns now represent $X$ eigenstates — means that the CNOT performs the transformation$$ \begin{aligned}\mathrm{CNOT}\,\ket{++} &= \ket{++} , \\\mathrm{CNOT}\,\ket{+-} &= \ket{--}, \\\mathrm{CNOT}\,\ket{-+} &= \ket{-+} , \\\mathrm{CNOT}\,\ket{--} &= \ket{+-} .\end{aligned} $$Notice here that it is
only the first, 'control' qubits whose state changes; the target is left unchanged.
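A quick way to verify the conjugated matrix above (my own check, not part of the original answer) is to compute $(H\otimes H)\,\mathrm{CNOT}\,(H\otimes H)$ numerically:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
HH = np.kron(H, H)                             # change of basis on both qubits

# CNOT expressed in the X eigenbasis:
print(np.round(HH @ CNOT @ HH).astype(int))
# [[1 0 0 0]
#  [0 0 0 1]
#  [0 0 1 0]
#  [0 1 0 0]]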
Now, I could have shown this same fact a lot more quickly without all of this talk about changes in reference frame. In introductory courses in quantum computation in computer science, a similar phenomenon might be described without ever mentioning the words 'reference frame'. But I wanted to give you more than a mere calculation. I wanted to draw attention to the fact that a CNOT is in principle not just a matrix; that the standard basis is not a special basis; and that when you strip these things away, it becomes clear that the operation realised by the CNOT clearly has the potential to affect the state of the control qubit, even if the CNOT is the only thing you are doing to your qubits.
The very idea that there is a 'control' qubit is one centered on the standard basis, and embeds a prejudice about the states of the qubits that invites us to think of the operation as one-sided. But as a physicist, you should be deeply suspicious of one-sided operations.
For every action there is an equal and opposite reaction; and here the apparent one-sidedness of the CNOT on standard basis states is belied by the fact that, for X eigenbasis states, it is the 'target' which unilaterally determines a possible change of state of the 'control'.
You wondered whether there was something at play which was only a mathematical convenience, involving a choice of notation. In fact, there is: the way in which we write our states with an emphasis on the standard basis, which may lead you to develop a
non-mathematical intuition of the operation only in terms of the standard basis. But change the representation, and that non-mathematical intuition goes away.
The same thing which I have sketched for the effect of CNOT on X-eigenbasis states, is also going on in phase estimation, only with a different transformation than CNOT. The 'phase' stored in the 'target' qubit is kicked up to the 'control' qubit, because the target is in an eigenstate of an operation which is being coherently controlled by the first qubit. On the computer science side of quantum computation, it is one of the most celebrated phenomena in the field. It forces us to confront the fact that the standard basis is only special in that it is the one we prefer to describe our data with — but not in how the physics itself behaves.
|
I have multiple different log sums that I need to evaluate. How would I calculate the following without using a calculator or log tables?
Define the function $f:(0,1) \rightarrow \mathbb{R}$ by $f(x) = -(x \log(x) + (1-x) \log(1-x))$.
Note that $$f(x) = f(1-x).$$ Then $$f'(x) = \log(1-x) - \log(x)$$ and $$f''(x) = -\left(\frac{1}{x}+\frac{1}{1-x}\right).$$ It follows from this that $f$ attains its maximum at $x=\frac{1}{2}$ and is strictly increasing on $x<\frac{1}{2}$.
Your problem is (presumably) to compare $x = f(\frac{2}{5})$, $y = f(\frac{1}{5})$ and $z = f(\frac{2}{5})$. Note that $\frac{1}{5} < \frac{2}{5} < \frac{1}{2}$. Hence $y < x = z$.
Here's another way to show that $x>y$ that doesn't require a calculator - only rules of logarithms and the inequality $4>3$. Notice that $$ x = -\frac25(\log 2-\log 5) - \frac35(\log 3-\log 5) = \log 5 - \frac25\log 2 - \frac35\log 3, $$ and similarly $y = \log 5 - \frac45\log 4 = \log 5 - \frac85\log 2$. Therefore the following inequalities are all equivalent to one another: $$ x>y $$
$$ \log 5 - \frac25\log 2 - \frac35\log 3 > \log 5 - \frac85\log 2 $$
$$ \frac25\log 2 + \frac35\log 3 < \frac85\log 2 $$
$$ \frac35\log 3 < \frac65\log 2 $$
$$ \log 3 < 2\log 2 $$
$$ 3 < 2^2. $$
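As a purely numerical sanity check (my addition; the base of the logarithm does not matter for the comparison, since changing it scales both values by the same factor):

from math import log

def f(p):
    # binary/natural entropy of a two-outcome distribution (p, 1-p)
    return -(p * log(p) + (1 - p) * log(1 - p))

print(f(2/5))  # ~0.673  (this is x, up to the choice of log base)
print(f(1/5))  # ~0.500  (this is y), so indeed x > y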
|
(See EDIT #3 below for an explicit formula, and EDIT #2 for a solution comparing Dirichlet coefficients)
Answer
19.03.15 20:20
We can get the first few terms in Mathematica in a straightforward manner as follows.
Let
z[s_]:=Zeta[3s]/Zeta[s]
t[s_,m_]:= Sum[f[n]/n^s,{n,1,m}]
Then we have
z[s] = f[1] + f[2]/2^s + f[3]/3^s + ...
Hence
f[1] = Limit[z[s], s -> \[Infinity]]
(*
Out[35]= 1
*)
f[2] = Limit[2^s (z[s] - 1), s -> \[Infinity]]
(*
Out[5]= -1
*)
f[3] = Limit[3^s (z[s] - t[s, 2]), s -> \[Infinity]]
(*
Out[9]= 0
*)
Continuing this procedure in Mathematica leads to
f[3] = f[4] = ... = 0. But this is obviously wrong.
We can see what's happening here by plotting the functions
g[3] = 3^s (z[s] - t[s, 2])
g[4] = 4^s (z[s] - t[s, 3])
g[n_]:=n^s (z[s] - t[s,n-1])
in the range from
s = 0 to s ~= 25. Starting from n = 4 we see strong oscillations due to inaccuracy but we can still guess the true result of the limit up to n = 10.
Here are some typical plots
This way I came up with the sequence
f = {1, -1, -1, 0, -1, 1, -1, 1, 0, 1}
After n = 10 the accuracy decreases drastically, and I stopped.
Then I looked up the OEIS database, and I found several entries. The most interesting is A210826.
Here we find the comment:
Conjecture: this is a multiplicative sequence with Dirichlet g.f. zeta(3s)/zeta(s)
And - believe it or not, dear Geoffrey - your own MATHEMATICA entry as of TODAY:
Mod[Table[DivisorSigma[0, n], {n, 1, 100}], 3, -1]
(* Geoffrey Critzer, Mar 19 2015 *)
Very nice, but I would greatly appreciate to see your proof.
EDIT #1
The procedure does not give reasonable results for Zeta[4s]/Zeta[s]. I don't know if this is a question of accuracy already for n=4 or if Zeta[4s]/Zeta[s] can be written in the form of t[s] at all.
EDIT #2
24.03.15
Solution by direct computation, i.e. comparing "Dirichlet"-powers n^-x of the two expressions Zeta[k*x] and Zeta[x] * Sum[f[n]/n^x,{n,1,oo}]
fqZeta[k_, nn_] := Module[{z, d, x, g, eqs, sol, t},
z[x_, p_] := Sum[1/n^x, {n, 1, p}];
d[x_, p_] := Sum[f[n]/n^x, {n, 1, p}];
g[k] = (z[k x, nn] - z[x, nn]*d[x, nn] // Expand) /.
a_^( c_ b_) -> Simplify[(a^b)]^c;
eqs[k] =
Table[0 == (1/m^-x Plus @@ Cases[g[k], _. m^-x]) // Simplify, {m, 2, nn}];
sol[k] = Solve[Join[{f[1] == 1}, eqs[k]]][[1]];
t[k] = Table[f[n], {n, 1, nn}] /. sol[k]]
Example
fqZeta[4, 25]
(*
Out[385]= {1, -1, -1, 0, -1, 1, -1, 0, 0, 1, -1, 0, -1, 1, 1, 1, -1, \
0, -1, 0, 1, 1, -1, 0, 0}
*)
No specific problems for large k found. Here's
k = 9:
dt89 = Timing[fqZeta[9, 400] - fqZeta[8, 400]];
dt89[[1]]
(*
Out[370]= 73.663
*)
Fairly quick and
Select[dt89[[2]], # != 0 &]
(*
Out[371]= {-1}
*)
different from k = 8 at
Position[dt89[[2]], -1]
(*
{{256}}
*)
EDIT #3: Explicit formula
28.03.15
The formula
We give an explicit formula for the Dirichlet coefficient $a(k,n)$ of $\zeta(kx)/\zeta(x)$ for integer $k>0$.

The defining identity for $a(k,n)$ is

$$\frac{\zeta(kx)}{\zeta(x)} = \sum_{n\ge 1} \frac{a(k,n)}{n^x}$$

and we have

$$a(k,n) = \sum_{d^k \mid n} \mu\!\left(\frac{n}{d^k}\right)$$

where $\mu(n)$ is the Moebius function.
Proof
To prove the formula we need two ingredients
1) the Dirichlet series of $1/\zeta(x)$, which is

$$\frac{1}{\zeta(x)} = \sum_{n\ge 1} \frac{\mu(n)}{n^x}$$

2) the Dirichlet series of a product of two Dirichlet series

$$\left(\sum_{n\ge 1} \frac{u(n)}{n^x}\right)\left(\sum_{m\ge 1} \frac{v(m)}{m^x}\right) = \sum_{r\ge 1} \frac{w(r)}{r^x}$$

where $w(r) = \sum_{d\mid r} u(d)\, v(r/d)$
Both relations are standard number theory knowledge, and I'll leave it to the reader as a nice exercise to derive them by himself.
Mathematica
In Mathematica we can write the Dirichlet coefficient as
a[k_, n_] :=
Plus @@ (MoebiusMu[n/#^k] & /@ Select[(Divisors[n])^(1/k), IntegerQ[#] &])
Examples
Table[a[1, n], {n, 1, 10}]
(*
Out[390]= {1, 0, 0, 0, 0, 0, 0, 0, 0, 0}
*)
Table[a[2, n], {n, 1, 10}]
(*
Out[391]= {1, -1, -1, 1, -1, 1, -1, -1, 1, 1}
*)
Note that $a(2,n)$ is the Liouville function $\lambda(n)$.
Table[Print[{k,
Table[ToString[a[k, n]] /. {"0" -> " 0", "1" -> " 1"}, {n, 1,
32}]}], {k, 2, 6}];
{2,{ 1,-1,-1, 1,-1, 1,-1,-1, 1, 1,-1,-1,-1, 1, 1, 1,-1,-1,-1,-1, 1, 1,-1, 1, 1, 1,-1,-1,-1,-1,-1,-1}}
{3,{ 1,-1,-1, 0,-1, 1,-1, 1, 0, 1,-1, 0,-1, 1, 1,-1,-1, 0,-1, 0, 1, 1,-1,-1, 0, 1, 1, 0,-1,-1,-1, 0}}
{4,{ 1,-1,-1, 0,-1, 1,-1, 0, 0, 1,-1, 0,-1, 1, 1, 1,-1, 0,-1, 0, 1, 1,-1, 0, 0, 1, 0, 0,-1,-1,-1,-1}}
{5,{ 1,-1,-1, 0,-1, 1,-1, 0, 0, 1,-1, 0,-1, 1, 1, 0,-1, 0,-1, 0, 1, 1,-1, 0, 0, 1, 0, 0,-1,-1,-1, 1}}
{6,{ 1,-1,-1, 0,-1, 1,-1, 0, 0, 1,-1, 0,-1, 1, 1, 0,-1, 0,-1, 0, 1, 1,-1, 0, 0, 1, 0, 0,-1,-1,-1, 0}}
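As an independent cross-check of the explicit formula (my own sketch in plain Python, not part of the original answer; mobius() is implemented by trial-division factorization):

def mobius(n):
    """Moebius function mu(n)."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # squared prime factor -> mu = 0
                return 0
            result = -result
        p += 1
    if n > 1:                   # one leftover prime factor
        result = -result
    return result

def a(k, n):
    """Dirichlet coefficient of zeta(k*x)/zeta(x): sum of mu(n/d^k) over d with d^k | n."""
    total, d = 0, 1
    while d ** k <= n:
        if n % (d ** k) == 0:
            total += mobius(n // d ** k)
        d += 1
    return total

print([a(2, n) for n in range(1, 11)])  # 1,-1,-1, 1,-1, 1,-1,-1, 1, 1  (Liouville)
print([a(3, n) for n in range(1, 11)])  # 1,-1,-1, 0,-1, 1,-1, 1, 0, 1

The two printed rows agree with the first ten entries of the k = 2 and k = 3 rows of the table above.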
Remarks
1) For small k the series are in OEIS (https://oeis.org/)
k = 2
A008836 Liouville's function $\lambda(n) = (-1)^k$, where k is the number of primes dividing n (counted with multiplicity), N. J. A. Sloane
k = 3
A210826 G.f.: $\sum_{n\ge 1} a(n)\,x^n/(1 - x^n) = \sum_{n\ge 1} x^{n^3}$, Paul D. Hanna, Mar 27 2012
k = 4
A219009 Coefficients of the Dirichlet series for zeta(4s)/zeta(s), Benoit Cloitre, Nov 09 2012
k = 5
A253206 Coefficients of the Dirichlet series for zeta(5x)/zeta(x), here the Mathematica formula for a[k,n] is given, Wolfgang Hintze, Mar 25 2015
2) Miscellaneous
a) Prove that $a(k,n) \in \{-1,0,1\}$
|
Monomial symmetric polynomials on $n$ variables $x_1, \ldots x_n$ form a natural basis of the space $\mathcal{S}_n$ of symmetric polynomials on $n$ variables and are defined by additive symmetrization of the function $x^{\lambda} = x_1^{\lambda_1} x_2^{\lambda_2} \ldots x_n^{\lambda_n}$. Here $\lambda$ is a sequence of $n$ nonnegative numbers, arranged in non-increasing order, hence can also be viewed as partition of some integer with number of parts $l(\lambda) \le n$.
Power sum polynomials $p_\lambda$ on $n$ variables also form a basis for $\mathcal{S}_n$
and are defined as $p_\lambda = \prod_{i=1}^n p_{\lambda_i}$, where $p_r = \sum_{i=1}^n x_i^r$.
Schur functions $s_\lambda$ (polynomials) form a basis of the space of symmetric polynomials, indexed by partitions $\lambda$ of at most $n$ parts, and are characterized uniquely by two properties:
$\langle s_\lambda, s_\mu \rangle = 0$ when $\lambda \neq \mu$, where the inner product is defined on the power sum basis by $\langle p_\lambda, p_\mu \rangle = \delta_{\lambda,\mu} z_\lambda$, and $z_\lambda = \prod_{i=1}^n i^{\alpha_i} \alpha_i !$, where $\alpha_i$ is the number of parts in $\lambda$ whose lengths all equal $i$. Notice $n!/z_\lambda$ is the size of the conjugacy class in the symmetric group $S_{\sum \lambda_i}$ whose cycle structure is given precisely by $\lambda$.
If one writes $s_\lambda$ as linear combination of $m_\mu$'s, then the $m_\lambda$ coefficient is $1$ and $m_\mu$ coefficients are all $0$ if $\mu > \lambda$, meaning the partial sums inequality $\sum_{i=1}^k \mu_i \ge \sum_{i=1}^k \lambda_i$ hold for all $k$ and is strict for at least one $k$. Thus one can say the transition matrix from Schur to monomial polynomial basis is upper triangular with $1$'s on the diagonal.
Jack polynomials generalize Schur polynomials in the theory of symmetric functions by replacing the inner product in the first characterizing condition above with $\langle p_\lambda, p_\mu \rangle = \delta_{\lambda, \mu} \alpha^{l(\lambda)} z_{\lambda}$. The second condition remains the same. It can be thought of as an exponential tilting of the Schur polynomials, and in fact it is intimately connected with the Ewens sampling distribution with parameter $\alpha^{-1}$, a 1-parameter probability measure on $S_n$ or on the set of partitions of $n$ that generalize the uniform measure and the induced measure on partitions respectively.
It turns out that the theory of Schur polynomials has connections with classical representation theory of the symmetric group $S_n$. For instance the irreducible characters of $S_n$ are related to the change of basis coefficient from Schur polynomials to power sum polynomials in the following way:
if we write $s_\lambda = \sum_{\mu} c_{\lambda,\mu} p_\mu$, then $$ \chi_\lambda(\mu) = c_{\lambda,\mu}\, z_\mu. $$
These are eigenfunctions of the so-called random transposition walk on $S_n$, when viewed as a walk on the space of partitions. The eigenfunctions of the actual random transposition walk on $S_n$ are proportional to the diagonal elements of $\rho$, $\rho$ ranges over all irreducible representations of $S_n$.
The characters $\chi_\lambda$ admit a natural generalization in the Jack polynomial setting: simply take the transition coefficients from the Jack polynomials to the power sum polynomials. These, when properly normalized, indeed give the eigenfunctions for the so-called Metropolized random transposition walk that converges to the Ewens sampling distribution, which is an exponentially tilted 1-parameter generalization of the uniform measure on $S_n$.
My question is, what is the analogue of the diagonal enties of the representations of $\rho$ in the Jack case? Certainly they will be functions on $S_n$.
|
$\newcommand{\Q}{\Bbb Q} \newcommand{\N}{\Bbb N} \newcommand{\R}{\Bbb R} \newcommand{\Z}{\Bbb Z} \newcommand{\C}{\Bbb C} \newcommand{\F}{\Bbb F} \newcommand{\p}{\mathfrak{p}} $ Let $A$ be an abelian variety over a number field $F$. It is expected that the $L$-function of $A$ has analytic continuation to $\Bbb C$ and satisfies a functional equation relating $s$ to $2-s$. In that setting, the (generalized) Birch–Swinnerton-Dyer conjecture states that $$\mathrm{ord}_{s=1}(L(A_{/F},s)) = \mathrm{rk}_{\Z}(A(F)) =: r.$$
Originally, the conjecture for an
elliptic curve $E$ over $\Q$ was$$\exists C>0,\quad \prod_{p \leq x} \dfrac{|E(\F_p)|}{p} \sim C \;\mathrm{log}(x)^r \qquad (x \to \infty).$$
My question is to know what is the analogue of the original conjecture, in the framework of abelian varieties over number fields.
My first guess would to replace to LHS by $$\prod_{N(\p) \leq x} L_{\p}(A_{/F}, N(\p)^{-1}),$$ where $L_{\p}(A_{/F},s)$ is the local factor of the L-function of $A$ at $\p$. But I'm not sure what the RHS should be. Typically, how does it depend on the dimension of $A$ or on the degree of the number field?
|
Given a group $G$, with its binary ("product") and unary ("inverse") operations:$$\begin{equation}\begin{split}\circ&\colon G^2\to G\\\operatorname{inv}&\colon G\to G\end{split}\end{equation}\tag{1}$$you can consider the restrictions of them on $H^2$ and $H$ respectively, where $H\subset G$:$$\begin{equation}\begin{split}\circ_{H^2}&\colon H^2\to G&;~(h_1,h_2)&\mapsto h_1\circ h_2\\\operatorname{inv}_H&\colon H\to G&;~h&\mapsto\operatorname{inv}(h)\end{split}\end{equation}\tag{2}$$
When you ask whether $H$ is a subgroup of $G$, you are asking whether $H$ is a group with the group structure induced by that of $G$, that is, with binary and unary operations pointwise coincident with those of $G$ on $H^2$ and $H$ respectively, that is, with binary and unary operations pointwise coincident with the restrictions $(2)$ of the corresponding operations of $G$. In the end you want to know whether there exist corestrictions to $H$ of $(2)$'s, or directly birestrictions of $(1)$'s to the pairs $(H^2,H)$ and $(H,H)$ respectively:$$\begin{equation}\begin{split}\circ_{H^2}^{H}&\colon H^2\to H&;~(h_1,h_2)&\mapsto h_1\circ h_2\\\operatorname{inv}_H^{H}&\colon H\to H&;~h&\mapsto\operatorname{inv}(h)\end{split}\end{equation}\tag{3}$$
These existence conditions are called closure of $H$ under product and inversion.
There remain to be proved associativity, the existence of identity and its equality to the identity of $G$. But the associativity is a pointwise property of $\circ$ that
a fortiori holds when $\circ$ is restricted or birestricted. Moreover if the identity of $H$ exists it must be equal to that of $G$ because the identity of $G$ is defined by a pointwise property of $\operatorname{inv}$ that a fortiori holds when $\operatorname{inv}$ is restricted or birestricted.
But you have no way to prove for the general case that the identity of $H$ exists unless you add to your hypotheses that $H$
must be non-empty.
This last was the only important thing missing in your reasoning.
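As an illustration of the closure-plus-nonemptiness test for finite groups given explicitly (my own sketch, not part of the original answer):

def is_subgroup(H, op, inv):
    """Check whether a finite subset H of a group is a subgroup,
    given the group's product op(a, b) and inverse inv(a)."""
    if not H:                      # the non-emptiness hypothesis
        return False
    H = set(H)
    closed_product = all(op(a, b) in H for a in H for b in H)
    closed_inverse = all(inv(a) in H for a in H)
    return closed_product and closed_inverse

# Example: integers mod 6 under addition; {0, 2, 4} is a subgroup, {1, 3, 5} is not.
op = lambda a, b: (a + b) % 6
inv = lambda a: (-a) % 6
print(is_subgroup({0, 2, 4}, op, inv))  # True
print(is_subgroup({1, 3, 5}, op, inv))  # False (not closed: 1 + 5 = 0 is missing)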
|
The extensional version of Intuitionistic Type Theory is usually formulated in a way that makes extensional concepts like functional extensionality derivable. In particular, equality reflection, together with $\xi$- and $\eta$-rules for $\Pi$ types are enough to get the standard formulation of $\textsf{funext}$
$\Pi_{x \in A}\textsf{Eq}(B(x), f x, g x) \implies \textsf{Eq}(\Pi_{x \in A}B(x), f, g)$
where $\textsf{Eq}$ is the identity type with rules of reflection and uniqueness of identity proofs (see page 61 of M. Hofmann, Extensional Constructs in Intensional Type Theory).
But what if $\eta$ is not assumed? In particular, consider a standard intensional Martin-Löf type theory with $\Pi$ formulated with $\xi$-rule and "elimination as application", and to which we only add an extensional identity type $\textsf{Eq}$ as described above. What is the power of the resulting theory, in terms of extensional constructs (like functional extensionality) that can be derived in it? It seems to me that neither $\eta$ nor $\textsf{funext}$ should be derivable, although we can surely get to a weaker version using equality reflection and $\xi$:
$\Pi_{x \in A}\textsf{Eq}(B(x), f x, g x) \implies \textsf{Eq}(\Pi_{x \in A}B(x), \lambda x . f x, \lambda x . g x)$
(so, from $\eta$ we could get to $\textsf{funext}$, and obviously vice versa). Here R. Garner shows that the $\eta$ rule is not derivable if $\Pi$ types are given with "elimination as application". He does that for an intensional theory, but the same argument should be applicable in the presence of $\textsf{Eq}$ too, I think.
Are my suspicions correct? Are there any proofs of this in the literature, and in general any investigations on the kind of extensional constructs that can be derived in such minimal versions of ETT? What do we gain by only adding $\textsf{Eq}$, in the presence of such a "limited" $\Pi$ type (no $\eta$ equality, and no induction principle)?
|
Let $G$ be a split reductive algebraic group over an arbitrary field $k$. Suppose we have a split maximal torus $T$. There is a short exact sequence of groups$$1\to \mathrm{Inn}(G)\to \mathrm{Aut}(G)\to \mathrm{Out}(G)\to 1,$$where $\mathrm{Inn}(G) = G^{\mathrm{ad}}(k)$. This sequence splits in various ways, one choice for each
pinning, which is a choice of a base $\Delta$ for the root system for $(G,T)$, along with isomorphisms $\mathbb{G}_a\to U_\alpha$ for each $\alpha\in\Delta$. After this choice, automorphisms in the image of the splitting map $\mathrm{Out}(G)\to\mathrm{Aut}(G)$ fix the torus $T$.
Now given an automorphism $\theta:G\to G$, we define the set $\mathrm{Aut}(G,\theta)$ to be the automorphisms of $G$ commuting with $\theta$, and let $\mathrm{Inn}(G,\theta)$ be the inner automorphisms in $G^{\mathrm{ad}}(k)$ that commute with $\theta$. Let's assume $T$ to be $\theta$-stable; i.e. $\theta(T) = T$. We get a short exact sequence $$ 1\to \mathrm{Inn}(G,\theta)\to \mathrm{Aut}(G, \theta)\to \mathrm{Out}(G,\theta)\to 1 $$
Question: Is it possible in general to choose a splitting (i.e. a section $\mathrm{Out}(G,\theta)\to \mathrm{Aut}(G,\theta)$) such that the image of this section consists of automorphisms that preserve the fixed maximal torus $T$?
(Note that for any $\varphi\in \mathrm{Aut}(G,\theta)$, the torus $\varphi(T)$ is also $\theta$-stable.) I feel like in special cases a splitting is possible, mainly when $\mathrm{Out}(G,\theta)$ is small, but I hope that there is some nice general answer to this question, or even something under more restrictive hypotheses on things such as: type of automorphism, order of automorphism, base field, etc.
|
Journal of Commutative Algebra, Volume 8, Number 1 (2016), 89-111.

A criterion for isomorphism of Artinian Gorenstein algebras

Abstract
Let $A$ be an Artinian Gorenstein algebra over an infinite field~$k$ of characteristic either 0 or greater than the socle degree of $A$. To every such algebra and a linear projection $\pi $ on its maximal ideal $\mathfrak {m}$ with range equal to the socle $\Soc (A)$ of $A$, one can associate a certain algebraic hypersurface $S_{\pi }\subset \mathfrak {m}$, which is the graph of a polynomial map $P_{\pi }:\ker \pi \to \Soc (A)\simeq k$. Recently, the following surprising criterion has been obtained: two Artinian Gorenstein algebras $A$, $\widetilde {A}$ are isomorphic if and only if any two hypersurfaces $S_{\pi }$ and $S_{\tilde {\pi }}$ arising from $A$ and $\widetilde {A}$, respectively, are affinely equivalent. The proof is indirect and relies on a geometric argument. In the present paper, we give a short algebraic proof of this statement. We also discuss a connection, established elsewhere, between the polynomials $P_{\pi }$ and Macaulay inverse systems.
Isaev, A.V. A criterion for isomorphism of Artinian Gorenstein algebras. J. Commut. Algebra 8 (2016), no. 1, 89--111. doi:10.1216/JCA-2016-8-1-89. https://projecteuclid.org/euclid.jca/1459169547
|
I am having trouble understanding a mapping reduction and I would appreciate your help. Define
$\quad \begin{align} A_{TM} &= \{ \langle M, w \rangle \mid M \text{ Turing machine}, w \in \mathcal{L}(M)\} \\ S_{TM} &= \{ \langle M,w \rangle \mid M \text{ Turing machine}, w \in \mathcal{L}(M) \implies w^R \in \mathcal{L}(M)\} \\ \end{align}$
and consider the reduction of $A_{TM}$ to $S_{TM}$ as follows.
Given $\langle M, w \rangle$ the following Turing machine $M'$ is defined:
M' on input x:
    if x = 01 then accept
    else run M on w and accept x if M accepts w
I don't understand the reduction entirely; a reduction of $A_{TM}$ to $S_{TM}$ should let us solve $A_{TM}$ using $S_{TM}$. Why do I need to check whether x = 01? Is there no need to check anything about the reverse of $w$? How is that covered by the reduction?
|
Normalized compression distance (NCD) is a way of measuring the similarity between two sequences.
A compression algorithm looks for patterns and repetitions in the input sequence to compress it. For example, instead of “abababab” we can say “(ab)x4”. This is how RLE works, and it is the simplest and most illustrative example of compression. The main idea of NCD is that a good compression algorithm compresses the concatenation of two sequences better the more similar they are---much better than compressing each sequence separately.
$$ NCD_{Z}(x,y)={\frac {Z(xy)-\min\{Z(x),Z(y)\}}{\max\{Z(x),Z(y)\}}}. $$
Here x and y are the input sequences, and Z(x) is the size of compressed x.
So, how it works:
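(A minimal end-to-end sketch, using Python's standard zlib module as the compressor Z; this illustration is mine, not from the original post.)

import zlib

def Z(data: bytes) -> int:
    # size in bytes of the zlib-compressed data
    return len(zlib.compress(data))

def ncd(x: bytes, y: bytes) -> float:
    zx, zy, zxy = Z(x), Z(y), Z(x + y)
    return (zxy - min(zx, zy)) / max(zx, zy)

print(ncd(b'abababab' * 100, b'abababab' * 100))      # similar inputs give a small value
print(ncd(b'abababab' * 100, bytes(range(256)) * 4))  # dissimilar inputs give a value near 1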
Ok, but what is Z? This is the size of the data compressed by a normal compressor (C). Yeah, you can use any compression algorithm, but for non-normal compressors you will get strange and non-comparable results. So, a normal compressor has these properties:
1. Idempotency: C(xx) = C(x). Without it, you wouldn’t get 0 for the same sequences, because Z(xx) - Z(x) ≠ 0.
2. Monotonicity: C(xy) ≥ C(x). If you can find C(xy) < C(x), then Z(xy) - min(Z(x), Z(y)) will be less than zero, and the whole NCD will be less than zero.
3. Symmetry: C(xy) = C(yx). Without it, NCD(xy) ≠ NCD(yx). You can ignore this property if you change Z(xy) to min(Z(xy), Z(yx)) in the formula.
4. Distributivity: C(xy) + C(z) ≤ C(xz) + C(yz). I’m not sure about this property. I guess it shows that compression really works and makes the compressed data no larger than the input sequence. Also, I think there should be Z instead of C (as for the “Symmetry” property). So, we can say it more simply: Z(xy) ≤ Z(x) + Z(y).
So, none of the real-world compressors really works for NCD:

1. Idempotency fails: a typical compressor cannot fully exploit the fact that the x sequence appears twice in the input data, so C(xx) > C(x).
2. Symmetry fails: for RLE, C("abb" + "bbc") = "ab4c" and C("bbc" + "abb") = "b2cab2".
3. A real compressor returns a Z that equals the size in bytes of the compressed data, and this discretization makes it more difficult to distinguish between short sequences.
So, what can we use? In the original paper, the authors used real compressors like Zlib, because these properties approximately hold for really big and quite random sequences. But can we do better?
Entropy shows how much information a given character carries in the given alphabet. For example, if you’re playing the “guess the word” game and know that the word starts with “e”, that is not very informative, because too many English words start with “e”. On the other hand, if you know that the word starts with “x”, then you only need to try a few words to win (my guess would be “x-ray”).
So, we can calculate entropy for any letter in the alphabet (or element in a sequence):
$$ S=-\sum _{i}P_{i}\log_{2} {P_{i}} $$
Let’s calculate entropy for sequence “test”:
$$ S=(-{\frac {2}{4}}\log_{2}{\frac {2}{4}})[t] + (-{\frac {1}{4}}\log_{2}{\frac {1}{4}})[e] + (-{\frac {1}{4}}\log_{2}{\frac {1}{4}})[s] = \frac {2}{4} + \frac {2}{4} + \frac {2}{4} = 1.5 $$
Entropy encoding is a family of compression algorithms that compress data by increasing the entropy of the encoded sequence. A sequence with low entropy has a lot of redundancy, so we can encode the message more compactly. For example, we can replace every bigram in the text with a new code and give the most frequent bigrams (“th” in English) the shortest codes. This is how Huffman coding works. So, the entropy of a sequence is proportional to the size of the compressed data, because a sequence with lower entropy can be compressed better.
If we want to use entropy as Z in NCD, there is one issue to solve. The entropy can be 0, so we could run into division by zero in the NCD formula. To avoid this we can add 1 to every entropy value. This doesn't affect the numerator (because we subtract one Z from another), but it keeps the denominator away from zero. It shifts the values of Z but preserves all properties of a normal compressor.
Also, we can patch the NCD formula a little to compare more than 2 sequences:
$$NCD_{Z}(x_1,\dots,x_n)={\frac {Z(x_1 \cdots x_n)-(n-1)\min\{Z(x_1),\dots,Z(x_n)\}}{\max\{Z(x_1),\dots,Z(x_n)\}}},$$
where n is the number of sequences and $x_1 \cdots x_n$ is their concatenation.
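Before looking at the real implementation, here is a rough sketch of how such an entropy-based NCD could be put together, reusing the entropy function defined above. It is a simplified stand-in, not the actual textdistance code, but it already reproduces the small examples shown below:

def entropy_based_ncd(*sequences):
    # Simplified entropy-based NCD for two or more string sequences.
    n = len(sequences)
    # the +1 shift keeps the denominator away from zero
    sizes = [entropy(s) + 1 for s in sequences]
    concat = entropy("".join(sequences)) + 1
    return (concat - (n - 1) * min(sizes)) / max(sizes)

print(entropy_based_ncd("text", "test"))  # 0.1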
I’ve implemented Entropy-based NCD in the textdistance Python library. Let’s get it and have a look at the results for different synthetic input sequences.
>>> from textdistance import entropy_ncd
Identical sequences have distance 0, totally different ones have distance 1:
>>> entropy_ncd('a', 'a')
0.0
>>> entropy_ncd('a', 'b')
1.0
>>> entropy_ncd('a', 'a' * 40)
0.0
More differences – higher distance:
>>> entropy_ncd('text', 'text')
0.0
>>> entropy_ncd('text', 'test')
0.1
>>> entropy_ncd('text', 'nani')
0.4
Element order and repetitions don't matter:
>>> entropy_ncd('test', 'ttse')
0.0
>>> entropy_ncd('test', 'testsett')
0.0
Distance depends on the size difference between strings:
>>> entropy_ncd('a', 'bc')
0.792481250360578
>>> entropy_ncd('a', 'bcd')
0.7737056144690833
>>> entropy_ncd('a', 'bbb')
0.8112781244591329
>>> entropy_ncd('a', 'bbbbbb')
0.5916727785823275
>>> entropy_ncd('aaaa', 'bbbb')
1.0
Sometimes Entropy-based NCD gives non-intuitive results:
>>> entropy_ncd('a', 'abbbbbb')
0.5097015764645563
>>> entropy_ncd('a', 'aaaaaab')
0.34150514509881796
>>> entropy_ncd('aaaaaaa', 'abbbbbb')
0.6189891221936807
Let’s compare texts of licenses from choosealicense.com:
git clone https://github.com/github/choosealicense.com.git
We take the name of a license as a command-line argument, compare its text with the text of every other license, and sort the results by distance:
from itertools import islice
from pathlib import Path
from sys import argv

from textdistance import EntropyNCD

# read files
licenses = dict()
for path in Path('choosealicense.com', '_licenses').iterdir():
    licenses[path.stem] = path.read_text()

# show licenses list if no arguments passed
if len(argv) == 1:
    print(*sorted(licenses.keys()), sep='\n')
    exit(1)

# compare all with one
qval = int(argv[1]) if argv[1] else None
compare_with = argv[2]
distances = dict()
for name, content in licenses.items():
    distances[name] = EntropyNCD(qval=qval)(
        licenses[compare_with],
        content,
    )

# show 5 most similar
sorted_distances = sorted(distances.items(), key=lambda d: d[1])
for name, distance in islice(sorted_distances, 5):
    print('{:20} {:.4f}'.format(name, distance))
OK, let's have a look at which qval works better:
# calculate entropy for chars
$ python3 compare.py 1 gpl-3.0
gpl-3.0              0.0000
agpl-3.0             0.0013
osl-3.0              0.0016
cc0-1.0              0.0020
lgpl-2.1             0.0022

# calculate entropy for bigrams
$ python3 compare.py 2 gpl-3.0
gpl-3.0              0.0000
agpl-3.0             0.0022
bsl-1.0              0.0058
gpl-2.0              0.0061
unlicense            0.0065

# calculate entropy for words (qval=None)
$ python3 compare.py "" gpl-3.0
gpl-3.0              0.0000
agpl-3.0             0.0060
gpl-2.0              0.0353
lgpl-2.1             0.0381
epl-2.0              0.0677
Calculating entropy over words looks the most promising. Let's calculate it for some other licenses:
$ python3 compare.py "" mit
mit                  0.0000
bsl-1.0              0.0294
ncsa                 0.0350
unlicense            0.0372
isc                  0.0473

$ python3 compare.py "" bsd-3-clause
bsd-3-clause         0.0000
bsd-3-clause-clear   0.0117
bsd-2-clause         0.0193
ncsa                 0.0367
mit                  0.0544

$ python3 compare.py "" apache-2.0
apache-2.0           0.0000
ecl-2.0              0.0043
osl-3.0              0.0412
mpl-2.0              0.0429
afl-3.0              0.0435
distances = []
for name1, content1 in licenses.items():
    for name2, content2 in licenses.items():
        distances.append((name1, name2, EntropyNCD(qval=None)(content1, content2)))

import plotnine as gg
import pandas as pd

df = pd.DataFrame(distances, columns=['name1', 'name2', 'distance'])
(
    gg.ggplot(df)
    + gg.geom_tile(gg.aes(x='name1', y='name2', fill='distance'))
    # reverse colors
    + gg.scale_fill_continuous(
        palette=lambda *args: gg.scale_fill_continuous().palette(*args)[::-1],
    )
    + gg.theme(
        figure_size=(12, 8),                    # make chart bigger
        axis_text_x=gg.element_text(angle=90),  # rotate ox labels
    )
)
What we can see here: licenses from the same family sit close to each other, for example gpl-*, bsd-*, cc-by-*, epl-*, eupl-*, and ms-*.
The source code from this section is available in the textdistance repository.
|
I am trying to solve a set of DAEs.
\begin{equation} -4 \nu (\lambda(s))^{(-1 - 4 \nu)} \theta'(s) \lambda'(s) + (\lambda(s))^{(-4 \nu)} \theta''(s) = -\alpha_y \cos\theta(s) + \alpha_x \sin\theta(s) \end{equation}
\begin{equation} (\lambda(s))^{(-2 \nu)} \log(\lambda(s)) = f_s (\alpha_x \cos\theta(s) + \alpha_y \sin\theta(s)) \end{equation}
\begin{equation} \theta(0) = 0 \end{equation}
\begin{equation} \theta'(1) = \beta \end{equation}
where $\lambda$ and $\theta$ are two variables, varying over the range $s \in [0,1]$. $\alpha_x, \alpha_y, \beta, f_s, \nu$ are constants. When I try to solve them numerically using NDSolve, Mathematica gives me an error saying DAEs must be given as IVPs.
The code I use is given below
i[s] = (lambda[s])^(-4 nu)
i'[s] = D[i[s], s]
Eqn1 = theta''[s] i[s] + theta'[s] i'[s] == alphax Sin[theta[s]] - alphay Cos[theta[s]]
Eqn2 = (lambda[s])^(-2 nu) Log[lambda[s]] == fs*(alphax Cos[theta[s]] + alphay Sin[theta[s]])
BC1 = theta[0] == 0
BC2 = theta'[1] == beta
param = {alphax -> 0.1, alphay -> 0.1, beta -> 0.1, nu -> 0.3, fs -> 10^-6}
thetaSol = NDSolve[{Eqn1, Eqn2, BC1, BC2} /. param, {theta, lambda}, {s, 0, 1}]
If I could solve the second equation to obtain $\lambda(s)$ as a function of $\theta(s)$, I could eliminate the second equation and solve the system as a second-order ODE in $\theta(s)$ alone. I believe this sort of equation has a solution in terms of the Lambert W function.
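For reference, here is a sketch of that inversion (my own working, so please double-check): writing the second equation as $\lambda^{-2\nu}\log\lambda = c$ with $c = f_s(\alpha_x\cos\theta + \alpha_y\sin\theta)$ and substituting $u = \log\lambda$ gives
\begin{align}
u\, e^{-2\nu u} &= c, \\
(-2\nu u)\, e^{-2\nu u} &= -2\nu c, \\
-2\nu u &= W(-2\nu c), \\
\lambda(s) &= \exp\!\left(-\frac{W(-2\nu c)}{2\nu}\right),
\end{align}
which is real-valued as long as $-2\nu c \ge -1/e$.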
Can I use Mathematica to solve this system of equations?
|
The fractional Schrödinger equation with singular potential and measure data
1. Instituto de Matemática Interdisciplinar, Universidad Complutense de Madrid, Plaza de Ciencias 3, 28040 Madrid, Spain
2. Departamento de Matemáticas, Universidad Autónoma de Madrid, Calle Francisco Tomás y Valiente 7, 28049 Madrid, Spain
We consider the steady fractional Schrödinger equation $ L u + V u = f $ posed on a bounded domain $ \Omega $; $ L $ is an integro-differential operator, like the usual versions of the fractional Laplacian $ (-\Delta)^s $; $ V\ge 0 $ is a potential with possible singularities, and the right-hand side data are integrable functions or Radon measures. We reformulate the problem via the Green function of $ (-\Delta)^s $ and prove well-posedness for functions as data. If $ V $ is bounded or mildly singular, a unique solution of $ (-\Delta)^s u + V u = \mu $ exists for every Borel measure $ \mu $. On the other hand, when $ V $ is allowed to be more singular, but only on a finite set of points, a solution of $ (-\Delta)^s u + V u = \delta_x $, where $ \delta_x $ is the Dirac measure at $ x $, exists if and only if $ h(y) = V(y)\, |x - y|^{-(n+2s)} $ is integrable on some small ball around $ x $. We prove that the set $ Z = \{x \in \Omega : \text{no solution of } (-\Delta)^s u + Vu = \delta_x \text{ exists}\} $ is relevant in the following sense: a solution of $ (-\Delta)^s u + V u = \mu $ exists if and only if $ |\mu| (Z) = 0 $. Furthermore, $ Z $ is the set of points where the strong maximum principle fails, in the sense that for any bounded $ f $ the solution of $ (-\Delta)^s u + Vu = f $ vanishes on $ Z $.
Keywords: Nonlocal elliptic equations, bounded domains, Schrödinger operators, singular potentials, measure data.
Mathematics Subject Classification: 35R11, 35J10, 35D30, 35J67, 35J75.
Citation: David Gómez-Castro, Juan Luis Vázquez. The fractional Schrödinger equation with singular potential and measure data. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12): 7113-7139. doi: 10.3934/dcds.2019298
|
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of:We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search was inspired by last night's dream, which took place in an alternate version of my 3rd-year undergrad GR course. The lecturer talked about a weird equation in general relativity with a huge summation symbol, and then about gravitational waves emitted from a body. After that lecture, I asked the lecturer whether gravitational standing waves are possible, as I imagined the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken account of
The idea is that if the possible relaxations between energy levels is restricted so that to relax from an excited state, the bottleneck must be passed, then we have a very high entropy high energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations to give the same high energy, thus effectively create an entropy trap to minimise heat loss to surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear out confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I've thought you were mentioning at my questions directly to the close voter, not the question in meta. When you mention about my original post, you think that it's a hopeless mess of confusion? Why? Except being off-topic, it seems clear to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations.I think it's a safe asssumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be a multiple power of 3?Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer
|
The most frequently used evaluation metric of survival models is the concordance index (c index, c statistic). It is a measure of rank correlation between predicted risk scores $\hat{f}$ and observed time points $y$ that is closely related to Kendall’s τ. It is defined as the ratio of correctly ordered (concordant) pairs to comparable pairs. Two samples $i$ and $j$ are comparable if the sample with lower observed time $y$ experienced an event, i.e., if $y_j > y_i$ and $\delta_i = 1$, where $\delta_i$ is a binary event indicator. A comparable pair $(i, j)$ is concordant if the estimated risk $\hat{f}$ by a survival model is higher for subjects with lower survival time, i.e., $\hat{f}_i >\hat{f}_j \land y_j > y_i$, otherwise the pair is discordant. Harrell’s estimator of the c index is implemented in concordance_index_censored.
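For concreteness, here is a minimal sketch of calling that estimator on made-up toy arrays (the values below are illustrative only, not from any study):

import numpy as np
from sksurv.metrics import concordance_index_censored

# toy data: delta_i (event indicator), y_i (observed time), f_i (risk score)
event = np.array([True, False, True, True, False])
time = np.array([1.0, 3.0, 2.5, 4.0, 5.0])
risk = np.array([0.9, 0.3, 0.8, 0.4, 0.1])

cindex, concordant, discordant, tied_risk, tied_time = concordance_index_censored(
    event, time, risk)
print(cindex)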
While Harrell’s concordance index is easy to interpret and compute, it has some shortcomings:
it has been shown that it is too optimistic with an increasing amount of censoring [1],
it is not a useful measure of performance if a specific time range is of primary interest (e.g., predicting death within 2 years).
Since version 0.8, scikit-survival supports an alternative estimator of the concordance index from right-censored survival data, implemented in concordance_index_ipcw, that addresses the first issue.
The second point can be addressed by extending the well-known receiver operating characteristic curve (ROC curve) to possibly censored survival times. Given a time point $t$, we can estimate how well a predictive model can distinguish subjects who will experience an event by time $t$ (sensitivity) from those who will not (specificity). The function cumulative_dynamic_auc implements an estimator of the cumulative/dynamic area under the ROC curve for a given list of time points.
The first part of this post will illustrate the first issue with simulated survival data, while the second part will focus on the time-dependent area under the ROC applied to data from a real study.
|
Suppose that
$\mu_k$ is an increasing sequence of numbers such that $0 < \mu_1 \leq \mu_2 \leq ..$ with $\mu_k \to \infty$ as $k \to \infty$
$\sum_{k=1}^\infty |u_k|^2 < \infty$ and $\sum_{k=1}^\infty \sqrt{\mu_k}|u_k|^2 < \infty$ where $u_k$ is a given sequence of real numbers
I want to show that (if true) the sum $$\sum_{k=1}^\infty |u_k|^2 \mu_k \frac{\cosh(2\sqrt{\mu_k}(T-t))}{\sinh^2(\sqrt{\mu_k}T)}$$ is uniformly convergent in the variable $t \in [\epsilon, T]$ for $\epsilon > 0$? (in the previous version of this thread I forgot to exclude $t=0$.)
The problem is that the numerator contains an exponential term with the "wrong" sign.
The motivation is that I want to integrate this sum term by term, because it gives me a bound on a norm (the sum comes from a solution to a differential equation). Thanks for any help.
|
Let $(R, \mathfrak{m})$ be a commutative Noetherian complete local ring ($R$ can be regular, if you need). Let $E(R/\mathfrak{p})$ be the injective hull of $R/\mathfrak{p}$; if $\mathfrak{p}= \mathfrak{m}$ we simply write $E$.
Question 1: What is $Hom (E(R/\mathfrak{p}), E)$?
By the isomorphism $Hom (Hom (M,N),E) \cong M \otimes Hom (N, E)$, which holds provided $M$ is finitely generated, we can see that there is a duality between injective modules and flat modules.
Question 2: Is the Matlis duality between injective modules and flat modules a 1-1 correspondence, as it is between Noetherian and Artinian modules?
Question 3: Let $x$ be an element of $R$. Find an injective module $I$ such that $Hom (I, E) \cong R_x$?
EDIT: We denote $Hom(\bullet,E)$ by $D(\bullet)$. An $R$-module $M$ is called Matlis reflexive if $M \cong D(D(M))$. Noetherian and Artinian modules are Matlis reflexive. By (E. Enochs, Proc. AMS, 92 (1984), 179–184, Proposition 1.3) a module $M$ is Matlis reflexive iff there is a Noetherian submodule $L$ such that $M/L$ is Artinian. So the duality between injective and flat modules is not as good as the Noetherian–Artinian duality.
Discussion: Assume more that $(R, \mathfrak{m})$ is a Gorenstein domain of dimension one. Then there are exactly two irreducible injective modules. Namely, $E$ and $E(R) = Q$ the field of fractions. And $0 \to R \to Q \to E \to 0$ is the minimal injective resolution of $R$. Since $E$ is Artinian we have $Q$ is Matlis reflexive.
|
What is the cut rule? I don't mean the rule itself but an explanation of what it means and why are proof theorists always trying to eliminate it? Why is a cut-free system more special than one with cut?
Suppose I have a proof of B starting from assumption A. And a proof of C starting from assumption B. Then the cut rule says I can deduce C from assumption A.
But I didn't need the cut rule. If I was able to deduce B from A I could simply "inline" the proof of B from A directly into the proof of C from B to get a proof of C from A.
So the cut rule is redundant. That's a good reason to eliminate it.
But eliminating it comes at a price. The proofs become more complex. Here's a paper that quantifies just how much. So it's a hard rule to give up.
Cut elimination is indispensable for studying fragments of arithmetic. Consider for example the classical Parsons–Mints–Takeuti theorem:
Theorem If $I\Sigma_1\vdash\forall x\,\exists y\,\phi(x,y)$ with $\phi\in\Sigma^0_1$, then there exists a primitive recursive function $f$ such that $\mathrm{PRA}\vdash\forall x\,\phi(x,f(x))$.
The proof goes roughly as follows. We formulate $\Sigma^0_1$-induction as a sequent rule $$\frac{\Gamma,\phi(x)\longrightarrow\phi(x+1),\Delta}{\Gamma,\phi(0)\longrightarrow\phi(t),\Delta},$$ include the axioms of Q as extra initial sequents, and apply cut elimination to a proof of the sequent $\longrightarrow\exists y\,\phi(x,y)$ so that the only remaining cut formulas appear as principal formulas in the induction rule or in some axiom of Q. Since the other rules have the subformula property, all formulas in the proof are now $\Sigma^0_1$, and we can prove by induction on the length of the derivation that existential quantifiers in the succedent are (provably in PRA) witnessed by a primitive recursive function, given witnesses to the existential quantifiers in the antecedent.
Now, why did we need to eliminate cuts here? Because even if the sequent $\phi\longrightarrow\psi$ consists of formulas of low complexity (here: $\Sigma^0_1$), we could have derived it by a cut $$\frac{\phi\longrightarrow\chi\qquad\chi\longrightarrow\psi}{\phi\longrightarrow\psi}$$ where $\chi$ is an arbitrarily complex formula, and then the witnessing argument above breaks.
To give an example from a completely different area: cut elimination is often used to prove decidability of (usually propositional) non-classical logics. If you show that the logic has a complete calculus enjoying cut elimination and therefore subformula property, there are only finitely many possible sequents that can appear in a proof of a given formula. One can thus systematically list all possible proofs, either producing a proof of the formula, or showing that it is unprovable. Again, cut elimination is needed here to have a bound on the complexity of formulas appearing in the proof.
Sigfpe wrote above in his answer that cut elimination makes proofs more complex, but that's not actually true: cut elimination makes proofs longer, but more elementary; it eliminates complex concepts (formulas) from the proof. The latter is often useful, and it is the primary reason why so much time and energy is devoted to cut elimination in proof theory. In most applications of cut elimination one does not really care about having no cuts in the proof, but about having control over which formulas can appear in the proof.
Another reason is proof search. Consider that a rule like $\frac{\Gamma \vdash p \quad \Gamma \vdash q}{\Gamma \vdash p \wedge q}$ can be read as "to find a proof for $\Gamma \vdash p \wedge q$, it suffices to find proofs for $\Gamma \vdash p$ and $\Gamma \vdash q$". Since $p$ and $q$ are subformulas of $p \wedge q$, finding proofs for them is simpler.
For cut rule you get "to find a proof for $\Gamma \vdash p$, it suffices to find proofs for $\Gamma \vdash q$ and $\Gamma, q \vdash p$ for some formula $q$". The problem is, how can you choose which $q$ to use? It needn't be a subformula of $p$ or any formula from $\Gamma$. This makes proof search intractable.
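As a toy illustration of why the subformula property keeps backward search finite, here is a rough Python sketch for a fragment with only atoms and conjunction (the representation is made up); a cut rule would force the search to guess an arbitrary formula $q$ at each step, which this procedure never has to do:

def provable(gamma, goal):
    # Return True if the sequent gamma |- goal is derivable
    # in the atoms-and-conjunction fragment.
    if isinstance(goal, str):            # atom: must be an assumption
        return goal in gamma
    if goal[0] == 'and':                 # Gamma |- p /\ q: split the goal
        _, p, q = goal
        return provable(gamma, p) and provable(gamma, q)
    raise ValueError('unsupported connective')

print(provable({'p', 'q'}, ('and', 'p', ('and', 'q', 'p'))))  # True
print(provable({'p'}, ('and', 'p', 'q')))                     # False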
To elaborate on Alexey's answer, for "usual" sequent calculi, the rules other than cut "build structure" in the proof: the left rules build up structure of the assumptions from smaller formulae, and the right rules build up structure in the conclusion(s). Thus cut-free proofs have a kind of recursive structure, and one can reason about the class of cut-free proofs to prove things like consistency, completeness, &c.
However, the cut-free proofs don't admit all of the usual ways we reason in logic; modus ponens being an example that needs cut to be modelled. So to move from the class of inferences that we want to work within to the class of inferences in which we can best reason about, we need the cut-elimination theorem. If we can't find a cut-elimination theorem, that is a sign that the "logic" is broken.
|
The general algorithm is the Extended Euclidean algorithm.
Compute the gcd of $26$ and $5$ by Euclid's algorithm keeping track of coefficients:
$$\begin{align} 26 = & 1 \cdot 26 + 0 \cdot 21 \\ 21 = & 0 \cdot 26 + 1 \cdot 21 \end{align}$$
$21$ fits into $26$ $1$ time, leaving a remainder of $5$, so we subtract the second equation $1$ time from the first and make the second equation the first (so the smaller number becomes the larger and the smaller is replaced by the remainder):
$$\begin{align} 21 = & 0 \cdot 26 + 1 \cdot 21 \\ 5 = & 1 \cdot 26 - 1 \cdot 21 \\ \end{align}$$
$5$ fits into $21$ $4$ times so we subtract $4$ times the second equation ($20 = 4 \cdot 26 - 4\cdot 21$) from the first, and swap in the same way to get
$$\begin{align}5 = & 1 \cdot 26 - 1\cdot 21\\1 = & -4 \cdot 26 + 5 \cdot 21\\\end{align}$$
And the final equation shows that $\gcd(21,26) =1$
Taking the final equation $1 = -4 \cdot 26 + 5 \cdot 21$ modulo $26$, the first term vanishes and we are left with $1 \equiv 5 \cdot 21 \pmod{26}$, which just says that $5$ and $21$ are each other's inverses modulo $26$. So $21^{-1} \bmod{26} = 5$.
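If you want to automate the bookkeeping, here is a small Python sketch of the same computation (the function name is my own):

def extended_gcd(a, b):
    # Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g.
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(26, 21)
print(g, x, y)       # 1 -4 5, i.e. 1 = -4*26 + 5*21
print(5 * 21 % 26)   # 1, so 21^{-1} mod 26 = 5

The returned coefficients $(-4, 5)$ are exactly the ones in the final equation above.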
Instead of the algorithmic approach (which is fail-safe) we could happen to notice that $5 \cdot 5 = 25 \equiv -1 \pmod{26}$, so negating one factor gives $5 \cdot (-5) \equiv 1 \pmod{26}$, and since $-5 \equiv 21 \pmod{26}$ we get the same inverse for $21$.
|
This discussion is independent of my first answer, which I still think is the right form of the continuous version of the Borel-Cantelli lemma.
However, it seems that people are interested in the following question:
"What is the right condition making a family of random events $(A_t)_{t\geq 0}$ as surely stop happening while $t$ is large enough?"
Unfortunately, as far as I know, there is no universally applicable theory that addresses this.
Even if each event $A_t$ has zero probability, it is still not enough to allow us to make the conclusion that $(A_t)_{t\geq 0}$ will stop happening eventually.
For example, let us consider the one-dimensional Brownian motion $B_t$, and define $A_t:=\{B_t=0\}$.
It's easy to see that, for each $t>0$, the event $A_t$ has probability 0, which seems to imply that $A_t$ should never happen.
However, as a matter of fact, a classical property of one-dimensional Brownian motion is that$$\limsup_{t\to\infty} B_t =\infty, \quad \liminf_{t\to\infty} B_t=-\infty.$$This says that $A_t$ does not stop happening.
So my opinion is that these types of problems should be discussed case by case.
And often, those discussions will be related to the regularity of the paths of the indicator process:$$X_t:= \mathbf 1_{A_t}, \quad t\geq 0.$$
In the Brownian motion example, the indicator process $X_t$ is very irregular. As a consequence, the probabilities of the events $A_t$ give no information for answering the question.
In the case that $X_t$ is a continuous process, we see that $A_t$ happens for all $t$ if and only if $A_0$ happens, so the desired property is determined solely by the probability of $A_0$.
In the case that $X_t$ is a càdlàg process, if we know that the probability of $A_t$ decays to 0 fast enough, then we should have that $(A_t)_{t\geq 0}$ stops happening eventually. (Warning: I am not very sure about this last assertion.)
|
I tried to construct a DFA for this NFA.
$\Sigma$ – alphabet
$Q$ – set of states
$\sigma: Q\times (\Sigma \cup \{\epsilon\}) \to P(Q)$ – transition function
$q_0$ – start state
$F \subseteq Q$, $F = \{q_0\}$ – accepting states
Because every NFA has an equivalent DFA, let's construct a DFA $M'$ for this given NFA.
alphabet – the same
$Q' = P(Q)$ – states
The current state is $R \in P(Q)$.
$E(R)$ – the epsilon closure: the set of states reachable via zero or more $\epsilon$-transitions from some $r \in R$
$\sigma'(R,a) = \bigcup_{r \in R} E(\sigma(r,a))$ – transitions
$q'_{0} = E(\{q_0\})$ – start state
$F' = \{R \in P(Q) : R \cap F \neq \emptyset\}$ – accepting states (the subsets that contain an accepting state of the NFA)
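Below is a rough Python sketch of the $E(R)$ and $\sigma'$ definitions above, run on a stand-in toy NFA (not the NFA from the figure), just to make the construction concrete:

def eclose(states, eps):
    # E(R): all states reachable from R via zero or more epsilon-moves.
    stack, closure = list(states), set(states)
    while stack:
        r = stack.pop()
        for s in eps.get(r, ()):
            if s not in closure:
                closure.add(s)
                stack.append(s)
    return frozenset(closure)

def dfa_step(R, a, delta, eps):
    # sigma'(R, a) = union over r in R of E(delta(r, a)).
    out = set()
    for r in R:
        out |= eclose(delta.get((r, a), set()), eps)
    return frozenset(out)

# toy NFA: q0 --eps--> q1, q0 --0--> q0
eps = {'q0': {'q1'}}
delta = {('q0', '0'): {'q0'}}
start = eclose({'q0'}, eps)                       # E({q0}) = {q0, q1}
print(start, dfa_step(start, '0', delta, eps))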
Some computations with this FSM:
$1.$ $\epsilon$ on the input: $q'_{0} = E(\{q_0\}) = \{q_0, q_1\}$. The initial state includes $q_1$, so the FSM accepts $\epsilon$.
$2.$ $0^*$ on the input: $\sigma'(\{q_0, q_1\}, 0) = E(\sigma(q_0,0)) \cup E(\sigma(q_1,0)) = \{q_0, q_1\} \cup \{\} = \{q_0, q_1\}$, so the FSM accepts $0^*$.
So at least $\{\epsilon\} \cup L(0^*) \subseteq L(M')$.
Thanks to David Richerby
|
So I've been working on the question below for a bit, and after arriving at what I thought was the answer, I found out that I was wrong, because the bounds I used for integration was incorrect.
Here is the problem statement.
So, the first thing I did was solve for the values of $\theta$ where an intersection occurs, and these values are $\theta_1=\frac{\pi}{3}$ and $\theta_2=\frac{5}{3}\pi$. From here, I found the value of $(4\cos\theta)^2$ as well as the value of $\left(\frac{d}{d\theta}(4\cos\theta)\right)^2$, and then substituted these values into the formula for the arc length of a polar curve:
$$\implies \int_\alpha^\beta\sqrt{f(\theta)^2+f'(\theta)^2}~d\theta=\int_{\frac{1}{3}\pi}^{\frac{5}{3}\pi}\sqrt{16\cos^2\theta+16\sin^2\theta}~d\theta$$
$$=4\int_{\frac{1}{3}\pi}^{\frac{5}{3}\pi}~d\theta$$
$$=4\left(\frac{5}{3}\pi-\frac{1}{3}\pi\right)$$
$$=\frac{16}{3}\pi$$
However, this answer was wrong: according to the marking scheme, the correct bounds did not include $\frac{5}{3}\pi$, but rather $\frac{2}{3}\pi$. Can someone perhaps explain to me why this is?
Any help is appreciated, thank you.
|
I am quite new to Mathematica. I am trying to find the first variation of this functional with respect to $t$ in Mathematica, but I could not manage it. Here is the functional:
$$\int_{0}^{\infty}u\left(c\left(t\right)\right)\exp\left\{ -\int_{0}^{t}\theta\left(c\left(s\right)\right)\text{d}s\right\} \text{d}t$$
How can I find the first variation of this integral with respect to $t$?
Thanks in advance.
|
Suppose that we want to make the best Bayesian inference about some value $\mu$, and we have a normal prior for it, i.e. $\mu\sim N(\mu_0, \sigma_0^2)$ with known parameters. To do so, we can choose parameters $(\mu_x, \sigma_x)$ that define a normal sampling scheme with mean $\mu_s=\mu \frac{\sigma_x^2}{\sigma_x^2+\sigma_0^2}+\mu_x\frac{\sigma_0^2}{\sigma_x^2+\sigma_0^2}$ and variance $\sigma_s^2=\frac{\sigma_0^2\sigma_x^2}{\sigma_0^2+\sigma_x^2}$.
Can it be shown that the optimal value of $\mu_x$ is $\mu_x=\mu_0$?
Can it be shown that the optimal value of $\sigma_x$ is an intermediate value, i.e. $\sigma_x\in(0,\infty)$?
Can the optimal value of $\sigma_x$ be characterized in closed form?
Details: to be more precise, after choosing parameters $(\mu_x,\sigma_x)$ we will get an observation from the induced normal distribution, and the goal is for the expected posterior to be as close as possible to $\mu$.
|