Here I provide an exact one-sample or paired-sample test for mean(s): the exact version of the permutation test, used to compare a single sample against a mean or to compare paired samples' differences against a mean. The algorithm in this function is based on Bryan F. J. Manly, 1997, Randomization, Bootstrap and Monte Carlo Methods in Biology, 2nd edition, pp. 91-97. That is, all possible permutations (all possible cases of exchange between $x_i$ and $\mu$ in the one-sample test, or between $x_{1i}$ and $x_{2i}$ in the paired-sample test) are processed to calculate an exact p-value.

R code

Arguments

The usage of the function exactOneOrPairedSampleTest() is very similar to that of the R built-in function t.test(). There are four arguments:

x1: a numeric vector specifying $x_{1i}$
x2 = NULL: a numeric vector specifying $x_{2i}$ (in the paired-sample case)
mu = 0: a number specifying the true mean or true difference $\mu$
alternative = c("t","g","l"): a single character specifying $H_A$: the true mean of $x_{1i}$ (or of $x_{1i} - x_{2i}$) is not equal to, greater than, or less than $\mu$, respectively.

Example 1: one-sample test

Let $x_i = \{43,67,64,64,51,53,53,26,36,48,34,48,6\}$, $i = 1 \ldots 13$:

> x <- c(43,67,64,64,51,53,53,26,36,48,34,48,6)

Now we can call the above function exactOneOrPairedSampleTest() to test $H_0$: true mean of $x_i = 56$ against $H_A$: true mean of $x_i \neq 56$:

> test1 <- exactOneOrPairedSampleTest(x, alternative="t", mu=56)
> test1
Exact one sample test
Alternative hypothesis: true mean is not equal to 56
mean(x) - mu = -10.3846153846154
Number of total permutation = 8192
Number of rejected permutation = 364
P-value = 0.04443359

The results show that: $\sum_{i=1}^{13} x_i / 13 - \mu = 45.61538 - 56 = -10.38462$; in total $8192 = 2^{13}$ permutations are processed; 364 permutations do not support $H_0$; $P = 364/2^{13} = 0.0444$.
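The counting behind this output is simple enough to reproduce. Below is a minimal Python sketch of the same enumeration (the helper name exact_one_sample_p is mine, not part of the R code): it flips the sign of each difference $d_i = x_i - \mu$ in all $2^n$ ways and counts how many permutations are at least as extreme as the observed mean.

```python
from itertools import product

def exact_one_sample_p(x, mu):
    """Two-sided exact permutation test of H0: mean(x) == mu.

    Following Manly's scheme, each exchange between x_i and mu flips
    the sign of the difference d_i = x_i - mu; all 2^n sign patterns
    are enumerated and the p-value is the fraction whose statistic is
    at least as extreme as the observed one.
    """
    d = [xi - mu for xi in x]
    observed = abs(sum(d))  # comparing |sum| is equivalent to comparing |mean|
    n_reject = sum(
        1 for signs in product((1, -1), repeat=len(d))
        if abs(sum(s * di for s, di in zip(signs, d))) >= observed
    )
    return n_reject, n_reject / 2 ** len(d)

# Data from Example 1
x = [43, 67, 64, 64, 51, 53, 53, 26, 36, 48, 34, 48, 6]
n_reject, p = exact_one_sample_p(x, mu=56)
print(n_reject, p)
```

For the data of Example 1 this gives a p-value of about 0.044, in line with the output above (exact tie handling at the boundary may differ between implementations).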
We can also call the function hist.exactOneOrPairedSampleTest():

> hist(test1)

As you can see, the vertical lines $x = \pm 10.38462$ show the critical boundary for rejecting $H_0$. Note that it is a two-tailed test. Finally, you may inspect the returned object:

> str(test1)
List of 14
 $ x1          : num [1:13] 43 67 64 64 51 53 53 26 36 48 ...
 $ x2          : NULL
 $ x1.name     : chr "x"
 $ x2.name     : chr "NULL"
 $ n           : int 13
 $ mu          : num 56
 $ test.0      : num -10.4
 $ is.onesample: logi TRUE
 $ alternative : chr "t"
 $ test.perm   : num [1:8192] -10.38 -8.38 -12.08 -10.08 -11.62 ...
 $ DF          :'data.frame': 13 obs. of 3 variables:
  ..$ x1  : num [1:13] 43 67 64 64 51 53 53 26 36 48 ...
  ..$ mu  : num [1:13] 56 56 56 56 56 56 56 56 56 56 ...
  ..$ diff: num [1:13] -13 11 8 8 -5 -3 -3 -30 -20 -8 ...
 $ N           : num 8192
 $ p.value     : num 0.0444
 $ rejected.N  : int 364
 - attr(*, "class")= chr "exactOneOrPairedSampleTest"

to get more details of the results if you need them.

Example 2: paired-sample test

Let $x_{1i} = \{92, 0, 72, 80, 57, 76, 81, 67, 50, 77, 90\}$ and $x_{2i} = \{43, 67, 64, 64, 51, 53, 53, 26, 36, 48, 34\}$, where each $i$ is paired and $i = 1 \ldots 11$:

> x1 <- c(92, 0,72,80,57,76,81,67,50,77,90)
> x2 <- c(43,67,64,64,51,53,53,26,36,48,34)

Now we can call the function exactOneOrPairedSampleTest() to test $H_0$: true mean of $x_{1i} - x_{2i} \leq 10$ against $H_A$: true mean of $x_{1i} - x_{2i} > 10$:

> test2 <- exactOneOrPairedSampleTest(x1, x2, alternative="g", mu=10)
> # equivalent to test2 <- exactOneOrPairedSampleTest(x1 - x2, alternative="g", mu=10)
> test2
Exact paired sample test
Alternative hypothesis: means of x1 - x2 is greater than 10
mean((x1)-(x2)) - mu = 8.45454545454546
Number of total permutation = 2048
Number of rejected permutation = 445
P-value = 0.2172852

The results show that: $\sum_{i=1}^{11} (x_{1i}-x_{2i}) / 11 - 10 = 8.45455$; in total $2048 = 2^{11}$ permutations are processed; 445 permutations do not support $H_0$; $P = 445/2^{11} = 0.2172852$.
We can also call the function hist.exactOneOrPairedSampleTest():

> hist(test2)

As you can see, the vertical line $x = 8.45455$ shows the critical boundary for rejecting $H_0$. Note that this is a right-tailed test. Again, you may inspect the returned object:

> str(test2)
List of 14
 $ x1          : num [1:11] 92 0 72 80 57 76 81 67 50 77 ...
 $ x2          : num [1:11] 43 67 64 64 51 53 53 26 36 48 ...
 $ x1.name     : chr "x1"
 $ x2.name     : chr "x2"
 $ n           : int 11
 $ mu          : num 10
 $ test.0      : num 8.45
 $ is.onesample: logi FALSE
 $ alternative : chr "g"
 $ test.perm   : num [1:2048] 8.45 1.36 22.45 15.36 8.82 ...
 $ DF          :'data.frame': 11 obs. of 3 variables:
  ..$ x1  : num [1:11] 92 0 72 80 57 76 81 67 50 77 ...
  ..$ x2  : num [1:11] 43 67 64 64 51 53 53 26 36 48 ...
  ..$ diff: num [1:11] 39 -77 -2 6 -4 13 18 31 4 19 ...
 $ N           : num 2048
 $ p.value     : num 0.217
 $ rejected.N  : int 445
 - attr(*, "class")= chr "exactOneOrPairedSampleTest"

to get more details of the results if you need them.
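The one-sided paired case can be sketched the same way. Given the reported equivalence to a one-sample test on $x_1 - x_2$ with mu = 10, the Python sketch below (the helper name exact_greater_p is hypothetical) flips the signs of the shifted differences $(x_{1i} - x_{2i}) - \mu$, the same values shown in the DF diff column, and counts the permutations whose sum is at least the observed one.

```python
from itertools import product

def exact_greater_p(x1, x2, mu):
    """One-sided ('greater') exact permutation test of
    H0: mean(x1 - x2) <= mu, enumerating all 2^n sign flips of the
    shifted paired differences e_i = (x1_i - x2_i) - mu."""
    e = [(a - b) - mu for a, b in zip(x1, x2)]
    observed = sum(e)  # comparing sums is equivalent to comparing means
    n_reject = sum(
        1 for signs in product((1, -1), repeat=len(e))
        if sum(s * ei for s, ei in zip(signs, e)) >= observed
    )
    return n_reject, n_reject / 2 ** len(e)

# Data from Example 2
x1 = [92, 0, 72, 80, 57, 76, 81, 67, 50, 77, 90]
x2 = [43, 67, 64, 64, 51, 53, 53, 26, 36, 48, 34]
n_reject, p = exact_greater_p(x1, x2, mu=10)
print(n_reject, p)
```

This should give a p-value of about 0.217, in line with the output above (again, tie handling at the boundary may differ between implementations).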
It helps to write the equation in this form:
$$ \frac{\sin[\alpha(\lambda-1)]}{\alpha(\lambda - 1)} = \frac{\sin(2\alpha)}{2\alpha} $$
Simply put, we're looking for $f(\alpha(\lambda-1)) = f(2\alpha)$ where $f(x) = \dfrac{\sin x}{x}$. Everything from here on will involve the sinc function. Here's a plot of the sinc function.

If $\alpha \ne 0$, there are three solutions: $\lambda = 1$ and $\lambda - 1 = \pm 2$, i.e. $\lambda = -1, 3$. A special case occurs when $\sin(2\alpha) = 0$, i.e. $\alpha = \dfrac{n\pi}{2}$, $n \ne 0$. Then you get rational solutions $\lambda = 1 + \dfrac{2m}{p}$, where $m$ is an integer and $p$ is any factor of $n$. This solution set is countably infinite.

EDIT: You don't want $\alpha = \dfrac{n\pi}{2}$ since this makes the original equation undefined, but everything below this still holds.

For any other case, you'll always get a finite number of solutions, since the sinc function decays in amplitude. Obtaining a definite count is possible but rather complicated, since it requires knowing the extrema of the sinc function. You'll have to use numerical methods from here.

It's easy to count the solution set if $f'(2\alpha) = 0$. Then there are no solutions with $|\lambda - 1| > 2$:
There are no additional solutions if $2\alpha$ is the minimum point in $\pm(\pi, 2\pi)$.
There are 2 additional solutions if $2\alpha$ is the maximum point in $\pm(2\pi, 3\pi)$.
There are 4 additional solutions if $2\alpha$ is the minimum point in $\pm(3\pi, 4\pi)$, and so on.
There are a total of $2n+1$ solutions (including $\lambda = 1$) for $|2\alpha| \in \big(n\pi, (n+1)\pi\big)$.

If $f'(2\alpha) \ne 0$, then there may exist some $\beta$ such that $f'(\beta) = f'(2\alpha)$ and $f(\beta) = f(2\alpha)$. If this happens, there are once again $2n+1$ solutions for $|\beta| \in \big(n\pi, (n+1)\pi\big)$.

Finally, if $f(2\alpha)$ does not coincide with any extremum, then it has to occur between two maxima or minima. We can then find a $c$ such that $f'(c) = 0$ and $|f(2\alpha) - f(c)|$ is minimal. Then there are $2n-1$ solutions for $|c| \in \big(n\pi, (n+1)\pi\big)$.

TL;DR: the problem boils down to counting the solutions of $\dfrac{\sin x}{x} = a$.
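Since the problem reduces to counting solutions of $\sin x / x = a$, a quick numerical check is easy to sketch (the helper name and grid resolution below are arbitrary choices of mine; simple roots show up as sign changes on a fine grid, while tangential solutions exactly at an extremum can be missed):

```python
import numpy as np

def count_sinc_solutions(a, x_max=100.0, n=2_000_000):
    """Approximate the number of positive solutions of sin(x)/x = a
    on (0, x_max] by counting sign changes of f(x) = sin(x)/x - a
    on a fine grid."""
    x = np.linspace(1e-9, x_max, n)       # start just above 0 to avoid 0/0
    f = np.sin(x) / x - a
    return int(np.count_nonzero(np.sign(f[:-1]) != np.sign(f[1:])))

print(count_sinc_solutions(0.5))  # sinc crosses 0.5 only once for x > 0
```

Because the sinc function decays like $1/x$, for any $a \ne 0$ the count stabilises once $x_\max$ exceeds $1/|a|$, matching the finiteness argument above.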
If you need to insert cross-references to numbered elements in the document (like equations, sections and figures), there are commands to automate it in LaTeX. This article explains how.

Below you can see a simple example of cross-referenced images:

\section{Introduction}
\label{introduction}
This is an introductory paragraph with some dummy text. This section will be later referenced.

\begin{figure}[hbt!]
\centering
\includegraphics[width=0.3\linewidth]{lion-logo.png}
\caption{This image will be referenced below}
\label{fig:lion}
\end{figure}

You can reference images, for instance, figure \ref{fig:lion} shows the red lion logo.

The command \label{ } is used to set an identifier that is later used in the command \ref{ } to set the reference. Below is an example of how to reference a section:

\section{Introduction}
\label{introduction}
This is an introductory paragraph with some dummy text. This section will be later referenced.

\begin{figure}[h]
\centering
\includegraphics[width=0.3\linewidth]{lion-logo.png}
\caption{This image will be referenced below}
\label{fig:lion}
\end{figure}

You can reference images, for instance, the image \ref{fig:lion} shows the red lion logo.

\section{Math references}
\label{mathrefs}
As mentioned in section \ref{introduction}, different elements can be referenced within a document

Again, the commands \label and \ref are used for references. The label can be set either right before or after the \section statement. This also works for chapters, subsections and subsubsections. See Sections and chapters.

In the introduction an example of a referenced image was shown; below, cross-referencing equations is presented:

\section{Math references}
\label{mathrefs}
As mentioned in section \ref{introduction}, different elements can be referenced within a document

\subsection{powers series}
\label{subsection}
\begin{equation}
\label{eq:1}
\sum_{i=0}^{\infty} a_i x^i
\end{equation}
The equation \ref{eq:1} is a typical power series.
For further and more flexible examples with labels and references see

Elements are usually referenced by a number assigned to them, but if you need to, you can insert the page where they appear.

\section{Math references}
\label{mathrefs}
As mentioned in section \ref{introduction}, different elements can be referenced within a document

\subsection{powers series}
\label{subsection}
\begin{equation}
\label{eq:1}
\sum_{i=0}^{\infty} a_i x^i
\end{equation}
The equation \ref{eq:1} is a typical power series.

\section{Last section}
In the subsection \ref{subsection} at the page \pageref{eq:1} an example of a power series was presented.

The command \pageref will insert the page where the element whose label is used appears; in the example above, equation 1. This command can be used with all the other numbered elements mentioned in this article.

On Overleaf cross-references work immediately, but for cross-references to work properly in your local LaTeX distribution you must compile your document twice. There is also a command that automates this so that all references work. For instance, if your document is saved as main.tex, then

latexmk -pdf main.tex

generates the file main.pdf with all cross-references working. To change the output format use -dvi or -ps. For more information see:
Conveners

I.a Calorimetry: Session 1: Jose Repond (Argonne National Laboratory)
I.a Calorimetry: Session 2: Elena Rocco (University of Utrecht (NL))
I.a Calorimetry: Session 3: Elena Rocco (University of Utrecht (NL)), Silvia Fracchia (Universitat Autònoma de Barcelona (ES))

02/06/2014, 16:10 Sensors: 1a) Calorimetry Oral
The Tile Calorimeter (TileCal) is the central hadronic calorimeter of the ATLAS experiment at the LHC. Together with other calorimeters, it provides precise measurements of hadrons, jets, taus and missing transverse energy. The monitoring and equalisation of the calorimeter response at each stage of the signal development are enabled by a movable 137Cs radioactive source, a laser calibration...

248. The CMS Electromagnetic Calorimeter: lessons learned during LHC run 1, overview and future projections
Arabella Martelli (INFN e Università Milano-Bicocca (IT))
02/06/2014, 16:30 Sensors: 1a) Calorimetry Oral
The Electromagnetic Calorimeter (ECAL) of the Compact Muon Solenoid (CMS) experiment at the LHC is a hermetic, fine grained, homogeneous calorimeter, comprising 75848 lead tungstate scintillating crystals. We highlight the key role of the ECAL in the discovery and elucidation of the Standard Model Higgs boson during LHC Run I. We discuss, with reference to specific examples from LHC Run I, the...

Adolf Bornheim (Charles C. Lauritsen Laboratory of High Energy Physics)
02/06/2014, 16:50 Sensors: 1a) Calorimetry Oral
The CMS electromagnetic calorimeter (ECAL) is made of 75,848 scintillating lead tungstate crystals arranged in a barrel and two endcaps. The scintillation light is read out by avalanche photodiodes in the barrel and vacuum phototriodes in the endcaps, at which point the scintillation pulse is amplified and sampled at 40 MHz by the on-detector electronics. The fast signal from the crystal...
Tatsuya Chujo (University of Tsukuba (JP))
02/06/2014, 17:10 Sensors: 1a) Calorimetry Oral
ALICE at the Large Hadron Collider (LHC) is the dedicated experiment focused on heavy ion collisions at the LHC, to study the de-confined matter of quarks and gluons, called Quark Gluon Plasma (QGP). Among the sub-detector systems in ALICE, there are two types of calorimetry in the central barrel. One is EMCal (Lead-Scintillator, a sampling electromagnetic calorimeter with a WLS fiber and APD...

Pascal Perret (Univ. Blaise Pascal Clermont-Fe. II (FR))
02/06/2014, 17:30 Sensors: 1a) Calorimetry Oral
The LHCb experiment is dedicated to precision measurements of CP violation and rare decays of B hadrons at the Large Hadron Collider (LHC) at CERN (Geneva). It comprises a calorimeter system composed of four subdetectors: an electromagnetic calorimeter (ECAL) followed by a hadron calorimeter (HCAL). In addition the system includes in front of them the Scintillating Pad Detector (SPD) and...

Mr Yuya Makino (STEL, Nagoya University)
02/06/2014, 17:50 Sensors: 1a) Calorimetry Oral
The Large Hadron Collider forward (LHCf) experiment is designed to measure the hadronic production cross sections of neutral particles emitted in the very forward angles in p-p collisions at the LHC. LHCf has reported energy spectra of forward photons and neutral pions at √s = 900 GeV and 7 TeV proton-proton collisions measured at the LHC. Forward spectra can be helpful in verifying cosmic ray...

138. The Time Structure of Hadronic Showers in Analog and Digital Calorimeters confronted with Simulations
Frank Simon (Max-Planck-Institut fuer Physik)
03/06/2014, 11:00 Sensors: 1a) Calorimetry Oral
The intrinsic time structure of hadronic showers influences the timing capability and the required integration time of highly granular hadronic calorimeters for future collider experiments.
To evaluate the influence of different active media and different absorbers, dedicated experiments with tungsten and steel hadron calorimeters of the CALICE collaboration have been carried out. These use...

Dr Alexey Petrukhin (IPNL/CNRS)
03/06/2014, 11:20 Sensors: 1a) Calorimetry Oral
The SDHCAL prototype that was completed in 2012 was exposed to beams of pions and electrons of different energies at the SPS of CERN for a total period of 5 weeks. The data are being analyzed within the CALICE collaboration. However, preliminary results indicate that a highly granular hadronic calorimeter conceived for PFA application is also a powerful tool to separate pions from electrons....

156. Shower characteristics of particles with momenta up to 100 GeV in the CALICE Scintillator-Tungsten HCAL
Wolfgang Klempt (CERN)
03/06/2014, 11:40 Sensors: 1a) Calorimetry Oral
ABSTRACT: We present a study of the showers initiated by high momentum (up to 100 GeV) positrons, pions and protons in the highly granular CALICE analogue scintillator-tungsten hadronic calorimeter. The data were taken at the CERN PS and SPS. The analysis includes measurements of the calorimeter response to each particle type and studies of the longitudinal and radial shower development. The...

Vincent Boudry (Ecole Polytechnique (FR))
03/06/2014, 12:00 Sensors: 1a) Calorimetry Oral
The best jet energy resolution required for precise physics measurements at the ILC is achievable using a Particle Flow Algorithm (PFA) and highly granular calorimeters. As was shown by the CALICE international R&D collaboration, the silicon-tungsten imaging electromagnetic calorimeter provides the best granularity and jet resolution. After proving the PFA concept with physical prototypes in...

Lloyd Teh (Shinshu University)
03/06/2014, 12:20 Sensors: 1a) Calorimetry Oral
The idea of using scintillator strips coupled with a Pixelated Photon-Detector (PPD) has provided the ILD an electromagnetic calorimeter (ECAL) option with a lower cost.
In the FNAL 2009 beam test, it was found that the prototype calorimeter of 30 layers could meet the stringent requirements of the ILD. Following this, efforts have been made to develop a more feasible ECAL in terms of performance,...

Edouard Kistenev (Department of Physics)
06/06/2014, 11:00 Sensors: 1a) Calorimetry Oral
The PHENIX Experiment at RHIC is planning a series of major upgrades that will transform the current PHENIX detector into a new detector, sPHENIX, which will be used to carry out a systematic measurement of jets in heavy ion collisions in order to study the phase transition of normal nuclear matter to the Quark Gluon Plasma near its critical temperature. The baseline design of sPHENIX will...

Dr Ivano Sarra (LNF INFN Frascati, Italy)
06/06/2014, 11:20 Sensors: 1a) Calorimetry Oral
The Mu2e experiment at FNAL aims to measure the charged-lepton flavor violating neutrinoless conversion of a negative muon into an electron. The conversion results in a monochromatic electron with an energy slightly below the rest mass of the muon (104.97 MeV). The calorimeter should confirm that the candidates reconstructed by the extremely precise tracker system are indeed...

Marco Incagli (Sezione di Pisa (IT))
06/06/2014, 11:40 Sensors: 1a) Calorimetry Oral
The Alpha Magnetic Spectrometer (AMS-02) is a high-energy particle detector deployed on the International Space Station (ISS) since May 19, 2011 to conduct a long-duration mission on fundamental physics research in space. The main scientific goals of the mission are the detection of antimatter and dark matter through the study of the spectra and fluxes of protons, electrons, nuclei until the...

Giuseppe Finocchiaro (Laboratori Nazionali di Frascati dell’INFN)
06/06/2014, 12:00 Sensors: 1a) Calorimetry Oral
The Belle II experiment will operate at the SuperKEKB e+e− collider, designed to reach a peak luminosity of 8×10^35 cm−2 s−1 at the Upsilon(4S).
The high background environment of SuperKEKB poses serious challenges to the design of the Belle II detector. In particular, an upgrade of the forward Electromagnetic Calorimeter is foreseen: the new calorimeter will use pure CsI crystals, which...

Dr Ryu Sawada (ICEPP, the University of Tokyo)
06/06/2014, 12:20 Sensors: 1a) Calorimetry Oral
The MEG experiment yielded the most stringent upper limit on the branching ratio of the flavor-violating muon decay $\mu\rightarrow e\gamma$. A major upgrade of the detector is planned to improve the sensitivity by one order of magnitude. For the upgrade, 2-inch round-shape photomultiplier tubes (PMTs) on the entrance window will be replaced by $12\times12$ cm${}^2$ Multi-Pixel Photon...
Recall that there is a bijection between irreducible representations of a compact real Lie group $G$ and the cocharacters (homomorphisms $U(1) \to G$, modulo conjugation) of the Langlands dual group $^LG$. The irreducible representations of $G$ have additional structure related to tensoring representations: given representations $\alpha, \beta, \gamma$ of $G$, we have the invariant subspace $V_{\alpha\beta\gamma}$ of the tensor product $\alpha\otimes\beta\otimes\gamma$. (Or, if you prefer, the space $V_{\alpha\beta}^{\gamma^*}$ of homomorphisms from $\gamma^*$ to $\alpha\otimes\beta$.) Is there an intrinsic way to define a Langlands dual structure on the cocharacters of $^LG$? In other words, can one, in a natural way (and without using Langlands duality), associate to cocharacters $a, b, c$ of $^LG$ a vector space $V_{abc}$? One possibility would be to think of the cocharacters as (equivalence classes of) geodesics in $^LG$ and replace the tensoring of $G$ representations with the splicing of $^LG$ geodesics. The resulting families of broken geodesics $a \cdot b$ could flow to actual geodesics $\{c_i\}$. Before pursuing this idea I wanted to check whether there are already known answers to the main question above. ADDED LATER: The motivation for the above question was geometric Langlands TQFTs applied to the operation of gluing two disks together to obtain another disk. I had forgotten that on the Rep($G$) side a 2-sphere gives the same monoidal category as a disk, and that therefore the geometric Satake isomorphism answers my question. I still wonder whether thinking in terms of disks instead of spheres would give a different (but presumably equivalent) construction of the monoidal product on the cocharacter side.
Minimax-robust filtering problem for stochastic sequences with stationary increments and cointegrated sequences

Abstract

The problem of optimal estimation of the linear functionals $A\xi = \sum_{k=0}^{\infty} a(k)\xi(-k)$ and $A_N\xi = \sum_{k=0}^{N} a(k)\xi(-k)$, which depend on the unknown values of a stochastic sequence $\xi(m)$ with stationary $n$th increments, from observations of the sequence $\xi(m)+\eta(m)$ at points of time $m=0,-1,-2,\ldots$ is considered in the case where the sequence $\eta(m)$ is stationary and uncorrelated with the sequence $\xi(m)$. Formulas for calculating the mean-square errors and the spectral characteristics of the optimal estimates of the functionals are proposed under the condition of spectral certainty, where the spectral densities of the sequences $\xi(k)$ and $\eta(k)$ are exactly known. The filtering problem for one class of cointegrated sequences is solved. The minimax (robust) method of estimation is applied in the case where the spectral densities of the sequences are not known exactly but sets of admissible spectral densities are given. Formulas that determine the least favorable spectral densities and the minimax spectral characteristics are proposed for some definite sets of admissible densities.

DOI: 10.19139/soic.v2i3.56
I had an exam today and in the exam a question was this: "A 50-year-old tortoise jumps onto a rocket and goes to a distant planet 370 light years away; the rocket's speed was 0.7c. If a tortoise lives 450 years on average, would he live till reaching the destination or not?" Cruel question I know, but I calculated that relativity won't affect the tortoise and it'd take him (370 ly)/(0.7c) ≈ 528 years, so he won't survive. Is my process correct? Because as I understand it, for the tortoise everything is normal. All my friends have answered using time dilation. So I am really confused. Please help.

Aging When Travelling [relativity]

Posted 26 April 2017 - 08:47 AM

The tortoise rocket clock would record an elapsed time of 377.47 years, so the tortoise would only be 427.5 (rounded) years old, giving it another 22.5 years to explore the planet. 528.57 years is the period of time an Earth-based observer would record for the 370 light year journey of the craft travelling at 0.7c. The tortoise travelling in the rocket would experience time dilation of that figure, giving 377.47 years.

Posted 26 April 2017 - 11:54 AM

The tortoise rocket clock would record an elapsed time of 377.47 years...

How did you get that?

I had an exam today and in the exam a question was this: "A 50-year-old tortoise jumps onto a rocket and goes to a distant planet 370 light years away; the rocket's speed was 0.7c. If a tortoise lives 450 years on average, would he live till reaching the destination or not?" Cruel question I know, but I calculated that relativity won't affect the tortoise and it'd take him (370 ly)/(0.7c) ≈ 528 years, so he won't survive. Is my process correct? Because as I understand it, for the tortoise everything is normal. All my friends have answered using time dilation. So I am really confused. Please help.

They got it wrong as well then. The distance traveled in space is shortened by the same amount traveled in time.
Apply whatever the length contraction is at 0.7c, then work out how long it would take to travel that distance, then apply time dilation to the journey time. The tortoise doesn't feel time dilated, so in that sense everything is normal. The tortoise should have plenty of time.

Edited by A-wal, 26 April 2017 - 04:20 PM.

Posted 26 April 2017 - 03:34 PM

How did you get that?

Special relativity means moving clocks run slow. The time dilation equation is:

Δt₀ = Δt·√(1 − v²/c²)

Δt₀ = spacecraft time interval in years
Δt = Earth-based time interval in years (528.57 in this case)
v = spacecraft velocity, 0.7c
c = speed of light (2.99792458×10⁸ m/s, 3.0×10⁸ rounded; cancels out in this example)

Δt₀ = 528.57·√(1 − (0.7)²)
Δt₀ = 528.57·√(1 − 0.49)
Δt₀ = 528.57·√0.51
Δt₀ = 528.57 × 0.7141428 = 377.47 years

Posted 26 April 2017 - 04:20 PM

The tortoise does feel time dilated so in that sense everything is normal.

I meant: The tortoise DOESN'T feel time dilated, so in that sense everything is normal.

Posted 04 May 2017 - 05:54 AM

Mathematically, the tortoise would have quite some time to explore the planet. However, time dilation is based on moving at fast speeds, which can be caused by gravity. 0.7c is hardly fast enough to escape the gravity of a black hole, and if you're going to a planet 370 light years away, you are bound to run into a 100 different large black holes. In technical terms, he would be dead before he reached even 30% of his journey. To really go into depth, you'll need to know the route and the gravitational field, Emmitt; only then will your answer be 100% correct.

Posted 04 May 2017 - 09:46 AM

if you're going to a planet 370 light years away, you are bound to run into a 100 different large black holes.

I'm not so sure; the nearest black hole is 2,800 light years away in the faint Monoceros (Unicorn) constellation (viewed in the night sky between Sirius and Procyon), according to the reference below.
Posted 04 May 2017 - 11:15 AM

sparten45 gave the correct answer per SR and the initial conditions, which did not include black holes.

Posted 18 May 2017 - 09:34 AM

sparten45 gave the correct answer per SR

No he didn't, he left out length contraction. To get the difference in the amount of proper time it would take to make the journey you need to work out the difference in coordinate time over the distance in space. Length contracts by the same amount as time dilates, so it should take 270 years proper time.

Posted 18 May 2017 - 09:40 AM

Hi,

Mathematically, the tortoise would have quite some time to explore the planet. However, time dilation is based on moving at fast speeds, which can be caused by gravity. 0.7c is hardly fast enough to escape the gravity of a black hole, and if you're going to a planet 370 light years away, you are bound to run into a 100 different large black holes. In technical terms, he would be dead before he reached even 30% of his journey. To really go into depth, you'll need to know the route and the gravitational field, Emmitt; only then will your answer be 100% correct.

No, there are no black holes that close to the Earth.

Posted 18 May 2017 - 03:09 PM

No he didn't, he left out length contraction. To get the difference in the amount of proper time it would take to make the journey you need to work out the difference in coordinate time over the distance in space. Length contracts by the same amount as time dilates

This is interesting, as you have tackled the problem from a different perspective. Here is my reasoning looking at the problem from the point of view of length contraction.
L = L₀√(1 – v²/c²), where L₀ = Earth-based distance in light years (370 in this case), L = spacecraft-measured distance (in light years for this example), v = spacecraft velocity, 0.7c, and c = speed of light (2.99792458×10⁸ m/s, rounded to 3.0×10⁸; it cancels out in this example). L = 370√(1 – 0.7²) = 370√(1 – 0.49) = 370√0.51 = 370 × 0.7141428 = 264.23 light years. Using time = distance/speed: t = 264.23/0.7 = 377.47 years. Posted 18 May 2017 - 03:45 PM This is interesting, as you have tackled the problem from a different perspective. Here is my reasoning looking at the problem from the point of view of length contraction. L = L₀√(1 – v²/c²) = 370√(1 – 0.49) = 370√0.51 = 370 × 0.7141428 = 264.23 light years. Using time = distance/speed: t = 264.23/0.7 = 377.47 years. You are much too kind! A-wal didn't tackle the problem at all; he just posted a number of 270 years, which is wrong, without showing any calculations at all. You, on the other hand, have done your calculations properly, and your solutions of 264.23 LY distance due to length contraction and 377.47 years time due to time dilation are correct. One thing to keep in mind about this sort of problem is that both observers (the one on Earth and the one in the rocket) will not agree on the time or the distance, but they always agree on the velocity. That gives you a quick and easy way to check your answer. Since velocity = distance / time: For the Earth observer: 0.7c = 370 LY / 528.57 Y. For the space farer: 0.7c = 264.23 LY / 377.47 Y. The velocities agree, as they must, so that is a good indication your answers are correct, and of course they are.
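The two calculations in the thread (time dilation applied to the Earth-frame trip time, and length contraction applied to the Earth-frame distance) can be checked in a few lines. This is an editorial sketch, not part of the original posts; the variable names are illustrative:

```python
import math

v = 0.7          # speed as a fraction of c
L0 = 370.0       # Earth-frame distance, in light years

gamma_inv = math.sqrt(1 - v**2)      # sqrt(1 - v^2/c^2) ≈ 0.7141428

t_earth = L0 / v                     # Earth-frame trip time ≈ 528.57 y
t_dilated = t_earth * gamma_inv      # time-dilation route ≈ 377.47 y

L_contracted = L0 * gamma_inv        # contracted distance ≈ 264.23 ly
t_proper = L_contracted / v          # length-contraction route ≈ 377.47 y

# The two routes give the same proper time, and both frames agree on v
assert abs(t_dilated - t_proper) < 1e-9
assert abs(L_contracted / t_proper - v) < 1e-12
```

Algebraically the agreement is forced: (L₀/v)·√(1 − v²/c²) and (L₀·√(1 − v²/c²))/v are the same expression, which is the point of the consistency check in the post above.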
But don’t expect A-wal, “Mr special relativity”, to admit he is wrong. He never does and he never learns. Posted 18 May 2017 - 05:06 PM Right, length contracts by the same amount as time dilates so you get the same result. So when both are taken into account you get a proper time of 270 years for the trip, right? Posted 19 May 2017 - 03:38 AM Right, length contracts by the same amount as time dilates so you get the same result. You get the same result for velocity, yes. So when both are taken into account you get a proper time of 270 years for the trip, right? No, wrong. See here: for an object in an SR spacetime traveling with a velocity of v for a time ΔT, the proper time interval experienced is the same as in the SR time dilation formula, 377.47 years in this case. Why do you think the proper time is different from the dilated time? Posted 21 May 2017 - 11:52 AM The velocities agree, as they must, so that is a good indication your answers are correct, and of course they are. But don’t expect A-wal “Mr special relativity” to admit he is wrong. He never does and he never learns. You get the same result for velocity, yes. No, wrong. See here: for an object in an SR spacetime traveling with a velocity of v for a time ΔT, the proper time interval experienced is the same as in the SR time dilation formula, 377.47 years in this case. It shouldn't be; that doesn't take into account traveling the shorter distance caused by length contraction after accelerating. Why do you think the proper time is different from the dilated time? Time dilation is the coordinate difference between two inertial frames; proper time is the amount of time (dilated) that it takes to move across length-contracted space. This... Special relativity means moving clocks run slow.
The time dilation equation is: Δt₀ = Δt√(1 – v²/c²), where Δt₀ = spacecraft time interval (in years for this example), Δt = Earth-based time interval in years (528.57 in this case), v = spacecraft velocity, 0.7c, and c = speed of light (2.99792458×10⁸ m/s, rounded to 3.0×10⁸; it cancels out in this example). Δt₀ = 528.57√(1 – 0.49) = 528.57√0.51 = 528.57 × 0.7141428 = 377.47 years ...is time dilation. This... This is interesting, as you have tackled the problem from a different perspective. Here is my reasoning looking at the problem from the point of view of length contraction. L = L₀√(1 – v²/c²), where L₀ = Earth-based distance in light years (370 in this case), L = spacecraft-measured distance (in light years for this example), v = spacecraft velocity, 0.7c, and c = speed of light (2.99792458×10⁸ m/s, rounded to 3.0×10⁸; it cancels out in this example). L = 370√(1 – 0.49) = 370√0.51 = 370 × 0.7141428 = 264.23 light years. Using time = distance/speed: t = 264.23/0.7 = 377.47 years. ...is length contraction. To get the proper time you need both. When the tortoise accelerates, the distance between the starting point and the destination decreases and so does the coordinate time it takes to cover that distance. The amount of proper time is how long the trip takes on the watch of the tortoise. Maybe the equations are taking both into account in both examples but that's not how it reads. Edited by A-wal, 21 May 2017 - 11:54 AM. Posted 21 May 2017 - 12:33 PM I gave you the link to follow. If that doesn't convince you, I won't waste any more of my time on you. [math] \Delta \tau = \sqrt{\Delta T^{2} - (v_{x}\Delta T/c)^{2} - (v_{y}\Delta T/c)^{2} - (v_{z}\Delta T/c)^{2}} = \Delta T \sqrt{1 - v^{2}/c^{2}} [/math] As you can see, the last part of the equation for proper time is exactly the same as the expression for dilated time in SR. Edited by OceanBreeze, 21 May 2017 - 12:46 PM.
Posted 21 May 2017 - 01:30 PM But this... Special relativity means moving clocks run slow. The time dilation equation is: Δt₀ = Δt√(1 – v²/c²), where Δt₀ = spacecraft time interval (in years), Δt = Earth-based time interval in years (528.57 in this case), and v = spacecraft velocity, 0.7c. Δt₀ = 528.57√(1 – 0.49) = 528.57√0.51 = 528.57 × 0.7141428 = 377.47 years ... is time dilation alone, meaning that the journey in proper time would be 377.47 years if length in space weren't contracted, and this... This is interesting, as you have tackled the problem from a different perspective. Here is my reasoning looking at the problem from the point of view of length contraction. L = L₀√(1 – v²/c²), where L₀ = Earth-based distance in light years (370 in this case) and L = spacecraft-measured distance. L = 370√(1 – 0.49) = 370√0.51 = 370 × 0.7141428 = 264.23 light years. Using time = distance/speed: t = 264.23/0.7 = 377.47 years. ... is length contraction alone, meaning that the journey in proper time would be 377.47 years if time weren't dilated. Are you saying that both ways of doing it take both time dilation and length contraction into account? It doesn't look like it.
Let's denote by $C(n)$ the number of comparisons performed on an input of length $n$ (later on I will comment more on this definition). The assumption that the number of comparisons roughly doubles when doubling the input size translates to $C(2n) \approx 2 C(n)$. This is not a precise statement, so let us take it to mean that $$\lim_{n\to\infty} \frac{C(2n)}{C(n)} = 2.$$ There are many functions satisfying this statement. Some examples are $n (\log n)^k$, for any value of $k$. This condition cannot distinguish between algorithms with a linear number of comparisons and those with $n\log n$ or $n\log^2 n$ comparisons, and in particular cannot imply that quicksort uses $O(n\log n)$ comparisons. Let $D(n) = \log[C(n)/n]$, so that $$\lim_{n\to\infty} [D(2n) - D(n)] = 0.$$ What we can conclude is that $D(n) = o(\log n)$, which translates to $C(n) = n 2^{o(\log n)}$. Another problem with your analysis is that the number of comparisons could depend on the input, not only on the input size. In other words, $C(n)$ is not well-defined. It is known that quicksort has worst case number of comparisons $\Omega(n^2)$, but the average number of comparisons (over a random permutation) is only $\Theta(n\log n)$. We usually take $C(n)$ to be either the worst case number of comparisons or the average number of comparisons (for a random permutation). Yet another problem is that it's not clear how to establish $C(2n)/C(n) \to 2$. Is this the result of an experiment, a heuristic calculation, or a formal proof? You don't make it clear. Experiments might show that $C(2n) \approx 2C(n)$ for small $n$, but the asymptotic behavior might be different; $C(2n)/C(n)$ might grow slowly, but eventually tend to infinity, for example.
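A quick experiment makes the point concrete. The sketch below (the names `quicksort_comparisons` and `avg_C` are introduced here, not from the original post) counts comparisons in a Lomuto-partition quicksort averaged over random permutations; for $\Theta(n\log n)$ growth the measured ratio $C(2n)/C(n) \approx 2\log(2n)/\log n$ sits a bit above 2 and drifts toward 2 only slowly, exactly the behavior the answer warns cannot be distinguished from linear growth by the doubling test alone:

```python
import random

def quicksort_comparisons(values):
    """Sort a copy of `values` with Lomuto-partition quicksort,
    returning the number of element comparisons performed."""
    a = list(values)
    count = 0

    def qs(lo, hi):                  # sorts a[lo:hi] in place
        nonlocal count
        if hi - lo < 2:
            return
        pivot = a[lo]
        i = lo + 1                   # a[lo+1:i] holds elements < pivot
        for j in range(lo + 1, hi):
            count += 1               # one comparison per scanned element
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[lo], a[i - 1] = a[i - 1], a[lo]   # pivot into final position
        qs(lo, i - 1)
        qs(i, hi)

    qs(0, len(a))
    assert a == sorted(values)
    return count

def avg_C(n, trials=200):
    """Average comparison count over random permutations of size n."""
    return sum(quicksort_comparisons(random.sample(range(n), n))
               for _ in range(trials)) / trials
```

Running `avg_C(1024) / avg_C(512)` gives a ratio near 2.2, not 2: the doubling heuristic holds only approximately, and the slow drift is invisible at any single input size.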
I'll show how the Vandermonde determinant identity allows us to estimate the volume of certain spaces of polynomials in one variable (or rather, of homogeneous polynomials in two variables), as the degree goes to infinity. I'll explain what this is good for in the context of globally valued fields, and, given time constraints, may give some indications on the approach for the "real inequality" in higher projective dimension.

Keisler measures were introduced in the late 80s by Keisler, but they became central objects in model theory only recently with the development of NIP theories. This led naturally to the question of whether there might be a parallel theory of measures in other tame classes, especially in simple theories, where pseudofinite counting measures supply natural and interesting examples. We will describe some first steps toward establishing such a theory, based on Keisler randomizations and the theory of independence for NSOP1 theories in continuous logic.

We shall try to prove some surprising (and hopefully, correct) theorems about the relationship between the club principle (Hebrew: tiltan) and the splitting number, with respect to the classical s at omega and the generalized s at supercompact cardinals.

Better lucky than smart: realizing a quasi-generic class of measure preserving transformations as diffeomorphisms. Speaker: Matthew Foreman Abstract: In 1932, von Neumann proposed classifying measure preserving diffeomorphisms up to measure isomorphism. Joint work with B. Weiss shows this is impossible, in the sense that the corresponding equivalence relation is not Borel, and hence impossible to capture using countable methods.

It is a familiar fact (sometimes attributed to Ahlbrandt–Ziegler, though it is possibly older) that two $\aleph_0$-categorical theories are bi-interpretable if and only if their countable models have automorphism groups that are isomorphic as topological groups.
Conversely, groups arising in this manner can be given an abstract characterisation, and a countable model of the theory (up to bi-interpretation, of course) can be reconstructed.

In [Sh771] Shelah rediscovered an old result of Dudley on the non-admissibility of a Polish group topology on an uncountable free group. Crucial to his proof is a so-called Compactness Lemma for Polish groups, concerning satisfaction of algebraic equations for certain sequences of group elements converging to 0 (in distance).

The family of high rank arithmetic groups is a class of groups playing an important role in various areas of mathematics. It includes SL(n,Z) for n>2, SL(n,Z[1/p]) for n>1, their finite index subgroups and many more. A number of remarkable results about them have been proven, including Mostow rigidity, Margulis superrigidity and quasi-isometric rigidity.

Weak Prediction Principles. Speaker: Yair Hayut Abstract: Jensen's diamond is a well studied prediction principle. It holds in L (and other core models), and in many cases it follows from local instances of GCH. In the talk I will address a weakening of diamond (due to Shelah and Abraham) and present Abraham's theorem about the equivalence between weak diamond and a weak consequence of GCH. Abraham's argument works for successor cardinals. I will discuss what is known and what is open for inaccessible cardinals. This is joint work with Shimon Garti and Omer Ben-Neria.

Dependent theories now have a very solid and well-established collection of results and applications. Beyond first order, the development of "dependency" has been rather scarce so far. In addition to the results due to Kaplan, Lavi and Shelah (dependent diagrams and the generic pair conjecture), I will speak on a few lines of current research around the extraction of indiscernibles for dependent diagrams and on various forms of dependence for abstract elementary classes. This is joint work with Saharon Shelah.
Arbault sets (briefly, A-sets) were first introduced by Jean Arbault in the context of Fourier analysis. One of his major results concerning these sets asserts that the union of an A-set with a countable set is again an A-set. The next obvious step is to ask what happens if we replace the word "countable" by א_1. Apparently, an א_1 version of Arbault's theorem is independent of ZFC. The aim of this talk would be to give a proof (as detailed as possible) of this independence result. The main ingredients of the proof are infinite combinatorics and some very basic Fourier analysis.

Zilber's trichotomy conjecture, in modern formulation, distinguishes three flavours of geometries of strongly minimal sets --- disintegrated/trivial, modular, and the geometry of an ACF. Each of these three flavours has a classic "template" --- a set with no structure, a projective space over a prime field, and an algebraically closed field, respectively. The class of ab initio constructions with which Hrushovski refuted the conjecture features a new flavour of geometries --- non-modular, yet prohibiting any algebraic structure.

We will present briefly the "multiverse view" of set theory, advocated by Hamkins, that there is a multitude of set-theoretic universes, and not one background universe, and his proposed "Multiverse Axioms". We will then move on to present the main result of Gitman and Hamkins in their paper "A natural model of the multiverse axioms" --- that the countable computably saturated models of ZFC form a "toy model" of the multiverse axioms.

The notion of reflection plays a central role in modern set theory since the discovery of the well-known Lévy and Montague Reflection Principle. For any $n\in\omega$, let $C^{(n)}$ denote the class of all ordinals $\kappa$ which correctly interpret the $\Sigma_n$-statements of the universe, with parameters in $V_\kappa$.
This was originally posted on Math Stack Exchange, but no responses were received. I recently came across the following remarkable identity, due to Hardy: $$\displaystyle \int_{-\infty}^{\infty} \frac{J_{\mu}(a(z+t))}{(z+t)^{\mu}} \frac{J_{\nu}(a(\zeta + t))}{(\zeta + t)^{\nu}} \ \mathrm{d}t = \frac{\Gamma(\mu + \nu)\Gamma(\frac{1}{2})}{\Gamma(\mu + \frac{1}{2})\Gamma(\nu + \frac{1}{2})} \times \left( \frac{2}{a}\right)^{\frac{1}{2}} \frac{J_{\mu + \nu - \frac{1}{2}}(a(z - \zeta))}{(z - \zeta)^{\mu + \nu - \frac{1}{2}}},$$ where $J_{\nu}$ denotes the Bessel function of the first kind, $\mathrm{Re}(\mu + \nu) > 0$, $a > 0$, and $z, \zeta \in \mathbb{R}$. If anyone is interested in further details, it can be found in "A Treatise on the Theory of Bessel Functions", page 422. Some of the book (including the page referenced) can be found here. In my research, I am dealing with an integral which looks identical to this one, with the exception that it is in $d$ dimensions. In particular, the integral I am concerned with is $$\displaystyle \int_{\mathbb{R}^d} \frac{J_{d/2}(\rho |\alpha|)}{|\alpha|^{d/2}} \frac{J_{d/2}(\rho|k - \alpha|)}{|k - \alpha|^{d/2}} \ \mathrm{d}\alpha,$$ where $|\cdot|$ denotes the Euclidean norm on $\mathbb{R}^d$, and $\rho$ does not depend on $k \neq 0$ nor on $\alpha$. The obvious co-ordinate change is to take $|\alpha| = \tau$ and to assume without loss of generality that $|k| = r$, but this creates unwanted trigonometric terms in the argument of one of the Bessel functions, as well as destroying the symmetry of the two integrals. Can anyone see some way of relating Hardy's identity to my integral?
Minimax-robust prediction problem for stochastic sequences with stationary increments and cointegrated sequences

Abstract The problem of optimal estimation of the linear functionals $A\xi=\sum_{k=0}^{\infty}a(k)\xi(k)$ and $A_N\xi=\sum_{k=0}^{N}a(k)\xi(k)$, which depend on the unknown values of a stochastic sequence $\xi(m)$ with stationary $n$th increments, is considered. Estimates are obtained which are based on observations of the sequence $\xi(m)+\eta(m)$ at points of time $m=-1,-2,\ldots$, where the sequence $\eta(m)$ is stationary and uncorrelated with the sequence $\xi(m)$. Formulas for calculating the mean-square errors and spectral characteristics of the optimal estimates of the functionals are derived in the case of spectral certainty, where the spectral densities of the sequences $\xi(m)$ and $\eta(m)$ are exactly known. These results are applied to solving the extrapolation problem for cointegrated sequences. In the case where the spectral densities of the sequences are not known exactly, but sets of admissible spectral densities are given, the minimax-robust method of estimation is applied. Formulas that determine the least favorable spectral densities and minimax spectral characteristics are proposed for some special classes of admissible densities.
DOI: 10.19139/soic.v3i2.132
Answer First quadrant Work Step by Step $\theta=4000^{\circ}$. Subtracting full revolutions, $4000^{\circ}-11\times360^{\circ}=40^{\circ}$, so $\theta$ is coterminal with $40^{\circ}$. $\cos\theta = 0.766$ $\sin\theta = 0.643$ As $\cos\theta$ and $\sin\theta$ are both positive, the angle lies in the first quadrant.
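The same check can be done in a few lines (an editorial sketch, not part of the original answer):

```python
import math

theta = 4000 % 360                       # coterminal angle: 40 degrees
c = math.cos(math.radians(theta))        # cos 40° ≈ 0.766
s = math.sin(math.radians(theta))        # sin 40° ≈ 0.643

# Both positive, so the angle lies in the first quadrant
assert theta == 40 and c > 0 and s > 0
```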
For which $n$ is it true that two $n \times n$ matrices are similar if they have the same minimal polynomial and the same characteristic polynomial? Counterexample for $n = 4$: $$\begin{pmatrix} 2 & 1 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 1 \\ 0 & 0 & 0 & 2 \end{pmatrix}, \begin{pmatrix} 2 & 1 & 0 & 0 \\ 0 & 2 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \end{pmatrix}.$$ Reading down the diagonal, both of these have only $\lambda = 2$ as an eigenvalue, with algebraic multiplicity $4$. Computation reveals that $(z - 2)^2$ is the minimal polynomial of each (as can be seen from the fact that the largest Jordan block has size $2$). But they are not similar, as the former has an eigenspace of dimension $2$, whereas the latter has an eigenspace of dimension $3$. If $n \ge 4$, we can get a counterexample as follows. Choose a basis $e_1,...,e_n$ (for example, the standard basis). Let $A$ be the $n{\,\times\,}n$ matrix such that $Ae_1=0$, $Ae_k=e_1\;$ for $k>1$, and let $B$ be the $n{\,\times\,}n$ matrix such that $Be_1=0$, $Be_2=0$, $Be_3=e_1$, $Be_k=e_2\;$ for $k \ge 4$. Then $A,B$ are nonzero, but $A^2$ and $B^2$ are both zero. It follows that both of $A,B$ have minimal polynomial $x^2$, and characteristic polynomial $x^n$. But $A$ has rank $1$, and $B$ has rank $2$, hence $A,B$ are not similar. Here is my attempt at a solution for $n = 2,3$. If the Jordan forms of two matrices $A, B$ are the same, then they are obviously similar. Hence, we consider all combinations of non-equal Jordan forms and try to show that the matrices still have to be similar. $n = 2$: $A, B$ both have at most 2 eigenvalues. If both eigenvalues are the same, then the Jordan forms could be $ J_A = \begin{bmatrix}\lambda_1 & 1 \\0 & \lambda_1 \end{bmatrix}$ and $ J_B = \begin{bmatrix}\lambda_1 & 0 \\ 0 & \lambda_1 \end{bmatrix}$. However, in this case, the minimal polynomial is no longer the same since $J_B-\lambda_1 I = 0 \neq J_A - \lambda_1 I$. Hence, the matrices must have the same Jordan form.
If the two matrices have two distinct eigenvalues $\lambda_1, \lambda_2$ then the Jordan forms of the two matrices will be exactly the same, $\mathrm{diag}(\lambda_1, \lambda_2)$. $n = 3$: Again, if we have three distinct eigenvalues then the Jordan form will be the same for $A, B$, which implies that $A,B$ are similar. I am not really sure how to continue from here. It feels like there are many cases to consider. Does anyone have any suggestions? What happens, for instance, when we have $ J_A = \begin{bmatrix}\lambda_1 & 1 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_2 \end{bmatrix}$?
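As a numerical sanity check of the $n = 4$ counterexample above (an illustrative NumPy sketch, not part of the original argument):

```python
import numpy as np

# The two 4x4 matrices from the counterexample
A = np.array([[2, 1, 0, 0],
              [0, 2, 0, 0],
              [0, 0, 2, 1],
              [0, 0, 0, 2]], dtype=float)
B = np.array([[2, 1, 0, 0],
              [0, 2, 0, 0],
              [0, 0, 2, 0],
              [0, 0, 0, 2]], dtype=float)

N_A, N_B = A - 2 * np.eye(4), B - 2 * np.eye(4)

# Both are annihilated by (z - 2)^2 but not by (z - 2):
# same minimal polynomial (z - 2)^2, same characteristic polynomial (z - 2)^4
assert np.allclose(N_A @ N_A, 0) and np.allclose(N_B @ N_B, 0)
assert not np.allclose(N_A, 0) and not np.allclose(N_B, 0)

# Eigenspace dimensions for lambda = 2 differ, so A and B are not similar
dim_A = 4 - np.linalg.matrix_rank(N_A)   # dimension 2
dim_B = 4 - np.linalg.matrix_rank(N_B)   # dimension 3
assert (dim_A, dim_B) == (2, 3)
```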
To show that $\Bbb R^d$ with the norm $\Vert x \Vert_1 = \displaystyle \sum_1^d \vert x_i \vert, \tag 1$ where $x = (x_1, x_2, \ldots, x_d) \tag 2$ is Banach we need to prove it is Cauchy-complete with respect to this norm; that is, if $y_i \in \Bbb R^d \tag 3$ is a $\Vert \cdot \Vert_1$-Cauchy sequence, then there exists $y \in \Bbb R^d \tag 4$ with $y_i \to y \tag 5$ in the $\Vert \cdot \Vert_1$ norm. Now if $y_i$ is $\Vert \cdot \Vert_1$-Cauchy, for every real $\epsilon > 0$ there exists $N \in \Bbb N$ such that, for $m, n > N$, $\Vert y_m - y_n \Vert_1 < \epsilon; \tag 6$ if we re-write this in terms of the definition (1) we obtain $\displaystyle \sum_{k = 1}^d \vert y_{mk} - y_{nk} \vert < \epsilon, \tag 7$ and we observe that, for every $l$, $1 \le l \le d$, this yields $\vert y_{ml} - y_{nl} \vert \le \displaystyle \sum_{k = 1}^d \vert y_{mk} - y_{nk} \vert < \epsilon; \tag 8$ thus the sequence $y_{ml}$ for fixed $l$ is Cauchy in $\Bbb R$ with respect to the usual norm $\vert \cdot \vert$; and since $\Bbb R$ is Cauchy-complete with respect to $\vert \cdot \vert$ we infer that for each $l$ there is a $y^\ast_l$ with $y_{ml} \to y_l^\ast \tag 9$ in the $\vert \cdot \vert$ norm on $\Bbb R$; thus, taking $N$ larger if necessary, we have $\vert y_{ml} - y_l^\ast \vert < \dfrac{\epsilon}{d}, \; 1 \le l \le d, \; m > N; \tag{10}$ setting $y^\ast = (y_1^\ast, y_2^\ast, \ldots, y_d^\ast) \in \Bbb R^d, \tag{11}$ we further have $\Vert y_m - y^\ast \Vert_1 = \displaystyle \sum_{l = 1}^d \vert y_{ml} - y_l^\ast \vert < d \cdot \dfrac{\epsilon}{d} = \epsilon, \tag{12}$ that is, $y_m \to y^\ast \tag{13}$ in the $\Vert \cdot \Vert_1$ norm on $\Bbb R^d$; thus $\Bbb R^d$ is $\Vert \cdot \Vert_1$-Cauchy complete; hence $\Vert \cdot \Vert_1$-Banach.
Spectral Decomposition. Prove that if $A$ is symmetric, and orthogonally diagonalized by $P = [u_1 \cdots u_n]$, then $A =\sum_{k=1}^n \lambda_k \, u_k \,u_k^T$ where the $\lambda_k$ are the eigenvalues of $A$. Hint: If $A = P DP^T$ where $P$ (as given above) is orthogonal and $D = (\lambda_i \delta_{ij})$, use the definition of matrix multiplication, the fact that $D$ is diagonal, the orthonormality of $\{u_k\}_{k=1\dots n}$, and the following formula: if $A = XYZ$, where all are $n \times n$ matrices, then $a_{ij} = \sum_{k=1}^n \sum_{l=1}^n x_{ik}\, y_{kl} \, z_{lj}, \; \forall i, j = 1, \dots, n.$
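The identity to be proved can be checked numerically (an illustrative sketch; the random symmetric matrix is an assumption of the example, any symmetric $A$ works):

```python
import numpy as np

# Build a random symmetric matrix
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2

# eigh returns eigenvalues and an orthogonal P whose columns are the u_k
lam, P = np.linalg.eigh(A)

# Rebuild A as the sum of rank-one terms lambda_k * u_k u_k^T
A_rebuilt = sum(lam[k] * np.outer(P[:, k], P[:, k]) for k in range(4))
assert np.allclose(A, A_rebuilt)
```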
Uniaxial and Biaxial Stress in Silicon

Introduction

In this tutorial you will learn how to use QuantumATK to study the electronic properties of silicon under uniaxial and biaxial stress. In particular, you will learn how to use the Target Stress option for a geometry optimization (Optimize Geometry) to apply a specific stress to a crystal. Then you will calculate and analyze the electronic band structure and effective masses in the strained system. An important aspect of this tutorial is the symmetry of your crystal after the stress is applied. You will have to pay particular attention to this point, and to this end you will find the Brillouin Zone Viewer plugin very useful.

Uniaxial Stress

Set up the Calculation

In the Builder, import a silicon fcc bulk structure from the database and send it to the Script Generator.

Add a New Calculator: use the default LDA exchange-correlation potential and select a 9x9x9 k-point sampling; set the density mesh cut-off to 150 Hartree to have a better description of the silicon electronic structure.

Add OptimizeGeometry and set these parameters in order to perform a full optimization of the bulk structure:
- set force and stress tolerances to 0.0005 eV/Å and 0.0005 eV/Å³;
- uncheck Constrain cell but keep the target stress at zero.

Add Bandstructure analysis: choose 201 points per segment; choose the L, G, X path.

Add OptimizeGeometry and set these parameters in order to apply a uniaxial stress:
- set force and stress tolerances to 0.0005 eV/Å and 0.0005 eV/Å³;
- uncheck Constrain cell;
- uncheck Isotropic Pressure and add a target stress of 1 GPa for the \(x\) component of the stress tensor.
Note

The definition of target stress (target_stress) is:

- if a single value \(p\) is given, it will be interpreted as an external pressure, from which the internal target stress tensor is calculated as \(\sigma = \begin{pmatrix} -p & 0 & 0 \\ 0 & -p & 0 \\ 0 & 0 & -p \end{pmatrix}\);
- if a target stress tensor is given, it will be interpreted as the internal stress of the system, meaning that a negative entry on the diagonal will result in a compression in the corresponding direction, and vice versa. Note that the stress tensor is symmetric, so it is only necessary to define the upper triangle.

Pay attention to the fact that the sign convention is different in these two cases!

Add a second Bandstructure analysis: choose 201 points per segment; choose the L, G, X path.

Note

If a target stress tensor is given, the BravaisLattice of the configuration is automatically converted into a UnitCell, to enable changes in the shape of the cell. As described below, the high symmetry points of the Brillouin zone will change, since the cell will no longer be fcc after the stress is applied. However, at this point in the setup, the symmetry of the new cell is not known, so you will have to modify the symmetry points by hand in the Python script.

Send the script to the Editor and locate the last Bandstructure analysis block. Replace X by B in the route; see the next section for a detailed explanation of the modified Brillouin zone.

# -------------------------------------------------------------
# Bandstructure
# -------------------------------------------------------------
bandstructure = Bandstructure(
    configuration=bulk_configuration,
    route=['L', 'G', 'B'],
    points_per_segment=201,
    bands_above_fermi_level=All
)
nlsave('Silicon_uniaxial.nc', bandstructure)

You can download the full script here: Silicon_uniaxial.py.
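To make the first convention concrete, here is a minimal, QuantumATK-independent sketch of the pressure-to-stress mapping described in the note above (the function name is our own, not part of the ATK API):

```python
import numpy as np

def target_stress_from_pressure(p: float) -> np.ndarray:
    """Internal target stress tensor implied by an external pressure p:
    a positive pressure maps to a negative (compressive) diagonal."""
    return -p * np.eye(3)

print(target_stress_from_pressure(1.0)[0, 0])  # → -1.0
```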
Save and send the script to the Job Manager to run the calculations, which should take less than a minute.

Symmetry Considerations

Before analyzing the electronic structure of strained silicon, you will need to understand how the Brillouin zone and the notation of high symmetry points are handled by QuantumATK. From the LabFloor, drag and drop the optimized structure (gID000) and the strained structure (gID002) to the Builder. Plot the Brillouin zone of each structure with Bulk Tools -> Brillouin Zone Viewer...

The high symmetry points in the UnitCell representation of the strained fcc crystal are related to the fcc symmetry points as follows: A, B, and C all correspond to the X point in the unstrained fcc Brillouin zone. By increasing the stress you can see that B and C remain degenerate, but A is no longer degenerate with B and C, as expected for tetragonal symmetry. The L point is still called L, and is degenerate with X, Y, Z in the UnitCell notation.

The symmetry of the crystal after the uniaxial stress is applied is indeed tetragonal (space group 141), as expected. You can verify this by activating the strained crystal on the Stash, opening Bulk Tools -> Crystal Symmetry Information, and clicking Detect. On inspection of the lattice parameters you see that, again as expected, the cell is elongated along the x axis and contracted along y and z (an elastic response, the Poisson effect), due to the uniaxial deformation.

Analyze the Results

Electronic Band Structure

Now plot the band structures of the optimized cell and the uniaxially stressed cell. From the LabFloor, select the calculated Bandstructure objects and plot them with the Bandstructure Analyzer plugin. From these plots you can immediately see that the top of the valence band splits at Gamma. It is, however, more interesting to see that the \(\Delta\) valley is no longer degenerate. To see this effect more clearly you can calculate the band structure along the A-G-B path.
By further inspecting the band structure, you can conclude the following about uniaxially strained Si:

- L remains degenerate;
- the 6-fold \(\Delta\) valleys split into 2+4.

The absolute values of the band gaps are, however, not correct in this simulation, since the LDA exchange-correlation functional was used for simplicity.

Effective Masses

It is also very interesting to consider the effects of strain on the effective mass. From the band structure plot of the strained structure, click on the Effective Mass button. Following the example in the figure above, calculate the effective masses for the \(\Delta\) valleys, band index 4, along different directions:

- Longitudinal mass at \(\Delta_A\): fractional coordinate [0, 0.425, 0.425], along the [1,0,0] cartesian direction;
- Transverse mass (1) at \(\Delta_A\): fractional coordinate [0, 0.425, 0.425], along the [0,1,0] cartesian direction;
- Transverse mass (2) at \(\Delta_A\): fractional coordinate [0, 0.425, 0.425], along the [0,0,1] cartesian direction;
- Longitudinal mass at \(\Delta_B\): fractional coordinate [0.425, 0, 0.425], along the [0,1,0] cartesian direction;
- Transverse mass (1) at \(\Delta_B\): fractional coordinate [0.425, 0, 0.425], along the [1,0,0] cartesian direction;
- Transverse mass (2) at \(\Delta_B\): fractional coordinate [0.425, 0, 0.425], along the [0,0,1] cartesian direction.

You will find the following results:

- Longitudinal mass at \(\Delta_A\) (2-fold degenerate): 0.91
- Transverse masses at \(\Delta_A\): 0.185 (no split)
- Longitudinal mass at \(\Delta_B\) (4-fold degenerate): 0.898
- Transverse masses at \(\Delta_B\): 0.188 and 0.186

So, while all longitudinal and transverse masses are very close to the original \(\Delta\) valley masses (0.903 and 0.186, which you can compute from the unstrained crystal), you see splittings due to the broken symmetry. For a more detailed explanation of the calculation of silicon effective masses, please refer to the tutorial Effective mass of electrons in silicon.
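The quantity behind these numbers is \(m^* = \hbar^2 / (\mathrm{d}^2 E/\mathrm{d}k^2)\) at the band extremum. The QuantumATK-independent sketch below illustrates this with a synthetic parabolic band constructed to have \(m^* = 0.9\,m_e\) (illustrative numbers, not ATK output):

```python
import numpy as np

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837015e-31  # free-electron mass, kg

def effective_mass(k, E):
    """m* = hbar^2 / (d^2E/dk^2), with the second derivative taken by a
    central difference at the middle sample. k in 1/m, E in J; returns m*/m_e."""
    i = len(k) // 2
    dk = k[1] - k[0]
    d2E = (E[i + 1] - 2.0 * E[i] + E[i - 1]) / dk**2
    return HBAR**2 / d2E / M_E

# Synthetic parabolic band with m* = 0.9 m_e.
k = np.linspace(-1e9, 1e9, 11)
E = HBAR**2 * k**2 / (2.0 * 0.9 * M_E)
print(round(effective_mass(k, E), 3))  # → 0.9
```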
As an exercise, you are encouraged to study the masses at the L point and investigate possible symmetry breakings.

Biaxial Stress

Take the optimized (unstrained) silicon structure, send it to the Script Generator, and set up New Calculator, OptimizeGeometry, and Bandstructure as you did in the previous section. To apply a biaxial stress, set the xx and yy components of the stress tensor to 1 GPa in the Target Stress field in the Optimize Geometry dialog. Before running the calculation, send the script to the Editor, modify the Brillouin zone path to L, G, B as described above, and run the calculation.

Also in this case the cubic lattice is distorted to tetragonal symmetry. To see that clearly, drag and drop the optimized stressed structure to the Builder and apply the supercell transformation as indicated in the figure below. From Bulk Tools -> Lattice Parameters you can see the distorted lattice parameters of 5.44 Å and 5.38 Å. Thus, the biaxial stress is effectively equivalent to a uniaxial stress along the [001] direction, with opposite sign.
How would you tag boxes without brackets after completing a proof? For example, I needed to prove that $$\forall m, n \in \mathbb{N}\cup \{0\}, \ n^{2m + 1} \equiv -1 \pmod {n + 1}$$ I proved this by showing that $$n^{2m + 1} + 1 = n^{2m + 1} - (-1) = n^{2m + 1} - (-1)^{2m + 1}$$ and since $$a^k - b^k = (a - b)\sum_{i = 1}^k a^{k - i}b^{i - 1}$$ this meant $$n + 1 \mid n^{2m + 1} + 1$$ or $$n^{2m + 1} \equiv -1 \pmod {n + 1}$$ In the event that I want to draft up a proof using LaTeX, how would I tag a box $\Box$ to show that I have completed the proof without it having brackets like $(\Box)$? Namely, if I had a Statement $A$, I don't want it looking like $$\text{Statement $A$}\tag{$\Box$}$$ and if I use \quad and/or \qquad then it looks like $$\text{Statement $A$}\qquad\qquad\qquad\qquad \Box$$ Thank you in advance.

EDIT: I agree with what mixedmath says in their answer that this is not a reasonable thing to do in order to mark the end of a proof. I read your question simply as a TeX-nical (or MathJax-nical) question: "How do I mark an equation with $\Box$ rather than $(\Box)$ on the right?" Probably the intended question is related to possibilities for marking the end of a proof (in a way similar to \qed in LaTeX), which is a question related to style of writing and MathJax.
(And which means that I have misunderstood your question.) You might have a look at a past discussion here: \qed for MathJax here on Stack Exchange.

If you do not need MathJax markup in the tag, you can do this simply using \tag*. Compare the following two equations: $$a^2+b^2=c^2 \tag{1}$$ and $$a^2+b^2=c^2 \tag*{1}$$ It is slightly more complicated if you want some mathematical formula as part of the tag; see also: I can't put a Greek letter into \tag{…}. You can do this $$a^2+b^2=c^2 \tag{$\Box$}$$ and also this $$a^2+b^2=c^2 \tag*{$\Box$}$$

I would typically expect you to include a box inline, such as this. $\square$ An indication that you are trying to do something in a way that isn't how it is designed is that you are using \tag to produce something other than a tag or label. If you really want to shove a square to the far right, I only know how to do this (with MathJax) in a multline environment, as below. $$\begin{multline} \shoveright \square \end{multline}$$ But I would suggest that you never do this.

I mostly separate sections on this site with =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= which usually comes out matching the width of my answer window. My idea is that people can't read this site as carefully as a book they are holding in their hands; something that crosses the window gives a more reliable notice that something different is starting. If you want to say QED before that, fine.
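For comparison, in a full LaTeX document (as opposed to MathJax) the conventional solution is the amsthm proof environment, which places the tombstone at the right margin automatically; \qedhere moves it into a final displayed equation. A minimal sketch:

```latex
\documentclass{article}
\usepackage{amsthm}
\begin{document}

\begin{proof}
Since $a^k - b^k = (a-b)\sum_{i=1}^k a^{k-i}b^{i-1}$, taking $a = n$ and
$b = -1$ with exponent $2m+1$ gives
\[
  n + 1 \mid n^{2m+1} + 1. \qedhere
\]
\end{proof}

\end{document}
```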
As 'sheegaon' suggested, you can solve for an implied interest rate -- which is not necessarily the cost of borrowing the underlying stock -- using put-call parity. As you probably know, an implied volatility algorithm increases and decreases its implied volatility guess until the theoretical and market prices of an option converge. Similarly, to ...

The cost of the hedge does not appear directly in the price of any one option, but rather will appear as an apparent violation of put-call parity. However, due to differing demand for puts and calls, this merely widens the arbitrage bounds ordinarily set by put-call parity, but does not imply a single implied borrow rate. In other words, the borrow rate ...

A "Valuation Short" is a short idea based solely on valuation (fundamentals). This is presumably a "bad" idea and one of the quickest ways of losing all your money. A "Structural Short" is said of shorting a company with a mature business model in decline, or shorting a stock that is becoming obsolete or significantly less competitive due to some major ...

Suppose that the given condition is true. You want to construct an arbitrage portfolio to take advantage of this. Now, $d$ is an interest rate, and the condition suggests that $d$ is too high. So you will want to receive $d$ in order to profit. If you could, you would borrow money at $r$ and lend it to the stock broker or exchange to collect the interest ...

Given one satisfies margin requirements, anyone can short exchange-traded options as long as local regulators permit (American retail investors at present are not permitted, for example, to trade futures options). As long as there is a market and one finds a willing counterpart, nothing speaks against shorting options contracts. Some brokers might require a ...

Calculate the beta of the VIX Dec 18 contract to the SPY.
Then apply this equation: $$\text{hedge ratio} = \frac{1000\,\beta}{\text{SPY}_{\text{price}}}$$ You then take the hedge ratio, round it, and that will give you the approximate number of shares to hedge with. This is a simple solution; there are other ways to calculate the hedge ratio. Just as a note, this will ...

Regarding your first question, here is a simplified explanation: Assume a company A, worth 100\$, split into 100 outstanding shares. You own 5% of this company and, therefore, 5 shares or 5\$ of the company value. Now A issues 100 new shares to someone. But this will not necessarily raise the value of A. So afterwards you own 5 shares of a company worth 100\$ ...

Of course you can sell options, and you can certainly sell options on most major indices. Thinkorswim (TDAmeritrade) offers an excellent platform. Moreover, one can short options without "full" account privileges provided a defined-risk trade is entered (such as an iron condor or call spread).

It depends on the derivatives exchange, but e.g. Eurex can also be used by retail investors as long as they are qualified (concerning their maximum risk level) and their bank offers access to it (some at least do that).

'Inst. Owned' almost surely means "Institutionally Owned". With respect to the 103% ownership reported: discrepancies caused by varying time lags in reporting ownership may skew the results. Second, and perhaps most likely, is short selling. I might own 100 shares, lend them to Bill, and Bill might sell (short) the stock to Nancy. In this case both I ...

I think this paper (which I skimmed once a long time ago and no longer have access to) may provide some insight: Cohen, Lauren, Karl B. Diether, and Christopher J. Malloy. "Shorting Demand and Predictability of Returns." Journal of Investment Management 7, no. 1 (2009): 36-52. It seems to consider stock loan fees, which may be a proxy for "hard to borrow".
In 2008, the SEC instituted an exemption for market makers to allow them to sell short for the purposes of bona fide activities related to market making in options. However, "for new positions, a market maker may not sell short if the market maker knows a customer or counterparty is increasing an economic net short position".

You should make your borrow cost sufficient to dissuade unlimited short selling. In practice, each short would require you to borrow shares from your broker. This is usually handled when computing transaction costs. You should account for this in your trading algorithm or in the factor model itself. A simple method would make shorts some N% more expensive ...

Are there any regulations barring shorting these 'shadily marketed stocks'? In the markets I'm most familiar with, Canadian, the answer is no. To be honest, the biggest hurdle you'll come across is: how do you short penny stocks that are being pumped and dumped? 1) There will often be no options on these, so you can't use that avenue. 2) Where will you get ...

What you're trying to do is express all your positions in terms of a risk currency. Then you can track your PnL in only one currency. You need to express all this in an Excel spreadsheet and include some rates, a bit like the screenshot here.

Suppose you are short the index option and long the single-stock options (all vanillas). You size it in such a way that at inception you have flat vega, and you hedge out all your deltas. Now assume the market moves down. All your options move away from ATM and they all have less vega (both your long single-stock options and your short index option) ...

Yes, it is the borrow rate that the short seller pays to borrow the stock from the long holder. The rate increases as the stock goes "special" -- meaning that there is a large demand to borrow stock from short sellers. There are a number of investors, such as index funds and pension funds, that are required to hold all the stock in an index.
Therefore they ...

If you are analysing the performance of a long/short type of portfolio, you typically do not calculate returns on the portfolio value itself. Typically you would calculate your daily PnL in dollar terms and, for example, calculate the Sharpe ratio of that quantity. The only "return" quantity that truly makes sense for such a long/short portfolio is the return on ...

You pretty much have this correct. You don't have to have the spread equal to zero to unwind the trade. All you would care about is that the stock you bought (stock A) outperform the stock you shorted (stock B) on a dollar basis in order for this to be a winning trade. In real life you would still need some capital in the trade due to margin requirements on the ...

FREE: Short Sale Data Source: SEC. Data Reported to a FINRA TRF (NASDAQ TRF and NYX TRF); this is a direct link to all of the short sale data available. Short Sale Volume Data (NASDAQ TRF, NYX TRF, ADF and ORF). PAID: Short Squeeze.

For those who also need an answer as I did, the explanation is here: https://investorshub.advfn.com/boards/read_msg.aspx?message_id=57101068 TLDR: Those data are more or less worthless, because not only actual short sales (speculation) are included.

Negative y means you're short the stock. This may have costs, just as being long the stock may have income from lending it to someone else. These costs can easily be incorporated as a growth adjustment to the stock. Since it costs more to borrow than you make from lending, the growth adjustment should depend on whether your overall position is long or ...

There are two conditions: $W_1=s_0$ has to be non-negative, which means $\sigma_2^2 - \sigma_1\sigma_2\rho \ge 0$, which simplifies to $\sigma_2 \ge \sigma_1 \rho$ (I assumed $\sigma_2 \ne 0$). The second condition is that $W_2=1-s_0$ also has to be non-negative, i.e. $s_0 \le 1$, so $\sigma_2^2 - \sigma_1\sigma_2\rho \le \sigma_1^2+\sigma_2^2-2\sigma_1\sigma_2\rho$ ...
The first method gives the performance when rebalancing the position every day so you have a constant dollar exposure; the second is the result of shorting USD 1 and then keeping the position open, i.e. a short-and-hold return. They can be quite different in the long run and if there is high volatility. I recommend the short-and-hold with, possibly, ...

You assume that you receive the 6% interest rate on the proceeds of the short sale (2474.55), and you assume that you will receive the 4% interest rate on the haircut (2474.55). This is not stated in the question, and the only thing you should calculate based on the information in the question is the loss from not being able to receive the 6% interest on the ...

If you are short, you need to use log((entryprice - fees)/exitprice). It is the same logic as in the long log-return case; you just need to swap your entryprice and exitprice inputs. In this case, entryprice is the selling operation and exitprice will be the buying operation (just the opposite).
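The last point can be sketched in a few lines of Python (the function name and the fee treatment are our own illustration, not from the original answer):

```python
import math

def log_return(entry_price: float, exit_price: float,
               fees: float = 0.0, short: bool = False) -> float:
    """Round-trip log return. For a short, the entry is the sell and the
    exit is the buy-back, so the price ratio is inverted relative to a long."""
    if short:
        return math.log((entry_price - fees) / exit_price)
    return math.log((exit_price - fees) / entry_price)

# Sell at 100, buy back at 90: a profitable short.
print(round(log_return(100.0, 90.0, short=True), 4))  # → 0.1054
```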
For general discussion about Conway's Game of Life.

Posts: 3138
Joined: June 19th, 2015, 8:50 pm
Location: In the kingdom of Sultan Hamengkubuwono X

Seems like there are "stripes" in 100% fill! Here's a cool case that's a bit "exaggerated", but I see stripes everywhere now, especially in large 1:1 scale fills.

Attachments: weirdfill.png (250.75 KiB)

fluffykitty wrote: What was the filled area?

I would guess 641x308, based on the population.

Tested and also works for me - it seems as though larger selected areas show more periodicity.

Saka wrote: Seems like there are "stripes" in 100% fill!

This problem has been around for a long time. Given that it's easy to avoid (by adjusting the width of the selection), it's not something I'm all that bothered to fix. The standard C rand() function Golly uses is not a high-quality random number generator. Actually, that might only be true on Windows -- I was able to reproduce the stripes on my Win7 system but not on my Mac, so presumably gcc has a better rand() implementation. If you don't want to adjust the selection width, then write a simple script to do your random fills. You'll probably have to use Python, because Lua also uses rand() in its math.random function.
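That workaround might look something like the sketch below: the helper function and density handling are our own, while the commented calls (g.getselrect, g.setcell) are from Golly's Python scripting API.

```python
import random

def random_fill_states(width, height, fill_percent, rng=None):
    """Cell states for a width x height block at the given fill density,
    using Python's own RNG instead of the C rand() behind Golly's Random Fill."""
    rng = rng or random.Random()
    return [[1 if rng.randrange(100) < fill_percent else 0
             for _ in range(width)]
            for _ in range(height)]

# Inside Golly, apply it to the current selection with the real API:
#   import golly as g
#   x, y, w, h = g.getselrect()
#   for r, row in enumerate(random_fill_states(w, h, 50)):
#       for c, state in enumerate(row):
#           g.setcell(x + c, y + r, state)
```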
Golly crashes when running a pattern in a Neumann LTL rule with S0 or B1. Fix and cause?

Saka wrote: Golly crashes when running a pattern in a Neumann LTL rule with S0 or B1. Fix and cause?

I can't reproduce any crash, so can you be more specific about the exact rule? Note that Gnarl.mcl in Patterns/Larger-than-Life has B1 and doesn't crash (on my Mac at least), nor does it if I change the rule to use S0.

Andrew wrote: @Saka: Are you able to post a pattern using an NN rule that causes Golly to crash? If not, I'll assume you were using an old beta version that was known to have bugs.

Sorry, I forgot about this. The rule was B2,C0,M0,S1..12,B1..12,NN. I was just seeing the neighborhood.

I'm having some trouble getting Golly 3.0 to start. I'm running Debian Testing. I downloaded "golly-3.0-gtk-64bit.tar.gz" from SourceForge and unzipped it into a directory in my home directory. Then double clicking on "golly" did nothing, so I tried

Code: Select all
$ ./golly

but got back

Code: Select all
./golly: error while loading shared libraries: libpng12.so.0: cannot open shared object file: No such file or directory

I looked up "libpng12.so.0" and found it was part of an obsolete package, libpng12-0. Of course I tried

Code: Select all
sudo apt-get install libpng12-0

and got back

Code: Select all
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package libpng12-0 is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'libpng12-0' has no installation candidate

as I sort of expected. Should I try to install libpng12-0 directly?
Or is there something else I should do?

Saka wrote: The rule was B2,C0,M0,S1..12,B1..12,NN

Obviously you meant R2, but yes, I can now reproduce the crash, so much thanks for the report. Looks like we'll have to do a 3.1 release sooner or later. I might wait a couple of weeks to see if any other serious bugs are found. The crash only occurs when using small-range (< 5), unbounded NN rules, so avoid those until we have a fix.

Macbi wrote: Should I try to install libpng12-0 directly? Or is there something else I should do?

I guess that's the first thing to try. I can't help much with Linux problems. What I have discovered is that it seems to be impossible to distribute a binary that will run on all flavors of Linux. If the above fails, then I can only think of 2 other possible solutions:

1. If you have an existing libpng* file somewhere in your system (/usr/lib or /usr/local/lib?) then you might try creating a symlink called libpng12.so.0 that points to that file. If the two versions are binary compatible then it should work.

2. Failing that, try building Golly from source. Get the src tarball from SourceForge and follow the Linux-specific instructions in docs/Build.html.

gameoflifemaniac wrote: I found a bug that makes a pattern advance after resizing when running at a fast speed.

I've no idea what you mean. Please describe the problem in more detail, and the exact steps needed to reproduce it.

gameoflifemaniac wrote: Try for example triple-Snark-wick-extruder.mc. Run at 8^5. Use the scroll bar. It will advance ~1000 generations, not 65536, every time you rotate the scroll bar. Will this be fixed in 3.1?

I don't think that's a bug; it's just that your computer caps at a certain speed, so when you interact with the UI it updates the screen, also so that it doesn't look like Golly is frozen.
I know this is a really stupid and minor detail, but wouldn't it make more sense for the Non-Totalistic folder to be a subfolder of the Life-Like folder instead of being a top-level folder? And by extension, HashLife under Life?

muzik wrote: I know this is a really stupid and minor detail, but wouldn't it make more sense for the Non-Totalistic folder to be a subfolder of the Life-Like folder instead of being a top-level folder? And by extension, HashLife under Life?

There's certainly a minor collision between theory and practice in classifying the rule-type folders. Technically I would think the Life-Like folder would be a subfolder of the Non-Totalistic folder -- and then the Life folder would be a subfolder under that, with all of its sub-categories. That seems like way too many levels, so the relatively flat folder organization seems like a decent compromise.

I'm really pretty happy with the categories we have now. Each top-level folder really does contain a fairly recognizable category of patterns in it -- patterns inside the folder have something in common, and different folders have very different types of patterns. So a new user exploring the collection can see a good representative selection of available patterns, and decide fairly quickly which folder(s) to focus on.

One of These Things Is Not Like The Others

The HashLife folder is a special case, because it highlights an algorithm, not a rule. In theory it definitely doesn't belong as a subset of the Life folder, because the HashLife algo can be applied equally well to any Life-Like rule -- or to Wolfram 1D rules. In practice, almost all of the really ridiculously large engineered patterns ever created have been Conway's Life patterns...
so B3/S23 is bound to be disproportionately represented in the HashLife folder. In fact, wolfram22.mc is the one lonely pattern in there that doesn't happen to be B3/S23. Maybe we should put a few more Life-Like patterns in there to show that the HashLife advantage isn't specific to Conway's Life. As a random example, Patterns/Life-Like/Morley/breeder2.rle runs immeasurably faster at high step sizes with HashLife than with QuickLife.

... No, Probably Not

The problem is that nobody is going to expect to find a B368/S245 pattern like breeder2.rle in the HashLife folder. We'd have to call it "B368-S245-breeder2.mc", or something like that. And it's hard to find a Life-Like pattern that really needs to be in the HashLife folder and saved as a macrocell file -- for most purposes, breeder2.rle runs perfectly well in QuickLife. So... it seems like the correct primary category for any pattern in a Life-Like rule is probably "Life-Like", unless it's a particularly huge pattern that really shows off HashLife in some way. The patterns currently in HashLife are mostly those that QuickLife can't run at all, or at least can't run very well.

When In Doubt, Leave It Where It Is

Decisions about whether or not to move patterns around are painfully subjective and likely to cause disagreements. For some patterns, it may be important to track down and change references in Golly's Help files, or even on the LifeWiki or other sites. So by default we mostly haven't been moving stuff -- we just add and subtract here and there, from one release to the next. The Golly 3.0 reorganization of Self-Rep patterns was the exception that proves the rule. Hopefully that was worth doing, because it makes it a little clearer to a new user what "Banks", "Codd", "Devore", and "JvN" are, or at least that they're related to each other somehow and to the idea of self-replication.

Help Wanted

Golly's pattern collection could definitely use a lot of help to get it back to "cutting-edge".
The best I can claim for it now is something like "cutting-edge in a few random places". It would be wonderful if more people would really roll up their sleeves and put on their pattern-collector hats. -- But this is definitely too big a topic to address in scattered messages between Golly 3.0 bug reports and feature requests. Let's move future discussion of pattern collection changes to a new thread.

I noticed the [1] button in the top-left corner of Golly doesn't work. I'm on Windows 7, 64-bit, version 3.0b3.

Attachments: 1button.png (309 Bytes)

What's next for Golly, Andrew?

Also, what about a special feature for rule-generating scripts: if a pattern is loaded in a rule which is generated by an available script, Golly automatically runs the script using the rule as the input? So for example, if a pattern using the alternating rule B3_S23--B2_S is loaded but the rule is not available yet, Golly will automatically run the alternating rule generator script and use B3_S23--B2_S as the input. To make Golly do it automatically, there should be a "rule generators" panel in the settings that allows you to tell Golly what a type of rule looks like and what script to associate it with.
For example, the alternating rule definition is like

Code: Select all
*_*--*_*

Or it could use Python and return True.
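The proposed "rule generators" panel could match rule strings against such definitions. Here is a minimal sketch; the glob-to-regex translation is our own guess at how a definition like *_*--*_* might be interpreted, not an existing Golly feature:

```python
import re

# Treat the glob-style definition *_*--*_* as "anything, underscore,
# anything, '--', anything, underscore, anything".
ALTERNATING = re.compile(r"^[^_-]+_[^_-]*--[^_-]+_[^_-]*$")

def is_alternating_rule(rule: str) -> bool:
    """Would this rule string be handed to the alternating-rule generator?"""
    return ALTERNATING.match(rule) is not None

print(is_alternating_rule("B3_S23--B2_S"))  # → True
print(is_alternating_rule("B3/S23"))        # → False
```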
A Few LaTeX Tips

LaTeX is a high-quality typesetting system and the de facto standard for the publication of scientific documents. However, I never got a chance to learn how to use it formally. Most of my LaTeX knowledge comes from learning by doing. As I'm working on my Ph.D. thesis, I would like to share a few LaTeX tips that I wish I had known earlier. I got most of them from working with my advisors, internship hosts, and collaborators. There is still a lot for me to learn about LaTeX. Any comments and suggestions are welcome!

1. Non-breaking spaces

In digital typesetting, a non-breaking space is a space character that prevents an automatic line break at its position. It is like an invisible glue between two words. There are various situations where a non-breaking space is used. For example, if the phrase "100 km" does not fit at the end of a line, the word processor may insert a line break between "100" and "km", which is not desirable. LaTeX uses the ~ symbol as a non-breaking space. It is usually inserted before any numeric or alphabetic reference, for example, Figure~\ref{fig:foo} and Section~\ref{sec:bar}. Compare the following examples:

Without a non-breaking space: Table 3
With a non-breaking space: Table~3

2. Citations using \citet and \citep

The natbib package is commonly used for handling references in LaTeX. It is included in the style files by default for many conferences, including ICML, NIPS, ICLR, etc. See here for a reference sheet for natbib. The two basic citation commands in natbib are:

\citet for textual citations,
\citep for parenthetical citations.

The difference is that \citet{jon90} prints Jones et al. (1990), while \citep{jon90} prints (Jones et al., 1990). The choice depends on whether the authors are to be read as part of the sentence. According to the formatting instructions for ICLR 2019: When the authors or the publication are included in the sentence, the citation should not be in parenthesis (as in "See Hinton et al.
(2006) for more information.”). Otherwise, the citation should be in parenthesis (as in “Deep learning shows promise to make progress towards AI (Bengio & LeCun, 2007).”). The command \cite should be avoided because its behavior depends on the citation format and might cause inconsistency. According to the natbib documentation: The standard LaTeX command \cite should be avoided, because it behaves like \citet for author-year citations, but like \citep for numerical ones.

3. Mathematical symbols

When writing mathematical formulas, people follow certain notational conventions. In fact, there is an ISO standard, ISO 80000-2, that defines mathematical signs and symbols. I found an article titled Typesetting mathematics for science and technology according to ISO 31/XI by Claudio Beccari. (ISO 31/XI is the precursor of ISO 80000-2.) The recommendations below are excerpts from the article. "Roman type" refers to upright letters and "italic type" refers to sloping letters.

a) Variables and functions are represented by one italic letter with modifiers such as subscripts, superscripts, primes, etc.

Correct: L(x)
Correct: L'(x)
Correct: L_{\mathrm{adversarial}}(x)
Incorrect: LOSS(x)
Incorrect: Loss(x)

b) Mathematical operators indicated with letters must be set in roman type. This includes the predefined operators \exp, \log, \sin, \tanh, \det, \min, \inf, as well as user-defined operators.

Correct: \sin(2x) = 2 \sin(x) \cos(x)
Incorrect: sin(2x) = 2 sin(x) cos(x)
Correct: \operatorname{softmax}(x)
Incorrect: softmax(x)

c) A particular operator, the operator of differentiation, should be set in roman type. This one is somewhat controversial. In some fields, italic type seems more common. I personally prefer roman type.
Correct: \int x \mathrm{d} x
Incorrect: \int x dx

d) Subscripts and superscripts that do not represent mathematical variables should be set in roman type. The first row below is correct because $n$ represents a variable there.

Correct: S_n = \sum_{i=1}^n X_i
Correct: L_{\mathrm{adversarial}}(x)
Incorrect: L_{adversarial}(x)

4. Avoid \vspace

Most conferences have strict page limits. More often than not, eight pages are not enough to adequately present a brilliant idea. If only Fermat had had more space in the margin! A common trick is to use \vspace. It can reduce the length of vertical space before or after a section, figure, or table. However, this essentially alters the style template. Many formatting instructions explicitly forbid the use of \vspace. It might even result in the rejection of the paper. From the formatting instructions for AAAI 2019: if your paper is obviously "squeezed" it is not going to be accepted. Reducing the vertical spaces also makes the paper less readable for the reviewers, so it is definitely not advisable. Many conferences support the submission of supplementary material. If the paper is too long, consider reorganizing it and moving some sections to the supplementary material.
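The tips above can be collected into one small snippet (a sketch; `fig:foo`, `sec:bar`, and the `jon90` citation key are placeholder names):

```latex
% Tip 1: non-breaking spaces before numeric/alphabetic references
See Figure~\ref{fig:foo} and Table~3 in Section~\ref{sec:bar}.

% Tip 2: textual vs. parenthetical citations with natbib
\citet{jon90} showed this first.      % -> Jones et al. (1990) showed this first.
This was shown before \citep{jon90}.  % -> This was shown before (Jones et al., 1990).

% Tip 3: upright type for named operators and textual subscripts
\DeclareMathOperator{\softmax}{softmax}  % in the preamble; requires amsmath
$\softmax(x), \quad L_{\mathrm{adversarial}}(x), \quad \int x \,\mathrm{d}x$
```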
This question makes more sense when we don't limit ourselves to real numbers. For any $n = d + 1 > 1$ and $u = (u_0, u_1, \ldots, u_d) \in \mathbb{C}^n$, consider the following product $$\Lambda(u_0,\ldots,u_d) = \prod_{(\epsilon_1,\ldots,\epsilon_d) \in \{ \pm 1 \}^d}\left( \sqrt{u_0} + \sum_{k=1}^d \epsilon_k \sqrt{u_k} \right)\tag{*1}$$ We can expand $\Lambda(\cdots)$ into a homogeneous polynomial in $\sqrt{u_k}$ of degree $2^d$, $$\Lambda(u_0,\ldots,u_d) = \sum_{(e_0,\ldots,e_d)\in \mathbb{N}^n}A_{e_0,\ldots,e_d} \prod_{k=0}^d\sqrt{u_k}^{e_k} \tag{*2}$$ whose coefficients $A_{e_0,\ldots,e_d} \in \mathbb{Z}$ vanish unless $e_0 + \cdots + e_d = 2^d$. Consider the effect of flipping the sign of $\sqrt{u_\ell}$ for some $\ell \ge 1$. In $(*1)$, this amounts to flipping $\epsilon_\ell$ in every factor, so it merely rearranges the order of the product and leaves the value of $\Lambda(\cdots)$ untouched. In $(*2)$, the coefficient $A_{e_0,\ldots,e_d}$ picks up a factor $(-1)^{e_\ell}$. Since the value of the product doesn't change, $A_{e_0,\ldots,e_d}$ vanishes unless $e_\ell$ is even. Since this holds for every $\ell \ge 1$ and $A_{e_0,\ldots,e_d}$ vanishes unless $e_0 + \cdots + e_d = 2^d$, $A_{e_0,\ldots,e_d}$ also vanishes unless $e_0$ is even. This implies that in the expansion $(*2)$, all square roots disappear. As a result, $\Lambda(\cdots)$ is a homogeneous polynomial in $u_0,\ldots, u_d$ of degree $2^{d-1}$: $$\Lambda(u_0,\ldots,u_d) = \sum_{(e_0,\ldots,e_d)\in \mathbb{N}^n}B_{e_0,\ldots,e_d} \prod_{k=0}^d u_k^{e_k} \tag{*3}$$ whose coefficients $B_{e_0,\ldots,e_d} \in \mathbb{Z}$ vanish unless $e_0 + \cdots + e_d = 2^{d-1}$. If $\sqrt{u_0} \pm \sqrt{u_1} \pm \cdots \pm \sqrt{u_d} = 0$ for any choice of signs of the square roots, then by construction $u_0, \ldots, u_d$ satisfy the polynomial equation $\Lambda(u_0,\ldots,u_d) = 0$. For the problem at hand, take $n = 3$ and substitute $(u_0,u_1,u_2)$ by $(ax + \alpha, bx+\beta, cx+\gamma)$.
The equation $\sqrt{a x + \alpha} \pm \sqrt{b x + \beta} \pm \sqrt{cx + \gamma} = 0$ therefore leads to a homogeneous polynomial equation of degree $2$ in $ax+\alpha, bx+\beta, cx+\gamma$: $$\Lambda(ax+\alpha, bx+\beta, cx+\gamma) = 0$$ Expanding this polynomial in powers of $x$, we obtain a quadratic equation in $x$: $$C(\cdots) x^2 + D(\cdots) x + E(\cdots) = 0$$ It is easy to see that the coefficient $C(\cdots)$ depends only on $(a,b,c)$: setting $\alpha, \beta, \gamma$ to $0$, homogeneity gives $\Lambda(ax,bx,cx) = x^2\Lambda(a,b,c)$, so $C(\cdots) = \Lambda(a,b,c)$. By setting $x$ to $0$, we find $E(\cdots) = \Lambda(\alpha,\beta,\gamma)$. This leads to an equation of the form $$\Lambda(a,b,c)x^2 + D(\cdots)x + \Lambda(\alpha,\beta,\gamma) = 0$$ Now comes the mysterious condition $\sqrt{a} \pm \sqrt{b} \pm \sqrt{c} = 0$. When this condition is fulfilled, $\Lambda(a,b,c) = 0$, and the above equation simplifies to a linear equation in $x$: $$D(\cdots)x + \Lambda(\alpha,\beta,\gamma) = 0$$ We can determine the last unknown coefficient $D(\cdots)$ by setting $x$ to $1$. In the end, we have: when $\sqrt{a} \pm \sqrt{b} \pm \sqrt{c} = 0$, the equation $\sqrt{ax+\alpha} \pm \sqrt{bx+\beta} \pm \sqrt{cx+\gamma} = 0$ leads to a linear equation in $x$: $$\Lambda(a+\alpha,b+\beta,c+\gamma)x + \Lambda(\alpha,\beta,\gamma)(1-x) = 0$$ where $$\Lambda(u,v,w) = u^2 + v^2 + w^2 - 2(uv+vw+uw)$$ In a certain sense, one can argue this equation is simple because its dependence on $x$ is linear. Unlike the general case where $\Lambda(a,b,c) \ne 0$, the solution for $x$ no longer involves any radicals. Whether one agrees this is simple is up to one's own judgment. To be honest, I don't.
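For $d = 2$, the product definition of $\Lambda$ can be checked numerically against the quadratic form $u^2+v^2+w^2-2(uv+vw+uw)$ quoted above. A small sketch (not part of the original argument):

```python
import itertools
import math

def Lambda_product(u0, u1, u2):
    """Lambda as the product over all sign choices (definition (*1), d = 2)."""
    p = 1.0
    for e1, e2 in itertools.product((1, -1), repeat=2):
        p *= math.sqrt(u0) + e1 * math.sqrt(u1) + e2 * math.sqrt(u2)
    return p

def Lambda_quadratic(u, v, w):
    """The expanded polynomial form: u^2 + v^2 + w^2 - 2(uv + vw + uw)."""
    return u * u + v * v + w * w - 2 * (u * v + v * w + u * w)

# the two agree on arbitrary non-negative inputs
for a, b, c in [(1.0, 2.0, 3.0), (4.0, 9.0, 25.0), (0.5, 0.25, 2.0)]:
    assert abs(Lambda_product(a, b, c) - Lambda_quadratic(a, b, c)) < 1e-9

# and Lambda vanishes when sqrt(a) + sqrt(b) = sqrt(c), e.g. 1 + 2 = 3:
assert abs(Lambda_quadratic(1.0, 4.0, 9.0)) < 1e-12
```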
Contrary to DaftWullie's answer, it is possible to implement a CNOT gate in a photonic system with 100% efficiency. However, there are caveats to this - it depends on what's used as the qubits (or, as this is a photonic system, potentially qudits) in the system. KLM: A photon as a qubit The first thing that most people think of in terms of photonic qubits is polarisation. In this case, postselection (/heralding) is generally required. This was theoretically shown to be possible by Knill, Laflamme and Milburn (KLM) in 2001. Within a couple of years, the first probabilistic photonic CNOT gate was demonstrated by O'Brien et al. (arXiv version) in an equivalent scheme, as shown in figure 1. Figure 1: Circuit diagram of a probabilistic 2-photon CNOT gate. Each photon (control and target) is encoded from polarisation into spatial modes. After postselecting on a single photon in $C_{\text{out}}$ and a single photon in $T_{\text{out}}$: when the control photon, $C$, is in spatial mode $C_0$, the identity operation is performed on the target photon, $T$, while when $C$ is in $C_1$, the NOT (X) operation is performed on $T$. The gate has a probability of success of 1/9 and uses beamsplitters. Image taken from Figure 1a of O'Brien et al. One variation of this is to use a nonlinear phase shift to make a deterministic version. While the above may not sound overly great for the prospects of optical quantum computing, encoding a qubit in the polarisation/2 spatial modes of a photon is far from the only way to perform optical quantum computing. Reck: Many modes make... many dimensions One other such method was proposed before KLM by Reck et al. (shown in figure 2) and has since been improved upon by Clements et al. Here a single photon is encoded in some number, $d$, of spatial modes. This is equivalent to a $\log_2 d$-qubit system and can be used to implement any unitary.
For a 2-qubit system, this is equivalent to having 4 spatial modes labelled $\left|00\right>, \left|01\right>, \left|10\right>$ and $\left|11\right>$, and a CNOT operation is equivalent to swapping the bottom two modes $\left(\left|10\right> \text{ and } \left|11\right>\right)$. Figure 2: Image of a 6-mode Reck-scheme chip, which can be used to implement a deterministic 'CNOT' gate. It uses phase shifters and beam splitters to build up a unitary evolution over the modes of the system. Image taken from Figure 1 of Carolan et al. Of course, it's not quite that simple and, due to requiring an exponential number of modes, the Reck scheme isn't generally considered to be overly scalable. That leaves us with the (final) two options1: continuous-variable (nonlinear optics) and measurement-based quantum computing. Continuous variable: Just keep squeezing As detailed in my answer here, continuous-variable QC also offers a universal gate set which can be used to make arbitrary unitaries, in theory at least. Unfortunately, as more squeezing is still required, an experimental realisation of this is yet to occur. And now for something completely different: Measurement based Another scheme that hasn't been experimentally achieved, yet shows potential, is measurement-based QC. Instead of performing CNOT gates during the unitary evolution that defines a circuit, the entangling operations occur as part of the state preparation of the system. As per Ewert and Loock (arXiv version), the current idea is to generate small clusters of entangled photons, then entangle these into larger clusters using fusion gates, as shown in figure 3. Figure 3: Diagram of a 75%-efficient fusion gate. Inputting the state $\left|\Upsilon_1\right> = \frac{1}{\sqrt 2}\left(\left|20\right> + \left|02\right>\right)$ allows for the detection of higher-dimensional states. These can then be cascaded to detect larger and larger cluster states.
The probabilistic measurement is equivalent to an entangling operation, similar to a CNOT gate. Image taken from Figure 1 of Ewert and Loock. 1 Although there are a number of variations of the different schemes used, and work is constantly being done to improve upon them.
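In the Reck-style encoding described above, where the four spatial modes are labelled $\left|00\right>, \left|01\right>, \left|10\right>, \left|11\right>$, the CNOT is nothing but the permutation unitary swapping the last two modes. A plain-Python sketch (the helper names are my own, no photonics library assumed):

```python
def mode_swap(i, j, d=4):
    """Identity on d modes with modes i and j exchanged (a permutation unitary)."""
    U = [[1.0 if r == c else 0.0 for c in range(d)] for r in range(d)]
    U[i], U[j] = U[j], U[i]
    return U

def apply(U, state):
    """Matrix-vector product: the new amplitudes over the spatial modes."""
    return [sum(U[r][c] * state[c] for c in range(len(state)))
            for r in range(len(U))]

# modes 0..3 correspond to |00>, |01>, |10>, |11>;
# swapping modes 2 and 3 flips the target exactly when the control is 1
CNOT = mode_swap(2, 3)

assert apply(CNOT, [0, 0, 1, 0]) == [0.0, 0.0, 0.0, 1.0]  # |10> -> |11>
assert apply(CNOT, [0, 1, 0, 0]) == [0.0, 1.0, 0.0, 0.0]  # |01> unchanged
```

Since the whole two-qubit gate is a single mode permutation, it is deterministic; the cost, as noted above, is that $n$ qubits need $2^n$ modes.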
Given that $\alpha,\beta$ are the roots of the equation $x^2+px-\frac{1}{2p^2}=0,$ where $p$ is a constant real number, find the minimum value of $\alpha^4+\beta^4.$
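One standard route (a sketch of my own, not taken from the problem page) uses Vieta's formulas and AM–GM. With $\alpha+\beta=-p$ and $\alpha\beta=-\frac{1}{2p^2}$:

```latex
\begin{aligned}
\alpha^{4}+\beta^{4}
  &= \left(\alpha^{2}+\beta^{2}\right)^{2} - 2(\alpha\beta)^{2}
   = \left(p^{2}+\frac{1}{p^{2}}\right)^{2} - \frac{1}{2p^{4}} \\
  &= p^{4} + 2 + \frac{1}{2p^{4}}
   \;\ge\; 2 + 2\sqrt{p^{4}\cdot\frac{1}{2p^{4}}}
   \;=\; 2 + \sqrt{2},
\end{aligned}
```

with equality when $p^{8} = \tfrac{1}{2}$, so the minimum value is $2+\sqrt{2}$.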
A matrix-valued generator $\mathcal{A}$ with strong boundary coupling: A critical subspace of $D((-\mathcal{A})^{\frac{1}{2}})$ and $D((-\mathcal{A}^*)^{\frac{1}{2}})$ and implications

1. Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, United States

Mathematics Subject Classification: Primary: 35M13, 93D2.

Citation: Roberto Triggiani. A matrix-valued generator $\mathcal{A}$ with strong boundary coupling: A critical subspace of $D((-\mathcal{A})^{\frac{1}{2}})$ and $D((-\mathcal{A}^*)^{\frac{1}{2}})$ and implications. Evolution Equations & Control Theory, 2016, 5 (1) : 185-199. doi: 10.3934/eect.2016.5.185
I've read that realized kernels are the thing to use for calculating daily volatility from high-frequency data. So I've got minute data; how do I actually use such a kernel? Will it give me minute-by-minute volatility? Do I have to normalize it somehow? Also, what data do I feed it - minute data since the start of the trading day, one day's worth of data, all historical data I have until this point, or something else?

The use of kernels to estimate volatility using intraday data is "nothing more" than combining: intraday volatility estimation, and kernel smoothing. Thus you have to take care of the usual pitfalls of these two approaches. Intraday volatility estimation: I hope you know the "signature plot" effect. Of course, if you use the proper estimation method, it should take care of it, but just in case, you should check that you do not suffer from it. Kernel smoothing: you will have to tune the time scale of your kernel. Theoretical papers like Designing Realized Kernels to Measure the ex post Variation of Equity Prices in the Presence of Noise do not really have to do it, since their results are asymptotic, but real life is not. Moreover, note that since at date $t$ your estimator $\hat\sigma_t$ contains information from previous dates, if you multiply it by a statistic from the past $X_{t-\delta t}$ you will obtain a "trace" of the correlation between $X$ and $\sigma$ (i.e. somewhere inside $\mathbb{E}(\hat\sigma_t \cdot X_{t-\delta t})$ you have a term in $\mathbb{E}(K(\delta t) \cdot \sigma_{t-\delta t} \cdot X_{t-\delta t})$, where $K$ is your kernel). This may add bias if you use your kernel volatility estimate to build other estimators - and not only through multiplications.

Using a realized kernel to calculate volatility will give you results at the same resolution as the data you feed it. So if you feed it minute-by-minute data, then the volatility will be calculated minute-by-minute.
What that really means is that only once per minute will you have a good estimate of the volatility of whatever asset you're looking at. The other 99.99% of the time, the market might introduce changes which may throw that estimate out of the window. If you're not interested in high-frequency volatility estimates, then it's a completely different matter. You'd be better off pre-filtering your data before feeding it to the realized kernel. The goal there is to reduce the noise so that the remaining signal matches the frequency you wish to use. So if you want a day-by-day estimate based on minute-by-minute data, you can probably get a pretty good result by boiling the minute-by-minute data down into hour-by-hour events. I'm not familiar with algorithms that give weight to temporal heuristics such as day-of-week, month-in-year, or year-by-year cycles. Unless you know you're using such an algorithm, there's no reason to feed in more than just today's data if you want an estimate for the current day. If anything, adding more data only causes the estimate to become duller, giving you only week- or month-accurate estimates. If you don't weight your data at all, then feeding in all your historical data might give you a volatility estimate for the next decade, but it will be off by a mile in the short term.
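For concreteness, here is a bare-bones sketch of the estimator itself, using a flat-top Parzen weighting over return autocovariances. The function names are my own, and the exact weight function and bandwidth choice $H$ should be taken from the paper cited in the answer above; this is only meant to show the shape of the computation:

```python
def parzen(x):
    """Parzen weight function on [0, 1]."""
    if x < 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x < 1.0:
        return 2 * (1 - x)**3
    return 0.0

def realized_kernel(returns, H):
    """Weighted sum of return autocovariances; H = 0 gives plain realized variance."""
    n = len(returns)

    def gamma(h):
        # h-th realized autocovariance of the intraday returns
        return sum(returns[i] * returns[i - h] for i in range(h, n))

    rk = gamma(0)
    for h in range(1, H + 1):
        rk += 2 * parzen((h - 1) / H) * gamma(h)
    return rk

# e.g. one trading day of minute returns -> one daily variance estimate
minute_returns = [0.001, -0.002, 0.0015, -0.0005, 0.002]
daily_var = realized_kernel(minute_returns, H=2)
```

Note the units: fed one day's worth of minute returns, the output is a daily variance estimate; take the square root for a daily volatility.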
Let $\left\{a,\lambda\right\}\subset\mathbb{R}$. Let the following differential equation for a function $x\left(t\right)\in\mathbb{R}^{\mathbb{R}}$ be given: $$ \boxed {\ddot{x}\left(t\right)=4\lambda\left(x\left(t\right)^{2}-a^{2}\right)x\left(t\right) } $$ I am trying to find all solutions to this equation which obey the following boundary conditions: $x\left(-\infty\right)=-a$ and $x\left(\infty\right)=a$. I have found one such family of solutions, indexed by $\tau\in\mathbb{R}$: $$\boxed{x\left(t\right)=a\,\tanh\left[\frac{\omega}{2}\left(t-\tau\right)\right]}$$ where $\omega = 2a\sqrt{2\lambda}$. It is easy to verify this is indeed a solution. My question is: Is it the only set of solutions for these boundary conditions? If yes, how does one prove no others exist? If not, what are all the other solutions? I suspect that there is another set of (at least approximate) solutions: $$ \boxed{x\left(t\right) = a\prod_{j=1}^{n} \tanh\left[\frac{\omega}{2}\left(t-\tau_j\right)\right] }$$ where $n\in2\mathbb{N}+1$, and $\left\{\tau_j\right\}_{j=1}^n\subset\mathbb{R}$ are such that $\tau_j < \tau_{j+1}$ for all $j\in\left\{1,\dots,n-1\right\}$. If these are solutions, how does one prove that? (I tried induction and failed.) If they are not solutions, in what way are they approximate solutions? (What is the margin of error?) This problem comes from trying to find instanton solutions to the double-well potential in quantum mechanics (see Coleman, "Aspects of Symmetry", page 272).
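One can confirm numerically that the $\tanh$ profile satisfies the equation when $\omega^{2}=8\lambda a^{2}$. The relation between $\omega$, $\lambda$, and $a$ is inferred by substituting the ansatz into the equation (it is not stated in the question itself), and the parameter values below are arbitrary:

```python
import math

a, lam = 1.3, 0.7
omega = 2 * a * math.sqrt(2 * lam)   # assumed relation: omega^2 = 8 * lam * a^2

def x(t, tau=0.4):
    """Candidate kink solution a * tanh(omega/2 * (t - tau))."""
    return a * math.tanh(0.5 * omega * (t - tau))

def xddot(t, h=1e-4):
    """Central second-difference approximation of x''(t)."""
    return (x(t + h) - 2 * x(t) + x(t - h)) / (h * h)

# residual of the ODE x'' = 4*lam*(x^2 - a^2)*x at several sample times
for t in (-2.0, -0.3, 0.0, 1.1, 2.5):
    residual = xddot(t) - 4 * lam * (x(t)**2 - a**2) * x(t)
    assert abs(residual) < 1e-4
```

The boundary conditions $x(\pm\infty)=\pm a$ hold because $\tanh \to \pm 1$. For the conjectured product of $n$ kinks, the same finite-difference residual check shows it is not an exact solution; quantifying how small the residual is when the $\tau_j$ are widely separated is exactly the question being asked.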
Here is my question: Suppose you have a simple (analytic) closed curve $\gamma$ in an open simply connected domain $\Omega \neq \mathbb{C}$. Does there exist a conformal bijection $f : \Omega \rightarrow U \subset \mathbb{C}$ such that $\gamma$ is sent to the unit circle $S^1$ (the unit disc $D$ would then be contained in $U$)? The Riemann mapping theorem tells you that you can find a conformal map from the interior of $\gamma$ to $D$, and the conformal geometry of an annulus tells you that you can also find a conformal map from the annulus $\Omega \setminus \operatorname{int}(\gamma)$ to an annulus $A(r_1,r_2) = \{z \in \mathbb{C} \mid r_1 < |z| < r_2 \}$. But this does not answer the question... Note that in the question I don't ask for the open set $U$ to be bounded by a circle (this would clearly be too restrictive). This question is somewhat related to a previous question I asked, but I think this one is quite different. The answer must be well-known, but I cannot find it anywhere - nor can I work it out myself.
Most books and courses on linear algebra or functional analysis present at least one version of the spectral theorem (either in finite or infinite dimension) and emphasize its importance to many mathematical disciplines in which linear operators to which the spectral theorem applies arise. One finds quite quickly that the theorem is a powerful tool in the study of normal (and other) operators, and many properties of such operators are almost trivial to prove once one has the spectral theorem at hand (e.g. the fact that a positive operator has a unique positive square root). However, as hard as it is to admit, I barely know of any applications of the spectral theorem to areas which a priori have nothing to do with linear algebra, functional analysis or operator theory. One nice application that I do know of is the following proof of von Neumann's mean ergodic theorem: if $T$ is an invertible, measure-preserving transformation on a probability space $(X,\mathcal{B},\mu)$, then $T$ naturally induces a unitary operator $T: L^2(\mu) \to L^2(\mu)$ (composition with $T$), and the sequence of operators $\frac{1}{N}\sum_{n=1}^{N}T^n$ converges to the orthogonal projection on the subspace of $T$-invariant functions, in the strong operator topology. The spectral theorem allows one to reduce to the case where $X$ is the unit circle $\mathbb{S}^1$, $\mu$ is Lebesgue measure and $T$ is multiplication by some number of modulus 1. This simple case is of course very easy to prove, so one can get the general theorem this way. Some people might find this application a bit disappointing, though, since the mean ergodic theorem also has an elementary proof (credited to Riesz, I believe) which uses nothing but elementary Hilbert space theory.
Also, I guess that Fourier theory and harmonic analysis are intimately connected to the spectral theory of certain (translation, convolution or differentiation) operators, and who can deny the usefulness of harmonic analysis in number theory, dynamics and many other areas? However, I'm looking for more straightforward applications of the spectral theorem, ones that can be presented in an undergraduate or graduate course without digressing too much from the course's main path. Thus, for instance, I am not interested in the use of the spectral theorem in proving Schur's lemma in representation theory, since it can't (or shouldn't) be presented without some prior treatment of representation theory, which is a topic in itself. This book by Matousek is pretty close to what I'm looking for. It presents simple and short (but nevertheless impressive and nontrivial) applications of linear algebra to other areas. I'm interested in the more specific question of applications of the spectral theorem, in one of its versions (in finite or infinite dimension, for compact or bounded or unbounded operators, etc.), to areas which are not directly related to the theory of linear operators. Any suggestions will be appreciated.
June 17, 2019 at 10:54 pm #29307 - Prasad S (Participant):
Find all polynomials P(x) with real coefficients such that P(0) = 0 and ⌊P(⌊P(n)⌋)⌋ + n = 4⌊P(n)⌋ for all n ∈ N.

June 23, 2019 at 8:30 pm #29360 - Agamdeep Singh (Participant):
Let (c,d) be an interval on which ⌊P(x)⌋ = k, where k is some constant in the range of P(x), and let r lie in (c,d). We have ⌊P(⌊P(r)⌋)⌋ + r = 4⌊P(r)⌋, i.e. ⌊P(k)⌋ + r = 4k. [eq 1] Let e ≠ 0 be such that r + e also lies in (c,d). Then ⌊P(⌊P(r+e)⌋)⌋ + r + e = 4⌊P(r+e)⌋, i.e. ⌊P(k)⌋ + r + e = 4k. [eq 2] Subtracting [eq 1] from [eq 2] gives e = 0, a contradiction. Therefore there is no interval on which ⌊P(x)⌋ is constant. But a polynomial is continuous, so it has to pass through all values between some integer m and m + 1, and on the corresponding interval ⌊P(x)⌋ is constant. Since no such interval exists, there is no such polynomial.

June 25, 2019 at 1:17 am #29387

June 26, 2019 at 4:42 am #29448:
Consider a polynomial P(x) of degree > 1. Observe that for large natural numbers n, $$\frac{4P(n)}{P(P(n))+n}$$ is approximately 0, so it seems our polynomial cannot have degree greater than 1. These are loose arguments, though, and we need to make them precise. For this, a few lemmas are stated without proof.

Lemma 1: Let P(x) ∈ R[x] have degree n > 0 and positive leading coefficient. Then there exists a natural number m such that P(x) is strictly increasing on the interval $[m,\infty)$.

Lemma 2: Let P(x) ∈ R[x] have degree n > 1 and positive leading coefficient. Then there exists a natural number m such that P(x) > m for all x > m and P(x) is strictly increasing on $[m,\infty)$.

Proof sketch: Consider the polynomial Q(x) = P(x) − x. By Lemma 1 there exists m such that Q(x) is strictly increasing on $[m,\infty)$; enlarging m if necessary, Q(x) > 0 there. Hence P(x) > x > m for all x > m.
Solution: We consider the following cases for P(x) ∈ R[x].

1) Degree of P(x) > 1 and leading coefficient positive: By Lemma 2, there exists a natural number m such that P(x) is strictly increasing on $(m,\infty)$ and P(x) > m for all x > m. Hence, for all sufficiently large n ∈ N, $$P(P(n)-1)+n-1 < P(⌊P(n)⌋)-1+n < ⌊P(⌊P(n)⌋)⌋ + n = 4⌊P(n)⌋ < 4(P(n)+1)$$ $$\Longrightarrow P(P(n)-1)-4P(n)+n-5 < 0.$$ But P(P(x)-1)-4P(x)+x-5 is a polynomial of degree > 1 with positive leading coefficient, so by Lemma 1 it eventually tends to $+\infty$ - a contradiction.

2) Degree of P(x) > 1 and leading coefficient negative: similar to the above case, and leads to a contradiction.

3) Degree of P(x) at most 1: Consider P(x) = ax + b. Since P(0) = 0, we have b = 0, and the condition becomes ⌊a⌊an⌋⌋ + n = 4⌊an⌋.

June 27, 2019 at 10:30 pm #29459:
3) Degree of P(x) at most 1 (continued): With P(x) = ax, observe that $$\lim_{n\longrightarrow\infty}\frac{⌊a⌊an⌋⌋ + n}{4⌊an⌋}=\frac{a^2+1}{4a}$$ so the condition forces $$\frac{a^2+1}{4a}=1 \Longrightarrow a=2+\sqrt{3}\; \text{or}\; a=2-\sqrt{3}$$

a) $P(x)=(2-\sqrt{3})x$ cannot be a solution, for it fails the given condition at n = 1.

b) Surprisingly, $P(x)=(2+\sqrt{3})x$ does satisfy the condition. Using $a^2 = 4a - 1$, the equation $⌊(2+\sqrt{3})⌊(2+\sqrt{3})n⌋⌋ + n = 4⌊(2+\sqrt{3})n⌋$ reduces to $⌊(2-\sqrt{3})\{(2+\sqrt{3})n\}⌋=0$, where $\{\cdot\}$ denotes the fractional part, which is indeed true since $0 \le (2-\sqrt{3})\{(2+\sqrt{3})n\} < 1$.
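The surprising identity in part (b) for $a = 2+\sqrt{3}$ is easy to spot-check numerically (a sketch only; exactness for all $n$ of course rests on the fractional-part argument, not on floating point):

```python
import math

a = 2 + math.sqrt(3)
for n in range(1, 1000):
    m = math.floor(a * n)                    # floor(a*n)
    assert math.floor(a * m) + n == 4 * m    # floor(a*floor(a*n)) + n = 4*floor(a*n)

# while a = 2 - sqrt(3) already fails at n = 1
b = 2 - math.sqrt(3)
assert math.floor(b * math.floor(b * 1)) + 1 != 4 * math.floor(b * 1)
```

(For n up to 1000, $an$ never comes within floating-point error of an integer, so the floors are computed correctly.)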
So the knapsack problem has an integer programming formulation as follows: $$\max_x\ v\cdot x \quad \text{s.t.}\quad x_i \in \{0,1\},\quad w\cdot x \leq C$$ Now consider a second integer program, which might be seen as a variation of the knapsack integer program: $$\max_x\ v\cdot x \quad \text{s.t.}\quad x_i \in \{0,L_i\},\quad x_i \leq R \cdot \delta_i,\quad \delta_i \in \{0,1\},\quad \sum_i \delta_i = k$$ where $v_i$ is item $i$'s value, $L_i$ and $k$ are constants, and $R = \max\{L_1,L_2,\ldots,L_d\}$. Is there a dynamic programming solution or an approximation algorithm for the second integer programming problem? Is it possible to use the solution of the knapsack problem to warm-start or partially solve the second integer program? Thanks!
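One observation: as written, the second program has no capacity constraint ($w\cdot x \le C$ does not appear), so unless one is intended, it reduces to picking the $k$ indices with the largest non-negative gains $v_i L_i$ (note that $x_i = 0$ is still allowed when $\delta_i = 1$, so negative gains can be avoided). A sketch of that reading, not of the harder capacitated variant:

```python
def solve_variant(v, L, k):
    """Max of v.x with x_i in {0, L_i}, exactly k deltas set, no weight constraint.

    With delta_i = 1 one may still choose x_i = 0, so each item's gain
    v_i * L_i is clipped at 0, and the k best clipped gains are summed.
    """
    gains = sorted((max(vi * Li, 0) for vi, Li in zip(v, L)), reverse=True)
    return sum(gains[:k])

# example: items with (v, L) = (3, 2), (-1, 5), (2, 1); pick k = 2
assert solve_variant([3, -1, 2], [2, 5, 1], 2) == 8  # 3*2 + 2*1
```

If a capacity constraint $w\cdot x \le C$ were added back, a knapsack-style DP over states (item index, remaining capacity, number of items selected so far) would apply, at cost $O(d \cdot C \cdot k)$.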
Show that a non-zero ring $R$ is a field if and only if for any non-zero ring $S$, any unital ring homomorphism from $R$ to $S$ is injective. I would like to verify my proof, especially the reverse implication. $\Rightarrow$ Let $S$ be any non-zero ring and $f:R\rightarrow S$ a unital ring homomorphism. If $x\in \ker f$ were non-zero, then $x$ would be invertible, so $0 = f(x)f(x^{-1}) = f(xx^{-1}) = f(1) = 1$ in $S$, contradicting $S \neq 0$. Thus $\ker f = \{0\}$, so $f$ is injective. $\Leftarrow$ Suppose every unital ring homomorphism out of $R$ is injective. If $I \subsetneq R$ is a proper ideal, the quotient map $\pi: R \to R/I$ is a unital ring homomorphism onto the non-zero ring $R/I$, so $\pi$ is injective and $I = \ker \pi = \{0\}$. Thus the only ideals of $R$ are $\{0\}$ and $R$; hence for any non-zero $a \in R$ the ideal $(a)$ equals $R$, so $a$ is invertible, and $R$ is a field.
"Is this really a proof?" is the exact question e-mailed to me today by an undergraduate mathematics student whom I know as a highly competent student. The one-sentence question was accompanied by the following demo: I am looking for a down-to-earth, non-authoritative answer who one may gi...

the 2005 AMS article/survey on experimental mathematics[1] by Bailey/Borwein mentions many remarkable successes in the field including new formulas for $\pi$ that were discovered via the PSLQ algorithm as well as many other examples. however, it appears to glaringly leave out any description of t...

I'm reading the paper "the classification of algebras by dominant dimension" by Bruno J. Mueller, the link is here http://cms.math.ca/10.4153/CJM-1968-037-9. In the proof of lemma 3 on page 402, there is a place I can't understand. Who can tell me what $E_R \oplus * \cong \oplus X_R$ and $_AHom_...

I've been really trying to prove Ramanujan Partition theory, and different sources give me different explanations. Can someone please explain how Ramanujan (and Euler) found out the following equation for the number of partitions for a given integer? Any help is appreciated thank you so much! $...

I was wondering what role non-rigorous, heuristic type arguments play in rigorous math. Are there examples of rigorous, formal proofs in which a non-rigorous reasoning still plays a central part? Here is an example of what I am thinking of. You want to prove that some formula $f(n)$ holds, and y...

Perhaps the "proofs" of ABC conjecture or newly released weak version of twin prime conjecture or alike readily come to your mind. These are not the proofs I am looking for. Indeed my question was inspired by some other posts seeking for a hint to understand a certain more or less well-established...

I do not know exactly how to characterize the class of proofs that interests me, so let me give some examples and say why I would be interested in more.
Perhaps what the examples have in common is that a powerful and unexpected technique is introduced that comes to seem very natural once you are ... Some conjectures are disproved by a single counter-example and garner little or no further interest or study, such as (to my knowledge) Euler's conjecture in number theory that at least $n$ $n^{th}$ powers are required to sum to an $n^{th}$ power, for $n>2$ (disproved by counter-example by L. J. ... There is a tag called proofs. This tag has empty tag-info. Without any usage guidance it is quite likely to be used incorrectly. The fact that there are many deleted questions having this tag can be considered a supporting evidence of this fact. (According to SEDE there are 26 such questions - ... I would like to know how you would rigorously introduce the trigonometric functions ($\sin(x)$ and relatives) to first year calculus students. Suppose they have a reasonable definition of $\mathbb{R}$ (as Cauchy closure of the rationals, or as Dedekind cuts, or whatever), but otherwise require as... After having a solid year long undergraduate course in abstract algebra, I'm interested in learning algebra at a more advanced level, especially in the context of category theory. I've done some research, and from what I've read, it seems that using Lang as a main text and Hungerford as a supple... Dear MO-community, I am not sure how mature my view on this is and I might say some things that are controversial. I welcome contradicting views. In any case, I find it important to clarify this in my head and hope that this community can help me doing that. So after this longish introduction, h... Usually, during lectures Turing Machines are firstly introduced from an informal point of view (for example, in this way: http://en.wikipedia.org/wiki/Turing_machine#Informal_description) and then their definition is formalized (for example, in this way: http://en.wikipedia.org/wiki/Turing_machin... 
@mickep I'm pretty sure that malicious actors knew about this long before I checked it. My own server gets scanned by about 200 different people for vulnerabilities every day, and I'm not even running anything with a lot of traffic. @JosephWright @barbarabeeton @PauloCereda I thought we could create a golfing TeX extension. It would basically be a TeX format, just the first byte of the file would be an indicator of how to treat input and output or what to load by default. I thought of the name: Golf of TeX, shortened as GoT :-) @PauloCereda Well, it has to be clever. You for instance need quick access to defining new cs, something like (I know this won't work, but you get the idea) \catcode`\@=13\def@{\def@##1\bgroup} so that when you use @Hello #1} it expands to \def@#1{Hello #1} If you use the d'Alembert operator as well, you might find it pretty to use the symbol \bigtriangleup for your Laplace operator, in order to get a similar look to the \Box symbol that is being used for the d'Alembertian. In the following, a tricky construction with \mathop and \mathbin is used to get the... LaTeX exports. I am looking for a hint on this. I've tried everything I could find but no solution yet. I read equations from files generated by CAS programs. I can't edit these or modify them in any way. Some of these are too long; some are not. To make them fit in the page width, I tried \resizebox. The problem is that this will resize the small equations as well as the long ones to fit the page width, which is not what I want. I want to resize only the ones that are longer than the page width and keep the others as is. Is there a way in LaTeX to do this? Again, I do not know beforehand the size of…

\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\begin{document}
\begin{equation*}
\resizebox{\textwidth}{!}{$\begin{split}
y &= \sin^2 x + \cos^2 x\\
x &= 5
\end{split}$}
\end{equation*}
\end{document}

The above will resize the small equation, which I do not want.
But since I do not know beforehand how long an equation is, I resize every one. Is there a way in LaTeX, using some command, to find out if an equation will fit the page width, or how long it is? If so I can add logic to resize only when needed. What I mean is: I want to resize DOWN only if needed, and not resize UP. Also, if you think I should ask this on the main board, I can. But I thought to check here first. @egreg what other options do I have? Sometimes the CAS generates an equation which does not fit the page. Now it overflows the page and one can't see the rest of it at all. Since in the pdf one can zoom in a little, at least one can see it if needed. It is impossible to edit or modify these by hand, as this is all done using a program. @UlrikeFischer I do not generate unreadable equations. These are solutions of ODEs. The LaTeX is generated by Maple. Some of them are longer than the page width. That is all. So what is your suggestion? Keep the long solutions flowing out of the page? I can't edit these by hand. This is all generated by a program. I can add LaTeX code around them, that is all. But editing them is out of the question. I tried the breqn package, but that did not work; it broke many things as well. @egreg That was just an example, something I added by hand to make up a long equation for illustration. It was not a real solution to an ODE. Again, thanks for the effort, but I can't edit the generated LaTeX by hand at all. It would take me a year, and I run the program many times each day; each time, all the LaTeX files are overwritten anyway. CAS providers do not generate good LaTeX either. That is why breqn did not work: many times they add {} around large expressions, which made breqn unable to break them. breqn also has many other problems, so I no longer use it at all.
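For the width question above, one standard approach is to typeset the equation into a save box, measure it, and apply \resizebox only when it exceeds \textwidth. A sketch along those lines (the macro name \fiteq is mine, and this has not been tested against real Maple output):

```latex
% Sketch: resize an equation only when it is wider than \textwidth.
% The wrapper macro \fiteq is a hypothetical name, not a package command.
\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\newsavebox{\eqnbox}
\newcommand{\fiteq}[1]{%
  \sbox{\eqnbox}{$\displaystyle #1$}%   typeset once into a box and measure it
  \ifdim\wd\eqnbox>\textwidth
    \resizebox{\textwidth}{!}{\usebox{\eqnbox}}% too wide: shrink to fit
  \else
    \usebox{\eqnbox}% already fits: keep natural size (no resizing UP)
  \fi}
\begin{document}
\begin{equation*}
  \fiteq{y = \sin^2 x + \cos^2 x}
\end{equation*}
\end{document}
```

Since the CAS output cannot be edited, the \fiteq{...} wrapper would have to be added by the surrounding script, which matches the constraint of only adding LaTeX code around the generated equations.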
Hey guys! I built the voltage multiplier with an alternating square wave from a 555 timer as a source (which measures 4.5V on my multimeter) but the voltage multiplier doesn't seem to work. I first tried making a voltage doubler and it showed 9V (which is correct, I suppose), but when I try a quadrupler, for example, the voltage starts from about 6V and goes down by around 0.1V per second. Oh! I found a mistake in my wiring and fixed it. Now it shows 12V and instantly starts to go down by 0.1V per second. But you really should ask the people in Electrical Engineering. I just had a quick peek, and there was a recent conversation about voltage multipliers. I assume there are people there who've made high voltage stuff, like rail guns, which need a lot of current, so a low current circuit like yours should be simple for them. So what did the guys in the EE chat say... The voltage multiplier should be OK on a capacitive load. It will drop the voltage on a resistive load, as mentioned in various Electrical Engineering links on the topic. I assume you have thoroughly explored the links I have been posting for you... A multimeter is basically an ammeter. To measure voltage, it puts a known, stable resistor into the circuit and measures the current running through it. Hi all! There is a theorem that links the imaginary and the real part of a time dependent analytic function. I forgot its name. It's named after some Dutch(?) scientist and is used in solid state physics; who can help? The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. These relations are often used to calculate the real part from the imaginary part (or vice versa) of response functions in physical systems, because for stable systems, causality implies the analyticity condition, and conversely, analyticity implies causality of the corresponding stable physical system.
The relation is named in honor of Ralph Kronig and Hans Kramers. In mathematics these relations are known under the names... I have a weird question: the output of an astable multivibrator will be shown on a multimeter as half the input voltage (for example we have 9V-0V-9V-0V... and the multimeter averages it out and displays 4.5V). But then if I feed that output to a voltage doubler, the voltage should be 18V, not 9V, right? Since the voltage doubler outputs DC. I've tried hooking up a transformer (9V to 230V, 0.5A) to an astable multivibrator (which operates at 671Hz) but something starts to smell burnt and the components of the astable multivibrator get hot. How do I fix this? I checked afterwards and the astable multivibrator still works. I searched the whole goddamn internet, asked every goddamn forum, and I can't find a single schematic that converts 9V DC to 1500V DC without using giant transformers and power stage devices that weigh a billion tons... something so "simple" turns out to be hard as duck. In Peskin's book on QFT the sum over zero point energy modes is an infinite c-number; fortunately, its experimental signature doesn't appear, since experimentalists measure the difference in energy from the ground state. According to my understanding the zero point energy is the same as the ground state, isn't it? If so, it is always possible to subtract a finite number (a higher excited state, e.g.) from this zero point energy (which is infinite); it follows that experimentally we would always obtain an infinite spectrum. @AaronStevens Yeah, I had a good laugh to myself when he responded back with "Yeah, maybe they considered it and it was just too complicated". I can't even be mad at people like that.
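The meter-averaging point above can be put in numbers. A back-of-envelope sketch (helper names are mine; everything idealized, with diode drops and load current ignored), assuming the standard result that an unloaded n-stage Cockcroft-Walton multiplier driven at peak amplitude Vpeak ideally delivers about 2·n·Vpeak:

```python
# Back-of-envelope numbers for the chat above. Helper names are mine;
# components are idealized (no load current, no diode drops).

def meter_reading(v_peak, duty=0.5):
    """An averaging DC meter reads the mean of a 0..v_peak square wave."""
    return duty * v_peak

def ideal_multiplier_output(v_peak, stages):
    """Ideal unloaded Cockcroft-Walton output: ~2 * stages * peak drive."""
    return 2 * stages * v_peak

print(meter_reading(9.0))               # a 9V/0V square wave reads 4.5V
print(ideal_multiplier_output(9.0, 1))  # one doubler stage: ~18V ideal
```

In practice diode drops and the unipolar (0..9V rather than ±9V) drive waveform pull the real output well below these ideal figures, which is consistent with the lower readings reported in the chat.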
They are clearly fairly new to physics and don't quite grasp yet that most "novel" ideas have been thought of to death by someone; likely 100+ years ago if it's classical physics. I have recently come up with a design of a conceptual electromagnetic field propulsion system which should not violate any conservation laws, particularly the Law of Conservation of Momentum and the Law of Conservation of Energy. In fact, this system should work in conjunction with these two laws ... I remember that Gordon Freeman's thesis was "Observation of Einstein-Podolsky-Rosen Entanglement on Supraquantum Structures by Induction Through Nonlinear Transuranic Crystal of Extremely Long Wavelength (ELW) Pulse from Mode-Locked Source Array". @ACuriousMind What confuses me is the interpretation Peskin gives to this infinite c-number and the experimental fact. He said the second term is the sum over zero point energy modes, which is infinite as you mentioned. He added, "fortunately, this energy cannot be detected experimentally, since the experiments measure only the difference from the ground state of H". @ACuriousMind Thank you, I understood your explanations clearly.
However, regarding what Peskin mentioned in his book, there is a contradiction between what he said about the infinity of the zero point energy/ground state energy, and the fact that this energy is not detectable experimentally because the measurable quantity is the difference in energy between the ground state (which is infinite, and this is the confusion) and a higher level. It's just the first encounter with something that needs to be renormalized. Renormalizable theories are not "incomplete", even though you can take the Wilsonian standpoint that renormalized QFTs are effective theories cut off at a scale. According to the author, the energy difference is always infinite, owing to two facts: first, the ground state energy is infinite; second, the energy difference is defined by subtracting a higher level energy from the ground state one. @enumaris That is an unfairly pithy way of putting it. There are finite, rigorous frameworks for renormalized perturbation theories following the work of Epstein and Glaser (buzzword: causal perturbation theory). Just like in many other areas, the physicist's math sweeps a lot of subtlety under the rug, but that is far from unique to QFT or renormalization. The classical electrostatics formula $H = \int \frac{\mathbf{E}^2}{8 \pi} dV = \frac{1}{2} \sum_a e_a \phi(\mathbf{r}_a)$ with $\phi_a = \sum_b \frac{e_b}{R_{ab}}$ allows for $R_{aa} = 0$ terms, i.e. dividing by zero to get infinities; also, the problem stems from the fact that $R_{aa}$ can be zero due to using point particles. Overall it's an infinite constant added to the particle that we throw away, just as in QFT. @bolbteppa I understand the idea that we need to drop such terms to be consistent with experiments. But I cannot understand why the experiment didn't show such infinities that arose in the theory?
These $e_a/R_{aa}$ terms in the big sum are called self-energy terms, and are infinite, which means a relativistic electron would also have to have infinite mass if taken seriously, and relativity forbids the notion of a rigid body so we have to model them as point particles and can't avoid these $R_{aa} = 0$ values.
Under the auspices of the Computational Complexity Foundation (CCF) A map $f:\{0,1\}^{n}\to \{0,1\}^{n}$ has locality t if every output bit of f depends only on t input bits. Arora, Steurer, and Wigderson (2009) ask if there exist bounded-degree expander graphs on $2^{n}$ nodes such that the neighbors of a node $x\in \{0,1\}^{n}$ can be computed by maps of constant locality. We give an explicit construction of such graphs with locality one. We then give three applications of this construction: (1) lossless expanders with constant locality, (2) more efficient error reduction for randomized algorithms, and (3) more efficient hardness amplification of one-way permutations. We also give, for n of the form $n=4\cdot3^{t}$, an explicit construction of bipartite Ramanujan graphs of degree 3 with $2^{n}-1$ nodes in each side such that the neighbors of a node $x\in \{0,1\}^{n}\setminus\{0^{n}\}$ can be computed either (1) in constant locality or (2) in constant time using standard operations on words of length $\Omega(n)$. Our results use in black-box fashion deep explicit constructions of Cayley expander graphs, by Kassabov (2007) for the symmetric group $S_{n}$ and by Morgenstern (1994) for the special linear group $SL(2,F_{2^{n}})$.
Now that I'm procrastinating with tweaking LaTeX, instead of finishing my Ph.D. thesis, I'm wondering what's the most beautiful way to typeset source code (C#, C++, LISP, XML, BNF) in LaTeX. Currently, I'm using the listings package, and the literate option, to search & replace some characters with mathematically equivalent symbols, such as:

\lstset{ % ...
  literate=
    {+}{{$+$}}1
    {*}{{$*$}}1
    {=}{{$\gets$}}1
    {<=}{{$\leq$}}1
    {>=}{{$\geq$}}1
    {!=}{{$\neq$}}1
    {==}{{$\equiv$}}1
    {=>}{{$\leadsto$}}1}

Although one can find some inspiration for symbols in CWEB, I'm wondering if there exists any standard symbology for common programming (and domain specific) languages. Or maybe more appropriate packages (btw, I'm not looking for fancy syntax coloring, which would ruin my two-colored document). Since some users were concerned about whether this should be regarded as good practice, I think this question deserves some further explanation. Overall, I have to both agree and disagree with them (and I think a lot of the literate programming community would too). Here's when: I absolutely would agree in cases where one is writing a book on the language, a tutorial, a recipe, patterns, something where the reader is intended to try that code on the computer. This can provoke confusion in the reader (where is that <- sign?), and hinder their reading pattern (you know, when you glance at hundreds of lines of code very quickly). But in theses, papers, and other scientific publications, most of the snippets are illustrative, almost as if they were pseudo-code. Readability is improved, especially because lots of two/three character symbols (which can vary depending on the language) are compressed to a more symbolic notation. Now, it is true that most of you will suggest: "then why don't you use pseudo-code?" Well... First, because the pseudo-code provided by the algorithmic environment is nice for simplifying imperative languages, but it fails for things like LISP, XML and BNF.
Second, because it takes a lot of time converting them, and it quickly falls out of sync. Still, I maintain my original question: "examples of beautiful typography of source code".
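For reference, the literate setup from the question can be dropped into a minimal compilable document along these lines (a sketch only; placing the single-character = rule after the two-character rules is my guess at avoiding partial matches, not documented behaviour I have verified):

```latex
% Minimal compilable sketch of the literate approach from the question.
\documentclass{article}
\usepackage{listings}
\lstset{
  basicstyle=\ttfamily,
  literate=
    {<=}{{$\leq$}}1
    {>=}{{$\geq$}}1
    {!=}{{$\neq$}}1
    {==}{{$\equiv$}}1
    {=>}{{$\leadsto$}}1
    {=}{{$\gets$}}1}   % single = listed last, so the pairs above match first
\begin{document}
\begin{lstlisting}
if (a != b) { a = b; }
\end{lstlisting}
\end{document}
```

Because the replacement happens at typesetting time, the source file itself stays untouched, which is what keeps this approach in sync with the real code.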
Buhrman, Cleve and Wigderson (STOC'98) observed that for every Boolean function $f : \{-1, 1\}^n \to \{-1, 1\}$ and $\bullet : \{-1, 1\}^2 \to \{-1, 1\}$ the two-party bounded-error quantum communication complexity of $(f \circ \bullet)$ is $O(Q(f) \log n)$, where $Q(f)$ is the bounded-error quantum query complexity of $f$. Note that the bounded-error randomized communication complexity of $(f \circ \bullet)$ is bounded by $O(R(f))$, where $R(f)$ denotes the bounded-error randomized query complexity of $f$. Thus, the BCW simulation has an extra $O(\log n)$ factor appearing that is absent in classical simulation. A natural question is if this factor can be avoided. H{\o}yer and de Wolf (STACS'02) showed that for the Set-Disjointness function, this can be reduced to $c^{\log^* n}$ for some constant $c$, and subsequently Aaronson and Ambainis (FOCS'03) showed that this factor can be made a constant. That is, the quantum communication complexity of the Set-Disjointness function (which is $NOR_n \circ \wedge$) is $O(Q(NOR_n))$. Perhaps somewhat surprisingly, we show that when $ \bullet = \oplus$, then the extra $\log n$ factor in the BCW simulation is unavoidable. In other words, we exhibit a total function $F : \{-1, 1\}^n \to \{-1, 1\}$ such that $Q^{cc}(F \circ \oplus) = \Theta(Q(F) \log n)$. To the best of our knowledge, it was not even known prior to this work whether there existed a total function $F$ and 2-bit function $\bullet$, such that $Q^{cc}(F \circ \bullet) = \omega(Q(F))$.
$$ \bar{x} = \frac{1}{N} \sum_{i=1}^{N} x_i $$ In mathematical terms, given a random variable X with distribution F, a random sample of length N is a set of N independent, identically distributed (iid) random variables with distribution F. In our case, provided that we select our N samples randomly, each of these samples is itself a normally distributed random variable. This means that the sample mean is also itself a random variable. Monte Carlo simulation We can use the sample mean as an estimator for the true mean value of the series. Let's create a set of 4 samples and calculate the statistics of the sample mean as a random variable: Sample mean The sample mean is a random variable, and its outcome can be used as an estimator for the underlying actual mean. Why does the sample mean distribution behave as a normal distribution with standard deviation of 1/2? In this case, we know that the population has a normal distribution, therefore $\mu=0$, $\sigma=1$. We also know that we have taken a set of 4 samples to build our sample mean statistics. The sample mean is a random variable with mean and variance according to the following formulas: $ \operatorname{Var}\left(\overline{X}\right) = \operatorname{Var}\left(\frac {1} {N}\sum_{i=1}^N X_i\right) = \frac {1} {N^2}\sum_{i=1}^N \operatorname{Var}\left(X_i\right) = \frac {\sigma^2} {N} $ Bias The bias, defined as the expected value of the sample mean minus the true mean, is zero: $ \operatorname{Bias}(\bar{x}) = E[\bar{x} - \mu] = E\left[\frac{1}{N} \sum_{i=1}^{N} x_i\right] - \mu = \frac{N}{N}\mu - \mu = 0 $ Standard Error This formula was discovered by Bienaymé in 1853. It states that the standard error decreases with the square root of the number of samples taken to build the estimator.
Since in our case N=4, the standard deviation of the mean is $1/\sqrt{4}$, hence 0.5. $$ s = \frac{\sigma}{\sqrt{N}} $$ $$ \operatorname{Precision}(\bar{x}) = SE(\bar{x}) = \sqrt{\operatorname{Var}(\bar{x})} = \sqrt{E[(\bar{x}-E[\bar{x}])^2]} = \frac{\sigma}{\sqrt{N}} = s $$ Estimation of the mean If the mean is unknown, we can use the sample mean to estimate it. In this case we depend on the statistics of the sample to assess the true mean. We have just seen that the sample mean is unbiased, but we have also seen that our mean estimate can carry a certain error (the standard error). In general, the mean squared error that we commit estimating the mean is: $ MSE(\bar{x}) = E[(\bar{x} - \mu)^2] = E[\bar{x}^2 - 2\bar{x}\mu + \mu^2] = \operatorname{Var}(\bar{x}) + (E[\bar{x}] - \mu)^2 = SE(\bar{x})^2 + \operatorname{Bias}(\bar{x})^2 $ Considering the sample mean statistics, there is a probability of 95% (2 sigmas) that the mean of four samples falls in the range: $$ \operatorname{Mean}(\bar{X}) \pm 2\, SD(\bar{X}) $$ See the estimation below from the above Monte Carlo simulation:
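The N=4 claim above can be checked with a quick stdlib-only Monte Carlo sketch (the function name and the constants below are mine): draw many samples of size 4 from a standard normal and look at the spread of the resulting sample means.

```python
# Monte Carlo sketch of the sample-mean statistics described above:
# draw many samples of size N=4 from a standard normal and check that the
# spread of the sample means is close to sigma/sqrt(N) = 1/sqrt(4) = 0.5.
import random
import statistics

def sample_mean_stats(n=4, reps=20000, seed=1):
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0.0, 1.0) for _ in range(n))
             for _ in range(reps)]
    return statistics.fmean(means), statistics.pstdev(means)

mean_of_means, sd_of_means = sample_mean_stats()
# Expect mean_of_means near 0 (the estimator is unbiased) and
# sd_of_means near 0.5 (the standard error sigma/sqrt(N)).
```

Increasing `n` shrinks `sd_of_means` like $1/\sqrt{N}$, which is exactly the Bienaymé relation quoted above.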
Suppose $f$ is an analytic function with power series expansion $f(z)=\sum_{n=0}^{\infty} a_nz^n$, and $p = \sum_{n=0}^{d}b_nz^n$ is a polynomial. If $f$ is a polynomial of degree larger than $d$, then $|f|$ grows faster than $|p|$, but the situation is not so clear when the expansion of $f$ has infinitely many nonzero coefficients. I would expect the growth of the function $f$ then to be faster than that of $p$, as with the function $e^z = \sum_{n=0}^{\infty}\frac{z^n}{n!}$. However the function $\frac{1}{1-z} = \sum_{n=0}^{\infty}z^n$ also has infinitely many nonzero coefficients and grows slower than any polynomial (as $|z|\to\infty$). I realize this is related to the failure of the power series to converge outside a disk of radius $1$. Also, $\log(z)$ grows slower than any polynomial, but no power series representation of it can converge with infinite radius (the function itself cannot be well-defined everywhere in the complex plane simultaneously). Under what conditions can we say that a power series with infinitely many nonzero coefficients represents a function that grows faster than any polynomial? Is this true for any power series with infinite radius of convergence? Are there such power series which grow at the rate $z^\alpha$, for any $\alpha\in(0,\infty)$? I have in mind the case where $f$ is complex-analytic, but I would also be interested to hear about the case where $f$ is real-analytic, if the cases differ.
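For the entire case (infinite radius of convergence) the answer is yes, by the standard Cauchy-estimate argument: polynomial growth forces the tail coefficients to vanish. Suppose $|f(z)| \le C|z|^k$ for all large $|z|$; then for any $n > k$,

```latex
\[
  |a_n| \;=\; \left|\frac{f^{(n)}(0)}{n!}\right|
        \;\le\; \frac{\max_{|z|=R}|f(z)|}{R^{n}}
        \;\le\; \frac{C\,R^{k}}{R^{n}}
        \;\xrightarrow[R\to\infty]{}\; 0
        \qquad\text{for } n > k,
\]
```

so $a_n = 0$ for every $n > k$ and $f$ is a polynomial of degree at most $k$. Hence an entire function whose series has infinitely many nonzero coefficients must grow faster than every polynomial along some sequence $|z|\to\infty$; the examples $1/(1-z)$ and $\log z$ escape this only because they are not entire.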
I know that light of all frequencies travels at the same speed in vacuum. But I wonder why their speeds differ in any other medium; why does red light travel faster if it has less energy than blue light? Feynman: The correct picture of an atom, which is given by the theory of wave mechanics, says that, so far as problems involving light are concerned, the electrons behave as though they were held by springs. So we shall suppose that the electrons have a linear restoring force which, together with their mass $m$, makes them behave like little oscillators, with a resonant frequency $\omega_0$. ........... The electric field of the light wave polarizes the molecules of the gas, producing oscillating dipole moments. The acceleration of the oscillating charges radiates new waves of the field. The new field, interfering with the old field, produces a changed field which is equivalent to a phase shift of the original wave. Because this phase shift is proportional to the thickness of the material, the effect is equivalent to having a different phase velocity in the material.'' Considering the above model, the expression for the refractive index can be derived, which shows that the refractive index depends on the frequency of the light; and since the speed of the wave through the material depends on the refractive index, waves of different frequencies move with different velocities. Here is Feynman's derivation. Jon Custer hinted at something which I think is best explained via an analogy. Imagine you can walk along a pavement at 4mph. When the pavement is empty, it takes you an hour to travel four miles. But when the pavement is crowded, you're dodging around people and bumping into them. You're still walking at 4mph, but it takes you an hour and a half to travel the four miles.
And if you're a little old lady with short little steps walking at 4mph, you're held up more than if you're a big guy with long strides walking at 4mph. Now let's look at your questions again: But I wonder why their speeds differ in any other medium? Because the light interacts with the material, and those interactions are wavelength dependent. And why does red light travel faster while it has less energy than blue light? Because the light interacts with the material, and those interactions are wavelength dependent! Most of the answers were helpful indeed, but I thought of it in a different way to say that red light travels faster compared to blue light. Think of the example of a prism. We can see that the red light is deviated the least while the violet is deviated the most. Now ask yourselves: how is deflection related to the speed of an object? It's an inversely proportional relationship. From the prism, we can conclude that the red light must be traveling at a greater speed compared to blue light, as it is deflected the least. $c_0^2 = \frac{1}{\varepsilon_0\mu_0}$ in vacuum. Permittivity and permeability (in materials) depend on frequency, in general. In a material you have $\epsilon=\epsilon_r \epsilon_0$ and $v^2 = \frac{1}{\varepsilon\mu}$, where $v$ is the phase velocity of the light; see http://en.wikipedia.org/wiki/Permittivity, where there is also a picture of the frequency dependence of $\epsilon$.
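The Feynman oscillator picture above can be turned into a toy number check. A single-resonance Lorentz model gives $n(\omega)^2 = 1 + \omega_p^2/(\omega_0^2-\omega^2)$; with the driving frequency below the resonance, the denominator shrinks as $\omega$ grows, so blue ends up with the larger index. The resonance and strength constants below are purely illustrative assumptions, not fitted to any real material:

```python
# Toy single-resonance Lorentz-oscillator model of dispersion:
#   n(w)^2 = 1 + wp^2 / (w0^2 - w^2),  valid for w below the resonance w0.
# The constants f0_hz and fp_hz are illustrative assumptions only.
import math

def refractive_index(freq_hz, f0_hz=2.4e15, fp_hz=1.2e15):
    w, w0, wp = (2 * math.pi * f for f in (freq_hz, f0_hz, fp_hz))
    return math.sqrt(1.0 + wp**2 / (w0**2 - w**2))

n_red = refractive_index(4.3e14)   # ~700 nm
n_blue = refractive_index(6.4e14)  # ~470 nm
# Below resonance the denominator (w0^2 - w^2) is smaller for blue,
# so n_blue > n_red and blue has the smaller phase velocity c/n.
```

With these assumed constants both indices come out in a glass-like range just above 1, and the ordering n_blue > n_red reproduces why a prism bends blue more than red.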
Chjan Lim, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY 12180-3590, USA. Publications: Lim C. C., Assad S. M. Self containment radius for rotating planar flows, single-signed vortex gas and electron plasma. 2005, vol. 10, no. 3, pp. 239-255. Abstract: A low temperature relation $R^2=\Omega\beta/4\pi\mu$ between the radius $R$ of a compactly supported 2D vorticity (plasma density) field, the total circulation $\Omega$ (total electron charge) and the ratio $\mu/\beta$ (Larmor frequency), is rigorously derived from a variational Principle of Minimum Energy for 2D Euler dynamics. This relation and the predicted structure of the global minimizers or ground states are in agreement with the radii of the most probable vorticity distributions for a vortex gas of $N$ point vortices in the unbounded plane for a very wide range of temperatures, including $\beta = O(1)$. In view of the fact that the planar vortex gas is representative of many 2D and 2.5D statistical mechanics models for geophysical flows, the Principle of Minimum Energy is expected to provide a useful method for predicting the statistical properties of these models in a wide range of low to moderate temperatures.
I recently came across this in a textbook (NCERT class 12, chapter: wave optics, pg. 367, example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of an interference pattern is $$\frac{s}{S} < \frac{\lambda}{d}$$ where $s$ is the size of ... The accepted answer is clearly wrong. The OP's textbook refers to 's' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes 's' to be "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial. This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ... I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $ie(P_A+P_B)^{\mu}$; external boson: $1$; photon: $\epsilon_{\mu}$. Multiplying these will give the inv... As I am now studying the history of the discovery of electricity, I am searching for each scientist on Google, but I am not getting good answers on some of them. So I want to ask you to suggest a good app for studying the history of scientists. I am working on correlation in quantum systems. Consider an arbitrary finite dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under the assumption of continuity. My question is whether it would be possib... @EmilioPisanty Sup. I finished Part I of Q is for Quantum.
I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc. Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/… You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring, 3 hours ago So in Q is for Quantum there's a box called PETE that has a 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white, and the same with a black ball. @ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes, I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why? @AbhasKumarSinha intriguing/impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially... @vzn for physics/simulation, you may use Blender, that is very accurate.
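One way to see the minus sign being asked about: model the PETE box as a Hadamard-like transform on (white, black) amplitudes. This is my reading of the book's box, not its official definition. The minus on the black ball is a relative phase, and it is exactly what makes the cross terms cancel when two boxes are chained, so white in gives white out:

```python
# PETE box modeled as a Hadamard-like transform on (amp_white, amp_black).
# This modeling choice is an assumption, not the book's own formalism.
import math

S = 1 / math.sqrt(2)

def pete(state):
    w, b = state
    return (S * (w + b), S * (w - b))  # the black component picks up the minus

white, black = (1.0, 0.0), (0.0, 1.0)
# Two PETE boxes in a row: the cross terms cancel because of the minus sign,
# so white comes back out white and black comes back out black.
w2 = pete(pete(white))   # ~(1, 0)
b2 = pete(pete(black))   # ~(0, 1)
```

Without the minus (i.e. with both outputs getting plus signs), the second box would not cancel anything and a definite color could not be recovered, which is precisely the role the "trick" plays.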
If you want to experiment with lenses and optics, then you may use the Mitsubishi Renderer; those are made for accurate scientific purposes. @RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians*, but I haven't read it myself @AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that? @ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/complexity/inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions... When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former. @RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that And that is what I mean by "the basics". Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers @RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it...
Kurzgesagt, optimistic nihilism: youtube.com/watch?v=MBRqu0YOH14 The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for... @vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have a constant hunger to widen my view on the world. @Slereah It's like the brain has a limited capacity on math skills it can store. @NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life" I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how they are used for quantum error correction?
I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
The points on the lines and the lines through the points... The lines should be many with numerous joints, the points should be few and should sit at right places. What else? We can use high-dimensional spaces, if we do not mind doubling enemy's strength. It would not be bad if we bound the length of games, tell who wins, and what way he should play. The questions are many, but short is the day. One cannot resolve all the problems at once by being just smart. Better leave them to Chance! The law of large numbers is our best friend. Who is unpredictable beats every trend. Of course, he can lose, but the God in the sky does know enough to condition, not try, thus weaving each random irrational state into the unerring decisions of Fate. The first player wins playing more or less randomly after he prepares the playing field in the right way. As it has already been said, we can play this game in $\mathbb R^d$. Since, when projecting a high dimensional configuration to the plane, we can avoid extra triple intersections but not extra double intersections of lines, we should allow the opponent to make two moves for each of our moves to get a stronger version of the game. Fix $n$ (the number of points on the line we want). Choose small $\delta>0$ and a large integer $d>0$, in this order. Consider the cubic lattice in $\mathbb R^d$ with $n$ points along each side ($n^d$ points total). Make $\delta n^d$ moves, choosing a random lattice point at each move. If this point is already occupied, just put a point somewhere far away and forget about it. However, mark the occupied point as "intended". Let the opponent move twice after each of your moves. At the end of the game, just look at what you got. Pay attention only to the fully filled (with either actual or intended points) lattice lines in coordinate directions. You should see the following. 1) The probability to fill each particular line is about $\delta^{n}$, so you should expect $L=n^{d-1}d\delta^{n}$ filled lines.
Moreover, the probability to fill less than half of this amount is extremely small, because the events of filling two disjoint lines are negatively correlated and intersecting pairs are few, so the expectation of the square is pretty close to the square of the expectation if $n,\delta$ are fixed and $d\to\infty$. 2) The probability that a given grid point has $k$ filled lines passing through it is at most $\delta\cdot \frac{[d\delta^{n-1}]^k}{k!}$, which is exponentially small in $d$ if $k>2ed\delta^{n-1}$, in the same regime where $n,\delta$ are fixed and $d\to\infty$. 3) Each opponent's point that doesn't lie on the grid can interfere with at most one line. 4) Each opponent's point that lies on the grid can interfere with one of the filled lines only if it was intended during one of your moves. However, since during the whole game only $3\delta n^d$ points have been used, the chance of hitting an already occupied point has never been higher than $3\delta$, so it is highly unlikely that the opponent could score much more than $6\delta^2 n^d$ such points. Now it is time to count the blocked lines. First, $e^{-c(n,\delta)d}dn^d=e^{-c(n,\delta)d}\delta^{-n}nL$ come from "high efficiency" points (let the opponent grab them all, it is still nothing compared to $L$ if $d$ is large enough!). Second, the points put off the grid can block only $2\delta n^d=d^{-1}\delta^{-n+1}n L$ lines, which isn't much either. Last, the "efficient" but not "highly efficient" points on the grid block at most $6\delta^2 n^d\cdot 2ed\delta^{n-1}=2e\delta n L$ filled lines. Here we do not gain from $d$, but if we start by choosing $\delta$ so that $12e\delta n\ll 1$, we are still fine. Thus, normally at most $L/2$ lines will be blocked and, with high probability, the outcome is that we have won the game by this moment. Since the game is now of fixed length, one of the players must have a deterministic winning strategy.
But the second player loses against the random strategy with positive probability no matter what he does, so it isn't him. The deterministic winning strategy for the first player can be defined explicitly in terms of the conditional probabilities of winning the random game from the current position: just move to the point that gives you the best chance to win the random game in the maximin sense (max over locations, min over all possible opponent's strategies). But the computation of those conditional probabilities is well beyond human abilities.
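Step 1) of the construction can be checked numerically. The sketch below (the helper names are ours, and the parameters are small rather than in the asymptotic regime of the argument) drops random points on the $n^d$ lattice and counts the fully filled coordinate lines, comparing the empirical mean against the exact expectation obtained by inclusion-exclusion over the $n$ cells of a line:

```python
import itertools
import math
import random

def count_filled_lines(n, d, m, rng):
    """Drop m i.i.d. uniform points on the n^d lattice and count the
    axis-parallel lines all n of whose points are covered (repeated hits
    play the role of "intended" points, so a set of hits suffices)."""
    N = n ** d
    hits = {rng.randrange(N) for _ in range(m)}
    filled = 0
    for axis in range(d):                       # direction of the line
        for rest in itertools.product(range(n), repeat=d - 1):
            line = []
            for v in range(n):
                coords = rest[:axis] + (v,) + rest[axis:]
                idx = 0
                for c in coords:                # encode the lattice point
                    idx = idx * n + c
                line.append(idx)
            if all(p in hits for p in line):
                filled += 1
    return filled

def expected_filled_lines(n, d, m):
    """Exact E[# filled lines]: probability that all n cells of a fixed
    line are hit in m draws (inclusion-exclusion over missed cells),
    times the number d * n^(d-1) of coordinate lines."""
    N = n ** d
    p = sum((-1) ** j * math.comb(n, j) * (1 - j / N) ** m
            for j in range(n + 1))
    return d * n ** (d - 1) * p

rng = random.Random(0)
n, d, m, trials = 3, 3, 20, 2000
mean = sum(count_filled_lines(n, d, m, rng) for _ in range(trials)) / trials
```

For $n=d=3$ and $m=20$ the analytic expectation is roughly 3.7 filled lines out of 27, and the empirical mean over many trials lands close to it.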
In the book The Homology of Iterated Loop Spaces, the homology Hopf algebra (1) $$ H_*(\Omega^n \Sigma^n X;\mathbb{Z}_p) $$ for primes $p\geq 2$ is obtained on p. 226, Thm. 3.2. In particular, the homology Hopf algebra (2) $$ H_*(\Omega^nS^n;\mathbb{Z}_2) $$ is known. However, when I use the Cartan formula and the Adem relations to compute the coproduct of (2), I find it quite complicated to work modulo the Adem relations, and I do not know how to get an explicit expression for the coproduct. Question: Are there any references where I can find an explicit expression for the cohomology algebra $$ H^*(\Omega^nS^n;\mathbb{Z}_2)? $$ I obtain that $\Omega S^1=\text{Map}_*(S^1,S^1)\simeq [S^1;S^1]_*=\pi_1(S^1)=\mathbb{Z}$. Hence $H^*(\Omega S^1;\mathbb{Z}_2)=\oplus_{k\in\mathbb{Z}} \mathbb{Z}_2a_k$, $|a_k|=0$. How does one compute the examples $$ H^*(\Omega^2S^2;\mathbb{Z}_2) $$ and $$ H^*(\Omega^3S^3;\mathbb{Z}_2)? $$
We discuss the internal structure of graph products of right LCM semigroups and prove that there is an abundance of examples without property (AR). Thereby we provide the first examples of right LCM semigroups lacking this seemingly common feature. The results are particularly sharp for right-angled Artin monoids. We investigate the $K$-theory of unital UCT Kirchberg algebras ${\mathcal{Q}}_{S}$ arising from families $S$ of relatively prime numbers. It is shown that $K_{\ast }({\mathcal{Q}}_{S})$ is the direct sum of a free abelian group and a torsion group, each of which is realized by another distinct $C^{\ast }$-algebra naturally associated to $S$. The $C^{\ast }$-algebra representing the torsion part is identified with a natural subalgebra ${\mathcal{A}}_{S}$ of ${\mathcal{Q}}_{S}$. For the $K$-theory of ${\mathcal{Q}}_{S}$, the cardinality of $S$ determines the free part and is also relevant for the torsion part, for which the greatest common divisor $g_{S}$ of $\{p-1:p\in S\}$ plays a central role as well.
In the case where $|S|\leq 2$ or $g_{S}=1$ we obtain a complete classification for ${\mathcal{Q}}_{S}$. Our results support the conjecture that ${\mathcal{A}}_{S}$ coincides with $\otimes _{p\in S}{\mathcal{O}}_{p}$. This would lead to a complete classification of ${\mathcal{Q}}_{S}$, and is related to a conjecture about $k$-graphs.
Difference between revisions of "Main Page"
Revision as of 20:54, 11 February 2009 The Problem Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math] The original proof of DHJ used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.
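The wildcard expansion defining a combinatorial line is easy to make concrete; here is a minimal sketch (the function name is ours, not from the wiki):

```python
def combinatorial_line(template):
    """Expand a wildcard string over the alphabet {1,2,3} into the
    three points of its combinatorial line."""
    assert "x" in template, "a line needs at least one wildcard"
    return [template.replace("x", s) for s in "123"]

# the example from the page, truncated to the prefix shown there
line = combinatorial_line("112x1xx3")
```

Applied to the template `112x1xx3`, this reproduces the three strings `11211113`, `11221223`, `11231333` listed above.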
Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (active) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (final call) (500-599) TBA (600-699) A reading seminar on density Hales-Jewett (active) A spreadsheet containing the latest lower and upper bounds for [math]c_n[/math] can be found here. Unsolved questions Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, "Can an approach of the following kind work?") and write them up in a single post. If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose. IP-Szemeredi (a weaker problem than DHJ) Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn't work here, then it won't work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by d numbers by taking all the [math]2^d[/math] possible sums. So, if the d numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^d[/math].) Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any [math]c[/math]-dense subset of the Cartesian product of an IP_d set (it is a two-dimensional pointset) has a corner. The statement is true.
One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for [math]k=4[/math]. (I will sketch the simple proof later.) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines, having intersection with the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our [math]c[/math]-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma. Finally, let me prove that there is a square if [math]d[/math] is large enough compared to [math]c[/math]. Every point of the Cartesian product has two coordinates, each a 0,1 sequence of length [math]d[/math]. This gives a one-to-one mapping to [math][4]^d[/math]: given a point [math]( (x_1,…,x_d),(y_1,…,y_d) )[/math] where [math]x_i,y_j[/math] are 0 or 1, it maps to [math](z_1,…,z_d)[/math], where [math]z_i=0[/math] if [math]x_i=y_i=0[/math], [math]z_i=1[/math] if [math]x_i=1[/math] and [math]y_i=0[/math], [math]z_i=2[/math] if [math]x_i=0[/math] and [math]y_i=1[/math], and finally [math]z_i=3[/math] if [math]x_i=y_i=1[/math]. Any combinatorial line in [math][4]^d[/math] defines a square in the Cartesian product, so the density HJ implies the statement. Gowers.7: With reference to Jozsef's comment, if we suppose that the d numbers used to generate the set are indeed independent, then it's natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d. Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences.
So the question is whether corners exist in every dense subset of the original Cartesian product. This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do. I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler. Gowers.22: A slight variant of the problem you propose is this. Let's take as our ground set the set of all pairs (U,V) of subsets of [n], and let's take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. It seems to be a nice combination of Sperner's theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U, (V\cup D)\setminus C), where D is disjoint from both U and V and C is contained in both U and V. That is your original problem, I think. I think I now understand better why your problem could be a good toy problem to look at first. Let's quickly work out what triangle-removal statement would be needed to solve it. (You've already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of [n]. We join U\in X to V\in Y if (U,V)\in A.
Ah, I see now that there's a problem with what I'm suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it's not really a set-theoretic reformulation after all. O'Donnell.35: Just to confirm I have the question right… There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x'), (y,y'), (z,z') such that for each i = 1…n, the 6 bits

[ x_i x'_i ]   [0 0] [0 0] [0 1] [1 0] [1 1] [1 1]
[ y_i y'_i ] : [0 0] [0 1] [0 1] [1 0] [1 0] [1 1]
[ z_i z'_i ]   [0 0] [1 0] [0 1] [1 0] [0 1] [1 1]

are equal to one of the six columns above? McCutcheon.469: IP Roth: Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$. Presumably, this should be (perhaps much) simpler than DHJ, k=3. High-dimensional Sperner Kalai.29: There is an analogue for Sperner with high-dimensional combinatorial spaces instead of "lines", but I do not remember the details (Kleitman? Katona? those are the usual suspects.) Fourier approach Kalai.29: A sort of generic attack one can try with Sperner is to look at f=1_A and express using the Fourier expansion of f the expression \int f(x)f(y)1_{x<y} where x<y is the partial order (=containment) for 0-1 vectors.
Then one may hope that if f does not have a large Fourier coefficient then the expression above is similar to what we get when A is random, and otherwise we can raise the density on subspaces. (OK, you can try it directly for the k=3 density HJ problem too, but Sperner would be easier;) This is not unrelated to the regularity philosophy. Gowers.31: Gil, a quick remark about Fourier expansions and the k=3 case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again. The problem was that the natural Fourier basis in [3]^n was the basis you get by thinking of [3]^n as the group \mathbb{Z}_3^n. And if that's what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that n is a multiple of 7, and you look at the set A of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that's a multiple of 7, from which it follows that the third point automatically lies in the line. So this set A has too many combinatorial lines. But I'm fairly sure — perhaps you can confirm this — that A has no large Fourier coefficient. You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset W of [n] and just ask that the numbers of 1s, 2s and 3s inside W are multiples of 7.
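Gowers.22's "positive" corner above can be checked mechanically. Here is a minimal sketch (the helper is ours, not from the thread) that tests whether a triple of (U,V) pairs has the form (U,V), (U∪D,V), (U,V∪D) with D nonempty and disjoint from both U and V:

```python
def is_positive_corner(p1, p2, p3):
    """Gowers.22's positive corner: p1=(U,V), p2=(U union D, V),
    p3=(U, V union D) for some nonempty D disjoint from U and V."""
    (U1, V1), (U2, V2), (U3, V3) = p1, p2, p3
    D = U2 - U1                      # candidate difference set
    return bool(D) and not (D & U1) and not (D & V1) \
        and U2 == U1 | D and V2 == V1 \
        and U3 == U1 and V3 == V1 | D

U, V, D = {0, 1}, {2}, {3, 4}
ok = is_positive_corner((U, V), (U | D, V), (U, V | D))
```

Degenerate triples (empty D) and triples where D meets U or V are rejected, matching the "disjoint unions" requirement in the comment.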
DHJ for dense subsets of a random set Tao.18: A sufficiently good Varnavides-type theorem for DHJ may have a separate application from the one in this project, namely to obtain a "relative" DHJ for dense subsets of a sufficiently pseudorandom subset of [3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of "dual functions" for Hales-Jewett), and so this is probably a bit off-topic. Bibliography H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem for k=3", Graph Theory and Combinatorics (Cambridge, 1988), Discrete Math. 75 (1989), no. 1-3, 227-241. R. McCutcheon, "The conclusion of the proof of the density Hales-Jewett theorem for k=3", unpublished. H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem", J. Anal. Math. 57 (1991), 64-119.
Some mathematical elements change their style depending on the context, whether they are in line with the text or in an equation-type environment. This article explains how to manually adjust the display style. Let's see an example. Depending on the value of $x$ the equation \( f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \) may diverge or converge. \[ f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \] Superscripts, subscripts and fractions are formatted differently. The maths styles can be set explicitly. For instance, if you want an in-line mathematical element to display as an equation-like element, put \displaystyle before that element. There are some more maths style-related commands that change the size of the text. In-line maths elements can be set with a different style: \(f(x) = \displaystyle \frac{1}{1+x}\). The same is true the other way around: \begin{eqnarray*} f(x) = \sum_{i=0}^{n} \frac{a_i}{1+x} \\ \textstyle f(x) = \textstyle \sum_{i=0}^{n} \frac{a_i}{1+x} \\ \scriptstyle f(x) = \scriptstyle \sum_{i=0}^{n} \frac{a_i}{1+x} \\ \scriptscriptstyle f(x) = \scriptscriptstyle \sum_{i=0}^{n} \frac{a_i}{1+x} \end{eqnarray*} For more information see
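For the two most common cases there are also convenient shorthands in the amsmath package (an assumption here; the article does not load any packages explicitly): \dfrac forces a display-style fraction and \tfrac a text-style one, regardless of the surrounding mode.

```latex
\documentclass{article}
\usepackage{amsmath} % provides \dfrac and \tfrac
\begin{document}
% \dfrac behaves like {\displaystyle\frac{...}{...}},
% \tfrac like {\textstyle\frac{...}{...}}:
In line, $\dfrac{1}{1+x}$ is taller than $\tfrac{1}{1+x}$.
\[ \tfrac{1}{1+x} \quad\text{versus}\quad \frac{1}{1+x} \]
\end{document}
```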
The Fokker-Planck equation for a probability distribution $P(\theta,t)$: \begin{align} \frac{\partial P(\theta,t)}{\partial t}=-\frac{\partial}{\partial\theta}\Big[[\sin(k\theta)+f]P(\theta,t)-D\frac{\partial P(\theta,t)}{\partial\theta}\Big], \end{align} where $f$, $k$, $D$ are constants, and the initial distribution is a delta function. Starting from Abhishek Halder's derivation of an ODE, we can indeed get a closed form, involving the solutions of the double-confluent Heun equation. $\phi \left( \theta \right) = C_1 {\it HeunD} \left({\frac{2}{kd}},{\frac { \left( 2\,k-4\,\lambda \right) d+{f}^{2}+1}{k^{2}d^{2}}},{\frac {4\,if}{{k}^{2}{d}^{2}}},{\frac { \left( 2k+4\lambda \right) d-{f}^{2}-1}{{k}^{2}{d}^{2}}},{\frac {i}{\tan \left(k\theta/2 \right) }}\right) {{\rm e}^{{\frac {1}{kd} \left( -i\sin\left( k\theta/2 \right) \cos \left( k\theta/2 \right) +f \arctan \left( {\frac {\sin \left( k\theta/2 \right) }{\cos \left( k\theta/2 \right) }} \right) - \left( \cos \left( k\theta/2 \right) \right) ^{2} \right) }}} + C_2\,{\it HeunD} \left({-\frac {2}{kd}},{\frac { \left( 2k-4\lambda \right) d+{f}^{2}+1}{{ k}^{2}{d}^{2}}},{\frac {4\,if}{{k}^{2}{d}^{2}}},{\frac { \left( 2k+4 \lambda \right) d-{f}^{2}-1}{{k}^{2}{d}^{2}}},{\frac {i}{\tan \left( k\theta/2 \right) }} \right) {{\rm e}^{{\frac {1}{kd} \left( i\sin \left( k\theta/2 \right) \cos \left( k\theta/2 \right) +f\arctan \left( {\frac {\sin \left( k\theta/2 \right) }{ \cos \left( k\theta/2 \right) }} \right) - \left( \cos \left( k\theta/2 \right) \right) ^{2} \right) }}} $ A change of variables, $\theta \mapsto t/k$, can simplify the results a fair bit. Not a complete solution, but some ideas that may be helpful.
Since the eigen-expansion of the solution $P(\theta,t)$ must be of the form $\displaystyle\sum_{i=1}^{n} c_{i} e^{-\lambda_{i}t}\psi_{i}(\theta)$, where $\left(\lambda_{i},\psi_{i}\right)$ are the $i$-th eigenvalue-eigenfunction pair, we can substitute the ansatz $e^{-\lambda t}\psi(\theta)$ for $P(\theta,t)$ in the Fokker-Planck PDE, and multiply both sides by $e^{\lambda t}$ to get the following second order homogeneous linear ODE $$ D\psi^{\prime\prime} - \left(f + \sin(k\theta)\right)\psi^{\prime} + \left(\lambda - k\cos(k\theta)\right)\psi = 0 $$ where $^{\prime}$ denotes the derivative w.r.t. $\theta$. From here, if we can find all pairs $(\lambda,\psi)$ that solve the above ODE, then the transient PDE solution can be constructed as a linear combination of such $e^{-\lambda t}\psi(\theta)$. Notice that for $\lambda = 0$, the above ODE coincides with the one you'd get if you had set $\displaystyle\frac{\partial P}{\partial t} = 0$ in the original equation, which means the eigenfunction corresponding to $\lambda = 0$ is the stationary density. The constants $c_{i}$ would follow from the normalization condition $\displaystyle\int_{-\pi}^{\pi}P(\theta,t) d\theta = 1$ for all $t$. So the question now is whether one could solve the above ODE in closed form. Closed-form fundamental solutions in many cases are obtained in Igor Tanski's paper
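This eigenvalue problem is easy to cross-check numerically. The sketch below (our own finite-difference discretisation, with placeholder values for $f$, $k$, $D$ since the question leaves them general) builds the Fokker-Planck operator in conservation form on a periodic grid and confirms that $\lambda = 0$, i.e. the stationary density, lies in its spectrum:

```python
import numpy as np

# Placeholder constants; the question leaves f, k, D general.
f, k, D = 0.3, 1.0, 0.5
M = 400                                   # grid points on [-pi, pi)
theta = np.linspace(-np.pi, np.pi, M, endpoint=False)
h = theta[1] - theta[0]

# Discretise (A P)_i = d/dtheta[(sin(k theta)+f) P] - D P''  with periodic
# central differences; the eigenvalues lambda of A are the decay rates in
# P(theta,t) = sum_i c_i exp(-lambda_i t) psi_i(theta).
drift = np.sin(k * theta) + f
A = np.zeros((M, M))
for i in range(M):
    ip, im = (i + 1) % M, (i - 1) % M
    A[i, ip] += drift[ip] / (2 * h) - D / h**2
    A[i, im] += -drift[im] / (2 * h) - D / h**2
    A[i, i] += 2 * D / h**2

ev = np.linalg.eigvals(A)
# Every column of A sums to zero (discrete probability conservation),
# so lambda = 0 -- the stationary density -- is an exact eigenvalue.
```

The conservation-form discretisation is deliberate: because each column of the matrix sums to zero, the discrete operator inherits the exact zero mode, rather than only approximating it as the grid is refined.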
1. Observation of Higgs boson production in association with a top quark pair at the LHC with the ATLAS detector Physics Letters B, ISSN 0370-2693, 09/2018, Volume 784, Issue C, pp. 173 - 191 The observation of Higgs boson production in association with a top quark pair, based on the analysis of proton–proton collision data at a centre-of-mass... PHYSICS, NUCLEAR | SEARCH | ASTRONOMY & ASTROPHYSICS | PHYSICS, PARTICLES & FIELDS | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences Journal Article 2. Combinations of single-top-quark production cross-section measurements and $|f_{\rm LV}V_{tb}|$ determinations at $\sqrt{s}=7$ and 8 TeV with the ATLAS and CMS experiments 02/2019 JHEP 05 (2019) 088 This paper presents the combinations of single-top-quark production cross-section measurements by the ATLAS and CMS Collaborations, using... Physics - High Energy Physics - Experiment Journal Article 3. Observation of Centrality-Dependent Acoplanarity for Muon Pairs Produced via Two-Photon Scattering in Pb+Pb Collisions at sqrt[s_{NN}]=5.02 TeV with the ATLAS Detector Physical review letters, 11/2018, Volume 121, Issue 21, p. 212301 This Letter presents a measurement of γγ→μ^{+}μ^{-} production in Pb+Pb collisions recorded by the ATLAS detector at the Large Hadron Collider at... Journal Article 4.
Search for new phenomena in events with same-charge leptons and $b$-jets in $pp$ collisions at $\sqrt{s}= 13$ TeV with the ATLAS detector 07/2018 JHEP 12 (2018) 039 A search for new phenomena in events with two same-charge leptons or three leptons and jets identified as originating from $b$-quarks in a... Physics - High Energy Physics - Experiment Physics - High Energy Physics - Experiment Journal Article 5. Search for Higgs boson pair production in the $\gamma\gamma b\bar{b}$ final state with 13 TeV $pp$ collision data collected by the ATLAS experiment 07/2018 JHEP 11 (2018) 040 A search is performed for resonant and non-resonant Higgs boson pair production in the $\gamma\gamma b\bar{b}$ final state. The data set... Physics - High Energy Physics - Experiment Physics - High Energy Physics - Experiment Journal Article 6. Search for the Higgs boson produced in association with a vector boson and decaying into two spin-zero particles in the $H \rightarrow aa \rightarrow 4b$ channel in $pp$ collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector 06/2018 JHEP 10 (2018) 031 A search for exotic decays of the Higgs boson into a pair of spin-zero particles, $H \rightarrow aa$, where the $a$-boson decays into... Physics - High Energy Physics - Experiment Physics - High Energy Physics - Experiment Journal Article 7. Constraints on off-shell Higgs boson production and the Higgs boson total width in $ZZ\to4\ell$ and $ZZ\to2\ell2\nu$ final states with the ATLAS detector 08/2018 Phys. Lett. B 786 (2018) 223 A measurement of off-shell Higgs boson production in the $ZZ\to4\ell$ and $ZZ\to2\ell2\nu$ decay channels, where $\ell$ stands for... Physics - High Energy Physics - Experiment Physics - High Energy Physics - Experiment Journal Article 8. 
Combination of inclusive and differential $ \mathrm{t}\overline{\mathrm{t}} $ charge asymmetry measurements using ATLAS and CMS data at $ \sqrt{s}=7 $ and 8 TeV Journal of High Energy Physics (Online), ISSN 1029-8479, 04/2018, Volume 2018, Issue 4 Journal Article 9. Search for doubly charged scalar bosons decaying into same-sign W boson pairs with the ATLAS detector The European Physical Journal C, ISSN 1434-6044, 1/2019, Volume 79, Issue 1, pp. 1 - 30 A search for doubly charged scalar bosons decaying into W boson pairs is presented. It uses a data sample from proton–proton collisions corresponding to an... Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | DISTRIBUTIONS | PHYSICS | PHYSICS, PARTICLES & FIELDS | Analysis | Detectors | Collisions (Nuclear physics) | Phenomenology | Protons | Confidence intervals | Large Hadron Collider | Leptons | Particle collisions | Searching | Decay | Luminosity | Bosons | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Regular - Experimental Physics Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | DISTRIBUTIONS | PHYSICS | PHYSICS, PARTICLES & FIELDS | Analysis | Detectors | Collisions (Nuclear physics) | Phenomenology | Protons | Confidence intervals | Large Hadron Collider | Leptons | Particle collisions | Searching | Decay | Luminosity | Bosons | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Regular - Experimental Physics Journal Article 10. 
Correlated long-range mixed-harmonic fluctuations measured in pp, p+Pb and low-multiplicity Pb+Pb collisions with the ATLAS detector Physics Letters B, ISSN 0370-2693, 02/2019, Volume 789, Issue C, pp. 444 - 471 Correlations of two flow harmonics and via three- and four-particle cumulants are measured in 13 TeV , 5.02 TeV +Pb, and 2.76 TeV peripheral Pb+Pb collisions... PHYSICS OF ELEMENTARY PARTICLES AND FIELDS PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article 11. Measurement of the photon identification efficiencies with the ATLAS detector using LHC Run 2 data collected in 2015 and 2016 The European Physical Journal C, ISSN 1434-6044, 3/2019, Volume 79, Issue 3, pp. 1 - 41 The efficiency of the photon identification criteria in the ATLAS detector is measured using $$36.1\hbox { fb}^1$$ 36.1fb1 to $$36.7\hbox { fb}^1$$ 36.7fb1 of... Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PP COLLISIONS | ROOT-S=13 TEV | MASS | PHYSICS, PARTICLES & FIELDS | Measurement | Comparative analysis | Detectors | Photons | Simulation | Large Hadron Collider | Efficiency | Transverse momentum | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology | PP COLLISIONS | ROOT-S=13 TEV | MASS | PHYSICS, PARTICLES & FIELDS | Measurement | Comparative analysis | Detectors | Photons | Simulation | Large Hadron Collider | Efficiency | Transverse momentum | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article 12. 
Comparison of fragmentation functions for light-quark- and gluon-dominated jets from $pp$ and Pb+Pb collisions in ATLAS Physical Review Letters, ISSN 0031-9007, 02/2019, Volume 123, Issue 4 Phys. Rev. Lett. 123, 042001 (2019) Charged-particle fragmentation functions for jets azimuthally balanced by a high-transverse-momentum, prompt, isolated... PHYSICS OF ELEMENTARY PARTICLES AND FIELDS PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article 13. Measurement of photon–jet transverse momentum correlations in 5.02 TeV Pb + Pb and pp collisions with ATLAS Physics Letters B, ISSN 0370-2693, 02/2019, Volume 789, Issue C, pp. 167 - 190 Jets created in association with a photon can be used as a calibrated probe to study energy loss in the medium created in nuclear collisions. Measurements of... ROOT-S(NN)=2.76 TEV | ASTRONOMY & ASTROPHYSICS | PHYSICS, NUCLEAR | PP COLLISIONS | LEAD-LEAD COLLISIONS | DEPENDENCE | PHYSICS, PARTICLES & FIELDS | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences ROOT-S(NN)=2.76 TEV | ASTRONOMY & ASTROPHYSICS | PHYSICS, NUCLEAR | PP COLLISIONS | LEAD-LEAD COLLISIONS | DEPENDENCE | PHYSICS, PARTICLES & FIELDS | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS | Fysik | Subatomär fysik | Physical Sciences | Subatomic Physics | Naturvetenskap | Natural Sciences Journal Article PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 06/2019, Volume 122, Issue 23, p. 231801 Dark matter particles, if sufficiently light, may be produced in decays of the Higgs boson. This Letter presents a statistical combination of searches for H ->... 
DARK-MATTER | CANDIDATES | PARTICLE | PHYSICS, MULTIDISCIPLINARY | LHC | MASS | Confidence intervals | Standard model (particle physics) | Statistical analysis | Dark matter | Searching | Particle decay | Higgs bosons | Quarks | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS DARK-MATTER | CANDIDATES | PARTICLE | PHYSICS, MULTIDISCIPLINARY | LHC | MASS | Confidence intervals | Standard model (particle physics) | Statistical analysis | Dark matter | Searching | Particle decay | Higgs bosons | Quarks | Physics - High Energy Physics - Experiment | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article
HamiltonianDerivatives¶

class HamiltonianDerivatives(configuration, filename, object_id, repetitions=None, atomic_displacement=None, constraints=None, use_equivalent_bulk=None, constrain_electrodes=None, log_filename_prefix='hamiltonian_displacement_', processes_per_displacement=1)¶

Constructor for the HamiltonianDerivatives object.

Parameters:
configuration (BulkConfiguration | MoleculeConfiguration | DeviceConfiguration) – The configuration for which to calculate the Hamiltonian derivatives.
filename (str) – The full or relative path to save the results to. See nlsave().
object_id (str) – The object id to use when saving. See nlsave().
repetitions (Automatic | list of ints) – The number of repetitions of the system in the A, B, and C-directions given as a list of three positive integers, e.g. [3, 3, 3], or Automatic. Each repetition value must be odd. Default: Automatic
atomic_displacement (PhysicalQuantity of type length) – The distance the atoms are displaced in the finite difference method. Default: 0.01 * Angstrom
constraints (list of type int) – List of atomic indices that will be constrained, e.g. [0, 2, 10]. Default: empty list []
use_equivalent_bulk (bool) – Control if a DeviceConfiguration should be treated as a BulkConfiguration. Default: True
constrain_electrodes (bool) – Control if the electrodes and electrode extensions should be constrained in the case of a DeviceConfiguration. Default: False
processes_per_displacement (int) – The number of processes assigned to calculating a single displacement. Default: 1 process per displacement.

atomicDisplacement()¶
Returns: The distance the atoms are displaced in the finite difference method.
Return type: PhysicalQuantity with length unit.

constrainElectrodes()¶
Returns: Boolean determining if the electrodes and electrode extensions are constrained in the case of a DeviceConfiguration.
Return type: bool

constraints()¶
Returns: The list of constrained atoms.
Return type: list of int

filename()¶
Returns: The filename where the study object is stored.
Return type: str

logFilenamePrefix()¶
Returns: The filename prefix for the logging output of the study.
Return type: str | LogToStdOut

nlprint(stream=None)¶
Print a string containing an ASCII table useful for plotting the Study object.
Parameters: stream (python stream) – The stream the table should be written to. Default: NLPrintLogger()

numberOfProcessesPerTask()¶
Returns: The number of processes to be used to execute each task. If None, all available processes should execute each task collaboratively.
Return type: int | None

objectId()¶
Returns: The name of the study object in the file.
Return type: str

processesPerDisplacement()¶
Returns: The number of processes per displacement.
Return type: int

repetitions()¶
Returns: The number of repetitions of the system in the A, B, and C-directions.
Return type: list of three int.

update()¶
Run the calculations for the study object.

useEquivalentBulk()¶
Returns: Boolean determining if a DeviceConfiguration is treated as a BulkConfiguration.
Return type: bool.

Usage Examples¶

Note: Study objects behave differently from analysis objects. See the Study object overview for more details.

Calculate the Hamiltonian derivatives for a system repeated five times in the B direction and three times in the C direction.

hamiltonian_derivatives = HamiltonianDerivatives(
    configuration,
    filename='HamiltonianDerivatives.hdf5',
    object_id='hamiltonian_derivatives',
    repetitions=(1, 5, 3),
)
hamiltonian_derivatives.update()

When using repetitions=Automatic, the cell is repeated such that all atoms within a pre-defined, element-pair dependent interaction range are included.

hamiltonian_derivatives = HamiltonianDerivatives(
    configuration,
    filename='HamiltonianDerivatives.hdf5',
    object_id='hamiltonian_derivatives',
    repetitions=Automatic,
)
hamiltonian_derivatives.update()

The default number of repetitions, i.e.
repetitions=Automatic, can be found before a calculation using the function checkNumberOfRepetitions().

(nA, nB, nC) = checkNumberOfRepetitions(configuration)

Notes¶

The Hamiltonian derivatives are calculated using the central finite difference method in a repeated cell constituting a super cell. That is, the Hamiltonian derivatives are calculated for each atom in the central cell by displacing the atom in the supercell and determining the Hamiltonian for two displacements from its original position along each of the Cartesian directions. In DFT, the derivatives of the Hamiltonian \(\hat{H}\) can be expressed as the derivatives of the effective potential \(V_{\text{eff}}\) since the kinetic term does not contribute. Thus the Hamiltonian derivatives for the \(i\) and \(j\) basis functions can be approximated as (reconstructing the central-difference expression that was lost in extraction)

\[\frac{\partial H_{ij}}{\partial R_{I,\alpha}} \approx \frac{\langle i\,|\,V_{\text{eff}}(R_{I,\alpha}+\delta)\,|\,j\rangle - \langle i\,|\,V_{\text{eff}}(R_{I,\alpha}-\delta)\,|\,j\rangle}{2\delta},\]

where \(R_{I, \alpha}\) is the \(\alpha\) Cartesian coordinate for atom \(I\) in the central unit cell, and \(\delta\) is the atomic displacement. For Slater-Koster calculators, the Hamiltonian derivatives are described by the on-site and off-site parameters and their distance dependence.

Aborted HamiltonianDerivatives calculations can be resumed by re-running the same script or by reading the study object from file and calling update() on it. The study object will automatically detect which displacement calculations have already been carried out and only run the calculations that are not yet completed.

Notes for DFT¶

When calculating the Hamiltonian derivatives, both the number of sampled k-points for the super cell and the repetitions in the confined directions must be 1. In the following, the number of sampled k-points for the super cell and the repetitions in the non-confined directions will be addressed. To simplify things, a system with only one non-confined direction will be used, see Fig. 131 (a), but the relations for the one non-confined direction apply to all non-confined directions. The bulk configuration in Fig. 131 (a) is converged in the total energy with respect to a k-point sampling of \((1, 1, N_{\text{C}})\). The number of repetitions in the super cell in the non-confined direction, see Fig. 131 (b), is chosen large enough that the change in the effective potential \(\text{d} V_{\text{eff}}\) goes to zero at the boundaries of the super cell in the non-confined directions for every atomic displacement, confer Fig. 131 (c). For this system, five repetitions of the unit cell in the non-confined direction are enough for the change in the effective potential to go to zero at the boundaries. The recommended k-point sampling in the non-confined direction is the number of k-points in the non-confined direction for the unit cell divided by the number of repetitions in the non-confined direction. The k-point sampling then becomes \((1, 1, \frac{N_{\text{C}}}{\text{repetitions in C}})\), in this particular case \((1, 1, N_{\text{C}}/5)\).

Note: From QuantumATK-2019.03 onwards, the k-point sampling and density-mesh-cutoff will be automatically adapted to the given number of repetitions when setting up the super cell inside DynamicalMatrix and HamiltonianDerivatives. That means you can specify the calculator settings for the unit cell and use it with any desired number of repetitions in dynamical matrix and Hamiltonian derivatives calculations.

The Hamiltonian derivatives calculations generally require a low tolerance in the IterationControlParameters settings, e.g. a tolerance of 1e-6. Finally, it should be noted that the HuckelCalculator currently does not support calculation of the Hamiltonian derivatives.
Pansi wrote: Two pieces of fruit are selected out of a group of 8 pieces of fruit consisting only of apples and bananas. What is the probability of selecting exactly 2 bananas? (1) The probability of selecting exactly 2 apples is greater than 1/2. (2) The probability of selecting 1 apple and 1 banana in either order is greater than 1/3. \(2\,\,{\text{extractions}}\,\,{\text{from}}\,\,8\,\,\left\{ \begin{gathered} \,{\text{apples}}\,\,\left( A \right) \hfill \\ \,{\text{bananas}}\,\,\left( {B = 8 - A} \right) \hfill \\ \end{gathered} \right.\) \(? = P\left( {{\text{both}}\,{\text{extractions}}\,\,{\text{bananas}}} \right)\) \(\left( 1 \right)\,\,\,P\left( {{\text{both}}\,{\text{extractions}}\,\,{\text{apples}}} \right) = \frac{{C\left( {A,2} \right)}}{{C\left( {8,2} \right)}} > \frac{1}{2}\,\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,A\left( {A - 1} \right) > 28\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,A \geqslant 6\,\,\,\,\,\,\,\,\,\) \(\left\{ {\begin{array}{*{20}{c}} {{\text{If}}\,\,\left( {A,B} \right) = \left( {6,2} \right)} \\ {{\text{If}}\,\,\left( {A,B} \right) = \left( {7,1} \right)} \end{array}\begin{array}{*{20}{c}} {\,\,\, \Rightarrow \,\,\,\,\,? = \frac{1}{{C\left( {8,2} \right)}} = \frac{1}{{28}}} \\ { \Rightarrow \,\,\,\,\,? = 0} \end{array}\,\,\,\,} \right.\) \(\left( 2 \right)\,\,\,P\left( {{\text{one}}\,\,{\text{each}}} \right) = \frac{{A \cdot \left( {8 - A} \right)}}{{C\left( {8,2} \right)}} > \frac{1}{3}\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,A\left( {8 - A} \right) > 9\frac{1}{3}\,\,\,\,\,\,\,\,\,\) \(\left\{ \begin{gathered} \,{\text{Retake}}\,\,\,\left( {A,B} \right) = \left( {6,2} \right)\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,\,? = \frac{1}{{C\left( {8,2} \right)}} = \frac{1}{{28}}\,\, \hfill \\ \,{\text{If}}\,\,\left( {A,B} \right) = \left( {5,3} \right)\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,\,? 
= \frac{{C\left( {3,2} \right)}}{{C\left( {8,2} \right)}} = \frac{3}{{28}}\,\,\, \hfill \\ \end{gathered} \right.\,\,\) \(\left( {1 + 2} \right)\,\,\,\left\{ \begin{gathered} \,A \geqslant 6 \hfill \\ \,A\left( {8 - A} \right) > 9\frac{1}{3} \hfill \\ \end{gathered} \right.\,\,\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,\,\,A = 6\,\,\,\,\,\,\,\, \Rightarrow \,\,\,\,\,\,\,\,{\text{SUFF}}.\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\left[ {\,?\,\, = \,\,\frac{1}{{C\left( {8,2} \right)}} = \frac{1}{{28}}\,} \right]\) This solution follows the notations and rationale taught in the GMATH method. Regards, Fabio. _________________ Fabio Skilnik :: GMATH method creator (Math for the GMAT) Our high-level "quant" preparation starts here: https://gmath.net
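Not part of Fabio's solution, but the case analysis above is easy to confirm by brute force, enumerating the possible apple counts:

```python
from math import comb

# Enumerate apple counts A (so bananas B = 8 - A) and test each statement.
def p_two_apples(A):  return comb(A, 2) / comb(8, 2)
def p_one_each(A):    return A * (8 - A) / comb(8, 2)
def p_two_bananas(A): return comb(8 - A, 2) / comb(8, 2)

s1 = [A for A in range(9) if p_two_apples(A) > 1/2]  # statement (1)
s2 = [A for A in range(9) if p_one_each(A) > 1/3]    # statement (2)
both = [A for A in s1 if A in s2]                    # (1) and (2) together

print(s1, s2, both, p_two_bananas(6))  # [6, 7, 8] [2, 3, 4, 5, 6] [6] 1/28
```

Note that $A=8$ also satisfies statement (1) on its own (the probability is $1 > 1/2$), alongside the $(6,2)$ and $(7,1)$ cases worked above; either way, each statement alone allows more than one answer, and only the two combined pin down $A=6$ and the answer $1/28$.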
Consider the well known identity from elementary number theory: $$\sum_{k=1}^n \tau(k)=\sum_{d=1}^n \left\lfloor \frac{n}{d} \right\rfloor,$$ where of course the asymptotic expression for both sides is $n \log n + (2\gamma-1) n + O(\sqrt{n}).$ Some experimentation with Maple seems to show that if $v:=\frac{n}{\log n},$ then $$ F(v):=\lim_{n\rightarrow \infty} \frac{\sum_{d=1}^{\lfloor v\rfloor} \left\lfloor \frac{n}{d} \right\rfloor }{\sum_{d=1}^n \left\lfloor \frac{n}{d} \right\rfloor },\quad (1) $$ is increasing towards $1$ from below. Is it possible to prove what the limit in (1) is, if $v=c\frac{n}{\log n},$ for some $c \in (0,1)$?
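For what it's worth, the experiment is easy to reproduce (my own sketch below). Heuristically, the partial sum is $\approx n(\log v + \gamma)$ while the full sum is $\approx n\log n + (2\gamma-1)n$, so with $v = c\,n/\log n$ the quotient behaves like $1 - \frac{\log\log n + O(1)}{\log n}$, suggesting the limit in (1) is $1$ for every fixed $c$, approached very slowly:

```python
import math

# Ratio of the partial sum of floor(n/d), d <= v = floor(c*n/log n),
# to the full divisor sum, as in (1).
def ratio(n, c=1.0):
    v = int(c * n / math.log(n))
    total = sum(n // d for d in range(1, n + 1))
    partial = sum(n // d for d in range(1, v + 1))
    return partial / total

for n in (10**3, 10**4, 10**5):
    print(n, round(ratio(n), 4))   # creeps upward toward 1
```

The $\log\log n / \log n$ error term explains why the convergence looks so sluggish in Maple experiments at feasible $n$.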
The Central Limit Theorem states that the distribution of $\bar{X}$ approaches normality as $n$ increases, regardless of the distribution of $X$. Stated more formally, for samples of size $n$ drawn from a distribution with mean $\mu$ and finite variance $\sigma^2$, the distribution of the sample mean is approximately $N\left(\mu, \frac{\sigma^2}{n} \right)$ for sufficiently large $n$.

Glossary

central limit theorem: The mean of a random sample of size n from a distribution of mean m and variance v will be normally distributed with mean m and variance v/n, for large n.
limit: the value that a function f(x) approaches as the variable x approaches a value such as 0 or infinity.
mean: the sum of all the members of the list divided by the number of items in the list.
sample: a collection of sampling units drawn from a sampling frame.
union: the union of two sets A and B is the set containing all the elements of A and B.
variance: the square of the standard deviation; a measure of dispersion.

This question appears in the following syllabi:

Syllabus | Module | Section | Topic
AQA A-Level (UK - Pre-2017) | S1 | Estimation | Central Limit Theorem
AQA A2 Further Maths 2017 | Statistics | Central Limit Theorem - Extra | Central Limit Theorem
AQA AS/A2 Further Maths 2017 | Statistics | Central Limit Theorem - Extra | Central Limit Theorem
CCEA A-Level (NI) | S2 | Estimation | Central Limit Theorem
CIE A-Level (UK) | S2 | Estimation | Central Limit Theorem
Edexcel A-Level (UK - Pre-2017) | S3 | Estimation | Central Limit Theorem
Edexcel A2 Further Maths 2017 | Further Statistics 1 | Central Limit Theorem | Central Limit Theorem
Edexcel AS/A2 Further Maths 2017 | Further Statistics 1 | Central Limit Theorem | Central Limit Theorem
I.B. Higher Level | 7 | Estimation | Central Limit Theorem
Methods (UK) | M15 | Estimation | Central Limit Theorem
OCR A-Level (UK - Pre-2017) | S2 | Estimation | Central Limit Theorem
OCR A2 Further Maths 2017 | Statistics | Hypothesis Tests and Confidence Intervals | Central Limit Theorem
OCR MEI A2 Further Maths 2017 | Statistics B | Sample Mean and Central Limit Theorem | Central Limit Theorem
OCR-MEI A-Level (UK - Pre-2017) | S3 | Estimation | Central Limit Theorem
Universal (all site questions) | E | Estimation | Central Limit Theorem
WJEC A-Level (Wales) | S2 | Estimation | Central Limit Theorem
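To see the theorem in action, here is a small simulation (mine, not from the page): sample means of size $n = 50$ from an exponential distribution with mean $1$ and variance $1$ should be approximately $N(1, 1/50)$.

```python
import random
import statistics

random.seed(0)
n, trials = 50, 20000

# Draw many samples of size n from Exp(1) and record each sample mean.
means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(trials)]

print(round(statistics.fmean(means), 3))  # close to mu = 1
print(round(statistics.stdev(means), 3))  # close to sigma/sqrt(n) = 1/sqrt(50)
```

Even though the exponential distribution is strongly skewed, a histogram of these 20,000 sample means is already close to the bell curve at $n = 50$.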
I am currently reading "Quantum Computation and Quantum Information" by Nielsen and Chuang. In the section about Quantum Simulation, they give an illustrative example (section 4.7.3), which I don't quite understand: Suppose we have the Hamiltonian $$ H = Z_1 ⊗ Z_2 ⊗ \cdots ⊗ Z_n,\tag{4.113}$$ which acts on an $n$ qubit system. Despite this being an interaction involving all of the system, it can indeed be simulated efficiently. What we desire is a simple quantum circuit which implements $e^{-iH\Delta t}$, for arbitrary values of $\Delta t$. A circuit doing precisely this, for $n = 3$, is shown in Figure 4.19. The main insight is that although the Hamiltonian involves all the qubits in the system, it does so in a classical manner: the phase shift applied to the system is $e^{-i\Delta t}$ if the parity of the $n$ qubits in the computational basis is even; otherwise, the phase shift should be $e^{i\Delta t}$. Thus, simple simulation of $H$ is possible by first classically computing the parity (storing the result in an ancilla qubit), then applying the appropriate phase shift conditioned on the parity, then uncomputing the parity (to erase the ancilla). Furthermore, extending the same procedure allows us to simulate more complicated extended Hamiltonians. Specifically, we can efficiently simulate any Hamiltonian of the form $$H = \bigotimes_{k=1}^n\sigma_{c\left(k\right)}^k,$$ where $\sigma_{c(k)}^k$ is a Pauli matrix (or the identity) acting on the $k$th qubit, with $c(k) \in \{0, 1, 2, 3\}$ specifying one of $\{I, X, Y, Z\}$. The qubits upon which the identity operation is performed can be disregarded, and $X$ or $Y$ terms can be transformed by single qubit gates to $Z$ operations. This leaves us with a Hamiltonian of the form of (4.113), which is simulated as described above. How can we obtain the gate $e^{-i\Delta t Z}$ from elementary gates (for example from Toffoli gates)?
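Not from the book, but the construction is easy to check numerically: below I build the $n=3$ circuit explicitly (three CNOTs into an ancilla, the rotation $e^{-i\Delta t Z}$ on the ancilla, then the same CNOTs to uncompute) and verify that it matches $e^{-i\Delta t\, Z\otimes Z\otimes Z}$ on the data qubits.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])

def kron(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(control, target, n=4):
    # CNOT on an n-qubit register: identity on the control's |0> branch,
    # X on the target in the control's |1> branch.
    ops0 = [I2] * n; ops0[control] = P0
    ops1 = [I2] * n; ops1[control] = P1; ops1[target] = X
    return kron(*ops0) + kron(*ops1)

dt = 0.37
# Qubits 0-2 are data, qubit 3 is the ancilla (initialized to |0>).
parity = cnot(2, 3) @ cnot(1, 3) @ cnot(0, 3)   # compute parity into ancilla
rz = kron(I2, I2, I2, np.diag([np.exp(-1j * dt), np.exp(1j * dt)]))  # exp(-i dt Z) on ancilla
circuit = parity @ rz @ parity                   # CNOTs are self-inverse

target_U = np.diag(np.exp(-1j * dt * np.diag(kron(Z, Z, Z))))  # exp(-i dt Z⊗Z⊗Z)

rng = np.random.default_rng(1)
psi = rng.normal(size=8) + 1j * rng.normal(size=8)
psi /= np.linalg.norm(psi)
anc0 = np.array([1., 0.])

lhs = circuit @ np.kron(psi, anc0)
rhs = np.kron(target_U @ psi, anc0)
print(np.allclose(lhs, rhs))  # True
```

Even parity leaves the ancilla in $|0\rangle$, which picks up $e^{-i\Delta t}$ under the ancilla rotation, and odd parity picks up $e^{+i\Delta t}$, exactly the eigenvalues of $e^{-i\Delta t\,Z\otimes Z\otimes Z}$.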
For simplicity, let's consider a 1D BS world. The only source of randomness comes from the Brownian motion dynamics $dB_t$. The risk-free rate is $r$ (one may assume it is constant for the time being). I know that, by virtue of Girsanov's theorem, the Brownian motion under the risk-neutral measure is defined by $$dB_t^{\Bbb Q} = \lambda dt + dB_t$$ where $\lambda$ is the unique market price of risk, or the so-called Sharpe ratio. Under the risk-neutral measure, any non-dividend-paying stock price process $S_t$ thus follows $$\frac{dS_t}{S_t} = rdt + \sigma_SdB_t^{\Bbb Q}.$$ However, in Kerry Back's A Course in Derivative Securities, page 220, the author claims without proof that the instantaneous rate of return for a call option on the stock price $C_t$ is also $r$, i.e. $$\frac{dC_t}{C_t} = rdt + \sigma_C d B_t^{\Bbb Q}$$ where $\sigma_C$ is some stochastic process that we're not interested in. The author makes crucial use of the above formula (i.e. the drift of $C_t$ is $rC_tdt$) to derive the BS PDE. Question: is it true that under the risk-neutral measure, any non-dividend-paying asset price $X_t$ must have its instantaneous rate of return equal to $r$? If so, what would be a rigorous explanation for this? Edit: Antoine is spot on. Under the risk-neutral measure, any discounted asset price $Y_t=e^{-rt}X_t$ must be a martingale, or equivalently an Ito integral without drift. Hence $$\frac{dY_t}{Y_t}=\sigma_Y dB_t^{\Bbb Q},$$ where $\sigma_Y$ can be a quite general stochastic process. On the other hand, by the compounding rule of Ito processes, $$\frac{dY_t}{Y_t}=-rdt+\frac{dX_t}{X_t}.$$ Therefore it follows that $$\frac{dX_t}{X_t}=rdt+\sigma_Y dB_t^{\Bbb Q}.$$
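A quick Monte Carlo sanity check of the statement (my own sketch, with illustrative parameters): simulating $S_T$ under $\Bbb Q$, the discounted stock recovers $S_0$ and the discounted call payoff recovers the Black-Scholes time-0 price, i.e. both traded assets drift at $r$ under the risk-neutral measure.

```python
import numpy as np
from math import log, sqrt, exp, erf

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0

# Simulate S_T under Q: dS/S = r dt + sigma dB^Q.
rng = np.random.default_rng(0)
Zn = rng.standard_normal(1_000_000)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Zn)

disc_stock = exp(-r * T) * ST.mean()                      # should recover S0
disc_call = exp(-r * T) * np.maximum(ST - K, 0.0).mean()  # should recover C_0

# Black-Scholes time-0 call value for comparison.
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_call = S0 * Phi(d1) - K * exp(-r * T) * Phi(d2)

print(round(disc_stock, 2), round(disc_call, 2), round(bs_call, 2))
```

The agreement of `disc_call` with `bs_call` is exactly the martingale property $C_0 = \Bbb E^{\Bbb Q}[e^{-rT}C_T]$ from the Edit above, applied to the call.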
I am going mad trying to draw syntactic trees using the tikz-qtree package. Here is what I have come up with:

\begin{figure}
\centering
\begin{tikzpicture}[every node/.style={align=center},level distance=1.5cm]
\Tree [.S [.{NP\\($\uparrow$~{\sc subj})~=~$\downarrow$} [.{N\\$\uparrow$~=~$\downarrow$} [. John ] ] ]
          [.{VP\\$\uparrow$~=~$\downarrow$} [.V [.{saw\\$\uparrow$~=~$\downarrow$} ] ]
             [.{NP\\($\uparrow$~{\sc obj})~=~$\downarrow$} [.{DET\\($\uparrow$~{\sc def})~=~+} ]
                [.{N\\$\uparrow$~=~$\downarrow$} [. boy ] ] ] ] ]
\end{tikzpicture}
\caption{test}
\end{figure}

However, TeX keeps complaining about a runaway argument, and I can't seem to spot what I'm doing wrong. It seems to me that I need to use curly braces because the labels on the tree span multiple lines. Any ideas?
potfit uses the potential ensemble method 1) to quantify the uncertainty in fitted potential parameters by generating an ensemble of candidate potentials of varying suitability, sampled from around the best-fit potential in parameter space. To enable this feature, compile potfit with the uq option, see compiling. This option is only available for analytic potentials.

This generates an ensemble of potentials whose spread can be used to quantify the uncertainties in the fitted parameters. Taking an uncorrelated subsample of the MCMC output forms a potential ensemble representing the uncertainties in each parameter by the ensemble spread and covariance. Propagating the uncertainty represented by the ensemble members through molecular dynamics, the resultant uncertainties in quantities of interest can be obtained. For an example of this see 2).

The ensemble of potentials is generated by taking a series of Markov chain Monte Carlo steps starting from the best-fit potential parameters. The step size in each parameter direction is scaled according to the curvature in that direction. This is encoded using the eigenvalues of the hessian at the best-fit potential minimum, for potential parameters $\Theta=\{\theta_1,\ldots, \theta_N\}$:

\begin{equation}\Delta\theta_{i}=\sum_{j=1}^{\rm N}\sqrt{\frac{\rm R}{{\rm{max}} (1,\lambda_{j})}}V_{ij}r_{j} \end{equation}

where $\lambda_j$ are the hessian eigenvalues, $V_{ij}$ the eigenvector components and $r_j$ is Gaussian noise. The R value, acc_rescaling, is a tunable parameter for the MCMC step acceptance rate.

The MCMC algorithm samples potentials from the distribution at a temperature, $T_0$, set by the number of potential parameters and the minimum cost value. In the majority of cases this temperature should be sufficient to generate a suitable ensemble.
In the event that a reduced sampling temperature is required, it can be scaled by a parameter $\alpha$ (uq_temp), such that $T=\alpha T_0$. Only use this if you know what you are doing!

If hess_pert = -1, the parameter perturbations used in the finite difference calculation of the hessian are found individually. This algorithm can be used as a diagnostic tool to understand the curvature on the length scale of the sampling temperature. However, care should be taken when analysing the information, as many assumptions about the cost minimum are inherently made (e.g. that the landscape at the sampling temperature height is harmonic). Each parameter is perturbed to bracket the perturbation value yielding the cost set by the sampling temperature, $C_T = C_0 + T = C_0 +\frac{2\alpha C_0}{N}$. When the bracketing interval is within 5% of $C_T$, a line is drawn between the two bounds and the gradient is used to choose the perturbation value estimated to give a cost of $C_T$.

If the landscape at this scale is not harmonic, the eigenvalues of the hessian will be negative. In this case a reduced sampling temperature may be required, and the user should think about improving the reference data being fit to, as well as the suitability and possible limitations of the potential model being used.

parameter name | type | default value | short explanation
acc_rescaling* | float | (none) | R value to tune the MCMC acceptance rate.
acc_moves* | integer | (none) | Number of accepted MCMC moves required.
ensemblefile | string | startpot | Potential ensemble output filename, ensemblefile.uq. If this is not defined then output_prefix.uq is used. Should neither ensemblefile nor output_prefix be defined, the startpot filename is used, with a '.uq' extension.
uq_temp | float | 1.0 | Temperature scaling parameter $\alpha$.
use_svd | boolean | 0 | Use singular value decomposition to find hessian eigenvalues (default is eigenvalue decomposition).
hess_pert | float | 0.00001 | Percentage parameter perturbation in the hessian finite difference calculation. (If hess_pert = -1, a bracketing algorithm is used to find individual parameter perturbation values, see explanation above; only use this if you know what you are doing!)
eig_max | float | 1.0 | Alternative MCMC step perturbation maximum value in max(eig_max, $\lambda_j$).
write_ensemble | integer | 0 | Writes a potential file every write_ensemble members.
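The step rule above is compact enough to sketch directly (my own illustration, not potfit source code): soft hessian directions receive larger random moves than stiff ones.

```python
import numpy as np

# Delta theta_i = sum_j sqrt(R / max(1, lambda_j)) * V_ij * r_j
rng = np.random.default_rng(0)

def mcmc_step(hessian, R=1.0):
    lam, V = np.linalg.eigh(hessian)           # hessian eigenvalues/eigenvectors
    scale = np.sqrt(R / np.maximum(1.0, lam))  # soft directions move further than stiff ones
    r = rng.standard_normal(len(lam))          # Gaussian noise r_j
    return V @ (scale * r)

H = np.diag([0.5, 4.0, 100.0])  # toy hessian: one soft and two stiff directions
print(mcmc_step(H))             # typical component sizes ~1, ~0.5, ~0.1
```

Note how the max(1, $\lambda_j$) floor also keeps the step finite along flat or negatively curved directions, which is relevant to the non-harmonic case discussed above.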
The problem with this hamiltonian is that there is a difference between symmetric/Hermitian operators and self-adjoint operators. It looks like a nit-picky mathematician poking holes into everything, but it is in fact important: In general, the domains of $\hat{A}$ and $\hat{A}^\dagger$ do not coincide. If $\hat{A}=\hat{A}^\dagger$ on $D(\hat{A})$, then $D(\hat{A})\subseteq D(\hat{A}^\dagger)$ holds and $\hat{A}$ is called symmetric or Hermitian. If, in addition, $D(\hat{A}^\dagger)=D(\hat{A})$, then $\hat{A}$ is called self-adjoint. The important existence and reality theorems for eigenvalues and eigenvectors usually hold only for self-adjoint operators. This is made clear on page 13 of your textbook. While your operator is indeed symmetric, it is unlikely to be self-adjoint. More specifically, $H$ is densely defined and therefore has an adjoint $H^\dagger$, which is an operator on some domain $D(H^\dagger)$ that satisfies $\langle\phi|H\psi\rangle=\langle H^\dagger\phi|\psi\rangle$ for all $\psi\in D(H)$ and $\phi\in D(H^\dagger)$. Your real job is characterizing the domains of both operators and seeing if they coincide, or figuring out whether $H$ can be extended to a larger domain such that the adjoint's domain will coincide with the original domain. None of that is particularly easy. The thing is, though, that these mathematical troubles very rarely come on their own, and are usually accompanied by trouble in the corresponding classical problem. This is beautifully made clear, together with a clear exposition of the necessary mathematical facts, in the paper: Classical symptoms of quantum illnesses. Chengjun Zhu and John R. Klauder. Am. J. Phys. 61 no. 7, 605 (1993). The main point is that unless the classical problem has well-defined solutions for all time and for all initial conditions, you really have no business complaining about unexpected behaviour in the quantum counterpart.
For your model, the classical hamiltonian $H=\frac12(q^3p+pq^3)=q^3 p$ produces the Hamilton equations$$\left\{\begin{array}{}\dot p=-\frac{\partial H}{\partial q}=&-3q^2 p,\\\dot q=\phantom{-}\frac{\partial H}{\partial p}=&q^3.\end{array}\right.$$These are fairly easy to solve, and the solutions are not well behaved:$$\left\{\begin{array}{}\frac{1}{2q^2}&=t_0-t,\\\frac{p^2}{p_0^2}&=(t_0-t)^3,\end{array}\right.$$where $t_0$ and $p_0$ are constants of integration. Note, in particular, that there are no (real) solutions after a certain time $t_0=t_\text{in}+\frac{1}{2q(t_\text{in})^2}$. How, then, are you expecting reasonable physics out of the quantized version of this? Finally, as a coda, let me address Qmechanic's very interesting comment. It is true that for a given physical system, you will have a single hamiltonian and many other physical observables. How, then, is one to make sense of this construction for an arbitrary observable? I would counter that any arbitrary observable can be considered as a hamiltonian, and that things like the spectrum do follow from properties like the corresponding time evolution. Take some arbitrary self-adjoint, as-nice-as-necessary physical observable $\hat A$ in a physical system with state space $\mathcal H$. Even if it does not make physical sense, you can definitely postulate the flow associated with that observable, i.e. the curve $t\to|\psi(t)\rangle$ in $\mathcal H$ that obeys the Schrödinger-like equation$$i\partial_t|\psi(t)\rangle=\hat A|\psi(t)\rangle.$$If you want to bring the spectrum into play, you can solve this equation by decomposing the flow into the observable's eigenbasis, so $|\psi(t)\rangle=\sum_n\psi_n e^{-ia_n t}|n\rangle$, where $\hat A|n\rangle=a_n|n\rangle$ and $\psi_n=\langle n|\psi(0)\rangle$. (Here one needs to assume that $|\psi(0)\rangle$ is "general" or "random" enough that all the $\psi_n$ are nonzero.) The question, though, is how can you extract the spectrum from the time evolution?
The answer to that is to Fourier transform into the frequency domain: define$$|\tilde\psi(\omega)\rangle=\int_{-\infty}^\infty\text dt \, e^{i\omega t}|\psi(t)\rangle$$and see what the eigenvector decomposition does to it:$$|\tilde\psi(\omega)\rangle=\int_{-\infty}^\infty\text dt \, e^{i\omega t}\sum_n\psi_n e^{-ia_n t}|n\rangle=\sum_n\psi_n|n\rangle \int_{-\infty}^\infty\text dt \, e^{i(\omega -a_n) t}=2\pi\sum_n\delta(\omega -a_n)\psi_n|n\rangle .$$That is: the Fourier transform of the $\hat A$-induced flow is a train of delta spikes at the eigenvalues of $\hat A$, with the eigenvectors as the coefficients. To put it another way, this offers a means of obtaining the spectrum from the time evolution: solve it in some way, Fourier transform the solution, and then read the eigenvalues from the support of the transform and the eigenvectors from the value at those points. In fact, this is a useful technique and it is quite widely deployed in numerical solutions of certain classes of problems. (For more information, see David Tannor's excellent textbook Introduction to Quantum Mechanics: A Time-Dependent Perspective.) ... and, of course, if you tried to diagonalize in this way an operator with the kind of problems described above, you'd be heading straight for trouble. Surely it is unreasonable to ask the quantum flow to be well-behaved when the classical flow isn't!
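This spectral-peaks idea is easy to demonstrate numerically. Below is a minimal sketch (my own, not from the answer above): a small symmetric matrix stands in for $\hat A$, the autocorrelation $\langle\psi(0)|\psi(t)\rangle$ is sampled on a time grid, and a discrete Fourier transform shows peaks at the eigenvalues. The matrix, initial state, and grid sizes are all arbitrary choices.

```python
import numpy as np

# Stand-in observable: tridiagonal symmetric matrix with known spectrum
A = np.diag([2.0] * 4) + np.diag([1.0] * 3, 1) + np.diag([1.0] * 3, -1)
w, V = np.linalg.eigh(A)               # eigenvalues 2 + 2 cos(k*pi/5), k = 1..4

# autocorrelation of the flow:  c(t) = <psi(0)| e^{-iAt} |psi(0)>
psi0 = np.zeros(4)
psi0[0] = 1.0                          # overlaps with every eigenvector here
coeffs = np.abs(V.T @ psi0) ** 2

N, dt = 4096, 0.05
t = np.arange(N) * dt
c = (coeffs * np.exp(-1j * np.outer(t, w))).sum(axis=1)

# discrete analogue of psi~(omega) = Int e^{i omega t} psi(t) dt
spec = np.abs(np.fft.ifft(c)) * N      # ifft carries the e^{+i omega t} sign
omega = 2 * np.pi * np.fft.fftfreq(N, d=dt)

peak = omega[np.argmax(spec)]          # lands on a bin next to an eigenvalue
```

With a finite time window the deltas become finite-width peaks, so the eigenvalues are only resolved to the grid spacing $2\pi/(N\,\Delta t)$.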
I'm studying the pricing of a Double-Barrier binary option on the price of $S$. By this I mean an option that pays $X$ at maturity $T$ if the lower ($H1$) and upper ($H2$) barriers are not hit during the lifetime of the option. I was told that the valuation could be done by subtracting an up-and-in cash(-at-expiry)-or-nothing struck at $H2$ from a down-and-out cash-or-nothing struck at $H1$. That is: \begin{align*} KO_{H1}- KI_{H2} = &\ (X\ \ \text{if}\ \ \forall\ \ t \le T: S_{t}>H1) - (X\ \ \text{if}\ \ \exists\ \ t \le T: S_{t}>H2) \end{align*} This valuation kind of makes sense to me because we are considering all the paths that stay above $H1$ and subtracting the paths that got above $H2$, which would only leave us the paths between the lower and upper barrier. However, I am doubtful about it, since I can't find this way of doing it anywhere. Is there a mistake in it? I've seen a formula for this which involves some infinite series and $\sin(x)$ functions, but it seems way too different from my approach. Much help appreciated
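Here is how one could sanity-check the decomposition numerically: a quick pathwise Monte Carlo under a driftless GBM (a sketch of mine; all parameter values are made up), comparing the double-no-touch payoff against $KO_{H1}-KI_{H2}$ on every path.

```python
import numpy as np

# Pathwise Monte Carlo comparison under a driftless GBM (made-up parameters)
rng = np.random.default_rng(42)
S0, H1, H2 = 100.0, 90.0, 110.0
sigma, T, n_steps, n_paths = 0.30, 1.0, 252, 20_000

dt = T / n_steps
z = rng.standard_normal((n_paths, n_steps))
steps = (-0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)]))

never_below = (S > H1).all(axis=1)            # down-and-out survives
ever_above = (S >= H2).any(axis=1)            # up-and-in knocks in

double_no_touch = never_below & ~ever_above   # pays X iff neither barrier is hit
decomposition = never_below.astype(int) - ever_above.astype(int)

# the two payoffs disagree exactly on paths that breach BOTH barriers,
# where the decomposition pays -1 instead of 0
both = ~never_below & ever_above
```

The `both` mask isolates the paths where the two payoffs differ, which is where any discrepancy in the valuation would have to come from.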
Revision as of 15:59, 2 July 2017

Contents
1 Explanation of Polarization Mixing
2 Absolute vs. Relative Angle of Rotation
3 Effect of an X - Y Delay
4 Another Look at X-Y Delays
5 Effect of Polarization Mixing on Observations

Explanation of Polarization Mixing
The newer 2.1-m antennas [Ants 1-8 and 12] have AzEl (azimuth-elevation) mounts (also referred to as AltAz; the terms Altitude and Elevation are used synonymously), which means that their crossed linear feeds have a constant angle relative to the horizon (the axis of rotation being at the zenith). The older 2.1-m antennas [Ants 9-11 and 13], and the 27-m antenna [Ant 14], have Equatorial mounts, which means that their crossed linear feeds have a constant angle with respect to the celestial equator, the axis of rotation being at the north celestial pole. Thus, the celestial coordinate system is tilted by the local co-latitude (complement of the latitude). This tilt results in a relative feed rotation between the 27-m antenna and the AzEl mounts, but not between the 27-m and the older equatorial mounts. This angle is called the "parallactic angle," and is given by a function of the site latitude, the Azimuth angle [0 north], and the Elevation angle [0 on horizon]. This function obviously changes with position on the sky, and as we follow a celestial source (e.g.
the Sun) across the sky this rotation angle is continuously changing in a surprisingly complex manner as shown in Figure 1. Note that at zero hour angle for declinations less than the local latitude (37.233 degrees at OVRO), but is at higher declinations. The crossed linear dipole feeds on all antennas are oriented with the X-feed as shown in Figure 2, at 45-degrees from the horizontal, when the antenna is pointed at 0 hour angle. This is the view as seen looking down at the feed from the dish side, although since the feeds are at the prime focus this is the same as the view projected onto the sky. At other positions, the feeds on the AzEl antennas experience a rotation by angle relative to the equatorial antennas. Because of this rotation, the normal polarization products XX, XY, YX and YY on baselines with dissimilar antennas (one AzEl and the other equatorial) become mixed. The effect of this admixture can be written by the use of Jones matrices (see Hamaker, Bregman & Sault (1996) for a complete description). Consider antenna A whose feed orientation is rotated by , cross-correlated with antenna B with unrotated feed. The corresponding Jones matrices, acting on signal vector are: and the cross-correlation is found by taking the outer product, i.e. which relates the output polarization products to the input as where we have dropped the subscripts and complex conjugate notation for brevity. Of course, there are other effects such as unequal gains and cross-talk between feeds that are also at play, but for now we ignore those and focus only on the effect of this polarization mixing due to the parallactic angle. Absolute vs. Relative Angle of Rotation However, the above description fails when we consider a rotation on both antennas, so that In this case, performing the outer product gives: whereas intuitively we want something like: which becomes the identity matrix when , i.e. when the feeds on two antennas of a baseline are parallel. 
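The absolute-vs-relative issue just described can be seen in a few lines of code. This is only an illustrative sketch: the rotation-matrix sign convention and the `R`/`mixing` helpers are my own assumptions, not taken from the page's rendered matrices.

```python
import numpy as np

def R(chi):
    # Jones matrix of a feed rotated by chi (sign convention assumed)
    c, s = np.cos(chi), np.sin(chi)
    return np.array([[c, -s], [s, c]])

def mixing(chi_a, chi_b):
    # outer (Kronecker) product acting on the vector v = (XX, XY, YX, YY)
    return np.kron(R(chi_a), R(chi_b).conj())

chi = np.deg2rad(30)
M_one = mixing(chi, 0.0)          # one feed rotated: genuine mixing of products
M_both = mixing(chi, chi)         # both feeds rotated together: NOT the identity
M_rel = mixing(chi - chi, 0.0)    # "relative angle" prescription: identity
```

The naive outer product with equal rotations on both antennas (`M_both`) does not reduce to the identity even though the feeds are parallel, which is exactly the motivation for working with the relative angle.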
The difference seems to be that the earlier expression evaluates to components of X and Y in an absolute coordinate frame, whereas we are interested only in the difference in angle of the feeds in a relative coordinate frame. This choice no doubt has implications for measuring Stokes Q and U, but for solar data we are not concerned with linear polarization. One way to achieve this in the framework of Jones matrices is to form Mueller matrices from the outer-product of the rotation times the gain matrix: and then form an overall matrix where . Effect of an X - Y Delay Regardless of how the math is done, we expect that the result should be dependent on the difference in angle, , so as a practical solution let us simply replace with and proceed as in section 1. and the cross-correlation is found by taking the outer product, i.e. which relates the output polarization products to the input as Now consider that there is a "multi-band" delay on both antennas, and . Then (2) becomes: The result agrees with our intuition: This approach will be implemented, to see how well it does in correcting for the effects of differential feed rotation. Another Look at X-Y Delays Prior to doing the feed rotation correction, it is essential that any X-Y delays be measured and corrected. We have devised a calibration procedure in which we take data on a strong calibrator with the feeds parallel, then rotate the 27-m (antenna 14) feed so that the feeds are perpendicular. For an unpolarized source, this results in signal on the XX and YY polarization channels in the first case, and on the XY and YX polarization channels in the second case. As a practical matter, this can be done on all antennas at once if a strong source is observed near 0 HA, ideally timed to start 20 min before 0 HA and complete 20 min after 0 HA. The source 2253+161 works well, as does 1229+006 (3C273). Two observations are needed: one with the 27-m feed unrotated (gives parallel-feed data for all dishes, if done near 0 HA).
Gives strong signal in XX and YY channels. Example one with the 27-m feed rotated to -90 degrees (gives crossed-feed data for all dishes, if done near 0 HA). Gives strong signal in XY and YX channels. Example Note that the feed should be rotated by -90, not 90, in order for the expressions below to be used correctly. Background Consider antenna-based phases on X polarization as and on Y polarization as , i.e. the Y phases are nominally the same as for X, except for a 90-degree rotation and a possible X-Y delay difference , here written as delay phase . We are finding that this delay is a complicated function of frequency, so it is just as well to keep it in terms of phase. On a baseline , then, the four polarization terms become: We then examine the channel differences on baselines with antenna 14, i.e. where . Consequently, we can solve redundantly in two ways for the antenna-based delay phases: where we specifically use to emphasize that this quantity for all antennas should be the same value, because the measurements are all baselines with antenna 14. In practice, we can average the two measurements for each antenna for , and the 26 measurements for antenna 14 for , although care must be taken to do an appropriate average to take care of the phase ambiguity. One way to do this is to form unit vectors and sum them, then find the phase of the summed vector. The Figure shows the results for a measurement on 2017-07-02. Applying the Measurements Once we have these, we can apply corrections to each of the polarization channels, and then do the feed rotation correction. The corrections are done to data taken in a normal way, without rotating the 27-m feed. The application of the correction is: It is quite pleasing that this agrees perfectly with the analysis of the previous section, and the only difference is one of emphasis. Rather than using fixed delays and , here we are using the frequency-dependent delay-phase. Actually, there is another, not-so-trivial difference.
Here I am considering making these phase corrections first, and then applying the feed-rotation correction, i.e.: where the primed quantities are the phase-corrected channel data. I have analyzed a set of observations and got the values for , but when I attempt to make the corrections to parallel and crossed data I find that these are the required corrections: where is an antenna-dependent value that is zero for antennas 1, 2, 3, 6, and 8, but is 1 for antennas 4, 5, and 7. The values for the other antennas remain to be determined. I am not sure what this is, but it probably has to do with differences in the Tecom feed internal connections, which may be reversed for some antennas. When I attempt to make corrections to non-crossed data (i.e. data taken with normal observations), I find that I need to apply: So, the location of the has changed, but this could be due to the sign of the parallactic angle at the time the observations were taken, since the phase flips by 180 degrees at 0 HA. I tried applying the feed rotation correction, and it does not seem to work. Okay, I looked at feed_rot_simulation.py, and I now realize that for my tests, which were done with negative angle for the crossed-feed measurements, I should expect the phase difference between XX and XY to be , not zero. For a positive angle, the flip by would appear in YX. This means that I need a diff correction: Effect of Polarization Mixing on Observations See Powerpoint Presentation File:EOVSA Status Jan 2017.pptx The main effect that is noticeable in observations is that strong signals on the crossed hands (XY and YX) will appear when feeds are misaligned. When feeds are properly aligned, we expect to see only weak signals in the crossed hands, nominally zero, but in practice non-zero due to slight cross-talk between X and Y, which can be due to non-orthogonality or simply coupling between the separate channels.
Note that non-equal gains will not cause cross-talk, but can complicate efforts to untangle it. To make the observations, we observe calibrator sources at different declinations over a broad range of hour angle. The two sources observed so far are 3C84, at declination 41 degrees, and 3C273, at declination 2 degrees. We then plot the observed amplitude and phase for each of the observed polarization products [XX, XY, YX, YY]. For this demonstration, we use the baseline of Ant1-14, where Ant1 has the rotating feed and Ant14 has the non-rotating one (with respect to the celestial coordinate system). Figure 3 shows the 3C84 observation and simulation. The upper-left panel is the observed amplitude of the four polarization products during an observation from 08:30-15:00 UT, and the upper-right panel is the corresponding phase. The lower panels are the simulation amplitude and phase, where the simulation assumed constant polarization products with Amp[XX, XY, YX, YY] = [0.15, 0, 0, 0.23], and Phase[XX, XY, YX, YY] = [3.1, 0, 0, 2.4] (radians). A noise level of 0.015 rms was added. It is clear that the amplitude simulation works very well, but the phase does not have the correct character--the only deviation from constant phase is an abrupt 180-degree phase jump in XY and YX at 0 hour angle. Such phase jumps are seen in the observed data, but in addition there is a large amount of phase rotation in the observations that is not in the simulation. As a test, a simulation was done applying a phase rotation based on , as shown in Figure 4. Applying a rotation by the parallactic angle itself proved to be too small, and did not show the symmetric behavior around 0 hour angle, so the phase rotation applied in Fig. 4 is . It now looks about right, but there is a curvature in the simulation phase that is not really seen in the data. As a check, we repeated the exercise on 3C273, again applying a phase rotation of , with the result shown in Figure 5.
As before, the amplitudes match quite well. For this different source, however, the measured phase variation is not symmetric about 0 hour angle, so the simulated phases do not match the observed ones. Finally, we instead apply a phase correction without the absolute value, i.e. just , with the result in Figure 6. Clearly this is "better," but still does not match the phase variation precisely. Other Possible Reasons for the Observed Phase Variations It has been suggested that there may be some secular change in phase not related to feed rotation, perhaps a delay error due to a baseline error, or because the Az and El axes do not cross at a common point. However, baseline errors would seem to be unlikely, because exactly the same character in the phase variations occurs on all of the AzEl antennas. And anyway a delay error is ruled out for another reason--the phase variation is not frequency dependent. Figures 7 & 8 illustrate these facts. Based on these tests, I conclude that the observed phase variations are indeed due to the relative feed rotation, but that something is missing in the above mathematical analysis or its application. One possibility is that there is some subtlety in the complex-conjugation of the Jones matrices, since in the above analysis they are entirely real. --Dgary (talk) 11:50, 22 October 2016 (UTC) More On Axis Offset Dr. Avinash Deshpande (Raman Research Institute, Bangalore -- Thanks to Dr. Ananthakrishnan for contacting him) confirms that no phase rotation is expected for the parallactic correction, aside from the 180-degree phase jump at the meridian crossing. He suggests that a non-intersecting axis is more likely, and notes that my claim that the plots show no evidence of a delay is too hasty. It may be that the range of frequencies in Figure 8 is too small to reveal a frequency dependence that may nevertheless be there.
He notes that the effect of non-intersecting axes is a phase rotation of where is the elevation angle, and is the offset distance. As a test, I applied this function, using cm (based on the apparent phase variation in the observed phases), and obtained the results in Figures 9 and 10. Although the observed phases show a bit more curvature than the simulation, this can be due to residual baseline errors, so I think it is fair to say this is a promising result. We can test this very shortly, since the feed rotator on the 27-m antenna is soon to be working (I hope). The prediction is that rotating the 27-m feed to keep it parallel to the 2.1-m feeds on these antennas will correct the amplitudes, but the phases will still show the same behavior (since they are due to a different cause), and also that using a wider range of frequencies (which we can do, especially now that the high-frequency receiver is available) will show a frequency dependence in the amount of phase variation. --Dgary (talk) 04:55, 8 November 2016 (UTC) Further update On 2016 Nov 13, new observations of 3C84 were taken, and the correction for the axis offset (d = 15.2 cm) was applied, as shown in Figure 11 (at left). It appears that this correction works well, and that there is a residual baseline error on each of the antennas due to the fact that they were originally determined without the axis-offset correction. --Dgary (talk) 14:20, 15 November 2016 (UTC)
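As a footnote, the unit-vector phase averaging mentioned in the X-Y delay section (sum unit phasors, then take the angle of the sum) can be sketched as follows; the function name and sample values are made up.

```python
import numpy as np

def circular_mean(phases):
    # sum unit phasors, then take the angle of the sum
    return np.angle(np.sum(np.exp(1j * np.asarray(phases))))

wrapped = [3.10, -3.12, 3.08]        # all close to +/- pi (about 180 degrees)
naive = np.mean(wrapped)             # badly wrong: about 1.02 rad
robust = circular_mean(wrapped)      # close to pi, as desired
```

The naive arithmetic mean is thrown off by the 2π ambiguity, while the phasor sum handles phases straddling the ±180 degree boundary correctly.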
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem. Yeah it does seem unreasonable to expect a finite presentation Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections. How is the Evolute of an Involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th... Player $A$ places $6$ bishops wherever he/she wants on the chessboard with an infinite number of rows and columns. Player $B$ places one knight wherever he/she wants. Then $A$ makes a move, then $B$, and so on... The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ... Player $A$ chooses two queens and an arbitrary finite number of bishops on the $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, the knight cannot be placed on the fields which are under attack ... The invariant formula for the exterior derivative: why would someone come up with something like that? I mean it looks really similar to the formula for the covariant derivative along a vector field for a tensor, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is deriving the Poisson bracket of two one-forms This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you to a place different from $p$.
And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place. Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$, and on the truncation edge it's $\omega([X, Y])$ Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$ So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$ Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$ But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I^2$ is the little truncated square I described and taking $\text{vol}(I^2) \to 0$ For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor. Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$ You can verify that this in particular means it's pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right?
You can take the directional derivative of a function at a point in the direction of a single vector at that point Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices). Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$? My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$... @Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$. This might be complicated to grok at first but basically think of it as currying. Making a bilinear map a linear one, like in linear algebra. You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost. I'll use the latter notation consistently if that's what you're comfortable with (Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of space of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$) @Albas So this fella is called the exterior covariant derivative.
Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values in $E$ aka functions on $M$ with values in $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values in $E$ aka bundle-homs $TM \to E$) Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just the space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$. Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms. That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$ Voila, Riemann curvature tensor Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean? Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$. Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$.
We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$. Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$? Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle. You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form (The cotangent bundle is naturally a symplectic manifold) Yeah So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$. But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!! So I was reading this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$ Is $\mathbb{E}\left[\frac{(\frac{X_{1}+\cdots+X_{n}}{n})^2}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ? Uh apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty @Ultradark I don't know what you mean, but you seem down in the dumps champ.
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job. My only quibble with this solution is that it doesn't seem very elegant. Is there a better way? In fact, the action of $S_4$ on its three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought of as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}. Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group Everything about $S_4$ is encoded in the cube, in a way The same can be said of $A_5$ and the dodecahedron, say
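The subgroup witnesses above are easy to check with sympy's permutation groups (note the 0-based labels, so $(1,2)$ becomes `Permutation(0, 1)`):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# <(1 2), (1 2 3)> settles the d = 6 case
S3 = PermutationGroup(Permutation(0, 1, size=4), Permutation(0, 1, 2, size=4))

# witnesses for the remaining divisors of 24
C2 = PermutationGroup(Permutation(0, 1, size=4))
C3 = PermutationGroup(Permutation(0, 1, 2, size=4))
C4 = PermutationGroup(Permutation(0, 1, 2, 3))
D8 = PermutationGroup(Permutation(0, 1, 2, 3), Permutation(0, 2, size=4))  # a 2-Sylow
A4 = PermutationGroup(Permutation(0, 1, 2, size=4), Permutation(1, 2, 3, size=4))
S4 = PermutationGroup(Permutation(0, 1, size=4), Permutation(0, 1, 2, 3))

orders = [G.order() for G in (C2, C3, C4, S3, D8, A4, S4)]
```

Together with the trivial subgroup of order 1, this exhibits a subgroup for every divisor of 24.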
It's perhaps worth reading about Lagrangian duality and a broader relation (at times equivalence) between: optimization subject to hard (i.e. inviolable) constraints, and optimization with penalties for violating constraints. Quick intro to weak duality and strong duality Assume we have some function $f(x,y)$ of two variables. For any $\hat{x}$ and $\hat{y}$, we have: $$ \min_x f(x, \hat{y}) \leq f(\hat{x}, \hat{y}) \leq \max_y f(\hat{x}, y)$$ Since that holds for any $\hat{x}$ and $\hat{y}$ it also holds that: $$ \max_y \min_x f(x, y) \leq \min_x \max_y f(x, y)$$ This is known as weak duality. In certain circumstances, you also have strong duality (also known as the saddle point property): $$ \max_y \min_x f(x, y) = \min_x \max_y f(x, y)$$ When strong duality holds, solving the dual problem also solves the primal problem. They're in a sense the same problem! Lagrangian for constrained Ridge Regression Let me define the function $\mathcal{L}$ as: $$ \mathcal{L}(\mathbf{b}, \lambda) = \sum_{i=1}^n (y_i - \mathbf{x}_i \cdot \mathbf{b})^2 + \lambda \left( \sum_{j=1}^p b_j^2 - t \right) $$ The min-max interpretation of the Lagrangian The Ridge regression problem subject to hard constraints is: $$ \min_\mathbf{b} \max_{\lambda \geq 0} \mathcal{L}(\mathbf{b}, \lambda) $$ You pick $\mathbf{b}$ to minimize the objective, cognizant that after $\mathbf{b}$ is picked, your opponent will set $\lambda$ to infinity if you chose $\mathbf{b}$ such that $\sum_{j=1}^p b_j^2 > t$. If strong duality holds (which it does here because Slater's condition is satisfied for $t>0$), you then achieve the same result by reversing the order: $$ \max_{\lambda \geq 0} \min_\mathbf{b} \mathcal{L}(\mathbf{b}, \lambda) $$ Here, your opponent chooses $\lambda$ first! You then choose $\mathbf{b}$ to minimize the objective, already knowing their choice of $\lambda$.
The $\min_\mathbf{b} \mathcal{L}(\mathbf{b}, \lambda)$ part (taking $\lambda$ as given) is equivalent to the 2nd form of your Ridge Regression problem. As you can see, this isn't a result particular to Ridge regression; it is a broader concept. References (I started this post following an exposition I read in Rockafellar.) Rockafellar, R.T., Convex Analysis. You might also examine lectures 7 and 8 from Prof. Stephen Boyd's course on convex optimization.
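Here is a small numeric illustration of that equivalence on synthetic data (all names and values below are my own): solve the penalized ridge problem for some $\lambda$, set $t=\lVert\mathbf{b}\rVert^2$, and observe that no coefficient vector inside that constraint ball beats it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + 0.1 * rng.normal(size=n)

lam = 3.0
b_pen = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)  # penalized ridge
t = b_pen @ b_pen                                            # induced budget

def loss(b):
    r = y - X @ b
    return r @ r

# sample random coefficient vectors inside the ball ||b||^2 <= t;
# none of them should beat the penalized solution
cand = rng.normal(size=(1000, p))
cand *= (np.sqrt(t) * rng.uniform(size=(1000, 1))
         / np.linalg.norm(cand, axis=1, keepdims=True))
losses = np.array([loss(b) for b in cand])
```

The reason is exactly the argument in the post: since $\mathbf{b}(\lambda)$ minimizes the penalized objective, any feasible $\mathbf{b}'$ with $\lVert\mathbf{b}'\rVert^2 \le t$ satisfies $\text{loss}(\mathbf{b}') \ge \text{loss}(\mathbf{b}(\lambda))$.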
Quantum gates are said to be unitary and reversible. However, classical gates can be irreversible, like the logical AND and logical OR gates. Then, how is it possible to model irreversible classical AND and OR gates using quantum gates? Let's say we have a function $f$ which maps $n$ bits to $m$ bits (where $m<n$). $$f: \{0,1\}^{n} \to \{0,1\}^{m}$$ We could of course design a classical circuit to perform this operation. Let's call it $C_f$. It takes in as input $n$ bits. Let's say it takes as input $X$ and it outputs $f(X)$. Now, we would like to do the same thing using a quantum circuit. Let's call it $U_f$, which takes as input $|X\rangle$ and outputs $|f(X)\rangle$. Now remember that since quantum mechanics is linear the input qubits could of course be in a superposition of all the $n$-bit strings. So the input could be in some state $\sum_{X\in\{0,1\}^{n}}\alpha_X|X\rangle$. By linearity the output is going to be $\sum_{X\in\{0,1\}^{n}}\alpha_X|f(X)\rangle$. Evolution in quantum mechanics is unitary. And because it is unitary, it is reversible. This essentially means that if you apply a quantum gate $U$ on an input state $|x\rangle$ and get an output state $U|x\rangle$, you can always apply an inverse gate $U^{\dagger}$ to get back to the state $|x\rangle$. Notice carefully, in the above picture, that the number of input lines (i.e. six) is exactly the same as the number of output lines at each step. This is because of the unitarity of the operations. Compare this to classical operations like the logical AND where $0\wedge1$ gives a single bit output $0$. You can't reconstruct the initial bits $0$ and $1$ from the output, since even $0\wedge 0$ and $1\wedge0$ would have mapped to the same output $0$. But, consider the classical NOT gate. If the input is $0$ it outputs $1$, while if the input is $1$ it outputs $0$. Since this mapping is one-one, it can be easily implemented as a reversible unitary gate, namely, the Pauli-X gate.
However, for implementing a classical AND or a classical OR gate we need to think a bit more. Consider the CSWAP gate. Here's a rough diagram showing the scheme: In the CSWAP gate, depending on the control bit, the other two bits may or may not get swapped. Notice that there are three input lines and three output lines. So, it can be modeled as a unitary quantum gate. Now, if $z=0$: if $x=0$, the output is $0$, while if $x=1$, the output is $y$. If you notice, if $x=0$, we are outputting $\bar{x}\wedge y$ while if $x=1$ we are outputting $x\wedge y$. So we could successfully generate the output $x\wedge y$ which we wanted, although we ended up with some "junk" outputs $\bar{x}\wedge y$ and $x$. An interesting fact is that the inverse of the CSWAP gate is the CSWAP gate itself (check!). That's all! Remember that all classical gates can be constructed with the NAND gate, which can of course be constructed from an AND and a NOT gate. We effectively modelled the classical NOT and the classical AND gate using reversible quantum gates. Just to be on the safe side we can also add the quantum CNOT gate to our list, because using CNOT we can copy bits. Hence, the basic message is that using the quantum CSWAP, CNOT and NOT gates we can replicate any classical gate. BTW, there's a clever trick to get rid of the "junk" bits which are produced when quantum gates are used, but that's another story. P.S.: It's very important to get rid of the "junk" bits or else they can cause computational errors! Reference & Image Credits: Quantum Mechanics and Quantum Computation MOOC offered by UC Berkeley on edX.
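The claims about CSWAP can be checked directly by writing it as an $8\times 8$ permutation matrix (a NumPy sketch, not from the original answer; the wiring convention below — control first, ancilla $z=0$ last — is an assumption):

```python
import numpy as np
from itertools import product

def cswap():
    """CSWAP (Fredkin) gate on basis states |x, y, z>: if x == 1, swap y and z."""
    U = np.zeros((8, 8))
    for x, y, z in product((0, 1), repeat=3):
        col = (x << 2) | (y << 1) | z          # input basis index
        if x == 1:
            y, z = z, y                        # controlled swap
        U[(x << 2) | (y << 1) | z, col] = 1    # output basis index
    return U

U = cswap()
# A permutation matrix that is its own inverse, hence unitary and reversible:
assert np.array_equal(U @ U, np.eye(8))

# AND from CSWAP: on input |x, y, 0> the third output wire carries x AND y.
for x, y in product((0, 1), repeat=2):
    e = np.zeros(8)
    e[(x << 2) | (y << 1)] = 1                 # prepare |x, y, 0>
    out = int(np.argmax(U @ e))
    assert (out & 1) == (x & y)                # last bit of the output state
```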
1) The difference of the areas of two squares drawn on two line segments of different lengths is 51 sq. cm. Find the length of the greater line segment if one is longer than the other by 3 cm.
A) 7 cm B) 9 cm C) 11 cm D) 16 cm
2) A kite in the shape of a square with a diagonal of 32 cm is attached to an equilateral triangle of base 8 cm. Approximately how much paper has been used to make it? (use $\sqrt{3}=1.732$)
A) 539.712 cm² B) 538.721 cm² C) 540.712 cm² D) 539.217 cm²
3) A) 190 m² B) 180 m² C) 196 m² D) 195 m²
4) A) 380 m² B) 370 m² C) 374 m² D) 384 m²
5) The length of a room floor exceeds its breadth by 20 m. The area of the floor remains unaltered when the length is decreased by 10 m but the breadth is increased by 5 m. The area of the floor (in square metres) is
A) 280 B) 325 C) 300 D) 420
6) A) 60% B) 50% C) 40% D) 32%
7) A) 25 cm² B) $\frac{25}{2}\sqrt{2}$ cm² C) $25\sqrt{2}$ cm² D) $25\sqrt{3}$ cm²
8) A) $\left(3\sqrt{3}-\frac{\pi}{2}\right)$ cm² B) $\left(\sqrt{3}-\frac{3\pi}{2}\right)$ cm² C) $4\left(\sqrt{3}-\frac{\pi}{2}\right)$ cm² D) $\left(\frac{\pi}{2}-\sqrt{3}\right)$ cm²
9) A) 4 units B) $2\sqrt{3}$ units C) $2\sqrt{2}$ units D) $3\sqrt{2}$ units
10) A) 42 cm² B) 60 cm² C) 84 cm² D) 96 cm²
11) A) 30 units B) 25 units C) 10 units D) 15 units
12) A) $100\sqrt{3}$ cm² B) $8\sqrt{3}$ cm² C) $160\sqrt{3}$ cm² D) 100 cm²
13) A) 25% B) 55% C) 44% D) 56.25%
14) ABC is an equilateral triangle.
P and Q are two points on $\overline{AB}$ and $\overline{AC}$ respectively such that $PQ \parallel BC$. If $\overline{PQ} = 3$ cm, then the area of $\Delta APQ$ is:
A) $\frac{25}{4}$ sq. cm B) $\frac{25}{\sqrt{3}}$ sq. cm C) $\frac{9\sqrt{3}}{4}$ sq. cm D) $25\sqrt{3}$ sq. cm
15) A) area$(\Delta BCP)$ = area$(\Delta DPQ)$ B) area$(\Delta BCP)$ > area$(\Delta DPQ)$ C) area$(\Delta BCP)$ < area$(\Delta DPQ)$ D) area$(\Delta BCP)$ + area$(\Delta DPQ)$ = area$(\Delta BCD)$
16) In $\Delta PQR$, the line drawn from the vertex P intersects QR at a point S. If QR = 4.5 cm and SR = 1.5 cm, then the ratio of the areas of triangle PQS and triangle PSR is
A) 4 : 1 B) 3 : 1 C) 3 : 2 D) 2 : 1
17) ABCD is a parallelogram; X and Y are the midpoints of sides BC and CD respectively. If the area of $\Delta ABC$ is 16 cm², then the area of $\Delta AXY$ is
A) 12 cm² B) 8 cm² C) 9 cm² D) 10 cm²
18) A) $\frac{1}{2}\times$ area of $\Delta PQR$ B) $\frac{2}{3}\times$ area of $\Delta PQR$ C) $\frac{1}{4}\times$ area of $\Delta PQR$ D) $\frac{1}{8}\times$ area of $\Delta PQR$
19) A) $\frac{\sqrt{2}}{3}(a+b+c)$ B) $\frac{\sqrt{3}}{3}(a+b+c)^2$ C) $\frac{\sqrt{3}}{3}(a+b+c)$ D) $\frac{\sqrt{2}}{3}(a+b+c)^2$
20) ABCD is a square. Draw a triangle QBC on side BC considering BC as base, and draw a triangle PAC on AC as its base such that $\Delta QBC \sim \Delta PAC$.
Then $\frac{\text{Area of }\Delta QBC}{\text{Area of }\Delta PAC}$ is equal to
A) $\frac{1}{2}$ B) $\frac{2}{1}$ C) $\frac{1}{3}$ D) $\frac{2}{3}$
21) A rectangular park 60 metres long and 40 metres wide has two concrete crossroads running in the middle of the park, and the rest of the park is used as a lawn. If the area of the lawn is 2109 m², then the width of the road is
A) 3 metres B) 5 metres C) 6 metres D) 2 metres
Direction: Each of the following questions consists of a question followed by statements. You have to study the question and the statements and decide which of the statement(s) is/are necessary to answer the question.
22) (I) The perimeter of the field is 110 metres. (II) The length is 5 metres more than the width. (III) The ratio between length and width is 6 : 5 respectively.
A) I and II only B) Any two of the three C) I, and either II or III only D) None of these
23) (I) The length and breadth of the lawn are in the ratio of 2 : 1 respectively. (II) The width of the path is twenty times the length of the lawn. (III) The cost of gravelling the path @ Rs. 50 per m² is Rs. 4416.
A) All I, II and III B) III, and either I or II C) I and III only D) II and III only
24) The area of a rectangle lies between 40 cm² and 45 cm². If one of the sides is 5 cm, then its diagonal lies between
A) 8 cm and 10 cm B) 9 cm and 11 cm C) 10 cm and 12 cm D) 11 cm and 13 cm
25) ABCD is a parallelogram. E is a point on BC such that BE : EC = m : n. If AE and BD intersect at F, then what is the ratio of the area of $\Delta FEB$ to the area of $\Delta AFD$?
A) $m/n$ B) $(m/n)^2$ C) $(n/m)^2$ D) $\left[m/(n+m)^2\right]^2$
26) A) 9 : 10 B) 8 : 9 C) 9 : 11 D) 11 : 9
27) A) 15 cm B) 12 cm C) 9 cm D) 8 cm
28) In the given figure, ABCD is a quadrilateral with AB parallel to DC and AD parallel to BC, and ADC is a right angle. If the perimeter of $\Delta ABE$ is 6 units, what is the area of the quadrilateral?
A) $2\sqrt{3}$ sq. units B) 4 sq. units C) 3 sq. units D) $4\sqrt{3}$ sq. units
29) In the figure given below, ABCD is a parallelogram. P is a point on BC such that PB : PC = 1 : 2, and DP produced meets AB produced at Q. If the area of $\Delta BPQ$ is 20 sq. units, what is the area of $\Delta DCP$?
A) 20 sq. units B) 30 sq. units C) 40 sq. units D) none of the above
30) A) 160 sq. m B) $147\sqrt{3}$ sq. m C) $210\sqrt{3}$ sq. m D) $27\sqrt{3}$ sq. m
Let $E$ be a smooth vector bundle equipped with an affine connection $\nabla$. Suppose that $(E,\nabla)$ admits a non-zero parallel section. I think that $(\bigwedge^k E,\bigwedge^k \nabla)$ does not need to admit a non-zero parallel section even locally. How can one construct such an example? This seems especially interesting when $k < \operatorname{rank}(E)$. More generally, are there any non-trivial relations between the dimension of the space of parallel sections of $(E,\nabla)$ and that of $(\bigwedge^k E, \bigwedge^k\nabla)$ (locally)? If there are $k$ independent parallel sections of $E$, then $\bigwedge^k E$ has at least one parallel section: $\sigma_1,\dots,\sigma_k$ parallel $\Rightarrow \sigma_1 \wedge \dots \wedge \sigma_k$ parallel. What happens if $E$ has $r<k$ parallel sections? Does $\bigwedge^k E$ still admit (locally) a non-zero parallel section? Edit: We can probably use the relation between the curvatures: Let $X,Y \in \Gamma(TM)$. Then $R^{\bigwedge^k\nabla}(X,Y)=d\psi_{\operatorname{Id}}(R^{\nabla}(X,Y))$, where $\psi:\text{End}(E) \to \text{End}(\bigwedge^k E)$ is the exterior power map, $\psi(A)=\bigwedge^k A$. Since the dimension of the space of local parallel sections around $p \in M$ equals the dimension of $\ker R(\cdot,\cdot)$, it suffices to construct an example where $R^{\nabla}(\cdot,\cdot)$ is singular but $R^{\bigwedge^k\nabla}(\cdot,\cdot)$ is invertible. This is certainly possible on the algebraic level; see e.g. example 1 in this question, with $k=2$, $\operatorname{rank} E=3$.
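The algebraic phenomenon is easy to verify numerically. The sketch below (my own illustration, not from the question) builds the induced derivation $d\psi_{\operatorname{Id}}(A)$ on $\bigwedge^2\mathbb{R}^3$ and checks that the singular matrix $A=\operatorname{diag}(0,1,2)$ — one-dimensional kernel, mimicking a single parallel section — induces an invertible derivation on $\bigwedge^2$:

```python
import numpy as np
from itertools import combinations

def wedge2_derivation(A):
    """Induced derivation dψ_Id(A) on Λ²: u∧v ↦ Au∧v + u∧Av."""
    n = A.shape[0]
    pairs = list(combinations(range(n), 2))
    idx = {p: k for k, p in enumerate(pairs)}
    D = np.zeros((len(pairs), len(pairs)))
    for k, (i, j) in enumerate(pairs):
        for m in range(n):
            # A e_i ∧ e_j contributes A[m,i] e_m∧e_j; e_i ∧ A e_j contributes A[m,j] e_i∧e_m
            for u, v, coeff in ((m, j, A[m, i]), (i, m, A[m, j])):
                if u == v:
                    continue                  # e_u ∧ e_u = 0
                sign = 1.0
                if u > v:
                    u, v, sign = v, u, -1.0   # reorder basis vector, pick up a sign
                D[idx[(u, v)], k] += sign * coeff
    return D

A = np.diag([0.0, 1.0, 2.0])    # singular: kernel = span(e1), one "parallel section"
D = wedge2_derivation(A)        # diagonal with eigenvalues a_i + a_j = 1, 2, 3
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(D) == 3        # invertible: no parallel sections of Λ²E
```

So a curvature operator with eigenvalues $\{0, a, b\}$ induces eigenvalues $\{a, b, a+b\}$ on $\bigwedge^2$, which are all non-zero whenever $a, b, a+b \neq 0$.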
This is a crosspost of this question from MSE. I'm confused about the definition of a projective family of probability spaces $(S_t,\mathscr S _t,\mu_t,f_{ts})_{s,t\in T}$. The conditions $f_{tt}=1_{S_t}$ and $f_{us}=f_{ts}\circ f_{ut}$, where $s\leq t\leq u$, are clear from the usual definition of a projective system as a functor from $\mathsf T^\text{op}$, the opposite poset category. However, (in addition to measurability) the following condition always appears: $\mu_s = (f_{ts})_{\ast}(\mu_t)$, where the RHS is the pushforward measure. This looks to me like a kind of cone coherence condition, but it does not (as far as I can see) fall out of the categorical formalism of limits if one works in the category with probability spaces as objects and measurable maps as arrows. Furthermore, the projective limit (if it exists) is required to satisfy the following two conditions: Its $\sigma$-algebra is generated by the restrictions of the projections $\pi_t$. Its probability measure $\mu$ must satisfy $(\pi_t)_\ast (\mu)=\mu _t$. In what category should one work so that these conditions fall out of the categorical formalism? In other words, in what category does the categorical notion of projective limit coincide exactly with the one found in books?
I have two continuous variables $x_1$ and $x_2$, such that their sum is a constant: $x_1+x_2=c$. Clearly, I cannot run the following OLS model due to perfect multicollinearity: Model (1): $y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \epsilon$ If I run the Model (2) below, $\beta_1$ is significantly negative: Model (2): $y = \alpha + \beta_1 x_1 + \epsilon$ I have reason to believe that $x_2$ is also a determinant of $y$ and must be in the regression model alongside $x_1$. In Model (3), which constrains the intercept to zero, $\beta_1$ is significantly positive: Model (3): $y = \beta_1 x_1 + \beta_2 x_2 + \epsilon$ Models (2) and (3) reach opposite conclusions regarding $\beta_1$. Model (3) yields the theoretically predicted result ($\beta_1>0$). My question is whether I can rely on Model (3), since it excludes the intercept in order to include $x_2$ alongside $x_1$. To address the comments, think of $c$ as the size of a pie that is equal for everyone, and each individual slices the pie into two pieces $x_1$ and $x_2$, such that $x_1+x_2=c$. What I'm investigating is whether one type of slice $x_2$ matters more than the other $x_1$. In other words, whether $\beta_2>\beta_1$ in Model (3). To be more specific, $c$ is the average lottery return, which is decomposed into two parts for each individual as follows: $c = o_i r_{oi} + p_i r_{pi}$ where $o_i$ ($p_i$) is the proportion of lotteries individual $i$ observes (plays), such that $o_i+p_i=1$, and $r_{oi}$ ($r_{pi}$) is the average return of the lotteries that are observed (played) by individual $i$. Finally, $x_1\equiv o_i r_{oi}$ and $x_2\equiv p_i r_{pi}$, such that $x_1+x_2=c$; and $y$ is the future participation rate in the lotteries. Rational learning theories predict that individuals will give equal importance to the returns that they observe ($x_1$) versus those that they experience ($x_2$): $\beta_1=\beta_2>0$. 
On the other hand, reinforcement learning theories predict that personally experienced outcomes matter more: $\beta_2>\beta_1>0$. That is why I'm trying to find out whether Model (3) is an appropriate way of testing these two theories.
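A simulation sketch (with made-up numbers and a hypothetical data-generating process in which $\beta_2>\beta_1$) shows both the rank deficiency of Model (1) and the identifiability of Model (3):

```python
import numpy as np

rng = np.random.default_rng(0)
n, c = 200, 10.0
x1 = rng.uniform(0, c, n)
x2 = c - x1                                  # the constraint x1 + x2 = c
# hypothetical truth with beta2 > beta1, as reinforcement learning predicts
y = 0.5 * x1 + 1.5 * x2 + rng.normal(0, 1, n)

# Model (1): intercept plus both regressors -> rank-deficient design matrix
X1 = np.column_stack([np.ones(n), x1, x2])
assert np.linalg.matrix_rank(X1) == 2        # 3 columns, rank 2: OLS not identified

# Model (3): drop the intercept -> full-rank design, both slopes identified
X3 = np.column_stack([x1, x2])
beta, *_ = np.linalg.lstsq(X3, y, rcond=None)
assert beta[1] > beta[0]                     # recovers beta2 > beta1
```

Because $x_2 = c - x_1$, the column of ones in Model (1) is a linear combination of $x_1$ and $x_2$; dropping the intercept removes exactly that redundancy rather than imposing an extra substantive restriction.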
I posted this question on MSE two days ago, but did not receive any responses. I have cross-posted it on MO, hoping it gets more attention here and that it is appropriate for this site. A positive integer $N$ is said to be perfect if $\sigma(N) = 2N$, where $\sigma(x)$ is the sum of the divisors of $x$. An odd perfect number $N$ is said to be given in Eulerian form if $N = {q^k}{n^2}$, where $q$ is prime, $q \equiv k \equiv 1 \pmod 4$ and $\gcd(q,n)=1$. Broughan, Delbourgo and Zhou (2013) define the perfect number index of $N$ at prime $r$ to be the integer $$m := \frac{\sigma(N/{r^{\alpha}})}{r^{\alpha}},$$ where $r^{\alpha} || N$. They also show that $m \ge 315$. (Chen and Chen (2014) extend these results in "On the index of an odd perfect number".) Since $\sigma$ is weakly multiplicative, we have $$\sigma(q^k)\sigma(n^2)=\sigma(N)=2N=2{q^k}{n^2}.$$ Because $\gcd(q^k,\sigma(q^k))=1$, this means that $$\sigma(n^2)/q^k = 2{n^2}/\sigma(q^k) = s \in \mathbb{N},$$ where $s \ge 315$. In particular, we have the simultaneous equations $$\sigma(n^2) = s{q^k}$$ and $$2{n^2} = s{\sigma(q^k)}.$$ We obtain $$2{n^2} - \sigma(n^2) = s{\sigma(q^k)} - s{q^k} = s{\sigma(q^{k-1})}.$$ UPDATE (September 27, 2016): In fact, we know that $s = \gcd(n^2,\sigma(n^2))$. It follows that $$\sigma(q^{k-1}) \mid (2{n^2} - \sigma(n^2)).$$ Now, here is my question: What are the divisors of $\sigma(q^{k-1})$? Notice that $4 \mid (k-1)$. By simple congruence considerations: $$\sigma(q^{k-1}) \equiv \sum_{i=0}^{k-1}{1} \equiv 1 + (k-1) \equiv k \equiv 1 \pmod 4.$$ Additionally, I checked using WolframAlpha and found only the following factorization (for $k = 9$): $$\sigma(q^8) = 1 + q + q^2 + q^3 + q^4 + q^5 + q^6 + q^7 + q^8 = (q^2 + q + 1)(q^6 + q^3 + 1) = \sigma(q^2)(q^6 + q^3 + 1)$$
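Both the congruence $\sigma(q^{k-1})\equiv 1 \pmod 4$ and the observed factorization of $\sigma(q^8)$ are easy to check by brute force (a small script of my own, not part of the question):

```python
def sigma_prime_power(q, e):
    """sigma(q^e) = 1 + q + ... + q^e for prime q."""
    return sum(q**i for i in range(e + 1))

# Congruence: for q ≡ 1 (mod 4) and k ≡ 1 (mod 4), sigma(q^(k-1)) ≡ 1 (mod 4)
for q in (5, 13, 17, 29):          # sample primes ≡ 1 (mod 4)
    for k in (5, 9, 13):           # sample exponents ≡ 1 (mod 4)
        assert sigma_prime_power(q, k - 1) % 4 == 1

# Factorization for k = 9: sigma(q^8) = sigma(q^2) * (q^6 + q^3 + 1)
for q in (5, 13, 17, 29):
    assert sigma_prime_power(q, 8) == sigma_prime_power(q, 2) * (q**6 + q**3 + 1)
```

The factorization is the cyclotomic identity $\frac{q^9-1}{q-1} = \frac{q^3-1}{q-1}\cdot\frac{q^9-1}{q^3-1}$, so it holds for every $q$, not just the sampled primes.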
Consider an arbitrary qudit state $\rho$ over $d$ modes. Any such state can be represented as a point in $\mathbb R^{d^2-1}$ via the standard Bloch representation: $$\rho=\frac{1}{d}\left(\mathbb I +\sum_k c_k\sigma_k\right)$$ with $\sigma_k$ traceless Hermitian matrices satisfying $\text{Tr}(\sigma_j\sigma_k)=d\delta_{jk}$. In the case of a qubit, $d=2$, we can take the $\sigma_k$ matrices to be the standard Pauli matrices, and the set of states is thus mapped into a sphere in $\mathbb R^3$, which is the standard Bloch sphere representation of qubits. For $d>2$, this picture becomes significantly more complicated, and the states are not in general mapped into a (hyper-)sphere. In particular, the special case $d=3$ is analysed in this paper. At the beginning of page 6, the authors make the following statement: It is clear that the boundary of $\Omega_3$ comprises density matrices which are singular. Here, $\Omega_3$ represents the set of qutrit states in their standard Bloch representation (thus $\Omega_3\subset\mathbb R^8$). This statement does not, however, seem so obvious to me. I understand that if $\rho$ is not singular, then it can be written as a convex combination of other states, and it would therefore seem that it could not lie on the boundary of the set of states. However, is it not in principle possible that at least a section of $\Omega_3$ is "flat" (as in, is a subsection of a hyperplane), so that convex combinations of states on this boundary still lie on the boundary? Intuitively this is most likely not the case, but is there an easy way to prove it? Related questions:
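A numerical sketch (my own, not from the question) of the standard argument: a non-singular $\rho$ can be perturbed by any traceless Hermitian direction of size below its smallest eigenvalue and remain a state, so it is an interior point:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random full-rank (non-singular) qutrit state
G = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = G @ G.conj().T
rho /= np.trace(rho).real
lam_min = np.linalg.eigvalsh(rho).min()
assert lam_min > 0                           # full rank

# A random traceless Hermitian direction with unit spectral norm
H = rng.normal(size=(3, 3))
H = (H + H.T) / 2
H -= np.trace(H) / 3 * np.eye(3)             # make it traceless
H /= np.linalg.norm(H, 2)                    # spectral norm 1

# By Weyl's inequality, rho ± eps*H stays positive semidefinite for eps < lam_min,
# so rho can be moved in every Bloch direction while remaining a state.
eps = 0.9 * lam_min
assert np.linalg.eigvalsh(rho + eps * H).min() >= 0
assert np.linalg.eigvalsh(rho - eps * H).min() >= 0
```

Since the perturbation is traceless, the trace constraint is preserved automatically; positivity is the only condition that can fail, and it fails only when the smallest eigenvalue is zero, i.e. on singular states.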
It might help to derive both speeds from first principles. We assume that the drag polar of the aircraft can be described by a parabola, like this:$$c_D = c_{D0}+\frac{c_L^2}{\pi\cdot AR\cdot\epsilon}$$ The symbols are: $\kern{5mm} c_D \:\:\:$ drag coefficient $\kern{5mm} c_{D0} \:$ zero-lift drag coefficient $\kern{5mm} c_L \:\:\:$ lift coefficient $\kern{5mm} \pi \:\:\:\:\:$ 3.14159$\dots$ $\kern{5mm} AR \:\:$ aspect ratio of the wing $\kern{5mm} \epsilon \:\:\:\:\:\:$ the wing's Oswald efficiency factor Next, we describe thrust $T$ over speed $v$ with $$T = T_0·v^{n_v}$$ First, the maximum climb angle. This is reached when the condition $$\frac{\delta \gamma}{\delta c_L} = 0$$holds true. No change in lift coefficient $c_L$ will improve the climb angle $\gamma$; it is only downhill from here to both sides. In order to get a grip on the climb angle, we look at the force equilibrium in steady flight at full power, assuming small values for $\gamma$:$$\sin\gamma \approx \gamma = \frac{v_z}{v} = \frac{T - c_D\cdot \frac{\rho}{2}\cdot v^2\cdot S_{ref}}{m\cdot g} = \frac{T_0·\left(\sqrt{\frac{2\cdot m\cdot g}{\rho\cdot c_L\cdot S_{ref}}}\right)^{n_v}}{m\cdot g} - \frac{c_{D0}+\frac{c_L^2}{\pi\cdot AR\cdot\epsilon}}{c_L}$$ The symbols are: $\kern{5mm} m\:\:\:\:$ aircraft mass $\kern{5mm} g\:\:\:\:\;$ gravitational acceleration $\kern{5mm} \rho\:\:\:\:\:$ air density $\kern{5mm} v\:\:\:\:\:$ velocity $\kern{5mm} v_z\:\:\;$ climb speed $\kern{5mm} S_{ref} \:$ wing area Ideally, we would also multiply the climb angle with an acceleration factor, but I leave this out here for simplicity.
Now we can derive the expression for the climb angle with respect to the lift coefficient and get$$\frac{\delta \gamma}{\delta c_L} = -\frac{n_v}{2}·c_L^{-\frac{n_v}{2}-1}·\frac{T_0·(m·g)^{\frac{n_v}{2}-1}}{\left(\frac{\rho}{2}·S_{ref}\right)^{\frac{n_v}{2}}}+\frac{c_{D0}}{c_L^2}-\frac{1}{\pi·AR·\epsilon}$$The general solution is $$c_{L_{{\gamma_{max}}}} = -\frac{n_v}{4}·\frac{T·\pi·AR·\epsilon}{m·g}+\sqrt{\frac{n_v^2}{16}·\left(\frac{T·\pi·AR·\epsilon}{m·g}\right)^2+c_{D0}·\pi·AR·\epsilon}$$For jets ($n_v = 0$) the solution is quite simple, because the thrust terms are proportional to the thrust coefficient $n_v$ and disappear: $$c_{L_{{\gamma_{max}}}} = \sqrt{c_{D0}·\pi·AR·\epsilon}$$For turbofan and propeller aircraft, we have less luck and get a much longer formula. This is the one for propellers ($n_v = -1$):$$c_{L_{{\,\gamma_{\,max}}}} = \frac{T·\pi·AR·\epsilon}{4·m·g}+\sqrt{\left(\frac{T·\pi·AR·\epsilon}{4·m·g}\right)^2+c_{D0}·\pi·AR·\epsilon}$$So yes, pure turbojets have an optimum lift coefficient for maximum climb angle that uses only terms which are constant over altitude. They do indeed climb steepest at a constant lift coefficient. But the thrust-dependent optimum for other engine types hints at an altitude dependency which might affect the other optimum, that for best climb speed. 
In order to find the conditions for maximum climb speed, repeat the process above with an expression where both sides are multiplied by speed:$$v_z = \frac{T\cdot v - c_D\cdot \frac{\rho}{2}\cdot v^3\cdot S_{ref}}{m\cdot g} = \frac{T_0·\left(m\cdot g\right)^{\frac{n_v-1}{2}}}{\left(c_L\cdot\frac{\rho}{2}\cdot v^2\cdot S_{ref}\right)^{\frac{n_v+1}{2}}} - \sqrt{\frac{2\cdot m\cdot g}{\rho\cdot c_L\cdot S_{ref}}}\cdot\frac{c_{D0}+\frac{c_L^2}{\pi\cdot AR\cdot\epsilon}}{c_L}$$ Now the solution for turbojets becomes the more complicated one, but that needs to be the case - how else would those optima converge at altitude?$$c_{L_{{\,n_{z_{\,max}}}}} = \sqrt{\left(\frac{T·\pi·AR·\epsilon}{2·m·g}\right)^2 + 3\cdot c_{D0}·\pi·AR·\epsilon} - \frac{T·\pi·AR·\epsilon}{2·m·g}$$ While the angle of attack for steepest climb is constant over altitude, the angle of attack for best climb speed increases as excess thrust disappears with rising altitude. Thus, the question can be answered: No.
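The formulas above are easy to play with numerically. The sketch below (with made-up aircraft parameters) implements the general $c_{L_{\gamma_{max}}}$ expression and checks that it reduces to the jet and propeller special cases:

```python
import math

def cl_gamma_max(n_v, T, m, g, cd0, AR, eps):
    """Optimum lift coefficient for steepest climb, with thrust model T = T0 * v**n_v."""
    a = T * math.pi * AR * eps / (m * g)
    return -n_v / 4 * a + math.sqrt(n_v**2 / 16 * a**2 + cd0 * math.pi * AR * eps)

# Hypothetical numbers, for illustration only:
T, m, g, cd0, AR, eps = 30_000.0, 60_000.0, 9.81, 0.02, 8.0, 0.85

cl_jet = cl_gamma_max(0, T, m, g, cd0, AR, eps)
# n_v = 0: the thrust terms drop out, leaving sqrt(cD0 * pi * AR * eps)
assert math.isclose(cl_jet, math.sqrt(cd0 * math.pi * AR * eps))

cl_prop = cl_gamma_max(-1, T, m, g, cd0, AR, eps)
# n_v = -1 reproduces the propeller formula with the T*pi*AR*eps/(4*m*g) terms
a4 = T * math.pi * AR * eps / (4 * m * g)
assert math.isclose(cl_prop, a4 + math.sqrt(a4**2 + cd0 * math.pi * AR * eps))
```

This makes the altitude argument concrete: the jet optimum contains no thrust term at all, while the propeller optimum shifts as $T$ (and hence altitude) changes.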
The conditional PDF $g$ of $Z=Y-X$ conditionally on $X=x$ is such that, for every $z$, $g(z)=f(x,z+x)/f_X(x)$. The conditional PDF $h$ of $T=X/Y$ conditionally on $Y=y$ is such that, for every $t$, $h(t)=|y|\,f(ty,y)/f_Y(y)$. Which part of this causes you trouble? Note: Formally, the conditional distribution of some $U$ conditionally on $V=v$ is some distribution, that is, a probability measure $\mu$. If $\mu$ has a density $m$, that is, if, for every $B$, $P(U\in B\mid V=v)=\mu(B)=\int\limits_Bm(u)\mathrm du$, then one says that $m$ is the conditional density of $U$ conditionally on $V=v$. Edit: To compute the conditional PDF $h$, note that, by definition, for every measurable function $u$,$$E[u(X,Y)\mid Y=y]=\int u(x,y)f(x,y)\mathrm dx/f_Y(y).$$Using this for $u(x,y)=v(x/y)$ yields$$E[v(T)\mid Y=y]=\int v(x/y)f(x,y)\mathrm dx/f_Y(y).$$The change of variable $t=x/y$ yields $\mathrm dx=y\mathrm dt$ hence$$E[v(T)\mid Y=y]=\int v(t)f(ty,y)(|y|\mathrm dt)/f_Y(y),$$which yields the conditional density $h$.
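As a sanity check (my own addition), the formula $h(t)=|y|\,f(ty,y)/f_Y(y)$ can be verified to integrate to one, e.g. for $X,Y$ independent standard normals:

```python
import math

def phi(u):
    """Standard normal density."""
    return math.exp(-u * u / 2) / math.sqrt(2 * math.pi)

# With X, Y independent N(0,1): f(x, y) = phi(x) * phi(y) and f_Y = phi.
def h(t, y):
    """Conditional density of T = X/Y given Y = y: |y| * f(ty, y) / f_Y(y)."""
    return abs(y) * phi(t * y) * phi(y) / phi(y)     # simplifies to |y| * phi(t*y)

# For each fixed y, h(., y) should integrate to 1 over t.
for y in (0.5, 1.0, 2.0):
    step = 0.001
    total = sum(h(-20 + step * i, y) for i in range(40001)) * step
    assert abs(total - 1.0) < 1e-3
```

The factor $|y|$ is exactly the Jacobian of the change of variable $t = x/y$, which is why the integral comes out to one for every value of $y$.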
Reading Ravenel's "green book", I wonder about his question on p. 15 "that the spectrum MU may be constructed somehow using formal group law theory without using complex manifolds or vector bundles. Perhaps the corresponding infinite loop space is the classifying space for some category defined in terms of formal group laws. Infinite loop space theorists, where are you?". What is the state of things on that now? As far as I know, there is still no such interpretation. The closest I've heard is some rumored (but unpublished) work in derived algebraic geometry interpreting MU as some kind of representing object. Such a construction of MU in terms of formal group data would be very welcome (probably even more now than when Ravenel wrote the green book). EDIT: Some elaboration. We do know a lot about MU. We know that it has an orientation (Chern classes for vector bundles), and it is universal for this property. It's not then extremely surprising that we get a formal group law from the tensor product of line bundles, but the fact that MU carries a universal formal group law, and that $MU \wedge MU$ carries a universal pair of isomorphic formal group laws, is surprising. At this point it's something we observe algebraically. Even Lurie's definition of derived formal group laws, assuming I understand correctly, is geared to construct formal group law objects in derived algebraic geometry carrying a connection to the formal group law data that we already know is there on the spectrum level, and hence ties it to the story we already knew for MU implicitly. Some reasons these days we might want to know how to construct MU from formal group law data: Selfish, ordinary homotopy-theoretic reasons. It's very useful to be able to construct other spectra with specific connections to formal group law data (like K-theory, TMF, etc.) and constructing them is generally very difficult.
Things like the Landweber exact functor theorem, the Hopkins-Miller theorem, and Lurie's recent work give us a lot of progress in this direction, but they only apply in restricted circumstances. None of these general methods will construct ordinary integral cohomology, corresponding to the additive formal group law (only rational cohomology). If we understood how to build MU, we might understand how to generalize. Equivariant homotopy theory. I would tentatively say that we don't have nearly as good computational and "qualitative" pictures of the equivariant stable categories, because we don't have something like the startling MU-picture that relates it all to some stack like the moduli stack of 1-dimensional formal group laws. If we found MU by accident, then we don't really know how the analogue should play out in other, more general, stable categories. Motivic homotopy theory. Hopkins and Morel found that there is similar formal group law data appearing in motivic stable homotopy theory via the motivic bordism spectrum MGL. I'm not up with the state of the art here, but a better understanding of this connection would be very important too - for understanding MGL itself, but also hopefully for understanding the analogues of chromatic data in these categories related to algebraic geometry. (space reserved for connections to other subjects that I've forgotten) To elaborate on Tyler's comment (and please correct my inaccuracies), the idea is that the moduli space of DERIVED one-dimensional formal group laws (defined appropriately --- roughly formal group laws in which rings are replaced with $E_\infty$ ring spectra) is an affine derived scheme, which is the spectrum (in the sense of AG) of the spectrum (in the sense of AT) MU. This is a derived version of Quillen's theorem that the formal group law of MU is the universal formal group law. If I remember correctly, Jacob Lurie said this is fairly obvious.
It's the natural analog of the (much harder) theorem of Lurie's that the moduli stack of derived elliptic curves (roughly, versions of elliptic curves with structure sheaves given by $E_\infty$ ring spectra) is representable by a derived enhancement of the moduli of ordinary elliptic curves (the coordinate ring given by the canonical line bundle is then TMF). Another example of the philosophy is Tyler's work with Mark Behrens. But this one is supposed to be easier and less useful (more formal). If you take the moduli STACK of derived formal groups, then the global sections of the structure sheaf are just the sphere, I think -- this is a version of the Adams-Novikov spectral sequence. Anyway, the idea is to reinterpret the Quillen-Morava-Ravenel-Devinatz-Hopkins-Smith picture for the stable category via formal group laws as describing the algebraic geometry of the derived moduli stack of formal groups. There is a very easy theorem along much weaker but related lines in Adams's "Stable Homotopy and Generalized Homology." I refer to Lemma 4.6 of section II. It isn't written quite like this, but essentially it says that if E is a complex orientable spectrum together with a complex orientation $x \in E^{2}(\mathbf{C}P^{\infty})$, then there is a unique (up to homotopy) map of ring spectra $MU \rightarrow E$ taking the fixed (better fix one) complex orientation of $MU$ to the given complex orientation of $E$. This doesn't build $MU$ out of formal group laws of course, but it shows $MU$ has this universal property for complex oriented cohomology theories, and this book was around of course when Ravenel wrote the green book.
Cauchy’s functional equations are very simple. The most familiar one has a simple formula: f(x + y) = f(x) + f(y) But first, for the uninitiated, what is a functional equation after all? What is a functional equation? Usually, functions appear as formulae. For example, \( f(x) = x^2 \) is a function. It takes in a number and gives out one. If you wanted to do away with the formula, you could describe the function as follows: square the input number. There is another way of describing a function: by describing abstractly what it does. For example, you could say the function f always gives a non-negative output. But there is one problem with this method. There could be more than one function that fits that description. For example, “always gives a non-negative output” is satisfied by \( f(x) = x^2, g(x) = e^x \) This alternative description of a function is known as a functional equation (our example was more of a functional inequality). All the functions that satisfy a particular functional equation are known as solutions to that functional equation. It is hard to find solutions to functional equations. That is because there can be multiple (weird) functions satisfying the given conditions. First Example One of the most important examples of a functional equation is Cauchy’s Functional Equation. We will examine it in a moment. But before that, let’s pick a geometric example. Suppose C is a unit circle with O as the center (you may take it to be the origin) on the Euclidean plane. Define \(f \) to be a function from \( \displaystyle{\mathbb{R}^2 – \{0\}\to \mathbb{R}^2 – \{0\} }\) such that: f(P) = P if P is on C; f(P) is outside C if P is inside C, and vice versa; if P is on a circle orthogonal to C, then f(P) is on that circle. Show that such a function exists and is unique. Find a geometric description of f. Also find an explicit formula for it. This function f is known as inversion. Here is a hint. Pick a point P outside the circle C.
Draw tangents PT and PT’ from P to the circle. Join OP and TT’. Suppose the intersection is P’. Show that f(P) = P’ (and vice versa). Any circle passing through P and P’ is necessarily orthogonal to the circle C (why?!) A Typical Problem (INMO 2018) Let N denote the set of all natural numbers and let f: N→N be a function such that (a) f(mn)=f(m)f(n) for all m, n in N; (b) m+n divides f(m)+f(n) for all m, n in N. Prove that there exists an odd natural number k such that \(f(n)=n^k\) for all n in N. This problem is immediately reminiscent of Cauchy’s functional equation. Therefore let’s work on it first. Cauchy’s Functional Equation: Find f such that f(x+y) = f(x) + f(y). If the domain and co-domain are the natural numbers, then clearly f(x) = cx where c is a constant (= f(1)). Why? This is readily shown using induction. Suppose f(1) = c (after all, f(1) is some natural number; let’s name it c). Then f(2) = f(1+1) = f(1) + f(1) = c + c = 2c. Now use induction to show f(n) = nc. What happens if we extend the domain and co-domain to the integers? It is still easy to show f(x) = cx. Hint: Show that f(x) is an odd function. Next, extend the domain and co-domain to the rational numbers. We will show that f(x) = cx where x is rational. Suppose \( x = \frac{p}{q} \). Therefore \( qx = p \). Hence f(qx) = f(p). But f(p) = cp and f(qx) = qf(x). Hence \( q f(x) = c \times p \rightarrow f(x) = c \times \frac{p}{q} = cx \) So, up to rational numbers, Cauchy’s functional equation describes a family of linear functions (which look similar and only differ by the constant multiplier). It is not possible to extend the domain to the real numbers and conclude that the function is still linear. For that, we need at least one additional condition (like continuity, or monotonicity, or continuity at one point). Finally, solve the INMO 2018 problem by using Cauchy’s Functional Equation.
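The solution on the rationals can be checked with exact arithmetic (a small sketch of my own; the value of $c$ is arbitrary):

```python
from fractions import Fraction
from random import Random

rng = Random(0)
c = Fraction(7, 3)              # f(1) = c; the claimed solution is f(x) = c*x

def f(x):
    return c * x

# Exact check of Cauchy's equation over random rationals (no floating point)
for _ in range(1000):
    x = Fraction(rng.randint(-50, 50), rng.randint(1, 50))
    y = Fraction(rng.randint(-50, 50), rng.randint(1, 50))
    assert f(x + y) == f(x) + f(y)
```

Using `Fraction` keeps the check exact, mirroring the algebraic argument $qf(x) = cp$ rather than approximating it in floats.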
Hint: Set $f = g \circ \log$ and apply Cauchy’s functional equation. This is the supplemental document for the Cheenta Open Seminar on Functional Equations. Most of the discussions are sketches which are expanded in the live discussion.
I am looking at a question (though my queries here are more regarding the underlying concepts, which I don't quite understand) about a radio antenna that is polarising a conducting sphere. For reference, the question is: A radar set to operate at a frequency $\nu$ has an antenna which emits a narrow beam of radio waves with Poynting flux $N(\theta, \phi)= ...$ (I have avoided putting the actual functional form here, as I am not sure I can redistribute the question). The total emitted power is P, and the receiver input impedance is R. The antenna is also used as a detector, with a detection limit set by $V_n$ as the minimum voltage across the matched load that can be detected. Find the maximum range at which the radar set can detect a perfectly conducting sphere of radius a. My question here is regarding the Poynting flux and its role. I would have thought that the electric field strength is $E^2 = 2NZ_0$, with $Z_0$ the impedance of free space, straight from the definition of the Poynting flux as $N=E \times H$. And yet, then I would obtain an electric field strength that does not depend on the distance from the source (which I would think is actually correct for a given electromagnetic wave), but then the maximum dipole moment of the sphere would also not depend on the distance from the radar, if I simply use $p=\alpha E$. I would be grateful if someone could explain where the distance dependence comes in here.
ISSN: 1531-3492 eISSN: 1553-524X Discrete & Continuous Dynamical Systems - B, January 2011, Volume 15, Issue 1 Abstract: A bacillary dysentery model with seasonal fluctuation is formulated and studied. The basic reproductive number $\mathcal{R}_0$ is introduced to investigate the disease dynamics in seasonally fluctuating environments. It is shown that there exists only the disease-free periodic solution, which is globally asymptotically stable, if $\mathcal{R}_0<1$, and there exists a positive periodic solution if $\mathcal{R}_0>1$. Thus $\mathcal{R}_0$ is a threshold parameter; its magnitude determines the extinction or the persistence of the disease. Parameters in the model are estimated on the basis of bacillary dysentery epidemic data. Numerical simulations have been carried out to describe the transmission process of bacillary dysentery in China. Abstract: We consider a system of partial differential equations which describes anti-plane shear in the context of a strain-gradient theory of plasticity proposed by Gurtin in [6]. The problem couples a fully nonlinear degenerate parabolic system and an elliptic equation. It features two types of degeneracies: the first is caused by the nonlinear structure, the second by the dependence of the principal part on twice the curl of a planar vector field. Furthermore, the elliptic equation depends on the divergence of this vector field - which is not controlled by twice the curl - and the boundary conditions suggested in [6] are of mixed type. To overcome the latter complications we use a suitable, time-dependent representation of a divergence-free vector field which plays the role of the elastic stress. To handle the nonlinearities, we reformulate the problem so as to transform the original system into one satisfying a monotonicity property which is more "robust" than the gradient flow structure inherited as an intrinsic feature of the mechanical model.
These two insights make it possible to prove existence and uniqueness of a solution to the original system. Abstract: We investigate the permeation flow of cholesteric liquid crystal polymers (CLCPs) subject to a small-amplitude oscillatory shear, using a tensor theory developed by the authors [8]. We model the material system by the Stokes hydrodynamic equations coupled with the orientational dynamics. At low frequencies, the steady permeation modes are recovered and the director rotates in phase with the applied shear. At high frequencies, the out-of-phase component dominates the dynamics. Asymptotic formulas for the loss modulus ($G''$) and storage modulus ($G'$) are obtained at both low and high frequencies. In the low frequency limit, both the loss modulus and the storage modulus are shown to exhibit a classical frequency $\omega$ dependence ($G'' \propto \omega$, $G' \propto \omega^2$) with proportionality constants of order $O(Er)$ and $O(q)$, respectively, where $\frac{2\pi}{q}$ defines the pitch of the chiral liquid crystal and $Er$ is the Ericksen number of the liquid crystal polymer system. The magnitudes of the dimensionless complex flow rate and complex viscosity are calculated. They are shown to have two Newtonian plateaus at low and high frequencies and a power-law response at intermediate frequencies. Abstract: In this paper, we establish the global asymptotic stability of equilibria for an SIR model of infectious diseases with distributed time delays governed by a wide class of nonlinear incidence rates. We obtain the global properties of the model by proving permanence and constructing a suitable Lyapunov functional. Under some suitable assumptions on the nonlinear term in the incidence rate, the global dynamics of the model is completely determined by the basic reproduction number $R_0$, and the distributed delays do not influence the global dynamics of the model.
Abstract: We study numerically the solutions of the steady advection-diffusion problem in bounded domains with prescribed boundary conditions when the Péclet number Pe is large. We approximate the solution at high, but finite, Péclet numbers by the solution to a certain asymptotic problem in the limit Pe $\to \infty$. The asymptotic problem is a system of coupled 1-dimensional heat equations on the graph of streamline-separatrices of the cellular flow, which was developed in [21]. This asymptotic model is implemented numerically using a finite volume scheme with exponential grids. We conclude that the asymptotic model provides a good approximation of the solutions of the steady advection-diffusion problem at large Péclet numbers, and even when Pe is not too large. Abstract: The main purpose of this paper is to explore the dynamics of an epidemic model with a general nonlinear incidence $\beta SI^p/(1+\alpha I^q)$. The existence and stability of multiple endemic equilibria of the epidemic model are analyzed. Local bifurcation theory is applied to explore the rich dynamical behavior of the model. Normal forms of the model are derived for different types of bifurcations, including Hopf and Bogdanov-Takens bifurcations. Concretely speaking, the first Lyapunov coefficient is computed to determine the various types of Hopf bifurcations. Next, with the help of the Bogdanov-Takens normal form, a family of homoclinic orbits arises when a Hopf and a saddle-node bifurcation merge. Finally, some numerical results and simulations are presented to illustrate these theoretical results. Abstract: We investigate a stage-structured model of an iteroparous population with two age classes. The population is assumed to exhibit Allee effects through reproduction. The asymptotic dynamics of the model depend on the maximal reproductive number of the population. The population may persist if the maximal reproductive number is greater than one.
There exists a population threshold in terms of the unstable interior equilibrium. The host population will become extinct if its initial distribution lies below the threshold, while it can persist indefinitely if its initial distribution lies above the threshold. In addition, if the unstable equilibrium is a saddle point and the system has no $2$-cycles, then the stable manifold of the saddle point provides the Allee threshold for the host. Based on this host population system, we construct a host-parasitoid model to study the impact of Allee effects upon the population interaction. The parasitoid population may drive the host to below the Allee threshold so that both populations become extinct. On the other hand, under some conditions on the parameters, the host-parasitoid system may possess an interior equilibrium and the populations may coexist at an interior equilibrium. Abstract: Rayleigh's problem of an infinite flat plate set into uniform motion impulsively in its own plane is studied by using the BKW model, the linearized Boltzmann equation and the full Boltzmann equation, respectively. The purpose is to study the gas motion under the diffuse reflection boundary condition. For a small impulsive velocity (small Mach number) and short time, the flow behaves like a free molecule flow. Our analysis is based on certain pointwise estimates for the solution of the problem and the flow velocity. Abstract: Using a new method of monotone iteration of a pair of smooth lower- and upper-solutions, the traveling wave solutions of the classical Lotka-Volterra system are shown to exist for a family of wave speeds. The upper and lower solution pair so constructed enables us to derive the explicit value of the minimal (critical) wave speed as well as the asymptotic decay/growth rates of the wave solutions at infinities. Furthermore, the traveling wave corresponding to each wave speed is unique up to a translation of the origin.
The stability of the traveling wave solutions with non-critical wave speed is also studied by spectral analysis of a linearized operator in exponentially weighted Banach spaces. Abstract: This article deals with the tracking control of periodic references in single-input single-output bilinear systems using a stable inversion-based approach. Assuming solvability of the exact tracking problem and asymptotic stability of the nominal error system, the study focuses on the output behavior when the control scheme uses a periodic approximation of the nominal feedforward input signal $u_d$. The investigation shows that this results in a periodic, asymptotically stable output; moreover, a sequence of periodic control inputs $u_n$ uniformly convergent to $u_d$ produces a sequence of output responses that, in turn, converge uniformly to the output reference. It is also shown that, for a special class of bilinear systems, the internal dynamics equation can be approximately solved by an iterative procedure that provides closed-form analytic expressions uniformly convergent to its exact solution. Then, robustness in the face of bounded parametric disturbances/uncertainties is achievable through dynamic compensation. The theoretical analysis is applied to nonminimum phase switched power converters. Abstract: This paper continues the analysis of a bimolecular autocatalytic reaction-diffusion model with saturation law. An improved result on steady state bifurcation is derived and the effect of various parameters on spatiotemporal patterns is discussed. Our analysis provides a better understanding of the rich spatiotemporal patterns. Some numerical simulations are performed to support the theoretical conclusions. Abstract: A semi-analytical procedure for studying stability and bifurcations of limit cycles in higher-dimensional nonlinear autonomous dynamical systems is developed. This procedure is based mainly on the incremental harmonic balance (IHB) method.
It is composed of three key steps, namely, the determination of limit cycles by the IHB method, the calculation of the transition matrix by the precise integration (PI) algorithm, and the discrimination of limit cycle stability by Floquet theory. As an application, the procedure is used to investigate the dynamics of the limit cycle of a three-dimensional nonlinear autonomous system. The symmetry-breaking bifurcation and the first and second period-doubling bifurcations of the limit cycle are identified. The critical parameter values corresponding to these bifurcations are calculated. The phase portraits and bifurcation points agree well with those of direct numerical integrations using the Runge-Kutta method. Abstract: The existence and multiplicity of homoclinic orbits for a class of second order Hamiltonian systems $\ddot{u}(t)-L(t)u(t)+\nabla W(t,u(t))=0, \ \forall t \in \mathbb{R}$, are obtained via the concentration-compactness principle and the fountain theorem, respectively, where $W(t, x)$ is superquadratic and need not satisfy the (AR) condition with respect to the second variable $x\in\mathbb{R}^{N}$. Abstract: In this paper, a predator-prey model with stage structure for the predator and a spatio-temporal delay describing the gestation period of the predator under homogeneous Neumann boundary conditions is investigated. By analyzing the corresponding characteristic equations, the local stability of a positive steady state and of each boundary steady state is established. Sufficient conditions are derived for the global attractiveness of the positive steady state and the global stability of the semi-trivial steady state of the proposed problem by using the method of upper-lower solutions and its associated monotone iteration scheme. Numerical simulations are carried out to illustrate the main results. Abstract: In this paper we study a delayed free boundary problem for the growth of tumors.
The establishment of the model is based on the diffusion of nutrient and mass conservation for the two processes, proliferation and apoptosis (cell death due to aging). It is assumed that the process of proliferation is delayed compared to apoptosis. By the $L^p$ theory of parabolic equations and the Banach fixed point theorem, we prove the existence and uniqueness of a local solution and apply the continuation method to get the existence and uniqueness of a global solution. We also study the asymptotic behavior of the solution, and prove that when $c$ is sufficiently small, the volume of the tumor cannot grow without bound: it will either disappear or evolve to a dormant state as $t\rightarrow\infty.$ Abstract: This article is concerned with the existence of traveling wave solutions, including standing waves, to some models based on configurational forces, describing respectively the diffusionless phase transitions of solid materials, e.g., steel, and phase transitions due to interface motion by interface diffusion, e.g., sintering. These models were proposed by Alber and Zhu in [3]. We consider both the order-parameter-conserved case and the non-conserved one, under suitable assumptions. We also compare our results with the corresponding ones for the Allen-Cahn and the Cahn-Hilliard equations coupled with linear elasticity, which are models for diffusion-dominated phase transitions in elastic solids.
The Kaehler potential for the standard Fubini-Study Kaehler form in projective space $\mathbb{C} P^n$ is given by: $$\log\Big(\sum_{i=0}^n |z_i|^2\Big).$$ What is the analogous formula for a Kaehler potential in a weighted projective space $\mathbb{C} P(m_0, \ldots, m_n)$ with weights $m_0, \ldots, m_n \in \mathbb{Z}_{\geq 0}$? That is, $\mathbb{C} P(m_0, \ldots, m_n) = (\mathbb{C}^{n+1} \setminus \{0\})/\sim$ where $(z_0, \ldots, z_n) \sim (\lambda^{m_0}z_0, \ldots, \lambda^{m_n}z_n)$ for any $0 \neq \lambda \in \mathbb{C}$. Of course the weighted projective space is usually singular, and we are only asking for a Kaehler potential on its smooth locus. Thanks
A canoe has a velocity of 0.40 m/s southeast relative to the earth. The canoe is on a river that is flowing 0.50 m/s east relative to the earth. Find the velocity (magnitude and direction) of the canoe relative to the river. Discussion: We apply the relative velocity relation. The relative velocities are \(\vec{v}_{CE}\), the velocity of the canoe relative to the earth, \(\vec{v}_{RE}\), the velocity of the river relative to the earth, and \(\vec{v}_{CR}\), the velocity of the canoe relative to the river. $$\vec{v}_{CE}=\vec{v}_{CR}+\vec{v}_{RE}$$ Hence $$ \vec{v}_{CR}=\vec{v}_{CE}-\vec{v}_{RE}$$ The east component of \( \vec{v}_{CR}\) is $$ -0.5+\frac{0.4}{\sqrt{2}}=-0.217\ \text{m/s}$$ and the south component is $$ \frac{0.4}{\sqrt{2}}=0.28\ \text{m/s}.$$ The magnitude of the velocity of the canoe relative to the river is then $$ \sqrt{(-0.217)^2+(0.28)^2}=0.36\ \text{m/s}.$$ If we let \(\theta\) be the angle measured from west toward south, the direction of the canoe with respect to the river is given by $$ \theta=52.5^\circ$$ south of west.
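The arithmetic above can be reproduced with a short script; the axis convention (east = +x, south = +y) and the variable names are my own choices for this sketch:

```python
import math

# Relative-velocity computation v_CR = v_CE - v_RE, with east = +x, south = +y.
v_ce = (0.40 / math.sqrt(2), 0.40 / math.sqrt(2))  # 0.40 m/s southeast
v_re = (0.50, 0.0)                                  # river: 0.50 m/s east

v_cr = (v_ce[0] - v_re[0], v_ce[1] - v_re[1])       # canoe relative to river
speed = math.hypot(*v_cr)
angle = math.degrees(math.atan2(v_cr[1], -v_cr[0])) # measured south of west

print(f"components: {v_cr[0]:.3f} m/s east, {v_cr[1]:.3f} m/s south")
print(f"speed = {speed:.3f} m/s, direction = {angle:.1f} degrees south of west")
```

Using `atan2` keeps the quadrant bookkeeping automatic: the negative east component and positive south component place the vector in the south-west quadrant without any sign juggling by hand.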
User Guide Examples Potential Databases More The target function for the optimization algorithms in potfit is defined as: $$Z_F(\boldsymbol\xi) = \sum_{k=1}^{3N}u_k(F_k(\boldsymbol\xi)-F_k^0)^2 + \sum_{l=1}^{M}w_E(E_l(\boldsymbol\xi)-E_l^0)^2+\ldots$$ where $F_k(\boldsymbol\xi)$ are the forces calculated from the potential and $F_k^0$ the reference values. The same notation is used for the energies $E$. The weights for the energies ($w_E$) and stresses can be given in the parameter file with the eng_weight and stress_weight parameters. The weight for the forces ($u_k$) is normally chosen to be 1. This can be a problem for very small reference values. By enabling the fweight compile option, each force component $F_k$ is weighted by the factor $$u_k=\frac{1}{(|\boldsymbol F_k^0|+\epsilon)^2},$$ where $|\boldsymbol F_k^0|$ is the magnitude of the reference force vector for atom $k$. This increases the relative contributions of numerically small forces to the error sum. The compile-time parameter FORCE_EPS (in defines.h) prevents the overweighting of very tiny forces. The default for this value is 0.1.
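As a minimal sketch of this target function in NumPy (not actual potfit code — the function name and the toy arrays are illustrative stand-ins), including the fweight-style per-atom force weights:

```python
import numpy as np

# Sketch of the potfit target function Z_F with optional fweight-style
# weights u_k = 1 / (|F_k^0| + eps)^2.  Illustrative only.
FORCE_EPS = 0.1   # mirrors the FORCE_EPS default in defines.h

def target_function(F, F0, E, E0, w_E=1.0, use_fweight=False):
    """Weighted sum of squared force and energy residuals."""
    F, F0, E, E0 = map(np.asarray, (F, F0, E, E0))
    if use_fweight:
        # per-atom magnitude of the reference force vector (atoms x 3)
        mag = np.linalg.norm(F0.reshape(-1, 3), axis=1)
        u = 1.0 / (np.repeat(mag, 3) + FORCE_EPS) ** 2
    else:
        u = np.ones(F.size)
    return np.sum(u * (F.ravel() - F0.ravel()) ** 2) + np.sum(w_E * (E - E0) ** 2)

F0 = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])   # reference forces (2 atoms)
E0 = np.array([-3.2])                                # reference energy
print(target_function(F0, F0, E0, E0))               # exact match -> 0.0
```

With `use_fweight=True`, a residual on the weakly-forced first atom contributes more to the sum than the same residual on the second atom, which is exactly the rebalancing the fweight option is meant to provide.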
$\pi(s)$ does not mean $q(s,a)$ here. $\pi(s)$ is a policy that represents a probability distribution over the action space for a specific state. $q(s,a)$ is a state-action value function that tells us how much reward we expect to get by taking action $a$ in state $s$ and continuing onwards. For the value iteration on the right side with this update formula: $v(s) \leftarrow \max\limits_{a} \sum\limits_{s'}p(s'\mid s, a)[r(s, a, s') + \gamma v(s')]$ we have an implicit greedy deterministic policy that updates the value of state $s$ based on the greedy action that gives us the biggest expected return. When the value iteration has converged to its values based on greedy behaviour after $n$ iterations, we can get the explicit optimal policy with: $\pi(s) = \arg \max\limits_{a} \sum\limits_{s'} p(s'\mid s, a)[r(s, a, s') + \gamma v(s')]$ Here we are basically saying that the action that has the highest expected return in state $s$ will have probability 1, and all other actions in the action space will have probability 0. For the policy evaluation on the left side with this update formula: $v(s) \leftarrow \sum\limits_{s'}p(s'\mid s, \pi(s))[r(s, \pi(s), s') + \gamma v(s')]$ we have an explicit policy $\pi$ that is not greedy in the general case at the beginning. That policy is usually randomly initialized, so the actions it takes will not be greedy; it means we can start with a policy that takes some pretty bad actions. It also does not need to be deterministic, but I guess in this case it is. Here we are updating the value of state $s$ according to the current policy $\pi$. After the policy evaluation step has run for $n$ iterations, we start with the policy improvement step: $\pi(s) = \arg \max\limits_{a} \sum\limits_{s'} p(s'\mid s, a)[r(s, a, s') + \gamma v(s')]$ Here we greedily update our policy based on the values of states that we got through the policy evaluation step.
It is guaranteed that our policy will improve, but it is not guaranteed that our policy will be optimal after only one policy improvement step. After the improvement step we do the evaluation step for the new, improved policy, and after that we again do the improvement step, and so on until we converge to the optimal policy.
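The value-iteration update and the greedy policy extraction described above can be sketched on a made-up two-state, two-action MDP (the transition probabilities and rewards below are invented purely for illustration):

```python
import numpy as np

# Value iteration on a toy 2-state, 2-action MDP, followed by the greedy
# policy extraction.  P[s, a, s'] = p(s'|s,a);  R[s, a, s'] = r(s, a, s').
n_states, gamma = 2, 0.9
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.0, 0.0], [0.0, 1.0]]])

v = np.zeros(n_states)
for _ in range(1000):
    # v(s) <- max_a sum_s' p(s'|s,a) [r(s,a,s') + gamma v(s')]
    q = np.einsum('sat,sat->sa', P, R + gamma * v)   # q(s, a)
    v_new = q.max(axis=1)
    if np.max(np.abs(v_new - v)) < 1e-10:            # converged
        break
    v = v_new

# Explicit optimal policy: pi(s) = argmax_a q(s, a)
pi = q.argmax(axis=1)
print("v* =", v, " pi* =", pi)
```

The implicit greedy policy lives entirely inside the `max` of the update loop; only after convergence is the deterministic policy `pi` made explicit via the `argmax`, mirroring the distinction drawn in the answer above.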
In Revuz and Yor - Continuous Martingales and Brownian Motion (1999), Chapter IV, Proposition 1.13, it is proven that for a continuous local martingale $M_t$ the intervals of constancy coincide with those of the predictable quadratic variation $\langle M\rangle_t$ or the optional quadratic variation $[M]_t$ (since these coincide due to the continuity of the local martingale). I wonder if this stays true for $M_t$ being just càdlàg. I guess not. So let's consider this setup: given a square integrable martingale $X_t=F_t-a\cdot K_t$ with predictable quadratic variation $b\cdot K_t$, where $a,b$ are constants and $K_t$ is continuous but $F_t$ only càdlàg. The aim is to conclude that $K_t$ being constant on some interval (so that the predictable quadratic variation $b\cdot K_t$ is constant there) implies that $X_t$ is constant on that interval, and thus $F_t$ as well. Here $K_0=0$, $K_t\rightarrow \infty$ a.s., and $K_t$ is a non-decreasing process.
A few days ago, the World Science Festival of Brian Greene posted a 90-minute video with interviews about the state of fundamental physics: Carlo Rubbia demands that particle physicists be courageous and build the damn muon collider, a compact Higgs factory. Bill Zajc sent it to me and I finally had the time and energy to watch it, at double speed. At the beginning, four minutes of Greene and visual tricks – similar to those from his PBS TV shows – are shown. I actually think that some of the tricks are new and even cooler than they used to be. I really liked the segment where Greene was grabbing and magnifying the molecules and taking the atoms, nuclei, and strings out of them. The illustrations of the microscopic building blocks had to be created to match the motion of Greene's hands. Greene had three guests whom he interviewed separately, in 30-minute segments: Marcelo Gleiser, who spoke like a historian and philosopher; Michael Dine, who covered the search for supersymmetry etc.; and Andrew Strominger, who amusingly discussed the exciting quantum gravity twists of the reductionist stringy program. I know Dine and Strominger very well – not only as co-authors of papers. I have never met Gleiser. Some Brazilian commenters mention he is unknown in Brazil but he should be famous, partly because he is a DILF – I suppose it meant a Daddy [He or She] would Like to Fudge. Gleiser and Greene discuss the history of unification and a model of expanding knowledge (an island) which also expands the ignorance (the boundary between the known island and the unknown sea). I thought that David Gross insisted that it was his metaphor but OK. Michael Dine has been a phenomenologist but he bragged that he was one of the few who were cautiously saying that SUSY may remain undiscovered after a few years ;-).
I don't really think that this skepticism was rare (my 2007 predictions said that the probability for the LHC SUSY discovery was 50% and I think it agreed with what people were saying). The real issue is that when you're skeptical about discoveries around the corner, you don't write papers about it – because you really don't think that you have anything interesting to write about. That's why the literature about this topic is naturally dominated by the people who did expect speedy discoveries of SUSY. But the actual percentage of the HEP people who expected speedy discoveries of BSM physics wasn't much higher than 1/2 and maybe it was lower than 1/2. The skeptics avoided writing useless and unmotivated papers and I think it's the right thing to do. One must just be careful not to misinterpret the composition of the literature as the quantification of some collective belief – but it's the interpreter's fault if this invalid interpretation of the composition of the papers is made. OK, so Greene said that 90% of Dine's work may be wrong or useless and Dine roughly agreed. Andy Strominger (whose 94-year-old father just got a new 5-year early career grant) pumped quite a different level of excitement into the festival. The research turned out to be much more exciting than what people expected. People expected some reductionist walk towards shorter distances and higher energies, they would finally find the final stringy constituents, strings, and that would be the end of it. Physics departments would shut their doors and Brian Greene would pour champagne on his festivals, or something like that, Andy said. ;-) What happened was something else. People found the black holes, their puzzling quantum gravity behavior, holography, connections with superconductors and lots of other things. This is better than just "some final particles inside quarks". I agree with that. Strominger's metaphor involves Columbus.
You can see that Andy is not quite the extreme progressive left – because those wouldn't dare to speak about the imperialist villain Columbus without trashing him. OK, Columbus promised to get to China. Instead, he discovered America. Some of his sponsors were disappointed that he only discovered some America and not China. These days, people may be happy because America is better than China. Andy forgot to describe the future evaluation. Maybe, in a century or so from now, people will be disappointed again that Columbus found just America and not China because China may be better. But it could be good news for Andy (he's at least impartial) – who has spent quite some time in China and who actually speaks Chinese. While Dine predicted that the field would shrink, Strominger's description implicitly says that the field is growing. I agree with Andy – but a part of the difference is that Andy and Michael don't have quite the same field. Dine is a phenomenologist and the contact with the experiment is almost a defining condition of his field or subfield. Strominger is a formal theorist so he can do great advances independently of experimental tests. It would be a surprise for the people in 1985 if they saw how the string theory research has evolved. In 1985, Witten expected that in a few weeks, the right compactification would be found and that would be the end of physics. Instead, all the complex branches of knowledge were found. With hindsight, it's almost obvious that what has happened had to happen. The one-direction reductionist approach works well but only up to the Planck scale. There are no sub-Planckian, physically distinguishable distances. So it should have been clear, even in advance, that the one-direction journey had to become impossible or multi-directional once the true phenomena near the fundamental scale are probed. And that's what happened.
Most of the stringy quantum gravity research makes it very clear that it's not a part of quantum field theory that respects the one-directional quantum field theory renormalization flows. All phenomena are "equally fundamental" at the Planck scale, there is no clear explanatory direction over there anymore. People unavoidably study effects at distances that no longer shrink (you shouldn't shrink beneath the Planck length) and they study increasingly complex behavior (starting with the entanglement) of the building blocks that are roughly Planckian in size. A viewer who has observed the exchange carefully could have seen some additional, funny subtle differences between Greene and Strominger. Greene wanted to introduce black hole thermodynamics with stories worshiping John Wheeler. It was all about Wheeler and... about his student Jacob Bekenstein. Meanwhile, Andy Strominger responded by completely ignoring this thread and talking about his late friend Hawking only. Strominger told us that Hawking had a tomb with the Bekenstein-Hawking entropy on it. Well, Andy has conflated Boltzmann's and Hawking's tombs. Boltzmann has \(S=k\cdot \log W\) over there but Hawking has the temperature formula \[ T = \frac{\hbar c^3}{8\pi G M k} \] on his grave, along with "here lies what was mortal of Stephen Hawking, 1942-2018", indicating that most of Stephen Hawking is immortal and eternal. I just wanted to return some rigor to the discussion of graves here, Andy. All the guests turned out to be amusing narrators but it's pretty funny that Andy Strominger ended up as the most bullish and enthusiastic guest. Why? Fifteen years ago, Andy was invited to record some monologues for the PBS NOVA TV shows of Brian Greene, much like Cumrun Vafa and others. Vafa was totally excited about the TV tricks. They made his head explode! He watched it with his two sons, and when Vafa's and Bugs Bunny's heads exploded, all three boys were happy. (Vafa has 2 sons, Strominger has 4 daughters.
If this perfect correlation were the rule in Israel and Iran, you could be optimistic about the fudging happy end of the conflict of the two countries LOL.) On the other hand, those 15 years ago, Andy Strominger was less enthusiastic because all his clips were removed from the show. Maverick Strominger's comments were insufficiently enthusiastic for Greene's simple, pre-determined, bullish tone. So now, when maverick Strominger is the cheerleader-in-chief, you may be pretty sure that most people speak rather negatively and pump lots of the disillusion into the discourse.
berylium? really? okay then... toroidalet wrote: I Undertale hate it when people Emoji movie insert keywords so people will see their berylium page. A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content. 83bismuth38 Posts:453 Joined:March 2nd, 2017, 4:23 pm Location:Still sitting around in Sagittarius A... Contact: Posts:3138 Joined:June 19th, 2015, 8:50 pm Location:In the kingdom of Sultan Hamengkubuwono X When xq is in the middle of a different object's apgcode. "That's no ship!" Airy Clave White It Nay When you post something and someone else posts something unrelated and it goes to the next page. Also when people say that things that haven't happened to them trigger them. "Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life." -Terry Pratchett Huh. I've never seen a c/posts spaceship before. drc wrote: "The speed is actually" posts Bored of using the Moore neighbourhood for everything? Introducing the Range-2 von Neumann isotropic non-totalistic rulespace! Gamedziner wrote: What's wrong with them? drc wrote: "The speed is actually" posts It could be solved with a simple PM rather than an entire post. An exception is if it's contained within a significantly large post. I hate it when people post rule tables for non-totalistic rules. (Yes, I know some people are on mobile, but they can just generate them themselves. [citation needed]) "Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life." -Terry Pratchett OK this is a very niche one that I hadn't remembered until a few hours ago.
You know in some arcades they give you this string of cardboard tickets you can redeem for stuff, usually meant for kids. The tickets fold beautifully, perfectly packed, if you order them one right, one left - zigzagging. When people fold them randomly in any direction, giving a clearly low-density packing with loads of strain, I just think omg why on Earth would you do that?! Surely they'd have realised by now? It's not that crazy to realise? Surely there is a clear preference for having them well packed; nobody would prefer an unwieldy mess?! Also when I'm typing anything and I finish writing it and it just goes to the next line or just goes to the next page. Especially when the punctuation mark at the end brings the last word down one line. This also applies to writing in a notebook: I finish writing something but the very last thing goes to a new page. "Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life." -Terry Pratchett 83bismuth38 Posts:453 Joined:March 2nd, 2017, 4:23 pm Location:Still sitting around in Sagittarius A... Contact: ... you were referencing me before i changed it, weren't you? because I had fit both of those. A for awesome wrote: When people put non-spectacularly-interesting patterns, questions, etc. in their signature. ON A DIFFERENT NOTE.
When I want to rotate a hexagonal file but Golly refuses, because for some reason it calculates hexagonal patterns on a square grid, and that really bugs me, because if you want to show that something has six sides you don't show it with four, and it makes more sense to have the grid be changed to hexagonal. I understand von Neumann, because no shape exists (that I know of) that has 4 corners and no edges, but COME ON, WHY?! WHY DO YOU REPRESENT HEXAGONS WITH SQUARES?! In all seriousness this bothers me and must be fixed or I will SINGLEHANDEDLY eat a universe. EDIT: possibly this one. EDIT 2: IT HAS BEGUN. Last edited by 83bismuth38 on September 19th, 2017, 8:25 pm, edited 1 time in total. Actually, I don't remember who I was referencing, but I don't think it was you, and if it was, it wasn't personal. 83bismuth38 wrote: ... you were referencing me before i changed it, weren't you? because I had fit both of those. A for awesome wrote: When people put non-spectacularly-interesting patterns, questions, etc. in their signature. $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$ http://conwaylife.com/wiki/A_for_all Aidan F. Pierce 83bismuth38 Posts:453 Joined:March 2nd, 2017, 4:23 pm Location:Still sitting around in Sagittarius A... Contact: oh okay yeah of course sure A for awesome wrote: Actually, I don't remember who I was referencing, but I don't think it was you, and if it was, it wasn't personal. 83bismuth38 wrote: ... you were referencing me before i changed it, weren't you? because I had fit both of those. A for awesome wrote: When people put non-spectacularly-interesting patterns, questions, etc.
in their signature.

but really though, i wouldn't have cared.

When someone gives a presentation to a bunch of people and you know that they're getting the facts wrong. Especially if this is during the Q&A section.

"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life." -Terry Pratchett

When you watch a boring video in class but you understand it perfectly and then at the end your classmates don't get it so the teacher plays the boring video again

Airy Clave White It Nay

when scientists decide to send a random guy into a black hole hovering directly above Earth for no reason at all.
hit; that random guy was me.

When I see a "one-step" organic reaction that occurs in an exercise book for senior high school and simply takes place under "certain circumstance" like the one marked "?" here but fail to figure out how it works even if I have prepared for our provincial chemistry olympiad
EDIT: In fact it's not that hard. Just do a Darzens reaction then hydrolysis and decarboxylate.

Current status: outside the continent of cellular automata. Specifically, not on the plain of life.

An awesome gun firing cool spaceships:

Code: Select all
x = 3, y = 5, rule = B2kn3-ekq4i/S23ijkqr4eikry
2bo$2o$o$obo$b2o!

When there's a rule with a decently common puffer but it can't interact with itself

"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett

When that oscillator is just not sparky enough.

When you're sooooooo close to a thing you consider amazing but miss...

Airy Clave White It Nay

People posting tons of "new" discoveries that have been known for decades, showing that they've not observed standard netiquette by reading the forums a while before posting, nor done the most minimal research about whether things have been already known, despite repeated posts about where to find such resources (e.g. jslife, wiki, Life lexicon, etc.).

People posting tons of useless "new" discoveries that take longer to post than to find (e.g. "look what happens when I put this blinker next to this beehive").

Newbies with attitudes, who think they know more than people who have been part of the community for years or even decades.

Posts where the quoted text is substantially longer than added text. Especially "me too" posts.

People whose signatures are longer than the actual text of their posts.

People whose signatures include graphics or pattern files, especially ones that are just human-readable text.

Improper grammar, spelling, and punctuation (although I've gotten used to that; long-term use of the internet has made me rather fluent in typo, both reading and writing). Imperfect English is not unreasonable from people for whom English is not a primary language, but from English speakers, it is a symptom of sloppiness that can also manifest in other areas.

That's G U S T A V O right there
mniemiec wrote: People posting tons of "new" discoveries that have been known for decades, showing that they've not observed standard netiquette by reading the forums a while before posting, nor done the most minimal research about whether things have been already known, despite repeated posts about where to find such resources (e.g. jslife, wiki, Life lexicon, etc.). People posting tons of useless "new" discoveries that take longer to post than to find (e.g. "look what happens when I put this blinker next to this beehive"). Newbies with attitudes, who think they know more than people who have been part of the community for years or even decades.

Also, when you walk into a wall slowly and carefully but you hit your teeth on the wall and it hurts so bad.

Airy Clave White It Nay
A fairly trivial sufficient condition for the existence of a complete Boolean algebra of projections is for the space $X$ to be (up to isomorphism) of the form $(\sum_{n\in\mathbb{N}} \oplus E_n)_{\ell_p}$, $1\leq p < \infty$, or $(\sum_{n\in\mathbb{N}} \oplus E_n)_{c_0}$ (where the spaces $E_n$ are, of course, nonzero). So, an example of a space with the property requested in the OP's second question is a space of the form $c_0(E)$ or $\ell_p(E)$, $1\leq p < \infty$, where $E$ is a separable Banach space that does not embed in any Banach space having an unconditional basis; examples of such spaces $E$ include:

The James space $J$.

The James tree space $JT$.

$L_1[0,1]$.

Let $K$ be either an uncountable compact metric space (e.g., $[0,1]$), or equal to a compact ordinal interval $[0,\alpha]$, where $\alpha \geq \omega^\omega$ and $[0,\alpha]$ is equipped with its natural order topology. Then one can take $E=C(K)$.

The universal basis space $U$ of Pelczynski, which has (a basis and) the property that every Banach space with a basis is isomorphic to a complemented subspace of $U$. In particular, $U$ does not embed in a space with an unconditional basis since it contains (complemented) copies of $L_1[0,1]$ and the $C(K)$ spaces mentioned above (since these spaces all have a basis).

With the exception of the James space, all of the examples $E$ given above have the property that $E$ is isomorphic to $c_0(E)$ or $\ell_p(E)$ for some $1\leq p < \infty$ (in the case of Pelczynski's space, we actually have that $U$ is isomorphic to all of the spaces $c_0(U)$ and $\ell_p(U)$, $1\leq p < \infty$); so, with the exception of the James space, we can in fact take any of the spaces $E$ given as examples above as an answer to the second question. Finally, I mention that one could also consider, in a similar way, direct sums with respect to any unconditional basis.
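To spell out why the condition at the start suffices (a standard sketch, not specific to any particular reference): for $X = (\sum_{n\in\mathbb{N}} \oplus E_n)_{\ell_p}$ or $(\sum_{n\in\mathbb{N}} \oplus E_n)_{c_0}$, the coordinate band projections indexed by subsets of $\mathbb{N}$,
$$P_A\big((x_n)_{n\in\mathbb{N}}\big) = \big(\mathbf{1}_A(n)\,x_n\big)_{n\in\mathbb{N}}, \qquad A \subseteq \mathbb{N},$$
satisfy $P_A P_B = P_{A\cap B}$, $P_A + P_B - P_{A\cap B} = P_{A\cup B}$ and $I - P_A = P_{\mathbb{N}\setminus A}$, so $\{P_A : A\subseteq\mathbb{N}\}$ is a Boolean algebra of norm-one projections; it is complete because the supremum of any family $\{P_{A_i}\}_{i\in I}$ is realized by $P_{\bigcup_{i\in I} A_i}$.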
ISSN: 1531-3492 eISSN: 1553-524X
Discrete & Continuous Dynamical Systems - B
January 2009, Volume 11, Issue 1

Abstract: This special issue of Discrete and Continuous Dynamical Systems Series B serves as proceedings of a conference on the calculus of variations and partial differential equations organized in Cortona in May 2007. The organizers were B. Dacorogna (École Polytechnique Fédérale de Lausanne), E. Mascolo (University of Florence) and C. Sbordone (University of Naples), and it was supported by INdAM, the Istituto Nazionale di Alta Matematica Francesco Severi. This conference was the occasion of a special tribute to Paolo Marcellini for his sixtieth birthday. The themes discussed in the conference were: existence, regularity and comparison for minimizers of the calculus of variations and for solutions of nonlinear elliptic equations and systems, the study of nonlinear parabolic equations and systems, applications of variational methods to the Euler equation, existence and selection principles for implicit equations, isoperimetric inequalities, and applications of partial differential equations to nonlinear elasticity. Let us now see in more detail all the contributions. For more information please click the “Full Text” above.

Abstract: In this paper we illustrate some recent work [1], [2] on Brenier's variational models for incompressible Euler equations. These models give rise to a relaxation of the Arnold distance in the space of measure-preserving maps and, more generally, measure-preserving plans. We analyze the properties of the relaxed distance, we show a close link between the Lagrangian and the Eulerian model, and we derive necessary and sufficient optimality conditions for minimizers. These conditions take into account a modified Lagrangian induced by the pressure field.
Abstract: We investigate the continuous dependence of the minimal speed of propagation and the profile of the corresponding travelling wave solution of Fisher-type reaction-diffusion equations $\vartheta_t = (D(\vartheta)\vartheta_x)_x + f(\vartheta)$ with respect to both the reaction term $f$ and the diffusivity $D$. We also introduce and discuss the concept of fast heteroclinics in this context, which allows one to interpret the appearance of sharp heteroclinics in the case of degenerate diffusivity ($D(0)=0$).

Abstract: We shall prove results asserting the (global) $L^s$-summability of the minima of integral functionals, using the classical structural assumptions. A feature of the method is that it depends not so much on the minimization problem but rather on the "control from below" of the structural assumptions. Then the proof concerning the summability of the minima of integral functionals can be easily adapted in order to prove the summability of solutions of nonlinear elliptic equations (even when they are not Euler equations of functionals).

Abstract: In this paper we establish higher integrability results for local minimizers of variational integrals satisfying a degenerate ellipticity condition. The function which measures the degeneracy of the problem is assumed to be exponentially integrable.

Abstract: In this paper we establish a comparison result for solutions to the problem $\mbox{minimize}\int_\Omega l(||\nabla u(x)||)dx$ or to the problem $\mbox{minimize}\int_\Omega l(\gamma_C(\nabla u(x)))dx$, for a special class of solutions, without assuming either smoothness or strict convexity of $l$.
Abstract: We prove boundedness of minimizers of energy-functionals, for instance of the anisotropic type (1) below, under sharp assumptions on the exponents $p_{i}$ in terms of $\overline{p}^*$, the Sobolev conjugate exponent of $\overline{p}$; i.e., $\overline{p}^* = \frac{n\overline{p}}{n-\overline{p}}$, where $\frac{1}{\overline{p}} = \frac{1}{n} \sum_{i=1}^{n}\frac{1}{p_{i}}$. As a consequence, by means of regularity results due to Lieberman [21], we obtain the local Lipschitz-continuity of minimizers under sharp assumptions on the exponents of anisotropic growth.

Abstract: Implicit Ordinary or Partial Differential Equations have been widely studied in recent times, essentially from the existence of solutions point of view. One of the main issues is to select a meaningful solution among the infinitely many ones. The most celebrated principle is the viscosity method. This selection principle is well adapted to convex Hamiltonians, but it is not always applicable to the non-convex setting. In this work we present an alternative selection principle that singles out the most regular solutions (which do not always coincide with the viscosity ones). Our method is based on a general regularity theorem for Implicit ODEs. We also provide several examples.

Abstract: The aim of this paper is to study the minimal perimeter problem for sets containing a fixed set $E$ in $\mathbb{R}^2$ in a very general setting, and to give the explicit solution.

Abstract: For the linearized setting of the dynamics of complex bodies we construct variational integrators and prove their convergence by making use of BV estimates on the rate fields. We allow for peculiar substructural inertia and internal dissipation, all accounted for by a d'Alembert-Lagrange-type principle.

Abstract: We study the nonvariational equation $ \sum_{i,j=1}^n a_{ij}(x)\,\frac{\partial^2 u}{\partial x_i\,\partial x_j}=f$ in domains of $\mathbb{R}^n$.
We assume that the coefficients $a_{ij}$ are in $BMO$ and the equation is elliptic, but not uniformly, and consider $f$ in $L^2(\mathbb{R}^n)$, or even in the Zygmund class $L^2\log^\alpha L(\mathbb{R}^n)$. We also solve the Dirichlet problem.

Abstract: We give an elementary proof for the integrability properties of the gradient of the harmonic extension of a self-homeomorphism of the circle, giving explicit bounds for the $p$-norms, $p<2$, estimates in Orlicz classes and also an $L^2(\mathbb D)$-weak type estimate.

Abstract: Light can change the orientation of a liquid crystal. This is the optical Freedericksz transition, discovered by Saupe. In the Janossy effect, the threshold intensity for the optical Freedericksz transition is dramatically reduced by the addition of a small amount of dye to the sample. This has been interpreted as an optically pumped orientational ratchet mechanism, similar to the ratchet mechanism in biological molecular motors. To interpret the evolution system proposed for this effect requires an innovative gradient flow. Here we introduce this gradient flow and illustrate how it also provides the boundary conditions, some unusual coupling conditions, between the liquid crystal and the dye. An existence theorem for the evolution problem follows as well. Furthermore, we consider the time independent problem and show its local asymptotic stability. Finally we progress toward showing that the proposed model correctly predicts the onset of the Janossy effect.

Abstract: In this paper we deal with the study of some regularity properties of weak solutions to non-linear, second-order parabolic systems of the type $ u_{t}-$div$A(Du)=0, $ $ (x,t)\in \Omega \times (0,T)=\Omega_{T}, $ where $\Omega \subset \mathbb{R}^{n}$ is a bounded domain, $T>0$, $A:\mathbb{R}^{nN}\to \mathbb{R}^{N}$ satisfies a $p$-growth condition and $u:\Omega_{T}\to \mathbb{R}^{N}$.
In particular we focus on the case $\frac{2n}{n+2} < p < 2$.

Abstract: We prove existence of bounded weak solutions $u: \Omega \subset \mathbb{R}^{n} \to \mathbb{R}^{N}$ for the Dirichlet problem $-\mathrm{div}( a(x, u(x), Du(x)) ) = f(x)$, $x \in \Omega$; $u(x) = 0$, $x \in \partial\Omega$, where $\Omega$ is a bounded open set, $a$ is a suitable degenerate elliptic operator and $f$ has enough integrability.
$\newcommand{\dd}{\mathrm{d}}\newcommand{\R}{\mathbb{R}}\newcommand{\C}{\mathbb{C}}\newcommand{\lap}{\mathscr{L}}$I went through the proof again, and I think that you're right. The variables are intertwined and the proof is not rigorous. Here's the correct statement:

Final value theorem. Suppose that the following conditions are satisfied:

1. $f$ is continuously differentiable and both $f$ and $f'$ have a Laplace transform,
2. $f'$ is absolutely integrable, that is, $\int_0^\infty|f'(\tau)|\dd\tau$ is finite,
3. $\lim_{t\to\infty}f(t)$ exists and is finite.

Then, \begin{equation} \lim_{t\to \infty}f(t) = \lim_{s\to 0^+}sF(s). \end{equation}

Proof. We know that the Laplace transform of the derivative is \begin{equation} sF(s) - f(0^+) {}={} \lap\{f'(t)\}(s) {}={} \int_{0^+}^\infty f'(\tau) e^{-s\tau}\dd\tau. \end{equation}Therefore, \begin{align} \lim_{s\to 0^+} sF(s) {}={}& f(0^+) + \lim_{s{}\to{}0^+} \int_{0^+}^{\infty}f'(\tau)e^{-s\tau}\dd\tau. \notag\end{align} We want to use the dominated convergence theorem; define $\phi_s(\tau) = f'(\tau)e^{-s\tau}$. We have $|\phi_s(\tau)|\leq |f'(\tau)|$, which is assumed to be absolutely integrable (Assumption 2). Therefore, \begin{align}\lim_{s\to 0^+} sF(s) {}={}&f(0^+) + \int_{0^+}^{\infty} \lim_{s{}\to{}0^+} f'(\tau)e^{-s\tau}\dd\tau \notag \\ {}={}&f(0^+) + \int_{0^+}^{\infty} f'(\tau)\dd\tau \notag \\ {}={}& {f(0^+)} + \lim_{t\to\infty}f(t) - {f(0^+)} \notag \\ {}={}& \lim_{t\to\infty} f(t). \end{align}This completes the proof. $\Box$

Comment on the Wikipedia article (8 May, 2019): it is stated that with a change of variables, $\xi = st$ (in fact, the same symbol is used for $t$ and $\xi$), the integral becomes $$\int_0^\infty \lim_{s\to 0^+}f(\xi/s) e^{-\xi}\dd\xi.$$ As the OP noted, there is some confusion in that proof; in fact, $\xi$ and $s$ are not independent variables (by definition), so the limit $\lim_{s\to 0^+}f(\xi/s)$ is not equal to $\lim_{t\to\infty}f(t)$.
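As a quick sanity check of the statement (not part of the proof above), one can verify that the two limits agree for a concrete function satisfying all three assumptions, e.g. $f(t) = 1 - e^{-t}$. A minimal SymPy sketch:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.symbols('s', positive=True)

# f satisfies the three hypotheses: it is C^1, both f and f' have
# Laplace transforms, f' = exp(-t) is absolutely integrable on
# [0, oo), and f(t) -> 1 as t -> oo.
f = 1 - sp.exp(-t)

# F(s) = L{f}(s); noconds=True drops the convergence conditions
F = sp.laplace_transform(f, t, s, noconds=True)

lhs = sp.limit(f, t, sp.oo)       # lim_{t->oo} f(t)
rhs = sp.limit(s * F, s, 0, '+')  # lim_{s->0+} s F(s)

print(lhs, rhs)  # 1 1
```

Here $F(s) = 1/s - 1/(s+1)$, so $sF(s) = 1 - s/(s+1) \to 1$ as $s \to 0^+$, matching $\lim_{t\to\infty} f(t) = 1$.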
Authors Mats Granath, Johan Schött
Published in Physical Review B. Condensed Matter and Materials Physics
Volume 90
Pages 235129
ISSN 1098-0121
Publication year 2014
Published at Department of Physics (GU)
Language en
Links dx.doi.org/10.1103/PhysRevB.90.2351...
Subject categories Condensed Matter Physics

We study the Mott insulating state of the half-filled paramagnetic Hubbard model within dynamical mean field theory using a recently formulated stochastic and non-perturbative quantum impurity solver. The method is based on calculating the impurity self energy as a sample average over a representative distribution of impurity models solved by exact diagonalization. Due to the natural parallelization of the method, millions of poles are readily generated for the self energy, which allows one to work with very small pole-broadening $\eta$. Solutions at small and large $\eta$ are qualitatively different; solutions at large $\eta$ show featureless Hubbard bands, whereas solutions at small $\eta\leq 0.001$ (in units of half bare band width) show a band of electronic quasiparticles with very small quasiparticle weight at the inner edge of the Hubbard bands. The validity of the results is supported by agreement, within statistical error $\sigma_{\text{QMC}}\sim 10^{-4}$ on the imaginary frequency axis, with calculations using a continuous time quantum Monte Carlo solver. Nevertheless, convergence with respect to finite size of the stochastic exact diagonalization solver remains to be rigorously established.
Anastasios Bountis Kabanbay batyr, 53, 010000, Astana, Republic of Kazakhstan Nazarbayev University, Kazakhstan Publications: Anastassiou S., Bountis A., Bäcker A. Recent Results on the Dynamics of Higher-dimensional Hénon Maps 2018, vol. 23, no. 2, pp. 161-177 Abstract We investigate different aspects of chaotic dynamics in Hénon maps of dimension higher than 2. First, we review recent results on the existence of homoclinic points in 2-d and 4-d such maps, by demonstrating how they can be located with great accuracy using the parametrization method. Then we turn our attention to perturbations of Hénon maps by an angle variable that are defined on the solid torus, and prove the existence of uniformly hyperbolic solenoid attractors for an open set of parameters. We thus argue that higher-dimensional Hénon maps exhibit a rich variety of chaotic behavior that deserves to be further studied in a systematic way. Efthymiopoulos C., Bountis A., Manos T. Explicit construction of first integrals with quasi-monomial terms from the Painlevé series 2004, vol. 9, no. 3, pp. 385-398 Abstract The Painlevé and weak Painlevé conjectures have been used widely to identify new integrable nonlinear dynamical systems. For a system which passes the Painlevé test, the calculation of the integrals relies on a variety of methods which are independent from Painlevé analysis. The present paper proposes an explicit algorithm to build first integrals of a dynamical system, expressed as "quasi-polynomial" functions, from the information provided solely by the Painlevé–Laurent series solutions of a system of ODEs. Restrictions on the number and form of quasi-monomial terms appearing in a quasi-polynomial integral are obtained by an application of a theorem by Yoshida (1983). The integrals are obtained by a proper balancing of the coefficients in a quasi-polynomial function selected as initial ansatz for the integral, so that all dependence on powers of the time $\tau = t - t_0$ is eliminated.
Both right and left Painlevé series are useful in the method. Alternatively, the method can be used to show the non-existence of a quasi-polynomial first integral. Examples from specific dynamical systems are given. Marinakis V., Bountis A., Abenda S. Finitely and Infinitely Sheeted Solutions in Some Classes of Nonlinear ODEs 1998, vol. 3, no. 4, pp. 63-73 Abstract In this paper we examine an integrable and a non-integrable class of the first order nonlinear ordinary differential equations of the type $\dot{x}=x - x^n + \varepsilon g(t)$, $x \in \mathbb{C}$, $n \in \mathbb{N}$. We exploit, using the analysis proposed in [1], the asymptotic formulas which give the location of the singularities in the complex plane and show that there is an essential difference regarding the formation and the density of the singularities between the cases $g(t)=1$ and $g(t)=t$. Our analytical results are combined with a numerical study of the solutions in the complex time plane. Rothos V. M., Bountis A. The Second Order Mel'nikov Vector 1997, vol. 2, no. 1, pp. 26-35 Abstract Mel'nikov's perturbation method for showing the existence of transversal intersections between invariant manifolds of saddle fixed points of dynamical systems is extended here to second order in a small parameter $\epsilon$. More specifically, we follow an approach due to Wiggins and derive a formula for the second order Mel'nikov vector of a class of periodically perturbed $n$-degree of freedom Hamiltonian systems. Based on the simple zero of this vector, we prove an $O(\epsilon^2)$ sufficient condition for the existence of isolated homoclinic (or heteroclinic) orbits, in the case that the first order Mel'nikov vector vanishes identically. Our result is applied to a damped, periodically driven $1$-degree-of-freedom Hamiltonian and good agreement is obtained between theory and experiment, concerning the threshold of heteroclinic tangency.
Some properties of a class of $(F,E)$-$G$ generalized convex functions

1. Department of Mathematics, Chongqing Normal University, Chongqing 400047, China

Keywords: $(F,E)$-$G$ generalized convex functions, $(F,E)$ generalized convex set, $(F,E)$-$G$ quasiconvex functions, $F$ generalized convex set, $F$-$G$ generalized convex functions.

Mathematics Subject Classification: Primary: 90C26; Secondary: 90C3.

Citation: Lijia Yan. Some properties of a class of $(F,E)$-$G$ generalized convex functions. Numerical Algebra, Control & Optimization, 2013, 3 (4) : 615-625. doi: 10.3934/naco.2013.3.615
Complex bandstructure of Si(100)¶
Version: 2015.1
In this tutorial you will calculate the complex bandstructure of a silicon crystal along the (100) direction. In particular you will:
create the Si(100) surface;
set up and run the calculation;
plot the complex bandstructure (3D and 2D).
Background¶
In a periodic solid the eigenstates \(\psi_{n{\bf k}}\) of the Schrödinger equation \(H \psi_{n{\bf k}} = E_{n{\bf k}} S \psi_{n{\bf k}}\) (\(S\) is the overlap matrix) can be written as \(\psi_{n{\bf k}}({\bf r}) = e^{i {\bf k}\cdot {\bf r}} U_{n{\bf k}}({\bf r})\), where \(U_{n{\bf k}}({\bf r})\) is a periodic function with the same periodicity as the crystal itself. In a usual bandstructure calculation the wave vector \({\bf k}\) is real, and by solving the Schrödinger equation above for various fixed values of \({\bf k}\) (typically located along different symmetry lines in the first Brillouin zone) an eigenproblem is defined, from which the eigenenergies \(E_{n{\bf k}}\) (i.e. the bandstructure) can be determined.
When calculating the complex bandstructure another approach is taken [CS82]. Instead, the energy \(E\) is fixed, and the values of \({\bf k}\) which solve the Schrödinger equation are sought. Such solutions will be found with both real and complex \({\bf k}\); the solutions with a real \({\bf k}\) are the usual Bloch states, while the solutions with an imaginary part are exponentially decaying in one direction and increasing in the other. Such solutions cannot exist in a bulk material, and so they are normally ignored in a bandstructure calculation. They may however exist at a surface or interface, and they give information about how electronic states decay in the material, for instance through a thin tunneling barrier.
Note
You will primarily use the graphical user interface QuantumATK for setting up and analyzing the results. If you are new to QuantumATK, it is recommended to go through the Basic QuantumATK Tutorial.
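The fixed-energy approach can be illustrated on a toy model that is not part of this tutorial: a 1D nearest-neighbour tight-binding chain with dispersion \(E(k) = 2t\cos(ka)\), where the hopping \(t\) and lattice constant \(a\) are hypothetical parameters. Inverting the dispersion at a fixed energy gives a real \(k\) inside the band and a complex \(k\), i.e. an evanescent state, outside it:

```python
import numpy as np

t, a = 1.0, 1.0  # hopping and lattice constant (hypothetical units)

def k_of_E(E):
    """Invert E = 2t*cos(k*a) at fixed energy E.

    np.emath.arccos returns a real k for |E| <= 2|t| (a propagating
    Bloch state) and a complex k for |E| > 2|t| (an evanescent state,
    decaying like exp(-|Im k| x) in one direction).
    """
    return np.emath.arccos(E / (2.0 * t)) / a

k_in = k_of_E(1.0)   # inside the band: real, arccos(0.5) = pi/3
k_out = k_of_E(3.0)  # outside the band: |Im k| = arccosh(1.5) ~ 0.962

print(k_in, k_out)
```

In a 3D crystal the same idea applies for each fixed \((k_A,k_B)\), with the solver returning the solutions \(k_C + i\kappa_C\) along the surface normal.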
The calculations in this tutorial will be performed using the semi-empirical models of QuantumATK. A complete description of all the parameters, and in many cases a longer discussion about their physical relevance, can be found in the ATK Reference Manual. In particular, the Reference Manual entry for complex bandstructure calculations is of relevance: ComplexBandstructure.
Si(100) surface¶
Unfold Builders from the plugin panel and open the Surface (Cleave) tool, and follow the next steps to set up the Si(100) surface:
Keep the default Miller indices (100) and click Next.
Keep the default lattice vectors and click Next.
Select Periodic (bulk-like) for the out-of-plane cell vector.
Set the thickness of the surface to 1 layer.
Click Finish to add the Si(100) surface structure to the Stash.
With these settings we use a surface representation with a minimum number of layers in order to reduce the calculation time and avoid zone folding.
Note
The cleave plane – in the present case (100) – is always spanned by the two first unit cell vectors, \({\bf A}\) and \({\bf B}\). In the “electrode” mode, the normal to this plane coincides with \({\bf C}\), the third unit cell vector, but in the “bulk-like” mode this is not the case. In QuantumATK the complex bandstructure is always computed along the third reciprocal vector, \({\bf g}_C\), which is of course parallel to \({\bf A} \times {\bf B}\). With the present geometry you will therefore obtain the complex bandstructure along (100).
Complex bandstructure calculation¶
Select the ATK-SE: Extended Hückel calculator.
Set the k-point sampling to 5x5x5.
Under Huckel Basis set, select the Cerda.Silicon [diamond GW] basis set. This basis set has been fitted to GW calculations, and gives an excellent description of the bandstructure, including the size of the band gap.
Close the dialogue by clicking OK.
Set the energy range to -15 to 10 eV with 501 points.
Note
The projection of \({\bf k}\) on the cleave plane is kept constant in the complex bandstructure calculation, and the value of the projection is specified by the \((k_A,k_B)\) parameters, which are given in units of the two first reciprocal lattice vectors, \({\bf g}_A\) and \({\bf g}_B\). Therefore, the obtained solutions will lie along the third reciprocal lattice vector, \({\bf g}_C\), which is parallel to the cleave plane normal. A new set of solutions \(k_C+i\kappa_C\) is obtained for each value of \((k_A,k_B)\). Please note that \(k_C\) is therefore the real part of the complex bandstructure, while \(\kappa_C\) is the complex part.
You have now finished the Python script. Save it as si_100_cbs.py and send it to the Job Manager for execution. It should only take a few minutes for this small system. If needed, you can also download the script here: si_100_cbs.py.
Tip
This type of calculation parallelizes extremely well, so for larger structures it is warmly recommended to execute the script in parallel.
Analysing the results¶
The file si_100_cbs.hdf5 should now have appeared on the QuantumATK LabFloor. It contains the saved Hückel calculation and the ComplexBandstructure analysis object:
Select that analysis object and click the Show 2D Plot tool from the right-hand side plugins panel. A window with a plot of the complex bandstructure pops up. Zoom in a bit to filter out the solutions with large values of the complex part \(\kappa_C\), since these are not really relevant:
Note
In the plot above, the solutions \(\kappa_C\) are given in reciprocal Cartesian coordinates, to make it easier to compare different structures. For the real part of the bandstructure, on the other hand, the solutions \(k_C\) are normalized by \(L\), the layer separation perpendicular to the cleave plane. In the case of a fcc crystal cleaved along (100), the layer separation is \(L=a/2\), where \(a\) is the lattice constant.
The layer separation can be extracted from the ComplexBandstructure object using the layerSeparation() method, see ComplexBandstructure.

3D and 2D visualizations¶

In the complex part of the bandstructure, the solutions actually have a real part too. Thus, they could be visualized in a three-dimensional plot, \((x,y,z)=(k_C, \kappa_C, E)\). This is possible with the following script. Open the QuantumATK Editor, copy-paste the script, and save it as 3D_plot.py. Make sure that the indentation is correct. Alternatively, you can simply download it here: 3D_plot.py.

from QuantumATK import *
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
import math

# Read the complex bandstructure object from the NC file
cbs = nlread('si_100_cbs.nc', ComplexBandstructure)[-1]
energies = cbs.energies().inUnitsOf(eV)
k_real, k_complex = cbs.evaluate()
L = cbs.layerSeparation()

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

# First plot the real bands
kvr = numpy.array([])
e = numpy.array([])
# Loop over energies, and pick those where we have solutions
for (j, energy) in enumerate(energies):
    k = k_real[j]*L/math.pi
    if len(k) > 0:
        e = numpy.append(e, [energy,]*len(k))
        kvr = numpy.append(kvr, k)
# Plot the bands with red
ax.scatter([0.0,]*len(kvr), kvr, e, c='r', marker='o', linewidths=0, s=10)

# Next plot the complex bands
kvr = []
kvi = []
e = []
# Again loop over energies and pick solutions
for (j, energy) in enumerate(energies):
    if len(k_complex[j]) > 0:
        for x in numpy.array(k_complex[j]*L/math.pi):
            kr = numpy.abs(x.real)
            ki = -numpy.abs(x.imag)
            # Discard rapidly decaying modes which clutter the plot
            if ki > -0.3:
                e += [energy]
                kvr += [kr]
                kvi += [ki]
# Plot the complex bands with blue
ax.scatter(kvi, kvr, e, c='b', marker='o', linewidths=0, s=10)

# Put on labels
ax.set_xlabel('$\kappa$ (1/Ang)')
ax.set_ylabel('$kL/\pi$')
ax.set_zlabel('Energy / eV')
plt.show()

Execute the script using the Job Manager or from a Terminal. The plot will resemble the figure below, but your points will be more scattered, since the figure below was produced from a Hückel calculation with 10,001 energy points instead of 501.

Another way to visualize the real part of the complex bandstructure is using colors. The script 2D_plot.py does this (you can see the script below). It is a bit complicated towards the end, but that part is just for placing the colorbar, and could be omitted.

The "forest" of complex bands with a rather large value of \(\kappa\) is not usually seen in tight-binding simulations. However, for DFT and Hückel, the basis sets are larger, so there are more complex bands as well, connecting unoccupied levels with various valence bands.

from QuantumATK import *
import matplotlib.pyplot as plt
import math

# Read the complex bandstructure object from the NC file
cbs = nlread('si_100_cbs.nc', ComplexBandstructure)[-1]
energies = cbs.energies().inUnitsOf(eV)
k_real, k_complex = cbs.evaluate()

ax = plt.axes()
cmap = "Spectral"

# First plot the real bands
kvr = numpy.array([])
e = numpy.array([])
for (j, energy) in enumerate(energies):
    k = k_real[j]*cbs.layerSeparation()/math.pi
    if len(k) > 0:
        e = numpy.append(e, [energy,]*len(k))
        kvr = numpy.append(kvr, k)
# Plot
ax.scatter(kvr, e, c=numpy.abs(kvr), cmap=cmap, marker='o', linewidths=0, s=10)

# Next plot the complex bands
kvr = numpy.array([])
kvi = numpy.array([])
e = numpy.array([])
for (j, energy) in enumerate(energies):
    if len(k_complex[j]) > 0:
        kr = [numpy.abs(x.real) for x in k_complex[j]]
        ki = [numpy.abs(x.imag) for x in k_complex[j]]
        e = numpy.append(e, [energy,]*len(kr))
        kvr = numpy.append(kvr, kr)
        kvi = numpy.append(kvi, ki)
# Plot with color depending on the imaginary part (corresponding to real k-points)
sc = ax.scatter(-kvi, e, c=kvr, cmap=cmap, marker='o', linewidths=0, s=10)

# Put on labels and decorations
ax.axvline(0, color='b')
ax.grid(True, which='major')
ax.set_xlim(-1, 1)
ax.set_ylim(-15, 10)
ax.annotate('$\kappa$ (1/Ang)', xy=(0.25, -0.07), xycoords="axes fraction", ha="center")
ax.annotate('$kL / \pi$', xy=(0.75, -0.07), xycoords="axes fraction", ha="center")
ax.set_ylabel('Energy / eV')

# Add a colorbar
fig = plt.gcf()
x1, x2, y1, y2 = 0., 1, ax.get_ylim()[0], ax.get_ylim()[0]+1
trans = ax.transData + fig.transFigure.inverted()
ax_x1, ax_y1 = trans.transform_point([x1, y1])
ax_x2, ax_y2 = trans.transform_point([x2, y2])
ax_dx, ax_dy = ax_x2 - ax_x1, ax_y2 - ax_y1
cmap_axes = plt.axes([ax_x1, ax_y1, ax_dx, ax_dy])
a = numpy.outer(numpy.arange(0, 1, 0.01), numpy.ones(10)).transpose()
cmap_plt = plt.imshow(a, aspect='auto', cmap=plt.get_cmap(cmap), origin=(0,0))
ax = plt.gca()
ax.set_xticks([])
ax.set_yticks([])
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.show()

Hint You may note that there are "gaps" in the visualizations. The reason is that, unlike a normal bandstructure plot, the data is not plotted with lines, but with dots. In a standard bandstructure plot you can – with some level of confidence – define "bands", which run continuously between the symmetry points (only band crossings can create some small problems). In the present case, the number of solutions is, first of all, different at each energy (particularly on the complex side), and depending on the density of the energy sampling, you may not hit a particular band close to points where the bands are very flat (heavy effective mass). This can to some extent be alleviated by increasing the number of energy points.
I have these two classical-quantum states: $$\rho = \sum_{a} \lvert a\rangle \langle a\rvert \otimes q^a \\ \mu = \sum_{a} \lvert a\rangle \langle a\rvert \otimes r^a $$ where $\{\lvert a\rangle\}$ are the classical basis vectors, and $q^a, r^a$ are arbitrary matrices depending on $a$. Now, we can take the trace distance of these two classical-quantum states, which would be: $$T(\rho, \mu) = \frac{1}{2} ||\rho - \mu||_1 \\ = \frac{1}{2} || \sum_a \lvert a\rangle \langle a\rvert \otimes(q^a - r^a)||_1$$ Now, my question is, can we rewrite the above expression in the following way? $$T(\rho, \mu) = \frac{1}{2} \sum_a ||q^a - r^a||_1$$ I.e. just pulling the summation out of the norm.
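As a numerical sanity check (my own illustration, not part of the question): $\rho - \mu$ is block diagonal in the classical register, so its singular values are the union of the blocks' singular values, and the trace norm splits into a sum over blocks.

```python
import numpy as np

rng = np.random.default_rng(0)

def trace_norm(M):
    # ||M||_1 is the sum of the singular values of M
    return np.linalg.svd(M, compute_uv=False).sum()

d = 3    # dimension of the quantum register
n_a = 4  # number of classical labels a

# Random blocks standing in for q^a - r^a (positivity is not
# needed for the norm identity itself)
blocks = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
          for _ in range(n_a)]

# sum_a |a><a| (x) (q^a - r^a) is block diagonal
full = np.zeros((n_a * d, n_a * d), dtype=complex)
for a, B in enumerate(blocks):
    full[a * d:(a + 1) * d, a * d:(a + 1) * d] = B

lhs = trace_norm(full)                    # || rho - mu ||_1
rhs = sum(trace_norm(B) for B in blocks)  # sum_a || q^a - r^a ||_1
```

The two quantities agree to rounding error: the orthogonal classical flags $\lvert a\rangle\langle a\rvert$ keep the blocks from mixing, which is exactly what lets the summation be pulled out of the norm.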
When we prove the functional equation, we usually start by proving the Mellin transform identity $$\Gamma\left(\frac{s}{2}\right)\pi^{-s/2}\zeta(s)=\int_{0}^{\infty}\psi(x)x^{s/2-1}dx\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)$$ where $$\psi(x)=\sum_{n=1}^{\infty}e^{-\pi n^{2}x}.$$ This is where the factor of $\Gamma(s/2)\pi^{-s/2}$ comes from, and it can be proven by writing down the definition of $\Gamma(s/2)$, making a change of variable, and summing over $n$. Instead, when $s=k$ is an integer greater than $1$, we can prove this identity in a different way that makes it clear that this factor $\Gamma(k/2)\pi^{-k/2}$ is really $2/A_{k-1}$, where $A_{k-1}$ is the surface area of the unit sphere in $\mathbb{R}^k$ (the boundary of the $k$-dimensional ball). We have that $$\int_{-\infty}^{\infty}e^{-\pi n^{2}x^{2}}dx=\frac{1}{n},$$ and so $$\zeta(k)=\sum_{n=1}^{\infty}\frac{1}{n^{k}}=\sum_{n=1}^{\infty}\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}e^{-\pi n^{2}(x_{1}^{2}+\cdots+x_{k}^{2})}dx_{1}\cdots dx_{k}.$$ Switching to spherical coordinates and letting $r^{2}=x_{1}^{2}+\cdots+x_{k}^{2}$, a shell of radius $r$ has size $A_{k-1}r^{k-1}$, and so $$\zeta(k)=A_{k-1}\int_{0}^{\infty}\sum_{n=1}^{\infty}e^{-\pi n^{2}r^{2}}r^{k-1}dr,$$ and then by letting $t=r^{2}$, we have that $$\frac{2\zeta(k)}{A_{k-1}}=\int_{0}^{\infty}\psi(t)t^{k/2-1}dt.$$ Modifying this proof, one can show directly that $$A_{k-1}=\frac{2\pi^{k/2}}{\Gamma(k/2)}.$$
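As a quick numerical check (mine, not part of the original argument): each term of the Mellin integral evaluates to $\int_0^\infty e^{-\pi n^2 t}\,t^{k/2-1}\,dt = \Gamma(k/2)(\pi n^2)^{-k/2}$, so truncating both sides of the final identity at the same $N$ should agree to rounding error.

```python
import math

def zeta_partial(k, N):
    # Truncated Dirichlet series for zeta(k)
    return sum(1.0 / n**k for n in range(1, N + 1))

k, N = 4, 100000
A = 2 * math.pi**(k / 2) / math.gamma(k / 2)   # A_{k-1} = 2 pi^{k/2} / Gamma(k/2)

# Termwise Mellin transform of psi: each term integrates to Gamma(k/2) (pi n^2)^{-k/2}
integral = sum(math.gamma(k / 2) * (math.pi * n * n)**(-k / 2)
               for n in range(1, N + 1))

lhs = integral
rhs = 2 * zeta_partial(k, N) / A
```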
This is a complete re-working of the answer I had originally provided. If you're curious, you can check the edit history and see what was posted earlier. In comments to this question, OP stated that they might be able to get throttle and steering angles for the robot, but they probably wouldn't be accurate. That's okay; it's better than nothing. OP also stated that the IMU outputs a fused orientation, where the fused orientation is from the accelerometer, gyro, and magnetometer. The accelerometer output of the IMU may or may not be corrected by the orientation. If it is, great. If not, it's a straightforward conversion provided you pick the correct form; I think generally what you would want is the XYZ Tait-Bryan rotation matrix, but definitely check with the manufacturer and absolutely run through copious amounts of test data to check the results yourself. Side note here: I build quick visualizers for myself all the time and highly recommend you do it for yourself, too. Here's a quick Matlab snippet to use, assuming you have a variable time that has all of your time indices. If you just have the sample intervals, you can make time as the cumulative sum of the sample intervals: time = cumsum(sampleIntervals);.

width = 1;
length = 2;
height = 0.5;
shapeTop = [...
    -width/2, -length/2; ...
    -width/2, length/2; ...
    width/2, length/2; ...
    width/2, -length/2];
shapeTop(:,3) = ones(4,1)*height/2;
shapeBottom = shapeTop;
shapeBottom(:,3) = shapeBottom(:,3) - ones(4,1)*height;
shapeVertices = [shapeTop; shapeBottom];
shapeFaces = [...
    1,2,3,4,1; ...
    5,6,7,8,5; ...
    8,5,1,4,8; ...
    7,6,2,3,7; ...
    5,6,2,1,5; ...
    8,7,3,4,8];
shapeColor = [0.6 0.6 1]; % Light blue
figure(1)
shapePatch = patch(...
    'Faces', shapeFaces, ...
    'Vertices', shapeVertices, ...
    'FaceColor', shapeColor);
axis equal;
xlabel('X-Axis')
ylabel('Y-Axis')
zlabel('Z-Axis')
view([60,20]);
tic
while true
    elapsedTime = toc;
    if elapsedTime > time(end)
        break;
    end
    currentSample = find(time>=elapsedTime,1);
    rotMatrix = EulerToRotationMatrix(...
        EulerX(currentSample), ...
        EulerY(currentSample), ...
        EulerZ(currentSample));
    tempVertices = shapeVertices;
    tempVertices = (rotMatrix*tempVertices.').';
    set(shapePatch,'Vertices',tempVertices);
    drawnow;
    pause(0.01);
end

This code runs through all of your sample data, at a 1:1 playback speed, and updates a shape's orientation according to your X/Y/Z angles. EulerToRotationMatrix is a function you make yourself that accepts your Euler x/y/z angles and returns the appropriate rotation matrix. That's the part you need to play around with, but again I'm pretty sure you want Tait-Bryan XYZ. Record multiple test data sets of you moving your IMU from one starting pose to another starting pose, and take different paths to end at the same orientation. When you watch the playback you'll see pretty much immediately if you're doing the rotation matrix stuff correctly. So anyways, once you're sure you've got your rotation matrix, if you need to correct the accelerometer data, it's just accNew = rotMatrix*accOld;.

Now, there are four possible ways to update yaw information:

1. From a model using inputs (throttle and steering angle).
2. Magnetometer for angle relative to magnetic North.
3. Gyro readings from the IMU.
4. Heading required to move in a straight line from the previous GPS fix to the current GPS fix.

The GPS location is a Gaussian distribution about your actual location, so I would be very reluctant to use option 4 above for anything. The IMU already fuses the magnetometer and gyro data into one yaw reading, so the only thing that you can do to improve your yaw estimate is with the model. If you can't get access to the inputs (throttle/steering angle), then you're stuck with the yaw angle the IMU gives you.
If you do get access to the inputs, then you can model the x/y/$\theta$ rates as follows: $$\dot{x} = v\cos\left(\frac{\pi}{2} - \theta\right) \\\dot{y} = v\sin\left(\frac{\pi}{2} - \theta\right) \\\dot{\theta} = \frac{v}{L}\tan(\delta) \\$$ where $\theta$ is the heading, measured positive CW from the +y-axis; $\delta$ is the steering angle, measured positive CW from the forward vehicle direction; $L$ is the wheel base, the distance between the front and rear axles; and $v$ is the speed of the vehicle. The model of how your vehicle parameters (position/heading) update is nonlinear to the point that I wouldn't try to put it in state-space form. You really do need the actual $\cos$ and $\sin$ functions; the small-angle approximations wouldn't really work here. That's okay! You're using the extended Kalman filter, so you don't need to try to linearize the model. I think I'd probably try to model the throttle signal as a first-order speed regulator, such that: $$\dot{v} = \frac{c\left(\mbox{throttle}\right) - v}{\tau} \\$$ where $\tau$ is the time constant and $c$ is a value that scales the throttle to a speed. Start a test with the vehicle stationary, then give a step command for throttle. The long-term, stable speed can be used to determine $c$. Then you can figure out, given a particular speed "request" (throttle position), what the time constant was. The relationship between step input and system response is well known: 63.2% of final value at one time constant, 95.0% at three time constants, etc. I'd run multiple tests at each of a variety of throttle inputs and average the results. Now, your more complete model for the system would be: $$\dot{v} = \frac{c\left(\mbox{throttle}\right) - v}{\tau} \\v = v + \dot{v}\Delta T \\$$$$\dot{x} = v\cos\left(\frac{\pi}{2} - \theta\right) \\\dot{y} = v\sin\left(\frac{\pi}{2} - \theta\right) \\\dot{\theta} = \frac{v}{L}\tan(\delta) \\$$$$x = x + \dot{x}\Delta T \\y = y + \dot{y}\Delta T \\\theta = \theta + \dot{\theta}\Delta T \\$$ The generic Kalman model.
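The complete model above can be stepped forward in a few lines. Here is a sketch in Python (the answer's snippets are Matlab, but the kinematics are language-agnostic); every parameter value below — $L$, $c$, $\tau$, $\Delta T$ — is an illustrative placeholder, not a value from the answer.

```python
import math

def step(state, throttle, delta, L=0.3, c=2.0, tau=0.5, dT=0.01):
    """One Euler step of the kinematic model with a first-order
    throttle-to-speed response. All parameter values are placeholders."""
    x, y, theta, v = state
    v_dot = (c * throttle - v) / tau           # first-order speed regulator
    v = v + v_dot * dT
    x_dot = v * math.cos(math.pi / 2 - theta)  # heading measured CW from +y
    y_dot = v * math.sin(math.pi / 2 - theta)
    theta_dot = (v / L) * math.tan(delta)
    return (x + x_dot * dT, y + y_dot * dT, theta + theta_dot * dT, v)

# Drive straight ahead (zero steering angle) at full throttle for 5 s;
# the speed settles near c * throttle and the vehicle moves along +y only.
state = (0.0, 0.0, 0.0, 0.0)
for _ in range(500):
    state = step(state, throttle=1.0, delta=0.0)
```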
You're using the extended Kalman filter which, unlike the regular ("classic"?) Kalman filter, doesn't require a linear system. This is great, because the system model is right above. Your states are position, speed, and yaw angle. So you do your predict steps: Predict the state estimate: $$\hat{x}_{t|t-1} = f\left(\hat{x}_{t-1} , u_{t-1}\right) \\$$ Here $\hat{x}_{t|t-1}$ is your prediction of the state vector $x$, which is what the hat $\hat{ }$ means, for the current time step given the state vector at the previous time step, which is what the ${t|t-1}$ means. Predict the error covariance estimate: $$P_{t|t-1} = F_k P_{t-1} F_k^T + Q_k \\$$ Here $F_k$ is the Jacobian $\left.\frac{\partial f}{\partial x}\right|_{\hat{x}_{t-1},u_{t-1}}$, $P$ is the error covariance (how much you "trust" the state estimate; smaller = more faith), and $Q_k$ is the process noise matrix, which is typically a diagonal matrix of small numbers. How do you measure process noise? There's a bunch of papers on the subject and you should check them out. Try putting 0.1 on the diagonal and see how that works. Generally the process noise is how you tune the filter. Again, the $t$ and $t|t-1$ mean the same as they did (current and current given the past). Then you do the update steps: Here's the tricky-ish thing for you. You have multiple sensors - a GPS and an IMU. What to do? If you had independent sensors, the answer would be update, then update again.
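The predict step can be sketched as follows (my own illustration: the Jacobian $F_k$ is computed numerically by central differences rather than derived by hand, and all numbers are toy values, not tuned ones).

```python
import numpy as np

def f(x, u, dT=0.1, L=0.3, c=2.0, tau=0.5):
    """Discrete-time transition for the state [x, y, yaw, speed],
    following the model in the answer; parameter values are toy numbers."""
    px, py, theta, v = x
    delta, throttle = u
    v = v + (c * throttle - v) / tau * dT      # first-order speed response
    return np.array([
        px + v * np.cos(np.pi / 2 - theta) * dT,
        py + v * np.sin(np.pi / 2 - theta) * dT,
        theta + (v / L) * np.tan(delta) * dT,
        v,
    ])

def numerical_jacobian(f, x, u, eps=1e-6):
    """F_k = df/dx evaluated at (x, u), by central differences."""
    n = len(x)
    F = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        F[:, i] = (f(x + dx, u) - f(x - dx, u)) / (2 * eps)
    return F

x = np.array([0.0, 0.0, 0.1, 1.0])   # [x, y, yaw, speed]
u = (0.05, 1.0)                      # (steering angle, throttle)
P = np.eye(4) * 0.1
Q = np.eye(4) * 0.1                  # process noise: the tuning knob

x_pred = f(x, u)                     # predicted state estimate
F = numerical_jacobian(f, x, u)
P_pred = F @ P @ F.T + Q             # predicted error covariance
```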
For example, if you were using the IMU only for speed and yaw angle, and you were using the GPS only for position, then you would have the following residuals/innovations: $$\tilde{y}_{\mbox{IMU}} = \left[\begin{matrix}{}\mbox{speed measurement} \\\mbox{yaw angle measurement}\end{matrix} \right] - \left[\begin{matrix}{}0 & 1 & 0 \\0 & 0 & 1\end{matrix} \right] \left[\begin{matrix}{}\mbox{position state prediction} \\\mbox{speed state prediction} \\\mbox{yaw angle state prediction}\end{matrix} \right]$$ $$\tilde{y}_{\mbox{GPS}} = \left[\begin{matrix}{}\mbox{position measurement} \end{matrix} \right] - \left[\begin{matrix}{}1 & 0 & 0 \end{matrix} \right] \left[\begin{matrix}{}\mbox{position state prediction} \\\mbox{speed state prediction} \\\mbox{yaw angle state prediction}\end{matrix} \right]$$ You can then go on calculating the residual covariances and Kalman gains without worry because those measurements are independent and don't "overwrite" each other. You're not so lucky; you have to deal with the fact that you've got two redundant measurements - IMU position and GPS position. The common way to handle the redundancy is to average the sensors before you send them to the filter. This works fine for two identical sensors, but that's not what you have. A better way might be to weight the measurements by their covariances such that more "trustworthy" sensors get a bigger weight. All that said, the two don't have to conflict; the IMU technically outputs an acceleration. If you just integrate one time to get speed and leave it at that, then suddenly the IMU and the GPS don't conflict and you can leave their measurements separate and not have to worry about conflict. This is the route I would take. If you find it unsatisfactory you can try integrating the IMU output again and blending it with the GPS data, but I don't think that's going to make things any better. Hopefully I've made things much more clear. 
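The two independent updates described above can be sketched like this (Python; the $H$ matrices are the ones from the residual equations, with a scalar position state, and the covariance numbers are toy values, not tuned ones).

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One measurement update: residual, residual covariance, gain."""
    y = z - H @ x                    # residual / innovation
    S = H @ P @ H.T + R              # residual covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# State: [position, speed, yaw]; toy initial values
x = np.array([0.0, 1.0, 0.1])
P = np.eye(3)

H_imu = np.array([[0.0, 1.0, 0.0],    # IMU measures speed and yaw
                  [0.0, 0.0, 1.0]])
R_imu = np.diag([0.05, 0.01])
H_gps = np.array([[1.0, 0.0, 0.0]])   # GPS measures position only
R_gps = np.diag([4.0])                # GPS fix is comparatively noisy

# Sequential updates: the measurements don't "overwrite" each other
x, P = kalman_update(x, P, np.array([1.2, 0.15]), H_imu, R_imu)
x, P = kalman_update(x, P, np.array([0.5]), H_gps, R_gps)
```

After both updates every diagonal entry of $P$ has shrunk, i.e. each independent sensor adds confidence to the estimate.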
A final note, you'll probably find also that your GPS and IMU don't update at the same rate. What to do then? There's a lot of papers about that, but they basically boil down to downsampling the higher frequency data or faking the low frequency data. Downsampling can be done dumb (discarding data) or smart (filtering data), just like faking the data can be done dumb (repeating the same value) or smart (some manner of extrapolating a projected sensor reading).
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...

Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...

Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...

Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...

J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP). Such an exotic state of strongly interacting ...

K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...

Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...

Charged-particle multiplicities in proton-proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...

Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
D-meson nuclear modification factor and elliptic flow measurements in Pb–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE at the LHC (Elsevier, 2017-11) ALICE measured the nuclear modification factor ($R_{AA}$) and elliptic flow ($v_{2}$) of D mesons ($D^{0}$, $D^{+}$, $D^{*+}$ and $D_{s}^{+}$) in semi-central Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV. The increased ...

ALICE measurement of the $J/\psi$ nuclear modification factor at mid-rapidity in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV (Elsevier, 2017-11) ALICE at the LHC provides unique capabilities to study charmonium production at low transverse momenta ($p_{\rm T}$). At central rapidity ($|y|<0.8$), ALICE can reconstruct J/$\psi$ via their decay into two electrons down to zero ...
Measurement of azimuthal correlations of D mesons and charged particles in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-04) The azimuthal correlations of D mesons and charged particles were measured with the ALICE detector in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV at the Large Hadron Collider. ...

Production of $\pi^0$ and $\eta$ mesons up to high transverse momentum in pp collisions at 2.76 TeV (Springer, 2017-05) The invariant differential cross sections for inclusive $\pi^{0}$ and $\eta$ mesons at midrapidity were measured in pp collisions at $\sqrt{s}=2.76$ TeV for transverse momenta $0.4<p_{\rm T}<40$ GeV/$c$ and $0.6<p_{\rm T}$ ...

Measurement of the production of high-$p_{\rm T}$ electrons from heavy-flavour hadron decays in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Elsevier, 2017-08) Electrons from heavy-flavour hadron decays (charm and beauty) were measured with the ALICE detector in Pb–Pb collisions at a centre-of-mass energy of $\sqrt{s_{\rm NN}}=2.76$ TeV. The transverse momentum ($p_{\rm T}$) ...

Flow dominance and factorization of transverse momentum correlations in Pb-Pb collisions at the LHC (American Physical Society, 2017-04) We present the first measurement of the two-particle transverse momentum differential correlation function, $P_2\equiv\langle \Delta p_{\rm T} \Delta p_{\rm T} \rangle /\langle p_{\rm T} \rangle^2$, in Pb-Pb collisions at ...

$\phi$-meson production at forward rapidity in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV and in pp collisions at $\sqrt{s}$ = 2.76 TeV (Elsevier, 2017-03) The first measurement of $\phi$-meson production in p-Pb collisions at a nucleon-nucleon centre-of-mass energy $\sqrt{s_{\rm NN}}$ = 5.02 TeV has been performed with the ALICE apparatus at the LHC. $\phi$ mesons have been ...
J/$\psi$ suppression at forward rapidity in Pb–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2017-03) The inclusive J/$\psi$ production has been studied in Pb–Pb and pp collisions at the centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}} = 5.02$ TeV, using the ALICE detector at the CERN LHC. The J/$\psi$ meson is reconstructed, ...

Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Elsevier, 2017-09) We present the charged-particle pseudorapidity density in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV in centrality classes measured by ALICE. The measurement covers a wide pseudorapidity ...

Azimuthally differential pion femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-06) We present the first azimuthally differential measurements of the pion source size relative to the second harmonic event plane in Pb-Pb collisions at a center-of-mass energy per nucleon-nucleon pair of $\sqrt{s_{\rm NN}}$ ...

Measurement of electrons from beauty-hadron decays in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV and Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (Springer, 2017-07) The production of beauty hadrons was measured via semi-leptonic decays at mid-rapidity with the ALICE detector at the LHC in the transverse momentum interval $1<p_{\rm T}<8$ GeV/$c$ in minimum-bias p–Pb collisions at ...

Linear and non-linear flow modes in Pb-Pb collisions at $\sqrt{s_{\rm NN}} =$ 2.76 TeV (Elsevier, 2017-10) The second and the third order anisotropic flow, $V_{2}$ and $V_3$, are mostly determined by the corresponding initial spatial anisotropy coefficients, $\varepsilon_{2}$ and $\varepsilon_{3}$, in the initial density ...
Although an open organ pipe is open at both ends, how are standing waves produced in it? Can someone explain with more clarity?

This is an attempt to explain, in a purely intuitive way, why sound waves reflect from the end of an open pipe and can therefore produce a standing wave. Consider a pressure wave travelling up the pipe. I've drawn just a single maximum of the pressure wave to keep the diagram uncluttered: Call the pressure maximum $P_1$ (I haven't marked $P_1$ on the diagram because it doesn't show up against the dots and I can't change the text colour in Google Draw) and the pressure minimum $P_0$. The pressure wave moves because air flows away from the high pressure $P_1$ and towards the low pressure $P_0$. However, because the air is in a tube it can only flow along the tube (green arrows); it can't flow sideways (red arrows) because the tube is in the way. This restriction to the flow will limit the pressure difference between $P_1$ and $P_0$. Now consider what happens when the pressure wave reaches the end of the tube: Now the walls of the tube are no longer restricting the flow, so the air can flow away from the pressure maximum in all directions. The result is that the flow away from the maximum will be greater than it was in the tube, and the pressure will fall to a lower value than the minimum pressure in the tube: But now the air will start flowing back into the low pressure area, and because the pressure there is less than $P_0$, the rebound will produce a pressure greater than $P_1$: And this is the last step in our argument. The next wave travelling towards the end of the tube at a pressure $P_1$ meets a region just outside the tube with a higher pressure $P > P_1$, and the result is a reflected pressure wave back down the tube. And that's how a pressure wave reflects from the end of an open tube.
You may find, by starting from first principles or by consulting external resources, that pressure waves in air (in one dimension) are governed by the wave equation $$\frac{\partial^2 p}{\partial x^2} - \frac{1}{v^2} \frac{\partial^2 p}{\partial t^2} = 0$$ where $x$ is a position, $t$ is the time, and $p$ denotes the pressure difference away from equilibrium. To solve this, let us write the solution as a Fourier transform$^{[a]}$ $$p(x,t) = \int_k \int_\omega \tilde{p}(k,\omega) \cos(kx - \omega t) \frac{d\omega}{2\pi}\frac{dk}{2\pi} \, .$$ Plugging this into the wave equation, we find $$(-k^2 + \omega^2 / v^2) \tilde{p}(k,\omega) = 0 \, .$$ For each value of $k$ and $\omega$ one of two things must be true: either $\tilde{p}(k,\omega)=0$ or $\omega^2/v^2 - k^2 = 0$. The case where $\tilde{p}(k,\omega)=0$ is trivial: there is no wave. In the other case, for a particular choice of $k$ and $\omega$ we have a wave with the specific form $$p(x, t) = M \cos(kx \mp \omega t) = M\cos(k(x \mp vt)) = M\left[\cos(kx)\cos(\omega t) \pm \sin(kx)\sin(\omega t)\right]\, .$$ Since the wave equation is linear, these solutions can be summed together to produce other solutions. Furthermore, any solution can be written as a sum of these types of solutions. Now consider an organ pipe which we stimulate at frequency $\omega$. In this case, the linear combination can be made up of only the following two parts: $$ \begin{align} p(x,t) &= p_{\text{left}}(x,t) + p_{\text{right}}(x,t) \\ \text{where} \qquad p_{\text{left}}(x,t) &= M_{\text{left}} \cos(kx + \omega t) \\ \text{and} \qquad p_{\text{right}}(x,t) &= M_{\text{right}} \cos(kx - \omega t) \\ \text{so} \qquad p(x,t) &= (M_{\text{left}} + M_{\text{right}})\cos(kx)\cos(\omega t) \\ &+ (M_{\text{right}} - M_{\text{left}}) \sin(kx)\sin(\omega t)\, .
\end{align} $$ Suppose the pipe extends from $x=-L/2$ to $x=L/2$. The two ends are open to the outside air, so at those points $p \approx 0$. These are our boundary conditions. From the form of $p(x,t)$ you can convince yourself that the only possible solution is when $M_{\text{left}} = M_{\text{right}}$ and $k=\pi/L$.$^{[b]}$ Putting that in gives $$p(x,t) \propto \cos(\pi x / L) \cos(\omega t)\, .$$ Stop and consider the meaning of this equation for a minute. It is a vibration of air pressure inside the pipe with a cosine spatial profile and also a cosine oscillation in time. In other words, the pipe, even with its open ends, can sustain a vibration at the frequency $\omega$. Note that the pressure difference at the ends of the pipe is zero by construction (i.e. our choice of $k$), yet there are still vibration modes!

$[a]$ I dropped a phase here which, if you put it in and follow everything through, doesn't change the argument.

$[b]$ Actually there are more possible values of $k$. They correspond to higher modes of the vibrating air wave. See if you can figure out what I mean :)

Simply: the ends of an organ pipe reflect the wave in the pipe. A closed end will cause a pressure antinode while an open end causes a pressure node. This information is sufficient to determine the vibrational modes of any pipe. If both ends are of the same type (open or closed), the wavelengths of the allowed modes are given by $\lambda = 2\ell/n$; if the two ends are different, $\lambda = 4\ell/(2n+1)$.

UPDATE Following a suggestion by John Rennie, it is worth trying to give an intuitive explanation of the "reflection at an open end" - since it would seem there is "nothing" there to cause a reflection. Here is my way to understand it: A sound wave normally expands in all directions (think Huygens' principle), but when it is traveling along an organ pipe it is restricted to travel in one direction only - namely along the pipe.
When it reaches the end of the pipe, that restriction is suddenly gone - and that is the reason a partial reflection can happen at that point. Not all the wave can keep traveling forward, since the air just outside the pipe "feels different" than the air inside - this is called the acoustic impedance, and it is not unlike the impedance of an electrical transmission line or the refractive index of an optical medium. Acoustic impedance relates the pressure and amplitude of the wave - and without the walls that changes, as the pressure will be lower for the same amplitude. For all waves, a discontinuity in the impedance results in (partial) reflection.
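The node/antinode rules pin down the allowed wavelengths, which is easy to tabulate in a short sketch (the speed of sound and pipe length below are assumed illustrative values, not from the answers above):

```python
def same_ends_wavelengths(length, n_max=4):
    # Both ends the same type (open-open or closed-closed): length = n * lambda / 2
    return [2 * length / n for n in range(1, n_max + 1)]

def mixed_ends_wavelengths(length, n_max=4):
    # One open and one closed end: length = (2n + 1) * lambda / 4
    return [4 * length / (2 * n + 1) for n in range(n_max)]

v = 343.0   # speed of sound in air, m/s (approximate)
L = 0.5     # pipe length in metres (illustrative)

f_open_open = [v / lam for lam in same_ends_wavelengths(L)]
f_open_closed = [v / lam for lam in mixed_ends_wavelengths(L)]
# An open-closed pipe sounds an octave below an open-open pipe of the
# same length and supports only the odd harmonics.
```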
Mathematics - Functional Analysis and Mathematics - Metric Geometry

Abstract The following strengthening of the Elton-Odell theorem on the existence of $(1+\epsilon)$-separated sequences in the unit sphere $S_X$ of an infinite dimensional Banach space $X$ is proved: There exists an infinite subset $S\subseteq S_X$ and a constant $d>1$, satisfying the property that for every $x,y\in S$ with $x\neq y$ there exists $f\in B_{X^*}$ such that $d\leq f(x)-f(y)$ and $f(y)\leq f(z)\leq f(x)$, for all $z\in S$. Comment: 15 pages, to appear in Bulletin of the Hellenic Mathematical Society

Given a finite dimensional Banach space $X$ with $\dim X = n$ and an Auerbach basis of $X$, it is proved that there exists a set $D$ of $n + 1$ linear combinations (with coordinates $0, -1, +1$) of the members of the basis, so that each pair of distinct elements of $D$ has distance greater than one. Comment: 15 pages. To appear in MATHEMATIKA
Questions: Is the AUC (area under the ROC curve) a type of "empirical Bayes estimator"? If we take the parameter space to be $\Theta = [0,1]$ and the prior $\Lambda$ to be Lebesgue measure, then the Bayes estimator is just the estimator whose risk function's curve has the smallest area. So if there is any way to identify ROC curves with the curves of some "empirical risk function", then it seems like there might be some sort of connection.

Definitions: Here are the definitions from my course: A statistical model is a family of candidate probability distributions $$\mathcal{P} = \{ P_{\theta}: \theta \in \Theta \} $$ for some observed data $X \sim P_{\theta}$. The Bayes estimator $\delta$ with respect to the prior $\Lambda$ on the parameter space $\Theta$ minimizes the average-case risk over the measure $\Lambda$ on $\Theta$: $$\min_{\delta}\int_{\Theta} R(\theta, \delta)\ \mathrm{d}\Lambda(\theta) \,. $$
Under the auspices of the Computational Complexity Foundation (CCF) Cryptographic hash functions are efficiently computable functions that shrink a long input into a shorter output while achieving some of the useful security properties of a random function. The most common type of such hash functions is {\em collision resistant} hash functions (CRH), which prevent an efficient attacker from finding ... more >>> We say that a function $f\colon \Sigma^n \to \{0, 1\}$ is $\epsilon$-fooled by $k$-wise indistinguishability if $f$ cannot distinguish with advantage $\epsilon$ between any two distributions $\mu$ and $\nu$ over $\Sigma^n$ whose projections to any $k$ symbols are identical. We study the class of functions $f$ that are fooled by ... more >>> Several well-known public key encryption schemes, including those of Alekhnovich (FOCS 2003), Regev (STOC 2005), and Gentry, Peikert and Vaikuntanathan (STOC 2008), rely on the conjectured intractability of inverting noisy linear encodings. These schemes are limited in that they either require the underlying field to grow with the security parameter, ... more >>> A one-way function is $d$-local if each of its outputs depends on at most $d$ input bits. In (Applebaum, Ishai, and Kushilevitz, FOCS 2004) it was shown that, under relatively mild assumptions, there exist $4$-local one-way functions (OWFs). This result is not far from optimal as it is not hard ... more >>> Let $G:\{0,1\}^n\to\{0,1\}^m$ be a pseudorandom generator. We say that a circuit implementation of $G$ is $(k,q)$-robust if for every set $S$ of at most $k$ wires anywhere in the circuit, there is a set $T$ of at most $q|S|$ outputs, such that conditioned on the values of $S$ and $T$ ... more >>> We put forward a new approach for the design of efficient multiparty protocols: 1. Design a protocol for a small number of parties (say, 3 or 4) which achieves security against a single corrupted party. 
Such protocols are typically easy to construct as they may employ techniques that do not ... more >>> Yao's garbled circuit construction transforms a boolean circuit $C:\{0,1\}^n\to\{0,1\}^m$ into a ``garbled circuit'' $\hat{C}$ along with $n$ pairs of $k$-bit keys, one for each input bit, such that $\hat{C}$ together with the $n$ keys corresponding to an input $x$ reveal $C(x)$ and no additional information about $x$. The garbled circuit ... more >>> Motivated by the question of basing cryptographic protocols on stateless tamper-proof hardware tokens, we revisit the question of unconditional two-prover zero-knowledge proofs for $NP$. We show that such protocols exist in the {\em interactive PCP} model of Kalai and Raz (ICALP '08), where one of the provers is replaced by ... more >>> A Private Information Retrieval (PIR) protocol enables a user to retrieve a data item from a database while hiding the identity of the item being retrieved. In a $t$-private, $k$-server PIR protocol the database is replicated among $k$ servers, and the user's privacy is protected from any collusion of up ... more >>>
Defining parameters

Level: \( N \) = \( 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 3600.ex (of order \(30\) and degree \(8\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 1800 \)
Character field: \(\Q(\zeta_{30})\)
Newforms: \( 0 \)
Sturm bound: \(720\)
Trace bound: \(0\)

Dimensions

The following table gives the dimensions of various subspaces of \(M_{1}(3600, [\chi])\).

                    Total   New   Old
Modular forms          64     0    64
Cusp forms              0     0     0
Eisenstein series      64     0    64

The following table gives the dimensions of subspaces with specified projective image type.

            \(D_n\)   \(A_4\)   \(S_4\)   \(A_5\)
Dimension        0         0         0         0
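As a cross-check (my own sketch, using the standard index formula for \(\Gamma_0(N)\), not code from this page), the Sturm bound in the table follows from \(\lfloor k\,[\mathrm{SL}_2(\mathbb{Z}):\Gamma_0(N)]/12\rfloor\) with index \(N\prod_{p\mid N}(1+1/p)\):

```python
from math import prod

def gamma0_index(N):
    """Index of Gamma_0(N) in SL_2(Z): N * prod_{p | N} (1 + 1/p)."""
    ps = [p for p in range(2, N + 1)
          if N % p == 0 and all(p % q for q in range(2, int(p**0.5) + 1))]
    return N * prod(p + 1 for p in ps) // prod(ps)

def sturm_bound(N, k):
    # Sturm bound for modular forms of weight k and level N
    return k * gamma0_index(N) // 12

# Level 3600 = 2^4 * 3^2 * 5^2, weight 1: index 8640, bound 720 as in the table.
```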