Journal of Symbolic Logic, Volume 60, Issue 4 (1995), 1208-1241. Minimal Realizability of Intuitionistic Arithmetic and Elementary Analysis. Abstract
A new method of "minimal" realizability is proposed and applied to show that the definable functions of Heyting arithmetic (HA)--functions $f$ such that HA $\vdash \forall x\exists!yA(x, y)\Rightarrow$ for all $m, A(m, f(m))$ is true, where $A(x, y)$ may be an arbitrary formula of $\mathscr{L}$(HA) with only $x, y$ free--are precisely the provably recursive functions of the classical Peano arithmetic (PA), i.e., the $< \varepsilon_0$-recursive functions. It is proved that, for prenex sentences provable in HA, Skolem functions may always be chosen to be $< \varepsilon_0$-recursive. The method is extended to intuitionistic finite-type arithmetic, $HA^\omega_0$, and elementary analysis. Generalized forms of Kreisel's characterization of the provably recursive functions of PA and of the no-counterexample-interpretation for PA are consequently derived.
Article information Source J. Symbolic Logic, Volume 60, Issue 4 (1995), 1208-1241. Dates First available in Project Euclid: 6 July 2007 Permanent link to this document https://projecteuclid.org/euclid.jsl/1183744873 Mathematical Reviews number (MathSciNet) MR1367206 Zentralblatt MATH identifier 0854.03054 JSTOR links.jstor.org Citation
Damnjanovic, Zlatan. Minimal Realizability of Intuitionistic Arithmetic and Elementary Analysis. J. Symbolic Logic 60 (1995), no. 4, 1208--1241. https://projecteuclid.org/euclid.jsl/1183744873
Introduction
Built at the Jet Propulsion Laboratory by an Investigation Definition Team (IDT) headed by John Trauger, WFPC2 was the replacement for the first Wide Field and Planetary Camera (WF/PC-1) and includes built-in corrections for the spherical aberration of the HST Optical Telescope Assembly (OTA). The WFPC2 was installed in HST during the First Servicing Mission in December 1993.
Early IDT report of the WFPC2 on-orbit performance: Trauger et al. (1994, ApJ, 435, L3) A more detailed assessment of its capabilities: Holtzman et al. (1995, PASP, 107, page 156 and page 1065).
The WFPC2 was used to obtain high resolution images of astronomical objects over a relatively wide field of view and a broad range of wavelengths (1150 to 11,000 Å). WFPC2 was installed during the first HST Servicing Mission in 1993 and removed during Servicing Mission 4 in 2009. WFPC2 data can be found on the MAST Archive.
WFPC2 Instrument Science Reports (ISRs) include: 2010-04, The Dependence of WFPC2 Charge Transfer Efficiency on Background Illumination; 2010-01, WFPC2 Standard Star CTE Optical Configuration.
While it was in operation, the WFPC2 field of view was located at the center of the HST focal plane. The central portion of the f/24 beam coming from the OTA would be intercepted by a steerable pick-off mirror attached to the WFPC2 and diverted through an open port entry into the instrument. The beam would then pass through a shutter and interposable filters. An assembly of 12 filter wheels contained a total of 48 spectral elements and polarizers. The light would then fall onto a shallow-angle, four-faceted pyramid, located at the aberrated OTA focus. Each face of the pyramid was a concave spherical surface, dividing the OTA image of the sky into four parts. After leaving the pyramid, each quarter of the full field of view would then be relayed by an optically flat mirror to a Cassegrain relay that would form a second field image on a charge-coupled device (CCD) of 800 x 800 pixels. Each of these four detectors was housed in a cell sealed by a MgF2 window, which was figured to serve as a field flattener.
The aberrated HST wavefront was corrected by introducing an equal but opposite error in each of the four Cassegrain relays. An image of the HST primary mirror would then be formed on the secondary mirrors in the Cassegrain relays. The spherical aberration from the telescope's primary mirror would be corrected on these secondary mirrors, which were extremely aspheric; the resulting point spread function was quite close to that originally expected for WF/PC-1.
Field of View
The U2,U3 axes were defined by the "nominal" Optical Telescope Assembly (OTA) axis, which was near the center of the WFPC2 FOV. The readout direction was marked with an arrow near the start of the first row in each CCD; note that it rotated 90 degrees between successive chips. The x,y arrows mark the coordinate axes for any POS TARG commands that may have been specified in the proposal. POS TARG, an optional special requirement in HST observing proposals, places the target at the specified offset (in arcsec) from the aperture.
Camera Configurations
| Camera | Pixels | Field of View | Scale | f/ratio |
|---|---|---|---|---|
| PC (PC1) | 800 x 800 | 36" x 36" | 0.0455" per pixel | 28.3 |
| WF2, 3, 4 | 800 x 800 | 80" x 80" | 0.0996" per pixel | 12.9 |
A Note about HST File Formats
Data from WFPC2 are made available to observers as files in Multi-Extension FITS (MEF) format, which is directly readable by most PyRAF/IRAF/STSDAS tasks. All WFPC2 data are now available in either waivered FITS or MEF formats. The user may specify either format when retrieving that data from the HDA. WFPC2 data, in either Generic Edited Information Set (GEIS) or MEF formats, can be fully processed with STSDAS tasks.
The figure below provides a physical representation of the typical data format.
Resources Charge Traps
There are about 30 pixels in WFPC2 that are "charge traps" which do not transfer charge efficiently during readout, producing artifacts that are often quite noticeable. Typically, charge is delayed into successive pixels, producing a streak above the defective pixel. In the worst cases, the entire column above the pixel can be rendered useless. On blank sky, these traps will tend to produce a dark streak. However, when a bright object or cosmic ray is read through them, a bright streak will be produced. Here, we show streaks (a) in the background sky, and (b) stellar images produced by charge traps in the WFPC2. Individual traps have been cataloged and their identifying numbers are shown.
Warm Pixels and Annealing
Decontaminations (anneals), during which the instrument was warmed up to about 22 °C for a period of six hours, were performed about once per month. These procedures were required in order to remove the UV-blocking contaminants that gradually build up on the CCD windows (thereby restoring the UV throughput), as well as to fix warm pixels. Examples of warm pixels are presented in the figure below.
Calibration
| Procedure | Estimated Accuracy | Notes |
|---|---|---|
| Bias subtraction | 0.1 DN rms | Unless bias jump is present |
| Dark subtraction | 0.1 DN/hr rms | Error larger for warm pixels; absolute error uncertain because of dark glow |
| Flat fielding | <1% rms large scale | Visible, near UV |
| | 0.3% rms small scale | Visible, near UV |
| | ~10% | F160BW; however, significant noise reduction achieved with use of correction flats |
Relative Photometry
| Procedure | Estimated Accuracy | Notes |
|---|---|---|
| Residuals in CTE correction | <3% for the majority (~90%) of cases; up to 10% for extreme cases (e.g., very low backgrounds) | |
| Long vs. short anomaly (uncorrected) | <5% | Magnitude errors <1% for well-exposed stars but may be larger for fainter stars. Some studies have failed to confirm the effect (see Chapter 5 of the IHB for more details). |
| Aperture correction | 4% rms focus dependence (1 pixel aperture) | Can (should) be determined from data |
| | <1% focus dependence (>5 pixel aperture) | Can (should) be determined from data |
| | 1-2% field dependence (1 pixel aperture) | Can (should) be determined from data |
| Contamination correction | 3% rms max (28 days after decon) | F160BW |
| | 1% rms max (28 days after decon) | Filters bluer than F555W |
| Background determination | 0.1 DN/pixel (background > 10 DN/pixel) | May be difficult to exceed, regardless of image S/N |
| Pixel centering | <1% | |
Absolute Photometry
| Procedure | Estimated Accuracy |
|---|---|
| Sensitivity | <2% rms for standard photometric filters |
| | 2% rms for broad and intermediate filters in visible |
| | <5% rms for narrow-band filters in visible |
| | 2-8% rms for UV filters |
Astrometry
| Procedure | Estimated Accuracy | Notes |
|---|---|---|
| Relative | 0.005" rms (after geometric and 34th-row corrections) | Same chip |
| | 0.1" (estimated) | Across chips |
| Absolute | 1" rms (estimated) | |
Photometric Systems Used for WFPC2 Data
The WFPC2 flight system is defined so that stars of color zero in the Johnson-Cousins UBVRI system have color zero between any pair of WFPC2 filters and have the same magnitude in V and F555W. This system was established by Holtzman et al. (1995b)
The zeropoints in the WFPC2 synthetic system, as defined in Holtzman et al. (1995b), are determined so that the magnitude of Vega, when observed through the appropriate WFPC2 filter, would be identical to the magnitude Vega has in the closest equivalent filter in the Johnson-Cousins system.
\(m_{AB} = -48.60-2.5\log f_\nu \)
\(m_{ST} = -21.10-2.5\log f_\lambda\)
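These two systems are straightforward to evaluate numerically; a minimal sketch (the function names and the example flux value are illustrative only, not from the WFPC2 handbook):

```python
import numpy as np

def ab_mag(f_nu):
    """AB magnitude from flux density f_nu in erg s^-1 cm^-2 Hz^-1."""
    return -48.60 - 2.5 * np.log10(f_nu)

def st_mag(f_lambda):
    """ST magnitude from flux density f_lambda in erg s^-1 cm^-2 A^-1."""
    return -21.10 - 2.5 * np.log10(f_lambda)

# A source with f_nu = 3.63e-20 erg s^-1 cm^-2 Hz^-1 (about 3631 Jy) has m_AB ~ 0
print(ab_mag(3.63e-20))
```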
Photometric Corrections
A number of corrections must be made to WFPC2 data to obtain the best possible photometry. Some of these, such as the corrections for UV throughput variability, are time dependent, and others, such as the correction for the geometric distortion of WFPC2 optics, are position dependent. Finally, some general corrections, such as the aperture correction, are needed as part of the analysis process. Here we provide examples of factors affecting photometric corrections.
Cool Down on April 23, 1994
PSF Variations
34th Row Defect
Gain Variation
Pixel Centering
Possible Variation in Methane Quad Filter Transmission
Polarimetry
WFPC2 has a polarizer filter which can be used for wide-field polarimetric imaging from about 200 through 700 nm. This filter is a quad, meaning that it consists of four panes, each with the polarization angle oriented in a different direction, in steps of 45°. The panes are aligned with the edges of the pyramid, so each pane corresponds to a chip. However, because the filters are at some distance from the focal plane, there is significant vignetting and cross-talk at the edges of each chip. The area free from vignetting and cross-talk is about 60" square in each WF chip, and 15" square in the PC. It is also possible to use the polarizer in a partially rotated position.
Accurate calibration of WFPC2 polarimetric data is rather complex, due to the design of both the polarizer filter and the instrument itself. WFPC2 has an aluminized pick-off mirror with a 47° angle of incidence, which rotates the polarization angle of the incoming light, as well as introducing a spurious polarization of up to 5%. Thus, both the HST roll angle and the polarization angle must be taken into account. In addition, the polarizer coating on the filter has significant transmission of the perpendicular component, with a strong wavelength dependence.
Astrometry
Astrometry with WFPC2 means primarily relative astrometry. The high angular resolution and sensitivity of WFPC2 make it possible, in principle, to measure precise positions of faint features with respect to other reference points in the WFPC2 field of view. On the other hand, the absolute astrometry that can be obtained from WFPC2 images is limited by the positions of the guide stars, usually known to about 0.5" rms in each coordinate, and by the transformation between the FGS and the WFPC2, which introduces errors of order 0.1".
Because WFPC2 consists of four physically separate detectors, it is necessary to define a coordinate system that includes all four detectors. For convenience, sky coordinates (right ascension and declination) are often used; in this case, they must be computed and carried to a precision of a few mas, in order to maintain the precision with which the relative positions and scales of the WFPC2 detectors are known. It is important to remember that the coordinates are not known with this accuracy. The absolute accuracy of the positions obtained from WFPC2 images is typically 0.5" rms in each coordinate and is limited primarily by the accuracy of the guide star positions.
The reason you can't prove your proposed algorithm correct is that it actually is not a correct algorithm for this problem. If you try running it on a small example of a network flow graph with more than one min cut, you'll see immediately what goes wrong. In particular, this algorithm fails on every flow graph that contains more than one min cut, so the problem is not at all subtle.
Perhaps it would be helpful for you to recall properties of the min-cut that is produced by applying the min-cut/max-flow theorem to the flow output by a max-flow algorithm. In particular, define $S$ to be the set of vertices reachable from $s$ along some path in the residual graph, and $T$ to be the set of vertices that can reach $t$ along some path in the residual graph. Then $S \cap T = \emptyset$ and $S \cup T = V$, and $(S,T)$ is a $(s,t)$-cut. In particular, $(S,T)$ is the cut that is selected by the min-cut/max-flow theorem (for this flow). Good.
Now notice that $T$ is exactly the set of vertices that are reachable from $t$ in the reverse of the residual graph. Therefore, your proposed algorithm amounts to finding a max-flow, computing the sets $S$ and $T$, then checking whether $S$ and $T$ are the same cuts. But the only way to interpret $S$ as a cut is as the cut $(S,V\setminus S)$, and the only way to interpret $T$ as a cut is as the cut $(V\setminus T, T)$ -- and these are always exactly the same cut! In particular, $V \setminus S = T$ and $V \setminus T = S$, so both of these cuts will always be exactly the same cut -- even if the flow graph admits multiple different cuts, your procedure will only find one of them.
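To make the two reachability computations concrete, here is a minimal sketch (assuming the residual graph left over after a max-flow computation is given as a dict mapping directed edges to their remaining capacity; the names are illustrative):

```python
from collections import deque

def reachable(residual, start, reverse=False):
    """BFS over edges with positive residual capacity.

    reverse=False: vertices reachable *from* `start` (the set S when start = s).
    reverse=True:  vertices that can *reach* `start` (the set T when start = t).
    """
    seen = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for (a, b), cap in residual.items():   # O(V*E) scan; fine for a sketch
            if cap <= 0:
                continue
            v = b if (not reverse and a == u) else (a if (reverse and b == u) else None)
            if v is not None and v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

# residual: {(u, v): remaining capacity} after running any max-flow algorithm
# S = reachable(residual, 's')
# T = reachable(residual, 't', reverse=True)
```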
In short, your proposed lemma is wrong, and the method you suggest is not a correct algorithm for this problem. The good news is that it is possible to build a correct algorithm for this problem, within the running time that you specify; see my comments for more about how to do that -- but since you said in the question you don't want some other algorithm for this task, you just want to know if your proposed algorithm is correct, I won't try to elaborate in any further depth.
Indices Correction Model
Fitting of experimental transmittance data by model data before and after application of the indices correction model.
Refractive index offsets calculated in the course of the reverse engineering process.
The index drift model allows you to take into account possible drifts of the refractive indices of the layer materials:
\[ \delta n_j=C\exp(-\alpha j)\]
An exponential drift typically produces large deviations at the beginning of deposition that decrease rapidly with the layer number.
Read more in our papers:
Situation
Let $G$ be a finite group and provide $G\text{-mod} := {\mathbb Z}G\text{-mod}$ with the Frobenius structure of ${\mathbb Z}$-split short exact sequences. Denote by $\underline{G\text{-mod}}$ the associated stable category with loop functor $\Omega$.
For any Frobenius category $({\mathcal A},{\mathcal E})$ and a complete projective-injective resolution $P_{\bullet}$ of some $X\in{\mathcal A}$, we have for any $Y\in{\mathcal A}$ a canonical isomorphism of abelian groups
$H^n(\text{Hom}_{\mathcal A}(P_{\bullet},Y))\cong [\Omega^n X,Y]$,
where $[-,-] := \text{Hom}_{\underline{{\mathcal A}}}(-,-)$.
Applying this to $G\text{-mod}$ yields an isomorphism
$\widehat{H}^k(G;M)\cong [\Omega^k{\mathbb Z},M]$,
where $\widehat{H}^k(G;M)$ denotes the Tate-Cohomology of $G$ with values in $M$.
If I didn't mix things up, in this language Tate-Duality should mean that the canonical map
$[{\mathbb Z},\Omega^k{\mathbb Z}]\otimes_{\mathbb Z}[\Omega^k{\mathbb Z},{\mathbb Z}]\to[{\mathbb Z},{\mathbb Z}]\cong{\mathbb Z}/|G|{\mathbb Z}$
is a duality.
Question
I'd like to know sources which introduce and treat Tate cohomology in the way described above, i.e. using the language of Frobenius categories and its associated stable categories.
In particular, I would be interested in a proof of Tate Duality using this more abstract language instead of resolutions.
Does anybody know such sources?
Remark
It seems to be more difficult to work over the integers instead of some field, for in this case, the exact sequences in the Frobenius structure $G\text{-mod}$ are required to be ${\mathbb Z}$-split, which is not automatic. As a consequence, there may be projective/injective objects in $(G\text{-mod},{\mathcal E}^{G}_{\{e\}})$ which are not projective/injective as ${\mathbb Z}G$-modules. Further, the long exact cohomology sequence exists only for ${\mathbb Z}$-split exact sequences of $G$-modules (not good, because Brown uses the exact sequence $0\to {\mathbb Z}\to{\mathbb Q}\to{\mathbb Q}/{\mathbb Z}\to 0$ in his proof of Tate duality); of course, one can choose particular complete resolutions of ${\mathbb Z}$ consisting of ${\mathbb Z}G$-projective modules, and such a resolution yields a long exact cohomology sequence for any short exact sequence of coefficient modules, but this seems somewhat unnatural and doesn't fit into the picture right now.
Partial Results (1) For any subgroup $H\leq G$ there are restriction and corestriction morphisms
$[\Omega^k {\mathbb Z},-]^{\underline{G}}=\widehat{H}^*(G;-)\leftrightarrows\widehat{H}^*(H;-)=[\Omega^k{\mathbb Z},-]^{\underline{H}}$
defined as follows: for any $G$-module $M$, the abelian group $[{\mathbb Z},M]^{\underline{G}}$ is in canonical bijection with $M^G / |G| M^G$, and there are restriction and transfer maps
$\text{res}: M^G / |G| M^G\longrightarrow M^H / |H| M^H,\quad [m]\mapsto [m]$,
$\text{tr}: M^H / |H| M^H\longrightarrow M^G / |G| M^G\quad [m]\mapsto\left[\sum\limits_{g\in G/H} g.m\right]$,
respectively. Now
$[\Omega^k{\mathbb Z},M]^{\underline{G}}\cong [{\mathbb Z},\Omega^{-k}M]^{\underline{G}}\stackrel{\text{res}}{\longrightarrow} [{\mathbb Z},\Omega^{-k}M]^{\underline{H}}\cong[\Omega^k{\mathbb Z},M]^{\underline{H}}$
$[\Omega^k{\mathbb Z},M]^{\underline{H}}\cong [{\mathbb Z},\Omega^{-k}M]^{\underline{H}}\stackrel{\text{tr}}{\longrightarrow} [{\mathbb Z},\Omega^{-k}M]^{\underline{G}}\cong[\Omega^k{\mathbb Z},M]^{\underline{G}}$
seems to be the natural way to define restriction and transfer. (This is very similar to the usual method of giving a morphism of $\delta$-functors only in degree $0$ and extending it by dimension shifting, though a bit more elegant in my opinion.)
Note that it was implicitly used that $\Omega^k$ commutes with the forgetful functor $G\text{-mod}\to H\text{-mod}$
(2) For any subgroup $H\leq G$, $g\in G$ and a $G$-module $M$ there is a map
$g_*:\ \widehat{H}^*(H;M)\to\widehat{H}^*(gHg^{-1};M)$
extending the canonical map
$M^H/|H|M^H\longrightarrow M^{gHg^{-1}}/|H|M^{gHg^{-1}},\quad [m]\mapsto [g.m]$.
(1) and (2) fit together in the usual way; there is a transfer formula and a lifting criterion for elements of Sylow-subgroups.
(3) The cup product on $\widehat{H}^*(G;{\mathbb Z})$ is given simply by composition of maps:
$[\Omega^p{\mathbb Z},{\mathbb Z}]\otimes_{\mathbb Z}[\Omega^q{\mathbb Z},{\mathbb Z}]\stackrel{\Omega^q\otimes\text{id}}{\longrightarrow}[\Omega^{p+q}{\mathbb Z},\Omega^q{\mathbb Z}]\otimes_{\mathbb Z}[\Omega^q{\mathbb Z},{\mathbb Z}]\longrightarrow [\Omega^{p+q}{\mathbb Z},{\mathbb Z}]$
Does anybody see why this product is graded-commutative?
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.For continuous functions f and g, the cross-correlation is defined as:: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"?Alternatively, where could I go in order to have such a question answered?
@tpg2114 For reducing data point for calculating time correlation, you can run two exactly the simulation in parallel separated by the time lag dt. Then there is no need to store all snapshot and spatial points.
@DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
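Not a full answer, but here is roughly how I would index the output of scipy.signal.correlate in time (the synthetic series is only there to check the convention; subtracting the means first also tends to explain otherwise puzzling signs):

```python
import numpy as np
from scipy.signal import correlate

dt = 1e-9
x = np.random.randn(200)
y = np.roll(x, 5)                          # synthetic pair: y is x shifted by 5 samples

corr = correlate(x - x.mean(), y - y.mean(), mode="full")
lags = np.arange(-(len(y) - 1), len(x))    # mode="full" gives len(x) + len(y) - 1 lags
best_lag = lags[np.argmax(corr)]

print(best_lag, best_lag * dt)             # |best_lag| should be 5 samples (5 ns);
                                           # check the sign on this known case before
                                           # trusting which signal leads in the real data
```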
Related:Why don't we just ban homework altogether?Banning homework: vote and documentationWe're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is
Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equation, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$.If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get?(a) $A$(b) $C^{-1}A^{-1}BC^{-1}AC^2$(c) $B$(d) $C^2$(e) $C^{-1}BC$(f) $C$
Let $\calP_3$ be the vector space of all polynomials of degree $3$ or less.Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*}
(a) Find a basis $Q$ of the span $\Span(S)$ consisting of polynomials in $S$.
(b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$.
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be a vector space and $B$ be a basis for $V$.Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$.Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
(a) What is the dimension of $V$?
(b) What is the dimension of $\Span\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers.Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \R \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold).The time limit was 55 minutes.
This post is Part 3 and contains Problem 7, 8, and 9.Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9.Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold).The time limit was 55 minutes.
This post is Part 2 and contains Problem 4, 5, and 6.Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5.Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason.
Problem 6.Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
(Linear Algebra Midterm Exam 1, the Ohio State University)
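For Problem 6, a quick numerical cross-check of the intended answer (numpy is only for verification here, not what the exam asks for):

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [5.0, 3.0]])       # coefficient matrix of the system in Problem 6
b = np.array([1.0, 2.0])

A_inv = np.linalg.inv(A)         # det(A) = 3*3 - 2*5 = -1, so A_inv = [[-3, 2], [5, -3]]
x = A_inv @ b                    # x = A^{-1} b

print(A_inv)
print(x)                         # expected solution: x1 = 1, x2 = -1
```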
I'm trying to understand the definition of tensor product of two vector spaces. So far, I've read the one using free vector spaces and a quotient space (here), and I think I understand it well. However, I want to understand the other definitions I can find, and it seems that a very common way to define it is through the universal property (some category theory included, I suspect). Does anyone here know of a good treatment of this? I have no knowledge of category theory though, but would love to read some about it. I'm a second-year undergrad, so not too much of a high level would be nice.
Just some definitions, in case you're unfamiliar with them: Let $\hat{V}$ denote the vector space of linear functions from a vector space $V$ to the scalar field. Remember, a multilinear map is one of the form $V \times V \times \cdots \times V \to W$ (with $n$ copies of $V$), where $W$ is another vector space, such that if we fix $n-1$ of the arguments, the function becomes a linear function from $V$ to $W$ in the argument not fixed. A multilinear form is one in which $W=K$, the scalar field (you can replace $K$ with $\mathbb{R}$ or $\mathbb{C}$ if you like). For example, the inner product on $\mathbb{R}^n$ is a bilinear form on $\mathbb{R}^n$, since if we fix one argument, it becomes linear in the other. If we view an $n \times n$ matrix as a conglomeration of $n$ columns, then the determinant is an $n$-form.
Then $\hat{V} \otimes \hat{V}$ corresponds to the set of bilinear forms, and in general, a tensor product of multiple copies of $\hat{V}$ corresponds to the set of $n$-linear forms (i.e. multilinear forms with $n$ arguments). That is, there is a concrete description of tensor products of the dual space with itself, and many books which do not wish to develop the notion of tensor product will use this in place of tensor products. That is, all they must do is define a certain kind of map, and then the tensor product is just the set of maps of that kind. Then how do we explain the tensor product $V \otimes V$ (or more generally $V \otimes U$, where $U$ is another vector space)? We could note that $V$ is canonically isomorphic to its double dual, i.e. the dual space of $\hat{V}$, and then view $V \otimes V$ as the set of bilinear forms on $\hat{V}$. But there is a nicer way, and this uses the universal property.
A bilinear map $V \times V \to W$ corresponds to a linear map $V \otimes V \to W$. If $f(-,-)$ denotes the bilinear map, and $x,y \in V$, then our linear map sends $x \otimes y$ to $f(x,y)$. You could try to think of the tensor product as pairs of vectors, but the tensor product contains elements which are not $x \otimes y$ for some $x,y \in V$. We do have that $x_1 \otimes y_1 + x_2 \otimes y_2$ maps to $f(x_1,y_1)+f(x_2,y_2)$. In more generality, if $W$ and $U$ are two other vector spaces, linear maps $U \otimes V \to W$ correspond to bilinear maps $U \times V \to W$. Then what is an element of $U \otimes V$? It is a thing you stick into a bilinear map. This is the key idea which helped me understand tensor products. I repeat, an element of a tensor product is simply a thing you stick into a bilinear map. In general, elements of some universal construction defined by maps going out of a certain object have some description as "things you stick into some kind of map (or a collection of multiple maps)."
You might like Brian Conrad's handouts for a sophomore differential geometry course. Especially relevant are
Construction of tensor products and the two handouts after that one. They have some nice examples and a heavy emphasis on the universal property.
(I don't think this warranted more than a comment, but I can't post those yet.)
My view of the pedagogy, based on teaching this to second year undergraduates at Cambridge.
The tensor product of vector spaces is defined by generators and relations. Also generators and relations, as a way of defining anything, is a method depending on a universal property (to make much sense).
If you take these two parts one at a time, you have a chance of understanding what is happening. The generators and relations are just bilinearity spelled out. The remark about generators and relations as a mode of defining anything can be learned anywhere you like (e.g. group theory): the reason that there is a universal property is just "stuff", "abstract nonsense", "mathematical maturity" even.
I believe, quite strongly, that the eliding of the punctuation between the two sentences is a negative in teaching this material. (I really do not care if this spoils Mac Lane's or anyone else's view of category theory and its role: "universal property" is only a stepping stone there, not the ultimate goal.)
I'm pretty much in your spot. I think part of the way there is learning to think with universal properties. I recently found a really good book (Algebra: Chapter 0, link below) on 'basic' algebra using category theory to unify things. All the basic stuff like products, disjoint union, surjections and injections are treated rigorously and in great generality through their universal properties. If you already know your group and set theory reading through the first few chapters can be done quickly, and should get you in the right mode of thought. I'm doing this myself right now, and so far I recommend you do the same.
EDIT: A nice application of the tensor product can be found in the first few pages of 'Differential Forms in Topology', that is, if $\Omega^*$ is the algebra generated by the formal symbols $dx_j,j=1,\dots,n$ under the relations $dx^2=0$ and $dx_idx_j=-dx_jdx_i$, then $\Omega^*(U)=C^\infty(U)\otimes\Omega^*$ is the algebra of differential forms on the open set $U$ (under the wedge product). I'm not sure if that's how it's primarily used.
A fully categorical approach that emphasizes the universal properties of the tensor product, as well as a great deal of multilinear algebra, can be found in T.S. Blyth's Module Theory: An Approach to Linear Algebra. There's also a discussion in Steven Roman's Advanced Linear Algebra, but the presentation in Blyth's book isn't as dry and formal.
By the way, if anyone has a serious interest in algebra, Blyth's books are some of the great unsung textbooks in the subject. They really should be better known and used in the U.S. than they are.
Thank you all. Your documents have been most helpful. I also saw some papers on the tensor product of modules, especially: http://www.math.ucsb.edu/~mckernan/Teaching/05-06/Winter/220B/l_7.pdf was helpful, and http://www.dpmms.cam.ac.uk/~wtg10/tensors3.html gave some good info too.
Now I'm considering TeXing a file where I try to motivate why one defines the tensor product in the first place. I think that might help me learn the definition even more. I really like the definition in some strange way, even though I find it kind of hard. I want to learn.
So once again, Thank you.
Actes de la Journée de Recherche sur "les stratégies et la gouvernance des entreprises bancaires en Afrique"
André Tioumagneng
Additional contact information André Tioumagneng: Université de Yaoundé II
Post-Print from HAL
Abstract:
We consider a fluid described by a parameterized EoS of the general form $P=(\gamma-1)\rho+p_{0}+\omega_{H}H+\omega_{H2}H^{2}+\omega_{dH}\dot{H}$ \cite{Ren}, where $p_{0}$, $\omega_{H}$, $\omega_{H2}$ and $\omega_{dH}$ are free parameters of the model, interacting with a Tachyonic field with a relativistic Lagrangian $L_{TF}=-V(\phi)\sqrt{1-\partial_{i}\phi\partial^{i}\phi}$. The acceleration of the Universe described by a scale factor $a(t)=t^{n}, (n>1)$. Under consideration of different forms of interaction the field $\phi$ and the potential $V(\phi)$ are recovered and graphical analysis performed. For illustration purposes we fixed values of parameters of the models to provide $V \rightarrow 0$ for later stages of evolution, when $t \rightarrow \infty$.
Date: 2019-07-04
Note: View the original document on HAL open archive server: https://hal-auf.archives-ouvertes.fr/hal-02197473
References: View references in EconPapers View complete reference list from CitEc Citations: Track citations by RSS feed
Published in 2019
Downloads: (external link) https://hal-auf.archives-ouvertes.fr/hal-02197473/document (application/pdf)
Related works: This item may be available elsewhere in EconPapers: Search for items with the same title.
Export reference: BibTeX
RIS (EndNote, ProCite, RefMan)
HTML/Text
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-02197473
Access Statistics for this paper
More papers in Post-Print from HAL
Bibliographic data for series maintained by CCSD.
Main Reference: Zee (Quantum Field Theory in a Nutshell).
1) Global symmetry
A global symmetry means that the Lagrangian is invariant by a transformation whose parameters are constant.
For a continuous global symmetry, if the symmetry group of the Lagrangian is $G$, and if the symmetry group of the vacuum is $H$, a subgroup of $G$, you have $\dim G-\dim H$ Goldstone bosons.
For instance, take a complex scalar field $\phi$ with a Mexican hat potential, so that the total Lagrangian density is $L = \partial \phi^\dagger \partial \phi + \mu^2 \phi^\dagger \phi - \lambda (\phi^\dagger \phi)^2$.
The group symmetry is here $G=O(2)$
Define $\phi = \rho e^{i\theta}$
Breaking the symmetry means choosing for the vacuum the minima for the potential, and a particular angle, that is :
$\rho_V = v, \theta_V = \theta_0$
The group $H$ is trivial here.
Define :$\rho = v + \chi$, where $v = \sqrt{\frac{\mu^2}{2\lambda}}$
Expanding the Lagrangian, you get a term $v^2(\partial \theta)^2$, which is the dynamical part for a massless field $\theta$, so $\theta$ is our Goldstone boson (there is one because $\dim G - \dim H = 1 - 0 = 1$).
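Explicitly, with $\phi = (v+\chi)e^{i\theta}$ the kinetic term becomes
$$\partial\phi^\dagger\,\partial\phi = (\partial\chi)^2 + (v+\chi)^2(\partial\theta)^2 = (\partial\chi)^2 + v^2(\partial\theta)^2 + \dots$$
so $\theta$ appears only through its derivatives and picks up no mass term.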
So we see that spontaneous symmetry breaking can arise with a global continuous symmetry.
2) Local symmetry
A local symmetry means that the Lagrangian is invariant by a transformation whose parameters are functions of space-time.
"Gauging" means (continous) local symmetry.So you don't need "gauging" to have a spontaneous symmetry breaking.
With a local symmetry, some of the Goldstone Bosons are "eaten" by the Gauge field ($A_\mu$), so that these gauge fields (which are massless) become massive. In a 4d space-time dimension, a massless Gauge field has $2$ degrees of freedom, while a massive gauge field has $3$ degrees of freedom. To do that, the Gauge field has to "eat" one degree of freedom (one Goldstone boson)
3) Global symmetry as a special case of Local Symmetry
In the set of local symmetries, global symmetry is a very special case (a very special subset), where the transformation parameters are constant. So, if you want, you can consider global symmetries to be "included" in local symmetries.
In homogeneous coordinates, a rotation matrix around the origin can be described as
$R = \begin{bmatrix}\cos(\theta) & -\sin(\theta) & 0\\\sin(\theta) & \cos(\theta) & 0 \\ 0&0&1\end{bmatrix}$
with the angle $\theta$ and the rotation being counter-clockwise.
A translation by $x$ and $y$ can be defined as:
$T(x,y) = \begin{bmatrix}1&0&x\\ 0& 1&y\\0&0&1\end{bmatrix}$
As I understand it, the rotation matrix around an arbitrary point can be expressed as moving the rotation point to the origin, rotating around the origin, and moving back to the original position. This sequence of operations can be written as the single matrix product
$T(x,y) * R * T(-x,-y) \qquad (I)$
I find this to be counter-intuitive. In my understanding, it should be
$T(-x,-y) * R * T(x,y) \qquad (II)$
The two formulations are definitely not equal. The first equation yields
$E1 = \begin{bmatrix}\cos(\theta) & -\sin(\theta) & -x\cdot\cos(\theta)+y\cdot\sin(\theta)+x\\\sin(\theta) & \cos(\theta) & -x\cdot\sin(\theta)-y\cdot\cos(\theta)+y \\ 0&0&1\end{bmatrix}$
The second one:
$E2 = \begin{bmatrix}\cos(\theta) & -\sin(\theta) & x\cdot\cos(\theta)-y\cdot\sin(\theta)-x\\\sin(\theta) & \cos(\theta) & x\cdot\sin(\theta)+y\cdot\cos(\theta)-y \\ 0&0&1\end{bmatrix}$
So, which one is correct?
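One way to decide without expanding everything by hand: whichever composition actually rotates about $(x,y)$ must leave that point fixed. A quick numerical check (numpy, homogeneous coordinates, column-vector convention as above):

```python
import numpy as np

def R(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def T(x, y):
    return np.array([[1.0, 0.0, x],
                     [0.0, 1.0, y],
                     [0.0, 0.0, 1.0]])

x, y, theta = 2.0, -1.0, 0.7
centre = np.array([x, y, 1.0])

E1 = T(x, y) @ R(theta) @ T(-x, -y)   # formula (I)
E2 = T(-x, -y) @ R(theta) @ T(x, y)   # formula (II)

# Whichever product maps (x, y, 1) back to itself is the rotation about (x, y).
print(E1 @ centre)
print(E2 @ centre)
```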
Contents:
A scalar is just a single number. A vector is an array of numbers. A matrix is a 2-D array of numbers, so each element is identified by two indices instead of just one. A tensor is an array with more than two axes.
The product operation of matrices (the matrix product) \(C = AB\) is defined as \(C_{i,j} = \sum_{k} A_{i,k} B_{k,j}\).
The matrix containing the products of the individual elements is called the element-wise product or Hadamard product, and is denoted \(A \odot B\).
Matrix multiplication is distributive: \(A(B + C) = AB + AC\).
Matrix multiplication is also associative: \(A(BC) = (AB)C\).
The transpose of a matrix product: \((AB)^{T} = B^{T} A^{T}\).
An identity matrix is a matrix that does not change any vector when we multiply that vector by that matrix. We denote the identity matrix that preserves n-dimensional vectors as \(I_{n}\). All the entries along the main diagonal of the identity matrix are 1, while all other entries are zero.
The matrix inverse of \(A\) is denoted \(A^{-1}\), and it is defined as the matrix such that \(A A^{-1} = I_{n}\). The origin is the point specified by the vector of all zeros. A linear combination of some set of vectors \(\{v^{1}, \ldots, v^{n}\}\) is given by multiplying each vector \(v^{i}\) by a corresponding scalar coefficient and adding the results: \(\sum_{i}c_{i}v^{i}\). The span of a set of vectors is the set of all points obtainable by linear combination of the original vectors.
A set of vectors is linearly independent if no vector in the set is a linear combination of the other vectors.
Only a square matrix with linearly independent columns has an inverse.
A square matrix with linearly dependent columns is known as singular.
For square matrices the left inverse and right inverse are equal: \(A^{-1}A = AA^{-1} = I\).
Norms are functions mapping vectors to non-negative values; a norm can be thought of as the size of the vector.
\(L^{p}\) norm is given by \(||x||_{p}=(\sum_{i}|x_{i}|^{p})^{1/p}\), for \(p \in \mathbb{R}, p \geq 1\).
Euclidean norm the \(L^{2}\) norm, with \(p = 2\).
Squared \(L^{2}\) norm can be calculated simply as \(x^{T}x\).
\(L^{2}\) norm may be undesirable because it increases very slowly near the origin. In these cases, we turn to a function that grows at the same rate in all locations, but retains mathematical simplicity: the \(L^{1}\) norm. The \(L^{1}\) norm is commonly used in machine learning when the difference between zero and nonzero elements is very important.
\(L^{1}\) norm: \(||x||_{1}=\sum_i|x_{i}|\)
The \(L^{\infty}\) norm, also known as the max norm, simplifies to the absolute value of the element with the largest magnitude in the vector: \(||x||_{\infty} = \max_{i}|x_{i}|\)
The most common way to measure the size of a matrix is the Frobenius norm, which is analogous to the \(L^{2}\) norm of a vector: \(||A||_{F}=\sqrt{\sum_{i,j}A^{2}_{i,j}}\)
The dot product of two vectors can be rewritten in terms of norms.
Dot product \(x^{T}y=||x||_{2}||y||_{2}\cos\theta\)
where \(\theta\) is the angle between \(x\) and \(y\).
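A small numpy illustration of these norms and the dot-product identity (the vectors are arbitrary examples):

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])
y = np.array([1.0,  2.0, 2.0])

l1   = np.sum(np.abs(x))          # L1 norm
l2   = np.sqrt(np.sum(x**2))      # L2 (Euclidean) norm, same as np.linalg.norm(x)
linf = np.max(np.abs(x))          # L-infinity (max) norm

A = np.outer(x, y)
fro = np.sqrt(np.sum(A**2))       # Frobenius norm of a matrix

# x^T y = ||x||_2 ||y||_2 cos(theta)
cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
print(l1, l2, linf, fro, cos_theta)
```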
Diagonal matrices consist mostly of zeros and have non-zero entries only along the main diagonal. We write \(diag(v)\) to denote a square diagonal matrix whose diagonal entries are given by the entries of the vector \(v\). To compute \(diag(v)x\), we only need to scale each element \(x_i\) by \(v_i\); in other words, \(diag(v)x = v \odot x\). The inverse exists only if every diagonal entry is nonzero, and in that case, \(diag(v)^{-1} = diag([1/v_1, \ldots, 1/v_n]^T)\).
A symmetric matrix is any matrix that is equal to its own transpose: \(A = A^T\).
A unit vector is a vector with unit norm: \(||x||_2 = 1\).
A vector \(x\) and a vector \(y\) are orthogonal to each other if \(x^Ty = 0\). In \(\mathbb{R}^{n}\), at most \(n\) vectors may be mutually orthogonal with nonzero norm. If the vectors are not only orthogonal but also have unit norm, we call them orthonormal.
An orthogonal matrix is a square matrix whose rows are mutually orthonormal and whose columns are mutually orthonormal: \(A^TA = AA^T = I\). This implies that \(A^{-1} = A^T\). Eigendecomposition decomposes a matrix into a set of eigenvectors and eigenvalues. An eigenvector of a square matrix \(\pmb A\) is a non-zero vector \(\pmb v\) such that multiplication by \(\pmb A\) alters only the scale of \(\pmb v\): \(\pmb{Av} = \lambda\pmb{v}\).
The scalar \(\lambda\) is known as the eigenvalue corresponding to this eigenvector. If \(\pmb v\) is an eigenvector of \(\pmb A\), then so is any rescaled vector \(s\pmb{v}\) for \(s \in \mathbb{R}, s \neq 0\). Moreover, \(s\pmb{v}\) still has the same eigenvalue. For this reason, we usually only look for unit eigenvectors.
Suppose that a matrix \(\pmb A\) has \(n\) linearly independent eigenvectors, \({v^{(1)}, … ,v^{(n)}}\), with corresponding eigenvalues \({\lambda_1, … , \lambda_n}\). We may concatenate all of the eigenvectors to form a matrix \(\pmb V\) with one eigenvector per column: \(\pmb V = [v^{(1)}, … ,v^{(n)}]\). Likewise, we can concatenate the eigenvalues to form a vector \(\pmb{\lambda}= [\lambda_1, … ,\lambda_n]^T\). The eigendecomposition of \(\pmb A\) is then given by: \(\pmb A = \pmb V \, diag(\pmb{\lambda}) \, \pmb V^{-1}\).
Not every matrix can be decomposed into eigenvalues and eigenvectors. Every real symmetric matrix can be decomposed into an expression using only real-valued eigenvectors and eigenvalues: \(\pmb A = \pmb Q \pmb{\Lambda} \pmb Q^{T}\),
where \(\pmb Q\) is an orthogonal matrix composed of eigenvectors of \(\pmb A\), and \(\pmb{\Lambda}\) is a diagonal matrix. The eigenvalue \(\Lambda_{i,i}\) is associated with the eigenvector in column i of \(\pmb{Q}\), denoted as \(\pmb{Q}_{:,i}\). Because \(\pmb{Q}\) is an orthogonal matrix, we can think of \(\pmb{A}\) as scaling space by \(\lambda_i\) in direction \(\pmb{v}^{(i)}\).
While any real symmetric matrix \(\pmb{A}\) is guaranteed to have an eigendecomposition, the eigendecomposition may not be unique. If any two or more eigenvectors share the same eigenvalue, then any set of orthogonal vectors lying in their span are also eigenvectors with that eigenvalue, and we could equivalently choose a \(\pmb{Q}\) using those eigenvectors instead. By convention, we usually sort the entries of \(\pmb{\Lambda}\) in descending order. Under this convention, the eigendecomposition is unique only if all of the eigenvalues are unique.
The matrix is singular if and only if any of the eigenvalues are zero. The eigendecomposition of a real symmetric matrix can also be used to optimize quadratic expressions of the form \(f(\pmb{x}) = \pmb{x}^T \pmb{Ax}\) subject to \(||\pmb{x}||_2 = 1\). Whenever \(\pmb x\) is equal to an eigenvector of \(\pmb A\), \(f\) takes on the value of the corresponding eigenvalue. The maximum value of \(f\) within the constraint region is the maximum eigenvalue and its minimum value within the constraint region is the minimum eigenvalue.
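A short numpy sketch of these facts for a real symmetric matrix (the matrix itself is an arbitrary example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                    # real symmetric, so A = Q diag(lam) Q^T

lam, Q = np.linalg.eigh(A)                    # eigh: eigenvalues in ascending order
print(np.allclose(A, Q @ np.diag(lam) @ Q.T)) # True: the decomposition reconstructs A

# f(x) = x^T A x on the unit sphere attains its extrema at eigenvectors of A
x_min, x_max = Q[:, 0], Q[:, -1]
print(x_min @ A @ x_min, lam[0])              # minimum of f equals the smallest eigenvalue
print(x_max @ A @ x_max, lam[-1])             # maximum of f equals the largest eigenvalue
```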
A matrix whose eigenvalues are all positive is called positive definite. A matrix whose eigenvalues are all positive or zero-valued is called positive semidefinite. Positive semidefinite matrices are interesting because they guarantee that \(\forall \pmb x, \pmb{x}^T \pmb{Ax} \geq 0\). Positive definite matrices additionally guarantee that \(\pmb{x}^T \pmb{Ax} = 0 \Rightarrow \pmb x = 0\).
The singular value decomposition (SVD) provides another way to factorize a matrix, into singular vectors and singular values. Every real matrix has a singular value decomposition, but the same is not true of the eigenvalue decomposition. For example, if a matrix is not square, the eigendecomposition is not defined, and we must use a singular value decomposition instead.
The singular value decomposition is similar to eigendecomposition, except this time we will write \(\pmb A\) as a product of three matrices: \(\pmb A = \pmb{U} \pmb{D} \pmb{V}^{T}\).
Suppose that \(\pmb A\) is an m x n matrix. Then \(\pmb U\) is defined to be an m x m matrix, \(\pmb D\) to be an m x n matrix, and \(\pmb V\) to be an n x n matrix.
Each of these matrices is defined to have a special structure. The matrices \(\pmb U\) and \(\pmb V\) are both defined to be orthogonal matrices. The matrix \(\pmb D\) is defined to be a diagonal matrix. Note that \(\pmb D\) is not necessarily square.
The elements along the diagonal of \(\pmb D\) are known as the singular values of the matrix \(\pmb A\). The columns of \(\pmb U\) are known as the left-singular vectors. The columns of \(\pmb V\) are known as the right-singular vectors.
We can actually interpret the singular value decomposition of \(\pmb A\) in terms of the eigendecomposition of functions of \(\pmb A\). The left-singular vectors of \(\pmb A\) are the eigenvectors of \(\pmb{AA}^T\). The right-singular vectors of \(\pmb A\) are the eigenvectors of \(\pmb{A}^T\pmb{A}\). The non-zero singular values of \(\pmb A\) are the square roots of the eigenvalues of \(\pmb{A}^T\pmb{A}\). The same is true for \(\pmb{AA}^T\).
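This relationship between the SVD and the eigendecomposition of \(\pmb{A}^T\pmb{A}\) is easy to verify numerically (the random matrix is purely an example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))       # non-square: no eigendecomposition, but an SVD exists

U, s, Vt = np.linalg.svd(A)           # A = U @ D @ Vt, with the singular values s on D's diagonal

eigvals = np.linalg.eigvalsh(A.T @ A) # eigenvalues of A^T A, in ascending order
print(np.allclose(np.sort(s**2), eigvals))   # singular values = sqrt of these eigenvalues
```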
Suppose we want to make a left-inverse \(B\) of a matrix \(A\), so that we can solve a linear equation \(Ax = y\) by left-multiplying each side to obtain \(x = By\). Depending on the structure of the problem, it may not be possible to design a unique mapping from \(A\) to \(B\).
If \(A\) is taller than it is wide, then it is possible for this equation to have no solution. If \(A\) is wider than it is tall, then there could be multiple possible solutions. The pseudoinverse of \(A\) is defined as the matrix \(A^{+} = \lim_{\alpha \to 0^{+}} (A^{T}A + \alpha I)^{-1}A^{T}\).
Practical algorithms for computing the pseudoinverse are not based on this definition, but rather on the formula \(A^{+} = V D^{+} U^{T}\),
where \(U\), \(D\) and \(V\) are the singular value decomposition of \(A\), and the pseudoinverse \(D^{+}\) of a diagonal matrix \(D\) is obtained by taking the reciprocal of its non-zero elements then taking the transpose of the resulting matrix.
When \(A\) has more columns than rows, then solving a linear equation using the pseudoinverse provides one of the many possible solutions. Specifically, it provides the solution \(x=A^{+}y\) with minimal Euclidean norm \(||x||_2\) among all possible solutions.
When \(A\) has more rows than columns, it is possible for there to be no solution. In this case, using the pseudoinverse gives us the \(x\) for which \(Ax\) is as close as possible to \(y\) in terms of Euclidean norm \(||Ax - y||_2\).
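Both cases are easy to see with numpy's pinv (the matrices are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(1)

# Wide A (more columns than rows): many solutions; pinv returns the minimum-norm one
A_wide = rng.standard_normal((2, 4))
y = rng.standard_normal(2)
x = np.linalg.pinv(A_wide) @ y
print(np.allclose(A_wide @ x, y))          # an exact solution, with the smallest ||x||_2

# Tall A (more rows than columns): usually no exact solution; pinv gives the least-squares fit
A_tall = rng.standard_normal((5, 2))
y2 = rng.standard_normal(5)
x2 = np.linalg.pinv(A_tall) @ y2           # minimizes ||A_tall x - y2||_2
print(np.linalg.norm(A_tall @ x2 - y2))
```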
The trace operator gives the sum of all of the diagonal entries of a matrix: \(Tr(A) = \sum_{i} A_{i,i}\).
For example, the trace operator provides an alternative way of writing the Frobenius norm of a matrix: \(||A||_{F} = \sqrt{Tr(AA^{T})}\).
Also: \(Tr(A) = Tr(A^T)\), and \(Tr(ABC) = Tr(CAB) = Tr(BCA)\).
The determinant of a square matrix, denoted \(det(A)\), is a function mapping matrices to real scalars. The determinant is equal to the product of all the eigenvalues of the matrix. The absolute value of the determinant can be thought of as a measure of how much multiplication by the matrix expands or contracts space. If the determinant is 0, then space is contracted completely along at least one dimension, causing it to lose all of its volume. If the determinant is 1, then the transformation preserves volume.
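These trace and determinant identities can be confirmed numerically as well (random matrices as examples):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

print(np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B)))  # cyclic property of the trace
print(np.isclose(np.trace(A), np.trace(A.T)))                # Tr(A) = Tr(A^T)

# det(A) equals the product of the eigenvalues (take .real: complex eigenvalues come in
# conjugate pairs for a real matrix, so their product is real up to rounding)
print(np.isclose(np.linalg.det(A), np.prod(np.linalg.eigvals(A)).real))
```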
I am having trouble computing the differental of a map. This is the context:
Let $S\subset \mathbb R^3$ be a regular surface, and fix a point $p\in S$.
Let $\pi:\mathbb R^3\to T_pS$ be the orthogonal projection onto $T_pS\subset \mathbb R^3$, and let $h=\pi|_S$ be the restriction of $\pi$ to $S$. Why is $h$ differentiable? Carefully show that $dh_p$ is injective.
I said that $h$ is differentiable, because it is the restriction of a differentiable function.
Next I assume that near $p$ the surface $S$ is parametrized by some parametrization, say $X(u,v)$. Then I choose $\{X_u,X_v,n\}$ as a basis for $\mathbb R^3$. Now we see that $h(x,y,z)=(x,y)$ which would mean that $$dh_p=\begin{pmatrix}1&0&0\\0&1&0\end{pmatrix}.$$ This doesn't seem right to me, because the zero column suggests that $dh_p$ isn't injective.
Where exactly am I slipping up?
Weyl's equidistribution theorem states that the orbit of a point on the circle under rotation by $\alpha$ becomes asymptotically equidistributed with respect to Lebesgue (Haar) measure whenever $\alpha$ is an irrational multiple of $\pi$. (From the dynamical point of view, this is the statement that irrational rotations are uniquely ergodic.)
What happens if we choose two angles $\alpha$ and $\beta$ and flip a coin at each step to determine which we rotate by? Does the orbit still equidistribute?
Here is a more precise statement of the question, since the above is terse and perhaps open to misinterpretation.
Let $\Omega =\{0,1\}^\mathbb{N}$ be the space of infinite sequences of 0s and 1s, endowed with the $(\frac 12, \frac 12)$-Bernoulli measure $\mu$.
Consider the circle $S^1 = \mathbb{R}/\mathbb{Z}$ and a starting point $x_1\in S^1$. Fix $\alpha_0,\alpha_1\in \mathbb{R} / \mathbb{Z}$. Given $\omega\in \Omega$, define a sequence $x_n(\omega)\in S^1$ iteratively by $x_{n+1} = x_n + \alpha_{\omega(n)}$.
Let $m_n(\omega) = \frac 1n \sum_{k=0}^{n-1} \delta_{x_k(\omega)}$ be the average of the $\delta$-measures at the points $x_1(\omega),\dots, x_n(\omega)$. Let $E\subset \Omega$ be the set of $\omega$ such that $m_n(\omega) \to $ Lebesgue. Note that $\omega\in E$ if and only if $\sigma\omega\in E$, where $\sigma$ is the shift map on $\Omega$, and thus by the ergodic theorem we have $\mu(E)=0$ or $\mu(E)=1$.
Questions:
What properties of $\alpha_0,\alpha_1$ determine whether we get $\mu(E)=0$ or $\mu(E)=1$? If $\alpha_0=\alpha_1$ is irrational then we have $\mu(E)=1$ as above. On the other hand, if both are rational then $\mu(E)=0$ since $x_n$ takes only finitely many values. If $\alpha_0$ and $\alpha_1$ are linearly dependent over the rationals (but not rational themselves) then I believe we have $\mu(E)=1$, but probably this takes some argument. I don't know what happens if they are linearly independent over the rationals.
Do any of the above answers change if we consider instead of $E$ the set $F$ of $\omega$ with the property that the limit points of the sequence $m_n$ do not depend on the initial point $x_1$? This would allow more general limiting measures. Or we could impose other conditions on the limit points of the sequence $m_n$ and ask the same question.
What happens if we replace $\mu$ with a different $\sigma$-invariant measure on $\Omega$? Do we get a different answer for Bernoulli measures? Markov measures? Gibbs measures? General shift-invariant measures?
I know that's rather broad and open-ended. I'd be quite happy (at first, at least) with an answer to the first question. It seems worth asking all of them since this feels like something that has probably been studied and may be well understood, although I didn't find anything when I looked. (I found some related results, but nothing that addressed these particular questions.) |
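Not a proof of anything, but a quick numerical experiment (a sketch assuming NumPy; the particular angles are arbitrary irrational choices) suggests what equidistribution of such a randomly driven orbit looks like:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([np.sqrt(2) % 1, np.sqrt(3) % 1])  # the two rotation angles alpha_0, alpha_1 (mod 1)
n = 200_000

coins = rng.integers(0, 2, size=n)        # the (1/2, 1/2) Bernoulli coin flips
x = np.cumsum(alpha[coins]) % 1.0         # orbit on R/Z, started (for illustration) at 0

# crude check of equidistribution: compare bin frequencies with Lebesgue measure
hist, _ = np.histogram(x, bins=50, range=(0.0, 1.0))
print(np.max(np.abs(hist / n - 1 / 50)))  # small if the empirical measure looks close to uniform
```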
I'm trying to solve a problem related to waves on a string.
Say I have an infinite string, with tension $T$ and mass density $\mu$.
To the string, at $x=0$ (seeing as it's infinite, the specific point doesn't actually matter, so long as it's constant), I attach a spring $k$ (the string is horizontal whilst the spring is vertical).
I am attempting to calculate the reflected and transmitted waves, resulting from an incident wave: $$y_{inc}(x,t)=A_{inc}\,e^{i(x-\omega\,t)}$$
My attempt at a solution was to write
$$\mu\,\frac{\partial^2y}{\partial t^2}|_{x=0}=T\,\frac{\partial^2y}{\partial x^2}|_{x=0}-k\,y(0,t)$$
which (I think) is, generally speaking, true. I am not sure how (if at all) I can use this to find an expression for the reflected wave.
So, my question is how does the spring affect the reflection? |
No.
Implied volatility isn't a historical measure of standard deviation. Implied volatility is used to relate a market price to some model, be that Black-Scholes or something more sophisticated.
Another way to phrase it, implied vol is that single vol input into a model, such that the model reproduces the market prices. Different models will have different implied vols.
And even in the Black-Scholes model, the volatility isn't a measure of the standard deviation of the stock price. It's a measure of the standard deviation of the log-return of price.
Consider Geometric Brownian Motion for a stock price: $dS_t = \mu S_t dt + \sigma S_t dW_t $. The distribution of $\ln \frac{S_t}{S_0}$ is $N\left(\left(\mu-\frac{\sigma^2}{2}\right)t,\ \sigma^2 t\right)$, where $N(\cdot)$ is the Normal distribution. In other words, over $t=1$ year, the standard deviation of the log-return of the stock is $\sigma$.
In contrast, the standard deviation of the stock price over 1 year is given by $S_0 e^\mu \sqrt{e^{\sigma^2} -1 }$. This quantity looks nothing like the implied vol you are deriving from market option prices.
A final comment: nothing about implied vol calculation is dependent on "the past 1 year". It's strictly a forward-looking concept. Look up Markov property on Wikipedia. |
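To make the "single vol input that reproduces the market price" idea concrete, here is a minimal sketch (assuming SciPy; all numbers are made up) of backing an implied vol out of a quoted European call price under Black-Scholes:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S0, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S0, K, T, r):
    """The single sigma that makes the model price match the observed price."""
    return brentq(lambda s: bs_call(S0, K, T, r, s) - price, 1e-6, 5.0)

market_price = 10.45   # hypothetical quoted call price
print(implied_vol(market_price, S0=100, K=100, T=1.0, r=0.05))  # ~0.20
```

A different model would generally return a different implied vol for the same quote, which is the point made above.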
Dual Nature of Matter and Radiation: Wave Nature of Matter, Davisson and Germer Experiment. The square root of the frequency \(\nu\) of a spectral line of the characteristic X-ray spectrum is directly proportional to the atomic number \(Z\) of the target element: \(\sqrt{\nu} \propto Z\), or \(\sqrt{\nu} = a(Z - b)\).
For the K-series (characteristic X-ray spectrum): \(\sqrt{\dfrac{\nu_{1}}{\nu_{2}}} = \dfrac{Z_{1} - 1}{Z_{2} - 1} \;\Rightarrow\; \sqrt{\dfrac{\lambda_{2}}{\lambda_{1}}} = \dfrac{Z_{1} - 1}{Z_{2} - 1}\).
The intercept on the Z-axis gives the screening constant \(b\); it is the same for all spectral lines in a given series but varies from series to series: \(b = 1\) for the K series (Kα, Kβ, Kγ) and \(b = 7.4\) for the L series.
The wavelengths of characteristic X-rays are given by \(\dfrac{1}{\lambda} = R\left(Z - b\right)^{2}\left[\dfrac{1}{n_1^2} - \dfrac{1}{n_2^2}\right].\)
\(\dfrac{\lambda_{K\alpha}}{\lambda_{K\beta}} = \dfrac{32}{27}\) (ratio of the Kα and Kβ lines). Davisson and Germer's electron diffraction experiment gives evidence of the existence of matter waves. A beam of electrons emitted by an electron gun is made to fall on a nickel crystal along a cubical axis at a particular angle. The Ni crystal behaves like a three-dimensional diffraction grating and diffracts the electron beam obtained from the electron gun. The diffracted beam of electrons is received by a detector which can be positioned at any angle by rotating it about the point of incidence. The energy of the incident electron beam can also be varied by changing the voltage applied to the electron gun. The intensity of the scattered beam of electrons is different at different scattering angles; the intensity bump is maximum for a 54 V accelerating potential and a 50° scattering angle.
1. The wavelength associated with a moving particle is known as de Broglie wavelength and it is given by
\lambda = \frac{h}{p} = \frac{h}{mv}
2. KE = \frac{P^{2}}{2m} \Rightarrow \lambda = \frac{h}{\sqrt{2 m \ KE}}
3. De Broglie wavelength associated with charged particles
For electrons (\(m_e = 9.1 \times 10^{-31}\) kg): \(\lambda = \dfrac{h}{\sqrt{2mqV}}\)
4. De Broglie wavelength associated with uncharged particles
For neutrons (\(m_n = 1.67 \times 10^{-27}\) kg): \(\lambda = \dfrac{h}{\sqrt{2mE}} = \dfrac{6.62 \times 10^{-34}}{\sqrt{2 \times 1.67 \times 10^{-27}E}}\)
5. de Broglie wavelength of a macroscopic object (mass 0.046 kg moving at 30 m/s), illustrating why its wave nature is unobservable:
\lambda = \frac{h}{mv} = \frac{6.63 \times 10^{-34}}{(0.046)(30)} = 4.8 \times 10^{-34} m
6. The de Broglie wavelength λ associated with electrons accelerated through V = 54 V (as in the Davisson and Germer experiment) is given by
\tt \lambda = h/p = \frac{1.227}{\sqrt{V}}nm |
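As a quick numeric cross-check of formulas 3 and 6 for V = 54 V (a sketch in Python; the constants are the usual approximate values):

```python
import math

h = 6.626e-34    # Planck constant, J s
m_e = 9.109e-31  # electron mass, kg
q = 1.602e-19    # elementary charge, C

V = 54.0         # accelerating voltage used in the Davisson-Germer experiment
lam = h / math.sqrt(2 * m_e * q * V)   # lambda = h / sqrt(2 m q V)
print(lam)                             # ~1.67e-10 m
print(1.227 / math.sqrt(V))            # ~0.167 nm, matching the shortcut formula above
```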
Electrostatic Potential and Capacitance: Electric Potential. Electric potential is the amount of work done in bringing a unit positive charge from infinity to the given point: \(V=\dfrac{W}{Q}\). Potential at a point due to a point charge: \(V=\dfrac{1}{4\pi\varepsilon_{0}}\cdot\dfrac{Q}{R}\). Electric potential is a scalar quantity. The relation between \(V\) and \(E\) is \(E=-\dfrac{dV}{dr}\), or equivalently \(V_{B}-V_{A}=-\int_{A}^{B} \overline{E}\cdot\overline{dr}\). If the potential at a point is zero it is called a zero-potential point. For unlike charges \(Q_{1}\) and \(-Q_{2}\) separated by a distance \(d\), \(P_{1}\) and \(P_{2}\) are zero-potential points as shown.
At \(P_{1}\): \(\dfrac{Q_{1}}{x}=\dfrac{Q_{2}}{d-x}\). At \(P_{2}\): \(\dfrac{Q_{1}}{y}=\dfrac{Q_{2}}{d+y}\).
Electric Potential (V): The electric potential at any point is equal to the work done per unit positive charge in carrying it from infinity to that point in the electric field. Electric potential: \(V = \dfrac{W}{q}\) |
Proprieties of FBK UFSDs after neutron and proton irradiation up to $6*10^{15}$ neq/cm$^2$ / Mazza, S.M. (UC, Santa Cruz, Inst. Part. Phys.) ; Estrada, E. (UC, Santa Cruz, Inst. Part. Phys.) ; Galloway, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; Gee, C. (UC, Santa Cruz, Inst. Part. Phys.) ; Goto, A. (UC, Santa Cruz, Inst. Part. Phys.) ; Luce, Z. (UC, Santa Cruz, Inst. Part. Phys.) ; McKinney-Martinez, F. (UC, Santa Cruz, Inst. Part. Phys.) ; Rodriguez, R. (UC, Santa Cruz, Inst. Part. Phys.) ; Sadrozinski, H.F.-W. (UC, Santa Cruz, Inst. Part. Phys.) ; Seiden, A. (UC, Santa Cruz, Inst. Part. Phys.) et al. The properties of 60 μm thick Ultra-Fast Silicon Detectors (UFSD) manufactured by Fondazione Bruno Kessler (FBK), Trento (Italy) were tested before and after irradiation with minimum ionizing particles (MIPs) from a $^{90}$Sr β-source. [...] arXiv:1804.05449. - 13 p.
Charge-collection efficiency of heavily irradiated silicon diodes operated with an increased free-carrier concentration and under forward bias / Mandić, I (Ljubljana U. ; Stefan Inst., Ljubljana) ; Cindro, V (Ljubljana U. ; Stefan Inst., Ljubljana) ; Kramberger, G (Ljubljana U. ; Stefan Inst., Ljubljana) ; Mikuž, M (Ljubljana U. ; Stefan Inst., Ljubljana) ; Zavrtanik, M (Ljubljana U. ; Stefan Inst., Ljubljana) The charge-collection efficiency of Si pad diodes irradiated with neutrons up to $8 \times 10^{15} \ \rm{n} \ cm^{-2}$ was measured using a $^{90}$Sr source at temperatures from -180 to -30°C. The measurements were made with diodes under forward and reverse bias. [...] 2004 - 12 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 533 (2004) 442-453
Effect of electron injection on defect reactions in irradiated silicon containing boron, carbon, and oxygen / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Yakushevich, H S (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) Comparative studies employing Deep Level Transient Spectroscopy and C-V measurements have been performed on recombination-enhanced reactions between defects of interstitial type in boron doped silicon diodes irradiated with alpha-particles. It has been shown that self-interstitial related defects which are immobile even at room temperatures can be activated by very low forward currents at liquid nitrogen temperatures. [...] 2018 - 7 p. - Published in : J. Appl. Phys. 123 (2018) 161576
Characterization of magnetic Czochralski silicon radiation detectors / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) Silicon wafers grown by the Magnetic Czochralski (MCZ) method have been processed in the form of pad diodes at Instituto de Microelectrònica de Barcelona (IMB-CNM) facilities. The n-type MCZ wafers were manufactured by Okmetic OYJ and they have a nominal resistivity of $1 \rm{k} \Omega cm$. [...] 2005 - 9 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 548 (2005) 355-363
Silicon detectors: From radiation hard devices operating beyond LHC conditions to characterization of primary fourfold coordinated vacancy defects / Lazanu, I (Bucharest U.) ; Lazanu, S (Bucharest, Nat. Inst. Mat. Sci.) The physics potential at future hadron colliders as LHC and its upgrades in energy and luminosity, Super-LHC and Very-LHC respectively, as well as the requirements for detectors in the conditions of possible scenarios for radiation environments are discussed in this contribution. Silicon detectors will be used extensively in experiments at these new facilities where they will be exposed to high fluences of fast hadrons. The principal obstacle to long-time operation arises from bulk displacement damage in silicon, which acts as an irreversible process in the material and leads to an increase of the leakage current of the detector, degrades the Signal/Noise ratio, and increases the effective carrier concentration. [...] 2005 - 9 p. - Published in : Rom. Rep. Phys. 57 (2005), no. 3, pp. 342-348
Numerical simulation of radiation damage effects in p-type and n-type FZ silicon detectors / Petasecca, M (Perugia U. ; INFN, Perugia) ; Moscatelli, F (Perugia U. ; INFN, Perugia ; IMM, Bologna) ; Passeri, D (Perugia U. ; INFN, Perugia) ; Pignatel, G U (Perugia U. ; INFN, Perugia) In the framework of the CERN-RD50 Collaboration, the adoption of p-type substrates has been proposed as a suitable means to improve the radiation hardness of silicon detectors up to fluences of $1 \times 10^{16} \rm{n}/cm^2$. In this work two numerical simulation models will be presented for p-type and n-type silicon detectors, respectively. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 2971-2976
Technology development of p-type microstrip detectors with radiation hard p-spray isolation / Pellegrini, G (Barcelona, Inst. Microelectron.) ; Fleta, C (Barcelona, Inst. Microelectron.) ; Campabadal, F (Barcelona, Inst. Microelectron.) ; Díez, S (Barcelona, Inst. Microelectron.) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Rafí, J M (Barcelona, Inst. Microelectron.) ; Ullán, M (Barcelona, Inst. Microelectron.) A technology for the fabrication of p-type microstrip silicon radiation detectors using p-spray implant isolation has been developed at CNM-IMB. The p-spray isolation has been optimized in order to withstand a gamma irradiation dose up to 50 Mrad (Si), which represents the ionization radiation dose expected in the middle region of the SCT-Atlas detector of the future Super-LHC during 10 years of operation. [...] 2006 - 6 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 566 (2006) 360-365
Defect characterization in silicon particle detectors irradiated with Li ions / Scaringella, M (INFN, Florence ; U. Florence (main)) ; Menichelli, D (INFN, Florence ; U. Florence (main)) ; Candelori, A (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Bruzzi, M (INFN, Florence ; U. Florence (main)) High Energy Physics experiments at future very high luminosity colliders will require ultra radiation-hard silicon detectors that can withstand fast hadron fluences up to $10^{16}$ cm$^{-2}$. In order to test the detectors radiation hardness in this fluence range, long irradiation times are required at the currently available proton irradiation facilities. [...] 2006 - 6 p. - Published in : IEEE Trans. Nucl. Sci. 53 (2006) 589-594 |
If you are willing to get a non-sharp constant, here's another proof found in many differential geometry texts. Without loss of generality assume $f \geq 0$. (Replacing $f$ by $|f|$ doesn't change the integrals on either side, if $f$ is assumed to be $C^1$.)
Let $2M = \sup f$, and let $t_0 \in (0,\pi)$ attain this maximum.
Let $X(t) = f(t) - M$ and $Y(t) = \sqrt{M^2 - X(t)^2}$ if $t \leq t_0$ and $-\sqrt{M^2 - X(t)^2}$ if $t \geq t_0$.
We have that $(X(t),Y(t))$ lies on the circle of radius $M$, and goes around the circle exactly once as $t$ goes from $0$ to $\pi$. We thus can use a well-known formula to conclude that
$$ -\int_0^\pi Y(t) X'(t) \mathrm{d}t = \text{Area of disk} = \pi M^2 $$
By Schwarz inequality, however, we have
$$ \int_0^\pi Y(t) X'(t) \mathrm{d}t \leq \sqrt{ \int_0^\pi Y^2\mathrm{d}t \int_0^\pi X'^2\mathrm{d}t} = \sqrt{ \left(\pi M^2 - \int_0^\pi X^2\mathrm{d}t \right) \int_0^\pi X'(t)^2\mathrm{d}t }$$
Squaring we get
$$ \pi^2 M^4 \leq \left(\pi M^2 - \int_0^\pi X^2 \mathrm{d}t\right) \int_0^\pi f'^2\mathrm{d}t $$
Now, notice that (using the Cauchy–Schwarz inequality $\int_0^\pi X \,\mathrm{d}t \leq \sqrt{\pi \int_0^\pi X^2 \,\mathrm{d}t} = \pi M A$ for the last step)$$ \int_0^\pi f^2 ~\mathrm{d}t = \int_0^\pi (X + M)^2 ~\mathrm{d}t = \pi M^2 + \int_0^\pi X^2 ~\mathrm{d}t + 2M \int_0^{\pi} X ~\mathrm{d}t \leq \pi M^2 (1+A)^2, $$where$$ A: = \left[ \frac{1}{\pi M^2} \int_0^\pi X^2 ~\mathrm{d}t \right]^{1/2} < 1. $$This implies$$ \int_0^\pi f^2 ~\mathrm{d}t \leq (1 + A)^2(1-A^2) \int_0^\pi |f'|^2 ~\mathrm{d}t.$$The coefficient has a maximum when $A = 1/2$, which gives $$ \int_0^\pi f^2 ~\mathrm{d}t \leq \frac{27}{16} \int_0^\pi |f'|^2~\mathrm{d}t. $$
If $\int_0^\pi X ~\mathrm{d}t = 0$, we can sharpen the coefficient to $(1 + A^2)(1-A^2) = 1 - A^4 \leq 1$. This can be achieved by extending $f$ to a function $g$ on $(-\pi,\pi)$ with an odd extension, exactly as you have described for the Fourier proof. |
Is it possible to prove this matrix family only contains totally unimodular matrices?
The matrix has $\frac{3n(n-1)}2$ rows and $n+\frac{n(n-1)}2$ columns.
To every pair $(i,i')$ with $1\leq i<i'\leq n$ we associate a unique integer $f(i,i')$ from $\big\{n+1,\dots,n+\frac{n(n-1)}2\big\}$.
Each $3r+1$ row has three non-zero entries with $M_{(3r+1),i}=M_{(3r+1),i'}=1$ at some $1\leq i<i'\leq n$ (two of first $n$ columns are $1$) and $M_{(3r+1),f(i,i')}=1$.
Each $3r+2$ and $3r+3$ row has two non-zero entries $M_{(3r+2),i}=M_{(3r+2),f(i,i')}=M_{(3r+3),i'}=M_{(3r+3),f(i,i')}=1$.
$$M=\begin{bmatrix}1&1&0&1&0&0\\1&0&0&1&0&0\\0&1&0&1&0&0\\1&0&1&0&1&0\\1&0&0&0&1&0\\0&0&1&0&1&0\\0&1&1&0&0&1\\0&1&0&0&0&1\\0&0&1&0&0&1\end{bmatrix}$$ holds at $n=3$ and it is totally unimodular here.
Is this type of matrix always totally unimodular?
One could prove this by induction if we knew that, whenever $A,B$ are totally unimodular, the stacked matrix $\begin{bmatrix}A\\B\end{bmatrix}$ is totally unimodular under mild non-intersection conditions. |
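For small instances one can at least verify total unimodularity mechanically by enumerating every square submatrix (a brute-force sketch assuming NumPy; the cost is exponential, so this is only feasible for small $n$):

```python
import numpy as np
from itertools import combinations

M = np.array([
    [1, 1, 0, 1, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 0],
    [0, 0, 1, 0, 1, 0],
    [0, 1, 1, 0, 0, 1],
    [0, 1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0, 1],
])

def is_totally_unimodular(A):
    """Check that every square submatrix has determinant in {-1, 0, 1}."""
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(A[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

print(is_totally_unimodular(M))  # True for the n = 3 matrix above
```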
Consider a half sphere of radius $R$, partially filled with an incompressible liquid with a density $\rho$, up to a height $h.$ I'd like to find the pressure field inside the liquid.
For $| x | \leqslant x_l$, it's easy: indeed, we know that $P(x,h)=P_0$, and moreover $\frac{\textrm{d}P}{\textrm{d}z} = -\rho g$, so
$$ P(x,z) = P_0 + \rho g (h - z)$$
Also, when $\theta_l \ll 1$, $x_l \approx R$, so $P$ should not vary too much between $x_l$ and $R$: for any $x$, $$ P(x,z) \approx P_0 + \rho g (h-z)$$
However, in general cases, I really don't know how we could find the pressure field for $|x| > x_l$. The equation $\frac{\textrm{d}P}{\textrm{d}z} = -\rho g$ is still true, yet I can't manage to find the pressure next to the side of the half sphere: how could we find it?
Edit:
JezuzStarusst's answer is valid away from the side of the sphere, since there's no horizontal force acting on the fluid. However, near the side, surface tension appears. How could we take it into account ? |
I've implemented the following algorithm. For each minibatch:
Compute the gradient using the mini-batch sample.
Update the parameters.
Update the hidden layers.
If $\Gamma_l$ are the new parameters at the $l$th layer, $\mathbf{X}$ is the input data, and $a(\cdot)$ is the element-wise activation function:
$$ \hat y = a(a(...a(\mathbf{X}\Gamma_1)\Gamma_2) ...)\Gamma_l $$
I need $\hat y$ to compute the working residual, in order to compute the first step in the chain rule to get the gradient for the subsequent backprop step. But using the whole dataset in the forward pass is
slow when sample size is big, and probably impossible when it is huge.
Up until now I've been working with modest samples, and this problem hasn't occurred to me.
Is it standard practice to only update $\hat y_{i \in \text{minibatch}}$? By computing $$ \hat y_{i \in MB} = a(a(...a(\mathbf{X}_{i \in MB}\Gamma_1)\Gamma_2) ...)\Gamma_l $$ ?
That way each observation's estimate will get updated once per epoch. But this seems weird, because the working residuals for each backward pass would no longer correspond to the parameters used to compute the current gradient. In other words, the parameters would be updated, but the working residual wouldn't.
I guess I had been doing halfway minibatching -- only minibatching for the backward pass. Could someone confirm what is standard practice for the forward pass?
Maybe I've just been doing it backwards? Is it standard to first update $\hat y$ for a minibatch, then compute the gradient? |
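For reference, a minimal sketch of the usual pattern (assuming NumPy; the architecture, loss, and names are made up): each step runs the forward pass on the current minibatch only, then the backward pass with those same activations and parameters, so a full-dataset $\hat y$ is never materialized during training.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 20))   # full dataset (never fed forward all at once)
y = rng.standard_normal((10_000, 1))

W1 = rng.standard_normal((20, 32)) * 0.1
W2 = rng.standard_normal((32, 1)) * 0.1
lr, batch_size = 1e-2, 64

for epoch in range(3):
    perm = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        idx = perm[start:start + batch_size]
        Xb, yb = X[idx], y[idx]                  # only the minibatch is touched

        # forward pass on the minibatch only
        h = np.tanh(Xb @ W1)
        y_hat = h @ W2

        # backward pass with the same minibatch and the same parameters
        grad_out = 2 * (y_hat - yb) / len(idx)   # d(MSE)/d(y_hat)
        gW2 = h.T @ grad_out
        gW1 = Xb.T @ ((grad_out @ W2.T) * (1 - h**2))

        W2 -= lr * gW2
        W1 -= lr * gW1
```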
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equations, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$.If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\]then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
Let $A, B, C$ be $n\times n$ invertible matrices. When you simplify the expression\[C^{-1}(AB^{-1})^{-1}(CA^{-1})^{-1}C^2,\]which matrix do you get?(a) $A$(b) $C^{-1}A^{-1}BC^{-1}AC^2$(c) $B$(d) $C^2$(e) $C^{-1}BC$(f) $C$
Let $\mathcal{P}_3$ be the vector space of all polynomials of degree $3$ or less. Let\[S=\{p_1(x), p_2(x), p_3(x), p_4(x)\},\]where\begin{align*}p_1(x)&=1+3x+2x^2-x^3 & p_2(x)&=x+x^3\\p_3(x)&=x+x^2-x^3 & p_4(x)&=3+8x+8x^3.\end{align*}
(a) Find a basis $Q$ of the span $\operatorname{Span}(S)$ consisting of polynomials in $S$.
(b) For each polynomial in $S$ that is not in $Q$, find the coordinate vector with respect to the basis $Q$.
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be a vector space and $B$ be a basis for $V$.Let $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ be vectors in $V$.Suppose that $A$ is the matrix whose columns are the coordinate vectors of $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5$ with respect to the basis $B$.
After applying the elementary row operations to $A$, we obtain the following matrix in reduced row echelon form\[\begin{bmatrix}1 & 0 & 2 & 1 & 0 \\0 & 1 & 3 & 0 & 1 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{bmatrix}.\]
(a) What is the dimension of $V$?
(b) What is the dimension of $\operatorname{Span}\{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3, \mathbf{w}_4, \mathbf{w}_5\}$?
(The Ohio State University, Linear Algebra Midterm)
Let $V$ be the vector space of all $2\times 2$ matrices whose entries are real numbers. Let\[W=\left\{\, A\in V \quad \middle | \quad A=\begin{bmatrix}a & b\\c& -a\end{bmatrix} \text{ for any } a, b, c\in \mathbb{R} \,\right\}.\]
(a) Show that $W$ is a subspace of $V$.
(b) Find a basis of $W$.
(c) Find the dimension of $W$.
(The Ohio State University, Linear Algebra Midterm)
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold).The time limit was 55 minutes.
This post is Part 3 and contains Problem 7, 8, and 9.Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9.Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017.There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold).The time limit was 55 minutes.
This post is Part 2 and contains Problem 4, 5, and 6.Check out Part 1 and Part 3 for the rest of the exam problems.
Problem 4. Let\[\mathbf{a}_1=\begin{bmatrix}1 \\2 \\3\end{bmatrix}, \mathbf{a}_2=\begin{bmatrix}2 \\-1 \\4\end{bmatrix}, \mathbf{b}=\begin{bmatrix}0 \\a \\2\end{bmatrix}.\]
Find all the values for $a$ so that the vector $\mathbf{b}$ is a linear combination of vectors $\mathbf{a}_1$ and $\mathbf{a}_2$.
Problem 5.Find the inverse matrix of\[A=\begin{bmatrix}0 & 0 & 2 & 0 \\0 &1 & 0 & 0 \\1 & 0 & 0 & 0 \\1 & 0 & 0 & 1\end{bmatrix}\]if it exists. If you think there is no inverse matrix of $A$, then give a reason.
Problem 6.Consider the system of linear equations\begin{align*}3x_1+2x_2&=1\\5x_1+3x_2&=2.\end{align*}
(a) Find the coefficient matrix $A$ of the system.
(b) Find the inverse matrix of the coefficient matrix $A$.
(c) Using the inverse matrix of $A$, find the solution of the system.
(Linear Algebra Midterm Exam 1, the Ohio State University) |
You have $\rm\:a + a^{-1} = 1,\:$ so $\rm\: a^{-1} = 1 - a.\:$ Note that all of your equations should be connected by arrows going both ways, i.e. $\iff\!\!,\:$ since you need to prove both necessity and sufficiency.
Here the group structure arises simply from renaming (or labeling) the elements of the additive group $\,\Bbb Z/7\,$ via the "label" bijection $\rm\:\ell\, n := n-3,\:$ i.e. by naming or labeling each natural mod $7\,$ by the natural congruent to $\rm\,n\!-\!3.\,$ To perform an operation on labels, we first unlabel the operands by applying $\,\ell^{-1}n\, =\, n\!+\!3,\,$ then perform the normal operation, then label the result, i.e.
$$\rm a \oplus b\ :=\ \ell\,(\ell^{-1}a\, +\, \ell^{-1}b)\ =\, -3 + ((a\!+\!3) + (b\!+\!3))\ =\ a+b+3 $$
$$\rm \ominus\, a\ :=\ \ell(-\,\ell^{-1}a)\ =\ -3+ (-(a\!+\!3))\ =\ -6-a\ =\ 1-a\quad $$
Thus for $\mu = \ell^{-1}\,$ we have $\rm\ \mu(a \oplus b)\, =\, \mu\,a + \mu\, b,\ $ and $\rm\ \mu\ominus a\, =\, -\mu a\ $ so $\mu$ is a bijective group homomorphism, hence an isomorphism. In more technical language one says that one has transported the group structure along the bijection $\mu$ (or $\ell).$
For example the equation $\,5+ 6 = 4\,$ transports to $\,\ell\, 5 \oplus \ell\, 6 = \ell\, 4,\,$ i.e. $\it\, 2 \oplus 3 = 1,\,$ and the equation $\,-(5)\, =\, 2\,$ transports to $\,\ominus\,\ell\,5\, =\, \ell\, 2,\,$ i.e. $\it\,\ominus\,2 = 6.\,$ Transporting the entire addition table yields
$$\begin{array}{|c|c|c|c|c|c|c|c|} \hline \color{#C00}\oplus &\it\color{#C00}0 &\it\color{#C00}1 &\it\color{#C00}2 &\it\color{#C00}3 &\it\color{#C00}4 &\it\color{#C00}5 &\it\color{#C00}6 \\ \hline \it\color{#C00} 0 &\it 3 &\it 4 &\it 5 &\it 6 &\it 0 &\it 1 &\it 2 \\ \hline \it\color{#C00} 1 &\it 4 &\it 5 &\it 6 &\it 0 &\it 1\, &\it 2 &\it 3 \\ \hline\it\color{#C00} 2 &\it 5 &\it 6 &\it 0 &\it 1\, &\it 2 &\it 3 &\it 4 \\ \hline\it\color{#C00} 3 &\it 6 &\it 0 &\it 1\, &\it 2 &\it 3 &\it 4 &\it 5 \\ \hline\it\color{#C00} 4 &\it 0 &\it 1\, &\it 2 &\it 3 &\it 4 &\it 5 &\it 6 \\ \hline\it\color{#C00} 5 &\it 1\, &\it 2 &\it 3 &\it 4 &\it 5 &\it 6 &\it 0 \\ \hline\it\color{#C00} 6 &\it 2 &\it 3 &\it 4 &\it 5 &\it 6 &\it 0 &\it 1\, \\ \hline\end{array}\ \ \begin{array}{c}\xrightarrow[\large \ \it N\ \to\,\rm N+3\ ]{\large \rm unlabel\,\ \mu}\\\\ \\\xleftarrow[\large \ \it N-3\ \leftarrow\, \rm N\ ]{\large \rm label\,\ \ell}\end{array}\ \ \begin{array}{|c|c|c|c|c|c|c|c|} \hline\color{#C00}+ &\color{#C00} 3 &\color{#C00} 4 &\color{#C00} 5 &\color{#C00} 6 &\color{#C00} 0 &\color{#C00} 1 &\color{#C00} 2 \\ \hline\color{#C00}3 & 6 & 0 & 1 & 2 & 3 & 4 & 5 \\ \hline\color{#C00}4 & 0 & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline\color{#C00}5 & 1 & 2 & 3 & 4 & 5 & 6 & 0 \\ \hline\color{#C00}6 & 2 & 3 & 4 & 5 & 6 & 0 & 1 \\ \hline\color{#C00}0 & 3 & 4 & 5 & 6 & 0 & 1 & 2 \\ \hline\color{#C00}1 & 4 & 5 & 6 & 0 & 1 & 2 & 3 \\ \hline\color{#C00}2 & 5 & 6 & 0 & 1 & 2 & 3 & 4 \\ \hline\end{array}$$
Note that the addition table on the right is the table for the operation of addition mod $7$, except the rows and columns have been reordered (shifted by $3$). Thus the two addition tables are essentially the same, i.e. they differ only in the names chosen for the elements. This is the sense of isomorphism that is captured by the notion of isomorphic groups, i.e. the two groups have exactly the same operation tables after a (renaming) bijection is applied to the elements. The notion of isomorphism is defined so that the algebraic structure is determined completely by the operation tables, i.e. the only properties of the elements that we care about algebraically are how the elements relate to each other under the operations. Any other (internal) structure the elements may possess (names, set-theoretic representation, etc.) plays no role algebraically.
Similarly we can transport the group structure along any permutation $\,\ell\,$ of $\,\Bbb Z/7$, and we can label or index any finite group by natural numbers (e.g. which might be addresses in computer memory, where (un)label operations amounts to memory (de)references). |
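A tiny mechanical check of the transported operations (a sketch in Python; the function names just mirror the labels above):

```python
# label: l(n) = n - 3 (mod 7); unlabel: mu(a) = a + 3 (mod 7)
l = lambda n: (n - 3) % 7
mu = lambda a: (a + 3) % 7

oplus = lambda a, b: l(mu(a) + mu(b))   # a (+) b := l(mu(a) + mu(b))
ominus = lambda a: l(-mu(a))            # (-) a  := l(-mu(a))

for a in range(7):
    assert ominus(a) == (1 - a) % 7                       # matches -6 - a = 1 - a (mod 7)
    for b in range(7):
        assert oplus(a, b) == (a + b + 3) % 7             # the closed form derived above
        assert mu(oplus(a, b)) == (mu(a) + mu(b)) % 7     # mu is a homomorphism
print("transported structure checks out")
```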
WP 34S vs. DM42 decimal128 differences?
03-20-2018, 09:15 AM (This post was last modified: 03-27-2018 09:51 AM by rkf.)
Post: #1
WP 34S vs. DM42 decimal128 differences?
Today I stumbled about a footnote in Walter's WP 34S Owner's manual, where at Page 319 the result of the Calculator Forensics Test is mentioned. To my big surprise, the DP difference between the test result, and 9, is
for WP 34S: -6.2465E-29
for DM42: -6.2466E-29
But why the difference of 1 ULP? For the other example (1.0000001^2^27), there isn't any difference between the two models, BTW.
03-20-2018, 09:27 AM
Post: #2
RE: WP 34S vs. DM42 decimal128 differences?
(03-20-2018 09:15 AM)rkf Wrote: Today I stumbled about a footnote in Walter's WP 34S Owner's manual, where at Page 319 the result of the Calculator Forensics Test is mentioned. To my big surprise, the DP difference between the test result, and 9, is
Both calculators use 34-digit precision, but this does not mean that they return the same results. The trig functions may be implemented differently, and there is no claim (at least for the 34s) that all 34 digits are correct. The 34s also has different round modes that can be set, which will affect the result as well.
Dieter
03-20-2018, 09:35 AM
Post: #3
RE: WP 34S vs. DM42 decimal128 differences?
I believe the 34S is correct here. Free42 uses the Intel decimal library which rounds some functions incorrectly.
Well, the 34S used to get this correct but there were some changes to the trig code to better handle cases near multiples of \( \frac{\pi}{2} \), it is possible they threw it off.
Pauli
03-26-2018, 09:51 PM
Post: #4
RE: WP 34S vs. DM42 decimal128 differences?
This is addressed to Pauli:
I've noticed that sin (pi radians) in double-precision mode on the WP34S is only correct to 17 significant figures. This disturbs me more than I would have expected (given that I normally have the calculator set to give me answers to 4 sf only!).
If I change SINCOSDIGITS in decn.c to 69, all calculated digits are correct. Trig functions are slower (more so in double precision than in single precision).
If I use 69 digits in the function sincosTaylor() when the calculator is in double precision mode, but 39 digits in single precision, I seem to get the best of both worlds - correct answers in both single and double precision, with no slow-down in single-precision mode.
Is this a change worth making (once I've done some more extensive testing, and looked at the code for tan x as well)? Or is there a downside other than speed to using these extra digits?
Nigel (UK)
03-26-2018, 10:43 PM
Post: #5
RE: WP 34S vs. DM42 decimal128 differences?
(03-26-2018 09:51 PM)Nigel (UK) Wrote: Is this a change worth making (once I've done some more extensive testing, and looked at the code for tan x as well)? Or is there a downside other than speed to using these extra digits?
I'd be concerned about overflowing the stack in other functions that call this. E.g. the gamma code can call this and gamma in turn is called from the statistical routines which are close to the limit already.
Using a smaller number of digits in single precision mode might also fall awry of this function's use elsewhere. If later digits carry an error they could cascade into the first sixteen.
I'll have to have a think about it...
Pauli
03-27-2018, 06:10 AM
Post: #6
RE: WP 34S vs. DM42 decimal128 differences?
(03-26-2018 09:51 PM)Nigel (UK) Wrote: This is addressed to Pauli:
Although I'm not Pauli but the original poster, I don't get the point here. Typing pi sin in Rad mode gives, on both the WP 34S (DBLON) and the DM42, a result of about -1.158E-34.
For me as a plain user this result seems to have nevertheless 34 significant figures, all of them zeroes (one before the decimal point, and 33 thereafter).
Or do you compare the (exact) result of taking sin of a number, which is exactly pi rounded to 34 significant figures (thus to be expected in the ballpark around +/- 1E-34) with the result the calculator gives, and then -1.158028306006248941790250554076922E-34 itself is only correct to 17 figures? But that would be no problem at all for me.
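One way to see the distinction concretely is to redo the computation in arbitrary precision (a sketch, assuming Python with mpmath): round π to 34 significant decimal digits, then take the sine of that rounded value at a much higher working precision.

```python
from mpmath import mp, mpf, sin, nstr

mp.dps = 60                      # work well beyond 34 digits
pi_34 = mpf(nstr(mp.pi, 34))     # pi rounded to 34 significant decimal digits
print(nstr(sin(pi_34), 34))      # approximately -1.158028306006...e-34
```

The result is the small residual of π minus its 34-digit rounding, which matches the value the calculators display.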
03-27-2018, 07:58 AM
Post: #7
RE: WP 34S vs. DM42 decimal128 differences?
(03-27-2018 06:10 AM)rkf Wrote: Or do you compare the (exact) result of taking sin of a number, which is exactly pi rounded to 34 significant figures (thus to be expected in the ballpark around +/- 1E-34) with the result the calculator gives, and then -1.158028306006248941790250554076922E-34 itself is only correct to 17 figures? But that would be no problem at all for me.
I'm counting significant figures from the first non-zero digit - i.e., what you say above.
I agree that it isn't likely to be a problem in any real-world application of the calculator. However, given that the WP34S has a double-precision mode it would be nice if the result was also correct to this level of precision, so long as this can be done without breaking the calculator's behaviour elsewhere. If you calculate sin (3.1415926536) on the HP28S (for example) you will get an answer correct to 12 non-zero significant digits, and Pauli has already adjusted the WP34S code to get cos (pi/2) (which is a similar case) correct to 34 sf. Most of the time the trig functions are correct to this level of precision, so trying to bring sin (pi) up to the same standard isn't unreasonable.
I think it would be nice to fix it, if it can be fixed.
Nigel (UK)
03-27-2018, 08:22 AM
Post: #8
RE: WP 34S vs. DM42 decimal128 differences?
(03-27-2018 07:58 AM)Nigel (UK) Wrote: ... I'm counting significant figures from the first non-zero digit - i.e., what you say above. ...
OK - thanks for the clarification! BTW, this very situation is discussed in detail at p. 184 of the HP-15C Advanced Functions Handbook (although for the HP-15C's 10 shown and 13 internal digits - thus giving, depending on interpretation, either ten or three significant digits). Since the WP 34S and DM42 behave with respect to sin(pi) in complete accordance with the HP manual cited above, I'm fine with the status quo. :-)
03-27-2018, 09:42 AM
Post: #9
RE: WP 34S vs. DM42 decimal128 differences?
03-29-2018, 04:03 PM
Post: #10
RE: WP 34S vs. DM42 decimal128 differences?
To Pauli, and anyone else interested in this:
I've got full precision working correctly with just one 69-digit decNumber - x, in cvt_2rad_sincos. sincosTaylor itself isn't changed; its arguments are still pointers to SINCOS-digit decNumbers and the calculations in it are carried out to this precision.
I divide radian arguments to cvt_2rad_sincos by \(2\pi\) to 69-digit precision, and then do range reduction as with degrees and grads using fractions of a full circle as the thresholds. This means that any radian arguments that have a very small sine or cosine are mapped onto angles near zero, and so 69-digit precision isn't needed to calculate these functions correctly. I've included the constant 0.125 in consts.h and compile_consts.c for use as the \(45^\circ\) threshold.
I don't know whether the memory requirements will still be a problem. I've attached the modified source files in case you want to try them out yourself - if you don't, or you have a better approach of your own, that's fine as well!
Nigel (UK)
03-29-2018, 11:46 PM
Post: #11
RE: WP 34S vs. DM42 decimal128 differences?
An interesting approach, one I had considered when I added the range reduction to these functions. My concern, then and now, is that e.g. \( \frac{\pi}4 - x \) cannot be represented exactly and that this could change the result in some cases. You are dividing by \( 2\pi \) so the later range reduction will be exact, but this division step and the later inverse would be where the equivalent error occurs.
It is possible that at some level of extended precision, the errors become insignificant. Determining and proving this wouldn't be trivial.
It's quite a difficult problem. Still, it is an approach that's worth chasing.
Pauli
03-30-2018, 12:11 PM
Post: #12
RE: WP 34S vs. DM42 decimal128 differences?
(03-29-2018 11:46 PM)Paul Dale Wrote: An interesting approach, one I had considered when I added the range reduction to these functions. My concern, then and now, is that e.g. \( \frac{\pi}4 - x \) cannot be represented exactly and that this could change the result in some cases. You are dividing by \( 2\pi \) so the later range reduction will be exact, but this division step and the later inverse would be where the equivalent error occurs.
I'm sure that I'm missing some of the subtleties but the situation doesn't seem too bad to me.
Incidentally, calculating \(\sin(\pi)\) with a 34-digit value of \(\pi\), using newRPL with 34 digits of precision, gives an answer also correct to 34 digits. Well done Claudio!
Nigel (UK)
03-30-2018, 04:19 PM
Post: #13
RE: WP 34S vs. DM42 decimal128 differences?
(03-30-2018 12:11 PM)Nigel (UK) Wrote: Incidentally, calculating \(\sin(\pi)\) with a 34-digit value of \(\pi\), using newRPL with 34 digits of precision, gives an answer also correct to 34 digits. Well done Claudio!
Thanks, but there's not much glory on that achievement. It simply uses \(\pi\) with twice the current precision, so the range reduction produces the angle with full required precision. It's easy to achieve when the whole system has variable precision.
Now if you set the system to maximum precision:
Code:
Now what do you see? Only 22 good digits (thanks to a few extra guard digits beyond 2000), but certainly not 2000 good digits as you would've expected. At the limit of the system precision, newRPL has the same issues of the wp34s. The only solution I found was to use 4000 digits for pi, but what's the point: if the system works with 4000 digits, then I'd rather let the user use them all, just warning them that if you use more than half of that, don't expect all corner cases to be accurate. Due to memory limitations, I chose 2000 digits as the system limit. If you use 1000 digits or less, all functions are guaranteed to give you correctly rounded results in all corner cases with the 1000 digits precision you expect.
On the wp34s it's exactly the same: they used double precision to guarantee single precision on all corner cases. But since it's available, why not let the user access it? after all it's good for 99% of the cases. Now people want perfect results at double precision, so Paul needs quad precision, and then why not let the user use quad?... and here we go again.
03-30-2018, 04:44 PM
Post: #14
RE: WP 34S vs. DM42 decimal128 differences?
Thank you for explaining the situation in newRPL, which is an extraordinary achievement.
In the WP34S source code \(2\pi\) is already present as a constant to 450 decimal places (or thereabouts). So why not use it to get double precision answers for trig functions of double precision arguments? All it seems to take is a few code changes and one quad precision variable in one function (although whether the calculator has the RAM needed to cope with this remains an open question).
I understand that in the absence of variable-precision arithmetic, chasing perfection is a way without an end. I'm also sure that Pauli has far more important things to think about. But in this case it seems that a few simple changes might make it work, and if so, why not?
Nigel (UK)
03-30-2018, 11:26 PM
Post: #15
RE: WP 34S vs. DM42 decimal128 differences?
(03-30-2018 04:44 PM)Nigel (UK) Wrote: In the WP34S source code \(2\pi\) is already present as a constant to 450 decimal places (or thereabouts). So why not use it to get double precision answers for trig functions of double precision arguments?
The many decimals are required to get accurate answers in single precision. E.g. try \(\sin(10^{100})\). To get the same for double precision requires thousands of digits -- I'd guess about 8500 give or take. Making double precision trigonometric functions accurate across their entire range isn't feasible on the hardware, there isn't enough memory.
Still, you made a good argument for improving them where they are more typically used. I'll have to think it through in more detail. I'll also have to work out the worst case memory usage for functions that can call sine or cosine to see if the extra space required will fit. We were a few bytes from not fitting the stack in.
Pauli
|
In my algebra book they define for field extensions $L/K_1/F$ and $L/K_2/F$ the field $K_1K_2 = K_1(K_2)$.
The formal definition I have for this is $$ K_1(K_2) = \bigcap_{K_2 < E < L} E \quad \text{where } E/K_1. $$ So the smallest field extension of $K_1$ containing $K_2$.
For extensions such as $F(\alpha)$ though sometimes it is written, $$ F(\alpha) = \{ a_0 + a_1\alpha + a_2\alpha^2 + \cdots + a_{n - 1}\alpha^{n - 1} : a_i \in F\} $$
Can I extend this definition for $K_1(K_2)$ and write the following? $$ K_1(K_2) = \left\{ \sum_{i = 1}^n \sum_{j = 1}^{m_i} a_{ij}b_i^j : a_{ij} \in K_1, b_i \in K_2, n, m_i \in \mathbb{N} \right\}$$
EDIT: $K_1, K_2$ are algebraic. |
Consider a differential equation of type
\[y^{\prime\prime} + py^{\prime} + qy = 0,\]
where \(p, q\) are some constant coefficients.
For each of the equation we can write the so-called characteristic (auxiliary) equation:
\[{k^2} + pk + q = 0.\]
The general solution of the homogeneous differential equation depends on the roots of the characteristic quadratic equation. There are the following options:
Discriminant of the characteristic quadratic equation \(D \gt 0.\) Then the roots of the characteristic equations \({k_1}\) and \({k_2}\) are real and distinct. In this case the general solution is given by the following function \[{y\left( x \right) }={ {C_1}{e^{{k_1}x}} + {C_2}{e^{{k_2}x}},}\]where \({C_1}\) and \({C_2}\) are arbitrary real numbers. Discriminant of the characteristic quadratic equation \(D = 0.\) Then the roots are real and equal. It is said in this case that there exists one repeated root \({k_1}\) of order 2. The general solution of the differential equation has the form:\[{y\left( x \right) }={ \left( {{C_1}x + {C_2}} \right){e^{{k_1}x}}.}\] Discriminant of the characteristic quadratic equation \(D \lt 0.\) Such an equation has complex roots \({k_1} = \alpha + \beta i,\) \({k_2} = \alpha – \beta i.\) The general solution is written as\[{y\left( x \right) }={ {e^{\alpha x}}\left[ {{C_1}\cos \left( {\beta x} \right) }\right.}+{\left.{ {C_2}\sin \left( {\beta x} \right)} \right].}\] Solved Problems
Click a problem to see the solution.
Example 1: Solve the differential equation \(y^{\prime\prime} - 6y^{\prime} + 5y = 0.\)
Example 2: Find the general solution of the equation \(y^{\prime\prime} - 6y^{\prime} + 9y = 0.\)
Example 3: Solve the differential equation \(y^{\prime\prime} + 4y^{\prime} + 5y = 0.\)
Example 4: Solve the equation \(y^{\prime\prime} + 25y = 0.\)
Example 5: Solve the equation \(y^{\prime\prime} + 4iy = 0.\)
Example 1. Solve the differential equation \(y^{\prime\prime} - 6y^{\prime} + 5y = 0.\)
Solution.
First we write the corresponding characteristic equation for the given differential equation:
\[k^2 - 6k + 5 = 0.\]
The roots of this equation are \({k_1} = 1,\) \({k_2} = 5.\) Since the roots are real and distinct, the general solution has the form:
\[{y\left( x \right) }={ {C_1}{e^x} + {C_2}{e^{5x}},}\]
where \({C_1}\) and \({C_2}\) are arbitrary constants. |
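The worked example can also be cross-checked symbolically (a sketch, assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# the characteristic equation and its roots
k = sp.symbols('k')
print(sp.solve(k**2 - 6*k + 5, k))   # [1, 5]

# the general solution of y'' - 6y' + 5y = 0
ode = sp.Eq(y(x).diff(x, 2) - 6*y(x).diff(x) + 5*y(x), 0)
print(sp.dsolve(ode, y(x)))          # y(x) = C1*exp(x) + C2*exp(5*x)
```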
Let $a \in \Bbb{R}$. Let $\Bbb{Z}$ act on $S^1$ via $(n,z) \mapsto ze^{2 \pi i \cdot an}$. Claim: This action is not free if and only if $a \in \Bbb{Q}$. Here's an attempt at the forward direction: If the action is not free, there is some nonzero $n$ and $z \in S^1$ such that $ze^{2 \pi i \cdot an} = 1$. Note $z = e^{2 \pi i \theta}$ for some $\theta \in [0,1)$.
Then the equation becomes $e^{2 \pi i(\theta + an)} = 1$, which holds if and only if $2\pi (\theta + an) = 2 \pi k$ for some $k \in \Bbb{Z}$. Solving for $a$ gives $a = \frac{k-\theta}{n}$...
What if $\theta$ is irrational...what did I do wrong?
'cause I understand that second one but I'm having a hard time explaining it in words
(Re: the first one: a matrix transpose "looks" like the equation $Ax\cdot y=x\cdot A^\top y$. Which implies several things, like how $A^\top x$ is perpendicular to $A^{-1}x^\top$ where $x^\top$ is the vector space perpendicular to $x$.)
DogAteMy: I looked at the link. You're writing garbage with regard to the transpose stuff. Why should a linear map from $\Bbb R^n$ to $\Bbb R^m$ have an inverse in the first place? And for goodness sake don't use $x^\top$ to mean the orthogonal complement when it already means something.
he based much of his success on principles like this I cant believe ive forgotten it
it's basically saying that it's a waste of time to throw a parade for a scholar or win he or she over with compliments and awards etc but this is the biggest source of sense of purpose in the non scholar
yeah there is this thing called the internet and well yes there are better books than others you can study from provided they are not stolen from you by drug dealers you should buy a text book that they base university courses on if you can save for one
I was working from "Problems in Analytic Number Theory" Second Edition, by M.Ram Murty prior to the idiots robbing me and taking that with them which was a fantastic book to self learn from one of the best ive had actually
Yeah I wasn't happy about it either it was more than $200 usd actually well look if you want my honest opinion self study doesn't exist, you are still being taught something by Euclid if you read his works despite him having died a few thousand years ago but he is as much a teacher as you'll get, and if you don't plan on reading the works of others, to maintain some sort of purity in the word self study, well, no you have failed in life and should give up entirely. but that is a very good book
regardless of you attending Princeton university or not
yeah me neither you are the only one I remember talking to on it but I have been well and truly banned from this IP address for that forum now, which, which was as you might have guessed for being too polite and sensitive to delicate religious sensibilities
but no it's not my forum I just remembered it was one of the first I started talking math on, and it was a long road for someone like me being receptive to constructive criticism, especially from a kid a third my age which according to your profile at the time you were
i have a chronological disability that prevents me from accurately recalling exactly when this was, don't worry about it
well yeah it said you were 10, so it was a troubling thought to be getting advice from a ten year old at the time i think i was still holding on to some sort of hopes of a career in non stupidity related fields which was at some point abandoned
@TedShifrin thanks for that in bookmarking all of these under 3500, is there a 101 i should start with and find my way into four digits? what level of expertise is required for all of these is a more clear way of asking
Well, there are various math sources all over the web, including Khan Academy, etc. My particular course was intended for people seriously interested in mathematics (i.e., proofs as well as computations and applications). The students in there were about half first-year students who had taken BC AP calculus in high school and gotten the top score, about half second-year students who'd taken various first-year calculus paths in college.
long time ago tho even the credits have expired not the student debt though so i think they are trying to hint i should go back a start from first year and double said debt but im a terrible student it really wasn't worth while the first time round considering my rate of attendance then and how unlikely that would be different going back now
@BalarkaSen yeah from the number theory i got into in my most recent years it's bizarre how i almost became allergic to calculus i loved it back then and for some reason not quite so when i began focusing on prime numbers
What do you all think of this theorem: The number of ways to write $n$ as a sum of four squares is equal to $8$ times the sum of divisors of $n$ if $n$ is odd and $24$ times sum of odd divisors of $n$ if $n$ is even
A proof of this uses (basically) Fourier analysis
Even though it looks rather innocuous albeit surprising result in pure number theory
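That identity (Jacobi's four-square theorem) is easy to sanity-check for small $n$ (a plain-Python sketch; the brute force counts ordered, signed 4-tuples):

```python
from itertools import product

def r4(n):
    """Number of ordered, signed tuples (a, b, c, d) with a^2 + b^2 + c^2 + d^2 = n."""
    m = int(n ** 0.5)
    return sum(1 for t in product(range(-m, m + 1), repeat=4)
               if sum(v * v for v in t) == n)

def jacobi_r4(n):
    divs = [d for d in range(1, n + 1) if n % d == 0]
    return 8 * sum(divs) if n % 2 else 24 * sum(d for d in divs if d % 2)

for n in range(1, 21):
    assert r4(n) == jacobi_r4(n)
print("checked n = 1..20")
```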
@BalarkaSen well because it was what Wikipedia deemed my interests to be categorized as i have simply told myself that is what i am studying, it really starting with me horsing around not even knowing what category of math you call it. actually, ill show you the exact subject you and i discussed on mmf that reminds me you were actually right, i don't know if i would have taken it well at the time tho
yeah looks like i deleted the stack exchange question on it anyway i had found a discrete Fourier transform for $\lfloor \frac{n}{m} \rfloor$ and you attempted to explain to me that is what it was that's all i remember lol @BalarkaSen
oh and when it comes to transcripts involving me on the internet, don't worry, the younger version of you most definitely will be seen in a positive light, and just contemplating all the possibilities of things said by someone as insane as me, agree that pulling up said past conversations isn't productive
absolutely me too but would we have it any other way? i mean i know im like a dog chasing a car as far as any real "purpose" in learning is concerned i think id be terrified if something didnt unfold into a myriad of new things I'm clueless about
@Daminark The key thing, if I remember correctly, was that if you look at the subgroup $\Gamma$ of $\text{PSL}_2(\Bbb Z)$ generated by (1, 2|0, 1) and (0, -1|1, 0), then any holomorphic function $f : \Bbb H^2 \to \Bbb C$ invariant under $\Gamma$ (in the sense that $f(z + 2) = f(z)$ and $f(-1/z) = z^{2k} f(z)$, where $2k$ is called the weight) such that the Fourier expansions of $f$ at infinity and at $-1$ have no constant coefficients is called a cusp form (on $\Bbb H^2/\Gamma$).
The $r_4(n)$ thing follows as an immediate corollary of the fact that the only weight $2$ cusp form is identically zero.
I can try to recall more if you're interested.
It's insightful to look at the picture of $\Bbb H^2/\Gamma$... it's like, take the line $\Re[z] = 1$, the semicircle $|z| = 1, z > 0$, and the line $\Re[z] = -1$. This gives a certain region in the upper half plane
Paste those two lines, and paste half of the semicircle (from -1 to i, and then from i to 1) to the other half by folding along i
Yup, that $E_4$ and $E_6$ generates the space of modular forms, that type of things
I think in general if you start thinking about modular forms as eigenfunctions of a Laplacian, the space generated by the Eisenstein series are orthogonal to the space of cusp forms - there's a general story I don't quite know
Cusp forms vanish at the cusp (those are the $-1$ and $\infty$ points in the quotient $\Bbb H^2/\Gamma$ picture I described above, where the hyperbolic metric gets coned off), whereas given any values on the cusps you can make a linear combination of Eisenstein series which takes those specific values on the cusps
So it sort of makes sense
Regarding that particular result, saying it's a weight 2 cusp form is like specifying a strong decay rate of the cusp form towards the cusp. Indeed, one basically argues like the maximum value theorem in complex analysis
@BalarkaSen no you didn't come across as pretentious at all, i can only imagine being so young and having the mind you have would have resulted in many accusing you of such, but really, my experience in life is diverse to say the least, and I've met know it all types that are in everyway detestable, you shouldn't be so hard on your character you are very humble considering your calibre
You probably don't realise how low the bar drops when it comes to integrity of character is concerned, trust me, you wouldn't have come as far as you clearly have if you were a know it all
it was actually the best thing for me to have met a 10 year old at the age of 30 that was well beyond what ill ever realistically become as far as math is concerned someone like you is going to be accused of arrogance simply because you intimidate many ignore the good majority of that mate |
I am reading the article by Lawrence and Venkatesh on diophantine problems and $p$-adic period mappings. At page $35$ they say that the dimension of the Prym variety of an (unramified) cover of curves $C_1' \to C_2$ of degree $q$ is
$$(2g-1) \cdot \frac{q-1}{2},$$
where $g$ is the genus of $C_2$. The Prym variety is defined as $$\text{coker}(\text{Pic}^0(C_2) \to \text{Pic}^0(C_1')).$$
Riemann-Hurwitz tells us that the genus of $C_1'$ is
$$g' = q(g-1)+1.$$ Since the cover is surjective, I expected the map of Jacobians to be injective (is this true?). If that is the case, the dimension of the cokernel should be just the difference of the dimensions of the Jacobians, which are just the genera of the curves:
$$ g'-g=(q-1)(g-1).$$
This is off by $\frac{q-1}{2}$ with respect to the correct dimension. What am I doing wrong? |
I have two continuous variables $x_1$ and $x_2$, such that their sum is a constant: $x_1+x_2=c$. Clearly, I cannot run the following OLS model due to perfect multicollinearity:
Model (1): $y = \alpha + \beta_1 x_1 + \beta_2 x_2 + \epsilon$
If I run the Model (2) below, $\beta_1$ is significantly negative:
Model (2): $y = \alpha + \beta_1 x_1 + \epsilon$
I have reason to believe that $x_2$ is also a determinant of $y$ and must be in the regression model alongside $x_1$. In Model (3), which constrains the intercept to zero, $\beta_1$ is significantly positive:
Model (3): $y = \beta_1 x_1 + \beta_2 x_2 + \epsilon$
Models (2) and (3) reach opposite conclusions regarding $\beta_1$. Model (3) yields the theoretically predicted result ($\beta_1>0$). My question is whether I can rely on Model (3), since it excludes the intercept in order to include $x_2$ alongside $x_1$.
To address the comments, think of $c$ as the size of a pie that is equal for everyone, and each individual slices the pie into two pieces $x_1$ and $x_2$, such that $x_1+x_2=c$. What I'm investigating is whether one type of slice $x_2$ matters more than the other $x_1$. In other words, whether $\beta_2>\beta_1$ in Model (3).
To be more specific, $c$ is the average lottery return, which is decomposed into two parts for each individual as follows:
$c = o_i r_{oi} + p_i r_{pi}$ where $o_i$ ($p_i$) is the proportion of lotteries individual $i$ observes (plays), such that $o_i+p_i=1$, and $r_{oi}$ ($r_{pi}$) is the average return of the lotteries that are observed (played) by individual $i$. Finally, $x_1\equiv o_i r_{oi}$ and $x_2\equiv p_i r_{pi}$, such that $x_1+x_2=c$; and $y$ is the future participation rate in the lotteries.
Rational learning theories predict that individuals will give equal importance to the returns that they observe ($x_1$) versus those that they experience ($x_2$): $\beta_1=\beta_2>0$. On the other hand, reinforcement learning theories predict personally experienced outcomes matter more: $\beta_2>\beta_1>0$. That is why I'm trying to find out whether Model (3) is an appropriate way of testing these two theories.
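A small simulation may make the setup concrete. The sketch below is only illustrative and uses made-up data and coefficients (none of it comes from the actual lottery study): it generates $x_1$ and $x_2$ with $x_1+x_2=c$ and then fits Model (2) and Model (3) with ordinary least squares via numpy (statsmodels would work equally well).

import numpy as np

rng = np.random.default_rng(0)
n, c = 500, 10.0                       # sample size and the common "pie" size c (hypothetical)
x1 = rng.uniform(0, c, n)              # one slice of the pie
x2 = c - x1                            # the other slice, so x1 + x2 = c exactly
y = 0.2 * x1 + 0.5 * x2 + rng.normal(0, 1, n)   # hypothetical truth with beta2 > beta1 > 0

# Model (2): y = alpha + b1*x1 + eps  (x2 omitted, intercept included)
X2 = np.column_stack([np.ones(n), x1])
alpha2, b1_model2 = np.linalg.lstsq(X2, y, rcond=None)[0]

# Model (3): y = b1*x1 + b2*x2 + eps  (no intercept, both slices included)
X3 = np.column_stack([x1, x2])
b1_model3, b2_model3 = np.linalg.lstsq(X3, y, rcond=None)[0]

print("Model (2):", alpha2, b1_model2)   # b1 absorbs the omitted x2: about 0.2 - 0.5 = -0.3
print("Model (3):", b1_model3, b2_model3)  # recovers roughly 0.2 and 0.5

In this toy run Model (2) produces a negative slope on $x_1$ even though the true $\beta_1$ is positive, which mirrors the sign flip described above; whether Model (3) is the right test for the two theories is a separate question.
|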
A conductor has a resistance of 0.001 Ohms at room temperature. If the temperature is increased by 100 degrees Celsius, how greatly would that affect the conductor's resistance? I'm trying to find the correlation between temperature and wire resistance.
Typically, for a conductor, the coefficient that relates the resistivity of the material with temperature is positive. This means that when the temperature increases, the resistivity increases as well.
Here you can find a table with various coefficients, and an online calculator.
The relationship can be described by
$$ \rho = \rho_0\left(1 + \alpha\,\Delta T\right) $$
where \$\rho\$ is the resistivity after the temperature change, \$\alpha\$ is the thermal coefficient for the material, \$\Delta T\$ is the temperature increase and \$\rho_0\$ is the original resistivity.
The resistance of a conductor is
$$ R = \rho\cdot\dfrac{L}{A} $$
where \$R\$ is the resistance, \$\rho\$ the resistivity, \$L\$ the length of the conductor and \$A\$ the cross-section area.
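As a rough numerical illustration (a sketch only; the 0.001 Ohm value is from the question, while the temperature coefficient is an assumed, typical value for copper near room temperature), and since $L$ and $A$ are unchanged, the resistance scales with the resistivity:

# Rough estimate of resistance change with temperature, R(T) = R0 * (1 + alpha * dT).
# Assumes the coefficient alpha stays constant over the range, which is only approximate.
R0 = 0.001        # ohms at room temperature (from the question)
alpha = 0.0039    # per degC, assumed typical value for copper near room temperature
dT = 100.0        # temperature increase in degC

R = R0 * (1 + alpha * dT)
print(f"R rises from {R0:.6f} ohm to {R:.6f} ohm (about {100 * alpha * dT:.0f}% higher)")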
Elemental metals (e.g. Copper) generally increase in resistance at about +0.4% per Kelvin near room temperature. Alloys are often less. For hundreds of degrees C the temperature coefficient will not be a constant and you'll need to find tables or graphs to get an accurate answer for the particular material.
The temperature linearly increases the resistivity of the material, assuming it doesn't get too cold or so hot that the material begins to liquefy.
Let's say the resistivity of a material is given by $$ \rho_{Total} = \rho_{T} + \rho_1 $$ where $\rho_T$ is the resistivity due to temperature and $\rho_1$ is the normal resistivity.
Since resistivity is inversely proportional to the mobility of a material, $$\rho_{T}= \frac{1}{\sigma_T}=\frac{1}{en\mu_d}$$ where \$\sigma\$ is the conductivity, $n$ is the number of free electrons per unit volume, and \$\mu_d\$ is the mobility of the material.
The key point is that $\mu_d$ is inversely proportional to temperature, which then implies that the resistivity is directly proportional to temperature:
$$\rho_T = AT$$ where $A$ is a constant. Different materials will of course have different constants $A$, but the underlying theory that temperature increases resistivity still holds.
It depends on the material of the wire. Resistivity tables showing how the resistance of various materials vary with temperature are readily available online. |
Choosing your system carefully and drawing free-body diagrams is crucial!
It is often easiest to start with defining your system as one big block. You can do this because the engine and wagons are all connected by rigid steel locks that cannot compress or stretch, meaning that if one car moves, the others have to move along at the
exact same rate.
If we define our system as one big block, the total mass is $m = 70000 \text{ kg}$. Since we know $a$, we can easily find that $F = 14000 \text{ N}$ in the horizontal direction. Since the engine is the only object in our system that can generate force, we know that $F$ is the force exerted by the engine.
Now look only at car 2, the car at the end of the train. The force acting on it in the horizontal direction is
not $F$. It is actually a different force, $T_2$, exerted by the steel lock between car 1 and car 2. We still know, however, that all cars accelerate at $0.2 \text{ m/s}^2$. Therefore:\begin{align}\sum F &= ma \\T_2 &= \left(2 \times 10^4 \text{ kg}\right)\left(0.2 \text{ m/s}^2\right) \\&= 4000 \text{ N} \\\end{align}
. . . meaning the force experienced by the last car is quite different from the force exerted by the engine!
Now look only at car 1, the car in the middle. By Newton's third law, it is being pulled backwards with force $T_2$ from car 2. However, it is also being pulled forward by a force $T_1$ exerted by the steel lock between car 1 and the engine. Thus:
\begin{align}\sum F &= ma \\T_1 - T_2 &= \left(2 \times 10^4 \text{ kg}\right)\left(0.2 \text{ m/s}^2\right) \\T_1 &= 8000 \text{ N} \\\end{align}
Finally, let's check our work by looking at a system that only includes the engine. It is experiencing $F$ from itself in the positive direction. By Newton's third law, it is also being pulled backwards by $T_1$ from car 1.\begin{align}\sum F &= ma \\F - T_1 &= \left(3 \times 10^4 \text{ kg}\right)a \\a &= 0.2 \text{ m/s}^2\\\end{align}
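For completeness, here is a tiny numerical restatement of the same bookkeeping (a Python sketch; the masses and the acceleration are exactly the ones used above):

# Verify the train example: engine (3e4 kg) pulling car 1 (2e4 kg) and car 2 (2e4 kg) at a = 0.2 m/s^2.
m_engine, m_car1, m_car2 = 3e4, 2e4, 2e4
a = 0.2

F = (m_engine + m_car1 + m_car2) * a      # force produced by the engine on the whole train
T2 = m_car2 * a                           # coupling force on the last car
T1 = T2 + m_car1 * a                      # coupling force between the engine and car 1

print(F, T1, T2)                          # 14000.0 8000.0 4000.0
print((F - T1) / m_engine)                # 0.2 -> the engine's acceleration is consistent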
Sure enough, we have found the correct acceleration. |
Order of Contact of Plane Curves
Let \(y = f\left( x \right)\) and \(y = g\left( x \right)\) be two plane curves, which osculate at the point \({M_0}\left( {{x_0},{y_0}} \right)\) \(\left({\text{Figure }1}\right)\) and have derivatives up to the \(\left( {n + 1} \right)\)th order inclusively.
It is said that the curves \(y = f\left( x \right)\) and \(y = g\left( x \right)\) have a contact of order \(n\) at the point \({M_0}\left( {{x_0},{y_0}} \right)\) if the following conditions hold:
\[
{f\left( {{x_0}} \right) = g\left( {{x_0}} \right),}\;\;\;\kern-0.3pt {f’\left( {{x_0}} \right) = g’\left( {{x_0}} \right),}\;\;\;\kern-0.3pt {f^{\prime\prime}\left( {{x_0}} \right) = g^{\prime\prime}\left( {{x_0}} \right),\ldots,}\;\;\;\kern-0.3pt {{f^{\left( n \right)}}\left( {{x_0}} \right) = {g^{\left( n \right)}}\left( {{x_0}} \right),}\;\;\;\kern-0.3pt {{f^{\left( {n + 1} \right)}}\left( {{x_0}} \right) \ne {g^{\left( {n + 1} \right)}}\left( {{x_0}} \right).} \]
In particular, if \(n = 1,\) the curves \(y = f\left( x \right)\) and \(y = g\left( x \right)\) have a common tangent line.
The case \(n = 0\) means that the curves have a common point \({M_0}\left( {{x_0},{y_0}} \right):\) \(f\left( {{x_0}} \right) = g\left( {{x_0}} \right),\) but their first derivatives do not coincide: \(f’\left( {{x_0}} \right) \ne g’\left( {{x_0}} \right).\) In this case, the curves simply intersect at the point \({M_0}.\)
We can consider the difference between the functions \(\varphi \left( x \right) = g\left( x \right) – f\left( x \right)\) in a neighborhood of the point \({x_0}\) and expand it in a Taylor series with Peano’s form of remainder. If the curves \(g\left( x \right)\) and \(f\left( x \right)\) have the \(n\)th order contact, then the first \(n\) terms of the series are zero and the difference \(\varphi \left( x \right)\) is represented as
\[
{\varphi \left( x \right) = \frac{{{\varphi ^{\left( {n + 1} \right)}}\left( {{x_0}} \right) + \alpha }}{{\left( {n + 1} \right)!}}{\left( {x – {x_0}} \right)^{n + 1}} } = {\frac{{{g^{\left( {n + 1} \right)}}\left( {{x_0}} \right) – {f^{\left( {n + 1} \right)}}\left( {{x_0}} \right) + \alpha }}{{\left( {n + 1} \right)!}}\cdot}\kern0pt{{\left( {x – {x_0}} \right)^{n + 1},}} \]
that is proportional to \({\left( {x – {x_0}} \right)^{n + 1}}.\) Consequently, for even values of \(n,\) the difference \(\varphi \left( x \right)\) has opposite signs to the left and right of the point of contact \({M_0}.\) The particular case \(n = 0\) was considered above.
For odd \(n,\) the curves \(y = f\left( x \right)\) and \(y = g\left( x \right)\) osculate each other at the point \({M_0}\) without mutual intersection.
Osculating Curve
Consider the following problem. Given the equation of a curve \(y = f\left( x \right)\) and a family of curves
\[G\left( {x,y,a,b, \ldots ,\ell} \right) = 0\]
with \(n + 1\) parameters \({a,b, \ldots ,\ell}.\) By changing the values of the parameters, choose a curve from the given family that has the highest possible order of contact with the curve \(y = f\left( x \right)\) at the point \({M_0}\left( {{x_0},{y_0}} \right).\) Such a curve is called the osculating curve.
We introduce the notation
\[\Phi \left( {x,a,b, \ldots ,l} \right) = G\left( {x,f\left( x \right),a,b, \ldots ,l} \right).\]
The osculation conditions are written as
\[\left\{ \begin{array}{l} \Phi \left( {{x_0},a,b, \ldots ,\ell} \right) = 0\\ {\Phi’_x}\left( {{x_0},a,b, \ldots ,\ell} \right) = 0\\ {\Phi^{\prime\prime}_{xx}}\left( {{x_0},a,b, \ldots ,\ell} \right) = 0\\ \cdots \cdots \cdots \cdots \cdots \cdots \cdots \\ \Phi _{{x^n}}^{\left( n \right)}\left( {{x_0},a,b, \ldots ,\ell} \right) = 0 \end{array} \right..\]
As a result, we have the system of \(n + 1\) equations with \(n + 1\) unknown values of the parameters. By solving this system, we find the parameters \({a,b, \ldots ,\ell}\) and the equation of the osculating curve. Usually its order of contact is not lower than \(n\) (in case of \(n + 1\) parameters). Thus, the order of contact of an osculating curve is usually one less than the number of parameters.
Osculating Circle
In this section we derive the equation of the osculating circle. Suppose we are given a function \(y = f\left( x \right)\) that is at least twice differentiable. The family of circles is described by the equation
\[{\left( {x – a} \right)^2} + {\left( {y – b} \right)^2} = {R^2}.\]
As can be seen, we are dealing here with three parameters: the coordinates of the center of the circle \(a, b\) and its radius \(R.\) It is clear that in this case the highest possible order of contact is equal to \(2.\)
Denoting
\[{\Phi \left( {x,a,b,R} \right)} = {{\left( {x – a} \right)^2} + {\left( {y – b} \right)^2} – {R^2},}\]
we write the derivatives of the function \(\Phi:\)
\[ {{{\Phi’_x}\left( {{x_0},a,b,R} \right)} = {2\left( {x – a} \right) + 2\left( {y – b} \right)y’,}}\;\;\;\kern-0.3pt {{{\Phi^{\prime\prime}_{xx}}\left( {{x_0},a,b,R} \right)} = {2 + 2{\left( {y’} \right)^2} + 2\left( {y – b} \right)y^{\prime\prime}.}} \]
Assuming that the curves osculate at the point \(\left( {{x_0},{y_0}} \right),\) we obtain the following system of three equations for finding the osculating circle.
\[ {\left\{ \begin{array}{l} \Phi \left( {{x_0},a,b,R} \right) = 0\\ {\Phi’_x}\left( {{x_0},a,b,R} \right) = 0\\ {\Phi^{\prime\prime}_{xx}}\left( {{x_0},a,b,R} \right) = 0 \end{array} \right.,\;\;}\Rightarrow {\left\{ \begin{array}{l} {\left( {{x_0} – a} \right)^2} + {\left( {{y_0} – b} \right)^2} – {R^2} = 0\\ 2\left( {{x_0} – a} \right) + 2\left( {{y_0} – b} \right){y’_0} = 0\\ 2 + 2\left( {{y’_0}} \right)^2 + 2\left( {{y_0} – b} \right){y^{\prime\prime}_0} = 0 \end{array} \right.} \]
From the last equation we find the value of \(b:\)
\[
{2 + 2{\left( {{y’_0}} \right)^2} + 2\left( {{y_0} – b} \right){y^{\prime\prime}_0} = 0,\;\;}\Rightarrow {\left( {{y_0} – b} \right){y^{\prime\prime}_0} = – 1 – {\left( {{y’_0}} \right)^2},\;\;}\Rightarrow {{y_0} – b = – \frac{{1 + {{\left( {{y’_0}} \right)}^2}}}{{{y^{\prime\prime}_0}}},\;\;}\Rightarrow {b = {y_0} + \frac{{1 + {{\left( {{y’_0}} \right)}^2}}}{{{y^{\prime\prime}_0}}}.} \]
Substituting \({{y_0} – b}\) into the second equation, we get the coordinate \(a\) of the center of circle:
\[
{2\left( {{x_0} – a} \right) + 2\left( {{y_0} – b} \right){y’_0} = 0,\;\;}\Rightarrow {{x_0} – a = – \left( {{y_0} – b} \right){y’_0},\;\;}\Rightarrow {{x_0} – a = \frac{{1 + {{\left( {{y’_0}} \right)}^2}}}{{{y^{\prime\prime}_0}}}{y’_0},\;\;}\Rightarrow {a = {x_0} – \frac{{1 + {{\left( {{y’_0}} \right)}^2}}}{{{y^{\prime\prime}_0}}}{y’_0}.} \]
The radius of the osculating circle is determined from the first equation:
\[
{{\left( {{x_0} – a} \right)^2} + {\left( {{y_0} – b} \right)^2} – {R^2} = 0,\;\;}\Rightarrow {{R^2} = {\left( {{x_0} – a} \right)^2} + {\left( {{y_0} – b} \right)^2},\;\;}\Rightarrow {{R^2} = {\left( {\frac{{1 + {{\left( {{y’_0}} \right)}^2}}}{{{y^{\prime\prime}_0}}}{y’_0}} \right)^2} }+{ {\left( {\frac{{1 + {{\left( {{y’_0}} \right)}^2}}}{{{y^{\prime\prime}_0}}}} \right)^2},\;\;}\Rightarrow {{R^2} = {\left( {\frac{{1 + {{\left( {{y’_0}} \right)}^2}}}{{{y^{\prime\prime}_0}}}} \right)^2}\cdot\kern0pt{\left[ {{{\left( {{y’_0}} \right)}^2} + 1} \right],}\;\;}\Rightarrow {{R^2} = \frac{{{{\left[ {1 + {{\left( {{y’_0}} \right)}^2}} \right]}^3}}}{{{{\left( {{y^{\prime\prime}_0}} \right)}^2}}},\;\;}\Rightarrow {R = \frac{{{{\left[ {1 + {{\left( {{y’_0}} \right)}^2}} \right]}^{\large\frac{3}{2}\normalsize}}}}{{\left| {{y^{\prime\prime}_0}} \right|}}.} \]
We see that the coordinates of the center of circle \(a, b\) are the coordinates of the center of curvature for the curve \(y = f\left( x \right)\) at \({x_0},\) and the radius of the osculating circle coincides with the radius of curvature of the curve at the point of contact.
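These closed-form expressions are easy to evaluate for a concrete curve. The sketch below (SymPy is an assumed tool choice, and the example curve \(y = x^2\) at \(x_0 = 1\) is mine, not from the text) computes \(a\), \(b\) and \(R\) directly from the formulas just derived.

import sympy as sp

x = sp.symbols('x')
f = x**2            # example curve; any twice-differentiable f(x) would do
x0 = 1

y0 = f.subs(x, x0)
y1 = sp.diff(f, x).subs(x, x0)        # y'(x0)
y2 = sp.diff(f, x, 2).subs(x, x0)     # y''(x0)

# Center (a, b) and radius R of the osculating circle, from the formulas above
b = y0 + (1 + y1**2) / y2
a = x0 - (1 + y1**2) * y1 / y2
R = (1 + y1**2)**sp.Rational(3, 2) / sp.Abs(y2)

print(a, b, R)     # -4, 7/2, 5*sqrt(5)/2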
Solved Problems
Example 1. Find the equation of the parabola osculating with the exponential function \(f\left( x \right) = {e^x}\) at the point \({x_0} = 0.\)
Example 2. Find the equation of the parabola osculating with the function \(f\left( x \right) = \cos x\) at \({x_0} = 0.\)
Example 3. Find the equation of the curve
Example 4. Write the equation of a cubic function
Example 5. Write the equation of a circle osculating with the curve \(f\left( x \right) = \arctan x\) at the point \({x_0} = 1.\)
Example 1. Find the equation of the parabola osculating with the exponential function \(f\left( x \right) = {e^x}\) at the point \({x_0} = 0.\)
Solution.
We assume that the parabola is defined by the equation \(y = g\left( x \right) = a{x^2} + bx + c.\) This function has \(3\) parameters. Therefore, we can suppose that the order of contact of the curves is equal to \(2.\) Then the coefficients \(a, b, c\) are found from the following conditions:
\[\left\{ \begin{array}{l} f\left( {{x_0}} \right) = g\left( {{x_0}} \right)\\ f’\left( {{x_0}} \right) = g’\left( {{x_0}} \right)\\ f^{\prime\prime}\left( {{x_0}} \right) = g^{\prime\prime}\left( {{x_0}} \right) \end{array} \right..\]
The derivatives of the functions \(f\left( x \right) = {e^x}\) and \(g\left( x \right) = a{x^2} + bx + c\) are given by the formulas
\[
{f’\left( x \right) = {\left( {{e^x}} \right)^\prime } = {e^x},}\;\;\;\kern-0.3pt {f^{\prime\prime}\left( x \right) = {\left( {{e^x}} \right)^\prime } = {e^x};} \]
\[
{g’\left( x \right) = {\left( {a{x^2} + bx + c} \right)^\prime } = 2ax + b,}\;\;\;\kern-0.3pt {g^{\prime\prime}\left( x \right) = {\left( {2ax + b} \right)^\prime } = 2a.} \]
Then the system of equations takes the following form:
\[\left\{ \begin{array}{l} {e^{{x_0}}} = ax_0^2 + b{x_0} + c\\ {e^{{x_0}}} = 2a{x_0} + b\\ {e^{{x_0}}} = 2a \end{array} \right..\]
Substituting \({x_0} = 0,\) we have
\[ \left\{ \begin{array}{l} c = 1\\ b = 1\\ 2a = 1 \end{array} \right. \;\;\kern-0.3pt{\text{or}\;\; \left\{ \begin{array}{l} a = \frac{1}{2}\\ b = 1\\ c = 1 \end{array} \right..} \]
So the parabola osculating with the exponential function at the point \({x_0} = 0\) has the second order of contact and is determined by the formula
\[y = \frac{{{x^2}}}{2} + x + 1.\]
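As a quick check, the same system can be solved symbolically; the sketch below (SymPy is an assumed tool choice) imposes the three osculation conditions stated above and recovers the same coefficients.

import sympy as sp

x, a, b, c = sp.symbols('x a b c')
f = sp.exp(x)
g = a*x**2 + b*x + c
x0 = 0

# Match the value, first and second derivative at x0 = 0
eqs = [sp.Eq(sp.diff(f, x, k).subs(x, x0), sp.diff(g, x, k).subs(x, x0)) for k in range(3)]
sol = sp.solve(eqs, [a, b, c])
print(sol)   # {a: 1/2, b: 1, c: 1}, i.e. y = x**2/2 + x + 1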
If we write its equation in the form
\[
{y = \frac{{{x^2}}}{2} + x + 1 } = {\frac{1}{2}\left( {{x^2} + 2x} \right) + 1 } = {\frac{1}{2}\left( {{x^2} + 2x + 1 – 1} \right) + 1 } = {\frac{1}{2}{\left( {x + 1} \right)^2} + \frac{1}{2},} \]
we see that the vertex of the parabola is at the point \(\left( { – 1,{\large\frac{1}{2}\normalsize}} \right).\) Schematically, both osculating curves are shown in Figure \(2.\) |
Note: While writing this answer, I discovered what seems to be a gap in the proof given in the cited lecture notes. I'll thus present a slightly modified version of the proof below, and discuss the discrepancy a bit at the end.
Let's start with a quick recap, since your quote and summary of the lecture notes leaves out some important bits.
The formal definition of a one-way function, slightly expanded from Definition 5 in your lecture notes, is:
A function (family) $f: \{0,1\}^n \to \{0,1\}^m$ is one-way if and only if it can be computed by a polynomial-time algorithm, and if there is no probabilistic polynomial-time algorithm capable of finding preimages for it with non-negligible probability.
In other words, for $f$ to be one-way, there can be no probabilistic algorithm $A$ that could, given the output $y = f(x)$ for some randomly chosen input $x \in \{0,1\}^n$ and a maximum run-time polynomial in $n+m$, find an input $x'$ (possibly, but not necessarily, equal to the original input $x$) such that $f(x') = y$ with a probability that is more than a negligible function of $n$.
An informal summary of this, dispensing with all the formalism of asymptotic complexity theory, would simply be that $f$ is one-way if there is no practical way, given a random output of $f$, to find an input that yields that output when given to $f$.
Based on this definition, we can show that:
Padding the output of $f$ with, say, a bunch of zeroes doesn't affect whether it is one-way. (By definition, the adversary will always receive a valid output, so they can just strip away the zeros and then proceed as if they were attacking the original, unpadded function.)
Also, adding a bunch of extra dummy bits to the inputs of $f$, which don't affect the output, doesn't change whether $f$ is one-way. (Since the dummy input bits don't affect the output, the adversary can choose those dummy bits any way it likes; but finding the correct values for the
other, non-dummy input bits is still exactly as hard as finding a preimage for the original, unmodified function.)
(These technically hold only if the amount of padding / ignored bits added is a polynomial function of the original input + output length, but that's plenty enough for our purposes: $n \mapsto 2n$ is certainly a polynomial function.)
So, given an (arbitrary) one-way function $f$ with $n$-bit inputs and outputs, we can construct another one-way function $h$ with twice the input and output length like this:
Let $x^*$ be the first $n$ bits of the input $x$ to $h$. Ignore the rest of the input.
Compute $y^* = f(x^*)$.
Prepend an arbitrary constant $n$-bit string $c$ (e.g. $c = 000...0$) to $y^*$, and output the resulting $2n$-bit string $y = c \,\|\, y^*$ as $h(x)$.
Now, by construction, this function $h$ is one-way, since finding preimages for it is at least as hard as finding preimages for $f$. (Of course, the security parameters for $f$ and $h$ differ by a factor of 2, but that makes no difference asymptotically; a polynomial function of $2n$ is a polynomial function of $n$.)
But also by construction, the first $n$ bits of the output of $h$ are always constant, while the remaining output bits depend
only on the first $n$ bits of the input. Thus, $h(h(x)) = c \,\|\, f(c)$ for all $x$, and so finding preimages for $h(h(x))$ is trivial (since literally any input will do).
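To make the construction concrete, here is a toy Python sketch (my own illustration, not from the lecture notes): SHA-256 truncated to $n$ bytes merely stands in for the candidate one-way function $f$ (nothing here proves one-wayness), and $h$ is built exactly as described above, ignoring the second half of its input and prepending an all-zero block $c$, so that $h(h(x))$ is the constant $c \,\|\, f(c)$.

import hashlib

n = 16  # toy block size in bytes

def f(x: bytes) -> bytes:
    """Stand-in for a candidate one-way function with n-byte inputs and outputs."""
    assert len(x) == n
    return hashlib.sha256(x).digest()[:n]

c = bytes(n)  # the arbitrary constant block, here all zeroes

def h(x: bytes) -> bytes:
    """2n-byte to 2n-byte function: ignore the last n input bytes, prepend c to f's output."""
    assert len(x) == 2 * n
    return c + f(x[:n])

x = bytes(range(2 * n))                  # some arbitrary 2n-byte input
print(h(h(x)) == c + f(c))               # True: h(h(x)) is the same constant for every x
print(h(h(bytes(2 * n))) == c + f(c))    # True again, for a different input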
Now, the construction given in the lecture notes you cite goes a little bit further, explicitly defining $h$ to yield an all-zero output whenever the first $n$ input bits are zero (and always setting the first $n$ bits of the output to zero otherwise).
While not strictly necessary (we'll get constant output from $h(h(x))$ anyway), this doesn't actually harm the one-wayness of $h$ either. In fact, we can show that modifying $h$ so that it always outputs a
constant value for a negligibly small fraction of the total input space doesn't affect its one-wayness (and that $1/2^n$ is, indeed, a negligibly small fraction as $n$ tends to infinity).
However, where the lecture notes go wrong is when they try to justify this by claiming that:
"A generalization of the previous theorem (fixing values in a one-way function) shows that $h$ is also a one-way function. (In short, we are only fixing the values of $\frac{2^n}{2^{2n}} = \frac1{2^n}$ of all of the possible values of $x$. Since we are only fixing a negligible fraction of the possible values of $x$, the same proof with slight modifications still applies.)"
In fact, this claim is false. As a simple counterexample, consider the modified function $h'$ defined as:
Split the $2n$-bit input $x$ into two $n$-bit strings $x_1$ and $x_2$.
If $x_1 = c$, return $h'(x) = c \,\|\, x_2 = x$.
Otherwise, return $h'(x) = c \,\|\, f(x_1)$.
Clearly, $h'(x) = h(x)$ for all but a negligibly small fraction of the inputs (namely, those that begin with the $n$-bit constant string $c$). Yet $h'$ is obviously
not a one-way function, since any valid output $y = h'(x)$ always begins with $c$, and so is its own preimage!
Of course, this doesn't invalidate the actual claim, since the function $h$ actually constructed in the notes
is in fact one-way (provided that $f$ is one-way). Still, if these notes are from a course you're studying in, you might want to mention this gap in the proof to your instructor. |
Ice, ice, ice! There is sea ice everywhere! Large floes, small floes and the odd iceberg. It is all white, and all beautiful, and it puts a definitive end to waves and seasickness!
When heat is removed from water, the water will cool, and it will continue to do so until it reaches its freezing point. If we continue to remove heat, the water will freeze. The heat that we remove is then the latent heat. Normally it is not “we” who remove heat, but the atmosphere. When the air above is colder than the ocean below, heat will move from the water up into the air: the colder it is, the faster the heat moves. That is, the colder it is, the faster the ice grows. But the ice is a good insulator; it will insulate the water from the cold atmosphere above. Just like your jacket insulates you when it is cold outside and keeps the warmth inside, the ice causes the water to lose heat more slowly, since the heat must be conducted through the ice in order to be lost to the atmosphere. The thicker the ice is, the more slowly the heat is conducted through the ice, since the heat flux (\(F_{ice}\), i.e. the amount of heat that is conducted through the ice per unit of time) is proportional to the temperature gradient:
\(F_{ice}=-k_{ice}\frac{dT}{dz}=-k_{ice}\frac{T_{atm}-T_{f}}{H}\)
\(k_{ice}=2\,W\,m^{-1}\,^{\circ}C^{-1}\) is the heat conductivity of ice and \(H\) is the thickness of the ice.
\(T_{atm}\) is the temperature at the top of the ice (which we assume to be equal to the temperature of the air) and \(T_f=-1.9^\circ C\) is the temperature at the bottom of the ice, that is, the freezing point of sea water.
The latent heat that is released (per square meter) when the ice grows by a small amount \(dH\) is \(\rho_{ice}LdH\). If that happens during a short time \(dt\), then the latent heat flux is:
\(F_{latent}=\rho_{ice}L\frac{dH}{dt}\)
\(\rho_{ice}=900\,kg\,m^{-3}\) is the density of ice and
\(L=3.3*10^5J\,kg^{-1}\) is the latent heat of fusion.
The ice will grow exactly as fast as the latent heat can be conducted up through the ice, i.e. so that \(F_{latent}=F_{ice}\).
When combining the two equations we get a differential equation that we can solve to get an expression for how the ice thickness increases in time, \(H(t)\).
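As a quick numerical illustration before the exercises (a Python sketch; the \(-20^\circ C\) air temperature, the one-minute time step and the tiny nonzero starting thickness are my own assumptions, while the physical constants are the ones given above), one can integrate the resulting equation \(dH/dt = k_{ice}(T_f-T_{atm})/(\rho_{ice} L H)\) directly:

import numpy as np

k_ice = 2.0        # W m^-1 degC^-1, heat conductivity of ice (from the text)
rho_ice = 900.0    # kg m^-3, density of ice
L = 3.3e5          # J kg^-1, latent heat of fusion
T_f = -1.9         # degC, freezing point of sea water
T_atm = -20.0      # degC, assumed air temperature for this example

dt = 60.0                           # time step: one minute
t = np.arange(0, 10 * 3600, dt)     # ten hours
H = np.empty_like(t, dtype=float)
H[0] = 1e-3                         # assumed tiny initial thickness instead of exactly zero

for i in range(1, len(t)):
    dHdt = k_ice * (T_f - T_atm) / (rho_ice * L * H[i - 1])   # growth rate for the current thickness
    H[i] = H[i - 1] + dHdt * dt

print(f"Ice thickness after 10 h at {T_atm} degC: {H[-1] * 100:.1f} cm")

The exercise below asks you to derive the closed-form solution of the same equation, which you can compare against this numerical result.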
Exercise 1
a) Set up the differential equation and show that the solution \(H(t)\) is
\(H=\sqrt{ H_0^2+\frac{2k_{ice}(T_{f}-T_{atm})}{\rho_{ice}L}t}\).
when \(H(t=0)=H_0\)
Hint: Use the chain rule \(\frac{dH^2}{dt}=2H\frac{dH}{dt}\).
b) Let \(H_0=0\) and plot the function for different \(T_{atm}\)! When does the ice grow fastest? Why?
c) Use the equation from (a) to calculate the thickness of the ice ten hours after it started freezing if the temperature outside is (i) \(-20^\circ C\), (ii) \(-2^\circ C\).
d) When the ice is 1 m thick, how long does it take before it grows another 10 cm?
e) What do you think will happen if there is snow falling on the ice? \(\kappa_{\textit{snow}}\) is typically between 0.15 and 0.4 \(W\,m^{-1}\,^{\circ}C^{-1}\). Which is the better insulator? Snow or ice?
f) All heat that is conducted up through the ice has to be conducted through the snow as well. Where is the temperature gradient largest? In the snow or in the ice? Make a sketch!
Exercise 2
The temperature varies from day to day and from year to year. The file ENG_Temperatur gives temperature data from the Amundsen Sea from March 2014 to March 2015.
a) Find the mean temperature for each month and plot it. Find the standard deviation and add it to your graph. What month is coldest? Warmest? When is the temperature most variable?
b) When does the ice stop growing?
c) Find out how much the ice thickness increases every month. What value should you use for \(H_0\)?
d) Plot (i) the ice thickness and (ii) the ice growth as a function of time. When is the ice growing fastest? Is this when it is coldest? Why? Why not?
Exercise 3
If an ice floe is 30 cm thick, 2 m wide and 5 m long, how much of the ice floe is above water? \(\rho_{ice}=900\,kg\,m^{-3}\) How many scientists can stand on the ice floe (in the middle) without getting their feet wet? How much snow can fall on the ice before the ice floe is submerged? \(\rho_{snow}\approx 300\,kg\,m^{-3}\)
Exercise 4
The ice in Antarctica is relatively thin and it often snows so much that the ice is submerged. Then we’ll have a layer of slush (snow + seawater) on top of the ice. When the slush freezes, we get what is often called “snow ice”. Estimates suggest that as much as 40% of the ice in the Amundsen Sea is snow ice!
It is quicker to freeze snow ice than regular sea ice – can you explain why? How far does the heat have to be conducted when freezing snow ice? Does the snow have to freeze? |
Article ID 0016 September 2016 Research Article
Four color light curves of the EW type eclipsing binary V441 Lac were presented and analyzed with the W--D code. It is found that V441 Lac is an extremely low mass ratio $(q = 0.093 \pm 0.001)$ semi-detached binary with the less massive secondary component filling the inner Roche lobe. Two dark spots on the primary component were introduced to explain the asymmetric light curves. By analyzing all times of light minimum, we determined that the orbital period of V441 Lac is continuously increasing at a rate of ${\rm d}P/{\rm d}t = 5.874(\pm 0.007)\times 10^{-7}\,{\rm d\,yr}^{-1}$. The semidetached Algol-type configuration of V441 Lac was possibly formed from a shallow contact binary whose contact configuration was destroyed by mass transfer from the less massive component to the more massive one, as predicted by the thermal relaxation oscillation theory.
Article ID 0017 September 2016 Research Article
The theory of velocity dependent inertial induction, based upon the extended Mach's principle, has been able to generate many interesting results related to celestial mechanics and cosmological problems. Because of the extremely minute magnitude of the effect, its presence can be detected through the motion of accurately observed bodies like Earth satellites. LAGEOS I and II are medium altitude satellites with nearly circular orbits. The motions of these satellites are accurately recorded, and the past data of a few decades help to test many theories, including the general theory of relativity. Therefore, it is hoped that the Earth's inertial induction may have a detectable effect on the motion of these satellites. It is established that the semi-major axis of LAGEOS I is decreasing at the rate of 1.3 mm/d. As the atmospheric drag is negligible at that altitude, a proper explanation of this secular change has been wanting, and, therefore, this paper examines the effect of the Earth's inertial induction on LAGEOS I. Past research has established that Yarkovsky thermal drag and charged and neutral particle drag might be possible mechanisms for this orbital decay. Inertial induction is found to generate a perturbing force that results in a 0.33 mm/d decay of the semi-major axis. Some other changes are also predicted, and the phenomenon also helps to explain the observed changes in the orbits of a few other satellites. The results indicate the feasibility of the theory of inertial induction, i.e. the dynamic gravitational action of the Earth on its satellites, as a possible partial cause of the orbital decay.
Article ID 0018 September 2016 Research Article
This paper reports the first spectroscopic observations in the $\rm{H}\alpha$ region at different orbital phases and the revised photometric solutions, for the contact binary KP101231 (V1) in the direction of the open cluster Praesepe. The photometric solutions obtained for the data in V and R passbands using the Wilson--Devinney (WD) method suggest that both components were in good thermal contact. The equivalent widths (EW) of ${\rm H}\alpha$ and Na lines were studied at various phases and a filled-in absorption profile around phase 0.58--0.68 was observed and compared with other phases. A correlation was observed between the profiles of ${\rm H}\alpha$ and Na lines at various phases.
Article ID 0019 September 2016 Research Article
In this study, a 110-m fully steerable radio telescope was used as an analysis platform and the integral parametric finite element model of the antenna structure was built in the ANSYS thermal analysis module. The boundary conditions of periodic air temperature, solar radiation, long-wave radiation shadows of the surrounding environment, etc. were computed at 30 min intervals under a cloudless sky on a summer day, i.e., worst case climate conditions. The transient structural temperatures were then analyzed under a period of several days of sunshine with a rational initial structural temperature distribution until the whole set of structural temperatures converged to the results obtained the day before. The nonuniform temperature field distribution of the entire structure and the main reflector surface RMS were acquired according to changes in pitch and azimuth angle over the observation period. Variations in the solar cooker effect over time and spatial distributions in the secondary reflector were observed to elucidate the mechanism of the effect. The results presented here not only provide valuable real time data for the design, construction, sensor arrangement and thermal deformation control of actuators but also provide a troubleshooting reference for existing actuators.
Article ID 0020 September 2016 Research Article
Dwarf spheroidal (dSph) galaxies are thought to be good candidates for dark matter searches due to their high mass-to-light (M/L) ratio. One of the most favored dark matter candidates is the lightest neutralino (neutral $\chi$ particle) as predicted in the Minimal Supersymmetric Standard Model (MSSM). In this study, we model the gamma ray emission from dark matter annihilation coming from the nearby dSph galaxies Draco, Segue 1, Ursa Minor and Willman 1, taking into account the contribution from prompt photons and photons produced from inverse Compton scattering off starlight and Cosmic Microwave Background (CMB) photons by the energetic electrons and positrons from dark matter annihilation. We also compute the energy spectra of electrons and positrons from the decay of dark matter annihilation products. Gamma ray spectra and fluxes for both prompt and inverse Compton emission have been calculated for neutralino annihilation over a range of masses and found to be in agreement with the observed data. It has been found that the ultra faint dSph galaxy Segue 1 gives the largest gamma ray flux limits, while the lowest gamma ray flux limits have been obtained from Ursa Minor. It is seen that, for a larger M/L ratio of dwarf galaxies, the intensity pattern originating from $e^+e^-$ pairs scattering off CMB photons is separated by a larger amount from that off the starlight photons for the same neutralino mass. As the $e^+e^-$ energy spectra have an exponential cut off at high energies, this may allow one to discriminate some dark matter scenarios from other astrophysical sources. Finally, a more detailed study of the effect of inverse Compton scattering may help constrain the dark matter signature in the dSph galaxies.
Article ID 0021 September 2016 Research Article
Hawking radiation is considered as a quantum tunneling process, which can be studied in the framework of the Hamilton--Jacobi method. In this study, we present the wave equation for a mass-generating, massive and charged scalar particle (boson). In the sequel, we analyse the quantum tunneling of these bosons from a generic 4-dimensional spherically symmetric black hole. We apply the Hamilton--Jacobi formalism to derive the radial integral solution for the classically forbidden action, which leads to the tunneling probability. To support our arguments, we take the dyonic Reissner--Nordström black hole as a test background. Comparing the tunneling probability obtained with the Boltzmann formula, we succeed in reading off the standard Hawking temperature of the dyonic Reissner--Nordström black hole.
Article ID 0022 September 2016 Research Article
Using a method of population synthesis, we investigate the runaway stars produced by disrupted binaries via asymmetric core collapse supernova explosions (CC-RASs) and thermonuclear supernova explosions (TN-RASs). We find the velocities of CC-RASs in the range of about 30--100 km s$^{−1}$. The runaway stars observed in the galaxy are possibly CC-RASs. Due to differences in stellar chemical components and structures, TN-RASs are divided into hydrogen-rich TN-RASs and helium-rich TN-RASs. The velocities of the former are about 100–500 km s$^{−1}$, while the velocities of the latter are mainly between 600 and 1100 km s$^{−1}$. The hypervelocity stars observed in the galaxy may originate from thermonuclear supernova explosions. Our results possibly cover the US 708 which is a compact helium star and travels with a velocity of 1157$\pm$53 km s$^{−1}$ in our galaxy.
Article ID 0023 September 2016 Research Article
The effects of finite ion Larmor radius (FLR) corrections, Hall current and radiative heat--loss function on the thermal instability of an infinite homogeneous, viscous plasma incorporating the effects of finite electrical resistivity, thermal conductivity and permeability, relevant for star formation in the interstellar medium, have been investigated. A general dispersion relation is derived using the normal mode analysis method with the help of the relevant linearized perturbation equations of the problem. The wave propagation is discussed for the longitudinal and transverse directions to the external magnetic field, and the conditions for the modified thermal instabilities and stabilities are discussed in the different cases. We find that the thermal instability criterion gets modified into a radiative instability criterion. The finite electrical resistivity removes the effect of the magnetic field, and the viscosity of the medium removes the effect of FLR from the condition of radiative instability. The Hall parameter affects only the longitudinal mode of propagation and has no effect on the transverse mode of propagation. Numerical calculation shows a stabilizing effect of viscosity, heat--loss function and FLR corrections, and a destabilizing effect of finite resistivity and permeability on the thermal instability. The outcome of the problem is discussed in the context of star formation in the interstellar medium.
Article ID 0024 September 2016 Research Article
The near-infrared instruments in the upcoming Thirty Meter Telescope (TMT) will be assisted by a multi conjugate Adaptive Optics (AO) system. For the efficient operation of the AO system, during observations, a near-infrared guide star catalog which goes as faint as 22 mag in ${\rm J}_{{\rm Vega}}$ band is essential and such a catalog does not exist. A methodology, based on stellar atmospheric models, to compute the expected near-infrared magnitudes of stellar sources from their optical magnitudes is developed. The method is applied and validated in JHKs bands for a magnitude range of ${\rm J}_{\rm{Vega}}$ 16--22 mag. The methodology is also applied and validated using the reference catalog of PAN STARRS. We verified that the properties of the final PAN STARRS optical catalog will satisfy the requirements of TMT IRGSC and will be one of the potential sources for the generation of the final catalog. In a broader context, this methodology is applicable for the generation of a guide star catalog for any existing/upcoming near-infrared telescopes.
|
Sharedace
Example Calculations
*.shared.ace
Example calculations below will be performed using data from the Eckburg 70.stool_compare files with an OTU definition of 0.03.
Estimating the richness of shared OTUs between two communities. A non-parametric richness estimator of the number of shared OTUs between two communities has been developed that is analogous to the ACE (3) single community richness estimator. The <math>S_{A,B ACE}</math> (9) estimator is calculated as:
<math>S_{A,B ACE} = S_{12 \left ( abund \right )} + \frac {S_{12 \left ( rare \right )}}{C_{12}} + \frac {1}{C_{12}} \left [ f_{\left ( rare \right )1+} {\Gamma}_1 + f_{\left ( rare \right )+1} {\Gamma}_2 + f_{11}{\Gamma}_3 \right ]</math>
where,
<math>C_{12} = 1 - \frac {\sum_{i=1}^{S_{12\left ( rare \right )}} {\left \{Y_i I \left ( X_i = 1 \right ) + X_iI \left ( Y_i = 1 \right ) - I \left ( X_i = Y_i = 1 \right ) \right \}}} {T_{11}}</math>
<math>{\Gamma}_1 = \frac{S_{12 \left (rare \right )} n_{rare} T_{21}}{C_{12}\left( n_{rare} - 1\right)T_{10}T_{11}} - 1</math>, <math>{\Gamma}_2 = \frac{S_{12 \left (rare \right )} m_{rare} T_{12}}{C_{12}\left( m_{rare} - 1\right)T_{01}T_{11}} - 1</math>
<math>{\Gamma}_3 = \left[ \frac{S_{12\left( rare \right)}}{C_{12}}\right ]^2 \frac{n_{rare}m_{rare}T_{22}}{\left(n_{rare}-1\right)\left(m_{rare}-1\right)T_{10}T_{01}T_{11}} - \frac{S_{12 \left( rare \right)}T_{11}}{C_{12}T_{01}T_{10}}-{\Gamma}_1-{\Gamma}2</math>
<math>T_{10} = \sum_{i=1}^{S_{12\left( rare \right)}} X_i </math>, <math>T_{01} = \sum_{i=1}^{S_{12\left( rare \right)}} Y_i </math>, <math>T_{11} = \sum_{i=1}^{S_{12\left( rare \right)}} X_i Y_i </math>, <math>T_{21} = \sum_{i=1}^{S_{12\left( rare \right)}} X_i \left( X_i - 1 \right) Y_i </math>
<math>T_{12} = \sum_{i=1}^{S_{12\left( rare \right)}} X_i \left( Y_i - 1 \right) Y_i </math>, <math>T_{22} = \sum_{i=1}^{S_{12\left( rare \right)}} {X_i \left( X_i - 1 \right) Y_i \left( Y_i - 1 \right)} </math>
where,
<math>f_{11}</math> = number of shared OTUs with one observed individual in A and B
<math>f_{1+}, f_{2+}</math> = number of shared OTUs with one or two individuals observed in A
<math>f_{+1}, f_{+2}</math> = number of shared OTUs with one or two individuals observed in B
<math>f_{\left(rare \right)1+}</math> = number of OTUs with one individual found in A and less than or equal to 10 in B.
<math>f_{\left(rare \right)+1}</math> = number of OTUs with one individual found in B and less than or equal to 10 in A.
<math>n_{rare}</math> = number of sequences from A that are in shared OTUs containing 10 or fewer sequences.
<math>m_{rare}</math> = number of sequences from B that are in shared OTUs containing 10 or fewer sequences.
<math>S_{12\left(rare\right)}</math> = number of shared OTUs where both of the communities are represented by less than or equal to 10 sequences.
<math>S_{12\left(abund\right)}</math> = number of shared OTUs where at least one of the communities is represented by more than 10 sequences.
<math>S_{12\left(obs\right)}</math> = number of shared OTUs in A and B.
The calculation of <math>S_{A,B ACE}</math> is considerably more involved. First, we determine that there are 23 rare shared OTUs and 37 abundant shared OTUs. Next, considering only the rare OTUs, we calculate <math>C_{12}</math> as 0.845878. We obtained the following T-values:
<math>T_{10} = 93</math>
<math>T_{01} = 64</math>
<math>T_{11} = 279</math>
<math>T_{21} = 1444</math>
<math>{T_{12}} = 988</math>
<math>T_{22} = 5440</math>
Next, calculation of the Γ-values requires knowing <math>f_{\left(rare \right)1+}, f_{\left(rare \right)+1} \mbox{ and } f_{11}</math>, which were 5, 8, and 2, respectively. Also, <math> n_{rare} \mbox{ and } m_{rare}</math> were 185 and 167, respectively. Finally, calculation of the Γ-values gives <math>{\Gamma}_1=0.530409, {\Gamma}_2 = 0.523308 \mbox{ and } {\Gamma}_3 = 0.151840</math>. This gives a <math>S_{A,B ACE}</math> value of 72.3024, as seen below.
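For readers who want to see how the pieces combine, here is a small arithmetic sketch (Python, my own addition) that plugs the quoted intermediate values back into the formulas displayed above. Small discrepancies from the quoted Γ-values and from 72.3024 can occur, since the intermediate quantities reported in the text are rounded.

# Quoted intermediate values from the 70.stool_compare example above
S12_rare, S12_abund = 23, 37
C12 = 0.845878
T10, T01, T11, T21, T12, T22 = 93, 64, 279, 1444, 988, 5440
f_rare_1p, f_rare_p1, f_11 = 5, 8, 2
n_rare, m_rare = 185, 167

# Gamma terms, exactly as written in the formulas above
g1 = S12_rare * n_rare * T21 / (C12 * (n_rare - 1) * T10 * T11) - 1
g2 = S12_rare * m_rare * T12 / (C12 * (m_rare - 1) * T01 * T11) - 1
g3 = ((S12_rare / C12) ** 2 * n_rare * m_rare * T22
      / ((n_rare - 1) * (m_rare - 1) * T10 * T01 * T11)
      - S12_rare * T11 / (C12 * T01 * T10) - g1 - g2)

# Shared ACE estimate
S_AB_ace = S12_abund + S12_rare / C12 + (f_rare_1p * g1 + f_rare_p1 * g2 + f_11 * g3) / C12
print(g1, g2, g3, S_AB_ace)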
File Samples on the Eckburg 70.stool_compare Dataset
.shared
This file contains the frequency of sequences from each group found in each OTU. Each row consists of the distance being considered, the group name, the number of OTUs, and the abundance information, separated by tabs. The abundance information is as follows: each subsequent number represents a different OTU, so that the number indicates the number of sequences in that group that clustered within that OTU. Note that OTU frequencies can only be compared within a distance definition. Below is a link to the files used in the calculations.
[[Media:70.stool_compare.zip|70.stool_compare.zip]]
.shared.ace
The first line contains the labels of all the columns. The first is “sampled”, which shows the frequency of the <math>S_{A,B ACE}</math> calculations. The frequency was set to 500, so after each 500 sequences are selected the <math>S_{A,B ACE}</math> is calculated at each of the distances, with a final calculation done after all are sampled. The following labels in the first line are the distances at which the calculations were made and the names of the groups compared. Each additional line starts with the number of sequences sampled, followed by the <math>S_{A,B ACE}</math> calculation at the column's distance. For instance, at distance 0.01, after 4392 samples <math>S_{A,B ACE}</math> was 136.599.
sampled  0.01tissuestool  0.02tissuestool  0.03tissuestool  0.04tissuestool
1        0                0                0                0
500      44.2676          52.4249          43.9391          26.2499
1000     86.2691          53.7864          55.2556          60.1921
1500     114.238          106.452          45.6638          50.0418
2000     180.391          99.0382          57.2304          47.1769
2500     124.966          92.2403          48.1031          48.5068
3000     114.838          94.2194          56.2644          59.6396
3500     126.609          102.88           59.8571          71.1169
4000     134.213          98.837           56.6823          68.317
4392     136.599          86.5079          72.3024          62.117 |
Abelian Group Problems and Solutions.
Problem 616
Suppose that $p$ is a prime number greater than $3$.
Consider the multiplicative group $G=(\Zmod{p})^*$ of order $p-1$.
(a) Prove that the set of squares $S=\{x^2\mid x\in G\}$ is a subgroup of the multiplicative group $G$.
(b) Determine the index $[G : S]$.
(c) Assume that $-1\notin S$. Then prove that for each $a\in G$ we have either $a\in S$ or $-a\in S$.

If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order
Problem 575
Let $G$ be a finite group of order $2n$.
Suppose that exactly half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Problem 497
Let $G$ be an abelian group.
Let $a$ and $b$ be elements in $G$ of order $m$ and $n$, respectively. Prove that there exists an element $c$ in $G$ such that the order of $c$ is the least common multiple of $m$ and $n$.
Also determine whether the statement is true if $G$ is a non-abelian group.
Problem 434
Let $R$ be a ring with $1$.
A nonzero $R$-module $M$ is called irreducible if $0$ and $M$ are the only submodules of $M$. (It is also called a simple module.)
(a) Prove that a nonzero $R$-module $M$ is irreducible if and only if $M$ is a cyclic module with any nonzero element as its generator.
(b) Determine all the irreducible $\Z$-modules.

Problem 420
In this post, we study the Fundamental Theorem of Finitely Generated Abelian Groups, and as an application we solve the following problem.
Problem. Let $G$ be a finite abelian group of order $n$. If $n$ is the product of distinct prime numbers, then prove that $G$ is isomorphic to the cyclic group $Z_n=\Zmod{n}$ of order $n$.
Let $G$ be a finite group and let $N$ be a normal abelian subgroup of $G$.
Let $\Aut(N)$ be the group of automorphisms of $N$.
Suppose that the orders of groups $G/N$ and $\Aut(N)$ are relatively prime.
Then prove that $N$ is contained in the center of $G$. |
Does it make sense to say that the quantum field of a photon is exactly proportional to the photon's electromagnetic field?
\begin{align} \bar{\Psi} = \dfrac{\bar{E}+i\bar{B}}{\sqrt{\int (E^2+B^2)dV}} \end{align}
The quantized electromagnetic field gives rise to (all) photons. There is only one (quantum) field for all photons. So asking about the quantum field of "a photon" or a "photon's electromagnetic field" doesn't seem to make sense: The photon is a quanta of the field.
Quote from Willis Lamb, Nobel Prize in Physics, 1955.
there is no such thing as a photon.
I guess what you meant was a Fock state: an eigenstate of the free electromagnetic field, here the first excited one, associated with a well-defined energy (a state which then does not evolve in time), and which amounts to a light quantum, as said above by DrEntropy.
As an experimentalist I trust analysis by theoretical physicists and this is what I will use and quote selectively in my answer here:
When talking about photons, we are talking about quantum mechanics and elementary particles with their dual nature, sometimes manifesting as classical particles, sometimes as probability waves, as displayed by the two-slit experiment in this answer. The dots are the particle identification of the photon; the built-up interference pattern is the probability wave in space for the manifestation of a photon.
the probabilities predicted from quantum mechanics should be interpreted exactly in the same way as probabilities predicted from classical statistical physics. The only difference is that quantum mechanics implies that the "exact truth" about the system doesn't exist even in principle. However, in practice, you don't care about it because you can't know the coordinate and positions of many gas molecules in a bottle, anyway.
.........
For a wave from a light bulb, where there are different frequencies, statistically:
Imagine that they do differ and you want to calculate the average value of the electric field E⃗ at some point in space, away from the light bulb. The electric field may be rewritten as some combination of creation and annihilation operators for photons in all conceivable states, with various coefficients. And by symmetry, or because the many photons contribute randomly, you get zero. So even though we are surely imagining – and we may measure – nonzero values of the electric field away from the light bulb at some moments and locations, the statistical expectation is zero and the fluctuations of the electric field are due to the randomness of the emission processes.
For a coherent source, like a laser, where a single frequency with a width is produced coherently then the classical potential and the wave function of the photons are directly related:
I don't want to scare you by the indices but the wave function of a single photon mathematically looks like the (complexified) classical electromagnetic potential A⃗ (x,y,z), with some extra subtleties.
I think this shows that the formula you propose is wrong. It is the electromagnetic potential that enters the wave function.
If you are really interested, you should read the blog entry carefully and then search for further reading.
U. Biccari, Noboru Sakamoto, Eneko Unamuno, Danel Madariaga, Enrique Zuazua, Jon Andoni Barrena Model reduction of converter-dominated power systems by Singular Perturbation Theory Abstract: The increasing integration of power electronic devices is driving the development of more advanced tools and methods for the modeling, analysis, and control of modern power systems to cope with the…
U. Biccari, V. Hernández-Santamaría Null controllability of a nonlocal heat equation with integral kernel, DOI: Abstract: We consider a linear nonlocal heat equation in a bounded domain $\Omega\subset\mathbb{R}^d$ with Dirichlet boundary conditions. The non-locality is given by the presence of an integral kernel. We analyze the problem of controllability when the control acts on a…
U. Biccari, M. Warna Null-controllability properties of a fractional wave equation with a memory term Abstract: We study the null-controllability properties of a one-dimensional wave equation with memory associated with the fractional Laplace operator. The goal is not only to drive the displacement and the velocity to rest at some time-instant but also to require…
U. Biccari, M. Warna, E. Zuazua Internal observability for coupled systems of linear partial differential equations Abstract: In this paper, we analyze the controllability properties under positivity constraints on the control or the state of a one-dimensional heat equation involving the fractional Laplacian $(-\Delta)^s$ ($0
U. Biccari, S. Micu Null-controllability properties of the wave equation with a second order memory term,doi.org/10.1016/j.jde.2019.02.009 Abstract: We study the internal controllability of a wave equation with memory in the principal part, defined on the one-dimensional torus $\mathbb{T}=\mathbb{R}/2\pi\mathbb{Z}$. We assume that the control is acting on an open subset $\omega(t)\subset\mathbb{T}$, which is moving with a…
U. Biccari, D. Ko, E. Zuazua Dynamics and control for multi-agent networked systems: a finite difference approach Abstract: We analyze the dynamics of multi-agent collective behavior models and their control theoretical properties. We first derive a large population limit to parabolic diffusive equations. We also show that the non-local transport equations commonly derived as the…
U. Biccari, A. Marica, E. Zuazua Propagation of one and two/dimensional discrete waves under finite difference approximation, DOI: Abstract: We analyze the propagation properties of the numerical versions of one and two-dimensional wave equations, semi-discretized in space by finite difference schemes. We focus on high-frequency solutions whose propagation can be described, both at the continuous…
U. Biccari Boundary controllability for a one-dimensional heat equation with a singular inverse-square potential, Mathematical Control and related fields, DOI: 10.3934/mcrf.2019011 Abstract: We analyse controllability properties for the one-dimensional heat equation with singular inverse-square potential $u_t-u_{xx}-\frac{\mu}{x^2}u=0\,\,\,(x,t)\in(0,1)\times(0,T)$. For any $\mu
U. Biccari, V. Hernández-Santamaría The Poisson equation from non-local to local, Electronic Journal of Differential Equations, Vol. 2018 (2018), No. 145, pp. 1-13. DOI: arXiv:1801.09470 Abstract: We analyze the limit behavior as $s\to 1^-$ of the solution to the fractional Poisson equation $\fl{s}{u_s}=f_s$, $x\in\Omega$ with homogeneous Dirichlet boundary conditions $u_s\equiv 0$, $x\in\Omega^c$. We show that…
|
I want to write continuous wavelet transform codes manually by matlab. And I want to use complex morlet function. Here are some background:
Continuous wavelet transform definition: $C(S,T;f(t),\psi (t))=\frac{1}{\sqrt{S}}\int_{-\infty}^{\infty}f(t)\,\psi^{\ast }\!\left(\frac{t-T}{S}\right)dt$, where $S$ is the scale (a vector, for example 1:60) and $T$ is the time shift. And $\psi(t)= \frac{1}{\sqrt{\pi f_{b}}}\,e^{2\pi i f_{c} t}\,e^{\frac{-t^{2}}{f_{b}}}$
psi = ((pi*fb)^(-0.5)).*exp(2*1i*pi*fc.*t).*exp(-t.^2/fb); % for example fb=15;fc=1;
My discrete signal has $N$ points. The discrete version of that integral, for the element $s(i)$ of the scale vector, is
$C_{T}(s)=\sum_{n=0}^{N-1}f(n)\psi^{*}(\frac{T-n}{S})$
This must be calculated for all scales, 1:60; at the end it must return a complex matrix of size $N\times S$. I am confused about writing this code. Can anyone help? P.S. I don't want to use the matlab function conv2 to calculate that convolution.
Thanks in advance. Here is my first attempt, but it's not working at all.
%% user CWT
clear all
N=300; %sample point numbers
t=linspace(0,30,N);
%% signal
x=5*sin(2*pi*0.5*t); % signal with freq of 0.5 HZ
%% cwt
fc=1;
fb=15;
% psi=((pi*fb)^(-0.5)).*exp(2*1i*pi*fc.*...
%     t).*exp(-t.^2/fb);
%% convolution Psi([N-n]/S)*x(n) so we calculate convolution(psi(n/s),x(n))
for s=1:60 %scale vector s=[1:1:60]
    for i = 1:N % number of discrete times
        for k = 1:i
            if ((i-k+1)<N+1) && (k <N+1)
                PSI = ((pi*fb)^(-0.5)).*exp(2*1i*pi*fc.*...
                    (t/s)).*exp(-(t/s).^2/fb);
                c(i,s) = c(i)+ x(k)*PSI(i-k+1);
            end
        end
    end
end
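This is not a Matlab answer, but for reference here is a direct NumPy transcription of the sum written above (a sketch under my own assumptions about the sampling step and the 1/sqrt(S) normalization; it deliberately avoids any built-in convolution routine, in line with the conv2 restriction):

import numpy as np

N = 300
t = np.linspace(0, 30, N)
dt = t[1] - t[0]
x = 5 * np.sin(2 * np.pi * 0.5 * t)          # test signal, 0.5 Hz

fb, fc = 15.0, 1.0
scales = np.arange(1, 61)                    # S = 1..60
C = np.zeros((N, len(scales)), dtype=complex)

def psi(u):
    """Complex Morlet wavelet evaluated elementwise."""
    return (np.pi * fb) ** -0.5 * np.exp(2j * np.pi * fc * u) * np.exp(-u ** 2 / fb)

for j, s in enumerate(scales):
    for k in range(N):                       # k indexes the translation T = t[k]
        u = (t - t[k]) / s                   # (t_n - T) / S for every sample n
        C[k, j] = np.sum(x * np.conj(psi(u))) * dt / np.sqrt(s)

print(C.shape)                               # (300, 60), complex-valued
|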
PREAMBLE
I am a graduate student researching evolutionary ecology. My supervisor and I have been trying to learn how to simulate colored noise, discretely, by the $\frac{1}{f^\alpha}$ power law for $\alpha \in \mathbb{R}$, specifically on the interval [0,2].
We found an article that seems to cover a method in great detail.
On page 806, we were able to prove to ourselves that, for a symmetric autocorrelation function, the formula for the sampled spectrum of an arbitrary segment of a process is indeed,
$$\hat{S}(\omega)\triangleq E\{\tilde{S}(\omega)\}=\int_{-T}^{0}\left(1+\dfrac{\tau}{T}\right) \Big[ \dfrac{1}{T} \int_{t_o+\tau/2}^{t_o+T+\tau/2} R(t, \tau)\Big] e^{-j\omega\tau}d\tau + \int_{0}^{T}\left(1-\dfrac{\tau}{T}\right) \Big[ \dfrac{1}{T} \int_{t_o+\tau/2}^{t_o+T+\tau/2}R(t, \tau) \Big] e^{-j\omega\tau}d\tau$$
$$=\int_{-T}^{T}\left( 1-\dfrac{|\tau|}{T}\right)R(t,\tau)e^{-j\omega\tau}d\tau$$
Where,
$$R(t,\tau)\triangleq E\left\{x\left(t+\frac{\tau}{2}\right)x\left(t-\frac{\tau}{2}\right)\right\}$$
Next, the paper computed the spectral estimate for Brownian motion. That estimate in the paper is given as,
$$\hat{S}_B(\omega)=2\left(1+\dfrac{t_o}{T}\right)\dfrac{1}{\omega^2} - 2\left(\dfrac{t_o}{T}\right)\dfrac{\cos\omega T}{\omega^2} - \dfrac{2}{T\omega^3}\sin\omega T$$
Where for $T$ much larger than $t_o$, this reduces to $\frac{2}{\omega^2}$. We were able to reproduce this result.
QUESTION AND PROBLEM
The paper now makes the claim that we have unknowingly been using a rectangular window. We are told we can improve the spectral estimate by using a Hanning window with normalization $8/3$ (After some reading, I've been able to grasp the concept of spectral leakage).
The new spectral estimate for Brownian Motion, with $t_o$ set to 0 is,
$$\hat{S}_B=\dfrac{1}{3}\left\{ \dfrac{4}{\omega^2} - \dfrac{4\sin \omega T}{T \omega^3} + \dfrac{80\pi^4 - 24\pi^2T^2\omega^2 + T^4\omega^4}{T^4(4\pi^2/T^2-\omega^2)^3} + \dfrac{48\pi^2T\omega \sin \omega T - 4T^3\omega^3\sin\omega T}{T^4(4\pi^2/T^2-\omega^2)^3} \right\}$$
Try as we might, we have not yet been able to reproduce this result. We tried the following,
$$\int_{-T}^{0}\left(1+\dfrac{\tau}{T}\right) \Big[ \dfrac{1}{T} \int_{t_o+\tau/2}^{t_o+T+\tau/2} R(t, \tau)\Big] e^{-j\omega\tau}d\tau \cdot HW(x=t)\cdot HW(x=(t+\tau)) + \int_{0}^{T}\left(1-\dfrac{\tau}{T}\right) \Big[ \dfrac{1}{T} \int_{t_o+\tau/2}^{t_o+T+\tau/2}R(t, \tau) \Big] e^{-j\omega\tau}d\tau \cdot HW(x=t)\cdot HW(x=(t+\tau))$$
Where we defined the Hanning Window (HW) as,
$$1-\cos^{2}\!\left(\frac{\pi x}{T}\right)$$
Done with the following code in Sage:
Q, T, t0, w, tau, t, x = var('Q T t0 w tau t x')
hanning_window = (sqrt(8/3))*(1 - cos((pi*x)/T)^2)
rectangular_window = 1
assume(T>0)
window_1 = hanning_window
window_2 = hanning_window
fneghann = Q*(t+tau/2)*exp(-I*w*tau)*window_1.substitute(x=(t))*window_2.substitute(x=(t+tau))
fposhann = Q*(t-tau/2)*exp(-I*w*tau)*window_1.substitute(x=(t))*window_2.substitute(x=(t+tau))
result = (1/T)*((fneghann.integrate(t,t0-tau/2,t0+T+tau/2)).integrate(tau,-T,0)
              + (fposhann.integrate(t,t0+tau/2,t0+T-tau/2)).integrate(tau,0,T))
f = (result).expand().full_simplify()
view(f)
Computing the limit as $T\rightarrow \infty$ and setting $t_o = 0$
DOES produce the spectral estimate of $\frac{1}{\omega^2}$ as indicated in the linked paper, but we got something dissimilar to the initial spectral estimate given in the article, leading us to believe the result of our limit was luck. I've chosen not to post the result, $f$, because of its size.
Where did we make a mistake, if any? Did we apply the Hanning Window incorrectly? Is there a mistake in our code? Your help is greatly appreciated. |
I have a $n$-sided regular pyramid based on a regular polygon, the length of the side of regular polygon, $s$. Also I know the dihedral angle between the face and the base, $\alpha$.
Question. How to calculate the height of the pyramid, $h$? My attempt is:
I have found the Thales' method. Thales measured the height of the pyramids by their shadows at the moment when his own shadow was equal to his height.
Let's say $n=4$, $s=1$ unit and $\alpha=60^\circ$; then I can find $R=\frac{s}{2 \sin \frac{180^\circ}{n}}$ and $r=R \cos\frac{180^\circ}{n}$.
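If it helps to finish that attempt (this is the standard relation, not taken from the original post): the inradius $r$ of the base, the height $h$, and the apothem of a lateral face form a right triangle whose angle at the base edge is the dihedral angle $\alpha$, so

$$\tan\alpha=\frac{h}{r}\quad\Longrightarrow\quad h=r\tan\alpha=\frac{s}{2}\cot\!\left(\frac{180^\circ}{n}\right)\tan\alpha.$$

For $n=4$, $s=1$, $\alpha=60^\circ$: $r=\tfrac12$ and $h=\tfrac12\tan 60^\circ=\tfrac{\sqrt3}{2}\approx 0.87$.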
I am stuck with a question,
Let $f: A\rightarrow B$ and $g:B\rightarrow C$. Show that if $g\circ f$ is one to one then $f$ is one to one. Can anyone please help me out? I have no idea where to start or how to finish.
Thanks
Suppose $x,y\in A$ such that $f(x)=f(y)$. Then $g(f(x)) = g(f(y))$. But this is the same as $(g\circ f)(x) = (g\circ f)(y)$ and $g\circ f$ being injective $\implies x=y$. This shows that $f$ is injective.
Suppose $f$ is not one to one, then there must be $x,y\in A,x \neq y$ such that $f(x)=f(y)$ and therefore $g(f(x))=g(f(y))$ but this means $g\circ f$ is not one to one and this is a contradiction. So $f$ must be one to one. |
Let $A$ be a set with $n$ elements. Call a subset $C$ of the power set of $A$ "good" if
Each element of $C$ has at least three elements.
If $P, Q\in C$ and $P\cap Q$ has more than one element, then $P=Q$.
I've been interested in finding good upper and lower bounds of the number of good collections, but I haven't made any headway. Does anyone know of any?
Edit: I've gotten some good answers here that tell me that these conditions give too many collections for what I'm trying to do. I'm trying to get an asymptotic formula for the logarithm of the number of isomorphism classes of intervals in weak order of length $n$ in a Coxeter group. The weak order intervals are almost distributive lattices, and the basic underlying structure is a partially ordered set (similar to the Birkhoff representation theorem). I think the extra structure characterizing the weak order interval doesn't add anything asymptotically to the logarithm, so I think it is asymptotically $\frac {n^2}4$. A collection of this sort in addition to the partially ordered set structure would characterize the isomorphism class, but the conditions I'm giving are very loose. It turns out that the logarithm of the number of these collections is asymptotically bigger than $n^2$, so that's no good to get the result I want.
The structure characterizing the weak order interval is the inversion set of the element in the root system. A reduced word for the element is obtained by certain orderings of the inversion set, and the partial ordering arises as the relation that $x\leq y$ if $x$ comes before $y$ in every ordering of the inversion set corresponding to a reduced word. The remaining antichains are then covered by dihedral subsystems corresponding to braid moves in the words, and the ways of arranging these are what I'm trying to count. By default here we assume they are of size $2$, hence the restriction to size at least $3$. They are intersections of two dimensional subspaces with the inversion set, so if their intersection contains two elements then they are equal. There are very heavy restrictions, for example, if the entire inversion set is an antichain; in that case the inversion set is a direct sum of irreducible finite root systems, which have a very small number of isomorphism classes. I have some ideas for stronger restrictions on the collections using the classification of finite root systems, but nothing that fully characterizes the inversion set so far.
If anyone has any ideas on the more detailed problem, I'd be happy to hear them. I'm not going to invalidate the existing answers though, so I'll leave this as is.
I've done some very small computations on the actual number of isomorphism classes, as I can't think of a good way to do it with a computer. https://oeis.org/A185349. Compare to https://oeis.org/A000112. |
Why is the following G.C.D equal to $1$: $$ \gcd(3^s, 2^n-3^{(j-i)}2^m),\quad s> j >i \geq 0, $$ and all variables are natural numbers.
The only prime factor of $3^s$ is 3 as $s\ge 1$
But $2^n-3^{(j-i)}2^m\equiv2^n\pmod 3$ as $3\mid 3^{j-i}$ as $j>i$
So, $2^n-3^{(j-i)}2^m\equiv2^n\equiv(-1)^n\not\equiv 0 \pmod 3$
So, $3^s,2^n-3^{(j-i)}2^m$ can not have any common prime factor, hence $(3^s,2^n-3^{(j-i)}2^m)=1$
Laws of GCD:
$$\gcd(x,y) = \gcd(x,x-y)$$ and, for $a$ coprime to $x$: $$\gcd(x,y) = \gcd(x,ay)$$
We can derive general formula using the laws of GCD:
$$\gcd(3^a,2^b) = 1$$
$$\gcd(3^a,2^b-3^a) = 1$$
$$\gcd(3^a,2^{b+c}-2^c 3^a) = 1$$
$$\gcd(3^{a+d},2^{b+c}-2^c 3^a) = 1$$
now put $a+d = s$, $b+c = n$, $a = j-i$, $c = m$ to get the special result. |
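As a quick illustrative sanity check of the general statement (the ranges below are arbitrary assumptions, not part of the proof):

from math import gcd
from itertools import product

# Spot-check gcd(3**s, 2**n - 3**(j-i) * 2**m) == 1 over a small range.
# The claim requires s > j > i >= 0 and n, m natural numbers.
for s, n, m, i in product(range(3, 7), range(1, 8), range(1, 8), range(0, 3)):
    for j in range(i + 1, s):
        assert gcd(3**s, 2**n - 3**(j - i) * 2**m) == 1
print("no counterexamples in the sampled range")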
If the integral kernel $k(x, y)$ of an operator $T : C^\infty_c(M) \to \mathcal{D}'(M)$ is symmetric ($M$ is a compact manifold), then the operator $T$ is symmetric. Is the converse true? That is, given a self-adjoint $T$ which has an integral kernel $k(x, y)$, is $k(x, y) = k(y, x)$? A reference would be really appreciated.
The following standard argument works if $k$ is assumed bounded. I don't think that assumption should be necessary, but maybe this is at least a helpful start.
If $T$ is symmetric then for every $f,g \in C^\infty(M)$ we have $$\int \int k(x,y) f(x) g(y)\,dx \,dy = \int \int k(x,y) g(x) f(y)\,dy\,dx$$ or changing variables and using Fubini's theorem (here we use the assumption that $k$ is bounded), $$\iint k(x,y) f(x) g(y) \,dx\,dy = \iint k(y,x) f(x) g(y)\,dx\,dy.$$ In other words, for every $F : M \times M \to \mathbb{R}$ which is of the form $F(x,y) = \sum_{i=1}^n f_i(x) g_i(y)$ where $f_i, g_i \in C^\infty(M)$, we have $$\iint (k(x,y) - k(y,x)) F(x,y) \,dx\,dy = 0.$$ Now using a monotone class argument, show that the same holds for all bounded measurable $F: M \times M \to \mathbb{R}$. Taking $F(x,y) = k(x,y) - k(y,x)$ you get $$\iint |k(x,y) - k(y,x)|^2 \,dx\,dy = 0.$$ (If $k$ is assumed continuous you can instead use Stone-Weierstrass in the last step.) |
Astrophysics > High Energy Astrophysical Phenomena
Title: Magnetic absorption of VHE photons in the magnetosphere of the Crab pulsar
(Submitted on 4 Feb 2019)
Abstract: The detection of the pulsed $\sim 1 $~TeV gamma-ray emission from the Crab pulsar reported by MAGIC and VERITAS collaborations demands a substantial revision of existing models of particle acceleration in the pulsar magnetosphere. In this regard model independent restrictions on the possible production site of the VHE photons become an important issue. In this paper, we consider limitations imposed by the process of conversion of VHE gamma rays into $e^{\pm}$ pairs in the magnetic field of the pulsar magnetosphere. Photons with energies exceeding 1~TeV are effectively absorbed even at large distances from the surface of the neutron star. Our calculations of magnetic absorption in the force-free magnetosphere show that the twisting of the magnetic field due to the pulsar rotation makes the magnetosphere more transparent compared to the dipole magnetosphere. The gamma-ray absorption appears stronger for photons emitted in the direction of rotation than in the opposite direction. There is a small angular cone inside which the magnetosphere is relatively transparent and photons with energy $1.5$~TeV can escape from distances beyond $0.1$~light cylinder radius ($R_{\rm{lc}}$). The emission surface from where photons can be emitted in the observer's direction further restricts the sites of VHE gamma-ray production. For the observation angle $57^{\circ}$ relative to the Crab pulsar axis of rotation and the orthogonal rotation, the emission surface in the open field line region is located as close as $0.4\,R_{\rm{lc}}$ from the stellar surface for a dipole magnetic field, and $0.1\,R_{\rm{lc}}$ for a force-free magnetic field. Submission historyFrom: Sergey Bogovalov Prof. [view email] [v1]Mon, 4 Feb 2019 14:08:42 GMT (1648kb,D) |
Relative standard deviation (RSD), also called percentage relative standard deviation, is a measure of dispersion that tells us how the numbers in a particular data set are scattered around the mean. This formula shows the spread of the data as a percentage.
If the result is a higher relative standard deviation, the numbers are very widely spread from the mean.
If the result is lower, the numbers are closer to the mean. It is also known as the coefficient of variation.
The formula for the same is given as:
\[\large RSD=\frac{s\times 100}{\overline{x}}\]
RSD = Relative standard deviation
s= Standard deviation
$\overline{x}$ = Mean of the data.
Solved Examples Question 1: Following are the marks obtained by 4 students in a mathematics examination: 60, 98, 65, 85. Calculate the relative standard deviation. Solution:
Formula of the mean is given by:
$\overline{x}$ = $\frac{\sum x}{n}$ $\overline{x}$ = $\frac{60+ 98+ 65+ 85}{4}=77$
Calculation of standard deviation:
$x$     $x-\overline{x}$     $\left(x-\overline{x}\right)^{2}$
60      -17                  289
98      21                   441
65      -12                  144
85      8                    64
                             $\sum \left(x-\overline{x}\right)^{2}=938$
Formula for standard deviation:
$s=\sqrt{\frac{\sum \left(x-\overline{x}\right)^{2}}{n-1}}$
$s = \sqrt{\frac{938}{3}} \approx 17.68$
Relative standard deviation = $\frac{s\times 100}{\overline{x}}$ = $\frac{17.68\times 100}{77} \approx 22.96\%$
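As a quick check of the arithmetic, a minimal Python sketch (using the sample standard deviation, i.e. the $n-1$ denominator):

import statistics

marks = [60, 98, 65, 85]
mean = statistics.mean(marks)          # 77
s = statistics.stdev(marks)            # sample standard deviation, sqrt(938/3) ≈ 17.68
rsd = s * 100 / mean                   # ≈ 22.96 %
print(f"mean = {mean}, s = {s:.2f}, RSD = {rsd:.2f} %")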
R - K-fold cross-validation (with Leave-one-out)
1 - About
Cross-validation in R.
3 - Leave-one-out
3.1 - cv.glm
Each time, leave-one-out cross-validation (LOOCV) leaves out one observation, produces a fit on all the other data, and then makes a prediction at the x value for the observation that you left out.
Leave-one-out cross-validation fits the model repeatedly, n times if there are n observations.
require(boot)
?cv.glm
glm.fit = glm(response ~ predictor, data = myData)
# LOOCV
cv.glm(myData, glm.fit)$delta
# [1] 24.23151 24.23114
delta is the cross-validated prediction error where:
The first number is the raw leave-one-out cross-validation result. The second one is a bias-corrected version of it.
The bias correction has to do with the fact that the dataset we train on is slightly smaller than the one we actually would like to get the error for, which is the full data set of size n. It turns out that this has more of an effect for k-fold cross-validation.
cv.glm does the computation by brute force, refitting the model all N times, and is therefore slow. It doesn't exploit the nice, simple LOOCV formula below.
3.2 - Shortcut Formula
The magic formula for least square regression:
$$CV_{(n)} = \frac{1}{n} \sum^n_{i=1} \left( \frac{y_i - \hat{y}_i}{1-h_i} \right)^2$$
So:
The error would be the ordinary residuals if we didn't leave the observations out. The numerator comes from the least squares fit, but we divide the squared residuals by $(1- h_i)^2$.
The $h_i$ is the diagonal element of the hat matrix. The hat matrix is the operator matrix that produces the least squares fit. This is also known as the self-influence: a measure of how much observation i contributes to its own fit.
The values of $h_i$ vary between 0 and 1. If $h_i$ is close to 1 (i.e. observation i contributes a lot to its own fit), $1- h_i$ is small, and that will inflate the residual.
Function that represents the above formula:
loocv = function(fit){
  h = lm.influence(fit)$h
  mean((residuals(fit)/(1-h))^2)
}
where:
the function lm.influence is a post-processor for an lm fit. It extracts the element h and gives you the diagonal elements $h_i$
Example of use: measure of errors for polynomials fit with different degrees
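The example itself is not reproduced on this page, so here is a hedged sketch of the same experiment written in Python/NumPy rather than R (the synthetic data and the names are assumptions): it fits polynomials of degree 1 to 5 by ordinary least squares and applies the same hat-matrix shortcut.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 3, 100)
y = np.sin(2 * x) + rng.normal(scale=0.3, size=x.size)   # synthetic data (assumption)

def loocv_poly(x, y, degree):
    """LOOCV error of a polynomial least-squares fit via the hat-matrix shortcut."""
    X = np.vander(x, degree + 1)                 # design matrix
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    H = X @ np.linalg.solve(X.T @ X, X.T)        # hat matrix
    h = np.diag(H)                               # leverages h_i
    return np.mean((resid / (1 - h)) ** 2)

for d in range(1, 6):
    print(d, loocv_poly(x, y, d))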
4 - 10-fold cross-validation
With 10-fold cross-validation, there is less work to perform: you divide the data up into 10 pieces, use 1/10 as a test set and 9/10 as a training set. So for 10-fold cross-validation, you have to fit the model 10 times, not N times as in LOOCV.
## 10-fold CV
# A vector for collecting the errors.
cv.error10 = rep(0,5)
# The polynomial degree
degree = 1:5
# A fit for each degree
for(d in degree){
  glm.fit = glm(response ~ poly(predictor, d), data = myDataFrame)
  cv.error10[d] = cv.glm(myDataFrame, glm.fit, K = 10)$delta[1]
}
lines(degree, cv.error10, type = "b", col = "red")
In general, 10-fold cross-validation is favoured for computing errors. It tends to be a more stable measure than leave-one-out cross-validation, and most of the time it's cheaper to compute.
Suppose there is a banked road on which a body is placed as shown in the figure.
Now to derive the relation between the velocity and the angle of inclination of the slope we do the following:-
Taking horizontal component of normal reaction and equating it to centripetal force.
$$N \sin(\theta) = f_c = {mv^2\over r}\qquad (1)$$
Equating the normal reaction to the component of the weight collinear with the normal reaction.
$$N = mg \cos(\theta)\qquad (2)$$
Substituting $(2)$ in $(1)$
$$mg\cos(\theta)\sin(\theta) = {mv^2\over r}$$ $$\sin(2\theta) = {2v^2\over gr}$$ $$\theta = \large{\arcsin\left({2v^2\over gr}\right)\over 2}$$
But in solution set, they took $N \cos(\theta) = mg \qquad (3)$
Dividing $(1)$ by $(3)$
$$\tan (\theta) = {v^2\over gr}$$ $$\theta = \arctan\left({v^2\over gr}\right)$$
From $(3)$, $N =\large{mg \over \cos(\theta)}$, whereas from $(2)$ , $mg\cos(\theta) = N$. Now both of these can't be true. So why is $(2)$ false and $(3)$ true ? I have found many other similar questions in this site but none of the answers were quite satisfying. Please don't flag the question as duplicate because I have been struggling a long to find the answer. Any help is highly appreciated. |
B
Two electrons in the same orbital is clearly an entangled quantum state since it is not a tensor product: $$|\psi\rangle=\frac{1}{\sqrt{2}}(|\uparrow\rangle \otimes|\downarrow\rangle-|\downarrow\rangle \otimes|\uparrow\rangle)$$
A
Two fermions in the same orbital can be described by fermionic creation operators a†↑ and a†↓, which increase the occupation numbers: $$|\psi\rangle= a_{\uparrow}^{\dagger} a_{\downarrow}^{\dagger}|0\rangle \otimes|0\rangle=\left|1_{\uparrow}\right\rangle \otimes\left|1_{\downarrow}\right\rangle$$
The resulting singlet state is clearly a tensor product and is thus not entangled according to A
I already have reviewed the entangled states and separable states
But I just wonder: what is the basic origin of their confusion? Are these two states the same state, just in two different bases? Where is B's entanglement in A's picture? Why does B look like an entangled state while A does not?
Description
desolve will compute the “general solution” to a 1st or 2nd order ordinary differential equation using Maxima. To solve the equation $x′+x−1=0$.
Sage Cell Code
t = var('t')             # define a variable t
x = function('x')(t)     # define x to be a function of that variable
DE = diff(x, t) + x - 1
desolve(DE, [x,t])
Options Option
You can use
ics to specify an initial condition. For example, we can solve the initial value problem $x′+x−1=0$ with $x(0) = 2$.
Code
t = var('t')
x = function('x')(t)
DE = diff(x, t) + x - 1
desolve(DE, [x,t], ics=[0,2])
Option
Higher order equations such as second-order linear equations can be solved. The following commands solve $x'' + 2x' + x =\sin t$.
Code
t = var('t')
x = function('x')(t)
DE = diff(x,t,2) + 2*diff(x,t) + x == sin(t)
desolve(DE, [x,t])
Option
We can specify initial conditions for second-order linear equations. The following commands solve $x'' + 2x' + x =\sin t$, $x(0) = 1$, $x'(0) = 0$.
Code
t = var('t')
x = function('x')(t)
DE = diff(x,t,2) + 2*diff(x,t) + x == sin(t)
desolve(DE, [x,t], ics=[0, 1, 0])
Option
Implicit solutions are returned for separable differential equations. Consider the solution to $\cos y \,\dfrac{dy}{dx} = \tan x$.
Code
x = var('x')
y = function('y')(x)
DE = diff(y,x)*cos(y) == tan(x)
desolve(DE, [y,x])
Tags
Primary Tags: Differential Equations
Secondary Tags:
Related Cells desolve_odeint. Solving ordinary differential equations numerically with
desolve_odeint.
Euler's Method.
eulers_method implements Euler's method for finding a numerical solution of the first-order ODE $y'=f(x,y)$.
Euler's Method for Systems.
eulers_method_2x2 implements Euler's method for finding a numerical solution of a $2 \times 2$ system of first-order ODEs.
desolve_laplace. Solving ordinary differential equations using Laplace transforms. Interact to plot direction fields and solutions for first order differential equations. A Sage interact for plotting direction fields for differential equations. Attribute
Permalink:
Date: 08 Jul 2017 14:10
Submitted by: Tom Judson |
Equilibrium: Acids and Bases and their Ionisations
Strong acid: 100% dissociation, e.g. $HClO_4$, $H_2SO_4$, $HNO_3$, HI, HBr; $HCl + H_{2}O \rightarrow H^{+} + Cl^{-}$
Strong base: NaOH, KOH, RbOH, CsOH, $Ba(OH)_2$
Weak acid: one which undergoes partial dissociation
Hydrogen ion concentration :
$$K_{a} = \frac{C\alpha \cdot C\alpha}{C - C\alpha} = \frac{C\alpha^{2}}{1 - \alpha}$$
When $\alpha \ll 1$, $1 - \alpha \approx 1$, so $K_a = C\alpha^{2}$, giving $\alpha = \sqrt{\frac{K_{a}}{C}}$ and $[H^{+}] = C\alpha = C\sqrt{\frac{K_{a}}{C}} = \sqrt{K_{a}C}$.
Arrhenius acid-base theory: Strong acid: produces more H$^{+}$ ($HClO_4$, $H_2SO_4$, HCl). Weak acid: produces less H$^{+}$ ($CH_3COOH$, HCN, $H_2S$). Strong base: produces more OH$^{-}$ (NaOH, KOH).
Brønsted-Lowry acid-base theory: Acid: proton donor (HCl, $H_2SO_4$, $CH_3COOH$, ...). Base: proton acceptor (NaOH, KOH, $NH_3$, ...). Salt: neither proton acceptor nor proton donor ($C_6H_6$, $CCl_4$, ... aprotic).
Conjugate acid-base pair: a Brønsted-Lowry acid-base pair which differs by only one proton is called a conjugate acid-base pair. Acid $-$ H$^{+}$ $\rightarrow$ conjugate base; Base $+$ H$^{+}$ $\rightarrow$ conjugate acid.
Ionic product of water ($K_W$):
$$H_{2}O + H_{2}O \rightleftharpoons H_{3}O^{+}+ OH^{-}$$
$$K = \frac{[H_{3}O^{+}][OH^{-}]}{[H_{2}O]^{2}}$$
$$K[H_{2}O]^{2} = [H_{3}O^{+}][OH^{-}]$$
$$K_{W} = [H_{3}O^{+}][OH^{-}]$$
$K_W$ at $T = 25\,^{\circ}\mathrm{C}$:
$K_W = 1.0 \times 10^{-14}\ \mathrm{mol^2\,L^{-2}}$, so $[H^{+}][OH^{-}] = 10^{-14}$ and $[H^{+}] = 1.0 \times 10^{-7}$ mol/L. Degree of dissociation of water: $\alpha = \frac{10^{-7}}{55.5} = \frac{10^{-7}}{(1000/18)} \approx 1.8 \times 10^{-9}$, i.e. % dissociation $\approx 1.8 \times 10^{-7}$.
1. Degree of ionisation :
$\alpha = \frac{\text{number of molecules ionised or dissociated}}{\text{total number of molecules taken}}$. For strong electrolytes, $\alpha = 1$; for weak electrolytes, $\alpha < 1$.
2. Ostwald's Dilution law :
$K = \frac{C\alpha^{2}}{1 - \alpha}$. If $\alpha$ is very small, $1 - \alpha \approx 1$, so $K = C\alpha^{2}$, or $\alpha = \sqrt{\frac{K}{C}} \Rightarrow \alpha \propto \frac{1}{\sqrt{C}}$. Here, $K$ is the dissociation constant and $C$ is the molar concentration of the solution.
3. Dissociation constant of an acid: $K_{a} = \frac{\left[H^{+}\right]\left[A^{-}\right]}{\left[HA\right]} = \frac{C\alpha^{2}}{\left(1 - \alpha\right)}$
4. Dissociation constant of a base: $K_{b} = \frac{\left[B^{+}\right]\left[OH^{-}\right]}{\left[BOH\right]} = \frac{C\alpha^{2}}{\left(1 - \alpha\right)}$
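As a worked illustration of formula 3 (the numbers are a standard textbook example for acetic acid, not taken from this page): with $K_a = 1.8\times10^{-5}$ and $C = 0.1\ \mathrm{M}$,

$$\alpha = \sqrt{\frac{K_a}{C}} = \sqrt{\frac{1.8\times10^{-5}}{0.1}} \approx 1.3\times10^{-2},\qquad [H^{+}] = \sqrt{K_a C} = \sqrt{1.8\times10^{-6}} \approx 1.3\times10^{-3}\ \mathrm{mol/L}.$$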
$$\frac{1}{\sum_{l\in \color{cyan}{L}} |\color{green}{\hat{y}}_l|} \sum_{l \in \color{cyan}{L}} |\color{green}{\hat{y}}_l|\, \phi(\color{magenta}{y}_l, \color{green}{\hat{y}}_l)$$
\(\color{cyan}{L}\) is the set of labels, \(\color{green}{\hat{y}}\) is the true label and \(\color{magenta}{y}\) is the predicted label. \(\color{green}{\hat{y}}_l\) is all the true labels that have the label \(l\), and \(|\color{green}{\hat{y}}_l|\) is the number of true labels that have the label \(l\). \(\phi(\color{magenta}{y}_l, \color{green}{\hat{y}}_l)\) computes the precision or recall for the true and predicted labels that have the label \(l\).
To compute precision, let \(\phi(A,B) = \frac{|A \cap B|}{|A|}\).
To compute recall, let \(\phi(A,B) = \frac{|A \cap B|}{|B|}\).
How is Weighted Precision and Recall Calculated?
Let’s break this apart a bit more.
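Below is a minimal from-scratch sketch of this support-weighted average in Python (not the scikit-learn implementation; the toy label vectors are made up for illustration):

from collections import Counter

def weighted_metric(y_true, y_pred, labels, kind="precision"):
    """Support-weighted average of per-label precision or recall."""
    support = Counter(y_true)                       # |ŷ_l| for each label
    total = sum(support[l] for l in labels)
    score = 0.0
    for l in labels:
        pred_l = {i for i, p in enumerate(y_pred) if p == l}
        true_l = {i for i, t in enumerate(y_true) if t == l}
        hits = len(pred_l & true_l)
        denom = len(pred_l) if kind == "precision" else len(true_l)
        phi = hits / denom if denom else 0.0
        score += support[l] * phi
    return score / total

y_true = ["a", "a", "b", "c", "c", "c"]
y_pred = ["a", "b", "b", "c", "a", "c"]
print(weighted_metric(y_true, y_pred, ["a", "b", "c"], "precision"))   # 0.75
print(weighted_metric(y_true, y_pred, ["a", "b", "c"], "recall"))      # 0.666...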
For example, the radiation dominated cosmology, the energy density of radiation is proportional to $a^{-4}$ and the volume is proportional to $a^3$, where $a$ is the scale factor. So the total energy of radiation is propotional to $a^{-1}$. So where is the loss of energy of radiation? Is it because the gravitational field has the energy?
Does $\nabla_aT^{ab}_{\rm matter}=0$ represent the conservation of energy and momentum of matter field in GR?
After taking a look at the answer by Jim, I am not sure of my knowledge at the moment. However, let's try to figure out the details. I claim that the energy-momentum tensor of matter in GR is not conserved by itself, since matter always interacts with the gravitational field, and the total energy should be taken into account instead.
Vanishing of the covariant divergence $\nabla_a T^{ab}=0$ exactly reflects this feature. Consider this equation integrated over a 4-dimensional volume \begin{equation} \begin{aligned} 0=&\int_V d^4x \sqrt{-g}\;\nabla_a T^{ab}=\int_V d^4x \nabla_a(\sqrt{-g}\; T^{ab})=\int_V d^4x \partial_a(\sqrt{-g}\; T^{ab})+\ldots\\ =&\int_{\partial V} d\Sigma_a T^{ab}+\ldots, \end{aligned} \end{equation} where dots represent whatever Christoffel symbols appear there. Hence, we see, that if we choose the surface $\partial V$ in the usual way when only its $x^0=const$ parts contribute to the integral we will get the usual conservation law deformed by the connection terms \begin{equation} 0=P^b(x^0=t_2)-P^b(x^0=t_1)+\dots. \end{equation} So, since the difference of 4-momentum at different time does not vanish, the energy momentum tensor of matter is not conserved by itself.
However, if we take into account the contribution from gravity that is calculated in the usual way \begin{equation} T^{grav}_{ab}=\frac{1}{\sqrt{-g}}\frac{\delta S_{EH}}{\delta g^{ab}}=R_{ab}-\frac12g_{ab}R, \end{equation} we see the well known feature of GR that the total energy-momentum tensor vanishes due to the Einstein equations \begin{equation} T_{ab}+T_{ab}^{grav}=0. \end{equation} That is kind of obvious, because the total energy momentum tensor is obtained by variation of the total action wrt $g_{ab}$ and therefore gives exactly EOM for gravitational field with a source.
It seems that the loss of energy of radiation goes into the energy of the gravitational field. In addition, you may read volume 2 of Landau-Lifshitz to find out how people define the energy-momentum pseudo-tensor of the gravitational field, which does not cancel the matter energy-momentum tensor and is therefore more useful for some applications.
The energy of radiation falls off like $a^{-1}$ because space is expanding. As space expands, the peaks of an electromagnetic wave expand with it, which makes them get farther apart. This means the wavelength of radiation increases as space expands, thus the frequency decreases. Since energy is $E=h\nu$, if the frequency decreases proportional to $a^{-1}$, then the energy also falls off like $a^{-1}$. This is called cosmological redshift.
Also, as mentioned in the comments, $\nabla_{\mu}T^{\mu}_{~\nu}=0$ means that the energy-momentum tensor is conserved. It is the GR equivalent of the conservation of energy and the conservation of momentum laws.
$\nabla T=0 $ is not a conservation law. You are not considering the energy of the gravitational field in this way.
If you do the calculation you find something like $\partial_{\nu} (\sqrt{-g} T^{\mu \nu}) \neq 0 $, so you can't define a conserved charge like $P^{\mu}= \int \sqrt{-g} d^4x T^{\mu0} $.
To account for the gravitational scalar field you could construct a pseudotensor, but this is not satisfactory. Anyway, in some cases you have symmetries that allow you to have conserved quantities.
For the energy loss due to cosmological redshift see here: redshift
EDIT: I add something due to comments. Going from curved space to a local inertial frame of course you have $\nabla T=0 \rightarrow \partial T=0$ and the latter IS a conservation law. But not in general. In a generic dynamical spacetime the energy is indeed not conserved.
References: Hartle, Gravity, pag. 482 cap. 22 (Local Cons. of En-Momentum in Curved Space). See in particular the example on FRW cosmology that nicely answer the initial question. |
I'm currently working through the book Heisenberg's Quantum Mechanics (Razavy, 2010), and am reading the chapter on classical mechanics. I'm interested in part of their derivative of a generalized Lorentz force via a velocity-dependent potential.
I understand the generalized force that they derive from a Lagrangian of the form $L = \frac{1}{2}m|\vec v|^2 - V(\vec r,\vec v,t)$
$$F_i = -\frac{\partial V}{\partial x_i} + \frac{d}{dt}\left(\frac{\partial V}{\partial v_i}\right)$$
Through a series of steps that I still do not quite understand, the author derives the identity for the mixed velocity derivatives of the force:
$$\frac{\partial^2 F_i}{\partial v_j\partial v_k} = 0$$
At this point, "by integrating this equation once" with respect to $v_k$ , they obtain the equation:
$$\frac{\partial F_i}{\partial v_j} = \sum_k \varepsilon_{ijk}B_k(\vec r,t)$$
where $B_k$ is the $k^{th}$ component of a vector function $\vec B$ that does not depend on velocity.
I'm having trouble understanding where this expression for the integral comes into play. The left-hand side clearly comes from the FTC. Were I to perform the integration myself I would do the same and include an arbitrary function
$$\frac{\partial F_i}{\partial v_j}=g(\vec r, v_1,...,v_{k-1}, v_{k+1},..., t)$$
where $g$ is a function that does not depend on $v_k$ explicitly. In this way $\frac{\partial g}{\partial v_k} =0 $ as we need.
I've tried to work out how this function is related to the expression with $B_k$, but I cannot find any source that could point me in the right direction, especially because my best guess for $g$ depends on the other $n-1$ components of the velocity while the author's $\vec B$ vector is a function of position and time only.
Could I have some help understanding what's being done here?
Edit: Additional important context
Additionally, Razavy goes a step further and assumes that the generalized force is independent of acceleration, just like the Lagrangian. Using this assumption, we can take the second condition listed in another related question I asked to form the anti-symmetry relation
$$\frac{\partial F_i}{\partial v_j} =- \frac{\partial F_j}{\partial v_i}$$
And then we can start taking partial derivatives, assuming all these derivatives are continuous. Taking the left side first:
$$ \frac{\partial}{\partial v_k}(LHS)=\frac{\partial^2 F_i}{\partial v_j\partial v_k} = \frac{\partial^2 F_i}{\partial v_k\partial v_j} = \frac{\partial}{\partial v_j}\frac{\partial F_i}{\partial v_k}= \frac{\partial}{\partial v_j}\left(-\frac{\partial F_k}{\partial v_i}\right) = -\frac{\partial^2 F_k}{\partial v_i\partial v_j} $$
So, we can differentiate and swap the top index and a bottom index at the cost of a negative sign. In a similar way, the right hand side can be differentiated
$$\frac{\partial}{\partial v_k}(RHS)=-\frac{\partial^2 F_j}{\partial v_i\partial v_k}=\frac{\partial^2 F_k}{\partial v_i\partial v_j}$$
Thus, We can write: $\frac{\partial}{\partial v_k}(LHS) = -\frac{\partial}{\partial v_k}(RHS)$.
Because $LHS=-RHS$, we have
$$\frac{\partial}{\partial v_k}(LHS) = \frac{\partial^2 F_i}{\partial v_j\partial v_k} = 0$$ |
There is some confidence that the electron is a perfect point, e.g. to simplify QFT calculations. However, searching for experimental evidence (stack), the Wikipedia article only points to an argument based on the
g-factor being close to 2: Dehmelt's 1988 paper extrapolates from proton and triton behavior that the RMS (root mean square) radius of particles composed of 3 fermions should be $\approx g-2$:
Using more than two points to fit this parabola, it wouldn't look so great; e.g. the
neutron (udd) has $g\approx-3.8$ and $\langle r^2_n\rangle\approx -0.1\ \mathrm{fm}^2$.
And while the classical $g$-factor is said to be 1 for a rotating object, that holds only under the assumption of proportional mass and charge densities ($\rho_m\propto\rho_q$). Generally, we can classically get any $g$ by modifying the charge-mass distribution:
$$g=\frac{2m}{q} \frac{\mu}{L}=\frac{2m}{q} \frac{\int AdI}{\omega I}=\frac{2m}{q} \frac{\int \pi r^2 \rho_q(r)\frac{\omega}{2\pi} dr}{\omega I}= \frac{m}{q}\frac{\int \rho_q(r) r^2 dr}{\int \rho_m(r) r^2 dr}$$
Another argument for point nature of electron is
tiny cross-section, so let's look at it for electron-positron collisions:
Besides some bumps corresponding to resonances, we see a linear trend in this log-log plot: $\approx 10^{-6}$ mb for 10 GeV (5 GeV per lepton), $\approx 10^{-4}$ mb for 1 GeV. The 1 GeV case means $\gamma\approx 1000$, which is also in
Lorentz contraction: geometrically means $\gamma$ times reduction of size, hence $\gamma^2$ times reduction of cross-section - exactly as in this line on log-log scale plot.
A more proper explanation is that this is for a collision: transforming to the frame of reference where one particle rests, we get $\gamma\to\approx \gamma^2$. This asymptotic $\sigma \propto 1/E^2$ behavior in colliders is well known (e.g. (10) here) - wanting
size of resting electron, we need to take it from GeVs to E=511keVs. Extrapolating this line (no resonances) to resting electron ($\gamma=1$), we get $\approx 100$ mb, corresponding to $\approx 2$ fm radius.
From the other side we know that two EM photons having 2 x 511keV energy can create electron-positron pair, hence energy conservation doesn't allow electric field of electron to exceed 511keV energy, what requires some its deformation in femtometer scale from $E\propto 1/r^2$:
$$\int_{1.4fm}^\infty \frac{1}{2} |E|^2 4\pi r^2 dr\approx 511keV$$
Could anybody elaborate on concluding upper bound for electron radius from g-factor itself, or point different experimental boundary?
Does it forbid electron's parton structure: being
"composed of three smaller fermions" as Dehmelt writes? Does it also forbid some deformation/regularization of electric field to a finite energy? |
The halting problem states there is no algorithm that will determine if a given program halts. As a consequence, there should be programs about which we can not tell whether they terminate or not. What are the simplest (smallest) known examples of such programs?
A pretty simple example could be a program testing the Collatz conjecture:
$$ f(n) = \begin{cases} \text{HALT}, &\text{if $n$ is 1} \\ f(n/2), & \text{if $n$ is even} \\ f(3n+1), & \text{if $n$ is odd} \end{cases} $$
It's known to halt for $n$ up to at least $5 × 2^{60} ≈ 5.764 × 10^{18}$, but in general it's an open problem.
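For what it's worth, a direct transcription of that test into Python (a sketch; calling it on a particular n only tells you about that n, and it would loop forever if the conjecture failed for that n):

def collatz_halts(n):
    """Follow the Collatz map from n; returns True once 1 is reached."""
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return True   # never returns if the sequence for this n does not reach 1

print(collatz_halts(27))   # True, after 111 steps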
The halting problem states there is no algorithm that will determine if a given program halts. As a consequence, there should be programs about which we can not tell whether they terminate or not.
"We" are not an algorithm =) There is no
general algorithm that could determine, for every given program, whether it halts.
What are the simplest (smallest) known examples of such programs?
Consider the following program:
n = 3
while true:
    if is_perfect(n): halt()
    n = n + 2
Function is_perfect checks whether n is a perfect number. It is unknown whether there are any odd perfect numbers, so we don't know whether this program halts or not.
You write:
The halting problem states there is no algorithm that will determine if a given program halts. As a consequence, there should be programs about which we can not tell whether they terminate or not.
This is a non-sequitur, in both directions. You succumb to a common fallacy that is worth addressing.
Given any fixed program $P$, its halting problem ("Does $P$ always halt?") is
always decidable, because the answer is either "yes" or "no". Even if you can not tell which it is, you know that one of the two trivial algorithms that answer always "yes" resp. "no" solves the $P$-halting problem.
Only if you require that the algorithm should solve the Halting problem for all¹ programs can you show that no such algorithm can exist.
Now, knowing that the Halting problem is undecidable does not imply that there are any programs
whose termination or looping nobody can prove. Even if you are not more powerful than a Turing machine (which is only a hypothesis, not a proven fact), all we know is that no single algorithm/person can provide such a proof for all programs. There may be a different person able to decide for each program.
Some more related reading:
How can it be decidable whether $\pi$ has some sequence of digits? Human computing power: Can humans decide the halting problem on Turing Machines? Algorithm to solve Turing's "Halting problem" Program synthesis, decidability and the halting problem Is it possible to solve the halting problem if you have a constrained or a predictable input? Why are the total functions not enumerable?
So you see that your actual question (as repeated below) has nothing to do with whether the halting problem is computable. At all.
What are the simplest (smallest) known examples of [programs we don't know to halt or loop]?
This in itself is a valid question; others have given good answers. Basically, you can transform every statement $S$ with unknown truth value into an example, provided it
does have a truth value:
$\qquad\displaystyle g(n) = \begin{cases}1, &S \text{ true},\\ g(n+1), &\text{else}.\end{cases}$
Granted, these are not very "natural".
Not necessarily all, but "many" in some sense. Infinitely many, at least.
Any open problem regarding the existence of a number with particular properties gives rise to such a program (the one which searches for such a number). For example, take the Collatz conjecture; since we don't know if it is true, we also don't know if the following program terminates:
n := 1; found := false;
while not found do
    s := {}; i := n;
    while i not in s do
        add i to s;
        if i even then i := i/2 else i := 3i+1
    if 1 not in s then found := true;
    n := n+1
Given that the Busy Beaver problem is not solved for a 5-state-2-symbol Turing machine, there must be a Turing machine with only five states and only two symbols which has not been shown to halt or not when started for an empty tape. That is a very short, concise, and closed program.
the question is tricky because decidability (the CS-equivalent formalization/generalization of the halting problem) is associated with
languages, so it needs to be recast in that format. this seems to not be pointed out much, but many open problems in math/CS can be readily converted to problems (languages) of unknown decidability. this is because of a tight correspondence between theorem proving and (un)decidability analysis. for example (somewhat like the other answer wrt odd perfect numbers), take the twin primes conjecture which dates to the Greeks (over 2 millennia ago) and is subject to major recent research advances eg by Zhang/Tao. convert it to an algorithmic problem as follows:
Input:
n. Output: Y/N there exist at least n twin primes.
the algorithm searches for twin primes and halts if it finds
n of them. it is not known if this language is decidable. resolution of the twin primes problem (which asks if there are a finite or infinite number) would also resolve the decidability of this language (if it is also proven/ discovered how many there are, if finite).
another example, take the Riemann hypothesis and consider this language:
Input:
n. Output: Y/N there exist at least n nontrivial zeroes of the Riemann zeta function.
the algorithm searches for nontrivial zeroes (the code is not especially complex, its similar to root finding, and there are other equivalent formulations that are relatively simple, which basically calculate sums of "parity" of all primes less than
x etc) and halts if it finds n of them. again, it's not known if this language is decidable, and resolution is "nearly" equivalent to solving the Riemann conjecture.
now, how about an even more spectacular example? (
caveat, probably more controversial as well)
Input: c. Output: Y/N there exists an O(n^c) algorithm for SAT.
similarly, resolution of the decidability of this language is nearly equivalent to the P vs NP problem. however there is a less obvious case for "simple" code for the problem here.
Write a simple program that checks whether for every n, $1 ≤ n ≤ 10^{50}$, the Collatz sequence starting with n will reach the number 1 in less than a billion iterations. When it has the answer, let the program stop if the answer is "Yes", and let it loop forever if the answer is "No".
We cannot tell whether this program terminates or not. (Who is we? Let's say "we" is anyone who could add a comment to my answer). However, someone with an incredibly powerful computer might tell. Some genius mathematician might be able to tell. There might be a rather small n, say n ≈ $10^{20}$ where a billion iterations are needed; that would be in reach of someone with a lot of determination, a lot of time, and a lot of money. But right now, we cannot tell. |
Could you develop on the usage of integer frequencies?
When dealing with sampled discrete signals, there can only be an integer amount of resolvable frequencies within a captured interval.
When you take an $x(t), t \in \mathbb{R}$ and apply sample-and-hold on it at some sampling frequency $Fs$, it is turned into an $x \left[n \cdot \frac{1}{Fs} \right ], n \in \mathbb{N}$.
Theoretically, when $t \in \mathbb{R}$, you can "interrogate"
any time instance or interval of $x$. But after sampling, when $n \in \mathbb{N}$, the smallest interval you can interrogate from your signal is $\frac{1}{Fs}$ or between successive samples $n, n+1$.
Even if you tried to sample faster
after the sample-and-hold, the value you would get would be the last known value from the last time the sampling took place.
Therefore, within a block of $N$ samples, you get $N$
resolvable frequencies. Even if you tried to evaluate the Discrete Fourier Transform in more than the available $N$ frequencies, all that you would get would be interpolated values between the $N$ resolvable bins of it.
Which brings us to what Stanley Pawlukiewicz remarks: the fact that $n$ is an integer does not mean that you can only represent integer frequencies. Only that you can represent a
fixed amount of them.
Just as you can only "see" $n$ and then $n+1$ discrete time instances of $x$, so you can only resolve some $f, f+ \frac{Fs}{N}$ discrete frequencies within it, where $N$ is the amount of samples you have collected.
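A small numerical illustration of this point (the values of $Fs$ and $N$ below are arbitrary assumptions): the $N$-point DFT only resolves frequencies spaced $Fs/N$ apart, and evaluating a denser, zero-padded DFT merely interpolates between those bins.

import numpy as np

Fs, N = 100.0, 16                        # arbitrary sampling rate and block length
n = np.arange(N)
x = np.cos(2 * np.pi * 9.0 * n / Fs)     # a 9 Hz tone, between the 6.25 and 12.5 Hz bins

bins = np.fft.fftfreq(N, d=1/Fs)         # the N resolvable frequencies, spaced Fs/N apart
X = np.fft.fft(x)                        # N-point DFT: one value per resolvable bin

# A denser (zero-padded) DFT adds no new information; it only interpolates
# between those N bins.
X_dense = np.fft.fft(x, 8 * N)
print(bins[:4])                          # [ 0.    6.25  12.5  18.75]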
...possibly relate it to the DCT where it seems to me that, e.g. the second basis vector is like half a cycle (as shown on my plot), and half a cycle is like a fractional frequency?
I hope that it is clear from the above discussion that
"...half a cycle..." can well be N samples of some $f$ sampled at the right $Fs$ (?).
Hope this helps. |
I want to code the dynamics of a 2D planar quadrotor and then control it to drive it from one state to another.
The dynamics that I use are taken from the online course given by Vijay Kumar on Coursera, as follows:
$ \begin{bmatrix} \ddot{y}\\ \ddot{z}\\ \ddot{\phi} \end{bmatrix} = \begin{bmatrix} 0\\ -g\\ 0 \end{bmatrix} + \begin{bmatrix} -\frac{1}{m}sin\phi & 0\\ \frac{1}{m}cos\phi & 0\\ 0 & -\frac{1}{I_{xx}} \end{bmatrix}\begin{bmatrix} u_1\\ u_2 \end{bmatrix} $
It also uses some linearizations: $\sin\phi \rightarrow \phi$ and $\cos\phi \rightarrow$ const.
And $u_1$, $u_2$ are defined by:
$u_1=m\{g+\ddot{z}_T(t)+k_{v,z}*(\dot{z}_T(t)-\dot{z})+k_{p,z}*(z_{T}(t)-z)\}$
$u_2=I_{xx}(\ddot{\phi}+k_{v,\phi}*(\dot{\phi}_T(t)-\dot{\phi})+k_{p,\phi}*(\phi_{T}(t)-\phi))$
$\phi_c=-\frac{1}{g}(\ddot{y}_T(t)+k_{v,y}*(\dot{y}_T(t)-\dot{y})+k_{p,y}*(y_{T}(t)-y))$
It is assumed that the vehicle is near the hover condition; the commanded roll angle $\phi_c$ is calculated based on the desired y-component and is used to calculate $u_2$, which is the net moment acting on the CoG.
The thing that I don't understand is: don't I need any saturation on the actuators? Do I need to implement some limiting part in my code to limit the control signals?
The other thing is, I don't have any desired acceleration, but those terms appear in the control signal equations. Can I remove them?
The last thing is, my control signals cause the vehicle to reach roll angles on the order of $10^5$, by integrating the high angular rates caused by the high $u_2$ moment signal, I guess. Since the linearization relies on the small-angle approximation, those high angles and rates are problematic. How can I handle this?
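Not an authoritative answer, but one practical way to see the effect of saturation is to simulate the closed loop with and without clipping the commands. Below is a minimal Python sketch of such a simulation under the near-hover linearization from the question; the gains, mass, inertia, actuator limits and the zero feed-forward accelerations are all assumptions:

import numpy as np

m, Ixx, g = 0.18, 2.5e-4, 9.81             # assumed vehicle parameters
kp_z, kv_z = 20.0, 8.0                     # assumed altitude gains
kp_y, kv_y = 8.0, 4.0                      # assumed lateral gains
kp_p, kv_p = 400.0, 40.0                   # assumed attitude gains
u1_max, u2_max = 2.0 * m * g, 0.05         # assumed actuator limits

def step(state, target, dt=0.002, saturate=True):
    y, z, phi, yd, zd, phid = state
    yT, zT = target
    # outer loop: commanded roll from the desired y-dynamics (feed-forward accel = 0)
    phi_c = -(kv_y * (0 - yd) + kp_y * (yT - y)) / g
    # control inputs from the question, with feed-forward accelerations set to zero
    u1 = m * (g + kv_z * (0 - zd) + kp_z * (zT - z))
    u2 = Ixx * (kv_p * (0 - phid) + kp_p * (phi_c - phi))
    if saturate:
        u1 = np.clip(u1, 0.0, u1_max)
        u2 = np.clip(u2, -u2_max, u2_max)
    # near-hover linearized dynamics: ÿ = -(1/m)·sin(φ)·u1 ≈ -g·φ when u1 ≈ m·g
    ydd = -g * phi
    zdd = -g + u1 / m
    phidd = u2 / Ixx
    return np.array([y + yd*dt, z + zd*dt, phi + phid*dt,
                     yd + ydd*dt, zd + zdd*dt, phid + phidd*dt])

state = np.zeros(6)
for _ in range(5000):                      # 10 s of simulated flight toward (y, z) = (1, 1)
    state = step(state, (1.0, 1.0))
print(state[:3])                           # final y, z and roll angle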
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The latest years have observed a resurrection of interest in searches for exotic states motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10. Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of “online” and “offline” analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process such that Online data are immediately available offline for physics analysis (Turbo analysis), the computing capacity of the HLT farm has been used simultaneously for different workflows : synchronous first level trigger, asynchronous second level trigger, and Monte-Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7. Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1\,\text{MeV}\ n_{eq}\,\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
In my topology homework we are asked to describe a topology on the Integers such that:
set of all Primes is open. for each $x\in\mathbb Z$, the set $\{x\}$ is not open. $\forall x,y \in\mathbb Z$ distinct, there is an open $U\ni x$ and an open $V\ni y$ such that $U\cap V=\emptyset$
I was looking at Furstenberg's topology as in this proof: For $m, b \in\mathbb Z$ with $m > 0$ define $N(m,b):=\{mx + b : x \in \mathbb Z\}$, an arithmetic progression stretching towards infinity in both directions. A set $U$ is open if either:
$U = \emptyset$; or For each $b\in U$ there is an $m>0$ such that $N(m,b)\subseteq U$.
But as I understand it, the set of Primes is not open in this topology. Now I'm not sure what I should do: is there a way to modify this topology to make the set of Primes open, or should I think of something completely different?
Any hints are appreciated! thanks! |
I am trying to evaluate the following integrals
$\int^\infty_0 \exp\left(-\frac{\alpha}{x+1}\right)\exp(-c x) x^{\frac{n-1}{2}} I_{n-1}\left(\sqrt{\beta x}\right)\ \mathrm{d}x $
and
$\int^\infty_0 \exp\left(-\frac{\alpha}{x}\right)\exp(-cx)(x-1)^{\frac{n-1}{2}} I_{n-1}\left(\sqrt{\beta(x-1)}\right) \mathrm{d}x $
I tried using Mellin convolution, but I always struggle with the inversion part. Any other hints or clues are welcomed. |
This web page says that the microcanonical partition function $$ \Omega(E) = \int \delta(H(x)-E) \,\mathrm{d}x $$ and the canonical partition function $$ Z(\beta) = \int e^{-\beta H(x)}\,\mathrm{d}x $$ are related by the fact that $Z$ is the Laplace transform of $\Omega$. I can see mathematically that this is true, but why are they related this way? Why can we intrepret the integrand in $Z$ as a probability, and what allows us to identify $\beta = \frac{1}{kT}$?
I guess, one could start by considering the microcanonical and canonical ensembles as entirely different concepts. For they represent different statistical ensembles: in the former the energy of the system is fixed while in the second the energy can span all possible values allowed by the energy spectrum of the system with some penalty attributed to high energies.
In this case it is important to notice that the
meaning of the microcanonical measure and, for that matter, of the canonical one is that of probability measures in state space (quantum or classical). Thus, they have to take real values and be integrable/normalizable.
As you noticed, of course, the canonical partition function can be understood as being a Laplace transform of the (microcanonical) density of states. In some respect, it then tells us something about "how many thermostats do we need to capture the physics of the density of states?". This is of course vague-ish because the set of Boltzmann weights (understood as probabilistic weights) doesn't constitute an orthonormal basis of the space of functions.
So I think that for this Laplace transform idea to mean anything, we need to imagine that it is possible, via analytical continuation, to extend $\beta$-values to the complex plane. In such a case then, the inverse Laplace transform may exist and we may be able to understand something.
In particular we can indeed recast the density of state $\Omega(E) \equiv \sum_i \delta(E-E_i)$ where $i$ labels the microstates as follows:
$$\Omega(E) = \mathcal{L}^{-1}\left[ Z(\beta) \right] = \int \: d \beta \:e^{\beta E} Z(\beta)$$
where $\mathcal{L}^{-1}$ denotes the inverse Laplace transform.
The interesting point from the point of view of statistical mechanics is that, for big system sizes i.e. in the thermodynamic limit, we can write that $Z(\beta) \equiv e^{-\beta A(\beta)} \sim e^{-\beta N a(\beta)}$ where $A$ is the Helmholtz free energy and $N$ the number of particles in the system. The latter equality derives from the extensivity of the free energy. Note that $E$ is also extensive in the thermodynamic limit so that $E \sim N e$, we then get:
$$\Omega(E) = \int \: d \beta \:e^{\beta N(e-a(\beta))}$$
If $N$ is very large, one can then perform a saddle point approximation at, say, $\beta = \beta^*$ (which will most likely take a path in the complex plane) and find that:
$$\Omega(E) \sim e^{\beta^* N(e-a(\beta^*))}$$
Assuming the path lies in the complex plane but the saddle is itself on the real line, we then get that each value of $E$, there exists a single thermostat with $\beta$-value $\beta^*$ that is susceptible to accurately model the density of state, and hence all the statistical properties at fixed energy $E$; in the thermodynamic limit that is.
This is what is called the ensemble equivalence between the canonical and the microcanonical ensemble (of course the same can be done when asking "how many fixed energy systems do I need to model my thermostat at inverse temperature $\beta$").
This was the large system limit. Now, it turns out that assuming that microcanonical and canonical partition functions are, if I dare say, Laplace transforms of one another then, one can do things for small systems as well and at any temperature (i.e. even in the quantum regime).
In particular, the density of states is always quite hard to compute while the partition function is more tractable (and yet no easy beast to tame!).
Just to give the idea of what happens, let's consider the case of a 1D harmonic oscillator. In this case we have that the 1-particle partition function reads:
$$z(\beta) = \frac{1}{2\sinh\left(\frac{\beta \hbar \omega}{2}\right)}$$
Now, trying to figure out what the corresponding density of states would be, we can use the exact relation:
$$\omega(E) = \frac{1}{2\pi i}\int_{\gamma - i\infty}^{\gamma + i\infty} \mathrm{d}\beta \: \frac{e^{\beta E}}{2\sinh\left(\frac{\beta \hbar \omega}{2}\right)}$$
As I said before, for this integral to converge, the inversion most likely has to be performed within a strip of the complex plane where the integrand is well behaved. Moreover, even if we tried to integrate naively, we would encounter a pole at $\beta = 0$. If we now try to avoid the zero pole via an excursion in the complex plane, we need to integrate over a closed contour, say a semi-circle encompassing the top half of the complex plane. If we do that, we have to account for all the poles lying on the imaginary axis in the upper half-plane, where the denominator of the integrand vanishes.
The corresponding residues give rise to an oscillatory part that supplements an average part, the latter being what we get in the large-system-size limit discussed above. The origin of this oscillatory behaviour is that, if we plot the total number of states with energy below $E$, then in the quantum case it takes a "ladder" shape oscillating about a mean, and it is this mean that we commonly use in our usual treatment of statistical thermodynamics.
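To make the "ladder" picture concrete, here is a minimal numerical sketch of my own (with $\hbar\omega$ set to 1): it compares the exact integrated density of states of the 1D quantum oscillator with its smooth average part; the residual is precisely the oscillatory contribution discussed above.

```python
import numpy as np

# Exact ("ladder") integrated density of states of a 1D quantum harmonic
# oscillator, compared with its smooth average part (hbar*omega = 1 assumed).
hbar_omega = 1.0
levels = hbar_omega * (np.arange(200) + 0.5)           # E_n = hbar*omega*(n + 1/2)

E = np.linspace(0.0, 20.0, 2001)
N_exact = np.array([(levels <= e).sum() for e in E])   # staircase N(E)
N_smooth = E / hbar_omega                              # average part, N(E) ~ E/(hbar*omega)

# The residual oscillates about zero; it is the contribution of the complex
# poles of 1/(2*sinh(beta*hbar*omega/2)) mentioned above.
residual = N_exact - N_smooth
print(residual.min(), residual.max())                  # stays within about +/- 0.5
```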
It becomes more interesting beyond 1D, and for those who are interested in the 2D harmonic oscillator, you can find a detailed study of the latter here.
At the end of the day, I would say that although the microcanonical and canonical ensembles are not defined as being related by a Laplace transform, if we allow ourselves to extend their definition to the complex plane, then they can be related by a Laplace transform, and this enables us to get very important and general results (ensemble equivalence for large systems and oscillatory behaviour of the density of states for small ones) which are, by the way, well verified numerically and experimentally.
Consider two connected systems $A$ and $B$ in the microcanonical ensemble. Calling $E$ the total (fixed) energy we have $$\Omega(E)=\Omega_A(E_A)\Omega_B(E_B)=\Omega_A(E_A)\Omega_B(E-E_A).\tag{1}$$ This only states that the number of states of the whole system is the product of the numbers of states of the subsystems, but it is really the key, since everything that follows is a direct consequence of $(1)$ and of the maximum entropy principle.
Using the maximum entropy principle with $(1)$, we obtain that at equilibrium between $A$ and $B$ we have $$\frac{1}{\Omega_A(E_A)}\frac{\partial\Omega_A}{\partial E_A}(E_A) =\frac{1}{\Omega_B(E_B)}\frac{\partial\Omega_B}{\partial E_B}(E_B).$$ This number (the relative variation of $\Omega_i$ with respect to the energy $E_i$) is called $\beta$ (up to the constant $k_{\mathrm B}$, which converts it to our particular unit system). Consequently, we have at equilibrium $$\frac{\partial}{\partial E}\left(\ln\Omega-\beta E\right)=0.$$ We can therefore define the Legendre transform of $\ln\Omega(E)$, which we call $\ln\mathcal Z(\beta)$. We have $$-\frac{\partial\ln\mathcal Z}{\partial \beta}=E$$ as a consequence of the inverse Legendre transform. This shows that $\mathcal Z(\beta)$ is indeed the canonical partition function.
The Legendre transform actually gives the relation $$ \ln\mathcal Z(\beta)=\ln\Omega(E)-\beta E.$$ Taking the exponential and integrating over $E$ we get the relation $$ \mathcal Z(\beta)=\int \Omega(E)\,\mathrm e^{-\beta E}\,\mathrm dE.$$ Integrating over $\beta$ instead, we get the expression of the inverse, $\Omega(E)=\int \mathcal Z(\beta)\,\mathrm e^{\beta E}\,\mathrm d\beta$.
Beware that this does not give any information on how to perform the integral. The computation of $\Omega$ from $\mathcal Z$ actually requires a complex integration, since the inverse Laplace transform is performed along a contour in the complex plane chosen so that all poles lie to its left.
So the answer to your question could be that this relation is a consequence of the properties of the Legendre transform combined with the fact that $(1)$ implies the additivity of $\ln\Omega$.
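As a sanity check of that relation (not part of the argument above), one can verify $\mathcal Z(\beta)=\int\Omega(E)\,\mathrm e^{-\beta E}\,\mathrm dE$ numerically for a classical 1D harmonic oscillator with $m=\omega=1$, for which $\Omega(E)=2\pi$ and $\mathcal Z(\beta)=2\pi/\beta$ are known in closed form:

```python
import numpy as np
from scipy.integrate import quad

# Check Z(beta) = ∫ Omega(E) exp(-beta*E) dE for H = (p^2 + x^2)/2,
# where Omega(E) = 2*pi (constant) and Z(beta) = 2*pi/beta exactly.
def Omega(E):
    return 2.0 * np.pi

for beta in (0.5, 1.0, 2.0):
    Z_laplace, _ = quad(lambda E: Omega(E) * np.exp(-beta * E), 0.0, np.inf)
    print(beta, Z_laplace, 2.0 * np.pi / beta)   # the last two columns agree
```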
I am not sure that it is true in general.
In your equation, $x$ might not be a single continuous variable on the real number line. It may consist of a set of variables, as in an interacting spin system. It can also be a discrete variable, and in the most general case it is just a set of states in some model system. The Laplace transform is not well defined in the latter case. Also, if the Hamiltonian is neither linear nor quadratic (up to a canonical transformation), it is questionable whether it is still a Laplace transform.
Just as a quick conceptual contribution to the answers already provided: for any probability distribution defined over $\mathbb{R}_+$, the Laplace transform is the moment/cumulant-generating function, which is equivalent to saying it is the representation of the distribution in terms of its moments. The partition function may look a bit different, since we evaluate it at nonzero $\beta$ rather than only at zero, but that is precisely what it is.
Working my way backward from the canonical partition function,
$$\begin{align} Z&=\int e^{-\beta E}\,dx\,\,\,\,\textrm{(sum over states)}\\ &=\int e^{-(\beta-\beta_0) E-\beta_0E}\,dx\\ &=\int e^{-\Delta\beta E}\left(e^{-\beta_0E}\right)\,dx\\ &=\int e^{-\Delta\beta E}\rho (E,\beta_0)\,dx,\,\,\,\,\,\,\rho(E,\beta_0)=e^{-\beta_0E}\\ &=\int\left(1-\Delta\beta E + \frac{(\Delta\beta)^2 E^2}{2}+\ldots\right)\rho(E,\beta_0)\,dx\\ &=1-\Delta\beta m_1+ \frac{(\Delta\beta)^2 m_2}{2}+\ldots \end{align}$$
where $m_i$ is the $i$-th moment of the distribution $\rho(E,\beta_0)$, obtained by taking the appropriate derivative and setting $\Delta\beta=0$ (which, by the way, immediately sets $\beta_0=\beta$).
I have not made any distinction between microcanonical, canonical, grand-canonical, etc., because all that does is change the functional form of $\rho$, and that prescription is given by thermodynamics. In the microcanonical case, $\rho=\delta(E(x)-\bar{E})$ and does not depend on $\beta$. In the canonical case, $\rho=e^{-\beta E}$. And so on.
So in summary, the partition function is just the representation of the appropriate distribution in terms of its moments (moment-generating function), just as a Fourier series is the representation of a function in terms of its Fourier components. |
Suppose we use the metric $(+,-,-,-)$ thus the momentum squared is
$p^2 = p_0^2-\vec{p}^2 = m^2>0$
Defining $p_E:=\mathrm{i}\cdot p_0$ and $\bar{p}:=(\,p_E,\vec{p})$ with Euclidean norm $\bar{p}^2 = p_E^2+\vec{p}^2$.
Here's my question: if we plug in $\mathrm{i}\,p_0$ for $p_E$, we see that $\bar{p}^2 = -p^2 = -m^2$. So $\bar{p}^2$ is negative?
Also, if $\bar{p}^2 = -p^2 = -m^2$ is true, is it always so? $p^2$ is a Lorentz invariant, but how do we interpret $\bar{p}^2$ if it is equal to $-m^2$?
What I want to get to is the following:
Let $\mathcal{L}_{int} = \frac{1}{2}g\phi_1^2\phi_3+\frac{1}{2}h\phi_2^2\phi_1$, with $p_3 = p_1+p_2$ and $M>2m$.
Suppose we have a triangle loop with incoming momentum $p_3$ of mass $M>2m$ and two outgoing identical particles $\phi_2$ and $\phi_2$ with momenta $p_1$ and $p_2$, each of mass $m$ (sorry for the bad notation, but $\phi_1$ is not related to $p_1$). The incoming particle $\phi_3$ splits into two light ones (two $\phi_1$'s) of mass $\eta$ each, and each of these connects to the two $\phi_2$'s.
Thus we have the following momenta flowing inside the loop:
$k$, $ k-p_2$ and $k+p_1$.
After a Wick rotation we get the following integral:
\begin{equation} \int{\,\frac{\mathrm{d}^4\bar{k}}{(2\pi)^4} \frac{1}{\bar{k}^2+m^2}\frac{1}{\left(\bar{k}-\bar{p}_2\right)^2+\eta^2}\frac{1}{\left(\bar{k}+\bar{p}_1\right)^2+\eta^2}} \end{equation} Now to evaluate these let's use Schwinger's trick:
\begin{equation} \frac{1}{\bar{k}^2+m^2} = \int_0^\infty{\mathrm{d}s\,\mathrm{e}^{-s(\bar{k}^2+m^2)}}, \end{equation} but for this to be OK we need to have $\mathrm{Re}(\bar{k}^2+m^2)>0$ and similarly for the other propagators, and this is where I get confused.
It seems they don't satisfy this condition depending on how we interpret $\bar{k}^2$ and $\bar{p}^2$ and so on...
On the other hand if all the squares in the denominator of the integrand are taken as positive then the convergence condition is trivially satisfied.
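For what it's worth, here is a quick numerical check (my own, not part of the question) that the parametrisation $1/A=\int_0^\infty e^{-sA}\,\mathrm{d}s$ does hold whenever $A$ is a positive real number, which is the case if the Euclidean squares in the denominators are taken as positive:

```python
import numpy as np
from scipy.integrate import quad

# Schwinger parametrisation sanity check: 1/A = ∫_0^∞ exp(-s*A) ds for A > 0.
# Here A stands for a positive Euclidean denominator such as kbar^2 + m^2.
for A in (0.3, 1.0, 4.7):
    integral, _ = quad(lambda s: np.exp(-s * A), 0.0, np.inf)
    print(A, integral, 1.0 / A)   # the integral reproduces 1/A
```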
So please help me understand what I'm doing wrong and if you guys can show how to satisfy the positivity condition. Thanks in advance. |
What light looks like from the surface
The sun in your world is half-bright, half-dark. Assume that it is a perfect sphere with any radius greater than zero. Therefore, for any point on the surface, the percentage of the sun that is facing that point at any given time is
$$\cos^2\phi\cos^22\theta + \frac{1}{2}\sin^2\phi.$$
Here are three plots of what that looks like at the equator:
At a point in the temperate zone at about 40 degrees latitude:
And at the poles:
How to measure latitude between two points
Without sunrise over a horizon or a pole star, it is much harder to measure latitude. The best you can do is to measure the brightness of the sun at various points in the day, and to compare with the trigonometric properties of its brightness from the last section.
To the best of my knowledge, there is no way to scientifically measure brightness until you have a photographic plate. That will not be available to people of Renaissance technology. However, you can at the very least dead-eye reckon brightness, so we will assume some sort of brightness metric. One potential way to measure brightness is to take it as proportional to the 'bright' portion of the sphere of the sun that is visible to you from your location. If you look at the sun through a dark lens, you may be able to measure this, depending on how large the radius of the sun is.
In this case, you can calculate latitude relative to the equator easily. Maximum brightness at latitude $\phi$ ($\max(B_\phi)$) is $1 - \tfrac{1}{2}\sin^2\phi$ times that at the equator ($\max(B_{eq})$); solving backwards for $\phi$:
$$\phi = \sin^{-1}\sqrt{2\left(1 - \frac{\max(B_\phi)}{\max(B_{eq})}\right)}.$$
It is possible that maximum brightness at the equator is a well known standard value in your scientific community, even for those who have never been there. You have to know, or at least be able to estimate, the maximum brightness to be able to calculate latitude. As an alternate measurement, note that equatorial max brightness is twice the constant brightness at the poles. You can re-work everything in terms of that value as $$\max(B_{eq}) = 2B_{pole}.$$
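Here is a small sketch of that calculation (the function and variable names are mine); it simply inverts the formula above:

```python
import numpy as np

# Latitude from measured maximum brightness, using
# max(B_phi) = (1 - sin(phi)^2 / 2) * max(B_eq).
def latitude_from_brightness(max_B_here, max_B_equator):
    ratio = max_B_here / max_B_equator
    return np.degrees(np.arcsin(np.sqrt(2.0 * (1.0 - ratio))))

# Example: a site whose maximum brightness is 0.8 of the equatorial standard.
print(latitude_from_brightness(0.8, 1.0))   # about 39.2 degrees latitude
```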
Now there are two ways to measure radius
Assuming the hollow world is a perfect sphere, there are two ways to measure the radius. You can compare the latitude of two points to the distance between them, or you can compare the time it takes the terminator to travel between two points.
The terminator line
The terminator is important here, because with no stars in the background, the only way to ensure that two points are at exactly the same longitude is to visually signal when the terminator passes by the points. This can be done using one lighthouse to signal another point within visual range of that lighthouse. Since your world is hollow and thus concave from the point of view of someone on the surface, the line of sight of a lighthouse is actually very long, limited only by atmospheric attenuation of light (due to water vapor, or whatever).
The terminator can be exactly identified by looking at the sun through a telescope with a dark lens. As soon as there is no bright patch visible, the terminator line has passed.
Comparing latitude method
The latitude method is to take two points that you know are at the same longitude, calculate their latitudes using the above method, and measure the distance ($d$) between them by whatever means you have.
If the latitude delta is $\alpha$ (in radians), then the polar radius of the hollow Earth is $$r_{pole} = d/\alpha.$$
Timing the terminator method
To time the terminator, you will need to get two points at the same latitude (confirmed using the methods above), and measure the distance ($d$) between them and the time it takes the terminator to pass between them ($t$). You also need to know the length of the day $t_{day}$.
Then, the equatorial radius of the hollow Earth is $$r_{eq} = \frac{d}{2\pi}\,\frac{t_{day}}{t}.$$ Note this only works if the points are less than $\pi$ radians apart on the surface, that is, in the same hemisphere.
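A sketch of both radius estimates with made-up numbers (the names are mine; the equatorial formula is the one above):

```python
import numpy as np

def polar_radius(distance, delta_latitude_deg):
    # r = d / alpha, with the latitude difference alpha in radians
    return distance / np.radians(delta_latitude_deg)

def equatorial_radius(distance, t_terminator, t_day):
    # the terminator sweeps the full circumference 2*pi*r in one day,
    # so d / t = 2*pi*r / t_day
    return distance * t_day / (2.0 * np.pi * t_terminator)

print(polar_radius(1000.0, 9.0))              # ~6366 km for a 9-degree latitude difference
print(equatorial_radius(1000.0, 0.6, 24.0))   # ~6366 km if the terminator takes 0.6 h of a 24 h day
```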
These numbers might not be the same!
These two methods could give you different answers, polar and equatorial radius. If you have a perfect sphere, the two calculated radii should be the same, but if your planet is inscribed within an oblate spheroid with an equatorial bulge (like our Earth), or even a polar bulge, then the two numbers will not be the same.
Conclusion
It is actually pretty difficult to time astronomical phenomena with only one available astronomical object. But, you can use the unique configuration of the sun's surface to do this.
This does require some photometric skills more advanced than what was available in the Renaissance, which I'm not sure how to replicate with Renaissance technology, but I have faith that the Galileos and Newtons of the world could figure it out. Also, there needs to be some sort of standard measurement of equatorial brightness to make these calculations, but given this number's importance in observational 'heliometry,' I would expect it to be a well established topic in a Hollow Earth's scientific community. |
Problem 343
Let $G$ be a finite group and let $N$ be a normal abelian subgroup of $G$.
Let $\Aut(N)$ be the group of automorphisms of $N$.
Suppose that the orders of groups $G/N$ and $\Aut(N)$ are relatively prime.
Then prove that $N$ is contained in the center of $G$.

Problem 332
Let $G=\GL(n, \R)$ be the
general linear group of degree $n$, that is, the group of all $n\times n$ invertible matrices. Consider the subset of $G$ defined by \[\SL(n, \R)=\{X\in \GL(n,\R) \mid \det(X)=1\}.\] Prove that $\SL(n, \R)$ is a subgroup of $G$. Furthermore, prove that $\SL(n,\R)$ is a normal subgroup of $G$. The subgroup $\SL(n,\R)$ is called the special linear group.

Problem 322
Let $\R=(\R, +)$ be the additive group of real numbers and let $\R^{\times}=(\R\setminus\{0\}, \cdot)$ be the multiplicative group of real numbers.
(a) Prove that the map $\exp:\R \to \R^{\times}$ defined by \[\exp(x)=e^x\] is an injective group homomorphism.
(b) Prove that the additive group $\R$ is isomorphic to the multiplicative group \[\R^{+}=\{x \in \R \mid x > 0\}.\] |
The cost of controlling weakly degenerate parabolic equations by boundary controls
1. Dipartimento di Matematica, Università di Roma "Tor Vergata", Via della Ricerca Scientifica, 00133 Roma, Italy
2. Institut de Mathématiques de Toulouse, UMR CNRS 5219, Université Paul Sabatier Toulouse III, 118 route de Narbonne, 31062 Toulouse Cedex 4, France
The paper studies the weakly degenerate heat equation $u_t - (x^{\alpha} u_x)_x = 0$, $x\in(0,1)$, $t \in (0,T)$, controlled from the degenerate boundary point $x=0$, and estimates the cost of controllability in time $T$ with $H^1$ boundary controls for $\alpha \in [0,1)$, in particular its behaviour as $\alpha \to 1^-$ and as $T \to 0^+$.
Next, thanks to the special structure of the eigenfunctions of the problem, we investigate and obtain (partial) results concerning the structure of the reachable states.
Our approach is based on the moment method developed by Fattorini and Russell [19,20].
Mathematics Subject Classification: 35K65, 93B05, 33C10, 30B10. Citation: Piermarco Cannarsa, Patrick Martinez, Judith Vancostenoble. The cost of controlling weakly degenerate parabolic equations by boundary controls. Mathematical Control & Related Fields, 2017, 7 (2) : 171-211. doi: 10.3934/mcrf.2017006
References:
[1]
F. Alabau-Boussouira, P. Cannarsa and G. Fragnelli,
Carleman estimates for degenerate parabolic operators with applications to null controllability,
[2]
F. Ammar Khodja, A. Benabdallah, M. González-Burgos and L. de Teresa,
The Kalman condition for the boundary controllability of coupled parabolic systems. Bounds on biorthogonal families to complex matrix exponentials,
[3] [4] [5] [6]
P. Cannarsa, G. Fragnelli and D. Rocchetti,
Controllability results for a class of one-dimensional degenerate parabolic problems in nondivergence form,
[7] [8] [9]
P. Cannarsa, P. Martinez and J. Vancostenoble, Global Carleman estimates for degenerate parabolic operators with applications
[10]
P. Cannarsa, P. Martinez and J. Vancostenoble, The cost of controlling degenerate parabolic equations by locally distributed controls, in preparation.
[11]
P. Cannarsa, J. Tort and M. Yamamoto,
Unique continuation and approximate controllability for a degenerate parabolic equation,
[12] [13] [14] [15]
S. Ervedoza and E. Zuazua,
Observability of heat processes by transmutation without geometric restrictions,
[16] [17] [18] [19]
H. O. Fattorini and D. L. Russell,
Exact controllability theorems for linear parabolic equations in one space dimension,
[20]
H. O. Fattorini and D. L. Russell,
Uniform bounds on biorthogonal functions for real exponentials with an application to the control theory of parabolic equations,
[21] [22] [23]
A. V. Fursikov and O. Yu. Imanuvilov,
[24]
O. Glass,
A complex-analytic approach to the problem of uniform controllability of transport equation in the vanishing viscosity limit,
[25] [26]
E. N. Güichal,
A lower bound of the norm of the control operator for the heat equation,
[27]
S. Hansen,
Bounds on functions biorthogonal to sets of complex exponentials; control of damped elastic systems,
[28]
E. Kamke, Differentialgleichungen: Lösungsmethoden und Lösungen. Band 1: Gewöhnliche Differentialgleichungen, Neunte Auflage. Mit einem Vorwort von Detlef Kamke. B. G. Teubner, Stuttgart, 1977.
[29]
V. Komornik,
[30]
V. Komornik and P. Loreti,
[31] [32] [33]
N. N. Lebedev,
[34] [35]
P. Lissy,
On the cost of fast controls for some families of dispersive or parabolic equations in one space dimension,
[36]
P. Lissy,
Explicit lower bounds for the cost of fast controls for some 1-D parabolic or dispersive equations, and a new lower bound concerning the uniform controllability of the 1-D transport-diffusion equation,
[37] [38]
P. Martin, L. Rosier and P. Rouchon,
Null controllability of one-dimensional parabolic equations using flatness,
[39]
P. Martin, L. Rosier and P. Rouchon,
On the reachable states for the boundary control of the heat equation,
[40] [41]
L. Miller,
Geometric bounds on the growth rate of null controllability cost for the heat equation in small time,
[42]
F. W. Olver,
[43]
C. K. Qu and R. Wong,
"Best possible" upper and lower bounds for the zeros of the Bessel function $ J_ν(x)$,
[44] [45]
L. Schwartz,
[46] [47] [48] [49] [50]
G. N. Watson,
[51]
R. M. Young,
|
Polarization Mixing Due to Feed Rotation

Explanation of Polarization Mixing
The newer 2.1-m antennas [Ants 1-8 and 12] have AzEl (azimuth-elevation) mounts (also referred to as AltAz; the terms Altitude and Elevation are used synonymously), which means that their crossed linear feeds have a constant angle relative to the horizon (the axis of rotation being at the zenith). The older 2.1-m antennas [Ants 9-11 and 13], and the 27-m antenna [Ant 14], have Equatorial mounts, which means that their crossed linear feeds have a constant angle with respect to the celestial equator, the axis of rotation being at the north celestial pole. Thus, the celestial coordinate system is tilted by the local co-latitude (complement of the latitude). This tilt results in a relative feed rotation between the 27-m antenna and the AzEl mounts, but not between the 27-m and the older equatorial mounts. This angle is called the "parallactic angle," and is given by:
where is the site latitude, is the Azimuth angle [0 north], and is the Elevation angle [0 on horizon]. This function obviously changes with position on the sky, and as we follow a celestial source (e.g. the Sun) across the sky this rotation angle is continuously changing in a surprisingly complex manner as shown in
Figure 1. Note that at zero hour angle the rotation angle is zero for declinations less than the local latitude (37.233 degrees at OVRO), but is 180 degrees at higher declinations.
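For reference, here is a short sketch using the standard parallactic-angle expression in terms of hour angle and declination; the sign convention may differ from the Az/El form used on this page, but it reproduces the 0-versus-180-degree behaviour at transit noted above.

```python
import numpy as np

lat = np.radians(37.233)   # OVRO latitude, from the text above

def parallactic_angle(ha_hours, dec_deg):
    # Standard expression: chi = atan2(sin H, tan(lat)*cos(dec) - sin(dec)*cos(H)).
    H = np.radians(np.asarray(ha_hours) * 15.0)
    d = np.radians(dec_deg)
    return np.degrees(np.arctan2(np.sin(H),
                                 np.tan(lat) * np.cos(d) - np.sin(d) * np.cos(H)))

ha = np.linspace(-6, 6, 13)
print(parallactic_angle(ha, 10.0))   # declination below the latitude: chi = 0 at transit
print(parallactic_angle(ha, 60.0))   # declination above the latitude: chi = +/-180 at transit
```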
The crossed linear dipole feeds on all antennas are oriented with the X-feed as shown in
Figure 2, at 45-degrees from the horizontal, when the antenna is pointed at 0 hour angle. This is the view as seen looking down at the feed from the dish side, although since the feeds are at the prime focus this is the same as the view projected onto the sky. At other positions, the feeds on the AzEl antennas experience a rotation by angle relative to the equatorial antennas.
Because of this rotation, the normal polarization products XX, XY, YX and YY on baselines with dissimilar antennas (one AzEl and the other equatorial) become mixed. The effect of this admixture can be written by the use of Jones matrices (see Hamaker, Bregman & Sault (1996) for a complete description). Consider antenna A whose feed orientation is rotated by , cross-correlated with antenna B with unrotated feed. The corresponding Jones matrices, acting on signal vector are:
and the cross-correlation is found by taking the outer product, i.e.
which relates the output polarization products to the input as
where we have dropped the subscripts and complex conjugate notation for brevity. Of course, there are other effects such as unequal gains and cross-talk between feeds that are also at play, but for now we ignore those and focus only on the effect of this polarization mixing due to the parallactic angle.
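A minimal numerical sketch of this mixing (the rotation sense of the Jones matrix is an assumption on my part): starting from an unpolarized input, rotating one antenna's feed by an angle $\chi$ produces cross-hand (XY, YX) terms of order $\sin\chi$.

```python
import numpy as np

chi = np.radians(30.0)
J_A = np.array([[ np.cos(chi), np.sin(chi)],
                [-np.sin(chi), np.cos(chi)]])   # rotated feed of antenna A (sign convention assumed)
J_B = np.eye(2)                                 # unrotated feed of antenna B

# Input coherency matrix [[XX, XY], [YX, YY]] for an unpolarized source.
C_in = np.eye(2)

# Output coherency on the baseline: J_A . C_in . J_B^H
C_out = J_A @ C_in @ J_B.conj().T
print(C_out)   # off-diagonal (XY, YX) terms are +/- sin(chi) = +/- 0.5
```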
Absolute vs. Relative Angle of Rotation
However, the above description fails when we consider a rotation on both antennas, so that
In this case, performing the outer product gives:
whereas intuitively we want something like:
which becomes the identity matrix when , i.e. when the feeds on two antennas of a baseline are parallel. The difference seems to be that the earlier expression evaluates to components of X and Y in an absolute coordinate frame, whereas we are interested only in the difference in angle of the feeds in a relative coordinate frame. This choice no doubt has implications for measuring Stokes Q and U, but for solar data we are not concerned with linear polarization.
One way to achieve this in the framework of Jones matrices is to form Mueller matrices from the outer-product of the rotation times the gain matrix:
and
then form an overall matrix
where .
Effect of an X - Y Delay
Regardless of how the math is done, we expect that the result should be dependent on the difference in angle, , so as a practical solution let us simply replace with and proceed as in section 1.
and the cross-correlation is found by taking the outer product, i.e.
which relates the output polarization products to the input as
Now consider that there is a "multi-band" delay on both antennas, and . Then (2) becomes:
The result agrees with our intuition:
This approach will be implemented, to see how well it does in correcting for the effects of differential feed rotation.
Another Look at X-Y Delays
Prior to doing the feed rotation correction, it is essential that any X-Y delays be measured and corrected. We have devised a calibration procedure in which we take data on a strong calibrator with the feeds parallel, then rotate the 27-m (antenna 14) feed so that they are perpendicular. For an unpolarized source, this results in signal on the XX and YY polarization channels in the first case, and on the XY and YX polarization channels in the second case. As a practical matter, this can be done on all antennas at once if a strong source is observed near 0 HA, ideally timed to start 20 min before 0 HA and completing 20 min after 0 HA. The source 2253+161 works well, as does 1229+006 (3C273). Two observations are needed:
* one with the 27-m feed unrotated (gives parallel-feed data for all dishes, if done near 0 HA). Gives strong signal in the XX and YY channels.
* one with the 27-m feed rotated to -90 degrees (gives crossed-feed data for all dishes, if done near 0 HA). Gives strong signal in the XY and YX channels.
Note that the feed should be rotated by -90, not 90, in order for the expressions below to be used correctly.
Background
Consider antenna-based phases on X polarization as and on Y polarization as , i.e. the Y phases are nominally the same as for X, except for a 90-degree rotation and a possible X-Y delay difference , here written as delay phase . We are finding that this delay is a complicated function of frequency, so it is just as well to keep it in terms of phase. On a baseline , then, the four polarization terms become:
We then examine the channel differences on baselines with antenna 14, i.e.
where . Consequently, we can solve redundantly in two ways for the antenna-based delay phases:
where we specifically use to emphasize that this quantity for all antennas should be the same value, because the measurements are all baselines with antenna 14. In practice, we can average the two measurements for each antenna for , and the 26 measurements for antenna 14 for , although care must be taken to do an appropriate average to take care of the phase ambiguity. One way to do this is form unit vectors and sum them, then find the phase of the summed vector.
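A small sketch of that averaging step (my own code, not part of the pipeline): summing unit phasors and taking the angle of the sum avoids the wrap-around problem of a naive arithmetic mean.

```python
import numpy as np

def average_phase(phases_rad):
    # Sum the unit vectors exp(i*phi) and return the phase of the sum.
    return np.angle(np.sum(np.exp(1j * np.asarray(phases_rad))))

# Measurements scattered around +/-180 degrees average to about 180 (mod 360),
# whereas a naive arithmetic mean would give roughly 0.
meas = np.radians([179.0, -179.0, 178.5, -178.0])
print(np.degrees(average_phase(meas)))
```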
Figure 1x shows the results for a measurement on 2017-07-02.

Applying the Measurements
Once we have these, we can apply corrections to each of the polarization channels, and then do the feed rotation correction. The corrections are done to data taken in a normal way, without rotating the 27-m feed. The application of the correction is:
where the third term has the opposite sign of because of ... something.
I tried applying the feed rotation correction for data taken on 2017-07-02, and it does seem to work.
Figures 2x and 3x show the amplitude and phase on all baselines with Ant 14, with light green for data before correction and black for after correction. For ants 1-8 and 12, the XX and YY amplitudes have increased a bit, while the XY and YX amplitudes are much reduced. The corresponding phases are slightly improved in XX and YY, and noise-like for XY and YX (less so on YX for some antennas). For the other antennas, no correction was made since those feeds are already parallel to Ant 14.
The proof of this scheme will be seen when we observe a calibrator for many hours while the parallactic angle changes over HA, and then see that the amplitude time profiles become steady and well behaved.
Ultimately, the X-Y delays will need to be measured periodically (especially if the correlator is rebooted or X and Y delays are changed for other reasons), and then stored as a new calibration type in the SQL database.
Effect of Polarization Mixing on Observations

See Powerpoint Presentation File:EOVSA Status Jan 2017.pptx
The main effect that is noticeable in observations is that strong signals on the crossed hands (XY and YX) will appear when feeds are misaligned. When feeds are properly aligned, we expect to see only weak signals in the crossed hands, nominally zero, but in practice non-zero due to slight cross-talk between X and Y, which can be due to non-orthogonality or simply coupling between the separate channels. Note that non-equal gains will not cause cross-talk, but can complicate efforts to untangle it.
To make the observations, we observe calibrator sources at different declinations over a broad range of hour angle. The two sources observed so far are 3C84, at declination 41 degrees, and 3C273, at declination 2 degrees. We then plot the observed amplitude and phase for each of the observed polarization products [XX, XY, YX, YY]. For this demonstration, we use the baseline Ant1-14, where Ant1 has the rotating feed and Ant14 has the non-rotating one (with respect to the celestial coordinate system).
Figure 3 shows the 3C84 observation and simulation. The upper-left panel is the observed amplitude of the four polarization products during an observation from 08:30-15:00 UT, and the upper-right panel is the corresponding phase. The lower panels are the simulation amplitude and phase, where the simulation assumed constant polarization products with Amp[XX, XY, YX, YY] = [0.15, 0, 0, 0.23], and Phase[XX, XY, YX, YY] = [3.1, 0, 0, 2.4] (radians). A noise level of 0.015 rms was added. It is clear that the amplitude simulation works very well, but the phase does not have the correct character--the only deviation from constant phase is an abrupt 180-degree phase jump in XY and YX at 0 hour angle. Such phase jumps are seen in the observed data, but in addition there is a large amount of phase rotation in the observations that is not in the simulation.
As a test, a simulation was done applying a phase rotation based on , as shown in
Figure 4. Applying a rotation by the parallactic angle itself proved to be too small, and did not show the symmetric behavior around 0 hour angle, so the phase rotation applied in Fig. 4 is . It now looks about right, but there is a curvature in the simulation phase that is not really seen in the data.
As a check, we repeated the exercise on 3C273, again applying a phase rotation of , with the result shown in
Figure 5. As before, the amplitudes match quite well. For this different source, however, the measured phase variation is not symmetric about 0 hour angle, so the simulated phases do not match the observed ones. Finally, we instead apply a phase correction without the absolute value, i.e. just , with the result in Figure 6. Clearly this is "better," but still does not match the phase variation precisely.

Other Possible Reasons for the Observed Phase Variations
It has been suggested that there may be some secular change in phase not related to feed rotation, perhaps a delay error due to a baseline error, or because the Az and El axes do not cross at a common point. However, baseline errors would seem to be unlikely, because exactly the same character in the phase variations occurs on
all of the AzEl antennas. And anyway a delay error is ruled out for another reason--the phase variation is not frequency dependent. Figures 7 & 8 illustrate these facts.
Based on these tests, I conclude that the observed phase variations are indeed due to the relative feed rotation, but that something is missing in the above mathematical analysis or its application. One possibility is that there is some subtlety in the complex-conjugation of the Jones matrices, since in the above analysis they are entirely real. --Dgary (talk) 11:50, 22 October 2016 (UTC)
More On Axis Offset
Dr. Avinash Deshpande (Raman Research Institute, Bangalore -- Thanks to Dr. Ananthakrishnan for contacting him) confirms that no phase rotation is expected for the parallactic correction, aside from the 180-degree phase jump at the meridian crossing. He suggests that a non-intersecting axis is more likely, and notes that my plots claiming no evidence of a delay is too hasty. It may be that the small range of frequencies in Figure 8 is too small to see an evident frequency dependence that may nevertheless be there. He notes that the effect of non-intersecting axes is a phase rotation of
where is the elevation angle, and is the offset distance. As a test, I applied this function, using cm (based on the apparent phase variation in the observed phases), and obtained the results in
Figures 9 and 10. Although the observed phases show a bit more curvature than the simulation, this can be due to residual baseline errors, so I think it is fair to say this is a promising result. We can prove this very shortly, since the feed rotator on the 27-m antenna is soon to be working (I hope). The prediction is that rotating the 27-m feed to keep it parallel to the 2.1-m feeds on these antennas will correct the amplitudes, but the phases will still show the same behavior (since they are due to a different cause), and also that using a wider range of frequencies (which we can do, especially now that the high-frequency receiver is available) will show a frequency dependence in the amount of phase variation. --Dgary (talk) 04:55, 8 November 2016 (UTC)

Further update
On 2016 Nov 13, new observations of 3C84 were taken, and the correction for the axis offset (d = 15.2 cm) were applied, as shown in
Figure 11 (at left). It appears that this correction works well, and that there is a residual baseline error on each of the antennas due to the fact that they were originally determined without the axis-offset correction. --Dgary (talk) 14:20, 15 November 2016 (UTC) |
The total roundoff error for the sum of $N$ numbers is:
$$S = \sum_{i=0}^{N-1} E_i$$
The roundoff error for the $i$-th number is represented by the random variable $E_i$. If we assume that the random number generator used by the computer yields numbers $X_i$ taken from a uniform distribution, then the difference between each $X_i$ and the nearest tenth (which is the roundoff error $E_i$) is uniformly distributed on the interval $(-\frac{0.1}{2}, \frac{0.1}{2}) = (-0.05, 0.05)$.
What we're concerned with, though, is the distribution of $S$. Since $S$ is the sum of $N$ independent, identically distributed (iid) random variables, then via the central limit theorem, as $N \to \infty$, $S$ will tend to a Gaussian distribution. If we assume that your case of $N=1000$ is "large enough" for the Gaussian assumption to hold, we can easily estimate the probability that you seek. It's certainly possible to exactly calculate the distribution of $S$, but the Gaussian assumption is likely close enough for most applications with such large $N$.
A Gaussian distribution is characterized by its first two moments, so if we can find those for $S$, then we have all the information we need. These are easy to calculate for a sum of iid random variables. The mean of $S$ is equal to:
$$\mathbb{E}(S) = \sum_{i=0}^{N-1} \mathbb{E}(E_i) = 0$$
The variance of $S$ is equal to:
$$\mathbb{E}\left((S - \mathbb{E}(S))^2\right) = \sum_{i=0}^{N-1} \mathbb{E}\left((E_i - \mathbb{E}(E_i))^2\right)$$
Recall that the random variables $E_i$ are distributed uniformly. It is well known that the uniform distribution over the interval $(a,b)$ has variance $\frac{1}{12}(b-a)^2$. For this case, that yields a variance $\sigma_{E_i}^2 = \frac{0.01}{12}$. Therefore, the variance of the total roundoff error $S$ is $\sigma_{S}^2 = \frac{0.01N}{12}$.
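A quick Monte-Carlo sketch (mine, not part of the derivation) confirms both the variance formula and the Gaussian estimate of $P(|S|>1)$ for $N=1000$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 1000, 10_000
E = rng.uniform(-0.05, 0.05, size=(trials, N))   # individual roundoff errors
S = E.sum(axis=1)                                # total roundoff error per trial

var_theory = 0.01 * N / 12.0                     # N * (b - a)^2 / 12 with b - a = 0.1
print(S.var(), var_theory)                       # empirical vs. theoretical variance
print(np.mean(np.abs(S) > 1.0))                  # empirical estimate of P(|S| > 1), about 0.27
```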
So in summary, we can approximate $S$'s distribution as Gaussian with mean zero and variance $\sigma_{S}^2 = \frac{0.01N}{12}$. Based on those parameters, you can easily calculate the estimated probability distribution function (pdf), then integrate that result to arrive at whatever probability you seek. The probability that there is a total roundoff error with magnitude greater than one would be:
$$\begin{align}P(|S| > 1) &= P(S>1 \lor S < -1) \\&= 1 - P(-1 < S < 1) \\&= 1 - \int_{-1}^{1}f_S(s)ds \end{align}$$
where $f_S(s)$ is the Gaussian distribution's pdf that we arrived at before. |
There are two ways to remember the sign convention:
If you're trading an
exchange-listed spread, then the convention is that going long on the spread A-B implies buying A and selling B. Vice versa, shorting the spread implies selling A and buying B.
If you're trading a
synthetically-constructed spread, then this means that you're trading the residual, i.e. the difference between the observed $y_t$ and the $\hat{y}_t$ predicted by your regression model.
The simplest example is a pair trade where you're regressing a series $y_t$ against another series $x_t$. You may assume that there exists a linear relationship between the series and a normally distributed error term $\epsilon_t \sim \mathcal{N}(0,\sigma^2)$ such that $\epsilon_t = y_t - \hat{y_t}= y_t -\beta x_t -\alpha $. $\alpha,\beta \in \mathbb{R}$ are parameters that you estimate from past data, e.g. with ordinary least squares.
Often, you'd also assume the intercept $\alpha$ vanishes, so the fit passes through the origin at $x_t=0$. Then "buying the spread" implies having positive delta to $\epsilon_t$, which means buying 1 unit of the product with series $y_t$ and selling $\beta$ units of the product with series $x_t$.
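A minimal sketch of that setup on synthetic data (all names and thresholds are mine): fit $\beta$ and $\alpha$ by OLS, form the residual, and trade it.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=500)) + 100.0             # predictor price series
y = 1.5 * x + 10.0 + rng.normal(scale=2.0, size=500)    # related series

beta, alpha = np.polyfit(x, y, 1)                        # slope and intercept
eps = y - (beta * x + alpha)                             # the spread / residual

# "Buying the spread": long 1 unit of y and short beta units of x when eps is low.
signal = np.where(eps < -2.0, 1, np.where(eps > 2.0, -1, 0))
print(beta, alpha, signal[-5:])
```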
You don't even need to remember what it means to "buy a spread" in this case, because the intuition behind your trade is simply that if the observed value $y_t$ is less than the predicted value $\beta x_t$, then you would buy the product with series $y_t$ and sell $\beta$ units of the product with series $x_t$, since the observed value and your prediction should eventually converge somewhere. You just need to remember which variable you used as the predictor $x_t$ and the dependent variable $y_t$ when fitting your model. |
This is sort of an odd question, I realize - but it has to do with random number generation. What I'd like to do is generate random numbers with a normal distribution; many functions (in Python) do this already as long as you give the function a mean and standard deviation. Note also that to generate uniform random variables you must specify a range, e.g. a random number between $i$ and $j$.
I'm in the situation where I need to generate both uniform random numbers in the range $[i,j]$ as well as normally distributed numbers in that range (or at least such that 98% of the random numbers fit into that range). Computing the mean of the uniform distribution is not difficult, so now I have a mean and a range - but I need the final parameter, the standard deviation in order to generate all my random numbers.
So my initial thought is that we can use the following to compute the standard deviation: $\sigma = \frac{j - \mu}{3}$ or, equivalently, $\sigma = \frac{\mu - i}{3}$. This seems to work fairly well; for a range $[50, 150]$, it gives me $\mu=100$ and $\sigma=16.666667$, which looks like this:
Though perhaps even this still gives too wide a curve for the distribution? I'd like some advice on whether this approach is close to correct, or if there is a more standard way to compute this - maybe even a way to generalize the computation for various confidence intervals/z-scores (I know I should probably have chosen 2.326 for 98%).
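To make the generalization concrete, here is roughly what I have in mind (just a sketch; the z-value 2.326 corresponds to two-sided 98% coverage):

```python
import numpy as np

def normal_in_range(i, j, z=2.326, size=10_000, rng=None):
    # Pick sigma so that +/- z standard deviations span the interval [i, j].
    if rng is None:
        rng = np.random.default_rng()
    mu = 0.5 * (i + j)
    sigma = (j - i) / (2.0 * z)
    return rng.normal(mu, sigma, size)

samples = normal_in_range(50, 150)   # about 98% of samples land in [50, 150]
print(np.mean((samples >= 50) & (samples <= 150)))
```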
Another generalization that would be very helpful is if I don't have a range but rather a width, (the width was 100 in the previous example) can I generalize the standard deviation for that width given any $\mu$? |
The following references are mostly specific to your question:
[1] Adrien-Marie Legendre, Éléments de Géométrie, avec des notes, Firmin Didot (Paris), 1794, xii + 334 pages.
A proof of the irrationality of ${\pi}^2$ by the use of continued fractions is given on p. 304.
[2] James Whitbread Lee Glaisher, On Lambert’s proof of the irrationality of $\pi,$ and on the irrationality of certain other quantities, pp. 12-16 in Notices and Abstracts of Miscellaneous Communications to the Sections, Report of the Forty-First Meeting of the British Association for the Advancement of Science (August 1871, Edinburgh), John Murray (London), 1872.
The paper is on “.pdf pages” 341-345 of this google books item. The top third of p. 14 discusses Legendre’s proof that ${\pi}^2$ is irrational.
[3] Charles Hermite, Extrait d'une lettre de Mr. Ch. Hermite à Mr. Borchardt [Extract of a letter of Mr. Ch. Hermite to Mr. Borchardt], Journal für die reine und angewandte Mathematik 76 (1873), pp. 342-344.
Hermite shows ${\pi}^2$ is irrational by a method that avoids the use of continued fractions --- a method that very soon afterwards led to his proof that $e$ is transcendental.
[4] Alfred Pringsheim, Ueber die ersten Beweise der Irrationalität von $e$ und $\pi$ [On the first proofs of the irrationality of $e$ and $\pi$], Sitzungsberichte der Mathematisch-Physikalischen Classe der K.B. Akademie der Wissenschaften zu München 28 (1898), 325-337.
If you can read German (I can’t), I believe this could be of use. Lambert’s proof that ${\pi}^2$ is irrational is mentioned on p. 326 (line 9).
[5] Sylvain Wachs, Contribution à l'étude de l'irrationalité de certains nombres [Contribution to the study of the irrationality of certain numbers], Bulletin des Sciences Mathématiques (2) 73 (1949), pp. 77-95.
From MR0033299 (11,418a) review by Jan Popkin: By considering a larger class of similar integrals the author intends to obtain more general results. He gives applications by showing in this manner the irrationality of such numbers as ${\pi}^2,$ $\log A$ $(A \neq 1),$ $e^A,$ where $A$ denotes a positive integer. [The paper contains some misprints and other mistakes; the most serious one at the end of § 6, where the quantity $M$ introduced depends on $n.$ In view of various papers giving generalizations of Niven's method it is perhaps of interest to remark that there exists a close connection between this method and the classical proofs for the irrationality of $\pi$ and ${\pi}^2$ of Lambert, Hermite and others. Take for instance the integral $[\cdots]$
[6] Yosikazu Iwamoto, A proof that ${\pi}^2$ is irrational, Journal of the Osaka Institute of Science and Technology. Part I: Mathematics and Physics 1 (1949), pp. 147-148.
[7] Ivan Morton Niven, Irrational Numbers, The Carus Mathematical Monographs #11, Mathematical Association of America, 1956, xii + 164 pages.
The Alternative proof of Corollary 2.6 on pp. 19-21 gives a proof that ${\pi}^2$ is irrational.
[8] Kustaa Aadolf Inkeri, The irrationality of ${\pi}^2$, Nordisk Matematisk Tidskrift 8 #1 (1960), pp. 11-16 and 63.
[9] John Douglas Dixon, $\pi$ is not algebraic of degree one or two, American Mathematical Monthly 69 #7 (August-September 1962), p. 636.
Regarding this result, see Proof of $\pi$ not being a quadratic irrational number.
[10] Theodor Estermann, A theorem implying the irrationality of ${\pi}^2$, Journal of the London Mathematical Society (1) 41 #3 (1966), 415-416.
[11] Jaroslav Hančl, A simple proof of the irrationality of ${\pi}^4$, American Mathematical Monthly 93 #5 (May 1986), pp. 374-375.
[12] Darrell Desbrow, On the irrationality of ${\pi}^2$, American Mathematical Monthly 97 #10 (December 1990), pp. 903-906.
[13] Michael David Spivak, Calculus, 3rd edition, Publish or Perish, 1994, xiv + 670 pages.
In Chapter 16, Theorem 1 (stated and proved on pp. 323-324) is the irrationality of ${\pi}^2.$ This same result probably appears in either or both earlier editions (1967, 1980), but I have not verified this.
[14] Miklós Laczkovich, On Lambert's proof of the irrationality of $\pi$, American Mathematical Monthly 104 #5 (May 1997), pp. 439-443.
Corollary 2 at the top of p. 441: “${\pi}^2$ is irrational.”
[15] Pierre Eymard and Jean-Pierre Lafon, The Number $\pi$, translated by Stephen Stewart Wilson, American Mathematical Society, 2004, x + 322 pages.
Section 4.2.3 on pp. 136-137 is titled “Niven’s proof of the irrationality of ${\pi}^2$”.
[16] Paul Joel Nahin, Dr. Euler’s Fabulous Formula, Princeton University Press, 2006, xxii + 380 pages.
Chapter 3: The Irrationality of ${\pi}^2$ (pp. 92-113) gives a very detailed presentation of the proof in Carl Ludwig Siegel’s book Transcendental Numbers.
[17] Li Zhou and Lubomir Markov, Recurrent proofs of the irrationality of certain trigonometric values, American Mathematical Monthly 117 #4 (April 2010), 360-362.
(2nd sentence of the paper) We also discuss applications of our technique to simpler irrationality proofs such as those for $\pi,$ ${\pi}^2,$ and certain values of exponential and hyperbolic functions.
[18] Timothy W. Jones, Discovering and Proving that $\pi$ is irrational, American Mathematical Monthly 117 #6 (June-July 2010), pp. 553-557.
Niven’s proof of the irrationality of ${\pi}^2$ is discussed on p. 556.
[19] Timothy W. Jones, The powers of $\pi$ are irrational, viXra:1102.0058, 19 October 2010, 17 pages.
[20] Jürgen Müller and Tom Müller, Niven’s irrationality method revisited, manuscript, undated, 3 pages.
(3rd paragraph of the paper, on p. 1) In this note we take a new look at the classic analytic irrationality proofs for ${\pi}^2$ and the integer powers of $e,$ showing that the required approximation polynomials are generated by one single integral expression. Our approach makes it obvious how the polynomials come into existence, why they have integer coefficients and that the irrationality proofs for ${\pi},$ ${\pi}^2$ and $e^k$ are only different special cases derived from the same general formula.
[21] lhf, Direct proof that $\pi$ is not constructible, Mathematics Stack Exchange, 30 January 2012.
This might also be of interest. |
I'll use the non-unitary Fourier transform (but this is not important, it's just a preference): $$X(\omega)=\int_{-\infty}^{\infty}x(t)e^{-i\omega t}dt\tag{1}$$ $$x(t)=\frac{1}{2\pi}\int_{-\infty}^{\infty}X(\omega)e^{i\omega t}d\omega\tag{2}$$ where (1) is the Fourier transform, and (2) is the inverse Fourier transform. Now if you formally take the ...
As I understand it, the normalization is because the Haar wavelet conserves the energy of the signal. That is, when you take a signal from one domain to another, you aren't supposed to add energy to it (although conceivably you might lose energy). The normalization is just a way to ensure that the energy of your Haar-transformed signal in the Haar domain has the ...
That really depends on your definition of "envelope" and what you need it for. The Hilbert transform calculates the "analytic" signal, i.e. it calculates a matching imaginary part to a real signal by shifting the phase by 90 degrees in the frequency domain. Its reputation for calculating the "envelope" comes mainly from communication technology. It works ...
The error lies in the assumption that if $g(t)$ is the Hilbert transform of $f(t)$, then the Hilbert transform of $f(-t)$ must be $g(-t)$. This is not the case.Let $f^-(t)=f(-t)$. Then we have$$g(t)=\mathcal{H}\{f\}(t)=\frac{1}{\pi}\text{p.v.}\int_{-\infty}^{\infty}\frac{f(\tau)}{t-\tau}d\tau\tag{1}$$and$$\begin{align}\mathcal{H}\{f^-\}(t)&=\frac{...
The most practical attempt that I am aware of is by Won and Berger (2005). They simultaneously recorded vocalizations at the mouth with a microphone and on the skull with a homemade vibrometer. They then estimated the relevant transfer functions with linear predictive coding and cepstral smoothing.
Like @sansuiso said, compressed sensing is a way of acquiring signals that happens to be efficient if the signals are sparse or compressible.Compressed Sensing is efficient because signals are multiplexed, hence the number of multiplexed samples (called measurements) is smaller than the number of samples required by Shannon-Nyquist where there are no ...
It is not impossible, but it is not going to be a walk in the park either. What you would be trying to do is to add to the voice signal those vibrations that are delivered to the ear via the bones and are not accessible to anyone else. But this is easier said than done in an accurate way. Sound propagation through a medium depends very much on its density. ...
I think it is kind'a similar to soft and hard thresholding used in wavelet de-noising. Have you come across this topic? pywt already has an in-built function for this purpose. Please take a closer look at this code and try to play with it:
import pywt
import matplotlib.pyplot as plt
import numpy as np
ts = [2, 56, 3, 22, 3, 4, 56, 7, 8, 9, 44, 23, 1, 4, 6, ...
The closest orthogonal transform I know of that might meet your needs is the Slant Transform. It's based on sawtooth(ish) waves, but some of the basis functions do resemble triangle waves:(source: Applied Fourier transform)It was developed for image coding/compression, but it seems like a reasonable first approach for the analysis of long-term linear ...
Transliterations of Ukrainian names have different renderings in English (and in other languages as well). You can find Kravchuk polynomials, and other papers like On Krawtchouk Transforms or Krawtchouk polynomials and Krawtchouk matrices. You can find as well Kravchuk orthogonal polynomials. As they form an orthogonal basis of polynomials (as well as many ...
There are two things here: sparsity and compressed sensing.Sparsity is a general hypothesis, just claiming that most of the energy of a signal is stored in a small number of coefficients in the good basis. This is quite intuitive, looking at Fourier transforms or wavelet transforms. It is true for probably any signal of interest (image, sound...) and ...
Correlation and convolution are basically the same operations. You can express the cross-correlation of two functions $f(t)$ and $g(t)$ by a convolution: $$R_{fg}(\tau)=f(\tau)\star g^*(-\tau)$$ where $\star$ denotes convolution, and $*$ denotes complex conjugation. If you evaluate the cross-correlation at $\tau=0$ you get the inner product of $f(t)$ and $...
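A quick numeric check of that identity for discrete sequences, using numpy's conventions:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.normal(size=8) + 1j * rng.normal(size=8)
g = rng.normal(size=8) + 1j * rng.normal(size=8)

# Cross-correlation equals convolution with the conjugated, time-reversed sequence.
corr = np.correlate(f, g, mode='full')
conv = np.convolve(f, np.conj(g[::-1]), mode='full')
print(np.allclose(corr, conv))   # True
```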
The problem is not sufficiently specified, because the range of admissible values of $n$ is missing. Here I make the assumption that we consider $n>0$. With this assumption we have$$X(z)=\sum_{n=1}^{\infty}x[n]z^{-n}=\sum_{n=1}^{\infty}\frac{z^{-n}}{n^2}\tag{1}$$And that's the point where we might get stuck, if we didn't have a list of mathematical ...
You don't need to split image into blocks. The DCT equation can be applied to the whole image. The block division has been chosen for JPEG standard partly because DCT was costly to compute in the past (but that's not the only reason).You can choose any size of block (including the single block, which is the image itself), then split image into the blocks ...
The Fourier transform gives you very fine resolution in the frequency domain, but during the transformation you lose all the information about when (for time signals) or where (for images) these frequencies occur in your input signal. The Gabor transform alleviates this problem by windowing the basis functions of the Fourier transform with a Gaussian ...
I would use a linear phase FIR Hilbert transformer, and use block processing, such as the overlap-add method. That means that you partition the input signal into contiguous non-overlapping blocks and compute the convolution of each block with your filter impulse response. The results are then overlapped and added. Overlap occurs because the result of the ...
The time vs frequency resolution is a well-known problem, and there are indeed approaches to overcome it. For audio signals, some of the commonly used techniques include: parametric methods ; adaptive resolution (analyze with various time/frequency configurations and patch the results together - Wen X. and M. Sandler, "Composite spectrogram using multiple ...
As mentioned in Batman's answer, the condition of the sequence being absolutely summable is only sufficient but not necessary. The Fourier transform can be extended to $\ell_2$ sequences, i.e. sequences for which$$\sum_{n=-\infty}^{\infty}|f[n]|^2<\infty$$is satisfied. A further generalization is possible if you allow distributions and their ...
This is achievable with two parallel all pass filters. The two all pass filters synthesize an odd ordered low pass filter whose pass band extends from -90º to +90º in the z-domain. (I will discuss this below).$$G_{lowpass}(z) = \frac{A_0(z)+z^{-1}A_1(z)}{2}$$The low pass filter is then rotated by +90º so that its pass band extends from 0º to 180º, ...
Your first solution using the properties of the Fourier transform is correct. Your second solution is wrong, because you forgot to include the unit step function. Your function $g(t)$ should be defined by$$g(t)=e^{-t}u(t)\tag{1}$$which gives for $g(2t-1)$$$g(2t-1)=e^{-(2t-1)}u(2t-1)=e^{-(2t-1)}u\left(t-\frac12\right)\tag{2}$$Consequently, the Fourier ...
It's all about structure. One early paper on this is A Unified Treatment of Discrete Fast Unitary Transforms, 1977: A set of recursive rules which generate unitary transforms with a fast algorithm (FUT) are presented. For each rule, simple relations give the number of elementary operations required by the fast algorithm. The common Fourier, Walsh-...
There is a problem in checking whether the homography is OK. The algorithm for checking correct homographies may interest someone, so I will write it down here: 1) Create a quadrilateral $ABDC$ with vertex coordinates (in homogeneous coordinates):$$\begin{eqnarray} A:& (-w/2,-h/2, 1.0) \\ B:& (w/2,-h/2, 1.0) \\ C:& (-w/2,h/2, 1.0) \\ D: &(...
Yes, many people have worked on time-frequency analysis. The approach of "slice my data into chunks, perform the FFT on each chunk" is a good idea. Applying a "window function" on each chunk, just before performing the FFT, helps avoid many artifacts. Allowing chunks to overlap also helps. After those tweaks, you end up with the Gabor transform, which seems ...
White noise implies no correlation between samples of the noise, even consecutive samples. Colored noise, therefore, implies that there is correlation of some sort between the noise samples, which in turn implies that we can take advantage of that correlation to get rid of some of the noise.Beyond that, there is not a lot that we can say about what it ...
First order B-splines are triangles, and there exist algorithms to represent an arbitrary signal as a sum of B-splines. As mentioned, these splines do not form an orthobasis, but this is not necessarily a terrible thing. A good place to start is the paper by Unser on efficient B-spline approximation. http://bigwww.epfl.ch/publications/unser9301.pdf
Because the left (top) edge of an image is unlikely to be a reflection of its right (bottom) edge, there are discontinuities all along the edges of an image when it is viewed as N-periodic. These discontinuities are represented in the frequency domain by high-frequency coefficients. From an image compression point of view, using the 2-D DFT as a transform ...
You are right that the (bilateral) Laplace transform can be interpreted as the Fourier transform of $e^{-\sigma t}f(t)$. However, I think that the significance of the Laplace transform only becomes clear when $s=\sigma+j\omega$ is viewed as a complex variable because then we can study the analytic properties of the system function. E.g., electrical networks ...
The plot is of $$\mid X\left(i\omega\right) \mid = \sqrt{\left(\frac{1}{a+j\omega}\right)\left(\frac{1}{a-j\omega}\right)} = \frac{1}{\sqrt{a^2 + \omega^2}}$$against $\omega$In particular $\omega$ can be equal to $-a$.This checks out with Wolfram alpha
Looks like you need a general explanation of the discrete wavelet transform (DWT). DWT breaks a signal down into subbands distributed evenly in a logarithmic frequency scale, each subband sampled at a rate proportional to the frequencies in that band. The traditional Fourier transformation has no time domain resolution at all, or when done using many short ...
In contrast to Jason R's answer I claim that the Hilbert transform is a phase shift by $-\pi/2$ for real-valued signals. By definition, a phase shifter shifts the phase of a sinusoidal signal by some given phase $\phi$:$$x(t)=\cos(\omega_0t)\quad\Longrightarrow\quad y(t)=\cos(\omega_0t+\phi)\tag{1}$$Since$$\cos(\omega_0t)=\frac{e^{j\omega_0t}+e^{-j\... |
My original GSoC project was about implementing native Julia solvers for solving boundary value problems (BVPs) that were determined from second order ordinary differential equations (ODEs). I started down the BVP path, built a shooting method to solve BVPs from initial value problems (IVPs), and then built the beginning of the mono-implicit Runge-Kutta (MIRK) method. Those solvers are in the BoundaryValueDiffEq.jl repository. Instead of trying to jump directly to the end point and talking about how to do every detail in MIRK, I went on to explore how those details naturally arise in second order ODEs. I implemented many solvers for dynamical IVPs. Although I didn't fully complete my original goal by the end of GSoC, I am almost there.
First, there is the idea of symplecticity, because the Lobatto (Lobatto IIIA-IIIB) MIRK tableaux are actually symplectic. Basically, symplecticity is another way of saying that first integrals (energy, angular momentum, etc.) are conserved, so symplectic integrators are specialized for solving second order ODEs that arise from dynamical systems requiring energy conservation. It is easier to see what symplecticity actually entails on dynamical IVPs. For instance, the Hamiltonian $\mathcal{H}$ and the angular momentum $L$ for the Kepler problem are
We can solve Hamilton's equations
to get the solution of the Kepler problem.
using OrdinaryDiffEq, ForwardDiff, LinearAlgebra
H(q,p) = norm(p)^2/2 - inv(norm(q))
L(q,p) = q[1]*p[2] - p[1]*q[2]
pdot(dp,p,q,params,t) = ForwardDiff.gradient!(dp, q->-H(q, p), q)
qdot(dq,p,q,params,t) = ForwardDiff.gradient!(dq, p-> H(q, p), p)
Then, we solve this problem using the Ruth3 symplectic integrator with appropriate initial conditions.
initial_position = [.4, 0]
initial_velocity = [0., 2.]
tspan = (0,20.)
prob = DynamicalODEProblem(pdot, qdot, initial_velocity, initial_position, tspan)
sol = solve(prob, Ruth3(), dt=1//50);
Finally, we analyze the solution by computing the first integrals and plotting them.
Note that a symplectic integrator doesn't mean exact conservation. The solutions of a symplectic integrator lie on a symplectic manifold, but they don't necessarily conserve the Hamiltonian (energy). The energy can fluctuate in a (quasi-)periodic manner, so the first integrals have small variations. In the above case, the energy varies by at most 6e-6, and it tends to come back. The variations also decrease as dt gets smaller. The angular momentum is conserved perfectly. More details are in this notebook.
Again, I explored adaptivity and dense output in the IVP world. I implemented several adaptive Runge-Kutta-Nyström (RKN) solvers. MIRK adaptivity and RKN adaptivity share one common theme, which is error estimation, and MIRK does it by using dense output. Calculating a Poincaré section is an example of a practical use of dense output. When plotting a Poincaré section, we usually need to use saveat or ContinuousCallback, and both of them need dense output in order to do well. Dense output is essentially a continuous solution of an ODE. saveat uses dense output to evaluate values at the specified time, so the ODE integration can still be adaptive (the integrator doesn't need to hit the exact saveat point). ContinuousCallback performs root-finding on the dense output to find when an event occurs. Thus, high order dense output is important for calculating accurate saveat and ContinuousCallback results. Here are two examples of plotting Poincaré sections.
The Duffing oscillator is a forced oscillator with nonlinear elasticity; it has the form
First, we need to write the ordinary differential equation with parameters.
using OrdinaryDiffEq, Plots; pgfplots()
function draw_duffing(Γ, α, β, δ, ω)
    function driven_pendulum(dv,v,x,p,t)
        Γ, α, β, δ, ω = p
        dv[1] = Γ*cos(ω*t) - β*x[1]^3 - α*x[1] - δ*v[1]
    end
    prob = SecondOrderODEProblem(driven_pendulum, [1.5], [0.], (5000., 35000.), (Γ, α, β, δ, ω))
    sol = solve(prob, DPRKN6(), saveat=2pi/prob.p[end])
    y1, x1 = [map(x->x[i], sol.u[end-2000:end]) for i in 1:2]
    scatter(x1, y1, markersize=0.8, leg=false, title="Poincaré surface of duffing oscillator",
            xlabel="\$x\$", ylabel="\$\\dot{x}\$", color=:black, xlims=(0.5,1.7))
end
draw_duffing(8, 1, 5, 0.02, 0.5)
Then, we need to get the solution at $\omega t \mod 2\pi=0$ to plot the Poincaré section, and we can achieve this by using saveat.
The driven pendulum is a periodically forced pendulum, which has the form
Again, we do the same thing as what we did above.
using OrdinaryDiffEq, Plots; pgfplots()
function draw_driven_pendulum(f₀,q,ω)
    function driven_pendulum(dv,v,x,p,t)
        f₀, q, ω, = p
        dv[1] = -sin(x[1]) - q*v[1] + f₀*cos(ω*t)
    end
    prob = SecondOrderODEProblem(driven_pendulum, [0.], [2pi], (0.,50000.), (f₀,q,ω))
    sol = solve(prob, DPRKN6(), saveat=2pi/prob.p[end])
    y1, x1 = [map(x->x[i], sol.u[500:end]) for i in 1:2]
    scatter(x1.%pi, y1, markersize=0.8, leg=false, title="Poincaré surface of driven pendulum",
            xlabel="\$\\theta\$", ylabel="\$\\dot{\\theta}\$", color=:black)
end
draw_driven_pendulum(1.12456789, 0.23456789, 0.7425755501794571)
The MIRK solver in BoundaryValueDiffEq doesn't have adaptivity and dense output yet, but with all the things that I have learned from IVPs, most of the pieces have been implemented or understood, and so we can expect this to be completed in the near future. Here is an example of using the BoundaryValueDiffEq package. In this example, we will solve the problem
using BoundaryValueDiffEq
const g = 9.81
L = 1.0
tspan = (0.0,pi/2)
function simplependulum(du,u,p,t)
    θ = u[1]
    dθ = u[2]
    du[1] = dθ
    du[2] = -(g/L)*sin(θ)
end
function bc1(residual, u, p, t)
    residual[1] = u[end÷2][1] + pi/2 # the solution at the middle of the time span should be -pi/2
    residual[2] = u[end][1] - pi/2   # the solution at the end of the time span should be pi/2
end
bvp1 = BVProblem(simplependulum, bc1, [pi/2,pi/2], tspan)
sol1 = solve(bvp1, GeneralMIRK4(), dt=0.05)
More details can be found here.
I would like to thank all my mentors, Chris Rackauckas, Jiahao Chen and Christoph Ortner, for their responsiveness and kind guidance. I especially thank my mentor Chris Rackauckas, who would answer my questions within five minutes of my asking on Slack. I would also like to thank the Julia community for managing the GSoC projects and JuliaCon 2017. |
If $p,q$ are distinct primes, it is true that the subset $\mathbb{Z} \times \mathbb{Z}$ is dense in $\mathbb{Z}_{p} \times \mathbb{Z}_{q}$. However, is it true that $\lbrace (x,x), x\in \mathbb{Z} \rbrace $ is dense in $\mathbb{Z}_{p} \times \mathbb{Z}_{q}$? I think I need to use this fact: must I prove or disprove that my set is dense in $\mathbb{Z} \times \mathbb{Z}$? (The topology here is the product topology, where each factor's topology is induced by the usual $p$-adic metric.)
Individually, $\mathbb{Z}$ is dense in $\mathbb{Z}_p$ and $\mathbb{Z}_q$ since that's how we get $p$- and $q$-adic numbers by completion.
It works for any number of primes, simultaneously. This is called
weak approximation:
Let $K$ be a field and $|\cdot|_1, \dots,|\cdot|_n$ be pairwise inequivalent nontrivial absolute values on $K$. Let $a_1, \dots, a_n \in K$ and $\epsilon_1, \dots, \epsilon_n \in \mathbb{R}^+$. Then there exists an $x \in K$ satisfying $|x - a_i|_i < \epsilon_i$ for all $i$ simultaneously. |
I was wondering whether the triangle inequality is valid for the $p$-norm defined like this:
$x\in\Bbb{R}^n$, for all $p\ge1$,$$\Vert{x}\Vert_p=\left(\sum_{i=1}^{n} \vert{x_i}\vert^p\right)^{1/p}.$$
And I found a good post on this: Why is every $p$-norm convex?
Months ago I started to learn measure theory, and I noticed that the Minkowski inequality is stated over a different space (a measure space) and with a different form of the norm (at least it is not the same form as that in $\Bbb{R}^n$); here it is:
Suppose we have a measure space $(\Omega,A,\mu)$, and we define the $r$-th norm as $$\Vert{X}\Vert_r= \left(E|X|^r\right)^{1/r}.$$ This $E$ only makes sense when we are talking about some measurable function $X$ over this measure space. So my question is: why can we use it to prove something not over this space (just in $\Bbb{R}^n$), since the ways of defining them are different, in my opinion?
This might seem stupid but it really confuses me. Thank you! |
Hey I am reading this paper Entropy Inequalities by Araki and Lieb.
I am trying to prove the following lemma: $$S^1+S^2\leq S^{12}+S^{23}$$ using the following lemmas:
$S^{123}+S^{2}\leq S^{12}+S^{23}$
For a pure density matrix $\rho^{12}$: $Tr^1\left[f(\rho^1)\right]=Tr^2\left[f(\rho^2)\right]$.
and:
There exists a pure density matrix $\rho^{12}$ and a $\rho^{1}$ such that $\rho^1=Tr^2\left[\rho^{12}\right]$.
So I tried the following:
Using the second lemma on the first one we can write:
$S^{123}+S^{1}\leq S^{12}+S^{13}$
Now subtracting the RHS of the new inequality from the LHS of the 1st lemma and the LHS of the new inequality from the RHS of the 1st lemma yields:
$S^{123}+S^2-S^{12}-S^{13}=S^{12}+S^{13}-S^{123}-S^2$
or:
$S^1+S^2 \leq 2(S^{12}-S^{123})+S^{13}+S^{23}$
Now we can use the theorem 1 from their paper $S^{123}\leq S^{12}+S^{23}$
which when plugging in yields:
$S^1+S^2\leq S^{23}-S^{13}+2\,\eta$
But this is not the inequality I wanted to prove. I get a different one. |
I'm trying to show the inequality $$\frac{\sin(x)}{x}>\cos(x)$$ for $0<x<\pi$ using the Mean Value Theorem, but I don't know how to start. I can show that $\sin(x)<x$, but I can't see how I can use it. I just need some help getting started.
This is not true in general.
For an interval of clear counterexamples, consider that for $x\in(\frac32\pi,2\pi)$ we have $$ \frac{\sin x}{x} < 0 < \cos x$$
Update after the question was amended to specify $0<x<\pi$:
The mean value theorem says that $\frac{\sin x}{x} = \sin'(\alpha)$ for some $\alpha\in(0,x)$. We have $\sin'(\alpha)=\cos\alpha$ so what you need to show is merely that $\cos \alpha > \cos x$. Hopefully you already know that the cosine decreases monotonically between $0$ and $\pi$...
Consider the function $f(x)=\sin(x)$. Fix a point $y$ between $0$ and $\pi$. By applying the mean value theorem to the points $x=0$ and $x=y$, you know that for some point $z$ between $0$ and $y$, $$ \cos(z)=f'(z)=\frac{f(y)-f(0)}{y-0}=\frac{\sin(y)}{y}. $$ Since $\cos(x)$ is a decreasing function on the interval $0$ to $\pi$ and $y>z$, it follows that $\cos(y)<\cos(z)$. Therefore, $$ \cos(y)<\cos(z)=\frac{\sin(y)}{y}. $$
Since $\cos$ is strictly decreasing on $[0,\pi]$ one has $${\sin x\over x}=\int_0^1\cos(t\>x)\>dt>\cos x\qquad(0<x\leq\pi)\ .$$
Wait, if you already know how to show $\sin(x) > x$, then you just need to observe that $\cos(x) \leq 1$ for all $x$ to get what you want. |
(a) If $AB=B$, then $B$ is the identity matrix. (b) If the coefficient matrix $A$ of the system $A\mathbf{x}=\mathbf{b}$ is invertible, then the system has infinitely many solutions. (c) If $A$ is invertible, then $ABA^{-1}=B$. (d) If $A$ is an idempotent nonsingular matrix, then $A$ must be the identity matrix. (e) If $x_1=0, x_2=0, x_3=1$ is a solution to a homogeneous system of linear equation, then the system has infinitely many solutions.
Let $A$ and $B$ be $3\times 3$ matrices and let $C=A-2B$. If\[A\begin{bmatrix}1 \\3 \\5\end{bmatrix}=B\begin{bmatrix}2 \\6 \\10\end{bmatrix},\] then is the matrix $C$ nonsingular? If so, prove it. Otherwise, explain why not.
(a) Suppose that a $3\times 3$ system of linear equations is inconsistent. Is the coefficient matrix of the system nonsingular?
(b) Suppose that a $3\times 3$ homogeneous system of linear equations has a solution $x_1=0, x_2=-3, x_3=5$. Is the coefficient matrix of the system nonsingular?
(c) Let $A$ be a $4\times 4$ matrix and let\[\mathbf{v}=\begin{bmatrix}1 \\2 \\3 \\4\end{bmatrix} \text{ and } \mathbf{w}=\begin{bmatrix}4 \\3 \\2 \\1\end{bmatrix}.\]Suppose that we have $A\mathbf{v}=A\mathbf{w}$. Is the matrix $A$ nonsingular?
Suppose that $B=\{\mathbf{v}_1, \mathbf{v}_2\}$ is a basis for $\R^2$. Let $S:=[\mathbf{v}_1, \mathbf{v}_2]$. Note that as the column vectors of $S$ are linearly independent, the matrix $S$ is invertible.
Prove that for each vector $\mathbf{v} \in V$, the vector $S^{-1}\mathbf{v}$ is the coordinate vector of $\mathbf{v}$ with respect to the basis $B$.
Prove that the matrix\[A=\begin{bmatrix}0 & 1\\-1& 0\end{bmatrix}\]is diagonalizable. Prove, however, that $A$ cannot be diagonalized by a real nonsingular matrix. That is, there is no real nonsingular matrix $S$ such that $S^{-1}AS$ is a diagonal matrix.
The following problems are Midterm 1 problems of Linear Algebra (Math 2568) at the Ohio State University in Autumn 2017. There were 9 problems that covered Chapter 1 of our textbook (Johnson, Riess, Arnold). The time limit was 55 minutes.
This post is Part 3 and contains Problems 7, 8, and 9. Check out Part 1 and Part 2 for the rest of the exam problems.
Problem 7. Let $A=\begin{bmatrix}-3 & -4\\8& 9\end{bmatrix}$ and $\mathbf{v}=\begin{bmatrix}-1 \\2\end{bmatrix}$.
(a) Calculate $A\mathbf{v}$ and find the number $\lambda$ such that $A\mathbf{v}=\lambda \mathbf{v}$.
(b) Without forming $A^3$, calculate the vector $A^3\mathbf{v}$.
Problem 8. Prove that if $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
Problem 9. Determine whether each of the following sentences is true or false.
(a) There is a $3\times 3$ homogeneous system that has exactly three solutions.
(b) If $A$ and $B$ are $n\times n$ symmetric matrices, then the sum $A+B$ is also symmetric.
(c) If $n$-dimensional vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ are linearly dependent, then the vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3, \mathbf{v}_4$ are also linearly dependent for any $n$-dimensional vector $\mathbf{v}_4$.
(d) If the coefficient matrix of a system of linear equations is singular, then the system is inconsistent.
An $n\times n$ matrix $A$ is called nonsingular if the only vector $\mathbf{x}\in \R^n$ satisfying the equation $A\mathbf{x}=\mathbf{0}$ is $\mathbf{x}=\mathbf{0}$. Using the definition of a nonsingular matrix, prove the following statements.
(a) If $A$ and $B$ are $n\times n$ nonsingular matrices, then the product $AB$ is also nonsingular.
(b) Let $A$ and $B$ be $n\times n$ matrices and suppose that the product $AB$ is nonsingular. Then:
The matrix $B$ is nonsingular.
The matrix $A$ is nonsingular. (You may use the fact that a nonsingular matrix is invertible.)
Let $A$ be an $n\times (n-1)$ matrix and let $\mathbf{b}$ be an $(n-1)$-dimensional vector. Then the product $A\mathbf{b}$ is an $n$-dimensional vector. Set the $n\times n$ matrix $B=[A_1, A_2, \dots, A_{n-1}, A\mathbf{b}]$, where $A_i$ is the $i$-th column vector of $A$.
Prove that $B$ is a singular matrix for any choice of $\mathbf{b}$.
For each of the following $3\times 3$ matrices $A$, determine whether $A$ is invertible and find the inverse $A^{-1}$ if it exists by computing the augmented matrix $[A|I]$, where $I$ is the $3\times 3$ identity matrix. |
So, I understand how the slice selection gradients work in MRI. The frequency offset introduced by the slice selection gradient at a location $z$ relative to the MRI isocenter is given by:
$$ \Delta f = \gamma z G_z $$
Putting some numbers to it, for a gradient of 5 mT/m and a slice thickness of 3 mm, we introduce an offset of $639.7$ Hz per slice.
Now, say I have an RF pulse which has a bandwidth of 2000 Hz and a center frequency of 10 kHz. Given the location of the spin along the slice direction, I want to know whether this spin will be excited by the RF pulse or not.
Is it possible to figure that out from this information?
I have a feeling it should be possible from the relation above but I am not sure how the bandwidth parameter will come into play here.
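For concreteness, here is a rough numeric sketch of how the bandwidth could enter (my own Python; the proton value $\gamma/2\pi \approx 42.58$ MHz/T and all variable names are illustrative assumptions, not from the question):

# Rough sketch: which z positions are excited by an RF pulse of a given
# center frequency and bandwidth, assuming Delta_f = gamma * G * z.
gamma = 42.58e6          # Hz/T, proton gyromagnetic ratio over 2*pi (assumed)
G = 5e-3                 # T/m, slice-selection gradient
f_center = 10e3          # Hz, RF pulse center frequency (offset from isocenter)
bw = 2e3                 # Hz, RF pulse bandwidth

hz_per_m = gamma * G     # roughly 213 kHz of offset per meter of z

z_lo = (f_center - bw / 2) / hz_per_m
z_hi = (f_center + bw / 2) / hz_per_m
print(f"excited slab: {z_lo*1e3:.1f} mm to {z_hi*1e3:.1f} mm "
      f"(thickness {(z_hi - z_lo)*1e3:.1f} mm)")

def is_excited(z):
    # A spin at position z resonates at gamma*G*z (relative to isocenter);
    # to first order it is excited if that offset lies within f_center +/- bw/2.
    return abs(gamma * G * z - f_center) <= bw / 2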
Is it a simple matter of checking whether the resonant frequency experienced by the spin falls within the RF center frequency $\pm$ bandwidth/2? |
Problem 15
Let $p_1(x), p_2(x), p_3(x), p_4(x)$ be (real) polynomials of degree at most $3$. Which (if any) of the following two conditions is sufficient for the conclusion that these polynomials are linearly dependent?
(a) At $1$ each of the polynomials has the value $0$. Namely $p_i(1)=0$ for $i=1,2,3,4$. (b) At $0$ each of the polynomials has the value $1$. Namely $p_i(0)=1$ for $i=1,2,3,4$.
(University of California, Berkeley)
Problem 12
Let $A$ be an $n \times n$ real matrix. Prove the following.
(a) The matrix $AA^{\trans}$ is a symmetric matrix. (b) The set of eigenvalues of $A$ and the set of eigenvalues of $A^{\trans}$ are equal. (c) The matrix $AA^{\trans}$ is non-negative definite.
(An $n\times n$ matrix $B$ is called
non-negative definite if for any $n$ dimensional vector $\mathbf{x}$, we have $\mathbf{x}^{\trans}B \mathbf{x} \geq 0$.)
(d) All the eigenvalues of $AA^{\trans}$ are non-negative.
Problem 11
An $n\times n$ matrix $A$ is called
nilpotent if $A^k=O$ for some positive integer $k$, where $O$ is the $n\times n$ zero matrix. Prove the following. (a) The matrix $A$ is nilpotent if and only if all the eigenvalues of $A$ are zero.
(b) The matrix $A$ is nilpotent if and only if $A^n=O$.
Problem 9
Let $A$ be an $n\times n$ matrix and let $\lambda_1, \dots, \lambda_n$ be its eigenvalues.
Show that (1) $$\det(A)=\prod_{i=1}^n \lambda_i$$ (2) $$\tr(A)=\sum_{i=1}^n \lambda_i$$
Here $\det(A)$ is the determinant of the matrix $A$ and $\tr(A)$ is the trace of the matrix $A$.
Namely, prove that (1) the determinant of $A$ is the product of its eigenvalues, and (2) the trace of $A$ is the sum of the eigenvalues.
Problem 5
Let $T : \mathbb{R}^n \to \mathbb{R}^m$ be a linear transformation.
Let $\mathbf{0}_n$ and $\mathbf{0}_m$ be zero vectors of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. Show that $T(\mathbf{0}_n)=\mathbf{0}_m$.
(The Ohio State University Linear Algebra Exam)
Problem 3
Let $H$ be a normal subgroup of a group $G$.
Then show that $N:=[H, G]$ is a subgroup of $H$ and $N \triangleleft G$.
Here $[H, G]$ is a subgroup of $G$ generated by commutators $[h,k]:=hkh^{-1}k^{-1}$.
In particular, the commutator subgroup $[G, G]$ is a normal subgroup of $G$. |
This article is cited in 5 scientific papers.
Thermophysical Properties of Materials
Phase transformations in water-aliphatic alcohol binary systems
E. A. Bazaev, A. R. Bazaev
Institute for Geothermal Problems, Dagestan Scientific Center, Russian Academy of Sciences, pr. Shamil'a 39-A, Makhachkala, 367030, Russia
Abstract: The parameters of liquid-vapor phase transitions $p_s$, $\rho_s$, $T_s$ and critical points $p_c$, $\rho_c$, $T_c$ were determined from the experimental data on the $p$, $\rho$, $T$, $x$-dependences of aqueous solutions of aliphatic alcohols (methanol, ethanol, $n$-propanol) that contain $0.2$, $0.5$, and $0.8$ mole fractions ($x$) of ethanol and correspond to single-phase (gas, liquid), two-phase, or subcritical areas. The dependence of the pressure of saturated vapor in solutions on the temperature and density was described by means of the expansion of the compressibility factor $Z=p/RT\rho_m$ in powers of the density and temperature along the coexistence curve away from the critical point. The temperature dependence of the density of solutions along the coexistence curve and inside the critical area was fitted using the power functions of parameters $\omega\sim\tau^{\beta_i}$, $\tau=(T-T_c)/T_c$ and $\omega=(\rho_{1,v}-\rho_c)/\rho_c$.
English version: High Temperature, 2013, 51:2, 224–230
UDC: 536.763: 536.764: 544.344.2
Received: 06.06.2012
Citation: E. A. Bazaev, A. R. Bazaev, "Phase transformations in water-aliphatic alcohol binary systems", TVT, 51:2 (2013), 253–260; High Temperature, 51:2 (2013), 224–230 |
Almost entirely irrelevant to the question
That the Middle-$\alpha$ Cantor set is closed is easy as it is the intersection of closed sets.
The rest will be a fairly broad outline, and there are some details to fill in. Let $C$ denote the Middle-$\alpha$ Cantor set, and let $x \in C$ be arbitrary. We need to show that for all $\epsilon > 0$ there is a $y \in C \cap ( x - \epsilon , x + \epsilon )$ distinct from $x$.
Note that there must be an $n$ such that the unique closed interval containing $x$ in the $n$th stage of the construction of $C$ is entirely contained within $( x - \epsilon , x + \epsilon )$.
(Remember how I said some details are missing? This is where they would go. One has to determine the lengths of the intervals at each stage of the construction, but it is not overly difficult.) Note that the endpoints of this interval will be elements of $C$, and (at least) one of them is distinct from $x$. Clearly each endpoint of $I$ is an endpoint of either $I_0$ or $I_1$.
Perhaps slightly relevant to the question
I think your problem might come down to notational issues. Perhaps a better way of attacking this problem is to determine the endpoints of the open middle-$\alpha$ interval removed from an arbitrary closed interval $[a,b]$. A relatively simple calculation shows that this open interval is $\left( a + \frac{(b-a)(1-\alpha)}{2} , a + \frac{(b-a)(1+\alpha)}{2}\right)$, meaning that the subintervals remaining are $\left[ a , a + \frac{(b-a)(1-\alpha)}{2} \right]$ and $\left[ a + \frac{(b-a)(1+\alpha)}{2} , b \right]$. From here the result you are looking for is easy.
As it stands, your functions $T_0$ and $T_1$ seem to really mix up the intervals, and it will make it quite difficult to find for each interval remaining in the $(n+1)$st stage which interval from the $n$th stage generated it. (You would have to play around with how these interact, and you could come up with a formula, but it won't be pretty.) |
I'm a novice to homotopical algebra, but I've found myself confronted with it by necessity and have some basic questions...
I'm going to consider chain complexes over a field $F := \mathbb{F}_2$. Given a chain complex $C$, I'm interested in two operations:
the ``homotopy Sym'', where I form $(C \otimes C \otimes E (\mathbb{Z}/2))_{\mathbb{Z}/2}$ where the $\mathbb{Z}/2$ acts diagonally on the tensor product. I'll call this $hSym^2 C$. if $C$ has a $\mathbb{Z}$-action, then I can form ``homotopy quotient'' $C/\mathbb{Z}$, which is $(C \otimes E \mathbb{Z})_{\mathbb{Z}}$, with $\mathbb{Z}$ acting diagonally. I don't know if there is "official" notation for this; I'll just call it $hC/\mathbb{Z}$.
(Edit: I thought I wrote this down but must have deleted it accidentally; $E G$ is a projective $F[G]$-resolution of the complex $F$ in degree $0$, which is supposed to represent a point; thus $EG$ is morally to be the chain complex of some contractible space on which $G$ acts freely.)
So my question is about how the composition of these two operations in either order are related. If $C$ has a $\mathbb{Z}$-action, then I think $hSym^2 C$ still has a $\mathbb{Z}$-action, so I could form
$$ h( hSym^2 C )/\mathbb{Z} $$
or I could do things in the opposite order:
$$ hSym^2 (hC/\mathbb{Z}). $$
Based on naive intuition about how ordinary quotients work, I guess that there should be an induced map
$$ h( hSym^2 C )/\mathbb{Z} \rightarrow hSym^2 (hC/\mathbb{Z}) $$
Is this right? And if it is, then is the above map an (edit:quasi-)isomorphism? (I guess probably not in general) How can I understand this map explicitly? For instance, if I choose an explicit model for $E \mathbb{Z}/2$ and $E \mathbb{Z}$, like the standard ones that spit out $\mathbb{RP}^{\infty}$ and $S^1$, then I should in principle be able to write it down explicitly, but I'm confused about how that goes. |
One of my friends asked me if I could solve a mathematics problem for him. It looks like this: $$\frac1{a^2 +2} + \frac1{b^2 +2} + \frac1{c^2 +2} \le \frac{\sqrt2}{2}\frac{\sqrt a+\sqrt b+\sqrt c}{\sqrt{abc}}$$ I think it looks like an inequality between means; I hope that helps. And sorry for my bad English, by the way. :)
Without loss of generality, we assume that $0 < a \leq b \leq c.$ Then, by AM-GM we have, $$ \frac{a^2 + (\sqrt{2})^2}{2} \geq a\sqrt{2}, $$ and so $$ \frac{1}{a^2 + 2} + \frac{1}{b^2+2} + \frac{1}{c^2+2} \leq \frac{1}{2\sqrt{2}}\left( \frac{1}{a} + \frac{1}{b} + \frac{1}{c} \right). $$ Now, $ \frac{1}{\sqrt{c}} \leq \frac{1}{\sqrt{b}} \leq \frac{1}{\sqrt{a}}, $ so using the rearrangement inequalities, and adding, we obtain the desired result.
The starting inequality is obviously wrong! Try $a=0$ and $b=c=1$.
@michael Rozenberg: Let's bury this one for good by presenting a point with all positive $a,b,c$ such that the inequality is violated.
Take $$ a = 25\\ b=25\\ c= \frac{59}{81} \\ \frac{1}{a^2+2}+\frac{1}{b^2+2}+\frac{1}{c^2+2} = \frac{4146953}{10410081} > 0.3983 \\ \frac{\sqrt{2}}{2}\frac{\sqrt{a}+\sqrt{b}+\sqrt{c}}{\sqrt{abc}}= \frac{59+90\sqrt{59}}{2950}\sqrt{2} < 0.3597 \\\mbox{LHS } > 0.39 > 0.36 > \mbox{ RHS} $$ which violates the inequality. |
I have a matrix expression which, when expanded with the Series[] command, returns
$$\left( \begin{array}{cc} 1-\frac{1}{2}t^2+\frac{1}{8}t^4-O[t^5] & t-\frac{1}{4}t^3+O[t^5] \\ -t+\frac{1}{4}t^3+O[t^5] & 1-\frac{1}{2}t^2+\frac{1}{8}t^4-O[t^5] \end{array} \right)$$
I would like to use the SeriesCoefficient[] command on this matrix to return the element-wise series coefficients in matrix form - i.e.:
$$n=0~\rightarrow~ \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right)$$
$$n=1~\rightarrow~ \left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right)$$
$$n=2~\rightarrow~ \left(\begin{array}{cc} -\frac{1}{2} & 0 \\ 0 & -\frac{1}{2} \end{array}\right)$$
and so on. How would I do this? Simply running SeriesCoefficient[Matrix[t],n] doesn't give me anything. |
Let $P(i, j)$ be the number of paths from the start node, $(0,0)$, to the destination node, $(i, j)$. Then it's easy to see that$$\begin{align}P(i, 0) &= 1\\P(0, j) &= 1\\P(i, j) &= P(i-1,j)+P(i-1,j-1)+P(i, j-1), \text{ if }i,j>0\end{align}$$Applying this recursion directly isn't particularly efficient, since you'll wind up making a lot of redundant calls. You could improve this by memoizing the intermediate values: at step $k$, compute and save the values $P(0, k), P(1, k), \dotsc P(k,k)$ and the values $P(k, 0), P(k, 1), \dotsc P(k,k)$. Note that this last series actually doesn't need to be saved at all, since $P(a,b)=P(b,a)$.
For another way (which turns out to be conceptually the same as Yuval's answer), you could observe that a path consists of a collection of steps:
An eastward move, $E$, from node $(i,j)$ to node $(i+1, j)$. A southward move, $S$, from node $(i,j)$ to node $(i, j+1)$. A diagonal move, $D$, from node $(i,j)$ to node $(i+1, j+1)$.
So to get from $(0,0)$ to $(i,j)$ we will need to make $i$ moves east and $j$ moves south. Note that each diagonal move will contribute 1 to the eastward moves and 1 to the southward moves. Thus, every path from the origin to node $(i,j)$ can be uniquely described as a sequence of $E, S, D$ where the number of $E$s plus the number of $D$s equals $i$ and the number of $S$s plus the number of $D$s equals $j$. For example, $P(2,2)$ is the number of paths
Using no $D$s: $EESS, ESES, ESSE, SEES, SESE, SSEE$. Using one $D$: $DES, DSE, EDS, ESD, SDE, SED$. Using two $D$s: $DD$
So we see that $P(2,2)=6+6+1=13$.
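Here is a small Python sketch of the memoized recursion described earlier (the function name and the use of lru_cache are my own choices), checked against this hand count:

from functools import lru_cache

@lru_cache(maxsize=None)
def paths(i, j):
    # Number of lattice paths from (0,0) to (i,j) using east, south and
    # diagonal steps, via the recursion P(i,j) = P(i-1,j) + P(i-1,j-1) + P(i,j-1).
    if i == 0 or j == 0:
        return 1
    return paths(i - 1, j) + paths(i - 1, j - 1) + paths(i, j - 1)

assert paths(2, 2) == 13   # matches the enumeration above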
Now the number of sequences of three symbols $D,E,S$, in order, with $a$ of the $D$s, $b$ of the $E$s, $c$ of the $S$s, is given by the multinomial coefficient$$\binom{a+b+c}{a,\ b,\ c}=\frac{(a+b+c)!}{a!\;b!\;c!}$$and from this and the observations we just made, we'll have, for $i\le j$,$$P(i, j)=\sum_{k=0}^i\binom{i+j-k}{k,\ i-k,\ j-k}=\sum_{k=0}^i\frac{(i+j-k)!}{k!\;(i-k)!\;\ (j-k)!}$$where $k$ is the number of $D$ moves, $i-k$ is the number of $E$ moves and $j-k$ is the number of $S$ moves. This sum of factorials is also somewhat computationally expensive, but by recognizing that there are some relations among the terms, you can slightly simplify the number of multiplications and divisions involved. Unfortunately, there doesn't appear to be any nice closed form for this sum, unlike the situation where we don't allow diagonal moves, in which case $P(i,j)=\binom{i+j}{i}$. |
In Preskill's quantum computing notes Chapter 7 approximate page 82, he shows that a Pauli channel has capacity $Q \geq 1-H(p_I,p_X,p_Y,p_Z)$ where $H$ is Shannon entropy and $p_I, p_X, p_Y, p_Z$ are the probabilities of the channel acting like the appropriate Pauli matrix. In particular this gives us the 'hashing bound' or 'random coding bound' for the quantum capacity of the depolarizing channel $Q(p) \geq 1-H(p,1-p)-p\log_23$.
He then describes work of Shor and Smolin [1]: if you take an $m$-repetition code and concatenate it with a suitable random code, you can do better than the hashing bound. The argument for this is that, after taking the $m-1$ measurements, the inner repetition code thought of as a superchannel is a Pauli channel with entropy $H_i$. Then, averaging over the $2^{m-1}$ possible classical measurement outcomes, you can find the average entropy of the superchannel, $\langle H \rangle$.
[1] P.W. Shor and J.A. Smolin, “Quantum Error-Correcting Codes Need Not Completely Reveal the Error Syndrome,” quant-ph/9604006; D.P. DiVincenzo, P.W. Shor, and J.A. Smolin, “Quantum Channel Capacity of Very Noisy Channels,” quant-ph/9706061.
Then by random coding on this new channel you can achieve a rate $R=\frac{1-\langle H \rangle}{m}$ (dividing by $m$ to get this rate in bits/original channel use).
I don't see how random coding works. You have a random code which is optimal for each particular channel but how do you decide which one to use? By the time you know the classical measurements for your channel you have already sent the codeword.
So two questions:
1) If you have an ensemble of Pauli channels with average entropy $\langle H\rangle$, can you by using random coding achieve a rate $1-\langle H \rangle$?
2) If you can't do this, am I misinterpreting the results of Shor and Smolin or Preskill's exposition? |
I am trying to understand the implementation of Extended Kalman Filter for SLAM using a single, agile RGB camera.
The vector describing the camera pose is $$ \begin{pmatrix} r^W \\ q^W \\ V^W \\ \omega^R \\ a^W \\ \alpha^R \end{pmatrix} $$
where:
$r^W$ : 3D coordinates of camera w.r.t world $q^W$ : unit quaternion describing camera pose w.r.t world $V^W$ : linear velocity along three coordinate frames, w.r.t world $\omega$ : angular velocity w.r.t body frame of camera
The feature vector set is described as $$ \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix} $$ where, each feature point is described using XYZ parameters.
For the EKF acting under an unknown linear and angular acceleration $[A^W,\psi^R] $ , the process model used for predicting the next state is:
$$ \begin{pmatrix} r^W + V^W\Delta t + \frac{1}{2}\bigl(a^W + A^W\bigr)\Delta t^2 \\ q^W \bigotimes q^W\bigl(\omega^R\Delta t + \frac{1}{2}\bigl(\alpha^R + \psi^R\bigr)\Delta t^2\bigr) \\ V^W + \bigl(a^W + A^W\bigr)\Delta t\\ \omega^R + \bigl(\alpha^R + \psi^R\bigr)\Delta t \\ a^W + A^W \\ \alpha^R + \psi^R \end{pmatrix} $$
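For illustration, here is a minimal Python sketch of just the translational rows of this process model (my own simplification: the quaternion/orientation update and the covariance propagation are omitted, and all names are made up for this example):

import numpy as np

def predict_translation(r, v, a, A_unknown, dt):
    # r, v, a: current position, velocity and acceleration estimates (3-vectors)
    # A_unknown: the unknown acceleration A^W, treated as a process-noise sample
    r_new = r + v * dt + 0.5 * (a + A_unknown) * dt**2
    v_new = v + (a + A_unknown) * dt
    a_new = a + A_unknown
    return r_new, v_new, a_new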
So far, I'm clear on the EKF steps. After this prediction step, however, I'm not clear how to perform the measurement update of the system state.
From this slide, I was under the impression that we need to initialize random depth particles between 0.5m to 5m from the camera. But, at this point, both the camera pose and the feature depth is unknown.
I can understand running a particle filter for estimating feature depth if the camera pose is known. I tried to implement such a concept in this project, where I read the camera pose from a ground truth file and keep triangulating the depth of features w.r.t. the world reference frame.
I can also comprehend running a particle filter for estimating the camera pose if feature depths are known.
But both these parameters are unknown. How do I perform the measurement update?
I can understand narrowing down the active search region for feature matching based on the predicted next state of the camera. But after the features are matched using RANSAC (or any other algorithm), how do I find the updated camera pose? We are not estimating homography, are we?
If you have any idea regarding MonoSLAM (or RGB-D SLAM), please help me out with understanding the EKF steps.
To be more specific: is there a homography estimation step in the algorithm? How do we project the epipolar line (inverse depth OR XYZ) in the next frame if we do not have any estimate of the camera motion? |
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ... |
From the Wikipedia article on the Discrete Fourier Transform:
The sequence of $N$ complex numbers $x_0, ..., x_{N−1}$ is transformed into an $N$-periodic sequence of complex numbers according to the DFT formula:
$$ X_k=\sum_{n=0}^{N-1} x_n e^{-2\pi ikn/N}.$$
What you have done is taken $N=10$ integers, $n=\{1,2,...,N\}$, and turned them into $10$ time samples $t=\{T_s,2T_s,...,NT_s\}$, or $t_n=nT_s$. (To clarify my notation change, this second set is what you called $n$.)
Then you created the signal $x_n=\text{square}(2\pi ft_n)$ with frequency $f=(NT_s)^{-1}$ and fed it into MATLAB's FFT algorithm, attempting to take the DFT of it (as the FFT is just a fast DFT). However, this is NOT what the FFT expects to see!
Your signal starts with $x_1$ and ends with $x_N$. However, if you see the definition of the DFT I gave above, it expects the signal to start with $x_0$ and end with $x_{N-1}$. Ordinarily, this would not be much of a problem, because if your signal $x$ is actually $N$-periodic, then $x_0=x_N$. However, as noted in the comments by Jim Clay and Jason R above, the signal you start with is not actually a square wave. As you can see in your screenshot, there are six "1" values and only four "-1" values. The square wave should have an equal number of "1"s and "-1"s. I do not know why the values you put in are not a proper square wave, and I suspect there is some odd detail in how MATLAB implemented the function $\text{square}$. To create a square wave, you should change the line
n = 0.000001:Ts:t; %Generating Samples
to
n = 0:Ts:t-Ts; %Generating Samples
or, even better, to
N=t/Ts;
n=(0:N-1)*Ts;
which makes it clear that you are sampling at integer multiples of your sampling time. The signal $x$ you generate in this way will be equivalent to what Jim Clay has generated in his answer.
As to why your signal has magnitude $\approx 1.2$ instead of $1$, you need to remember how the square wave is defined. From the Wikipedia article on the square wave:
$$x_{\mathrm{square}}(t) =\frac{4}{\pi}\left (\sin(2\pi ft) + {1\over3}\sin(6\pi ft) + {1\over5}\sin(10\pi ft) + \cdots\right ).$$
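As a quick numeric sanity check of that leading coefficient (my own NumPy sketch, separate from the MATLAB code above): the single-sided DFT amplitude of a proper $\pm 1$ square wave at its fundamental comes out close to $4/\pi \approx 1.27$.

import numpy as np

N = 1024
n = np.arange(N)
x = np.where(n < N // 2, 1.0, -1.0)   # one period of a proper +/-1 square wave

X = np.fft.fft(x)
fundamental_amplitude = 2 * np.abs(X[1]) / N   # single-sided amplitude of bin 1
print(fundamental_amplitude, 4 / np.pi)        # ~1.2732 vs 1.2732...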
The first term of this function has frequency $f$ and magnitude $\frac{4}{\pi}\approx 1.27$. If you look at Jim Clay's plot this is exactly the magnitude in bin 2 of the function he has plotted. Up to bin $N/2+1$, the value that will be plotted in bin $k$ is the coefficient of the term in the square wave with frequency $(k-1)f$. (The $-1$ comes from MATLAB indexing beginning with one rather than zero). |
I have found the proof of the Closed Graph Theorem to be very instructive.
Proof of the Theorem from Planet Math.
Let $T\colon X\to Y$ be a linear mapping. Denote its graph by $G(T)$, and let $p_1\colon X\times Y\to X$ and $p_2\colon X\times Y\to Y$ be the projections onto $X$ and $Y$, respectively. We remark that these projections are continuous, by definition of the product of Banach spaces.
If $T$ is bounded, then given a sequence $\{(x_i, Tx_i)\}$ in $G(T)$ which converges to $(x,y)\in X\times Y$, we have that $$x_i = p_1(x_i,Tx_i) \xrightarrow[i\to\infty]{} p_1(x,y) = x$$and $$Tx_i = p_2(x_i,Tx_i) \xrightarrow[i\to\infty]{} p_2(x,y) = y,$$by continuity of the projections. But then, since $T$ is continuous,$$Tx = \lim_{i\to\infty} Tx_i = y.$$Thus $(x,y) = (x,Tx)\in G(T)$, proving that $G(T)$ is closed.
Now suppose $G(T)$ is closed. We remark that $G(T)$ is a vector subspace of $X\times Y$, and being closed, it is a Banach space. Consider the operator $\tilde T:X\to G(T)$ defined by $\tilde Tx = (x,Tx)$. It is clear that $\tilde T$ is a bijection, its inverse being $p_1|_{G(T)}$, the restriction of $p_1$ to $G(T)$. Since $p_1$ is continuous on $X\times Y$, the restriction is continuous as well; and since it is also surjective, the open mapping theorem implies that $p_1|_{G(T)}$ is an open mapping, so its inverse must be continuous. That is, $\tilde T$ is continuous, and consequently $T = p_2\circ\tilde T$ is continuous. |
Let $P$ be a finite p-group, $A\vartriangleleft P$, with $|A|= p$. Show that $A \subseteq Z(P)$
I have to show that if $x \in A \Rightarrow x \in Z(P)$.
My attempt:
Since $A$ is a normal subgroup of $P$, for all $p \in P$ we have $pA = Ap$.
If $x \in A$, and $A\vartriangleleft P$ then by the definition of normal subgroup,
$$ pxp^{-1} \in A, \forall p \in P. $$
We also have that $|A| = p$, and since $P$ is a $p$-group, the order of each element of $P$ is a power of $p$.
$$Z(P) = \{z \in P ∣ \forall p \in P, zp = pz\}$$
I know that the center is a normal subgroup of P.
Since $|A| = p$, $A$ is also a finite abelian $p$-group. We must show that $A$ is contained in $Z(P)$ (or equal to it).
If $x \in A$ then $x \in P$. Now pick an arbitrary $p \in P$; then $pxp^{-1} \in A$, and we must show that $px = xp$.
Using these definitions I cannot complete the proof. I do not see where I should use the fact that $P$ is a $p$-group and that the order of $A$ is $p$. Any ideas? An alternative definition? An alternative approach by contradiction? |
Both these limits tend to infinity but it is obvious to say that $$\lim_{x \to 0}\frac{2}{x} \gt \lim_{x \to 0}\frac{1}{x}$$
since at any point it is true; if not, what about these ones: $$\lim_{x \to 0}\frac{1}{x^2} \gt \lim_{x \to 0}\frac{1}{x}$$ $$\lim_{x \to 0}\frac{1}{x^3} \gt \lim_{x \to 0}\frac{1}{x}$$ If all infinities were equal then all the graphs would have to intersect at a point, which is never true.
As already noted, $\infty$ is used in limits in a symbolic manner to express that the function becomes larger than any fixed bound $M$ as $x\to 0$ (or smaller, for the case $-\infty$).
What we can do to compare two different functions is to consider their ratio for the same value of $x$ and take the limit of that ratio; for example, with
$f(x)=\frac1{x^2}$ $g(x)=\frac1{x^4}$
then
$$\lim_{x \to 0}\frac{f(x)}{g(x)}=\lim_{x \to 0}\frac{\frac1{x^2}}{\frac1{x^4}}=\lim_{x \to 0} \frac{x^4}{x^2}=\lim_{x \to 0} x^2=0$$
then we say that $g(x)$ tends to $\infty$ faster than $f(x)$ for $x \to 0$.
Depending on your concept of infinity, yes, one infinity can be greater than another. However, when taking limits of real-valued functions, there really is only one infinity. That infinity means "The function grows beyond any finite bound", and that's it. There is, in this respect, no notion of how
fast it grows beyond any finite bound; they all reach the same $\infty$ in the end.
One case where infinities have different sizes are when they measure the size of sets. The infinity which describes the number of elements in the set of natural numbers is strictly smaller than the infinity which describes the number of elements in the set of the real numbers.
Infinities can be compared by calculating $$\lim_{x \to a} \frac{f\left(x\right)}{x}, f\left(a\right) = \infty$$
If that is also $\infty$, then calculate $$\lim_{x \to a} \frac{f\left(x\right)}{x^2}, f\left(a\right) = \infty$$ and so on; in this way two infinities can be compared.
No.
Infinity is a concept, not a number. Therefore, while one of the limits grows at a faster rate, both have the same endpoint of infinity, and so they are equal.
Have a look at this for a clearer explanation of why $\infty$ is not a number. |
Find the limit of the sequence as $n$ approaches $\infty$: $$\sqrt{\left(1+\frac1{2n}\right)^n}$$
I made a table of the values of the sequence and the values approach 1, so why is the limit $e^{1/4}$?
I know that if the answer is $e^{1/4}$ I must have to take the $\ln$ of the sequence but how and where do I do that with the square root?
I did some work getting the sequence into an indeterminate form and trying to use L'Hospitals but I'm not sure if it's right and then where to go from there. Here is the work I've done
$$\sqrt{1+\left(\frac1{2n}\right)^n} = \frac1{2n} \ln \left(1+\frac1{2n}\right) = \lim_{x\to\infty} \frac1{2n} \ln \left(1+\frac1{2n}\right) \\ = \frac 1{1+\frac1{2n}}\cdot-\frac1{2n^2}\div-\frac1{2n^2}$$
Thank you |
Let $m$ a probability measure, $f$ a positive measurable function (one can assume it is bounded, the existence of the moments is not a problem here).
Is $m(f^3) \le m(f^2) m(f)$?
No. Consider $([0,1],\mathcal{B}([0,1]),\lambda|_{[0,1]})$ and $f(x) := 1+x$. Then $$\int_0^1 (1+x)^n \, dx = \frac{1}{n+1} (1+x)^{n+1} \bigg|_{x=0}^1 = \frac{2^{n+1}-1}{n+1}$$
for any $n \in \mathbb{N}$. Hence,
$$\frac{15}{4} = \int_0^1 (1+x)^3 \, dx > \left( \int_0^1 (1+x)^2 \, dx \right) \cdot \left( \int_0^1 (1+x) \, dx \right) = \frac{7}{3} \cdot \frac{3}{2} = \frac{7}{2}.$$
No. Actually, for
every probability measure $m$ and nonnegative function $f$, $$m(f^3)\geqslant m(f^2)\cdot m(f),$$ with equality if and only if $f$ is ($m$-almost surely) constant.
Hence, checking
any example would have shown that the conjecture is wrong. |
I was wondering how I could determine a robot's distance from a fixed point when the robot itself is constantly changing positions. I can keep encoders on the wheels and can also get data from a gyroscope and an accelerometer.
If you know the position of the point at the begin, an easy solution would be to implement Dead Reckoning using the encoder value. Knowing the position of the robot at time
t, compare it to its initial position and you can easily find where the fixed point is, in the robot frame (and thus calculate the distance).
Then, to compensate for the drift created by the dead reckoning process, you can use the values given by your IMU (gyro and accelerometer).
In addition to Malc's answer, a simple dead reckoning algorithm might look like this:
$$ X_k = X_{k-1} + f(u) $$
where: $$ X=\left(\! \begin{array}{c} PosX \\ PosY \\ Heading \end{array} \!\right) \phantom {AAAAA} u=\left(\! \begin{array}{c} EncLeft \\ EncRight \end{array} \!\right) $$
$f(u)$ is the change of the state $X$ since the last measurement. To calculate this change you need the measurement $u$, which in the simplest version is just your encoder values. In this case the encoder values are already converted to their respective distances, so the unit is meters or something similar.
In addition you need your wheelbase $r$ and the sampling time $T_s$ of your algorithm.
The update of $PosX$ might be:
$$ PosX_{(k)} =PosX_{(k-1)} + \cos {(Heading_{(k-1)})} \cdot \frac{EncLeft_{(k)} +EncRight_{(k)}} 2 $$
Similar for $PosY$:
$$ PosY_{(k)} =PosY_{(k-1)} + \sin {(Heading_{(k-1)})} \cdot \frac{EncLeft_{(k)} +EncRight_{(k)}} 2 $$
The change in heading might be expressed with:
$$ Heading_{(k)}=Heading_{(k-1)}+ \arctan \left(\! \frac{EncLeft-EncRight} r \!\right) $$
Some points, like the calculation of the heading, are quite simplified, but may work in a "homemade" application.
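Here is a minimal Python sketch of exactly these update equations (my own illustrative code; the names are made up, and it keeps the same simplified arctan heading update as above):

from math import cos, sin, atan

def dead_reckoning_step(x, y, heading, enc_left, enc_right, wheelbase):
    # One dead-reckoning update. enc_left and enc_right are the distances (m)
    # travelled by each wheel since the last sample.
    forward = (enc_left + enc_right) / 2.0
    x_new = x + cos(heading) * forward
    y_new = y + sin(heading) * forward
    heading_new = heading + atan((enc_left - enc_right) / wheelbase)
    return x_new, y_new, heading_new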
Further improvements might be done with feedback from the IMU sensor.
This document has a good overview of mobile robot kinematics which is required to perform dead reckoning.
Adding the IMU provides more information and can correct for drift of the encoders. But now you need to fuse the data, typically with a Kalman or particle filter. Traditionally, mobile robots will also have a planar laser range finder (LIDAR) sensor such as a Hokuyo. By doing incremental scan matching you can further improve accuracy, in addition to now being able to map and such. This is approaching SLAM.
Additionally, determining "ground truth" location is achieved through the use of other sensor modalities. For example a motion capture system like Optitrack, or a webcam pointed at the ceiling with April tags on it. |