Given the concentration of bleach ($6\%\,\mathrm{m/m}$), density of bleach ($1.07\,\mathrm{g\,mL^{-1}}$), and molar mass of bleach ($74.44\,\mathrm{g\,mol^{-1}}$) how many moles of bleach are in a $4.0\,\mathrm{mL}$ sample? I don't understand where the concentration factors in, but the problem says to use it.
The key to this kind of problem is to determine the mass of the pure compound in the given volume of solution. First, use the density to determine the mass of the solution. Then, use the mass concentration to determine the mass of the pure compound. After that, the number of moles of the pure compound follows directly from the molar mass.
The mass of $1\,\mathrm{mL}$ of bleach solution is $1.07\,\mathrm{g}$, so the mass of $4\,\mathrm{mL}$ of bleach solution is $4 \cdot 1.07\,\mathrm{g}$.
Each $100\,\mathrm{g}$ of bleach solution contains $6\,\mathrm{g}$ of pure bleach, so $4 \cdot 1.07\,\mathrm{g}$ of bleach solution contains $\frac{4\cdot 6 \cdot 1.07}{100}\,\mathrm{g}$ of pure bleach.
The number of moles in $4\,\mathrm{mL}$ of bleach solution is therefore: $$\frac{4\cdot 6 \cdot 1.07}{100\cdot 74.44}\approx 0.0034\,\mathrm{mol}$$
Dimensional analysis is a straightforward, easy, and beautiful way to solve this problem! Simply cancel the units!
$$4\,\mathrm{mL\,(solution)} \cdot \frac{1.07\,\mathrm{g\,(solution)}}{\mathrm{mL\,(solution)}} \cdot \frac{6\,\mathrm{g}\,\ce{NaClO}}{100\,\mathrm{g\,(solution)}} \cdot \frac{1\,\mathrm{mol}\,\ce{NaClO}}{74.44\,\mathrm{g}\,\ce{NaClO}}$$
$$=0.00345\,\mathrm{mol}\,\ce{NaClO}$$
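The same chain of conversion factors can be checked numerically. Here is a small Python sketch (variable names are ours, values from the question):

```python
# Mole calculation for the bleach problem, following the same
# dimensional-analysis chain as above.
volume_ml = 4.0          # mL of solution
density = 1.07           # g of solution per mL
mass_fraction = 0.06     # 6% m/m NaClO
molar_mass = 74.44       # g/mol of NaClO

mass_solution = volume_ml * density          # g of solution
mass_naclo = mass_solution * mass_fraction   # g of pure NaClO
moles = mass_naclo / molar_mass              # mol of NaClO

print(round(moles, 5))  # → 0.00345
```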
This library is a customizable 2D math rendering tool for calculators. It can be used to render 2D formulae, either from an existing structure or TeX syntax.
\frac{x^7 \left[X,Y\right] + 3\left|\frac{A}{B}\right>} {\left\{\frac{a_k+b_k}{k!}\right\}^5}+ \int_a^b \frac{\left(b-t\right)^{n+1}}{n!} dt+ \left(\begin{matrix} \frac{1}{2} & 5 \\ -1 & a+b \end{matrix}\right)
List of currently supported elements:

- Fractions (\frac)
- Subscripts and superscripts (_ and ^)
- Delimiters (\left and \right)
- Sums, products and integrals (\sum, \prod and \int)
- Vectors (\vec) and limits (\lim)
- Square roots (\sqrt)
- Matrices (\begin{matrix} ... \end{matrix})
Features that are partially implemented (and what is left to finish them):
See the TODO.md file for more features to come.
First specify the platform you want to use:
- cli is for command-line tests, with no visualization (PC)
- sdl2 is an SDL interface with visualization (PC)
- fx9860g builds the library for fx-9860G targets (calculator)
- fxcg50 builds the library for fx-CG 50 targets (calculator)
For calculator platforms, you can use --toolchain to specify a different toolchain than the default sh3eb and sh4eb ones. The install directory of the library is guessed by asking the compiler; you can override it with --prefix.
Example for an SDL setup:
% ./configure --platform=sdl2
Then you can make the program, and if it’s a calculator library, install it. You can later delete Makefile.cfg to reset the configuration, or just reconfigure as needed.

% make
% make install   # fx9860g and fxcg50 only
Before using the library in a program, a configuration step is needed. The library does not have drawing functions and instead requires that you provide some, namely:

- a pixel-drawing function (TeX_intf_pixel)
- a line-drawing function (TeX_intf_line)
- a text-size function (TeX_intf_size)
- a text-rendering function (TeX_intf_text)
The three rendering functions are available in fxlib; for monospaced fonts the fourth can be implemented trivially. In gint, the four can be defined as wrappers for dpixel(), dline(), dsize() and dtext().
The type of formulae is TeX_Env. To parse and compute the size of a formula, use the TeX_parse() function, which returns a new formula object (or NULL if a critical error occurs). The second parameter display is set to non-zero to use display mode (similar to \[ .. \] in LaTeX) or zero to use inline mode (similar to $ .. $ in LaTeX).
char *code = "\\frac{x_7}{\\left\\{\\frac{\\frac{2}{3}}{27}\\right\\}^2}";
struct TeX_Env *formula = TeX_parse(code, 1);
The size of the formula can be queried through formula->width and formula->height. To render, specify the location of the top-left corner and the drawing color (which will be passed to all primitives):
TeX_draw(formula, 0, 0, BLACK);
The same formula can be drawn several times. When it is no longer needed, free it with TeX_free():

TeX_free(formula);
What will be the time when the hour, minute, and second hands will all coincide in between 6 O'Clock and 7 O'Clock?
How should I approach this type of question? Can anyone explain it?
ANSWER IS NOT CORRECT
You really just need a couple of equations:

h = hi + min/60
h/12 = min/60 = sec/60

where hi is the integer hour before the time you want to find (hi = 6), and h, min, sec are the actual hour, minute and second at which they intersect.

Substitute h/12 for min/60 and the first equation becomes:

h = hi + h/12
(11/12)·h = hi
h = (12/11)·hi = (12/11)·6 = 72/11 ≈ 6.5454
min = (h/12)·60 = (6/11)·60 ≈ 32.7272
sec = min ≈ 32.7272
For an actual time, drop the fractional parts of the hour and minute and you're left with:

**final time is 6:32:32.727**
EDIT TO SHOW THAT IT IS NOT CORRECT
As per @Poolsharker's answer, these equations are incorrect; another equation that should have been considered is:

min = mini + sec/60
(59/60)·min = mini

From the previous final time, mini = 32, so:

min = (60/59)·32 ≈ 32.5424

which does not match the previous minute value, so all three hands will not coincide.
When we are a fraction $f$ of the way through a 12-hour "day" -- so $f=1$ means 12 hours have passed since noon or midnight -- the positions of the hands as fractions of a whole turn are: $f$ (for the hour hand), $12f$ (for the minute hand), and $720f$ (for the second hand), where integer differences are not visible. So for all three to coincide we need the differences $11f$ and $719f$ to be integers. Since 11 and 719 are coprime, this requires $f$ to be an integer, which means that the only 3-way coincidences happen with all hands pointing to the 12.
In particular, there is none between 6 and 7 o'clock.
(The bit about 11 and 719 being coprime may not be perfectly clear. Suppose $11f=m$ and $719f=n$ where $m,n$ are integers. Then $11n=719m$, so $11|719m$; since 11 is prime this requires either $11|719$ (no!) or $11|m$. So write $m=11k$; then $11f=m=11k$ so $f=k$ and $f$ is an integer.)
From my calculations...
all three hands will not intersect 'exactly' between 6 O'clock and 7 O'clock
The way I approached it isn't elegant but I think my calculations are correct:
Clock face makes up 360 degrees. When the minute hand moves 360 degrees, the hour hand has moved 30 degrees (1/12). So I did my calculations by moving the minute hand to where the hour hand is, and then adjusting the hour hand because of the minute hand movement.
So at 6:00, the hour hand (hh) is at 180 degrees and the minute hand (mh) is at 0. Move mh 180 degrees, move hh 180 · (1/12) = 15 degrees. Move mh 15 degrees, move hh 15 · (1/12) = 1.25 degrees. Move mh 1.25 degrees, move hh 1.25 · (1/12) ≈ 0.104 degrees. And so on. The corrections shrink toward a total hour-hand movement of 16.3636 degrees. So if we are 16.3636 degrees between the 6 and the 7, and 30 degrees = 5 minutes, then 16.3636 / (30/5) = 2.7272 minutes. Now we convert 0.7272 into seconds (0.7272 · 60) and we get 43.632. So I believe that the hour hand and minute hand will be nearly exact at 6:32:43.6; however, that does leave the second hand outside. The best approximation would be 6:32:32.72 as mentioned by gtwebb.
Side note
While checking my calculations, I found this site calculating all the times the hour hand and minute hand intersect, good future reference http://www.kodyaz.com/articles/how-many-times-a-day-clocks-hands-overlap.aspx
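The iterative adjustment described above is a geometric series (each hour-hand correction is 1/12 of the previous minute-hand move), so the total hour-hand movement converges to 180 · (1/12)/(1 − 1/12) = 180/11 ≈ 16.3636 degrees. A quick Python sketch of that iteration:

```python
# Chase the hour hand as described above: move the minute hand to the
# hour hand's position, advance the hour hand by 1/12 of that move,
# and repeat until the correction is negligible.
step = 180.0   # first minute-hand movement at 6:00, in degrees
moved = 0.0    # total hour-hand movement past the 6

while step > 1e-12:
    step = step / 12.0   # hour hand moves 1/12 of the minute hand's move
    moved += step

print(round(moved, 4))  # → 16.3636
```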
There is no three-hand coincidence between 6 o'clock and 7 o'clock.
We can solve this problem using equations of motion.
The equation of motion in terms of angles of a pointer is:
$$\theta=\theta_i+\dot{\theta} t$$
where $\theta_i$ is the initial angle (at $t=0$), $\dot{\theta}$ the angular frequency and, $t$ the time.
We have three bodies: hours ($h$), minutes ($m$) and seconds ($s$) pointers.
Consider that the initial condition is at 6 o'clock ($t=0$). For a polar reference system where $\theta=0$ at 12 o'clock, with the angle increasing in the clockwise direction, we know that at the initial condition we have:
$$ \theta_h^i=\pi \text{ (rad)}, \hspace{5pt} \theta_m^i=0 \text{ (rad)}, \hspace{5pt} \theta_s^i=0 \text{ (rad)}$$
In relation to the angular frequencies, we have:
$$ \dot{\theta_h}=\frac{2\pi}{12\times3600} \text{(rad/s)}, \hspace{5pt} \dot{\theta_m}=\frac{2\pi}{3600} \text{ (rad/s)}, \hspace{5pt} \dot{\theta_s}=\frac{2\pi}{60} \text{ (rad/s)}$$
The condition for three pointers coincidence is:
$$\theta_h(t)=\theta_m(t)-2\pi n=\theta_s(t)- 2\pi m$$
where $n$ and $m$ are integers (the subtraction of $2\pi n$ and $2\pi m$ accounts for the cyclicity of the hands). We need $n\in[0,1]$, $m\in[0,60]$ and $0\leq t\leq3600$ s for the coincidence to be between 6 and 7 o'clock.
So, solving the system of equations $\theta_h(t)=\theta_m(t)-2\pi n$ and $\theta_m(t)-2\pi n=\theta_s(t)- 2\pi m$ for $t$ and $m$, we get:
$$t=\frac{21600}{11}\left(1+2n\right) \text{ (s)},\hspace{15pt} m=\frac{719n+354}{11}$$
I tried to find the first value of $n$ that satisfies the conditions of $n$ and $m$ being positive integers, by substitution of $n=1,2,3,\ldots$ into the $m(n)$ expression, checking whether $m$ is an integer. I got $n=5$, which means that $t=21600\ \mathrm{s}=6\ \mathrm{h}$. Because $t=0$ is at 6 o'clock, the coincidence happens at 12 o'clock, which means that it is impossible between 6 and 7 o'clock (remember that $t(n)$ is an increasing function of $n$, and $n=5$ is the lowest value giving a coincidence).
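The conclusion shared by these answers can be verified with exact rational arithmetic. Writing the elapsed fraction of a 12-hour day as f, the hands sit at fractions f, 12f and 720f of a full turn, so a three-way coincidence needs both 11f and 719f to be integers. A short sketch:

```python
from fractions import Fraction

# Hour and minute hands coincide when 11f is an integer, i.e. at
# f = k/11 for k = 0..10 (f = fraction of a 12-hour day elapsed).
coincidences = []
for k in range(11):
    f = Fraction(k, 11)
    # The second hand joins them only if 719f is also an integer.
    if (719 * f).denominator == 1:
        coincidences.append(f)

# Only f = 0 survives: all three hands meet only at 12 o'clock,
# so there is no three-way coincidence between 6 and 7.
print(coincidences)  # → [Fraction(0, 1)]
```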
Volume 24, Number 4, 2019
Tsiganov A. V.
The Kepler Problem: Polynomial Algebra of Nonpolynomial First Integrals
Abstract
The sum of elliptic integrals simultaneously determines orbits in the Kepler problem and the addition of divisors on elliptic curves. Periodic motion of a body in physical space is defined by symmetries, whereas periodic motion of divisors is defined by a fixed point on the curve. The algebra of the first integrals associated with symmetries is a well-known mathematical object, whereas the algebra of the first integrals associated with the coordinates of fixed points is unknown. In this paper, we discuss polynomial algebras of nonpolynomial first integrals of superintegrable systems associated with elliptic curves.
Arathoon P.
Singular Reduction of the 2-Body Problem on the 3-Sphere and the 4-Dimensional Spinning Top
Abstract
We consider the dynamics and symplectic reduction of the 2-body problem on a sphere of arbitrary dimension. It suffices to consider the case when the sphere is 3-dimensional. As the 3-sphere is a group it acts on itself by left and right multiplication and these together generate the action of the \(SO(4)\) symmetry on the sphere. This gives rise to a notion of left and right momenta for the problem, and allows for a reduction in stages, first by the left and then the right, or vice versa. The intermediate reduced spaces obtained by left or right reduction are shown to be coadjoint orbits of the special Euclidean group \(SE(4)\). The full reduced spaces are generically 4-dimensional and we describe these spaces and their singular strata. The dynamics of the 2-body problem descend through a double cover to give a dynamical system on \(SO(4)\) which, after reduction and for a particular choice of Hamiltonian, coincides with that of a 4-dimensional spinning top with symmetry. This connection allows us to "hit two birds with one stone" and derive results about both the spinning top and the 2-body problem simultaneously. We provide the equations of motion on the reduced spaces and fully classify the relative equilibria and discuss their stability.
Ivanov A. V.
On Transversal Connecting Orbits of Lagrangian Systems in a Nonstationary Force Field: the Newton – Kantorovich Approach
Abstract
We consider a natural Lagrangian system defined on a complete Riemannian manifold subjected to the action of a nonstationary force field with potential $U(q,t) = f(t)V(q)$. It is assumed that the factor $f(t)$ tends to $\infty$ as $t\to \pm\infty$ and vanishes at a unique point $t_{0}\in \mathbb{R}$. Let $X_{+}$, $X_{-}$ denote the sets of isolated critical points of $V(x)$ at which $U(x,t)$ as a function of $x$ attains its maximum for any fixed $t> t_{0}$ and $t< t_{0}$, respectively. Under nondegeneracy conditions on points of $X_{\pm}$ we apply the Newton – Kantorovich type method to study the existence of transversal doubly asymptotic trajectories connecting $X_{-}$ and $X_{+}$. Conditions on the Riemannian manifold and the potential which guarantee the existence of such orbits are presented. Such connecting trajectories are obtained by continuation of geodesics defined in a vicinity of the point $t_{0}$ to the whole real line.
Ryabov P. E. , Shadrin A. A.
Bifurcation Diagram of One Generalized Integrable Model of Vortex Dynamics
Abstract
This article is devoted to the results of phase topology research on a generalized mathematical model, which covers such two problems as the dynamics of two point vortices enclosed in a harmonic trap in a Bose – Einstein condensate and the dynamics of two point vortices bounded by a circular region in an ideal fluid. New bifurcation diagrams are obtained and three-into-one and four-into-one tori bifurcations are observed for some values of the physical parameters of the model. The presence of such bifurcations in the integrable model of vortex dynamics with positive intensities indicates a complex transition and a connection between bifurcation diagrams in both limiting cases. In this paper, we analytically derive equations that define the parametric family of bifurcation diagrams of the generalized model, including bifurcation diagrams of the specified limiting cases. The dynamics of the bifurcation diagram in a general case is shown using its implicit parameterization. A stable bifurcation diagram, related to the problem of dynamics of two vortices bounded by a circular region in an ideal fluid, is observed for particular parameters’ values.
Rybalova E. V. , Klyushina D. Y. , Anishchenko V. S. , Strelkova G. I.
Impact of Noise on the Amplitude Chimera Lifetime in an Ensemble of Nonlocally Coupled Chaotic Maps
Abstract
This paper presents results of numerical statistical analysis of the effect of short-term localized noise of different intensity on the amplitude chimera lifetime in an ensemble of nonlocally coupled logistic maps in a chaotic regime. It is shown that a single and rather weak noise perturbation added only to the incoherence cluster of the amplitude chimera after its switching to the phase chimera mode is able to revive and stabilize the amplitude chimera, as well as to increase its lifetime to infinity. It is also analyzed how the amplitude chimera lifetime depends on the duration of noise influence of different intensity.
Seeing that in the Chomsky Hierarchy Type 3 languages can be recognised by a DFA (which has no stacks), Type 2 by a DFA with one stack (i.e. a push-down automaton) and Type 0 by a DFA with two stacks (i.e. with one queue, i.e. with a tape, i.e. by a Turing Machine), how do Type 1 languages fit in...
Considering this pseudo-code of a bubblesort:

FOR i := 0 TO arraylength(list) STEP 1
    switched := false
    FOR j := 0 TO arraylength(list)-(i+1) STEP 1
        IF list[j] > list[j + 1] THEN
            switch(list,j,j+1)
            switched := true
        ENDIF
    NEXT
    IF switch...
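The pseudo-code above translates almost line for line into Python. Here is a runnable sketch; the early-exit `switched` flag is the detail that questions about bubblesort's best-case complexity usually hinge on:

```python
def bubble_sort(lst):
    """Bubble sort with the early-exit flag from the pseudo-code above."""
    n = len(lst)
    for i in range(n):
        switched = False
        for j in range(n - (i + 1)):
            if lst[j] > lst[j + 1]:
                lst[j], lst[j + 1] = lst[j + 1], lst[j]  # switch(list, j, j+1)
                switched = True
        if not switched:   # no swaps in a full pass: list is already sorted
            break
    return lst

print(bubble_sort([5, 1, 4, 2, 8]))  # → [1, 2, 4, 5, 8]
```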
Let's consider a memory segment (whose size can grow or shrink, like a file, when needed) on which you can perform two basic memory allocation operations involving fixed size blocks:allocation of one blockfreeing a previously allocated block which is not used anymore.Also, as a requiremen...
Rice's theorem tell us that the only semantic properties of Turing Machines (i.e. the properties of the function computed by the machine) that we can decide are the two trivial properties (i.e. always true and always false).But there are other properties of Turing Machines that are not decidabl...
People often say that LR(k) parsers are more powerful than LL(k) parsers. These statements are vague most of the time; in particular, should we compare the classes for a fixed $k$ or the union over all $k$? So how is the situation really? In particular, I am interested in how LL(*) fits in.As f...
Since the current FAQs say this site is for students as well as professionals, what will the policy on homework be?What are the guidelines that a homework question should follow if it is to be asked? I know on math.se they loosely require that the student make an attempt to solve the question a...
I really like the new beta theme, I guess it is much more attractive to newcomers than the sketchy one (which I also liked). Thanks a lot!However I'm slightly embarrassed because I can't read what I type, both in the title and in the body of a post. I never encountered the problem on other Stac...
This discussion started in my other question "Will Homework Questions Be Allowed?".Should we allow the tag? It seems that some of our sister sites (Programmers, stackoverflow) have not allowed the tag as it isn't constructive to their sites. But other sites (Physics, Mathematics) do allow the s...
There have been many questions on CST that were either closed, or just not answered because they weren't considered research level. May those questions (as long as they are of good quality) be reposted or moved here?I have a particular example question in mind: http://cstheory.stackexchange.com...
Ok, so in most introductory Algorithm classes, either BigO or BigTheta notation are introduced, and a student would typically learn to use one of these to find the time complexity.However, there are other notations, such as BigOmega and SmallOmega. Are there any specific scenarios where one not...
Many textbooks cover intersection types in the lambda-calculus. The typing rules for intersection can be defined as follows (on top of the simply typed lambda-calculus with subtyping):$$\dfrac{\Gamma \vdash M : T_1 \quad \Gamma \vdash M : T_2}{\Gamma \vdash M : T_1 \wedge T_2}...
I expect to see pseudo code and maybe even HPL code on regular basis. I think syntax highlighting would be a great thing to have.On Stackoverflow, code is highlighted nicely; the schema used is inferred from the respective question's tags. This won't work for us, I think, because we probably wo...
Sudoku generation is hard enough. It is much harder when you have to make an application that makes a completely random Sudoku.The goal is to make a completely random Sudoku in Objective-C (C is welcome). This Sudoku generator must be easily modified, and must support the standard 9x9 Sudoku, a...
I have observed that there are two different types of states in branch prediction.In superscalar execution, where the branch prediction is very important, and it is mainly in execution delay rather than fetch delay.In the instruction pipeline, where the fetching is more problem since the inst...
Is there any evidence suggesting that time spent on writing up, or thinking about the requirements will have any effect on the development time? Study done by Standish (1995) suggests that incomplete requirements partially (13.1%) contributed to the failure of the projects. Are there any studies ...
NPI is the class of NP problems without any polynomial time algorithms and not known to be NP-hard. I'm interested in problems such that a candidate problem in NPI is reducible to it but it is not known to be NP-hard and there is no known reduction from it to the NPI problem. Are there any known ...
I am reading Mining Significant Graph Patterns by Leap Search (Yan et al., 2008), and I am unclear on how their technique translates to the unlabeled setting, since $p$ and $q$ (the frequency functions for positive and negative examples, respectively) are omnipresent.On page 436 however, the au...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.On StackOverflow (and possibly Math.SE), questions on introductory formal language and automata theory pop up... questions along the lines of "How do I show language L is/isn't...
This is my first time to be involved in a site beta, and I would like to gauge the community's opinion on this subject.Certainly, there are many kinds of questions which we could expect to (eventually) be asked on CS.SE; lots of candidates were proposed during the lead-up to the Beta, and a few...
This is somewhat related to this discussion, but different enough to deserve its own thread, I think.What would be the site policy regarding questions that are generally considered "easy", but may be asked during the first semester of studying computer science. Example:"How do I get the symme...
EPAL, the language of even palindromes, is defined as the language generated by the following unambiguous context-free grammar:$S \rightarrow a a$$S \rightarrow b b$$S \rightarrow a S a$$S \rightarrow b S b$EPAL is the 'bane' of many parsing algorithms: I have yet to enc...
Assume a computer has a precise clock which is not initialized. That is, the time on the computer's clock is the real time plus some constant offset. The computer has a network connection and we want to use that connection to determine the constant offset $B$.The simple method is that the compu...
Consider an inductive type which has some recursive occurrences in a nested, but strictly positive location. For example, trees with finite branching with nodes using a generic list data structure to store the children.

Inductive LTree : Set := Node : list LTree -> LTree.

The naive way of d...
I was editing a question and I was about to tag it bubblesort, but it occurred to me that tag might be too specific. I almost tagged it sorting but its only connection to sorting is that the algorithm happens to be a type of sort, it's not about sorting per se.So should we tag questions on a pa...
To what extent are questions about proof assistants on-topic?I see four main classes of questions:Modeling a problem in a formal setting; going from the object of study to the definitions and theorems.Proving theorems in a way that can be automated in the chosen formal setting.Writing a co...
Should topics in applied CS be on topic? These are not really considered part of TCS, examples include:Computer architecture (Operating system, Compiler design, Programming language design)Software engineeringArtificial intelligenceComputer graphicsComputer securitySource: http://en.wik...
I asked one of my current homework questions as a test to see what the site as a whole is looking for in a homework question. It's not a difficult question, but I imagine this is what some of our homework questions will look like.
I have an assignment for my data structures class. I need to create an algorithm to see if a binary tree is a binary search tree as well as count how many complete branches are there (a parent node with both left and right children nodes) with an assumed global counting variable.So far I have...
It's a known fact that every LTL formula can be expressed by a Buchi $\omega$-automata. But, apparently, Buchi automata is a more powerful, expressive model. I've heard somewhere that Buchi automata are equivalent to linear-time $\mu$-calculus (that is, $\mu$-calculus with usual fixpoints and onl...
Let us call a context-free language deterministic if and only if it can be accepted by a deterministic push-down automaton, and nondeterministic otherwise.Let us call a context-free language inherently ambiguous if and only if all context-free grammars which generate the language are ambiguous,...
One can imagine using a variety of data structures for storing information for use by state machines. For instance, push-down automata store information in a stack, and Turing machines use a tape. State machines using queues, and ones using two multiple stacks or tapes, have been shown to be equi...
Though in the future it would probably be a good idea to more thoroughly explain your thinking behind your algorithm and where exactly you're stuck. Because as you can probably tell from the answers, people seem to be unsure of where exactly you need directions in this case.
What will the policy on providing code be? In my question it was commented that it might not be on topic, as it seems like I was asking for working code. I wrote my algorithm in pseudo-code because my problem didn't ask for working C++ or whatever language. Should we only allow pseudo-code here?...
Basically 2 strings, $a>b$, which go into the first box and do division to output $b,r$ such that $a = bq + r$ and $r<b$, then you have to check for $r=0$ which returns $b$ if we are done, otherwise inputs $r,q$ into the division box..
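The box construction being described is the Euclidean algorithm: divide, check whether the remainder is zero, and otherwise feed the results back into the division box (note the standard recursion feeds the pair (b, r) back in). A minimal sketch:

```python
def gcd(a, b):
    """Euclidean algorithm via the 'division box': a = b*q + r, repeat."""
    while b != 0:
        q, r = divmod(a, b)   # the division box: quotient and remainder
        a, b = b, r           # feed (b, r) back in until r = 0
    return a

print(gcd(252, 105))  # → 21
```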
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion along a row), and letting $\operatorname{det}(A) = \sum_{j=1}^n a_{1j}\operatorname{cof}_{1j}(A)$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Shakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
The statement $e^{i \theta} = \cos \theta + i \sin \theta$ is known as Euler's relation (and as Euler's formula) and is considered the first bridge between the fields of algebra and geometry, as it relates the exponential function to the trigonometric sine and cosine functions.
If you substitute $\theta = \pi$, the relation simplifies to $e^{i\pi} = -1$, known as Euler's identity.
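Euler's relation can be checked numerically with Python's complex-number support; this is a quick sanity check, not a proof:

```python
import cmath

theta = 0.7  # any angle works
lhs = cmath.exp(1j * theta)
rhs = cmath.cos(theta) + 1j * cmath.sin(theta)
assert abs(lhs - rhs) < 1e-12   # Euler's relation holds numerically

# Substituting theta = pi gives Euler's identity: e^{i*pi} = -1
assert abs(cmath.exp(1j * cmath.pi) + 1) < 1e-12
print("checks pass")
```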
Summary/Background
Leonhard Euler (15 April 1707 – 18 September 1783) was a Swiss mathematician who made enormous contributions to a wide range of mathematics and physics including analytic geometry, trigonometry, geometry, calculus and number theory. Euler's work in mathematics is so vast that an article of this nature cannot but give a very superficial account of it. He was the most prolific writer of mathematics of all time. He made great strides forward in the study of modern analytic geometry and trigonometry, where he was the first to consider sin, cos, etc. as functions rather than as chords as Ptolemy had done.
He made decisive and formative contributions to geometry, calculus and number theory. He integrated Leibniz's differential calculus and Newton's method of fluxions into mathematical analysis. He introduced beta and gamma functions, and integrating factors for differential equations. He studied continuum mechanics, lunar theory with Clairaut, the three-body problem, elasticity, acoustics, the wave theory of light, hydraulics, and music. He laid the foundation of analytical mechanics, especially in his Theory of the Motions of Rigid Bodies (1765).

The number $e = 2.718281828459\ldots$ is Euler's number, the base of the natural logarithm. Euler's identity, $e^{i\pi} + 1 = 0$, is also sometimes called Euler's equation.

Glossary

body
an object with both mass and size that cannot be taken to be a particle
calculus
the study of change; a major branch of mathematics that includes the study of limits, derivatives, rates of change, gradient, integrals, area, summation, and infinite series. Historically, it has been referred to as "the calculus of infinitesimals", or "infinitesimal calculus".
There are widespread applications in science, economics, and engineering.
cosine
The trigonometrical function defined as adjacent/hypotenuse in a right-angled triangle.
differential calculus
the study of how functions change when their inputs change.
equation
A statement that two mathematical expressions are equal.
exponential function
A function having variables as exponents.
function
A rule that connects one value in one set with one and only one value in another set.
identity
An equation which is true for all values of the variable.
light
having negligible mass.
logarithm
If $y = a^x$ then the logarithm to base $a$ of $y$ is $x$.
range
In Statistics: the difference between the largest and smallest values in a data set; a simple measure of spread or variation
In Pure Maths: the values that y can take given an equation y=f(x) and a domain for x.
sine
The trigonometrical function defined as opposite/hypotenuse in a right-angled triangle.
trigonometry
The study of the relationships between the angles and sides of triangles.
union
The union of two sets A and B is the set containing all the elements of A and B.
work
Equal to F x s, where F is the force in Newtons and s is the distance travelled and is measured in Joules.
This question appears in the following syllabi:
| Syllabus | Module | Section | Topic |
|---|---|---|---|
| AQA A-Level (UK - Pre-2017) | FP2 | Complex Numbers | Euler relation |
| AQA A2 Further Maths 2017 | Pure Maths | Further Complex Numbers | Euler Relation |
| AQA AS/A2 Further Maths 2017 | Pure Maths | Further Complex Numbers | Euler Relation |
| CCEA A-Level (NI) | FP2 | Complex Numbers | Euler relation |
| CIE A-Level (UK) | P3 | Complex Numbers | Euler relation |
| Edexcel A-Level (UK - Pre-2017) | FP2 | Complex Numbers | Euler relation |
| Edexcel A2 Further Maths 2017 | Core Pure Maths | Complex Numbers | Euler Relation |
| Edexcel AS/A2 Further Maths 2017 | Core Pure Maths | Complex Numbers | Euler Relation |
| I.B. Higher Level | 1 | Complex Numbers | Euler relation |
| Methods (UK) | M3 | Complex Numbers | Euler relation |
| OCR A-Level (UK - Pre-2017) | FP3 | Complex Numbers | Euler relation |
| OCR A2 Further Maths 2017 | Pure Core | Further Complex Numbers | Euler Relation |
| OCR MEI A2 Further Maths 2017 | Core Pure B | Complex Numbers | Euler Relation |
| OCR-MEI A-Level (UK - Pre-2017) | FP2 | Complex Numbers | Euler relation |
| Scottish Advanced Highers | M2 | Complex Numbers | Euler relation |
| Scottish (Highers + Advanced) | AM2 | Complex Numbers | Euler relation |
| Universal (all site questions) | C | Complex Numbers | Euler relation |
| WJEC A-Level (Wales) | FP2 | Complex Numbers | Euler relation |
|
Introduction
This is an alternative to j_random_hacker's solution, or more precisely, an alternative to his/her solution to the subproblem:
Given the ordered list of edge weights $x_1,\ldots,x_m$ encountered while traversing down a particular heavy path, we want to preprocess this list so that we can later efficiently answer queries of the form "How many of the first $r$ elements are less than $k$?"
j_random_hacker uses the data structure of Fenwick tree of sorted lists, which results in a solution with
overall $O(n\log^3 n)$ preprocessing time for all heavy paths, overall $O(n\log n)$ space for all heavy paths, and $O(\log^2 n)$ query time for the subproblem, hence $O(\log^3 n)$ query time for the primary problem.
In contrast, my alternative gives a solution with
overall $O(n\sqrt{n})$ preprocessing time for all heavy paths, overall $O(n\sqrt{n})$ space for all heavy paths, and $O(1)$ query time for the subproblem, hence $O(\log n)$ query time for the primary problem.

How to preprocess each heavy path?
At the very beginning, before we turn to those heavy paths, we sort all edges by their weights from small to large. Say the result is $e_1,\ldots,e_{n-1}$ with weights $w_1,\ldots,w_{n-1}$. We label edge $e_i$ by $i$ for future use. Denote $I_1=(-\infty,w_1],I_2=(w_1,w_2],\ldots,I_n=(w_{n-1},+\infty)$.
Now let's focus on the subproblem mentioned above. We divide $x_1,\ldots,x_m$ into blocks of length $t=\lceil\sqrt{n}\rceil$: $\left[x_1,\ldots,x_t\right],\left[x_{t+1},\ldots,x_{2t}\right],\ldots$ We build two tables $T_1, T_2$ where $T_1(p,q,r)$ represents how many of the first $r$ elements in the $q$th block are less than $k$ if $k\in I_p$, and $T_2(p,q)$ represents how many of the elements in the first $q$ blocks are less than $k$ if $k\in I_p$.
Now if we have these two tables, we can answer a query for each heavy path in $O(1)$ time. For example, for a query $(r,k)$, if $k\in I_5$ and $r=3t+2$, then the answer is $T_1(5,4,2)+T_2(5,3)$.
Note it takes $O(\log n)$ time to find $p$ such that $k\in I_p$. Since we need to do this search only once for each query of the primary problem, it does not increase the query time.
How to build these tables?
Note that $T_2(p,q)=\sum_{i=1}^q T_1(p,i,t)$, so if we already have $T_1$, we can build $T_2$ in $O(nm/t)$ time, and it takes $O(nm/t)$ space. Next we focus on how to build $T_1$.
Suppose the edge weights in the $q$th block are $w_{i_1},\ldots,w_{i_t}$ where $i_1<\cdots<i_t$ (recall that these edges are already labeled by $i_1,\ldots,i_t$). Then we have
\begin{align}T_1(1,q,*)=T_1(2,q,*)=&\cdots=T_1(i_1,q,*),\\T_1(i_1+1,q,*)=T_1(i_1+2,q,*)=&\cdots=T_1(i_2,q,*),\\&\cdots\\T_1(i_t+1,q,*)=T_1(i_t+2,q,*)=&\cdots=T_1(n,q,*),\end{align}
where $T_1(p,q,*)$ represents the array $[T_1(p,q,1),\ldots,T_1(p,q,t)]$, and two arrays are equal if they are element-wise equal. This means we only need to compute and store $t$ arrays for $T_1(1, q, *),\ldots,T_1(n, q, *)$. So we can build $T_1$ in $O(t^2\cdot m/t)=O(mt)$ time, and it also takes $O(mt)$ space.
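Here is a runnable Python sketch of the whole scheme (my own code and naming; for clarity it stores the full table rather than only the $t$ distinct rows $T_1(\cdot,q,*)$ per block described above):

```python
import math
from bisect import bisect_left

def build_tables(xs, weights):
    """xs: edge weights x_1..x_m along one heavy path;
    weights: sorted list of ALL edge weights w_1 <= ... <= w_{n-1}.
    A query value k lies in interval I_p with p = bisect_left(weights, k),
    and x < k iff the global rank of x is below p."""
    m = len(xs)
    t = max(1, math.ceil(math.sqrt(len(weights) + 1)))  # block length ~ sqrt(n)
    ranks = [bisect_left(weights, x) for x in xs]
    blocks = [ranks[i:i + t] for i in range(0, m, t)]
    P = len(weights) + 1
    # T1[q][p][r]: how many of the first r elements of block q have rank < p
    T1 = [[[sum(rk < p for rk in blk[:r]) for r in range(len(blk) + 1)]
           for p in range(P + 1)] for blk in blocks]
    # T2[q][p]: how many elements of the first q whole blocks have rank < p
    T2 = [[0] * (P + 1)]
    for q, blk in enumerate(blocks):
        T2.append([T2[-1][p] + T1[q][p][len(blk)] for p in range(P + 1)])
    return t, T1, T2

def query(t, T1, T2, weights, r, k):
    """How many of the first r elements are < k: O(1) table lookups after
    the single O(log n) bisect that locates k's interval."""
    p = bisect_left(weights, k)
    q, rem = divmod(r, t)
    return T2[q][p] + (T1[q][p][rem] if rem else 0)

weights = sorted([5, 2, 9, 1, 7, 4, 3, 8])  # all n-1 edge weights of the tree
xs = [5, 2, 9, 1, 7, 4]                     # the weights met along one heavy path
t, T1, T2 = build_tables(xs, weights)
print(query(t, T1, T2, weights, 4, 6))      # 3 of [5, 2, 9, 1] are below 6
```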
Note $t=\lceil\sqrt{n}\rceil$, so the overall preprocessing time for all heavy paths is
$$\sum_m O(m\sqrt{n})=O(n\sqrt{n}).$$
So is the overall space used. |
Suppose we have a uniform multinomial distribution with $k$ buckets, i.e. we put $n$ items uniformly at random in $k$ buckets leading to $n_1, \dots, n_k$ items in each bucket respectively. Let $m = \max \{n_1, \dots, n_k\}$. Can we say anything about $\mathbb{E}(m)$, and in particular, its asymptotics as $n \to \infty$?
For the case $k = 2$, this is equivalent to looking at the distance from the origin after $n$ steps in a $1$-dimensional random walk. Then $n_1$ counts the number of steps in the positive direction, $n_2$ counts the number of steps in the negative direction, and $2m - n = m - (n - m) = |n_1 - n_2|$ considers the distance from the origin after $n$ steps in a simple $1$-dimensional random walk. A derivation at for instance MathWorld shows that in this case, $\mathbb{E}|n_1 - n_2| \sim \sqrt{2n/\pi}$ leading to exact asymptotics for $\mathbb{E}(m)$.
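As a sanity check on that $\sqrt{2n/\pi}$ asymptotic, here is a quick Monte Carlo snippet (my own, with arbitrary parameters):

```python
import numpy as np

# Monte Carlo check of E|n1 - n2| ~ sqrt(2n/pi) for the k = 2 case
rng = np.random.default_rng(0)
n, trials = 10_000, 20_000
counts = rng.multinomial(n, [0.5, 0.5], size=trials)   # shape (trials, 2)
m = counts.max(axis=1)                                 # max bucket per trial
print((2 * m - n).mean())      # Monte Carlo estimate of E|n1 - n2|, ~79.8
print(np.sqrt(2 * n / np.pi))  # asymptotic prediction, ~79.79
```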
I am now interested in the case $k > 2$ and large $n$, for which I could not find an answer. Anything would be appreciated, e.g. specific results for $k = 3$ or any other value of $k$, or even results for the regime $k = n \to \infty$ would be great.
Slightly related is this question, which is about getting bounds on the tails of the distribution of the maximum of a multinomial distribution. But I am interested in (exact) asymptotics for the mean, so such approximations don't seem very useful.

Edit: As noted in Waldemar's answer below, the "balls into bins" problem is closely related to this question. This indeed shows how $\lim_{n \to \infty} \mathbb{E}(m)$ roughly behaves in terms of $k$. However, the classic paper of Raab and Steger only gives bounds on the tails of the distribution of the maximum, and does not tell you what the mean is. In particular, using those bounds alone seems to be insufficient, as one would not be able to derive the $\sqrt{2n/\pi}$ for $k = 2$ that way.
So I'm still looking for further pointers that would help me get a better understanding of the exact value of $\mathbb{E}(m)$ for smaller $k$ and large $n$. |
This is
not an answer, but rather an attempt at working out the $m=\infty$ case properly.
Let's assume we want to know the number of words of length $2N$ ($g_{i_1}\ldots g_{i_{2N}}$) that reduce to the identity, and let's call this quantity $E_{2N}$. Let's break into two cases: $i_1 = i_{2N}$ and $i_1 \neq i_{2N}$. Obviously the first case contributes $kE_{2N-2}$. In the second case, there must be some index $2p$ such that $i_{2p} = i_1$ and the initial subword $g_{i_1}\ldots g_{i_{2p}} = e = g_{i_{2p+1}} \ldots g_{i_{2N}}$. This would give us a contribution of $kE_{2p-2} * (k-1)/k E_{2N-2p}$ (from the requirement that $i_{2N} \neq i_1$). Except that we've overcounted these words -- there could be multiple indices $p$ that qualify.
Fortunately, we can use simple inclusion-exclusion to get the correct count. These words look like $g_{i_1}\ldots g_{i_{2p_1-1}} g_{i_1} g_{i_{2p_1+1}} \ldots g_{i_{2p_n -1}} g_{i_1} g_{i_{2p_n +1}} \ldots g_{i_{2N}}$, with each subword $g_{i_{2p_j+1}} \ldots g_{i_{2p_{j+1}-1}} g_{i_1}$ equal to the identity. We get a count $kE_{2p_1 - 2} * (1/k E_{2p_2 - 2p_1}) * \ldots * (1/k E_{2p_n - 2p_{n-1}}) * ((k-1)/k E_{2N-2p_n}$. Each factor of $1/k$ comes from the requirement that the terminal end of the subword is $i_1$; the factor of $(k-1)/k$ comes from the fact that $i_{2N} \neq i_1$, and the initial factor of $k$ comes from summing over possible values of $i_1$.
So we have the following recursion:
\begin{equation}E_{2N} = kE_{2N-2} + \sum_{p=1}^{N-1} (k-1) E_{2p-2} E_{2N-2p} - \sum_{0<p_1<p_2<N} (k-1)/k\, E_{2p_1 -2} E_{2p_2 -2p_1} E_{2N-2p_2} + \ldots\end{equation}
Writing this as a generating function equation, with
\begin{equation}E(x) = \sum_n E_{2n} x^n,\end{equation}
we get
\begin{equation}E(x) = 1 + kxE(x) + (k-1)x E(x)(E(x)-1) - \frac{k-1}{k} x E(x)(E(x)-1)^2 + \frac{k-1}{k^2} x E(x)(E(x)-1)^3 -\ldots\end{equation}
or
\begin{equation}E(x) = 1 + xE(x) \left[k + \frac{(k-1)(E(x)-1)}{1 + (E(x)-1)/k}\right] \,.\end{equation}
The extra $1$ term at the beginning is to account for $E_0$, and the $(E(x)-1)$ terms account for the fact that the subwords cannot be $0$-length. Multiplying both sides by $k-1+E$ we get
\begin{equation}0 = (k-1) - (k-2)E(x) + (k^2x -1) E(x)^2,\end{equation}
or
\begin{equation}E(x) = \frac{k \sqrt{1-4(k-1)x} - (k-2)}{2(1-k^2x)} \,.\end{equation}
I would greatly appreciate it if people were to check my math. |
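Since the answer explicitly asks for a check: the snippet below (my own code) compares the Taylor coefficients of the closed form against a brute-force count of words reducing to the identity, treating each $g_i$ as an involution ($g_i^2 = e$), which is what the recursion's first case assumes. The greedy stack reduction is confluent, so it computes the true normal form.

```python
from fractions import Fraction
from math import factorial
from itertools import product

def series_coeffs(k, terms):
    """E_0, E_2, ..., read off the closed form
    E(x) = (k*sqrt(1 - 4(k-1)x) - (k-2)) / (2(1 - k^2 x))."""
    def binom_half(n):  # generalized binomial coefficient C(1/2, n)
        p = Fraction(1)
        for i in range(n):
            p *= Fraction(1, 2) - i
        return p / factorial(n)
    numer = [k * binom_half(n) * Fraction(-4 * (k - 1)) ** n for n in range(terms)]
    numer[0] -= (k - 2)
    coeffs, prev = [], Fraction(0)
    for n in range(terms):          # multiply by the geometric series of 1/(1 - k^2 x)
        prev = numer[n] / 2 + k * k * prev
        coeffs.append(prev)
    return coeffs

def brute_count(k, length):
    """Number of words of the given length over k involutive generators
    that reduce to the identity (greedy stack reduction)."""
    total = 0
    for word in product(range(k), repeat=length):
        stack = []
        for g in word:
            if stack and stack[-1] == g:
                stack.pop()
            else:
                stack.append(g)
        total += not stack
    return total

for k in (2, 3, 4):
    print(k, [int(c) for c in series_coeffs(k, 4)],
          [brute_count(k, 2 * n) for n in range(4)])  # the two lists agree
```

For $k=2$ this reproduces the central binomial coefficients $1, 2, 6, 20$, as expected for returning walks on a line.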
What is the filtration $(\mathfrak{F}_t)$ encircled below?
Is it $(\mathfrak{F}_t) = (\sigma(W_t)) = (\sigma(\tilde{W_t})), t \in [0,T]$?
Or is it $(\mathfrak{F}_t) = (\sigma(\hat{W_t})), t \in [0,T]$?
The reference (p. 271, 275, 336) suggests that it is in fact $(\sigma(W_t)) = (\sigma(\tilde{W_t}))$, but I am not really sure I am reading this right.
If so, does that mean there are 2 probability measures being considered in the martingale: the risk-neutral measure for the filtration and the forward measure as the probability measure? |
The Annals of Mathematical Statistics, Volume 39, Number 5 (1968), 1507-1512.
Cross Spectral Analysis of a Gaussian Vector Process in the Presence of Variance Fluctuations
Abstract
Let $x'(t) = (x_1(t),x_2(t))$, $(t = 1, 2, \cdots)$ be a two-dimensional, Gaussian, vector process. Let the process $x'(t)$ have the representation \begin{equation*}\tag{1.1}x'(t) = \sum^p_{m = 0}B_my (t - m),\end{equation*} where \begin{align*}\tag{1.2} B_m &= \{b_{ijm}; i,j = 1, 2\}; \\ y'(t) &= (y_1(t), y_2(t)); \\ y_l(t) &= \sigma_l(t)\epsilon_l(t)\quad (l = 1, 2).\end{align*} The random variables $\epsilon_l(t)$ are independently and normally distributed with mean zero and variance unity. $p$ is a finite positive integer. The coefficients $B_m = (b_{ijm})_{2 \times 2}$ are finite real constants, and the $\sigma^2_l(t)$ are non-random sequences of positive numbers which are not, in general, equal, but do satisfy the conditions \begin{equation*}\tag{1.3}N^{-1}\sum^N_{t = 1}\sigma^2_l(t) = \nu_l < \infty\quad (\text{as } N \rightarrow \infty),\end{equation*} and $L \leqq \sigma^2_l(t) \leqq U < \infty\quad (t = 1, 2, \cdots)$. The relation (1.1) is a multivariate representation of a finite moving average process with time-trending coefficients. Consider the matrix \begin{align*}\tag{1.4}F (\lambda) &= \begin{pmatrix}f_{11}(\lambda) & f_{12}(\lambda) \\ f_{21}(\lambda) & f_{22}(\lambda)\end{pmatrix} \\ &= G(\lambda)\begin{pmatrix}\nu_1 & 0 \\ 0 & \nu_2\end{pmatrix} G^{\ast'}(\lambda),\end{align*} where $G(\lambda) = \sum^p_{m = 0} B_me^{im\lambda}$ and $G^\ast(\lambda)$ is its complex conjugate. Under the condition (1.3), Herbst [1] has defined $f_{11}(\lambda)$ and $f_{22}(\lambda)$ as the spectral densities of the processes $x_1(t)$ and $x_2(t)$ respectively, and considered their estimation. Here we generalize Herbst's [1] results to a vector process and show that under the conditions (1.3) and (3.3), $f_{12}(\lambda)$, which is defined as the cross spectral density of the processes $x_1(t)$ and $x_2(t)$, can consistently be estimated.
Article information. Source: Ann. Math. Statist., Volume 39, Number 5 (1968), 1507-1512. First available in Project Euclid: 27 April 2007. Permanent link: https://projecteuclid.org/euclid.aoms/1177698132. DOI: 10.1214/aoms/1177698132. Mathematical Reviews number (MathSciNet): MR232523. Zentralblatt MATH identifier: 0167.17601.
Rao, T. Subba. Cross Spectral Analysis of a Gaussian Vector Process in the Presence of Variance Fluctuations. Ann. Math. Statist. 39 (1968), no. 5, 1507--1512. doi:10.1214/aoms/1177698132. https://projecteuclid.org/euclid.aoms/1177698132 |
The simple answer:
for anything further than the moon, you should use FTL.
The slightly more technical answer:
18 million km is the break-even point.
Obviously, price is going to have a large effect on what you actually use FTL for, but if speed is your worry, FTL is already 3 times faster by Mars (at closest approach), and it just gets better from there. By the time you're sending stuff to Proxima, there's a 12300-times speed increase (it doesn't quite hit 12500 because of the acceleration times).
Short Derivation
The closest planet-like object beyond the moon is Mars. At its closest approach, it's 56 million km from Earth. That's 3 minutes, 7 seconds by light.
Our hard drives can be processed in less than a minute, hit max speed in less than a minute, then decelerate in less than a minute. That's less than 3 minutes. Ignoring the fact that the drive is moving during acceleration (reducing the time even more), at 12.5k c, the 56 million km takes 15 ms.
Less than 3 minutes + 15 milliseconds is less than the 3 minutes and 7 seconds it takes light at closest approach. For any other part of the orbit, the difference is even greater. For any other planet or moon, the difference isn't even close.
Longer Derivation
I'll use exactly 60 seconds for processing time since "less than" is annoying. Then, let's assume constant acceleration to/from 12500 c.
$A(t)=a$
$V(t)=\int{A(t)}dt$$=at+v_0$$=at$
$D(t)=\int{V(t)}dt=\frac{a}{2}t^2+v_0t+d_0$$=\frac{a}{2}t^2$
Where $a$ is constant acceleration, $v_0$ is initial velocity (zero here), $d_0$ is initial position (which we can call zero), and A, V, D are functions of acceleration, velocity, and distance.
First, let's solve for $a$:
$V(t)=12500c$$=3.747\cdot10^{12} \frac{m}{s}$$=a\cdot60s$
$a=\frac{1}{60s}\cdot3.747\cdot10^{12}\frac{m}{s}=$$6.245\cdot10^{10}\frac{m}{s^2}$
(About 6 billion gees.)
Now, how far does the drive go before hitting max velocity?
$d=\frac{6.245\cdot10^{10}}{2}\frac{m}{s^2}(60s)^2=$$1.124\cdot10^{14}m$
Which is about 8 times the distance to the edge of the solar system. Since your furthest colony, Proxima b, is 4.2 ly ($3.978\cdot10^{16}m$) away, we need to use piecewise functions for it, but for everything else, we'll never see max velocity.
We can solve for the time taken to get to any specific planet. Since (I presume) we have to accelerate then decelerate, take the half-distance time and multiply by 2.
$\frac{d}{2}=\frac{a}{2}t_\text{half}^2$$\rightarrow t_\text{half}=\sqrt{\frac{d}{a}}$$\rightarrow t=2t_\text{half}=2\sqrt{\frac{d}{a}}$
Specific Travel Times
$t_\text{MarsClose}=2\sqrt{\frac{56\cdot10^{9}m}{6.245\cdot10^{10}\frac{m}{s^2}}}=1.894\ s$ (plus 60 seconds for processing is 62-ish seconds, compared to 187 seconds at light speed)

$t_\text{MarsAverage}=2\sqrt{\frac{225\cdot10^{9}m}{6.245\cdot10^{10}\frac{m}{s^2}}}=3.796\ s$ (64 seconds compared to 751 seconds)

$t_\text{MarsFar}=2\sqrt{\frac{401\cdot10^{9}m}{6.245\cdot10^{10}\frac{m}{s^2}}}=5.068\ s$ (65 seconds compared to 1338 seconds)

$t_\text{JupiterNear}=2\sqrt{\frac{588\cdot10^{9}m}{6.245\cdot10^{10}\frac{m}{s^2}}}=6.137\ s$ (66 seconds compared to 1961 seconds)

$t_\text{JupiterFar}=2\sqrt{\frac{968\cdot10^{9}m}{6.245\cdot10^{10}\frac{m}{s^2}}}=7.874\ s$ (68 seconds compared to 3229 seconds)

$t_\text{Proxima}=2\cdot 60\,s+\frac{3.978\cdot10^{16}m-2\cdot1.124\cdot10^{14}m}{3.747\cdot10^{12}\frac{m}{s}}=10676\ s$ (about 3 hours compared to about 4.2 years; the ramp-up and ramp-down each cover $1.124\cdot10^{14}\,m$ and take 60 seconds)
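The piecewise logic above can be packaged in a few lines (my own sketch; the full-speed branch charges both the ramp-up and the ramp-down distance before coasting):

```python
import math

a = 6.245e10        # m/s^2, from the derivation above
v_max = 3.747e12    # m/s, i.e. 12500 c

def travel_time(d):
    """One-way FTL trip time in seconds (excluding the ~60 s processing)."""
    d_ramp = v_max**2 / a               # distance used going 0 -> v_max -> 0
    if d <= d_ramp:
        return 2 * math.sqrt(d / a)     # never reaches max velocity
    return 2 * v_max / a + (d - d_ramp) / v_max

for name, d in [("Mars (close)", 56e9), ("Mars (far)", 401e9),
                ("Jupiter (near)", 588e9), ("Proxima b", 3.978e16)]:
    print(f"{name}: {travel_time(d):,.3f} s")
```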
Exact Point of Equality
We can calculate the exact distance where the two times are equal. $p$ is the 60 seconds of processing time.
$\frac{d}{c}=2\sqrt{\frac{d}{a}}+p$
( Mumble mumble stupid sign errors doing it by hand, just plug it into WolframAlpha /mumble mumble.)
$d=\frac{1}{a}[acp+2c^2\pm2\sqrt{ac^3p+c^4}]$$=cp+\frac{2c^2}{a}\pm\frac{2}{a}\sqrt{ac^3p+c^4}$
$cp+\frac{2c^2}{a}$$=3\cdot10^8\frac{m}{s}\cdot60s+\frac{2(3\cdot10^8\frac{m}{s})^2}{6.245\cdot10^{10}\frac{m}{s^2}}=$$1.8\cdot10^{10}m$
$\frac{2}{a}\sqrt{ac^3p+c^4}$$=\frac{2}{6.245\cdot10^{10}\frac{m}{s^2}}\sqrt{6.245\cdot10^{10}\frac{m}{s^2}\cdot60s\cdot(3\cdot10^8\frac{m}{s})^3+(3\cdot10^8\frac{m}{s})^4}=$$3.221\cdot10^8m$
Since the second term is tiny compared to the first term, we can pretty much ignore it, but the exact distances where the two meet are:
$d_+=$$1.832\cdot10^{10}m$
$d_-=$$1.768\cdot10^{10}m$
Plotting the curves on a graph, I only see an intersection at $d_+$, so I'd go with that one.
$1.8\cdot10^{10}\,m$ is 18 million km, or 1.019 light-minutes, which makes sense, given the 60-second processing time and how ridiculously fast our FTL accelerates.
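The break-even equation can also be solved directly (my own check, using the substitution $u=\sqrt{d}$ to sidestep the sign-error minefield):

```python
import math

c, a, p = 3.0e8, 6.245e10, 60.0   # light speed (m/s), acceleration (m/s^2), processing (s)

# d/c = 2*sqrt(d/a) + p;  with u = sqrt(d):  u^2/c - 2u/sqrt(a) - p = 0
u = c / math.sqrt(a) + math.sqrt(c**2 / a + c * p)
d_even = u * u
print(f"{d_even:.3e} m")  # ~1.83e10 m, matching d_plus above
```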
Conclusion
For anything further than 18 million km, FTL is faster. That's a lot further than the moon (about 0.39 million km), but nowhere near Mars (56 million km at closest approach) or Jupiter (588 million km at closest approach). |
Defining parameters
Level: \( N \) = \( 13 \)
Weight: \( k \) = \( 3 \)
Nonzero newspaces: \( 2 \)
Newforms: \( 2 \)
Sturm bound: \(42\)
Trace bound: \(2\)
Dimensions
The following table gives the dimensions of various subspaces of \(M_{3}(\Gamma_1(13))\).
| | Total | New | Old |
|---|---|---|---|
| Modular forms | 20 | 20 | 0 |
| Cusp forms | 8 | 8 | 0 |
| Eisenstein series | 12 | 12 | 0 |

Decomposition of \(S_{3}^{\mathrm{new}}(\Gamma_1(13))\)
We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
| Label | \(\chi\) | Newforms | Dimension | \(\chi\) degree |
|---|---|---|---|---|
| 13.3.d | \(\chi_{13}(5, \cdot)\) | 13.3.d.a | 4 | 2 |
| 13.3.f | \(\chi_{13}(2, \cdot)\) | 13.3.f.a | 4 | 4 |
Time series of returns, $r_t$, in finance are often modeled with some type of conditional heteroskedasticity model, e.g. ARCH(1):
$$r_t = \sigma_t z_t$$ $$\sigma_t^2 = a_0 +a_1 r_{t-1}^2$$
where, say, $z_t \sim N(0,1)$, which implies that
$$r_t = z_t \sqrt{a_0 +a_1 r_{t-1}^2}$$
and hence the returns are not independent. However, Value-at-risk (VaR) seems to be estimated by constructing a single empirical distribution of the observed returns and taking some quantile. But since the returns are not independent, they cannot be represented by one univariate distribution, so why this procedure to calculate VaR is considered valid? It seems to me that, at the very least, VaR estimated in this way would be biased, but I am not sure I have seen corrections being applied / discussed for this.
Add 1 It seems to me that given a time series with a persistent volatility (e.g. one with a high order of the ARCH term above, say 100), any finite period of observations, say 250, is likely to have explored a smaller part of the total space compared to IID returns, so I would have thought that one has to correct somehow for this when estimating VaR from historical observations in this case. |
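To see the concern concretely, here is a toy ARCH(1) simulation (my own snippet, with arbitrary parameters) contrasting the single-empirical-distribution VaR with the model's conditional VaR:

```python
import numpy as np
from statistics import NormalDist

# Toy ARCH(1): r_t = sigma_t * z_t,  sigma_t^2 = a0 + a1 * r_{t-1}^2
rng = np.random.default_rng(42)
a0, a1, T, alpha = 0.05, 0.3, 100_000, 0.01
r = np.zeros(T)
for t in range(1, T):
    r[t] = np.sqrt(a0 + a1 * r[t - 1] ** 2) * rng.standard_normal()

var_hist = -np.quantile(r, alpha)                     # quantile of the pooled sample
sigma_next = np.sqrt(a0 + a1 * r[-1] ** 2)            # next period's conditional vol
var_cond = -sigma_next * NormalDist().inv_cdf(alpha)  # model-based conditional VaR
print(var_hist, var_cond)  # generally differ: the pooled quantile ignores current vol
```

The pooled quantile mixes high- and low-volatility regimes into one fat-tailed distribution, which is exactly the dependence issue raised above.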
A convexified energy functional for the Fermi-Amaldi correction
1. Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152, United States
2. Department of Mathematics and Computer Science, Benedict College, Columbia, SC 29204, United States
$\mathcal{E}$, we prove that $\mathcal{E}$ has a unique minimizing density $(\rho_{1},\rho_{2})$ when $N_{1}+N_{2}\leq Z+1$ and $N_{2}$ is close to $N_{1}$.
Keywords: $L^{1}$ constrained minimization, ground state electron density, Fermi-Amaldi correction, convex minorant, spin polarized system, degree theory, Thomas-Fermi theory.
Mathematics Subject Classification: Primary: 35J47, 35J91, 49S05; Secondary: 81Q99, 81V55, 92E1.
Citation: Gisèle Ruiz Goldstein, Jerome A. Goldstein, Naima Naheed. A convexified energy functional for the Fermi-Amaldi correction. Discrete & Continuous Dynamical Systems - A, 2010, 28 (1): 41-65. doi: 10.3934/dcds.2010.28.41
|
Everyone keeps claiming that integer factoring is in $NP$ but I just don't get it... Even with the simplest algorithm (division with all integers up to $\sqrt{n}$) the complexity should be $\sqrt{n}\log(n)$... How is that not in $P$? Is there something I'm missing?
One of the things to remember when dealing with natural numbers (and others, but naturals are the central things here) is the encoding, and that the definitions of $P$ and $NP$ reference the length of the encoding of the input on a Turing Machine (or something closely equivalent).
So the input to integer factoring, as a decision problem, is typically two numbers $n$ and $k$ in $\mathbb{N}$, and the question is whether $n$ has a factor $d \leq k$.
So the magnitude of $n$ is $n$, but the size of its encoding may be only $O(\log n)$ (for example, in binary). This is exponentially smaller than $n$ (i.e. if we take $n' = \log_{2} n$, then $n = 2^{n'}$).
So then the $\sqrt{n}\log n$ "obvious" algorithm runs in time $2^{\frac{n'}{2}}\cdot n'$, which is exponential in the input size. |
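A sketch of the decision problem and its trial-division cost (my own code; the loop runs up to $\min(k,\sqrt n)$, i.e. about $2^{n'/2}$ steps for an $n'$-bit prime):

```python
import math

def has_factor_leq(n: int, k: int) -> bool:
    """Decision version used above: does n have a nontrivial factor d <= k?
    Any factor pair (d, n/d) has one member <= sqrt(n), so trial division
    up to min(k, isqrt(n)) suffices -- still ~2^(n'/2) steps for an
    n'-bit prime n."""
    for d in range(2, min(k, math.isqrt(n)) + 1):
        if n % d == 0:
            return True
    return False

print(has_factor_leq(91, 10))                 # True: 91 = 7 * 13
print(has_factor_leq(2**31 - 1, 2**31 - 2))   # False: 2^31 - 1 is a Mersenne prime
```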
I recently discovered the LSMonteCarlo library in R which basically determines the price of American options via the Longstaff Schwartz method.
I tried AmerPutLSM which per the description first simulates paths of a geometric Brownian Motion and then uses the Longstaff Schwartz method.
The first few applications did not convince me, so I tried to price a European Put Option with this function:
library(LSMonteCarlo)
s0 <- 100
strike <- 100
sigma <- 0.03
set.seed(123)
AmerPutLSM(Spot = s0, sigma = sigma, n = 10000, m = 12, Strike = strike, r = 0, mT = 1, dr = 0)

American Put Option
Price: 1.187689
So basically I consider 10000 paths of a geometric Brownian motion with $\sigma = 0.03$ and an ATM put option (strike = $S_0$ = 100). The maturity of the option is set to 1 (mT = 1) and the number of time steps in the simulation is set to 12 (m = 12). This means that the option can be exercised at the end of each month (Bermudan type). For simplicity I assumed no interest rate (r = 0) and zero dividends (dr = 0). This function tells me that the price of this option is about 1.188.
But, if we compare this with the Black-Scholes Put Price of a European Put Option we get that
$$ V_{\text{European}} = \text{Strike} \cdot \Phi\Bigl(\frac \sigma 2 \Bigr) - S_0 \cdot \Phi\Bigl(-\frac \sigma 2 \Bigr) = 1.196782. $$
strike * pnorm(sigma/2) - s0 * pnorm(-sigma/2)
[1] 1.196782
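The same Black-Scholes figure can be reproduced with a few lines of stdlib Python (my own check):

```python
from statistics import NormalDist

s0 = strike = 100.0
sigma = 0.03
Phi = NormalDist().cdf

# With r = 0, T = 1 and an ATM strike, d1 = sigma/2 and d2 = -sigma/2
put = strike * Phi(sigma / 2) - s0 * Phi(-sigma / 2)
print(round(put, 6))  # 1.196782
```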
This makes no sense, since the value of the European option should always be less than that of its American (or Bermudan) counterpart.
Does anyone have an explanation for this?
Thank you. |
I'm trying to format an optimization problem but I am having trouble aligning and labeling it properly in one environment. I have two equations, each written using an \begin{aligned} environment. The first is
\documentclass{report}
\usepackage{amsmath}
\begin{document}
\begin{equation*}
  \begin{aligned}
    & \underset{y \in X,\ u \in Y}{\text{minimize}} && J(y,u) \\
    &\text{subject to}
  \end{aligned}
\end{equation*}
\end{document}
and the second is
\documentclass{report}
\usepackage{amsmath}
\begin{document}
\begin{equation*}
  \begin{cases}
    \begin{aligned}
      -\nabla^2 y &= u &\text{ for } x \text{ in } \Omega, \\
      y &= 0 &\text{ for } x \text{ on } \partial \Omega.
    \end{aligned}
  \end{cases}
\end{equation*}
\end{document}
I'd like to be able to either join them together into one equation but have it formatted the same as the above code, or, somehow, leave them separated as two equations but then label them jointly as one equation so I can refer the pair jointly. |
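For what it's worth, one possible way to merge them (a sketch only: it nests the constraint's `aligned` inside a `\left\{ ... \right.` brace and uses a single `\label` for joint referencing; untested against any particular preamble beyond `amsmath`):

```latex
\documentclass{report}
\usepackage{amsmath}
\begin{document}
\begin{equation}
  \begin{aligned}
    & \underset{y \in X,\ u \in Y}{\text{minimize}} && J(y,u) \\
    & \text{subject to} && \left\{
      \begin{aligned}
        -\nabla^2 y &= u && \text{ for } x \text{ in } \Omega, \\
        y &= 0 && \text{ for } x \text{ on } \partial \Omega.
      \end{aligned}\right.
  \end{aligned}
  \label{eq:ocp}
\end{equation}
\end{document}
```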
Your confusion stems basically from comparing results in different conventions.
Basically, there is always a phase difference of $\pi$ between the two output ports of a beam splitter, but this can only ever be 'morally' true, because that statement talks about the phase of the EM field at different points, and thus the phase will be different depending on where exactly you fix your measuring point on the two input and the two output ports. In situations where you have two waves co-propagating, then their relative phase is perfectly well-defined, but for the ports of the beam splitter, you're comparing phases at different beams in different positions and moving in different directions, so the whole thing is impossible without some artificial way to fix the convention.
Broadly speaking, there are two separate ways to fix the convention: one with an explicit $\pi$ phase shift,$$\begin{pmatrix} a_{\mathrm{out},1} \\ a_{\mathrm{out},2} \end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\begin{pmatrix} a_{\mathrm{in},1} \\ a_{\mathrm{in},2} \end{pmatrix},$$and one with several explicit $\pi/2$ phases:$$\begin{pmatrix} a_{\mathrm{out},1} \\ a_{\mathrm{out},2} \end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}\begin{pmatrix} a_{\mathrm{in},1} \\ a_{\mathrm{in},2} \end{pmatrix}.$$Those two conventions are exactly equivalent, since they can be transformed by adding a $\pi/2$ phase to $a_{\mathrm{in},2}$ and $a_{\mathrm{out},2}$,$$\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}=\frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix}\begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix},$$and experimentally this is equivalent to adding a thin slab of glass on one input and one output port. More importantly, though, you didn't know the precise value of the optical path before the $(\mathrm{in}{,}2)$ port or after the $(\mathrm{out}{,}2)$ output, so quibbling about these phases is completely moot.
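A quick numerical check of this equivalence (my own snippet):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # convention with an explicit pi
S = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # convention with pi/2 phases
P = np.diag([1, -1j])                           # pi/2 phase shift on port 2

print(np.allclose(H.conj().T @ H, np.eye(2)))   # True: unitary
print(np.allclose(S.conj().T @ S, np.eye(2)))   # True: unitary
print(np.allclose(P @ S @ P, H))                # True: same physics up to port phases
```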
What you do need is for the matrix governing the coupling to be unitary, which comes from a strict requirement of energy conservation.
Now, this requirement of unitarity can indeed sound a bit intimidating and exotic, but it is important to note that the requirement of unitarity has nothing to do with quantum mechanics, and it is already present within the hamiltonian classical-mechanics description of the system.
More precisely, when we say that the beam splitter can be described by a matrix, we are making two core assertions about the electromagnetic fields we're considering:
First, we are asserting that the EM fields we are willing to consider need to be linear combinations of only two pre-specified modes, which basically look like this:
[figure: the two input modes]
Second, we are recognizing that those fields can also be expressed as linear combinations of two modes which look like this,
[figure: the two output modes]
which are characterized by having all of the output energy on only one of the output ports.
Both of these sets of modes are bases for the same linear subspace of field modes, which means that each set can be expressed as a linear combination of the other set; in other words, that means that the amplitudes of each set of modes are related via some matrix.
More importantly, when we sit down to describe the (classical) field, we write either$$\mathbf E(\mathbf r,t) = \mathrm{Re}\mathopen{}\left[\sum_{j=1}^2 \alpha_{\mathrm{in},j}(t) \mathbf E_{\mathrm{in},j}(\mathbf r)\right]$$or$$\mathbf E(\mathbf r,t) = \mathrm{Re}\mathopen{}\left[\sum_{j=1}^2 \alpha_{\mathrm{out},j}(t) \mathbf E_{\mathrm{out},j}(\mathbf r)\right],$$where $\alpha_{\mathrm{in},j}(t)$ and $\alpha_{\mathrm{out},j}(t)$ are the complex-valued canonical variables that describe the dynamics of those modes, and which satisfy the dynamical equations$$\frac{\mathrm d^2}{\mathrm dt^2}\alpha_{x,j}(t) = -\omega^2 \alpha_{x,j}(t).$$
The tricky part is the normalization: because $\alpha_{x,j}(t)$ and $\mathbf E_{x,j}(\mathbf r)$ only ever appear (thus far) in the product $\alpha_{x,j}(t) \mathbf E_{x,j}(\mathbf r)$, there is an ambiguity in a common complex factor that can be put on either side, including both normalization and phase.
For the normalization, there is an objective absolute standard that needs to be adhered to: namely, that for each of the modes, the total energy flow across the beam-splitter midline needs to be constant. This is the only way to correctly quantize the system.
For the phase, there is no objective or absolute standard - there is a complete phase ambiguity on all four of the $\mathbf E_{x,j}(\mathbf r)$, and correspondingly on the $\alpha_{x,j}(t)$. You're free to pick any phase that you find convenient, but you do need to choose one.
And, moreover, the phase conventions that you choose for the $\mathrm{in}$ ports cannot be used to set those for the $\mathrm{out}$ ports, or vice versa, because they refer to completely different modes evaluated at different places. They're completely independent quantities.
Once you fix the total incoming energy flow per mode at $R$ (in joules per second), then the total energy flow can be shown to be both$$F = R\sum_{j=1}^2 |\alpha_{\mathrm{in},j}(t)|^2$$and$$F = R\sum_{j=1}^2 |\alpha_{\mathrm{out},j}(t)|^2,$$and those two energy flows need to match, because of energy conservation. This means, therefore, that the expression of the output canonical variables as linear combinations of the input canonical variables,$$\begin{pmatrix} \alpha_{\mathrm{out},1} \\ \alpha_{\mathrm{out},2} \end{pmatrix}=\begin{pmatrix} c & d \\ e & f \end{pmatrix}\begin{pmatrix} \alpha_{\mathrm{in},1} \\ \alpha_{\mathrm{in},2} \end{pmatrix}=M\begin{pmatrix} \alpha_{\mathrm{in},1} \\ \alpha_{\mathrm{in},2} \end{pmatrix},$$needs to preserve the norm $\sum_{j=1}^2 |\alpha_{x,j}(t)|^2$. Or, in other words, the matrix of the transformation needs to be unitary.
(Why unitary and not just having unit norm on each row or each column? Because the beam splitter needs to conserve $\sum_{j=1}^2 |\alpha_{x,j}(t)|^2$ both for cases where the input is on only one of the input ports, but also for cases where there's light on both. If the matrix isn't unitary, there will be a superposition of input beams that will have a different output energy than the sum of the inputs.)
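As a concrete sanity check, here is a short Python sketch (assuming a symmetric 50/50 matrix, which is just one of the many allowed phase conventions) verifying that a unitary beam-splitter matrix conserves $\sum_j |\alpha_j|^2$ for arbitrary superposed inputs, not just single-port ones:

```python
import math
import random

# Hypothetical 50/50 beam-splitter matrix; the phases here are pure
# convention (any reassignment of phases that keeps M unitary is equally valid).
M = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply(M, a):
    # a_out = M a_in for the two canonical variables
    return [M[0][0] * a[0] + M[0][1] * a[1],
            M[1][0] * a[0] + M[1][1] * a[1]]

def norm2(a):
    # Total energy flow, up to the overall factor R
    return sum(abs(z) ** 2 for z in a)

# Energy conservation for superposed inputs on both ports; this is
# exactly the unitarity requirement discussed above.
random.seed(0)
for _ in range(100):
    a_in = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(2)]
    assert abs(norm2(apply(M, a_in)) - norm2(a_in)) < 1e-12
```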
All of the above is crucial for a correct hamiltonian classical-field-mechanical description of the beam splitter. Once you've done it correctly (but
only after you've done the classical mechanics correctly), you're ready to move on to the quantum mechanics of the system, which is now very easy: all you need to do is replace the hamiltonian canonical variables with annihilation operators,$$\alpha_{x,j}(t) \mapsto \hat{a}_{x,j},$$and you're done.
And, since you had a strict requirement of unitarity on the matrix that interlinked the hamiltonian canonical variables between the input and output sets, you have an identical requirement of unitarity on the matrix that links the input and output annihilation (and therefore creation) operators.
What you
don't have, because you didn't have it on the classical fields, is any additional restriction on what the phases must be, either of $\pi/2$ or $\pi$ or anything, because (again) those are just doomed attempts at comparing phases on different places, which cannot be done to any absolute or objective standard. |
Can I be a pedant and say that if the question states that $\langle \alpha \vert A \vert \alpha \rangle = 0$ for every vector $\lvert \alpha \rangle$, that means that $A$ is everywhere defined, so there are no domain issues?
Gravitational optics is very different from quantum optics, if by the latter you mean the quantum effects of interaction between light and matter. There are three crucial differences I can think of:We can always detect uniform motion with respect to a medium by a positive result to a Michelson...
Hmm, it seems we cannot just superimpose gravitational waves to create standing waves
The above search was inspired by last night's dream, which took place in an alternate version of my 3rd year undergrad GR course. The lecturer talked about a weird equation in general relativity that has a huge summation symbol, and then about gravitational waves emitted from a body. After that lecture, I asked the lecturer whether gravitational standing waves are possible, as I imagined the hypothetical scenario of placing a node at the end of the vertical white line
[The Cube] Regarding The Cube, I am thinking about an energy level diagram like this
where the infinitely degenerate level is the lowest energy level when the environment is also taken into account
The idea is that if the possible relaxations between energy levels are restricted so that, to relax from an excited state, the bottleneck must be passed, then we have a very high entropy, high energy system confined in a compact volume
Therefore, as energy is pumped into the system, the lack of direct relaxation pathways to the ground state plus the huge degeneracy at higher energy levels should result in a lot of possible configurations giving the same high energy, thus effectively creating an entropy trap to minimise heat loss to the surroundings
@Kaumudi.H there is also an addon that allows Office 2003 to read (but not save) files from later versions of Office, and you probably want this too. The installer for this should also be in \Stuff (but probably isn't if I forgot to include the SP3 installer).
Hi @EmilioPisanty, it's great that you want to help me clear up confusions. I think we have a misunderstanding here. When you say "if you really want to "understand"", I thought you were referring to my questions directed at the close voter, not the question in meta. When you mention my original post, do you really think it's a hopeless mess of confusion? Why? Apart from being off-topic, it seems easy enough to understand, doesn't it?
Physics.stackexchange currently uses 2.7.1 with the config TeX-AMS_HTML-full which is affected by a visual glitch on both desktop and mobile version of Safari under latest OS, \vec{x} results in the arrow displayed too far to the right (issue #1737). This has been fixed in 2.7.2. Thanks.
I have never used the app for this site, but if you ask a question on a mobile phone, there is no homework guidance box, as there is on the full site, due to screen size limitations. I think it's a safe assumption that many students are using their phone to place their homework questions, in wh...
@0ßelö7 I don't really care for the functional analytic technicalities in this case - of course this statement needs some additional assumption to hold rigorously in the infinite-dimensional case, but I'm 99% sure that that's not what the OP wants to know (and, judging from the comments and other failed attempts, the "simple" version of the statement seems to confuse enough people already :P)
Why were the SI unit prefixes, i.e.\begin{align}\mathrm{giga} && 10^9 \\\mathrm{mega} && 10^6 \\\mathrm{kilo} && 10^3 \\\mathrm{milli} && 10^{-3} \\\mathrm{micro} && 10^{-6} \\\mathrm{nano} && 10^{-9}\end{align}chosen to be a multiple power of 3?Edit: Although this questio...
the major challenge is how to restrict the possible relaxation pathways so that in order to relax back to the ground state, at least one lower rotational level has to be passed, thus creating the bottleneck shown above
If two vectors $\vec{A} =A_x\hat{i} + A_y \hat{j} + A_z \hat{k}$ and$\vec{B} =B_x\hat{i} + B_y \hat{j} + B_z \hat{k}$, have angle $\theta$ between them then the dot product (scalar product) of $\vec{A}$ and $\vec{B}$ is$$\vec{A}\cdot\vec{B} = |\vec{A}||\vec{B}|\cos \theta$$$$\vec{A}\cdot\...
@ACuriousMind I want to give a talk on my GR work first. That can be hand-wavey. But I also want to present my program for Sobolev spaces and elliptic regularity, which is reasonably original. But the devil is in the details there.
@CooperCape I'm afraid not, you're still just asking us to check whether or not what you wrote there is correct - such questions are not a good fit for the site, since the potentially correct answer "Yes, that's right" is too short to even submit as an answer |
This does not mimic the markup features provided by Stack Exchange, but if you use this template in a file with a ".html" extension, and replace the body, then drag the file to your browser, you should see the MathJax rendered.
<html>
<head>
<title>MathJax</title>
<script type="text/x-mathjax-config">
MathJax.Hub.Config({"HTML-CSS": { preferredFont: "TeX", availableFonts: ["STIX","TeX"] },
tex2jax: { inlineMath: [ ["$", "$"], ["\\(","\\)"] ], displayMath: [ ["$$","$$"], ["\\[", "\\]"] ], processEscapes: true, ignoreClass: "tex2jax_ignore|dno" },
TeX: { noUndefined: { attributes: { mathcolor: "red", mathbackground: "#FFEEEE", mathsize: "90%" } } },
messageStyle: "none"
});
</script>
<script type="text/javascript" src="http://cdn.mathjax.org/mathjax/2.2-latest/MathJax.js?config=TeX-AMS_HTML"></script>
</head>
<body>
The definition of the Euler-Mascheroni constant is
$$\gamma=\lim_{n\to\infty}\left(\sum_{k=1}^n\frac1k-\ln n\right)$$
</body>
</html>
I use this method with the
Markup > Preview in BBEdit feature of BBEdit to get live updating.
Caveat: I may have misinterpreted "offline". To use the template above, you need to be online so that you can access the server at
cdn.mathjax.org, but you can work on MathJax away from MSE. To use MathJax offline, you can download the local version of MathJax as described here. Otherwise, you can get a standalone LaTeX system, many of which are listed here. |
OK, here goes.
We start with changing the notation ($z\to 20z^2$, $-z-3\to r$, $20rz\to y$ means that what was denoted by $z$ will be denoted by $20z^2$ from now on, $r$ is $-z-3$ with
new $z$, so it is $-\sqrt{z/20}-3$ in terms of old $z$, and $y$ denotes $20rz$ with just (re)defined $r,z$).
So, execute the following sequence (the order matters!)$$x\to e^{-x},\ a\to e^{-a},\ mkja\to T,\ ja\to a$$Denote $\varphi(t)=-\log(1-e^{-t})$. For a function $f$ denote by $LRS(f,u,v; h)$ and $RRS(f,u,v;h)$ the left and the right Riemann sums of $f$ on the interval $[u,v]$ with step $h$ respectively (we will always assume that $(v-u)/h\in \mathbb N\cup\{\infty\}$).
Then the above substitutions induce the substitution$$-\log f_{mk}(a^jx)\to \frac{LRS(\varphi,x,\infty;a)+xT-RRS(\varphi,0,T;a)}a$$
The claim to prove reduces to the statement that this function decreases in $a$ as long as we run $a$ along admissible steps for $[0,T]$ with fixed $T$.
We have already seen that replacing Riemann sums by integrals results in the area $A\ge 0$ of the "excess triangle" formed by the vertical line through $x$, the horizontal line through $T$, and the graph of $\varphi$ (in the coordinate system where $x$ runs over the horizontal axis and $T$ over the vertical one) in the numerator. Thus, the function to investigate is $$\frac Aa+\frac {\Phi(a)}a+\frac{\Psi(a)}a$$where $\Phi(a)=LRS(\varphi,x,\infty;a)-\int_x^\infty \varphi(t)\,dt$ and $\Psi(a)=\int_0^T\varphi(t)\,dt-RRS(\varphi,0,T;a)$.
Note that we have a nice series expansion$$\varphi(t)=\sum_{k\ge 1}\frac 1ke^{-kt}$$The integration and the Riemann sum computation are linear operations with respect to the function, so we have$$\Phi(a)=\sum_{k\ge 1}\frac 1k\Phi_k(a),\qquad \Psi(a)=\sum_{k\ge 1}\frac 1k\Psi_k(a)$$ where $\Phi_k$ and $\Psi_k$ are defined in the same way but using $e^{-kt}$ instead of $\varphi(t)$.
Now, we (or, if you prefer, at least I) cannot integrate $\varphi$ or sum it along an arithmetic progression in closed form. However, everybody can do it for an exponential function. The direct computation yields$$\Phi_k(a)=e^{-kx}\left[\frac a{1-e^{-ka}}-\frac 1k\right],\qquad \Psi_k(a)=(1-e^{-kT})\left[\frac 1k-\frac a{e^{ka}-1}\right]$$ Note that in the computation of $\Psi_k(a)$ we have used the fact that $T/a$ is an integer. However, the resulting answer is formally defined for all $a>0$, so we will not use the "fitting condition" anywhere below.
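These closed forms can be checked numerically against the defining Riemann sums; a quick Python sketch (the parameter values are arbitrary, subject to $T/a$ being an integer):

```python
import math

# Closed forms for Phi_k and Psi_k as stated in the text
def Phi_closed(k, a, x):
    return math.exp(-k * x) * (a / (1 - math.exp(-k * a)) - 1 / k)

def Psi_closed(k, a, T):
    return (1 - math.exp(-k * T)) * (1 / k - a / (math.exp(k * a) - 1))

def Phi_direct(k, a, x, terms=1000):
    # LRS(e^{-kt}, x, infinity; a) minus the integral of e^{-kt} over [x, infinity)
    lrs = sum(a * math.exp(-k * (x + j * a)) for j in range(terms))
    return lrs - math.exp(-k * x) / k

def Psi_direct(k, a, T):
    # Integral of e^{-kt} over [0, T] minus RRS(e^{-kt}, 0, T; a); T/a an integer
    n = round(T / a)
    rrs = sum(a * math.exp(-k * j * a) for j in range(1, n + 1))
    return (1 - math.exp(-k * T)) / k - rrs

k, a, x, T = 2, 0.1, 0.5, 1.0
assert abs(Phi_closed(k, a, x) - Phi_direct(k, a, x)) < 1e-9
assert abs(Psi_closed(k, a, T) - Psi_direct(k, a, T)) < 1e-12
```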
Notice also that $$\left[\frac a{1-e^{-ka}}-\frac 1k\right]+\left[\frac 1k-\frac a{e^{ka}-1}\right]=a$$so$$\frac{\Phi_k(a)+\Psi_k(a)}a=\frac{1-e^{-kx}-e^{-kT}}k\left[\frac 1a-\frac k{e^{ka}-1}\right]+ \operatorname{const}$$ Thus we need to show that $$\frac Aa+\sum_{k\ge 1}\frac{1-e^{-kx}-e^{-kT}}{k^2}\left[\frac 1a-\frac k{e^{ka}-1}\right]$$is decreasing or, equivalently, that$$\frac A{a^2}+\sum_{k\ge 1}\frac{1-e^{-kx}-e^{-kT}}{k^2}\left[\frac 1{a^2}-\frac {k^2}{e^{ka}+e^{-ka}-2}\right]\ge 0\,.$$When the excess triangle is absent or is above the graph, we have $e^{-x}+e^{-T}\le 1$ and the factor $k\ge 1$ can only improve this inequality, so each term is non-negative and the estimate is trivial, thus justifying my first remark.
The other case is slightly more interesting.
Let $t>0$ satisfy $e^{-t}+e^{-x}=1$. We have now $t>T$. Write$$A=\int_T^{t}\varphi(s)\,ds-(t-T)\varphi(t)=\sum_{k\ge 1}\frac 1k\left[\int_T^{t}e^{-ks}\,ds-(t-T)e^{-kt}\right]\\=\sum_{k\ge 1}\frac{e^{-kT}-e^{-kt}-k(t-T)e^{-kt}}{k^2}$$We want to combine these terms multiplied by $\frac 1{a^2}$ with the corresponding terms in the main sum. It will be more convenient to diminish them first by multiplying them not by the full $\frac 1{a^2}$ but just by the expressions in the brackets in the main sum. Then the result can be rewritten as$$\sum_{k\ge 1}\frac{1-e^{-kx}-e^{-kt}-kte^{-kt}}{k^2}\left[\frac 1{a^2}-\frac {k^2}{e^{ka}+e^{-ka}-2}\right]+\sum_{k\ge 1}\frac{Te^{-kt}}{k}\left[\frac 1{a^2}-\frac {k^2}{e^{ka}+e^{-ka}-2}\right]$$ The second sum is clearly non-negative. Let's look at the first one.We have$$\int_u^\infty\varphi(s)\,ds=\sum_{k\ge 1}\frac 1k\int_u^\infty e^{-ks}\,ds=\sum_{k\ge 1}\frac{e^{-ku}}{k^2}\,.$$Using this with $u=0,x,t$ and flipping the geometric picture for the integral of $\varphi$ over $[t,\infty)$ to the vertical axis, as usual, we see that$$\sum_{k\ge 1}\frac{1-e^{-kx}-e^{-kt}}{k^2}=tx$$ (the area of the remaining rectangle).On the other hand,$$\sum_{k\ge 1}\frac{kte^{-kt}}{k^2}=t\varphi(t)=tx$$as well. So the first factors form a sequence whose numerators are (obviously, because $u\mapsto(1+u)e^{-u}$ is decreasing on $[0,+\infty)$) increasing and whose sum equals $0$. Thus, they break from $-$ to $+$ just once when $k$ goes up. However the second factors are increasing in $k$. Thus, the sum we are interested in is non-negative as well.
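Both series evaluations in the last step can be confirmed numerically; a sketch, with $x$ arbitrary and $t$ chosen so that $e^{-x}+e^{-t}=1$:

```python
import math

x = 0.9
t = -math.log(1 - math.exp(-x))   # ensures e^{-x} + e^{-t} = 1, i.e. phi(t) = x

N = 200000
# Sum over k of (1 - e^{-kx} - e^{-kt}) / k^2, claimed to equal t*x
s1 = sum((1 - math.exp(-k * x) - math.exp(-k * t)) / k**2 for k in range(1, N))
# Sum over k of k t e^{-kt} / k^2 = t * phi(t), also claimed to equal t*x
s2 = sum(t * math.exp(-k * t) / k for k in range(1, N))

assert abs(s1 - t * x) < 1e-4     # tail of the 1/k^2 series is ~1/N
assert abs(s2 - t * x) < 1e-9
```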
The End |
Lie algebras over rings (Lie rings) are important in group theory. For instance, to every group $G$ one can associate a Lie ring
$$L(G)=\bigoplus _{i=1}^\infty \gamma _i(G)/\gamma _{i+1}(G),$$
where $\gamma _i(G)$ is the $i$-th term of the lower central series of $G$. The addition is defined by the additive structure of $\gamma _i(G)/\gamma _{i+1}(G)$, and the Lie product is defined on homogeneous elements by $[x\gamma _{i+1}(G),y\gamma _{j+1}(G)]=[x,y]\gamma _{i+j+1}(G)$, and then extended to $L(G)$ by linearity.
There are several other ways of constructing Lie rings associated to groups, and there are numerous applications of these. One of the most notable is the solution of the Restricted Burnside Problem by Zelmanov; see M. R. Vaughan-Lee, "The Restricted Burnside Problem". Other books related to these rings include Kostrikin, "Around Burnside"; Huppert and Blackburn, "Finite Groups II"; and Dixon, du Sautoy, Mann, and Segal, "Analytic pro-$p$ Groups".
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
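In two dimensions this is easy to check numerically: composing the reflections across lines through the origin at angles $\theta_1$ and $\theta_2$ gives the rotation by $2(\theta_1-\theta_2)$. A minimal Python sketch:

```python
import math

def refl(theta):
    # Reflection across the line through the origin at angle theta
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return [[c, s], [s, -c]]

def rot(phi):
    # Counterclockwise rotation by phi
    c, s = math.cos(phi), math.sin(phi)
    return [[c, -s], [s, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

t1, t2 = 0.7, 0.2
P = matmul(refl(t1), refl(t2))
R = rot(2 * (t1 - t2))
assert all(abs(P[i][j] - R[i][j]) < 1e-12 for i in range(2) for j in range(2))
```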
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean, it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leaves you at a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means that to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\operatorname{orb}(u) \neq \operatorname{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2}{2}\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh apparenty there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
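For what it's worth, the brute-force verification can at least be automated with a small closure computation; a sketch, encoding permutations of $\{0,1,2,3\}$ as tuples:

```python
def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations of {0, 1, 2, 3} as tuples
    return tuple(p[q[i]] for i in range(4))

def generated(gens):
    # Closure of {identity} under multiplication by the generators;
    # in a finite group this yields the generated subgroup.
    e = (0, 1, 2, 3)
    G, frontier = {e}, {e}
    while frontier:
        frontier = {compose(g, h) for g in frontier for h in gens} - G
        G |= frontier
    return G

t2 = (1, 0, 2, 3)   # the 2-cycle (1 2)
c3 = (1, 2, 0, 3)   # the 3-cycle (1 2 3)
c4 = (1, 2, 3, 0)   # the 4-cycle (1 2 3 4)
d  = (2, 1, 0, 3)   # the transposition (1 3)
v  = (1, 0, 3, 2)   # the double transposition (1 2)(3 4)

assert len(generated({t2})) == 2
assert len(generated({c3})) == 3
assert len(generated({c4})) == 4
assert len(generated({t2, c3})) == 6    # a copy of S_3
assert len(generated({c4, d})) == 8     # dihedral, a Sylow 2-subgroup
assert len(generated({c3, v})) == 12    # A_4
assert len(generated({t2, c4})) == 24   # all of S_4
```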
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
A relation may be described by: a listed set of ordered pairs, a graph, or a rule.
The set of all first elements of a set of ordered pairs is known as the domain and the set of all second elements of a set of ordered pairs is known as the range.
Alternatively, the domain is the set of independent values and the range is the set of dependent values.
The domain of a relation is the set of values of $x$ in the relation.
The range of a relation is the set of values of $y$ in the relation.
The domain and range of a relation are often described using
set notation.
If a relation is described by a rule, it should also specify the domain. For example:
the relation $\{(x,y):y=x+2,x\in\{1,2,3\}\}$ describes the set of ordered pairs $\{(1,3),(2,4),(3,5)\}$; the domain is the set $X=\{1,2,3\}$, which is given; the range is the set $Y=\{3,4,5\}$, which can be found by applying the rule $y=x+2$ to the domain values
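The example above can be reproduced mechanically; a small Python sketch:

```python
# The relation {(x, y) : y = x + 2, x in {1, 2, 3}} as a set of ordered pairs
pairs = {(x, x + 2) for x in {1, 2, 3}}

domain = {x for x, y in pairs}   # set of first elements
rng = {y for x, y in pairs}      # set of second elements

assert pairs == {(1, 3), (2, 4), (3, 5)}
assert domain == {1, 2, 3}
assert rng == {3, 4, 5}
```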
If the domain of a relation is not specifically stated, it is assumed to consist of all real numbers for which the rule has meaning. This is referred to as the implied domain of a relation.
$y=x^2$ has the implied domain $x \in \mathbb{R}$, and implied range $y \ge 0$, where $y \in \mathbb{R}$. $y=\sqrt{x}$ has the implied domain $x \ge 0$, where $x \in \mathbb{R}$, and implied range $y \ge 0$, where $y \in \mathbb{R}$. $y=\dfrac{1}{x}$ has the implied domain $x \ne 0$, where $x \in \mathbb{R}$, and implied range $y \ne 0$, where $y \in \mathbb{R}$. $y=\dfrac{1}{\sqrt{x}}$ has the implied domain $x \gt 0$, where $x \in \mathbb{R}$, and implied range $y \gt 0$, where $y \in \mathbb{R}$. Example 1
State the domain and range of $\{(2,4),(3,9),(4,14),(5,19)\}$.
Domain: $\{2,3,4,5\}$ Range: $\{4,9,14,19\}$ Example 2
State the domain and range of the graph.
Domain: $x \in \mathbb{R}$ Range: $y \ge 0, y \in \mathbb{R}$ Example 3
State the domain and range of the graph.
Domain: $-3 \le x \le 3$ Range: $-3 \le y \le 3$ Example 4
State the domain and range of $y=\sqrt{x}$.
Domain: $x \ge 0, x \in \mathbb{R}$ Range: $y \ge 0, y \in \mathbb{R}$ Example 5
State the domain and range of $y=\dfrac{1}{x+1}$.
Domain: $x \ne -1, x \in \mathbb{R}$ Range: $y \ne 0, y \in \mathbb{R}$ |
In the mathematical subfield of linear algebra or more generally functional analysis, the
linear span (also called the linear hull) of a set of vectors in a vector space is the intersection of all subspaces containing that set. The linear span of a set of vectors is therefore a vector space. Definition
Given a vector space
V over a field K, the span of a set S of vectors (not necessarily finite) is defined to be the intersection W of all subspaces of V that contain S. W is referred to as the subspace spanned by S, or by the vectors in S. Conversely, S is called a spanning set of W, and we say that S spans W.
Alternatively, the span of
S may be defined as the set of all finite linear combinations of elements of S, which follows from the above definition: $$\operatorname{span}(S) = \left \{ \sum_{i=1}^k \lambda_i v_i \,\Big|\, k \in \mathbb{N}, v_i \in S, \lambda _i \in \mathbf{K} \right \}.$$
In particular, if
S is a finite subset of V, then the span of S is the set of all linear combinations of the elements of S. In the case of infinite S, infinite linear combinations (i.e. where a combination may involve an infinite sum, assuming such sums are defined somehow, e.g. if V is a Banach space) are excluded by the definition; a generalization that allows these is not equivalent. Examples
The real vector space $\mathbb{R}^3$ has $\{(2,0,0), (0,1,0), (0,0,1)\}$ as a spanning set. This particular spanning set is also a basis. If $(2,0,0)$ were replaced by $(1,0,0)$, the resulting set would be the canonical basis of $\mathbb{R}^3$.
Another spanning set for the same space is given by $\{(1,2,3), (0,1,2), (-1,1/2,3), (1,1,1)\}$, but this set is not a basis, because it is linearly dependent.
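These claims (including the non-spanning set discussed next) can be verified with a small rank computation; a sketch using a naive row-reduction helper written here, not a library routine:

```python
def rank(rows, eps=1e-12):
    # Naive Gauss-Jordan rank for small matrices (fine for these examples)
    m = [list(map(float, r)) for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if abs(m[i][c]) > eps), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and abs(m[i][c]) > eps:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

assert rank([(2, 0, 0), (0, 1, 0), (0, 0, 1)]) == 3   # spans R^3, a basis
assert rank([(1, 2, 3), (0, 1, 2), (-1, 0.5, 3), (1, 1, 1)]) == 3  # spans R^3, dependent
assert rank([(1, 0, 0), (0, 1, 0), (1, 1, 0)]) == 2   # span is the plane z = 0
```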
The set $\{(1,0,0), (0,1,0), (1,1,0)\}$ is not a spanning set of $\mathbb{R}^3$; instead its span is the space of all vectors in $\mathbb{R}^3$ whose last component is zero. Theorems
Theorem 1: The subspace spanned by a non-empty subset S of a vector space V is the set of all linear combinations of vectors in S.
This theorem is so well known that at times it is referred to as the definition of span of a set.
Theorem 2: Every spanning set S of a vector space V must contain at least as many elements as any linearly independent set of vectors from V.
Theorem 3: Let V be a finite-dimensional vector space. Any set of vectors that spans V can be reduced to a basis for V by discarding vectors if necessary (i.e. if there are linearly dependent vectors in the set). If the axiom of choice holds, this is true without the assumption that V has finite dimension.
This also indicates that a basis is a minimal spanning set when
V is finite-dimensional. Closed linear span
In functional analysis, a closed linear span of a set of vectors is the minimal closed set which contains the linear span of that set. Suppose that X is a normed vector space and let E be any non-empty subset of X. The closed linear span of E, denoted by $\overline{\operatorname{Sp}}(E)$ or $\overline{\operatorname{Span}}(E)$, is the intersection of all the closed linear subspaces of X which contain E.
One mathematical formulation of this is $$\overline{\operatorname{Sp}}(E)=\{u\in X \mid \forall\epsilon>0\,\exists x\in\operatorname{Sp}(E) : \|x-u\|<\epsilon\}.$$ Notes
The linear span of a set is dense in the closed linear span. Moreover, as stated in the lemma below, the closed linear span is indeed the closure of the linear span.
Closed linear spans are important when dealing with closed linear subspaces (which are themselves highly important, consider Riesz's lemma).
A useful lemma
Let
X be a normed space and let E be any non-empty subset of X. Then
(a) $\overline{\operatorname{Sp}}(E)$ is a closed linear subspace of X which contains E,
(b) $\overline{\operatorname{Sp}}(E)=\overline{\operatorname{Sp}(E)}$, viz. $\overline{\operatorname{Sp}}(E)$ is the closure of $\operatorname{Sp}(E)$,
(c) $E^\perp=(\operatorname{Sp}(E))^\perp=(\overline{\operatorname{Sp}(E)})^\perp$.
(So the usual way to find the closed linear span is to find the linear span first, and then the closure of that linear span.)
Matroids
Generalizing the definition of the span of points in space, a subset
X of the ground set of a matroid is called a spanning set if the rank of X equals the rank of the entire ground set.
I don't think these objects can be classified in a manner similar to the normed unital division algebras, if you take "algebra" to mean "vector space $V$ equipped with a bilinear map $V \otimes V \to V$". In particular, I suspect you end up with high-dimensional moduli spaces of such structures in all large dimensions.
Here is a naive calculation of degrees of freedom:
Let $a_{i,j}^k$ be the structure constants of our algebra, assembled into matrices $A^k$, and consider a point $x = (x_1, \ldots, x_n)$. We may write $x \ast x = (x^T A^1 x,\ldots, x^T A^n x)$. The length condition becomes $\left(\sum_i x_i^2\right)^2 = \sum_k \left(x^T A^k x \right)^2$. Writing this out in terms of the coordinates of $x$, we obtain an identity of homogeneous polynomials in $x_i$ of total degree 4, with coefficients that are quadratic in the structure constants. In other words, the space of solutions is an intersection of $\binom{n+3}{4}$ quadric hypersurfaces in $n^3$-dimensional space.
[Revised following YCor's comment:] When we account for the $O(n)$ symmetry of the solution space, we get the formula$$ n^3 - \binom{n+3}{4} - \binom{n}{2}$$which is positive for $2 \leq n \leq 16$ with maximum 299 at $n=13$. When $n$ is large we get more constraints than variables.
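The sign pattern of this count is easy to check mechanically. A minimal Python sketch (exact integer arithmetic; the function name is mine):

```python
from math import comb

def freedom(n):
    """Naive count of degrees of freedom: n^3 structure constants,
    minus C(n+3, 4) quadric constraints, minus dim O(n) = C(n, 2)."""
    return n ** 3 - comb(n + 3, 4) - comb(n, 2)

positive = [n for n in range(2, 30) if freedom(n) > 0]
print(positive)                        # positive exactly for 2 <= n <= 16
print(max(range(2, 30), key=freedom))  # maximum (299) attained at n = 13
```

For large $n$ the quartic $\binom{n+3}{4}$ term dominates the cubic, so the count is eventually negative, matching the claim that there are then more constraints than variables.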
Since the Cayley-Dickson construction exists and provides solutions in arbitrarily large dimension, it is clear that the constraints are highly non-generic. This does not completely eliminate the possibility that in some dimensions there are no solutions, but I think it is at least discouraging as far as classification is concerned. |
Edit: I'm leaving the old post below, but first I want to write out the proof suggested by Bruce from his book, which uses the same ideas more efficiently.
Assume that $\|p-q\|<1$, with $p,q\in A$, a unital C$^*$-algebra. Let $x=pq+(1-p)(1-q)$. Then, as $2p-1$ is a unitary, $$\|1-x\|=\|(2p-1)(p-q)\|=\|p-q\|<1.$$So $x$ is invertible. Now let $x=uz$ be the polar decomposition, $z=(x^*x)^{1/2}\in A$. Then $u=xz^{-1}\in A$. Also, $px=pq=xq$, and $qx^*x=qpq$, so $qx^*x=x^*xq$, and then $qz=zq$. Then$$pu=pxz^{-1}=xqz^{-1}=uzqz^{-1}=uqzz^{-1}=uq.$$So $q=u^*pu$.
=============================================
(the old post starts here)
(A good friend pointed me to the ideas in this answer, so I'm sharing them here)
The result holds in any unital C$^*$-algebra. So assume that $\|p-q\|<1$, with $p,q$ in a unital C$^*$-algebra $A\subset B(H)$.
Claim 1: There is a continuous path of projections joining $p$ and $q$.
Proof. Let $\delta\in(0,1)$ with $\|p-q\|<\delta$. For each $t\in[0,1]$, let $x_t=tp+(1-t)q$. Then$$\|x_t-p\|=\|(1-t)(p-q)\|<\delta(1-t),$$$$\|x_t-q\|=\|t(p-q)\|<\delta t.$$This, together with the fact that $x_t$ is selfadjoint, implies that $\sigma(x_t)\subset K=[-\delta/2,\delta/2]\cup[1-\delta/2,1+\delta/2]$ (since $\min\{t,1-t\}\leq1/2$). Now let $f$ be the continuous function on $K$ defined as $0$ on $[-\delta/2,\delta/2]$ and $1$ on $[1-\delta/2,1+\delta/2]$. Then, for all $t\in[0,1]$, $f(x_t)\in A$ is a projection. And$$t\to x_t\to f(x_t)$$is continuous, completing the proof of the claim. Edit: years later, I posted this answer to a question on MSE that proves the continuity.
Claim 2: We may assume without loss of generality that $\|p-q\|<1/2$.
This is simply a compactness argument, using that each projection in the path $f(x_t)$ is very near another projection in the path. Compactness allows us to make the number of steps finite, and so if we find projections $p=p_0,p_1,\ldots,p_n=q$ and unitaries with $u_kp_ku_k^*=p_{k+1}$, we can multiply the unitaries to get a single unitary that achieves $q=upu^*$.
Claim 3: If $\|p-q\|<1/2$, there exists a unitary $u\in A$ with $q=upu^*$.
Let $x=pq+(1-p)(1-q)$. Then $$\|x-1\|=\|2pq-p-q\|=\|p(q-p)+(p-q)q\|\leq2\|p-q\|<1,$$so $x$ is invertible. Let $x=uz$ be the polar decomposition. Then $u$ is a unitary. Note that$$qx^*x=q(qpq+(1-q)(1-p)(1-q))=qpq,$$so $q$ commutes with $x^*x$ and then with $z=(x^*x)^{1/2}$. Note also that $px=xq$, so $puz=uzq=uqz$. As $z$ is invertible, $pu=uq$, i.e.$$q=u^*pu.$$Note that $u=xz^{-1}\in A$. |
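The construction in Claim 3 can be sanity-checked numerically. Below is a Python sketch for two rank-one projections in $M_2(\mathbb{R})$; the Newton iteration for the orthogonal polar factor is a standard trick, not part of the proof above, and the angle $\theta$ is an arbitrary choice with $\|p-q\|=\sin\theta<1$:

```python
import math

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def inv_T(A):
    """Inverse transpose of a 2x2 matrix."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[1][0] / det], [-A[0][1] / det, A[0][0] / det]]

I2 = [[1.0, 0.0], [0.0, 1.0]]
theta = 0.4
c, s = math.cos(theta), math.sin(theta)
p = [[1.0, 0.0], [0.0, 0.0]]                 # projection onto e1
q = [[c * c, c * s], [c * s, s * s]]         # projection onto (cos theta, sin theta)

x = add(mul(p, q), mul(sub(I2, p), sub(I2, q)))   # x = pq + (1-p)(1-q)

# Orthogonal polar factor of x via Newton's iteration u <- (u + u^{-T}) / 2.
u = [row[:] for row in x]
for _ in range(50):
    w = inv_T(u)
    u = [[(u[i][j] + w[i][j]) / 2 for j in range(2)] for i in range(2)]

uT = [[u[j][i] for j in range(2)] for i in range(2)]
conj = mul(mul(uT, p), u)                    # u* p u, should equal q
err = max(abs(conj[i][j] - q[i][j]) for i in range(2) for j in range(2))
print(err < 1e-9)
```

The check confirms $q=u^*pu$ for the polar factor $u$ of $x$, exactly as the argument predicts.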
Challenge
Given an integer, \$s\$, as input where \$s\geq 1\$ output the value of \$\zeta(s)\$ (Where \$\zeta(x)\$ represents the Riemann Zeta Function).
Further information
\$\zeta(s)\$ is defined as:
$$\zeta(s) = \sum\limits^\infty_{n=1}\frac{1}{n^s}$$
You should output your answer to 5 decimal places (no more, no less). If the answer comes out to be infinity, you should output \$\infty\$ or equivalent in your language.
Riemann Zeta built-ins are allowed, but it's less fun to do it that way ;)
Examples

Outputs must be exactly as shown below:

Input -> Output
1 -> ∞ or inf etc.
2 -> 1.64493
3 -> 1.20206
4 -> 1.08232
8 -> 1.00408
19 -> 1.00000
Bounty
As consolation for allowing built-ins, I will offer a 100-rep bounty to the shortest answer which does not use built-in zeta functions. (The green checkmark will still go to the shortest solution overall.)

Winning

The shortest code in bytes wins.
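For reference (an ungolfed sketch, not a competitive answer): the spec can be met without a zeta built-in by taking a partial sum plus an Euler-Maclaurin tail correction; the cutoff `N` here is an arbitrary choice.

```python
def zeta(s, N=50_000):
    """Riemann zeta for integer s >= 1: partial sum over n < N plus the
    Euler-Maclaurin tail  sum_{n>=N} n^-s  ~  N^(1-s)/(s-1) + N^(-s)/2,
    accurate well beyond 5 decimal places for s >= 2."""
    if s == 1:
        return float("inf")              # the harmonic series diverges
    partial = sum(n ** -s for n in range(1, N))
    return partial + N ** (1 - s) / (s - 1) + N ** -s / 2

for s in (1, 2, 3, 4, 8, 19):
    v = zeta(s)
    print(s, "->", "inf" if v == float("inf") else f"{v:.5f}")
```

The tail correction is what makes 5-decimal accuracy cheap: a bare partial sum would need roughly $2\times10^5$ terms at $s=2$, while the corrected sum is already far below the rounding threshold.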
There is a rather simple integral ($K_0$ is the 0-th order MacDonald function) $$\int_0^\infty e^{-x \cosh\xi}\, d\xi = K_0(x)$$ which Mathematica cannot solve, even though the documentation claims that Integrate can give results in terms of many special functions. It can, however, solve the integral obtained by substituting $r=\cosh \xi$, $$\int_1^\infty \frac{e^{-x r}}{\sqrt{r^2-1}}\,dr=K_0(x).$$ It also fails on the more general integral $$\int_0^\infty e^{-x \cosh\xi} \cosh(\alpha \xi)\, d\xi = K_\alpha(x).$$
I am using "8.0 for Mac OS X x86 (64-bit) (October 5, 2011)". Are there more recent or older versions of Mathematica which can solve this class of integrals?
Edit:
I want to stress that this is not an arbitrary integral but can be thought of as a definition of $K_0$ (the corresponding integral $\int_0^{2\pi} \!e^{i x \cos \xi}\,d\xi$ for $J_0$ Mathematica handles very well). I am just curious how it can happen that a system as developed as Mathematica cannot handle this "elementary" integral.
Here is the Mathematica code for those who want to test:
Integrate[Exp[-x Cosh[ξ]],{ξ,0,Infinity}]
Now I found a related integral which indeed reveals a bug in Mathematica. If you try to evaluate ($x \in \mathbb{R}$) $$\int_0^\infty \cos(x \sinh \xi)\,d\xi = K_0(|x|)$$ then Mathematica claims that the integral diverges.
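As a sanity check that the identity itself is fine, the $\xi$-integral can be evaluated numerically outside Mathematica. A Python sketch (the reference value $K_0(1)\approx 0.4210244$ is assumed from standard tables):

```python
import math

def k0_integral(x, upper=20.0, steps=100_000):
    """K_0(x) = integral of exp(-x cosh t) dt over t in [0, oo),
    via the trapezoidal rule. The integrand decays like exp(-x e^t / 2),
    so truncating at t = 20 loses nothing at double precision."""
    h = upper / steps
    f = lambda t: math.exp(-x * math.cosh(t))
    total = 0.5 * (f(0.0) + f(upper))
    total += sum(f(i * h) for i in range(1, steps))
    return total * h

print(round(k0_integral(1.0), 7))   # agrees with tabulated K_0(1) = 0.4210244...
```

The quadrature converges rapidly because the integrand is smooth and decays double-exponentially, which is also why the claimed "divergence" of the oscillatory variant is clearly wrong.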
1. Perturbations
As already mentioned, the Jahn-Teller effect has its roots in group theory. The essence of the argument is that the energy of the compound is stabilised upon distortion to a lower-symmetry point group. This distortion may be considered to be a normal mode of vibration, with the corresponding vibrational coordinate $q$ labelling the "extent of distortion". There is one condition on the vibrational mode: it cannot transform as the totally symmetric irreducible representation of the molecular point group, as such a vibrational mode cannot bring about any distortion in the molecular geometry (it may lead to a change in equilibrium bond length, but not in the shape of the molecule). $\require{begingroup} \begingroup \newcommand{\En}[1]{E_n^{(#1)}} \newcommand{\ket}[1]{| #1 \rangle} \newcommand{\n}[1]{n^{(#1)}} \newcommand{\md}[0]{\mathrm{d}} \newcommand{\odiff}[2]{\frac{\md #1}{\md #2}}$
In the undistorted geometry (i.e. $q = 0$), the electronic Hamiltonian is denoted $H_0$. The corresponding unperturbed electronic wavefunction is $\ket{\n{0}}$, and the electronic energy is $\En{0}$. We therefore have
$$H_0 \ket{\n{0}} = \En{0}\ket{\n{0}} \tag{1}$$
Here, the Hamiltonian, wavefunction, and energy are all functions of $q$. We can expand them as Taylor series about $q = 0$:
$$\begin{align} H &= H_0 + q \left(\odiff{H}{q}\right) + \frac{q^2}{2}\left(\frac{\md^2 H}{\md q^2}\right) + \cdots \tag{2} \\ \ket{n} &= \ket{\n{0}} + q\ket{\n{1}} + \frac{q^2}{2}\ket{\n{2}} + \cdots \tag{3} \\ E_n &= \En{0} + q\En{1} + \frac{q^2}{2}\En{2} + \cdots \tag{4} \end{align}$$
In the new geometry (i.e. $q \neq 0$), the Schrodinger equation must still be obeyed and therefore
$$H\ket{n} = E_n \ket{n} \tag{5}$$
By substituting in equations $(2)$ through $(4)$ into equation $(5)$, one can compare coefficients of $q$ to reach the results:
$$\begin{align} \En{1} &= \left< \n{0} \middle| \odiff{H}{q} \middle| \n{0} \right> \tag{6} \\ \En{2} &= \left< \n{0} \middle| \frac{\md^2 H}{\md q^2} \middle| \n{0} \right> + 2\sum_{m \neq n}\frac{\left|\left<m^{(0)} \middle|(\md H/\md q)\middle|\n{0} \right>\right|^2}{\En{0} - E_m^{(0)}} \tag{7} \end{align}$$
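Equations $(6)$ and $(7)$ can be verified numerically on a toy two-level model. The sketch below (the $2\times 2$ matrices are invented purely for illustration, not tied to any molecule) compares the perturbative coefficients against finite-difference derivatives of the exact ground-state energy:

```python
import math

# Toy model H(q) = H0 + q*V, used only to illustrate eqs. (6) and (7).
E0, E1 = 0.0, 2.0          # unperturbed energies (H0 is diagonal)
V00, V01 = 1.0, 0.5        # elements of dH/dq in the H0 eigenbasis; V11 = -V00

def ground_energy(q):
    """Exact lower eigenvalue of [[E0 + q*V00, q*V01], [q*V01, E1 - q*V00]]."""
    a, c, b = E0 + q * V00, E1 - q * V00, q * V01
    return (a + c) / 2 - math.sqrt(((a - c) / 2) ** 2 + b ** 2)

# Predictions of eqs. (6) and (7) for the ground state n = 0:
E_1 = V00                          # <n0| dH/dq |n0>
E_2 = 2 * V01 ** 2 / (E0 - E1)     # d^2H/dq^2 = 0 for a linear perturbation

# Finite-difference derivatives of the exact energy at q = 0:
h = 1e-4
fd1 = (ground_energy(h) - ground_energy(-h)) / (2 * h)
fd2 = (ground_energy(h) - 2 * ground_energy(0.0) + ground_energy(-h)) / h ** 2
print(abs(fd1 - E_1) < 1e-6)   # first derivative matches E^(1)
print(abs(fd2 - E_2) < 1e-3)   # second derivative matches E^(2)
```

Note the sign of the second-order term: the sum in $(7)$ is negative for the ground state, which is the generic source of second-order stabilisation.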
The derivation of equations $(6)$ and $(7)$ will not be discussed further here.$^1$

Distortions that arise from a negative value of $\En{1}$ are called first-order Jahn-Teller distortions, and distortions that arise from a negative value of $\En{2}$ are called second-order Jahn-Teller distortions.

2. The first-order Jahn-Teller effect
Recall that
$$E_n = \En{0} + q\En{1} + \cdots \tag{8}$$
Therefore, if $\En{1} > 0$, then stabilisation may be attained with a negative value of $q$; if $\En{1} < 0$, then stabilisation may be attained with a positive value of $q$. These simply represent distortions in opposite directions along a vibrational coordinate. A well-known example is the distortion of octahedral $\ce{Cu^2+}$: there are two possible choices, one involving axial compression, and one involving axial elongation. These two distortions arise from movement along the same vibrational coordinate, except that one has $q > 0$ and the other has $q < 0$.
In order for there to be a first-order Jahn-Teller distortion, we therefore require that
$$\En{1} = \left<\n{0}|(\md H/\md q)| \n{0}\right> \neq 0 \tag{9}$$
Within group theory, the condition for the integral to be nonzero is that the integrand must contain a component that transforms as the totally symmetric irreducible representation (TSIR). Mathematically,
$$\Gamma_{\text{TSIR}} \in \Gamma_n \otimes \Gamma_{(\md H/\md q)} \otimes \Gamma_n \tag{10}$$
We can simplify this slightly by noting that the Hamiltonian, $H$, itself transforms as the TSIR. Therefore, $\md H/\md q$ transforms as $\Gamma_q$, and the requirement is that
$$\Gamma_{\text{TSIR}} \in \Gamma_n \otimes \Gamma_q \otimes \Gamma_n \tag{11}$$
In all point groups, for any non-degenerate irrep $\Gamma_n$, $\Gamma_n \otimes \Gamma_n = \Gamma_{\text{TSIR}}$. Therefore, if $\Gamma_n$ is non-degenerate, then
$$\Gamma_n \otimes \Gamma_q \otimes \Gamma_n = \Gamma_q \neq \Gamma_{\text{TSIR}} \tag{12}$$
and the molecule is stable against a first-order Jahn-Teller distortion. Therefore, all closed-shell molecules ($\Gamma_n = \Gamma_{\text{TSIR}}$) do not undergo first-order Jahn-Teller distortions.
However, what happens if $\Gamma_n$ is degenerate? Now, the product $\Gamma_n \otimes \Gamma_n$ contains other irreps apart from the TSIR.$^2$ If the molecule possesses a vibrational mode that transforms as one of these irreps, then the direct product $\Gamma_n \otimes \Gamma_q \otimes \Gamma_n$ will contain the TSIR.
In a rather inelegant article,$^3$ Hermann Jahn and Edward Teller did the maths for every important point group and found that:
stability and degeneracy are not possible simultaneously unless the molecule is a linear one...
In other words, if a non-linear molecule has a degenerate ground state, then it is susceptible towards a (first-order) Jahn-Teller distortion.
Take, for example, octahedral $\ce{Cu^2+}$. This has a $\mathrm{^2E_g}$ term symbol (see this question) - which is doubly degenerate. The symmetric direct product $\mathrm{E_g \otimes E_g = A_{1g} \oplus E_g}$. Therefore, if we have a vibrational mode of symmetry $\mathrm{E_g}$, then distortion along this vibrational coordinate will occur to give a more stable compound. Recall that the vibrational mode cannot transform as the TSIR, so we can neglect the $\mathrm{A_{1g}}$ term.
What does an $\mathrm{e_g}$ vibrational mode look like? Here is a diagram:$^4$

[diagram of an $\mathrm{e_g}$ vibrational mode of an octahedron, from ref. 4]
It's an axial elongation, which happens to match what we know of Cu(II). However, there is a catch. The vibrational mode is doubly degenerate (the other $\mathrm{e_g}$ mode is not shown), and any linear combination of these two degenerate vibrational modes also transforms as $\mathrm{e_g}$. Therefore, the exact form of the distortion can be any linear combination of these two degenerate modes. It can also involve negative coefficients (i.e. it might feature axial compression instead of elongation); there is no way to find that out using symmetry arguments.
On top of that, there's also no indication of how much distortion there is. That depends on (amongst other things) the value of $\En{1}$, and all we have said is that it is nonzero - we have not said how large it is. This is what is meant by "impossible to predict the extent or the exact form of the distortion".
3. The second-order Jahn-Teller effect
Pearson has written an article on second-order Jahn-Teller effects.$^5$ To be continued another time.

Notes and references
(1) For more details, look up perturbation theory in your quantum mechanics book of choice. In such treatments, the perturbation is usually formulated slightly differently: e.g. $H$ is taken as $H_0 + \lambda V$, and the eigenstates and eigenvalues are expanded as a power series instead of a Taylor series. Notwithstanding that, the principles remain the same.
(2) There is a subtlety in that the symmetric direct product must be taken. For example, in the $D_\mathrm{\infty h}$ point group, we have $\Pi \otimes \Pi = \Sigma^+ + [\Sigma^-] + \Delta$. The antisymmetric direct product $\Sigma^-$ has to be discarded.
(3) Jahn, H. A.; Teller, E. Stability of Polyatomic Molecules in Degenerate Electronic States. I. Orbital Degeneracy. Proc. R. Soc. A 1937, 161 (905), 220–235. DOI: 10.1098/rspa.1937.0142.
(4) Albright, T. A.; Burdett, J. K.; Whangbo, M.-H. Orbital Interactions in Chemistry, 2nd ed.; Wiley: Hoboken, NJ, 2013.
(5) Pearson, R. G. The second-order Jahn-Teller effect. J. Mol. Struct.: THEOCHEM 1983, 103, 25–34. DOI: 10.1016/0166-1280(83)85006-4.
Well, by Cayley--Hamilton, each matrix $A\in {\rm GL}(n,p)$ generates an at most $n$-dimensional subalgebra ${\mathbb F}_p[A]\subseteq M(n,p)$ thus containing at most $p^n-1$ nonzero elements. Hence the order of $A$ cannot exceed $p^n-1$.
On the other hand, consider a degree $n$ monic polynomial $P_n$ whose root is a generator $\xi$ of ${\mathbb F}_{p^n}^*$. Then a matrix with $P_n$ as its characteristic polynomial has order at least $p^n-1$ since $\xi$ is its eigenvalue.
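For small cases both bounds can be confirmed by brute force. A Python sketch (restricted to $2\times 2$ matrices; the function names are mine):

```python
from itertools import product

def gl2_orders(p):
    """Orders of all elements of GL(2, p), by brute force (tiny p only)."""
    def mul(A, B):
        (a, b), (c, d) = A
        (e, f), (g, h) = B
        return ((a*e + b*g) % p, (a*f + b*h) % p), ((c*e + d*g) % p, (c*f + d*h) % p)
    I = ((1, 0), (0, 1))
    orders = []
    for a, b, c, d in product(range(p), repeat=4):
        if (a * d - b * c) % p == 0:
            continue                      # singular: not in GL(2, p)
        A = ((a, b), (c, d))
        P, k = A, 1
        while P != I:                     # terminates: group elements have finite order
            P, k = mul(P, A), k + 1
        orders.append(k)
    return orders

orders = gl2_orders(3)
print(max(orders))                                    # p^n - 1 = 8
print(max(o for o in orders if o in (1, 3, 9, 27)))   # p-power orders top out at 3
```

For $p=3$, $n=2$ this reproduces both the maximal order $p^n-1=8$ (a Singer cycle) and the maximal $p$-power order $p^{\lceil\log_p n\rceil}=3$ from the addendum.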
ADDENDUM. If you wish the order to be a power of $p$, then the answer is $d=p^{\lceil \log_p n\rceil}$. Since the order of $A$ is divisible by the multiplicative orders of its eigenvalues, all the eigenvalues must be $1$. Hence the characteristic polynomial is $(x-1)^n$, so $A^d-I=(A-I)^d=0$.
On the other hand, if $A=I+J$ is the Jordan cell of size $n$ (with eigenvalue 1), then $A^{d/p}=I^{d/p}+J^{d/p}\neq I$, but $A^d=I+J^d=I$.
NB. The subgroup of all (upper-)unitriangular matrices is a Sylow $p$-subgroup in ${\rm GL}(n,p)$. So you may concentrate on it when looking at the elements of this kind.
ADDENDUM-2 (much later). This is to answer the question in the comments about the maximal order of an element $f\in AGL(n,q)$, where $q$ is a power of $p$. Write $f(x)=Ax+b$.
If $1$ is not an eigenvalue of $A$, then $f$ has a fixed point (the equation $f(x)=x$ has a solution), so we may regard it as an element of $GL(n,q)$, and the maximal order of $f$ is again $q^n-1$.
So we are concerned with the case when the minimal polynomial $\mu(x)$ of $A$ vanishes at $1$, say $\mu(x)=(x-1)^k\nu(x)$, where $\nu(1)\neq 0$. Then $A$ is similar to a block-diagonal matrix with blocks having minimal polynomials $(x-1)^k$ and $\nu(x)$ (in view of $\mathbb F_q^n=\mathop{\mathrm {Ker}}(A-I)^k\oplus\mathop{\rm Ker}\nu(A)$). So the order $d$ of $A$ does not exceed $p^{\lceil\log_p k\rceil}(q^{n-k}-1)$ if $k<n$, and $p^{\lceil\log_p n\rceil}$ otherwise. If $n>3$ (or $n=3$ and $q>2$), one may easily see that this bound does not exceed $q^{n-1}-1$. So $f^d$ is a translation, which yields that $f^{pd}=\mathord{\rm id}$, and $pd\leq p(q^{n-1}-1)<q^n-1$. Thus the maximal order of $f$ in these cases is still $q^n-1$.
We are left with the cases $n=1$, $n=2$, or $n=3$, $p=2$. When $n=1$, the answer is obviously $\max(p,q-1)$. When $n=2$, the only case left is $d=p=q$ achieved when $A$ is similar to the Jordan cell $\begin{pmatrix}1&1\\0&1\end{pmatrix}$. In this case, $f^p(x)=x+(A^{p-1}+\dots+I)b=x$ unless $p=2$, when $f^2(x)=x+\begin{pmatrix}0&1\\0&0\end{pmatrix}b$. So the answer is still $q^2-1$ when $q>2$, and $4$ otherwise.
Finally, if $n=3$ and $q=2$, then the order of $A$ having eigenvalue $1$ exceeds 3 only if $A$ is similar to $3\times 3$ Jordan cell (then $d=4$); but in this case $f^4=\mathord{\rm id}$. So this is not an exception.
Summing up, the only cases when the order may be greater than $q^n-1$ are: (1) $n=1$, $q=p$ (the maximal order is $p$), and (2) $n=2$, $q=2$ (the maximal order is $4$). |
Let me give a construction that leads, for any fixed $N$, to a graph which has $N$ pairwise non-isomorphic edges and removal of any one of them results in the same graph. This is not a complete answer but I think it is of some interest.
Let us denote by $G$ a graph whose vertices are labeled with elements of a finite set $V$. We will assume that $\pi=\{p,q\}$ and $\rho=\{r,s\}$ are a pair of non-isomorphic edges in $G$, that is, no automorphism of $G$ maps one to the other. I will also assume that the graph $G'=G\setminus\pi$ (obtained from $G$ by removing $\pi$) is isomorphic to $G\setminus\rho$. For example, the 6-vertex graph from the initial question can be taken as $G$.
Let me construct the graph $\Gamma=\Gamma(G,\pi)$ as follows. The vertex set of $\Gamma$ will be the union of $V$ and $W\times V$; we denote the vertices from $W\times V$ as $v_{ec}$ for $e\in W$ and $c\in V$. (Here, $W$ stands for the set of all subsets of $V$ which have cardinality $2$.) The edges of $\Gamma$ are the following.
(1) there is an edge between $i$ and $v_{ec}$ if and only if $i\in e$;
(2) assuming $e\in G$, there is an edge between $v_{ex}$ and $v_{ey}$ if and only if $\{x,y\}\in G$;
(3) assuming $e\notin G$, there is an edge between $v_{ex}$ and $v_{ey}$ if and only if $\{x,y\}\in G'$;
(4) $\Gamma$ has only those edges which are specified in (1)-(3).
Speaking informally, we draw a copy of $G$ instead of every edge $\{i,j\}$. If $\{i,j\}$ is not an edge, we draw a copy of $G'$. In both cases, we draw the edges connecting all the new vertices with both $i$ and $j$. (We do this over all $\{i,j\}\in W$.)
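Rules (1)-(4) translate directly into code. A pure-Python sketch (the input graph below is a hypothetical toy, not the 6-vertex $G$ from the question):

```python
from itertools import combinations

def gamma(V, E, Eprime):
    """Edge set of Gamma(G, pi) following rules (1)-(3): V, E describe G,
    Eprime describes G' = G minus pi.  Vertices of Gamma are the elements
    of V together with pairs (e, c) for e in W, c in V."""
    E = {frozenset(e) for e in E}
    Eprime = {frozenset(e) for e in Eprime}
    W = [frozenset(w) for w in combinations(sorted(V), 2)]
    edges = set()
    for e in W:
        for i in e:                                  # rule (1): i -- v_{e,c} iff i in e
            for c in V:
                edges.add(frozenset({i, (e, c)}))
        ref = E if e in E else Eprime                # rules (2)/(3): copy of G or of G'
        for x, y in combinations(sorted(V), 2):
            if frozenset({x, y}) in ref:
                edges.add(frozenset({(e, x), (e, y)}))
    return edges

V = {1, 2, 3}                  # toy G with edges {1,2} and {2,3}
E = [{1, 2}, {2, 3}]
Eprime = [{2, 3}]              # G' = G with pi = {1, 2} removed
print(len(gamma(V, E, Eprime)))
```

For this toy input the construction yields $|V| + |W|\cdot|V| = 12$ vertices; rule (1) contributes $2|V|$ edges per element of $W$, and rules (2)/(3) contribute one copy of $G$ or $G'$ per element.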
Now, consider the edges $\{v_{\pi p},v_{\pi q}\}$, $\{v_{\pi r},v_{\pi s}\}$, $\{v_{\rho p},v_{\rho q}\}$, $\{v_{\rho r},v_{\rho s}\}$. From an informal description it can be seen that graphs obtained by removal of any one of them are isomorphic. To show that the edges provided are pairwise non-isomorphic, assume by contradiction that an automorphism $\psi$ of $\Gamma$ swaps some pair of them. Then,
(1) a vertex $v_{ec}$ cannot be the image $\psi(i)$ of $i$ under $\psi$, because they have different degrees;
(2) then, denoting $e=\{i,j\}$ and $\psi(e)=\{\psi(i),\psi(j)\}$, we conclude $\psi(v_{ec})=v_{\psi(e)c'}$ for some $c'$;
(3) the subgraphs generated by all $v_{ec}$ and by all $v_{e'c}$ (in both cases, $c$ runs over $1,\dots,n$) are isomorphic iff either $e,e'\in G$ or $e,e'\notin G$, therefore $\{\psi(i),\psi(j)\}\in G$ iff $\{i,j\}\in G$;
(4) this shows that $\psi'$, the restriction of $\psi$ to $\{1,\ldots,n\}$, is an automorphism of $G$;
(5) by the definition of $G$, $\psi'$ can not swap $\pi$ and $\rho$, so that $\psi(\pi)=\pi$ and $\psi(\rho)=\rho$;
(6) then, the restriction of $\psi$ to $v_{\pi c}$ is an automorphism, which cannot swap $\{v_{\pi p},v_{\pi q}\}$ and $\{v_{\pi r},v_{\pi s}\}$ again by definition of $G$;
(7) similarly to the previous step, $\psi$ cannot swap $\{v_{\rho p},v_{\rho q}\}$ and $\{v_{\rho r},v_{\rho s}\}$;
(8) finally, $\psi$ cannot swap $v_{\rho a}$ and $v_{\pi b}$, because of step (5).
Let me finally note that we can apply the $\Gamma$ construction to $\Gamma_0=\Gamma(G,\pi)$ itself. As far as I can see, the graph $\Gamma_1=\Gamma(\Gamma_0,\{v_{\rho p},v_{\rho q}\})$ has $16$ pairwise non-isomorphic edges and removal of any one of them results in the same graph. Similarly constructed $\Gamma_2$ will have $256$ edges with this property, and so on. |
Interpolation Problem for Stationary Sequences with Missing Observations

Abstract
The problem of the mean-square optimal estimation of the linear functional $A_s\xi=\sum\limits_{l=0}^{s-1}\sum\limits_{j=M_l}^{M_l+N_{l+1}}a(j)\xi(j),$ $M_l=\sum\limits_{k=0}^l (N_k+K_k),$ \, $N_0=K_0=0,$ which depends on the unknown values of a stochastic stationary sequence $\xi(k)$ from observations of the sequence at points of time $j\in\mathbb{Z}\backslash S $, $S=\bigcup\limits_{l=0}^{s-1}\{ M_{l}, M_{l}+1, \ldots, M_{l}+N_{l+1} \}$ is considered. Formulas for calculating the mean-square error and the spectral characteristic of the optimal linear estimate of the functional are derived under the condition of spectral certainty, where the spectral density of the sequence $\xi(j)$ is exactly known. The minimax (robust) method of estimation is applied in the case where the spectral density is not known exactly, but sets of admissible spectral densities are given. Formulas that determine the least favorable spectral densities and the minimax spectral characteristics are derived for some special sets of admissible densities.
DOI: 10.19139/soic.v3i3.149
Under the auspices of the Computational Complexity Foundation (CCF)
A {\em supermodel} is a satisfying assignment of a boolean formula for which any small alteration, such as a single bit flip, can be repaired by another small alteration, yielding a nearby satisfying assignment. We study computational problems associated with supermodels and some generalizations thereof. For general formulas, ... more >>>

We show new tradeoffs for satisfiability and nondeterministic linear time. Satisfiability cannot be solved on general purpose random-access Turing machines in time $n^{1.618}$ and space $n^{o(1)}$. This improves recent results of Lipton and Viglas and Fortnow.

A dichotomy theorem for a class of decision problems is a result asserting that certain problems in the class are solvable in polynomial time, while the rest are NP-complete. The first remarkable such dichotomy theorem was proved by T.J. Schaefer in 1978. It concerns the ... more >>>

We study the space complexity of refuting unsatisfiable random $k$-CNFs in the Resolution proof system. We prove that for any large enough $\Delta$, with high probability a random $k$-CNF over $n$ variables and $\Delta n$ clauses requires resolution clause space of $\Omega(n \cdot \Delta^{-\frac{1+\epsilon}{k-2-\epsilon}})$, for any $0<\epsilon<1/2$. For constant $\Delta$, ... more >>>

We prove that, with high probability, the space complexity of refuting a random unsatisfiable boolean formula in $k$-CNF on $n$ variables and $m = \Delta n$ clauses is $O(n \cdot \Delta^{-\frac{1}{k-2}})$.

We describe a deterministic algorithm that, for constant k, given a k-DNF or k-CNF formula f and a parameter e, runs in time linear in the size of f and polynomial in 1/e and returns an estimate of the fraction of satisfying assignments for f up to ... more >>>

Bayesian inference and counting satisfying assignments are important problems with numerous applications in probabilistic reasoning. In this paper, we show that plain old DPLL equipped with memoization can solve both of these problems with time complexity that is at least as good as all known algorithms. Furthermore, DPLL with memoization ... more >>>

$k$-SAT is one of the best known among a wide class of random constraint satisfaction problems believed to exhibit a threshold phenomenon, where the control parameter is the ratio of the number of constraints to the number of variables. There has been a large amount of work towards estimating ... more >>>

The satisfiability problem of Boolean formulae in 3-CNF (3-SAT) is a well-known NP-complete problem, and the development of faster (moderately exponential time) algorithms has received much interest in recent years. We show that the 3-SAT problem can be solved by a probabilistic algorithm in expected time $O(1.3290^n)$. more >>>

We study approximation hardness and satisfiability of bounded occurrence uniform instances of SAT. Among other things, we prove the inapproximability for SAT instances in which every clause has exactly 3 literals and each variable occurs exactly 4 times, and display an explicit ... more >>>

It is known that random k-SAT formulas with at least $(2^k \ln 2)\,n$ random clauses are unsatisfiable with high probability. This result is simply obtained by bounding the expected number of satisfying assignments of a random k-SAT instance by an expression tending to 0 when n, the number of variables ... more >>>
We prove results on the computational complexity of instances of 3SAT in which every variable occurs 3 or 4 times. more >>>
Boolean satisfiability problems are an important benchmark for questions about complexity, algorithms, heuristics and threshold phenomena. Recent work on heuristics, and the satisfiability threshold has centered around the structure and connectivity of the solution space. Motivated by this work, we study structural and connectivity-related properties of the space of solutions ... more >>>
We prove the first time-space tradeoffs for counting the number of solutions to an NP problem modulo small integers, and also improve upon the known time-space tradeoffs for Sat. Let m be a positive integer, and define MODm-Sat to be the problem of determining if a given Boolean formula has ... more >>>
Ever since the fundamental work of Cook from 1971, satisfiability has been recognized as a central problem in computational complexity. It is widely believed to be intractable, and yet till recently even a linear-time, logarithmic-space algorithm for satisfiability was not ruled out. In 1997 Fortnow, building on earlier work by ... more >>>
We show that tree-like OBDD proofs of unsatisfiability require an exponential increase ($s \mapsto 2^{s^{\Omega(1)}}$) in proof size to simulate unrestricted resolution, and that unrestricted OBDD proofs of unsatisfiability require an almost-exponential increase ($s \mapsto 2^{ 2^{\left( \log s \right)^{\Omega(1)}}}$) in proof size to simulate $\mathrm{Res}(O(\log n))$. The ``OBDD proof ... more >>>
Consider the following two-player communication process to decide a language $L$: The first player holds the entire input $x$ but is polynomially bounded; the second player is computationally unbounded but does not know any part of $x$; their goal is to cooperatively decide whether $x$ belongs to $L$ at small ... more >>>
We give two time- and space-efficient simulations of quantum computations with
intermediate measurements, one by classical randomized computations with unbounded error and the other by quantum computations that use an arbitrary fixed universal set of gates. Specifically, our simulations show that every language solvable by a bounded-error quantum algorithm running ... more >>>
Separating different propositional proof systems---that is, demonstrating that one proof system cannot efficiently simulate another proof system---is one of the main goals of proof complexity. Nevertheless, all known separation results between non-abstract proof systems are for specific families of hard tautologies: for what we know, in the average case all ... more >>>
Recently, Moser and Tardos [MT10] came up with a constructive proof of the Lovász Local Lemma. In this paper, we give another constructive proof of the lemma, based on Kolmogorov complexity. Actually, we even improve the Local Lemma slightly.
This paper characterizes alternation trading based proofs that satisfiability is not in the time and space bounded class $\DTISP(n^c, n^\epsilon)$, for various values $c<2$ and $\epsilon<1$. We characterize exactly what can be proved in the $\epsilon=0$ case with currently known methods, and prove the conjecture of Williams that $c=2\cos(\pi/7)$ is ... more >>>
We construct a PCP for NTIME($2^n$) with constant
soundness, $2^n \cdot \mathrm{poly}(n)$ proof length, and $\mathrm{poly}(n)$ queries where the verifier's computation is simple: the queries are a projection of the input randomness, and the computation on the prover's answers is a 3CNF. The previous upper bound for these two computations was more >>>
Drucker (2012) proved the following result: Unless the unlikely complexity-theoretic collapse coNP is in NP/poly occurs, there is no AND-compression for SAT. The result has implications for the compressibility and kernelizability of a whole range of NP-complete parameterized problems. We present a simple proof of this result.
An AND-compression is ... more >>>
In this work we extend the study of solution graphs and prove that for boolean formulas in a class called CPSS, all connected components are partial cubes of small dimension, a statement which was proved only for some cases in [Schwerdtfeger 2013]. In contrast, we show that general Schaefer formulas ... more >>>
We present an efficient proof system for Multipoint Arithmetic Circuit Evaluation: for every arithmetic circuit $C(x_1,\ldots,x_n)$ of size $s$ and degree $d$ over a field ${\mathbb F}$, and any inputs $a_1,\ldots,a_K \in {\mathbb F}^n$,
$\bullet$ the Prover sends the Verifier the values $C(a_1), \ldots, C(a_K) \in {\mathbb F}$ and ... more >>>
Most of the known lower bounds for binary Boolean circuits with unrestricted depth are proved by the gate elimination method. The most efficient known algorithms for the #SAT problem on binary Boolean circuits use similar case analyses to the ones in gate elimination. Chen and Kabanets recently showed that the ... more >>>
We prove that if every problem in $NP$ has $n^k$-size circuits for a fixed constant $k$, then for every $NP$-verifier and every yes-instance $x$ of length $n$ for that verifier, the verifier's search space has an $n^{O(k^3)}$-size witness circuit: a witness for $x$ that can be encoded with a circuit ... more >>> |
Sorry I had to start a new thread here because the latex was just not generating on the other thread, I don't know if it is my browser or something else but I am having a very difficult day with the tex.
anyhow
[tex] \lim_{x \rightarrow 0} \frac {\sqrt{5x+9}-3}{x} [/tex]
This gives 0/0 off the bat, so applying L'Hôpital once I get:
[tex] \lim_{x \rightarrow 0} \frac {5}{2\sqrt{5x+9}} [/tex]
Which plugs in nicely to give [itex] \frac {5}{6} [/itex]
Did I do that all right?
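For anyone who wants to double-check the value without L'Hôpital, a quick symbolic computation (assuming sympy is installed):

```python
import sympy as sp

x = sp.symbols('x')
expr = (sp.sqrt(5 * x + 9) - 3) / x

# Direct symbolic limit as x -> 0; agrees with one application of L'Hopital
print(sp.limit(expr, x, 0))  # 5/6
```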
Hi again can any latex specialists help me. My code definitely says the equation I wrote above is (sqrt(5x+9))-3/x but on my browser there is a red message saying the latex is not valid when I look at the thread and on my preview it is showing me a completely different other tex thing that I wrote two hours ago . I can't seem to avoid this problem, does anyone else see the mix up that I am seeing or is it just my own browser having a hissy fit?
Camera has two properties:
bpy.data.objects['Camera'].data.angle
bpy.data.objects['Camera'].data.lens
If you change one value, the other one is changed automatically. I am curious about the math behind this.
From the table below:
$$ \begin{array}{c|c} \text{angle }(\theta)&\text{lens}\\ \hline 30^\circ&-18.691732\\ 45^\circ&28.681456\\ 60^\circ&2.497919 \end{array} $$
It sounds like the formula involves $\tan{\frac{\theta}{2}}$, but where does this formula come from? I found a camera diagram (below) but could not figure out how to match the formula with it:
Is this logic specific to Blender, or is it generic for all cameras?
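For what it's worth, the relation is generic pinhole-camera geometry rather than anything Blender-specific: the focal length and the field of view are linked through the sensor size. A sketch of the conversion (the function names are mine, 36 mm is assumed as the sensor width, which is Blender's default in recent versions, and no `bpy` is required):

```python
import math

def angle_to_lens(angle, sensor_width=36.0):
    """Focal length (mm) from field of view `angle` (radians).

    Pinhole-camera model: lens = (sensor_width / 2) / tan(angle / 2).
    36.0 mm is an assumed sensor width; adjust it if your camera differs.
    """
    return (sensor_width / 2.0) / math.tan(angle / 2.0)

def lens_to_angle(lens, sensor_width=36.0):
    """Inverse relation: angle = 2 * atan(sensor_width / (2 * lens))."""
    return 2.0 * math.atan(sensor_width / (2.0 * lens))

# A 60-degree field of view corresponds to roughly a 31 mm lens,
# and converting back recovers the original angle.
fov = math.radians(60)
print(angle_to_lens(fov))                               # ~31.18
print(math.degrees(lens_to_angle(angle_to_lens(fov))))  # ~60.0
```

So only the default sensor size is Blender-specific; the $\tan(\theta/2)$ factor comes from half the sensor subtending half the field of view at the focal point.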
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox: \documentclass[handout]{beamer} \usepackage{pgfpages} \pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm] \begin{document} \begin{frame} \[\bigcup_n \sum_n\] \[\underbrace{aaaaaa}_{bbb}\] \end{frame} \end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that? |
The reported solubility product of $\ce{HgS}$ is $4 \times 10^{-54}$. How is this possible? The minimum concentration that is possible is 1/Avogadro's constant, which is about $10^{-23}\ \mathrm{M}$. Hence the minimum value of the solubility product for a diatomic species like $\ce{HgS}$ would be about $10^{-46}$. What am I missing?
Imagine you have one googolplex ($10^{10^{100}}$) liters of water. Now let's dissolve one mole of sugar in your water. Setting aside where you would find so much water, the concentration would be way lower than the inverse of Avogadro's constant.
Long story short, there's no lower limit to the concentration of a substance. You can always add more solvent to the mixture thus making the concentration smaller.
Variax's answer addresses the minimum concentration misconception. Perhaps you are also wondering how we could conceivably measure concentrations that small.
If:
$$\ce{HgS(s) <=> Hg^2+(aq) + S^2-(aq)}\ \ \ \ K_{sp}=4\times 10^{-54}$$
then, in a saturated solution prepared by adding solid $\ce{HgS}$ to ultrapure deionized water:
$$[\ce{Hg^2+}]=[\ce{S^2-}]=\sqrt{4\times 10^{-54}}=2\times 10^{-27}$$
As an aside, actually $[\ce{S^2-}]\approx 0$ because $\ce{S^2-}$ is a strong base with a very large $K_B$. So in fact, the reaction is
$$\ce{HgS(s) +H2O(\ell) <=> Hg^2+ (aq) + SH^- (aq) + OH-(aq)}\ \ \ \ K_{sp}=4\times 10^{-54}$$
However, let's stick with the simpler case. If we can measure $[\ce{Hg^2+}]$ or $[\ce{S^2-}]$, we can compute the value of $K_{sp}$. According to this pdf presentation from PerkinElmer, their ICP-MS system can detect mercury at 0.016 ppb and sulfur at 28 ppb (the limit of quantitation will be a little higher). 0.016 ppb is approximately $0.016\ \mathrm{\mu g}$ per L:
$$ 0.016\ \mathrm{\mu g/L} = 1.6\times 10^{-8}\ \mathrm{g/L}\\ 1.6\times 10^{-8}\ \mathrm{g/L\times \dfrac{mol\ Hg}{200.59\ g\ Hg} = 7.98\times 10^{-11}\ M} $$
Clearly, this instrument is not sensitive enough.
One way that might work is by using a concentration cell. Because of Le Chatelier's principle, an electrochemical cell containing the same ions at different concentrations will produce a voltage. The larger the concentration difference, the larger the potential difference:
$$E_{cell}= E^\circ -\dfrac{RT}{nF}\ln{\dfrac{[\ce{Hg^2+}]_{dilute}}{[\ce{Hg^2+}]_{concentrated}}}$$
If we take our saturated $\ce{HgS}$ solution and couple it to a $1\ \mathrm{M}\ \ce{Hg(NO3)2}$ solution, we get $(E^\circ = 0)$:
$$E_{cell}= \mathrm{-\dfrac{(8.314\ J\ mol^{-1}\ K^{-1})(298\ K)}{2(9.65\times 10^4\ C/mol)}\ln{\dfrac{2\times 10^{-27}\ M}{1\ M}}\approx 0.79\ V}$$
As mentioned by Ivan in the comments, this $K_{sp}$ value could also be estimated from thermodynamic data using
$$\Delta G^\circ = -RT\ln K$$ |
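The two numerical steps above (the saturation concentration from $K_{sp}$, then the Nernst potential of the concentration cell) are easy to check; a minimal sketch using the constants from the text:

```python
import math

R = 8.314      # gas constant, J / (mol K)
T = 298.0      # temperature, K
F = 9.65e4     # Faraday constant, C / mol
n = 2          # electrons transferred for the Hg2+ couple

Ksp = 4e-54
# Saturated solution of a 1:1 salt: [Hg2+] = [S2-] = sqrt(Ksp)
hg = math.sqrt(Ksp)
print(hg)   # 2e-27 (mol/L)

# Concentration cell against 1 M Hg(NO3)2, with E_standard = 0
E = -(R * T) / (n * F) * math.log(hg / 1.0)
print(E)    # ~0.79 V
```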
Credit exposure defines the loss in the event of a counterparty defaulting, and expected exposure is the average of all credit exposures.
BUT
When adjusting the CVA calculation to account for wrong-way risk, we replace the expected exposure with the expected exposure at time $T^*$ conditional on this being the counterparty default time.
If exposure is already conditional on default, how is this any different?
EDIT
Here are the definitions of credit exposure and wrong way risk from the John Gregory's book as requested:
Wrong-way risk is used to indicate an unfavorable dependence between exposure and counterparty credit quality.
Credit exposure is the loss in the event that the counterparty defaults. Credit exposure is characterized by the fact that a positive value of a financial instrument corresponds to a claim on a defaulted counterparty.
To characterize exposure we need to answer two questions:
What is the current exposure (the maximum loss if the counterparty defaults now)? What is the exposure in the future (what could be the loss if the counterparty defaults at some point in the future)? This second question is far more complex to answer
Source text in question (emphasis mine):
The presence of wrong-way risk will (unsurprisingly) increase CVA. However, the magnitude of this increase will be hard to quantify, as we shall show in some examples. Wrong-way risk also prevents one from using the (relatively) simple formulas used for CVA in Chapter 12. We can still use the same CVA expression as long as we calculate the exposure conditional upon default of the counterparty. Returning to equation (12.2), we simply rewrite the expression as $$CVA \approx (1 - Recovery) \sum\limits_{j=1}^M DF(t_{j})\,EE(t_{j} \mid t_{j} = \tau_{c})\,PD(t_{j-1},t_{j})$$
where $EE(t_{j} \mid t_{j} = \tau_{c})$ represents the expected exposure at time $t_{j}$ conditional on this being the counterparty default time ($\tau_{c}$). This replaces the previous exposure, which was unconditional. As long as we use the conditional exposure, everything is correct.
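The quoted expression is just a discounted sum over the time grid, with the conditioning only changing which exposure profile is fed in. A minimal sketch (all inputs below are invented placeholders, not real market data):

```python
# Hypothetical inputs on a common time grid t_1..t_M: discount factors DF(t_j),
# conditional expected exposures EE(t_j | t_j = tau_c), and marginal default
# probabilities PD(t_{j-1}, t_j). All numbers are illustrative only.
recovery = 0.4
df  = [0.99, 0.97, 0.95]
ee  = [1.2, 1.5, 1.1]
pd_ = [0.010, 0.012, 0.015]

cva = (1 - recovery) * sum(d * e * p for d, e, p in zip(df, ee, pd_))
print(cva)
```

With unconditional exposures in `ee` this is the plain formula; with wrong-way risk, the same sum is used but each `ee[j]` must be the exposure conditional on default at $t_j$.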
Poor alertness experienced by individuals may lead to serious accidents that impact on people’s health and safety. To prevent such accidents, an efficient automatic alertness states identification is required. Sparse representation-based classification has recently gained a lot of popularity. A classifier from this class typically comprises three stages: dictionary learning, sparse coding and class assignment. Gini index, a recently proposed method, was shown to possess a number of properties that make it a better sparsity measure than the widely used \( l_{0} \)- and \( l_{1} \)-norms. This paper investigates whether these properties also lead to a better classifier. The proposed classifier, unlike the existing sparsity-based ones, embeds the Gini index in all stages of the classification process. To assess its performance, the new classifier was used to automatically identify three alertness levels, namely awake, drowsy, and sleep using EEG signal. The obtained results show that the new classifier outperforms those based on \( l_{0} \)- and \( l_{1} \)-norms.
The aim of the paper is to automatically select the optimal EEG rhythm/channel combinations capable of classifying human alertness states. Four alertness states were considered, namely ‘engaged’, ‘calm’, ‘drowsy’ and ‘asleep’. The features used in the automatic selection are the energies associated with the conventional rhythms, \(\delta , \theta , \alpha , \beta\) and \(\gamma\), extracted from overlapping windows of the different EEG channels. The selection process consists of two stages. In t...
Driver drowsiness is a major cause of fatal accidents, injury, and property damage, and has become an area of substantial research attention in recent years. The present study proposes a method to detect drowsiness in drivers which integrates features of electrocardiography (ECG) and electroencephalography (EEG) to improve detection performance. The study measures differences between the alert and drowsy states from physiological data collected from 22 healthy subjects in a driving simulator-bas...
Highlights:
- Ratio indices computed from a single EEG channel used as drowsiness indicators.
- Delta and gamma brain rhythms successfully used for drowsiness detection.
- Wavelet packet transform as the main tool to localize specific brain frequency ranges.
- Transition from alert to drowsy state is taken as the main event of interest.
- Wilcoxon signed rank test analysis pointed out the contribution of the proposed indices.
Advances in materials engineering, electronic circuits, sensors, signal processing and classification te...
At present, the sparse representation-based classification (SRC) methods of electroencephalograph (EEG) signal analysis have become an important approach for studying brain science. SRC methods mean that the target data is sparsely represented on the basis of a fixed dictionary or learned dictionary, and classified based on the reconstruction criteria or the corresponding features extracted. SRC methods have been used to analyze the EEG signals of epilepsy, mild cognitive impairment (MCI) and Al...
This study develops a real-time drowsiness detection system based on grayscale image processing and PERCLOS to determine if the driver is fatigued. The proposed system comprises three parts: first, it calculates the approximate position of the driver's face in grayscale images, and then uses a small template to analyze the eye positions, second, it uses the data from the previous step and PERCLOS to establish a fatigue model, and finally, based on the driver's personal fatigue model, the system ...
In this paper, we present a vehicle active safety model for vehicle speed control based on driver vigilance detection using low-cost, comfortable, wearable electroencephalographic (EEG) sensors and sparse representation. The proposed system consists of three main steps, namely wireless wearable EEG collection, driver vigilance detection, and vehicle speed control strategy. First of all, a homemade low-cost comfortable wearable brain-computer interface (BCI) system with eight channels is designed...
PhysioNet Sleep EDF database has been the most popular source of data used for developing and testing many automatic sleep staging algorithms. However, the recordings from this database has been used in an inconsistent fashion. For example, arbitrary selection of start and end times from long term recordings, data-hypnogram mismatches, different performance metrics and hypnogram conversion from R&K to AASM. All these differences result in different data sections and performance metrics being use...
Dictionary learning and sparse representation (DLSR) is a recent and successful mathematical model for data representation that achieves state-ofthe-art performance in various elds such as pattern recognition, machine learning, computer vision, and medical imaging. The original formulation for DLSR is based on the minimization of the reconstruction error between the original signal and its sparse representation in the space of the learned dictionary. Although this formulation is optimal for solv...
Sparse representation has attracted much attention from researchers in fields of signal processing, image processing, computer vision, and pattern recognition. Sparse representation also has a good reputation in both theoretical research and practical applications. Many different algorithms have been proposed for sparse representation. The main purpose of this paper is to provide a comprehensive study and an updated review on sparse representation and to supply guidance for researchers. The taxo... |
The Annals of Probability, Volume 29, Number 1 (2001), 558-575.
Inversion de Laplace effective
Abstract
Let $F,G$ be arbitrary distribution functions on the real line and let $\widehat{F},\widehat{G}$ denote their respective bilateral Laplace transforms. Let $\kappa > 0$ and let $h : \mathbb{R}^+ \to \mathbb{R}^+$ be continuous, non-decreasing, and such that $h(u) \ge Au^4$ for some $A > 0$ and all $u \ge 0$. Under the assumptions that
we establish the bound
where $C$ is a constant depending at most on $\kappa$ and $A$, $Q_G$ is the concentration function of $G$, and $l := (\log L)/L + (\log W)/W$, with $W$ any solution to $h(W) = 1/\epsilon$. Improving and generalizing an estimate of Alladi, this result provides a Laplace transform analogue of the Berry-Esseen inequality, which is related to Fourier transforms. The dependence in $\epsilon$ is optimal up to the logarithmic factor $\log W$. A number-theoretic application, developed in detail elsewhere, is described. It concerns so-called lexicographic integers, whose characterizing property is that their divisors are ranked according to size and valuation of the largest prime factor. The above inequality furnishes, among other information, an effective Erdős-Kac theorem for lexicographic integers.
Article information
Source: Ann. Probab., Volume 29, Number 1 (2001), 558-575.
First available in Project Euclid: 21 December 2001
Permanent link: https://projecteuclid.org/euclid.aop/1008956344
DOI: 10.1214/aop/1008956344
Mathematical Reviews (MathSciNet): MR1825164
Zentralblatt MATH: 1020.60008
Subjects: Primary: 60E10 (characteristic functions; other transforms); Secondary: 11N25 (distribution of integers with specified multiplicative constraints), 40E05 (Tauberian theorems, general), 44A10 (Laplace transform)
Citation
Stef, André; Tenenbaum, Gérald. Inversion de Laplace effective. Ann. Probab. 29 (2001), no. 1, 558--575. doi:10.1214/aop/1008956344. https://projecteuclid.org/euclid.aop/1008956344 |
In general, to use the method of least squares, a linear stochastic system is modeled as: \begin{equation} y = ax + \eta \end{equation}
where $y$ is an observed variable, $x$ is an input, and $\eta$ is the error in observation. Then the parameter $a$ is estimated in the least squares sense by minimizing the error $\left(y - ax\right)^2$.
Assume that instead of modeling the system in this way, I do some change of variables and write the equation of the system as:
\begin{equation} \alpha\beta = y + \eta \end{equation}
where, $\alpha$ and $\beta$ need to be estimated. Let us also assume that I know the range of values that $\beta$ can take. Then I rewrite the estimation problem as:
\begin{equation} \alpha = \frac{y}{\beta} + \nu \end{equation}
Now suppose that I have $n$ observations, i.e. $y_1, y_2, \cdots, y_n$, and I know that $\beta$ can take $m$ possible values, i.e. $\beta_1, \beta_2, \cdots, \beta_m$. For each value of $\beta$, I compute the $n$ values of $\alpha$ and find the variance of these $n$ values. Since $\alpha$ is constant, ideally the variance should be zero. But due to the error $\nu$, the variance will be non-zero. So, the value of $\beta$ that minimizes the variance of the $n$ values of $\alpha$ is the optimal estimate for $\beta$; $\alpha$ can then be estimated.
Is this a correct approach to remodel the least squares problem? Potential questions can be:
Why not use least squares? (Ans: Because in some multivariate cases, $x$ can be badly conditioned.)
If $y$ and $\beta$ are normally distributed then $\frac{y}{\beta}$ has a Cauchy distribution, and the Cauchy distribution has no finite variance. So, minimizing the variance of $\frac{y}{\beta}$ does not make sense.
So, I want to know whether the modeling mentioned above makes sense. |
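A quick numerical look at the proposed selection rule (the data and candidate values below are invented). One caveat worth noting: for a single scalar $\beta$, $\operatorname{Var}(y/\beta) = \operatorname{Var}(y)/\beta^2$, so the raw variance always prefers the largest admissible candidate regardless of the data; the rule needs extra structure to be informative.

```python
import statistics

# Invented data: noisy observations of alpha * beta with alpha = 2, beta = 3
y = [6.1, 5.9, 6.05, 5.95, 6.0]
betas = [1.0, 2.0, 3.0, 4.0]   # assumed admissible values of beta

# For each candidate beta, form alpha_i = y_i / beta and score the spread.
# Caveat: Var(y / b) = Var(y) / b**2, so with a single scalar beta this
# criterion always selects the largest candidate, not the "true" one.
scores = {b: statistics.variance(yi / b for yi in y) for b in betas}
best = min(scores, key=scores.get)
print(best)   # 4.0, not the true 3.0
```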
I have to compute the surface brightness as a function of the radius from the following set of data: {right ascension ($\alpha$), declination ($\delta$), magnitude (m)}. I also know the center of the set $\left(\alpha_c, \delta_c \right)$.
The surface brightness in logarithmic units (mag/arcsec$^2$) can be computed by (1): \begin{equation} \mu = m + 2.5\log_{10}\Omega \end{equation}
and the solid angle can be computed as $d\Omega = \frac{dS}{d^2} = \frac{d^2 \sin\theta d\theta d\phi}{d^2}$, where $d$ is the distance to the stars.
What I did is to compute the integrated magnitude in a certain region given by $\alpha_2 > \alpha > \alpha_1$ and $\delta_2 > \delta > \delta_1$. In that case the solid angle should be given by $\Omega = (\alpha_2 - \alpha_1)(\sin\delta_2 - \sin\delta_1)$, because $\delta$ is the complementary angle of $\theta$.
My question is: which radius should I consider if I want to plot $\mu(r)$? I have considered the angular separation between ($\alpha_c, \delta_c$) and ($\alpha_1, \delta_c$), and also the angular separation between ($\alpha_c, \delta_c$) and ($\alpha_c, \delta_1$), but neither of these options reproduces previous results.
I also counted the stars whose angular separation satisfies $r < r_{test}$ (edited: in rings of radius $r_{test}$), computed the integrated magnitude, used $\Omega = 2\pi(1 - \cos r_{test})$, and finally computed $\mu(r_{test})$.
Both approaches give more or less the same results; what am I doing wrong here?
By wrong, I mean that I am getting larger values for the surface brightness than previously published results.
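One detail worth checking: a raw difference in $\alpha$ is not an angular distance (it has to be scaled by $\cos\delta$), which may explain the mismatch with the coordinate-offset radii. A sketch that works with true angular separations and annuli (the function and variable names are mine; magnitudes are apparent magnitudes, and the annulus area uses the small-angle approximation):

```python
import math

def angular_sep(ra1, dec1, ra2, dec2):
    """Angular separation (radians); all inputs in radians."""
    c = (math.sin(dec1) * math.sin(dec2)
         + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
    return math.acos(min(1.0, c))

def surface_brightness(stars, center, r_in, r_out):
    """mu in mag/arcsec^2 for stars in the annulus r_in <= r < r_out (radians).

    stars: iterable of (ra, dec, mag); center: (ra_c, dec_c); all in radians.
    """
    ra_c, dec_c = center
    fluxes = [10.0 ** (-0.4 * m) for ra, dec, m in stars
              if r_in <= angular_sep(ra, dec, ra_c, dec_c) < r_out]
    m_tot = -2.5 * math.log10(sum(fluxes))     # integrated magnitude
    to_arcsec = math.degrees(1.0) * 3600.0     # radians -> arcsec
    omega = math.pi * ((r_out * to_arcsec) ** 2 - (r_in * to_arcsec) ** 2)
    return m_tot + 2.5 * math.log10(omega)
```

The radius to plot $\mu$ against is then naturally the mid-point of each annulus (in true angular units), rather than a raw coordinate offset.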
We show that the multi-commodity max-flow/min-cut gap for series-parallel graphs can be as bad as 2, matching a recent upper bound by Chakrabarti et al. for this class, and resolving one side of a conjecture of Gupta, Newman, Rabinovich, and Sinclair.
This also improves the largest known gap for planar graphs ... more >>>
We give an explicit (in particular, deterministic polynomial time)
construction of subspaces $X \subseteq \mathbb{R}^N$ of dimension $(1-o(1))N$ such that for every $x \in X$, $$(\log N)^{-O(\log\log\log N)} \sqrt{N}\, \|x\|_2 \leq \|x\|_1 \leq \sqrt{N}\, \|x\|_2.$$ If we are allowed to use $N^{1/\log\log N}\leq N^{o(1)}$ random bits and ... more >>>
Kumar, Abhinav; Lokam, Satyanarayana V.; Patankar, Vijay M.; Sarma M. N., Jayalal
Using Elimination Theory to construct Rigid Matrices
Abstract. The rigidity of a matrix $A$ for target rank $r$ is the minimum number
of entries of $A$ that must be changed to ensure that the rank of
the altered matrix is at most $r$. Since its introduction by Valiant
\cite{Val77}, rigidity and similar rank-robustness functions of
matrices have found numerous applications in circuit complexity,
communication complexity, and learning complexity. Almost all $n \times n$
matrices over an infinite field have a rigidity of $(n-r)^2$. It is a
long-standing open question to construct infinite families of
\emph{explicit} matrices even with superlinear rigidity when $r=\Omega(n)$.
In this paper, we construct an infinite family of complex matrices
with the largest possible, i.e., $(n-r)^2$, rigidity. The entries of
an $n \times n$ matrix in this family are distinct primitive roots of unity
of orders roughly $\exp(n^4 \log n)$. To the best of our knowledge, this is
the first family of concrete (but not entirely explicit) matrices
having maximal rigidity and a succinct algebraic description.
Our construction is based on elimination theory of polynomial
ideals. In particular, we use results on the existence of polynomials
in elimination ideals with effective degree upper bounds (effective
Nullstellensatz). Using elementary algebraic geometry, we prove that
the dimension of the affine variety of matrices of rigidity at
most $k$ is exactly $n^2 - (n-r)^2 +k$. Finally, we use elimination theory to
examine whether the rigidity function is semicontinuous.
BibTeX - Entry
@InProceedings{kumar_et_al:LIPIcs:2009:2327,
author = {Abhinav Kumar and Satyanarayana V. Lokam and Vijay M. Patankar and Jayalal Sarma M. N.},
title = {{Using Elimination Theory to construct Rigid Matrices}},
booktitle = {IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science},
pages = {299--310},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-13-2},
ISSN = {1868-8969},
year = {2009},
volume = {4},
editor = {Ravi Kannan and K. Narayan Kumar},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address = {Dagstuhl, Germany},
URL = {http://drops.dagstuhl.de/opus/volltexte/2009/2327},
URN = {urn:nbn:de:0030-drops-23278},
doi = {10.4230/LIPIcs.FSTTCS.2009.2327},
annote = {Keywords: Matrix Rigidity, Lower Bounds, Circuit Complexity}
}
Keywords: Matrix Rigidity, Lower Bounds, Circuit Complexity
Seminar: IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science
Issue date: 2009
Date of publication: 2009
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate?
I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol.
It just seems like this argument is all about the sets of n-simplices. Which is the trivial part.
lol no i mean, i'm following it by context actually
so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side
@user1732 haha thanks! we had no idea if that'd actually find its way to the internet...
@JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels
@JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC
@IlaRossi i would imagine that this is in goerss--jardine? ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes
@JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81
@HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary)
@JonathanBeardsley what?! i really liked that picture! i wonder why they removed it
@HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world
@HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)?
i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the appendix here: arxiv.org/pdf/1510.03525.pdf
as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit. combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$
@JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat)
I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism
Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open
not put all my eggs in one basket, as it were
I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats
Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality
@JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak).
There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k...
@JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad
It's enough to show everything works for generating cofaces and codegeneracies
the codegeneracies are free, the 0 and nth cofaces are free
all of those can be done treating frak{C} as a black box
the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit for codimension 1 face inclusions
the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex
In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation).
> Thus, using appropriate tags one can increase one's chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc.) or, worse, just newly created tags, one might miss a chance to give visibility to one's question.
I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers.
You are asking questions far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for them - otherwise I would edit/retag the posts myself. (Other than the possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.)
I just wanted to mention this, in case it helps you when asking question here. (Although it seems that you're doing fine.)
@MartinSleziak even I was not sure what other tags are appropriate to add.. I will see other questions similar to this, see what tags they have added and will add if I get to see any relevant tags.. thanks for your suggestion.. it is very reasonable,.
You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags
I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them. |
The alpha invariant $\alpha(X)$ of a Fano manifold $X$ of dimension $n$ is defined as the infimum of log canonical thresholds of (effective) $\mathbb{Q}$-divisors $D\sim_{\mathbb{Q}}-K_X$. Similarly, for $G\subset Aut(X)$ a compact subgroup of the automorphism group, one defines $\alpha_G(X)$ considering only $G$-invariant divisors. The alpha invariant has a corresponding analytic definition involving complex singularity exponents of singular hermitian metrics [2, Appendix].
Tian introduced this invariant and proved that the lower bound $\alpha_G(X)>\frac{n}{n+1}$ implies the existence of a Kähler-Einstein metric [1] (in fact, even today it is one of the few sufficient conditions which is checkable in practice). I'd like to know whether this theorem is sharp. That is:
Question: Are there examples of Fano manifolds such that $\alpha_G(X)=\frac{n}{n+1}$ but without a Kähler-Einstein metric?
I'd also be interested in any partial results in the positive direction.
An example I know of with $\alpha(X)=\frac{n}{n+1}$ is a del Pezzo surface of degree $4$ (this is due to Cheltsov [3]), however by Tian's classification of Kähler-Einstein metrics on del Pezzo surfaces [4], such surfaces are known to admit Kähler-Einstein metrics.
References:
[1] G. Tian. On Kähler-Einstein metrics on certain Kähler manifolds with $c_1(M)>0$.
[2] I. Cheltsov, C. Shramov, Appendix by J. P. Demailly. Log canonical thresholds of smooth Fano threefolds.
[3] I. Cheltsov. Log canonical thresholds of del Pezzo surfaces.
[4] G. Tian. On Calabi's conjecture for complex surfaces with positive first Chern class.
I'm obsessed with sbt-microsites. Sbt-microsites is a fantastic plugin for SBT (the Scala Build Tool) that makes it easy to generate a beautiful sidecar site for your software project, full of code checked by your CI!
I recently built a microsite for ScalaRL, my in-progress functional Reinforcement Learning library, and found that adding support for Mathjax (a javascript math equation renderer) to the microsite was not obvious. It's not hard... just not clear from the Mathjax docs how to get past some limitations with sbt-microsites.
This post is a pitch for sbt-microsites, plus a short guide on how to get Mathjax working on your microsite.
SBT-Microsites
As an example of what you can make with this plugin, check out the Algebird microsite I built a couple of years ago. On the initial page you'll see this lovely series of three Scala lines with their results printed as comments below:
That snippet was generated from this page in the Algebird repository, specifically from this snippet:
### What can you do with this code?

```tut:book
import com.twitter.algebird._
import com.twitter.algebird.Operators._
Map(1 -> Max(2)) + Map(1 -> Max(3)) + Map(2 -> Max(4))
```
This means that if any of the examples in your documentation get out-of-date due to API changes or bugs... your code will no longer compile. If you run your tests in a CI environment, pull requests that break your documentation will fail! This is an incredible advantage, and will put your project levels beyond all of the sad Scala projects out there with out-of-date Github wikis, rotting away and pissing off users.
Here's a page from the Algebird microsite for the `Min` and `Max` data structures: http://twitter.github.io/algebird/datatypes/min_and_max.html
This example contains in-line assertions that act as tests: they will fail the documentation build if the code still compiles but a behavior change has brought your example out of date.
```tut:book
val loser: Min[TwitterUser] =
  Min(barackobama) + Min(katyperry) + Min(ladygaga) + Min(miguno) + Min(taylorswift)
assert(miguno == loser.get) // The build will fail if the assertion fails.
```
This covers almost none of how to actually set up sbt-microsites, of course. For that, check out these resources:
- the sbt-microsites microsite's Getting Started docs
- Tut's website, for examples on inline code blocks
- MDoc, a more modern inline code block runner inspired by Tut. (Tut runs code like the Repl, while MDoc has more advanced support for a literate programming style.)
- The ScalaRL microsite, along with its Settings block in the project's build.sbt file

Adding Mathjax Support
Mathjax is a Javascript library that scans a page for blocks of LaTeX demarcated by dollar signs, like this: `$<equation here>$`, and renders them into beautiful math equations on the page. (Single dollar signs for inline math, double dollars for standalone blocks.) If you wanted to embed the quadratic formula, you could write:
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}$$
And MathJax would render the block like this:
$$x = {-b \pm \sqrt{b^2-4ac} \over 2a}$$
Usually you install Mathjax by dropping something like the following in your website's footer:
```html
<!-- MathJax -->
<script type="text/x-mathjax-config">
MathJax.Hub.Config({
  tex2jax: {
    inlineMath: [['$','$'], ['\\(','\\)']],
    processEscapes: true
  },
  TeX: { equationNumbers: { autoNumber: "AMS" } }
});
</script>
<script type="text/javascript" async
  src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.4/MathJax.js?config=TeX-MML-AM_CHTML"></script>
```
These two scripts set a configuration for Mathjax and then asynchronously load the script that uses that configuration to render your equations. (This is the config I use on this blog, by the way.)
Sbt-microsites makes the process slightly more difficult. The plugin doesn't give you access to the full page template; instead, it offers various hooks into the template via its large set of configuration options, documented here. The two problems I hit were:
- There's no way that I could find to inject an unescaped block of Javascript into the site's footer.
- Mathjax can't read its configuration from a standalone Javascript file. A standalone Javascript file can't pull in another file asynchronously.

The Solution
I found my answer on the Mathjax documentation's section on "Loading Mathjax Dynamically". To get Mathjax working on the site, I had to create a file that would load and configure the library dynamically, and place it in the sbt-microsites default location for custom javascript.
I created a file at /docs/src/main/resources/microsite/js/mathjax.js (github link here) with the following content:
```javascript
(function () {
  var head = document.getElementsByTagName("head")[0], script;
  script = document.createElement("script");
  script.type = "text/x-mathjax-config";
  script[(window.opera ? "innerHTML" : "text")] =
    "MathJax.Hub.Config({\n" +
    "  tex2jax: { inlineMath: [['$','$'], ['\\\\(','\\\\)']], processEscapes: true},\n" +
    "  TeX: { equationNumbers: { autoNumber: \"AMS\" } }\n" +
    "});";
  head.appendChild(script);

  script = document.createElement("script");
  script.type = "text/javascript";
  script.src = "https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/MathJax.js?config=TeX-MML-AM_CHTML";
  head.appendChild(script);
})();
```
This block:

- Configures Mathjax to use either `$<equation>$` or `\(<equation>\)` syntax to block off inline equations, and
- adds support for automatic equation numbering, if you add your block equations like this:

\begin{equation}
E = mc^2
\end{equation}
You can browse the Mathjax site for many more configuration options.
Sbt-microsites looks for custom javascript files in the resources/microsite/js subdirectory of your docs project, but if you like you can override that location in your docs config by adding this key:
micrositeJsDirectory := (resourceDirectory in Compile).value / "site" / "scripts"
Now, I think these scripts only get injected into sub-pages, and not onto your main index page. That's probably a bug that the plugin could fix in the future, so let them know if you have an index bedazzled with equations.
Best of luck getting this set up on your own projects! Let me know in the comments if you have any questions or run into trouble. |
Understand the problem
Consider a circle of radius 6 as given in the diagram below. Let \(B,C,D\) and \(E\) be points on the circle such that \(BD\) and \(CE\), when extended, intersect at \(A\). If \(AD\) and \(AE\) have length 5 and 4 respectively, and \(DBC\) is a right angle, then show that the length of \(BC\) is \(\frac{12+9\sqrt{15}}{5}\).
Diagram : click here.
Source of the problem
I.S.I. (Indian Statistical Institute, B.Stat, B.Math) Entrance. Subjective Problem 2 from 2017
Topic
Plane Geometry
Difficulty Level
6 out of 10
Suggested Book
‘An excursion in Mathematics’ published by Bhaskaracharya Pratishthana
Start with hints
Do you really need a hint? Try it first!
Given, \(AD=5, AE=4\) and \(\angle DBC=90^\circ \).
Since \(D,B,C\) are points on the circle of radius 6 and \(\angle DBC=90^\circ \), the chord \(DC\) subtends a right angle at \(B\). By the converse of Thales' theorem, \(DC\) is a diameter of the circle
\(\Rightarrow DC=6×2=12\).
Now DC is diameter of the circle \(\Rightarrow \angle DEC=90^\circ \).
Therefore \(\angle DEA\) is also a right angle, since \(\angle DEA\) and \(\angle DEC\) are supplementary.
The length of \(DE=\sqrt{5^2-4^2}=3\).
And the length of \(EC=\sqrt{12^2-3^2}=3\sqrt{15}\).
Therefore \(AC=AE+EC=4+3\sqrt{15}\).
Triangles \(∆AED\) and \(∆ACB\) are similar, since they share \(\angle A\) and \(\angle AED=\angle ABC=90^\circ \). Hence,
\(\frac{AD}{DE}=\frac{AC}{BC} \Rightarrow BC=\frac{AC\cdot DE}{AD}=\frac{(4+3\sqrt{15})\cdot 3}{5}\).
Therefore the length of \(BC\) is \(\frac{12+9\sqrt{15}}{5}\). (Ans.)
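As a numerical sanity check, the chain of computations above can be reproduced in a few lines (this script is illustrative only, not part of the original solution):

```python
import math

AD, AE, R = 5.0, 4.0, 6.0
DC = 2 * R                          # DC is a diameter, so DC = 12

# Right triangle AED: DE^2 + AE^2 = AD^2
DE = math.sqrt(AD**2 - AE**2)       # = 3
# Right triangle DEC: DE^2 + EC^2 = DC^2
EC = math.sqrt(DC**2 - DE**2)       # = 3*sqrt(15)
AC = AE + EC                        # = 4 + 3*sqrt(15)

# Similar triangles AED ~ ACB give BC = AC * DE / AD
BC = AC * DE / AD
assert abs(BC - (12 + 9 * math.sqrt(15)) / 5) < 1e-12
```

The assertion confirms that \(BC=\frac{12+9\sqrt{15}}{5}\approx 9.37\), matching the boxed answer.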
Watch the video (Coming Soon…)

Connected Program at Cheenta: I.S.I. & C.M.I. Entrance Program
Indian Statistical Institute and Chennai Mathematical Institute offer challenging bachelor’s program for gifted students. These courses are B.Stat and B.Math program in I.S.I., B.Sc. Math in C.M.I.
The entrances to these programs are far more challenging than usual engineering entrances. Cheenta offers an intense, problem-driven program for these two entrances. |
wabik About me
Name wabik User since July 7, 2008 Number of add-ons developed 0 add-ons Average rating of developer's add-ons Not yet rated My Reviews Rated 5 out of 5 stars
Why 1 new e-mail is showed as 6 unread mails at the badge?
It is ok for RSS feeds ... What can I do to fix it? Rated 4 out of 5 stars
It would be great if the notifications were permanent and not vanishing ... or a possibility to set a time for them to stay on top. This review is for a previous version of the add-on (1.1.0).
Rated 3 out of 5 stars
How to import contacts in the Gadu-Gadu network, since it is not done automatically? This review is for a previous version of the add-on (2.10.4).
Rated 5 out of 5 stars
Yes Yes Yes .... now I can give you 5 stars :)
It is great!!! Rated 2 out of 5 stars
Hi,
It is great extension but one thing!!! When installed one cannot forward messages as an attachment :(((( Too bad :(((( Rated 5 out of 5 stars
Hello :) Great !!! And could you tell me how did you manage to add ThunderNote button into this mode???
Brgds, Rated 4 out of 5 stars
The AB icon seems to be too big ???!!!This review is for a previous version of the add-on (0.5).
Rated 5 out of 5 stars
It would be great if you could add all possible symbols or sentences that can be converted. E.g. it does not work for \hat H \Psi = \left[{\hat T}+{\hat V}+{\hat U}\right]\Psi = \left[\sum_i^N -\frac{\hbar^2}{2m}\nabla_i^2 + \sum_i^N V(\vec r_i) + \sum_{i… This review is for a previous version of the add-on (1.2).
Determination of low strain interfaces via geometric matching
Version: 2017.1
This tutorial introduces the new GeneralizedLatticeMatch method for combining two bulk crystals into an interface.
On many occasions, one is interested in creating a realistic interface between two materials, even without having precise structural information on the two surfaces that form the interface. The GeneralizedLatticeMatch method facilitates this process by automatically finding all the possible interface supercells between the two crystals based only on their bulk crystalline structure. The method is an optimized version of the algorithm described in [JLS+17]. Compared to the lattice-matching method used in the Interface Builder, described in The Interface Builder in QuantumATK, the present method is unbiased as it considers all the possible surfaces of the two crystals forming the interface.
A number of structural parameters are calculated for each interface, which allows one to easily analyze the matching pattern and choose the most appropriate interface to be used for further studies. The structure of the desired interface can then be easily created using the Interface Builder implemented in QuantumATK.
Method description
The following describes the main steps of the algorithm used in the GeneralizedLatticeMatch method, which is very similar to that used in the Interface Builder (see The Interface Builder in QuantumATK).
Initially, the possible surface vectors \([\mathbf{a}_{1}, \mathbf{a}_{2}]\) of the first of the two materials forming the interface are constructed, starting from a linear combination of the Bravais vectors of the primitive cell:\[\mathbf{a}_1 = \sum_{i=1}^3 c_i \mathbf{u}_i \quad c_i \in \mathbb{Z},\]\[\mathbf{a}_2 = \sum_{i=1}^3 c_i^\prime \mathbf{u}_i \quad c_i^\prime \in \mathbb{Z},\]
and by a subsequent projection of the resulting vectors from \(\mathbb{R}^3\) to \(\mathbb{R}^2\). A similar procedure is applied to construct the surface vectors \([\mathbf{b}_{1}, \mathbf{b}_{2}]\) of the second surface. As will be shown in the section Input and output description, the number of generated surface cells can be limited by specifying:
- The maximum value of the Miller indexes \([h,k,l]\) that define each surface;
- The maximum length of the lattice vectors \([\mathbf{a}_{1}, \mathbf{a}_{2}]\) and \([\mathbf{b}_{1}, \mathbf{b}_{2}]\).
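The enumeration of candidate surface vectors can be sketched as follows. The helper below is a hypothetical illustration of the idea, not the QuantumATK implementation; the FCC aluminum lattice constant is an assumed example value.

```python
import itertools

import numpy as np

def candidate_surface_vectors(bravais, max_index=3, max_length=50.0):
    """Enumerate v = sum_i c_i u_i with |c_i| <= max_index and |v| <= max_length.

    `bravais` is a 3x3 array whose rows are the primitive vectors u_1, u_2, u_3
    (in Angstrom). Illustrative sketch only, not a QuantumATK routine.
    """
    vectors = []
    for coeffs in itertools.product(range(-max_index, max_index + 1), repeat=3):
        if coeffs == (0, 0, 0):
            continue
        v = np.array(coeffs) @ bravais  # integer combination of the rows
        if np.linalg.norm(v) <= max_length:
            vectors.append((coeffs, v))
    return vectors

# Example: FCC aluminum primitive cell (assumed a ≈ 4.05 Å)
a = 4.05
fcc = 0.5 * a * np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
cands = candidate_surface_vectors(fcc, max_index=1, max_length=6.0)
```

In the real method, pairs of such vectors are then projected to the surface plane and matched across the two materials.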
Each pair of surface cells is matched using the same lattice-match method as the Interface Builder, described in the technical note The Interface Builder in QuantumATK.
The average strain is then calculated as:\[\mathbf{\bar{\varepsilon}} = \sqrt{\frac{\varepsilon_{11}^2 + \varepsilon_{22}^2 + \varepsilon_{11}\varepsilon_{22} + \varepsilon_{12}^2}{4}}\]
where \(\varepsilon_{11}\), \(\varepsilon_{22}\) and \(\varepsilon_{12}\) are the components of the 2D strain tensor, as defined in the technical note The Interface Builder in QuantumATK.
Note
Notice that this definition of the average strain differs from the one used in The Interface Builder in QuantumATK. The present definition is more appropriate for this method, as it is an invariant of the strain tensor.
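The average-strain formula above is straightforward to evaluate from the three tensor components; the function below is an illustrative re-implementation of that formula, not QuantumATK code.

```python
import math

def average_strain(e11, e22, e12):
    """Average strain invariant used to rank interface matches:
    sqrt((e11^2 + e22^2 + e11*e22 + e12^2) / 4).
    Illustrative re-implementation of the formula in the text."""
    return math.sqrt((e11**2 + e22**2 + e11 * e22 + e12**2) / 4.0)

# A pure biaxial strain of 1% in both directions:
biaxial = average_strain(0.01, 0.01, 0.0)  # ≈ 0.00866
```

Note that a pure 1% biaxial strain gives an average strain slightly below 1%, reflecting the 1/4 normalization of the invariant.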
Input and output description
In order to use the GeneralizedLatticeMatch method one can set up a simple script as follows:
# Read the BulkConfiguration of the primitive cell of the
# first material
configuration_1 = nlread('InAs.py', BulkConfiguration)[-1]

# Read the BulkConfiguration of the primitive cell of the
# second material
configuration_2 = nlread('Al.py', BulkConfiguration)[-1]

# Run the GeneralizedLatticeMatch method
generalized_lattice_match = GeneralizedLatticeMatch(
    configuration_1,
    configuration_2,
    max_strain=0.02,
    maximum_miller_index=3,
    longest_surface_lattice_vector=50*Angstrom,
    max_surface_area=200.0*Angstrom**2,
    user_given_miller_index=None,
)
The script reads two input files, each one containing a BulkConfiguration of the bulk primitive cell of one of the two materials that form the interface. Then, it applies the GeneralizedLatticeMatch method to these two structures. A number of input parameters can be set to control the precision and the extent of the search for possible interface supercells. The full list of input parameters is:
configuration_1: BulkConfiguration of the bulk primitive cell of the first material.
configuration_2: BulkConfiguration of the bulk primitive cell of the second material.
max_strain: Maximum strain applied to each of the two surfaces.
maximum_miller_index: Maximum value of the Miller indexes \(\left[ h, k, l \right]\) that define each of the two surfaces.
longest_surface_lattice_vector: Maximum length of each surface lattice vector \([\mathbf{a}_{1}, \mathbf{a}_{2}]\) and \([\mathbf{b}_{1}, \mathbf{b}_{2}]\).
max_surface_area: Maximum value of the surface area of the interface supercell.
user_given_miller_index: Predefined Miller indexes of the surface of configuration_1.
Note
When the parameter user_given_miller_index is set to a value different from None, the search for the possible surfaces is restricted to the second material, whereas the surface of the first material is kept fixed. This option will be used in Example 2: Lattice match between a bulk system with a predefined surface.
After the possible matches are calculated, the matching results are printed directly in the QuantumATK log file of the calculation. The matches will be listed in order of increasing strain, together with a number of output parameters that characterize each match.
+----------------------------------------------------------------------------------------+
|        A               B         Strain   Atoms    Area   Aspect    Angle   Rotation    |
+----------------------------------------------------------------------------------------+
[ 1 0 0] >-< [ 1 0 0]   0.000110     29      72.9     1.0     90.0      0.0
[ 2 2 1] >-< [ 1 0 0]   0.000110     23     163.9     2.2    153.4     26.6
[ 3 3 2] >-< [ 1 1 1]   0.004030     28     170.9     1.2    162.6     60.3
The output parameters are:
A: Miller indexes \(\left[h, k, l \right]\) of the first surface.
B: Miller indexes \(\left[h, k, l \right]\) of the second surface.
Strain: Maximum strain of the two surfaces.
Atoms: Total number of atoms in the interface supercell.
Area: Surface area of the interface supercell.
Aspect: Aspect ratio between the two surface vectors of the interface supercell. The ratio is calculated with the longer vector in the numerator.
Angle: Angle between the two vectors of the interface supercell.
Rotation: Rotation between the two surfaces in the interface supercell.
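As a rough illustration of how the Area, Aspect and Angle columns are defined, the following sketch (plain Python, not QuantumATK code; the example vectors are made up) computes them for a pair of 2D surface lattice vectors:

```python
import math

def cell_descriptors(a1, a2):
    """Area, aspect ratio (longer/shorter) and angle in degrees for a
    2D surface cell spanned by lattice vectors a1 and a2."""
    ax, ay = a1
    bx, by = a2
    la = math.hypot(ax, ay)
    lb = math.hypot(bx, by)
    area = abs(ax * by - ay * bx)        # |a1 x a2|
    aspect = max(la, lb) / min(la, lb)   # longer vector in the numerator
    angle = math.degrees(math.acos((ax * bx + ay * by) / (la * lb)))
    return area, aspect, angle

# A made-up square 4.05 Å x 4.05 Å cell:
print(cell_descriptors((4.05, 0.0), (0.0, 4.05)))  # ≈ (16.4025, 1.0, 90.0)
```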
Example 1: Lattice match between two bulk systems¶
In this first example, you will calculate the possible matches between indium arsenide and aluminum.
To obtain the structure file containing the BulkConfiguration of bulk InAs, open QuantumATK, go to the Builder and click on . In the Database, select the primitive cell of bulk InAs shown in the figure below, and add it to the Stash by clicking on the button.
Repeat the same procedure to add to the Stash the structure of the primitive cell of bulk Al, which is shown in the figure below.
Set up the script for the GeneralizedLatticeMatch method as shown below:
# Read the BulkConfiguration of the primitive cell of the
# first material
configuration_1 = nlread('InAs.py', BulkConfiguration)[-1]

# Read the BulkConfiguration of the primitive cell of the
# second material
configuration_2 = nlread('Al.py', BulkConfiguration)[-1]

# Run the GeneralizedLatticeMatch method
generalized_lattice_match = GeneralizedLatticeMatch(
    configuration_1,
    configuration_2,
    max_strain=0.02,
    maximum_miller_index=3,
    longest_surface_lattice_vector=50*Angstrom,
    max_surface_area=200.0*Angstrom**2,
    user_given_miller_index=None,
)
Alternatively, you can download the script from here:
match_InAs_Al.py
In this example, the scan will be performed by considering surfaces with a maximum strain of 0.02, with Miller indexes \(h,k,l \leq 3\), with a maximum length of the lattice vectors of 50 Å, and with an upper threshold for the surface area of 200 Å\(^2\).
Run the script in the terminal as atkpython match_InAs_Al.py > match_InAs_Al.log. The output will look as shown below. There are several possible matches, so for brevity only the first 20 matches are shown.
+------------------------------------------------------------------------------+
|                                                                              |
| Atomistix ToolKit 2017.1 [Build ce08f12]                                     |
|                                                                              |
+------------------------------------------------------------------------------+

|--------------------------------------------------|
Miller planes for A :
==================================================
|--------------------------------------------------|
Miller planes for B :
==================================================
|--------------------------------------------------|
Matching Miller planes :
==================================================
+----------------------------------------------------------------------------------------+
|      A             B          Strain   Atoms    Area   Aspect   Angle   Rotation       |
+----------------------------------------------------------------------------------------+
[ 1 0 0] >-< [ 1 0 0]        0.000110     29     72.9    1.0      90.0     0.0
[ 2 2 1] >-< [ 1 0 0]        0.000110     23    163.9    2.2     153.4    26.6
[ 3 3 2] >-< [ 1 1 1]        0.004030     28    170.9    1.2     162.6    60.3
[ 3 1 1] >-< [ 2 1 1]        0.004030     10    120.8    1.7      90.0     0.0
[ 3 1 1] >-< [ 3 1 1]        0.004250     16    120.8    1.7     106.8    33.6
[ 3 3 1] >-< [ 3 3 1]        0.004250     16    158.8    2.2     102.9     0.0
[ 3 3 1] >-< [ 3 3 1]        0.005470     13    158.8    2.2     102.9     0.0
[ 3 1 1] >-< [ 3 1 1]        0.005470     13    120.8    1.7     106.8    33.6
[ 2 1 1] >-< [ 2 1 1]        0.005470     17    178.5    2.4      90.0     0.0
[ 2 1 0] >-< [ 2 1 0]        0.005470     13    162.9    1.2     114.1     0.0
[ 1 1 1] >-< [ 1 1 1]        0.005470     13     63.1    1.0     120.0    60.0
[ 1 1 0] >-< [ 1 1 0]        0.005470     17    103.0    1.4      90.0     0.0
[ 1 0 0] >-< [ 1 0 0]        0.005470     13     72.9    1.0      90.0     0.0
[ 1 0 0] >-< [ 2 2 1]        0.005470      7     72.9    1.0      90.0     0.0
[ 3 2 2] >-< [ 1 1 1]        0.005940     23    150.2    1.2     157.6    60.5
[ 1 0 0] >-< [ 2 1 1]        0.006130     19    182.2    3.9      39.8     0.6
[ 3 1 1] >-< [ 1 1 0]        0.007160     18    151.0    1.7      73.2    22.6
[ 3 1 1] >-< [ 1 0 0]        0.007730     19    120.8    1.7      90.0     0.0
[ 3 2 2] >-< [ 3 1 1]        0.008250     13    150.2    2.3     115.9    35.2
[ 1 0 0] >-< [ 1 1 0]        0.008420     28    200.4    2.2      79.7    38.2
Notice how the match with the least strain is that between the \(\langle 100 \rangle\) surface of InAs and the \(\langle 100 \rangle\) surface of Al.
Example 2: Lattice match between a bulk system with a predefined surface¶
In some cases, one could be interested in finding the possible matches of a bulk material to a given surface. In this second example, you will investigate how to find the possible matches of bulk aluminium to the InAs(111) surface. InAs nanowires with \(\langle 111 \rangle\)-oriented facets have been synthesized in Ref. [KZC+15], and it has been demonstrated experimentally that \(\langle 111 \rangle\)-oriented Al layers can be grown on this surface.
Use the same structure files as in Example 1: Lattice match between two bulk systems, and setup the script for the GeneralizedLatticeMatch calculation as follows:
# Read the BulkConfiguration of the primitive cell of the
# first material
configuration_1 = nlread('InAs.py', BulkConfiguration)[-1]

# Read the BulkConfiguration of the primitive cell of the
# second material
configuration_2 = nlread('Al.py', BulkConfiguration)[-1]

# Run the GeneralizedLatticeMatch method
generalized_lattice_match = GeneralizedLatticeMatch(
    configuration_1,
    configuration_2,
    max_strain=0.02,
    maximum_miller_index=3,
    longest_surface_lattice_vector=50*Angstrom,
    max_surface_area=200.0*Angstrom**2,
    user_given_miller_index=(1,1,1),
)
Notice how the user_given_miller_index option, highlighted in yellow, has now been set to user_given_miller_index=(1,1,1). Therefore, in this case only the matches of bulk Al to the InAs(111) surface will be considered. You can also download the script from here:
match_InAs111_Al.py
Run the script in the terminal as atkpython match_InAs111_Al.py > match_InAs111_Al.log. The output will look as shown below. Notice how there are fewer possible matches than in the output of Example 1: Lattice match between two bulk systems. This is because only one surface is now considered for the material denoted A.
+------------------------------------------------------------------------------+
|                                                                              |
| Atomistix ToolKit 2017.1 [Build ce08f12]                                     |
|                                                                              |
+------------------------------------------------------------------------------+

|--------------------------------------------------|
Miller planes for B :
==================================================
|--------------------------------------------------|
Matching Miller planes :
==================================================
+----------------------------------------------------------------------------------------+
|      A             B          Strain   Atoms    Area   Aspect   Angle   Rotation       |
+----------------------------------------------------------------------------------------+
[ 1 1 1] >-< [ 1 1 1]        0.005470     13     63.1    1.0     120.0    60.0
[ 1 1 1] >-< [ 3 3 1]        0.010100     17    142.0    1.2      46.1    18.4
[ 1 1 1] >-< [ 2 1 0]        0.010710     15    110.4    4.0      30.0     5.9
[ 1 1 1] >-< [ 3 1 1]        0.011070     26    110.4    2.6      19.1    77.9
[ 1 1 1] >-< [ 3 1 1]        0.012770     15    110.4    2.6      19.1    77.9
[ 1 1 1] >-< [ 1 1 0]        0.013260     24    126.2    2.6      40.9    12.1
[ 1 1 1] >-< [ 1 0 0]        0.014880     23    126.2    1.7       8.2     1.1
[ 1 1 1] >-< [ 2 2 1]        0.014880     13    126.2    2.3      10.9    31.1
[ 1 1 1] >-< [ 3 2 1]        0.016000     18    189.3    2.6      26.3    75.4
[ 1 1 1] >-< [ 3 1 2]        0.016000     18    189.3    3.5      19.1    44.6
[ 1 1 1] >-< [ 2 1 0]        0.016150     13    110.4    4.0      30.0     5.9
[ 1 1 1] >-< [ 2 1 1]        0.016420      9     78.9    2.9      30.0    20.8
[ 1 1 1] >-< [ 1 1 0]        0.016510     19    126.2    2.6      40.9    12.1
+----------------------------------------------------------------------------------------+
In this case, the match to the InAs(111) surface with the lowest strain is that with the Al(111) surface, which is in agreement with the experimentally measured structure reported in Ref. [KZC+15].
References¶
[JLS+17] Line Jelver, Peter Mahler Larsen, Daniele Stradi, Kurt Stokbro, and Karsten Wedel Jacobsen. Determination of low-strain interfaces via geometric matching. Phys. Rev. B, 96:085306, Aug 2017. doi:10.1103/PhysRevB.96.085306.
[KZC+15] (1, 2) P. Krogstrup, N. L. B. Ziino, W. Chang, S. M. Albrecht, M. H. Madsen, E. Johnson, J. Nygård, C. M. Marcus, and T. S. Jespersen. Epitaxy of semiconductor–superconductor nanowires. Nature Materials, 14:400–406, 2015. doi:10.1038/nmat4176. |
What I'm seeing here is that you're calculating the probabilities of needing exactly two, three or four throws incorrectly. Since this looks like a homework problem, I'm not going to do the full calculation for you, but consider instead a very similar problem. Let's take the same game, but with slightly different rules. Now we have an 8-sided die with 2 blue faces and 6 red, and each turn has a maximum of 5 throws. Let $X$ be the number of throws in a player's turn. Then, the probability of a player needing one throw is
$$\mathbb{P}(X = 1) = \mathbb{P}(roll~a~blue~face) = \frac{2}{8} = \frac{1}{4},$$
similarly to how you correctly calculated the same probability for your problem. However, if we consider the probability of having a turn with exactly two throws, we need the event "first throw is NOT blue AND second throw is blue", i.e.
$$\begin{align}\mathbb{P}(X = 2) &= \mathbb{P}(1^{st}~roll~is~red) \cdot \mathbb{P}(2^{nd}~roll~is~blue) \\ &= \frac{6}{8} \cdot \frac{2}{8} = \frac{3}{4} \cdot \frac{1}{4} = \frac{3}{16}.\end{align}$$
Similarly,
$$\mathbb{P}(X = 3) = \frac{6}{8} \cdot \frac{6}{8} \cdot \frac{2}{8} = \frac{9}{64} \\\mathbb{P}(X = 4) = ... = \frac{27}{256}.$$
Now, the final case, $\mathbb{P}(X = 5)$, is slightly different. Because we are cutting things off at five throws, two possibilities are lumped together: either the first four throws are red and the fifth is blue, or all five throws are red. We could add these probabilities together, but the much better way to think about this problem is to use the Complement Rule:
$$\mathbb{P}(X = 5) = 1 - \mathbb{P}(X \neq 5) = 1 - \mathbb{P}(X = 1,~2,~3,~or~4).$$
Since the events $\{X=1\}$, $\{X=2\}$, $\{X=3\}$, and $\{X=4\}$ are mutually exclusive, we can use the Special Addition Rule to find the last probability. Again, since this is clearly a homework problem, I will leave that last step to you. It is straightforward given everything I've already discussed here.
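A quick Monte Carlo check of these closed-form values (the function and parameter names are just for illustration):

```python
import random

def turn_length(p_blue=2 / 8, max_throws=5):
    """One turn: throw until a blue face shows or the throw cap is reached."""
    for throw in range(1, max_throws + 1):
        if random.random() < p_blue:
            return throw
    return max_throws

random.seed(0)
n = 200_000
counts = [0] * 6
for _ in range(n):
    counts[turn_length()] += 1

# Closed forms: P(X=k) = (3/4)**(k-1) * (1/4) for k < 5, and
# P(X=5) = (3/4)**4 (the first four throws are red, whatever the fifth is).
exact = [0, 1 / 4, 3 / 16, 9 / 64, 27 / 256, (3 / 4) ** 4]
for k in range(1, 6):
    print(k, round(counts[k] / n, 4), round(exact[k], 4))
```

The simulated frequencies land within sampling noise of the exact probabilities, which also confirms that the five cases sum to 1.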
Let S (the source point), O (the origin), and P (the observation point) all lie on a straight line. Let a be the distance from S to O, b from O to P, $r_1$ from S to whichever part of the aperture we're interested in, and $r_2$ from that part to P.
$r_1 + r_2 = \sqrt{a^2+x^2+y^2}+\sqrt{b^2+x^2+y^2}$
$=a+b+\frac{x^2+y^2}{2a}+\frac{x^2+y^2}{2b}+\text{higher order terms}$
writing
$1/R=1/a+1/b$
optical path = constant +$\frac{x^2+y^2}{2R}$
giving
$\psi_p = \int\int\frac{h(x,y)\,K(\theta)\exp\left(ik\frac{x^2+y^2}{2R}\right)}{r_1 r_2}\,dx\,dy$
where K is the obliquity factor and h is the aperture function.
Assuming $K=1$ and that variations in $r_1$ and $r_2$ are negligible,

$\psi_p \propto \int\int h(x,y) \exp\left(ik\frac{x^2+y^2}{2R}\right)dx\,dy$
This is the derivation given to me in my notes, which I'm sort of happy about.
However, I have two questions:
When this is used for an edge, surely both of the assumptions break down and we cannot use Fresnel diffraction anymore? My notes go in depth about how to use this for edge diffraction (using the Cornu spiral etc.)
How do we know whether to use Fresnel or Fraunhofer diffraction? I am given that if $\frac{x^2+y^2}{\lambda}\ll R$ then I can use Fraunhofer, but in the edge case surely this is the case since $x$ goes to infinity, yet the notes use edge diffraction as an example of Fresnel diffraction.
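The Cornu-spiral treatment of the edge mentioned above can be explored numerically. A sketch (assuming the standard normalization of the Fresnel integrals, with the field scaled so the unobstructed intensity is 1; all names are illustrative):

```python
import math

def fresnel_cs(v, n=4000):
    """Fresnel integrals C(v) = int_0^v cos(pi t^2 / 2) dt and
    S(v) = int_0^v sin(pi t^2 / 2) dt, via the trapezoid rule."""
    if v == 0:
        return 0.0, 0.0
    h = v / n
    c = s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 0.5 if i in (0, n) else 1.0
        phase = math.pi * t * t / 2
        c += w * math.cos(phase)
        s += w * math.sin(phase)
    return c * h, s * h

def edge_intensity(v):
    """Relative intensity near a straight edge at Cornu-spiral coordinate v
    (v = 0 at the geometric shadow edge), with the unobstructed field = 1."""
    C, S = fresnel_cs(v)
    return 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)

print(edge_intensity(0.0))  # 0.25: the classic quarter-intensity at the edge
```

Deep in the illuminated region the intensity oscillates around 1, while at the geometric edge it is exactly a quarter of the unobstructed value.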
The Challenge
Write a program or function that takes no input and outputs a vector of length \$1\$ in a theoretically uniform random direction.
This is equivalent to a random point on the sphere described by $$x^2+y^2+z^2=1$$
resulting in a uniform distribution of points over the sphere's surface.
Output
Three floats from a theoretically uniform random distribution for which the equation \$x^2+y^2+z^2=1\$ holds true to precision limits.
Challenge remarks

The random distribution needs to be theoretically uniform. That is, if the pseudo-random number generator were to be replaced with a true RNG over the real numbers, it would result in a uniform random distribution of points on the sphere.

Generating three random numbers from a uniform distribution and normalizing them is invalid: there will be a bias towards the corners of the three-dimensional space.

Similarly, generating two random numbers from a uniform distribution and using them as spherical coordinates is invalid: there will be a bias towards the poles of the sphere.

Proper uniformity can be achieved by algorithms including but not limited to:

- Generate three random numbers \$x\$, \$y\$ and \$z\$ from a normal (Gaussian) distribution around \$0\$ and normalize them.
- Generate three random numbers \$x\$, \$y\$ and \$z\$ from a uniform distribution in the range \$(-1,1)\$. Calculate the length of the vector by \$l=\sqrt{x^2+y^2+z^2}\$. Then, if \$l>1\$, reject the vector and generate a new set of numbers. Else, if \$l \leq 1\$, normalize the vector and return the result.
- Generate two random numbers \$i\$ and \$j\$ from a uniform distribution in the range \$(0,1)\$ and convert them to spherical coordinates like so: \begin{align}\theta &= 2 \times \pi \times i\\\\\phi &= \cos^{-1}(2\times j -1)\end{align} so that \$x\$, \$y\$ and \$z\$ can be calculated by \begin{align}x &= \cos(\theta) \times \sin(\phi)\\\\y &= \sin(\theta) \times \sin(\phi)\\\\z &= \cos(\phi)\end{align}

Provide in your answer a brief description of the algorithm that you are using.

Read more on sphere point picking on MathWorld.

Output examples
[ 0.72422852 -0.58643067  0.36275628]
[-0.79158628 -0.17595886  0.58517488]
[-0.16428481 -0.90804027  0.38532243]
[ 0.61238768  0.75123833 -0.24621596]
[-0.81111161 -0.46269121  0.35779156]
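For reference, a sketch of the first (Gaussian) algorithm from the list above (the function name is illustrative):

```python
import math
import random

def random_unit_vector():
    """Uniform random point on the unit sphere via three Gaussian samples.

    The 3D standard normal distribution is spherically symmetric, so
    normalizing a nonzero sample gives a uniformly random direction.
    """
    while True:
        x, y, z = random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)
        l = math.sqrt(x * x + y * y + z * z)
        if l > 1e-12:  # reject the (astronomically unlikely) near-zero vector
            return (x / l, y / l, z / l)

print(random_unit_vector())
```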
Basic MathJax and Mathematics

Displaying a formula
For inline formulas, use \$ ... \$. For display-mode formulas (i.e. multiline, centered formulas which take up their own paragraph), use $$ ... $$. Various symbols will be displayed differently in inline vs. display mode.
For example, the equation \sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6} renders in inline mode (\$) as the following: \$\sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6}\$. Meanwhile, in display mode ($$) it displays as: $$ \sum_{i=0}^n i^2 = \frac{(n^2+n)(2n+1)}{6} $$
(Note the $$ also breaks it out onto its own lines and centers it.)
New lines: Display formulas can have multiple lines. Insert a line break with \\.
Grouping
MathJax operates on symbols or groups of symbols. Usually, a MathJax operator that's expecting to do something fancy with some symbols will grab just the very first symbol available and nothing more. For example, a^bc will be displayed as \$a^bc\$.
If we want to represent that as \$a\$ to the power of \$bc\$, we instead need to group these symbols using curly braces, i.e. { }. So we'd write a^{bc} instead: \$a^{bc}\$.
You can get literal curly braces by escaping them: \{foo\} → \$ \{foo\} \$
Basic mathematical formatting
Mathematical operations: +, - (hyphen), \times and \div: \$1 + 2 - 3 \times 4 \div 5\$; \cdot for \$ x \cdot y \$; \pm and \mp for \$\pm \mp\$
Comparison: \gt and \lt for \$\gt\$ and \$\lt\$; \ge or \geq for \$\ge\$, \geqslant for \$\geqslant\$; \le or \leq for \$\le\$, \leqslant for \$\leqslant\$; \approx, \sim and \simeq for \$\approx \sim \simeq\$
Superscripts and subscripts: use ^ and _. These can be combined: x_i^2 or x^2_i renders as \$x_i^2\$.
Fractions: \frac a b grabs the next two groups: \$\frac{a+1}{b+1}\$. \dfrac a b works the same but always occupies two lines of vertical space: \$\dfrac{a+1}{b+1}\$. You may instead prefer to use \over: {a+1 \over b+1} for \${a+1 \over b+1}\$
Greek letters: Use \alpha, \beta, …, \omega: \$\alpha, \beta \ldots \omega\$. For uppercase, use \Gamma, \Delta, …, \Omega: \$\Gamma, \Delta, \ldots \Omega\$.
Plain text: Usually all text is treated as symbols, so these are some words gets rendered as \$these are some words\$ despite the spaces. To tell MathJax to treat it as just ordinary text, use \text{stuff}: \$\text{these are some words}\$.
Floor and ceiling: \lfloor x \rfloor for \$\lfloor x \rfloor\$, \lceil x \rceil for \$\lceil x \rceil\$
Equation alignment
You can use the \begin{align} ... \end{align} environment to align equations over multiple lines. The & symbol is an alignment marker in this environment. Use \\ to start new lines. The following example aligns on the equals sign:
$$
\begin{align}
a^2 &= b^2 + c^2 \\
a &= \sqrt{b^2 + c^2}
\end{align}
$$
$$\begin{align}a^2 &= b^2 + c^2 \\a &= \sqrt{b^2 + c^2}\end{align}$$
Equation sizing
Occasionally the default MathJax sizing may need to be adjusted in order to make things readable. MathJax supports the LaTeX size commands:
\tiny
\scriptsize
\footnotesize
\small
\normalsize
\large
\Large
\LARGE
\huge
\Huge
The commands are listed from smallest to largest; \normalsize is the default. Note the use of capitalization in some commands. A size command applies within the scope it occurs in.
For example:
\${\Large\sqrt{\frac{1}{a_z^x}}}\$
gives \${\Large\sqrt{\frac{1}{a_z^x}}}\$.
As a more practical example, applying \small allows the following to render without undesired line breaks: $$f(t)={\small\frac{cos(rt)(gr^2_x+gr^2_y-r_xr^2_zv_y(0)+r_yr^2_zv_x(0)+r^3_x(-v_y(0))+r^2_xr_yv_x(0)-r_xr^2_yv_y(0)+r^3_yv_x(0))}{r}}$$
We are working over the finite field $\mathbb{F}_{q}$ of odd prime characteristic $p$ and of cardinality $q$, a power of $p$. We recall that the symplectic group $Sp(4,\mathbb{F}_{q})$ is the group of transformations of $\mathbb{F}_{q}^{4}$ preserving a non-degenerate alternating bilinear form, and we denote by $PSp(4,\mathbb{F}_{q})$ the quotient by its center. I would like to know all the possible maximal subgroups of this projective group, or of the initial one. A good reference would be of great help.
You can find tables of the maximal subgroups of all (almost) simple classical groups in dimensions up to $12$ in the book:
The Maximal Subgroups of the Low-Dimensional Finite Classical Groups, John N. Bray, Derek F. Holt, Colva M. Roney-Dougal, Cambridge University Press, 2013.
The maximal subgroups of ${\rm Sp}_4(q)$ for odd $q$ were first classified in:
H. H. Mitchell. The subgroups of the quaternary abelian linear group. Trans. Amer. Math. Soc. 15 (1914), 379–396.
Here is a complete list. The notation is similar to that used in the ATLAS. There is one conjugacy class of each type except where otherwise stated.
$q^{1+2}\!:\!((q-1) \times {\rm Sp}_2(q))$ (reducible)
$q^3\!:\!{\rm GL}_2(q)$ (reducible)
${\rm Sp}_2(q)^2\!:\!2$ (imprimitive)
${\rm GL}_2(q).2$ (imprimitive)
${\rm Sp}_2(q^2)\!:\!2$ (semilinear)
${\rm GU}_2(q).2$ (semilinear)
${\rm Sp}_4(q_0).(2,r)$, $(2,r)$ classes, $q=q_0^r$, $r$ prime (subfield)
$2^{1+4}.S_5$, $2$ classes, $q$ prime, $q \equiv \pm 1 \bmod 8$
$2^{1+4}.A_5$ $q$ prime, $q \equiv \pm 3 \bmod 8$
$2.A_6$, $q$ prime, $q \equiv \pm 5 \bmod 12$, $q \ne 7$
$2.S_6$, $2$ classes, $q$ prime, $q \equiv \pm 1 \bmod 12$
$2.A_7$, $q=7$
${\rm SL}_2(q)$, $p \ge 5$, $q>7$. |
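As a quick sanity check when working with such lists, the group order is easy to compute from the standard formula $|{\rm Sp}_{2n}(q)| = q^{n^2}\prod_{i=1}^{n}(q^{2i}-1)$; a small sketch:

```python
def sp_order(n, q):
    """Order of the symplectic group Sp(2n, q):
    q**(n*n) * product of (q**(2i) - 1) for i = 1..n."""
    order = q ** (n * n)
    for i in range(1, n + 1):
        order *= q ** (2 * i) - 1
    return order

# |Sp_4(3)| = 3**4 * (3**2 - 1) * (3**4 - 1) = 51840; its center has order
# gcd(2, q - 1) = 2 for odd q, so |PSp_4(3)| = 25920.
print(sp_order(2, 3))       # 51840
print(sp_order(2, 3) // 2)  # 25920
```

By Lagrange's theorem, the order of each maximal subgroup listed above must divide this number, which is a useful cross-check.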
Generalized fractional derivatives and Laplace transform
1. Department of Mathematics, Çankaya University, 06790 Ankara, Turkey
2. Department of Mathematics and General Sciences, Prince Sultan University, P. O. Box 66833, 11586 Riyadh, Saudi Arabia
In this article, we study generalized fractional derivatives that contain kernels depending on a function on the space of absolute continuous functions. We generalize the Laplace transform in order to be applicable for the generalized fractional integrals and derivatives and apply this transform to solve some ordinary differential equations in the frame of the fractional derivatives under discussion.
Keywords: Generalized fractional derivatives, generalized Caputo fractional derivative, generalized Laplace transform.
Mathematics Subject Classification: Primary: 26A33; Secondary: 44A10.
Citation: Fahd Jarad, Thabet Abdeljawad. Generalized fractional derivatives and Laplace transform. Discrete & Continuous Dynamical Systems - S. doi: 10.3934/dcdss.2020039
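For intuition, generalized (g- or ψ-) Laplace transforms in this literature typically take the form $\mathcal{L}_g\{f\}(s)=\int_a^\infty e^{-s(g(t)-g(a))}\,g'(t)\,f(t)\,dt$, though the exact kernel defined in this article may differ. A numerical sketch of that form (truncating the integral and using the trapezoid rule; all names are illustrative):

```python
import math

def generalized_laplace(f, g, gp, s, a=0.0, T=60.0, n=50_000):
    """Trapezoid-rule approximation of
    int_a^T exp(-s * (g(t) - g(a))) * g'(t) * f(t) dt,
    a truncated version of the g-Laplace transform sketched above."""
    h = (T - a) / n
    total = 0.0
    for i in range(n + 1):
        t = a + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-s * (g(t) - g(a))) * gp(t) * f(t)
    return total * h

# With g(t) = t this reduces to the classical transform: L{1}(s) = 1/s.
print(generalized_laplace(lambda t: 1.0, lambda t: t, lambda t: 1.0, 2.0))  # ≈ 0.5

# With g(t) = t**2: int_0^inf exp(-s t^2) * 2t dt = 1/s, checked for s = 1.
print(generalized_laplace(lambda t: 1.0, lambda t: t * t, lambda t: 2 * t, 1.0))  # ≈ 1.0
```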
g[y_] := y^2;
h[y_] := y^2/2 + c;
func = (g[y] f[y]/Sqrt[h[y]^2 + f[y]^2]);
You should take the derivative with respect to $x$ of both sides. Then you obtain the differential equation $$ f'(x) = \frac{g(x) f(x)}{\sqrt{h(x)^2+f(x)^2}}$$ and $f(0)=0$. The latter can be solved using
g[x_] := x^2;
h[x_] := x^2/2 + c;
DSolve[{f'[x] == g[x] f[x]/Sqrt[h[x]^2 + f[x]^2], f[0] == 0}, f[x], x]
It turns out that Mathematica is not able to solve the resulting differential equation.
Looking at the numerical solution (setting a value for $c$ and replacing DSolve by NDSolve), I believe that the (only) solution to your problem is $$ f(x) \equiv 0.$$ |
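The claim that $f \equiv 0$ can also be double-checked outside Mathematica. A minimal forward-Euler sketch (taking $c = 1$ for concreteness; step size and range are arbitrary choices):

```python
import math

# Forward-Euler integration of f'(x) = g(x) f / sqrt(h(x)^2 + f^2), f(0) = 0.
# The right-hand side vanishes whenever f = 0, so every Euler step adds
# exactly zero and the numerical solution stays identically zero.
c = 1.0
g = lambda x: x ** 2
h = lambda x: x ** 2 / 2 + c

f, x, dx = 0.0, 0.0, 1e-3
for _ in range(5000):  # integrate out to x = 5
    f += dx * g(x) * f / math.sqrt(h(x) ** 2 + f ** 2)
    x += dx

print(f)  # 0.0
```

Since $h(x) \neq 0$ for $c \neq 0$, the right-hand side is Lipschitz in $f$ near zero, so $f \equiv 0$ is indeed the unique solution with $f(0)=0$.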
An integer is any whole number, including zero. An integer can be either positive or negative. Examples include -77, -1, 0, 55, 119.
A rational number (or fraction) is represented as a ratio between two integers, a and b, and has the form \({a \over b}\) where a is the numerator and b is the denominator. An improper fraction (\({5 \over 3} \)) has a numerator with a greater absolute value than the denominator and can be converted into a mixed number (\(1 {2 \over 3} \)), which has a whole number part and a fractional part.
The absolute value is the positive magnitude of a particular number or variable and is indicated by two vertical lines: \(\left|-5\right| = 5\). In the case of a variable absolute value (\(\left|a\right| = 5\)) the value of a can be either positive or negative (a = -5 or a = 5).
A factor is a positive integer that divides evenly into a given number. The factors of 8 are 1, 2, 4, and 8. A multiple of a number is the product of that number and an integer. The multiples of 8 are 0, 8, 16, 24, ...
The greatest common factor (GCF) is the greatest factor that divides two integers.
The least common multiple (LCM) is the smallest positive integer that is a multiple of two or more integers.
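Python's standard library makes both easy to check; a short sketch (lcm is defined here by hand, since math.lcm only exists in newer Python versions):

```python
import math

def lcm(a, b):
    # For positive integers, lcm(a, b) * gcd(a, b) == a * b
    return a * b // math.gcd(a, b)

print(math.gcd(8, 12))  # 4  (greatest common factor)
print(lcm(8, 12))       # 24 (least common multiple)
```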
A prime number is an integer greater than 1 that has no factors other than 1 and itself. Examples of prime numbers include 2, 3, 5, 7, and 11.
Fractions are generally presented with the numerator and denominator as small as is possible meaning there is no number, except one, that can be divided evenly into both the numerator and the denominator. To reduce a fraction to lowest terms, divide the numerator and denominator by their greatest common factor (GCF).
Fractions must share a common denominator in order to be added or subtracted. The common denominator is the least common multiple of all the denominators.
To multiply fractions, multiply the numerators together and then multiply the denominators together. To divide fractions, invert the second fraction (get the reciprocal) and multiply it by the first.
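These rules are exactly what Python's fractions module implements, which makes it a quick way to check work:

```python
from fractions import Fraction

# Addition finds the common denominator automatically:
# 1/3 + 1/4 = 4/12 + 3/12 = 7/12
print(Fraction(1, 3) + Fraction(1, 4))  # 7/12

# Division multiplies by the reciprocal: (1/2) / (3/4) = (1/2) * (4/3) = 2/3
print(Fraction(1, 2) / Fraction(3, 4))  # 2/3

# Results are always reduced to lowest terms by dividing out the GCF
print(Fraction(6, 8))  # 3/4
```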
An exponent (\(cb^e\)) consists of a coefficient (c) and a base (b) raised to a power (e). The exponent indicates the number of times that the base is multiplied by itself. A base with an exponent of 1 equals the base (\(b^1 = b\)) and a base with an exponent of 0 equals 1 (\(b^0 = 1\)).
To add or subtract terms with exponents, both the base and the exponent must be the same. If the base and the exponent are the same, add or subtract the coefficients and retain the base and exponent. For example, \(3x^2 + 2x^2 = 5x^2\) and \(3x^2 - 2x^2 = x^2\), but \(x^2 + x^4\) and \(x^4 - x^2\) cannot be combined.
To multiply terms with the same base, multiply the coefficients and add the exponents. To divide terms with the same base, divide the coefficients and subtract the exponents. For example, \(3x^2 \times 2x^2 = 6x^4\) and \({8x^5 \over 4x^2} = 2x^{(5-2)} = 2x^3\).
To raise a term with an exponent to another exponent, retain the base and multiply the exponents: \((x^2)^3 = x^{(2 \times 3)} = x^6\)
A negative exponent indicates the number of times that the base is divided by itself. To convert a negative exponent to a positive exponent, calculate the positive exponent then take the reciprocal: \(b^{-e} = { 1 \over b^e }\). For example, \(3^{-2} = {1 \over 3^2} = {1 \over 9}\)
Radicals (or roots) are the opposite operation of applying exponents. With exponents, you're multiplying a base by itself some number of times, while with roots you're dividing the base by itself some number of times. A radical term looks like \(\sqrt[d]{r}\) and consists of a radicand (r) and a degree (d). The degree is the number of times the radicand is divided by itself. If no degree is specified, the degree defaults to 2 (a square root).
The radicand of a simplified radical has no perfect square factors. A perfect square is the product of a number multiplied by itself (squared). To simplify a radical, factor out the perfect squares by recognizing that \(\sqrt{a^2} = a\). For example, \(\sqrt{64} = \sqrt{16 \times 4} = \sqrt{4^2 \times 2^2} = 4 \times 2 = 8\).
To add or subtract radicals, the degree and radicand must be the same. For example, \(2\sqrt{3} + 3\sqrt{3} = 5\sqrt{3}\) but \(2\sqrt{2} + 2\sqrt{3}\) cannot be added because they have different radicands.
To multiply or divide radicals, multiply or divide the coefficients and radicands separately: \(x\sqrt{a} \times y\sqrt{b} = xy\sqrt{ab}\) and \({x\sqrt{a} \over y\sqrt{b}} = {x \over y}\sqrt{a \over b}\)
To take the square root of a fraction, break the fraction into two separate roots then calculate the square root of the numerator and denominator separately. For example, \(\sqrt{9 \over 16}\) = \({\sqrt{9}} \over {\sqrt{16}}\) = \({3 \over 4}\)
Scientific notation is a method of writing very small or very large numbers. The first part will be a number between one and ten (typically a decimal) and the second part will be a power of 10. For example, 98,760 in scientific notation is \(9.876 \times 10^4\), with the 4 indicating the number of places the decimal point was moved to the left. A power of 10 with a negative exponent indicates that the decimal point was moved to the right. For example, 0.0123 in scientific notation is \(1.23 \times 10^{-2}\).
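Python's e-notation mirrors scientific notation, which makes both examples easy to verify:

```python
# Scientific notation maps directly onto Python's e-notation.
assert 9.876e4 == 98760        # decimal point moved 4 places left
assert 1.23e-2 == 0.0123       # negative exponent: moved right
assert format(98760, ".3e") == "9.876e+04"   # formatting a number this way
```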
A factorial has the form n! and is the product of the integer (n) and all the positive integers below it. For example, 5! = 5 x 4 x 3 x 2 x 1 = 120.
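The factorial example, checked with the standard library:

```python
import math

# 5! = 5 x 4 x 3 x 2 x 1 = 120
assert math.factorial(5) == 5*4*3*2*1 == 120
```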
Arithmetic operations must be performed in the following specific order: Parentheses, Exponents, Multiplication and Division (left to right), then Addition and Subtraction (left to right). The acronym PEMDAS can help remind you of the order.
The distributive property for multiplication helps in solving expressions like a(b + c). It specifies that the result of multiplying one number by the sum or difference of two numbers can be obtained by multiplying each number individually and then totaling the results: a(b + c) = ab + ac. For example, 4(10-5) = (4 x 10) - (4 x 5) = 40 - 20 = 20.
The distributive property for division helps in solving expressions like \({b + c \over a}\). It specifies that the result of dividing a fraction with multiple terms in the numerator and one term in the denominator can be obtained by dividing each term individually and then totaling the results: \({b + c \over a} = {b \over a} + {c \over a}\). For example, \({a^3 + 6a^2 \over a^2} = {a^3 \over a^2} + {6a^2 \over a^2} = a + 6\).
The commutative property states that, when adding or multiplying numbers, the order in which they're added or multiplied does not matter. For example, 3 + 4 and 4 + 3 give the same result, as do 3 x 4 and 4 x 3.
Ratios relate one quantity to another and are presented using a colon or as a fraction. For example, 2:3 or \({2 \over 3}\) would be the ratio of red to green marbles if a jar contained two red marbles for every three green marbles.
A proportion is a statement that two ratios are equal: a:b = c:d, \({a \over b} = {c \over d}\). To solve proportions with a variable term, cross-multiply: \({a \over 8} = {3 \over 6}\), 6a = 24, a = 4.
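The cross-multiplication example, checked with exact fractions:

```python
from fractions import Fraction

# Solve a/8 = 3/6 by cross-multiplying: 6a = 24, so a = 24/6 = 4.
a = Fraction(3 * 8, 6)
assert a == 4
```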
A rate is a ratio that compares two related quantities. Common rates are speed = \({distance \over time}\), flow = \({amount \over time}\), and defect = \({errors \over units}\).
Percentages are ratios of an amount compared to 100. The percent change of an old to new value is equal to 100% x \({ new - old \over old }\).
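A small helper makes the percent-change formula concrete (example values are made up):

```python
# Percent change = 100 * (new - old) / old
def percent_change(old, new):
    return 100 * (new - old) / old

assert percent_change(50, 75) == 50.0     # a 50% increase
assert percent_change(80, 60) == -25.0    # a 25% decrease
```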
The average (or mean) of a group of terms is the sum of the terms divided by the number of terms: average = \({a_1 + a_2 + ... + a_n \over n}\)
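A quick example with made-up terms:

```python
from statistics import mean

# Average = sum of the terms divided by the number of terms.
terms = [3, 7, 8]
assert sum(terms) / len(terms) == 6.0
assert mean(terms) == 6     # library shortcut for the same formula
```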
A sequence is a group of ordered numbers. An arithmetic sequence is a sequence in which each successive number is equal to the number before it plus some constant number.
Probability is the numerical likelihood that a specific outcome will occur. Probability = \({ \text{outcomes of interest} \over \text{possible outcomes}}\). To find the probability that two events will occur, find the probability of each and multiply them together.
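The two-event rule, illustrated with a fair die (exact fractions avoid float rounding):

```python
from fractions import Fraction

# Probability of rolling a 6 twice in a row: multiply the probabilities.
p_six = Fraction(1, 6)
assert p_six * p_six == Fraction(1, 36)
```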
Many of the arithmetic reasoning problems on the ASVAB will be in the form of word problems that will test not only the concepts in this study guide but those in Math Knowledge as well. Practice these word problems to get comfortable with translating the text into math equations and then solving those equations.
A monomial contains one term, a binomial contains two terms, and a polynomial contains more than two terms. Linear expressions have no exponents. A quadratic expression contains variables that are squared (raised to the exponent of 2).
You can only add or subtract monomials that have the same variable and the same exponent. However, you can multiply and divide monomials with unlike terms.
To multiply binomials, use the FOIL method. FOIL stands for First, Outside, Inside, Last and refers to the position of each term in the parentheses.
To factor a quadratic expression, apply the FOIL (First, Outside, Inside, Last) method in reverse.
An equation is two expressions separated by an equal sign. The key to solving equations is to repeatedly do the same thing to both sides of the equation until the variable is isolated on one side of the equal sign and the answer on the other.
When solving an equation with two variables, replace the variables with the values given and then solve the now variable-free equation. (Remember order of operations, PEMDAS: Parentheses, Exponents, Multiplication/Division, Addition/Subtraction.)
When presented with two equations with two variables, evaluate the first equation in terms of the variable you're not solving for then insert that value into the second equation. For example, if you have x and y as variables and you're solving for x, evaluate one equation in terms of y and insert that value into the second equation then solve it for x.
When solving quadratic equations, if the equation is not set equal to zero, first manipulate the equation so that it is set equal to zero: \(ax^2 + bx + c = 0\). Then, factor the quadratic and, because it's set to zero, you know that one of the factors must equal zero for the equation to equal zero. Finding the value that will make each factor, i.e. (x + ?), equal to zero will give you the possible value(s) of x.
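For example, \(x^2 - 5x + 6 = 0\) factors as \((x - 2)(x - 3) = 0\), giving x = 2 or x = 3. A quick check in code (using the quadratic formula, an equivalent route to factoring):

```python
import math

# Solve x^2 - 5x + 6 = 0; it factors as (x - 2)(x - 3) = 0.
a, b, c = 1, -5, 6
disc = b*b - 4*a*c
roots = sorted([(-b - math.sqrt(disc)) / (2*a),
                (-b + math.sqrt(disc)) / (2*a)])
assert roots == [2.0, 3.0]
# Plugging each root back in gives zero, as expected.
assert all(x*x - 5*x + 6 == 0 for x in roots)
```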
Solving equations with an inequality (<, >) uses the same process as solving equations with an equal sign: isolate the variable that you're solving for on one side of the equation and put everything else on the other side. The differences are that your answer will be expressed as an inequality (x > 5) and not as an equality (x = 5), and that multiplying or dividing both sides by a negative number reverses the direction of the inequality sign.
A line segment is a portion of a line with a measurable length. The midpoint of a line segment is the point exactly halfway between the endpoints. The midpoint bisects (cuts in half) the line segment.
A right angle measures 90 degrees and is the intersection of two perpendicular lines. In diagrams, a right angle is indicated by a small box completing a square with the perpendicular lines.
An acute angle measures less than 90°. An obtuse angle measures more than 90°.
Angles around a line add up to 180°. Angles around a point add up to 360°. When two lines intersect, adjacent angles are supplementary (they add up to 180°) and angles across from each other are vertical (they're equal).
Parallel lines are lines that share the same slope (steepness) and therefore never intersect. A transversal occurs when a set of parallel lines is crossed by another line. All of the angles formed by a transversal are called interior angles; angles in the same position on different parallel lines are equal (a° = w°, b° = x°, c° = z°, d° = y°) and are called corresponding angles. Alternate interior angles are equal (a° = z°, b° = y°, c° = w°, d° = x°), and all acute angles (a° = c° = w° = z°) and all obtuse angles (b° = d° = x° = y°) equal each other. Same-side interior angles are supplementary and add up to 180° (e.g. a° + d° = 180°, d° + c° = 180°).
A triangle is a three-sided polygon. It has three interior angles that add up to 180° (a + b + c = 180°). An exterior angle of a triangle is equal to the sum of the two interior angles that are opposite (d = b + c). The perimeter of a triangle is equal to the sum of the lengths of its three sides, the height of a triangle is equal to the length from the base to the opposite vertex (angle), and the area equals one-half base x height: a = ½ base x height.
An isosceles triangle has two sides of equal length. An equilateral triangle has three sides of equal length. In a right triangle, two sides meet at a right angle.
The Pythagorean theorem defines the relationship between the side lengths of a right triangle. The length of the hypotenuse squared (\(c^2\)) is equal to the sum of the two perpendicular sides squared (\(a^2 + b^2\)): \(c^2 = a^2 + b^2\) or, solved for c, \(c = \sqrt{a^2 + b^2}\)
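A quick check of the theorem with the classic 3-4-5 right triangle:

```python
import math

# 3-4-5 right triangle: c = sqrt(a^2 + b^2)
assert math.sqrt(3**2 + 4**2) == 5.0
assert math.hypot(3, 4) == 5.0   # library shortcut for the same formula
```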
A quadrilateral is a shape with four sides. The perimeter of a quadrilateral is the sum of the lengths of its four sides (a + b + c + d).
A rectangle is a parallelogram containing four right angles. Opposite sides (a = c, b = d) are equal and the perimeter is the sum of the lengths of all sides (a + b + c + d) or, commonly, 2 x (length + width). The area of a rectangle is length x width. A square is a rectangle with four equal-length sides. The perimeter of a square is 4 x length of one side (4s) and the area is the length of one side squared (\(s^2\)).
A parallelogram is a quadrilateral with two sets of parallel sides. Opposite sides (a = c, b = d) and angles (red = red, blue = blue) are equal. The area of a parallelogram is base x height and the perimeter is the sum of the lengths of all sides (a + b + c + d).
A rhombus has four equal-length sides with opposite sides parallel to each other. The perimeter is the sum of the lengths of all sides (a + b + c + d) or, because all sides are the same length, 4 x length of one side (4s).
A trapezoid is a quadrilateral with one set of parallel sides. The area of a trapezoid is one-half the sum of the lengths of the parallel sides multiplied by the height. In this diagram, that becomes ½(b + d)(h).
A circle is a figure in which each point around its perimeter is an equal distance from the center. The radius of a circle is the distance between the center and any point along its perimeter (AC, CB, CD). A chord is a line segment that connects any two points along its perimeter (AB, AD, BD). The diameter of a circle is the length of a chord that passes through the center of the circle (AB) and equals twice the circle's radius (2r).
The circumference of a circle is the distance around its perimeter and equals \(\pi\) (approx. 3.14159) x diameter: \(c = \pi d\). The area of a circle is \(\pi\) x radius squared: \(a = \pi r^2\).
A cube is a rectangular solid box with a height (h), length (l), and width (w). The volume is h x l x w and the surface area is 2lw + 2wh + 2lh.
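In code, the volume and surface-area formulas for a rectangular solid look like this (illustrative dimensions):

```python
# Rectangular solid with l = 4, w = 3, h = 2.
l, w, h = 4, 3, 2
assert l * w * h == 24                 # volume
assert 2*l*w + 2*w*h + 2*l*h == 52     # surface area: 24 + 12 + 16
```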
A cylinder is a solid figure with straight parallel sides and a circular or oval cross section with a radius (r) and a height (h). The volume of a cylinder is \(\pi r^2 h\) and the surface area is \(2\pi r^2 + 2\pi r h\).
The coordinate grid is composed of a horizontal x-axis and a vertical y-axis. The center of the grid, where the x-axis and y-axis meet, is called the origin.
A line on the coordinate grid can be defined by a slope-intercept equation: y = mx + b. For a given value of x, the value of y can be determined given the slope (m) and y-intercept (b) of the line. The slope of a line is change in y over change in x, \({\Delta y \over \Delta x}\), and the y-intercept is the y-coordinate where the line crosses the vertical y-axis.
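The slope and intercept can be computed from any two points on the line; for example (a sketch with made-up points):

```python
# Derive y = mx + b from two points on a line.
(x1, y1), (x2, y2) = (1, 3), (3, 7)
m = (y2 - y1) / (x2 - x1)    # slope = change in y over change in x
b = y1 - m * x1              # y-intercept
assert (m, b) == (2.0, 1.0)
assert m * 5 + b == 11.0     # y at x = 5
```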
My try, using $x = \sec(u)$ substitution:
$$ \begin{eqnarray} \int \frac{\sqrt{x^2-1}}{x} \mathrm{d}x &=& \int \frac{\sqrt{\sec^2(u) - 1}}{\sec(u)}\tan(u)\sec(u) \mathrm{d}u \\ &=& \int \tan^2(u) \mathrm{d}u \\ &=& \tan(u) - u + C \\ &=& \tan(\operatorname{arcsec}(x)) - \operatorname{arcsec}(x) + C \end{eqnarray} $$
However, according to Wolfram Alpha, the answer should be: $$ \int \frac{\sqrt{x^2-1}}{x} \mathrm{d}x = \sqrt{x^2-1}+\arctan \left( \frac{1}{\sqrt{x^2-1}} \right)+C $$ When I differentiate this last answer I don't get back the integrand, but rather: $$ \frac{\mathrm d}{\mathrm d x}\left(\sqrt{x^2-1}+\arctan \left( \frac{1}{\sqrt{x^2-1}} \right)+C\right) = \frac{x}{\sqrt{x^2-1}}- \frac{x}{(x^2-1)^{3/2}\left(1+\frac{1}{x^2-1}\right)} $$
I don't know how to simplify this expression further. Also, I am unable to check whether my answer is correct because I don't know how to find the derivative of $\operatorname{arcsec}(x)$.
Can someone check my calculations and tell me where I've done something wrong and how one can simplify the last expression to get back the integrand? |
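Both answers can at least be checked numerically: differentiate each candidate antiderivative with central differences and compare against the integrand for x > 1 (a sketch; the function names are mine, and arcsec(x) = arccos(1/x) for x > 1):

```python
import math

def integrand(x):
    return math.sqrt(x*x - 1) / x

def mine(x):       # tan(arcsec(x)) - arcsec(x)
    u = math.acos(1/x)
    return math.tan(u) - u

def wolfram(x):    # sqrt(x^2-1) + arctan(1/sqrt(x^2-1))
    s = math.sqrt(x*x - 1)
    return s + math.atan(1/s)

def deriv(f, x, h=1e-6):
    # Central-difference numerical derivative.
    return (f(x + h) - f(x - h)) / (2*h)

for x in (1.5, 2.0, 5.0):
    assert math.isclose(deriv(mine, x), integrand(x), rel_tol=1e-5)
    assert math.isclose(deriv(wolfram, x), integrand(x), rel_tol=1e-5)
```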
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
I'm new to all this robotics stuff. Especially to Kalman filter.
My initial goal is to have velocity as accurate as possible
Here is my case:
I have a phone which is mounted, for example, in a car. So it has low-cost GPS and IMU sensors.

2D GPS gives me:

- position (longitude, latitude, altitude)
- position accuracy (error can't be split into east, north, up directions)
- speed
- speed accuracy (error can't be split into east, north, up directions)
- heading angle
- heading angle accuracy
IMU (separate accelerometer, gyroscope and magnetometer; I fuse them myself). It needs to be mentioned that I can't use the magnetometer in my case, since the "car" is a Faraday cage, so I can only fuse the accelerometer and gyroscope. Both are outputs from Madgwick AHRS (I can get the rotation matrix or quaternion from it if needed) and are represented in North, East, Up dimensions.
What I've done so far:
I want to get rid of the IMU drift shown in the chart above. It's the IMU that causes that drift. I have GPS updates every 1 second and IMU at 13 Hz, so every 13th iteration we have a GPS update and then the IMU goes rogue.

Used approach: I have GPS at 1 Hz and IMU at up to 100 Hz, but I took 13 Hz in my case since I don't need that many updates. I predict when the IMU fires an event; when GPS fires an event, I take the latest IMU data, do a predict, and then a GPS (position, velocity) update. Since my primary goal is accurate velocity, I don't care much about position and heading angle, but since velocity correlates with them they can be added to the Kalman filter. Am I right?
So my Kalman states are position, velocity and heading angle.
Can I use something like this?
$$ x = x_i + v_i\Delta t + \frac{a_i\Delta t^2}{2} $$ $$ v = v_i + a_i\Delta t $$ $$ \theta = \theta_i + \omega_i\Delta t $$
Questions:
1. Could velocity benefit from adding position and heading angle as states to the Kalman filter, since there is some correlation between them (angular velocity impacts velocity itself)?
2. Is it OK to use formulas from linear motion? I have curvilinear motion in my case, yet almost all papers describe the (position, velocity) model with a KF.
3. Can I take advantage of using an EKF? I found some papers that mention the word "odometry", and it seems like they have the same model (position, velocity, angle).
4. What if, after all these manipulations, velocity is still inaccurate? Should I apply additional instruments after the KF? Can I somehow take advantage of the current and previous location points (for example, calculate velocity from two points; of course that assumes my unit moves linearly and not along a curve) and then somehow correct my predicted KF result with this velocity?
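For what it's worth, the motion model above as a bare predict step in the North/East plane looks like this (a minimal sketch; variable names are my own assumptions, and note that the position update uses a Δt² term):

```python
def predict(state, accel, omega, dt):
    """One Kalman-filter predict step with the kinematic model above (sketch)."""
    x, y, vx, vy, theta = state
    ax, ay = accel                      # accelerometer, rotated into the N/E frame
    x += vx*dt + 0.5*ax*dt**2           # position: note the Delta-t squared term
    y += vy*dt + 0.5*ay*dt**2
    vx += ax*dt                         # velocity
    vy += ay*dt
    theta += omega*dt                   # heading from the gyro yaw rate
    return (x, y, vx, vy, theta)

print(predict((0.0, 0.0, 1.0, 0.0, 0.0), (0.5, 0.0), 0.1, 1.0))
# -> (1.25, 0.0, 1.5, 0.0, 0.1)
```

In a full filter this function is the state-transition part; the covariance propagation and GPS update would wrap around it.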
Please help me with modeling Kalman Filter. And give an advice how to achieve best velocity accuracy.
Thanks! |
Difference between revisions of "Timeline of prime gap bounds"
Revision as of 19:57, 22 December 2013

[math]H = H_1[/math] is a quantity such that there are infinitely many pairs of consecutive primes of distance at most [math]H[/math] apart. Would like to be as small as possible (this is a primary goal of the Polymath8 project).

[math]k_0[/math] is a quantity such that every admissible [math]k_0[/math]-tuple has infinitely many translates which each contain at least two primes. Would like to be as small as possible. Improvements in [math]k_0[/math] lead to improvements in [math]H[/math]. (The relationship is roughly of the form [math]H \sim k_0 \log k_0[/math]; see the page on finding narrow admissible tuples.) More recent improvements on [math]k_0[/math] have come from solving a Selberg sieve variational problem.

[math]\varpi[/math] is a technical parameter related to a specialized form of the Elliott-Halberstam conjecture. Would like to be as large as possible. Improvements in [math]\varpi[/math] lead to improvements in [math]k_0[/math], as described in the page on Dickson-Hardy-Littlewood theorems. In more recent work, the single parameter [math]\varpi[/math] is replaced by a pair [math](\varpi,\delta)[/math] (in previous work we had [math]\delta=\varpi[/math]). These estimates are obtained in turn from Type I, Type II, and Type III estimates, as described at the page on distribution of primes in smooth moduli.
In this table, infinitesimal losses in [math]\delta,\varpi[/math] are ignored.
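The rough relationship [math]H \sim k_0 \log k_0[/math] can be sanity-checked against a few (k_0, H) pairs from the table (illustrative only; the true relationship has lower-order corrections):

```python
import math

# (k0, H) pairs taken from the timeline table below.
pairs = [(3_500_000, 70_000_000), (632, 4_686), (105, 600)]
for k0, H in pairs:
    estimate = k0 * math.log(k0)
    # The heuristic gets the right order of magnitude in each case.
    assert 0.5 < H / estimate < 2
```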
Date [math]\varpi[/math] or [math](\varpi,\delta)[/math] [math]k_0[/math] [math]H[/math] Comments 10 Aug 2005 6 [EH] 16 [EH] ([Goldston-Pintz-Yildirim]) First bounded prime gap result (conditional on Elliott-Halberstam) 14 May 2013 1/1,168 (Zhang) 3,500,000 (Zhang) 70,000,000 (Zhang) All subsequent work (until the work of Maynard) is based on Zhang's breakthrough paper. 21 May 63,374,611 (Lewko) Optimises Zhang's condition [math]\pi(H)-\pi(k_0) \gt k_0[/math]; can be reduced by 1 by parity considerations 28 May 59,874,594 (Trudgian) Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] with [math]p_{m+1} \gt k_0[/math] 30 May 59,470,640 (Morrison)
58,885,998? (Tao)
59,093,364 (Morrison)
57,554,086 (Morrison)
Uses [math](p_{m+1},\ldots,p_{m+k_0})[/math] and then [math](\pm 1, \pm p_{m+1}, \ldots, \pm p_{m+k_0/2-1})[/math] following [HR1973], [HR1973b], [R1974] and optimises in m 31 May 2,947,442 (Morrison)
2,618,607 (Morrison)
48,112,378 (Morrison)
42,543,038 (Morrison)
42,342,946 (Morrison)
Optimizes Zhang's condition [math]\omega\gt0[/math], and then uses an improved bound on [math]\delta_2[/math] 1 Jun 42,342,924 (Tao) Tiny improvement using the parity of [math]k_0[/math] 2 Jun 866,605 (Morrison) 13,008,612 (Morrison) Uses a further improvement on the quantity [math]\Sigma_2[/math] in Zhang's analysis (replacing the previous bounds on [math]\delta_2[/math]) 3 Jun 1/1,040? (v08ltu) 341,640 (Morrison) 4,982,086 (Morrison)
4,802,222 (Morrison)
Uses a different method to establish [math]DHL[k_0,2][/math] that removes most of the inefficiency from Zhang's method. 4 Jun 1/224?? (v08ltu)
1/240?? (v08ltu)
4,801,744 (Sutherland)
4,788,240 (Sutherland)
Uses asymmetric version of the Hensley-Richards tuples 5 Jun 34,429? (Paldi/v08ltu) 4,725,021 (Elsholtz)
4,717,560 (Sutherland)
397,110? (Sutherland)
4,656,298 (Sutherland)
389,922 (Sutherland)
388,310 (Sutherland)
388,284 (Castryck)
388,248 (Sutherland)
387,982 (Castryck)
387,974 (Castryck)
[math]k_0[/math] bound uses the optimal Bessel function cutoff. Originally only provisional due to neglect of the kappa error, but then it was confirmed that the kappa error was within the allowed tolerance.
[math]H[/math] bound obtained by a hybrid Schinzel/greedy (or "greedy-greedy") sieve
6 Jun 387,960 (Angeltveit)
387,904 (Angeltveit)
Improved [math]H[/math]-bounds based on experimentation with different residue classes and different intervals, and randomized tie-breaking in the greedy sieve. 7 Jun
26,024? (v08ltu)
387,534 (pedant-Sutherland)
Many of the results ended up being retracted due to a number of issues found in the most recent preprint of Pintz. Jun 8 286,224 (Sutherland)
285,752 (pedant-Sutherland)
values of [math]\varpi,\delta,k_0[/math] now confirmed; most tuples available on dropbox. New bounds on [math]H[/math] obtained via iterated merging using a randomized greedy sieve. Jun 9 181,000*? (Pintz) 2,530,338*? (Pintz) New bounds on [math]H[/math] obtained by interleaving iterated merging with local optimizations. Jun 10 23,283? (Harcos/v08ltu) 285,210 (Sutherland) More efficient control of the [math]\kappa[/math] error using the fact that numbers with no small prime factor are usually coprime Jun 11 252,804 (Sutherland) More refined local "adjustment" optimizations, as detailed here.
An issue with the [math]k_0[/math] computation has been discovered, but is in the process of being repaired.
Jun 12 22,951 (Tao/v08ltu)
22,949 (Harcos)
249,180 (Castryck) Improved bound on [math]k_0[/math] avoids the technical issue in previous computations. Jun 13 Jun 14 248,898 (Sutherland) Jun 15 [math]348\varpi+68\delta \lt 1[/math]? (Tao) 6,330? (v08ltu)
6,329? (Harcos)
6,329 (v08ltu)
60,830? (Sutherland) Taking more advantage of the [math]\alpha[/math] convolution in the Type III sums Jun 16 [math]348\varpi+68\delta \lt 1[/math] (v08ltu) 60,760* (Sutherland) Attempting to make the Weyl differencing more efficient; unfortunately, it did not work Jun 18 5,937? (Pintz/Tao/v08ltu)
5,672? (v08ltu)
5,459? (v08ltu)
5,454? (v08ltu)
5,453? (v08ltu)
60,740 (xfxie)
58,866? (Sun)
53,898? (Sun)
53,842? (Sun)
A new truncated sieve of Pintz virtually eliminates the influence of [math]\delta[/math] Jun 19 5,455? (v08ltu)
5,453? (v08ltu)
5,452? (v08ltu)
53,774? (Sun)
53,672*? (Sun)
Some typos in [math]\kappa_3[/math] estimation had placed the 5,454 and 5,453 values of [math]k_0[/math] into doubt; however other refinements have counteracted this Jun 20 [math]178\varpi + 52\delta \lt 1[/math]? (Tao)
[math]148\varpi + 33\delta \lt 1[/math]? (Tao)
Replaced "completion of sums + Weil bounds" in estimation of incomplete Kloosterman-type sums by "Fourier transform + Weyl differencing + Weil bounds", taking advantage of factorability of moduli Jun 21 [math]148\varpi + 33\delta \lt 1[/math] (v08ltu) 1,470 (v08ltu)
1,467 (v08ltu)
12,042 (Engelsma) Systematic tables of tuples of small length have been set up here and here (update: As of June 27 these tables have been merged and uploaded to an online database of current bounds on [math]H(k)[/math] for [math]k[/math] up to 5000). Jun 22 Slight improvement in the [math]\tilde \theta[/math] parameter in the Pintz sieve; unfortunately, it does not seem to currently give an actual improvement to the optimal value of [math]k_0[/math] Jun 23 1,466 (Paldi/Harcos) 12,006 (Engelsma) An improved monotonicity formula for [math]G_{k_0-1,\tilde \theta}[/math] reduces [math]\kappa_3[/math] somewhat Jun 24 [math](134 + \tfrac{2}{3}) \varpi + 28\delta \le 1[/math]? (v08ltu)
[math]140\varpi + 32 \delta \lt 1[/math]? (Tao)
1,268? (v08ltu) 10,206? (Engelsma) A theoretical gain from rebalancing the exponents in the Type I exponential sum estimates Jun 25 [math]116\varpi+30\delta\lt1[/math]? (Fouvry-Kowalski-Michel-Nelson/Tao) 1,346? (Hannes)
1,007? (Hannes)
10,876? (Engelsma) Optimistic projections arise from combining the Graham-Ringrose numerology with the announced Fouvry-Kowalski-Michel-Nelson results on d_3 distribution Jun 26 [math]116\varpi + 25.5 \delta \lt 1[/math]? (Nielsen)
[math](112 + \tfrac{4}{7}) \varpi + (27 + \tfrac{6}{7}) \delta \lt 1[/math]? (Tao)
962? (Hannes) 7,470? (Engelsma) Beginning to flesh out various "levels" of Type I, Type II, and Type III estimates, see this page, in particular optimising van der Corput in the Type I sums. Integrated tuples page now online. Jun 27 [math]108\varpi + 30 \delta \lt 1[/math]? (Tao) 902? (Hannes) 6,966? (Engelsma) Improved the Type III estimates by averaging in [math]\alpha[/math]; also some slight improvements to the Type II sums. Tuples page is now accepting submissions. Jul 1 [math](93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math]? (Tao)
873? (Hannes)
Refactored the final Cauchy-Schwarz in the Type I sums to rebalance the off-diagonal and diagonal contributions Jul 5 [math] (93 + \frac{1}{3}) \varpi + (26 + \frac{2}{3}) \delta \lt 1[/math] (Tao)
Weakened the assumption of [math]x^\delta[/math]-smoothness of the original moduli to that of double [math]x^\delta[/math]-dense divisibility
Jul 10 7/600? (Tao) An in principle refinement of the van der Corput estimate based on exploiting additional averaging Jul 19 [math](85 + \frac{5}{7})\varpi + (25 + \frac{5}{7}) \delta \lt 1[/math]? (Tao) A more detailed computation of the Jul 10 refinement Jul 20 Jul 5 computations now confirmed Jul 27 633 (Tao)
632 (Harcos)
4,686 (Engelsma) Jul 30 [math]168\varpi + 48\delta \lt 1[/math]# (Tao) 1,788# (Tao) 14,994# (Sutherland) Bound obtained without using Deligne's theorems. Aug 17 1,783# (xfxie) 14,950# (Sutherland) Oct 3 13/1080?? (Nelson/Michel/Tao) 604?? (Tao) 4,428?? (Engelsma) Found an additional variable to apply van der Corput to Oct 11 [math]83\frac{1}{13}\varpi + 25\frac{5}{13} \delta \lt 1[/math]? (Tao) 603? (xfxie) 4,422?(Engelsma)
12 [EH] (Maynard)
Worked out the dependence on [math]\delta[/math] in the Oct 3 calculation Oct 21 All sections of the paper relating to the bounds obtained on Jul 27 and Aug 17 have been proofread at least twice Oct 23 700#? (Maynard) Announced at a talk in Oberwolfach Oct 24 110#? (Maynard) 628#? (Clark-Jarvis) With this value of [math]k_0[/math], the value of [math]H[/math] given is best possible (and similarly for smaller values of [math]k_0[/math]) Nov 19 105# (Maynard)
5 [EH] (Maynard)
600# (Maynard/Clark-Jarvis) One also gets three primes in intervals of length 600 if one assumes Elliott-Halberstam Nov 20 Optimizing the numerology in Maynard's large k analysis; unfortunately there was an error in the variance calculation Nov 21 68?? (Maynard)
582#*? (Nielsen)
59,451 [m=2]#? (Nielsen)
42,392 [m=2]? (Nielsen)
356?? (Clark-Jarvis) Optimistically inserting the Polymath8a distribution estimate into Maynard's low k calculations, ignoring the role of delta Nov 22 388*? (xfxie)
448#*? (Nielsen)
43,134 [m=2]#? (Nielsen)
698,288 [m=2]#? (Sutherland) Uses the m=2 values of k_0 from Nov 21 Nov 23 493,528 [m=2]#? Sutherland Nov 24 484,234 [m=2]? (Sutherland) Nov 25 385#*? (xfxie) 484,176 [m=2]? (Sutherland) Using the exponential moment method to control errors Nov 26 102# (Nielsen) 493,426 [m=2]#? (Sutherland) Optimising the original Maynard variational problem Nov 27 484,162 [m=2]? (Sutherland) Nov 28 484,136 [m=2]? (Sutherland Dec 4 64#? (Nielsen) 330#? (Clark-Jarvis) Searching over a wider range of polynomials than in Maynard's paper Dec 6 493,408 [m=2]#? (Sutherland) Dec 19 59#? (Nielsen)
10,000,000? [m=3] (Tao)
1,700,000? [m=3] (Tao)
38,000? [m=2] (Tao)
300#? (Clark-Jarvis)
182,087,080? [m=3] (Sutherland)
179,933,380? [m=3] (Sutherland)
More efficient memory management allows for an increase in the degree of the polynomials used; the m=2,3 results use an explicit version of the [math]M_k \geq \frac{k}{k-1} \log k - O(1)[/math] lower bound. Dec 20
55#? (Nielsen)
36,000? [m=2] (xfxie)
175,225,874? [m=3] (Sutherland)
27,398,976? [m=3] (Sutherland)
Dec 21 1,640,042? [m=3] (Sutherland) 429,798? [m=2] (Sutherland) Optimising the explicit lower bound [math]M_k \geq \log k-O(1)[/math] Dec 22 1,628,944? [m=3] (Castryck)
75,000,000? [m=4] (Castryck)
3,400,000,000? [m=5] (Castryck)
5,511? [EH] [m=3] (Sutherland)
2,114,964#? [m=3] (Sutherland)
395,154? [m=2] (Sutherland)
1,523,781,850? [m=4] (Sutherland)
82,575,303,678? [m=5] (Sutherland)
A numerical precision issue was discovered in the earlier m=4 calculations Legend: ? - unconfirmed or conditional ?? - theoretical limit of an analysis, rather than a claimed record * - is majorized by an earlier but independent result # - bound does not rely on Deligne's theorems [EH] - bound is conditional the Elliott-Halberstam conjecture [m=N] - bound on intervals containing N+1 consecutive primes, rather than two strikethrough - values relied on a computation that has now been retracted
See also the article on
Finding narrow admissible tuples for benchmark values of [math]H[/math] for various key values of [math]k_0[/math]. |
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.
Hdjensofjfnen Posts: 1338 Joined: March 15th, 2016, 6:41 pm Location: r cis θ
Here is a small minigame.
On each post, we can place a cell anywhere we want.
You cannot post twice in a row.
Between posts we will advance 1 generation.
The goal? Get as high of a population as possible. If a pattern dies we will restart with the starter.
The starter:
Code: Select all
x = 2, y = 2, rule = B3/S23
2o$2o!
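For anyone wanting to advance the generations between posts automatically, here's a minimal B3/S23 step on a set of live cells (a sketch, not part of the game rules):

```python
from collections import Counter

def step(cells):
    """One Life (B3/S23) generation on a set of (x, y) live cells."""
    counts = Counter((x + dx, y + dy) for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Born with exactly 3 neighbours; survive with 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

block = {(0, 0), (0, 1), (1, 0), (1, 1)}   # the 2x2 starter
assert step(block) == block                 # a block is a still life
```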
"A man said to the universe:
'Sir, I exist!'
'However,' replied the universe,
'The fact has not created in me
A sense of obligation.'" -Stephen Crane
Code: Select all
x = 7, y = 5, rule = B3/S2-i3-y4i
4b3o$6bo$o3b3o$2o$bo!
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
Gen 1:
Code: Select all
x = 4, y = 2, rule = B3/S23
2obo$2o!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
toroidalet Posts: 1019 Joined: August 7th, 2016, 1:48 pm Location: my computer Contact:
Code: Select all
x = 4, y = 3, rule = B3/S23
2o$2o$3bo!
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett dvgrn Moderator Posts: 5878 Joined: May 17th, 2009, 11:00 pm Location: Madison, WI Contact: A for awesome wrote:
Gen 1:
Code: Select all
x = 4, y = 2, rule = B3/S23
2obo$2o!
Taking A for awesome's entry since it got in first by one minute --
Seems like it would be nice to see the before and after stages of the pattern. Here are generations 0 and 1, and a candidate Gen 2:
Code: Select all
x = 37, y = 7, rule = LifeHistory
9.D16.D$10.D16.D6.A$2A.C7.D4.3A.E7.D4.A.2A$2A5.6D3.3A5.6D3.A.2A$11.D
16.D5.A$10.D16.D$9.D16.D!
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
A rule should be added where dots have to directly affect the pattern. No adding dots fricking 1000 cells away from the pattern.
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
fluffykitty
Posts: 638 Joined: June 14th, 2014, 5:03 pm
Why? Also, Gen 3:
Code: Select all
x = 22, y = 6, rule = B3/S23
$3bo10b2o$2bob2obo5bo2b2o$2bob2o7bo2b2o$3bo10b2o!
I like making rules
wildmyron
Posts: 1272 Joined: August 9th, 2013, 12:45 am
Gen 3 -> Gen 4
Code: Select all
x = 25, y = 5, rule = B3/S23
b2o18b3o$o2b2o15bo3bo$o2b2o14b2o3bo$3o17b4o$21bo!
As it stands, this evolves into a B heptomino
The latest version of the 5S Project
contains over 221,000 spaceships. Tabulated pages up to period 160 are available on the LifeWiki
.
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
Gen5:
Code: Select all
x = 6, y = 7, rule = B3/S23
5bo2$2b3o$bo3bo$2o3bo$b4o$2bo!
This pattern makes an interesting 3-block constellation
muzik
Posts: 3499 Joined: January 28th, 2016, 2:47 pm Location: Scotland
6
Code: Select all
x = 6, y = 8, rule = B3/S23
4bo2$3b2o$2b3o$2obobo$o2bobo$o2b2o$b2o!
toroidalet Posts: 1019 Joined: August 7th, 2016, 1:48 pm Location: my computer Contact:
7
Code: Select all
x = 7, y = 8, rule = B3/S23
5bo$4b2o$3bobo$2bo3bo$b2o3bo$2o2bobo$bo2b2o$2b3o!
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
8
Code: Select all
x = 8, y = 9, rule = B3/S23
4b2o$5b2o$3bob2o$b6o$ob2o2b2o$o2b2obo$2o$2b4o$3bo!
wwei23 Posts: 936 Joined: May 22nd, 2017, 6:14 pm Location: The (Life?) Universe
9(I methodrcally searched the cell space, and then the final population, and got this:):
Code: Select all
x = 9, y = 10, rule = B3/S23
6bo$6bo$4b4o$3bo$2b2o$3o2b2obo$o4b4o$o6bo$bo3bo$2bo2bo!
Last edited by wwei23
on July 30th, 2017, 6:40 pm, edited 1 time in total.
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA wwei23 wrote:
9(I methodrcally searched the cell space, and then the final population, and got this:):
Code: Select all
x = 8, y = 9, rule = B3/S23
4b2o$5b2o$3bob2o$b6o$ob2o2b2o$o2b2obo$2o$2b4o$3bo!
You literally just posted my pattern. Did you even read the initial post?
wwei23 Posts: 936 Joined: May 22nd, 2017, 6:14 pm Location: The (Life?) Universe drc wrote: wwei23 wrote:
9(I methodrcally searched the cell space, and then the final population, and got this:):
Code: Select all
x = 8, y = 9, rule = B3/S23
4b2o$5b2o$3bob2o$b6o$ob2o2b2o$o2b2obo$2o$2b4o$3bo!
You literally just posted my pattern. Did you even read the initial post?
My bad, I'll fix it. 9:
Code: Select all
x = 9, y = 9, rule = B3/S23
5b3o2$3b2o3bo$2bo$bo6bo$2o2b5o$b2o$2b4o$3b2o!
Also, did you notice that I said methoDRCally searched the cell space? |
Suppose $f : [a,b] \to \mathbb{R}$ is continuous and $g \in \mathcal{R}[a,b]$ with $g(x) \ge 0$ for all $x \in [a,b]$. Show that there exists a $c \in [a,b]$ such that $$\int_a^b f(x)g(x) \, dx = f(c) \int_a^b g(x) \, dx.$$
My attempt:
My class proved a theorem which states that $f$ attains its minimum and maximum ($m$ and $M$, respectively), since $f$ is continuous on a closed interval like $[a,b]$. So we would have $m \le f(x) \le M$. Multiplying both sides by $g$ and integrating, we get $$m \int_a^b g(x) \, dx \le \int_a^b f(x) g(x) \, dx \le M \int_a^b g(x) \, dx.$$
I am stuck after this. Where can I go from here? |
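One standard way to finish, sketched here as a hint rather than a full worked solution: split on whether $\int_a^b g$ vanishes, then apply the intermediate value theorem.

```latex
% If \int_a^b g(x)\,dx = 0, the inequalities above squeeze
% \int_a^b f(x)g(x)\,dx to 0 as well, so any c \in [a,b] works.
% Otherwise divide through by the (positive) integral of g:
m \;\le\; \frac{\int_a^b f(x)g(x)\,dx}{\int_a^b g(x)\,dx} \;\le\; M.
% Since f is continuous and attains both m and M on [a,b], the
% intermediate value theorem gives c \in [a,b] with f(c) equal
% to this quotient, which is exactly the desired identity.
```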
One of the nice things about relativity is that it doesn't matter how fast you are traveling, only how fast you are traveling relative to something else. So what does that mean?
Moving fuel to the engine - as easy as if the ship was stopped Moving inside the ship - this could be hard while you're accelerating or decelerating, and there would be a lot of that if you're getting close to light speed. However, the ship's speed doesn't matter - if you're accelerating at 1G, it will be as hard to move around whether the ship's just leaving orbit or already at 0.9c. Communicating - from the perspective of anyone/anything on the ship, everything is as if the ship is moving slowly. So signals from the drive can make it to the bridge of the ship just as fast as if the ship was stopped. Turning - it's as complicated as if you were turning while in orbit around a planet.
So what
is a problem with near-light-speed travel? Reacting to outside stimuli, and getting to and from your desired travel velocity.
Reacting to outside stimuli is dodging, which is not the focus of your question, and navigating. Navigating shouldn't be a significant problem though - if you have the tech to get anywhere near light speed, you should be able to plot your trajectory well enough to make navigating fairly simple.
So getting to and from your desired travel velocity is the only real problem you have to worry about. What's so hard about this? At relativistic speeds, your kinetic energy is described by the following formula:
$$K=mc^2\Big(\frac{1}{\sqrt{1-v^2/c^2}}-1\Big)$$
If $v=\frac{\sqrt{3}}{2}c\approx 0.866c$, which you might not count as a "very high fraction of $c$", we get $K=mc^2=E$. In other words, the ship has as much kinetic energy as you could get from converting the entire mass of the ship into energy. So one way to get going that fast would be to have a fuel tank carrying as much mass as the entire rest of the ship, and have a way to convert every atom of fuel into pure energy with 100% efficiency.
With a nuclear reactor, we can currently use about 0.1% of uranium's mass worth of energy. So with a nuclear reactor that could run in space and convert the energy it produces into velocity with 100% efficiency, you could get your kinetic energy up to $K=0.001mc^2$. That gets you up to about $0.045c$. Again, this is if your fuel tank carries as much mass as the entire rest of the ship.
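Both numbers above are easy to reproduce; here is a rough sketch in units where $c=1$ (illustrative only, function names are mine):

```python
import math

def kinetic_energy(beta, m=1.0):
    """Relativistic kinetic energy K = m c^2 (gamma - 1), in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return m * (gamma - 1.0)

def beta_for_energy(k_over_m):
    """Invert K = m (gamma - 1) to find speed as a fraction of c."""
    gamma = 1.0 + k_over_m
    return math.sqrt(1.0 - 1.0 / gamma**2)

# At v = (sqrt(3)/2) c, kinetic energy equals the full rest energy m c^2.
print(kinetic_energy(math.sqrt(3) / 2))  # 1.0 (i.e. K = m c^2)
# A fuel tank worth 0.1% of m c^2 gets you to roughly 0.045 c.
print(beta_for_energy(0.001))            # ≈ 0.0447
```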
Oh, and don't forget that you have to decelerate once you get to your destination. So "your ship" that you have to accelerate consists of your actual ship and the fuel tank carrying enough fuel to decelerate.
In summary, to get a 10000 metric ton spaceship to $0.045c$, you need 10000 metric tons of uranium to decelerate, and 20000 metric tons of uranium to accelerate. Oh, and a 100% efficient reactor and engine, neither of which can actually exist due to entropy always taking a share. Also you may have realized that if you have 10000 metric tons of spaceship and 10000 metric tons of uranium, you've got to accelerate all the unused uranium.
So getting to a "very high fraction of $c$" just isn't feasible unless you're willing to use up a
ridiculous amount of fuel. Unless you consider $0.01c$ to be a very high fraction.
Let's work out an example of an actual "very high fraction of $c$", using an Ohio-class submarine as our starting point. It has a length of 170m and mass of 18750 tonnes. If we scale that lengthwise by 20 we get 3.4km, in the range of your "several kilometers". If we double the cross-sectional radius to make it at least a little bit more cozy, we've now increased the total volume by a factor of 80, for an approximate mass of 1.5 million tonnes. I don't know how this compares to what you had in mind for your spaceship, but this is a very long but narrow spaceship. Now how much energy does it require to get this to "a very high fraction of $c$"? To get this ship to $0.99975c$, you need to convert almost $6\times10^{24}\,\mathrm{kg}$ of mass into kinetic energy for the ship. That's the mass of the Earth.
In short, if you're not working with unobtanium/applied phlebotinum/magic you're not likely going to be able to get close to $c$. |
In this note, you can tell everyone about your favourite number.
The number should have some interesting property which makes it your favourite.
Note : your number should be a natural number
.
Note by Mr. India 5 months, 4 weeks ago
My favourite number is 142857
It is a cyclic number
It is a kaprekar number
It is a harshad number
$\frac{1}{7}=0.\overline{142857}$ and interestingly, $\frac{1}{142857}=0.\overline{000007}$
$142+857=999$ and $14+28+57=99$
My favorite number is 4679307774. It has 10 digits, and if you sum each digit raised to the 10th power (i.e. $4^{10}+6^{10}+7^{10}+9^{10}+3^{10}+0^{10}+7^{10}+7^{10}+7^{10}+4^{10}$), you will get back 4679307774, the same number.
Interesting!!
Are there any other numbers with similar property?
@Mr. India – Yes. View this oeis page.
Does such a number exist in every base? That is, in base $n$, is there an $n$-digit number which equals the sum of its digits each raised to the $n$th power?
That is a really interesting question, to which I have no answer. I would definitely love to look into this more, though.
this page has numbers for different bases
@Mr. India – Thanks so much!!
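Both digit claims in this thread — the 10th-power digit sum of 4679307774 and the cyclic behaviour of 142857 — are easy to check in a few lines (a quick sketch of mine, not from the original discussion):

```python
def digit_power_sum(n):
    """Sum of each digit of n raised to the power len(str(n))."""
    digits = str(n)
    k = len(digits)
    return sum(int(d) ** k for d in digits)

# 4679307774 equals the sum of the 10th powers of its own 10 digits.
print(digit_power_sum(4679307774) == 4679307774)  # True

# 142857 is cyclic: its first six multiples are rotations of its digits.
rotations = {"142857"[i:] + "142857"[:i] for i in range(6)}
print(all(str(142857 * m) in rotations for m in range(1, 7)))  # True
```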
73 and 42.
73 is the 21st prime number.
$21 = 7 \times 3$
37 is the 12th prime number (do you see the symmetry?).
73 in base 2 is 1001001, which is palindromic.
42 is the coolest number of all (hopefully you know why; if you don't know why, don't panic).
Nice symmetry. And yes, 42 is the supreme number. Don't forget your towel!
42 is so fly, I accidentally missed the ground
@Dan Woldeyesus – please please please tell me why 42 is so great
@Mary Brown – Read "The Hitchhiker's Guide to the Galaxy" and you will understand everything. Sometimes it is a little bit confusing, but it is still an essential book for people who do something with math (why? I don't know).
In other words, 42 is a random number which a random guy wrote in a random book, and it is the answer to life, the universe and everything, and that's why it is so special.
@CodeCrafter 1 – 42 is not a simple random number. There are a number of properties for which, it is the first or the last know number having them. And at least for one:
"42 is the only known value that is the number of sets of four distinct positive integers a, b, c, d, each less than the value itself, such that ab − cd, ac − bd, and ad − bc are each multiples of the value. Whether there are other values remains an open question."
Quoted from the Wikipedia.
@João Pedro Afonso – Cool properties... But for Douglas Adams, 42 was still a random number.
@CodeCrafter 1 – Actually I loved the bits of The Hitchhiker's Guide to the Galaxy, that I heard once on the radio. Forgot about 42 being the answer to everything. Must read it properly. Thanks.
Reference from big bang theory? :)
Yes ;)
Don't laugh, I think my favorite number is 64. My life is full of coincidences related to that number, starting with my birthday year. 6+4=10 is the base of our decimal system, 64 is the 6th power of 2, 6 is the first perfect number and 4 is also a power of 2, 6-4=2, my favorite numerical base, 6*4=24, the number of hours in our day, so, overall, it involves a lot of my other favorite numbers. These are not over-the-top properties, but they are enough for me.
I never saw 64 that way.
24 because it is 4!, (2^1)(3^2)(1^3), it is an antiprime, also because of challenge 24, 24 hours in a day, 2+2=4, 2*2=4, 2^2=4, 2^(24-1)=8388608 which is a pretty cool power of 2 since it contains a lot of 8's, and 5^2-1^2, 7^2-5^2.
Pretty interesting!
Well, my favourite number, even though is kind of cliche, is the Euler Mascheroni Constant!!
I mean it surprises me that we do not have its closed form!!! And this is the kind of number which can be EASILY explained to anyone using its definition, but yet, no one would know the vast amount of properties it has.....!!
Curious number indeed!
You forgot, it should have been a natural number. I'm not taking it down; I'm sure it must be a very sexy number.
13, because no one like it
Strange reason ;-)
lol reverse psychology
TIFR 2014 Problem 15 Solution is a part of TIFR entrance preparation series. The Tata Institute of Fundamental Research is India’s premier institution for advanced research in Mathematics. The Institute runs a graduate programme leading to the award of Ph.D., Integrated M.Sc.-Ph.D. as well as M.Sc. degree in certain subjects. The image is a front cover of a book named Introduction to Real Analysis by R.G. Bartle, D.R. Sherbert. This book is very useful for the preparation of TIFR Entrance.
Also Visit: College Mathematics Program of Cheenta
Problem:
\(X\) is a metric space. \(Y\) is a closed subset of \(X\) such that the distance between any two points in \(Y\) is at most 1. Then
A. \(Y\) is compact.
B. any continuous function from \(Y\to \mathbb{R}\) is bounded.
C. \(Y\) is not an open subset of \(X\)
D. none of the above.
Discussion:
Let \(X=\) an infinite set for example \(=\mathbb{R}\) with the metric as discrete metric.
That is \(d(x,y)=1\) if \(x\neq y\) and \(d(x,y)=0\) if \(x=y\).
Then every set in \(X\) is open and every set is closed.
Now take \(Y=X\). Then \(Y\) can be covered by singleton sets. \(Y=\cup \{\{a\}|a\in Y\}\). Now each of the singleton sets is open in discrete metric space. Therefore, this is an open cover for \(Y\). Since \(Y\) is infinite, this cover has no finite subcover. So \(Y\) is
not compact.
Given any \(f:Y\to \mathbb{R}\), for open set \(U\in \mathbb{R}\), \(f^{-1}(U)\subset Y\). Since Y is discrete, \(f^{-1}(U)\)is open in \(Y\). So every function \(f:Y\to \mathbb{R}\) is a continuous function. In particular if we define \(f(x)=x\) then \(f\) is a continuous function. And \(f\) is
not bounded.
Also, every subset of \(X\) is open. So \(Y\)
is open.
Therefore, we are left with
none of the above.
Helpdesk
What is this topic: Real Analysis
What are some of the associated concepts: Continuous function, Discrete Metric Space, Finite subcover
Book Suggestions: Introduction to Real Analysis by R.G. Bartle, D.R. Sherbert
Hah! There is no such thing as the “rigorous mathematical underpinning” of high frequency trading - because HFT, like all trading, is not primarily a mathematical endeavour. It’s true that many people who work in HFT have a mathematical background, but that’s because the tools of applied math and statistics are useful when analysing the large amounts of ...
I would argue, taking a note from John von Neumman, that quantitative finance lacks rigorous underpinnings. Von Neumann warned in 1953 that many things that look like proofs in economics and finance depended on problems that were yet to be solved in mathematics, and where economists were assuming solutions into existence. As the problems were solved in math,...
There is an interesting article "How Derivatives and Risk Models Really Work: Sociological Pricing and the Role of Co-Ordination" by R. Rebonato answering your question.In section "3.8 Conferences and Journals" the author formulate his version of the question as follows:It is worth mentioning one last aspect ... of the ‘institutionalecology’ in ...
Because vanilla derivatives with European exercise depend only on total variance , not on it's dynamics in time.If you have a simpler model (like interpolation of these total variances from your volatility surface) you don't have as much of unobservable parameters stochastic volatility models have.Having more parameters (which many times would need to be ...
It is indeed no rounding error, but follows from the way Yahoo computes the adjusted price: it does not reflect the actual returns of the investor. Just look at August 17 and 20. The actual close prices were 10.75 and 9.95. On August 20 the company went ex-dividend for an amount 0.4508. The return on that day is $\frac{P_t+D_t}{P_{t-1}} -1 = \frac{9.95+0....
Assume that:$$ S_0^1(1+r)\leq a,b $$Arbitrage for a portfolio $V_t$ is defined as:$$V_0\leq0, \quad P(V_1\geq0)=1, \quad P(V_1>0)>0$$Consider borrowing at rate $r$ to buy the risky asset such that $V_0=0$. Then, assuming $a\not= b$:$$\begin{align}\min_{\omega}V_1(\omega)=a-S_0^1(1+r)\geq 0\\\max_{\omega}V_1(\omega)=b-S_0^1(1+r)> 0\end{...
Consider OP's general formula $f(g(t),X_t)$. In case of ambiguity, let us claim that$f=f(t,x)$ is defined with variables $t$ and $x$,$g=g(s)$ is defined with the variable $s$, and$h=h(u,x)=f(g(u),x)$ is defined with variables $u$ and $x$.Then Ito's formula states that$${\rm d}h(u,X_u)=\frac{\partial h}{\partial u}(u,X_u)\,{\rm d}u+\frac{\partial h}{\...
I'm assuming you're talking about a European option. I did a similar problem for my homework recently, I used the in-out parity for pricing the up and in barrier option.Basically European Option = Knock up and in Option + Knock up and out optionYou can price the up and out easily using Binomial and use BS formula for pricing the European Option, then ...
In physics (statistical physics), this angle bracket is used to represent average, for example, here is the notation from Van Kampen’s book:And in stochastic calculus, the quadratic variation is usually represented by the same angle brackets. But like he noted the context should make clear which one is meant.In the equation you have referenced, an ...
Broadly you're asking about directional versus relative value strategies. There are lots of directional approaches, but I've yet to see many discussed publicly in non-generic ways (I mean, if they work, why would anyone talk about them?).As others have noted, trend following is a notable example. I'd consider a lot of equity factor approaches as a ...
Under Black-Scholes assumption for the 2 assets $S_1$ and $S_2$ with volatilities $\sigma_{1,2}$ and correlation $\rho$ the value of this option has an explicit expression which is the Margrabe formulaTo quote the result explicitlyIntroducing $\sigma = \sqrt{\sigma_1^2 + \sigma_2^2 - 2 \sigma_1\sigma_2\rho}$, Margrabe's formula states that the fair price ...
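The quoted formula is short to implement directly; this sketch (with illustrative parameter values of my choosing, zero carry assumed, and the standard normal CDF built from `math.erf`) is not from the original answer:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def margrabe(s1, s2, sigma1, sigma2, rho, t):
    """Fair value of the option to exchange asset 2 for asset 1 at time t,
    under the Black-Scholes assumptions quoted above (no dividends)."""
    sigma = math.sqrt(sigma1**2 + sigma2**2 - 2.0 * rho * sigma1 * sigma2)
    d1 = (math.log(s1 / s2) + 0.5 * sigma**2 * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s1 * norm_cdf(d1) - s2 * norm_cdf(d2)

# Example: both assets at 100, vols 20% and 30%, correlation 0.5, 1 year.
price = margrabe(100.0, 100.0, 0.2, 0.3, 0.5, 1.0)
print(round(price, 2))  # ≈ 10.52
```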
If you make the change of variable $Y_t = \sinh U_t$ and apply Ito then you immediately get$$dU_t = 2dW_t$$so the solution of your SDE is $$Y_t = \sinh\left(2W_t + C\right)$$with $C$ a constant.Then to answer your question is suffices to notice that$$\frac{Y_u}{\sqrt{1+Y_u^2}}=\tanh(U_t)$$which is bounded therefore your expression is finite ...
$t$ is fixed to simply apply Ito Lemma to $h(s,X_s)$ with the function $h: (s,x)\rightarrow f(t-s,x)$ and you get your answer. There's nothing special about it, I think you are a bit confused by the change of variable $s\rightarrow(t-s)$.@hypernova has laid out the complete steps below for you.
For the house, there are reputational advantages of publishing (“ we have smart quants”). This may not outweigh the loss of competitive information, depending on the material published. As you imply, the main benefit seems to accrue to the quants themselves, in terms of enhancing their own brand.Perhaps this paradox arises from the question of what is ...
Note that we can write $S_1(\omega)$ as a convex combination of $\alpha$ and $\beta$ with\begin{equation}S_1(\omega) = \frac{\beta-S_1(\omega)}{\beta-\alpha} \alpha + \frac{S_1(\omega) - \alpha}{\beta-\alpha} \beta\end{equation}Since $h$ was a convex function then by definition\begin{equation}h(S_1(\omega)) \leq \frac{\beta-S_1(\omega)}{\beta-\alpha} ...
The optimal investment strategy depends on the investment goals, or equivalently your utility function (which the investment strategy is supposed to maximize).The forward will trade at $\mathbf{E}^*_0(F_T)$ in the market when you invest at $t=0$.If you buy your maximum volume $M$, then gain/loss at $T$ is given by $M(F_T-\mathbf{E}^*_0(F_T))$ (which is ...
You should consider the stages of the default process instead of a binary "default", where there are various points the borrower is able to cure the loan.In a traditional credit model, the general process is to predict the state of the loan and then predict transitions between stages over the life of the loan. This is done by simulating macro variables (...
Here is one recipe, in case you can live with Spearman rank correlation. (Which you should: linear correlation is often not appropriate in the non-normal case. And in the normal case, there is almost no difference between the two correlation types.) Generate samples of your $k$ features with all the desired attributes. These samples may be random or...
In a nutshell, this is the "variance drag" problem. The mechanics of how you short something matter, and it's relevant to the discussion of levered/inverse ETFs that behave differently from classic/vanilla positions. Consider an XYZ future at 100. A day later it's 1% up, at 101. Two days later, it's up 1% again, at 102.01. If I go long, I make 2.01 profit. ...
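The asymmetry between the two ways of being short is easy to see by simulating those two days directly (a toy sketch with made-up numbers, not from the original answer):

```python
# Two consecutive +1% days on a future starting at 100.
returns = [0.01, 0.01]

# Short one futures contract: P&L is minus the full price change.
price = 100.0
for r in returns:
    price *= 1.0 + r
short_futures_pnl = 100.0 - price  # about -2.01: the whole move against you

# Daily-rebalanced inverse (-1x) position, like an inverse ETF:
# each day the NAV earns minus that day's return.
nav = 100.0
for r in returns:
    nav *= 1.0 - r
inverse_etf_pnl = nav - 100.0      # about -1.99: compounding softens the loss

print(round(short_futures_pnl, 2), round(inverse_etf_pnl, 2))  # -2.01 -1.99
```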
We assume that the stock price process $\{S_t,\,t>0\}$ satisfies, under the real-world probability measure $P$, an SDE of the form\begin{align*}dS_t=S_t\big((\mu-q)dt+\sigma dW_t\big),\end{align*}where $\{W_t, \, t >0\}$ is a standard Brownian motion.Here, we need to consider the total return asset $e^{qt}S_t$, that is, the asset with the dividend ...
The parameter $\xi$ represents your strategy, namely the quantity you hold in your portfolio of each security $S^0$, $S^1$ and $S^2$. Consider the following strategy:$${\xi}=(\xi^1,\xi^2,\xi^3)=(1.5,1,-0.5)$$Then:$$\begin{align}& t=0: && \xi\bar{S}_0=\xi^0S_0^0+\xi^1S_0^1+\xi^2S_0^2 = 1.5+2-3.5=0\\& t=1: && \xi\bar{S}_1(\omega_1)...
Both @alexprice and @FunnyBuzer have some good points, and I have upvoted them. I think I have enough to add here that I'll make another answer entry.First off, @AFK was fairly correct that you do not need stochastic volatility for vanilla (European exercise) option pricing, since (as he says and alexprice elaborates) you just interpolate the surface of ...
I think that the main advantage of using a stochastic volatility model is to produce a consistent volatility smile.Let's consider the pricing formulas for the normal and lognormal volatilities:$$dS_t=\sigma dW_t\Rightarrow \mathbb{E}[(S_T-K)^+]=(S_t-K)\Phi\left(\frac{s-S_t}{\sigma\sqrt{\Delta t}}\right)+\sigma\sqrt{\Delta t}\phi\left(\frac{s-S_t}{\sigma\...
There are different types of publication by banks. Paid publication: this is what is called “research”, where bank analysts or quants offer the bank's clients various sorts of market insights or reviews. This is paid research, usually under the condition that the client does not broadcast it to others. Of course some very famous ones get so popular that they ...
Your reasoning for the first property does not look correct or at least I do not understand it. Your arguments for the second property seem sound. But your wording of the second property is a bit fuzzy. You should state this more clearly, for example: $C(1,\ldots,1,u_j,1,\ldots,1) = u_j$ for all $u_j\in [0,1]$ and $j\in 1,\ldots, d.$You don't mention it ...
At $t_1$, this payoff can be priced using the Margrabe formula as used for pricing an exchange option.See Margrabe Formula hereUsing the notations in the question and those used the hyperlinked document above -$Price_{t_1} = P_{t_1}e^{(\mu_P-r)\tau}\Phi(d_+) -HR \times G_{t_1}e^{(\mu_G-r)\tau}\Phi(d_-) \tag{1}$$Price_0$ is the discounted value of $...
Find the conditions under which:$E_{0}^{*}[\max (P_{T} - HR\times G_T, 0)] = \max (P_{0} - HR\times G_0, 0)$We have a no-brainer solution - the condition that the drift and volatility of both $P$ and $G$ is zero, which means $P$ and $G$ are constants in time.Second valid condition - the option is deep in the money or deep out of the money, such that ...
Let $c_t$ be the price of an European call with maturity $T$ and $D_{t,T}$ the discount factor from $T$ to $t$. We assume deterministic rates. Then note that for $s<t\leq T$:$$\begin{align}E^Q_s\left(c_t\right)&=E^Q_s\left(E^Q_t\left(D_{t,T}(S_T-K)^+\right)\right)\\[3pt]&=E^Q_s\left(D_{t,T}(S_T-K)^+\right)\\[3pt]&=E^Q_s\left(\frac{D_{s,t}... |
At a dance party there are 100 men and 20 women. Each man selects a group of women as potential dance partners, but in such a way that given any group of 20 men it is always possible to pair those 20 men with the 20 women, with each man paired with someone on his list. What is the smallest number $L$, where $L$ is the sum over all men of the number of women on each man's list, that will guarantee this?
I came here to verify my solution to this problem, exercise 20 from chapter 2 of the fourth edition of
Introductory Combinatorics by Richard A. Brualdi.
My solution: 1981
Problem text: "At a dance-hop there are $100$ men and $20$ women. For each $i$ from $1, 2, \ldots, 100$, the $i$th man selects a group of $a_i$ women as potential dance partners (his dance list), but in such a way that given any group of $20$ men, it is always possible to pair the $20$ men up with the $20$ women with each man paired up with a woman on his dance list. What is the smallest sum $a_1 + a_2 + \cdots + a_{100}$ that will guarantee this?"
Clarification: In order for the problem to make sense, we assume $1 \leq a_i \leq 20$ for each $i$.
Pedantic nitpicking: First, we have to establish that, if $n > 100$ does not guarantee this, then no number $100 \leq m < n$ can guarantee this. Suppose $n$ does not guarantee this. Then there is a choice of lists such that, for some group of $20$ men, it is not possible to pair each man up with a partner from his list. Now, since $n > 100$, some list must contain at least two names. If we strike one of the names from that list, we have a sum of $n-1$ total names. If this configuration were valid, then the pairing chosen would be valid for the original configuration, so $n-1$ does not guarantee a pairing.
Argument: First, we show that $1980$ does not guarantee the condition. Let $a_i = 20$ for $1 \leq i \leq 80$, and $a_i = 19$ for $81 \leq i \leq 100$. If the $20$ men with $19$ women on their lists are chosen, and each of their lists is the same, then those $20$ men cannot each be paired up with a woman from their list, so $1980 = 80 \cdot 20 + 20 \cdot 19$ does not work.
Now, we prove that $1981$ guarantees the condition. Choose any configuration of lists with sum of lengths equal to $1981$; then at most $100 \cdot 20 - 1981 = 19$ names are "missing" across all lists. Consider any $k \leq 19$ lists, each with fewer than $20$ women. The total number of women referenced by these lists is at least $20 - \lfloor 19/k \rfloor$, since a woman is absent from their union only if her name is missing from all $k$ lists, which uses up $k$ of the at most $19$ missing names. So the union has size at least $20 - \lfloor 19/k \rfloor \geq k$, which clearly holds for all $1 \leq k \leq 19$; and any collection of lists that includes a full $20$-name list references all $20$ women, which covers the remaining cases, including $k = 20$. By Hall's Marriage Theorem, there is a matching.
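The counterexample half of the argument can be checked mechanically; this sketch (with indices standing in for the women's names) confirms that a total of 1980 fails Hall's condition for the bad group:

```python
# 80 men list all 20 women; the last 20 men share one fixed 19-woman list.
full_list = set(range(20))
short_list = set(range(19))
lists = [full_list] * 80 + [short_list] * 20

# The list lengths sum to exactly 1980.
print(sum(len(l) for l in lists))  # 1980

# Choose the 20 men with the short lists: their lists reference only 19
# distinct women, so Hall's condition |N(S)| >= |S| fails for this group
# and no perfect matching of these 20 men exists.
bad_group = lists[80:]
print(len(set().union(*bad_group)))  # 19
```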
ISSN: 1531-3492
eISSN: 1553-524X
Discrete & Continuous Dynamical Systems - B
May 2013, Volume 18, Issue 3
Abstract:
This article surveys the mathematical aspects of traveling waves of a class of chemotaxis models with logarithmic sensitivity, which describe a variety of biological or medical phenomena including bacterial chemotactic motion, initiation of angiogenesis and reinforced random walks. The survey is focused on the existence, wave speed, asymptotic decay rates, stability and chemical diffusion limits of traveling wave solutions. The main approaches are reviewed and related analytical results are given with sketchy proofs. We also develop some new results with detailed proofs to fill the gap existing in the literature. The numerical simulations of steadily propagating waves will be presented along the study. Open problems are proposed for interested readers to pursue.
Abstract:
This paper is concerned with the asymptotic behavior of solutions of the FitzHugh-Nagumo system on $\mathbb{R}^n$ driven by additive noise and deterministic non-autonomous forcing. We prove the system has a random attractor which pullback attracts all tempered random sets. We also prove the periodicity of the random attractor when the system is perturbed by time periodic forcing. The pullback asymptotic compactness of solutions is established by uniform estimates on the tails of solutions outside a large ball in $\mathbb{R}^n$.
Abstract:
In this paper, we use the Chebyshev spectral collocation method to solve a certain type of stochastic differential equations (SDEs). We also use this method to estimate parameters of stochastic differential equations from discrete observations by the maximum likelihood technique and the Kessler technique. Our numerical tests show that the spectral method gives better results than the Euler method and the Shoji-Ozaki method.
Abstract:
We establish the exponential time decay rate of smooth solutions of small amplitude to the Vlasov-Poisson-Fokker-Planck equations to the Maxwellian both in the whole space and in the periodic box via the uniform-in-time energy estimates and also the macroscopic equations.
Abstract:
The method of generalized modeling has been used to analyze differential equations arising in applications. It makes minimal assumptions about the precise functional form of the differential equation and the quantitative values of the steady-states which it aims to analyze from a dynamical systems perspective. The method has been applied successfully in many different contexts, particularly in ecology and systems biology, where the key advantage is that one does not have to select a particular model but is able to provide directly applicable conclusions for sets of models simultaneously. Although many dynamical systems in mathematical biology exhibit steady-state behaviour one also wants to understand nonlocal dynamics beyond equilibrium points. In this paper we analyze predator-prey dynamical systems and extend the method of generalized models to periodic solutions. First, we adapt the equilibrium generalized modeling approach and compute the unique Floquet multiplier of the periodic solution which depends upon so-called generalized elasticity and scale functions. We prove that these functions also have to satisfy a flow on parameter (or moduli) space. Then we use Fourier analysis to provide computable conditions for stability and the moduli space flow. The final stability analysis reduces to two discrete convolutions which can be interpreted to understand when the predator-prey system is stable and what factors enhance or prohibit stable oscillatory behaviour. Finally, we provide a sampling algorithm for parameter space based on nonlinear optimization and the Fast Fourier Transform which enables us to gain a statistical understanding of the stability properties of periodic predator-prey dynamics.
Abstract:
Robust multiple-fate morphogen gradients are essential for embryo development. Here, we analyze mathematically a model of morphogen gradient (such as Dpp in Drosophila wing imaginal disc) formation in the presence of non-receptors with both diffusion of free morphogens and the movement of morphogens bound to non-receptors. Under the assumption of rapid degradation of unbound morphogen, we introduce a method of functional boundary value problem and prove the existence, uniqueness and linear stability of a biologically acceptable steady-state solution. Next, we investigate the robustness of this steady-state solution with respect to significant changes in the morphogen synthesis rate. We prove that the model is able to produce robust biological morphogen gradients when production and degradation rates of morphogens are large enough and non-receptors are abundant. Our results provide mathematical and biological insight to a mechanism of achieving stable robust long distance morphogen gradients. Key elements of this mechanism are rapid turnover of morphogen to non-receptors of neighboring cells resulting in significant degradation and transport of non-receptor-morphogen complexes, the latter moving downstream through a "bucket brigade" process.
Abstract:
This paper studies the multidimensional stability of planar traveling waves for integrodifference equations. It is proved that for a Gaussian dispersal kernel, if the traveling wave is exponentially orbitally stable in one space dimension, then the corresponding planar wave is stable in $H^m(\mathbb{R}^N)$, $N\ge 4$, $m\ge [N/2]+1$, with the perturbation decaying at algebraic rate.
Abstract:
A Hamilton-Jacobi formulation has been established previously for phenotypically structured population models where the solution concentrates as Dirac masses in the limit of small diffusion. Is it possible to extend this approach to spatial models? Are the limiting solutions still in the form of sums of Dirac masses? Does the presence of several habitats lead to polymorphic situations? We study the stationary solutions of a structured population model, while the population is structured by continuous phenotypical traits and discrete positions in space. The growth term varies from one habitable zone to another, for instance because of a change in the temperature. The individuals can migrate from one zone to another with a constant rate. The mathematical modeling of this problem, considering mutations between phenotypical traits and competitive interaction of individuals within each zone via a single resource, leads to a system of coupled parabolic integro-differential equations. We study the asymptotic behavior of the stationary solutions to this model in the limit of small mutations. The limit, which is a sum of Dirac masses, can be described with the help of an effective Hamiltonian. The presence of migration can modify the dominant traits and lead to polymorphic situations.
Abstract:
The aim of this paper is to prove results about the existence and stability of multiple steady states in a system of ordinary differential equations introduced by R. Lev Bar-Or [5] to model the interactions between T cells and macrophages. Previous results showed that for certain values of the parameters these equations have three stationary solutions, two of which are stable. Here it is shown that there are values of the parameters for which the number of stationary solutions is at least seven and the number of stable stationary solutions at least four. This requires approaches different to those used in existing work on this subject. In addition, a rather explicit characterization is obtained of regions of parameter space for which the system has a given number of stationary solutions.
Abstract:
We present a new dynamical approach to the Blumberg's equation, a family of unimodal maps. These maps are proportional to $Beta(p,q)$ probability density functions. Using the symmetry of the $Beta(p,q)$ distribution and symbolic dynamics techniques, a new concept of mirror symmetry is defined for this family of maps. The kneading theory is used to analyze the effect of such symmetry in the presented models. The main result proves that two mirror symmetric unimodal maps have the same topological entropy. Different population dynamics regimes are identified, when the intrinsic growth rate is modified: extinctions, stabilities, bifurcations, chaos and Allee effect. To illustrate our results, we present a numerical analysis demonstrating: the monotonicity of the topological entropy with the variation of the intrinsic growth rate, the existence of isentropic sets in the parameter space, and mirror symmetry.
Abstract:
In this paper, we propose an efficient numerical method for delay differential equations with vanishing proportional delay $qt$ $(0 < q < 1)$. The algorithm is a mixture of the Legendre-Gauss collocation method and domain decomposition. It has global convergence and spectral accuracy provided that the data in the given pantograph delay differential equation are sufficiently smooth. Numerical results demonstrate the spectral accuracy of this approach and coincide well with theoretical analysis.
Abstract:
This paper deals with the chemotaxis system $$ \left\{ \begin{array}{ll} u_t ={D} u_{xx}-\chi [u(\ln v)_x]_x, & x\in (0, 1), \ t>0,\\ v_t =\varepsilon v_{xx} +uv-\mu v, & x\in (0, 1), \ t>0, \end{array} \right. $$ under Neumann boundary condition, where $\chi<0$, $D>0$, $\varepsilon>0$ and $\mu>0$ are constants.
It is shown that for any sufficiently smooth initial data $(u_0, v_0)$ fulfilling $u_0\ge 0$, $u_0 \not\equiv 0$ and $v_0>0$, the system possesses a unique global smooth solution that enjoys exponential convergence properties in $L^\infty(\Omega)$ as time goes to infinity, which depend on the sign of $\mu-\bar{u}_0$, where $\bar{u}_0 :=\int_0^1 u_0 dx$. Moreover, we prove that the constant pair $(\mu, (\frac{\mu}{\lambda})^{\frac{D}{\chi}})$ (where $\lambda>0$ is an arbitrary constant) is the only positive stationary solution. The biological implications of our results will be given in the paper.
This question already has an answer here:
So recently, looking at high energy particles through the lens of General and Special Relativity has piqued my interest. One thing I was considering, using the electron as the first example, is as follows:
If gravity is the result of mass (stress-energy tensor), and as particles approach the speed of light, their mass is increased as measured by an outside observer (A) by a factor, $ \frac{1}{\sqrt{(1-(v/c)^{2})}} $.
The mass of the electron, $ m_{e} $, is $ 9.109 \times 10^{-31} kg $, and the radius is $ r_{e} = 2.818 \times 10^{-15} m $.
Given the Schwarzschild Condition is: $$ r_{s} = \frac{2Gm}{c^{2}} $$.
Observer A, due to the Lorentz dilation, will measure a dilated mass term that is increasing monotonically as a function of the electron's velocity. The mass that observer A will then measure is: $$ m_{A} = \frac{m_{e}}{\sqrt{(1-(v/c)^{2})}} $$. Combining the Schwarzschild Condition with the mass as measured by A, we attain a relation between the mass that observer A measures with the particle's velocity, accounting for relativistic effects.
This allows me to now pose the question that given the Schwarzschild radius is the electron radius, at what velocity does the mass increase to the point where the electron, as seen by observer A, becomes a black hole? This also leads to the more conceptual question of what are the implications of simultaneity in this situation? If the electron's mass is still $m_{e}$ in its own frame, then shouldn't the electron not really turn into a black hole? Of course, this whole argument falls apart if the derived speed is greater than $c$. I went on to calculate that. I found that speed to be:
$$ v = \sqrt{c^{2}-\frac{4G^{2}m^{2}_{e}}{r^{2}_{e}c^{2}}}$$
The velocity at which an object of a given radius and mass would become a black hole.
To my disappointment, this came out to be $8.988 \times 10^{16} m/s $. Over twice the speed of light. I have yet to calculate the velocity for more massive objects, such as stars ( which could theoretically reach relativistic velocities as the result of being flung by a galactic collision ). Either way, if this velocity is attainable for anything, what would simultaneity say about this? |
ISSN:
1531-3492
eISSN:
1553-524X
Discrete & Continuous Dynamical Systems - B
August 2013 , Volume 18 , Issue 6
Special Issue on Deterministic and Stochastic Dynamical Systems with delays
Abstract:
Studying delay differential equations is motivated by the fact that the evolution of systems in physics, chemistry, the life sciences, engineering, and economics, may, and often does, depend not only on the present state of the system but also on earlier states. Examples arise from the 2-body problem of electrodynamics (which is barely understood), in laser physics, materials with thermal memory, biochemical reactions, population growth, physiological regulatory systems, and business cycles, among many others. Delays may also appear when one wants to control a system by applying an external force which takes into account the history of the solution. Also mathematical problems in geometry and probability yield delay differential equations. In modeling real world phenomena another important aspect is uncertainty. It is often useful to take into account some randomness or environmental noise. This Special Issue addresses both aspects of dynamical systems, namely, hereditary characteristics and stochasticity. We have selected a dozen of papers which illustrate some lines of recent research.
For more information please click the “Full Text” above.
Abstract:
The stability of equilibrium solutions of a deterministic linear system of delay differential equations can be investigated by studying the characteristic equation. For stochastic delay differential equations stability analysis is usually based on Lyapunov functional or Razumikhin type results, or Linear Matrix Inequality techniques. In [7] the authors proposed a technique based on the vectorisation of matrices and the Kronecker product to transform the mean-square stability problem of a system of linear stochastic differential equations into a stability problem for a system of deterministic linear differential equations. In this paper we extend this method to the case of stochastic delay differential equations, providing sufficient and necessary conditions for the stability of the equilibrium. We apply our results to a neuron model perturbed by multiplicative noise. We study the stochastic stability properties of the equilibrium of this system and then compare them with the same equilibrium in the deterministic case. Finally the theoretical results are illustrated by numerical simulations.
Abstract:
We study invariance and monotonicity properties of Kunita-type stochastic differential equations in $\mathbb{R}^d$ with delay. Our first result provides sufficient conditions for the invariance of closed subsets of $\mathbb{R}^d$. Then we present a comparison principle and show that under appropriate conditions the stochastic delay system considered generates a monotone (order-preserving) random dynamical system. Several applications are considered.
Abstract:
We establish a necessary and sufficient condition of exponential stability for the contraction semigroup generated by an abstract version of the linear differential equation $$∂_t u(t)-\int_0^\infty k(s)\Delta u(t-s)ds = 0 $$ modeling hereditary heat conduction of Gurtin-Pipkin type.
Abstract:
For a class of cooperative population models with patch structure and multiple discrete delays, we give conditions for the absolute global asymptotic stability of both the trivial solution and -- when it exists -- a positive equilibrium. Under a sublinearity condition, sharper results are obtained. The existence of positive heteroclinic solutions connecting the two equilibria is also addressed. As a by-product, we obtain a criterion for the existence of positive traveling wave solutions for an associated reaction-diffusion model with patch structure. Our results improve and generalize criteria in the recent literature.
Abstract:
We consider a modified Cahn-Hilliard equation where the velocity of the order parameter $u$ depends on the past history of $\Delta \mu $, $\mu $ being the chemical potential with an additional viscous term $ \alpha u_{t},$ $\alpha >0.$ In addition, the usual no-flux boundary condition for $u$ is replaced by a nonlinear dynamic boundary condition which accounts for possible interactions with the boundary. The aim of this work is to analyze the passage to the singular limit when the memory kernel collapses into a Dirac mass. In particular, we discuss the convergence of solutions on finite time-intervals and we also establish stability results for global and exponential attractors.
Abstract:
In this paper we study a parameter estimation method in functional differential equations with state-dependent delays using a quasilinearization technique. We define the method, prove its convergence under certain conditions, and test its applicability in numerical examples. We estimate infinite dimensional parameters such as coefficient functions, delay functions and initial functions in state-dependent delay equations. The method uses the derivative of the solution with respect to the parameters. The proof of the convergence is based on the Lipschitz continuity of the derivative with respect to the parameters.
Abstract:
We consider state-dependent delay equations of the form \[ x'(t) = f(x(t - d(x(t)))) \] where $d$ is smooth and $f$ is smooth, bounded, nonincreasing, and satisfies the negative feedback condition $xf(x) < 0$ for $x \neq 0$. We identify a special family of such equations each of which has a ``rapidly oscillating" periodic solution $p$. The initial segment $p_0$ of $p$ is the fixed point of a return map $R$ that is differentiable in an appropriate setting.
We show that, although all the periodic solutions $p$ we consider are unstable, the stability can be made arbitrarily mild in the sense that, given $\epsilon > 0$, we can choose $f$ and $d$ such that the spectral radius of the derivative of $R$ at $p_0$ is less than $1 + \epsilon$. The spectral radii are computed via a semiconjugacy of $R$ with a finite-dimensional map.
Abstract:
A class of stochastic optimal control problems of infinite dimensional Ornstein-Uhlenbeck processes of neutral type are considered. One special feature of the system under investigation is that time delays are present in the control. An equivalent formulation between an adjoint stochastic controlled delay differential equation and its lifted control system (without delays) is developed. As a consequence, the finite time quadratic regulator problem governed by this formulation is solved based on a direct solution of some associated Riccati equation.
Abstract:
In this paper we deal with a nonautonomous differential equation with a nonautonomous delay. The aim is to establish the existence of an unstable invariant manifold to this differential equation for which we use the Lyapunov-Perron transformation. However, the delay is assumed to be unbounded which makes it necessary to use nonclassical methods.
Abstract:
We establish the existence of a deterministic exponential growth rate for the norm (on an appropriate function space) of the solution of the linear scalar stochastic delay equation $d X(t) = X(t-1) d W(t)$ which does not depend on the initial condition as long as it is not identically zero. Due to the singular nature of the equation this property does not follow from available results on stochastic delay differential equations. The key technique is to establish existence and uniqueness of an invariant measure of the projection of the solution onto the unit sphere in the chosen function space via asymptotic coupling, and to prove a Furstenberg-Hasminskii-type formula (like in the finite dimensional case).
Abstract:
For a stochastic functional differential equation (SFDE) to have a unique global solution it is in general required that the coefficients of the SFDE obey the local Lipschitz condition and the linear growth condition. However, there are many SFDEs in practice which do not obey the linear growth condition. The main aim of this paper is to establish existence-and-uniqueness theorems for SFDEs where the linear growth condition is replaced by more general Khasminskii-type conditions in terms of a pair of Lyapunov-type functions.
Abstract:
The existence of a random attractor is established for a mean-square random dynamical system (MS-RDS) generated by a stochastic delay differential equation (SDDE) with random delay for which the drift term is dominated by a nondelay component satisfying a one-sided dissipative Lipschitz condition. It is shown by Razumikhin-type techniques that the solution of this SDDE is ultimately bounded in the mean-square sense and that solutions for different initial values converge exponentially together as time increases in the mean-square sense. Consequently, similar boundedness and convergence properties hold for the MS-RDS and imply the existence of a mean-square random attractor for the MS-RDS that consists of a single stochastic process.
Quasirandomness Introduction
Quasirandomness is a central concept in extremal combinatorics, and is likely to play an important role in any combinatorial proof of the density Hales-Jewett theorem. This will be particularly true if that proof is based on the density increment method or on some kind of generalization of Szemerédi's regularity lemma.
In general, one has some kind of parameter associated with a set, which in our case will be the number of combinatorial lines it contains, and one would like a deterministic definition of the word "quasirandom" with the following key property: every quasirandom set [math]\mathcal{A}[/math] has roughly the same value of the given parameter as a random set of the same density.
Needless to say, this is not the only desirable property of the definition, since otherwise we could just define [math]\mathcal{A}[/math] to be quasirandom if it has roughly the same value of the given parameter as a random set of the same density. The second key property is this: every set [math]\mathcal{A}[/math] that fails to be quasirandom has some other property that we can exploit.
These two properties are already discussed in some detail in the article on the density increment method: this article concentrates more on examples of quasirandomness in other contexts, and possible definitions of quasirandomness connected with the density Hales-Jewett theorem.
A possible definition of quasirandom subsets of [math][3]^n[/math]
As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function.
Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n. (So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math])
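The sampling procedure just described can be made concrete. The sketch below (the function names are ours, not from the article) draws one quadruple (A, A', B, B') of wrap-around intervals inside a single random permutation of [math][n][/math]; estimating the expectation in the definition is then a Monte Carlo average of f(A,B)f(A,B')f(A',B)f(A',B') over such quadruples:

```python
import random

def random_interval(perm):
    """A random wrap-around interval in the permuted ground set."""
    n = len(perm)
    start = random.randrange(n)
    length = random.randrange(n + 1)  # possibly empty, possibly all of [n]
    return frozenset(perm[(start + i) % n] for i in range(length))

def sample_quadruple(n):
    """Draw (A, A', B, B'): four random intervals in one random
    permutation of [n] = {1, ..., n}, as in the definition above."""
    perm = list(range(1, n + 1))
    random.shuffle(perm)
    return tuple(random_interval(perm) for _ in range(4))

A, Ap, B, Bp = sample_quadruple(10)
```

Note that all four intervals share the same permutation; as the text says, several variants of this distribution are possible.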
As ever, it is easy to prove positivity. To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect).
Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined). |
Each independent component of the Killing vectors for Schwarzschild that generate isometries comes from the translation along $t$ and also from the fact that the metric is spherically symmetric, i.e.
\begin{eqnarray} K^{(0)}&=&\partial_{t}\\ K^{(1)}&=&\sin{\phi}\,\partial_{\theta}+\cot{\theta}\cos{\phi}\,\partial_{\phi}\\ K^{(2)}&=&\cos{\phi}\,\partial_{\theta}-\cot{\theta}\sin{\phi}\,\partial_{\phi}\\ K^{(3)}&=&\partial_{\phi}. \end{eqnarray}
Then we know that $K^{r}=0$ (from $K=K^{\mu}\partial_{\mu}=K^{t}\partial_{t}+K^{r}\partial_{r}+K^{\theta}\partial_{\theta}+K^{\phi}\partial_{\phi} $). I think it's because we can factor the variable $r$ out of a parenthesis, i.e. for Schwarzschild \begin{equation} ds^{2}= r\left\{-\left(\frac{r-2m}{r^{2}}\right)dt^{2}+\frac{dr^{2}}{r-2m}+r\,d\Omega^{2}\right\}. \end{equation} Similarly for the Kerr metric, each component of the metric depends on $r$ and $\theta$, so we might factor those components out of a parenthesis with some algebra (I haven't done it yet), and we would get $K$ for the Kerr metric with $K^{r}$ and $K^{\theta}$ equal to zero.
Furthermore, I also saw on this paper [eq. (3.8), (3.9), (3.10), (3.11)] that the Killing vector $K^{\theta}$, on the near horizon extremal Kerr black hole, equal to zero. I think, it was because we can put the $\theta$ variable out of the parenthesis.
Thus, I conclude that if we can factor some variables out for any metric, the component of the Killing vector along that direction equals zero. I don't know whether this is true or not; at least it worked for the metrics above, but I don't know why it does. If it's true, I think we can reduce the 10 equations from the Killing equation needed to calculate the Killing vectors for any given metric. Please, can anyone explain this?
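Part of this observation can be checked symbolically. For a coordinate vector field $K=\partial_{\mu}$ with constant components, the Killing equation $(\mathcal{L}_K g)_{\alpha\beta}=0$ reduces to $\partial_{\mu}g_{\alpha\beta}=0$, i.e. no metric component may depend on that coordinate. A small sympy sketch (metric and symbol names are ours) verifies this for Schwarzschild:

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
m = sp.symbols('m', positive=True)

# Schwarzschild metric: diag(g_tt, g_rr, g_theta_theta, g_phi_phi)
g = sp.diag(-(1 - 2*m/r), 1/(1 - 2*m/r), r**2, r**2 * sp.sin(th)**2)

def is_killing_coordinate(g, x):
    """For K = d/dx with constant components, the Killing equation
    holds iff no metric component depends on x."""
    return all(sp.diff(g[i, j], x) == 0 for i in range(4) for j in range(4))

assert is_killing_coordinate(g, t)       # K^(0) = d/dt is Killing
assert is_killing_coordinate(g, ph)      # K^(3) = d/dphi is Killing
assert not is_killing_coordinate(g, r)   # d/dr is not
assert not is_killing_coordinate(g, th)  # d/dtheta alone is not either
```

This only covers Killing vectors with constant components; the two remaining rotational Killing vectors have $\theta$- and $\phi$-dependent components and need the full Killing equation.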
I'm solving the classical Black & Scholes (BS) PDE for a European option using finite difference and the implicit scheme. In other words, I'm trying to solve
$\displaystyle\frac{\partial V}{\partial t} + \frac 12\sigma^2\frac{\partial^2V}{\partial x^2} + \mu\frac{\partial V}{\partial x} - rV = 0$
with boundary condition $V(T,x) = \Phi(e^x) = \max\{e^x - K, 0\}$. The drift parameter is as usual $\mu = r - y - \frac 12 \sigma^2$.
The problem is when my $x$ grid is too narrow, my option prices diverge from the BS. Specifically, I'm solving the ATM case with S0 = K = 2775, and if I use [np.log(2675), np.log(2875)] as the boundary points, the solution has a huge error (~400). However, when I use a wider grid [np.log(1775), np.log(3775)], it works like a charm.
I thought the implicit scheme would be numerically stable enough to not exhibit this kind of behavior, so should I assume there is something wrong with my Python code?
My other parameters are r, y, T, sigma = (0.0278, 0.0189, 1, 0.15).
import numpy as np
from scipy.sparse import csc_matrix, identity
from scipy.sparse.linalg import spsolve

def buildDs(nx):
    # Central first/second-derivative stencils with one-sided boundary rows
    D1 = np.diag(np.ones(nx), 1) - np.diag(np.ones(nx), -1)
    D2 = np.diag(np.ones(nx), 1) + np.diag(np.ones(nx), -1) - 2 * np.eye(nx + 1)
    D1[0, 2] = -1
    D1[0, 1] = 4
    D1[0, 0] = -3
    D1[-1, -1] = 3
    D1[-1, -2] = -4
    D1[-1, -3] = 1
    D2[0, 2] = 1
    D2[0, 1] = -2
    D2[0, 0] = 1
    D2[-1, -1] = 1
    D2[-1, -2] = -2
    D2[-1, -3] = 1
    return csc_matrix(D1), csc_matrix(D2)

def pxFD(Phi, r, mu, sigma, xs, ts, nx = 2000, nt = 10000):
    dx = (xs[1] - xs[0]) / nx
    dt = (ts[1] - ts[0]) / nt
    xs = np.linspace(xs[0], xs[1], num = nx + 1, endpoint = True)
    ts = np.linspace(ts[0], ts[1], num = nt + 1, endpoint = True)
    V = np.zeros((nt + 1, nx + 1))
    I = identity(nx + 1)
    D1, D2 = buildDs(nx)
    L = 1 / 2 * (sigma / dx) ** 2 * D2 + mu / (2 * dx) * D1 - r * I
    P = I - dt * L
    V[-1] = Phi(xs)  # terminal payoff
    for j in reversed(range(nt)):  # march backwards in time (implicit step)
        V[j] = spsolve(P, V[j + 1])
    return V, xs, ts

def pxFDGBM(Phi, r, y, sigma, Ss, ts, nx = 2000, nt = 10000):
    mu = r - y - 1 / 2 * sigma ** 2
    xs = np.log(Ss)
    Philog = lambda x: Phi(np.exp(x))
    Vs, xs, ts = pxFD(Phi = Philog, r = r, mu = mu, sigma = sigma, xs = xs, ts = ts, nx = nx, nt = nt)
    return Vs, np.exp(xs), ts
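For reference, the closed-form Black-Scholes price the scheme should converge to can be computed directly with the parameters above (a sketch with our own variable names; y enters as a continuous dividend yield):

```python
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, r, y, sigma, T):
    """Black-Scholes European call with continuous dividend yield y."""
    d1 = (np.log(S0 / K) + (r - y + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * np.exp(-y * T) * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

price = bs_call(2775, 2775, 0.0278, 0.0189, 0.15, 1.0)  # roughly 174
```

Against this benchmark, an error of ~400 on the narrow grid means the numerical price is off by more than twice the option's value, which points at the boundary treatment rather than at the stability of the implicit stepping.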
Here's a question in combinatorial geometry which feels very much like other questions I'm familiar with but which I can't see how to get a hold of. I'll actually propose two different questions on the same theme. The continuous question is the one I'd really like to know the answer to, but the discrete question feels more "mainstream" and tractable and the two are clearly related.
Discrete Question:
Let $S$ be a subset of the $N \times N$ grid such that no three distinct points of $S$ (including collinear points) form an isosceles triangle. How big can $|S|$ be? Can it be as large as $N^{2-\epsilon}$?
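For small $N$ the condition in the discrete question can be checked by brute force. The sketch below (our own helper, not from the question) treats a triple as isosceles whenever two of the three pairwise distances coincide, which includes the degenerate collinear triples mentioned above:

```python
from itertools import combinations

def d2(p, q):
    """Squared Euclidean distance (exact integer arithmetic)."""
    return (p[0] - q[0])**2 + (p[1] - q[1])**2

def is_isosceles_free(points):
    """True iff no three distinct points have two equal pairwise
    distances; collinear 3-term-progression-like triples count."""
    for a, b, c in combinations(points, 3):
        if len({d2(a, b), d2(b, c), d2(a, c)}) < 3:
            return False
    return True

assert not is_isosceles_free([(0, 0), (1, 0), (0, 1)])  # right isosceles
assert not is_isosceles_free([(0, 0), (1, 0), (2, 0)])  # degenerate collinear
assert is_isosceles_free([(0, 0), (1, 0), (3, 1)])      # all distances distinct
```

Working with squared distances keeps the test exact, so a greedy or exhaustive search over small grids gives reliable lower-bound data for $|S|$.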
Continuous Question:
For $\beta > 0$, we say a triangle in Euclidean space is $\beta$-isosceles if some vertex is within distance $\beta$ of the line equidistant between the other two vertices.
Let $S$ be a finite subset of $[0,1]^d$ such that no three distinct points of $S$ form a $\beta$-isosceles triangle. What is the minimum possible value of the Hausdorff distance between $S$ and $[0,1]^d$? In other words, what is the largest $\alpha$ such that there must be a ball of radius $\alpha$ containing no points of $S$?
This is a question I was going to pose as a challenge in a paper I'm writing arising from a problem in data science. I realized it might be a good idea to check with people here as to whether anything is already known about it!
Of course, the 1-dimensional versions of these questions have to do with sets with no 3-term arithmetic progressions, which are the subject of a large literature. But I don't immediately see how to leverage those results to say anything about these problems, except to say that the answer to the discrete problem has to be $o(N^2)$. Is there a version of Behrend's construction that applies to the higher-dimensional case? |
I was studying the equation of motion for the probability density function of the position coordinates of the Brownian particles, also known as the Smoluchowski Equation (SE).
Particularly, I came across:
$$\frac{\partial}{\partial t} \rho (r,t) = D_o [\nabla^2 \rho (r,t) + \beta \nabla \rho (r,t) \int dr' [\nabla V (|r-r'|)] \rho (r',t) g(r,r',t)]$$
Which is the SE form for interacting particles.
Where:
1) $\rho (r,t)$ is the probability density function.
2) $D_o$ is the diffusion coefficient.
3) $$\nabla = \frac{\partial}{\partial x}$$
4) $$\beta = \frac{1}{k_\mathrm BT}$$
5) $V (|r-r'|)$ is the potential.
6) $g(r,r',t)$ is the pair correlation function.
I got curious and wanted to verify this equation using dimensional analysis.
We know that:
$$[D_o] = \frac{L^2}{T}$$
$$[\beta] = \frac{T^2}{ML^2}$$
$$[\rho] = \frac{1}{L T}$$
$$[\nabla] = \frac{1}{L}$$
$$[g] = \frac{1}{L^2T}$$
Note that dimensions of $\rho$ come from the fact that the integral of the probability density function over its entire support is 1. I used the same reasoning with $g$.
Based on this information I got on the left hand side:
$$\frac{1}{LT^2}$$
My struggle is triggered by the second term on the right hand side of the equation, as I got as the final result:
$$\frac{1}{LT^2} = \frac{1}{LT^2} + \frac{1}{L^6T^3}$$
I think I may be missing something related to the dimensions of the correlation function... |
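This kind of bookkeeping is easy to automate, which helps localize exactly where the mismatch enters. A sketch (our own convention: a dimension is an exponent tuple (L, T, M), and multiplying quantities adds exponents), using the same assumed units as above:

```python
def mul(*dims):
    """Dimension of a product: add the (L, T, M) exponents."""
    return tuple(sum(e) for e in zip(*dims))

D0   = (2, -1, 0)   # [D_0]    = L^2 / T     (as assumed above)
rho  = (-1, -1, 0)  # [rho]    = 1 / (L T)   (as assumed above)
grad = (-1, 0, 0)   # [nabla]  = 1 / L
ddt  = (0, -1, 0)   # [d/dt]   = 1 / T

lhs = mul(ddt, rho)                     # [d rho / d t]
first_term = mul(D0, grad, grad, rho)   # [D_0 nabla^2 rho]

assert lhs == first_term == (-1, -2, 0)  # both are 1 / (L T^2)
```

With the same helper one can multiply out the dimensions of every factor in the interaction term (including the integration measure $dr'$, whose dimension is easy to forget) and see term by term where the extra powers of L and T come from.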
The cosmonauts who landed at the pole found that the force of gravity there is \(0.01\) of that on the Earth, while the duration of the day on the planet is the same as that on Earth. It turned out that besides that the force of gravity on the equator is zero. Find the radius \(R\) of the planet.
Discussion:
For a body of mass \(m\) resting on the equator of a planet of radius \(R\), which rotates at an angular velocity \(\omega\), the equation of motion has the form $$ m\omega^2R=mg'-N$$ where \(N\) is the normal reaction of the planet surface, and \(g'=0.01g\) is the free-fall acceleration on the planet.
The bodies on the equator are assumed to be weightless, i.e. \(N=0\). We know \(\omega=\frac{2\pi}{T}\), where \(T\) is the period of revolution of the planet. Hence we obtain $$ R=\frac{T^2}{4\pi^2}g'$$ Substituting the value of \(T=8.6\times 10^4\,\mathrm{s}\) and \(g'=0.1\,\mathrm{m/s^2}\), we get $$ R=1.8\times 10^7\,\mathrm{m}$$
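Plugging in the numbers gives a quick check of the final formula:

```python
import math

T = 8.6e4        # planetary day in seconds (same as Earth's, as stated)
g_planet = 0.1   # 0.01 of Earth's g, in m/s^2

# R = g' T^2 / (4 pi^2), the radius at which equatorial gravity
# is exactly cancelled by the centrifugal term
R = g_planet * T**2 / (4 * math.pi**2)  # about 1.9e7 m
```

So the planet's radius comes out near \(1.8\times 10^7\) m, i.e. about 18,000 km, roughly three times Earth's radius.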
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ... |
Under the auspices of the Computational Complexity Foundation (CCF)
We study the approximability of constraint satisfaction problems (CSPs) by linear programming (LP) relaxations. We show that for every CSP, the approximation obtained by a basic LP relaxation, is no weaker than the approximation obtained using relaxations given by $\Omega\left(\frac{\log n}{\log \log n}\right)$ levels of the Sherali-Adams hierarchy on instances of size $n$.
It was proved by Chan et al. [FOCS 2013] that any polynomial size LP extended formulation is no stronger than relaxations obtained by a super-constant number of levels of the Sherali-Adams hierarchy. Combining this with our result also implies that any polynomial size LP extended formulation is no stronger than the basic LP.
Using our techniques, we also simplify and strengthen the result by Khot et al. [STOC 2014] on (strong) approximation resistance for LPs. They provided a necessary and sufficient condition under which $\Omega(\log \log n)$ levels of the Sherali-Adams hierarchy cannot achieve an approximation better than a random assignment. We simplify their proof and strengthen the bound to $\Omega\left(\frac{\log n}{\log \log n}\right)$ levels.
I am working on the problem
Consider the steady-state of the heat equation in a ball of radius a centred at the origin. In spherical coordinates, the ball occupies the region $0 \le r \le a$, $0 \le \theta \le \pi$ and $0 \le \phi < 2\pi$. It has a given temperature $g(\theta)$ imposed along its boundary, which is the sphere of radius $a$. Since the boundary condition is independent of $\phi$, we can assume that the temperature at the point $(r, \theta, \phi)$ in the ball is given as $u(r, \theta)$, which is given by the solution of the following boundary value problem,
$$\dfrac{1}{r^2} \dfrac{\partial}{\partial{r}} \left( r^2 \dfrac{\partial{u}}{\partial{r}} \right) + \dfrac{1}{r^2 \sin(\theta)} \dfrac{\partial}{\partial{\theta}} \left( \sin(\theta) \dfrac{\partial{u}}{\partial{\theta}} \right) = 0,$$
subject to boundary conditions
$u(a, \theta) = g(\theta)$ for $0 \le \theta \le \pi$.
(i) Show that the separation of variables $u(r, \theta) = R(r)S(\theta)$ leads to the equations
$$\dfrac{1}{\sin(\theta)} \dfrac{d}{d \theta} \left( \sin(\theta) \dfrac{dS}{d \theta} \right) + \lambda S = 0$$
and
$$(r^2 R')' - \lambda R = 0$$
(ii) Now let $\lambda = n(n + 1)$ for $n = 0,1,2,3, \dots$ and let $\mu = \cos(\theta)$; transform the ODE for $S(\theta)$ into the following Legendre's equation:
$$(1 - \mu^2) \dfrac{d^2{S}}{d{\mu}^2} - 2\mu \dfrac{dS}{d \mu} + n(n + 1)S = 0$$
(iii) Solve the differential equation for $R$ for each eigenvalue $\lambda_n = n(n + 1)$. (Hint: Try $R = Ar^m$.)
(iv) Given that the solutions of Legendre's equation are the Legendre polynomials $P_n(\mu) = P_n(\cos(\theta))$, write the general solution for $u(r, \theta)$ as an infinite series.
I'm stuck on (iv) and just don't understand how to do this. I don't have very much experience with Legendre polynomials, so this is probably why. My textbook also doesn't have any solutions, so I am totally stuck. I would be very thankful if someone could please take the time to explain what (iv) is asking and show how (iv) is done. Thank you very much for your help! |
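In case it helps later readers, the shape of the answer to (iv) — assuming the solution must stay bounded at $r = 0$, which discards the $r^{-(n+1)}$ solutions from (iii) — is the series

```latex
u(r,\theta) = \sum_{n=0}^{\infty} A_n \, r^{n} \, P_n(\cos\theta),
\qquad
A_n = \frac{2n+1}{2\,a^{n}} \int_{0}^{\pi} g(\theta)\, P_n(\cos\theta)\, \sin\theta \, d\theta,
```

where the coefficient formula comes from imposing $u(a,\theta) = g(\theta)$ and using the orthogonality relation $\int_{-1}^{1} P_n(\mu) P_m(\mu)\, d\mu = \frac{2}{2n+1}\delta_{nm}$.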
Eden, Talya; Ron, Dana; Seshadhri, C. — Sublinear Time Estimation of Degree Distribution Moments: The Degeneracy Connection. Abstract: We revisit the classic problem of estimating the degree distribution moments of an undirected graph. Consider an undirected graph $G=(V,E)$ with $n$ (non-isolated) vertices, and define (for $s > 0$) $\mu_s = \frac{1}{n} \sum_{v \in V} d^s_v$. Our aim is to estimate $\mu_s$ within a multiplicative error of $(1+\epsilon)$ (for a given approximation parameter $\epsilon>0$) in sublinear time. We consider the sparse graph model that allows access to: uniform random vertices, queries for the degree of any vertex, and queries for a neighbor of any vertex. For the case of $s=1$ (the average degree), $\widetilde{O}(\sqrt{n})$ queries suffice for any constant $\epsilon$ (Feige, SICOMP 06 and Goldreich-Ron, RSA 08). Gonen-Ron-Shavitt (SIDMA 11) extended this result to all integral $s > 0$, by designing an algorithm that performs $\widetilde{O}(n^{1-1/(s+1)})$ queries. (Strictly speaking, their algorithm approximates the number of star-subgraphs of a given size, but a slight modification gives an algorithm for moments.)
We design a new, significantly simpler algorithm for this problem. In the worst case, it exactly matches the bounds of Gonen-Ron-Shavitt, and has a much simpler proof. More importantly, the running time of this algorithm is connected to the degeneracy of $G$. This is (essentially) the maximum density of an induced subgraph. For the family of graphs with degeneracy at most $\alpha$, it has a query complexity of $\widetilde{O}\left(\frac{n^{1-1/s}}{\mu^{1/s}_s} \Big(\alpha^{1/s} + \min\{\alpha,\mu^{1/s}_s\}\Big)\right) = \widetilde{O}(n^{1-1/s}\alpha/\mu^{1/s}_s)$. Thus, for the class of bounded degeneracy graphs (which includes all minor closed families and preferential attachment graphs), we can estimate the average degree in $\widetilde{O}(1)$ queries, and can estimate the variance of the degree distribution in $\widetilde{O}(\sqrt{n})$ queries. This is a major improvement over the previous worst-case bounds. Our key insight is in designing an estimator for $\mu_s$ that has low variance when $G$ does not have large dense subgraphs. BibTeX - Entry
@InProceedings{eden_et_al:LIPIcs:2017:7374,
  author =    {Talya Eden and Dana Ron and C. Seshadhri},
  title =     {{Sublinear Time Estimation of Degree Distribution Moments: The Degeneracy Connection}},
  booktitle = {44th International Colloquium on Automata, Languages, and Programming (ICALP 2017)},
  pages =     {7:1--7:13},
  series =    {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =      {978-3-95977-041-5},
  ISSN =      {1868-8969},
  year =      {2017},
  volume =    {80},
  editor =    {Ioannis Chatzigiannakis and Piotr Indyk and Fabian Kuhn and Anca Muscholl},
  publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
  address =   {Dagstuhl, Germany},
  URL =       {http://drops.dagstuhl.de/opus/volltexte/2017/7374},
  URN =       {urn:nbn:de:0030-drops-73747},
  doi =       {10.4230/LIPIcs.ICALP.2017.7},
  annote =    {Keywords: Sublinear algorithms, Degree distribution, Graph moments}
}
Keywords: Sublinear algorithms, Degree distribution, Graph moments Seminar: 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017) Issue Date: 2017 Date of publication: 06.07.2017 |
Find the flux through a spherical Gaussian surface of radius \(a=1m\) surrounding a charge of \(8.85pC\).
Solution:
The value of \(\epsilon_0=8.85*10^{-12}\,\mathrm{C^2/(N\,m^2)}\).
By Gauss's law, the flux through the Gaussian surface is determined solely by the charge \(Q\) enclosed by the surface; the radius \(a\) drops out. Therefore, $$ \Phi=\frac{Q}{\epsilon_0}=\frac{8.85*10^{-12}\,\mathrm{C}}{8.85*10^{-12}\,\mathrm{C^2/(N\,m^2)}}=1\,\mathrm{N\,m^2/C}$$
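As a quick sanity check, the same arithmetic in code (a minimal sketch; the numbers are those of the problem, in SI units):

```python
EPSILON_0 = 8.85e-12  # vacuum permittivity, C^2/(N m^2)

def flux_through_closed_surface(q_enclosed):
    # Gauss's law: the flux depends only on the enclosed charge,
    # not on the radius of the Gaussian surface.
    return q_enclosed / EPSILON_0

phi = flux_through_closed_surface(8.85e-12)  # 8.85 pC enclosed
```

Doubling the radius to \(a = 2\,m\) would leave `phi` unchanged, which is the whole point of the exercise.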
I wanted to better understand DFAs. I wanted to build upon a previous question: Creating a DFA that only accepts number of a's that are multiples of 3. But I wanted to go a bit further. Is there any way we can have a DFA that accepts number of a's that are multiples of 3 but does NOT have the sub...
Let $X$ be a measurable space and $Y$ a topological space. I am trying to show that if $f_n : X \to Y$ is measurable for each $n$, and the pointwise limit of $\{f_n\}$ exists, then $f(x) = \lim_{n \to \infty} f_n(x)$ is a measurable function. Let $V$ be some open set in $Y$. I was able to show th...
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-ufd...
Consider a non-UFD that only has 2 units ($-1,1$) and where the min difference between 2 elements is $1$. Also there are only a finite number of elements of any given fixed norm. (Maybe that follows from the other 2 conditions?) I wonder about counting the irreducible elements bounded by a lower...
How would you make a regex for this? L = {w $\in$ {0, 1}* : w is 0-alternating}, where 0-alternating is either all the symbols in odd positions within w are 0's, or all the symbols in even positions within w are 0's, or both.
I want to construct an NFA from this, but I'm struggling with the regex part.
Welcome to Robotics, Mark!
IMUs are typically composed of two core components: accelerometers and gyroscopes. They may also have a magnetometer.
An accelerometer measures linear acceleration, and a gyroscope measures angular velocity.
In order to get linear and angular position estimates, those outputs need to be integrated. I'm not sure what your background is, but integration is just an accumulation of samples with respect to time.
If you travel 50 km/h for an hour, you have gone 50 km. If you then go 25 km/h for half an hour, you have gone an additional (25 * 0.5) = 12.5 km. Your total distance traveled is (50 + 12.5) = 62.5 km. Get the current speed, multiply by how long you've been going that speed, and add it to a running total. This is integration.
For the accelerometer, you multiply the current reading by the amount of time that has elapsed since the last reading and add that to a running total to get the current linear speed. You then integrate that speed the same way to get the current linear position.
In code form, it looks like:
sampleTime = GetElapsedTime();
linearAcceleration = GetAccelerometerOutput();
linearSpeed = linearSpeed + linearAcceleration*sampleTime;
linearPosition = linearPosition + linearSpeed*sampleTime;
A gyroscope outputs an angular speed, so you only need to integrate once to get to the angular position:
sampleTime = GetElapsedTime();
angularSpeed = GetGyroscopeOutput();
angularPosition = angularPosition + angularSpeed*sampleTime;
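The pseudocode above runs essentially as-is once real driver calls are substituted; here is a self-contained Python version of the same double integration, fed with made-up constant readings:

```python
def integrate(samples, dt):
    """Accumulate sensor readings over time (rectangular integration)."""
    total = 0.0
    for reading in samples:
        total += reading * dt
    return total

# Gyro: constant 10 deg/s for 2 s, sampled at 100 Hz -> about 20 degrees.
dt = 0.01
angle = integrate([10.0] * 200, dt)

# Accelerometer: integrate twice, acceleration -> speed -> position.
speed, speeds = 0.0, []
for a in [2.0] * 200:          # constant 2 m/s^2 for 2 s
    speed += a * dt
    speeds.append(speed)
position = integrate(speeds, dt)   # about 0.5 * 2 * 2^2 = 4 m
```

With noisy samples in place of the constants, the same loop accumulates the noise too, which is exactly the drift problem discussed next.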
The trouble here is that no sensors are perfect. The gyroscope and the accelerometer both have noise, biases, bias drift, etc. You integrate the sensor outputs, and integration is an accumulation of sorts, so the linear and angular position estimates "remember" all the noise.
The noise might be zero-mean, meaning your long-term average of that signal should be the true signal, but the integration of that noise is not.
Consider the following example: Have your speed be solely determined by the "white noise" of a coin flip. If the coin flip is heads, you are moving forward, if it is tails, you're moving backwards. On each sample, take a step in the appropriate direction.
If the total results of coin flips are even heads/tails, then your position should be where you started, right?
If you have flipped the coin one time, is it possible for you to be 10 steps away from the starting position? No, that's not possible. You take one step per flip, so you could only possibly be 1 step away from where you started, which should be your "true position."
If you have flipped the coin 10 times, is it possible for you to be 10 steps away from the starting position? Now it is possible. It's very unlikely, but it's possible.
If you have flipped the coin 100 times, now it's pretty reasonable to assume you could be 10 steps away from the "true" position.
For a gyroscope, the random distance traveled each "flip" (sample) is called the "random walk" or "angle random walk," and this should be a parameter given in the datasheet.
The angle random walk tells you how you can expect the standard deviation to change with respect to time. You can use the standard deviation with the empirical rule to estimate the possible error in your integrated angle.
The empirical rule says that 68 percent of measurements should fall within +/- one standard deviation, 95 percent should fall within +/- two standard deviations, and 99.7 percent should fall within +/- three standard deviations.
A high quality gyroscope should give the angle random walk in units of $\mbox{ARW} = \mbox{deg}/\sqrt{\mbox{hr}}$. This means your standard deviation after 1 hour should be $\mbox{ARW}*\sqrt{1}$, after two hours it would be $\mbox{ARW}*\sqrt{2}$, etc.
Lower quality gyroscopes give the same parameter in units of $\mbox{ARW} = \mbox{deg}/\sqrt{\mbox{s}}$, so you get to "dilate" your standard deviation on a much faster time scale.
For example, if your datasheet gave $\mbox{ARW} = 0.01 \mbox{deg}/\sqrt{s}$, then your standard deviation after 60 seconds is $0.01 * \sqrt{60} = 0.0775 \mbox{deg}$. By the empirical rule, your actual angle is probably (68 percent sure) within +/- (1*0.0775) = +/- 0.0775 deg and almost certainly (99.7 percent sure) within +/- (3*0.0775) = +/- 0.23 deg.
If you had a gyro with that rating and you ran it for six hours, as you asked in your question, the standard deviation would be $0.01 * \sqrt{6 \times 3600} = 1.47 \mbox{ deg}$. Your estimate could be as far as +/- (3*1.47) = +/- 4.4 deg away.
Some data sheets give the noise in terms of a noise power spectral density, sometimes with units of $\mbox{deg}/\mbox{s}/\sqrt{\mbox{Hz}}$. You can review IEEE Std 952-1997 for a more detailed description of the noise PSD, but section C.1.1 of that document gives a conversion of:
$$N\left(^{\circ}/\sqrt{\mbox{hr}}\right) = \frac{1}{60}\sqrt{\mbox{PSD}\left[\left(\frac{^{\circ}}{\mbox{hr}}\right)^2/\mbox{Hz}\right]} \\$$
Note the units there. It seems like most noise PSD are listed with units of $^{\circ}/\mbox{s}/\sqrt{hr}$, which would imply that they've already taken the square root of the proper noise PSD. That leaves the conversion to random walk as (1/60) of the listed noise PSD.
So, finally, for example, here is a listing of some three-axis accelerometer/gyro IMUs on Digikey. The first item you can buy in quantity (1) is the Bosch BMI160. The datasheet for that sensor gives the Output Noise for the 200 Hz data rate to be 0.07 $^{\circ}/\mbox{s rms}$.
In the first second, at 200 Hz, there are 200 samples. The standard deviation is $0.07\sqrt{200} = 1 \mbox{deg}$. After 60 seconds there are (60*200) = 1200 samples, and the standard deviation is $0.07\sqrt{1200} = 2.4 \mbox{deg}$.
This is the least expensive surface mount, in stock, 3 axis, exclusively accelerometer/gyro chip listed on Digikey's website (\$5.39 at the time of the writing). The most expensive is the Analog Devices ADIS16477-3BMLZ (\$610.95 at the time of writing). The datasheet for that sensor gives the random walk as 0.15 $^{\circ}/\sqrt{hr}$.
One minute is 0.0167 hours, so after one minute the gyro standard deviation is $0.15\sqrt{0.0167} = 0.0194 \mbox{deg}$. After two minutes, the standard deviation is $0.15\sqrt{2/60} = 0.0274 \mbox{deg}$.
For your six hour run, the cheapest IMU has a standard deviation of $0.07\sqrt{6*3600*200} = 145.5 \mbox{deg}$, and the most expensive has a standard deviation of $0.15\sqrt{6} = 0.367 \mbox{deg}$. Keep in mind that 95% of samples should fall within +/- 2 standard deviations.
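The arithmetic in the last few paragraphs is easy to script; a small sketch reproducing the quoted numbers (function names are mine):

```python
import math

def gyro_sigma_from_rms(noise_rms_deg_s, rate_hz, seconds):
    # Random-walk growth: the standard deviation scales with the
    # square root of the number of accumulated samples.
    return noise_rms_deg_s * math.sqrt(rate_hz * seconds)

def gyro_sigma_from_arw(arw_deg_per_root_hr, hours):
    # Angle random walk quoted in deg/sqrt(hr).
    return arw_deg_per_root_hr * math.sqrt(hours)

# Bosch BMI160: 0.07 deg/s rms at a 200 Hz data rate
sigma_1s = gyro_sigma_from_rms(0.07, 200, 1)         # ~1 deg after 1 s
sigma_6h = gyro_sigma_from_rms(0.07, 200, 6 * 3600)  # ~145.5 deg after 6 h

# ADIS16477-3: 0.15 deg/sqrt(hr) angle random walk
sigma_adis_6h = gyro_sigma_from_arw(0.15, 6)         # ~0.37 deg after 6 h
```

Multiply any of these by 2 or 3 to get the empirical-rule 95% / 99.7% bounds used above.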
And, since I started writing this post, benh has written an answer that says, in part,
An IMU will not drift as the accelerometers will always read against gravity.
This is true if you're using accelerometers, but it will only fix the horizontal axes (typically x/y, roll/pitch). Gravity acts along the vertical axis, so rotations about the vertical axis (heading/yaw) aren't able to use the gravity vector as a means of absolute orientation reference. |
We show that a directed graph $E$ is a finite graph with no sinks if and only if, for each commutative unital ring $R$, the Leavitt path algebra $L_{R}(E)$ is isomorphic to an algebraic Cuntz–Krieger algebra if and only if the $C^{\ast}$-algebra $C^{\ast}(E)$ is unital and $\text{rank}(K_{0}(C^{\ast}(E)))=\text{rank}(K_{1}(C^{\ast}(E)))$. Let $k$ be a field and $k^{\times}$ be the group of units of $k$. When $\text{rank}(k^{\times})<\infty$, we show that the Leavitt path algebra $L_{k}(E)$ is isomorphic to an algebraic Cuntz–Krieger algebra if and only if $L_{k}(E)$ is unital and $\text{rank}(K_{1}(L_{k}(E)))=(\text{rank}(k^{\times})+1)\text{rank}(K_{0}(L_{k}(E)))$. We also show that any unital $k$-algebra which is Morita equivalent or stably isomorphic to an algebraic Cuntz–Krieger algebra is isomorphic to an algebraic Cuntz–Krieger algebra. As a consequence, corners of algebraic Cuntz–Krieger algebras are algebraic Cuntz–Krieger algebras.
Let $A=\bigoplus_{i\in \mathbb{Z}}A_{i}$ be a finite-dimensional graded symmetric cellular algebra with a homogeneous symmetrizing trace of degree $d$. We prove that if $d\neq 0$ then $A_{-d}$ contains the Higman ideal $H(A)$ and $\dim H(A)\leq \dim A_{0}$, and provide a semisimplicity criterion for $A$ in terms of the centralizer of $A_{0}$.
The purpose of this note is to prove the following. Suppose $\mathfrak{R}$ is a semiprime ring with unity having an idempotent element $e$ ($e\neq 0$, $e\neq 1$) which satisfies mild conditions. It is shown that every additive generalized Jordan derivation on $\mathfrak{R}$ is a generalized derivation.
For a compact metric space (K, d), LipK denotes the Banach algebra of all complex-valued Lipschitz functions on (K, d). We show that the continuous Hochschild cohomology Hn(LipK, (LipK)*) and Hn(LipK, ℂe) are both infinite-dimensional vector spaces for each n ≥ 1 if the space K contains a certain infinite sequence which converges to a point e ∈ K. Here (LipK)* is the dual module of LipK and ℂe denotes the complex numbers with a LipK-bimodule structure defined by evaluations of LipK-functions at e. Examples of such metric spaces include all compact Riemannian manifolds, compact geodesic metric spaces and infinite compact subsets of ℝ. In particular, the (small) global homological dimension of LipK is infinite for every such space. Our proof uses the description of point derivations by Sherbert [‘The structure of ideals and point derivations in Banach algebras of Lipschitz functions’, Trans. Amer. Math. Soc.111 (1964), 240–272] and directly constructs non-trivial cocycles with the help of alternating cocycles of Johnson [‘Higher-dimensional weak amenability’, Studia Math.123 (1997), 117–134]. An alternating construction of cocycles on the basis of the idea of Kleshchev [‘Homological dimension of Banach algebras of smooth functions is equal to infinity’, Vest. Math. Mosk. Univ. Ser. 1. Mat. Mech.6 (1988), 57–60] is also discussed.
Nakayama automorphisms play an important role in the fields of noncommutative algebraic geometry and noncommutative invariant theory. However, their computations are not easy in general. We compute the Nakayama automorphism ν of an Ore extension R[x; σ, δ] over a polynomial algebra R in n variables for an arbitrary n. The formula of ν is obtained explicitly. When σ is not the identity map, the invariant EG is also investigated in terms of Zhang’s twist, where G is a cyclic group sharing the same order with σ.
Let R→U be an associative ring epimorphism such that U is a flat left R-module. Assume that the related Gabriel topology $\mathbb{G}$ of right ideals in R has a countable base. Then we show that the left R-module U has projective dimension at most 1. Furthermore, the abelian category of left contramodules over the completion of R at $\mathbb{G}$ fully faithfully embeds into the Geigle–Lenzing right perpendicular subcategory to U in the category of left R-modules, and every object of the latter abelian category is an extension of two objects of the former one. We discuss conditions under which the two abelian categories are equivalent. Given a right linear topology on an associative ring R, we consider the induced topology on every left R-module and, for a perfect Gabriel topology $\mathbb{G}$, compare the completion of a module with an appropriate Ext module. Finally, we characterize the U-strongly flat left R-modules by the two conditions of left positive-degree Ext-orthogonality to all left U-modules and all $\mathbb{G}$-separated $\mathbb{G}$-complete left R-modules.
In this article, we consider a twisted partial action $\alpha$ of a group $G$ on an associative ring $R$ and its associated partial crossed product $R\ast_{\alpha}^{w}G$. We provide necessary and sufficient conditions for the commutativity of $R\ast_{\alpha}^{w}G$ when the twisted partial action $\alpha$ is unital. Moreover, we study necessary and sufficient conditions for the simplicity of $R\ast_{\alpha}^{w}G$ in the following cases: (i) $G$ is abelian; (ii) $R$ is maximal commutative in $R\ast_{\alpha}^{w}G$; (iii) $C_{R\ast_{\alpha}^{w}G}(Z(R))$ is simple; (iv) $G$ is hypercentral. When $R=C_{0}(X)$ is the algebra of continuous functions defined on a locally compact and Hausdorff space $X$, with complex values that vanish at infinity, and $C_{0}(X)\ast_{\alpha}G$ is the associated partial skew group ring of a partial action $\alpha$ of a topological group $G$ on $C_{0}(X)$, we study the simplicity of $C_{0}(X)\ast_{\alpha}G$ by using topological properties of $X$ and the results about the simplicity of $R\ast_{\alpha}^{w}G$.
The Dixmier Conjecture says that every endomorphism of the (first) Weyl algebra $A_{1}$ (over a field of characteristic zero) is an automorphism, i.e., if $PQ-QP=1$ for some $P,Q\in A_{1}$, then $A_{1}=K\langle P,Q\rangle$. The Weyl algebra $A_{1}$ is a $\mathbb{Z}$-graded algebra. We prove that the Dixmier Conjecture holds if the elements $P$ and $Q$ are sums of no more than two homogeneous elements of $A_{1}$ (there is no restriction on the total degrees of $P$ and $Q$).
We introduce the class of partially invertible modules and show that it is an inverse category which we call the Picard inverse category. We use this category to generalize the classical construction of crossed products to, what we call, generalized epsilon-crossed products and show that these coincide with the class of epsilon-strongly groupoid-graded rings. We then use generalized epsilon-crossed groupoid products to obtain a generalization, from the group-graded situation to the groupoid-graded case, of the bijection from a certain second cohomology group, defined by the grading and the functor from the groupoid in question to the Picard inverse category, to the collection of equivalence classes of rings epsilon-strongly graded by the groupoid.
We determine sufficient criteria for the prime spectrum of an ambiskew polynomial algebra R over an algebraically closed field 𝕂 to be akin to those of two of the principal examples of such an algebra, namely the universal enveloping algebra U(sl2) (in characteristic 0) and its quantization Uq(sl2) (when q is not a root of unity). More precisely, we determine sufficient criteria for the prime spectrum of R to consist of 0, the ideals (z − λ)R for some central element z of R and all λ ∈ 𝕂, and, for some positive integer d and each positive integer m, d height two prime ideals P for which R/P has Goldie rank m.
We obtain a complete structural characterization of Cohn–Leavitt algebras over no-exit objects as graded involutive algebras. Corollaries of this result include graph-theoretic conditions characterizing when a Leavitt path algebra is a directed union of (graded) matricial algebras over the underlying field and over the algebra of Laurent polynomials and when the monoid of isomorphism classes of finitely generated projective modules is atomic and cancelative. We introduce the nonunital generalizations of graded analogs of noetherian and artinian rings, graded locally noetherian and graded locally artinian rings, and characterize graded locally noetherian and graded locally artinian Leavitt path algebras without any restriction on the cardinality of the graph. As a consequence, we relax the assumptions of the Abrams–Aranda–Perera–Siles characterization of locally noetherian and locally artinian Leavitt path algebras.
We use concepts of continuous higher randomness, developed in Bienvenu et al. [‘Continuous higher randomness’, J. Math. Log. 17(1) (2017)], to investigate $\Pi_{1}^{1}$-randomness. We discuss lowness for $\Pi_{1}^{1}$-randomness, cupping with $\Pi_{1}^{1}$-random sequences, and an analogue of the Hirschfeldt–Miller characterization of weak 2-randomness. We also consider analogous questions for Cohen forcing, concentrating on the class of $\Sigma_{1}^{1}$-generic reals.
Let R be a graded ring. We introduce the concepts of Ding gr-injective and Ding gr-projective R-modules, which are the graded analogues of Ding injective and Ding projective modules. Several characterizations and properties of Ding gr-injective and Ding gr-projective modules are obtained. In addition, we investigate the relationships among Gorenstein gr-flat, Ding gr-injective and Ding gr-projective modules.
Let $\Bbbk$ be a field of characteristic zero. For any positive integer $n$ and any scalar $a\in \Bbbk$, we construct a family of Artin–Schelter regular algebras $R(n,a)$, which are quantizations of Poisson structures on $\Bbbk[x_{0},\ldots,x_{n}]$. This generalizes an example given by Pym when $n=3$. For a particular choice of the parameter $a$ we obtain new examples of Calabi–Yau algebras when $n\geqslant 4$. We also study the ring theoretic properties of the algebras $R(n,a)$. We show that the point modules of $R(n,a)$ are parameterized by a bouquet of rational normal curves in $\mathbb{P}^{n}$, and that the prime spectrum of $R(n,a)$ is homeomorphic to the Poisson spectrum of its semiclassical limit. Moreover, we explicitly describe $\operatorname{Spec}R(n,a)$ as a union of commutative strata.
In characteristic two, some criteria are obtained for a symmetric square-central element of a totally decomposable algebra with orthogonal involution, to be contained in an invariant quaternion subalgebra.
Given a partial action $\theta$ of a group on a set with an algebraic structure, we construct a reflector of $\theta$ in the corresponding subcategory of global actions and study the question when this reflector is a globalization. In particular, if $\theta$ is a partial action on an algebra from a variety $\mathsf{V}$, then we show that the problem reduces to the embeddability of a certain generalized amalgam of $\mathsf{V}$-algebras associated with $\theta$. As an application, we describe globalizable partial actions on semigroups, whose domains are ideals.
Let ${\mathcal{A}}$ be a unital torsion-free algebra over a unital commutative ring ${\mathcal{R}}$. To characterise Lie $n$-higher derivations on ${\mathcal{A}}$, we give an identity which enables us to transfer problems related to Lie $n$-higher derivations into the same problems concerning Lie $n$-derivations. We prove that: (1) if every Lie $n$-derivation on ${\mathcal{A}}$ is standard, then so is every Lie $n$-higher derivation on ${\mathcal{A}}$; (2) if every linear mapping Lie $n$-derivable at several points is a Lie $n$-derivation, then so is every sequence $\{d_{m}\}$ of linear mappings Lie $n$-higher derivable at these points; (3) if every linear mapping Lie $n$-derivable at several points is a sum of a derivation and a linear mapping vanishing on all $(n-1)$th commutators of these points, then every sequence $\{d_{m}\}$ of linear mappings Lie $n$-higher derivable at these points is a sum of a higher derivation and a sequence of linear mappings vanishing on all $(n-1)$th commutators of these points. We also give several applications of these results.
Let $A_{2}$ be a free associative algebra or polynomial algebra of rank two over a field of characteristic zero. The main results of this paper are the classification of noninjective endomorphisms of $A_{2}$ and an algorithm to determine whether a given noninjective endomorphism of $A_{2}$ has a nontrivial fixed element for a polynomial algebra. The algorithm for a free associative algebra of rank two is valid whenever an element is given and the subalgebra generated by this element contains the image of the given noninjective endomorphism.
Given an undirected graph $G = (V, E)$ with $n$ vertices and $m$ edges, how many $k$-colorings of $G$ exist? A $k$-coloring is a function $c: V \to \{ 1, 2, \dots, k \}$ such that $c(u) \neq c(v)$ for all edges $\{ u, v \} \in E$. Let $f(G, k)$ denote the number of $k$-colorings of $G$.
There is a well-known formula, sometimes called the Fundamental Reduction Theorem:
$\displaystyle f(G, k) = f(G - e, k) - f(G\,/\, e, k)$ for all $e \in E$.
Note that $G - e$ is $G$ without edge $e$ and $G\,/\,e$ is $G$ with edge $e$ contracted.
Using this formula, it is easy to figure out a recursive algorithm. We just need to choose an arbitrary edge and recurse on $G - e$ and $G\,/\, e$. The base case is a graph without edges. If $T(n + m)$ denotes the time complexity, we have $T(n + m) \le T(n + m - 1) + T(n + m - 2)$, which is just the Fibonacci recurrence and establishes a time complexity of $\mathcal{O}(\phi^{n + m})$ where $\displaystyle \phi = \frac{1 + \sqrt{5}}{2}$ denotes the golden ratio. So far, so good.
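To make the recursion concrete, here is a minimal sketch over an edge-list representation (names are my own; note that the contraction step relabels and deduplicates edges, which already costs more than constant time per call):

```python
def chromatic(n, edges, k):
    """Number of proper k-colorings of a graph with n vertices,
    given as an edge list, via deletion-contraction."""
    if not edges:
        return k ** n          # no constraints: every vertex picks any color
    (u, v), rest = edges[0], edges[1:]
    # G - e: simply drop the chosen edge
    deleted = chromatic(n, rest, k)
    # G / e: merge v into u, discarding loops and duplicate edges
    seen = set()
    for a, b in rest:
        a, b = (u if a == v else a), (u if b == v else b)
        if a != b:
            seen.add((min(a, b), max(a, b)))
    contracted = chromatic(n - 1, sorted(seen), k)
    return deleted - contracted
```

For the triangle, `chromatic(3, [(0, 1), (1, 2), (0, 2)], k)` reproduces $k(k-1)(k-2)$.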
But we made an assumption: Can $G - e$ and $G\,/\,e$ really be obtained from $G$ in $\mathcal{O}(1)$ time? It seems like we cannot avoid a polynomial factor:
If $G$ is represented by an adjacency matrix, vertex removal (needed for contraction) does not shrink the matrix and we always need $\mathcal{O}(n)$ time to access the edges of a single vertex. And another problem: How to select an arbitrary edge fast?
If $G$ is represented by adjacency lists, edge removal is fast. But edge contraction requires moving/copying neighbour vertices to another adjacency list, which is expensive.
If $G$ is represented by an edge list, edge selecting is fast. Edge removal is also fast, because the edge is arbitrary and we can always take the first edge in the list. But again, edge contraction causes many edges to be updated (i.e. assigned another endpoint)...
Is there a way to really achieve $\mathcal{O}(\phi^{n + m})$ without a factor of $n$, $m$ or $n + m$ using an appropriate data structure or modification of the algorithm?
I am looking for a good approximation for the $W_0$ branch of the Lambert $W$ function. I am looking for values $0 < x < e$ only, so I expect something simpler than the general Taylor expansion. Thanks.
I don't know how simple you need it, and since you never said anything about how accurate you want your approximant to be (i.e., to how many correct decimal places should the approximant match the Lambert function?),
$$W_0(z)\approx\ln(1+z)\frac{1+\frac{123}{40}z+\frac{21}{10}z^2}{1+\frac{143}{40}z+\frac{713}{240}z^2}$$
should be good enough, which has a maximum error of around $1.6\times 10^{-4}$ for $z\in[0,e]$.
The rational portion here is a Padé approximant; probably one might do better with a minimax rational approximation, but I don't have the patience and inclination to derive it since your question's rather vague to begin with. |
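For reference, a direct transcription of this approximant (using `math.log1p` for $\ln(1+z)$) lets one check the error claim numerically:

```python
import math

def lambertw0_approx(z):
    """Pade-based approximation of the W0 branch on [0, e]:
    W0(z) ~ ln(1+z) * (1 + (123/40) z + (21/10) z^2)
                    / (1 + (143/40) z + (713/240) z^2),
    with quoted maximum error of about 1.6e-4 on that interval."""
    num = 1 + (123 / 40) * z + (21 / 10) * z ** 2
    den = 1 + (143 / 40) * z + (713 / 240) * z ** 2
    return math.log1p(z) * num / den
```

A quick sanity check is the defining identity $w e^w = z$: for $z \in [0, e]$ the residual stays well below $10^{-3}$.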
Quick question regarding the conditional distributions (SABR is just an example here)
Consider $$dS_t = \sigma_tS_t\,dW_t$$ $$d\sigma_t = \alpha\sigma_t\,dV_t $$ $$dW_t\,dV_t=\rho\, dt$$
Hence a SABR process with $\beta=1$. The volatility process is a GBM, so we can implement the exact solution and simulate $\sigma_{i+1}$ from $\sigma_i$. Now I don't know what mathematical terminology to use for $S_t$.
When $V_{i+1}$ and $V_{i}$ are known, we know the exact value of $\sigma_{i+1}$ from $\sigma_i$.
With $\sigma_t$ known, we can simulate $W_{i+1}-W_{i}$, and a proper way to compute $S_{i+1}$ from $S_i$ is as a GBM with volatility $\sigma_i$. This is how it is done in practice with these parameters.
My question: to what extent can we call $S_{i+1}$ exact? My own personal take: this is not exact at all, because we would have to know the whole path of $\sigma$ on $[t_i,t_{i+1}]$ to call it exact.
The reason for my confusion is that people call SABR with $\beta = 1$ log-normal for a given realized volatility.
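For concreteness, the scheme described above can be sketched in a few lines (the function name, parameters and use of Python's `random` module are my own; this is the "exact vol, frozen-vol asset" step under discussion, not an exact scheme for $S$):

```python
import math, random

def sabr_step(s, sigma, alpha, rho, dt, rng):
    """One step of the beta = 1 SABR scheme: sigma is advanced with its
    exact GBM solution, while S is advanced as a GBM with the volatility
    frozen at its value at the start of the step (the approximate part).
    The two driving normals are correlated with coefficient rho."""
    z1 = rng.gauss(0.0, 1.0)                                          # drives V
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)  # drives W
    sigma_next = sigma * math.exp(-0.5 * alpha ** 2 * dt
                                  + alpha * math.sqrt(dt) * z1)
    s_next = s * math.exp(-0.5 * sigma ** 2 * dt + sigma * math.sqrt(dt) * z2)
    return s_next, sigma_next
```

Iterating this over a partition of $[0,T]$ gives the discretization whose exactness the question asks about: the $\sigma$ leg is exact, the $S$ leg is not.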
The Challenge
Write a program or function that takes no input and outputs a vector of length \$1\$ in a
theoretically uniform random direction.
This is equivalent to a random point on the sphere described by $$x^2+y^2+z^2=1$$
resulting in a distribution like the one pictured in the original challenge's figure.
Output
Three floats from a theoretically uniform random distribution for which the equation \$x^2+y^2+z^2=1\$ holds true to precision limits.
Challenge remarks

The random distribution needs to be theoretically uniform. That is, if the pseudo-random number generator were to be replaced with a true RNG over the real numbers, it would result in a uniformly random distribution of points on the sphere.

Generating three random numbers from a uniform distribution and normalizing them is invalid: there will be a bias towards the corners of the three-dimensional space.

Similarly, generating two random numbers from a uniform distribution and using them as spherical coordinates is invalid: there will be a bias towards the poles of the sphere.

Proper uniformity can be achieved by algorithms including but not limited to:

- Generate three random numbers \$x\$, \$y\$ and \$z\$ from a normal (Gaussian) distribution around \$0\$ and normalize them.
- Generate three random numbers \$x\$, \$y\$ and \$z\$ from a uniform distribution in the range \$(-1,1)\$. Calculate the length of the vector by \$l=\sqrt{x^2+y^2+z^2}\$. Then, if \$l>1\$, reject the vector and generate a new set of numbers. Else, if \$l \leq 1\$, normalize the vector and return the result.
- Generate two random numbers \$i\$ and \$j\$ from a uniform distribution in the range \$(0,1)\$ and convert them to spherical coordinates like so: \begin{align}\theta &= 2 \times \pi \times i\\\\\phi &= \cos^{-1}(2\times j -1)\end{align} so that \$x\$, \$y\$ and \$z\$ can be calculated by \begin{align}x &= \cos(\theta) \times \sin(\phi)\\\\y &= \sin(\theta) \times \sin(\phi)\\\\z &= \cos(\phi)\end{align}

Provide in your answer a brief description of the algorithm that you are using.

Read more on sphere point picking on MathWorld.

Output examples
[ 0.72422852 -0.58643067 0.36275628]
[-0.79158628 -0.17595886 0.58517488]
[-0.16428481 -0.90804027 0.38532243]
[ 0.61238768 0.75123833 -0.24621596]
[-0.81111161 -0.46269121 0.35779156]
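A reference sketch of the first listed algorithm (Gaussian method) in Python, not itself a golfed answer:

```python
import math, random

def random_unit_vector(rng):
    """Uniform random point on the unit sphere: draw three i.i.d. standard
    normals and normalise.  This is uniform because the 3D standard normal
    density depends only on the radius, so the direction is
    rotation-invariant."""
    while True:
        x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        l = math.sqrt(x * x + y * y + z * z)
        if l > 1e-12:  # reject the probability-zero degenerate draw
            return (x / l, y / l, z / l)
```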
I can't wrap my head around notation in differential geometry especially the abundant versions of differentiation.
Peter Petersen: Riemannian Geometry defines a lot of notation to be equal but I don't really know when one tends to use which version and how to memorize the definitions and properties/identities.
Directional derivative or equivalently the action of a vector field $X$ on a function ($f:M\to\mathbb R$): $X\cdot f=D_Xf=df\cdot X\ $, which is also denoted as $L_Xf$
This is mostly clear except why the notation $D_Xf\ $ exists.
$grad(f)=\nabla f\ $, the gradient of $f:M\to\mathbb R$
Has $\nabla$ something to do with the Levi-Civita connection?
Lie derivative of vector fields: $L_XY:=[X,Y]= X\cdot Y - Y\cdot X\ $, where the action of one vector field on another is given by $X\cdot Y:=D_XY\ $, the directional derivative of $Y$ along an integral curve of the vector field $X$.
Also mostly clear.
The covariant derivative or Levi-Civita connection $\nabla_XY$
Here my understanding stops and my brain starts dripping out of my ears… Are there mnemonics or other ways to get into all those ways of thinking about differentiation on manifolds? And why do most books use coordinates? Are they necessary? I'd rather not use $X=\sum_ia^i\partial_i$ for vector fields, especially if the author (ab)uses the Einstein summation convention.
Title: Counting points on hyperelliptic curves defined over finite fields of large characteristic: algorithms and complexity.
Abstract: Counting points on algebraic curves has drawn a lot of attention due to its many applications from number theory and arithmetic geometry to cryptography and coding theory. In this talk, we focus on counting points on hyperelliptic curves over finite fields of large characteristic p. In this setting, the most suitable algorithms are currently those of Schoof and Pila, because their complexities are polynomial in log p. However, their dependency in the genus g of the curve is exponential, and this is already painful even in genus 3.
Our contributions mainly consist of establishing new complexity bounds with a smaller dependency in g of the exponent of log p. For hyperelliptic curves, previous work showed that it was quasi-quadratic, and we reduced it to a linear dependency. Restricting to more special families of hyperelliptic curves with explicit real multiplication (RM), we obtained a constant bound for this exponent.
In genus 3, we proposed an algorithm based on those of Schoof and Gaudry-Harley-Schost whose complexity is prohibitive in general, but turns out to be reasonable when the input curves have explicit RM. In this more favorable case, we were able to count points on a hyperelliptic curve defined over a 64-bit prime field.
In this talk, we will carefully reduce the problem of counting points to that of solving polynomial systems. More precisely, we will see how our results are obtained by considering either smaller or structured systems.
Contains joint work with P. Gaudry and P.-J. Spaenlehauer.
Title : Wave: A New Family of Trapdoor One-Way Preimage Sampleable Functions Based on Codes
Abstract : We present here a new family of trapdoor one-way functions that are Preimage Sampleable on Average (PSA) based on codes: the Wave-PSA family. Our trapdoor function is one-way under two computational assumptions: the hardness of generic decoding for high weights and the indistinguishability of generalized (U, U + V )-codes. Our proof follows the GPV strategy [GPV08]. By including rejection sampling, we ensure the proper distribution for the trapdoor inverse output. The domain sampling property of our family is ensured by using and proving a variant of the left-over hash lemma. We instantiate the new Wave-PSA family with ternary generalized (U, U + V )-codes to design a “hash-and-sign” signature scheme which achieves existential unforgeability under adaptive chosen message attacks (EUF-CMA) in the random oracle model. For 128 bits of classical security, signature sizes are in the order of 13 thousand bits, the public key size in the order of 3 megabytes, and the rejection rate is limited to one rejection every 100 signatures.
Abstract: We study the 5G-AKA authentication protocol described in the 5G mobile communication standards. This version of AKA tries to achieve better privacy than the 3G and 4G versions through the use of asymmetric randomized encryption. Nonetheless, we show that except for the IMSI-catcher attack, all known attacks against 5G-AKA privacy still apply.

Next, we modify the 5G-AKA protocol to prevent these attacks, while satisfying 5G-AKA efficiency constraints as much as possible. We then formally prove that our protocol is $\sigma$-unlinkable. This is a new security notion, which allows for a fine-grained quantification of a protocol's privacy. Our security proof is carried out in the Bana-Comon indistinguishability logic. We also prove mutual authentication as a secondary result.
Title: Invariant attacks: how to protect against them
Abstract: In 2011, Gregor Leander and his co-authors described a new type of attack on block ciphers, which exploits the existence of a vector space that is invariant under the components used in the cipher. These attacks were later generalized in 2015 and are called nonlinear invariant attacks.
Since then, these attacks have exposed new vulnerabilities in a large number of block ciphers, in particular SPN-type (Substitution-Permutation Network) block ciphers in which the round keys are equal to the master key added to an often arbitrarily chosen round constant.
In this presentation, we explain why these attacks are feasible on certain ciphers, and we derive from this a new design criterion for block ciphers. We will see how to choose the linear layer and the round constants so as to guarantee the absence of invariants.
This work comes at a pivotal moment, since it helps cipher designers and NIST is currently standardizing so-called lightweight ciphers. This work was published at CRYPTO 2017, in collaboration with Christof Beierle, Anne Canteaut and Gregor Leander: Proving Resistance against Invariant Attacks: How to Choose the Round Constants.
Title: On the complexity of modular composition of generic polynomials
Abstract: This talk is about algorithms for modular composition of univariate polynomials, and for computing minimal polynomials. For two univariate polynomials a and g over a commutative field, modular composition asks to compute h(a) mod g for some given h, while the minimal polynomial problem is to compute h of minimal degree such that h(a) = 0 mod g. For generic g and a, we propose algorithms whose complexity bound improves upon previous algorithms and in particular upon Brent and Kung’s approach (1978); the new complexity bound is subquadratic in the degree of g and a even when using cubic-time matrix multiplication. Our improvement comes from the fast computation of specific bases of bivariate ideals, and from efficient operations with these bases thanks to fast univariate polynomial matrix algorithms.
Contains joint work with Seung Gyu Hyun, Bruno Salvy, Eric Schost, Gilles Villard.
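For context, the classical baseline such algorithms improve on is just Horner's rule with polynomial arithmetic mod g, which costs deg(h) multiplications mod g and is therefore quadratic overall. The representation and helper below are my own illustration, not the talk's algorithm:

```python
def polymulmod(f, h, g, p):
    """Multiply polynomials f*h and reduce modulo a monic g; coefficients
    are taken mod the prime p and stored lowest-degree first."""
    prod = [0] * (len(f) + len(h) - 1)
    for i, fi in enumerate(f):
        for j, hj in enumerate(h):
            prod[i + j] = (prod[i + j] + fi * hj) % p
    d = len(g) - 1
    while len(prod) > d:                    # reduce while deg(prod) >= deg(g)
        c = prod[-1]
        shift = len(prod) - 1 - d
        for k in range(d + 1):
            prod[shift + k] = (prod[shift + k] - c * g[k]) % p
        prod.pop()                          # leading coefficient is now zero
    return prod

def compose_mod(h, a, g, p):
    """Naive modular composition h(a) mod g by Horner's rule:
    deg(h) multiplications mod g."""
    result = [0]
    for coeff in reversed(h):
        result = polymulmod(result, a, g, p)
        result[0] = (result[0] + coeff) % p
    return result
```

For example, with g = x^2 + 1 over F_7, composing h = x^2 + 1 with a = x gives the zero polynomial, since h(a) = g.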
Title : Jasmin: a workbench for high-assurance low-level programming.
Abstract : High-efficiency low-level programming — and in particular the implementation of cryptographic primitives — challenges formal verification methods for correctness and security of the implementations. To overcome this difficulty, the Jasmin language and compiler provide high-level abstractions and predictability of the generated assembly so as to write verifiable efficient low-level programs. The compiler is also formally verified in Coq so as to enable reasoning about functional correctness at the source level. Moreover we’ll present in this talk a proof methodology to enable reasoning about side-channel attack resistance at the source level: building on compiler correctness arguments, we show how to justify preservation, along the compilation process, of the constant-time property.
Title: Nearly linear time encodable codes beating the Gilbert-Varshamov bound.
Abstract: Error-correcting codes enable reliable transmission of information over an erroneous channel. One typically desires codes to transmit information at a high rate while still being able to correct a large fraction of errors. However, rate and relative distance (which quantifies the fraction of errors corrected) are competing quantities with a trade off. The Gilbert-Varshamov bound assures that for every rate R, relative distance D and alphabet size Q, there exists an infinite family of codes with R + H_Q(D) >= 1-\epsilon.

Constructing codes meeting or beating the Gilbert-Varshamov bound remained a long-standing open problem, until the advent of algebraic geometry codes by Goppa. In a seminal paper, for prime power squares Q ≥ 7^2, Tsfasman-Vladut-Zink constructed algebraic geometry codes beating the Gilbert-Varshamov bound. A rare occasion where an explicit construction yields better parameters than guaranteed by randomized arguments!

For codes to find use in practice, one often requires fast encoding and decoding algorithms in addition to satisfying a good trade off between rate and minimum distance. A natural question, which remains unresolved, is if there exist linear time encodable and decodable codes meeting or beating the Gilbert-Varshamov bound. In this talk, I shall present the first nearly linear time encodable codes beating the Gilbert-Varshamov bound, along with a nearly quadratic decoding algorithm. Applications to secret sharing, explicit construction of pseudorandom objects, Probabilistically Checkable Interactive Proofs and the like will also be discussed.
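The trade-off in the abstract is easy to evaluate numerically. The sketch below (my own illustration) computes the Q-ary entropy and compares the Gilbert-Varshamov rate with the Tsfasman-Vladut-Zink rate R = 1 - D - 1/(sqrt(Q) - 1), which is larger for Q = 49 at moderate relative distances:

```python
import math

def H_q(delta, q):
    """q-ary entropy function."""
    if delta == 0.0:
        return 0.0
    return (delta * math.log(q - 1, q)
            - delta * math.log(delta, q)
            - (1 - delta) * math.log(1 - delta, q))

def gv_rate(delta, q):
    """Rate guaranteed by the Gilbert-Varshamov bound: R = 1 - H_q(delta)."""
    return 1.0 - H_q(delta, q)

def tvz_rate(delta, q):
    """Rate achieved by the Tsfasman-Vladut-Zink codes for prime-power
    squares q >= 49: R = 1 - delta - 1/(sqrt(q) - 1)."""
    return 1.0 - delta - 1.0 / (math.sqrt(q) - 1.0)
```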
Title : An efficient geometric algorithm for computing Riemann-Roch spaces.
Abstract : The effective computation of bases of Riemann-Roch spaces arises in many practical areas, notably for arithmetic in Jacobians of curves and for the construction of algebraic-geometric error-correcting codes. We propose a probabilistic Las Vegas variant of the Brill-Noether algorithm described by Goppa for computing a basis of the Riemann-Roch space L(D) associated with a divisor D on a projective plane curve C of degree d defined over a sufficiently large perfect field K. We prove that its complexity (estimated as the number of arithmetic operations in the field K) is in O(max(d^(2\omega), deg(D_+)^\omega)), where \omega is the linear-algebra constant and D_+ is the smallest effective divisor greater than or equal to D. This probabilistic algorithm may fail, but under a few conditions we prove that its failure probability is bounded by O(max(d^4, deg(D_+)^2)/|E|), where E is a finite subset of K from which elements of K can be drawn uniformly at random. We implemented this algorithm in C++/NTL, and we will present experimental results that seem to indicate an improvement in computation times compared to the implementation in the Magma computer algebra system.
Abstract : Oblivious linear-function evaluation (OLE) is a secure two-party protocol allowing a receiver to learn any linear combination of a pair of field elements held by a sender. OLE serves as a common building block for secure computation of arithmetic circuits, analogously to the role of oblivious transfer (OT) for boolean circuits. A useful extension of OLE is vector OLE (VOLE), allowing the receiver to learn any linear combination of two vectors held by the sender. In several applications of OLE, one can replace a large number of instances of OLE by a smaller number of long instances of VOLE. This motivates the goal of amortizing the cost of generating long instances of VOLE.
We suggest a new approach for fast generation of pseudo-random instances of VOLE via a deterministic local expansion of a pair of short correlated seeds and no interaction. This provides the first example of compressing a non-trivial and cryptographically useful correlation with good concrete efficiency. Our VOLE generators can be used to enhance the efficiency of a host of cryptographic applications. These include secure arithmetic computation and noninteractive zero-knowledge proofs with reusable preprocessing.
Our VOLE generators are based on a novel combination of function secret sharing (FSS) for multi-point functions and linear codes in which decoding is intractable. Their security can be based on variants of the syndrome decoding assumption over large fields that resist known attacks. We provide several constructions that offer tradeoffs between different efficiency measures and the underlying intractability assumptions.
Toward the end of the talk, I will also discuss exciting recent developments of this work regarding the compression of more general (pseudo)random bilinear correlations.
Antonin Leroux, 13:30, student
Efficient Proactive Multi-Party Computation
Secure Multi-Party Computation (MPC) allows a set of n distrusting parties to compute functions on their private inputs while guaranteeing secrecy of the inputs and correctness of the computation. Most MPC protocols can achieve such security only against a minority of corrupted parties (i.e., assuming an honest majority > n/2). Based on cryptographic assumptions, security against dishonest majorities can be obtained, but requires more computation and communication. These levels of security are often not sufficient in real life, especially against threats that require long-term security against powerful persistent attackers (e.g., so-called Advanced Persistent Threats). In such cases, all the parties involved in the protocol may become corrupted at some point. Proactive MPC (PMPC) aims to address such mobile persistent threats; PMPC guarantees privacy and correctness against an adversary allowed to change the set of corrupted parties over time, but bounded by a threshold at any given instant. Until recently, PMPC protocols existed only against a dishonest minority. The first generic PMPC protocol against a dishonest majority was introduced in a recent work to be presented in September 2018; it is a feasibility result demonstrating that this can be achieved, but with high communication complexity: O(n^4).
This talk presents our most recent work, which develops an efficient generic PMPC protocol secure against a dishonest majority. We improve the overall communication complexity of generic PMPC from O(n^4) to O(n^2). Two necessary stepping stones for generic PMPC are Proactive Secret Sharing (PSS) and a secure distributed multiplication protocol. In this work we introduce a new PSS scheme requiring only O(n^2) communication. We also present a multiplication protocol against dishonest majorities in the proactive setting; this protocol introduces a new efficient way to perform multiplication against a dishonest majority without using pre-computation.
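As background for PSS (my own toy illustration over a prime field, not the paper's scheme): a (t, n) Shamir sharing hides the secret in the constant term of a random degree-t polynomial, and any t + 1 shares reconstruct it by Lagrange interpolation at 0.

```python
import random

P = 2 ** 61 - 1  # a Mersenne prime, used as the field size for this toy example

def share(secret, t, n, rng):
    """Split `secret` into n Shamir shares with threshold t
    (evaluations of a random degree-t polynomial at x = 1..n)."""
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t + 1 distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Proactivization then periodically refreshes the shares (re-randomising the polynomial while keeping the constant term) so that shares leaked in different epochs are useless together; doing this refresh and multiplication with O(n^2) communication is the contribution described above.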
Understand the problem
Let \( f\) : \( (0,1) \rightarrow \mathbb{R} \) be defined by
\( f(x) = \lim_{n\to\infty} \cos^n\left(\tfrac{1}{n^x}\right) \).
(a) Show that \(f\) has exactly one point of discontinuity.
(b) Evaluate \(f\) at its point of discontinuity.
Source of the problem
I.S.I. (Indian Statistical Institute, B.Stat, B.Math) Entrance. Subjective Problem 2 from 2019
Topic
Calculus
Difficulty Level
6 out of 10
Suggested Book
Calculus in one variable by I.A.Maron
Start with hints
Do you really need a hint? Try it first!
Try to determine the
Form of the Limit.
Show that the limit is of the form \(1^\infty\).
Hence, try to find the
Functional Form.
\( f(x) = \lim_{n\to\infty} \left(1 + \left(\cos\left(\tfrac{1}{n^x}\right) - 1\right)\right)^n = e^{\lim_{n\to\infty} n\left(\cos\left(\tfrac{1}{n^x}\right) - 1\right)} \).
\( \lim_{n\to\infty} n\left(\cos\left(\tfrac{1}{n^x}\right) - 1\right) = -\frac{1}{2}\lim_{n\to\infty}\frac{\sin^2\left(\tfrac{1}{2n^x}\right)}{\left(\tfrac{1}{2n^x}\right)^2}\, n^{1-2x} \)
Prove that
\[ f(x) = \left\{ \begin{array}{ll}
0 & 0 < x < \frac{1}{2} \\ \frac{1}{\sqrt{e}} & x = \frac{1}{2} \\ 1 & x > \frac{1}{2} \\ \end{array} \right. \]

Connected Program at Cheenta: I.S.I. & C.M.I. Entrance Program
Indian Statistical Institute and Chennai Mathematical Institute offer challenging bachelor’s program for gifted students. These courses are B.Stat and B.Math program in I.S.I., B.Sc. Math in C.M.I.
The entrances to these programs are far more challenging than usual engineering entrances. Cheenta offers an intense, problem-driven program for these two entrances. |
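Not a substitute for the proof, but the piecewise limit in the final hint is easy to sanity-check numerically by evaluating \( \cos^n(1/n^x) \) at a large finite \(n\):

```python
import math

def f_approx(x, n=10**6):
    """Approximate f(x) = lim_{n->oo} cos^n(1/n^x) at a large finite n."""
    return math.cos(n ** (-x)) ** n
```

For \(x = 0.3\) this is numerically 0, for \(x = 0.5\) it is close to \(1/\sqrt{e} \approx 0.6065\), and for \(x = 0.8\) it is close to 1, matching the three cases.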
Let AOB be a given angle less than \( 180^\circ \) and let P be an interior point of the angular region determined by \( \angle AOB \). Show, with proof, how to construct, using only ruler and compass, a line segment CD passing through P such that C lies on the ray OA and D lies on the ray OB and CP:PD = 1:2.

Show that the equation $$ a^3 + (a+1)^3 + (a+2)^3 + (a+3)^3 + (a+4)^3 + (a+5)^3 + (a+6)^3 = b^4 + (b+1)^4 $$ has no solutions in integers a, b.
Discussion by Writabrata Bhattacharya (Associate Faculty – Cheenta)
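For the Diophantine problem above (the sum of seven consecutive cubes), the usual opening move is a residue check modulo 7: the left side expands to \(7a^3 + 63a^2 + 273a + 441\), always divisible by 7, while \(b^4+(b+1)^4\) never is. The exhaustive check (my own sketch, not Cheenta's discussion):

```python
# All residues mod 7 of both sides; a and b only matter mod 7.
lhs = {sum((a + k) ** 3 for k in range(7)) % 7 for a in range(7)}
rhs = {(b ** 4 + (b + 1) ** 4) % 7 for b in range(7)}
# lhs is {0} while rhs avoids 0, so the equation has no integer solutions.
```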
Let \( P(x) = x^2 + \frac {1}{2} x + b \) and \( Q(x) = x^2 + cx + d \) be two polynomials with real coefficients such that P(x)Q(x) = Q(P(x)) for all real x. Find all real roots of P(Q(x)) = 0.
Discussion: https://www.cheenta.com/forums/topic/rmo-2017-p3-2/

Consider \( n^2 \) unit squares in the xy-plane centered at the points (i, j) with integer coordinates, \( 1 \le i \le n \), \( 1 \le j \le n \). It is required to colour each unit square in such a way that whenever \( 1 \le i < j \le n \) and \( 1 \le k < l \le n \), the three squares with centers at (i, k), (j, k), (j, l) have distinct colours. What is the least possible number of colours needed?

Let \( \Omega \) be a circle with a chord AB which is not a diameter. Let \( \Gamma_1 \) be a circle on one side of AB such that it is tangent to AB at C and internally tangent to \( \Omega \) at D. Likewise, let \( \Gamma_2 \) be a circle on the other side of AB such that it is tangent to AB at E and internally tangent to \( \Omega \) at F. Suppose the line DC intersects \( \Omega \) at \( X \neq D \) and the line FE intersects \( \Omega \) at \( Y \neq F \). Prove that XY is a diameter of \( \Omega \).
Discussion by Sauvik Mondal (Faculty – Cheenta)

Let x, y, z be real numbers, each greater than 1. Prove that $$ \frac {x+1}{y+1} + \frac {y+1}{z+1} + \frac {z+1}{x+1} \le \frac {x-1}{y-1} + \frac {y-1}{z-1} + \frac {z-1}{x-1} $$
Discussion by Writabrata Bhattacharya (Associate Faculty – Cheenta)
Let $\mathscr{H}$ be the relevant Hilbert space and $\Psi\in \mathscr{H}$. Let furthermore, $P : \mathscr{H}\to \mathscr{H}$ be a projection operator onto some subspace. It is a fact that $P$ defines the complementary projection $Q : \mathscr{H}\to \mathscr{H}$ given by $Q = \mathbf{1} - P$ where $\mathbf{1}$ is the Hilbert space identity transformation.
Notice that by definition a projection satisfies $P^2 = P$. It turns out that $Q$ is indeed a projection operator. We have the proof:
$$Q^2=(1-P)^2=1-2P+P^2=1-2P+P=1-P=Q,$$
where we've used that $P$ is a projection. Another fact is that $PQ\Psi =0$. This is true because we have
$$P(Q\Psi)=P(1-P)\Psi=P\Psi-P^2\Psi=P\Psi-P\Psi=0.$$
Similarly $QP\Psi = 0$ since $[P,Q]=0$.
But if $P$ and $Q$ are projections, they are projections onto where? Well, we can define
$$\mathscr{B}_P=P(\mathscr{H})$$
$$\mathscr{B}_Q=Q(\mathscr{H}).$$
These are the subspaces of $\mathscr{H}$ onto which the projectors project. They are not independent however, since these are complementary projections. We have the following result:
$$\mathscr{H}=\mathscr{B}_P\oplus \mathscr{B}_Q.$$
Indeed, by definition of $Q$ we have that $\mathbf{1} = P+Q$ and hence any $\Psi$ is
$$\Psi=P\Psi+Q\Psi.$$
Moreover, the decomposition is unique. Indeed, if $\Psi=\Psi_P+\Psi_Q$ with $\Psi_P\in \mathscr{B}_P$ and $\Psi_Q\in \mathscr{B}_Q$, then since $PQ\Psi=QP\Psi=0$, applying $P$ and $Q$ to the decomposition yields $\Psi_P = P\Psi$ and $\Psi_Q = Q\Psi$.
This indeed shows that the decomposition is a direct sum decomposition.
Why bother with all this? Well, if $A$ is an observable, corresponding to each eigenvalue there is an eigenspace. The eigenspace corresponding to $\lambda$ has a projector $P_\lambda$ associated to it. Then $Q_\lambda = 1-P_\lambda$ projects out of it.
Thus, all of this allow us to say that
a state $\Psi$ may have a "part of it" contained in the eigenspace of $A$ with eigenvalue $\lambda$ and this part is $P_\lambda \Psi$.
The properties of a projection are what allow us to say that $P_\lambda \Psi$ is its part inside that eigenspace.
What the postulate says is that immediately after the measurement the state is $$\frac{P_\lambda \Psi}{\sqrt{(P_\lambda \Psi, P_\lambda \Psi)}}$$
since this projection won't usually be normalized. In summary, what it says is: the system doesn't collapse to an arbitrary element of $P_\lambda \mathscr{H}$ but rather specifically to the part of $\Psi$ lying there, properly normalized, which is given by the above formula.
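All the identities used above (\(P^2=P\), \(PQ=0\), the direct-sum decomposition, and the renormalised post-measurement state) can be checked numerically for a concrete finite-dimensional projector. A small NumPy sketch, with the vector choices being arbitrary illustrations:

```python
import numpy as np

# Rank-1 projector onto span{e} in R^3, where e is a unit vector.
e = np.array([1.0, 2.0, 2.0]) / 3.0
P = np.outer(e, e)            # satisfies P @ P == P
Q = np.eye(3) - P             # complementary projector

psi = np.array([1.0, 0.0, 1.0])
psi_P, psi_Q = P @ psi, Q @ psi   # unique decomposition psi = psi_P + psi_Q

# post-measurement state: the part of psi in the subspace, renormalised
post = psi_P / np.sqrt(psi_P @ psi_P)
```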
Defining parameters
Level: \( N \) = \( 3600 = 2^{4} \cdot 3^{2} \cdot 5^{2} \)
Weight: \( k \) = \( 1 \)
Character orbit: \([\chi]\) = 3600.r (of order \(4\) and degree \(2\))
Character conductor: \(\operatorname{cond}(\chi)\) = \( 80 \)
Character field: \(\Q(i)\)
Newforms: \( 0 \)
Sturm bound: \(720\)
Trace bound: \(0\)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{1}(3600, [\chi])\).
| | Total | New | Old |
|---|---|---|---|
| Modular forms | 56 | 4 | 52 |
| Cusp forms | 8 | 0 | 8 |
| Eisenstein series | 48 | 4 | 44 |
The following table gives the dimensions of subspaces with specified projective image type.
| | \(D_n\) | \(A_4\) | \(S_4\) | \(A_5\) |
|---|---|---|---|---|
| Dimension | 0 | 0 | 0 | 0 |

Decomposition of \(S_{1}^{\mathrm{old}}(3600, [\chi])\) into lower level spaces
\( S_{1}^{\mathrm{old}}(3600, [\chi]) \cong \) \(S_{1}^{\mathrm{new}}(720, [\chi])\)\(^{\oplus 2}\) |