Answer
For company A and B we have $A: \$ 590,000; B: \$ 595,562$
Work Step by Step
Here, we have $a_1= 20,000$ and $b_1= 20,000$, with common difference $d=1000$ for company A and common ratio $r=1+0.04=1.04$ for company B.

For company A, using $a_n=a_1+(n-1)d$: $a_{20}=20,000+(20-1) \times (1000)=39,000$, so $S_{20}=\dfrac{20 \times (20,000+39,000)}{2}=590,000$.

For company B, the sum $T_n$ of the first $n$ terms of the sequence $b_n$ is $T_{20}=\dfrac{20,000(1-1.04^{20})}{1-1.04} \approx 595,562$.

Hence, for company A and B we have $A: \$ 590,000; B: \$ 595,562$
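As a sanity check, both series sums can be computed directly; a quick sketch (variable names are mine, not from the textbook):

```python
# Company A: arithmetic raises of $1000/yr; company B: 4% geometric raises.
a1, d, n = 20_000, 1_000, 20
b1, r = 20_000, 1.04

a_n = a1 + (n - 1) * d              # a_20 = 39,000
S_A = n * (a1 + a_n) // 2           # arithmetic series sum
S_B = b1 * (1 - r**n) / (1 - r)     # geometric series sum
# S_A = 590000, round(S_B) = 595562
```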
Bernard F. Schutz’s A First Course in General Relativity provides a nice introduction to the difficult subject in my opinion. In Chapter 6, he mentions that one should derive the Euler-Lagrange equations to minimise the spacetime interval of a particle’s trajectory, obtaining the geodesic equation:
$$ \frac{\mathrm{d}}{\mathrm{d}\lambda}\left(\frac{\mathrm{d}x^{\gamma}}{\mathrm{d}\lambda}\right) + \Gamma^{\gamma}_{\;\alpha\beta}\frac{\mathrm{d}x^{\alpha}}{\mathrm{d}\lambda}\frac{\mathrm{d}x^{\beta}}{\mathrm{d}\lambda} = 0 $$
Note: At first I derived it from variational principles, but the Euler-Lagrange equations provide a faster route by means of a neat trick.

Motivation: The main aim of this investigation is to find curves that parallel-transport tangent vectors. The following diagram should make this idea clearer:
Image Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Parallel_transport.png
If one starts from point A on the equator of a sphere and moves to the North pole, the tangent vector along the curve will look like the one shown in the figure, perpendicular to the equatorial line. Continuing this path from the North pole, if one wishes to reach a different point B on the equator, the vector field described will have undergone a 90-degree rotation, so the vector points along the equatorial line.
This can be easily visualised as walking to the North pole from A with your arm outstretched forward (representing the tangent vector), which is perpendicular to the equator. Once you reach the pole, you need to go to point B, so your body rotates, but your arm stays fixed because of parallel transport, so you're now walking with your arm outstretched to the left. Once you reach point B, you realise that your arm is along the equatorial line. Therefore the vector's direction isn't preserved around the loop, because of the curvature of the sphere.
Since this is a property that directly results from the intrinsic curvature of the sphere, one can deduce that there is no definition of globally parallel vector fields.
A geodesic can be thought of as a curve which parallel transports its own tangent vector. Another way of saying this is that it’s the curve of shortest distance between two points in a given space of any curvature, implying local parallel transport. In the case of a sphere, a geodesic lies on a great circle, defined as the circle on the surface of a sphere which lies in a plane passing through the sphere’s centre.
In relativity, the definition of distance (or interval between points) is a little different as compared to Euclidean geometry. Since we deal with space-time rather than space, the metric of space-time (called Minkowski space-time) must be used, as described below (in one convention):
$$ (\Delta s)^2 = -(c\Delta t)^2 + (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2 $$
There is a large historical development for this formulation with plenty of literature available. The foundation lies in Maxwell’s equations and Einstein’s postulates of special relativity. There is a more general formulation for calculating magnitudes over arbitrary metrics by using the dot product:
$$ \vec{U}\cdot\vec{V} = g_{\alpha\beta}U^{\alpha}V^{\beta} = g_{00}U^{0}V^{0} + g_{10}U^{1}V^{0} + g_{01}U^{0}V^{1} + …$$
with summation over the set of values taken by the repeated indices $\alpha$ and $\beta$, and where $ g_{\alpha\beta}(\vec x)$ is the metric tensor defined by the space under evaluation. In the case of special relativity, the metric tensor is represented as the matrix $ g_{\alpha\beta} = \mathrm{diag}(-1,1,1,1) $ in one convention, describing a flat space-time.
The length of a curve between two points is obtained by integrating the magnitude of its tangent vector. This functional, called the proper length, is what must be minimised, and it can be expressed as follows:
$$ s = \int_{\lambda_0}^{\lambda_1} \left|\vec{V}\cdot\vec{V}\right|^{\frac{1}{2}} \mathrm{d}\lambda = \int_{\lambda_0}^{\lambda_1} \left|g_{\alpha\beta}\frac{\mathrm{d}x^{\alpha}}{\mathrm{d}\lambda}\frac{\mathrm{d}x^{\beta}}{\mathrm{d}\lambda}\right|^{\frac{1}{2}} \mathrm{d}\lambda $$
Where $\lambda$ is a parameter for the curve, usually the proper time. This exercise essentially means that the magnitude of the four-velocity integrated over proper time should be a minimum. So the Lagrangian is:
$$ \mathcal{L} = \left|g_{\alpha\beta}\frac{\mathrm{d}x^{\alpha}}{\mathrm{d}\lambda}\frac{\mathrm{d}x^{\beta}}{\mathrm{d}\lambda}\right|^{\frac{1}{2}} $$
The neat trick here is taking $\mathcal{L}^2$ as the Lagrangian instead and substituting it into the Euler-Lagrange equations, which gives:
$$ g_{\alpha\beta,\mu}\dot{x}^{\alpha}\dot{x}^{\beta} - \frac{\mathrm{d}}{\mathrm{d}\lambda}\left[g_{\alpha\mu}\dot{x}^{\alpha} + g_{\mu\beta}\dot{x}^{\beta}\right] = 0 $$
Where $\dot{x} = \mathrm{d}x/\mathrm{d}\lambda$. Note that this equation just talks about extrema rather than maxima or minima explicitly. The second term is a total derivative, which results in:
$$ g_{\alpha\beta,\mu}\dot{x}^{\alpha}\dot{x}^{\beta} - \left[g_{\alpha\mu,\beta}\dot{x}^{\beta}\dot{x}^{\alpha} + g_{\mu\beta,\alpha}\dot{x}^{\alpha}\dot{x}^{\beta} + g_{\alpha\mu}\ddot{x}^{\alpha} + g_{\mu\beta}\ddot{x}^{\beta} \right] = 0$$
Changing the last term’s dummy index $\beta\rightarrow\alpha$ and multiplying by $g^{\mu\gamma}$:
$$ -2g^{\mu\gamma}g_{\alpha\mu}\ddot{x}^{\alpha} + g^{\mu\gamma}\left(g_{\alpha\beta,\mu} - g_{\mu\beta,\alpha} - g_{\mu\alpha,\beta}\right)\dot{x}^{\alpha}\dot{x}^{\beta} = 0 $$
Using the Christoffel symbol of the second kind with the following definition:
$$ \Gamma^{\gamma}_{\;\alpha\beta} = \frac{1}{2}g^{\mu\gamma}\left(g_{\mu\beta,\alpha} + g_{\mu\alpha,\beta} - g_{\alpha\beta,\mu}\right)$$
and using $g^{\mu\gamma}g_{\alpha\mu} = \delta^{\gamma}_{\;\alpha}$:
$$ -2\delta^{\gamma}_{\;\alpha}\ddot{x}^{\alpha} - 2\Gamma^{\gamma}_{\;\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta} = 0 $$
The expression simplifies to the geodesic equation:
$$ \ddot{x}^{\gamma} + \Gamma^{\gamma}_{\;\alpha\beta}\dot{x}^{\alpha}\dot{x}^{\beta} = 0 $$
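To see the geodesic equation in action, here is a small numerical sketch (my own illustration, not from Schutz). On the unit sphere the only nonzero Christoffel symbols are $\Gamma^{\theta}_{\;\phi\phi}=-\sin\theta\cos\theta$ and $\Gamma^{\phi}_{\;\theta\phi}=\Gamma^{\phi}_{\;\phi\theta}=\cot\theta$, and integrating the equation from a point on the equator with a purely equatorial velocity keeps the curve on the equator, a great circle, as expected:

```python
import numpy as np

def geodesic_rhs(state):
    """Right-hand side of the geodesic equation on the unit sphere."""
    theta, phi, dtheta, dphi = state
    ddtheta = np.sin(theta) * np.cos(theta) * dphi**2      # = -Gamma^th_{pp} dphi^2
    ddphi = -2.0 * (np.cos(theta) / np.sin(theta)) * dtheta * dphi
    return np.array([dtheta, dphi, ddtheta, ddphi])

def integrate(state, h=1e-3, steps=5000):
    # classic fourth-order Runge-Kutta
    for _ in range(steps):
        k1 = geodesic_rhs(state)
        k2 = geodesic_rhs(state + h/2 * k1)
        k3 = geodesic_rhs(state + h/2 * k2)
        k4 = geodesic_rhs(state + h * k3)
        state = state + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return state

# Start on the equator (theta = pi/2) moving purely in phi:
# the geodesic stays on the equator, so theta remains pi/2.
final = integrate(np.array([np.pi/2, 0.0, 0.0, 1.0]))
```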
Mathematics: A Novel by Jacques Roubaud has a lot in common with my degree. It seemed to go on forever without any particular point, was incomprehensible in large chunks, and ended with a nagging sensation that it ought to have been first class. Oh, and it was a cross between
It's a sunny Saturday, and two maths items have caught my eye. Firstly, a bit of a cock-up on the sums front from Andover Tesco (via ): Is Andover @uktesco onto the next best thing to gold? Blueberries! at £8000/kg @whichconvo @moneysavingexp E.Privett twitter.com/ReevesHall/sta… — Andrew Reeves-Hall (@ReevesHall) June 7,
The Silly Questions Amnesty is going on hiatus for the summer. You can still leave comments here if there's anything you'd like me to answer - depending on how the Mathematical Ninja's holidays pan out, SQA might be back in September. We'll see!
Ever since I've been teaching GCSE maths, I've struggled to explain one topic more than any other: how to find a centre of rotation. There are several ways students approach this problem. There's the "I dunno" way, sometimes disguised as "I dunno where to start." This is the kind of
"I stayed in ALL DAY waiting for that delivery of exam papers in Amsterdam," said the Mathematical Ninja. "I could have been out pillaging, but no, sodding Yodel told me the package was out for delivery and it was only when I called them up for the eighth time they
These are my barely-edited, initial thoughts on today's controversial EdExcel Core 3 paper. Nothing is meant as an attack on anyone - except, of course, for Mr Gove, who must be used to it by now. UPDATED June 14: The legendary Arsey at TSR has worked solutions for the paper
"Fancy a Bob Marley doughnut?" asked Constable Gale. I sighed. "And that's a doughnut…" "Wi' jam in." It's a delicate line to draw, with Gale's jokes: laugh too little, and you won't get a doughnut. Laugh too much and he'll tell you another one. "Speaking of jam," I said, "this
Here's a quick multiple-choice quiz about the tough stuff in C4 integration. Ready? Question 1: squared trig functions What method do you use to calculate $\int \sin^2(x) dx$? (Give me all four answers!) a) Parts ($u = \sin(x),~v'=\sin(x)$) b) Trig substitution ($u=\cos(2x)$) c) Split-angle formula ($\sin(A)\sin(B) = d) Parts ($u
In the book Skolnik, Introduction to Radar Systems, 2nd ed., p. 52 (Sec. Transmitter Power), the radar equation in its simplest form is initially cited:
$$R_{\max}=\left[\frac{P_t G A_e \sigma }{(4 \pi )^2 S_{\min}} \right]^{1/4}$$
where $P_t$ is the transmitted power (spatial integral of the Poynting vector generated by the transmitting antenna in the far field), $G$ is the gain of the transmitting antenna, $A_e$ is the equivalent area of the receiving antenna, $\sigma$ is the radar cross section of the target, and $S_{\min}$ is the minimum received power that makes it possible to detect that particular target located at the maximum distance $R_{\max}$.
Subsequently, the author says:
The average radar power $P_{av}$ is also of interest in radar and is defined as the average transmitter power over the pulse-repetition period. If the transmitted waveform is a train of rectangular pulses of width $\tau$ and pulse-repetition period $T_p=\frac{1}{f_p}$, the average power is related to the peak power by:
$$P_{av}=\frac{P_t\cdot \tau}{T_p}$$
and then immediately he rewrites the radar equation replacing $P_t$ with the expression $\frac{P_{av}\cdot T_p}{\tau}=\frac{E_t}{\tau}$ (he calls $E_t$ the transmitted energy).
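To be concrete, here is a quick numerical check (with made-up values; my code, not Skolnik's) that the substitution is at least self-consistent: for a rectangular pulse train it is pure bookkeeping, and both forms give the same $R_{\max}$:

```python
import math

P_t = 1e6           # peak transmitted power [W] (made-up value)
tau = 1e-6          # pulse width [s]
f_p = 1e3           # pulse repetition frequency [Hz]
T_p = 1.0 / f_p     # pulse repetition period [s]

P_av = P_t * tau / T_p   # average power = peak power * duty cycle
E_t = P_av * T_p         # energy per repetition period = P_t * tau

G, A_e, sigma, S_min = 1e3, 1.0, 1.0, 1e-13   # arbitrary illustrative values

R_peak = (P_t * G * A_e * sigma / ((4*math.pi)**2 * S_min)) ** 0.25
R_avg = ((E_t / tau) * G * A_e * sigma / ((4*math.pi)**2 * S_min)) ** 0.25
# R_peak == R_avg: the rewrite changes bookkeeping, not physics
```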
I can't understand this step: it seems that time and frequency have been mixed as if they were interchangeable. In fact, in my view, if we want to write the radar equation explicitly, even in its simple form like the one above, it becomes:
$$R_{\max}=\left[\frac{P_t(\omega) G(\theta, \phi,\omega) A_e(\theta, \phi,\omega) \sigma(\theta, \phi,\theta_b,\phi_b,\omega) }{(4 \pi )^2 S_{\min}(\theta, \phi,\theta_b,\phi_b,\omega)} \right]^{1/4}=R_{\max}(\theta, \phi,\theta_b,\phi_b,\omega)$$
where $\theta,\phi$ is the direction in which the target is, $\theta_b,\phi_b$ is the orientation of the target with respect to the antennas, $\omega$ is the angular frequency of the EM wave.
Following what was written by Skolnik I would therefore have that:
$$P_{av}(\omega):=\frac{P_t(\omega)\cdot \tau}{T_p}$$
which to me does not represent any kind of 'average value'. The only way I know of linking the temporal average of the instantaneous power with the complex power is as follows:
$$<\mathcal{P}(t)>=\int_{\omega=-\infty}^{\infty}P(\omega)\mathrm{d}\omega$$
Does anyone know how to justify what the author said?
You're missing two things. First, that the decay constant is the probability of decay per unit time. That part is important. The actual decay probability over a short time period is equal to the probability per unit time, multiplied by the time period:
$$P = \lambda\Delta t$$
$\lambda$ can be as large as you like, but for a small enough interval $\Delta t$, you'll still have $P < 1$. So there's no contradiction there.
The other thing you're missing is that $\lambda$ is only the probability per unit time given that the nucleus has not already decayed. That's also important. You have to start with an undecayed nucleus.
So let's say you have an undecayed nucleus at $t = 0$.
\begin{align}P_0(\text{decayed}) &= 0 &P_0(\text{undecayed}) &= 1\end{align}
After some short time $\Delta t$, the probability that it will have decayed is $\lambda\Delta t$, as above.
\begin{align}P_1(\text{decayed}) &= \lambda\Delta t &P_1(\text{undecayed}) &= 1 - \lambda\Delta t\end{align}
Now consider the next time interval, from $t = \Delta t$ to $t = 2\Delta t$. If the nucleus didn't decay in the first time interval, it has a probability $\lambda\Delta t$ of decaying in this second interval. But if the nucleus did decay in the first time interval, the probability that it will have decayed by the end of the second time interval is 1. So overall, the probability that it has decayed by $t = 2\Delta t$ is
\begin{align}P_2(\text{decayed})&= P_1(\text{undecayed})\lambda\Delta t + P_1(\text{decayed})(1) \\&= (1 - \lambda\Delta t)\lambda\Delta t + \lambda\Delta t \\&= (2 - \lambda\Delta t)\lambda\Delta t \\P_2(\text{undecayed})&= P_1(\text{undecayed})(1 - \lambda\Delta t) \\&= (1 - \lambda\Delta t)^2\end{align}
You can probably see the pattern from here:
\begin{align}P_3(\text{decayed})&= P_2(\text{undecayed})\lambda\Delta t + P_2(\text{decayed})(1) \\&= (1 - \lambda\Delta t)^2\lambda\Delta t + (2 - \lambda\Delta t)\lambda\Delta t \\&= \bigl(3 - 3\lambda\Delta t + (\lambda\Delta t)^2\bigr)\lambda\Delta t \\P_3(\text{undecayed})&= P_2(\text{undecayed})(1 - \lambda\Delta t) \\&= (1 - \lambda\Delta t)^3\end{align}
In particular, at $t = n\Delta t$,
$$P_n(\text{undecayed}) = (1 - \lambda\Delta t)^n$$
Now, in the limit where $\Delta t$ is short, and $n$ is large, as it must be if $T = n\Delta t$ is going to be a normal-scale time interval, you may recognize this as an exponential:
$$\lim_{n\to\infty}P_n(\text{undecayed}) = \lim_{n\to\infty}(1 - \lambda\Delta t)^n = \lim_{n\to\infty}\biggl(1 - \frac{\lambda T}{n}\biggr)^n = e^{-\lambda T}$$
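The limiting step is easy to check numerically; a quick sketch (the values of $\lambda$ and $T$ below are arbitrary):

```python
import math

lam, T = 0.3, 5.0   # decay constant [1/s] and total time [s], made-up values

def undecayed(n):
    """Survival probability after n short steps of length T/n."""
    dt = T / n
    return (1.0 - lam * dt) ** n

exact = math.exp(-lam * T)
# the discrete product converges to the exponential law as n grows
approx_coarse = undecayed(10)
approx_fine = undecayed(1_000_000)
```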
So the equation for exponential decay emerges naturally from the fact that the decay constant is the decay probability per unit time for an undecayed nucleus. (Or of course the same argument applies to any other system that undergoes exponential decay, not just nuclei.)
On the Lucas polynomials and some of their new identities

Advances in Difference Equations, volume 2018, Article number: 126 (2018)
Abstract
The main purpose of this paper is, using the elementary and combination methods, to study the arithmetical properties of the Lucas polynomials and to obtain some new and interesting identities for them.
Introduction
For any non-negative integer \(n\), the Fibonacci polynomials \(\{F_{n}(x)\}\) and Lucas polynomials \(\{L_{n}(x)\}\) are defined by the second order linear recursive formulas \(F_{n+2}(x)=xF_{n+1}(x)+F_{n}(x)\) and \(L_{n+2}(x)=xL_{n+1}(x)+L_{n}(x)\) with \(F_{0}(x)=0\), \(F_{1}(x)=1\), \(L_{0}(x)=2\), and \(L_{1}(x)=x\). The general terms of \(F_{n}(x)\) and \(L_{n}(x)\) are given by

$$F_{n}(x)=\sum_{m=0}^{ [\frac{n-1}{2} ]}\binom{n-1-m}{m}x^{n-1-2m},\qquad n\geq 1,$$

and

$$L_{n}(x)=\sum_{m=0}^{ [\frac{n}{2} ]}\frac{n}{n-m}\binom{n-m}{m}x^{n-2m},\qquad n\geq 1,$$

where \(\binom{m}{n}=\frac{m!}{n!(m-n)!}\), and \([x]\) denotes the greatest integer \(\leq x\).
It is easy to prove the identities

$$F_{n}(x)=\frac{1}{\sqrt{x^{2}+4}} \left[ \left(\frac{x+\sqrt{x^{2}+4}}{2} \right)^{n}- \left(\frac{x-\sqrt{x^{2}+4}}{2} \right)^{n} \right] \tag{1}$$

and

$$L_{n}(x)= \left(\frac{x+\sqrt{x^{2}+4}}{2} \right)^{n}+ \left(\frac{x-\sqrt{x^{2}+4}}{2} \right)^{n}. \tag{2}$$

If \(x=1\), then \(\{F_{n}(x)\}\) becomes the famous Fibonacci sequence \(\{F_{n}\}\) and \(\{L_{n}(x)\}\) becomes the Lucas sequence \(\{L_{n}\}\).
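The recurrences are easy to experiment with numerically. The following sketch (my own illustration, not from the paper) builds \(F_{n}(x)\) and \(L_{n}(x)\) as coefficient lists and recovers the Fibonacci and Lucas numbers at \(x=1\):

```python
def poly_chain(p0, p1, n):
    """Return [p_0, ..., p_n] where p_{k+2}(x) = x*p_{k+1}(x) + p_k(x).

    Polynomials are coefficient lists: index = power of x."""
    seq = [p0, p1]
    for _ in range(n - 1):
        a, b = seq[-2], seq[-1]
        shifted = [0] + b                    # multiply b by x
        m = max(len(shifted), len(a))
        seq.append([(shifted[i] if i < len(shifted) else 0) +
                    (a[i] if i < len(a) else 0) for i in range(m)])
    return seq

F = poly_chain([0], [1], 10)      # F_0(x)=0, F_1(x)=1
L = poly_chain([2], [0, 1], 10)   # L_0(x)=2, L_1(x)=x

fib_at_1 = [sum(p) for p in F]    # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
luc_at_1 = [sum(p) for p in L]    # [2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123]
```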
These sequences and polynomials occupy very important positions in the theory and application of mathematics, so many scholars have studied their various arithmetical properties and obtained a series of important results. For example, Ozeki [1] proved the identity
Prodinger [2] studied the more general summation \(\sum_{k=0}^{n}F_{2k+ \delta }^{2m+1+\epsilon }\), where \(\delta , \epsilon \in \{0, 1\}\), and obtained many interesting results.
Ma and Zhang [3] used the properties of Chebyshev polynomials to obtain some identities involving Fibonacci numbers and Lucas numbers. Wang and Zhang [4] proved some divisibility properties involving Fibonacci numbers and Lucas numbers. Some other related papers can be found in references [5–15]; here we are not going to list them all.
In this paper, we shall use the elementary and combination methods to study the arithmetical properties of Lucas polynomials, and give some new identities for them. That is, we shall prove the following results.
Theorem 1. For any positive integer \(h\) and integer \(k \geq 0\), we have

Theorem 2. For any integers \(h\) and \(k\geq 0\), we have

Theorem 3. For any integers \(n\geq 1\) and \(h\geq 0\), we have the identity

Theorem 4. For any integers \(n\geq 1\) and \(h\geq 0\), we have the identity

Corollary 1. For any positive integer \(h\), we have the identities
If \(x=1\) and \(k=0\), then we also have the following:
Corollary 2. For any positive integer \(h\), we have the identities

Corollary 3. For any integers \(n\geq 1\) and \(h\geq 0\), we have

Corollary 4. For any integers \(n\geq 1\) and \(h\geq 0\), we have

Several simple lemmas

Lemma 1. For any positive integer \(n\), we have the identities

Proof
Note the identities \(( x+\sqrt{x^{2}+4} ) '= 1+\frac{x}{\sqrt{x ^{2}+4}}=\frac{x+\sqrt{x^{2}+4}}{\sqrt{x^{2}+4}}\) and \(( x-\sqrt{x ^{2}+4} ) '= 1-\frac{x}{\sqrt{x^{2}+4}}=-\frac{x- \sqrt{x^{2}+4}}{\sqrt{x^{2}+4}}\). From the definitions of the polynomials \(F_{n}(x)\) and \(L_{n}(x)\), we have
Applying (3), integration by parts, and the recursive formulae of \(L_{n}(x)\) and \(F_{n}(x)\), we have
If \(n=2k\), then note that \(L_{2k+1}(0)=L_{2k-1}(0)=0\). From (4) we have
Lemma 2. For any positive integer \(n\) and non-negative integer \(k\), we have the identity

Proof
Let \(\alpha =\frac{x+\sqrt{x^{2}+4}}{2}\) and \(\beta = \frac{x-\sqrt{x^{2}+4}}{2}\). Then, replacing \(x\) by \(L_{2k+1}(x)\) in (2) and noting that \(\alpha^{2k+1}\beta^{2k+1}=-1\), we have
and
From (2) we have the identity
This proves Lemma 2. □
Lemma 3. For any non-negative integer \(n\), we have the identities

and

Proof
From the definition of \(L_{n}(x)\) we know that \(L_{2k}(x)\) is an even function. So we may suppose that
Taking \(x=2i\cos \theta \) in (7) and noting that \(x^{2}+4=4-4\cos^{2} \theta =4\sin^{2}\theta \), from Euler’s formula we have
Note the identities
and
or
Similarly, since \(L_{2k+1}(x)\) is an odd function, we can suppose that
Taking \(x=2i\cos \theta \) in (12) and noting that
we have
Proofs of the theorems
Using the lemmas in Sect. 2, we can prove our theorems easily. First we prove Theorem 2. Similarly, we can also deduce Theorem 1, and so we omit its proof here. From Lemma 1 and the definition of \(L_{n}(x)\), we have
or
This proves Theorem 3.
Similarly, we can also deduce Theorem 4.
References

1. Ozeki, K.: On Melham’s sum. Fibonacci Q. 46/47, 107–110 (2008/2009)
2. Prodinger, H.: On a sum of Melham and its variants. Fibonacci Q. 46/47, 207–215 (2008/2009)
3. Ma, R., Zhang, W.: Several identities involving the Fibonacci numbers and Lucas numbers. Fibonacci Q. 45, 164–170 (2007)
4. Wang, T., Zhang, W.: Some identities involving Fibonacci, Lucas polynomials and their applications. Bull. Math. Soc. Sci. Math. Roum. 55, 95–103 (2012)
5. Melham, R.S.: Some conjectures concerning sums of odd powers of Fibonacci and Lucas numbers. Fibonacci Q. 46/47, 312–315 (2008/2009)
6. Li, X.: Some identities involving Chebyshev polynomials. Math. Probl. Eng. 2015, Article ID 950695 (2015)
7. Ma, Y., Lv, X.: Several identities involving the reciprocal sums of Chebyshev polynomials. Math. Probl. Eng. 2017, Article ID 4194579 (2017)
8. Kim, D.S., Dolgy, D.V., Kim, T., Rim, S.H.: Identities involving Bernoulli and Euler polynomials arising from Chebyshev polynomials. Proc. Jangjeon Math. Soc. 15, 361–370 (2012)
9. Kim, D.S., Kim, T., Lee, S.: Some identities for Bernoulli polynomials involving Chebyshev polynomials. J. Comput. Anal. Appl. 16, 172–180 (2014)
10. Kim, T., Kim, D.S., Seo, J.J., Dolgy, D.V.: Some identities of Chebyshev polynomials arising from non-linear differential equations. J. Comput. Anal. Appl. 23, 820–832 (2017)
11. He, Y.: Some new results on products of Apostol–Bernoulli and Apostol–Euler polynomials. J. Math. Anal. Appl. 431, 34–46 (2015)
12. He, Y., Wang, C.P.: New symmetric identities involving the Eulerian polynomials. J. Comput. Anal. Appl. 17, 498–504 (2014)
13. He, Y., Kim, D.S.: General convolution identities for Apostol–Bernoulli, Euler and Genocchi polynomials. J. Nonlinear Sci. Appl. 9, 4780–4797 (2016)
14. Chen, L., Zhang, W.: Chebyshev polynomials and their some interesting applications. Adv. Differ. Equ. 2017, 303 (2017)
15. Yi, Y., Zhang, W.: Some identities involving the Fibonacci polynomials. Fibonacci Q. 40, 314–318 (2002)

Acknowledgements
The author would like to thank the referees for their very helpful and detailed comments, which have significantly improved the presentation of this paper. This work is supported by the N.S.F. (Grant No. 11771351) of P.R. China.
Ethics declarations Competing interests
The author declares that she has no competing interests.
A query is any mapping $I:STRUC[\sigma] \to STRUC[\tau]$ that is polynomially bounded. A boolean query is a map $I_b: STRUC[\sigma] \to \{0,1\}$. A boolean query can also be thought of as the subset: $...
I have an alphabet $E=\{x,y,z\}$, and I want to create a finite state automaton that accepts strings consisting of at least two x's, followed by at most three y's, followed by any number of z's. And I want ...
Hello, I have a slightly unusual question which relates to the definition of a filtration structure. The following is the current state of my definition: $ \mathcal{M} = (W, R, L) $, where W is a set of worlds,...
A language $L$ is regular if and only if it is definable by a sentence in monadic second order logic (MSO) over strings (J.R. Büchi, Weak second-order arithmetic and finite automata; Z. Math. Logik ...
In the proof of Trakhtenbrot's theorem (as given in "Elements of Finite Model Theory" by Leonid Libkin), for every Turing machine $M$ the author constructs an FO sentence $\Phi_M$ of vocabulary $\sigma$ ...
Suppose $\sigma$ is a vocabulary of First Order logic consisting of one binary relation $E$ and let $\phi$ be a $\sigma$ sentence (FO formula with no free variables). Is it decidable whether there is ...
According to Immerman, the complexity class associated with SQL queries is exactly the class of safe queries in $\mathsf{Q(FO(COUNT))}$ (first-order queries plus counting operator): SQL captures safe ...
There have been already several questions asking for an introduction to quantum mechanics for a mathematician, but this one is slightly different, and more restrictive.
I know (some) quantum mechanics, but I'd like to find a reference which explains, in a way as clear and systematic as possible, how we pass from a classical system (in the Hamiltonian formulation, with a phase space $X$ and a Hamiltonian function $H$ on it) to the corresponding quantum system, with a Hilbert space $V$ and a Hamiltonian operator $\hat H$ on it.
If the reference is precise and rigorous mathematically, that's a plus (ideally it would even define a functor $(X,H) \mapsto (V,\hat{H})$ of the adequate categories); if the reference gives a lot of physical intuition, that's also a plus.
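For what it's worth, the recipe usually meant by "quantization" in such examples is canonical quantization, which is a heuristic rather than a functor (by the Groenewold–van Hove no-go theorem, no quantization map has all the naively desired properties). A sketch in my own notation, not taken from any particular reference:

```latex
% Canonical quantization heuristic for X = T^*\mathbb{R}^n, V = L^2(\mathbb{R}^n):
\hat{x}^{j}\,\psi(x) = x^{j}\psi(x), \qquad
\hat{p}_{j}\,\psi(x) = -i\hbar\,\frac{\partial\psi}{\partial x^{j}}, \qquad
[\hat{x}^{j},\hat{p}_{k}] = i\hbar\,\delta^{j}_{k},
% so for H(x,p) = |p|^2/2m + V(x) (no ordering ambiguity in this case):
\hat{H} = -\frac{\hbar^{2}}{2m}\,\Delta + V(x).
% For a billiard (free motion inside B, infinitely hard walls), this degenerates
% to \hat{H} = -\frac{\hbar^{2}}{2m}\Delta on L^{2}(B) with Dirichlet boundary
% conditions -- the "functions vanishing on the boundary" space.
```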
I ask this question because I am trying to understand Quantum Unique Ergodicity, in particular the classical example of the billiard. In this example $B$ is a closed region of the plane with a smooth boundary, and $X=B \times S^{1}$, the second factor corresponding to the velocity vector. The Hamiltonian in the interior of $X$ corresponds to free motion, but it has to be defined somehow on the boundary so that it corresponds to the ball reflecting off the boundary in the standard way (I am not sure how exactly). Then I am told that the quantum version of this system is a $V$ which is the space of functions on $B$ which vanish on the boundary, and I'd like to understand why, and $\hat H$ is the Laplacian (that I more or less understand). If anyone has an explanation for that example, that would be great.
EDIT: Thanks to all for your five answers. Each of them taught me something valuable, and collectively they taught me I knew much less about Quantum Mechanics than I thought.
SECOND EDIT: Since answers keep arriving, let me add something: When I said I was "told" that the quantization of a billiard $B$ is the space of functions on $B$ vanishing at the boundary, it is true, but I also read it in A. Hassell, "What is quantum unique ergodicity?", page 161. Now I realize that my question was too vast and too difficult (for me to understand the answers fully).
I'd like to make it more precise by asking: when people working in quantum theory quantize a classical physical system (like in the article quoted above), what specific method do they use? Or are they just mathematicians happy with any quantum system having some analogy with the classical one and leading to a mathematically interesting problem?
For a positive integer $n$, let $S_n$ denote the set of $n\times n$ symmetric matrices over $\mathbb{C}$. As a complex vector space, this set has dimension $\mathrm{dim}(S_n)=\binom{n+1}{2}$. The standard Hilbert-Schmidt inner product $$ \langle A,B\rangle = \mathrm{Tr}(AB^*) $$ for $A,B\in S_n$ turns this space into an inner product space.
Question. Does there always exist an orthogonal basis $\{U_1,U_2,\dots,U_{\frac{n(n+1)}{2}}\} \subseteq S_n$ of symmetric matrices such that each $U_i$ is unitary?

Case for $n=2$
The case $n=2$ is simple, since we may take the matrices $$ U_1 = \begin{pmatrix} 1&0\\0&1\end{pmatrix},\quad U_2 = \begin{pmatrix} 1&0\\0&-1\end{pmatrix},\quad\text{and}\quad U_3 = \begin{pmatrix} 0&1\\1&0\end{pmatrix}. $$ These matrices are all symmetric and unitary, and they satisfy $\mathrm{Tr}(U_iU_j^*) = 0$ whenever $i\neq j$. Moreover, these matrices span the space of $2\times 2$ symmetric matrices.
Case for $n$ even
I've come up with a way to construct such a basis for any even $n$.
Consider the matrices $\{E_{i,j}\,:\, i,j\in\{1,\dots,n\}\}$, where $E_{i,j}$ is the $n\times n$ matrix that has a 1 in the $i$th row and $j$th column with zeros elsewhere. One orthogonal basis for the set of $n\times n$ symmetric matrices is the set $$ \{H_{i,j}: i,j\in\{1,\dots,n\},\, i\leq j\} $$ where we define $$ H_{i,j} = \left\{\begin{array}{ll}E_{i,i} & \text{if }i=j\\ \frac{1}{\sqrt{2}}(E_{i,j}+E_{j,i}) & \text{if }i\neq j\end{array}\right. $$ Consider the complete graph $K_n$ with $n$ vertices. We may identify the edges of this graph with the set $$ \mathcal{E} = \{H_{i,j} : i,j\in\{1,\dots,n\},\, i<j\} $$ where the edge connecting vertices $i$ and $j$ (with $i<j$) is denoted $H_{i,j}$. Note that $\mathcal{E}$ has $\binom{n}{2}=\frac{n(n-1)}{2}$ elements.
If $n$ is even, there is a 1-factorization of this graph (see here). A 1-factorization corresponds to partitioning the edges $\mathcal{E}$ into $n-1$ subsets, $\mathcal{F}_1,\dots,\mathcal{F_{n-1}}$, each with $n/2$ edges, such that, for each $k$, no two edges in $\mathcal{F_k}$ are adjacent and $$ \mathcal{F_1}\cup\cdots\cup\mathcal{F_{n-1}} = \mathcal{E}. $$ For each $k\in\{1,\dots,n-1\}$, we can label the elements of $\mathcal{F_k}$ as $$ \mathcal{F_k} = \{F_{k,1},\dots,F_{k,n/2}\}. $$ Let $\omega = e^{i2\pi/n}$ denote a primitive $n$th root of unity, and for each $k\in\{1,\dots,n-1\}$ and $\ell\in\{1,\dots,\frac{n}{2}\}$ we define the matrix $$ G_{k,\ell} = 2\sum_{a=1}^{n/2} \omega^{2\ell a} F_{k,a}. $$ It can be verified that $G_{k,\ell}$ is symmetric and unitary. Finally, define symmetric unitary matrices $A_1,\dots,A_n$ by $$ A_j = \sum_{a=1}^n \omega^{ja} E_{a,a} $$ for each $j\in\{1,\dots,n\}$. It can be shown that the set $$ \{A_1,\dots,A_n\}\cup\{G_{k,\ell}\, :\, k\in\{1,\dots,n-1\},\, \ell\in\{1,\dots,n/2\}\} $$ is an orthogonal basis of $S_n$.
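Here is a numerical sketch of the even-$n$ construction for $n=4$ (my implementation; the round-robin 1-factorization below is one standard choice). One caveat: with the factor $2$ above and the $1/\sqrt{2}$ inside $H_{i,j}$, each $G_{k,\ell}$ satisfies $GG^* = 2I$, i.e. it is $\sqrt{2}$ times a unitary; rescaling by $1/\sqrt{2}$ makes it exactly unitary, and the pairwise orthogonality is unaffected either way:

```python
import numpy as np

n = 4
w = np.exp(2j * np.pi / n)          # primitive n-th root of unity

def E(i, j):
    M = np.zeros((n, n), dtype=complex)
    M[i, j] = 1.0
    return M

def H(i, j):
    return E(i, i) if i == j else (E(i, j) + E(j, i)) / np.sqrt(2)

def one_factorization(n):
    """Round-robin schedule: partition the edges of K_n (n even) into
    n-1 perfect matchings."""
    m = n - 1
    return [[(m, k)] + [((k + i) % m, (k - i) % m) for i in range(1, n // 2)]
            for k in range(m)]

basis = [sum(w**(j * a) * E(a - 1, a - 1) for a in range(1, n + 1))
         for j in range(1, n + 1)]                       # diagonal A_1..A_n
for F in one_factorization(n):
    for l in range(1, n // 2 + 1):                       # off-diagonal G_{k,l}
        basis.append(2 * sum(w**(2 * l * a) * H(*F[a - 1])
                             for a in range(1, n // 2 + 1)))

assert len(basis) == n * (n + 1) // 2                    # dim S_4 = 10
for B in basis:
    assert np.allclose(B, B.T)                           # symmetric
for i in range(len(basis)):
    for j in range(i + 1, len(basis)):                   # HS-orthogonal pairs
        assert abs(np.trace(basis[i] @ basis[j].conj().T)) < 1e-10
```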
Case when $n=3$
The case for $n$ odd seems to be a bit trickier. I've at least been able to construct a basis of the desired form when $n=3$ as follows: \begin{align*} U_1 &= \frac{\sqrt{3}}{2}\begin{pmatrix} \frac{2}{\sqrt{3}} & 0 & 0\\ 0 &\frac{1}{\sqrt{3}} & 1\\ 0 & 1 & -\frac{1}{\sqrt{3}}\end{pmatrix} & U_2&=\frac{\sqrt{3}}{2}\begin{pmatrix} \frac{2}{\sqrt{3}} & 0 & 0\\ 0 &\frac{1}{\sqrt{3}} & -1\\ 0 & -1 & -\frac{1}{\sqrt{3}}\end{pmatrix}\\ U_3 &= \frac{\sqrt{3}}{2}\begin{pmatrix} \frac{-\overline{\beta}}{\sqrt{3}} & 0 & 1\\ 0 &\frac{\alpha}{\sqrt{3}} & 0\\ 1 & 0 & \frac{\beta}{\sqrt{3}}\end{pmatrix} & U_4&=\frac{\sqrt{3}}{2}\begin{pmatrix} \frac{-\overline{\beta}}{\sqrt{3}} & 0 & -1\\ 0 &\frac{\alpha}{\sqrt{3}} & 0\\ -1 & 0 & \frac{\beta}{\sqrt{3}}\end{pmatrix} \\ U_5 &= \frac{\sqrt{3}}{2}\begin{pmatrix} \frac{i\overline{\beta}}{\sqrt{3}} & 1 & 0\\ 1 &\frac{i\beta}{\sqrt{3}} & 0\\ 0 & 0 & \frac{i\alpha}{\sqrt{3}}\end{pmatrix} & U_6&=\frac{\sqrt{3}}{2}\begin{pmatrix} \frac{i\overline{\beta}}{\sqrt{3}} & -1 & 0\\ -1 &\frac{i\beta}{\sqrt{3}} & 0\\ 0 & 0 & \frac{i\alpha}{\sqrt{3}}\end{pmatrix} \end{align*} where $\alpha = \sqrt{\frac{27}{8}} - i\sqrt{\frac{5}{8}}$ and $\beta=\sqrt{\frac{3}{8}} + i\sqrt{\frac{5}{8}}$. It can be verified that the matrices $\{U_1,\dots,U_6\}$ are symmetric, unitary, and pairwise orthogonal.
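These claims are quick to verify numerically; a sketch using NumPy (my code, with the exact entries of the matrices above):

```python
import numpy as np

s3 = np.sqrt(3)
alpha = np.sqrt(27/8) - 1j*np.sqrt(5/8)
beta = np.sqrt(3/8) + 1j*np.sqrt(5/8)
bb = np.conj(beta)
c = s3 / 2

U1 = c*np.array([[2/s3, 0, 0], [0, 1/s3, 1], [0, 1, -1/s3]])
U2 = c*np.array([[2/s3, 0, 0], [0, 1/s3, -1], [0, -1, -1/s3]])
U3 = c*np.array([[-bb/s3, 0, 1], [0, alpha/s3, 0], [1, 0, beta/s3]])
U4 = c*np.array([[-bb/s3, 0, -1], [0, alpha/s3, 0], [-1, 0, beta/s3]])
U5 = c*np.array([[1j*bb/s3, 1, 0], [1, 1j*beta/s3, 0], [0, 0, 1j*alpha/s3]])
U6 = c*np.array([[1j*bb/s3, -1, 0], [-1, 1j*beta/s3, 0], [0, 0, 1j*alpha/s3]])
Us = [U1, U2, U3, U4, U5, U6]       # 6 = dim S_3 matrices

for U in Us:
    assert np.allclose(U, U.T)                        # symmetric (not Hermitian!)
    assert np.allclose(U @ U.conj().T, np.eye(3))     # unitary
for i in range(6):
    for j in range(i + 1, 6):                         # pairwise HS-orthogonal
        assert abs(np.trace(Us[i] @ Us[j].conj().T)) < 1e-12
```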
Existence for odd $n$ greater than $3$?
Numerically, I've been able to find an orthogonal basis of symmetric unitary matrices for odd dimensions up to $n=11$. But I found no discernible pattern that allowed me to construct an exact solution.
I'd like to know if there exists a collection of $\binom{n+1}{2}$ orthogonal symmetric $n\times n$ unitary matrices for any odd $n>3$.
For the full paper, which includes proofs, click here.

Abstract
(Editor’s note:) This paper represents the third installment of a master's thesis by Jonathan Johnson. The first two can be found here and here. This paper continues the development of the theory of summation chains of sequences. Since summation chains are doubly infinite, it’s important to know how little information we actually need to define a chain. The linearity of the function rules that generate a summation chain helps to answer this question. The notion of uniquely completable is defined from the set of positions, and several important theorems are developed to determine when a set of positions is uniquely completable.

Introduction
(Editor’s note:) In Part 2, Johnson notes that summation chains can be generated by the function rule T_{\Sigma}: M^{\infty} \to M^{\infty},
T_{\Sigma}(x_{1}, x_{2}, x_{3},\ldots) = (x_{1}, x_{1} + x_{2}, x_{1} + x_{2} + x_{3},\ldots). where M^{\infty} is a \mathbb{Z}-Module. This is a formal way to define the operator that generates the partial sums of a given sequence. Johnson proves in this installment that this function rule is a linear operator. The linearity of these function operators assists in determining how little information we need to know about the chain to be able to uniquely define it.
Johnson introduces the notion of the set of positions, which becomes the smallest amount of information that can define a summation chain under certain conditions.
Chains Generated by Linear Functions
Since vector-valued chains are vector-valued functions, they inherit the scalar multiplication and addition operations defined for functions on vector spaces. The linearity of the summation chain function rule leads to many interesting and useful results. The first of these is the closure of the set of summation chains under linear operations. This result holds for any function rule that is a linear operator.
Lemma. Lemma.
Let
M be a an 1-dimensional vector space with scalar field F. The summation function rule T_{\Sigma} defined in Remark 3.1 of [6] is a linear operator on M^{\infty}. Proposition. Proposition.
Let T:V\to V be a function rule where
V is a vector space with scalar field F. \mathcal{C}_T is closed under scalar multiplication and addition if and only if T is a linear operator. Corollary 1. Corollary 1.
Let T:V\to V be a bijective linear operator on vector space
V with scalar field F, then \mathcal{C}_T is a subspace of the space of all functions from \mathbb{Z} to V, V^{\mathbb{Z}}, \Phi_{\Sigma[M]} is an isomorphism, and \mathcal{C}_T\cong V. Corollary 2. Corollary 2.
Let
M be a 1-dimensional vector space with scalar field F. \mathcal{C}_{\Sigma[M]} is a vector space, and \Phi_{\Sigma[M]} is an isomorphism from M^{\infty} to \mathcal{C}_{\Sigma[M]}.
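The partial-sum operator and its linearity are easy to check numerically. Here is a minimal sketch (the name `t_sigma` and the sample sequences are invented for illustration, not from Johnson's paper), verifying T_Σ(ax + by) = aT_Σ(x) + bT_Σ(y) on finite prefixes:

```python
from itertools import accumulate

def t_sigma(xs):
    """Partial-sum operator T_Sigma applied to a finite prefix of a sequence."""
    return list(accumulate(xs))

# Linearity check on finite prefixes: T(a*x + b*y) == a*T(x) + b*T(y)
a, b = 3, -2
x = [1, 4, 1, 5, 9]
y = [2, 7, 1, 8, 2]
lhs = t_sigma([a * xi + b * yi for xi, yi in zip(x, y)])
rhs = [a * s + b * t for s, t in zip(t_sigma(x), t_sigma(y))]
assert lhs == rhs
```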
For counting many types of combinatorial objects, like trees in this case, there are powerful mathematical tools (the symbolic method) that allow you to mechanically derive such counts from a description of how the combinatorial objects are constructed. This involves generating functions. An excellent reference is Analytic Combinatorics by the late Philippe ...
The main answer is that by exploiting semi-group structure, we can build systems that parallelize correctly without knowing the underlying operation (the user is promising associativity). By using Monoids, we can take advantage of sparsity (we deal with a lot of sparse matrices, where almost all values are a zero in some Monoid). By using Rings, we can do ...
The issue comes down to ambiguous terminology. $(a^b)^c = a^{bc}$, but $a^{(b^c)} \neq a^{bc}$. In other words, exponents aren't associative. Conventionally, nested exponentials without parentheses are grouped in this second way, because it's more useful. So $2^{2^n} = 2^{(2^n)} \neq 2^{2n}$. If we wanted to talk about $(2^2)^n$, we could just write $2^{2n}...
Short answer. If we formulate an appropriate decision problem version of the Discrete Logarithm problem, we can show that it belongs to the intersection of the complexity classes NP, coNP, and BQP. A decision problem version of Discrete Log. The discrete logarithm problem is most often formulated as a function problem, mapping tuples of integers to another ...
OK, a bit more detailed answer than in the comments. Choosing $k$ out of $n$ is done by ${n \choose k} = \frac{n!}{k!(n-k)!}$. So for things like the size of the pizza, where you have 4 options (and you need to choose one, because pizza cannot be both medium and extra-large at the same time) you have only $4$ options. Indeed, ${4 \choose 1}=\frac{4!}{1!\,3!}=4$....
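A quick check of these binomial-coefficient counts. The answer is truncated, so the topping count of 5 below is an invented example, not from the original:

```python
from math import comb, factorial

# comb(n, k) = n! / (k! (n-k)!): ways to choose k of n options
assert comb(4, 1) == factorial(4) // (factorial(1) * factorial(3)) == 4

# e.g. 4 sizes (choose exactly 1) and, hypothetically, 5 optional
# toppings (any subset of them): multiply the independent choices
menus = comb(4, 1) * 2**5
assert menus == 128
```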
I'm assuming that by $n$, you mean the total number of nodes in the binary tree. The height (or depth) of a binary tree is the length of the path from the root node (the node without parents) to the deepest leaf node. To make this height minimum, the tree must be fully saturated (except for the last tier), i.e. if a specific tier has nodes with children, then ...
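This reasoning gives a closed form: a tree of height h holds at most 2^(h+1) - 1 nodes, so n nodes need height at least ceil(log2(n+1)) - 1. A small sketch (the helper name is mine):

```python
from math import ceil, log2

def min_height(n):
    # A binary tree of height h holds at most 2**(h+1) - 1 nodes,
    # so the minimum height for n nodes is ceil(log2(n + 1)) - 1.
    return ceil(log2(n + 1)) - 1

assert min_height(1) == 0
assert min_height(7) == 2   # perfect tree with 3 levels
assert min_height(8) == 3   # one node spills into a 4th level
```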
Here are several ways to solve your recurrence relation. Guessing. Anyone with enough experience in computer science might recognize your recurrence as the one satisfied by $T(n) = 2^n$. Given this guess, you can verify it by summing the appropriate geometric series: if $T(m) = 2^m$ for $m < n$ then $$T(n) = 1 + \sum_{m=0}^{n-1} T(m) = 1 + \sum_{m=0}^{...
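The guess is easy to confirm numerically. A minimal memoized check, assuming the base case T(0) = 1 (which is what the recurrence gives with an empty sum):

```python
def t(n, memo={}):
    # T(n) = 1 + sum_{m=0}^{n-1} T(m); memoized to avoid exponential recursion
    if n not in memo:
        memo[n] = 1 + sum(t(m, memo) for m in range(n))
    return memo[n]

# the closed form T(n) = 2^n holds for every n checked
assert all(t(n) == 2**n for n in range(20))
```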
Insert real-world predicates and read aloud, for instance: It cannot be both winter and summer (at any point in time). And: (At any point in time) it is not winter or it is not summer. Clearly, the two statements are equivalent.
Let me first answer your subquestion: Does the literature on semiautomata ever look at "group-automata"? The answer is yes. In his book (Automata, languages, and machines. Vol. B, Academic Press), S. Eilenberg gave a characterization of the regular languages recognized by finite commutative groups and $p$-groups. Similar results are known for finite ...
If you like to visualize it, use Venn diagrams. See this, for instance. I find it simpler just to memorize the 2 basic laws: every time you "break" a negation line, you swap AND and OR (or vice versa). Adding two negation lines changes nothing (but gives you more "lines" to break). It just works.
Monoids are ubiquitous in programming, just that most programmers don't know about them. Number operations like addition and multiplication. Matrix multiplication. Basically all collection-like data structures form monoids, where the monoidal operation is concatenation or union. This includes lists, sets, maps of keys to values, various kinds of trees, etc....
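A minimal illustration of the "collections are monoids" point (the function names are invented for this sketch): a monoid is just an associative operation with an identity, and one generic fold works for all of them.

```python
from functools import reduce

# A monoid = (combine, identity); reduce folds any list with it.
monoids = {
    "sum":     (lambda a, b: a + b, 0),
    "product": (lambda a, b: a * b, 1),
    "concat":  (lambda a, b: a + b, []),
    "union":   (lambda a, b: a | b, set()),
}

def mconcat(name, xs):
    op, identity = monoids[name]
    return reduce(op, xs, identity)

assert mconcat("sum", [1, 2, 3]) == 6
assert mconcat("concat", [[1], [2, 3]]) == [1, 2, 3]
assert mconcat("union", [{1}, {2}, {1, 3}]) == {1, 2, 3}
```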
Barrington's famous theorem reduces computation in NC$^1$ to computing iterated products in the group $S_5$ (or $A_5$, or indeed any non-solvable group). There is also a connection to leakage-resistant computation, in Shielding Circuits with Groups by Miles and Viola (2012). Regarding the classification of the finite simple groups, as far as I remember it ...
A binary tree has 1 or 2 children at non-leaf nodes and 0 children at leaf nodes. Let there be $n$ nodes in a tree and we have to arrange them in such a way that they still form a valid binary tree. Without proving it, I am stating that to maximize the height, the given nodes should be arranged linearly, i.e. each non-leaf node should have only one child:...
There is a whole course being offered by Udacity*, Logic and Discrete Mathematics, which has interactive quizzes and homework assignments.The course description is as follows:This course presents key concepts in discrete mathematics, specifically, elementary propositional logic and elements of enumerative combinatorics, elementary number theory, and ...
Since I created this I probably can explain it best ;-): the first step is to calculate an image segmentation which will combine small areas of similar colors into bigger chunks. The tolerance values of that segmentation will influence how big the biggest circles can become (higher tolerance => bigger areas => bigger circles). You proceed by processing each ...
A famous area of study in the theory of group presentations is the word problem for groups. A group presentation is given by a bunch of generators $g_1, ..., g_m$ and a bunch of equations $a_1 = b_1, ..., a_n = b_n$ that the generated group needs to satisfy. Now given two words $x, y \in \{g_1, ..., g_m\}^*$, i.e. two strings over the alphabet $\{g_1, ..., ...
This is known as a one-way permutation. The "permutation" refers to the first of your two requirements; the "one-way" refers to the second of your two requirements. There are various candidate constructions for one-way permutations, e.g., based on raising to the third power modulo an RSA modulus or other schemes.
If you don't mind graphs with self-loops, the "easiest" expander family is probably this one, giving expanders that are 3-regular. Start with some prime number $p$, and construct vertices numbered $0$ to $p-1$. For every vertex $u \ne 0$, connect $u$ to $u-1$ and $u+1$, modulo $p$. Also connect $u$ to the unique vertex $v$ such that $uv \equiv 1 \mod p$....
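The construction can be sketched directly. Since the answer is truncated, the treatment of vertex 0 below (a self-loop standing in for its undefined inverse) is my assumption:

```python
def neighbors(u, p):
    # 3-regular expander on vertices {0, ..., p-1}, p prime (self-loops allowed)
    if u == 0:
        return [p - 1, 1, 0]           # assumption: 0 pairs with itself
    inv = pow(u, p - 2, p)             # u^{-1} mod p via Fermat's little theorem
    return [(u - 1) % p, (u + 1) % p, inv]

p = 11
for u in range(1, p):
    assert (u * neighbors(u, p)[2]) % p == 1   # the inverse edge is correct
assert all(len(neighbors(u, p)) == 3 for u in range(p))
```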
For the field of A.I. and machine learning, I would recommend you to explore and learn more about these topics: statistics, probability, stochastic processes, Bayesian data analysis, convex optimization, and graph theory. With your math background, you could easily pick any good machine learning book and learn the required math that you don't have as you go. Kevin ...
In a partially ordered set, there may be members which are not comparable. A partial order where all elements are comparable is called a total order.We say $a$ and $b$ are comparable when at least one of the following holds:$a\leq b$,$b \leq a$.
A course on data structures will not be about "creating data structures". You can expect to analyse data structures, prove various properties of them, and create your own to solve some highly nontrivial problems.Every single data structure you intend to build must be rigorously defined. Each method on a data structure must be proven rigorously to be ...
The number of such images is exponentially large in the dimensions of the image (even after taking into account symmetries), and rapidly becomes enormous. For all but very small images, no, it's not feasible to enumerate all such images within the lifetime of the solar system. (There's something wrong with your reasoning if you've concluded it's reasonably ...
Finite fields come up in many places. Here are just a few examples: the Razborov-Smolensky polynomial method; Fourier analysis, as used for example in the proof of the PCP theorem, or fast integer multiplication; list decoding (codes like Reed-Muller are algebraic codes); algebraization, the method used to prove IP=PSPACE; elliptic curves over finite fields ...
The short answer is no. No quick algorithm for this problem is known. A big open problem (for at least 50 years) in algebraic graph theory asks about the existence of a regular graph of degree 57, order 3250, girth 5, and diameter 2. This is known as a Moore graph. However, it is known that if such a graph exists its characteristic polynomial has to be $p(x) = ...
In a partially ordered set (poset for short), you can have $a \le b$ and $a \le c$ without $b$ and $c$ being comparable (i.e. neither $b \le c$ nor $c \le b$ holds). That's what makes it a partial order and not a total order. Mathematicians often mean a total order when they say “order”, because the primary example of an ordered set is the real numbers (or ...
Generating functions are a very powerful and very useful magic wand. The following solution to the first question (why are there $C_n$ trees) is somewhat less magical. Hence, cute.Example. To produce a tree of $5$ nodes we start with a sequence in which $+1$ occurs $5+1$ times, and $-1$ occurs $5$ times. For example, $+-++-+--++-$. Among those prefixes ...
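The count can be verified by brute force against the closed form $C_n = \frac{1}{n+1}\binom{2n}{n}$ (this check is mine, not part of the answer): enumerate all ±1 sequences whose prefix sums stay non-negative and end at zero.

```python
from math import comb
from itertools import product

def catalan_bruteforce(n):
    # count +/-1 sequences of length 2n whose prefix sums stay >= 0 and end at 0
    count = 0
    for seq in product((1, -1), repeat=2 * n):
        total, ok = 0, True
        for step in seq:
            total += step
            if total < 0:
                ok = False
                break
        if ok and total == 0:
            count += 1
    return count

# matches the closed form C_n = comb(2n, n) / (n + 1)
for n in range(7):
    assert catalan_bruteforce(n) == comb(2 * n, n) // (n + 1)
```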
One important problem in distributed file systems (DFS) is to reconstruct files from distributed blocks. The area of erasure codes from information theory and algebra (groups, rings, linear algebra, ...) is used extensively in distributed fault-tolerant file systems, for example in HDFS RAID (Hadoop Distributed File System). Social network and cloud companies are ...
Universities are not in the business of cramming useless info. If what they were teaching wasn't useful, they wouldn't bother. I recommend taking a more accepting attitude to life. If you go to college expecting to learn nothing, this is a self-fulfilling prophecy: you will learn nothing, so college will be wasted and you might as well start flipping burgers ...
If your question is "What are examples of groups, monoids, and rings in computation?" then one example I can think of off-hand is path-finding algorithms in graph theory. If we define a semiring with $+$ as $\min$ and $\cdot$ as $+$, then we can use matrix multiplication with the adjacency matrix to find all-pairs shortest paths. This method is ...
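A minimal sketch of this (min, +) trick (the graph and names are invented for illustration): repeated min-plus products of the weighted adjacency matrix converge to the all-pairs shortest-path distances.

```python
INF = float("inf")

def min_plus(A, B):
    # matrix "multiplication" in the (min, +) semiring
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# weighted adjacency matrix: 0 on the diagonal, INF where there is no edge
W = [[0,   3,   INF],
     [INF, 0,   1  ],
     [7,   INF, 0  ]]

D = W
for _ in range(len(W) - 1):   # (n-1)-fold min-plus product with W
    D = min_plus(D, W)
# D[i][j] is now the shortest-path distance from i to j
assert D[0][2] == 4           # path 0 -> 1 -> 2 costs 3 + 1
```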
We have shown that a holomorphic map \(f: G\to \mathbb{C}\) can be expressed as a power series, which bears a certain similarity to polynomials, and a feature of polynomials is that if \(a\) is a root, or zero, of a polynomial \(p\), we can factor \(p\) such that \(p(z)=(z-a)^n q(z)\) where \(q\) is another polynomial with the property that \(q(a)\neq 0\). Now, does this similarity with polynomials extend to factorization? In fact it does, as we shall see.
Let \(f: G\to \mathbb{C}\) be a holomorphic map that is not identically zero, with \(G\subseteq \mathbb{C}\) a domain and \(f(a)=0\). It is our claim that there exists a smallest natural number \(n\) such that \(f^{(n)}(a)\neq 0\). So suppose there is no such \(n\), i.e. that \(f^{(k)}(a)=0\) for all \(k\in\mathbb{N}\). Let \(B_\rho(a)\) be the largest open ball with center \(a\) contained in \(G\); since we have that \[f(z)=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k\] we then have that \(f\) is identically zero on \(B_\rho(a)\). Fix a point \(z_0\in G\) and let \(\gamma : [0,1]\to G\) be a continuous curve from \(a\) to \(z_0\). By the paving lemma there is a finite partition \(0=t_1 < t_2 <\cdots <t_m=1\) and an \(r>0\) such that \(B_r(\gamma(t_k))\subseteq G\) for all \(k\) and \(\gamma([t_{k-1},t_k])\subseteq B_r(\gamma(t_k))\). Note that \(B_r(\gamma(t_1))=B_r(a)\subseteq B_\rho(a)\) so \(f\) is identically zero on \(B_r(\gamma(t_1))\), but since \(\gamma([t_1,t_2])\subseteq B_r(\gamma(t_1))\) we must have that \(f\) is identically zero on \(B_r(\gamma(t_2))\), and so on finitely many times until we reach \(\gamma(t_m)\) and conclude that \(f\) is identically zero on \(B_r(\gamma(t_m))=B_r(z_0)\). Since \(z_0\) was chosen arbitrarily, we must conclude that \(f\) is identically zero on all of \(G\), a contradiction.
Now, let \(n\) be the smallest natural number such that \(f^{(n)}(a)\neq 0\); then we must have that \(f^{(k)}(a)=0\) for \(k < n\). We then get, for \(z\in B_\rho(a)\): \[\begin{split} f(z) &=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=n}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{n+k} \\&=(z-a)^n \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}. \end{split}\] Now, let \(\tilde{f}(z)=\sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}\) and note that \(\tilde{f}\) is non-zero and holomorphic on \(B_\rho(a)\). We then define a map \(g\) given by \[g(z)=\begin{cases} \tilde{f}(z), & z\in B_\rho(a) \\ \frac{f(z)}{(z-a)^n}, & z\in G\setminus \{a\}\end{cases}\] and note that \[f(z)=(z-a)^n g(z),\] showing the existence of a factorization with our desired properties. Showing that this representation is unique is left as an exercise 😉
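As a small numerical illustration of the factorization (not part of the notes): \(f(z)=\sin^2(z)\) has a zero of order \(n=2\) at \(a=0\), and \(g(z)=f(z)/(z-a)^n\) indeed approaches the non-zero value \(g(a)=1\) as \(z\to a\).

```python
import cmath

# f(z) = sin(z)^2 has a zero of order n = 2 at a = 0; the factorization
# says g(z) = f(z) / (z - a)^n extends holomorphically with g(a) != 0.
f = lambda z: cmath.sin(z) ** 2
a, n = 0.0, 2

for r in (1e-1, 1e-2, 1e-3):
    z = a + r * cmath.exp(1j)        # a point at distance r from a
    g = f(z) / (z - a) ** n
    assert abs(g - 1) < 10 * r**2    # g(z) -> g(a) = 1 as z -> a
```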
References
Complex analysis. Copenhagen: Department of Mathematical Sciences, University of Copenhagen.
Contents
Back to Constrained Optimization
Bound constrained optimization problems consider the problem of optimizing an objective function subject to bound constraints on the values of the variables. In mathematical terms,
\[ \begin{array}{ll}
\mbox{minimize} & f(x) \\
\mbox{subject to} & l \leq x \leq u
\end{array}
\]
Bound constrained optimization problems play an important role in the development of algorithms and software for the general constrained problem because many algorithms reduce the solution of the general problem to the solution of a sequence of bound-constrained problems. Bound constrained optimization problems also arise on their own in applications where the parameters that describe physical quantities are constrained to be in a given range.
Algorithms for the solution of bound-constrained problems seek a local minimizer \(x^* \,\) of \(f(x) \,\). The standard
first-order necessary condition for a local minimizer \(x^* \,\) can be expressed in terms of the binding set \[B(x^*) = \{ i : x_i^* = l_i, \partial_i f(x^*) \geq 0 \} \cup \{ i : x_i^* = u_i, \partial_i f(x^*) \leq 0 \} \] at \(x^* \,\) by requiring that \[\partial_i f(x^*) = 0, \quad \forall i \not \in B(x^*).\]
A
second-order sufficient condition for \(x^* \,\) to be a local minimizer of the bound-constrained problem is that the first-order condition holds and that \[s^T \nabla^2 f(x^*) s > 0\] for all vectors \(s \not = 0\) with \(s_i=0, \; i \in B_s(x^*)\), where \[B_s(x^*) = B(x^*) \cap \{i : \partial_i f(x^*) \not = 0 \}\] is the strictly binding set at \(x^* \,\).
Given any set of free variables \(F \,\), we can define the reduced gradient and the reduced Hessian matrix, respectively, as the gradient and the Hessian matrix of \(f(x) \,\) with respect to the free variables. In this terminology, the second-order condition requires that the reduced gradient be zero and that the reduced Hessian matrix be positive definite when the set \(F \,\) of free variables consists of all the variables that are not strictly binding at \(x^* \,\). Many algorithms for the solution of bound-constrained problems use unconstrained minimization techniques to explore the reduced problem defined by a set \(F_k \,\) of free variables. Once this exploration is complete, a new set of free variables is chosen with the aim of driving the reduced gradient to zero.
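The gradient-projection idea mentioned below can be sketched in a few lines: step along the negative gradient, then project back onto the box \([l, u]\). This is a minimal illustration with an invented test problem; the fixed step size and iteration count are simplifying assumptions, not a production algorithm.

```python
def projected_gradient(grad, x, l, u, alpha=0.1, iters=500):
    # gradient-projection sketch for: minimize f(x) subject to l <= x <= u
    for _ in range(iters):
        x = [min(max(xi - alpha * gi, li), ui)     # gradient step, then project
             for xi, gi, li, ui in zip(x, grad(x), l, u)]
    return x

# f(x) = (x0 - 3)^2 + (x1 + 1)^2 over the box [0, 2] x [0, 2]:
# the unconstrained minimizer (3, -1) is infeasible, so both bounds bind.
grad = lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)]
x_star = projected_gradient(grad, [1.0, 1.0], [0, 0], [2, 2])
assert abs(x_star[0] - 2) < 1e-6 and abs(x_star[1] - 0) < 1e-6
```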
Bound Constrained Optimization Solvers on the NEOS Server
Optimization Online Nonlinear Optimization area
The books of Fletcher (1987) and Gill, Murray, and Wright (1981) contain chapters on the solution of linearly constrained problems with specific details on the solution of bound-constrained problems. Both Newton and quasi-Newton methods are discussed, but neither book discusses the gradient-projection method, since its use in codes for the solution of large-scale problems is recent. Bertsekas (1996) has a section on the solution of bound-constrained problems with gradient-projection techniques, while Conn, Gould, and Toint (1992) discuss the approach used by the LANCELOT code.
Bertsekas, D. P. 1996. Constrained Optimization and Lagrange Multiplier Methods, Athena Scientific, Belmont, MA. Conn, A. R., Gould, N. I. M., and Toint, P. L. 1992. LANCELOT, Springer Series in Computational Mathematics, Springer-Verlag, Berlin. Fletcher, R. 1987. Practical Methods of Optimization, 2nd ed., John Wiley & Sons, Inc., New York. Gill, P. E., Murray, W., and Wright, M. H. 1981. Practical Optimization, Academic Press, New York. Last updated: December 13, 2013 |
Definition\(\PageIndex{1}\)
A random variable \(X\) has a
uniform distribution on interval \([a, b]\), write \(X\sim\text{uniform}[a,b]\), if it has pdf given by $$f(x) =\left\{\begin{array}{l l} \frac{1}{b-a}, & \text{for}\ a\leq x\leq b \\ 0, & \text{otherwise} \end{array}\right.\notag$$
A typical application of the uniform distribution is to model randomly generated numbers. In other words, it provides the probability distribution for a random variable representing a randomly chosen number between numbers \(a\) and \(b\).
The uniform distribution assigns equal probabilities to intervals of equal lengths, since its pdf is constant on the interval \([a, b]\) where it is non-zero. This is the continuous analog of equally likely outcomes in the discrete setting.
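A small sketch of the pdf and the equal-mass property (the helper names are mine): the density is \(1/(b-a)\) on \([a,b]\), and a Monte Carlo estimate confirms that a subinterval of length 1 inside \([2,5]\) carries probability \(1/3\).

```python
import random

def uniform_pdf(x, a, b):
    # density of X ~ uniform[a, b]
    return 1.0 / (b - a) if a <= x <= b else 0.0

a, b = 2.0, 5.0
assert uniform_pdf(3.0, a, b) == 1/3
assert uniform_pdf(7.0, a, b) == 0.0

# equal-length subintervals get equal probability mass:
# P(2 <= X <= 3) = (3 - 2) / (5 - 2) = 1/3
random.seed(0)
samples = [random.uniform(a, b) for _ in range(100_000)]
frac = sum(2 <= s <= 3 for s in samples) / len(samples)
assert abs(frac - 1/3) < 0.01
```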
Probability Seminar
Revision as of 13:23, 18 April 2019
Spring 2019
Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM.
If you would like to sign up for the email list to receive seminar announcements then please send an email to join-probsem@lists.wisc.edu
January 31, Oanh Nguyen, Princeton
Title:
Survival and extinction of epidemics on random graphs with general degrees
Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.
Wednesday, February 6 at 4:00pm in Van Vleck 911 , Li-Cheng Tsai, Columbia University
Title:
When particle systems meet PDEs
Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.
Title:
Fluctuations of the KPZ equation in d\geq 2 in a weak disorder regime
Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in d\geq 2.
February 14, Timo Seppäläinen, UW-Madison
Title:
Geometry of the corner growth model
Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years. This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).
February 21, Diane Holcomb, KTH
Title:
On the centered maximum of the Sine beta process
Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.
Title: Quantitative homogenization in a balanced random environment
Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d random environment and the corresponding process—a random walk in a balanced random environment in the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).
Wednesday, February 27 at 1:10pm Jon Peterson, Purdue
Title:
Functional Limit Laws for Recurrent Excited Random Walks
Abstract:
Excited random walks (also called cookie random walks) are model for self-interacting random motion where the transition probabilities are dependent on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.
March 21, Spring Break, No seminar
March 28, Shamgar Gurevich, UW-Madison
Title:
Harmonic Analysis on GLn over finite fields, and Random Walks
Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters. For evaluating or estimating these sums, one of the most salient quantities to understand is the
character ratio:
$$ \text{trace}(\rho(g))/\text{dim}(\rho), $$
for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant
rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).
April 4, Philip Matchett Wood, UW-Madison
Title:
Outliers in the spectrum for products of independent random matrices
Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries as in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.
April 11, Eviatar Procaccia, Texas A&M
Title: Stabilization of Diffusion Limited Aggregation in a Wedge.
Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.
April 18, Andrea Agazzi, Duke
Title:
Large Deviations Theory for Chemical Reaction Networks
Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
April 25, Kavita Ramanan, Brown
Title:
Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs
Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.
April 26, Colloquium, Kavita Ramanan, Brown
Title:
Tales of Random Projections
Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems. |
In his celebrated paper "Conjugate Coding" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers, and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical "serial number" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either
$$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$
The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or $\{|+\rangle,|-\rangle\}$), and check that it gets the correct outcomes.
On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's "intuitively obvious" that, if a counterfeiter who
doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$).
However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere, or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one.
So,
has such a proof (with an upper bound on $c$) been published? If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme?
Update: In light of the discussion with Joe Fitzsimons below, I should clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say in the basis
$$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$
Or is there an entangled counterfeiting strategy that does better?
Update 2: Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and "hopes for the best." Interestingly, both of these strategies turn out to achieve a success probability of $(5/8)^n$. So, my conjecture of the moment is that $(5/8)^n$ might be the right answer. In any case, the fact that $5/8$ is a lower bound on $c$ rules out any security argument for Wiesner's scheme that's "too" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is $c=1/2$).
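As a sanity check on strategy (b) (this little calculation is mine, not from the original post), averaging over the four BB84 states confirms the $5/8$ per-qubit figure:

```python
from math import sqrt

# Strategy (b): measure each qubit in the {|0>,|1>} basis and output two
# copies of the observed basis state. A copy passes the bank's verification
# (a measurement in the preparation basis) with probability |<psi|out>|^2,
# and both copies must pass. All amplitudes here are real.
states = [
    (1.0, 0.0),                       # |0>
    (0.0, 1.0),                       # |1>
    (1 / sqrt(2), 1 / sqrt(2)),       # |+>
    (1 / sqrt(2), -1 / sqrt(2)),      # |->
]
outcomes = [(1.0, 0.0), (0.0, 1.0)]

def overlap2(u, v):                   # |<u|v>|^2 for real amplitudes
    return (u[0] * v[0] + u[1] * v[1]) ** 2

avg = 0.0
for psi in states:
    for out in outcomes:
        p_outcome = overlap2(out, psi)        # chance of this result
        p_pass = overlap2(out, psi)           # one copy passes verification
        avg += p_outcome * p_pass ** 2        # both copies must pass
avg /= len(states)
assert abs(avg - 5 / 8) < 1e-12
```

Swapping in the $\pi/8$-rotated basis vectors for `outcomes` gives the same $5/8$ for strategy (a).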
Update 3: Nope, the right answer is $(3/4)^n$! See the discussion thread below Abel Molina's answer.
Search for new resonances in $W\gamma$ and $Z\gamma$ Final States in $pp$ Collisions at $\sqrt{s}=8\,\mathrm{TeV}$ with the ATLAS Detector
(Elsevier, 2014-11-10)
This letter presents a search for new resonances decaying to final states with a vector boson produced in association with a high transverse momentum photon, $V\gamma$, with $V= W(\rightarrow \ell \nu)$ or $Z(\rightarrow ...
Fiducial and differential cross sections of Higgs boson production measured in the four-lepton decay channel in $\boldsymbol{pp}$ collisions at $\boldsymbol{\sqrt{s}}$ = 8 TeV with the ATLAS detector
(Elsevier, 2014-11-10)
Measurements of fiducial and differential cross sections of Higgs boson production in the ${H \rightarrow ZZ ^{*}\rightarrow 4\ell}$ decay channel are presented. The cross sections are determined within a fiducial phase ... |
There are two possible solutions here, one using recurrences and the other one using PIE.
Recurrence.
The recurrences use two sequences $\{a_n\}$ and $\{b_n\}$ which count strings not containing the two-character pattern that end in the first character of the pattern and that do not. This gives
$$a_1 = 1 \quad\text{and}\quad b_1 = 25$$ and for $n\gt 1$ $$a_n = a_{n-1} + b_{n-1}\quad\text{and}\quad b_n = 24a_{n-1} + 25b_{n-1}.$$
These recurrences produce for $1\le n\le 7$ the sequence of sums $\{a_n+b_n\}$ which is $$26, 675, 17524, 454949, 11811150, 306634951, 7960697576,\ldots$$ so that the answer to the problem (the count of no occurrences) is $$7960697576.$$
Inclusion-Exclusion.
Let $M_{\ge q}$ be the set of strings containing at least $q$ occurrences of the two-character pattern and let $M_{=q}$ be the set containing exactly $q$ occurrences of the pattern. Then by inclusion-exclusion we have
$$|M_{=0}| = \sum_{k=0}^{\lfloor n/2\rfloor}(-1)^k |M_{\ge k}|.$$
Note however that $$|M_{\ge k}| = 26^{n-2k} {n-2k + k\choose k} = 26^{n-2k} {n-k\choose k}.$$
This is because when we have $k$ copies of the pattern there are $n-2k$ freely choosable letters that remain. Hence we have a total of $n-2k+k=n-k$ items to permute. We then choose the $k$ locations of the patterns among the $n-k$ items.
This gives the formula $$|M_{=0}| = \sum_{k=0}^{\lfloor n/2\rfloor} (-1)^k \times 26^{n-2k} \times {n-k\choose k}.$$
a :=
proc(n)
option remember;
if n=1 then return 1 fi;
a(n-1)+b(n-1);
end;
b :=
proc(n)
option remember;
if n=1 then return 25 fi;
24*a(n-1)+25*b(n-1);
end;
ex_pie :=
proc(n)
option remember;
add((-1)^q*26^(n-2*q)*binomial(n-q,q),
q=0..floor(n/2));
end;
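For readers without Maple, here is an equivalent Python sketch of the two counts, checked against each other (the function names are mine):

```python
from math import comb

# a_n, b_n: strings over a 26-letter alphabet avoiding a fixed two-letter
# pattern, classified by whether they end in the pattern's first letter.
def count_recurrence(n):
    a, b = 1, 25
    for _ in range(n - 1):
        a, b = a + b, 24 * a + 25 * b
    return a + b

# |M_{=0}| via the inclusion-exclusion formula above.
def count_pie(n):
    return sum((-1) ** k * 26 ** (n - 2 * k) * comb(n - k, k)
               for k in range(n // 2 + 1))

for n in range(1, 11):
    assert count_recurrence(n) == count_pie(n)
assert count_recurrence(7) == 7960697576
```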
Proof that the two answers are the same.
Introduce the generating functions $$A(z) = \sum_{n\ge 0} a_n z^n \quad\text{and}\quad B(z) = \sum_{n\ge 0} b_n z^n.$$
Observe that the correct initial value pair is $a_0 = 0$ and $b_0 = 1.$
Multiply the two recurrences by $z^n$ and sum over $n\ge 1$ to get $$A(z) - 0 = z A(z) + z B(z) \quad\text{and}\quad B(z) - 1 = 24 z A(z) + 25 z B(z).$$
Solve these to obtain $$A(z) = \frac{z}{z^2-26z+1} \quad\text{and}\quad B(z) = \frac{1-z}{z^2-26z+1}.$$
This yields the following generating function $G(z)$ for $\{a_n+b_n\}:$$$G(z) = \frac{1}{z^2-26z+1}.$$
On the other hand we have$$G(z) = \sum_{n\ge 0} z^n\left(\sum_{k=0}^{\lfloor n/2\rfloor}26^{n-2k} (-1)^k {n-k\choose k}\right).$$
This is$$\sum_{k\ge 0} 26^{-2k} (-1)^k\sum_{n\ge 2k} 26^n z^n {n-k\choose k}\\ = \sum_{k\ge 0} 26^{-2k} (-1)^k \sum_{n\ge 0} 26^{n+2k} z^{n+2k} {n+2k-k\choose k}\\ = \sum_{k\ge 0} z^{2k} (-1)^k\sum_{n\ge 0} 26^n z^{n} {n+k\choose k}= \sum_{k\ge 0} z^{2k} (-1)^k \frac{1}{(1-26z)^{k+1}}\\ = \frac{1}{1-26z}\sum_{k\ge 0} z^{2k} (-1)^k \frac{1}{(1-26z)^{k}}= \frac{1}{1-26z} \frac{1}{1+z^2/(1-26z)}\\ = \frac{1}{1-26z+z^2}.$$
This establishes the equality of the generating functions which was to be shown.
Closed form and OEIS entry.
The roots of the denominator of the generating function are$$\rho_{1,2} = 13 \pm 2\sqrt{42}.$$Writing$$\frac{1}{1-26z+z^2}= \frac{1}{(z-\rho_1)(z-\rho_2)}= \frac{1}{\rho_1-\rho_2}\left(\frac{1}{z-\rho_1}-\frac{1}{z-\rho_2}\right)\\ = \frac{1}{4\sqrt{42}}\left(\frac{1}{\rho_1}\frac{1}{z/\rho_1-1}-\frac{1}{\rho_2}\frac{1}{z/\rho_2-1}\right)\\ = \frac{1}{4\sqrt{42}}\left(-\frac{1}{\rho_1}\frac{1}{1-z/\rho_1}+\frac{1}{\rho_2}\frac{1}{1-z/\rho_2}\right).$$
We now extract coefficients to get$$[z^n] G(z) =\frac{1}{4\sqrt{42}}\left(\rho_2^{-n-1}-\rho_1^{-n-1}\right).$$
Since $\rho_1\rho_2 = 1$ this finally becomes$$[z^n] G(z) =\frac{1}{4\sqrt{42}}\left(\rho_1^{n+1}-\rho_2^{n+1}\right)$$
which is the sequence$$26, 675, 17524, 454949, 11811150, 306634951, 7960697576,\\ 206671502025, 5365498355074, 139296285729899, \ldots$$
This is OEIS A097309 which has additional material and where in fact we find a copy of the problem statement that initiated this thread.
Alternative derivation of the closed form of $G(z).$
This uses the following integral representation.$${n-k\choose k}= \frac{1}{2\pi i}\int_{|w|=\epsilon}\frac{(1+w)^{n-k}}{w^{k+1}} \; dw.$$
This gives for the inner sum$$\frac{1}{2\pi i}\int_{|w|=\epsilon}\frac{(1+w)^n}{w}\left(\sum_{k=0}^{\lfloor n/2\rfloor}26^{n-2k} (-1)^k \frac{1}{(1+w)^k w^k}\right) \; dw.$$
Note that the defining integral is zero when $\lfloor n/2\rfloor \lt k \le n,$ so this is in fact$$\frac{1}{2\pi i}\int_{|w|=\epsilon}\frac{(1+w)^n}{w}\left(\sum_{k=0}^n26^{n-2k} (-1)^k \frac{1}{(1+w)^k w^k}\right) \; dw.$$
Simplifying we obtain$$\frac{1}{2\pi i}\int_{|w|=\epsilon}\frac{(1+w)^n}{w} 26^n\frac{(-1)^{n+1}/26^{2(n+1)}/(1+w)^{n+1}/w^{n+1}-1}{(-1)/26^2/(1+w)/w-1} \; dw$$or$$\frac{1}{2\pi i}\int_{|w|=\epsilon}(1+w)^{n+1} 26^n\frac{(-1)^{n+1}/26^{2(n+1)}/(1+w)^{n+1}/w^{n+1}-1}{(-1)/26^2-w(w+1)} \; dw$$
The difference from the geometric series contributes two terms, the second of which has no poles inside the contour, leaving just
$$\frac{1}{2\pi i}\int_{|w|=\epsilon}\frac{(-1)^{n+1}}{26^{n+2}}\frac{1}{w^{n+1}}\frac{1}{(-1)/26^2-w(w+1)} \; dw.$$
It follows that$$G(z) = \sum_{n\ge 0}z^n [w^n] \frac{(-1)^{n+1}}{26^{n+2}}\frac{1}{(-1)/26^2-w(w+1)}.$$
What we have here is an annihilated coefficient extractor which simplifies as follows. $$\frac{-1}{26^2} \sum_{n\ge 0}(-z/26)^n [w^n] \frac{1}{(-1)/26^2-w(w+1)}\\ = \frac{-1}{26^2} \frac{1}{(-1)/26^2+z/26(-z/26+1)}= -\frac{1}{-1+z(-z+26)}\\ = \frac{1}{1-26z+z^2}.$$
This concludes the argument.
There is another annihilated coefficient extractor at this MSE link.
Dear Uncle Colin,
In an answer sheet, they've made a leap from $\arctan\left(\frac{\cos(x)+\sin(x)}{\cos(x)-\sin(x)}\right)$ to $x + \frac{\pi}{4}$ and I don't understand where it's come from. Can you help?
-- Awful Ratio Converted To A Number
Hello, ARCTAN, and thank you for your message!
There's a principle I want to introduce here that's not an obvious one, and possibly not a hard-and-fast rule: when you're working with the arctangent of something, it often helps if the something is expressed in tangents.
In this case, the argument of the arctangent is $\frac{\cos(x)+\sin(x)}{\cos(x)-\sin(x)}$, which - if you divide top and bottom by $\cos(x)$ - is $\frac{1 + \tan(x)}{1 - \tan(x)}$.
If we notice that $\tan(A+B) \equiv \frac{\tan(A)+\tan(B)}{1-\tan(A)\tan(B)}$, then we can see that if $\tan(A)=1$ (so $A=\frac{\pi}{4}$) and $B=x$, we recover exactly what we have above.
Your expression is $\arctan\left(\tan\left(x + \frac{\pi}{4}\right)\right)$, which is $x + \frac{\pi}{4}$ whenever $x + \frac{\pi}{4}$ lies in arctangent's principal range of $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$.
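If you want to convince yourself numerically, here's a quick Python spot check for a few $x$ with $x + \frac{\pi}{4}$ inside arctangent's principal range:

```python
import math

# arctan((cos x + sin x)/(cos x - sin x)) == x + pi/4, checked for a few x
# with x + pi/4 inside arctan's principal range (-pi/2, pi/2).
for x in [-0.5, 0.0, 0.3, 0.7]:
    lhs = math.atan((math.cos(x) + math.sin(x)) / (math.cos(x) - math.sin(x)))
    assert abs(lhs - (x + math.pi / 4)) < 1e-12
```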
Hope that helps,
-- Uncle Colin |
Research | Open Access

\((\omega ,c)\)-Periodic solutions for time varying impulsive differential equations

Advances in Difference Equations, volume 2019, Article number: 259 (2019)
Abstract
In this paper, we study a class of \((\omega ,c)\)-periodic time varying impulsive differential equations and establish the existence and uniqueness results for \((\omega ,c)\)-periodic solutions of homogeneous problem as well as nonhomogeneous problem.
Introduction
It is well known that the concept of \((\omega ,c)\)-periodic functions is the same as that of “affine-periodic functions” or “periodic functions of the second kind”, which were introduced by Floquet [1] and have been studied in the past decades. Recently, Alvarez et al. [2] introduced a new concept of \((\omega ,c)\)-periodic function by considering Mathieu’s equation \(z''+[\alpha -2\beta \cos (2t)]z=0\), whose solution satisfies \(z(t+\omega )=cz(t)\), \(c\in \mathbb {C}\). Clearly, \((\omega ,c)\)-periodic functions become the standard ω-periodic functions when \(c=1\) and ω-antiperiodic functions when \(c=-1\). For these particular cases, we refer readers to [3,4,5,6].
Meanwhile, Alvarez et al. [7] transferred the same idea to study \((N,\lambda )\)-periodic discrete functions and established the existence and uniqueness of \((N,\lambda )\)-periodic solutions to a class of Volterra difference equations with infinite delay. Next, Agaoglou et al. [8] applied the concept of \((\omega ,c)\)-periodicity to semilinear evolution equations in complex Banach spaces and studied the existence and uniqueness of \((\omega ,c)\)-periodic solutions. Li et al. [9] transferred a similar idea to consider \((\omega ,c)\)-periodic solutions of impulsive differential systems.
Although Floquet [1] studied the homogeneous linear periodic system \(x'(t)=A(t)x(t)\) with \(A(t+\omega )=A(t)\), \(t\in \mathbb {R}\), there are very few analogous results to Floquet’s theory for \((\omega ,c)\)-periodic systems with impulses. Motivated by [1, 2, 8, 9], we consider the following time varying impulsive differential equation:
where \(a\in C(\mathbb {R},\mathbb {R})\), \(f\in C(\mathbb {R}\times \mathbb {R},\mathbb {R})\), \(b_{i}, c_{i} \in \mathbb {R}\), and \(t_{i}< t_{i+1}\), \(i\in \mathbb {N}\). The symbols \(x(t_{i}^{+})\) and \(x(t_{i}^{-})\) represent the right and left limits of \(x(t)\) at \(t=t_{i}\).
The main purpose of this paper is to derive existence and uniqueness results for \((\omega ,c)\)-periodic solutions of nonhomogeneous linear problem as well as homogeneous linear problem.
Preliminaries
We introduce a Banach space \(\operatorname{PC}(\mathbb {R},\mathbb {R})=\{x: \mathbb {R}\to \mathbb {R}:x\in C((t _{i},t_{i+1}],\mathbb {R}), \text{and } x(t_{i}^{-})=x(t_{i}), x(t_{i} ^{+}) \text{ exists } \forall i\in \mathbb {N}\}\) endowed with the norm \(\|x\|=\sup_{t\in \mathbb {R}}|x(t)|\).
Lemma 2.1
(See [10, p.9])
Suppose that \(f\in C(\mathbb {R},\mathbb {R})\). A solution \(x\in \operatorname{PC}(\mathbb {R},\mathbb {R})\) of the following nonhomogeneous linear impulsive equation is given by where (see [10, p.8])

Lemma 2.2

For any \(t, t_{0}\in \mathbb {R}\), \(\tau \in \mathbb {R}\setminus \{t_{i}\}_{i\in \mathbb {N}}\), and \({t\geq \tau \geq t_{0}}\), we have

Proof
Since \(\tau \notin \{t_{i}\}_{i\in \mathbb {N}}\), we derive
□
Definition 2.3
(See [2])
Let \(c\in \mathbb {R}\setminus \{0\}\) and \(\omega >0\). A function \(f:\mathbb {R}\to \mathbb {R}\) is said to be \((\omega ,c)\)-periodic if \(f(t+\omega )=cf(t)\) for all \(t\in \mathbb {R}\).
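As a concrete illustration of this definition (the example is ours, not from [2]): if \(g\) is ω-periodic, then \(f(t)=c^{t/\omega }g(t)\) satisfies \(f(t+\omega )=cf(t)\), so \(f\) is \((\omega ,c)\)-periodic. A quick numerical check:

```python
import math

# If g is omega-periodic, then f(t) = c**(t/omega) * g(t) satisfies
# f(t + omega) = c**((t+omega)/omega) * g(t) = c * f(t).
omega, c = 1.0, 2.0
g = lambda t: math.sin(2 * math.pi * t / omega)   # any omega-periodic g
f = lambda t: c ** (t / omega) * g(t)

for t in [0.1, 0.37, 2.3, -1.2]:
    assert abs(f(t + omega) - c * f(t)) < 1e-9
```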
Lemma 2.4
(See [8, Lemma 2.2])
Set \(\varPsi _{\omega ,c}:=\{x:x\in \operatorname{PC}(\mathbb {R},\mathbb {R}) \text{ and } cx(\cdot )=x(\cdot +\omega )\}\). Let \(x\in \varPsi _{\omega ,c}\), that is, x is a piecewise continuous and \((\omega ,c)\)-periodic function. Then \(x\in \varPsi _{\omega ,c}\) is equivalent to

Lemma 2.5

Assume that the following conditions hold:

\((A_{1})\) :
\(a(\cdot )\)
is ω-periodic, i.e., \(a(t+\omega )=a(t)\), \(\forall t\in \mathbb {R}\). \((A_{2})\) : Set \(t_{0}=0\) and \(t_{i}< t_{i+1}\), \(i\in \mathbb {N}\). There exists \(N\in \mathbb {N}\) such that \(t_{i+N}=t_{i}+\omega \), \(b_{i+N}=b_{i}\), and \(c_{i+N}=c_{i}\), \(\forall i\in \mathbb {N}\). Then the following homogeneous linear impulsive equation has a solution \(x\in \varPsi _{\omega ,c}\) if and only if \(x_{0}(c-W(\omega ,0))=0 \).

Proof
The solution \(x\in PC(\mathbb{R},\mathbb{R})\) of (6) is given by
If there exists \(t_{i}\in (0,t)\) such that \(1+b_{i}=0\), obviously, \(x(t+\omega )=cx(t)=0\), and the result holds.
If \(1+b_{i}\neq 0\), \(\forall t_{i}\in (0,t)\) and \(t\in [0,\infty ) \setminus \{t_{i}\}_{i\in \mathbb {N}}\), we derive
In addition, since \(x(t_{i})=x(t_{i}^{-})\), we obtain \(x(t_{i}+\omega )=cx(t_{i})\). □
Main results
We consider the \((\omega ,c)\)-periodic solutions of the following nonhomogeneous linear problem:
where \(f\in C(\mathbb {R},\mathbb {R})\) and
f is \((\omega ,c)\)-periodic. We give the following assumption: \((A_{3})\) :
\(c\neq W(\omega ,0)\).
Lemma 3.1

where

Proof
The solution \(x\in \varUpsilon \) of (7) is given by
Thus \(x(\omega )=W(\omega ,0)x_{0}+\int _{0}^{\omega }W(\omega ,s)f(s)\,ds+ \sum_{0< t_{i}<\omega }W(\omega ,t_{i})c_{i}=cx_{0} \), which is equivalent to \(x_{0}=(c-W(\omega ,0))^{-1} (\int _{0}^{\omega }W( \omega ,s)f(s)\,ds+\sum_{0< t_{i}<\omega }W(\omega ,t_{i})c_{i} ) \) due to \(c\neq W(\omega ,0)\).
Then we have
where
If \(t\in [0,\omega ]\setminus \{t_{1},\ldots ,t_{N}\}\), by (4) and condition \((A_{3})\), we derive
and
Thus we get (8). Since \(x(t_{i})=x(t_{i}^{-})\), we can also get the same result for \(t\in \{t_{1},\ldots ,t_{N}\}\). □
Lemma 3.2

Let \(\tilde{a}:=\max_{t\in [0,\omega ]}\{a(t)\}\) and \(\tilde{b}:=\max_{1\leq i\leq N}\{|1+b_{i}|\}\). Then, for any \(t\in [0,\omega ]\), we have

Proof
Using (9), we derive
If \(\tilde{a}>0\), we get
If \(\tilde{a}\leq 0\), we get
The proof is finished. □
Lemma 3.3

For any \(t\in [0,\omega ]\), we have

Proof
By (9), we have
If \(\tilde{a}> 0\), we obtain
If \(\tilde{a}\leq 0\), we obtain
The proof is complete. □
Now we are ready to study the existence of semilinear impulsive problems. We make the following hypotheses:
\((A_{4})\) :
For any \(t\in \mathbb {R}\) and \(x\in \mathbb {R}\), it holds \(f(t+\omega ,cx)=cf(t,x)\).
\((A_{5})\) :
There exists \(L>0\) such that \(|f(t,x)-f(t,y)|\leq L|x-y|\) for any \(t\in \mathbb {R}\) and \(x,y\in \mathbb {R}\).
\((A_{6})\) :
There exist constants \(K,J>0\) such that \(|f(t,x)|\leq K |x|+J\) for any \(t\in \mathbb {R}\) and \(x\in \mathbb {R}\).
Theorem 3.4

Suppose that \((A_{1})\), \((A_{2})\), \((A_{3})\), \((A_{4})\), and \((A_{5})\) hold. If \(0< LP_{\tilde{a}}<1\), then (1) has a unique \((\omega ,c)\)-periodic solution \(x\in \varPsi _{\omega ,c}\). Moreover, it holds \(\|x\|\leq \frac{f_{0}P_{\tilde{a}}+Q_{\tilde{a}}}{1-LP_{\tilde{a}}} \), where \(f_{0}=\max_{t\in [0,\omega ]}|f(t,0)|\).

Proof
For any \(x\in \varPsi _{\omega ,c}\), i.e., \(x(\cdot +\omega )=cx(\cdot )\), we have \(f(t+\omega ,x(t+\omega ))=f(t+\omega ,cx(t))\), \(t\in \mathbb {R}\). Further, by assumption \((A_{4})\), \(f(t+\omega ,x(t+\omega ))=f(t+\omega ,cx(t))=cf(t,x(t))\), \(t\in \mathbb {R}\). Thus, \(f(\cdot ,x(\cdot ))\in \varPsi _{\omega ,c}\). For more characterization of the \((\omega ,c)\)-periodic functions, see [2, Sect. 2].
Let \(\mathbb {G}:\varUpsilon \to \varUpsilon \) be the operator given by
It is easy to show that \(\mathbb {G}(\varUpsilon )\subseteq \varUpsilon \). For any \(x,y\in \varUpsilon \), we derive
which implies \(\|\mathbb {G}x-\mathbb {G}y\|\leq LP_{\tilde{a}}\|x-y\| \). Noticing \(0< LP_{\tilde{a}}<1\), \(\mathbb {G}\) is a contraction mapping. Thus, \(\mathbb {G}\) defined in (11) has a unique fixed point satisfying \(x(\omega )=cx(0)\) due to Lemma 3.1. Further, by Lemma 2.4, one has \(x\in \varPsi _{\omega ,c}\). From the above, there exists a unique \((\omega ,c)\)-periodic solution \(x\in \varPsi _{\omega ,c}\) of (1).
Moreover, we have
which implies
The proof is finished. □
Theorem 3.5

Suppose that \((A_{1})\), \((A_{2})\), \((A_{3})\), \((A_{4})\), and \((A_{6})\) hold. If \(KP_{\tilde{a}}<1\), then (1) has at least one \((\omega ,c)\)-periodic solution \(x\in \varPsi _{\omega ,c}\).

Proof
Let \(\mathbb {B}_{r}=\{x\in \varUpsilon :\|x\|\leq r\}\), where \(r\geq \frac{J P _{\tilde{a}}+Q_{\tilde{a}}}{1-KP_{\tilde{a}}} \). We consider \(\mathbb {G}\) defined in (11) on \(\mathbb {B}_{r}\). For all \(x\in \mathbb {B}_{r}\) and \(t\in [0,\omega ]\), using Lemmas 3.2 and 3.3, we derive
which implies \(\|\mathbb {G}x\|\leq r\). Thus \(\mathbb {G}(B_{r})\subset B_{r}\). In addition, it is easy to see that \(\mathbb {G}\) is continuous and \(\mathbb {G}(\mathbb {B}_{r})\) is pre-compact. By Schauder’s fixed point theorem, we obtain that (1) has at least one \((\omega ,c)\)-periodic solution \(x\in \varPsi _{\omega ,c}\). □
Examples

Example 4.1
We consider the following semilinear impulsive equation:
where \(\rho \in \mathbb {R}\), \(t_{i}=\frac{(3i-1)\pi }{6}\), \(\omega =\pi \), \(c=-1\), \(a(t)=\cos 2t\), \(f(t,x)=\rho \sin t\cos x\), \(b_{i}= \frac{1}{2}\sin {\frac{(2i-1)\pi }{2}}\), and \(c_{i}=\cos i\pi \). Clearly, \(t_{i+2}=t_{i}+\pi \), \(b_{i+2}=b_{i}\), \(c_{i+2}=c_{i}\) for all \(i\in \mathbb {N}\), then we obtain \(N=2\), \((A_{1})\) and \((A_{2})\) hold. Since \(W(\omega ,0)=\frac{3}{4}\neq -1=c\), we get \((A_{3})\) holds. Note that \(f(\cdot +\omega ,cx)=f(\cdot +\pi ,-x)=-\rho \sin \cdot \cos x=-f( \cdot ,x)=cf(\cdot ,x)\), we get \((A_{4})\) holds. \(|f(t,x)-f(t,y)| \leq |\rho ||x-y|\), then we get \(L=|\rho |\) and \((A_{5})\) holds. In addition, \(\tilde{a}=1\), \(\tilde{b}=\frac{3}{2}\), \(P_{\tilde{a}}=\frac{18 \pi e^{\pi }}{7}\doteq 186.939334\), and \(Q_{\tilde{a}}= \frac{36e^{ \pi }}{7}\doteq 119.009276\).
Letting \(0<|\rho |<\frac{7}{18\pi e^{\pi }}\doteq 0.005349\), we get \(0< LP_{\tilde{a}}<1\), then all the assumptions of Theorem 3.4 hold. So if \(0<|\rho |<\frac{7}{18\pi e^{\pi }}\), problem (12) has a unique π-antiperiodic solution \(x\in \operatorname{PC}([0,\infty ),\mathbb {R})\).
Since \(|f(t,x)|\leq |\rho |\), we get \(K=0\), \(J=|\rho |\), \((A_{6})\) holds, and \(KP_{\tilde{a}}=0<1\). Then all the assumptions of Theorem 3.5 hold for any \(\rho \in \mathbb {R}\). So (12) has at least one π-antiperiodic solution for any \(\rho \in \mathbb {R}\).

Example 4.2
We consider the following semilinear impulsive equation:
where \(\rho \in \mathbb {R}\), \(t_{i}=\frac{3i-1}{6}\), \(\omega =1\), \(c=2\), \(a(t)=\sin 2\pi t\), \(f(t,x)=\rho x \cos (2^{-t}x)\), \(b_{i}=1\) and \(c_{i}=1\). Clearly, \(t_{i+2}=t_{i}+1\), \(b_{i+2}=b_{i}\), \(c_{i+2}=c_{i}\) for all \(i\in \mathbb {N}\), then we obtain \(N=2\), \((A_{1})\) and \((A_{2})\) hold. Since \(W(\omega ,0)=4\neq 2=c\), we get \((A_{3})\) holds. Note that \(f(\cdot +\omega ,cx)=f(\cdot +1,2x)=2\rho x \cdot \cos (2^{-t}x)=2f(\cdot ,x)=cf(\cdot ,x)\), we get \((A_{4})\) holds. Now \(f(\cdot ,x)\) does not satisfy the Lipschitz condition. Since \(|f(t,x)|\leq |\rho ||x|\), we get \(K=|\rho |\), \(J=0\), and \((A_{6})\) holds. Moreover, \(\tilde{a}=1\), \(\tilde{b}=2\), and \(P_{\tilde{a}}=6e\).
Set \(|\rho |<\frac{1}{6e}\doteq 0.061313\). Then \(KP_{\tilde{a}}<1\). Now all the assumptions of Theorem 3.5 hold. Thus, (13) has at least one \((1,2)\)-periodic solution \(x\in \operatorname{PC}([0,\infty ),\mathbb {R})\) if \(|\rho |<\frac{1}{6e}\).
Conclusion
Existence and uniqueness of \((\omega ,c)\)-periodic solutions for the homogeneous and nonhomogeneous linear problems, as well as for semilinear time varying impulsive differential equations, are established. In a forthcoming work, we shall extend the study to \((\omega ,c)\)-periodic solutions for nonlinear impulsive evolution systems in infinite dimensional spaces as follows:
where the linear operator \(\{C(t):t\geq 0\}\) generates a strongly continuous evolutionary process \(\{U(t,s),t\geq s\geq 0\}\) on a Banach space
X. D is a bounded linear operator and \(d_{i}\in X\). Motivated by [11,12,13,14,15], we shall also consider \((\omega ,c)\)-periodic delay differential equations with non-instantaneous impulses.

References

1. Floquet, G.: Sur les équations différentielles linéaires à coefficients périodiques. Ann. Sci. Éc. Norm. Supér. 12, 47–88 (1883)
2. Alvarez, E., Gómez, A., Pinto, M.: \((\omega ,c)\)-periodic functions and mild solutions to abstract fractional integro-differential equations. Electron. J. Qual. Theory Differ. Equ. 16, 1 (2018)
3. Akhmet, M.U., Kivilcim, A.: Periodic motions generated from nonautonomous grazing dynamics. Commun. Nonlinear Sci. Numer. Simul. 49, 48–62 (2017)
4. Al-Islam, N.S., Alsulami, S.M., Diagana, T.: Existence of weighted pseudo anti-periodic solutions to some non-autonomous differential equations. Appl. Math. Comput. 218, 1–8 (2012)
5. Bainov, D.D., Simeonov, P.S.: Impulsive Differential Equations: Periodic Solutions and Applications. Wiley, New York (1993)
6. Cooke, C.H., Kroll, J.: The existence of periodic solutions to certain impulsive differential equations. Comput. Math. Appl. 44, 667–676 (2002)
7. Alvarez, E., Díaz, S., Lizama, C.: On the existence and uniqueness of \((N,\lambda )\)-periodic solutions to a class of Volterra difference equations. Adv. Differ. Equ. 2019, 105 (2019)
8. Agaoglou, M., Fečkan, M., Panagiotidou, A.P.: Existence and uniqueness of \((\omega ,c)\)-periodic solutions of semilinear evolution equations. Int. J. Dyn. Syst. Differ. Equ. (2018)
9. Li, M., Wang, J., Fečkan, M.: \((\omega ,c)\)-periodic solutions for impulsive differential systems. Commun. Math. Anal. 21, 35–46 (2018)
10. Bainov, D.D., Simeonov, P.S.: Impulsive Differential Equations: Asymptotic Properties of the Solutions. World Scientific, Singapore (1995)
11. You, Z., Wang, J., O’Regan, D., Zhou, Y.: Relative controllability of delay differential systems with impulses and linear parts defined by permutable matrices. Math. Methods Appl. Sci. 42, 954–968 (2019)
12. Wang, J.: Stability of noninstantaneous impulsive evolution equations. Appl. Math. Lett. 73, 157–162 (2017)
13. Wang, J., Ibrahim, A.G., O’Regan, D., Zhou, Y.: Controllability for noninstantaneous impulsive semilinear functional differential inclusions without compactness. Indag. Math. 29, 1362–1392 (2018)
14. Yang, D., Wang, J., O’Regan, D.: On the orbital Hausdorff dependence of differential equations with non-instantaneous impulses. C. R. Acad. Sci. Paris, Ser. I 356, 150–171 (2018)
15. Tian, Y., Wang, J., Zhou, Y.: Almost periodic solutions of non-instantaneous impulsive differential equations. Quaest. Math. (2018). https://doi.org/10.2989/16073606.2018.1499562
Acknowledgements
The authors are grateful to the referees for their careful reading of the manuscript and their valuable comments.
Funding
This work is partially supported by the National Natural Science Foundation of China (11671339).
Ethics declarations

Competing interests
The authors declare that they have no competing interests.
Additional information

Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
Híradástechnika PTK-1072
In 1978 or 1979, the Hungarian microelectronics company Híradástechnika began manufacturing an OEM version of the Commodore PR-100 programmable calculator. Until then, programmable calculators, privately imported, cost more than the average price of a new car in Hungary. With the PTK-1072, suddenly it was possible for a high-school geek like myself to acquire one. Wow!
Needless to say, in 1979 I had no access to resources such as the excellent book Numerical Recipes in C. However, I found another book on differential and integral calculus (A differenciál- és integrálszámítás elemei, by Pál Szász, Budapest, 1955) that contained an extensive discussion on the Gamma function.
As part of this discussion, the book provided a variation of the Gamma function's Maclaurin series:
\[2\ln\Gamma(1+x)=\ln\frac{1-x}{1+x}+\ln\frac{\pi x}{\sin\pi x}+2(1-C)x+2\sum\limits_{n=1}^\infty\left[\left(1-\sum\limits_{\nu=1}^\infty\frac{1}{\nu^{2n+1}}\right)\frac{x^{2n+1}}{2n+1}\right].\]
This approximation converges fairly quickly for arguments $1+x$ between 0.5 and 1.5, i.e., for $|x|\lt 0.5$.
$C$ is Euler's constant, defined as the limit
\[\lim\limits_{N\rightarrow\infty}\left(\sum\limits_{n=1}^N\frac{1}{n}-\ln N\right)=0.577~215~664~901~532~860~607...\]
Values of this constant can be easily obtained from the literature. That is not the case for the remaining coefficients in this formula. My poor PTK-1072 ran for several weeks to obtain an approximation of \(\sum\limits_{\nu=2}^\infty 1/\nu^3\) to 8 digits of precision. In the end, I obtained the following values for the various coefficients:
\begin{align}x&=0.8455686582,\\
x^3&=-0.1347045888,\\ x^5&=-1.477110203\times 10^{-2},\\ x^7&=-2.38550782\times 10^{-3},\\ x^9&=-4.463095166\times 10^{-4},\\ x^{11}&=-8.985247346\times 10^{-5}.\end{align}
As it turns out, my calculations were not entirely accurate. Being equipped with a dual-processor Pentium-II workstation, I no longer find it problematic to calculate these coefficients to a high degree of precision. Whereas the PTK-1072 ran for weeks to add a sum of a few thousand values, I can add several million values in a matter of seconds. Here are my modern results:
\begin{align}x&=0.8455686702,\\
x^3&=-0.1347046021,\\ x^5&=-1.477110206\times 10^{-2},\\ x^7&=-2.385507823\times 10^{-3},\\ x^9&=-4.463095169\times 10^{-4},\\ x^{11}&=-8.985247348\times 10^{-5}.\end{align}
As I mentioned, the formula above can be used to obtain the logarithm of the Gamma function for arguments between 0.5 and 1.5. The function's value for other arguments can be calculated using the following relationship:
\[\Gamma(1+x)\Gamma(1-x)=\frac{\pi x}{\sin\pi x}.\]
With this and the recurrence relationship, it is now possible to calculate the Gamma function for any real argument. (I have never investigated whether the Maclaurin series can be used for any complex argument.)
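As a modern cross-check (this code is not the original PTK-1072 program), the series can be coded directly. I assume here that the linear and summation terms carry a factor of 2, which is exactly what reproduces the coefficients $2(1-C)\approx 0.8455686702$ and $2(1-\zeta(3))/3\approx -0.1347046021$ listed above:

```python
import math

# Cross-check of the series for 2*ln Gamma(1+x) against math.lgamma.
C = 0.5772156649015329  # Euler's constant

def zeta(s, terms=100_000):
    # brute-force partial sum; tail error is below 1/(2*terms**2) for s >= 3
    return sum((1.0 / nu) ** s for nu in range(1, terms + 1))

ZETA = {2 * n + 1: zeta(2 * n + 1) for n in range(1, 13)}

def ln_gamma_1px(x):
    """ln Gamma(1+x) from the series, for 0 < |x| < 1."""
    s = math.log((1 - x) / (1 + x)) + math.log(math.pi * x / math.sin(math.pi * x))
    s += 2 * (1 - C) * x
    s += sum(2 * (1 - ZETA[2 * n + 1]) * x ** (2 * n + 1) / (2 * n + 1)
             for n in range(1, 13))
    return s / 2

for x in [0.1, 0.25, 0.4, -0.3]:
    assert abs(ln_gamma_1px(x) - math.lgamma(1 + x)) < 1e-8
```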
Sadly, I no longer have my old notebook with my PTK-1072 programs. However, I do have a TI-59 magnetic card, still readable, that contains a variant of my Gamma function algorithm. From this, I was able to reconstruct my original program, shown below.
Note that you need to set memory registers 4-9 to the values specified before running the program (the coefficients shown in the listing below are the ones I calculated in 1979.) To enter values with ten-digit precision, use key sequences like this one:
0.1347045 +/− − 8.88 EE 8 +/− = M 4
In order for the program to work properly, you must first set the calculator to radian mode by pressing F rad. To use the function, enter the argument, then hit R/S.
M4 = −.1347045888
M5 = −1.477110203e−2
M6 = −2.38550782e−3
M7 = −4.463095166e−4
M8 = −8.985247346e−5
M9 = 0.8455686582

85 00 −81 01 174 02 ×51 03 M91 04 095 05 =51 06 M81 07 174 08 ×52 09 MR62 10 884 11 +52 12 MR61 13 774 14 ×52 15 MR81 16 184 17 +52 18 MR73 19 674 20 ×52 21 MR81 22 184 23 +52 24 MR72 25 574 26 ×52 27 MR81 28 184 29 +52 30 MR71 31 474 32 ×52 33 MR81 34 184 35 +52 36 MR63 37 974 38 ×52 39 MR91 40 084 41 +64 42 (45 43 π74 44 ×52 45 MR91 46 075 47 ÷55 48 x-y22 49 sin74 50 ×64 51 (81 52 185 53 −52 54 MR91 55 065 56 )75 57 ÷64 58 (52 59 MR91 60 084 61 +81 62 165 63 )65 64 )32 65 ln95 66 =21 67 F32 68 ex35 69 √
I'm trying to understand Milnor's proof of the existence of exotic 7-spheres.
Milnor finds his examples among $S^{3}$ bundles over $S^{4}$ (with structure group $SO(4)$ ). Such a bundle can be described as follows:
Given $M$, an $S^{3}$ bundle over $S^{4}$, if we restrict $M$ to the northern (or southern) hemisphere of $S^{4}$, it must trivialize since each hemisphere is contractible. Hence, we can build $M$ by specifying, for each point $p$ in $S^{3}$ = equator of $S^{4}$ = intersection of northern and southern hemispheres, an element of $SO(4)$ which glues $p\times S^{3}$ in the northern hemisphere to $p\times S^{3}$ in the southern hemisphere.
This defines a function $f:S^{3}\rightarrow SO(4)$, which is known as the clutching function for $M$. By usual fiber bundle theory, the isomorphism type of $M$ only depends on the homotopy class of $f$.
$SO(4)$ is double covered by $S^3\times S^3$, and hence $\pi_3(SO(4)) = \mathbb{Z}\oplus\mathbb{Z}$. Thus, $f$ is really determined (at least, up to homotopy) by an ordered pair of integers (i,j).
Now, as the bundles have structure group $SO(4)$, it makes sense to talk about the Pontryagin classes of $M$. In Milnor's proof of the existence of exotic spheres, he needs to argue that $p_1(M) = \pm 2(i-j)$. His first step in this argument is that "clearly $p_1(M)$ is a linear function of $i$ and $j$."
It IS clear to me that the Pontragin classes associated to $(ni, nj)$ for $n\in \mathbb{Z}$ will depend linearly on $n$. For, if we let $N_{i,j}$ denote the principal $SO(4)$ bundle over $S^{4}$ corresponding to $(i,j)$, then $N_{ni,nj}$ is clearly obtained as the pullback of $N_{i,j}$ via a degree $n$ map from $S^{4}$ to itself.
However, it's not clear to me why $p_1(M)$ is additive in $(i,j)$. Am I missing something simple?
And while we're talking about it, is more true? That is, for any sphere bundle over a sphere, say $S^{k}\rightarrow E\rightarrow S^{n}$, should the characteristic classes (Pontryagin, Stiefel-Whitney, Euler) be linear in the clutching function?
For example, we can think of $p_1$ as a map from $\pi_{n-1}(SO(k+1))\rightarrow H^{4}(S^{n})$. Is this map a homomorphism? How about for the other characteristic classes? |
Back to Unconstrained Optimization.
The line search and trust-region techniques are suitable if the number of variables \(n\) is not too large, since the cost per iteration is of order \(n^3\). Implemented algorithms for problems with a large number of variables tend to use iterative techniques for obtaining a direction \(d_k\) in a line-search method or a step \(s_k\) in a trust-region method. These techniques are usually called truncated Newton methods because the iterative technique is stopped (truncated) as soon as a termination criterion is satisfied.
For example, some algorithms use a line search method in which the direction \(d_k\) satisfies
\[\parallel \nabla^2 f(x_k) d_k + \nabla f(x_k) \parallel \leq \eta_k \parallel \nabla f(x_k) \parallel \] for some \(\eta_k \in (0,1)\).
A similar idea can be used in the context of trust-region methods. Conjugate gradient algorithms mesh well with truncated Newton methods because of their desirable numerical properties. Preconditioning is necessary in order to improve the efficiency and reliability of the conjugate gradient method; effective preconditioners include those based on the incomplete Cholesky factorization and symmetric successive overrelaxation.
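A minimal sketch of this idea (not taken from any particular implementation): compute an inexact Newton direction by conjugate gradients, truncating once the residual test above is satisfied. No preconditioning is applied here.

```python
import numpy as np

# Truncated Newton step: solve H d = -g approximately with conjugate
# gradients, stopping once ||H d + g|| <= eta * ||g||.
def truncated_newton_step(H, g, eta=0.1, max_iter=100):
    d = np.zeros_like(g)
    r = -g - H @ d                 # residual of H d = -g
    p = r.copy()
    tol = eta * np.linalg.norm(g)
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Hp = H @ p
        alpha = (r @ r) / (p @ Hp)
        d = d + alpha * p
        r_new = r - alpha * Hp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return d

# Quadratic model test: f(x) = 0.5 x^T H x, with gradient g = H x.
H = np.array([[4.0, 1.0], [1.0, 3.0]])
g = H @ np.array([1.0, 2.0])       # gradient at x = (1, 2)
d = truncated_newton_step(H, g, eta=1e-2)
assert np.linalg.norm(H @ d + g) <= 1e-2 * np.linalg.norm(g)
```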
Last updated: October 10, 2013 |
The electric field $\bf{E}$ represents how much force would act on a particle at a certain position per unit charge. However, if we actually place a particle in that position, the electric field will have a singularity there (because of the $\frac{1}{r^2}$ in Coulomb's law). Isn't this kind of a paradox? In my eyes, this makes the concept of electric field useless, because it cannot be used to calculate the force on a particle.
It's true that a point particle with finite charge is problematic in electromagnetism because of the infinite field and associated energy near such a particle. However, we don't need that concept in order to make a defining statement about the electric field. Rather, we can use $$ {\bf E} = \lim_{r \rightarrow 0} \frac{\bf f}{q} $$ where $\bf f$ is the force on a charged sphere of radius $r$ with a finite charge density $\rho$ independent of $r$, and $q = (4/3) \pi r^3 \rho$ is the charge on the sphere. This charge $q$ will tend to zero as the radius does, and it does so sufficiently quickly that no infinities arise and everything is ok.
You're forgetting one thing: a particle cannot feel its own electric field, so a point charge that generates a $1/r^2$ field doesn't do anything unless acted upon by an external field. You also can't place a particle at $r=0$ of another particle's $1/r^2$ electric field, because, well, there's already a particle there. (Also, how are you going to get it there, even if you could? It takes so much energy to even get close that you're leaving the realm of classical electromagnetism when you try.)
Isn't this kind of a paradox?
Consider two point charges, $q_1$ and $q_2$, in the vacuum with separation vector $\mathbf{r}_{12}$. Coulomb's Law for the force on charge $q_2$:
$$\mathbf{F}_2=q_2\frac{q_1}{4\pi\epsilon_0}\frac{\hat{\mathbf{r}_{12}}}{|\mathbf{r}_{12}|^2}=q_2\mathbf{E}_1$$
Thus, the force on charge $q_2$ is due to the electric field of charge $q_1$ only. Similarly,
$$\mathbf{F}_1=q_1\frac{q_2}{4\pi\epsilon_0}\frac{\hat{\mathbf{r}_{21}}}{|\mathbf{r}_{21}|^2}=q_1\mathbf{E}_2$$
the force on charge $q_1$ is due to the electric field of charge $q_2$ only. This easily generalizes to $N$ point charges; the force on charge $q_n$ is the vector sum of the forces due to the electric field of each of the other $N-1$ charges.
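The vector-sum generalization can be written out directly. The following sketch (the function name and use of SI constants are mine, not from the answer) computes the net Coulomb force on charge $q_n$ from the other $N-1$ charges, skipping the charge's own field:

```python
import numpy as np

EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
K = 1 / (4 * np.pi * EPS0)   # Coulomb constant, ~8.99e9 N m^2/C^2

def force_on(n, charges, positions):
    """Net Coulomb force (N) on charge n from all other point charges."""
    F = np.zeros(3)
    for m, (q, r) in enumerate(zip(charges, positions)):
        if m == n:
            continue  # a charge feels no force from its own field
        d = positions[n] - r
        F += K * charges[n] * q * d / np.linalg.norm(d) ** 3
    return F

# Two 1 C charges 1 m apart: force on the second is K newtons, repulsive
charges = [1.0, 1.0]
positions = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
F = force_on(1, charges, positions)
```

The `m == n` guard is exactly the point of the answer: the self-term is excluded by construction, so no singularity ever enters the sum.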
You may (or may not) be familiar with the notion of a test charge, which 'feels' the electric field due to other charges but has no significant electric field of its own. Armed with this abstraction, one can say that the (total) electric field at a point is the force per unit charge at that point. Indeed, from the Wikipedia article Electric field
The electric field is defined mathematically as a vector field that associates to each point in space the (electrostatic or Coulomb) force per unit of charge exerted on an infinitesimal positive test charge at rest at that point.
(emphasis mine)
You should distinguish the interaction of a charge with another charge from its self-interaction. For the first case there is no issue. For the second case there are issues: for a classical point particle the self-interaction energy diverges, so you have to assume a finite radius. If you assume a homogeneous spherical distribution and equate the self-energy to the rest energy, you find about 2.8 femtometers for an electron. See https://en.m.wikipedia.org/wiki/Classical_electron_radius. However, there is no experimental evidence for a finite value of the electron radius. As far as high-energy physicists know, it is a point particle.
The basic reason here is that "electric field", like all theories in physics and science, is a model for reality, not "reality" itself, to which we have no direct access, and thus what one should really be asking here is what it is intended to model and how best to conceptualize that model. A poster here named @Cort Ammon likes to emphasize points along this line. Science doesn't tell us what things "really" are - insofar as its "pictures" are concerned, it really is a social construct (sorry, but those "useless" degrees aren't useless). What isn't a construct is what it tells us we can and can't do, or what will happen if we do something. Photons may be a social construct, and if you want you can imagine little green gnomes instead, but it is not a construct that if you put your hand under a magnifier focusing the Sun (DON'T), you'll get a nasty little mark. The former is a framework we have in our heads to think about the latter - why and what happens during it, and perhaps also why such a burn may carry a slightly greater chance of cancer in the future (and that framework works a bit better than green gnomes do).
An electric field is such a construct whose initial purpose is to model one kind of phenomenon: how an electrically-charged object moves when placed in a region of space that didn't have one before, by (in simplified terms) putting little arrows at each place in space that point in the direction in which the object seems compelled to move, and making the contours of such mathematically precise with the tools originally developed by René Descartes and Pierre de Fermat, known as analytic geometry.
Because of that, it is an answer to a kind of "counterfactual" question, and hence what the arrows of the electric field you are describing in your post do - once you've added the second object as producing a field - is describe how a hypothetical third object not already present there would move. If we want to talk of the motion of the second object, we thus need to consider the field from the first alone, without adding its "own" field in as you just did.
You might also compare Cort Ammon's answer at the top of the linked thread, as it touches on the philosophical blurb at the beginning.
When the wavefunction \(\psi\) associated with a particle has non-zero values only on points along a circle of radius \(r\), the eigenvalues \(p\) (of the momentum operator \(\hat{P}\)) are quantized—they come in discrete multiples of \(\frac{ℏ}{r}\), so that \(p=n\frac{ℏ}{r}\) where \(n=1,2,…\) Since the eigenvalues for angular momentum are \(L=pr=nℏ\), it follows that angular momentum is also quantized.
Newton's second law describes how the classical state {\(\vec{p_i}, \vec{R_i}\)} of a classical system changes with time, given the initial configuration \(\vec{R_i}\) and the initial momenta \(\vec{p_i}\). We'll see that Schrödinger's equation is the quantum analogue of Newton's second law: it describes the time-evolution of a quantum state \(|\psi(t)⟩\) based on two inputs, the energy of the system and its initial state.
In this section, we'll begin by seeing how Schrodinger's time-independent equation can be used to determine the wave function of a free particle. After that, we'll use Schrodinger's time-independent equation to solve for the allowed, quantized wave functions and allowed, energy eigenvalues of a "particle in a box"; this will be useful later on as a qualitative understanding of the quantized wave functions and energy eigenvalues of atoms.
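The particle-in-a-box energies the section builds toward are the standard textbook result \(E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}\); the short check below (the function name and the 1 nm box are illustrative) shows the characteristic \(n^2\) spacing of the levels:

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg

def box_energy(n, m, L):
    """Quantized energy eigenvalue E_n = n^2 pi^2 hbar^2 / (2 m L^2)."""
    return (n * np.pi * HBAR) ** 2 / (2 * m * L ** 2)

# Energies grow as n^2: E_2/E_1 = 4, E_3/E_1 = 9
E1 = box_energy(1, M_E, 1e-9)  # electron in a 1 nm box
ratio = box_energy(2, M_E, 1e-9) / E1
```

The widening gaps between successive levels are the qualitative feature carried over later to atoms.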
In general, if a quantum system starts out in any arbitrary state, it will evolve with time according to Schrödinger's equation such that the probability \(P(L)\) changes with time. In this lesson, we'll prove that if a quantum system starts out in an energy eigenstate, then the probability \(P(L)\) of measuring any value \(L\) of a physical quantity will not change with time.
Exercise: Let $f$ be entire and non-constant. For any positive real number $c>0$ show that the closure of $\{ z:|f(z)|<c\}$ is the set $\{z:|f(z)|\leq c\}$. Solution: Let $\varepsilon>0$ and define $A:=\{ z:|f(z)|<c\}$. Then, because $f$ is continuous (via entire) and non-constant, there exists $z\in A$ such that
$$B(c;\varepsilon)\cap A \neq \varnothing .$$
That is to say that $c$ is a limit point of $A$. Since $A^-$ must contain all of its limit points, we have that $A^-=\{z:|f(z)|\leq c\}$.
Is my solution correct? I have another way of doing it using a sequence $z_n\to z$, but I would prefer to use the above if it is correct. The reason I am feeling unsure about my solution is that I am not sure how we know (geometrically) that $|f(z)|$ goes all the way up to the boundary (i.e. $c$). Is it safe to say that $|f(z)|$ gets infinitely close to $c$ but does not touch it because $f$ is entire & non-constant? FYI I am new at these concepts so please be as detailed as possible. Thank you in advance!
Note: I am using the notation $A^-$ to indicate the closure of $A$. This is the notation used in my text (Complex, Conway) and I suspect it is used as to not be confused with the complex conjugate. |
If a set $A$ has the same cardinality as an ordinal $\alpha$, then there exists a bijection $f:\alpha\to A$, so $A$ is indexed by $\alpha$ and hence well-ordered. Therefore a choice function $g:\mathcal{P}(A)\to A$ exists.
Therefore, if the Axiom of Choice (AC) is false, then there must exist a set $A$ such that no ordinal has the same cardinality as $A$.
So the equivalence class of sets with the same cardinal is larger if AC is false than if AC is true.
As a result, the truth or falsity of the Continuum Hypothesis (CH) needs to be considered separately with AC and without AC.
Is the reasoning above correct? And are there really two cases to consider for CH?
EDIT: Checking my idea about "larger": let two sets satisfy $X\sim Y$ iff they have the same cardinality; $\sim$ is an equivalence relation. Denote the class of all the equivalence classes under AC by $S_1$, and the class of all the equivalence classes without AC by $S_2$. Clearly $S_1\subseteq S_2$, since $S_2$ has elements not represented by any element of the class Ord of ordinals.
This is really not formal, since it depends on the model we use. |
There's a legend, so well-known that it's almost a cliche, about the wise man who invented chess. When asked by the great king what reward he wanted, he replied that he'd be satisfied by a chessboard full of rice: one grain on the first square, two on the second, four on the third, doubling each time.
The king, of course, laughed at his modest demands, and told his people to make it so.
His people nervously told the king that actually, that was quite a lot of rice, and if he knew about his Core 2 geometric sequences, he wouldn't have been so badly duped. After all, $S_n = \frac{ a(1-r^n)}{1-r}$. Here $a=1$, $r=2$ and $n=64$, so that works out to ${2^{64} -1}$, which is 18,446,744,073,709,551,615 grains of rice altogether.
"Do we have that much rice?" asked the king.
"Well, sire, that's $1.8\times 10^{19}$ grains, and there are about $3.6 \times 10^{6}$ grains in a tonne."
"So it's, what, $0.5 \times 10^{13}$... five trillion tonnes?"
"Very good, sire."
"Do we have that much rice?"
"I'm afraid not, sire - even looking far into the future, say in the early 21st century, that'll be roughly the entire worldwide crop for a decade."
"Oh."
"One square centimetre of rice," said the king's people, "is about ten grains."
"So 18 quintillion grains needs $1.8 \times 10^{18} \text {cm}^2$?"
"Yes, sire, although we should convert that into more sensible units."
"Fine. A metre squared is 100... no! 10,000 centimetres squared, which takes us down to $1.8 \times 10^{14} \text{m}^2$."
"Still a little... unwieldy, sire."
"Fine. Let's take it down another million by talking about kilometres squared, so it's $1.8 \times 10^{8} \text {km}^2.$ Is that a big number?"
"It's quite big, sire."
"How big is the world?"
"The world, sire? I don't have that information to hand - but we can work it out. The world's circumference is about $4\times 10^{4}$ kilometres, so its radius is that divided by $2\pi$."
"$6.3\times 10^{3}\text {km}$?" guessed the king, who'd had some Ninja training.*
"Yes, sire. So the surface area is..."
"$4\pi r^2$," interrupted the king. "$r^2$ is about $4 \times 10^7$, so it's roughly $5 \times 10^8 \text{km}^2$."
"But of course, only a third of the Earth's surface is land."
"Correct, sire."
"Which is $1.7 \times 10^{8} \text{km}^2$. So the wise man wants enough rice to cover pretty much every landmass on the planet to a depth of one grain. Fine. I think this can be easily solved."
And the king had the wise man's head chopped off.
Nobody likes a smartarse.
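For readers who want to check the royal arithmetic themselves, the whole dialogue fits in a few lines (the land-area figure of $5.1\times 10^{8}\ \text{km}^2$ for the Earth's surface is the modern value; everything else follows the story's own estimates):

```python
grains = 2**64 - 1               # 1 + 2 + 4 + ... + 2^63, the full board
tonnes = grains / 3.6e6          # ~3.6 million grains per tonne
area_km2 = grains / 10 / 1e10    # ~10 grains per cm^2; 10^10 cm^2 per km^2
land_km2 = 5.1e8 / 3             # roughly a third of Earth's surface is land

print(grains)                    # 18446744073709551615
print(round(tonnes / 1e12, 1))   # about 5.1 trillion tonnes
print(round(area_km2 / 1e6))     # about 184 million km^2 of single-grain rice
```

Which is indeed a little more than all the land there is.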
* Thanks to Aidan for working this out with me. |
Research | Open Access

Finite-time control and synchronization for memristor-based chaotic system via impulsive adaptive strategy

Advances in Difference Equations, volume 2016, Article number: 101 (2016)
Abstract
This paper investigates the stabilization and synchronization problems of a memristor-based Chua chaotic system in a finite time. A lemma concerning the finite-time stability of an impulsive system is proposed by extending the finite-time stability theory. Then some finite-time stabilization and synchronization criteria are presented which guarantee finite-time stabilization and synchronization for the model considered. Finally, the efficiency of the control scheme is demonstrated by simulation examples.
Introduction
In 1971, the memristor, considered to be the missing fourth passive circuit element, was postulated [1]. However, this important postulate attracted little attention for almost 40 years, until 2008, when Hewlett-Packard Labs announced the development of a memristor based on nanotechnology [2]. As we know, the memristor takes its place alongside the three other existing elements: the resistor, the capacitor, and the inductor. Increasing focus has been put on the memristor for its potential applications in programmable logic, signal processing, neural networks, and so on [3].
Moreover, as a novel element, circuits based on the memristor exhibit many interesting phenomena, and research on memristor-based chaotic circuits has recently become a focal topic [4–10]. In [4], the author presented a novel fourth-order memristor-based Chua oscillator by replacing Chua's diode with an active two-terminal circuit. The stabilization problem of a memristor-based chaotic system was investigated in [10]. As the most important phenomenon, synchronization was also discussed: in [9], the adaptive synchronization problem of memristor-based Chua circuits was investigated.
As time goes on, more and more researchers have begun to realize the important role of the synchronization time. To attain a high convergence speed, many effective methods have been introduced, and finite-time control is one of them; finite-time synchronization means optimality in convergence time. Much research work has been done on finite-time chaos synchronization (see for instance [11–22] and the references therein). However, the finite-time synchronization problem has not been fully investigated in the literature, and it still remains open. Motivated by the above discussion, we investigate the finite-time synchronization problem for a memristor-based Chua circuit. Based on the finite-time stability theory, a novel lemma which guarantees that the impulsive system is finite-time stable is presented. Then an impulsive [16–18] adaptive control law is proposed to realize finite-time synchronization of the model considered. Numerical simulations demonstrate the effectiveness and correctness of these results.
The paper is organized as follows. Some preliminaries are presented in the next section. Section 3 proposes the main results of this paper. In Section 4, the numerical simulations are presented, which is followed by the conclusion in Section 5.
Preliminaries
In [6], the author proposed a novel nonlinear circuit in which a flux-controlled memristor replaces the Chua diode. Figure 1 shows a memristor-based Chua oscillator with a flux-controlled memristor.
Applying Kirchhoff's voltage and current laws and the volt-ampere relationships of the components, the state equation of the memristor-based Chua chaotic system can be described as follows:
For convenience, letting \(x_{1} = u_{1}\), \(x_{2} = u_{2}\), \(x_{3} = i_{3}\), \(x_{4} = \phi\), \(\alpha = 1/C_{1}\), \(\beta = 1/L_{1}\), \(\gamma = r/L\), \(\xi = G\), \(C_{2} = 1\), and \(R = 1\), then the model can be rewritten as the following equation:
If we set \(\alpha = 10\), \(\beta = 100 / 7\), \(\gamma = 0.1\), \(\xi = 9 / 7\), \(a = 1 / 7\) and \(b = 2 / 7\), and the initial values are \(( 10^{ - 10},0,0, - 0.515 )\), then system (2) is a chaotic system and the chaotic attractor is shown in Figure 2.
To simplify the computation, letting \(x = [ x_{1},x_{2},x_{3},x_{4} ]^{T}\), system (2) can be described as follows:
where
and \(W ( x_{4} ) = a + 3bx_{4}^{2}\), where a, b, γ, ξ, α, and β are positive constants.
Similar to [7], the nonlinear functions \(q ( \phi )\), \(W ( \phi )\) are given by
Throughout this paper, the following assumption and lemma are necessary for our main results.
Assumption 1
System (2) is a chaotic system, so its state is bounded; we assume that the following bounds hold:
where \(M_{1}\), \(M_{2}\) are real constants.
Lemma 1 Suppose the function \(V(t)\) is continuous and non-negative when \(t \in [ 0,\infty )\) and satisfies the following conditions: where \(\rho > 0\), \(0 < \eta < 1\), \(0 < \delta < 1\), \(k \in \{ 1,2,\ldots,m\}\), a finite set of natural numbers with m a positive integer; then the following inequality holds: where T is a constant which represents the settling time. Proof
Without loss of generality, let \(t_{0} = 0\). In order to prove (7) holds, the following function \(H ( t )\) is constructed:
Clearly, if the function \(H ( t )\) satisfies \(H ( t ) \le 0\), then the equality (7) holds.
One can easily observe that
Next, we will prove that \(H ( t ) \le 0\) holds for \(t \in [ t_{0},t_{1} ]\). Otherwise, there exists \(t^{ *}\) such that
which contradicts (11). Namely, \(H ( t ) \le 0\) holds for \(t \in [ t_{0},t_{1} )\).
When \(t = t_{1}\), we get
It yields
Then we suppose that \(H ( t ) \le 0\) holds for \(t \in [ t_{k - 1},t_{k} ]\). For \(t \in [ t_{k},t_{k + 1} ]\), we have
Main results
In this section, the finite-time control and synchronization problems via an impulsive adaptive strategy are investigated, respectively. Taking the impulsive adaptive strategy into account in (3), one obtains
where \(\Delta x ( t_{k} ) = x ( t_{k}^{ +} ) - x ( t_{k}^{ -} )\), \(x ( t_{k}^{ +} ) = \lim_{t \to t_{k}^{ +}} x ( t )\), \(x ( t_{k}^{ -} ) = \lim_{t \to t_{k}^{ -}} x ( t )\), \(\ell = \{ 1,2, \ldots, n,n_{1}, \ldots, n_{k} \}\), is a finite natural number set. For simplicity, it is assumed that \(x ( t_{k}^{ -} ) = x ( t_{k} )\), which means that \(x ( t_{k} )\) is left continuous. Letting \(u ( t ) = - k_{1}x ( t ) - k_{2}\operatorname{sign} ( x ( t ) )\vert x ( t ) \vert ^{\gamma}\), we have the following theorem.
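The control law \(u ( t ) = - k_{1}x ( t ) - k_{2}\operatorname{sign} ( x ( t ) )\vert x ( t ) \vert ^{\gamma}\) acts componentwise on the state; a minimal sketch (the function name is mine) makes the two terms explicit:

```python
import numpy as np

def finite_time_control(x, k1, k2, gamma):
    """u = -k1*x - k2*sign(x)*|x|**gamma, applied componentwise.

    The fractional power 0 < gamma < 1 gives the non-Lipschitz term
    that drives the state to zero in finite time rather than merely
    asymptotically.
    """
    x = np.asarray(x, dtype=float)
    return -k1 * x - k2 * np.sign(x) * np.abs(x) ** gamma

u = finite_time_control([4.0, -1.0], k1=1.0, k2=1.0, gamma=0.5)
```

With \(x_1 = 4\): \(-k_1 x_1 = -4\) and \(-k_2\operatorname{sign}(4)\,|4|^{0.5} = -2\), giving \(u_1 = -6\).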
Theorem 1 Suppose Assumption 1 holds. There exists a positive constant γ satisfying \(0 < \gamma < 1\) such that the memristor-based chaotic system is finite-time stable if the following conditions hold: (i)
\(q = \lambda_{\max} [ A^{T} + A - ( 2k_{1} + 1 )I ] < 0\);
(ii)
\(d = \lambda_{\max} ( I + B )^{T} ( I + B ) < 1\).
Proof
Construct the following Lyapunov candidate function:
Calculating the derivative along the trajectory of (16) we have
From Assumption 1, one has
where \(J = [ [ a + bM_{4}^{2} ]^{2},0,0,0 ]^{T}\).
where \(q = \lambda_{\max} [ A^{T} + A - ( 2k_{1} + 1 )I ]\). From the fact that \(0 < \gamma < 1\), one obtains
Also
Then, from condition (1) of Theorem 1, we have for \(t \ne t_{k}\)
When \(t = t_{k}\), one obtains
Next, we investigate the finite-time synchronization problem for the memristor-based chaotic system. Based on the drive-response synchronization concept, letting system (3) be the drive system, the response system with control input
u is as follows:
Taking the impulsive adaptive effects into account, the response system (25) is as follows:
where \(e = [ x - y ]^{T} = [ x_{1} - y_{1},x_{2} - y_{2},x_{3} - y_{3},x_{4} - y_{4} ]^{T}\), \(t_{k}\) are the impulsive instants which satisfy \(t_{1} < t_{2} < \cdots < t_{k - 1} < t_{k}\) and \(\lim_{k \to \infty} t_{k} = \infty\). Letting
where the constants \(k_{1}\), \(k_{2}\) are the control strength coefficients to be designed, and the real number γ satisfies \(0 \le \gamma < 1\).
where \(\phi ( e ) = [\alpha W ( y_{4} )y_{1} - \alpha W ( x_{4} )x_{1},0,0,0]^{T}\). Hence, we have the following theorem.
Theorem 2 Suppose Assumption 1 holds. There exists a positive constant γ satisfying \(0 < \gamma < 1\) such that the memristor-based Chua systems (3) and (26) can be synchronized under the impulsive adaptive strategy if the following conditions hold: (i)
\(A^{T} + A - 2k_{1}I - 2abM_{1}M_{2}I < 0\);
(ii)
\(d = \lambda_{\max} ( I + B )^{T} ( I + B ) < 1\).
Proof
Construct the following Lyapunov candidate function
For \(t \in [ t_{k},t_{k + 1} )\), the derivative of \(V ( t )\) along the trajectory of (28) is
From Assumption 1, we have
From the fact that \(0 < \gamma < 1\), one obtains
Also
Thus, based on the condition (i) in Theorem 2, we have for \(t \ne t_{k}\)
When \(t = t_{k}\), one obtains
Simulation results
The numerical simulations are carried out using the fourth-order Runge-Kutta method. The initial states of the drive and response systems are \(( 10^{ - 10},0,0,0 )\) and \(( 0,0,0,0 )\). The parameters of the drive systems are \(\alpha = 10\), \(\beta = 100 / 7\), \(\gamma = 0.1\), \(\xi = 9 / 7\), \(a = 1 / 7\), and \(b = 2 / 7\). Solving the inequality in Theorem 2, and choosing \(B = \operatorname{diag}( - 0.9, - 0.9, - 0.9, - 0.9)\), \(k_{1} = 0.02\), \(k_{2} = 0.01\), \(\gamma = 0.3\), the response system synchronizes with the drive system as shown in Figure 3. It is easily shown that the state response curve of the error system is stable.
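The display equations of system (2) do not survive in this copy, so the sketch below uses the standard flux-controlled memristive Chua form (with \(W(x_4)=a+3bx_4^2\) as defined above) as an assumed reconstruction of the right-hand side; the integrator itself is the fourth-order Runge-Kutta method named in the text, with the paper's parameters and initial values:

```python
import numpy as np

# Parameters from the paper; the right-hand side f below is an assumed
# reconstruction of system (2), not a verbatim copy of the (lost) equations.
alpha, beta, gamma, xi = 10.0, 100.0 / 7.0, 0.1, 9.0 / 7.0
a, b = 1.0 / 7.0, 2.0 / 7.0

def f(x):
    x1, x2, x3, x4 = x
    W = a + 3.0 * b * x4 ** 2                  # memductance W(x4)
    return np.array([
        alpha * (x2 - x1 + xi * x1 - W * x1),  # assumed first equation
        x1 - x2 + x3,
        -beta * x2 - gamma * x3,
        x1,
    ])

def rk4_step(x, h):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x = np.array([1e-10, 0.0, 0.0, -0.515])  # initial values used in the paper
h = 1e-3
for _ in range(2000):                     # integrate two time units
    x = rk4_step(x, h)
```

The same stepper, applied to the drive and to the controlled response system, reproduces the error-convergence experiment described above.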
Conclusion
In this paper, the finite-time control and synchronization problems of memristor-based chaotic systems have been investigated. Some novel impulsive adaptive control laws which guarantee that the memristor-based Chua circuit is stabilized and synchronized in finite time have been proposed. Moreover, simulation results were given to verify the effectiveness and feasibility of the method. Our future research will mainly consider time-delay effects on the finite-time stability of memristor-based nonlinear systems.
References

1. Chua, LO: Memristor - the missing circuit element. IEEE Trans. Circuit Theory 18(5), 507-511 (1971)
2. Strukov, DB, Snider, GS, Stewart, DR, Williams, RS: The missing memristor found. Nature 453, 80-83 (2008)
3. Bayat, FM, Shouraki, SB: Programming of memristor crossbars by using genetic algorithm. Proc. Comput. Sci. 3, 232-237 (2011)
4. Itoh, M, Chua, LO: Memristor oscillators. Int. J. Bifurc. Chaos 18, 3183-3206 (2008)
5. Rakkiyappan, R, Sivasamy, R, Li, XD: Synchronization of identical and nonidentical memristor-based chaotic systems via active backstepping control technique. Circuits Syst. Signal Process. 34(3), 763-778 (2015)
6. Wu, HG, Chen, SY, Bao, BC: Impulsive synchronization and initial value effect for a memristor-based chaotic system. Acta Phys. Sin. 64(3), 030501 (2015)
7. Wang, X, Li, CD, Huang, TW, Duan, SK: Predicting chaos in memristive oscillators via harmonic balance method. Chaos 22(4), 043119 (2012)
8. Bao, HB, Cao, JD: Projective synchronization of fractional-order memristor-based neural networks. Neural Netw. 63, 1-9 (2015)
9. Wen, SP, Zeng, ZG, Huang, TW: Adaptive synchronization of memristor-based Chua's circuits. Phys. Lett. A 376, 2275-2780 (2012)
10. Huang, JJ, Li, CD, He, X: Stabilization of a memristor-based chaotic system by intermittent and fuzzy processing. Int. J. Control. Autom. Syst. 11(3), 643-647 (2013)
11. Amato, F, Ariola, M, Abdallah, CT: Finite-time control for uncertain linear systems with disturbance inputs. In: Proceedings of the 1999 American Control Conference, vol. 3, pp. 1776-1780 (1999)
12. Amato, F, Ariola, M, Dorato, P: Finite-time control of linear systems subject to parametric uncertainties and disturbances. Automatica 37(9), 1459-1463 (2001)
13. Amato, F, Ariola, M, Cosentino, C: Finite-time stability of linear time-varying systems: analysis and controller design. IEEE Trans. Autom. Control 55(4), 1003-1008 (2009)
14. Bhat, SP, Bernstein, DS: Finite-time stability of continuous autonomous systems. SIAM J. Control Optim. 38(3), 751-766 (2000)
15. Liu, YG: Global finite-time stabilization via time-varying feedback for uncertain nonlinear systems. SIAM J. Control Optim. 52(3), 1886-1913 (2014)
16. Li, X, Bohner, M, Wang, CK: Impulsive differential equations: periodic solutions and applications. Automatica 52, 173-178 (2015)
17. Li, X, Song, S: Impulsive control for existence, uniqueness and global stability of periodic solutions of recurrent neural networks with discrete and continuously distributed delays. IEEE Trans. Neural Netw. 24, 868-877 (2013)
18. Wen, SP, Huang, TW, Yu, XH, Chen, MZQ, Zeng, ZG: Aperiodic sampled-data sliding-mode control of fuzzy systems with communication delays via the event-triggered method. IEEE Trans. Fuzzy Syst. (2015). doi:10.1109/TFUZZ.2015.2501412
19. Wen, SP, Yu, XH, Zeng, ZG, Wang, JJ: Event-triggering load frequency control for multi-area power systems with communication delay. IEEE Trans. Ind. Electron. 63(2), 1308-1317 (2015)
20. Song, QK, Huang, TW: Stabilization and synchronization of chaotic systems with mixed time-varying delays via intermittent control with non-fixed both control period and control width. Neurocomputing 154, 61-69 (2015)
21. Song, QK, Zhao, ZJ: Stability criterion of complex-valued neural networks with both leakage delay and time-varying delays on time scales. Neurocomputing 171, 179-184 (2016)
22. Cao, JD, Song, QK: Stability in Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. Nonlinearity 19(7), 1601 (2006)

Acknowledgements
The work described in this paper was partially supported by the National Natural Science Foundation of China (Grant No. 61403050) and the Scientific and Technological Research Program of Chongqing Municipal Education Commission (KJ1501412, KJ1501409, KJ1501301), and the Foundation of CQUE (KY201519B, KY201520B).
Additional information Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors drafted the manuscript, and they read and approved the submitted version. |
Sub-headings

```
## Heading 2
### Heading 3
#### Heading 4
##### Heading 5
###### Heading 6
```
Emphasis
```
Italics with *asterisks* or _underscores_.
Bold with **asterisks** or __underscores__.
Combined emphasis with **asterisks and _underscores_**.
Strikethrough with ~~two tildes~~.
```
Ordered lists
```
1. First item
2. Another item
```
Unordered lists
```
* First item
* Another item
```
Images
Images may be added to a page by placing them in your `static/img/` folder and referencing them using one of the following two notations:
A general image:

A numbered figure with caption:
{{< figure src="/img/screenshot.png" title="Figure Caption" >}}
Links
```
[I'm a link](https://www.google.com)
[A post]({{< ref "post/hi.md" >}})
[A publication]({{< ref "publication/hi.md" >}})
[A project]({{< ref "project/hi.md" >}})
[Another section]({{< relref "hi.md#who" >}})
```
To enable linking to a file, such as a PDF, first place the file in your `static/files/` folder and then link to it using the following form:
{{% staticref "files/cv.pdf" "newtab" %}}Download my CV{{% /staticref %}}
The optional `"newtab"` argument for `staticref` will cause the link to be opened in a new tab.
Emojis
See the Emoji cheat sheet for available emoticons. The following serves as an example, but you should remove the spaces between each emoji name and its pair of colons:
I : heart : Academic : smile :
I ❤️ Academic 😄
Blockquote
> This is a blockquote.
This is a blockquote.
Footnotes
```
I have more [^1] to say.

[^1]: Footnote example.
```
I have more¹ to say.

Code highlighting
Pass the language of the code, such as `python`, as a parameter after three backticks:
```python
# Example of code highlighting
input_string_var = input("Enter some data: ")
print("You entered: {}".format(input_string_var))
```
Result:
```python
# Example of code highlighting
input_string_var = input("Enter some data: ")
print("You entered: {}".format(input_string_var))
```
Highlighting options
The Academic theme uses highlight.js for source code highlighting, and highlighting is enabled by default for all pages. However, several configuration options are supported that allow finer-grained control over highlight.js.
The following table lists the supported options for configuring highlight.js, along with their expected type and a short description. A "yes" in the `config.toml` column means the value can be set globally in `config.toml`, and a "yes" in the preamble column means that the value can be set locally in a particular page's preamble.
| option | type | description | config.toml | preamble |
| --- | --- | --- | --- | --- |
| highlight | boolean | enable/disable highlighting | yes | yes |
| highlight_languages | slice | choose additional languages | yes | yes |
| highlight_style | string | choose a highlighting style | yes | no |

Option `highlight`
The `highlight` option allows enabling or disabling the inclusion of highlight.js, either globally or for a particular page. If the option is unset, it has the same effect as if you had specified `highlight = true`. That is, the highlight.js javascript and css files will be included in every page. If you'd like to only include highlight.js files on pages that actually require source code highlighting, you can set `highlight = false` in `config.toml`, and then override it by setting `highlight = true` in the preamble of any pages that require source code highlighting. Conversely, you could enable highlighting globally, and disable it locally for pages that do not require it. Here is a table that shows whether highlighting will be enabled for a page, based on the values of `highlight` set in `config.toml` and/or the page's preamble.
| config.toml | page preamble | highlighting enabled for page? |
| --- | --- | --- |
| unset or true | unset or true | yes |
| unset or true | false | no |
| false | unset or false | no |
| false | true | yes |

Option `highlight_languages`
The `highlight_languages` option allows you to specify additional languages that are supported by highlight.js, but are not considered "common" and therefore are not supported by default. For example, if you want source code highlighting for Go and Clojure in all pages, set `highlight_languages = ["go", "clojure"]` in `config.toml`. If, on the other hand, you want to enable a language only for a specific page, you can set `highlight_languages` in that page's preamble.
The `highlight_languages` options specified in `config.toml` and in a page's preamble are additive. That is, if `config.toml` contains `highlight_languages = ["go"]` and the page's preamble contains `highlight_languages = ["ocaml"]`, then javascript files for both Go and OCaml will be included for that page.
If the `highlight_languages` option is set, then the corresponding javascript files will be served from the cdnjs server. To see a list of available languages, visit the cdnjs page and search for links with the word "languages".
The `highlight_languages` option provides an easy and convenient way to include support for additional languages to be served from a CDN. If serving unmodified files from cdnjs doesn't meet your needs, you can include javascript files for additional language support via one of the methods described in the getting started guide.
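Putting the options together, a `config.toml` fragment might look like the following (the particular values are illustrative, not required):

```toml
# config.toml -- illustrative source-code-highlighting configuration
highlight = true                        # include highlight.js on every page
highlight_languages = ["go", "clojure"] # extra language packs served from cdnjs
highlight_style = "solarized-dark"      # css style served from cdnjs
```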
Option `highlight_style`

The `highlight_style` option allows you to select an alternate css style for highlighted code. For example, if you wanted to use the solarized-dark style, you could set `highlight_style = "solarized-dark"` in `config.toml`.
If the `highlight_style` option is unset, the default is to use the file `/css/highlight.min.css`, either the one provided by the Academic theme, or else the one in your local `static` directory. The `/css/highlight.min.css` file provided by Academic is equivalent to the `github` style from highlight.js.
If the
highlight_style option
is set, then
/css/highlight.min.css is ignored, and the corresponding css file will be served from the cdnjs server. To see a list of available styles, visit the cdnjs page and search for links with the word "styles".
See the highlight.js demo page for examples of available styles.
Not all styles listed on the highlight.js demo page are available from the cdnjs server. If you want to use a style that is not served by cdnjs, just leave highlight_style unset, and place the corresponding css file in /static/css/highlight.min.css.
If you don't want to change the default style that ships with Academic but you do want the style file served from the cdnjs server, set highlight_style = "github" in config.toml.

The highlight_style option is only recognized when set in config.toml. Setting highlight_style in your page's preamble has no effect.
Twitter tweet
To include a single tweet, pass the tweet’s ID from the tweet's URL as parameter to the shortcode:
{{< tweet 666616452582129664 >}}
Youtube
{{< youtube w7Ft2ymGmfc >}}
Vimeo
{{< vimeo 146022717 >}}
GitHub gist
{{< gist USERNAME GIST-ID >}}
Speaker Deck
{{< speakerdeck 4e8126e72d853c0060001f97 >}}
$\rm \LaTeX$ math
$$\left [ – \frac{\hbar^2}{2 m} \frac{\partial^2}{\partial x^2} + V \right ] \Psi = i \hbar \frac{\partial}{\partial t} \Psi$$
Alternatively, inline math can be written by wrapping the formula with only a single
$:
This is inline: $\mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\varepsilon$
Note that Markdown special characters need to be escaped with a backslash so they are treated as math rather than Markdown. For example,
* and
_ become
\* and
\_ respectively.
Multiline equations
The standard LaTeX line break consisting of 2 backslashes needs to be replaced with 6 backslashes:
$$f(k;p\_0^\*) = \begin{cases} p\_0^\* & \text{if }k=1, \\\\\\1-p\_0^\* & \text {if }k=0.\end{cases}$$
$$f(k;p_0^*) = \begin{cases} p_0^* & \text{if }k=1, \\
1-p_0^* & \text{if }k=0.\end{cases}$$

Publication abstracts
As Hugo and Academic attempt to parse TOML, Markdown, and LaTeX content in the abstract, the following guidelines should be followed just for the publication
abstract and
abstract_short fields:
- escape each LaTeX backslash (\) with an extra backslash, yielding \\
- escape each LaTeX underscore (_) with two backslashes, yielding \\_

Hence, abstract = "${O(d_{\max})}$" becomes abstract = "${O(d\\_{\\max})}$".
Table
Code:
| Command | Description |
| ------------------| ------------------------------ |
| `hugo` | Build your website. |
| `hugo serve -w` | View your website. |
Result:
| Command | Description |
|---|---|
| hugo | Build your website. |
| hugo serve -w | View your website. |

Alerts
Alerts are a useful feature that add side content such as tips, notes, or warnings to your articles. They are especially handy when writing educational tutorial-style articles. Use the corresponding shortcodes to enable alerts inside your content:
{{% alert note %}}Here's a tip or note...{{% /alert %}}
This will display the following
note block:
Here's a tip or note...
{{% alert warning %}}Here's some important information...{{% /alert %}}
This will display the following
warning block:
Here's some important information...
Table of Contents
A table of contents may be particularly useful for long posts or tutorial/documentation type content. Use the
{{% toc %}} shortcode anywhere you wish within your Markdown content to automatically generate a table of contents.
Footnote example.
I'm kind of struggling with the concept of primitive roots for non-primes, specifically for $25$ in this case. I was calculating the sequences $2^x \pmod {25}$ and $3^x \pmod{25}$ for each $x$ up to $25$, but at $x = 20$ the sequence starts repeating, and I see that the numbers that can't be obtained are those that aren't coprime with $25$. But then I don't see how $2$, $3$ and the others can be primitive roots, since there are some numbers in the residue class that you just can't obtain from the roots. What I would have concluded is that non-primes simply do not have primitive roots. Am I missing something in the definition of primitive roots here?
Powers of odd primes do have primitive roots. Moreover, if $g$ is a primitive root mod $p$, then $g$ or $g+p$ is a primitive root mod $p^n$.
$2$ and $3$ are primitive roots mod $5$ and so one of $2,3,7,8$ is a primitive root mod $25$. It turns out that $2,3,8$ work but $7$ does not.
Prime powers do have primitive roots. Simply adding $p$ to a known primitive root does not always guarantee a primitive root.
For example, 2 is a primitive root of 25, since it cycles through all of the twenty possible answers before returning to 1. On the other hand, 7 is not, because it only cycles through just four values (7, 24, 18, 1).
There are a smattering of primes where the smallest primitive root of p is not a primitive root of p^2, but they're pretty rare.
If the multiplicative order $\operatorname{ord}_{p^s}(a)=d$, then $\operatorname{ord}_{p^{s+1}}(a)=d$ or $p\cdot d$, where $p$ is an odd prime and $s\ge1$ is an integer.

$\operatorname{ord}_5(2)=4\implies \operatorname{ord}_{25}(2)=4$ or $5\cdot4$.

Now, $2^4\not\equiv1\pmod{25}\implies \operatorname{ord}_{25}(2)=5\cdot4=\phi(25)$.
In fact from Order of numbers modulo $p^2$,
ord $_{25}(2+5r)=5\cdot4$ for $0\le r<5,r\ne1$
as $7^2\equiv-1\pmod{25}\implies7^4\equiv1\iff$ord$_{25}7=4$ |
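The claims above are easy to check by brute force. The following Python sketch (my addition, not part of the original thread) computes multiplicative orders mod 25 and confirms that 2, 3, and 8 are primitive roots while 7 is not:

```python
from math import gcd

def mult_order(a, n):
    """Smallest k >= 1 with a^k = 1 (mod n); requires gcd(a, n) == 1."""
    assert gcd(a, n) == 1
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

phi_25 = 20  # phi(25) = 25 - 5
for a in (2, 3, 7, 8):
    # a is a primitive root mod 25 exactly when its order equals phi(25) = 20
    print(a, mult_order(a, 25), mult_order(a, 25) == phi_25)
```

Running this shows orders 20, 20, 4, 20 for 2, 3, 7, 8 respectively, matching the answer: 7 only cycles through four values before returning to 1.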
Extended mean value theorem
Revision as of 18:37, 29 January 2019
The extended mean value theorem (also called Cauchy's mean value theorem) is usually formulated as:
Let
[math] f, g: [a,b] \to \mathbb{R}[/math]
be continuous functions that are differentiable on the open interval [math](a,b)[/math]. If [math]g'(x)\neq 0[/math] for all [math]x\in(a,b)[/math], then there exists a value [math]\xi \in (a,b)[/math] such that
[math] \frac{f'(\xi)}{g'(\xi)} = \frac{f(b)-f(a)}{g(b)-g(a)}. [/math]

Remark: It seems to be easier to state the extended mean value theorem in the following form:
Let
[math] f, g: [a,b] \to \mathbb{R}[/math]
be continuous functions that are differentiable on the open interval [math](a,b)[/math]. Then there exists a value [math]\xi \in (a,b)[/math] such that
[math] f'(\xi)\cdot (g(b)-g(a)) = g'(\xi) \cdot (f(b)-f(a)). [/math]
This second formulation avoids the requirement that [math]g'(x)\neq 0[/math] for all [math]x\in(a,b)[/math] and is therefore much easier to handle numerically.
The proof is similar, just use the function
[math] h(x) = f(x)\cdot(g(b)-g(a)) - (g(x)-g(a))\cdot(f(b)-f(a)) [/math]
and apply Rolle's theorem.
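As a quick numerical sanity check of this proof sketch, the following Python snippet (my addition, using the hypothetical example f(x) = x³, g(x) = x² on [1, 2]) verifies that h(a) = h(b), so Rolle's theorem applies, and that the second formulation of the theorem holds at ξ = 14/9:

```python
# Verify h(a) == h(b) and the identity f'(xi)*(g(b)-g(a)) == g'(xi)*(f(b)-f(a)).
a, b = 1.0, 2.0
f  = lambda x: x**3
g  = lambda x: x**2
df = lambda x: 3 * x**2   # f'
dg = lambda x: 2 * x      # g'

# The auxiliary function from the proof:
h = lambda x: f(x) * (g(b) - g(a)) - (g(x) - g(a)) * (f(b) - f(a))
assert abs(h(a) - h(b)) < 1e-12  # Rolle's theorem is applicable

# Solving 3*xi^2 * 3 == 2*xi * 7 gives xi = 14/9, which lies in (1, 2):
xi = 14 / 9
assert abs(df(xi) * (g(b) - g(a)) - dg(xi) * (f(b) - f(a))) < 1e-12
```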
The underlying JavaScript code
var board = JXG.JSXGraph.initBoard('box', {boundingbox: [-5, 10, 7, -6], axis:true});
var p = [];
p[0] = board.create('point', [0, -2], {size:2});
p[1] = board.create('point', [-1.5, 5], {size:2});
p[2] = board.create('point', [1, 4], {size:2});
p[3] = board.create('point', [3, 3], {size:2});
// Curve
var fg = JXG.Math.Numerics.Neville(p);
var graph = board.create('curve', fg, {strokeWidth:3, strokeOpacity:0.5});
// Secant
var line = board.create('line', [p[0], p[3]], {strokeColor:'#ff0000', dash:1});
var df = JXG.Math.Numerics.D(fg[0]);
var dg = JXG.Math.Numerics.D(fg[1]);
// Usually, the extended mean value theorem is formulated as
// df(t) / dg(t) == (p[3].X() - p[0].X()) / (p[3].Y() - p[0].Y())
// We can avoid division by zero with that formulation:
var quot = function(t) {
  return df(t) * (p[3].Y() - p[0].Y()) - dg(t) * (p[3].X() - p[0].X());
};
var r = board.create('glider', [
  function() { return fg[0](JXG.Math.Numerics.root(quot, (fg[3]() + fg[2]) * 0.5)); },
  function() { return fg[1](JXG.Math.Numerics.root(quot, (fg[3]() + fg[2]) * 0.5)); },
  graph], {name: '', size: 4, fixed:true, color: 'blue'});
board.create('tangent', [r], {strokeColor:'#ff0000'});
Hi, Can someone provide me some self reading material for Condensed matter theory? I've done QFT previously for which I could happily read Peskin supplemented with David Tong. Can you please suggest some references along those lines? Thanks
@skullpatrol The second one was in my MSc and covered considerably less than my first and (I felt) didn't do it in any particularly great way, so distinctly average. The third was pretty decent - I liked the way he did things and was essentially a more mathematically detailed version of the first :)
2. A weird particle or state that is made of a superposition of a torus region with clockwise momentum and anticlockwise momentum, resulting in one that has no momentum along the major circumference of the torus but still nonzero momentum in directions that are not pointing along the torus
Same thought as you, however I think the major challenge of such simulator is the computational cost. GR calculations with its highly nonlinear nature, might be more costy than a computation of a protein.
However I can see some ways approaching it. Recall how Slereah was building some kind of spaceitme database, that could be the first step. Next, one might be looking for machine learning techniques to help on the simulation by using the classifications of spacetimes as machines are known to perform very well on sign problems as a recent paper has shown
Since GR equations are ultimately a system of 10 nonlinear PDEs, it might be possible the solution strategy has some relation with the class of spacetime that is under consideration, thus that might help heavily reduce the parameters need to consider to simulate them
I just mean this: The EFE is a tensor equation relating a set of symmetric 4 × 4 tensors. Each tensor has 10 independent components. The four Bianchi identities reduce the number of independent equations from 10 to 6, leaving the metric with four gauge fixing degrees of freedom, which correspond to the freedom to choose a coordinate system.
@ooolb Even if that is really possible (I always can talk about things in a non joking perspective), the issue is that 1) Unlike other people, I cannot incubate my dreams for a certain topic due to Mechanism 1 (conscious desires have reduced probability of appearing in dreams), and 2) For 6 years, my dream has yet to show any sign of revisiting the exact same idea, and there are no known instances of either sequel dreams or recurring dreams
@0celo7 I felt this aspect can be helped by machine learning. You can train a neural network with some PDEs of a known class with some known constraints, and let it figure out the best solution for some new PDE after say training it on 1000 different PDEs
Actually that makes me wonder, are the space of all coordinate choices more than all possible moves of Go?
enumaris: From what I understood from the dream, the warp drive shown here may be some variation of the Alcubierre metric with a global topology that has 4 holes in it, whereas the original Alcubierre drive, if I recall, doesn't have holes
orbit stabilizer: h bar is my home chat, because this is the first SE chat I joined. Maths chat is the 2nd one I joined, followed by periodic table, biosphere, factory floor and many others
Btw, since gravity is nonlinear, do we expect if we have a region where spacetime is frame dragged in the clockwise direction being superimposed on a spacetime that is frame dragged in the anticlockwise direction will result in a spacetime with no frame drag? (one possible physical scenario that I can envision such can occur may be when two massive rotating objects with opposite angular velocity are on the course of merging)
Well, I'm a beginner in the study of General Relativity, ok? My knowledge about the subject is based on books like Schutz, Hartle, Carroll and introductory papers. About quantum mechanics I have a poor knowledge yet.
So, what I meant by "gravitational double slit experiment" is: is there a gravitational analogue of the double slit experiment for gravitational waves?
@JackClerk the double slits experiment is just interference of two coherent sources, where we get the two sources from a single light beam using the two slits. But gravitational waves interact so weakly with matter that it's hard to see how we could screen a gravitational wave to get two coherent GW sources.
But if we could figure out a way to do it then yes GWs would interfere just like light wave.
Thank you @Secret and @JohnRennie. But to conclude the discussion, I want to put a "silly picture" here: imagine a huge double slit plate in space close to a strong source of gravitational waves. Then, like water waves and light, we will see the pattern?
So, if the source (like a black hole binary) is sufficiently far away, then in the regions of destructive interference, spacetime would have a flat geometry, and if we put a spherical object in this region the metric will become Schwarzschild-like.
Pardon, I just spent some naive-philosophy time here with these discussions
The situation was even more dire for Calculus and I managed!
This is a neat strategy I have found-revision becomes more bearable when I have The h Bar open on the side.
In all honesty, I actually prefer exam season! At all other times-as I have observed in this semester, at least-there is nothing exciting to do. This system of tortuous panic, followed by a reward is obviously very satisfying.
My opinion is that I need you, Kaumudi, to decrease the probability of the h bar having software system infrastructure conversations, which confuse me like hell and are why I took refuge in the maths chat a few weeks ago
(Not that I have questions to ask or anything; like I said, it is a little relieving to be with friends while I am panicked. I think it is possible to gauge how much of a social recluse I am from this, because I spend some of my free time hanging out with you lot, even though I am literally inside a hostel teeming with hundreds of my peers)
that's true. though back in high school ,regardless of code, our teacher taught us to always indent your code to allow easy reading and troubleshooting. We are also taught the 4 spacebar indentation convention
@JohnRennie I wish I can just tab because I am also lazy, but sometimes tab insert 4 spaces while other times it inserts 5-6 spaces, thus screwing up a block of if then conditions in my code, which is why I had no choice
I currently automate almost everything from job submission to data extraction, and later on, with the help of the machine learning group in my uni, we might be able to automate a GUI library search thingy
I can do all tasks related to my work without leaving the text editor (of course, such text editor is emacs). The only inconvenience is that some websites don't render in a optimal way (but most of the work-related ones do)
Hi to all. Does anyone know where I could write matlab code online (for free)? Apparently another one of my institution's great inspirations is to have a matlab-oriented computational physics course without having matlab on the university's pcs. Thanks.
@Kaumudi.H Hacky way: 1st thing is that $\psi\left(x, y, z, t\right) = \psi\left(x, y, t\right)$, so no propagation in $z$-direction. Now, in '$1$ unit' of time, it travels $\frac{\sqrt{3}}{2}$ units in the $y$-direction and $\frac{1}{2}$ units in the $x$-direction. Use this to form a triangle and you'll get the answer with simple trig :)
@Kaumudi.H Ah, it was okayish. It was mostly memory based. Each small question was of 10-15 marks. No idea what they expect me to write for questions like "Describe acoustic and optic phonons" for 15 marks!! I only wrote two small paragraphs...meh. I don't like this subject much :P (physical electronics). Hope to do better in the upcoming tests so that there isn't a huge effect on the gpa.
@Blue Ok, thanks. I found a way by connecting to the servers of the university( the program isn't installed on the pcs on the computer room, but if I connect to the server of the university- which means running remotely another environment, i found an older version of matlab). But thanks again.
@user685252 No; I am saying that it has no bearing on how good you actually are at the subject - it has no bearing on how good you are at applying knowledge; it doesn't test problem solving skills; it doesn't take into account that, if I'm sitting in the office having forgotten the difference between different types of matrix decomposition or something, I can just search the internet (or a textbook), so it doesn't say how good someone is at research in that subject;
it doesn't test how good you are at deriving anything - someone can write down a definition without any understanding, while someone who can derive it, but has forgotten it probably won't have time in an exam situation. In short, testing memory is not the same as testing understanding
If you really want to test someone's understanding, give them a few problems in that area that they've never seen before and give them a reasonable amount of time to do it, with access to textbooks etc. |
Prototyping the pliable lasso

Introduction
Tibshirani and Friedman (2017) propose a generalization of the lasso that allows the model coefficients to vary as a function of a general set of modifying variables, such as gender, age or time. The pliable lasso model has the form
\[ \begin{equation} \hat{y} = \beta_0{\mathbf 1} + Z\theta_0 + \sum_{j=1}^p(X_j\beta_j + W_j\theta_j) \end{equation} \]
where \(\hat{y}\) is the predicted \(N\times1\) vector, \(\beta_0\) is a scalar, \(\theta_0\) is a \(K\)-vector, \(X\) and \(Z\) are \(N\times p\) and \(N\times K\) matrices containing values of the predictor and modifying variables respectively with \(W_j=X_j \circ Z\) denoting the elementwise multiplication of Z by column \(X_j\) of \(X\).
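Before turning to the objective, the fitted-value formula can be sanity-checked numerically. The following NumPy sketch (my addition, not part of the original post; the dimensions and random parameters are purely illustrative) computes the sum over j of X_j·beta_j + W_j·theta_j both by an explicit loop and in a vectorized form, and confirms they agree:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, K = 100, 50, 4
X = rng.normal(size=(N, p))
Z = rng.integers(0, 2, size=(N, K)).astype(float)

# Hypothetical parameter values, just to exercise the formula.
beta0 = 0.0
theta0 = rng.normal(size=K)
beta = rng.normal(size=p)
Theta = rng.normal(size=(p, K))   # row j is theta_j

# y_hat = beta0 + Z theta0 + sum_j (X_j beta_j + W_j theta_j),
# where W_j = X_j ∘ Z is the elementwise product of column X_j with Z.
y_hat = beta0 + Z @ theta0 + X @ beta
for j in range(p):
    W_j = X[:, [j]] * Z              # shape (N, K)
    y_hat = y_hat + W_j @ Theta[j]

# Equivalent vectorized form: row-wise sum of X ∘ (Z Theta^T).
y_vec = beta0 + Z @ theta0 + X @ beta + np.sum(X * (Z @ Theta.T), axis=1)
assert np.allclose(y_hat, y_vec)
```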
The objective function used for pliable lasso is
\[ J(\beta_0, \theta_0, \beta, \Theta) = \frac{1}{2N}\sum_{i=1}^N (y_i-\hat{y}_i)^2 + (1-\alpha)\lambda\sum_{j=1}^p\biggl(||(\beta_j,\theta_j)||_2 + ||\theta_j||_2\biggr) + \alpha\lambda\sum_{j,k}|\theta_{j,k}|_1. \]
In the above, \(\Theta\) is a \(p\times K\) matrix of parameters with \(j\)-th row \(\theta_j\) and individual entries \(\theta_{j,k}\), and \(\lambda\) is a tuning parameter. As \(\alpha \rightarrow 1\) (but \(<1\)), the solution approaches the lasso solution. The default value used is \(\alpha = 0.5.\)
An R package for the pliable lasso is forthcoming from the authors. Nevertheless, the pliable lasso is an excellent example to highlight the prototyping capabilities of CVXR in research. Along the way, we also illustrate some additional atoms that are actually needed in this example.
The pliable lasso in CVXR
We will use a simulated example from section 3 of Tibshirani and Friedman (2017) with \(n=100\), \(p=50\) and \(K=4\). The response is generated as
\[ \begin{eqnarray*} y &=& \mu(x) + 0.5\cdot \epsilon;\ \ \epsilon \sim N(0, 1)\\ \mu(x) &=& x_1\beta_1 + x_2\beta_2 + x_3(\beta_3 e + 2z_1) + x_4\beta_4(e - 2z_2);\ \ \beta = (2, -2, 2, 2, 0, 0, \ldots) \end{eqnarray*} \]
where \(e=(1,1,\ldots,1)^T\).
## Simulation data.
set.seed(123)
N <- 100
K <- 4
p <- 50
X <- matrix(rnorm(n = N * p, mean = 0, sd = 1), nrow = N, ncol = p)
Z <- matrix(rbinom(n = N * K, size = 1, prob = 0.5), nrow = N, ncol = K)
## Response model.
beta <- rep(x = 0, times = p)
beta[1:4] <- c(2, -2, 2, 2)
coeffs <- cbind(beta[1], beta[2], beta[3] + 2 * Z[, 1], beta[4] * (1 - 2 * Z[, 2]))
mu <- diag(X[, 1:4] %*% t(coeffs))
y <- mu + 0.5 * rnorm(N, mean = 0, sd = 1)
It seems worthwhile to write a function that will fit the model for us so that we can customize a few things such as an intercept term, verbosity etc. The function has the following structure with comments as placeholders for code we shall construct later.
plasso_fit <- function(y, X, Z, lambda, alpha = 0.5, intercept = TRUE,
                       ZERO_THRESHOLD = 1e-6, verbose = FALSE) {
    N <- length(y)
    p <- ncol(X)
    K <- ncol(Z)
    beta0 <- 0
    if (intercept) {
        beta0 <- Variable(1) * matrix(1, nrow = N, ncol = 1)
    }
    ## Define_Parameters
    ## Build_Penalty_Terms
    ## Compute_Fitted_Value
    ## Build_Objective
    ## Define_and_Solve_Problem
    ## Return_Values
}
## Fit pliable lasso using CVXR.
## pliable <- plasso_fit(y, X, Z, alpha = 0.5, lambda = lambda)
Defining the parameters
The parameters are easy: we just have \(\beta\), \(\theta_0\) and \(\Theta\).
beta <- Variable(p)
theta0 <- Variable(K)
theta <- Variable(p, K)
theta_transpose <- t(theta)
Note that we also define the transpose of \(\Theta\) for use later.
The penalty terms
There are three of them. The first term in the parenthesis, \(\sum_{j=1}^p\biggl(||(\beta_j,\theta_j)||_2\biggr)\), involves components of \(\beta\) and rows of \(\Theta\). CVXR provides two functions to express this norm:

- hstack, to bind columns of \(\beta\) and the matrix \(\Theta\), the equivalent of cbind in R,
- cvxr_norm, which accepts a matrix variable and an axis denoting the axis along which the norm is to be taken. The penalty requires us to use the row as axis, so axis = 1 per the usual R convention.
The second term in the parenthesis \(\sum_{j}||\theta_j||_2\) is also a norm along rows as the \(\theta_j\) are rows of \(\Theta\). And the last one is simply a 1-norm.
penalty_term1 <- sum(cvxr_norm(hstack(beta, theta), 2, axis = 1))
penalty_term2 <- sum(cvxr_norm(theta, 2, axis = 1))
penalty_term3 <- sum(cvxr_norm(theta, 1))
The fitted value
Equation 1 above for \(\hat{y}\) contains a sum: \(\sum_{j=1}^p(X_j\beta_j + W_j\theta_j)\). This requires multiplication of \(Z\) by the columns of \(X\) component-wise. That is a natural candidate for a map-reduce combination: map the column multiplication function appropriately and reduce using + to obtain the XZ_term below.
xz_theta <- lapply(seq_len(p),
                   function(j) (matrix(X[, j], nrow = N, ncol = K) * Z) %*% theta_transpose[, j])
XZ_term <- Reduce(f = '+', x = xz_theta)
y_hat <- beta0 + X %*% beta + Z %*% theta0 + XZ_term
The objective
The objective is now straightforward.
objective <- sum_squares(y - y_hat) / (2 * N) +
    (1 - alpha) * lambda * (penalty_term1 + penalty_term2) +
    alpha * lambda * penalty_term3
The problem and its solution
prob <- Problem(Minimize(objective))
result <- solve(prob, verbose = verbose)
beta_hat <- result$getValue(beta)
The return values
We create a list with values of interest to us. However, since sparsity is desired, we set values below ZERO_THRESHOLD to zero.
theta0_hat <- result$getValue(theta0)
theta_hat <- result$getValue(theta)
## Zero out stuff before returning
beta_hat[abs(beta_hat) < ZERO_THRESHOLD] <- 0.0
theta0_hat[abs(theta0_hat) < ZERO_THRESHOLD] <- 0.0
theta_hat[abs(theta_hat) < ZERO_THRESHOLD] <- 0.0
list(beta0_hat = if (intercept) result$getValue(beta0)[1] else 0.0,
     beta_hat = beta_hat, theta0_hat = theta0_hat, theta_hat = theta_hat,
     criterion = result$value)
The full function
We now put it all together.
plasso_fit <- function(y, X, Z, lambda, alpha = 0.5, intercept = TRUE,
                       ZERO_THRESHOLD = 1e-6, verbose = FALSE) {
    N <- length(y)
    p <- ncol(X)
    K <- ncol(Z)
    beta0 <- 0
    if (intercept) {
        beta0 <- Variable(1) * matrix(1, nrow = N, ncol = 1)
    }
    beta <- Variable(p)
    theta0 <- Variable(K)
    theta <- Variable(p, K)
    theta_transpose <- t(theta)
    penalty_term1 <- sum(cvxr_norm(hstack(beta, theta), 2, axis = 1))
    penalty_term2 <- sum(cvxr_norm(theta, 2, axis = 1))
    penalty_term3 <- sum(cvxr_norm(theta, 1))
    xz_theta <- lapply(seq_len(p),
                       function(j) (matrix(X[, j], nrow = N, ncol = K) * Z) %*% theta_transpose[, j])
    XZ_term <- Reduce(f = '+', x = xz_theta)
    y_hat <- beta0 + X %*% beta + Z %*% theta0 + XZ_term
    objective <- sum_squares(y - y_hat) / (2 * N) +
        (1 - alpha) * lambda * (penalty_term1 + penalty_term2) +
        alpha * lambda * penalty_term3
    prob <- Problem(Minimize(objective))
    result <- solve(prob, verbose = verbose)
    beta_hat <- result$getValue(beta)
    theta0_hat <- result$getValue(theta0)
    theta_hat <- result$getValue(theta)
    ## Zero out stuff before returning
    beta_hat[abs(beta_hat) < ZERO_THRESHOLD] <- 0.0
    theta0_hat[abs(theta0_hat) < ZERO_THRESHOLD] <- 0.0
    theta_hat[abs(theta_hat) < ZERO_THRESHOLD] <- 0.0
    list(beta0_hat = if (intercept) result$getValue(beta0)[1] else 0.0,
         beta_hat = beta_hat, theta0_hat = theta0_hat, theta_hat = theta_hat,
         criterion = result$value)
}
The Results
Using \(\lambda = 0.6\) we fit the pliable lasso without an intercept
result <- plasso_fit(y, X, Z, lambda = 0.6, alpha = 0.5, intercept = FALSE)
We can print the various estimates.
cat(sprintf("Objective value: %f\n", result$criterion))
## Objective value: 4.279446
We only print the nonzero \(\beta\) values.
index <- which(result$beta_hat != 0)
est.table <- data.frame(matrix(result$beta_hat[index], nrow = 1))
names(est.table) <- paste0("$\\beta_{", index, "}$")
knitr::kable(est.table, format = "html", digits = 3) %>% kable_styling("striped")
| \(\beta_{1}\) | \(\beta_{2}\) | \(\beta_{3}\) | \(\beta_{4}\) | \(\beta_{20}\) | \(\beta_{34}\) | \(\beta_{39}\) |
|---|---|---|---|---|---|---|
| 1.783 | -1.373 | 2.736 | 0.021 | -0.141 | -0.093 | 0.066 |
For this value of \(\lambda\), the nonzero \((\beta_1, \beta_2, \beta_3,\beta4)\) are picked up along with a few others \((\beta_{20}, \beta_{34},\beta_{39}).\)
The values for \(\theta_0\).
est.table <- data.frame(matrix(result$theta0_hat, nrow = 1))
names(est.table) <- paste0("$\\theta_{0,", 1:K, "}$")
knitr::kable(est.table, format = "html", digits = 3) %>% kable_styling("striped")
| \(\theta_{0,1}\) | \(\theta_{0,2}\) | \(\theta_{0,3}\) | \(\theta_{0,4}\) |
|---|---|---|---|
| -0.153 | 0.281 | -0.65 | 0.102 |
And just the first five rows of \(\Theta\), which happen to contain all the nonzero values for this result.
est.table <- data.frame(result$theta_hat[1:5, ])
names(est.table) <- paste0("$\\theta_{,", 1:K, "}$")
knitr::kable(est.table, format = "html", digits = 3) %>% kable_styling("striped")
| \(\theta_{,1}\) | \(\theta_{,2}\) | \(\theta_{,3}\) | \(\theta_{,4}\) |
|---|---|---|---|
| 0 | 0.000 | 0 | 0 |
| 0 | 0.000 | 0 | 0 |
| 0 | 0.000 | 0 | 0 |
| 0 | -0.093 | 0 | 0 |
| 0 | 0.000 | 0 | 0 |

Final comments
Typically, one would run the fits for various values of \(\lambda\) and choose one based on cross-validation and assess the prediction against a test set. Here, even a single fit takes a while, but techniques discussed in other articles here can be used to speed up the computations.
A logistic regression using a pliable lasso model can be prototyped similarly.
Session Info
sessionInfo()
## R version 3.6.0 (2019-04-26)## Platform: x86_64-apple-darwin18.5.0 (64-bit)## Running under: macOS Mojave 10.14.5## ## Matrix products: default## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib## ## locale:## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8## ## attached base packages:## [1] stats graphics grDevices datasets utils methods base ## ## other attached packages:## [1] kableExtra_1.1.0 CVXR_0.99-6 ## ## loaded via a namespace (and not attached):## [1] gmp_0.5-13.5 Rcpp_1.0.1 highr_0.8 ## [4] compiler_3.6.0 pillar_1.4.1 R.methodsS3_1.7.1## [7] R.utils_2.8.0 tools_3.6.0 digest_0.6.19 ## [10] bit_1.1-14 viridisLite_0.3.0 evaluate_0.14 ## [13] tibble_2.1.2 lattice_0.20-38 pkgconfig_2.0.2 ## [16] rlang_0.3.4 Matrix_1.2-17 rstudioapi_0.10 ## [19] yaml_2.2.0 blogdown_0.12.1 xfun_0.7 ## [22] xml2_1.2.0 httr_1.4.0 Rmpfr_0.7-2 ## [25] ECOSolveR_0.5.2 stringr_1.4.0 knitr_1.23 ## [28] hms_0.4.2 webshot_0.5.1 bit64_0.9-7 ## [31] grid_3.6.0 glue_1.3.1 R6_2.4.0 ## [34] rmarkdown_1.13 bookdown_0.11 readr_1.3.1 ## [37] magrittr_1.5 scales_1.0.0 htmltools_0.3.6 ## [40] scs_1.2-3 rvest_0.3.4 colorspace_1.4-1 ## [43] stringi_1.4.3 munsell_0.5.0 crayon_1.3.4 ## [46] R.oo_1.22.0
Source References
Tibshirani, Robert J., and Jerome H. Friedman. 2017. “A Pliable Lasso.”
arXiv Preprint Arxiv:1712.00484. https://arxiv.org/abs/1712.00484. |
An industry has a total cost function: $TC=4Q^2+100Q+100$, where $Q$ is the quantity produced. I am asked to find the long-run equilibrium price. How do I find it? What I've found is that I calculate the SRATC (short-run average total cost) and then solve for $Q$ (while setting the derivative of the SRATC equal to $0$), then find the price after substituting $Q$. Is this the way to go?
Hint: In a competitive market the firms produce on the long run at a level where the average total cost function has its minimum. Thus you have to find the minimum of
$$\frac{TC(Q)}{Q}=4Q+100+\frac{100}{Q}$$
The minimum can be found by setting the derivative equal to $0$. In this case it is good to remember that $Q$ is defined for non-negative values only. The picture below shows the course of the average cost function.
The long-run equilibrium price will equal this minimum average total cost.
In a perfectly competitive market, the long-run equilibrium price is the price at which the firm earns zero economic profit, which will happen at $MC=ATC$. Hence: $$MC=TC'=(4Q^2+100Q+100)'=8Q+100\\ ATC=\frac{TC}{Q}=4Q+100+\frac{100}{Q}\\ 8Q+100=4Q+100+\frac{100}{Q} \Rightarrow 4Q^2=100 \Rightarrow Q=5\\ P=MR=MC(5)=8\cdot 5+100=140.$$ Verify: $$TR=140Q; TC=4Q^2+100Q+100\\ TR(5)-TC(5)=140\cdot 5-(4\cdot 5^2+100\cdot 5+100)=0.$$ |
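The arithmetic in the answer above can be verified in a few lines of Python (my addition, not part of the original answer):

```python
# Long-run competitive equilibrium for TC(Q) = 4Q^2 + 100Q + 100:
# the price settles at the minimum of ATC(Q) = 4Q + 100 + 100/Q, where MC = ATC.
TC  = lambda Q: 4 * Q**2 + 100 * Q + 100
ATC = lambda Q: 4 * Q + 100 + 100 / Q
MC  = lambda Q: 8 * Q + 100          # derivative of TC

Q_star = 5.0                          # from MC = ATC, i.e. 4Q^2 = 100
P_star = MC(Q_star)

assert MC(Q_star) == ATC(Q_star) == 140.0
# Zero economic profit at the equilibrium: TR - TC = 0.
assert P_star * Q_star - TC(Q_star) == 0.0
```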
In this section we consider a property of events that relates to conditional probability, namely
independence. First, we define what it means for a pair of events to be independent, and then we consider collections of more than two events.
Independence for Pairs of Events
The following definition provides an intuitive definition of the concept of independence for two events, and then we look at an example that provides a computational way for determining when events are independent.
Definition \(\PageIndex{1}\)
Events \(A\) and \(B\) are independent if knowing that one occurs does not affect the probability that the other occurs, i.e.,
$$P(A\ |\ B) = P(A) \quad\text{and}\quad P(B\ |\ A) = P(B). \label{indep}$$
Using the definition of conditional probability (Definition 2.2.1), we can derive an alternative to equations \ref{indep} for determining when two events are independent, as the following example demonstrates.
Example \(\PageIndex{1}\)
Suppose that events \(A\) and \(B\) are independent. We rewrite equations \ref{indep} using the definition of conditional probability:
\begin{align}
P(A\ |\ B) = P(A) \quad & \Rightarrow\quad \frac{P(A\cap B)}{P(B)} = P(A) \\ & \text{and} \\ \notag P(B\ |\ A) = P(B) \quad & \Rightarrow\quad \frac{P(A\cap B)}{P(A)} = P(B) \end{align}
In each of the expressions on the right-hand side above we isolate \(P(A\cap B)\):
\begin{align}
\frac{P(A\cap B)}{P(B)} = P(A) \quad & \Rightarrow\quad P(A\cap B) = P(A)P(B) \\ & \text{and} \\ \notag \frac{P(A\cap B)}{P(A)} = P(B) \quad & \Rightarrow\quad P(A\cap B) = P(A)P(B) \end{align}
Both expressions result in \(P(A\cap B) = P(A)P(B)\). Thus, we have shown that if events \(A\) and \(B\) are independent, then the probability of their intersection is equal to the product of their individual probabilities. We state this fact in the next definition.
Definition \(\PageIndex{2}\)
Events \(A\) and \(B\) are independent if $$P(A\cap B) = P(A)P(B).$$
Generally speaking, Definition 2.3.2 tends to be an easier condition than Definition 2.3.1 to verify when checking whether two events are independent.
Example \(\PageIndex{2}\)
Consider the context of Exercise 2.2.1, where we randomly draw a card from a standard deck of 52 and \(C\) denotes the event of drawing a club, \(K\) the event of drawing a King, and \(B\) the event of drawing a black card.
Are \(C\) and \(K\) independent events? Recall that \(P(C\cap K) = 1/52\), and note that \(P(C) = 13/52\) and \(P(K) = 4/52\). Thus, we have
$$P(C\cap K) = \frac{1}{52} = P(C)P(K) = \frac{13}{52}\times\frac{4}{52},$$ indicating that \(C\) and \(K\) are independent.
Are \(C\) and \(B\) independent events? Recall that \(P(C\cap B) = 13/52\), and note that \(P(B) = 26/52\). Thus, we have
$$P(C\cap B) = \frac{13}{52} \neq P(C)P(B) = \frac{13}{52}\times\frac{26}{52},$$ indicating that \(C\) and \(B\) are not independent.
Let's think about the results of this example intuitively. To say that \(C\) and \(K\) are independent means that knowing that one of the events occurs does not affect the probability of the other event occurring. In other words, knowing that the card drawn is a King does not influence the probability of the card being a club. The proportion of clubs in the entire deck of 52 is the same as the proportion of clubs in just the collection of Kings: \(1/4\). On the other hand, \(C\) and \(B\) are not independent (AKA
dependent) because knowing that the card drawn is a club indicates that the card must be black, i.e., the probability that the card is black is 1. Alternatively, knowing that the card drawn is black increases the probability that the card is a club, since the proportion of clubs in the entire deck is \(1/4\), but the proportion of clubs in the collection of black cards is \(1/2\).
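The two independence checks in this example can be verified by exhaustively enumerating the deck. The following sketch (not part of the text; the event names mirror \(C\), \(K\), \(B\) above) uses exact fractions so the comparisons are not affected by floating-point rounding:

```python
from itertools import product
from fractions import Fraction

# Build a standard 52-card deck as (rank, suit) pairs.
ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['clubs', 'diamonds', 'hearts', 'spades']
deck = list(product(ranks, suits))

def prob(event):
    # Probability of an event (a subset of the deck) under equally likely outcomes.
    return Fraction(len(event), len(deck))

C = {card for card in deck if card[1] == 'clubs'}               # club
K = {card for card in deck if card[0] == 'K'}                   # King
B = {card for card in deck if card[1] in ('clubs', 'spades')}   # black card

print(prob(C & K) == prob(C) * prob(K))  # True: C and K are independent
print(prob(C & B) == prob(C) * prob(B))  # False: C and B are dependent
```

Here \(P(C\cap K) = 1/52 = (13/52)(4/52)\), while \(P(C\cap B) = 1/4 \neq (1/4)(1/2)\), matching the calculations above.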
Independence for 3 or More Events
For collections of 3 or more events, there are two different types of independence.
Definition \(\PageIndex{3}\)
Let \(A_1, A_2, \ldots, A_k\), where \(k\geq3\), be a collection of events.
The events are pairwise independent if every pair of events in the collection is independent. The events are mutually independent if every sub-collection of events, say \(A_{i_1}, A_{i_2}, \ldots, A_{i_n}\), satisfies the following:
$$P(A_{i_1}\cap A_{i_2}\cap \ldots \cap A_{i_n}) = P(A_{i_1})\times P(A_{i_2})\times \ldots\times P(A_{i_n})$$
Mutual independence is a stronger type of independence, since it
implies pairwise independence. But pairwise independence does NOT imply mutual independence, as the following example will demonstrate.
Example \(\PageIndex{3}\)
Consider again the context of Example 1.1.1, i.e., tossing a fair coin twice, and define the following events:
\begin{align*}
A &= \text{first toss is heads}\\ B &= \text{second toss is heads}\\ C &= \text{exactly one head is recorded} \end{align*}
We show that this collection of events - \(A, B, C\) - is pairwise independent, but NOT mutually independent. First, we note that the individual probabilities of each event are \(0.5\):
\begin{align*}
P(A) &= P(\{hh, ht\}) = 0.5 \\ P(B) &= P(\{hh, th\}) = 0.5 \\ P(C) &= P(\{ht, th\}) = 0.5 \end{align*}
Next, we look at the probabilities of all pairwise intersections to establish pairwise independence:
\begin{align*}
P(A\cap B) &= P(hh) = 0.25 = P(A)P(B) \\ P(A\cap C) &= P(ht) = 0.25 = P(A)P(C) \\ P(B\cap C) &= P(th) = 0.25 = P(B)P(C) \end{align*}
However, note that the three events do not have any outcomes in common, i.e., \(A\cap B\cap C = \varnothing\). Thus, we have
$$P(A\cap B\cap C) = 0 \neq P(A)P(B)P(C),\notag$$ and so the events are not mutually independent. |
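The gap between pairwise and mutual independence in this example can be checked by direct enumeration of the four equally likely outcomes. This is a sketch, not part of the text; the events \(A\), \(B\), \(C\) are as defined above:

```python
from fractions import Fraction

# Sample space for two tosses of a fair coin; each outcome has probability 1/4.
S = ['hh', 'ht', 'th', 'tt']

def prob(event):
    return Fraction(len(event), len(S))

A = {s for s in S if s[0] == 'h'}          # first toss is heads
B = {s for s in S if s[1] == 'h'}          # second toss is heads
C = {s for s in S if s.count('h') == 1}    # exactly one head is recorded

# Pairwise independence holds for every pair...
assert prob(A & B) == prob(A) * prob(B)
assert prob(A & C) == prob(A) * prob(C)
assert prob(B & C) == prob(B) * prob(C)

# ...but mutual independence fails: A, B, C share no outcome.
print(prob(A & B & C), prob(A) * prob(B) * prob(C))  # 0 vs 1/8
```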
Definition\(\PageIndex{1}\)
A random variable \(X\) has a normal distribution with parameters \(\mu\) and \(\sigma\), written \(X\sim\text{normal}(\mu,\sigma)\), if it has pdf given by $$f(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-(x-\mu)^2/2\sigma^2}, \quad\text{for}\ x\in\mathbb{R},\notag$$ where \(\mu\in\mathbb{R}\) and \(\sigma > 0\).
The normal distribution is arguably the most important probability distribution. It is used to model the distribution of population characteristics such as weight, height, and IQ. The pdf is tricky to work with; in fact, integrals involving the normal pdf cannot be solved in closed form, but rather require numerical methods to approximate. Because of this, there is no closed form for the corresponding cdf of a normal distribution. Given the importance of the normal distribution, though, many software programs have built-in normal probability calculators. There are also many useful properties of the normal distribution that make it easy to work with. We state these properties without proof below.
Properties of the Normal Distribution
If \(X\sim\text{normal}(\mu, \sigma)\), then \(aX+b\) also follows a normal distribution, with parameters \(a\mu + b\) and \(|a|\sigma\).
If \(X\sim\text{normal}(\mu, \sigma)\), then \(\displaystyle{\frac{X-\mu}{\sigma}}\) follows the standard normal distribution, i.e., the normal distribution with parameters \(\mu=0\) and \(\sigma = 1\).
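The second property is what practical normal-probability calculators rely on: standardize, then evaluate the standard normal cdf numerically (here via the error function, since no elementary closed form exists). The following sketch uses only the Python standard library; the parameter values \(\mu=5\), \(\sigma=2\) are illustrative, not from the text:

```python
import math
import random

def normal_cdf(x, mu=0.0, sigma=1.0):
    # Standardize, then use the error function:
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2.
    z = (x - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# P(X <= 7) for X ~ normal(5, 2) equals Phi(1) for the standard normal.
p = normal_cdf(7, mu=5, sigma=2)
print(round(p, 4))  # 0.8413

# Sanity check against a Monte Carlo estimate.
random.seed(0)
n = 200_000
est = sum(random.gauss(5, 2) <= 7 for _ in range(n)) / n
print(abs(est - p) < 0.01)  # True
```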
Let us correct some of your numbers. The size of the capacity is twice the size of the expected security margin (against a birthday attack). This is the idea of the
flat sponge claim:
When using a random sponge as a security reference, one considers the success of a particular attack. Such a success probability depends not only on the nature of the attack considered but also on the chosen parameters of the random sponge, i.e., its capacity, bitrate and whether it calls a random permutation or a random transformation.
The flat sponge claim is a simplification in the sense that we consider only the worst-case success probability, determined by the upper bound on the random oracle differentiating advantage [Bertoni et al., Eurocrypt'08], which depends solely on the capacity of the random sponge. Hence, it flattens the claimed success probabilities of all attacks using a single parameter: the claimed capacity $c_{\mathit{claim}}$. [source]
Thus if you aim for a security margin of $128$ bits, you need to have a capacity of $256$ bits.
In this approach, one designs a permutation $f$ on $b=r+c$ bits and uses it in the sponge construction to build the sponge function $F$. In addition, one makes a flat sponge claim on $F$ with a claimed capacity equal to the capacity used in the sponge construction, namely $c_{\mathit{claim}}=c$. In other words, the claim states that the best attacks on $F$ must be generic attacks. Hence, $c_{\mathit{claim}}=c$ means that any attack on $F$ with expected complexity below $2^{c/2}$ implies a structural distinguisher on $f$, and the design of the permutation therefore attempts to avoid such distinguishers. Note that the existence of a structural distinguisher for $f$ does not necessarily imply an attack or weakness in $F$. For example, any distinguisher for $f$ that has success probability zero below $2^{c/2}$ forms no threat, as the flat sponge claim expresses no security against adversaries that may send more than $2^{c/2}$ queries. [source]
TL;DR: the security margin is half the capacity. If you aim for 128 bits, you need a capacity of 256 bits.
That being said, we can move to your misunderstanding. You are right to say that if you take an output length of $1$ bit, you won't have the $64$ bits security claim... because
you are looking for a collision in the output.
When you use $\operatorname{Keccak}$, you can retrieve the full size of the bit rate ($r$). You can also extend this output, and these outputs are
related (as you basically always know the content of the bit-rate part). The capacity is the only part of the state that you don't know. If you are able to guess it, or to find a collision on it, then the output is compromised.
When you want a collision, you look either for a collision on the capacity part or for one on the bitrate part, which is usually bigger than the capacity.
Let $a,b$ be two inputs and $c,c_1,c_2$ capacity parts. You have a collision when: $$f(a) \to [\theta \mathbin\| c_1]\\f(b) \to [\theta \mathbin\| c_2]$$ where $[\theta \mathbin\| c]$ is the full state and $\theta$ is the resulting output (here your output length is important).
The other way to find a collision is to manage to get:$$f(a) \to [\alpha \mathbin\| c]\\f(b) \to [\beta \mathbin\| c]$$(collision in the capacity), because then you can have (due to the simplified sponge construction):$$\begin{align}f(a \mathbin\| 0) & \to [\alpha \mathbin\| c] &\Rightarrow&&& f([\alpha \oplus 0 \mathbin\| c]) \to \theta \\f(b \mathbin\| \beta \oplus \alpha) & \to [\beta \mathbin\| c] &\Rightarrow&&& f([\beta \oplus \beta \oplus \alpha \mathbin\| c] ) = f([\alpha \mathbin\| c]) \to \theta\end{align}$$
TL;DR 2: as @CodeInChaos said
Collision resistance is $\min(c/2,n/2)$ and preimage resistance $\min(c/2,n)$, where $n$ is the output size. But most papers search for collisions on $\operatorname{Keccak}[1600]$, which is why they usually state the more generic security margin ($2^{\mathit{capacity}/2}$).
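The $\min(c/2, n/2)$ bound is easy to demonstrate empirically for small $n$. The sketch below (my own illustration, not from the answer) uses SHAKE128 from Python's `hashlib`, a standard Keccak-based XOF with capacity $c=256$: truncating its output to $n=16$ bits caps collision resistance at $2^{n/2}=2^8$, so a birthday search finds a collision after only a few hundred hashes, far below the $2^{128}$ the capacity would otherwise allow.

```python
import hashlib

def shake(data: bytes, out_bytes: int) -> bytes:
    # SHAKE128 is a Keccak-based extendable-output function;
    # out_bytes controls the output length n (in bytes).
    return hashlib.shake_128(data).digest(out_bytes)

def find_collision(out_bytes: int):
    # Birthday search: a collision is expected after roughly
    # 2^(8 * out_bytes / 2) distinct messages.
    seen = {}
    for i in range(1 << 24):
        msg = i.to_bytes(4, 'big')
        h = shake(msg, out_bytes)
        if h in seen:
            return seen[h], msg
        seen[h] = msg

m1, m2 = find_collision(2)  # n = 16 bits
assert m1 != m2 and shake(m1, 2) == shake(m2, 2)
print("collision on 16-bit output:", m1.hex(), m2.hex())
```

With the full-size output (or any $n \ge c$), no such shortcut exists and the generic $2^{c/2}$ bound applies.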
I've recently found an article (referred to somewhere on this site) criticizing the use of common rules of algebra on infinite series. To be honest, the video it refers to is one of the Numberphile videos I liked the most. I mean, informally, to say a rule doesn't hold, I think one should find a counterexample (in modern logic, a $\forall$ statement is true by default, and a $\exists$ false); say, associativity of addition for infinite series:
$$ S_1=(1-1)+(1-1)+(1-1)+(1-1)\cdots=0\\ S_2=1+(-1+1)+(-1+1)+(-1+1)+\cdots=1\\ \therefore S_1\neq S_2 $$
But what inconsistency does:
$$ \begin{align} S=1&-1+1-1+\cdots\\ S+S=1&-1+1-1+\cdots+\\ &+1-1+1-1+\cdots=1\iff\\ \iff2S=1&\iff S=\frac12 \end{align} $$
create? I'm not even getting into Cesàro summation. Why does the limit of a sum have to equal the sum itself?
Why can't we have
$$ \frac12=\sum_{n=0}^\infty\ (-1)^n\neq \lim_{x\to\infty}\sum_{n=0}^x\ (-1)^n= \text{st}\sum_{n=0}^H\ (-1)^n= \text{undefined} $$
After all, in here $x$ is an arbitrarily big real
, $H$ is a positive infinite hyperinteger number and $\infty$ is number , not a number. Where is the inconsistency? NaN Edit:
There are already a lot of comments, and I feel I haven't made myself clear. Maybe the question is more philosophical than I thought. Here is an attempt to make my still developing points clearer:
$\cdots$ means the continuation to infinity of a series that continues the most simple pattern.
Example: $\displaystyle\sum_{n=0}^\infty\ (-1)^n$ means that for whatever number you have taken the partial sum, you are as far from the result as you were in the beginning. As by $6.$, such non-converging sum cannot be computed directly.
In the first example, it is proven that associativity does not hold for all infinite series, at least for divergent series, the same way $\sqrt a \sqrt b=\sqrt{ab}$ does not hold in $\mathbb C$. However, non-contradicting laws for associativity can be found:
Associativity may not work infinitely for numbers within the same series, as $S\neq S_1\neq S_2\neq S$ shows. However, it works pairwise between infinite series. $$ \begin{align} S+S=1&-1+1-1+\cdots+\\ &+1-1+1-1+\cdots \end{align} $$ is the same as $$ S+S=1+(-1+1)+(+1-1)+(-1+1)+\cdots $$ which is $1$, as we have seen. This rule is consistent. Other pairwise associations for this will either give the same, or $2-2+2-2+\cdots$, which is also $1$. In other words, for any numbers, associativity works. So it works pairwise (2 series, 2 numbers per application), an infinite number of times. But it doesn't work within the same series, as each set of numbers would be a finite series, infinity is not a number, so it can't have not-a-number of times per application. That is, $$ S_1=(1-1)+(1-1)+(1-1)+\cdots $$ is the same as $$ \begin{align} S_1=1+&1+1+\cdots\\ -1-&1-1+\cdots \end{align} $$ and not the same as $$ \begin{align} S=1-&1+\\ +1-&1+\\ +1-&1+\\ +\cdots \end{align} $$
The limit of a sum equals the sum when the sum converges. Example: $$ \sum_{n=0}^\infty 2^{-n}=\lim_{x\to\infty}\sum_{n=0}^x 2^{-n}=2 $$
The limit of a non-converging sum does not exist or is infinity, not a number. All sums have a value, even though their limits might not have one, or the value of the sum is infinity.
If that is the case, infinity, as not a number, cannot be directly summed with another sum (eliminating problems as $\infty-\infty$ by reason of lack of information). If according to non-contradictory rules, a value can be assigned to a sum, that is the value of the sum. See the example above for $1-1+1-1+\cdots$
To distinguish between the values of two non-convergent sums, they first must be computed according to non-contradictory rules. Then their values can be compared, by transitivity of equality.
A divergent sum cannot be computed directly (the reason why $S\neq S_1\neq S_2\neq S$), as by definition of infinity, one cannot reach it. Again, use non-contradictory rules, making a finite number of changes that maintain the value of the divergent sum (see the examples' consistency).
Thank you for reading, |
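(As a numerical footnote to the $S=\frac12$ computation above: the Cesàro value mentioned in the question can be checked by machine. The partial sums of $\sum (-1)^n$ oscillate between $1$ and $0$ and have no limit, but their running averages converge to $1/2$. This is only an illustration of Cesàro summation, not of the formal manipulations debated above.)

```python
from fractions import Fraction

def cesaro_means(terms):
    # Running averages of the partial sums s_1, s_2, ..., computed exactly.
    s, total, means = 0, Fraction(0), []
    for m, t in enumerate(terms, start=1):
        s += t
        total += s
        means.append(total / m)
    return means

terms = [(-1) ** n for n in range(1000)]
means = cesaro_means(terms)
print(float(means[-1]))  # 0.5
```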
Introduction
We begin with a definition.
Definition \(\PageIndex{1}\)
Probability theory provides a mathematical model for chance (or random) phenomena.
While this is not a very informative definition, it does indicate the overall goal of this course, which is to develop a
formal, mathematical structure for the fairly intuitive concept of probability. While most everyone is familiar with the notion of "chance" -- we informally talk about the chance of it raining tomorrow, or the chance of getting what you want for your birthday -- when it comes to quantifying the chance of something happening, we need to develop a mathematical model to make things precise and calculable.
Sample Spaces and Events
Before we can formally define what the mathematical model is that we will use to make probability precise, we first establish the
structure on which the model operates: sample spaces and events.
Definition \(\PageIndex{2}\)
The sample space for a probability experiment (i.e., an experiment with random outcomes) is the set of all possible outcomes. The sample space is denoted \(S\). An outcome is an element of \(S\), generally denoted \(s \in S\).
Example \(\PageIndex{1}\)
Suppose we toss a coin twice and record the sequence of heads (\(h\)) and tails (\(t\)). A possible outcome of this experiment is then given by
$$s = ht$$
and the sample space is
$$S = \{hh, ht, th, tt\}.\label{coinflip}$$
Example \(\PageIndex{2}\)
Suppose we record the time (\(t\)), in minutes, that a car spends waiting for a green light at a particular intersection. A possible outcome of this experiment is then given by
$$t=1.5,$$
indicating that a particular car waited one and a half minutes for the light to turn green. The sample space consists of all non-negative numbers, since a measurement of time cannot be negative and, in theory, there is no limit on how long a car could wait for a green light. We can then write the sample space as follows:
$$S = \{t \in\mathbb{R}\ |\ t\geq 0\} = [0,\infty).\label{time}$$
Definition \(\PageIndex{3}\)
An event is a particular subset of the sample space.
Example \(\PageIndex{3}\)
Continuing in the context of Example 1.1.1, define \(A\) to be the event that at least one head is recorded:
$$A = \{hh, ht, th\}.$$
Note that \(A\) is a subset of \(S\) given in Equation \ref{coinflip}.
Example \(\PageIndex{4}\)
Continuing in the context of Example 1.1.2, define \(B\) to be the event that a car waits at most 2 minutes for the light to turn green. We can write the event \(B\) as the following interval, i.e., a subset of the sample space \(S\) given in Equation \ref{time}:
$$B = [0,2] = \{t \in \mathbb{R}\ |\ 0 \leq t \leq 2\}.$$
Set Theory: A Brief Review
As we see from the above definitions of sample spaces and events,
sets play the primary role in the structure of probability experiments. So, in this section, we review some of the basic definitions and notation from set theory. We do this in the context of sample spaces, outcomes, and events.
Definition \(\PageIndex{4}\)
The union of two events \(A\) and \(B\), denoted \(A\cup B\), is the set of all outcomes in \(A\) or \(B\) (or both).
The intersection of two events \(A\) and \(B\), denoted \(A\cap B\), is the set of all outcomes in both \(A\) and \(B\).
The complement of an event \(A\), denoted \(A^c\), is the set of all outcomes in the sample space that are not in \(A\). This may also be written as follows: $$A^c= \{s\in S\ |\ s\notin A \}.$$
The empty set, denoted \(\varnothing\), is the set containing no outcomes.
Two events \(A\) and \(B\) are mutually exclusive (or disjoint) if their intersection is the empty set, i.e., \(A \cap B = \varnothing\).
Example \(\PageIndex{5}\)
Consider again the context of Example 1.1.1 with \(A = \{hh, ht, th\}\), and define the event
$$B = \{ht, th\}.$$
Now we can apply the set operations just defined to the events \(A\) and \(B\):
$$A \cup B = \{hh, ht, th\} = A$$
$$A \cap B = \{ht, th\} = B$$
$$A^c = \{tt\}$$
$$B^c = \{hh, tt\}$$
Note the relationship between events \(A\) and \(B\): every outcome in \(B\) is an outcome in \(A\). In this case, we say that \(B\) is a subset of \(A\), and write
$$B \subseteq A.$$
Note also that events \(A\) and \(B\) are not mutually exclusive, since \(A \cap B = B \neq \varnothing\). Now consider the event
$$C = \{tt\},$$
and
$$A \cap C = \varnothing$$
$$B \cap C = \varnothing.$$
Thus, events \(A\) and \(C\) are disjoint, and events \(B\) and \(C\) are disjoint. |
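All of the set operations in this section map directly onto Python's built-in `set` type, which makes the examples above easy to check by machine (a side illustration, not part of the text):

```python
# Sample space for two coin tosses, and the events from the examples above.
S = {'hh', 'ht', 'th', 'tt'}
A = {'hh', 'ht', 'th'}
B = {'ht', 'th'}
C = {'tt'}

print(A | B)           # union: equals A
print(A & B)           # intersection: equals B
print(S - A)           # complement of A within S: {'tt'}
print(B <= A)          # True: B is a subset of A
print(A & C == set())  # True: A and C are mutually exclusive
```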
Reading about the history of mathematical typesetting before TeX, I found an interesting article from Ivan Niven titled "A SIMPLE PROOF THAT π IS IRRATIONAL", which dates back to 1946, when there was no TeX (and probably no computer at all). The typesetting was great, and I was wondering if this is an originally typeset paper from 1946, or a new remake?
I have created a remake with plain TeX and also LaTeX, and the similarities between the plain TeX output and the original is astonishing! I am eager to know more about the similarities between the fonts and typesetting of the original document, and the output of plain TeX remake of this article. Compare these two for yourself:
Fig. 3: Original document
Fig. 4: Remade with plain TeX
If this is from the old days, how could they achieve such a nice output?
The article is published in:
Bulletin of the American Mathematical Society (Bull. Amer. Math. Soc.), Volume 53, Number 6 (1947), page 509, and it first became available on Project Euclid on July 4th, 2007. In the dedicated page for BAMS on Project Euclid we read:
The digitization and unrestricted availability of the backfile of the Bulletin of the American Mathematical Society (1891-1991) is made possible with the generous support of the Gordon and Betty Moore Foundation, the Mathematical Sciences Research Institute, and the American Mathematical Society.
Compare the whole page of two documents. They are nearly identical! Even the hyphenation is mostly the same. They have slight differences in math spacing:
This is the code I used to remake the paper:
\magnification=\magstep1
\baselineskip=12pt
\hsize=5.0truein
\vsize=8.7truein
\font\footsc=cmcsc10 at 10truept
\font\footbf=cmbx10 at 10truept
\font\footrm=cmr10 at 10truept
\font\bigrm=cmr12 at 14pt
\font\smallbf=cmbx10 at 8truept
\parindent=0.15in
\pageno=509

\centerline{\bigrm\bf A SIMPLE PROOF THAT $\pi$ IS IRRATIONAL}
\smallskip\smallskip
\centerline{\smallbf IVAN NIVEN}
\smallskip\smallskip
Let $\pi = a/b$, the quotient of positive integers. We define the
polynomials
$$\displayindent=0.3in\displaywidth=1.3in f(x)={x^n(a-bx)^n \over n!},$$
$$\displayindent=0.3in\displaywidth=3.3in F(x) = f(x) - f^{(2)}(x)+f^{(4)}(x)-\ldots+(-1)^nf^{(2n)}(x),$$
the positive integer $n$ being specified later. Since $n!f(x)$ has integral
coefficients and terms in $x$ of degree not less than $n$, $f(x)$ and its
derivatives $f^{(j)}(x)$ have integral values for $x=0$; also for $x=\pi=a/b$,
since $f(x)=f(a/b-x)$. By elementary calculus we have
$$\displayindent=0.01in\displaywidth=4.0in{d \over dx}\{F'(x) \sin x - F(x) \cos x\} = F''(x) \sin x + F(x) \sin x = f(x) \sin x$$
\noindent and
$$\int^\pi_0 f(x) \sin xdx = [F'(x) \sin x - F(x) \cos x]^\pi_0 = F(\pi) + F(0).\leqno(1)$$
Now $F(\pi)+F(0)$ is an {\it integer}, since $f^{(j)}(\pi)$ and $f^{(j)}(0)$ are integers. But
for $0<x<\pi$,
$$0 < f(x) \sin x < {\pi^n a^n \over n!},$$
so that the integral in (1) is {\it positive}, {\it but arbitrarily small} for $n$
sufficiently large. Thus (1) is false, and so is our assumption that $\pi$ is
rational.
\smallskip
{\footsc Purdue University}
\kern +10pt
\hrule width 1.0in
\kern +10pt
{\footrm Received by the editors November 26, 1946, and, in revised form, December 20, 1946.}
\bye
Difference between revisions of "Extended mean value theorem"
Revision as of 18:48, 29 January 2019
The
extended mean value theorem (also called Cauchy's mean value theorem) is usually formulated as:
Let
[math] f, g: [a,b] \to \mathbb{R}[/math]
be continuous functions that are differentiable on the open interval [math](a,b)[/math]. If [math]g'(x)\neq 0[/math] for all [math]x\in(a,b)[/math], then there exists a value [math]\xi \in (a,b)[/math] such that
[math] \frac{f'(\xi)}{g'(\xi)} = \frac{f(b)-f(a)}{g(b)-g(a)}. [/math]
Remark: It seems to be easier to state the extended mean value theorem in the following form:
Let
[math] f, g: [a,b] \to \mathbb{R}[/math]
be continuous functions that are differentiable on the open interval [math](a,b)[/math]. Then there exists a value [math]\xi \in (a,b)[/math] such that
[math] f'(\xi)\cdot (g(b)-g(a)) = g'(\xi) \cdot (f(b)-f(a)). [/math]
This second formulation avoids the need that [math]g'(x)\neq 0[/math] for all [math]x\in(a,b)[/math] and is therefore much easier to handle numerically.
The proof is similar, just use the function
[math] h(x) = f(x)\cdot(g(b)-g(a)) - (g(x)-g(a))\cdot(f(b)-f(a)) [/math]
and apply Rolle's theorem.
Visualization
The extended mean value theorem says that given the curve [math] C: [a,b]\to\mathbb{R}^2, \quad t \mapsto (f(t), g(t)) [/math]
with the above prerequisites for [math]f[/math] and [math]g[/math], there exists a [math]\xi[/math] such that the tangent to the curve in the point [math]C(\xi)[/math] is parallel to the secant through [math]C(a)[/math] and [math]C(b)[/math].
The underlying JavaScript code
var board = JXG.JSXGraph.initBoard('box', {boundingbox: [-5, 10, 7, -6], axis:true});
var p = [];
p[0] = board.create('point', [0, -2], {size:2, name: 'C(a)'});
p[1] = board.create('point', [-1.5, 5], {size:2, name: ''});
p[2] = board.create('point', [1, 4], {size:2, name: ''});
p[3] = board.create('point', [3, 3], {size:2, name: 'C(b)'});

// Curve
var fg = JXG.Math.Numerics.Neville(p);
var graph = board.create('curve', fg, {strokeWidth:3, strokeOpacity:0.5});

// Secant
var line = board.create('line', [p[0], p[3]], {strokeColor:'#ff0000', dash:1});

var df = JXG.Math.Numerics.D(fg[0]);
var dg = JXG.Math.Numerics.D(fg[1]);

// Usually, the extended mean value theorem is formulated as
// df(t) / dg(t) == (p[3].X() - p[0].X()) / (p[3].Y() - p[0].Y())
// We can avoid division by zero with that formulation:
var quot = function(t) {
    return df(t) * (p[3].Y() - p[0].Y()) - dg(t) * (p[3].X() - p[0].X());
};

var r = board.create('glider', [
    function() { return fg[0](JXG.Math.Numerics.root(quot, (fg[3]() + fg[2]) * 0.5)); },
    function() { return fg[1](JXG.Math.Numerics.root(quot, (fg[3]() + fg[2]) * 0.5)); },
    graph], {name: 'C(ξ)', size: 4, fixed: true, color: 'blue'});

board.create('tangent', [r], {strokeColor:'#ff0000'});
Let $G$ be a finite group with conjugacy classes $C_1, C_2, ..., C_k$ and let $g_i \in C_i$ be an element for each $i=1, ..., k$
Part 1: State the theorems on row and column orthogonality in the character table of $G$,
Row orthogonality:
$\langle X_i, X_j\rangle=0$ for $i\neq j$, and $1$ for $i=j$.
Column orthogonality:
$\sum_{X_i}X_i(g)\overline{X_i(h)}=|C_G(g)|$ if $g, h$ are conjugate, and $0$ otherwise, where the sum is over the irreducible characters $X_i$ and $|C_G(g)|$ is the size of the centralizer of $g$.
Part 2: The following shows part of a character table of a group $G$ whose conjugacy classes are $C_1, C_2, ..., C_5$. Below each conjugacy class in the table the size of the centraliser of one of its elements is given. Find the values of $x$ and $y$ in the table.
I am not sure how to apply the orthogonality relations to the table. I think the first step is to consider column $1$, to find $x$ using column orthogonality. Then we can calculate $y$ using row orthogonality of row $5$. However I am not sure how to do this in practice. Many thanks for your help. |
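(Since the table from the question is not reproduced here, the following sketch shows how column orthogonality is applied in practice on a known complete table, that of $S_3$: the inner product of a column with itself equals the centralizer order, and with any other column equals $0$. In the question's table, the same relations give linear equations in $x$ and $y$: column orthogonality of column 1 with the column containing $x$ pins down $x$, and row orthogonality of row 5 with a known row then gives $y$.)

```python
# Character table of S_3. Rows are the irreducible characters; columns are
# the classes {e}, {transpositions}, {3-cycles}.
table = [
    [1,  1,  1],   # trivial character
    [1, -1,  1],   # sign character
    [2,  0, -1],   # standard 2-dimensional character
]
centralizer_sizes = [6, 2, 3]  # |C_G(g)| for a representative of each class

k = len(table[0])
for i in range(k):
    for j in range(k):
        # All character values here are real, so no conjugation is needed.
        s = sum(row[i] * row[j] for row in table)
        expected = centralizer_sizes[i] if i == j else 0
        assert s == expected
print("column orthogonality verified for S_3")
```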
I'm trying to solve an exercise which its conclusion seems to be the title of this post. The exercise is:
1. Show that the function $h:\Bbb R\to [0,1[$ given by $$h(t)=\begin{cases} e^{-1/t^2} &\text{if } t\neq 0\\ 0 &\text{otherwise} \end{cases}$$ is $C^\infty$.
2. Show that the functions $$h_+(t)=\begin{cases} e^{-1/t^2} &\text{if } t\gt 0\\ 0 &\text{otherwise} \end{cases}\quad\text{and}\quad h_{-}(t)=\begin{cases} e^{-1/t^2} &\text{if } t\lt 0\\ 0 &\text{otherwise} \end{cases}$$ are $C^\infty$.
3. Show that the function $k:\Bbb R\to [0,1[$ given by $k(t)=h_-(t-b)h_+(t-a)$ is $C^\infty$ and positive for $t\in ]a,b[$.
4. Let $R$ be the rectangle $]a_1,b_1[\times\cdots\times]a_n,b_n[$. Show that there is a $C^\infty$ function $g:\Bbb R^n\to [0,1[$ strictly positive on $R$.
5. Conclude that if $K$ is a compact subset of $\Bbb R^n$ and $U$ is an open neighborhood of $K$, there is a $C^\infty$ function $f:\Bbb R^n\to [0,1]$ such that $f_{|K}\equiv 1$ and its support is contained in $U$.
From 1.-4. I can prove that for any open and bounded set $O\subset \Bbb R^n$, there is a $C^\infty$ function with its support contained in $O$. So my first attempt was apply this to the open $U\setminus K$. Then I get a $C^\infty$ function $f$ that is $0$ (in particular) over $K$. If I just consider $\chi_K+f$ that function can fail to be $C^\infty$.
In a discussion on the chat, robjohn suggest this. It works fine, but then my question is:
Can 5. be proved by using 1.-4.? If yes, how? |
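(For concreteness, here is a numerical sketch of items 2.-3., with the illustrative choice $a=0$, $b=1$: the bump $k(t)=h_-(t-b)h_+(t-a)$ vanishes identically outside $]a,b[$ and is strictly positive inside. This only illustrates the functions involved; it is not a proof of smoothness.)

```python
import math

def h_plus(t):
    # e^{-1/t^2} for t > 0, else 0 -- smooth but not analytic at 0.
    return math.exp(-1.0 / t**2) if t > 0 else 0.0

def h_minus(t):
    # e^{-1/t^2} for t < 0, else 0.
    return math.exp(-1.0 / t**2) if t < 0 else 0.0

def k(t, a=0.0, b=1.0):
    # Positive exactly on ]a, b[, zero (with all derivatives) outside.
    return h_minus(t - b) * h_plus(t - a)

assert k(-0.5) == 0.0 and k(1.5) == 0.0  # vanishes outside ]0, 1[
assert k(0.5) > 0.0                      # strictly positive inside
print(k(0.5))  # e^{-8}, about 3.35e-4
```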
For any formula $\varphi$ in which $x$ and $y$ are free variables, $\forall x \> \forall y \> \varphi$ quantifies over all possible values for $x$ and $y$. As a result, $x$ and $y$
may be different objects or they may be equal; for the statement to hold, $\varphi$ must hold in both cases.
Following this reasoning, a valid interpretation for your formula would be "an Italian is happy regardless of who wins the World Cup" (assuming "WC" stands for "World Cup"), with "Italian" standing for $x$ and "who" for $y$. Thus, $x$ is happy if they win the World Cup ($x = y$) but also happy if someone else does ($x \neq y$); hence "regardless".
The other statement can be expressed in predicate logic as follows:$$\forall x (\text{italian}(x) \to (\text{winWC}(x) \to \text{happy}(x)))$$
In fact, this evaluates to the same truth value as the first formula precisely when you pick the same values for $x$ and $y$.
Assuming the most reasonable domain (i.e., persons in the world) and interpretation in this setting, you could argue there is an Italian who is happy only if the Italian national team wins the World Cup and is unhappy otherwise. Pick this person as your value for $x$ and an arbitrary non-Italian (e.g., German) person for $y$; then the second formula evaluates to true under this variable assignment (since $\text{italian}(x)$ is true and $x$ is happy if $\text{winWC}(x)$, that is, the Italian national team wins), but the first evaluates to false (since $x$ is unhappy if $y$'s team, that is, the German national team wins).
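This counter-model can be checked mechanically. The sketch below assumes the first formula from the question is $\forall x\,\forall y\,(\text{italian}(x) \to (\text{winWC}(y) \to \text{happy}(x)))$ (it is not reproduced in the post), and builds exactly the two-person model described above:

```python
from itertools import product

# A tiny model with two persons: 'i' is Italian and unhappy because the
# German team wins; 'g' is a non-Italian whose team wins and who is happy.
persons = ['i', 'g']
italian = {'i': True,  'g': False}
winWC   = {'i': False, 'g': True}   # the German team wins
happy   = {'i': False, 'g': True}   # the Italian is unhappy

def implies(p, q):
    return (not p) or q

# Formula 1: forall x forall y: italian(x) -> (winWC(y) -> happy(x))
f1 = all(implies(italian[x], implies(winWC[y], happy[x]))
         for x, y in product(persons, repeat=2))

# Formula 2: forall x: italian(x) -> (winWC(x) -> happy(x))
f2 = all(implies(italian[x], implies(winWC[x], happy[x])) for x in persons)

print(f1, f2)  # False True -- so the two formulas are not equivalent
```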
Put each of the $k-1$ greatest numbers in their own set, and the remaining numbers all together in the $k$th set. This linear-time algorithm can be proved correct by using the following (non-strictly) improving moves to reach the specified optimal solution from any other.
Given two numbers $a<b$ with $a\in S_i$ and $b\in S_j$ and $\lvert S_i\rvert\le\lvert S_j\rvert$, swap $a$ and $b$.
Given $S_i,S_j$ where $|S_i|\ne 1$ and where all of the numbers in $S_i$ are greater than or equal to all of the numbers in $S_j$, move the smallest number in $S_i$ to $S_j$. (This move does not decrease the average of $S_i$, and does not decrease the average of $S_j$.)
An alternative correctness proof uses a linear relaxation. Let the numbers be $a_1\ge\cdots\ge a_n$ and consider the following linear program.
\begin{align}&\text{maximize }\sum_{i=1}^na_ix_i\\&\text{subject to}\\&\sum_{i=1}^nx_i=k&(w)\\&\forall i\in\{1,\ldots,n\},\quad-x_i\le\frac{-1}{n-(k-1)}&(y_i)\\&\forall i\in\{1,\ldots,n\},\quad x_i\le1&(z_i)\end{align}
Given a partition, for all $i\in\{1,\ldots,n\}$, we can set $x_i=1/\lvert S_j\rvert$, where $a_i\in S_j$. It follows that this program is in fact a relaxation. The proposed partition sets $x_1,\ldots,x_{k-1}=1$ and $x_k,\ldots,x_n=1/(n-(k-1))$.
Here is the dual program. By weak duality, feasible solutions of this program upperbound the objective value of the primal.
\begin{align}&\text{minimize }kw+\sum_{i=1}^n\left(\frac{-y_i}{n-(k-1)}+z_i\right)\\&\text{subject to}\\&\forall i\in\{1,\ldots,n\},\quad w-y_i+z_i=a_i&(x_i)\\&\forall i\in\{1,\ldots,n\},\quad y_i,z_i\ge0\\\end{align}
Here is a feasible solution to the dual program whose objective value is equal to the previously proposed primal solution. It follows that both solutions are optimal.
\begin{align}w&=a_k\\y_i&=\begin{cases}a_k-a_i&\text{if }i\in\{k,\ldots,n\}\\0&\text{otherwise}\end{cases}\\z_i&=\begin{cases}a_i-a_k&\text{if }i\in\{1,\ldots,k-1\}\\0&\text{otherwise}\end{cases}\end{align}
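The linear-time rule is easy to implement and to sanity-check against brute force on small instances. The sketch below assumes (as the answer's LP objective $\sum a_i x_i$ with $x_i = 1/\lvert S_j\rvert$ suggests) that the objective is to maximize the sum of the sets' averages; the test numbers are arbitrary:

```python
from fractions import Fraction
from itertools import product

def proposed(nums, k):
    # Greedy rule: each of the k-1 greatest numbers alone,
    # all remaining numbers together in the k-th set.
    nums = sorted(nums, reverse=True)
    parts = [[x] for x in nums[:k - 1]] + [nums[k - 1:]]
    return sum(Fraction(sum(p), len(p)) for p in parts)

def brute_force(nums, k):
    # Best sum of set averages over all partitions into k nonempty sets.
    n = len(nums)
    best = None
    for labels in product(range(k), repeat=n):
        if len(set(labels)) != k:
            continue  # some set would be empty
        parts = [[nums[i] for i in range(n) if labels[i] == j]
                 for j in range(k)]
        val = sum(Fraction(sum(p), len(p)) for p in parts)
        best = val if best is None or val > best else best
    return best

nums = [7, 3, 3, 1, 9, 4]
for k in range(2, 5):
    assert proposed(nums, k) == brute_force(nums, k)
print("greedy matches brute force")
```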
Can anyone please help me in solving this integration problem $\int \frac{e^x}{1+ x^2}dx \, $?
Actually, I am getting stuck at one point while solving this problem via integration by parts.
Since $x^2+1=(x+i)(x-i)$, partial fraction decomposition leads to $$\frac 1{x^2+1}=\frac 1{2i}\Big(\frac{1}{x-i}-\frac{1}{x+i}\Big)=-\frac i{2}\Big(\frac{1}{x-i}-\frac{1}{x+i}\Big)$$ So $$I=\int \frac{e^x}{1+ x^2}\,dx =-\frac i{2}\int\Big(\frac{e^x}{x-i}-\frac{e^x}{x+i}\Big)\,dx=-\frac i{2}\int\Big(\frac{e^i\,e^{x-i}}{x-i}-\frac{e^{-i}\,e^{x+i}}{x+i}\Big)\,dx$$ Now make change of variable $y=x-i$ for the first and $z=x+i$ for the second. So $$I=-\frac {i e^i}{2}\int \frac{e^y}y dy+\frac {i e^{-i}}{2}\int \frac{e^z}z dz$$ and remember that $$\int\frac {e^t} t dt=\text{Ei}(t)$$ which finally makes $$I=\frac{1}{2}\, i \,e^{-i}\, \text{Ei}(x+i)-\frac{1}{2}\, i\, e^i\, \text{Ei}(x-i)$$ with $i\,e{^i}=-\sin (1)+i \cos (1)$ and $i\,e^{-i}=\sin (1)+i \cos (1)$.
According to Wolfram|Alpha, no closed form exists. However, that doesn't mean we can't make any progress at all.
We can use the substitution $x=\tan{u}$, expand the resulting integrand as a power series in $\tan{u}$ and then each term can be expressed by way of a reduction formula.
Let $x=\tan{u}$. Then, ${\mathrm{d}x \over \mathrm{d}u} = \sec^{2}{u}$. Hence, we have \begin{eqnarray*} \int\frac{e^{x}}{1+x^{2}}\,\mathrm{d}x & = & \int\frac{e^{\tan{u}}}{1+\tan^{2}{u}}\sec^{2}{u}\,\mathrm{d}u\\ & = & \int e^{\tan{u}}\,\mathrm{d}u\\ & = & \int \left( 1+ \tan{u} + \frac{1}{2!}\tan^{2}{u}+ \ldots + \frac{1}{k!}\tan^{k}{u} + \ldots \right)\,\mathrm{d}u\\ & = & \int \left( \sum_{k=0}^{\infty}\frac{1}{k!}\tan^{k}{u} \right)\,\mathrm{d}u\\ & = & \sum_{k=0}^{\infty}\frac{1}{k!}I_{k}, \end{eqnarray*} where, for each $k=0,1,2,\ldots$, we have $I_{k} = \int \tan^{k}{u}\,\mathrm{d}u$.
Now we consider the reduction formula as follows: for $k\geq2$, we have \begin{eqnarray*} I_{k} & = & \int\tan^{k}{u}\,\mathrm{d}u\\ & = & \int (\sec^{2}{u}-1)\tan^{k-2}{u}\,\mathrm{d}u\\ & = & \int \sec^{2}{u}\tan^{k-2}{u}\,\mathrm{d}u - I_{k-2}\\ & = & \frac{1}{k-1}\tan^{k-1}{u} -I_{k-2}. \end{eqnarray*} Additionally, note that $I_{0} = u\,(+\text{constant})$ and $I_{1} = \log{\sec{u}}\,( + \text{constant} )$. Using the formula derived above and these two initial values, we can calculate $I_{k}$ for any value of $k$ (there may well be a general formula; I haven't checked).
Hence, we have a series representation of the integral to as many terms as we please; note also that each term in the series is basically a polynomial in $x=\tan{u}$ with an extra $\tan^{-1}{x}$ or $\log{\sec{\tan^{-1}{x}}}$ tacked on at the end.
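The reduction formula can be double-checked numerically. The sketch below treats $I_k$ as the definite integral $\int_0^u \tan^k t\,\mathrm{d}t$ (so all integration constants are fixed) and verifies $I_k = \frac{1}{k-1}\tan^{k-1}u - I_{k-2}$ with a homemade Simpson's rule, using only the standard library:

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n even) -- ample accuracy for these
    # smooth integrands on [0, 1].
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def I(k, u):
    # I_k as the definite integral of tan^k from 0 to u.
    return simpson(lambda t: math.tan(t) ** k, 0.0, u)

u = 1.0  # safely inside (0, pi/2)
for k in range(2, 7):
    lhs = I(k, u)
    rhs = math.tan(u) ** (k - 1) / (k - 1) - I(k - 2, u)
    assert abs(lhs - rhs) < 1e-6
print("reduction formula checked numerically")
```

For instance, $I_2 = \tan u - u$, which the first iteration of the loop reproduces.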
Read from the link: it cannot be solved using integration by parts.
This is an alternative to the other answer I have provided. I have decided to add it as another answer because I think it uses a sufficiently different approach, and nobody else seems to have hinted at it. We begin just as we begun in my other answer to this same question, and continue until we reach $$\int e^{\tan{u}}\,\mathrm{d}u.$$ From here, the two answers diverge radically, and the content of this post will be entirely concerned with evaluating this integral, which in turn answers the question.
In the other answer, we expanded $e^{\tan{u}}$ as a power series in $\tan{u}$. Now, instead, we shall first expand $\tan{u}$ as a power series in terms of $u$. Then $e^{\tan{u}}$ is an infinite product of exponentials, each of which can be expanded as a power series. From this sequence of expansions, we produce a power series for $e^{\tan{u}}$ which can be integrated term-by-term.
The power series for $\tan{u}$, valid for $|u|<\pi/2$, is $$\tan{u} = \sum_{n=1}^{\infty} {B_{2n}(-4)^{n}(1-4^{n}) \over (2n)!}u^{2n-1} = u+\frac{u^{3}}{3}+\frac{2u^{5}}{15}+\ldots.$$
Hence, for $|u|<\pi/2$, we have \begin{eqnarray*} \exp{\tan{u}} & = & \exp{\left(\sum_{n=1}^{\infty} {B_{2n}(-4)^{n}(1-4^{n}) \over (2n)!}u^{2n-1}\right)}\\ & = & \prod_{n=1}^{\infty}\exp{\left( {B_{2n}(-4)^{n}(1-4^{n}) \over (2n)!}u^{2n-1}\right)}\\ & = & \exp{u}\cdot\exp{\frac{u^{3}}{3}}\cdot\exp{\frac{2u^{5}}{15}}\cdot\ldots\\ & = & \sum_{k=0}^{\infty} \left({B_{2}^{k}(-4)^{k}(1-4)^{k} \over k!2!^{k}}u^{k}\right) \cdot \sum_{k=0}^{\infty} \left({B_{4}^{k}(-4)^{2k}(1-4^{2})^{k} \over k!4!^{k}}u^{3k}\right)\cdot\ldots\\ & = & \left( 1+\frac{u}{1!}+\frac{u^{2}}{2!}+\ldots \right)\left( 1+\frac{u^{3}}{3^{1}\cdot1!}+\frac{u^{6}}{3^{2}\cdot2!}+\ldots \right)\left( 1+\frac{2u^{5}}{15\cdot1!}+\frac{2^{2}u^{10}}{15^{2}\cdot2!}+\ldots \right)\\ & = & 1 + u + \frac{u^{2}}{2!} + \left(\frac{u^{3}}{3!}+\frac{u^{3}}{3\cdot1!}\right) + \left(\frac{u^{4}}{4!}+\frac{u}{1!}\cdot\frac{u^{3}}{3\cdot1!}\right)\\ && \;\; + \left(\frac{u^{5}}{5!} +\frac{u^{2}}{2!}\cdot\frac{u^{3}}{3\cdot1!}+ \frac{2u^{5}}{15}\right)+\ldots\\ & = & 1 + u + \frac{u^{2}}{2} + \frac{u^{3}}{2}+\frac{3u^{4}}{8}+\frac{37u^{5}}{120}+\ldots. \end{eqnarray*}
This gives us a series that I think we should be able to integrate term-by-term, to get \begin{eqnarray*} \int e^{\tan{u}}\,\mathrm{d}u & = & \int(1 + u + \frac{u^{2}}{2} + \frac{u^{3}}{2}+\frac{3u^{4}}{8}+\frac{37u^{5}}{120}+\ldots)\,\mathrm{d}u\\ & = & \text{constant} + u + \frac{u^{2}}{2} + \frac{u^{3}}{6}+\frac{u^{4}}{8}+\frac{3u^{5}}{40}+\frac{37u^{6}}{720}+\ldots. \end{eqnarray*}
Therefore, assuming that our interval of integration is within $|\tan^{-1}{x}|<\pi/2$, we have \begin{eqnarray*} \int\frac{e^{x}}{1+x^{2}}\,\mathrm{d}x = \text{constant} + \tan^{-1}{x} + \frac{(\tan^{-1}{x})^{2}}{2}+ \frac{(\tan^{-1}{x})^{3}}{6}+\frac{(\tan^{-1}{x})^{4}}{8}+\frac{3(\tan^{-1}{x})^{5}}{40}+\frac{37(\tan^{-1}{x})^{6}}{720}+\ldots, \end{eqnarray*} and I'm pretty sure we can adjust this to other intervals of integration using the fact that $\tan{(u+\pi)}=\tan{u}$ for all $u\in\mathbb{R}$ for which $\tan{u}$ is defined. |
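A quick numerical spot-check of this result (my own, not from the thread; the interval and step count are chosen arbitrarily) compares the truncated six-term antiderivative against a direct Simpson's-rule integral of $e^x/(1+x^2)$:

```python
import math

# Spot check (mine, not from the answers): compare the truncated series
#   F(x) = atan(x) + atan(x)^2/2 + atan(x)^3/6 + atan(x)^4/8
#          + 3*atan(x)^5/40 + 37*atan(x)^6/720
# against a direct numerical integral of e^x / (1 + x^2) over [0, 1/2].
COEFFS = [1, 1/2, 1/6, 1/8, 3/40, 37/720]   # coefficients of u, u^2, ..., u^6

def F(x):
    u = math.atan(x)
    return sum(c * u ** (k + 1) for k, c in enumerate(COEFFS))

def simpson(f, a, b, n=1000):               # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

numeric = simpson(lambda x: math.exp(x) / (1 + x * x), 0.0, 0.5)
# F(0.5) - F(0) agrees with `numeric` to about three decimal places;
# the remaining gap is the truncation error of the six-term series.
```

The agreement improves as more terms of the series are kept, consistent with the expansion above.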
Starting with an arbitrary class of sets $\Gamma$, can you generate a free semigroup $\Gamma^*$ over $\Gamma$ with the semigroup operation of concatenation ($\frown$)?
The goal here is to codify a formal language in terms of set theory.
The difficulty is in coming up with a set-theoretic operation that corresponds to concatenation such that it makes every new element resulting from concatenation unique, and is associative.
Given $a,b,c \in \Gamma$, the first thought would be to represent $a \frown b\frown c$ as a 3-tuple $<a,b,c>$. I know I can define tuples set-theoretically via $<a,b>:=\{\{a\},\{a,b\}\}$ but this will violate associativity in concatenation:
$$a \frown(b \frown c)=<a,<b,c>> \ne <<a,b>,c>=(a \frown b)\frown c$$
I have tried other variants but I haven't been able to come up with a set-theoretic description of concatenation that respects associativity, any ideas?
EDIT: This is a related question: https://mathoverflow.net/questions/12190/set-theoretic-foundations-for-formal-language-theory
unfortunately none of the answers provide an explicit definition of concatenation in set-theoretic terms. |
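For what it's worth, one standard escape (a sketch of my own, echoing the "words as functions" idea): encode a word of length $n$ as a function $n \to \Gamma$, i.e. a set of index-letter pairs, and let concatenation shift the second word's indices. Nested pairs never arise, so associativity holds on the nose:

```python
# Sketch (names mine): a word over Gamma is modelled as a set of
# (index, letter) pairs, i.e. a function {0, ..., n-1} -> Gamma.
# Concatenation re-indexes the second word, so no nesting ever appears
# and associativity is exact.
def word(*letters):
    return frozenset(enumerate(letters))

def concat(w1, w2):
    shift = len(w1)
    return w1 | frozenset((i + shift, a) for i, a in w2)

a, b, c = "a", "b", "c"
left = concat(concat(word(a), word(b)), word(c))
right = concat(word(a), concat(word(b), word(c)))
assert left == right == word(a, b, c)   # (a⌢b)⌢c == a⌢(b⌢c)
```

Set-theoretically this is just the usual encoding of a function as a set of ordered pairs; the Kuratowski pair $<a,b>=\{\{a\},\{a,b\}\}$ is only used inside each index-letter pair and is never nested by concatenation.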
You may as well ask: Why teach elementary school children how to perform whole-number arithmetic without teaching them the Peano axioms first? Why teach high-school Algebra without starting with the basics of groups, rings and fields? Why, for that matter, teach children to read first, instead of starting with the fundamentals of grammar and linguistics?...
The evidence says no. What research I'm aware of is all about how giving any overall data about their own performance is actively harmful in promoting further learning. They learn considerably more from instruction about what to do differently in the absence of a grade or numerical score. To repeat: individual scores discourage learning! (Explanations for ...
Here's the example I had which inspired me to post the question in the first place: The game League of Legends was the most-played PC game, in number of hours played, in North America and Europe in 2012. There is a good chance that League of Legends is a part of many of your students' daily life, especially if you are teaching engineering calculus. It doesn'...
The student changed something which was indeterminate ($\infty-\infty$) into something which was not ($\infty\cdot \infty$). How does that not merit a perfect score? Changing indeterminate expressions into determinate ones is, generally speaking, the point. If the professor had some other solution in mind, then they made a mistake. They should have chosen a ...
I'm an old-school biologist (animal physiology) who works with mostly cell biologists. I sent out an email to a bunch of grad students and postdocs I work with. Here is the data so far: Senior undergrad, pharmacology major: absolutely no calculus used in biology courses. She actually laughed when I asked her. Grad student: Undergrad biophysics course used ...
Your student should get full marks. In fact, I would say that even a more complicated example, like $$\int 2x\cos(x^2) dx = \sin(x^2) + C$$ should be awarded full points as long as the student justifies this by differentiating $\sin(x^2)$. In fact, this solution demonstrates deeper understanding of the meaning of these symbols than the variable ...
This is a small tip based on the obvious idea that it needs to feel safe to answer questions. Suppose you need to take the derivative of x sin x, but you want students to speak up about it in the flow of lecture. Here are three ways to do it: "Now I need the derivative of x sin x. What should I do first?" "Now I need the derivative of x sin x. Which rule ...
On quizzes, homeworks, and tests, I repeatedly ask questions like this: Find three different functions that have derivative equal to $x^2 + x$. Forcing them to do antiderivatives and deal with the quantifier on the +C without staring at the notation helps some of them separate the +C from the voodoo magic. I do a similar thing in college algebra classes ...
I have no evidence to back this up, nor do I know how I could obtain such a thing. But I strongly believe this is a vocabulary issue, and I would like to see the term "series" phased out of usage in this context.To many minds, "sequence" and "series" convey the same thing: a list of items. In modern parlance, we speak of a television series (a sequence of ...
One important point to make is that you should ensure that interaction is part of the culture of your lectures. It isn't enough to pose a question now and again and expect them to suddenly leap into action to answer it. So you need to be asking questions consistently through the course. The next point I'd like to make is that it will take time for the ...
The think-pair-share technique is an oldie but a goodie:
- Pose a question
- Give students 1 minute to quietly think of and write down their answer (if you have a computer/projector setup you can use an onscreen timer to enforce the "1 minute" frame)
- Give students 2 minutes to exchange / compare solutions with a neighbor
- Ask for volunteers to share results with ...
Assuming we're talking about mostly US students, most American high schools teach calculus in a way that's very focused on the AP test. The pressure to get students through that with an adequate success rate seems to have lead to some streamlining and cutting corners in ways that mean that what gets taught doesn't really stick, and also doesn't set students ...
I agree with vonbrand that it is important to stress that this is a convention that is used sometimes but not others. But I would add the emphasis that all conventions are local. There are places where it is helpful to adopt this convention but other settings in which it would be a disaster. My preference would be to make sure students understand that the ...
Bad Optimization Problems
I thought that Jack M made an interesting comment about this question: There aren't any. There may be situations where it's possible to apply optimization to solve a problem you've encountered, but in none of these cases is it honestly worth the effort of solving the problem analytically. I optimize path lengths every day when I ...
If this is calc I, that deserves a 5/5. If this is analysis, it depends on what you taught them. Don't you set up a grading rubric ahead of time? What do the 5 point answers look like? What do other not-so-great answers look like?
The specific identity\begin{equation}\tag{A}\tfrac{1}{1 - \sin{x}} + \tfrac{1}{1 + \sin{x}} = 2\sec^{2}{x}\end{equation}as such is probably not often encountered, but simplifications akin to \begin{equation}\tag{B}\tfrac{1}{1-t} + \tfrac{1}{1 + t} = \tfrac{2}{1 - t^{2}}\end{equation}occur frequently. For example, integration via partial fractions ...
The most intuitive reason I know for $0^0 = 1$ comes from interpretation in terms of functions, namely$$\text{There are } |B|^{|A|} \text{ functions } A \to B \text{ for any finite $A$ and $B$}.$$Now, there are no functions $\{\spadesuit\} \to \varnothing$, so $0^1 = 0$, but there exists exactly one function $\varnothing \to \varnothing$ (its set of ...
For some reason there is a wide spread view that the $\epsilon-\delta$ definition of a limit is an obscure thing, relevant only to mathematicians, and that the only reason to care about them is to make limits ``rigorous''. This could not be further from the truth. $\epsilon-\delta$ analysis is all about learning how to control error in the outputs of a ...
No, it is a bad idea to avoid indefinite integrals, the reason being simply that your students will encounter them elsewhere, and therefore need to be familiar with them. Calculus is a service course. The purpose of the course is to make science and engineering majors fluent in the language of calculus as used in their fields. Rather than always using ...
As a professor/teacher I have some insight. You just answered your own question:"my level of math right now is not basic. So, like many, I tend to forget the basics of math, like adding fractions, reducing fractions to lowest form fast, subtracting big numbers"By avoiding using a calculator you'll strengthen the basics.
Safety: When a wrong answer is given, if you can figure out what would have made it right, you help the student feel safer. Teacher: 2*3 is...? Student: 5. Teacher: oh, I bet you're thinking about 2+3. In calculus, teacher: integral of sin x? Student: cos x. Teacher: If I were asking the derivative, you'd be exactly right. What's the derivative of your answers?...
The different notations are to a considerable extent outgrowths of slightly different ways of looking at "the derivative." To me, these different ways are the important thing to discuss; the notation is a by-product. And I think it is quite useful to spend some time discussing these different ways and to mention the different notations in that context. (...
Since $\varphi$ is rather close to the conversion rate between miles and kilometers, one can use the Fibonacci numbers to convert: if $f_n$ is the distance in miles, then $f_{n+1}$ is (roughly) the distance in kilometers. You can use this to facilitate a discussion about, first of all, the convergence of these ratios $\frac{f_{n+1}}{f_n}\to\varphi$, and also ...
I'm no native English speaker, but you can tackle that question from the mathematical point of view as well. The best verb depends on how you view the nature of definite and indefinite integrals.
Operators/Functionals: Indefinite integrals are operators mapping functions to a set of functions or a function (as representative of the equivalence class)....
I would like to encourage consideration of a free textbook. The conventional textbooks are outrageously expensive. (Actually, if your department insists on one of the choices you mentioned, I'd want to go with the cheapest.) Students are suffering from massive amounts of debt these days, with no guarantees of good jobs with which to pay off the debt. ...
While all of your students at this point will have done (extensive) units on manipulating and simplifying expressions with exponents, this is the limits unit. When doing limits questions, most students are only searching their brains for limits techniques - they're extremely unlikely to come up with the $\frac{x^a}{x^b} = x^{a-b}$ rule that had been driven ...
Problem of sloppy notation
The notation is sloppy. Your students are justifiably confused. We've just gotten used to it.
In order to untangle this, we need the notion of free variables and bound variables. These have somewhat confusing, perhaps even counter-intuitive names. So, I will use "local" as synonymous with "bound" and "non-local" as synonymous ...
What are some examples of math history that can be mentioned in calculus classes, either to liven things up or to provide additional perspective / insight on the material being learned? You mention two different goals here. Personally, I have used anecdotes about and discussion of historical events/people in calculus courses to... provide some context for ...
Some students are instead taught to change the substitution variable back into the original variable before evaluating the antiderivative at the bounds.$$\int\limits_0^2 2x\cos(x^2) \;\mathrm{d}x = \int\limits_{x=0}^{x=2} \cos(u) \;\mathrm{d}u = \sin(u) \Big|_{x=0}^{x=2} = \sin(x^2) \Big|_{x=0}^{x=2} = \sin(4) \\\text{versus}\\\int\limits_0^2 2x\cos(x^2)...
The root of the difficulty is that $x$ appears free in $f(z)$, but we are trying to "capture" it with $g(x)$, which is illegal. When we substitute $g(x)$ into $f(g(x))$, we have a variable clash:$$f(g(\color{red} x)) = 3^{5\color{blue}x + 1}$$The red (first) $x$ is a different variable from the blue (second) $x$. This is clearer if we rename the bound ... |
Under what operations are linear context-free languages closed? Suppose $L_1, L_2$ are two linear context free languages. Are there any guarantees about $L_1 \cup L_2$, $L_1 \cap L_2$, $\overline{L_1}$, $L_1 . L_2$, etc.?
For our readers: linear grammars are close to regular grammars, with a single nonterminal at a time, but they may generate letters on both sides, $A \to aBb$ with $A,B$ nonterminal and $a,b$ terminal (or empty). $REG \subset LIN \subset CF$, strictly. Some closure proofs may benefit from another way to define linear languages: as single-turn pushdown languages.
Linear languages are closed under union, construction as for context-free grammars $S\to S_1, S\to S_2$. Likewise they are closed under intersection with regular languages.
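To make the union construction concrete, here is a small sketch (grammar encoding and names are mine): a fresh start symbol with $S\to S_1 \mid S_2$, where every rule body still contains at most one nonterminal, so linearity is preserved.

```python
# Hedged sketch (grammar encoding and names are mine): the union construction
# S -> S1 | S2.  Every rule body below has at most one nonterminal, so the
# combined grammar is still linear.
G1 = {"S1": [["a", "S1", "b"], []]}           # generates {a^n b^n : n >= 0}
G2 = {"S2": [["c", "S2"], []]}                # generates {c^m : m >= 0}
G_union = {"S": [["S1"], ["S2"]], **G1, **G2}

def generate(g, sym, depth):
    # all strings derivable from `sym` using at most `depth` nested rule
    # applications (a crude brute-force enumerator, for checking only)
    if sym not in g:
        return {sym}                          # terminal symbol
    if depth == 0:
        return set()
    out = set()
    for body in g[sym]:
        parts = [""]
        for s in body:
            parts = [p + q for p in parts for q in generate(g, s, depth - 1)]
        out |= set(parts)
    return out

# generate(G_union, "S", 6) contains "", "ab", "aabb", "c", "cc", "ccc", ...
```

The same encoding makes the failure of closure under concatenation visible: a rule like $S\to S_1S_2$ would put two nonterminals in one body, which is exactly what linearity forbids.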
They are not closed under intersection: $\{ a^n b^n c^m \mid m,n \ge 0 \} \cap \{ a^n b^m c^m \mid m,n \ge 0 \}$.
Hence they cannot be closed under complement, but it would be nice to have a concrete example.
Neither are they closed under concatenation, intuitively because $S\to S_1S_2$ is not a linear structure. Same for Kleene star.
Linear languages have a pumping property which is similar to that of the context-free languages, except that the decomposition $z = uvwxy$ may additionally be required to have $|uvxy|\le m$, where $m$ is the pumping constant. This is similar to the requirement for regular languages, where pumping may be assumed to happen at the beginning. Linear languages also have a linear structure (like finite-state computations), but the symbols that are generated first are at both ends of the string. With this property it can easily be shown that $K =\{a^nb^n \mid n\ge 1\}^2$ is not linear.
Both regular and context-free are closed under rotation, but linear languages are not: $K$ is the rotation of linear $\{ b^m a^n b^n a^m \mid m,n\ge 1 \}$ (and a suitable regular intersection).
Regular languages are closed under quotient, but linear (and context-free) languages are not. In fact the operation is very powerful: the RE languages are quotients of two linear languages.
On the other hand, they are closed under quotient with regular languages. That follows as they form a full trio (i.e., they are closed under homomorphisms, inverse homomorphisms, and intersection with regular languages). Consequently/equivalently, they are closed under finite state transductions.
Of course, they are closed under reversal. |
I have a matrix $P \in M_n(\mathbb N)$, where
$$ P = \begin{bmatrix} 0 & P_{12} & \ldots & P_{1n}\\ P_{21} & 0 & \ldots & P_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ P_{n1} & P_{n2} & \ldots & 0 \end{bmatrix}$$
with $P_{ii} = 0$ for all $i \in \{1,2,\dots,n\}$. I need to find matrices $A, B \in M_n(\mathbb N)$ that satisfy $A + B = P$ and that satisfy the following constraints
$$\forall i\in [\![ 1,n]\!] \sum_{k=1}^{n} A_{ik} = \sum_{k=1}^{n} A_{ki}$$
such that $\sum_{i=1}^{n} \sum_{k=1}^{n} A_{ik}$ is
maximized.
I need to implement an algorithm to solve this problem. On my input data I have approximately: $ n = 16 $ and $ \sum_{i=1}^{n}\sum_{k=1}^{n} P_{ik} = 60000 $ so the brute-force approach is out of the question. I don't know what could be a good approximation algorithm, so my current approach is to reduce the problem to a binary integer programming problem and then apply Branch-and-Cut, but I have serious doubts about its effectiveness for this specific problem.
Finding the optimal solution in polynomial time would be perfect (not sure if it's possible), but I can satisfy myself with a good approximation algorithm. Not having a strong background in CS I'm a bit confused, help would be greatly appreciated ! |
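For what it's worth, a hedged observation beyond the ILP route (my own, not from the post): the balance constraint says $A$ is a circulation on the directed graph with arc capacities $P_{ik}$, and maximising $\sum_{i,k} A_{ik}$ is a min-cost circulation with cost $-1$ per unit of flow, which standard min-cost-flow algorithms solve in polynomial time. A plain cycle-cancelling sketch (all names mine, unoptimised, pseudo-polynomial but fine at this scale):

```python
# Sketch (mine): maximise sum(A) subject to 0 <= A <= P entrywise and every
# row sum of A equalling the matching column sum.  A is then a circulation;
# give each unit of flow cost -1 and cancel negative-cost cycles in the
# residual graph (found by Bellman-Ford) until none remain.
def max_balanced_part(P):
    n = len(P)
    A = [[0] * n for _ in range(n)]

    def residual_arcs():
        arcs = []                     # (u, v, cost, residual capacity)
        for u in range(n):
            for v in range(n):
                if A[u][v] < P[u][v]:
                    arcs.append((u, v, -1, P[u][v] - A[u][v]))  # add flow
                if A[u][v] > 0:
                    arcs.append((v, u, 1, A[u][v]))             # undo flow
        return arcs

    while True:
        arcs = residual_arcs()
        dist, pred = [0] * n, [None] * n   # all-zero dist = virtual source
        x = None
        for _ in range(n):
            x = None
            for j, (u, v, c, _) in enumerate(arcs):
                if dist[u] + c < dist[v]:
                    dist[v], pred[v], x = dist[u] + c, j, v
        if x is None:                  # no negative cycle left: A is optimal
            return A
        for _ in range(n):             # walk back to land on the cycle itself
            x = arcs[pred[x]][0]
        cycle, v = [], x
        while True:
            cycle.append(pred[v])
            v = arcs[pred[v]][0]
            if v == x:
                break
        push = min(arcs[j][3] for j in cycle)
        for j in cycle:
            u, v, c, _ = arcs[j]
            if c == -1:
                A[u][v] += push
            else:
                A[v][u] -= push
```

Setting `B[i][k] = P[i][k] - A[i][k]` then gives the decomposition. On a quick check with `P = [[0, 2], [1, 0]]` this returns `A = [[0, 1], [1, 0]]`: the 2-cycle saturated at the smaller of the two capacities.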
Induction is a special type of proof technique mathematicians use. It tends to be used for statements about the integers, where recursion comes into play. The structure involves two key steps:
- An initial statement (sometimes known as the base case) - the statement is explicitly proven for the first integer you permit. (Tends to be \(n=1\).)
- An inductive step - the statement is first assumed to hold when \(n=k\). Under such an assumption, it should then hold for \(n=k+1\).
The principle of mathematical induction asserts that then it will hold for all integers you're required to consider.
The syllabus asks for two types of proofs to consider. For both of them, there's an extra subtlety to handle in the inductive step.
- Sums: When stating what to prove when \(n=k+1\), you should always state the last two terms in the sum. That makes it more obvious as to how to use the assumption.
- Divisibility: When stating the assumption, make sure that you introduce some other integer! I personally use \(M\) a lot. Then, the assumption may need to be used in a rearranged form.
Example:
We will show that \(4^n + 15n - 1\) is divisible by 9 for all positive integers \(n\), using mathematical induction.
Initial statement
Since we're dealing with the positive integers, the first integer to consider is \(n=1\). We manually compute the expression when \(n=1\):
\[ 4^1 + 15(1) - 1 = 4+15-1 = 18 = 9\times 2 \]
which is divisible by \(9\). So the statement is true when \(n=1\).
Inductive step
When we assume the statement holds for \(n=k\) (where \(k\geq 1\)), we're saying that
\[ 4^k + 15k - 1 = 9M \]
for some integer \(M\). We wish to prove it holds when \(n=k+1\), i.e.
\[ 4^{k+1} + 15(k+1) - 1\text{ is divisible by 9.} \]
Here, we can use index laws to spot that \(4^{k+1} = 4\times 4^k\), and observe that \(4^k\) is perhaps the 'ugliest' thing in the assumed statement. So we make that the subject: \(\boxed{4^k = 9M - 15k + 1}\) and continue:
\begin{align*}
4^{k+1} + 15(k+1) - 1 &= 4(4^k) + 15k + 14\\
&= 4(9M-15k + 1) + 15k + 14\\
&= 36M - 45k+18\\
&= 9(4M - 5k + 2).
\end{align*}
Since we see that 9 can be factorised (and the stuff in the brackets is still an integer), the expression is therefore divisible by 9. Hence the statement holds when \(n=k+1\).
As long as we have these two steps, we can conclude by mathematical induction that the expression is divisible by \(9\) for every positive integer. |
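The algebra above can also be sanity-checked numerically (a quick script of my own; a check, not a substitute for the proof):

```python
# Numerical sanity check (not a proof!): the proved statement says
# 4^n + 15n - 1 is a multiple of 9 for every positive integer n.
for n in range(1, 200):
    assert (4 ** n + 15 * n - 1) % 9 == 0

# The inductive step's bookkeeping can be checked too: if 4^k + 15k - 1 = 9M,
# then the next case equals 9(4M - 5k + 2), as derived above.
for k in range(1, 50):
    M = (4 ** k + 15 * k - 1) // 9
    assert 4 ** (k + 1) + 15 * (k + 1) - 1 == 9 * (4 * M - 5 * k + 2)
```

Passing for many values of \(n\) is of course weaker than the induction, which covers all of them at once.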
What is the expected value of the determinant over the uniform distribution of all possible 1-0 NxN matrices? What does this expected value tend to as the matrix size N approaches infinity?
As everyone above has pointed out, the expected value is $0$.
I expect that the original poster might have wanted to know about how big the determinant is. A good way to approach this is to compute $\sqrt{E((\det A)^2)}$, so there will be no cancellation.
Now, $(\det A)^2$ is the sum over all pairs $v$ and $w$ of permutations in $S_n$ of $$(-1)^{\ell(v) + \ell(w)} (1/2)^{2n-\# \{ i : v(i) = w(i) \}}$$
Group together pairs $(v,w)$ according to $u := w^{-1} v$. We want to compute $$(n!) \sum_{u \in S_n} (-1)^{\ell(u)} (1/2)^{2n-\# (\mbox{Fixed points of }u)}$$
This is $(n!)^2/2^{2n}$ times the coefficient of $x^n$ in $$e^{2x-x^2/2+x^3/3 - x^4/4 + \cdots} = e^x (1+x).$$
So $\sqrt{E((\det A)^2)}$ is $$\sqrt{(n!)^2/2^{2n} \left(1/n! + 1/(n-1)! \right)} = \sqrt{(n+1)!}/ 2^n$$
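The closed form $E((\det A)^2) = (n+1)!/4^n$ implied by this answer can be confirmed by brute force for tiny $n$ (my own sketch; exhaustive enumeration, so only feasible for very small matrices):

```python
from fractions import Fraction
from itertools import product
from math import factorial

def det(M):
    # integer determinant by cofactor expansion along the first row
    # (exponential time, but fine for the tiny matrices enumerated here)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def mean_det_sq(n):
    # exact average of det(A)^2 over all 2^(n^2) n-by-n 0-1 matrices
    total = sum(det([list(bits[i * n:(i + 1) * n]) for i in range(n)]) ** 2
                for bits in product((0, 1), repeat=n * n))
    return Fraction(total, 2 ** (n * n))

# The claim: mean_det_sq(n) == factorial(n + 1) / 4**n, e.g. 3/8 for n = 2.
```

The derivation above is exact (the coefficient extraction from $e^x(1+x)$ is an identity, not an asymptotic), so the enumeration should agree with the formula on the nose.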
If $N \ge 2$, then the expected value is $0$ since interchanging two rows preserves the distribution but negates the determinant.
It is a little more convenient to work with random (-1,+1) matrices. A little bit of Gaussian elimination shows that the determinant of a random n x n (-1,+1) matrix is $2^{n-1}$ times the determinant of a random n-1 x n-1 (0,1) matrix. (Note, for instance, that Turan's calculation of the second moment ${\bf E} \det(A_n)^2$ is simpler for (-1,+1) matrices than for (0,1) matrices, it's just n!. It is also clearer why the determinant is distributed symmetrically around the origin.)
The log $\log |\det(A_n)|$ of a (-1,+1) matrix is known to asymptotically be $\log \sqrt{n!} + O( \sqrt{n \log n} )$ with probability $1-o(1)$; see this paper of Vu and myself. A more precise result should be that the logarithm is asymptotically normally distributed with mean $\log \sqrt{(n-1)!}$ and variance $2 \log n$. This result was claimed by Girko; the proof is unfortunately not quite complete, but the result is still likely to be true.
For some further results of this nature, see Exercise 5.64 of
Enumerative Combinatorics, vol. 2. This exercise deals with the uniform distribution on (0,1)-matrices or $(-1,1)$-matrices, but the arguments can be carried over to other distributions where the matrix entries are i.i.d. The proofs are similar to the argument in David Speyer's comment.
5.64. a.[2+] Let $\mathcal D_n$ be the set of all $n\times n$ matrices of $+1$'s and $-1$'s. For $k\in\mathbb P$ let \begin{align*} f_k(n)&= 2^{-n^2} \sum_{M\in\mathcal D_n} (\det M)^k \\ g_k(n)&= 2^{-n^2} \sum_{M\in\mathcal D_n} (\operatorname{per} M)^k, \end{align*} where $\operatorname{per}$ denotes the permanent function defined by $$\operatorname{per}(m_{ij})= \sum_{\pi\in\mathfrak{S}_n} m_{1,\pi(1)} m_{2,\pi(2)} \dots m_{n,\pi(n)}.$$ Find $f_k(n)$ and $g_k(n)$ explicitly when $k$ is odd or $k=2$. b.[3-] Show that $f_4(n)=g_4(n)$, and show that $$\sum_{n\ge 0} f_4(n) \frac{x^n}{n!} = (1-x)^{-3} e^{-2x}. \tag{5.120}$$ HINT. We have $$\sum_M (\det M)^4 = \sum_M \left(\sum_{\pi\in\mathfrak S_n} \pm m_{1,\pi(1)}\dots m_{n,\pi(n)}\right)^4.$$ Interchange the order of summation and use Exercise 5.63. c.[2+] Show that $f_{2k}(n)<g_{2k}(n)$ if $k\ge 3$ and $n\ge 3$. d.[3-] Let $\mathcal D'_n$ be the set of all $n\times n$ 0-1 matrices. Let $f'_k(n)$ and $g'_k(n)$ be defined analogously to $f_k(n)$ and $g_k(n)$. Show that $f'_k(n)=2^{-kn} f_k(n+1)$. Show also that \begin{align*} g'_1(n) &= 2^{-n} n!\\ g'_2(n) &= 4^n n!^2 \left(1+\frac1{1!}+\frac1{2!}+\dots+\frac1{n!}\right) \end{align*}
Unless I'm missing something, this also follows immediately from linearity and multiplicativity of expectation, treating each entry as independently $0-1$ with probability $1/2$. Every permutation yields the same expected value in the sum, $\pm (1/2)^n$ depending on sign, and the number of even and odd permutations is identical (for $n \ge 2$, as noted above).
It's probably worth mentioning that an old result of Komlos shows that despite this, the probability the determinant is actually 0 is $o(1)$.
Is it not zero whenever $n \geq 2$? Let $A$ be a $n \times n$ permutation matrix with determinant $-1$ (which requires $n \geq 2$). Then the uniform distribution of a random $n \times n$ $(0,1)$-matrix $X$ is the same as the distribution of $AX$. The determinant is multiplicative, hence Det$(AX)=$Det$(A)$Det$(X)=-$Det$(X)$. Hence the probability of Det$(X)=x$ is the same as the probability of Det$(X)=-x$.
Miodrag Zivkovic has actually done a classification on small orders of 0-1 matrices by rank and absolute determinant value. You may be interested in the tables in his Arxiv paper http://arxiv.org/abs/math.CO/0511636 .
Gerhard "Ask Me About System Design" Paseman, 2010.01.26 |
In Bayes' theorem, $$p(y|x) = \frac{p(x|y)p(y)}{p(x)},$$ and from the book I'm reading, $p(x|y)$ is called the
likelihood, but I assume it's just the conditional probability of $x$ given $y$, right?
The
maximum likelihood estimation tries to maximize $p(x|y)$, right? If so, I'm badly confused, because $x,y$ are both random variables, right? To maximize $p(x|y)$ is just to find out the $\hat y$? One more problem, if these 2 random variables are independent, then $p(x|y)$ is just $p(x)$, right? Then maximizing $p(x|y)$ is to maximize $p(x)$.
Or maybe, $p(x|y)$ is a function of some parameters $\theta$, that is $p(x|y; \theta)$, and MLE tries to find the $\theta$ which can maximize $p(x|y)$? Or even that $y$ is actually the parameters of the model, not random variable, maximizing the likelihood is to find the $\hat y$?
UPDATE
I'm a novice in machine learning, and this problem is a confusion from the stuff I read from a machine learning tutorial. Here it is, given an observed dataset $\{x_1,x_2,...,x_n\}$, the target values are $\{y_1,y_2,...,y_n\}$, and I try to fit a model over this dataset, so I assume that, given $x$, $y$ has a form of distribution named $W$ parameterized by $\theta$, that is $p(y|x; \theta)$, and I assume this is the
posterior probability, right?
Now to estimate the value of $\theta$, I use MLE. OK, here comes my problem, I think the likelihood is $p(x|y;\theta)$, right? Maximizing the likelihood means I should pick the right $\theta$ and $y$?
If my understanding of likelihood is wrong, please show me the right way. |
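One hedged illustration of the usual resolution (my own sketch, not from any particular book): the likelihood is the probability of the fixed, observed data viewed as a function of the parameter, and MLE picks the parameter value maximising it. For Bernoulli coin flips the maximiser is the sample mean, recovered here by grid search:

```python
import math

# Hedged illustration (mine): the likelihood is p(data; theta) viewed as a
# function of the parameter theta, with the data held fixed.  For i.i.d.
# Bernoulli(theta) coin flips, the MLE is the sample mean, which a simple
# grid search over theta recovers.
data = [1, 1, 0, 1, 0, 1, 1, 1]               # made-up observations

def log_likelihood(theta, xs):
    # log p(xs; theta) for i.i.d. Bernoulli(theta) observations
    return sum(math.log(theta if x == 1 else 1 - theta) for x in xs)

grid = [i / 1000 for i in range(1, 1000)]     # theta values in (0, 1)
theta_hat = max(grid, key=lambda t: log_likelihood(t, data))
# theta_hat lands on the sample mean 6/8 = 0.75
```

Note the roles: `data` is random before observation but fixed once observed; `theta` is not a random variable here but an unknown parameter that the likelihood is a function of.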
Full Title Journal Links Status Groups TeV with the ATLAS detectorAngular analysis of $B^0_d \rightarrow K^{*}\mu^+\mu^-$ decays in $pp$ collisions at $\sqrt{s}= 8$ JHEP Inspire, arXiv, Figures JHEP 10 (2018) 047
(Submitted: 2018/05/10)
BPHY Search for a Structure in the $B^0_s \pi^\pm$ Invariant Mass Spectrum with the ATLAS Experiment PRL Inspire, arXiv, Figures Phys. Rev. Lett. 120 (2018) 202007
(Submitted: 2018/02/06)
BPHY Measurement of quarkonium production in proton--lead and proton--proton collisions at $5.02$ $\mathrm{TeV}$ with the ATLAS detector EPJC Inspire, arXiv, Figures Eur. Phys. J. C 78 (2018) 171
(Submitted: 2017/09/10)
HION / BPHY TeVMeasurement of $b$-hadron pair production with the ATLAS detector in proton-proton collisions at $\sqrt{s}=8$ JHEP Inspire, arXiv, Figures JHEP 11 (2017) 62
(Submitted: 2017/05/09)
BPHY TeV with the ATLAS detectorMeasurement of the prompt $J/\psi$ pair production cross-section in $pp$ collisions at $\sqrt{s} = 8$ EPJC Inspire, arXiv, Figures Eur. Phys. J. C77 (2017) 76
(Submitted: 2016/12/09)
BPHY Measurements of $\psi(2S)$ and $X(3872) \to J/\psi\pi^{+}\pi^{-}$ production in pp collisions $\sqrt{s} = 8$ with the ATLAS detector JHEP Inspire, arXiv, Figures JHEP 01 (2017) 117
(Submitted: 2016/10/28)
BPHY Measurement of the relative width difference of the $B 0$--$\bar B 0$ system with the ATLAS detector JHEP Inspire, arXiv, Figures JHEP06 (2016) 081
(Submitted: 2016/05/24)
BPHY Study of the rare decays of $B^0_s$ and $B^0$ into muon pairs from data collected during the LHC Run 1 with the ATLAS detector EPJC Inspire, arXiv, Figures Eur. Phys. J. C 76 (2016) 513
(Submitted: 2016/04/14)
BPHY Measurement of the CP-violating phase $\phi_s$ and the $B^0_s$ meson decay width difference with $B^0_s \to J/\psi\phi$ decays in ATLAS JHEP Inspire, arXiv, Figures JHEP 08 (2016) 147
(Submitted: 2016/01/13)
BPHY TeV with the ATLAS detectorMeasurement of the differential cross-sections of prompt and non-prompt production of $J/\psi$ and $\psi(2\mathrm{S})$ in $pp$ collisi ons at $\sqrt{s} = 7$ and $8$ EPJC Inspire, arXiv, Figures Eur. Phys. J. C 76(5), 1-47, (2016)
(Submitted: 2015/12/11)
BPHY TeV with the ATLAS detectorMeasurement of $D^{*\pm}$, $D^\pm$ and $D_s^\pm$ meson production cross sections in $pp$ collisions at $\sqrt{s}=7$ NPB Inspire, arXiv, Figures Nucl. Phys. B 907 (2016) 717
(Submitted: 2015/12/09)
BPHY TeV with the ATLAS detectorDetermination of the ratio of $b$-quark fragmentation fractions $f_s/f_d$ in $pp$ collisions at $\sqrt{s}=7$ PRL Inspire, arXiv, Figures Phys. Rev. Lett. 115, 262001 (2015)
(Submitted: 2015/07/31)
BPHY Measurement of the branching ratio $\Gamma(\Lambda_b^0 \rightarrow \psi(2S)\Lambda^0)/\Gamma(\Lambda_b^0 \rightarrow J/\psi\Lambda^0)$ with the ATLAS detector PLB Inspire, arXiv, Figures Physics Letters B 751 (2015) 63-80
(Submitted: 2015/07/29)
BPHY Study of the $B_c^+ \to J/\psi D_s^+$ and $B_c^+ \to J/\psi D_s^{*+}$ decays with the ATLAS detector EPJC Inspire, arXiv, Figures Eur. Phys. J. C, 76(1), 1-24 (2016)
(Submitted: 2015/07/25)
BPHY Observation and measurements of the production of prompt and non-prompt J/ψ mesons in association with a Z boson in proton-proton collisions at √s = 8 TeV with the ATLAS detector EPJC Inspire, arXiv, Figures Eur. Phys. J. C75 (2015) 229
(Submitted: 2014/12/19)
BPHY Search for the $X_b$ and other hidden-beauty states in the $\pi^+ \pi^- \Upsilon$(1S) channel at ATLAS PLB Inspire, arXiv, Figures Physics Letters B 740 (2015), pp. 199-217
(Submitted: 2014/10/16)
BPHY Measurement of the production cross-section of $\psi(2S) \to J/\psi(\to \mu^+\mu^-)\pi^+\pi^-$ in $pp$ collisions at $\sqrt{s} = 7$ TeV at ATLAS JHEP Inspire, arXiv, Figures JHEP09(2014)079
(Submitted: 2014/07/21)
BPHY Flavour tagged time dependent angular analysis of the $B_s \to J/\psi\phi$ decay and extraction of $\Delta\Gamma_s$ and the weak phase $\phi_s$ in ATLAS PRD Inspire, arXiv, Figures Phys. Rev. D. 90, 052007 (2014)
(Submitted: 2014/07/05)
BPHY Observation of an excited $B_{c}$ meson state PRL Inspire, arXiv, Figures Phys. Rev. Lett. 113, 212004 (2014)
(Submitted: 2014/07/03)
BPHY Measurement of $\chi_{c1}$ and $\chi_{c2}$ production with $\sqrt{s} = 7$~TeV $pp$ collisions at ATLAS JHEP Inspire, arXiv, Figures JHEP07(2014)154
(Submitted: 2014/04/28)
BPHY Measurement of the parity violating asymmetry parameter $\alpha_b$ and the helicity amplitudes for the decay $\Lambda_b^0 \to J/\psi\Lambda^0$ with the ATLAS detector PRD Inspire, arXiv, Figures PhysRevD.89.092009
(Submitted: 2014/04/04)
BPHY Measurement of the production cross-section of prompt J/Psi mesons in association with a W boson in pp collisions at √s = 7 TeV with the ATLAS detector JHEP Inspire, arXiv, Figures JHEP04(2014)172
(Submitted: 2014/01/14)
BPHY Measurement of the differential cross section of B+ meson production in pp collisions at √s = 7 TeV at ATLAS JHEP Inspire, arXiv, Figures JHEP10(2013)042
(Submitted: 2013/06/29)
BPHY Measurement of $\Upsilon$ production in 7 TeV pp collisions at ATLAS PRD Inspire, arXiv, Figures Phys. Rev. D 87, 052004 (2013)
(Submitted: 2012/11/30)
BPHY Time dependent angular analysis of the decay $B_s \to J/\psi\phi$ and extraction of $\Delta\Gamma_s$ and the CP-violating weak phase $\phi_s$ by ATLAS JHEP Inspire, arXiv, Figures JHEP 1212 (2012) 072
(Submitted: 2012/08/02)
BPHY Measurement of the $\Lambda_b$ lifetime and mass in the ATLAS experiment PRD Inspire, arXiv, Figures Phys. Rev. D 87, 032002 (2013)
(Submitted: 2012/07/10)
BPHY Measurement of the b-hadron production cross section using decays to D*muX final states in pp collisions at √s = 7 TeV with the ATLAS detector NPB Inspire, arXiv, Figures Nucl. Phys. B 864 (2012) 341-381
(Submitted: 2012/06/14)
BPHY Search for the decay $B^0_s \to \mu^+\mu^-$ with the ATLAS detector PLB Inspire, arXiv, Figures Phys. Lett. B 713 (2012) 387-407
(Submitted: 2012/04/03)
BPHY Observation of a new $\chi_b$ state in radiative transitions to $\Upsilon(1S)$ and $\Upsilon(2S)$ at ATLAS PRL Inspire, arXiv, Figures Phys. Rev. Lett. 108 (2012) 152001
(Submitted: 2011/12/21)
BPHY $\Upsilon(1S)$ differential production cross section PLB Inspire, arXiv, Figures Phys.Lett. B 705 (2011) 9
(Submitted: 2011/06/27)
BPHY Measurement of the differential cross-sections of inclusive, prompt and non-prompt J/ψ production in proton-proton collisions at √s = 7 TeV NPB Inspire, arXiv, Figures Nucl. Phys. B 850 (2011) 387-444
(Submitted: 2011/04/15)
This is just a formal proof using an idea of Loïc Teyssier that appeared in the comments. We just rewrite both sides of the equation as double sums (using geometric series) and notice that they are equal after changing the order of summation.
Since $$\frac{q}{1-q} = \sum_{k=1}^\infty q^k,$$ we have for the left hand side
\begin{align}- \sum_{n=1}^\infty \frac{(-1)^n}{2^n-1} & = - \sum_{n=1}^\infty \frac{(-1)^n2^{-n}}{1-2^{-n}} \\& = - \sum_{n=1}^\infty (-1)^n\sum_{k=1}^\infty (2^{-n})^k \\& = \sum_{k,n = 1}^\infty (-1)^{n+1} 2^{-nk}.\end{align}
Similarly, for the right hand side we use $$\frac{q}{1+q} = \sum_{k=1}^\infty (-1)^{k+1} q^k $$ to get\begin{align}\sum_{n=1}^\infty \frac{1}{1+2^n} &= \sum_{n=1}^\infty \frac{2^{-n}}{1+2^{-n}}\\& = \sum_{n=1}^\infty \sum_{k=1}^\infty (-1)^{k+1}(2^{-n})^k \\& = \sum_{k,n=1}^\infty (-1)^{k+1} 2^{-nk}.\end{align} |
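Although it is not part of the proof, the identity is easy to sanity-check numerically; a minimal sketch (both series converge geometrically, so 60 terms are more than enough):

```python
# Left side: -sum_{n>=1} (-1)^n / (2^n - 1)
lhs = -sum((-1) ** n / (2 ** n - 1) for n in range(1, 60))

# Right side: sum_{n>=1} 1 / (1 + 2^n)
rhs = sum(1 / (1 + 2 ** n) for n in range(1, 60))

print(abs(lhs - rhs) < 1e-12)  # the partial sums agree to machine precision
```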
I am trying to solve the question 6.12 in Arora-Barak (Computational Complexity: A modern approach). The question asks you to show that the $\mathsf{PATH}$ problem (decide whether a graph $G$ has a path from a given node $s$ to another given node $t$) which is complete for $\mathbf{NL}$ is also contained in $\mathbf{NC}$ (this is easy). The question then also makes a remark that this implies that $\mathbf{NL} \subseteq \mathbf{NC}$ which is not obvious to me.
I think in order to show this, one has to show that $\mathbf{NC}$ is closed under logspace reductions, i.e
$$(1): B \in \mathbf{NC} \hbox{ and } A \le_l B \Longrightarrow A \in \mathbf{NC}$$
where $\le_l$ is the logspace reduction defined as
$$A \le_l B :\Longleftrightarrow (\exists M \hbox{ TM}, \forall x)[x \in A \Longleftrightarrow M(x) \in B]$$
($M$ is a TM which runs in logarithmic space).
I would appreciate it if someone could give a hint for proving statement $(1)$.
Polya-Hurwitz program.
This may become more interesting in light of the recent progress in the
Polya-Jensen program by Griffin, Ono, Rolen, Zagier.
We will first provide definitions of some functions involved.
The Riemann Xi-function $\Xi(z)$ is related to the Riemann zeta-function $\zeta(s)$ via ([A], [B]): $\Xi(z)=\xi(\tfrac{1}{2}+iz)$, $\xi(s)=\tfrac{1}{2}s(s-1)\pi^{-s/2}\Gamma\left(\frac{s}{2}\right)\zeta(s)$.
Riemann $\Xi(z)$ can be expressed as a Fourier transform of a positive, fast decaying, and even kernel $\Phi(t)$ ([A], [B]):
\begin{equation}\Xi(z)=2\int_0^{\infty}\Phi(t)\cos(zt)\mathrm{d}t,\tag{1}\end{equation}
where
\begin{equation}\Phi(t)=\sum_{k\geqslant 1}\phi_k(t)=\Phi(-t),\tag{2}\end{equation}
\begin{equation}\phi_k(t)=\left(4\pi^2 k^4 e^{9t/2}-6\pi k^2e^{5t/2}\right)\exp\left(-\pi k^2 e^{2t}\right)\tag{3}.\end{equation}
The Polya aspect of this approach is the following:
Truncate the Kernel $\Phi(t)$ of (2) and/or the integration range in (1) such that the resulting Fourier transform leads to a family of entire functions which only have real roots.
One such candidate is given in [C]:\begin{equation}\Phi_{\color{red}n}(t)=(1/2)\sum_{1\leqslant k\leqslant {\color{red}n}}\left(\phi_k(t)+\phi_k(-t)\right)=\Phi_{\color{red}n}(-t)\tag{4}\end{equation}
\begin{equation}\Xi_{\color{red}n}(z)=2\int_0^{(1/2)\log {\color{red}n}}\Phi_{\color{red}n}(t)\cos(zt)\mathrm{d}t=\Xi_{\color{red}n}(-z).\tag{5}\end{equation}
We refer to [D] and [E] for a near complete review on the zeros of entire functions as Fourier transforms.
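As an illustration (mine, not from the references), the truncated kernel (4) and the transform (5) are straightforward to evaluate numerically. The sketch below uses a plain trapezoid rule and makes no claim about the reality of the zeros:

```python
import numpy as np

def phi_k(t, k):
    # Eq. (3)
    t = np.asarray(t, dtype=float)
    return ((4 * np.pi**2 * k**4 * np.exp(4.5 * t)
             - 6 * np.pi * k**2 * np.exp(2.5 * t))
            * np.exp(-np.pi * k**2 * np.exp(2 * t)))

def Phi_n(t, n):
    # Symmetrized truncated kernel, Eq. (4)
    return 0.5 * sum(phi_k(t, k) + phi_k(-t, k) for k in range(1, n + 1))

def Xi_n(z, n, points=2001):
    # Eq. (5): 2 * int_0^{(1/2) log n} Phi_n(t) cos(z t) dt, trapezoid rule
    t = np.linspace(0.0, 0.5 * np.log(n), points)
    f = Phi_n(t, n) * np.cos(z * t)
    dt = t[1] - t[0]
    return 2.0 * dt * (np.sum(f) - 0.5 * (f[0] + f[-1]))

# Xi_n inherits the symmetry Xi_n(-z) = Xi_n(z), since cos is even
print(np.isclose(Xi_n(1.0, 10), Xi_n(-1.0, 10)))
```

With such a routine one can scan $\Xi_n$ along the real axis and along horizontal lines in $S_{1/2}(z)$ to experiment with candidate kernels before attempting a proof.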
The Hurwitz aspect of this approach is the following:
Corollary of Hurwitz's theorem in complex analysis (applied to our case) [F]:
If $\Xi(z)$ and $\{\Xi_n(z)\}$ are analytic functions on a domain $S_{1/2}(z)=\{z: 0<Im(z)<1/2\}$, $\{\Xi_n(z)\}$ converges to $\Xi(z)$ uniformly on compact subsets of $S_{1/2}(z)$, and all but finitely many $\Xi_n(z)$ have no zeros in $S_{1/2}(z)$, then either $\Xi(z)$ is identically zero or $\Xi(z)$ has no zeros in $S_{1/2}(z)$.
The functional equation for $\zeta(s)$ becomes $\Xi(-z)=\Xi(z)$. The candidate of $\Xi_n(z)$ in (5) automatically satisfies this functional equation.
Another benefit of the Polya-Hurwitz approach is that the entrance barrier is relatively low (compared to other approaches, which usually require advanced knowledge of analytic number theory).
To get started, one only needs to know Fourier transform, basic complex analysis, some knowledge of entire functions, polynomials etc. So anyone who has math training with the college undergraduate math major may start to work on the Polya-Hurwitz approach and learn other necessary new math as he/she goes.
The most difficult part of the Polya-Hurwitz approach seems to be the following: proving (for example) that all the zeros of $\Xi_n(z)$ in (5) are real in $S_{1/2}(z)$.
One may need to have several iterations: guess one form of the Kernel like $\Phi_n(1,t)$ and complete the integration to get explicit expression for $\Xi_n(1,z)$. If all the zeros of $\Xi_n(1,z)$ are found not to be all real in $S_{1/2}(z)$, then move on to $\Phi_n(2,t)$ and $\Xi_n(2,z)$...
References:
[A] Titchmarsh,"The Theory of the Riemann Zeta-Function",
(1986).
[B] Edwards, "Riemann's Zeta Function", (1974).
[C] Shi, "On the zeros of Riemann Xi-function", (2017) arXiv:1706.08868.
[D] Dimitrov and Rusev, “ZEROS OF ENTIRE FOURIER TRANSFORMS” (2001), 108 page review paper.
[E] Hallum, “ZEROS OF ENTIRE FUNCTIONS REPRESENTED BY FOURIER TRANSFORMS” (2014), Master thesis.
[F] Conway, "Functions of One Complex Variable",(1978) |
I am interested in the practical method and I like to discover if it is cheap enough to be done as an experiment in a high school.
Method
The method is based on measuring variations in perceived revolution time of Io around Jupiter. Io is the innermost of the four Galilean moons of Jupiter and it takes around 42.5 hours to orbit Jupiter.
The revolution time can be measured by calculating the time interval between the moments Io enters or leaves Jupiter's shadow. Depending on the relative position of Earth and Jupiter, you will either be able to see Io entering the shadow but not leaving it or you will be able to see it leaving the shadow, but not entering. This is because Jupiter will obstruct the view in one of the cases.
You might expect that if you keep looking at Io for a few weeks or months you will see it enter/leave Jupiter's shadow at roughly regular intervals matching Io's revolution around Jupiter.
However, even after introducing corrections for Earth's and Jupiter's orbit eccentricity, you still notice that for a few weeks, as Earth moves away from Jupiter, the time between observations becomes longer (eventually by a few minutes). At other times of the year, you notice that for a few weeks, as Earth moves towards Jupiter, the time between observations becomes shorter (again, eventually by a few minutes). This difference of a few minutes comes from the fact that when Earth is further away from Jupiter it takes light more time to reach you than when Earth is closer to Jupiter.
Say you have made two observations of Io entering Jupiter's shadow, at $t_0$ and $t_1$, separated by $n$ of Io's revolutions about Jupiter of period $T$. If the speed of light were infinite, one would expect
\begin{equation} t_1 = t_0 + nT \end{equation}
This is however not the case and the difference
\begin{equation} \Delta t = t_1 - t_0 - nT \end{equation}
can be used to measure the speed of light since it is the extra time that light needs to travel the distance equal to the difference in the separation of Earth and Jupiter at
$t_1$ and $t_0$:
\begin{equation} c = \frac{\Delta d}{\Delta t} = \frac{d_{EJ}(t_1)-d_{EJ}(t_0)}{\Delta t} \end{equation}
(both numerator and denominator can be negative representing Earth approaching or receding from Jupiter)
In reality more than two observations are needed, since $T$ isn't known. It can be approximated by averaging observations equally distributed around Earth's orbit (accounting for eccentricity) or simply solved for as another variable.
Practical considerations
Note that you will not manage to see Io enter/leave Jupiter's shadow on every one of Io's orbits (i.e. roughly every 42.5 hours), since some of your observation times will fall during the day or will be made impossible by weather conditions. This is of no concern, however. You should simply number all of Io's revolutions around Jupiter (timed by Io entering/leaving Jupiter's shadow) and note which ones you managed to observe. For successful observations you should record the precise time. It might be good to use UTC to avoid problems with daylight saving time changes.

After a few weeks you will notice the cumulative effect of the speed of light, in that the average intervals between Io entering/leaving Jupiter's shadow will become longer or shorter. The cumulative effect is easier to notice. At minimum you should try to make two observations relatively close to each other (separated by just a few Io revolutions) and then at least one more observation a few weeks or months later (a few dozen Io revolutions later). This will let you calculate the average interval between observations within a short and a long time period, by dividing the length of each time period by the number of revolutions Io has made around Jupiter in that period. The average computed over the long time period will exhibit the cumulative effect of the speed of light by being noticeably longer or shorter than the average computed over the short time period.

More observations will help you make a more accurate determination of the speed of light. You must plan all of the observations ahead, since you can't make the observations when Earth and Jupiter are close to conjunction or opposition.
Calculations
Once you collected the observations you should determine the position of Earth and Jupiter at the times of the observations (for example using JPL's Horizons system). You can then use the positions to determine the distance between the planets at the time the observations were made. Finally, you can use the distance and the variation in Io's perceived revolution period to compute the speed of light.
You will notice that roughly every 18 millions kms change in the distance of Earth and Jupiter makes an observation happen 1 minute earlier or later.
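To make the arithmetic concrete, here is a toy calculation with invented numbers (a timing that arrives 10 minutes late while Earth has receded; the real values would come from your observation log and the ephemeris):

```python
n = 40                     # Io revolutions between the two timed shadow entries
T = 152_853.5              # assumed mean period of Io in seconds (~42.46 h)
t0 = 0.0                   # first timing (s)
t1 = n * T + 600.0         # second timing is 10 minutes "late" (s)

d0 = 630.0e9               # Earth-Jupiter distance at t0 in metres (invented)
d1 = 810.0e9               # Earth-Jupiter distance at t1 in metres (invented)

delta_t = t1 - t0 - n * T  # extra light-travel time (s)
c = (d1 - d0) / delta_t    # estimate of the speed of light (m/s)
print(c)                   # 300000000.0 with these made-up numbers
```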
Cost
The cost of the experiment is largely the cost of buying a telescope that allows you to see Io. Note that the experiment takes a few months and requires measuring time of the observations with the accuracy of seconds.
History
See this wikipedia article for historical account of the determination of the speed of light by Rømer using Io. |
In this lesson, we'll derive an equation which will allow us to calculate the wavefunction (which is to say, the collection of probability amplitudes) associated with any ket vector \(|\psi⟩\). Knowing the wavefunction is very important since we use probability amplitudes to calculate the probability of measuring eigenvalues (i.e. the position or momentum of a quantum system).
Newton's second law describes how the classical state {\(\vec{p_i}, \vec{R_i}\)} of a classical system changes with time based on the initial position and configuration \(\vec{R_i}\), and also the initial momentum \(\vec{p_i}\). We'll see that Schrodinger's equation is the quantum analogue of Newton's second law and describes the time-evolution of a quantum state \(|\psi(t)⟩\) based on the following two initial conditions: the energy and initial state of the system.
In this lesson, we'll give a broad overview and description of single-variable calculus. Single-variable calculus is a big tool kit for finding the slope or area underneath any arbitrary function \(f(x)\) which is smooth and continuous. If the slope of \(f(x)\) is constant, then we don't need calculus to find the slope or area; but when the slope of \(f(x)\) is a variable, then we must use a tool called a derivative to find the slope and another tool called an integral to find the area.
When the wavefunction \(\psi(L,t)\) associated with a particle has non-zero values only on points along a circle of radius \(r\), the eigenvalues \(p\) of the momentum operator \(\hat{P}\) are quantized: they come in discrete multiples \(n\frac{ℏ}{r}\) where \(n=1,2,…\) Since the eigenvalues for angular momentum are \(L=pr=nℏ\), it follows that angular momentum is also quantized.
In this lesson, we'll mathematically prove that for any Hermitian operator (and, hence, any observable), one can always find a complete basis of orthonormal eigenvectors.
In this lesson we'll explain why there is structure and "clumpiness" in the universe: in other words, why there is more stuff here than over there. If the distribution of matter and energy in the universe had initially been completely uniform (meaning homogeneous and isotropic), then galaxies, stars, and people would never have formed; non-uniform density would never arise in such a universe. But due to the time-energy uncertainty principle, the matter and energy must have been randomly distributed throughout space at the beginning of the universe. Since the distribution was truly random, there would always be regions of space with slightly more matter and energy than others. These slight non-uniformities ("imprinted" by the uncertainty principle) in matter and energy density throughout space near the beginning of the universe are the origin of the slight non-uniformities that we see in the CMBR.
In general, if a quantum system starts out in any arbitrary state, it will evolve with time according to Schrödinger's equation such that the probability \(P(L)\) changes with time. In this lesson, we'll prove that if a quantum system starts out in an energy eigenstate, then the probability \(P(L)\) of measuring any physical quantity will not change with time.
Superconductors are the key to unlocking the future of transportation and electrical transmission, enabling the most efficient approaches to these industrial processes known to present science. A maglev vehicle, to borrow Jeremy Rifkin's wording, will shrink the dimensions of space and time by allowing distant continental and inter-continental regions to be accessed in, well, not much time at all. But superconductors also offer unprecedented efficiency: they eliminate the problem of atoms colliding with other atoms, would allow a vehicle to "slide" across enormous distances with virtually no loss of energy, and would allow a loop of current to persist longer than the remaining lifetime of the universe. Much of the damage accumulated in the components of vehicles can, in some way or another, be traced to friction against the road; maglev transportation circumvents this issue.
I'm confused about a problem these days and I decided to seek an answer here. The question is about the section 4.1 of the paper LPST16. Let me recall the weakly sublinear compact FE scheme $\textrm{FE}$ from succinct functional encryption scheme $\textrm{sFE}$ and XiO.
$(msk, pk) \gets \textrm{FE.Setup}(1^\lambda)$: it runs $(msk,pk) \gets \textrm{sFE.Setup}(1^\lambda)$
$\textrm{ct} \gets \textrm{FE.Enc}(pk, m)$: it samples a puncturable PRF key $K$ and outputs $\textrm{ct} \gets \textrm{XiO}(1^\lambda, G_{pk,K,m})$, where $G_{pk,K,m}$ is a circuit with input length $n = \log s$ ($s$ is the max output length of the circuit $C$) that works as follows: $$G_{pk,K,m}(i) = \textrm{sFE.Enc}(pk,(m,i); \textrm{PRF.Eval}(K,i))$$
$sk_C \gets \textrm{FE.KeyGen}(msk; C)$: outputs $\textrm{sFE.KeyGen}(msk,C')$, where $C'$ on input $(m,i)$ outputs the $i$th bit of $C(m)$
$y \gets \textrm{FE.Dec}(sk_C, \textrm{ct})$: computes $\textrm{ct}_i \gets \textrm{ct}(i)$ and $y_i \gets \textrm{sFE.Dec}(sk_C, \textrm{ct}_i)$ for each $i$, and outputs $y_1,\cdots,y_{2^n}$
Now I have two questions:
The paper specifies that the circuit size of $G_{pk,K,m}$ is bounded by $poly(\lambda, |m|, \log s)$; how can we get this? To the best of my knowledge, the size of a circuit should only be related to the elements that are hardwired; is that correct? If so, why is the size also polylog in $s$? More broadly, I also want to know how to calculate a circuit size. Assuming that the circuit size is also related to $s$, and that the $\log s$ is because of the puncturing, here is a second question from my imagination: what if the circuit $G_{pk,K,m}$ outputs two encryptions, where one is over $(m,i)$ and another one is over $m$? What is the circuit size of $G_{pk,K,m}$ now?
Thanks for your answers and comments. |
What is the advantage of using a polar coordinate system with rotating unit vectors?
Kleppner and Kolenkow's An Introduction to Mechanics states that the base vectors $\mathbf{ \hat{r}}$ and $\mathbf{\hat{\theta}}$ have a variable direction, such that in terms of a Cartesian coordinate system's base vectors $\mathbf{ \hat{i}}$ and $\mathbf{ \hat{j}}$ we have
$$\mathbf{\hat{r}} = \cos \theta\ \mathbf{\hat{i}} + \sin \theta\ \mathbf{\hat{j}}$$
$$\mathbf{\hat{\theta}} = -\sin \theta\ \mathbf{\hat{i}} + \cos \theta\ \mathbf{\hat{j}}$$
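As an aside (not from the book), these definitions are easy to check numerically: the pair is orthonormal, and differentiating $\mathbf{\hat{r}}$ along a motion $\theta(t)$ reproduces $\dot{\mathbf{\hat{r}}} = \dot\theta\,\mathbf{\hat{\theta}}$, which is exactly the property the rotating basis is built around:

```python
import math

def r_hat(theta):
    return (math.cos(theta), math.sin(theta))

def theta_hat(theta):
    return (-math.sin(theta), math.cos(theta))

# Orthonormality of the rotating basis at an arbitrary angle
th = 0.7
assert abs(sum(a * a for a in r_hat(th)) - 1) < 1e-12
assert abs(sum(a * b for a, b in zip(r_hat(th), theta_hat(th)))) < 1e-12

# d(r_hat)/dt = theta_dot * theta_hat, checked with a central difference
# along the toy motion theta(t) = 2 t at t = 0.3 (values chosen arbitrarily)
h, t0, om = 1e-6, 0.3, 2.0
num = tuple((a - b) / (2 * h)
            for a, b in zip(r_hat(om * (t0 + h)), r_hat(om * (t0 - h))))
exact = tuple(om * comp for comp in theta_hat(om * t0))
assert all(abs(x - y) < 1e-6 for x, y in zip(num, exact))
print("identities verified")
```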
Now, isn't counter-productive to define a coordinate system in terms of another? Why, at least in this book, we choose to use such a dependent coordinate system, instead of using a polar coordinate system employing a radius and the angle that this one forms with a polar axis, therefore independent of another coordinate system?
EDIT: Let me clarify that I'm not asking about the advantages of the polar coordinate system over the Cartesian one, but about the advantages of a polar coordinate system defined on rotating base vectors $\mathbf{ \hat{r}}$ and $\mathbf{\hat{\theta}}$ over another polar coordinate system where we employ a base vector $\mathbf{ \hat{r}}$ and the angle (hence a scalar and not a vector) that this one forms with a polar axis. |
The code I was using is this:
\[ \begin{cases} W_n(\mathcal{O}_k) =\{X^{(n)}=(x_0,x_1,\cdots,x_{n-1})\in W_n(k): x_i\in \mathcal{O}_k,\text{ } i\in{\{0,1,\cdots, n-1\}} \}\\ W_n(m_k)=\{X^{(n)}=(x_0,x_1,\cdots,x_{n-1})\in W_n(k) :x_i\in m_k,\text{ } i\in{\{0,1,\cdots, n-1\}}\} \\ W_n(m_k)^{(m)}= \{X^{(n)}=(x_0,x_1,\cdots,x_{n-1})\in W_n(k): v(x_i)\geq m/p^{n-1-i},\text{ } i\in{\{0,1,\cdots, n-1\}} \} \end{cases}\]
which worked fine, but when I changed my spacing and all this the third equation ended up being too long meaning I had to change something. I went for the most naive option to just split the definition of the set
$W_n(m_k)^{(m)}$ by simply doing this
W_n(m_k)^{(m)}= \{X^{(n)}=(x_0,x_1,\cdots,x_{n-1})\in W_n(k): \\ v(x_i)\geq m/p^{n-1-i},\text{ } i\in{\{0,1,\cdots, n-1\}} \}
but at least I should align the two lines of definition.
How could I do this? (I tried using \align and \aligned, but it seems they don't work with cases.)
Or would there be a 'better' way of writing a set in two lines? |
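One pattern that typically compiles (a sketch, assuming the mathtools package is loaded; the \dots stand for your first two definitions) is to nest a multlined block inside the cases row, so the continuation stays within that single case:

```latex
% \usepackage{mathtools} in the preamble
\[
\begin{cases}
W_n(\mathcal{O}_k) = \dots\\
W_n(m_k) = \dots\\
\begin{multlined}[t]
W_n(m_k)^{(m)}= \{X^{(n)}=(x_0,x_1,\cdots,x_{n-1})\in W_n(k):\\
v(x_i)\geq m/p^{n-1-i},\text{ } i\in{\{0,1,\cdots, n-1\}} \}
\end{multlined}
\end{cases}
\]
```

If you would rather align the second line at the colon than center it, an aligned[t] block with an & alignment point can be nested inside the cases row in the same way.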
I Have a question. Given only the definition of equality of two sets, how can we prove that there is one and only one empty set. I mean by equality of two sets the following: $$A=B \iff \forall x (x \in A \iff x \in B)$$
Modern set theory conceives of a set as an abstraction of a property. Two properties might seem different, but be essentially the same because they are true of the same objects. For example, the property $\mathcal O_1$ of being a natural number of the form $2n+1$, and the property $\mathcal O_2$ of being a natural number that is the difference of two consecutive perfect squares $S_{n+1} - S_n$. These are not the same property, but one can prove that they are the same in a certain sense, namely that $\mathcal O_1$ holds for some object $x$ precisely when $\mathcal O_2$ also holds for $x$. Sets are a formalization of this idea: we say that the set of objects for which $\mathcal O_1$ holds is the same set as the set for which $\mathcal O_2$ holds. That is, $$\{ x \mid \mathcal O_1(x) \} = \{ x \mid \mathcal O_2(x) \}.$$
The idea here is that we want sets to be equal not if their defining conditions are the same (which is the complicated situation we are trying to simplify) but if they contain the same objects.
Suppose we have two empty sets, say $$\varnothing_1 = \{ n \mid \text{$n$ is an even prime number bigger than 10} \}$$ and $$\varnothing_2 = \{ n \mid \text{$n$ is a living crown prince of the Ottoman Empire} \}$$
These sets do have the same elements, so we want to consider them to be the same set, because that's what sets are for: to abstract away the confusing details of properties, and focus only on the things for which the properties hold or don't hold. So because there is no object by which we can distinguish these two properties—there is no living Crown Prince of the Ottoman Empire who is not also an even prime number bigger than 10, and vice versa—we say that the two sets are equal.
In some theories there is more than one empty set. For example, Bertrand Russell's theory of types (1913) has multiple empty sets. In addition, it has a family of empty relations, which are different from the empty sets. (In modern theories an empty relation is an empty set.) This proliferation of empty sets is one of the most criticized points of the theory of types.
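The argument that two element-free sets must be equal can even be checked mechanically; here is a sketch in Lean 4 with Mathlib (the statement and names are mine, not from the answer above):

```lean
import Mathlib.Data.Set.Basic

-- Any two sets with no elements are equal, by extensionality alone:
-- x ∈ E₁ ↔ x ∈ E₂ holds vacuously because both sides are false.
example {α : Type*} (E₁ E₂ : Set α)
    (h₁ : ∀ x, x ∉ E₁) (h₂ : ∀ x, x ∉ E₂) : E₁ = E₂ := by
  ext x
  exact ⟨fun hx => absurd hx (h₁ x), fun hx => absurd hx (h₂ x)⟩
```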
Since $$\forall x(x\not\in\emptyset_i),$$ it is obvious that $$x\in\emptyset_1\iff x\in\emptyset_2$$ because both conditions are false.
In the case of two empty sets, $\emptyset_1, \emptyset_2$, we have that $\forall x(x\in\emptyset_1\Leftrightarrow x\in \emptyset_2)$ is vacuously true. |
I would like to know the answer to the question "why aren't materials preventing heterogeneous nucleation of $CO_2$ used for soda bottles and glasses?".
Two possible answers so far that I thought about: Either it is too hard to make such a material or polish it "near perfectly", or it would not be worth it. By "worth it" I mean the motivation below would be false.
The motivation to use such materials would be that bubbles of $CO_2$ would not form at all, because if I'm not wrong bubbles are forming on the walls of the bottle or glass thanks to heterogeneous nucleation due to microscopic cracks or "imperfections" (i.e. the crystal isn't plane. In a common glass the atoms aren't ordered like in a crystal and this favors heterogeneous nucleation) while homogeneous nucleation never occurs because the radius of the bubble required for it to occur is too big and so the probability that it's created spontaneously is almost nil. Therefore the bottle or glass would, I believe, only slowly lose $CO_2$ gas through the interface liquid/air which is due to diffusion and occurs because the chemical potential "$\mu$" of the soda is higher than the chemical potential of the air. So the process will end when there is no more $CO_2$ in the soda, if I assume that there's no $CO_2$ in the air which is a good approximation.
To sum up the motivation: no bubbles formed. The loss of $CO_2$ would be very slow and so we could drink soda with plenty of "gas" even if the bottle has been opened for a long while compared to what we're currently used to.
Now I would like to use some maths to show how much slower the rate of decrease of $CO_2$ would be if we were to use such bottle or glass, compared to a normal bottle or glass.
More precisely: let $c(t)$ be the concentration of $CO_2$ in function of time. To settle numbers I'll assume that when $c(t)=0.1\cdot c(0)$ then there is too few "gas" for the soda to be drank. I want to estimate by calculations via a model how much time it takes until this low concentration threshold is reached in both cases.
Let's assume the bottle or glass is a cylinder of 20 cm height, 3.5 cm radius. This means the area of the interface soda/air is $A_\text{interface soda/air} \approx 38.5 \text{ cm}^2$.
Model 1 (no heterogeneous nucleation occurs):
In this case we only have a diffusion equation for $c(t): \frac{dc(t)}{dt}=-rc(t)$ where $r$ is a positive constant proportional to $A_\text{interface soda/air}$, yielding a solution of the form $c(t)=c(0)e^{-rt}$. I can now solve for $t_c$ so that $c(t_c)=0.1\cdot c(0)$.
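For instance, with an invented rate constant $r$ (the post does not fix one), a short sketch of Model 1:

```python
import math

# Model 1: c(t) = c(0) * exp(-r t); solving c(t_c) = 0.1 * c(0)
# gives t_c = ln(10) / r.  The rate r below is a hypothetical value,
# chosen only to make the numbers concrete.
r = 0.05                  # assumed loss rate, in 1/day
t_c = math.log(10) / r    # days until 90% of the dissolved CO2 is gone
print(round(t_c, 1))      # about 46.1 days with this r
```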
Model 2 (heterogeneous nucleation occurs):
In this case I have the same diffusion equation for c(t) except that I have a new term. I am unsure how to write it. I'll assume that there are 2 bubbles per $\text {cm} ^2$. The total area of the container is $A_\text{bottle}=2 \pi R \cdot 20 \text{ cm} + A_\text{interface soda/air} \approx 478 \text{ cm}^2$, so there are about 957 bubbles in total.
Again to simplify things I'll assume that a bubble is most of its existing time stuck on the glass rather than going up in the soda. So that nucleation only occurs to bubbles growing on the glass. I'll also assume that all bubbles start with a zero radius and all have the same maximum radius, say $\text{ 0.1 cm}$ before they detach from the wall and get replaced instantly by a $0 \text{ cm}$ radius bubble.
Now I believe the bubbles' growth rate depends on the current value of $c(t)$, but I am not sure whether it is the rate of change of the volume, area, or radius that depends linearly on $c(t)$. I'd appreciate a comment here, so that I can set up the differential equation for $c(t)$, solve it, and compare it with the first model.
Given triangle ABC.
sinC = (sinA + sinB) / (cosA + cosB)
What details/characteristics can be determined about the shape of ABC (e.g. the type of triangle)?
\(\textrm{Let}\quad\overline{BC}=a,\quad\overline{CA}=b,\quad\overline{AB}=c\\ \frac{a}{sinA}=\frac{b}{sinB}=\frac{c}{sinC}=2R\quad (\textrm{R=radius of the circumscribed circle of triangle ABC})\\ \therefore sinA=\frac{a}{2R},\quad sinB=\frac{b}{2R},\quad sinC=\frac{c}{2R}\\ cosA=\frac{b^2+c^2-a^2}{2bc},\quad cosB=\frac{c^2+a^2-b^2}{2ca},\quad cosC=\frac{a^2+b^2-c^2}{2ab}\\ \therefore \frac{c}{2R}=\frac{\frac{a}{2R}+\frac{b}{2R}}{\frac{b^2+c^2-a^2}{2bc}+\frac{c^2+a^2-b^2}{2ca}}\\ \quad c=\frac{2abc(a+b)}{a(b^2+c^2-a^2)+b(c^2+a^2-b^2)}=\frac{2abc(a+b)}{ab^2+c^2a-a^3+bc^2+a^2b-b^3}=\frac{2abc(a+b)}{c^2(a+b)-(a-b)^2(a+b)}=\frac{2abc}{c^2-(a-b)^2}\\ \therefore c^2-(a-b)^2=2ab\\ \quad c^2=2ab+(a-b)^2=a^2+b^2\\ \therefore \textrm{Triangle ABC is a right triangle that has angle C as the right angle.}\).
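A quick numerical check of this conclusion (mine, not part of the worked solution): the given relation holds when $C=90°$ and fails for a generic triangle:

```python
import math

# gap = sin C - (sin A + sin B) / (cos A + cos B), with C = 180 - A - B
def gap(A_deg, B_deg):
    A, B = math.radians(A_deg), math.radians(B_deg)
    C = math.pi - A - B
    return math.sin(C) - (math.sin(A) + math.sin(B)) / (math.cos(A) + math.cos(B))

print(abs(gap(35, 55)) < 1e-12)   # True: C = 90 deg, identity holds
print(abs(gap(35, 75)) > 1e-3)    # True: C = 70 deg, identity fails
```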
The 2R comes from the standard proof of the sine rule.
Draw your triangle ABC with its circumscribing circle, radius R, centre O.
With the usual notation, angle BOC = 2A, so if you drop a perpendicular from O to BC you get a rt-angled triangle with
two sides R and a/2 and an angle A, from which a/2R = sin(A) and a/sin(A) = 2R.
Repeat for B and C and equate the three.
Here's an alternative answer to the original question.
LHS,
\(\displaystyle \sin(C)=2\sin(C/2)\cos(C/2)=2\sin(90-(A+B)/2)\cos(90-(A+B)/2)\\ =2\cos((A+B)/2)\sin((A+B)/2) \dots\dots\dots(1)\)
RHS
\(\displaystyle \frac{\sin(A)+\sin(B)}{\cos(A)+\cos(B)}=\frac{2\sin((A+B)/2)\cos((A-B)/2)}{2\cos((A+B)/2)\cos((A-B)/2)}=\frac{\sin((A+B)/2)}{\cos((A+B)/2)}\dots\dots(2)\)
Equate (1) and (2), cancel and cross multiply,
\(\displaystyle \cos^{2}((A+B)/2)=1/2,\\\text{so }\cos((A+B)/2) = 1/\sqrt{2}\\\text{so }(A+B)/2=45\text{ deg}\\A+B=90\text{ deg}\\C=90\text{ deg}.\)
Tiggsy.
Thanks Tiggsy.
I shall learn from your answer properly in the morning :)
Welcome as a member Tiggsy.
I should have said this immediately but I did not realize straight away that you had joined!
I know you have been here forever but I finally talked you into becoming a member.
I am very pleased!
Now I will be able to look back and find your fabulous answers (from this point forward)
Finally ... I have exhausted this question. There was a lot for me to work through here.
I do not remember seeing those formulas before that you used Tiggsy. Hopefully now they, and their proofs, are a little embedded into my brain.
Anyway, I finally fully understand both solutions so I am happy.
Why is a pre-main sequence star brighter than it will be when it reaches the main sequence?
It is not generally true that a Pre-Main Sequence (PMS) star is brighter than the corresponding Zero-Age Main Sequence (ZAMS) star - whether this is the case depends on the mass. The main source of energy in this phase is the gravitational potential energy of the gas cloud being converted into random kinetic, i.e. thermal, energy. The main question is how fast this energy is transported out of the protostar, compared to how fast the star is contracting and the temperature rising.
The figure in the question shows, correctly, an evolutionary track in the H-R diagram of a PMS object. However, this track is only valid for a certain mass.
The figure above, from the Wikipedia Hayashi Track entry, shows representative PMS evolutionary tracks for different masses. The blue lines are tracks through the H-R diagram for PMS objects of different masses; they start on the upper diagonal, called the birth line (the time when the surrounding clouds get cleared away and the system becomes visible), and end on the lower black diagonal, the Zero-Age Main Sequence (the time when hydrogen fusion sets in). The blue numbers below the ZAMS show the final mass of the star in Solar masses. The red lines show isochrones; where different evolutionary tracks cross the same isochrone, the corresponding objects have the same age. You can see that a $6.0 M_{\odot}$ star reaches the ZAMS after only a hundred thousand years, a time when a $0.1 M_{\odot}$ star has not left the birth line yet. It takes 10 million years for a $2.0 M_{\odot}$ star and 100 million years for stars of $M \lesssim 1.5 M_{\odot}$ to reach the ZAMS.
The almost-vertical parts of the tracks are called Hayashi tracks. In this phase, the contraction happens more or less isothermally. Objects at these stages are convective, such that the heat generated by contraction is transported from the core to the outer layers efficiently enough that the object stays at roughly the same temperature. Therefore, its surface brightness also stays unchanged, and the luminosity simply scales with the surface area.
For higher-mass systems, however, the contraction happens too fast: the temperature in the core becomes so hot that a radiative zone develops, convection becomes inefficient, and surplus energy is trapped in the system. As the object contracts, the temperature therefore rises steeply, and the surface brightness rises with it. This is partly or completely cancelled by the shrinking surface area, meaning that the total luminosity stays roughly unchanged or grows slightly while the temperature rises steeply. This is seen as a more or less horizontal track going right-to-left in the diagram; these are called Henyey tracks.
As the figure shows, intermediate-mass systems are relatively cool in the beginning, allowing for convection, so they start on the Hayashi track; but as the temperature grows, convection breaks down, and the object turns onto a Henyey track as the heat can no longer escape efficiently.
EDIT: I found this figure in this publication, which shows some more detail.
Here, the isochrones are gone, but the diagram is separated into regions corresponding to four different scenarios: Fully convective, fully radiative, Radiative core with convective outer layers, and convective core with radiative outer layers. All stars with $M \leq 3.5 M_{\odot}$ go through some phase with a convective core. All stars of $M \geq 1.5 M_{\odot}$ go through several phases, from possibly being fully convective, over having a radiative core and convective outer layers, over being fully radiative, to developing a convective core and radiative outer layers, before settling on the ZAMS.
The virial theorem can be applied to stars in quasi-equilibrium and tells us that the internal kinetic energy and gravitational potential energy of a star are directly proportional to each other. Crudely speaking, this means the pressure (which is proportional to kinetic energy
density) times the volume in the star is proportional to gravitational potential energy: $$PR^3 \propto M^2/R.$$ For an ideal gas, pressure is proportional to density and temperature, so $$ \frac{M}{R^3}T R^3 \propto \frac{M^2}{R}$$ $$ T \propto \frac{M}{R}$$
Thus for a star of a given mass, the central temperature increases as the radius shrinks. Why does the radius decrease? Because the star is losing energy from its surface and this is supplied by decreasing its gravitational potential energy by becoming smaller (see here for more detail).
In a low mass star the shrinkage continues at roughly constant surface temperature because the temperature gradient is limited by convective instability. This means the luminosity shrinks as $R^2$ (along the Hayashi track). Eventually the core temperature becomes high enough to start hydrogen fusion. This increases smoothly until it exactly replaces the luminosity supplied by gravitational potential energy, thus halting the collapse at (roughly) a minimum in radius and luminosity.
In larger stars (more than about 0.7 solar masses) things are more complicated because the core is less dense, opacities are lower and at some core temperature (prior to H fusion), convective instability ends in favour of radiative energy transport. This lowers the temperature gradient, allowing the PMS star to be larger for a similar core temperature. This keeps the luminosity roughly constant for a short time whilst the radius more slowly contracts and the surface temperature increases (along the Henyey track) until core hydrogen burning stabilises the contraction as before.
There are two questions to answer: Why is a pre-main sequence star so bright, and why does it get dimmer?
The first question is easy. A pre-main sequence star directly follows a protostar, and protostars can get quite large from accretion. From the virial theorem and Kelvin-Helmholtz contraction, we know that the thermal energy that can be released from a contracting mass is $$U_{\text{thermal}}=\frac{3}{10}\frac{GM^2}{R}$$ If we assume, prior to pre-main sequence contraction, that none of this energy has been radiated away, we can calculate the maximum radius of the body. This turns out to be quite substantial (see e.g. Eqn 5.10 here).
We can also approximate the pre-main sequence star as a blackbody, which has a luminosity of $$L=4\pi\sigma R^2T_e^4$$ There is indeed a temperature-radius relationship, but the dependence is very small. Thus, $$\frac{dL}{dt}\simeq4\pi\sigma T^4\left(2R\frac{dR}{dt}\right)$$ and, given that $\frac{dR}{dt}<0$, the luminosity of the pre-main sequence star decreases until the core becomes strongly radiative, at which point the star heats up and moves left on the H-R diagram until it reaches the main sequence. |
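To put rough numbers on the virial and Kelvin-Helmholtz formulas quoted in these answers, here is a back-of-the-envelope sketch of my own (solar values; not part of either answer):

```python
# Order-of-magnitude check of U_thermal = (3/10) G M^2 / R and the
# Kelvin-Helmholtz timescale t_KH = U_thermal / L, evaluated for a
# solar-mass object contracted down to the solar radius.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
R_sun = 6.957e8      # m
L_sun = 3.828e26     # W
YEAR = 3.156e7       # s

U_thermal = 0.3 * G * M_sun**2 / R_sun    # releasable thermal energy, ~1e41 J
t_KH_years = U_thermal / L_sun / YEAR     # ~1e7 yr

print(f"U_thermal = {U_thermal:.2e} J, t_KH = {t_KH_years:.2e} yr")
```

The roughly ten-million-year figure is consistent with the contraction times toward the ZAMS quoted above for stars of around a solar mass.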
A forum where anything goes. Introduce yourselves to other members of the forums, discuss how your name evolves when written out in the Game of Life, or just tell us how you found it. This is the forum for "non-academic" content.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: viewtopic.php?p=44724#p44724
Like this:
[/url][/wiki][/url]
[/wiki]
[/url][/code]
Many different combinations work. To reproduce, paste the above into a new post and click "preview".
x₁=ηx
V ⃰_η=c²√(Λη)
K=(Λu²)/2
Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt)
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X
I wonder if this works on other sites? (Remove/Change )
Airy Clave White It Nay
Code: Select all
x = 17, y = 10, rule = B3/S23
b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5b
o2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
Saka
Related:[url=http://a.com/]
[/url][/wiki]
My signature gets quoted. This too. And my avatar gets moved down
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: Saka wrote:
Related:
[
Code: Select all
[wiki][url=http://a.com/][quote][wiki][url=http://a.com/]a[/url][/wiki][/quote][/url][/wiki]
]
My signature gets quoted. This too. And my avatar gets moved down
It appears to be possible to quote the entire page by repeating that several times. I guess it leaves <div> and <blockquote> elements open and then autofills the closing tags in the wrong places.
Here, I'll fix it:
[/wiki][url]conwaylife.com[/url]
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
It appears I fixed @Saka's open <div>.
toroidalet Posts: 1019 Joined: August 7th, 2016, 1:48 pm Location: my computer Contact:
A for awesome wrote:It appears I fixed @Saka's open <div>.
what fixed it, exactly?
"Build a man a fire and he'll be warm for a day. Set a man on fire and he'll be warm for the rest of his life."
-Terry Pratchett A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: toroidalet wrote:
A for awesome wrote:It appears I fixed @Saka's open <div>.
what fixed it, exactly?
The post before the one you quoted. The code was:
Code: Select all
[wiki][viewer]5[/viewer][/wiki][wiki][url]conwaylife.com[/url][/wiki]
Saka
Aidan, could you fix your ultra quote? Now you can't even see replies and the post reply button. Also, a few more ones with unique effects popped up.
Apart from Aidan Mode, there are now: Saka Quote, Daniel Mode, and Aidan Superquote. We should write descriptions for these:
-Aidan Mode: A combination of url, wiki, and code tags that leaves the page shattered in pieces. Future replies are large and centered, making the page look somewhat old-ish.
-Saka Quote: A combination of a diluted Aidan Mode and quotes; leaves an open div and blockquote that quotes the entire message and signature. Enough of them can quote entire pages.
-Daniel Mode: A derivative of Aidan Mode that adds code tags and pushes things around rather than scrambling them. Pushes the bottom bar to the side. The signature gets coded.
-Aidan Superquote: The most lethal of all. A broken superquote made of lots of Saka Quotes, not normally allowed on the forums by the software. Leaves the rest of the page white and quoted. Replies and the post reply button become invisible. I would not like new users playing with this. I'll write articles on my userpage.
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
I actually laughed at the terminology.
"IT'S TIME FOR MY ULTIMATE ATTACK. I, A FOR AWESOME, WILL NOW PRESENT: THE AIDAN SUPERQUOTE" shoots out lasers
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
fluffykitty
Posts: 638 Joined: June 14th, 2014, 5:03 pm
There's actually a bug like this on XKCD Forums. Something about custom tags and phpBB. Anyways,
[/wiki]
I like making rules
Saka
Here's another one. It pushes the avatar down all the way to the signature bar. Let's name it...
-Fluffykitty Pusher
Unless we know your real name that's going to be it lel. It's also interesting that it makes a code tag with purple text.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact:
Probably the simplest ultra-page-breaker:
Code: Select all
[viewer][wiki][/viewer][viewer][/wiki][/viewer]
Saka
Posts: 3138 Joined: June 19th, 2015, 8:50 pm Location: In the kingdom of Sultan Hamengkubuwono X A for awesome wrote:
Probably the simplest ultra-page-breaker:
Code: Select all
[viewer][wiki][/viewer][viewer][/wiki][/viewer]
Screenshot?
New one yay.
-Adian Bomb: The smallest ultra-page breaker. Leaks into the bottom and pushes the pages button, post reply, and new replies to the side.
drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA
Someone should create a phpBB-based forum so we can experiment without mucking about with the forums.
Saka
The testing grounds have now become similar to actual military testing grounds.
fluffykitty
We also have this thread. Also,
is now officialy the Fluffy Pusher. Also, it does bad things to the thread preview when posting. And now, another pagebreaker for you:
Code: Select all
[wiki][viewer][/wiki][viewer][/viewer][/viewer]
Last edited by fluffykitty
on June 22nd, 2017, 11:50 am, edited 1 time in total.
83bismuth38 Posts: 453 Joined: March 2nd, 2017, 4:23 pm Location: Still sitting around in Sagittarius A... Contact:
Oh my, I want to quote somebody and now I have to look in a different scrollbar to type this. Interesting thing, though, is that it's never possible to fully hide the entire page -- it will always be in a nested scrollbar.
EDIT: oh also, the thing above is kinda bad. Not horrible though -- I'd put it at a 1/13 on the broken scale.
Code: Select all
x = 8, y = 10, rule = B3/S23
3b2o$3b2o$2b3o$4bobo$2obobobo$3bo2bo$2bobo2bo$2bo4bo$2bo4bo$2bo!
No football of any dui mauris said that.
Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet
Code: Select all
[quote][wiki][viewer][/wiki][/viewer][wiki][/quote][/wiki]
This doesn't do good things
Edit:
Code: Select all
[wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url]
Neither does this
^
What ever up there likely useless Cclee Posts: 56 Joined: October 5th, 2017, 9:51 pm Location: de internet
Code: Select all
[viewer][wiki][/viewer][wiki][url][size=200][wiki][viewer][viewer][url=http://www.conwaylife.com/forums/viewtopic.php?f=4&t=2907][/wiki][/url][quote][/url][/quote][/viewer][wiki][quote][/wiki][/quote][url][wiki][quote][/url][/wiki][/quote][url][wiki][/url][/wiki][/wiki][/viewer][/size][quote][viewer][/quote][/viewer][/wiki][/url][viewer][/wiki][/viewer]
I get about five different scroll bars when I preview this
Edit:
Code: Select all
[viewer][wiki][quote][viewer][wiki][/viewer][/wiki][viewer][viewer][wiki][/viewer][/wiki][/quote][viewer][wiki][/viewer][/wiki][quote][viewer][wiki][/viewer][viewer][wiki][/viewer][/wiki][/wiki][/viewer][/quote][/viewer][/wiki]
Makes a really long post and makes the rest of the thread large and centred
Edit 2:
Code: Select all
[url][quote][quote][quote][wiki][/quote][viewer][/wiki][/quote][/viewer][/quote][viewer][/url][/viewer]
Just don't do this
(Sorry I'm having a lot of fun with this)
^
What ever up there likely useless cordership3 Posts: 127 Joined: August 23rd, 2016, 8:53 am Location: haha long boy
Here's another small one:
Code: Select all
[url][wiki][viewer][/wiki][/url][/viewer]
fg
Moosey Posts: 2492 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact:
Code: Select all
[wiki][color=#4000BF][quote][wiki]I eat food[/quote][/color][/wiki][code][wiki]
[/code]
Is a pinch broken
Doesn’t this thread belong in the sandbox?
I am a prolific creator of many rather pathetic googological functions
My CA rules can be found here
Also, the tree game
Bill Watterson once wrote: "How do soldiers killing each other solve the world's problems?"
77topaz Posts: 1345 Joined: January 12th, 2018, 9:19 pm
Well, it started out as a thread to document "Bugs & Errors" in the forum's code...
Moosey Posts: 2492 Joined: January 27th, 2019, 5:54 pm Location: A house, or perhaps the OCA board. Contact:
77topaz wrote:Well, it started out as a thread to document "Bugs & Errors" in the forum's code...
Now it's half an aidan mode testing grounds.
Also, fluffykitty's messmaker:
Code: Select all
[viewer][wiki][*][/viewer][/*][/wiki][/quote]
PkmnQ
Posts: 666 Joined: September 24th, 2018, 6:35 am Location: Server antipode
Don't worry about this post, it's just gonna push conversation to the next page so I can test something while actually being able to see it. (The testing grounds in the sandbox crashed golly)
Code: Select all
x = 12, y = 12, rule = AnimatedPixelArt
4.P.qREqWE$4.2tL3vSvX$4.qREqREqREP$4.vS4vXvS2tQ$2.qWE2.qREqWEK$2.2vX
2.vXvSvXvStQtL$qWE2.qWE2.P.K$2vX2.2vX2.tQ2tLtQ$qWE4.qWE$2vX4.2vX$2.qW
EqWE$2.4vX!
i like loaf |
Six dice are thrown. The six dice are thrown a second time. What is the probability of getting the same numbers as in the first throw? If the order of the six numbers matters, the problem is easy; but if the order does not matter, I find myself in trouble, because I would have to consider many cases depending on the number of repeated numbers, and I don't know how to proceed.
The number $a_n$ of favourable events for $n$ $n$-sided dice is OEIS sequence A033935. There's a formula given there,
$$a_n=[x^n]n!^2\left(\sum_{k=0}^n\frac{x^k}{k!^2}\right)^n$$
(where $[x^n]$ denotes extraction of the coefficient of $x^n$), which is a succinct statement of the perhaps more obvious formula
$$a_n=\sum_{1n_1+2n_2+\dotso+kn_k=n}\frac{n!}{(n-(n_1+\dotso+n_k))!}\frac1{n_1!\cdots n_k!}\left(\frac{n!}{1!^{n_1}\cdots k!^{n_k}}\right)^2\;,$$
where if we have $n_j$ groups of $j$ identical dice each, the first factor gives the number of ways of assigning values to the groups, the second factor accounts for the fact that it doesn't matter which group a value is assigned to as long as it has a certain number of dice, and the third factor gives the number of favourable events for each assignment, which is the square of the number of distinct permutations of the dice given the assignment.
Based initially on Mark Dominus's table, you get
Pattern |  Ways | Different | Ways/Different | (Ways/Different)^2 | Ways^2/Different
--------|-------|-----------|----------------|--------------------|-----------------
AAAAAA  |     6 |         6 |              1 |                  1 |                6
AAAAAB  |   180 |        30 |              6 |                 36 |             1080
AAAABB  |   450 |        30 |             15 |                225 |             6750
AAAABC  |  1800 |        60 |             30 |                900 |            54000
AAABBB  |   300 |        15 |             20 |                400 |             6000
AAABBC  |  7200 |       120 |             60 |               3600 |           432000
AAABCD  |  7200 |        60 |            120 |              14400 |           864000
AABBCC  |  1800 |        20 |             90 |               8100 |           162000
AABBCD  | 16200 |        90 |            180 |              32400 |          2916000
AABCDE  | 10800 |        30 |            360 |             129600 |          3888000
ABCDEF  |   720 |         1 |            720 |             518400 |           518400
Sum     | 46656 |       462 |           1602 |             708062 |          8848236
And the answer is $\dfrac{8848236}{46656^2} \approx 0.0040648\ldots$ |
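The count 8848236 can be checked by brute force: enumerate all $6^6 = 46656$ ordered throws, group them by multiset of face values, and sum the squares of the multiplicities (a quick sketch; the variable names are mine):

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# Each multiset of face values that can be ordered in m ways contributes
# m^2 ordered (first throw, second throw) pairs with matching multisets.
multiset_counts = Counter(tuple(sorted(t)) for t in product(range(1, 7), repeat=6))

favourable = sum(m * m for m in multiset_counts.values())
probability = Fraction(favourable, 6**12)

print(len(multiset_counts))   # 462 distinct multisets
print(favourable)             # 8848236
print(float(probability))     # ≈ 0.0040648
```

This reproduces both the 462 distinct patterns and the final probability $8848236/46656^2$.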
Consider the problem of minimizing an objective function that depends on real variables, with no restrictions on their values. Mathematically, let \(x \in \mathcal{R}^n\) be a real vector with \(n \geq 1\) components and let \(f : \mathcal{R}^n \rightarrow \mathcal{R}\) be a smooth function. Then the unconstrained optimization problem is \[\min_x \; f(x).\]
Unconstrained optimization problems arise directly in some applications but they also arise indirectly from reformulations of constrained optimization problems. Often it is practical to replace the constraints of an optimization problem with penalized terms in the objective function and to solve the problem as an unconstrained problem.
An important aspect of continuous optimization (constrained and unconstrained) is whether the functions are smooth, by which we mean that the second derivatives exist and are continuous. There has been extensive study and development of algorithms for the unconstrained optimization of smooth functions. At a high level, algorithms for unconstrained minimization follow this general structure: Choose a starting point \(x_0\). Beginning at \(x_0\), generate a sequence of iterates \(\{x_k\}_{k=0}^{\infty}\) with non-increasing function (\(f\)) value, until a solution point with sufficient accuracy is found or until no further progress can be made.
To generate the next iterate \(x_{k+1}\), the algorithm uses information about the function at \(x_k\) and possibly earlier iterates.
Newton's Method
Newton's Method gives rise to a wide and important class of algorithms that require computation of the gradient vector \[\nabla f(x) = \left(\partial_1 f(x), \ldots , \partial_n f(x) \right)^T\] and the Hessian matrix \[\nabla^2 f(x) = \left[ \partial_i \partial_j f(x) \right].\]
Although the computation or approximation of the Hessian can be a time-consuming operation, there are many problems for which this computation is justified.
Wikipedia Link to Newton's Method in Optimization
Newton's method forms a quadratic model of the objective function around the current iterate. The model function is defined by
\[q_k(s) = f(x_k) + \nabla f(x_k)^T s + \frac{1}{2} s^T \nabla^2 f(x_k) s.\]
In the basic Newton method, the next iterate is obtained from the minimizer of \(q_k(s)\). When the Hessian matrix, \(\nabla^2 f(x_k)\), is positive definite, the quadratic model has a unique minimizer that can be obtained by solving the symmetric \(n \times n\) linear system:
\[\nabla^2 f(x_k) s = - \nabla f(x_k).\] The next iterate is then \[x_{k+1} = x_k + s_k.\]
Convergence is guaranteed if the starting point is sufficiently close to a local minimizer \(x^*\) at which the Hessian is positive definite. Moreover, the rate of convergence is quadratic; that is, \[ \| x_{k+1} - x^* \| \leq \beta \| x_k - x^* \|^2\] for some positive constant \(\beta\).
In most circumstances, however, the basic Newton method has to be modified to achieve convergence. There are two fundamental strategies for moving from \(x_k\) to \(x_{k+1}\): line search and trust region. Most algorithms follow one of these two strategies.
The line-search method modifies the search direction to obtain another downhill, or descent, direction for \(f\). It then tries different step lengths along this direction until it finds a step that not only decreases \(f\) but also achieves at least a small fraction of this direction's potential. Wikipedia Link to Line Search
The trust-region methods use the original quadratic model function, but they constrain the new iterate to stay in a local neighborhood of the current iterate. To find the step, it is necessary to minimize the quadratic function subject to staying in this neighborhood, which is generally ellipsoidal in shape. Wikipedia Link to Trust Region
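A minimal sketch of the basic Newton iteration described above (illustrative only; the strictly convex test function and all names are my own choices):

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Basic Newton's method: solve hess(x_k) s = -grad(x_k),
    then step x_{k+1} = x_k + s."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        s = np.linalg.solve(hess(x), -g)   # Newton step
        x = x + s
    return x

# Example: f(x, y) = exp(x + y) + x^2 + y^2, strictly convex,
# so the Hessian is positive definite everywhere and pure Newton converges.
grad = lambda v: np.array([np.exp(v[0] + v[1]) + 2 * v[0],
                           np.exp(v[0] + v[1]) + 2 * v[1]])
hess = lambda v: (lambda e: np.array([[e + 2.0, e], [e, e + 2.0]]))(np.exp(v[0] + v[1]))
x_star = newton_minimize(grad, hess, [1.0, -1.0])
```

Because the test function is strictly convex, this bare iteration converges quadratically from any reasonable starting point; in general one would add the line-search or trust-region safeguards discussed above.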
Line-search and trust-region techniques are suitable if the number of variables \(n\) is not too large, because the cost per iteration is of order \(n^3\). Codes for problems with a large number of variables tend to use truncated Newton methods, which usually settle for an approximate minimizer of the quadratic model.
Wikipedia Link to Truncated Newton Method
Methods with Hessian Approximations
If computing the exact Hessian matrix is not practical, the same algorithms can be used with a reasonable approximation of the Hessian matrix. Two types of methods use approximations to the Hessian in place of the exact Hessian.
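The first of these, a difference approximation assembled column-by-column from gradient differences, can be sketched in a few lines (my own code and naming; forward differences of an exact gradient):

```python
import numpy as np

def fd_hessian(grad, x, h=1e-6):
    """Approximate the Hessian one column at a time:
    column j ≈ (grad(x + h e_j) - grad(x)) / h, then symmetrize."""
    x = np.asarray(x, dtype=float)
    n = x.size
    g0 = grad(x)
    H = np.empty((n, n))
    for j in range(n):
        step = np.zeros(n)
        step[j] = h
        H[:, j] = (grad(x + step) - g0) / h
    return 0.5 * (H + H.T)   # enforce symmetry

# For a quadratic f(x) = 0.5 x^T A x the gradient is A x, so the
# approximation should recover A almost exactly.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
H = fd_hessian(lambda x: A @ x, np.array([0.3, -0.7]))
```

Each column costs one extra gradient evaluation; as noted below, sparsity can be exploited to approximate several columns per evaluation.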
One approach is to use difference approximations to the exact Hessian. Difference approximations exploit the fact that each column of the Hessian can be approximated by taking the difference between two instances of the gradient vector evaluated at two nearby points. For sparse Hessians, it is often possible to approximate many columns of the Hessian with a single gradient evaluation by choosing the evaluation points judiciously.
Quasi-Newton methods build up an approximation to the Hessian by keeping track of the gradient differences along each step taken by the algorithm. Various conditions are imposed on the approximate Hessian. For example, its behavior along the step just taken is forced to mimic the behavior of the exact Hessian, and it is usually kept positive definite. Wikipedia Link to Quasi-Newton Method
Other Methods for Unconstrained Optimization
There are two other approaches for unconstrained problems that are not so closely related to Newton's method.
Nonlinear conjugate gradient methods are motivated by the success of the linear conjugate gradient method in minimizing quadratic functions with positive definite Hessians. They use search directions that combine the negative gradient direction with another direction, chosen so that the search will take place along a direction not previously explored by the algorithm. At least, this property holds for the quadratic case, for which the minimizer is found exactly within just \(n\) iterations. For nonlinear problems, performance is problematic, but these methods do have the advantage that they require only gradient evaluations and do not use much storage. Wikipedia Link to Nonlinear Conjugate Gradient Method
The nonlinear simplex method (not to be confused with the simplex method for linear programming) requires neither gradient nor Hessian evaluations. Instead, it performs a pattern search based only on function values. Because it makes little use of information about \(f\), it typically requires a great many iterations to find a solution that is even in the ballpark. It can be useful when \(f\) is nonsmooth or when derivatives are impossible to find, but it is unfortunately often used when one of the algorithms above would be more appropriate.
The Nonlinear Least-Squares Problem is a special case of unconstrained optimization. It arises in many practical problems, especially in data-fitting applications. The objective function \(f\) has the form \[f(x) = \frac{1}{2} \sum_{j=1}^{m} r_j^2(x),\] where each \(r_j\) is a smooth function from \(\mathcal{R}^n\) to \(\mathcal{R}\). The special form of \(f\) and its derivatives has been exploited to develop efficient algorithms for minimizing \(f\).
The problem of solving a system of Nonlinear Equations is related to unconstrained optimization in that a number of algorithms for nonlinear equations proceed by minimizing a sum of squares. It often arises in problems involving physical systems. In nonlinear equations, there is no objective function to optimize; instead the goal is to find values of the variables that satisfy a set of \(n\) equality constraints.
Nocedal, J. and S. J. Wright. 1999. Numerical Optimization. Springer-Verlag, New York.
Optimization Online Nonlinear Optimization area
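One classic way the special least-squares form of \(f\) is exploited is the Gauss-Newton method, which approximates the Hessian by \(J^T J\), where \(J\) is the Jacobian of the residual vector. A hedged sketch with a synthetic exponential-fit example (the data, model, and names are mine, not from the text):

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=30):
    """Gauss-Newton for f(x) = 0.5 * sum_j r_j(x)^2:
    solve (J^T J) s = -J^T r instead of the full Newton system."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        s = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + s
    return x

# Synthetic data-fitting example: recover (a, b) in the model y = a * exp(b * t).
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])
params = gauss_newton(residual, jacobian, [1.0, -1.0])
```

For zero-residual problems like this synthetic fit, Gauss-Newton converges quadratically near the solution; practical codes add damping (Levenberg-Marquardt) for robustness.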
Dependence and self-sufficiency in Hesiod's "Works and Days" Abstract
Although the theme of self-sufficiency in the second half of the Works and Days (286-828) has often been recognized by scholars, a thorough examination of Hesiod's definition and treatment of self-sufficiency in the poem is needed. This dissertation demonstrates how Hesiod exploits the tension between dependence and self-sufficiency, and discusses this tension in the behavior of the poet's brother, Perses, and in that of the ideal farmer whom Perses must emulate. Hesiod warns Perses that he must stop depending on others for livelihood and must achieve the farmer's distinctive personal qualities--perception, industry, and self-sufficiency--which are shared by Hesiod as $\pi\alpha\nu\acute\alpha\rho\iota\sigma\tau o\varsigma$. This study involves a detailed analysis of the diction expressing Hesiod's view of dependence and self-sufficiency in the Works and Days. Much of the diction of the farmer's year (383-617) is un-Homeric and has associations with poetry and poetic activity. Specifically, the farmer's activities are closely allied with the work of craftsmen--carpenter, sewer, weaver--who in Indo-European tradition and in later poetry commonly appear as metaphorical representations of the poet. Indeed, recent studies on the Works and Days have argued that Hesiod was a skilled poet whose work addressed issues of poetics. I argue that Hesiod's prudent farmer who works to achieve his ideal of total self-sufficiency metaphorically represents Hesiod the poet, who in the Works and Days defines his individual poetic style. The farmer's year functions as Hesiod's poetic program in which he states his relationship to the tradition of hexameter poetry represented chiefly by the Iliad and Odyssey. Hesiod instructs Perses that the desired method of gaining livelihood is not by using his malleable rhetoric while begging at others' homes, but by devoting his life to "farming" (i.e., composing poetry in the style of the Works and Days).
Subject Area
Classical studies|Ancient languages
Recommended Citation
Marsilio, Maria Suzanne, "Dependence and self-sufficiency in Hesiod's "Works and Days"" (1992).
Dissertations available from ProQuest. AAI9308624. https://repository.upenn.edu/dissertations/AAI9308624 |
Following the success of my JT4G detector, which I used to detect very weak signals from DSLWP-B and was also tested by other people, I have made a similar detector for the 250 baud GMSK telemetry transmissions.
The coding used by the DSLWP-B GMSK telemetry follows the CCSDS standards for turbo-encoded GMSK/OQPSK. The relevant documentation can be found in the TM Synchronization and Channel Coding and Radio Frequency and Modulation Systems–Part 1: Earth Stations and Spacecraft blue books.
The CCSDS standards specify that a 64bit ASM shall be attached to each \(r=1/2\) turbo codeword. The idea of this algorithm is to correlate against the ASM (adequately precoded and modulated in GMSK). The ASM spans 256ms and the correlation is done as a single coherent integration. As a rule of thumb, this should achieve a reliable detection of signals down to around 12dB C/N0, which is equivalent to -12dB Eb/N0 or -22dB SNR in 2500Hz. Note that the decoding threshold for the \(r=1/2\) turbo code is around 1.5dB Eb/N0, so it is much easier to detect the GMSK beacon using this algorithm than to decode it. The difficulty of GMSK detection is comparable to the difficulty of JT4G decoding, which has a decoding threshold of around -23dB SNR in 2500Hz.
Here I explain the details of this GMSK ASM detector. The Python script for the detector is dslwp_gmsk.py.
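The script linked above implements the full detector; the core idea, coherently correlating against a known modulated replica and looking for a peak, can be sketched as follows (my simplified illustration with synthetic data, not the actual dslwp_gmsk.py code):

```python
import numpy as np

def correlate_asm(signal, replica):
    """Coherent correlation of a complex baseband signal against a
    modulated ASM replica. numpy.correlate conjugates its second
    argument, which is exactly what a matched filter requires."""
    return np.abs(np.correlate(signal, replica, mode='valid'))

# Synthetic demo: a random unit-amplitude "replica" buried in noise.
rng = np.random.default_rng(42)
replica = np.exp(2j * np.pi * rng.random(256))
signal = 0.1 * (rng.standard_normal(4096) + 1j * rng.standard_normal(4096))
signal[1000:1256] += replica

corr = correlate_asm(signal, replica)
peak = int(np.argmax(corr))   # recovers the embedded offset
```

In the real detector the replica is the precoded, GMSK-modulated ASM and the search also runs over frequency offsets, but the single coherent integration over the 256 ms ASM is the same idea.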
The ASM (attached sync marker) for \(r=1/2\) turbo coded telemetry is specified in Section 9.3.5 of the “TM Synchronization and Channel Coding” blue book as
0x034776C7272895B0 (it should be transmitted left to right, in the order it is written). This 64bit syncword is transmitted before each turbo codeword, as indicated in the figure below.
Since DSLWP-B doesn’t use convolutional coding or stream LDPC encoding, the ASM is not encoded any further and it is passed directly as channel symbols to the physical layer, as shown in the figure below.
However, there is a subtlety here. In the physical layer a precoder is used before modulating the channel symbols as GMSK. This precoder is described in the figure below, taken from the “Radio Frequency and Modulation Systems–Part 1: Earth Stations and Spacecraft” blue book.
As we shall see, the goal of this precoder is that the symbols \(d_k\) are read directly when the GMSK modulation is interpreted as OQPSK. Indeed, in GMSK, a 1 is transmitted as a phase shift of \(\pi/2\) and a 0 is transmitted as a phase shift of \(-\pi/2\). When this is read as OQPSK, the following happens.
Assume that we are currently sampling the \(I\) branch, so our phase is either \(0\) or \(\pi\), corresponding to \(I=1\) or \(I=-1\) respectively. Then half a symbol period later, the phase gets shifted by \(\pm\pi/2\), corresponding to the transmission of a GMSK bit, and we sample the \(Q\) branch (recall that the symbol rate for OQPSK is half the symbol rate for GMSK). The resulting phase and hence the resulting \(Q\) depends both on the \(I\) we had and on the GMSK bit transmitted.
With a GMSK bit of 1 we either get from \(I = 1\) to \(Q = 1\) or from \(I = -1\) to \(Q = -1\). With a GMSK bit of 0 we get from \(I = 1\) to \(Q = -1\) or from \(I = -1\) to \(Q = 1\). We see that the GMSK bit acts in a differential way. A 1 preserves the value of the \(I\) branch into the \(Q\) branch, and a 0 inverts the value of \(I\) into the \(Q\) branch. When going from \(Q\) to \(I\), the behaviour is opposite: a GMSK bit of 1 gets from \(Q = 1\) to \(I = -1\) or from \(Q = -1\) to \(I = 1\), and a GMSK bit of 0 gets from \(Q = 1\) to \(I = 1\) or from \(Q = -1\) to \(I = -1\). So when going from \(Q\) to \(I\), a GMSK bit of 1 inverts and a GMSK bit of 0 preserves the value.
Thus, if we want to read the stream of symbols \(d_k\) directly from the OQPSK demodulation, we should transmit \(a_k = 1 + d_k + d_{k-1}\) when going from \(I\) to \(Q\) and \(a_k = d_k + d_{k-1}\) when going from \(Q\) to \(I\) (here we use arithmetic over \(GF(2)\)). Noting that we go from \(I\) to \(Q\) when \(k\) is even and we go from \(Q\) to \(I\) when \(k\) is odd, we have\[a_k = 1 + k + d_k + d_{k-1},\] for all \(k\), which is exactly the same as it is represented in the figure above.
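As an illustration, the precoding relation \(a_k = 1 + k + d_k + d_{k-1}\) over \(GF(2)\) can be sketched in a few lines of Python. The function name and bit ordering here are my own choices for illustration; the actual detector script may organize this differently.

```python
def precode(d):
    # a_k = 1 + k + d_k + d_{k-1} over GF(2); the first output bit
    # depends on the undefined initial register state, so only
    # len(d) - 1 bits are produced (63 bits for the 64-bit ASM)
    return [(1 + k + d[k] + d[k - 1]) % 2 for k in range(1, len(d))]

# the 64-bit ASM, transmitted MSB first
asm = [(0x034776C7272895B0 >> (63 - i)) & 1 for i in range(64)]
precoded = precode(asm)
assert len(precoded) == 63
```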
This precoding for GMSK transmission is not only done for convenience at the receiving side. Without it, the receiver would have to perform some form of differential decoding, since GMSK is inherently differential (bits are transmitted as a change in phase). This differential decoding would propagate bit errors. Therefore, the precoder ensures optimal performance.
Note that when precoding the 64bit ASM, we only get 63 bits. The first bit would depend on the initial state of the precoder (the contents of the \(z^{-1}\) cell), which is undefined. These 63 bits are then shaped with a Gaussian filter. The Gaussian filter implementation is taken from the GNU Radio GMSK modulator, which in turn uses the following Gaussian filter taps.
These Gaussian filter taps follow the Gaussian curve\[a_n = \exp\left(-\frac{2\pi^2\beta^2n^2}{S^2\log{2}}\right),\]where \(\beta\) is the bandwidth-time product (BT) and \(S\) is the number of samples per symbol.
Using this formula, a window spanning 4 symbols is obtained, so \(n\) ranges from \(-2S\) to \(2S-1\), and the window is normalized to have an integral of one. This window is then convolved with a square window spanning one symbol to obtain the taps of the symbol filter. The precoded ASM bits are then filtered and upsampled using this filter.
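A sketch of this tap generation, assuming the formula above (I have not checked this line by line against the GNU Radio source, so treat it as illustrative):

```python
import numpy as np

def gmsk_taps(samples_per_symbol=4, bt=0.5):
    S, beta = samples_per_symbol, bt
    n = np.arange(-2 * S, 2 * S)  # window spanning 4 symbols
    gauss = np.exp(-2 * np.pi**2 * beta**2 * n**2 / (S**2 * np.log(2)))
    gauss /= gauss.sum()          # normalize to unit integral
    # convolve with a one-symbol square window to get the symbol filter
    return np.convolve(gauss, np.ones(S))
```

Since the Gaussian window sums to one, the convolved taps sum to \(S\), i.e. the filter has unit DC gain per symbol.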
The BT is taken as \(\beta=0.5\) from gr-dswlp. However, the CCSDS recommends a BT of 0.25. It would be interesting to know what BT is really used by DSLWP-B and to see if it can be measured from the high-SNR recordings made at Dwingeloo.
After the bits are filtered and upsampled, they are scaled to produce the correct deviation and FM modulated to produce the GMSK modulated ASM. This GMSK signal is then used as a matched filter to correlate with the received signal.
The correlation algorithm goes as follows. The FFT is used to scan in frequency. The FFT size is the size of the GMSK modulated ASM, so all the ASM is integrated coherently. We denote this size by \(N\). A block of \(N\) samples from the signal is taken, multiplied by the complex conjugate of the GMSK modulated ASM, and Fourier transformed. A peak in the FFT would indicate that an ASM is contained in the signal, at the frequency indicated by the FFT peak. Blocks offset by \(T/4\), where \(T\) is the symbol period, are taken to scan in time.
For the situation we have here (a large search both in time and frequency) this approach is better (computationally less expensive) than using the FFT to scan in time and performing FFT shifts to scan in frequency. This algorithm is also very similar to what gr-dslwp does in the “QT GUI FFT Correlator Hier” block to detect the signal and pass coarse frequency and phase estimates to the OQPSK decoder.
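A minimal sketch of this time/frequency search (function and variable names are my own, not those of dslwp_gmsk.py):

```python
import numpy as np

def correlate_asm(signal, asm_mod, fs, sym_rate=250):
    # Multiplying a block by the conjugate of the modulated ASM and
    # taking the FFT turns a frequency offset into an FFT-bin peak;
    # blocks offset by T/4 scan in time.
    N = len(asm_mod)
    step = fs // (4 * sym_rate)            # T/4 in samples
    best = (0.0, 0, 0.0)                   # (peak power, start sample, freq)
    for start in range(0, len(signal) - N + 1, step):
        spec = np.fft.fft(signal[start:start + N] * np.conj(asm_mod))
        k = int(np.argmax(np.abs(spec)))
        p = float(np.abs(spec[k]))
        if p > best[0]:
            best = (p, start, float(np.fft.fftfreq(N, 1 / fs)[k]))
    return best
```

For example, embedding a unit-modulus template in a zero signal with a 50 Hz offset recovers both the start sample and the frequency.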
To show this algorithm in action, we test it with the recordings from the first VLBI session between Dwingeloo and Shahe. We use the recordings taken at UNIX timestamp 1528604394, which corresponds to 2018-06-10 04:19:54 UTC. The recordings are extracted and converted to
complex64 raw files as already done in that post (see the script process_vlbi.sh).
Then, we use sox to lowpass filter and convert the raw files to wav. The lowpass filtering is done to remove interfering signals. The sample rate of 40000Hz is maintained. The correlation algorithm can work with any sample rate that is an integer multiple of 250Hz, so as to have an integer number of samples per symbol. The sox command used for the conversion is as follows.
sox -t raw -e floating-point -b 32 -c 2 -r 40000 dwingeloo_435.raw dwingeloo_435.wav lowpass 1000
The wav files are then processed using
$ dslwp_gmsk.py dwingeloo_435.wav dwingeloo_435 "Dwingeloo 435.4MHz"
Start time: 1.87s
Frequency: 496.0Hz
CN0: 37.1dB, EbN0: 13.1dB, SNR (in 2500Hz): 3.1dB

$ dslwp_gmsk.py dwingeloo_436.wav dwingeloo_436 "Dwingeloo 436.4MHz"
Start time: 2.39s
Frequency: 455.6Hz
CN0: 43.1dB, EbN0: 19.1dB, SNR (in 2500Hz): 9.1dB

$ dslwp_gmsk.py shahe_435.wav shahe_435 "Shahe 435.4MHz"
Start time: 2.01s
Frequency: -241.9Hz
CN0: 22.4dB, EbN0: -1.6dB, SNR (in 2500Hz): -11.6dB

$ dslwp_gmsk.py shahe_436.wav shahe_436 "Shahe 436.4MHz"
Start time: 2.54s
Frequency: -282.3Hz
CN0: 26.7dB, EbN0: 2.7dB, SNR (in 2500Hz): -7.3dB
We can observe several interesting details from the results of these correlations. First, note that the 435.4MHz signal is seen roughly 0.52s before the 436.4MHz signal both in Dwingeloo and Shahe. As I already commented in the VLBI experiment post, the transmissions in both bands are not synchronized precisely and the data transmitted is different.
Second, in Dwingeloo we observe a difference of 6dB between the 435.4MHz signal and the 436.4MHz signal, while in Shahe the difference is only 4dB. The reason for the difference between both bands was already explained in the VLBI experiment. It is due to the orientations of the antennas used by DSLWP-B in each band. The fact that the difference in Dwingeloo is 6dB while in Shahe is 4dB could be explained because the performance of the receivers might be different for each of the two bands.
Last but not least, the observed frequencies in each band don’t match what would be caused by the Doppler or frequency offset of the DSLWP-B clock if both transmit frequencies are derived from the same oscillator. Indeed, for 1kHz of offset (either Doppler or clock offset), the difference between both bands should be 2Hz, which is less than the frequency resolution using this algorithm. Thus, we should expect to see the same frequency in both bands. However, in both groundstations we observe a difference of roughly 40Hz, the 435.4MHz band being higher in frequency.
The only reasonable explanation for this is that each transmitter has its own independent clock, and the 435.4MHz transmitter is 40Hz (92ppb) higher than the 436.4MHz transmitter. I think that nobody has observed this before. I had already observed that both transmitters are around 200Hz lower than the published frequency, but it turns out that there is also a difference between both transmitters. It will be interesting to monitor this difference and see if and how it evolves with time.
The images produced by the detection script can be seen below. It is interesting to note that the whole packet can be seen in the time correlations, since the cross-correlation of the ASM with the rest of the packet is higher than with the background noise. This is best seen in the Dwingeloo high-SNR recordings, and it shows that the ASM is not transmitted immediately at the beginning of the packet. There are about 1.5 seconds of GMSK data before the ASM. I don't know what this data is. Perhaps it is just a preamble to aid receiver synchronization, although correlation against the ASM as done in gr-dslwp should be enough; in principle no preamble is needed.
Let $[n]:=\lbrace 1, \dots, n \rbrace$. We define a partial ordering on the set of subsets of $[n]$ as follows. We say that $X \preceq Y$ if there is an injective map $f:X \to Y$ such that $x \leq f(x)$ for all $x \in X$. This is a pretty standard construction in poset theory.
The motivation for this question comes from a subset sum problem I've been playing with. Let us regard $[n]$ as the set of indices of a set $A:=\lbrace a_1, \dots, a_n \rbrace$ of numbers (indexed so that $a_1 < \dots < a_n$). If $X \preceq Y$, then the sum of the elements in $A$ corresponding to $X$ is at most the sum of the elements in $A$ corresponding to $Y$. If $X$ and $Y$ are incomparable, then we don't know which sum is bigger (without additional information about $A$).
I would like to cover this poset with as few chains as possible, so it is natural to apply Dilworth's Theorem and then ask
What is the size of a largest antichain in this poset?
One natural candidate is to take all subsets of $[n]$ with the same sum $s$. To maximize the size of this antichain, we should take $s$ to be halfway between $0$ and $1+\dots + n$. I'd guess that this is optimal. Any references or thoughts would be much appreciated. |
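This candidate is easy to explore computationally for small \(n\). By a standard greedy/Hall-type argument, \(X \preceq Y\) iff \(|X| \leq |Y|\) and, sorting both sets decreasingly, the \(i\)-th largest element of \(X\) is at most the \(i\)-th largest element of \(Y\); and since the elements are positive, two distinct subsets with the same sum are always incomparable. A brute-force sketch (function names are mine):

```python
from itertools import combinations

def leq(X, Y):
    # X <= Y iff an injective f: X -> Y with x <= f(x) exists;
    # greedily match largest to largest
    if len(X) > len(Y):
        return False
    xs, ys = sorted(X, reverse=True), sorted(Y, reverse=True)
    return all(x <= y for x, y in zip(xs, ys))

def equal_sum_antichain(n):
    # largest family of subsets of [n] sharing a common sum
    by_sum = {}
    for k in range(n + 1):
        for c in combinations(range(1, n + 1), k):
            by_sum.setdefault(sum(c), []).append(set(c))
    best = max(by_sum.values(), key=len)
    # sanity check: equal-sum subsets are pairwise incomparable
    assert all(not leq(a, b) for a in best for b in best if a != b)
    return len(best)
```

For instance, for \(n = 4\) the largest equal-sum family has 2 subsets and for \(n = 5\) it has 3; whether this always matches the true maximum antichain is exactly the open part of the question.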
I have a question about a proof that I am reading in "A primer on Mapping Class Groups" by Farb and Margalit.
Let $a$ be a simple closed curve in a compact surface $S$ (possibly with marked points and boundary components) not isotopic to a point or boundary component and let $T_a$ denote the Dehn twist about $a$.
Now let $\alpha_1 , ..., \alpha_n$ be a collection of pairwise disjoint isotopy classes of simple closed curves in $S$ and let $M = \prod_{i=1}^{n} T_{\alpha_i}^{e_i}$. Also suppose that either $e_i >0$ $\forall i$ or $e_i <0$ $\forall i$, where each $e_i$ is an integer, and that $b$ is an arbitrary isotopy class of a simple closed curve.
Ok now we look at $M(b)$, and find a representative $\beta '$ in its isotopy class. We also take $\beta$ to be in the isotopy class of $b$.
I now want to show that $\beta$ and $\beta'$ are in minimal position!
For context, this is Proposition 3.4 in Farb and Margalit's "A Primer on Mapping Class Groups".
If one lets $n=1$ above then it wouldn't be hard to prove (prop 3.2 in the same book) as it follows from the bigon criterion. However they say that it also is true here for $\beta$ and $\beta'$ since all the $e_i$'s have the same sign or rather all the twists are in the same direction.
So what happens if we allow the $e_i$'s to have arbitrary sign?
Any comments/help would be greatly appreciated! cheers!
P.S. I have asked this at math stack exchange but thought I would ask at this site as well. |
By the "noncompact $U(1)$ group", we mean a group that is isomorphic to $({\mathbb R},+)$. In other words, the elements of $U(1)$ are formally $\exp(i\phi)$ but the identification $\phi\sim \phi+2\pi k$ isn't imposed. When it's not imposed, it also means that the dual variable ("momentum") to $\phi$, the charge, isn't quantized. One may allow fields with arbitrary continuous charges $Q$ that transform by the factor $\exp(iQ\phi)$.
It's still legitimate to call this a version of a $U(1)$ group because the Lie algebra of the group is still the same, ${\mathfrak u}(1)$.
In the second part of the question, where I am not 100% sure what you don't understand about the quote, you probably want to explain why compactness is related to quantization? It's because the charge $Q$ is what determines how the phase $\phi$ of a complex field is changing under gauge transformations. If we say that the gauge transformation multiplying fields by $\exp(iQ\phi)$ is equivalent for $\phi$ and $\phi+2\pi$, it's equivalent to saying that $Q$ is integer-valued because the identity $\exp(iQ\phi)=\exp(iQ(\phi+2\pi))$ holds iff $Q\in{\mathbb Z}$. It's the same logic as the quantization of momentum on compact spaces or angular momentum from wave functions that depend on the spherical coordinates.
He is explaining that the embedding of the $Q$ into a non-Abelian group pretty much implies that $Q$ is embedded into an $SU(2)$ group inside the non-Abelian group, and then the $Q$ is quantized for the same mathematical reason why $J_z$ is quantized. I would only repeat his explanation because it seems utterly complete and comprehensible to me.
Note that the quantization of $Q$ holds even if the $SU(2)$ is spontaneously broken to a $U(1)$. After all, we see such a thing in the electroweak theory. The group theory still works for the spontaneously broken $SU(2)$ group. This post imported from StackExchange Physics at 2014-03-22 17:13 (UCT), posted by SE-user Luboš Motl
Let $C\subseteq 2^X$ be a concept class over $X$ and let $\bar{C}:=\{X\setminus c\mid c\in C\}$ be the complement. Show that $VCdim(C)=VCdim(\bar{C})$.
Proof:
Let $d:=VC_{dim}(C)$, then there exists $S\subseteq X$, $|S|=d$, s.t. $S$ is shattered by $C$.
Let $d':=VC_{dim}(\bar{C})$, then there exists $S'\subseteq X$, $|S'|=d'$, s.t. $S'$ is shattered by $\bar{C}$.
Show that $d\leq d'$ and $d' \leq d$. I know that a set $S$ is shattered by $C$ iff $\Pi_C(S):=\{c\cap S\mid c\in C\}=2^S$, but I have no clue how to show the two sides. Can someone help me with that? |
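A hint, stated in code form: since \((X\setminus c)\cap S = S\setminus(c\cap S)\), the map \(T\mapsto S\setminus T\) is a bijection of \(2^S\) carrying \(\Pi_C(S)\) onto \(\Pi_{\bar{C}}(S)\), so \(S\) is shattered by \(C\) iff it is shattered by \(\bar{C}\) (this gives both inequalities by symmetry). A brute-force check on a small, made-up class:

```python
from itertools import combinations

def vcdim(X, C):
    # brute-force VC dimension of concept class C over finite X
    d = 0
    for k in range(1, len(X) + 1):
        for S in combinations(X, k):
            traces = {frozenset(set(S) & c) for c in C}
            if len(traces) == 2 ** k:  # S is shattered
                d = k
                break
        else:
            break  # no set of size k shattered => none larger is either
    return d

X = set(range(5))
C = [set(), {0}, {0, 1}, {1, 2, 3}, {0, 2, 4}, {1, 4}]
Cbar = [X - c for c in C]
assert vcdim(X, C) == vcdim(X, Cbar)
```

The early exit is valid because every subset of a shattered set is shattered.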
Firstly, the pressure you are talking about is the
average pressure exerted on the wall. Secondly, does this water have air above it? If so the average pressure will be $P_0+\frac{1}{2}\rho gh$, which I agree is the pressure at a point halfway down the wall ($P_0$ is the atmospheric pressure). Finally, the average pressure is independent of the angle of the wall, what is changing is the direction of the normal force (as it is always perpendicular to the wall). Below I show a detailed calculation.
Calculation
The pressure at any point along the wall is given by $P = P_0 + \rho g x$, where $x$ is the vertical distance from the surface of the water. Assuming the wall has a constant width of, say $a$, the total force on the wall is given by (given $A = \frac{ah}{\cos\theta}$ we have $dA = \frac{a}{\cos\theta}dx$)\begin{eqnarray}F &=& \int P~ dA = \frac{a}{\cos\theta}\int^h_0 dx \left(P_0 + \rho g x\right)\\&=& \frac{ah}{\cos\theta} P_0 + \frac{1}{2}\rho g h\frac{ah}{\cos\theta}\end{eqnarray}
(Note that this force is pointing normal to the surface of the wall. We also see that its magnitude is dependent on $\theta$, however as we will see, the average pressure is independent of $\theta$). The area of the surface is $\frac{ah}{\cos\theta}$ so the average pressure is just $$P_{avg} = \frac{F}{A} = P_0 + \frac{1}{2}\rho g h$$ So we see that the average pressure is independent of the angle of our wall. |
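The cancellation of the \(1/\cos\theta\) factors can also be checked numerically. A minimal sketch, with unit wall width and illustrative values for \(h\), \(\rho\), \(g\), \(P_0\):

```python
import math

def avg_pressure(theta, h=2.0, rho=1000.0, g=9.81, P0=101325.0, n=100000):
    # midpoint-rule integration of P(x) = P0 + rho*g*x down the wall,
    # with area element dA = dx / cos(theta) (width a = 1)
    dA = 1.0 / math.cos(theta)
    F = sum((P0 + rho * g * (i + 0.5) * h / n) * dA * h / n for i in range(n))
    A = h * dA
    return F / A

# the same value P0 + rho*g*h/2 for every wall angle
for th in (0.0, 0.3, 0.6, 1.0):
    print(round(avg_pressure(th), 2))
```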
$\DeclareMathOperator{\ex}{\mathbb E}\DeclareMathOperator{\Var}{Var}\DeclareMathOperator{\Cov}{Cov}$If there are two giant families, one with all the women in it and one with all the men in it, then men will have more brothers on average. So you cannot say anything
in general, even assuming that there are the same number of men and women.
So we will need to make some assumptions. In order to find out what the correct assumptions are, I shall define some notation. For convenience, I shall use probabilistic notation, but we are really just talking about counting.
Let $M$ be the random variable corresponding to the number of men in a family chosen at random from the set of families, and let $W$ be the quantity corresponding to the number of women in a randomly chosen family. Let $\mathcal M$ denote the total number of men, let $\mathcal W$ denote the total number of women and let $\mathcal F$ denote the total number of families.
Important Note: The families themselves should be treated as constants. They are not random samples drawn from some kind of
distribution or anything like that. When I use probabilistic
notation, it is purely for the sake of convenience - I am interpreting the question combinatorially, and it so happens that probabilistic constructs such as sample variance do a good job of capturing certain combinatorial quantities that are relevant in this question.
If you like, we are working over the discrete measure space $(F,
\mathcal P(F), \mathbb P)$, where $F$ is the set of families and
$\mathbb P(A)=|A|/|F|$ for any $A\subset F$. $M$ is then the random
variable defined by $M(f)=\textrm{number of men in $f$}$, while
$W(f)=\textrm{number of women in $f$}$. $\ex,\Var,\Cov$ will all take
their usual meanings as population mean, population variance and population
covariance.
We want to compute the average number of brothers that each man has. To do this, we shall double-count the set $A$ of pairs $(m_1,m_2)$ such that $m_1$ and $m_2$ are brothers.
The first way we count this set will be by family. For a family $f$, let $m_f$ denote the number of men in family $f$. Then we have\begin{align}|A|&=\sum_f m_f(m_f-1)\\&=\mathcal F\ex(M(M-1))\end{align}since in each family $f$, we have $m_f$ choices for the first brother and $m_f-1$ choices for the second brother.
The second way to count this set will be by man. For a man $m$, denote by $b_m$ the number of brothers that $m$ has. Then we have $$|A|=\sum_m b_m$$Therefore:$$\sum_m b_m=\mathcal F\ex(M(M-1))$$Then:\begin{align}\textrm{Average number of brothers a man has}&=\sum_m b_m/\mathcal M\\&=\frac{\mathcal F}{\mathcal M}\ex(M(M-1))\\&=\frac{\ex(M(M-1))}{\ex M}\end{align}
A similar argument gives us$$\textrm{Average number of brothers a woman has}=\frac{\ex(MW)}{\ex W}$$In this second case, if we write $w_f$ for the number of women in family $f$, then the number of pairs $(w,m)$ such that $w$ and $m$ are sister and brother is $m_fw_f$.
We would like to show that this second quantity is bigger than the first. Let's try and rewrite each quantity first.
\begin{align}\frac{\ex(M(M-1))}{\ex M}&=\frac1{\ex M}\left(\ex M^2-\ex M\right)\\&=\frac1{\ex M}\left(\Var M+(\ex M)^2-\ex M\right)\\&=\frac{\Var M}{\ex M} + \ex M - 1\end{align}
while
\begin{align}\frac{\ex(MW)}{\ex W}&=\frac{1}{\ex W}\left(\Cov(M,W)+\ex M\ex W\right)\\&=\frac{\Cov(M,W)}{\ex W} + \ex M\end{align}
Therefore, in order to ensure that the average number of brothers a woman has is greater than the average number of brothers a man has, we need to assume that:$$\frac{\Cov(M,W)}{\ex W}>\frac{\Var M}{\ex M}-1$$
We can turn this into a condition saying that the number of men per family has to have small variance$$\Var M<\ex M+\frac{\mathcal M}{\mathcal W}\Cov(M, W)$$
Is this a reasonable assumption to make? Assuming that the number of men in a family is independent of the number of women in that family, we should expect $\Cov(M, W)$ to be small. And if we assume that the total number of men is roughly equal to the number of women, then $\mathcal M/\mathcal W$ will be roughly equal to $1$. So women have more brothers if and only if the variance in the number of men per family is less than $\ex M$.
This is a surprising result, since there's no reason to suppose that the variance in the number of men per family should be less than $\ex M$. In fact, numerical experiments indicate that $\Var M$ is quite often larger than $\ex M$, which means that in fact it will be men who have more brothers than women.
So the answer is that it depends on how large the families are, but your intuition that women will have more brothers is
not true in general, even under fairly strong assumptions. If the number of men per family varies greatly from family to family, it is in fact men who have more brothers on average.
This surprising result can easily be confirmed with numerical experiment.
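For instance, the following sketch draws men and women per family independently from a made-up high-variance distribution, then computes both averages directly from the double-counting identities above:

```python
import random

def brother_averages(num_families=10000, seed=1):
    rng = random.Random(seed)
    # hypothetical high-variance family sizes: Var M >> E M
    men = [rng.choice([0, 0, 0, 10]) for _ in range(num_families)]
    women = [rng.choice([0, 0, 0, 10]) for _ in range(num_families)]
    # each of the m_f men in a family has m_f - 1 brothers,
    # each of the w_f women has m_f brothers
    men_brothers = sum(m * (m - 1) for m in men) / sum(men)
    women_brothers = sum(m * w for m, w in zip(men, women)) / sum(women)
    return men_brothers, women_brothers
```

Here \(\ex M = 2.5\) and \(\Var M = 18.75\), with \(\Cov(M,W) \approx 0\), so the condition fails badly: men end up with roughly 9 brothers on average versus roughly 2.5 for women.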
What's the explanation? Well, let's consider a situation in which the number of men per family varies greatly from family to family. Suppose we assume also that the number of men per family is uncorrelated with the number of women per family.
What this means is that there are going to be a significant number of families with lots of men and very few women, and a significant number of families with lots of women and very few men.
Now, within any given family, we know that the women will have more brothers than the men. But look at the overall contribution to the average:
The first type of family gives us lots of men with lots of brothers, and a small number of women with lots of brothers. The second type of family gives us lots of women with very few brothers, and a small number of men with very few brothers.
So on average, the men will tend to have lots of brothers, and the women will tend to have very few brothers, as long as the variance in the number of men is large enough to counteract the extra brother that each woman has. With large enough families, that one extra brother counts for less and less, and the variance effect takes over, giving you precisely the opposite effect from what you expected.
Let's take one last look at the result. We found that the important factor was that the variance in the number of men per family should not be too large. What if this value were equal to zero? That would mean that every family had the same number $a$ of men, so then it
would be true that women have more brothers, since every man would have $a-1$ brothers and every woman would have $a$ brothers.
The dependence on the covariance is interesting, too. By the Cauchy-Schwarz inequality, we have $\Cov(M,W)\le\sqrt{\Var M\Var W}$, and if we assume that $\Var M$ and $\Var W$ are roughly the same, then we have $\Cov(M,W)\le\Var M$. This extreme value occurs if $W=\lambda M$ for some positive constant $\lambda$. In that case, a simple counting argument shows us that women will have an average of one more brother than men. |
Homework Statement: Car A drives a curve of radius 60m with a constant velocity of 48 km/h. When A is at the given position, car B is 30m away from the intersection and accelerating at 1.2 m/s^2 to the south. Calculate the length and direction of the acceleration that car B would measure of car A from its perspective at that instant. Homework Equations: Kinematic equations in polar and cartesian coordinates
I think my approach is quite wrong, still I gave it a shot:
First I know that ##v_A=13.3 m/s=r\omega=60\omega \rightarrow \omega=0.2 \frac{rad}{s}## Then $$\vec a_A=-r\omega^2 e_r=-2.4 e_r$$ But ##e_r=\cos{\theta}i+\sin{\theta}j## and substituting the latter in the acceleration equation I have that ##\vec a_A= -2i-1.2j## At last: $$\vec a_{A/B}=\vec a_A - \vec a_B$$ and this is where I stopped; hope you can help me. Thanks!
No, and this is wrong. The implied vols (from market prices) are actually not necessarily convex and yet may still be arbitrage-free; there are many examples of this for various equities. Furthermore, preserving convexity is not necessarily enough either. In terms of implied variance $w(y)=\sigma^2 T$ as a function of log-moneyness $y=\ln\frac{K}{F}$, the no-butterfly-arbitrage constraint becomes:$$1 - \frac{y}{w}\frac{\partial w}{\partial y} + \frac{1}{4}\left(-\frac{1}{4}-\frac{1}{w}+\frac{y^2}{w^2}\right)\left(\frac{\partial w}{\partial y}\right)^2 + \frac{1}{2}\frac{\partial^{2} w}{\partial y^2} \geq 0$$and is not a nice linear constraint as in the case of call prices. In terms of implied vol, the expression is not all that different, and less readable. The above stems from Gatheral's local vol derivation, and is also explained in my book.
Preserving convexity in the implied variance would mean that only the last term is positive, which does not guarantee that all is positive.
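The constraint is straightforward to evaluate pointwise. A sketch, transcribing the expression above term by term (the caller supplies \(w\), \(\partial w/\partial y\) and \(\partial^2 w/\partial y^2\); the function name is mine):

```python
def butterfly_g(y, w, dw, d2w):
    # g(y) >= 0 for all y is the no-butterfly-arbitrage condition
    # on total implied variance w(y) in log-moneyness y
    return (1 - y / w * dw
            + 0.25 * (-0.25 - 1 / w + y ** 2 / w ** 2) * dw ** 2
            + 0.5 * d2w)

# flat total-variance smile: g = 1 everywhere, no arbitrage
flat = butterfly_g(0.0, 0.04, 0.0, 0.0)
# a steep slope violates the condition even with zero curvature,
# illustrating that convexity alone is not enough
steep = butterfly_g(0.0, 0.04, 1.0, 0.0)
```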
A convexity preserving interpolation on the call prices vs strike is what matters to avoid butterfly-spread arbitrage.
Furthermore, note that such a B-spline will not be an exact interpolation but a least-squares fit, precisely because of the convexity constraint. Even then it won't be perfect, since the associated implied vol may look strange sometimes (see Peter Jackel's paper "Clamping Down on Arbitrage", or Le Floc'h and Oosterlee, "Model-Free Stochastic Collocation for an Arbitrage-Free Implied Volatility, Part II").
In the preceding chapter we learned that populations are characterized by descriptive measures called parameters. Inferences about parameters are based on sample statistics. We now want to estimate population parameters and assess the reliability of our estimates based on our knowledge of the sampling distributions of these statistics.
Point Estimates
We start with a point estimate. This is a single value computed from the sample data that is used to estimate the population parameter of interest.
The sample mean (\(\bar {x}\)) is a point estimate of the population mean (\(\mu\)). The sample proportion (\(\hat {p}\)) is the point estimate of the population proportion (p).
We use point estimates to construct confidence intervals for unknown parameters.
A confidence interval is an interval of values instead of a single point estimate. The level of confidence corresponds to the expected proportion of intervals that will contain the parameter if many confidence intervals are constructed of the same sample size from the same population. Our uncertainty is about whether our particular confidence interval is one of those that truly contains the true value of the parameter.
Example \(\PageIndex{1}\): bear weight
We are 95% confident that our interval contains the population mean bear weight.
If we created 100 confidence intervals of the same size from the same population, we would expect 95 of them to contain the true parameter (the population mean weight) and the other five not to contain it.
Figure \(\PageIndex{1}\): Confidence intervals from twenty-five different samples.
In this example, twenty-five samples from the same population gave these 95% confidence intervals. In the long term, 95% of all samples give an interval that contains µ, the true (but unknown) population mean.
Level of confidence is expressed as a percent.
The complement of the level of confidence is α (alpha), the level of significance. The level of confidence is expressed as \((1- \alpha) \times 100\%\).
What does this really mean?
We want to estimate a population parameter, such as the mean (μ) or proportion (p). We use a point estimate (e.g., the sample mean) to build an interval estimate, and we attach a level of confidence to that interval to describe how certain we are that it actually contains the unknown population parameter.
\[\bar {x}-E < \mu < \bar {x}+E\]
or
\[\hat {p}-E < p <\hat {p}+E\]
where \(E\) is the margin of error.
The confidence is based on area under a normal curve. So the assumption of normality must be met (Chapter 1).
Confidence Intervals about the Mean ( μ) when the Population Standard Deviation ( σ) is Known
A confidence interval takes the form of:
point estimate \(\pm\) margin of error.

The point estimate comes from the sample data. To estimate the population mean (\(\mu\)), use the sample mean (\(\bar{x}\)) as the point estimate.

The margin of error depends on the level of confidence, the sample size, and the population standard deviation. It is computed as \(E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}\), where \(Z_{\frac {\alpha}{2}}\) is the critical value from the standard normal table associated with α (the level of significance).

The critical value \(Z_{\frac {\alpha}{2}}\) is a Z-score that bounds the level of confidence. Confidence intervals are ALWAYS two-sided and the Z-scores are the limits of the area associated with the level of confidence.
Figure \(\PageIndex{2}\): The middle 95% area under a standard normal curve.

The level of significance (α) is divided into halves because we are looking at the middle 95% of the area under the curve. Go to your standard normal table and find the area of 0.025 in the body of values. What is the Z-score for that area? The Z-scores of ± 1.96 are the critical Z-scores for a 95% confidence interval.
Table \(\PageIndex{1}\): Common critical values (Z-scores).
Steps
Construction of a confidence interval about \(μ\) when \(σ\) is known:
1) \(Z_{\frac {\alpha}{2}}\) (critical value)
2) \(E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}\) (margin of error)
3) \(\bar {x} \pm E\) (point estimate ± margin of error)
Example \(\PageIndex{3}\): Construct a confidence interval about the population mean
Researchers have been studying p-loading in Jones Lake for many years. It is known that mean water clarity (using a Secchi disk) is normally distributed with a population standard deviation of
σ = 15.4 in. A random sample of 22 measurements was taken at various points on the lake with a sample mean of x̄ = 57.8 in. The researchers want you to construct a 95% confidence interval for μ, the mean water clarity.
A Secchi disk used to measure the turbidity of water. Image used with permission (CC SA; publiclab.org)

Solution
1) \(Z_{\frac {\alpha}{2}}\) = 1.96
2) \(E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}\) = \(1.96 \times \frac {15.4}{\sqrt {22}}\) = 6.435
3) \(\bar {x} \pm E\) = 57.8 ± 6.435
95% confidence interval for the mean water clarity is (51.36, 64.24).
We can be 95% confident that this interval contains the population mean water clarity for Jones Lake.
Now construct a 99% confidence interval for μ, the mean water clarity, and interpret.
1) \(Z_{\frac {\alpha}{2}}\)= 2.575
2) \(E=Z_{\frac {\alpha}{2}}\times \frac {\sigma}{\sqrt {n}}\) = \(2.575 \times \frac {15.4}{\sqrt {22}}\) = 8.454
3) \(\bar {x} \pm E\)= 57.8± 8.454
99% confidence interval for the mean water clarity is (49.35, 66.25).
We can be 99% confident that this interval contains the population mean water clarity for Jones Lake.
As the level of confidence increased from 95% to 99%, the width of the interval increased. As the probability (area under the normal curve) increased, the critical value increased resulting in a wider interval.
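The two intervals above can be reproduced with a short script using only Python's standard library (`z_interval` is an illustrative name, not a standard function):

```python
from math import sqrt
from statistics import NormalDist

def z_interval(xbar, sigma, n, confidence):
    # point estimate +/- z_{alpha/2} * sigma / sqrt(n)
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    E = z * sigma / sqrt(n)
    return xbar - E, xbar + E

print(z_interval(57.8, 15.4, 22, 0.95))  # about (51.36, 64.24)
print(z_interval(57.8, 15.4, 22, 0.99))  # about (49.34, 66.26)
```

The 99% interval differs from the text's (49.35, 66.25) in the second decimal because the script uses the exact critical value 2.5758 rather than the table value 2.575.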
Software Solutions

Minitab
You can use Minitab to construct this 95% confidence interval (Excel does not construct confidence intervals about the mean when the population standard deviation is known). Select Basic Statistics>1-sample Z. Enter the known population standard deviation and select the required level of confidence.
Figure 3. Minitab screen shots for constructing a confidence interval.

One-Sample Z: depth

Variable   N    Mean    StDev   SE Mean   95% CI
depth      22   57.80   11.60   3.28      (51.36, 64.24)
Confidence Intervals about the Mean (μ) when the Population Standard Deviation ( σ) is Unknown
Typically, in real life we often don’t know the population standard deviation (σ). We can use the sample standard deviation (s) in place of σ. However, because of this change, we can’t use the standard normal distribution to find the critical values necessary for constructing a confidence interval.
The Student’s t-distribution was created for situations when σ was unknown. Gosset worked as a quality control engineer for Guinness Brewery in Dublin. He found errors in his testing and he knew it was due to the use of s instead of σ. He created this distribution to deal with the problem of an unknown population standard deviation and small sample sizes. A portion of the t-table is shown below.
Table \(\PageIndex{2}\): Portion of the student’s t-table.
Example \(\PageIndex{4}\)
Find the critical value \(t_{\frac {\alpha}{2}}\) for a 95% confidence interval with a sample size of n=13.
Solution

Degrees of freedom (down the left-hand column) is equal to n-1 = 12. With α = 0.05, α/2 = 0.025. Go down the 0.025 column to 12 df: \(t_{\frac {\alpha}{2}}\) = 2.179.
The critical values from the students’ t-distribution approach the critical values from the standard normal distribution as the sample size (n) increases.
Table 3. Critical values from the student’s t-table.
Using the standard normal curve, the critical value for a 95% confidence interval is
1.96. You can see how different samples sizes will change the critical value and thus the confidence interval, especially when the sample size is small.
Construction of a Confidence Interval When σ is Unknown

\(t_{\frac {\alpha}{2}}\) critical value with n-1 df
\(E = t_{\frac {\alpha}{2}} \times \frac{s}{\sqrt {n}}\)
\(\bar {x} \pm E\)
Example \(\PageIndex{5}\):
Researchers studying the effects of acid rain in the Adirondack Mountains collected water samples from 22 lakes. They measured the pH (acidity) of the water and want to construct a 99% confidence interval about the mean lake pH for this region. The sample mean is 6.4438 with a sample standard deviation of 0.7120. They do not know anything about the distribution of the pH of this population, and the sample is small (n<30), so they look at a normal probability plot.
Figure 4. Normal probability plot.

Solution
The data is normally distributed. Now construct the 99% confidence interval about the mean pH.
1) \(t_{\frac {\alpha}{2}}\) = 2.831
2) \(E = t_{\frac {\alpha}{2}} \times \frac{s}{\sqrt {n}}\) = \(2.831 \times \frac {0.7120}{\sqrt {22}}\)= 0.4297
3) \(\bar {x} \pm E\) = 6.443 ± 0.4297
The 99% confidence interval about the mean pH is (6.013, 6.873).
We are 99% confident that this interval contains the mean lake pH for this lake population.
Now construct a 90% confidence interval about the mean pH for these lakes.
1) \(t_{\frac {\alpha}{2}}\) = 1.721
2) \(E = t_{\frac {\alpha}{2}} \times \frac{s}{\sqrt {n}}\) = \(1.721 \times \frac {0.7120}{\sqrt {22}}\) = 0.2612
3) \(\bar {x} \pm E\) = 6.443 ± 0.2612
The 90% confidence interval about the mean pH is (6.182, 6.704).
We are 90% confident that this interval contains the mean lake pH for this lake population.
Notice how the width of the interval decreased as the level of confidence decreased from 99 to 90%.
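The same three steps can be scripted. Since the t critical value comes from a table, this sketch (my own, not the text's) hard-codes the 21-df values quoted above rather than computing them:

```python
from math import sqrt

# t_{alpha/2} critical values for 21 df, transcribed from a standard t-table
T_CRIT_21DF = {0.99: 2.831, 0.95: 2.080, 0.90: 1.721}

def t_interval(xbar, s, n, conf):
    """CI for the mean when sigma is unknown; assumes n = 22 (21 df) here."""
    t = T_CRIT_21DF[conf]   # critical value t_{alpha/2}
    e = t * s / sqrt(n)     # margin of error
    return xbar - e, xbar + e

# Adirondack lake pH: sample mean 6.4429, s = 0.7120, n = 22
lo, hi = t_interval(6.4429, 0.7120, 22, 0.90)
print(f"{lo:.3f} {hi:.3f}")  # 6.182 6.704
```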
Construct a 90% confidence interval about the mean lake pH using Excel and Minitab.
Software Solutions Minitab
For Minitab, enter the data in the spreadsheet and select Basic statistics and 1-sample t-test.
One-Sample T: pH
Variable   N    Mean    StDev   SE Mean   90% CI
pH         22   6.443   0.712   0.152     (6.182, 6.704)
Additional example:
Excel
For Excel, enter the data in the spreadsheet and select descriptive statistics. Check Summary Statistics and select the level and confidence.
Mean                      6.442909
Standard Error            0.151801
Median                    6.4925
Mode                      #N/A
Standard Deviation        0.712008
Sample Variance           0.506956
Kurtosis                  -0.5007
Skewness                  -0.60591
Range                     2.338
Minimum                   5.113
Maximum                   7.451
Sum                       141.744
Count                     22
Confidence Level(90.0%)   0.26121
Excel gives you the sample mean in the first line (6.442909) and the margin of error in the last line (0.26121). You must complete the computation yourself to obtain the interval (6.442909±0.26121).
Confidence Intervals about the Population Proportion ( p)
Frequently, we are interested in estimating the population proportion (p), instead of the population mean (
µ). For example, you may need to estimate the proportion of trees infected with beech bark disease, or the proportion of people who support “green” products. The parameter p can be estimated in the same ways as we estimated µ, the population mean.

The Sample Proportion

The sample proportion is the best point estimate for the true population proportion.

Sample proportion \(\hat {p} = \frac {x}{n}\), where x is the number of elements in the sample with the characteristic you are interested in, and n is the sample size.

The Assumption of Normality when Estimating Proportions

The assumption of a normally distributed population is still important, even though the parameter has changed. Normality can be verified if: $$ n \times \hat {p} \times (1- \hat {p}) \ge 10$$

Constructing a Confidence Interval about the Population Proportion
Constructing a confidence interval about the proportion follows the same three steps we have used in previous examples.
\(Z_{\frac {\alpha}{2}}\) (critical value from the standard normal table)
\(E = Z_{\frac {\alpha}{2}} \times \sqrt {\frac{\hat {p}(1-\hat {p})}{n}}\) (margin of error)
\(\hat {p} \pm E\) (point estimate ± margin of error)
Example \(\PageIndex{6}\):
A botanist has produced a new variety of hybrid soybean that is better able to withstand drought. She wants to construct a 95% confidence interval about the germination rate (percent germination). She randomly selected 500 seeds and found that 421 have germinated.
Solution
First, compute the point estimate
$$\hat {p} = \frac {x}{n} =\frac {421}{500}=0.842$$
Check normality:
$$n \times \hat {p} \times (1-\hat {p}) = 500 \times 0.842 \times (1-0.842) = 66.5 \ge 10$$
You can assume a normal distribution.
Now construct the confidence interval:
1) \(Z_{\frac {\alpha}{2}}\) = 1.96
2) \(E = Z_{\frac {\alpha}{2}} \times \sqrt {\frac{\hat {p}(1-\hat {p})}{n}}\) =\(1.96 \times \sqrt {\frac {0.842(1-0.842)}{500}}\) = 0.032
3) \(\hat {p} \pm E = 0.842 \pm 0.032\)
The 95% confidence interval for the germination rate is (81.0%, 87.4%).
We can be 95% confident that this interval contains the true germination rate for this population.
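The three steps, including the normality check, can be sketched in code (my own illustration, not part of the text):

```python
from math import sqrt
from statistics import NormalDist

def proportion_interval(x, n, conf):
    """Normal-approximation confidence interval for a population proportion."""
    p_hat = x / n
    if n * p_hat * (1 - p_hat) < 10:   # normality assumption check
        raise ValueError("normal approximation not justified")
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    e = z * sqrt(p_hat * (1 - p_hat) / n)   # margin of error
    return p_hat - e, p_hat + e

# Soybean germination: 421 of 500 seeds germinated
lo, hi = proportion_interval(421, 500, 0.95)
print(f"{lo:.3f} {hi:.3f}")  # 0.810 0.874
```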
Software Solutions Minitab
You can use Minitab to compute the confidence interval. Select STAT>Basic stats>1-proportion. Select summarized data and enter the number of events (421) and the number of trials (500). Click Options and select the correct confidence level. Check “test and interval based on normal distribution” if the assumption of normality has been verified.
Test and CI for One Proportion
Sample   X    N    Sample p   95% CI
1        421  500  0.842000   (0.810030, 0.873970)
Using the normal approximation.
Excel
Excel does not compute confidence intervals for estimating the population proportion.
Confidence Interval Summary
Which method do I use?
The first question to ask yourself is:
Which parameter are you trying to estimate? If it is the mean (µ), then ask yourself: Is the population standard deviation (σ) known?

If yes, then follow these 3 steps:

Confidence Interval about the Population Mean (µ) when σ is Known
\(Z_{\frac {\alpha}{2}}\) critical value (from the standard normal table)
\(E=Z_{\frac {\alpha}{2}} \times \frac {\sigma}{\sqrt {n}}\)
\(\bar {x} \pm E\)
If no, follow these 3 steps:
Confidence Interval about the Population Mean (µ) when σ is Unknown
\(t_{\frac {\alpha}{2}}\) critical value with n-1 df from the student t-distribution
\(E=t_{\frac {\alpha}{2}} \times \frac {s}{\sqrt {n}}\)
\(\bar {x} \pm E\)
If you want to construct a confidence interval about the population proportion, follow these 3 steps:
Confidence Interval about the Proportion
\(Z_{\frac {\alpha}{2}}\) critical value from the standard normal table
\(E = Z_{\frac {\alpha}{2}} \times \sqrt {\frac{\hat {p}(1-\hat {p})}{n}}\)
\(\hat {p} \pm E\)
Remember that the assumption of normality must be verified. |
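The summary above amounts to a small decision function. The labels returned here are informal shorthand of mine, not standard terminology:

```python
def choose_method(parameter, sigma_known=False):
    """Map the confidence-interval decision tree to a method label."""
    if parameter == "proportion":
        return "z-interval for p"
    if parameter == "mean":
        if sigma_known:
            return "z-interval (sigma known)"
        return "t-interval (sigma unknown, n-1 df)"
    raise ValueError("parameter must be 'mean' or 'proportion'")

print(choose_method("mean"))  # t-interval (sigma unknown, n-1 df)
```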
Consider the following game show: two friends Tom and Jerry (X and Y) are selected from an audience to compete for a grand prize, a brand new Ferrari.
The game description:
1. The two contestants are space-like separated.
2. Each contestant will be asked one of three questions {A, B, C}, and the questions which are asked of the two contestants need not be the same. Each of these questions has two possible answers, which we will quantify as {+1, -1}.
3. Repeat step 2 a large number of times.
4. When the two contestants are asked the same question, they must always give the same answer.
5. When the two contestants are asked different questions, they must agree $ \frac 14 $ of the time.
Now, how can Tom and Jerry win this game?
It's simple: they create a strategy whereby they pre-decide on a set of answers to give to the three questions {A, B, C}. This guarantees that they always give the same answer when they are asked the same question. For example, they may decide on {+1, +1, -1} for {A, B, C}. Let us denote these answers as $v_i(\alpha) = \pm1$ for $i$ = X, Y and $\alpha$ = A, B, C.
This will not, however, allow Tom and Jerry to satisfy the requirement that they agree only $\frac 14$ of the time when asked different questions.
$\mathscr{Theorem}:$
There do not exist random variables $v_i(\alpha), \mathcal i$ = X, Y, $\alpha = A, B, C$ such that:
$$ 1. v_i(\alpha) = \pm1 $$ $$ 2. v_X(\alpha) = v_Y(\alpha)\forall \alpha $$ $$ 3. Pr(v_X(\alpha) = v_Y(\beta)) = \frac 14 \forall \alpha, \beta, \alpha \neq \beta $$
$\mathscr{Proof}:$
Assume for contradiction that there do exist random variables $v_i(\alpha), \mathcal i$ = X, Y, $\alpha = A, B, C$ such that (1-3 hold).
Since $ v_i(\alpha)$ can only take on the two values $\pm1$, we must have $Pr(v_X(A) = v_X(B)) + Pr(v_X(A) = v_X(C)) + Pr(v_X(B) = v_X(C)) \geq 1 $
By condition 2, we then have $Pr(v_X(A) = v_Y(B)) + Pr(v_X(A) = v_Y(C)) + Pr(v_X(B) = v_Y(C)) \geq 1$
Now, by condition 3, we have $ \frac 14 + \frac 14 + \frac 14 \geq 1$ a contradiction.
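The pigeonhole step can also be checked by brute force: with only two answer values and three questions, every pre-decided strategy makes at least one of the three question pairs agree, so the agreement fraction over pairs is at least 1/3 > 1/4. A quick verification sketch (my own, not from the original post):

```python
from itertools import product, combinations

# Enumerate every pre-decided strategy: an answer of +1 or -1 for each of A, B, C.
min_agree = 1.0
for strategy in product([+1, -1], repeat=3):
    pairs = list(combinations(range(3), 2))   # (A,B), (A,C), (B,C)
    agree = sum(strategy[i] == strategy[j] for i, j in pairs) / len(pairs)
    min_agree = min(min_agree, agree)

# Some pair must agree (pigeonhole), so agreement on distinct
# questions is at least 1/3, which exceeds the required 1/4.
print(min_agree)  # 0.3333333333333333
```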
But the predictions of quantum mechanics do satisfy (1-3), and experiments have validated quantum mechanics, so the observed correlations cannot be due to a pre-existing strategy. We must then wonder how Tom and Jerry could always ensure that property (2) holds if they cannot simply pre-decide on a set of answers to give to the three questions {A, B, C}. It must be that when Tom and Jerry are asked a question, they communicate with each other which question is being asked and agree on an answer, for if not, one would have $Pr(v_X(\alpha) = v_Y(\alpha)) = \frac 12$.
$\mathscr{Bell's\; Theorem\; Implication}:$ Quantum mechanical correlations are not due to pre-existing properties $\Rightarrow$ there must exist an exchange of information between entangled subsystems about which properties are being measured on them, and which values those properties are to take on. Combined with rule (1) of the game, this exchange must take place faster than light; recent experiments overwhelmingly suggest instantaneously.
My question is why is this salient point so muddled in the literature? Bell's theorem is often stated as follows.
$\mathscr{No\; theory\; of\; local\; hidden\; variables\; can\; produce\; the\; predictions\; of\; quantum\; mechanics}$
That's missing the point. The hidden variables (pre-existing properties) are a derived consequence of assuming no exchange of information about which properties are being measured and which values to take on. Bell simply showed that pre-existing properties fail as an explanation. Consequently, we must have the implication above.
Credit to Tim Maudlin for the game description. |
If you have a look at my answer to When objects fall along geodesic paths of curved space-time, why is there no force acting on them? this explains how on a curved surface two moving observers will appear to exprience a force pulling them together. However two stationary observers will feel no force. The force only becomes apparent when you move on the curved surface.
This is true in general relativity as well, but what is easily forgotten by newcomers to GR is that in GR we consider motion in spacetime, not just space. You are always moving in spacetime because you can't help moving in time. Your speed in spacetime is known as the four-velocity, and in fact the magnitude of the four-velocity (technically the
norm) is always $c$. So you can't help moving through spacetime (at the speed of light!) and when spacetime is curved this means you will experience gravitational forces.
You are probably familiar with Newton's first law of motion. This says that the acceleration of a body is zero unless a force acts on it. Newton's second law gives us the equation for the acceleration:
$$ \frac{d^2x}{dt^2} = \frac{F}{m} $$
The general relativity equivalent to this is called the geodesic equation:
$$ {d^2 x^\mu \over d\tau^2} = - \Gamma^\mu_{\alpha\beta} u^\alpha u^\beta \tag{1} $$
This is a lot more complicated than Newton's equation, but the similarity should be obvious. On the left we have an acceleration, and on the right we have the GR equivalent of a force. The objects $\Gamma^\mu_{\alpha\beta}$ are the Christoffel symbols and these tell us how much spacetime is curved. The quantity $u$ is the four-velocity.
Now let's consider the particular example you describe of releasing a ball. You say the ball is initially stationary. If it was stationary in spacetime, i.e. the four velocity $u = 0$, then the right hand side of equation (1) would always be zero and the acceleration would always be zero. So the ball wouldn't fall. But the four velocity isn't zero.
Suppose we use polar coordinates $(t, r, \theta, \phi)$ and write the four-velocity as $(u^t, u^r, u^\theta, u^\phi)$. If you're holding the ball stationary in space the spatial components of the four velocity are zero: $u^r = u^\theta = u^\phi = 0$. But you're still moving through time at (approximately) one second per second, so $u^t \ne 0$. If we use the geodesic equation (1) to calculate the radial acceleration we get:
$$ {d^2 r \over d\tau^2} = - \Gamma^r_{tt} u^t u^t $$
The Christoffel symbol $\Gamma^r_{tt}$ is fiendishly complicated to calculate so I'll do what we all do and just look it up:
$$ \Gamma^r_{tt} = \frac{GM}{c^2r^2}\left(1 - \frac{2GM}{c^2r}\right) $$
and our equation for the radial acceleration becomes:
$$ {d^2 r \over d\tau^2} = - \frac{GM}{c^2r^2}\left(1 - \frac{2GM}{c^2r}\right) u^t u^t \tag{2} $$
Now, I don't propose to go any further with this because the maths gets very complicated very quickly. However it should be obvious that the radial acceleration is non-zero and negative. That means the ball will accelerate inwards. Which is of course exactly what we observe. What is interesting is to consider what happens in the Newtonian limit, i.e. when GR effects are so small they can be ignored. In this limit we have:
$2GM/c^2r \ll 1$, so that the factor $\left(1 - \frac{2GM}{c^2r}\right) \approx 1$; the four-velocity is dominated by its time component, so $u^t \approx c$; and proper time runs at essentially the coordinate rate, so $\tau \approx t$. If we feed these approximations into equation (2) we get:
$$ {d^2 r \over dt^2} = - \frac{GM}{c^2r^2}c^2 = - \frac{GM}{r^2} $$
and this is just Newton's law of gravity! |
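As a numerical sanity check (my own numbers, not part of the original answer), plugging Earth's parameters into equation (2) with $u^t u^t \approx c^2$ reproduces the familiar surface gravity, and the relativistic correction factor is negligible:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24    # Earth's mass, kg
r = 6.371e6     # Earth's radius, m
c = 2.998e8     # speed of light, m/s

# Full radial acceleration from eq. (2), with u^t u^t approximated by c^2
a_gr = (G * M / (c**2 * r**2)) * (1 - 2 * G * M / (c**2 * r)) * c**2

# Newtonian limit
a_newton = G * M / r**2

print(a_gr, a_newton)
# The two agree to about 1 part in 10^9, since 2GM/(c^2 r) ~ 1.4e-9 at Earth's surface.
```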
Ohm's Law describes the relationship between power \((P)\), voltage \((V)\), current \((I)\), and resistance \((R)\).
OHM'S LAW PIE CHART
Voltage (Volt)
Volt \(\:(V)\:\) or \(\:(E)\:\) is a unit of electrical pressure. One volt is the amount of pressure that will cause one ampere of current in one ohm of resistance. Different voltages typically used in the oilfield in the USA are 480V, 4,160V, and 12kV (kilovolt).
Voltage Formula
\(\large{ Volts = \sqrt{Watts \; Ohms} = V = \sqrt{P \; R} }\)
\(\large{ Volts = \frac{Watts}{Amps} = V = \frac{P}{I} }\)
\(\large{ Volts = Amps \; Ohms = V = I \; R }\)
Current (Amp)
Current \(\:(I)\:\) is the rate of flow of electricity in a circuit, measured in amperes.
Amp is a unit of current. One ampere (amp) is the current flowing through one ohm of resistance at one volt potential.
Current formula
\(\large{ Amps = \frac{Volts}{Ohms} = I = \frac{V}{R} }\)
\(\large{ Amps = \frac{Watts}{Volts} = I = \frac{P}{V} }\)
\(\large{ Amps = \sqrt{\frac{Watts}{Ohms}} = I = \sqrt{\frac{P}{R}} }\)
Power (Watt)
Power \(\:(P)\:\) is the rate of doing work and is measured by the amount of foot-pounds of work done in a particular unit of time. A horsepower is a measure of power.
Power Formula
\(\large{ Watts = \frac{Volts^2}{Ohms} = P = \frac{V^2}{R} }\)
\(\large{ Watts = Amps^2 \; Ohms = P = I^2 \; R }\)
\(\large{ Watts = Volts \; Amps = P = V \; I }\)
Resistance (Ohm)
Resistance \(\:(R)\:\) is the ability to resist or prevent the flow of current. In order to overcome the resistance and get the current to flow, a higher voltage is required. Resistance is measured in ohms, represented by \(R\), with the symbol \(\Omega\).
Ohm is a unit of resistance. One ohm is the resistance through which a potential of one volt drives a current of one ampere.
Resistance formula
\(\large{ Ohms = \frac{Volts}{Amps} = R = \frac{V}{I} }\)
\(\large{ Ohms = \frac{Volts^2}{Watts} = R = \frac{V^2}{P} }\)
\(\large{ Ohms = \frac{Watts}{Amps^2} = R = \frac{P}{I^2} }\)
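The basic relations above can be wrapped as small helper functions. This is a minimal illustrative sketch (Python used here for illustration; the function names are mine):

```python
def voltage(current, resistance_ohms):
    """V = I * R"""
    return current * resistance_ohms

def power(volts, amps):
    """P = V * I"""
    return volts * amps

def resistance(volts, amps):
    """R = V / I"""
    return volts / amps

# Example: 480 V across 120 ohms draws 4 A and dissipates 1920 W
i = 480 / 120
print(i, power(480, i), resistance(480, i))  # 4.0 1920.0 120.0
```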
electrical symbols

Quantity | Symbol | Measuring Units | Description | Formula
Capacitance | \(C\) | Farad | unit of capacitance | \(C = \frac{Q}{V}\)
Charge | \(Q\) | Coulomb | unit of electrical charge | \(Q = C \; V\)
Conductance | \(G\) | Siemen | reciprocal of resistance | \(G = \frac{1}{R}\)
Current | \(I\) | Amp | unit of electrical current | \(I = \frac{V}{R}\)
Frequency | \(f\) | Hertz | unit of frequency | \(f = \frac{1}{T}\)
Impedance | \(Z\) | Ohm | unit of AC resistance | \(Z^2 = R^2 + X^2\)
Inductance | \(L\) | Henry | unit of inductance | \(V_L = L \frac{di}{dt}\)
Power | \(P\) | Watt | unit of power | \(P = V \; I\)
Resistance | \(R\) (\(\Omega\)) | Ohm | unit of DC resistance | \(R = \frac{V}{I}\)
Voltage | \(V\) or \(E\) | Volt | unit of electrical potential | \(V = I \; R\)
Tags: Equations for Electrical |
Kelly Gambling Introduction
In Kelly gambling (Kelly 1956), we are given the opportunity to bet on \(n\) possible outcomes, which yield a random non-negative return of \(r \in {\mathbf R}_+^n\). The return \(r\) takes on exactly \(K\) values \(r_1,\ldots,r_K\) with known probabilities \(\pi_1,\ldots,\pi_K\). This gamble is repeated over \(T\) periods. In a given period \(t\), let \(b_i \geq 0\) denote the fraction of our wealth bet on outcome \(i\). Assuming the \(n\)th outcome is equivalent to not wagering (it returns one with certainty), the fractions must satisfy \(\sum_{i=1}^n b_i = 1\). Thus, at the end of the period, our cumulative wealth is \(w_t = (r^Tb)w_{t-1}\). Our goal is to maximize the average growth rate with respect to \(b \in {\mathbf R}^n\):
\[ \begin{array}{ll} \underset{b}{\mbox{maximize}} & \sum_{j=1}^K \pi_j\log(r_j^Tb) \\ \mbox{subject to} & b \geq 0, \quad \sum_{i=1}^n b_i = 1. \end{array} \]
Example
We solve the Kelly gambling problem for \(K = 100\) and \(n = 20\). The probabilities \(\pi_j \sim \mbox{Uniform}(0,1)\), and the potential returns \(r_j \sim \mbox{Uniform}(0.5,1.5)\) except for \(r_n = {\mathbf 1}\), which represents the payoff from not wagering. With an initial wealth of \(w_0 = 1\), we simulate the growth trajectory of our Kelly optimal bets over \(P = 100\) periods, assuming returns are i.i.d. over time. In the following code, rets is the \(K \times n\) matrix of possible returns with row \(r_i\), while ps is the vector of return probabilities \((\pi_1,\ldots,\pi_K)\).
set.seed(1)
n <- 20        # Total bets
K <- 100       # Number of possible returns
PERIODS <- 100
TRIALS <- 5

## Generate return probabilities
ps <- runif(K)
ps <- ps/sum(ps)

## Generate matrix of possible returns
rets <- runif(K*(n-1), 0.5, 1.5)
shuff <- sample(1:length(rets), size = length(rets), replace = FALSE)
rets[shuff[1:30]] <- 0    # Set 30 returns to be relatively low
rets[shuff[31:60]] <- 5   # Set 30 returns to be relatively high
rets <- matrix(rets, nrow = K, ncol = n-1)
rets <- cbind(rets, rep(1, K))   # Last column represents not betting

## Solve for Kelly optimal bets
b <- Variable(n)
obj <- Maximize(t(ps) %*% log(rets %*% b))
constraints <- list(sum(b) == 1, b >= 0)
prob <- Problem(obj, constraints)
result <- solve(prob)
bets <- result$getValue(b)

## Naive betting scheme: bet in proportion to expected return
bets_cmp <- matrix(0, nrow = n)
bets_cmp[n] <- 0.15   # Hold 15% of wealth
rets_avg <- ps %*% rets
## tidx <- order(rets_avg[-n], decreasing = TRUE)[1:9]
tidx <- 1:(n-1)
fracs <- rets_avg[tidx]/sum(rets_avg[tidx])
bets_cmp[tidx] <- fracs*(1-bets_cmp[n])

## Calculate wealth over time
wealth <- matrix(0, nrow = PERIODS, ncol = TRIALS)
wealth_cmp <- matrix(0, nrow = PERIODS, ncol = TRIALS)
for(i in seq_len(TRIALS)) {
  sidx <- sample(K, size = PERIODS, replace = TRUE, prob = ps)
  winnings <- rets[sidx,] %*% bets
  wealth[,i] <- cumprod(winnings)
  winnings_cmp <- rets[sidx,] %*% bets_cmp
  wealth_cmp[,i] <- cumprod(winnings_cmp)
}
Growth curves for five independent trials are plotted in the figures below. Red lines represent the wealth each period from the Kelly bets, while cyan lines are the result of the naive bets. Clearly, Kelly optimal bets perform better, producing greater net wealth by the final period.
df <- data.frame(seq_len(PERIODS), wealth)
names(df) <- c("x", paste0("kelly", seq_len(TRIALS)))
plot.data1 <- gather(df, key = "trial", value = "wealth",
                     paste0("kelly", seq_len(TRIALS)), factor_key = TRUE)
plot.data1$Strategy <- "Kelly Optimal Bets"
df <- data.frame(seq_len(PERIODS), wealth_cmp)
names(df) <- c("x", paste0("naive", seq_len(TRIALS)))
plot.data2 <- gather(df, key = "trial", value = "wealth",
                     paste0("naive", seq_len(TRIALS)), factor_key = TRUE)
plot.data2$Strategy <- "Naive Bets"
plot.data <- rbind(plot.data1, plot.data2)
ggplot(data = plot.data) +
  geom_line(mapping = aes(x = x, y = wealth, group = trial, color = Strategy)) +
  scale_y_log10() +
  labs(x = "Time", y = "Wealth") +
  theme(legend.position = "top")
Extensions
As observed in some trajectories above, wealth tends to drop by a significant amount before increasing eventually. One way to reduce this drawdown risk is to add a convex constraint as described in Busseti, Ryu, and Boyd (2016, 5.3)
\[ \log\left(\sum_{j=1}^K \exp(\log\pi_j - \lambda \log(r_j^Tb))\right) \leq 0 \]
where \(\lambda \geq 0\) is the risk-aversion parameter. With CVXR, this can be accomplished in a single line using the log_sum_exp atom. Other extensions like wealth goals, betting restrictions, and VaR/CVaR bounds are also readily incorporated.
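For readers working outside R, the growth objective and the drawdown-risk constraint can be evaluated directly with numpy. The function names and toy data below are my own illustration, not part of the CVXR model:

```python
import numpy as np

def kelly_growth(b, rets, ps):
    """Expected log-growth rate: sum_j pi_j log(r_j^T b)."""
    return float(ps @ np.log(rets @ b))

def risk_constraint(b, rets, ps, lam):
    """LHS of the drawdown-risk constraint; the bet b is feasible when <= 0."""
    return float(np.log(np.sum(np.exp(np.log(ps) - lam * np.log(rets @ b)))))

# Toy example: two outcomes, two bets (second bet = not wagering)
ps = np.array([0.5, 0.5])
rets = np.array([[2.0, 1.0],
                 [0.5, 1.0]])
b = np.array([0.25, 0.75])   # wager a quarter of wealth
print(round(kelly_growth(b, rets, ps), 4))  # 0.0448
```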
Session Info
sessionInfo()
## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin18.5.0 (64-bit)
## Running under: macOS Mojave 10.14.5
##
## Matrix products: default
## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices datasets utils methods base
##
## other attached packages:
## [1] tidyr_0.8.3 ggplot2_3.1.1 CVXR_0.99-6
##
## loaded via a namespace (and not attached):
## [1] gmp_0.5-13.5 Rcpp_1.0.1 compiler_3.6.0
## [4] pillar_1.4.1 plyr_1.8.4 R.methodsS3_1.7.1
## [7] R.utils_2.8.0 tools_3.6.0 digest_0.6.19
## [10] bit_1.1-14 evaluate_0.14 tibble_2.1.2
## [13] gtable_0.3.0 lattice_0.20-38 pkgconfig_2.0.2
## [16] rlang_0.3.4 Matrix_1.2-17 yaml_2.2.0
## [19] blogdown_0.12.1 xfun_0.7 withr_2.1.2
## [22] dplyr_0.8.1 Rmpfr_0.7-2 ECOSolveR_0.5.2
## [25] stringr_1.4.0 knitr_1.23 tidyselect_0.2.5
## [28] bit64_0.9-7 grid_3.6.0 glue_1.3.1
## [31] R6_2.4.0 rmarkdown_1.13 bookdown_0.11
## [34] purrr_0.3.2 magrittr_1.5 scales_1.0.0
## [37] htmltools_0.3.6 scs_1.2-3 assertthat_0.2.1
## [40] colorspace_1.4-1 labeling_0.3 stringi_1.4.3
## [43] lazyeval_0.2.2 munsell_0.5.0 crayon_1.3.4
## [46] R.oo_1.22.0
Source References
Busseti, E., E. K. Ryu, and S. Boyd. 2016. “Risk–Constrained Kelly Gambling.”
Journal of Investing 25 (3): 118–34.
Kelly, J. L. 1956. “A New Interpretation of Information Rate.”
Bell System Technical Journal 35 (4): 917–26. |
Finite dimensional smooth attractor for the Berger plate with dissipation acting on a portion of the boundary
1.
Department of Mathematics, University of Nebraska-Lincoln, Lincoln, Nebraska 68588
2.
Department of Mathematics, Faculty of Science, Hacettepe University, Beytepe 06800, Ankara
3.
Hacettepe University, Ankara, Turkey
one boundary condition on a portion of the boundary. In [24] this type of boundary damping was considered for a Berger plate on the whole boundary and shown to yield the existence of a compact global attractor. In this work we address the issues arising from damping active only on a portion of the boundary, including deriving a necessary trace estimate for $(\Delta u)\big|_{\Gamma_0}$ and eliminating a geometric condition in [24] which was utilized on the damped portion of the boundary.
Additionally, we use recent techniques in the asymptotic behavior of hyperbolic-like dynamical systems [11, 18] involving a ``stabilizability" estimate to show that the compact global attractor has finite fractal dimension and exhibits additional regularity beyond that of the state space (for finite energy solutions).
Keywords: Global attractor, boundary dissipation, dissipative dynamical system, nonlinear plate equation.

Mathematics Subject Classification: Primary: 35B41, 74K20; Secondary: 35Q74, 35A0.

Citation: George Avalos, Pelin G. Geredeli, Justin T. Webster. Finite dimensional smooth attractor for the Berger plate with dissipation acting on a portion of the boundary. Communications on Pure & Applied Analysis, 2016, 15 (6) : 2301-2328. doi: 10.3934/cpaa.2016038
References:
[1]
J. P. Aubin, Un théorème de compacité,,
[2]
G. Avalos and I. Lasiecka, Exponential stability of a thermoelastic system without mechanical dissipation,,
[3]
G. Avalos and I. Lasiecka, Boundary controllability of thermoelastic plates via the free boundary conditions,,
[4]
A. Babin and M. Vishik,
[5] [6]
H. M. Berger, A new approach to the analysis of large deflections of plates,,
[7]
V. V. Bolotin,
[8] [9]
F. Bucci, I. Chueshov and I. Lasiecka, Global attractor for a composite system of nonlinear wave and plate equations,,
[10]
F. Bucci and I. Chueshov, Long-time dynamics of a coupled system of nonlinear wave and thermoelastic plate equations,,
[11] [12] [13]
I. Chueshov,
[14]
I. Chueshov, M. Eller and I. Lasiecka, Finite dimensionality of the attractor for a semilinear wave equation with nonlinear boundary dissipation,,
[15]
I. Chueshov and I. Lasiecka, Global attractors for von Karman evolutions with a nonlinear boundary dissipation,,
[16] [17]
I. Chueshov and I. Lasiecka, Long-time dynamics of von Karman semi-flows with non-linear boundary/interior damping,,
[18] [19]
I. Chueshov, I. Lasiecka and D. Toundykov, Global attractor for a wave equation with nonlinear localized boundary damping and a source term of critical exponent,,
[20]
I. Chueshov, I. Lasiecka and J. T. Webster, Attractors for delayed, non-rotational von Karman plates with applications to flow-structure interactions without any damping,,
[21]
P. Ciarlet and P. Rabier,
[22]
A. Eden and A. J. Milani, Exponential attractors for extensible beam equations,,
[23]
P. Fabrie, C. Galusinski, A. Miranville and S. Zelik, Uniform exponential attractors for a singularly perturbed damped wave equation,,
[24]
P. G. Geredeli and J. T. Webster, Qualitative results on the dynamics of a Berger plate with nonlinear boundary damping,,
[25]
P. G. Geredeli, I. Lasiecka and J. T. Webster, Smooth attractors of finite dimension for von Karman evolutions with nonlinear frictional damping localized in a boundary layer,,
[26]
P. G. Geredeli and J. T. Webster, Decay rates to equilibrium for nonlinear plate equations with geometrically constrained, degenerate dissipation,
[27]
J. K. Hale and G. Raugel, Attractors for dissipative evolutionary equations,,
[28]
G. Ji and I. Lasiecka, Nonlinear boundary feedback stabilization for a semilinear Kirchhoff plate with dissipation acting only via moments-limiting behavior,,
[29] [30] [31]
I. Lasiecka and R. Triggiani,
[32] [33] [34]
J. L. Lions,
[35]
J. L. Lions, Contrôlabilité exacte, perturbations et stabilization de systèmes distribués,,
[36] [37]
A. Miranville and S. Zelik, Attractors for dissipative partial differential equations in bounded and unbounded domains,,
[38] [39] [40] [41] [42] [43]
C. P. Vendhan, A study of Berger equations applied to nonlinear vibrations of elastic plates,,
There is much misconception in the question, but I'll hazard an answer. I am not going to give any links (no point linking to Wikipedia, it's
just there), but I'll highlight the important terms in bold. If you want to research more, search for these. Any good general astronomy course textbook will cover these topics, too, if you want a bit more systematic approach.
My understanding is that the sun is basically a sphere of hydrogen with a helium core, and that the hydrogen is undergoing nuclear fusion to produce helium.
Basically, the Sun is a ball of hydrogen and helium, but this is not all there is. Being a Population I star, the Sun contains heavier elements (called metals in stellar astrophysics; anything lithium and heavier counts as a metal in this sense). These elements came with the gas cloud the Sun formed from, and were produced by older stars that had already exploded. Despite their low abundance, the metals play an important role in the stability of the Sun's core power output.
At some depth, the weight of the overlying gas compresses and heats the interior enough that fusion of hydrogen into helium begins. This region is called the core. This is where practically all fusion happens, and it is responsible for the star's energy production. For stars of the Sun's mass and below, the proton-proton chain dominates. The pp-chain energy output is approximately proportional to $T^4$. The good news is that if the reaction rate drops, the outer layers of the star compress the core, heating it up, and the renewed energy output compensates for the compression. This highly sensitive dependence on temperature is what gives the star its long-term stability.
It is also notable that the center of the core is hotter and therefore more energetic than its periphery, and turns hydrogen into helium faster. Absent any mixing, the core would develop an inert helium ball in the middle (a Sun-mass star cannot fuse helium; its core is too cold for that): a pp-chain core is entirely non-convective. However, there is another multistage reaction that fuses protons into helium nuclei, the CNO cycle. This cycle requires metals (C, N and O, naturally) to be present in the core. They are not consumed; they participate in intermediate stages of the reaction and are ultimately recycled. The rate of this reaction depends on the temperature as $T^{20}$. That is a huge dependency! A CNO-dominated core has such a steep temperature gradient that it is fully convective, so it mixes the core material very thoroughly.
It so happens that for a star of mass $M=1.5 M_\odot$ the core is fully convective, but this is not an on/off phenomenon. Even in the Sun, the CNO cycle produces roughly 10% of the core's output power and is responsible for intermittent mixing of the core material. The temperature dependence of this reaction is so steep that it is practically irrelevant at $M=0.9 M_\odot$ and $T=14.5\times 10^6\,K$, and becomes dominant at $M=1.5 M_\odot$ and $T=17.5\times 10^6\,K$. The Sun sits at the very low end of this range.
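To get a feel for how steep that dependence is, here is a quick back-of-envelope comparison (a sketch using only the two power laws quoted above; real reaction rates also depend on density and composition):

```python
# Assumed scalings from the text: pp-chain rate ~ T^4, CNO rate ~ T^20.
# Compare how much each speeds up between a 0.9-solar-mass core (14.5 MK)
# and a 1.5-solar-mass core (17.5 MK).
T_low, T_high = 14.5e6, 17.5e6   # core temperatures in kelvin

pp_gain = (T_high / T_low) ** 4    # gentle feedback, roughly doubles
cno_gain = (T_high / T_low) ** 20  # explosive growth: marginal -> dominant
print(f"pp rate grows x{pp_gain:.1f}, CNO rate grows x{cno_gain:.0f}")
```

The ~20% rise in temperature barely doubles the pp rate but boosts the CNO rate by a factor of about forty, which is why the transition happens over such a narrow mass range.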
There is not a huge difference in the lifetime of the star even absent the CNO mechanism; it only changes the hydrodynamics of the core and its reactivity to temperature variation. But for short-term stability it is very important: it amplifies the negative feedback loop that stabilizes the core reaction rate. It is probable (so models tell us) that without it, the Sun's energy output would be much more variable on scales of $\sim 10^3$ years. So we are lucky to have gotten enough "metal" in our home star, in the end; our ice ages have been bad enough already!
There are many images and cross-sectional schematics on Google but I can't find any actual numbers for the radii.
About $0.2\,R_\odot$.
Are the nuclear reactions occurring where the helium meets the hydrogen?
A Sun-mass star on the main sequence does not fuse helium. Helium fusion is a much more energetic process, and requires either a more massive star or a much later stage of evolution. For now, helium is the embers of the combustion in the Sun, not its fuel.
As a side note, the Sun is a very calm reactor by Earthly standards. The core's energy output is about $300\, W/m^3$, far too low for any practical fusion reactor on Earth. You would need a chunk of the Sun's core $10^7\,m^3$ in size to match the power of a large coal-fueled electrical plant, and that is the volume of a ball about $300\,m$ across. There is no way we could contain such a fireball at 15 million K; terrestrial fusion projects aim at much higher temperatures and thus reaction rates.
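The $10^7\,m^3$ figure is easy to check (a sketch; the $300\,W/m^3$ power density is from the text, while the ~3 GW thermal output for a large coal plant is my assumption):

```python
import math

plant_power = 3e9          # W (assumed: thermal output of a large coal plant)
core_power_density = 300.0 # W/m^3 (figure quoted in the text)

volume = plant_power / core_power_density        # m^3 of solar core needed
radius = (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
print(f"need {volume:.0e} m^3 of core, a ball ~{2 * radius:.0f} m across")
```

That works out to a ball roughly 270 m in diameter, consistent with the "about 300 m" estimate above.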
What radius are the nuclear reactions occurring at?
In the Sun, while on the main sequence: all throughout the core. The core is essentially isolated from the hydrogen supply of the outer layers by the radiative zone, where the temperature gradient is too shallow to drive convection. As the Sun exhausts the fuel in its core, it will transition into the red giant phase (and the Sun will do it twice!). This happens when the core has mostly turned into unburnable helium and cools down. The hydrogen-rich material that was previously in the radiative zone collapses onto the surface of the helium core, and restarts the hydrogen reaction when its temperature reaches the ignition point. This reaction occurs only on the surface of the inner helium ball, in a spherical shell. This time there is no mixing mechanism that could disturb the inert ball inside.
All numbers come entirely off the top of my head, the best I could recall. Please double-check after me! |
Sparse Inverse Covariance Estimation Introduction
Assume we are given i.i.d. observations \(x_i \sim N(0,\Sigma)\) for \(i = 1,\ldots,m\), and the covariance matrix \(\Sigma \in {\mathbf S}_+^n\), the set of symmetric positive semidefinite matrices, has a sparse inverse \(S = \Sigma^{-1}\). Let \(Q = \frac{1}{m-1}\sum_{i=1}^m (x_i - \bar x)(x_i - \bar x)^T\) be our sample covariance. One way to estimate \(\Sigma\) is to maximize the log-likelihood with the prior knowledge that \(S\) is sparse (Friedman, Hastie, and Tibshirani 2008), which amounts to the optimization problem:
\[ \begin{array}{ll} \underset{S}{\mbox{maximize}} & \log\det(S) - \mbox{tr}(SQ) \\ \mbox{subject to} & S \in {\mathbf S}_+^n, \quad \sum_{i=1}^n \sum_{j=1}^n |S_{ij}| \leq \alpha. \end{array} \]
The parameter \(\alpha \geq 0\) controls the degree of sparsity. The problem is convex, so we can solve it using
CVXR.
Example
We’ll create a sparse positive semidefinite matrix \(S\) using synthetic data.

set.seed(1)
n <- 10   ## Dimension of matrix
m <- 1000 ## Number of samples

## Create sparse, symmetric PSD matrix S
A <- rsparsematrix(n, n, 0.15, rand.x = stats::rnorm)  ## from the Matrix package
Strue <- A %*% t(A) + 0.05 * diag(rep(1, n))           ## Force matrix to be strictly positive definite
We can now create the covariance matrix \(R\) as the inverse of \(S\).
R <- base::solve(Strue)
As test data, we sample from a multivariate normal using the fact that if \(Y \sim N(0, I)\), then \(R^{1/2}Y \sim N(0, R)\), where \(R^{1/2}\) is the symmetric square root of \(R\).
x_sample <- matrix(stats::rnorm(n * m), nrow = m, ncol = n) %*% t(expm::sqrtm(R))
Q <- cov(x_sample) ## Sample covariance matrix
Finally, we solve our convex program for a set of \(\alpha\) values.
alphas <- c(10, 8, 6, 4, 1)
S <- Semidef(n) ## Variable constrained to positive semidefinite cone
obj <- Maximize(log_det(S) - matrix_trace(S %*% Q))
S.est <- lapply(alphas, function(alpha) {
    constraints <- list(sum(abs(S)) <= alpha)
    ## Form and solve optimization problem
    prob <- Problem(obj, constraints)
    result <- solve(prob)
    ## Create covariance matrix
    R_hat <- base::solve(result$getValue(S))
    Sres <- result$getValue(S)
    Sres[abs(Sres) <= 1e-4] <- 0
    Sres
})
In the code above, the
Semidef constructor restricts
S to the positive semidefinite cone. In our objective, we use
CVXR functions for the log-determinant and trace. The expression
matrix_trace(S %*% Q) is equivalent to sum(diag(S %*% Q)), but the former is preferred because it is more efficient than making nested function calls.
However, a standalone atom does not exist for the determinant, so we cannot replace
log_det(S) with
log(det(S)):
det is undefined for a
Semidef object.
Results
The figures below depict the solutions for the above dataset with \(m = 1000, n = 10\), and \(S\) containing 26% non-zero entries, represented by the dark squares. The sparsity of our inverse covariance estimate decreases for higher \(\alpha\): when \(\alpha = 1\), most of the off-diagonal entries are zero, while when \(\alpha = 10\), over half the matrix is dense. At \(\alpha = 4\), we recover the true percentage of non-zeros.
do.call(multiplot, args = c(list(plotSpMat(Strue)),
                            mapply(plotSpMat, S.est, alphas, SIMPLIFY = FALSE),
                            list(layout = matrix(1:6, nrow = 2, byrow = TRUE))))
Session Info
sessionInfo()
## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin18.5.0 (64-bit)
## Running under: macOS Mojave 10.14.5
##
## Matrix products: default
## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] grid      stats     graphics  grDevices datasets  utils     methods
## [8] base
##
## other attached packages:
## [1] expm_0.999-4  Matrix_1.2-17 ggplot2_3.1.1 CVXR_0.99-6
##
## loaded via a namespace (and not attached):
##  [1] gmp_0.5-13.5      Rcpp_1.0.1        RColorBrewer_1.1-2
##  [4] compiler_3.6.0    pillar_1.4.1      plyr_1.8.4
##  [7] R.methodsS3_1.7.1 R.utils_2.8.0     tools_3.6.0
## [10] digest_0.6.19     bit_1.1-14        evaluate_0.14
## [13] tibble_2.1.2      gtable_0.3.0      lattice_0.20-38
## [16] pkgconfig_2.0.2   rlang_0.3.4       yaml_2.2.0
## [19] blogdown_0.12.1   xfun_0.7          withr_2.1.2
## [22] dplyr_0.8.1       Rmpfr_0.7-2       ECOSolveR_0.5.2
## [25] stringr_1.4.0     knitr_1.23        tidyselect_0.2.5
## [28] bit64_0.9-7       glue_1.3.1        R6_2.4.0
## [31] rmarkdown_1.13    bookdown_0.11     purrr_0.3.2
## [34] magrittr_1.5      scales_1.0.0      htmltools_0.3.6
## [37] scs_1.2-3         assertthat_0.2.1  colorspace_1.4-1
## [40] labeling_0.3      stringi_1.4.3     lazyeval_0.2.2
## [43] munsell_0.5.0     crayon_1.3.4      R.oo_1.22.0
References
Friedman, J., T. Hastie, and R. Tibshirani. 2008. “Sparse Inverse Covariance Estimation with the Graphical Lasso.”
Biostatistics 9 (3): 432–41. |
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ... |
Osaka Journal of Mathematics, Volume 47, Number 2 (2010), 353-384.

Positive forms on hyperkähler manifolds

Abstract
Let $(M,I,J,K,g)$ be a hyperkähler manifold, $\dim_{\mathbb{R}} M =4n$. We study positive, $\partial$-closed $(2p,0)$-forms on $(M,I)$. These forms are quaternionic analogues of the positive $(p,p)$-forms, well-known in complex geometry. We construct a monomorphism $\mathcal{V}_{p,p}\colon \Lambda^{2p,0}_{I}(M)\to\Lambda^{n+p,n+p}_{I}(M)$, which maps $\partial$-closed $(2p,0)$-forms to closed $(n+p,n+p)$-forms, and positive $(2p,0)$-forms to positive $(n+p,n+p)$-forms. This construction is used to prove a hyperkähler version of the classical Skoda--El Mir theorem, which says that a trivial extension of a closed, positive current over a pluripolar set is again closed. We also prove the hyperkähler version of Sibony's lemma, showing that a closed, positive $(2p,0)$-form defined outside of a compact complex subvariety $Z\subset (M,I)$, $\mathrm{codim}\, Z > 2p$, is locally integrable in a neighbourhood of $Z$. These results are used to prove polystability of derived direct images of certain coherent sheaves.
Article information

Source: Osaka J. Math., Volume 47, Number 2 (2010), 353-384.
Dates: First available in Project Euclid: 23 June 2010
Permanent link to this document: https://projecteuclid.org/euclid.ojm/1277298909
Mathematical Reviews number (MathSciNet): MR2722365
Zentralblatt MATH identifier: 1196.32011

Citation
Verbitsky, Misha. Positive forms on hyperkähler manifolds. Osaka J. Math. 47 (2010), no. 2, 353--384. https://projecteuclid.org/euclid.ojm/1277298909 |
Let $TV$ denote the total variation semi-norm over domain $\Omega\subset \mathbb R^2$ which is open bounded with smooth boundary.
Let $\mathcal N$ denote the null space of $TV$; that is, a function belongs to $\mathcal N$ if and only if it is constant. Let $P$ denote the (orthogonal, say in $L^2(\Omega)$) projection operator onto $\mathcal N$.
My question: let a function $u\in L^\infty$ be given. Do we then have $$ P(u) = \frac1{|\Omega|}\int_\Omega u\,dx? $$ That is, does the projection give the average of $u$?
Thank you!
One of the chapters of the book-in-progress talks about neutrino detection, drawing heavily on a forthcoming book I was sent for blurb/review purposes (about which more later). One of the little quirks of the book is that the author regularly referred to physicists trying to "trap" neutrinos. It took me a while to realize that he just meant "detect"-- coming from the AMO community, I naturally assume that "trap" means "localize to a small-ish region of space for a long-ish period of time." That is, after all, what I spent my Ph.D. work doing-- trapping cold atoms.
SteelyKid had a rough morning today, so I'm not quite in the right frame of mind for editing this chapter (which is what I really ought to be doing), but I started to make the effort. And immediately got distracted thinking of the "trap" issue. In particular, I made a mention in the text of the several hundred relic neutrinos from the Big Bang believed to be in every cubic centimeter of the universe, phrased in a way that made it sound like they were just sitting there. Which got me wondering what it would take to get neutrinos just sitting still in some region of space.
Of course, a real answer to this question would require me to know a whole bunch of stuff about neutrino physics that I don't actually know. So in the spirit of students the world over confronted with an exam question they don't know how to answer, I decided to change the question to something I do know how to attack, namely an estimate of the size of the "trap" you would need to have a neutrino sitting more or less still.
This still seems like an impossible problem, but the key word there is "estimate." And as long as you don't want a hard number, I can draw on one of the famous equations that give this blog its name, the Heisenberg Uncertainty Principle:
$$\Delta x \, \Delta p \geq \frac{\hbar}{2}$$
This says that the product of the uncertainty in the momentum of a particle and the uncertainty in its position must be greater than or equal to a non-zero constant. Thus, it's impossible to know both of those to arbitrary precision.
The main importance of this is as a concept, rather than something to calculate with, but there is one sort of calculation it's frequently used for, which is to estimate the properties of a confined particle. If you know that some particle is confined to a region of width $latex \Delta x $, then you know that there must be some uncertainty in its momentum as well. That means you'll never be sure of finding a trapped particle just sitting still, but you can put a rough limit on the velocity it will have given a particular trapping region. And from that, you can say what the energy of the lowest trap state ought to be, give or take.
So, if we were to confine a neutrino to some region of space, "trapping" it in the AMO sense of the word, what would the velocity be? Because I'm lazy, we'll use the classical approximation for momentum as just mass times velocity (which isn't as bad as it might seem, since the goal is to have slow-moving neutrinos, here), and get
$$(m \Delta v)\, \Delta x \geq \frac{\hbar}{2}$$

$$v_{min} \approx \frac{\hbar}{2 m \Delta x}$$
So, the approximate speed of a trapped neutrino decreases with increasing mass and decreases as you increase the size of the trapping region. Of course, getting an actual number requires a value for the neutrino mass, which we don't know in an absolute sense. But this is a ballpark kind of calculation anyway, so we can just pick a value. If we say that our trapped neutrino has a mass of 1 eV/c^2 in the units that particle physicists use (a value that's probably way too big, but convenient), the various constants end up giving you a really simple relationship between the approximate velocity in m/s and the "trap" size in meters:
$$v_{min} \approx \frac{30}{\Delta x}$$
So, a 1 eV/c^2 neutrino trapped in a 1 m box would be moving at an approximate minimum speed of 30 m/s. That's really fast, actually: an electron trapped in the same size box would have a minimum uncertainty-derived speed of about 60 micrometers per second, half a million times smaller.
(As a sanity check, you can ask what this would predict for something like a BEC of atoms, which would be around 100,000 times heavier than an electron (ballpark), in a trap a micron on a side (ballpark), which gets you a minimum speed of about 0.6 mm/s, which is the right general range.)
So, what would it take to get neutrinos "just sitting there?" Well, it depends on your definition. My original phrasing mentioned a volume of one cubic centimeter. If you took that as the trap volume, your neutrinos would be moving at roughly 3000 m/s. If you want them at speeds comparable to the trapped laser-cooled atoms I'm used to, say 0.1 m/s, you would need a trap 300m on a side.
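The ballpark numbers in this post can be reproduced in a few lines (a sketch; the 1 eV/c^2 neutrino mass is the same deliberately-too-big value used above):

```python
HBAR = 1.054571817e-34       # reduced Planck constant, J*s
KG_PER_EV = 1.782661921e-36  # kg per (eV/c^2)

def v_min(mass_ev, dx_m):
    """Uncertainty-limited minimum speed (m/s) for a particle of mass
    mass_ev (in eV/c^2) confined to a region of size dx_m (metres)."""
    return HBAR / (2.0 * mass_ev * KG_PER_EV * dx_m)

print(v_min(1.0, 1.0))     # 1 eV neutrino in a 1 m box:  ~30 m/s
print(v_min(1.0, 0.01))    # same neutrino in a 1 cm box: ~3000 m/s
print(v_min(511e3, 1.0))   # electron in a 1 m box:       ~6e-5 m/s
```

The constant 30 in the simple formula above is just HBAR / (2 * KG_PER_EV), which is why the neutrino case works out so neatly.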
Of course, what you would make the walls of the trap of, in order to confine neutrinos to that volume, I have no idea. Given that you need a 100-m scale tank full of water, like the SuperK detector shown above in the "featured image," just to have a prayer of detecting a minuscule fraction of the vast number of neutrinos created in the Sun, I don't think we'll be actually trapping neutrinos any time soon...
How slow are neutrinos in the wild? My impression is that they are very close to c. Is there any way to get a neutrino to slow down?
If a neutrino has a mass of 1 eV/c^2 (which, again, is pretty high), then at 99% of the speed of light it would have a total energy of around 7 eV. The sorts of neutrinos produced in nuclear reactions in the Sun tend to have energies in the hundreds-of-keV to MeV range, so you're looking at a lot of nines after the decimal point in the speed as a fraction of the speed of light...
I assume that a neutrino that interacts with a material object could lose energy during the collision, and would thus slow down a bit. Given the tiny mass, the velocity change would be pretty negligible for realistic neutrino energies, but if you could somehow get a single neutrino to scatter many, many (manymanymanymany....) times, you could in principle get it to a very low energy and thus low speed. Of course, I don't know how you'd manage that, but "in principle" covers all sorts of bizarre crap...
I'm just a simple country molecular biologist, so bear with me. Were there neutrinos produced during the big bang? Are they subject to the same red shift caused by expansion that photons are, and if so, what kind of energies would they have now?
"I’m just a simple country molecular biologist, so bear with me. Were there neutrinos produced during the big bang? Are they subject to the same red shift caused by expansion that photons are, and if so, what kind of energies would they have now?"
EXCELLENT QUESTION!
As it turns out, they were! And they are! And the answer is "not very much at all".
Cosmic Microwave Background (CMB) photons are microwaves, with a spectrum peaking at a wavelength of about 1 mm. So CMB photons have energies of around 1 meV, a milli-electronvolt.
The Cosmic Neutrino Background (CNB) was released before the CMB, when the universe was about a second old. Once neutrinos decoupled from matter, they were free to propagate just like CMB photons were. So they have even less energy than the CMB.
We have some indirect evidence for the CNB, WMAP claimed they had it to three sigma, but while the CMB is easy to measure due to the high cross section of photons... the CNB can pass through pretty much the entire universe undisturbed.
(This is also why neutrinos are an excellent predictor for supernova. They are released before photons are, so in cataclysmic events, we should see a flux of high energy neutrinos before we see the light)
@Nick: Yes, there were neutrinos produced "during" the Big Bang. More specifically, there would have been neutrinos produced from Z0 decays in the first few microseconds, as well as neutrinos produced in the first hours from neutron decay (mean lifetime about 880 s).
These neutrinos, known as the "cosmological neutrino background" (CNB) are subject to the same redshift as photons.
Because they decouple from matter slightly earlier than photons, they are expected to be _colder_ than the CMB photons, about 1.95 K.
That temperature corresponds to a mean momentum of about 0.5 meV/c (the thermal average is roughly 3.15 kT, with kT ≈ 0.17 meV). For a 1 eV mass (which is too large, given current limits), the relic neutrinos are highly non-relativistic, with a velocity of roughly 5 x 10^-4 c, or about 160 km/s (v = p/m).
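For what it's worth, that estimate can be sketched in a few lines (my assumptions: relic temperature 1.95 K, the relativistic thermal average p ~ 3.15 kT, and the same too-heavy 1 eV/c^2 mass):

```python
K_B_EV = 8.617333e-5   # Boltzmann constant, eV per kelvin
C = 299792458.0        # speed of light, m/s

T_nu = 1.95            # K, relic neutrino temperature (assumed)
mass_ev = 1.0          # eV/c^2 (assumed, deliberately on the heavy side)

p_ev = 3.15 * K_B_EV * T_nu   # mean momentum in eV/c (thermal average)
v = (p_ev / mass_ev) * C      # non-relativistic limit: v = p/m
print(f"<p> ~ {p_ev * 1e3:.2f} meV/c, v ~ {v / 1e3:.0f} km/s")
```

A lighter neutrino would be proportionally faster, so the true relic speeds are probably well above this.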
Since the interaction probability (cross section) increases with energy you would have real problems finding walls for your trap at those low energies, indeed. That's the problem with detecting relic background neutrinos, which we know exist but cannot see :-)
Why do you assume a box?
Take something the mass of the earth, but not moving, just sitting there really, really quietly in intergalactic space. The escape velocity is 11,000 metres per second, so Chad's neutrinos would stick to the planet like glue. The planet could hold an essentially unlimited number of neutrinos (limited by some Pauli exclusion calculation that I don't think I want to do).
An object like a neutron star near the solar-mass limit has about what it takes to hold relic neutrinos trapped around it, out to roughly 1000 km, depending on the neutrino rest mass (assuming 1 eV here).
Ha, that's right! I actually never thought of that. Still, it's difficult to capture even low energy particles in bound orbits around (or inside) a heavy object, if it doesn't have repeated interactions. (Compare capture of dark matter particles.) But I'm sure someone has thought of this.
If I could send a detector with really high velocity through the neutrino cloud around a neutron star I could possibly even detect them. Of course that would be difficult in many other ways ...
Yes - you would need these low-energy neutrinos to have some kind of energy-robbing interaction in the neutron star, so that the orbits are not hyperbolic.
Yeah, I'm not a neutrino physicist, but if they start off far from the massive object then they have a lot of potential energy, so they'll be able to swing in close, speed up, then escape and slow down (without completely stopping) as they fly away. |
The previous two posts introduced the ideas of Meyer and Tschudin [11] involving the application and exploitation of chemical kinetic theory to flow management in computer networking. The first part introduced the ideas and gave an overview of the entire work, and the second part took a deeper look into the formal model of a packet chemistry. This section discusses the analysis options available once a packet chemistry model has been created.
This section can also be skipped by those less interested in the formal mathematics. Suffice it to say that a multitude of existing methods become available for the elegant analysis of computer networks once they are modeled by an artificial packet chemistry.

Formal Analysis of Artificial Packet Chemistry

By representing packet flow in a computer network as an artificial chemistry, a multitude of analyses become available, from high to low granularity. The authors give a brief survey (and a good bibliography) of works from the physics and chemistry literature that can be used to analyze these networks. A particular advantage of this method is the ability to study the transient states of the network rather than just steady states. The authors also claim the ability to determine the stability of the network flow from topology alone, a significant advantage in design.
Stochastic Analysis at the Microscopic Level
The stochastic behavior of chemical reaction networks is described by the chemical master equation[10] which takes the form
\frac{\text{d}\mathbf{P}}{\text{d}t} = \mathbf{A}\mathbf{P}
which is a differential equation describing the evolution of state probabilities for a system. Here the states are discrete, and time is continuous. The matrix \mathbf{A} describes the transition rates (which can also be kinetic or reaction rates), and the stochastic process described is a Markov jump process. Since we're on a network, the Markov jump process lives on an \mathcal{S}-dimensional integer lattice. Some work has been done to analyze several classes of chemical reaction networks and find the steady-state probability distribution of the state space. For example, if the total number of packets in the network is bounded and the network contains only first-order (unimolecular) reactions, the steady-state probability distribution for the lengths of the queues in the network is multinomial[3]. On the other hand, if the network is open (we allow packets to exit the network completely), then the steady-state probability distribution of the queue lengths is a product of Poisson distributions[3]. (This is an extremely desirable property, called a product-form.)
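As a toy illustration (my own minimal example, not from the paper): a single packet hopping between two queues at rates a and b is a two-state Markov jump process, and integrating the master equation drives the state probabilities to the stationary distribution:

```python
# Two-state master equation: state 1 -> 2 at rate a, 2 -> 1 at rate b.
# A = [[-a, b], [a, -b]]; the stationary distribution is (b, a) / (a + b).
a, b = 1.0, 3.0
p1, p2 = 1.0, 0.0      # packet starts surely in queue 1
dt = 1e-3

for _ in range(20_000):          # forward-Euler integration to t = 20
    d1 = -a * p1 + b * p2
    d2 = a * p1 - b * p2
    p1, p2 = p1 + d1 * dt, p2 + d2 * dt

print(p1, p2)   # converges to (0.75, 0.25)
```

Probability is conserved at every step (d1 + d2 = 0), and the deviation from the stationary distribution decays at rate a + b, mirroring the spectral structure of \mathbf{A}.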
Deterministic Approximations
This is the most common approach utilized in computer network analysis today, simply because networks are so large and complex that stochastic modeling becomes too cumbersome. Here, the average trajectory is represented by a system of ordinary differential equations, building a fluid model. One downside to this in the networking space is that analyzing a protocol this way requires manually extracting a model from source code, and the accuracy of that extraction is uncertain.
In the chemistry sector (and now in the packet chemistry model), obtaining a fluid approximation is not only easier, but shown to be accurate. There are links between the stochastic master equation and several approximations[5,6], including a deterministic ODE model. Gillespie[5] showed that the ODE model accurately predicts the network flow trajectory in many cases.
One thing the authors note here is that the ODE model can be generated directly and automatically from the network topology. For example, a single server with a single queue (M/M/1) is modeled as one chemical species X. The arrival rate (inflow) is \lambda, and the service rate is proportional to the queue length, so \mu = kx, where x is the queue length. Then we get a simple differential equation

\dot{x} = \lambda - kx

describing the change in queue length as the difference of inflow and outflow. In the steady state \dot{x} = 0, which gives the fixed point \hat{x} = \frac{\lambda}{k}. This is the steady-state queue length, and by Little's law the expected waiting time is T = \hat{x}/\lambda = \frac{1}{k}, showing that the latency of a packet under this model is independent of the arrival rate and fill level. A network implementing this model automatically adjusts the service rate so that in the steady state, every packet sees the same latency.
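This fluid model is easy to simulate (a sketch with arbitrary assumed rates); the queue relaxes to the fixed point \hat{x} = \lambda/k regardless of where it starts:

```python
# Forward-Euler integration of the M/M/1 fluid model dx/dt = lambda - k*x.
lam, k = 5.0, 2.0      # assumed arrival rate and service-rate constant
x, dt = 0.0, 1e-3      # start from an empty queue

for _ in range(20_000):        # integrate to t = 20
    x += (lam - k * x) * dt

print(x, lam / k)   # numerical steady state vs analytic fixed point 2.5
```

The approach to equilibrium is exponential with rate k, consistent with the latency T = 1/k being independent of the arrival rate.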
It's also important to determine just how stable this steady state is by analyzing the sensitivity of the network and its states to perturbations. The authors cite several works to show that no new approaches are needed here; one can look to the signal and control theory literature. In particular, a network designer would like to predict the stability of a complex network by studying its topology rather than analyzing the full system of ODEs. Fortunately, modeling a network this way allows the use of the Deficiency Zero Theorem for chemical reaction networks, which gives conditions for the stability of a steady state[2,7].
The authors give a formal convergence proof that the example network above converges to a stable fixed point and is asymptotically stable, comparing it to the proof of a similar protocol, Push-Sum (a gossip protocol in computer networks).

Continuation
The next post in this series will discuss Meyer and Tschudin’s implementation of a scheduler based on the principles discussed thus far.
References

1. Dittrich, P., Ziegler, J., and Banzhaf, W. Artificial chemistries – a review. Artificial Life 7 (2001), 225–275.
2. Feinberg, M. Complex balancing in general kinetic systems. Archive for Rational Mechanics and Analysis 49 (1972).
3. Gadgil, C., Lee, C., and Othmer, H. A stochastic analysis of first-order reaction networks. Bulletin of Mathematical Biology 67 (2005), 901–946.
4. Gibson, M., and Bruck, J. Efficient stochastic simulation of chemical systems with many species and many channels. Journal of Physical Chemistry 104 (2000), 1876–1889.
5. Gillespie, D. The chemical Langevin equation. Journal of Chemical Physics 113 (2000).
6. Gillespie, D. The chemical Langevin and Fokker-Planck equations for the reversible isomerization reaction. Journal of Physical Chemistry 106 (2002), 5063–5071.
7. Horn, F. On a connexion between stability and graphs in chemical kinetics. Proceedings of the Royal Society of London 334 (1973), 299–330.
8. Kamimura, K., Hoshino, H., and Shishikui, Y. Constant delay queuing for jitter-sensitive IPTV distribution on home network. IEEE Global Telecommunications Conference (2008).
9. Laidler, K. Chemical Kinetics. McGraw-Hill, 1950.
10. McQuarrie, D. Stochastic approach to chemical kinetics. Journal of Applied Probability 4 (1967), 413–478.
11. Meyer, T., and Tschudin, C. Flow management in packet networks through interacting queues and law-of-mass-action scheduling. Technical report, University of Basel.
12. Pocher, H. L., Leung, V., and Gilles, D. An application- and management-based approach to ATM scheduling. Telecommunication Systems 12 (1999), 103–122.
13. Tschudin, C. Fraglets – a metabolistic execution model for communication protocols. Proceedings of the 2nd Annual Symposium on Autonomous Intelligent Networks and Systems (2003).

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. |
We consider Real bundle gerbes on manifolds equipped with an involution and prove that they are classified by their Real Dixmier-Douady class in Grothendieck's equivariant sheaf cohomology. We show that the Grothendieck group of Real bundle gerbe modules is isomorphic to twisted KR-theory for a torsion Real Dixmier-Douady class. Using these modules as building blocks, we introduce geometric cycles for twisted KR-homology and prove that they generate a real-oriented generalised homology theory dual to twisted KR-theory for Real closed manifolds, and more generally for Real finite CW-complexes, for any Real Dixmier-Douady class. This is achieved by defining an explicit natural transformation to analytic twisted KR-homology and proving that it is an isomorphism. Our model both refines and extends previous results by Wang and Baum-Carey-Wang to the Real setting. Our constructions further provide a new framework for the classification of orientifolds in string theory, providing precise conditions for orientifold lifts of H-fluxes and for orientifold projections of open string states.
We define a Fréchet-Lie groupoid Map(S^1,X) of anafunctors from the circle into a Lie groupoid X. This provides a presentation of the Hom-stack Hom(S^1,\cX), where \cX is the differentiable stack associated to X. We apply this construction to the Lie groupoid underlying a bundle gerbe on a manifold M; the result is a bundle gerbe on the loop space LM of M.
For a compact manifold M and a differentiable stack \cX presented by a Lie groupoid X, we show the Hom-stack Hom(M,\cX) is presented by a Fréchet-Lie groupoid Map(M,X) and so is an infinite-dimensional differentiable stack. We further show that if \cX is an orbifold, presented by a proper étale Lie groupoid, then Map(M,X) is proper étale and so presents an infinite-dimensional orbifold.
We develop the theory of simplicial extensions for bundle gerbes and their characteristic classes with a view towards studying descent problems and equivariance for bundle gerbes. Equivariant bundle gerbes are important in the study of orbifold sigma models. We consider in detail two examples: the basic bundle gerbe on a unitary group and a string structure for a principal bundle. We show that the basic bundle gerbe is equivariant for the conjugation action and calculate its characteristic class; we show also that a string structure gives rise to a bundle gerbe which is equivariant for a natural action of the String 2-group.
Odd $K$-theory has the interesting property that it admits an infinite number of inequivalent differential refinements. In this paper we provide a bundle-theoretic model for odd differential $K$-theory using the caloron correspondence and prove that this refinement is unique up to a unique natural isomorphism. We characterise the odd Chern character and its transgression form in terms of a connection and Higgs field and discuss some applications. Our model can be seen as the odd counterpart to the Simons-Sullivan construction of even differential $K$-theory. We use this model to prove a conjecture of Tradler-Wilson-Zeinalian regarding a related differential extension of odd $K$-theory.
In gauge theory, the Faddeev-Mickelsson-Shatashvili anomaly arises as a prolongation problem for the action of the gauge group on a bundle of projective Fock spaces. In this paper, we study this anomaly from the point of view of bundle gerbes and give several equivalent descriptions of the obstruction. These include lifting bundle gerbes with non-trivial structure group bundle and bundle gerbes related to the caloron correspondence.
We outline in detail the general caloron correspondence for the group of automorphisms of an arbitrary principal $G$-bundle $Q$ over a manifold $X$, including the case of the gauge group of $Q$. These results are used to define characteristic classes of gauge group bundles. Explicit but complicated differential form representatives are computed in terms of a connection and Higgs field.
We give a classifying theory for $LG$-bundles, where $LG$ is the loop group of a compact Lie group $G$, and present a calculation for the string class of the universal $LG$-bundle. We show that this class is in fact an equivariant cohomology class and give an equivariant differential form representing it. We then use the caloron correspondence to define (higher) characteristic classes for $LG$-bundles and to prove for the free loop group an analogue of the result for characteristic classes for based loop groups in Murray-Vozzo (J. Geom. Phys., 60(9), 2010). These classes have a natural interpretation in equivariant cohomology and we give equivariant differential form representatives for the universal case in all odd dimensions.
The caloron correspondence can be understood as an equivalence of categories between $G$-bundles over circle bundles and $LG \rtimes_\rho S^1$-bundles, where $LG$ is the group of smooth loops in $G$. We use it, and lifting bundle gerbes, to derive an explicit differential-form-based formula for the (real) string class of an $LG \rtimes_\rho S^1$-bundle.
We review the caloron correspondence between $G$-bundles on $M \times S^1$ and $\Omega G$-bundles on $M$, where $\Omega G$ is the space of smooth loops in the compact Lie group $G$. We use the caloron correspondence to define characteristic classes for $\Omega G$-bundles, called string classes, by transgression of characteristic classes of $G$-bundles. These generalise the string class of Killingback to higher dimensional cohomology. |
I will try to give an answer to my question, which is basically an extension of the last paragraph of the question and of the comment of Ryan Thorngren.
I will limit myself to a one-loop analysis. To this order, the RG flow is the "Ricci flow", $\frac{dg_{ij}}{dt} = - R_{ij}$, where $R_{ij}$ is the Ricci curvature and the variable $t$ is something like minus the log of the energy scale. I choose this variable only for convenience: the direction of the RG flow from the UV to the IR is the same as the positive direction of the "time" $t$.
This is true for any $X$.
For a metric with positive Ricci curvature, the manifold $X$ is large in the UV, the sigma model is asymptotically free, $X$ shrinks under the RG flow and becomes strongly coupled in the IR. The perturbative sigma model description breaks down when $X$ becomes of size of order $\sqrt{\alpha'}$.
For a metric with negative Ricci curvature, the manifold $X$ is large in the IR, and the sigma model is a good description there. It is strongly coupled in the UV, and it is not clear whether the theory exists in the UV.
For a metric with zero Ricci curvature, we have a fixed point of the RG flow and the sigma model defines a well-defined CFT.
My question can be reformulated as: what happens for a metric which is a small perturbation of a Ricci flat metric, small in particular in the sense of having the same Kähler class. Naively, it is not clear what happens, because the Ricci curvature of such a metric is neither positive nor negative: it is not of fixed sign, and fluctuates in both signs around zero. This apparent difficulty was basically the reason for my question. So now the question is: how do small fluctuations of the metric evolve under the RG flow? Are they smoothed out, or are they amplified?
I think that the key point is the remark that $R_{ij}$ is roughly (in appropriate coordinates and up to non-linear terms) minus the Laplacian of $g$. Finding the Laplacian is not surprising, because the Ricci curvature is by definition roughly the trace of second derivatives of the metric. The key point is the minus sign. It means that, up to non-linear terms, the RG flow is roughly $\frac{dg_{ij}}{dt} = \Delta g_{ij}$, i.e. the heat equation for the metric. This implies that under the RG flow, the fluctuations will be smoothed out.
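To illustrate the smoothing claim (a toy numerical sketch of my own, not part of the original answer), explicit finite-difference heat flow on a periodic 1-D grid damps a short-wavelength "fluctuation" almost completely while preserving its mean, which is the mechanism invoked above:

```python
import math

# Explicit heat step u_t = u_xx on a periodic 1-D grid.
# nu = dt/dx^2 = 0.2 satisfies the stability condition nu <= 1/2.
def heat_step(u, nu=0.2):
    n = len(u)
    return [u[i] + nu * (u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n])
            for i in range(n)]

n = 64
# A stand-in for a high-frequency metric fluctuation around zero.
u = [math.sin(2 * math.pi * 8 * i / n) for i in range(n)]
amp0 = max(abs(v) for v in u)
for _ in range(200):
    u = heat_step(u)
amp = max(abs(v) for v in u)
print(amp0, amp)  # the amplitude shrinks by many orders of magnitude
```

Running the flow backwards (toward the UV) would instead amplify exactly these modes, which is the time-reversed heat equation mentioned below.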
So a small fluctuation of the Ricci flat metric will flow in the IR to the Ricci flat fixed point. In the given Kähler class, the Ricci flat fixed point is the only fixed point, and it is attractive: all trajectories converge to it in the IR.
Toward the UV, the RG flow has exactly the opposite behavior: if one tries to go to the UV, the fluctuations are amplified (as for a heat equation with time reversed). If we begin with a random fluctuation and go to the UV, the total size of $X$ does not change (the Kähler class is unchanged), but the metric on $X$ will fluctuate more and more drastically and apparently chaotically. The perturbative sigma model description breaks down when the typical size of the fluctuations becomes of order $\sqrt{\alpha'}$, and it is not clear whether there is any definition of the theory in the UV. |
And I think people said that reading first chapter of Do Carmo mostly fixed the problems in that regard. The only person I asked about the second pset said that his main difficulty was in solving the ODEs
Yeah here there's the double whammy in grad school that every grad student has to take the full year of algebra/analysis/topology, while a number of them already don't care much for some subset, and then they only have to pass, rather than ace, the class
I know 2 years ago apparently it mostly avoided commutative algebra, half because the professor himself doesn't seem to like it that much and half because he was like yeah the algebraists all place out so I'm assuming everyone here is an analyst and doesn't care about commutative algebra
Then the year after another guy taught and made it mostly commutative algebra + a bit of varieties + Cech cohomology at the end from nowhere and everyone was like uhhh. Then apparently this year was more of an experiment, in part from requests to make things more geometric
It's got 3 "underground" floors (quotation marks because the place is on a very tall hill so the first 3 floors are a good bit above the street), and then 9 floors above ground. The grad lounge is in the top floor and overlooks the city and lake, it's real nice
The basement floors have the library and all the classrooms (each of them has a lot more area than the higher ones), floor 1 is basically just the entrance, I'm not sure what's on the second floor, 3-8 is all offices, and 9 has the grad lounge mainly
And then there's one weird area called the math bunker that's trickier to access, you have to leave the building from the first floor, head outside (still walking on the roof of the basement floors), go to this other structure, and then get in. Some number of grad student cubicles are there (other grad students get offices in the main building)
It's hard to get a feel for which places are good at undergrad math. Highly ranked places are known for having good researchers but there's no "How well does this place teach?" ranking which is kinda more relevant if you're an undergrad
I think interest might have started the trend, though it is true that grad admissions now is starting to make it closer to an expectation (friends of mine say that for experimental physics, classes and all definitely don't cut it anymore)
In math I don't have a clear picture. It seems there are a lot of Mickey Mouse projects that don't seem to help people much, but more and more people seem to do more serious things and that seems to become a bonus
One of my professors said it to describe a bunch of REUs, basically boils down to problems that some of these give their students which nobody really cares about but which undergrads could work on and get a paper out of
@TedShifrin i think universities have been ostensibly a game of credentialism for a long time, they just used to be gated off to a lot more people than they are now (see: ppl from backgrounds like mine) and now that budgets shrink to nothing (while administrative costs balloon) the problem gets harder and harder for students
In order to show that $x=0$ is asymptotically stable, one needs to show that $$\forall \varepsilon > 0, \; \exists\, T > 0 \; \mathrm{s.t.} \; t > T \implies || x ( t ) - 0 || < \varepsilon.$$The intuitive sketch of the proof is that one has to fit a sublevel set of continuous functions $...
"If $U$ is a domain in $\Bbb C$ and $K$ is a compact subset of $U$, then for all holomorphic functions on $U$, we have $\sup_{z \in K}|f(z)| \leq C_K \|f\|_{L^2(U)}$ with $C_K$ depending only on $K$ and $U$" this took me way longer than it should have
Well, $A$ has these two distinct eigenvalues meaning that $A$ can be diagonalised to a diagonal matrix with these two values as its diagonal. What will that mean when multiplied with a given vector (x,y), and how will the magnitude of that vector change?
Alternately, compute the operator norm of $A$ and see if it is larger or smaller than 2, 1/2
Generally speaking, given $\alpha=a+b\sqrt{\delta}$, $\beta=c+d\sqrt{\delta}$, we have that multiplication (which I am writing as $\otimes$) is $\alpha\otimes\beta=(a\cdot c+b\cdot d\cdot\delta)+(b\cdot c+a\cdot d)\sqrt{\delta}$
Yep, the reason I am exploring alternative routes of showing associativity is because writing out three elements worth of variables is taking up more than a single line in Latex, and that is really bugging my desire to keep things straight.
hmm... I wonder if you can argue about the rationals forming a ring (hence using commutativity, associativity and distributivitity). You cannot do that for the field you are calculating, but you might be able to take shortcuts by using the multiplication rule and then properties of the ring $\Bbb{Q}$
for example writing $x = ac+bd\delta$ and $y = bc+ad$ we then have $(\alpha \otimes \beta) \otimes \gamma = (xe +yf\delta) + (ye + xf)\sqrt{\delta}$ and then you can argue with the ring property of $\Bbb{Q}$ thus allowing you to deduce $\alpha \otimes (\beta \otimes \gamma)$
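If it helps, here's a quick brute-force sanity check of that multiplication rule (my own sketch, not a proof; the choice $\delta = 5$ and the grid of test values are mine) using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import product

# Brute-force check (not a proof): the rule
# (a + b*sqrt(d)) * (c + e*sqrt(d)) = (ac + be*d) + (bc + ae)*sqrt(d)
# is associative on a grid of rational test values.

DELTA = Fraction(5)  # illustrative non-square delta

def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c + b * d * DELTA, b * c + a * d)

vals = [Fraction(v) for v in (-2, 0, 1, 3)]
elems = list(product(vals, repeat=2))
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x in elems for y in elems for z in elems)
print("associativity holds on all", len(elems) ** 3, "triples")
```

Of course this only spot-checks; the ring-of-rationals argument above is what actually closes the proof.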
I feel like there's a vague consensus that an arithmetic statement is "provable" if and only if ZFC proves it. But I wonder what makes ZFC so great, that it's the standard working theory by which we judge everything.
I'm not sure if I'm making any sense. Let me know if I should either clarify what I mean or shut up. :D
Associativity proofs in general have no shortcuts for arbitrary algebraic systems, that is why non associative algebras are more complicated and need things like Lie algebra machineries and morphisms to make sense of
One aspect, which I will illustrate, of the "push-button" efficacy of Isabelle/HOL is its automation of the classic "diagonalization" argument by Cantor (recall that this states that there is no surjection from the naturals to its power set, or more generally any set to its power set).theorem ...
The axiom of triviality is also used extensively in computer verification languages... take Cantor's Diagonalization theorem. It is obvious.
(but seriously, the best tactic is over powered...)
Extension is such a powerful idea. I wonder if there exists an algebraic structure such that any extension of it will produce a contradiction. O wait, there are maximal algebraic structures such that, given some ordering, they are the largest possible, e.g. the surreals are the largest field possible
It says on Wikipedia that any ordered field can be embedded in the Surreal number system. Is this true? How is it done, or if it is unknown (or unknowable) what is the proof that an embedding exists for any ordered field?
Here's a question for you: We know that no set of axioms will ever decide all statements, from Gödel's Incompleteness Theorems. However, do there exist statements that cannot be decided by any set of axioms except ones which contain one or more axioms dealing directly with that particular statement?
"Infinity exists" comes to mind as a potential candidate statement.
Well, take ZFC as an example, CH is independent of ZFC, meaning you cannot prove nor disprove CH using anything from ZFC. However, there are many equivalent axioms to CH or derives CH, thus if your set of axioms contain those, then you can decide the truth value of CH in that system
@Rithaniel That is really the crux of those rambles about infinity I made in this chat some weeks ago. I wondered whether one could show that is false by finding a finite sentence and procedure that can produce infinity
but so far failed
Put it in another way, an equivalent formulation of that (possibly open) problem is:
> Does there exists a computable proof verifier P such that the axiom of infinity becomes a theorem without assuming the existence of any infinite object?
If you were to show that you can attain infinity from finite things, you'd have a bombshell on your hands. It's widely accepted that you can't. In fact, I believe there are some proofs floating around that you can't attain infinity from the finite.
My philosophy of infinity however is not good enough as implicitly pointed out when many users who engaged with my rambles always managed to find counterexamples that escape every definition of an infinite object I proposed, which is why you don't see my rambles about infinity in recent days, until I finish reading that philosophy of infinity book
The knapsack problem or rucksack problem is a problem in combinatorial optimization: Given a set of items, each with a weight and a value, determine the number of each item to include in a collection so that the total weight is less than or equal to a given limit and the total value is as large as possible. It derives its name from the problem faced by someone who is constrained by a fixed-size knapsack and must fill it with the most valuable items.The problem often arises in resource allocation where there are financial constraints and is studied in fields such as combinatorics, computer science...
O great, given a transcendental $s$, computing $\min_P(|P(s)|)$ is a knapsack problem
hmm...
By the fundamental theorem of algebra, every complex polynomial $P$ can be expressed as:
$$P(x) = \prod_{k=0}^n (x - \lambda_k)$$
If the coefficients of $P$ are natural numbers, then all $\lambda_k$ are algebraic
Thus given $s$ transcendental, to minimise $|P(s)|$ will be given as follows:
The first thing I think of with that particular one is to replace the $(1+z^2)$ with $z^2$. Though, this is just at a cursory glance, so it would be worth checking to make sure that such a replacement doesn't have any ugly corner cases.
In number theory, a Liouville number is a real number $x$ with the property that, for every positive integer $n$, there exist integers $p$ and $q$ with $q > 1$ such that $0<\left|x-\frac{p}{q}\right|<\frac{1}{q^n}$.
Do these still exist if the axiom of infinity is blown up?
Hmmm...
Under a finitist framework where only potential infinity in the form of natural induction exists, define the partial sum:
$$\sum_{k=1}^M \frac{1}{b^{k!}}$$
The resulting partial sums for each $M$ form a monotonically increasing sequence, which converges by the ratio test
therefore, by induction, there exists some number $L$ that is the limit of the above partial sums. The proof of transcendence can then proceed as usual; thus transcendental numbers can be constructed in a finitist framework
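As a side sketch (mine, not from the chat; I take $b=10$ and the function names are made up), the partial sums can be computed exactly with rationals, and successive sums agree to super-exponentially many digits, which is the Cauchy behaviour the induction relies on:

```python
from fractions import Fraction
from math import factorial

# Exact rational partial sums of sum_{k=1}^{M} b^(-k!) with b = 10.
# Each new term shrinks super-exponentially, so the finitist can compute
# the limit L to any desired accuracy without a completed infinite set.

def partial_sum(M, b=10):
    return sum(Fraction(1, b ** factorial(k)) for k in range(1, M + 1))

s3, s4 = partial_sum(3), partial_sum(4)
print(float(s3))       # 0.110001
print(float(s4 - s3))  # the 4th term alone: 10^(-24)
```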
There's this theorem in Spivak's book of Calculus:Theorem 7Suppose that $f$ is continuous at $a$, and that $f'(x)$ exists for all $x$ in some interval containing $a$, except perhaps for $x=a$. Suppose, moreover, that $\lim_{x \to a} f'(x)$ exists. Then $f'(a)$ also exists, and$$f'...
and neither Rolle nor mean value theorem need the axiom of choice
Thus under finitism, we can construct at least one transcendental number. If we throw away all transcendental functions, it means we can construct a number that cannot be reached from any algebraic procedure
Therefore, the conjecture is that actual infinity has a close relationship to transcendental numbers. For anything else, I need to finish that book before commenting
typo: neither Rolle nor mean value theorem need the axiom of choice nor an infinite set
Sums of reciprocals of fractional parts and multiplicative Diophantine approximation. / Beresnevich, Victor; Haynes, Alan; Velani, Sanju.
In: Memoirs of the American Mathematical Society, 02.04.2017. ISSN 0065-9266.
Research output: Contribution to journal › Article

Note: This is an author-produced version of the published paper. Uploaded in accordance with the publisher’s self-archiving policy. Further copying may not be permitted; contact the publisher for details.

Abstract: There are two main interrelated goals of this paper. Firstly we investigate the sums $S_N(\alpha,\gamma):=\sum_{n=1}^N\frac{1}{n\|n\alpha-\gamma\|}$ and $R_N(\alpha,\gamma):=\sum_{n=1}^N\frac{1}{\|n\alpha-\gamma\|}$, where $\alpha$ and $\gamma$ are real parameters and $\|\cdot\|$ is the distance to the nearest integer. Our theorems improve upon previous results of W. M. Schmidt and others, and are (up to constants) best possible. Related to the above sums, we also obtain upper and lower bounds for the cardinality of $\{1\le n\le N:\|n\alpha-\gamma\|<\varepsilon\}$, valid for all sufficiently large $N$ and all sufficiently small $\varepsilon$. This first strand of the work is motivated by applications to multiplicative Diophantine approximation, which are also considered. In particular, we obtain complete Khintchine-type results for multiplicative simultaneous Diophantine approximation on fibers in $\mathbb{R}^2$. The divergence result is the first of its kind and represents an attempt at developing the concept of ubiquity in the multiplicative setting. |
In the first part we explored the statistical model underlying the machine learning problem, and used it to formalize the problem in terms of obtaining the minimum generalization error. By noting that we cannot directly evaluate the generalization error of an ML model, we continued in the second part by establishing a theory that relates this elusive generalization error to another error metric that we can actually evaluate, which is the empirical error. Our final result was that:
That is: the generalization error (or risk) $R(h)$ is bounded by the empirical risk (or training error) plus a term that grows with the complexity (or richness) of the hypothesis space $|\mathcal{H}|$ and the degree of certainty $1 - \delta$ about the bound, and shrinks with the dataset size $N$. We can simplify that bound even more by assuming that we have a fixed dataset (which is the typical case in most practical ML problems), so that for a specific degree of certainty we have:
Starting from this part, and based on this simplified theoretical result, we’ll begin to draw some practical concepts for the process of solving the ML problem. We’ll start by trying to get more intuition about why a more complex hypothesis space is bad.
Why are rich hypotheses bad?
To make things a little bit concrete and to be able to visualize what we’re talking about, we’ll be using the help of a simulated dataset, which is a useful tool often used to demonstrate concepts where we might need to draw multiple instances of the dataset from the same distribution; something that cannot be effectively done with real datasets. In a simulated dataset we define our own target function and use that function, through the help of a computer program, to draw as many datasets as we want from the distribution it describes.
In the following discussion we’re going to sample $x$ uniformly from the interval $[-1,1]$ and use a one-dimensional target function $f(x) = \sin(x)$ which generates a noisy response (as we discussed in the first part) $y = f(x) + \zeta$, where $\zeta$ is a random noise drawn from a zero-mean distribution, in our case a Gaussian distribution with a standard deviation of 2.
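Such a simulated dataset can be sketched in a few lines (the function and parameter names here are mine, chosen to match the description above):

```python
import math
import random

# Simulated dataset: x ~ Uniform[-1, 1], y = sin(x) + zeta, zeta ~ N(0, 2).
def make_dataset(n=200, noise_sd=2.0, rng=None):
    rng = rng or random.Random(0)  # seeded so datasets are reproducible
    xs = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    ys = [math.sin(x) + rng.gauss(0.0, noise_sd) for x in xs]
    return xs, ys

xs, ys = make_dataset()
print(len(xs), min(xs), max(xs))
```

Calling `make_dataset` repeatedly with fresh `random.Random` instances yields the independent datasets used in the animation below.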
Recall that when we train an ML model on a dataset, we are trying to find the relation between the predictor features $x$ and the response $y$, so ideally we need the hypothesis to account for the noise as little as possible; noise by definition has no explanatory value whatsoever, and accommodating noise will skew the model away from the true target, resulting in poor performance on future data, hence poor generalization. In order to understand the problem with rich hypotheses, we’ll investigate how well hypotheses of different complexities adhere to this criterion.
In the following animation we train a linear, a cubic and a tenth degree polynomial hypothesis, each on 100 different simulated datasets of 200 points (only 20 are shown) drawn from the distribution described above. Each of these models is drawn with a light-blue line, the average of each hypothesis is shown with the darker blue line, while the true target is shown by the dashed black line. The offset of the points from the true target curve is an indicator of the noise; if there weren’t any noise, the points would lie on the dashed black curve. So the further a point is from the true target curve, the noisier it is.
The first thing we notice from the animation is that the richer and more complex the hypothesis gets, the smaller its difference from the true target becomes on average. That difference between the estimator’s mean (the hypothesis) and the value it’s trying to estimate (the target) is referred to in statistics as the bias:

$$\text{bias}(x) = \overline{h}(x) - f(x)$$
Where $\overline{h}(x)$ is the mean of different hypotheses generated from training the model on different datasets, i.e. $\overline{h}(x) = \mathop{\mathbb{E}}_{\mathcal{D}}\left[h^{(\mathcal{D})}(x)\right]$, where $h^{(\mathcal{D})}(x)$ indicates a hypothesis generated by training on the dataset $\mathcal{D}$.
In English, the word “bias” commonly implies some kind of inclination or prejudice towards something. Analogously, the bias in a statistical estimator can be interpreted as the estimator favoring some specific direction or component of the target distribution over other major components. To make this interpretation concrete, let’s take a look at the Taylor expansion of our target function. If you’re not familiar with the concept of a Taylor expansion, you can think of it as a method to write a function as an infinite sum of simpler functions; for the sake of our discussion here, you can consider such simpler functions as components:

$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$$
It’s obvious from the increasing value of the denominator that each higher component contributes very little to the value of the function, which makes the higher components minor and unimportant.
The high bias of the linear model can now be interpreted as the linear hypothesis function $h(x) = w_1x + w_0$ favoring the $x$ component of the target over the other major component $\frac{x^3}{3!}$. By the same logic, the seemingly low bias of the cubic model can be explained by the fact that the cubic hypothesis function $h(x) = w_1x + w_2x^2+w_3x^3 + w_0$ includes on average both major components of the target without favoring either over the other. The slight decrease in bias introduced by the tenth degree model can also be explained by the fact that a tenth degree polynomial includes the remaining minor components, which do not contribute much to the value.
It’s simple to see that the closer the hypothesis gets to the target on average, the smaller its average loss from the target value becomes. This means that a low-bias hypothesis results in low empirical risk, which makes low-bias models desirable; and since rich models have the lowest bias, what makes them so bad then?
The answer to that question lies in the second thing we notice in the animation: the richer the hypothesis gets, the greater its ability to extend its reach and grab the noise becomes. Go back to the animation and see how the linear model cannot reach the noisy points that lie directly above the peaks of the target graph, then notice how the cubic model can reach these but remains unable to reach those at the top of the frame, and finally see how the tenth degree model can reach even those on the top. In such situations, we say that the hypothesis is overfitting the data by including the noise.
This overfitting behavior can be quantified by noticing how tightly the linear hypothesis realizations (the light-blue curves) are packed around their mean (the darker blue curve) compared to the messy fiasco the tenth degree model is making around its mean. This shows that the more the hypothesis overfits, the more widely its possible realizations are spread around its mean, which is precisely the definition of variance! So how much the hypothesis overfits can be quantified by its variance around its mean:

$$\mathrm{Var}\left[h^{\mathcal{(D)}}(x)\right] = \mathbb{E}_{\mathcal{D}}\left[\left(h^{\mathcal{(D)}}(x) - \overline{h}(x)\right)^2\right]$$
Obviously, a high-variance model is not desirable because, as we mentioned before, we don’t want to accommodate the noise. And since rich models have the highest variance, this is what makes them so bad and why they are penalized in the generalization bound.
The Bias-variance Decomposition
Let’s take a closer look at the mess the tenth degree model made in its plot:
Since $h^{\mathcal{(D)}}(x)$ changes as $\mathcal{D}$ changes (as it’s randomly sampled each time), we can consider $h^{\mathcal{(D)}}(x)$ a random variable of which the concrete hypotheses are realizations. Leveraging a trick similar to the one we used in the first part, we can decompose that random variable into two components: a deterministic component that represents its mean and a random one that purely represents the variance:

$$h^{\mathcal{(D)}}(x) = \overline{h}(x) + H^{\mathcal{(D)}}_{\sigma}(x)$$

where $H^{\mathcal{(D)}}_{\sigma}(x)$ is a random variable with zero mean and a variance equal to the variance of the hypothesis, that is:

$$\mathbb{E}_{\mathcal{D}}\left[H^{\mathcal{(D)}}_{\sigma}(x)\right] = 0, \quad \mathrm{Var}\left[H^{\mathcal{(D)}}_{\sigma}(x)\right] = \mathrm{Var}\left[h^{\mathcal{(D)}}(x)\right]$$

So some realization of $h^{\mathcal{(D)}}(x)$, such as $\widetilde{h}(x)$ (the red curve in the above plot), can be written as $\widetilde{h}(x) = \overline{h}(x) + h_{\sigma}^{\mathcal{(D)}}(x)$, where $h_{\sigma}^{\mathcal{(D)}}(x)$ is a realization of $H_{\sigma}^{\mathcal{(D)}}(x)$.
Using the squared-difference loss function (which is a very generic loss measure) $L(\hat{y},y) = (\hat{y} - y)^2$, we can write the risk at some specific data point $x$ as:

$$R(x) = \mathbb{E}_{\mathcal{D}}\left[\left(h^{\mathcal{(D)}}(x) - f(x)\right)^2\right]$$

Here we replaced the expectation over $(x,y) \sim P(X,Y)$ by an expectation over the dataset $\mathcal{D}$, as the points $(x,y)$ distributed by $P(X,Y)$ are essentially the members of the dataset. Using the decomposition of $h^{\mathcal{(D)}}(x)$ we made earlier, we can say that:

$$R(x) = \mathbb{E}_{\mathcal{D}}\left[\left(\overline{h}(x) + H^{\mathcal{(D)}}_{\sigma}(x) - f(x)\right)^2\right]$$

By the linearity of the expectation and the fact that the bias $\overline{h}(x) - f(x)$ does not depend on $\mathcal{D}$, we can write the previous equation as:

$$R(x) = \left(\overline{h}(x) - f(x)\right)^2 + 2\left(\overline{h}(x) - f(x)\right)\mathbb{E}_{\mathcal{D}}\left[H^{\mathcal{(D)}}_{\sigma}(x)\right] + \mathbb{E}_{\mathcal{D}}\left[H^{\mathcal{(D)}}_{\sigma}(x)^2\right]$$

By recalling that the mean of $H^{\mathcal{(D)}}_{\sigma}(x)$ is $0$, and by noticing that:

$$\mathbb{E}_{\mathcal{D}}\left[H^{\mathcal{(D)}}_{\sigma}(x)^2\right] = \mathrm{Var}\left[H^{\mathcal{(D)}}_{\sigma}(x)\right] = \mathrm{Var}\left[h^{\mathcal{(D)}}(x)\right]$$

we can say that:

$$R(x) = \underbrace{\left(\overline{h}(x) - f(x)\right)^2}_{\text{bias}^2} + \underbrace{\mathrm{Var}\left[h^{\mathcal{(D)}}(x)\right]}_{\text{variance}}$$

Now, averaging over all the data points in every possible dataset $\mathcal{D}$, the risk is:

$$R(h) = \mathbb{E}_{x}\left[\left(\overline{h}(x) - f(x)\right)^2\right] + \mathbb{E}_{x}\left[\mathrm{Var}\left[h^{\mathcal{(D)}}(x)\right]\right]$$
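The pointwise decomposition can be checked numerically with a quick simulation. This sketch assumes an illustrative setup (target $f = \sin$, Gaussian label noise, datasets of 20 points, a cubic polynomial hypothesis fit by least squares), not the post’s exact simulation:

```python
import numpy as np

# Monte-Carlo check of R(x) = bias^2 + variance at a single point x0.
# Assumed setup for illustration: f = sin, Gaussian noise, cubic polyfit.
rng = np.random.default_rng(0)
f = np.sin
x0 = 0.5                                   # the point at which we evaluate R(x)
preds = []
for _ in range(2000):                      # many independently sampled datasets D
    x = rng.uniform(-3, 3, 20)
    y = f(x) + rng.normal(0.0, 0.3, 20)
    w = np.polyfit(x, y, 3)                # h^(D): cubic fit to this dataset
    preds.append(np.polyval(w, x0))
preds = np.array(preds)

risk = np.mean((preds - f(x0)) ** 2)       # E_D[(h^(D)(x0) - f(x0))^2]
bias2 = (preds.mean() - f(x0)) ** 2        # (h_bar(x0) - f(x0))^2
var = preds.var()                          # Var[h^(D)(x0)]
print(risk, bias2 + var)                   # equal up to floating-point error
```

The equality here is exact by algebra, since the sample mean and (population) sample variance satisfy the same identity as their distributional counterparts.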
This shows that the generalization error decomposes nicely into the bias and the variance of the model, and by comparing this decomposition to our generalization inequality we can see the relation between the bias and the empirical risk, and between the variance and the complexity term. The decomposition also shows how the generalization error will be high even if the model has low bias, due to its high variance, and how it will remain high when using a low-variance model, due to its high bias and high training error. This is the origin story of the Bias-variance Trade-off: our constant need to find the sweet spot with the right balance between bias and variance.

A Little Exercise
I cheated a little bit earlier when I defined the risk at a point as $\mathbb{E}_{\mathcal{D}}\left[\left(h^{\mathcal{(D)}}(x) - f(x)\right)^2\right]$, because the correct definition should measure the loss from the label $y$ (which is the available piece of information), not the target $f(x)$. Try to decompose the correct risk definition $\mathbb{E}_{\mathcal{D}}\left[\left(h^{\mathcal{(D)}}(x) - y\right)^2\right]$ and see how it differs from the result we just got. View your results in light of what we claimed back in part 1, where we said:

“We’ll later see how by this simplification [abstracting the model by a target function and noise] we revealed the first source of error in our eventual solution”

and see how it relates to your result.

Taming the Rich
Let’s investigate this overfitting behavior further, this time not by looking at how the different hypotheses are spread out, but by looking at individual hypotheses themselves. Let’s take the red-curve hypothesis $\widetilde{h}(x)$ from the recent plot and look at the coefficients of its polynomial terms, especially those that also appear in the Taylor expansion of the target function. For that particular hypothesis we find that:
- its $x$ coefficient is about $3.9$, as opposed to $1$ in the target’s Taylor expansion;
- its $x^3$ coefficient is about $-5.4$, as opposed to $-\frac{1}{3!} \approx -0.17$;
- its $x^5$ coefficient is about $22.7$, as opposed to $\frac{1}{5!} \approx 8.3 \times 10^{-3}$;
- its $x^7$ coefficient is about $-53.1$, as opposed to $-\frac{1}{7!} \approx -2.0 \times 10^{-4}$;
- its $x^9$ coefficient is about $33.0$, as opposed to $\frac{1}{9!} \approx 2.8 \times 10^{-6}$.
It turns out that the hypothesis drastically overestimates its coefficients; they are much larger than they’re supposed to be. This overestimation is the reason behind the hypothesis’s ability to reach beyond the target mapping $x \mapsto f(x)$ and grab the noise as well. So this gives us another way to quantify the overfitting behavior: the magnitude of the hypothesis’s parameters or coefficients; the bigger this magnitude is, the more the hypothesis will overfit. It also gives us a way to prevent a hypothesis from overfitting: we can force it to have parameters of small magnitudes!
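You can reproduce this kind of coefficient blow-up yourself. The sketch below assumes an illustrative setup (15 noisy samples of a sin target, not the post’s exact dataset) and fits a tenth degree polynomial to it:

```python
import numpy as np

# Assumed setup for illustration: 15 noisy samples of sin(x); the post's
# exact dataset and noise level are not reproduced here.
rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 15)
y = np.sin(x) + rng.normal(0.0, 0.3, 15)

w = np.polyfit(x, y, 10)                   # coefficients, highest power first
print(np.abs(w).max())                     # typically far above Taylor-sized
                                           # coefficients like 1/9! ~ 2.8e-6
```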
In training our models, we find a vector of parameters $\mathbf{w}$ that minimizes the empirical risk on the given dataset. This can be expressed mathematically as the following optimization problem:

$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}), y_i\right)$$

where $m$ is the size of the dataset, $\mathbf{x}$ is the feature vector, and $h(\mathbf{x};\mathbf{w})$ is our hypothesis, written to explicitly state that it’s parametrized by $\mathbf{w}$. Utilizing our observation about the magnitudes of $\mathbf{w}$’s components, we can add a constraint to that optimization problem to force these magnitudes to be small. Instead of adding a constraint on every component of the parameters vector, we can equivalently constrain one of its norms (or, more conveniently, the square of its norm) to be less than or equal to some small value $Q$. One of the norms we can choose is the Euclidean norm:

$$\|\mathbf{w}\|_2 = \sqrt{\sum_{j=1}^{n} w_j^2}$$

where $n$ is the number of features. So we can rewrite the optimization problem with the constraint on the Euclidean norm as:

$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}), y_i\right) \quad \text{subject to} \quad \|\mathbf{w}\|_2^2 \leq Q$$
We use the method of Lagrange multipliers (here’s a great tutorial on Khan Academy if you’re not familiar with it), which states that the constrained optimization problem:

$$\min_{\mathbf{w}} f(\mathbf{w}) \quad \text{subject to} \quad g(\mathbf{w}) \leq 0$$

is equivalent to the unconstrained optimization problem:

$$\min_{\mathbf{w}} f(\mathbf{w}) + \lambda g(\mathbf{w})$$

where $\lambda$ is a scalar called the Lagrange multiplier.
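As a toy illustration of this equivalence (a made-up one-dimensional problem, not from the original derivation), consider minimizing $(w-3)^2$ subject to $w^2 \leq 1$. The constrained optimum is $w = 1$, the boundary point closest to $3$, and the Lagrangian form recovers it with a suitable multiplier:

```python
# Unconstrained Lagrangian objective: (w - 3)^2 + lam * w^2.
# Setting its derivative 2(w - 3) + 2 * lam * w to zero gives w = 3 / (1 + lam).
lam = 2.0                       # with this multiplier the penalized minimizer
w_star = 3.0 / (1.0 + lam)      # coincides with the constrained optimum
print(w_star)                   # 1.0
```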
We can now write our constrained optimization problem in an unconstrained fashion as:

$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}), y_i\right) + \lambda\left(\|\mathbf{w}\|_2^2 - Q\right)$$

By choosing $\lambda$ to be proportional to $\frac{1}{Q}$, we can get rid of the explicit dependency on $Q$ and replace it with an arbitrary constant $k$:

$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}), y_i\right) + \lambda\|\mathbf{w}\|_2^2 - k$$

If you’re up for a little calculus, you can prove that the value of $\mathbf{w}$ that minimizes the problem when we drop the term involving $k$ also minimizes the problem with the term involving $k$ kept intact, since that term does not depend on $\mathbf{w}$. So we can drop $k$ and write our minimization problem as:

$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}), y_i\right) + \lambda\|\mathbf{w}\|_2^2$$
and this is the formula for the regularized cost function that we’ve practically worked with a lot. This form of regularization is called L2-regularization because the norm we used, the Euclidean norm, is also called the L2-norm. If we used a different norm, such as the L1-norm:

$$\|\mathbf{w}\|_1 = \sum_{j=1}^{n} |w_j|$$

the resulting regularization would be called L1-regularization. The following plot shows the effect of L2-regularization (with $\lambda = 2$) on training the tenth degree model with the simulated dataset from earlier:
The regularization resulted in a much better-behaved spread around the mean than in the unregularized version. Although the regularization introduced an increase in bias, the decrease in variance was greater, which makes the overall risk smaller (with the help of software we can get numerical estimates for these values and see the changes for ourselves).
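As a sketch of such a software check, here is the shrinking effect of the L2 penalty using the closed-form ridge solution $\mathbf{w} = (X^TX + \lambda I)^{-1}X^T\mathbf{y}$ for a tenth degree polynomial. The dataset (noisy samples of a sin target) and the noise level are assumptions, not the post’s exact simulation:

```python
import numpy as np

# Assumed illustrative data: 15 noisy samples of sin(x).
rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 15)
y = np.sin(x) + rng.normal(0.0, 0.3, 15)
X = np.vander(x, 11)                       # tenth degree polynomial features

lam = 2.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(11), X.T @ y)
w_plain = np.linalg.lstsq(X, y, rcond=None)[0]

# The penalized solution provably has a smaller (or equal) Euclidean norm:
# if it didn't, the unpenalized minimizer would achieve a lower value of the
# penalized objective, contradicting optimality.
print(np.linalg.norm(w_ridge), np.linalg.norm(w_plain))
```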
We can also examine the effect of regularization on the risk in light of our generalization bound. The following plot shows the contours of the squared-difference loss of a linear model (two parameters). The red circle depicts our L2-regularization constraint $w_0^2 + w_1^2 \leq Q$.
The plot shows that when the regularization is applied, the solution to the optimization problem shifts from its original position to the lowest-value position that lies on the constraint circle. This means that for a solution to be feasible, it has to lie within that constraining circle. So you can think of the whole 2D grid as the hypothesis space before regularization; regularization then confines the hypothesis space to this red circle.
With this observation, we can think of the minimization problem:

$$\min_{\mathbf{w}} \frac{1}{m}\sum_{i=1}^{m} L\left(h(\mathbf{x}_i;\mathbf{w}), y_i\right) + \lambda\|\mathbf{w}\|_2^2$$

as a direct translation of the generalization bound $R(h) \leq R_{\mathrm{emp}}(h) + C(|\mathcal{H}|)$, with the regularization term as a minimizer for the complexity term. The only piece missing from that translation is the definition of the loss function $L$. Here we used the squared difference; next time we’ll look into other loss functions and the underlying principle that unites them all.
References and Additional Readings

- Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning (Information Science and Statistics). Springer-Verlag New York, Inc., Secaucus, NJ, USA.
- Abu-Mostafa, Y. S., Magdon-Ismail, M., & Lin, H. (2012). Learning from Data: A Short Course.