In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau, where f^* denotes the complex conjugate of f...
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see whether shear strain causes a temperature increase, potentially delayed in time?
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"?Alternatively, where could I go in order to have such a question answered?
@tpg2114 For reducing the data points needed to calculate the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store every snapshot and spatial point.
@DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably a historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
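For what it's worth, here is a minimal sketch (with a made-up pair of signals standing in for the real data) of how scipy.signal.correlate's output is indexed and how argmax gives the lag. mode='full' is why a 200-point input yields a 399-point (roughly 400) result: one value per possible lag.

```python
import numpy as np
from scipy import signal

# Hypothetical stand-ins for the two time series (not the actual data):
# y is x delayed by 30 samples, with samples spaced dt = 1e-9 s apart.
dt = 1e-9
n = 200
t = np.arange(n) * dt
x = np.exp(-((t - 50 * dt) / (10 * dt)) ** 2)  # a bump centered at index 50
y = np.roll(x, 30)                              # the same bump delayed by 30 samples

# mode='full' returns len(x) + len(y) - 1 = 399 points, one per possible lag.
corr = signal.correlate(y, x, mode='full')

# Output element i corresponds to lag i - (len(x) - 1); argmax locates the delay.
lags = np.arange(-(n - 1), n)
lag = lags[np.argmax(corr)]
print(lag, lag * dt)  # 30 samples, i.e. a 3e-8 s delay
```

A positive lag here means the first argument lags the second; whether argmax or argmin is appropriate depends on whether the signals are positively or negatively correlated.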
Related:Why don't we just ban homework altogether?Banning homework: vote and documentationWe're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale $$p\rightarrow \sqrt{m\omega\hbar}\,p, \qquad x\rightarrow \sqrt{\frac{\hbar}{m\omega}}\,x$$ and then define the following: $$K_1=\frac 14 (p^2-q^2), \qquad K_2=\frac 14 (pq+qp), \qquad J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$ The first part is to show that $$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." I.e., is it conceptual to be stuck when deriving the distribution of microstates because somebody doesn't know what Stirling's Approximation is?
Some have argued that it's on topic even though there's nothing really physical about it, just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because it's really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (cf. Hawking's recent paper)
The equation to approximate an input signal with a unit impulse in continuous time is shown below: $\hat{x}(t)=\lim_{\Delta\rightarrow0}\sum^{\infty}_{k=-\infty}x(k\Delta)\delta_\Delta(t-k\Delta)\Delta$. Before we take the limit, why is there a final $\Delta$ multiplying the $\delta_\Delta(t-k\Delta)$?
Here the signal is in the continuous-time domain. We can approximate any signal with a weighted integral of unit impulses; because the signal is continuous in time, integration is used instead of summation. The actual equation is $$x(t)=\int_{-\infty}^{\infty}{x(t_0)\delta(t-t_0)\,dt_0}$$ In your equation, ${\lim}_{\Delta\rightarrow0}\sum^{\infty}_{-\infty}$ stands for the integration, so the $\Delta$ is required at the end: it plays the role of $dt_0$ in the integral. I hope that clears it up.
$\delta_\Delta(t-k\Delta)$ is a rectangular pulse of width $\Delta$ (and height $1/\Delta$, so that it has unit area). It is like approximating the area under $x(t)$ with a bunch of rectangles of height $x(k\Delta)$ and width $\Delta$. Area is width times height, hence the multiplication. $x(k\Delta)\delta_\Delta(t-k\Delta)$ is just a way of writing the discrete set of values at $x(k\Delta)$.
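As a sanity check, here is a small Python sketch (the signal choice is arbitrary) of the staircase approximation: at any $t$ only one pulse is nonzero, its height $1/\Delta$ is cancelled by the trailing factor $\Delta$, and the error shrinks as $\Delta\to 0$.

```python
import numpy as np

x = np.sin  # arbitrary example signal

def x_hat(t, delta):
    # Only the k = floor(t/delta) pulse is nonzero at time t; its height is
    # 1/delta, and the trailing factor delta cancels it, leaving x(k*delta).
    k = np.floor(t / delta)
    return x(k * delta) * (1.0 / delta) * delta

t = 1.0
for delta in (0.3, 0.03, 0.003):
    print(delta, abs(x_hat(t, delta) - x(t)))  # error shrinks as delta -> 0
```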
Let {$f_n$} be defined recursively as $f_1 = f_2 = f_3 = 1$ and $f_n = f_{n-1} + f_{n-3}$ for all $n \gt 3$.
Also, define {$a_n$} as the ratio of consecutive terms of {$f_n$}. That is, $a_n = \frac{f_{n+1}}{f_n}$ for all $n \geq 1$.
So, the terms of {$f_n$} are $$f_1 = 1,f_2 = 1,f_3 = 1,f_4 = 2,f_5 = 3,f_6 = 4,f_7 = 6,\ldots,$$ and the terms of {$a_n$} are $$a_1 = 1,a_2 = 1,a_3 = 2,a_4 = \frac{3}{2},a_5 = \frac{4}{3},a_6 = \frac{6}{4},\ldots$$
The question then becomes evaluating the limiting ratio of {$f_n$} or, in other words,
Find $$\lim_{n \to \infty}{a_n} = \lim_{n \to \infty}\frac{f_{n+1}}{f_n}.$$
The way I approached this problem was to try to put bounds on $a_k = \frac{f_{k+1}}{f_k}$ for some $k$. It made the most sense to me that $1 \leq a_k \leq 2$ just based off of the first few terms of {$a_n$}.
Then, I tried to rewrite $a_k = \frac{f_{k+1}}{f_k}$ in some way that would allow me to put bounds on $a_{k+1}$, since we want to show next that $1 \leq a_{k+1} \leq 2$.
$$a_{k+1} = \frac{f_{k+2}}{f_{k+1}} = \frac{f_{k+1} + f_{k-1}}{f_{k+1}} = 1 + \frac{f_{k-1}}{f_{k+1}}.$$
Next, I thought it would be a good idea to invert the inequality $1 \leq a_k \leq 2$. That is, $1 \geq \frac{1}{a_k} = \frac{f_k}{f_{k+1}} \geq \frac{1}{2}$ and then add $1$ to get the inequality $2\geq 1 + \frac{f_k}{f_{k+1}} \geq \frac{3}{2}$.
And while $1 + \frac{f_k}{f_{k+1}}$ looks like a pretty result, what I actually need to find in this case is $1 + \frac{f_{k-1}}{f_{k+1}}$.
It seems that at this point more clever manipulation is required, but I don't know what else can be done once I've reached this dead end. Can someone please elaborate on how to proceed with the above method or provide an alternative approach altogether?
I appreciate any and all advice!
Thanks for reading,
A
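As a quick numerical sanity check (a sketch using NumPy's polynomial root finder, not a proof), the ratio settles to the real root of the characteristic equation $x^3 = x^2 + 1$ of the recurrence:

```python
import numpy as np

# Iterate f_n = f_{n-1} + f_{n-3} and track the ratio a_n = f_{n+1}/f_n.
f = [1, 1, 1]
for _ in range(60):
    f.append(f[-1] + f[-3])
ratio = f[-1] / f[-2]

# If a_n converges to some limit L, dividing the recurrence through by f_{n-1}
# suggests the characteristic equation x^3 = x^2 + 1; its unique real root is L.
real_root = max(np.roots([1, -1, 0, -1]).real)
print(ratio, real_root)  # both approximately 1.4655712...
```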
Abbreviation: MultLat

An $m$-lattice (or multiplicative lattice) is a structure $\mathbf{A}=\langle A,\vee,\wedge,\cdot\rangle$ of type $\langle 2,2,2\rangle$ such that

$\langle A,\vee,\wedge\rangle$ is a lattice

$\cdot$ distributes over $\vee$: $x(y\vee z)=xy\vee xz$, $(x\vee y)z=xz\vee yz$
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be … . A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x ... y)=h(x) ... h(y)$
An
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[Lattice-ordered semigroups]] [[Lattices]] reduced type [[Multiplicative semilattices]] reduced type
Generally in RSA we encrypt as $m^e \pmod n$. Will RSA work if we replace the power by normal multiplication, i.e. $E = (m \times e) \bmod n$ and decryption as $c \times d \bmod n$? What would be the disadvantage if it works?
Suppose an RSA public key is still $(e, N)$ and the private key is still $d$. We have $Enc = (m*e) \text{ mod } N$ and $Dec = (c*d) \text{ mod } N$. The correctness of the scheme depends on the fact that $Dec(Enc(m)) = (m*e*d) \text{ mod } N = m \text{ mod } N$. This implies $e*d = 1 \text{ mod } N$. It is thus trivial to compute the private key given only the public key (i.e. compute $d = e^{-1} \text{ mod } N$). Hence this scheme is completely broken.
Note that this scheme only works if $e$ and $N$ are coprime otherwise no $d$ such that $e*d = 1 \text{ mod } N$ exists.
Assumption: $e$ and $d$ are coprime to $N$. If we use multiplication instead of exponentiation, we have $d \equiv e^{-1} \pmod N$:
\begin{align}c \times d &= m &\mod N\\ m \times e \times d &= m &\mod N\\ e \times d &= 1 &\mod N \end{align} (It has to work for any $m$, in particular for an $m$ coprime to $N$, which can then be cancelled.)
$e$ and $N$ are publicly known; to compute $d$, the inverse of $e$, you just need the Extended Euclidean Algorithm. $e$ is coprime to $N$, therefore there exist $u, v$ such that $$e \times u + N \times v = 1$$ by Bezout's identity, so $$e \times u = 1 \pmod N$$ Therefore $u = d = e^{-1}$, which is easily computed.
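The attack is one line in practice. Here is a sketch with deliberately tiny, made-up parameters (and Python 3.8+, whose three-argument pow computes modular inverses via the extended Euclidean algorithm):

```python
from math import gcd

# Tiny toy parameters for illustration only -- not real RSA values.
N, e = 3233, 17
assert gcd(e, N) == 1          # required for e to be invertible mod N

d = pow(e, -1, N)              # modular inverse, i.e. the "private" key

m = 1234
c = (m * e) % N                # "encrypt" with multiplication instead of a power
assert (c * d) % N == m        # anyone can recover m using only public data
print(d)
```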
In order to set up a system of equations using matrices, you need to understand how matrices multiply one another. Not all matrices can be multiplied together – they need to be compatible with one another. Not only that, unlike scalar (single number) arithmetic, multiplication does not commute, that is, the order of the multiplication will generally produce different results or one order may not even be possible. So what do I mean by compatible?
Let’s start with an example:\[
\begin{array}{l}
{\left[{\begin{array}{cc}1&2\\3&4\end{array}}\right]\times\left[{\begin{array}{cc}5&6\\7&8\end{array}}\right]=\left[{\begin{array}{cc}(5\times{1})+(7\times{2})&(6\times{1})+(8\times{2})\\(5\times{3})+(7\times{4})&(6\times{3})+(8\times{4})\end{array}}\right]}\\
{=\left[{\begin{array}{cc}19&22\\43&50\end{array}}\right]}
\end{array}
\]
To multiply these two 2 × 2 matrices, you take the first column of the second matrix and lay it over the top of the first matrix:\[
\begin{array}{l}
{\left[{\begin{array}{cc}{5}&{7}\end{array}}\right]}\\
{\left[{\begin{array}{cc}{1}&{2}\\{3}&{4}\end{array}}\right]}
\end{array}
\]
Starting with the top row of the first matrix, multiply the numbers in the same position together and add the results: (5 × 1) + (7 × 2) = 19. This result is the first row, first column number in the new matrix. Repeat this using the second row of the first matrix: (5 × 3) + (7 × 4) = 43. This is the first element of the second row of the new matrix. Now do the same with the second column of the second matrix:\[
\begin{array}{l}
{\left[{\begin{array}{cc}{6}&{8}\end{array}}\right]}\\
{\left[{\begin{array}{cc}{1}&{2}\\{3}&{4}\end{array}}\right]}
\end{array}
\]
to get the second column of the new matrix. I will leave it as an exercise for you to confirm that if I reverse the order of the matrices, you will get a different result. That is,\[
\left[{\begin{array}{cc}{1}&{2}\\{3}&{4}\end{array}}\right]\times\left[{\begin{array}{cc}{5}&{6}\\{7}&{8}\end{array}}\right]\hspace{0.33em}\ne\hspace{0.33em}\left[{\begin{array}{cc}{5}&{6}\\{7}&{8}\end{array}}\right]\hspace{0.33em}\times\hspace{0.33em}\left[{\begin{array}{cc}{1}&{2}\\{3}&{4}\end{array}}\right]
\]
So this method works for any size matrices as long as they are compatible. From this example, you see that this works only if the second matrix has the same number of rows as the number of columns in the first matrix. This is easy to see if you put the dimensions together: (2 × 2) × (2 × 2). The inside numbers need to be the same if multiplication is possible (2 = 2). The outside numbers give the dimensions of the resulting matrix (2 × 2).
So you can multiply a 3 × 2 matrix by a 2 × 4 matrix to get a 3 × 4 matrix, but you cannot reverse the order because the inside dimensions will not be equal. It’s interesting that if you multiply a 1 × (anything) matrix by a (same anything) × 1 matrix, you will get a 1 × 1 matrix which is just a number (a scalar).
This multiplication works even if some or all of the elements of the matrices are variables. I will illustrate this in my next post.
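If you'd like to check these results yourself, here is a short sketch using NumPy (assuming you have it available); the @ operator is matrix multiplication:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A @ B)  # [[19 22], [43 50]] -- the worked example above
print(B @ A)  # [[23 34], [31 46]] -- a different matrix: order matters

# Compatible shapes: (3 x 2) @ (2 x 4) gives a 3 x 4 result,
# but reversing the order fails because the inner dimensions (4 and 3) differ.
C = np.ones((3, 2)) @ np.ones((2, 4))
print(C.shape)  # (3, 4)
```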
I would like the limits to be vertically above and below the summation sign. I also need a larger summation sign... This is currently what I have:
$x^2sin(x) = \sum_{n=-\infty\atop n\ne \pm 1}^\infty \dfrac {4i(-1)^{n}n}{(n^2 - 1)^2} $
Any help appreciated!
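One way to get both effects (a sketch; it assumes the amsmath package for \substack): force display style so the operator is large with limits above and below, and stack the two conditions under the sum.

```latex
% needs \usepackage{amsmath} in the preamble
$\displaystyle
x^2 \sin(x) = \sum_{\substack{n=-\infty\\ n\ne \pm 1}}^{\infty}
  \frac{4i(-1)^{n} n}{(n^{2}-1)^{2}}
$
```

In a displayed equation ($$...$$ or \[...\]) the \displaystyle is unnecessary; writing \sum\limits_{...} is an alternative if you only need the limits repositioned.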
The hypergeometric distribution arises when one samples from a finite population, thus making the trials dependent on each other. There are five characteristics of a hypergeometric experiment.
Characteristics of a hypergeometric experiment
You take samples from two groups. You are concerned with a group of interest, called the first group. You sample without replacement from the combined groups. For example, you want to choose a softball team from a combined group of 11 men and 13 women. The team consists of ten players. Each pick is not independent, since sampling is without replacement. In the softball example, the probability of picking a woman first is \(\frac{13}{24}\). The probability of picking a man second is \(\frac{11}{23}\) if a woman was picked first. It is \(\frac{10}{23}\) if a man was picked first. The probability of the second pick depends on what happened in the first pick. You are not dealing with Bernoulli Trials.
The outcomes of a hypergeometric experiment fit a hypergeometric probability distribution. The random variable \(X\) = the number of items from the group of interest.
Example \(\PageIndex{1}\)
A candy dish contains 100 jelly beans and 80 gumdrops. Fifty candies are picked at random. What is the probability that 35 of the 50 are gumdrops? The two groups are jelly beans and gumdrops. Since the probability question asks for the probability of picking gumdrops, the group of interest (first group) is gumdrops. The size of the group of interest (first group) is 80. The size of the second group is 100. The size of the sample is 50 (jelly beans or gumdrops). Let \(X =\) the number of gumdrops in the sample of 50. \(X\) takes on the values \(x = 0, 1, 2, ..., 50\). What is the probability statement written mathematically?
Answer
\(P(x = 35)\)
Exercise \(\PageIndex{1}\)
A bag contains letter tiles. Forty-four of the tiles are vowels, and 56 are consonants. Seven tiles are picked at random. You want to know the probability that four of the seven tiles are vowels. What is the group of interest, the size of the group of interest, and the size of the sample?
Answer
The group of interest is the vowel letter tiles. The size of the group of interest is 44. The size of the sample is seven.
Example \(\PageIndex{2}\)
Suppose a shipment of 100 DVD players is known to have ten defective players. An inspector randomly chooses 12 for inspection. He is interested in determining the probability that, among the 12 players, at most two are defective. The two groups are the 90 non-defective DVD players and the 10 defective DVD players. The group of interest (first group) is the defective group because the probability question asks for the probability of at most two defective DVD players. The size of the sample is 12 DVD players. (They may be non-defective or defective.) Let \(X =\) the number of defective DVD players in the sample of 12. \(X\) takes on the values \(0, 1, 2, \dotsc, 10\). \(X\) may not take on the values 11 or 12. The sample size is 12, but there are only 10 defective DVD players. Write the probability statement mathematically.
Answer
\(P(x \leq 2)\)
Exercise \(\PageIndex{2}\)
A gross of eggs contains 144 eggs. A particular gross is known to have 12 cracked eggs. An inspector randomly chooses 15 for inspection. She wants to know the probability that, among the 15, at most three are cracked. What is \(X\), and what values does it take on?
Answer
Let \(X =\) the number of cracked eggs in the sample of 15. \(X\) takes on the values \(0, 1, 2, \dotsc, 12\).
Example \(\PageIndex{3}\)
You are president of an on-campus special events organization. You need a committee of seven students to plan a special birthday party for the president of the college. Your organization consists of 18 women and 15 men. You are interested in the number of men on your committee. If the members of the committee are randomly selected, what is the probability that your committee has more than four men?
This is a hypergeometric problem because you are choosing your committee from two groups (men and women).
Are you choosing with or without replacement? What is the group of interest? How many are in the group of interest? How many are in the other group? Let \(X =\) _________ on the committee. What values does \(X\) take on? The probability question is \(P(\)_______\()\).

Solution: without replacement; the men; 15 men; 18 women. Let \(X =\) the number of men on the committee, with \(x = 0, 1, 2, \dotsc, 7\). The probability question is \(P(x > 4)\).
Exercise \(\PageIndex{3}\)
A pallet has 200 milk cartons. Of the 200 cartons, it is known that ten of them have leaked and cannot be sold. A stock clerk randomly chooses 18 for inspection. He wants to know the probability that, among the 18, no more than two are leaking. Give five reasons why this is a hypergeometric problem.
Answer There are two groups. You are concerned with a group of interest. You sample without replacement. Each pick is not independent. You are not dealing with Bernoulli Trials.
Notation for the Hypergeometric: \(H =\) Hypergeometric Probability Distribution Function
\[X \sim H(r, b, n)\]
Read this as "\(X\) is a random variable with a hypergeometric distribution." The parameters are \(r, b\), and \(n\); \(r =\) the size of the group of interest (first group), \(b =\) the size of the second group, \(n =\) the size of the chosen sample.
Example \(\PageIndex{4}\)
A school site committee is to be chosen randomly from six men and five women. If the committee consists of four members chosen randomly, what is the probability that two of them are men? How many men do you expect to be on the committee?
Let \(X\) = the number of men on the committee of four. The men are the group of interest (first group).
\(X\) takes on the values \(0, 1, 2, 3, 4\), where \(r = 6, b = 5\), and \(n = 4\). \(X \sim H(6, 5, 4)\)
Find \(P(x = 2)\). \(P(x = 2) = 0.4545\) (calculator or computer)
Currently, the TI-83+ and TI-84 do not have hypergeometric probability functions. There are a number of computer packages, including Microsoft Excel, that do.
The probability that there are two men on the committee is about 0.45.
The graph of \(X \sim H(6, 5, 4)\) is:
Figure 4.6.1.
The y-axis contains the probability of \(X\), where \(X =\) the number of men on the committee.
You would expect \(\mu = 2.18\) (about two) men on the committee.
The formula for the mean is
\[\mu = \frac{nr}{r+b} = \frac{(4)(6)}{6+5} = 2.18\]
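Since the TI calculators lack a built-in function, here is one way to reproduce these numbers with SciPy (a sketch; scipy.stats.hypergeom is parameterized by the total population \(M = r + b\), the size of the group of interest, and the sample size):

```python
from scipy.stats import hypergeom

# X ~ H(r=6, b=5, n=4): a committee of 4 drawn from 6 men and 5 women.
M, r, n = 6 + 5, 6, 4
rv = hypergeom(M, r, n)

print(round(rv.pmf(2), 4))   # 0.4545 -- probability of exactly two men
print(round(rv.mean(), 2))   # 2.18   -- expected number of men
```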
Exercise \(\PageIndex{4}\)
An intramural basketball team is to be chosen randomly from 15 boys and 12 girls. The team has ten slots. You want to know the probability that eight of the players will be boys. What is the group of interest and the sample?
Answer
The group of interest is the 15 boys. The sample consists of the ten slots on the intramural basketball team.
Summary
A hypergeometric experiment is a statistical experiment with the following properties:
You take samples from two groups. You are concerned with a group of interest, called the first group. You sample without replacement from the combined groups. Each pick is not independent, since sampling is without replacement. You are not dealing with Bernoulli Trials.
The outcomes of a hypergeometric experiment fit a hypergeometric probability distribution. The random variable \(X\) = the number of items from the group of interest. The distribution of \(X\) is denoted \(X \sim H(r, b, n)\), where \(r =\) the size of the group of interest (first group), \(b =\) the size of the second group, and \(n =\) the size of the chosen sample. It follows that \(n \leq r + b\). The mean of \(X\) is \(\mu = \frac{nr}{r+b}\) and the standard deviation is \(\sigma = \sqrt{\frac{rbn(r+b-n)}{(r+b)^{2}(r+b-1)}}\).
Formula Review
\(X \sim H(r, b, n)\) means that the discrete random variable \(X\) has a hypergeometric probability distribution with \(r =\) the size of the group of interest (first group), \(b =\) the size of the second group, and \(n =\) the size of the chosen sample.
\(X\) = the number of items from the group of interest that are in the chosen sample, and \(X\) may take on the values \(x = 0, 1, \dotsc,\) up to the size of the group of interest. (The minimum value for \(X\) may be larger than zero in some instances.)
\(n \leq r + b\)
The mean of \(X\) is given by the formula \(\mu = \frac{nr}{r+b}\) and the standard deviation is \(\sigma = \sqrt{\frac{rbn(r+b-n)}{(r+b)^{2}(r+b-1)}}\).
Use the following information to answer the next five exercises: Suppose that a group of statistics students is divided into two groups: business majors and non-business majors. There are 16 business majors in the group and seven non-business majors in the group. A random sample of nine students is taken. We are interested in the number of business majors in the sample.
Exercise 4.6.5
In words, define the random variable \(X\).
Answer
\(X =\) the number of business majors in the sample.
Exercise 4.6.6
\(X \sim\) _____(_____,_____)
Exercise 4.6.7
What values does \(X\) take on?
Answer
\(2, 3, 4, 5, 6, 7, 8, 9\)
Exercise 4.6.8
Find the standard deviation.
Exercise 4.6.9
On average (\(\mu\)), how many would you expect to be business majors?
Answer
6.26
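A sketch of how Exercises 4.6.6 through 4.6.9 can be checked with SciPy (same parameterization as before: total population, group of interest, sample size):

```python
from scipy.stats import hypergeom

# X ~ H(r=16, b=7, n=9): nine students sampled from 16 business majors
# and 7 non-business majors.
M, r, n = 16 + 7, 16, 9
rv = hypergeom(M, r, n)

print(round(rv.mean(), 2))   # 6.26 business majors expected on average
print(round(rv.std(), 4))    # the standard deviation asked for in Exercise 4.6.8
```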
Glossary

Hypergeometric Experiment: a statistical experiment with the following properties: you take samples from two groups; you are concerned with a group of interest, called the first group; you sample without replacement from the combined groups; each pick is not independent, since sampling is without replacement; you are not dealing with Bernoulli Trials.

Hypergeometric Probability: a discrete random variable (RV) characterized by a fixed number of trials and a probability of success that is not the same from trial to trial.

Contributors
Barbara Illowsky and Susan Dean (De Anza College) with many other contributing authors. Content produced by OpenStax College is licensed under a Creative Commons Attribution License 4.0 license. Download for free at http://cnx.org/contents/30189442-699...b91b9de@18.114.
Classical entropy quantities are a way to quantify how much information is revealed in a random event. Shannon first introduced a way to quantify information by associating to an event occurring with probability p an amount of information -\log p (as is standard in information theory, logarithms are taken in base 2 and so information is measured in bits). If an unlikely event occurs, one gains a large amount of information. The Shannon entropy is the average amount of information gained by observing the outcome of an event. In some cases, in particular ones relevant to cryptography, the average amount of information is not a good quantity, since it assumes many independent repetitions of the same experiment.

Random Events
When we discuss random events, we assume that they occur according to a pre-defined ensemble of possible outcomes of that event, and associated probabilities. Event X is a single instance drawn from the ensemble \{0,1,\ldots,|X|\} with probabilities \{p_0,p_1,\ldots,p_{|X|}\}. We call this probability distribution P_X. The terminology X=x refers to a single instance drawn from this distribution taking the value x. One similarly defines distributions over more than one random variable. For instance, P_{XY} is the joint distribution of X and Y, and P_{X|Y=y} is the distribution of X conditioned on the fact that Y takes the value Y=y.

Shannon Entropy
It was Shannon who pioneered the mathematical formulation of information. In essence his insight was that an event that occurs with probability p could be associated with an amount of information -\log p. Consider many independent repetitions of random event X. The average information revealed by each instance of X is given by the Shannon entropy of X defined as follows.
The Shannon entropy associated with an event x drawn from random distribution X is
H(X)\equiv\sum_{x\in X}-P_X(x)\log P_X(x).
Likewise, one can define conditional Shannon entropies. H(X|Y=y) denotes the Shannon entropy of X given Y=y. It measures the average amount of information one learns from a single instance of X if one possesses string y\in Y, where X,Y are chosen according to joint distribution P_{XY}. One can average this quantity to form H(X|Y), the conditional Shannon entropy.
The conditional Shannon entropy of an event X given Y is defined by
H(X|Y)\equiv\sum_{x\in X, y\in Y}-P_Y(y)P_{X|Y=y}(x)\log P_{X|Y=y}(x).
This leads one to define the mutual Shannon information between X and Y by
I(X:Y)\equiv H(X)-H(X|Y)=H(Y)-H(Y|X).
In some sense, this is the amount of information in common to the two strings X and Y.
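These definitions are easy to compute directly. A sketch with a made-up joint distribution (using the chain rule H(X|Y) = H(X,Y) - H(Y), which follows from the conditional-entropy definition above):

```python
import numpy as np

def shannon(p):
    """Shannon entropy in bits of a probability array (zero entries ignored)."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Made-up joint distribution P_XY for binary X (rows) and Y (columns).
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])

H_X = shannon(P.sum(axis=1))       # marginal entropy H(X)
H_Y = shannon(P.sum(axis=0))       # marginal entropy H(Y)
H_XY = shannon(P)                  # joint entropy H(X,Y)
H_X_given_Y = H_XY - H_Y           # chain rule: H(X|Y) = H(X,Y) - H(Y)
I = H_X - H_X_given_Y              # mutual information I(X:Y)
print(round(I, 4))                 # 0.2781 bits shared between X and Y
```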
Shannon information was first used to solve problems of compression, and communication over a noisy channel, as given in the following theorems.
Source coding theorem: Consider a source emitting independent and identically distributed (i.i.d.) random variables drawn from distribution P_X. For any \epsilon>0 and R>H(X), there exists an encoder such that for sufficiently large N, any sequence drawn from P_X^N can be compressed to length NR, and a decoder such that, except with probability <\epsilon, the original sequence can be restored from the compressed string.
Furthermore, if one tries to compress the same source using R < H(X) bits per instance, it is virtually certain that information will be lost.
For a discrete, memoryless channel, in which Alice sends a random variable drawn from X to Bob who receives Y, the channel capacity is defined by C\equiv \max_{P_X}I(X:Y).

Noisy channel coding theorem: Consider Alice communicating with Bob via a discrete memoryless channel which has the property that if Alice draws from an i.i.d. source X, Bob receives Y. For any \epsilon>0 and R<C, for large enough N, there exists an encoding of length N and a decoder such that \geq RN bits of information are conveyed by the channel for each encoder-channel-decoder cycle, except with probability <\epsilon.
Notice that in the noisy channel coding theorem, the channel is memoryless, and Alice has an i.i.d. source. In other words, all uses of the channel are independent of one another. This is the situation in which Shannon information is useful. However, in cryptographic scenarios where the channel may be controlled by an eavesdropper, such an assumption is not usually valid. Instead, other entropy measures have been developed that apply for these cases.

Beyond Shannon entropy
Rényi introduced the following generalization of the Shannon entropy.
The
Rényi entropy of order \alpha is defined by
H_{\alpha}(X)\equiv\frac{1}{1-\alpha}\log\sum_{x\in X}P_X(x)^{\alpha}.
We have, H_1(X)\equiv\lim_{\alpha\rightarrow 1}H_{\alpha}(X)=H(X). Two other important cases are H_0(X)=\log|X| and H_{\infty}(X)=-\log\max_{x\in X}P_X(x). A useful property is that, for \alpha\leq\beta, H_{\alpha}(X)\geq H_{\beta}(X).
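A short numeric sketch of these orders (names mine), including a check of the monotonicity property on a non-uniform distribution:

```python
import math

def renyi_entropy(p, alpha):
    """Rényi entropy of order alpha (in bits); alpha = 1 is taken as the Shannon limit."""
    p = [q for q in p if q > 0]
    if alpha == 1:                 # limiting case: Shannon entropy
        return -sum(q * math.log2(q) for q in p)
    if math.isinf(alpha):          # min entropy: -log2 of the largest probability
        return -math.log2(max(p))
    return math.log2(sum(q ** alpha for q in p)) / (1 - alpha)

uniform = [0.25] * 4               # all orders coincide at log|X| = 2 bits
skewed = [0.7, 0.1, 0.1, 0.1]
print([round(renyi_entropy(uniform, a), 3) for a in (0, 1, 2, math.inf)])  # → [2.0, 2.0, 2.0, 2.0]
print(renyi_entropy(skewed, 0) >= renyi_entropy(skewed, 1) >= renyi_entropy(skewed, math.inf))  # → True
```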
H_0(X) is sometimes called the
max entropy of X. It is important for information reconciliation, which in essence is error correction.
H_{\infty}(X) is sometimes called the
min entropy of X. It is important for privacy amplification. There, the presence of an eavesdropper means that it no longer suffices to consider each use of the channel as independent. The min entropy represents the maximum amount of information that could be learned from the event X, so describes the worst case scenario. In a cryptographic application, one wants to be assured security even in the worst case.
In general, Rényi entropies are strongly discontinuous. As an example, consider the two distributions P_X and Q_X defined on x\in\{1,\ldots,2^n\}. Take P_X(1)=2^{-\frac{n}{4}}, P_X(x\neq 1)=\frac{1-2^{-\frac{n}{4}}}{2^n-1}, and Q_X to be the uniform distribution. The difference between min entropies is \frac{3n}{4}. In the large n limit, the two distributions have distance \approx 2^{-\frac{n}{4}}, which is exponentially small, while the difference in min entropies becomes arbitrarily large.
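The discontinuity example can be checked numerically for small n; a sketch (names and the choice n = 8 are mine):

```python
import math

def min_entropy(p):
    # H_inf(X) = -log2 max_x P(x), in bits
    return -math.log2(max(p))

def min_entropy_gap(n):
    # P_X puts 2^(-n/4) on one outcome and spreads the rest uniformly; Q_X is uniform
    peak = 2 ** (-n / 4)
    rest = (1 - peak) / (2 ** n - 1)
    P = [peak] + [rest] * (2 ** n - 1)
    Q = [2 ** -n] * (2 ** n)
    return min_entropy(Q) - min_entropy(P)

print(min_entropy_gap(8))  # → 6.0, i.e. 3n/4 for n = 8
```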
Smoothed versions of these quantities have been introduced which remove such discontinuities. In essence, these smoothed quantities involve optimizing such quantities over a small region of probability space. They have operational significance in cryptography in that they provide the relevant quantities for information reconciliation and privacy amplification.
The conditional versions of these smoothed entropies are relevant in cryptography, hence we provide a definition of these directly.
For a distribution P_{XY}, and smoothing parameter \epsilon>0, we define the following smoothed Rényi entropies:
\begin{array}{ccc} H_0^{\epsilon}(X|Y)\equiv\min_{\Omega}\max_y\log|\{x:P_{X\Omega|Y=y}(x)>0\}|\\ H_{\infty}^{\epsilon}(X|Y)\equiv\max_{\Omega}\left(-\log\max_y\max_x P_{X\Omega|Y=y}(x)\right), \end{array}
where \Omega is a set of events with total probability at least 1-\epsilon.
More generally, the smooth Rényi entropy of order \alpha can be defined. However, up to an additive constant these equal either H_0^{\epsilon} (for \alpha<1) or H_{\infty}^{\epsilon} (for \alpha>1). It is also worth noting that for a large number of independent repetitions of the same experiment, the Rényi entropies tend to the Shannon entropy, that is,
\lim_{\epsilon\rightarrow 0}\lim_{n\rightarrow\infty}\frac{H_{\alpha}^{\epsilon}(X^n|Y^n)}{n}=H(X|Y).
Information Reconciliation
The task of information reconciliation can be stated as follows. Alice has string X and Bob Y, these being chosen with joint distribution P_{XY}. Alice also possesses some additional independent random string R. What is the minimum length of string S=f(X,R) that Alice can compute such that X is uniquely obtainable by Bob using Y, S and R, except with probability less than \epsilon?
Renner and Wolf denote this quantity H_{enc}^{\epsilon}(X|Y). It is tightly bounded by the relation
H_0^{\epsilon}(X|Y)\leq H_{enc}^{\epsilon}(X|Y)\leq H_0^{\epsilon_1}(X|Y)+\log\frac{1}{\epsilon_2},
where \epsilon_1+\epsilon_2=\epsilon.
It is intuitively clear why H_0^{\epsilon}(X|Y) is the correct quantity. Recall the definition
H_0^{\epsilon}(X|Y)\equiv\min_{\Omega}\max_y\log|\{x:P_{X\Omega|Y=y}(x)>0\}|,
where \Omega is a set of events with total probability at least 1-\epsilon. The size of the set of strings x that could have generated Y=y given \Omega is |\{x:P_{X\Omega|Y=y}(x)>0\}|. Alice's additional information needs to point to one of these. It hence requires \log |\{x:P_{X\Omega|Y=y}(x)>0\}| bits to encode. Since Alice does not know y, she must assume the worst, hence we maximize on y. Furthermore, since some error is tolerable, we minimize on \Omega, by cutting away unlikely events from the probability distribution.
Privacy Amplification
In essence, this task seeks to find the maximum length of string Alice and Bob can form from their shared string such that Eve has no information on this string.
This task can be stated more formally as follows. Alice possesses string X and Eve Z, distributed according to P_{XZ}. Alice also has some uncorrelated random string R. What is the maximum length of a binary string S=f(X,R), such that for a uniform random variable U that is independent of Z and R, we have S=U, except with probability less than \epsilon?
Again, this quantity, denoted H_{ext}^{\epsilon}(X|Z), has been defined and bounded by Renner and Wolf :
H_{\infty}^{\epsilon_1}(X|Z)-2\log\frac{1}{\epsilon_2}\leq H_{ext}^{\epsilon}(X|Z)\leq H_{\infty}^{\epsilon}(X|Z),
where \epsilon_1+\epsilon_2=\epsilon.
The reason that this quantity is relevant follows from the definition of extractors, which are functions commonly exploited for privacy amplification. In essence they take a non-uniform random string, and some additional catalytic randomness, to form a smaller string which is arbitrarily close to being uniform. The length reduction in the string is bounded by its min-entropy.
References
C. E. Shannon, Bell System Technical Journal 27, 379 (1948).
A. Rényi, in Proceedings of the 4th Berkeley Symposium on Mathematics, Statistics and Probability (1961), vol. 1.
R. Renner and S. Wolf, in Advances in Cryptology --- ASIACRYPT 2005, edited by B. Roy (Springer-Verlag, 2005), vol. 3788, pp. 199--216.
Site Index Site is defined by the Society of American Foresters (1971) as “an area considered in terms of its own environment, particularly as this determines the type and quality of the vegetation the area can carry.” Forest and natural resource managers use site measurement to identify the potential productivity of a forest stand and to provide a comparative frame of reference for management options. The productive potential or capacity of a site is often referred to as site quality.
Site quality can be measured directly or indirectly. Direct measurement involves analyzing variables such as soil nutrients, moisture, temperature regimes, available light, slope, and aspect. A productivity-estimation method based on the permanent features of soil and topography can be used on any site and is suitable in areas where forest stands do not presently exist. Soil site index is an example of such an index. However, such indices are location specific and should not be used outside the geographic region in which they were developed. Unfortunately, environmental factor information is not always available and natural resource managers must use alternative methods.
Historical yield records also provide direct evidence of a site’s productivity by averaging the yields over multiple rotations or cutting cycles. Unfortunately, there are limited long-term data available, and yields may be affected by species composition, stand density, pests, rotation age, and genetics. Consequently, indirect methods of measuring site quality are frequently used, with the most common involving the relationship between tree height and tree age.
Using stand height data is an easy and reliable way to quantify site quality. Theoretically, height growth is sensitive to differences in site quality and height development of larger trees in an even-aged stand is seldom affected by stand density. Additionally, the volume-production potential is strongly correlated with height-growth rate. This measure of site quality is called site index and is the average total height of selected dominant-codominant trees on a site at a particular reference or index age. If you measure a stand that is at an index age, the average height of the dominant and codominant trees is the site index. It is the most widely accepted quantitative measure of site quality in the United States for even-aged stands (Avery and Burkhart 1994).
The objective of the site index method is to select the height development pattern that the stand can be expected to follow during the remainder of its life (not to predict stand height at the index age). Most height-based methods of site quality evaluation use site index curves. Site index curves are a family of height development patterns referenced by either age at breast height or total age. For example, site index curves for plantations are generally based on total age (years since planted), whereas age at breast height is frequently used for natural stands for the sake of convenience. If total age were to be used in this situation, the number of years required for a tree to grow from a seedling to DBH must be added in. Site index curves can be either anamorphic or polymorphic. Anamorphic curves (most common) are a family of curves with the same shape but different intercepts. Polymorphic curves are a family of curves with different shapes and intercepts.
The index age for this method is typically the culmination of mean annual growth. In the western part of the United States, 100 years is commonly used as the reference age with 50 years in the eastern part of this country. However, site index curves can be based on any index age that is needed. Coile and Schumacher (1964) created a family of anamorphic site index curves for plantation loblolly pine with an index age of 25 years. The following family of anamorphic site index curves for a southern pine is based on a reference age of 50 years.
Figure 1. Site index curves with an index age of 50 years.
Creating a site index curve involves the random selection of dominant and codominant trees, measuring their total height, and statistically fitting the data to a mathematical equation. So, which equation do you use? Plotting height over age for single-species, even-aged stands typically results in a sigmoid-shaped pattern, which can be described by a model such as:
$$H_d = b_0e^{(b_1A^{-1})}$$
where
Hd is the height of dominant and codominant trees, A is stand age, and b0 and b1 are coefficients to be estimated. Variable transformation is needed if linear regression is to be used to fit the model. A common transformation is
$$ln \ H_d = b_0+b_1A^{-1}$$
Coile and Schumacher (1964) fit their data to the following model:
$$ln \ S = ln \ H +5.190(\frac {1}{A} - \frac {1}{25})$$
where
S is site index, H is total tree height, and A is average age. The site index curve is created by fitting the model to data from stands of varying site qualities and ages, making sure that all necessary site index classes are equally represented at all ages. It is important not to bias the curve by using an incomplete range of data.
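A minimal sketch of applying the Coile and Schumacher equation (the function name is mine; it simply exponentiates the fitted log relation):

```python
import math

def site_index(height_ft, age_yr):
    """Coile and Schumacher (1964) loblolly pine site index (index age 25):
       ln S = ln H + 5.190 * (1/A - 1/25)."""
    return math.exp(math.log(height_ft) + 5.190 * (1 / age_yr - 1 / 25))

# A 40 ft tall stand at age 15 projects to roughly 46 ft at the index age
print(round(site_index(40, 15), 1))  # → 45.9
# At the index age itself, site index equals measured height by definition
print(round(site_index(60, 25), 6))  # → 60.0
```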
Data for the development of site index equations can come from measurement of tree or stand height and age from temporary or permanent inventory plots or from stem analysis. Inventory plot data are typically used for anamorphic curves only and sampling bias can occur when poor sites are over represented in older age classes. Stem analysis can be used for polymorphic curves but requires destructive sampling and it can be expensive to obtain such data.
We are going to examine three different methods for developing site index equations:
Guide curve method Difference equation method Parameter prediction method Guide Curve Method
The guide curve method is commonly used to generate anamorphic site index equations. Let’s begin with a commonly used model form:
$$ ln \ H_d =b_0 +b_1A^{-1} = b_0 + b_1\frac{1}{A}$$
Parameterizing this model results in a “guide curve” (the average line for the sample data) that is used to create the individual height/age development curves that parallel the guide curve. For a particular site index the equation is:
$$ln \ H_d = b_{0i} +b_1A^{-1}$$
where
b0i is the unique y-intercept for that site index. By definition, when A = A0 (index age), H is equal to site index S. Thus:
$$b_{0i} = ln \ S - b_1A_0^{-1}$$
Substituting
b0i into the equation for ln Hd above gives:
$$ ln \ H = ln \ S + b_1(A^{-1} - A_0^{-1})$$
which can be used to generate site index curves for given values of
S and A0 and a range of ages ( A). The equation can be algebraically rearranged as:
$$ln \ S = ln \ H -b_1(A^{-1} - A_0^{-1}) = ln (H) - b_1(\frac {1}{A} - \frac {1}{A_0})$$
This is the form to estimate site index (height at index age) when height and age data measurements are given. This process is sound only if the average site quality in the sample data is approximately the same for all age classes. If the average site quality varies systematically with age, the guide curve will be biased.
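The guide curve procedure can be sketched on synthetic data; the coefficients and ages below are illustrative, not from the source:

```python
import numpy as np

# Synthetic height/age pairs generated from a known guide curve
# ln H = b0 + b1 / A  (b0, b1 chosen arbitrarily for the illustration)
b0_true, b1_true = 4.5, -14.0
ages = np.array([10.0, 15.0, 20.0, 30.0, 40.0, 50.0, 60.0])
heights = np.exp(b0_true + b1_true / ages)

# Fit ln H = b0 + b1 * (1/A) by ordinary least squares
X = np.column_stack([np.ones_like(ages), 1.0 / ages])
b0_hat, b1_hat = np.linalg.lstsq(X, np.log(heights), rcond=None)[0]

def site_index(height, age, index_age=50.0):
    # ln S = ln H - b1 * (1/A - 1/A0), rearranged from the guide-curve equation
    return float(np.exp(np.log(height) - b1_hat * (1.0 / age - 1.0 / index_age)))

# A stand measured exactly at the index age has S equal to its height
print(round(site_index(80.0, 50.0), 6))  # → 80.0
```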
Difference Equation Method
This method requires either monumented plot, tree remeasurement data, or stem analysis data. The model is fit using differences of height and specific ages. This method is appropriate for anamorphic and polymorphic curves, especially for longer and/or multiple measurement periods. Schumacher (after Clutter et al. 1983) used this approach when estimating site index using the reciprocal of age and the natural log of height. He believed that there was a linear relationship between Point A (1/
A1, ln H1) and Point B (1/ A2, ln H2) and defined β1 (slope) as:
$$\beta_1 = \dfrac {ln(H_2) - ln (H_1)}{(1/A_2)-(1/A_1)}$$
where
H1 and A1 were initial height and age, and H2 and A2 were height and age at the end of the remeasurement period. His height/age model became:
$$ln (H_2) = ln (H_1) +\beta_1 (\frac {1}{A_2} - \frac {1}{A_1})$$
Using remeasurement data, this equation would be fitted using linear regression procedures with the model
$$Y = \beta_1X$$
where
Y = ln(H2) – ln(H1)
X = (1/A2) – (1/A1)
After estimating β1, a site index equation is obtained from the height/age equation by letting
A2equal A0 (the index age) so that H2 is, by definition, site index ( S). The equation can then be written:
$$ln (S) = ln(H_1) + \beta_1(\frac {1}{A_0} - \frac {1}{A_1})$$
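A sketch of the difference equation fit on synthetic remeasurement records (the value of β1 and the records are illustrative, not from the source):

```python
import numpy as np

# Remeasurement records (H1, A1, H2, A2) generated from an exact Schumacher-type
# relation ln H2 = ln H1 + beta1 * (1/A2 - 1/A1), with beta1 = -16 (illustrative)
beta1_true = -16.0
records = []
for h1, a1, a2 in [(30.0, 15.0, 20.0), (45.0, 20.0, 30.0), (55.0, 25.0, 40.0)]:
    h2 = float(np.exp(np.log(h1) + beta1_true * (1 / a2 - 1 / a1)))
    records.append((h1, a1, h2, a2))

# Fit Y = beta1 * X through the origin: beta1 = sum(X*Y) / sum(X^2)
Y = np.array([np.log(h2) - np.log(h1) for h1, a1, h2, a2 in records])
X = np.array([1 / a2 - 1 / a1 for h1, a1, h2, a2 in records])
beta1_hat = float(X @ Y / (X @ X))

def site_index(h1, a1, index_age=50.0):
    # ln S = ln H1 + beta1 * (1/A0 - 1/A1)
    return float(np.exp(np.log(h1) + beta1_hat * (1 / index_age - 1 / a1)))

print(round(beta1_hat, 6))  # → -16.0
```

As a consistency check, the two measurements of the same tree (at A1 and at A2) yield the same site index.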
Parameter Prediction Method
This method requires remeasurement or stem analysis data, and involves the following steps:
Fitting a linear or nonlinear height/age function to the data on a tree-by-tree (stem analysis data) or plot by plot (remeasurement data) basis Using each fitted curve to assign a site index value to each tree or plot (put A0 in the equation to estimate site index) Relating the parameters of the fitted curves to site index through linear or nonlinear regression procedures
Trousdell et al. (1974) used this approach to estimate site index for loblolly pine and it provides an example using the Chapman-Richards (Richards 1959) function for the height/age relationship. They collected stem analysis data on 44 dominant and codominant trees that had a minimum age of at least 50 years. The Chapman-Richards function was used to define the height/age relationship:
$$H = \theta_1[1-e^{(-\theta_2A)}]^{[(1-\theta_3)^{-1}]}$$
where
H is height in feet at age A and θ1, θ2, and θ3 are parameters to be estimated. This equation was fitted separately to each tree. The fitted curves were all solved with A = 50 to obtain site index values ( S) for each tree.
The parameters θ1, θ2, and θ3 were hypothesized to be functions of site index, where
$$\theta_1 = \beta_1 + \beta_2S$$
$$\theta_2 = \beta_3 + \beta_4S+\beta_5S^2$$
$$\theta_3 = \beta_6 + \beta_7S + \beta_8S^2$$
The Chapman-Richards function was then expressed as:
$$H = (\beta_1+\beta_2S)\left\{1-e^{[-(\beta_3+\beta_4S+\beta_5S^2)A]}\right\}^{[(1-\beta_6-\beta_7S-\beta_8S^2)^{-1}]}$$
This function was then refitted to the data to estimate the parameters β1, β2, …β8. The estimating equations obtained for θ1, θ2, and θ3 were
$$\hat {\theta_1} = 63.1415+0.635080S$$
$$\hat {\theta_2} = 0.00643041 + 0.000124189S + 0.00000162545S^2$$
$$\hat {\theta_3} = 0.0172714 - 0.00291877S + 0.0000310915S^2$$
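As a numeric sketch, the fitted equations above can be evaluated for a chosen site index; note that, because the published coefficients are rounded, the resulting curve passes only approximately through S at the base age:

```python
import math

def chapman_richards_height(age, S):
    """Height (ft) at a given age for site index S, using the Trousdell et al. (1974)
       parameter-prediction equations quoted above."""
    t1 = 63.1415 + 0.635080 * S
    t2 = 0.00643041 + 0.000124189 * S + 0.00000162545 * S ** 2
    t3 = 0.0172714 - 0.00291877 * S + 0.0000310915 * S ** 2
    return t1 * (1 - math.exp(-t2 * age)) ** (1 / (1 - t3))

# Height/age points for one polymorphic curve (site index 60, base age 50)
for age in (10, 30, 50):
    print(age, round(chapman_richards_height(age, 60), 1))
```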
For any given site index value, these equations can be solved to give a particular Chapman-Richards site index curve. By substituting various values of age into the equation and solving for
H, we obtain height/age points that can be plotted for a site index curve. Since each site index curve has different parameter values, the curves are polymorphic.
Periodic Height Growth Data
An alternative to using current stand height as the surrogate for site quality is to use periodic height growth data, which is referred to as a growth intercept method. This method is practical only for species that display distinct annual branch whorls and is primarily used for juvenile stands because site index curves are less dependable for young stands.
This method requires the length measurement of a specified number of successive annual internodes or the length over a 5-year period. While the growth-intercept values can be used directly as measures of site quality, they are more commonly used to estimate site index.
Alban (1972) created a simple linear model to predict site index for red pine using 5-year growth intercept in feet beginning at 8 ft. above ground.
$$SI = 32.54 + 3.43X$$
where
SI is site index at a base age of 50 years and X is 5-year growth intercept in feet.
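A one-line sketch of Alban's equation (the function name is mine):

```python
def red_pine_site_index(growth_intercept_ft):
    """Alban (1972) red pine site index (base age 50) from the 5-year growth
       intercept in feet, measured starting at 8 ft above ground."""
    return 32.54 + 3.43 * growth_intercept_ft

# A 10 ft growth intercept gives a site index of about 67 ft
print(round(red_pine_site_index(10.0), 2))  # → 66.84
```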
Using periodic height growth data has the advantage of not requiring stand age or total tree height measurements, which can be difficult in young, dense stands. However, due to the short-term nature of the data, weather variation may strongly influence the internodal growth thereby rendering the results inaccurate.
Site index equations should be based on biological or mathematical theories, which will help the equation perform better. They should behave logically and not allow unreasonable values for predicted height, especially at very young or very old ages. The equations should also contain an asymptotic parameter to control unbounded height growth at old age. The asymptote should be some function of site index such that the asymptote increases with increases of site index.
When using site index, it is important to know the base age for the curve before use. It is also important to realize that site index based on one base age cannot be converted to another base age. Additionally, similar site indices for different species do not mean similar sites even when the same base age is used for both species. You have to understand how height and age were measured before you can safely interpret a site index curve. Site index is not a true measure of site quality; rather it is a measure of a tree growth component that is affected by site quality (top height is a measure of stand development, NOT site quality). |
If $I=(a_1,\dots, a_m)$ and $J=(b_1, \dots, b_n)$ are ideals in a commutative ring, then we have\[IJ=(a_ib_j),\]where $1\leq i \leq m$ and $1\leq j \leq n$.
Proof.
Here $I=(x, 2)$ and $J=(x, 3)$ are ideals of the polynomial ring $\Z[x]$.
(a) Prove that $IJ=(x, 6)$.
Note that the product ideal $IJ$ is generated by the products of generators of $I$ and $J$, that is, $x^2, 2x, 3x, 6$. That is, $IJ=(x^2, 2x, 3x, 6)$.
It follows that $IJ$ contains $x=3x-2x$ as well. As the first three generators can be generated by $x$, we deduce that $IJ=(x, 6)$.
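Part (a)'s computation can be double-checked symbolically; a sketch using SymPy (not part of the original solution):

```python
from sympy import symbols, expand, rem

x = symbols('x')

# Generators of the product ideal IJ: the pairwise products of the
# generators x, 2 of I and x, 3 of J, i.e. x^2, 2x, 3x, 6.
# x is a Z[x]-linear combination of them: x = 1*(3x) + (-1)*(2x)
combination = expand(1 * (3 * x) + (-1) * (2 * x))
print(combination == x)  # → True

# and each of x^2, 2x, 3x is divisible by x, so IJ = (x, 6)
print(all(rem(g, x) == 0 for g in (x**2, 2 * x, 3 * x)))  # → True
```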
(b) Prove that the element $x\in IJ$ cannot be written as $x=f(x)g(x)$, where $f(x)\in I$ and $g(x)\in J$.
Assume that $x=f(x)g(x)$ for some $f(x)\in I$ and $g(x) \in J$. Since $x$ is irreducible in the UFD $\Z[x]$, we have either\[f(x)=\pm x, g(x)=\pm 1, \text{ or } f(x)=\pm 1, g(x)=\pm x.\]
In the former case, we have $1\in J$ and hence $J=\Z[x]$, which is a contradiction. Similarly, in the latter case, we have $1\in I$ and hence $I=\Z[x]$, which is a contradiction. Thus, in either case, we reach a contradiction.
Hence, $x$ cannot be written as the product of elements in $I$ and $J$.
Comment.
Let $I$ and $J$ be ideals of a commutative ring $R$. Then the product of the ideals $I$ and $J$ is defined to be\[IJ:=\{\sum_{i=1}^k a_i b_i \mid a_i\in I, b_i\in J, k\in \N\}.\]
The above problem shows that, in general, there are elements in the product $IJ$ that cannot be expressed simply as $ab$ for $a\in I$ and $b\in J$.
This picture is a copy of the pattern on my curtains. The points of a hexagonal lattice are each coloured with one of four possible colours. It has translational symmetry in two directions: a vertical shift by four lines and a horizontal shift by six lines. One generating patch is shown with a solid line.
However if you change just three colours of the 24 in the generating patch, many more symmetries appear. This picture shows the same pattern, except that the colours within the dotted line have been changed. The new symmetries are: 180 degree rotational symmetry about any of the marked points and also a horizontal translation by three composed with the operation which switches dark blue for light blue and dark pink for light pink. The latter symmetry can be applied twice to get the horizontal translation by six symmetry from before.
If the person who designed this pattern didn't know about the extra symmetries, they might have just filled in the 24 colours in the generating patch at random and then translated it to produce the whole curtain. The number of ways of doing this is $4^{24}$.
However, they might have started by generating the full pattern. If they had done that, I think they would have only had 6 choices (both the rotation and the translation by 3 should cut the number of choices in half). If instead they had decided to start with the full pattern and then make up to three changes to it, they would have needed to make the 6 original choices, then choose three of 24 colours to change and the colours to change them to. The number of ways of doing this is $4^6 \times \binom{24}{3} \times 4^3$.
Thus, the probability that the first scenario (choosing 24 colours at random) would produce a pattern so close to the modified one is $\frac{4^6 \times \binom{24}{3} \times 4^3}{4^{24}} \approx 1.885 \times 10^{-6}$. This might suggest that they broke the full symmetry group intentionally to upset people like me. However, I do not know how to calculate the chance that a randomly chosen generating 6 by 4 patch would be close to a pattern with
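A quick sketch of this probability computation:

```python
from math import comb

# Ways to fill the 24-cell generating patch completely at random
random_patches = 4 ** 24

# Ways to build a fully symmetric patch (6 free cells), then pick 3 of the
# 24 cells to change and a replacement colour for each
near_symmetric = 4 ** 6 * comb(24, 3) * 4 ** 3

probability = near_symmetric / random_patches
print(f"{probability:.3e}")  # → 1.885e-06
```

(Strictly this counts ordered choices that may repeat a pattern, so it is an upper bound on the probability, but it matches the figure quoted above.)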
some extra symmetries. My questions Suppose I generate a curtain pattern by choosing an $n$ by $m$ generating patch from $c$ possible colours. This pattern automatically has symmetry group $\mathbb{Z}^2$. What is the average number of colours I will have to change to get a larger group?
(Or more precisely, if I want the group to be, say, twice as big, how many changes will I have to make on average?) It's likely this depends on the underlying lattice (here, it's hexagonal) but I'm looking for techniques to answer this kind of question rather than numerical answers.
The modified pattern has symmetry group generated by:
$a$, the vertical translation by 4 $b$, the horizontal translation by 3 combined with the colour switching $c$, the rotation by 180 degrees about one of the marked points.
It is $<a,b,c \mid c^2=e, cac=a^{-1}, cbc=b^{-1}>$, which is like two infinite dihedral groups combined together. Does this group have a name?
Why would the designer intentionally break such a nice symmetry group? |
This is for a thought experiment I'm programming.
Let's say I have 2 cars; 1 in front, 1 in back. I'm adjusting the rear car's acceleration so that it will never be closer than 1 second's worth of distance from a half car's length behind the front car.
I have created the below formulae to calculate the acceleration, velocity, and position of the rear vehicle, as a result. Since this for a programmatic animation, these formulae are calculated for every frame of animation.
$$A_r = \frac{V_f + \frac{1}{2}A_f+S_f-2V_r-S_r}{2}$$ $$V_r = V_r + A_r$$ $$S_r = S_r + V_r$$
$A_r$ = Rear Car's Acceleration
$V_r$ = Rear Car's Velocity $S_r$ = Rear Car's Position of Front Bumper $A_f$ = Front Car's Acceleration $V_f$ = Front Car's Velocity $S_f$ = Front Car's Position of Rear Bumper minus a half-car length
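A minimal per-frame sketch of these update rules (the dictionary layout and the stationary front car are my own assumptions for the illustration):

```python
def step(front, rear):
    """One animation frame of the update rules above.
    Each car is a dict with keys 's' (position), 'v' (velocity), 'a' (acceleration)."""
    # Rear car's acceleration from the derived formula
    rear['a'] = (front['v'] + 0.5 * front['a'] + front['s']
                 - 2 * rear['v'] - rear['s']) / 2
    rear['v'] += rear['a']
    rear['s'] += rear['v']
    front['v'] += front['a']
    front['s'] += front['v']

# Front car parked 100 units ahead (its position already includes the half-car offset)
front = {'s': 100.0, 'v': 0.0, 'a': 0.0}
rear = {'s': 0.0, 'v': 0.0, 'a': 0.0}
for _ in range(50):
    step(front, rear)
print(rear['s'] <= front['s'], abs(front['s'] - rear['s']) < 1e-6)  # → True True
```

With a stationary front car the gap halves every frame, which matches the observation below that the simulation closes the distance in only a handful of iterations.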
For your understanding, here's how I generated the acceleration formula. I'm just combining two displacement formulae, and solving for the rear car's acceleration.
Word Explanation: The position of the rear car after 2 seconds should be equal to the position of the front car's position after 1 second. The displacement equation I'm using is:
$$S_t = V_x t + \frac{1}{2} A_x t^2 + S_x$$
Where $S_t$ = Target Position
Therefore: The position of the front car after 1 second:
$$S_t = V_f + \frac{1}{2} A_f + S_f$$
And, the position of the second car after 2 seconds:
$$S_t = 2V_r + 2A_r + S_r$$
And now, to solve for $A_r$:
$$2A_r = S_t - 2V_r - S_r$$
$$A_r = \frac{S_t - 2V_r - S_r}{2}$$
And lastly, to plug in $S_t$ from the front car:
$$A_r = \frac{V_f + \frac{1}{2}A_f+S_f-2V_r-S_r}{2}$$
Now, this works
perfectly; after every iteration, the car will accelerate up to a safe speed, and slow down so that it can stop by the time $S_r = S_f$.
Since I'm simulating this in a computer application, the issue I'm having is that when I simulate it, it takes very few iterations to make $S_r \approx S_f$. I'd like to increase the number of iterations required to achieve the same result. In other words, my simulation currently takes $\approx$10 frames to complete this simulation, but I'd like it to take at least 200 frames (so, let's say I need 20× more iterations).
However, if I make the seemingly obvious change (with the below equation), the car will "drift" past the safe point and "crash" into the vehicle, driving past it entirely, before its velocity reverses to correct itself.
$$V_r = V_r + \frac{A_r}{20}$$
It should also be noted that if I keep the time variables in my displacement formulae, and use that as a value (for example, 2 instead of 1), a similar result occurs (although less pronounced):
$$A_r = \frac{2(V_f \cdot t + \frac{1}{2}A_f \cdot t^2 + S_f - 2V_r \cdot t - S_r)}{(2t)^2}$$
How exactly would I go about incorporating some sort of "time step" such that it would increase the number of iterations by a factor of $x$, and would achieve the same result at the end of the iterations? |
In BDF schemes for $\dot y = f$, one uses $$f(t_n)=\dot y(t_n)$$and tries to approximate $h\dot y(t_n)\approx \sum_{j=0}^k\alpha_j y_{n-j}$ by the current value $y_n$ (that is to be computed) and the $k$ previously computed approximations.
In the presented approach, in $(5)$, $y$ is approximated as a polynomial $p$ in $t$ fitted to $y_{n-j}$, so that the time derivative of the polynomial at $t_n$ approximates $\dot y(t_n)$ as desired. With that, for a given approximation order $k$, one can read off the coefficients $\alpha_j$ by evaluating $\dot p$ at $t_n$:
For $k=1$:
$$\quad \dot y(t_n) \approx \dot p(t_n) = \frac{1}{h}(y_n - y_{n-1})$$which gives that $h\dot y(t_n)$ is approximated by $$1\cdot y_n + (-1)\cdot y_{n-1}.$$
For $k=2$ the terms read:$$k=2: \quad \dot y(t_n) \approx \dot p(t_n) = \frac{1}{h}(y_n - y_{n-1})+\frac{1}{2h^2}[(t_n-t_n)\nabla^2y_n + (t_n-t_{n-1})\nabla^2y_n]$$which, with $t_n-t_{n-1}=h$ and $\nabla^2y_n = y_n - 2y_{n-1} + y_{n-2}$ gives that $h\dot y(t_n)$ is approximated by $$\frac{3}{2}\cdot y_n + (-2)\cdot y_{n-1} + \frac{1}{2}y_{n-2}$$.
And so on... |
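The same coefficients can be derived programmatically by requiring the formula to be exact for polynomials up to degree k; a sketch (the linear-system formulation is mine, not from the answer):

```python
import numpy as np

def bdf_coefficients(k):
    """Weights alpha_j such that sum_j alpha_j * y_{n-j} approximates h * y'(t_n),
    exact for polynomials up to degree k (nodes t_{n-j} = -j*h, with h = 1)."""
    # Conditions: sum_j alpha_j * (-j)^m = d/dt [t^m] at t = 0, i.e. 1 if m == 1 else 0
    A = np.array([[(-j) ** m for j in range(k + 1)] for m in range(k + 1)], dtype=float)
    b = np.array([1.0 if m == 1 else 0.0 for m in range(k + 1)])
    return np.linalg.solve(A, b)

print(bdf_coefficients(1))  # recovers the k=1 weights 1, -1
print(bdf_coefficients(2))  # recovers the k=2 weights 3/2, -2, 1/2
```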
Wave energy converters in coastal structures Introduction
Fig 1: Construction of a coastal structure.
Coastal works along European coasts comprise very diverse structures. Many coastal structures are ageing and face problems of stability, sustainability and erosion. Moreover, climate change, and especially sea level rise, represents a new danger for them. Coastal dykes in Europe will indeed be exposed to waves with heights greater than those they were designed to withstand, in particular structures built in shallow water, where the depth limits the maximal wave amplitude through breaking.
This necessary adaptation will be costly but will provide an opportunity to integrate renewable energy converters into the new maritime structures along the coasts, and in particular in harbours. This initiative will contribute to reducing greenhouse gas emissions. The produced energy can be used directly for energy consumption in the harbour area and will reduce the carbon footprint of harbours by feeding docked ships with green energy. Nowadays these ships run their engines to produce electric power on board even while docked. Integration of wave energy converters (WEC) in coastal structures will favour the emergence of the new concept of future zero-emission harbours.
Wave energy and wave energy flux
For regular water waves, the time-mean wave energy density E per unit horizontal area on the water surface (J/m²) is the sum of the kinetic and potential energy densities per unit horizontal area. The potential energy density is equal to the kinetic energy density [1], each contributing half of the time-mean wave energy density E, which is proportional to the wave height squared according to linear wave theory [1]:
(1)
[math]E= \frac{1}{8} \rho g H^2[/math]
where ρ is the water density (kg/m³), g the gravitational acceleration (m/s²) and [math]H[/math] the wave height (m) of regular water waves. As the waves propagate, their energy is transported. The energy transport velocity is the group velocity. As a result, the time-mean wave energy flux per unit crest length (W/m) perpendicular to the wave propagation direction, is equal to
[1]:
(2)
[math] P= Ec_{g}[/math]
with [math]c_{g}[/math] the group velocity (m/s). Due to the dispersion relation for water waves under the action of gravity, the group velocity depends on the wavelength λ (m), or equivalently, on the wave period T (s). Further, the dispersion relation is a function of the water depth h (m). As a result, the group velocity behaves differently in the limits of deep and shallow water, and at intermediate depths [math](\frac{\lambda}{20} \lt h \lt \frac{\lambda}{2})[/math].
Application for wave energy converters
For regular waves in deep water:
[math]c_{g} = \frac{gT}{4\pi} [/math] and [math]P_{w1} = \frac{\rho g^2}{32 \pi} H^2 T[/math]
The time-mean wave energy flux per unit crest length is used as one of the main criteria to choose a site for wave energy converters.
For irregular waves in deep water:
[math]P_{w1} = \frac{\rho g^2}{64 \pi} H_{m0}^2 T_e[/math]
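A sketch of these two deep-water formulas (the sea-water density value is an assumption):

```python
import math

RHO = 1025.0   # sea-water density, kg/m^3 (assumed value)
G = 9.81       # gravitational acceleration, m/s^2

def flux_regular(H, T):
    """Time-mean energy flux (W/m) of a regular deep-water wave of height H and period T."""
    return RHO * G ** 2 * H ** 2 * T / (32 * math.pi)

def flux_irregular(Hm0, Te):
    """Same for irregular deep-water waves, with significant wave height Hm0
       and energy period Te."""
    return RHO * G ** 2 * Hm0 ** 2 * Te / (64 * math.pi)

# A 2 m, 8 s swell carries roughly 31 kW per metre of crest
print(round(flux_regular(2.0, 8.0) / 1000, 1))  # → 31.4
```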
If local data are available ([math]H_{m0}[/math], [math]T_e[/math]) for a sea state, through in-situ wave buoys for example, satellite data or numerical modelling, the last equation giving the wave energy flux [math]P_{w1}[/math] provides a first estimation. Averaged over a season or a year, it represents the maximal energetic resource that can be theoretically extracted from wave energy. If the directional spectrum of sea state variance F (f,[math]\theta[/math]) is known, with f the wave frequency (Hz) and [math]\theta[/math] the wave direction (rad), a more accurate formulation is used:
(4)
[math]P_{w2} = \rho g\int\int c_{g}(f,h)F(f,\theta) dfd \theta[/math]
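As a rough numerical sketch of the deep-water flux formulas above (the sea-water density value and the function names are my own assumptions, not from the source):

```python
import math

RHO = 1025.0  # assumed sea-water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def flux_regular(H, T):
    """Time-mean energy flux (W/m) of regular deep-water waves: rho g^2 H^2 T / (32 pi)."""
    return RHO * G**2 * H**2 * T / (32 * math.pi)

def flux_irregular(Hm0, Te):
    """Time-mean energy flux (W/m) of irregular deep-water waves: rho g^2 Hm0^2 Te / (64 pi)."""
    return RHO * G**2 * Hm0**2 * Te / (64 * math.pi)

# Example: 2 m waves with an 8 s period carry roughly 31 kW per metre of crest length.
print(round(flux_regular(2.0, 8.0)))   # -> 31399
```

Note that for the same height and period the irregular-wave estimate is exactly half the regular-wave one, as the factor 64 versus 32 in the formulas indicates.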
Fig 2: Time-mean wave energy flux along West European coasts [2].
It can be shown easily that equation (4) can be reduced to (3) with the hypothesis of regular waves in deep water. The directional spectrum is deduced from directional wave buoys, SAR images or advanced spectral wind-wave models, known as third-generation models, such as WAM, WAVEWATCH III, TOMAWAC or SWAN. These models solve the spectral action balance equation without any a priori restrictions on the spectrum for the evolution of wave growth.
From the TOMAWAC model, the nearshore wave atlas ANEMOC along the coasts of Europe and France, based on the numerical modelling of the wave climate over 25 years, has been produced [3]. Using equation (4), the time-mean wave energy flux along West European coasts is obtained (see Fig. 2). Equation (4) still has some limits, like the definition of the bounds of the integration. Moreover, obtaining data on the wave energy near coastal structures in shallow or intermediate water requires numerical models that are able to represent the physical processes of wave propagation: refraction, shoaling, dissipation by bottom friction or by wave breaking, interactions with tides, and diffraction by islands.
The wave energy flux is therefore usually calculated for water depths greater than 20 m. This maximum energetic resource calculated in deep water will be limited in the coastal zone:
- at low tide, by wave breaking;
- at high tide, in storm events when the wave height exceeds the maximal operating conditions;
- by screen effect due to the presence of capes, spits, reefs, islands, ...
Technologies
According to the International Energy Agency (IEA), more than a hundred wave energy conversion systems are under development in the world. Among them, many can be integrated into coastal structures. Evaluations based on objective criteria are necessary in order to rank these systems and to determine the most promising solutions.
Criteria are in particular:
- the converter efficiency: the aim is to estimate the energy produced by the converter. The efficiency gives an estimate of the number of kWh produced by the machine, but not the cost.
- the converter survivability: the capacity of the converter to survive in extreme conditions. The survivability gives an estimate of the cost, considering that the weaker the extreme loads are in comparison with the mean load, the smaller the cost.
Unfortunately, few data are available in the literature. In order to determine the characteristics of the different wave energy technologies, it is necessary to first class them in four main families [2].
An interesting result is that the maximum average wave power [math]P_{abs}[/math] (W) that a point absorber can absorb from the waves does not depend on its dimensions [4]. It is theoretically possible to absorb a lot of energy with only a small buoy. It can be shown that for a body with a vertical axis of symmetry (but otherwise arbitrary geometry) oscillating in heave, the capture (or absorption) width [math]L_{max}[/math] (m) is as follows [4]:
[math]L_{max} = \frac{P_{abs}}{P_{w}} = \frac{\lambda}{2\pi}[/math] or [math]1 = \frac{P_{abs}}{P_{w}} \frac{2\pi}{\lambda}[/math]
Fig 4: Upper limit of mean wave power absorption for a heaving point absorber.
where [math]{P_{w}}[/math] is the wave energy flux per unit crest length (W/m). An optimally damped buoy responds however efficiently to a relatively narrow band of wave periods.
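As a quick illustration of this bound (a sketch only: it combines the standard deep-water dispersion relation λ = gT²/2π with the capture-width limit above, and the function names are mine):

```python
import math

G = 9.81  # m/s^2

def max_capture_width(T):
    """Upper bound L_max = lambda / (2 pi) for a heaving point absorber, deep water."""
    wavelength = G * T**2 / (2 * math.pi)   # deep-water dispersion relation
    return wavelength / (2 * math.pi)

def max_absorbed_power(T, Pw):
    """Upper bound on absorbed power (W): L_max times the incident flux Pw (W/m)."""
    return max_capture_width(T) * Pw

# Example: 10 s waves over a 30 kW/m resource
print(round(max_capture_width(10.0), 1))            # -> 24.8 (metres)
print(round(max_absorbed_power(10.0, 30e3) / 1e3))  # -> 745 (kW)
```

So even for a point absorber of negligible size, a 30 kW/m sea with 10 s waves caps the mean absorbed power at well under 1 MW, consistent with the estimates quoted below.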
Babarit and Hals [5] derive this upper limit for the mean annual power in irregular waves at some typical locations where one could be interested in installing wave energy devices. The mean annual power absorption tends to increase linearly with the wave power resource. Overall, one can say that for a typical site whose resource is between 20 and 30 kW/m, the upper limit of mean wave power absorption is about 1 MW for a heaving WEC with a capture width between 30 and 50 m.
In order to complete these theoretical results and to describe the efficiency of the WEC in practical situations, the capture width ratio [math]\eta[/math] is also usually introduced. It is defined as the ratio between the absorbed power and the available wave power resource per meter of wave front times a relevant dimension B [m].
[math]\eta = \frac{P_{abs}}{P_{w}B} [/math]
The choice of the dimension B depends on the working principle of the WEC. Most of the time, it should be chosen as the width of the device, but in some cases another dimension is more relevant. Estimates of this ratio [math]\eta[/math] are given in [5]: 33 % for OWCs, 13 % for overtopping devices, 9-29 % for heaving buoys, 20-41 % for pitching devices. For the energy converted to electricity, one must moreover take into account the energy losses in the other components of the system.
Civil engineering
Never forget that energy conversion is only a secondary function of the coastal structure; its primary function is still protection. It is necessary to verify whether the integration of a WEC modifies the performance criteria for overtopping and stability, and to assess the consequences for the construction cost.
Integration of WECs in coastal structures will always be easier for a new structure than for an existing one. In the latter case, it requires some knowledge of the existing structure. Solutions differ according to the sea state but also to the type of structure (rubble-mound breakwaters, caisson breakwaters with typically vertical sides). Some types of WEC are more appropriate for some types of coastal structures.
Fig 5: Several OWC (Oscillating water column) configurations (by Wavegen – Voith Hydro).
Environmental impact
Wave absorption, if it is significant, will change the hydrodynamics along the structure. If there is a mobile bottom in front of the structure, a sand deposit can occur. Ecosystems can also be altered by the change in hydrodynamics and by the acoustic noise generated by the machines.
Fig 6: Finistere area and locations of the six sites (google map).
Study case: Finistere area
The Finistere area is an interesting study case because it is located at the far west of the Brittany peninsula and consequently receives the largest wave energy flux along the French coasts (see Fig. 2). This area with a very ragged coast moreover gathers many commercial ports, fishing ports and yachting ports. The area produces only a small part of the electricity it consumes and is located far from power plants. There is therefore a need for locally produced renewable energy. This issue is particularly important for islands. The production of electricity by wave energy will have seasonal variations: the wave energy flux is indeed larger in winter than in summer. Consumption peaks in winter due to the heating of buildings, but summer consumption is also strong due to the arrival of tourists.
Six sites are selected (see figure 7) for a preliminary study of wave energy flux and capacity of integration of wave energy converters. The wave energy flux is expected to be in the range of 1 – 10 kW/m. The length of each breakwater exceeds 200 meters. The wave power along each structure is therefore estimated between 200 kW and 2 MW. Note that there exist much longer coastal structures like for example Cherbourg (France) with a length of 6 kilometres.
(1) Roscoff (300 meters), (2) Molène (200 meters), (3) Le Conquet (200 meters), (4) Esquibien (300 meters), (5) Saint-Guénolé (200 meters), (6) Lesconil (200 meters).
Fig.7: Finistere area, the six coastal structures and their length (google map).
The wave power flux along the structure depends on local parameters: the water depth at the structure toe, the presence of capes, the direction of the waves and the orientation of the coastal structure. See figure 8 for the statistics of wave directions measured by a wave buoy located at the Pierres Noires lighthouse. These measurements show that structures well oriented to westerly waves should be chosen in priority. Peaks of consumption often occur with low winter temperatures accompanied by winds from east-north-east directions. Structures well oriented to easterly waves could therefore also be interesting, even if the mean production is weak.
Fig 8: Wave measurements at the Pierres Noires Lighthouse.
Conclusion
Wave energy converters (WEC) in coastal structures can be considered as a land-based renewable energy. The expected energy can be compared with the energy of onshore wind farms, but not with offshore wind farms, whose number and power are much larger. As a land-based system, maintenance will be easy. Besides the energy production, the advantages of such systems are:
- a “zero emission” port;
- industrial tourism;
- testing of WECs for future offshore installations.
Acknowledgement
This work is in progress in the frame of the national project EMACOP funded by the French Ministry of Ecology, Sustainable Development and Energy.
See also
Waves
Wave transformation
Groynes
Seawall
Seawalls and revetments
Coastal defense techniques
Wave energy converters
Shore protection, coast protection and sea defence methods
Overtopping resistant dikes
References
[1] Mei C.C. (1989). The applied dynamics of ocean surface waves. Advanced Series on Ocean Engineering. World Scientific Publishing Ltd.
[2] Mattarolo G., Benoit M. and Lafon F. (2009). Wave energy resource off the French coasts: the ANEMOC database applied to the energy yield evaluation of Wave Energy. 10th European Wave and Tidal Energy Conference (EWTEC 2009), Uppsala, Sweden.
[3] Benoit M. and Lafon F. (2004). A nearshore wave atlas along the coasts of France based on the numerical modeling of wave climate over 25 years. 29th International Conference on Coastal Engineering (ICCE 2004), Lisbon, Portugal, 714-726.
[4] De O. Falcão A.F. (2010). Wave energy utilization: a review of the technologies. Renewable and Sustainable Energy Reviews, 14(3), 899-918.
[5] Babarit A. and Hals J. (2011). On the maximum and actual capture width ratio of wave energy converters. 11th European Wave and Tidal Energy Conference (EWTEC 2011), Southampton, UK. |
I define an $n$-labeling of a directed acyclic graph $G = (V, E)$ as a function $f$ from $V$ to the power set of $\{1, \ldots, n\}$ such that for any $x, y \in V$, $x \neq y$, we have $f(y) \subset f(x)$ iff $x \rightarrow^+ y$ (i.e. there is a path of length >0 from $x$ to $y$ in $G$). Clearly, any DAG $G$ admits a $|V|$-labeling (by assigning a unique id to each vertex and setting each vertex's label to the set consisting of its own id and the ids of the vertices reachable from it), but this upper bound is not tight (for instance, a perfect binary tree of height $h$ has $2^{h+1} - 1$ vertices but admits a $2^h$-labeling by assigning a unique number to each leaf). Hence my question: Given a graph $G$, what is the smallest $n$ such that there exists an $n$-labeling of $G$?
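To make the $|V|$-labeling construction concrete, here is a small Python sketch (the function name and the example graph are mine) that builds each vertex's label as its own id together with the ids of the vertices it can reach, so that the subset property can be checked directly:

```python
def reachability_labeling(vertices, edges):
    """Label each vertex v with {v} plus all vertices reachable from v."""
    succ = {v: set() for v in vertices}
    for a, b in edges:
        succ[a].add(b)

    labels = {}
    def reach(v):  # memoised DFS; terminates because the graph is acyclic
        if v not in labels:
            labels[v] = set().union({v}, *[reach(w) for w in succ[v]])
        return labels[v]
    for v in vertices:
        reach(v)
    return labels

# A small DAG: 1 -> 2 -> 4 and 1 -> 3 -> 4
labels = reachability_labeling([1, 2, 3, 4], [(1, 2), (1, 3), (2, 4), (3, 4)])
# f(y) is a subset of f(x) exactly when there is a path from x to y:
assert labels[1] == {1, 2, 3, 4} and labels[2] == {2, 4}
assert labels[2] <= labels[1] and not labels[2] <= labels[3]
```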
(This problem seems related to things such as comparability graphs and geometric containment orders, but I could not find the exact terms. It seems likely to me that this problem is already known with a different terminology, so I'd be happy to get relevant references if you know some.)
(Another possible choice of condition would be: for any $x, y \in V$, we have $f(y) \subsetneq f(x)$ iff $x \rightarrow^+ y$. It's not equivalent (it allows some nodes with the same children to carry the same labels) and I'm not sure of which one is the more natural.) |
I have two scenarios where I use arrows with a superscripted asterisk: math mode and tikz-cd diagrams. I would like such an arrow to look the same in both scenarios, i.e., with respect to the positioning of the asterisk.
Consider the below MWE. This is how I would like the arrow to look.
MWE
\documentclass{article}
\usepackage[dvipsnames]{xcolor}
\usepackage{tikz-cd}
\newcommand*\dirinfsymname{Rightarrow}
\newcommand*\directdatacolourname{PineGreen}
\newcommand*\directdatacolour{\textcolor{\directdatacolourname}}
\newcommand*\dirinfsym{\mathbin{\directdatacolour{\Rightarrow}}}
\newcommand{\pathdirinfsym}[1][]{\mathrel{
  \vphantom{\dirinfsym{#1}}
  \smash{\dirinfsym{#1}}
  \vphantom{\to}^{\textcolor{PineGreen}{*}}}}
\begin{document}
$a \pathdirinfsym b$
\end{document}
This outputs:
Now, consider this MWE.
MWE
\documentclass{article}
\usepackage[dvipsnames]{xcolor}
\usepackage{tikz,tikz-cd}
\usetikzlibrary{shapes,fit}
\usetikzlibrary{positioning}
\usetikzlibrary{decorations.pathmorphing}
\newcommand*\dirinfsymname{Rightarrow}
\newcommand*\directdatacolourname{PineGreen}
\newcommand*\directdatacolour{\textcolor{\directdatacolourname}}
\newcommand*\dirinfsym{\mathbin{\directdatacolour{\Rightarrow}}}
\begin{document}
\begin{tikzcd}[
    column sep=small,
    cells={nodes={draw=black, ellipse, anchor=center, minimum height=2em}}]
  a \arrow[\dirinfsymname, \directdatacolourname, bend left]{rrrrr}{*}
  & a \arrow[\dirinfsymname, \directdatacolourname]{r}{*}
  & a
  & |[draw=none]|a\vphantom{1}
  & a
  & a
\end{tikzcd}
\end{document}
This outputs:
Notice how the two arrows have the asterisk positioned in the middle of the stem. I would like the asterisk positioned in the same position as in the first diagram.
Furthermore, I would like this to work for more than just \Rightarrow; I would like to be able to do the same for \rightarrow. |
Definition: Angular acceleration of an object undergoing circular motion is defined as the rate at which its angular velocity changes with time. Angular acceleration is also referred to as rotational acceleration. It is a vector quantity, that is, it has both magnitude and direction.
Angular acceleration is denoted by α and is expressed in units of rad/s², or radians per second squared.
Formula:
Angular acceleration can be expressed as given below,
\(\alpha =\frac{d\omega }{dt}\)
And also in terms of the double differentiation of the angular displacement, as given below,
\(\alpha = \frac{d^{2}\theta}{ dt^{2}}\)
Derivation:
Angular acceleration is the rate of change of angular velocity with respect to time, or we can write it as,
\(\alpha = \frac{d\omega }{dt}\)
Here, α is the angular acceleration that is to be calculated, in terms of rad/s², ω is the angular velocity given in terms of rad/s, and t is the time taken, expressed in seconds.
Angular velocity as we know, can be expressed as given below.
\(\omega = \frac{v}{r}\)
Here, ω is the angular velocity in terms of rad/s, v is the linear velocity and r is the radius of the path taken.
Angular velocity can also be expressed as the rate of change of angular displacement with respect to time, as given below.
\(\omega = \frac{d\theta }{dt}\)
where θ is the angular displacement of the object and t is the time taken.
Using the above formula, we can write angular acceleration α as
\(\alpha = \frac{d^{2}\theta }{dt^{2}}\)
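As a numerical sanity check of the two equivalent definitions above (α = dω/dt and α = d²θ/dt²), here is a small sketch using finite differences on a motion with constant angular acceleration; the numbers are purely illustrative:

```python
# Motion with constant angular acceleration a, starting from rest:
# theta(t) = 0.5*a*t^2 (rad) and omega(t) = a*t (rad/s).
a = 3.0                                # rad/s^2, chosen for the example
theta = lambda t: 0.5 * a * t * t      # angular displacement
omega = lambda t: a * t                # angular velocity

t0, h = 2.0, 1e-4
alpha_from_omega = (omega(t0 + h) - omega(t0 - h)) / (2 * h)               # d(omega)/dt
alpha_from_theta = (theta(t0 + h) - 2 * theta(t0) + theta(t0 - h)) / h**2  # d^2(theta)/dt^2

print(alpha_from_omega)   # approx 3.0
print(alpha_from_theta)   # approx 3.0
```

Both finite-difference estimates recover the constant angular acceleration a, confirming that the first derivative of ω and the second derivative of θ agree.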
Real Life Example: Example 1:
An ant is sitting at the edge of a rotating circular disc. Its angular velocity changes by 60 rad/s over 10 seconds. Calculate its angular acceleration during this time.
Solution:
Given: The change in angular velocity is equal to dω = 60 rad/s. The time taken for this change to occur is equal to t = 10s.
Using the formula for angular acceleration and substituting the above values, we get,
\(\alpha = \frac{d\omega }{dt}=\frac{60}{10}=6\;rad/s^{2}\)
Example 2:
The rear wheel of a motorcycle has an angular acceleration of 20 rad/s². What can be said about the change in its angular velocity after one second?
Answer:
Given: The angular acceleration of the wheel is equal to α = 20 \(rad/s^{2}\),
Time taken t = 1 s,
According to the formula for angular acceleration,
\(\alpha = \frac{d\omega }{dt}\)
Upon substituting the values, we get,
The change in angular velocity dω is
dω = αdt
dω =20×1 = 20 rad/s |
In the article
Pricing via utility maximization and entropy from Richard Rouge and Nicole El Karoui, they define the value function of the optimization problem as
\begin{align} V(x,C) = \dfrac{1}{\gamma} \ln \left( -\hat{U}(x,-C) \right) = \inf_{\pi \in \mathcal{A}} \dfrac{1}{\gamma} \ln E[\exp -\gamma ( X_{T}^{x, \pi} - C)], \end{align}
where $X_{t}^{x, \pi}$ is the wealth process with an initial endowment $x$ and portfolio strategy $\pi$. $C$ is the payoff of the contingent claim at time $T$. $\mathcal{A}$ is a closed convex cone, and $\hat{U}$ is the maximal expected utility function that is defined as \begin{align} \hat{U}(x,C) = \max_{\pi \in \mathcal{A}}E[U(X_{T}^{x, \pi} + C)], \end{align} where $U(y):= - \exp(-\gamma y)$, the negative exponential utility.
They use the duality between
free energy and the relative entropy that is\begin{align}\ln E[\exp B] = \sup_{Q \ll P} [E^{Q}[B] -h(Q \vert P)]\end{align}
to conclude that
\begin{align} V(x,C) = \inf_{\pi \in \mathcal{A}} \sup_{Q \ll P}\left\lbrace E^{Q}[-X_{T}^{x, \pi}+C] - \dfrac{1}{\gamma}h(Q \vert P)\right\rbrace \end{align} that is the value of a stochastic game between an agent and the market.
The authors claim in Theorem 2.1 that the price of the contingent claim is given by
\begin{align} p(x,C) /B_{0,T} = \sup_{Q_{T}}\left\lbrace E^{Q_{T}}[C] - \dfrac{1}{\gamma}h(Q_{T} \vert P)\right\rbrace - \sup_{Q_{T}}\left\lbrace - \dfrac{1}{\gamma}h(Q_{T} \vert P)\right\rbrace \end{align} where $Q_{T}$ runs through the set of probabilities $Q_{T} \sim P$. The authors claim this result holds in a general setting, but they do not explicitly show how they deduce this price. Does anyone know how to deduce this result under the general setting, or under the Brownian motion setting mentioned below (which appears in the article)? Or any reference?
By the way, the authors define the price $p(x,C)$ of a contingent claim as the smallest $p$ such that \begin{align} \sup_{\pi \in \mathcal{A}}E[U(X_{T}^{x+p, \pi}- C)] \geq \sup_{\pi \in \mathcal{A}}E[U(X_{T}^{x, \pi})] \end{align}
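What I can deduce is the easy first step (a sketch only, assuming the extra endowment $p$ is invested in the zero-coupon bond numeraire, so that it delivers $p/B_{0,T}$ at time $T$): since $U(y+c) = U(y)e^{-\gamma c}$ for the exponential utility,

```latex
\begin{align*}
\hat{U}(x+p,-C)
  &= \sup_{\pi\in\mathcal{A}} E[U(X_{T}^{x,\pi} + p/B_{0,T} - C)]
   = e^{-\gamma p/B_{0,T}}\,\hat{U}(x,-C),\\
\intertext{so the indifference condition $\hat{U}(x+p,-C)=\hat{U}(x,0)$ gives}
p/B_{0,T}
  &= \frac{1}{\gamma}\ln\bigl(-\hat{U}(x,-C)\bigr)
   - \frac{1}{\gamma}\ln\bigl(-\hat{U}(x,0)\bigr)
   = V(x,C) - V(x,0).
\end{align*}
```

The step I cannot reproduce is showing that $V(x,C) - V(x,0)$ equals the difference of the two entropic suprema, i.e. that the $E^{Q}[X_{T}^{x,\pi}]$ terms drop out once $Q$ is restricted to the relevant (equivalent) martingale measures.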
Even under the following Brownian motion model that they provide, they do not explicitly show how to deduce the price.
\begin{align} dP_{t}^{0} =r_{t}P_{t}^{0}dt, \ \ \ P_{0}^{0}=1, \end{align} is the dynamics of the riskless asset price.
\begin{align} dP_{t}^{i} = P_{t}^{i}\left[ b_{t}^{i}dt + \sum_{j=1}^{d} \sigma_{t}^{i j} dW_{t}^{j}\right] \end{align} are the dynamics of the risky assets, where $P_{t}^{i}$ is the price of the $i$-th risky asset at time $t$.
\begin{align} X_{0}^{x, \pi}=x \ \ \ \ dX_{t} = \left( X_{t} - \sum_{i=1}^{d} \pi_{t}^{i} \right)r_{t} dt + \sum_{i=1}^{d}\pi_{t}^{i}\left[ b_{t}^{i}dt + \sum_{j=1}^{d} \sigma_{t}^{i j} dW_{t}^{j}\right] \end{align} is the equation of the wealth process $X^{x, \pi}$, where $\pi_{t}^{i}$ is the amount invested in the $i$-th risky asset, with $i = 1,...,d$,
and the zero coupon bond $B_{t,T}$ is used as numeraire and follow the equation \begin{align} dB_{t,T} = B_{t,T} [(r_{t} + \sigma_{t}^{T^{*}}\sigma_{t})dt + \sigma_{t}^{T^{*}} \sigma_{t} dW_{t}] \end{align} where $\sigma_{t}^{T}$ is an $\mathbb{R}^{d}$-valued progressively measurable process, and $*$ denotes transpose.
This is the only result that I have not been able to prove. Thanks in advance. |
Defining parameters
Level: \( N \) = \( 210 = 2 \cdot 3 \cdot 5 \cdot 7 \)
Weight: \( k \) = \( 2 \)
Nonzero newspaces: \( 12 \)
Newforms: \( 32 \)
Sturm bound: \(4608\)
Trace bound: \(4\)
Dimensions
The following table gives the dimensions of various subspaces of \(M_{2}(\Gamma_1(210))\).
                    Total   New   Old
Modular forms        1344   249  1095
Cusp forms            961   249   712
Eisenstein series     383     0   383
Decomposition of \(S_{2}^{\mathrm{new}}(\Gamma_1(210))\)
We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension. |
I never thought these posts would get to 5.
Now I said I would do a population problem but I have decided to go with a radioactive decay problem instead. I will use the example of carbon dating as this is based on radioactive decay. But first, let’s look at the general equation for exponential decay:
\[
{A}\hspace{0.33em}{=}\hspace{0.33em}{A}_{0}{e}^{{-}{kt}}
\]
This formula gives the amount of something that is decreasing exponentially. \(A\) is the amount left after \(t\) seconds, hours, days, years or whatever, depending on the value of the rate-of-decrease factor, which is \(k\). \(A_0\) is the amount of something we started out with, the amount present at \(t = 0\). This formula makes sense when you look at the \(e^{-kt}\) part of the equation.
Now I have talked about \(e\) before. It is an irrational number, like 𝜋, and is approximately equal to 2.7183. \(k\) is a rate-of-decrease factor that depends on the material we are working with and the units of time \(t\). The \(-kt\) part, the exponent of \(e\), is a negative number since \(k\) and \(t\) are positive. I have explained negative exponents before, but \(e^{-kt}\) equals \(1/e^{kt}\). Now what happens to \(e^{kt}\) as \(t\) gets large? Any number greater than 1 (which \(e\) is) raised to a larger and larger power gets very big. And when you divide a big number into 1, you get a very small number. So \(A_0\) is being multiplied by a number that gets smaller and smaller as time goes on. That is why \(A\), the amount of material, is exponentially decreasing.
With that as a background, let’s talk about carbon dating. Any living thing has carbon in it. Indeed, all life on earth is carbon-based, which means that the molecules essential for life are composed of lots of carbon. Now carbon comes in different “flavors”. These flavors are called isotopes, and carbon has two main isotopes: carbon 12, the most abundant and non-radioactive, and carbon 14, which is radioactive. Fortunately, the amount of carbon 14 is very small – about 1 atom for every \(10^{12}\) atoms of carbon 12. However, in living things, this ratio is pretty much constant since carbon 14 is continually made in our atmosphere. But once something dies, the carbon 14 is not replenished and the amount present at the time of death starts decreasing.
So carbon dating is a process of determining the amount of carbon 14 left in a once living object then calculating the time it would take to have that much carbon 14 left.
So let’s go back to our equation for exponential decay. In order to use this equation for carbon dating, we need to know what \(k\) is for carbon 14. Now we know that the half-life of carbon 14 is 5700 years, which means that given any amount of carbon 14, only half that amount will be left in 5700 years due to radioactive decay. So let’s use this fact to calculate \(k\).
Taking this information and putting it into our equation results in
\[
{0}{.}{5}{A}_{0}\hspace{0.33em}{=}\hspace{0.33em}{A}_{0}{e}^{{-}{k}\times{5700}}
\]
So the left side shows that there is half (0.5) the initial amount and the right side shows that this occurs in 5700 years. So now, I will take the natural log (abbreviated as ln) of both sides. Note that since \(e\) is the base on the right side, taking the log to that base just results in the exponent \(-kt\). Also note that \(A_0\) appears on both sides of the equation, so we can divide both sides of the equation by \(A_0\), which makes \(A_0\) disappear:
\[
\begin{array}{l}
{{0}{.}{5}{A}_{0}\hspace{0.33em}{=}\hspace{0.33em}{A}_{0}{e}^{{-}{k}\times{5700}}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\Longrightarrow\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}{0}{.}{5}\hspace{0.33em}{=}\hspace{0.33em}{e}^{{-}{k}\times{5700}}}\\
{\Longrightarrow\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\ln{(}{0}{.}{5}{)}\hspace{0.33em}{=}\hspace{0.33em}\ln{(}{e}^{{-}{k}\times{5700}}{)}\hspace{0.33em}{=}\hspace{0.33em}{-}{k}\times{5700}}\\
{\Longrightarrow\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}{-}{0}{.}{6931}\hspace{0.33em}{=}\hspace{0.33em}{-}{k}\times{5700}}\\
{\Longrightarrow{k}\hspace{0.33em}{=}\hspace{0.33em}\frac{{-}{0}{.}{6931}}{5700}\hspace{0.33em}{=}\hspace{0.33em}{0}{.}{0001216}}
\end{array}
\]
So now that we know what \(k\) is, we can use the following equation to do our carbon dating:
\[
{A}\hspace{0.33em}{=}\hspace{0.33em}{A}_{0}{e}^{{-}{0}{.}{0001216}{t}}
\]
So let’s say a fossil has 35% (0.35) of the carbon 14 it had when it died. How old is the fossil?
\[
\begin{array}{l}
{{0}{.}{35}{A}_{0}\hspace{0.33em}{=}\hspace{0.33em}{A}_{0}{e}^{{-}{0}{.}{0001216}{t}}\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\Longrightarrow\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}{0}{.}{35}\hspace{0.33em}{=}\hspace{0.33em}{e}^{{-}{0}{.}{0001216}{t}}}\\
{\Longrightarrow\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}\ln{(}{0}{.}{35}{)}\hspace{0.33em}{=}\hspace{0.33em}\ln{(}{e}^{{-}{0}{.}{0001216}{t}}{)}\hspace{0.33em}{=}\hspace{0.33em}{-}{0}{.}{0001216}{t}}\\
{\Longrightarrow\hspace{0.33em}\hspace{0.33em}\hspace{0.33em}{-}{1}{.}{0498}\hspace{0.33em}{=}\hspace{0.33em}{-}{0}{.}{0001216}{t}}\\
{\Longrightarrow{t}\hspace{0.33em}{=}\hspace{0.33em}\frac{{-}{1}{.}{0498}}{{-}{0}{.}{0001216}}\hspace{0.33em}{=}\hspace{0.33em}{8633}\hspace{0.33em}{\mathrm{years}}}
\end{array}
\]
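The whole calculation is easy to reproduce on a computer; here is a short sketch (the function name is mine):

```python
import math

# Carbon-dating calculation, following the derivation above.
HALF_LIFE = 5700.0                 # years, half-life of carbon 14
k = math.log(2) / HALF_LIFE        # decay constant: 0.6931 / 5700

def age(fraction_remaining):
    """Solve fraction = e^(-k t) for t, in years."""
    return -math.log(fraction_remaining) / k

print(round(k, 7))        # -> 0.0001216
print(round(age(0.35)))   # -> 8633 (years)
```

Note that solving \(0.5 = e^{-k \cdot 5700}\) for \(k\) is the same as computing \(\ln(2)/5700\), which is why the code uses `math.log(2)` directly.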
We have a lot of birthdays to catch up on! |
There are proofs that treat the cases of real and non-real $\chi$ on an equal footing. One proof is in Serre's Course in Arithmetic, which the answers by Pete and David are basically about. That method uses the (hidden) fact that the zeta-function of the $m$-th cyclotomic field has a simple pole at $s = 1$, just like the Riemann zeta-function. Here is another proof, which focuses only on the $L$-function of the character $\chi$ under discussion, the $L$-function of the conjugate character, and the Riemann zeta-function.
Consider the product$$H(s) = \zeta(s)^2L(s,\chi)L(s,\overline{\chi}).$$This function is analytic for $\sigma > 0$, with the possible exception of a pole at $s = 1$. (As usual I write $s = \sigma + it$.)
Assume $L(1,\chi) = 0$. Then also $L(1,\overline{\chi}) = 0$.So in the product defining $H(s)$, the double pole of $\zeta(s)^2$ at $s = 1$ is cancelled and $H(s)$ is therefore analytic throughout the half-plane $\sigma > 0$.
For $\sigma > 1$, we have the exponential representation $$H(s) = \exp\left(\sum_{p, k} \frac{2 + \chi(p^k) + \overline{\chi}(p^k)}{kp^{ks}}\right),$$where the sum is over $k \geq 1$ and primes $p$. If $p$ does not divide $m$, then we write $\chi(p) = e^{i\theta_p}$ and find
$$\frac{2 + \chi(p^k) + \overline{\chi}(p^k)}{k} = \frac{2(1 + \cos(k\theta_p))}{k} \geq 0.$$ If $p$ divides $m$ then this sum is $2/k > 0$. Either way, inside that exponential is a Dirichlet series with nonnegative coefficients, so when we exponentiate and rearrange terms (on the half-plane of abs. convergence, namely where $\sigma > 1$), we see that $H(s)$ is a Dirichlet series with nonnegative coefficients. A lemma of Landau on Dirichlet series with nonnegative coefficients then assures us that the Dirichlet series representation of $H(s)$ is valid on any half-plane where $H(s)$ can be analytically continued.
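That nonnegativity is easy to check numerically (this is an illustration, not part of the proof). Here is a sketch for a concrete non-real character, the order-4 character mod 5 with $\chi(2) = i$, computing the Dirichlet coefficients of $H$ by convolving the coefficient sequences of $\zeta$, $\zeta$, $L(s,\chi)$ and $L(s,\overline{\chi})$:

```python
N = 300
chi_table = {1: 1, 2: 1j, 3: -1j, 4: -1}   # the character mod 5 with chi(2) = i
chi = [0j] * (N + 1)
for n in range(1, N + 1):
    chi[n] = complex(chi_table.get(n % 5, 0))
one = [0] * (N + 1)
for n in range(1, N + 1):
    one[n] = 1   # coefficients of zeta(s)

def dconv(f, g):
    """Dirichlet convolution: (f*g)(n) = sum over d | n of f(d) g(n/d)."""
    h = [0j] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            h[m] += f[d] * g[m // d]
    return h

chibar = [z.conjugate() for z in chi]
H = dconv(dconv(one, one), dconv(chi, chibar))   # coefficients of H(s)

assert all(abs(c.imag) < 1e-9 for c in H[1:])    # the coefficients are real...
assert all(c.real > -1e-9 for c in H[1:])        # ...and nonnegative
```

For instance, the coefficient of $1/2^s$ comes out as $2 + \chi(2) + \overline{\chi}(2) = 2$, matching the Euler-product expansion above.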
To get a contradiction at this point, here are several methods.
[Edit: In the answer by J.H.S., and due to Bateman, is the slickest argument I have seen, so let me put it here. The idea is to look at the coefficient of $1/p^{2s}$ in the Dirichlet series for $H(s)$. By multiplying out the $p$-part of the Euler product, the coefficient of $1/p^s$ is $2 + \chi(p) + \overline{\chi}(p)$, which is nonnegative, but the coefficient of $1/p^{2s}$ is $(\chi(p) + \overline{\chi}(p) + 1)^2 + 1$, which is not only nonnegative but in fact is greater than or equal to 1. Therefore if $H(s)$ has an analytic continuation along the real line out to the number $\sigma$, then for real $s \geq \sigma$ we have $H(s) \geq \sum_{p} 1/p^{2s}$. The hypothesis that $L(1,\chi) = 0$ makes $H(s)$ analytic for all complex numbers with positive real part, so we can take $s = 1/2$ and get $H(1/2) \geq \sum_{p} 1/p$, which is absurd since that series over the primes diverges. QED!]
If you are willing to accept that $L(s,\chi)$ (and therefore $L(s,\overline{\chi})$) has an analytic continuation to the whole plane, or at least out to the point $s = -2$, then $H(s)$ extends to $s = -2$. The Dirichlet series representation of $H(s)$ is convergent at $s = -2$ by our analytic continuation hypothesis and it shows $H(-2) > 1$, or the exponential representation implies that at least $H(-2) \not= 0$.But $\zeta(-2) = 0$, so $H(-2) = 0$. Either way, we have a contradiction.
There is a similar argument, pointed out to me by Adrian Barbu, that does not require analytic continuation of $L(s,\chi)$ beyond the half-plane $\sigma > 0$. If you are willing to accept that $\zeta(s)$ has zeros in the critical strip $0 < \sigma < 1$ (which is a region that the Dirichlet series and exponential representations of $H(s)$ are both valid since $H(s)$ is analytic on $\sigma > 0$), we can evaluate the exponential representation of $H(s)$ at such a zero to get a contradiction. Of course the amount of analysis that lies behind this is more substantial than what is used to continue $L(s,\chi)$ out to $s = -2$.
We consider $H(s)$ as $s \rightarrow 0^{+}$. We need to accept that $H$ is bounded as $s \rightarrow 0^{+}$. (It's even holomorphic there, but we don't quite need that.) For real $s > 0$ and a fixed prime $p_0$ (not dividing $m$, say), we can bound $H(s)$ from below by the sum of the $p_0$-power terms in its Dirichlet series. The sum of these terms is exactly the $p_0$-Euler factor of $H(s)$, so we have the lower bound $$H(s) > \frac{1}{(1 - p_0^{-s})^2(1 - \chi(p_0)p_0^{-s})(1 - \overline{\chi}(p_0)p_0^{-s})} = \frac{1}{(1 - p_0^{-s})^2(1 - (\chi(p_0)+ \overline{\chi}(p_0))p_{0}^{-s} + p_0^{-2s})}$$for real $s > 0$. The right side tends to $\infty$ as $s \rightarrow 0^{+}$.We have a contradiction. QED
These three arguments at some point use knowledge beyond the half-plane $\sigma > 0$ or a nontrivial zero of the zeta-function. Granting any of those lets you see easily that $H(s)$ can't vanish at $s = 1$, but that "granting" may seem overly technical. If you want a proof for the real and complex cases uniformly which does not go outside the region $\sigma > 0$, use the method in the answer by Pete or David [edit: or use the method I edited in as the first one in this answer]. |
Bonus: An ATLAS \(\mu\mu j\) event with \(m=2.9\TeV\) will be discussed at the end of this blog post. A model with exactly this prediction was published in June
Two days ago, I discussed four LHC collisions suggesting a particle of mass \(5.2\TeV\). Today, just two days later, Tommaso Dorigo described a spectacular dielectron event seen by CMS on August 22nd. See also the CERN document server; CERN graduate students have to prepare a PDF file for each of the several quadrillion collisions. ;-)
On that Tuesday, the world stock markets were just recovering from the two previous cataclysmic days while the CMS detector enjoyed a more pleasing day with one of the \(13\TeV\) collisions that have turned the LHC into a rather new kind of a toy.
This is how the outcome of the collision looked from the direction of the beam. The electron and positron were flying almost exactly in the opposite direction, each having about \(1.25\TeV\) of transverse energy. A perfectly balanced picture.
You may see the collision from another angle, too:
The electron-positron pair is the only notable thing that is going on.
The fun is that no such high-energy collision was seen in the \(8\TeV\) run – even though that run produced more than 100 times as many collisions as the ongoing \(13\TeV\) run of 2015. When you demand truly highly energetic particles in the final state, the weakness of the \(8\TeV\) run in 2012 becomes self-evident.
The expected number of similar collisions with the invariant mass\[
M_{e^+e^-}\gt 2.5\TeV
\] seen in the CMS dataset of 2015 (so far) has been estimated as \(\langle N \rangle =0.002\). Clearly, this number – because it is so small that we may neglect the possibility that more than 1 such event arises – may be interpreted as the probability that one event (and not zero events) takes place. For a mass above \(2.85\TeV\), you would almost certainly get a probability of \(0.001\) or less.
If you take the estimate \(p=0.002\) seriously, it means that either the CMS detector has been 1:500 "lucky" to see a high-energy event that is actually noise; or it is seeing a new particle that may decay to the electron-positron pair.
Such a new particle would probably be neutral from all points of view. It could be a heavier cousin of the \(Z\)-boson, a \(Z'\)-boson. That would be the gauge boson associated with a new \(U(1)_{\rm new}\) gauge symmetry. Most types of vacua in string theory tend to predict lots of these additional \(U(1)\) groups.
And your humble correspondent can even offer you a paper that predicts a \(Z'\)-boson of mass \(2.9\TeV\). See the bottom of page 10 here. (Sadly, they made the prediction less accurate in v2 of their preprint.) The left-right-symmetric model in the paper also intends to explain the excesses near \(2\TeV\) – as a \(W'\)-boson. The model is lepto-phobic (LP) which means that only right-handed quarks are arranged to doublets of \(SU(2)_R\) while the right-handed leptons remain \(SU(2)_R\) singlets. It's the model with the Higgs triplet (LPT) that gives the right \(Z'\)-boson mass.
Just for fun, let me show you the calculation of the invariant mass. The coordinates of the two electron-like particles are written as\[
\eq{
p_T &= 1.27863\TeV\\
\eta &= - 1.312\\
\phi &= 0.420
}
\] and \[
\eq{
p_T &= 1.25620\TeV\\
\eta &= - 0.239\\
\phi &= -2.741
}
\] One may convert these coordinates to the Cartesian coordinates\[
\eq{
p_x &= p_T\cos \phi\\
p_y &= p_T\sin \phi\\
p_z &= p_T \sinh \eta \\
E &= p_T \cosh \eta
}
\] in the approximation \(m_e\ll E\) i.e. \(m_e\sim 0\): feel free to check that the 4-vector above is identically light-like. The two 4-vectors (in the order I chose above) are therefore\[
\eq{
\frac{p_A^\mu }{ {\rm TeV}}&= (1.16750, 0.521375, -2.20200, 2.54631) \\
\frac{p_B^\mu }{ {\rm TeV}}&= (-1.15675, -0.48987, -0.30310, 1.29225)
}
\] where the last coordinate is the energy. Now, because these 4-vectors are null, \[
(p_A^\mu+p_B^\mu)^2 = 2p_A^\mu p_{B,\mu} = (2.908\TeV)^2
\] in the West Coast metric convention. You're invited to check it. Thanks to the Higgs Kaggle contest, I gained some intuition for the \((p_T,\eta,\phi)\) coordinates. ;-)
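Under the same massless approximation, the whole check can be scripted; the inputs below are the \((p_T,\eta,\phi)\) values quoted above, and the helper name `four_vector` is mine:

```python
import math

# Convert detector coordinates (pT, eta, phi) to a Cartesian 4-vector,
# assuming m_e ~ 0 so that E = |p| (the massless approximation used above).
def four_vector(pt, eta, phi):
    return (pt * math.cos(phi),   # p_x
            pt * math.sin(phi),   # p_y
            pt * math.sinh(eta),  # p_z
            pt * math.cosh(eta))  # E, light-like by construction

pA = four_vector(1.27863, -1.312, 0.420)    # first electron-like particle, TeV
pB = four_vector(1.25620, -0.239, -2.741)   # second one, TeV

# Invariant mass in the (+,-,-,-) signature: M^2 = (E_A+E_B)^2 - |p_A+p_B|^2
E = pA[3] + pB[3]
px, py, pz = (pA[i] + pB[i] for i in range(3))
M = math.sqrt(E**2 - px**2 - py**2 - pz**2)
print(round(M, 3))  # 2.908 (TeV)
```

Because both 4-vectors are null, the same number comes out of \(2p_A\cdot p_B\) directly.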
In a few more weeks, we should see whether this highly energetic electron-positron event was a fluke or something much more interesting... You know, the progress on the energy frontier has been rather substantial. Note that \(13/8=1.625\), an increase by 62.5%.
Lots of particles – the \(W\)-bosons, the \(Z\)-boson, the Higgs boson, and the top quark – are confined in the interval \((70\GeV,210\GeV)\) – four types of particles safely inside an interval whose upper bound is thrice the lower bound. Now, we can produce particles with masses up to \(5\TeV\) or so. Why shouldn't we find any new particles with masses between \(175\GeV\) and \(4,900\GeV\) – an interval whose ratio of limiting energies is twenty-eight?
It's quite some jump, isn't it? ;-) It could harbor lots of so far secret and elusive animals.
Next Monday, the full-fledged physics collisions should resume and continue through early November. |
nuSTORM at CERN: Feasibility Study / Long, Kenneth Richard (Imperial College (GB)) The Neutrinos from Stored Muons, nuSTORM, facility has been designed to deliver a definitive neutrino-nucleus scattering programme using beams of $\bar{\nu}_e$ and $\bar{\nu}_\mu$ from the decay of muons confined within a storage ring. The facility is unique: it will be capable of storing $\mu^\pm$ beams with a central momentum of between 1 GeV/c and 6 GeV/c and a momentum spread of 16%. [...] CERN-PBC-REPORT-2019-003.- Geneva : CERN, 2019 - 150 p.
Report from the LHC Fixed Target working group of the CERN Physics Beyond Colliders forum / Barschel, Colin (CERN) ; Bernhard, Johannes (CERN) ; Bersani, Andrea (INFN e Universita Genova (IT)) ; Boscolo Meneguolo, Caterina (Universita e INFN, Padova (IT)) ; Bruce, Roderik (CERN) ; Calviani, Marco (CERN) ; Carassiti, Vittore (Universita e INFN, Ferrara (IT)) ; Cerutti, Francesco (CERN) ; Chiggiato, Paolo (CERN) ; Ciullo, Giuseppe (Universita e INFN, Ferrara (IT)) et al. Several fixed-target experiments at the LHC are being proposed and actively studied. Splitting of beam halo from the core by means of a bent crystal combined with a second bent crystal after the target has been suggested in order to study magnetic and electric dipole moments of short-lived particles. [...] CERN-PBC-REPORT-2019-001.- Geneva : CERN, 2019
Physics Beyond Colliders at CERN: Beyond the Standard Model Working Group Report / Beacham, J. (Ohio State U., Columbus (main)) ; Burrage, C. (U. Nottingham) ; Curtin, D. (Toronto U.) ; De Roeck, A. (CERN) ; Evans, J. (Cincinnati U.) ; Feng, J.L. (UC, Irvine) ; Gatto, C. (INFN, Naples ; NIU, DeKalb) ; Gninenko, S. (Moscow, INR) ; Hartin, A. (U. Coll. London) ; Irastorza, I. (U. Zaragoza, LFNAE) et al. The Physics Beyond Colliders initiative is an exploratory study aimed at exploiting the full scientific potential of the CERN’s accelerator complex and scientific infrastructures through projects complementary to the LHC and other possible future colliders. These projects will target fundamental physics questions in modern particle physics. [...] arXiv:1901.09966; CERN-PBC-REPORT-2018-007.- Geneva : CERN, 2018 - 150 p.
PBC technology subgroup report / Siemko, Andrzej (CERN) ; Dobrich, Babette (CERN) ; Cantatore, Giovanni (Universita e INFN Trieste (IT)) ; Delikaris, Dimitri (CERN) ; Mapelli, Livio (Universita e INFN, Cagliari (IT)) ; Cavoto, Gianluca (Sapienza Universita e INFN, Roma I (IT)) ; Pugnat, Pierre (Lab. des Champs Magnet. Intenses (FR)) ; Schaffran, Joern (Deutsches Elektronen-Synchrotron (DE)) ; Spagnolo, Paolo (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, Pisa (IT)) ; Ten Kate, Herman (CERN) et al. Goal of the technology WG set by PBC: Exploration and evaluation of possible technological contributions of CERN to non-accelerator projects possibly hosted elsewhere: survey of suitable experimental initiatives and their connection to and potential benefit to and from CERN; description of identified initiatives and how their relation to the unique CERN expertise is facilitated. CERN-PBC-REPORT-2018-006.- Geneva : CERN, 2018 - 31 p.
AWAKE++: The AWAKE Acceleration Scheme for New Particle Physics Experiments at CERN / Gschwendtner, Edda (CERN) ; Bartmann, Wolfgang (CERN) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Calviani, Marco (CERN) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Damerau, Heiko (CERN) ; Depero, Emilio (ETH Zurich (CH)) ; Doebert, Steffen (CERN) ; Gall, Jonathan (CERN) et al. The AWAKE experiment reached all planned milestones during Run 1 (2016-18), notably the demonstration of strong plasma wakes generated by proton beams and the acceleration of externally injected electrons to multi-GeV energy levels in the proton driven plasma wakefields. During Run~2 (2021 - 2024) AWAKE aims to demonstrate the scalability and the acceleration of electrons to high energies while maintaining the beam quality. [...] CERN-PBC-REPORT-2018-005.- Geneva : CERN, 2018 - 11 p.
Particle physics applications of the AWAKE acceleration scheme / Wing, Matthew (University of London (GB)) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Depero, Emilio (ETH Zurich (CH)) ; Gall, Jonathan (CERN) ; Gninenko, Sergei (Russian Academy of Sciences (RU)) ; Gschwendtner, Edda (CERN) ; Hartin, Anthony (University of London (GB)) ; Keeble, Fearghus Robert (University of London (GB)) et al. The AWAKE experiment had a very successful Run 1 (2016-8), demonstrating proton-driven plasma wakefield acceleration for the first time, through the observation of the modulation of a long proton bunch into micro-bunches and the acceleration of electrons up to 2 GeV in 10 m of plasma. The aims of AWAKE Run 2 (2021-4) are to have high-charge bunches of electrons accelerated to high energy, about 10 GeV, maintaining beam quality through the plasma and showing that the process is scalable. [...] CERN-PBC-REPORT-2018-004.- Geneva : CERN, 2018 - 11 p.
Summary Report of Physics Beyond Colliders at CERN / Jaeckel, Joerg (CERN) ; Lamont, Mike (CERN) ; Vallee, Claude (Centre National de la Recherche Scientifique (FR)) Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN's accelerator complex and its scientific infrastructure in the next two decades through projects complementary to the LHC, HL-LHC and other possible future colliders. These projects should target fundamental physics questions that are similar in spirit to those addressed by high-energy colliders, but that require different types of beams and experiments. [...] arXiv:1902.00260; CERN-PBC-REPORT-2018-003.- Geneva : CERN, 2018 - 66 p. |
Now the fractions we have been working with are called
proper fractions: these are fractions where the numerator is smaller than the denominator. These types of fractions are smaller than one, which is why they are called proper as they are a fractional part of one. This implies there are things called improper fractions, and there are. Not improper in the sense of your drunk uncle at a wedding, but improper because they are greater than or equal to one. These will be fractions with a numerator equal to or greater than the denominator.
For example \[
\frac{5}{3} \] is an improper fraction. Now they can be added, subtracted, multiplied, and simplified like any other fraction. But when an operation with fractions results in an improper fraction, you are expected to convert it to a mixed fraction – a whole number plus a proper fraction. So \[ \frac{5}{3} \] is equal to \[ 1\frac{2}{3} \]
So how do you convert an improper fraction to a mixed one? Just take out the whole parts and leave the resulting proper fraction. You do this by dividing: \[
\frac{5}{3}\hspace{0.33em}{=}\hspace{0.33em}{5}\hspace{0.33em}\div\hspace{0.33em}{3}\hspace{0.33em}{=}\hspace{0.33em}{1} \] plus a remainder of 2. So in this case, the improper fraction is one whole plus two thirds left over, or \[ 1\frac{2}{3} \].
Now try \[
\frac{24}{11} \]:
\[
\frac{24}{11}\hspace{0.33em}{=}\hspace{0.33em}{24}\hspace{0.33em}\div\hspace{0.33em}{11}\hspace{0.33em}{=}\hspace{0.33em}{2} \] with a remainder of 2. So \[ \frac{24}{11}\hspace{0.33em}{=}\hspace{0.33em}{2}\frac{2}{11} \].
One more example:
\[\frac{66}{33}\hspace{0.33em}{=}\hspace{0.33em}{66}\hspace{0.33em}\div\hspace{0.33em}{33}\hspace{0.33em}{=}\hspace{0.33em}{2}
\].
Sometimes there is no remainder and you just have a whole number. By the way, this result could also have been obtained by factoring:\[
\frac{66}{33}\hspace{0.33em}{=}\hspace{0.33em}\frac{\rlap{/}{3}\rlap{/}{3}\hspace{0.33em}\times\hspace{0.33em}{2}}{\rlap{/}{3}\rlap{/}{3}\hspace{0.33em}\times\hspace{0.33em}{1}}\hspace{0.33em}{=}\hspace{0.33em}\frac{2}{1}\hspace{0.33em}{=}\hspace{0.33em}{2}
\] |
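The divide-and-keep-the-remainder recipe above can be sketched in a few lines of Python (the helper name `to_mixed` is mine; `divmod` returns the quotient and remainder in one step):

```python
# Convert an improper fraction to a mixed one: the quotient becomes the
# whole part, the remainder becomes the new numerator over the same
# denominator.
def to_mixed(numerator, denominator):
    whole, rest = divmod(numerator, denominator)
    return whole, rest, denominator

print(to_mixed(5, 3))    # (1, 2, 3)  i.e. 1 2/3
print(to_mixed(24, 11))  # (2, 2, 11) i.e. 2 2/11
print(to_mixed(66, 33))  # (2, 0, 33) i.e. just the whole number 2
```

A zero remainder, as in the last example, is exactly the "no remainder" case discussed above.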
Instead of writing very long lines to create complex equations, it would be great to use variables to substitute smaller chunks...
A simple example:
\[\frac{ \sqrt{ \mu(i)^{ \frac{3}{2}} (i^{2} -1) } }{ \sqrt[3]{\rho(i) - 2} + \sqrt[3]{\rho(i) - 1} }\]
It would be done something like this:
A = \sqrt{ \mu(i)^{ \frac{3}{2}} (i^{2} -1) }
B = \sqrt[3]{\rho(i) - 2}
C = \sqrt[3]{\rho(i) - 1}
\[\frac{ A } { B + C }\]
And the result would be the same but the code easier to read with more complex examples.
How can I define easily these substitutions or parameters? or using any specialized package?
PS: there are other questions at TeX.StackExchange asking how to split an equation in several lines, but this is not the same. |
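For what it's worth, plain LaTeX can already approximate this kind of substitution with `\newcommand` macros; a minimal sketch (the macro names `\A`, `\B`, `\C` are arbitrary, and short names like these may clash with other packages):

```latex
\newcommand{\A}{\sqrt{ \mu(i)^{ \frac{3}{2}} (i^{2} -1) }}
\newcommand{\B}{\sqrt[3]{\rho(i) - 2}}
\newcommand{\C}{\sqrt[3]{\rho(i) - 1}}
\[ \frac{\A}{\B + \C} \]
```

The rendered result is the same as writing the full expression inline, but each chunk is defined once and reused.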
So now I’m up to “4” on Newton’s clock:
So the expression\[
{\left({2\sin\frac{\mathit{\pi}}{2}}\right)}^{2}
\]
uses the sine function, which has been talked about in many previous posts. Only this time, it is using radian measure of angles instead of degrees. If your calculator is in degree mode, you can substitute 90° in place of 𝜋/2 to get the same answer. The sine of 𝜋/2 radians or 90° is 1. So in the brackets we have 2 × 1 = 2. 2² = 4, hence its position on the clock.
Now let’s look at\[
\sqrt[3]{125}
\]
This is the cube root of 125. This expression is asking the question: “What number, used as a factor three times, equals 125?”. The answer to that is 5 because 5 × 5 × 5 = 125. So once again, the clock does not lie.
Now let’s look at 3! This is pronounced “3 factorial”. The factorial of a number is that number successively multiplied by a number which is 1 less. So 5! = 5 × 4 × 3 × 2 × 1 = 120. So 3! = 3 × 2 × 1 = 6. Factorials are used a lot in probability. I have touched on this before but perhaps there is another future post here.
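The successive-multiplication rule just described can be sketched directly in Python (the standard library's `math.factorial` does the same job; this loop just spells out the definition):

```python
# n! = n × (n-1) × ... × 2 × 1, built up by successive multiplication.
def factorial(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print(factorial(3))  # 6
print(factorial(5))  # 120
```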
Now let’s look at 0111₂. We are very familiar with the decimal system way of counting. This system is a base 10 system because we use 10 distinct digits (symbols) to count: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. When we run out of digits, like when we count up to 9, we add another place holder and put the starting digit 0 there: 10. And then we successively increase its digits until we get to 9 again. Then we increase the left digit by 1 and start over again: 20, 21, … . There are other number systems based on numbers other than ten.
Computers are composed of switches based on two states, on or off. We mathematically say that off is 0 and on is 1. Computers essentially count with just 0’s and 1’s: a base 2 system. Counting in base 2 is done exactly as we do in base 10; we just have fewer digits to work with.
So if we start counting we get 0, 1, but we’ve run out of digits, so we add a place holder and start again: 0, 1, 10, 11. Ran out of digits again, so add another place holder and start over: 0, 1, 10, 11, 100, 101, 110, 111. If you are keeping track, 111 in base 2 is equal to 7 in base 10. It is a convention to subscript a number with its base when dealing with other base systems, so 0111₂ means 7 in base 10. The leading 0 doesn’t add to the value, but in computer maths, base 2 numbers are typically written 4 digit places at a time. |
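Python's built-in base conversions can reproduce the counting sequence just walked through (`format(n, '04b')` renders the 4-digit grouping mentioned above):

```python
# Count from 0 to 7 in base 2, padded to 4 digit places.
for n in range(8):
    print(format(n, '04b'))  # 0000, 0001, 0010, ..., 0111

# And back the other way: parse a base-2 string into base 10.
print(int('0111', 2))  # 7 -- the leading 0 adds nothing to the value
```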
Consider the function $f$ on $S_n$ which equals $1/n$ on all adjacent transpositions $(i,i+1)$, where we let $n+1 = 1$, and $0$ otherwise, and its Fourier transform $\hat{f}(\rho)$ evaluated at the irreducible representations.
Recall the irreducible representations of $S_n$ are indexed by the set of partitions of $n$. Partitions here are written as a finite non-increasing sequence of positive integers that add up to $n$.
When $\rho$ is the representation corresponding to the partition $(n)$, the matrix $\hat{f}(\rho)$ is simply the $1 \times 1$ matrix $[1]$.
When $\rho$ is the representation corresponding to the partition $(n-1,1)$, the resulting matrix $\hat{f}(\rho)$ can be explicitly diagonalized, since it can be extended into a cyclic matrix on $\mathbb{R}^n$. The eigenvalues are simply $\cos \frac{2\pi k}{n}$ where $k = 1, \ldots n-1$. Therefore the spectral gap for that matrix (the smallest gap between $1$ and an eigenvalue not equal to $1$) is simply
$$1-\cos \frac{2 \pi}{n} = 1-\cos \frac{2(n-1)\pi}{n} = \frac{2 \pi^2}{n^2} + O(\frac{1}{n^3})$$.
The following questions are in increasing levels of difficulty and are interesting to Markov chain theorists:
Is it true that all other eigenvalues of $\hat{f}(\rho)$ for some irreducible representation $\rho$ are strictly less than $1-\cos \frac{2 \pi}{n}$ in absolute value?
Denote by $e_{\lambda,j}$, $j = 1, \ldots, d_\lambda$ the eigenvalues of $\hat{f}(\rho_\lambda)$, where $\rho_\lambda$ is the representation associated with the partition $\lambda$ and $d_\lambda$ is the dimension of that representation.
For any fixed $k \in \mathbb{N}$, is it true that $ (1-\max_j e_{\lambda,j}) \le (n-\lambda_1) \frac{2 \pi^2}{n^2} + O(\frac{1}{n^3})$, for $n-\lambda_1 \le k$? Here $\lambda_1$ denotes the longest part of the partition $\lambda$.
If $\lambda > \lambda'$ in the sense that one can move blocks in the Ferrers diagram of $\lambda'$ in the up and right direction to obtain $\lambda$, for instance $(n-1,1) > (n-2,1,1)$, is it true that the spectral gap of $\hat{f}(\rho_\lambda)$ is smaller than that associated with $\lambda'$?
Give an explicit formula for $e_{\lambda,j}$. This is most likely not possible.
This question shows how hard it can be to diagonalize matrices and to understand the representation theory of $S_n$ at a practical level. |
Hi, I’m Chris and I teach people to teach machines. But I am a reluctant computer scientist. Sometimes I get concerned that the thing I know the most about is not directly linked to my survival. My father knew how to keep machines running. My wife grows vegetables. In a post-apocalyptic world, they would be […]
Check out these parametric equations: $$\begin{array}{rll}x &=& \cos v \cdot \cos u \\y &=& \sin v \\z &=& \cos v \cdot \sin u\end{array}$$ Do you know what they do? They are most assuredly not magic. Here, let’s rename the variables, and you can try again: $$\begin{array}{rll}x &=& \cos \textit{latitude} \cdot \cos \textit{longitude} \\y &=& […]
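The complete equations at the start of that excerpt trace a unit sphere: \(x^2+y^2+z^2 = \cos^2 v(\cos^2 u + \sin^2 u) + \sin^2 v = 1\) for any latitude \(v\) and longitude \(u\). A quick numeric check (the helper name `point` is mine):

```python
import math

# Evaluate the parametric equations from the excerpt at a given
# latitude v and longitude u (both in radians).
def point(v, u):
    x = math.cos(v) * math.cos(u)
    y = math.sin(v)
    z = math.cos(v) * math.sin(u)
    return x, y, z

x, y, z = point(0.7, 2.3)  # arbitrary latitude/longitude
print(round(x*x + y*y + z*z, 12))  # 1.0 -- always on the unit sphere
```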
Madeup’s dowel solidifier has one job: thicken a sequence of line segments into a solid. But what if the sequence isn’t a polyline, but rather a branching structure like a tree or a fork? One could model each branch as a separate dowel and hope that nobody looks too closely at the joints, but that’s […]
Welcome to the notes for the Computational Making with Madeup workshop at STEM in Education 2018. It’s my hope that you read the abstract for this workshop, and you consent to our stated goals: In this interactive hands-on workshop, participants will learn to build models using Madeup and gain skills and resources they can use […]
In the late 1600s, William Molyneux posed a question to a friend: Suppose a man born blind, and now adult, and taught by his touch to distinguish between a cube and a sphere of the same metal, and nighly of the same bigness, so as to tell, when he felt one and the other, which […]
Recently, I stole something precious from a friend. I was sitting in Andy’s office, and there it was. An interesting shape. Three rings nestled inside each other. They could rotate independently, but if any translated, the others would follow. I had to have these rings. So, I stole them. Well, I stole the idea of […]
If you are an illustrator, there are two things that are harder to draw than anything else. The first is lettering. Bless you, illustrators, if you have to draw a busy street scene with lots of storefronts full of signs and words. The second is stars. Most illustrators just give up on stars. They are […]
When all you know of trees is that they have bark and leaves, you view the woods as a background to the more interesting foreground activity of a jog, or a campout, or a proposal. But when one knows the trees, it’s hard to not stop every few feet and shake hands with some old […]
Several hours later, I have now found the difference between an octahedron and an icosahedron. I had been stuck on generating the coordinates of the octahedron. A little reading and experimentation directed my attention to the cube circumscribing the icosahedron. The way I’ve set things up, its vertices are all [±u, ±u, ±u], where u […]
One of the important consequences of the internet is that we can now talk freely about icosahedrons. We’re not bound to the interests of those that are geographically near. We can love pretty much anything and find a community that shares our passions somewhere online. So, this morning, while I was trying to get other […] |
Extension.
This post is about set theory, which is a framework to reason about collections, elements and membership.
We start with an informal and naïve outline, which is (very loosely) based on a Gödel–Bernays version of set theory. This theory is about sets (collections) which contain elements, which can in turn be sets. When a set \(X\) contains an element \(t\), it is written \(t\in X\).
First, set equality has to be defined:
E0 Axiom of extensionality:
$$X = Y \equiv (t \in X \equiv t \in Y)$$
(Here \(\equiv\) is a logical equivalence, pronounced "if and only if".) This axiom means that sets are equal if and only if they have the same elements. (In this and following formulae, free variables are implicitly universally quantified.)
Subsets are defined by \(X \subseteq Y \equiv (t\in X \Rightarrow t\in Y)\), that is, X is a subset of Y when every element of X is also an element of Y. It's easy to check that \(X = Y \equiv (X\subseteq Y \wedge Y\subseteq X)\), where \(\wedge\) means "and".
Next, a way to build new sets is needed:
E1 Axiom of (extensional) comprehension:
$$t \in \{u \mid P(u)\} \equiv P(t)$$
This axiom introduces a "set-builder notation" \(\{u \mid P(u) \}\) and states that \(\{u \mid P(u) \}\) is exactly the set of all \(t\) such that \(P(t)\) holds.
Now, there is already enough machinery for two famous collections to be immediately constructed:
the empty set, \(\varnothing = \{t \mid false \}\), which contains no elements, and the collection of all sets, \(U = \{t \mid true \}\), which contains all possible elements.
With these axioms, conventional set operations can be defined:
singleton: \(\{t\} = \{u\mid u = t\}\); for each \(t\) a singleton set \(\{t\}\) can be constructed that contains \(t\) as its only element. Note that \(t\in\{t\}\).
union: \(X\cup Y = \{t\mid t \in X \vee t \in Y\}\) (\(\vee\) means "or")
intersection: \(X\cap Y = \{t\mid t\in X \wedge t\in Y\}\)
complement: \(\complement X = \{t \mid t \notin X\}\)
(unordered) pair: \(\{t,u\} = \{t\}\cup\{u\}\)
ordered pair: \((t,u) = \{t, \{t, u\}\}\)
power set: \(\mathfrak{P}(X) = \{t \mid t \subseteq X\}\)
From here, one can build hierarchical sets, representing all traditional mathematical structures, starting with natural numbers:
$$0 = \varnothing, 1 = \{0\}, 2 = \{0, 1\}, \ldots n + 1 = \{0, \ldots, n\}, \ldots$$
then integers, rationals, reals, &c., adding more axioms (of infinity, of foundation, of replacement, &c.) along the way.
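The construction of naturals above can be sketched with Python's `frozenset` standing in for hereditarily finite sets (the names `zero` and `successor` are mine):

```python
# Von Neumann-style naturals: 0 = {} and n+1 = {0, ..., n}.
zero = frozenset()

def successor(n):
    # n+1 = n ∪ {n}, which is exactly {0, ..., n}
    return n | frozenset([n])

three = successor(successor(successor(zero)))
print(len(three))     # 3 -- the set representing n has exactly n elements
print(zero in three)  # True -- 0 is an element of every positive natural
```

Using `frozenset` rather than `set` matters: only hashable (immutable) sets can be elements of other sets, mirroring the hierarchical structure.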
It was discovered quite early that this system is not entirely satisfactory. The first defect is that it is impossible to have elements which are not sets themselves. For example, one would like to talk about a "set of all inhabited planets in the Solar system". Elements of this set (planets) are not sets; they are called ur-elements. Unfortunately, the axiom of extensionality makes all ur-elements equal to the empty set. Note that this indicates that the axiom of extensionality doesn't work well with sets that have very few (or no) elements. This was never considered a problem, because all sets of interest to mathematics can be constructed without ur-elements.
Another, more serious drawback arises in the area of very large sets: the existence of a set \(\{t\mid t\notin t\}\) directly leads to a contradiction known as Russell's paradox.
Among several methods to deal with this, one separates sets into two types: "smaller" collections, which continue to be called "sets", and "proper classes", which are collections so large that they cannot be a member of any collection. The axiom of comprehension is carefully modified so that the set-builder never produces a collection having some class as its element. In this setup Russell's paradox becomes a theorem: \(\{t\mid t\notin t\}\) is a proper class.
Intention.
The axiom of extensionality states that sets are equal when they contain the same elements. What would happen if set theory were based on a dual notion of intensional equality (which will be denoted by \(\sim\) to tell it from the extensional one), where sets are equal when they are contained in the same collections?
I0 Axiom of intensionality:
$$X \sim Y \equiv (X \in t \equiv Y\in t)$$
This looks bizarre: for any "normal" set \(X\) the collection of all sets containing \(X\) as an element is unmanageably huge. But as a matter of fact, intensional equality is much older than the extensional one; it is variously known as Leibniz's law, the identity of indiscernibles and, in a less enlightened age, as duck typing.
There is a nice symmetry: while extensional equality myopically confuses small sets (ur-elements), intensional equality cannot tell very large collections (proper classes) from each other, because they are not members of anything and are, therefore, intensionally equal.
The whole extensional theory buildup can be mirrored easily by moving things around \(\in\) sign:
Intensional subsets: \(X \unlhd Y \equiv (X\in t \Rightarrow Y\in t)\)
I1 Axiom of intensional comprehension (incomprehension):
$$[u \mid P(u)]\in t \equiv P(t)$$
And associated operations:
uniqum (or should it have been "s-ex-gleton"?): \([t] = [u\mid u \sim t]\); note that \([t]\in t\).
intensional union: \(X\triangledown Y = [t\mid X \in t \vee Y \in t]\)
intensional intersection: \(X\triangle Y = [t\mid X \in t \wedge Y \in t]\)
intensional complement: \(\Game X = [t \mid X \notin t]\)
intensional pair: \([t,u] = [t]\triangledown [u]\)
intensional ordered pair: \(<t,u> = [t, [t, u]]\)
intensional power set: \(\mathfrak{J}(X) = [t \mid X \unlhd t]\)
What do all these things mean? In the extensional world, a set is a container where elements are stored. In the intensional world, a set is a property, which other sets might or might not enjoy. If \(t\) has property \(P\), it is written as \(t\in P\). In the traditional notation, \(P\) is called a predicate and \(t\in P\) is written as \(P(t)\). The axiom of intensional equality claims that sets are equal when they have exactly the same properties (quite natural, right?). \(X\) is an intensional subset of \(Y\) when \(Y\) has all properties of \(X\) and perhaps some more (this looks like a nice way to express LSP). Intensional comprehension \([u \mid P(u)]\) is a set having exactly all properties \(t\) for which \(P(t)\) holds and no other properties. Intensional union of two sets is a set having properties of either, their intensional intersection is a set having properties of both, &c. Uniqum \([P]\) is the set that has property \(P\) and no other properties.
Because the intensional theory is a perfect dual of the extensional one, nothing interesting is obtained by repeating the extensional construction, for example by building "intensional natural numbers" as
$$0' = U, 1' = [0'], 2' = [0', 1'], \ldots (n + 1)' = [0', \ldots, n'], \ldots$$
What is more interesting is how the intensional and extensional twins meet. With some filial affection, it seems:
by the uniqum property \([\varnothing] \in\varnothing\), which contradicts the definition of \(\varnothing\); also the set \([t\mid false]\) is not a member of any set (perhaps it's a proper class) and the set \([t\mid true]\) is a member of every set, which is strange;
a set of which a singleton can be formed has very shallow intensional structure. Indeed:
\(x \unlhd y\)
\(\equiv\) { definition of intensional subset }
\(x\in t \Rightarrow y\in t\)
\(\Rightarrow\) { substitute \(\{x\}\) for \(t\) }
\(x\in \{x\} \Rightarrow y\in \{x\}\)
\(\equiv\) { \(x\in \{x\}\) is true by the singleton property; modus ponens }
\(y\in \{x\}\)
\(\equiv\) { the singleton property, again }
\(x = y\) |
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Haydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
@MartinSleziak I don't think so :) I'm not very good at coming up with new tags. I just think there is little sense to prefer one of liminf/limsup over the other and every term encompassing both would most likely lead to us having to do the tagging ourselves since beginners won't be familiar with it.
Anyway, my opinion is this: I did what I considered the best way: I've created [tag:limsup] and mentioned liminf in the tag-wiki. Feel free to create a new tag and retag the two questions if you have a better name. I do not plan on adding other questions to that tag until tomorrow.
@QED You do not have to accept anything. I am not saying it is a good question; but that doesn't mean it's not acceptable either. The site's policy/vision is to be open towards "math of all levels". It seems hypocritical to me to declare this if we downvote a question simply because it is elementary.
@Matt Basically, the a priori probability (the true probability) is different from the a posteriori probability after part (or whole) of the sample point is revealed. I think that is a legitimate answer.
@QED Well, the tag can be removed (if someone decides to do so). Main purpose of the edit was that you can retract you downvote. It's not a good reason for editing, but I think we've seen worse edits...
@QED Ah. Once, when it was snowing at Princeton, I was heading toward the main door to the math department, about 30 feet away, and I saw the secretary coming out of the door. Next thing I knew, I saw the secretary looking down at me asking if I was all right.
OK, so chat is now available... but; it has been suggested that for Mathematics we should have TeX support.The current TeX processing has some non-trivial client impact. Before I even attempt trying to hack this in, is this something that the community would want / use?(this would only apply ...
So in between doing phone surveys for CNN yesterday I had an interesting thought. For $p$ an odd prime, define the truncation map $$t_{p^r}:\mathbb{Z}_p\to\mathbb{Z}/p^r\mathbb{Z}:\sum_{l=0}^\infty a_lp^l\mapsto\sum_{l=0}^{r-1}a_lp^l.$$ Then primitive roots lift to $$W_p=\{w\in\mathbb{Z}_p:\langle t_{p^r}(w)\rangle=(\mathbb{Z}/p^r\mathbb{Z})^\times\}.$$ Does $\langle W_p\rangle\subset\mathbb{Z}_p$ have a name or any formal study?
> I agree with @Matt E, as almost always. But I think it is true that a standard (pun not originally intended) freshman calculus does not provide any mathematically useful information or insight about infinitesimals, so thinking about freshman calculus in terms of infinitesimals is likely to be unrewarding. – Pete L. Clark 4 mins ago
In mathematics, in the area of order theory, an antichain is a subset of a partially ordered set such that any two elements in the subset are incomparable. (Some authors use the term "antichain" to mean strong antichain, a subset such that there is no element of the poset smaller than 2 distinct elements of the antichain.)Let S be a partially ordered set. We say two elements a and b of a partially ordered set are comparable if a ≤ b or b ≤ a. If two elements are not comparable, we say they are incomparable; that is, x and y are incomparable if neither x ≤ y nor y ≤ x.A chain in S is a...
@MartinSleziak Yes, I almost expected the subnets-debate. I was always happy with the order-preserving+cofinal definition and never felt the need for the other one. I haven't thought about Alexei's question really.
When I look at the comments in Norbert's question it seems that the comments together give a sufficient answer to his first question already - and they came very quickly. Nobody said anything about his second question. Wouldn't it be better to divide it into two separate questions? What do you think t.b.?
@tb About Alexei's questions, I spent some time on it. My guess was that it doesn't hold but I wasn't able to find a counterexample. I hope to get back to that question. (But there are already too many questions which I would like to get back to...)
@MartinSleziak I deleted part of my comment since I figured out that I never actually proved that in detail but I'm sure it should work. I needed a bit of summability in topological vector spaces but it's really no problem at all. It's just a special case of nets written differently (as series are a special case of sequences).
It's hard to say just from the sheet music; not having an actual keyboard here. The first line seems difficult, I would guess that second and third are playable. But you would have to ask somebody more experienced.
Having a few experienced users here, do you think that limsup could be a useful tag? I think there are a few questions concerned with the properties of limsup and liminf. Usually they're tagged limit.
@Srivatsan it is unclear what is being asked... Is inner or outer measure of $E$ meant by $m\ast(E)$ (then the question whether it works for non-measurable $E$ has an obvious negative answer since $E$ is measurable if and only if $m^\ast(E) = m_\ast(E)$ assuming completeness, or the question doesn't make sense). If ordinary measure is meant by $m\ast(E)$ then the question doesn't make sense. Either way: the question is incomplete and not answerable in its current form.
A few questions where this tag would (in my opinion) make sense: http://math.stackexchange.com/questions/6168/definitions-for-limsup-and-liminf http://math.stackexchange.com/questions/8489/liminf-of-difference-of-two-sequences http://math.stackexchange.com/questions/60873/limit-supremum-limit-of-a-product http://math.stackexchange.com/questions/60229/limit-supremum-finite-limit-meaning http://math.stackexchange.com/questions/73508/an-exercise-on-liminf-and-limsup http://math.stackexchange.com/questions/85498/limit-of-sequence-of-sets-some-paradoxical-facts
I'm looking for the book "Symmetry Methods for Differential Equations: A Beginner's Guide" by Peter Hydon. Is there some ebooks-site to which I hope my university has a subscription that has this book? ebooks.cambridge.org doesn't seem to have it.
Not sure about uniform continuity questions, but I think they should go under a different tag. I would expect most of "continuity" question be in general-topology and "uniform continuity" in real-analysis.
Here's a challenge for your Google skills... can you locate an online copy of: Walter Rudin, Lebesgue’s first theorem (in L. Nachbin (Ed.), Mathematical Analysis and Applications, Part B, in Advances in Mathematics Supplementary Studies, Vol. 7B, Academic Press, New York, 1981, pp. 741–747)?
No, it was an honest challenge which I myself failed to meet (hence my "what I'm really curious to see..." post). I agree. If it is scanned somewhere it definitely isn't OCR'ed or so new that Google hasn't stumbled over it, yet.
Firstly, I will define what Pythagorean Triples are for those who do not know.
Definition:
A Pythagorean Triple is a group of three integers $a$, $b$ and $c$ such that $a^2+b^2=c^2$, since the Pythagorean Theorem asserts that for any $90^\circ$ (right-angle) triangle $ABC$ with sides $a$, $b$ and $c$, one will always have the equation $a^2+b^2=c^2$.
I was looking at Pythagorean Triples and noticed another property apart from how $a^2+b^2=c^2$. Here are the first $30$ Pythagorean Triples $(a,b,c)$ ordered from smallest to greatest value, i.e. $$(a,b,c)\qquad\text{ s.t. }\qquad a<b<c.\tag*{$\big(\text{s.t. = such that}\big)$}$$
I noticed that $a^2=(c+b)(c-b)$, but that is trivial since $$\begin{align}a^2&=(c+b)(c-b)\tag{given} \\ &=c^2-b^2 \\ \Leftrightarrow\,\,\,\, a^2+b^2&=c^2.\end{align}$$
However, I also noticed that by having "$u\mid v$" be read as "$u$ divides $v$", it appears that $$a+b+c\mid abc.$$ For example, $(a,b,c)=(3,4,5)$ is a classic Pythagorean Triple; $3^2+4^2=5^2$.
Also, $$\begin{align}3+4+5&=12 \\ \& \quad3\times 4\times 5 &= 60. \\ \\ 12 &\,\mid 60 \\ \Leftrightarrow \,\,\,\,3+4+5&\,\mid 3\times 4\times 5.\end{align}$$ This, I cannot prove to be true $-$ but I tested with all the $30$ Pythagorean Triples above, and I have come across no counter-example. Is there a proof? I do not know where to begin myself.
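The divisibility pattern can also be checked by brute force. Below is a hypothetical sketch (the function name and search bound are ours, not from the post) that scans all triples up to a limit and reports the first counterexample, if any:

```python
# Hypothetical brute-force check: for every Pythagorean triple (a, b, c)
# with a < b < c <= limit, test whether a + b + c divides a*b*c.
def first_counterexample(limit):
    for c in range(5, limit + 1):
        for b in range(4, c):
            for a in range(3, b):
                if a * a + b * b == c * c and (a * b * c) % (a + b + c) != 0:
                    return (a, b, c)
    return None  # no counterexample found

print(first_counterexample(200))  # → None
```

A search like this is of course only evidence, not a proof, but it quickly covers far more triples than the $30$ listed above.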
Conjecture:
Given three positive integers $a$, $b$ and $c$, if $a < b<c$ and $a^2+b^2=c^2$, then $$a+b+c\mid abc.$$
Thank you in advance.
Edit:
My conjecture was originally the other way round; i.e. if $a+b+c\mid abc$ then $a^2+b^2=c^2$. But $(a,b,c)=(1,2,3)$ is a counter-example: since $6$ is a Perfect Number, $1+2+3=6$ divides $1\times 2\times 3=6$, yet $1^2+2^2\neq 3^2$.
Artin's axioms do not apply in this case, because the stack is not limit-preserving. They only work with stacks that are locally finitely presented.
In any case, it is easy to give examples of quasi-coherent sheaves whose functor of automorphisms is not representable (for example, an infinite dimensional vector space), and this of course prevents the stack from being algebraic.
As Matthieu says, one should consider coherent sheaves.
[Edit] The question is: what about the stack of coherent sheaves, without flatness hypothesis? First of all, one should interpret "coherent" as meaning "quasi-coherent of finite presentation". The notion of coherent sheaf, as defined in EGA, is not functorial, that is, pullbacks of coherent sheaves are not necessarily coherent. Hartshorne's book defines "coherent" as "quasi-coherent and finitely generated", but this is a useless notion when working with non-noetherian schemes.
The stack of quasi-coherent finitely presented sheaves is not algebraic either. For example, let $k$ be an algebraically closed field, $k[\epsilon] = k[t]/(t^2)$ the ring of dual numbers. If $S = \mathop{\rm Spec} k[\epsilon]$, consider the coherent sheaf $F$ corresponding to the $k[\epsilon]$-module $k = k[\epsilon]/(\epsilon)$. Then I claim that the functor of automorphisms of $F$ over $S$ is not represented by an algebraic space.
Suppose it is represented by an algebraic space $G \to S$. Denote by $p$ the unique rational point of $S$ over $k$; the tangent space of $S$ at $p$ has a canonical generator $v_0$. Furthermore, if $X$ is a $k$-scheme with a rational point $x_0$ and $v$ is a tangent vector of $X$ at $x_0$, then there exists a unique $k$-morphism $S \to X$ sending $p$ to $x_0$ and $v_0$ to $v$. The inverse image of $S_{\rm red} = \mathop{\rm Spec} k$ in $G$ is isomorphic to $\mathbb G_{\mathrm m, k}$; so $G_{\rm red}$ is an affine scheme, hence $G$ is an affine scheme. The differential of the projection $G \to S$ at the origin of $G$ has a $1$-dimensional kernel, the tangent space of $\mathbb G_{\mathrm m, k}$ at the origin. On the other hand there is a unique section $S \to G$ sending $p$ to the origin, corresponding to $1 \in k^* = \mathrm{Aut}_{k[\epsilon]}k$; this means that there is a unique tangent vector of $G$ at the origin mapping to $v_0$. These two facts give a contradiction.
Here is another way to look at this. Given a contravariant functor $F$ on $k$-schemes and an element $p$ of $F(\mathop{\rm Spec}k)$, one can define the tangent space of $F$ at $p$ as the set of elements of $F(\mathop{\rm Spec}k[\epsilon])$ that restrict to $p$. However, in order for this tangent space to be a $k$-vector space, one needs a Schlessinger-like gluing condition on $F$ (this is standard in deformation theory). The analysis above shows that this condition is not satisfied for the functor of automorphisms of $k$ over $k[\epsilon]$.
Let $H$ be a mapping from some normed space $X$ into a normed space $Y$. When solving an equation of the form\begin{equation}H x = y\end{equation}with an ill-posed operator $H$,
Tikhonov Regularization replaces the unstable inverse $H^{-1}$ by a family of stable mappings $R_{\alpha}$ defined by\begin{equation}\label{Ralpha def}R_{\alpha} := (\alpha I + H^{\ast}H)^{-1}H^{\ast},\end{equation}where $\alpha>0$ is known as the regularization parameter. Here, the unbounded operator $(H^{\ast}H)^{-1}$ which appears in the Moore-Penrose Pseudo-Inverse $(H^{\ast}H)^{-1}H^{\ast}$ is replaced by the bounded operator $(\alpha I + H^{\ast}H)^{-1}$. For $y = Hx$ we have$$\begin{array}{rcl}(\alpha I + H^{\ast}H)^{-1}H^{\ast} y & = &(\alpha I + H^{\ast}H)^{-1}H^{\ast} Hx \\& = & (\alpha I + H^{\ast}H)^{-1}( \alpha I + H^{\ast} H - \alpha I ) x \\& = & x - \alpha (\alpha I + H^{\ast}H)^{-1} x \\& \rightarrow & x\end{array}$$for $\alpha \rightarrow 0$, where the convergence\begin{equation}\alpha (\alpha I + H^{\ast}H)^{-1} x \rightarrow 0, \;\; \alpha \rightarrow 0,\end{equation}is shown by spectral arguments. Please note that this convergence is a pointwise convergence in $X$ and does not hold in norm!
The invertibility of $\alpha I + H^{\ast}H$ is obtained by $$ \begin{array}{rcl} \langle x, (\alpha I + H^{\ast}H) x \rangle & = & \alpha \langle x,x \rangle + \langle Hx, Hx \rangle \\ & \geq & \alpha || x ||^2 \end{array} $$ according to the Lax-Milgram Lemma, which also yields $$ || (\alpha I + H^{\ast}H)^{-1} || \leq \frac{1}{\alpha}. $$ The sharper result $$ || (\alpha I + H^{\ast}H)^{-1}H^{\ast} || \leq \frac{1}{2\sqrt{\alpha}} $$ is again shown by spectral arguments and the arithmetic-geometric mean $$ \frac{\mu}{\alpha + \mu^2} \leq \frac{1}{2\sqrt{\alpha}}. $$
We can reformulate the equation $Hx=y$ as the problem of minimizing \begin{equation}J(x) = || y - Hx ||^2, \;\; x \in X, \end{equation}whose minimizer is given by the Moore-Penrose pseudo-inverse. A stabilization is given by adding a term to the functional\begin{equation}J(x) := \alpha || x ||^2 + ||y - Hx||^2, \;\; x \in X. \end{equation}First order optimality conditions for the minimizer lead to $$0 = \nabla_x J(x) = 2\alpha x - 2 H^{\ast}(y - H x),$$i.e.$$(\alpha I + H^{\ast}H )x = H^{\ast} y. $$Thus, the minimizer is obtained by \begin{equation}x_{\alpha} := (\alpha I + H^{\ast}H)^{-1} H^{\ast} y, \end{equation}which coincides with (\ref{Ralpha def}).
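As a minimal numerical sketch (not part of the text; for a real matrix $H$ the adjoint $H^{\ast}$ is the transpose $H^T$), the minimizer $x_{\alpha} = (\alpha I + H^{\ast}H)^{-1}H^{\ast}y$ can be computed directly:

```python
import numpy as np

# Sketch: Tikhonov-regularized solution x_alpha = (alpha I + H^T H)^{-1} H^T y.
def tikhonov(H, y, alpha):
    n = H.shape[1]
    return np.linalg.solve(alpha * np.eye(n) + H.T @ H, H.T @ y)

# Usage on an ill-conditioned design matrix: the regularized normal
# equations remain stably solvable even when H^T H is nearly singular.
H = np.vander(np.linspace(0, 1, 20), 8)   # nearly rank-deficient columns
x_true = np.ones(8)
y = H @ x_true
x_alpha = tikhonov(H, y, 1e-8)
residual = np.linalg.norm(H @ x_alpha - y)
```

For small $\alpha$ the residual is tiny; increasing $\alpha$ trades data fit for stability, exactly as the functional $J$ suggests.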
The operator $H^{\ast}H$ is a self-adjoint operator. If $H$ is compact, then $H^{\ast}H$ is compact as well, and there is an orthonormal system $\varphi_{j}, j \in \mathbb{N},$ of eigenvectors of $H^{\ast}H$ with eigenvalues $\mu_{j}^2$, such that$$H^{\ast}H \varphi_{j} = \mu_{j}^{2} \varphi_{j}, \;\; j \in \mathbb{N}. $$Writing $\psi_j := H\varphi_j/\mu_j$ for the corresponding orthonormal system in $Y$, the pseudo-inverse of $H$ can be written as$$(H^{\ast}H)^{-1}H^{\ast} y = \sum_{j=1}^{\infty} \frac{1}{\mu_{j}} \langle\psi_{j},y\rangle\,\varphi_{j}.$$When $H$ is compact and the $\mu_j$ are sorted in descending order, we know that $\mu_j \rightarrow 0$ for $j\rightarrow \infty$. Here, the ill-posedness of $H^{\ast}H$ is reflected by the unboundedness of $1/\mu_j^2$. Stabilization can be achieved by bounding this unbounded term. A spectral damping scheme is achieved by \begin{equation}R_{\alpha} y := \sum_{j=1}^{\infty} \frac{\mu_j}{\alpha + \mu_{j}^2} \langle\psi_{j},y\rangle\,\varphi_{j},\end{equation}which for $\alpha \rightarrow 0$ tends to $H^{-1}y$ for every fixed $y \in H(X)$. Using the spectral representation, this is readily seen to be identical to the above inverse $(\alpha I + H^{\ast}H)^{-1} H^{\ast}$.
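The claimed identity between the spectral damping scheme and $(\alpha I + H^{\ast}H)^{-1}H^{\ast}$ can be checked numerically for finite matrices via the SVD (a sketch with illustrative names, not from the text):

```python
import numpy as np

# Sketch: spectral damping via the SVD H = U diag(mu) V^T, i.e.
#   R_alpha y = sum_j mu_j/(alpha + mu_j^2) <psi_j, y> phi_j,
# where the columns of V are the phi_j and the columns of U the psi_j.
def spectral_tikhonov(H, y, alpha):
    U, mu, Vt = np.linalg.svd(H, full_matrices=False)
    filt = mu / (alpha + mu**2)        # damped reciprocal singular values
    return Vt.T @ (filt * (U.T @ y))

# Sanity check against the closed form (alpha I + H^T H)^{-1} H^T y:
rng = np.random.default_rng(1)
H = rng.standard_normal((6, 3))
y = rng.standard_normal(6)
x_spec = spectral_tikhonov(H, y, 0.1)
x_dir = np.linalg.solve(0.1 * np.eye(3) + H.T @ H, H.T @ y)
```

Both routes produce the same vector, since $(\alpha I + H^{\ast}H)^{-1}H^{\ast} = V\,\mathrm{diag}\!\big(\mu_j/(\alpha+\mu_j^2)\big)\,U^{\ast}$.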
We refer to the following literature for more details on Tikhonov Regularization:
Let $C$ be the set of all (strictly) increasing functions $\Bbb Z^+\rightarrow\Bbb Z^+$.
First, consider this function $\sigma$ from $A$ to $C$. For each $f$ we define $\sigma(f)=\sigma_f$ as follows:$$\sigma_f(n)=\sum_{j=1}^n f(j)$$This function is bijective. Hence, $\#C=\#A$.
Let's show that indeed, this function is bijective:
Suppose that $\sigma_f=\sigma_g$ and take an arbitrary $n\in\Bbb Z^+$. If $n=1$, then $$f(1)=\sigma_f(1)=\sigma_g(1)=g(1)$$
and if $n\geq2$, then
$$f(n)=\sigma_f(n)-\sigma_f(n-1)= \sigma_g(n)-\sigma_g(n-1)=g(n)$$
This proves that $\sigma$ is injective.
Now, take any function $u\in C$ and define:
$$f(n)=\left\{
\begin{array}{cl}
u(1)&\text{ if }n=1\\
u(n)-u(n-1)&\text{ otherwise}
\end{array}
\right.$$
It is clear that $f\in A$ and that $u=\sigma_f$ and, hence, $\sigma$ is surjective.
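As a side note, the inverse pair at the heart of this argument, partial sums and first differences, can be illustrated on finite prefixes (a sketch, not part of the proof; the function names are made up):

```python
from itertools import accumulate

def sigma_prefix(f_vals):
    # partial sums: the first len(f_vals) values of sigma_f
    return list(accumulate(f_vals))

def diff_prefix(u_vals):
    # first differences: recovers f from u = sigma_f
    return [u_vals[0]] + [u_vals[i] - u_vals[i - 1] for i in range(1, len(u_vals))]

f = [2, 1, 3, 1]              # first values of some f in A (positive integers)
u = sigma_prefix(f)           # [2, 3, 6, 7] -- strictly increasing, so u is in C
assert diff_prefix(u) == f    # the two maps are mutually inverse
```

Positivity of the $f$-values is exactly what makes the partial sums strictly increasing, mirroring the injectivity and surjectivity arguments above.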
We will now define a function $\phi$ from $B$ to $C$. Take a function $f$ from $B$. If the preimage of $1$ is infinite, let $J$ be this preimage. If not, then the preimage of $0$ is infinite, and let then $J$ be this preimage. Since it is a subset of $\Bbb Z^+$, which is well ordered, we can write $J$ as an increasing sequence: $j_1<j_2<\ldots<j_n<\ldots$
Now, if $J$ is the preimage of $1$, define $\phi(f)=\phi_f$ this way:$$\phi_f(n)=2j_n$$And if $J$ is the preimage of $0$:$$\phi_f(n)=2j_n+1$$
The function $\phi:f\mapsto\phi_f$ is injective. Hence $\#B\leq\#C$.
To prove that $\phi$ is injective, let's assume that $\phi_f=\phi_g$ and take $n\in\Bbb Z^+$. If the images of $\phi_f$ are even, then
$$f(n)=1\iff 2n \in\phi_f(\Bbb Z^+)\iff 2n \in\phi_g(\Bbb Z^+)\iff g(n)=1$$
and if the images of $\phi_f$ are odd,
$$f(n)=0\iff 2n+1 \in\phi_f(\Bbb Z^+)\iff 2n+1 \in\phi_g(\Bbb Z^+)\iff g(n)=0$$
Since the images of $f$ and $g$ can be only $0$ or $1$, this proves that $\phi$ is injective.
Last, define the function $\delta:f\mapsto\delta_f$ from $C$ to $B$:$$\delta_f(n)=\left\{\begin{array}{cl}1&\text{ if }n\in f(\Bbb Z^+)\\0&\text{ otherwise}\end{array}\right.$$
This function is injective. Then, $\#C\leq\#B$
So $\#A=\#B=\#C$, q.e.d.
Note: we need this theorem.
If you want an FPT algorithm for the problem (parameterized by treewidth $t$), you want an algorithm working in time $f(t) \cdot n^{O(1)}$, where $f$ is any computable function (depending solely on $t$). Of course, it would be nice to make $f$ as appealing as possible.In addition to the mentioned algorithm running in $O(t^t n)$ time, you can also get a ...
There is an outline of the algorithm you want in these slides: http://www.cs.bme.hu/~dmarx/papers/marx-warsaw-fpt2. Given a nice-tree decomposition of width $w$ for $G$, the algorithm runs in time $O(w^w \cdot n)$. As it is based on a nice-tree decomposition, you will need to show what happens in the case of a forget node, an introduce node, and a join node ...
The classical version of this question is for Hamiltonian cycles, but there is probably little difference. I will only consider the version with cycles.In order for a graph to contain a Hamiltonian cycle, the minimal degree should be at least 2. This is essentially the only obstruction for Hamiltonicity. To state this we need to define the following ...
The statement in CLRS is not wrong in any case; an algorithm that runs in $O(n)$ time also runs in $O(n^2)$ time. Of course, it would be more precise to state the running time as $O(n)$ if this were true, so why doesn't CLRS do this?First off, this depends on the encoding chosen for $G$. If an adjacency matrix is used, a graph with $V$ vertices always has ...
This result is proved in the senior thesis The complexities of puzzles, cross sum and their another solution problems (ASP) by Takahiro Seta.The systematic study of ASPs was initiated by Ueda and Nagao in their paper NP-completeness Results for NONOGRAM via Parsimonious Reductions. See also Takayuki Yato's master thesis, Complexity and completeness of ...
Just add the constraint that $x_{1j}=x_{nj}$ for all $j$, and make a special exception to constraint 2 so that you omit constraint 2 in the case where $i=1$ and $k=n$.Everything works fine for a directed graph. You just need to interpret constraint 5 appropriately.
The $n$-ary Hadamard gate acts on $n$ qubits. The state of the $n$ qubits is a unit norm vector of dimension $2^n$. You can take as basis the vectors $|x\rangle$ for $x \in \{0,1\}^n$.As it happens, the $n$-ary Hadamard gate can be simulated by $n$ unary Hadamard gates acting on the individual qubits, which is what you see on page 6-3. The reason is that ...
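The factorization mentioned above (the $n$-ary Hadamard as a product of unary ones) can be verified for small $n$ with a Kronecker product; this sketch is ours, not from the lecture notes being discussed:

```python
import numpy as np

# Sketch: the n-qubit Hadamard is the Kronecker product of n unary Hadamards.
H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # unary Hadamard gate
H2 = np.kron(H1, H1)                            # 2-qubit Hadamard, 4 x 4

# Every entry has magnitude 1/2, and the gate is its own inverse (unitary
# and self-adjoint), as expected of a Hadamard transform.
```

Applying `H1` to each qubit separately therefore has the same effect on the $2^n$-dimensional state vector as applying the single big matrix.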
Recall a Hamiltonian cycle visits each vertex of a graph exactly once. Thus, the cycles H1 and H2 in your second example can't be Hamiltonian cycles in the same graph.In your first example, the two cycles could be distinct Hamiltonian cycles in e.g. the complete graph on 4 vertices. That is to say, there can definitely be multiple Hamiltonian cycles in a ...
This is NOT an answer. This post is just to construct a concrete numeric example out of the left diagram in the picture in the question.(This answer, too long to fit as a comment, was requested by a user who wanted to see a concrete numeric example which his greedy algorithm may fail. It was written about the same time when OP added a numeric example that ...
Does a graph have a hamiltonian cycle if for any one of its edges, it has a hamiltonian path after the removal of that edge?Note that any hypohamiltonian graph must be such a graph. Is it even true that every hypohamiltonian graph has a hamiltonian cycle?Not necessarily.A counterexample is the Petersen graph, which is hypohamiltonian but not hamiltonian....
Counting the number of Hamiltonian circuits in a graph is $\mathsf{\#P}$-hard (see for example this answer). Your problem is even harder, since we can use your problem together with binary search to count the number of Hamiltonian circuits in a given graph.As an aside, let me mention that listing things in lexicographical order might be harder than listing ...
Perhaps the most comprehensive list is available via ISGCI, along with references. Examples for which the problem is easy include bounded treewidth graphs and many subclasses of chordal graphs.As to why exponential time seems unavoidable in general, we don't strictly know. As far as we know, P = NP is also possible.
There is no "purpose" for the condition that the number of vertices be divisible by 3 – it is part of the problem statement. DHAM3 is the special case of DHAM in which the number of vertices is divisible by 3. In principle this might make the problem easier (a special case can only be easier), but in this case DHAM3 is also NP-hard (and so NP-complete), as ...
In order to prove NP-completeness of a given problem $\Pi$, it is enough to prove that $\Pi \in$ NP and that there exists a polynomial time reduction from any NP-complete problem $\Pi^*$ to $\Pi$. The reduction must take instances of $\Pi^*$ and return instances of $\Pi$, in polynomial time, such that the answer is the same in both $\Pi^*$ and $\Pi$. A ...
I was approaching the following problem:
"Let $f \colon X \to Y$ be continuous. Is it true that if $x$ is a limit point of $A \subset X$ then $f(x)$ is a limit point of $f(A)$?"
The answer is that it is false and here is a counterexample I found: $X = \mathbb{R}$ with the standard topology, $Y = \mathbb{N}$ with the discrete topology and finally $f(x) = 1$ for every $x \in \mathbb{R}$. (It's a counterexample since $2$ is a limit point of $[0,58]$ but $f(2)$ is not a limit point of $f([0,58])$. To prove this, just notice that $\{1\}$ is an open neighborhood of $f(2)$ but $\{1\}\cap (f([0,58]) \setminus \{f(2)\}) = \emptyset$.)
Now you are all thinking "where's the problem then?"... The problem is that my intuition failed and I tried to prove the statement for a while before trying to find a counterexample!
What can I do to avoid this problem in future? Is there any intuition I should have had to start looking for counterexamples before trying to prove the affirmative result?
Is there any polynomial $f(x,y)\in{\mathbb Q}[x,y]{}$ such that $f\colon\mathbb{Q}\times\mathbb{Q} \rightarrow\mathbb{Q}$ is a bijection?
Jonas Meyer's comment:
Quote from arxiv.org/abs/0902.3961, Bjorn Poonen, Feb. 2009: "Harvey Friedman asked whether there exists a polynomial $f(x,y)\in Q[x,y]$ such that the induced map $Q × Q\to Q$ is injective. Heuristics suggest that most sufficiently complicated polynomials should do the trick. Don Zagier has speculated that a polynomial as simple as $x^7+3y^7$ might already be an example. But it seems very difficult to prove that any polynomial works. Our theorem gives a positive answer conditional on a small part of a well-known conjecture." – Jonas Meyer
Added June 2019 Poonen's paper is published as:
This is a link to a new, crowdsourced attempt to resolve this question (at least conditional on the assumption of some strong number-theoretic conjectures) being led by Terry Tao.
Up to this point, we have discussed inferences regarding a single population parameter (e.g., μ, p, \(\sigma^2\)). We have used sample data to construct confidence intervals to estimate the population mean or proportion and to test hypotheses about the population mean and proportion. In both of these chapters, all the examples involved the use of one sample to form an inference about one population. Frequently, we need to compare two sets of data, and make inferences about two populations. This chapter deals with inferences about two means, proportions, or variances. For example:
You are studying turkey habitat and want to see if the mean number of brood hens is different in New York compared to Pennsylvania. You want to determine if the treatment used in Skaneateles Lake has reduced the number of milfoil plants over the last three years. Is the proportion of people who support alternative energy in California greater compared to New York? Is the variability in application different between two mist blowers?
These questions can be answered by comparing the differences of:
Mean number of hens in NY to the mean number of hens in PA. Number of plants in 2007 to the number of plants in 2010. Proportion of people in CA to the proportion of people in NY. Variances between the mist blowers.
This chapter is comprised of five sections. The first and second sections examine inferences about two means with two independent samples. The third section examines inferences about means with two dependent samples, the fourth section examines inferences about two proportions, and the fifth section examines inferences between two variances.
Inferences about Two Means with Independent Samples (Assuming Unequal Variances)
Using independent samples means that there is no relationship between the groups. The values in one sample have no association with the values in the other sample. For example, we want to see if the mean life span for hummingbirds in South Carolina is different from the mean life span in North Carolina. These populations are not related, and the samples are independent. We look at the difference of the independent means.
In Chapter 3, we did a one-sample t-test where we compared the sample mean (\(\bar {x}\)) to the hypothesized mean (μ). We expect that \(\bar {x}\) would be close to μ. We use the sample mean, the sample standard deviation, and the sample size for the one-sample test.
With a two-sample t-test, we compare the population means to each other and again look at the difference. We expect that \(\bar {x_1}-\bar {x_2}\) would be close to \(\mu_{1} – \mu_{2}\). The test statistic will use both sample means, sample standard deviations, and sample sizes for the test.
For a one-sample t-test we used \(\frac {s}{\sqrt{n}}\) as a measure of the standard deviation (the standard error). We can rewrite $$\frac {s}{\sqrt{n}} \rightarrow \sqrt {\frac {s^2}{n}}.$$ The numerator of the test statistic will be \((\bar {x_1} - \bar{x_2})-(\mu_{1} - \mu_{2})\). This has a standard deviation of \(\sqrt {\frac {s^2_1}{n_1}+\frac {s^2_2}{n_2}}\).
A two-sample t-test follows the same four steps we saw in Chapter 3.
Write the null and alternative hypotheses. State the level of significance and find the critical value. The critical value, from the student’s t-distribution, has the lesser of \(n_1-1\) and \(n_2-1\) degrees of freedom. Compute the test statistic. Compare the test statistic to the critical value and state a conclusion.
The assumptions we saw in Chapter 3 still must be met. Both samples come from independent random samples. The populations must be normally distributed, or both have large enough sample sizes (\(n_1\) and \(n_2 \ge 30\)). We will also use the same three pairs of null and alternative hypotheses.
Table 1. Null and alternative hypotheses.
Rewriting the null hypothesis of μ1 = μ2 to μ1 – μ2 = 0, simplifies the numerator. The test statistic is Welch’s approximation (Satterthwaite Adjustment) under the assumption that the independent population variances are not equal.
$$t=\frac {(\bar {x_1}-\bar {x_2})-(\mu_{1}-\mu_{2})}{\sqrt {\frac {s^2_1}{n_1}+\frac {s^2_2}{n_2}}}$$
This test statistic follows the student’s t-distribution with the degrees of freedom
adjusted by
$$df=\frac {(\frac {s^2_1}{n_1} + \frac {s^2_2}{n_2})^2}{\frac {1}{n_1-1}(\frac {s^2_1}{n_1})^2+\frac {1}{n_2-1}(\frac {s^2_2}{n_2})^2}$$
A simpler alternative to determining degrees of freedom when working a problem long-hand is to use the lesser of \(n_1-1\) or \(n_2-1\) as the degrees of freedom. This method results in a smaller value for degrees of freedom and therefore a larger critical value. This makes the test more conservative, requiring more evidence to reject the null hypothesis.
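Both degrees-of-freedom rules are easy to compute directly. The following sketch (function names are ours) uses the sample values from the worked example below (\(s_1=9.4, n_1=51, s_2=10.7, n_2=56\)):

```python
def welch_df(s1, n1, s2, n2):
    # Welch-Satterthwaite degrees of freedom for two independent samples
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

def conservative_df(n1, n2):
    # simpler, more conservative rule: the lesser of n1 - 1 and n2 - 1
    return min(n1 - 1, n2 - 1)

print(round(welch_df(9.4, 51, 10.7, 56), 1))  # → 104.9
print(conservative_df(51, 56))                # → 50
```

The gap between 104.9 and 50 degrees of freedom is why the shortcut yields a larger critical value and hence a more conservative test.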
Example \(\PageIndex{1}\):
A forester is studying the number of cavity trees in old growth stands in Adirondack Park in northern New York. He wants to know if there is a significant difference between the mean number of cavity trees in the Adirondack Park and the old growth stands in the Monongahela National Forest. He collects two independent random samples from each forest. Use a 5% level of significance to test this claim.
Adirondack Park
Monongahela Forest
\(n_1\) = 51 stands
\(n_2\) = 56 stands
\(\bar {x_1}\)= 39.6
\(\bar {x_2}\)= 43.9
\(s_1\) = 9.4
\(s_2\) = 10.7
1) \(H_0: \mu_1 = \mu_2\) or \(\mu_1 - \mu_2 = 0\). There is no difference between the two population means.
\(H_1: \mu_1 \ne \mu_2\). There is a difference between the two population means.
2) The level of significance is 5%. This is a two-sided test so alpha is split into two sides. Computing degrees of freedom using the equation above gives approximately 105 degrees of freedom.
$$df = \frac {(\frac {9.4^2}{51}+\frac {10.7^2}{56})^2}{\frac {1}{51-1}(\frac {9.4^2}{51})^2+\frac {1}{56-1}(\frac {10.7^2}{56})^2}=104.9$$
The critical value \(t_{\frac {\alpha}{2}}\), based on 100 degrees of freedom (closest value in the t-table), is ±1.984. Using 50 degrees of freedom, the critical value is ±2.009.
3) The test statistic is
$$t=\frac {(\bar {x_1} - \bar {x_2}) - (\mu _1 - \mu_2)}{\sqrt {\frac {s_1^2}{n_1}+\frac {s_2^2}{n_2}}} =\frac {(39.6-43.9)-(0)}{\sqrt{\frac {9.4^2}{51}+\frac {10.7^2}{56}}} = -2.213$$
4) The test statistic falls in the rejection zone.
Figure 1. A comparison of the critical values and test statistic.
We reject the null hypothesis. We have enough evidence to support the claim that there is a difference in the mean number of cavity trees between the Adirondack Park and the Monongahela National Forest.
Construct and Interpret a Confidence Interval about the Difference of Two Independent Means
A hypothesis test will answer the question about the difference of the means. BUT, we can answer the same question by constructing a confidence interval about the difference of the means. This process is just like the confidence intervals from Chapter 2.
Find the critical value. Compute the margin of error. Point estimate ± margin of error.
Because we are working with two samples, we must modify the components of the confidence interval to incorporate the information from the two populations.
The point estimate is \(\bar {x_1} -\bar {x_2}\). The standard error, taken from the denominator of the test statistic, is \(\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}}\). The critical value \(t_{\frac {\alpha}{2}}\) comes from the student’s t-table.
The confidence interval takes the form of the point estimate plus or minus the margin of error (the critical value times the standard error of the difference).
$$\bar {x_1} -\bar {x_2} \pm t_{\frac {\alpha}{2}}\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}}$$
We will use the same three steps to construct a confidence interval about the difference of the means.
1) critical value \(t_{\frac {\alpha}{2}}\) 2) \(E = t_{\frac {\alpha}{2}}\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}}\) 3) \(\bar {x_1} -\bar {x_2} \pm E\)
Example \(\PageIndex{2}\):
Let’s look at the mean number of cavity trees in old growth stands again. The forester wants to know if there is a difference between the mean number of cavity trees in old growth stands in the Adirondack forests and in the Monongahela Forest. We can answer this question by constructing a confidence interval about the difference of the means.
1) \(t_{\frac {\alpha}{2}}\) = 2.009
2) \(E = t_{\frac {\alpha}{2}}\sqrt {\frac {s_1^2}{n_1} +\frac {s^2_2}{n_2}} = 2.009 \sqrt {\frac {9.4^2}{51}+\frac {10.7^2}{56}}=3.904\)
3) \(\bar {x_1} -\bar {x_2} \pm 3.904\)
The 95% confidence interval for the difference of the means is (-8.204, -0.396).
We can be 95% confident that this interval contains the mean difference in number of cavity trees between the two locations. BUT, this doesn’t answer the question the forester asked. Is there a difference in the mean number of cavity trees between the Adirondack and Monongahela forests? To answer this, we must look at the confidence interval interpretations.
Confidence Interval Interpretations
If the confidence interval contains all positive values, we find a significant difference between the groups, AND we can conclude that the mean of the first group is significantly greater than the mean of the second group. If the confidence interval contains all negative values, we find a significant difference between the groups, AND we can conclude that the mean of the first group is significantly less than the mean of the second group. If the confidence interval contains zero (it goes from negative to positive values), we find NO significant difference between the groups.
In this problem, the confidence interval is (-8.204, -0.396). We have all negative values, so we can conclude that there is a significant difference in the mean number of cavity trees AND that the mean number of cavity trees in the Adirondack forests is significantly less than the mean number of cavity trees in the Monongahela Forest. The confidence interval gives an estimate of the mean difference in number of cavity trees between the two forests. There are, on average, 0.396 to 8.204 fewer cavity trees in the Adirondack Park than the Monongahela Forest.
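The interval above can be reproduced in a few lines of Python (a sketch; the critical value 2.009 is the 50-degrees-of-freedom table value used in the text):

```python
import math

x1bar, x2bar = 39.6, 43.9
s1, s2, n1, n2 = 9.4, 10.7, 51, 56
t_crit = 2.009  # t-table value for 50 degrees of freedom

se = math.sqrt(s1**2 / n1 + s2**2 / n2)  # standard error of the difference
E = t_crit * se                          # margin of error
lo, hi = (x1bar - x2bar) - E, (x1bar - x2bar) + E
print(round(lo, 3), round(hi, 3))        # the 95% CI for mu1 - mu2
```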
P-value Approach
We can also use the p-value approach to answer the question. Remember, the p-value is the area under the t-distribution curve beyond the test statistic. This example is a two-sided test (\(H_1: \mu_1 ≠ \mu_2\)), so the p-value, when computed by hand, will be multiplied by two.
The test statistic equals -2.213, so the p-value is two times the area to the left of -2.213. We can only estimate the p-value using the student’s t-table. Using the lesser of n1– 1 or n2– 1 as the degrees of freedom, we have 50 degrees of freedom. Go across the 50 row in the student’s t-table until you find the absolute value of the test statistic. In this case, 2.213 falls between 2.109 and 2.403. Going up to the top of each of those columns gives you the estimate of the p-value (between 0.02 and 0.01).
Table 2. Student t-Distribution
Doubling each bound gives 0.02 < p < 0.04: the p-value is greater than 0.02 but less than 0.04. This is less than the level of significance (0.05), so we reject the null hypothesis. There is enough evidence to support the claim that there is a significant difference in the mean number of cavity trees between the areas.
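Rather than bracketing the p-value from a table, it can be computed directly. Python's standard library has no Student's t CDF, so the sketch below integrates the t density numerically (the function name `t_upper_tail` is invented for this illustration):

```python
import math

def t_upper_tail(t, df, upper=60.0, steps=20000):
    """Approximate P(T > t) for Student's t via Simpson's rule.
    The tail beyond `upper` is negligible for moderate df."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    pdf = lambda x: c * (1 + x * x / df) ** (-(df + 1) / 2)
    h = (upper - t) / steps
    total = pdf(t) + pdf(upper)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * pdf(t + i * h)
    return total * h / 3

# Two-sided p-value for the test statistic above, with 50 degrees of freedom
p = 2 * t_upper_tail(2.213, 50)
print(round(p, 4))  # falls between 0.02 and 0.04, as the table bracketing suggests
```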
Example \(\PageIndex{3}\):
Researchers are studying the relationship between logging activities in the northern forests and amphibian habitats. They were comparing moisture levels between old-growth and post-harvest habitats. The researchers believe that post-harvest habitat has a lower moisture level. They collected data on moisture levels from two independent random samples. Test their claim using a 5% level of significance.
\(n_1\) = 26
\(n_2\) = 31
\(\bar {x_1}\) = 0.62 g/cm\(^3\)
\(\bar {x_2}\) = 0.56 g/cm\(^3\)
\(s_1\) = 0.12 g/cm\(^3\)
\(s_2\) = 0.17 g/cm\(^3\)
H0: μ1 = μ2 or μ1 – μ2 = 0. There is no difference between the two population means.
H1: μ1 > μ2. Mean moisture level in old growth forests is greater than post-harvest levels.
We will use the critical value based on the lesser of n1– 1 or n2– 1 degrees of freedom. In this problem, there are 25 degrees of freedom and the critical value is 1.708. Now compute the test statistic.
$$t=\frac {(0.62-0.56)-0}{\sqrt {\frac {0.12^2}{26}+\frac {0.17^2}{31}}} = 1.556$$
The test statistic does not fall in the rejection zone. We fail to reject the null hypothesis. There is not enough evidence to support the claim that the moisture level is significantly lower in the post-harvest habitat.
Now answer this question by constructing a 90% confidence interval about the difference of the means.
1) \(t_{\frac {\alpha}{2}}\) = 1.708
2) E = \(t_{\frac {\alpha}{2}}\)\(\sqrt {\frac {s_1^2}{n_1}+\frac {s^2_2}{n_2}}=1.708\sqrt {\frac {0.12^2}{26}+\frac {0.17^2}{31}}=0.0658\)
3) \(\bar {x_1} -\bar {x_2} \pm E= (0.62-0.56) ±0.0658\)
The 90% confidence interval for the difference of the means is (-0.0058, 0.1258). The values in the confidence interval run from negative to positive, indicating that there is no significant difference in the mean moisture levels between old growth and post-harvest stands.
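Example 3's arithmetic can be double-checked the same way (a Python sketch using the values given above):

```python
import math

x1bar, x2bar = 0.62, 0.56
s1, s2, n1, n2 = 0.12, 0.17, 26, 31
t_crit = 1.708  # t-table value for 25 degrees of freedom

se = math.sqrt(s1**2 / n1 + s2**2 / n2)
t = (x1bar - x2bar) / se          # test statistic
E = t_crit * se                   # margin of error for the 90% CI
lo, hi = (x1bar - x2bar) - E, (x1bar - x2bar) + E
print(round(t, 3), round(E, 4), round(lo, 4), round(hi, 4))
```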
Software Solutions
Minitab
Two-sample T for old vs. post

        N    Mean   StDev   SE Mean
old     26   0.620  0.121   0.024
post    31   0.559  0.172   0.031
Difference = \(\mu_{(old)} – \mu_{(post)}\)
Estimate for difference: 0.0603
95% lower bound for difference: -0.0049
T-Test of difference = 0 (vs >): T-Value = 1.55 p-Value = 0.064 DF = 53
The p-value (0.064) is greater than the level of significance (0.05), so we fail to reject the null hypothesis.
Additional example: www.youtube.com/watch?v=7pIb-GVixFo.
Excel
                                Variable 1   Variable 2
Mean                            0.619615     0.559355
Variance                        0.014708     0.02948
Observations                    26           31
Hypothesized Mean Difference    0
df                              54
t Stat                          1.557361
\(P(T\le t)\) one-tail          0.063809
t Critical one-tail             1.673565
\(P(T\le t)\) two-tail          0.127617
t Critical two-tail             2.004879
The one-tail p-value (0.063809) is greater than the level of significance, therefore, we fail to reject the null hypothesis. |
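The two printouts disagree on the degrees of freedom (Minitab's DF = 53 versus Excel's df = 54). The Welch/Satterthwaite formula gives a non-integer value, and the two packages appear to round it differently; a sketch:

```python
# Welch/Satterthwaite degrees of freedom for Example 3's data.
# Minitab appears to truncate the result down, while Excel appears
# to round it, which would explain the 53 vs. 54 discrepancy.
v1 = 0.12**2 / 26
v2 = 0.17**2 / 31
df = (v1 + v2) ** 2 / (v1**2 / (26 - 1) + v2**2 / (31 - 1))
print(df)  # between 53 and 54
```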
Question 1
$$ H(e^{j\omega})=\sum_{n=0}^{N-1}h[n]e^{-jn\omega} =\mathbf{c}^H(\omega)\cdot \mathbf{h} \tag{1} $$ $$ =\mathbf{h}^H\cdot\mathbf{c}(\omega) \tag{2} $$
$$H(\mathbf{h})=\sum_{k=1}^Kh[k]e^{j\omega_k}\tag{3}$$
When designing the filter in MATLAB:
$-$ using $(1)$ and $(3)$, the length of the filter is $K$.
$-$ using $(2)$, the length of the filter is $N$.
What is the difference between these two designs? Can I apply the 'transpose' operation to $D(\omega)$, the weighting function, and the filter length?
Question 2
Application in MATLAB
I used the same desired and actual functions as in 'cfirls'.
%% Desired coefficients
N = 61;   % desired filter length
tau = 26; % desired passband group delay
% frequency grid
f = [linspace(-1,-.18,164), linspace(-.1,.3,80), linspace(.38,1,124)];
% desired frequency response
d = [zeros(1,164), ones(1,80), zeros(1,124)] .* exp(-1i*pi*f*tau);
w = [10*ones(1,164), ones(1,80), 10*ones(1,124)];
A = w(:,ones(1,N)) .* exp(-1i*pi*f*(0:N-1));
h = A \ (w.*d);
%% actual function
c = [zeros(1,164), ones(1,80), zeros(1,124)] * exp(-1i*pi*f*(0:N-1));
H = h' .* c;
%% error function
E = abs(H-d)^2;
S = sum(E');
plot(f,S)
In my work I need to make the error as small as possible. What should I change in this code?
The repeating decimal 0.36666... in base 8 can be written as a fraction in base 8. I understand simple patterns, such as 1/9 in base 10 being 0.1111..., so 1/7 in base 8 is 0.1111.... But I'm not sure how to convert this numeral to a fraction in the same base.
\begin{align} 0.3\bar{6}_8 &= \frac{3}{8} + 6\left(\frac{1}{8^2}+\frac{1}{8^3} + \cdots\right)\\ &= \frac{3}{8} + \frac{6}{8^2}\left(1+\frac{1}{8}+ \frac{1}{8^2} +\cdots\right)\\ &= \frac{3}{8} + \frac{6}{8^2}\frac{1}{1-(1/8)} & \text{geometric series}\\ &= \frac{3}{8} + \frac{3}{28}\\ &= \frac{27}{56}\\ &= \frac{33_8}{70_8}. \end{align}
You could just do all your thinking in base 8. To save writing all the subscripts in the following computations I'll omit the base 8 designation. Legal digits are $0$ through $7$. It's a little mindbending, but only because we're used to base 10.
Let $x=0.3666\ldots$. Then
$$ 10x = 3.666\ldots = 3 + 6/7 = (25 + 6)/7 = 33/7 $$
so $x=33/70$.
I used the facts that multiplying by 10 just shifts the "decimal" point, $3 \times 7 = 25$ and $6/7 = 0.66\ldots$.
We use subscript $8$ to denote base $8$ numbers, and all other numbers are base $10$. Take $x = 0.3\overline{6}_8$. Then $7x = 8x - x = 3.\overline{6}_8 - 0.3\overline{6}_8 = 3.3_8 = 27/8$. It follows that $x = 27/56 = 33_8/70_8$.
We compute directly in base $8$ as in base $10$:
Set $y=0.3666\dots$, $x=0.666\dots$. Then \begin{align*}8y&=3+0.666\dots=3+x\\ 8x&=6.666\dots=6+x \end{align*} Thus $$7x=6,\quad\text{so}\enspace x=\frac 67\enspace\text{and}\enspace 8y=3+\frac 67=\frac{3\times 7+6}7=\frac{25+6}7 =\frac{33}7,$$ because in base $8$, $\;3\times 7=25,\;5+6=13$, so that finally $\; y=\dfrac{33}{8\times7}=\color{red}{\dfrac{33}{70}}$. |
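The arithmetic in these answers can be checked mechanically with Python's `fractions` module; base-8 numerals convert with `int(..., 8)`:

```python
from fractions import Fraction

# 0.3666..._8 = 3/8 + (6/8^2) * 1/(1 - 1/8), summing the geometric series
x = Fraction(3, 8) + Fraction(6, 64) / (1 - Fraction(1, 8))
print(x)  # 27/56

# The same value written with base-8 numerator and denominator: 33_8 / 70_8
print(Fraction(int("33", 8), int("70", 8)))  # 27/56
```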
The question was the following:
There are $28$ students in a class, $15$ study chemistry, $18$ study physics and $2$ study neither chemistry nor physics.
Calculate the probability that a student chosen at random studies both chemistry and physics.
My approach was as follows:
We pick a random person; the chance that this person studies physics is $\frac{18}{28}$ (because $18$ people study physics out of a total of $28$). Now given that he studies physics, what is the chance that he also studies chemistry? I would say $P(\text{chem}|\text{phy}) = \frac{15}{26}$, because since we already know that the person studies physics we can exclude the two persons who study neither, leaving $26$ in total, of which $15$ study chemistry. Concluding that: $P(\text{phy and chem}) = P(\text{phy})P(\text{chem}|\text{phy}) = \frac{18}{28}\cdot\frac{15}{26} \approx 0.37$.
However, the solution provided elsewhere states the following approach:
$P(\text{phy and chem}) = P(\text{phy}) + P(\text{chem}) - P(\text{phy}\cup \text{chem})$. Where $P(\text{phy}) = \frac{18}{28}$, $P(\text{chem}) = \frac{15}{28}$, and $P(\text{phy}\cup \text{chem}) = 1 - P(\overline{\text{phy}\cup \text{chem}}) = 1 - P(\overline{\text{phy}} \cap \overline{\text{chem}}) = 1 - \frac{2}{28} = \frac{26}{28}$. Concluding that: $P(\text{phy and chem}) = \frac{18}{28}+\frac{15}{28} - \frac{26}{28} = \frac{1}{4} = 0.25$.
The two answers differ. I think I made a mistake in calculating the conditional probability but I can't identify where. Please comment on this. |
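For what it's worth, the counts behind the provided solution can be sanity-checked with plain Python (nothing here beyond the numbers given in the problem):

```python
total, chem, phys, neither = 28, 15, 18, 2

either = total - neither     # students taking at least one subject: 26
both = chem + phys - either  # inclusion-exclusion: 15 + 18 - 26 = 7
print(both / total)          # 0.25

# The product attempted in the question, for comparison:
print(round((phys / total) * (chem / either), 2))  # about 0.37
```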
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Highlights of experimental results from ALICE
(Elsevier, 2017-11)
Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE
(Elsevier, 2017-11)
We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ...
System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ...
Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions
(Elsevier, 2017-11)
Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ...
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ... |
From Hartshorne:
If $Y$ is an irreducible subset of $X$, then its closure $\overline{Y}$ in $X$ is also irreducible.
By irreducible they mean that $Y$ cannot be written $Y_1 \cup Y_2$ for two proper subsets $Y_i$ of $Y$ that are closed in the subspace topology of $Y$.
My attempt
Suppose that $\overline{Y} = Y_1 \cup Y_2$, both $Y_1, Y_2$ closed in $\overline {Y}$. Since the closed subsets of $\overline{Y}$ are precisely the closed subsets of $X$ intersected with $\overline{Y}$, we have that $Y_i = \overline{Y} \cap Y_i'$ for some closed $Y_i'$ in $X$. In other words, each $Y_i$ is closed in the whole space $X$ as well as in $\overline{Y}$.
By a previous statement, if $Y$ is irreducible and $U \subset Y$ is a nonempty open subset, then $U$ is both irreducible (as a space in its own right) and dense in $Y$.
So we have that $U_i = \overline{Y} \setminus Y_i$ is irreducible in itself and dense in $\overline{Y}$, meaning $\overline{U}_i = \overline{Y} \setminus \text{Int} (Y_i) = \overline{Y}$.
Where to next? |
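For what it's worth, one standard continuation (sketched here; it may not be the route the book intends) avoids complements entirely and pulls the decomposition back to $Y$: since
$$Y = Y \cap \overline{Y} = (Y \cap Y_1) \cup (Y \cap Y_2),$$
and each $Y \cap Y_i$ is closed in $Y$, irreducibility of $Y$ forces $Y = Y \cap Y_i$, i.e. $Y \subseteq Y_i$, for some $i$. Because $Y_i$ is closed in $X$, taking closures gives $\overline{Y} \subseteq Y_i$, hence $Y_i = \overline{Y}$, so the decomposition is not into proper subsets.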
Consider an $n\times n$ chessboard whose top-left corner is colored white. Alice likes darkness, so she wants you to cover those white cells for her. The only tools you have are black L-shaped tiles, each of which covers $3$ unit cells.
Formally, each tile covers unit cells satisfying the following:
Two of the cells are adjacent to the third (each shares a side with it). The three cells do not all lie on the same row or the same column. No two tiles may overlap (cover the same cell) or extend outside the board.
Since these tiles cost a lot, you have to cover all the white cells using the minimum number of tiles.
Example: $1\times 1$
Answer: Impossible. There is a single cell, which is white; since one tile needs $3$ empty cells, there is no way to cover it.
Example: $4\times 4$
Answer: $4$ ($4$ tiles can be placed as shown)
Example: $7 \times 7$
If each tile can be represented by a number, and each uncovered piece of board can be represented by 'zero', then the answer for a $7 \times 7$ board is $16$:
$$ \begin{bmatrix} 16& 16& 15& 15& 14& 14& 13 \\ 16& 12& 15& 11& 14& 13& 13 \\ 12& 12& 11& 11& 10& 10& 9 \\ 8& 8& 7& 6& 10& 9& 9 \\ 8& 7& 7& 6& 6& 2& 2 \\ 5& 5& 4& 3& 3& 1& 2 \\ 5& 0& 4& 4& 3& 1& 1\\ \end{bmatrix} $$
Question
For any given $n$, what will be the minimum number of tiles?
(Note: Answer exists for odd value of $n \geq 7$) |
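The $7 \times 7$ certificate above can be verified mechanically. The sketch below (plain Python; the helper names are ad hoc) checks that every nonzero label covers exactly three cells forming an L-tromino and that every white cell is covered:

```python
board = [
    [16,16,15,15,14,14,13],
    [16,12,15,11,14,13,13],
    [12,12,11,11,10,10, 9],
    [ 8, 8, 7, 6,10, 9, 9],
    [ 8, 7, 7, 6, 6, 2, 2],
    [ 5, 5, 4, 3, 3, 1, 2],
    [ 5, 0, 4, 4, 3, 1, 1],
]

def is_l_tromino(cells):
    """Three cells form an L iff they occupy 3 of the 4 cells of a 2x2 box."""
    rows = sorted({r for r, _ in cells})
    cols = sorted({c for _, c in cells})
    return (len(cells) == 3 and len(rows) == 2 and len(cols) == 2
            and rows[1] - rows[0] == 1 and cols[1] - cols[0] == 1)

tiles = {}
for r, row in enumerate(board):
    for c, label in enumerate(row):
        if label:
            tiles.setdefault(label, []).append((r, c))

assert all(is_l_tromino(cells) for cells in tiles.values())
# top-left is white, so cell (r, c) is white when r + c is even
assert all(board[r][c] != 0 for r in range(7) for c in range(7) if (r + c) % 2 == 0)
print(len(tiles))  # 16 tiles
```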
The form of the rendering equation that uses only the BRDF ($f$ in your example, often called $f_r$) and integrates over one hemisphere does not account for transmission.When adding in transmission, it's fairly common to add a second integral over the opposite hemisphere, using a different BTDF function (bidirectional transmission distribution function). ...
Your image definitely does not look correct, and it appears that you are not correctly computing the internal path of light rays as they travel through your mesh. From the looks of it, I would say that you are computing the distance between the point where the view ray first enters the cube and where it first hits the interior wall, and using that as your ...
A set of techniques to avoid explicit ordering go under the name of Order Independent Transparency (OIT for short).There are lots of OIT techniques.Historically one is Depth Peeling. In this approach you first render the front-most fragments/pixels, then you find the closest to the one found in the previous step and so forth, going on with as many "...
Premultiplied alpha itself does not give you order independent transparency, no. This page talks about how it can be used as part of an order independent transparency solution, however: http://casual-effects.blogspot.com/2015/03/implemented-weighted-blended-order.html Other benefits of premultiplied alpha include: better mipmaps for textures that contain ...
Two suggestions:If your data is from an image you are displaying on a standard monitor, the chances are it is (or you are implicitly assuming that it's) in sRGB format. This means that the colour components are not linear. Ideally, you should first map into a linear colour space, do your filtering (e.g. blurring) operations, and then map back.*(If you ...
The basic equation for alpha blending is as follows: $$ c_\text{final} = c_\text{source} \cdot \alpha + c_\text{dest} \cdot (1 - \alpha) $$ Here, $c_\text{source}$ is the color of the thing being blended, $c_\text{dest}$ is the background onto which you're blending it, and $\alpha$ is between 0 and 1. In your case, $c_\text{source} = 0$ (black fading ...
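The blend equation from that answer in runnable form (a minimal single-channel sketch; channel values are assumed to be floats in [0, 1]):

```python
def blend(src, dst, alpha):
    """Classic 'over' blending for one color channel."""
    return src * alpha + dst * (1 - alpha)

# Black (src = 0) fading out over a mid-gray background:
# alpha 1 gives pure black, alpha 0 leaves the background untouched.
bg = 0.5
for a in (1.0, 0.5, 0.0):
    print(a, blend(0.0, bg, a))
```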
One option is to use depth peeling. Essentially, one processes the scene a set number of times (say, n times) in order to determine the closest, second-closest, all the way to nth-closest fragments of the scene. This processing is done by first applying a regular depth test to the entire scene (which naturally returns the closest surface). One then uses ...
The common way to render transparent polygons in a rasterizer is by use of Alpha Blending, which basically combines the colour of the supposedly transparent pixel with the colour of the background at that pixel's position, this way rendering the pixel transparently onto what was already rendered before. This technique, however, requires your polygons (or at ...
PaulHK is right in what he said: you have to consider that there may be more than 2 transparent objects behind each other.Also, the idea of deferred shading is to render the geometry only once to be more efficient. If you render the geometries multiple times, you lose (part of) your efficiency. Moreover, the lighting is deferred, thus you'd need to do ...
From the proof of premultiplied alpha blending, there is an assumption that "the operator must respect the associative rule." So it may lead to confusion about the order of processing. The operator is not commutative, so blend(a,b) is not the same as blend(b,a). However, blend(blend(a,b),c) returns the same value as blend(a,blend(b,c)), but blend(blend(a,b),c) does ...
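That associativity claim is easy to check numerically. A sketch of the premultiplied-alpha 'over' operator (colors stored as (r, g, b, a) tuples with rgb already multiplied by a; the values below are arbitrary):

```python
def over(src, dst):
    """Premultiplied-alpha 'over': src composited on top of dst."""
    return tuple(s + d * (1 - src[3]) for s, d in zip(src, dst))

a = (0.5, 0.0, 0.0, 0.5)  # premultiplied semi-transparent red
b = (0.0, 0.3, 0.0, 0.3)
c = (0.0, 0.0, 0.8, 0.8)

left  = over(over(a, b), c)
right = over(a, over(b, c))
print(all(abs(l - r) < 1e-12 for l, r in zip(left, right)))  # True: associative
print(over(a, b) == over(b, a))                              # False: not commutative
```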
Two big ones you're missing:Angle-dependent reflection. This is one possible cause of your "transparent in places and not in others" effect, and the most likely cause of the missing wetness.Ice cubes usually have air bubbles trapped inside. This shows up as a white volumetric haze denser in the center of the cube (for small bubbles) or distinct bubbles (...
I've found that bump mapping when calculating lighting and refraction rays can add a lot to the look of ice. It makes the ice look textured and imperfect, like a melting ice cube would look. I sort of wonder if maybe animating a bump map could help make it look wet, as water sheets / droplets ran down its surface. The images below look pretty nice, but ...
The first option should be to make all particles able to go through the same pipeline, perhaps with an uber shader. That way you can batch them all. Positive IEEE floating point numbers can be sorted like unsigned integers, and there are O(n) algorithms to sort those, for example radix sort.
Both Unity's "Deferred shading rendering path" and "Legacy Deferred Lighting Rendering Path" work only for opaque surfaces. They both rely on a very similar set of passes:Render the opaque objects' lighting parameters to a number of render targets. This is referred to as the "G-Buffer pass" or "base pass".Lighting is then computed in screen-space using ...
Is it possible to bind default framebuffer's depth buffer to another framebuffer? First, you "attach" images to framebuffers. You "bind" objects to the context; you "attach" objects to other objects. Second, no, you cannot attach images of the default framebuffer to anything. If not, how can I copy default depth to another depth buff, or what is best ...
You can use a threshold set through a constant buffer to clip pixels (i.e. an alpha test):
float threshold; // in constant buffer
float4 color = myTexture.Sample(...);
if (color.a < threshold)
    discard;
Set threshold to 1.0 to discard all transparent fragments; set it to 0.0 to discard nothing. There will be some minor inefficiency in the latter case ...
According to Wikipedia, ice has a slightly lower IOR than non-frozen water, though I don't know how much that difference would affect the results. The "opaque"-looking parts of an ice cube are caused by clusters of microscopic bubbles formed during freezing. You might be able to model those using geometry, but given the scale and number I suspect that some ...
When blending multiple layers, physically the "right" thing to do is calculating lighting on each layer separately, then composite the lit layers together. This way, for instance, you can have a translucent sprite standing in a spotlight, in front of a dark background, and the sprite will be well-lit while the background visible through it stays dark. Or ...
The simplest solution is to do three passes:
1. Render opaque meshes to a buffer (front-to-back, depth read/write on)
2. Render translucent meshes to another buffer (front-to-back, depth read/write on); this makes sure that only the closest translucent mesh is rendered.
3. Alpha blend the translucent mesh on top of the opaque buffer using the opaque depth buffer (depth read on) ...
Physically, the origin of diffuse light is subsurface scattering, which happens continuously as light travels through a material. So, the proportion of transmitted light depends on the thickness of the object.There's no precise equivalent to the Fresnel law, but maybe the closest thing is the Beer–Lambert law. It states that the transmitted light falls off ...
I have worked with this specific formula for the OVER operator but not with additive blending. I'll use the paper's nomenclature in the following discussion: $$ C_f = \frac{\sum_{i=1}^{n}C_i \cdot w(z_i, \alpha_i)}{\sum_{i=1}^{n}\alpha_i \cdot w(z_i, \alpha_i)}(1 - \prod_{i=1}^{n}(1 - \alpha_i)) + C_0\prod_{i=1}^{n}(1 - \alpha_i) $$ This is not explicitly ...
You seem to be using additive blending against its purpose. Additive blending is supposed to represent light from multiple sources being combined. It is not physically possible for one source of light to eclipse another.Furthermore, even if you hack an alpha of exactly 1 to mean "opaque", you will get a strange circumstance where an alpha of 0.99 is quite ...
The issue is that an extra depth pass won't cut it. You may need an arbitrary number of extra depth passes. Just imagine the volume between two sinusoidal surfaces: you would have infinitely many alternating z-intervals of volume/empty space as long as you're looking from a specific direction. Edit: Taking into consideration the updated formulation, here's ...
A fundamental assumption of deferred shading is that there will be only one surface, and therefore only one depth, at a given pixel.An effect that contradicts that assumption will require some sort of special handling in a deferred renderer. Translucency, because it allows to see through multiple layers, is such an effect and therefore will need its own ...
Sort the particles by Z each rendering cycle using an algorithm such as bubble sort which is good when element changes position in small steps. If the perspective does not change much the errors would be few enough over time to be unnoticable. The technique is easy to configure between quality and performance dependning on the target platform by adjusting ...
I used the Stencil Buffer to fix your problem; you need a way of checking whether two or more shapes overlap:
Shader "Custom/SemiTransparent"
{
    Properties
    {
        _Color("Color", Color) = (0,0,1,0.1)
    }
    SubShader
    {
        Tags {"Queue"="Transparent" "IgnoreProjector"="true" "RenderType"="Transparent"}
        ZWrite Off
        Blend SrcAlpha OneMinusSrcAlpha ...
This is called "deep shadow mapping" and deep compositing. Sadly, it was invented long before you had this idea yourself ;) Now you are talking about implementing this idea specifically on a given architecture (GPU). It's up to you to make it work for this given architecture, and if you have technical difficulties with that, maybe you can ask a question on this ...
Disclaimer: I haven't actually tested this but it seems feasible.I agree with trichoplax that what you want could possibly be expressed more clearly but, assuming I've understood correctly, would the following do the job?First I assume you can supply the opaque and translucent geometry separately, and that the opaque is sent first. I'm also going to ...
A normal ARGB image has 8 bits per channel and therefore 32 bits per pixel, the size of an integer. That's why it's often stored in an integer. Floats are also 32 bits. Something you can do is to have a separate image that has only 1 channel, for the transparency. If you make that a float type (or for 16 bit, another type that uses 16 ...
Inferences about Two Population Proportions
We can apply the same methods we just learned with means to our two-sample proportion problems. We have two populations with two samples and we want to compare the population proportions.
Is the proportion of lakes in New York with invasive species different from the proportion of lakes in Michigan with invasive species? Is the proportion of construction companies using certified lumber greater in the northeast than in the southeast?
A test of two population proportions is very similar to a test of two means, except that the parameter of interest is now “p” instead of “µ”. With a one-sample proportion test, we used \(\hat p =\frac {x}{n}\) as the point estimate of p. We expect that \(\hat p\) would be close to p. With a test of two proportions, we will have two \(\hat p\)’s, and we expect that (\(\hat p_1 - \hat p_2\)) will be close to (\(p_1 - p_2\)). The test statistic accounts for both samples. With a one-sample proportion test, the test statistic is
$$z = \frac {\hat p - p}{\sqrt {\frac {p(1-p)}{n}}}$$
and it has an approximate standard normal distribution.
For a two-sample proportion test, we would expect the test statistic to be
$$z=\frac {(\hat {p_1} -\hat {p_2})-(p_1-p_2)}{\sqrt {\frac {p_1(1-p_1)}{n_1}+\frac {p_2(1-p_2)}{n_2}}}$$
HOWEVER, the null hypothesis will be that \(p_1 = p_2\). Because \(H_0\) is assumed to be true, the test assumes that \(p_1\) and \(p_2\) both equal a common population proportion p. Since p is unknown, we must compute a pooled estimate of it from our sample data.
$$\bar p = \frac {x_1+x_2}{n_1+n_2}$$
The test statistic then takes the form of
$$z=\frac {(\hat {p_1} -\hat {p_2})-(p_1-p_2)}{\sqrt {\frac {\bar p(1-\bar p)}{n_1}+\frac {\bar p(1-\bar p)}{n_2}}}$$
The hypothesis test follows the same steps that we have seen in previous sections:
1) State the null and alternative hypotheses 2) State the level of significance and determine the critical value 3) Compute the test statistic 4) Compare the critical value and the test statistic and state a conclusion
The assumptions that we set for a one-sample proportion test still hold true for both samples. Both must be independent random samples for which the normal approximation is valid, satisfying the following statements: \(n(p)(1 – p) \ge 10\) for each sample, and each sample size is no more than 5% of the population size.
We can again use the same three pairs of null and alternative hypotheses. Notice that we are working with population proportions, so the parameter is p.
Table 5. Null and alternative hypotheses.
The critical value comes from the standard normal table and depends on the alternative hypothesis (is the question one- or two-sided?). As usual, you must state a conclusion. You must always answer the question that is asked in the alternative hypothesis.
Example \(\PageIndex{1}\):
A researcher believes that a greater proportion of construction companies in the northeast are using certified lumber in home construction projects compared to companies in the southeast. She collected a random sample of 173 companies in the southeast and found that 86 used at least 30% certified lumber. She collected another random sample of 115 companies from the northeast and found that 68 used at least 30% certified lumber. Test the researcher’s claim that a greater proportion of companies in the northeast use at least 30% certified lumber compared to the southeast. α = 0.05.
Southeast: \(n_1 = 173\), \(x_1 = 86\)
Northeast: \(n_2 = 115\), \(x_2 = 68\)
Solution
Write the null and alternative hypotheses:
\(H_0: p_1 = p_2\) or \(p_1 – p_2 = 0\)
\(H_1: p_1 < p_2\)
The critical value comes from the standard normal table. It is a one-sided test, so alpha is all in the left tail. The critical value is -1.645.
Compute the point estimates
$$\hat {p_1} = \frac {86}{173}=0.497$$
$$\hat {p_2} = \frac {68}{115} = 0.591$$
Now compute the pooled proportion \(\bar p\):
$$\bar p = \frac {x_1+x_2}{n_1+n_2} = \frac {86+68}{173+115} = 0.535$$
The test statistic is
$$z=\frac {(\hat {p_1} -\hat {p_2})-(p_1-p_2)}{\sqrt {\frac {\bar p(1-\bar p)}{n_1}+\frac {\bar p(1-\bar p)}{n_2}}} = \frac {(0.497-0.591)-0}{\sqrt {\frac {0.535(1-0.535)}{173}+\frac {0.535(1-0.535)}{115}}} = -1.57$$
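The computation above can be reproduced in a few lines. This is a Python sketch (not part of the original text); the counts are taken directly from the example.

```python
from math import sqrt

# Two-proportion z-test for Example 1 (counts from the text above).
x1, n1 = 86, 173   # southeast: companies using certified lumber / sample size
x2, n2 = 68, 115   # northeast

p1_hat = x1 / n1                  # about 0.497
p2_hat = x2 / n2                  # about 0.591
p_bar = (x1 + x2) / (n1 + n2)     # pooled proportion, about 0.535

# Pooled standard error, as in the formula above.
se = sqrt(p_bar * (1 - p_bar) / n1 + p_bar * (1 - p_bar) / n2)
z = (p1_hat - p2_hat) / se        # hypothesized difference p1 - p2 is 0

print(round(z, 2))  # -1.57
```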
Now compare the critical value to the test statistic and state a conclusion.
Figure 3. A comparison of the critical value and the test statistic.
We fail to reject the null hypothesis. There is not enough evidence to support the claim that a greater proportion of companies in the northeast use at least 30% certified lumber compared to companies in the southeast.
Using the P-Value Approach
We can also answer this question using the p-value approach. The p-value is the area associated with the test statistic. This is a left-tailed problem with a test statistic of -1.57 so the p-value is the area to the left of -1.57. Look up the area associated with the Z-score -1.57 in the standard normal table.
The p-value is 0.0582.
The hatched area (p-value) is greater than the 5% level of significance (red area). We fail to reject the null hypothesis. There is not enough statistical evidence to support the claim that a greater proportion of companies in the northeast use at least 30% certified lumber compared to companies in the southeast.
Figure 4. Comparison of p-value and the level of significance.
Construct and Interpret a Confidence Interval about the Difference of Two Proportions
Just like a two-sample t-test about the means, we can answer this question by constructing a confidence interval about the difference of the proportions. The point estimate is \(\hat {p_1} - \hat {p_2}\). The standard error is \(\sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}}\) and the critical value \(z_{\alpha/2}\) comes from the standard normal table.
The confidence interval takes the form of the point estimate ± the margin of error.
$$(\hat {p_1}- \hat {p_2}) \pm z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1} + \frac {\hat {p_2}(1-\hat {p_2})}{n_2}}$$
We will use the same three steps to construct a confidence interval about the difference of the proportions. Notice the estimate of the standard error of the differences. We do not rely on the pooled estimate of \(p\) when constructing confidence intervals to estimate the difference in proportions. This is because we are not making any assumptions regarding the equality of \(p_1\) and \(p_2\), as we did in the hypothesis test.
1) critical value \(z_{\alpha/2}\)
2) \(E = z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}}\)
3) \((\hat {p_1}-\hat {p_2}) \pm E\)
Let’s revisit Example \(\PageIndex{1}\), but this time we will construct a confidence interval about the difference between the two proportions.
Example \(\PageIndex{2}\):
The researcher claims that a greater proportion of companies in the northeast use at least 30% certified lumber compared to companies in the southeast. We can test this claim by constructing a 90% confidence interval about the difference of the proportions.
1) critical value \(z_{\alpha/2}= 1.645\)
2) \(E = z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}}=1.645\sqrt {\frac {0.497(1-0.497)}{173}+\frac {0.591(1-0.591)}{115}}=0.098\)
3) \((\hat {p_1}-\hat {p_2}) \pm E= (0.497-0.591) ± 0.098\)
The 90% confidence interval about the difference of the proportions is (-0.192, 0.004).
BUT, this doesn’t answer the question the researcher asked. We must use one of the three interpretations seen in the previous section. In this problem, the confidence interval contains zero. Therefore we can conclude that there is no significant difference between the proportions of companies using certified lumber in the northeast and in the southeast.
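The interval can also be checked numerically. A Python sketch (not part of the original text); 1.645 is the 90% two-sided critical value stated above.

```python
from math import sqrt

# 90% confidence interval for p1 - p2 in Example 2.
x1, n1 = 86, 173
x2, n2 = 68, 115
p1_hat, p2_hat = x1 / n1, x2 / n2

# Unpooled standard error: no equality assumption when estimating the difference.
E = 1.645 * sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)

lower = (p1_hat - p2_hat) - E
upper = (p1_hat - p2_hat) + E
print(round(lower, 3), round(upper, 3))  # -0.192 0.004
```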
Example \(\PageIndex{3}\):
A hydrologist is studying the use of Best Management Practices (BMPs) in managed forest stands to protect riparian zones. He collects information from 62 stands that had a management plan by a forester and finds that 47 stands had correctly implemented BMPs to protect the riparian zones. He collected information from 58 stands that had no management plan and found that 26 of them had correctly implemented BMPs for riparian zones. Do these data suggest that there is a significant difference in the proportion of stands with and without management plans that had correct BMPs for riparian zones? α = 0.05.
Plan: \(x_1 = 47\), \(n_1 = 62\)
No Plan: \(x_2 = 26\), \(n_2 = 58\)
Let’s answer this question both ways by first using a hypothesis test and then by constructing a confidence interval about the difference of the proportions.
\(H_0: p_1 = p_2\) or \(p_1 – p_2 = 0\)
\(H_1: p_1 \ne p_2\)
Critical value: ±1.96
Test statistic:
$$z=\frac {(\hat {p_1}-\hat {p_2})-(p_1 - p_2)}{\sqrt {\frac {\bar p (1- \bar p)}{n_1}+\frac {\bar p(1-\bar p)}{n_2}}}= \frac {(0.758-0.448)-0}{\sqrt {\frac {0.608(1-0.608)}{62}+\frac {0.608(1-0.608)}{58}}}=3.48$$
The test statistic is greater than 1.96 and falls in the rejection zone. There is enough evidence to support the claim that there is a significant difference in the proportion of correctly implemented BMPs with and without management plans.
Now compute the p-value and compare it to the level of significance. The p-value is two times the area under the curve to the right of 3.48. Look for the area (in the standard normal table) associated with a Z-score of 3.48. The area to the right of 3.48 is 1 – 0.9997 = 0.0003. The p-value is 2 x 0.0003 = 0.0006.
The p-value is less than 0.05. We will reject the null hypothesis and support the claim that the proportions are different.
Now, answer this question using a confidence interval.
1) critical value \(z_{\alpha/2}= 1.96\)
2) \(E = z_{\alpha/2} \sqrt {\frac {\hat {p_1}(1-\hat {p_1})}{n_1}+\frac {\hat {p_2}(1-\hat {p_2})}{n_2}}=1.96\sqrt {\frac {0.758(1-0.758)}{62}+\frac {0.448(1-0.448)}{58}}=0.1666\)
3) \(\hat {p_1}-\hat {p_2} \pm E = (0.758-0.448) \pm 0.1666\)
The 95% confidence interval about the difference of the proportions is (0.143, 0.477). The confidence interval contains all positive values, telling you that there is a significant difference between the proportions AND the first group (BMPs used with management plans) is significantly greater than the second group (BMPs with no plans). This confidence interval estimates the difference in proportions. For this problem, we can say that correctly implemented BMPs with a plan occur in a greater proportion (14.3% to 47.7% more) compared to those implemented without a management plan.
Software Solutions
Minitab
Test and CI for Two Proportions
Sample     X     N   Sample p
1         47    62   0.758065
2         26    58   0.448276
Difference = p (1) – p (2)
Estimate for difference: 0.309789
95% CI for difference: (0.143223, 0.476355)
Test for difference = 0 (vs. not = 0): Z = 3.47 p-value = 0.001
Fisher’s exact test: p-value = 0.001
The p-value equals 0.001 which tells us to reject the null hypothesis. There is a significant difference in the proportion of correctly implemented BMPs with and without management plans. The confidence interval for the difference in proportions is also given (0.143223, 0.476355) which allows us to estimate the difference.
Excel
Excel does not analyze data from proportions.
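For readers without Minitab, the same test and interval can be reproduced with nothing beyond the Python standard library. This is a sketch: the normal CDF is built from `math.erf`. Note the unrounded test statistic is 3.47, matching Minitab; the 3.48 above comes from rounded intermediate values.

```python
from math import sqrt, erf

def norm_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Example 3: correctly implemented BMPs with vs. without a management plan.
x1, n1 = 47, 62
x2, n2 = 26, 58
p1_hat, p2_hat = x1 / n1, x2 / n2
p_bar = (x1 + x2) / (n1 + n2)

# Pooled test statistic and two-sided p-value.
z = (p1_hat - p2_hat) / sqrt(p_bar * (1 - p_bar) * (1 / n1 + 1 / n2))
p_value = 2 * (1 - norm_cdf(abs(z)))

# 95% CI about the difference (unpooled standard error).
E = 1.96 * sqrt(p1_hat * (1 - p1_hat) / n1 + p2_hat * (1 - p2_hat) / n2)
ci = (p1_hat - p2_hat - E, p1_hat - p2_hat + E)

print(round(z, 2), round(p_value, 3), tuple(round(v, 3) for v in ci))
# 3.47 0.001 (0.143, 0.476)
```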
Any pure strategy Nash equilibrium is implicitly a mixed-strategies Nash equilibrium. Since the valuations vary, it's a good indicator we want to consider mixed-strategies. The fact that the problem tells us this is a stronger indicator, though I'm sure not the axiomatic justification you are seeking. :-)
Consider player $1$. We have player $1$'s expected profit: $\mathbb{E}[\Pi_{1}(b)] = (2-b) Pr[b \geq \beta_{2}(v_{2})]$, where $b$ is player $1$'s bid, $\beta_{2}$ is player $2$'s bidding strategy, and $v_{2}$ is player $2$'s valuation. We can assume $\beta_{2}(0) = 0$ (because if $\beta_{2}(0) > 0$, player $2$ can improve upon this by decreasing his bid). Since we are only considering two potential valuations for player $2$, we can assume $\beta_{2}(v) = av$, for some constant $a \in \mathbb{R}_{++}$. (That is, given the two points $(0, 0)$ and $(2, \beta_{2}(2))$, we just draw a line between them).
Observe that $Pr[b \geq av] = Pr[v \leq \frac{b}{a}] = \frac{b}{2a}$, with the last equality since we have a 50-50 chance on the valuation of player $2$.
Now for a Nash equilibrium, player $1$ seeks to maximize his expected value. This is given by the following optimization problem:
$$\max_{b} (2-b) \cdot (\frac{b}{2a})$$
This yields the first order conditions:
$\frac{1}{2a} \cdot (2 - 2b) = 0$, and we obtain that $b = 1$ is our only solution for player $1$. This answer should be reasonably intuitive.
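A quick numeric sanity check of the first-order condition (a sketch; with $\beta_{2}(2) = 1$ the line through $(0,0)$ and $(2,1)$ has slope $a = 1/2$, though any $a > 0$ only rescales the objective and leaves the maximizer unchanged):

```python
# Grid search over player 1's bid b in [0, 2], maximizing (2 - b) * b / (2a).
a = 0.5
bids = [i / 1000 for i in range(2001)]
profits = [(2 - b) * b / (2 * a) for b in bids]
best_bid = bids[profits.index(max(profits))]
print(best_bid)  # 1.0
```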
Now player $2$ only wins if his valuation is $2$. So he plays $\beta_{2}(2) = 1$ and $\beta_{2}(0) = 0$.
Preprints (rote Reihe) des Fachbereich Mathematik
296
We show that the occupation measure on the path of a planar Brownian motion run for an arbitrary finite time interval has an average density of order three with respect to the gauge function t^2 log(1/t). This is a surprising result as it seems to be the first instance where gauge functions other than t^s and average densities of order higher than two appear naturally. We also show that the average density of order two fails to exist and prove that the density distributions, or lacunarity distributions, of order three of the occupation measure of a planar Brownian motion are gamma distributions with parameter 2.
303
We show that the intersection local times \(\mu_p\) on the intersection of \(p\) independent planar Brownian paths have an average density of order three with respect to the gauge function \(r^2\pi\cdot (log(1/r)/\pi)^p\), more precisely, almost surely, \[ \lim\limits_{\varepsilon\downarrow 0} \frac{1}{log |log\ \varepsilon|} \int_\varepsilon^{1/e} \frac{\mu_p(B(x,r))}{r^2\pi\cdot (log(1/r)/\pi)^p} \frac{dr}{r\ log (1/r)} = 2^p \mbox{ at $\mu_p$-almost every $x$.} \] We also show that the lacunarity distributions of \(\mu_p\), at \(\mu_p\)-almost every point, is given as the distribution of the product of \(p\) independent gamma(2)-distributed random variables. The main tools of the proof are a Palm distribution associated with the intersection local time and an approximation theorem of Le Gall.
294
Shapes appearing stretched in the periphery is a consequence of perspective projection. The wider the field of view (FOV) is, the stronger the stretching effect gets. To demonstrate the effect I wrote a quick example on ShaderToy: https://www.shadertoy.com/view/MltBW2 As you can see on the images below (corresponding to FOV of 40, 80 and 120; if I didn't ...
The space in which you transform your vertices is completely up to you, because it depends on what algorithms and kinds of effects you are trying to achieve. In my personal experience, I usually shoot rays in world space because eventually we all need some sort of "world-space" acceleration data structure, such as a space-partition tree, that gathers ...
If your bunny is purely specular, then sampling the light directly at the shade point would give no contribution since the specular BSDF is a delta BSDF. It generally evaluates to zero for any direction other than the mirror direction. If it was a glossy BSDF, then it might be possible that the pdf value could be very small so that the monte-carlo estimator $...
If your plane has a normal of $\begin{pmatrix}0 & 0 & z\end{pmatrix}^T$, then your computation
vec3 u = vec3( normal.y, -normal.x, 0 ).normalized();
vec3 v = normal.cross( u );
will result in u and v both being $\begin{pmatrix}0 & 0 & 0\end{pmatrix}^T$. A more general approach would be, for example, to compute the cross product of your ...
I wasn't really expecting that, no. The formula in the paper is not the most elegant - there's quite a few parentheses in there. In this case I think it's just a matter of shuffling the parentheses around a little in the get_k implementation - both terms should be divided by (1-r):
float get_k(float r, float n) {
    return std::sqrt( 1.0/(1.0-r) * (...
Read up on the basics for ray-tracing here,Usually we don't mess up with viewports and stuff in raytracing, So I'm just telling you for the case where viewport equals the Image Width and Height.There are two cases when the field of view changes. Either you move the image plane back and forth or you increase the size. We choose to change $d$ ( former ...
Some elementary trigonometry tells you what to expect from this situation.The angle to see the shadow terminator is marked on the diagram, and a use of SOHCAHTOA tells you it's $\cos^{-1}\tfrac{1}{2} = 60^\circ$. Yours looks higher than that so your intuition seems correct. Stepping through the lighting code will help you see where it's going wrong, and ...
The issue was caused by an incorrect calculation of the reflection direction vector. With D the ray direction and N the normal vector:
R = D - 2 * dot(D, N) * N
The issue was caused by calculating the components of R as follows:
R[i] = D[i] - 2 * (D[i] * N[i]) * N[i]
It took me a while to find the mistake because this produced a correct reflection with the ...
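The bug described in this answer can be seen numerically. A minimal Python sketch (not the poster's actual code): with an axis-aligned normal the per-component version happens to agree with the correct formula, but with a tilted normal it diverges.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(D, N):
    # Correct: R = D - 2 * dot(D, N) * N
    d = dot(D, N)
    return [D[i] - 2 * d * N[i] for i in range(3)]

def reflect_buggy(D, N):
    # Wrong: the dot product is collapsed into each component separately.
    return [D[i] - 2 * (D[i] * N[i]) * N[i] for i in range(3)]

D = [1.0, -1.0, 0.0]    # incoming direction
N = [0.0, 1.0, 0.0]     # axis-aligned normal: both versions agree
N2 = [0.6, 0.8, 0.0]    # tilted unit normal: they diverge

print(reflect(D, N), reflect_buggy(D, N))    # identical here
print(reflect(D, N2), reflect_buggy(D, N2))  # different
```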
I have found the issue. The gamma correction was the correct value, the same as in the book (1/2), but the light source had the brightness of 1.0f. The book had set the light's brightness to 18.0f for all color channels.This would introduce color overflow if left at that, and the very light areas (above 1.0f, and subsequently when converted, outside the ...
Your main idea is more or less correct. The cosine hidden in the projected area measure $dA^\perp = dA\cos(θ)$ compensates for the weakening of irradiance due to incident angle (Lambert's cosine law). This makes radiance independent from the incident angle. My guess is that the main motivation was to make it more practical to work with. The cosine in the ...
That's just how the Reinhard operator works. If the scene has very high dynamic range important detail may be lost near the high luminance region as you found since both will map near 0.99. Reinhard is a form of global operator. There are other types of algorithms using local operators which tonemap the pixel based on the intensity of the underlying ...
Operating systems cancel GPU program executions if they take too long. On Windows it is generally two seconds and on Linux it is five seconds most of the time, but it can vary.This is to detect GPU programs that are stuck and cancel them. There are different methods to get around this timeout, but they all require admin/root privileges, which is not always ...
First, the viewport size:
$$h_x = 2*d*tan(\theta_x/2)$$
$$h_y = 2*d*tan(\theta_y/2)$$
Each pixel (from your diagram) has the following size in the eye coordinate system:
$$W = h_x / (k-1)$$
$$H = h_y / (m-1)$$
Note that usually the field of view encompasses whole pixels and doesn't stop at the center of the edge pixels like your diagram shows. If $P_c$ is ...
That's actually incorrect. You can transform every ray of your camera if you wish (and numerous implementations do so). There are some advantages and disadvantages to each method (e.g. if your rays are more than your vertices you end up doing more transformations, however you don't incur cache misses by running over all vertices).
So, for uniform sampling the PDF is $1/(2\pi)$; for cosine-weighted it's $\cos(\theta)/\pi$. The Lambertian BRDF has a $\pi$ term as well in the denominator for energy conservation. When not optimizing things you should be dividing by $\pi$ during the BRDF calculation, then dividing by the proper PDFs mentioned above. Considering all the factors into account, for uniform ...
My code is setup to calculate the intersection points of each of the spheres with any given ray and spit out the correct points to be rendered to the screen based on the type of boolean operation being performed (Union, Difference or Intersection).Then you need to do the exact same thing with the normals. Indeed, you would typically do this operation with ...
The normal of a given point on the height map is perpendicular to the 2 vectors defined by gx and gy. The original surface normal is not relevant to this. So in tangent space, the normal is:Vector tangent = Vector(1, 0, gx);Vector bitangent = Vector(0, 1, gy);Vector normal = normalize(cross(tangent, bitangent));You then need to convert the normal from ...
As you said, the RTX Turing architecture comes with wired primitive-ray intersection, (to be more specific, triangle-ray intersection). The BVH is built by specifying the Bounding Box program to OptiX, the signature of which is:RT_PROGRAM void my_boundingbox_build_program( int, float result[6] )As you can guess, the result must contain the minimun (3 ...
Disclaimer: It's been a long time since I looked at this sort of thing but here goes...Disclaimer2: On re-reading your question(s) I realised I might have misunderstood what you were asking. I'll leave this here just in case you were looking for this sort of reply.Are you rendering from the original Bezier patch definition (e.g. https://www.cs.utah.edu/...
Those are moire patterns. They are an aliasing artifact that usually occurs when sampling on a regular grid. Did you jitter the positions of your samples? If you just sampled an evenly spaced 10x10 grid within each pixel, that could explain it. Also, numerical errors or inaccuracy could cause it.
You're simply not normalizing correctly, since you've picked the pdf for uniform to be $1$ which it is not, and for cosine to be $\cos\theta$ which it is not. The pdf for uniformly distributed points on the upper hemisphere is: $p_U(\theta, \phi) = \frac{\sin\theta}{2\pi}$. The pdf for cosine distributed points on the upper hemisphere is: $p_C(\theta, \phi) =...
Let me first address some misconceptions you have: the theory states that we need to shoot x number of rays for each intersection. No, the "theory" doesn't state such a thing. Note also that the paper @gallickgunner is referring to is inapplicable in this case since smallpt is based on the rendering equation and not the limited variant present in Cook's ...
During the implementation, the way rays are scattered does not actually change and remains random.Actually the way rays are scattered does change, specifically when you sample a light. In chapter 8 he makes a mixture pdf in order to sample either the light or the bsdf.What changes looks to be the contribution of each ray.This does change, but not ...
It's a variant of projective texture mapping. The simple way to do this is in the fragment shader, by using the position of the fragment to decide whether it is close enough to the plane of the laser to light up.
There are two ways to 'translate' your object. The first is by moving each point of your object by the desired translation. The second is by translating the origin of your coordinate system. In this case it's the latter. Basically it turns out to be the same, whether you translate your object by a vector $\vec{t}$ or whether you translate your origin by $-\...
It'd be a hell of a lot easier if this were on graphicsexchange, since I can't use latex here but anyways. In the first pass of Photon Mapping you don't need to use the Flux form of the rendering equation. You just divide the original flux coming from the light source among the N photons, then for each photon you use Russian Roulette to determine whether it reflects,...
The best algorithm depends on conditions such as whether or not the square is axis aligned, for example. I'm gonna discuss the more general case which can find the intersection for any arbitrarily oriented square. The algorithm works by first checking the intersection of the ray with the plane containing the square or rectangle etc. Then check if the ray is within ...
So, I figured it out. While UAV will be necessary when I start manipulating data in Compute Shaders, for the time being, SRV works fine provided the resource is read-only from the fragment shader. The two big problems were: Creating an ImageTexture that was receiving the data. Note that the data copy was occurring just fine, but the texture was never used ...
Genotype Refinement workflow for germline short variants
Contents: Overview | Summary of workflow steps | Output annotations | Example | More information about priors | Mathematical details
1. Overview
The core GATK Best Practices workflow has historically focused on variant discovery --that is, the existence of genomic variants in one or more samples in a cohort-- and consistently delivers high quality results when applied appropriately. However, we know that the quality of the individual genotype calls coming out of the variant callers can vary widely based on the quality of the BAM data for each sample. The goal of the Genotype Refinement workflow is to use additional data to improve the accuracy of genotype calls and to filter genotype calls that are not reliable enough for downstream analysis. In this sense it serves as an optional extension of the variant calling workflow, intended for researchers whose work requires high-quality identification of individual genotypes.
While every study can benefit from increased data accuracy, this workflow is especially useful for analyses that are concerned with how many copies of each variant an individual has (e.g. in the case of loss of function) or with the transmission (or de novo origin) of a variant in a family.
If a “gold standard” dataset for SNPs is available, that can be used as a very powerful set of priors on the genotype likelihoods in your data. For analyses involving families, a pedigree file describing the relatedness of the trios in your study will provide another source of supplemental information. If neither of these applies to your data, the samples in the dataset itself can provide some degree of genotype refinement (see section 5 below for details).
After running the Genotype Refinement workflow, several new annotations will be added to the INFO and FORMAT fields of your variants (see below).
Note that GQ fields will be updated, and genotype calls may be modified. However, the Phred-scaled genotype likelihoods (PLs) which indicate the original genotype call (the genotype candidate with PL=0) will remain untouched. Any analysis that made use of the PLs will produce the same results as before.
2. Summary of workflow steps
Input
Begin with recalibrated variants from VQSR at the end of the germline short variants pipeline. The filters applied by VQSR will be carried through the Genotype Refinement workflow.
Step 1: Derive posterior probabilities of genotypes
Tool used: CalculateGenotypePosteriors
Using the Phred-scaled genotype likelihoods (PLs) for each sample, prior probabilities for a sample taking on a HomRef, Het, or HomVar genotype are applied to derive the posterior probabilities of the sample taking on each of those genotypes. A sample’s PLs were calculated by HaplotypeCaller using only the reads for that sample. By introducing additional data like the allele counts from the 1000 Genomes project and the PLs for other individuals in the sample’s pedigree trio, those estimates of genotype likelihood can be improved based on what is known about the variation of other individuals.
SNP calls from the 1000 Genomes project capture the vast majority of variation across most human populations and can provide very strong priors in many cases. At sites where most of the 1000 Genomes samples are homozygous variant with respect to the reference genome, the probability of a sample being analyzed of also being homozygous variant is very high.
For a sample for which both parent genotypes are available, the child’s genotype can be supported or invalidated by the parents’ genotypes based on Mendel’s laws of allele transmission. Even the confidence of the parents’ genotypes can be recalibrated, such as in cases where the genotypes output by HaplotypeCaller are apparent Mendelian violations.
Step 2: Filter low quality genotypes
Tool used: VariantFiltration
After the posterior probabilities are calculated for each sample at each variant site, genotypes with GQ < 20 based on the posteriors are filtered out. GQ20 is widely accepted as a good threshold for genotype accuracy, indicating that there is a 99% chance that the genotype in question is correct. Tagging those low quality genotypes indicates to researchers that these genotypes may not be suitable for downstream analysis. However, as with the VQSR, a filter tag is applied, but the data is not removed from the VCF.
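The GQ20 threshold follows directly from the Phred scale. A minimal sketch (not GATK code) of the conversion between a Phred-scaled quality and an error probability:

```python
# Phred scale: Q = -10 * log10(P_error), so P_error = 10 ** (-Q / 10).
def phred_to_error_prob(q):
    return 10 ** (-q / 10)

# GQ 20 -> 1% chance the genotype call is wrong, i.e. a 99% chance it is correct.
print(phred_to_error_prob(20))  # 0.01
```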
Step 3: Annotate possible de novo mutations
Tool used: VariantAnnotator
Using the posterior genotype probabilities, possible de novo mutations are tagged. Low confidence de novos have child GQ >= 10 and AC < 4 or AF < 0.1%, whichever is more stringent for the number of samples in the dataset. High confidence de novo sites have all trio sample GQs >= 20 with the same AC/AF criterion.
Step 4: Functional annotation of possible biological effects
Tool options: Funcotator (experimental)
Especially in the case of de novo mutation detection, analysis can benefit from the functional annotation of variants to restrict variants to exons and surrounding regulatory regions. Funcotator is a new tool that is currently still in development. If you would prefer to use a more mature tool, we recommend you look into SnpEff or Oncotator, but note that these are not GATK tools so we do not provide support for them.
3. Output annotations
The Genotype Refinement workflow adds several new info- and format-level annotations to each variant. GQ fields will be updated, and genotypes calculated to be highly likely to be incorrect will be changed. The Phred-scaled genotype likelihoods (PLs) carry through the pipeline without being changed. In this way, PLs can be used to derive the original genotypes in cases where sample genotypes were changed.
Population Priors
New INFO field annotation PG is a vector of the Phred-scaled prior probabilities of a sample at that site being HomRef, Het, and HomVar. These priors are based on the input samples themselves along with data from the supporting samples if the variant in question overlaps another in the supporting dataset.
Phred-Scaled Posterior Probability
New FORMAT field annotation PP is the Phred-scaled posterior probability of the sample taking on each genotype for the given variant context alleles. The PPs represent a better calibrated estimate of genotype probabilities than the PLs and are recommended for use in further analyses instead of the PLs.
Genotype Quality
Current FORMAT field annotation GQ is updated based on the PPs. The calculation is the same as for GQ based on PLs.
Joint Trio Likelihood
New FORMAT field annotation JL is the Phred-scaled joint likelihood of the posterior genotypes for the trio being incorrect. This calculation is based on the PLs produced by HaplotypeCaller (before application of priors), but the genotypes used come from the posteriors. The goal of this annotation is to be used in combination with JP to evaluate the improvement in the overall confidence in the trio’s genotypes after applying CalculateGenotypePosteriors. The calculation of the joint likelihood is given as:
where the GLs are the genotype likelihoods in [0, 1] probability space.
Joint Trio Posterior
New FORMAT field annotation JP is the Phred-scaled posterior probability of the output posterior genotypes for the three samples being incorrect. The calculation of the joint posterior is given as:
where the GPs are the genotype posteriors in [0, 1] probability space.
Low Genotype Quality
New FORMAT field filter lowGQ indicates samples with posterior GQ less than 20. Filtered samples tagged with lowGQ are not recommended for use in downstream analyses.
High and Low Confidence De Novo
New INFO field annotation for sites at which at least one family has a possible de novo mutation. Following the annotation tag is a list of the children with de novo mutations. High and low confidence are output separately.
4. Example
Before:
1 1226231 rs13306638 G A 167563.16 PASS AC=2;AF=0.333;AN=6;… GT:AD:DP:GQ:PL 0/0:11,0:11:0:0,0,249 0/0:10,0:10:24:0,24,360 1/1:0,18:18:60:889,60,0
After:
1 1226231 rs13306638 G A 167563.16 PASS AC=3;AF=0.500;AN=6;…PG=0,8,22;… GT:AD:DP:GQ:JL:JP:PL:PP 0/1:11,0:11:49:2:24:0,0,249:49,0,287 0/0:10,0:10:32:2:24:0,24,360:0,32,439 1/1:0,18:18:43:2:24:889,60,0:867,43,0
The original call for the child (first sample) was HomRef with GQ0. However, given that, with high confidence, one parent is HomRef and one is HomVar, we expect the child to be heterozygous at this site. After family priors are applied, the child’s genotype is corrected and its GQ is increased from 0 to 49. Based on the allele frequency from 1000 Genomes for this site, the somewhat weaker population priors favor a HomRef call (PG=0,8,22). The combined effect of family and population priors still favors a Het call for the child.
The joint likelihood for this trio at this site is two, indicating that the genotype for one of the samples may have been changed. Specifically a low JL indicates that posterior genotype for at least one of the samples was not the most likely as predicted by the PLs. The joint posterior value for the trio is 24, which indicates that the GQ values based on the posteriors for all of the samples are at least 24. (See above for a more complete description of JL and JP.)
5. More information about priors
The Genotype Refinement Pipeline uses Bayes’s Rule to combine independent data with the genotype likelihoods derived from HaplotypeCaller, producing more accurate and confident genotype posterior probabilities. Different sites will have different combinations of priors applied based on the overlap of each site with external, supporting SNP calls and on the availability of genotype calls for the samples in each trio.
Input-derived Population Priors
If the input VCF contains at least 10 samples, then population priors will be calculated based on the discovered allele count for every called variant.
Supporting Population Priors
Priors derived from supporting SNP calls can only be applied at sites where the supporting calls overlap with called variants in the input VCF. The values of these priors vary based on the called reference and alternate allele counts in the supporting VCF. Higher allele counts (for ref or alt) yield stronger priors.
Family Priors
The strongest family priors occur at sites where the called trio genotype configuration is a Mendelian violation. In such a case, each Mendelian violation configuration is penalized by a de novo mutation probability (currently \(10^{-6}\)). Confidence also propagates through a trio. For example, two GQ60 HomRef parents can substantially boost a low GQ HomRef child and a GQ60 HomRef child and parent can improve the GQ of the second parent. Application of family priors requires the child to be called at the site in question. If one parent has a no-call genotype, priors can still be applied, but the potential for confidence improvement is not as great as in the 3-sample case.
Caveats
Right now family priors can only be applied to biallelic variants and population priors can only be applied to SNPs. Family priors only work for trios.
6. Mathematical details
Note that family priors are calculated and applied before population priors. The opposite ordering would result in overly strong population priors because they are applied to the child and parents and then compounded when the trio likelihoods are multiplied together.
Review of Bayes’s Rule
HaplotypeCaller outputs the likelihoods of observing the read data given that the genotype is actually HomRef, Het, and HomVar. To convert these quantities to the probability of the genotype given the read data, we can use Bayes’s Rule. Bayes’s Rule dictates that the probability of a parameter given observed data is equal to the likelihood of the observations given the parameter multiplied by the prior probability that the parameter takes on the value of interest, normalized by the prior times likelihood for all parameter values:
$$ P(\theta|Obs) = \frac{P(Obs|\theta)P(\theta)}{\sum_{\theta} P(Obs|\theta)P(\theta)} $$
In the best practices pipeline, we interpret the genotype likelihoods as probabilities by implicitly converting the genotype likelihoods to genotype probabilities using non-informative or flat priors, for which each genotype has the same prior probability. However, in the Genotype Refinement Pipeline we use independent data such as the genotypes of the other samples in the dataset, the genotypes in a “gold standard” dataset, or the genotypes of the other samples in a family to construct more informative priors and derive better posterior probability estimates.
Calculation of Population Priors
Given a set of samples in addition to the sample of interest (ideally non-related, but from the same ethnic population), we can derive the prior probability of the genotype of the sample of interest by modeling the sample’s alleles as two independent draws from a pool consisting of the set of all the supplemental samples’ alleles. (This follows rather naturally from the Hardy-Weinberg assumptions.) Specifically, this prior probability will take the form of a multinomial Dirichlet distribution parameterized by the allele counts of each allele in the supplemental population. In the biallelic case the priors can be calculated as follows:
$$ P(GT = HomRef) = \dbinom{2}{0} \ln \frac{\Gamma(nSamples)\Gamma(RefCount + 2)}{\Gamma(nSamples + 2)\Gamma(RefCount)} $$
$$ P(GT = Het) = \dbinom{2}{1} \ln \frac{\Gamma(nSamples)\Gamma(RefCount + 1)\Gamma(AltCount + 1)}{\Gamma(nSamples + 2)\Gamma(RefCount)\Gamma(AltCount)} $$
$$ P(GT = HomVar) = \dbinom{2}{2} \ln \frac{\Gamma(nSamples)\Gamma(AltCount + 2)}{\Gamma(nSamples + 2)\Gamma(AltCount)} $$
where Γ is the Gamma function, an extension of the factorial function.
The prior genotype probabilities based on this distribution scale intuitively with number of samples. For example, a set of 10 samples, 9 of which are HomRef yield a prior probability of another sample being HomRef with about 90% probability whereas a set of 50 samples, 49 of which are HomRef yield a 97% probability of another sample being HomRef.
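Reading nSamples in the formulas above as the total supplemental allele count and dropping the logarithm, the Dirichlet-multinomial priors reduce to a Pólya-urn product, since $\Gamma(x+2)/\Gamma(x) = x(x+1)$. A minimal sketch of that reading (my interpretation for illustration, not GATK code):

```python
from math import comb

def genotype_priors(ref_count, alt_count):
    """Population priors from modeling the sample's two alleles as draws
    from a Dirichlet-multinomial (Polya urn) over the supplemental allele
    pool. Equivalent to the Gamma-function formulas above, because
    Gamma(x + 2) / Gamma(x) = x * (x + 1)."""
    total = ref_count + alt_count
    denom = total * (total + 1)
    p_hom_ref = comb(2, 0) * ref_count * (ref_count + 1) / denom
    p_het = comb(2, 1) * ref_count * alt_count / denom
    p_hom_var = comb(2, 2) * alt_count * (alt_count + 1) / denom
    return p_hom_ref, p_het, p_hom_var

# 10 supplemental samples contributing 19 ref alleles and 1 alt allele
print(genotype_priors(19, 1))  # P(HomRef) = 380/420, roughly 0.905
```

With 19 of 20 supplemental alleles being reference, the HomRef prior comes out near the ~90% figure quoted for the 10-sample example; with 99 of 100, it rises toward the stronger prior quoted for 50 samples.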
Calculation of Family Priors
Given a genotype configuration for a given mother, father, and child trio, we set the prior probability of that genotype configuration as follows:
$$ P(G_M,G_F,G_C) = P(\vec{G}) = \cases{ 1-10\mu-2\mu^2 & no MV \cr \mu & 1 MV \cr \mu^2 & 2 MVs} $$
where the 10 configurations with a single Mendelian violation are penalized by the de novo mutation probability $\mu$ and the two configurations with two Mendelian violations by $\mu^2$. The remaining configurations are considered valid and share the remaining probability so that the total sums to one.
This prior is applied to the joint genotype combination of the three samples in the trio. To find the posterior for any single sample, we marginalize over the remaining two samples as shown in the example below to find the posterior probability of the child having a HomRef genotype:
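The marginalization can be written out as follows (a reconstruction from the surrounding definitions, with $L(G) = P(D|G)$ the HaplotypeCaller genotype likelihoods; the original example figure is not preserved here):

$$ P(G_C = HomRef \mid D) = \frac{\sum_{G_M,G_F} P(G_M,G_F,HomRef)\, L(G_M)\,L(G_F)\,L(HomRef)}{\sum_{G_M,G_F,G_C} P(G_M,G_F,G_C)\, L(G_M)\,L(G_F)\,L(G_C)} $$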
This quantity P(Gc|D) is calculated for each genotype, then the resulting vector is Phred-scaled and output as the Phred-scaled posterior probabilities (PPs). |
Let $(X_i,Y_i)_{i\in\mathbb{Z}}$ be a finite-valued stationary process whose $\sigma$-algebra of tail events is trivial. Let $\mathcal{F}_n^m$ be the $\sigma$-algebra generated by $X_n,\dots,X_m$ ($n,m\in\mathbb{Z}$) and define $\mathcal{G}_n^m$ similarly for $Y_i$. Let $a$ be some fixed state of $X$ and consider the random variable
$$b_k:=\mathbb{P}[X_{2k+1}=a\mid \mathcal{F}_{0}^{2k}\vee\mathcal{G}_{0}^{k}]$$ and $$A_N:=\frac{1}{N}\sum_{k=1}^Nb_k$$
Question: Does $A_N$ converge almost surely? Remarks
To be more precise, let me frame this question in an ergodic-theoretic setting. Let $X$ and $Y$ be finite sets, $\Omega:=(X\times Y)^\mathbb{Z}$ and $\left(\Omega,\mu,T\right)$ be a measure-preserving system that is a Kolmogorov automorphism. Let $\pi_X:X\times Y\to X$ be the projection to $X$ and $X_i:\Omega\to X$ ($i\in\mathbb{Z}$) be the measurable function $X_i(\omega)=\pi_X(\omega_i)$ (similarly, $Y_i(\omega)=\pi_Y(\omega_i)$). Given a $\sigma$-algebra $\mathcal{H}$, we write $L^p(\mathcal{H})$ for the space of $\mathcal{H}$-measurable functions in $L^p$, and we also write $\mathcal{F}_n^m:=\sigma(\{X_i:\ n\leq i\leq m\})$, $\mathcal{G}_n^m:=\sigma(\{Y_i:\ n\leq i\leq m\})$, and, for $k\geq 0$, $P_k:L^1\to L^1$ for the orthogonal projection onto $L^1(\mathcal{F}_{-2k}^0\vee\mathcal{G}_{-2k}^{-k})$ (i.e., $P_k(f)=\mathbb{E}[f\mid\mathcal{F}_{-2k}^0\vee\mathcal{G}_{-2k}^{-k}]$). Fix some $a\in X$, write $A=\{\omega:\ X_1(\omega)=a\}$ and consider $$a_k:=P_k(1_A)$$ and $$A_N:=\frac{1}{N}\sum_{k=1}^NT^{2k}a_k$$
By invariance, both definitions of $A_N$ coincide.
If $P_k$ were a monotone sequence of projections, then the affirmative answer would be trivial, since $a_k$ would be a pointwise converging martingale and, by Maker's generalization of Birkhoff's ergodic theorem, pointwise convergence of $a_k$ suffices.
In fact, it is true that $A_N$ converges in $L^2$ (to $\int 1_A$), since one can prove that $a_k$ converges in $L^2$ to $\mathbb{E}[1_A\mid \mathcal{F}^0_{-\infty}]$ by showing there are monotone sequences of orthogonal projections $S_k$, $R_k$ (the first nondecreasing, the second nonincreasing) such that $S_k\leq P_k\leq R_k$ in the lattice of projections and $S_k\nearrow P$, $R_k\searrow P$, where $P$ projects onto $L^2(\mathcal{F}^0_{-\infty})$. One then uses Maker's theorem in its $L^2$ version.
It is also true that when $A_N$ is regarded as a Markovian operator $A_N(f)=(1/N)\sum_{k=1}^NT^{2k}P_k(f)$, there is a dense subset $\mathcal{D}$ of $L^2$ such that $A_N(f)$ converges a.s. for any $f\in\mathcal{D}$. I would then like to use Banach's Principle to prove pointwise convergence on the whole of $L^2$ (which implies that of $A_N(1_A)$), but I can't prove the necessary bound $\sup_N |A_N(f)|<\infty$ for every $f\in L^2$. Any help here would be welcome. |
Author Archives: res65
This course was eye-opening in the sense that sustainability is in all aspects of life. I never thought about all the ways it can be improved and actually how significant the impacts can be to little changes over the course … Continue reading
Solar power is a source of energy that has not been tapped into nearly as much as it should have by now in 2017. New York City is one of the most populated cities in the country, housing around 8.5 … Continue reading
The traditional automobile industry might be in for a rude awakening as electric cars prove to be much better in the long run for sustainability, according to the article “Are Electric Vehicles Pushing Oil Demand Over a Cliff?” by Erica … Continue reading
Rachel Stone Math & Sustainability Professor Deforest 9 October 2017 Energy Savings from Line-Drying Clothes As a college student, I’m looking to save as much money as I possibly can. I’ve already jumped on the sustainability bandwagon and started using … Continue reading
\(x^3 - 3x^2 - 10x=0\) \((1+r)^n\) \((5.7 \times 10^{-8})\) \(\times \) \((1.6 \times 10^{12})\) = \(9.12 \times 10^4\) \(\pi{L}(1 - \alpha ){R}^2 = 4\pi \sigma {T}^4{R}^2\) \(12\text{km}\times \frac{0.6\text{mile}}{1\text{km}}\approx 7.2\text{mile}\) \(4,173,445,346.50\approx 4,200,000,000 = 4.2\times 10^9\) \[50\text{m}\times \frac{3.4\times 10^6\text{kg}}{1\text{sec}}\times … Continue reading |
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Now showing items 1-5 of 5
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(Elsevier, 2016-02)
The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ...
Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2013-11)
We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ... |
Now showing items 1-10 of 55
J/ψ production and nuclear effects in p-Pb collisions at √sNN = 5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC
(Elsevier, 2013-12)
The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(American Physical Society, 2013-12)
The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2014-06)
The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(Elsevier, 2014-01)
In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ... |
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity, which we can get by taking the derivative, giving $v(t) = 3t^2-12t+9$, but I don't know what to do after that. How can I find the intervals?
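A quick numerical check of this setup (a sketch with NumPy, not part of the original question): the particle moves left exactly where $v(t)<0$, i.e. between the roots of $v$.

```python
import numpy as np

# velocity v(t) = x'(t) = 3t^2 - 12t + 9 = 3(t - 1)(t - 3)
coeffs = [3, -12, 9]
roots = np.sort(np.roots(coeffs))   # critical times t = 1 and t = 3
t_mid = roots.mean()                # t = 2, a point between the roots
v_mid = np.polyval(coeffs, t_mid)   # v(2) = -3 < 0, so moving left there
print(roots, v_mid)
```

Since $v$ is an upward parabola, it is negative only between its roots, so the particle moves left on $(1, 3)$.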
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ...
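One can probe the claimed asymptotic numerically (a hedged sketch; the cutoff $K=400$ and the choices $c=2$, $\alpha=1$ are illustrative only):

```python
from math import comb

def lhs(K, c, z):
    # sum over n of C(K,n) C(K,n+c) z^(n + c/2)
    return sum(comb(K, n) * comb(K, n + c) * z ** (n + c / 2)
               for n in range(K - c + 1))

def rhs(K, z):
    # sum over n of C(K,n)^2 z^n
    return sum(comb(K, n) ** 2 * z ** n for n in range(K + 1))

K, c, alpha = 400, 2, 1.0
z = K ** (-alpha)
ratio = lhs(K, c, z) / rhs(K, z)
print(ratio)  # drifts toward 1 as K grows
```

For moderate $K$ the ratio already sits close to 1, consistent with (though of course not proving) the conjectured equivalence.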
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
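A small numerical illustration of the Spectral Mapping Theorem (a sketch with NumPy; the matrix and polynomial are arbitrary choices, not from the discussion above):

```python
import numpy as np

# p(x) = x^2 + 1 applied to a small matrix A with eigenvalues 2 and 3
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
pA = A @ A + np.eye(2)

eig_pA = np.sort(np.linalg.eigvals(pA))
expected = np.sort(np.linalg.eigvals(A) ** 2 + 1)  # p(2) = 5, p(3) = 10
print(eig_pA, expected)
```

The eigenvalues of $p(A)$ come out as exactly $p(\lambda_i)$, matching the theorem.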
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
$O(n)$ acts transitively on $S^{n-1}$ with stabilizer at a point $O(n-1)$
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism |
Details
For math equations in entries, compose posts in itex; it converts them to XHTML+MathML, which advanced browsers like Mozilla can render.
$\sin(\theta)\cos(\theta)= \frac{1}{2}\sin(2\theta)$
is an inline equation.
\[\int_{-\infty}^{\infty} e^{-x^2} dx = \sqrt{\pi}\]
is a display equation. |
Signal-to-Noise Ratio
Don H. Johnson (2006), Scholarpedia, 1(12):2088. doi:10.4249/scholarpedia.2088, revision #126771. Signal-to-noise ratio generically means the dimensionless ratio of the signal power to the noise power contained in a recording. Abbreviated SNR by engineers and scientists, the signal-to-noise ratio parameterizes the performance of optimal signal processing systems when the noise is Gaussian.
Basics
The signal \(s(t)\) may or may not have a stochastic description; the noise \(N(t)\) always does. When the signal is deterministic, its power \(P_s\) is defined to be \[P_s = \frac{1}{T} \int_{0}^{T}\!\! s^2(t)\,dt\] where \(T\) is the duration of an observation interval, which could be infinite (in which case a limit needs to be evaluated).
Special terminology is used for periodic signals. In this case, the interval \(T\) equals the signal's period, and the signal's root mean squared (rms) value equals the square root of its power. For example, the sinusoid \(A \sin 2\pi f_0 t\) has an rms value equal to \(A/\sqrt{2}\) and power \(A^2/2\ .\)
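A quick numerical sanity check of these rms and power values (a sketch with NumPy; the amplitude and frequency are arbitrary choices):

```python
import numpy as np

A, f0 = 2.0, 5.0                       # amplitude and frequency (Hz)
t = np.linspace(0.0, 1.0, 100001)      # one second: five full periods
s = A * np.sin(2 * np.pi * f0 * t)

power = np.mean(s**2)                  # (1/T) * integral of s^2 over T
rms = np.sqrt(power)
print(power, rms)                      # close to A^2/2 = 2.0 and A/sqrt(2)
```

Averaging over whole periods recovers the closed-form values \(A^2/2\) and \(A/\sqrt{2}\) to high accuracy.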
When the signal is a stationary stochastic process, its power is defined to be the value of its correlation function \(R_s(\tau)\) at the origin: \[R_s(\tau) \equiv \mathsf{E}[s(t)s(t+\tau)];\quad P_s = R_s(0)\] Here, \(\mathsf{E}[\cdot]\) denotes expected value. The noise power \(P_N\) is similarly related to its correlation function: \[P_N=R_N(0)\ .\] The signal-to-noise ratio is typically written as SNR and equals \[\mathrm{SNR}=\frac{P_s}{P_N}\ .\]
Signal-to-noise ratio is also defined for random variables in one of two ways.
1. \(X = s+N\ ,\) where \(s\ ,\) the signal, is a constant and \(N\) is a random variable having an expected value equal to zero. The SNR equals \(s^2/\sigma^2_N\ ,\) with \(\sigma^2_N\) the variance of \(N\ .\)
2. \(X = S+N\ ,\) where both \(S\) and \(N\) are random variables. A random variable's power equals its mean-squared value: the signal power thus equals \(\mathsf{E}[S^2]\ .\) Usually, the noise has zero mean, which makes its power equal to its variance. Thus, the SNR equals \(\mathsf{E}[S^2]/\sigma^2_N\ .\)
White Noise
When we have white noise, the noise correlation function equals \(N_0/2\cdot\delta(\tau)\ ,\) where \(\delta(\tau)\) is known both as Dirac's delta function and as an impulse. The quantity \(N_0/2\) is the spectral height of the white noise and corresponds to the (constant) value of the noise power spectrum at all frequencies. White noise power is infinite, so the SNR as defined above would be zero. White noise cannot physically exist because of its infinite power, but engineers frequently use it to describe noise whose power spectrum extends well beyond the signal's bandwidth. When white noise is assumed present, optimal signal processing systems can sometimes take it into account, and their performance typically depends on a modified definition of signal-to-noise ratio. When the signal is deterministic, the SNR is taken to be \[\mathrm{SNR}=\frac{\int\!\! s^2(t)\,dt}{N_0/2}\ .\]
Peak Signal-to-Noise Ratio (PSNR)
In image processing, signal-to-noise ratio is defined differently. Here, the numerator is the square of the peak value the signal could have, and the denominator equals the noise power (noise variance). For example, an 8-bit image has values ranging between 0 and 255; for PSNR calculations, the numerator is \(255^2\) in all cases.
Expressing Signal-to-Noise Ratios in Decibels
Engineers frequently express SNR in decibels as \[\mathrm{SNR} (\mathrm{dB}) = 10 \log_{10}\frac{P_s}{P_N}\ .\] Engineers consider an SNR of 2 (3 dB) to be the boundary between low and high SNRs. In image processing, PSNR must be greater than about 20 dB for a picture to be considered high quality.
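A minimal sketch of the dB conversion for a synthetic recording (the 50 Hz tone, sample rate, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, 1e-4)               # one second at 10 kHz
A, sigma = 1.0, 0.5
s = A * np.sin(2 * np.pi * 50 * t)          # signal power P_s = A^2/2 = 0.5
x = s + rng.normal(0.0, sigma, t.size)      # the noisy recording

snr = np.mean(s**2) / sigma**2              # = 2, right at the boundary
snr_db = 10 * np.log10(snr)                 # about 3 dB
print(snr, snr_db)
```

An SNR of 2 lands at the 3 dB low/high boundary mentioned above, since \(10\log_{10} 2 \approx 3.01\).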
Interference
These definitions implicitly assume that the signal and the noise are statistically unrelated and arise from different sources. In many applications, some part of what is not signal arises from man-made sources and can be statistically related to the signal. For example, a cellular telephone's signal can be corrupted by other telephone signals as well as noise. Such non-signals are termed interference, and a signal-to-interference ratio, abbreviated SIR, can be defined accordingly. However, when both interference and noise are present, neither the SIR nor the SNR characterizes the performance of signal processing systems.
References
J. Sijbers et al., "Quantification and improvement of the signal-to-noise ratio in a magnetic resonance image acquisition procedure", Magnetic Resonance Imaging, vol. 14, no. 10, pp. 1157-1163, 1996 |
Defining parameters
Level: \( N \) = \( 30 = 2 \cdot 3 \cdot 5 \)
Weight: \( k \) = \( 2 \)
Nonzero newspaces: \( 3 \)
Newforms: \( 3 \)
Sturm bound: \( 96 \)
Trace bound: \( 1 \)
Dimensions
The following table gives the dimensions of various subspaces of \(M_{2}(\Gamma_1(30))\).
                    Total   New   Old
Modular forms         40     7    33
Cusp forms             9     7     2
Eisenstein series     31     0    31

Decomposition of \(S_{2}^{\mathrm{new}}(\Gamma_1(30))\)
We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
Label    \(\chi\)                   Newforms   Dimension   \(\chi\) degree
30.2.a   \(\chi_{30}(1, \cdot)\)    30.2.a.a   1           1
30.2.c   \(\chi_{30}(19, \cdot)\)   30.2.c.a   2           1
30.2.e   \(\chi_{30}(17, \cdot)\)   30.2.e.a   4           2 |
07/16/19
Introduction
In this post I will discuss building a simple recommender system for a movie database which will be able to:
- suggest top N movies similar to a given movie title to users, and
- predict user votes for the movies they have not voted for.

In the next part of this article I will show how to deploy this model using a REST API in Python Flask, in an attempt to make this recommendation system easily usable in production.
A recommender system for a movie database
Recommender systems are so prevalently used in the net these days that we all have come across them in one form or another. Have you ever received suggestions on Amazon on what to buy next? Or suggestions on what websites you may like on Facebook?
Aside from the natural disconcerting feeling of being chased and traced, they can sometimes be helpful in navigating us into the right direction. Let’s look at an appealing example of recommendation systems in the movie industry. I will be using the data provided from Movie-lens 20M datasets to describe different methods and systems one could build. With a bit of fine tuning, the same algorithms should be applicable to other datasets as well. I find the above diagram the best way of categorising different methodologies for building a recommender system. I will briefly explain some of these entries in the context of movie-lens data with some code in python. Full scripts for this article are accessible on my GitHub page. Suppose someone has watched “Inception (2010)” and loved it! What can my recommender system suggest to them to watch next? Well, I could suggest different movies on the basis of the content similarity to the selected movie such as genres, cast and crew names, keywords and any other metadata from the movie. In that case I would be using an item-content filtering. I could also compare the user metadata such as age and gender to the other users and suggest items to the user that similar users have liked. In that case I would be using a user-content filtering. The movie-lens dataset used here does not contain any user content data. So in a first step we will be building an item-content (here a movie-content) filter. Memory-based content filtering
In memory-based methods we don’t have a model that learns from the data to predict, but rather we form a pre-computed matrix of similarities that can be predictive. Please read on and you’ll see what I mean!
The data sets I have used for an item content filtering are and . I skip the data wrangling and filtering part, which you can find in the well-commented scripts on my GitHub page. We collect all the tags given to each movie by various users, add the movie’s genre keywords and form a final data frame with a metadata column for each movie.
import pandas as pd

# create a mixed dataframe of movie titles, genres
# and all user tags given to each movie
mixed = pd.merge(movies, tags, on='movieId', how='left')
mixed.head(3)
# create metadata from tags and genres
mixed.fillna("", inplace=True)
# join all tags per movie; reset_index keeps movieId as a column
# for the merge below (a closing parenthesis was missing here)
mixed = pd.DataFrame(mixed.groupby('movieId')['tag'].apply(
    lambda x: ' '.join(x))).reset_index()
Final = pd.merge(movies, mixed, on='movieId', how='left')
Final['metadata'] = Final[['tag', 'genres']].apply(
    lambda x: ' '.join(x), axis=1)
Final[['movieId', 'title', 'metadata']].head(3)
We then transform these metadata texts to vectors of features using
Tf-idf transformer of scikit-learn package. Each movie will transform into a vector of the length ~ 23000! But we don’t really need such large feature vectors to describe movies. Truncated singular value decomposition (SVD) is a good tool to reduce dimensionality of our feature matrix especially when applied on Tf-idf vectors. As you can see from the explained variance graph below, with 200 latent components (reduction from ~23000) we can explain more than 50% of variance in the data which suffices for our purpose in this work. So we will keep a latent matrix of 200 components as opposed to 23704 which expedites our analysis greatly. We name this latent matrix the content_latent and use this matrix a few steps later to find our top N similar movies to a given movie title. But let’s learn a bit about the ratings data.
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer(stop_words='english')
tfidf_matrix = tfidf.fit_transform(Final['metadata'])
tfidf_df = pd.DataFrame(tfidf_matrix.toarray(), index=Final.index.tolist())
print(tfidf_df.shape)
# Compress with SVD
from sklearn.decomposition import TruncatedSVD
svd = TruncatedSVD(n_components=200)
latent_matrix = svd.fit_transform(tfidf_df)
# plot variance explained to see what latent dimensions to use
import matplotlib.pyplot as plt
explained = svd.explained_variance_ratio_.cumsum()
plt.plot(explained, '.-', ms=16, color='red')
plt.xlabel('Singular value components', fontsize=12)
plt.ylabel('Cumulative percent of variance', fontsize=12)
plt.show()
Memory-based collaborative filtering
Aside from the movie metadata we have another valuable source of information at our disposal: the user rating data. Our recommender system can recommend a movie that is similar to “Inception (2010)” on the basis of user ratings. In other words, what other movies have received similar ratings from other users? This would be an example of
item-item collaborative filtering. You might have heard of it as “The users who liked this item also liked these other ones.” The data set of interest would be and we manipulate it to form items as vectors of input rates by the users. As there are many missing votes by users, we have imputed NaNs with 0, which suffices for the purpose of our collaborative filtering. Here we have movies as vectors of length ~80000. Again, as before, we can apply a truncated SVD to this rating matrix and only keep the first 200 latent components, which we will name the collab_latent matrix. The next step is to use a similarity measure and find the top N most similar movies to “Inception (2010)” on the basis of each of these filtering methods we introduced. Cosine similarity is one of the similarity measures we can use; for a summary of other similarity criteria, see Ref [2], page 93. In the following, you will see how the similarity of an input movie title can be calculated with both content and collaborative latent matrices. I have also added a hybrid filter which is an average measure of similarity from both content and collaborative filtering standpoints. If I list the top 10 most similar movies to “Inception (2010)” on the basis of the hybrid measure, you will see the following list in the data frame. For me personally, the hybrid measure is predicting more reasonable titles than any of the other filters.
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

# take the latent vectors for a selected movie from both content
# and collaborative matrices
a_1 = np.array(Content_df.loc['Inception (2010)']).reshape(1, -1)
a_2 = np.array(Collab_df.loc['Inception (2010)']).reshape(1, -1)
# calculate the similarity of this movie with the others in the list
score_1 = cosine_similarity(Content_df, a_1).reshape(-1)
score_2 = cosine_similarity(Collab_df, a_2).reshape(-1)
# an average measure of both content and collaborative
hybrid = ((score_1 + score_2)/2.0)
# form a data frame of similar movies
dictDf = {'content': score_1 , 'collaborative': score_2, 'hybrid': hybrid}
similar = pd.DataFrame(dictDf, index=Content_df.index)
# sort it on the basis of either: content, collaborative or hybrid
similar.sort_values('content', ascending=False, inplace=True)
similar[['content']][1:].head(11)
We could use the similarity information we gained from item-item collaborative filtering to compute a rating prediction, \(r_{ui}\), for an item \(i\) by a user \(u\) where the rating is missing, namely by taking a weighted average of the rating values of the top K nearest neighbours of item \(i\). Ref [2], page 97, discusses the parameters that can refine this prediction.
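A minimal sketch of such a neighbourhood prediction on hypothetical toy data (the function name and similarity values are illustrative, not from the MovieLens scripts):

```python
import numpy as np

def predict_rating(sim_to_target, user_ratings, k=2):
    """Predict a missing rating r_ui as the similarity-weighted average
    of the user's ratings on the k rated items most similar to item i."""
    rated = ~np.isnan(user_ratings)
    sims, rates = sim_to_target[rated], user_ratings[rated]
    top = np.argsort(sims)[-k:]          # indices of the k nearest rated items
    return float(np.dot(sims[top], rates[top]) / sims[top].sum())

# toy data: similarities of four items to the target, one rating missing
sim = np.array([0.9, 0.8, 0.1, 0.05])
ratings = np.array([5.0, 4.0, 1.0, np.nan])
r_hat = predict_rating(sim, ratings, k=2)
print(r_hat)  # (0.9*5 + 0.8*4) / 1.7, roughly 4.53
```

With k=2 the prediction leans entirely on the two most similar rated items, which is the "top K nearest neighbours" idea in miniature.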
As mentioned right at the beginning of this article, there are model-based methods that use statistical learning rather than ad hoc heuristics to predict the missing rates. In the next section, we show how one can use a matrix factorisation model for the predictions of a user’s unknown votes.
Model-based collaborative filtering
Previously we used truncated SVD as a means to reduce the dimensionality of our matrices. To that end, we imputed the missing rating data with zero to compute SVD of a sparse matrix. However, one could also compute an estimate to SVD in an iterative learning process. For this purpose we only use the known ratings and try to minimise the error of computing the known rates via gradient descent. This algorithm was popularised during the Netflix prize for the best recommender system. Here is a more mathematical description of what I mean for the more interested reader. Otherwise you can skip this part and jump to the implementation part.
Mathematical description
SVD factorizes our rating matrix \(M_{m \times n}\) with a rank of \(k\), according to equation (1a) to
three matrices \(U_{m \times k}\), \(\Sigma_{k \times k}\) and \(I_{n \times k}\):
\(M = U \Sigma_k I^T \tag{1a}\)
\(M \approx U \Sigma_{k'} I^T \tag{1b}\)
where \(U\) is the matrix of user preferences, \(I\) the item preferences and \(\Sigma\) the matrix of singular values. The beauty of SVD is in this simple notion that instead of the full rank-\(k\) vector space, we can approximate \(M\) on a much smaller \(k'\) latent space as in (1b). This approximation not only reduces the dimensions of the rating matrix, it also keeps only the most important singular values and leaves behind the smaller singular values which could otherwise result in noise. This concept was used for the dimensionality reduction above as well.
To approximate \(M\), we would like to find \(U\) and \(I\) matrices in the \(k'\) space using all the known ratings, which means solving an optimisation problem. According to (2), every rating entry in \(M\), \(r_{ui}\), can be written as a dot product of \(p_u\) and \(q_i\):
\(r_{ui} = p_u \cdot q_i \tag{2}\)
where \(p_u\) makes up the rows of \(U\) and \(q_i\) the columns of \(I^T\). Here we disregard the diagonal \(\Sigma\) matrix for simplicity (as it provides only a scaling factor). Graphically it would look something like this:
Finding all \(p_u\) and \(q_i\)s for all users and items will be possible via the following minimisation:
\( \min_{p_u,q_i} \sum_{r_{ui}\in M}(r_{ui} - p_u \cdot q_i)^2 \tag{3}\)
A gradient descent (GD) algorithm (or a variant such as stochastic gradient descent, SGD) can be used to solve the minimisation problem and to compute all \(p_u\) and \(q_i\)s. I will not describe the minimisation procedure in more detail here. You can read more about it on this blog or in Ref [2]. After we have all the entries of \(U\) and \(I\), the unknown rating \(r_{ui}\) will be computed according to eq. (2).
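A bare-bones SGD sketch of the minimisation in (3) on toy data (the sizes, ratings, learning rate and epoch count below are all assumptions for illustration):

```python
import numpy as np

# Toy SGD for eq. (3): learn p_u and q_i from the known ratings only.
rng = np.random.default_rng(0)
n_users, n_items, k = 5, 4, 2
known = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (2, 3, 1.0),
         (3, 2, 4.0), (4, 1, 2.0), (4, 3, 5.0)]   # (user, item, rating)
P = 0.1 * rng.standard_normal((n_users, k))        # rows p_u of U
Q = 0.1 * rng.standard_normal((n_items, k))        # rows q_i of I

lr = 0.05
for epoch in range(500):
    for u, i, r in known:
        err = r - P[u] @ Q[i]          # residual of eq. (2)
        p_old = P[u].copy()
        P[u] += lr * err * Q[i]        # gradient steps on eq. (3)
        Q[i] += lr * err * p_old

mse = np.mean([(r - P[u] @ Q[i])**2 for u, i, r in known])
print(mse)   # small: the known ratings are reproduced
```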
The minimisation process in (3) can also be regularised and fine-tuned with biases.
Implementation
An SVD algorithm similar to the one described above has been implemented in the Surprise library, which I will use here. Aside from SVD, deep neural networks have also been repeatedly used to calculate the rating predictions. This blog entry describes one such effort. SVD was chosen because it produces a comparable accuracy to neural nets with a simpler training procedure. In the following you can see the steps to train an SVD model in Surprise.
We gain a root-mean-squared error (RMSE) of 0.77 (the lower the better!) for our rating data, which does not sound bad at all. In fact, with a memory-based prediction from the item-item collaborative filtering described in the previous section, I could not get an RMSE lower than 1.0; that is a 23% improvement in prediction! Next we use this trained model to predict ratings for the movies that a given user \(u\), here e.g. with \(id\) = 7010, has not rated yet. The top 10 highly rated movies can be recommended to user 7010 as you can see below.
from surprise import Dataset, Reader, SVD, accuracy
from surprise.model_selection import train_test_split
# instantiate a reader and read in our rating data
reader = Reader(rating_scale=(1, 5))
data = Dataset.load_from_df(ratings_f[['userId','movieId','rating']], reader)
# train SVD on 75% of known rates
trainset, testset = train_test_split(data, test_size=.25)
algorithm = SVD()
algorithm.fit(trainset)
predictions = algorithm.test(testset)
# check the accuracy using Root Mean Square Error
accuracy.rmse(predictions)
RMSE: 0.7724
# check the preferences of a particular user
user_id = 7010
predicted_ratings = pred_user_rating(user_id)
pdf = pd.DataFrame(predicted_ratings, columns = ['movies','ratings'])
pdf.sort_values('ratings', ascending=False, inplace=True)
pdf.set_index('movies', inplace=True)
pdf.head(10)
Conclusion
As you saw in this article, there are a handful of methods one could use to build a recommendation system. The data scientist is tasked with finding and fine-tuning the methods that match the data better.
In the next part of this article I will be showing how the methods and models introduced here can be rearranged and categorised differently to facilitate serving and deployment. We will serve our model as a REST-ful API in Flask-restful with multiple recommendation endpoints.
References
Ref [1] – IEEE Transactions on knowledge and data engineering, Vol. 17, No. 6, JUNE 2005, DOI: 10.1109/TKDE.2005.99.
Ref [2] – Foundations and Trends in Human–Computer Interaction Vol. 4, No. 2, DOI: 10.1561/1100000009. |
Yes, $Z$ is a proper martingale. However, $\int_0^T(Z_sW_s)^2\,ds$ is
not integrable for large $T$. As the quadratic variation of $Z$ is $[Z]_t=4\int_0^t(Z_sW_s)^2\,ds$, Ito's isometry says that this is integrable if and only if $Z$ is a square-integrable martingale, and you can show that $Z$ is not square integrable at large times (see below).
However, it is
conditionally square integrable over small time intervals.
$$
\begin{align}
\mathbb{E}\left[Z_t^2W_t^2\;\Big\vert\;\mathcal{F}_s\right]&\le\mathbb{E}\left[W_t^2\exp(W_t^2)\;\Big\vert\;\mathcal{F}_s\right]\\
&=\frac{1}{\sqrt{2\pi(t-s)}}\int x^2\exp\left(x^2-\frac{(x-W_s)^2}{2(t-s)}\right)\,dx
\end{align}
$$
It's a bit messy, but you can evaluate this integral and check that it is finite for $s \le t < s+\frac12$. In fact, the integral over the range $[s,s+h]$ (for any $h < 1/2$) with respect to $t$ is also finite. So, conditional on $W_s$, you can say that $Z$ is a square integrable martingale over $[s,s+h]$.
This is enough to conclude that $Z$ is a proper martingale. We have $\mathbb{E}[Z_t\vert\mathcal{F}_s]=Z_s$ (almost surely) for any $s \le t < s+\frac12$. By induction, using the tower rule for conditional expectations, this extends to all $s < t$. Then, $\mathbb{E}[Z_t]=\mathbb{E}[Z_0] < \infty$, so $Z$ is integrable and the martingale conditions are met.
I mentioned above that the suggested method in the question cannot work because $Z$ is not square integrable. I'll elaborate on that now. If you write out the expected value of an expression of the form $\exp(aX^2+bX+c)$ (for $X$ normal) as an integral, it can be seen that it becomes infinite exactly when $a{\rm Var}(X)\ge1/2$ (because the integrand is bounded away from zero at either plus or minus infinity). Let's apply this to the given expression for $Z$.
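For the interested reader, here is the standard Gaussian computation behind that criterion (a sketch, with $X\sim N(\mu,\sigma^2)$):

```latex
\mathbb{E}\left[e^{aX^2+bX+c}\right]
  = \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty}
    \exp\!\Big( a x^2 + b x + c - \frac{(x-\mu)^2}{2\sigma^2} \Big)\,dx .
```

The coefficient of $x^2$ in the exponent is $a-\frac{1}{2\sigma^2}$. If $a\sigma^2<\frac12$ the exponent falls off quadratically and the integral converges; if $a\sigma^2>\frac12$ it grows at one end and the integral diverges; at $a\sigma^2=\frac12$ the quadratic terms cancel, leaving an exponent linear in $x$, which (for generic $b$) keeps the integrand bounded away from zero on one side, so the integral again diverges. Hence the expectation is infinite exactly when $a\,{\rm Var}(X)\ge\frac12$.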
The expression for $Z$ can be made more manageable by breaking the exponent into independent normals. Fix a positive time $t$; then $B_s=\frac{s}{t}W_t-W_s$ is a Brownian bridge independent of $W_t$. Rearranging the expression for $Z$:$$
\begin{align}
Z_t&=\exp\left(W_t^2-\int_0^t(2(\frac{s}{t}W_t+B_s)^2+1)\,ds\right)\\
&=\exp\left(W_t^2-2\int_0^t\frac{s^2}{t^2}W_t^2\,ds+\cdots\right)\\
&=\exp\left((1-2t/3)W_t^2+\cdots\right)
\end{align}
$$where '$\cdots$' refers to terms which are at most linear in $W_t$. Then, for any $p > 0$,$$
Z_t^p=\exp\left(p(1-2t/3)W_t^2+\cdots\right).
$$The expectation $\mathbb{E}[Z_t^p\mid B]$ of $Z_t^p$ conditional on $B$ is infinite whenever$$
p(1-2t/3){\rm Var}(W_t)=p(1-2t/3)t \ge \frac12.
$$The left hand side of this inequality is maximized at $t=\frac34$, where it takes the value $3p/8$. So, $\mathbb{E}[Z_{3/4}^p\mid B]=\infty$ for all $p\ge\frac43$. The expected value of this must then be infinite, so $\mathbb{E}[Z^p_{3/4}]=\infty$. It is a standard application of Jensen's inequality that $\mathbb{E}[\vert Z_t\vert^p]$ is increasing in time for any $p\ge1$ and martingale $Z$. So, $\mathbb{E}[Z_t^p]=\infty$ for all $p\ge 4/3$ and $t\ge3/4$. In particular, taking $p=2$ shows that $Z$ is not square integrable. |
It's a famous equation $$\Delta s^2=-(c\Delta t)^2+(\Delta x)^2$$ but why do we put the minus sign there? I heard it's there because space is ruled by non-Euclidean geometry, and if that is true, what is the difference between a Euclidean and a non-Euclidean geometry?
[I will work in natural units where $c=1$. I will, for the most part, consider $4$ coordinates in a coordinate frame. The coordinates of an event would be $x^\mu$ where $\mu$ runs from $0$ to $3$. $x^0$ is the time coordinate and the rest are the space coordinates. I will use Einstein summation convention throughout the treatment.]
The reasoning that the interval is $-\Delta t^2 + \Delta x^2$ because spacetime is ruled by a non-Euclidean geometry is wrong. It is actually the other way around: since physical arguments dictate that the
interval must have the signs it has, we conclude that spacetime actually has a non-Euclidean geometry.
The physical reasoning behind the signs can be presented something like this: What we want to construct by a quantity called the interval is a frame-invariant measure of the separation between events in the spacetime. How would we know whether a given quantity, expressed in terms of the coordinate measures $\Delta x^\mu$ is a frame invariant quantity or not? By expressing the quantity in terms of the coordinate measures of a different frame $\Delta x'^\nu$ and then expressing the primed quantities in the terms of the unprimed quantities (using the appropriate transformation law) and checking whether the expression for our quantity reduces to the expression for the same in terms of the unprimed coordinates.
In other words, if the quantity that we want to check for its frame-invariance is $I$ and $I=f(\Delta x^\mu)$ then for $I$ to be frame-invariant $f(\Delta x'^\nu)=f(P(\Delta x^\mu ))$ must hold; where the function $P$ is determined by the transformation law between the primed and unprimed coordinates.
Now, Special Relativity tells us that the coordinate transformation law between two inertial coordinate frame is the Lorentz transformation law.
$$\Delta x'^{\mu}=\Lambda^{\mu}_\alpha \Delta x^\alpha$$
where $\Lambda$ is a constant matrix for the given two inertial frames with the constraint: $$\Lambda^T\eta\Lambda=\eta$$ where $\eta$ is the following $4\times 4$ matrix:
$$\begin{bmatrix} -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
Now, if I consider a quantity $\eta_{\mu\nu}\Delta x^\mu\Delta x^\nu$ then it can be shown that it is frame invariant in the following manner:
$$\eta_{\alpha\beta}\Delta x'^\alpha \Delta x'^\beta=\eta_{\alpha\beta}\Lambda^\alpha_\mu\Delta x^\mu \Lambda^\beta_\nu\Delta x^\nu=(\Lambda^\alpha_\mu\eta_{\alpha\beta}\Lambda^\beta_\nu)\Delta x^\mu \Delta x^\nu = \eta_{\mu\nu} \Delta x^\mu\Delta x^\nu$$
As you can easily see, $\eta_{\mu\nu}\Delta x^\mu\Delta x^\nu$ is simply $-\Delta t^2 + \Delta x_1^2 + \Delta x_2^2 + \Delta x_3^2$. Thus, $-\Delta t^2 + \Delta x_1^2 + \Delta x_2^2 + \Delta x_3^2$ is the invariant interval in Special Relativity.
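As an illustrative numerical check (not part of the original answer), one can verify that a boost matrix satisfies the constraint $\Lambda^T\eta\Lambda=\eta$ and hence leaves the interval unchanged:

```python
import numpy as np

# A Lorentz boost Lambda satisfies Lambda^T eta Lambda = eta, so the
# quantity eta_{mu nu} dx^mu dx^nu is the same in both frames.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

phi = 0.7                                  # arbitrary rapidity, v = tanh(phi)
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(phi)
L[0, 1] = L[1, 0] = -np.sinh(phi)
assert np.allclose(L.T @ eta @ L, eta)     # the defining constraint

dx = np.array([2.0, 1.0, -0.5, 3.0])       # some coordinate separation
dx_p = L @ dx                              # the same separation, boosted
print(dx @ eta @ dx, dx_p @ eta @ dx_p)    # both equal -4 + 1 + 0.25 + 9 = 6.25
```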
Go and browse the related questions for the math. I'll still provide a handwaving illustration, hopefully it helps in understanding.
The difference between Euclidean and Minkowskian metric is precisely that minus. In general relativity, the metric gets more complicated, because the way how $\Delta t$ and $\Delta x$ are composed into the spacetime interval is where gravity comes in.
You will hear a lot of times, that the minus distinguishes the time coordinate from the spatial coordinates. How it does that, is the fact that in Minkowski metric, the space-time splits into two parts relative to a chosen origin: the time-like part (which can be related causally to your current point) and space-like (what cannot possibly influence the origin). This gives time its direction by restricting it from "rotating by 90 degrees" if rotations still worked the way they do in Euclidean space.
What would have been rotation, becomes boost - changing velocity of the reference frame, and gives you the hard limit of the speed of light.
This can be compared to the distinction between elliptic and hyperbolic differential equations. An elliptic equation, for instance solving the shape of soap on a wire frame, must be solved all at once: the boundary conditions influence everything, every point influences every other point, and the solution needs to be self-consistent. Just because the equation has a plus: $\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}=0$.
If you take a wave equation (which is a hyperbolic differential equation), $\frac{\partial^2 f}{\partial x^2}-\frac{\partial^2 f}{\partial t^2}=0$, you can
propagate the solution from the previous time: you need an initial condition and don't need to know anything about the future. All it takes is a little minus.
Compare a circle ($x^2+y^2=1$) and a hyperbola ($x^2-y^2=1$): one of them you can rotate properly, but the other has two asymptotes (which correspond to the speed of light in the light-cone defined by the zero space-time interval). In other words: a hyperbola in space-time defines the set of points at
equal space-time distance (compare to the circle, which defines points of equal distance in Euclidean space).
The question raised was "why" the interval equation has a negative sign. The answer that this is so because of Minkowski space leaves the "why" hanging in the air: why Minkowski space?
A more in-deep answer could be that this minus sign is intrinsically related to the possibility of movement:
That our Universe is 4-dimensional is obvious from Albert Einstein's mass-energy equivalence $ E = mc^2 $ and the relativistic energy invariant $ E^2/c^2 - p⃗^2 = m_0^2 c^2 $. Measuring distance in light-seconds instead of meters, the speed of light $c$ becomes 1, and we obtain the simple formula:
$ E^2 = m^2 = m_0^2 + p⃗^2 = m_0^2 + p_1^2 + p_2^2 + p_3^2 $
From this formula it is obvious that the rest mass $m_0$ is the fourth component of the momentum (or movement) vector, and that the energy or mass is the total length (absolute value) of the momentum vector.
A fundamental (empirical) law of physics states that in a closed physical system the energy, i.e. the above sum of four squares, is conserved during movement and during physical processes.
There is a mathematical identity, stating that a sum of four squares can always be written as the product of two sums of each four squares. (This holds as well for sums of two squares, and for sums of eight squares, but for nothing else).
Applying this to our formula for $E^2$, we can write:
$ (m_0^2 + p_1^2 + p_2^2 + p_3^2) = (r_0^2 + r_1^2 + r_2^2 + r_3^2)(M_0^2 + P_1^2 + P_2^2 + P_3^2) $
wherein the components (proof by algebraic evaluation) are:
$ m_0 = (r_0M_0 - r_1P_1 - r_2P_2 - r_3P_3) $
$ p_1 = (r_0P_1 + r_1M_0 + r_2P_3 - r_3P_2) $
$ p_2 = (r_0P_2 - r_1P_3 + r_2M_0 + r_3P_1) $
$ p_3 = (r_0P_3 + r_1P_2 - r_2P_1 + r_3M_0) $
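The component formulas above are exactly quaternion multiplication, and the four-square identity can be checked numerically (an illustrative sketch, not part of the original argument):

```python
import numpy as np

# Check that the stated components satisfy the four-square identity
# for arbitrary real inputs (quaternion norm multiplicativity).
rng = np.random.default_rng(1)
r0, r1, r2, r3 = rng.standard_normal(4)
M0, P1, P2, P3 = rng.standard_normal(4)

m0 = r0*M0 - r1*P1 - r2*P2 - r3*P3
p1 = r0*P1 + r1*M0 + r2*P3 - r3*P2
p2 = r0*P2 - r1*P3 + r2*M0 + r3*P1
p3 = r0*P3 + r1*P2 - r2*P1 + r3*M0

lhs = m0**2 + p1**2 + p2**2 + p3**2
rhs = (r0**2 + r1**2 + r2**2 + r3**2) * (M0**2 + P1**2 + P2**2 + P3**2)
print(np.isclose(lhs, rhs))   # True
```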
Now, let's assume that these sums of each four squares are metric products (scalar products) of vectors with themselves, noteworthy the vectors
$ P⃗ = (M_0, P_1, P_2, P_3) $
$ R⃗ = (r_0, r_1, r_2, r_3) $ and
$ p⃗ = (m_0, p_1, p_2, p_3) $
Then we can rewrite the formula for $E^2$ as
$ (p⃗)^2 = (R⃗ * P⃗)^2 $
wherein the multiplication $R⃗ * P⃗$ is defined as given above.
Let's now assume that $ R⃗ $ is a physical process operator which changes $ P⃗ $ into $ p⃗ $, and that $ R⃗ $ does not change the total energy $E$ of the system, i.e. $ (r_0^2 + r_1^2 + r_2^2 + r_3^2 ) = 1 $.
Given the bilinearity of the product $R⃗ * P⃗$, the vector $ P⃗ = P⃗1 + P⃗2 + ...$ can be a sum of vectors, representing a complex physical system. Also the vector $ p⃗ = p⃗1 + p⃗2 + ...$ can be a sum of vectors, representing another complex physical system.
The formula:
$ p⃗ = (p⃗1 + p⃗2 + ...) = R⃗ * (P⃗1 + P⃗2 + ...) = R⃗ * P ⃗ $
describes now a general movement of a physical system $ P⃗ = (P⃗1 + P⃗2 + ...) $ into a physical system $ p⃗ = (p⃗1 + p⃗2 + ...) $ under the influence of a physical process operator $ R⃗ $, wherein the total energy $E$ is conserved.
Note that the energy contained in physical system $ P⃗ $ can be transferred, shared, or cumulated in the systems $ p⃗1 , p⃗2 , ... $ under the effect of operator $ R⃗ $.
Note as well that the formula for
$ m_0 = (r_0M_0 - r_1P_1 - r_2P_2 - r_3P_3) $
shows the metric signature (+,-,-,-) of Minkowski space.
Conservation of energy and movement is possible in 4-dimensional space because of the existence of the four-squares identity, and movement necessarily follows a negative (hyperbolic) metric, for the mathematical reasons outlined.
The corresponding metric differential equation in 4-space is
$ \left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x_1^2} - \frac{\partial^2}{\partial x_2^2} - \frac{\partial^2}{\partial x_3^2}\right)A = \mu_0 J $
with $ A = (φ/c, (A_1, A_2, A_3)) $ the 4-potential composed of the electrostatic and vector potentials, and
$ J = (ρc,(J_1,J_2,J_3 )) $ the 4-current density composed of charge and currents.
This equation is the 4-dimensional form of Maxwell's equations.
Due to its negative metric, the solutions of this differential equation are wave functions. In other words, movement always and necessarily takes the form of a wave.
There are also stationary solutions to this differential equation:
$ \left(\frac{1}{c^2}\frac{\partial^2}{\partial t^2} - \frac{\partial^2}{\partial x_1^2} - \frac{\partial^2}{\partial x_2^2} - \frac{\partial^2}{\partial x_3^2}\right)\Psi = \lambda\Psi $
and they represent the massive particles and their possible compounds. |
Tracking and forecasting epidemic spread through viral genome sequencing
Trevor Bedford (@trvrb)
9 Oct 2019, EPPIcenter Seminar Series, UCSF. Slides at: bedford.io/talks
We work at the interface of virology, evolution and epidemiology
Sequencing to reconstruct pathogen spread: epidemic process, sample some individuals, sequence and determine phylogeny
Localized Middle Eastern MERS-CoV phylogeny
Regional West African Ebola phylogeny
Global influenza phylogeny
Phylogenetic tracking has the capacity to revolutionize epidemiology
Outline
Analysis of Ebola epidemic spread in West Africa
Nextstrain platform for real-time phylodynamics
Actionable genomic epidemiology for Ebola in the DRC
Seasonal influenza evolution and vaccine strain selection
Forecasting influenza strain turnover
Ebola epidemic of 2014-2016 was unprecedented in scope
Ebola epidemic in West Africa
Ebola epidemic within Sierra Leone
Virus genomes reveal factors that spread and sustained the Ebola epidemic
with Gytis Dudas, Andrew Rambaut, Luiz Carvalho, Marc Suchard, Philippe Lemey, and many others
Sequencing of 1610 Ebola virus genomes collected during the 2013-2016 West African epidemic
Sequenced genomes were representative of spatiotemporal diversity
Phylogenetic reconstruction of the epidemic
Tracking migration events
Factors influencing migration rates
Effect of borders on migration rates
Spatial structure at the country level
Substantial mixing at the regional level
Each introduction results in a minor outbreak
Regional outbreaks due to multiple introductions
Ebola spread in West Africa followed a gravity model with moderate slowing by international borders, in which spread is driven by short-lived migratory clusters
Genomic analyses were mostly done in a retrospective manner
Dudas and Rambaut 2016
Key challenges to making genomic epidemiology actionable
Timely analysis and sharing of results critical Dissemination must be scalable Integrate many data sources Results must be easily interpretable and queryable Nextstrain
Project to conduct real-time molecular epidemiology and evolutionary analysis of emerging epidemics
with Richard Neher, James Hadfield, Emma Hodcroft, Thomas Sibley, John Huddleston, Louise Moncla, Misja Ilcisin, Kairsten Fay, Jover Lee, Allison Black, Colin Megill, Sidney Bell, Barney Potter, Charlton Callender Nextstrain architecture
All code open source at github.com/nextstrain
Two central aims: (1)
rapid and flexible phylodynamic analysis and (2) interactive visualization Rapid build pipeline for 1600 Ebola genomes
Align with MAFFT (34 min)
Build ML tree with RAxML (54 min)
Temporally resolve tree and geographic ancestry with TreeTime (16 min)
Total pipeline: 1 hr 46 min
Flexible pipelines constructed through command line modules
Modules called via
augur filter,
augur tree,
augur traits, etc...
Designed to be composable across pathogen builds
Defined pipeline, making steps obvious
Provides dependency graph for fast recomputation
Pathogen-specific repos give users an obvious foundation to build from
Nextstrain is two things
a bioinformatics toolkit and visualization app, which can be used for a broad range of datasets
a collection of real-time pathogen analyses kept up-to-date on the website nextstrain.org
Rapid on-the-ground sequencing in Makeni, Sierra Leone
"Community" builds to promote frictionless sharing of results
Attempting to write us out of the picture: JSON outputs uploaded to
github.com/ucsf/dengue, would be available at
nextstrain.org/community/ucsf/dengue
Used now for a variety of pathogens including Lassa in Nigeria, global RSV and cassava virus Genomic epidemiology applied to North Kivu Ebola outbreak
with Placide Mbala-Kingebeni, Eddy Kinganda Lusamaki, Catherine Pratt, Mike Wiley, James Hadfield, Allison Black, Jean-Jacques Muyembe Tamfum, Steve Ahuka-Mundeke, Daniel Mukadi, Gustavo Palacios, Amadou Sall, Ousmane Faye, Eric Delaporte, Martine Peeters and many others Nextstrain is being used to track North Kivu outbreak
Allison Black (PhD student) and James Hadfield (postdoc) working with scientists at the INRB. Goal is to provide training in bioinformatics, Nextstrain and genomic epidemiology.
View of current genomic data Current dataset:
376 full genomes sequenced (15% of confirmed cases)
Most recent sequenced virus collected Sep 12 (4 weeks ago)
Often (but not always) we need specific actionable pieces of information rather than large-scale understanding. For Ebola outbreak response, I believe this needs to revolve around contact tracing.
Superspreader event in June
Using narratives to walk through specific transmission inferences
We're rolling out new narratives functionality in Nextstrain. These are Markdown posts that allow you to pair narrative text to visualization state, made possible through an early decision to embed visualization state in the URL. Example narrative for Ebola in the DRC: nextstrain.org/narratives/inrb-ebola-example-sit-rep
Tracking seasonal influenza virus evolution
Population turnover of A/H3N2 influenza is extremely rapid
Clades emerge, die out and take over
Clades show rapid turnover
Dynamics driven by antigenic drift
Drift necessitates vaccine updates
H3N2 vaccine updates occur every ~2 years
Vaccine strain selection by WHO
Working with Richard Neher, we decided to tackle this head on and build something that charts the behavior of specific strains and can be kept continually up to date
Nextflu
Project to provide a real-time view of the evolving influenza population
Made possible by rapid and open sharing of WHO GISRS data through GISAID database
Current view of H3N2 from nextstrain.org/flu
Clade frequencies show recent rise of A1b/197R viruses
Local branching index (LBI) also points to A1b/197R as the most rapidly spreading clade
Serological assay data indicates largely similar antigenic phenotypes in A1b viruses
Forecasting seasonal influenza A/H3N2 evolution
with John Huddleston and Richard Neher Fitness models project strain frequencies
Future frequency $x_i(t+\Delta t)$ of strain $i$ derives from strain fitness $f_i$ and present day frequency $x_i(t)$, such that
$$\hat{x}_i(t+\Delta t) = x_i(t) \, \mathrm{exp}(f_i \, \Delta t)$$
Total strain frequencies at each timepoint are normalized. This captures clonal interference between competing lineages.
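A minimal sketch of this projection; the clade frequencies and fitness values below are toy numbers, not real estimates:

```python
import numpy as np

# Project strain frequencies: x_i(t+dt) proportional to x_i(t)*exp(f_i*dt),
# then renormalize so frequencies sum to one (clonal interference).
def project(x, f, dt):
    x_hat = x * np.exp(f * dt)
    return x_hat / x_hat.sum()

x = np.array([0.5, 0.3, 0.2])    # present-day clade frequencies (toy)
f = np.array([0.0, 1.0, -0.5])   # relative fitnesses (toy)
print(project(x, f, dt=1.0))     # the fittest clade gains frequency
```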
Two inputs
Estimate of present-day strain frequencies $x(t)$ Estimate of present-day strain fitnesses $f$ Strain frequency estimated via region-weighted KDE Strain fitness estimated from viral attributes
The fitness $f$ of strain $i$ is estimated as
$$\hat{f}_i = \beta^\mathrm{A} \, f_i^\mathrm{A} + \beta^\mathrm{B} \, f_i^\mathrm{B} + \ldots$$
where $f^A$, $f^B$, etc... are different standardized viral attributes and $\beta^A$, $\beta^B$, etc... coefficients are trained based on historical evolution
Fitness inputs: antigenic drift (epitope mutations, HI titers), intrinsic fitness (non-epitope mutations, DMS data via the Bloom lab), recent growth (local branching index, delta frequency)
Future population depends on frequency and fitness
Forecast assessed based on weighted distance match to observed future population
Train in 6-year sliding windows from 1995 to 2015 with most recent years held out as test
Single predictors favor HI drift, non-epitope fitness and local branching index
Composite models suggest a combination of local branching index and non-epitope fitness
Model successfully predicts clade growth
Best pick from model is generally close to best possible retrospective pick
Forecast from current virus population
Predicted sequence match of circulating strains to future population
These forecasts are now rolled out live to nextstrain.org/flu
This work relies on rapid and open sharing of pathogen genomic data.
All Nextstrain code is entirely open source and intended to be used by the community. We've been working hard on improving documentation at nextstrain.org/docs. You're most welcome to kick the tires.
Acknowledgements
Bedford Lab: Alli Black, John Huddleston, James Hadfield, Katie Kistler, Louise Moncla, Maya Lewinsohn, Thomas Sibley, Jover Lee, Kairsten Fay, Misja Ilcisin
Ebola in West Africa: Gytis Dudas, Andrew Rambaut, Luiz Carvalho, Philippe Lemey, Marc Suchard, Andrew Tatem Nextstrain: Richard Neher, James Hadfield, Emma Hodcroft, Tom Sibley, John Huddleston, Sidney Bell, Barney Potter, Colin Megill, Charlton Callender Ebola in DRC: James Hadfield, Allison Black, Eddy Kinganda Lusamaki, Placide Mbala-Kingebeni, Catherine Pratt, Mike Wiley, Jean-Jacques Muyembe Tamfum, Steve Ahuka-Mundeke, Daniel Mukadi, Gustavo Palacios, Amadou Sall, Ousmane Faye, Eric Delaporte, Martine Peeters, David Blazes, Cecile Viboud, David Spiro Seasonal flu: WHO Global Influenza Surveillance Network, John Huddleston, Richard Neher, Barney Potter, Dave Wentworth, Becky Garten |
I am in my 4th year (3 semesters left including the current one), taking mechanics, E&M, quantum mechanics, and a lab course. For each of the three main courses, we get one problem set per week that's around 5-8 questions. In addition to that there's a lab report due every 1-2 weeks. It really...
This is a random problem I am trying to figure out. The context doesn't matter. I wish to define a function z(x, y) based on the following limits:
1. lim z (x→∞) = 0
2. lim z (x→0) = y
3. lim z (y→∞) = ∞
4. lim z (y→0) = 0
This is a somewhat vague question that stems from the entries in a directional cosine matrix and I believe the answer will either be much simpler or much more complicated than I expect. So consider the transformation of an arbitrary vector, v, in ℝ2 from one frame f = {x1, x2} to a primed...
I am looking at an explanation of the gradient operator acting on a scalar function ## \phi ##. This is what is written: In the steps 1.112 and 1.113 it is written that ## \frac {\partial x'_k} {\partial x'_i} ## is equivalent to the Kronecker delta. It makes sense to me that if i=k, then...
1. Homework Statement
Integrate by changing to polar coordinates:
## \int_{0}^6 \int_{0}^\sqrt{36-x^2} \tan^{-1} \left( \frac y x \right) \, dy \, dx ##
2. Homework Equations
## x = r \cos \left( \theta \right) ##
## y = r \sin \left( \theta \right) ##
3. The Attempt at a Solution
So this...
I assume this is a simple summation of the normal components of the vector fields at the given points multiplied by dA, which in this case would be 1/4. This is not being accepted as the correct answer. Not sure where I am going wrong. My textbook doesn't discuss estimating surface integrals...
1. Homework Statement
Given this diagram, the problem is to find an expression for β/ΘE in terms of X/ΘE and Y/ΘE.
2. Homework Equations
β = Θ – α(Θ)
Dsβ = DsΘ – Dlsα'(Θ)
3. The Attempt at a Solution
I really only need help starting this problem. In my textbook and every document I can...
I am currently in Honors Physics 3 which is the third introductory course of my physics degree program and covers modern physics beginning with special relativity. So far we have covered Lorentz transformations and velocity additions, relativistic energy and momentum, blackbody radiation, photon...
The problem is to find the general term ##a_n## (not the partial sum) of the infinite series with a starting point n=1: $$a_n = \frac {8} {1^2 + 1} + \frac {1} {2^2 + 1} + \frac {8} {3^2 + 1} + \frac {1} {4^2 + 1} + \text {...}$$ The denominator is easy, just ##n^2 + 1##, but I can't think of...
1. Homework Statement
A car engine moves a piston with a circular cross section of 7.500 ± 0.005 cm diameter a distance of 3.250 ± 0.001 cm to compress the gas in the cylinder.
(a) By what amount is the gas decreased in volume in cubic centimeters?
(b) Find the uncertainty in this volume.
2...
1. Homework Statement
A uniform disk of radius R = 1 m rotates counterclockwise with angular velocity ω = 2 rad/s about a fixed perpendicular axle passing through its center. The rotational inertia of the disk relative to this axis is I = 9 kg⋅m². A small ball of mass m = 1 is launched with speed v = 4 m/s...
1. Homework Statement
A tall, cylindrical chimney falls over when its base is ruptured. Treat the chimney as a thin rod of length 49.0 m. Answer the following for the instant it makes an angle of 32.0° with the vertical as it falls. (Hint: Use energy considerations, not a torque.)
(a) What is...
1. Homework Statement
During a rockslide, a 710 kg rock slides from rest down a hillside that is 500 m long and 300 m high. The coefficient of kinetic friction between the rock and the hill surface is 0.23...
I don't think there is a specific name for the coarsest topology on $X \times X$ for which $d : X \times X \to \mathbb{R}$ is continuous. In general if $f$ is a function from a set $Y$ to a topological space $Z$, the coarsest topology on $Y$ for which $f$ is continuous is called the
topology on $Y$ generated by $f$. Accordingly, we may call it the topology on $X \times X$ generated by $d$.
For the remainder let's fix the following notations:
$\mathcal{O}_d$ is the topology on $X \times X$ generated by $d$;
$\mathcal{O}_{\mathrm{m}d}$ is the metric topology on $X$;
$\mathcal{O}_{\mathrm{p}d} \sim \mathcal{O}_{\mathrm{m}d} \otimes \mathcal{O}_{\mathrm{m}d}$ is the "metric product topology" on $X \times X$.
$\mathcal{O}_{d}$ is generated by the sets of the form $$d^{-1} [ ( a , b ) ] = \{ ( x , y ) \in X \times X : a < d (x,y) < b \}$$
Note that since $\mathcal{O}_d$ is a topology on $X \times X$ and $\mathcal{O}_{\mathrm{m}d}$ is a topology on $X$, the two topologies cannot be compared. We can, however, compare $\mathcal{O}_d$ and $\mathcal{O}_{\mathrm{p}d}$. Of course, $\mathcal{O}_{d}$ is coarser than $\mathcal{O}_{\mathrm{p}d}$, since $d$ is continuous with respect to $\mathcal{O}_{\mathrm{p}d}$.
These two topologies will differ greatly (except in trivial cases).
For instance, if $X$ has at least two points, then $\mathcal{O}_{d}$ is not Hausdorff (it's not even T$_0$). This is because for all $x , y \in X$ there is no open set containing exactly one of $(x,x)$ or $(y,y)$. On the other hand, $\mathcal{O}_{\mathrm{p}d}$ is always Hausdorff (even perfectly normal, since it is itself a metrizable topology).
To apply Hartree-Fock to the helium atom, you only need to calculate one orbital, which for helium is spherically symmetric. The Hartree-Fock integro-differential equation for a spherically symmetric atom with one eigenstate can be written as
$$\left(-\frac{1}{2}\nabla^2 - \frac{2}{r} + V_{Hx}(r) \right) u(r) = \epsilon u(r),$$
where $u(r) = \psi(r) r$ and
$$V_{Hx}(r) = \frac{1}{2} \int d{\bf r}' \frac{n({\bf r}')}{|{\bf r-r}'|}$$The factor $\frac{1}{2}$ comes from the exchange term cancelling half of the Hartree potential.
I would recommend the shooting method. (Alternatives: Gaussian basis sets, finite element, and finite difference methods.)
The shooting method is simple: guess an eigenvalue and integrate the radial Schrödinger equation from r = 0 to r = r_max. Require that the boundary condition (zero) is fulfilled at r_max; if not, adjust the eigenvalue accordingly.
https://en.wikipedia.org/wiki/Shooting_method
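As a rough sketch of how such a shooting loop might look (my own illustration, not code from this answer; the Hartree term $V_{Hx}$ is switched off here, so the known hydrogen-like result $\epsilon = -Z^2/2 = -2$ hartree for $Z=2$ serves as a check of the machinery):

```python
def shoot(E, Z=2.0, r0=1e-4, rmax=10.0, steps=4000):
    """Integrate u'' = -2*(E + Z/r)*u outward with RK4; return u(rmax)."""
    h = (rmax - r0) / steps
    u, v, r = r0, 1.0, r0          # near the origin u ~ r for l = 0
    f = lambda r, u: -2.0 * (E + Z / r) * u
    for _ in range(steps):
        k1u, k1v = v, f(r, u)
        k2u, k2v = v + 0.5 * h * k1v, f(r + 0.5 * h, u + 0.5 * h * k1u)
        k3u, k3v = v + 0.5 * h * k2v, f(r + 0.5 * h, u + 0.5 * h * k2u)
        k4u, k4v = v + h * k3v, f(r + h, u + h * k3u)
        u += h * (k1u + 2 * k2u + 2 * k3u + k4u) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        r += h
    return u

# Bisect on the sign of u(rmax): the boundary value flips sign as E crosses
# an eigenvalue.  The bracket [-2.5, -1.5] contains only the ground state
# of the bare Z=2 ion.
lo, hi = -2.5, -1.5
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)
print(E0)   # ≈ -2.0 hartree
```

With the Hartree-exchange potential included, one would recompute $V_{Hx}$ from the current density between shooting passes and iterate to self-consistency.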
Solving the radial Hartree-Fock potential equation is also super simple.
$$ \int d{\bf r}' \frac{n(r')}{|{\bf r}-{\bf r}'|} = \frac{1}{r} \int_0^r dr'\, 4\pi n(r') r'^2 + \int_r^\infty dr'\, 4\pi n(r') r' $$
(Eq. 7.6 here) https://wiki.fysik.dtu.dk/gpaw/_static/rostgaard_master.pdf
The latter part of my answer here explains how this equation works: Dipole in a spherical cavity in an infinite dielectric
Beware that the Fock operator is only multiplicative for the helium 1s orbital. The rest of the eigenvalue spectrum of the given equation does not match the Fock-operator spectrum.
edit: I just read a comment of yours stating that you are using Gaussian orbitals. Then you can solve it easily; you just need to calculate some matrix elements. Given a set of Gaussian orbitals $\phi_i$, you calculate two matrices
$$H_{ij} = < \phi_i | H | \phi_j>$$
and
$$S_{ij} = < \phi_i | \phi_j >.$$
Then you solve this generalized eigenvalue equation
$$Hc=\epsilon Sc,$$ and you get the new coefficients of the wave function as the eigenvector. Your wave function is now
$$ \psi(r) = \sum_n c_n \phi_n(r) $$
The method is called subspace diagonalization.
To solve the generalized eigenvalue equation with octave or Matlab,
octave:1> H = [ -2 0.1 ; 0.1 -3 ];
octave:2> S = [ 1 0.1 ; 0.1 1 ];
octave:5> [psi, lambda] = eig(H,S)
psi =
-0.35088 -0.94180
0.97217 -0.25494
lambda =
Diagonal Matrix
-3.1498 0
0 -1.9209
Here you obtain the eigenvalues (-3.14 is the lowest), and the corresponding eigenvector [-0.35, 0.97]. Now, all that there is left, is to fill matrices H and S with the real matrix elements.
To be more specific, the following integrals must be evaluated:
$$H_{ij} = \int dr \phi_i(r) (-1/2 \nabla^2 + V(r) ) \phi_j(r) \\S_{ij} = \int dr \phi_i(r) \phi_j(r) $$ |
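A Python version of the same generalized eigenvalue solve, using SciPy's symmetric solver on the toy matrices from the octave session above (a sketch; `scipy` assumed available):

```python
import numpy as np
from scipy.linalg import eigh

H = np.array([[-2.0, 0.1],
              [0.1, -3.0]])
S = np.array([[1.0, 0.1],
              [0.1, 1.0]])

# generalized symmetric eigenproblem  H c = eps S c
eps, C = eigh(H, S)
print(eps)           # ≈ [-3.1498, -1.9209], matching the octave output
c_ground = C[:, 0]   # expansion coefficients of the lowest state
```

In a real calculation, `H` and `S` would be filled with the integrals above, evaluated over the chosen Gaussian basis.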
We will consider the case of real transport with a given flow rate $J=\rho u$. We assume that all compressors are identical, capable of maintaining a given air flow. Specifications: the flow velocity is limited by the condition $1\le u\le 10$ m/s. It is necessary to determine how many compressors are needed and how to optimally position them when ...
If you have a pressurized container of sufficient size, would there be a pressure gradient due to gravity? There will always be a pressure gradient due to gravity. How significant it is, however, depends on what you mean by "sufficient size". For example, for the first 1000 meters above sea level, the increase in atmospheric pressure due to gravity is 11....
A balloon is composed of surface patches that expand with hydrostatic pressure and take on a certain shape. If the material shape and properties are known, the direct problem of finding out what shape it takes at a given pressure, and thus the volume, is straightforward. The inverse problem, however, isn't. In this case one would normally model the volume ...
As explained in other answers, a tank can fail under internal or external pressure if the material strength is exceeded. The critical pressure differential of a thin spherical shell for this failure mode can be calculated using the following formula (to derive it, one can consider equilibrium of a half of the shell):$$2\pi R h\sigma=\pi R^2 \Delta p,$$...
Expanding tank vs contracting tank is the comparison. Since the collapsing tank may not tear, and therefore technically not be destroyed in that it could be reinflated and still be usable to some extent (search for images of 'tank collapse under vacuum' on web), a different way to look at the problem is useful. Two questions can be posed:"Which tank would ...
Adding on to what others have said, this is very non-straightforward. As others mention, the failure criteria for a vacuum vessel and a pressurized vessel are quite different. There is one very large factor that no one seems to have mentioned yet. When the spherical vessel is under a vacuum, it develops compression, and if it's a ductile material, it is ...
Though failure with the sphere evacuated is highly unlikely given a maximum pressure difference of one atmosphere, if there were no limit to the external pressure, then it would probably depend on the material. With positive pressure inside a spherical tank, the walls are subjected to tensile stress. If negative, compressive stress. Failure may depend on ...
There's no physics principle that gives a single answer here.First of all, a vacuum can never achieve more than a 1atm pressure difference across a vessel (assuming we're doing this here in a normal workshop). So if you have a vessel that can withstand 5atm without any yield, then it will never have a problem with a vacuum, but would fail if pressurized ...
Shouldn't the can still start moving from the moment we make the opening and until the air pressure inside the can is equalized with the outside air? Indeed it will, but that takes very little time. If that in fact happens, why would it stop and return ("a momentary slight oscillation about the center of mass") and not simply continue moving ...
If the pressure difference between inside and outside is the same (but opposite) then the force on the can in both situations is equal and opposite. If the geometry is the same then the pressure difference evolves in the same way but with opposite sign. The motion of the can, opposite, is only different due to difference in the amount of gas resistance.So ...
The easiest way to answer this problemis by thinking about conservation of momentum.Consider the still closed can with vacuum inside, and the air outside.The can is at rest, hence it has momentum $\vec{p}_\text{can} = \vec{0}$.The air is at rest too, hence it has momentum $\vec{p}_\text{air} = \vec{0}$.Therefore total momentum is$$\vec{p}_{\text{...
Because gas is compressible, you must specify its pressure along with the volume it happens to occupy in order to properly define its state. And since heating a gas causes its pressure to increase, a complete description of any gas will necessarily include calling out its temperature as well.Since more gas atoms in a fixed volume will exert more pressure ...
If you are considering a gas/partial vacuum that is not confined in a well-defined 'container', you would presumably want use specific volume (aka inverse density) instead. I.e. consider the amount of volume occupied per unit mass of the substance. |
Abbreviation:
MultSlat
An m-semilattice (or multiplicative semilattice) is a structure $\mathbf{A}=\langle A,\vee,\cdot\rangle$ of type $\langle 2,2\rangle$ such that
$\langle A,\vee\rangle$ is a semilattice
$\cdot$ distributes over $\vee$: $x(y\vee z)=xy\vee xz$, $(x\vee y)z=xz\vee yz$
Remark: This is a template. If you know something about this class, click on the 'Edit text of this page' link at the bottom and fill out this page.
It is not unusual to give several (equivalent) definitions. Ideally, one of the definitions would give an irredundant axiomatization that does not refer to other classes.
Let $\mathbf{A}$ and $\mathbf{B}$ be multiplicative semilattices. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \vee y)=h(x) \vee h(y)$, $h(x \cdot y)=h(x) \cdot h(y)$,
An
is a structure $\mathbf{A}=\langle A,...\rangle$ of type $\langle...\rangle$ such that …
$...$ is …: $axiom$
$...$ is …: $axiom$
Example 1:
Feel free to add or delete properties from this list. The list below may contain properties that are not relevant to the class that is being described.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[Lattice-ordered semigroups]] [[Semilattices]] reduced type |
Wrinklers are twitchy leech-like creatures that, in normal gameplay, only start appearing during the Grandmapocalypse. While they at first appear to reduce CpS (cookies per second), they actually provide a massive boost to cookie production in the long run. There is a 0.01% chance a shiny wrinkler will spawn instead of a normal wrinkler.

Overview

Wrinklers appear within the game's left panel and slowly crawl toward the Big Cookie.
Upon reaching the big cookie, a wrinkler will begin to feed upon it, withering the total CpS by 5%. A total of ten wrinklers can feed on the big cookie at once; twelve if Elder spice has been purchased. Each additional wrinkler withers another 5% of CpS – two wrinklers wither 10% of CpS in total, three wither 15%, and the maximum of twelve withers 60% of the total CpS. While the output of manually clicking the big cookie is unaffected, having ten wrinklers will appear to halve the cookie production of every building and upgrade.
A wrinkler can be killed, or popped, by being clicked three consecutive times. The game hints at this property by having a wrinkler continuously shake when hovered over. When exploded, it will drop 1.1 × the total number of cookies withered during its life. If the current season is Halloween, a popped wrinkler also has a 5% chance of unlocking one of the 7 Halloween cookie types. The stats menu will display how much of the current CpS has been withered, as well as the total number of popped wrinklers. If a wrinkler is popped before reaching the big cookie, it will not drop any cookies or upgrades.
If the Grandmatriarchs are appeased, whether via Elder Pledge, Elder Covenant, or by selling all grandmas, all the wrinklers on the screen will automatically be exploded, and will not begin to appear again unless the Grandmapocalypse is restarted.
CpS multiplier
Because wrinklers store a percentage of the total wither rate, each additional wrinkler adds a larger bonus than the previous one. The net increase in CpS is 0.5% with one wrinkler, rising to 732% with the maximum of twelve. The twelfth wrinkler alone increases net CpS by more than 120% of the pre-wrinkler base value.
To calculate the effective CpS multiplier with $N$ wrinklers present, the formula has two parts:

$ N \cdot 0.05 \cdot 1.1 \cdot 1.05 \cdot 1.05 $ — the withered CpS returned per wrinkler. Each wrinkler hoards the total withered CpS, i.e. 5% times the number of wrinklers present. Hoarded cookies are then amplified by a flat factor of 1.1 (10% gain after popping), by an additional 1.05 (5% more cookies) with the "Wrinklerspawn" upgrade, and by another 1.05 with the "Sacrilegious corruption" upgrade.

$ 1-N \cdot 0.05 $ — the remaining unwithered CpS.
Put together, this is
$ N \cdot (N \cdot 0.05 \cdot 1.1) + (1 - N \cdot 0.05) = N^2 \cdot 0.055 - N \cdot 0.05 + 1 $without wrinkler upgrades. $ N \cdot (N \cdot 0.05 \cdot 1.1) \cdot (1.05) + (1 - N \cdot 0.05) = N^2 \cdot 0.05775 - N \cdot 0.05 + 1 $with 1 wrinkler upgrade. $ N \cdot (N \cdot 0.05 \cdot 1.1) \cdot (1.05 \cdot 1.05) + (1 - N \cdot 0.05) = N^2 \cdot 0.0606375 - N \cdot 0.05 + 1 $with both wrinkler upgrades.
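These multiplier formulas can be sanity-checked with a short script (my own sketch, not game code):

```python
def cps_multiplier(N, upgrades=0):
    """Effective CpS multiplier with N wrinklers feeding (0 <= N <= 12).

    upgrades: 0, 1 or 2 wrinkler upgrades ("Wrinklerspawn",
    "Sacrilegious corruption"), each multiplying the payout by 1.05.
    """
    boost = 1.1 * 1.05 ** upgrades   # 110% payout, times 5% per upgrade
    withered = N * 0.05              # fraction of CpS the wrinklers hoard in total
    return N * (withered * boost) + (1 - withered)

print(cps_multiplier(12))              # ≈ 8.32
print(cps_multiplier(12, upgrades=2))  # ≈ 9.1318
```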
From the above formulas, we can tabulate the CpS multiplier:
No. wrinklers    CpS multiplier    With one of "Wrinklerspawn"/"Sacrilegious corruption"    With both
0                1                 1                  1
1                1.005             1.00775            1.0106375
2                1.12              1.131              1.14255
3                1.345             1.36975            1.3957375
4                1.68              1.724              1.7702
5                2.125             2.19375            2.2659375
6                2.68              2.779              2.88295
7                3.345             3.47975            3.6212375
8                4.12              4.296              4.4808
9                5.005             5.22775            5.4616375
10               6                 6.275              6.56375
11               7.105             7.43775            7.7871375
12               8.32              8.716              9.1318

Spawn Rate
Each empty wrinkler slot has a (re-)spawn rate of 0.001% (0.005% with "Unholy bait") per Grandmapocalypse stage per frame (the usual framerate is 30 fps). Skruuia, Spirit of Scorn, and garden plants can also alter the rate.
Modifier                    Factor
Grandmapocalypse stage      1/2/3 (One Mind/Communal Brainsweep/Elder Pact)
Unholy bait                 5
Skruuia, Spirit of Scorn    2.5/2/1.5 (Diamond/Ruby/Jade)
Garden plants               $1 - 0.15 W_A + 0.02 W_R$, with $W_A$ Wardlichens and $W_R$ Wrinklegills planted
For example, if you have Unholy bait and 3 Wardlichens in the garden during Communal Brainsweep, the spawn rate per slot per frame will be r = 0.001% × 5 × 2 × (1 − 0.15 × 3) = 0.0055%. On average, collecting all 12 wrinklers takes:
$ \frac{1}{r}\sum_{k=1}^{12} \frac{1}{k}=\frac{1}{r}\frac{86021}{27720} $ frames
$ \approx\frac{0.10344}{r} $ seconds
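The sum and the frame-to-seconds conversion can be verified with a few lines (my own sketch, assuming the usual 30 fps):

```python
from fractions import Fraction

# expected number of frames to fill all 12 slots (coupon-collector style sum)
H12 = sum(Fraction(1, k) for k in range(1, 13))
print(H12)                        # 86021/27720 ≈ 3.1032

FPS = 30
def avg_fill_time(rate_per_frame):
    """Average seconds until all 12 wrinklers have spawned."""
    return float(H12) / rate_per_frame / FPS

r_elder_pact = 0.00001 * 3        # base 0.001% per frame, stage factor 3
print(avg_fill_time(r_elder_pact) / 60)        # ≈ 57.5 minutes
print(avg_fill_time(r_elder_pact * 5) / 60)    # ≈ 11.5 minutes with Unholy bait
```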
On average, it takes 57 minutes and 28 seconds (11 minutes 30 seconds with "Unholy bait") for all 12 wrinklers to spawn during Elder Pact (stage factor 3), starting from 0 wrinklers. For Communal Brainsweep it takes 3/2 of that time, about 86 minutes (17 with "Unholy bait"); for One Mind, three times as long, about 172 minutes (34 with "Unholy bait").

Upon opening the Cookie Clicker page (including refreshing it), and regardless of their previous locations, all actively withering wrinklers are moved to the upper right of the cookie in a clockwise pattern. The total amount of withered cookies is also averaged and reallocated between the wrinklers.

Seasons

Halloween

Main article: Halloween Season
Halloween Cookies are spawned randomly during the Halloween season by popping wrinklers. Each cookie has a drop chance from 5% to 56%, depending on: the "Spooky cookies" achievement, the "Santa's bottomless bag" upgrade, the "Mind Over Matter" and "Reality Bending" dragon auras, the "Starterror" prestige upgrade, and the wrinkler type (normal or shiny).
Easter

Main article: Easter Season
All Easter upgrades are egg based in theme and can be unlocked randomly when clicking a Golden Cookie or Wrath Cookie, or by popping a wrinkler. At base, a Golden/Wrath Cookie has a 10% chance to unlock an egg and a wrinkler has a 2% chance to unlock an egg. The "Hide and Seek Champion" achievement, "Omelette" and "Santa's Bottomless Bag" upgrades, "Starspawn" prestige upgrade, and "Mind Over Matter" dragon aura all increase the chance to unlock an egg.
Christmas

Main article: Christmas Season
The appearance of wrinklers changes during the Christmas season. Shiny wrinklers don't change.
Achievements
Wrinkler achievements:

Name                                       Description                                              ID
Itchscratcher                              Burst 1 wrinkler.                                        105
Wrinklesquisher [note 1]                   Burst 50 wrinklers.                                      106
Moistburster [note 1]                      Burst 200 wrinklers.                                     107
Last Chance to See (shadow achievement)    Burst the near-extinct shiny wrinkler. "You monster!"    262

[note 1] Unlike most "number of actions" achievements, the Wrinkler and Reindeer achievements are counted in a single game, not all time: ascending will reset the counter.
To help keep track of how many wrinklers you have popped, go to the "Stats" section and look under "Special".

Trivia

In the game's code, next to the code that tells the wrinklers to return 110% of their withered cookies, is the comment "cookie dough does weird things inside wrinkler digestive tracts". The "Moistburster" achievement is a reference to the Alien franchise's Chestburster. Prior to the v1.0453 update, wrinklers disappeared when you closed the Cookie Clicker window or exited the browser. The Wrinkler may be a reference to the Mosquito Larvae or the Wriggler.
Let $U\subseteq \mathbb{C}$ be a simply connected region with $U\neq\mathbb{C}$, and let $a,b\in U$ with $a\neq b$. Is there a biholomorphism $f:U\longrightarrow U$ with $f(a)=b$ and $f(b)=a$?
I know that, by the Riemann mapping theorem, there are unique isomorphisms $h:U\longrightarrow D(0,1)$ with $h(a)=0$, and $g:U\longrightarrow D(0,1)$ with $g(b)=0$. And considering $Id:D(0,1)\longrightarrow D(0,1)$ automorphism, I lead to two cases:
If $g(a)=h(b)$, then I can define $f=g^{-1}\circ Id \circ h $, because $f$ is an isomorphism by composition of isomorphisms, $f(a)=g^{-1}\circ Id \circ h(a)=g^{-1}( h(a))=g^{-1}(0)=b$ and $f(b)=g^{-1}\circ Id \circ h(b)=g^{-1}(h(b))=g^{-1}(g(a))=a$.
The case $g(a)\neq h(b)$ is where I have problems. I think that I must build an automorphism $\phi$ of the disc that takes $h(b)$ into $g(a)$, so that $f=g^{-1}\circ \phi\circ h$.
I also know that the set of automorphisms of the disc is $\{e^{i\theta}\phi_{a} (z)=e^{i\theta}\dfrac{z-a}{1-\bar{a}z},\theta\in\mathbb{R}, a\in D(0,1)\}$.
Is there an automorphism of the disc like that? |
Show that $\sqrt{13}$ is an irrational number.
How does one directly prove that a number is irrational? What is the first step?
Consider the polynomial $x^2-13\color{grey}{\in \mathbb Z[x]}$. The rational root theorem guarantees its roots aren't rational and since $\sqrt {13}$ is a root of the polynomial, it is irrational.
The standard proof that $\sqrt{p}$ is irrational for any prime $p$ is as follows
Let $\sqrt{p} = \frac{m}{n}$ where $m,n\in\mathbb N$ and $m$ and $n$ have no factors in common.
Now $\frac{m^2}{n^2} = p \Rightarrow m^2 = p \cdot n^2$
Since $p$ is prime and $m^2$ is a multiple of $p$ then $m$ is multiple of $p$
So substitute $m = p \cdot k$
Now $\frac{(p \cdot k)^2}{n^2} = \frac{p^2 \cdot k^2}{n^2} = p\Rightarrow n^2 = p \cdot k^2$
Since $p$ is prime and $n^2$ is a multiple of $p$ then $n$ is multiple of $p$
We now have a contradiction, since $m$ and $n$ must have no common factors (except 1), but we have proved that if $\frac{m}{n}$ exists then $m$ and $n$ must have the common factor $p$.
So $\frac{m}{n}$ can not exist and the square root of any prime is irrational.
The equation $m^2=13n^2$ is a direct contradiction to the Uniqueness part of the Fundamental Theorem of Arithmetic, since the left side has evenly many $13$’s, while the right side has oddly many.
You can try it this way:
A number is irrational if and only if its continued fraction expansion is infinite.
Now try to write $\sqrt{13}$ as a continued fraction and you'll see it's periodic:
$$\sqrt{13}=[3;\overline{1,1,1,1,6}]$$
Hope that is what you were looking for.
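The periodicity is easy to verify with the standard integer recurrence for the continued fraction of $\sqrt{n}$ (a sketch added for illustration; exact integer arithmetic, no floating point):

```python
import math

def sqrt_cf(n, terms=12):
    """First `terms` partial quotients of the continued fraction of sqrt(n),
    for n not a perfect square."""
    a0 = math.isqrt(n)
    m, d, a = 0, 1, a0
    cf = [a0]
    for _ in range(terms - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        cf.append(a)
    return cf

print(sqrt_cf(13))   # [3, 1, 1, 1, 1, 6, 1, 1, 1, 1, 6, 1] — period 1,1,1,1,6
```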
As mentioned, one can quickly prove the irrationality of square roots using the Rational Root Test, or
uniqueness of prime factorizations, or other closely related properties such as Euclid's Lemma or Bezout's gcd identity. Below is a simple proof using Bezout that I discovered as a teenager. Theorem $\quad \rm r = \sqrt{n}\;\;$ is integral if rational, $\:$ for $\:\rm n\in\mathbb{N}$ Proof $\ \ $ Note that $\rm\,\ \color{#0a0}{r = a/b},\ \ \gcd(a,b) = 1\ \Rightarrow\ \color{#C00}{ad\!-\!bc \,=\, \bf 1}\;$ for some $\:\rm c,d \in \mathbb{Z}\,\ $ by Bezout.
$\rm\color{#C00}{That\,}$ and $\rm\: r^2\! = \color{orange}{\bf n}\:\Rightarrow\ \color{#0a0}{0\, =\, (a\!-\!br)}\, (c\!+\!dr) \ =\ ac\!-\!bd\color{orange}{\bf n} \:+\: \color{#c00}{\bf 1}\cdot r \ \Rightarrow\ r \in \mathbb{Z}\ \ \ $
QED |
We can look at this in the following way:
Suppose we are doing an experiment where we need to toss an unbiased coin $n$ times. The overall outcome of the experiment is $Y$ which is the summation of individual tosses (say, head as 1 and tail as 0). So, for this experiment, $Y = \sum_{i=1}^n X_i$, where $X_i$ are outcomes of individual tosses.
Here, the outcome of each toss, $X_i$, follows a Bernoulli distribution and the overall outcome $Y$ follows a binomial distribution.
The complete experiment can be thought as a single sample. Thus, if we repeat the experiment, we can get another value of $Y$, which will form another sample. All possible values of $Y$ will constitute the complete population.
Coming back to the single coin toss, which follows a Bernoulli distribution, the variance is given by $pq$, where $p$ is the probability of a head (success) and $q = 1 - p$.
Now let us look at the variance of $Y$. Since the individual tosses are independent, $V(Y) = V(\sum X_i) = \sum V(X_i)$. But for each individual Bernoulli trial, $V(X_i) = pq$. Since there are $n$ tosses or Bernoulli trials in the experiment, $V(Y) = \sum V(X_i) = npq$. This implies that $Y$ has variance $npq$.
Now, the sample proportion is given by $\hat p = \frac Y n$, which gives the 'proportion of successes or heads'. Here, $n$ is a constant, as we plan to take the same number of coin tosses for all the experiments in the population.
So, $V(\frac Y n) = (\frac {1}{n^2})V(Y) = (\frac {1}{n^2})(npq) = pq/n$.
So, the standard error for $\hat p$ (a sample statistic) is $\sqrt{pq/n}$. |
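The derivation can be checked by simulation (my own sketch; `numpy` assumed): draw many complete experiments, compute the sample proportions, and compare their standard deviation with $\sqrt{pq/n}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 100, 0.3, 200_000

# each draw is one full experiment: Y = number of heads in n tosses
Y = rng.binomial(n, p, size=reps)
p_hat = Y / n                      # sample proportions across many experiments

print(p_hat.std())                 # ≈ 0.046, the empirical standard error
print(np.sqrt(p * (1 - p) / n))    # ≈ 0.0458, the theoretical sqrt(pq/n)
```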
I am tired of fractions, you too? Well let’s switch gears and talk about the roots of numbers. This is preparing you for a post or posts on the rules of exponents.
So before, I introduced the concept of the square root and how it is the reverse operation of squaring a number:
\[
\sqrt{25}\hspace{0.33em}{=}\hspace{0.33em}\pm{5} \] because \[ {5}^{2}\hspace{0.33em}{=}\hspace{0.33em}{25} \]. That is \[ \sqrt{{5}^{2}}\hspace{0.33em}{=}\hspace{0.33em}\pm{5} \]. Or in general, \[ \sqrt{{x}^{2}}\hspace{0.33em}{=}\hspace{0.33em}\pm{x} \]. Remember that when taking the square root, there are two solutions since \[ {\left({{-}{5}}\right)}^{2}\hspace{0.33em}{=}\hspace{0.33em}{25} \] as well.
Well, what about the opposite operation to \[
{x}^{3} \]? Well there is one:
\sqrt[3]{{x}^{3}}\hspace{0.33em}{=}\hspace{0.33em}{x}
\]
Notice a few things here. First, there is only one solution. There are no plus/minus solutions because the index (the “3”) of the root is odd and, using the rules of signs, the sign of the odd root of a number will be the same as the sign of the number under the radical symbol. That is:\[
\sqrt[3]{125}\hspace{0.33em}{=}\hspace{0.33em}{5}{,}\hspace{0.33em}\hspace{0.33em}\sqrt[3]{{-}{125}}\hspace{0.33em}{=}\hspace{0.33em}{-}{5}
\]
The other thing to notice is that if there is no index shown, then a “2” is assumed to be there. If the index is a “3”, it is called a cube root. After that, you use ordinal numbers: fourth root, fifth root, etc.
The last thing to notice is if the index is even, then you do get two plus/minus solutions. If the index is odd, you only get one solution.
So in general,
\[
\sqrt[n]{{x}^{n}}\hspace{0.33em}{=}\hspace{0.33em}\pm{x} \] if n is even and
\[
\sqrt[n]{{x}^{n}}\hspace{0.33em}{=}\hspace{0.33em}{x} \] if n is odd.
Examples:\[
\begin{array}{l}
{\sqrt[4]{16}\hspace{0.33em}{=}\hspace{0.33em}\sqrt[4]{{(}\pm{2}{)}^{4}}\hspace{0.33em}{=}\hspace{0.33em}\pm{2}}\\
{\sqrt[3]{{-}{8}}\hspace{0.33em}{=}\hspace{0.33em}\sqrt[3]{{(}{-}{2}{)}^{3}}\hspace{0.33em}{=}\hspace{0.33em}{-}{2}}\\
{\sqrt[5]{32}\hspace{0.33em}{=}\hspace{0.33em}\sqrt[5]{{2}^{5}}\hspace{0.33em}{=}\hspace{0.33em}{2}}\\
{\sqrt[5]{{-}{32}}\hspace{0.33em}{=}\hspace{0.33em}\sqrt[5]{{(}{-}{2}{)}^{5}}\hspace{0.33em}{=}\hspace{0.33em}{-}{2}}\\
{\sqrt[4]{81}\hspace{0.33em}{=}\hspace{0.33em}\sqrt[4]{{(}\pm{3}{)}^{4}}\hspace{0.33em}{=}\hspace{0.33em}\pm{3}}
\end{array}
\]
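A small illustration of the odd/even index behaviour in code (my own sketch; note that for even indices it returns only the positive root and rejects negative inputs, since those roots are not real):

```python
def real_nth_root(x, n):
    """Real n-th root of x; for odd n the sign of x is preserved."""
    if n % 2 == 0 and x < 0:
        raise ValueError("even roots of negative numbers are not real")
    r = abs(x) ** (1.0 / n)
    return -r if x < 0 else r

print(real_nth_root(-8, 3), real_nth_root(32, 5), real_nth_root(81, 4))
# ≈ -2, 2, 3 (for an even index the negative root -3 is also a solution)
```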
In my next post, I’ll introduce some rules regarding exponents. |
Let $c$ be an irrational real number. Let $\{\cdot\}$ be the fractional part operator. I would like to get some sense of how in-the-dark we are about the distribution of values of $\{cn!\}$, for familiar values of $c$. This is related to a previous post which (essentially) asks the question "Does $n!/(2\pi)$ tend to a limit mod 1?"
Here is the question: Can anyone give a value of $c$ which is either algebraic, or a familiar transcendental, or defined in some reasonably simple way using the elementary functions of calculus, such that
$\{cn!\}<1/2$ infinitely often, or $\{cn!\}$ tends to a limit, or The values of $\{cn!\}$ are dense in the interval $[0,1]$.
What do we know about all this? First of all, it is a theorem of P. Diaconis (The Annals of Probability 1977, v5) that $\log(n!)$ is uniformly distributed mod 1. This has the consequence that any sequence of leading (most significant) digits appears infinitely often. This is probably not going to be of any direct help, but it seems like it deserves to be mentioned.
Secondly, and importantly, it is known that for any lacunary sequence of positive integers $a_n$ (meaning that there is a fixed $\rho>1$ such that the inequality $a_{n+1}>\rho a_n$ holds for all large enough $n$) there are real numbers $c$ such that the sequence $\{ca_n\}$ is bounded away from 0 mod 1, and in fact the set of such real numbers has Hausdorff dimension 1. This is trivial to prove for $\rho > 2$, and in fact in this case we can easily choose $c$ (nonconstructively!) to get any of the behaviors described in Items 1, 2 and 3.
For $\rho$ near 1 the above statement about lacunary sequences was an Erdos problem, first solved by B. de Mathan (Acta Math. Hungar. 1980 v36). There is an exposition by Katznelson here.
Since the sequence $n!$ is lacunary (with $\rho$ as large as one wants) we already know that the behaviors described in Items 1-3 occur in abundance. The question is whether we know any specific examples. |
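None of this answers the question, but the quantities involved are easy to explore numerically at high precision (my own sketch using Python's `decimal`; plain floats lose the fractional part of $c\,n!$ almost immediately, and of course no finite computation says anything about the limit behaviour):

```python
from decimal import Decimal, getcontext
from math import factorial

getcontext().prec = 200            # plenty of digits for n! up to n ~ 25

def frac_parts(c, nmax):
    """Fractional parts of c * n! for n = 1..nmax, at high precision."""
    out = []
    for n in range(1, nmax + 1):
        x = c * Decimal(factorial(n))
        out.append(float(x - int(x)))
    return out

ps = frac_parts(Decimal(2).sqrt(), 25)   # try c = sqrt(2)
print(ps[:6])    # e.g. the n = 5 entry is frac(120*sqrt(2)) ≈ 0.70563
```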
Sparsity is an important topic in machine learning. When we build a model, the simplest approach is usually to make no assumptions about its internal structure, and to connect every possible input to every possible output, perhaps with some intermediate representations. This is, for instance, how we ended up with the fully connected network (or multilayer perceptron) as one of the first neural network architectures.
When that fails to work, a good way to add some domain knowledge to the network is to remove some of these connections. For instance, in image analysis, we know that when we build up low-level representations, we can just look at local information in the input, so instead of connecting everything with everything, we take a local neighborhood of pixels and connect it to one hidden unit in the next layer.
In a phrase, we add sparsity to strengthen the inductive bias.
Now, more than 20 years since the first convnet, we are in the age of self-attention: we represent units (words, pixels, graph nodes) with a vector instead of a single value, and we let some function \(\text{att}(\bf{a}, \bf{b})\) determine how much attention \(\bf{a}\) should pay to \(\bf{b}\) when building up its new representation vector for the next layer. Again, potentially everything is connected with everything, although the weights are now derived through attention rather than learned directly.
So when the self-attention models fail to work, or fail to scale, we ask the same question: can we add some sparsity to the model, based on what we know of the domain, to strengthen the inductive bias? Some recent examples are the sparse transformer by Child et al. from OpenAI labs, and the graph attention network, by Veličković et al. In both models, self-attention is used, but only particular units are allowed to attend to one another. Under graph attention, the input graph controls the allowed connections, and in the sparse transformer, a domain-specific connectivity pattern is introduced.
The problem
To build sparse self-attention, and to do so efficiently, we want to store the attention weights in a sparse matrix \(\bf A\). It's no use building a sparse model if we have to keep a dense \(n\times m\) matrix full of zeros just to store the attention weights from \(n\) inputs to \(m\) outputs.
The trouble with storing this matrix sparsely is that we need to compute a row-wise softmax. The weights in each row need to sum to one, so that the new representation for a particular unit becomes a weighted mean of all previous representations.
All this brings us to the topic of this post:
How do we compute a row-wise softmax on a sparse matrix efficiently and stably? Efficient, in this context, means using the basic primitives available for sparse matrices, so that we can rely on existing optimizations for sparse matrix multiplication. Stable means that we can compute the softmax accurately for a wide range of values, without rounding errors exploding into infinity, zero or NaN.
Currently, the standard approach in GATs is to rely on dense matrices, which makes the memory requirements quadratic in the number of nodes in the input. The sparse transformer uses custom CUDA kernels, which seem to rely on the attention matrix being block-sparse rather than fully sparse.
We'll look at three approaches towards implementing a fully sparse softmax using only basic matrix primitives, which we'll call the naive, the \(p\)-norm and the iterative approach. As far as I know the latter two are original methods, but I'm happy to be corrected if somebody's already thought of this in some other context.

notation and terminology
A sparse matrix is a matrix for which most elements have the same value (usually \(0\)), so that it becomes more efficient to store only the elements that deviate from this value. We will call these the explicit elements, and the rest the implicit elements.
A sparse matrix \(\bf A\) with \(k\) explicit elements is defined by three components: a \(k \times 2\) integer matrix \(\bf D\) containing the indices of the explicit elements, a length-\(k\) real-valued vector \(\bf v\) containing the values of the explicit elements, and an integer pair determining the size of the matrix. We will omit the size, to simplify our notation.
We'll use \({\bf A} = M({\bf v}, {\bf D})\) to refer to a sparse matrix as a function of its component parts.
1. The naive approach
The row-wise softmax \(\bf \overline{A}\) for matrix \(\bf A \) is defined as $$ \overline{A}_{\kc{i}\rc{j}} = \frac{ \text{exp}\;A_{\kc{i}\rc{j}} }{ \sum_\gc{k} \text{exp}\;A_{\kc{i}\gc{k}} } $$ where the sum is over all elements in row \(\kc{i}\) of \(\bf A\).
That is, we exp all the values in the matrix, perform a row-wise sum, and then divide \(\text{exp}\;\bf A\) element-wise by the resulting vector (broadcasting along the rows).
A row-wise sum is easy enough to implement using matrix multiplication, all you need is: $$ {\bf M} \cdot {\bf 1}^m $$ where \(\bf{1}^m\) is a length \(m\) vector filled with ones. This gives us the following straightforward algorithm: $$ \begin{aligned} & {\bf E} \leftarrow \text{exp}({\bf A}) \\ & {\bf s} \leftarrow {\bf E} \cdot \bf{1}^m \\ & \bar {\bf A} \leftarrow \frac{{\bf E}}{{\bf s} \cdot {{\bf 1}^n} ^T}\\ \end{aligned} $$ Where the division in the last line is element-wise.
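Here is a sketch of the naive algorithm on top of SciPy's CSR format (my own illustration, not the post's code; the implicit elements stay excluded from the softmax, matching the sparse-attention semantics):

```python
import numpy as np
import scipy.sparse as sp

def sparse_softmax_naive(A):
    """Row-wise softmax over the explicit elements of a sparse matrix A."""
    E = A.tocsr(copy=True)
    E.data = np.exp(E.data)                    # exp only the explicit elements
    s = E @ np.ones(E.shape[1])                # row sums as a matrix-vector product
    rows = np.repeat(np.arange(E.shape[0]), np.diff(E.indptr))
    E.data /= s[rows]                          # normalize each element by its row sum
    return E

rng = np.random.default_rng(0)
A = sp.random(1000, 1000, density=6e-3, format="csr",
              data_rvs=lambda k: rng.normal(0.0, 1.0, size=k))
S = sparse_softmax_naive(A)
row_sums = S @ np.ones(1000)
nonempty = np.diff(S.indptr) > 0
print(np.allclose(row_sums[nonempty], 1.0))    # fine for modest sigma
```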
To see how this algorithm fares, we generate random sparse matrices of size \(1000 \times 1000\), with 6000 explicit elements drawn from a normal distribution \(N(0, \gc{\sigma})\) for increasing \(\gc{\sigma}\). We softmax the matrix by the algorithm above and plot the proportion, out of 40 repeats, of results containing at least one NaN.
If you've ever implemented softmax, this result should come as no surprise. The problem comes from the exponentiation: after this, the values in the matrix have such wildly different scales that, unless they fit a very narrow range, it's likely that either some will overflow to positive infinity (ultimately leading to NaN) or, if they're all negative, the whole sum will underflow to zero (leading to NaN again in the division).
There is a standard solution, closely related to the famous logsumexp trick. We simply subtract a large value \(b_\kc{i}\) from all elements in row \(\kc{i}\) of \( \bf A\).
Working \(-b_\kc{i}\) out of the exp, we see that we get a multiplier \(\text{exp}\,(-b_\kc{i})\) in both the numerator and the denominator, so the formulation is equivalent: $$ \overline{A}_{\kc{i}\rc{j}} = \frac{ \text{exp}(A_{\kc{i}\rc{j}} - b_\kc{i}) }{ \sum_\gc{k} \text{exp}(A_{\kc{i}\gc{k}} - b_\kc{i}) } $$
The trick is to choose \(b_\kc{i}\) large enough that any values that would normally overflow can now be computed. Any values that are much smaller than this will underflow to zero, but they would not have contributed to the sum anyway. The only danger is that we choose \(b_\kc{i}\) so big that _all_ values underflow, and we end up dividing by zero. The foolproof solution is to take the maximum of the values over which we're softmaxing: $$ b_{\kc{i}} = \text{max}_\gc{k} A_{\kc{i}\gc{k}} \text{.} $$ That way, at least one value in the sum will end up \(\text{exp}\;0 = 1\) and anything much smaller will just underflow to zero.
But that's also where the problem arises for our use case. For this to work, we need to compute the row-wise maximum of \(\bf A\). This is a tricky operation. To work out the row-wise maxima of \(\bf A\), we need to group the values \(\bf v\) by row, and then compute the maximum over each group. Since each group will have a different size, this is tough to parallelize. Can we simplify this using the basic primitives available for sparse matrices?
How do we efficiently compute the row-wise maximum of a sparse matrix, when all we can do is matrix multiply and element-wise operations?

2. The \(p\)-norm approach
We will start by using the following identity: $$ \begin{equation} \text{lim}_{\rc{p} \to \infty} \left(\sum_\lbc{i} {x_\lbc{i}}^\rc{p} \right)^{\frac{1}{\rc{p}}} = \text{max} ({\bf x}) \label{eq:limit} \end{equation} $$
That is, the \(\rc{p}\)-norm of \(\bf x\) approaches the maximum element of \({\bf x}\), as \(\rc{p}\) goes to infinity. To see why, consider what happens to the values of \(\bf x\) under application of a convex function:
The \(\lbc{\text{largest element}}\) is increased, proportionally, by more than \(\lgc{\text{the others}}\). In the limit, the contribution of any element other than the maximum disappears from the sum, and all we are doing is exponentiating \(\text{max} ({\bf x})\) by \(p\) and then reversing that exponentiation.
Of course, we can't exponentiate by infinity to compute the max, but maybe a large value is already enough to give us a reasonable \(b_\kc{i}\). After all, we don't need the exact maximum, we just need a close enough value to prevent overflows.
This gives us the following algorithm (with \(\rc{p}\) larger than 1 but not too large): $$ \begin{aligned} &{\bf b} \leftarrow \text{pow}({\bf A}, \rc{p}) \cdot {\bf 1}^m \\ &{\bf b} \leftarrow \text{pow}({\bf b}, 1/\rc{p}) \\ & {\bf \bc{E}} \leftarrow \text{exp}({\bf A} - {\bf b}) \\ & {\bf \gc{s}} \leftarrow {\bf \bc{E}} \cdot \bf{1}^m \\ & \bar {\bf A} \leftarrow \frac{{\bf \bc{E}}}{{\bf \gc{s}} \cdot {{\bf 1}^n} ^T} \\ \end{aligned} $$
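A dense-array sketch of this algorithm (my stand-in for the sparse case; taking `abs()` before the fractional power is an assumption on my part, to keep `pow` well-defined for negative entries):

```python
import numpy as np

def pnorm_rowmax(A, p=8.0):
    # approximate the row-wise max by the row-wise p-norm
    return (np.abs(A) ** p).sum(axis=1) ** (1.0 / p)

def pnorm_softmax(A, p=8.0):
    b = pnorm_rowmax(A, p)[:, None]
    E = np.exp(A - b)                 # b_i >= max_k A_ik, so exp cannot overflow
    return E / E.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
A = rng.normal(0, 50, size=(3, 6))
# the p-norm is always an upper bound on the true max
print((pnorm_rowmax(A) >= np.abs(A).max(axis=1)).all())
print(np.isfinite(pnorm_softmax(A)).all())
```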
Let's see how this method does.
As we can see, we have avoided the problem of the NaNs. Even for matrices with pretty large random values, we get no NaNs. Unfortunately, the second row in the plot shows that the method fails a second basic test: the rows in the result should sum to one (this is the whole point of the softmax operation). When we compute the row sums of the result, and check how far they deviate from 1, we see that the \(p\)-norm method begins to fail much earlier than the naive approach. Changing the value of the exponent \(\rc{p}\) has little effect.
The problem is that the \(p\)-norm approximation tends to overestimate the max. In the limit, the non-max terms don't contribute to the sum anymore, but before we reach the limit, they still contribute, giving us only an upper bound for the max. And if we use a value larger than the max for our \(b_\kc{i}\), we end up pushing all values towards zero. The problem is not severe enough to cause NaNs, but the rounding errors ultimately make the result less useful than the naive approach.

3. The iterative approach
In short, it turns out we do need quite a precise approximation of the maximum if we want to compute the row-wise softmax with a greater degree of accuracy and stability.
Let \(\bf x\) be a vector containing only nonnegative values. Consider the following iteration: $$ {x_\lbc{i}} \leftarrow \frac{{x_\lbc{i}}^2}{\sum_\lgc{k} {x_\lgc{k}}^2} \text{.} $$ That is, we square the elements of \(\bf x\) and then normalize the result to sum to 1.
Under this iteration, \(\bf x\) is always a probability vector (its elements sum to one). This ensures that, so long as the iteration itself is stable, the vector itself should never blow up. Moreover, it turns out that this iteration converges to the one-hot vector for the maximum element of the original \(\bf x\). To see why, consider what happens after two iterations:
That is, after \(n\) iterations, we have computed \( \frac{{x_\lbc{i}}^\rc{p}}{\sum_\lgc{k} {x_\lgc{k}}^\rc{p}} \) with \(\rc{p}=2^n\).
As \(\rc{p}\) goes to infinity, this converges to a one-hot vector for the maximum of the initial \(\bf x\) (scroll down for a proof).
If we compute this one-hot vector, multiply it by the original values of \(\bf x\) and sum, we get the maximum of \(\bf x\).
To use this method for a row-wise softmax, we need to make all values positive first. We have two options.
Take the softplus() over all values. Subtract the minimum of the whole matrix from all values.
Both of these options are standard dense-matrix operations that we can apply directly to the elements of \(\bf v\). The first option seems to work best in our experiments.
This brings us to the following algorithm to compute a row-wise max for a sparse matrix.
$$ \begin{aligned} & \textbf{function}\;\text{max}({\bf v}, {\bf D}, \rc{p}, k): \\ & \hspace{1cm} \text{given: a sparse matrix $\bf A$ with values $\bf v$ at indices $\bf D$,}\\ & \hspace{1cm} \text{an exponent $p$, and a number of iterations $k$} \\ & \hspace{1cm}{\bf h} \leftarrow \text{softplus}({\bf v}) \\ & \hspace{1cm}\textbf{for } i \in [0..k]: \\ & \hspace{2cm} {\bf h} \leftarrow \text{pow}(h, \rc{p}) \\ & \hspace{2cm} {\bf s} \leftarrow {\bf M}({\bf h}, {\bf D}) \cdot {\bf 1}^m \\ & \hspace{2cm} {\bf h} \leftarrow {\bf h} / \text{select}({\bf s}, {\bf D}[0, :]) \\ & \hspace{1cm}\textbf{return } {\bf v} \otimes {\bf h} \\ \end{aligned} $$
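A minimal dense sketch of the same iteration with exponent \(p=2\) (my own stand-in, which ignores the sparse index bookkeeping \(\bf D\) and the select operation):

```python
import numpy as np

def iterative_max(v, k=20):
    # softplus makes all values positive while preserving the argmax
    h = np.logaddexp(0.0, v)          # log(1 + exp(v)), numerically stable
    for _ in range(k):                # h <- h^2 / sum(h^2): converges to a
        h = h ** 2                    # one-hot vector on the maximal entry
        h = h / h.sum()
    return float(v @ h)               # one-hot . v picks out ~max(v)

v = np.array([0.3, -1.2, 2.5, 2.1])
print(iterative_max(v))               # close to 2.5
```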
After computing the maxima, the softmax can be computed in the same manner as in the \(p\)-norm approach.
We can now show all approaches together. We'll add 2 additional metrics:
The RMSE of the non-zero values compared to computation of the softmax on a dense version of the matrix. The mean absolute error of just the estimated row-wise max (only for the \(p\)-norm and iterative approaches).
We run the iterative approach for 20 iterations, with exponents 2 and 1.1.
The iterative approach seems to do the trick. It avoids NaNs, and the rows sum to one for greater variances than either the naive or the \(p\)-norm approaches. We can expect reasonable stability up to a variance of 40. Even better stability may be possible by reducing \(\rc{p}\) further, and increasing the number of iterations, but if \(\rc{p}\) gets too close to 1, stability will suffer again.
Conclusion
Is this useful? Iterative approaches are expensive in deep learning systems (every iteration is stored in the computation graph), and there are other approaches that may work, depending on the situation: for shallow models, careful initialization may be enough for the naive approach to work, and batch normalization in between layers can probably ameliorate the problem as well.
There may even be a way to compute the row-wise max more directly (perhaps with a custom CUDA kernel). Nevertheless, this approach uses only existing API calls, which means it can be implemented in a decoupled way (for instance, the implementation is the same for GPU and CPU computation).
Whether this allows us to build deeper GATs and sparse transformers, and whether these allow for better performance than was previously possible, remains to be seen. I may investigate this in a future blog post.
Finally, we should note that most of these issues arise from the default use of the softmax function to parametrize a categorical distribution. Other approaches exist that may be just as practical and much more stable. Simply taking the absolute values of the inputs and normalizing may work just as well. The softmax approach seems to be inspired by statistical mechanics with little requirement for numerical precision. This is very similar to the sigmoid activation, which was ultimately replaced by the more linear ReLU, at least for internal activations. Perhaps a similar approach is called for when it comes to the softmax activation.
Appendix: proof of convergence Lemma. Let \(\bf x \in {\mathbb{R}_+}^n\). Assume that \(\bf x\) has \(n'\) maximal elements; that is, there are \(n'\) elements \(x_\lbc{j}\) such that \(x_\lbc{j} = \text{max}({\bf x})\). Then $$\text{lim}_{\rc{p} \to \infty} \frac{ {x_\lrc{i}}^\rc{p} }{ \sum_\gc{k} {x_\gc{k}}^\rc{p} } = \begin{cases} \frac{1}{n'} &\text{ if } x_\lrc{i} = \text{max}({\bf x})\\ 0 & \text{otherwise.}\end{cases}$$ Proof. Let \(x_\lrc{i} \lt \text{max}({\bf x})\). Then$$0 \leq \frac{ {x_\rc{i}}^\rc{p} }{ \sum_\gc{k} {x_\gc{k}}^\rc{p} } \leq \frac{ {x_\rc{i}}^\rc{p} }{ \text{max}({\bf x})^ \rc{p} } \leq \left ( \frac{ {x_\rc{i}} }{ \text{max}({\bf x}) } \right ) ^\rc{p} \text{.}$$Since the right hand side goes to zero as \(\rc{p}\) goes to infinity, the limit goes to zero.
Let \(x_\lbc{j} = \text{max}({\bf x})\). Then $$\frac{1}{n} = \frac{\text{max}({\bf x})^\rc{p}}{n \cdot \text{max}({\bf x}) ^ \rc{p}} \leq \frac{{x_\lbc{j}}^\rc{p}}{\sum_\gc{k} {x_\gc{k}} ^ \rc{p}} \leq \frac{\text{max}({\bf x})^\rc{p}}{n' \cdot \text{max}({\bf x}) ^ \rc{p}} \leq \frac{1}{n'} \;\;\;\;\text{.} $$ In other words, \({x_\lbc{j}}^\rc{p}/\sum {x_\gc{k}}^\rc{p} \in \left [\frac{1}{n}, \frac{1}{n'} \right] \) for all \(\rc{p}\). Since the sum over all elements of \(\bf x\) must equal one, and the limit goes to zero for the non-maximal elements \(x_\lrc{i}\), the limit for the maximal elements \(x_\lbc{j}\) must go to \(\frac{1}{n'}\).
I have a matrix function $A$ (size 3x3) and a vector function $v$ (size 3x1) that I calculate with a matrix-vector multiplication $B(x)e(x)$, where $B(x)$ is a 3x3 matrix function and $e(x)$ a 3x1 vector function. Applying the convolution $A*v$ involves many operations:
$$ (A(x)*\underbrace{(B(x)e(x))}_{ = v(x)})_{ij} = \begin{bmatrix}A_{11}(x) &A_{12}(x)&A_{13}(x)\\A_{21}(x)&A_{22}(x)&A_{23}(x)\\A_{31}(x)&A_{32}(x)&A_{33}(x)\end{bmatrix} * \begin{bmatrix}v_{11}(x)\\v_{21}(x)\\v_{31}(x)\end{bmatrix}\\ = \sum_{m=0}^{2}\sum_{n=0}^{2} A_{mn} v_{i-m, j-n} $$
For example for one element we have to calculate
$$ (A(x)*v(x))_{32} = A_{00}v_{33} + A_{01}v_{32} + A_{02}v_{31}\\ + A_{10}v_{23} + A_{11}v_{22} + A_{12}v_{21} \\+ A_{20}v_{13} + A_{21}v_{12} + A_{22}v_{11} $$
I wonder whether I can apply the convolution theorem to this problem. If yes, does the convolution between A and v become a matrix-vector multiplication in the Fourier domain: $$ DFT(A(x)*v(x)) = DFT(A(x))\cdot DFT(v(x))? $$ I could not find anything on this after searching in books and on the internet. Another question: I read that in 1D the convolution theorem becomes advantageous if you have sequences of size bigger than 100. For my problem it is commented that this problem is too small. What if I have a discrete set for my variable x, let's say consisting of more than 100 elements, and I have to do this matrix calculation for more than 100 x? Would there be a computational cost difference with and without applying the convolution theorem?
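For the 1-D variable x (with circular convolution on N samples), the convolution theorem does carry over component-wise: at each frequency, the DFT of the result is the matrix-vector product of the per-frequency DFTs. A quick numerical check of this claim (a sketch for the 1-D circular case only, not for the 2-D index structure in the formulas above):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
A = rng.normal(size=(N, 3, 3))        # matrix function A(x), sampled at N points
v = rng.normal(size=(N, 3))           # vector function v(x) = B(x) e(x)

# direct circular convolution: (A*v)(x) = sum_y A(y) v(x - y)
direct = np.zeros((N, 3))
for x in range(N):
    for y in range(N):
        direct[x] += A[y] @ v[(x - y) % N]

# Fourier route: matrix-vector product at each frequency, then inverse DFT
Ahat = np.fft.fft(A, axis=0)
vhat = np.fft.fft(v, axis=0)
fourier = np.fft.ifft(np.einsum('fij,fj->fi', Ahat, vhat), axis=0).real

print(np.allclose(direct, fourier))   # True
```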
P.S.: I know that the mathematical formulations are not very accurate but I wanted to keep it as simple as possible. I hope you readers understand. |
If a Half of a Group are Elements of Order 2, then the Rest form an Abelian Normal Subgroup of Odd Order
Problem 575
Let $G$ be a finite group of order $2n$. Suppose that exactly a half of $G$ consists of elements of order $2$ and the rest forms a subgroup. Namely, suppose that $G=S\sqcup H$, where $S$ is the set of all elements of order $2$ in $G$, and $H$ is a subgroup of $G$. The cardinalities of $S$ and $H$ are both $n$.
Then prove that $H$ is an abelian normal subgroup of odd order.
Since the index of $H$ in $G$ is $2$, the subgroup $H$ is normal. Also, the order of $H$ must be odd, since otherwise $H$ would contain an element of order $2$ (by Cauchy's theorem). So it remains to prove that $H$ is abelian.
Let $a\in S$ be an element of order $2$.As $a\notin H$, the left coset $aH$ is different from $H$.Since the index of $H$ is $2$, we have $aH=G\setminus H=S$.So for any $h\in H$, the order of $ah$ is $2$.
It follows that we have for any $h\in H$\[e=(ah)^2=ahah,\]where $e$ is the identity element in $G$.
Equivalently, we have\[aha^{-1}=h^{-1} \tag{*}\]for any $h\in H$.(Remark that $a=a^{-1}$ as the order of $a$ is $2$.)
Using this relation, for any $h, k \in H$, we have\begin{align*}(hk)^{-1}&\stackrel{(*)}{=} a(hk)a^{-1}\\&=(aha^{-1})(aka^{-1})\\&\stackrel{(*)}{=}h^{-1}k^{-1}=(kh)^{-1}.\end{align*}
As a result, we obtain $hk=kh$ for any $h, k$.Hence the subgroup $H$ is abelian.
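As an illustration (not part of the original post), the statement can be brute-force checked on the smallest example, $G = S_3$: the three transpositions have order $2$, and the remaining half $H = A_3$ is abelian, normal, and of odd order.

```python
from itertools import permutations

G = list(permutations(range(3)))                # elements of S_3 as tuples

def compose(p, q):                              # (p . q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def order(p):
    e, r, n = tuple(range(3)), p, 1
    while r != e:
        r, n = compose(p, r), n + 1
    return n

S = [g for g in G if order(g) == 2]             # the three transpositions
H = [g for g in G if g not in S]                # identity and two 3-cycles
assert len(S) == len(H) == 3                    # exactly half of order 2
# H is closed and abelian...
assert all(compose(h, k) in H and compose(h, k) == compose(k, h)
           for h in H for k in H)
# ...and normal: g h g^-1 stays in H for every g in G
inv = {g: next(p for p in G if compose(g, p) == tuple(range(3))) for g in G}
assert all(compose(compose(g, h), inv[g]) in H for g in G for h in H)
print("checks pass for S_3")
```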
Let $f$ be an irreducible polynomial of degree $q$ over $\mathbb{F}_p$. Let ${\bf F}=\frac{\mathbb{F}_p[x]}{f}$ be the finite field which contains $p^q$ elements. Assume $k>1$ is an integer and let ${\bf R}=\frac{\mathbb{F}_p[x]}{f^k}$ be the quotient ring whose elements have coefficients from $\mathbb{F}_p$ and whose multiplication is reduced modulo $f^k$.
Let $\bf A$ be an $n \times m$ matrix such that the elements of $\bf A$ are polynomials over $\mathbb{F}_p[x]$. Suppose that the minimum number of linearly dependent columns of the matrix $\bf A$ over ${\bf F}$ and ${\bf R}$ are denoted with ${\operatorname{MD}}_F$ and ${\operatorname{MD}}_R$, respectively.
My question:
How can one prove that if ${\operatorname{MD}}_F=r$ for some positive integer $1\leq r \leq m$, then ${\operatorname{MD}}_R \geq r$?
Furthermore, is it possible to impose conditions on $\bf A$ such that we have ${\operatorname{MD}}_F=r$ and ${\operatorname{MD}}_R > r$?
In practice, I work with $p=2$.
Thanks for any help. |
Fluid Mechanics - Dec 2016
Mechanical Engg (Semester 3)
TOTAL MARKS: 100
TOTAL TIME: 3 HOURS (1) Question 1 is compulsory. (2) Attempt any four from the remaining questions. (3) Assume data wherever required. (4) Figures to the right indicate full marks.
Solve any one question from Q.1(a,b) and Q.2(a,b) 1(a) Distinguish between: i) Simple manometer and differential manometer ii) Real fluids and ideal fluids iii) Specific weight and specific volume.(6 marks) 1(b) Determine the total pressure and centre of pressure on an isosceles triangular plate of base 6 m when it is immersed vertically in an oil of sp. gr. 0.8. Take the altitude as 4 m; the base of the plate coincides with the free surface of the oil.(6 marks) 2(a) State and prove Pascal's law.(6 marks) 2(b) The stream function for a two dimensional flow is given by ψ = 8xy. Calculate the velocity at the point P(4,5). Find also the velocity potential function φ.(6 marks)
Solve any one question from Q.3(a,b) and Q.4(a,b) 3(a) State Bernoulli's theorem for steady flow of an incompressible fluid. Derive an expression for Bernoulli's theorem from first principles and state the assumptions made for such a derivation.(6 marks) 3(b) Find the discharge of water flowing through a pipe of 30 cm diameter placed in an inclined position, where a venturimeter is inserted having a throat diameter of 18 cm. The difference of pressure between the main and the throat is measured by a liquid of sp. gr. 0.7 in an inverted U-tube which gives a reading of 30 cm. The loss of head between the main and the throat is 0.2 times the kinetic head of the pipe.(6 marks) 4(a) State the operating principle of the pitot tube and derive the equation for measurement of velocity at any point with it.(6 marks) 4(b) Water is flowing through a tapered pipe of length 80 m, having diameters 600 mm at the upper end and 400 mm at the lower end, at the rate of 50 litres/second. The pipe has a slope of 1 in 30. Find the pressure at the lower end if the pressure at the higher level is 20.72 N/cm².(6 marks)
Solve any one question from Q.5(a,b) and Q.6(a,b) 5(a) A pipe line of length 2000 m is used for power transmission. If 110.3625 kW of power is to be transmitted through the pipe, in which water having a pressure of 490.5 N/cm² at inlet is flowing, find the diameter of the pipe and the efficiency of transmission if the pressure drop over the length of the pipe is 98.1 N/cm². Take f = 0.0065.(7 marks) 5(b) Define and explain the terms: i) Hydraulic gradient line ii) Total energy line.(6 marks) 6(a) Using Buckingham's π theorem, show that the velocity through a circular orifice is given by: $V=\sqrt{2gH}\,\phi\left [ \frac{D}{H},\frac{\mu }{\rho VH} \right ]$ where H is the head causing flow, D is the diameter of the orifice, μ is the coefficient of viscosity, ρ is the mass density and g is the gravitational acceleration.(7 marks) 6(b) An old water supply distribution pipe of 250 mm diameter of a city is to be replaced by two parallel pipes of smaller equal diameter having equal lengths and identical friction factor values. Find out the new diameter required.(6 marks)
Solve any one question from Q.7(a,b) and Q.8(a,b) 7(a) Find the difference in drag force exerted on a flat plate of size 2 m × 2 m when the plate is moving at a speed of 4 m/s normal to its plane in: i) water ii) air of density 1.24 kg/m³. The coefficient of drag is given as 1.15.(7 marks) 7(b) Define the terms: i) Lift ii) Drag iii) Angle of attack iv) Camber.(6 marks) 8(a) Define displacement thickness and momentum thickness. Derive an expression for displacement thickness.(6 marks) 8(b) Find the displacement thickness, the momentum thickness and the energy thickness for the velocity distribution in the boundary layer given by:$$\frac{u}{U}=2\left ( \frac{y}{\delta } \right )-\left ( \frac{y}{\delta } \right )^2.$$(7 marks)
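For reference, the profile in Q.8(b) admits closed-form answers; a worked sketch (my own computation, not part of the question paper), writing \(\eta = y/\delta\):

$$\delta^* = \int_0^\delta \left(1-\frac{u}{U}\right)dy = \delta\int_0^1 \left(1-2\eta+\eta^2\right)d\eta = \frac{\delta}{3},$$

$$\theta = \int_0^\delta \frac{u}{U}\left(1-\frac{u}{U}\right)dy = \delta\int_0^1 \left(2\eta-\eta^2\right)\left(1-2\eta+\eta^2\right)d\eta = \frac{2\delta}{15},$$

$$\delta_E = \int_0^\delta \frac{u}{U}\left(1-\frac{u^2}{U^2}\right)dy = \delta\int_0^1 \left(2\eta-\eta^2\right)\left(1-\left(2\eta-\eta^2\right)^2\right)d\eta = \frac{22\delta}{105}.$$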
I need to prove that every compact metric space is complete. I think I need to use the following two facts:

1. A set $K$ is compact if and only if every collection $\mathcal{F}$ of closed subsets with the finite intersection property has $\bigcap\{F:F\in\mathcal{F}\}\neq\emptyset$.
2. A metric space $(X,d)$ is complete if and only if for any sequence $\{F_n\}$ of non-empty closed sets with $F_1\supset F_2\supset\cdots$ and $\text{diam}~F_n\rightarrow0$, $\bigcap_{n=1}^{\infty}F_n$ contains a single point.
I do not know how to arrive at my result that every compact metric space is complete. Any help?
Thanks in advance. |
User:Andrey Shilnikov/Proposed/Multi-stability in neuronal models
In preparation
The ability of distinct anatomical circuits to generate multiple patterns of rhythmic activity is widespread among vertebrate and invertebrate species. These patterns correspond to different locomotor behaviours. For example, swimming and struggling behaviour of the tadpole are controlled by two very different activity patterns of the spinal locomotor generator. In computational neuroscience, such multifunctional neuronal networks are modelled by multistable activity dynamics and modular organization of neural circuits. Each state corresponds to an attractor co-existing in the phase space of the model describing the neuronal activity; transitions between these states, triggered by external perturbations, are represented by switching between the corresponding attraction basins. The neurobiological evidence suggests multifunctional circuits can be classified by distinct architectures, yet the activity patterns of individual neurons involved in more than one behaviour can vary drastically. Several mechanisms, including external (e.g. sensory) input, the activity of projection neurons, neuromodulation, and biomechanics, are responsible for the switching between patterns. This complexity of neuronal dynamics adds potential flexibility to the nervous system, and multi-stability has many implications for motor control. Recent advances in both experimental techniques and modelling tools form a basis for studying these complex circuits. Understanding generic mechanisms of evolution of neuronal connectivity and transitions between different patterns of neural activity is a fundamental problem of neurobiology.
Multistability remains an intriguing phenomenon for neuroscience. The functional roles of multistability in neural systems are not well understood. On the other hand, its pathological roles are widely discussed in recent studies. Knowledge of the dynamic mechanisms supporting multistability opens new horizons for medical applications. It has been intensively targeted in the search for new treatments of medical conditions caused by malfunction of the dynamics of the CNS. Sudden infant death syndrome, epilepsy and Parkinson's disease are examples of such conditions. Recent progress in the modern technology of computer-brain interfaces based on real-time systems makes it possible to utilize complex feedback stimulation algorithms suppressing a pathological regime co-existing with the normal one.
It still remains a challenge to identify the scenarios leading to multistability in neuronal dynamics, and to discuss the potential roles of multistability in the operation of the central nervous system under normal and pathological conditions. Multistability of neuronal regimes of activity has been studied by combining theoretical and experimental approaches since the pioneering works of Rinzel (1978) and Guttman et al. (1980). Multistability means the co-existence of attractors separated in the phase space describing the state of the neuronal system. Multistability is actively studied at both the cellular and the network level. On the cellular level, it is the co-existence of different basic regimes such as bursting, spiking, sub-threshold oscillations and silence. On the network level, examples of multistability include the co-existence of different synchronization modes, "on" and "off" states, and polyrhythmic oscillations. Generically, one can design stimulation procedures which will switch the system between regimes. Multistability also suggests the presence of hysteresis in the response to neuromodulation.
The complexity of neuronal dynamics originates from the dynamical diversity of ionic and synaptic currents, which can be separated by different time scales; multistability of neuronal dynamics can be described within the framework of the analytical and computational methods of the qualitative theory of slow-fast systems and bifurcation theory. It remains a challenge for the interdisciplinary scientific community to identify all possible bifurcations giving rise to bursting and other regimes of complex dynamics, and to describe all possible scenarios supporting the co-existence of different regimes of activity.
Types of multistability
Neurons are observed in one of four fundamental, generally defined modes: silence, sub-threshold oscillations, spiking and bursting. The co-existence of these modes has been observed in modeling and experimental studies of single neurons and neuronal networks. Such multi-stability may be controlled by neuromodulators and thus reflect the functional state of the nervous system. This complexity adds potential flexibility to the nervous system; and multi-stability has many implications for motor control, dynamical memory, information processing, decision making and pathological conditions in central nervous system.
Bursting and Quiescence
Tonic spiking and Quiescence
Tonic spiking and Tonic Spiking
Tonic spiking and Bursting
Canonical leech heart interneuron model
Reduced leech heart interneuron model
Reduced oscillatory heart interneuron model [A. Shilnikov and Cymbalyuk, 2005]: \[\tag{1} \mathrm{\dot V} = \mathrm{-2\,[30\, m^2_{K2} (V+0.07)+8\,(V+0.046)}+ \mathrm{200\, f^3_{\infty}(-150,\,0.0305\,,V) h_{Na}\,(V-0.045)}+0.0060]\ ,\]
\(\mathrm{\dot h_{Na}} = \mathrm{[f_{\infty}(500,\,0.0325,\,V)-h_{Na}]/0.0406}\ ,\)
\[\mathrm{\dot m_{K2}} =\mathrm{[f_{\infty}(-83,V_{\frac{1}{2}}+V_{K2}^{shift},V)-m_{K2}]/0.9}\ ,\] where \(\mathrm{V}\) is the membrane potential, \(\mathrm{h}_{\rm Na}\) is inactivation of the fast sodium current, and \(\mathrm{m}_{\rm K2}\) is activation of persistent potassium one; a Boltzmann function \(\mathrm{f_{\infty}(a,b,V)=1/(1+e^{a(b+V)})}\) describes kinetics of (in)activation of the currents. The bifurcation parameter \(\mathrm{V^{shift}_{K2}}\) is a deviation from the canonical value \(\mathrm{V_{\frac{1}{2}}}=0.018\)V corresponding to \(f_{\infty}=1/2\ ,\) i.e. to the semi-activated potassium channel. |
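The three equations of model (1) are easy to integrate numerically. The sketch below (my own illustration; the step size and initial state are guesses, not values from the article) uses a plain Euler scheme to show the trajectory staying in a physiological voltage range:

```python
import math

def f_inf(a, b, V):
    # Boltzmann (in)activation kinetics: f_inf(a, b, V) = 1 / (1 + e^{a(b+V)})
    return 1.0 / (1.0 + math.exp(a * (b + V)))

def rhs(V, h, m, V_shift=0.0):
    # right-hand side of the reduced heart interneuron model (1)
    dV = -2.0 * (30.0 * m**2 * (V + 0.07)
                 + 8.0 * (V + 0.046)
                 + 200.0 * f_inf(-150.0, 0.0305, V)**3 * h * (V - 0.045)
                 + 0.0060)
    dh = (f_inf(500.0, 0.0325, V) - h) / 0.0406
    dm = (f_inf(-83.0, 0.018 + V_shift, V) - m) / 0.9
    return dV, dh, dm

V, h, m = -0.045, 0.9, 0.2            # assumed initial state (volts, gating)
dt = 1e-5
for _ in range(100_000):              # one second of model time
    dV, dh, dm = rhs(V, h, m)
    V, h, m = V + dt * dV, h + dt * dh, m + dt * dm
print(-0.1 < V < 0.1)                 # membrane potential stays bounded
```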
This article presents simple tutorial code from the SDALGCP package for making inference on spatially aggregated disease count data when one assumes that the disease risk is spatially continuous. There are two main functions provided by the package: one for parameter estimation and one for prediction.
where \(y_{i}\) and \(d_{i}\) are the number of reported cases and a vector of explanatory variables associated with the \(i\)-th region \(\mathcal{R}_{i}\), respectively. Hence, we model \(y_{i}\) conditional on the stochastic process \(S(X)\) as Poisson distributed with mean \(\lambda_i= m_{i} \exp\{d_{i}\beta^* + S_{i}^*\}\). Then we assume that \(S^* \sim MVN(0, \Sigma)\), where \[\Sigma_{ij} = \sigma^2 \int_{\mathcal{R}_{i}} \int_{\mathcal{R}_{j}} w_i(x) w_j(x') \: \rho(\|x-x'\|; \phi) \: dx \: dx',\] where \(w(x)\) is the population density weight. There are two classes of models in this package: one in which we approximate \[S_i^* = \int_{\mathcal{R}_{i}} w_i(x) S^*(x) \: dx \] and the other in which \[S_i^* = \frac{1}{|\mathcal{R}_{i}|} \int_{\mathcal{R}_{i}} S^*(x) \: dx. \]

## Inference

We used Monte Carlo maximum likelihood for inference. The likelihood function for this class of models is usually intractable, hence we approximate it as \[\frac{1}{N}~ \sum_{j=1}^N~\frac{f(\eta_{(j)}; \psi)}{f(\eta_{(j)}; \psi_0)},\] where \(\psi\) is the vector of the parameters.
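To see why this Monte Carlo average works, here is a toy illustration (a Gaussian stand-in density of my own choosing, not the SDALGCP model): averaging the density ratios over samples drawn under \(\psi_0\) recovers the ratio of normalizing constants of the two likelihoods.

```python
import math, random

def f_unnorm(eta, psi):
    # unnormalized Gaussian density exp(-psi * eta^2 / 2) -- a toy stand-in
    return math.exp(-0.5 * psi * eta ** 2)

random.seed(42)
psi0, psi, N = 1.0, 2.0, 100_000
# draw eta_(j) under psi_0, i.e. N(0, 1/psi0)
samples = [random.gauss(0.0, 1.0 / math.sqrt(psi0)) for _ in range(N)]
# Monte Carlo average of the ratios f(eta; psi) / f(eta; psi_0)
est = sum(f_unnorm(e, psi) / f_unnorm(e, psi0) for e in samples) / N
true = math.sqrt(psi0 / psi)          # exact Z(psi)/Z(psi0) for these Gaussians
print(abs(est - true) < 0.05)
```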
This vignette walks you through how to analyse spatial and spatio-temporal datasets using the package. Two illustrative examples are provided: an application to primary biliary cirrhosis in Newcastle upon Tyne, UK (static spatial case) and lung cancer mortality in Ohio, USA (spatio-temporal case).
This part illustrates how to fit an SDALGCP model to spatially aggregated data. We used the example dataset that is supplied in the package.
load the package
require(SDALGCP)
load the data
data("PBCshp")
extract the dataframe containing data from the object loaded
data <- as.data.frame(PBCshp@data)
load the population density raster
data("pop_den")
set any population density that is NA to zero
pop_den[is.na(pop_den[])] <- 0
write a formula of the model you want to fit
FORM <- X ~ propmale + Income + Employment + Education + Barriers + Crime + Environment + offset(log(pop))
Now we proceed to fitting the model. Note that there are two types of model that can be fitted: one in which we approximate the intensity of the LGCP by taking the population-weighted average, and the other by taking the simple average. We shall consider both cases in this tutorial, starting with the population-weighted one, since we have population density on a raster grid of 300m by 300m.
Here we estimate the parameters of the model
Discretise the value of scale parameter \(\phi\)
phi <- seq(500, 1700, length.out = 20)
estimate the parameter using MCML
my_est <- SDALGCPMCML(data=data, formula=FORM, my_shp=PBCshp, delta=200, phi=phi, method=1, pop_shp=pop_den, weighted=TRUE, par0=NULL, control.mcmc=NULL)
To print the summary of the parameter estimates as well as the confidence interval, use;
summary(my_est)
# and for the confidence interval use
confint(my_est)
We create a function to compute the confidence interval of the scale parameter using the deviance method. It also provides the deviance plot.
phiCI(my_est, coverage = 0.95, plot = TRUE)
Having estimated the parameters of the model, one might be interested in area-level inference or spatially continuous inference.
Dis_pred <- SDALGCPPred(para_est=my_est, continuous=FALSE)
From this discrete inference one can map either the region-specific incidence or the covariate adjusted relative risk.
# to map the incidence
plot(Dis_pred, type="incidence", continuous = FALSE)
# and its standard error
plot(Dis_pred, type="SEincidence", continuous = FALSE)
# to map the covariate adjusted relative risk
plot(Dis_pred, type="CovAdjRelRisk", continuous = FALSE)
# and its standard error
plot(Dis_pred, type="SECovAdjRelRisk", continuous = FALSE)
# to map the exceedance probability that the covariate-adjusted relative risk is greater than a particular threshold
plot(Dis_pred, type="CovAdjRelRisk", continuous = FALSE, thresholds=3.0)
Con_pred <- SDALGCPPred(para_est=my_est, cellsize = 300, continuous=TRUE)
Then we map the spatially continuous covariate adjusted relative risk.
# to map the covariate adjusted relative risk
plot(Con_pred, type="relrisk")
# and its standard error
plot(Con_pred, type="SErelrisk")
# to map the exceedance probability that the relative risk is greater than a particular threshold
plot(Con_pred, type="relrisk", thresholds=1.5)
As for the unweighted model, which typically takes the simple average of the intensity of an LGCP, the entire code of the weighted case can be used by just setting weighted=FALSE, as in the line below.
my_est <- SDALGCPMCML(data=data, formula=FORM, my_shp=PBCshp, delta=200, phi=phi, method=1, weighted=FALSE, plot=FALSE, par0=NULL, control.mcmc=NULL, messages = TRUE, plot_profile = TRUE)
Download the dataset
require(rgdal)
require(sp)
require(spacetime)
ohiorespMort <- read.csv("https://raw.githubusercontent.com/olatunjijohnson/dataset/master/OhioRespMort.csv")
download.file("https://github.com/olatunjijohnson/dataset/raw/master/ohio_shapefile.zip", "ohio_shapefile.zip")
unzip("ohio_shapefile.zip")
ohio_shp <- rgdal::readOGR("ohio_shapefile/","tl_2010_39_county00")
ohio_shp <- sp::spTransform(ohio_shp, sp::CRS("+init=epsg:32617"))
Create a spacetime object as an input of the spatio-temporal SDALGCP model
m <- length(ohio_shp)
TT <- 21
Y <- ohiorespMort$y
X <- ohiorespMort$year
pop <- ohiorespMort$n
E <- ohiorespMort$E
data <- data.frame(Y=Y, X=X, pop=pop, E=E)
formula <- Y ~ X + offset(log(E))
phi <- seq(10, 300, length.out = 10)
control.mcmc <- controlmcmcSDA(n.sim=10000, burnin=2000, thin=80, h=1.65/((m*TT)^(1/6)), c1.h=0.01, c2.h=0.0001)
time <- as.POSIXct(paste(1968:1988, "-01-01", sep = ""), tz = "")
st_data <- spacetime::STFDF(sp = ohio_shp, time = time, data = data)
Plot the spatio-temporal count data
spacetime::stplot(st_data[,,"Y"])
Parameter estimation
model.fit <- SDALGCPMCML_ST(formula=formula, st_data = st_data, delta=800, phi=phi, method=2, pop_shp=NULL, kappa=0.5, weighted=FALSE, par0=NULL, control.mcmc=control.mcmc, plot=TRUE, plot_profile=TRUE, rho=NULL, giveup=50, messages=TRUE)
summary(model.fit)
Area-level of the spatio-temporal prediction
dis_pred <- SDALGCPPred_ST(para_est = model.fit, continuous = FALSE)
Plotting the area-level incidence and the covariate adjusted relative risk
plot(dis_pred, type="CovAdjRelRisk", main="Relative Risk", continuous=FALSE)
plot(dis_pred, type="incidence", main="Incidence", continuous=FALSE)
Spatially continuous prediction of the covariate adjusted relative risk
con_pred <- SDALGCPPred_ST(para_est = model.fit, cellsize = 2500, continuous=TRUE, n.window = 1)
Plotting the spatially continuous covariate-adjusted relative risk
plot(con_pred, type="relrisk", continuous=TRUE)
Using the SDALGCP package for analysis of spatially aggregated data provides two main advantages. First, it allows the user to make spatially continuous inference irrespective of the level of aggregation of the data. Second, it is more computationally efficient than the LGCP model for aggregated data implemented in existing packages.
If the Images of Vectors are Linearly Independent, then They Are Linearly Independent
Problem 62
Let $T: \R^n \to \R^m$ be a linear transformation.Suppose that $S=\{\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k\}$ is a subset of $\R^n$ such that $\{T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k) \}$ is a linearly independent subset of $\R^m$. Prove that the set $S$ is linearly independent.
Vectors $\mathbf{x}_1, \mathbf{x}_2,\dots, \mathbf{x}_k$ are linearly independentif and only if the only solution to the vector equation\[c_1\mathbf{x}_1+c_2\mathbf{x}_2+\cdots+c_k\mathbf{x}_k=\mathbf{0}_n\]is $c_1=c_2=\cdots=c_k=0$.
A linear transformation $T:\R^n \to \R^m$ is a map such that
$T(\mathbf{v}+\mathbf{w})=T(\mathbf{v})+T(\mathbf{w})$ for all $\mathbf{v}, \mathbf{w} \in \R^n$.
$T(c\mathbf{v})=cT(\mathbf{v})$ for all $c \in \R$ and $\mathbf{v}\in \R^n$.
Proof.
Consider a linear combination of vectors in $S$\[c_1\mathbf{x}_1+c_2\mathbf{x}_2+\cdots+c_k\mathbf{x}_k=\mathbf{0}_n,\]where $\mathbf{0}_n$ is the $n$ dimensional zero vector.To show that $S$ is linearly independent, we need to show that the coefficients $c_i$ are all zero.
Applying $T$ to both sides of this equality, we have\begin{align*}\mathbf{0}_m&=T(\mathbf{0}_n)=T(c_1\mathbf{x}_1+c_2\mathbf{x}_2+\cdots+c_k\mathbf{x}_k)\\&= c_1T(\mathbf{x}_1)+c_2T(\mathbf{x}_2)+\cdots+c_k T(\mathbf{x}_k).\end{align*}In the last step, we used the linearity of $T$.
Since the vectors $T(\mathbf{x}_1), T(\mathbf{x}_2), \dots, T(\mathbf{x}_k)$ are linearly independent, the coefficients of this linear combination must all be zero.Thus we have $c_1=c_2=\dots=c_k=0$, hence the set $S$ is linearly independent.
Reynolds number, abbreviated as Re, is a dimensionless number that measures the ratio of inertial forces (a moving fluid's tendency to keep moving) to viscous forces (its resistance to flow).
Reynolds Number formula
\(\large{ Re = \frac{ \rho \; v \; l_c }{ \mu } }\)
\(\large{ Re = \frac{ v \; l_c }{ \nu } }\)
Where:
\(\large{ Re }\) = Reynolds number
\(\large{ l_c }\) = characteristic length or diameter of the fluid flow
\(\large{ \rho }\) (Greek symbol rho) = fluid density
\(\large{ v }\) = fluid velocity
\(\large{ \mu }\) (Greek symbol mu) = dynamic viscosity
\(\large{ \nu }\) (Greek symbol nu) = kinematic viscosity
Solve for:
\(\large{ l_c = \frac{ Re \; \mu }{\rho \; v } }\)
\(\large{ \rho = \frac{ Re \; \mu }{ l_c \; v } }\)
\(\large{ v = \frac{ Re \; \mu }{ \rho \; l_c } }\)
\(\large{ \mu = \frac{ \rho \; v \; l_c }{ Re } }\)
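The two viscosity forms agree whenever $\nu = \mu / \rho$. A small Python sketch of the formula; the fluid properties below are illustrative values for water at roughly 20 °C (my assumption, not from the text):

```python
def reynolds_number(rho, v, l_c, mu):
    """Re = rho * v * l_c / mu (dynamic-viscosity form)."""
    return rho * v * l_c / mu

def reynolds_number_kinematic(v, l_c, nu):
    """Re = v * l_c / nu (kinematic-viscosity form)."""
    return v * l_c / nu

# Illustrative values: water at ~20 C flowing at 1 m/s through a 0.05 m pipe.
rho = 998.0      # kg/m^3, fluid density
mu = 1.002e-3    # Pa*s, dynamic viscosity
v = 1.0          # m/s, fluid velocity
l_c = 0.05       # m, characteristic length
print(round(reynolds_number(rho, v, l_c, mu)))  # -> 49800 (turbulent regime)
```

A value near 50,000 is well above the usual laminar-to-turbulent transition range for pipe flow, which is the kind of conclusion the number is used for.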
I would like a clarification on bank angle and how it differs from roll angle with respect to fixed-wing aircraft. It is my understanding that the bank angle is a result of rotating the aircraft body to the stability frame, implying that if the angle of attack $\alpha$ and the sideslip angle $\beta$ are zero, then, and only then, the bank angle and roll angle are the same. Based on Stevens and Lewis, Aircraft Control and Simulation (which doesn't define bank angle), the rotation from body to stability (wind) frame is given by $$ C_{w-b} = \begin{bmatrix} \cos\alpha \cos\beta & \sin\beta & \sin\alpha \cos\beta \\ -\cos\alpha\sin\beta & \cos\beta & -\sin\alpha\sin\beta\\ -\sin\alpha & 0 & \cos\alpha \end{bmatrix} $$
Therefore, the bank angle $\mu$
I think is given by:$$\mu \triangleq \begin{bmatrix} &\cos\alpha \cos\beta & \sin\beta & \sin\alpha \cos\beta \end{bmatrix} \begin{bmatrix} \phi \\ \theta \\ \psi\end{bmatrix} $$
where $\phi,\, \theta,\, \psi$ are standard aerospace Euler angles defined based on a 3-2-1 rotation. My specific questions are:
1) Is my understanding and calculation of bank angle correct? If not, can someone point me to a good resource for this? I was surprised that I was unable to find a clear definition and formula quickly.
2) By multiplying the second and third rows of $C_{w-b}$ with the Euler angles we obtain two other angles relative to the stability frame. Do these angles have any names or a specific role in aerospace dynamics/control?
Let $\Sigma$ be an alphabet, ie a nonempty finite set. A string is any finite sequence of elements (characters) from $\Sigma$. As an example, $ \{0, 1\}$ is the binary alphabet and $0110$ is a string for this alphabet.
Usually, as long as $\Sigma$ contains more than 1 element, the exact number of elements in $\Sigma$ doesn't matter: at most we end up with a different constant somewhere. In other words, it doesn't really matter if we use the binary alphabet, the numbers, the Latin alphabet or Unicode.
Are there examples of situations in which it matters how large the alphabet is?
The reason I'm interested in this is because I happened to stumble upon one such example:
For any alphabet $\Sigma$ we define the random oracle $O_{\Sigma}$ to be an oracle that returns random elements from $\Sigma$, such that every element has an equal chance of being returned (so the chance for every element is $\frac{1}{|\Sigma|}$).
For some alphabets $\Sigma_1$ and $\Sigma_2$ - possibly of different sizes - consider the class of oracle machines with access to $O_{\Sigma_1}$. We're interested in the oracle machines in this class that behave the same as $O_{\Sigma_2}$. In other words, we want to convert an oracle $O_{\Sigma_1}$ into an oracle $O_{\Sigma_2}$ using a Turing machine. We will call such a Turing machine a conversion program.
Let $\Sigma_1 = \{ 0, 1 \}$ and $\Sigma_2 = \{ 0, 1, 2, 3 \}$. Converting $O_{\Sigma_1}$ into an oracle $O_{\Sigma_2}$ is easy: we query $O_{\Sigma_1}$ twice, converting the results as follows: $00 \rightarrow 0$, $01 \rightarrow 1$, $10 \rightarrow 2$, $11 \rightarrow 3$. Clearly, this program runs in $O(1)$ time.
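The two-query conversion can be sketched directly; the function names below are mine, and `random.randint` merely stands in for the binary oracle:

```python
import random

def O_binary():
    # Stand-in for the random oracle over the binary alphabet {0, 1}.
    return random.randint(0, 1)

def O_quaternary(oracle=O_binary):
    # Two binary queries yield one uniform symbol from {0, 1, 2, 3}:
    # 00 -> 0, 01 -> 1, 10 -> 2, 11 -> 3 (first query is the high bit).
    return 2 * oracle() + oracle()

print(O_quaternary())  # one of 0, 1, 2, 3, each with probability 1/4
```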
Now let $\Sigma_1 = \{ 0, 1 \}$ and $\Sigma_2 = \{ 0, 1, 2 \}$. For these two alphabets, all conversion programs run in $O(\infty)$ time, i.e. there are no conversion programs from $O_{\Sigma_1}$ to $O_{\Sigma_2}$ that run in $O(1)$ time.
This can be proven by contradiction: suppose there exists a conversion program $C$ from $O_{\Sigma_1}$ to $O_{\Sigma_2}$ running in $O(1)$ time. This means there is a $d \in \mathbb{N}$ such that $C$ makes at most $d$ queries to $\Sigma_1$.
$C$ may make less than $d$ queries in certain execution paths. We can easily construct a conversion program $C'$ that executes $C$, keeping track of how many times an oracle query was made. Let $k$ be the number of oracle queries. $C'$ then makes $d-k$ additional oracle queries, discarding the results, returning what $C$ would have returned.
This way, there are exactly $|\Sigma_1|^d = 2^d$ equally likely execution paths for $C'$. Exactly $\frac{1}{|\Sigma_2|} = \frac{1}{3}$ of these execution paths must result in $C'$ returning $0$. However, $\frac{2^d}{3}$ is not an integer, so we have a contradiction. Hence, no such program exists.
More generally, if we have alphabets $\Sigma_1$ and $\Sigma_2$ with $|\Sigma_1|=n$ and $|\Sigma_2|=k$, then there exists a conversion program from $O_{\Sigma_1}$ to $O_{\Sigma_2}$ if and only if all the primes appearing in the prime factorisation of $k$ also appear in the prime factorisation of $n$ (so the exponents of the primes in the factorisation don't matter).
A consequence of this is that if we have a random number generator generating a binary string of length $l$, we can't use that random number generator to generate a number in $\{0, 1, 2\}$ with exactly equal probability.
I thought up the above problem when standing in the supermarket, pondering what to have for dinner. I wondered if I could use coin tosses to decide between choice A, B and C. As it turns out, that is impossible. |
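Strictly, the impossibility is of a *bounded worst-case* procedure; rejection sampling still decides among three choices uniformly with only expected-constant tosses, at the price of an unbounded worst case. A sketch of the supermarket decision (function names are mine):

```python
import random

def pick_dinner(choices=("A", "B", "C"), coin=lambda: random.randint(0, 1)):
    # Toss two coins; map 00 -> A, 01 -> B, 10 -> C, and retry on 11.
    # Each accepted outcome is uniform over the three choices, but the
    # number of tosses is unbounded in the worst case (geometric tail).
    while True:
        k = 2 * coin() + coin()
        if k < 3:
            return choices[k]

print(pick_dinner())  # one of "A", "B", "C", each with probability 1/3
```

The retry probability is 1/4 per round, so the expected number of coin tosses is 2 · 4/3, which is why this works fine in practice despite the theoretical result above.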
Some friend asked me the following question:
For a real scalar field $\phi$, assume that $H = H_{\text{free}} - \int d^3 x\, J \phi$. Here $J(x, t)$ is just some real number, a source or background field, without second quantization. Now, what is the amplitude $\psi(x, t)$ for finding a particle at time $t$ (before, during, or after the source is on/off) at position $x$? The $J(x,t)$ is nonzero only for a finite period of time, and the initial state is the vacuum as $t \to -\infty$.
This question looks simple. However, I cannot find a solution which satisfies both causality and Lorentz invariance.
The question as asked is open to interpretation, so I will first rephrase it to have a basis to build upon. Your last paragraph tells me that you want to know the optimum bank angle to get the highest ratio of turn rate to altitude loss in a glide at a given airspeed.
Spoiler: Since steeper bank angles require more lift, and aircraft with better L/D are more efficient in producing lift, the optimum bank angle depends on the aerodynamic qualities of the aircraft.
What is given

- Glider or powered aircraft with an inoperative engine.
- The polar and the weight are known and do not change over time.
- Airspeed. This will result in a restricted optimum; the absolute best bank angle would require a suitable speed.

What can be changed

- Bank angle $\varphi$ (obviously: you are asking for this)
- Lift $L$ (again obviously: you want to stay airborne)

Solution
First I need to formulate the ratio of turn rate over height loss. This then needs to be differentiated with respect to the bank angle and set to zero. To have a differentiable polar, I use the quadratic polar $c_D = c_{D0}+\frac{c_L^2}{\pi\cdot AR\cdot\epsilon}$.
I further assume a coordinated turn, so we can define the lift and drag equations. Drag is compensated by selecting a suitable glide path angle $\gamma$ in order to convert potential into kinetic energy to keep the speed constant. The angular velocity $\Omega$ in a turn with the radius $R$ is$$\Omega = \frac{v}{R} = \frac{g\cdot \tan\varphi}{v} = \frac{g\cdot \sqrt{n_z^2-1}}{v}$$Height loss over time is vertical speed $v_z$, and this can be calculated from speed $v$ and flight path angle $\gamma$:$$v_z = v\cdot \sin\gamma$$Since $v$ is given and constant, we can rephrase the problem as a maximization of turn rate over flight path angle or sink speed. This is equivalent to the smallest height loss for a given azimuth change.$$\frac{\Omega}{v_z} = \frac{g\cdot \tan\varphi}{\sin\gamma}$$
Before deriving this, we need to express $\gamma$ in terms of $\varphi$. If we had the liberty to adjust speed, we could directly solve for the optimum bank angle at optimum L/D. Now, however, speed is fixed and L/D is what the airplane produces at the required lift. Since for gliders $\sin\gamma = \frac{c_D}{c_L}$, we can write:$$\frac{\Omega}{v_z} = \frac{g\cdot \tan\varphi\cdot c_L}{c_{D0}+\frac{c_L^2}{\pi\cdot AR\cdot\epsilon}} = \frac{g\cdot \sin\varphi\cdot \frac{m\cdot g}{q\cdot S}}{c_{D0}\cdot \cos^2\varphi + \frac{\left(\frac{m\cdot g}{q\cdot S}\right)^2}{\pi\cdot AR\cdot\epsilon}}$$with $c_L = \frac{m\cdot g}{q\cdot S\cdot \cos\varphi}$. Since the dynamic pressure $q$ is constant, we can now differentiate with respect to the bank angle. With the quotient rule we get a fraction, and since it will be set to zero, it is enough to look for the condition when the numerator is zero:$$g\cdot \cos\varphi\cdot \frac{m\cdot g}{q\cdot S}\cdot\left({c_{D0}\cdot \cos^2\varphi + \frac{\left(\frac{m\cdot g}{q\cdot S}\right)^2}{\pi\cdot AR\cdot\epsilon}}\right) = g\cdot \sin\varphi\cdot \frac{m\cdot g}{q\cdot S}\cdot 2\cdot c_{D0}\cdot \sin\varphi\cdot \cos\varphi$$$$\frac{\left(\frac{m\cdot g}{q\cdot S}\right)^2}{c_{D0}\cdot\pi\cdot AR\cdot\epsilon} = 2\cdot \sin^2\varphi - \cos^2\varphi = \frac{1}{2} - \frac{3}{2}\cos2\varphi$$$$\varphi = \frac{1}{2}\cdot \arccos\left(\frac{1}{3} - \frac{2\cdot\left(\frac{m\cdot g}{q\cdot S}\right)^2}{3\cdot c_{D0}\cdot\pi\cdot AR\cdot\epsilon}\right)$$This does not obviously look wrong, but I could very well have screwed up on the path to the result. If you plug in the numbers for an airplane you know, you can check whether the result makes sense. At least, with too low an airspeed the argument of the arccosine drops below $-1$, which mathematically means a roll angle of more than 90° and can be interpreted as too slow for that turn.
EDIT
Now we got a similar question, but with both speed and roll angle as variables. Obviously, we now need to differentiate with respect to both speed and roll angle. But it is more fun to plot the results over these two as a contour plot. I just had to do this since several answers here claim that the optimum angle is 45°. Equally obviously, this is too simplistic.
First the math: I start from the same equations as above and add a term for wind ($w_z$), which adds rising or sinking air mass to the problem. $$∆h = \frac{\pi\cdot v}{g\cdot \tan\phi}\cdot(v_z+w_z) = \frac{\pi\cdot v}{g\cdot\sqrt{n_z^2-1}}\cdot\left(\frac{v\cdot c_D}{c_L}+w_z\right)$$Expressing the lift coefficient as $$c_L = \frac{2\cdot n_z\cdot m\cdot g}{\rho\cdot S\cdot v^2}$$brings us to $$∆h = \frac{\pi}{g^2\cdot\sqrt{n_z^2-1}}\cdot \left(\frac{\rho\cdot S\cdot v^4\cdot c_{D0}}{2\cdot n_z\cdot m} + \frac{2\cdot n_z\cdot m\cdot g^2}{\pi\cdot\rho\cdot S\cdot AR\cdot\epsilon} + w_z\cdot v\cdot g\right)$$Nomenclature:
$g$: gravitational acceleration
$n_z$: vertical load factor
$\rho$: air density
$S$: wing area
$v$: flight speed
$c_{D0}$: zero-lift drag coefficient
$m$: aircraft mass
$AR$: wing aspect ratio
$\epsilon$: Oswald factor
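The $∆h$ formula above can be evaluated numerically with a brute-force grid search over speed and bank angle. The A320-like numbers below are rough guesses of mine, not the answer's exact inputs, so the resulting minimum only roughly matches the plotted value:

```python
import math

def height_loss_180(v, bank_deg, m=78000.0, S=122.6, AR=9.4, e=0.8,
                    c_d0=0.02, rho=1.225, w_z=0.0, g=9.81):
    """Height lost in a 180-degree gliding turn, from the Delta-h formula above.
    Aircraft parameters are rough A320-like guesses (assumptions, not given)."""
    n_z = 1.0 / math.cos(math.radians(bank_deg))  # coordinated-turn load factor
    return (math.pi / (g**2 * math.sqrt(n_z**2 - 1.0))) * (
        rho * S * v**4 * c_d0 / (2.0 * n_z * m)
        + 2.0 * n_z * m * g**2 / (math.pi * rho * S * AR * e)
        + w_z * v * g)

# Brute-force the best (speed, bank) pair on a coarse grid:
best = min((height_loss_180(v, b), v, b)
           for v in range(70, 121, 5) for b in range(10, 61, 5))
print(best)  # minimum height loss lands at the low-speed, steep-bank corner
```

With these inputs the grid minimum sits at the slowest speed and steepest bank in the search range, consistent with the contour plot's conclusion that the optimum lies right before stall at a high bank angle, and well away from 45°.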
The figure below is the result plotted in R. Since I need to read the full matrix of values for the contour plot, the area of low speed and high bank angle is filled with the result of a strict penalty function, so please disregard the values to the right and below the red line.
Contour plot of height losses of an A320-type aircraft in a 180° turn at sea level and MTOW (78 tons), no wind. X is bank angle in degrees and Y is flight speed in m/s. Own work.
As you can see, the minimum (approx. 170 m) is achieved right before stall at a high bank angle and speed. Unfortunately, you need the aerobatic version of the A320 to fly this safely. |
Differential and Integral Equations, Volume 31, Number 9/10 (2018), 685-700.
A sharp lower bound for the lifespan of small solutions to the Schrödinger equation with a subcritical power nonlinearity
Abstract
Let $T_{\varepsilon}$ be the lifespan for the solution to the Schrödinger equation on $\mathbb R^d$ with a power nonlinearity $\lambda |u|^{2\theta/d}u$ ($\lambda \in \mathbb C$, $0< \theta < 1$) and the initial data in the form $\varepsilon \varphi(x)$. We provide a sharp lower bound estimate for $T_{\varepsilon}$ as $\varepsilon \to +0$ which can be written explicitly by $\lambda$, $d$, $\theta$, $\varphi$ and $\varepsilon$. This is an improvement of the previous result by H. Sasaki [Adv. Diff. Eq., 14 (2009), 1021-1039].
Article information
Source: Differential Integral Equations, Volume 31, Number 9/10 (2018), 685-700.
Dates: First available in Project Euclid: 13 June 2018
Permanent link to this document: https://projecteuclid.org/euclid.die/1528855435
Mathematical Reviews number (MathSciNet): MR3814562
Zentralblatt MATH identifier: 06945777
Citation
Sagawa, Yuji; Sunagawa, Hideaki; Yasuda, Shunsuke. A sharp lower bound for the lifespan of small solutions to the Schrödinger equation with a subcritical power nonlinearity. Differential Integral Equations 31 (2018), no. 9/10, 685--700. https://projecteuclid.org/euclid.die/1528855435 |
The "left to right" of the biconditional is true. As noted in another answer, we can use L'hopital. But I will utilize a direct approach. We need to show that for arbitrarily large $M$, we have for sufficiently large $x$ the inequality $\frac{f(x)}{x} > M$.
By assumption, for any arbitrarily large $M$ we have $f'(x) > 2M$ when $x>x_0$. This means $f(x) \geq f(x_0) + 2M(x-x_0)$ for $x > x_{0}$. Note also that there is an $x_1$ such that for all $x > x_1$, $f(x_0) + 2M(x-x_0) > {Mx}$. Hence, we can see that for $x > \max\{x_1, x_0\}$ we have $$\frac{f(x)}{x} \geq \frac{f(x_0) + 2M(x-x_0)}{x} > \frac{Mx}{x} = M $$
The "right to left" of the biconditional is false. Consider $f(x) = x^2(\sin x + 2)$. This is positive and bounded below by $x^2$, hence $\lim_{x \to +\infty} \frac{f(x)}{x} = +\infty$ but $f'$ oscillates as $x \to +\infty$.
We can say something weaker, however, namely the following
Theorem: Let $f \in C^1(\mathbb{R})$ such that $$\lim_{x \to +\infty} \frac{f(x)}{x} = +\infty$$ Then we have $$\limsup_{x \to +\infty} \ f'(x) = +\infty$$
To prove this, first note that for $f$, we can assume $f(0) = 0$ without any loss of generality. Indeed, define $g(x) = f(x) - f(0)$ and note $\lim_{x \to +\infty} \frac{f(x)}{x} = +\infty \Longleftrightarrow \lim_{x \to +\infty} \frac{g(x)}{x} = +\infty$ and also $f' = g'$.
We can prove by contradiction. Suppose the $\lim \sup$ is finite or $-\infty$. This means $f'$ is bounded above in $[M, +\infty)$ for some $M>0$. Since $f'$ is continuous, by the extreme value theorem it is bounded above in $[0,M]$, and hence it is bounded above in $[0, +\infty)$. By the mean value theorem, we have that $\frac{f(x)}{x} = f'(\alpha)$ for some $\alpha$ in $[0, x]$. Letting $x \to +\infty$ we can see that $f'(\alpha)$ takes on arbitrarily large positive values, which contradicts the fact that $f'$ is bounded above in $[0, +\infty)$.
This can probably be modified so that the $C^1$ condition can be relaxed (e.g., to allow for cases where $f'$ is discontinuous), but I'm not sure how to do that. |
This answer is a response to a comment by the OP on yoda's answer.
Suppose that $h(t)$, the impulse response of a continuous-time linear time-invariant system, has the property that $$\int_{-\infty}^{\infty} |h(t)| \mathrm dt = M$$ for some finite number $M$. Then, for each and every bounded input $x(t)$, the output $y(t)$ is bounded also. If $|x(t)| \leq \hat{M}$ for all $t$, where $\hat{M}$ is some finite number, then $|y(t)| \leq \hat{M}M$ for all $t$, where $\hat{M}M$ is also a finite number. The proof is straightforward.$$\begin{align*}
|y(t)| &= \left |\int_{-\infty}^\infty h(\tau)x(t - \tau)\mathrm d\tau\right |\\
&\leq \int_{-\infty}^\infty |h(\tau)x(t - \tau)|\mathrm d\tau\\
&\leq \int_{-\infty}^\infty |h(\tau)|\cdot|x(t - \tau)|\mathrm d\tau\\
&\leq \hat{M}\int_{-\infty}^\infty |h(\tau)|\mathrm d\tau\\
&= \hat{M}M.
\end{align*}$$In other words, $y(t)$ is bounded whenever $x(t)$ is bounded.
Thus, the condition
$\displaystyle\int_{-\infty}^{\infty} |h(t)| \mathrm dt < \infty$
is sufficient for BIBO-stability.
The condition $\displaystyle\int_{-\infty}^{\infty} |h(t)| \mathrm dt < \infty$
is also necessary for BIBO-stability.
Assume that every bounded input produces a bounded output. Now consider the input $x(t) = \text{sgn}(h(-t)) ~\forall~ t$. This is clearly bounded ($|x(t)| \leq 1$ for all $t$), and at $t=0$ it produces the output$$\begin{align*}
y(0) &= \int_{-\infty}^\infty h(0-\tau)x(-\tau)\mathrm d\tau\\
&= \int_{-\infty}^\infty h(-\tau)\text{sgn}(h(-\tau))\mathrm d\tau\\
&= \int_{-\infty}^\infty |h(-\tau)|\mathrm d\tau\\
&= \int_{-\infty}^\infty |h(t)|\mathrm dt.
\end{align*}$$Our assumption that the system is BIBO stable means that $y(0)$ is necessarily finite, that is,$$\int_{-\infty}^{\infty} |h(t)| \mathrm dt < \infty$$
The proof for discrete-time systems is similar with the obvious change that all the integrals are replaced by sums.
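In discrete time the criterion becomes $\sum_n |h[n]| < \infty$. A quick numerical illustration contrasting an absolutely summable impulse response with one whose absolute sum diverges (the two example responses are mine):

```python
def partial_abs_sum(h, N):
    """Partial sum of |h[n]| for n = 0..N-1: the discrete analogue of
    the integral of |h(t)| in the BIBO criterion (truncated, so it can
    only suggest convergence or divergence, not prove it)."""
    return sum(abs(h(n)) for n in range(N))

# h1[n] = (1/2)^n: absolutely summable (sum -> 2), hence BIBO-stable.
# h2[n] = 1/(n+1): harmonic tail, partial sums grow without bound, unstable.
for N in (10, 100, 1000):
    print(N,
          round(partial_abs_sum(lambda n: 0.5 ** n, N), 6),
          round(partial_abs_sum(lambda n: 1.0 / (n + 1), N), 3))
```

The geometric response's partial sums settle at 2, while the harmonic one keeps growing like $\ln N$, mirroring the necessity argument above: a sign-matched bounded input would drive the second system's output without bound.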
Ideal LPFs are not BIBO-stable systems because the impulse response is not absolutely integrable,as stated in the answer by yoda. But his answer does not really answer the question
Can anyone give me a proof that ideal LPF can indeed be BIBO unstable?
A specific example of a bounded input signal that produces an unbounded output from an ideal LPF (and thus proves that the system is not BIBO-stable) can be constructed as outlined above (see also my comment on the main question).
Defining parameters
Level: \( N = 63 = 3^{2} \cdot 7 \)
Weight: \( k = 2 \)
Nonzero newspaces: \( 10 \)
Newforms: \( 17 \)
Sturm bound: \( 576 \)
Trace bound: \( 4 \)

Dimensions
The following table gives the dimensions of various subspaces of \(M_{2}(\Gamma_1(63))\).
                    Total   New   Old
Modular forms         192   131    61
Cusp forms             97    87    10
Eisenstein series      95    44    51

Decomposition of \(S_{2}^{\mathrm{new}}(\Gamma_1(63))\)
We only show spaces with even parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension. |
Basically 2 strings, $a>b$, which go into the first box that does division to output $q,r$ such that $a = bq + r$ and $r<b$; then you have to check for $r=0$, which returns $b$ if we are done, otherwise inputs $b,r$ into the division box again..
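The two-box flowchart described above is just the Euclidean algorithm; a minimal sketch:

```python
def gcd(a, b):
    # Repeatedly divide: a = b*q + r, then continue with (b, r) until r == 0,
    # at which point b is the answer, exactly the two-box flowchart above.
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # -> 21
```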
There was a guy at my university who was convinced he had proven the Collatz Conjecture even tho several lecturers had told him otherwise, and he sent his paper (written on Microsoft Word) to some journal citing the names of various lecturers at the university
Here is one part of the Peter-Weyl theorem: Let $\rho$ be a unitary representation of a compact $G$ on a complex Hilbert space $H$. Then $H$ splits into an orthogonal direct sum of irreducible finite-dimensional unitary representations of $G$.
What exactly does it mean for $\rho$ to split into finite dimensional unitary representations? Does it mean that $\rho = \oplus_{i \in I} \rho_i$, where $\rho_i$ is a finite dimensional unitary representation?
Sometimes my hint to my students used to be: "Hint: You're making this way too hard." Sometimes you overthink. Other times it's a truly challenging result and it takes a while to discover the right approach.
Once the $x$ is in there, you must put the $dx$ ... or else, nine chances out of ten, you'll mess up integrals by substitution. Indeed, if you read my blue book, you discover that it really only makes sense to integrate forms in the first place :P
Using the recursive definition of the determinant (cofactor expansion), and letting $\operatorname{det}(A) = \sum_{j=1}^n \operatorname{cof}_{1j} A$, how do I prove that the determinant is independent of the choice of the row?
Let $M$ and $N$ be $\mathbb{Z}$-module and $H$ be a subset of $N$. Is it possible that $M \otimes_\mathbb{Z} H$ to be a submodule of $M\otimes_\mathbb{Z} N$ even if $H$ is not a subgroup of $N$ but $M\otimes_\mathbb{Z} H$ is additive subgroup of $M\otimes_\mathbb{Z} N$ and $rt \in M\otimes_\mathbb{Z} H$ for all $r\in\mathbb{Z}$ and $t \in M\otimes_\mathbb{Z} H$?
Well, assuming that the paper is all correct (or at least to a reasonable point). I guess what I'm asking would really be "how much does 'motivated by real world application' affect whether people would be interested in the contents of the paper?"
@Rithaniel $2 + 2 = 4$ is a true statement. Would you publish that in a paper? Maybe... On the surface it seems dumb, but if you can convince me the proof is actually hard... then maybe I would reconsider.
Although not the only route, can you tell me something contrary to what I expect?
It's a formula. There's no question of well-definedness.
I'm making the claim that there's a unique function with the 4 multilinear properties. If you prove that your formula satisfies those (with any row), then it follows that they all give the same answer.
It's old-fashioned, but I've used Ahlfors. I tried Stein/Stakarchi and disliked it a lot. I was going to use Gamelin's book, but I ended up with cancer and didn't teach the grad complex course that time.
Lang's book actually has some good things. I like things in Narasimhan's book, but it's pretty sophisticated.
You define the residue to be $1/(2\pi i)$ times the integral around any suitably small smooth curve around the singularity. Of course, then you can calculate $\text{res}_0\big(\sum a_nz^n\,dz\big) = a_{-1}$ and check this is independent of coordinate system.
@A.Hendry: It looks pretty sophisticated, so I don't know the answer(s) off-hand. The things on $u$ at endpoints look like the dual boundary conditions. I vaguely remember this from teaching the material 30+ years ago.
@Eric: If you go eastward, we'll never cook! :(
I'm also making a spinach soufflé tomorrow — I don't think I've done that in 30+ years. Crazy ridiculous.
@TedShifrin Thanks for the help! Dual boundary conditions, eh? I'll look that up. I'm mostly concerned about $u(a)=0$ in the term $u'/u$ appearing in $h'-\frac{u'}{u}h$ (and also for $w=-\frac{u'}{u}P$)
@TedShifrin It seems to me like $u$ can't be zero, or else $w$ would be infinite.
@TedShifrin I know the Jacobi accessory equation is a type of Sturm-Liouville problem, from which Fox demonstrates in his book that $u$ and $u'$ cannot simultaneously be zero, but that doesn't stop $w$ from blowing up when $u(a)=0$ in the denominator |
I am interested in the early proofs of the theorem. It is often called Cauchy mean value theorem, so perhaps Cauchy proved it first. In all the proofs that I have seen we construct a contrived function, applying Rolle's theorem to which works out just right. It seems like a miracle that one would come up with such a function. Perhaps the older proofs were more intuitive.
Cauchy was indeed the first, although his version was weaker than the modern one. An intuitive geometric interpretation of the theorem, along with a proof motivated by it, is due to Bonnet. A very well sourced and illustrated account of history of the mean value theorem is A brief history of the mean value theorem by Besenyei, which describes many historically given proofs of it and its generalization, along with controversies surrounding some of them.
Cauchy's proof appeared in Resume des Lecons sur le Calcul Infinitesimal (1823). He does not use Rolle's theorem, but he also does not prove the modern form of it involving derivatives at an intermediate point. In Besenyei's reconstruction, we assume $g′ > 0$, and denote $A := \min_{[a,b]} f ′/g′$, and $B := \max_{[a,b]} f ′/g′$. Then $f' − Ag'\geq0$ and $Bg' − f'\geq0$, and hence $f − Ag$ and $Bg − f$ are monotone increasing. Therefore, $$ f (b)−f (a)−A(g(b)−g(a)) ≥ 0\ \textrm{and}\ B(g(b)−g(a))−(f (b)−f (a)) ≥ 0. $$ These are easily manipulated into the double inequality $$ A ≤ \frac{f (b) − f (a)}{g(b) − g(a)} ≤ B, $$ which is Cauchy's version of the theorem.
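Cauchy's double inequality is easy to check numerically; a small sketch of mine with $f = \sin$ and $g$ the identity on $[0,1]$, approximating $A$ and $B$ by the extremes of $f'/g'$ on a grid:

```python
import math

# Check A <= (f(b)-f(a))/(g(b)-g(a)) <= B for f = sin, g = identity on [0, 1].
# Here g' = 1 and f' = cos, so f'/g' = cos on a grid of sample points.
a, b, n = 0.0, 1.0, 1000
xs = [a + (b - a) * i / n for i in range(n + 1)]
ratios = [math.cos(x) for x in xs]            # f'(x)/g'(x) on the grid
A, B = min(ratios), max(ratios)               # Cauchy's A and B
chord = (math.sin(b) - math.sin(a)) / (b - a)  # difference quotient
assert A <= chord <= B
print(A, chord, B)
```

Here $A = \cos 1 \approx 0.540$, the chord slope is $\sin 1 \approx 0.841$, and $B = 1$, so the difference quotient sits strictly between the extremes of $f'/g'$, as Cauchy's version asserts.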
The modern version appeared in Serret's Cours de Calcul Infinitesimal (1868). The auxiliary function Serret introduced is directly inspired by Cauchy's proof, it is $\varphi(x) = (f(x) − Ag(x)) − (f(a) − Ag(a))$, but Rolle's theorem is not used.
Bonnet gave an intuitive geometric interpretation of the theorem, by treating $g(t),f(t)$ as parametric equations of a plane curve. Then the theorem amounts to saying that the tangent to the curve segment at some point is parallel to the chord connecting its endpoints, $\big(g(a),f(a)\big)$ and $\big(g(b),f(b)\big)$. The slope of the chord is $\frac{f(b)−f(a)}{g(b)−g(a)}$, which suggests introducing an auxiliary function $$ h(t) = f (t) − f (a) − \frac{f(b)−f(a)}{g(b)−g(a)}(g(t) − g(a)). $$ Geometrically, this is the vertical distance between $(g(t),f(t))$ and the point on the chord right above or below it. The generalized mean value theorem follows by applying Rolle's theorem to $h(t)$. |
In this chapter we turn to the important question of determining the distribution of a sum of independent random variables in terms of the distributions of the individual constituents. In this section we consider only sums of discrete random variables, reserving the case of continuous random variables for the next section.
We consider here only random variables whose values are integers. Their distribution functions are then defined on these integers. We shall find it convenient to assume here that these distribution functions are defined for all integers, by defining them to be 0 where they are not otherwise defined.
Convolutions
Suppose $X$ and $Y$ are two independent discrete random variables with distribution functions \(m_1(x)\) and \(m_2(x)\). Let \(Z = X + Y\). We would like to determine the distribution function \(m_3(x)\) of \(Z\). To do this, it is enough to determine the probability that \(Z\) takes on the value \(z\), where \(z\) is an arbitrary integer. Suppose that \(X = k\), where \(k\) is some integer. Then \(Z = z\) if and only if \(Y = z - k\). So the event \(Z = z\) is the union of the pairwise disjoint events
\[(X=k) \text{ and } (Y= z - k)\]
where \(k\) runs over the integers. Since these events are pairwise disjoint, we have
\[P(Z=z) = \sum_{k=-\infty}^\infty P(X=k) \cdot P(Y=z-k)\]
Thus, we have found the distribution function of the random variable \(Z\). This leads to the following definition.
Definition: convolution
Let \(X\) and \(Y\) be two independent integer-valued random variables, with distribution functions \(m_1(x)\) and \(m_2(x)\) respectively. Then the convolution of \(m_1(x)\) and \(m_2(x)\) is the distribution function \(m_3 = m_1 * m_2\) given by
\[ m_3(j) = \sum_k m_1(k) \cdot m_2(j-k) ,\]
for \(j = \dots, -2, -1, 0, 1, 2, \dots\). The function \(m_3(x)\) is the distribution function of the random variable \(Z = X + Y\).
It is easy to see that the convolution operation is commutative, and it is straightforward to show that it is also associative.
Now let \(S_n = X_1 + X_2 + . . . + X_n \) be the sum of n independent random variables of an independent trials process with common distribution function m defined on the integers. Then the distribution function of \(S_1\) is m. We can write
\[ S_n = S_{n-1} + X_n \]
Thus, since we know the distribution function of \(X_n\) is \(m\), we can find the distribution function of \(S_n\) by induction.
Example 7.1
A die is rolled twice. Let \(X_1\) and \(X_2\) be the outcomes, and let \( S_2 = X_1 + X_2\) be the sum of these outcomes. Then \(X_1\) and \(X_2\) have the common distribution function:
\[ m = \bigg( \begin{array}{cccccc} 1 & 2 & 3 & 4 & 5 & 6 \\ 1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6 \end{array} \bigg) .\]
The distribution function of \(S_2\) is then the convolution of this distribution with itself. Thus,
\[\begin{array}{rcl} P(S_2 =2) & = & m(1)m(1) \\ & = & \frac{1}{6}\cdot\frac{1}{6} = \frac{1}{36} \\ P(S_2 =3) & = & m(1)m(2) + m(2)m(1) \\ & = & \frac{1}{6}\cdot\frac{1}{6} + \frac{1}{6}\cdot\frac{1}{6} = \frac{2}{36} \\ P(S_2 =4) & = & m(1)m(3) + m(2)m(2) + m(3)m(1) \\ & = & \frac{1}{6}\cdot\frac{1}{6} + \frac{1}{6}\cdot\frac{1}{6} + \frac{1}{6}\cdot\frac{1}{6} = \frac{3}{36}\end{array}\]
Continuing in this way we would find \(P(S_2 = 5) = 4/36, P(S_2 = 6) = 5/36, P(S_2 = 7) = 6/36, P(S_2 = 8) = 5/36, P(S_2 = 9) = 4/36, P(S_2 = 10) = 3/36, P(S_2 = 11) = 2/36,\) and \(P(S_2 = 12) = 1/36\). The distribution for \(S_3\) would then be the convolution of the distribution for \(S_2\) with the distribution for \(X_3\). Thus \(P(S_3 = 3) = P(S_2 = 2)P(X_3 = 1)\),
and so forth.
This is clearly a tedious job, and a program should be written to carry out this calculation. To do this we first write a program to form the convolution of two densities p and q and return the density r. We can then write a program to find the density for the sum Sn of n independent random variables with a common density p, at least in the case that the random variables have a finite number of possible values.
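As a sketch of the two programs just described (not the book's own code), the convolution of two densities and the \(n\)-fold convolution might look like this in Python, with a density represented as a dict mapping integer values to probabilities:

```python
def convolution(p, q):
    """Convolve two integer-valued densities p and q, returning the density r."""
    r = {}
    for j, pj in p.items():
        for k, qk in q.items():
            r[j + k] = r.get(j + k, 0.0) + pj * qk
    return r

def n_fold_convolution(p, n):
    """Density of S_n = X_1 + ... + X_n for n i.i.d. variables with density p."""
    r = p
    for _ in range(n - 1):
        r = convolution(r, p)
    return r

# Example 7.1: two rolls of a fair die.
die = {k: 1 / 6 for k in range(1, 7)}
s2 = n_fold_convolution(die, 2)
# s2[7] equals 6/36, matching P(S_2 = 7) computed in the text.
```

Running `n_fold_convolution(die, n)` for \(n = 10, 20, 30\) reproduces the bell-shaped distributions discussed below.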
Running this program for the example of rolling a die \(n\) times for \(n = 10, 20, 30\) results in the distributions shown in Figure 7.1. We see that, as in the case of Bernoulli trials, the distributions become bell-shaped. We shall discuss in Chapter 9 a very general theorem called the Central Limit Theorem that will explain this phenomenon.
Example 7.2
A well-known method for evaluating a bridge hand is: an ace is assigned a value of 4, a king 3, a queen 2, and a jack 1. All other cards are assigned a value of 0. The point count of the hand is then the sum of the values of the cards in the hand. (It is actually more complicated than this, taking into account voids in suits, and so forth, but we consider here this simplified form of the point count.) If a card is dealt at random to a player, then the point count for this card has distribution
\[ p_X = \bigg( \begin{array}{ccccc} 0 & 1 & 2 & 3 & 4 \\ 36/52 & 4/52 & 4/52 & 4/52 & 4/52 \end{array} \bigg) .\]
Let us regard the total hand of 13 cards as 13 independent trials with this common distribution. (Again this is not quite correct because we assume here that we are always choosing a card from a full deck.) Then the distribution for the point count \(C\) for the hand can be found from the program NFoldConvolution by using the distribution for a single card and choosing \(n = 13\). A player with a point count of 13 or more is said to have an opening bid. The probability of having an opening bid is then
\[P(C \geq 13) .\]
Since we have the distribution of \(C\), it is easy to compute this probability. Doing this we find that
\[ P(C \geq 13) = .2845,\]
so that about one in four hands should be an opening bid according to this simplified model. A more realistic discussion of this problem can be found in Epstein, The Theory of Gambling and Statistical Logic.\(^1\)
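The opening-bid probability can be reproduced with a short Python sketch, assuming a simple dict-based convolution in place of the book's NFoldConvolution program (and keeping the text's simplification that the 13 cards are independent):

```python
from fractions import Fraction

def convolution(p, q):
    """Convolve two integer-valued densities p and q."""
    r = {}
    for j, pj in p.items():
        for k, qk in q.items():
            r[j + k] = r.get(j + k, 0) + pj * qk
    return r

# Point count of a single randomly dealt card (exact fractions avoid rounding).
card = {0: Fraction(36, 52), 1: Fraction(4, 52), 2: Fraction(4, 52),
        3: Fraction(4, 52), 4: Fraction(4, 52)}

# 13-fold convolution: the hand treated as 13 independent cards.
hand = card
for _ in range(12):
    hand = convolution(hand, card)

p_opening = float(sum(prob for c, prob in hand.items() if c >= 13))
```

`p_opening` comes out to roughly 0.28, i.e. about one hand in four, as the text states.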
For certain special distributions it is possible to find an expression for the distribution that results from convoluting the distribution with itself \(n\) times. The convolution of two binomial distributions, one with parameters \(m\) and \(p\) and the other with parameters \(n\) and \(p\), is a binomial distribution with parameters \((m + n)\) and \(p\). This fact follows easily from a consideration of the experiment which consists of first tossing a coin \(m\) times, and then tossing it \(n\) more times.
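This fact can also be checked numerically; the sketch below uses arbitrary illustrative parameters \(m = 3\), \(n = 5\), \(p = 0.4\):

```python
from math import comb

def binomial(n, p):
    """Binomial(n, p) density as a dict on {0, ..., n}."""
    return {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}

def convolution(a, b):
    r = {}
    for j, aj in a.items():
        for k, bk in b.items():
            r[j + k] = r.get(j + k, 0.0) + aj * bk
    return r

m, n, p = 3, 5, 0.4          # illustrative values, chosen arbitrarily
lhs = convolution(binomial(m, p), binomial(n, p))
rhs = binomial(m + n, p)
# lhs and rhs agree term by term (up to floating-point rounding).
```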
The convolution of \(k\) geometric distributions with common parameter \(p\) is a negative binomial distribution with parameters \(p\) and \(k\). This can be seen by considering the experiment which consists of tossing a coin until the \(k\)th head appears.
Exercises
\(\PageIndex{1}\)
A die is rolled three times. Find the probability that the sum of the outcomes is (a) greater than 9 (b) an odd number.
\(\PageIndex{2}\)
The price of a stock on a given trading day changes according to the distribution
\[ p_X = \bigg( \begin{array}{cccc} -1 & 0 & 1 & 2 \\ 1/4 & 1/2 & 1/8 & 1/8 \end{array} \bigg) .\]
Find the distribution for change in stock price after two (independent) trading days.
\(\PageIndex{3}\)
Let \(X_1\) and \(X_2\) be independent random variables with common distribution
\[ p_X = \bigg( \begin{array}{ccc} 0 & 1 & 2 \\ 1/8 & 3/8 & 1/2 \end{array} \bigg) .\]
Find the distribution of the sum \(X_1 + X_2\).
\(\PageIndex{4}\)
In one play of a certain game you win an amount \(X\) with distribution
\[ p_X = \bigg( \begin{array}{ccc} 1 & 2 & 3 \\ 1/4 & 1/4 & 1/2 \end{array} \bigg) .\]
Using the program NFoldConvolution, find the distribution for your total winnings after ten (independent) plays. Plot this distribution.
\(\PageIndex{5}\)
Consider the following two experiments: the first has outcome \(X\) taking on the values 0, 1, and 2 with equal probabilities; the second results in an (independent) outcome \(Y\) taking on the value 3 with probability 1/4 and 4 with probability 3/4. Find the distribution of
\[ \begin{array}{ll} \text{(a)} & Y+X \\ \text{(b)} & Y-X \end{array}\]
\(\PageIndex{6}\)
People arrive at a queue according to the following scheme: during each minute of time either 0 or 1 person arrives. The probability that 1 person arrives is \(p\) and that no person arrives is \(q = 1 − p\). Let \(C_r\) be the number of customers arriving in the first \(r\) minutes. Consider a Bernoulli trials process with a success if a person arrives in a unit time and failure if no person arrives in a unit time. Let \(T_r\) be the number of failures before the \(r\)th success.
(a) What is the distribution of \(T_r\)?
(b) What is the distribution of \(C_r\)?
(c) Find the mean and variance of the number of customers arriving in the first \(r\) minutes.
\(\PageIndex{7}\)
(a) A die is rolled three times with outcomes \(X_1, X_2\) and \(X_3\). Let \(Y_3\) be the maximum value obtained. Show that
\[P(Y_3 \leq j) = P(X_1 \leq j)^3\]
Use this to find the distribution of \(Y_3\). Does \(Y_3\) have a bell-shaped distribution?
(b) Now let \(Y_n\) be the maximum value when \(n\) dice are rolled. Find the distribution of \(Y_n\). Is this distribution bell-shaped for large values of \(n\)?
\(\PageIndex{8}\)
A baseball player is to play in the World Series. Based upon his season play, you estimate that if he comes to bat four times in a game the number of hits he will get has a distribution
\[ p_X = \bigg( \begin{array}{ccccc} 0 & 1 & 2 & 3 & 4 \\ .4 & .2 & .2 & .1 & .1 \end{array} \bigg) \]
Assume that the player comes to bat four times in each game of the series.
(a) Let X denote the number of hits that he gets in a series. Using the program NFoldConvolution, find the distribution of X for each of the possible series lengths: four-game, five-game, six-game, seven-game.
(b) Using one of the distributions found in part (a), find the probability that his batting average exceeds .400 in a four-game series. (The batting average is the number of hits divided by the number of times at bat.)
(c) Given the distribution \(p_X\), what is his long-term batting average?
\(\PageIndex{9}\)
Prove that you cannot load two dice in such a way that the probabilities for any sum from 2 to 12 are the same. (Be sure to consider the case where one or more sides turn up with probability zero.)
\(\PageIndex{10}\)
(Lévy\(^2\)) Assume that \(n\) is an integer, not prime. Show that you can find two distributions \(a\) and \(b\) on the nonnegative integers such that the convolution of \(a\) and \(b\) is the equiprobable distribution on the set \(\{0, 1, 2, \ldots, n − 1\}\). If \(n\) is prime this is not possible, but the proof is not so easy. (Assume that neither \(a\) nor \(b\) is concentrated at 0.)
\(\PageIndex{11}\)
Assume that you are playing craps with dice that are loaded in the following way: faces two, three, four, and five all come up with the same probability (1/6) + r. Faces one and six come up with probability (1/6) − 2r, with \(0 < r < .02.\) Write a computer program to find the probability of winning at craps with these dice, and using your program find which values of r make craps a favorable game for the player with these dice. |
Mathematics - Functional Analysis and Mathematics - Metric Geometry
Abstract
The following strengthening of the Elton–Odell theorem on the existence of $(1+\epsilon)$-separated sequences in the unit sphere $S_X$ of an infinite-dimensional Banach space $X$ is proved: there exists an infinite subset $S\subseteq S_X$ and a constant $d>1$ satisfying the property that for every $x,y\in S$ with $x\neq y$ there exists $f\in B_{X^*}$ such that $d\leq f(x)-f(y)$ and $f(y)\leq f(z)\leq f(x)$ for all $z\in S$. Comment: 15 pages, to appear in Bulletin of the Hellenic Mathematical Society
Given a finite-dimensional Banach space $X$ with $\dim X = n$ and an Auerbach basis of $X$, it is proved that there exists a set $D$ of $n + 1$ linear combinations (with coefficients 0, −1, +1) of the members of the basis such that each pair of distinct elements of $D$ has distance greater than one. Comment: 15 pages. To appear in MATHEMATIKA
Bunuel wrote:
In the figure above, if isosceles right triangle PQR has an area of 4, what is the area of the shaded portion of the figure?
(A) \(\pi\)
(B) \(2\pi\)
(C) \(2\sqrt{2}\pi\)
(D) \(4\pi\)
(E) \(8\pi\)
Attachment: 2018-02-05_0842ed.png
The shaded region is a sector of the circle.
Use hypotenuse PQ to find the radius of circle.
The radius plus the central angle of sector PQR will yield shaded region's area.
Draw a line from R to PQ: RX = Height of ∆ PQR = radius
• From given area, find side length of ∆ PQR, then find hypotenuse

Side length
Area of ∆ PQR* = \(\frac{s^2}{2} = 4\)
\(s^2 = 8\), so \(s = \sqrt{4*2} = 2\sqrt{2}\)

Hypotenuse PQ
∆ PQR has angle measures 45-45-90.**
Sides opposite those angles are in ratio \(x : x : x\sqrt{2}\), with \(x = 2\sqrt{2}\)
PQ corresponds with \(x\sqrt{2} = (2\sqrt{2} * \sqrt{2}) = (2 * 2) = 4\)
• Find radius
Area of ∆ PQR = \(\frac{PQ * h}{2}\), where base \(PQ = 4\):
\(\frac{4 * h}{2} = 4\)
\(4 * h = 8\)
\(h = 2\)
radius \(RX = 2\)
• Use radius plus central angle to find area of sector
The central angle of the sector is 90°
The sector as a fraction of the circle is \(\frac{Part}{Whole}=\frac{SectorAngle}{360}=\frac{90}{360}=\frac{1}{4}\)
Sector Area = \(\frac{1}{4}\) Circle Area
Circle area = \(\pi r^2 = 4\pi\)
Sector Area = \(\frac{1}{4} * 4\pi = \pi\)
Sector area = shaded region = \(π\)
Answer A

*You can use Area = \(\frac{b*h}{2}\): an isosceles right triangle has legs of equal length, so base = \(x\) and height = \(x\): \(A = \frac{x * x}{2} = 4\), \(x^2 = 8\), \(x = 2\sqrt{2}\)

**Or use the Pythagorean theorem: leg\(^2\) + leg\(^2\) = hypotenuse\(^2\): \((2\sqrt{2})^2 + (2\sqrt{2})^2 = (PQ)^2\), \(8 + 8 = PQ^2\), \(PQ = \sqrt{16} = 4\)
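As a quick sanity check, the arithmetic above can be verified in a few lines of Python (a sketch, not part of the original solution):

```python
import math

leg = math.sqrt(2 * 4)        # s^2 / 2 = 4  =>  s = 2*sqrt(2)
hyp = leg * math.sqrt(2)      # 45-45-90 ratio gives hypotenuse PQ = 4
r = 2 * 4 / hyp               # area = PQ * h / 2 = 4  =>  h = radius = 2
sector = (90 / 360) * math.pi * r**2
# sector equals pi, matching answer (A)
```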
Question
A beam is supported at its two ends and is uniformly loaded. The bending moment \(M\) at a distance \(x\) from one end is given by \[M = \frac{WL}{2}x - \frac{W}{2} x^2 .\]
Find the point at which \(M\) is maximum.
Solution
\[\text { Given }: \hspace{0.167em} M = \frac{WL}{2}x - \frac{W}{2} x^2 \]
\[ \Rightarrow \frac{dM}{dx} = \frac{WL}{2} - 2 \times \frac{Wx}{2}\]
\[ \Rightarrow \frac{dM}{dx} = \frac{WL}{2} - Wx\]
\[\text { For maximum or minimum values of M, we must have }\]
\[\frac{dM}{dx} = 0\]
\[ \Rightarrow \frac{WL}{2} - Wx = 0\]
\[ \Rightarrow \frac{WL}{2} = Wx\]
\[ \Rightarrow x = \frac{L}{2}\]
\[\text { Now,} \]
\[\frac{d^2 M}{d x^2} = - W < 0\]
\[\text { So, } M \text{ is maximum at } x = \frac{L}{2} . \]
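The result can also be checked numerically; the sketch below assumes illustrative values \(W = 1\) and \(L = 10\) (the location \(x = L/2\) does not depend on these choices):

```python
W, L = 1.0, 10.0  # illustrative values, chosen arbitrarily

def M(x):
    """Bending moment at distance x from one end."""
    return (W * L / 2) * x - (W / 2) * x**2

# Evaluate M on a fine grid over the span and locate its maximum.
xs = [i * L / 1000 for i in range(1001)]
x_max = max(xs, key=M)
# x_max is L/2 = 5.0, agreeing with the calculus above
```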