Let $X$ be a scheme and $x \in X$. Consider the functor
$\text{Qcoh}(X) \to \mathcal{O}_{X,x} \text{-Mod} , M \mapsto M_x.$
Does it have a right-inverse? I.e. is there, for every $\mathcal{O}_{X,x}$-module $N$, a (functorial) quasi-coherent $\mathcal{O}_X$-module $M$ together with a natural isomorphism $M_x \cong N$?
If $X$ is quasi-separated, the answer is yes. First observe that, if $X$ is affine, the direct image with respect to $\text{Spec}(\mathcal{O}_{X,x}) \to X$ works. Now if $X$ is quasi-separated, use the affine case to extend $N$ to a quasi-coherent module on an open affine neighborhood $U$ of $x$, and then take the direct image with respect to $U \to X$. This works since $U \to X$ is a quasi-compact, quasi-separated morphism.
In the general case, note that direct images don't work, but this does not disprove the existence of the functor I'm looking for. The question is motivated by this one, which is still unsolved.
If $\mathfrak{m}_x \subseteq \text{rad}(\text{Ann}(N))$, then the direct image with respect to $\text{Spec}(\mathcal{O}_{X,x}/\text{Ann}(N)) \to X$ (this is then an affine morphism!) works.
EDIT: A stronger, but more natural question is the following: Let $U \subseteq X$ be an open subset. Does the restriction functor $\mathrm{res} : \text{Qcoh}(X) \to \text{Qcoh}(U)$ then have a right inverse? As I said, this is clear if $U \to X$ is a quasi-compact morphism. In general, a transfinite recursion on the length of an affine cover shows that it is enough to consider the case that $X$ is affine, but $U \subseteq X$ arbitrary. Then everything is OK when $U$ is quasi-compact. In general, you can write $U$ as a directed union of quasi-compact subsets of $X$. Does this help somehow?

EDIT 2: There is a right adjoint $\text{Qcoh}(U) \to \text{Qcoh}(X)$ to the restriction functor for abstract reasons. If $X$ is affine, it maps $N$ to the module associated to $\Gamma(U,N)$, where the latter is considered as a $\Gamma(X,\mathcal{O}_X)$-module. However, this fails to be an extension of $N$, i.e. the counit $\widetilde{\Gamma(U,N)}|_U \to N$ is not an isomorphism in general (I have an explicit counterexample). Thus, the desired right-inverse (if it exists) won't be a right adjoint.

EDIT 3: Here is a class of examples where extension works. Assume $X$ is affine and $U \subseteq X$ open can be written as $\coprod_{i \in I} U_i$ with $U_i$ affine. If $M \in \text{Qcoh}(U)$ and $M_i := \Gamma(U_i,M)$, then $\widetilde{\Gamma(U,M)} = \widetilde{\prod_{i \in I} M_i} \in \text{Qcoh}(X)$ is not an extension of $M$, but $\widetilde{\bigoplus_{i \in I} M_i}$ works.

EDIT 4: Let $U,X,M$ be as in EDIT 3. Then an application of Zorn's lemma shows that $M$ can be extended to a maximal open subset $U \subseteq V \subseteq X$. Then for every open affine $W \subseteq X$, either $V \cap W = W$ or $V \cap W$ is not quasi-compact. In particular, $V$ is ("very") dense. For example, $\mathbb{A}^{\infty} - \{0\} \subseteq \mathbb{A}^{\infty}$ is such a dense subset. But I don't know if extension works here. I've already looked at several examples, but have not found out anything.

EDIT 5: If $\dim(X)=0$, then extension works.
Please let me know if you have any ideas!
|
We use a simple argument to estimate the speed of traffic on a highway as a function of the density of cars. The idea is to simply calculate the maximum speed that traffic could go without supporting a growing traffic jam.
Jam dissipation argument
To estimate the speed of traffic as a function of density, we’ll calculate an upper bound and argue that actual traffic speeds must be described by an equation similar to that obtained. To derive our upper bound, we’ll consider what happens when a small traffic jam forms. If the speed of cars is such that the rate of exit from the jam is larger than the rate at which new cars enter the jam, then the jam will dissipate. On the other hand, if this doesn’t hold, the jam will grow, causing the speed to drop until a speed is obtained that allows the jam to dissipate. This sets the bound. Although we consider a jam to make the argument simple, what we really have in mind is any other sort of modest slow-down that may occur.
To begin, we introduce some definitions. (1) Let $\lambda$ be the density of cars in units of $[cars / mile]$. (2) Next we consider the rate of exit from a jam: Note that when traffic is stopped, a car cannot move until the car in front of it does. Because a human is driving the car, there is a slight delay between the time that one car moves and the car behind it moves. Let $T$ be this delay time in $[hours]$. (3) Let $v$ be the speed of traffic outside the jam in units of $[miles / hour]$.
With the above definitions, we now consider the rate at which cars exit a jam. This is the number of cars that can exit the jam per hour, which is simply
\begin{eqnarray} \tag{1} \label{1} r_{out} = \frac{1}{T}. \end{eqnarray} Next, the rate at which cars enter the jam is given by \begin{eqnarray} \tag{2} \label{2} r_{in} = \lambda v. \end{eqnarray} Requiring that $r_{out} > r_{in}$, we get \begin{eqnarray} \label{3} \tag{3} v < \frac{1}{\lambda T}. \end{eqnarray} This is our bound and estimate for the speed of traffic. We note that this form for $v$ follows from dimensional analysis, so the actual rate of traffic must have the same algebraic form as our upper bound (\ref{3}) -- it can differ by a constant factor in front, but should have the same $\lambda$ and $T$ dependence.

Plugging in numbers
I estimate $T$, the delay time between car movements, to be about one second, which in hours is
\begin{eqnarray} \tag{4} \label{4} T \approx 0.00028\ [hour]. \end{eqnarray} Next, for $\lambda$, note that a typical car is about 10 feet long and a mile is around 5000 feet, so the maximum for $\lambda$ is around $\lambda \lesssim 500\ [cars / mile]$. Consider a case where there is a car every 10 car lengths or so. In this case, the density will go down from the maximum by a factor of 10, or \begin{eqnarray}\tag{5} \label{5} \lambda \approx 50 \ [cars / mile]. \end{eqnarray} Plugging (\ref{4}) and (\ref{5}) into (\ref{3}), we obtain \begin{eqnarray} v \lesssim \frac{1}{0.00028 \times 50} \approx 70\ [miles / hour], \end{eqnarray} quite close to typical highway traffic speeds (and speed limits).
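For a quick numeric check of bound (\ref{3}) with the values above, here is a short Python sketch (my own, simply restating the arithmetic in the text):

```python
# Plugging the estimates into the bound v < 1/(lambda * T).
T = 1.0 / 3600.0         # reaction delay of ~1 second, converted to hours
lam = 50.0               # cars per mile (one car every ~10 car lengths)
v_max = 1.0 / (lam * T)  # upper bound on traffic speed, miles per hour
print(v_max)             # 72.0 -- close to typical highway speeds
```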
Final comments

The above bound clearly depends on what values you plug in — I picked numbers that seemed reasonable, but admit I adjusted them a bit until I got the final number I wanted for $v$. Anecdotally, I’ve found the result to work well at other densities: for example, when traffic is slow on the highway near my house, if I see that there is a car every 5 car lengths, the speed tends to be about $30\ [miles / hour]$ — so the scaling rule seems to work. The last thing I should note is that Wikipedia has an article outlining some of the extensive research literature on traffic flows — you can see that here.
|
Published by Martin Kleppmann on 26 Jan 2017.
Many distributed storage systems (e.g. Cassandra, Riak, HDFS, MongoDB, Kafka, …) use replication to make data durable. They are typically deployed in a “Just a Bunch of Disks” (JBOD) configuration – that is, without RAID to handle disk failure. If one of the disks on a node dies, that disk’s data is simply lost. To avoid losing data permanently, the database system keeps a copy (replica) of the data on some other disks on other nodes.
The most common replication factor is 3 – that is, the database keeps copies of every piece of data on three separate disks attached to three different computers. The reasoning goes something like this: disks only die once in a while, so if a disk dies, you have a bit of time to replace it, and then you still have two copies from which you can restore the data onto the new disk. The risk that a second disk dies before you restore the data is quite low, and the risk that all three disks die at the same time is so tiny that you’re more likely to get hit by an asteroid.
As a back-of-the-envelope calculation, if the probability of a single disk failing within some time period is 0.1% (to pick an arbitrary number), then the probability of two disks failing is \((0.001)^2 = 10^{-6}\), and the probability of all three disks failing is \((0.001)^3 = 10^{-9}\), or one in a billion. This calculation assumes that one disk’s failure is independent from another disk’s failure – which is not actually true, since for example disks from the same manufacturing batch may show correlated failures – but it’s a good enough approximation for our purposes.
So far the common wisdom. It sounds reasonable, but unfortunately it turns out to be untrue for many data storage systems. In this post I will show why.
If your database cluster really only consists of three machines, then the probability of all three of them dying simultaneously is indeed very low (ignoring correlated faults, such as the datacenter burning down). However, as you move to larger clusters, the probabilities change. The more nodes and disks you have in your cluster, the more likely it is that you lose data.
This is a counter-intuitive idea. “Surely,” you think, “every piece of data is still replicated on three disks. The probability of a disk dying doesn’t depend on the size of the cluster. So why should the size of the cluster matter?” But I calculated the probabilities and drew a graph, and it looked like this:
To be clear, this isn’t the probability of a single node failing – this is the probability of permanently losing all three replicas of some piece of data, so restoring from backup (if you have one) is the only remaining way to recover that data. The bigger your cluster, the more likely you are to be haemorrhaging data. This is probably not what you intended when you decided to pay for a replication factor of 3.
The y axis on that graph is a bit arbitrary, and depends on a lot of assumptions, but the direction of the line is scary. Under the assumption that a node has a 0.1% chance of dying within some time period, the graph shows that in an 8,000-node cluster, the chance of permanently losing all three replicas of some piece of data (within the same time period) is about 0.2%. Yes, you read that correctly: the risk of losing all three copies of some data is twice as great as the risk of losing a single node! What is the point of all this replication again?
The intuition behind this graph is as follows: in an 8,000-node cluster it’s almost certain that a few nodes are always dead at any given moment. That is normally not a problem: a certain rate of churn and node replacement is expected and a part of routine maintenance. However, if you get unlucky, there is some piece of data whose three replicas just happen to be on three of those nodes that have died – and if this is the case, that piece of data is gone forever. The data that is lost is only a small fraction of the total dataset in the cluster, but still that’s not great: when you use a replication factor of 3, you generally mean “I really don’t want to lose this data”, not “I don’t mind occasionally losing a bit of this data, as long as it’s not too much”. Maybe that piece of lost data was a particularly important one.
The probability that all three replicas are on dead nodes depends crucially on the algorithm that the system uses to assign data to replicas. The graph above is calculated under the assumption that the data is split into a number of partitions (shards), and that each partition is stored on three randomly chosen nodes (or pseudo-randomly with a hash function). This is the case with consistent hashing, used in Cassandra and Riak, among others (as far as I know). With other systems I’m not sure how the replica assignment works, so I’d appreciate any insights from people who know about the internals of various storage systems.
Let me show you how I calculated that graph above, using a probabilistic model of a replicated database.
Let’s assume that the probability of losing an individual node is \(p=P(\text{node loss})\). I am going to ignore time in this model, and simply look at the probability of failure in some arbitrary time period. For example, we could assume that \(p=0.001\) is the probability of a node failing within a given day, which would make sense if it takes about a day to replace the node and restore the lost data onto new disks. For simplicity I won’t distinguish between node failure and disk failure, and I will consider only permanent failures (ignoring crashes where the node comes back again after a reboot).
Let \(n\) be the number of nodes in the cluster. Then the probability that \(f\) out of \(n\) nodes have failed (assuming that failures are independent) is given by the binomial distribution:
\[ P(f \text{ nodes failed}) = \binom{n}{f} \, p^f \, (1-p)^{n-f} \]
The term \(p^f\) is the probability that \(f\) nodes have failed, the term \((1-p)^{n-f}\) is the probability that the remaining \(n-f\) have not failed, and \(\binom{n}{f}\) is the number of different ways of picking \(f\) out of \(n\) nodes. \(\binom{n}{f}\) is pronounced “n choose f”, and it is defined as:
\[ \binom{n}{f} = \frac{n!}{f! \; (n-f)!} \]
Let \(r\) be the replication factor (typically \(r=3\)). If we assume that \(f\) out of \(n\) nodes have failed, what is the probability that a particular partition has all \(r\) replicas on failed nodes?
Well, in a system that uses consistent hashing, each partition is assigned to nodes independently and randomly (or pseudo-randomly). For a given partition, there are \(\binom{n}{r}\) different ways of assigning the \(r\) replicas to nodes, and these assignments are all equally likely to occur. Moreover, there are \(\binom{f}{r}\) different ways of choosing \(r\) replicas out of \(f\) failed nodes – these are the ways in which all \(r\) replicas can be assigned to failed nodes. We then work out the fraction of the assignments that result in all replicas having failed:
\[ P(\text{partition lost} \mid f \text{ nodes failed}) = \frac{\binom{f}{r}}{\binom{n}{r}} = \frac{f! \; (n-r)!}{(f-r)! \; n!} \]
(The vertical bar after “partition lost” is pronounced “given that”, and it indicates a conditional probability: the probability is given under the assumption that \(f\) nodes have failed.)
So that’s the probability that all replicas of one particular partition have been lost. What about a cluster with \(k\) partitions? If one or more partitions have been lost, we have lost data. Thus, in order not to lose data, we require that all \(k\) partitions are not lost:
\begin{align} P(\text{data loss} \mid f \text{ nodes failed}) &= 1 - P(\text{partition not lost} \mid f \text{ nodes failed})^k \\ &= 1 - \left( 1 - \frac{f! \; (n-r)!}{(f-r)! \; n!} \right)^k \end{align}
Cassandra and Riak call partitions “vnodes” instead, but they are the same thing. In general, the number of partitions \(k\) is independent from the number of nodes \(n\). In the case of Cassandra, there is usually a fixed number of partitions per node; the default is \(k=256\,n\) (configured by the num_tokens parameter), and this is also what I assumed for the graph above. In Riak, the number of partitions is fixed when you create the cluster, but generally more nodes also mean more partitions.
With all of this in place, we can now work out the probability of losing one or more partitions in a cluster of size \(n\) with a replication factor of \(r\). If the number of failures \(f\) is less than the replication factor, we can be sure that no data is lost. Thus, we need to add up the probabilities for all possible numbers of failures \(f\) with \(r \le f \le n\):
\begin{align} P(\text{data loss}) &= \sum_{f=r}^{n} \; P(\text{data loss} \;\cap\; f \text{ nodes failed}) \\ &= \sum_{f=r}^{n} \; P(f \text{ nodes failed}) \; P(\text{data loss} \mid f \text{ nodes failed}) \\ &= \sum_{f=r}^{n} \binom{n}{f} \, p^f \, (1-p)^{n-f} \left[ 1 - \left( 1 - \frac{f! \; (n-r)!}{(f-r)! \; n!} \right)^k \right] \end{align}
That is a bit of a mouthful, but I think it’s accurate. And if you plug in \(r=3\), \(p=0.001\) and \(k=256\,n\), and vary \(n\) between 3 and 10,000, then you get the graph above. I wrote a little Ruby program to do the calculation.
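In case it helps, here is a sketch of that calculation in Python (the author’s program was in Ruby; this reimplementation is my own and assumes only the formula above):

```python
# Sketch of the data-loss formula. Logarithms of binomial coefficients
# are used so that the terms don't overflow for large n.
from math import exp, lgamma, log, log1p

def log_comb(n, k):
    # log of "n choose k" via the log-gamma function
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def p_data_loss(n, p=0.001, r=3, partitions_per_node=256):
    k = partitions_per_node * n
    total = 0.0
    for f in range(r, n + 1):
        # P(f nodes failed): binomial distribution
        log_p_f = log_comb(n, f) + f * log(p) + (n - f) * log1p(-p)
        # P(a given partition lost | f nodes failed) = C(f,r) / C(n,r)
        p_partition_lost = exp(log_comb(f, r) - log_comb(n, r))
        total += exp(log_p_f) * (1 - (1 - p_partition_lost) ** k)
    return total

print(p_data_loss(8000))  # about 0.002, i.e. the ~0.2% quoted above
```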
We can get a simpler approximation using the union bound:
\begin{align} P(\text{data loss}) &= P(\ge\text{ 1 partition lost}) \\ &= P\left( \bigcup_{i=1}^k \text{partition } i \text{ lost} \right) \\ &\le k\, P(\text{partition lost}) = k\, p^r \end{align}
Even though one partition failing is not independent from another partition failing, this approximation still applies. And it seems to match the exact result quite closely: in the graph, the data loss probability looks like a straight line, proportional to the number of nodes. The approximation says that the probability is proportional to the number of partitions, which is equivalent since we assumed a fixed 256 partitions per node.
Moreover, if we plug in the numbers for 10,000 nodes into the approximation, we get \(P(\text{data loss}) \le 256 \cdot 10^4 \cdot (10^{-3})^3 = 0.00256\), which matches the result from the Ruby program very closely.
Is this a problem in practice? I don’t know. Mostly I think it’s an interesting and counter-intuitive phenomenon. I’ve heard rumours that it is causing real data loss at companies with large database clusters, but I’ve not seen the issue documented anywhere. If you’re aware of any discussions on this topic, please point me at them.
The calculation indicates that in order to reduce the probability of data loss, you can reduce the number of partitions or increase the replication factor. Using more replicas costs more, so it’s not ideal for large clusters that are already expensive. However, the number of partitions presents an interesting trade-off. Cassandra originally used one partition per node, but then switched to 256 partitions per node a few years ago in order to achieve better load distribution and more efficient rebalancing. The downside, as we can see from this calculation, is a much higher probability of losing at least one of the partitions.
I think it’s probably possible to devise replica assignment algorithms in which the probability of data loss does not grow with the cluster size, or at least does not grow as fast, but which nevertheless have good load distribution and rebalancing properties. That would be an interesting area to explore further. In that context, my colleague Stephan pointed out that the expected rate of data loss is constant in a cluster of a particular size, independent of the replica assignment algorithm – in other words, you can choose between a high probability of losing a small amount of data, and a low probability of losing a large amount of data! Is the latter better?
You need fairly large clusters before this effect really shows up, but clusters of thousands of nodes are used by various large companies, so I’d be interested to hear from people with operational experience at such scale. If the probability of permanently losing data in a 10,000 node cluster is really 0.25% per day, that would mean a 60% chance of losing data in a year. That’s way higher than the “one in a billion” getting-hit-by-an-asteroid probability that I talked about at the start.
Are the designers of distributed data systems aware of this issue? If I got this right, it’s something that should be taken into account when designing replication schemes. Hopefully this blog post will raise some awareness of the fact that just because you have three replicas you’re not automatically guaranteed to be safe.
|
A First Look at Quantum Probability, Part 1
In this article and the next, I'd like to share some ideas from the world of quantum probability.* The word "quantum" is pretty loaded, but don't let that scare you. We'll take a first—not second or third—look at the subject, and the only prerequisites will be linear algebra and basic probability. In fact, I like to think of quantum probability as another name for "linear algebra + probability," so this mini-series will explore the mathematics, rather than the physics, of the subject.**
In today's post, we'll motivate the discussion by saying a few words about (classical) probability. In particular, let's spend a few moments thinking about the following:
Marginal probability doesn't have memory.

What do I mean? We'll start with some basic definitions. Then I'll share an example that illustrates this idea.
A probability distribution (or simply, distribution) on a finite set $X$ is a function $p \colon X\to [0,1]$ satisfying $\sum_x p(x) = 1$. I'll use the term joint probability distribution to refer to a distribution on a Cartesian product of finite sets, i.e. a function $p\colon X\times Y\to [0,1]$ satisfying $\sum_{(x,y)}p(x,y)=1$. Every joint distribution defines a marginal probability distribution on one of the sets by summing probabilities over the other set. For instance, the marginal distribution $p_X\colon X\to [0,1]$ on $X$ is defined by $p_X(x)=\sum_yp(x,y)$, in which the variable $y$ is summed, or "integrated," out. It's this very process of summing or integrating out that causes information to be lost. In other words, marginalizing loses information. It doesn't remember what was summed away!
I'll illustrate this with a simple example. To do so, I need to give you some finite sets $X$ and $Y$ and a probability distribution on them.
Consider the set $S$ of all bitstrings of length three. Every bitstring begins with either a 0 or a 1, so we can think of $S$ as the Cartesian product of the set $X=\{0,1\}$ with the set of bitstrings of length two, $Y=\{00,11,01,10\}$. It will be convenient to refer to elements in $X$ as prefixes and to elements in $Y$ as suffixes.
Now we need a probability distribution on $S\cong X\times Y$ to work with. Let's suppose the probability of 011 is $\frac{3}{7}$, the probability of 110 is $\frac{2}{7}$, the probability of each of 000 and 101 is $\frac{1}{7}$, and the probability of the other bitstrings is zero. So you can imagine a seven-sided die with three faces labeled 011, two faces labeled 110, one face labeled 000, and one face labeled 101.
One way to visualize this joint distribution is a weighted bipartite graph. We've explored this idea before. The set of prefixes and suffixes define the two sets of vertices (ergo, bipartite). An edge connects a prefix and a suffix if their concatenation is one of the samples drawn. That edge is labeled with the corresponding probability (ergo, weighted). Yet another way to visualize the joint distribution is as a table.
Marginal probabilities are easy to compute from a table: just sum along a row or column! For example, the probability of the suffix 00 is $p(00) = \frac{1}{7} + 0$.
Now here's the point I want to emphasize:
Marginal probability is forgetful. The marginal probability of the prefix 0 is $\frac{4}{7}$, but that number doesn't tell us that of the possible suffixes following 0, one is 00, three are 11, and none are 01 or 10. The marginal probability of the prefix 1 is $\frac{3}{7}$, but that number doesn't tell us that of the possible suffixes following 1, one is 01, two are 10, and none are 00 or 11.
In other words, summing over the suffixes in $Y$ has caused us to lose all information from $Y$ in a totally irreparable way. It cannot be recovered. There's no going back. This is what I meant by "marginal probability doesn't have memory." It's just a feature of probability.
And this is where the fun begins.
I'm now going to introduce a different way to compute the marginals from a joint distribution. It can be thought of as "marginal probability with memory." When computing marginal probabilities in this new way, you'll have ready access to the information lost in the old way!
The ideas are simple, though they might feel unmotivated at first. Bear with me. I want to show you something nice. Afterwards I'll explain what's going on.
Here's a better way...
Again, we'll start with the joint distribution. As we know, it can be viewed as a $2\times 4$ table. Here it is again:
You see I've made some small cosmetic changes. The $2\times 4$ table is now a $2\times 4$ matrix, and I've judiciously added some square roots. You may ignore them if you like. I've included them for math reasons that we needn't worry about now. Lastly, I've given the matrix a name, $M$.
Let's now multiply $M$ by its transpose:
This matrix is very interesting. For one, it's a $4\times 4$ matrix. The set $Y$ has four elements in it, and we can identify them with the rows/columns of this matrix. This sets the stage for another observation: the diagonal of $M^\top M$ contains the marginal probability distribution on $Y$.
So we've just computed marginal probability by multiplying a matrix by its transpose.
Voila.
But wait, there's more.
The off-diagonals of $M^\top M$ are also interesting.
Some are non-zero. Let's not focus on their values for now (although they convey rich information). Let's just appreciate their existence: the fact that $M^\top M$ has non-zero off-diagonals means that it has interesting eigenvectors. It's a rank 2 matrix, so it has two eigenvectors with non-zero eigenvalues. Here they are:
Interesting indeed!
The square of the entries of these eigenvectors define conditional probability distributions on the set of suffixes, $Y$. For instance, the first eigenvector defines a probability distribution on $Y$, conditioned on $0$ being the prefix of a bitstring. Concretely: given that 0 is the prefix of a bitstring $s=(x,y)\in X\times Y$, the suffix $y$ will be 00 with probability $\left(\sqrt{\frac{1}{4}}\right)^2=\frac{1}{4}$, it will be 11 with probability $\left(\sqrt{\frac{3}{4}}\right)^2=\frac{3}{4}$, and it will be 01 and 10 each with probability $0^2=0$. This is the information contained in the entries of the first eigenvector. The second eigenvector likewise defines a probability distribution on $Y$, conditioned on the prefix $x=1$. The information contained in the eigenvectors is precisely the information that's destroyed after computing marginal probability in the usual way.
But we're not done! We multiplied $M$ by its transpose on the left. If you switch the order, you'll see that the $2\times 2$ matrix $MM^\top$ has the marginal distribution on the set of prefixes $X$ along its diagonal, and that its eigenvectors define conditional probabilities on $X$ after squaring the entries.
So $M^\top M$ has the marginals of $Y$ along its diagonal. And the existence of non-zero off-diagonals imply that its eigenvectors contain conditional probabilistic information; $MM^\top$ contains the same for $X$. Together, these two matrices recover all the information from the original joint distribution—something that can't be done when marginalizing in the usual way.
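As a concrete check, here is a small numerical sketch of the example (my own code, not from the post; I order the suffixes as 00, 11, 01, 10):

```python
# Marginals and conditionals from M^T M and M M^T for the bitstring example.
import numpy as np

# Joint distribution p(x, y): rows are prefixes 0 and 1,
# columns are suffixes 00, 11, 01, 10.
p = np.array([[1/7, 3/7, 0, 0],
              [0, 0, 1/7, 2/7]])
M = np.sqrt(p)

MtM = M.T @ M
MMt = M @ M.T
print(np.diag(MtM))  # marginals on Y: [1/7, 3/7, 1/7, 2/7]
print(np.diag(MMt))  # marginals on X: [4/7, 3/7]

# Eigenvectors of M^T M with nonzero eigenvalue; squaring the entries
# gives the conditional distributions on Y given each prefix.
vals, vecs = np.linalg.eigh(MtM)
print(vecs[:, -2:].T ** 2)  # rows: [0, 0, 1/3, 2/3] and [1/4, 3/4, 0, 0]
```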
Excellent.
So, what's going on?
The matrices $M^\top M$ and $MM^\top$ are the quantum versions of marginal probability distributions. What does that mean?
I'll explain next time.
But here's a sneak peek:
The quantum version of a probability distribution is something called a density operator. The quantum version of marginalizing corresponds to "reducing" that operator to a subsystem. This reduction is a construction in linear algebra called the partial trace. In Part 2 of this miniseries, I'll begin by explaining the partial trace. Then I'll explain what I mean by "quantum version." Along the way, we'll unwind the basics of quantum probability theory.
Until then!
*This mini-series is based on recent talks I gave at Smith College, the CUNY Graduate Center, and the EDGE/PRiME colloquium at Pomona College.
**When I started Math3ma in 2015, I wanted some aspect of the site to reflect my ever-present admiration for physics. This is why the logo is an "M" surrounded by little electrons. Four years later, I'm happy to (finally!) write about mathematical physics.
Sincere thanks to John Terilla for inspiring and providing feedback for this mini-series.
|
How do I generate $1000$ points $\left(x, y, z\right)$ and make sure they land on a sphere whose center is $\left(0, 0, 0\right)$ and whose diameter is $20$? Simply put, how do I manipulate a point's coordinates so that the point lies on the sphere's "surface"?
Use the fact that if you cut a sphere of a given radius with two parallel planes, the area of the strip of spherical surface between the planes depends only on the distance between the planes, not on where they cut the sphere. Thus, you can get a uniform distribution on the surface using two uniformly distributed random variables:
a $z$-coordinate, which in your case should be chosen between $-10$ and $10$; and an angle in $[0,2\pi)$ corresponding to a longitude.
From those it’s straightforward to generate the $x$- and $y$-coordinates.
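A minimal sketch of this construction in Python (my own code; the radius 10 and the use of NumPy are assumptions, not part of the original answer):

```python
# Archimedes' method: uniform z and uniform longitude give uniform
# points on the sphere of radius 10.
import numpy as np

rng = np.random.default_rng()
n = 1000
z = rng.uniform(-10.0, 10.0, n)             # uniform z-coordinate
phi = rng.uniform(0.0, 2.0 * np.pi, n)      # uniform longitude
rho = np.sqrt(100.0 - z**2)                 # radius of the circle at height z
x, y = rho * np.cos(phi), rho * np.sin(phi)
points = np.column_stack([x, y, z])         # 1000 points on the sphere
```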
Using a Gaussian distribution for all three coordinates of your point will ensure a uniform distribution on the surface of the sphere. You should proceed as follows:

1. Generate three random numbers $x, y, z$ using a Gaussian distribution.
2. Multiply each number by $1/\sqrt{x^2+y^2+z^2}$ (i.e., normalise). You should handle what happens if $x=y=z=0$.
3. Multiply each number by the radius of your sphere.
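Here is a minimal sketch of that recipe (my own code, assuming the radius-10 sphere from the question):

```python
# Normalised Gaussian vectors are uniform on the sphere.
import numpy as np

rng = np.random.default_rng()

def point_on_sphere(radius=10.0):
    v = rng.standard_normal(3)   # step 1: three Gaussian coordinates
    n = np.linalg.norm(v)
    while n == 0.0:              # step 2's degenerate case: just redraw
        v = rng.standard_normal(3)
        n = np.linalg.norm(v)
    return radius * v / n        # steps 2 and 3: normalise, then scale

points = np.array([point_on_sphere() for _ in range(1000)])
```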
Here is a simple but less efficient way:
Generate points uniformly $x \in [-10,10]^3$ and reject if $\|x\| = 0$ (which should rarely happen) or $\|x\| > 10$ (which should happen with probability ${20^3 -{4 \over 3} \pi 10^3 \over 20^3} = 1 - {\pi \over 6} \approx 48\%$). Otherwise let $y = {10 \over \|x\|} x$. Then $y$ will be distributed uniformly on the surface of the sphere of radius $10$.
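A sketch of this rejection method (again my own code, for the radius-10 sphere):

```python
# Uniform in the cube, reject outside the ball, project to the sphere.
import numpy as np

rng = np.random.default_rng()

def rejection_point(radius=10.0):
    while True:
        x = rng.uniform(-radius, radius, 3)  # uniform in [-10, 10]^3
        n = np.linalg.norm(x)
        if 0.0 < n <= radius:                # accept roughly 52% of draws
            return (radius / n) * x          # radial projection onto the sphere
```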
In addition to Brian Scott's excellent and clever answer, here's another, more straightforward way (in case you want to approach it with a geographical intuition): From two random variables $u_1, u_2$, distributed uniformly on the interval $[0, 1]$, generate (in radians) the latitude
$$ \lambda = \arccos (2u_1-1)-\frac{\pi}{2} $$
and the longitude
$$ \phi = 2\pi u_2 $$
Then compute the rectangular coordinates accordingly:
$$ x = \cos\lambda\cos\phi $$ $$ y = \cos\lambda\sin\phi $$ $$ z = \sin\lambda $$
ETA (thanks to Tanner Strunk—see comments): This will give coordinates of points on the unit sphere. To have them land on the sphere with diameter $20$ (and therefore radius $10$), simply multiply each by $10$.
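A sketch of the latitude/longitude recipe, scaled to radius 10 (my own code, not part of the original answer):

```python
# One random point via the latitude/longitude formulas above.
import numpy as np

rng = np.random.default_rng()
u1, u2 = rng.uniform(size=2)
lam = np.arccos(2.0 * u1 - 1.0) - np.pi / 2.0  # latitude
phi = 2.0 * np.pi * u2                         # longitude
x = 10.0 * np.cos(lam) * np.cos(phi)
y = 10.0 * np.cos(lam) * np.sin(phi)
z = 10.0 * np.sin(lam)
```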
Same way as on a real sphere, but $(x,y,z) $ multiplied by $i.$
Wolfram Mathworld provides a methodology for randomly picking a point on a sphere:
To obtain points such that any small area on the sphere is expected to contain the same number of points, choose $u$ and $v$ to be random variates on $[0,1]$. Then: $$\begin{array}{ll}\theta=2\pi u\\ \varphi= \arccos(2v - 1)\end{array}$$ gives the spherical coordinates for a set of points which are uniformly distributed over $\mathbb{S}^2$.
|
Let $\theta\gt 0$ and $X_1, ..., X_n$ be independently and identically distributed with probability density function $f_\theta(x) = \frac{1}{2\theta} \chi_{x\in[-\theta,\theta]}$, where $\chi$ is the indicator function. What is the maximum likelihood estimator for $\theta$?
This came up while studying for an exam and I would like some verification on my work:
I compute the product likelihood function first: $p_x(\theta)=\prod_{i=1}^nf_\theta(x_i)=\frac{1}{(2\theta)^n}\prod_{i=1}^n\chi_{x_i\in[-\theta,\theta]}=\frac{1}{(2\theta)^n}\chi_{\theta\ge \max|x_i|}$.
Now in the case of $\max|x_i|=0$ the function has no maximizer. This can be ignored since the case has probability $0$. On the other hand, if $\max|x_i|\gt0$, we see that $p$ vanishes for $\theta \lt \max|x_i|$ and is strictly decreasing in $\theta$ for $\theta \ge \max|x_i|$, and therefore $\hat\theta=\max|x_i|$ is the unique maximizer and hence the desired estimator.
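A quick simulation sketch supporting this (my own code; $\theta = 2$ and $n = 100$ are assumed example values, not from the question):

```python
# The log-likelihood is -inf below max|x_i| and strictly decreasing above it.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=100)  # i.i.d. draws from f_theta with theta = 2
theta_hat = np.abs(x).max()

def log_likelihood(theta):
    # log of (2*theta)^(-n) times the indicator theta >= max|x_i|
    return -np.inf if theta < theta_hat else -x.size * np.log(2.0 * theta)

for t in [0.9 * theta_hat, theta_hat, 1.1 * theta_hat, 2.0 * theta_hat]:
    print(f"theta = {t:.4f}, log-likelihood = {log_likelihood(t)}")
```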
|
To be a knot in the first place, it essentially needs a well-defined regular neighborhood, sort of as proof of its tameness. If you think of a knot as being in the one-skeleton of a triangulation of $\mathbb{R}^3$, then if you want a triangulation of the complement without too many more simplices, you can remove a regular neighborhood of the knot -- after all, this is a deformation retract of $\mathbb{R}^3-K$. It is also possible to triangulate $\mathbb{R}^3-K$, but it takes infinitely many more simplices!
In any case, the van Kampen theorem applies to pairs of subcomplexes whose intersection is a path connected subcomplex. The proof involves taking a neighborhood of the intersection that deformation retracts onto the intersection, where the neighborhood is formed from neighborhoods close to faces of incident simplices, then adding this neighborhood to the pair of subcomplexes. (This is very similar to how subcomplexes of a complex form a "good pair," in Hatcher's terminology.)
There is a cell structure of the torus minus $K$ with infinitely many cells, and one can extend this to a cell structure of $\mathbb{R}^3-K$. If $X_1,X_2$ are closed sets with $X_1\cup X_2=\mathbb{R}^3$ and $X_1\cap X_2$ being the torus, then $X_1-K$ and $X_2-K$ inherit the cell structure, and the van Kampen theorem for complexes applies.
(Note: if you took the solid torus minus $K$ and the closure of the complement of this, as you suggest, their union would be all of $\mathbb{R}^3$! This is like the example of covering $S^1$ by $[0,1/2]$ and $(1/2,1]$ through the quotient map $[0,1]\to S^1$. These are two simply connected subsets of $S^1$ that intersect at a point, but van Kampen, if it were to apply, would give $\pi_1(S^1)=1$. This illustrates what open sets are meant to handle, but also $(1/2,1]$ is not a subcomplex of $S^1$.)
|
Intuitively, I would expect the Taylor expansion around $x_0$ of a polynomial in $(x-x_0)$ to be identical to the polynomial. However, I cannot seem to show that/whether this is the case:
For a finite power series $f(x) = \sum_{i=0}^{n} a_i (x-x_0)^i$ the $k$-th derivative is given by $$\frac{d^k f(x)}{dx^k} = \sum_{j=k}^n \frac{j!}{(j-k)!} a_j (x-x_0)^{j-k}$$
To get the Taylor series, I would write $$ F(x) = \sum_{i=0}^{\infty} \frac{(x-x_0)^i}{i!} \frac{d^i f(x_0)}{dx^i} $$ and substitute the expression for the $i$-th derivative of $f(x)$: $$ F(x) = \sum_{i=0}^{\infty} \sum_{j=i}^{n} \frac{j!}{(j-i)!i!} (x_0 - x_0)^{j-i} a_j (x-x_0)^i $$
I am confused by the $(x_0 - x_0)$ factor that appears and the binomial coefficient that shows up. Is my intuition wrong?
|
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE to be previewed in the built-in PDF previewer in Firefox:
\documentclass[handout]{beamer}
\usepackage{pgfpages}
\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]
\begin{document}
\begin{frame}
\[\bigcup_n \sum_n\]
\[\underbrace{aaaaaa}_{bbb}\]
\end{frame}
\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
|
What I am seeking is an explanation of why the relationship here is not \$V_2/R_2\$ but is instead given as \$(V_1-V_2)/R_2\$.
Because Ohm's law is actually:

$$\Delta V=RI \iff V_a-V_b=RI_{ab}$$

where \$V_a\$ is the voltage at one terminal of the resistor, and \$V_b\$ is the voltage at the other terminal. I just added the little index to the current so that you know its direction.
The delta is often omitted for simplicity, and this usually leads to a lot of confusion from beginners.
I don't understand the difficulty, at all. With currents spilling outward on the left and currents spilling inward on the right, I get:
$$\begin{align*} \frac{V_1}{R_1} + \frac{V_1}{R_2} &= \frac{0\:\textrm{V}}{R_1} + \frac{V_2}{R_2} + i_s \\ \\ \frac{V_1}{R_1} + \frac{V_1}{R_2} &= \frac{V_2}{R_2} + i_s \\ \\ \frac{V_1}{R_1} + \frac{V_1}{R_2} - \frac{V_2}{R_2} &= i_s \\ \\ \frac{V_1}{R_1} + \frac{V_1-V_2}{R_2} &= i_s \end{align*}$$
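As a quick symbolic check of that node equation (my own sketch; the component values R1 = 2, R2 = 4, i_s = 3/2 are made-up examples, not from the post):

```python
# Solve the node-1 KCL equation for V1 with SymPy.
import sympy as sp

V1, V2 = sp.symbols("V1 V2", real=True)
R1, R2, i_s = 2, 4, sp.Rational(3, 2)

# KCL at node 1: current to ground through R1 plus current toward
# node 2 through R2 equals the source current i_s.
eq = sp.Eq(V1 / R1 + (V1 - V2) / R2, i_s)
print(sp.solve(eq, V1)[0])  # V1 in terms of V2 (here: V2/3 + 2)
```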
I'm really flummoxed why you can't get there. But you haven't really exposed your thinking much, either.
As an aside, another way of writing the second node is:
$$\begin{align*} \frac{V_2}{R_2} + \frac{1}{L} \int V_2\:\textrm{d}t &= \frac{V_1}{R_2} \\ \\ \frac{1}{L} \int V_2\:\textrm{d}t &= \frac{V_1-V_2}{R_2} \end{align*}$$
But, assuming \$i_s\$ is a constant current source, your circuit can also be reduced to a voltage source with a series resistance into an inductor. So a DC voltage and an R+L load.
|
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
Why is the evolute of an involute of a curve $\Gamma$ the curve $\Gamma$ itself? Definition from wiki: The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior derivative: why would someone come up with something like that? I mean it looks really similar to the formula for the covariant derivative of a tensor along a vector field, but otherwise I don't see why it would be something natural to come up with. The only place I have used it is in deriving the Poisson bracket of two one-forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you at a place different from $p$. And upto second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$, the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$ and on the truncation edge it's $\omega([X, Y])$
Gently taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitisimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$, denoted as $(X, s) \mapsto \nabla_X s$, which is (a) $C^\infty(M)$-linear in the first factor and (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined on the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).

Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok at first, but basically think of it as currying: making a bilinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle-homomorphism $TM \to E$, but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle-homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined on $X$ and not $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the 0 level exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is, a 0-level exterior derivative of a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading about this thing called the Poisson bracket. With the Poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\dots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\dots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but it has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and you're done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute-force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say
|
Answer
$\theta = 30 ^{\circ}$
Work Step by Step
1. $x=1.73\approx \sqrt 3$; $y=1$
2. $r=\sqrt {(1)^{2}+(\sqrt 3)^{2}} = \sqrt 4 = 2$
3. $\sin\theta = \frac{y}{r}=\frac{1}{2}$, so $\theta = \sin^{-1}(0.5)=30^{\circ}$
|
Spintronics (a portmanteau meaning "spin transport electronics"[1][2][3]), also known as spinelectronics or fluxtronic, is an emerging technology exploiting both the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices.
Spintronics differs from the older magnetoelectronics, in that the spins are not only manipulated by magnetic fields, but also by electrical fields.
Spintronics emerged from discoveries in the 1980s concerning spin-dependent electron transport phenomena in solid-state devices. This includes the observation of spin-polarized electron injection from a ferromagnetic metal to a normal metal by Johnson and Silsbee (1985),[4] and the discovery of giant magnetoresistance independently by Albert Fert et al.[5] and Peter Grünberg et al. (1988).[6] The origins of spintronics can be traced back even further to the ferromagnet/superconductor tunneling experiments pioneered by Meservey and Tedrow, and initial experiments on magnetic tunnel junctions by Julliere in the 1970s.[7] The use of semiconductors for spintronics can be traced back at least as far as the theoretical proposal of a spin field-effect-transistor by Datta and Das in 1990.[8]
The spin of the electron is an angular momentum intrinsic to the electron that is separate from the angular momentum due to its orbital motion. The magnitude of the projection of the electron's spin along an arbitrary axis is \(\frac{1}{2}\hbar\), implying that the electron acts as a fermion by the spin-statistics theorem. Like orbital angular momentum, the spin has an associated magnetic moment, the magnitude of which is expressed as \(\mu = \frac{\sqrt{3}}{2}\frac{q}{m_e}\hbar\), where \(q\) is the electron charge and \(m_e\) the electron mass.
In a solid the spins of many electrons can act together to affect the magnetic and electronic properties of a material, for example endowing a material with a permanent magnetic moment as in a ferromagnet.
In many materials, electron spins are equally present in both the up and the down state, and no transport properties are dependent on spin. A spintronic device requires generation or manipulation of a spin-polarized population of electrons, resulting in an excess of spin up or spin down electrons. The polarization of any spin-dependent property \(X\) can be written as \(P_X = \frac{X_\uparrow - X_\downarrow}{X_\uparrow + X_\downarrow}\), where \(X_\uparrow\) and \(X_\downarrow\) denote the value of the property for spin up and spin down electrons respectively.
A net spin polarization can be achieved either through creating an equilibrium energy splitting between spin up and spin down, such as putting a material in a large magnetic field (Zeeman effect) or the exchange energy present in a ferromagnet, or by forcing the system out of equilibrium. The period of time that such a non-equilibrium population can be maintained is known as the spin lifetime, \(\tau\). In a diffusive conductor, a spin diffusion length \(\lambda\) can also be defined as the distance over which a non-equilibrium spin population can propagate. Spin lifetimes of conduction electrons in metals are relatively short (typically less than 1 nanosecond), and a great deal of research in the field is devoted to extending this lifetime to technologically relevant timescales.
There are many mechanisms of decay for a spin polarized population, but they can be broadly classified as spin-flip scattering and spin dephasing. Spin-flip scattering is a process inside a solid that does not conserve spin, and can therefore send an incoming spin up state into an outgoing spin down state. Spin dephasing is the process wherein a population of electrons with a common spin state becomes less polarized over time due to different rates of electron spin precession. In confined structures, spin dephasing can be suppressed, leading to spin lifetimes of milliseconds in semiconductor quantum dots at low temperatures.
By studying new materials and decay mechanisms, researchers hope to improve the performance of practical devices as well as study more fundamental problems in condensed matter physics.
The simplest method of generating a spin-polarised current in a metal is to pass the current through a ferromagnetic material. The most common applications of this effect involve giant magnetoresistance (GMR) devices. A typical GMR device consists of at least two layers of ferromagnetic materials separated by a spacer layer. When the two magnetization vectors of the ferromagnetic layers are aligned, the electrical resistance will be lower (so a higher current flows at constant voltage) than if the ferromagnetic layers are anti-aligned. This constitutes a magnetic field sensor.
Two variants of GMR have been applied in devices: (1) current-in-plane (CIP), where the electric current flows parallel to the layers and (2) current-perpendicular-to-plane (CPP), where the electric current flows in a direction perpendicular to the layers.
Other metals-based spintronics devices:
Non-volatile spin-logic devices to enable scaling beyond the year 2025[9] are being extensively studied. Spin-transfer torque-based logic devices that use spins and magnets for information processing have been proposed[10] and are being extensively studied at Intel.[11] These devices are now part of the ITRS exploratory road map and have potential for inclusion in future computers. Logic-in memory applications are already in the development stage at Crocus[12] and NEC.[13]
Read heads of modern hard drives are based on the GMR or TMR effect.
Motorola has developed a first-generation 256 kb magnetoresistive random-access memory (MRAM) based on a single magnetic tunnel junction and a single transistor and which has a read/write cycle of under 50 nanoseconds.[14] (Everspin, Motorola's spin-off, has since developed a 4 Mb version[15]). There are two second-generation MRAM techniques currently in development: thermal-assisted switching (TAS)[16] which is being developed by Crocus Technology, and spin-transfer torque (STT) on which Crocus, Hynix, IBM, and several other companies are working.[17]
Another design in development, called racetrack memory, encodes information in the direction of magnetization between domain walls of a ferromagnetic metal wire.
There are magnetic sensors using the GMR effect.
In 2012, IBM scientists mapped the creation of persistent spin helices of synchronized electrons persisting for more than a nanosecond. This is a 30-fold increase from the previously observed results and is longer than the duration of a modern processor clock cycle, which opens new paths to investigate for using electron spins for information processing.[18]
Much recent research has focused on the study of dilute ferromagnetism in doped semiconductor materials. In recent years, Dilute magnetic oxides (DMOs) including ZnO based DMOs and TiO2-based DMOs have been the subject of numerous experimental and computational investigations.[19][20] Non-oxide ferromagnetic semiconductor sources (like manganese-doped gallium arsenide GaMnAs),[21] increase the interface resistance with a tunnel barrier,[22] or using hot-electron injection.[23]
Spin detection in semiconductors is another challenge, met with the following techniques:
The latter technique was used to overcome the lack of spin-orbit interaction and materials issues to achieve spin transport in silicon, the most important semiconductor for electronics.[28]
Because external magnetic fields (and stray fields from magnetic contacts) can cause large Hall effects and magnetoresistance in semiconductors (which mimic spin-valve effects), the only conclusive evidence of spin transport in semiconductors is demonstration of spin precession and dephasing in a magnetic field non-collinear to the injected spin orientation. This is called the Hanle effect.
Applications using spin-polarized electrical injection have shown threshold current reduction and controllable circularly polarized coherent light output.[29] Examples include semiconductor lasers. Future applications may include a spin-based transistor having advantages over MOSFET devices such as steeper sub-threshold slope.
Magnetic-tunnel transistor: The magnetic-tunnel transistor with a single base layer, by van Dijken et al. and Jiang et al.,[30] has the following terminals:
The magnetocurrent (MC) is given as:
And the transfer ratio (TR) is
MTT promises a highly spin-polarized electron source at room temperature.
Recently also antiferromagnetic storage media have been studied, whereas hitherto always ferromagnetism has been used.,[31] especially since with antiferromagnetic material the bits 0 and 1 can as well be stored as with ferromagnetic material (instead of the usual definition 0 -> 'magnetisation upwards', 1 -> 'magnetisation downwards', one may define, e.g., 0 -> 'vertically-alternating spin configuration' and 1 -> 'horizontally-alternating spin configuration'.[32]).
The main advantages of using antiferromagnetic material are
Web Ontology Language, World Wide Web, Metadata, Resource Description Framework, Ontology (information science)
Spintronics, Proton, Spin (physics), Electron, Magnetism
Lanthanum, Lutetium, Cerium, Neodymium, Thulium
Canada, Nanotechnology, Spintronics, University of Alberta, Plasmonics
Peer review, Engineering, Nanotechnology, Molecular nanotechnology, Space elevator
|
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Research schools
Works by Sarig and Benovadia have built symbolic dynamics for arbitrary diffeomorphisms of compact manifolds. This shows thatthere can be at most countably many ergodic hyperbolic equilibriummeasures for any Holder continuous or geometric potentials. We will explain how this yields uniqueness inside each homoclinic class of measures, i.e., of ergodic and hyperbolic measures that are homoclinically related. In some cases, further topological or geometric arguments can show global uniqueness.
This is a joint work with Sylvain Crovisier and Omri Sarig
Works by Sarig and Benovadia have built symbolic dynamics for arbitrary diffeomorphisms of compact manifolds. This shows thatthere can be at most countably many ergodic hyperbolic equilibriummeasures for any Holder continuous or geometric potentials. We will explain how this yields uniqueness inside each homoclinic class of measures, i.e., of ergodic and hyperbolic measures that are homoclinically related. In some cases, further topological or ...
37C40
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Research schools
Smooth parametrizations of semi-algebraic sets were introduced by Yomdin in order to bound the local volume growth in his proof of Shub’s entropy conjecture for C∞ maps. In this minicourse we will present some refinement of Yomdin’s theory which allows us to also control the distortion. We will give two new applications: - for any C∞ surface diffeomorphism f with positive entropy the saddle periodic points with Lyapunov exponents $\delta$-away from zero for $\delta \in]0,htop(f)[$ are equidistributed along measures of maximal entropy. - for C∞ maps the entropy is physically greater than or equal to the top Lyapunov exponents of the exterior powers.
Smooth parametrizations of semi-algebraic sets were introduced by Yomdin in order to bound the local volume growth in his proof of Shub’s entropy conjecture for C∞ maps. In this minicourse we will present some refinement of Yomdin’s theory which allows us to also control the distortion. We will give two new applications: - for any C∞ surface diffeomorphism f with positive entropy the saddle periodic points with Lyapunov exponents $\delta$-away ...
37C05 ; 37C40 ; 37D25
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Research schools
Smooth parametrizations of semi-algebraic sets were introduced by Yomdin in order to bound the local volume growth in his proof of Shub’s entropy conjecture for C∞ maps. In this minicourse we will present some refinement of Yomdin’s theory which allows us to also control the distortion. We will give two new applications: - for any C∞ surface diffeomorphism f with positive entropy the saddle periodic points with Lyapunov exponents $\delta$-away from zero for $\delta \in]0,htop(f)[$ are equidistributed along measures of maximal entropy. - for C∞ maps the entropy is physically greater than or equal to the top Lyapunov exponents of the exterior powers.
Smooth parametrizations of semi-algebraic sets were introduced by Yomdin in order to bound the local volume growth in his proof of Shub’s entropy conjecture for C∞ maps. In this minicourse we will present some refinement of Yomdin’s theory which allows us to also control the distortion. We will give two new applications: - for any C∞ surface diffeomorphism f with positive entropy the saddle periodic points with Lyapunov exponents $\delta$-away ...
37C05 ; 37C40 ; 37D25
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Research schools
Smooth parametrizations of semi-algebraic sets were introduced by Yomdin in order to bound the local volume growth in his proof of Shub’s entropy conjecture for C∞ maps. In this minicourse we will present some refinement of Yomdin’s theory which allows us to also control the distortion. We will give two new applications: - for any C∞ surface diffeomorphism f with positive entropy the saddle periodic points with Lyapunov exponents $\delta$-away from zero for $\delta \in]0,htop(f)[$ are equidistributed along measures of maximal entropy. - for C∞ maps the entropy is physically greater than or equal to the top Lyapunov exponents of the exterior powers.
Smooth parametrizations of semi-algebraic sets were introduced by Yomdin in order to bound the local volume growth in his proof of Shub’s entropy conjecture for C∞ maps. In this minicourse we will present some refinement of Yomdin’s theory which allows us to also control the distortion. We will give two new applications: - for any C∞ surface diffeomorphism f with positive entropy the saddle periodic points with Lyapunov exponents $\delta$-away ...
37C05 ; 37C40 ; 37D25
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Research schools
These lectures are a mostly self-contained sequel to Vaughn Climenhaga’s talks in week 1. The focus of the week 2 lectures will be on uniqueness of equilibrium states for rank 1 geodesic flows, and their mixing properties. Burns, Climenhaga, Fisher and myself showed recently that if the higher rank set does not carry full topological pressure then the equilibrium state is unique. I will discuss the proof of this result. With this result in hand, the question of when the “pressure gap” hypothesis can be verified becomes crucial. I will sketch our proof of the “entropy gap”, which is a new direct constructive proof of a result by Knieper. I will also describe new joint work with Ben Call, which shows that all the unique equilibrium states provided above have the Kolmogorov property. When the manifold has dimension at least 3, this is a new result even for the Knieper-Bowen-Margulis measure of maximal entropy. The common thread that links all of these arguments is that they rely on weak orbit specification properties in the spirit of Bowen.
These lectures are a mostly self-contained sequel to Vaughn Climenhaga’s talks in week 1. The focus of the week 2 lectures will be on uniqueness of equilibrium states for rank 1 geodesic flows, and their mixing properties. Burns, Climenhaga, Fisher and myself showed recently that if the higher rank set does not carry full topological pressure then the equilibrium state is unique. I will discuss the proof of this result. With this result in hand, ...
37D35 ; 37D40 ; 37C40 ; 37D25
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Research schools
These lectures are a mostly self-contained sequel to Vaughn Climenhaga’s talks in week 1. The focus of the week 2 lectures will be on uniqueness of equilibrium states for rank 1 geodesic flows, and their mixing properties. Burns, Climenhaga, Fisher and myself showed recently that if the higher rank set does not carry full topological pressure then the equilibrium state is unique. I will discuss the proof of this result. With this result in hand, the question of when the “pressure gap” hypothesis can be verified becomes crucial. I will sketch our proof of the “entropy gap”, which is a new direct constructive proof of a result by Knieper. I will also describe new joint work with Ben Call, which shows that all the unique equilibrium states provided above have the Kolmogorov property. When the manifold has dimension at least 3, this is a new result even for the Knieper-Bowen-Margulis measure of maximal entropy. The common thread that links all of these arguments is that they rely on weak orbit specification properties in the spirit of Bowen.
These lectures are a mostly self-contained sequel to Vaughn Climenhaga’s talks in week 1. The focus of the week 2 lectures will be on uniqueness of equilibrium states for rank 1 geodesic flows, and their mixing properties. Burns, Climenhaga, Fisher and myself showed recently that if the higher rank set does not carry full topological pressure then the equilibrium state is unique. I will discuss the proof of this result. With this result in hand, ...
37D35 ; 37D40 ; 37C40 ; 37D25
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Research schools
These lectures are a mostly self-contained sequel to Vaughn Climenhaga’s talks in week 1. The focus of the week 2 lectures will be on uniqueness of equilibrium states for rank 1 geodesic flows, and their mixing properties. Burns, Climenhaga, Fisher and myself showed recently that if the higher rank set does not carry full topological pressure then the equilibrium state is unique. I will discuss the proof of this result. With this result in hand, the question of when the “pressure gap” hypothesis can be verified becomes crucial. I will sketch our proof of the “entropy gap”, which is a new direct constructive proof of a result by Knieper. I will also describe new joint work with Ben Call, which shows that all the unique equilibrium states provided above have the Kolmogorov property. When the manifold has dimension at least 3, this is a new result even for the Knieper-Bowen-Margulis measure of maximal entropy. The common thread that links all of these arguments is that they rely on weak orbit specification properties in the spirit of Bowen.
These lectures are a mostly self-contained sequel to Vaughn Climenhaga’s talks in week 1. The focus of the week 2 lectures will be on uniqueness of equilibrium states for rank 1 geodesic flows, and their mixing properties. Burns, Climenhaga, Fisher and myself showed recently that if the higher rank set does not carry full topological pressure then the equilibrium state is unique. I will discuss the proof of this result. With this result in hand, ...
37D35 ; 37D40 ; 37C40 ; 37D25
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Research talks;Dynamical Systems and Ordinary Differential Equations
I will survey recent results on the generic properties of probability measures invariant by the geodesic flow defined on a nonpositively curved manifold. Such a flow is one of the early example of a non-uniformly hyperbolic system. I will talk about ergodicity and mixing both in the compact and noncompact setting, and ask some questions about the associated frame flow, which is partially hyperbolic.
37B10 ; 37D40 ; 34C28 ; 37C20 ; 37C40 ; 37D35
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Research talks;Dynamical Systems and Ordinary Differential Equations
We prove a couple of general conditional convergence results on ergodic averages for horocycle andgeodesic subgroups of any continuous $SL(2,\mathbb{R})$- action on a locally compact space. These results are motivated by theorems of Eskin, Mirzakhani and Mohammadi on the $SL(2,\mathbb{R})$-action on the moduli space of Abelian differentials. By our argument we can derive from these theorems an improved version of the “weak convergence” of push-forwards of horocycle measures under the geodesic flow and a new short proof of a theorem of Chaika and Eskin on Birkhoff genericity in almost all directions for the Teichmüller geodesic flow.
We prove a couple of general conditional convergence results on ergodic averages for horocycle andgeodesic subgroups of any continuous $SL(2,\mathbb{R})$- action on a locally compact space. These results are motivated by theorems of Eskin, Mirzakhani and Mohammadi on the $SL(2,\mathbb{R})$-action on the moduli space of Abelian differentials. By our argument we can derive from these theorems an improved version of the “weak convergence” of ...
37D40 ; 37C40 ; 37A17
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
- vii; 177 p.
ISBN 978-2-85629-904-3
Astérisque , 0410
Localisation : Périodique 1er étage
hyperbolicté non-uniforme # sélection de paramètres # application unimodale # attracteur Hénon # dynamiques chaotiques # dynamiques en petite dimension # pièce de puzzle
37D20 ; 37D25 ; 37D45 ; 37C40 ; 37E30
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
- vii; 326 p.
ISBN 978-3-319-43058-4
Lecture notes in mathematics , 2164
Localisation : Collection 1er étage
Chaire Jean-Morlet # CIRM # dynamique # théorie ergodique # géométrie différentielle
37C40 ; 37D40 ; 37-06 ; 53-06 ; 37Axx ; 53Cxx
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
- xxii; 266 p.
ISBN 978-2-85629-843-5
Astérisque , 0382
Localisation : Périodique 1er étage
forme modulaire de Hilbert # forme modulaire $\rho$-adique # forme modulaire surconvergente # représentation galoisienne # modularité # conjecture d'Artin # conjecture de Fontaine-Mazur
37A20 ; 37D25 ; 37D30 ; 37A50 ; 37C40
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
- ix; 165 p.
ISBN 978-2-85629-778-0
Astérisque , 0358
Localisation : Périodique 1er étage
Cocycle abélien # équation cohomologique # invariant d'holonomie # principe d'invariance # cocycle linéaire # théorie de Livsic # exposant de Liapounoff # hyperbolicité partielle # rigidité # cocycle lisse
37A20 ; 37D25 ; 37D30 ; 37A50 ; 37C40
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
- ix; 277 p.
ISBN 978-0-8218-9853-6
Graduate studies in mathematics , 0148
Localisation : Collection 1er étage
système dynamique # théorie ergodique # exposant de Lyapunov # dynamique topologique # hyperbolicité non-uniforme # flot géodésique
37D25 ; 37C40 ; 37-01
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
- 339 p.
ISBN 978-0-8218-4274-4
Fields institute communications , 0051
Localisation : Collection 1er étage
système dynamique # théorie ergodique # ergodicité lisse # système hyperbolique # flots sur surface # méthode quasiconforme # théorie de Teichmüller # foliation # groupe de Kleinian # surface modulaire de Riemann
37C40 ; 37D25 ; 37D30 ; 37E35 ; 37F30 ; 37C85 ; 30F60 ; 30F40 ; 32G15
... Lire [+]
Déposez votre fichier ici pour le déplacer vers cet enregistrement.
- 138 p.
ISBN 978-3-540-40121-6
Springer monographs in mathematics
Localisation : Ouvrage RdC (MARG)
système dynamique # courbure négative # fonction zéta # opérateur de transfert # orbite périodique # système d'Anosov # flot hyperbolique
37A05 ; 35A10 ; 37B10 ; 37C10 ; 37C27 ; 37C30 ; 37C35 ; 37C40 ; 37D20 ; 37D35 ; 37D40
... Lire [+]
|
I try to solve numerically the following PDE for $E(r, z)$ with a cylindrical symmetrie (i. e. $E(r, z) = E(-r, z)$).
$\frac{\partial E}{\partial z} = \frac{i}{2k} \Delta E + \mathcal{N}(E)$
Where $\Delta$ is the Laplace operator in transversal direction and $k$ a real number. I want to use the Crank-Nicolson scheme to treat the Laplace operator and the Adams-Bashforth scheme to treat the nonlinearity ($\mathcal N(E)$). $r$ is defined by $r_j = 0 + j\Delta r$ for $j = 0 \dots N$. Since the Laplace in cylinder coordinates is given by: $\Delta E = \frac{\partial^2E}{\partial r^2} + \frac{1}{r}\frac{\partial E}{\partial r} $ one gets the following representation of the Laplace operator:
$\Delta E_j^n = E^n_{j-1} -2 E^n_j + E^n_{j+1} + \frac{1}{2j}(E^n_{j+1} - E^n_{j-1})$
The given PDE equation is therefore given by the following, where the nonlinearity is treated by the Adams-Bashforth scheme.
$E_j^{n+1} - E_j^n = i\delta (\Delta_j E^{n+1}_j + \Delta_j E^{n}_j) + (3/2 \mathcal N^n_j - 1/2 N^{n-1}_{j})$
where $\delta = \frac{\Delta z}{4 k \Delta r^2}$. From this $E^{n+1}_j $ can be expressed the following way:
$E^{n+1}_j = L_-^{-1} [L_+E^n_j + 3/2\mathcal N^n_j - 1/2\mathcal N^{n-1}_j]$ with the following matrixes $L_-$ and $L_+$.
$ L_{\pm} = \left( \begin{array}{rrrr} 1\mp2i\delta & \pm i\delta v_0 \\ \pm i\delta u_1 & 1\mp2i\delta & \pm i\delta v_1\\ & & & & \\ & & & & \\ & & & \pm i\delta u_N & 1\mp2i\delta\\ \end{array}\right) $
Where $u_j = 1 - 1/(2j)$ and $v_j = 1 + 1/(2j)$.
For my problem the following boundary conditions are given: $ E(r = r_{max}, z) = 0 $ and $\frac{\partial E}{\partial r} |_{r=0}$. The first one for $r=r_{max}$ I can easily incorporate by changing the last row of $ L_{\pm} $.
The other one ($\frac{\partial E}{\partial r} |_{r=0}$) gives me trouble. I somehow have to change the entries of the first row of $L_{\pm}$, but I don't know how. I got some working boundary conditions for the case the nonlinearity is 0, but they break as soon as I add a nonlinearity.
I am greatful for any help.
|
You may have heard the terms
resistivity and resistance as they relate to resistors. They sound alike but have slightly different meaning. Resistivity and resistance capture the idea that materials fight against the flow of current.
There are two more resistor words you should know about:
conductivity and conductance. Conductivity and conductance are the same ideas as resistivity and resistance, but with the opposite attitude. They describe how much current is welcomed to flow.
This article assumes you are familiar with Ohm’s Law, $v = i\,\text R$.
Written by Willy McAllister.
Contents Resistivity Resistance Making resistors Measuring resistance Measuring resistivity Conductance Conductivity Where we’re headed Resistivityis an electrical property of bulk material—a measure of how much the material fights back when you push an electric current through it. The unit of resistivity is $\Omega \cdot \text m$, (ohm meters). Resistanceis the property of a circuit component called a resistor. Resistance is derived from two things: the resistivityof the material used to make the resistor, and the shape of the resistor. The unit of resistance is the ohm, $\Omega$. Conductivityis the reciprocal of resistivity, also a property of bulk material. The unit of conductivity is $\text S/\text m$, (siemens per meter). Conductanceis the reciprocal of resistance. The unit of conductance is the siemens, with symbol $\text S$. Resistivity Resistivity is a property of bulk material. “Bulk” means “a big chunk of” or “a bucket of.” Resistivity is the measure of how much the bulk stuff fights against the flow of current. Higher resistivity means more fight.
The variable name for resistivity is usually the Greek lowercase rho, $\rho$. It looks like a little p, but it’s more fun to write because you start at the bottom and swoop up.
Graphite (a form of carbon) conducts electricity about $100\times$ less than copper. It is used to make resistors (and pencil leads). To make resistors you mix together powdered graphite, clay, and glue. A bucket of this mixture has a bulk resistivity based on the proportions of carbon, clay, and glue. You adjust the proportions up and down to get any resistivity you want. If you mix in more carbon powder that makes the resistivity lower.
Resistance Resistance is the value of a particular resistor. Take your bucket of carbon/clay/glue goop and pull out a little blob. Form it into a little rectangle or cylinder. Attach wires to the ends. When the glue dries, you have a “lumped” circuit element called a resistor. The resistance depends the resistivity of the bulk material AND the shape of the resistor.
Resistance is the measure of how much a specific resistor fights against the flow of current. Higher resistance means more fight.
Making resistors
Let’s make a rectangular resistor,
You get different values of resistance by changing two things—the resistivity of the bulk material OR the shape of the resistor.
If you know the resistivity and the shape then the resistance is,
$\text R = \rho \,\dfrac{l}{A}$
$\text R$ is the resistance of the specific structure you built, units: ohms, $\Omega$.
$\rho$ is the resistivity of the bulk material, units: $\Omega \cdot m$. $l$ is the length of the resistor, units: $m$. $A$ is the area of the ends of the resistor, units: $m^2$.
(If the resistor is a cylinder you figure out $A$ using the area of a circle, $A = \pi \, r^2$.)
The equation tells us,
Resistance $\text R$ scales up and down directly with the material property $\rho$. Using higher resistivity material means a higher resistance value. That makes sense.
If you make $l$ longer then $\text R$ gets bigger. A longer resistor has higher resistance. There’s more resistive material the current has to flow through. That makes sense, too. It’s like stringing out resistors in series.
If you make $A$ bigger (the resistor gets fatter) then $\text R$ gets
smaller. That might take a second to sink in. When you make the resistor fatter there are more paths available for current to flow. It’s like connecting skinny resistors side-by-side in parallel.
If you solve the resistance equation for $\rho$ you get,
$\rho = \text R \,\dfrac{A}{l}$
Notice the term with area over length, which has units of $m^2/m$. This simplifies to just meters, $m$. So the units of resistivity are $\Omega \cdot m$, “ohm meter.””
Measuring resistance
How do you measure resistance? We use Ohm’s Law of course. An ohmmeter or multimeter has a battery inside it that applies a small voltage to the resistor being measured. The meter knows the voltage and measures the current, then it calculates the resistance, $\text R = v/i$.
Measuring resistivity
You might have a big chunk of some material and you want to know its resistivity. Or you may have a sheet of an unknown material and you want to identify it by measuring its resistivity. How do you measure resistivity? That’s a bit tricky.
You might try to use an ohmmeter to measure between two points. But, the meter tells you resistance in ohms, not resistivity in ohm$\cdot$meters. Depending on where you jab the probes into the material the the meter gives a different number. This isn’t doing the job.
Can you can think of a way to use an ohmmeter to measure resistivity?
Measure resistivity
We measure resistivity using the equation where we found resistivity in terms of resistance, area, and length,
$\rho = \text R \dfrac{A}{l}$
You basically sacrifice some material to build a carefully dimensioned resistor. You can compute $\rho$ if you know all three variables on the right side. Get out your chisel or scissors and cut off a chunk of bulk material.
Trim it to a precise size with known $A$ and known $l$. Touch your ohmmeter probes on the $A$ ends of the test piece and read $\text R$ from the ohmmeter. Put all three numbers into the equation and compute $\rho$.
Conductance
The reciprocal of resistance is called
conductance. The unit of conductance is $1/\Omega$ or “inverse ohms”. In the SI system this unit has an honorary name, the siemens (always with an s on the end). The symbol for siemens is $(\text S)$. The unit is named after Werner von Siemens, a German electrical engineer who founded the company with the same name.
The idea of conductance is most useful when you have resistors in parallel. See this article on parallel conductance.
moh
In the old days, $1/\Omega$ was called a “mho” (Ohm spelled backwards), and the symbol for mhos was $\mho$ (inverse ohms, get it?). This terminology is old fashioned and you shouldn’t use it (unless you are writing a paper on the history of electricity).
Where $\text R$ as the common variable name for resistance, the common variable name for conductance is $\text G$. I think this is because the letter $\text G$ has a slight resemblance to $\Omega$ rolled over on its side. It is not fully upside down, but reminds us of the antique $\mho$ symbol.
Conductivity
The reciprocal of resistivity is called
conductivity.
$\sigma = \dfrac{1}{\rho}$
The unit of conductivity is $1/(\Omega \cdot m)$. If you regroup the parenthesis you get $(1/\Omega)/m$. Since the definition of a siemens is $1/\Omega$ you express conductivity as siemens per meter,
$\sigma = \text S/m$
Summary
There’s a lot of names and symbols to keep straight, but don’t worry, you will use $\text R$ most of the time.
name variable unit unit symbol Resistance $\text R$ ohms $\Omega$ Conductance $\text G$ siemens $\text S$ Resistivity $\rho$ ohm meters $\Omega \cdot m$ Conductivity $\sigma$ siemens/meter $\text S / m$
Resistance is a property of a circuit component called a resistor. Resistance is based on two things: resistivity and shape.
Resistivity is a property of bulk material. It tells you how much the material fights back when an electric current flows through it.
Conductance the the reciprocal of resistance, a property of a specific circuit component.
Conductivity is the reciprocal of resistivity, so it is a bulk material property.
|
Search
Now showing items 1-1 of 1
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
|
Search
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Tomorrow, for the final lecture of the
Mathematical Statistics course, I will try to illustrate – using Monte Carlo simulations – the difference between classical statistics, and the Bayesien approach.
The (simple) way I see it is the following,
for frequentists, a probability is a measure of the the frequency of repeated events, so the interpretation is that parameters are fixed (but unknown), and data are random for Bayesians, a probability is a measure of the degree of certainty about values, so the interpretation is that parameters are random and data are fixed
Or to quote Frequentism and Bayesianism: A Python-driven Primer, a Bayesian statistician would say “given our observed data, there is a 95% probability that the true value of \theta falls within the credible region” while a Frequentist statistician would say “there is a 95% probability that when I compute a confidence interval from data of this sort, the true value of \theta will fall within it”.
To get more intuition about those quotes, consider a simple problem, with Bernoulli trials, with insurance claims. We want to derive some confidence interval for the probability to claim a loss. There were = 1047 policies. And 159 claims.
Consider the standard (frequentist) confidence interval. What does that mean that \overline{x}\pm\sqrt{\frac{\overline{x}(1-\overline{x})}{n}}is the (asymptotic) 95% confidence interval? The way I see it is very simple. Let us generate some samples, of size n, with the same probability as the empirical one, i.e. \widehat{\theta} (which is the meaning of “from data of this sort”). For each sample, compute the confidence interval with the relationship above. It is a 95% confidence interval because in 95% of the scenarios, the empirical value lies in the confidence interval. From a computation point of view, it is the following idea,
> xbar <- 159 > n <- 1047 > ns <- 100 > M=matrix(rbinom(n*ns,size=1,prob=xbar/n),nrow=n)
I generate 100 samples of size . For each sample, I compute the mean, and the confidence interval, from the previous relationship
> fIC=function(x) mean(x)+c(-1,1)*1.96*sqrt(mean(x)*(1-mean(x)))/sqrt(n) > IC=t(apply(M,2,fIC)) > MN=apply(M,2,mean)
Then we plot all those confidence intervals. In red when they do not contain the empirical mean
> k=(xbar/n<IC[,1])|(xbar/n>IC[,2]) > plot(MN,1:ns,xlim=range(IC),axes=FALSE, + xlab="",ylab="",pch=19,cex=.7, + col=c("blue","red")[1+k]) > axis(1) > segments(IC[,1],1:ns,IC[,2],1: + ns,col=c("blue","red")[1+k]) > abline(v=xbar/n)
Now, what about the Bayesian credible interval ? Assume that the prior distribution for the probability to claim a loss has a distribution. We’ve seen in the course that, since the Beta distribution is the conjugate of the Bernoulli one, the posterior distribution will also be Beta. More precisely
Based on that property, the confidence interval is based on quantiles of that (posterior) distribution
> u=seq(.1,.2,length=501) > v=dbeta(u,1+xbar,1+n-xbar) > plot(u,v,axes=FALSE,type="l") > I=u<qbeta(.025,1+xbar,1+n-xbar) > polygon(c(u[I],rev(u[I])),c(v[I], + rep(0,sum(I))),col="red",density=30,border=NA) > I=u>qbeta(.975,1+xbar,1+n-xbar) > polygon(c(u[I],rev(u[I])),c(v[I], + rep(0,sum(I))),col="red",density=30,border=NA) > axis(1)
What does that mean, here, that we have a 95% credible interval. Well, this time, we do not draw using the empirical mean, but some possible probability, based on that posterior distribution (given the observations)
> pk <- rbeta(ns,1+xbar,1+n-xbar)
In green, below, we can visualize the histogram of those values
> hist(pk,prob=TRUE,col="light green", + border="white",axes=FALSE, + main="",xlab="",ylab="",lwd=3,xlim=c(.12,.18))
And here again, let us generate samples, and compute the empirical probabilities,
> M=matrix(rbinom(n*ns,size=1,prob=rep(pk, + each=n)),nrow=n) > MN=apply(M,2,mean)
Here, there is 95% chance that those empirical means lie in the credible interval, defined using quantiles of the posterior distribution. We can actually visualize all those means : in black the mean used to generate the sample, and then, in blue or red, the averages obtained on those simulated samples,
> abline(v=qbeta(c(.025,.975),1+xbar,1+ + n-xbar),col="red",lty=2) > points(pk,seq(1,40,length=ns),pch=19,cex=.7) > k=(MN<qbeta(.025,1+xbar,1+n-xbar))| + (MN>qbeta(.975,1+xbar,1+n-xbar)) > points(MN,seq(1,40,length=ns), + pch=19,cex=.7,col=c("blue","red")[1+k]) > segments(MN,seq(1,40,length=ns), + pk,seq(1,40,length=ns),col="grey")
More details and exemple on Bayesian statistics, seen with the eyes of a (probably) not Bayesian statistician in my slides, from my talk in London, last Summer,
|
Next, we combine what we have learned about convection and diffusion and apply it to the Burger's Equation. This equation looks like —and is— the direct combination of both of the PDE's we had been working on earlier.$$ \frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2} $$
We can discretize it using the methods we have developed previously in steps 1-3. It will take forward difference for the time component, backward difference for space and our 2nd order combination method for hte second derivatives. This yields:$$ \frac{u^{n+1}_i - u^n_i}{\Delta t} + u_i^n \frac{u^{n}_i - u^n_{i-1}}{\Delta x} = \nu \frac{u^{n}_{i+1} -2u^n_i + u^n_{i-1}}{\Delta x^2} $$
Given that we have full initial conditions as before we can solve for our only unknown $u^{n+1}_i$ and iterate through the equation that follows:$$ u^{n+1}_i = u^n_i - u^n_i \frac{\Delta t}{\Delta x} (u^n_i - u^n_{i-1}) + \frac{\nu \Delta t}{\Delta x^2}(u^{n}_{i+1} - 2u^n_i + u^n_{i-1}) $$
The above equation will now allow us to write a program to advance our solution in time and perform our simulation. As before, we need initial conditions, and we shall continue to use the one we obtained in the previous two steps.
The Burger's equation is way more interesting than the previous ones. To have a better feel for its properties it is helpful to use different initial and boundary conditions than what we have been using for the previous steps.\begin{eqnarray} u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\ \phi &=& \exp \bigg(\frac{-(x-4t)^2}{4 \nu (t+1)} \bigg) + \exp \bigg(\frac{-(x-4t -2 \pi)^2}{4 \nu(t+1)} \bigg) \end{eqnarray}
This has an analytical solution, given by: \begin{eqnarray} u &=& -\frac{2 \nu}{\phi} \frac{\partial \phi}{\partial x} + 4 \\\ \phi &=& \exp \bigg(\frac{-(x-4t)^2}{4 \nu (t+1)} \bigg) + \exp \bigg(\frac{-(x-4t -2 \pi)^2}{4 \nu(t+1)} \bigg) \end{eqnarray}
Our boundary conditions will be:$$ u(0) = u(2 \pi) $$
This is a periodic boundary condition which we must be careful with.
Evaluating this initial condition by hand would be relatively painful, to avoid this we can calculate the derivative using sympy. This is basically mathematica but can be used to output the results back into Python calculations.
We shall start by loading all of the python libraries that we will need for hte project along with a fix to make sure sympy prints our functions in latex.
# Adding inline command to make plots appear under commentsimport numpy as npimport sympyimport matplotlib.pyplot as pltimport time, sys%matplotlib inline sympy.init_printing(use_latex =True)
We shall start by defining the symbolic variables in our initial conditions and then typing out the full equation.
x, nu, t = sympy.symbols('x nu t')phi = (sympy.exp(-(x - 4 * t) **2 / (4 * nu * (t+1))) + sympy.exp(-(x - 4 *t - 2 * np.pi)**2 / (4 * nu * (t + 1))))phi
phiprime = phi.diff(x)phiprime
In python code:
print(phiprime)
-(-8*t + 2*x)*exp(-(-4*t + x)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)) - (-8*t + 2*x - 12.5663706143592)*exp(-(-4*t + x - 6.28318530717959)**2/(4*nu*(t + 1)))/(4*nu*(t + 1))
Now that we have the expression for $ \frac{\partial \phi}{\partial x} $ we can finish writing the full initial condition equation and then translating it into a usable python expression. To do this we use the lambdify function which takes a sympy simbolic equation and turns it into a callable function.
u = -2 * nu * (phiprime / phi) + 4print(u)
-2*nu*(-(-8*t + 2*x)*exp(-(-4*t + x)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)) - (-8*t + 2*x - 12.5663706143592)*exp(-(-4*t + x - 6.28318530717959)**2/(4*nu*(t + 1)))/(4*nu*(t + 1)))/(exp(-(-4*t + x - 6.28318530717959)**2/(4*nu*(t + 1))) + exp(-(-4*t + x)**2/(4*nu*(t + 1)))) + 4
ufunc = sympy.utilities.lambdify((t,x,nu), u)print(ufunc(1,4,3))
3.4917066420644494
Pretty neat right?!
Now that we can set up the initial conditions we can finish up the problem. We can generate the plot of intiial conditions using the lambifyied function.
#New initial conditionsgrid_length = 2grid_points = 101nt = 150dx = grid_length * np.pi / (grid_points - 1) nu = .07 dt = dx * nu #Dynamically scaling dt based on grid size to ensure convergence#Initiallizing the array containing the shape of our initial conditionsx = np.linspace(0,2 * np.pi, grid_points)un = np.empty(grid_points)t = 0u = np.asarray([ufunc(t,x0,nu) for x0 in x])u
array([4. , 4.06283185, 4.12566371, 4.18849556, 4.25132741, 4.31415927, 4.37699112, 4.43982297, 4.50265482, 4.56548668, 4.62831853, 4.69115038, 4.75398224, 4.81681409, 4.87964594, 4.9424778 , 5.00530965, 5.0681415 , 5.13097336, 5.19380521, 5.25663706, 5.31946891, 5.38230077, 5.44513262, 5.50796447, 5.57079633, 5.63362818, 5.69646003, 5.75929189, 5.82212374, 5.88495559, 5.94778745, 6.0106193 , 6.07345115, 6.136283 , 6.19911486, 6.26194671, 6.32477856, 6.38761042, 6.45044227, 6.51327412, 6.57610598, 6.63893783, 6.70176967, 6.76460125, 6.82742866, 6.89018589, 6.95176632, 6.99367964, 6.72527549, 4. , 1.27472451, 1.00632036, 1.04823368, 1.10981411, 1.17257134, 1.23539875, 1.29823033, 1.36106217, 1.42389402, 1.48672588, 1.54955773, 1.61238958, 1.67522144, 1.73805329, 1.80088514, 1.863717 , 1.92654885, 1.9893807 , 2.05221255, 2.11504441, 2.17787626, 2.24070811, 2.30353997, 2.36637182, 2.42920367, 2.49203553, 2.55486738, 2.61769923, 2.68053109, 2.74336294, 2.80619479, 2.86902664, 2.9318585 , 2.99469035, 3.0575222 , 3.12035406, 3.18318591, 3.24601776, 3.30884962, 3.37168147, 3.43451332, 3.49734518, 3.56017703, 3.62300888, 3.68584073, 3.74867259, 3.81150444, 3.87433629, 3.93716815, 4. ])
plt.figure(figsize=(11, 7), dpi= 100)plt.plot(x, u, marker='o', lw=2)plt.xlim([0, 2 * np.pi])plt.ylim([0, 10]);plt.xlabel('x')plt.ylabel('u')plt.title('Burgers Equation at t=0');
This new function is known as a
sawtooth function.
The biggest difference between this step and the previous ones is the use of periodic boundary conditions. If you have experimented with steps 1-2 you would have seen that eventually the wave moves out of the picture to the right and does not show up in the plot.
With periodic BC, what happens now is that when the wave hits the end of the frame it wraps around and starts from the beginning again.
Now we will apply the discretization as outlined above and check out the final results.
for n in range(nt): #Runs however many timesteps you set earlier un = u.copy() #copy the u array to not overwrite values for i in range(1,grid_points-1): u[i] = un[i] - un[i] * dt/dx * (un[i]-un[i-1]) + nu * (dt/dx**2) * (un[i+1]- 2*un[i] + un[i-1]) u[0] = un[0] - un[0] * dt / dx * (un[0] - un[-2]) + nu*(dt / dx**2) *(un[1] - 2* un[0] + un[-2]) u[-1] = u[0]u_anal = np.asarray([ufunc(nt* dt , xi, nu) for xi in x])
plt.figure(figsize=(11, 7), dpi=100)plt.plot(x,u, marker ='o', lw=2, label='Computational')plt.plot(x, u_anal, label='Analytical')plt.xlim([0, 2* np.pi])plt.ylim([0,10])plt.xlabel('x')plt.ylabel('u')plt.title('Burgers Equation at t=10');plt.legend();
#Imports for animation and display within a jupyter notebookfrom matplotlib import animation, rc from IPython.display import HTML#Generating the figure that will contain the animationfig, ax = plt.subplots()fig.set_dpi(100)fig.set_size_inches(9, 5)ax.set_xlim(( 0, 2*np.pi))ax.set_ylim((0, 10))comp, = ax.plot([], [], marker='o', lw=2,label='Computational')anal, = ax.plot([], [], lw=2,label='Analytical')ax.legend();plt.xlabel('x')plt.ylabel('u')plt.title('Burgers Equation time evolution from t=0 to t=10');#Resetting the U wave back to initial conditionsu = np.asarray([ufunc(0, x0, nu) for x0 in x])
#Initialization function for funcanimationdef init(): comp.set_data([], []) anal.set_data([], []) return (comp,anal,)
#Main animation function, each frame represents a time step in our calculationdef animate(j): un = u.copy() #copy the u array to not overwrite values for i in range(1,grid_points-1): u[i] = un[i] - un[i] * dt/dx * (un[i]-un[i-1]) + nu * (dt/dx**2) * (un[i+1]- 2*un[i] + un[i-1]) u[0] = un[0] - un[0] * dt / dx * (un[0] - un[-2]) + nu*(dt / dx**2) *(un[1] - 2* un[0] + un[-2]) u[-1] = u[0] u_anal = np.asarray([ufunc(j * dt, xi, nu) for xi in x]) comp.set_data(x, u) anal.set_data(x, u_anal) return (comp,anal,)
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=nt, interval=20)anim.save('../gifs/1dBurgers.gif',writer='imagemagick',fps=60)#HTML(anim.to_jshtml())
This concludes our examination of 1D sims and boy oh boy was this cool! This last model in particular shines in the animation showing the behavior and properties of the burghers equation quite well.
Next, we will start our move to 2D but before this a quick detour on array operations on NumPy.
|
I'm trying to prove the following:
If $(a_n)$ is a sequence of positive numbers such that $\sum_{n=1}^\infty a_n b_n<\infty$ for all sequences of positive numbers $(b_n)$ such that $\sum_{n=1}^\infty b_n^2<\infty$, then $\sum_{n=1}^\infty a_n^2 <\infty$.
The context here is functional analysis homework, in the subject of Hilbert spaces.
Here's what I've thought:
Let $f=(a_n)>0$. Then the problem reads: if $\int f\overline{g}<\infty$ for all $g>0,g\in \ell^2$, then $f\in \ell^2$. This brings the problem into the realm of $\ell^p$ spaces.
I know the inner product is defined only in $\ell^2$, but it's sort of like saying: if $\langle f,g\rangle <\infty$ for all $g>0,g\in \ell^2$ then $f\in \ell^2$.
I read this as: "to check a positive sequence is in $\ell^2$, just check its inner product with any positive sequence in $\ell^2$ is finite, then you're done", which I find nice, but I can't prove it :P
From there, I don't know what else to do. I thought of Hölder's inequality which in this context states: $$\sum_{n=1}^\infty a_nb_n \leq \left( \sum_{n=1}^\infty a_n^2 \right)^{1/2} \left( \sum_{n=1}^\infty b_n^2 \right)^{1/2}$$
but it's not useful here.
|
MathView
MathView is a third-party view library, which might help you display math formula on Android apps easier. Two rendering engines available: MathJax and KaTeX. Support Android version 4.1 (Jelly Bean) and newer.
Setup
There are two ways you can add
MathView to your project in Android Studio:
From a remote Maven repository (jcenter). From a local .aar file. 1. Setup from a remote Maven repository (jcenter)
Add
compile 'io.github.kexanie.library:MathView:0.0.6' into
dependencies section of your module build.gradle file. For example:
dependencies { compile fileTree(include: ['*.jar'], dir: 'libs') compile 'com.android.support:appcompat-v7:23.0.0' compile 'io.github.kexanie.library:MathView:0.0.6'}
2. Setup from local .aar file
You can download the latest version of MathView from Bintray.
Import the module from local .aar file
Click
File -> New -> New Module (yes, not
import Module)
-> Import .JAR/.AAR Package, and find out where the file located.
Add dependency
Click
File -> Project Structure -> Dependencies, and then click the plus icon, select
3. Module Dependency.
For Eclipse users
Just migrate to Android Studio.
Usage
The behaviour of
MathView is nearly the same as
TextView, except that it will automatically render
TeX code (or MathML code if rendering with MathJax) into math formula. For basic tutorial and quick reference, please have a look on this tutorial. Caution You should enclose the formula in
\(...\)rather than
$...$for inline formulas.
You need to escape spacial characters like backslash, quotes and so on in Java code. If you want to make the height of
MathViewactually
wrap_content, warp the views into
NestedScrollView.
About the engines
KaTeX is faster than MathJax on mobile environment, but MathJax supports more features and is much more beautiful. Choose whatever suits your needs.
Define
MathView in your layout file
For example:
<LinearLayout ...> <TextView android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Formula one: from xml with MathJax" android:textStyle="bold"/> <io.github.kexanie.library.MathView android:id="@+id/formula_one" android:layout_width="match_parent" android:layout_height="wrap_content" auto:text="When \\(a \\ne 0\\), there are two solutions to \\(ax^2 + bx + c = 0\\) and they are $$x = {-b \\pm \\sqrt{b^2-4ac} \\over 2a}.$$" auto:engine="MathJax" > </io.github.kexanie.library.MathView> <TextView android:layout_width="match_parent" android:layout_height="wrap_content" android:text="Formula two: from Java String with KaTeX" android:textStyle="bold"/> <io.github.kexanie.library.MathView android:id="@+id/formula_two" android:layout_width="match_parent" android:layout_height="wrap_content" auto:engine="KaTeX" > </io.github.kexanie.library.MathView></LinearLayout>
Get an instance from your
Activity
public class MainActivity extends AppCompatActivity { MathView formula_two; String tex = "This come from string. You can insert inline formula:" + " \\(ax^2 + bx + c = 0\\) " + "or displayed formula: $$\\sum_{i=0}^n i^2 = \\frac{(n^2+n)(2n+1)}{6}$$"; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); } @Override protected void onResume() { super.onResume(); formula_two = (MathView) findViewById(R.id.formula_two); formula_two.setText(tex); }}
Noted that the method MatView.getText() will return the raw TeX code (Java String). Configuration
I am not an expert in MathJax. Rather than providing a pre-configured version of MathJax, I choose to add another method
config()(for MathJax only) to
MathView in version
0.0.5. You can tweak MathJax with more complicated configurations. For example, to enable auto linebreaking, you can call
MathView.config("MathJax.Hub.Config({\n"+ " CommonHTML: { linebreaks: { automatic: true } },\n"+ " \"HTML-CSS\": { linebreaks: { automatic: true } },\n"+ " SVG: { linebreaks: { automatic: true } }\n"+ "});");
before
setText().
How it works
MathView inherited from Android
WebView and use javascript ( MathJax or KaTeX ) to do the rendering stuff. Another library called Chunk is just an lightweight Java template engine for filling the TeX code into an html file. So we can render it. It's still rather primitive, but at least functional. Check the code for more details.
Known Issues When rendering with MathJax, some characters are blank(like character 'B' of BlackBoard Bold font) due to MathJax's bug on Android
WebView.
Not all TeX commands are supported by KaTeX, check this link for more details. Feedback
If you have any issues or need help please do not hesitate to create an issue ticket.
|
Convolutions from a DSP perspective
I'm a bit late to this but still would like to share my perspective and insights. My background is theoretical physics and digital signal processing. In particular I studied wavelets and convolutions are almost in my backbone ;)
The way people in the deep learning community talk about convolutions was also confusing to me. From my perspective what seems to be missing is a proper separation of concerns. I will explain the deep learning convolutions using some DSP tools.
Disclaimer
My explanations will be a bit hand-wavy and not mathematical rigorous in order to get the main points across.
Definitions
Let's define a few things first. I limit my discussion to one dimensional (the extension to more dimension is straight forward) infinite (so we don't need to mess with boundaries) sequences $x_n = \{x_n\}_{n=-\infty}^{\infty} = \{\dots, x_{-1}, x_{0}, x_{1}, \dots \}$.
A pure (discrete) convolution between two sequences $y_n$ and $x_n$ is defined as
$$ (y * x)_n = \sum_{k=-\infty}^{\infty} y_{n-k} x_k $$
If we write this in terms of matrix vector operations it looks like this (assuming a simple kernel $\mathbf{q} = (q_0,q_1,q_2)$ and vector $\mathbf{x} = (x_0, x_1, x_2, x_3)^T$):
$$ \mathbf{q} * \mathbf{x} = \left( \begin{array}{cccc} q_1 & q_0 & 0 & 0 \\ q_2 & q_1 & q_0 & 0 \\ 0 & q_2 & q_1 & q_0 \\ 0 & 0 & q_2 & q_1 \\ \end{array} \right) \left( \begin{array}{cccc} x_0 \\ x_1 \\ x_2 \\ x_3 \end{array} \right) $$
Let's introduce the down- and up-sampling operators, $\downarrow$ and $\uparrow$, respectively. Downsampling by factor $k \in \mathbb{N}$ is removing all samples except every k-th one:
$$ \downarrow_k\!x_n = x_{nk} $$
And upsampling by factor $k$ is interleaving $k-1$ zeros between the samples:
$$ \uparrow_k\!x_n = \left \{ \begin{array}{ll} x_{n/k} & n/k \in \mathbb{Z} \\ 0 & \text{otherwise} \end{array} \right.$$
E.g. we have for $k=3$:
$$ \downarrow_3\!\{ \dots, x_0, x_1, x_2, x_3, x_4, x_5, x_6, \dots \} = \{ \dots, x_0, x_3, x_6, \dots \} $$$$ \uparrow_3\!\{ \dots, x_0, x_1, x_2, \dots \} = \{ \dots x_0, 0, 0, x_1, 0, 0, x_2, 0, 0, \dots \} $$
or written in terms of matrix operations (here $k=2$):
$$ \downarrow_2\!x = \left( \begin{array}{cc} x_0 \\ x_2 \end{array} \right) = \left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \end{array} \right) \left( \begin{array}{cccc} x_0 \\ x_1 \\ x_2 \\ x_3 \end{array} \right) $$
and
$$ \uparrow_2\!x = \left( \begin{array}{cccc} x_0 \\ 0 \\ x_1 \\ 0 \end{array} \right) = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right) \left( \begin{array}{cc} x_0 \\ x_1 \end{array} \right) $$
As one can already see, the down- and up-sample operators are mutually transposed, i.e. $\uparrow_k = \downarrow_k^T$.
Deep Learning Convolutions by Parts
Let's look at the typical convolutions used in deep learning and how we write them. Given some kernel $\mathbf{q}$ and vector $\mathbf{x}$ we have the following:
a strided convolution with stride $k$ is $\downarrow_k\!(\mathbf{q} * \mathbf{x})$, a dilated convolution with factor $k$ is $(\uparrow_k\!\mathbf{q}) * \mathbf{x}$, a transposed convolution with stride $k$ is $ \mathbf{q} * (\uparrow_k\!\mathbf{x})$
Let's rearrange the transposed convolution a bit:$$ \mathbf{q} * (\uparrow_k\!\mathbf{x}) \; = \; \mathbf{q} * (\downarrow_k^T\!\mathbf{x}) \; = \; (\uparrow_k\!(\mathbf{q}*)^T)^T\mathbf{x}$$
In this notation $(\mathbf{q}*)$ must be read as an operator, i.e. it abstracts convolving something with kernel $\mathbf{q}$.Or written in matrix operations (example):
$$ \begin{align} \mathbf{q} * (\uparrow_k\!\mathbf{x}) & = \left( \begin{array}{cccc} q_1 & q_0 & 0 & 0 \\ q_2 & q_1 & q_0 & 0 \\ 0 & q_2 & q_1 & q_0 \\ 0 & 0 & q_2 & q_1 \\ \end{array} \right) \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \\ 0 & 1 \\ 0 & 0 \\ \end{array} \right) \left( \begin{array}{c} x_0\\ x_1\\ \end{array} \right) \\ & = \left( \begin{array}{cccc} q_1 & q_2 & 0 & 0 \\ q_0 & q_1 & q_2 & 0 \\ 0 & q_0 & q_1 & q_2 \\ 0 & 0 & q_0 & q_1 \\ \end{array} \right)^T \left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\\ \end{array} \right)^T \left( \begin{array}{c} x_0\\ x_1\\ \end{array} \right) \\ & = \left( \left( \begin{array}{cccc} 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\\ \end{array} \right) \left( \begin{array}{cccc} q_1 & q_2 & 0 & 0 \\ q_0 & q_1 & q_2 & 0 \\ 0 & q_0 & q_1 & q_2 \\ 0 & 0 & q_0 & q_1 \\ \end{array} \right) \right)^T \left( \begin{array}{c} x_0\\ x_1\\ \end{array} \right) \\ & = (\uparrow_k\!(\mathbf{q}*)^T)^T\mathbf{x} \end{align}$$
As one can see the is the transposed operation, thus, the name.
Connection to Nearest Neighbor Upsampling
Another common approach found in convolutional networks is upsampling with some built-in form of interpolation. Let's take upsampling by factor 2 with a simple repeat interpolation.This can be written as $\uparrow_2\!(1\;1) * \mathbf{x}$. If we also add a learnable kernel $\mathbf{q}$ to this we have $\uparrow_2\!(1\;1) * \mathbf{q} * \mathbf{x}$. The convolutions can be combined, e.g. for $\mathbf{q}=(q_0\;q_1\;q_2)$, we have $$(1\;1) * \mathbf{q} = (q_0\;\;q_0\!\!+\!q_1\;\;q_1\!\!+\!q_2\;\;q_2),$$
i.e. we can replace a repeat upsampler with factor 2 and a convolution with a kernel of size 3 by a transposed convolution with kernel size 4. This transposed convolution has the same "interpolation capacity" but would be able to learn better matching interpolations.
Conclusions and Final Remarks
I hope I could clarify some common convolutions found in deep learning a bit by taking them apart in the fundamental operations.
I didn't cover pooling here. But this is just a nonlinear downsampler and can be treated within this notation as well.
|
Imagine that you had access to one very long (or very many somewhat shorter) unbiased MD simulation(s). What would (or should) you do with the dataset; what quantities can be estimated?
First, we can estimate
equilibrium expectation values. Each trajectory is, by construction, a realization of a Markov chain whose stationary distribution is the equilibrium distribution of the system. Assuming we're in the canonical (NVT) ensemble, then the equilibrium (Boltzmann) distribution of the system is
$$ \pi(x) = \frac{1}{Z} e^{-\beta U(x)} $$
If we index the timesteps of the simulation, $t \in \{1, 2, \ldots\}$, and let $X_t$ be the position at time $t$, then $X_t$ converges in distribution to $\pi$.
$$X_t \xrightarrow{d} \pi$$
Why is this useful? Because it provides the basis by which we can calculate expectation values for the system. If we want to calculate things like the ensemble average distance between various residues, FRET absorbance, X-ray scattering intensities, or other properties that can be well-modeled as an ensemble average over $\pi$ of some (instantaneous) observable $g(X)$, this property guarantees that averages of $g$ over a trajectory will converge to the right answer in the limit that the length of the trajectory goes to infinity.
$$ \lim_{t\rightarrow \infty}\frac{1}{T} \sum_t^T g(X_t) = \mathbb{E}_\pi \left[ g(X) \right] $$
But there's potentially much more information in an MD trajectory than just estimates of equilibrium expectation values. Consider questions about the system's
dynamics like the following: How long does it take for the system to transition between two particular regions of conformation space? What are the slow dynamical modesin the system? Which degrees of freedom or collective variables in the system take the most time to equilibrate? What predictions can be made about relaxation experiments like temperature-jump IR spectroscopy?
These types of questions concern the dynamics of the system, and cannot truly be answered by the calculation of equilibrium expectation values. Why? If we look at the structure of the mean calculation above, we can see that it treats the individual $X_t$ as if they're exchangeable. This means roughly that given an MD trajectory, we get the same estimate for the equilibrium average of $g$ if we scramble the order of the frames in the trajectory (vs. using them in the ``proper'' order). All of the temporal structure of the trajectories is thus disregarded.
In an upcoming post, I'll talk about how these types of questions about the dynamics can be posed, and how Markov state models can be efficient estimators for these quantities.
|
The numbers of periodic orbits hidden at fixed points of $n$-dimensional holomorphic mappings (II)
DOI: http://dx.doi.org/10.12775/TMNA.2009.006
Abstract
Let $\Delta^{n}$ be the ball $|x|<1$ in the complex vector space ${\mathbb C}^{n}$, let $f\colon \Delta^{n}\rightarrow {\mathbb C}^{n}$ be a holomorphic mapping and let $M$ be a positive integer. Assume that the origin $0=(0,\ldots,0)$ is an isolated fixed point of both $f$ and the $M$-th iteration $f^{M}$ of $f$. Then the (local) Dold index $P_{M}(f,0)$ at the origin is well defined, which can be interpreted to be the number of periodic points of period $M$ of $f$ hidden at the origin: any holomorphic mapping $f_{1}\colon \Delta^{n}\rightarrow {\mathbb C}^{n}$ sufficiently close to $f$ has exactly $P_{M}(f,0)$ distinct periodic points of period $M$ near the origin, provided that all the fixed points of $f_{1}^{M}$ near the origin are simple. Therefore, the number ${\mathcal O}_{M}(f,0)=P_{M}(f,0)/M$ can be understood to be the number of periodic orbits of period $M$ hidden at the fixed point. According to Shub-Sullivan [A remark on the Lefschetz fixed point formula for differentiable maps, Topology 13 (1974), 189–191] and Chow-Mallet-Paret-Yorke [A periodic orbit index which is a bifurcation invariant, Lecture Notes in Math., vol. 1007, Springer, Berlin, 1983, pp. 109–131], a necessary condition so that there exists at least one periodic orbit of period $M$ hidden at the fixed point, say, ${\mathcal O}_{M}(f,0)\geq 1$, is that the linear part of $f$ at the origin has a periodic point of period $M$. It is proved by the author in [Fixed point indices and periodic points of holomorphic mappings, Math. Ann. 337 (2007), 401–433] that the converse holds true. In this paper, we continue to study the number ${\mathcal O}_{M}(f,0)$. We will give a sufficient condition such that ${\mathcal O}_{M}(f,0)\geq 2$, in the case that all eigenvalues of $Df(0)$ are primitive $m_{1}$-th, $\ldots$, $m_{n}$-th roots of unity, respectively, and $m_{1},\ldots,m_{n}$ are distinct primes with $M=m_{1}\ldots m_{n}$.
Keywords
Fixed point index; periodic point
|
In today’s post, we document our efforts at applying a gradient boosted trees model to forecast bike sharing demand — a problem posed in a recent Kaggle competition. For those not familiar, Kaggle is a site where one can compete with other data scientists on various data challenges. Top scorers often win prize money, but the site more generally serves as a great place to grab interesting datasets to explore and play with. With the simple optimization steps discussed below, we managed to quickly move from the bottom 10% of the competition — our first-pass attempt’s score — to the top 10%: no sweat!
Our work here was inspired by a post by the people at Dato.com, who used the bike sharing competition as an opportunity to demonstrate their software. Here, we go through a similar, but more detailed discussion using the python package SKlearn.
Follow @efavdb
Follow us on twitter for new submission alerts!

Introduction
Bike sharing systems are gaining popularity around the world — there are over 500 different programs currently operating in various cities, and counting! These programs are generally funded through rider membership fees, or through pay-to-ride one time rental fees. Key to the convenience of these programs is the fact that riders who pick up a bicycle from one station can return the bicycle to any other in the network. These systems generate a great deal of data relating to various ride details, including travel time, departure location, arrival location, and so on. This data has the potential to be very useful for studying city mobility. The data we look at today comes from Washington D. C.’s Capital Bikeshare program. The goal of the Kaggle competition is to leverage the historical data provided in order to forecast future bike rental demand within the city.
As we detailed in an earlier post, boosting provides a general method for increasing a machine learning algorithm's performance. Here, in order to model the Capital Bikeshare program's demand curves, we'll be applying a gradient boosted trees model (GBM). Simply put, GBMs are constructed by iteratively fitting a series of simple trees to a training set, where each new tree attempts to fit the residuals, or errors, of the trees that came before it. With the addition of each new tree the training error is further reduced, typically asymptoting to a reasonably accurate model — but one must watch out for overfitting — see below!
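To make the residual-fitting idea concrete, here's a bare-bones version of a least-squares GBM (a sketch of my own, not the competition code; real implementations also start from a constant base prediction, add subsampling, and so on):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbm(X, y, n_trees=100, depth=3, lr=0.1):
    # each new shallow tree is fit to the residuals of the current ensemble
    trees, resid = [], np.asarray(y, dtype=float).copy()
    for _ in range(n_trees):
        t = DecisionTreeRegressor(max_depth=depth).fit(X, resid)
        trees.append(t)
        resid -= lr * t.predict(X)   # each tree chips away at the error
    return trees

def predict_gbm(trees, X, lr=0.1):
    # predictions are the damped sum of all the trees' outputs
    return lr * sum(t.predict(X) for t in trees)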
Loading package and data
Below, we show the relevant commands needed to load all the packages and training/test data we will be using. We work with the package Pandas, whose DataFrame data structure enables quick and easy data loading and wrangling. We take advantage of this package immediately below, where in the last lines we use its parse_dates argument to convert the first column of our provided data — which can be downloaded here — from string to datetime format.
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import math
from sklearn import ensemble
from sklearn.cross_validation import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.grid_search import GridSearchCV
from datetime import datetime

#Load Data with pandas, and parse the
#first column into datetime
train = pd.read_csv('train.csv', parse_dates=[0])
test = pd.read_csv('test.csv', parse_dates=[0])
The training data provided contains the following fields:
datetime – hourly date + timestamp
season – 1 = spring, 2 = summer, 3 = fall, 4 = winter
holiday – whether the day is considered a holiday
workingday – whether the day is neither a weekend nor holiday
weather – 1: Clear, Few clouds, Partly cloudy; 2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist; 3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds; 4: Heavy Rain + Ice Pellets + Thunderstorm + Mist, Snow + Fog
temp – temperature in Celsius
atemp – “feels like” temperature in Celsius
humidity – relative humidity
windspeed – wind speed
casual – number of non-registered user rentals initiated
registered – number of registered user rentals initiated
count – number of total rentals
The data provided spans two years. The training set contains the first 19 days of each month considered, while the test set data corresponds to the remaining days in each month.
Looking ahead, we anticipate that the year, month, day of week, and hour will serve as important features for characterizing the bike demand at any given moment. These features are easily extracted from the datetime formatted-values loaded above. In the following lines, we add these features to our DataFrames.
#Feature engineering
temp = pd.DatetimeIndex(train['datetime'])
train['year'] = temp.year
train['month'] = temp.month
train['hour'] = temp.hour
train['weekday'] = temp.weekday

temp = pd.DatetimeIndex(test['datetime'])
test['year'] = temp.year
test['month'] = temp.month
test['hour'] = temp.hour
test['weekday'] = temp.weekday

#Define features vector
features = ['season', 'holiday', 'workingday', 'weather',
            'temp', 'atemp', 'humidity', 'windspeed',
            'year', 'month', 'weekday', 'hour']

Evaluation metric
The evaluation metric that Kaggle uses to rank competing algorithms is the Root Mean Squared Logarithmic Error (RMSLE).
\begin{eqnarray}
J = \sqrt{\frac{1}{n} \sum_{i=1}^n [\ln(p_i + 1) - \ln(a_i+1)]^2 }
\end{eqnarray}

Here:
$n$ is the number of hours in the test set
$p_i$ is the predicted number of bikes rented in a given hour
$a_i$ is the actual rental count
$\ln(x)$ is the natural logarithm
With ranking determined as above, our aim becomes to accurately guess the natural logarithm of bike demand at different times (actually demand count plus one, in order to avoid infinities associated with times where demand is nil). To facilitate this, we add the logarithm of the casual, registered, and total counts to our training DataFrame below.
#the evaluation metric is the RMSE in the log domain,
#so we should transform the target columns into log domain as well.
for col in ['casual', 'registered', 'count']:
    train['log-' + col] = train[col].apply(lambda x: np.log1p(x))
Notice that in the code above we use the $log1p()$ function instead of the more familiar $log(1+x)$. For large values of $x$, these two functions are actually equivalent. However, at very small values of $x$, the two can disagree. The source of the discrepancy is floating point error: For very small $x$, python will send $1+x \to 1$, which when supplied as an argument to $log(1+x)$ will return $log(1)=0$. The function $log1p(x) \sim x$ in this limit. The difference is not very important when the result is being added to other numbers, but can be very important in a multiplicative operation. We use this function instead for this reason. The inverse of $log(x+1)$ is $e^{x} -1$ — an operation we will also need to make use of later, in order to return linear-scale demand values. We’ll use an analog of the $log1p()$ function, numpy’s $expm1()$ function, to carry out this inversion.
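A quick demonstration of the floating-point behavior described above:

import numpy as np

x = 1e-16
print(np.log(1 + x))            # 0.0 -- the sum 1 + x already rounded to 1.0
print(np.log1p(x))              # 1e-16 -- accurate for tiny x
print(np.expm1(np.log1p(x)))    # recovers x; expm1 is the inverse we use later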
Model development
A first pass
The Gradient Boosting Machine (GBM) we will be using has some associated hyperparameters that will eventually need to be optimized. These include:
n_estimators = the number of boosting stages, or trees, to use.
max_depth = maximum depth of the individual regression trees.
learning_rate = shrinks the contribution of each tree by the learning rate.
min_samples_leaf = the minimum number of samples required to be at a leaf node.
However, in order to get our feet wet, we’ll begin by just picking some ad hoc values for these parameters. The code below fits a GBM to the log-demand training data, and then converts predicted log-demand into the competition’s required format — in particular, the demand is output in linear scale.
clf = ensemble.GradientBoostingRegressor(n_estimators=200, max_depth=3)
clf.fit(train[features], train['log-count'])
result = clf.predict(test[features])
result = np.expm1(result)

df = pd.DataFrame({'datetime': test['datetime'], 'count': result})
df.to_csv('results1.csv', index=False, columns=['datetime', 'count'])
In the last lines above, we have used the DataFrame's to_csv() method in order to output results for competition submission. Example output is shown below. Without a hitch, we successfully submitted the results of this preliminary analysis to Kaggle. The only bad news was that our model scored in the bottom 10%. Fortunately, some simple optimizations that follow led to significant improvements in our standing.
datetime              count
2011-01-20 00:00:00   0
2011-01-20 01:00:00   0
2011-01-20 02:00:00   0
…

Hyperparameter tuning
We now turn to the challenge of tuning our GBM’s hyperparameters. In order to carry this out, we segmented our training data into a training set and a validation set. The validation set allowed us to check the accuracy of our model locally, without having to submit to Kaggle. This also helped us to avoid overfitting issues.
As mentioned earlier, the training data provided covers the first 19 days of each month. In segmenting this data, we opted to use days 17-19 for validation. We then used this validation set to optimize the model’s hyperparameters. As a first-pass at this, we again chose an ad hoc value for n_estimators, but optimized over the remaining degrees of freedom. The code follows, where we make use of GridSearchCV() to perform our parameter sweep.
#Split data into training and validation sets
temp = pd.DatetimeIndex(train['datetime'])
training = train[temp.day <= 16]
validation = train[temp.day > 16]

param_grid = {'learning_rate': [0.1, 0.05, 0.01],
              'max_depth': [10, 15, 20],
              'min_samples_leaf': [3, 5, 10, 20],
              }

est = ensemble.GradientBoostingRegressor(n_estimators=500)

#this may take awhile
gs_cv = GridSearchCV(est, param_grid, n_jobs=4).fit(
    training[features], training['log-count'])

#best hyperparameter setting
gs_cv.best_params_

#Baseline error
error_count = mean_absolute_error(validation['log-count'],
                                  gs_cv.predict(validation[features]))

result = gs_cv.predict(test[features])
result = np.expm1(result)
df = pd.DataFrame({'datetime': test['datetime'], 'count': result})
df.to_csv('results2.csv', index=False, columns=['datetime', 'count'])

Note: If you want to run n_jobs > 1 on a Windows machine, the script needs to be in an “if __name__ == ‘__main__’:” block. Otherwise the script will fail.
Best Parameters     Value
learning_rate       0.05
max_depth           10
min_samples_leaf    20
The optimized parameters are shown above. Submitting the resulting model to Kaggle, we found that we had moved from the bottom 10% of models to the top 20%! An awesome improvement, but we still have one final hyperparameter to optimize.
Tuning the number of estimators
In boosted models, training set performance will always improve as the number of estimators is increased. However, at large estimator number, overfitting can start to become an issue. Learning curves provide a method for optimizing this parameter. These are constructed by plotting the error on both the training and validation sets as a function of the number of estimators used. The code below generates such a curve for our model.
error_train = []
error_validation = []
for k in range(10, 501, 10):
    clf = ensemble.GradientBoostingRegressor(n_estimators=k,
                                             learning_rate=.05,
                                             max_depth=10,
                                             min_samples_leaf=20)
    clf.fit(training[features], training['log-count'])
    result = clf.predict(training[features])
    error_train.append(mean_absolute_error(result, training['log-count']))
    result = clf.predict(validation[features])
    error_validation.append(mean_absolute_error(result, validation['log-count']))

#Plot the data
x = range(10, 501, 10)
plt.style.use('ggplot')
plt.plot(x, error_train, 'k')
plt.plot(x, error_validation, 'b')
plt.xlabel('Number of Estimators', fontsize=18)
plt.ylabel('Error', fontsize=18)
plt.legend(['Train', 'Validation'], fontsize=18)
plt.title('Error vs. Number of Estimators', fontsize=20)
Notice in the plot that by the time the number of estimators in our GBM reaches about 80, the error of our model as applied to the validation set starts to slowly increase, though the error on the training set continues to decrease steadily. The diagnosis is that the model begins to overfit at this point. Moving forward, we will set n_estimators to 80, rather than 500, the value we were using above. Reducing the number of estimators reduced the calculated error and moved us to a higher position on the leaderboard.
Separate models for registered and casual users
Reviewing the data, we see that we have info regarding two types of riders: casual and registered. It is plausible that each group's behavior differs, and that we might be able to improve our performance by modeling each separately. Below, we carry this out, and then also merge the two groups' predicted values to obtain a net predicted demand. We also repeat the hyperparameter sweep steps covered above — this returned similar values. Resubmitting the resulting model, we found we had increased our standing in the competition by a few percent.
def merge_predict(model1, model2, test_data):
    #Combine the predictions of two separately trained models.
    #The input models are in the log domain; return predictions
    #in the original domain.
    p1 = np.expm1(model1.predict(test_data))
    p2 = np.expm1(model2.predict(test_data))
    p_total = (p1 + p2)
    return(p_total)

est_casual = ensemble.GradientBoostingRegressor(n_estimators=80, learning_rate=.05)
est_registered = ensemble.GradientBoostingRegressor(n_estimators=80, learning_rate=.05)

param_grid2 = {'max_depth': [10, 15, 20],
               'min_samples_leaf': [3, 5, 10, 20],
               }

gs_casual = GridSearchCV(est_casual, param_grid2, n_jobs=4).fit(
    training[features], training['log-casual'])
gs_registered = GridSearchCV(est_registered, param_grid2, n_jobs=4).fit(
    training[features], training['log-registered'])

result3 = merge_predict(gs_casual, gs_registered, test[features])
df = pd.DataFrame({'datetime': test['datetime'], 'count': result3})
df.to_csv('results3.csv', index=False, columns=['datetime', 'count'])
The last step is to submit a final set of model predictions, this time training on the full labeled dataset provided. With these simple steps, we ended up in the top 11% on the competition’s leaderboard with a rank of 280/2467!
est_casual = ensemble.GradientBoostingRegressor(n_estimators=80,
                                                learning_rate=.05,
                                                max_depth=10,
                                                min_samples_leaf=20)
est_registered = ensemble.GradientBoostingRegressor(n_estimators=80,
                                                    learning_rate=.05,
                                                    max_depth=10,
                                                    min_samples_leaf=20)

est_casual.fit(train[features].values, train['log-casual'].values)
est_registered.fit(train[features].values, train['log-registered'].values)
result4 = merge_predict(est_casual, est_registered, test[features])

df = pd.DataFrame({'datetime': test['datetime'], 'count': result4})
df.to_csv('results4.csv', index=False, columns=['datetime', 'count'])

DISCUSSION
By iteratively tuning a GBM, we were able to quickly climb the leaderboard for this particular Kaggle competition. With further feature extraction work, we believe further improvements could readily be made. However, our goal here was only to practice our rapid development skills, so we won’t be spending much time on further fine-tuning. At any rate, our results have convinced us that simple boosted models can often provide excellent results.
Note: With this post, we have begun to post our python scripts and data at GitHub. Clicking on the icon at left will take you to our repository. Feel free to stop by and take a look!
featured image credit: Siren-Com
|
I need to prove that Constant Absolute Risk Aversion (CARA) is equivalent to \begin{gather} \int u'(x)dF(x) = u'(c(F,u)) \end{gather}
where $u(x)$ is a Bernoulli utility function, $F$ is the distribution of the lottery and $c(F,u)$ is the certainty equivalent.
I started from the fact that CARA is defined as $-\frac{u''(x)}{u'(x)}=a$, where $a$ is a constant and that the certainty equivalent is defined as $u(c(F,u))=\int u(x)dF(x)$.
I tried to mix up the two definitions but I am a bit lost with the interaction of integration and differentiation.
Do you have any hints?
|
There are several questions here so let me start by defining the different types of cells and standard reduction potentials.
Definition of Electrochemical (Galvanic) cell: A cell that converts chemical energy into electrical energy. In these cells a redox reaction creates electrons that can do work.
Definition of Electrolytic cell: A cell that converts electrical energy into chemical energy. In these cells the electrical energy source provides the electrons to perform a reaction.
Standard reduction potential: The tendency for a chemical species to be reduced, measured in volts at STP. The more positive the potential, the more likely the species is to be reduced.
Now, here are your two reduction equations for $\ce{Zn}$ and $\ce{Cu}$:
$$\begin{alignat}{2}\ce{Zn^2+(aq) + 2e- &-> Zn}\qquad&{-}0.76\ \mathrm V\\\ce{Cu^2+(aq) + 2e- &-> Cu}\qquad&{+}0.34\ \mathrm V\end{alignat}$$
The more positive the potential the more favorable the reaction
as it is written will be. Remember that $\Delta G = -nFE^\circ$ and that when $\Delta G$ is positive, the reaction is non-spontaneous, and when $\Delta G$ is negative, the reaction is spontaneous. Positive values of $E^\circ$ will lead to negative values of $\Delta G$ and vice versa.
So, the reduction of $\ce{Cu^2+}$ to form $\ce{Cu}$ is more favorable than the reduction of $\ce{Zn^2+}$ to form $\ce{Zn}$. This means that $\ce{Cu^2+}$ is a better oxidant than $\ce{Zn^2+}$. For an electrochemical cell, the cell potential can be calculated by the following equation:
$$E^\circ_\text{cell}=E^\circ_\text{cathode}-E^\circ_\text{anode}$$
For a working electrochemical cell we need $E^\circ_\text{cell}$ to be positive. I would use $\ce{Zn}$ as the anode (oxidation) and $\ce{Cu^2+}$ as the cathode, giving $E^\circ_\text{cell} = 0.34\ \mathrm V - (-0.76\ \mathrm V) = +1.10\ \mathrm V$.
|
I would like to solve the following nonlinear PDE:
$$ \frac{\partial^2 \phi}{\partial x^2} - \frac{\partial^2 \phi}{\partial t^2} = \lambda |\phi|^2 \phi $$
I was trying:
NDSolve[{D[f[x, t], x, x] - D[f[x, t], t, t] == f[x, t]^3, f[x, 0] == Sin[2*Pi*x], f[0, t] == 0, f[1, t] == 0}, f, {x, 0, 1}, {t, 0, 1}]
but I am consistently getting
NDSolve::femnonlinear: Nonlinear coefficients are not supported in this version of NDSolve.
Is there any solver for non-linear PDEs?
|
I work on an inverse problem for my Ph.D. research, which for simplicity's sake we'll say is determining $\beta$ in
$L(\beta)u \equiv -\nabla\cdot(k_0e^\beta\nabla u) = f$
from some observations $u^o$; $k_0$ is a constant and $f$ is known. This is typically formulated as an optimization problem for extremizing
$J[u, \lambda; \beta] = \frac{1}{2}\int_\Omega(u(x) - u^o(x))^2dx + \int_\Omega\lambda(L(\beta)u - f)dx$
where $\lambda$ is a Lagrange multiplier. The functional derivative of $J$ with respect to $\beta$ can be computed by solving the adjoint equation
$L(\beta)\lambda = u - u^o.$
Some regularizing functional $R[\beta]$ is added to the problem for the usual reasons.
The unspoken assumption here is that the observed data $u^o$ are defined continuously throughout the domain $\Omega$. I think it might be more appropriate for my problem to instead use
$J[u, \lambda; \beta] = \sum_{n = 1}^N\frac{(u(x_n) - u^o(x_n))^2}{2\sigma_n^2} + \int_\Omega\lambda(L(\beta)u - f)dx$
where $x_n$ are the points at which the measurements are taken and $\sigma_n$ is the standard deviation of the $n$-th measurement. The measurements of this field are often spotty and missing chunks; why interpolate to get a continuous field of dubious fidelity if that can be avoided?
This gives me pause because the adjoint equation becomes
$L(\beta)\lambda = \sum_{n = 1}^N\frac{u(x_n) - u^o(x_n)}{\sigma_n^2}\delta(x - x_n)$
where $\delta$ is the Dirac delta function. I'm solving this using finite elements, so in principle integrating a shape function against a delta function amounts to evaluating the shape function at that point. Still, the regularity issues probably shouldn't be dismissed out of hand. My best guess is that the objective functional should be defined in terms of the finite element approximation to all the fields, rather than in terms of the real fields and then discretized after.
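To illustrate the point-evaluation claim, here is a minimal 1D sketch (my own construction; the function name point_load_vector is made up): for piecewise-linear elements, the load vector for a Dirac source at $x_n$ is just the two hat functions straddling $x_n$ evaluated there.

import numpy as np

def point_load_vector(nodes, x_n, weight=1.0):
    # assemble b_i = weight * phi_i(x_n) for 1D linear (hat) elements
    b = np.zeros(len(nodes))
    k = int(np.clip(np.searchsorted(nodes, x_n) - 1, 0, len(nodes) - 2))
    h = nodes[k + 1] - nodes[k]
    s = (x_n - nodes[k]) / h           # local coordinate in [0, 1]
    b[k], b[k + 1] = weight * (1 - s), weight * s
    return b

nodes = np.linspace(0.0, 1.0, 11)
print(point_load_vector(nodes, 0.33))  # weight split 0.7/0.3 over nodes 3 and 4

In the objective above, weight would be $(u(x_n) - u^o(x_n))/\sigma_n^2$ for each measurement point.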
I can't find any comparisons of assuming continuous or pointwise measurements in inverse problems in the literature, either in relation to the specific problem I'm working on or generally. Often pointwise measurements are used without any mention of the incipient regularity issues, e.g. here.
Is there any published work comparing the assumptions of continuous vs. pointwise measurements? Should I be concerned about the delta functions in the pointwise case?
|
Neutrino Mass and Proton Lifetime in a Realistic Supersymmetric SO(10) Model
Date: 2015
Author:
Severson, Matthew Michael
Advisor
Mohapatra, Rabindra N
Abstract
This work presents a complete analysis of fermion fitting and proton decay in a supersymmetric $SO(10)$ model previously suggested by Dutta, Mimura, and Mohapatra. A key question in any grand unified theory is whether it satisfies the stringent experimental lower limits on the partial lifetimes of the proton. In more generic models, substantial fine-tuning is required among GUT-scale parameters to satisfy the limits. In the proposed model, the {\bf 10}, $\overline{\bf{126}}$, and {\bf 120} Yukawa couplings contributing to fermion masses have restricted textures intended to give favorable results for proton lifetime, while still giving rise to a realistic fermion sector, without the need for fine-tuning, even for large $\tan\beta$, and for either type-I or type-II dominance in the neutrino mass matrix. In this thesis, I investigate the above hypothesis at a strict numerical level of scrutiny; I obtain a valid fit for the entire fermion sector for both types of seesaw dominance, including $\theta_{13}$ in good agreement with the most recent data. For the case with type-II seesaw, I find that, using the Yukawa couplings fixed by the successful fermion sector fit, proton partial lifetime limits are readily satisfied for all but one of the pertinent decay modes for nearly arbitrary values of the triplet-Higgs mixing parameters, with the $K^+ \bar\nu$ mode requiring a minor ${\cal O}(10^{-1})$ cancellation in order to satisfy its limit. I also find a maximum partial lifetime for that mode of $\tau(K^+ \bar\nu) \sim 10^{36}$\,years. For the type-I seesaw case, I find that $K^+ \bar\nu$ decay mode is satisfied for any values of the triplet mixing parameters giving no major enhancement, and all other modes are easily satisfied for arbitrary mixing values; I also find a maximum partial lifetime for $K^+ \bar\nu$ of nearly $10^{38}$\,years, which is largely sub-dominant to gauge boson decay channels.
|
I'm having trouble finding the equations of motion for the Lagrangian:
$$L = \frac{\dot x^2}{y^2}+\frac{\dot y^2}{y^2}$$
So far, I've got the following: $$\frac{\partial L}{\partial \dot x} = \dot x \frac{2}{y^2}$$ $$\frac{\partial L}{\partial \dot y} = \dot y \frac{2}{y^2}$$ $$\frac{\partial L}{\partial x} = 0$$ $$\frac{\partial L}{\partial y} = -\dot x^2 \frac{2}{y^3} -\dot y^2 \frac{2}{y^3}$$
I'm just having trouble with the final step. I don't know how to correctly solve $$\frac{d}{d \lambda }\frac{\partial L}{\partial \dot x}$$ Would it simply be $\ddot x\frac{2}{y^2}- \dot x \frac{4}{y^3}\dot y$, or am I missing something here?
|
The Fundamental Group of the Circle, Part 2 EDIT 11/22/16: I encourage the reader to skip this post and proceed directly to Part 3. Part 2 merely contains the justification of a shortcut that we never actually use in the remainder of this series. In particular, we are certainly not proving 'well-definedness,' even though that's what I claim. (Apologies, dear readers!) For more, see my explanation in the comments below.
This week we continue our proof from Hatcher's
Algebraic Topology that the fundamental group of the circle is isomorphic to $\mathbb{Z}$. Recall our outline:
Part 1: Set-up/observations
Part 2: Show $\Phi$ is well defined Part 3: Show $\Phi$ is a group homomorphism Part 4: Show $\Phi$ is surjective Part 5: Show $\Phi$ is injective (Note: parts 4 and 5 require two lemmas whose proofs we will defer until part 5) Part 6: Prove the two lemmas used in parts 4 and 5
Last week we defined the map $\Phi:\mathbb{Z}\to\pi_1(S^1)$ by $n\mapsto[\omega_n]$ where $\omega_n:[0,1]\to S^1$ is the loop given by $s\mapsto (\cos{2\pi ns}, \sin{2\pi ns})$. Our goal for today is to prove that $\Phi$ is well-defined, and the proof is quite simple.
Claim: Φ is Well-Defined
Recall that $\omega_n=p\circ\widetilde{\omega}_n$ is a loop from $I$ to $S^1$ where $\widetilde{\omega}_n:I\to\mathbb{R}$ is a path (a straight line, actually) from $0$ to $n$. We wish to show that the equivalence class $[\omega_n]$ doesn't depend on its representative. In other words, we wish to show that $\omega_n$ doesn't depend on the path $\widetilde{\omega}_n$ from $0$ to $n$.
To this end, let $\widetilde{\gamma}_n$ be any
other path from $0$ to $n$ in $\mathbb{R}$. Then $\widetilde{\omega}_n\simeq \widetilde{\gamma}_n$ are homotopic by the straight line homotopy $F:I\times I\to\mathbb{R}$ where $F(s,t)=(1-t)\widetilde{\omega}_n(s)+t\widetilde{\gamma}_n(s)$.
Set $\omega_n'=p\circ \widetilde{\gamma}_n$. If we can show the existence of a homotopy $G:I\times I\to S^1$ such that $$\omega_n=p\circ \widetilde{\omega}_n\;\underset{G}{\simeq}\; p\circ\widetilde{\gamma}_n=\omega_n',$$ then we'll be done since we can conclude $[\omega_n]=[\omega_n']$! But this is easy enough since we can simply let $G=p\circ F$ be the projection of the homotopy $F$ in $\mathbb{R}$ down to $S^1$:
And indeed $G$ is a homotopy since it's continuous (as both $p$ and $F$ are) and
$G(s,0)=p(F(s,0))=p(\widetilde{\omega}_n(s))=\omega_n(s)$ $G(s,1)=p(F(s,1))=p(\widetilde{\gamma}_n(s))=\omega_n'(s)$.
Therefore $[\omega_n]=[\omega_n']$ and so $\Phi$ is indeed well-defined.
Next time, we'll show that $\Phi$ is in fact a group homomorphism.
|
When you fit a generalized linear model (GLM) in R and call
confint on the model object, you get confidence intervals for the model coefficients. But you also get an interesting message:
Waiting for profiling to be done...
What's that all about? What exactly is being profiled? Put simply, it's telling you that it's calculating a
profile likelihood ratio confidence interval.
The typical way to calculate a 95% confidence interval is to multiply the standard error of an estimate by some normal quantile such as 1.96 and add/subtract that product to/from the estimate to get an interval. In the context of GLMs, we sometimes call that a Wald confidence interval.
Another way to determine an upper and lower bound of plausible values for a model coefficient is to find the minimum and maximum value of the set of all coefficients that satisfy the following:
\[-2\log\left(\frac{L(\beta_{0}, \beta_{1}|y_{1},…,y_{n})}{L(\hat{\beta_{0}}, \hat{\beta_{1}}|y_{1},…,y_{n})}\right) < \chi_{1,1-\alpha}^{2}\]
Inside the parentheses is a ratio of
likelihoods. In the denominator is the likelihood of the model we fit. In the numerator is the likelihood of the same model but with different coefficients. (More on that in a moment.) We take the log of the ratio and multiply by -2. This gives us a likelihood ratio test (LRT) statistic. This statistic is typically used to test whether a coefficient is equal to some value, such as 0, with the null likelihood in the numerator (model without coefficient, that is, equal to 0) and the alternative or estimated likelihood in the denominator (model with coefficient). If the LRT statistic is less than \(\chi_{1,0.95}^{2} \approx 3.84\), we fail to reject the null. The coefficient is statistically not much different from 0. That means the likelihood ratio is close to 1. The likelihood of the model without the coefficient is almost as high as that of the model with it. On the other hand, if the ratio is small, that means the likelihood of the model without the coefficient is much smaller than the likelihood of the model with the coefficient. This leads to a larger LRT statistic since it's being log transformed, which leads to a value larger than 3.84 and thus rejection of the null.
Now in the formula above, we are seeking all such coefficients in the numerator that would make it a true statement. You might say we're “profiling” many different null values and their respective LRT test statistics.
Do they fit the profile of a plausible coefficient value in our model? The smallest value we can get without violating the condition becomes our lower bound, and likewise with the largest value. When we're done we'll have a range of plausible values for our model coefficient that gives us some indication of the uncertainty of our estimate.
Let's load some data and fit a binomial GLM to illustrate these concepts. The following R code comes from the help page for
confint.glm. This is an example from the classic Modern Applied Statistics with S.
ldose is a dosing level and
sex is self-explanatory.
SF is number of successes and failures, where success is number of dead worms. We're interested in learning about the effects of dosing level and sex on number of worms killed. Presumably this worm is a pest of some sort.
# example from Venables and Ripley (2002, pp. 190-2.)
ldose <- rep(0:5, 2)
numdead <- c(1, 4, 9, 13, 18, 20, 0, 2, 6, 10, 12, 16)
sex <- factor(rep(c("M", "F"), c(6, 6)))
SF <- cbind(numdead, numalive = 20-numdead)
budworm.lg <- glm(SF ~ sex + ldose, family = binomial)
summary(budworm.lg)
## ## Call: ## glm(formula = SF ~ sex + ldose, family = binomial) ## ## Deviance Residuals: ## Min 1Q Median 3Q Max ## -1.10540 -0.65343 -0.02225 0.48471 1.42944 ## ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -3.4732 0.4685 -7.413 1.23e-13 *** ## sexM 1.1007 0.3558 3.093 0.00198 ** ## ldose 1.0642 0.1311 8.119 4.70e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## (Dispersion parameter for binomial family taken to be 1) ## ## Null deviance: 124.8756 on 11 degrees of freedom ## Residual deviance: 6.7571 on 9 degrees of freedom ## AIC: 42.867 ## ## Number of Fisher Scoring iterations: 4
The coefficient for
ldose looks significant. Let's determine a confidence interval for the coefficient using the
confint function. We call
confint on our model object,
budworm.lg and use the
parm argument to specify that we only want to do it for
ldose:
confint(budworm.lg, parm = "ldose")
## Waiting for profiling to be done... ## 2.5 % 97.5 % ## 0.8228708 1.3390581
We get our “waiting” message though there really was no wait. If we fit a larger model and request multiple confidence intervals, then there might actually be a waiting period of a few seconds. The lower bound is about 0.82 and the upper bound about 1.34. We might say every increase in dosing level increases the log odds of killing worms by at least 0.8. We could also exponentiate to get a CI for an odds ratio estimate:
exp(confint(budworm.lg, parm = "ldose"))
## Waiting for profiling to be done... ## 2.5 % 97.5 % ## 2.277027 3.815448
The odds of “success” (killing worms) are at least 2.3 times higher at one dosing level versus the next lower dosing level.
To better understand the profile likelihood ratio confidence interval, let's do it “manually”. Recall the denominator in the formula above was the likelihood of our fitted model. We can extract that with the
logLik function:
den <- logLik(budworm.lg)
den
## 'log Lik.' -18.43373 (df=3)
The numerator was the likelihood of a model with a
different coefficient. Here's the likelihood of a model with a coefficient of 1.05:
num <- logLik(glm(SF ~ sex + offset(1.05*ldose), family = binomial))
num
## 'log Lik.' -18.43965 (df=2)
Notice we used the
offset function. That allows us to fix the coefficient to 1.05 and not have it estimated.
Since we already extracted the
log likelihoods, we need to subtract them. Remember this rule from algebra?
\[\log\frac{M}{N} = \log M – \log N\]
So we subtract the denominator from the numerator, multiply by -2, and check if it's less than 3.84, which we calculate with
qchisq(p = 0.95, df = 1)
-2*(num - den)
## 'log Lik.' 0.01184421 (df=2)
-2*(num - den) < qchisq(p = 0.95, df = 1)
## [1] TRUE
It is. 1.05 seems like a plausible value for the
ldose coefficient. That makes sense since the estimated value was 1.0642. Let's try it with a larger value, like 1.5:
num <- logLik(glm(SF ~ sex + offset(1.5*ldose), family = binomial))
-2*(num - den) < qchisq(p = 0.95, df = 1)
## [1] FALSE
FALSE. 1.5 seems too big to be a plausible value for the
ldose coefficient.
Now that we have the general idea, we can program a
while loop to check different values until we exceed our threshold of 3.84.
cf <- budworm.lg$coefficients[3] # fitted coefficient 1.0642
cut <- qchisq(p = 0.95, df = 1) # about 3.84
e <- 0.001 # increment to add to coefficient
LR <- 0 # to kick start our while loop
while(LR < cut){
cf <- cf + e
num <- logLik(glm(SF ~ sex + offset(cf*ldose), family = binomial))
LR <- -2*(num - den)
}
(upper <- cf)
## ldose ## 1.339214
To begin we save the original coefficient to
cf, store the cutoff value to
cut, define our increment of 0.001 as
e, and set
LR to an initial value of 0. In the loop we increment our coefficient estimate which is used in the
offset function in the estimation step. There we extract the log likelihood and then calculate
LR. If
LR is less than
cut (3.84), the loop starts again with a new coefficient that is 0.001 higher. We see that our upper bound of 1.339214 is very close to what we got above using
confint (1.3390581). If we set
e to smaller values we'll get closer.
We can find the LR profile lower bound in a similar way. Instead of adding the increment we subtract it:
cf <- budworm.lg$coefficients[3] # reset cf
LR <- 0 # reset LR
while(LR < cut){
cf <- cf - e
num <- logLik(glm(SF ~ sex + offset(cf*ldose), family = binomial))
LR <- -2*(num - den)
}
(lower <- cf)
## ldose ## 0.822214
The result, 0.822214, is very close to the lower bound we got from
confint (0.8228708).
This is a
very basic implementation of calculating a likelihood ratio confidence interval. It is only meant to give a general sense of what's happening when you see that message
Waiting for profiling to be done.... I hope you found it helpful. To see how R does it, enter
getAnywhere(profile.glm) in the console and inspect the code. It's not for the faint of heart.
I have to mention the book Analysis of Categorical Data with R, from which I gained a better understanding of the material in this post. The authors have kindly shared their R code at the following web site if you want to have a look: http://www.chrisbilder.com/categorical/
To see how they “manually” calculate likelihood ratio confidence intervals, go to the following R script and see the section “Examples of how to find profile likelihood ratio intervals without confint()”: http://www.chrisbilder.com/categorical/Chapter2/Placekick.R
|
The Fundamental Group of the Circle, Part 3
Welcome back to our proof that the fundamental group of the circle is isomorphic to $\mathbb{Z}$. Today's post is part 3 of our outline:
Part 1: Set-up/observations
Part 2: Show $\Phi$ is well defined Part 3: Show $\Phi$ is a group homomorphism Part 4: Show $\Phi$ is surjective Part 5: Show $\Phi$ is injective (Note: parts 4 and 5 require two lemmas whose proofs we will defer until part 5) Part 6: Prove the two lemmas used in parts 4 and 5
Last week we showed that the map $\Phi:\mathbb{Z}\to\pi_1(S^1)$ given by $n\mapsto[\omega_n]$, where $\omega_n:[0,1]\to S^1$ is the loop $s\mapsto (\cos{2\pi n s}, \sin{2\pi n s})$, is well-defined. Our goal for today is to show that it is indeed a group homomorphism.
Claim: Φ is a Homomorphism, i.e. Φ(m+n)=Φ(m)∙Φ(n)
Recall that $\omega_n=p\circ\widetilde{\omega}_n$ is a loop from $I$ to $S^1$ where $\widetilde{\omega}_n:I\to\mathbb{R}$ is a path (a straight line, actually) from $0$ to $n$.
We begin by defining a translation map $\tau_m:\mathbb{R}\to\mathbb{R}$ by $x\mapsto x+m$ for $m\in\mathbb{Z}$.
Notice that in our "helix model" of $\mathbb{R}$, this is a shift up or down by $|m|$. This observation lets us conclude that $$\widetilde{\omega}_m\cdot \tau_m \widetilde{\omega}_n\;\underset{F}{\simeq}\;\widetilde{\omega}_{m+n}$$ by some homotopy $F:I\times I\to\mathbb{R}$ in $\mathbb{R}$. This becomes evident once we pause for a bit to think about what these paths really are:
On the right-hand side, $\widetilde{\omega}_{m+n}$ is simply the line (path) from $0$ to $m+n$ in $\mathbb{R}$.
On the left-hand side, $\widetilde{\omega}_m\cdot \tau_m \widetilde{\omega}_n$ is the product of the path from $0$ to $m$ (in $\mathbb{R}$) with the path from $m$ to $m+n$ (in $\mathbb{R}$). (Note: $\tau_m \widetilde{\omega}_n$ takes the line which starts at $0$ and ends at $n$ and shifts it so that the starting point is now $m$ and the ending point is now $m+n$.) In other words, it is the union of the green and blue paths below.
So we must have that $\widetilde{\omega}_{m+n}$ (the red path) and $\widetilde{\omega}_m\cdot \tau_m \widetilde{\omega}_n$ (the green/blue path) are homotopic - they are "basically the same"!
So when we project each of these paths onto $S^1$, we see that the resulting paths are
also homotopic: $$p\circ(\widetilde{\omega}_m\cdot \tau_m \widetilde{\omega}_n)\underset{G}{\simeq}p\circ \widetilde{\omega}_{m+n}$$ where $G:I\times I\to S^1$ is $G=p\circ F$. And now we're basically done! We simply need to write down the following equalities:
So $\Phi$ is indeed a homomorphism. QED! Next week we'll move on to Part 4 by proving that $\Phi$ is surjective.
|
Academic Wednesday, October 24, 2018
Workshop | October 24 | 9 a.m.-3 p.m. | 24 University Hall
Seminar | October 24 | 12-1 p.m. | 106 Stanley Hall
Nicole Seiberlich, Case Western Reserve University
Magnetic Resonance Imaging of the heart is challenging due to cardiac and respiratory motion, and making quantitative measurements of tissue properties using MRI is valuable for physicians but complicated by this motion. This seminar will describe new techniques developed in the Seiberlich Lab at CWRU to accelerate data collection and reconstruction to enable real-time cardiac MRI and...
Plant and Microbial Biology Seminar: "Evolution and function of large DNA viruses as beneficial symbionts of insects"
Seminar | October 24 | 12-1 p.m. | 101 Barker Hall
Michael R. Strand, University of Georgia
Seminar | October 24 | 12-1 p.m. | Valley Life Sciences Building, 3101 VLSB, Grinnell-Miller Library
MVZ Lunch is a graduate level seminar series (IB264) based on current and recent vertebrate research. Professors, graduate students, staff, and visiting researchers present on current and past research projects. The seminar meets every Wednesday from 12-1pm in the Grinnell-Miller Library. Enter through the MVZ's Main Office, 3101 Valley Life Sciences Building, and please let the receptionist...
CITRIS Research Exchange Seminar with Dawn Song on "Oasis: Privacy-preserving Smart Contracts at Scale": Blockchain * Cryptography * Startups
Conference/Symposium | October 24 | 12-1 p.m. | Sutardja Dai Hall, 310 Sutardja Dai Hall
About the talk:
Oasis Labs is building a privacy-first cloud computing platform on blockchain. With privacy built in at every layer of the platform and a new blockchain architecture, Oasis is the first smart contract platform to provide security, privacy, and high scalability. Oasis technology's unique properties and capabilities enable a broad spectrum of new applications from finance and...
Seminar | October 24 | 12-1 p.m. | 939 Evans Hall
Giovanni Canepa, University of Zurich
This week, the GRASP seminar hosts a talk by Giovanni Canepa (Uni Zurich) on "General Relativity on manifolds with corners in the BV-BFV formalism". Abstract: The BV-BFV formalism allows to treat field theories and their symmetries in a coherent way on manifolds with boundaries. It is possible to iterate the construction on manifolds with corners. We will introduce the formalism and the... More >
Colloquium | October 24 | 12:10-1:15 p.m. | 1104 Berkeley Way West
Much research has robustly shown that individuals benefit from making a first offer in negotiations and has advocated high offers for sellers and low offers for buyers. However, little research has considered how extreme (unreasonably high for sellers and unreasonably low for buyers) offers, as well as the negotiators who make them, are perceived. Experiment 1 found that, compared to moderate...
Seminar | October 24 | 12:30-2 p.m. | C320 Cheit Hall
Patrick Kline, UC Berkeley
Colloquium | October 24 | 12:30-2 p.m. | 223 Moses Hall
This is one session in the Fall 2018 African Studies Colloquium series.
Workshop | October 24 | 12:45-2 p.m. | 110 Boalt Hall, School of Law
Berkeley Journal of Criminal Law
We will be discussing how to effectively source collect and complete Bluebooking assignments to help ensure our articles are up to publishable quality.
Workshop | October 24 | 1-2 p.m. | 101 Morgan Hall
Getting Started in Undergraduate Research
If you are thinking about getting involved in undergraduate research, this workshop is a great place to start! You will get a broad overview of the research opportunities available to undergraduates on campus, and suggestions on how to find them. We will also let you know about upcoming deadlines and eligibility requirements for some of...
Workshop | October 24 | 1-2:30 p.m. | 309 Sproul Hall
Hear from a panel of experts - an acquisitions editor, a first-time author, and an author rights expert - about the process of turning your dissertation into a book. You'll come away from this panel discussion with practical advice about revising your dissertation, writing a book proposal, approaching editors, signing your first contract, and navigating the peer review and...
Course | October 24 | 1:30-3:30 p.m. | 240 Bechtel Engineering Center
This training is required for anyone who is listed on a Biological Use Authorization (BUA) application form that is reviewed by the Committee for Laboratory and Environmental Biosafety (CLEB). A BUA is required for anyone working with recombinant DNA molecules, human clinical specimens or agents that may infect humans, plants or animals. This safety training will discuss the biosafety risk...
Topology Seminar (Introductory Talk): Introduction of geometric finiteness in hyperbolic space $ℍ^3$
Seminar | October 24 | 2-3 p.m. | 736 Evans Hall
Beibei Liu, UC Davis
The notion of geometrically finite discrete groups was originally defined by Ahlfors for subgroups of isometries of the 3-dimensional hyperbolic space, and alternative definitions of geometric finiteness were later given by Marden, Beardon and Maskit, and Thurston. We will focus on the definition given by Beardon and Maskit, and review Bishop's characterization of geometrically finite discrete...
Seminar | October 24 | 3-4 p.m. | 1011 Evans Hall
Alex Dunlap, Stanford University
The (d+1)-dimensional KPZ equation
\[ \partial_t h = \nu \Delta h + \frac{\lambda}{2}|\nabla h|^2 + \sqrt{D}\dot{W}, \]
in which $\dot{W}$ is a space-time white noise, is a natural model for the growth of $d$-dimensional random surfaces. These surfaces are extremely rough due to the white noise forcing, which leads to difficulties in interpreting the nonlinear term in the equation. In...

Department of Psychology Faculty Research Lecture: The role of self-distancing in enabling adaptive behavior under stress: Implications for emotion regulation and self-control
Colloquium | October 24 | 3 p.m. | 1104 Berkeley Way West
Ozlem Ayduk, Professor, University of California, Berkeley
This talk will describe a program of research on the emotion regulatory benefits of self-distancing -- the process of transcending one's egocentric point of view in the here-and-now. I will present data from multiple levels of analyses (e.g., behavioral, neural) using a variety of research designs (i.e., correlational, experimental, longitudinal) that elucidate how and why self-distancing might...
Berkeley ACM A.M. Turing Laureate Lecture: Computational Complexity in Theory and in Practice with Richard Karp
Colloquium | October 24 | 4-5 p.m. | Soda Hall, 306 (HP Auditorium)
Richard Karp, UC Berkeley
Computational complexity theory measures the complexity of a problem by the best possible asymptotic growth rate of a correct algorithm for the exact or approximate solution. The phenomena of NP completeness and hardness of approximation often lead to a pessimistic conclusion. In practice, one seeks algorithms that perform well on typical instances, as measured by computational experiments or...
Seminar | October 24 | 4-5 p.m. | 114 Morgan Hall
Rana Gupta, Touchstone Diabetes Center Department of Internal Medicine UT Southwestern Medical Center
Colloquium | October 24 | 4-6 p.m. | 180 Tan Hall
Shane Ardo, University of California, Irvine
Most electrochemical technologies that operate under ambient conditions require ion-conducting polymer electrolytes. These polymers are passive in that electric bias drives ion migration in the thermodynamically favored direction. Recently, my group engineered two important features into passive ion-selective polymers to introduce the active function of photovoltaic action and demonstration of an...
Seminar | October 24 | 4-5 p.m. | 1011 Evans Hall
Claire Tomlin, UC Berkeley
A great deal of research in recent years has focused on robot learning. In many applications, guarantees that specifications are satisfied throughout the learning process are paramount. For the safety specification, we present a controller synthesis technique based on the computation of reachable sets, using optimal control and game theory. In the first part of the talk, we will review these...
Seminar | October 24 | 4-5 p.m. | 3 Evans Hall
Beibei Liu, UC Davis
In this talk, we focus on negatively pinched Hadamard manifolds, which are complete, simply connected Riemannian manifolds with sectional curvature ranging between two negative constants. We use techniques from geometric group theory to generalize Bishop's characterization of geometric finiteness to discrete isometry subgroups of negatively pinched Hadamard manifolds.
ERG Colloquium: Alasdair Cohen: Understanding and Advancing Access to Safe Drinking Water in Rural China
Colloquium | October 24 | 4-5:30 p.m. | 126 Barrows Hall
Alasdair Cohen, Project Scientist, UC Berkeley College of Natural Resources and Berkeley Water Center
This talk will center on UC Berkeley's water-and-health focused research collaboration with the Chinese Center for Disease Control and Prevention (China CDC). I will discuss our 2013-2014 rural water treatment field research and explain how our analyses revealed that, among treatment methods, electric kettles were associated with the safest water, while bottled water was frequently contaminated...
|
The Fundamental Group of the Circle, Part 4
Welcome back to our series on the fundamental group of the circle where we're following the outline below to prove that $\pi_1(S^1)\cong \mathbb{Z}$:
Part 1: Set-up/observations
Part 2: Show $\Phi$ is well defined Part 3: Show $\Phi$ is a group homomorphism Part 4: Show $\Phi$ is surjective Part 5: Show $\Phi$ is injective (Note: parts 4 and 5 require two lemmas whose proofs we will defer until part 5) Part 6: Prove the two lemmas used in parts 4 and 5
Last week we showed that the map $\Phi:\mathbb{Z}\to\pi_1(S^1)$ by $n\mapsto[\omega_n]$ where $\omega_n:[0,1]\to S^1$ is the loop given by $s\mapsto (\cos{2\pi n s}, \sin{2\pi n s})$ is a group homomorphism. Today we will show that $\Phi$ is surjective.
Claim: Φ is Surjective
To prove the claim, we will need the following lemma whose proof we defer until Part 6:
Lemma 1 (The Path Lifting Property): For each path $f:I\to S^1$ with $f(0)=x_0$, and for each $\tilde{x_0}\in p^{-1}(x_0)$, there is a unique lift $\tilde{f}:I\to\mathbb{R}$ such that $\tilde{f}(0)=\tilde{x_0}$.
A couple of observations:
It's easy to see that a path in the helix ($\mathbb{R}$) will project down to a path in $S^1$ (we've already used this idea in Part 3). Lemma 1 tells us that the converse is true too! That is, given a path $f$ 'downstairs' in $S^1$ that starts at $x_0$, and given a lift $\tilde{x_0}$ of $x_0$, there is a path $\tilde{f}$ 'upstairs' in $\mathbb{R}$ starting at $\tilde{x_0}$ that projects down to $f$.
Notice the set $p^{-1}(x_0)$ is infinite! So the lemma says that if you fix an $\tilde{x_0}\in p^{-1}(x_0)$ (we call this a point in the "fiber" above $x_0$), then there is a map $\tilde{f}:I\to\mathbb{R}$ which takes $0$ to that $\tilde{x_0}$.
We will
assume Lemma 1 for now and will proceed to show $\Phi$ is onto.
Let $[f]\in\pi_1(S^1)$ be the path-homotopy class* of some loop in $S^1$ and pick a representative $f:I\to S^1$ with base point $(1,0)=f(0)=f(1)$. We wish to show that there is an $n\in\mathbb{Z}$ such that $\Phi(n)=[f]$, i.e. such that $f\simeq \omega_n$ for some $n$.
To this end, begin by observing that $0\in p^{-1}(1,0)=\mathbb{Z}$. Hence by Lemma 1, there is a unique lift $\tilde{f}:I\to\mathbb{R}$ such that $\tilde{f}(0)=0$. Now since $$(p\circ \tilde{f})(1)=f(1)=(1,0)\in S^1,$$ we see that $\tilde{f}(1)$ must be an integer. In other words, $\tilde{f}$ is a path from $0$ to $n$ for some integer $n$. (In the picture below, $\tilde{f}$ is the red path and $n=3$.)
Thus $\tilde{f}\underset{F}{\simeq} \widetilde{\omega}_n$ for some homotopy $F$** and so $$f= p\circ\tilde{f}\;\underset{p\circ F}{\simeq}\; p\circ\widetilde{\omega}_n=\omega_n. $$ With this we conclude $[f]=[\omega_n]=\Phi(n)$ and so $\Phi$ is surjective.
Next time, we will prove $\Phi$ is one to one.
Footnotes:
*Just as a reminder, two paths $f$ and $g$ are
path homotopic if they are homotopic and if their starting/ending points are the same.
**They are homotopic and not equal because $\tilde{f}$ isn't necessarily the
straight line $\widetilde{\omega}_n(s)=ns$ for $s\in[0,1]$.
|
Inaccessible
Revision as of 19:11, 27 December 2011
Inaccessible cardinals are the traditional entry-point to the large cardinal hierarchy (although there are some weaker large cardinal notions, such as universe cardinals).
Some properties of inaccessible cardinals:

* If $\kappa$ is inaccessible, then $V_\kappa$ is a model of ZFC, but this is not an equivalence, since the weaker notion of universe cardinal also has this feature, and universe cardinals are not all regular when they exist.
* Every inaccessible cardinal $\kappa$ is a beth fixed point, and consequently $V_\kappa=H_\kappa$.
* (Zermelo) The models of second-order ZFC are precisely the models $\langle V_\kappa,\in\rangle$ for an inaccessible cardinal $\kappa$.
* Solovay proved that if there is an inaccessible cardinal, then there is an inner model of a forcing extension satisfying ZF+DC in which every set of reals is Lebesgue measurable.
* Shelah proved that Solovay's use of the inaccessible cardinal is necessary, in the sense that in any model of ZF+DC in which every set of reals is Lebesgue measurable, there is an inner model of ZFC with an inaccessible cardinal. Consequently, the consistency of the existence of an inaccessible cardinal with ZFC is equivalent to the impossibility of constructing a non-measurable set of reals using only ZF+DC.
* The uncountable Grothendieck universes are precisely the sets of the form $V_\kappa$ for an inaccessible cardinal $\kappa$.
* The universe axiom is equivalent to the assertion that there is a proper class of inaccessible cardinals.

Weakly inaccessible
A cardinal $\kappa$ is weakly inaccessible if it is an uncountable regular limit cardinal. Under the GCH, this is equivalent to inaccessibility, since under GCH every limit cardinal is a strong limit cardinal. So the difference between weak and strong inaccessibility only arises when GCH fails badly. Every inaccessible cardinal is weakly inaccessible, but forcing arguments show that any inaccessible cardinal can become a non-inaccessible weakly inaccessible cardinal in a forcing extension, such as after adding an enormous number of Cohen reals (this forcing is c.c.c. and hence preserves all cardinals and cofinalities and hence also all regular limit cardinals). Meanwhile, every weakly inaccessible cardinal is fully inaccessible in any inner model of GCH, since it will remain a regular limit cardinal in that model and hence also be a strong limit there. In particular, every weakly inaccessible cardinal is inaccessible in the constructible universe $L$. Consequently, although the two large cardinal notions are not provably equivalent, they are equiconsistent.

Levy collapse
The Levy collapse of an inaccessible cardinal $\kappa$ is the $\lt\kappa$-support product of $\text{Coll}(\omega,\gamma)$ for all $\gamma\lt\kappa$. This forcing collapses all cardinals below $\kappa$ to $\omega$, but since it is $\kappa$-c.c., it preserves $\kappa$ itself, and hence ensures $\kappa=\omega_1$ in the forcing extension.
Inaccessible to reals
A cardinal $\kappa$ is inaccessible to reals if it is inaccessible in $L[x]$ for every real $x$. For example, after the Levy collapse of an inaccessible cardinal $\kappa$, which forces $\kappa=\omega_1$ in the extension, the cardinal $\kappa$ is of course no longer inaccessible, but it remains inaccessible to reals.

Grothendieck universe
The concept of Grothendieck universes arose in category theory out of the desire to create a hierarchy of notions of smallness, so that one may form such categories as the category of all small groups, or small rings or small categories, without running into the difficulties of Russell's paradox.
A Grothendieck universe is a transitive set $W$ that is closed under pairing, power set and unions. That is,

* (transitivity) If $b\in a\in W$, then $b\in W$.
* (pairing) If $a,b\in W$, then $\{a,b\}\in W$.
* (power set) If $a\in W$, then $P(a)\in W$.
* (union) If $a\in W$, then $\cup a\in W$.

Universe axiom
The Grothendieck universe axiom is the assertion that every set is an element of a Grothendieck universe. This is equivalent to the assertion that the inaccessible cardinals form a proper class.
|
We're very close now to building the final Navier-Stokes simulation that brought us here in the first place, but before that, let's take a quick look at the Navier-Stokes equations for an incompressible fluid, where $\vec{v}$ represents the velocity field:
$$ \nabla \cdot \vec{v} = 0 $$
$$ \frac{\partial \vec{v}}{\partial t} + (\vec{v} \cdot \nabla) \vec{v} = - \frac{1}{\rho} \nabla p + \nu \nabla^2 \vec{v} $$
The first equation is a statement of mass conservation at constant density. The second equation is the conservation of momentum. Yet, we have a problem: the continuity equation for incompressible flow does not have a dominant variable, and there is no obvious way to couple the velocity and the pressure. In the case of compressible flow, in contrast, mass continuity would provide an evolution equation for the density $\rho$, which is coupled with an equation of state relating $\rho$ and $p$.
In incompressible flow, the continuity equation $\nabla \cdot \vec{v} = 0$ provides a kinematic constraint that requires the pressure field to evolve so that the rate of expansion $\nabla \cdot \vec{v}$ vanishes everywhere. A way out of this difficulty is to construct a pressure field that guarantees continuity is satisfied; such a relation can be obtained by taking the divergence of the momentum equation. In that process, a Poisson equation for the pressure shows up!
We can obtain Poisson's equation by adding a source term to the right-hand side of Laplace's equation:$$ \frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial y^2} = b $$
This leads to a different behaviour than Laplace's equation: the finite source values inside the domain affect the solution throughout the field. Poisson's equation acts to "relax" the initial sources in the field.
In discretized form this looks pretty much exactly the same as in Step 9:$$ \frac{p^n_{i+1,j} - 2p^n_{i,j} + p^n_{i-1,j}}{\Delta x^2} + \frac{p^n_{i,j+1} - 2p^n_{i,j} + p^n_{i,j-1}}{\Delta y^2} = b^n_{i,j} $$
Rearranging the discretized equation we can get:$$ p^n_{i,j} = \frac{\Delta y^2 (p^n_{i+1,j} + p^n_{i-1,j})+ \Delta x^2 (p^n_{i,j+1} + p^n_{i,j-1}) - b^n_{i,j} \Delta x^2 \Delta y^2}{2(\Delta x^2 + \Delta y^2 )} $$
We solve this equation by assuming an initial state of $p = 0$ everywhere. Then we add the following boundary conditions:
$p = 0 \ \text{at} \ x = 0 , 2 \ \text{and} \ y = 0,1 $
The source term consists of two initial spikes inside the domain, as follows:
$ b_{i,j} = 100 \ \text{at} \ i = nx/4, \ j = ny/4 $
$ b_{i,j} = -100 \ \text{at} \ i = 3nx/4, \ j = 3ny/4 $
$ b_{i,j} = 0 \ \text{everywhere else} $
The iterations will advance in pseudo-time to relax the initial spikes. The relaxation under Poisson's equation gets slower as time progresses, however.
# Adding inline command to make plots appear under comments
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
%matplotlib notebook
def plot2D(x, y, p):
    fig = plt.figure(figsize=(11, 7), dpi=100)
    ax = fig.gca(projection='3d')
    X, Y = np.meshgrid(x, y)
    surf = ax.plot_surface(X, Y, p[:], rstride=1, cstride=1, cmap=cm.viridis,
                           linewidth=0, antialiased=False)
    ax.set_xlim(0, 2)
    ax.set_ylim(0, 1)
    ax.view_init(30, 225)
    ax.set_xlabel('$x$')
    ax.set_zlabel('$p$')
    ax.set_ylabel('$y$')
    ax.text2D(0.35, 0.95, "2D Poisson's Equation", transform=ax.transAxes);
grid_points_x = 50
grid_points_y = 50
nt = 100
xmin = 0
xmax = 2
ymin = 0
ymax = 1
dx = (xmax - xmin) / (grid_points_x - 1)
dy = (ymax - ymin) / (grid_points_y - 1)

# Initializing arrays
p = np.zeros((grid_points_x, grid_points_y))
pd = np.zeros((grid_points_x, grid_points_y))
b = np.zeros((grid_points_x, grid_points_y))
x = np.linspace(xmin, xmax, grid_points_x)
y = np.linspace(ymin, ymax, grid_points_y)

# Source initializing
b[int(grid_points_y / 4), int(grid_points_x / 4)] = 100
b[int(3 * grid_points_y / 4), int(3 * grid_points_x / 4)] = -100
for it in range(nt):
    pd = p.copy()
    p[1:-1, 1:-1] = (((pd[1:-1, 2:] + pd[1:-1, :-2]) * dy**2 +
                      (pd[2:, 1:-1] + pd[:-2, 1:-1]) * dx**2 -
                      b[1:-1, 1:-1] * dx**2 * dy**2) /
                     (2 * (dx**2 + dy**2)))
    p[0, :] = 0
    p[grid_points_y - 1, :] = 0
    p[:, 0] = 0
    p[:, grid_points_x - 1] = 0
plot2D(x,y,p)
# Imports for animation and display within a jupyter notebook
from matplotlib import animation, rc
from IPython.display import HTML

# Resetting back to initial conditions
p = np.zeros((grid_points_x, grid_points_y))
pd = np.zeros((grid_points_x, grid_points_y))
b = np.zeros((grid_points_x, grid_points_y))
b[int(grid_points_y / 4), int(grid_points_x / 4)] = 100
b[int(3 * grid_points_y / 4), int(3 * grid_points_x / 4)] = -100

# Generating the figure that will contain the animation
fig = plt.figure(figsize=(9, 5), dpi=100)
ax = fig.gca(projection='3d')
X, Y = np.meshgrid(x, y)
surf = ax.plot_surface(X, Y, p[:], rstride=1, cstride=1, cmap=cm.viridis,
                       linewidth=0, antialiased=False)
ax.set_xlabel('$x$')
ax.set_zlabel('$p$')
ax.set_ylabel('$y$')
ax.text2D(0.35, 0.95, "2D Poisson's Equation Time History", transform=ax.transAxes);
# Initialization function for funcanimation
def init():
    ax.clear()
    surf = ax.plot_surface(X, Y, p[:], rstride=1, cstride=1, cmap=cm.viridis,
                           linewidth=0, antialiased=False)
    return surf
# Main animation function, each frame represents a time step in our calculation
def animate(j):
    pd = p.copy()
    p[1:-1, 1:-1] = (((pd[1:-1, 2:] + pd[1:-1, :-2]) * dy**2 +
                      (pd[2:, 1:-1] + pd[:-2, 1:-1]) * dx**2 -
                      b[1:-1, 1:-1] * dx**2 * dy**2) /
                     (2 * (dx**2 + dy**2)))
    p[0, :] = 0
    p[grid_points_y - 1, :] = 0
    p[:, 0] = 0
    p[:, grid_points_x - 1] = 0
    ax.clear()
    surf = ax.plot_surface(X, Y, p[:], rstride=1, cstride=1, cmap=cm.viridis,
                           linewidth=0, antialiased=False)
    ax.set_xlim(0, 2)
    ax.set_ylim(0, 1)
    ax.set_xlabel('$x$')
    ax.set_zlabel('$p$')
    ax.set_ylabel('$y$')
    ax.text2D(0.35, 0.95, "2D Poisson's Equation Time History", transform=ax.transAxes);
    return surf
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=nt, interval=20)
HTML(anim.to_jshtml())
# anim.save('../gifs/2dPoisson.gif', writer='imagemagick', fps=60)
|
I have written two equations for the report of my Master thesis:
$NLL_{k} = - \sum_{j=1}^n \ln(pdf(PD_{k}^{i},skill_{i}^{j}))$
$person_{k}(interest) = \dfrac{1}{n} \sum_{p=1}^n \dfrac{person_{k}(task_{p})}{\alpha_{k}^{i}}$
In math mode everything is italic (except \ln). Is this OK, or how should this be correctly formatted?
Second, let's say I would like to use part of the equation in text. For example, I would like to write "It is visible that $person_{k}$ has very high interest." Should $person_{k}$ in this case be in italic?
|
I'm really confused about the argument in Cardy's book for why there can't be long range order in 1D for discrete models. Let me just copy it out, and hopefully someone can explain it to me.
He takes an Ising-like system as an example. We start with the ground state with all spins up, and we want to see if this state is stable against flipping the spins in some chain of length $l$. This chain has two domain walls at the endpoints, so we get an energy change of $4J$. Then the claim is that there is an entropy of $\log l$ associated with this chain, since "each wall may occupy $O(l)$ positions." If this were true, we would get a free energy change of $4J-\beta^{-1} \log l$, and this would imply that the ground state is unstable to flipping very long chains.
The only part I'm not on board with is the claim about the entropy. I would say that if $L$ is the length of the system, then we have $L$ places to put the chain, so we get an entropy of $\log L$. Certainly as $L\to \infty$ this gives no long range order, as expected.
So, is the entropy $\log l$ or $\log L$?
(Incidentally, I'm perfectly happy with his argument in 2D...)
This post imported from StackExchange Physics at 2015-11-08 10:09 (UTC), posted by SE-user Matthew
|
Well, the title of the question pretty much says everything. To be more precise: I'd like to know (1) what font would be visually suitable to use with Linux Libertine and/or Linux Biolinum as the main text font (I haven't decided which of the two I'll use) and (2) how to enable it (as a math font) in XeTeX.
A good guide on what factors to consider when mixing fonts is Thierry Bouche's Diversity in math fonts article in TUGboat, Volume 19 (1998), No. 2.
The most important aspect is to use the same font for text and math letters (as well as letter-like symbols such as \partial or \infty). This has drawbacks as some letters will suffer from spacing problems, but compared to the other option (using totally different math letters), it's really a lesser evil. Of course, if this is not acceptable to you, then you should first choose the math font and then use the same font for text, but that limits your font choices dramatically.
Once you've assigned the text font to the math letters, the remaining choices you face are for the geometric symbols, the delimiters and the big operators (\sum, \int, \bigcup etc.). The main considerations are color (how bold the symbols are) and the shape of the symbols (mainly the shape of sum or integral symbols, especially if you use them often). Compared to Libertine, XITS and Asana are a bit too bold (especially true for the sum symbol), Latin Modern is a bit too light (especially +, \otimes, etc.), and Cambria has a very huge \sum symbol, huge \otimes and \oplus as well as very bold \bigcup. Thus, which font will look better will depend on what type of math you're typing, and none will be perfect.
Here's a sample to show the results of this font mixing with Libertine. Notice the spacing problems around the f in f(r_k) and \Sigma_c f(r) due to the fact that it's a text font we're using for math. I've not set all letter-like symbols to come from Libertine (only \infty), so there's still room for improvement. (Note also the missing parenthesis in one of the formulas with Latin Modern Math.)
\documentclass{article}
\usepackage{amsmath}
\usepackage{fontspec}
\usepackage{unicode-math}
\setmainfont{Linux Libertine O}
\newcommand{\setlibertinemath}{%
  % use Libertine for the letters
  \setmathfont[range=\mathit/{latin,Latin,num,Greek,greek}]{Linux Libertine O Italic}
  \setmathfont[range=\mathup/{latin,Latin,num,Greek,greek}]{Linux Libertine O}
  \setmathfont[range=\mathbfup/{latin,Latin,num,Greek,greek}]{Linux Libertine O Bold}
  %\setmathfont[range={"2202}]{Linux Libertine O}% "02202 = \partial % doesn't work
  \setmathfont[range={"221E}]{Linux Libertine O}% "0221E = \infty
  % etc. (list should be completed depending on needs)
}
\newcommand{\sample}{%
  When computing the sums $\sum_{k=0}^{+\infty}{f(r_k)}$ of $f$ the integral
  representation of $K_0(x)$ may be used.
  \[ \eta(r)\frac{\partial f}{\partial r} + 2\Sigma_cf(r)
     = \sum_{k=0}^{+\infty}{K_0\mathopen{}\left(\frac{\lvert r - r_k \rvert}{L}\right)}
     = \int_{0}^{\infty}{e^{-\left(z+\frac{r^2}{4L^2\pi}\right)} \frac{dz}{2z}}.
  \]
  We then use
  \[ \bigcup_{\lambda \in \Lambda}{U_\lambda} \cap \bigsqcup_{\delta > 0}{G_\delta}
     = \bigcap_{i \in I}{\mathbf{A}_i} \quad \text{so that} \quad
     u \otimes w \oplus v = 0.
  \]
}
\pagestyle{empty}
\begin{document}
\section{Libertine + Latin Modern}
\setmathfont{Latin Modern Math}
\setlibertinemath
\sample
\section{Libertine + Cambria}
\setmathfont{Cambria Math}
\setlibertinemath
\sample
\section{Libertine + XITS}
\setmathfont{XITS Math}
\setlibertinemath
\sample
\section{Libertine + Asana}
\setmathfont{Asana Math}
\setlibertinemath
\sample
\end{document}
A late answer (and a shameless plug), but I have been working on math companion to Linux Libertine fonts, which got more attention recently (thanks to support from TUG) and it is starting to take shape.
The character coverage is still a bit limited and there may be bugs in the existing ones, but testing and bug reports are appreciated.
I’m currently forking the whole Linux Libertine and Linux Biolinum family (and changed the name to avoid confusion and to follow the reserved name clause in the license) as I need a way to quickly fix bugs I see in the text fonts (there are quite a few of them), but the idea is to merge this back with the original fonts once the dust settles.
Here is a sample (the full document is here):
In the MWE below, a pangram is first in text italics (with Linux Libertine) and then in four different math alphabets -- Asana Math, Cambria Math (not entirely free, but quite cheap), XITS Math, and Latin Modern Math. The exercise is repeated with Linux Biolinum and the same four math alphabets, this time in sans-serif mode.
I'd say that the overall closest, though by no means perfect, fit is between Linux Libertine and Asana Math. Should you, however, wish to give considerable weight to the shape of the letters f, p, and q, XITS Math may be your best choice. Or, should you care much about compatibility of the shapes of the letter w (but not care much about the letters f and g), Cambria Math may be best for you. In any case, Latin Modern is not visually compatible with Linux Libertine.
% !TEX program = xelatex
\documentclass[letterpaper]{article}
\usepackage[no-math]{fontspec}
  \setmainfont{Linux Libertine O}
  \setsansfont{Linux Biolinum O}
\usepackage{unicode-math}
  \setmathfont[version=asana]{Asana Math}
  \setmathfont[version=cambria]{Cambria Math}
  \setmathfont[version=xits]{XITS Math}
  \setmathfont[version=lm]{Latin Modern Math}
\newcommand{\qbf}{The\ quick\ brown\ fox\ jumps\ over\ the\ lazy\ dog.}
\begin{document}
\noindent\emph{\qbf} --- Linux Libertine O, italics\newline
\mathversion{asana} $\qbf$ --- Asana Math\newline
\mathversion{cambria} $\qbf$ --- Cambria Math\newline
\mathversion{xits} $\qbf$ --- XITS Math\newline
\mathversion{lm} $\qbf$ --- Latin Modern Math
\bigskip

\noindent\textsf{\qbf} --- \textsf{Linux Biolinum O}\newline
\mathversion{asana} $\mathsf{\qbf}$ --- Asana Math-sf\newline
\mathversion{cambria} $\mathsf{\qbf}$ --- Cambria Math-sf\newline
\mathversion{xits} $\mathsf{\qbf}$ --- XITS Math-sf\newline
\mathversion{lm} $\mathsf{\qbf}$ --- Latin Modern Math-sf
\end{document}
Incidentally, the weird "tz" character in the first line (text italics, Linux Libertine) seems to be a product of an unfortunate interaction between Linux Libertine and XeLaTeX. This problem does not occur if one uses either a font other than Linux Libertine or if the MWE is run under LuaLaTeX.
Addendum: To learn more about the various math symbols that the various math fonts provide, please refer to Will Robertson's write-up, Every symbol defined by unicode-math. You'll find out quickly that just about all Unicode math fonts provide all of the "standard" math symbols. However, the math font packages tend to differ importantly in the sets of specialized symbols, e.g., arrows, that they provide. Obviously, the font with the most math symbols (AFAICT, XITS Math at present) need not be the one that's best for you, simply because you may have no need for most of the symbols that the most feature-laden package provides.
Finally, then, suppose that you end up deciding to use the XITS Math font because it comes with all the special symbols you need (and the other math font packages do not). In that case, you should probably be willing to use the XITS text font, rather than Linux Libertine O, because XITS harmonizes very well (by design!) with the XITS Math font.
You can consider newtxmath with the libertine option. The math fonts used are not OpenType, so no unicode-math, but the result is pretty good. The order of packages is important.
\documentclass{article}
\usepackage{amsmath}
\usepackage[libertine]{newtxmath}
\usepackage[no-math]{fontspec}
\usepackage{mleftright}
\setmainfont{Linux Libertine O}
\pagestyle{empty}
\begin{document}
When computing the sums $\sum_{k=0}^{+\infty}{f(r_k)}$ of $f$ the integral
representation of $K_0(x)$ may be used.
\[ \eta(r)\frac{\partial f}{\partial r} + 2\Sigma_cf(r)
   = \sum_{k=0}^{+\infty}{K_0 \mleft(\frac{\lvert r - r_k \rvert}{L}\mright)}
   = \int_{0}^{\infty}{e^{-\mleft(z+\frac{r^2}{4L^2\pi}\mright)} \frac{dz}{2z}}.
\]
We then use
\[ \bigcup_{\lambda \in \Lambda}{U_\lambda} \cap \bigsqcup_{\delta > 0}{G_\delta}
   = \bigcap_{i \in I}{\mathbf{A}_i} \quad \text{so that} \quad
   u \otimes w \oplus v = 0.
\]
\end{document}
You should wait until the Lucida Math OpenType font is published by TUG. I suppose it will be available at the end of this year at the earliest.
(answer copied from here: https://tex.stackexchange.com/a/364502/75284)
Libertinus is a fork of Linux Libertine with bug fixes and pretty nice math support (see this example document). It is the perfect match for Linux Libertine, because, well, it is Linux Libertine, just forked. And I personally much prefer the upright integral symbol, in keeping with the spirit of the ISO recommendations that only variables should be italic.
\documentclass[varwidth,border=1mm]{standalone}
\usepackage[
  math-style=ISO,
  bold-style=ISO,
  partial=upright,
  nabla=upright
]{unicode-math}
\setmainfont{Libertinus Serif}
\setsansfont{Libertinus Sans}
\setmathfont{Libertinus Math}
\begin{document}
The formula \(E=mc^2\) is arguably the most famous formula in physics.
In mathematics, it could be \(\mathrm{e}^{\mathrm{i}\uppi}-1=0\).
\(\displaystyle \sum_{k=0}^\infty \frac{1}{k^2} = \frac{\uppi^2}{6}\), and
\(\displaystyle \int\displaylimits_{-\infty}^\infty \exp\left(-\frac{x^2}{2}\right) = \sqrt{2\uppi}\).
\(\alpha\beta\gamma\delta\epsilon\zeta\eta\theta\iota\kappa\lambda\mu\nu\xi\pi\rho\sigma\tau\upupsilon\phi\chi\psi\omega \varepsilon\vartheta\varrho\varsigma\varphi\varkappa\)
\(\upalpha\upbeta\upgamma\updelta\upepsilon\upzeta\upeta\uptheta\upiota\upkappa\uplambda\upmu\upnu\upxi\uppi\uprho\upsigma\uptau\upupsilon\upphi\upchi\uppsi\upomega \upvarepsilon\upvartheta\upvarrho\upvarsigma\upvarphi\upvarkappa\)
\(\Alpha\Beta\Gamma\Delta\Epsilon\Zeta\Eta\Theta\Iota\Kappa\Lambda\Mu\Nu\Xi\Pi\Rho\Sigma\Tau\Upsilon\Phi\Chi\Psi\Omega\)
\(\upAlpha\upBeta\upGamma\upDelta\upEpsilon\upZeta\upEta\upTheta\upIota\upKappa\upLambda\upMu\upNu\upXi\upPi\upRho\upSigma\upTau\upUpsilon\upPhi\upChi\upPsi\upOmega\)
\end{document}
Be aware though: xe(la)tex currently (TeX Live 2016) has a bug, which is visible in my screenshot: Why is the fraction off the math axis in XeTeX?. lualatex of TeX Live 2016 has a bug as well: spacing in LuaLaTeX with unicode-math.
|
The Fundamental Group of the Circle, Part 6
At last we come to the sixth and final post in our proof that the fundamental group of the circle is $\mathbb{Z}$. In the first five posts, we showed that the map $\Phi:\mathbb{Z}\to\pi_1(S^1)$ given by $n\mapsto[\omega_n]$ where $\omega_n:[0,1]\to S^1$ is the loop $s\mapsto (\cos{2\pi ns}, \sin{2\pi ns})$ is a group isomorphism. Our outline has been the following:
Part 1: Set-up/observations
Part 2: Show $\Phi$ is well defined
Part 3: Show $\Phi$ is a group homomorphism
Part 4: Show $\Phi$ is surjective
Part 5: Show $\Phi$ is injective
Part 6: Prove the two lemmas used in parts 4 and 5
While proving $\Phi$ is one to one and onto, we used two lemmas whose proofs we have deferred until now. For reference, we restate them here:
Lemma 1: For each path $f:I\to S^1$ with $f(0)=x_0$ and for each $\tilde{x}_0\in p^{-1}(x_0)$, there is a unique lift $\tilde{f}:I\to\mathbb{R}$ such that $\tilde{f}(0)=\tilde{x}_0$.
Lemma 2: For each homotopy $f_t:I\to S^1$ of paths starting at $f_t(0)=x_0$ and for each $\tilde{x}_0\in p^{-1}(x_0)$, there is a unique lift $\tilde{f}_t:I\to\mathbb{R}$ such that $\tilde{f}_t(0)=\tilde{x}_0$.
It turns out that both lemmas are special cases of a more general result, namely the following proposition:
Proposition: Given any space $Y$, a map $F:Y\times I\to S^1$, and a map $\tilde{F}:Y\times\{0\}\to\mathbb{R}$ which lifts $F\big|_{Y\times\{0\}}$, there exists a unique map $G:Y\times I\to \mathbb{R}$ which lifts $F$ and is an extension of $\tilde{F}$.
Notice that Lemma 1 and Lemma 2 are the special cases when $Y=\{y_0\}$ is a single point and $Y=I$ is the unit interval (pictured above), respectively.
Proving the Proposition*
We begin with an Observation: There is an open cover $\{U_\alpha\}$ of $S^1$ such that for each $\alpha$, the set $p^{-1}(U_\alpha)$ is a disjoint union of open sets, each of which is mapped homeomorphically to $U_\alpha$ by $p$. (For an example, see footnote ** below.)
Let $\{U_\alpha\}$ be an open cover of $S^1$ of the kind mentioned in the observation. Suppose $F:Y\times I\to S^1$ is any map and let $\tilde{F}:Y\times\{0\}\to\mathbb{R}$ be any map which lifts $F\big|_{Y\times\{0\}}$. Let $(y_0,t)\in Y\times I$ be any point and observe that it has a neighborhood $N_t\times (a_t,b_t)\subset Y\times I$ such that $F(N_t\times (a_t,b_t))\subset U_\alpha \subset S^1$ for some $\alpha$. (This last bit holds simply because $\{U_\alpha\}$ is a cover for $S^1$.)
By compactness of $\{y_0\}\times I$ (since it's homeomorphic to $I$), finitely many such products $N_t\times (a_t,b_t)$ cover $\{y_0\}\times I$. Thus we can choose a single neighborhood $N$ of $y_0$ and a partition $0=t_0< t_1 < \cdots < t_m=1$ of $I$ so that for each $i$, $F(N\times [t_i,t_{i+1}])$ is contained in $U_\alpha$ for some $\alpha$, which we'll now denote by $U_i$.
Our present goal is to construct a lift $G:N\times I\to\mathbb{R}$ where $N\times I$ is the "rainbow tube." Assume by induction on $i$ that $G$ has already been constructed on $N\times [0,t_i]$. (Notice! This is true for $i=0$ since $N\times [0,0]$ is a subset of $Y\times\{0\}$ and by assumption we already have the map $\tilde{F}:Y\times\{0\}\to\mathbb{R}$.) Now since $F(N\times[t_i,t_{i+1}])\subset U_i$ we know by our opening Observation that $p^{-1}(U_i)$ is a disjoint union of open sets in $\mathbb{R}$, each of which is mapped homeomorphically onto $U_i$. Hence there exists a $\tilde{U}_i\subset p^{-1}(U_i)$ that contains the point $G(y_0,t_i)$ (this point exists by the induction hypothesis, since $(y_0,t_i)$ sits inside $N\times [0,t_i]$, and the latter maps into $U_i$ by $F$).
Now it's possible that $G(N\times\{t_i\})$ (the lift of the top slice of $N\times [0,t_i]$) isn't fully contained within $\tilde{U}_i$, but we can just assume it is.*** Finally we may define $G$ on $N\times [t_i,t_{i+1}]$ by $$\left( p\big|_{\tilde{U}_i} \right)^{-1}\circ F\big|_{N\times[t_i,t_{i+1}]}.$$ (This is exactly what you think it should be! Imagine for a moment that $N\times[t_i,t_{i+1}]$ is the green rectangle in our "rainbow tube" above. We can map it to $\mathbb{R}$ by first mapping it across to $S^1$ via $F$ and then lifting it up to $\tilde{U}_i$ via $p^{-1}$!)
This is well-defined since $p\big|_{\tilde{U}_i}$ is a homeomorphism, so we can repeat this inductive step finitely many times to obtain the desired lift $G:N\times I\to\mathbb{R}$ for some neighborhood $N$ of $y_0$. And since $y_0\in Y$ was arbitrary, we have our lift $G$ on all of $Y\times I$.
QED!
Footnotes:
*This proof is directly from Hatcher's Algebraic Topology, chapter 1.1, but I've included the pictures to help make sense of things.
**For example, take $\{U_\alpha\}=\{U_1,U_2\}$ where $U_1=S^1\smallsetminus \{(1,0)\}$ and $U_2=S^1\smallsetminus \{(-1,0)\}$. Then $p^{-1}(U_1)=\{(n,n+1)\}_{n\in\mathbb{Z}}$ is a disjoint collection and $p(n,n+1)\cong U_1$ for all $n$. Similarly $p^{-1}(U_2)=\{(n-1/2,n+1/2)\}_{n\in\mathbb{Z}}$ is disjoint and $p(n-1/2,n+1/2)\cong U_2$ for all $n$.
*** Since we can shrink $N$: replace $N\times \{t_i\}$ with the intersection $(N\times \{t_i\})\cap \big(G\big|_{N\times\{t_i\}}\big)^{-1}(\tilde{U}_i)$.
|
It's called a Principal-Agent Conflict. The RIAA/MPAA act as agents on behalf of the people who actually produce content (and consequently end-consumer value). To maintain relevance to their principals, the RIAA/MPAA must signal value to them (i.e. claim loudly and repeatedly that they do something good for them [regardless of the validity of that claim])...
The simple answer is that they don't think they would make as much money. In many countries illegally downloading music or movies is getting harder and harder. The recording industry has achieved this by persuading governments to instruct the ISPs to block torrent sites, torrent proxy sites and sites that list proxy sites completely so no one can access ...
Partly your question relates to more general questions like "buy versus rent a house", or "buy versus lease a machine". Under neoclassical assumptions of competition, full information, etc, you can imagine that arbitrage would make these options equivalent for the average of the population (or the representative agent). In practice, heterogeneous ...
I think I have found the explanation for this. If you look at the Wikipedia page on the economy of Madagascar, it states the following: The standard of living of the Malagasy population has been declining dramatically over the past 25 years. The country has gone from being a net exporter of agricultural products in the 1960s to a net importer since 1971. ...
Such a "planned" and sought-after re-allocation of given income from consumption to saving, is justified only if the savings in an economy are sub-optimal (or we think so), in the sense of hurting the investment rate, which in turns hurts the (human and physical) capital infrastructure.Think about the extremes: consume all that you produce, save nothing (...
"Durable goods" are a form of utility-generating capital. But they are capital, and what is actually generating utility is the flow of services from them, not them directly. So when we buy a durable good, this is not consumption, it is investment. The phenomenon of uneven intertemporal allocation of purchasing expenses in durable goods is not related to "...
Heuristically, you can think of the integral as just a sum:$$ \bar{C} = \left( \sum_{i=1}^n C_i^{1-\frac{1}{\epsilon}} \right)^{\frac{\epsilon}{\epsilon - 1}} $$where $\bar{C}$ is an index of aggregate consumption, and utility is given by $u \left( \bar{C} \right)$.It's easy to check that the marginal rate of substitution between goods $j$ and $k$ is ...
tl;dr: I don't see an economic argument for GN marriage, or marriage in general, whatsoever. Frictionless environment: All spending on marriage is nothing more than spending on consumption goods. There is no reason to believe that marriage-related spending has a higher Keynesian multiplier than other consumption categories. As long as this is the case, as @denesp ...
The two papers that first explored savings under uncertainty in a two-period setting are Leland, H. E. (1968). Saving and uncertainty: The precautionary demand for saving. The Quarterly Journal of Economics, 82(3), 465-473. and Sandmo, A. (1970). The effect of uncertainty on saving decisions. The Review of Economic Studies, 37(3), 353-360. They both ...
This has to do with the form of the utility function. Assume instead that, say, we had $$U(c,l) = c^{1/2} - \frac{1}{2}l^2$$ Does $R$ now affect the labor-supply decision? Solve it and explore. Check also the link offered in a comment to your question.
Firstly there are services like this in Spotify, and even radio and tv, but it sounds like you are talking about downloading the material with ads in.That causes a problem. Revenue from ads relies on giving many ads to many people. Each time you listen to a song the provider needs to be able to provide a new ad. If you download a song or book with ads ...
As stated, the Government successfully runs a campaign and people stop spending and start saving. Part of the answer depends on how they save, and the other part depends on how manufacturers react. However it's worth observing that in most countries only a minority of people can do this - the 60-70% of people in the USA for example who live pay cheque to ...
The whole point of saving is that you consume less NOW in order to consume more LATER. That is certainly true at a personal level.Moreover, that is true at a national level. More savings this year means less consumption this year. The danger of savings is that if the money goes under the proverbial "mattress," instead of being invested, less consumption ...
Definition: The Aggregate Demand curve shows the combinations of the price level and level of output at which the goods market and assets (money) markets are simultaneously in equilibrium. This definition proves option a) incorrect. The correct option is d).
Good question. The answer depends on what exactly you mean by an increase in the money supply and how it is implemented. Because standard monetary policy (i.e. open market operations) is implemented through banks, it is functionally quite different from a pure infusion of cash to consumers, which is more the realm of fiscal policy, depending on how it is ...
There are two major "qualifiers" to the life-cycle hypothesis (LCH). Both put forth by John Maynard Keynes in "The General Theory of Employment, Interest, and Money."The first is the "precautionary effect," that people save more than the LCH would predict because they are uncertain about their future health, life span, medical bills, etc.A second ...
Why Save? "In the long run we are all dead!" - K. Your interesting question brings more questions, because the short answer is: it depends. Part A - Diagnostic: What phase of the business cycle are you in? You first need to make a diagnostic of the shape and timing of the economy you are analyzing. The timing of the business cycle is of prime importance. ...
To simplify matters, let's call the right-hand side of your starting equation $X_t$. Then, I start just like you did with $1 = X_t$. The difference between your solution and Gali's is that you took the Taylor expansion around $\log(1)=0=\log(X_t)$, which implies that also the steady state equals 0, so we can simply subtract it to get log-differences, ...
Read the economics textbook by Gary Becker. He does not deal with NG marriage but he offers the first economic theory of marriage. Also, read "A Treatise on the Family: Enlarged Edition", 441 pages, Gary S. Becker, Harvard University Press.
I wanted to add this as a reply to denesp's comment, but I do not have enough reps. MRS and a binding BC give a system of two equations from which we can solve the optimum bundle. In case of income = 10, these two equations have positive solutions; in case of income = 1, these two equations do not have positive solutions. See this: Income = 1 just makes ...
$p$ represents total production. $Ap$ represents the intermediate goods and services used in production, i.e. intermediate consumption. So $p-Ap=(I-A)p$ represents net production, i.e. output minus input.
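To make the definitions concrete, here is a quick numerical illustration (a hypothetical two-sector example; the matrix $A$ and the production vector $p$ below are made up for illustration):

import numpy as np

# Hypothetical input-output data: A[i, j] is the amount of good i
# needed to produce one unit of good j; p is total production.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
p = np.array([100.0, 50.0])

intermediate = A @ p               # goods used up in production: A p
net_output = (np.eye(2) - A) @ p   # net production: (I - A) p

print(intermediate)  # [35. 30.]
print(net_output)    # [65. 20.]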
I would agree that volunteered labor is a utility enhancing activity. Seen as a good, its price could be the opportunity cost, i.e. the wages foregone (for not working for pay). But then this is not really different from the concept of "leisure", since "leisure" in economics does not mean "fun and games", but rather "time spent not working for pay".An ...
The argument relayed in the question as regards consumption smoothing is flawed. Consumption smoothing does not mean consumption equality over periods, but rather, tendency to avoid corner solutions, or near-corner solutions. So it has nothing to do with whether, in the context of this family of models, $\beta (1+r) =1$ or not (after all, in a more ...
One needs to go case-by-case and arrive at a utility function with branches. To get you started, if $x_1 < x_2/2$, then $\min(2x_1,x_2) = 2x_1$, but then also $\min(x_1,2x_2) = x_1$. Therefore in this case, $u(x_1,x_2) = 2x_1$, etc. There are two other intervals to consider as regards the relation between $x_1$ and $x_2$, so in all you will obtain a ...
Those are some good starting points but I would refrain from using them as the end result. There are some fatal logical jumps in your example. Moving house: A good one to use. If I were to guess, I think the ratio should have decreased due to tech. For example, we don't have those huge TVs nowadays. No desktops for the most part. However, be careful about ...
It's very likely that that these major categories in the hierarchy of goods in the market basket differ somewhat by country. The sub-categories within these major groups are even more likely to differ across countries, as the market basket is supposed to reflect the purchases of a typical consumer, and typical purchases are obtained from consumption surveys ...
You can include government consumption in the utility function of the representative consumer, but why? When we add exogenous government purchase shocks to a macro model, we enquire about how the dynamics of the agent's consumption and labour supply change. This concept is clear and easy to measure. Let's assume that you are including government spending in an ...
I agree with @BBKing, just to put it here as an answer: if we go back to your two definitions you can find quite easily some counter-examples. Let's take a firm which uses a machine as capital. Let's assume also that this machine can only be used by one man at a time. Do you think this machine is a non-excludable / non-rival good?
I guess it just follows by definition. Assume that in the high state (for household $i$), aggregate endowment is 10, and household $i$ consumes $1/5$ of this, i.e. $2$, while having an endowment of $5$. Now consider what happens if we were in the low state (for household $i$), where household $i$ only gets $1$ of whatever endowment. However, if this is a high ...
Isn't this mostly an issue of pricing at a level where most people feel it's worth paying to avoid the hassle (and potential legal issues) of piracy?Take music singles for example: when I was a teenager (late 90's), a CD single cost £3.99 in the UK. When it became possible to download songs for free that someone else had ripped and uploaded, many people ...
|
I have the following first-order nonlinear ordinary differential equations, and I was wondering if someone can suggest some method by which I can get either an exact solution or an approximate and converging perturbative solution.
$$\begin{align*} \dfrac{dx}{dt} &= 2(1-W^{-1}) x + \dfrac{2xy}{W} - 8x^{2}\\ \dfrac{dy}{dt} &= \gamma W (x - \dfrac{y}{W}) \end{align*}$$
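For reference, the system is straightforward to integrate numerically; below is a minimal SciPy sketch I used to explore the dynamics (the parameter values $W=2$, $\gamma=1$ and the initial condition are arbitrary placeholders, since I have not fixed them above):

import numpy as np
from scipy.integrate import solve_ivp

W, gamma = 2.0, 1.0  # placeholder parameter values

def rhs(t, z):
    # Right-hand side of the system given above
    x, y = z
    dxdt = 2 * (1 - 1 / W) * x + 2 * x * y / W - 8 * x**2
    dydt = gamma * W * (x - y / W)
    return [dxdt, dydt]

# Integrate from an arbitrary initial condition and inspect the final state
sol = solve_ivp(rhs, (0.0, 20.0), [0.01, 0.0])
print(sol.y[:, -1])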
Kindly help me with any methods that might work, and it would be great if you can provide a few references where I can read about those methods.
Any help will be highly appreciated.
Thanks a lot in advance.
|
Difference between revisions of "Reflecting"
(→Reflection and correctness: I1)
(typo)
Line 25: Line 25:
Although it may be surprising, the existence of a correct cardinal is equiconsistent with ZFC. This can be seen by a simple compactness argument, using the fact that the theory ZFC+"$\kappa$ is correct" is finitely consistent, if ZFC is consistent, precisely by the observation about $\Sigma_n$-correct cardinals above.
Although it may be surprising, the existence of a correct cardinal is equiconsistent with ZFC. This can be seen by a simple compactness argument, using the fact that the theory ZFC+"$\kappa$ is correct" is finitely consistent, if ZFC is consistent, precisely by the observation about $\Sigma_n$-correct cardinals above.
−
$C^{(n)}$ are the classes of $\Sigma_n$-ordinals. These classes are clubs (closed unbounded). $C^{(0)}$ is the class of all ordinals. $C^{(1)}$ is precisely the class of all uncountable cardinals $α$ such that $V_α = H(α)$. References to the $C^{(n)}$ classes (different from just the requirement that the cardinal belongs to $C^{(n)}$) can sometimes make
+
$C^{(n)}$ are the classes of $\Sigma_n$-ordinals. These classes are clubs (closed unbounded). $C^{(0)}$ is the class of all ordinals. $C^{(1)}$ is precisely the class of all uncountable cardinals $α$ such that $V_α = H(α)$. References to the $C^{(n)}$ classes (different from just the requirement that the cardinal belongs to $C^{(n)}$) can sometimes make cardinal properties stronger (for example $C^{(n)}$-[[superstrong]], $C^{(n)}$-[[extendible]], $C^{(n)}$-[[huge]] and $C^{(n)}$-[[rank-into-rank|I3]] and $C^{(n)}$-[[rank-into-rank|I1 cardinals]]). On the other hand, every [[measurable]] cardinal is $C^{(n)}$-measurable for all $n$ and every ($λ$-)[[strong]] cardinal is ($λ$-)$C^{(n)}$-strong for all $n$.<cite>Bagaria2012:CnCardinals</cite>
A cardinal $\kappa$ is ''reflecting'' if it is inaccessible and correct. Just as with the notion of correctness, this is not first-order expressible as a single assertion in the language of set theory, but it is expressible as a scheme (''Lévy scheme''). The existence of such a cardinal is equiconsistent to the assertion [[ORD is Mahlo]].
A cardinal $\kappa$ is ''reflecting'' if it is inaccessible and correct. Just as with the notion of correctness, this is not first-order expressible as a single assertion in the language of set theory, but it is expressible as a scheme (''Lévy scheme''). The existence of such a cardinal is equiconsistent to the assertion [[ORD is Mahlo]].
Revision as of 10:20, 8 October 2019
Reflection is a fundamental motivating concern in set theory. The theory of ZFC can be equivalently axiomatized over the very weak Kripke-Platek set theory by the addition of the reflection theorem scheme, below, since instances of the replacement axiom will follow from an instance of $\Delta_0$-separation after reflection down to a $V_\alpha$ containing the range of the defined function. Several philosophers have advanced philosophical justifications of large cardinals based on ideas arising from reflection.
Reflection theorem
The Reflection theorem is one of the most important theorems in Set Theory, being the basis for several large cardinals. The Reflection theorem is in fact a "meta-theorem," a theorem about proving theorems. The Reflection theorem intuitively encapsulates the idea that we can find sets resembling the class $V$ of all sets.
Theorem (Reflection): For every set $M$ and formula $\phi(x_0...x_n,p)$ ($p$ is a parameter), there exists some limit ordinal $\alpha$ with $V_\alpha\supseteq M$ such that $\phi^{V_\alpha}(x_0...x_n,p)\leftrightarrow \phi(x_0...x_n,p)$ (we say $V_\alpha$ reflects $\phi$). Assuming the Axiom of Choice, we can find some countable $M_0\supseteq M$ that reflects $\phi(x_0...x_n,p)$.
Note that by conjunction, for any finite family of formulas $\phi_0...\phi_n$, as $V_\alpha$ reflects $\phi_0...\phi_n$ if and only if $V_\alpha$ reflects $\phi_0\land...\land\phi_n$. Another important fact is that the truth predicate for $\Sigma_n$ formulas is $\Sigma_{n+1}$, and so we can find a (Club class of) ordinals $\alpha$ such that $(V_\alpha,\in)\prec_{{T_{\Sigma_n}}\restriction{V_\alpha}} (V,\in)$, where $T_{\Sigma_n}$ is the truth predicate for $\Sigma_n$ and so $ZFC\vdash Con(ZFC(\Sigma_n))$ for every $n$, where $ZFC(\Sigma_n)$ is $ZFC$ with Replacement and Separation restricted to $\Sigma_n$.
Lemma: If $W_\alpha$ is a cumulative hierarchy, there are arbitrarily large limit ordinals $\alpha$ such that $\phi^{W_\alpha}(x_0...x_n,p)\leftrightarrow \phi^W(x_0...x_n,p)$.

Reflection and correctness
For any class $\Gamma$ of formulas, an inaccessible cardinal $\kappa$ is $\Gamma$-reflecting if and only if $H_\kappa\prec_\Gamma V$, meaning that for any $\varphi\in\Gamma$ and $a\in H_\kappa$ we have $V\models\varphi[a]\iff H_\kappa\models\varphi[a]$. For example, an inaccessible cardinal is $\Sigma_n$-reflecting if and only if $H_\kappa\prec_{\Sigma_n} V$. In the case that $\kappa$ is not necessarily inaccessible, we say that $\kappa$ is $\Gamma$-correct if and only if $H_\kappa\prec_\Gamma V$. A simple Löwenheim-Skolem argument shows that every infinite cardinal $\kappa$ is $\Sigma_1$-correct. For each natural number $n$, the $\Sigma_n$-correct cardinals form a closed unbounded proper class of cardinals, as a consequence of the reflection theorem. This class is sometimes denoted by $C^{(n)}$ and the $\Sigma_n$-correct cardinals are also sometimes referred to as the $C^{(n)}$-cardinals. Every $\Sigma_2$-correct cardinal is a $\beth$-fixed point and a limit of such $\beth$-fixed points, as well as an $\aleph$-fixed point and a limit of such. Consequently, we may equivalently define for $n\geq 2$ that $\kappa$ is $\Sigma_n$-correct if and only if $V_\kappa\prec_{\Sigma_n} V$.
A cardinal $\kappa$ is correct, written $V_\kappa\prec V$, if it is $\Sigma_n$-correct for each $n$. This is not expressible by a single assertion in the language of set theory (since if it were, the least such $\kappa$ would have to have a smaller one inside $V_\kappa$ by elementarity). Nevertheless, $V_\kappa\prec V$ is expressible as a scheme in the language of set theory with a parameter (or constant symbol) for $\kappa$.
Although it may be surprising, the existence of a correct cardinal is equiconsistent with ZFC. This can be seen by a simple compactness argument, using the fact that the theory ZFC+"$\kappa$ is correct" is finitely consistent, if ZFC is consistent, precisely by the observation about $\Sigma_n$-correct cardinals above.
$C^{(n)}$ are the classes of $\Sigma_n$-correct ordinals. These classes are clubs (closed unbounded). $C^{(0)}$ is the class of all ordinals. $C^{(1)}$ is precisely the class of all uncountable cardinals $α$ such that $V_α = H(α)$. References to the $C^{(n)}$ classes (different from just the requirement that the cardinal belongs to $C^{(n)}$) can sometimes make large cardinal properties stronger (for example $C^{(n)}$-superstrong, $C^{(n)}$-extendible, $C^{(n)}$-huge and $C^{(n)}$-I3 and $C^{(n)}$-I1 cardinals). On the other hand, every measurable cardinal is $C^{(n)}$-measurable for all $n$ and every ($λ$-)strong cardinal is ($λ$-)$C^{(n)}$-strong for all $n$.[1]
A cardinal $\kappa$ is reflecting if it is inaccessible and correct. Just as with the notion of correctness, this is not first-order expressible as a single assertion in the language of set theory, but it is expressible as a scheme (Lévy scheme). The existence of such a cardinal is equiconsistent to the assertion ORD is Mahlo.
If there is a pseudo uplifting cardinal, or indeed, merely a pseudo $0$-uplifting cardinal $\kappa$, then there is a transitive set model of ZFC with a reflecting cardinal and consequently also a transitive model of ZFC plus Ord is Mahlo. You can get this by taking some $\lambda\gt\kappa$ such that $V_\kappa\prec V_\lambda$.
$\Sigma_2$-correct cardinals
The $\Sigma_2$-correct cardinals are a particularly useful and robust class of cardinals, because of the following characterization: $\kappa$ is $\Sigma_2$-correct if and only if for any $x\in V_\kappa$ and any formula $\varphi$ of any complexity, whenever there is an ordinal $\alpha$ such that $V_\alpha\models\varphi[x]$, then there is $\alpha\lt\kappa$ with $V_\alpha\models\varphi[x]$. The reason this is equivalent to $\Sigma_2$-correctness is that assertions of the form $\exists \alpha\ V_\alpha\models\varphi(x)$ have complexity $\Sigma_2(x)$, and conversely all $\Sigma_2(x)$ assertions can be made in that form.
It follows, for example, that if $\kappa$ is $\Sigma_2$-correct, then any feature of $\kappa$ or any larger cardinal than $\kappa$ that can be verified in a large $V_\alpha$ will reflect below $\kappa$. So if $\kappa$ is $\Sigma_2$-reflecting, for example, then there must be unboundedly many inaccessible cardinals below $\kappa$. Similarly, if $\kappa$ is $\Sigma_2$-reflecting and measurable, then there must be unboundedly many measurable cardinals below $\kappa$.
The Feferman theory
This is the theory, expressed in the language of set theory augmented with a new unary class predicate symbol $C$, asserting that $C$ is a closed unbounded class of cardinals, and every $\gamma\in C$ has $V_\gamma\prec V$. In other words, the theory consists of the following scheme of assertions: $$\forall\gamma\in C\ \forall x\in V_\gamma\ \bigl[\varphi(x)\iff\varphi^{V_\gamma}(x)\bigr]$$ as $\varphi$ ranges over all formulas. Thus, the Feferman theory asserts that the universe $V$ is the union of a chain of elementary substructures $$V_{\gamma_0}\prec V_{\gamma_1}\prec\cdots\prec V_{\gamma_\alpha}\prec\cdots \prec V$$ Although this may appear at first to be a rather strong theory, since it seems to imply at the very least that each $V_\gamma$ for $\gamma\in C$ is a model of ZFC, this conclusion would be incorrect. In fact, the theory does not imply that any $V_\gamma$ is a model of ZFC, and does not prove $\text{Con}(\text{ZFC})$; rather, the theory implies for each axiom of ZFC separately that each $V_\gamma$ for $\gamma\in C$ satisfies it. Since the theory is a scheme, there is no way to prove from that theory that any particular $\gamma\in C$ has $V_\gamma$ satisfying more than finitely many axioms of ZFC. In particular, a simple compactness argument shows that the Feferman theory is consistent provided only that ZFC itself is consistent, since any finite subtheory of the Feferman theory is true by the reflection theorem in any model of ZFC. It follows that the Feferman theory is actually conservative over ZFC, and proves with ZFC no new facts about sets that are not already provable in ZFC alone.
The Feferman theory was proposed as a natural theory in which to undertake the category-theoretic uses of Grothendieck universes, but without the large cardinal penalty of a proper class of inaccessible cardinals. Indeed, the Feferman theory offers the advantage that the universes are each elementary substructures of one another, which is a feature not generally true under the universe axiom.
Maximality Principle
The existence of an inaccessible reflecting cardinal is equiconsistent with the boldface maximality principle $\text{MP}(\mathbb{R})$, which asserts of any statement $\varphi(r)$ with parameter $r\in\mathbb{R}$ that if $\varphi(r)$ is forceable in such a way that it remains true in all subsequent forcing extensions, then it is already true; in short, $\text{MP}(\mathbb{R})$ asserts that every possibly necessary statement with real parameters is already true. Hamkins showed that if $\kappa$ is an inaccessible reflecting cardinal, then there is a forcing extension with $\text{MP}(\mathbb{R})$, and conversely, whenever $\text{MP}(\mathbb{R})$ holds, then there is an inner model with an inaccessible reflecting cardinal.
References

[1] Bagaria, Joan. $C^{(n)}$-cardinals. Archive for Mathematical Logic 51(3--4):213--240, 2012.
|
This will be the first of a series of short posts relating to subject matter discussed in the text, “An Introduction to Statistical Learning”. This is an interesting read, but it often skips over statement proofs — that’s where this series of posts comes in! Here, I consider the content of Section 5.1.2: This gives a lightning-quick “short cut” method for evaluating a regression’s leave-one-out cross-validation error. The method is applicable to any least-squares linear fit.
Introduction: Leave-one-out cross-validation
When carrying out a regression analysis, one is often interested in two types of error measurement. The first is the training set error and the second is the generalization error. The former relates to how close the regression is to the data being fit. In contrast, the generalization error relates to how accurate the model will be when applied to other points. The latter is of particular interest whenever the regression will be used to make predictions on new points.
Cross-validation provides one method for estimating generalization errors. The approach centers around splitting the training data available into two sets, a cross-validation training set and a cross-validation test set. The first of these is used for training a regression model. Its accuracy on the test set then provides a generalization error estimate. Here, we focus on a special form of cross-validation, called leave-one-out cross-validation (LOOCV). In this case, we pick only one point as the test set. We then build a model on all the remaining, complementary points, and evaluate its error on the single point held out. A generalization error estimate is obtained by repeating this procedure for each of the training points available, averaging the results.
LOOCV can be computationally expensive because it generally requires one to construct many models — equal in number to the size of the training set. However, for the special case of least-squares polynomial regression we have the following “short cut” identity:
$$ \label{theorem} \tag{1} \sum_i \left ( \tilde{y}_i - y_i\right)^2 = \sum_i \left ( \frac{\hat{y}_i - y_i}{1 - h_i}\right)^2. $$ Here, $y_i$ is the actual label value of training point $i$, $\tilde{y}_i$ is the value predicted by the cross-validation model trained on all points except $i$, $\hat{y}_i$ is the value predicted by the regression model trained on all points (including point $i$), and $h_i$ is a function of the coordinate $\vec{x}_i$ — this is defined further below. Notice that the left side of (\ref{theorem}) is the LOOCV sum of squares error (the quantity we seek), while the right side can be evaluated given only the model trained on the full data set. Fantastically, this allows us to evaluate the LOOCV error using only a single regression!

Statement proof
Consider the LOOCV step where we construct a model trained on all points except training example $k$. Using a linear model of form $\tilde{y}(\vec{x}) \equiv \vec{x}^T \cdot \vec{\beta}_k$ — with $\vec{\beta}_k$ a coefficient vector — the sum of squares that must be minimized is
$$\tag{2} \label{error_sum} J_k \equiv \sum_{i \not = k} \left ( \tilde{y}_i - y_i \right)^2 = \sum_{i \not = k} \left (\vec{x}^T_i \cdot \vec{\beta}_k - y_i \right)^2. $$ Here, we're using a subscript $k$ on $\vec{\beta}_k$ to highlight the fact that the above corresponds to the case where example $k$ is held out. We minimize (\ref{error_sum}) by taking the gradient with respect to $\vec{\beta}_k$. Setting this to zero gives the equation $$\tag{3} \left( \sum_{i \not = k} \vec{x}_i \vec{x}_i^T \right) \cdot \vec{\beta}_k = \sum_{i \not = k} y_i \vec{x}_i. $$ Similarly, the full model (trained on all points) coefficient vector $\vec{\beta}$ satisfies $$\tag{4} \label{full_con} \left( \sum_{i} \vec{x}_i \vec{x}_i^T \right) \cdot \vec{\beta} \equiv M \cdot \vec{\beta} = \sum_{i} y_i \vec{x}_i. $$ Combining the prior two equations gives $$\tag{5} \left (M - \vec{x}_k \vec{x}_k^T \right) \cdot \vec{\beta}_k = \left (\sum_{i} y_i \vec{x}_i\right) - y_k \vec{x}_k = M\cdot \vec{\beta} - y_k \vec{x}_k. $$ Using the definition of $\tilde{y}_k$, rearrangement of the above leads to the identity $$\tag{6} M \cdot \left ( \vec{\beta}_k - \vec{\beta} \right) = \left (\tilde{y}_k - y_k \right) \vec{x}_k. $$ Left multiplication by $\vec{x}_k^T M^{-1}$ gives $$\tag{7} \tilde{y}_k - \hat{y}_k = \left( \tilde{y}_k - y_k\right) - \left( \hat{y}_k - y_k \right) = \vec{x}_k^T M^{-1} \vec{x}_k \left (\tilde{y}_k - y_k \right). $$ Finally, combining like terms, squaring, and summing gives $$\tag{8} \sum_k \left (\tilde{y}_k - y_k \right) ^2 = \sum_k \left (\frac{\hat{y}_k - y_k}{1 -\vec{x}_k^T M^{-1} \vec{x}_k } \right)^2. $$ This is (\ref{theorem}), where we now see the parameter $h_k \equiv \vec{x}_k^T M^{-1} \vec{x}_k$. This is referred to as the "leverage" of $\vec{x}_k$ in the text. Notice also that $M$ is proportional to the correlation matrix of the $\{\vec{x}_i\}$. $\blacksquare$
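As a sanity check, identity (1) is easy to verify numerically. Below is a minimal sketch (synthetic data, plain least squares with NumPy; the variable names are mine, not from the text) comparing the brute-force LOOCV error with the short-cut formula:

import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, d))])  # design matrix
y = X @ rng.normal(size=d + 1) + rng.normal(scale=0.5, size=n)

# Full-data fit, predictions, and leverages h_i = x_i^T M^{-1} x_i with M = X^T X
beta = np.linalg.solve(X.T @ X, X.T @ y)
y_hat = X @ beta
h = np.einsum('ij,jk,ik->i', X, np.linalg.inv(X.T @ X), X)
shortcut = np.sum(((y_hat - y) / (1 - h)) ** 2)

# Brute-force LOOCV: refit n times, each time holding one point out
brute = 0.0
for k in range(n):
    mask = np.arange(n) != k
    beta_k = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    brute += (X[k] @ beta_k - y[k]) ** 2

print(shortcut, brute)  # the two agree to machine precision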
|
Measuring Cloud Oktas From Outer Space
This article describes the approach undertaken by data scientists at Axibase to calculate cloud cover using satellite imagery from the Japanese Himawari 8 satellite.
Today, cloud cover is measured using automated weather stations, specifically ceilometers and sky imager instruments. The ceilometer is an upward-pointed laser that calculates the time required for the laser to reflect back to the ground surface from overhead clouds, which determines the height of the cloud base. The sky imager divides the sky into regions and finds the percentage of clouds in those different regions. Only clouds located directly overhead are detected. As a result, given how sparsely weather stations are placed today, the amount of cloud cover data is limited. Most modern weather stations can discover and measure clouds up to 7,600 meters. You can learn more about modern automated weather stations and ceilometers on Wikipedia.
Ceilometer:
Sky Imager:
You can learn more about automated weather stations in Australia on the official website of the Bureau of Meteorology Australia.
Cloud cover measurements have many applications and benefits in weather forecasting and solar energy generation. For example, seasonal cloud cover statistics allow tourists to plan their holidays for sunnier weeks and months of the year. This information is also useful to mountain climbers planning an ascent, since climbers need to choose seasons with less cloud cover, guaranteeing the best possible conditions for summit attempts. Photovoltaic energy generation hinges heavily on quality cloud cover data. Solar panels are most efficient when there are no clouds, so when building a solar power station, a company or government must analyze cloud okta data. Because automated weather stations that measure this metric are distributed sparsely, the data is often not available. Below is a visualization comparing cloud cover with solar power generation for a particular station in Australia. It is readily apparent that the two metrics are interdependent.
This research project is aimed at calculating cloudiness over Australia from satellite images. The goal is to use a simple method that can effectively determine cloud cover without employing complex algorithms or machine learning, and afterwards to compare the results with actual data from ground weather stations. Data available from the Japan Meteorological Agency (JMA) and the Australian Bureau of Meteorology (BOM) made this research both interesting and feasible.
Cloudiness Data
Australian meteorological stations are used as the source of cloudiness data. The list of all meteorological stations is available on the website of the Australian Bureau of Meteorology.
Here is a summary of the available stations:
20,112: Total number of meteorological stations.
7,568: Total number of currently functional stations.
867: Total number of stations that have available data.
778: Total number of stations whose data is loaded into ATSD.
394: Total number of stations that measure cloud oktas.
45: Total number of stations that measure cloud oktas at least four times per day.
Cloud cover measurements are available from the Australian Bureau of Meteorology Latest Weather Observations Portal. Cloud cover data from each station for the past few days can be retrieved in JSON format using the REST API.
As stated on Wikipedia, cloud cover is the fraction of the sky obscured by clouds when observed from a particular location. Cloud cover is measured in oktas, meaning eighths: 0, 1/8, 2/8, and so on, up to 8/8 = 1. Several methods are used to measure cloud cover, but which method is used by the Australian weather stations is not exactly clear.
Satellite Data
To collect more data, the analysis of images from a Japanese geostationary weather satellite, Himawari 8, is performed. The launch of this satellite took place on October 7, 2014 and the satellite became operational on July 7, 2015. It provides high quality images of the Earth in 16 frequency bands every ten minutes. Learn more about Himawari satellites and imaging on the Meteorological Satellite Center of the JMA website.
The Japan Meteorological Agency processes the satellite images and determines several parameters of the clouds. The results can be viewed on the Meteorological Satellite Center of the JMA website.
The algorithms used by the JMA to process the images are complex. Read Introduction of the Optimal Cloud Analysis for Himawari-8/-9 to learn more about these algorithms.
To determine cloud cover from Himawari images as simply as possible, only one band is analyzed. For this research project, images of Australia available from the MSC JMA Real Time Image portal are used. The server keeps images for the past 24 hours. Images in infrared band 13 with wavelength equal to 10,400 nm are used.
Data Flow
ATSD can collect data from the Australian Bureau of Meteorology in JSON format. ATSD comes with the Axibase Collector, which collects data from any remote source and stores it in ATSD. Another benefit of ATSD is its built-in visualization, which supports graphing the results and gives a good view of progress.
Once both the actual cloud oktas and the calculated cloud oktas are loaded into ATSD, visualization portals are built to compare the collected and computed metrics.
Cloudiness Detection From Images
Using its geographical coordinates, each station's location on the satellite images is determined. The images are somewhat distorted near the border of Australia and on the lines of the coordinate grid. This distortion comes in the form of overlays added on top of the images: the green Australian border and the white grid. Therefore, only stations that are far from the distorted areas are included.
A simple method to detect clouds is used. Since clouds are cooler than the surface of the earth, clouds are rendered white on the satellite images, and the surface of the earth is black. Therefore, the brightness of the pixels in the images reflects cloudiness. Hence, cloud cover for a given meteorological station is calculated as the average pixel brightness over a 3 x 3 square of pixels centered over the station. Since one pixel on the image, depending on the location, covers an area from 5.5 x 3.9 to 5.5 x 5.6 square kilometers, this amounts to estimating the cloudiness of an area of about 230 square kilometers.
Here is the key line from the R script used to calculate the cloudiness_himawari_b13 metric from the satellite images:
new_row[1, i + 1] <- get_density(lat = latitudes[i], lon = longitudes[i], matr = img_cont[ , , 1])
To store the results in ATSD:

save_series(series, metric_col = 2, metric_name = "cloudiness_himawari_b13", entity = entities[i], verbose = FALSE)
Stations that measure cloud cover at an average frequency of at least once every four hours are selected. The table below displays the Pearson correlation coefficient between actual and calculated cloud cover for the selected stations:
The "Count" column indicates the average number of cloud cover measurements per day.
This approach does not appear to work particularly well as there is little correlation between computed and factual values.
Interestingly, the cloudiness_himawari_b13 series has a daily cycle; the value is lower during the day than at night. This trend is clearly visible when comparing this series with the height of the sun above the horizon, known as sun altitude. Sun altitude is calculated using the SunCalc library created by Vladimir Agafonkin.
View the live ChartLab Portal comparing Cloud Cover to Sun Altitude:
The results clearly show that the correlation during daytime hours is higher.
The diurnal cycle is removed by subtracting the average of the values of the last n days. The results with the diurnal cycle removed:
Improving the Correlation
Data scientists sought to improve the correlation by adjusting the method for determining the cloudiness from an image.
The following adjustments are made:
Average out the brightness over different areas of the images. Calculate a weighted average of brightness over large areas of the image, giving less weight to pixels more distant from the center of the area. Calculate the weights using geometrical principles, described below. Take into account the height of the lower edge of the cloud directly over a station. Meteorological stations measure the height of clouds and this data is available from the Australian Bureau of Meteorology. It seems that the lower the height of the cloud the greater the value of cloud cover.
To be more concrete, here are example computations for eight meteorological stations. These stations are chosen because there are enough measurements of cloud cover and the stations are far from the overlaid white lines on the images.
For each of the stations, six not-perfectly-round disks are selected, with radii of 0, 3, 5, 10, 20, and 30 pixels:

0 Pixels | 3 Pixels | 5 Pixels | 10 Pixels | 20 Pixels | 30 Pixels
Averaged brightness over each disk is an estimation of the cloud cover. There are six estimations, and the correlations with actual values of cloud cover are saved in columns avg0 to avg30 in these tables:
Averaged Brightness:
Weighted Average Brightness:
Columns wavg0 to wavg30 display correlations between cloud cover and the weighted average of brightness for the given disks. To compute the weight (w) of each pixel, use the following equation:
$w = \frac{h}{(d^2 + h^2)^{3/2}} = \frac{h}{l^3}.$
In this equation,
d is the distance between the center of the pixel and the given station (in meters) and
h is the cloud height (in meters). Distant and high clouds have lower weights.
Here is a code snippet from the R script used to calculate the weighted average of brightness for given disks:

sum <- 0
total_weight <- 0
for (i in 1:length(pixels$x_shift)) {
  x_shift <- pixels$x_shift[i]
  y_shift <- pixels$y_shift[i]
  coords <- get_lat_lon(x_0 + x_shift, y_0 + y_shift)
  dist <- distGeo(c(lon_0, lat_0), c(coords[2], coords[1]))
  weight <- cloud_base / (cloud_base^2 + dist^2)^(3/2)
  sum <- sum + weight * matr[x_0 + x_shift, y_0 + y_shift]
  total_weight <- total_weight + weight
}
return(sum / total_weight)
Unfortunately, since only the height of the clouds directly above each station is known, this height is used when calculating the weighted average for all pixels, i.e. the height of all clouds around each station is assumed to be the same.
To explain the reasoning behind this formula, assume that clouds are flat (this assumption is incorrect, but is used anyway). A flat cloud with an area of A square meters is at a height of h meters above the ground and d meters away from the station. As a result, the "solid angle" of the cloud as seen from the station is approximately equal to w * A:
Legend:
A: the cloud area.
h: the height of the cloud.
d: distance from the station to the center of the cloud’s projection on the ground.
l: distance between the station and the center of the cloud.
s: the area of the cloud’s image on the unit hemisphere. This tells us how big the cloud appears to the station.
$s = \int_A w\ d\sigma \approx w\cdot A$, where $w = \frac{h}{(d^2 + h^2)^{3/2}} = \frac{h}{l^3}.$
The measure of a "solid angle" is equal to the area of the intersection of this angle and the unit sphere centered at the angle's vertex. The contribution of a pixel to cloud cover is proportional to the w coefficient, because all pixels on an image cover nearly the same area, equal to A. The exact statement is that the integral of w over the area A equals the "solid angle."
The results show that none of the methods used to improve the correlation led to a significant increase.
Changing the Original Logic – Improved Correlation
To improve the correlation, the original logic of determining cloud oktas from satellite images is changed. Rather than using black as the color of the earth, a shade of grey is determined for each station that the earth, without cloud cover, reaches over the course of the day, and this color is used as the calculation base. This logic is used because the temperature of the surface of the earth changes throughout the day, meaning that the brightness on the satellite images changes as well. The brightest point occurs during the night, when the earth is coolest. All darker shades are considered cloudless, and lighter shades are considered cloudy.
This approach led to an improvement in correlations. This approach also decreased the diurnal cycle.
Here are the calculations and results for the town of Oakey:

The lower brightness threshold = 0.2916667. All values below the lower threshold are zeroed out.
The upper brightness threshold = 0.4583333. All values are scaled, i.e. divided by the upper brightness threshold; values greater than 1 are set equal to 1.
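For reference, the thresholding step described above can be sketched in a few lines of Python (a hypothetical re-implementation, not the project's R code; the thresholds are the Oakey values quoted in the text):

import numpy as np

def cloudiness_from_brightness(b, lower=0.2916667, upper=0.4583333):
    # Map pixel brightness to a cloudiness estimate in [0, 1]:
    # values below `lower` are treated as cloudless (zeroed out),
    # the rest are divided by `upper` and clipped at 1.
    b = np.asarray(b, dtype=float)
    c = np.where(b < lower, 0.0, b) / upper
    return np.minimum(c, 1.0)

print(cloudiness_from_brightness([0.1, 0.3, 0.5]))  # [0.  0.654...  1.]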
Conclusion
Comparing the calculated cloud oktas with solar power generation for a particular station clearly shows that this algorithm can be used as a basis for forecasting and power generation planning.
The above ChartLab portal compares the power generation of a solar power station near one of the automated weather stations in the city of Griffith, for which cloud cover is calculated. The solar power station is three kilometers away from the automated weather station. These results show that increases in calculated cloud cover lead to decreases in solar power generation and vice versa. There is a correlation between the calculated cloud oktas and solar power generation.
Comparing the improved correlation results with solar power generation for the same station, the interdependency of the measurements is even more apparent. There is a strong correlation between the improved calculated cloud cover and solar power generation.
The results of this research project indicate that this algorithm can be used as a way to calculate cloud oktas with relative accuracy. The calculated cloud cover accuracy is high enough that the algorithm can be used to forecast and plan solar energy production. This conclusion is especially relevant for areas that are not covered by BOM meteorological weather stations, where there is no other real source of cloud cover data.
|
Hello,
I have a question on a paper of Azad and Kobayashi "Ricci-flat Kähler metrics on symmetric varieties". Here is the link: http://www.academia.edu/2579043/Ricci-flat_Kahler_metrics_on_symmetric_varieties
On the first page, the main result is stated: "Theorem (1.1): Let $G^{\mathbb{C}} / K^{\mathbb{C}}$ be a symmetric variety. Then there exists a $G$-invariant complete Ricci-flat Kähler metric on $G^{\mathbb{C}} / K^{\mathbb{C}}$." Further below, in section (1.2) (1), I do not understand the statement: "If $\sqrt{-1}\partial \overline{\partial} P$ is a Ricci-flat complete Kähler metric in Theorem (1.1), then $\bigwedge^{top}(\sqrt{-1} \partial \overline{\partial} P) = \eta \wedge \overline{\eta}$.", where $\eta$ is the $G^{\mathbb{C}}$-invariant top degree holomorphic volume form on $G^{\mathbb{C}} / K^{\mathbb{C}}$. How does one obtain the equation $\bigwedge^{top}(\sqrt{-1} \partial \overline{\partial} P) = \eta \wedge \overline{\eta}$ from the above assumptions?
I have to admit that I am not very experienced in this field and am currently still learning. I hope that some of you can help me out with this question.
Greetings. Bernard
|
Let $w=a_1a_2a_3...$ be an infinite word over a finite alphabet and $\epsilon>0$. Do there exist integers $n,k$ such that $\frac{d(a_1a_2...a_n,a_{k+1}a_{k+2}...a_{k+n})}{n}<\epsilon$? ($d(u,v)$ is the Hamming distance.)
OK, let's go over it slowly.
The alphabet will consist of 4 symbols: $x,u,b,c$.
The infinite word will be $xU_1Q_2U_3Q_4U_5Q_6\dots$ where $U_m$ is the finite word consisting of $m$ symbols $u$ and $Q_m$ is the random word consisting of $m$ symbols, each of which is $b$ or $c$ with probability $1/2$, with the convention that the choices of symbols at different positions are independent. So you get something like $$xubcuuubcbbuuuuucbbccbuuuuuuucccbcbbb\dots$$
It is easy to check (see the discussion here) that as $n\to\infty$, the string $a_1a_2\dots a_n$ contains one symbol $x$ ($a_1=x$), $\frac n2+O(\sqrt n)$ symbols $u$ and $\frac n2+O(\sqrt n)$ symbols each of which is $b$ or $c$.
Now suppose that $n$ is large enough and $k>n^2$. Then the $u$'s in the word $a_{k+1}a_{k+2}\dots a_{k+n}$ form a single block and the non-$u$'s form another block. One of these blocks has length $\ell \ge n/2$. However, the corresponding block in $a_1a_2\dots a_n$ is occupied by $\frac \ell 2+O(\sqrt n)$ symbols $u$ and $\frac \ell 2+O(\sqrt n)$ symbols that are not $u$, so the Hamming distance in question is at least $\frac \ell 2+O(\sqrt n)\ge \frac n4+O(\sqrt n)\ge \frac n5$ if $n\ge n_0$.
Thus we need to look only at $k\le n^2$ for large $n$. We have $\frac n2+O(\sqrt n)$ random symbols in $a_1a_2\dots a_n$ and, for fixed $k\ge 1$, the probability that each of them is matched in $a_{k+1}a_{k+2}\dots a_{k+n}$ is $0$ or $1/2$, the corresponding events being independent. Thus, the chance that we have at least $\frac n3$ matchings instead of the expected $\le\frac n4$ is at most $Ce^{-cn}$ by the Bernstein (a.k.a. Chernoff, Hoeffding, etc.) bound. Since the series $\sum_n Cn^2e^{-cn}$ converges, we conclude that with probability close to $1$, the Hamming distance in question is at least $\frac n6+O(\sqrt n)>\frac n7$ for all $n\ge n_0$, $k\le n^2$.
Finally, due to the uniqueness of $x$ in the word, the Hamming distance is always at least $1$, so the ratio in question is never less than $\min(\frac 17,\frac 1{n_0})$.
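If it helps, the construction is easy to play with numerically (a quick Python sketch with arbitrary sizes; it just shows the ratio staying bounded away from zero for the sizes tried):

import random

random.seed(0)

def build_word(total_len):
    # x, then U_1, Q_2, U_3, Q_4, ... as in the construction above
    w, m = ['x'], 1
    while len(w) < total_len:
        w += ['u'] * m if m % 2 == 1 else [random.choice('bc') for _ in range(m)]
        m += 1
    return w[:total_len]

w = build_word(20000)
n = 2000
prefix = w[:n]
ratio = min(sum(a != b for a, b in zip(prefix, w[k:k + n])) / n
            for k in range(1, len(w) - n))
print(ratio)   # comfortably positive for these sizes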
I hope it is clearer now but feel free to ask questions if something is still confusing.
By the way, the word "conjecture" means "a statement supported by extensive circumstantial evidence and several rigorous partial results", not "something that just came to my head" or "something I want to be true", so, since you put it in the title, I wonder what positive results you can prove here.
|
During logistic regression, in order to compute the optimal parameters of the model, we have to use an iterative numerical optimization approach (Newton's method or gradient descent) instead of a simple analytical approach. Numerical optimization is a crucial mathematical concept in machine learning and function fitting, and it is deeply integrated in model training, regularization, support vector machines, neural networks, and so on. In the next few posts, I will summarize key concepts and approaches in numerical optimization, and its applications in machine learning.
1. The basic formula
$$ \min f(x) \tag {1.1} $$
$$ s.t. $$
$$ h_i(x) = 0, i \in 1,2…m \tag {1.2} $$
$$ g_j(x) \leq 0, j \in 1,2…n \tag{1.3} $$
\(f(x): R^n \rightarrow R\) is the objective function we want to optimize, and usually takes the format of "minimize". For example, in linear regression, it has the format of \(f(\hat y_i) = \sum_i^N(y_i - \hat y_i)^2 \). If the goal is to maximize a function, just take the negative value of \(f\).
2. unconstrained v.s. constrained
Equation 1.1 is the basic format of an unconstrained numerical optimization problem.
Equation 1.2 and 1.3 represent conditions in constrained optimization problems. \(s.t.\) means subject to, and it is followed by the constraint functions. \(i, j\) are indices of constraint functions as an optimization problem may have several constraint functions. In particular, equation 1.2 is equality constraint and 1.3 is inequality constraint.
A set of points satisfying all constraints define the feasible region:
$$ K = \{x \in R^n | h_i(x) = 0, g_j(x) \leq 0 \} \tag{2} $$
The optimal point that minimizes the objective function is denoted as \(x^{*}\):
$$ x^{*} = argmin_{x \in R^n} f(x) \tag {3} $$
3. global v.s. local
\(x^{*}\) is the global minimum if
$$ f(x^{*}) \leq f(x), x \in K \tag {4.1} $$
\(x^{*}\) is a local minimum if within a neighborhood of \(\epsilon\):
$$ f(x^{*}) \leq f(x), \exists \epsilon>0, \|x^{*} - x\| \leq \epsilon \tag {4.2} $$
If \(f(x)\) is convex, a local minimum is also the global minimum.
4. continuous v.s. discrete
Some optimization problems require \(x\) to be integers. Therefore, rather than taking infinite real value, \(x\) can only take a finite set of values. It is usually more difficult to solve discrete optimization problems than continuous problems. See “linearity” below.
5. convexity
Convexity is a key concept in optimization. It guarantees a local minimum is also the global minimum.
\(S\) is a convex set, if for any two points in \(S\):
\(\lambda x_1 + (1 - \lambda) x_2 \in S, \forall x_1, x_2 \in S, \lambda \in [0,1] \tag {5} \)
In the figure below, areas enclosed by the red curves represent set \(S\). \(S_1\) is convex and \(S_2\) is not convex, as a straight line (blue) connecting \(x_1, x_2\) is not within \(S_2\).
\(f\) is a convex function if its domain \(S\) is a convex set and if for any two points in its domain:
\(f(\lambda x_1 + (1 - \lambda) x_2) \leq \lambda f( x_1) +(1 - \lambda) f(x_2),\forall x_1, x_2 \in S, \lambda \in [0,1] \tag {6} \)
As discussed in the logistic regression post, a twice continuously differentiable function \(f\) is convex if and only if its second order derivative \(f''(x) \geq 0\). In matrix form, this means its Hessian matrix of second partial derivatives is positive semidefinite. That is why we need to use maximum likelihood, rather than squared error, as the error function in logistic regression.
\(f\) is concave if \(-f\) is convex.
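A quick way to get a feel for definition (6) is to test it empirically on sample points (a Python sketch, not a proof; the tolerance guards against rounding error):

import numpy as np

def looks_convex(f, grid, trials=1000, seed=0):
    # Empirically test f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2).
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x1, x2 = rng.choice(grid, size=2)
        lam = rng.uniform()
        if f(lam * x1 + (1 - lam) * x2) > lam * f(x1) + (1 - lam) * f(x2) + 1e-12:
            return False
    return True

grid = np.linspace(-3, 3, 601)
print(looks_convex(lambda x: x ** 4, grid))   # True: (x^4)'' = 12 x^2 >= 0
print(looks_convex(np.sin, grid))             # False on this interval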
6. linearity
If \(f(x), h(x), g(x)\) are all linear function of \(x\), the problem is a linear optimization problem (also called linear programming). If any function is non-linear, the problem is a non-linear optimization problem.
Continuous and integer linear optimization problems have different complexity. Continuous linear optimization problems are in P, which means they can be solved in polynomial time. Integer linear optimization is NP-complete (NP = nondeterministic polynomial time), which means we can verify whether a given solution is correct in polynomial time, but it is unknown whether we can find a solution in polynomial time [3]. I will discuss P v.s. NP in detail in later posts.

7. iterative algorithm
As in logistic regression, numerical optimization algorithms are iterative, with a sequence of improved estimates \(x_k, k = 1,2,…\) until reaching a solution. In addition, we would like to go towards a descent direction such that:
\(f(x_{k+1}) < f(x_k) \tag{7}\)
The termination condition is defined by a small threshold \(\epsilon\) with any of the following formats:
1. the change of input is small
$$ \|x_{k+1} - x_{k} \| < \epsilon \tag{8.1} $$

$$ \frac { \|x_{k+1} - x_{k} \|}{ \|x_k \|} < \epsilon \tag{8.2} $$
2. the decrease of the objective function is small
$$ |f(x_{k+1}) - f(x_k)| < \epsilon \tag{9.1}$$

$$ \frac{|f(x_{k+1}) - f(x_k)|}{|f(x_k)|} < \epsilon \tag{9.2}$$
Note that because \(x \in R^n\) is a vector, the distance between two vectors is defined by a norm, such as the Euclidean distance, and the magnitude of a vector is defined by its norm. \(f\) is a scalar, and thus its magnitude is defined by its absolute value.
A good algorithm should have the following properties [1]:
robust: works well with reasonable choices of initial values for various problems
efficient: does not require too much computing time, resources, or storage
accurate: identifies a solution with high precision, and is not overly sensitive to noise and errors
Usually, an efficient and fast-converging method may require too much computing resource, while a robust method could be very slow. A highly accurate method may converge very slowly and be less robust. In practice, we have to consider the trade-off between speed and cost, and between accuracy and storage [1].
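To make the iteration and the termination conditions concrete, here is a minimal gradient descent sketch in Python (illustrative only; it uses the relative-change criterion 8.2 and a simple quadratic objective):

import numpy as np

def gradient_descent(grad, x0, lr=0.1, eps=1e-8, max_iter=10000):
    # Iterate x_{k+1} = x_k - lr * grad(x_k) until the relative change
    # in x (criterion 8.2) drops below eps.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = x - lr * grad(x)
        if np.linalg.norm(x_new - x) / max(np.linalg.norm(x), 1e-12) < eps:
            return x_new
        x = x_new
    return x

# Minimize f(x) = ||x - c||^2, whose gradient is 2(x - c)
c = np.array([1.0, -2.0])
print(gradient_descent(lambda x: 2 * (x - c), x0=[0.0, 0.0]))  # approx [1, -2]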
8. Taylor’s theorem
I first learned about Taylor’s theorem back in college and have been taking its application for granted. While the basic format of Taylor’s theorem is simple, it plays a central role in numerical optimization. Thus, I would like to recap some key ideas in Taylor’s theorem.
The idea of Taylor's theorem is to approximate a complex non-polynomial function using a series of simple k-th order polynomials, called Taylor polynomials. Taylor polynomials are finite-order truncations of the Taylor series, which completely defines the original function in a neighborhood.
The simplest case is the first-order, one-dimensional Taylor's theorem [4], which is equivalent to Lagrange's Mean Value Theorem [5]:
If \(f\) is continuous on interval \([a,x]\) and \(f\) is differentiable on \((a,x)\), then there exists a point \(b: a<b<x\) such that
$$f'(b) = \frac {f(x) - f(a)}{x-a} \tag {10.1}$$
$$f(x) = f(a) + f'(b)(x-a) \tag {10.2}$$
If \(a\) is close enough to \(x\), so that \(f'(b) \approx f'(a)\), equation 10.2 can be written as:
$$f(x) \approx f(a) + f'(a)(x-a) \tag {11}$$
Thus, \(f(x)\) can be approximated by a linear function.
Similarly, if \(f(x)\) is twice differentiable, we can approximate it as:
$$f(x) \approx f(a) + f'(a)(x-a) + \frac {1}{2}f''(a)(x-a)^2\tag {12}$$
If \(f(x)\) is infinitely differentiable, such as \(e^x\), its Taylor series in a neighborhood of value \(a\) can be written as the following:
$$f(x) = \sum_{n=0}^{\infty} \frac {f^{(n)}(a)}{n!} (x-a)^n \tag{13}$$
\(n!\) is the factorial of n, and \(f^{(n)}(a)\) denotes the \(n\)th derivative of \(f\) at point \(a\). When \(a = 0\), it is called the Maclaurin series:

$$f(x) = \sum_{n=0}^{\infty} \frac {f^{(n)}(0)}{n!} x^n \tag{14}$$
For example, expanding \(e^x\) at 0 we can get
$$e^x = \sum_{n=0}^{\infty} \frac {1}{n!} x^n \tag{15} $$
When we only consider the first k-th order of derivatives, we have an approximation of \(f(x)\) as
$$f(x) \approx \sum_{n=0}^{k} \frac {f^{(n)}(a)}{n!} (x-a)^n \tag{16}$$
In practice, we usually expand a function to its 1st or 2nd order derivative to get a good enough approximation of a function.
In matrix format, equation 11 can be written as
\(f(x) \approx f(a) + (x-a)^T \nabla f(a) \tag{17}\)
Equation 12 can be written as
\(f(x) \approx f(a) + (x-a)^T \nabla f(a) + \frac{1}{2}(x-a)^TH(a)(x-a) \tag{18} \)
Here, \(\nabla f\) denotes the gradient of first order derivatives (the Jacobian in the vector-valued case), and \(H\) denotes the Hessian matrix of second order partial derivatives.
In the case of numerical optimization, as we will see in the next post, the Taylor first-order polynomial will give us a good clue about the direction in which to move from \(x_k\) to \(x_{k+1}\), and the Taylor second-order polynomial will give us useful information about convexity.
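As a quick illustration of equations 11 and 12, here is a small Python check of the first- and second-order Taylor approximations of \(e^x\) around \(a = 0\) (where every derivative equals 1):

import numpy as np

a = 0.0
f = np.exp    # f = f' = f'' for the exponential, so f(a) = f'(a) = f''(a) = 1

for x in [0.1, 0.5, 1.0]:
    first = f(a) + f(a) * (x - a)                 # equation 11
    second = first + 0.5 * f(a) * (x - a) ** 2    # equation 12
    print(x, f(x), first, second)
# The closer x is to a, the better both approximations,
# and the second-order one is uniformly tighter.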
References
[1] Numerical Optimization, Jorge Nocedal, Stephen J. Wright
[2] https://github.com/wzhe06/Ad-papers/blob/master/Optimization%20Method/%E9%9D%9E%E7%BA%BF%E6%80%A7%E8%A7%84%E5%88%92.doc
[3] https://cs.stackexchange.com/questions/40366/why-is-linear-programming-in-p-but-integer-programming-np-hard
[4] https://www.rose-hulman.edu/~bryan/lottamath/mtaylor.pdf
[5] https://en.wikipedia.org/wiki/Mean_value_theorem
|
Let $(M,g)$ be a Riemannian manifold with Levi Civita connection $\nabla$. Then $\nabla$ satisfies a compatibility condition:
$(\nabla_ZX,Y)+(X,\nabla_ZY)=Z((X,Y))$, where $(\cdot,\cdot)$ is the Hermitian pairing. In general, if we have a connection $\nabla$ on a bundle $E$ (in our case $E=TM$), one can define the dual connection by the formula $\nabla'_Z(\alpha)=Z(\alpha(\cdot))-\alpha(\nabla_Z(\cdot))$, where $\alpha$ is a one-form. My question is the following:
Does the dual connection satisfy compatibility condition?
I did some computation and get that compatibility is equivalent to the condition: $Z(g^{ij})\alpha_i\beta_j=-g^{ij}Z^p\Gamma_{pi}^q\alpha_q\beta_j-g^{ij}Z^p\Gamma_{pj}^q\alpha_i\beta_q$
where $\Gamma_{ij}^k$ are defined by $\nabla_{\partial_i}\partial_j=\Gamma_{ij}^k\partial_k$, and $g^{ij}$ are the components of the inverse of the matrix $(g_{ij})_{i,j}$, where $g_{ij}=g(\partial_i,\partial_j)$ (I used the Einstein summation convention).
|
Fatou's Lemma

The Basic Idea
Given a sequence of functions $\{f_n\}$ which converge pointwise to some limit function $f$, it is not always true that $$\int \lim_{n\to\infty}f_n = \lim_{n\to\infty}\int f_n.$$ (Take this sequence for example.) Fatou's Lemma, the Monotone Convergence Theorem (MCT), and the Dominated Convergence Theorem (DCT) are three major results in the theory of Lebesgue integration which answer the question "When do $\displaystyle{ \lim_{n\to\infty} }$ and $\int$ commute?" The MCT and DCT tell us that if you place certain restrictions on both the $f_n$ and $f$, then you can interchange the limit and integral. On the other hand, Fatou's Lemma says, "Here's the best you can do if you don't put any restrictions on the functions."
Below, we'll give the formal statement of Fatou's Lemma as well as the proof. Then we'll look at an exercise from Rudin's Real and Complex Analysis (a.k.a. "Big Rudin") which illustrates that the inequality in Fatou's Lemma can be a strict inequality.
From English to Math
Fatou's Lemma: Let $(X,\Sigma,\mu)$ be a measure space and $\{f_n:X\to[0,\infty]\}$ a sequence of nonnegative measurable functions. Then the function $\displaystyle{ \liminf_{n\to\infty} f_n}$ is measurable and $$\int_X \liminf_{n\to\infty} f_n \;d\mu \;\; \leq \;\; \liminf_{n\to\infty} \int_X f_n\;d\mu .$$
Proof
For each $k\in\mathbb{N}$, let $g_k=\displaystyle{\inf_{n\geq k}f_n}$ and define $$h=\lim_{k\to\infty}g_k=\lim_{k\to\infty}\inf_{n\geq k}f_n=\liminf_{n\to\infty}f_n.$$
1st observation: $\int g_k \leq \int f_n$ for all $n\geq k$. This follows easily from the fact that for a fixed $x\in X$, $\displaystyle{\inf_{n\geq k}\{f_n(x)\}}\leq f_n(x)$ whenever $n\geq k$ (by definition of infimum). Hence $\int \displaystyle{\inf_{n\geq k} f_n} \leq \int f_n$ for all $n\geq k$, as claimed. This allows us to write \begin{align} \int g_k\leq \inf_{n\geq k}\int f_n. \qquad \qquad (1) \end{align} 2nd observation: $\{g_k\}$ is an increasing sequence and $\displaystyle{\lim_{k\to\infty} g_k}=h$ pointwise. Thus, by the Monotone Convergence Theorem, \begin{align*} \int\liminf_{n\to\infty} f_n =\int h = \lim_{k\to\infty} \int g_k \leq \lim_{k\to\infty} \inf_{n\geq k}\int f_n = \liminf_{n\to\infty} \int f_n \end{align*} where the inequality in the middle follows from (1).
Finally, $\liminf f_n$ is measurable as we've proved before in the footnotes here.
Exercise from Big Rudin
The following is taken from chapter 1 of Rudin's Real and Complex Analysis.

(Rudin, RCA, #1.8) Let $E\subset \mathbb{R}$ be Lebesgue measurable, and for $n\geq 0$ define $$f_n=\begin{cases} \chi_E &\text{if $n$ is even};\\1-\chi_E &\text{if $n$ is odd.}\end{cases}$$ What is the relevance of this example to Fatou's Lemma? For simplicity, let's just consider what happens when $X=[0,2]\subset \mathbb{R}$ and we let $E=(1,2]\subset X$. Then we get the following sequence of functions $$f_n=\begin{cases}\chi_{(1,2]} &\text{if $n$ is even};\\\chi_{[0,1]} &\text{if $n$ is odd}.\end{cases}$$ The first few of these functions look like this:
Notice that as $n$ increases, the graphs switch back and forth. For any given $n$, $$\int_{[0,2]}f_n=1$$ but $\liminf_nf_n=0$. (Recall that $\liminf_n f_n$ is the infimum of all subsequential limits of $\{f_n\}$). This shows us that $$0=\int_{[0,2]} \liminf_{n\to\infty} f_n < \liminf_{n\to\infty}\int_{[0,2]} f_n=1$$ proving that a strict inequality in Fatou's Lemma is possible.
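A quick numerical check of this example (a Python sketch on a grid over $[0,2]$; not part of Rudin's exercise) makes the gap visible:

import numpy as np

x = np.linspace(0, 2, 200001)
chi_E = (x > 1).astype(float)                    # indicator of E = (1, 2]
fns = [chi_E if n % 2 == 0 else 1 - chi_E for n in range(20)]

integrals = [np.trapz(f, x) for f in fns]        # each integral is ~1
liminf_f = np.minimum.reduce(fns)                # pointwise liminf = min of the two functions = 0

print(min(integrals), np.trapz(liminf_f, x))     # ~1.0 versus 0.0, a strict inequality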
|
The possibility of recursive self-improvement is often brought up as a reason to expect that an intelligence explosion is likely to result in a singleton - a single dominant agent controlling everything. Once a sufficiently general artificial intelligence can make improvements to itself, it begins to acquire a compounding advantage over rivals, because as it increases its own intelligence, it increases its ability to increase its own intelligence. If returns to intelligence are not substantially diminishing, this process could be quite rapid. It could also be difficult to detect in its early stages because it might not require a lot of exogenous inputs.
However, this argument only holds if self-improvement is not only a rapid route to AI progress, but the fastest route. If an AI participating in the broader economy could make advantageous trades to improve itself faster than a recursively self-improving AI could manage, then AI progress would be coupled to progress in the broader economy.
If algorithmic progress (and anything else that might seem more naturally a trade secret than a commodity component) is shared or openly licensed for a fee, then a cutting-edge AI can immediately be assembled whenever profitable, making a single winner unlikely. However, if leading projects keep their algorithmic progress secret, then the foremost project could at some time have a substantial intelligence advantage over its nearest rival. If an AI project attempting to maximize intelligence growth would devote most of its efforts towards such private improvements, then the underlying dynamic begins to resemble the recursive self-improvement scenario.
This post reviews a prior mathematization of the recursive self-improvement model of AI takeoff, and then generalizes it to the case where AIs can allocate their effort between direct self-improvement and trade.
A recalcitrance model of AI takeoff
In Superintelligence, Nick Bostrom describes a simple model of how fast an intelligent system can become more intelligent over time by working on itself. This exposition loosely follows the one in the book.
We can model the intelligence of the system as a scalar quantity $I$, and the work, or optimization power, applied to the system in order to make it more intelligent, as another quantity $W$. Finally, at any given point in the process, it takes some amount of work to augment the system's intelligence by one unit. Call the marginal cost of intelligence in terms of work recalcitrance, $R$, which may take different values at different points in the process. So, at the beginning of the process, the rate at which the system's intelligence increases is determined by the equation $\frac{dI}{dt} = \frac{W}{R}$.
We then add two refinements to this model. First, assume that intelligence is nothing but a type of optimization power, so $I$ and $W$ can be expressed in the same units. Second, if the intelligence of the system keeps increasing without limit, eventually the amount of work it will be able to put into things will far exceed that of the team working on it, so that $W \approx I$. $R$ is now the marginal cost of intelligence in terms of applied intelligence, so we can write $\frac{dI}{dt} = \frac{I}{R}$.
Constant recalcitrance
The simplest model assumes that recalcitrance is constant, $R = k$. Then $\frac{dI}{dt} = \frac{I}{k}$, or $I = I_0 e^{t/k}$. This implies exponential growth.
Declining recalcitrance
Superintelligence also considers a case where work put into the system yields increasing returns. Prior to takeoff, where $W$ is roughly constant, this would look like a fixed team of researchers with a constant budget working on a system that always takes the same interval of time to double in capacity. In this case we can model recalcitrance as $R = 1/I$, so that $\frac{dI}{dt} = I^2$, so that $I = \frac{1}{c - t}$ for some constant $c$, which implies that the rate of progress approaches infinity as $t$ approaches $c$; a singularity.
How plausible is this scenario? In a footnote, Bostrom brings up Moore's Law as an example of increasing returns to input, although (as he mentions) in practice it seems like increasing resources are being put into microchip development and manufacturing technology, so the case for increasing returns is far from clear-cut. Moore's law is predicted by the experience curve effect, or Wright's Law, where marginal costs decline as cumulative production increases; the experience curve effect produces exponentially declining costs under conditions of exponentially accelerating production. This suggests that in fact accelerating progress is due to an increased amount of effort put into making improvements. Nagy et al. 2013 show that for a variety of industries with exponentially declining costs, it takes less time for production to double than for costs to halve.
Since declining costs also reflect broader technological progress outside the computing hardware industry, the case for declining recalcitrance as a function of input is ambiguous.
Increasing recalcitrance
In many cases where work is done to optimize a system, returns diminish as cumulative effort increases. We might imagine that high intelligence requires high complexity, and more intelligent systems require more intelligence to understand well enough to improve at all. If we model diminishing returns to intelligence as $R = I$, then $\frac{dI}{dt} = \frac{I}{I} = 1$. In other words, progress is a linear function of time and there is no acceleration at all.
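These three regimes are easy to see numerically. Here is a small Python sketch (our own illustration, with arbitrary constants) that Euler-integrates $\frac{dI}{dt} = I/R(I)$ for the three choices of recalcitrance discussed above:

def simulate(R, I0=1.0, dt=1e-4, T=2.0, cap=1e6):
    # Euler-integrate dI/dt = I / R(I), stopping at time T or at the cap.
    I, t = I0, 0.0
    while t < T and I < cap:
        I += dt * I / R(I)
        t += dt
    return t, I

for name, R in [("constant R = 1", lambda I: 1.0),
                ("declining R = 1/I", lambda I: 1.0 / I),
                ("increasing R = I", lambda I: I)]:
    print(name, simulate(R))
# Constant R: exponential growth (I ~ e^t). Declining R: blows up before
# t = 1, a singularity. Increasing R: linear growth, I ~ 1 + t.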
Generalized expression
The recalcitrance model can be restated as a more generalized self-improvement process with the functional form $\frac{dI}{dt} = I^n$:

$n = 0$: increasing recalcitrance, constant progress
$0 < n < 1$: increasing recalcitrance, polynomial progress
$n = 1$: constant recalcitrance, exponential progress
$n > 1$: declining recalcitrance, singularity

Deciding between trade and self-improvement
Some inputs to an AI might be more efficiently obtained if the AI project participates in the broader economy, for the same reason that humans often trade instead of making everything for themselves. This section lays out a simple two-factor model of takeoff dynamics, where an AI project chooses how much to engage in trade.
Suppose that there are only two inputs into each AI: computational hardware available for purchase, and algorithmic software that the AI can best design for itself. Each AI project is working on a single AI running on a single hardware base. The intelligence of this AI depends both on hardware progress and software progress, and holding either constant, the other has diminishing returns. (This is broadly consistent with trends described by Grace 2013.) We can model this as $I = H^{1/2} S^{1/2}$, where $H$ is the hardware level and $S$ the software level.
At each moment in time, the AI can choose whether to allocate all its optimization power to making money in order to buy hardware, improving its own algorithms, or some linear combination of these. Let the share of optimization power devoted to algorithmic improvement be $s$.
Assume further that hardware earned and improvements to software are both linear functions of the optimization power invested, so $\frac{dH}{dt} = c_H (1-s) I$ and $\frac{dS}{dt} = c_S s I$ for some constants $c_H, c_S > 0$.
What is the intelligence-maximizing allocation of resources $s$?
This problem can be generalized to finding the $s$ that maximizes the growth of $I = f(H^\alpha S^\beta)$ for any monotonic function $f$. This is maximized whenever $H^\alpha S^\beta$ is maximized. (Note that this is no longer limited to the case of diminishing returns.)
This generalization is identical to the Cobb-Douglas production function in economics. If $\alpha + \beta = 1$ then this model predicts exponential growth, if $\alpha + \beta > 1$ it predicts a singularity, and if $\alpha + \beta < 1$ then it predicts polynomial growth. The intelligence-maximizing value of $s$ is $\frac{\beta}{\alpha+\beta}$.
In our initial toy model $I = H^{1/2} S^{1/2}$, where $\alpha = \beta = \frac{1}{2}$, that implies that no matter what the price of hardware, as long as it remains fixed and the indifference curves are shaped the same, the AI will always spend exactly half its optimizing power working for money to buy hardware, and half improving its own algorithms.
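A crude numerical check of the toy model supports this (a Python sketch with made-up constants; it simulates the two accumulation equations over a grid of constant allocations and reports the best):

import numpy as np

def final_intelligence(s, alpha=0.5, beta=0.5, H0=1.0, S0=1.0, dt=1e-3, T=5.0):
    # Simulate dH/dt = (1-s) I, dS/dt = s I with I = H^alpha * S^beta.
    H, S = H0, S0
    for _ in range(int(T / dt)):
        I = H ** alpha * S ** beta
        H += dt * (1 - s) * I
        S += dt * s * I
    return H ** alpha * S ** beta

shares = np.linspace(0.05, 0.95, 19)
print(max(shares, key=final_intelligence))   # 0.5, i.e. beta / (alpha + beta)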
Changing economic conditions
The above model makes two simplifying assumptions: that the application of a given amount of intelligence always yields the same amount in wages, and that the price of hardware stays constant. This section relaxes these assumptions.
Increasing productivity of intelligence
We might expect the productivity of a given AI to increase as the economy expands (e.g. if it discovers a new drug, that drug is more valuable in a world with more or richer people to pay for it). We can add a term exponentially increasing over time to the amount of hardware the application of intelligence can buy: $\frac{dH}{dt} = e^{gt}(1-s)I$.
This does not change the intelligence-maximizing allocation of intelligence between trading for hardware and self-improving.
Declining hardware costs
We might also expect the long-run trend in the cost of computing hardware to continue. This can again be modeled as an exponential process over time, $e^{ct}$. The new expression for the growth of hardware is $\frac{dH}{dt} = e^{(g+c)t}(1-s)I$, identical in functional form to the expression representing wage growth, so again we can conclude that $s = \frac{\beta}{\alpha+\beta}$.
Maximizing profits rather than intelligence
AI projects might not reinvest all available resources in increasing the intelligence of the AI. They might want to return some of their revenue to investors if operated on a for-profit basis. (Or, if autonomous, they might invest in non-AI assets where the rate of return on those exceeded the rate of return on additional investments in intelligence.) On the other hand, they might borrow if additional money could be profitably invested in hardware for their AI.
If the profit-maximizing strategy involves less than 100% reinvestment, then whatever fraction of the AI's optimization power is reinvested should still follow the intelligence-maximizing allocation rule $s = \frac{\beta}{\alpha+\beta}$, where $s$ is now the share of reinvested optimization power devoted to algorithmic improvements.
If the profit-maximizing strategy involves a reinvestment rate of slightly greater than 100%, then at each moment the AI project will borrow some amount (net of interest expense on existing debts) $B$, so that the total optimization power available is $I + B$. Again, whatever fraction of the AI's optimization power is reinvested should still follow the intelligence-maximizing allocation rule $s = \frac{\beta}{\alpha+\beta}$, where $s$ is now the share of economically augmented optimization power devoted to algorithmic improvements.
This strategy is no longer feasible, however, once $\frac{B}{I+B} > \frac{\alpha}{\alpha+\beta}$. Since by assumption hardware can be bought but algorithmic improvements cannot, at this point additional monetary investments will shift the balance of investment towards hardware, while 100% of the AI's own work is dedicated to self-improvement.
|
Note by Jessica Wang 3 years, 11 months ago
Rofl.
Ah , Never thought of this :P
Lolfication of Infinity.Haha
That's a great proof! But you can even create a question in the logic section based on this.
That's the brilliant mind! it boosted my interest in maths!!
Congratulations on having one of the only comments with more upvotes than downvotes.
haha btw thanx for that!!!
You rocked it !!
!! <3 maths
That's the beauty of maths!! ❤️
Truly
XDDDDDDDDD LOLLLLLLLLLLLL!!!!!!
Best proof EVER!!!!
HAHAHA!!!!!!!!!!!
innovative
I wonder what would happen if you showed this at school?
This was a nice one!
This is just awesome
My two favorite things: maths and trolling. ;)
Amazed by your imagination
epic !!
You sir, just won the internet today. Congrats.
You need to add periods in the title to make emphasis.
Pretty trolly though
you forgot to rotate the equal sign hahaha good one !
i think the inventor of this needs to get married asap!!
nice work
hahahaha this is the best proof ive ever seen....
Whoa, Is it Original ? How did you come up with it!!!!
Doubt it's original. I've seen it on Brilliant multiple times.
lmaooo
milf
Whoever is making alts and downvoting everyone, you have no life :P Lets see how many downvotes that'll get me. 110? 120? Have a field day :D
great!!!
each and every comment has downvotes more than upvotes
Mine is an exception haha!
Probably that's also a troll! they want us to rotate it by 180 degrees
Really a great idea to use Physics in proving mathematics rules usually we do converse of it...
Lol
Woah!! Best Proof Ever.
Why has every comment received so many downvotes?
And the funny thing is that this comment itself has 10 downvotes.
I couldn't stop laughing, each and every comment has got soooo many downvotes.
@Kushagra Sahni – Wrong. See above for reasons.
No need to think out of the box if u rotate it
Very nice and ingenious proof!!!
Cool! I loved the way you did it, I hope we can use such a beautiful method in our exams lol! \(\huge\ddot\smile\)
Your proof is bad but your handwriting is cool!!
Let's see you do better. c:
1 = sqrt(1) = sqrt((-1)(-1)) = sqrt(-1) * sqrt(-1) = i*i = -1; transitive property: 1 = -1 lol
|
So, we know that atomic carbon in the electronic configuration $1s^22s^22p^2$ has the following terms:
$${}^1S, {}^1D, {}^3P$$
My question is: how can I correctly specify these terms in terms of the coupled and uncoupled representations?

My attempt

So, in the case of terms, we are only considering the orbital angular momentum, not the spin. Because of that, we can describe the single terms in the coupled representation $\left|L, M_L\right>$, which corresponds to a linear combination of microstates, i.e. the uncoupled representations $\left|m_{l1}, m_{l2}\right>$, using Clebsch-Gordan coefficients.
For ${}^1S$ term it's pretty easy, as $L=0$ and $M_L=0$ (as described in this answer):
$$\begin{align}{}^1S: |L = 0, M_L = 0\rangle &= \frac{1}{\sqrt 3} |m_{l1}= 1, m_{l2} = -1\rangle + \frac{1}{\sqrt 3} |-1, 1\rangle - \frac{1}{\sqrt 3} |0, 0\rangle\\ &= \frac{1}{\sqrt{3}} \left| 8 \right> + \frac{1}{\sqrt{3}} \left| 11 \right> - \frac{1}{\sqrt{3}}\left| 14 \right >\end{align}$$
In the last expression there are wavefunctions specified with indices from the microstate table below.
But further it gets somewhat more tricky - both ${}^3P$ and ${}^1D$ will contain multiple states. $P$ corresponds with $L=1$ and so $M_L \in \left\{ -1, 0, 1 \right\}$. I suppose that its coupled representations are $\left| L=1, M_L=-1\right>, \left| L=1, M_L=0\right>, \left| L=1, M_L=1\right>$.
$$\begin{align} {}^3P: \left| L=1, M_L=-1\right> &= \frac{1}{\sqrt{2}}\left| -1, 0 \right> + \frac{1}{\sqrt{2}}\left| 0, -1 \right>\\ &= \frac{1}{\sqrt{2}}\left| 2 \right> + \frac{1}{\sqrt{2}}\left|5 \right>\\ \left| L=1, M_L=0\right> &= \frac{1}{\sqrt{2}}\left| 1, -1 \right> - \frac{1}{\sqrt{2}}\left| -1, 1 \right>\\ &= \frac{1}{\sqrt{2}}\left| 3 \right> - \frac{1}{\sqrt{2}}\left|6 \right>\\ \left| L=1, M_L=1\right> &= \frac{1}{\sqrt{2}}\left| 1, 0 \right> - \frac{1}{\sqrt{2}}\left| 0, 1 \right>\\ &= \frac{1}{\sqrt{2}}\left| 1 \right> - \frac{1}{\sqrt{2}}\left|4 \right> \end{align} $$
${}^1D$ corresponds with $L=2$ and $M_L \in \left\{ -2,-1,0,1,2 \right\}$.
$$\begin{align} {}^1D:\left| L = 2, M_L = -2 \right> &= \left| -1, -1 \right> = \left| 15\right>\\ \left| L = 2, M_L = -1 \right> &= \frac{1}{\sqrt{2}}\left| 0, -1 \right> + \frac{1}{\sqrt{2}}\left|-1, 0 \right> \\ &= \frac{1}{\sqrt{2}}\left| 10 \right> + \frac{1}{\sqrt{2}}\left| 12 \right>\\ \left| L = 2, M_L = 0 \right> &= \frac{1}{\sqrt{6}}\left|1, -1 \right> + \sqrt{\frac{2}{3}}\left| 0, 0 \right> + \frac{1}{\sqrt{6}}\left| -1, 1 \right> \\ &= \frac{1}{\sqrt{6}}\left| 8 \right> + \sqrt{\frac{2}{3}}\left|14 \right> + \frac{1}{\sqrt{6}}\left| 11 \right> \\ \left| L = 2, M_L = 1 \right> &= \frac{1}{\sqrt{2}}\left| 1, 0 \right> + \frac{1}{\sqrt{2}}\left| 0, 1 \right>\\ &= \frac{1}{\sqrt{2}}\left|7 \right> + \frac{1}{\sqrt{2}}\left|9 \right> \\ \left| L = 2, M_L = 2 \right> &= \left| 1, 1 \right> = \left| 13 \right> \end{align} $$
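As a sanity check, such Clebsch-Gordan coefficients can be verified with SymPy (a small script of my own, not part of any derivation above):

from sympy.physics.quantum.cg import CG

# <l1=1, m1; l2=1, m2 | L, M_L> for two p electrons (l1 = l2 = 1)
for m1, m2 in [(1, -1), (0, 0), (-1, 1)]:
    print(m1, m2, CG(1, m1, 1, m2, 0, 0).doit())   # 1S: 1/sqrt(3), -1/sqrt(3), 1/sqrt(3)

print(CG(1, 1, 1, 0, 1, 1).doit())   # L=1, M_L = 1:  1/sqrt(2)
print(CG(1, 0, 1, 1, 1, 1).doit())   # L=1, M_L = 1: -1/sqrt(2)
print(CG(1, 1, 1, 0, 2, 1).doit())   # L=2, M_L = 1:  1/sqrt(2)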
Is this the right approach or do I understand it incorrectly?
|
The IEEEtran class actually makes provisions for such constructions via the IEEEeqnarraybox family of commands. The environment IEEEeqnarrayboxm typesets its material in math mode. The syntax is
\begin{IEEEeqnarrayboxm}[initialcommands][pos][width]{format}
....
\end{IEEEeqnarrayboxm}
Making [pos] equal to [t] means that the baseline of the first line aligns with the surrounding text. [initialcommands] can be empty, or one might want to try [\IEEEeqnarraystrutmode] for line spacing that is similar to normal text line breaks.
Here are some samples:
\documentclass{IEEEtran}
\begin{document}
\begin{enumerate}
\item A first item in a list.
\item \leavevmode
\begin{IEEEeqnarrayboxm}[][t]{rCl}
\int_0^1 e^{-t}\,dt &=& 1-e^{-1},\\
E &=& mc^2,\\
0 &=& 6x^2 - 2x + 1.\strut
\end{IEEEeqnarrayboxm}
\item \strut
\begin{IEEEeqnarrayboxm}[][t]{cCcCcCl}
2a &+&3b&-&c&=&3\\
5a &+&b&+&2c&=&1\\
-a&+&7b&+&3c&=&7\strut
\end{IEEEeqnarrayboxm}
and some further text.
\item A text item of a length that shows the current column width.
\item \leavevmode
\begin{IEEEeqnarrayboxm}[\IEEEeqnarraystrutmode][t]{cCcCcCl}
2a &+&3b&-&c&=&3\\
5a &+&b&+&2c&=&1\\
-a&+&7b&+&3c&=&7
\end{IEEEeqnarrayboxm}\\
and some text explaining this system of equations.
\item \leavevmode
\begin{IEEEeqnarrayboxm}[][t]{l}
y = \frac{e^{ax^2+bx+c}}2.
\end{IEEEeqnarrayboxm}
\item Text item.
\end{enumerate}
\end{document}
Note that it is necessary to get LaTeX into horizontal mode before entering the environment. This can be done by having some previous text on the line after the label, by issuing \leavevmode, or by issuing a box such as \strut (this has the height of a capital letter, so can help with line spacing, but has zero width).
Similarly, \strut on the final line of the equation environments can improve the spacing over the default.
The documentation for the IEEEtran class has a comprehensive description of these environments and various strut mechanisms in Appendix F.
|
I am new to LaTeX and was trying to insert the equation below, but I am not getting it right. Below is my code:
\begin{equation} S (ω)=1.466\, H_s^2 \, \frac{ω_0^5}{ω^6 } \, e^[-3^ { ω/(ω_0 )]^2}\end{equation}
The formula to the right of the = symbol in the first row comes reasonably close to the screenshot you posted, including the use of small outer square brackets and large inner curly braces. However, I do not recommend this look. The "look" of the second row may be more appealing to your readers.

\documentclass{article}
\usepackage{amsmath} % for 'align*' env.
\begin{document}
\begin{align*}
S(\omega) &= \frac{\alpha g^2}{\omega^5} e^{[ -0.74\bigl\{\frac{\omega U_\omega 19.5}{g}\bigr\}^{\!-4}\,]} \\
&= \frac{\alpha g^2}{\omega^5} \exp\Bigl[ -0.74\Bigl\{\frac{\omega U_\omega 19.5}{g}\Bigr\}^{\!-4}\,\Bigr]
\end{align*}
\end{document}
I will only correct your code, not recreate your image from scratch:

\documentclass{article}
\usepackage{mathtools}
\begin{document}
\begin{equation}
S(\omega)=1.466\, H_s^2 \, \frac{\omega_0^5}{\omega^6} \, e^{\left[-3^{\omega/(\omega_0)}\right]^2}
\end{equation}
or better
\begin{equation}
S(\omega)=1.466\, H_s^2 \frac{\omega_0^5}{\omega^6} \exp\Bigl[-3^{\frac{\omega}{\omega_0}}\Bigr]^2
\end{equation}
\end{document}
Salient points include using brace delimiters {} around extended super- and subscript groups. I also used \bigl and \bigr to extend the height of braces in the exponent, and added a thin space \, before the e of the exponential. Like the OP's original figure, I did not extend the height of the brackets in the exponential, as doing so makes it look less exponential.

\documentclass{article}
\begin{document}
\begin{equation}
S(\omega)=\frac{\alpha g^2}{\omega^5} \,e^{[-0.74\bigl\{\frac{\omega U_\omega 19.5}{g}\bigr\}^{-4}]}
\end{equation}
\end{document}
If one finds, as I do, the \big braces in the exponential too "clunky", one can alternatively use a vertically-scaled, width-limited version of the normal brace:

\documentclass{article}
\usepackage{scalerel}
\begin{document}
\begin{equation}
S(\omega)=\frac{\alpha g^2}{\omega^5} \,e^{[-0.74\scaleleftright[.7ex]{\{}{\frac{\omega U_\omega 19.5}{g}}{\}}^{-4}]}
\end{equation}
\end{document}
|
Assume we have a communication system between a transmitter and a receiver over a wireless channel.
The Friis path loss model is given by the following equation
$$P_r = P_t G_t G_r (\frac{\lambda}{4\pi D})^2$$
Where $P_{r(t)}$ is the received (transmit) power, $G_{t(r)}$ is the gain of the antenna at the transmit (receive) end, $\lambda$ is the signal wavelength, and $D$ is the distance between the transmit and receive ends.
In theoretical (academic) and simulation scenarios (for example, MATLAB), I have come across examples of wireless channels where a power delay profile is defined, that is, the time of arrival (TOA) of rays versus the amplitude of those rays. For example, assume I have five rays arriving, a delay spread of $40\,ns$, and amplitudes
$$channel= [ 0.5+i0.5, 0.3+i0.3, 0.2+i0.2, 0.1+i 0.1, 0.3 ] $$
$$TOA= [0, 10*10^{-9} , 20*10^{-9}, 30*10^{-9} , 40*10^{-9}]$$
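(As an aside, here is a short Python sketch, not part of the question itself, that summarizes such a profile numerically: the aggregate tap power, which simulations commonly normalize against a large-scale gain such as the Friis loss, and the RMS delay spread.)

import numpy as np

channel = np.array([0.5 + 0.5j, 0.3 + 0.3j, 0.2 + 0.2j, 0.1 + 0.1j, 0.3])
toa = np.array([0, 10e-9, 20e-9, 30e-9, 40e-9])

p = np.abs(channel) ** 2                       # power of each tap
total_power = p.sum()                          # narrowband average channel gain
mean_delay = (p * toa).sum() / total_power
rms_spread = np.sqrt((p * (toa - mean_delay) ** 2).sum() / total_power)
print(total_power, rms_spread)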
My simple question is: how are the two concepts related?
Thanks
|
Notes - Linear Equation in one variable
Category : 7th Class
LINEAR EQUATION IN ONE VARIABLE
FUNDAMENTALS
Symbols used to denote a constant are generally 'c', 'k', etc.
e.g., $2x - 6$; here, 2 is the coefficient of $x$; $x$ is the variable and $-6$ is the constant. Similarly, in $ay+b$: $a$ is the coefficient of $y$; $y$ is the variable and $(+b)$ is the constant.
e.g., (i) $5x-1=6x+m$ (ii) $3\left( x-4 \right)=5$ (iii) $2y+5=\frac{y}{6}-2$ (iv) $\frac{t-1}{6}+\frac{2t}{7}=a$
e.g., $2x+6=3x-10\Rightarrow 6+10=3x-2x\Rightarrow 16=x$
Verification
Substituting $x=16$, we have LHS $=2\times 16+6=38$ and RHS $=3\times 16-10=38$.

$\therefore$ $x=16$ is a solution of the above equation.
(a) The same number can be added to both sides of an equation.
(b) The same number can be subtracted from both sides of an equation.
(c) Both sides of an equation can be multiplied by the same non-zero number.
(d) Both sides of an equation can be divided by the same non-zero number.
(e) Cross multiplication: If $\frac{ax+b}{cx+d}=\frac{p}{q}$, then $q\left( ax+b \right)=p\left( cx+d \right)$.
This process is called cross multiplication.
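For example (an illustration not in the original notes): to solve $\frac{2x+3}{x+1}=\frac{5}{3}$, cross multiplication gives $3\left( 2x+3 \right)=5\left( x+1 \right)\Rightarrow 6x+9=5x+5\Rightarrow x=-4$. Check: $\frac{2(-4)+3}{-4+1}=\frac{-5}{-3}=\frac{5}{3}$.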
|
What is a Natural Transformation? Definition and Examples
I hope you have enjoyed our little series on basic category theory. (I know I have!) This week we'll close out by chatting about natural transformations which are, in short, a nice way of moving from one functor to another. If you're new to this mini-series, be sure to check out the very first post, What is Category Theory Anyway?, as well as What is a Category? and last week's What is a Functor?

...the notion of category is best excused as that which is necessary in order to have the notion of functor. But the progression does not stop here. There are maps between functors, and they are called natural transformations. And it was in order to define these that Eilenberg and Mac Lane first defined functors. - Peter J. Freyd
What is a natural transformation?
Here's the formal definition. Given two functors $F$ and $G$, both from a category $\mathsf{C}$ to a category $\mathsf{D}$, a natural transformation $\eta:F\Longrightarrow G$ from $F$ to $G$ consists of some data that satisfies a certain property.

The Data: a morphism $F(x)\overset{\eta_x}{\longrightarrow}G(x)$ for each object $x$ in $\mathsf{C}$

The Property: Whenever $x\overset{f}{\longrightarrow}y$ is a morphism in $\mathsf{C}$, $$G(f)\circ \eta_x=\eta_y\circ F(f).$$ In other words, the square below commutes.
Notice that the natural transformation $\eta$ is the totality of all the morphisms $\eta_x$, so sometimes you might see the notation $$\eta=(\eta_x)_{x\in\mathsf{C}},$$ where each $\eta_x$ is referred to as a component of $\eta$. This is very similar to how a sequence $s$ is comprised of the totality of its terms $s=\{s_n\}_{n\in\mathbb{N}}$ or how a vector $\vec{v}$ is comprised of all of its components $\vec{v}=(v_1,v_2,\ldots).$
Simply put, a natural transformation is a collection of maps from one diagram to another. And these maps are special in that they commute with the arrows in the diagrams. For example, in the picture below, the black arrows comprise a natural transformation between two functors $F$ and $G$.
or, cleaning things up a bit,
where each of the three rectangular faces in the prism is a commuting square that shows up in "The Property" above. To get a better feel for natural transformations, let's look at a few special cases.
Case #1: F and G are constant
Suppose $F,G:\mathsf{C}\to\mathsf{D}$ are both constant functors. That is, suppose $F$ sends every object in $\mathsf{C}$ to a single object $d$ in $\mathsf{D}$ and every morphism to $\text{id}_{d}$. Similarly suppose $G$ sends every object and morphism to a fixed $d'$ and $\text{id}_{d'}$ in $\mathsf{D}$. Then a natural transformation from $F$ to $G$ is simply a morphism $d\overset{\eta}{\longrightarrow} d'$.
Case #2: F is constant
Now if $F$ is constant at some object $d$ in $\mathsf{D}$, and $G$ is any functor, then $\eta:F\Longrightarrow G$ consists of maps $d\overset{\eta_x}{\longrightarrow} G(x)$, one for each $x$ in $\mathsf{C}$, satisfying the equation $\eta_y=G(f)\circ\eta_x$ whenever $x\overset{f}{\longrightarrow}y$ is a morphism in $\mathsf{C}$. I've drawn a picture on the right, where for simplicity I've used the color ${\color{Magenta}\text{pink}}$ for the object $G(x)$, and ${\color{Green}\text{green}}$ for the object $G(y)$ and so on. (So the vertices and edges of the bottom square represent the diagram given by $G$.) The equation $\eta_y=G(f)\circ\eta_x$ says that the three arrows that make up the each of the triangular sides of the tetrahedron must commute. So, for instance, traveling down $\eta_{\color{Magenta}\bullet}$ and then going across ${\color{Magenta}\bullet}\to{\color{Green}\bullet}$ is the same as traveling down $\eta_{\color{Green}\bullet}$.
For good reasons, $\eta$ in this case is called a cone over $G$.
Case #3: G is constant
If, on the other hand, $G$ is constant at $d$ in $\mathsf{D}$ while $F$ is arbitrary, then a natural transformation consists of a collection of maps $F(x)\overset{\eta_x}{\longrightarrow} d$ so that $\eta_y\circ F(f)=\eta_x$ whenever $x\overset{f}{\longrightarrow}y$ is a morphism in $\mathsf{C}$. In other words, each of the triangular faces in the picture on the left, for example, must commute. As you can see, the scenario in case #3 is the same as that in case #2, but now the direction of the arrows $\eta_x$ has flipped. Not surprisingly, this type of $\eta$ is called a
cone under $F$ (or sometimes a cocone).
Cones under/over a functor are the beginning of two immensely important constructions in category theory called
limits and colimits. You've no doubt come across a (co)limit or two, though perhaps without knowing it. The empty set, the one point set, the intersection, union, and product of sets, the kernel of a group, the quotient of a topological space, the direct sum of vector spaces, the free product of groups, the pullback of a fiber bundle, inverse limits and direct limits are all examples of either a limit or a colimit. Each is special in that it forms a "universal" cone over a particular functor/diagram!
This deserves much more than a few sentences of attention, so we'll chat about more (co)limits in a future post.
Case #4: each $\eta_x$ is an isomorphism
Suppose now that $F$ and $G$ are any functors from $\mathsf{C}$ to $\mathsf{D}$, and let $x\overset{f}{\longrightarrow}y$ be any morphism in $\mathsf{C}$. In the case when each component $F(x)\overset{\eta_x}{\longrightarrow} G(x)$ of $\eta$ is an isomorphism, the naturality condition $\eta_y\circ F(f)=G(f)\circ \eta_x$ is equivalent to $F(f)=\eta_y^{-1}\circ G(f)\circ\eta_x$ since $\eta_y$ is invertible.
I've made the objects gray so that we can focus more on the arrows. In fact, let's clean up the diagram on the right even more:
So when each $\eta_x$ is an isomorphism, the naturality condition is a bit like a conjugation! It's also reminiscent of a homotopy from $G$ to $F$. Both viewpoints suggest that when each $\eta_x$ is an isomorphism, $F$ and $G$ are really the same functor
up to a change in perspective. When this is the case, the natural transformation $\eta$ is called a natural isomorphism, and $F$ and $G$ are said to be naturally isomorphic.
Example #1: group actions & equivariant maps
We mentioned previously that every group $G$ can be viewed as a category $\mathsf{B}G$ with one object $\bullet$ and a morphism $\bullet\overset{g}{\longrightarrow}\bullet$ for each group element $g$. On this category, we can define a functor $\mathsf{B}G\to\mathsf{Set}$ that sends the one object $\bullet$ in $\mathsf{B}G$ to exactly one set, call it $X$, and that sends a group element $g$ to a function $g\cdot-:X\to X$ given by $x\mapsto g\cdot x$. The functoriality conditions actually determine a left action of $G$ on $X$! (Check this!) In other words, every functor $\mathsf{B}G\to\mathsf{Set}$ encodes a group action, and the image of the single object under this functor is a $G$-set.
So what's a natural transformation in this setup? Suppose $A,B:\mathsf{B}G\to\mathsf{Set}$ are two functors with $A(\bullet)=X$ and $B(\bullet)=Y$ and let $\bullet\overset{g}{\longrightarrow}\bullet$ be a group element in $G$. Then $\eta:A\Longrightarrow B$ consists of exactly one function $\eta:X\to Y$ that satisfies $\eta(g(x))=g(\eta(x))$ for every $x\in X$.
This equality follows from the commuting square below. As with all such diagrams, simply pick an element in one of the corners (here, I've picked a little blue $x\in X$, top-left) and chase it around. Notice, the two horizontal maps are exactly the same $X\overset{\eta}{\longrightarrow}Y$, despite the fact that they're drawn in two different locations on the screen.
In words, the naturality condition says that for any point $x$ in $X$, first "translating" $x$ by $g\in G$ to the point $gx$ and then sending it to $Y$ via $\eta$ is the
same as first sending $x$ to $Y$ via $\eta$ and then translating that point by $g$. In short, natural transformations are $G$-equivariant maps!
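To see the condition pass a mechanical check, here is a small Python sketch. Everything in it is a toy setup of my own for illustration, nothing from the discussion above: I take $G=\mathbb{Z}_6$ acting by addition on $X=\mathbb{Z}_6$ and on $Y=\mathbb{Z}_3$, and check that $\eta(x) = x \bmod 3$ is equivariant.

X = range(6)          # the G-set A(bullet), with G = Z_6 acting by rotation
Y = range(3)          # the G-set B(bullet), with the induced Z_6 action

def act_X(g, x):      # left action of g on X
    return (g + x) % 6

def act_Y(g, y):      # left action of g on Y
    return (g + y) % 3

def eta(x):           # the candidate natural transformation / equivariant map
    return x % 3

# Naturality: "translate then map" equals "map then translate" for all g, x.
assert all(eta(act_X(g, x)) == act_Y(g, eta(x)) for g in range(6) for x in X)
print("eta is G-equivariant: every naturality square commutes")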
I have a few more examples to share, but I'll save them until next time. Check back in a few days!
*Here, I'm imagining $F$ and $G$ to be functors from a little indexing category into some other category (pick your favorite).
|
Learning Outcomes
Conduct and interpret hypothesis tests for two population proportions
When conducting a hypothesis test that compares two independent population proportions, the following characteristics should be present:
The two samples are simple random samples that are independent. The number of successes is at least five, and the number of failures is at least five, for each of the samples. Growing literature states that the population must be at least 10 or 20 times the size of the sample. This keeps each population from being over-sampled and causing incorrect results.
Comparing two proportions, like comparing two means, is common. If two estimated proportions are different, it may be due to a difference in the populations or it may be due to chance. A hypothesis test can help determine if a difference in the estimated proportions reflects a difference in the population proportions.
The difference of two proportions follows an approximate normal distribution. Generally, the null hypothesis states that the two proportions are the same. That is,
$H_0: p_A = p_B$. To conduct the test, we use a pooled proportion, $p_c$.
The pooled proportion is calculated as follows: [latex]\displaystyle{p}_{c}=\frac{x_{A}+x_{B}}{n_{A}+n_{B}}[/latex]
The distribution for the differences is: [latex]\displaystyle P\prime_{A}-P\prime_{B} \sim N\Bigg[0,\sqrt{p_{c}\big(1-p_{c}\big)\bigg(\frac{1}{n_{A}}+\frac{1}{n_{B}}\bigg)}\Bigg][/latex]
The test statistic (z-score) is: [latex]\displaystyle{z}=\frac{(p\prime_{A}-p\prime_{B})-(p_A-p_B)}{\sqrt{p_c(1-p_c)(\frac{1}{n_A}+\frac{1}{n_B})}}[/latex]
Example
Two types of medication for hives are being tested to determine if there is a
difference in the proportions of adult patient reactions. Twenty out of a random sample of 200 adults given medication A still had hives 30 minutes after taking the medication. Twelve out of another random sample of 200 adults given medication B still had hives 30 minutes after taking the medication. Test at a 1% level of significance.
Solution:
The problem asks for a difference in proportions, making it a test of two proportions.
Let A and B be the subscripts for medication A and medication B, respectively. Then $p_A$ and $p_B$ are the desired population proportions.
Random Variable: $P'_A - P'_B$ = difference in the proportions of adult patients who did not react after 30 minutes to medication A and to medication B.
$H_0: p_A = p_B$, i.e. $p_A - p_B = 0$
$H_a: p_A \neq p_B$, i.e. $p_A - p_B \neq 0$
The words "is a difference" tell you the test is two-tailed.
Distribution for the test: Since this is a test of two binomial population proportions, the distribution is normal:
[latex]\displaystyle{p_c}=\frac{x_A+x_B}{n_A+n_B}=\frac{20+12}{200+200}=0.08[/latex]
$1 - p_c = 0.92$
[latex]\displaystyle P\prime_{A}-P\prime_{B} \sim N\Bigg[0,\sqrt{(0.08)(0.92)\bigg(\frac{1}{200}+\frac{1}{200}\bigg)}\Bigg][/latex]
$P'_A - P'_B$ follows an approximate normal distribution. Calculate the p-value using the normal distribution: p-value = 0.1404.
Estimated proportion for group A: [latex]\displaystyle{p}\prime_{{A}}=\frac{{x}_{{A}}}{{n}_{{A}}}=\frac{{20}}{{200}}={0.1}[/latex]
Estimated proportion for group B: [latex]\displaystyle{p}\prime_{{B}}=\frac{{x}_{{B}}}{{n}_{{B}}}=\frac{{12}}{{200}}={0.06}[/latex]
Graph: $P'_A - P'_B = 0.1 - 0.06 = 0.04$.
Half the p-value is below –0.04, and half is above 0.04.
Compare α and the p-value: α = 0.01 and the p-value = 0.1404. α < p-value.
Make a decision: Since α < p-value, do not reject H0.
Conclusion: At a 1% level of significance, from the sample data, there is not sufficient evidence to conclude that there is a difference in the proportions of adult patients who did not react after 30 minutes to medication A and medication B.
Using a Calculator
Press STAT. Arrow over to TESTS and press 6:2-PropZTest. Arrow down and enter 20 for x1, 200 for n1, 12 for x2, and 200 for n2. Arrow down to p1: and arrow to not equal p2. Press ENTER. Arrow down to Calculate and press ENTER. The p-value is p = 0.1404 and the test statistic is 1.47. Do the procedure again, but instead of Calculate do Draw.
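If you prefer software to a TI calculator, here is a minimal Python sketch that reproduces these numbers. The helper function two_prop_ztest is our own, not a library routine; only scipy is assumed.

from math import sqrt
from scipy.stats import norm

def two_prop_ztest(xA, nA, xB, nB):
    # Pooled two-proportion z-test, two-tailed.
    pA, pB = xA / nA, xB / nB
    pc = (xA + xB) / (nA + nB)                    # pooled proportion p_c
    se = sqrt(pc * (1 - pc) * (1 / nA + 1 / nB))  # standard error under H0
    z = (pA - pB) / se                            # test statistic
    return z, 2 * norm.sf(abs(z))                 # two-tailed p-value

z, p = two_prop_ztest(20, 200, 12, 200)   # medication A vs. medication B
print(round(z, 2), round(p, 4))           # 1.47 0.1404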
try it
Two types of valves are being tested to determine if there is a difference in pressure tolerances. Fifteen out of a random sample of 100 of Valve
A cracked under 4,500 psi. Six out of a random sample of 100 of Valve B cracked under 4,500 psi. Test at a 5% level of significance.
The
p-value is 0.0379, so we can reject the null hypothesis. At the 5% significance level, the data support that there is a difference in the pressure tolerances between the two valves.
Example
A research study was conducted about gender differences in “sexting.” The researcher believed that the proportion of girls involved in “sexting” is less than the proportion of boys involved. The data collected in the spring of 2010 among a random sample of middle and high school students in a large school district in the southern United States is summarized in the table. Is the proportion of girls sending sexts less than the proportion of boys “sexting?” Test at a 1% level of significance.
                        Males    Females
Sent "sexts"            183      156
Total number surveyed   2231     2169
Solution:
This is a test of two population proportions. Let M and F be the subscripts for males and females. Then
$p_M$ and $p_F$ are the desired population proportions.
Random variable: $p'_F - p'_M$ = difference in the proportions of males and females who sent "sexts."
$H_0: p_F = p_M$, i.e. $p_F - p_M = 0$
$H_a: p_F < p_M$, i.e. $p_F - p_M < 0$
The words
“less than” tell you the test is left-tailed. Distribution for the test: Since this is a test of two population proportions, the distribution is normal:
[latex]\displaystyle{p}_c=\frac{x_F+x_M}{n_F+n_M}=\frac{156+183}{2169+2231}=0.077[/latex]
Therefore,
[latex]\displaystyle P\prime_{F}-P\prime_{M} \sim N\Bigg[0,\sqrt{(0.077)(0.923)\bigg(\frac{1}{2169}+\frac{1}{2231}\bigg)}\Bigg][/latex]
p′F – p′M follows an approximate normal distribution. Calculate the p-value using the normal distribution: p-value = 0.1045
Estimated proportion for females: 0.0719
Estimated proportion for males: 0.082
Graph:
Decision: Since α < p-value, do not reject H0.
Conclusion: At the 1% level of significance, from the sample data, there is not sufficient evidence to conclude that the proportion of girls sending "sexts" is less than the proportion of boys sending "sexts."
Using a Calculator
Press STAT. Arrow over to TESTS and press 6:2-PropZTest. Arrow down and enter 156 for x1, 2169 for n1, 183 for x2, and 2231 for n2. Arrow down to p1: and arrow to less than p2. Press ENTER. Arrow down to Calculate and press ENTER. The p-value is P = 0.1045 and the test statistic is z = –1.256.
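The same check in Python (again a sketch assuming only scipy; the variable names are ours), this time with a left-tailed p-value:

from math import sqrt
from scipy.stats import norm

xF, nF, xM, nM = 156, 2169, 183, 2231
pc = (xF + xM) / (nF + nM)                    # pooled proportion, about 0.077
se = sqrt(pc * (1 - pc) * (1 / nF + 1 / nM))
z = (xF / nF - xM / nM) / se                  # about -1.256
print(round(z, 3), round(norm.cdf(z), 4))     # left-tailed p-value, 0.1045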
Example
Researchers conducted a study of smartphone use among adults. A cell phone company claimed that iPhone smartphones are more popular with whites (non-Hispanic) than with African Americans. The results of the survey indicate that of the 232 African American cell phone owners randomly sampled, 5% have an iPhone. Of the 1,343 white cell phone owners randomly sampled, 10% own an iPhone. Test at the 5% level of significance. Is the proportion of white iPhone owners greater than the proportion of African American iPhone owners?
Solution:
This is a test of two population proportions. Let W and A be the subscripts for the whites and African Americans. Then
$p_W$ and $p_A$ are the desired population proportions.
Random variable: $p'_W - p'_A$ = difference in the proportions of white and African American iPhone owners.
$H_0: p_W = p_A$, i.e. $p_W - p_A = 0$
$H_a: p_W > p_A$, i.e. $p_W - p_A > 0$
The words
“more popular” indicate that the test is right-tailed. Distribution for the test: The distribution is approximately normal:
[latex]\displaystyle{p}_c=\frac{x_W+x_A}{n_W+n_A}=\frac{134+12}{1343+232}=0.0927[/latex]
Therefore,
[latex]\displaystyle P\prime_{W}-P\prime_{A} \sim N\Bigg[0,\sqrt{(0.0927)(0.9073)\bigg(\frac{1}{1343}+\frac{1}{232}\bigg)}\Bigg][/latex]
$P'_W - P'_A$ follows an approximate normal distribution.
Calculate the p-value using the normal distribution: p-value = 0.0077
Estimated proportion for whites: 0.10
Estimated proportion for African Americans: 0.05
Graph:
Decision: Since α > p-value, reject H0.
Conclusion: At the 5% level of significance, from the sample data, there is sufficient evidence to conclude that a larger proportion of white cell phone owners use iPhones than African American cell phone owners.
Using a Calculator
TI-83+ and TI-84
Press STAT. Arrow over to TESTS and press 6:2-PropZTest. Arrow down and enter 135 for x1, 1343 for n1, 12 for x2, and 232 for n2. Arrow down to p1: and arrow to greater than p2. Press ENTER. Arrow down to Calculate and press ENTER. The P-value is P = 0.0092 and the test statistic is Z = 2.33.
try it
A concerned group of citizens wanted to know if the proportion of forcible rapes in Texas was different in 2011 than in 2010. Their research showed that of the 113,231 violent crimes in Texas in 2010, 7,622 of them were forcible rapes. In 2011, 7,439 of the 104,873 violent crimes were in the forcible rape category. Test at a 5% significance level. Answer the following questions:
Is this a test of two means or two proportions? Which distribution do you use to perform the test? What is the random variable? What are the null and alternative hypotheses? Write the null and alternative hypotheses in symbols. Is this test right-, left-, or two-tailed? What is the p-value? Do you reject or not reject the null hypothesis? At the ___ level of significance, from the sample data, there ______ (is/is not) sufficient evidence to conclude that ____________.
Answers: two proportions; normal for two proportions; Subscripts: 1 = 2010, 2 = 2011, random variable $P'_1 - P'_2$; $H_0: p_1 = p_2$, i.e. $p_1 - p_2 = 0$; $H_a: p_1 \neq p_2$, i.e. $p_1 - p_2 \neq 0$; two-tailed; p-value = 0.00086; Reject the $H_0$. At the 5% significance level, from the sample data, there is sufficient evidence to conclude that there is a difference between the proportion of forcible rapes in 2011 and 2010.
Concept Review
Test of two population proportions from independent samples. Random variable: [latex]\displaystyle\hat{{p}}_{{A}}-\hat{{p}}_{{B}}[/latex] = difference between the two estimated proportions Distribution: normal distribution
Formula Review
Pooled Proportion: [latex]\displaystyle{p}_{c}=\frac{x_{A}+x_{B}}{n_{A}+n_{B}}[/latex]
Distribution for the differences: [latex]\displaystyle P\prime_{A}-P\prime_{B} \sim N\Bigg[0,\sqrt{p_{c}\big(1-p_{c}\big)\bigg(\frac{1}{n_{A}}+\frac{1}{n_{B}}\bigg)}\Bigg][/latex]
where the null hypothesis is
$H_0: p_A = p_B$ or $H_0: p_A - p_B = 0$.
Test Statistic (
z-score): [latex]\displaystyle{z}=\frac{(p\prime_{A}-p\prime_{B})-(p_A-p_B)}{\sqrt{p_c(1-p_c)(\frac{1}{n_A}+\frac{1}{n_B})}}[/latex]
where the null hypothesis is
$H_0: p_A = p_B$ or $H_0: p_A - p_B = 0$.
where
$p'_A$ and $p'_B$ are the sample proportions, $p_A$ and $p_B$ are the population proportions, $p_c$ is the pooled proportion, and $n_A$ and $n_B$ are the sample sizes.
|
Fit the Neyman-Scott cluster process with Cauchy kernel
Fits the Neyman-Scott Cluster point process with Cauchy kernel to a point pattern dataset by the Method of Minimum Contrast, using the pair correlation function.
Usage
cauchy.estpcf(X, startpar=c(kappa=1,scale=1), lambda=NULL, q = 1/4, p = 2, rmin = NULL, rmax = NULL, ..., pcfargs = list())
Arguments X
Data to which the model will be fitted. Either a point pattern or a summary statistic. See Details.
startpar
Vector of starting values for the parameters of the model.
lambda
Optional. An estimate of the intensity of the point process.
q,p
Optional. Exponents for the contrast criterion.
rmin, rmax
Optional. The interval of \(r\) values for the contrast criterion.
…
Optional arguments passed to
optim to control the optimisation algorithm. See Details.
pcfargs
Optional list containing arguments passed to
pcf.ppp to control the smoothing in the estimation of the pair correlation function.
Details
This algorithm fits the Neyman-Scott cluster point process model with Cauchy kernel to a point pattern dataset by the Method of Minimum Contrast, using the pair correlation function.
The argument
X can be either
a point pattern:
An object of class
"ppp"representing a point pattern dataset. The pair correlation function of the point pattern will be computed using
pcf, and the method of minimum contrast will be applied to this.
a summary statistic:
An object of class
"fv"containing the values of a summary statistic, computed for a point pattern dataset. The summary statistic should be the pair correlation function, and this object should have been obtained by a call to
pcf or one of its relatives.
The algorithm fits the Neyman-Scott cluster point process with Cauchy kernel to
X, by finding the parameters of the Cauchy model which give the closest match between the theoretical pair correlation function of the model and the observed pair correlation function. For a more detailed explanation of the Method of Minimum Contrast, see
mincontrast.
The model is described in Jalilian et al (2013). It is a cluster process formed by taking a pattern of parent points, generated according to a Poisson process with intensity \(\kappa\), and around each parent point, generating a random number of offspring points, such that the number of offspring of each parent is a Poisson random variable with mean \(\mu\), and the locations of the offspring points of one parent follow a common distribution described in Jalilian et al (2013).
If the argument
lambda is provided, then this is used as the value of the point process intensity \(\lambda\). Otherwise, if
X is a point pattern, then \(\lambda\) will be estimated from
X. If
X is a summary statistic and
lambda is missing, then the intensity \(\lambda\) cannot be estimated, and the parameter \(\mu\) will be returned as
NA.
The remaining arguments
rmin,rmax,q,p control the method of minimum contrast; see
mincontrast.
The corresponding model can be simulated using
rCauchy.
For computational reasons, the optimisation procedure internally uses the parameter
eta2, which is equivalent to
4 * scale^2 where
scale is the scale parameter for the model as used in
rCauchy.
The optimisation algorithm can be controlled through the additional arguments
"..." which are passed to the optimisation function
optim. For example, to constrain the parameter values to a certain range, use the argument
method="L-BFGS-B" to select an optimisation algorithm that respects box constraints, and use the arguments
lower and
upper to specify (vectors of) minimum and maximum values for each parameter.
Value
An object of class
"minconfit". There are methods for printing and plotting this object. It contains the following main components:
par: Vector of fitted parameter values.
fit: Function value table (object of class "fv") containing the observed values of the summary statistic (observed) and the theoretical values of the summary statistic computed from the fitted model parameters.
References
Ghorbani, M. (2012) Cauchy cluster process.
Metrika, to appear.
Jalilian, A., Guan, Y. and Waagepetersen, R. (2013) Decomposition of variance for spatial Cox processes.
Scandinavian Journal of Statistics 40, 119-137.
Waagepetersen, R. (2007) An estimating function approach to inference for inhomogeneous Neyman-Scott processes.
Biometrics 63, 252-258.
See Also
rCauchy to simulate the model.
Aliases cauchy.estpcf Examples
# NOT RUN {
u <- cauchy.estpcf(redwood)
u
plot(u, legendpos="topright")
# }
Documentation reproduced from package spatstat, version 1.59-0, License: GPL (>= 2)
|
Here, we review parameter regularization, which is a method for improving regression models through the penalization of non-zero parameter estimates. Why is this effective? Biasing parameters towards zero will (of course!) unfavorably bias a model, but it will also reduce its variance. At times the latter effect can win out, resulting in a net reduction in generalization error. We also review Bayesian regressions — in effect, these generalize the regularization approach, biasing model parameters to any specified prior estimates, not necessarily zero.
Follow @efavdb Follow us on twitter for new submission alerts! Introduction and overview
In this post, we will be concerned with the problem of fitting a function of the form
$$\label{function} y(\vec{x}_i) = f(\vec{x}_i) + \epsilon_i \tag{1}, $$ where $f$ is the function’s systematic part and $\epsilon_i$ is a random error. These errors have mean zero and are iid — their presence is meant to take into account dependences in $y$ on features that we don’t have access to. To “fit” such a function, we will suppose that one has chosen some appropriate regression algorithm (perhaps a linear model, a random forest, etc.) that can be used to generate an approximation $\hat{f}$ to $y$, given a training set of example $(\vec{x}_i, y_i)$ pairs.
The primary concern when carrying out a regression is often to find a fit that will be accurate when applied to points not included in the training set. There are two sources of error that one has to grapple with: Bias in the algorithm — sometimes the result of using an algorithm that has insufficient flexibility to capture the nature of the function being fit, and variance — this relates to how sensitive the resulting fit is to the samples chosen for the training set. The latter issue is closely related to the concept of overfitting.
To mitigate overfitting, parameter regularization is often applied. As we detail below, this entails penalizing non-zero parameter estimates. Although this can favorably reduce the variance of the resulting model, it will also introduce bias. The optimal amount of regularization is therefore determined by appropriately balancing these two effects.
In the following, we carefully review the mathematical definitions of model bias and variance, as well as how these effects contribute to the error of an algorithm. We then show that regularization is equivalent to assuming a particular form of Bayesian prior that causes the parameters to be somewhat “sticky” around zero — this stickiness is what results in model variance reduction. Because standard regularization techniques bias towards zero, they work best when the underlying true feature dependences are sparse. When this is not true, one should attempt an analogous variance reduction through application of the more general Bayesian regression framework.
Squared error decomposition
The first step to understanding regression error is the following identity: Given any fixed $\vec{x}$, we have
$$ \begin{align} \overline{\left (\hat{f}(\vec{x}) - y(\vec{x}) \right)^2} &= \overline{\left (\hat{f}(\vec{x}) - \overline{\hat{f}(\vec{x})} \right)^2} + \left (\overline{\hat{f}(\vec{x})} - f(\vec{x}) \right)^2 + \overline{ \epsilon^2} \\ & \equiv var\left(\hat{f}(\vec{x})\right) + bias\left(\hat{f}(\vec{x})\right)^2 + \overline{\epsilon^2}. \tag{2}\label{error_decomp} \end{align} $$ Here, overlines represent averages over two things: The first is the random error $\epsilon$ values, and the second is the training set used to construct $\hat{f}$. The left side of (\ref{error_decomp}) gives the average squared error of our algorithm, at point $\vec{x}$ — i.e., the average squared error we can expect to get, given a typical training set and $\epsilon$ value. The right side of the equation decomposes this error into separate, independent components. The first term at right — the variance of $\hat{f}(\vec{x})$ — relates to how widely the estimate at $\vec{x}$ changes as one randomly samples from the space of possible training sets. Similarly, the second term — the algorithm's squared bias — relates to the systematic error of the algorithm at $\vec{x}$. The third and final term above gives the average squared random error — this provides a fundamental lower bound on the accuracy of any estimator of $y$.
We turn now to the proof of (\ref{error_decomp}). We write the left side of this equation as
$$\label{detail} \begin{align} \tag{3} \overline{\left (\hat{f}(\vec{x}) - y(\vec{x}) \right)^2} &= \overline{\left ( \left \{\hat{f}(\vec{x}) - f(\vec{x}) \right \} - \left \{ y(\vec{x}) - f(\vec{x}) \right \} \right)^2}\\ &= \overline{\left ( \hat{f}(\vec{x}) - f(\vec{x}) \right)^2} - 2 \overline{ \left (\hat{f}(\vec{x}) - f(\vec{x}) \right ) \left (y(\vec{x}) - f(\vec{x}) \right ) } + \overline{ \left (y(\vec{x}) - f(\vec{x}) \right)^2}. \end{align} $$ The middle term here is zero. To see this, note that it is the average of the product of two independent quantities: The first factor, $\hat{f}(\vec{x}) - f(\vec{x})$, varies only with the training set, while the second factor, $y(\vec{x}) - f(\vec{x})$, varies only with $\epsilon$. Because these two factors are independent, their average product is the product of their individual averages, the second of which is zero, by definition. Now, the third term in (\ref{detail}) is simply $\overline{\epsilon^2}$. To complete the proof, we need only evaluate the first term above. To do that, we write $$\begin{align} \tag{4} \label{detail2} \overline{\left ( \hat{f}(\vec{x}) - f(\vec{x}) \right)^2} &= \overline{\left ( \left \{ \hat{f}(\vec{x}) - \overline{\hat{f}(\vec{x})} \right \}- \left \{f(\vec{x}) -\overline{\hat{f}(\vec{x})} \right \}\right)^2} \\ &= \overline{\left ( \hat{f}(\vec{x}) - \overline{\hat{f}(\vec{x})} \right)^2} -2 \overline{ \left \{ \hat{f}(\vec{x}) - \overline{\hat{f}(\vec{x})} \right \} \left \{f(\vec{x}) -\overline{\hat{f}(\vec{x})} \right \} } + \left ( f(\vec{x}) -\overline{\hat{f}(\vec{x})} \right)^2. \end{align} $$ The middle term here is again zero. This is because its second factor is a constant, while the first averages to zero, by definition. The first and third terms above are the algorithm's variance and squared bias, respectively. Combining these observations with (\ref{detail}), we obtain (\ref{error_decomp}).
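Before moving on, a quick numerical sanity check of (\ref{error_decomp}) may be helpful. The setup below, a deliberately-biased linear fit to a cubic signal, is our own toy example, not part of the original derivation; it estimates each term by brute-force averaging over many training sets:

import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x ** 3            # systematic part of y
sigma = 0.5                     # std of the random error epsilon
x0 = 0.7                        # fixed point at which we test (2)

preds = []
for _ in range(20000):          # resample training sets
    x = rng.uniform(-1, 1, 20)
    y = f(x) + rng.normal(0, sigma, 20)
    a, b = np.polyfit(x, y, 1)  # a deliberately-biased linear fit
    preds.append(a * x0 + b)
preds = np.array(preds)

y0 = f(x0) + rng.normal(0, sigma, preds.size)  # fresh noisy targets at x0
lhs = np.mean((preds - y0) ** 2)               # left side of (2)
rhs = preds.var() + (preds.mean() - f(x0)) ** 2 + sigma ** 2
print(lhs, rhs)                                # the two sides agree closely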
Bayesian regression
In order to introduce Bayesian regression, we focus on the special case of least-squares regressions. In this context, one posits that the samples generated take the form (\ref{function}), with the error $\epsilon_i$ terms now iid, Gaussian distributed with mean zero and standard deviation $\sigma$. Under this assumption, the probability of observing values $(y_1, y_2,\ldots, y_N)$ at $(\vec{x}_1, \vec{x}_2,\ldots,\vec{x}_N)$ is given by
$$ \begin{align} \tag{5} \label{5} P(\vec{y} \vert f) &= \prod_{i=1}^N \frac{1}{(2 \pi \sigma)^{1/2}} \exp \left [-\frac{1}{2 \sigma^2} (y_i - f(\vec{x}_i))^2 \right]\\ &= \frac{1}{(2 \pi \sigma)^{N/2}} \exp \left [-\frac{1}{2 \sigma^2} (\vec{y} - \vec{f})^2 \right], \end{align} $$ where $\vec{y} \equiv (y_1, y_2,\ldots, y_N)$ and $\vec{f} \equiv (f_1, f_2,\ldots, f_N)$. In order to carry out a maximum-likelihood analysis, one posits a parameterization for $f(\vec{x})$. For example, one could posit the linear form, $$\tag{6} f(\vec{x}) = \vec{\theta} \cdot \vec{x}. $$ Once a parameterization is selected, its optimal $\vec{\theta}$ values are selected by maximizing (\ref{5}), which gives the least-squares fit.
One sometimes would like to nudge (or bias) the parameters away from those that maximize (\ref{5}), towards some values considered reasonable ahead of time. A simple way to do this is to introduce a Bayesian prior for the parameters $\vec{\theta}$. For example, one might posit a prior of the form
$$ \tag{7} \label{7} P(f) \equiv P(\vec{\theta}) \propto \exp \left [- \frac{1}{2\sigma^2} (\vec{\theta} - \vec{\theta}_0) \Lambda (\vec{\theta} - \vec{\theta}_0)\right]. $$ Here, $\vec{\theta}_0$ represents a best guess for what $\theta$ should be before any data is taken, and the matrix $\Lambda$ determines how strongly we wish to bias $\theta$ to this value: If the components of $\Lambda$ are large (small), then we strongly (weakly) constrain $\vec{\theta}$ to sit near $\vec{\theta}_0$. To carry out the regression, we combine (\ref{5}-\ref{7}) with Bayes' rule, giving $$ \tag{8} P(\vec{\theta} \vert \vec{y}) = \frac{P(\vec{y}\vert \vec{\theta}) P(\vec{\theta})}{P(\vec{y})} \propto \exp \left [-\frac{1}{2 \sigma^2} (\vec{y} - \vec{\theta} \cdot \vec{x})^2 - \frac{1}{2\sigma^2} (\vec{\theta} - \vec{\theta}_0) \Lambda (\vec{\theta} - \vec{\theta}_0)\right]. $$ The most likely $\vec{\theta}$ now minimizes the quadratic "cost function", $$\tag{9} \label{9} F(\theta) \equiv (\vec{y} - \vec{\theta} \cdot \vec{x})^2 +(\vec{\theta} - \vec{\theta}_0) \Lambda (\vec{\theta} - \vec{\theta}_0), $$ a Bayesian generalization of the usual squared error. With this, our heavy-lifting is at an end. We now move to a quick review of regularization, which will appear as a simple application of the Bayesian method.
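As a concrete illustration, note that the minimizer of (\ref{9}) is available in closed form: setting the gradient to zero gives $\hat{\theta} = (X^TX + \Lambda)^{-1}(X^T\vec{y} + \Lambda\,\vec{\theta}_0)$ for a design matrix $X$. The sketch below (our own toy setup, not from the post) verifies this against the ordinary least-squares solution as $\Lambda \to 0$:

import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.normal(0, 0.1, 50)

def map_estimate(X, y, Lam, theta0):
    # minimizer of (9): (X^T X + Lam)^{-1} (X^T y + Lam theta0)
    return np.linalg.solve(X.T @ X + Lam, X.T @ y + Lam @ theta0)

theta0 = np.zeros(3)
ridge = map_estimate(X, y, 5.0 * np.eye(3), theta0)   # ridge: theta0 = 0
ols = np.linalg.lstsq(X, y, rcond=None)[0]
weak = map_estimate(X, y, 1e-8 * np.eye(3), theta0)

print(ridge)                       # shrunk towards theta0 = 0
print(np.allclose(weak, ols))      # True: no prior recovers least squares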
Parameter regularization as special cases
The most common forms of regularization are the so-called "ridge" and "lasso". In the context of least-squares fits, the former involves minimization of the quadratic form
$$ \tag{10} \label{ridge} F_{ridge}(\theta) \equiv (\vec{y} - \hat{f}(\vec{x}; \vec{\theta}))^2 + \Lambda \sum_i \theta_i^2, $$ while in the latter, one minimizes $$ \tag{11} \label{lasso} F_{lasso}(\theta) \equiv (\vec{y} - \hat{f}(\vec{x}; \vec{\theta}))^2 + \Lambda \sum_i \vert\theta_i \vert. $$ The terms proportional to $\Lambda$ above are the so-called regularization terms. In elementary courses, these are generally introduced to least-squares fits in an ad-hoc manner: Conceptually, it is suggested that these terms serve to penalize the inclusion of too many parameters in the model, with individual parameters now taking on large values only if they are really essential to the fit.
While the conceptual argument above may be correct, the framework we’ve reviewed here allows for a more sophisticated understanding of regularization: (\ref{ridge}) is a special case of (\ref{9}), with $\vec{\theta}_0$ set to $(0,0,\ldots, 0)$. Further, the lasso form (\ref{lasso}) is also a special-case form of Bayesian regression, with the prior set to $P(\vec{\theta}) \propto \exp \left (- \frac{\Lambda}{2 \sigma^2} \sum_i \vert \theta_i \vert \right)$. As advertised, regularization is a form of Bayesian regression.
Why then does regularization “work”? For the same reason any other Bayesian approach does: Introduction of a prior will bias a model (if chosen well, hopefully not by much), but will also effect a reduction in its variance. The appropriate amount of regularization balances these two effects. Sometimes — but not always — a non-zero amount of bias is required.
Discussion
In summary, our main points here were three-fold: (i) We carefully reviewed the mathematical definitions of model bias and variance, deriving (\ref{error_decomp}). (ii) We reviewed how one can inject Bayesian priors to regressions: The key is to use the random error terms to write down the probability of seeing a particular observational data point. (iii) We reviewed the fact that the ridge and lasso — (\ref{ridge}) and (\ref{lasso}) — can be considered Bayesian priors.
Intuitively, one might think introduction of a prior serves to reduce the bias in a model: Outside information is injected into a model, nudging its parameters towards values considered reasonable ahead of time. In fact, this nudging introduces bias! Bayesian methods work through reduction in variance, not bias — a good prior is one that does not introduce too much bias.
When, then, should one use regularization? Only when one expects the optimal model to be largely sparse. This is often the case when working on machine learning algorithms, as one has the freedom there to throw a great many feature variables into a model, expecting only a small (a priori unknown) minority of them to really prove informative. However, when not working in high-dimensional feature spaces, sparseness should not be expected. In this scenario, one should settle on some other form of prior, and attempt a variance reduction through the more general Bayesian framework.
|
I understand that the ATM volatility of Swaption moves quite frequently and the SABR will need to be recalibrated. Which parameter should I recalibrate?
Is there any financial meanings why we only recalibrate on certain parameters?
Thanks.
You will need to recalibrate alpha, beta and rho:
\begin{align*} dF_{t}&=\sigma_{t}F_{t}^{\beta}\,dW_{t}\\ d\sigma_{t}&=\alpha\,\sigma_{t}\,dZ_{t} \end{align*} where $dW_{t}\,dZ_{t}=\rho\,dt$.
The parameters describe the smile (richness of out-of-the-money options) and the skew (whether implied vol is upward or downward sloping as a function of strike).
Take a look at Matlab's implementation, which discusses two methods based on closed form, https://www.mathworks.com/help/fininst/calibrating-the-sabr-model.html?s_tid=gn_loc_drop
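As a rough illustration of what a recalibration step looks like in code, here is a minimal Python sketch. It assumes a function sabr_vol(K, F, T, alpha, beta, rho, nu) implementing the Hagan et al. (2002) implied-vol approximation is available (hand-rolled or from a library); that helper, the made-up market data, and the choice of fixing beta = 0.5 are all our own assumptions, not part of the answer above. Note the SDE above writes the vol-of-vol as alpha; the code uses nu for it, following the Hagan naming.

import numpy as np
from scipy.optimize import least_squares

# Hypothetical helper: Hagan (2002) SABR implied vol. Not implemented here;
# supply your own implementation or one from a library.
from my_sabr import sabr_vol   # assumed: sabr_vol(K, F, T, alpha, beta, rho, nu)

F, T, beta = 0.03, 1.0, 0.5    # forward, expiry, beta fixed by convention
strikes = np.array([0.02, 0.025, 0.03, 0.035, 0.04])
market_vols = np.array([0.31, 0.28, 0.26, 0.27, 0.29])   # made-up smile

def residuals(params):
    alpha, rho, nu = params
    model = np.array([sabr_vol(K, F, T, alpha, beta, rho, nu) for K in strikes])
    return model - market_vols

# beta is usually held fixed; alpha, rho and nu are refit whenever the
# smile (in particular the ATM vol) moves.
fit = least_squares(residuals, x0=[0.2, -0.3, 0.5],
                    bounds=([1e-6, -0.999, 1e-6], [np.inf, 0.999, np.inf]))
alpha, rho, nu = fit.x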
|
The Most Obvious Secret in Mathematics
Yes, I agree. The title for this post is a little pretentious. There are surely other mathematical secrets that are more obvious than this one, but hey, I got your attention, right? Good. Because I'd like to tell you about an overarching theme in mathematics - a mathematical mantra, if you will. A technique that mathematicians use all the time to, well, do math.
I'm calling it a 'secret' because until recently, I've rarely (if ever?) heard it stated
explicitly. This suggests to me that it's one of those things that folks assume you'll just eventually pick up. Hopefully. Like some sort of unspoken rule of mathematics. But a few weeks ago while chatting with my advisor*, I finally heard this unspoken rule uttered! Explicitly. Repeatedly, in fact. And at that time I realized it needs to be ushered further into the spotlight. Today's post, then, is my invitation to you to come listen in on that conversation.
So enough with the chit-chat! What's the secret? Here 'tis:
A mathematical object is determined by its relationships to other objects.
Practically speaking, this suggests that
an often fruitful way to discover properties of an object is NOT to investigate the object itself, but rather to study the collection of maps to or from the object.
Or to be a little less formal,
you can learn a lot about an object by studying its interactions with other things.
By "object" I mean things like sets or groups or measurable spaces or vector spaces or topological spaces or.... And by "maps" I mean the appropriate version of 'function': functions, group homomorphisms, measurable functions, linear transformations, continuous functions, and so on.
So now do you see why I'm calling this an
obvious secret? We students have been using this technique - though perhaps unknowingly - since we were mathematical infants! We learned about functions in our younger days. We've labored over properties of real-valued functions and their (anti)derivatives throughout Calculus. We became well-acquainted with linear transformations and their corresponding matrices in linear algebra. We battled with homomorphisms during the first week of undergrad abstract algebra. We finally learned the real definition of a continuous map in point-set topology. The list goes on and on.
See how pervasive this idea is? It's obvious!
And that's my point.
Because have you ever stopped to
really think about it?
At first glance, perhaps it seems a little odd that the "best" way to study an object is to divert your attention
away from the object and focus on something else. But we do this all the time. Take people-watching, for instance. You can learn a lot about a person simply by looking at how they relate to the folks around them. And the same is true in mathematics.
I've hinted at this theme briefly in a previous post, but I'd like to list a few examples to further convince you. Keep in mind, though, that this is a philosophy that permeates throughout
all of mathematics. So what I'm sharing below is peanuts compared to what's out there. But I hope it's enough to illustrate the idea.
In analysis...
One word:
sequences! Recall (or observe) that a sequence $\{x_n\}=\{x_1,x_2,\ldots\}$ is - yes, a long list of numbers but ultimately - a function $\phi:\mathbb{N}\to\mathbb{R}$, where $x_n=\phi(n)$. By using sequences to 'probe' the real line $\mathbb{R}$, we learn that $\mathbb{R}$ has no "holes" - if you point your infinitesimally small finger anywhere on the real line, you'll always land on a real number. This property of $\mathbb{R}$ is called completeness, and it is investigated by special types of sequences called Cauchy sequences. Another good example is curvature. Need to measure how much a curve or surface is bending in space? Then you'll want to think about second derivatives which, assuming the curve/surface is "nice enough," are themselves continuous functions to $\mathbb{R}$!**
In group theory...
By looking at homomorphisms from arbitrary groups to special types of groups called symmetric groups, we discover that the
raison d'être of a group is to shuffle things around! This is captured in Cayley's Theorem, a major result in group theory, which says that every group is isomorphic to a group of permutations or, less formally, a group is to math what a verb is to language. In fact, this (not the theorem, per se, but the idea) is historically how groups were first understood and is precisely what motivated Galois to lay down the foundations of the discipline of mathematics that bears his name. You might recall that we've chatted previously about the verb-like behavior of groups in this non-technical introduction to Galois theory.
In topology...
Want to know if your topological space $X$ is connected? Just check that any continuous map from it to $\{0,1\}$ is constant! Want to determine how many 'holes' $X$ has? Study continuous functions from the circle, $S^1$, into it! This leads to the fundamental group, $\pi_1$. Want to know how many higher-dimensional 'holes' there are? Look at continuous functions from the $n$-sphere into it! This leads to the higher homotopy groups, $\pi_n$. Want to know what the topology on any given space is? Simply look at the collection of continuous functions from it to a little two-point space! In fact, this last example is really quite paradigmatic, and I'd like to elaborate a bit more. So stay tuned for next time!
In the mean time, how many examples of today's not-so-secret secret can you think of? I'd love to hear 'em. Let me know in the comments below!
*Yes! I have an advisor now! And since my written qualifying exams are out of the way, the next thing on my to-do list is passing the oral qual. I've also picked up a teaching assignment this year. For both of these reasons, blogging has been - and may continue to be - a little bit slow. But although my posts may become less frequent, I'm hoping the content will be richer. I'm almost positive they'll be more topology/category-flavored, too.
** This is really a statement about differential geometry rather than analysis, for it generalizes nicely for things called manifolds. In fact, the whole premise behind differential geometry is a great example of today's theme. The idea is that globally, a manifold $M$ may be so complicated and wonky that we don't have many tools to probe it with. But - following the old adage,
How do you eat an elephant? One bite at a time. - the impossible becomes possible if we just consider $M$ little patches at a time. Why? Because locally manifolds look exactly like Euclidean space, $\mathbb{R}^n$. (Take the earth, for example. Even though it's round, it looks flat locally.) And since we have tons of tools at our disposal in $\mathbb{R}^n$ (like calculus!), we can apply them to the little patches of our manifold too.
|
Ancillary files (details): python3_src/KW_scaldims.py python3_src/LICENSE python3_src/README python3_src/TNR.py python3_src/cdf_ed_scaldimer.py python3_src/cdf_scaldimer.py python3_src/custom_parser.py python3_src/ed_scaldimer.py python3_src/initialtensors.py python3_src/modeldata.py python3_src/pathfinder.py python3_src/scaldim_plot.py python3_src/scaldimer.py python3_src/scon.py python3_src/scon_sparseeig.py python3_src/tensordispenser.py python3_src/tensors/abeliantensor.py python3_src/tensors/ndarray_svd.py python3_src/tensors/symmetrytensors.py python3_src/tensors/tensor.py python3_src/tensors/tensor_test.py python3_src/tensors/tensorcommon.py python3_src/tensorstorer.py python3_src/timer.py python3_src/toolbox.py
Condensed Matter > Strongly Correlated Electrons
Title: Topological conformal defects with tensor networks
(Submitted on 11 Dec 2015 (v1), last revised 23 Sep 2016 (this version, v3))
Abstract: The critical 2d classical Ising model on the square lattice has two topological conformal defects: the $\mathbb{Z}_2$ symmetry defect $D_{\epsilon}$ and the Kramers-Wannier duality defect $D_{\sigma}$. These two defects implement antiperiodic boundary conditions and a more exotic form of twisted boundary conditions, respectively. On the torus, the partition function $Z_{D}$ of the critical Ising model in the presence of a topological conformal defect $D$ is expressed in terms of the scaling dimensions $\Delta_{\alpha}$ and conformal spins $s_{\alpha}$ of a distinct set of primary fields (and their descendants, or conformal towers) of the Ising CFT. This characteristic conformal data $\{\Delta_{\alpha}, s_{\alpha}\}_{D}$ can be extracted from the eigenvalue spectrum of a transfer matrix $M_{D}$ for the partition function $Z_D$. In this paper we investigate the use of tensor network techniques to both represent and coarse-grain the partition functions $Z_{D_\epsilon}$ and $Z_{D_\sigma}$ of the critical Ising model with either a symmetry defect $D_{\epsilon}$ or a duality defect $D_{\sigma}$. We also explain how to coarse-grain the corresponding transfer matrices $M_{D_\epsilon}$ and $M_{D_\sigma}$, from which we can extract accurate numerical estimates of $\{\Delta_{\alpha}, s_{\alpha}\}_{D_{\epsilon}}$ and $\{\Delta_{\alpha}, s_{\alpha}\}_{D_{\sigma}}$. Two key new ingredients of our approach are (i) coarse-graining of the defect $D$, which applies to any (i.e. not just topological) conformal defect and yields a set of associated scaling dimensions $\Delta_{\alpha}$, and (ii) construction and coarse-graining of a generalized translation operator using a local unitary transformation that moves the defect, which only exist for topological conformal defects and yields the corresponding conformal spins $s_{\alpha}$.
Submission history
From: Markus Hauru [view email]
[v1] Fri, 11 Dec 2015 23:01:19 GMT (1352kb,D)
[v2] Mon, 16 May 2016 22:16:02 GMT (1920kb,AD)
[v3] Fri, 23 Sep 2016 19:16:05 GMT (1919kb,AD)
|
Considering a number expressed by four decimal digits $n=ABCD_d$, let $\sigma(n) = A+B+C+D$ be the sum of its decimal digits.
Doubling the number is equivalent to summing it to itself: $2n = ABCD_d + ABCD_d$, so the result will be composed by summing each digit with itself and, if the result is 10 or more, the second digit is the carry that will increment the next more significant digit.
Regarding the sum of digits of the double, though, we can find a way to neglect the carry.
Consider for instance $7621$ and $\sigma(7621) = 16$. Doubling it we have $15242$ and $\sigma(15242) = 14$. But we may compute the same sum doubling each digit and getting the sum of each result. So from 7621 we double $7 \rightarrow 14 \rightarrow 5$, $6 \rightarrow 12 \rightarrow 3$ , $2 \rightarrow 4$ and $1 \rightarrow 2$ and $5 +3 +4 + 2 = 14$.
In the general case, to determine the sum of the digits of the double of a number, we transform each digit using the following table and sum the resulting digits.
$\begin{array}{|r|r|r|}\hline
x_i & 2x_i & y_i \\ \hline
0 & 0 & 0 \\ \hline
1 & 2 & 2 \\ \hline
2 & 4 & 4 \\ \hline
3 & 6 & 6 \\ \hline
4 & 8 & 8 \\ \hline
5 & 10 & 1 \\ \hline
6 & 12 & 3 \\ \hline
7 & 14 & 5 \\ \hline
8 & 16 & 7 \\ \hline
9 & 18 & 9 \\ \hline
\end{array}$
Consider now the original problem: given $n$ we define $\sigma(n) = \sum_i x_i$ where $x_i$ is the $i$-th digit. If we take $y_i$ as the image of $x_i$ under the former table we can state that $\sigma(2n) = \sum_i y_i$.
With these definitions, the problem becomes the solution of the following system $\begin{cases} \sigma(n) = \sum_i x_i = 100 \\ \sigma(2n) = \sum_i y_i = 110\end{cases}$
where both $x_i$ and $y_i \in \{0,1,2,3,4,5,6,7,8,9 \}$
Let us now subtract the first equation from the second: we get $ \sigma(2n)-\sigma(n) = \sum_i y_i - \sum_i x_i = \sum_i (y_i - x_i) = 10$
From the former table we can now derive the following
$\begin{array}{|r|r|r|}\hline
y_i & x_i & y_i - x_i \\ \hline
0 & 0 & 0 \\ \hline
2 & 1 & 1 \\ \hline
4 & 2 & 2 \\ \hline
6 & 3 & 3 \\ \hline
8 & 4 & 4 \\ \hline
1 & 5 & -4 \\ \hline
3 & 6 & -3 \\ \hline
5 & 7 & -2 \\ \hline
7 & 8 & -1 \\ \hline
9 & 9 & 0 \\ \hline
\end{array}$
With this result, we can immediately find a number $n$ so that $\sigma(2n)-\sigma(n) = 10$ by finding, for instance, digits such that the sum of the corresponding entries in the third column is 10. Consider $n = 43228$: transforming each digit of the second column into the corresponding entry of the third column we get $\sigma(2n)-\sigma(n) = 4+3+2+2+(-1) = 10$, and in fact $\sigma(2n) = \sigma(86456) = 29$ and $\sigma(n) = 19 \Rightarrow \sigma(2n) - \sigma(n) = 10$.
Considering the second table, it seems evident that $9$ and $0$ are equivalent with respect to the difference because they have 0 in the last column. So if in the previous number we insert one digit 9 and one digit 0 in any place, the difference between the sums of the digit of $2n$ and $n$ remains 10.
For instance $n' = 4932028$, $2n' = 9864056$ and $\sigma(2n') - \sigma(n') = 38-28 = 10$.
Let us go back to the given problem. We must solve the following system of equations
$\begin{cases} \sigma(n) = \sum_i x_i = 100 \\ \sigma(2n) -\sigma(n)= \sum_i (y_i-x_i) = 10\end{cases}$
Here are some interesting points
The sum of digits does not depend on the order of the digits The sum of digits does not depend on the presence of one or more zeroes in the decimal representation of $n$ $\sigma(2n)-\sigma(n)$ does not depend on the presence of $0$ or $9$ in the decimal representation of $n$
Let us now determine the smallest $m$ such that $\sigma(2m)-\sigma(m) = 10$.
$m$ cannot have two digits, as the maximum contribution of a single digit is $4$ and $4+4=8<10$, so we must use at least 3 digits. There are two options for the digits, $\{4,3,3\}$ and $\{4,4,2\}$: for example $n=433$ or $m=442$, and in fact $\sigma(2n)-\sigma(n) = \sigma(866)-\sigma(433) = 10$ and $\sigma(2m)-\sigma(m) = \sigma(884)-\sigma(442) = 10$.
Given these numbers, we have $\sigma(n) = 10$, so we must determine a number $p$ so that $\sigma(p) = 90$. We may choose $p=9,999,999,999$ as $\sigma(p) = 9 \times 10 = 90$.
So here are two numbers having the required property $p_1 = 2,449,999,999,999$ and $p_2 = 3,349,999,999,999$.
Doubling yields
$2 p_1 = 4,899,999,999,998$ and $2 p_2 = 6,699,999,999,998$
Finally $\sigma(2 p_1) = 110, \sigma(p_1) = 100 $ and $\sigma(2 p_2) = 110, \sigma(p_2) = 100$
These are the smallest numbers featuring this property. Of course any number composed by any possible permutation of these digits still has the required property.
Treating the problem this way suggests an algorithm to generate arbitrarily many numbers having this or similar properties.
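A short Python check of the claims above (the helper sigma is ours, written just for this verification):

def sigma(n):
    # sum of the decimal digits of n
    return sum(int(d) for d in str(n))

assert sigma(2 * 43228) - sigma(43228) == 10   # the worked example above

for p in (2449999999999, 3349999999999):
    print(p, sigma(p), sigma(2 * p))           # prints 100 and 110 for both

# Permuting digits or inserting zeros preserves both digit sums,
# so new solutions can be produced at will:
q = 24499999999990                             # a trailing zero appended
assert sigma(q) == 100 and sigma(2 * q) == 110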
|
I have two matrices $A, B$ coming from a finite element discretization of a system of partial differential equations. $A$ represents the system matrix and is symmetric and indefinite. $B$ is symmetric positive definite and is used as a preconditioner. Direct solver is used for computation of action of $B^{-1}$. The preconditioner is quite good, $\sigma(B^{-1}A) \subset (-10, -0.1) \cup (0.1, 10)$.
When I solve this system using GMRES preconditioned by $B$ I get the approximate solution after a reasonable number of iterations. However the same preconditioner fails when used in MINRES. The number of iterations for the preconditioned system is even higher than for unpreconditioned system. (I tried this computation using PETSc, Matlab and scipy with the same qualitative result.)
Could you please navigate me to some theoretical result or a keyword that would explain such behaviour?
Edit: I found a mistake in my code, so the question is no longer valid: $B$ was (slightly) not symmetric, which makes MINRES useless. My deepest apologies.
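For anyone who lands here with the same symptom, here is a small self-contained scipy sketch of the intended setup, entirely our own construction: a symmetric indefinite $A$ with spectrum in $(-10,-0.1)\cup(0.1,10)$ and the SPD preconditioner $B=|A|$, for which preconditioned MINRES converges in a handful of iterations. MINRES requires $A = A^T$ exactly and an SPD $M$; even a tiny asymmetry in either breaks the underlying Lanczos recurrence, which matches the failure described above.

import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(0)
n = 400
Q = np.linalg.qr(rng.normal(size=(n, n)))[0]
evals = np.concatenate([np.linspace(-10, -0.1, n // 2),
                        np.linspace(0.1, 10, n // 2)])
A = (Q * evals) @ Q.T               # symmetric indefinite matrix
A = 0.5 * (A + A.T)                 # enforce exact symmetry
b = rng.normal(size=n)

Binv = (Q * (1.0 / np.abs(evals))) @ Q.T   # B = |A| is SPD; this applies B^{-1}
M = LinearOperator((n, n), matvec=lambda x: Binv @ x)

def solve(**kw):
    it = [0]
    x, info = minres(A, b, callback=lambda xk: it.__setitem__(0, it[0] + 1), **kw)
    return it[0], np.linalg.norm(A @ x - b)

print("unpreconditioned iterations/residual:", solve())
print("preconditioned iterations/residual:  ", solve(M=M))  # far fewer iterations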
|
Global bifurcation for discrete competitive systems in the plane
1. University of Rhode Island, Kingston, RI 02881, United States
We consider a family of competitive systems of difference equations
$x_{n+1} = f_\alpha(x_n,y_n)$
$y_{n+1} = g_\alpha(x_n,y_n)$
where $\alpha$ is a parameter,
$f_\alpha$ and $g_\alpha$ are continuous real valued functions
on a rectangular domain
$\mathcal{R}_\alpha \subset \mathbb{R}^2$ such that
$f_\alpha(x,y)$ is non-decreasing in $x$ and non-increasing in $y$, and $g_\alpha(x, y)$ is non-increasing in $x$ and non-decreasing in $y$.
A unique interior fixed point is assumed
for all values of the parameter $\alpha$.
As an application of the main result for competitive systems a global period-doubling bifurcation result is obtained for families of second order difference equations of the type
$x_{n+1} = F_\alpha(x_n, x_{n-1}), \quad n=0,1, \ldots $
where $\alpha$ is a parameter, $F_\alpha:\mathcal{I_\alpha}\times \mathcal{I_\alpha} \rightarrow \mathcal{I_\alpha}$ is a decreasing function in the first variable and increasing in the second variable, $\mathcal{I_\alpha}$ is an interval in $\mathbb{R}$, and there is a unique interior equilibrium point. Examples of application of the main results are also given.
Keywords: global stable manifold, monotonicity, period-two solution, bifurcation, competitive map. Mathematics Subject Classification: Primary: 37G35; Secondary: 39A10, 39A1. Citation: M. R. S. Kulenović, Orlando Merino. Global bifurcation for discrete competitive systems in the plane. Discrete & Continuous Dynamical Systems - B, 2009, 12 (1) : 133-149. doi: 10.3934/dcdsb.2009.12.133
|
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (it restores your previous session if it crashes with 100% certainty).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have problem with showing that the limit of the following function$$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$equal to $1$, with $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, so how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with $\int_0^{2 \pi} \frac{d}{dn} e^{inx} dx$ when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates it's not 0. Does anyone know?
|
This is the last exercise of a quite challenging exercise paper belonging to a friend who is taking calculus, whom I'm trying to help. I already helped her with the other bunch, but this one got me. I would appreciate it if anyone could look over my work and tell me whether it is right or whether I need to correct something.
The exercise is:
If $f'(a)=1$ for $a>0$, find $\lim_{x \to a} \frac{f(x)-f(a)}{\sqrt{x}-\sqrt{a}}$.
What came to my mind was to rationalize the denominator.
$$\lim_{x \to a} \frac{f(x)-f(a)}{\sqrt{x}-\sqrt{a}}$$
$$=\lim_{x \to a} \frac{f(x)-f(a)}{\sqrt{x}-\sqrt{a}}\cdot \frac{\sqrt{x}+\sqrt{a}}{\sqrt{x}+\sqrt{a}}$$
$$=\lim_{x \to a} \frac{(f(x)-f(a))(\sqrt{x}+\sqrt{a})}{x-a}$$
$$=\lim_{x \to a} \left(\frac{f(x)-f(a)}{x-a}\cdot (\sqrt{x}+\sqrt{a})\right)$$
$$=\lim_{x \to a} \frac{f(x)-f(a)}{x-a}\cdot \lim_{x \to a}(\sqrt{x}+\sqrt{a})$$
$$=f'(a)\cdot \lim_{x \to a}(\sqrt{x}+\sqrt{a})$$
$$=1\cdot \lim_{x \to a}(\sqrt{x}+\sqrt{a})$$
$$=\lim_{x \to a}(\sqrt{x}+\sqrt{a})$$
$$=\sqrt{a}+\sqrt{a}$$
$$=2\sqrt{a}$$
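(Note that splitting the limit into a product in the fifth step is justified because both factor limits exist.) As a quick numerical sanity check, here is a minimal Python sketch; it assumes the concrete choice $f(x)=x$, which satisfies $f'(a)=1$ for every $a>0$, and confirms the quotient approaches $2\sqrt{a}$:

import math

def f(x):
    return x  # any f with f'(a) = 1 works; f(x) = x is the simplest choice

a = 4.0
for h in [1e-1, 1e-3, 1e-5]:
    x = a + h
    quotient = (f(x) - f(a)) / (math.sqrt(x) - math.sqrt(a))
    print(h, quotient, 2 * math.sqrt(a))  # quotient tends to 2*sqrt(a) = 4.0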
|
If we pretend our data is like a big pool of numbered balls, we draw one ball from the pool and copy the number to another ball that we set aside. Now, toss the ball back, mix, pick another ball and repeat. In this scheme you have a chance of drawing the same ball twice (or more): if the balls are well mixed, any particular ball is drawn with probability $\frac{1}{N}$ on each of the $D$ draws.
As I was reading Leo Breiman's description of the algorithm (which is very readable), I noticed the remark, "About one-third of the cases are left out of the bootstrap sample and not used in the construction of the kth tree". Is that true?
If the probability of choosing one sample at random is $1/N$, then the probability that a sample is not chosen is $1-\frac{1}{N}$. For two independent draws, the chance that a sample is not chosen in either draw is
\begin{aligned} \left(1- \frac{1}{N}\right)\left(1- \frac{1}{N}\right) = \left(1- \frac{1}{N}\right)^2. \end{aligned}
For $D$ independent draws, the probability of not choosing that sample is
\begin{aligned} \left(1- \frac{1}{N}\right)^D \end{aligned}
Hmmmm ... that looks interesting. Suppose we let $D$ go to $N$, meaning if we have 10 total samples we draw 10 "bootstrap samples". What's the probability that we'll miss a given sample? Wolfram|Alpha can make this a bit easier. It turns out that as we go to a very large dataset, $N\to\infty$, this converges to something specific:
\begin{aligned} \lim_{N\to\infty }(1-\frac{1}{N})^N=\frac{1}{e}\approx 0.36787... \end{aligned}
Pretty interesting. This means that the probability a sample will be chosen is $\approx 0.63$, which justifies Breiman's statement that "About one-third of the cases are left out". Pictorially this looks like:

(Figure: $(1-1/N)^N$ converging to its limit $1/e \approx 0.368$ as $N$ grows.)
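If you'd like to check the arithmetic yourself, here is a small Python sketch (illustrative only; the values of N and the trial count are arbitrary) that evaluates $(1-1/N)^N$ directly and also simulates the left-out fraction:

import random

# Closed form: (1 - 1/N)^N approaches 1/e as N grows
for N in [10, 100, 10_000]:
    print(N, (1 - 1 / N) ** N)  # approaches 0.3678...

# Direct simulation: average fraction of samples left out of a bootstrap draw
N, trials = 1000, 100
left_out = 0.0
for _ in range(trials):
    drawn = {random.randrange(N) for _ in range(N)}
    left_out += (N - len(drawn)) / N
print(left_out / trials)  # ~0.368, i.e. "about one-third"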
|
For a while now I have tried to come up with a macro that can properly format integrals (the way I like), as well as giving it a few options that can be chosen, e.g. \int and \oint. So far there is a little problem arising in the code. It occurs when you choose \oint by adding a * to the argument. It does produce the right integral, but the limits are a little wrong, since they still contain the *.
My Integral macro and an example is shown below
\documentclass{article}
\usepackage{amsmath}
\usepackage{xparse}
\usepackage{xstring}
\NewDocumentCommand{\Int}{ >{\SplitList{;}}o}{%
  \IfValueT{#1}{\ProcessList{#1}{\IntLimitONE}}\,%
}
\NewDocumentCommand{\IntLimitONE}{m}{%
  \IfSubStr{#1}{*}{\oint\IntSplitLimits{#1}\!}{\int\IntSplitLimits{#1}\!}}
\NewDocumentCommand{\IntSplitLimits}{ >{\SplitArgument{1}{,}}m}{%
  \IfValueT{#1}{\IntLimits#1}%
}
\NewDocumentCommand{\IntLimits}{mm}{%
  _{#1}^{\IfNoValueTF{#2}{}{#2\!\!}}%
}
\begin{document}
\begin{equation}
  \Int[a,b;*c,d;e,f] f(x,y,z) dxdydz \quad , \quad \Int[*a,b] f(x) dx
\end{equation}
\end{document}
I have tried a few ways to counter this; none completely satisfy my needs. One I tried was using the xstring command \StrSubstitute:
\NewDocumentCommand{\IntLimitONE}{m}{%
  \StrSubstitute{#1}{*}{}[\temp]%
  \IfSubStr{#1}{*}{\oint\IntSplitLimits{\temp}}{\int\IntSplitLimits{#1}}}
It does indeed remove the * from the limit, but the upper limit has been moved to the lower one.
I cannot seem to understand what goes wrong here? Any help is appreciated :)
|
I'm just trying to build the formula shown below in LaTeX. My approach would be the following:

prob[ \tilde{n}=n \mid \tilde{s}=s]

However, the first two n's are not displayed correctly. Does anyone know what is wrong with my approach? (absolute LaTeX beginner)
With a code adapted from an example in the mathtools documentation. The \prob macro can make the size of the brackets and middle bar fit the contents of the macro using the starred version \prob*. Alternatively one can use an optional argument \big, \Big, \bigg, or \Bigg, which inserts a pair of implicit \bigl ... \bigr before the delimiters:
\documentclass{article}
\usepackage{mathtools, nccmath}
\providecommand\given{}
\DeclarePairedDelimiterXPP\prob[1]{\mathrm{prob}}[]{}{\renewcommand\given{\nonscript\:\delimsize\vert\nonscript\:\mathopen{}}#1}
\begin{document} %
\[ \prob[\big]{\tilde{n} = n\given\tilde{s} = n} = p > \mfrac12\]%
\end{document}
An alternative that sizes the brackets and the vertical bar automatically to the content between them is to use \left, \middle and \right. This uses egreg's trick for a vertical bar that grows.
\documentclass[preview,varwidth]{standalone}
\usepackage{amsmath}
\usepackage{unicode-math}
\DeclareMathOperator\prob{prob}
\ifdefined\Umiddle
  \newcommand{\relmiddle}{\Umiddle class 5 }
\else
  \newcommand{\relmiddle}[1]{\mathrel{}\middle#1\mathrel{}}
\fi
\begin{document}
\(\prob \left[ \tilde{n} = \frac{n^{2^m}}{2} \relmiddle\vert \tilde{s} = n \right] = p > \frac{1}{2}\)
\end{document}
This example will work with your font packages of choice, not just unicode-math. And you can still declare \prob with two arguments.
The following works very well for me, even though Steven B. Segletes has a perfect answer. I have changed n to n/2 for better visibility of the height of the mid line.
\begin{equation}\operatorname{prob} \left[\left. \tilde{n}=\frac{n}{2} \right| \tilde{s}=n\right] =p > \frac{1}{2}\end{equation}
|
During my 4-week visit as an OpenMM Visiting Scholar, I implemented a constant pH algorithm for implicit solvent. A brief overview of the algorithm is given in this video:
Below, I describe the algorithm in much greater detail and also highlight key features of OpenMM that enabled me to quickly implement the algorithm.
Introduction
Constant pH dynamics is a simulation procedure whereby the protonation states of residues and molecules are allowed to fluctuate. The role that this process plays in structural biology can be surprisingly difficult to model with classical molecular dynamics. Jacobson et al. [1] provide an excellent review of some of these cases. Protonation states often are inferred with programs like MCCE [2], but in a static structural context. With this approach, a set of stable initial protonation states can then be studied and compared, either by docking or simulation.
As noted by Jacobson, however, pH plays a much more dynamic role. Notable examples include DNA binding, allostery, fibril formation, and other mechanisms. These systems can exhibit cascades of proton transport, requiring an in situ approach to the simulation. To account for these effects properly remains one of the great challenges in biomolecular simulation. Many useful approaches have been proposed [3-6], and we propose to build on this impressive body of work.

Implementation of Constant pH Dynamics
In my implementation of constant pH dynamics, I built from an existing alchemical formulation that uses a fractional parameter to describe the protonation state of the system. Using this formulation, we can gradually morph molecules from one protonation state to another. The gradual parameter adjustment allows the surrounding environment to adjust to the new protonation state. The new state is then accepted or rejected based on a Monte Carlo criterion. Below, I describe the new energy formulation and the Monte Carlo criterion used for accepting the trial protonation states.
Consider a system with coordinates $r$ and protonation state(s) ${\lambda}$, and an energy $U_{SYS}$, which includes bonded and nonbonded terms, as well as solvation. The full energy term $U_T$ includes the Henderson-Hasselbalch energy, giving

Energetics:
$$U_{T}(r,\lambda)=U_{SYS}\left(r,p(\lambda) \right) + G_{HH}(\lambda)$$
where $G_{HH}(\lambda)=\sum_{i} G_{HH}(\lambda_i)$ is the sum over all titratable residues and molecules, with the term for the $i$th residue following the form given by Brooks [4] as
$$\beta G_{HH}(\lambda_i)=2.3\cdot \beta \lambda_i \cdot \left( pK_a(i)-pH \right)$$
The energy of the system is currently computed using a linear mixing rule for the parameter states, computed as
$$p(\lambda)=(1-\lambda)p_0 + \lambda p_1 $$
where $p_0$ and $p_1$ are the initial and final parameter states, respectively, which can include partial charges, van der Waals radii, and other forcefield parameters. The current implementation is in implicit solvent, and explicit solvent implementations will be available in the near term.
The perturbation procedure is a Monte Carlo technique that collaborators and I have developed known as Nonequilibrium Candidate Monte Carlo (NCMC) [7]. The idea behind this procedure is to generate a trial move that gradually adjusts the parameter to allow the local environment to relax to the new parameter state. We then accept this trial with a Monte Carlo criterion that ensures correct statistics. If we use a Verlet integrator in the trial move, the Metropolis acceptance criterion is

Sampling:
$$\textrm{acc}(r_0 \rightarrow r_1)=\min(1,e^{-\beta \Delta H_T}),$$
where $\Delta H_T$ is the change in the total Hamiltonian moving from state 0 to state 1 through the nonequilibrium trial.
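To make these two ingredients concrete, here is a schematic, standalone Python sketch of the linear mixing rule and the Metropolis test. The charges and the temperature factor below are made-up placeholders, and this is a toy illustration rather than the actual OpenMM implementation:

import numpy as np

rng = np.random.default_rng(0)
beta = 1.0 / 0.593  # 1/kT near room temperature, in (kcal/mol)^-1 (placeholder)

def mixed_params(lam, p0, p1):
    # Linear mixing rule: p(lambda) = (1 - lambda) * p0 + lambda * p1
    return (1.0 - lam) * p0 + lam * p1

def metropolis_accept(delta_H_total):
    # Accept the nonequilibrium trial with probability min(1, exp(-beta * dH_T))
    return rng.random() < min(1.0, np.exp(-beta * delta_H_total))

# Morph a placeholder partial charge from a protonated to a deprotonated state
p0, p1 = 0.31, -0.12  # made-up charges for one titratable atom
for lam in np.linspace(0.0, 1.0, 5):
    print(lam, mixed_params(lam, p0, p1))
print(metropolis_accept(1.5))  # accept/reject one hypothetical trial move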
OpenMM Experience and Tips
Implementing this algorithm in OpenMM was very straightforward, given that the data structures can readily be accessed through the Python API. This type of prototype development can often be very difficult in large molecular dynamics codes, but OpenMM is clearly designed with rapid prototyping in mind. It took me about a week to refresh my Python skills and learn OpenMM, and another couple of weeks to implement the algorithm using OpenMM’s Python API and begin testing the algorithm. We still have much to do, but the basic machinery seems to be in place and working as expected, including the GPU acceleration.
For the scientist whose interest is primarily developing prototypes and testing hypotheses, the Python API is an ideal development platform. Python is very easy to use and debug. It is also a very legible language, so that others can easily see your logic. If you leverage the object-oriented structure, it is easy to see how your prototype code might be implemented in C++ once the prototype is stable.
Since the Python layer is an API to OpenMM’s GPU-accelerated code, it also runs extremely fast, so that generating validation data is much easier. We found that using special functions that allow for the rapid updating of forcefield parameters without needing to fully update the context on the GPU was extremely useful for our particular algorithm, as we needed to make frequent communications to the GPU.
The development team has provided extremely high quality documentation with great examples in the user manual. The code is also very well written and organized, and I found the online class diagrams to be particularly helpful when looking for routines to incorporate in the code.
Future Work
I hope to continue to develop this code as part of my research program moving forward. Having a stable constant pH code for implicit and explicit solvent would be our first effort. Another goal is to port this procedure onto the C++ layer so that it can be readily accessible to Folding@Home, which generates many simulation trajectories that can be pieced together using Markov State models via MSMBuilder. This would allow for microsecond to millisecond timescale studies of pH dependent folding events (for example). OpenMM has made it easy to test out my idea, and I am excited to see what new research results come of it.
References
1. Schönichen, A., et al., Considering protonation as a posttranslational modification regulating protein structure and function. Annual Review of Biophysics, 2013. 42: p. 289-314.
2. Gunner, M., X. Zhu, and M.C. Klein, MCCE analysis of the pKas of introduced buried acids and bases in staphylococcal nuclease. Proteins: Structure, Function, and Bioinformatics, 2011. 79(12): p. 3306-3319.
3. Bürgi, R., P.A. Kollman, and W.F. van Gunsteren, Simulating proteins at constant pH: an approach combining molecular dynamics and Monte Carlo simulation. Proteins: Structure, Function, and Bioinformatics, 2002. 47(4): p. 469-480.
4. Khandogin, J. and C.L. Brooks, Constant pH molecular dynamics with proton tautomerism. Biophysical Journal, 2005. 89(1): p. 141-157.
5. Mertz, J.E. and B.M. Pettitt, Molecular dynamics at a constant pH. International Journal of High Performance Computing Applications, 1994. 8(1): p. 47-53.
6. Mongan, J. and D.A. Case, Biomolecular simulations at constant pH. Current Opinion in Structural Biology, 2005. 15(2): p. 157-163.
7. Nilmeier, J.P., et al., Nonequilibrium candidate Monte Carlo is an efficient tool for equilibrium simulation. Proceedings of the National Academy of Sciences, 2011. 108(45): p. E1009-E1018.
8. Kong, X. and C.L. Brooks, λ-dynamics: A new approach to free energy calculations. The Journal of Chemical Physics, 1996. 105(6): p. 2414-2423.
9. Stern, H.A., Molecular simulation with variable protonation states at constant pH. The Journal of Chemical Physics, 2007. 126(16): p. 164112.
10. Wagoner, J.A. and V.S. Pande, A smoothly decoupled particle interface: New methods for coupling explicit and implicit solvent. The Journal of Chemical Physics, 2011. 134(21): p. 214103.
11. Wagoner, J.A. and V.S. Pande, Reducing the effect of Metropolization on mixing times in molecular dynamics simulations. The Journal of Chemical Physics, 2012. 137(21): p. 214105.
12. Börjesson, U. and P.H. Hünenberger, Explicit-solvent molecular dynamics simulation at constant pH: methodology and application to small amines. The Journal of Chemical Physics, 2001. 114(22): p. 9706-9719.
|
@HarryGindi So the $n$-simplices of $N(D^{op})$ are $Hom_{sCat}(\mathfrak{C}[n],D^{op})$. Are you using the fact that the whole simplicial set is the mapping simplicial object between cosimplicial simplicial categories, and taking the constant cosimplicial simplicial category in the right coordinate?
I guess I'm just very confused about how you're saying anything about the entire simplicial set if you're not producing it, in one go, as the mapping space between two cosimplicial objects. But whatever, I dunno. I'm having a very bad day with this junk lol.
It just seems like this argument is all about the sets of n-simplices. Which is the trivial part.
lol no i mean, i'm following it by context actually
so for the record i really do think that the simplicial set you're getting can be written as coming from the simplicial enrichment on cosimplicial objects, where you take a constant cosimplicial simplicial category on one side
@user1732 haha thanks! we had no idea if that'd actually find its way to the internet...
@JonathanBeardsley any quillen equivalence determines an adjoint equivalence of quasicategories. (and any equivalence can be upgraded to an adjoint (equivalence)). i'm not sure what you mean by "Quillen equivalences induce equivalences after (co)fibrant replacement" though, i feel like that statement is mixing category-levels
@JonathanBeardsley if nothing else, this follows from the fact that \frakC is a left quillen equivalence so creates weak equivalences among cofibrant objects (and all objects are cofibrant, in particular quasicategories are). i guess also you need to know the fact (proved in HTT) that the three definitions of "hom-sset" introduced in chapter 1 are all weakly equivalent to the one you get via \frakC
@IlaRossi i would imagine that this is in goerss--jardine? ultimately, this is just coming from the fact that homotopy groups are defined to be maps in (from spheres), and you only are "supposed" to map into things that are fibrant -- which in this case means kan complexes
@JonathanBeardsley earlier than this, i'm pretty sure it was proved by dwyer--kan in one of their papers around '80 and '81
@HarryGindi i don't know if i would say that "most" relative categories are fibrant. it was proved by lennart meier that model categories are Barwick--Kan fibrant (iirc without any further adjectives necessary)
@JonathanBeardsley what?! i really liked that picture! i wonder why they removed it
@HarryGindi i don't know about general PDEs, but certainly D-modules are relevant in the homotopical world
@HarryGindi oh interesting, thomason-fibrancy of W is a necessary condition for BK-fibrancy of (R,W)?
i also find the thomason model structure mysterious. i set up a less mysterious (and pretty straightforward) analog for $\infty$-categories in the appendix here: arxiv.org/pdf/1510.03525.pdf
as for the grothendieck construction computing hocolims, i think the more fundamental thing is that the grothendieck construction itself is a lax colimit. combining this with the fact that ($\infty$-)groupoid completion is a left adjoint, you immediately get that $|Gr(F)|$ is the colimit of $B \xrightarrow{F} Cat \xrightarrow{|-|} Spaces$
@JonathanBeardsley If you want to go that route, I guess you still have to prove that ^op_s and ^op_Delta both lie in the unique nonidentity component of Aut(N(Qcat)) and Aut(N(sCat)) whatever nerve you mean in this particular case (the B-K relative nerve has the advantage here bc sCat is not a simplicial model cat)
I think the direct proof has a lot of advantages here, since it gives a point-set on-the-nose isomorphism
Yeah, definitely, but I'd like to stay and work with Cisinski on the Ph.D if possible, but I'm trying to keep options open
not put all my eggs in one basket, as it were
I mean, I'm open to coming back to the US too, but I don't have any ideas for advisors here who are interested in higher straightening/higher Yoneda, which I am convinced is the big open problem for infinity, n-cats
Gaitsgory and Rozenblyum, I guess, but I think they're more interested in applications of those ideas vs actually getting a hold of them in full generality
@JonathanBeardsley Don't sweat it. As it was mentioned I have now mod superpowers, so s/he can do very little to upset me. Since you're the room owner, let me know if I can be of any assistance here with the moderation (moderators on SE have network-wide chat moderating powers, but this is not my turf, so to speak).
There are two "opposite" functors:$$ op_\Delta\colon sSet\to sSet$$and$$op_s\colon sCat\to sCat.$$The first takes a simplicial set to its opposite simplicial set by precomposing with the opposite of a functor $\Delta\to \Delta$ which is the identity on objects and takes a morphism $\langle k...
@JonathanBeardsley Yeah, I worked out a little proof sketch of the lemma on a notepad
It's enough to show everything works for generating cofaces and codegeneracies
the codegeneracies are free, the 0 and nth cofaces are free
all of those can be done treating frak{C} as a black box
the only slightly complicated thing is keeping track of the inner generated cofaces, but if you use my description of frak{C} or the one Joyal uses in the quasicategories vs simplicial categories paper, the combinatorics are completely explicit for codimension 1 face inclusions
the maps on vertices are obvious, and the maps on homs are just appropriate inclusions of cubes on the {0} face of the cube wrt the axis corresponding to the omitted inner vertex
In general, each Δ[1] factor in Hom(i,j) corresponds exactly to a vertex k with i<k<j, so omitting k gives inclusion onto the 'bottom' face wrt that axis, i.e. Δ[1]^{k-i-1} x {0} x Δ[j-k-1] (I'd call this the top, but I seem to draw my cubical diagrams in the reversed orientation).
> Thus, using appropriate tags one can increase ones chances that users competent to answer the question, or just interested in it, will notice the question in the first place. Conversely, using only very specialized tags (which likely almost nobody specifically favorited, subscribed to, etc) or worse just newly created tags, one might miss a chance to give visibility to ones question.
I am not sure to which extent this effect is noticeable on smaller sites (such as MathOverflow) but probably it's good to follow the recommendations given in the FAQ. (And MO is likely to grow a bit more in the future, so then it can become more important.) And also some smaller tags have enough followers.
You are asking posts far away from areas I am familiar with, so I am not really sure which top-level tags would be a good fit for your questions - otherwise I would edit/retag the posts myself. (Other than possibility to ping you somewhere in chat, the reason why I posted this in this room is that users of this room are likely more familiar with the topics you're interested in and probably they would be able to suggest suitable tags.)
I just wanted to mention this, in case it helps you when asking question here. (Although it seems that you're doing fine.)
@MartinSleziak even I was not sure what other tags would be appropriate to add. I will look at similar questions, see what tags they have, and add any that are relevant. Thanks for your suggestion; it is very reasonable.
You don't need to put only one tag, you can put up to five. In general it is recommended to put a very general tag (usually an "arxiv" tag) to indicate broadly which sector of math your question is in, and then more specific tags
I would say that the topics of the US Talbot, as with the European Talbot, are heavily influenced by the organizers. If you look at who the organizers were/are for the US Talbot I think you will find many homotopy theorists among them.
|
Welcome back to our mini-series on quantum probability! Last time, we motivated the series by pondering over a thought from classical probability theory, namely that marginal probability doesn't have memory. That is, the process of summing over of a variable in a joint probability distribution causes information about that variable to be lost. But as we saw then, there is a quantum version of marginal probability that behaves much like "marginal probability with memory." It remembers what's destroyed when computing marginals in the usual way. In today's post, I'll unveil the details. Along the way, we'll take an introductory look at the mathematics of quantum probability theory.
Let's begin with a brief recap of the ideas covered in Part 1: We began with a joint probability distribution on a product of finite sets $p\colon X\times Y\to [0,1]$ and realized it as a matrix $M$ by setting $M_{ij} = \sqrt{p(x_i,y_j)}$. We called elements of our set $X=\{0,1\}$ prefixes and the elements of our set $Y=\{00,11,01,10\}$ suffixes so that $X\times Y$ is the set of all bitstrings of length 3.
We then observed that the matrix $M^\top M$ contains the marginal probability distribution of $Y$ along its diagonal. Moreover its eigenvectors define conditional probability distributions on $Y$. Likewise, $MM^\top$ contains marginals on $X$ along its diagonal, and its eigenvectors define conditional probability distributions on $X$.
The information in the eigenvectors of $M^\top M$ and $MM^\top$ is precisely the information that's destroyed when computing marginal probability in the usual way. The big reveal last time was that the matrices $M^\top M$ and $MM^\top$ are the quantum versions of marginal probability distributions.
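As a quick sanity check of this recap, here is a small numpy sketch; the joint distribution below is made up purely for illustration:

import numpy as np

# Joint distribution p(x, y) on X = {0, 1} (prefixes), Y = {00, 11, 01, 10} (suffixes)
p = np.array([[0.10, 0.05, 0.20, 0.15],
              [0.05, 0.25, 0.10, 0.10]])
assert np.isclose(p.sum(), 1.0)

M = np.sqrt(p)           # M_ij = sqrt(p(x_i, y_j))
print(np.diag(M.T @ M))  # marginal distribution on Y
print(np.diag(M @ M.T))  # marginal distribution on X
print(np.allclose(np.diag(M.T @ M), p.sum(axis=0)))  # True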
As we'll see today, the quantum version of a probability distribution is something called a density operator. The quantum version of marginalizing corresponds to "reducing" that operator to a subsystem. This reduction is a construction in linear algebra called the partial trace. I'll start off by explaining the partial trace. Then I'll introduce the basics of quantum probability theory. At the end, we'll tie it all back to our bitstring example.
In this article and the next, I'd like to share some ideas from the world of quantum probability.* The word "quantum" is pretty loaded, but don't let that scare you. We'll take a first—not second or third—look at the subject, and the only prerequisites will be linear algebra and basic probability. In fact, I like to think of quantum probability as another name for "linear algebra + probability," so this mini-series will explore the mathematics, rather than the physics, of the subject.**
In today's post, we'll motivate the discussion by saying a few words about (classical) probability. In particular, let's spend a few moments thinking about the following: marginal probability doesn't have memory.

What do I mean? We'll start with some basic definitions. Then I'll share an example that illustrates this idea.
A probability distribution (or simply, distribution) on a finite set $X$ is a function $p \colon X\to [0,1]$ satisfying $\sum_x p(x) = 1$. I'll use the term joint probability distribution to refer to a distribution on a Cartesian product of finite sets, i.e. a function $p\colon X\times Y\to [0,1]$ satisfying $\sum_{(x,y)}p(x,y)=1$. Every joint distribution defines a marginal probability distribution on one of the sets by summing probabilities over the other set. For instance, the marginal distribution $p_X\colon X\to [0,1]$ on $X$ is defined by $p_X(x)=\sum_yp(x,y)$, in which the variable $y$ is summed, or "integrated," out. It's this very process of summing or integrating out that causes information to be lost. In other words, marginalizing loses information. It doesn't remember what was summed away!
I'll illustrate this with a simple example. To do so, I need to give you some finite sets $X$ and $Y$ and a probability distribution on them.
Today I'd like to share an idea. It's a very simple idea. It's not fancy and it's certainly not new. In fact, I'm sure many of you have thought about it already. But if you haven't—and even if you have!—I hope you'll take a few minutes to enjoy it with me. Here's the idea: every matrix corresponds to a graph.

So simple! But we can get a lot of mileage out of it.
To start, I'll be a little more precise: every matrix corresponds to a weighted bipartite graph. By "graph" I mean a collection of vertices (dots) and edges; by "bipartite" I mean that the dots come in two different types/colors; by "weighted" I mean each edge is labeled with a number.
|
I am trying to prove $$X_n \xrightarrow{d} X, Y_n \xrightarrow{d} a \implies Y_n X_n \xrightarrow{d} aX$$where $a$ is a constant.
What I tried: Let $g:\mathbb R\to \mathbb R$ be an arbitrary uniformly continuous, bounded function. It suffices to show $\mathbb E[g(Y_n X_n)] \to \mathbb E[g(aX)]$. We have $$\left \lvert \int g(Y_nX_n) - g(aX) \,dP \right \rvert \leq \left \lvert \int g(Y_n X_n) - g(aX_n) \, dP \right \rvert +\left \lvert \int g(aX_n) - g(aX) \, dP \right \rvert,$$ where the right summand goes to $0$ by assumption as $a$ is constant. Now I want to use uniform continuity of $g$ to estimate the left summand: Choose $\delta > 0 $ such that $$\left \lvert g(Y_n X_n) - g(aX_n) \right \rvert < \epsilon,$$ whenever $|Y_n X_n - aX_n| < \delta.$ Since convergence in distribution to a constant implies convergence in probability, we can control $P(|Y_n - a | > \delta)$. But is it possible to get a bound on $|X_n|$? I assume that convergence in distribution does not imply some form of boundedness... Any help is appreciated.
First, you can choose $g$ to be bounded and Lipschitz, with $C$ its Lipschitz constant. It follows that $\forall \epsilon >0, A > 0$: $$|\mathbb{E}g(X_n Y_n) - \mathbb{E}g(X_n a)| \leq \mathbb{E}|g(X_n Y_n) - g(X_n a)|1_{|X_n Y_n - X_n a| > \epsilon} + \mathbb{E}|g(X_n Y_n) - g(X_n a)|1_{|X_n Y_n - X_n a| \leq \epsilon} \leq C \epsilon + ||g||_\infty \mathbb{P} (|X_n Y_n - X_n a | > \epsilon) \leq C \epsilon + ||g||_\infty\left[\mathbb{P} (|X_n Y_n - X_n a | > \epsilon, |X_n|> A) + \mathbb{P} (|X_n Y_n - X_n a | > \epsilon, |X_n| \leq A)\right] \leq C \epsilon + ||g||_\infty\left[\mathbb{P} (|Y_n - a | > \epsilon / A) + \mathbb{P} (|X_n| \geq A)\right]$$
Now, $\lim_{n \rightarrow \infty} \mathbb{P} (|Y_n - a | > \epsilon / A) = 0$, so $\limsup_{n \rightarrow \infty} |\mathbb{E}g(X_n Y_n) - \mathbb{E}g(X_n a)| \leq C \epsilon + ||g||_\infty\, \mathbb{P} (|X| \geq A)$. I used here the fact that $X_n$ converges in law to $X$ and that $[A, \infty[$ is closed (see the Portmanteau lemma). You can now let $\epsilon$ go to zero and $A$ go to $\infty$.
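A quick simulation makes the conclusion plausible (illustrative only; the particular choices of $X$, $a$, the $1/\sqrt{n}$ perturbations, and the test function $g$ are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
a, n, size = 2.0, 1000, 100_000

X = rng.standard_normal(size)                      # target law of X
X_n = X + rng.standard_normal(size) / np.sqrt(n)   # X_n -> X in distribution
Y_n = a + rng.standard_normal(size) / np.sqrt(n)   # Y_n -> a in probability

g = np.tanh  # a bounded Lipschitz test function
print(np.mean(g(X_n * Y_n)), np.mean(g(a * X)))    # nearly equal for large n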
|
A common task in applied statistics is the pairwise comparison of the responses of $N$ treatment groups in some statistical test — the goal being to decide which pairs exhibit differences that are statistically significant. Now, because there is one comparison being made for each pairing, a naive application of the Bonferroni correction analysis suggests that one should set the individual pairwise test sizes to $\alpha_i \to \alpha_f/{N \choose 2}$ in order to obtain a desired family-wise type 1 error rate of $\alpha_f$. Indeed, this solution is suggested by many texts. However, implicit in the Bonferroni analysis is the assumption that the comparisons being made are each mutually independent. This is not the case here, and we show that as a consequence the naive approach often returns type 1 error rates far from those desired. We provide adjusted formulas that allow for error-free Bonferroni-like corrections to be made.
[edit (7/4/2016): After posting this article, I’ve since found that the method we suggest here is related to / is a generalization of Tukey’s range test — see here.]
[edit (6/11/2018): I’ve added the notebook used below to our Github, here]
Introduction
In this post, we consider a particular kind of statistical test where one examines $N$ different treatment groups, measures some particular response within each, and then decides which of the ${N \choose 2}$ pairs appear to exhibit responses that differ significantly. This is called the pairwise comparison problem (or sometimes “posthoc analysis”). It comes up in many contexts, and in general it will be of interest whenever one is carrying out a multiple-treatment test.
Our specific interest here is in identifying the appropriate individual measurement error bars needed to guarantee a given family-wise type 1 error rate, $\alpha_f$. Briefly, $\alpha_f$ is the probability that we incorrectly make any assertion that two measurements differ significantly when the true effect sizes we're trying to measure are actually all the same. This can happen due to the nature of statistical fluctuations. For example, when measuring the heights of $N$ identical objects, measurement error can cause us to incorrectly think that some pairs have slightly different heights, even though that's not the case. A classical approach to addressing this problem is given by the Bonferroni approximation: If we consider $\mathcal{N}$ independent comparisons, and each has an individual type 1 error rate of $\alpha_i,$ then the family-wise probability of not making any type 1 errors is simply the product of the probabilities that we don't make any individual type 1 errors,
$$ \tag{1} \label{bon1} p_f = (1 - \alpha_f) = p_i^{\mathcal{N}} \equiv \left ( 1 - \alpha_i \right)^{\mathcal{N}} \approx 1 - \mathcal{N} \alpha_i. $$ The last equality here is an expansion that holds when $p_f$ is close to $1$, the limit we usually work in. Rearranging (\ref{bon1}) gives a simple expression, $$ \tag{2} \label{bon2} \alpha_i = \frac{\alpha_f}{\mathcal{N}}. $$ This is the (naive) Bonferroni approximation — it states that one should use individual tests of size $\alpha_f / \mathcal{N}$ in order to obtain a family-wise error rate of $\alpha_f$.
The reason we refer to (\ref{bon2}) as the naive Bonferroni approximation is that it doesn't actually apply to the problem we consider here: $p_f \not = p_i^{\mathcal{N}}$ in (\ref{bon1}) if the $\mathcal{N}$ comparisons considered are not independent. This is generally the case for our system of $\mathcal{N} = {N \choose 2}$ comparisons, since they are based on an underlying set of measurements having only $N$ degrees of freedom (the object heights, in our example). Despite this obvious issue, the naive approximation is often applied in this context. Here, we explore the nature of the error incurred in such applications, and we find that it is sometimes very significant. We also show that it's actually quite simple to apply the principle behind the Bonferroni approximation without error: One need only find a way to evaluate the true $p_f$ for any particular choice of error bars. Inverting this then allows one to identify the error bars needed to obtain the desired $p_f$.
General treatment
In this section, we derive a formal expression for the type 1 error rate in the pairwise comparison problem. For simplicity, we will assume 1) that the uncertainty in each of our $N$ individual measurements is the same (e.g., the variance in the case of Normal variables), and 2) that our pairwise tests assert that two measurements differ statistically if and only if they are more than $k$ units apart.
To proceed, we consider the probability that a type 1 error does not occur, $p_f$. This requires that all $N$ measurements sit within $k$ units of each other. For any set of values satisfying this condition, let the smallest of the set be $x$. We have $N$ choices for which of the treatments sits at this position. The remaining $(N-1)$ values must all be within the region $(x, x+k)$. Because we're considering the type 1 error rate, we can assume that each of the independent measurements takes on the same distribution $P(x)$. These considerations imply
$$ \tag{3} \label{gen} p_{f} \equiv 1 - \alpha_{f} = N \int_{-\infty}^{\infty} P(x) \left \{\int_x^{x+k} P(y) dy \right \}^{N-1} dx. $$ Equation (\ref{gen}) is our main result. It is nice for a couple of reasons. First, its form implies that when $N$ is large it will scale like $a \times p_{1,eff}^N$, for some $k$-dependent numbers $a$ and $p_{1,eff}$. This is reminiscent of the expression (\ref{bon1}), where $p_f$ took the form $p_i^{\mathcal{N}}$. Here, we see that the correct value actually scales like some number to the $N$-th power, not the $\mathcal{N}$-th. This reflects the fact that we actually only have $N$ independent degrees of freedom here, not ${N \choose 2}$. Second, when the inner integral above can be carried out formally, (\ref{gen}) can be expressed as a single one-dimensional integral. In such cases, the integral can be evaluated numerically for any $k$, allowing one to conveniently identify the $k$ that returns any specific, desired $p_f$. We illustrate both points in the next two sections, where we consider Normal and Cauchy variables, respectively.

Normally-distributed responses
We now consider the case where the individual statistics are each Normally-distributed about zero, and we reject any pair if they are more than $k \times \sqrt{2} \sigma$ apart, with $\sigma^2$ the variance of the individual statistics. In this case, the inner integral of (\ref{gen}) goes to
$$\tag{4} \label{inner_g} \frac{1}{\sqrt{2 \pi \sigma^2}} \int_x^{x+k \sqrt{2} \sigma} \exp\left [ -\frac{y^2}{2 \sigma^2} \right] dy = \frac{1}{2} \left [\text{erf}(k + \frac{x}{\sqrt{2} \sigma}) - \text{erf}(\frac{x}{\sqrt{2} \sigma})\right]. $$ Plugging this into (\ref{gen}), we obtain $$\tag{5} \label{exact_g} p_f = \int \frac{N e^{-x^2 / 2 \sigma^2}}{\sqrt{2 \pi \sigma^2}} \exp \left ((N-1) \log \frac{1}{2} \left [\text{erf}(k + \frac{x}{\sqrt{2} \sigma}) - \text{erf}(\frac{x}{\sqrt{2} \sigma})\right]\right)dx. $$ This exact expression (\ref{exact_g}) can be used to obtain the $k$ value needed to achieve any desired family-wise type 1 error rate. Example solutions obtained in this way are compared to the $k$-values returned by the naive Bonferroni approach in the table below. The last column $p_{f,Bon}$ shown is the family-wise success rate that you get when you plug in $k_{Bon},$ the naive Bonferroni $k$ value targeting $p_{f,exact}$.
$N$   $p_{f,exact}$   $k_{exact}$   $k_{Bon}$   $p_{f,Bon}$
$4$   $0.90$   $2.29$   $2.39$   $0.921$
$8$   $0.90$   $2.78$   $2.91$   $0.929$
$4$   $0.95$   $2.57$   $2.64$   $0.959$
$8$   $0.95$   $3.03$   $3.10$   $0.959$
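For readers who want to reproduce the $k_{exact}$ column, here is a sketch of one way to invert Eq. (5) numerically (unit variance is assumed, and the root-finder bracket is an arbitrary choice). The Cauchy case of Eq. (11) follows the same pattern, with the Normal density and erf swapped for the Cauchy density and arctangent:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import erf

def p_family(k, N):
    # Family-wise success rate, Eq. (5), for unit-variance Normal statistics
    def integrand(x):
        inner = 0.5 * (erf(k + x / np.sqrt(2)) - erf(x / np.sqrt(2)))
        return N * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi) * inner**(N - 1)
    return quad(integrand, -np.inf, np.inf)[0]

# Solve p_family(k, N) = p_target for k; compare with the table above
for N, p_target in [(4, 0.90), (8, 0.90), (4, 0.95), (8, 0.95)]:
    k = brentq(lambda kk: p_family(kk, N) - p_target, 0.5, 10.0)
    print(N, p_target, round(k, 2))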
Examining the table shown, you can see that the naive approach consistently overestimates the $k$ values (error bars) needed to obtain the desired family-wise rates — but not dramatically so. The reason for the near-accuracy is that the two solutions basically scale the same way with $N$. To see this, one can carry out an asymptotic analysis of (\ref{exact_g}). We skip the details and note only that at large $N$ we have
$$\tag{6} \label{asy_g} p_f \sim \text{erf} \left ( \frac{k}{2}\right)^N \sim \left (1 - \frac{e^{-k^2 / 4}}{k \sqrt{\pi}/2} \right)^N. $$ This is interesting because the individual pairwise tests have p-values given by $$ \tag{7} \label{asy_i} p_i = \int_{-k\sqrt{2}\sigma}^{k\sqrt{2}\sigma} \frac{e^{-x^2 / (4 \sigma^2)}}{\sqrt{4 \pi \sigma^2 }} dx = \text{erf}(k /\sqrt{2}) \sim 1 - \frac{e^{-k^2/2}}{k \sqrt{\pi/2}}. $$ At large $k$, this is dominated by the exponential. Comparing with (\ref{asy_g}), this implies $$ \tag{8} \label{fin_g} p_f \sim \left (1 - \alpha_i^{1/2} \right)^N \sim 1 - N \alpha_i^{1/2} \equiv 1 - \alpha_f. $$ Fixing $\alpha_f$, this requires that $\alpha_i$ scale like $N^{-2}$, the same scaling with $N$ as the naive Bonferroni solution. Thus, in the case of Normal variables, the Bonferroni approximation provides an inexact, but reasonable approximation (nevertheless, we suggest going with the exact approach using (\ref{exact_g}), since it's just as easy!). We show in the next section that this is not the case for Cauchy variables.

Cauchy-distributed variables
We’ll now consider the case of $N$ independent, identically-distributed Cauchy variables having half widths $a$,
$$ \tag{9} \label{c_dist} P(x) = \frac{a}{\pi} \frac{1}{a^2 + x^2}. $$ When we compare any two, we will reject the null if they are more than $ka$ apart. With this choice, the inner integral of (\ref{gen}) is now $$ \tag{10} \label{inner_c} \frac{a}{\pi} \int_x^{x+ k a} \frac{1}{a^2 + y^2} dy = \frac{1}{\pi} \left [\tan^{-1}(k + x/a) - \tan^{-1}(x/a) \right]. $$ Plugging this into (\ref{gen}) now gives
$$\tag{11} \label{exact_c} p_f = \int \frac{N a/\pi}{a^2 + x^2} \exp \left ((N-1) \log \frac{1}{\pi} \left [\tan^{-1}(k + x/a) - \tan^{-1}(x/a) \right] \right) dx. $$ This is the analog of (\ref{exact_g}) for Cauchy variables — it can be used to find the exact $k$ value needed to obtain a given family-wise type 1 error rate. The table below compares the exact values to those returned by the naive Bonferroni analysis [obtained using the fact that the difference between two independent Cauchy variables of width $a$ is itself a Cauchy distributed variable, but with width $2a$].
$N$   $p_{f,exact}$   $k_{exact}$   $k_{Bon}$   $p_{f,Bon}$
$4$   $0.90$   $27$    $76$    $0.965$
$8$   $0.90$   $55$    $350$   $0.985$
$4$   $0.95$   $53$    $153$   $0.983$
$8$   $0.95$   $107$   $700$   $0.993$
In this case, you can see that the naive Bonferroni approximation performs badly. For example, in the last line, it suggests using error bars that are seven times too large for each point estimate. The error gets even worse as $N$ grows: Again, skipping the details, we note that in this limit, (\ref{exact_c}) scales like
$$\tag{12} \label{asym_c} p_f \sim \left [\frac{2}{\pi} \tan^{-1}(k/2) \right]^N. $$ This can be related to the individual $p_i$ values, which are given by $$ \tag{13} \label{asym2_c} p_i = \int_{-ka}^{ka} \frac{2 a / \pi}{4 a^2 + x^2}dx = \frac{2}{\pi}\tan^{-1}(k/2). $$ Comparing the last two lines, we obtain $$ \tag{14} \label{asym3_c} p_f \equiv 1 - \alpha_f \sim p_i^N \sim 1 - N \alpha_i. $$ Although we've been a bit sloppy with coefficients here, (\ref{asym3_c}) gives the correct leading $N$-dependence: $k_{exact} \sim 1/\alpha_i \propto N$. We can see this linear scaling in the table above. This explains why $k_{exact}$ and $k_{Bon}$ — which scales like ${N \choose 2} \sim N^2$ — differ more and more as $N$ grows. In this case, you should definitely never use the naive approximation, but instead stick to the exact analysis based on (\ref{exact_c}).

Conclusion
Some people criticize the Bonferroni correction factor as being too conservative. However, our analysis here suggests that this feeling may be due in part to its occasional improper application. The naive approximation simply does not apply in the case of pairwise comparisons because the ${N \choose 2}$ pairs considered are not independent — there are only $N$ independent degrees of freedom in this problem. Although the naive correction does not apply to the problem of pairwise comparisons, we’ve shown here that it remains a simple matter to correctly apply the principle behind it: One can easily select any desired family-wise type 1 error rate through an appropriate selection of the individual test sizes — just use (\ref{gen})!
We hope you enjoyed this post — we anticipate writing a bit more on hypothesis testing in the near future.
|
There's a mistake in your proof: you do not have $F=\bigcup_{k=1}^mR_k$. The idea is, however, correct.
To see where it fails, try drawing it in 2 dimensions. $F$ is then a rectangle; you've sliced each side into $m$ equal portions, in other words you made an $m$ by $m$ grid of smaller rectangles out of $F$. You then defined your $R_k$ as being the diagonal smaller rectangles... meaning you're missing all the other $m(m-1)$ rectangles.
If we go back to $n$ dimensions, you have an $m\times m\times\ldots\times m$ (a total of $n$ times) hypergrid, which contains $m^n$ small hyper-rectangles. In your definition, you only take the $m$ diagonal elements and are then missing $m(m^{n-1}-1)$ hyper-rectangles.
EDIT
Because I have no clue what your background in maths (and other related topics) is, I have included a lot of details below. If you still need more on some points, feel free to ask; if you already know part of what's below, my apologies for making it so long.
Defining enough hyper-rectangles
Let $M$ be a positive integer such that
$$\frac{\operatorname{diam}(F)}M =\sqrt{\sum_{i=1}^n\left( \frac{b_i-a_i}M\right)^2} <\epsilon$$
We want to split $F$ into smaller hyper-rectangles so that their diameter is no more than $\frac{\operatorname{diam}(F)}M$, and so that they form a partition of $F$. The most natural way to do that is, as you tried previously, to divide $F$ into a hyper-grid of size $M\times M\times\ldots\times M$ ($n$ times).
It is quite annoying to do so formally, and even more so if you limit yourself to only one index. For now let's see what we can do with multiple indices: consider $n$ integers $i_1,i_2,\ldots,i_n\in\left[\!\left[ 0,M-1 \right]\!\right]$ and define
\begin{align*}R({i_1,\ldots,i_n})&=\left[ a_1+\frac{i_1(b_1-a_1)}M, \ a_1+\frac{(i_1+1)(b_1-a_1)}M \right]\times\ldots\times\left[ a_n+\frac{i_n(b_n-a_n)}M, \ a_n+\frac{(i_n+1)(b_n-a_n)}M \right]\end{align*}
Proof that our hyper-rectangles do not spill outside of $F$ (edit 2)
As requested, here are some details on proving the inclusion
$$\bigcup_{0\le i_1,\ \ldots\ ,i_n\le M-1} R(i_1,\ldots,i_n) \subseteq F$$
Consider one small hyper-rectangle $R(i_1,\ldots,i_n)$ and let $x\in R(i_1,\ldots,i_n)$. For $k$, $1\le k\le n$, we have by definition of the hyper-rectangle
$$a_k+\frac{i_k(b_k-a_k)}M \le x_k \le a_k+\frac{(i_k+1)(b_k-a_k)}M$$
Now remember that $0\le i_k\le M-1$; we deduce the following:
\begin{align*}0\le \frac{i_k}M &\implies a_k\le a_k+\frac{i_k(b_k-a_k)}M\\\frac{i_k+1}M\le 1 &\implies a_k+\frac{(i_k+1)(b_k-a_k)}M \le a_k+ b_k-a_k=b_k\end{align*}
It follows that $a_k\le x_k\le b_k$. This is true for all $1\le k\le n$, therefore $x\in F$, thus $R(i_1,\ldots,i_n)\subseteq F$, and finally
$$\bigcup_{0\le i_1,\ \ldots\ ,i_n\le M-1} R(i_1,\ldots,i_n) \subseteq F$$
Proof that we have enough hyper-rectangles (v2)
We now want to prove the reverse inclusion (which was where your previous approach failed).
Let $x=(x_1,x_2,\ldots,x_n)\in F$. Then $\forall k\in\left[\!\left[ 1,n \right]\!\right]$, $a_k\le x_k\le b_k$. In particular, there is some integer $i_k$ such that
$$a_k+\frac{i_k(b_k-a_k)}M\le x_k \le a_k+\frac{(i_k+1)(b_k-a_k)}M$$
Indeed it suffices to take
\begin{cases}i_k=M-1 & \text{if $x_k=b_k$} \\i_k=\left\lfloor\frac{(x_k-a_k)M}{b_k-a_k}\right\rfloor & \text{if $x_k< b_k$}\end{cases}
Basically, if we exclude the special case "$x_k=b_k$", $i_k$ is just the index of the slice of length $\frac{b_k-a_k}M$ that contains $x_k$. To see why this works, notice that $x_k$ can be written as $x_k=a_k+q\frac{b_k-a_k}M$. Specifically, we can always define the real number $q=M\frac{x_k-a_k}{b_k-a_k}$. Because $a_k\le x_k\le b_k$, we know that $0\le \frac{x_k-a_k}{b_k-a_k}\le 1$, hence $0\le q\le M$. In general $q$ is not an integer and cannot be used "as is" for our indices. We can however use the floor function to get an integer. By definition, we have $\lfloor q\rfloor\le q<\lfloor q\rfloor+1$. If we go back to $x_k$ using these inequalities we obtain
$$a_k+\lfloor q\rfloor\frac{b_k-a_k}M\le a_k+q\frac{b_k-a_k}M=x_k< a_k+(\lfloor q\rfloor+1)\frac{b_k-a_k}M$$
If you compare this to the definition of our hyper-rectangles, it is obvious you can very often use $\lfloor q\rfloor$ as an index. The only exception is when $\lfloor q\rfloor$ is equal to $M$, because our indices must be strictly smaller than $M$. If you backtrack the equations, this can only occur if $q\ge M$, which means $q=M$, which also means $x_k=b_k$. This is however okay, because $b_k$ is covered by our last slice, which has index $M-1$. This asymmetry occurs because when $x_k$ is right at the boundary between $2$ slices, you need to decide whether you put the point in the left slice or the right slice. In this definition we decide to put it in the right slice, and treat the rightmost point (when $x_k=b_k$) as a special case (and put it in the left slice).
With those definitions, we thus have $x\in R(i_1,\ldots, i_n)$. Because this holds for every $x\in F$, we conclude that
$$F\subseteq \bigcup_{0\le i_1,\ \ldots\ ,i_n\le M-1} R(i_1,\ldots,i_n)$$
and we have, as desired,
$$F = \bigcup_{0\le i_1,\ \ldots\ ,i_n\le M-1} R(i_1,\ldots,i_n)$$
With this, your problem is basically solved since the diameter of each small hyper-rectangle is $\frac{\operatorname{diam}(F)}M <\epsilon$...But if you REALLY want to only use one index, there's a little more work to do.
Reducing to one index (probably too detailed bis)
When you have a finite number of indices that can each take a finite number of integer values, there's a well-known "trick" to reduce to only one index. For instance, in the 2D case, assume you have a grid of height $h$ and width $w$. You can refer to any place in the grid with two integers $x$ and $y$, with $0\le x\le w-1$ and $0\le y \le h-1$.
To reduce to one index $i$, you can split the grid into its lines and just stack them one after the other. You then get one very long line, of which you just take the index. You have the correspondence below:
\begin{align}x,y &\quad\rightarrow\quad i= x+y\times w\\i &\quad\rightarrow\quad y=\left\lfloor\frac i w\right\rfloor, \ x=i-y\times w\end{align}
and $i$ can take every value between $0$ and $h\times w-1$ (inclusive) for a total of $h\times w$ distinct values, which is precisely the number of places in our grid.
In the $n$-dimensional case, you want to do the same, except it gets much more annoying. If you have one saving grace here, it's that everything has the same size. Once again consider $n$ integers $i_1,\ldots, i_n$ in between $0$ and $M-1$; we define our unique index as
$$I = \sum_{k=1}^n \left( i_k\times M^{k-1} \right)$$
The values of $I$ then range from $0$ to $M^n-1$ (inclusive) for a total of $M^n$ distinct values, and as many hyper-rectangles as we defined above. For $0\le I\le M^n-1$ you can properly retrieve the $n$ indices:
$$i_k =\left\lfloor \frac{I-\sum_{j>k}\left( i_j\times M^{j-1} \right)}{M^{k-1}}\right\rfloor$$
In summary, you can define $R_I = R(i_1,\ldots,i_n)$ and you get one index instead of $n$ indices.
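Here is a minimal Python sketch of this flattening trick; numpy's ravel_multi_index implements the same idea, with its order argument selecting which index varies fastest:

import numpy as np

M, n = 4, 3            # a grid of size M in each of n dimensions
multi = (2, 0, 3)      # (i_1, i_2, i_3), each between 0 and M-1

# Flatten: I = sum over k of i_k * M**(k-1)  (0-based exponent below)
I = sum(i * M**k for k, i in enumerate(multi))

# Unflatten by repeated divmod (equivalent to the floor formula above)
rebuilt, J = [], I
for _ in range(n):
    J, r = divmod(J, M)
    rebuilt.append(r)
print(I, tuple(rebuilt) == multi)  # 50 True

# numpy's built-in version ('F' order makes the first index vary fastest)
print(np.ravel_multi_index(multi, (M,) * n, order='F'))  # 50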
As a conclusion, you may have a little more work to do if you want the above definitions to perfectly match your problem statement. From my background I'm more used to indexing things from $0$, so my definitions use a $0$-based index, whereas your problem statement uses an index that starts from $1$. Speaking of which, you'd also have to define $m=M^n$.
|
Automorphisms of the Complex Plane
Welcome back to our little series on automorphisms of four (though, for all practical purposes, it's really three) different Riemann surfaces: the unit disc, the upper half plane, the complex plane, and the Riemann sphere. Last time, we proved that the automorphisms of the upper half plane take on a certain form. Today, we'll prove a similar result about automorphisms of the complex plane. Unlike last week's post (which was purely computational), today's proof will require two major results - or "big guns" as my professor used to say - namely the Casorati-Weierstrass theorem and the open mapping theorem.
If you missed the introductory/motivational post for this series, be sure to check it out here!
Theorem: Every automorphism $f$ of the complex plane $\mathbb{C}$ is of the form $f(z)=az+b$ where $a,b\in\mathbb{C}$ and $a\neq 0$. Proof. We'll start with the easy direction first. Note that if $f(z)=az+b$ for $a,b\in \mathbb{C}$ with $a\neq 0$, then $f$ maps from $\mathbb{C}$ to $\mathbb{C}$ and $f^{-1}(z)=\frac{z-b}{a}$. Thus $f$ is indeed an automorphism of $\mathbb{C}$. (Both $f$ and $f^{-1}$ are holomorphic since they're simply linear maps!)
For the other direction, we need a little lemma:
Lemma: Suppose $f$ is an injective, holomorphic function on the punctured unit disc $\Delta^\times:=\{z\in\mathbb{C}:0< |z| < 1 \}$. Then $0$ cannot be an essential singularity of $f$. Proof. Let $z$ be a point in $\Delta^{\times}$ and let $B_1$ and $B_2$ be disjoint open balls containing $0$ and $z$, respectively, and suppose to the contrary that $0$ is an essential singularity of $f$. Then by the Casorati-Weierstrass theorem, $f(B_1\smallsetminus\{0\})$ is dense in $\mathbb{C}$. In particular, since $f(B_2)$ is open by the open mapping theorem, we must have $f(B_1\smallsetminus\{0\})\cap f(B_2)\neq \varnothing$. In other words, there exist nonzero $z_0\in B_1$ and $w_0\in B_2$ such that $f(z_0)=f(w_0)$. Since $f$ is injective, it follows that $z_0=w_0$. This of course contradicts our assumption that $B_1\cap B_2=\varnothing$. $\square$
Now suppose $f$ is an automorphism of $\mathbb{C}$ so that $f$ has a power series expansion $f(z)=a_0+a_1z+a_2z^2+\cdots$ and define $g(z)=f(1/z)=a_0+\frac{a_1}{z}+\frac{a_2}{z^2}+\cdots$. Then $g$ is both holomorphic and injective on the punctured disc $\Delta^\times$ and so by our lemma $g$ cannot have an essential singularity at $0$. Thus there is a nonnegative integer $m$ such that $a_n=0$ for all $n>m$ and so $f$ must be a polynomial of degree $m$. But $f$ is injective by assumption. Hence $f$ must have degree 1, that is, $f(z)=a_0+a_1z$ with $a_1\neq 0$.
$\square$
Next time: We'll prove a similar result about the automorphisms of the Riemann sphere.
|
As per the title, I was wondering if there is a $\operatorname{sinc}$-based interpolation formula for reconstructing a signal in the frequency domain which has been sampled with respect to the bipolar coordinate system?
For instance, we all know the $\operatorname{sinc}$-based Whittaker-Shannon interpolation formula for reconstructing signals from their samples in the time domain:
$$x(t)=\sum_{n=-\infty}^{\infty}x[n]\cdot\operatorname{sinc}\left(\frac{t-n\Delta t}{\Delta t}\right)$$
provided that certain criteria are satisfied. This can quite easily be extended to higher dimensions so that if, for example, $\phi$ is now a function and $\boldsymbol{x}\in\mathbb{R}^n$, then
$$\phi(\boldsymbol{x})=\sum_{k\in\mathbb{Z}^n}\phi(k\Delta\boldsymbol{x})\cdot\operatorname{sinc}\left(\frac{\boldsymbol{x}-k\Delta\boldsymbol{x}}{\Delta\boldsymbol{x}}\right),$$ where we define $\operatorname{sinc}(\boldsymbol{x})=\operatorname{sinc}(x_1,...,x_n)=\operatorname{sinc}(x_1)\cdot...\cdot\operatorname{sinc}(x_n)$.
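As an aside, the one-dimensional formula is easy to test numerically. Here is a small numpy sketch; the signal, sampling rate, and window are arbitrary choices, and truncating the infinite sum introduces a small error near the window edges:

import numpy as np

B = 2.0               # highest frequency present in the test signal (Hz)
dt = 1.0 / (2.5 * B)  # sampling interval, finer than the Nyquist limit 1/(2B)
n = np.arange(-50, 51)
x_n = np.cos(2 * np.pi * B * n * dt) + 0.5 * np.sin(2 * np.pi * 0.7 * n * dt)

def reconstruct(t):
    # Whittaker-Shannon interpolation with the (truncated) sample set above
    return np.sum(x_n * np.sinc((t - n * dt) / dt))

t = 0.123
exact = np.cos(2 * np.pi * B * t) + 0.5 * np.sin(2 * np.pi * 0.7 * t)
print(reconstruct(t), exact)  # agree closely away from the window edges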
One can also find a similar $\operatorname{sinc}$-based interpolation method in the book 'Advanced Topics in Shannon Sampling and Interpolation Theory', which allows one to reconstruct a signal in the frequency domain from samples taken in the polar coordinate system $(\rho, \theta)$:
Let $\phi(x,y)$ be space-limited to $2A$ and $\hat{\phi}(\rho,\theta)$ be angularly band-limited to $K$. Then $\hat{\phi}(\rho,\theta)$ can be reconstructed from its polar samples via
$$\hat{\phi}(\rho,\theta)=\sum_{n=-\infty}^{\infty}\sum_{k=0}^{N-1}\tilde{\hat{\phi}}\left(\frac{n}{2A},\frac{2\pi k}{N}\right)\operatorname{sinc}\left[2A\left(\rho-\frac{n}{2A}\right)\right]\cdot\sigma\left(\theta-\frac{2\pi k}{N}\right),$$ where $N$ is even, $$\sigma(\theta)=\frac{\sin\left[\frac{1}{2}(N-1)\theta\right]}{N\sin\left(\frac{\theta}{2}\right)},$$ and $$\tilde{\hat{\phi}}\left(\frac{n}{2A},\frac{2\pi k}{N}\right)=\begin{cases} \hat{\phi}\left(\frac{n}{2A},\frac{2\pi k}{N}\right),& n\ge 0, \\ \hat{\phi}\left(-\frac{n}{2A},\frac{2\pi k}{N}+\pi\right),& n<0. \end{cases}$$
However, if we take samples in the bipolar coordinate system $(\sigma,\tau)$ and we want to reconstruct $\hat{x}(\sigma,\tau)$ from $\hat{x}(n\Delta\sigma,m\Delta\tau)$, then I can find absolutely no such formula, despite scouring the internet for literature. Now that I mention it, for my intents and purposes it would be more helpful to find such a formula for the coordinate system $(a,\tau)$, where the $\tau$ are our isosurfaces and $(0,-a)$ and $(0,a)$ are their respective foci.
|
As far as intuition goes, maybe this will help. For any two vectors $\bf a$ and $\bf b$, the dot product $\bf a \cdot \hat b$ is the component of $\bf a$ in the direction of $\bf b$. If you "remove" that component from $\bf a$ you would be left with the part of it which is perpendicular to $\bf b$:
$$\begin{align}{\bf a}_{\parallel} &= ({\bf a} \cdot \hat {\bf b}) \hat {\bf b} \\{\bf a}_\perp &= {\bf a} - ({\bf a} \cdot \hat {\bf b}) \hat {\bf b}\end{align}$$
[Note that to do any subtraction, first we need to make that "component" into an actual vector, hence the $\bf \hat b$'s multiplying the dot products in the above.]
Thus we have obtained a decomposition of the original vector $\bf a$ as$$ {\bf a} = {\bf a}_\parallel + {\bf a}_\perp $$
where the two "parts" are vectors that are parallel and perpendicular to $\bf b$, respectively. Now you can see quite easily that your second vector equation is equivalent to $|{\bf a}_\perp| = k$, which is exactly what a cylinder is: a set of points that are a given perpendicular distance from an axis (defined here by the vector $\hat {\bf b}$).
The first one is not that straightforward, but you can see it is equivalent to $|{\bf a}_\parallel| = m|{\bf a}|$ which, if you try to visualize all possible vectors $\bf a$, tells you that when each of them is "joined" to $\bf b$ (tail-to-tail at the origin), all the triangles formed are similar; that is what a cone is.
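A quick numeric check of this decomposition, with arbitrary example vectors:

import numpy as np

a = np.array([3.0, -1.0, 2.0])
b = np.array([1.0, 2.0, 2.0])
b_hat = b / np.linalg.norm(b)

a_par = (a @ b_hat) * b_hat  # component of a along b, as a vector
a_perp = a - a_par           # remainder, perpendicular to b

print(np.allclose(a_par + a_perp, a))  # True: the decomposition is exact
print(np.isclose(a_perp @ b, 0.0))     # True: the residual is perpendicular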
|
Basically, you are right: the reason we do that is that we only care about the real part of the solution (for basic physics problems), and that gives it to us.
Now, you might ask why you can do this. One really important fact about the wave equation is that it is "linear". This means that you can add two solutions to each other and get another solution, and you can multiply a solution by a constant and get another solution. In particular, if\begin{equation} \psi(x, t) = A\, e^{i(k x - \omega t)} \tag{1}\end{equation}is a solution and its complex conjugate $\bar{\psi}$ is too, then you also know that $(\psi + \bar{\psi})/2$ is a solution. That is, the real part alone is a solution.
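If you want to verify this symbolically, here is a small sympy sketch; it assumes the dispersion relation $\omega = ck$ appropriate to the wave equation with speed $c$:

import sympy as sp

x, t = sp.symbols('x t', real=True)
k, c = sp.symbols('k c', positive=True)
omega = c * k  # dispersion relation

psi = sp.exp(sp.I * (k * x - omega * t))
print(sp.simplify(sp.diff(psi, t, 2) - c**2 * sp.diff(psi, x, 2)))  # 0

# Linearity: the real part (psi + conjugate(psi))/2 is then also a solution
psi_re = sp.re(psi.rewrite(sp.cos))  # cos(k*x - c*k*t)
print(sp.simplify(sp.diff(psi_re, t, 2) - c**2 * sp.diff(psi_re, x, 2)))  # 0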
So that's why you can do it, but there are also pedagogical reasons you should do it. By introducing these exponential solutions now in a familiar setting, you can start to get good at the mathematics, and that'll get even handier later. Here are just a few reasons off the top of my head:
1) It's easier to manipulate complex exponentials than to manipulate trig functions. High school hotshots may care about trig identities, but nobody else does. They're annoying and dumb, and you should forget all about them. What you should get good at is exponentials. Just a few simple rules, and you can rederive any old trig identity you want, and then some.
2) It's a good introduction to the ideas of Fourier methods. It's true that you can use Fourier methods with sines and cosines, but they're just much prettier with complex exponentials. And they're extremely powerful. For example, you can show that (subject to some basic assumptions), any solution to the wave equation can be expressed as a combination of solutions like equation (1), with various values for $A$ and $\omega$ (hence also $k$). More generally, you'll frequently see Fourier methods as nice ways of understanding other differential equations.
3) There are other physical systems where you actually do care about both the real and imaginary parts of the solution. For example, when you talk about gravitational waves, you find that they have two components. If you add the first component to $i$ times the second component, they satisfy the wave equation, and the solution is proportional to $e^{i(kx-\omega t)}$ — not just its real part. And of course, you'll also find many examples in quantum physics. Quantum fields are inherently complex, so again you can find solutions like $e^{i(kx-\omega t)}$. If you want to get really fancy, geometric algebra is full of examples of systems where the solution looks like this, except that $i$ is replaced by geometric objects that have the familiar algebraic property that they square to $-1$.
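As a quick sanity check of the linearity argument above, here is a small symbolic sketch (sympy, with the dispersion relation $\omega = ck$ assumed):

```python
import sympy as sp

x, t, A, k, c = sp.symbols('x t A k c', real=True, positive=True)
omega = c * k                                  # dispersion relation for the wave equation
psi = A * sp.exp(sp.I * (k * x - omega * t))   # complex exponential solution

for u in (psi, sp.re(psi)):                    # the full solution, then its real part
    residual = sp.diff(u, t, 2) - c**2 * sp.diff(u, x, 2)
    print(sp.simplify(residual))               # 0 both times: each solves the wave equation
```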
|
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}=7$ TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{int}$ = 5.6 nb$^{-1}$. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV down to $p_t$ = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
|
Proof by contradiction.
Let us assume that $\{v_1, \dots, v_n\}$ are linearly independent, $\{v_1+w, \dots, v_n+w\}$ are linearly dependent, and that $w \not\in \mathrm{span}\{v_1, \dots, v_n\}$. Because $\{v_1 + w, \dots, v_n + w\}$ are linearly dependent, there are constants $\beta_1, \dots, \beta_n \in \mathbb{F}$, not all zero, such that$$\beta_1(v_1+w)+\dots+\beta_n(v_n+w)=\mathbf{0},$$in other words,$$ \beta_1 v_1 + \dots + \beta_n v_n + w(\beta_1 + \dots + \beta_n) = \mathbf{0}.$$
There are two situations: 1) $\beta_1 + \dots + \beta_n \not= 0$; and 2) $\beta_1 + \dots + \beta_n=0$.
If $\beta_1 + \dots + \beta_n \not= 0$, then we get$$ w = \frac{-\beta_1}{\beta_1+\dots+\beta_n}v_1 + \dots + \frac{-\beta_n}{\beta_1 + \dots + \beta_n} v_n$$and thus $w \in \mathrm{span}\{v_1,\dots,v_n\}$ which contradicts the assumption that $w \not\in\mathrm{span}\{v_1,\dots,v_n\}$.
If $\beta_1 + \dots + \beta_n = 0$, then we have the equality$$ \beta_1 v_1 + \dots + \beta_n v_n = \mathbf{0},$$which contradicts the assumption that $\{v_1, \dots, v_n\}$ are linearly independent, because at least one of $\beta_1, \dots, \beta_n$ is non-zero.
|
In what circumstances should one consider using regularization methods (ridge, lasso or least angles regression) instead of OLS?
In case this helps steer the discussion, my main interest is improving predictive accuracy.
Short answer: whenever you are facing one of these situations: a large number of (possibly correlated) predictors, more variables than observations, or the need for a sparse, parsimonious model.
Ridge regression generally yields better predictions than OLS solution, through a better compromise between bias and variance. Its main drawback is that all predictors are kept in the model, so it is not very interesting if you seek a parsimonious model or want to apply some kind of feature selection.
To achieve sparsity, the lasso is more appropriate, but it will not necessarily yield good results in the presence of high collinearity (it has been observed that if predictors are highly correlated, the prediction performance of the lasso is dominated by ridge regression). The second problem with the L1 penalty is that the lasso solution is not uniquely determined when the number of variables is greater than the number of subjects (this is not the case for ridge regression). The last drawback of the lasso is that it tends to select only one variable among a group of predictors with high pairwise correlations. In this case, there are alternative solutions like the group lasso (i.e., shrinkage is applied to blocks of covariates, so that some blocks of regression coefficients are exactly zero) or the fused lasso. The graphical lasso also offers promising features for GGMs (see the R glasso package).
But, definitely, the elastic net criterion, a combination of L1 and L2 penalties, achieves both shrinkage and automatic variable selection, and it allows one to keep $m>n$ variables in the case where $n\ll p$. Following Zou and Hastie (2005), it is defined as the argument that minimizes (over $\beta$)
$$ L(\lambda_1,\lambda_2,\beta) = \|Y-X\beta\|^2 + \lambda_2\|\beta\|^2 + \lambda_1\|\beta\|_1 $$
where $\|\beta\|^2=\sum_{j=1}^p\beta_j^2$ and $\|\beta\|_1=\sum_{j=1}^p|\beta_j|$.
The lasso can be computed with an algorithm based on coordinate descent, as described in the paper by Friedman and coll., Regularization Paths for Generalized Linear Models via Coordinate Descent (JSS, 2010), or with the LARS algorithm. In R, the penalized, lars (or biglars), and glmnet packages are useful; in Python, there's the scikit-learn toolkit, with extensive documentation on the algorithms used to apply all three kinds of regularization scheme.
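For illustration, here is a minimal scikit-learn sketch fitting all three estimators on synthetic correlated data (all data-generation parameters are arbitrary placeholders):

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet

rng = np.random.default_rng(0)
n, p = 50, 100                                  # fewer observations than predictors
X = rng.normal(size=(n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=n)   # two highly correlated columns
beta = np.zeros(p); beta[:5] = 2.0              # sparse ground truth
y = X @ beta + rng.normal(size=n)

for model in (Ridge(alpha=1.0),
              Lasso(alpha=0.1, max_iter=10000),
              ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=10000)):
    model.fit(X, y)
    nonzero = np.sum(np.abs(model.coef_) > 1e-8)
    print(type(model).__name__, "non-zero coefficients:", nonzero)
# Ridge keeps all p coefficients; Lasso and ElasticNet zero most of them out.
```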
As for general references, the Lasso page contains most of what is needed to get started with lasso regression and technical details about L1-penalty, and this related question features essential references, When should I use lasso vs ridge?
A theoretical justification for the use of ridge regression is that its solution is the posterior mean given a normal prior on the coefficients. That is, if you care about squared error and you believe in a normal prior, the ridge estimates are optimal.
Similarly, the lasso estimate is the posterior mode under a double-exponential prior on your coefficients. This is optimal under a zero-one loss function.
In practice, these techniques typically improve predictive accuracy in situations where you have many correlated variables and not a lot of data. While the OLS estimator is best linear unbiased, it has high variance in these situations. If you look at the bias-variance trade-off, prediction accuracy improves because the small increase in bias is more than offset by the large reduction in variance.
|
In order to study the behavior of an RC circuit, I connected a resistor and a capacitor to an Arduino's I/O as shown:
The Arduino digital output feeds the circuit with a square pulse of 2 s period (one second HIGH, one second LOW).
For a charge time of 1 s: $$V_c = E(1-e^{-\dfrac{t}{\tau}}) = E(1-e^{-\dfrac{1}{0.83}})=0.7E$$
where $E$ is the power supply voltage. Converting the $E$ value to the 10-bit ADC range: $$V_c = 0.7 \times 1024 \approx 717$$
Now, this is the graph I take from the analog input:
whose minimum value is 237 (0.23E) and whose maximum value is 784 (0.76E).
Assuming that the capacitor's value may differ a little, I may accept that 0.70E ≈ 0.76E. But in that case, shouldn't $V_c$ start from zero?
Assuming that the capacitor starts half-charged, shouldn't max − min = 0.7E hold in any case?
(Before initiating, I discharged the capacitor connecting it with a resistor for several seconds.)
Any thoughts would be appreciated.
EDIT: Using several values of charge time, every time the graph seems to be positioned in the middle, meaning $(V_c(\text{min})+V_c(\text{max}))/2 = E/2$.
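For what it's worth, here is a sketch of the periodic steady state that these numbers are consistent with (my own back-of-the-envelope, assuming the square wave has been running for many periods with HIGH and LOW each lasting $t_1 = 1$ s). Charging gives $V_{max} = E + (V_{min}-E)e^{-t_1/\tau}$ and discharging gives $V_{min} = V_{max}\,e^{-t_1/\tau}$, which solve to
$$V_{max} = \frac{E}{1+e^{-t_1/\tau}}, \qquad V_{min} = \frac{E\,e^{-t_1/\tau}}{1+e^{-t_1/\tau}}, \qquad V_{max}+V_{min}=E.$$
With $\tau = 0.83$ s this gives $V_{max}\approx 0.77E \approx 788$ and $V_{min}\approx 0.23E \approx 236$, close to the measured 784 and 237, and centred on $E/2$ as observed.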
|
The Hermite polynomial is the one that interpolates a set of points and the values of the derivatives at whichever points we want. That is, suppose that we are given $(x_k,f_k)$ and $(x_k,f'_k)$.
Then we construct the same table as in Newton's method, placing the $x_k$ in the first column and writing each point twice if we know the value of the derivative at that point; in the second column go the values of $f$ corresponding to the $x$ of the same line. Namely, if we know the value of $f$ at $x_0$ and of its derivative, we write $x_0$ twice and next to both we write $f_0$. For example,
$$\begin{array}{cc} x_0 & f_0 \\ x_0 & f_0 \\ x_1 & f_1 \\ x_1 & f_1 \end{array}$$
From here on we proceed the same way, but with the difference that we have to define $f[x_i,x_i]=f'_i$, the value of the derivative at $x_i$.
$$\begin{array}{ccccc}
x_0 & f_0 & & & \\
 & & f'_0 & & \\
x_0 & f_0 & & f[x_0,x_0,x_1] & \\
 & & f[x_0,x_1] & & f[x_0,x_0,x_1,x_1] \\
x_1 & f_1 & & f[x_0,x_1,x_1] & \\
 & & f'_1 & & \\
x_1 & f_1 & & &
\end{array}$$
Therefore, if we have $n+1$ values of the function and $n+1$ values of the derivative, the Hermite polynomial will have degree $2n+1$.
Let's consider an example:
Let's suppose that we want to calculate $f\Big(\dfrac{1}{8}\Big)$ where $f(x)=\tan(\pi x)$, by Hermite interpolation at $0$ and $\dfrac{1}{4}$.
To obtain the result, we draw a table as in Newton's interpolation, repeating every point whose derivative we know. This is:
$$\begin{array}{ccccc}
0 & 0 & & & \\
 & & f'(0)=\pi & & \\
0 & 0 & & \dfrac{4-\pi}{\frac{1}{4}-0}=16-4\pi & \\
 & & \dfrac{1-0}{\frac{1}{4}-0}=4 & & \dfrac{(8\pi-16)-(16-4\pi)}{\frac{1}{4}-0}=48\pi-128 \\
\dfrac{1}{4} & 1 & & \dfrac{2\pi-4}{\frac{1}{4}-0}=8\pi-16 & \\
 & & f'\Big(\dfrac{1}{4}\Big)=2\pi & & \\
\dfrac{1}{4} & 1 & & &
\end{array}$$
Proceeding as with Newton's interpolation, we get: $$P_3(x)= \pi x +(16-4\pi)x^2+ (48\pi-128)\,x^2\Big( x-\dfrac{1}{4}\Big)$$
Now, $$\tan\Big(\dfrac{\pi}{8}\Big)\approx P_3\Big(\dfrac{1}{8}\Big)=0.4018\dots$$
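For what it's worth, here is a minimal Python sketch of the same construction (plain divided differences with each node doubled; the function names are my own):

```python
import numpy as np

def hermite_coeffs(x, f, df):
    """Newton coefficients of the Hermite polynomial.
    x: distinct nodes, f: values, df: derivatives at those nodes."""
    z = np.repeat(np.asarray(x, dtype=float), 2)   # each node written twice
    m = len(z)
    q = np.zeros((m, m))
    q[:, 0] = np.repeat(f, 2)
    for i in range(1, m):
        if z[i] == z[i - 1]:
            q[i, 1] = df[i // 2]                   # f[x_i, x_i] = f'(x_i)
        else:
            q[i, 1] = (q[i, 0] - q[i - 1, 0]) / (z[i] - z[i - 1])
    for j in range(2, m):
        for i in range(j, m):
            q[i, j] = (q[i, j - 1] - q[i - 1, j - 1]) / (z[i] - z[i - j])
    return z, q.diagonal().copy()

def newton_eval(z, c, t):
    """Horner-style evaluation of the Newton-form polynomial."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (t - z[k]) + c[k]
    return p

x = [0.0, 0.25]
f = [np.tan(np.pi * v) for v in x]                 # 0 and 1
df = [np.pi / np.cos(np.pi * v) ** 2 for v in x]   # pi and 2*pi
z, c = hermite_coeffs(x, f, df)
print(newton_eval(z, c, 0.125))   # ~0.4018, vs tan(pi/8) ~ 0.4142
```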
|
L^2 and intersection cohomologies for the reductive representation of the fundamental groups of quasiprojective manifolds with unipotent local monodromy
Xuanming Ye, Kang Zuo
Let X be a projective manifold and D a normal crossing divisor of X. By a theorem of Jost-Zuo, if we have a reductive representation \rho of the fundamental group \pi_{1}(X^{*}) with unipotent local monodromy, where X^*=X-D, then there exists a tame pluriharmonic metric h on the flat bundle \mathcal V associated to the local system \mathbb V obtained from \rho over X^*. Therefore, we get a harmonic bundle (E, \theta, h), where \theta is the Higgs field, i.e. a holomorphic section of End(E)\otimes\Omega^{1,0}_{X^*} satisfying \theta^2=0.
In this paper, we study the harmonic bundle (E,\theta,h) over X^*. We are going to prove that the intersection cohomology IH^{k}(X; \mathbb V) is isomorphic to the L^{2}-cohomology H^{k}(X, (\mathcal A_{(2)}^{\cdot}(X,\mathcal V), \mathbb D)).
|
Gauss Law
The Gauss law is a very convenient tool to find the electric field of a system of charges. It is especially useful when symmetry can be easily exploited. The electric flux \(d\phi\) through a differential area is the dot product of the electric field and the area vector (this vector is normal to the area element and, for a closed surface, points outward; its magnitude is the area of the element):
\(d\phi \,=\,\overrightarrow{E}\cdot\overrightarrow{dA}\,=\,E\,dA\,\cos \theta \),
where \(\theta\) is the angle between the electric field and the area vector. The net flux through a surface is simply the integral of the differential flux \(d\phi\):
\(\phi \,=\,\int{d\phi }\,=\,\int{\overrightarrow{E}\cdot\overrightarrow{dA}}\).
For a closed surface such as that of a sphere, torus or cube, this integral is written as a loop integral: \(\phi \,=\,\oint{\overrightarrow{E}\cdot\overrightarrow{dA}}\).
Gauss Theorem: The net flux through a closed surface is directly proportional to the net charge in the volume enclosed by the closed surface.
\(\phi \,=\,\oint{\overrightarrow{E}\cdot\overrightarrow{dA}}\,=\,\dfrac{q_{net}}{\varepsilon_{0}}\).
In simple words, the Gauss law relates the ‘flow’ of electric field lines (flux) to the charges within the enclosed surface. If there are no charges enclosed by a surface, then the net electric flux remains zero. This means that the number of electric field lines entering the surface is equal to the field lines leaving the surface.
The electric flux from any closed surface is only due to the sources (positive charges) and sinks (negative charges) of electric fields enclosed by the surface. Any charges outside the surface do not contribute to the electric flux. Also, only electric charges can act as sources or sinks of electric fields; changing magnetic fields, for example, cannot.
The net flux for the surface on the left is non-zero, as it encloses a net charge. The net flux for the surface on the right is zero, since it does not enclose any charge. Note that the Gauss law is only a restatement of Coulomb's law: if you apply the Gauss law to a point charge enclosed by a sphere, you will get back Coulomb's law easily.
Application to an infinitely long line of charge: Consider an infinitely long line of charge with charge per unit length λ. We can take advantage of the cylindrical symmetry of this situation. By symmetry, the electric field points radially away from the line of charge; there is no component parallel to the line of charge.
We can use a cylinder (with any arbitrary radius r and length l) centred on the line of charge as our Gaussian surface.
As you can see in the above diagram, the electric field is perpendicular to the curved surface of the cylinder (hence parallel to its area vector). Thus the angle between the electric field and the area vector is zero and cos θ = 1. The top and bottom surfaces of the cylinder lie parallel to the electric field, so the angle between their area vectors and the field is 90 degrees and cos θ = 0.
Thus the electric flux is only due to the curved surface:
\(\phi \,=\,\oint{\overrightarrow{E}\cdot\overrightarrow{dA}}\),
\(\phi \,=\,{\phi}_{curved}\,+\,{\phi}_{top}\,+\,{\phi}_{bottom}\),
\(\phi \,=\,\int{E\,dA\,\cos 0}\,+\,\int{E\,dA\,\cos {{90}^{\circ}}}\,+\,\int{E\,dA\,\cos {{90}^{\circ}}}\),
\(\phi \,=\,\int{E\,dA}\).
Due to radial symmetry, the curved surface is equidistant from the line of charge and the electric field in the surface has a constant magnitude throughout.
\(\phi \,=\,\int{E\,dA}\,=\,E\int{dA}\,=\,E\cdot 2\pi r l\).
The net charge enclosed by the surface is:
\(q_{net}\,=\,\lambda\, l\).
Using Gauss theorem,
\(\phi \,=\,E\cdot 2\pi rl\,=\,\dfrac{q_{net}}{\varepsilon_{0}}\,=\,\dfrac{\lambda\, l}{\varepsilon_{0}}\),
\(E\cdot 2\pi rl\,=\,\dfrac{\lambda\, l}{\varepsilon_{0}}\),
\(E\,=\,\dfrac{1}{2\pi {\varepsilon_{0}}}\times \dfrac{\lambda }{r}\).
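As a quick numerical sanity check (the values of λ, r, and the line length below are arbitrary placeholders), one can sum the Coulomb contributions of a long-but-finite line and compare with the Gauss-law result:

```python
import numpy as np

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
lam  = 1e-9               # linear charge density, C/m (placeholder)
r    = 0.05               # distance from the line, m (placeholder)
L    = 100.0              # a 100 m line approximates an infinite one at r = 5 cm

z  = np.linspace(-L / 2, L / 2, 200_001)
dz = z[1] - z[0]
# radial component of each element's Coulomb field at distance r
dE = lam * dz / (4 * np.pi * eps0) * r / (r**2 + z**2) ** 1.5

print(dE.sum())                      # direct summation
print(lam / (2 * np.pi * eps0 * r))  # Gauss-law formula: both ~359.5 V/m
```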
|
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s?
@daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format).
@JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems....
well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty...
Consider the following MWE, to be previewed in the built-in PDF previewer in Firefox: \documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d...
@Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure.
@JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now
@yo' that's not the issue. with the laptop I lose access to the company network and anything I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first
@yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the contents of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts.
@JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing.
@Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work
@Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable.
@Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time.
@Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things
@Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)]
@JosephWright I'm just exploring things myself “for fun”. I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :)
@Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!)
@JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand.
@JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series
@JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code.
@PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
|
There are many categories of windows, e.g., rectangular, Gaussian, and triangular. What are their effects on STFT?
A window $w[n]$ truncates and weights the input signal $x[n]$ to prepare it for subsequent spectral analysis. A window's effect on the input signal's true spectrum $X(e^{j\omega})$ is mathematically described by the convolution of the window's Fourier transform $W(e^{j\omega})$ with the signal's true spectrum $X(e^{j\omega})$:
$$V(e^{j\omega}) = \frac{1}{2\pi} \int_{-\pi}^{\pi} W(e^{j\theta})X(e^{j (\omega -\theta)}) d\theta $$
From this convolution, two main effects are observed on $X(e^{j\omega})$:
1- A smoothing (smearing) of $X(e^{j\omega})$ due to the main-lobe width of the window's Fourier transform $W(e^{j\omega})$, which results in a loss of spectral resolution in $V(e^{j\omega})$.
2- A leakage due to the peak side lobe of $W(e^{j\omega})$, which results in the loss of weak signal components that are buried below the leakage from nearby strong frequencies.
The main-lobe width of any window type is primarily determined by its length. Increasing the length of any window will therefore decrease its main-lobe width (hence increase its spectral resolution capability).
A rectangular window has the narrowest main lobe and the highest peak side lobe of all windows; the remaining window types trade off main-lobe width against peak side-lobe level.
The peak side-lobe level is primarily determined by the window's shape. So by changing the shape (type) of the window you adjust it.
The rectangular window is what we get when we simply truncate the data, while the other windows provide some data weighting. In terms of their effects on the frequency spectra, the advantage of using a window other than rectangular is lower side lobes. The disadvantage is a loss in frequency resolution, from $\Delta \omega = 4\pi / N$ for the rectangular window to $\Delta \omega = 8\pi / N$ and $16\pi / N$ for the Gaussian and triangular windows respectively.
In addition to what's already been said, using a rectangular window also results in the minimum possible noise floor, which is desirable in some applications.
With regard to obtaining the best amplitude/magnitude estimates, you should consider using one of the flat-top windows, which are designed specifically for this purpose. However, they have a wide main lobe, and thus poor frequency resolution, so they're not suitable for signals where you have sinusoids that are close in frequency.
Anyway, see here http://zone.ni.com/reference/en-XX/help/371361H-01/lvanlsconcepts/char_smoothing_windows/ or here https://en.wikipedia.org/wiki/Window_function if you're interested in the theory.
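To see the main-lobe/side-lobe trade-off concretely, here is a minimal numpy sketch comparing the spectra of a rectangular and a triangular window (window length and FFT size are arbitrary; printed levels are approximate):

```python
import numpy as np

N, nfft = 64, 8192
for name, w in (("rectangular", np.ones(N)), ("triangular", np.bartlett(N))):
    mag = np.abs(np.fft.rfft(w, nfft))               # zero-padded window spectrum
    db = 20 * np.log10(mag / mag.max() + 1e-12)
    # walk down the main lobe to its first null, then report the highest side lobe
    i = 1
    while i < len(db) - 1 and db[i + 1] <= db[i]:
        i += 1
    print(f"{name}: first null at bin {i}, peak side lobe {db[i:].max():.1f} dB")
# rectangular: narrow main lobe, side lobes near -13 dB
# triangular:  main lobe twice as wide, side lobes near -27 dB
```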
[EDIT 2017-11-10: added details on the use of inverses] Their first effect, in the time domain, is to localize, or (weakly) stationarize, the data, as a preprocessing step before applying the FFT.
Then, on the analysis side, their frequency effect is the same as when they are used for an FFT, as detailed in the other answers.
Last, in the synthesis side, a different window can be used when recovering a signal from a selection of chunks in the time-frequency domain.
This is used in practice, for instance in image compression. One wavelet/window type is used for the analysis or image decomposition, better at compacting information. Then this information is quantized, and another wavelet/window is used for decompression: it is smoother, and attenuates, visually, the quantization artifacts. Here the whole transform is not redundant, and this is called biorthogonality.
In the redundant setting, certain analysis windows admit a closed-form inverse with the same window, but this is not always the case, as you can see from the following picture given in Duality for Frames, 2016, with the analysis window on the left and the synthesis one on the right.
|
Is there a simple description of a Chow ring of a blow-up of a point on a smooth projective variety? Or at least of successive blow-ups of $\mathbb{P}^n$?
Maybe something like $A(\tilde{X})=f^*(A(X))\oplus\mathbb{Z}(E)$, where $f\colon\tilde{X}\to{}X$ is a blow-up, E is an exceptional divisor, with multiplication given by $E\cdot{}E_k=-E_{k-1}$, $E_0=f^*(P)$, where $E_k{}$ is a k-dimensional linear subspace of an exceptional divisor $E(=E_{n-1})$, and $P$ is a point we are blowing up. What I'm suggesting is true for surfaces (exercise 6.5 in appendix A of Hartshorne), and seems geometrically plausible in the case $X=\mathbb{P}^n$. Also, it'd be great to know what cycles are effective. I'm afraid all this is really trivial for someone understanding Fulton's book, but I'm not at that level yet.
The general formula about the intersection ring of blow-ups is discussed in Fulton's book. In your case you want to study the intersection ring of a smooth algebraic variety $V$ blown up at a point $Z$. There is a simple formula for this situation by Keel. You can find it in his paper: Intersection Theory of Moduli Space of Stable N-Pointed Curves of Genus Zero.
Another nice reference is the paper "A compactification of configuration spaces" by Fulton-MacPherson. In section 5 of this paper they mention Keel's formula and state the facts needed in the computation of the Chow ring. I summarize it below.
The key fact is that the restriction map from the Chow ring of the variety $V$ to the Chow ring of the point $Z$ is surjective. The intersection ring of the blow-up $\widetilde{V}$ is generated over $A(V)$ by the class of the exceptional divisor $E$ with the ideal $I$ of relations described below:
1) Let $J_{Z/V}$ be the kernel of the restriction map from $A(V)$ to $A(Z)$. It contains all elements in $A(V)$ of positive degree, for example.
2) Assume that you can write $Z$ as a transversal intersection $\cap_{i=1}^r D_i$ of the divisor classes $D_i$. Define the polynomial $P_{Z/V} \in A(V)[t]$ by the rule $P(t)=\prod_{i=1}^r(t+D_i)$. This polynomial is called a Chern polynomial of $Z$. It depends on the choice of the divisor classes $D_i$. It means that it is not unique and is determined up to an element in $J_{Z/V}$.
The ideal $I$ is generated by $J_{Z/V}\cdot E$ and $P_{Z/V}(-E)$. The Chow ring of $\widetilde{V}$ is therefore equal to $\frac{A(V)[E]}{I}$.
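As a sanity check, here is the recipe worked out in the simplest case (my own example, not taken from Keel's paper): blow up a point $Z$ in $V=\mathbb{P}^2$. Here $A(V)=\mathbb{Z}[H]/(H^3)$ with $H$ the hyperplane class, and $Z$ is the transversal intersection of two lines, so we may take $P_{Z/V}(t)=(t+H)^2$. The kernel $J_{Z/V}$ is generated by $H$, so the relations are $H\cdot E=0$ and $P_{Z/V}(-E)=(H-E)^2=H^2-2HE+E^2=0$, i.e. $E^2=-H^2$. Thus $A(\widetilde{V})=\mathbb{Z}[H,E]/(H^3,\,HE,\,E^2+H^2)$, recovering the familiar fact that the exceptional curve has self-intersection $-1$ (since $H^2$ is the class of a point).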
|
Exercise:Suppose that $a_k \geq 0$ for $k$ large and that $\sum_{k = 1}^{\infty} \frac{a_k}{k}$ converges.
Prove that $$\lim_{j \to \infty}\sum_{k = 1}^{\infty} \frac{a_k}{j+k} = 0$$
Attempted proof:
Suppose $a_k \geq 0$ for all large $k$ and that $\sum_{k = 1}^{\infty} \frac{a_k}{k}$ converges. Then given $\epsilon >0$ there is $N \in \mathbb{N}$ such that $\left|\sum_{k = n}^{\infty} \frac{a_k}{k}\right| < \epsilon$ for all $n \geq N$.
Let $s_n = \sum_{k=1}^{n} \frac{a_k}{j+k}$ denote the partial sums.
Then $\sum_{k = 1}^{\infty} \frac{a_k}{j+k}$ will converge to zero if and only if its partial sums converge to zero as $n$ approaches infinity.
Taking the limit, $$\lim_{j \to \infty}\sum_{k = 1}^{\infty} \frac{a_k}{j+k} = \lim_{j \to \infty}\left(\frac{a_1}{j + 1} + \cdots+ \frac{a_n}{j + n} + \cdots\right) = \lim_{j \to \infty}\left(\frac{a_1/j}{1 + 1/j} + \cdots+ \frac{a_n/j}{1 + n/j} +\cdots\right)$$
Can someone please help me finish? I don't know if this is the right way. Any help/hint/suggestion will be really appreciated. Thank you in advance.
|
I know the following generalization of Borwein-Preiss's Variational Principle (BPVP), known as Loewen-Wang's Variational Principle (LWVP):
$\textbf{Loewen- Wang Variational Principle}$
Let $f:X\to \mathbb{\bar{R}}$ be proper, l.s.c and bounded below. Let $\epsilon >0$ and consider a point $\bar{x}$ such that ${f(\bar{x})\leq \inf_X f +\epsilon}.$ Let $\rho:X\to \mathbb{R}$ be continuous and such that
$$\sup\{\|x\|: \rho(x)>1\}< +\infty.$$ Then, for any decreasing sequence $\{\mu_n\}\subseteq (0,1)$ such that
$$\sum_{n=0}^\infty \mu_n <+\infty$$ there exists a sequence $\{z_n\}\subseteq X$ convergent to some $z\in X$ such that
$\rho(z-\bar{x})<1$, and $z$ is a strong minimizer of the function
$$f(x)+ \epsilon\sum_{n=0}^\infty \mu_n \rho((n+1)(x-z_n)).$$
In the proof of this result they don't use the condition $$\sum_{n=0}^\infty \mu_n <+\infty,$$ so here is my first question:
Question 1: Why do we need this condition? Is it for somehow guarantee that $$\sum_{n=0}^\infty \mu_n \rho((n+1)(x-z_n))< +\infty ?$$
Now, I tried to deduce BPVP from the previous statement.
$\textbf{Borwein - Preiss's Variational Principle}$
Let $f:X\to \mathbb{\bar{R}}$ be proper, l.s.c and bounded below. Let $p\geq 1,\epsilon >0$ and consider a point $\bar{x}$ such that ${f(\bar{x})\leq \inf_X f +\epsilon}.$ Then, for each $\lambda >0$ there exists a sequence $\{\nu_n\}\subseteq (0,1)$ such that
$$\sum_{n=0}^\infty \nu_n =1,$$ and there exists a sequence $\{z_n\}\subseteq X$ convergent to some $z\in X$ such that
$\|z-\bar{x}\|\leq \lambda$, and $z$ is a strong minimizer of the function
$$f(x)+ \frac{\epsilon}{\lambda ^p}\sum_{n=0}^\infty \nu_n \|(x-z_n)\|^p.$$
Here is my proof of this statement: In order to apply LWVP, we take
$$\rho(x)=\frac{1}{\lambda^p}\|x\|^p,\;\mu_n=\frac{1}{2^{n+1}(n+1)^p}.$$
It is easy to see that all the conditions of LWVP are satisfied and then it follows the existence of $z_n$ convergent to some $z$ such that $z$ is a strong minimizer of $$f(x)+ \epsilon\sum_{n=0}^\infty \mu_n \rho((n+1)(x-z_n))=$$ $$f(x)+ \frac{\epsilon}{\lambda^p}\sum_{n=0}^\infty \frac{1}{2^{n+1}(n+1)^p} \|(n+1)(x-z_n)\|^p= $$
$$f(x)+ \frac{\epsilon}{\lambda ^p}\sum_{n=0}^\infty \nu_n \|(x-z_n)\|^p,$$ with $$\nu_n= \frac{1}{2^{n+1}}.$$ $\Box$
Now, in the book I am reading (Schirotzek, Nonsmooth Analysis), instead of taking $$\mu_n=\frac{1}{2^{n+1}(n+1)^p},$$ they take $$\mu_n=\frac{1}{2^{n+1}(n+1)\sigma},$$ where $$\sigma= \sum_{n=0}^\infty \frac{(n+1)^{p-1}}{2^{n+1}} $$ and proceed as I did. Now, although I find this last proof to be correct, my question is
Question 2: Is my reasoning correct, so that in BPVP we can remove the assumption of the existence of $\{\nu_n\}$ since it can be chosen to be $\nu_n=\frac{1}{2^{n+1}} ?$
I have searched other papers and all of them state the theorem without recognizing this, which makes me think that I am wrong somewhere. Any help is appreciated. Thanks in advance to all.
|
How to solve this logarithmic equation? $8n^2 = 64n\log n$ ($\log n$ here is base 2). I have tried to reduce it to $n-8\log n = 0$, but how do I solve the latter?
This doesn't have any solutions using elementary functions. But, using the Lambert W function, we get: $$n = -\frac {8}{\ln 2} \operatorname{W} \left (-\frac {\ln 2}{8} \right)$$ and $$n = -\frac {8}{\ln 2} \operatorname{W}_{-1} \left (-\frac {\ln 2}{8} \right)$$
If you cannot use the Lambert function, consider that you look for the zeros of $$f(x)=x-8\log_2(x)$$ The first derivative $$f'(x)=1-\frac{8}{x \log (2)}$$ vanishes at $x_*=\frac{8}{\log (2)}$ and $$f(x_*)=\frac{8-8 \log \left(\frac{8}{\log (2)}\right)}{\log (2)}\approx -16.6886$$ The second derivative test shows that this corresponds to a minimum. So, there are two roots to the equation.
If you plot the function, you will see that the roots are close to $1$ and $40$. So, start Newton method and below are given the iterates $$\left( \begin{array}{cc} n & x_n \\ 0 & 1.000000000 \\ 1 & 1.094862617 \\ 2 & 1.099983771 \\ 3 & 1.099997030 \end{array} \right)$$ $$\left( \begin{array}{cc} n & x_n \\ 0 & 40.00000000 \\ 1 & 43.61991000 \\ 2 & 43.55927562 \\ 3 & 43.55926044 \end{array} \right)$$
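Both answers are easy to check numerically; here is a small sketch using scipy's lambertw for the closed form, plus the same Newton iteration with the starting points above:

```python
import numpy as np
from scipy.special import lambertw

a = -np.log(2) / 8
for branch in (0, -1):                      # principal and lower branch of W
    print(np.real(-8 / np.log(2) * lambertw(a, branch)))
# -> 1.0999... and 43.5592...

f  = lambda x: x - 8 * np.log2(x)           # the reduced equation
fp = lambda x: 1 - 8 / (x * np.log(2))      # its derivative
for x in (1.0, 40.0):                       # the two starting points used above
    for _ in range(6):
        x -= f(x) / fp(x)                   # Newton step
    print(x)
```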
|
This question already has an answer here:
What are common lower bounds for ${{2n} \choose {n}}$?
Edit: I made a mistake in my original question.
It doesn't change my question but there is no reason for me to include the mistake.
Using the bounds from this answer, we have $$ \frac{4^n}{\sqrt{\pi\left(n+\frac13\right)}}\le\binom{2n}{n}\le\frac{4^n}{\sqrt{\pi\left(n+\frac14\right)}} $$
Using Stirling's bound
$$\sqrt{2\pi}\ n^{n+\frac12}e^{-n} \le n! \le e\ n^{n+\frac12}e^{-n}$$
we obtain
$$\binom{2n}{n}=\frac{(2n)!}{n!^2}\ge\frac{\sqrt{2\pi}\ (2n)^{2n+\frac12}e^{-2n}}{e^2\ n^{2n+1}e^{-2n}}=\frac{\sqrt{2\pi}\ 2^{2n}2^{\frac12}n^{2n+\frac12}}{e^2\ n^{2n+1}}=\frac{2\sqrt{\pi}}{e^2}\frac{4^n}{\sqrt n}$$
Erdös had a really nice lower bound, which he used in his proof of Bertrand's Postulate: $4^n/2n \leqslant {{2n}\choose{n}}$. This follows because \begin{align*} (1+1)^{2n}=\sum_{k=0}^{2n} {{2n}\choose{k}} < 1+2n{{2n}\choose{n}} \end{align*} and then $4^n \leqslant 2n{{2n}\choose{n}}$. In fact the bound can be strengthened without too much difficulty to give $4^n/n \leqslant {{2n}\choose{n}}$.
Edit: The stronger bound only holds for $n \geqslant 4$.
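For what it's worth, a quick numerical check of the bounds mentioned in these answers (exact binomials via math.comb):

```python
from math import comb, sqrt, pi, e

for n in (4, 10, 50, 200):
    c = comb(2 * n, n)
    assert 4**n / (2 * n) <= c                         # Erdős' bound
    assert 4**n / n <= c                               # strengthened version (n >= 4)
    assert 2 * sqrt(pi) / e**2 * 4**n / sqrt(n) <= c   # Stirling-based bound
    assert 4**n / sqrt(pi * (n + 1/3)) <= c <= 4**n / sqrt(pi * (n + 1/4))
    print(n, "ok")
```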
|
Defining parameters
Level: \( N \) = \( 12 = 2^{2} \cdot 3 \)
Weight: \( k \) = \( 2 \)
Character orbit: \([\chi]\) = 12.a (trivial)
Character field: \(\Q\)
Newforms: \( 0 \)
Sturm bound: \(4\)
Trace bound: \(0\)

Dimensions

The following table gives the dimensions of various subspaces of \(M_{2}(\Gamma_0(12))\).

                   Total   New   Old
Modular forms        5      0     5
Cusp forms           0      0     0
Eisenstein series    5      0     5
|