In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly:
Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints.
Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints.
Today we'll conclude our discussion of Chapter 1 with two more bombshells:
Joins are left adjoints, and meets are right adjoints.
Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down.
This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world!
Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders.
In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets.
Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint! But if you examine the proof, you'll see we don't really need \( A \) to have
all joins: it's enough that all the joins in this formula exist:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have
all meets: it's enough that all the meets in this formula exist:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes.
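Before moving on, the adjoint formula above is easy to check concretely. Here's a quick Python sketch (the poset and map are my own choices, not from the text): on subsets of \(\{0,1,2\}\) ordered by inclusion, joins are unions, the map \( f(a) = a \cap \{0,1\} \) preserves all joins, and the join formula recovers its right adjoint, which works out to \( g(b) = b \cup \{2\} \):

```python
from itertools import combinations

# Poset: all subsets of {0, 1, 2} ordered by inclusion; joins are unions.
U = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(U) + 1)
           for c in combinations(sorted(U), r)]

def f(a):
    # Intersecting with a fixed set distributes over unions,
    # so f preserves all joins (including the empty join).
    return a & frozenset({0, 1})

def g(b):
    # The formula: g(b) is the join (union) of all a with f(a) <= b.
    out = frozenset()
    for a in subsets:
        if f(a) <= b:
            out |= a
    return out

# The adjunction: f(a) <= b  if and only if  a <= g(b), for all a and b.
assert all((f(a) <= b) == (a <= g(b)) for a in subsets for b in subsets)
print(set(g(frozenset({0}))))   # g(b) works out to b ∪ {2}
```

The same loop with `f` replaced by any monotone map that fails to preserve some join would make the final assertion fail.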
Suppose \(A\) is a poset with all binary joins. Then we get a function
$$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows:
$$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that
$$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal

$$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\).
Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact:
$$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression on the left here, and applying \( \Delta \) to \( b \) in the expression on the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \).
Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \). Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \).
A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function
$$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying
$$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check.
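Both adjunctions are easy to check by brute force. Here's a quick Python sketch (the example poset is my own choice) on the chain \(0 \le 1 \le 2 \le 3\), where join is \(\max\) and meet is \(\min\):

```python
# Poset: the chain {0, 1, 2, 3} with the usual order; A × A gets the
# componentwise order, joins are max, and meets are min.
A = range(4)

# ∨ is left adjoint to the diagonal Δ(b) = (b, b):
#   a ∨ a' ≤ b  iff  (a, a') ≤ (b, b)  iff  a ≤ b and a' ≤ b.
assert all((max(a, a2) <= b) == (a <= b and a2 <= b)
           for a in A for a2 in A for b in A)

# ∧ is right adjoint to Δ:
#   (a, a) ≤ (b, b')  iff  a ≤ b and a ≤ b'  iff  a ≤ b ∧ b'.
assert all((a <= min(b, b2)) == (a <= b and a <= b2)
           for a in A for b in A for b2 in A)
print("∨ ⊣ Δ and Δ ⊣ ∧ verified on the chain 0 ≤ 1 ≤ 2 ≤ 3")
```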
Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number.
All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on.
Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by
$$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short.
I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason to.
Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset.

Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again.

Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \).

Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \).

Puzzle 51. Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}} \).
So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called
duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs.
Once you start looking you can find duality everywhere, from ancient Chinese philosophy:
to modern computers:
But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality!
This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
We have a sequence $a_1,a_2,...,a_n$ with $a_i\in\{1,2,3,...,n\}$ for all $i$. A sequence is $GOOD$ if for every $k$, either $a_k\ge a_i$ for every $i<k$, or $a_k\le a_i$ for every $i<k$; that is, every element is larger than all previous elements or smaller than all previous elements.
We want to find how many permutations of $1,2,3,...n$ are $GOOD$ ?
It was a question in an old exam, and the answer was $2^{n-1}$: it said that at every choice we can put either the biggest or the smallest of the remaining numbers, except that for the last one we have only one choice, so the answer is $2^{n-1}$. I don't completely understand this answer. (E.g., if we follow this algorithm it could easily get to a point where we can't choose any other number: $1\to 1,\ n \to ?$) Can anyone explain this answer or give another answer for this question? Is this answer correct?
PS. Changed the condition so it is easier to understand.
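A brute-force check (a quick Python sketch) confirms the count. It also suggests how to read the quoted argument: since each $a_k$ is the maximum or minimum of the prefix $a_1,\dots,a_k$, the last element $a_n$ must be $1$ or $n$, so the choices are made from the right end of the sequence, not the left; that is why filling positions left to right (e.g. $a_1 = 1$, $a_2 = n$) can get stuck.

```python
from itertools import permutations

def is_good(p):
    """Each element is bigger than all previous elements or smaller than all."""
    return all(p[k] > max(p[:k]) or p[k] < min(p[:k])
               for k in range(1, len(p)))

for n in range(1, 8):
    count = sum(is_good(p) for p in permutations(range(1, n + 1)))
    assert count == 2 ** (n - 1)
    print(n, count)   # 1 1, 2 2, 3 4, ..., 7 64
```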
Tue, 23 Jul 2019
About ten years ago I started an article, addressed to my younger self, reviewing various books in category theory. I doubt I will ever publish this. But it contained a long, plaintive digression about Categories, Allegories by Peter Freyd and Andre Scedrov:
In light of my capsule summary of category theory, can you figure out what is going on here? Even if you already know what is supposed to be going on you may not be able to make much sense of this. What to make of the axiom that !!□(xy) = □(x(□y))!!, for example?
The explanation is that Freyd has presented a version of category theory in which the objects are missing. Since every object !!X!! in a category is associated with a unique identity morphism !!{\text{id}}_X!! from !!X!! to itself, Freyd has identified each object with its identity morphism. If !!x:C\to D!!, then !!□x!! is !!{\text{id}}_C!! and !!x□!! is !!{\text{id}}_D!!. The axiom !!(□x)□ = □x!! is true because both sides are equal to !!{\text{id}}_C!!.
Still, why phrase it this way? And what about that !!□(x(□y))!! thing? I guessed it was a mere technical device, similar to the one that we can use to reduce five axioms of group theory to three. Normally, one defines a group to have an identity element !!e!! such that !!ex=xe=x!! for all !!x!!, and each element !!x!! has an inverse !!x^{-1}!! such that !!xx^{-1} = x^{-1}x = e!!. But if you are trying to be clever, you can observe that it is sufficient for there to be a left identity !!e!! with !!ex = x!!, and for each !!x!! a left inverse with !!x^{-1}x = e!!.
We no longer require !!xe=x!! or !!xx^{-1}=e!!, but it turns out that you can prove these anyway, from what is left. The fact that you can discard two of the axioms is mildly interesting, but of very little practical value in group theory.
I thought that probably the !!□(x(□y))!! thing was some similar bit of “cleverness”, and that perhaps by adopting this one axiom Freyd was able to reduce his list of axioms. For example, from that mysterious fourth axiom !!□(xy) = □(x(□y))!! you can conclude that !!xy!! is defined if and only if !!x(□y)!! is, and therefore, by the first axiom, that !!x□ = □y!! if and only if !!x□ = □(□y)!!, so that !!□y = □(□y)!!. So perhaps the phrasing of the axiom was chosen to allow Freyd to dispense with an additional axiom stating that !!□y = □(□y)!!.
Today I tinkered with it a little bit and decided I think not.
Freyd has:
$$\begin{align} xy \text{ is defined if and only if } x□ & = □y \tag{1} \\ (□x)□ & = □x \tag{2} \\ (□x)x & = x \tag{3} \\ □(xy) & = □(x(□y)) \tag{4} \end{align} $$
and their duals. Also composition is associative, which I will elide.
In place of 4, let's try this much more straightforward axiom:
$$ □(xy) = □x\tag{$4\star$} $$
I can now show that !!1, 2, 3, 4\star!! together imply !!4!!.
First, a lemma: !!□(□x) = □x!!. Axiom !!3!! says !!(□x)x = x!!, so therefore !!□((□x)x) = □x!!. By !!4\star!!, the left-hand side reduces to !!□(□x)!!, and we are done.
Now I want to show !!4!!, that !!□(xy) = □(x(□y))!!. Before I can even discuss the question I first need to show that !!x(□y)!! is defined whenever !!xy!! is; that is, whenever !!x□ = □y!!. But by the lemma, !!□y=□(□y)!!, so !!x□ = □(□y)!!, which is just what we needed.
At this point, !!4\star!! implies !!4!! directly: both sides of !!4!! have the form !!□(xz)!!, and !!4\star!! tells us that both are equal to !!□x!!.
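As a sanity check, the axioms and the derivation can be verified mechanically on a tiny concrete category. Here's a sketch in Python (names mine) of the free category on the graph !!1 \xrightarrow{f} 2!!, presented object-free, with !!□x!! and !!x□!! as the identities at the source and target:

```python
# A tiny category presented Freyd-style, with no objects: the free category
# on the graph 1 --f--> 2 has morphisms id_1 ('i1'), id_2 ('i2') and 'f'.
SRC = {'i1': 'i1', 'i2': 'i2', 'f': 'i1'}   # □x = identity at the source of x
TGT = {'i1': 'i1', 'i2': 'i2', 'f': 'i2'}   # x□ = identity at the target of x

def box(x):  return SRC[x]
def xbox(x): return TGT[x]

def comp(x, y):
    """Diagrammatic composite xy, defined iff x□ = □y (axiom 1); else None."""
    if xbox(x) != box(y):
        return None
    if x == box(x):       # x is an identity, so xy = y
        return y
    if y == box(y):       # y is an identity, so xy = x
        return x
    return None           # unreachable in this tiny category

ms = ['i1', 'i2', 'f']
# Axiom 1: xy is defined iff x□ = □y.
assert all((comp(x, y) is not None) == (xbox(x) == box(y)) for x in ms for y in ms)
# Axioms 2 and 3: (□x)□ = □x and (□x)x = x.
assert all(xbox(box(x)) == box(x) for x in ms)
assert all(comp(box(x), x) == x for x in ms)
# Axiom 4* and hence axiom 4: □(xy) = □x = □(x(□y)) whenever xy is defined.
for x in ms:
    for y in ms:
        if comp(x, y) is not None:
            assert box(comp(x, y)) == box(x)                 # 4*
            assert box(comp(x, y)) == box(comp(x, box(y)))   # 4
print("axioms 1-3, 4* and 4 all hold")
```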
Conversely, !!4!! implies !!4\star!!. So why didn't Freyd use !!4\star!! instead of !!4!!? I emailed him to ask, but he's 83 so I may not get an answer. Also, knowing Freyd, there's a decent chance I won't understand the answer if I do get one.
My plaintive review of this book continued:
Apparently some people like this book. I don't know why, and perhaps I never will.
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same.
For example, any two of these squares look the same after you rotate and/or reflect them:
An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second.
As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse:
Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that

$$ g \circ f = 1_x $$

and

$$ f \circ g = 1_y . $$
I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\).
Puzzle 140. Prove that any morphism has at most one inverse.

Puzzle 141. Give an example of a morphism in some category that has more than one left inverse.

Puzzle 142. Give an example of a morphism in some category that has more than one right inverse.
Now we're ready for isomorphisms!
Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse.

Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\).
Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like!
What's an isomorphism in the category \(\mathbf{3}\)? Remember, this is a free category on a graph:
The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2:
$$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1:
$$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms:
$$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism!
In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism.
We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\).

Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic.

Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\).
Puzzle 144 says that in a poset, the only isomorphisms are identities.
Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions.
Puzzle 145. Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto.

Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\).
So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them.
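Here's a minimal sketch of Puzzle 146's direction in Python, treating finite functions as dictionaries (the particular sets and values are made up): a bijection's inverse is again a function, and both composites are identities.

```python
# In Set, an isomorphism is a function with a two-sided inverse.
# Model a finite function as a dict; inverting a bijection just flips pairs.
f = {'a': 1, 'b': 2, 'c': 3}            # a bijection {'a','b','c'} -> {1,2,3}

# One-to-one: no two keys share a value (and it is onto its codomain {1,2,3}).
assert len(set(f.values())) == len(f)

f_inv = {v: k for k, v in f.items()}    # the inverse morphism

# Both composites are identities: f_inv ∘ f = id and f ∘ f_inv = id.
assert all(f_inv[f[x]] == x for x in f)
assert all(f[f_inv[y]] == y for y in f_inv)
print("f is an isomorphism with inverse", f_inv)
```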
One more example:
Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism.
This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'.
But what are they like?
Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism
$$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes:
Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism
$$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that
$$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means
$$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\).
In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\).
But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle.

Puzzle 147. Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism.
Doing this will help you understand natural isomorphisms. But you also need examples!
Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal!
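As a hint for this puzzle, here's one possible example, sketched in Python with entirely made-up data: the free category on a graph \(v_1 \xrightarrow{e} v_2\), two \(\mathbf{Set}\)-valued functors given by dictionaries, and a natural transformation whose components are bijections, inverted componentwise. (For a free category, checking the naturality square for the generating edge \(e\) suffices.)

```python
# The free category on the graph v1 --e--> v2, and two Set-valued functors:
# think of them as two databases on the same schema. All data here is made up.
F_obj = {'v1': ['x', 'y'], 'v2': ['p', 'q', 'r']}
F_e   = {'x': 'p', 'y': 'r'}                      # F(e) : F(v1) -> F(v2)

G_obj = {'v1': [0, 1], 'v2': [10, 20, 30]}
G_e   = {0: 10, 1: 30}                            # G(e) : G(v1) -> G(v2)

# A natural transformation α with a bijection at each object.
alpha = {'v1': {'x': 0, 'y': 1},
         'v2': {'p': 10, 'q': 20, 'r': 30}}

# Naturality square for e:  α_{v2} ∘ F(e) = G(e) ∘ α_{v1}.
assert all(alpha['v2'][F_e[s]] == G_e[alpha['v1'][s]] for s in F_obj['v1'])

# Invert each component to get β, and check β is natural the other way:
beta = {v: {w: s for s, w in component.items()} for v, component in alpha.items()}
assert all(beta['v2'][G_e[t]] == F_e[beta['v1'][t]] for t in G_obj['v1'])
print("α is a natural isomorphism, with inverse β taken componentwise")
```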
We should talk about this.
tNIRS-1 Non-invasive Cerebral Oxygenation Monitor
Conventional methods (the MBL and SRS methods) measure the intensity of transmitted light and calculate oxygen saturation and related values from that information, but they are easily affected by the shape of the measurement site and by differences in probe attachment, so limited quantitative accuracy restricted their applications.
The TRS method irradiates the body with short light pulses and measures the temporal response waveform of the transmitted light. Because it is largely unaffected by the shape of the measurement site or by probe attachment, it can measure oxygen saturation and related values accurately and stably. Its particularly good quantitativeness and reproducibility even make it possible to compare data across different days.
$$R(t,\mu_a,\mu'_s) =\Biggl(\frac{4\pi c}{3\mu'_s}\Biggr)^{-\frac{3}{2}}\frac{1}{\mu'_s}\,t^{-\frac{5}{2}}\exp(-\mu_a c t)\exp\Biggl(-\frac{3}{4ct}\Biggl(\rho^2\mu'_s+\frac{1}{\mu'_s}\Biggr)\Biggr)$$
The theoretical waveform is fitted to the measured waveform, and the absorption coefficient (μa) at the point of best agreement is obtained. From the absorption coefficients (μa) at the three wavelengths, the tissue oxygen saturation (StO2) and total hemoglobin concentration (tHb) of the tissue are calculated.
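For illustration only, here is a Python sketch that evaluates the theoretical waveform \(R(t,\mu_a,\mu'_s)\) given above. The parameter values and units are assumptions chosen for the example (not the instrument's actual internals); a real fit would adjust μa until the theoretical curve matches the measured one.

```python
import math

# Illustrative units (assumptions for this sketch only): t in ps, mu_a and
# mu_s' in 1/cm, rho (source-detector distance) in cm, and c the speed of
# light in tissue, roughly 0.03 cm/ps.
def R(t, mu_a, mu_s_prime, rho, c=0.03):
    """Theoretical time-resolved reflectance waveform from the formula above."""
    return ((4 * math.pi * c / (3 * mu_s_prime)) ** -1.5
            * (1 / mu_s_prime) * t ** -2.5
            * math.exp(-mu_a * c * t)
            * math.exp(-(3 / (4 * c * t)) * (rho**2 * mu_s_prime + 1 / mu_s_prime)))

# The waveform rises and then decays; stronger absorption (larger mu_a)
# suppresses the late tail, which is what fitting mu_a exploits.
for t in (100, 500, 1000, 2000):
    print(t, R(t, mu_a=0.1, mu_s_prime=10, rho=3))
```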
Model: C12707
Measured parameters: Oxygenated hemoglobin concentration O2Hb (μM); deoxygenated hemoglobin concentration HHb (μM); total hemoglobin concentration tHb (μM); tissue oxygen saturation StO2 (%)
Measurement range: StO2 0 % to 99 %
Measurement sampling interval: 5 s to 60 s (5 s steps)
Measurement method: TRS (time-resolved spectroscopy)
Light source: Laser diode (Class 1)
Emitted light: Wavelengths 755 nm, 816 nm, 850 nm (nominal); pulsed, 9 MHz repetition rate
Photodetector: MPPC (multi-pixel photon counter)
Number of channels: 2
Battery operating time: Approx. 30 minutes (fully charged)
Stored measurements: 20
External storage: USB memory
Probe fiber length: Approx. 3.5 m
Connection to patient monitor: Possible via Philips IntelliBridge Interface Module
Power supply: AC 100 V, 50 Hz/60 Hz
Power consumption: 90 VA
Dimensions/weight (tNIRS-1 main unit): 292 mm (W) × 291 mm (H) × 207 mm (D), approx. 7.5 kg
Medical device certification number: 225AFBZX00091000
The cross section method for determining triple integral bounds
One tricky part of calculating triple integrals is determining the bounds of integration. To help you in this endeavor, we outline a couple of methods for reducing the task of finding these bounds to the simpler task of finding bounds on a double integral combined with a single integral. One such method is what we call the “shadow method,” which we describe on another page. Here we describe a method we call the “cross section method.”
To begin, imagine taking a big meat cleaver and chopping the three-dimensional region $\dlv$ into slices perpendicular to one of the coordinate axes (in a similar manner to the way in which we take cross sections of a surface). If you view the axis perpendicular to the slices as being vertical, then you could view the region $\dlv$ as being composed of a bunch of these cross sections stacked on top of each other. For example, for the below ellipsoidal region, we would view the $x$-axis as being vertical.
Applet: Stack of cross sections. Inside the transparent ellipsoidal region $x^2/16+y^2+z^2/4 \le 1$ is a stack of cross sections cut perpendicular to the $x$-axis. If you view the $x$-axis as being vertical, then the cross sections are stacked on top of each other.
Unless $\dlv$ happens to be a cylinder, the size and/or shape of the cross section will vary as a function of the location along the “vertical” coordinate axis (i.e., the coordinate axis perpendicular to the cross sections). The integral over $\dlv$ will correspond to the double integrals over all these cross sections, summed up from the bottom cross section up to the top cross section. If the “vertical” axis is the $z$-axis, we can write the integral of $f(x,y,z)$ over $\dlv$ as $$\iiint_{\dlv} f(x,y,z)dV = \int_{\text{bottom}}^{\text{top}} \left(\iint_{\text{cross section}(z)} f(x,y,z) dx\,dy\right)dz.$$ On the other hand, if the $y$-axis is the “vertical” axis, the integral is $$\iiint_{\dlv} f(x,y,z)dV = \int_{\text{bottom}}^{\text{top}} \left(\iint_{\text{cross section}(y)} f(x,y,z) dx\,dz\right)dy.$$ If the $x$-axis is the “vertical” axis, the integral is $$\iiint_{\dlv} f(x,y,z)dV = \int_{\text{bottom}}^{\text{top}} \left(\iint_{\text{cross section}(x)} f(x,y,z) dy\,dz\right)dx.$$
Note the differences between the cross section method and the shadow method. For the shadow method, the double integral is on the outside (with constant bounds) and the single integral is on the inside (with variable bounds). In contrast, for the cross section method, the double integral is on the inside (with variable bounds) and the single integral is on the outside (with constant bounds). This result is consistent with the idea that the cross sections change size or shape with height while the shadow is just a single shape.
We illustrate the cross section method with an example.
Example
Let $\dlv$ be the pyramid bounded by the planes $z=0$, $z=4-2x$, $z=2-y$, $z=2x$, and $z=2+y$. Let the density of the pyramid at point $(x,y,z)$ be $f(x,y,z)=xz$. Find the mass of the pyramid.
Solution: The mass of the pyramid is the integral of its density:$$\text{mass of pyramid} = \iiint_{\dlv} f(x,y,z) dV,$$where $\dlv$ is the pyramid. The first task is to determine the integration limits given by $\dlv$.
The shape of the pyramid $\dlv$ is shown below. Given that it points in the positive $z$-direction, we choose the $z$-axis as the “vertical” axis and make the cross sections perpendicular to the $z$-axis. As illustrated in the below applet, the cross sections are rectangles.
Applet: Demonstrating the cross section method for computing triple integral limits. The transparent region is a pyramid bounded by the planes $z=0$, $z=4-2x$, $z=2-y$, $z=2x$, and $z=2+y$. The cross sections perpendicular to the $z$-axis are rectangles, as illustrated by the single green cross section shown at a particular value of $z$.
For a given value of $z$, the boundaries of the rectangular cross sections are determined by the equations for the planes $z=4-2x$, $z=2-y$, $z=2x$, and $z=2+y$. Rewriting those equations for $x$ and $y$ in terms of the given $z$, the boundaries are $x=2-z/2$, $y=2-z$, $x=z/2$, and $y=z-2$. Inside the rectangle, the ranges of $x$ and $y$ are \begin{gather*} \frac{z}{2} \le x \le 2-\frac{z}{2}\\ z-2 \le y \le 2-z \end{gather*} as illustrated below. The integration limits for the cross section are \begin{gather*} \iint_{\text{cross section}(z)} f(x,y,z) dy\,dx = \int_{z/2}^{2-z/2} \int_{z-2}^{2-z} f(x,y,z) dy\,dx. \end{gather*}
The bottom of the pyramid is $z=0$, as that plane is one of the given boundaries. The top of the pyramid occurs where all four planes meet, which is when $z=2$. Therefore, the top and bottom integration limits are $$\int_{\text{bottom}}^{\text{top}} \cdots dz = \int_0^2 \cdots dz.$$
Putting the top and bottom integration limits together with the cross section integration limits, we can calculate that the mass of the pyramid is \begin{align*} \text{mass of pyramid} &= \iiint_{\dlv} f(x,y,z) dV\\ &=\int_{\text{bottom}}^{\text{top}} \left(\iint_{\text{cross section}(z)} f(x,y,z) dx\,dy\right)dz\\ &=\int_0^2 \int_{z/2}^{2-z/2} \int_{z-2}^{2-z}xz \,dy\,dx\,dz\\ &=\int_0^2 \int_{z/2}^{2-z/2} xyz \bigg|_{y=z-2}^{y=2-z} dx\,dz\\ &=\int_0^2 \int_{z/2}^{2-z/2} 2xz(2-z) dx\,dz\\ &=\int_0^2 x^2z(2-z) \bigg|_{x=z/2}^{x=2-z/2}dz\\ &=\int_0^2 2z(2-z)^2 dz\\ &=2\biggl(2z^2-\frac{4}{3}z^3+\frac{z^4}{4}\biggr) \bigg|_0^2 = 2\biggl(8-\frac{32}{3}+4\biggr)=\frac{8}{3}. \end{align*}
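The limits above are easy to double-check numerically. Here's a quick Python sketch using a midpoint Riemann sum; the $y$-integral is done exactly, since the integrand $xz$ does not depend on $y$:

```python
# Numerically confirm the hand computation of the pyramid's mass.
# The integrand x*z is independent of y, so the y-integral contributes the
# exact factor (y_hi - y_lo); x and z are sampled at interval midpoints.
def pyramid_mass(n=200):
    total = 0.0
    dz = 2.0 / n
    for i in range(n):
        z = (i + 0.5) * dz                  # midpoint in z, over [0, 2]
        x_lo, x_hi = z / 2, 2 - z / 2       # from the planes z = 2x, z = 4 - 2x
        y_len = (2 - z) - (z - 2)           # from the planes z = 2 + y, z = 2 - y
        dx = (x_hi - x_lo) / n
        for j in range(n):
            x = x_lo + (j + 0.5) * dx       # midpoint in x
            total += x * z * y_len * dx * dz
    return total

print(pyramid_mass())   # ≈ 2.6667 = 8/3
```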
It would be possible to calculate the triple integral using the shadow method. The shadow would be the rectangle $0 \le x \le 2$, $-2 \le y \le 2$. However, for this pyramid, the calculation would be more difficult as the function $\text{top}(x,y)$ for the shadow method would be four different triangular regions. You'd have to break up the integral into four parts. For this reason, the cross section method is easier for this example.
Principle of Mathematical Induction

Theorem
Let $\map P n$ be a propositional function depending on $n \in \Z$.
Let $n_0 \in \Z$ be given.
Suppose that:

$(1): \quad \map P {n_0}$ is true
$(2): \quad \forall k \in \Z: k \ge n_0 : \map P k \implies \map P {k + 1}$

Then:

$\map P n$ is true for all $n \in \Z$ such that $n \ge n_0$.

The principle of mathematical induction is usually stated and demonstrated for $n_0$ being either $0$ or $1$.
This is often dependent upon whether the analysis of the fundamentals of mathematical logic are zero-based or one-based.
Let $\map P n$ be a propositional function depending on $n \in \N$.
Suppose that:
$(1): \quad \map P 0$ is true $(2): \quad \forall k \in \N: k \ge 0 : \map P k \implies \map P {k + 1}$ Then: $\map P n$ is true for all $n \in \N$.
Let $\map P n$ be a propositional function depending on $n \in \N_{>0}$.
Suppose that:
$(1): \quad \map P 1$ is true $(2): \quad \forall k \in \N_{>0}: k \ge 1 : \map P k \implies \map P {k + 1}$ Then: $\map P n$ is true for all $n \in \N_{>0}$.
Let $\Z_{\ge n_0}$ denote the set:

$\Z_{\ge n_0} = \set {n \in \Z: n \ge n_0}$

Let $S$ denote the set:

$S = \set {n \in \Z_{\ge n_0}: \map P n}$
That is, the set of all integers for which $n \ge n_0$ and for which $\map P n$ holds.
From Subset of Set with Propositional Function we have that:
$S \subseteq \Z_{\ge n_0}$ From $(1)$ we have that $\map P {n_0}$.
Hence $n_0 \in S$.
Let $k \in S$.
Then $\map P k$ holds.
But by $(2)$, $\map P {k + 1}$ also holds.
This implies $k + 1 \in S$.
So as:
$S \subseteq \Z_{\ge n_0}$
and:
$S$ satisfies $(1)$ and $(2)$
it follows by the Principle of Finite Induction that $S = \Z_{\ge n_0}$.
Hence for all $n \ge n_0$, $\map P n$ holds.
$\blacksquare$
Contexts
Let $\struct {S, \preceq}$ be a well-ordered set.
Let $T \subseteq S$ be a subset of $S$ such that:
$\forall s \in S: \paren {\forall t \in S: t \prec s \implies t \in T} \implies s \in T$
Then $T = S$.
Let the unity of $D$ be $1_D$.
Let $S \subseteq D$ be such that: $1_D \in S$ $a \in S \implies a + 1_D \in S$ Then: $D_{> 0_D} \subseteq S$
where $D_{> 0_D}$ denotes all the elements $d \in D$ such that $d > 0_D$.
That is, $D_{> 0_D}$ is the set of all (strictly) positive elements of $D$.
Let $\struct {S, \circ, \preceq}$ be a naturally ordered semigroup.
Let $T \subseteq S$ such that $0 \in T$ and $n \in T \implies n \circ 1 \in T$.
Then $T = S$.
Let $\struct {P, s, 0}$ be a Peano structure.
Let $\map Q n$ be a propositional function depending on $n \in P$.
Suppose that:
$(1): \quad \map Q 0$ is true
$(2): \quad \forall n \in P: \map Q n \implies \map Q {\map s n}$
Then:
$\forall n \in P: \map Q n$
The step which shows that $\map P k \implies \map P {k + 1}$ is called the
induction step.
When the first domino is knocked over, the entire line topples, one after the other.
It follows that if either:
$(1) \quad$ no domino is knocked over to start with (that is, the basis for the induction does not hold)
or:
$(2) \quad$ the gap between two dominoes is too large for one domino to knock over the next (that is, the induction step does not hold)
then the toppling stops, and every domino beyond that point remains standing: the conclusion fails.
Let $L_k$ denote the $k$th Lucas number.
Let $F_k$ denote the $k$th Fibonacci number.
Given that $L_n = F_n$ for $n = 1, 2, \ldots, k$, we see that:
\(\displaystyle L_{k + 1}\) \(=\) \(\displaystyle L_k + L_{k - 1}\) Definition 1 of Lucas Number \(\displaystyle \) \(=\) \(\displaystyle F_k + F_{k - 1}\) by assumption \(\displaystyle \) \(=\) \(\displaystyle F_{k + 1}\) Definition of Fibonacci Number Hence: $\forall n \in \Z_{>0}: F_n = L_n$
The flaw: the induction step uses the hypothesis for both $k$ and $k - 1$, so it needs $k \ge 2$ to apply; the case $n = 2$ is never established, and indeed $L_2 = 3 \ne F_2 = 1$.
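A direct computation (a sketch; the helper functions are my own) makes the failure concrete: the claimed equality already fails at $n = 2$.

```python
def fib(n):
    # F_1 = F_2 = 1, F_{n+1} = F_n + F_{n-1}
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def lucas(n):
    # L_1 = 1, L_2 = 3, L_{n+1} = L_n + L_{n-1}
    a, b = 1, 3
    for _ in range(n - 1):
        a, b = b, a + b
    return a

print([fib(n) for n in range(1, 6)])    # [1, 1, 2, 3, 5]
print([lucas(n) for n in range(1, 6)])  # [1, 3, 4, 7, 11]
# The basis holds at n = 1, but the statement is already false at n = 2.
```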
We are to prove that:
$\dfrac 1 {1 \times 2} + \dfrac 1 {2 \times 3} + \dotsb + \dfrac 1 {\paren {n - 1} \times n} = \dfrac 3 2 - \dfrac 1 n$ For $n = 1$ we have: $\dfrac 3 2 - \dfrac 1 n = \dfrac 1 2 = \dfrac 1 {1 \times 2}$
Assuming true for $k$, we have:
\(\displaystyle \dfrac 1 {1 \times 2} + \dfrac 1 {2 \times 3} + \dotsb + \dfrac 1 {\paren {n - 1} \times n} + \dfrac 1 {n \times \paren {n + 1} }\) \(=\) \(\displaystyle \dfrac 3 2 - \frac 1 n + \dfrac 1 {n \paren {n + 1} }\) by the induction hypothesis \(\displaystyle \) \(=\) \(\displaystyle \dfrac 3 2 - \frac 1 n + \paren {\dfrac 1 n - \dfrac 1 {n + 1} }\) \(\displaystyle \) \(=\) \(\displaystyle \dfrac 3 2 - \frac 1 {n + 1}\)
But clearly this is wrong, because for $n = 6$:
$\dfrac 1 2 + \dfrac 1 6 + \dfrac 1 {12} + \dfrac 1 {20} + \dfrac 1 {30} = \dfrac 5 6$
on the left hand side, but:
$\dfrac 3 2 - \dfrac 1 6 = \dfrac 4 3$
on the right hand side. (The basis was never properly established: the smallest meaningful case is $n = 2$, where the left hand side is $\dfrac 1 2$ but the right hand side is $\dfrac 3 2 - \dfrac 1 2 = 1$.)
We are to prove that:
$1 + 3 + 5 + \dotsb + \paren {2 n - 1} = n^2 + 3$ We establish as an induction hypothesis: $1 + 3 + 5 + \dotsb + \paren {2 k - 1} = k^2 + 3$ Then:
\(\displaystyle 1 + 3 + 5 + \dotsb + \paren {2 k - 1} + \paren {2 k + 1}\) \(=\) \(\displaystyle k^2 + 3 + \paren {2 k + 1}\) from the induction hypothesis \(\displaystyle \) \(=\) \(\displaystyle k^2 + 2 k + 1 + 3\) \(\displaystyle \) \(=\) \(\displaystyle \paren {k + 1}^2 + 3\) Square of Sum But clearly this is wrong, because for $n = 2$, say:
\(\displaystyle \paren {2 \times 1 - 1} + \paren {2 \times 2 - 1}\) \(=\) \(\displaystyle 1 + 3\) \(\displaystyle \) \(=\) \(\displaystyle 4\)
on the left hand side, but:
$2^2 + 3 = 7$
on the right hand side.
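A quick check (mine, not ProofWiki's) shows what goes wrong: the sum of the first $n$ odd numbers is $n^2$, never $n^2 + 3$, even though the induction step itself is valid.

```python
for n in range(1, 8):
    lhs = sum(2 * k - 1 for k in range(1, n + 1))  # 1 + 3 + ... + (2n - 1)
    print(n, lhs, n * n + 3)  # lhs is always n^2, so the basis never holds
```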
Also defined as
This principle can often be found stated more informally, inasmuch as the propositional function $P$ is referred to as
"a statement about integers".
Some call it the
induction principle.
The abbreviation
PMI is often seen. Some sources call it the Principle of Weak Induction. Also see: Results about Proofs by Induction can be found here.
The phrase
mathematical induction appears to have been coined by Augustus De Morgan.
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly:
Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints.
Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints.
Today we'll conclude our discussion of Chapter 1 with two more bombshells:
Joins
are left adjoints, and meets are right adjoints.
Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down.
This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world!
Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders.
In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets.
Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint! But if you examine the proof, you'll see we don't really need \( A \) to have
all joins: it's enough that all the joins in this formula exist:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have
all meets: it's enough that all the meets in this formula exist:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes.
Suppose \(A\) is a poset with all binary joins. Then we get a function
$$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows:
$$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that
$$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the
diagonal
$$ \Delta : A \to A \times A $$sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called
duplication, since it duplicates any element of \(A\).
Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact:
$$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \).
Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \). Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \).
A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function
$$ \wedge : A \times A \to A $$that's the
right adjoint of \( \Delta \). This is just a clever way of saying
$$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check.
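Both adjunctions can be checked exhaustively on a small example. Here is a sketch (mine, not from the lecture) using the poset of subsets of \( \{0,1\} \) ordered by inclusion, where join is union and meet is intersection:

```python
from itertools import product

# The poset A: subsets of {0, 1} ordered by inclusion.
A = [frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})]

def leq(a, b):
    return a <= b  # subset order

# Join (union) is left adjoint to the diagonal:
#   a v a' <= b   iff   a <= b and a' <= b
join_ok = all(leq(a | a2, b) == (leq(a, b) and leq(a2, b))
              for a, a2, b in product(A, repeat=3))

# Meet (intersection) is right adjoint to the diagonal:
#   b <= a ^ a'   iff   b <= a and b <= a'
meet_ok = all(leq(b, a & a2) == (leq(b, a) and leq(b, a2))
              for a, a2, b in product(A, repeat=3))

print(join_ok, meet_ok)  # True True
```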
Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number.
All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on.
Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by
$$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short.
I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason.
Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset.
Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again.
Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \).
Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \).
Puzzle 51. Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}} \).
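Puzzle 49 can also be machine-checked on a toy poset (a sketch of my own; here the order is divisibility on the divisors of 12, so join is lcm and meet is gcd):

```python
# Divisors of 12 ordered by divisibility; the opposite poset swaps the order.
elems = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    return b % a == 0        # a <= b iff a divides b

def leq_op(a, b):
    return leq(b, a)         # the order in the opposite poset

def join(subset, order):
    """Least upper bound of `subset` under `order`, or None if it doesn't exist."""
    ubs = [u for u in elems if all(order(s, u) for s in subset)]
    least = [u for u in ubs if all(order(u, v) for v in ubs)]
    return least[0] if least else None

print(join([4, 6], leq))     # 12 = lcm(4, 6): the join under divisibility
print(join([4, 6], leq_op))  # 2 = gcd(4, 6): the join in the opposite poset is the meet
```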
So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called
duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs.
Once you start looking you can find duality everywhere, from ancient Chinese philosophy:
to modern computers:
But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality!
This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises. |
I am interested in modeling the following experiment:
A binomial trial with $n$ Bernoulli experiments is run. For the $k$ positive outcomes a second (independent) binomial trial with $k$ runs and a different success probability is run.
For example: I throw $n$ darts with probability $p_1$ of hitting the bull's eye. The number of hits is $k$. My friend is then allowed to throw $k$ times and has a different skill set than I do (probability $p_2$). I want to model the probability that he will hit the bull's eye $l$ times. Let us call the random variable $Y_{n}$ if n trials are run. The first binomial trial is $X^{1}_{n}$ and the second is $X^{2}_{k}$.
Question 1:
What is the distribution of this random variable?
I argue that an ugly form is:
$P \left(Y_{n}=l\right) = \sum_{i=l}^{n}P\left(X_{n}^{1}=i\right)\cdot P\left(X^{2}_i = l\right)$.
Is this correct, and is there a nice way to determine confidence intervals for the success probability $p$ of $Y$? I am especially concerned with the quality of the confidence intervals for small $n$, so I am not comfortable using a normal approximation, which I guess would lead to a sum of products of normal probabilities. Using Jeffreys' prior (which afaik leads to better results for smaller $n$) I would, at a wild guess, arrive at a sum of products of beta probabilities, which seems to be numerically difficult (Product of beta distributions).
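For what it's worth, the formula can be sanity-checked by simulation; it is also consistent with the thinning property of the binomial, which gives $Y_n \sim \operatorname{Binomial}(n, p_1 p_2)$. A sketch with made-up parameters:

```python
import random
from math import comb

random.seed(0)
n, p1, p2 = 10, 0.6, 0.4
trials = 200_000

def simulate():
    k = sum(random.random() < p1 for _ in range(n))     # my hits
    return sum(random.random() < p2 for _ in range(k))  # friend's hits

counts = [0] * (n + 1)
for _ in range(trials):
    counts[simulate()] += 1

def pmf(l):
    # The convolution formula from the question
    return sum(comb(n, i) * p1**i * (1 - p1)**(n - i)
               * comb(i, l) * p2**l * (1 - p2)**(i - l)
               for i in range(l, n + 1))

for l in range(n + 1):
    binom = comb(n, l) * (p1 * p2)**l * (1 - p1 * p2)**(n - l)
    print(l, counts[l] / trials, round(pmf(l), 4), round(binom, 4))
```

The three columns agree; if $Y_n$ is indeed binomial with success probability $p_1 p_2$, then exact binomial intervals (e.g. Clopper-Pearson) for $p_1 p_2$ apply even for small $n$.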
Question 2:
Finally, for $Y^{i}\sim D \left(n_{i},p_{i}\right)$, with $D$ being the (to me) unknown distribution with overall success probability $p_{i}$, I am interested in calculating (approximately)
$P\left(p^{1}_{n_1}\gt p^2_{n_2}\right)$ given sample data. Is there a general way to attack this problem?
Interpretation: there are two teams of dart player pairs. I observe a series of outcomes and want to calculate the probability that one team has a higher success probability (success being hitting the bull's eye in the second step, which requires at least one hit in the first step).
Question 3:
Is there a nice generalization if I have not only 2, but $k$ successive binomial experiments of this kind? Nice in the sense of not simply extending my formula above and summing/integrating approximately.
What you wrote is an expectation value, which means an average on some state $|\psi\rangle$ over all possible eigenvalues of the operator under analysis, weighted with the probability of that eigenvalue occurring on that state $\psi$.
So, yes, $\hat H$ is an observable (we reserve this term for operators and the quantities they represent) and this means that its eigenvalues (the energy levels) can actually be measured by means of suitable experiments.
But $\bar E$ is not an observable, and in general each single result of your measurements might have nothing to do with that value.
How to reconcile the two things?
Actually, repeating the same measurement an incredibly large number $N$ of times (or on an incredibly large number of systems) in the same initial conditions should provide you with a set of results whose average, weighted with the frequency with which each value occurs, will be closer to $\bar E$ the larger $N$ is (they will coincide in the limit $N\rightarrow\infty$).
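A toy simulation (my own, with invented numbers) makes this concrete: for a two-level system where a measurement yields $E_0 = 0$ with probability $1/4$ and $E_1 = 1$ with probability $3/4$, no single outcome equals $\bar E = 0.75$, yet the average of $N$ outcomes approaches it as $N$ grows.

```python
import random
random.seed(42)

E0, E1, p1 = 0.0, 1.0, 0.75   # eigenvalues and the probability of measuring E1
for N in (10, 1000, 100_000):
    outcomes = [E1 if random.random() < p1 else E0 for _ in range(N)]
    print(N, sum(outcomes) / N)   # tends to 0.75 as N grows
```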
Necessary Condition for Twice Differentiable Functional to have Minimum

Theorem
Let $J\sqbrk y$ be a twice differentiable functional.
Then $J$ has a minimum for $y = \hat y$ only if:
$\delta J \sqbrk {\hat y; h} = 0$
and, for all admissible $h$:
$\delta^2 J \sqbrk {\hat y; h} \ge 0$
Proof
By definition, $\Delta J\sqbrk y$ can be expressed as:
$\Delta J\sqbrk{y;h}=\delta J\sqbrk{y;h}+\delta^2 J\sqbrk{y;h}+\epsilon\size {h}^2$
By assumption:
$\delta J\sqbrk{\hat y;h}=0$
Hence:
$\Delta J \sqbrk{\hat y;h}=\delta^2 J\sqbrk{\hat y;h}+\epsilon\size {h}^2 $
Therefore, for sufficiently small $\size h$ both $\Delta J\sqbrk{\hat y;h}$ and $\delta^2 J\sqbrk{\hat y;h}$ will have the same sign.
$\Box$
Suppose there exists $h = h_0$ such that:
$\delta^2 J\sqbrk{\hat y;h_0}<0$
Then, for any $\alpha\ne 0$
\(\displaystyle \delta^2 J\sqbrk{\hat y;\alpha h_0}\) \(=\) \(\displaystyle \alpha^2\delta^2 J\sqbrk{\hat y;h_0}\) \(\displaystyle \) \(<\) \(\displaystyle 0\)
Therefore, $\Delta J\sqbrk{\hat y;h}$ can be made negative for arbitrary small $\size h$.
However, by assumption $J$ has a minimum for $y = \hat y$, so $\Delta J\sqbrk{\hat y;h} \ge 0$ for all sufficiently small $\size h$.
This is a contradiction.
Thus no function $h_0$ with $\delta^2 J\sqbrk{\hat y;h_0}<0$ can exist.
In other words:
$\delta^2 J\sqbrk{\hat y;h}\ge 0$
for all $h$.
$\blacksquare$ |
The terms
heavy and light are commonly used in two different ways. We refer to weight when we say that an adult is heavier than a child. On the other hand, something else is alluded to when we say that oak is heavier than balsa wood. A small shaving of oak would obviously weigh less than a roomful of balsa wood, but oak is heavier in the sense that a piece of given size weighs more than the same-size piece of balsa.
What we are actually comparing is the
mass per unit volume, that is, the density. In order to determine these densities, we might weigh a cubic centimeter of each type of wood. If the oak sample weighed 0.71 g and the balsa 0.15 g, we could describe the density of oak as 0.71 g cm –3 and that of balsa as 0.15 g cm –3. (Note that the negative exponent in the units cubic centimeters indicates a reciprocal. Thus 1 cm –3 = 1/cm 3 and the units for our densities could be written as \(\frac{\text{g}}{\text{cm}^\text{3}}\) , g/cm 3, or g cm –3. In each case the units are read as grams per cubic centimeter, the per indicating division.) We often abbreviate "cm 3" as "cc", and 1 cm 3 = 1 mL exactly by definition.
In general it is not necessary to weigh exactly 1 cm
3 of a material in order to determine its density. We simply measure mass and volume and divide volume into mass:
\[\text{Density} = \dfrac{\text{mass}}{\text{volume}} \]
or
\[\rho = \dfrac{m}{V} \quad \label{1}\]
where
ρ = density m = mass V = volume
Example \(\PageIndex{1}\): Density of Aluminum
Calculate the density of (a) a piece of aluminum whose mass is 37.42 g and which, when submerged, increases the water level in a graduated cylinder by 13.9 ml; (b) an aluminum cylinder of mass 25.07 g, radius 0.750 cm, and height 5.25 cm.
Solution a) Since the submerged metal displaces its own volume,
\[\text{Density} = \rho =\dfrac{m}{V} = \dfrac{\text{37.42 g}}{\text{13.9 ml}} = \text{2.69 g}/\text{ml or 2.69 g ml}^{-\text{1}}\]
b) The volume of the cylinder must be calculated first, using the formula
\[ \text{V} = \pi r^\text{2} h = \text{3.142}\times\text{(0.750 cm)}^\text{2}\times\text{5.25 cm} = \text{9.278 718 8 cm}^\text{3}\]
Then
\[ \rho = \dfrac{m}{V} = \dfrac{\text{25.07 g}}{\text{9.278 718 8 cm}^\text{3}} = \begin{cases} 2.70 \dfrac{\text{g}}{\text{cm}^\text{3}} \\ \text{2.70 g cm}^{-\text{3}} \\ \text{2.70 g}/ \text{cm}^\text{3} \end{cases}\]
which are all acceptable alternatives.
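Both parts of the example reduce to the same two-line computation; a sketch in code (the function name is mine):

```python
import math

def density(mass_g, volume_cm3):
    """Density = mass / volume, in g/cm^3."""
    return mass_g / volume_cm3

# (a) Volume found by water displacement (1 mL = 1 cm^3 exactly)
print(round(density(37.42, 13.9), 2))   # 2.69 g/cm^3

# (b) Volume of a cylinder: V = pi * r^2 * h
V = math.pi * 0.750**2 * 5.25
print(round(density(25.07, V), 2))      # 2.70 g/cm^3
```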
Note that unlike mass or volume (
extensive properties), the density of a substance is independent of the size of the sample ( intensive property). Thus density is a property by which one substance can be distinguished from another. A sample of pure aluminum can be trimmed to any desired volume or adjusted to have any mass we choose, but its density will always be 2.70 g/cm 3 at 20°C. The densities of some common pure substances are listed below.
Tables and graphs are designed to provide a maximum of information in a minimum of space. When a physical quantity (number × units) is involved, it is wasteful to keep repeating the same units. Therefore it is conventional to use pure numbers in a table or along the axes of a graph. A pure number can be obtained from a quantity if we divide by appropriate units. For example, when divided by the units gram per cubic centimeter, the density of aluminum becomes a pure number 2.70:
\[\dfrac{\text{Density of aluminum}}{\text{1 g cm}^{-\text{3}}} = \dfrac{\text{2.70 g cm}^{-\text{3}}}{\text{1 g cm}^{-\text{3}}} = \text{2.70} \]
Substance Density / g cm -3 Helium gas 0.000 16 Dry air 0.001 185 Gasoline 0.66-0.69 (varies) Kerosene 0.82 Benzene 0.880 Water 1.000 Carbon tetrachloride 1.595 Magnesium 1.74 Salt 2.16 Aluminum 2.70 Iron 7.87 Copper 8.96 Silver 10.5 Lead 11.34 Uranium 19.05 Gold 19.32
Therefore, a column in a table or the axis of a graph is conveniently labeled in the following form:
Quantity/units
This indicates the units that must be divided into the quantity to yield the pure number in the table or on the axis. This has been done in the second column of the table.
Converting Densities
In our exploration of density, notice that chemists may express densities differently depending on the subject. The density of pure substances may be expressed in kg/m 3 in some journals which insist on strict compliance with SI units; densities of soils may be expressed in lb/ft 3 in some agricultural or geological tables; the density of a cell may be expressed in mg/µL; and other units are in common use. It is easy to transform densities from one set of units to another by multiplying the original quantity by one or more unity factors:
Example \(\PageIndex{2}\): Density of Water
Convert the density of water, 1 g/cm
3 to (a) lb/cm 3 and (b) lb/ft 3 Solution a. The equality \(\text{454 g} = \text{1 lb}\) can be used to write two unity factors,
\[ \dfrac{\text{454 g}}{\text{1 lb}} \]
or
\[\dfrac{\text{1 lb}}{\text{454 g}} \]
The given density can be multiplied by one of the unity factors to get the desired result. The correct conversion factor is chosen so that the units cancel:
\( \text{1} \dfrac{\text{g}}{\text{cm}^\text{3}} \times \dfrac{\text{1 lb}}{\text{454 g}} = \text{0.002203} \dfrac{\text{lb}}{\text{cm}^\text{3}}\)
b. Similarly, the equalities \(\text{2.54 cm} = \text{1 inch}\), and \(\text{12 inches} = \text{1 ft}\) can be use to write the unity factors:
\( \dfrac{\text{2.54 cm}}{\text{1 in}} \text{, } \dfrac{\text{1 in}}{\text{2.54 cm}} \text{, } \dfrac{\text{12 in}}{\text{1 ft}} \text{ and } \dfrac{\text{1 ft}}{\text{12 in}} \)
In order to convert the cm
3 in the denominator of 0.002203 to in 3, we need to multiply by the appropriate unity factor three times, or by the cube of the unity factor:
\( \text{0.002203} \dfrac{\text{lb}}{\text{cm}^\text{3}} \times \dfrac{\text{2.54 cm}}{\text{1 in}} \times \dfrac{\text{2.54 cm}}{\text{1 in}} \times \dfrac{\text{2.54 cm}}{\text{1 in}}\)
or
\( \text{0.002203} \dfrac{\text{lb}}{\text{cm}^\text{3}} \times \big(\dfrac{\text{2.54 cm}}{\text{1 in}}\big)^\text{3} = \text{0.0361 lb}/ \text{in}^\text{3} \)
This can then be converted to lb/ft
3:
\( \text{0.0361 lb}/ \text{in}^\text{3}\times \big(\dfrac{\text{12 in}}{\text{1 ft}}\big)^\text{3} = \text{62.4 lb}/\text{ft}^\text{3}\)
It is important to notice that we have used conversion factors to convert from one unit to another unit of the same parameter |
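The chain of unity factors translates directly into code (a sketch using the same 454 g = 1 lb and 2.54 cm = 1 in figures as the text):

```python
# Convert 1 g/cm^3 to lb/ft^3 by multiplying with unity factors.
g_per_cm3 = 1.0
lb_per_cm3 = g_per_cm3 * (1 / 454)     # 454 g = 1 lb
lb_per_in3 = lb_per_cm3 * 2.54**3      # (2.54 cm / 1 in)^3
lb_per_ft3 = lb_per_in3 * 12**3        # (12 in / 1 ft)^3
print(round(lb_per_in3, 4))            # 0.0361 lb/in^3
print(round(lb_per_ft3, 1))            # 62.4 lb/ft^3
```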
I'm looking for the right argument why the function $ \cos\sqrt z$ is analytic on the whole complex plane. As far as I understand, a holomorphic branch of $\sqrt z$ can only be found on the cut plane (without negative numbers) since the Argument function isn't continuous everywhere. Hence $ \cos\sqrt z$ is at least holomorphic on the same domain, but how to justify that it is actually holomorphic everywhere?
The two branches of $\sqrt{z}$ differ only by a sign, while the cosine function is even. Thus the ambiguity in the square root is undone by the application of the cosine.
Another way to see it is to use the power series $$\cos w=\sum_{n=0}^\infty \frac{(-1)^n w^{2n}}{(2n)!},$$ insert $w=\sqrt{z}$, and to get $$\cos \sqrt{z}=\sum_{n=0}^\infty \frac{(-1)^n z^{n}}{(2n)!}.$$
$w = \cos \sqrt{z}$
$w = \cos z^\frac{1}{2}$
$w = \cos \left (e^{\frac{1}{2} ( \ln |z| + i \, Argz + i 2n\pi)} \right )$ for all $n \in \mathbb{N}$
$w = \cos \left (e^{\frac{1}{2} (\ln |z| + i \, Argz)} e^{ i n \pi} \right )$ for all $n \in \mathbb{N}$
Recall that $e^{i n \pi} = \cos n \pi + i \sin n \pi$.
For $n = 2k$, $k \in \mathbb{N}$, we have $w = \cos \left ( e^{\frac{1}{2} (\ln |z| + i \, Argz)} \right )$
For $n = 2k+1$, $k \in \mathbb{N}$, we have $w = \cos \left ( - e^{\frac{1}{2} (\ln |z| + i \, Argz)} \right ) = \cos \left ( e^{\frac{1}{2} (\ln |z| + i \, Argz)} \right )$
Since $\cos$ is an even function, $\cos (-p) = \cos p$. For example, if $\sqrt{|z|} e^{\frac{1}{2} i \, Argz} = \pi$, then $w = \cos \pi = -1$, and $w = \cos \left (- \pi \right ) = -1$.
Thus, there is no issue of discontinuity like there is with $\sin \sqrt{z}$. There is no need to restrict the domain, hence no need for branch cuts. Therefore, you get analyticity.
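A quick numerical check (my own) confirms both arguments: the two square roots $\pm\sqrt z$ give the same cosine, and the composite matches the everywhere-convergent series $\sum_{n\ge0} (-1)^n z^n/(2n)!$ even for $z$ on the negative real axis.

```python
import cmath
from math import factorial

def cos_sqrt_series(z, terms=40):
    # Power series for cos(sqrt(z)): sum of (-1)^n z^n / (2n)!
    return sum((-1)**n * z**n / factorial(2 * n) for n in range(terms))

for z in (-4.0, 3 + 2j, -1 - 5j):
    w = cmath.sqrt(z)
    both_branches = abs(cmath.cos(w) - cmath.cos(-w))   # 0, since cosine is even
    vs_series = abs(cmath.cos(w) - cos_sqrt_series(z))  # 0, same function
    print(z, both_branches, vs_series)
```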
The rubber duck explains coherence spaces
I've spent a chunk of the past week, at least, trying to understand the idea of a
coherence space (or coherent space). This appears in Jean-Yves Girard's Proofs and Types, and it's a model of a data type. For example, the type of integers and the type of booleans can be modeled as coherence spaces.
The definition is one of those simple but bafflingly abstract ones that you often meet in mathematics: There is a set !!\lvert\mathcal{A}\rvert!! of
tokens, and the points of the coherence space !!\mathcal{A}!! (“cliques”) are sets of tokens. The cliques are required to satisfy two properties:
If !!a!! is a clique, and !!a'\subset a!!, then !!a'!! is also a clique.
Suppose !!\mathcal M!! is some family of cliques such that !!a\cup a'!! is a clique for each !!a, a'\in \mathcal M!!. Then !!\bigcup {\mathcal M}!! is also a clique.
To beginning math students it often seems like these sorts of definitions are generated at random. Okay, today we're going to study Eulerian preorders with no maximum element that are closed under finite unions; tomorrow we're going to study semispatulated coalgebras with countably infinite signatures and the weak Cosell property. Whatever, man.
I have a long article about this in progress, but I'll summarize: they are
never generated at random. The properties are carefully chosen because we have something in mind that we are trying to model, and until you understand what that thing is, and why someone thinks those properties are the important ones, you are not going to get anywhere.
So when I see something like this I must stop immediately and ask ‘wat’. I can try to come up with the explanation myself, or I can read on hoping for an explanation, or I can do some of each, but I am not going to progress very far until I understand what it is about. And I'm not sure anyone short of Alexander Grothendieck would have any more success trying to move on with the definition itself and nothing else.
Girard explains shortly after:
The aim is to interpret a type by a coherence space !!\mathcal{A}!!, and a
term of this type as a point [clique] of !!\mathcal{A}!!, infinite in
general…
Okay, fine. I understand the point of the project, although not why the definition is what it is. I know a fair amount about types. And Girard has given two examples: booleans and integers. But these examples are unusually simple, because none of the cliques has more than one element, and so the examples are not as illuminating as they might be.
Some of the ways I tried to press onward were:
Read ahead and see if there is more explanation. I tried this but I still wasn't getting it. The next section seemed clear: the cliques define a “coherence” relation on the tokens, from which the cliques can be recovered. Consider a graph, where the vertices are tokens and there is an edge !!a—a'!! exactly when !!\{a, a'\}!! is a clique; we say that !!a!! and !!a'!! are
coherent. Then the cliques of the coherence space are exactly the cliques of the graph; hence the name. The graph is called the web of the space, and from the web one can recover the original space.
But after that part came
stable functions, which I couldn't figure out, and I got stuck again.
Read ahead and see if there is a more complicated specific example.There wasn't.
Read ahead and see if any of the derived concepts are familiar, and if so then work backward. For instance, if I had been able to recognize that I already knew what stable functions were, I might have been able to leverage that into an understanding of what was going on with the coherence spaces. But for me they were just another problem of the same sort: what is a stable function supposed to be modeling?
Read someone else's explanation instead. I tried several without much success. They all seemed to be written for someone who already had a clue what was going on. (That is a large part of the reason I have written up this long and clueless explanation.)
Try to construct some examples and see if they make sense in the context of what comes later. For example, I know what the coherence space of booleans looks like because Girard showed me. Can I figure out the structure of the coherence space for the type of “wrapped booleans”?
-- (Haskell)
data WrappedBoolean = W Bool
Can I figure it out for the type of pairs of booleans?
-- (Haskell)
type BooleanPair = (Bool, Bool)
None of this was working. I had several different ideas about what the coherence spaces might look like for other types, but none of them seemed to fit with what Girard was doing. I couldn't come up with any consistent story.
So I prepared to ask on StackExchange, and I spent about an hour writing up my question, explaining all the things I had tried and what the problems were with each one. And as I drew near to the end of this, the clouds parted! I never had to post the question. I was in the middle of composing this paragraph:
In section 8.4 Girard defines a direct product of coherence spaces,
but it doesn't look like the direct product I need to get a product
type; it looks more like a disjoint union type. If the coherence space for
Pairbool is the square of the coherence space for !!{{\mathcal B}ool}!!, how? It
has 4 2-cliques, but if those are the total elements of !!{{\mathcal B}ool}^2!!,
then what do the 1-cliques mean?
I decided I hadn't made enough of an effort to understand the direct product. So even though I couldn't see how it could possibly give me anything like what I wanted, I followed its definition for !!{{\mathcal B}ool}^2!! — and the light came on.
Here's the puzzling coproduct-like definition of the product of two coherence spaces, from page 61:
If !!{\mathcal A}!! and !!{\mathcal B}!! are two coherence spaces, we define !!{\mathcal A}\&{\mathcal B}!! by:
!!|{\mathcal A}\&{\mathcal B}| = |{\mathcal A}| + |{\mathcal B}| = \{1\}×|{\mathcal A}| \cup
\{2\}×|{\mathcal B}|!!
That is, the tokens in the product space are literally the disjoint union of the tokens in the component spaces.
And the edges in the product's web are whatever they were in !!{\mathcal A}!!'s web (except lifted from !!|{\mathcal A}|!! to !!\{1\}×|{\mathcal A}|!!), whatever they were in !!{\mathcal B}!!'s web (similarly), and also there is an edge between every !!\langle1, a\rangle!! and each !!\langle2, b\rangle!!. For !!{{\mathcal B}ool}^2!! the web looks like this:
There is no edge between !!\langle 1, \text{True}\rangle!! and !!\langle 1, \text{False}\rangle!! because in !!{{\mathcal B}ool}!! there is no edge between !!\text{True}!! and !!\text{False}!!.
This graph has nine cliques. Here they are ordered by set inclusion:
(In this second diagram I have abbreviated the pair !!\langle1,\text{True}\rangle!! to just !!1T!!. The top nodes in the diagram are each labeled with a set of two ordered pairs.)
What does this mean? The ordered pairs of booleans are being represented by
functions. The boolean pair !!\langle x, y\rangle!! is represented by the function that takes as its argument a number, either 1 or 2, and then returns the corresponding component of the pair: the first component !!x!! if the argument was 1, and the second component !!y!! if the argument was 2.
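Out of curiosity I verified the clique count mechanically (a sketch of mine; the web is encoded as the coherence relation just described):

```python
from itertools import combinations

tokens = ["1T", "1F", "2T", "2F"]
# Coherent pairs: every <1,_> token is coherent with every <2,_> token,
# but 1T—1F and 2T—2F are incoherent (no edge within a component).
coherent = {frozenset(p) for p in [("1T", "2T"), ("1T", "2F"),
                                   ("1F", "2T"), ("1F", "2F")]}

def is_clique(s):
    return all(frozenset(p) in coherent for p in combinations(s, 2))

cliques = [set(s) for r in range(len(tokens) + 1)
           for s in combinations(tokens, r) if is_clique(s)]
print(len(cliques))  # 9: the empty clique, four singletons, four 2-cliques
```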
The nodes in the bottom diagram represent functions. The top row are fully-defined functions. For example, !!\{1F, 2T\}!! is the function with !!f(1) = \text{False}!! and !!f(2) = \text{True}!!, representing the boolean pair !!\langle\text{False}, \text{True}\rangle!!. Similarly, if we were looking at a space of infinite lists, we could consider it a function from !!\Bbb N!! to whatever the type of the list elements was. Then the top row of nodes in the coherence space would be infinite sets of pairs of the form !!\langle n, \text{(list element)}\rangle!!.
The lower nodes are still functions, but they are functions about which we have only incomplete information. The node !!\{2T\}!! is a function for which !!f(2) = \text{True}!!. But we don't yet know what !!f(1)!! is because we haven't yet tried to compute it. And the bottommost node !!\varnothing!! is a function where we don't know anything at all — yet. As we test the function on various arguments, we move up the graph, always following the edges. The lower nodes are approximations to the upper ones, made on the basis of incomplete information about what is higher up.
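The !!{{\mathcal B}ool}^2!! web is small enough to check all of this by brute force. Here is a sketch of mine (not code from the book): the tokens are the four index/value pairs, two distinct tokens are linked exactly when they come from different components, and the cliques are the pairwise-linked subsets.

```python
from itertools import combinations

# Tokens of the web of Bool^2: <component index, boolean value>
tokens = [(1, True), (1, False), (2, True), (2, False)]

def coherent(s, t):
    """Two distinct tokens are linked iff they come from different components."""
    return s[0] != t[0]

def cliques(tokens):
    """All subsets of tokens that are pairwise coherent (the points of the space)."""
    result = []
    for r in range(len(tokens) + 1):
        for subset in combinations(tokens, r):
            if all(coherent(s, t) for s, t in combinations(subset, 2)):
                result.append(set(subset))
    return result

cs = cliques(tokens)
print(len(cs))  # 9: the empty set, 4 partial functions, 4 total functions

# Cliques are downward closed: every subset of a clique (every approximant)
# is itself a clique, so we really can move up the diagram edge by edge.
clique_set = [frozenset(c) for c in cs]
for c in cs:
    for r in range(len(c) + 1):
        for sub in combinations(c, r):
            assert frozenset(sub) in clique_set
```

Running it confirms the nine cliques counted above.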
Now the importance of finite approximants on page 56 becomes clearer. !!{{\mathcal B}ool}^2!! is already finite. But in general the space is infinite because the type is functions on an infinite domain, or infinite lists, or something of that sort. In such a space we can't get all the way to the top row of nodes, because to do that we would have to call the function on all its possible arguments, or examine every element of the list, which is impossible. Girard says “Above all, there are enough finite approximants to a.” I didn't understand what he meant by “enough”. But what he means is that each clique !!a!! is the union of its finite approximants: each bit of information in the function !!a!! is obtainable from some finite approximation of !!a!!. The “stable functions” of section 8.3 start to become less nebulous also.
I had been thinking that the !!\varnothing!! node was somehow like the !!\bot!! element in a Scott domain, and then I struggled to identify anything like !!\langle \text{False}, \bot\rangle!!. It looks at first like you can do it somehow, because there are the right number of nodes at the middle level. Trouble arises in other coherence spaces.
For the WrappedBoolean type, for example, the type has four elements: !!W\ \text{True}, W\ \text{False}, W\ \bot,!! and !!\bot!!. I think the coherence space for WrappedBoolean is just like the one for !!{{\mathcal B}ool}!!:
Presented with a value from WrappedBoolean, you don't initially know what it is. Then you examine it, and you know whether it is !!W\ \text{True}!! or !!W\ \text{False}!!. You are now done.
I think there isn't anything like !!\bot!! or !!W\ \bot!! in the coherence space. Or maybe they are there but sharing the !!\varnothing!! node. But I think more likely partial objects will appear in some other way.
Whew! Now I can move along.
(If you don't understand why “rubber duck”, Wikipedia explains:
Many programmers have had the experience of explaining a problem to
someone else, possibly even to someone who knows nothing about
programming, and then hitting upon the solution in the process of
explaining the problem.
[“Rubber duck”] is a reference to a story in the book
The Pragmatic
Programmer in which a programmer would carry around a rubber duck
and debug their code by forcing themselves to explain it,
line-by-line, to the duck.
I spent a week on this but didn't figure it out until I tried formulating my question for StackExchange. The draft question, never completed, is here if for some reason you want to see what it looked like.)
[Other articles in category /math/logic] permanent link
For discussion of specific patterns or specific families of patterns, both newly-discovered and well-known.
gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm Kazyan wrote:
Component found in a CatForce result:
Code: Select all
x = 42, y = 67, rule = LifeHistory
A$.2A$2A5$7.A$8.2A$7.2A19$24.2A$24.2A2$39.A$37.A3.A$36.A$36.A4.A$36.
5A$14.2A.2D$13.A.AD.D$13.A$12.2A25$5.3A$7.A$6.A!
That can be done with 4 gliders, although it's still interesting that it was found accidentally:
Code: Select all
x = 21, y = 30, rule = B3/S23
10b2o$11bo$11bobo$12b2o14$10bo4bo$10b2ob2o$9bobo2b2o8$2o17bo$b2o15b2o$
o17bobo!
What were you looking for, exactly? A MWSS-to-herschel converter?
Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm
gmc_nxtman wrote:What were you looking for, exactly? A MWSS-to-herschel converter?
I'd settle for any signal, but yes. The current Orthogonoids have geometry challenges that pad their size, and the limiting factor in their repeat time is the syringe. Repeat time is more important for single-channel operations than probably any other constructor design, so I'm trying to give that fire some better fuel.
Tanner Jacobi
mniemiec
Posts: 1055 Joined: June 1st, 2013, 12:00 am
gmc_nxtman wrote:4-glider trans-boat with tail edgeshoot: ...
Even though it was already buildable from 4 gliders, this method improves syntheses of one still-life and 18 pseudo-objects.
Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm
Potential component spotted in a failed eating reaction:
Code: Select all
x = 21, y = 17, rule = B3/S23
o$3o$3bo$2b2o2$6bo$5bobo2$5b3o$19bo$8bo9bo$18b3o3$15bo$14b2o$14bobo!
Tanner Jacobi
gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm
Unusual still life in 8 gliders:
Code: Select all
x = 18, y = 26, rule = B3/S23
11bo$10bobo$10b2o2$10bo$9b2o$9bobo5$obo$b2o$bo2$3b2o$4b2o$3bo$9bo$9b2o
$8bobo2$15b3o$7b2o6bo$6bobo7bo$8bo!
EDIT:
This also gives 21.41458 in 9 gliders.
Kazyan Posts: 864 Joined: February 6th, 2014, 11:02 pm
Potentially grow out a BTS into a structure like a snorkel loop:
Code: Select all
x = 15, y = 25, rule = B3/S23
2b2obo$3bob3o$bobo4bo$ob2ob2obo$o4bobo$b3obo$3bob2o3$10b3o2$8bo5bo$8bo
5bo$8bo5bo2$10b3o6$11b2o$10bo2bo$11b2o$11bo!
I suspect that the drifter catalyst and its variants also have odd transformations, since both objects are robust.
Tanner Jacobi
gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm
Haven't seen a component quite like this before:
Code: Select all
x = 27, y = 18, rule = B3/S23
20bobo$20b2o$21bo7$15bo$15bobo$15b2o2$3o9bobo$b3o9b2o$13bo10b3o$24bo$
25bo!
EDIT:
Better version:
Code: Select all
x = 15, y = 11, rule = B3/S23
13bo$12bo$12b3o3$7bo$6bobo$6bobo2b2o$7bo2b2o$3o9bo$b3o!
Gamedziner
Posts: 796 Joined: May 30th, 2016, 8:47 pm Location: Milky Way Galaxy: Planet Earth
p8 c/2 derived from blinker puffer 1:
Code: Select all
2bo$o3bo$5bo$o4bo$b5o5$b2o2b2o$bob2ob2o$2b5o$3b3o$4bo$2bo3bo$7bo$2bo4bo$3b5o!
Code: Select all
x = 81, y = 96, rule = LifeHistory
58.2A$58.2A3$59.2A17.2A$59.2A17.2A3$79.2A$79.2A2$57.A$56.A$56.3A4$27.
A$27.A.A$27.2A21$3.2A$3.2A2.2A$7.2A18$7.2A$7.2A2.2A$11.2A11$2A$2A2.2A
$4.2A18$4.2A$4.2A2.2A$8.2A!
mniemiec
Posts: 1055 Joined: June 1st, 2013, 12:00 am
This is known. It can be easily synthesized from 10 gliders:
Code: Select all
x = 88, y = 26, rule = B3/S23
34bobo$35boo$35bo3$45bo$bo44boo$bbo42boo$3o$20boo18boo6bo$bbo17bobo17b
obo4bo$boo18bo19bo5b3o$bobo$$43boo$44boo7b3o22bo4b3o$31b3o9bo9bobbo20b
3o3bobbo$33bo19bo16b3o3boobo3bo$32bo20bo3bo12bobbobb3o4bo3bo$53bo16bo
6boo4bo$54bobo13bo3bo3bo5bobo$70bo$71bobo$77bo$78bo$77bo!
gameoflifemaniac Posts: 774 Joined: January 22nd, 2017, 11:17 am Location: There too
Code: Select all
x = 17, y = 17, rule = B3/S23
8bo$7bobo$4bo2bobo2bo$3bobobobobobo$2bo2bobobobo2bo$3b2o2bobo2b2o$7bob
o$b6o3b6o$o15bo$b6o3b6o$7bobo$3b2o2bobo2b2o$2bo2bobobobo2bo$3bobobobob
obo$4bo2bobo2bo$7bobo$8bo!
Code: Select all
x = 17, y = 17, rule = B3/S23
8bo$7bobo$4bo2bobo2bo$3bobobobobobo$2bo2bobobobo2bo$3b2o2bobo2b2o$7bob
o$b6o3b6o$o7bo7bo$b6o3b6o$7bobo$3b2o2bobo2b2o$2bo2bobobobo2bo$3bobobob
obobo$4bo2bobo2bo$7bobo$8bo!
dvgrn Moderator Posts: 5874 Joined: May 17th, 2009, 11:00 pm Location: Madison, WI Contact:
While incompetently welding a tremi-Snark this evening...
Code: Select all
x = 23, y = 31, rule = LifeHistory
$3.4B$4.4B$5.4B5.2A$6.4B4.2A$7.9B$8.6B$8.4BA3B$6.7BA2B$6.5B3A2B$6.11B
$4.2AB.10B$4.2AB3.B2A4B$9.2B2A5B$10.8B$10.6B$11.5B$5.2A5.3B$5.A.A3.5B
$7.A2.B2AB2A$7.2A2.2A.AB2.2A$10.B3.A.A2.A$5.10A.2A$5.A$6.12A$17.A$8.
7A$8.A5.A$11.A$10.A.A$11.A!
... I ended up with a p3 that I didn't really want.
Doesn't seem worth keeping it around until people are synthesizing all the 58-bit p3's, but it seemed mildly entertaining anyway.
A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: dvgrn wrote:
Code: Select all
x = 23, y = 31, rule = LifeHistory
$3.4B$4.4B$5.4B5.2A$6.4B4.2A$7.9B$8.6B$8.4BA3B$6.7BA2B$6.5B3A2B$6.11B
$4.2AB.10B$4.2AB3.B2A4B$9.2B2A5B$10.8B$10.6B$11.5B$5.2A5.3B$5.A.A3.5B
$7.A2.B2AB2A$7.2A2.2A.AB2.2A$10.B3.A.A2.A$5.10A.2A$5.A$6.12A$17.A$8.
7A$8.A5.A$11.A$10.A.A$11.A!
Pointless reduction:
Code: Select all
x = 17, y = 25, rule = LifeHistory
4B$.4B$2.4B5.2A$3.4B4.2A$4.9B$5.6B$5.4BA3B$3.7BA2B$3.5B3A2B$3.11B$.2A
B.10B$.2AB3.B2A4B$6.2B2A5B$7.8B$7.6B$8.5B$9.3B$8.5B$7.B2AB2A$4.2A2.2A
.AB2.2A$4.A2.B3.A.A2.A$5.7A.3A2$7.2A.4A$7.2A.A2.A!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm
Can someone salvage this? (Look at T≈20)
Code: Select all
x = 16, y = 12, rule = B3/S23
bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo!
BlinkerSpawn Posts: 1905 Joined: November 8th, 2014, 8:48 pm Location: Getting a snacker from R-Bee's gmc_nxtman wrote:
Can someone salvage this? (Look at T≈20)
Code: Select all
x = 16, y = 12, rule = B3/S23
bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo!
The red pattern inserted at gen 16 would do it:
Code: Select all
x = 17, y = 14, rule = LifeHistory
13.D$11.2D$.A14.D$2.A8.5D$3A7$3.2A5.2A2.2A$4.2A3.A.A.2A$3.A7.A3.A!
AbhpzTa
Posts: 475 Joined: April 13th, 2016, 9:40 am Location: Ishikawa Prefecture, Japan gmc_nxtman wrote:
Can someone salvage this? (Look at T≈20)
Code: Select all
x = 16, y = 12, rule = B3/S23
bo$2bo$3o7$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo!
Code: Select all
x = 27, y = 19, rule = B3/S23
16bo$4bo9b2o$5bo9b2o$3b3o$22bo$20b2o$21b2o$bo$2bo$3o2$25b2o$24b2o$26bo
3$3b2o5b2o2b2o$4b2o3bobob2o$3bo7bo3bo!
Iteration of sigma(n)+tau(n)-n [sigma(n)+tau(n)-n : OEIS A163163] (e.g. 16,20,28,34,24,44,46,30,50,49,11,3,3, ...) :
965808 is period 336 (max = 207085118608).
gmc_nxtman Posts: 1147 Joined: May 26th, 2015, 7:20 pm
Reduced an old synthesis from eleven (I think) down to eight gliders:
Code: Select all
x = 34, y = 34, rule = B3/S23
10bo$bobo7bo19bobo$2b2o5b3o19b2o$2bo29bo3$32bo$30b2o$31b2o3$12bo$12bob
o$12b2o11$14b2o$14bobo$14bo12b2o$27bobo$27bo3$b2o$obo$2bo!
Extrementhusiast Posts: 1796 Joined: June 16th, 2009, 11:24 pm Location: USA
Glider + two-glider loaf/tub/block/blinker constellation lasts for over 10K gens:
Code: Select all
x = 16, y = 13, rule = B3/S23
3bobo$3b2o$4bo9bo$13bobo$4b2o8bo$4b2o3$6bo$5bobo$4bo2bo$5b2o$3o!
I Like My Heisenburps! (and others)
Entity Valkyrie Posts: 247 Joined: November 30th, 2017, 3:30 am
A glider synthesis of Sawtooth 311
Code: Select all
x = 193, y = 140, rule = B3/S23
40bo$41bo$39b3o$72bo$70b2o$71b2o19$32bobo$33b2o$33bo30bo$63bo$63b3o2$
75bo$74b2o$24bo49bobo$25b2o$24b2o$49bo$47b2o$48b2o2$2bo$obo$b2o7$34b2o
$35b2o$34bo3$67b2o$67bobo$53b2o12bo$53bobo$53bo6$27b2o93b2o$26bobo93bo
bo$28bo93bo2$53b3o$53bo74bobo$54bo73b2o$4bo124bo$4b2o172b2o$3bobo171b
2o$174b2o3bo$31b2o140bobo$30b2o98b2o43bo$32bo97bobo36bobo$130bo3bobo
10bo22b2o$135b2o11b2o20bo$135bo4bobo4b2o34bobo$115b2o21bobobobo6b2o20b
o9b2o$116b2o21b2ob2o6b2o22bo9bo$115bo36bo19b3o2$120bo23b2ob2o23b3o5bo$
120b2o21bobobobo24bo4bo$119bobo13b2ob2o5bobo25bo5b3o$134bobobobo$115bo
20bobo13b2o25b3o$116b2o12b2o19b2o22bo3bo$115b2o14b2o20bo22bo3bo$130bo
35bo4bo2b3o$164bobo2b2o$113b3o49b2o3b2o2b3o4bobo$115bo60bo4b2o$114bo
60bo6bo$179bo$126bo25bo18bo7b2o$125b2o23b2o19b2o5bobo$125bobo23b2o17bo
bo$146b2o$147b2o$121b2o23bo$120b2o$122bo2$156b2o$142bobo10bobo$134bobo
5b2o13bo$134b2o7bo$135bo$132bo6bo$132b2o4bo$131bobo4b3o2$138b3o43bo$
134bo3bo22bo20b2o$135bo3bo22b2o19b2o$133b3o25b2o13bobo$174bobobobo7bo$
133b3o5bo25bobo5b2ob2o6bobo$135bo4bo24bobobobo15b2o3bo$63b3o68bo5b3o
23b2ob2o19b2o$63bo127b2o$64bo75b3o19bo$130bo9bo22b2o6b2ob2o$130b2o9bo
20b2o6bobobobo$129bobo34b2o4bobo$144bo20b2o$143b2o22bo12b2o$143bobo33b
2o$181bo2$139bo$138bo$138b3o2$137bo$136b2o$136bobo!
Entity Valkyrie Posts: 247 Joined: November 30th, 2017, 3:30 am
This Simkin-glider-gun-like object actually produces two MWSSes:
Code: Select all
x = 53, y = 17, rule = B3/S23
44b2o5b2o$44b2o5b2o2$47b2o$47b2o$12bo$12b3o$12bobo$14bo4$4b2o$4b2o2$2o
5b2o$2o5b2o!
mniemiec
Posts: 1055 Joined: June 1st, 2013, 12:00 am
Entity Valkyrie wrote:A glider synthesis of Sawtooth 311 ...
It's nice to have syntheses like this. Unfortunately, in this case, there are several pairs of gliders that would have had to pass through each other earlier (i.e. they would have already collided before this phase). To make sure this doesn't happen, it is usually a good idea to backtrack all the gliders a certain amount (e.g. far enough away that they are in four distinct clouds, one coming from each direction) and then run them to see if any unwanted interactions occur first.
Rhombic Posts: 1056 Joined: June 1st, 2013, 5:41 pm
This component turned up (the reverse component would have been more useful). Found accidentally, though.
Code: Select all
x = 12, y = 14, rule = B3/S23
11bo$9b3o$8bo$9bo$6b4o$6bo$2b2o3b3o$2b2o5bo$9bobo$2bo7b2o$bobo$bob2o$o
$2bo!
Code: Select all
x = 13, y = 15, rule = B3/S23
7bo$7b3o$10bo$2b2ob3o2bo$o2bobo2bob2o$2o4b3o3bo$9bobo$3b2o3b2ob2o$3b2o
2$3bo$2bobo$2bob2o$bo$3bo!
Extrementhusiast Posts: 1796 Joined: June 16th, 2009, 11:24 pm Location: USA
Switch engine turns two rows of beehives into two rows of table on tables:
Code: Select all
x = 88, y = 96, rule = B3/S23
13b2o$12bo2bo$13b2o6$21b2o$8b3o9bo2bo$9bo2bo8b2o$13bo$10bobo4$29b2o$
28bo2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$
45b2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo
$24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o
$68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bo
bo$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bo
bo$64bobo$65bo5$73bo$72bobo$72bobo$73bo!
I Like My Heisenburps! (and others)
KittyTac Posts: 533 Joined: December 21st, 2017, 9:58 am Extrementhusiast wrote:
Switch engine turns two rows of beehives into two rows of table on tables:
Code: Select all
x = 88, y = 96, rule = B3/S23
13b2o$12bo2bo$13b2o6$21b2o$8b3o9bo2bo$9bo2bo8b2o$13bo$10bobo4$29b2o$
28bo2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$
45b2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo
$24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o
$68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bo
bo$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bo
bo$64bobo$65bo5$73bo$72bobo$72bobo$73bo!
And then explodes. I wonder if there's a way to eat it at the end.
dvgrn Moderator Posts: 5874 Joined: May 17th, 2009, 11:00 pm Location: Madison, WI Contact: KittyTac wrote:
Extrementhusiast wrote:Switch engine turns two rows of beehives into two rows of table on tables...
And then explodes. I wonder if there's a way to eat it at the end.
Yeah, switch engine/swimmer eaters definitely aren't a problem:
Code: Select all
x = 96, y = 98, rule = B3/S23
13b2o$12bo2bo$13b2o6$8b3o10b2o$20bo2bo$8bo3bo8b2o$9b4o$12bo4$29b2o$28b
o2bo$29b2o2$bo$obo$obo$bo$37b2o$36bo2bo$37b2o2$9bo$8bobo$8bobo$9bo$45b
2o$44bo2bo$45b2o2$17bo$16bobo$16bobo$17bo$53b2o$52bo2bo$53b2o2$25bo$
24bobo$24bobo$25bo$61b2o$60bo2bo$61b2o2$33bo$32bobo$32bobo$33bo$69b2o$
68bo2bo$69b2o2$41bo$40bobo$40bobo$41bo$77b2o$76bo2bo$77b2o2$49bo$48bob
o$48bobo$49bo$85b2o$84bo2bo$85b2o2$57bo$56bobo$56bobo$57bo5$65bo$64bob
o$64bobo$65bo5$73bo$72bobo$72bobo17b2o$73bo18bo$93b3o$95bo!
#C [[ AUTOSTART STEP 9 THEME 2 ]]
kiho park
Posts: 50 Joined: September 24th, 2010, 12:16 am
I found this c/3 diagonal fuse while searching c/3 long barge crawler.
Code: Select all
x = 10, y = 11, rule = B3/S23:T40,27
8b2o$7bo2$6bobo$5bo2bo$4bobo$3bobo$2bobo$bobo$obo$bo!
I am preparing for my exam in Formal languages and Automata theory and I'm looking at some old exam questions right now. I need help with the following question:
For each of the following languages answer whether it is regular, context-free but not regular, or not context-free. A brief, informal explanation is sufficient.
$$ L_3 = \left\{ w \in \{a,b,c,d\}^* \Bigg| \begin{array}{l} \text{\(w\) does not have a substring \(aba\),} \\ \text{each \(a\) in \(w\) is immediately followed by \(b\),} \\ \text{and \(\#c(w)\) is odd} \end{array} \right\} $$ $$ \begin{align} L_4 &= \{ a^ib^jc^ka^ib^l \mid j \gt l \text{ and } i,l,k \gt 0 \} \\ L_5 & \text{ is the image of \(L_4\) under the homomorphism } h:\{a,b,c,d\}^* \to \{0,1,2\}^* \\ & \text{ such that } h(a) = h(b) = 10 \text{ and } h(c) = 210 \text{ and } h(d) = \epsilon \\ L_6 & \text{ is the image of \(L_4\) under the homomorphism } h:\{a,b,c,d\}^* \to \{0,1,2\}^* \\ & \text{ such that } h(a) = h(b) = 210 \text{ and } h(c) = h(d) = \epsilon \\ \end{align} $$
Here is my attempt:
$L_3$ is regular. It's the intersection of three regular languages. Regular languages are closed under intersection, so the resulting language is regular. The language where $w$ does not have a substring $aba$ is the complement of the language of all strings containing $aba$; regular languages are closed under complement, so the resulting language is regular. The language with an odd number of $c$'s is regular. Hence the resulting language, the intersection of these languages, is regular.
$L_4$ is not context-free. When reading the first $a$'s we push them onto the stack. Then we read the first $b$'s and push them onto the stack. Then we read the $c$'s. When we now read the second group of $a$'s, we are not able to compare their number with that of the first group of $a$'s, because the $b$'s are on top of the stack, and if we pop them then we will not be able to compare the number of $b$'s at the beginning and at the end. Hence, $L_4$ is not context-free.
$L_5$ is context-free but not regular. The language in question looks like this: $$ (10)^{i+j}\, (210)^k\, (10)^{i+l} \text{ where } j \gt l \text{ and } i,l,k \gt 0 $$ A grammar can be constructed which generates at least one more $10$ at the beginning of the string than after the $210$'s.
$L_6$ is regular because it's given by the regular expression: $$(210)^{2i + j + l} \text{ where } 2i+j+l \text{ is any number } \gt 0$$
Is this correct? Note that informal explanations are sufficient in the answer and that no grammars have to be given.
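(Not needed for the exam, but the closure argument for $L_3$ can be sanity-checked by brute force. Simulating the three small DFAs in parallel is exactly the product construction; below is a sketch of mine that compares it against a direct check of the three conditions on all short strings.)

```python
from itertools import product

def in_L3_direct(w):
    """Check the three defining conditions of L3 directly."""
    return ('aba' not in w
            and all(w[i+1:i+2] == 'b' for i in range(len(w)) if w[i] == 'a')
            and w.count('c') % 2 == 1)

def in_L3_product(w):
    # DFA 1: progress toward the forbidden substring "aba" (3 = dead state).
    s1 = 0
    # DFA 2: 0 = ok, 1 = just saw an 'a' (must see 'b' next), 2 = dead.
    s2 = 0
    # DFA 3: parity of c's.
    s3 = 0
    for ch in w:
        if s1 == 1 and ch == 'b': s1 = 2
        elif s1 == 2 and ch == 'a': s1 = 3
        elif s1 != 3: s1 = 1 if ch == 'a' else 0
        if s2 == 1: s2 = 0 if ch == 'b' else 2
        elif s2 == 0 and ch == 'a': s2 = 1
        s3 ^= (ch == 'c')
    return s1 != 3 and s2 == 0 and s3 == 1

# The product automaton accepts exactly the intersection: compare on all
# strings over {a,b,c,d} up to length 6.
for n in range(7):
    for tup in product('abcd', repeat=n):
        w = ''.join(tup)
        assert in_L3_direct(w) == in_L3_product(w), w
print("product DFA agrees with the definition on all strings up to length 6")
```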
Archive:
Subtopics:
Comments disabled
Tue, 31 Oct 2017
[ The Atom and RSS feeds have done an unusually poor job of preserving the mathematical symbols in this article. It will be much more legible if you read it on my blog. ]
Lately I've been enjoying
He continues a little later:
As you can see, it is not written in the usual dry mathematical-text style, presenting the material as a perfect and aseptic distillation of absolute truth. Instead, one sees the history of logic, the rise and fall of different theories over time, the interaction and relation of many mathematical and philosophical ideas, and Girard's reflections about it all. It is a transcription of a lecture series, and reads like one, including all of the speaker's incidental remarks and offhand musings, but written down so that each can be weighed and pondered at length. Instead of wondering in the moment what he meant by some intriguing remark, then having to abandon the thought to keep up with the lecture, I can pause and ponder the significance. Girard is really, really smart, and knows way more about logic than I ever will, and his offhand remarks reward this pondering. The book is
The book really gets going with its discussion of Gentzen's sequent calculus in chapter 3.
Between around 1890 (when Peano and Frege began to liberate logic from its medieval encrustations) and 1935, when the sequent calculus was invented, logical proofs were mainly in the “Hilbert style”. Typically there were some axioms, and some rules of deduction by which the axioms could be transformed into other formulas. A typical example consists of the axioms $$A\to(B\to A)\\(A \to (B \to C)) \to ((A \to B) \to (A \to C)) $$(where !!A, B, C!! are understood to be placeholders that can be replaced by any well-formed formulas) and the deduction rule of modus ponens: from !!A!! and !!A\to B!!, deduce !!B!!.
In contrast, sequent calculus has few axioms and many deduction rules. It deals with sequents: expressions of the form !!Γ ⊢ Δ!!, where !!Γ!! and !!Δ!! are lists of formulas.
A typical deductive rule in sequent calculus is:
$$ \begin{array}{c} Γ ⊢ A, Δ \qquad Γ ⊢ B, Δ \\ \hline Γ ⊢ A ∧ B, Δ \end{array} $$
Here !!Γ!! and !!Δ!! represent any lists of formulas, possibly empty. The premises of the rule are !!Γ ⊢ A, Δ!! and !!Γ ⊢ B, Δ!!.
From these premises, the rule allows us to deduce !!Γ ⊢ A ∧ B, Δ!!.
The only axioms of sequent calculus are utterly trivial:
$$ \begin{array}{c} \phantom{A} \\ \hline A ⊢ A \end{array} $$
There are no premises; we get this deduction for free: if we can prove !!A!!, we can prove !!A!!. (!!A!! here is a metavariable that can be replaced with any well-formed formula.)
One important point that Girard brings up, which I had never realized despite long familiarity with sequent calculus, is the symmetry between the left and right sides of the turnstile ⊢. As I mentioned, the interpretation of !!Γ ⊢ Δ!! I had been taught was that it means that if every formula in !!Γ!! is provable, then some formula in !!Δ!! is provable. But instead let's focus on just one of the formulas !!A!! on the right-hand side, hiding in the list !!Δ!!. The sequent !!Γ ⊢ Δ, A!! can be understood to mean that to prove !!A!!, it suffices to prove all of the formulas in !!Γ!!, and to disprove all of the formulas in !!Δ!!.
The all-some correspondence, which had previously caused me to wonder why it was that way and not something else, perhaps the other way around, has turned into a simple relationship about logical negation: the formulas on the left are positive, and the ones on the right are negative.[2] With this insight, the sequent calculus negation laws become not merely simple but trivial:
$$ \begin{array}{cc} \begin{array}{c} Γ, A ⊢ Δ \\ \hline Γ ⊢ \lnot A, Δ \end{array} & \qquad \begin{array}{c} Γ ⊢ A, Δ \\ \hline Γ, \lnot A ⊢ Δ \end{array} \end{array} $$
For example, in the right-hand deduction: what is sufficient to prove !!A!! is also sufficient to disprove !!¬A!!.
(Compare also the rule I showed above for ∧: It now says that if proving everything in !!Γ!! and disproving everything in !!Δ!! is sufficient for proving !!A!!, and likewise sufficient for proving !!B!!, then it is also sufficient for proving !!A\land B!!.)
But none of that was what I planned to discuss; this article is (intended to be) about sequent calculus's “cut rule”.
I never really appreciated the cut rule before. Most of the deductive rules in the sequent calculus are intuitively plausible and so simple and obvious that it is easy to imagine coming up with them oneself.
But the cut rule is more complicated than the rules I have already shown. I don't think I would have thought of it easily:
$$ \begin{array}{c} Γ ⊢ A, Δ \qquad Λ, A ⊢ Π \\ \hline Γ, Λ ⊢ Δ, Π \end{array} $$
(Here !!A!! is a formula and !!Γ, Δ, Λ, Π!! are lists of formulas, possibly empty lists.)
Girard points out that the cut rule is a generalization of modus ponens: taking !!Γ, Δ, Λ!! to be empty and !!Π = \{B\}!! we obtain:
$$ \begin{array}{c} ⊢ A \qquad A ⊢ B \\ \hline ⊢ B \end{array} $$
The cut rule is also a generalization of the transitivity of implication:
$$ \begin{array}{c} X ⊢ A \qquad A ⊢ Y \\ \hline X ⊢ Y \end{array} $$
Here we took !!Γ = \{X\}, Π = \{Y\}!!, and !!Δ!! and !!Λ!! empty.
This all has given me a much better idea of where the cut rule came from and why we have it.
In sequent calculus, the deduction rules all come in pairs. There is a rule about introducing ∧, which I showed before. It allows us to construct a sequent involving a formula with an ∧, where perhaps we had no ∧ before. (In fact, it is the only way to do this.) There is a corresponding rule (actually two rules) for getting rid of ∧ when we have it and we don't want it:
$$ \begin{array}{cc} \begin{array}{c} Γ ⊢ A\land B, Δ \\ \hline Γ ⊢ A, Δ \end{array} & \qquad \begin{array}{c} Γ ⊢ A\land B, Δ \\ \hline Γ ⊢ B, Δ \end{array} \end{array} $$
Similarly there is a rule (actually two rules) about introducing !!\lor!! and a corresponding rule about eliminating it.
The cut rule seems to lie outside this classification. It is not paired.
But Girard showed me that it is paired after all. The axiom
$$ \begin{array}{c} \phantom{A} \\ \hline A ⊢ A \end{array} $$
can be seen as an introduction rule for a pair of !!A!!s, one on each side of the turnstile. The cut rule is the corresponding rule for eliminating !!A!! from both sides.
Sequent calculus proofs are much easier to construct than Hilbert-style proofs. Suppose one wants to prove !!B!!. In a Hilbert system the only deduction rule is modus ponens, which requires that we first prove !!A\to B!! and !!A!! for some !!A!!. But what !!A!! should we choose? It could be anything, and we have no idea where to start or how big it could be. (If you enjoy suffering, try to prove the simple theorem !!A\to A!! in the Hilbert system I described at the beginning of the article.) (Solution)
In sequent calculus, there is only one way to prove each kind of thing, and the premises in each rule are simply related to the consequent we want. Constructing the proof is mostly a matter of pushing the symbols around by following the rules to their conclusions. (Or, if this is impossible, one can conclude that there is no proof, and why.[3]) Construction of proofs can now be done entirely mechanically!
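To see just how mechanical this is, here is a toy backward-chaining prover of mine (a sketch, not code from the book) for the cut-free propositional rules. It decomposes whichever compound formula it finds, branching where a rule has two premises, and succeeds when it reaches the trivial axiom with the same formula on both sides.

```python
def provable(gamma, delta):
    """Is the sequent gamma |- delta provable without cut?
    Atoms are strings; compounds are ('not', A), ('and', A, B),
    ('or', A, B), or ('imp', A, B)."""
    # Axiom: some formula appears on both sides of the turnstile.
    if set(gamma) & set(delta):
        return True
    # Decompose a compound formula on the left.
    for i, f in enumerate(gamma):
        if isinstance(f, tuple):
            rest, op = gamma[:i] + gamma[i+1:], f[0]
            if op == 'not':
                return provable(rest, delta + [f[1]])
            if op == 'and':
                return provable(rest + [f[1], f[2]], delta)
            if op == 'or':
                return provable(rest + [f[1]], delta) and provable(rest + [f[2]], delta)
            if op == 'imp':
                return provable(rest, delta + [f[1]]) and provable(rest + [f[2]], delta)
    # Decompose a compound formula on the right.
    for i, f in enumerate(delta):
        if isinstance(f, tuple):
            rest, op = delta[:i] + delta[i+1:], f[0]
            if op == 'not':
                return provable(gamma + [f[1]], rest)
            if op == 'and':
                return provable(gamma, rest + [f[1]]) and provable(gamma, rest + [f[2]])
            if op == 'or':
                return provable(gamma, rest + [f[1], f[2]])
            if op == 'imp':
                return provable(gamma + [f[1]], rest + [f[2]])
    # Only atoms remain and none matched: no proof.
    return False

print(provable([], [('imp', 'A', 'A')]))          # True:  |- A -> A
print(provable([], [('or', 'A', ('not', 'A'))]))  # True:  |- A v ~A
print(provable([], ['A']))                        # False: |- A
```

Because every rule used here is invertible, the greedy search needs no backtracking over rule choices; it only branches on two-premise rules.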
Except! The cut rule spoils this: the cut formula !!A!! appears in the premises but not in the conclusion, so in working backwards we would have to guess !!A!!, and as before it could be anything.
The good news is that Gentzen, the inventor of sequent calculus, showed that one can dispense with the cut rule: it is unnecessary.
Gentzen's demonstration of this shows how one can take any proof that involves the cut rule, and algorithmically eliminate the cut rule from it to obtain a proof of the same result that does not use cut. Gentzen called this the “Hauptsatz” (“principal theorem”) and rightly so, because it reduces construction of logical proofs to an algorithm and is therefore the ultimate basis for algorithmic proof theory.
The bad news is that the cut-elimination process can super-exponentially increase the size of the proof, so it does not lead to a practical algorithm for finding proofs. A large part of the blowup comes from the contraction rules, which allow a formula that appears twice on one side of a sequent to be collapsed into a single copy:
$$ \begin{array}{cc} \begin{array}{c} Γ, A, A ⊢ Δ \\ \hline Γ, A ⊢ Δ \end{array} & \qquad \begin{array}{c} Γ ⊢ A, A, Δ \\ \hline Γ ⊢ A, Δ \end{array} \end{array} $$
And suddenly Girard's invention of linear logic made sense to me. In linear logic, contraction is forbidden; one must use each formula in one and only one deduction.
Previously it had seemed to me that this was a pointless restriction. Now I realized that it was no more of a useless hair shirt than the intuitionistic rejection of proof by contradiction: not a stubborn refusal to use an obvious tool of reasoning, but a restriction of proofs to produce a more meaningful result.
The book is going to get into linear logic later in the next chapter. I have read descriptions of linear logic before, but never understood what it was up to. (It has two logical and operators, and two logical or operators; why?) But I am sure Girard will explain it marvelously. |
MathRevolution wrote:
If a smaller circle is inscribed in an equilateral triangle and a larger circle is circumscribed about the triangle, as shown in the figure above, what is the ratio of the smaller circle's area to the larger circle's area?
A. 1:2
B. 1:√3
C. 1:3
D. 1:4
E. 1:5
Attachment: gggggg.jpg
The equilateral triangle can be used for a quick answer.
An equilateral triangle can be divided, by its three medians, into 6 equal 30-60-90 triangles. Use one of them.
1) Draw two lines
Drop an altitude from B to the base of the triangle, to X.
Then draw a line between O and the triangle's vertex on the right, to Y.
2) Assign a side length to OX, and derive OY from 30-60-90 right triangle properties
Triangle OXY is a 30-60-90 right triangle, with sides in ratio \(x: x\sqrt{3}: 2x\)
OX is the small circle's radius
OY is the large circle's radius
Assign a value to OX*: let OX = 2
By properties of a 30-60-90 right triangle, if OX = 2, OY = 4
3) Find areas of circles, then the ratio needed
Area of small circle: \(\pi*r^2 = 4\pi\)
Area of large circle: \(\pi*r^2 = 16\pi\)
Ratio of small circle's area to large circle's area?
\(\frac{4\pi}{16\pi} = \frac{1}{4} = 1:4\)
ANSWER D
*Or let OX = \(x\). Then OY = \(2x\)
Small circle's area: \(x^2\pi\)
Large circle's area: \(4x^2\pi\)
Ratio of small to large areas is \(1:4\)
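A quick numeric cross-check (not needed for the test, just reassuring): for an equilateral triangle with side \(s\), the inradius is \(\frac{s}{2\sqrt{3}}\) and the circumradius is \(\frac{s}{\sqrt{3}}\), which matches OY = 2·OX above.

```python
import math

s = 6.0                         # any side length works; the ratio is scale-free
r_in = s / (2 * math.sqrt(3))   # inradius of an equilateral triangle
r_out = s / math.sqrt(3)        # circumradius, twice the inradius

small = math.pi * r_in ** 2
large = math.pi * r_out ** 2
print(small / large)            # approximately 0.25, i.e. the ratio 1:4
```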
I have searched but have not found a solution to the following problem. I am trying to shift the location of a dot accent over a Greek letter using the textgreek package. However, the dot is shifted left of the letter. I've seen solutions to this problem in math mode; however, I would like to use the Greek letter in text mode. Any help would be greatly appreciated.
A minimal working example would look like this.
\documentclass[9pt,twocolumn,twoside,lineno]{article}
\usepackage{textgreek}
\begin{document}
I would like this \.{\textgamma}, and not $\dot{\gamma}$ this.
\end{document}
Given a distribution $\mu$ on a finite set, let us denote by $T(\mu)$ the average depth of a leaf in a Huffman tree of $\mu$ (depth is measured by the number of edges from root to leaf); we assume that no element has zero probability. Then$$H(\mu) \leq T(\mu) < H(\mu)+1,$$where $H(\mu) = \sum_i \mu_i \log_2 (1/\mu_i)$ is the entropy of $\mu$ (here $\mu_i$ is the probability of the $i$th element).
To prove this, let us start with
Kraft's identity. In the statement below, a complete binary tree is one in which every internal node has exactly two children.
Kraft's identity. There is a complete binary tree whose leaf depths are the multiset $\{\ell_1,\ldots,\ell_n\}$ if and only if $\sum_{i=1}^n 2^{-\ell_i} = 1$.
Proof. Let us first show that the leaf depths satisfy Kraft's identity. The proof is by induction on $n$. If $n = 1$ then $\ell_1 = 0$, and indeed $2^{-\ell_1} = 1$. Otherwise, arbitrarily choose two sibling leaves, and remove them. If the original depth was $\ell$, this affects the multiset of depths by removing $\ell,\ell$ and adding $\ell-1$. The induction hypothesis shows that the new multiset satisfies Kraft's identity. Since $2^{-\ell} + 2^{-\ell} = 2^{-(\ell-1)}$, the original multiset also does, completing the proof.
Let us now show that if a multiset $L$ satisfies Kraft's identity, then there is a complete tree whose leaf depths are $L$. Once again, the proof is by induction on $n = |L|$. If $n = 1$, then necessarily $L = \{0\}$, and the tree consisting of a single leaf works. Otherwise, let $\ell = \max L$. Notice that$$2^\ell = \sum_i 2^{\ell - \ell_i}.$$If $\ell_i = \ell$ then $2^{\ell - \ell_i} = 1$, and if $\ell_i < \ell$ then $2^{\ell - \ell_i}$ is even. Hence the number of copies of $\ell$ in $L$ is even, and in particular there are at least two such copies. Let us form $L'$ by removing two copies of $\ell$ and replacing them by a copy of $\ell-1$. Applying the induction hypothesis, we obtain a tree $T'$ whose leaf depths are $L'$. By construction, $T'$ has a leaf of depth $\ell-1$. Adding to it two children, we obtain a tree $T$ whose leaf depths are $L$, completing the proof. $\square$
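Both directions of the argument can be exercised in code. Below is a sketch (all names are mine): `leaf_depths` walks a complete binary tree given as nested 2-tuples, and `tree_from_depths` rebuilds a tree from a multiset with Kraft sum 1 by repeatedly merging two deepest leaves, mirroring the induction; the assertion inside it relies on the parity argument from the proof.

```python
def leaf_depths(tree, depth=0):
    """Multiset of leaf depths of a complete binary tree (leaves are strings)."""
    if not isinstance(tree, tuple):
        return [depth]
    left, right = tree
    return leaf_depths(left, depth + 1) + leaf_depths(right, depth + 1)

def kraft_sum(depths):
    return sum(2 ** -d for d in depths)

def tree_from_depths(depths):
    """Build a complete tree whose leaf depths are `depths` (Kraft sum must be 1)."""
    assert kraft_sum(depths) == 1
    nodes = [(d, f"leaf{i}") for i, d in enumerate(depths)]
    while len(nodes) > 1:
        nodes.sort(key=lambda nd: nd[0], reverse=True)  # deepest first
        (d1, t1), (d2, t2) = nodes[0], nodes[1]
        assert d1 == d2      # the two deepest remaining nodes must be siblings
        nodes = nodes[2:] + [(d1 - 1, (t1, t2))]
    return nodes[0][1]

t = (("a", ("b", "c")), "d")         # a sample complete binary tree
print(sorted(leaf_depths(t)))        # [1, 2, 3, 3]
print(kraft_sum(leaf_depths(t)))     # 1.0

rebuilt = tree_from_depths([1, 2, 3, 3])
print(sorted(leaf_depths(rebuilt)))  # [1, 2, 3, 3]
```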
We can now prove the inequalities on $T(\mu)$, starting with $T(\mu) \geq H(\mu)$.
Lower bound on $T(\mu)$. Consider any tree whose leaves are labeled by the support of $\mu$, and suppose that element $i$ is on a leaf at depth $\ell_i$. Let $\nu_i = 2^{-\ell_i}$. Kraft's identity shows that $\sum_i \nu_i = 1$. Since $\ell_i = \log_2(1/\nu_i)$, we have$$T(\mu) - H(\mu) = \sum_i \mu_i (\ell_i - \log_2 (1/\mu_i)) = \sum_i \mu_i \log_2 (\mu_i/\nu_i).$$The function $\log_2 (1/x)$ is convex, hence Jensen's inequality shows that$$\sum_i \mu_i \log_2 (\mu_i/\nu_i) =\sum_i \mu_i \log_2 \frac{1}{\nu_i/\mu_i} \geq \log_2 \frac{1}{\sum_i \mu_i (\nu_i/\mu_i)} = 0,$$using $\sum_i \nu_i = 1$. We conclude that $T(\mu) \geq H(\mu)$. $\square$
The other direction uses Shannon–Fano coding.
Upper bound on $T(\mu)$. Let $\ell_i = \lceil \log_2 (1/\mu_i) \rceil$. Then$$\sum_i 2^{-\ell_i} \leq \sum_i 2^{-\log_2 (1/\mu_i)} = \sum_i \mu_i = 1.$$If $\sum_i 2^{-\ell_i} < 1$, let $\ell = \max_i \ell_i$. Notice that$$\sum_i 2^{\ell-\ell_i} < 2^\ell,$$where all summands are integral. If we decrement one copy of $\ell$ to $\ell-1$ then we increase the left-hand side by $2^{\ell-(\ell-1)} - 2^{\ell-\ell} = 1$, so the left-hand side is still at most $2^\ell$. Hence after the update, we still have $\sum_i 2^{-\ell'_i} \leq 1$, where $\ell'_i$ are the new values. Continue doing so until the sum reaches 1; this must happen, since the decreasing process cannot continue forever (the depths keep decreasing while never dipping below zero). Denoting by $r_i$ the new values, Kraft's identity implies that there exists a tree whose leaf depths are the $r_i$. The average depth of a leaf in this tree is$$\sum_i \mu_i r_i \leq \sum_i \mu_i \ell_i < \sum_i \mu_i (\log_2 (1/\mu_i) + 1) = H(\mu) + 1.$$Huffman's algorithm will find a tree which is at least as good. $\square$
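The two bounds are easy to check numerically. Here is a sketch of mine using a standard heap-based Huffman construction (integer counters break ties so the heap never compares the leaf tuples themselves):

```python
import heapq
import math

def huffman_depths(probs):
    """Leaf depth of each element in a Huffman tree for the given distribution."""
    if len(probs) == 1:
        return [0]
    heap = [(p, i, (i,)) for i, p in enumerate(probs)]  # (prob, tiebreak, leaves)
    heapq.heapify(heap)
    depth = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        p1, _, l1 = heapq.heappop(heap)
        p2, _, l2 = heapq.heappop(heap)
        for i in l1 + l2:   # every leaf under the merged node moves one level down
            depth[i] += 1
        heapq.heappush(heap, (p1 + p2, counter, l1 + l2))
        counter += 1
    return depth

def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs)

for probs in ([0.5, 0.25, 0.125, 0.125], [0.4, 0.3, 0.2, 0.1], [0.99, 0.01]):
    T = sum(p * d for p, d in zip(probs, huffman_depths(probs)))
    H = entropy(probs)
    assert H <= T < H + 1   # the theorem's bounds
    print(f"H = {H:.4f}, T = {T:.4f}")
```

For the dyadic distribution in the first line the two quantities coincide exactly ($T = H = 1.75$), as the lower-bound argument predicts.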
Finally, let me show that for every $\epsilon>0$ there are distributions $\mu$ for which $T(\mu) \geq H(\mu) + 1 - \epsilon$. Given $\delta>0$, consider the distribution $\mu$ on two elements with $\mu_1 = 1-\delta$ and $\mu_2 = \delta$. Clearly $T(\mu) = 1$, while $\lim_{\delta\to0} H(\mu) = 0$. Therefore we can find positive $\delta$ for which $H(\mu) \leq \epsilon$. For such $\mu$ we have $T(\mu) = 1 \geq H(\mu) + 1 - \epsilon$.
What happens if we force all probabilities to be small? Gallager, in his classic paper "Variations on a theme by Huffman," showed that even in this regime there are distributions for which the gap is roughly $\log_2 [(2/e)\log_2 e] \approx 0.086$. Amazingly, this is attained by uniform distributions!
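The bounds $H(\mu) \leq T(\mu) < H(\mu) + 1$ are easy to check numerically. Below is a small Python sketch (function names are mine); it uses the standard fact that the average leaf depth of a Huffman tree equals the sum, over all merge steps, of the merged probabilities.

```python
import heapq
import math

def entropy(probs):
    """Shannon entropy H(mu) in bits."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

def huffman_avg_depth(probs):
    """Average leaf depth T(mu) of a Huffman tree for the distribution."""
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b          # each merge contributes its weight once per level
        heapq.heappush(heap, a + b)
    return total

mu = [0.5, 0.25, 0.25]   # dyadic: T(mu) = H(mu) = 1.5 exactly
nu = [0.99, 0.01]        # skewed two-point: T(nu) = 1, H(nu) is small
```

For `mu` both quantities equal 1.5, while for `nu` the gap $T(\nu) - H(\nu)$ is close to (but below) 1, matching the two-point example above.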
I'm trying to get consistent normals along a 3D Bezier curve $B(t)$, where for any point I compute the normal as:
$$ \begin{align} \vec{a} &= B'(t) \\ \vec{b} &= B''(t) \\ \vec{c} &= \vec{a} + \vec{b} \\ \vec{r} &= \vec{c} \times \vec{a} \\ \vec{n} &= \vec{r} \times \vec{a} \\ \end{align} $$
So: take the derivative at a point for time value $t$, and implicitly get the plane of curvature at that point by computing the cross product of the derivative vector with the "next" derivative vector, obtained by moving the derivative by the amount dictated by the second derivative. That cross product yields the axis of rotation, and to form the normal at the point for time value $t$ I take the cross product of the axis of rotation and the original derivative vector, since these three vectors are by definition mutually perpendicular.
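For concreteness, here is that computation as a plain Python sketch for a cubic Bezier (the vector helpers and names are mine):

```python
import math

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def d1(p, t):
    # First derivative of a cubic Bezier with control points p[0..3].
    u = 1 - t
    return [3*u*u*(p[1][i]-p[0][i]) + 6*u*t*(p[2][i]-p[1][i])
            + 3*t*t*(p[3][i]-p[2][i]) for i in range(3)]

def d2(p, t):
    # Second derivative.
    u = 1 - t
    return [6*u*(p[2][i] - 2*p[1][i] + p[0][i])
            + 6*t*(p[3][i] - 2*p[2][i] + p[1][i]) for i in range(3)]

def raw_normal(p, t):
    a = d1(p, t)
    b = d2(p, t)
    c = [a[i] + b[i] for i in range(3)]
    r = cross(c, a)              # axis of rotation (c x a = b x a)
    n = cross(r, a)              # normal, perpendicular to both r and a
    m = math.sqrt(sum(x * x for x in n))
    return [x / m for x in n]    # undefined where the curvature vanishes!
```

Note the normalisation divides by zero exactly where $\vec{b}$ is parallel to $\vec{a}$ (curvature zero), which is the same inflection point where the sign flip described below occurs.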
The problem is that normals computed this way are not consistent: they will "flip" around inflections, and I'm not sure what the right way is to go about making sure that does not happen.
As visual illustration, consider the following 3D cubic Bezier curve:
$$ B(t) = \left[\begin{matrix}1&t&t^2&t^3\end{matrix}\right] \left[\begin{matrix}1&0&0&0\\-3&3&0&0\\3&-6&3&0\\-1&3&-3&1\end{matrix}\right] \left[\begin{matrix} 0 & 0 & 0\\ -0.38 & 2.68 & 0\\ -0.25 & 5.41 & 0\\ -0.15 & 8.21 & 0 \end{matrix}\right] $$
Now, this happens to be a 3D curve that lies entirely on the x/y plane, but it illustrates the problem rather well. The above procedure yields the following normals:
However, this is rather different from the 2D normals we get when taking advantage of the 2D plane, where a normal can be constructed by simply rotating the (normalised) derivative vector a quarter turn clockwise, setting $(x,y)$ as $(-y,x)$:
I'd like to get something similar to the 2D case for the 3D case, but I don't know how to ensure that the cross products are unaffected by "which direction" the second derivative moves the derivative across its plane of curvature. (Effectively: how do I ensure that the triplet {normal, derivative, axis of rotation} always maps to the local {x,y,z} axes, rather than sometimes mapping to {x,y,z} and sometimes to {y,x,z}?)
Edit
While more "algorithmic" than I'd like, the only workable solution I've found so far is to compute the normals for two points $B(t)$ and $B(t+\varepsilon )$, then computing the angular difference in the plane for those two normals,
$$ \theta = \arccos \left( \frac{n_1 \cdot n_2}{\|n_1\|\,\|n_2\|} \right) $$
and then check whether that value is close to $\pi$ or not. Even in fast-changing curves, the angle between two "reasonable" normals is a relatively small value, so if the angle suddenly jumps to "nearly $\pi$" then, from that time value onward, the "desired" normals are the negated computed normals.
While that works, it feels kind of hacky.
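The same idea can be phrased as a sign test rather than an angle threshold: once the normals are sampled in order along the curve, flip any normal whose dot product with its predecessor is negative. A minimal sketch of that variant (my naming):

```python
def fix_flips(normals):
    """Given unit normals sampled in order along the curve, negate any
    normal that points 'away' from its predecessor (negative dot product)."""
    fixed = [list(normals[0])]
    for n in normals[1:]:
        if sum(a * b for a, b in zip(fixed[-1], n)) < 0:
            n = [-x for x in n]
        fixed.append(list(n))
    return fixed
```

Checking `dot < 0` is equivalent to checking $\theta > \pi/2$, so it catches the near-$\pi$ flips without evaluating `acos`, and it propagates a consistent sign through the whole sample sequence.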
Without algorithmic flipping:
With algorithmic flipping:
Note this does not affect curves with "reasonable twisting", e.g. when we set the $z$ values to $\{0,200,-200,600\}$ for the first, second, third and fourth control point respectively:
As you are confused, let me start by stating the problem and taking your questions one by one. You have a sample size of 10,000, and each sample is described by a feature vector $x\in\mathbb{R}^{31}$. If you want to perform regression using Gaussian radial basis functions, then you are looking for a function of the form $$f(x) = \sum_{j=1}^{m} w_j \, g_j(x; \mu_j, \sigma_j),$$ where the $g_j$ are your basis functions. Specifically, you need to find the $m$ weights $w_j$ so that, for given parameters $\mu_j$ and $\sigma_j$, you minimise the error between each $y$ and the corresponding prediction $\hat{y} = f(x)$; typically you will minimise the least squares error.
What exactly is the Mu subscript j parameter?
You need to find $m$ basis functions $g_j$. (You still need to determine the number $m$) Each basis function will have a $\mu_j$ and a $\sigma_j$ (also unknown). The subscript $j$ ranges from $1$ to $m$.
Is the $\mu_j$ a vector?
Yes, it is a point in $\mathbb{R}^{31}$. In other words, it is a point somewhere in your feature space, and a $\mu$ must be determined for each of the $m$ basis functions.
I've read that this governs the locations of the basis functions. So is this not the mean of something?
The $j^{th}$ basis function is centered at $\mu_j$. You will need to decide on where these locations are. So no, it is not necessarily the mean of anything (but see further down for ways to determine it).
Now for the sigma that "governs the spatial scale". What exactly is that?
$\sigma$ is easier to understand if we turn to the basis functions themselves.
It helps to think of the Gaussian radial basis functions in lower dimensions, say $\mathbb{R}^{1}$ or $\mathbb{R}^{2}$. In $\mathbb{R}^{1}$ the Gaussian radial basis function is just the well-known bell curve. The bell can of course be narrow or wide. The width is determined by $\sigma$: with the parametrisation below, the larger $\sigma$ is, the wider the bell shape, and the smaller $\sigma$ is, the narrower it is. In other words, $\sigma$ scales the width of the bell shape. For $\sigma = 1$ we have no scaling; for $\sigma$ far from 1 we have substantial scaling.
You may ask what the purpose of this is. Think of the bell as covering some portion of space (a stretch of the line in $\mathbb{R}^{1}$): a narrow bell will only cover a small part of the line*. Points $x$ close to the centre of the bell will have a larger $g_j(x)$ value, and points far from the centre will have a smaller $g_j(x)$ value. Shrinking $\sigma$ has the effect of pushing points "further" from the centre: as the bell narrows, points at a fixed distance become effectively further away, reducing the value of $g_j(x)$.
Each basis function converts input vector x into a scalar value
Yes, you are evaluating the basis functions at some point $\mathbf{x}\in\mathbb{R}^{31}$.
$$\exp\left(-\frac{\|\mathbf{x}-\mu_j\|_2^2}{2\sigma_j^2}\right)$$
You get a scalar as a result. The scalar result depends on the distance of the point $\mathbf{x}$ from the centre $\mu_j$ given by $\|\mathbf{x}-\mu_j\|$ and the scalar $\sigma_j$.
I've seen some implementations that try such values as .1, .5, 2.5 for this parameter. How are these values computed?
This of course is one of the interesting and difficult aspects of using Gaussian radial basis functions. If you search the web you will find many suggestions as to how these parameters are determined. I will outline in very simple terms one possibility based on clustering. You can find this and several other suggestions online.
Start by clustering your 10,000 samples (you could first use PCA to reduce the dimensions, followed by k-means clustering). You can let $m$ be the number of clusters you find (typically employing cross-validation to determine the best $m$). Now, create a radial basis function $g_j$ for each cluster. For each radial basis function, let $\mu_j$ be the center (e.g., mean, centroid) of the cluster. Let $\sigma_j$ reflect the width of the cluster (e.g., its radius). Now go ahead and perform your regression. (This simple description is just an overview; it needs lots of work at each step!)
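To make the weight-fitting step concrete, here is a toy least-squares fit in one dimension with fixed centers and a shared $\sigma$ (a sketch under my own naming; a real implementation would choose the centers by clustering as described, and would use a linear algebra library instead of hand-rolled elimination):

```python
import math

def rbf(x, mu, sigma):
    """1D Gaussian radial basis function."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (M[r][n] - sum(M[r][c] * w[c] for c in range(r + 1, n))) / M[r][r]
    return w

def fit_rbf(xs, ys, centers, sigma):
    """Least-squares weights: solve the normal equations (Phi^T Phi) w = Phi^T y."""
    Phi = [[rbf(x, mu, sigma) for mu in centers] for x in xs]
    m = len(centers)
    A = [[sum(row[j] * row[k] for row in Phi) for k in range(m)] for j in range(m)]
    b = [sum(Phi[i][j] * ys[i] for i in range(len(xs))) for j in range(m)]
    return solve(A, b)

# Example: recover known weights from noiseless model data.
centers, sigma = [0.0, 1.0], 0.5
xs = [i / 10 for i in range(-5, 16)]
ys = [2.0 * rbf(x, 0.0, sigma) - 1.0 * rbf(x, 1.0, sigma) for x in xs]
w = fit_rbf(xs, ys, centers, sigma)   # w is close to [2.0, -1.0]
```

With the centers and $\sigma$ fixed, the problem is linear in the weights, which is why plain least squares suffices; all the difficulty lives in choosing $m$, $\mu_j$, and $\sigma_j$.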
*Of course, the bell curve is defined from $-\infty$ to $\infty$, so it will have a value everywhere on the line. However, the values far from the centre are negligible.
Let's review the basic concept of a confidence interval.
Suppose we want to estimate an actual population mean \(\mu\). As you know, we can only obtain \(\bar{x}\), the mean of a sample randomly selected from the population of interest. We can use \(\bar{x}\) to find a range of values:
\[\text{Lower value} < \text{population mean}\;\; \mu < \text{Upper value}\]
that we can be really confident contains the population mean \(\mu\). The range of values is called a "confidence interval."

Example S.2.1: Should using a hand-held cell phone while driving be illegal?
There is little doubt that over the years you have seen numerous confidence intervals for population proportions reported in newspapers.
For example, a newspaper report (ABC News poll, May 16-20, 2001) was concerned with whether or not U.S. adults thought using a hand-held cell phone while driving should be illegal. Of the 1,027 U.S. adults randomly selected for participation in the poll, 69% thought that it should be illegal. The reporter claimed that the poll's "margin of error" was 3%. Therefore, the confidence interval for the (unknown) population proportion p is 69% ± 3%. That is, we can be really confident that between 66% and 72% of all U.S. adults think using a hand-held cell phone while driving a car should be illegal.

General Form of (Most) Confidence Intervals
The previous example illustrates the general form of most confidence intervals, namely:
$\text{Sample estimate} \pm \text{margin of error}$
The lower limit is obtained by:
$\text{the lower limit L of the interval} = \text{estimate} - \text{margin of error}$
The upper limit is obtained by:
$\text{the upper limit U of the interval} = \text{estimate} + \text{margin of error}$
Once we've obtained the interval, we can claim that we are really confident that the value of the population parameter is somewhere between the value of
L and the value of U.
So far, we've been very general in our discussion of the calculation and interpretation of confidence intervals. To be more specific about their use, let's consider a specific interval, namely the "t-interval for a population mean \(\mu\)."

\((1-\alpha)100\%\) t-interval for the population mean \(\mu\)

If we are interested in estimating a population mean \(\mu\), it is very likely that we would use the t-interval for a population mean \(\mu\).

t-Interval for a Population Mean

The formula for the confidence interval in words is:
$\text{Sample mean} \pm (\text{t-multiplier} \times \text{standard error})$
and you might recall that the formula for the confidence interval in notation is: $\bar{x}\pm t_{\alpha/2, n-1}\left(\dfrac{s}{\sqrt{n}}\right)$
Note that:
- the "t-multiplier," which we denote as \(t_{\alpha/2, n-1}\), depends on the sample size through \(n-1\) (called the "degrees of freedom") and the confidence level \((1-\alpha)\times100\%\) through \(\frac{\alpha}{2}\).
- the "standard error," which is \(\frac{s}{\sqrt{n}}\), quantifies how much the sample means \(\bar{x}\) vary from sample to sample. That is, the standard error is just another name for the estimated standard deviation of all the possible sample means.
- the quantity to the right of the ± sign, i.e., "t-multiplier × standard error," is calculated by multiplying the t-multiplier by the standard error of the sample mean.
- the formula is only appropriate if a certain assumption is met, namely that the data are normally distributed.
Clearly, the sample mean \(\bar{x}\), the sample standard deviation s, and the sample size n are all readily obtained from the sample data. Now, we just need to review how to obtain the value of the t-multiplier, and we'll be all set.

How is the t-multiplier determined?
As the following graph illustrates, we put the confidence level $1-\alpha$ in the center of the
t-distribution. Then, since the entire probability represented by the curve must equal 1, a probability of α must be shared equally among the two "tails" of the distribution. That is, the probability of the left tail is $\frac{\alpha}{2}$ and the probability of the right tail is $\frac{\alpha}{2}$. If we add up the probabilities of the various parts $(\frac{\alpha}{2} + 1-\alpha + \frac{\alpha}{2})$, we get 1. The t-multiplier, denoted \(t_{\alpha/2}\), is the t-value such that the probability "to the right of it" is $\frac{\alpha}{2}$:
It should be no surprise that we want to be as confident as possible when we estimate a population parameter. This is why confidence levels are typically very high. The most common confidence levels are 90%, 95%, and 99%. The following table contains a summary of the values of \(\frac{\alpha}{2}\) corresponding to these common confidence levels. (Note that the "confidence coefficient" is merely the confidence level reported as a proportion rather than as a percentage.)

Confidence Coefficient \((1-\alpha)\) | Confidence Level \((1-\alpha) \times 100\%\) | \((1-\frac{\alpha}{2})\) | \(\frac{\alpha}{2}\)
0.90 | 90% | 0.95 | 0.05
0.95 | 95% | 0.975 | 0.025
0.99 | 99% | 0.995 | 0.005

Minitab® – Using Software
The good news is that statistical software, such as Minitab, will calculate most confidence intervals for us.
Let's take an example of researchers who are interested in the average heart rate of male college students. Assume a random sample of 130 male college students was taken for the study.
The following is the Minitab output of a one-sample t-interval using this data.

One-Sample T: Heart Rate

Descriptive Statistics

N | Mean | StDev | SE Mean | 95% CI for $\mu$
130 | 73.762 | 7.062 | 0.619 | (72.536, 74.987)
$\mu$: mean of HR
In this example, the researchers were interested in estimating \(\mu\), the mean heart rate. The output indicates that the mean for the sample of n = 130 male students equals 73.762. The sample standard deviation (StDev) is 7.062 and the estimated standard error of the mean (SE Mean) is 0.619. The 95% confidence interval for the population mean $\mu$ is (72.536, 74.987). We can be 95% confident that the mean heart rate of all male college students is between 72.536 and 74.987 beats per minute.

Factors Affecting the Width of the t-interval for the Mean $\mu$
Think about the width of the interval in the previous example. In general, do you think we desire narrow confidence intervals or wide confidence intervals? If you are not sure, consider the following two intervals:
- We are 95% confident that the average GPA of all college students is between 1.0 and 4.0.
- We are 95% confident that the average GPA of all college students is between 2.7 and 2.9.
Which of these two intervals is more informative? Of course, the narrower one gives us a better idea of the magnitude of the true unknown average GPA. In general, the narrower the confidence interval, the more information we have about the value of the population parameter. Therefore, we want all of our confidence intervals to be as narrow as possible. So, let's investigate what factors affect the width of the
t-interval for the mean \(\mu\).
Of course, to find the width of the confidence interval, we just take the difference in the two limits:
Width = Upper Limit - Lower Limit
What factors affect the width of the confidence interval? We can examine this question by using the formula for the confidence interval and seeing what would happen should one of the elements of the formula be allowed to vary.
\[\bar{x}\pm t_{\alpha/2, n-1}\left(\dfrac{s}{\sqrt{n}}\right)\]
What is the width of the
t-interval for the mean? If you subtract the lower limit from the upper limit, you get:
\[\text{Width }=2 \times t_{\alpha/2, n-1}\left(\dfrac{s}{\sqrt{n}}\right)\]
Now, let's investigate the factors that affect the length of this interval. Convince yourself that each of the following statements is accurate:
- As the sample mean increases, the width stays the same. That is, the sample mean plays no role in the width of the interval.
- As the sample standard deviation s decreases, the width of the interval decreases. Since s is an estimate of how much the data vary naturally, we have little control over s other than making sure that we make our measurements as carefully as possible.
- As we decrease the confidence level, the t-multiplier decreases, and hence the width of the interval decreases. In practice, we wouldn't want to set the confidence level below 90%.
- As we increase the sample size, the width of the interval decreases. This is the factor that we have the most flexibility in changing, the only limitation being our time and financial constraints.

In Closing
In our review of confidence intervals, we have focused on just one confidence interval. The important thing to recognize is that the topics discussed here — the general form of intervals, determination of
t-multipliers, and factors affecting the width of an interval — generally extend to all of the confidence intervals we will encounter in this course. |
In 1998 C. Cachin proposed an information-theoretic approach to steganography. In particular, in the framework of this approach, so-called perfectly secure stegosystems were defined, where messages that carry and do not carry hidden information are statistically indistinguishable. There was also described a universal steganographic system, for which this property holds only asymptotically, as the message length grows, while encoding and decoding complexity increases exponentially. (By definition, a system is universal if it is also applicable in the case where probabilistic characteristics of messages used to transmit hidden information are not known completely.) In the present paper we propose a universal steganographic system where messages that carry and do not carry hidden information are statistically indistinguishable, while the transmission rate of "hidden" information approaches the limit, namely the Shannon entropy of the source used to "embed" the hidden information.
Convergence properties of Shannon entropy are studied. In the differential setting, it is known that weak convergence of probability measures (convergence in distribution) is not sufficient for convergence of the associated differential entropies. In that direction, an interesting example is introduced and discussed in light of new general results provided here for the desired differential entropy convergence, which take into account both compactly and uncompactly supported densities. Convergence of differential entropy is also characterized in terms of the Kullback–Leibler discriminant for densities with fairly general supports, and it is shown that convergence in variation of probability measures guarantees such convergence under an appropriate boundedness condition on the densities involved. Results for the discrete setting are also provided, allowing for infinitely supported probability measures, by taking advantage of the equivalence between weak convergence and convergence in variation in that setting.
A threshold gate is a linear combination of input variables with integer coefficients (weights). It outputs 1 if the sum is positive. The maximum absolute value of the coefficients of a threshold gate is called its weight. A degree-$d$ perceptron is a Boolean circuit of depth 2 with a threshold gate at the top and any Boolean elements of fan-in at most $d$ at the bottom level. The weight of a perceptron is the weight of its threshold gate. For any constant $d \geq 2$ independent of the number of input variables $n$, we construct a degree-$d$ perceptron that requires weights of at least $n^{\Omega(n^d)}$; i.e., the weight of any degree-$d$ perceptron that computes the same Boolean function must be at least $n^{\Omega(n^d)}$. This bound is tight: any degree-$d$ perceptron is equivalent to a degree-$d$ perceptron of weight $n^{O(n^d)}$. For the case of threshold gates (i.e., $d = 1$), the result was proved by Håstad in [2]; we use Håstad's technique.
The subspace metric is an object of active research in network coding. Nevertheless, little is known about codes over this metric. In the present paper, several classes of codes over the subspace metric are defined and investigated, including codes with distance 2, codes with the maximal distance, and constant-distance constant-dimension codes. Also, Gilbert-type bounds are presented.
Ensembles of binary random LDPC block codes constructed using Hamming codes as constituent codes are studied for communicating over the binary symmetric channel. These ensembles are known to contain codes that asymptotically almost meet the Gilbert-Varshamov bound. It is shown that in these ensembles there exist codes which can correct a number of errors that grows linearly with the code length, when decoded with a low-complexity iterative decoder, which requires a number of iterations that is a logarithmic function of the code length. The results are supported by numerical examples, for various choices of the code parameters.
We consider regular block and convolutional LDPC codes determined by parity-check matrices with rows of a fixed weight and columns of weight 2. Such codes can be described by graphs, and the minimum distance of a code coincides with the girth of the corresponding graph. We consider a description of such codes in the form of tail-biting convolutional codes. Long codes are constructed from short ones using the "voltage graph" method. In this way we construct new codes, find a compact description for many known optimal codes, and thus simplify the coding for such codes. We obtain an asymptotic lower bound on the girth of the corresponding graphs. We also present tables of codes.
We consider partitions of finite abelian groups. We introduce the concept of Fourier-invariant pairs and demonstrate that this concept is equivalent to the concept of an association scheme in an abelian group. It follows that Fourier-invariant pairs of partitions might be viewed as a very natural approach to abelian association schemes.
We consider a realistic model of a wireless network where nodes are dispatched in an infinite map with uniform distribution. Signals decay with distance according to an attenuation factor $\alpha$. At any time we assume that the distribution of emitters is $\lambda$ per unit square area. From an explicit formula for the Laplace transform of a received signal, we derive an explicit formula for the information rate received by an access point at a random position, which is $\frac{\alpha}{2}(\log 2)^{-1}$ per Hertz. We generalize to network maps of any dimension.
We study two new concepts of combinatorial coding theory: additive stem similarity and additive stem distance between q-ary sequences. For q = 4, the additive stem similarity is applied to describe a mathematical model of thermodynamic similarity, which reflects the "hybridization potential" of two DNA sequences. Codes based on the additive stem distance are called DNA codes. We develop methods to prove upper and lower bounds on the rate of DNA codes analogous to the well-known Plotkin upper bound and random coding lower bound (the Gilbert-Varshamov bound). These methods take into account both the "Markovian" character of the additive stem distance and the structure of a DNA code specified by its invariance under the Watson-Crick transformation. In particular, our lower bound is established with the help of an ensemble of random codes where distribution of independent codewords is defined by a stationary Markov chain.
We consider a retrial queueing system with batch arrival of customers. Unlike standard batch arrival, where a whole batch enters the system simultaneously, we assume that customers of a batch (session) arrive one by one in exponentially distributed time intervals. Service time is exponentially distributed. The batch arrival flow is MAP. The number of customers in a session is geometrically distributed. The number of sessions that can enter the system simultaneously is a control parameter. We analyze the joint probability distribution of the number of sessions and customers in the system using the techniques of multidimensional asymptotically quasi-Toeplitz Markov chains.
We introduce notions of local and interweight spectra of an arbitrary coloring of a Boolean cube, which generalize the notion of a weight spectrum. The main objects of our research are colorings that are called perfect. We establish an interrelation of local spectra of such a coloring in two orthogonal faces of a Boolean cube and study properties of the interweight spectrum. Based on this, we prove a new metric property of perfect colorings, namely, their strong distance invariance. As a consequence, we obtain an analogous property of an arbitrary completely regular code, which, together with its neighborhoods, forms a perfect coloring.
A transformation of Steiner quadruple systems S(υ, 4, 3) is introduced. For a given system, it allows one to construct new systems of the same order, which can be nonisomorphic to the given one. The structure of Steiner systems S(υ, 4, 3) is considered. There are two different types of such systems, namely, induced and singular systems. Induced systems of 2-rank r can be constructed by the introduced transformation of Steiner systems of 2-rank r − 1 or less. A sufficient condition for a Steiner system S(υ, 4, 3) to be induced is obtained.
We obtain some upper and lower bounds for the maximum of mutual information of several random variables via variational distance between the joint distribution of these random variables and the product of its marginal distributions. In this connection, some properties of variational distance between probability distributions of this type are derived. We show that in some special cases estimates of the maximum of mutual information obtained here are optimal or asymptotically optimal. Some results of this paper generalize the corresponding results of [1–3] to the multivariate case.
The discrete Walsh transform is a linear transform defined by a Walsh matrix. Three ways to construct Walsh matrices are known, which differ by the sequence order of rows and correspond to the Paley, Walsh, and Hadamard enumerations. We propose a new enumeration of Walsh matrices and study its properties. The new enumeration is constructed as a linear rearrangement; we obtain an eigenvector basis for it and propose a convenient-to-generate fast implementation algorithm; the new enumeration possesses certain symmetry properties, which make it similar to the discrete Fourier transform.
We apply the theory of products of random matrices to the analysis of multi-user communication channels similar to the Wyner model, which are characterized by short-range intra-cell broadcasting. We study fluctuations of the per-cell sum-rate capacity in the non-ergodic regime and provide results of the type of the central limit theorem (CLT) and large deviations (LD). Our results show that CLT fluctuations of the per-cell sum-rate $C_m$ are of order $1/\sqrt{m}$, where $m$ is the number of cells, whereas they are of order $1/m$ in classical random matrix theory. We also show an LD regime of the form $P(|C_m - C| > \varepsilon) \leq e^{-m\alpha}$ with $\alpha = \alpha(\varepsilon) > 0$ and $C = \lim_{m \to \infty} C_m$, as opposed to the rate $e^{-m^2\alpha}$ in classical random matrix theory.
We generalize the method for computing the number of errors correctable by a low-density parity-check (LDPC) code in a binary symmetric channel, which was proposed by V.V. Zyablov and M.S. Pinsker in 1975. This method is for the first time applied to computing the fraction of guaranteed correctable erasures for an LDPC code with a given constituent code used in an erasure channel. Unlike previously known combinatorial methods for computing the fraction of correctable erasures, this method is based on the theory of generating functions, which allows us to obtain more precise results and unify the computation method for various constituent codes of a regular LDPC code. We also show that there exists an LDPC code with a given constituent code which, when decoded with a low-complexity iterative algorithm, is capable of correcting any erasure pattern with a number of erasures that grows linearly with the code length. The number of decoding iterations required to correct the erasures is a logarithmic function of the code length. We present a comparative analysis of various numerical results obtained by various computation methods for certain parameters of an LDPC code with a constituent single-parity-check or Hamming code.
Two codes $C^{(1)}$ and $C^{(2)}$ are said to be weakly isometric if there exists a mapping $J\colon C^{(1)} \to C^{(2)}$ such that for all $x, y \in C^{(1)}$ the equality $d(x, y) = d$ holds if and only if $d(J(x), J(y)) = d$, where $d$ is the code distance of $C^{(1)}$. We prove that Preparata codes of length $n \geq 2^{12}$ are weakly isometric if and only if the codes are equivalent. A similar result is proved for punctured Preparata codes of length at least $2^{10} - 1$.
We consider properties of the matrix of a real quadratic form that takes a constant value on a sufficiently large set of vertices of a multidimensional cube centered at the origin given that the corresponding quadric does not separate vertices of the cube. In particular, we show that the number of connected components of the graph of the matrix of such a quadratic form does not change when one edge of the graph is deleted.
We consider the performance of the maximum-likelihood algorithm for detection and measurement of appearance and disappearance times of an arbitrary waveform signal observed against additive white Gaussian noise. We find exact expressions for detection error probabilities and densities of estimated appearance and disappearance times.
Complete constructions play an important role in theoretical computer science. However, in cryptography complete constructions have so far been either absent or purely theoretical. In 2003, L.A. Levin presented the idea of a combinatorial complete one-way function. In this paper, we present two new one-way functions based on semi-Thue string rewriting systems and a version of the Post correspondence problem. We also present the properties of a combinatorial problem that allow a complete one-way function to be based on this problem. The paper also gives an alternative proof of Levin's result. |
I have the following proof so far:
In step 9 I'm not sure how to prove P from the steps I have before. I thought that I could use ∨ Elim but I don't think I can now.
The trick is to use $\bot \ Elim$. Now, to continue from what you have, you can do:
But note that you never used the subproof on lines 5-7, so this can be simplified to:
Although conceptually, it may be helpful to keep the original subproof, and (since it shows that $B$ leads to a contradiction $\bot$) derive $\neg B$, and to then combine $\neg B$ with $B \lor P$ (the latter is a super common pattern, so remember that one!):
Finally, you can set this up as a proof by cases within a conditional proof, i.e. derive your goal $P \land D$ from each of the cases $B$ and $P$:
I've actually seen some texts define Or-Elimination as $$\frac{\lnot B,~ B \lor P}{P}$$ which is actually constructively valid. Anyway, the way to get there from regular Or-Elimination (proof by cases) is just to show that in both case $B$ and case $P$, $P$ follows:
$$\begin{array} {rll} (1) & \lnot B & \text{Given} \\ \\ % (2) & \quad B & \text{assumption, Case 1} \\ % (3) & \quad \bot & \text{Contradiction} \\ % (4) & \quad P & \text{Explosion} \\ \\ % (5) & \quad P & \text{assumption, Case 2} \\ \\ % (6) & B \lor P & \text{Given} \\ % (7) & P & \text{Proof by cases (Or Elim) of 6, 2 through 4, 5 through 5} \\ % \end{array}$$ |
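Since this is classical propositional logic, the validity of the disjunctive-syllogism pattern used above can also be confirmed mechanically, by checking every truth assignment. A small Python sketch of that brute-force check (names are mine):

```python
from itertools import product

def entails(premises, conclusion, names):
    """True iff every assignment satisfying all premises satisfies the conclusion."""
    for values in product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# Disjunctive syllogism: from (not B) and (B or P), infer P.
valid = entails([lambda e: not e["B"], lambda e: e["B"] or e["P"]],
                lambda e: e["P"], ["B", "P"])
```

Here `valid` comes out `True`, while dropping the $\lnot B$ premise makes the entailment fail (the assignment $B$ true, $P$ false is a countermodel), which is exactly why the subproof deriving $\neg B$ earns its keep.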
Solutions to elementary discrete dynamical systems biology problems
The following is a set of solutions to the elementary discrete dynamical systems biology problems. Let us know if you have a better solution.
Let $t$ be time in years. Let $m_t =$ the mass of the fish in grams in year $t$. The dynamical system where the fish increases by 100 grams is \begin{align*} m_{t+1}-m_t = 100, \quad \text{for $t=0,1,2, \ldots$} \end{align*} Let $m_0=45$ grams, then \begin{align*} m_1 &= m_0+100 = 145 \text{ grams}\\ m_2 &= m_1+100 = 245 \text{ grams}\\ m_3 &= m_2+100 = 345 \text{ grams}\\ m_4 &= m_3+100 = 445 \text{ grams} \end{align*} Let $E$ be an equilibrium. Then, $E$ must satisfy \begin{align*} E-E &= 100\\ 0 &= 100 \end{align*} There is no equilibrium (which makes intuitive sense given that the fish is always growing by the same amount).
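The iteration above is easy to check in code (a minimal sketch; the function name and defaults are ours, not part of the problem):

```python
# Iterate the fish model m_{t+1} = m_t + 100 from m_0 = 45 grams.
def iterate_fish(m0, years, gain=100):
    masses = [m0]
    for _ in range(years):
        masses.append(masses[-1] + gain)
    return masses

print(iterate_fish(45, 4))  # → [45, 145, 245, 345, 445]
```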
Let $t$ indicate time in years. Let $e_t$ be the number of elk in the population in the beginning of year $t$. The dynamical system is \begin{align*} \text{population change in a year} &= \text{number born} - \text{number died}. \end{align*}
The change in one year, from the beginning of year $t$ to the beginning of year $t+1$, is just the difference in the population size at the beginning of those two years: $e_{t+1}-e_t$.
The number born in this period is 20% of the population at the beginning of the year, i.e., it is $0.2e_t$.
The number died in this period is one quarter of the population at the beginning of the year, i.e., it is $0.25e_t$.
Putting this all together, we obtain the dynamical system \begin{align*} e_{t+1} - e_t &= 0.2 e_t - 0.25 e_t \end{align*} or \begin{align*} e_{t+1} - e_t &= -0.05 e_t, \quad \text{for $t=0,1,2,3,\ldots$} \end{align*} The dynamical system just captures the fact that there is a net 5% decline in elk population each year.
Let the year $t=0$ correspond to the year when there are 18,000 elk so that $e_0 = 18,000$. To simplify the calculation, let's rewrite the system as \begin{align*} e_{t+1} &=0.95 e_t, \quad \text{for $t=0,1,2,3,\ldots$} \end{align*} where we added $e_t$ to both sides of the equation. Then, we just need to multiply by $0.95$ to calculate the next year's population size. Our result is \begin{align*} e_1 &= 0.95 e_0 = 0.95 \times 18000 = 17,100 \text{ elk}\\ e_2 &= 0.95 e_1 = 0.95 \times 17100 = 16,245 \text{ elk}\\ e_3 &= 0.95 e_2 = 0.95 \times 16245 \approx 15,433 \text{ elk}\\ e_4 &= 0.95 e_3 \approx 0.95 \times 15433 \approx 14,661 \text{ elk} \end{align*}
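The same iteration in code (a minimal sketch; the function name and defaults are ours, not part of the problem):

```python
# Iterate the elk model e_{t+1} = 0.95 e_t from e_0 = 18,000.
def iterate_elk(e0, years, factor=0.95):
    pops = [e0]
    for _ in range(years):
        pops.append(factor * pops[-1])
    return pops

pops = iterate_elk(18000, 4)
# pops ≈ [18000, 17100.0, 16245.0, 15432.75, 14661.11]
```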
Let $E$ be an equilibrium of the system. Then, \begin{align*} E - E &= -0.05 E\\ 0 &= -0.05 E\\ E &=0 \end{align*} The only equilibrium is $E=0$, when all the elk in the population have died out.
With 1.1% annual population growth, the population size must be multiplied by $c=1+0.011 = 1.011$ each year. In $t$ years, the population size will be multiplied by $c^t$. The number of years $t$ where $c^t$ is 2 is the doubling time \begin{align*} T_{\text{double}} &= \frac{\log 2}{\log c}\\ &=\frac{\log 2}{\log 1.011}\\ &\approx \frac{0.6931}{0.0109} \approx 63.36 \text{ years.} \end{align*} We used the natural log to calculate the numbers in the fraction, but you would get the same result if you used any other base for the logarithm.
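The same doubling-time computation in Python (the function name is ours; as noted above, any logarithm base gives the same ratio):

```python
import math

# Doubling time T = log 2 / log c for annual growth factor c.
def doubling_time(c):
    return math.log(2) / math.log(c)

print(round(doubling_time(1.011), 2))  # → 63.36
```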
To go from 7 billion to 56 billion people, the population must increase by the factor $56/7 = 8$. The population size must double three times, so the time to 56 billion people is $3 T_{\text{double}} \approx 3 \times 63.36 \approx 190.08$ years.
To capture the 5% annual decrease in the population of Steller sea lions, we need to multiply the population size by $d=1-0.05=0.95$ each year. After $t$ years, we must multiply by $d^t$. The value of $t$ where $d^t$ is one-half is the half-life \begin{align*} T_{\text{half}} &= \frac{\log 1/2}{\log d}\\ &=\frac{\log 1/2}{\log 0.95}\\ &\approx \frac{-0.6931}{-0.05129} \approx 13.51 \text{ years.} \end{align*} We used the natural log to calculate the numbers in the fraction, but you would get the same result if you used any other base for the logarithm.
For the population to go from 40,000 to 10,000, it must decrease by a factor of $40000/10000=4$, or must decrease in half two times. The time for this to occur is two half-lives or $2T_{\text{half}} \approx 2 \times 13.51 \approx 27.03$ years. (Notice that when doing the actual calculation, we didn't round until after multiplying by two. For this reason, the rounded result isn't exactly twice the rounded half-life.)
Let $t=$ time in days. Let $n_t=$ the number of embryo cells in day $t$. Since the number of cells doubles each day, the dynamical system to go from day $t$ to day $t+1$ is \begin{align*} n_{t+1} = 2n_t \quad \text{for $t=0,1,2, \ldots$} \end{align*} If $t$ is the number of days since fertilization, then the initial condition is $n_0=1$. Since for each day, we multiply the number of cells by 2, the number of cells after $t$ days is \begin{align*} n_t = 2^t. \end{align*} A pregnancy of 40 weeks is $40 \cdot 7 = 280$ days long. If the number of cells doubled every day for 280 days, the number of cells after 280 days would be \begin{align*} n_{280} &= 2^{280}\\ &= 1,942,668,892,225,729,070,919,461,906,823,518,906,642,406,839,052,139,521,251,812,409,738,904,285,205,208,498,176\\ &\approx 1.943 \times 10^{84} \end{align*} This number is about 10,000 times larger than the estimated number of atoms in the observable universe. Since a newborn child is typically much smaller than the universe and each cell contains many atoms, we can safely assume that a newborn child has far fewer cells. Therefore, we can conclude that the rate of cell division must slow down during the course of the pregnancy.
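The huge number above can be computed exactly, since Python integers have arbitrary precision:

```python
# Exact value of 2**280 via Python's arbitrary-precision integers.
cells = 2 ** 280
print(len(str(cells)))  # → 85  (an 85-digit number, ≈ 1.94 × 10^84)
```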
Let $t$ denote the number of 5 week periods since some reference time. Let $r_t = $ the number of rabbits in the group in time period $t$. In each time period, the number of rabbits increases by a fixed fraction. We will let that fixed fraction be an unknown parameter, which we'll denote by $f$, where $f=$ the fraction by which the population of rabbits increases in a 5 week period.
With this notation, the change in a five week period is $r_{t+1}-r_t$ and we set this equal to $fr_t$, the fraction $f$ times the population size $r_t$ at the beginning of the period. Our resulting dynamical system model is \begin{align*} r_{t+1}-r_t = fr_t \quad \text{for $t=0,1,2,3, \ldots$} \end{align*}
In the given data, the plot showed the population change $r_{t+1}-r_t$ versus the population size $r_t$ at the beginning of the period. Since the data was fit by a line going through the origin with slope 0.3, the line indicates that the population change $r_{t+1}-r_t$ can be modeled as being 0.3 times the population size $r_t$: $r_{t+1}-r_t = 0.3 r_t$. Moreover, if we let $t=0$ denote the first time period from the data table, then the initial population size is $r_0 = 101$.
We can summarize these results with the dynamical system \begin{align*} r_0 &= 101\\ r_{t+1}-r_t &= 0.3 r_t \quad \text{for $t=0,1,2,3, \ldots$} \end{align*}
As a first step in solving the system, we rewrite it as \begin{align*} r_{t+1} = 1.3r_t \end{align*} by adding $r_t$ to both sides. Since in each time period, the population size is multiplied by 1.3, the solution is \begin{align*} r_t = 1.3^t r_0 = 1.3^t \cdot 101. \end{align*}
Let $T=$ time in weeks, so that $t=T/5$. Plugging that into the solution, the rabbit population at week $T$ is estimated to be \begin{align*} \text{Number of rabbits in week $T$} = 1.3^{T/5}\cdot 101. \end{align*} The model estimates the number of rabbits after 35 weeks to be $$1.3^{35/5}\cdot101=1.3^7 \cdot 101 \approx 634,$$ which is just above the observed number. If the rabbit population continued to grow at the same rate for two years (104 weeks), the number of rabbits would be $$1.3^{104/5}\cdot 101 = 1.3^{20.8} \cdot 101 \approx 23,678.$$
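A one-line model function reproduces the 35-week estimate (a sketch; the defaults encode $r_0=101$ and 30% growth per 5-week period):

```python
# Rabbit model prediction at week T: 1.3**(T/5) * 101.
def rabbits(week, r0=101, growth=1.3, period=5):
    return growth ** (week / period) * r0

print(round(rabbits(35)))  # → 634
```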
Let $t=$ time in minutes. Let $b_t=$ the population size of the bacteria in minute $t$. Given that the increase from time $t$ to time $t+1$ is $0.1487$ times the population size at time $t$, the dynamical system model is \begin{align*} b_{t+1}-b_t = 0.1487 b_t, \quad \text{for $t=0,1,2,3 \ldots$} \end{align*}
By adding $b_t$ to both sides, we can rewrite the dynamical system as \begin{align*} b_{t+1} = 1.1487 b_t, \quad \text{for $t=0,1,2,3 \ldots$} \end{align*} Given that the population size is multiplied by 1.1487 each time step, the solution is \begin{align*} b_t = 1.1487^t b_0. \end{align*} The population size doubles when $1.1487^t = 2$, i.e., when \begin{align*} t=T_{\text{double}} &= \frac{\log 2}{\log 1.1487}\\ &\approx \frac{0.6931}{0.1386} \approx 5. \end{align*} Therefore, the bacteria population size doubles every 5 minutes. We used the natural log to calculate the numbers in the fraction, but you would get the same result if you used any other base for the logarithm.
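One line of Python confirms the doubling time (a sketch; note the growth rate 0.1487 is no accident, since $2^{1/5} \approx 1.1487$):

```python
import math

# Doubling time of the bacteria: log 2 / log 1.1487 ≈ 5 minutes.
T_double = math.log(2) / math.log(1.1487)
print(round(T_double, 2))  # → 5.0
```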
If the population continues to grow at this rate for one hour, or sixty minutes, the population increases by the factor $1.1487^{60} \approx 4096$, which is the same as doubling about $60/5=12$ times. In two hours, the population size doubles about 24 times, increasing by a factor of $1.1487^{120} \approx 1.678 \times 10^7$, which is about 16.78 million. In four hours, the population size doubles about 48 times, increasing by a factor of $1.1487^{240} \approx 2.816 \times 10^{14}$, which is about 281.6 trillion.
Let $b_t$ measure the bacteria population size in units of the volume of the beaker, so that $b_t=1$ means the beaker is exactly full. If $t$ is minutes after midnight, then the initial conditions are $b_0=5.959 \times 10^{-8}$. As a double check, we can calculate the population size at 2 AM, when $t=120$. By our solution formula, $$b_{120} = 1.1487^{120} \cdot 5.959 \times 10^{-8} \approx 1.$$ Indeed, the beaker is exactly full at 2 AM.
However, this calculation was just optional, as the problem didn't ask us to calculate that the bacteria completely fill the beaker after two hours. We could have taken that for granted.
Given that the beaker was full at 2 AM, the beaker was half full exactly one doubling time $T_{\text{double}}$ before 2 AM. Since we calculated that $T_{\text{double}}$ was 5 minutes, the beaker was half full at 5 minutes to 2:00, or 1:55 AM.
The bacteria filled one beaker at 2 AM. To fill four beakers, they just need to double two more times. The bacteria fill all four beakers after two doubling times, $2T_{\text{double}}$, or 10 minutes, i.e., at 2:10 AM. Not much time was gained by quadrupling the available space.
Let $t=$ time in years. Let $d_t$ be the deer population size in year $t$. In each year, the population size increases by a factor of $a$, which means the population size is multiplied by $a$ each year. The dynamical system describing this population growth is \begin{align*} d_{t+1} = a d_t, \quad \text{for $t=0,1,2, \ldots$} \end{align*}
Let $E$ be an equilibrium. Then, $E$ must satisfy \begin{align*} E&=aE\\ (1-a)E &=0\\ E &=0 \quad \text{or} \quad a=1. \end{align*} Since the problem stated that $a>1$, we can rule out the second condition and conclude that the only equilibrium is $E=0$.
To determine the stability of the equilibrium, we can solve the dynamical system. The solution is \begin{align*} d_t = a^t d_0. \end{align*} If $d_0=0$, then $d_t=0$ for all time, which we knew must be the case since $E=0$ is an equilibrium. But, if $d_0$ is slightly larger than 0, then $d_t$ will grow with time, given that $a>1$. The solution moves away from the equilibrium $E=0$, so it is unstable.
Each year, the number $b$ must be subtracted off the population size. The modified dynamical system is \begin{align*} d_{t+1} = a d_t - b, \quad \text{for $t=0,1,2, \ldots$} \end{align*} If $E$ is an equilibrium, it must satisfy \begin{align*} E &= aE -b\\ (1-a)E &= -b\\ E &= \frac{-b}{1-a} = \frac{b}{a-1}. \end{align*}
We could divide by $1-a$ since we were given that $a>1$ and knew that $1-a \ne 0$. Note that since $a>1$ and $b>0$, both numerator and denominator in the last fraction are positive numbers, and $E>0$.
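As a quick numerical sketch of the formula $E = b/(a-1)$ (the values $a=1.2$ and $b=50$ are ours, purely illustrative):

```python
# Equilibrium with constant harvest b: E = b / (a - 1), valid for a > 1.
def deer_equilibrium(a, b):
    return b / (a - 1)

# Illustrative numbers: 20% growth, 50 deer hunted per year.
E = deer_equilibrium(1.2, 50)
print(round(E))  # → 250
# Check the fixed-point property: a*E - b == E.
assert abs(1.2 * E - 50 - E) < 1e-9
```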
It turns out that the equilibrium is unstable, but the problem didn't ask you to determine that.
Now, instead of subtracting off the fixed number $b$, we need to subtract off a number proportional to the number of deer at previous time step, i.e., we subtract off $cd_t$. The modified dynamical system is \begin{align*} d_{t+1} = a d_t - cd_t = (a-c)d_t, \quad \text{for $t=0,1,2, \ldots$} \end{align*}
Let $E$ be an equilibrium. It must satisfy \begin{align*} E &= (a-c)E\\ (1-a+c)E &= 0\\ E &=0 \quad \text{or} \quad c=a-1 \end{align*} If the hunting exactly matches the growth rate, i.e., $c= a-1$, then every choice of $E$ is an equilibrium; the population size will never change, as the system becomes $d_{t+1}=d_t$. Moreover, in this case, a solution that starts near an equilibrium does stay near the equilibrium (it doesn't move), so by our definition, we'd have to say all the equilibria are stable when $c=a-1$.
On the other hand, if $c \ne a-1$, then there is only one equilibrium: $E=0$. We can check its stability by solving the equation. In each year, the population is multiplied by $(a-c)$, so after $t$ years, the solution is \begin{align*} d_t = (a-c)^t d_0. \end{align*}
If $c < a-1$, then $a-c > 1$ and the population size grows each year. The hunting removes less than the natural growth adds. No matter how close we make the initial population size to the equilibrium $d_0=0$, it will move away from the equilibrium. The equilibrium is unstable. (Of course, if we started exactly at $d_0=0$, the population would stay at zero, consistent with the fact that $E=0$ is an equilibrium.)
If $a> c > a-1$, then the hunting is greater than the natural growth and the population size decreases each year. In this case, $0 < a-c < 1$, so that $(a-c)^t$ goes to zero with increasing $t$. The solution moves toward the equilibrium, and the equilibrium is stable.
If $c >a$, then we get unphysical results, as the model indicates we hunt more deer than there are. We get a negative population size. We can throw out this case.
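The stable and unstable regimes can be confirmed numerically (a sketch with assumed illustrative values $a=1.2$ and varying $c$):

```python
# Iterate d_{t+1} = (a - c) d_t to see the two stability regimes.
def simulate(d0, a, c, steps=50):
    d = d0
    for _ in range(steps):
        d = (a - c) * d
    return d

# c < a-1: solution grows away from E = 0 (unstable).
print(simulate(1.0, 1.2, 0.1) > 1.0)   # → True
# a-1 < c < a: solution decays toward E = 0 (stable).
print(simulate(1.0, 1.2, 0.3) < 0.01)  # → True
```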
Let $E$ be an equilibrium. Then, $E=f(E)$ so that \begin{align*} E &= \frac{3E^2}{2+E^2}. \end{align*} We can simplify this equation by multiplying through by $2+E^2$ and getting a cubic equation for $E$. \begin{align*} E(2+E^2) &= 3E^2\\ E^3+2E &= 3E^2\\ E^3-3E^2+2E &=0\\ \end{align*} We can factor out an $E$ and then factor the remaining quadratic expression. \begin{align*} E(E^2-3E+2) &=0\\ E(E-1)(E-2) &=0\\ \end{align*} We end up with the conclusion that $E = 0,1, \text{ or }2$.
On the other hand, the problem just asked to show that 0, 1, and 2 were equilibria, so we could have just plugged those values into any of the versions of the equation for $E$. Since those numbers will satisfy the equation, we could conclude they are equilibria.
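The "plug in and check" approach from the last paragraph is easy to automate (a small sketch; the function name $f$ matches the problem's notation):

```python
# Verify that 0, 1, 2 are fixed points of f(z) = 3z^2 / (2 + z^2).
def f(z):
    return 3 * z**2 / (2 + z**2)

for E in (0, 1, 2):
    print(E, f(E))  # each value satisfies f(E) == E
```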
The cobwebbing is shown below. The equilibria are represented by the red circles. The cobwebbing shows that if one starts with a value $z_0$ just above or just below the equilibrium $E=1$ the trajectory moves away from the equilibria. The equilibrium $E=1$ is unstable. Moreover, if the initial condition $z_0$ is just larger than 1, the trajectory converges to the equilibrium $E=2$. On the other hand, if the initial condition $z_0$ is just smaller than 1, the trajectory converges to the equilibrium $E=0$.
If a banana is shown, an initial condition of $z_0$ around 1.2 will cause the system to evolve to the upper equilibrium of $E=2$. If, on the other hand, a stick is shown, an initial condition of $z_0$ around 0.8 will cause the system to evolve to the lower equilibrium of $E=0$. Since you observe that the firing rate has evolved close to the upper equilibrium of $E=2$, the monkey must have been shown a banana.
Assume we have the following setup:
A client with trusted storage and computing capabilities (e.g. a smartcard).
A server with trusted computing and short-term storage capabilities (e.g. RAM + CPU, possibly with something like Intel SGX). The server has no trusted large-scale long-term storage capabilities and may only store small amounts of data in a confidential and integrity-protected manner (like the HTTPS private key).
The problem is: the server should be able to be shut down and started up, no passwords should be involved, and the server has no HSM; yet the server should be able to provide somewhat secure access to some data without the clients needing to decrypt it themselves (for complexity reasons). So the storage needs to be encrypted, and the transfer (-> TLS) as well.
The solution is now (what I call it):
blinded decryption.
The server uses some homomorphic encryption scheme (e.g. EC-ElGamal or RSA) with the message space $\mathcal M$. He chooses a random $k\in \mathcal M$ and uses $H(k)$ ($H:\mathcal M \rightarrow \{0,1\}^{256}$) as the key for the authenticated encryption of the data. The server now either stores the (asymmetric) encryption of $k$, called $\mathcal E(k)$ in his trusted area of the drive(s) or may store it in an untrusted section (with back-ups) if server authentication is required and the private key for this authentication is already stored in the trusted area.
For the temporary unlock of the encrypted data, the server loads $\mathcal E(k)$. Then he blinds it using some operation $f(\cdot,\cdot)$ (multiplication for ElGamal and RSA, addition for EC-ElGamal) using some random $r\in \mathcal M$ as $c=f(\mathcal E(k),\mathcal E(r))=\mathcal E(g(k,r))$ with $g(\cdot,\cdot)$ being the "inner homomorphism" (same as $f$ in many cases). The $r$ is kept available in trusted short-term memory and the $c$ is sent to the client.
The client decrypts $c$ using his trusted device and returns $c'=g(k,r)$ to the server. Finally the server unblinds $c'$ using his $r$ and uses the obtained $k$ to derive $H(k)$ and allow access to the data.
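For concreteness, here is a toy round-trip of the blinding steps described above, instantiated with textbook RSA. All numbers are illustrative and far too small to be secure; this only demonstrates the mechanics, not an answer to the security question below.

```python
import hashlib

# Toy RSA parameters (illustrative only, NOT secure).
p, q = 1009, 1013
N, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # held by the client's trusted device

k = 123456                 # server's symmetric-key seed (fixed for reproducibility)
Ek = pow(k, e, N)          # stored E(k) = k^e mod N

# --- temporary unlock ---
r = 54321                              # blinding factor, gcd(r, N) = 1
c = (Ek * pow(r, e, N)) % N            # E(k)*E(r) = E(k*r mod N), multiplicative homomorphism
c_prime = pow(c, d, N)                 # client decrypts the blinded value: k*r mod N
k_rec = (c_prime * pow(r, -1, N)) % N  # server unblinds with r^{-1}
assert k_rec == k
key = hashlib.sha256(k_rec.to_bytes(4, "big")).digest()  # H(k), 32 bytes
```

In a real deployment $k$ and $r$ would be drawn freshly at random (e.g. with `secrets.randbelow`), and the question of whether textbook RSA is an acceptable $\mathcal E$ here is exactly what is asked next.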
Now (finally) the question:
Given the above and standard assumptions (RSA-assumption, DDH-assumption in ECC and $\mathbb Z_p^*$,$H$ is a random oracle, the symmetric encryption is secure and authenticated,...) is it safe to instantiate $\mathcal E$ with textbook RSA?
As pointed out in the comments, every good question about "is this secure?" requires a threat model, so here's mine:
The security of the whole protocol is broken if an attacker is able to learn the secret symmetric key $H(k)$ while it's valid. The attacker may not compromise the server (i.e. he can't control/spy on RAM / CPU and may not learn the stored $\mathcal E(k)$). An attacker not breaking into the server may have successfully attacked the client (except for the trusted device) and he may be able to completely modify and read the network traffic. I think an attacker without having broken into the server may be computationally unbounded.
If not clear until now, the instantiation of $\mathcal E(k)$ is $\mathcal E(k):=k^e \bmod N$ with $e,N$ being standard RSA parameters. |
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the
way in which we match up these two objects, to see that they look the same.
For example, any two of these squares look the same after you rotate and/or reflect them:
An isomorphism between two of these squares is a
process of rotating and/or reflecting the first so it looks just like the second.
As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse:
Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that
and
I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\).
Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse.
Now we're ready for isomorphisms!
Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\).
Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like!
What's an isomorphism in the category \(\mathbf{3}\)? Remember, this is a free category on a graph:
The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2:
$$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1:
$$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms:
$$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism!
In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism.
We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a
preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: x \to y\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, if there is a morphism \(f : x \to y\) and a morphism \(g: x \to y\) then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, if there is a morphism \(f : x \to y\) and a morphism \(g: x \to y\) then both \(f\) and \(g\) are identity morphisms, so \(x = y\).
Puzzle 144 says that in a poset, the only isomorphisms are identities.
Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions.
Puzzle 145. Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\).
So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them.
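Puzzles 145 and 146 can be made concrete with a tiny sketch (the names are ours): a finite bijection represented as a dict, whose inverse just reverses the pairs, satisfying both composite identities.

```python
# In Set, an isomorphism is a bijection; its inverse reverses the pairs.
f = {'a': 1, 'b': 2, 'c': 3}            # a bijection X -> Y as a dict
f_inv = {v: k for k, v in f.items()}    # the inverse morphism Y -> X

assert all(f_inv[f[x]] == x for x in f)      # f_inv ∘ f = 1_X
assert all(f[f_inv[y]] == y for y in f_inv)  # f ∘ f_inv = 1_Y
```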
One more example:
Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism.
This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the
isomorphisms deserve to be called 'natural isomorphisms'.
But what are they like?
Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism
$$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes:
Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism
$$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that
$$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means
$$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\).
In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\).
But the converse is true, too! It takes a
little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147. Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism.
Doing this will help you understand natural isomorphisms. But you also need examples!
Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal!
We should talk about this. |
The most widely known way to calculate a modular inverse is to find the smallest $k\in\mathbb N$ such that the following expression is an integer:
$$\frac{1+k\cdot \varphi}{e}$$
If this is an integer, it is the inverse of $e$ modulo $\varphi$ and thus $d$, and you'll find it within at most $e$ tries.
The issue, obviously, is that the run-time of this grows linearly with the size of $e$, as opposed to the extended Euclidean algorithm (EEA), which runs in time proportional to (a power of) the logarithm of $e$; the EEA will be especially much faster than the above method when $e$ is chosen as large as the modulus. If you want to see examples and how-tos on the algorithm, Wikipedia is your friend as usual.
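The two approaches can be compared in a few lines (a sketch using the classic toy parameters $e=17$, $\varphi=3120$ from $p=61$, $q=53$; since Python 3.8, `pow(e, -1, phi)` computes the EEA-based inverse directly):

```python
# Naive search for d = (1 + k*phi)/e versus the extended-Euclid inverse.
def naive_inverse(e, phi):
    for k in range(1, e + 1):
        if (1 + k * phi) % e == 0:
            return (1 + k * phi) // e
    return None

e, phi = 17, 3120            # toy RSA-style parameters (p=61, q=53)
d = naive_inverse(e, phi)
print(d)                     # → 2753
assert d == pow(e, -1, phi)  # EEA-based inverse (Python 3.8+) agrees
assert d * e % phi == 1      # and it really is the inverse
```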
This is a side question which is more motivated by teaching than research.
First, I am trying to convince myself that sequences appear before series (as numerical approximations to "interesting" quantities; on the other hand, decimal expansions -- especially infinite -- are more likely to be series).
Secondly, is it natural for sequences to be placed prior to series in a calculus course?
So, which one is more original, a sequence or a series?
After-dinner edit. We define a sequence to be... a function mapping the positive integers to a set(?). We define a series to be... a formal infinite sum $\sum_{n=1}^\infty a_n$(?). Tell me your way to "define" these two guys; I do not believe they are very related.
There are no doubts that it is easier to define
convergence of series via convergence of sequences, but it does not imply their "primogeniture". The notion of Cauchy sequence is an elegant way to build the apparatus not only of sequences but also of real numbers; as such it can serve as a definition of series: a series is a formal infinite sum $\sum_{n=1}^\infty a_n$, and it is called a convergent series if for any $\epsilon>0$ there exists an $N=N(\epsilon)$ such that for any $m>n>N$ the sum $|a_n+\dots+a_m|<\epsilon$. The real numbers then are nothing but representatives of equivalence classes of convergent series. (I have no desire here to expand all the details.) A sequence $b_n$ is convergent when the corresponding series $\sum_{n=1}^\infty a_n$, where $a_1=b_1$ and $a_n=b_n-b_{n-1}$ for $n\ge2$, converges. It would be honest to say that, besides the trivialities like "algebra of limits", the techniques for investigating convergence of series are quite independent from those of sequences. And it does not sound impossible to do series prior to sequences.
Historically, all these convergence/divergence issues were purely intuitive for both sequences and series, and they both were on the market for many centuries. I ask whether there exists overwhelming historical support for the notion of sequence coming first.
Main Page The Problem
Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A
combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math].

Density Hales-Jewett theorem (DHJ(3), the [math]k=3[/math] case): [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math]
The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers.
Useful background materials
Some background to the project can be found here. General discussion on massively collaborative "polymath" projects can be found here. A cheatsheet for editing the wiki may be found here. Finally, here is the general Wiki user's guide
Threads
(1-199) A combinatorial approach to density Hales-Jewett (inactive)
(200-299) Upper and lower bounds for the density Hales-Jewett problem (final call)
(300-399) The triangle-removal approach (inactive)
(400-499) Quasirandomness and obstructions to uniformity (inactive)
(500-599) Possible proof strategies (active)
(600-699) A reading seminar on density Hales-Jewett (active)
(700-799) Bounds for the first few density Hales-Jewett numbers, and related quantities (arriving at station)
Here are some unsolved problems arising from the above threads.
Here is a tidy problem page.
Bibliography
Density Hales-Jewett
H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem for k=3", Graph Theory and Combinatorics (Cambridge, 1988), Discrete Math. 75 (1989), no. 1-3, 227–241.
H. Furstenberg, Y. Katznelson, "A density version of the Hales-Jewett theorem", J. Anal. Math. 57 (1991), 64–119.
R. McCutcheon, "The conclusion of the proof of the density Hales-Jewett theorem for k=3", unpublished.
Behrend-type constructions
M. Elkin, "An Improved Construction of Progression-Free Sets", preprint.
B. Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint.
K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint.
Triangles and corners
M. Ajtai, E. Szemerédi, "Sets of lattice points that form no squares", Stud. Sci. Math. Hungar. 9 (1974), 9–11 (1975). MR369299
I. Ruzsa, E. Szemerédi, "Triple systems with no six points carrying three triangles", Combinatorics (Proc. Fifth Hungarian Colloq., Keszthely, 1976), Vol. II, pp. 939–945, Colloq. Math. Soc. János Bolyai, 18, North-Holland, Amsterdam-New York, 1978. MR519318
J. Solymosi, "A note on a question of Erdős and Graham", Combin. Probab. Comput. 13 (2004), no. 2, 263–267. MR2047239
How can I prove that non-regular languages are closed under concatenation using only the non-regularity of $L=\{a^nb^n|n\ge1\}$ ?
You can't prove it because it isn't true: the class of non-regular languages isn't closed under concatenation.
Let $X\subseteq \mathbb{N}$ be any undecidable set containing $1$ and every even number. For example, take your favourite undecidable set $S$ and let $$X = \{0, 2, 4, \dots\} \cup \{1\} \cup \{2i+1\mid i\in S\}\,.$$ The language $\mathcal{L} = \{a^i\mid i\in X\}$ is undecidable, so it certainly isn't regular. But $$\mathcal{L}\cdot\mathcal{L} = \{a^{i+j}\mid i,j\in X\} = \{a^i\mid i\in\mathbb{N}\}\,,$$ is regular.
Here is another example showing that the claim is false. Let $L$ be any non-regular language over $\{0,1\}$, and take $$ L_0 = \{ 0 w : w \in L \} \cup \{ 1 w : w \in \{0,1\}^* \} \cup \{\epsilon\}, \\ L_1 = \{ 1 w : w \in L \} \cup \{ 0 w : w \in \{0,1\}^* \} \cup \{\epsilon\}. $$ You can check that $L_0,L_1$ are not regular, but $L_0L_1 = \{0,1\}^*$ is regular.
Another nice example uses Lagrange's four square theorem, which states that every non-negative integer is a sum of four squares. Define $L_1 = \{1^{n^2} : n \geq 0\}$, $L_2 = L_1^2$ and $L_4 = L_2^2$. The four square theorem shows that $L_4 = 1^*$ is regular. Conversely, the pumping lemma shows that $L_1$ is not regular. Hence either $L_2$ is regular, in which case $L_4 = L_2^2$ is a counterexample to the claim, or $L_2$ is not regular, in which case $L_2 = L_1^2$ is a counterexample to the claim. (In fact, $L_2$ is not regular.)
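To make the four-square step concrete, here is a short Python sketch (my addition, not from the original answer) that brute-force verifies Lagrange's theorem for small $n$, i.e. that $1^n \in L_4$ for every $n$ checked:

```python
from math import isqrt

def is_sum_of_four_squares(n):
    # Brute force over a <= b <= c; check that the remainder is a perfect square.
    for a in range(isqrt(n) + 1):
        for b in range(a, isqrt(n - a * a) + 1):
            for c in range(b, isqrt(n - a * a - b * b) + 1):
                d2 = n - a * a - b * b - c * c
                if isqrt(d2) ** 2 == d2:
                    return True
    return False

# Every n in this range is a sum of four squares, so 1^n is in L4 = (L1)^4.
assert all(is_sum_of_four_squares(n) for n in range(500))
```

Of course this only checks finitely many cases; the theorem itself is what guarantees $L_4 = 1^*$.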
Yet another example assumes the Goldbach conjecture: every even integer larger than 4 is the sum of two odd primes. Let $L = \{ 1^p : \text{$p$ is an odd prime} \}$. Using the pumping lemma it is not hard to show that $L$ isn't regular. Assuming the Goldbach conjecture, $L^2 = \{ (11)^n : n \geq 3 \}$, which is regular.
Let $\pi$ be a $\Bbb{P}$-name for a partial order, i.e., there are names $\pi'$ and $\pi''$ such that $$1\Vdash_\Bbb{P} \pi '' \in \pi\land (\text{$\pi'$ is a partial order on $\pi$ with largest element $\pi''$}).$$ We say $\pi$ is full for $\searrow$ $\omega$-sequences if whenever $p\in \Bbb{P}$, $\rho_n\in\operatorname{dom} \pi$ for each $n<\omega$, and $$p\Vdash \rho_n\in\pi \land \rho_{n+1} \le \rho_n$$ for each $n$, then there is a $\sigma\in\operatorname{dom} \pi$ s.t. $p\Vdash \sigma\in\pi$ and $p\Vdash \sigma\le \rho_n$ for each $n$.
I am trying to prove the following statement: if $\Bbb{P}$ is $\omega_1$-closed and $\pi$ is full for $\searrow$ $\omega$-sequences, then $1\Vdash \text{$\pi$ is $\omega_1$-closed}$.
I made two 'proofs': the former I gather is wrong, and the latter is incomplete. The 'false proof' goes as follows:
Let $\rho$ be a $\Bbb{P}$-name satisfying $$1\Vdash \forall n<\check{\omega}:\rho(n+1)\le \rho(n) \,\land\, \rho(n)\in \pi .$$ So for each $n$, $1\Vdash \rho(n+1)\le \rho(n)$ and $1\Vdash \rho(n)\in\pi$. (*)
Therefore, since $\pi$ is full for $\searrow$ $\omega$-sequences, we can find a name $\sigma$ such that $1\Vdash \sigma\le \rho(n)$ for all $n<\omega$ and $1\Vdash \sigma\in\pi$, so $1\Vdash \forall n<\check{\omega}: \sigma\le\rho(n)$ (**) and therefore $1\Vdash \text{$\pi$ is $\omega_1$-closed}$.
I suspect that steps (*) and (**) are problematic. Is that right? I don't see why they fail, so I would appreciate an explanation.
I believe the 'proof' given below goes in the right direction:
Let $p\in \Bbb{P}$ be arbitrary and $\rho$ be a $\Bbb{P}$-name satisfying $$p\Vdash \forall n<\check{\omega}: \rho(n+1) \le\rho(n)\,\land\, \rho(n)\in\pi.$$ Since $p\Vdash \exists x \in \pi : \rho(\check{0}) = x$, there is $p_0\le p$ and a $\Bbb{P}$-name $\sigma_0$ s.t. $$p_0 \Vdash \sigma_0\in \pi\,\land\, \rho(\check{0}) = \sigma_0.$$ Similarly, we can take $p_1\le p_0$ and a $\Bbb{P}$-name $\sigma_1$ satisfying $$p_1 \Vdash \sigma_1\in \pi\,\land\, \rho(\check{1}) = \sigma_1,$$ and so on. Since $\langle p_n : n<\omega\rangle$ is a decreasing sequence in $\Bbb{P}$, we can find some $q\in \Bbb{P}$ with $q\le p_n$ for all $n<\omega$. Such a $q$ satisfies $$q\Vdash \sigma_{n+1}\le\sigma_n\,\land\, \sigma_n\in\pi$$ for each $n<\omega$, so we can find a $\Bbb{P}$-name $\sigma$ s.t. $q\Vdash \sigma\le\sigma_n$ and $q\Vdash \sigma\in\pi$ for each $n$.
In the above proof, however, I wonder whether I can conclude that $q\Vdash \forall n<\check{\omega} : \sigma\le \rho(n)$ holds. I would appreciate any help, thanks!
Special cases of the multivariable chain rule
The general statement of the multivariable chain rule is the following.
Chain Rule: For differentiable functions $\vc{g}: \R^m \rightarrow \R^k$ and $\vc{f}: \R^k \rightarrow \R^n$ (confused?), the derivative matrix of the composition $\vc{h}=\vc{f} \circ \vc{g}$ (i.e., $\vc{h}(\vc{x}) = \vc{f}(\vc{g}(\vc{x}))$) at the point $\vc{a}$ is the product of the derivative matrices for $\vc{f}$ and $\vc{g}$: \begin{gather} D\vc{h}(\vc{a})= D(\vc{f} \circ \vc{g})(\vc{a}) = D{\vc{f}}\bigl(\vc{g}(\vc{a})\bigr) D{\vc{g}}(\vc{a}). \label{general_chain_rule}\end{gather}
In this form, the multivariable chain rule looks similar to the one-variable chain rule: $$\diff{}{x}(f \circ g)(x) = \diff{}{x}f(g(x)) = f'(g(x))g'(x).$$ The biggest difference in the multivariable case is that the ordinary derivative has been replaced with the derivative matrix. One important fact to remember is that the matrix of partial derivatives for $\vc{f}$ is evaluated at $\vc{g}(\vc{a})$,
not at $\vc{a}$.
Using the above general form may be the easiest way to learn the chain rule. If you are comfortable forming derivative matrices, multiplying matrices, and using the one-variable chain rule, then using the chain rule \eqref{general_chain_rule} doesn't require memorizing a series of formulas and determining which formula applies to a given problem.
On the other hand, having a few special case formulas available can save some work. For a given type of problem, you can form the matrices and calculate their product once to obtain a formula valid for that particular type of problem. Then, for each example of that problem type, you can plug the particular functions into the special case formula and get to the final result more quickly.
In the following, we derive formulas for a few special cases. In each case, the outer function $f$ is a scalar-valued function. We look at two groups of special cases: when $\vc{g}$ is a function of one variable and when $\vc{g}$ is a function of two variables.
$\vc{g}$ is a function of one variable
When the inner function $\vc{g}$ is a function of one variable, $\vc{g}: \R \to \R^n$, and $f$ is a scalar-valued function, $f: \R^n \to \R$, then the composition $h(t)=f(\vc{g}(t))$ is just a scalar-valued function of a single variable $h: \R \to \R$. Its derivative is just a single number $h'(t)$. We show how to express this single number as a dot product of two vectors.
Since $\vc{g}$ is a function of a single variable, we can view $\vc{g}$ as parametrizing a curve. We can write the derivative of the parametrized curve as a vector \begin{align*} D\vc{g}(t) = \left[ \begin{array}{c} \diff{g_1}{t}(t)\\ \diff{g_2}{t}(t)\\ \vdots\\ \diff{g_n}{t}(t) \end{array} \right] = (g_1'(t),g_2'(t),\ldots,g_n'(t)) =\vc{g}'(t). \end{align*}
Since we are considering the case where $f$ is a scalar-valued function, its derivative matrix can be viewed as the gradient vector: $$\nabla f(\vc{x}) = \left(\pdiff{f}{x_1}(\vc{x}), \pdiff{f}{x_2}(\vc{x}), \cdots, \pdiff{f}{x_n}(\vc{x}) \right).$$ The matrix product of the general chain rule $Df(\vc{g}(t))Dg(t)$ can then be viewed as a dot product between the gradient vector $\nabla f(\vc{x})$ and the vector $\vc{g}'(t)$. We can write the chain rule as \begin{align} h'(t) = \nabla f(\vc{g}(t)) \cdot \vc{g}'(t). \label{chainruledotproduct} \end{align}
If $\vc{g}(t)$ is a one-dimensional function $x=g(t)$ and $f(x)$ is a function of a single variable, then we are back to the single-variable chain rule and equation \eqref{chainruledotproduct} becomes $ h'(t) = f'(g(t))g'(t).$ If we wrote $g(t)=x(t)$, we could also write this chain rule as $\diff{h}{t} = \diff{f}{x} \diff{x}{t}$, where we neglect to write the arguments of each function.
If, on the other hand, $\vc{g}(t)$ is a two-dimensional function $(x,y)=\vc{g}(t)=(g_1(t),g_2(t))$ and $f(\vc{x})$ is a function of two variables, $f(x,y)$, then we can multiply out the dot product of equation \eqref{chainruledotproduct} to write it as \begin{align} h'(t) = \pdiff{f}{x}(\vc{g}(t))g_1'(t) + \pdiff{f}{y}(\vc{g}(t))g_2'(t). \tag{2a} \end{align} This special case is exactly equation (4) of the chain rule introduction.
If we write $\vc{g}(t)=(g_1(t),g_2(t))=(x(t),y(t))$ and its derivative as $\vc{g}'(t)=\left(\diff{x}{t},\diff{y}{t}\right)$, then we can write this formula in a way that some people find easier to memorize: \begin{align} \diff{h}{t} = \pdiff{f}{x}\diff{x}{t} + \pdiff{f}{y}\diff{y}{t}. \tag{2a'} \end{align} This formula looks especially simple since we didn't write the arguments of each function.
We can write a similar expression for three-dimensional $\vc{g}(t)=(g_1(t),g_2(t),g_3(t))=(x(t),y(t),z(t))$ and $f(x,y,z)$: \begin{align} h'(t) = \pdiff{f}{x}(\vc{g}(t))g_1'(t) + \pdiff{f}{y}(\vc{g}(t))g_2'(t) + \pdiff{f}{z}(\vc{g}(t))g_3'(t). \tag{2b} \end{align} The simplified version of the 3D formula is \begin{align} \diff{h}{t} = \pdiff{f}{x}\diff{x}{t} + \pdiff{f}{y}\diff{y}{t} + \pdiff{f}{z}\diff{z}{t}. \tag{2b'} \end{align} (Some people even write such a formula as $\diff{f}{t} = \pdiff{f}{x}\diff{x}{t} + \pdiff{f}{y}\diff{y}{t} + \pdiff{f}{z}\diff{z}{t}$ where $f$ is viewed as both a function of $t$ and as a function of $\vc{x}=(x,y,z)$.)
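As a concrete check, the following Python sketch (my own illustration; the sample functions $f(x,y,z)=xy+z^2$ and $\vc{g}(t)=(\cos t, \sin t, t)$ are hypothetical choices) compares the dot-product form of the chain rule against a central finite difference:

```python
# Numerical check of h'(t) = grad f(g(t)) . g'(t) for sample functions
# f(x,y,z) = x*y + z^2 and g(t) = (cos t, sin t, t).
import math

def f(x, y, z):
    return x * y + z ** 2

def grad_f(x, y, z):
    return (y, x, 2 * z)          # gradient of f

def g(t):
    return (math.cos(t), math.sin(t), t)

def g_prime(t):
    return (-math.sin(t), math.cos(t), 1.0)

def h(t):
    return f(*g(t))               # the composition h = f o g

t = 0.7
# Chain-rule value: dot product of the gradient with the velocity vector.
chain = sum(u * v for u, v in zip(grad_f(*g(t)), g_prime(t)))
# Central finite-difference approximation of h'(t).
eps = 1e-6
fd = (h(t + eps) - h(t - eps)) / (2 * eps)
assert abs(chain - fd) < 1e-6
```

For these particular functions $h(t)=\cos t\sin t+t^2$, so the chain-rule value agrees with the analytic derivative $\cos 2t + 2t$ as well.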
$\vc{g}$ is a function of two variables
If $\vc{g}(s,t)$ is a vector-valued function of two variables, $\vc{g}: \R^2 \to \R^n$, then we can no longer write its derivative as a vector. Instead, the derivative $D\vc{g}(s,t)$ will be a matrix of partial derivatives with two columns, i.e., an $n \times 2$ matrix. Since the derivative of $f: \R^n \to \R$ is a $1 \times n$ matrix, the derivative of the composition $h(s,t)=f(\vc{g}(s,t))$ will be a $1 \times 2$ matrix: \begin{align*} Dh(s,t) = \left[\pdiff{h}{s}(s,t) \quad \pdiff{h}{t}(s,t)\right]. \end{align*} In a similar manner to the above procedure, we can write down component formulas for both $\pdiff{h}{s}$ and $\pdiff{h}{t}$, depending on the dimension $n$.
If $g$ is a scalar-valued function, $x(s,t)=g(s,t)$, then its derivative is the $1 \times 2$ matrix \begin{align*} Dg(s,t) = \left[\pdiff{g}{s}(s,t) \quad \pdiff{g}{t}(s,t)\right] = \left[\pdiff{x}{s} \quad \pdiff{x}{t}\right], \end{align*} where in the second shortcut form, we neglect function arguments. In this case, $f(x)$ must be a function of a single variable and its derivative is the scalar $f'(x)$. By the chain rule \eqref{general_chain_rule}, the derivative of $h$ is \begin{align*} \left[\pdiff{h}{s}(s,t) \quad \pdiff{h}{t}(s,t)\right] = f'(g(s,t)) \left[\pdiff{g}{s}(s,t) \quad \pdiff{g}{t}(s,t)\right]. \end{align*} Writing each component separately, we can write this special case of the chain rule as \begin{align} \pdiff{h}{s}(s,t) &= f'(g(s,t))\pdiff{g}{s}(s,t)\notag\\ \pdiff{h}{t}(s,t) &= f'(g(s,t))\pdiff{g}{t}(s,t)\tag{3a} \end{align} or writing $g$ as $x$, we can write the simplified form as \begin{align} \pdiff{h}{s} &= \diff{f}{x}\pdiff{x}{s}\notag\\ \pdiff{h}{t} &= \diff{f}{x}\pdiff{x}{t}.\tag{3a'} \end{align}
If $\vc{g}$ is a two-dimensional function $(x(s,t),y(s,t))=\vc{g}(s,t)=(g_1(s,t),g_2(s,t))$ and $f$ is a function of two variables, $f(x,y)$, then $D\vc{g}(s,t)$ is a $2 \times 2$ matrix and $Df(x,y)$ is a $1 \times 2$ matrix. The chain rule \eqref{general_chain_rule} can be written as \begin{align*} \left[\pdiff{h}{s}(s,t) \quad \pdiff{h}{t}(s,t)\right] = \left[\pdiff{f}{x}(\vc{g}(s,t))\quad \pdiff{f}{y}(\vc{g}(s,t))\right] \left[\begin{array}{cc}\pdiff{g_1}{s}(s,t) & \pdiff{g_1}{t}(s,t)\\ \pdiff{g_2}{s}(s,t) & \pdiff{g_2}{t}(s,t) \end{array}\right]. \end{align*} We multiply the right two matrices and write each component separately to write the result as \begin{align} \pdiff{h}{s}(s,t) &= \pdiff{f}{x}(\vc{g}(s,t))\pdiff{g_1}{s}(s,t) +\pdiff{f}{y}(\vc{g}(s,t))\pdiff{g_2}{s}(s,t)\notag\\ \pdiff{h}{t}(s,t) &= \pdiff{f}{x}(\vc{g}(s,t))\pdiff{g_1}{t}(s,t) +\pdiff{f}{y}(\vc{g}(s,t))\pdiff{g_2}{t}(s,t). \label{chainrule22}\tag{3b} \end{align} We can simplify by writing $(g_1,g_2)$ as $(x,y)$: \begin{align} \pdiff{h}{s} &= \pdiff{f}{x}\pdiff{x}{s} +\pdiff{f}{y}\pdiff{y}{s}\notag\\ \pdiff{h}{t} &= \pdiff{f}{x}\pdiff{x}{t} +\pdiff{f}{y}\pdiff{y}{t}.\tag{3b'} \end{align}
Lastly, in three dimensions, with functions $\vc{g}: \R^2 \to \R^3$ and $f: \R^3 \to \R$, we can write the chain rule in matrix form as \begin{align*} \left[\pdiff{h}{s}(s,t) \quad \pdiff{h}{t}(s,t)\right] = \left[\pdiff{f}{x}(\vc{g}(s,t))\quad \pdiff{f}{y}(\vc{g}(s,t)) \quad \pdiff{f}{z}(\vc{g}(s,t))\right] \left[\begin{array}{cc}\pdiff{g_1}{s}(s,t) & \pdiff{g_1}{t}(s,t)\\ \pdiff{g_2}{s}(s,t) & \pdiff{g_2}{t}(s,t)\\ \pdiff{g_3}{s}(s,t) & \pdiff{g_3}{t}(s,t) \end{array}\right], \end{align*} and in component form as \begin{align} \pdiff{h}{s}(s,t) &= \pdiff{f}{x}(\vc{g}(s,t))\pdiff{g_1}{s}(s,t) +\pdiff{f}{y}(\vc{g}(s,t))\pdiff{g_2}{s}(s,t) +\pdiff{f}{z}(\vc{g}(s,t))\pdiff{g_3}{s}(s,t)\notag\\ \pdiff{h}{t}(s,t) &= \pdiff{f}{x}(\vc{g}(s,t))\pdiff{g_1}{t}(s,t) +\pdiff{f}{y}(\vc{g}(s,t))\pdiff{g_2}{t}(s,t) +\pdiff{f}{z}(\vc{g}(s,t))\pdiff{g_3}{t}(s,t).\tag{3c} \end{align} Writing $\vc{g}(s,t)=(g_1(s,t),g_2(s,t),g_3(s,t))$ as $(x(s,t),y(s,t),z(s,t))$, we obtain the simplified form \begin{align} \pdiff{h}{s} &= \pdiff{f}{x}\pdiff{x}{s} +\pdiff{f}{y}\pdiff{y}{s}+\pdiff{f}{z}\pdiff{z}{s}\notag\\ \pdiff{h}{t} &= \pdiff{f}{x}\pdiff{x}{t} +\pdiff{f}{y}\pdiff{y}{t}+\pdiff{f}{z}\pdiff{z}{t}.\tag{3c'} \end{align}
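The same kind of numerical check works for the matrix form. In this Python sketch (again an illustration; the sample functions $\vc{g}(s,t)=(st,\,s+t,\,s^2)$ and $f(x,y,z)=xyz$ are hypothetical choices), the $1 \times 3$ row $Df$ is multiplied by the $3 \times 2$ matrix $D\vc{g}$ and the result is compared with finite differences:

```python
# Matrix-form check of Dh = Df(g(s,t)) Dg(s,t) for sample functions
# g(s,t) = (s*t, s + t, s^2) and f(x,y,z) = x*y*z.
def g(s, t):
    return (s * t, s + t, s * s)

def Dg(s, t):
    # 3 x 2 matrix of partials; row i holds (dg_i/ds, dg_i/dt)
    return [[t, s], [1.0, 1.0], [2 * s, 0.0]]

def f(x, y, z):
    return x * y * z

def Df(x, y, z):
    # 1 x 3 derivative (gradient) row of f
    return [y * z, x * z, x * y]

def h(s, t):
    return f(*g(s, t))

s, t = 1.3, -0.4
row, J = Df(*g(s, t)), Dg(s, t)
# (1 x 3) times (3 x 2) gives the 1 x 2 row [dh/ds, dh/dt].
dh = [sum(row[i] * J[i][j] for i in range(3)) for j in range(2)]

eps = 1e-6
fd_s = (h(s + eps, t) - h(s - eps, t)) / (2 * eps)
fd_t = (h(s, t + eps) - h(s, t - eps)) / (2 * eps)
assert abs(dh[0] - fd_s) < 1e-5 and abs(dh[1] - fd_t) < 1e-5
```

Here $h(s,t)=s^4t+s^3t^2$, so the matrix product reproduces the analytic partials $4s^3t+3s^2t^2$ and $s^4+2s^3t$.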
Sun, 15 Oct 2017
[ I started this article in March and then forgot about it. Ooops! ]
Back in February I posted an article about how there are exactly 715 nondecreasing sequences of 4 digits. I said that !!S(10, 4)!! was the set of such sequences and !!C(10, 4)!! was the number of such sequences, and in general $$C(d,n) = \binom{n+d-1}{d-1} = \binom{n+d-1}{n}$$ so in particular $$C(10,4) = \binom{13}{4} = 715.$$
I described more than one method of seeing this, but I didn't mention the method I had found first, which was to use the Cauchy-Frobenius-Redfield-Pólya-Burnside counting lemma. I explained the lemma in detail some time ago, with beautiful illustrated examples, so I won't repeat the explanation here. The Burnside lemma is a kind of big hammer to use here, but I like big hammers. And the results of this application of the big hammer are pretty good, and justify it in the end.
To count the number of distinct sequences of 4 digits, where some sequences are considered “the same” we first identify a symmetry group whose orbits are the equivalence classes of sequences. Here the symmetry group is !!S_4!!, the group that permutes the elements of the sequence, because two sequences are considered “the same” if they have exactly the same digits but possibly in a different order, and the elements of !!S_4!! acting on the sequences are exactly what you want to permute the elements into some different order.
Then you tabulate how many of the 10,000 original sequences are left fixed by each element !!p!! of !!S_4!!, which depends only on the number of cycles of !!p!!. (I have also discussed cycle classes of permutations before.) If !!p!! contains !!n!! cycles, then !!p!! leaves exactly !!10^n!! of the !!10^4!! sequences fixed.

    Cycles   How many permutations?   Sequences fixed    Total
      4                1                   10,000        10,000
      3                6                    1,000         6,000
      2               11                      100         1,100
      1                6                       10            60
                      24                                 17,160
(Skip this paragraph if you already understand the table. The four rows above are an abbreviation of the full table, which has 24 rows, one for each of the 24 permutations of order 4. The “How many permutations?” column says how many times each row should be repeated. So for example the second row abbreviates 6 rows, one for each of the 6 permutations with three cycles, which each leave 1,000 sequences fixed, for a total of 6,000 in the second row, and the total for all 24 rows is 17,160. There are two different types of permutations that have two cycles, with 3 and 8 permutations respectively, and I have collapsed these into a single row.)
Then the magic happens: We average the number left fixed by each permutation and get !!\frac{17160}{24} = 715!! which we already know is the right answer.
Now suppose we knew how many permutations there were with each number of cycles. Let's write !!\def\st#1#2{\left[{#1\atop #2}\right]}\st nk!! for the number of permutations of !!n!! things that have exactly !!k!! cycles. For example, from the table above we see that $$\st 4 4 = 1,\quad \st 4 3 = 6,\quad \st 4 2 = 11,\quad \st 4 1 = 6.$$
Then applying Burnside's lemma we can conclude that $$C(d, n) = \frac1{n!}\sum_i \st ni d^i .\tag{$\spadesuit$}$$ So for example the table above computes !!C(10,4) = \frac1{24}\sum_i \st 4i 10^i = 715!!.
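You can replicate this computation by brute force. Here's a short Python sketch (added for illustration) that counts, for each permutation !!p!! of the four positions, how many of the !!10^4!! digit sequences !!p!! fixes, and then averages over !!S_4!!:

```python
# Brute-force Burnside check: each permutation p of the n positions fixes
# exactly d^(number of cycles of p) of the d^n sequences; average over all p.
from itertools import permutations

def count_multisets(d, n):
    total, nperms = 0, 0
    for p in permutations(range(n)):
        nperms += 1
        seen, cycles = set(), 0
        for i in range(n):
            if i not in seen:          # walk a not-yet-visited cycle of p
                cycles += 1
                j = i
                while j not in seen:
                    seen.add(j)
                    j = p[j]
        total += d ** cycles
    return total // nperms             # the Burnside average

assert count_multisets(10, 4) == 715   # 17,160 / 24
```

The per-cycle-count totals (10,000 + 6,000 + 1,100 + 60 = 17,160) and the average 715 match the numbers above.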
At some point in looking into this I noticed that $$\def\rp#1#2{#1^{\overline{#2}}}%\def\fp#1#2{#1^{\underline{#2}}}%C(d,n) =\frac1{n!}\rp dn$$ where !!\rp dn!! is the so-called “rising power” of !!d!!: $$\rp dn = d\cdot(d+1)(d+2)\cdots(d+n-1).$$ I don't think I had a proof of this; I just noticed that !!C(d, 1) = d!! and !!C(d, 2) = \frac12(d^2+d)!! (both obvious), and the Burnside lemma analysis of the !!n=4!! case had just given me !!C(d, 4) = \frac1{24}(d^4 + 6d^3 + 11d^2 + 6d)!!. Even if one doesn't immediately recognize this latter polynomial it looks like it ought to factor, and on factoring it one gets !!d(d+1)(d+2)(d+3)!!. So it's easy to conjecture !!C(d, n) = \frac1{n!}\rp dn!! and indeed, this is easy to prove from !!(\spadesuit)!!: The !!\st n k!! obey the recurrence $$\st{n+1}k = n \st nk + \st n{k-1}\tag{$\color{green}{\star}$}$$ (by an easy combinatorial argument [1]), and the coefficients of !!\rp dn!! obey the same recurrence [2].
In general !!\rp dn = \fp{(n+d-1)}n!! so we have !!C(d, n) = \frac1{n!}\rp dn = \frac1{n!}\fp{(n+d-1)}n = \binom{n+d-1}{n} = \binom{n+d-1}{d-1}!! which ties the knot with the formula from the previous article. In particular, !!C(10,4) = \binom{13}9!!.
I have a bunch more to say about this but this article has already been in the oven long enough, so I'll cut the scroll here.
[1] The combinatorial argument that justifies !!(\color{green}{\star})!! is as follows: The Stirling number !!\st nk!! counts the number of permutations of order !!n!! with exactly !!k!! cycles. To get a permutation of order !!n+1!! with exactly !!k!! cycles, we can take one of the !!\st nk!! permutations of order !!n!! with !!k!! cycles and insert the new element into one of the existing cycles after any of the !!n!! elements. Or we can take one of the !!\st n{k-1}!! permutations with only !!k-1!! cycles and add the new element in its own cycle.)
[2] We want to show that the coefficients of !!\rp nk!! obey the same recurrence as !!(\color{green}{\star})!!. Let's say that the coefficient of the !!n^i!! term in !!\rp nk!! is !!c_i!!. We have $$\rp n{k+1} = \rp nk\cdot (n+k) = \rp nk \cdot n + \rp nk \cdot k $$ so the coefficient of the !!n^i!! term on the left is !!c_{i-1} + kc_i!!.
From: Page, Bill Subject: RE: [Axiom-developer] about Expression Integer Date: Fri, 24 Feb 2006 08:12:16 -0500 Ralf,
On Friday, February 24, 2006 5:12 AM you wrote:
> ...
> Let us take again the view that a polynomial ring "R[x]" (it's
> in quotes since I haven't defined that notation yet) is the
> ring
>
> P = \bigoplus_{e \in N} R
>
> where N are the non-negative integers.
> Elements in P (the polynomials) are just functions with finite
> support from N to R. So they are nothing else than (infinite)
> sequences of numbers from R where only finitely many non-zeros appear.
>
Ignoring for now the details of multivariate polynomials, in
Axiom we would write:
P:=POLY R
where R is some Ring (such as INT).
> Now, let x \in P be such that x(1)=1 and x(e)=0 for e \in
> N\setminus\{1\}.
>
> If you like, you can consider this x to be the "indeterminate"
> or "variable" of P.
>
No, this is not quite complete. There is a function 'coerce'
such that 'coerce(x::Symbol)$P' \in P. Axiom displays both
this polynomial and the symbol in the same way except the
Type is different.
> It is also clear that by construction this x is different
> from anything that lives in R.
Yes, I agree with this. Except you say "numbers from R" which
could be misleading if R itself is a polynomial ring or a more
general ring (such as Axiom's EXPR INT).
> ...
> **What is differentiation in P?**
>
> (That is a bit informal here.)
> Given that elements in P have finite support, we can represent
> them as finite tuples.
> Let a := (a_0, a_1, ..., a_n) \in P, then the (formal)
> derivative of a is
>
> a' = (a_1, 2 a_2, 3 a_3, ..., n a_n)
>
> (Note that a_0 is missing.)
>
> If you write a as a formal sum, using the x introduced above,
> you get.
>
> a = \sum_{e=0}^n a_e x^e
> a' = \sum_{e=0}^{n-1} a_{e+1} x^e
>
> just as expected.
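As a quick illustration of the quoted construction, here is a hypothetical Python sketch (not Axiom code) of the formal derivative on coefficient tuples:

```python
# A polynomial with finite support is a coefficient tuple
# (a_0, a_1, ..., a_n); its formal derivative is (a_1, 2*a_2, ..., n*a_n).
def formal_derivative(coeffs):
    # multiply each a_e by its exponent e, then drop the e = 0 entry
    return tuple(e * a for e, a in enumerate(coeffs))[1:]

# x^3 + 2x + 5 is (5, 2, 0, 1); its derivative 3x^2 + 2 is (2, 0, 3)
assert formal_derivative((5, 2, 0, 1)) == (2, 0, 3)
```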
The definition of differentiation that concerns me is specified
with respect to some symbol, e.g.
differentiate:(POLY R, Symbol) -> POLY R
If R is a Ring (such as another polynomial ring) which also
allows for example 'coerce(x::Symbol)$R' (in Axiom we would
write 'R has PDRING Symbol') then it makes good sense, I think
to define this differentiation as an extension of the
differentiation in R (DifferentialExtension). This is what
is done in some Axiom polynomial domains and not others.
> ...
> I think that the current implementation of polynomials in
> Axiom more closely follow the formal approach. If somebody
> wants polynomials to behave differently, then there should
> be a clear documentation and an **additional** domain that
> support these ideas. But, please don't modify the existing
> polynomials in Axiom.
>
I agree. But we are not talking about changing the way polynomials
behave, we are trying to define how polynomials behave now
in Axiom. My claim is that they do not (at least not quite)
behave the way that some people here have described. So I
am trying to change their description. :)
> >> I guess you know that, so there is probably a
> >> misunderstanding somewhere. Just to be clear:
> >>
> >> sin x + y*cos x + y^2* tan x
> >>
> >> is perfectly allright as a polynomial in y.
> >
> > I agree, but it is also a perfectly good polynomial in x
> > provided that it can appear as a member of the coefficient
> > domain:
>
> Bill, if this is your terminology, I strongly disagree with
> you. If I say something is "polynomial in x" then I mean if
> the expression is written as an expression tree then in the
> path from every x to the root I should at most see "^", "*",
> "+" (and "-").
You are right. My terminology was poor. But Axiom does not
have any domain consisting of an expression tree such as
you describe. This would be much more restricted than the
current polynomial domains in Axiom.
What I should have written was perhaps:
"it is also a perfectly good polynomial over a ring that
allows such expressions"
My point is that this is well-defined even if the same
symbol 'x' appears in both the underlying ring (i.e. in a
coefficient) and as the polynomial variable.
>
> > (1) -> (sin x + y*cos x + y^2* tan x)$UP(x,EXPR INT)
> >
> > 2
> > (1) y tan(x) + sin(x) + y cos(x)
> > Type: UnivariatePolynomial(x,Expression Integer)
> >
> > (2) -> degree %
> >
> > (2) 0
> > Type: NonNegativeInteger
>
> Yes, Axiom is correct _and_ confusing!
The confusion is relative to the presumptions that one has
about how Axiom works... ;)
> If a user wants to construct something like UP(x,EXPR INT),
> then he should know what he is doing. Unfortunately, beginners
> probably don't know what they do when they type in
> UP(x,EXPR INT). And they will be surprised by the strange
> results.
Perhaps, but I do not see any reason for beginners to use
such complicated domains.
> And I very much think that they will turn their back to
> Axiom, because it is so confusing. (Which would be a pity.)
But that *is* how Axiom was designed. You are suggesting
that we should change or at least suppress this part of
the design. I instead am trying to find some way to describe
what Axiom does now in a way that is comprehensible to
as many Axiom users as possible.
I object in principle to the idea of "dumbing down" the
fundamental design of Axiom so that it only behaves in the
way naïve users expect. On the other hand, I would be
happy to see a new user interface for new Axiom users that
would focus on implementing the "principle of least surprise".
> ...
> Hey, if coerce: SOMETHING -> UP(x, INT) is a total function,
> then what does that mean?
> ...
> Does someone believe that "coerce" always has to be an
> inclusion function?
>
> > ...
> > 1
> > (11) - x
> > x
> > Type: UnivariatePolynomial(x,Expression Integer)
> >
> > is not the same a '1$P'
>
> What does that matter? If we have two domains A and B, two
> elements a \in A, b \in B, a function coerce: A->B, and
> coerce(a) = b. The whole relation between a and b is clear.
> But where is it written, that coerce in Axiom has to be
> a homomorphism? (BTW, a homomorphism of which type?
> Homomorphism of rings, of groups, of sets?)
>
'coerce' is a conversion that is intended to be automatically
applied by the interpreter presumably without the risk of
incurring any mathematical errors. So I think that at the
very least a coercion must be a morphism from A to B
that preserves the structure of A (i.e. a functor). B can
have however additional structure that is not in A. Perhaps
this is also true of other conversions (invoked by ::) when
we consider the algebraic properties of the possible result
"failed".
For example we can coerce INT -> FLOAT but we can only
convert FLOAT -> INT.
(1) -> X:INT:=1
(1) 1
Type: Integer
(2) -> Y:=X+1.1
(2) 2.1 <---- this is a coercion of INT to Float
Type: Float
(3) -> Y + 1 <---- Float is not coerced to INT
(3) 3.1
Type: Float
(4) -> Y::INT
Cannot convert from type Float to Integer for value
2.1
(4) -> Z:Float:=1
(4) 1.0
Type: Float
(5) -> Z::INT <---- this is a conversion of Float to INT
(5) 1
Type: Integer
Regards,
Bill Page.
I had a problem when considering symmetry breaking in an SO(4) gauge theory:
$\mathcal{L} = \left| D_\mu\phi \right|^2$
where $D_\mu$ is the SO(4) covariant derivative. Then assuming there is some potential that has a minimum such that we can choose the ground state to be:
$\langle \phi \rangle = \begin{pmatrix} 0 & 0 & 0 & v \end{pmatrix}^{T}$
After this I found the unbroken generators, which have to generate a subgroup of SO(4), and showed that they fulfill the $\mathfrak{su}(2)$ algebra. Now I wanted to conclude that the unbroken subgroup is therefore SU(2). But there are multiple groups with this same algebra, e.g. SO(3) has it too.
How do I know which one is the correct subgroup? Is there any way to see this from the explicit form of the generators (e.g. the dimension of the representation)?
Revision as of 19:11, 3 November 2015
One of the classical orthonormal systems of functions. The Haar functions $\chi_n$ of this system are defined on the interval $[0,1]$ as follows: $$ \chi_1(t) \equiv 1\quad \text{ on } [0,1]; $$
if $n=2^m+k$, $k=1,\dots, 2^m$, $m=0,1,\dots$, then $$ \chi_n = \begin{cases} \sqrt{2^m} \quad &&\text{ for } t\in\left(\frac{2k-2}{2^{m+1}}, \frac{2k-1}{2^{m+1}} \right),\\ -\sqrt{2^m} \quad &&\text{ for } t\in\left(\frac{2k-1}{2^{m+1}}, \frac{2k}{2^{m+1}} \right),\\ 0 \quad &&\text{ for } t\not\in\left(\frac{k-1}{2^{m}}, \frac{k}{2^{m}} \right). \end{cases} $$
At interior points of discontinuity a Haar function is put equal to half the sum of its limiting values from the right and from the left, and at the end points of $[0,1]$ to its limiting values from within the interval.
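The definition is straightforward to implement. The following Python sketch (an illustration added here, not part of the original article) evaluates $\chi_n$ and verifies orthonormality of the first few Haar functions by midpoint sampling on a dyadic grid, which gives the integrals exactly since every function involved is constant on each grid cell:

```python
import math

def haar(n, t):
    """Haar function chi_n(t) for n >= 1, ignoring the convention at the
    finitely many discontinuity points (a null set, so the integrals
    below are unaffected)."""
    if n == 1:
        return 1.0
    m = (n - 1).bit_length() - 1        # n = 2^m + k with 1 <= k <= 2^m
    k = n - 2 ** m
    lo = (2 * k - 2) / 2 ** (m + 1)
    mid = (2 * k - 1) / 2 ** (m + 1)
    hi = (2 * k) / 2 ** (m + 1)
    if lo < t < mid:
        return math.sqrt(2 ** m)
    if mid < t < hi:
        return -math.sqrt(2 ** m)
    return 0.0

# Check orthonormality of chi_1 .. chi_8 on a 2^-6 midpoint grid.
G = 64
pts = [(i + 0.5) / G for i in range(G)]
for a in range(1, 9):
    for b in range(a, 9):
        ip = sum(haar(a, t) * haar(b, t) for t in pts) / G
        assert abs(ip - (1.0 if a == b else 0.0)) < 1e-9
```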
The system $\{\chi_n\}$ was defined by A. Haar in [1]. It is orthonormal on the interval $[0,1]$. The Fourier series of any continuous function on $[0,1]$ with respect to this system converges uniformly to it. Moreover, if $\omega(\sigma, f)$ is the modulus of continuity of $f$ on $[0,1]$, then the partial sums $S_n(f)$ of order $n$ of the Fourier–Haar series of $f$ satisfy the inequality
The Haar system is a basis in the space $L_p[0,1]$, $1 \le p < \infty$. If $f \in L_p[0,1]$ and $\omega(\delta, f)_p$ is the integral modulus of continuity of $f$ in the metric of $L_p$, then (see [3])
The Haar system is an unconditional basis in $L_p[0,1]$ for $1 < p < \infty$ (see [6]).
If $f$ is Lebesgue integrable on $[0,1]$, then its Fourier–Haar series converges to it at any of its Lebesgue points; in particular, almost-everywhere on $[0,1]$. Here convergence (and absolute convergence) of the Fourier–Haar series at a fixed point of $[0,1]$ depends only on the values of the function in any arbitrarily small neighbourhood of this point.
For Fourier–Haar series the following properties differ substantially from each other: a) absolute convergence everywhere; b) absolute convergence almost-everywhere; c) absolute convergence on a set of positive measure; and d) absolute convergence of the series of Fourier coefficients. For trigonometric series all these properties are equivalent.
The properties of the Fourier–Haar coefficients differ sharply from those of the trigonometric Fourier coefficients. For example, if a function $f$ is continuous on the interval $[0,1]$ and if $\{c_n\}$ are its Fourier coefficients with respect to the system $\{\chi_n\}$, then the following inequality holds:
which implies that
However, the Fourier–Haar coefficients of continuous functions cannot decrease too rapidly: If $f$ is continuous on $[0,1]$ and if
then $f$ is constant on $[0,1]$ (see [6]).
For functions , , the following estimates hold (see [3]):
If $f$ is of bounded variation on $[0,1]$, then
All these inequalities are sharp in the sense of the order of decrease of their right-hand sides as (in the corresponding classes) (see [3]).
Almost-everywhere unconditionally-converging series of the form $$\sum_{n=1}^\infty a_n\chi_n(t) \tag{*}$$
are distinguished by an interesting peculiarity: If a series of the form (*), for any order of its terms, converges almost-everywhere on a set $E$ of positive Lebesgue measure (the exceptional set of measure 0 may depend on the order of the terms of the series (*)), then this series converges absolutely almost-everywhere on $E$. For series of the form (*) the following criterion holds: For a series (*) to converge almost-everywhere on a measurable set $E$ it is necessary and sufficient that the series $\sum_{n=1}^\infty a_n^2\chi_n^2(t)$ converges almost-everywhere on $E$ (see [6]).
Haar series may serve as representations of measurable functions: For any measurable function $f$ that is finite almost-everywhere on $[0,1]$ there exists a series of the form (*) that converges almost-everywhere on $[0,1]$ to $f$. Here the finiteness of the function is essential: There is no series of the form (*) that converges to $+\infty$ (or $-\infty$) on a set of positive Lebesgue measure.
References
[1] A. Haar, "Zur Theorie der orthogonalen Funktionensysteme" Math. Ann. 69 (1910) pp. 331–371
[2] G. Alexits, "Convergence problems of orthogonal series", Pergamon (1961) (Translated from Russian)
[3] P.L. Ul'yanov, "On Haar series" Mat. Sb. 63 : 3 (1964) pp. 356–391
[4] P.L. Ul'yanov, "Absolute and uniform convergence of Fourier series" Math. USSR Sb. 1 : 2 (1967) pp. 169–197; Mat. Sb. 72 : 2 (1967) pp. 193–225
[5] B.I. Golubov, "Series with respect to the Haar system" J. Soviet Math. 1 (1971) pp. 704–726; Itogi Nauk. Mat. Anal. 1970 (1971) pp. 109–143
[6] B.S. Kashin, A.A. Saakyan, "Orthogonal series", Moscow (1984) (In Russian)

Comments

References
How to Cite This Entry:
Haar system. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Haar_system&oldid=36818
I analyzed a Pratt & Whitney F100 turbofan last semester in my aerothermodynamics course, so allow me to answer this question. The short answer:
the uncompressed air provides the majority of an engine's total thrust, since much of the compressed air's energy goes into powering the engine.
Correction: I forgot to mention that the fans also compress entering air. That is, all air entering an engine is compressed a bit by the fan. Some of this compressed air enters the turbojet core and the rest of the fan-compressed air bypasses the engine core. I ignored this for the sake of simplicity, but I should have explained this since you directly asked about air compression in a turbofan.
Long answer: see below!
1. The fan(s)
Air enters the engine through a fan (or fans in the case of the F100 engine). These are the giant fans with a spinning insignia in the middle that you see inside an engine.
Update: The spinning insignia makes it easy to see whether the fan is spinning, so workers don't get injured.
The fan(s) increases the pressure of the air that enters an engine. Some of this compressed air is diverted around the rest of the engine and is directed straight out of the engine.
The bypass ratio is a measure of how much air bypasses the "jet" core (bypass air/core air).
2. The compressors
The rest of this air is then compressed through a combination of more fans and a converging duct. The data from my thermo project tells me that this stage increases the pressure of the core air by more than 10,000%, but I'm not too sure about that*. Suffice it to say, this core air now has a lot of energy--let's add some more :D
Quick note: The compressed air now has insignificant velocity relative to the bypass air. Most of the compressed air's energy is "in" its pressure (senior SE members, please correct me if I'm wrong).
3. The combustion chamber (aka, combustor aka magic chamber)
Now the core air enters the combustion chamber. Here the air enters small chambers, mixes with jet fuel, and is ignited. The main parts-of-an-engine diagram I posted makes it seem like the combustion chamber is one big part of a jet engine, but really the combustor consists of a bunch of smaller chambers surrounding the main shaft of the engine. Here is a gif that shows what I mean:
How the combustion chamber operates is beyond my scope of understanding, but consider that a combustor is essentially trying to keep a candle alight in the middle of a hurricane. Awesome engineering goes into designing better and more efficient (hotter-burning) chambers.
4. The turbine (aka more magic section)
Now that hot, even more energetic air enters the turbine section, which consists of a diverging (increasing in area) duct and more fans. Whereas the compressors "inserted" energy into the air, the turbines draw energy out of the air. As the air enters the larger (in volume) turbine area, it expands and spins the turbine fans, which power the compressors and the fan. The Back Work Ratio (BWR) is a measure of how much turbine power it takes to spin the compressors.
5. The nozzle
The still-energetic core air is once again concentrated before being shot out of the back of the engine. This thrust, together with the thrust of the bypass air, propels the aircraft forward following this model:
$F_{thrust} = \dot{m}_{bypass} \times \Delta v_{bypass} + \dot{m}_{core} \times \Delta v_{core}$
Where $\dot{m}_{bypass}$ is the mass flow-rate of air that is bypassed and $\Delta v_{bypass}$ is the change in velocity of that air as a result of the fan.
And $\dot{m}_{core}$ is the mass flow-rate of air that is combusted and $\Delta v_{core}$ is the change in velocity of the core air as a result of the fan, compressor, combustion chamber, and turbine.
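As a quick sanity check on this model, here is a small Python sketch. The numbers are made up for illustration (they are not real F100 data), but they show how a high-bypass engine can get roughly 60% of its thrust from the bypass stream:

```python
def thrust(m_bypass, dv_bypass, m_core, dv_core):
    """Momentum model of net thrust: F = m_bypass*dv_bypass + m_core*dv_core (newtons)."""
    return m_bypass * dv_bypass + m_core * dv_core

# Illustrative numbers only (not measured engine data):
m_core = 100.0            # kg/s of air through the core
bpr = 5.0                 # bypass ratio (bypass air / core air)
m_bypass = bpr * m_core   # kg/s of bypass air
F = thrust(m_bypass, 120.0, m_core, 350.0)   # delta-v values in m/s
bypass_share = m_bypass * 120.0 / F
print(F, bypass_share)    # 95000.0 N total; bypass supplies about 0.63 of it here
```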
The uncompressed air contributes about 60% of the total thrust. The "processed" air loses a significant portion of its energy to powering the engine. However, the compressed air still provides about 40% of the total thrust. Adding an afterburner can increase this contribution to 50%. How is this possible? Dead algae from a billion years ago.
The hydrocarbons in jet fuel pack a lot of energy into a small space and mass (two completely different concepts). Burning those hydrocarbons releases a lot of energy that powers the fan, compressors, and electric generators of an aircraft before pushing the airplane forward. This high energy in a small space and mass is also why electric cars (and much else) weren't practical until LiPo batteries (a story for another article).
I applaud you for noticing that the uncompressed air contributes to an engine's thrust. I think the term "bypass" confuses some people into thinking that this air is "thrown away". It's not. The bypassed air is actually sped up by a series of fans and imparts forward momentum to the aircraft.
I'm studying Aerospace engineering right now (Whoop!), so my excitement at being able to answer this question rather side-tracked me from your original question, but I hope you enjoy the extra info on how a turbofan operates.
Additional info
The hotter a combustion chamber burns, the hotter the core air gets and the more energy it can provide for engine operation. Furthermore, hotter chambers waste less jet fuel and result in relatively cleaner and safer fumes. Increasing CC max temps and designing the rest of the engine (i.e., the turbine blades) to handle the increase in temp is the cutting edge of engine research. This is one of the most challenging and lucrative materials science/engineering problems in the world, since efficiency is paramount in today's and tomorrow's aviation industry.
Here is a nice article describing how GE, Rolls-Royce, and other engine companies sell thrust, not engines. It's a bit long, so you may read it on your next flight :)
Suppose we are given that a sequence of functions $f_n(z)$ converges pointwise to $f(z)$ on the interval $[0,1]$. Suppose further that all of these functions are given by power series centered at $0$ with radius of convergence $R > 1$.
To fix notation, say $f_n(z) = a_{n, 0} + a_{n, 1}z + a_{n, 2} z^2 + \dots$
and $f(z) = a_0 + a_1 z + a_2 z^2 + \dots$.
We are given $f_n(z) \to f(z)$ as $n \to \infty$, for each fixed $z \in [0,1]$.
Is it true that $a_{n, k} \to a_k$ as $n \to \infty$, for each $k$? Why? We can't use complex analysis it seems, since we are only on $[0,1]$.
If not, is it true under the additional assumption that all the $a_{n, k}$ and $a_k$ are uniformly bounded by some $M$?
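To make the question concrete, here is a numerical probe of one instance (a probe, not a proof either way): for $f_n(z) = (1-z/n)^n$, which converges pointwise on $[0,1]$ to $e^{-z}$ and, being a polynomial, has infinite radius of convergence, the coefficients do converge to those of the limit, $(-1)^k/k!$.

```python
from math import comb, factorial

def coeff(n, k):
    """k-th Taylor coefficient of f_n(z) = (1 - z/n)**n, a polynomial (so R = infinity)."""
    return comb(n, k) * (-1.0 / n) ** k

# f_n(z) -> exp(-z) pointwise on [0,1]; the limit's coefficients are (-1)**k / k!.
for k in range(4):
    print(k, coeff(1000, k), (-1) ** k / factorial(k))
```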
There are many properties that are equivalent to uniqueness of factorization in $\,\Bbb Z.\:$ Below is a sample off the top of my head (by no means complete). Each provides a slightly different perspective on why uniqueness holds - perspectives that become clearer when one sees how these equivalent properties bifurcate in more general integral domains. Below we use the notation $\rm\:(a,b)=1\:$ to mean that $\rm\:a,b\:$ are coprime, i.e. $\rm\:c\mid a,b\:\Rightarrow\:c\mid 1.$
$\rm(1)\ \ \ gcd(a,b)\:$ exists for all $\rm\:a,b\ne 0\ \ $ [GCD domain]
$\rm(2)\ \ \ a\mid BC\:\Rightarrow a=bc,\ b\mid B,\ c\mid C\ \ \, $ [Schreier refinement, Euler's four number theorem]
$\rm(3)\ \ \ a\,\Bbb Z + b\, \Bbb Z\, =\, c\,\Bbb Z,\:$ for some $\rm\,c\quad\ $ [Bezout domain]
$\rm(4)\ \ \ (a,b)=1,\ a\mid bc\:\Rightarrow\: a\mid c\qquad\ \ $ [Euclid's Lemma]
$\rm(5)\ \ \ (a,b)=1,\ \dfrac{a}{b} = \dfrac{c}{d}\:\Rightarrow\: b\mid d\quad\ \ $ [Unique Fractionization]
$\rm(6)\ \ \ (a,b)=1,\ a,b\mid c\:\Rightarrow\: ab\mid c$
$\rm(7)\ \ \ (a,b)=1\:\Rightarrow\: a\,\Bbb Z\cap b\,\Bbb Z\, =\, ab\,\Bbb Z $
$\rm(8)\ \ \ gcd(a,b)\ \ exists\:\Rightarrow\: lcm(a,b)\ \ exists$
$\rm(9)\ \ \ (a,b)=1=(a,c)\:\Rightarrow\: (a,bc)= 1$
$\rm(10)\ $ atoms $\rm\, p\,$ are prime: $\rm\ p\mid ab\:\Rightarrow\: p\mid a\ \ or\ \ p\mid b$
Which of these properties sheds the most intuitive light on why uniqueness of factorization holds? If I had to choose one, I would choose $(2),$ Schreier refinement. If you extend this by induction it implies that any two factorizations of an integer have a common refinement. For example, if we have two factorizations $\rm\: a_1 a_2 = n = b_1 b_2 b_3\:$ then Schreier refinement implies that we can build the following refinement matrix, where the column labels are the product of the elements in the column, and the row labels are the products of the elements in the row
$$\begin{array}{c|ccc} &\rm b_1 &\rm b_2 &\rm b_3 \\\hline\rm a_1 &\rm c_{1 1} &\rm c_{1 2} &\rm c_{1 3}\\\rm a_2 &\rm c_{2 1} &\rm c_{2 2} &\rm c_{2 3}\\\end{array}$$
This implies the following common refinement of the two factorizations
$$\rm a_1 a_2 = (c_{1 1} c_{1 2} c_{1 3}) (c_{2 1} c_{2 2} c_{2 3}) = (c_{1 1} c_{2 1}) (c_{1 2} c_{2 2}) (c_{1 3} c_{2 3}) = b_1 b_2 b_3.$$
This immediately yields the uniqueness of factorizations into primes (atoms). It also works more generally for factorizations into coprime elements, and for factorizations of certain types of algebraic structures (abelian groups, etc).
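To make the refinement-matrix construction concrete, here is a small Python sketch (the helper name is ours, and it is a sketch for integers only, making no claim about general domains). A greedy gcd sweep builds a common refinement of two factorizations of the same integer:

```python
from math import gcd

def refinement_matrix(As, Bs):
    """Common refinement of two factorizations of the same integer.

    Returns a matrix C with prod(row i of C) == As[i] and prod(col j of C) == Bs[j].
    In Z the greedy gcd sweep below always succeeds, since gcds of prime-power
    exponents behave additively."""
    rem = list(Bs)
    C = []
    for a in As:
        row = []
        for j in range(len(rem)):
            g = gcd(a, rem[j])
            row.append(g)
            a //= g
            rem[j] //= g
        assert a == 1, "inputs must have equal products"
        C.append(row)
    return C

print(refinement_matrix([6, 10], [4, 15, 1]))  # -> [[2, 3, 1], [2, 5, 1]]
```

Row products are 6 and 10, column products are 4, 15 and 1, exactly as in the matrix above.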
How to determine if a vector field is conservative
A conservative vector field (also called a path-independent vector field) is a vector field $\dlvf$ whose line integral $\dlint$ over any curve $\dlc$ depends only on the endpoints of $\dlc$. The integral is independent of the path that $\dlc$ takes going from its starting point to its ending point. The below applet illustrates the two-dimensional conservative vector field $\dlvf(x,y)=(x,y)$.
The line integral over multiple paths of a conservative vector field. The integral of conservative vector field $\dlvf(x,y)=(x,y)$ from $\vc{a}=(3,-3)$ (cyan diamond) to $\vc{b}=(2,4)$ (magenta diamond) doesn't depend on the path. Path $\dlc$ (shown in blue) is a straight line path from $\vc{a}$ to $\vc{b}$. Paths $\adlc$ (in green) and $\sadlc$ (in red) are curvy paths, but they still start at $\vc{a}$ and end at $\vc{b}$. Each path has a colored point on it that you can drag along the path. The corresponding colored lines on the slider indicate the line integral along each curve, starting at the point $\vc{a}$ and ending at the movable point (the integrals along the highlighted portion of each curve). Moving each point up to $\vc{b}$ gives the total integral along the path, so the corresponding colored line on the slider reaches 1 (the magenta line on the slider). This demonstrates that the integral is 1 independent of the path.
What are some ways to determine if a vector field is conservative? Directly checking to see if a line integral doesn't depend on the path is obviously impossible, as you would have to check an infinite number of paths between any pair of points. But, if you found two paths that gave different values of the integral, you could conclude the vector field was path-dependent.
Here are some options that could be useful under different circumstances.
As mentioned in the context of the gradient theorem, a vector field $\dlvf$ is conservative if and only if it has a potential function $f$ with $\dlvf = \nabla f$. Therefore, if you are given a potential function $f$ or if you can find one, and that potential function is defined everywhere, then there is nothing more to do. You know that $\dlvf$ is a conservative vector field, and you don't need to worry about the other tests we mention here. Similarly, if you can demonstrate that it is impossible to find a function $f$ that satisfies $\dlvf = \nabla f$, then you can likewise conclude that $\dlvf$ is non-conservative, or path-dependent.
For this reason, you could skip this discussion about testing for path-dependence and go directly to the procedure for finding the potential function. If this procedure works or if it breaks down, you've found your answer as to whether or not $\dlvf$ is conservative. However, if you are like many of us and are prone to make a mistake or two in a multi-step procedure, you'd probably benefit from other tests that could quickly determine path-independence. That way, you could avoid looking for a potential function when it doesn't exist and benefit from tests that confirm your calculations.
Another possible test involves the link between path-independence and circulation. One can show that a conservative vector field $\dlvf$ will have no circulation around any closed curve $\dlc$, meaning that its integral $\dlint$ around $\dlc$ must be zero. If you could somehow show that $\dlint=0$ for every closed curve (difficult since there are an infinite number of these), then you could conclude that $\dlvf$ is conservative. Or, if you can find one closed curve where the integral is non-zero, then you've shown that it is path-dependent.
Although checking for circulation may not be a practical test for path-independence, the fact that path-independence implies no circulation around any closed curve is central to what it means for a vector field to be conservative.
If $\dlvf$ is a three-dimensional vector field, $\dlvf : \R^3 \to \R^3$ (confused?), then we can derive another condition. This condition is based on the fact that a vector field $\dlvf$ is conservative if and only if $\dlvf = \nabla f$ for some potential function. We can calculate that the curl of a gradient is zero, $\curl \nabla f = \vc{0}$, for any twice continuously differentiable $f : \R^3 \to \R$. Therefore, if $\dlvf$ is conservative, then its curl must be zero, as $\curl \dlvf = \curl \nabla f = \vc{0}$.
For a continuously differentiable two-dimensional vector field, $\dlvf : \R^2 \to \R^2$, we can similarly conclude that if the vector field is conservative, then the scalar curl must be zero, $$ \pdiff{\dlvfc_2}{x}-\pdiff{\dlvfc_1}{y} = \frac{\partial^2 f}{\partial x \partial y} -\frac{\partial^2 f}{\partial y \partial x} =0.$$
We have to be careful here. The valid statement is that if $\dlvf$ is conservative, then its curl must be zero. Without additional conditions on the vector field, the converse may not be true, so we cannot conclude that $\dlvf$ is conservative just from its curl being zero. There are path-dependent vector fields with zero curl. On the other hand, we can conclude that if the curl of $\dlvf$ is non-zero, then $\dlvf$ must be path-dependent.
Can we obtain another test that allows us to determine for sure that a vector field is conservative? We can by linking the previous two tests (tests 2 and 3). Test 2 states that the lack of “macroscopic circulation” is sufficient to determine path-independence, but the problem is that lack of circulation around any closed curve is difficult to check directly. Test 3 says that a conservative vector field has no “microscopic circulation” as captured by the curl. It's easy to test for lack of curl, but the problem is that lack of curl is not sufficient to determine path-independence.
What we need is a way to link the definite test of zero “macroscopic circulation” with the easy-to-check test of zero “microscopic circulation.” This link is exactly what both Green's theorem and Stokes' theorem provide. Don't worry if you haven't learned both these theorems yet. The basic idea is simple enough: the “macroscopic circulation” around a closed curve is equal to the total “microscopic circulation” in the planar region inside the curve (for two dimensions, Green's theorem) or in a surface whose boundary is the curve (for three dimensions, Stokes' theorem).
Let's examine the case of a two-dimensional vector field whose scalar curl $\pdiff{\dlvfc_2}{x}-\pdiff{\dlvfc_1}{y}$ is zero. If we have a closed curve $\dlc$ where $\dlvf$ is defined everywhere inside it, then we can apply Green's theorem to conclude that the “macroscopic circulation” $\dlint$ around $\dlc$ is equal to the total “microscopic circulation” inside $\dlc$. We can indeed conclude that the “macroscopic circulation” is zero from the fact that the “microscopic circulation” $\pdiff{\dlvfc_2}{x}-\pdiff{\dlvfc_1}{y}$ is zero everywhere inside $\dlc$.
According to test 2, to conclude that $\dlvf$ is conservative, we need $\dlint$ to be zero around every closed curve $\dlc$. If the vector field is defined inside every closed curve $\dlc$ and the “microscopic circulation” is zero everywhere inside each curve, then Green's theorem gives us exactly that condition. We can conclude that $\dlint=0$ around every closed curve and the vector field is conservative.
The only way we could run into trouble is if there are some closed curves $\dlc$ where $\dlvf$ is not defined for some points inside the curve. In other words, if the region where $\dlvf$ is defined has some holes in it, then we cannot apply Green's theorem for every closed curve $\dlc$. In this case, we cannot be certain that zero “microscopic circulation” implies zero “macroscopic circulation” and hence path-independence. Such a hole in the domain of definition of $\dlvf$ was exactly what caused the problem in our counterexample of a path-dependent field with zero curl.
On the other hand, we know we are safe if the region where $\dlvf$ is defined is simply connected, i.e., the region has no holes through it. In this case, we know $\dlvf$ is defined inside every closed curve $\dlc$ and nothing tricky can happen. We can summarize our test for path-dependence of two-dimensional vector fields as follows.
If a vector field $\dlvf: \R^2 \to \R^2$ is continuously differentiable in a simply connected domain $\dlr \subset \R^2$ and its curl is zero, i.e., $$\pdiff{\dlvfc_2}{x}-\pdiff{\dlvfc_1}{y}=0,$$ everywhere in $\dlr$, then $\dlvf$ is conservative within the domain $\dlr$.
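As a numerical sketch of this two-dimensional test (the sample fields and helper name are ours), the scalar curl can be estimated by central differences. Note that the second field below has zero curl everywhere on its domain yet is the standard path-dependent counterexample, because its domain (the punctured plane) is not simply connected:

```python
def scalar_curl(F, x, y, h=1e-5):
    """Numerical dF2/dx - dF1/dy at (x, y) via central differences; F maps (x, y) -> (F1, F2)."""
    dF2_dx = (F(x + h, y)[1] - F(x - h, y)[1]) / (2 * h)
    dF1_dy = (F(x, y + h)[0] - F(x, y - h)[0]) / (2 * h)
    return dF2_dx - dF1_dy

# Conservative on all of R^2 (simply connected): F = grad(x**2 * y) = (2xy, x**2)
F = lambda x, y: (2 * x * y, x * x)
print(scalar_curl(F, 1.3, -0.7))   # ~ 0

# Zero curl but path-dependent (domain is R^2 minus the origin):
G = lambda x, y: (-y / (x * x + y * y), x / (x * x + y * y))
print(scalar_curl(G, 1.3, -0.7))   # also ~ 0: zero curl alone doesn't settle it
```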
It turns out the result for three-dimensions is essentially the same. If a vector field $\dlvf: \R^3 \to \R^3$ is continuously differentiable in a simply connected domain $\dlv \subset \R^3$ and its curl is zero, i.e., $\curl \dlvf = \vc{0}$, everywhere in $\dlv$, then $\dlvf$ is conservative within the domain $\dlv$.
One subtle difference between two and three dimensions is what it means for a region to be simply connected. Any hole in a two-dimensional domain is enough to make it non-simply connected. But, in three-dimensions, a simply-connected domain can have a hole in the center, as long as the hole doesn't go all the way through the domain, as illustrated in this figure.
The reason a hole in the center of a domain is not a problem in three dimensions is that we have more room to move around in 3D. If we have a curl-free vector field $\dlvf$ (i.e., with no “microscopic circulation”), we can use Stokes' theorem to infer the absence of “macroscopic circulation” around any closed curve $\dlc$. To use Stokes' theorem, we just need to find a surface whose boundary is $\dlc$. If the domain of $\dlvf$ is simply connected, even if it has a hole that doesn't go all the way through the domain, we can always find such a surface. The surface can just go around any hole that's in the middle of the domain. With such a surface along which $\curl \dlvf=\vc{0}$, we can use Stokes' theorem to show that the circulation $\dlint$ around $\dlc$ is zero. Since we can do this for any closed curve, we can conclude that $\dlvf$ is conservative.
The flexibility we have in three dimensions to find multiple surfaces whose boundary is a given closed curve is illustrated in this applet that we use to introduce Stokes' theorem.
Macroscopic and microscopic circulation in three dimensions. The relationship between the macroscopic circulation of a vector field $\dlvf$ around a curve (red boundary of surface) and the microscopic circulation of $\dlvf$ (illustrated by small green circles) along a surface in three dimensions must hold for any surface whose boundary is the curve. No matter which surface you choose (change by dragging the green point on the top slider), the total microscopic circulation of $\dlvf$ along the surface must equal the circulation of $\dlvf$ around the curve. (We assume that the vector field $\dlvf$ is defined everywhere on the surface.) You can change the curve to a more complicated shape by dragging the blue point on the bottom slider, and the relationship between the macroscopic and total microscopic circulation still holds. The surface is oriented by the shown normal vector (moveable cyan arrow on surface), and the curve is oriented by the red arrow.
Of course, if the region $\dlv$ is not simply connected, but has a hole going all the way through it, then $\curl \dlvf = \vc{0}$ is not a sufficient condition for path-independence. In this case, if $\dlc$ is a curve that goes around the hole, then we cannot find a surface that stays inside that domain whose boundary is $\dlc$. Without such a surface, we cannot use Stokes' theorem to conclude that the circulation around $\dlc$ is zero.
Elementary number theory
The branch of number theory that investigates properties of the integers by elementary methods. These methods include the use of divisibility properties, various forms of the axiom of induction and combinatorial arguments. Sometimes the notion of elementary methods is extended by bringing in the simplest elements of mathematical analysis. Traditionally, proofs are deemed to be non-elementary if they involve complex numbers.
Usually, one assigns to elementary number theory the problems that arise in branches of number theory such as the theory of divisibility, of congruences, of arithmetic functions, of indefinite equations, of partitions, of additive representations, of the approximation by rational numbers, and of continued fractions. Quite often, the solution of such problems leads to the need to go beyond the framework of elementary methods. Occasionally, following the discovery of a non-elementary solution of some problem, one also finds an elementary solution of it.
As a rule, the problems of elementary number theory have a history going back over centuries, and they are quite often a source of modern trends in number theory and algebra.
From preserved ancient Babylonian cuneiform tablets one may deduce that the Babylonians were familiar with the task of factoring natural numbers into prime factors. In the 5th century B.C. the Pythagoreans established the so-called doctrine of even and odd numbers and justified the proposition that the product of two natural numbers is even if and only if at least one of the factors is even. A general theory of divisibility was created, in essence, by Euclid. In his Elements (3rd century B.C.), he introduces an algorithm for finding the greatest common divisor of two integers and on this basis he justifies the main theorem of the arithmetic of integers: Every natural number can be factored in one and only one way into a product of prime factors.
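Euclid's algorithm from the Elements is easy to state in code; this minimal sketch uses the modern remainder formulation (repeatedly replace the larger number by the remainder of dividing it by the smaller):

```python
def euclid_gcd(a, b):
    """Euclid's algorithm (Elements, Book VII): gcd via repeated remainders."""
    while b:
        a, b = b, a % b
    return a

print(euclid_gcd(252, 105))  # -> 21, since 252 = 2^2*3^2*7 and 105 = 3*5*7
```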
After C.F. Gauss, at the beginning of the 19th century, constructed a theory of divisibility of complex integers, it became clear that the study of an arbitrary ring must begin with the construction of a divisibility theory in it.
All properties of the integers are connected in one way or another with the prime numbers (cf. Prime number). Therefore, questions on the disposition of the prime numbers in the sequence of natural numbers evoked the interest of scholars. The first proof that the set of prime numbers is infinite is due to Euclid. Only in the middle of the 19th century did P.L. Chebyshev take the following step in the study of the function $\pi(x)$, the number of prime numbers not exceeding $x$. He succeeded in proving by elementary means inequalities that imply
$$0.92129\ldots\frac{x}{\ln x}<\pi(x)<1.10555\ldots\frac{x}{\ln x}$$
for all sufficiently large $x$. Actually, $\pi(x)\sim x/\ln x$, but this was not established until the end of the 19th century by means of complex analysis. For a long time it was considered impossible to obtain the result by elementary means. However, in 1949, A. Selberg obtained an elementary proof of this theorem. Chebyshev also proved that for $n\geq2$ the interval $(n,2n)$ contains at least one prime number. A refinement of the interval containing at least one prime number requires a deeper study of the behaviour of the function $\pi(x)$.
In the 3rd century B.C. the sieve of Eratosthenes (cf. Eratosthenes, sieve of) was used to select the prime numbers from the set of natural numbers. In 1918 V. Brun showed that a modification of this method can serve as a basis for the study of a set of "almost primes". He proved the Brun theorem on prime twins. The Brun sieve is a special case of general sieve methods (cf. Sieve method) which give estimates for collections of almost primes not exceeding $x$ and belonging to a sequence $\{a_n\}$ of natural numbers. Sieve methods can be used when "good" approximations in the mean are known for the number of integers $a_n\leq x$ belonging to a progression the modulus of which grows with $x$. Among the sieve methods developed after Brun, the Selberg sieve is of special significance. The strongest results are obtained by a combination of sieve methods and analytic methods. A sieve method in conjunction with the Shnirel'man method made it possible to effectively find a number $k$ such that any natural number $n\geq4$ can be represented as a sum of at most $k$ prime numbers.
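A minimal sketch of the sieve of Eratosthenes, used here to tabulate $\pi(x)$ and compare it with $x/\ln x$ and with Chebyshev-type bounds of the kind quoted above (at $x=10^5$ those bounds already hold):

```python
import math

def sieve(limit):
    """Sieve of Eratosthenes: return the list of all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, limit + 1, p):
                is_prime[m] = False
    return [p for p in range(limit + 1) if is_prime[p]]

x = 100000
pi_x = len(sieve(x))
approx = x / math.log(x)
print(pi_x, approx)                                  # 9592 vs about 8686
print(0.92129 * approx < pi_x < 1.10555 * approx)    # Chebyshev-style bounds hold here
```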
Two integers $a$ and $b$ are said to be congruent modulo $m\geq1$ if they have the same remainder on division by $m$. Gauss (in 1801) introduced the notation $a\equiv b$ $\pmod m$. This form of writing, which brings out the analogy between congruences and equations, turned out to be convenient and instrumental in the development of the theory of congruences (cf. Congruence).
Many results obtained previously by P. Fermat, L. Euler, J.L. Lagrange, and others, and also the Chinese remainder theorem, can be stated and proved simply in the language of the theory of congruences. One of the most interesting results of this theory is the quadratic reciprocity law.
The ancient Babylonians knew a large number of "Pythagorean triples". Apparently they knew some method for finding arbitrarily many integer solutions of the indeterminate equation $x^2+y^2=z^2$. The Pythagoreans used the formulas $x=(a^2-1)/2$, $y=a$, $z=(a^2+1)/2$ to find solutions of this equation. Euclid indicated a method that allows one to find in succession integer solutions of the equation $x^2-2y^2=1$ (a special case of the Pell equation). In his Aritmetika Diophantus (3rd century A.D.) made an attempt at setting up a theory of indeterminate equations (see Diophantine equations). In particular, for the solution of equations of the second and some higher degrees he used systematically a device that enabled him to find from one rational solution of a given equation other rational solutions of it. Fermat (17th century) discovered another method, the "method of descent", and solved by it a number of equations, but the so-called Fermat great theorem, which he claimed to have proved, has turned out to be beyond the power of elementary methods.
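As a sketch of generating successive integer solutions of a Pell equation (here $x^2-2y^2=1$, whose solutions give the classical side-and-diagonal approximations to $\sqrt2$), one can iterate the standard recurrence from the fundamental solution $(3,2)$:

```python
def pell_solutions(n):
    """First n positive solutions of x**2 - 2*y**2 == 1, generated from the
    fundamental solution (3, 2) by the classical recurrence
    (x, y) -> (3x + 4y, 2x + 3y)."""
    x, y = 3, 2
    out = []
    for _ in range(n):
        out.append((x, y))
        x, y = 3 * x + 4 * y, 2 * x + 3 * y
    return out

for x, y in pell_solutions(5):
    assert x * x - 2 * y * y == 1   # every generated pair really solves the equation
print(pell_solutions(3))            # -> [(3, 2), (17, 12), (99, 70)]
```

The ratios $x/y$ (3/2, 17/12, 99/70, ...) converge to $\sqrt2$.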
Fermat solved the problem of representing natural numbers by sums of two squares of integers. As a result of research by Lagrange (1773) and Gauss (1801) the problem of the representation of integers by a definite binary quadratic form was solved. Gauss developed the general theory of binary quadratic forms. The solution of the problem of representing numbers by forms of higher degree (for example, the Waring problem) and by quadratic forms in several variables usually go beyond the framework of elementary methods. Only certain special cases of such problems can be solved elementarily. An example is Lagrange's theorem: Every natural number is the sum of four squares of integers. It should be mentioned that Diophantus in his Aritmetika repeatedly used the possibility of representing a natural number as the sum of four squares of integers.
To elementary number theory one also reckons the theory of partitions, the foundations of which were laid by Euler (in 1751). One of the basic problems of the theory of partitions is the study of the function $P(n)$, the number of representations of a natural number $n$ as a sum of natural numbers. Other functions of similar type are also treated in the theory of partitions. Continued fractions (cf. Continued fraction) appeared in connection with problems of approximate computations (extraction of the square root of a natural number, search for approximations of real numbers by common fractions with small denominators). Continued fractions are applied in solving indefinite equations of the first and second degree. Using the apparatus of continued fractions J. Lambert (1766) was the first to establish that the number $\pi$ is irrational. In elementary number theory, besides continued fractions, one also uses the Dirichlet principle in solving various problems on the approximation of real numbers by rational numbers.
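Computing the partial quotients of a continued fraction is itself an instance of Euclid's algorithm; a small sketch for rational inputs (exact arithmetic via the standard library's `fractions`):

```python
from fractions import Fraction

def continued_fraction(x, nmax=20):
    """Partial quotients of the continued fraction of a rational number x."""
    quotients = []
    for _ in range(nmax):
        a = x.numerator // x.denominator   # integer part
        quotients.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x                          # recurse on the reciprocal of the fractional part
    return quotients

print(continued_fraction(Fraction(415, 93)))    # -> [4, 2, 6, 7]
# The classical approximation 355/113 to pi has a short expansion:
print(continued_fraction(Fraction(355, 113)))   # -> [3, 7, 16]
```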
In number theory it is easy to state many problems that can be formulated elementarily and have so far remained unsolved. For example: Is the set of even perfect numbers finite or not? Is there at least one odd perfect number? Is the set of Fermat prime numbers finite or not? Is the set of Mersenne prime numbers finite or not (cf. Mersenne number)? Is the set of prime numbers of the form $n^2+1$ finite or not? Is it true that there is at least one prime number between the squares of two consecutive natural numbers? Is the set of partial quotients of the continued-fraction expansion of $2^{1/3}$ bounded or not?
Comments
For Chebyshev's result mentioned above see also Chebyshev theorems on prime numbers.
A Pythagorean triple is a triple $(a,b,c)$ of natural numbers satisfying $a^2+b^2=c^2$ (cf. also Pythagorean numbers).
Fermat numbers are those of the form $2^{2^m}+1$; Mersenne numbers are those of the form $2^n-1$ (cf. Mersenne number); here $n$, $m$ are natural numbers.
How to Cite This Entry:
Elementary number theory.
Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Elementary_number_theory&oldid=33625
I am trying to derive a relation between the angle from the center of the ellipse $C$ and its true anomaly (the angle from the focal point $F$), i.e. $\alpha$ vs. $\beta$ in the picture, for a general ellipse. For some reason, it seems I am making a mistake somewhere.
What I tried is this. Taking the polar description with respect to the center, the $y$-coordinate is
$y = r\sin(\alpha) = \frac{ab}{\sqrt{(b\cos\alpha)^2+(a\sin\alpha)^2}}\sin\alpha$
Also, for the other equation, from the focus point we have $y=r\sin(\beta)$, where $r$ is the distance from the focus point, i.e.,
$r=\frac{a(1-e^2)}{1\pm e\cos\beta}.$
By putting the equations together and squaring, I get a monster equation:
$\frac{a^2\sin^2\alpha}{(b\cos\alpha)^2+(a\sin\alpha)^2} = \frac{b^2\sin^2\beta}{1\pm2e\cos\beta+e^2\cos^2\beta}$
This leads to a horrible quadratic equation in $\sin\alpha$. Am I making an error somewhere? Is there some other option, some "shortcut", to derive it in an easier way?
Any hints are welcome.
EDIT:
After fighting with the equations, I got this result:
$\sin\alpha = \pm\frac{b}{a}q\sin\beta\sqrt{\frac{1}{1-e^2q^2\sin^2\beta}}, \to \alpha = \arcsin\left(\pm\frac{b}{a}q\sin\beta\sqrt{\frac{1}{1-e^2q^2\sin^2\beta}}\right)$
where $q=\frac{b}{a}\frac{1}{1\pm e\cos\beta}$ and $e=\sqrt{1-\left(\frac{b}{a}\right)^2}$.
Doing numerical testing, it looks like a correct result, except for angles larger than $90^\circ$, where there seems to be a sign error. If there is a simpler and nicer solution, which works for any angle $\beta$, I would be grateful to see it.
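For what it's worth, here is a quick numerical sanity check of the posted result (a sketch assuming the $+$ sign in the denominators, the focus at $(+ae, 0)$, and $\beta$ measured from perihelion):

```python
import math

a, b = 2.0, 1.0
e = math.sqrt(1 - (b / a) ** 2)

beta = math.radians(60)                       # true anomaly, measured at the focus
r = a * (1 - e**2) / (1 + e * math.cos(beta)) # focal polar equation
x = a * e + r * math.cos(beta)                # Cartesian point, origin at the center
y = r * math.sin(beta)
alpha_direct = math.atan2(y, x)               # central angle, computed directly

# The formula from the EDIT, with q = (b/a) / (1 + e cos(beta)):
q = (b / a) / (1 + e * math.cos(beta))
alpha_formula = math.asin((b / a) * q * math.sin(beta)
                          / math.sqrt(1 - (e * q * math.sin(beta)) ** 2))

print(alpha_direct, alpha_formula)  # the two values agree for beta < 90 degrees
```

(In this quadrant the two values coincide to machine precision, which matches your observation that only the sign handling past $90^\circ$ is problematic.)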
I have been trying to solve the following problem for quite some time now: let $X$ denote the Fermat curve of degree $d$ in $\mathbb{P}^2$, defined by the homogeneous polynomial $$x^d+y^d+z^d=0.$$ Let $F:X \rightarrow \mathbb{P}^1$ be defined by $F([x:y:z]) = [x:y]$; show that $F$ has $d$ branch points, and find the $d$ corresponding permutations.
This is what I have done so far: first, I've noted that for a given point in the image, the number of preimages is equal to the number of solutions in $z$ to $z^d = -(x^d+y^d)$; in general, this number is $d$, and we have ramification exactly when $x^d+y^d=0$. This gives $x = \zeta y$, where $\zeta^d = -1$. So there are $d$ ramification points, each of order $d$, so we know that the monodromy representation must consist of $d$ cycles $\sigma_1,\dots,\sigma_d$ of length $d$ in $S_d$, and further we must have that $\sigma_1\sigma_2\dotsm\sigma_d=(1)$.
Now, I thought of somehow getting a local coordinate and from this finding the permutations, but I can't seem to make it work. Should I triangulate some surface (as was so cleverly done in my previous question here: Calculating Monodromy)?
Any help would be appreciated. Thank you.
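Not an answer, but a numerical cross-check is cheap: track the fiber of $F$ along a small loop around a branch point and read off the permutation. A sketch in the affine chart $y = 1$, taking $d = 3$:

```python
import numpy as np

d = 3
zeta = np.exp(1j * np.pi / d)   # a root of x^d = -1: branch point [zeta : 1]
eps = 1e-3                      # radius of the loop around the branch point

def fiber(x):
    """The d roots z of z^d = -(x^d + 1), i.e. the fiber of F over [x : 1]."""
    w = -(x**d + 1)
    return w ** (1.0 / d) * np.exp(2j * np.pi * np.arange(d) / d)

start = fiber(zeta + eps)
sheets = start.copy()
for t in np.linspace(0.0, 2 * np.pi, 4000)[1:]:
    cand = fiber(zeta + eps * np.exp(1j * t))
    # continue each sheet to the nearest root over the new base point
    sheets = np.array([cand[np.argmin(abs(cand - s))] for s in sheets])

# which starting sheet each sheet ends up on after the full loop
perm = [int(np.argmin(abs(start - s))) for s in sheets]
print(perm)  # a single d-cycle, consistent with total ramification over each branch point
```

Since each ramification point is totally ramified, the permutation around each branch point comes out as one $d$-cycle, which at least confirms the cycle structure you derived.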
Investigations concerning random Morse functions led me to the following problem. Consider the classical GOE of $m\times m$ real symmetric matrices $A$ with independent Gaussian entries with zero means and variances
$$ \boldsymbol{E}(a_{ii}^2)=2 \boldsymbol{E}(a_{ij}^2)= 2 $$
for all $i \neq j$. Consider the function
$$ F_m(x, y) = \boldsymbol{E}_{GOE}\bigl( |\det(y+A)| e^{ -x(tr A)^2 } \bigr), $$
$x,y$ real, $x>0$. What can one say about the behavior of $F_m(x,y)$ as $m\rightarrow \infty$?
Equivalently we can consider the Gaussian ensemble $\mathcal{S}(m,x)$ of symmetric $m\times m$ real matrices with probability density
$$ dP(A)=\frac{1}{Z_{m,x}} e^{-\frac{1}{2}tr(A^2)-x(tr A)^2} \prod_{i\leq j} da_{ij}, $$
$x>0$, and then ask for the behavior as $m\rightarrow \infty$ of the expectation
$$ \boldsymbol{E}_{\mathcal{S}(m,x)}\left( |\det(A+y)|\right). $$
Observe that
$$GOE= \mathcal{S}(m,x)_{x=0}.$$
The normalizing constant $Z_{m,x}$ can be explicitly computed for any $x$ and thus
$$ F_m(x,y)= \frac{Z_{m,x}}{Z_{m,0}} \boldsymbol{E}_{\mathcal{S}(m,x)}\left( |\det(A+y)|\right). $$
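As a numerical aside: for small $m$, $F_m(x,y)$ can be estimated directly by Monte Carlo. A sketch (plain numpy; the sampler matches the normalization $\boldsymbol{E}(a_{ii}^2)=2$, $\boldsymbol{E}(a_{ij}^2)=1$ above):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_goe(m):
    """GOE matrix with E a_ii^2 = 2 and E a_ij^2 = 1 for i != j."""
    G = rng.standard_normal((m, m))
    return (G + G.T) / np.sqrt(2)

def F_estimate(m, x, y, n_samples=20_000):
    """Monte Carlo estimate of E( |det(y + A)| * exp(-x (tr A)^2) )."""
    total = 0.0
    for _ in range(n_samples):
        A = sample_goe(m)
        total += abs(np.linalg.det(y * np.eye(m) + A)) * np.exp(-x * np.trace(A) ** 2)
    return total / n_samples

# e.g. the case of interest x = 1/8:
print(F_estimate(4, 0.125, 0.0))
```

This obviously says nothing about the $m\to\infty$ asymptotics (the integrand grows too quickly in $m$ for naive sampling), but it is handy for checking candidate formulas at small $m$.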
In the geometric problem I am interested in, $x=\frac{1}{8}$. In this case the ensemble $\mathcal{S}_m:=\mathcal{S}(m, \frac{1}{8})$ can be described as the ensemble of real, symmetric $m\times m$ matrices whose entries are mean zero Gaussian variables satisfying the covariance equalities
$$ \boldsymbol{E}\left( a_{ij} a_{k\ell}\right)=-\frac{2}{2+m}\delta_{ij}\delta_{k\ell} +\left( \delta_{ik}\delta_{j\ell}+ \delta_{i\ell}\delta_{jk}\right).$$
Note that as $m\rightarrow \infty$ this ensemble resembles more and more the classical GOE which satisfies the covariance equalities
$$ \boldsymbol{E}\left( a_{ij} a_{k\ell}\right)= \left(\delta_{ik}\delta_{j\ell}+ \delta_{i\ell}\delta_{jk}\right).$$
Finally, I want to explain how is this related to Morse theory. To put things in perspective observe that if $A$ is a symmetric $m\times m$ matrix, then its spectrum can be identified with the set of critical values of the restriction to the unit sphere in $\mathbb{R}^m$ of the quadratic polynomial
$$\mathbb{R}^m\ni x\mapsto q_A(x)=(Ax,x).$$
To a Morse function $f$ on a compact smooth manifold $M$ of dimension $m$ we can associate two measures.
(a) A measure $K_f$ on $M$ defined as the sum of Dirac delta's concentrated at the critical points of $f$
$$K_f=\sum_{df(p)=0}\delta_p.$$
(b) A measure $\Delta_f$ on $\mathbb{R}$ supported on the set of critical values of $f$ and defined as the pushforward of $K_f$ via $f$,
$$\Delta_f:=f_*(K_f).$$
In other words, $\Delta_f$ counts the critical values with multiplicity. Note that when $f$ is the restriction to the unit sphere of the quadratic form $q_A$ then $\Delta_f$ coincides with the spectral measure of $A$.
Fix a Riemann metric $g$ on $M$ and an orthonormal basis $(\Psi_k)_{k\geq 0}$ of $L^2(M)$ consisting of eigenfunctions of the Laplacian
$$ \Delta \Psi_k=\lambda_k \Psi_k. $$
Fix i.i.d. standard Gaussian random variables $(x_k)_{k\geq 0}$ and for every $L >0$ define the random function
$$f_L=\sum_{\lambda_k\leq L^2}x_k\Psi_k. $$
The function $f_L$ is roughly speaking a random polynomial of large degree. Equivalently one should think of $f_L$ as a random element in the space $U_L$ spanned by the eigenfunctions corresponding to eigenvalues $\leq L^2$ and equipped with the standard Gaussian measure. The large $L$ behavior of $\dim U_L$ is governed by Weyl's asymptotic formula
$$ \dim U_L \sim const. L^m.$$
To $f_L$ we associate two random measures
$$ K_{f_L},\;\; \Delta_{f_L} $$
that have normalized expectations
$$ K_L:=\frac{1}{\dim U_L} \boldsymbol{E}( K_{f_L} ), $$
$$ \Delta_L:=\frac{1}{\dim U_L} \boldsymbol{E}( \Delta_{f_L} ). $$
Above, $K_L$ is a measure on $M$ and $\Delta_L$ is a measure on $\mathbb{R}$. I can show that as $L\to\infty$ the measure $K_L$ converges weakly to $C_m dV_g$, where $dV_g$ denotes the volume measure determined by the metric $g$, and $C_m$ is a certain explicit constant that depends only on $m$ but not on $(M,g)$. Thus, the critical points of a random $f_L$, $L\gg 0$, are on average uniformly distributed.
As $L\to \infty$, a suitably rescaled version of the measure $\Delta_L$ converges to a measure $d\mu_m(y)$ on $\mathbb{R}$ that is absolutely continuous with respect to the Lebesgue measure. More precisely,
$$d\mu_m(y)=\rho_m(y) dy= Const_m \times \boldsymbol{E}_{\mathcal{S}(m,1/8)}\left( \;|\det(A-s_my )|\;\right) e^{-\frac{y^2}{2 }} dy,$$
$$s_m=\sqrt{\frac{m+4}{m+2}} $$
Remark. The measure $d\mu_m(y)$ can also be given a description as a conditional expectation. To explain this I need to introduce another Gaussian ensemble of symmetric $m\times m$ matrices.
To describe it observe that to any such matrix $A$ we can associate a quadratic form $q_A$ on $\mathbb{R}^m$,
$$ q_A(x)=(Ax, x).$$
We have a unique, centered Gaussian probability measure on the space of symmetric $m\times m$ matrices with variance
$$V(A)=\int_{\mathbb{R^m}} q_A(x)^2 \frac{e^{-\frac{|x|^2}{2}}}{(2\pi)^{\frac{m}{2}}} dx. $$
Denote by $\mathcal{U}_m$ this Gaussian ensemble of symmetric matrices. (I use the symbol $\mathcal{U}_m$ because this ensemble has a remarkable universality property.)
Now fix a standard (scalar) Gaussian r.v. $Y$ such that the pair $(A,Y)$ is a Gaussian vector satisfying the correlation equalities
$$\boldsymbol{E}(a_{ij} Y)=s_m\delta_{ij}. $$
Then for any Borel subset of $\mathbb{R}$ we have
$$\mu_m(B)=\boldsymbol{E}_{\mathcal{U}_m}\Bigl( |\det A|\;\Bigm|\; Y\in B\Bigr). $$
I'm learning the basics of quantum computing and got confused about qubits and the Hadamard gate. What I understood:
A qubit can (naturally) be in the states $\lvert 0 \rangle$, $\lvert 1 \rangle$ or any superposition $\alpha \lvert 0 \rangle + \beta \lvert 1 \rangle$
The Hadamard gate transforms a qubit from state $\lvert 0 \rangle$ to the superposition $\frac{1}{\sqrt{2}}( \lvert 0 \rangle + \lvert 1 \rangle)$ and therefore creates a uniform random bit generator (upon measurement) that will probably be useful for further (cryptographic?) algorithms/applications.
Question: How can one be sure that the state of the input qubit really is $\lvert 0\rangle$? Couldn't it also be in $\lvert 1\rangle$ or any superposition of them? And since we cannot measure the input qubits, as they will fall randomly into one of the two basis states, we cannot be sure about it, no?
And what if the input qubit was in a superposition? What will the Hadamard gate do to it?
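A small numerical illustration (plain numpy, no quantum SDK; a sketch to accompany the question) of how the Hadamard gate acts on the basis states and on a state it has already produced:

```python
import numpy as np

H = np.array([[1,  1],
              [1, -1]]) / np.sqrt(2)

ket0 = np.array([1.0, 0.0])  # |0>
ket1 = np.array([0.0, 1.0])  # |1>

print(H @ ket0)        # (|0> + |1>)/sqrt(2): the uniform superposition
print(H @ ket1)        # (|0> - |1>)/sqrt(2): same probabilities, different phase
print(H @ (H @ ket0))  # |0> again: H is its own inverse
```

Note that $H$ is unitary and its own inverse: applying it to an arbitrary superposition $\alpha\lvert 0\rangle + \beta\lvert 1\rangle$ gives a well-defined new state, and applying it twice returns the original state. That interference (rather than direct measurement) is one way the input state can be verified in practice.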
Chaochen Wang$^1$, Yingsong Lin$^1$, Masumi Okuda$^2$, Shogo Kikuchi$^1$
1. Aichi Medical University School of Medicine; 2. Hyogo College of Medicine
Metronidazole (MNZ) has been broadly prescribed as therapy for \(H. pylori\) eradication worldwide.
A second-line regimen using MNZ is covered under national health insurance in Japan.
Eradication rates:1 60.5% for PPI + AMPC + CLA (PAC); 98.3% for PPI + AMPC + MNZ (PAM).
The resistance rate of CLA in Japanese children is reported to be more than 40%.2
MNZ \(\Longrightarrow\) classified as possibly carcinogenic to humans (Group 2B: suspected human carcinogen) by the International Agency for Research on Cancer (IARC) in 1987.
1: Unpublished data, Mabe et al.; 2: Kato and Fujimura 2010
Related literature (published until April 2016) was reviewed.
Search term:
(("Drug-Related Side Effects and Adverse Reactions"[Mesh]) AND "Metronidazole"[Mesh]) OR ("Metronidazole/adverse effects"[Mesh] OR "Metronidazole/toxicity"[Mesh]) OR (("Metronidazole"[Mesh]) AND "Carcinogenicity Tests"[Mesh])
Oral exposure of MNZ has shown carcinogenic activity in mice and rats.
Results (carcinogenic sites, by year and authors):
Pulmonary \(\uparrow\): Rustia and Shubik (1972); IARC (1977); Cavaliere et al. (1983)
Liver \(\uparrow\): Rustia and Shubik (1979)
Lymphomas \(\uparrow\): Rustia and Shubik (1972)
Mammary gland \(\uparrow\): Rustia and Shubik (1979); Cavaliere et al. (1984)
Pituitary gland \(\uparrow\): Rustia and Shubik (1979)
Genetic damage, negative: Touati et al. (2000)
Reproductive organs & fertility \(\downarrow\): Kumari and Singh (2013)
Lack of evidence for cancer due to use of metronidazole. N Engl J Med. 1979;301:519–522.
Follow-up studies in humans
Data on MNZ carcinogenicity in humans are still not sufficient.
No increased cancer risk was found in 12,000 users of MNZ,1 but they were only followed for 2.5 years (a letter to JAMA).
No association between short-term exposure to MNZ and cancer in humans was found in 5,222 MNZ user/nonuser pairs (RR 0.98; 95% CI, 0.80$-$1.20).2
Another retrospective study3 of children (\(<\) 5 \(y.\), n \(=\) 328,846) who had been exposed to MNZ in utero also reported no increase in cancer risk (no elevation of carcinogenic risk has been observed in human studies).
1. Hari et al. 2013; 2. Farmakiotis et al. 2016; 3. Khan et al., 2007; 4. Ohnishi et al. 2014; 5. Kumar et al. 2013; 6. O'Halloran et al. 2010.
Recommended regimens in the UK (pediatric eradication regimens)
Drug / age range (\(y.\)) / oral dose (mg per day) with omeprazole (PPI) / combined with:
AMPC, \(1\sim 6\): 250, twice (with CLA); 125, 3 times (with MNZ)
AMPC, \(6\sim 12\): 500, twice (with CLA); 250, 3 times (with MNZ)
AMPC, \(12\sim 18\): 1000, twice (with CLA); 500, 3 times (with MNZ)
CLA, \(1\sim 12\): 7.5 mg/kg (max. 500), twice (with MNZ/AMPC)
CLA, \(12\sim 18\): 500, twice (with MNZ/AMPC)
MNZ, \(1\sim 6\): 100, twice (with CLA); 100, 3 times (with AMPC)
MNZ, \(6\sim 12\): 200, twice (with CLA); 200, 3 times (with AMPC)
MNZ, \(12\sim 18\): 400, twice (with CLA); 400, 3 times (with AMPC)
Abbreviations: AMPC, amoxicillin; CLA, clarithromycin; MNZ, metronidazole.
Evidence-based guidelines from ESPGHAN and NASPGHAN for Helicobacter pylori infection in children. J Pediatr Gastroenterol Nutr. 2011;53:230–243. (North American/European pediatric eradication algorithm)
So far, there is no convincing evidence that short-term exposure to metronidazole would increase the risk of any cancer in humans.
Considering the high resistance rate of clarithromycin in Japanese children, first-line therapy of PPI + amoxicillin + metronidazole would be an alternative option.
In particular, for Japanese junior high and high school students, when clarithromycin susceptibility testing is not performed, a combination regimen containing metronidazole can be one option.
How do we interpret a formula with free variables?
In mathematics, we usually have two kinds of "equations" :
$(x+1)(x-1)=x^2-1$
$x^2-2x+1=0$
The first one is an identity and it is clearly implicitly universally quantified; i.e. it must be read as :
$\forall x [(x+1)(x-1)=x^2-1]$.
If we consider for simplicity the interpretation based on the domain $\mathbb N$ of natural numbers, and we instantiate it with any number $n$, we always get a true formula : $(n+1)(n-1)=n^2-1$; i.e. the "matrix" (the sub-formula without the quantifier) is always satisfied.
This is simply a consequence of the logical axiom (or law) :
$\forall x \varphi \rightarrow \varphi_t^x$, where $t$ is substitutable for $x$ in $\varphi$.
The formula $x^2-2x+1=0$, instead, must not be read as universally quantified, because it is simply false that every number $n \in \mathbb N$ satisfies it.
It must be read as :
$\exists x (x^2-2x+1=0)$.
How do we interpret $x^2-2x+1=0$ with $x$ free ? To do this, we have to assign a "temporary" denotation to the free variable: this can be done in more than one way (all more or less equivalent).
We can, for example, use a variable assignment function $s : Var \to D$, where $Var$ is the set of variables of the language and $D$ is the domain of the interpretation : in our example $\mathbb N$.
Thus, if we consider the function $s$ that assigns to $x$ the number $0$ we have that :
$(x^2-2x+1=0)[s]$
is clearly false, because : $0-2 \times 0 +1 = 1 \ne 0$. We say that $s$ does not satisfy the formula.
If we consider instead the function $s'$ that assigns to $x$ the number $1$ we have that :
$(x^2-2x+1=0)[s']$
is true, because : $1-2+1 = 0$, and we say that $s'$ satisfies the formula (by the way, this shows that the formula $\exists x (x^2-2x+1=0)$ is true in $\mathbb N$, due to the fact that we have found a variable assignment that satisfies its "matrix").
The above argument also explains the assertion that "free variables and constants play the same role"; in order to show the satisfiability of a formula with a free variable, we treat the variable as a "temporary" name for an object in the domain of the interpretation (we assign to $x$ a denotation through the variable assignment function $s$).
Now showing items 1-7 of 7
The Lockman Hole project : LOFAR observations and spectral index properties of low-frequency radio sources
(2016-12-11)
The Lockman Hole is a well-studied extragalactic field with extensive multi-band ancillary data covering a wide range in frequency, essential for characterising the physical and evolutionary properties of the various source ...
LOFAR/H-ATLAS: A deep low-frequency survey of the Herschel-ATLAS North Galactic Pole field
(2016-10-21)
We present LOFAR High-Band Array (HBA) observations of the Herschel-ATLAS North Galactic Pole survey area. The survey we have carried out, consisting of four pointings covering around 142 square degrees of sky in the ...
LOFAR 150-MHz observations of the Boötes field: Catalogue and Source Counts
(2016-08-11)
We present the first wide area (19 deg$^2$), deep ($\approx120-150$ $\mu$Jy beam$^{-1}$), high resolution ($5.6 \times 7.4$ arcsec) LOFAR High Band Antenna image of the Boötes field made at 130-169 MHz. This image is at ...
A plethora of diffuse steep spectrum radio sources in Abell 2034 revealed by LOFAR
(2016-06-11)
With Low-Frequency Array (LOFAR) observations, we have discovered a diverse assembly of steep spectrum emission that is apparently associated with the intra cluster medium (ICM) of the merging galaxy cluster Abell 2034. ...
LOFAR facet calibration
(2016-03-07)
LOFAR, the Low-Frequency Array, is a powerful new radio telescope operating between 10 and 240 MHz. LOFAR allows detailed sensitive high-resolution studies of the low-frequency radio sky. At the same time LOFAR also provides ...
LOFAR MSSS: Detection of a low-frequency radio transient in 400 hrs of monitoring of the North Celestial Pole
(2016-03-01)
We present the results of a four-month campaign searching for low-frequency radio transients near the North Celestial Pole with the Low-Frequency Array (LOFAR), as part of the Multifrequency Snapshot Sky Survey (MSSS). The ...
LOFAR, VLA, and Chandra observations of the Toothbrush galaxy cluster
(2016-02-22)
We present deep LOFAR observations between 120-181 MHz of the "Toothbrush" (RX J0603.3+4214), a cluster that contains one of the brightest radio relic sources known. Our LOFAR observations exploit a new and novel calibration ...
Background
For a system consisting of two molecules (monomers or fragments are also used) X and Y, the binding energy is
$$\Delta E_{\text{bind}} = E^{\ce{XY}}(\ce{XY}) - [E^{\ce{X}}(\ce{X}) + E^{\ce{Y}}(\ce{Y})]\label{eq:sherrill-1} \tag{Sherrill 1}$$
where the letters in the parentheses refer to the atoms present in the calculation and the letters in the superscript refer to the (atomic orbital, AO) basis present in the calculation. The first term is the energy calculated for the combined X + Y complex (the dimer) with the dimer basis functions, and the next two terms are energy calculations for each isolated monomer with only their respective basis functions. The remainder of this discussion will make more sense if the complex geometry is used for each monomer, rather than the isolated fragment geometry.
The counterpoise-corrected (CP-corrected) binding energy [1] to correct for basis set superposition error (BSSE) [2] is defined as
$$\Delta E_{\text{bind}}^{\text{CP}} = E^{\ce{XY}}(\ce{XY}) - [E^{\ce{XY}}(\ce{X}) + E^{\ce{XY}}(\ce{Y})]\label{eq:sherrill-3} \tag{Sherrill 3}$$
where the monomer calculations are now performed in the dimer/complex basis. Let's explicitly state how this works for the $E^{\ce{XY}}(\ce{X})$ term. The first molecule X contributes nuclei with charges, basis functions (AOs) centered on those nuclei, and electrons that will count toward the final occupied molecular orbital (MO) index into the MO coefficient array. There is no reason why additional AOs that are not centered on atoms can't be added to a calculation. Depending on their spatial location, if they're close enough to have significant overlap, they may mix with the atom-centered AOs, increasing the variational flexibility of the calculation and lowering the overall energy. Put another way, place the AOs that would correspond to molecule Y at their correct positions, but don't put the nuclei there, and don't consider the number of electrons they would contribute to the total number of occupied orbitals. This means that for the full electronic Hamiltonian
$$\hat{H}_{\text{elec}} = \hat{T}_{e} + \hat{V}_{eN} + \hat{V}_{ee}$$
calculating the electron-nuclear attraction $\hat{V}_{eN}$ term is now different. Considered explicitly in matrix form in the AO basis,
$$\begin{align*}V_{\mu\nu} &= \int \mathop{d\mathbf{r}_{i}} \chi_{\mu}(\mathbf{r}_{i}) \left( \sum_{A}^{N_{\text{atoms}}} \frac{Z_{A}}{|\mathbf{r}_{i} - \mathbf{R}_{A}|} \right) \chi_{\nu}(\mathbf{r}_{i}) \\&=\sum_{A}^{N_{\text{atoms}}} Z_{A} \left< \chi_{\mu} \middle| \frac{1}{r_{A}} \middle| \chi_{\nu} \right>\end{align*}$$
there are now fewer terms in the summation, since the nuclear charges from molecule Y are zero (the atoms just aren't there), but the number of $\mu\nu$ pairs is the same as for the XY complex. This and the $\hat{T}_{e}, \hat{V}_{ee}$ terms aren't really mathematically or functionally different, then; this is more to show where the additional basis functions enter, or where nuclei appear in the equations [3].
These atoms that don't have nuclei or electrons, only basis functions, are called ghost atoms. Sometimes you also see the terms ghost functions, ghost basis, or ghost {something} calculation. Adding the basis of monomer Y to make the full "dimer basis" means taking monomer X and including basis functions at the nuclear positions for Y.
Geometry optimization
Now to calculate the molecular gradient, that is, the derivative of the energy with respect to the $3N$ nuclear coordinates. This is the central quantity in any geometry optimization. For the sake of simplicity, consider a steepest descent-type update of the nuclear coordinates$$R_{A,x}^{(n+1)} = R_{A,x}^{(n)} - \alpha \frac{\partial E_{\text{total}}^{(n)}}{\partial R_{A,x}}\label{eq:steepest-descent} \tag{Steepest Descent}$$where $n$ is the optimization iteration number, $\alpha$ is some small step size with units [length]$^2$/[energy], and the last term is the derivative of the total (not just electronic) energy with respect to a change in atom $A$'s $x$-coordinate. Even Newton-Raphson-type updates with approximate Hessians (2nd derivative of the energy with respect to nuclear coordinates, rather than the 1st) need the gradient, so we must formulate it.
Formulation of the energy
We're in a bit of trouble, because we want to replace $E_{\text{total}}$ in the gradient with $E_{\text{total}}^{\text{CP}}$, but all we have is $\Delta E_{\text{bind}}^{\text{CP}}$. The concept of CP correction can still be applied to a total energy, but the BSSE must be removed from each monomer. The BSSE correction itself for each monomer is$$\begin{split}E_{\text{BSSE}}(\ce{X}) &= E^{\ce{XY}}(\ce{X}) - E^{\ce{X}}(\ce{X}), \\E_{\text{BSSE}}(\ce{Y}) &= E^{\ce{XY}}(\ce{Y}) - E^{\ce{Y}}(\ce{Y}),\end{split}\label{eq:2}$$which, when subtracted from $\eqref{eq:sherrill-1}$, gives $\eqref{eq:sherrill-3}$. More correctly, considering that the geometry for each step is at the final cluster geometry and not the isolated geometry, the above is [4]$$\begin{split}E_{\text{BSSE}}(\ce{X}) &= E_{\ce{XY}}^{\ce{XY}}(\ce{X}) - E_{\ce{XY}}^{\ce{X}}(\ce{X}), \\E_{\text{BSSE}}(\ce{Y}) &= E_{\ce{XY}}^{\ce{XY}}(\ce{Y}) - E_{\ce{XY}}^{\ce{Y}}(\ce{Y}).\end{split}\label{eq:sherrill-10} \tag{Sherrill 10}$$
The CP-corrected total energy, the full dimer energy with the BSSE removed from each monomer, is then$$\begin{split}E_{\text{tot}, \ce{\widetilde{XY}}}^{\text{CP}} &= E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{XY}) - E_{\text{BSSE}}(\ce{X}) - E_{\text{BSSE}}(\ce{Y}), \\&= E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{XY}) - \left[ E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{X}) - E_{\ce{\widetilde{XY}}}^{\ce{X}}(\ce{X}) \right] - \left[ E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{Y}) - E_{\ce{\widetilde{XY}}}^{\ce{Y}}(\ce{Y}) \right].\end{split}\label{eq:sherrill-15} \tag{Sherrill 15}$$Note that I have modified which geometry is used for each monomer in $\eqref{eq:sherrill-15}$. All monomers are calculated at the supermolecule geometry. This is convenient for two reasons: 1. we are only interested in removing the BSSE, not the effect of monomer deformation, and 2. an isolated monomer geometry without deformation doesn't make sense in the context of a geometry optimization. I also added the tilde to signify that the supermolecular/dimer geometry used may not be the final or minimum-energy geometry, as would be the case during a geometry optimization. We simply extract all structures consistently from a given geometry iteration. Perhaps $\ce{XY}(n)$ would be better notation.
Formulation of the gradient
As Pedro correctly states, the differentiation operator is a linear operator. Because there are no products in $\eqref{eq:sherrill-15}$, the total gradient needed for $\eqref{eq:steepest-descent}$ will be a sum of gradients [5]:$$\frac{\partial E_{\text{tot}, \ce{\widetilde{XY}}}^{\text{CP}}}{\partial R_{A,x}} = \frac{\partial E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{XY})}{\partial R_{A,x}} - \left[ \frac{\partial E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{X})}{\partial R_{A,x}} - \frac{\partial E_{\ce{\widetilde{XY}}}^{\ce{X}}(\ce{X})}{\partial R_{A,x}} \right] - \left[ \frac{\partial E_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{Y})}{\partial R_{A,x}} - \frac{\partial E_{\ce{\widetilde{XY}}}^{\ce{Y}}(\ce{Y})}{\partial R_{A,x}} \right],$$so each step of a CP-corrected geometry optimization will require 5 gradient calculations rather than 1. Note that the nuclear gradient should be included for each term as well, which is a trivial calculation.
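To make the bookkeeping concrete, here is a sketch of assembling that sum of gradients and taking one steepest-descent step. The function and data layout are hypothetical (not any package's actual API); monomer-only gradients are zero-padded at the ghost-atom centers so all arrays share the dimer's shape:

```python
import numpy as np

def cp_gradient(g):
    """Assemble the CP-corrected gradient from the five component gradients.

    `g` maps (basis, fragment) -> an (N_atoms, 3) array, all evaluated at the
    current dimer geometry.
    """
    return (g[("XY", "XY")]
            - (g[("XY", "X")] - g[("X", "X")])
            - (g[("XY", "Y")] - g[("Y", "Y")]))

# Toy 2-atom example, one atom per monomer (numbers are made up):
g = {
    ("XY", "XY"): np.array([[0.10, 0.0, 0.0], [-0.10, 0.0, 0.0]]),
    ("XY", "X"):  np.array([[0.04, 0.0, 0.0], [ 0.00, 0.0, 0.0]]),
    ("X", "X"):   np.array([[0.03, 0.0, 0.0], [ 0.00, 0.0, 0.0]]),
    ("XY", "Y"):  np.array([[0.00, 0.0, 0.0], [-0.05, 0.0, 0.0]]),
    ("Y", "Y"):   np.array([[0.00, 0.0, 0.0], [-0.04, 0.0, 0.0]]),
}

R = np.zeros((2, 3))   # current coordinates
alpha = 0.5            # steepest-descent step size
R_new = R - alpha * cp_gradient(g)
print(cp_gradient(g)[:, 0])  # x-components: 0.10 - 0.01 = 0.09 and -0.10 + 0.01 = -0.09
```

The point is only the linearity: the driver runs five gradient calculations per iteration and combines them with the same signs as the energy expression.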
Extension to other molecular properties
Although not commonly done, counterpoise correction can be applied to any molecular property, not just energies or gradients. Simply replace $E$ or $\partial E/\partial R$ with the property of interest. For example, the CP-corrected polarizability $\alpha$ of two fragments is$$\alpha_{\text{tot}, \ce{\widetilde{XY}}}^{\text{CP}} = \alpha_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{XY}) - \left[ \alpha_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{X}) - \alpha_{\ce{\widetilde{XY}}}^{\ce{X}}(\ce{X}) \right] - \left[ \alpha_{\ce{\widetilde{XY}}}^{\ce{XY}}(\ce{Y}) - \alpha_{\ce{\widetilde{XY}}}^{\ce{Y}}(\ce{Y}) \right]$$where I believe it now makes even less sense to have each individual fragment calculation not be at the cluster geometry. In papers that calculate CP-corrected properties, no mention is usually made of which geometry the individual calculations are performed at for this reason.
References
1. Boys, S. Francis; Bernardi, F. The calculation of small molecular interactions by the differences of separate total energies. Some procedures with reduced errors. Mol. Phys. 1970, 19, 553-566.
2. Sherrill, C. David. Counterpoise Correction and Basis Set Superposition Error. 2010, 1-6.
3. One implementation note: Most common quantum chemistry packages should allow for the usage of ghost atoms in energy and gradient calculations. However, as Sherrill states, they do not properly allow for composing the full gradient expression to perform CP-corrected geometry optimizations. Gaussian can, and Psi4 may. For programs that can calculate gradients with ghost atoms, Cuby can be used to drive the calculation of CP-corrected geometries and frequencies.
4. There is a typo in the Sherrill paper; the subscripts for all 4 energy terms should be $AB$, which here are $\ce{XY}$.
5. Simon, S.; Bertran, J.; Sodupe, M. Effect of Counterpoise Correction on the Geometries and Vibrational Frequencies of Hydrogen Bonded Systems. J. Phys. Chem. A 2001, 105, 4359-4364.
Mapping is Injection and Surjection iff Inverse is Mapping/Proof 2
Theorem
Let $S$ and $T$ be sets.
Let $f: S \to T$ be a mapping.
Then:
$f: S \to T$ is an injection and a surjection if and only if the inverse $f^{-1}$ of $f$ is itself a mapping, that is, such that $f^{-1} \subseteq T \times S$ is itself a mapping. Proof
Necessary Condition
Let $f: S \to T$ be a mapping which is both an injection and a surjection.
Let $t \in T$.
Then as $f$ is a surjection:
$\exists s \in S: t = \map f s$
As $f$ is an injection, there is only one $s \in S$ such that $t = \map f s$.
Define $\map g t = s$.
As $t \in T$ is arbitrary, it follows that:
$\forall t \in T: \exists s \in S: \map g t = s$
such that $s$ is unique for a given $t$.
That is, $g: T \to S$ is a mapping.
By the definition of $g$:
$(1): \quad \forall t \in T: \map f {\map g t} = t$ Let $s \in S$.
Let:
$(2): \quad s' = \map g {\map f s}$
Then:
\(\map f {s'} = \map f {\map g {\map f s} }\) by $(2)$, and \(\map f {\map g {\map f s} } = \map f s\) from $(1)$. As $f$ is an injection, it follows that \(s = s' = \map g {\map f s}\).
Thus $f: S \to T$ and $g: T \to S$ are inverse mappings of each other.
$\Box$
Sufficient Condition
Let $f^{-1}: T \to S$ be a mapping.
Hence, in particular, $f$ is a bijection.
$\blacksquare$
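As an illustration, for finite sets the theorem is easy to check mechanically; a toy sketch (ordinary Python, not part of the formal proof):

```python
def inverse_relation(f):
    """The inverse of a mapping f (given as a dict) as a set of ordered pairs."""
    return {(t, s) for s, t in f.items()}

def is_mapping(rel, domain):
    """A relation is a mapping on `domain` iff every element of the domain
    occurs exactly once as a first coordinate."""
    firsts = sorted(t for t, _ in rel)
    return firsts == sorted(domain)

f = {1: "a", 2: "b", 3: "c"}  # an injection and a surjection onto {a, b, c}
g = {1: "a", 2: "a", 3: "c"}  # not an injection (and not onto "b")

print(is_mapping(inverse_relation(f), ["a", "b", "c"]))  # True
print(is_mapping(inverse_relation(g), ["a", "b", "c"]))  # False
```

Only the bijection's inverse relation passes the mapping test, exactly as the theorem states.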
Mathematics - Functional Analysis and Mathematics - Metric Geometry
Abstract
The following strengthening of the Elton-Odell theorem on the existence of $(1+\epsilon)$-separated sequences in the unit sphere $S_X$ of an infinite dimensional Banach space $X$ is proved: there exists an infinite subset $S\subseteq S_X$ and a constant $d>1$, satisfying the property that for every $x,y\in S$ with $x\neq y$ there exists $f\in B_{X^*}$ such that $d\leq f(x)-f(y)$ and $f(y)\leq f(z)\leq f(x)$, for all $z\in S$. Comment: 15 pages, to appear in Bulletin of the Hellenic Mathematical Society
Given a finite dimensional Banach space $X$ with $\dim X = n$ and an Auerbach basis of $X$, it is proved that there exists a set $D$ of $n + 1$ linear combinations (with coordinates $0, -1, +1$) of the members of the basis, so that each pair of distinct elements of $D$ has distance greater than one. Comment: 15 pages. To appear in MATHEMATIKA
We call a compact complex manifold a Moisezon manifold if its dimension coincides with its algebraic dimension, i.e. it has as many algebraically independent meromorphic functions as its complex dimension. Boris Moisezon himself gave a proof of the following theorem: let $X$ be a Moisezon manifold; then for $X$ to be projective it is necessary and sufficient that it be Kähler.
It was told that it could also be formulated as a criterion for projectivity:
A compact complex manifold is projective if and only if it is Kähler and Moisezon.
I didn't find any complete proof of the second formulation. Does someone know where to find it? Or at least how it works?
Edit: I already found two versions of the implication "projective => Moisezon" (the necessity of being Kähler is in fact clear), but I am also interested in alternatives, especially for the other direction: on the one hand Huybrechts, Complex Geometry, and on the other hand Wells, Moisezon spaces and the Kodaira embedding theorem.
Next Edit: I don't want this to look like a Jeopardy question but, after postponing the problem, I stumbled upon this:
There is a different proof by Thomas Peternell, given in "Algebraicity Criteria for Compact Complex Manifolds", Math. Ann. 275, 653-672 (1986). Theorem 1.4. states a slight variation of the theorem of Moisezon which is indeed equivalent. More precisely it states, that if there exist a real $(1,1)$-form $\omega$ and a real $2$-form $\varphi$ on a Moisezon manifold $X$ such that $\omega$ is positive definite, $d(\omega-\varphi) = 0$ and $\int_C \varphi = 0$ for all curves $C\subset X$, then $X$ is projective. |
Computing Ranges in Constant Time
Suppose we have some sequence of elements. We want to be able to answer questions about any of its ranges in \(O(1)\) time. For example, we have the following sequence:
\[ A = \{ 5, 2, 4, 7, 6, 3, 1, 2 \} \]
What is the minimum/maximum element in the range from index 0 to 3? What is the sum of the elements in the range from index 1 to 4?
A naive approach would simply iterate over the range and determine the result (\(O(n)\) per query and \(O(1)\) space), or precalculate all of the possible queries (\(O(1)\) per query and \(O(n^2)\) space). When speed and efficiency are essential, however, in cases when we’re dealing with large datasets, we’ll have to come up with something more clever.
In this article, we’ll introduce the Sparse Table data structure and see how, with a little bit of preprocessing, it lets us answer range queries in constant time.
Intuition
The main idea is to precompute all of the answers for the range queries and store them in a data structure. The challenge is how to do it in an efficient way. We want to save as much space as we can while retaining the ability to retrieve answers in constant time. Our target is \(O(1)\) search and \(O(n\log_2 n)\) space, and we can achieve it with dynamic programming and some basic arithmetic.
We know that we can represent any natural number as a unique decreasing sum of powers of two (yes we’ve just described binary). For example:
\[ 11 = (1011)_2 = 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0 = 8 + 0 + 2 + 1 \]
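As a quick sanity check, the decomposition can be computed by repeatedly peeling off the largest power of two (a Python sketch of mine, not from the article):

```python
def powers_of_two(n):
    """Decompose n into a decreasing sum of distinct powers of two."""
    parts = []
    while n > 0:
        p = 1
        while p * 2 <= n:       # largest power of two not exceeding n
            p *= 2
        parts.append(p)
        n -= p
    return parts

print(powers_of_two(11))        # [8, 2, 1], i.e. 8 + 2 + 1 = 11
```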
We can use the same reasoning to represent a sequence as a finite union of ranges. Consider the sequence of natural numbers from 2 to 12. It can be represented in the following way:
\[ [2 … 12] = [2 … 9] \cup [10 … 11] \cup [12 … 12] \]
\([2 … 12]\) has \(11\) elements and we broke it down into ranges of \(8\), \(2\) and \(1\) elements (all powers of 2) respectively. We can also observe that such a union can consist of at most \(\log_2 N\) ranges, where \(N\) is the length of the original sequence.
Efficiently precomputing the results
We’re going to compute range minima. Let’s go back to our example sequence \(A\) and encode all the possible answers in the sparse table. We’re going to represent it as a two-dimensional array \(M\) of size \(N \times K\), where \(K = \lfloor \log_2 N \rfloor + 1\). Every cell in this matrix will contain the index of the minimum in a particular range. Note that these ranges have sizes that are powers of \(2\), which means that we compute the minima only of those ranges, hence the size \(O(N \log_2 N)\).
var A = new[] { 5, 2, 4, 7, 6, 3, 1, 2 };
var N = A.Length;
var K = (int)Math.Floor(Math.Log(N, 2)) + 1;
var M = new int[N, K]; // The Sparse Table
Basis
A range of length \(1\) is still a valid range. So the minimum of \(A[1…1]\) is exactly \(A[1] = 2\); therefore, filling the first row of the table is trivial.
for (int i = 0; i < N; i++) M[i, 0] = i;
In other words, we have computed the minima of all the ranges starting at index \(i\) of length \(1 = 2^0\).
Iteration
This is where things get interesting. We introduce a general procedure for determining the minimum of a range of size \(2^j\), where \( 1 \le j \le log_2N \). Assuming that we’ve already found the minima for all ranges of size \(2^{j-1}\), we’re going to reuse those solutions to find the minima for \(2^j\). This is dynamic programming in its essence. We break down a problem into smaller sub-problems, solve the simplest case and work our way up.
Before diving into the mathematics of this procedure, let’s go through an example. This is our sequence:
\[ A = \{ 5, 2, 4, 7, 6, 3, 1, 2 \} \]
So our sparse table stores the indices of the minima in a certain range. After computing the basis, we end up with the following table:
j\i   0   1   2   3   4   5   6   7
 0    5   2   4   7   6   3   1   2
Note that this is not a 1-to-1 representation of the way we actually store the data. Here, for the sake of readability, I’m showing the actual values in the cells, whereas in the implementation we store indices. Above are the already computed ranges of length 1. Let’s see the next step.
j\i   0   1   2   3   4   5   6   7
 0    5   2   4   7   6   3   1   2
 1    2   2   4   6   3   1   1
Now we build our way up. We compute the ranges of length \(2 = 2^1 \). The way we interpret the values, \(M[i, j]\) is the minimum in the range from index \(i\) to \(i + 2^j - 1\) in \(A\). So at \(M[0, 1]\) we need to insert the minimum in the range from \(0\) to \(1\). We can split the range into two equal subranges. Looking at the table above, we’ve already computed them, so we take the smaller value. \(M[0, 1] = Min(M[0, 0], M[1, 0]) \).
The next step should be more representative of the power of dynamic programming. Now \(j = 2\) so we find the minima of the ranges of length \(2^2 = 4\).
j\i   0   1   2   3   4   5   6   7
 0    5   2   4   7   6   3   1   2
 1    2   2   4   6   3   1   1
 2    2   2   3   1   1
So \(M[i = 0, j = 2]\) represents the smallest element in the range from 0 to 3 (the first four), which is indeed 2. We came up with the result by only looking at the row above. We can represent \(A[0 … 3] = A[0 … 1] \cup A[2 … 3]\). We already know the minima of \(A[0 … 1]\) and \(A[2 … 3]\). They are located at \(M[0, 1] = 2\) and \(M[2, 1] = 4\) so we simply picked the smaller number.
j\i   0   1   2   3   4   5   6   7
 0    5   2   4   7   6   3   1   2
 1    2   2   4   6   3   1   1
 2    2   2   3   1   1
 3    1
The last row of the table is computed in the same way. The minimum of the first 8 is the smaller of the minima of the first four and the next four elements. \(M[0, 3] = Min(M[0, 2], M[4, 2])\).
Formally we can describe the procedure as:
\[ M[i, j] = \begin{cases} M[i, j-1], & \text{if } A[M[i,j-1]] \le A[M[i + 2^{j-1}, j - 1]] \newline M[i + 2^{j-1}, j - 1], & \text{otherwise} \end{cases} \]
The range index calculation might seem a bit unintuitive at first but it’s actually pretty straightforward.
\[ A[i … i + 2^j - 1] = A[i … i + 2^{j-1} - 1] \cup A[i + 2^{j-1} … i + 2^j - 1] \]
Both sub-ranges have a length of \(2^{j-1}\). This is how we turn this formal notation into code:
for (int j = 1; j < K; j++) {
    // 1 << j = 2^j
    for (int i = 0; i + (1 << j) <= N; i++) {
        int left = M[i, j - 1];
        int right = M[i + (1 << (j - 1)), j - 1];
        M[i, j] = A[left] <= A[right] ? left : right;
    }
}
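To double-check the worked example, here is the same construction sketched in Python (the article's code is C#; the names here are mine):

```python
from math import floor, log2

A = [5, 2, 4, 7, 6, 3, 1, 2]
N = len(A)
K = floor(log2(N)) + 1

# M[i][j] holds the index of the minimum of A[i .. i + 2^j - 1]
M = [[0] * K for _ in range(N)]

for i in range(N):                      # basis: ranges of length 1
    M[i][0] = i

for j in range(1, K):                   # combine two ranges of length 2^(j-1)
    for i in range(N - (1 << j) + 1):
        left = M[i][j - 1]
        right = M[i + (1 << (j - 1))][j - 1]
        M[i][j] = left if A[left] <= A[right] else right

# The value rows from the tables in the text:
print([A[M[i][1]] for i in range(N - 1)])   # [2, 2, 4, 6, 3, 1, 1]
print([A[M[i][2]] for i in range(N - 3)])   # [2, 2, 3, 1, 1]
```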
The Range Query
Now that our sparse table is constructed, we are ready to process queries. We’ve stored the minima for the ranges whose lengths are powers of two, but how do we compute the minimum for arbitrary ranges?
The idea is to select two blocks that entirely cover this range. Suppose we have an arbitrary block \(A[p … q], \text{where } p < q \) and we need to find the minimum.
Let \(k = \lfloor \log_2(q - p + 1) \rfloor\); then \(2^k\) is the size of the largest block in the table that fits into the range \(A[p … q]\). Then we can compute the minimum by comparing the minima of the following blocks: \( A[p … p + 2^k - 1] \text{ and } A[q - 2^k + 1 … q] \). Formally:
\[ RangeMinimum(p, q) = Min(M[p, k], M[q - 2^k + 1, k]) \]
Let’s see an example. We’re going to use the same sparse table that we computed in the previous section.
j\i   0   1   2   3   4   5   6   7
 0    5   2   4   7   6   3   1   2
 1    2   2   4   6   3   1   1
 2    2   2   3   1   1
 3    1
What is the range minimum of 1 and 5?
p = 1, q = 5
k = floor(log2(5 - 1 + 1)) = 2
M[1, 2] = 2
M[5 - 2^2 + 1, 2] = M[2, 2] = 3
return 2
The block \(A[1 … 5] \) contains \( { 2, 4, 7, 6, 3 }\) so we got a correct answer in constant time! But what do these calculations actually mean?
By calculating \(k\), we found the size of the largest power-of-2 block that fits into \(A[1 … 5]\). The size of this block is 4, and we already know the minima of all blocks of size 4. Therefore we pick two overlapping ranges of this length: the first starts at \(p\) and the other ends at \(q\). The whole range includes:
A[1 ... 5] = { 2, 4, 7, 6, 3 }
left  = { 2, 4, 7, 6 }
right = { 4, 7, 6, 3 }
We have converted the question from something we don’t know to something we know and thus can easily determine the result of the query in \(O(1)\).
public int RangeMinimum(int[] A, int[,] M, int p, int q) {
    var k = (int)Math.Floor(Math.Log((q - p + 1), 2));
    var left = M[p, k];
    var right = M[q - (1 << k) + 1, k];
    return A[left] <= A[right] ? left : right;
}
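The same query, sketched in Python for comparison (helper names are mine; the table stores indices, as in the C# version):

```python
from math import floor, log2

def build(A):
    """Sparse table of indices of range minima over power-of-two ranges."""
    N, K = len(A), floor(log2(len(A))) + 1
    M = [[0] * K for _ in range(N)]
    for i in range(N):
        M[i][0] = i
    for j in range(1, K):
        for i in range(N - (1 << j) + 1):
            l, r = M[i][j - 1], M[i + (1 << (j - 1))][j - 1]
            M[i][j] = l if A[l] <= A[r] else r
    return M

def range_minimum(A, M, p, q):
    """Index of the minimum of A[p..q], via two overlapping 2^k blocks."""
    k = floor(log2(q - p + 1))
    l, r = M[p][k], M[q - (1 << k) + 1][k]
    return l if A[l] <= A[r] else r

A = [5, 2, 4, 7, 6, 3, 1, 2]
M = build(A)
print(A[range_minimum(A, M, 1, 5)])   # 2, the minimum of {2, 4, 7, 6, 3}
```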
This algorithm can easily be tweaked to compute some other property, like the maximum, for example.
Range Sums
Let’s see how to compute range sums in constant time using a sparse table. We need to slightly modify our precomputation procedure. The main difference is that instead of storing indexes to elements in the array, we store sums.
  for (int i = 0; i < N; i++)
-     M[i, 0] = i;
+     M[i, 0] = A[i];
  for (int j = 1; j < K; j++) {
      // 1 << j = 2^j
      for (int i = 0; i + (1 << j) <= N; i++) {
          int left = M[i, j - 1];
          int right = M[i + (1 << (j - 1)), j - 1];
-         M[i, j] = A[left] <= A[right] ? left : right;
+         M[i, j] = left + right;
      }
  }
For computing a sum of an arbitrary range \(A[p … q]\), we’re going to use the observation that any range is a union of subranges with lengths of powers of \(2\). We start with the largest such subrange contained in \(A[p … q]\) and continue by adding the sums of the subsequent smaller ones, but only if they are within the bounds of \(A[p … q]\).
public int RSQ(int[,] M, int p, int q) {
    var sum = 0;
    // Exponent of the largest power-of-2 block that fits in the range
    int k = (int)Math.Floor(Math.Log((q - p + 1), 2));
    for (int j = k; j >= 0; j--) {
        if ((1 << j) <= (q - p + 1)) {
            sum += M[p, j];
            p += 1 << j;
        }
    }
    return sum;
}
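The sum variant, sketched in Python as well (my names; the table now stores sums instead of indices, matching the C# diffs):

```python
from math import floor, log2

def build_sums(A):
    """Sparse table where M[i][j] is the sum of A[i .. i + 2^j - 1]."""
    N, K = len(A), floor(log2(len(A))) + 1
    M = [[0] * K for _ in range(N)]
    for i in range(N):
        M[i][0] = A[i]                  # store sums instead of indices
    for j in range(1, K):
        for i in range(N - (1 << j) + 1):
            M[i][j] = M[i][j - 1] + M[i + (1 << (j - 1))][j - 1]
    return M

def rsq(M, p, q):
    """Sum of A[p..q], peeling off power-of-two blocks left to right."""
    total, k = 0, floor(log2(q - p + 1))
    for j in range(k, -1, -1):
        if (1 << j) <= q - p + 1:
            total += M[p][j]
            p += 1 << j
    return total

A = [5, 2, 4, 7, 6, 3, 1, 2]
print(rsq(build_sums(A), 1, 4))         # 2 + 4 + 7 + 6 = 19
```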
Note that the sum query runs in \(O(\log_2 N)\), so it is not constant time, but it’s still pretty good. The sum problem can also be solved in an even more efficient way (e.g. with a Prefix Sum); however, I wanted to show an example of a different usage of this construction.
Conclusion
One can go a long way with some preprocessing. We’ve got constant-time queries with just a little bit of memory overhead due to the logarithms. There’s another price we pay for \(O(1)\) time, though, and that’s immutability. If we modify our sequence, we’d have to run the precomputation procedure all over again.
To speed things up a bit more, we can precalculate the logarithms. For a sequence of size \(N\), all queries together need at most \(N\) different log values. This can also be done with simple dynamic programming. You can check the complete implementations in the references below.
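A sketch of that log precomputation in Python (my code), using the recurrence that halving a number decreases its floor-log by exactly one:

```python
def precompute_logs(N):
    """log[i] = floor(log2(i)) for 1 <= i <= N, by simple dynamic programming."""
    log = [0] * (N + 1)
    for i in range(2, N + 1):
        log[i] = log[i // 2] + 1
    return log

print(precompute_logs(8))   # [0, 0, 1, 1, 2, 2, 2, 2, 3]
```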
There’s an \(O(n)\) space, \(O(1)\) time solution for the RMQ problem, introduced by Farach-Colton and Bender in their “The LCA Problem Revisited” paper, which builds on top of the one in this article but is quite a bit more complex. If space efficiency is critical, then I’d recommend checking it out. The idea behind it is very clever too.
References and Further Reading
Code reference for RMQ and RSQ
Sparse Tables on CP-Algorithms
Range Minimum Query (Wikipedia)
Farach-Colton, Bender, “The LCA Problem Revisited” - linear space, constant time solution.
The question was a simple one: given a string (e.g.) "examplesgnome" consisting of two substrings (in this case, "examples" and "gnome"), how can we swap the two substrings in-place? In the interview, we covered the third method in more detail, but touched upon the other two. Afterwards, I calculated their computational efficiency, and determined that while they all have the same upper bound, one has a lower average running time.
Method one: reversal
This is one Dr. Kelly suggested, which is really rather neat. It consists of two steps: reversing the entire string, then reversing both substrings. For example:
examplesgnome emongselpmaxe gnomeexamples
If we take \(n\) to equal the total number of characters, \(a\) to equal the number of characters in the smaller substring, and \(b\) to equal the number of characters in the larger substring, this has
\[ \left\lfloor \frac{n}{2} \right\rfloor + \left\lfloor \frac{a}{2} \right\rfloor + \left\lfloor \frac{b}{2} \right\rfloor \]
swaps, so we may as well say it's of linear complexity \(O(n)\).
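The reversal method is a one-liner to sketch in Python (my code; Python strings are immutable, so this copies rather than working literally in place, but the three reversals are the same):

```python
def swap_by_reversal(s, a):
    """Swap s[:a] and s[a:] by reversing the whole string, then each part."""
    b = len(s) - a                     # length of the second substring
    r = s[::-1]                        # step 1: reverse everything
    return r[:b][::-1] + r[b:][::-1]   # step 2: reverse both substrings

print(swap_by_reversal("examplesgnome", 8))   # gnomeexamples
```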
Method two: cycles
This one was also suggested by Dr. Kelly, and involves taking the first character of the larger substring out to a temporary location, then cyclically moving characters \(b\) places along into the generated space. Once the strings have been swapped (\(n\) swaps have taken place), we put the stored character back in the final space. For example:
examplesgnome _xamplesgnome gxamples_nome gxa_plesmnome gxamplesmno_e gxampl_smnoee g_amplxsmnoee gnamplxsm_oee gnam_lxsmpoee gnamelxsmpoe_ gnamelx_mpoes gn_melxampoes gnomelxamp_es gnome_xamples gnomeexamples
Since each character moves to its final position in one move, the algorithm is intuitively of complexity \(O(n)\); but we can go further and say it's of absolute complexity \(\sim (n + 2)\), taking the steps to remove the first character and fill the last space into account.
Method three: reduction
This is the method I came up with during the interview and, with some help, made work. It basically involves swapping the first smaller substring into its final position, then applying the same algorithm to the resulting subsubstrings of the second substring. For example:
examplesgnome gnomelesexamp gnomeampex les gnomeexpam les gnomeexma ples gnomeexamples
This is quite a neat one to implement, and will perform \(O(n)\) swaps in the worst case (when the smaller substring is only one character long, as one character moves to its final position with each swap).
However, what if we consider the case "foobar", where the substrings are "foo" and "bar"? In this case, method three only takes 3 swaps. Obviously, this method is limited in the upper bound to \(O(n)\) swaps, but it seems it can do the operation in far fewer: as few as \(\Omega(a)\).
I haven't got a concrete proof for this, but I believe this is actually of complexity
\[ \sim \left( a \left\lfloor \frac{b}{a} \right\rfloor + (b \bmod{a}) \left\lfloor \frac{a}{b \bmod{a}} \right\rfloor \right) \]
as the lengths of the substrings for each iteration \(i\) depend on the remainder of \(\frac{b}{a}\) for the \((i-1)\)-th iteration. This complexity can be manipulated to prove that it's also of complexity \(O(n)\).
All-in-all, I'm quite pleased with the interview: even if I'm not successful, it's given me an interesting problem to consider. Corrections and further thoughts on the problem are welcomed.
The Erdos-Rado sunflower lemma
The problem
A sunflower (a.k.a. Delta-system) of size [math]r[/math] is a family of sets [math]A_1, A_2, \dots, A_r[/math] such that every element that belongs to more than one of the sets belongs to all of them. A basic and simple result of Erdos and Rado asserts that:
Erdos-Rado Delta-system theorem: There is a function [math]f(k,r)[/math] so that every family [math]\cal F[/math] of [math]k[/math]-sets with more than [math]f(k,r)[/math] members contains a sunflower of size [math]r[/math].
(We denote by [math]f(k,r)[/math] the smallest integer that suffices for the assertion of the theorem to be true.) The simple proof giving [math]f(k,r)\le k! (r-1)^k[/math] can be found here.
The best known general upper bound on [math]f(k,r)[/math] (in the regime where [math]r[/math] is bounded and [math]k[/math] is large) is
[math]\displaystyle f(k,r) \leq D(r,\alpha) k! \left( \frac{(\log\log\log k)^2}{\alpha \log\log k} \right)^k[/math]
for any [math]\alpha \lt 1[/math], and some [math]D(r,\alpha)[/math] depending on [math]r,\alpha[/math], proven by Kostochka in 1996. The objective of this project is to improve this bound, ideally to obtain the Erdos-Rado conjecture
[math]\displaystyle f(k,r) \leq C^k [/math]
for some [math]C=C(r)[/math] depending on [math]r[/math] only. This is known for [math]r=1,2[/math] (indeed we have [math]f(k,r)=1[/math] in those cases) but remains open for larger [math]r[/math].
Variants and notation
Given a family F of sets and a set S, the star of S is the subfamily of those sets in F containing S, and the link of S is obtained from the star of S by deleting the elements of S from every set in the star. (We use the terms link and star because we do eventually want to consider hypergraphs as geometric/topological objects.)
We can restate the delta system problem as follows: f(k,r) is the maximum size of a family of k-sets such that the link of every set A does not contain r pairwise disjoint sets.
Let f(k,r;m,n) denote the largest cardinality of a family of k-sets from {1,2,…,n} such that the link of every set A of size at most m-1 does not contain r pairwise disjoint sets. Thus f(k,r) = f(k,r;k,n) for n large enough.
Conjecture 1: [math]f(k,r;m,n) \leq C_r^k n^{k-m}[/math] for some [math]C_r[/math] depending only on r.
This conjecture implies the Erdos-Rado conjecture (set m=k). The Erdos-Ko-Rado theorem asserts that
[math]f(k,2;1,n) = \binom{n-1}{k-1}[/math] (1)
when [math]n \geq 2k[/math], which is consistent with Conjecture 1. More generally, Erdos, Ko, and Rado showed
[math]f(k,2;m,n) = \binom{n-m}{k-m}[/math]
when [math]n[/math] is sufficiently large depending on k,m. The case of smaller n was treated by several authors culminating in the work of Ahlswede and Khachatrian.
Erdos conjectured that
[math]f(k,r;1,n) = \max( \binom{rk-1}{k}, \binom{n}{k} - \binom{n-r}{k} )[/math]
for [math]n \geq rk[/math], generalising (1), and again consistent with Conjecture 1. This was established for k=2 by Erdos and Gallai, and for k=3 by Frankl (building on work by Luczak-Mieczkowska).
A family of k-sets is balanced (or k-colored) if it is possible to color the elements with k colors so that every set in the family is colorful.
Reduction (folklore): It is enough to prove the Erdos-Rado Delta-system conjecture for the balanced case.
Proof: Divide the elements into k color classes at random and take only the colorful sets. The expected number of surviving colorful sets is [math](k!/k^k)|F|[/math].
Hyperoptimistic conjecture: The maximum size of a balanced collection of k-sets without a sunflower of size r is [math](r-1)^k[/math].
Disproven for [math]k=3,r=3[/math]: set [math]|V_1|=|V_2|=|V_3|=3[/math] and use ijk to denote the 3-set consisting of the i-th element of V_1, the j-th element of V_2, and the k-th element of V_3. Then 000, 001, 010, 011, 100, 101, 112, 122, 212 is a balanced family of 9 3-sets without a 3-sunflower.
Threads
Polymath10: The Erdos Rado Delta System Conjecture, Gil Kalai, Nov 2, 2015. Inactive
Polymath10, Post 2: Homological Approach, Gil Kalai, Nov 10, 2015. Active
Erdos-Ko-Rado theorem (Wikipedia article)
Sunflower (mathematics) (Wikipedia article)
What is the best lower bound for 3-sunflowers? (Mathoverflow)
Bibliography
Edits to improve the bibliography (by adding more links, Mathscinet numbers, bibliographic info, etc.) are welcome!
On set systems not containing delta systems, H. L. Abbott and G. Exoo, Graphs and Combinatorics 8 (1992), 1–9.
On finite [math]\Delta[/math]-systems, H. L. Abbott and D. Hanson, Discrete Math. 8 (1974), 1-12.
Intersection theorems for systems of sets, H. L. Abbott, D. Hanson, and N. Sauer, J. Comb. Th. Ser. A 12 (1972), 381–389.
Hodge theory for combinatorial geometries, Karim Adiprasito, June Huh, and Erick Katz
The Complete Nontrivial-Intersection Theorem for Systems of Finite Sets, R. Ahlswede, L. Khachatrian, Journal of Combinatorial Theory, Series A 76, 121-138 (1996).
Intersection theorems for systems of finite sets, P. Erdős, C. Ko, R. Rado, The Quarterly Journal of Mathematics. Oxford. Second Series 12: 313–320 (1961), doi:10.1093/qmath/12.1.313.
Intersection theorems for systems of sets, P. Erdős, R. Rado, Journal of the London Mathematical Society, Second Series 35 (1960), 85–90.
On the Maximum Number of Edges in a Hypergraph with Given Matching Number, P. Frankl
An intersection theorem for systems of sets, A. V. Kostochka, Random Structures and Algorithms, 9 (1996), 213-221.
Extremal problems on [math]\Delta[/math]-systems, A. V. Kostochka
On Erdos' extremal problem on matchings in hypergraphs, T. Luczak, K. Mieczkowska
Intersection theorems for systems of sets, J. H. Spencer, Canad. Math. Bull. 20 (1977), 249-254.
(a) The idea of potentials is familiar from mechanical and electrical systems. In an electric field the work required to move a charge $q$ from one location with potential $\theta_1$ to one with $\theta_2$ is $q(\theta_2-\theta_1)$. This expression has the form of a constant factor ($q$) times the change in potential. In a gravitational field the potential at height $h$ is $gh$ and so the work needed to lift a mass $m$ (the constant factor) from $h_1 \to h_2$ is $mg(h_2-h_1)= mg\Delta h$. We also know that these systems naturally move to positions of lowest potential energy to reach equilibrium: stones do drop to the ground.
In a chemical system we can choose the mole number $n$ as the constant factor and then the molar free energy is the chemical potential which is usually given the symbol $\mu$.
The chemical potential controls mass equilibrium just as temperature controls thermal equilibrium. If there is a gradient of chemical potential then mass flows (or molecules rearrange). At equilibrium the chemical potential of all parts are equal. Similarly energy flows in a temperature gradient until equilibrium is achieved and the temperature is uniform throughout. In this sense the chemical potential is considered as providing the 'force' to drive chemical systems to equilibrium.
(b) If we wish to transfer $n$ moles of a gas from a state with molar free energy $G_1$ to one of $G_2$ then the work done (other than expansion) is the change in free energy $n(G_2-G_1)$. Suppose now that we isothermally transfer $n$ moles of an ideal gas from a vessel with pressure $p_1$ to one with pressure $p_2$ the work involved is $nRT\ln(p_2/p_1)$ which is also $n(G_2-G_1)$ so that $(G_2-G_1)=RT\ln(p_2/p_1)$.
Usually we relate the pressures to one at a standard state, say 1 atm. Then, letting $G_1 \to G^\mathrm{o}$ and $G_2 \to G$, we get $G= G^\mathrm{o}+RT\ln(p)$, where $p$ stands for $p/(1\text{ atm})$, a dimensionless ratio. As the molar free energy is the chemical potential,
$$\mu=\mu^{\mathrm{o}}+RT\ln(p)$$
(The equation $(G_2-G_1)=RT\ln(p_2/p_1)$ can also be derived by starting with $dG=Vdp-SdT$, at constant temperature ($dT=0$) and for an ideal gas by using the gas law substitute for $V$, and integrating from $p_1\to p_2$.)
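For concreteness, here is that integration spelled out for one mole of ideal gas at constant temperature (standard steps, not written out in the original):

```latex
dG = V\,dp \quad (dT = 0), \qquad V = \frac{RT}{p}
\quad\Longrightarrow\quad
G_2 - G_1 = \int_{p_1}^{p_2} \frac{RT}{p}\,dp = RT\ln\frac{p_2}{p_1}
```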
(c) More formally the chemical potential can be defined via the work done in reversibly compressing a gas composed of $i$ species with mole numbers $n_i$. If the force $F$ is applied to a piston that moves a distance $dx$ then
$$Fdx= -pdV +\sum_i \mu_i dn_i$$
where $\mu_i$ is the chemical potential of species $i$ and is defined by this equation. This means that the chemical potential is the (reversible) rate of change of internal energy with mole number while keeping the other variables ($S,V$) constant; thus, since $dU=TdS-pdV+\sum_i \mu_i dn_i$, where $U$ is the internal energy, $\displaystyle \mu_i= \left(\frac{\partial U}{\partial n_i}\right)_{S,V,n_j}$. (The subscript $n_j$ means that the other mole numbers are held constant.)
It is more common, however, to define the chemical potential in terms of the Gibbs free energy, where $T,p$ are held constant.
For a pure substance your textbook will derive the equation
$$dG=Vdp-SdT$$
but if there is a mixture, the number of moles can vary, and then it is necessary to account for the energy changes due to this by adding a term to the energy; doing this produces
$$dG=Vdp-SdT+\sum_i \mu_i dn_i$$
which is sometimes referred to as the fundamental equation of chemical thermodynamics, and then the chemical potential is defined as
$$\displaystyle \mu_i= \left(\frac{\partial G}{\partial n_i}\right)_{p,T,n_j}$$
[discussion in (a) and (b) based on arguments in Lewis & Randall, 'Thermodynamics' publ McGraw-Hill.]
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly:
Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints.
Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints.
Today we'll conclude our discussion of Chapter 1 with two more bombshells:
Joins are left adjoints, and meets are right adjoints.
Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down.
This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world!
Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders.
In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets.
Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint! But if you examine the proof, you'll see we don't really need \( A \) to have
all joins: it's enough that all the joins in this formula exist:
$$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have
all meets: it's enough that all the meets in this formula exist:
$$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes.
Suppose \(A\) is a poset with all binary joins. Then we get a function
$$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows:
$$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that
$$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal
$$ \Delta : A \to A \times A $$
sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\).
Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact:
$$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression on the left here, and applying \( \Delta \) to \( b \) in the expression on the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \).
Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \). Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \).
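Puzzle 45 can be spot-checked by brute force in a small poset. Here is a sketch (mine, not part of the lecture) using the powerset of \(\{0,1,2\}\) ordered by inclusion, where the join is union:

```python
from itertools import chain, combinations

# Elements of the poset: all subsets of {0, 1, 2}, ordered by inclusion.
ground = [0, 1, 2]
elements = [frozenset(c) for c in
            chain.from_iterable(combinations(ground, r) for r in range(4))]

# The adjunction:  a v a' <= b  if and only if  a <= b and a' <= b.
for a in elements:
    for a2 in elements:
        for b in elements:
            assert ((a | a2) <= b) == (a <= b and a2 <= b)

# Monotonicity of v:  a <= a' and b <= b'  implies  a v b <= a' v b'.
for a in elements:
    for a2 in elements:
        for b in elements:
            for b2 in elements:
                if a <= a2 and b <= b2:
                    assert (a | b) <= (a2 | b2)

print("Puzzle 45 verified on the powerset of {0, 1, 2}")
```

Of course, a finite check is no substitute for the proof the puzzle asks for; it just illustrates what the statements say.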
A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function
$$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying
$$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check.
Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number.
All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on.
Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by
$$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short.
I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason.
Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset.
Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again.
Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \).
Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \).
Puzzle 51. Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}}\).
So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs.
Once you start looking you can find duality everywhere, from ancient Chinese philosophy to modern computers.
But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality!
This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
Let us consider the parameter p of population proportion. For instance, we might want to know the proportion of males within a total population of adults when we conduct a survey. A test of proportion will assess whether or not a sample from a population represents the true proportion from the entire population.
Critical Value Approach Section
The steps to perform a test of proportion using the critical value approach are as follows:
State the null hypothesis \(H_0\) and the alternative hypothesis \(H_A\).
Calculate the test statistic:
\[z=\frac{\hat{p}-p_0}{\sqrt{\frac{p_0(1-p_0)}{n}}}\]
where \(p_0\) is the null hypothesized proportion, i.e., when \(H_0: p=p_0\).
Determine the critical region.
Make a decision. Determine if the test statistic falls in the critical region. If it does, reject the null hypothesis. If it does not, do not reject the null hypothesis.
Example S.6.1
Newborn babies are more likely to be boys than girls. A random sample found 13,173 boys were born among 25,468 newborn children. The sample proportion of boys was 0.5172. Is this sample evidence that the birth of boys is more common than the birth of girls in the entire population?
Here, we want to test
\(H_0: p=0.5\)
\(H_A: p>0.5\)
The test statistic
\[\begin{align} z &=\frac{\hat{p}-p_o}{\sqrt{\frac{p_0(1-p_0)}{n}}}\\
&=\frac{0.5172-0.5}{\sqrt{\frac{0.5(1-0.5)}{25468}}}\\ &= 5.49 \end{align}\]
We will reject the null hypothesis \(H_0: p = 0.5\) if \(\hat{p} > 0.5052\) or equivalently if Z > 1.645
Here's a picture of such a "critical region" (or "rejection region"):
It looks like we should reject the null hypothesis because:
\[\hat{p}= 0.5172 > 0.5052\]
or equivalently since our test statistic Z = 5.49 is greater than 1.645.
Our Conclusion: We say there is sufficient evidence to conclude boys are more common than girls in the entire population.
\(p\)-value Approach Section
Next, let's state the procedure in terms of performing a proportion test using the p-value approach. The basic procedure is:
State the null hypothesis \(H_0\) and the alternative hypothesis \(H_A\).
Set the level of significance \(\alpha\).
Calculate the test statistic:
\[z=\frac{\hat{p}-p_o}{\sqrt{\frac{p_0(1-p_0)}{n}}}\]
Calculate the p-value.
Make a decision. Check whether to reject the null hypothesis by comparing the p-value to \(\alpha\). If the p-value < \(\alpha\), then reject \(H_0\); otherwise do not reject \(H_0\).
Example S.6.2
Let's investigate by returning to our previous example. Again, we want to test
\(H_0: p=0.5\)
\(H_A: p>0.5\)
The test statistic
\[\begin{align} z &=\frac{\hat{p}-p_o}{\sqrt{\frac{p_0(1-p_0)}{n}}}\\
&=\frac{0.5172-0.5}{\sqrt{\frac{0.5(1-0.5)}{25468}}}\\ &= 5.49 \end{align}\]
The p-value is represented in the graph below:
\[P = P(Z \ge 5.49) = 0.0000 \cdots \doteq 0\]
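The arithmetic above is easy to reproduce. A small Python sketch (mine, not part of the lesson; the normal tail probability comes from `math.erfc` rather than a table):

```python
from math import sqrt, erfc

p_hat, p0, n = 0.5172, 0.5, 25468

# Test statistic for a one-proportion z-test.
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)

# One-sided p-value: P(Z >= z) for a standard normal.
p_value = 0.5 * erfc(z / sqrt(2))

print(round(z, 2))      # 5.49
print(p_value < 1e-6)   # True: effectively zero
```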
Our Conclusion: Because the p-value is smaller than the significance level \(\alpha = 0.05\), we can reject the null hypothesis. Again, we would say that there is sufficient evidence to conclude boys are more common than girls in the entire population at the \(\alpha = 0.05\) level.
As should always be the case, the two approaches, the critical value approach and the p-value approach, lead to the same conclusion.
Defining parameters
Level: \( N \) = \( 8 = 2^{3} \)
Weight: \( k \) = \( 21 \)
Nonzero newspaces: \( 1 \)
Newforms: \( 2 \)
Sturm bound: \(84\)
Trace bound: \(0\)
Dimensions
The following table gives the dimensions of various subspaces of \(M_{21}(\Gamma_1(8))\).
                   Total  New  Old
Modular forms         43   21   22
Cusp forms            37   19   18
Eisenstein series      6    2    4

Decomposition of \(S_{21}^{\mathrm{new}}(\Gamma_1(8))\)
We only show spaces with odd parity, since no modular forms exist when this condition is not satisfied. Within each space \( S_k^{\mathrm{new}}(N, \chi) \) we list the newforms together with their dimension.
Label     \(\chi\)                  Newforms    Dimension   \(\chi\) degree
8.21.c    \(\chi_{8}(7, \cdot)\)    None        0           1
8.21.d    \(\chi_{8}(3, \cdot)\)    8.21.d.a    1           1
                                    8.21.d.b    18
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the
way in which we match up these two objects, to see that they look the same.
For example, any two of these squares look the same after you rotate and/or reflect them:
An isomorphism between two of these squares is a
process of rotating and/or reflecting the first so it looks just like the second.
As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse:
Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that

$$ g \circ f = 1_x $$

and

$$ f \circ g = 1_y . $$
I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\).
Puzzle 140. Prove that any morphism has at most one inverse.

Puzzle 141. Give an example of a morphism in some category that has more than one left inverse.

Puzzle 142. Give an example of a morphism in some category that has more than one right inverse.
Now we're ready for isomorphisms!
Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse.

Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\).
Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like!
What's an isomorphism in the category \(\mathbf{3}\)? Remember, this is a free category on a graph:
The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2:
$$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1:
$$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms:
$$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism!
In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism.
We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\).

Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic.

Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\).
Puzzle 144 says that in a poset, the only isomorphisms are identities.
Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions.
Puzzle 145. Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto.

Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\).
So, in \(\mathbf{Set}\) the isomorphisms are the bijections! And there are lots of them.
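For finite sets, Puzzles 145 and 146 can be made concrete: a bijection, represented as a dict, has an inverse obtained by swapping keys and values, and both composites are identities. A small illustrative sketch (the sets and the function here are made up for the example):

```python
# A bijection f : {1,2,3} -> {'a','b','c'}, represented as a dict
f = {1: 'a', 2: 'b', 3: 'c'}

# Its inverse g : {'a','b','c'} -> {1,2,3}, obtained by swapping pairs
g = {v: k for k, v in f.items()}

# Check g . f = identity on the domain and f . g = identity on the codomain
assert all(g[f[x]] == x for x in f)
assert all(f[g[y]] == y for y in g)
print(g)  # {'a': 1, 'b': 2, 'c': 3}
```

The dict comprehension only yields a well-defined inverse because f is one-to-one; a non-injective f would silently lose pairs, and the composite checks would fail.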
One more example:
Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism.
This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the
isomorphisms deserve to be called 'natural isomorphisms'.
But what are they like?
Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism
$$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes:
Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism
$$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that
$$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means
$$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\).
In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\).
But the converse is true, too! It takes a
little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147. Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism.
Doing this will help you understand natural isomorphisms. But you also need examples!
Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal!
We should talk about this.
Prove that $$\int_0^\infty \frac{\sqrt{x}}{x^2+1}\log\left(\frac{x+1}{2\sqrt{x}}\right)\;dx=\frac{\pi\sqrt{2}}{2}\log\left(1+\frac{\sqrt{2}}{2}\right).$$ I managed to prove this result with some rather roundabout complex analysis (writing the log term as an infinite sum involving nested logs), but I am hoping for a more direct solution via complex or real methods. The log term seems to require a rather complicated branch cut, so I am unsure as to how to solve the problem with a different technique.
We can transform the integral into a form that has only one simple branch point that needs to be considered. First, let $x=u^2$, and the integral is
$$\int_0^\infty dx \frac{\sqrt{x}}{x^2+1}\log\left(\frac{x+1}{2\sqrt{x}}\right) = 2 \int_0^{\infty} du \frac{u^2}{1+u^4} \log{\left ( \frac{1+u^2}{2 u} \right )} $$
which may be rewritten as
$$-2 \log{2} \int_0^{\infty} du \frac{u^2}{1+u^4} + 2 \int_0^{\infty} \frac{du}{u^2+\frac1{u^2}} \log{\left ( u+\frac1{u} \right )} \qquad (*)$$
Let's worry about the latter integral. By subbing $v=u+1/u$, we get (exercise for the reader):
$$2 \int_0^{\infty} \frac{du}{u^2+\frac1{u^2}} \log{\left ( u+\frac1{u} \right )} = 2 \int_2^{\infty} dv \frac{v}{\sqrt{v^2-4}} \frac{\log{v}}{v^2-2} $$
Finally, sub $v^2=y^2+4$, so we get
$$2 \int_0^{\infty} \frac{du}{u^2+\frac1{u^2}} \log{\left ( u+\frac1{u} \right )}= \frac12 \int_{-\infty}^{\infty} dy \frac{\log{(y^2+4)}}{y^2+2} $$
Now we have an integral ripe for the residue theorem. Consider
$$\oint_C dz \frac{\log{(z^2+4)}}{z^2+2} $$
where $C$ is a semicircle of radius $R$ in the upper half plane, with a detour about the branch point at $z=i 2$ and up and down the imaginary axis. (For a picture, see this.) By the residue theorem, the contour integral is equal to
$$\underbrace{\int_{-\infty}^{\infty} dx \frac{\log{(x^2+4)}}{x^2+2}}_{\text{Integral over the real axis}} \underbrace{-i 2 \pi (i) \int_2^{\infty} \frac{dy}{2-y^2}}_{\text{integral over imaginary axis about branch point } z=i 2} = i 2 \pi \underbrace{\frac{\log{2}}{i 2 \sqrt{2}}}_{\text{residue at pole }z=i \sqrt{2}} $$
Thus, the second integral in (*) is equal to
$$\pi \int_2^{\infty} \frac{dy}{y^2-2} + \frac{\pi}{2 \sqrt{2}} \log{2} $$
and
$$\int_2^{\infty} \frac{dy}{y^2-2} = \frac{1}{2 \sqrt{2}} \log{(3+2 \sqrt{2})} $$
For the first integral in (*), we may use the residue theorem again, this time over a simple quarter-circle $Q$ in the upper right half plane. Thus, the integral we seek is, by the residue theorem,
$$\oint_Q dz \frac{z^2}{1+z^4} = (1+i) \int_0^{\infty} du \frac{u^2}{1+u^4} = i 2 \pi \frac{e^{i 2 \pi/4}}{4 e^{i 3 \pi/4}} $$
Thus,
$$\int_0^{\infty} du \frac{u^2}{1+u^4} = \frac{\pi}{2 \sqrt{2}} $$
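This sub-result is easy to check numerically. Substituting \(u = t/(1-t)\) turns the integrand into the smooth rational function \(t^2/((1-t)^4+t^4)\) on \([0,1]\), which composite Simpson's rule handles well (a quick sanity check, not part of the proof):

```python
from math import pi, sqrt

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2*k * h) for k in range(1, n // 2))
    return s * h / 3

# After u = t/(1-t):  u^2/(1+u^4) du  =  t^2 / ((1-t)^4 + t^4) dt
val = simpson(lambda t: t**2 / ((1 - t)**4 + t**4), 0.0, 1.0)
print(abs(val - pi / (2 * sqrt(2))) < 1e-8)  # True
```

The change of variables is doing the real work here: it compactifies the infinite range and removes the slow \(u^{-2}\) tail, so a fixed-step rule converges rapidly.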
Putting this all together, we finally get that the original integral is equal to
$$\int_0^\infty dx \frac{\sqrt{x}}{x^2+1}\log\left(\frac{x+1}{2\sqrt{x}}\right) =-\frac{\pi}{\sqrt{2}} \log{2} + \frac{\pi}{2 \sqrt{2}} \log{(3+2 \sqrt{2})} + \frac{\pi}{2 \sqrt{2}} \log{2} $$
or
$$\int_0^\infty dx \frac{\sqrt{x}}{x^2+1}\log\left(\frac{x+1}{2\sqrt{x}}\right) = \frac{\pi}{\sqrt{2}} \log{\left ( 1+\frac1{\sqrt{2}} \right )} $$
as was to be shown.
We begin by writing the integral of interest $I$ as
$$\begin{align} I&=\int_0^{\infty} \frac{x^{1/2}}{x^2+1}\log\left(\frac{x+1}{2x^{1/2}}\right)\,dx\\\\ &=I_1+I_2+I_3 \end{align}$$
where
$$\begin{align} I_1&=-\log (2)\,\int_0^{\infty} \frac{x^{1/2}}{x^2+1}\,dx\\\\ I_2&=-\frac12\,\int_0^{\infty} \frac{x^{1/2}\log x}{x^2+1}\,dx\\\\ I_3&=\int_0^{\infty} \frac{x^{1/2}\log (x+1)}{x^2+1}\,dx \end{align}$$
Evaluating $I_1$ and $I_2$ are rather straightforward, so we defer evaluating until later in the development. For $I_3$ we use the method of Integrating Under The Integral Sign. To that end, let $F(a)$ be given by
$$F(a)=\int_0^{\infty}\frac{x^{1/2}\log (x+a)}{x^2+1}\,dx$$
Note that $I_3=F(1)$ and $-2I_2=F(0)$. Now, differentiating reveals
$$F'(a)=\int_0^{\infty}\frac{x^{1/2}}{(x^2+1)(x+a)}\,dx$$
The indefinite integral $\int\frac{x^{1/2}}{(x^2+1)(x+a)}\,dx$ can be found in terms of elementary functions and we leave that as an exercise for the reader. Alternatively, we can evaluate $F'(a)$ through analysis in the complex plane and integrating over the classic keyhole contour. Using the latter approach we have
$$\begin{align} \oint_C \frac{z^{1/2}}{(z^2+1)(z+a)}\,dz&=\int_0^\infty\frac{x^{1/2}}{(x^2+1)(x+a)}\,dx-\int_\infty^0 \frac{x^{1/2}}{(x^2+1)(x+a)}\,dx\\\\ &=2\pi i \left(\frac{e^{i\pi/4}}{2i(a+i)}+\frac{e^{i3\pi/4}}{-2i(a-i)}+\frac{ia^{1/2}}{-(i+a)(i-a)}\right)\\\\ &=2\pi\left(\frac{\sqrt{2}}{2}\frac{a+1}{a^2+1}-\frac{\sqrt{a}}{a^2+1}\right) \end{align}$$
Therefore, $F'(a)$ is given by
$$F'(a)=\pi\left(\frac{\sqrt{2}}{2}\frac{a+1}{a^2+1}-\frac{\sqrt{a}}{a^2+1}\right)$$
Integrating $F'(a)$ between $a=0$ and $a=1$, we obtain
$$\begin{align} I_3&=-2I_2+\int_0^1 \pi\left(\frac{\sqrt{2}}{2}\frac{x+1}{x^2+1}-\frac{\sqrt{x}}{x^2+1}\right)\,dx\\\\ &=\bbox[5px,border:2px solid #C0A000]{-2I_2+\pi\frac{\sqrt{2}}{2}\left(\frac12\log(2)+\frac{\pi}{4}\right)-\pi\frac{\sqrt{2}}{4}\left(\pi-\log(2) -2\log\left(1+\frac{\sqrt{2}}{2}\right)\right)} \tag 1 \end{align}$$
We now evaluate $I_1$ and $I_2$ in a uniform manner. To do this, let $G(a)$ be the integral
$$G(a)=\int_0^{\infty}\frac{x^{a}}{x^2+1}\,dx$$
Note that $I_1=-\log(2)G(1/2)$ and $I_2=-\frac12 G'(1/2)$. We evaluate $G(a)$ by again moving to the complex plane and analyzing the closed-contour integral of $\frac{z^{a}}{z^2+1}$ around the keyhole contour. We obtain
$$\begin{align} \oint_C \frac{z^{a}}{z^2+1}\,dz&=(1-e^{i2\pi a})\int_0^{\infty}\frac{x^a}{x^2+1}\,dx\\\\ &=(1-e^{i2\pi a})G(a)\\\\ &=2\pi i\left(\frac{e^{i\pi a/2}}{2i}+\frac{e^{i3\pi a/2}}{-2i}\right)\\\\ G(a)&=\frac{\pi}{2\cos(\pi a/2)} \end{align}$$
Thus,
$$\bbox[5px,border:2px solid #C0A000]{I_1=-\log (2) \frac{\pi}{\sqrt{2}}} \tag 2$$
and
$$\bbox[5px,border:2px solid #C0A000]{I_2=-\frac12 \frac{\pi^2\sqrt{2}}{4}} \tag 3$$
Putting it all together, we have
$$\bbox[5px,border:2px solid #C0A000]{I=I_1+I_2+I_3=\frac{\pi}{\sqrt{2}}\log\left(1+\frac{\sqrt{2}}{2}\right)}$$
as expected!!
Suppose $f : [a, b] \rightarrow R$ is continuous on $[a, b]$ and convex on the open interval $(a, b).$ Show that $f$ is convex on the closed interval $[a, b].$
It is enough to check that $f\bigl(tx+(1-t)y\bigr)\le tf(x)+(1-t)f(y)$ for any $t\in[0,1]$ and $x,y\in[a,b]$. If $x,y\in(a,b)$, then we have it by convexity. If, for instance, $x=a,y\in(a,b)$, then take $x_n=a+\frac{1}{n}$ and pass to a limit with $n\to\infty$. The remaining cases are handled in a similar way.
The statement remains true if $f(a^+)\le f(a)$ and $f(b^-)\le f(b)$. These limits do exist, because $f$ is either monotone, or unimodal in the sense that $f$ decreases (possibly weakly) on $[a,c]$ and increases (possibly weakly) on $[c,b]$ for some $c\in(a,b)$. Monotonic functions admit one-sided limits.
I have this problem where a transmitting antenna radiates uniformly in all directions at a radius of r. The station broadcasts at 10 kilowatts. How much power is obtained by a receiver antenna 20 km away? The antenna of the receiver is 50 cm^2. I am unsure of how to set up an equation for this problem. I know the distance and power, but how would I put these into an equation to find the power obtained by the receiver?
If the antenna radiates in all directions (isotropic radiator), you can consider it as the center of a sphere, where the radius is the distance to the RX. If there isn't attenuation on the path, the power remains the same, then divide by the sphere area and you got power/area relation.
Knowing antenna's area, you can calculate the power received.
Here is a practical example. I have a cluster of FM broadcast antennas 5.25 miles away from my home, or 8400 meters, seven or eight masts. Their total transmission power is about 640 kW, all in the FM range 96 – 100 MHz.
In a first approximation, let’s assume a uniform emission into a sphere with radius 8400 meters. The surface area of a sphere is S = 4*pi*R^2 ≈ 8.9e+8 (= 886,233,600) m2, and the RF emission will be spread out evenly. Again, assuming no losses or absorbers on the way, the power density at my location should be about 640000/886233600 = 7.22e-4 W/m2, or about 700 uW/m2. This is assuming the spherical space.
For commercial broadcast antennas they use an array of vertical dipoles, so they won’t emit up or down, but mostly into horizontal plane. The antenna gain will be at least 2 or more, I assume 3x. So the field is likely about 0.7mW x 3 = 2mW/m2 at my location.
If a receiver loop is, say, ~ 3cm x 3cm = ~10 cm2, it covers 0.001m2. So the 10 cm2 loop will get 2uW.
Now a practical question, is it much, or not really? For example, how much voltage can be registered by an oscilloscope from this loop? This can be a tricky part. One way is to assume that the wire loop gets the power from free space which is having an impedance of 300 Ohms. Then the open loop will produce about 25 mV RMS:
P= V^2/R, so V^2 = 2e-6*300 = 6e-4; so the sqrt() gives V = 25 mV.
Surprisingly, this is close to what I can see on all my scopes, for example:
This is very annoying when dealing with low-voltage signals, the interference is everywhere. Note to retired engineers and serious hobbyists: when shopping for a home to live, pay attention to broadcast towers around.
I have this problem where a transmitting antenna radiated uniformly in all directions at a radius of r. The station broadcasts at 10 kilowatts. How much power is obtained by a reciever antenna 20km away? The antenna of the reciever is 50cm^2.
If the transmitter emits uniformly in all directions then all the 10 kW transmitted is passing notionally through a surface area of \$4\pi \times20,000^2\$ square metres. That's the surface area of a sphere of radius 20 km.
This means that your receiver only gets a tiny fraction because it has an area of 0.25 square metres.
So, 10,000 watts x 0.25/(\$4\pi \times20,000^2\$) ≈ 497 nW.
The power per square metre is clearly 4 times bigger at ~2 \$\mu W/m^2\$ and, given that the impedance of free space is 377 ohms, you can calculate the local E field at the receiver as being 27.4 mV/metre using P = V^2/R. Not a bad sized signal really. The H field is 377 times lower at 72.6 uA/m.
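Both answers boil down to the same free-space formula, \(P_r = P_t \, A_r / (4\pi r^2)\). A small sketch reproducing the numbers above (treating the receiver area as 0.25 m², as this answer does):

```python
from math import pi, sqrt

P_t = 10_000.0  # transmitted power, W
r = 20_000.0    # distance, m
A_r = 0.25      # receiver area, m^2 (as interpreted in the answer)

density = P_t / (4 * pi * r**2)  # W/m^2 at the receiver
P_r = density * A_r              # received power, W
E = sqrt(density * 377)          # E field via P = V^2/R with Z0 = 377 ohms

print(round(P_r * 1e9))    # received power in nW, ~497
print(round(E * 1000, 1))  # E field in mV/m, ~27.4
```

The same function of distance explains the earlier FM-tower example: power density falls as \(1/r^2\), and the received power is just that density times the capture area.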
The question is as follows, apologies in advance, I don't know how to do the LaTex thing in posts.
Let $X_1,\ldots,X_n, Y_1,\ldots,Y_m$ be independent random variables such that $X_i \sim N(\mu_1,\sigma^2)$ and $Y_j \sim N(\mu_2,\sigma^2)$. Both $\mu_1$ and $\mu_2$ are known but $\sigma^2$ is not.
Find the maximum likelihood estimator for $\sigma^2$ based on all $n+m$ observations. Show all working.
I am trying to work through with this but I am getting some horrible results when I get to the log-likelihood function. Any help in deriving the log-likelihood function would be appreciated.
Cheers.
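Not a full derivation, but the log-likelihood is more tractable than it first looks: with both means known it depends on \(\sigma^2\) only through \(-\frac{n+m}{2}\log\sigma^2-\frac{1}{2\sigma^2}\big(\sum_i(x_i-\mu_1)^2+\sum_j(y_j-\mu_2)^2\big)\), which is maximised by the pooled mean squared deviation from the known means. A quick numerical sanity check of that candidate (the data below are made up for illustration):

```python
from math import log

# Made-up sample data with known means mu1 = 0, mu2 = 1
xs = [0.3, -0.8, 1.1, -0.2, 0.5]  # X_i ~ N(mu1, sigma^2)
ys = [1.9, 0.4, 1.2, 0.6]         # Y_j ~ N(mu2, sigma^2)
mu1, mu2 = 0.0, 1.0

def loglik(s2):
    """Log-likelihood in sigma^2, dropping the constant -(n+m)/2 * log(2*pi)."""
    ss = sum((x - mu1)**2 for x in xs) + sum((y - mu2)**2 for y in ys)
    return -0.5 * (len(xs) + len(ys)) * log(s2) - ss / (2 * s2)

# Candidate MLE: pooled mean squared deviation from the known means
ss = sum((x - mu1)**2 for x in xs) + sum((y - mu2)**2 for y in ys)
s2_hat = ss / (len(xs) + len(ys))

# The candidate should beat nearby values of sigma^2
assert loglik(s2_hat) > loglik(0.9 * s2_hat)
assert loglik(s2_hat) > loglik(1.1 * s2_hat)
print(round(s2_hat, 4))  # 0.4 for this data
```

Setting the derivative of the expression above with respect to \(\sigma^2\) to zero gives exactly this pooled estimator, which is what the check confirms.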
I've been reading this article where the IR radiance of an IR-window ($MgF_2$, 1.75mm thick, passband 3-5$\mu m$) is calculated. At some point during the calculation they mention in a footnote:
The spectral emittance has been approximated by means of a step function: $\epsilon_\lambda$ = 0.08 and 0.06 from 3$\mu$ to 4$\mu$ and 4$\mu$ to 5$\mu$, respectively (cf. Ref. 9)
Ref. 9 is
S. S. Ballard, K. A. McCarthy, and W. L. Wolfe, IRIA Stateof- the-Art Report: Optical Materials for Infrared Instrumentation: Supplement (The University of Michigan, Ann Arbor, 1961), p. 17.
although I have no further access to it so I can't check.
Now, to my own knowledge, the emissivity of a transparent material is given by
$$\epsilon = 1-e^{-\alpha d}$$
where d is the material's thickness (1.75 mm, or 0.175 cm) and $\alpha$ the absorption coefficient. The absorption coefficient for $MgF_2$ is somewhere around $40\times10^{-3}$ cm$^{-1}$ at 2.7 $\mu m$ (Source) or $5.5$-$6\times 10^{-3}$ cm$^{-1}$ at 2.8 and 5.1 $\mu m$ (Source).
Question 1: Is there an explanation for the order of magnitude difference between both sources, other than "one must be wrong"?

Question 2: Calculating the emissivity using those two sources yields numbers of 0.007 (first source) or 0.001 (second source). These are factors of about 10 and 70 different from what's used in the paper. The resulting graphs & data from the paper don't appear to be wrong; does that mean I used a wrong equation / did something else wrong?

EDIT: In addition, this thesis shows on page 52 the emissivity of Germanium to be ~35% (0.35) at 10 µm for 1.14 mm thickness. Using an absorption coefficient for Germanium I found here (0.035 cm$^{-1}$) and the earlier equation yields an emissivity of 0.004 - which is about TWO orders of magnitude different. I'd say since I found two sources that yield wrong answers now, it's more sensible that I'm doing something wrong. Please enlighten me :)
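For what it's worth, the arithmetic in Question 2 checks out: plugging the two published absorption coefficients into \(\epsilon = 1-e^{-\alpha d}\) does give roughly 0.007 and 0.001 (a quick check of the stated equation, not a resolution of the discrepancy):

```python
from math import exp

d = 0.175  # window thickness in cm (1.75 mm)

# Absorption coefficients in 1/cm, one from each of the two cited sources
for alpha in (40e-3, 5.5e-3):
    eps = 1 - exp(-alpha * d)
    print(round(eps, 3))
# prints 0.007 then 0.001 -- factors of ~10 and ~70 below the paper's 0.08/0.06
```

So whatever explains the gap, it is not an arithmetic slip in applying the formula.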
In electron liquids, the compressibility $K$ is defined as $\frac{1}{K}=-V\left(\frac{\partial P}{\partial V}\right)_N=n^2\frac{\partial \mu}{\partial n}$, where $P$, $V$, $n$ and $\mu$ are pressure, volume, density and chemical potential. However, in thermodynamics we learned, I can only find the definition of isothermal and isentropic compressibility: $$\kappa_T=-\frac{1}{V}\left(\frac{\partial V}{\partial P} \right)_T$$ $$\kappa_S=-\frac{1}{V}\left(\frac{\partial V}{\partial P} \right)_S$$ which have the relation $\kappa_T/\kappa_S=\gamma$, where $\gamma$ is the Heat capacity ratio $c_P/c_V$.
My question is
What is the relation between compressibility defined in electron liquid and that defined in thermodynamics?
If I want to get the compressibility of the electron liquid from thermodynamics' point of view, what should I have? The free energy? The internal energy? ...
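One way to relate the two definitions (a sketch, not a full answer, using the Gibbs–Duhem relation at fixed temperature): with $N$ fixed and $n=N/V$, one has $V\,\partial/\partial V=-n\,\partial/\partial n$, and Gibbs–Duhem at fixed $T$ gives $dP=n\,d\mu$, so

$$\frac{1}{\kappa_T}=-V\left(\frac{\partial P}{\partial V}\right)_{T,N}=n\left(\frac{\partial P}{\partial n}\right)_T=n^2\left(\frac{\partial \mu}{\partial n}\right)_T .$$

On this reading the electron-liquid $K$ is the isothermal compressibility $\kappa_T$, and at $T=0$, where electron-liquid theory is usually applied, $c_P/c_V\to 1$, so the isothermal and isentropic compressibilities coincide. The derivative $\partial\mu/\partial n$ would then come from the free energy per volume $f(n,T)$ via $\mu=\partial f/\partial n$.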
Inflection points, concavity upward and downward
A
point of inflection of the graph of a function $f$ is a point
where the second derivative $f''$ is $0$. We have to wait a
minute to clarify the geometric meaning of this.
A piece of the graph of $f$ is
concave upward if the curve
‘bends’ upward. For example, the popular parabola $y=x^2$ is concave
upward in its entirety.
A piece of the graph of $f$ is
concave downward if the curve
‘bends’ downward. For example, a ‘flipped’ version $y=-x^2$ of the
popular parabola is concave downward in its entirety.
The relation of
points of inflection to intervals where the
curve is concave up or down is exactly the same as the relation of
critical points to intervals where the function is
increasing or decreasing. That is, the points of inflection mark the
boundaries of the two different sort of behavior. Further, only one
sample value of $f''$ need be taken between each pair of consecutive
inflection points in order to see whether the curve bends up or down
along that interval.
Expressing this as a systematic procedure, to find the intervals along which $f$ is concave upward and concave downward:

Compute the second derivative $f''$ of $f$, and solve the equation $f''(x)=0$ for $x$ to find all the inflection points, which we list in order as $x_1 < x_2 <\ldots < x_n$. (Any points of discontinuity, etc., should be added to the list!)

We need some auxiliary points: to the left of the leftmost inflection point $x_1$ pick any convenient point $t_o$, between each pair of consecutive inflection points $x_i,x_{i+1}$ choose any convenient point $t_i$, and to the right of the rightmost inflection point $x_n$ choose a convenient point $t_n$.

Evaluate the second derivative $f''$ at all the auxiliary points $t_i$.

Conclusion: if $f''(t_{i+1})>0$, then $f$ is concave upward on $(x_i,x_{i+1})$, while if $f''(t_{i+1}) < 0$, then $f$ is concave downward on that interval.

Conclusion: on the ‘outside’ interval $(-\infty,x_1)$, the function $f$ is concave upward if $f''(t_o)>0$ and is concave downward if $f''(t_o) < 0$. Similarly, on $(x_n,\infty)$, the function $f$ is concave upward if $f''(t_n)>0$ and is concave downward if $f''(t_n) < 0$.

Example 1
Find the inflection points and intervals of concavity up and down of $$f(x)=3x^2-9x+6$$

Solution: First, the second derivative is just $f''(x)=6$. Since this is never zero, there are no points of inflection. And the value of $f''$ is always $6$, so is always $>0$, so the curve is entirely concave upward.

Example 2
Find the inflection points and intervals of concavity up and down of $$f(x)=2x^3-12x^2+4x-27.$$
Solution:
First, the second derivative is
$f''(x)=12x-24$. Thus, solving $12x-24=0$, there is just the one
inflection point, $2$. Choose auxiliary points $t_o=0$ to the left
of the inflection point and $t_1=3$ to the right of the inflection
point. Then $f''(0)=-24<0$, so on $(-\infty,2)$ the curve is concave
downward. And $f''(3)=12>0$, so on $(2,\infty)$ the curve is concave
upward.

Example 3
Find the inflection points and intervals of concavity up and down of $$f(x)=x^4-24x^2+11.$$
Solution:
The second derivative is
$f''(x)=12x^2-48$. Solving the equation $12x^2-48=0$, we find
inflection points $\pm 2$. Choosing auxiliary points $-3,0,3$ placed
between and to the left and right of the inflection points, we
evaluate the second derivative: First, $f''(-3)=12\cdot 9-48>0$, so the curve
is concave upward on $(-\infty,-2)$. Second, $f''(0)=-48
<0$, so
the curve is concave downward on $(-2,2)$. Third,
$f''(3)=12\cdot 9-48>0$, so the curve
is concave upward on $(2,\infty)$. Exercises Find the inflection points and intervals of concavity up and down of $f(x)=3x^2-9x+6$. Find the inflection points and intervals of concavity up and down of $f(x)=2x^3-12x^2+4x-27$. Find the inflection points and intervals of concavity up and down of $f(x)=x^4-2x^2+11$. |
Let $G$ be a directed graph with a countable number of vertices, and suppose $G$ is strongly connected (given any two vertices $v$ and $w$, there exists a path from $v$ to $w$). Fix a base vertex $v_0 \in G$, and let $L_n$ denote the number of loops of length $n$ based at $v_0$; that is, the number of sequences of vertices $v_0, v_1, \ldots, v_n$ such that $v_n = v_0$ and there is an edge from $v_i$ to $v_{i+1}$ for every $0 \le i < n$. We allow the loops to self-intersect, repeat segments, etc.
Let h be the exponential growth rate of the number of such loops: $h=\lim_{n\to\infty} \frac 1n \log L_n$. The value of h may be either finite or infinite, and I am interested in finding conditions on the graph that help determine which of these is the case.
Question: Is there any characterisation of the set of graphs for which $h < \infty$? A necessary and sufficient condition would be ideal, but anything that is known would be appreciated.

Partial answer so far: If $G$ is uniformly locally finite -- that is, if there exists $C < \infty$ such that every vertex of $G$ has incoming degree $\le C$ or every vertex has outgoing degree $\le C$ -- then $L_n \le C^n$, and so $h \le \log C < \infty$. However, it is not difficult to construct locally finite graphs with unbounded degree, or even graphs that are not locally finite, for which $h < \infty$, so this condition is not necessary. If $G$ is undirected, or equivalently, if $v \to w$ implies $w \to v$, then one can show that this condition is both necessary and sufficient. However, the directed case is more subtle.

Motivation: One can define a topological Markov chain as the space of all infinite paths through the graph $G$ together with a shift that maps $v_0 v_1 v_2 \ldots$ to $v_1 v_2 v_3 \ldots$. The value $h$ defined above is the Gurevich entropy of this dynamical system, and it is of interest to know when the Gurevich entropy is finite.
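For a finite graph, $L_n$ is just the $(v_0,v_0)$ entry of the $n$-th power of the adjacency matrix, and the growth rate can be estimated from successive ratios. A small sketch (the 2-vertex graph with all four edges is made up for illustration; there $L_n = 2^{n-1}$, so $h = \log 2$):

```python
from math import log

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def loops(A, n, v0=0):
    """Number of length-n loops based at vertex v0: (A^n)[v0][v0]."""
    P = A
    for _ in range(n - 1):
        P = matmul(P, A)
    return P[v0][v0]

# Complete digraph on 2 vertices, self-loops included: L_n = 2^(n-1)
A = [[1, 1], [1, 1]]
h_est = log(loops(A, 21)) - log(loops(A, 20))  # log(L_{n+1}/L_n) -> h
print(abs(h_est - log(2)) < 1e-9)  # True
```

For finite graphs $h$ is just the log of the spectral radius of $A$; the subtlety in the question is precisely what survives of this picture when $G$ is infinite.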
Algebra is a branch of mathematics that deals with symbols and the rules for manipulating those symbols. Algebra involves algebraic expressions and manipulating equations. Studying algebra helps you to think logically and critically to solve many problems, both in your studies and in real-life situations. It also opens up other subjects, since most subjects need a basic knowledge of algebra. Algebra for class 6 will help you get basic knowledge like:
How to add, subtract, multiply and divide integers, decimals and fractional values.

How to calculate powers and roots.

How to simplify expressions with exponents.

How to solve single-variable and multivariable equations.

How to solve inequalities of variables.

How to graph lines using the slope-intercept formula and point-slope form.

How to solve an equation to find its roots using the quadratic formula.
Algebra is the study of the use of letters, which is useful for solving problems. Using pictorial and graphical representations in class 6 algebra makes the chapter more interesting and presents the concepts in a comprehensive manner. In this article, we are going to discuss the basic concepts involved in algebra for class 6, along with formulas and examples.
Algebra Formulas for Class 6
The list of important algebra formulas for class 6 are given. Before that, you will get to know about the basic concepts covered in algebra for class 6.
Variable: A letter or symbol that represents any member of a collection of two or more numbers is called a variable.

Constant: A letter or a symbol that represents a specific number is called a constant; in other words, a symbol having a fixed numerical value is called a constant.

The letters which are used to represent numbers are called literals or literal numbers.

Multiplication property: \(x \times y = xy\). For example, \(5 \times x = 5x\), and \(a \times a \times a \times \ldots\) (11 times) \(= a^{11}\). In \(x^{9}\), 9 is called the index or exponent and \(x\) is called the base.

The operations used in algebra are addition, subtraction, multiplication and division:

Addition: \(x + y\)

Subtraction: \(x - y\)

Multiplication: represented in either of the forms \(xy\), \(x.y\), \(x(y)\) or \((x)(y)\)

Division: \(x/y\), \(x \div y\) or \(\frac{x}{y}\)

Order of operations: The order of operations in algebra is as follows:

Perform all the operations inside the brackets.

Perform the operations on roots and exponents.

Perform all the multiplication and division operations, moving from left to right.

Perform all the addition and subtraction operations, moving from left to right.

Basic Algebra Formula
The simple quadratic equation is given by

\(ax^{2}+bx+c = 0\)

where \(a\) is the coefficient of \(x^{2}\), \(b\) is the coefficient of \(x\), and \(c\) is a constant term.

To find the variable \(x\), the quadratic formula is \(x=\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}\)
Some of the topics that are covered in class 6 algebra are as follows:

Introduction to Algebra

Matchstick Problems

The idea of a variable

Use of variables in common rules

Rules from Geometry

Rules from Arithmetic

Expressions with variables

Practical applications of expressions

What is an equation?

A solution of an equation
The basic topics which are covered in algebra for class 6 like writing expressions using variables, evaluating the expressions using single variables, two variables will build a strong foundation for the further concepts in higher studies.
Algebra Class 6 Examples

Question 1:

Find the number such that when 18 is taken away from 6 times the number, the result is 30.

Solution:

Let 'a' be the number.

Given: 18 taken away from 6 times the number is 30.
6a – 18 = 30
6a = 30 + 18
6a = 48
a=48 / 6
a=8
Therefore, the number is 8.
Question 2:
Solve the equation given below and find the value of x and y
x + y = 3

x – y = 1
Solution:
Given, x + y = 3 …(1)
x – y = 1 …..(2)
By solving two equations, we get
2x = 4

x = 4/2

x = 2
Substitute x= 2 in equation (1), we get
2 + y = 3

y = 3 – 2

y = 1

Therefore, x = 2 and y = 1.
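Both worked examples can be verified by substituting back (a quick check of the arithmetic above):

```python
# Question 1: 6a - 18 = 30  =>  a = 8
a = (30 + 18) / 6
assert a == 8 and 6 * a - 18 == 30

# Question 2: x + y = 3 and x - y = 1  =>  x = 2, y = 1
x = (3 + 1) / 2  # adding the two equations gives 2x = 4
y = 3 - x
assert x + y == 3 and x - y == 1
print(a, x, y)  # 8.0 2.0 1.0
```

Substituting a solution back into the original equation is a habit worth building: it catches sign and arithmetic slips immediately.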
The emf is not the work done per unit charge integrated along a closed loop by a source that is not electrostatic; it's just:
$$ \mathscr E =\oint \vec{f}_s \cdot d\vec{\ell},$$
where the integral is around the circuit, and $\vec{f}_s$ is the net force per unit charge on the conduction charges that move about within the circuit element. This might seem the same, but it isn't. The integral for the emf is around a loop at some fixed time. The work done between two points needs to follow the charges in space and time. In statics there isn't much difference, but in dynamics (changing fields or moving wires) it matters.
The Lorentz force can do work, but I think you are specifically asking about how a magnetic field can produce an emf. There are two ways a magnetic field can be responsible for an emf.
The first is cheating. If the magnetic field at a point in space changes, then it is responsible for producing an electric field and that electric field can do work and in fact $\vec{E}$ can be line integrated along the circuit, and the result is equal to the flux of $\vec{\nabla}\times\vec{E}$ through the surface determined by the circuit, so equal to the flux of $-\partial \vec{B}/\partial t$ through the surface determined by the circuit.
The second is if the wire is moving. If the wire is moving, then you can compute $\oint (\vec{v}\times\vec{B}) \cdot d\vec{\ell}$, where $\vec{v}=\vec{w}+\vec{u}$ is the velocity of the conduction (mobile) charge, $\vec{w}$ is the velocity of the wire element itself, and $\vec{u}$ is the relative velocity of the mobile charge through the wire. This is the magnetic contribution to the emf because $\vec{v}\times\vec{B}$ is the magnetic force per unit charge on the mobile charges.
We can evaluate this:
$$ \oint (\vec{v}\times\vec{B}) \cdot d\vec{\ell}=-\oint \vec{B} \cdot (\vec{v}\times d\vec{\ell})=-\oint \vec{B} \cdot ((\vec{w}+\vec{u})\times d\vec{\ell}).$$
Then notice that $\vec{u}$ is along the wire, so parallel to $d\vec{\ell}$, so
$$ \oint (\vec{v}\times\vec{B}) \cdot d\vec{\ell}=-\oint \vec{B} \cdot (\vec{w}\times d\vec{\ell})$$
In a small amount of time, there is a circuit at one place, and then a time $\Delta t$ later the circuit is somewhere else and each part moves $\vec{w}\Delta t$ in space. Imagine a circuit at one time, and at the later time (so a latter circuit, and a former circuit), and there is a ribbon in between. Each part of the ribbon has an area $d\vec{a}=(\vec{w}\times d\vec{\ell})\Delta t$. Make your time interval so small that $\vec {B}$ doesn't change much (in time) from when the circuit is in one place to the other or anywhere inside. Then for a fixed time in that interval $\Delta t$, $\vec{\nabla}\cdot \vec{B}=0$ so the total flux through the surface $S$ bounded by the latter circuit ($C_2$), the former circuit ($C_1$) and the ribbon ($R$) is zero.
$0=\oint_S \vec{B}\cdot d\vec{a}=\int_{C_1} \vec{B}\cdot d\vec{a}+\int_{C_2} \vec{B}\cdot d\vec{a}+\int_{R} \vec{B}\cdot d\vec{a}.$
So $$\Delta t\oint (\vec{v}\times\vec{B}) \cdot d\vec{\ell}=-\Delta t\oint \vec{B} \cdot (\vec{w}\times d\vec{\ell})=\int_{C_1} \vec{B}\cdot d\vec{a}+\int_{C_2} \vec{B}\cdot d\vec{a}.$$
Where $d\vec{a}$ points outwards in both cases.
So if the $\vec{B}$ field is changing we get an induced electric field and a corresponding electric emf of:
$$\oint_{C_1} \vec{E} \cdot d\vec{\ell}=\int_{C_1} (\vec{\nabla}\times\vec{E})\cdot d\vec{a}=\int_{C_1} (-\partial \vec{B}/\partial t)\cdot d\vec{a}.$$
So for a small time interval $\Delta t$:
$$\Delta t\oint_{C_1} \vec{E} \cdot d\vec{\ell}=\Delta t\int_{C_1} (-\partial \vec{B}/\partial t)\cdot d\vec{a}=-\int_{C_1} (\vec{B}(t_0+\Delta t)-\vec{B}(t_0))\cdot d\vec{a}.$$
And if the circuit is moving we get:
$$\Delta t\oint (\vec{v}\times\vec{B}) \cdot d\vec{\ell}=-\int_{C_1} \vec{B}\cdot d\vec{a}+\int_{C_2} \vec{B}\cdot d\vec{a}.$$
Where this time both the $d\vec{a}$ vectors point in the direction associated with the direction of the oriented circuit.
Now if you compute the magnetic flux $\Phi=\int_{C} \vec{B}\cdot d\vec{a}$, then its time derivative has two parts (from the product rule): one from the changing $\vec{B}$ (for a fixed circuit) and one from a fixed $\vec{B}$ (and a changing circuit).
So putting them together, $$\oint \vec{E} \cdot d\vec{\ell}+\oint (\vec{v}\times\vec{B}) \cdot d\vec{\ell}=-d\Phi/dt.$$
Thus
the negative of the time rate of change of the magnetic flux is literally equal to the integral of the Lorentz force per unit charge around the circuit, and the electric part of the Lorentz force is due to the parts of the $\vec{B}$ field that are changing at some point, and the magnetic parts of the Lorentz force contribute where the circuit element itself is moving.
So the Lorentz force exactly contributes the emf due to the changing magnetic flux: the magnetic part because of the moving circuit, the electric part because of the induced electric field from the changing magnetic field. Both matter in general.
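As a sanity check on the motional piece, here is a small numerical sketch verifying $\oint(\vec{v}\times\vec{B})\cdot d\vec{\ell} = -d\Phi/dt$ for a static field and a rigidly translating circular loop. The field profile $B_z = B_0(1+x/L)$, loop radius $R$ and loop speed $w$ are all assumed for illustration:

```python
import math

B0, L, R, w = 2.0, 5.0, 0.3, 1.5   # assumed field/loop parameters

def Bz(x):
    return B0 * (1 + x / L)        # static field along z, nonuniform in x

def motional_emf(xc, n=20000):
    # emf = loop integral of (w x-hat x B z-hat) . dl = integral of -w*Bz(x) * dl_y
    total, dth = 0.0, 2 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        x = xc + R * math.cos(th)
        dly = R * math.cos(th) * dth   # y-component of the line element
        total += -w * Bz(x) * dly
    return total

# For a field linear in x, the flux through the loop centred at xc is
# Phi = pi R^2 Bz(xc), so -dPhi/dt = -pi R^2 B0 w / L, independent of xc.
expected = -math.pi * R**2 * B0 * w / L
print(motional_emf(0.7), expected)
```

The two printed numbers agree to quadrature accuracy, and the result does not depend on where the loop is, as the flux argument predicts.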
Let

- $H$ be a separable $\mathbb R$-Hilbert space
- $L\in\mathfrak L(H,\mathfrak L(H,\mathbb R))$
- $T\in\mathfrak L(H)$ be nonnegative, self-adjoint and nuclear (trace-class)
Note that$^1$ $$\operatorname{tr}\left(\left(L\otimes_\pi\operatorname{id}_H\right)T\right)=LT,\tag1$$ where on the left-hand side $L$ is considered as being an element of $\mathfrak L(H)$ and on the right-hand side $L$ is considered as being an element of $\left(H\:\hat\otimes_\pi\:H\right)'$.
Now, assume $L\in\mathfrak L(H,\mathfrak L(H))$. $L$ can be considered as being an element of $\mathfrak L(H\:\hat\otimes_\pi\:H,H)$ and hence the right-hand side of $(1)$ is still well-defined. Is there a generalization of the trace functional such that (with a suitable identification of $L$ for the left-hand side) we still have the identity $(1)$?
$^1$ If $E,F,X,Y$ are $\mathbb R$-Banach spaces, $S\in\mathfrak L(X,E)$ and $T\in\mathfrak L(Y,F)$, let $S\otimes_\pi T$ denote the unique bounded linear operator from $X\:\hat\otimes_\pi\:Y$ to $E\:\hat\otimes_\pi\:F$ with $$(S\otimes_\pi T)(x\otimes y)=Sx\otimes Ty\;\;\;\text{for all }(x,y)\in X\times Y.\tag2$$
@Julio's excellent answer describes a flight path angle, and explains that it is the angle between the tangential direction (perpendicular to the radial vector to the central body) and the current velocity vector.
I've first tried to get the angle from this expression, but it's obviously wrong, since $\arccos$ is an even function and the angle can go from $-\pi/2$ to $\pi/2$:
$$\arccos\left(\frac{\mathbf{r \centerdot v}}{|\mathbf{r}| \ |\mathbf{v}|} \right) - \frac{\pi}{2} \ \ \ \text{ (incorrect!)}$$
I've integrated orbits for GM ($\mu$) and SMA ($a$) of unity and starting distances from 0.2 to 1.8. That makes the period always $2 \pi$. When I plot the result of my function, I get too many wiggles.
What expression can I use to get the correct flight path angle gamma starting from state vectors?
Revised python for the erroneous part would be appreciated, but certainly not necessary for an answer.
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint as ODEint

def deriv(X, t):
    x, v = X.reshape(2, -1)
    acc = -x * ((x**2).sum())**-1.5
    return np.hstack((v, acc))

halfpi, pi, twopi = [f*np.pi for f in (0.5, 1, 2)]

T = twopi
time = np.linspace(0, twopi, 201)

a = 1.0
rstarts = 0.2 * np.arange(1, 10)
vstarts = np.sqrt(2./rstarts - 1./a)  # from vis-viva equation

answers = []
for r, v in zip(rstarts, vstarts):
    X0 = np.array([r, 0, 0, v])
    answer, info = ODEint(deriv, X0, time, full_output=True)
    answers.append(answer.T)

gammas = []
for a in answers:
    xx, vv = a.reshape(2, 2, -1)
    dotted = ((xx*vv)**2).sum(axis=0)
    rabs, vabs = [np.sqrt((thing**2).sum(axis=0)) for thing in (xx, vv)]
    gamma = np.arccos(dotted/(rabs*vabs)) - halfpi
    gammas.append(gamma)

if True:
    plt.figure()
    plt.subplot(4, 1, 1)
    for x, y, vx, vy in answers:
        plt.plot(x, y)
        plt.plot(x[:1], y[:1], '.k')
    plt.plot([0], [0], 'ok')
    plt.title('y vs x')
    plt.subplot(4, 1, 2)
    for x, y, vx, vy in answers:
        plt.plot(time, x, '-b')
        plt.plot(time, y, '--r')
    plt.title('x (blue) y (red, dashed)')
    plt.xlim(0, twopi)
    plt.subplot(4, 1, 3)
    for x, y, vx, vy in answers:
        plt.plot(time, vx, '-b')
        plt.plot(time, vy, '--r')
    plt.title('vx (blue) vy (red), dashed')
    plt.xlim(0, twopi)
    plt.subplot(4, 1, 4)
    for gamma in gammas:
        plt.plot(time, gamma)
    plt.title('gamma?')
    plt.xlim(0, twopi)
    plt.show()
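For reference, one commonly used expression uses $\arcsin$ rather than $\arccos$: since the flight path angle satisfies $\sin\gamma = \mathbf{r}\cdot\mathbf{v}/(|\mathbf{r}|\,|\mathbf{v}|)$, $\arcsin$ returns a value in $[-\pi/2, \pi/2]$ directly. A minimal sketch (function name is mine):

```python
import math

def flight_path_angle(r, v):
    # sin(gamma) = (r . v) / (|r| |v|); asin gives gamma in [-pi/2, pi/2]
    dot = sum(ri * vi for ri, vi in zip(r, v))
    rn = math.sqrt(sum(ri * ri for ri in r))
    vn = math.sqrt(sum(vi * vi for vi in v))
    return math.asin(dot / (rn * vn))

print(flight_path_angle((1, 0, 0), (0, 1, 0)))  # 0.0 at a circular-orbit point
```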
Ratio of Area of Triangle Inscribed in a Circle to Triangle Enclosing the Circle
Let the side of the small (inscribed) equilateral triangle be \[2x.\] Draw a line from the centre of the circle - which is also the centre of the triangles - to a vertex of the triangle as shown.
This line will bisect the angle at the vertex, producing an angle of 30 degrees. From the centre of the circle draw a line down to meet the base of the triangle at right angles. We can use simple trigonometry to find the radius of the circle.
\[\cos 30 = \frac{x}{r} \rightarrow r= \frac{x}{\cos 30}= \frac{x}{\sqrt{3}/2} = \frac{2x}{ \sqrt{3}}\]
Now drop a line from the centre of the circle to the centre of the base of the large triangle to form a right angle triangle as shown. We use simple trigonometry to find the base of the large triangle.
\[\tan 30 = \frac{r}{\text{half the base}} \rightarrow \text{base} =\frac{2r}{\tan 30}= \frac{2 \times 2x / \sqrt{3}}{1 / \sqrt{3}}= 4x\]
The sides of the big triangle are
\[\frac{4x}{2x} = 2\]
times the sides of the small triangle, and the area of the big triangle is the square of this (equals 4) times the area of the small triangle.
Taylor polynomials: formulas
Before attempting to illustrate what these funny formulas can be used for, we just write them out. First, some reminders:
The notation $f^{(k)}$ means the $k$th derivative of $f$. The notation $k!$ means $k$-factorial, which by definition is$$k!=1\cdot 2\cdot 3\cdot 4\cdot \ldots\cdot (k-1)\cdot k$$ Taylor's Formula with Remainder Term (first somewhat verbal version): Let $f$ be a reasonable function, and fix a positive integer $n$. Then we have\begin{multline*}f(\textit{input})=f(\textit{basepoint})+{f'(\textit{basepoint}) \over 1!}(\textit{input}-\textit{basepoint})\\+{ f''(\textit{basepoint})\over 2!}(\textit{input}-\textit{basepoint})^2+{f'''(\textit{basepoint}) \over 3!}(\textit{input}-\textit{basepoint })^3\\\ldots+\frac{ f^{(n)}(\textit{basepoint})}{n!}(\textit{input}-\textit{basepoint})^n+{f^{(n+1)}(c) \over (n+1)!}(\textit{input}-\textit{basepoint})^{n+1 }\end{multline*}for some $c$ between basepoint and input.
That is, the value of the function $f$ for some input presumably 'near' the basepoint is expressible in terms of the values of $f$ and its derivatives evaluated at the basepoint, with the only mystery being the precise nature of that $c$ between input and basepoint. Taylor's Formula with Remainder Term (second somewhat verbal version): Let $f$ be a reasonable function, and fix a positive integer $n$.\begin{multline*}f(\textit{basepoint + increment})=f(\textit{basepoint})+{ f'(\textit{basepoint}) \over 1! }(\textit{increment})\\+{ f''(\textit{basepoint})\over 2!}(\textit{increment})^2+{f'''(\textit{basepoint}) \over 3!}(\textit{increment})^3\\\ldots+\frac{ f^{(n)}(\textit{basepoint})}{n!}(\textit{increment})^n+{f^{(n+1)}(c) \over (n+1)!}(\textit{increment})^{n+1 }\end{multline*}for some $c$ between basepoint and basepoint + increment.
This version is really the same as the previous, but with a different emphasis: here we still have a basepoint, but are thinking in terms of moving a little bit away from it, by the amount increment.
And to get a more compact formula, we can be more symbolic: let's repeat these two versions:
Taylor's Formula with Remainder Term: Let $f$ be a reasonable function, fix an input value $x_o$, and fix a positive integer $n$. Then for input $x$ we have \begin{multline*}f(x)=f(x_o)+{f'(x_o)\over 1!}(x-x_o)+{f''(x_o)\over 2!}(x-x_o)^2+{f'''(x_o)\over 3!}(x-x_o)^3+\ldots\\\ldots+\frac{ f^{(n)}(x_o)}{n!}(x-x_o)^n+{f^{(n+1)}(c) \over (n+1)!}(x-x_o)^{n+1 }\end{multline*}for some $c$ between $x_o$ and $x$.
Note that in every version, in the very last term where all the indices are $n+1$, the input into $f^{(n+1)}$ is not the basepoint $x_o$ but is, instead, that mysterious $c$ about which we truly know nothing but that it lies between $x_o$ and $x$. The part of this formula without the error term is the degree-$n$ Taylor polynomial for $f$ at $x_o$, and that last term is the error term or remainder term. The Taylor series is said to be expanded at or expanded about or centered at or simply at the basepoint $x_o$.
There are many other possible forms for the error/remainder term. The one here was chosen partly because it resembles the other terms in the main part of the expansion.
Linear Taylor's Polynomial with Remainder Term: Let $f$ be a reasonable function, and fix an input value $x_o$. For any (reasonable) input value $x$ we have$$f(x)=f(x_o)+{f'(x_o)\over 1!}(x-x_o)+{f''(c)\over 2!}(x-x_o)^2$$for some $c$ between $x_o$ and $x$.
The previous formula is of course a very special case of the first, more general, formula. The reason to include the 'linear' case is that without the error term it is the old approximation by differentials formula, which had the fundamental flaw of having no way to estimate the error. Now we have the error estimate.
The general idea here is to approximate ‘fancy’ functions by polynomials, especially if we restrict ourselves to a fairly small interval around some given point. (That ‘approximation by differentials’ circus was a very crude version of this idea).
It is at this point that it becomes relatively easy to ‘beat’ a calculator, in the sense that the methods here can be used to give whatever precision is desired. So at the very least this methodology is not as silly and obsolete as some earlier traditional examples.
But even so, there is more to this than getting numbers out: it ought to be of some intrinsic interest that pretty arbitrary functions can be approximated as well as desired by polynomials, which are so readily computable (by hand or by machine)!
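The "beat a calculator" claim can be made concrete. A sketch, using $e^x$ at basepoint 0 as an assumed example: the Lagrange remainder ${f^{(n+1)}(c) \over (n+1)!}x^{n+1}$ has $f^{(n+1)}(c) = e^c \le e^x < 2$ for $0 < x \le 0.5$, giving a computable error bound:

```python
import math

def taylor_exp(x, n):
    # degree-n Taylor polynomial of e^x about the basepoint 0
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x, n = 0.5, 6
approx = taylor_exp(x, n)
# remainder bound: e^c x^(n+1)/(n+1)! for some c in (0, x), and e^c < 2 here
bound = 2 * x**(n + 1) / math.factorial(n + 1)
print(approx, math.exp(x), bound)
assert abs(approx - math.exp(x)) <= bound
```

Raising the degree $n$ shrinks the bound as fast as desired, which is the point of having a tangible error estimate.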
One element under our control is the choice of how high-degree a polynomial to use. Typically, the higher the degree (meaning more terms), the better the approximation will be. (There is nothing comparable to this in the 'approximation by differentials'.)
Of course, for all this to really be worth anything either in theory or in practice, we do need a tangible error estimate, so that we can be sure that we are within whatever tolerance/error is required. (There is nothing comparable to this in the 'approximation by differentials', either.)
And at this point it is not at all clear what exactly can be done with such formulas. For one thing, there are choices.
Exercises

- Write the first three terms of the Taylor series at 0 of $f(x)=1/(1+x)$.
- Write the first three terms of the Taylor series at 2 of $f(x)=1/(1-x)$.
- Write the first three terms of the Taylor series at 0 of $f(x)=e^{\cos x}$.
The curve is not hyperbolic. A hyperbolic curve is the result of an equation such as $f(x)=\dfrac{b}{a}\sqrt{x^2-a^2}$, which is not the case here.
The enzyme catalysis (Michaelis-Menten model) can be described by the two step reaction.
$$\ce{E + S<=>[k_f][k_r] ES ->[k_{cat}] E + P}$$
In Michaelis-Menten kinetics you make either of the two approximations:
- Equilibrium approximation: assuming that the first reaction (i.e. enzyme-substrate binding) is in equilibrium. Therefore $k_r[ES]=k_f[E][S]$.
- Quasi-steady-state approximation (QSS): assuming that the concentration of the $ES$ complex remains constant over time. Therefore $k_f[E][S]=(k_r+k_{cat})[ES]$.
For both these cases you can substitute $[ES]$ with $[E_0]-[E]$ where $E_0$ is the total enzyme concentration. You can also represent $[E]$ in terms of $[S]$ using any of the above relationship. Then, you assume that maximum catalytic activity would be when all the available enzyme is complexed with the substrate ($[ES]=[E_0]$). In that case the maximum rate of catalysis $V_{max}$ would be $k_{cat}[E_0]$. When you put everything together then the rate of catalysis would turn out to be:
$$\frac{d[P]}{dt}=V_{max}\frac{[S]}{K_M+[S]}$$
Where $K_M$ is the Michaelis constant: $K_M=\frac{k_r+k_{cat}}{k_f}$ under the quasi-steady-state approximation (or $K_M=\frac{k_r}{k_f}$ under the equilibrium approximation).
These kinds of curves denote saturation kinetics and can also be seen in the case of adsorption.
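The saturation behaviour of this rate law is easy to see numerically; a minimal sketch (parameter values are arbitrary):

```python
def mm_rate(S, Vmax, KM):
    # Michaelis-Menten rate law: v = Vmax * S / (KM + S)
    return Vmax * S / (KM + S)

Vmax, KM = 10.0, 2.0
print(mm_rate(KM, Vmax, KM))   # at S = KM the rate is exactly Vmax/2
print(mm_rate(1e6, Vmax, KM))  # saturates toward Vmax for large S
```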
Axiom: Axiom of Empty Set

Axiom: $\exists x: \forall y: \paren {\neg \paren {y \in x} }$

Also defined as
It can equivalently be specified:
$\exists x: \forall y \in x: y \ne y$
The equivalence is proved by Equivalence of Definitions of Empty Set.
Also known as
This axiom is also known as the Axiom of Existence, but there exists another axiom with that name.
Hence it is preferable not to use that name.
What's a meaningful "correlation" measure to study the relation between two such types of variables?
In R, how to do it?
For a moment, let's ignore the continuous/discrete issue. Basically correlation measures the strength of the linear relationship between variables, and you seem to be asking for an alternative way to measure the strength of the relationship. You might be interested in looking at some ideas from information theory. Specifically I think you might want to look at mutual information. Mutual information essentially gives you a way to quantify how much knowing the state of one variable tells you about the other variable. I actually think this definition is closer to what most people mean when they think about correlation.
For two discrete variables X and Y, the calculation is as follows: $$I(X;Y) = \sum_{y \in Y} \sum_{x \in X} p(x,y) \log{ \left(\frac{p(x,y)}{p(x)\,p(y)} \right) }$$
For two continuous variables we integrate rather than taking the sum: $$I(X;Y) = \int_Y \int_X p(x,y) \log{ \left(\frac{p(x,y)}{p(x)\,p(y)} \right) } \; dx \,dy$$
Your particular use-case is for one discrete and one continuous. Rather than integrating over a sum or summing over an integral, I imagine it would be easier to convert one of the variables into the other type. A typical way to do that would be to discretize your continuous variable into discrete bins.
There are a number of ways to discretize data (e.g. equal intervals), and I believe the entropy package should be helpful for the MI calculations if you want to use R.
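A minimal pure-Python sketch of this discretize-then-compute approach (equal-width bins; the function names are mine, not from any package):

```python
from collections import Counter
import math

def discretize(vals, k):
    # equal-width binning of a continuous variable into k bins
    lo, hi = min(vals), max(vals)
    w = (hi - lo) / k or 1.0
    return [min(int((v - lo) / w), k - 1) for v in vals]

def mutual_information(xs, ys):
    # empirical mutual information (in nats) of two discrete sequences:
    # sum over cells of p(x,y) * log[ p(x,y) / (p(x) p(y)) ]
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

ys = [0.1 * i for i in range(100)]      # a "continuous" variable
flag = [v >= 5.0 for v in ys]           # a binary categorical variable
print(mutual_information(discretize(ys, 2), flag))
```

Here the binned variable determines the flag exactly, so the MI comes out as $\log 2$ nats; for independent variables it would be near zero.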
If the categorical variable is ordinal and you bin the continuous variable into a few frequency intervals, you can use Gamma. Also available for paired data put into ordinal form are Kendall's tau, Stuart's tau and Somers' D. These are all available in SAS using Proc Freq. I don't know how they are computed using R routines. Here is a link to a presentation that gives detailed information: http://faculty.unlv.edu/cstream/ppts/QM722/measuresofassociation.ppt#260,5,Measures of Association for Nominal and Ordinal Variables
A categorical variable is effectively just a set of indicator variables. It is a basic idea of measurement theory that such a variable is invariant to relabelling of the categories, so it does not make sense to use the numerical labelling of the categories in any measure of the relationship with another variable (e.g., 'correlation'). For this reason, any measure of the relationship between a continuous variable and a categorical variable should be based entirely on the indicator variables derived from the latter.
Given that you want a measure of 'correlation' between the two variables, it makes sense to look at the correlation between a continuous random variable $X$ and an indicator random variable $I$ derived from a categorical variable. Letting $\phi \equiv \mathbb{P}(I=1)$ we have:
$$\mathbb{Cov}(I,X) = \mathbb{E}(IX) - \mathbb{E}(I) \mathbb{E}(X) = \phi \left[ \mathbb{E}(X|I=1) - \mathbb{E}(X) \right] ,$$
which gives:
$$\mathbb{Corr}(I,X) = \sqrt{\frac{\phi}{1-\phi}} \cdot \frac{\mathbb{E}(X|I=1) - \mathbb{E}(X)}{\mathbb{S}(X)} .$$
So the correlation between a continuous random variable $X$ and an indicator random variable $I$ is a fairly simple function of the indicator probability $\phi$ and the standardised gain in expected value of $X$ from conditioning on $I=1$. Note that this correlation does not require any discretization of the continuous random variable.
For a general categorical variable $C$ with range $1, ..., m$ you would then just extend this idea to have a
vector of correlation values for each outcome of the categorical variable. For any outcome $C=k$ we can define the corresponding indicator $I_k \equiv \mathbb{I}(C=k)$ and we have:
$$\mathbb{Corr}(I_k,X) = \sqrt{\frac{\phi_k}{1-\phi_k}} \cdot \frac{\mathbb{E}(X|C=k) - \mathbb{E}(X)}{\mathbb{S}(X)} .$$
We can then define $\mathbb{Corr}(C,X) \equiv (\mathbb{Corr}(I_1,X), ..., \mathbb{Corr}(I_m,X))$ as the vector of correlation values for each category of the categorical random variable. This is really the only sense in which it makes sense to talk about 'correlation' for a categorical random variable.
(Note: It is trivial to show that $\sum_k \mathbb{Cov}(I_k,X) = 0$, and so the correlation vector for a categorical random variable is subject to this constraint. This means that, given knowledge of the probability vector for the categorical random variable and the standard deviation of $X$, you can derive the vector from any $m-1$ of its elements.)
The above exposition is for the true correlation values, but obviously these must be estimated in a given analysis. Estimating the indicator correlations from sample data is simple, and can be done by substitution of appropriate estimates for each of the parts. (You could use fancier estimation methods if you prefer.) Given sample data $(x_1, c_1), ..., (x_n, c_n)$ we can estimate the parts of the correlation equation as:
$$\hat{\phi}_k \equiv \frac{1}{n} \sum_{i=1}^n \mathbb{I}(c_i=k).$$
$$\hat{\mathbb{E}}(X) \equiv \bar{x} \equiv \frac{1}{n} \sum_{i=1}^n x_i.$$
$$\hat{\mathbb{E}}(X|C=k) \equiv \bar{x}_k \equiv \frac{1}{n} \sum_{i=1}^n x_i \mathbb{I}(c_i=k) \Bigg/ \hat{\phi}_k .$$
$$\hat{\mathbb{S}}(X) \equiv s_X \equiv \sqrt{\frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})^2}.$$
Substitution of these estimates would yield a basic estimate of the correlation vector. If you have parametric information on $X$ then you could estimate the correlation vector directly by maximum likelihood or some other technique.
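The substitution estimators above are simple enough to sketch directly (pure Python; the function name is mine):

```python
import math

def categorical_corr(xs, cs):
    # estimated Corr(I_k, X) for each category k, using the estimators above:
    # sqrt(phi_k/(1-phi_k)) * (xbar_k - xbar) / s_X
    n = len(xs)
    xbar = sum(xs) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))
    out = {}
    for k in sorted(set(cs)):
        grp = [x for x, c in zip(xs, cs) if c == k]
        phi = len(grp) / n
        out[k] = math.sqrt(phi / (1 - phi)) * (sum(grp) / len(grp) - xbar) / s
    return out

print(categorical_corr([1, 2, 3, 4], ['a', 'a', 'b', 'b']))
```

For this toy sample the two categories have equal probability, so the constraint $\sum_k \mathbb{Cov}(I_k,X)=0$ forces the two correlations to be equal and opposite.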
The R package mpmi can calculate mutual information for the mixed-variable case, namely continuous and discrete. Although there are other statistical options, like the (point) biserial correlation coefficient, that could be useful here, it would be beneficial and highly recommended to calculate mutual information, since it can detect associations other than linear and monotonic ones.
If $X$ is a continuous random variable and $Y$ is a categorical r.v., the observed correlation between $X$ and $Y$ can be measured by the point-polyserial correlation coefficient.
It should be noted, though, that the point-polyserial correlation is just a generalization of the point-biserial.
For a broader view, here's a table from Olsson, Drasgow & Dorans (1982)[1].
[1]: Olsson, U., Drasgow, F., & Dorans, N. J. (1982). The polyserial correlation coefficient. Psychometrika, 47(3), 337–347.
With $s \in \mathbb{C}, a \in \mathbb{R}$,
numerical evidence strongly suggests that the complex zeros in the critical strip of:
$$\zeta\left(\frac{s}{a}\right) \pm \zeta\left(\frac{1-s}{a}\right)$$
all reside on the line $\Re(s)=\frac12$ for $a \lt 0$ or $a\ge 1$. There also exist finitely many complex zeros for each $a$, but these lie outside the critical strip.
When $a=1$ the formula reduces to this question.
Question: Is there an explanation for this phenomenon (or a counter example)?
If you want to understand the classical limit of a harmonic oscillator, it is more meaningful to consider coherent states $$|\alpha \rangle = e^{-|\alpha|^2/2}\sum_{n=0}^\infty \frac{\alpha^n}{\sqrt{n!}} |n \rangle$$ for $\alpha$ an arbitrary complex number. Such states satisfy $a |\alpha \rangle = \alpha |\alpha \rangle$. These states all saturate the Heisenberg uncertainty relation, and (unlike the higher excited states of the harmonic oscillator)
do have associated dynamics, namely $e^{-i H t/\hbar} |\alpha \rangle = |\alpha e^{-i \omega t} \rangle$ up to an overall phase.
One advantage to using a coherent state is that we can easily define the amplitude of the oscillation. The position and momentum expectation values are just $\langle x \rangle = \sqrt{\frac{2 \hbar}{m \omega}} \text{Re} \alpha$ and $\langle p \rangle = \sqrt{2 m \omega \hbar} \text{Im} \alpha$ where $\text{Re} \alpha$ and $\text{Im} \alpha$ are the real and imaginary parts of $\alpha$, and both are normally distributed about their mean values. Restricting to real $\alpha$ means the oscillator is maximally displaced (as $\langle p \rangle = 0$), and so we would associate the amplitude $A = \sqrt{\frac{2 \hbar}{m \omega}} |\alpha|$ (in fact this remains correct if $\alpha$ is not real). The coherent state is not an energy eigenstate, but $\langle E \rangle = (|\alpha|^2+\frac12)\hbar \omega$. Then we see that $\langle E \rangle = \frac12 m \omega^2 A^2 + \frac12 \hbar \omega$. The first term matches the expression you have in the classical case. The second term is the ground state energy of the quantum harmonic oscillator. It is $0$ in the classical limit $\hbar \rightarrow 0$, and it can also be consistently removed if you choose to by just shifting the potential down by a constant $\frac12 \hbar \omega$.
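The statement $\langle E \rangle = (|\alpha|^2+\frac12)\hbar \omega$ follows from $\langle N \rangle = |\alpha|^2$, which is easy to verify numerically because the number-state weights $|\langle n|\alpha\rangle|^2$ of a coherent state are Poisson-distributed with mean $|\alpha|^2$. A sketch (the value of $|\alpha|^2$ below is arbitrary):

```python
import math

alpha2, N = 2.3, 60        # |alpha|^2 (assumed value) and Fock-space truncation

# |<n|alpha>|^2 = e^{-|alpha|^2} |alpha|^{2n} / n!  (Poisson weights)
w = math.exp(-alpha2)      # weight of n = 0
total = mean_n = 0.0
for n in range(N):
    total += w
    mean_n += n * w
    w *= alpha2 / (n + 1)  # Poisson recursion avoids huge factorials

print(total, mean_n)       # ~1 and ~|alpha|^2, so <E> = (|alpha|^2 + 1/2) hbar omega
```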
While one can attempt to recover the classical limit from the higher energy eigenstates rather than from coherent states, it is quite a bit less satisfying to do so, as the high energy eigenstates describe a particle that is delocalized on a macroscopic scale (with similar issues in momentum space) and which does not evolve in time, while a classical particle in a harmonic oscillator should be localized at a scale much smaller than the length of its oscillation and should oscillate with characteristic angular frequency $\omega$ (which is not the case in a single eigenstate, only in a superposition of multiple energy eigenstates).

Since energy eigenstates only depend on a single parameter $n$, even for large $n$, where the spacing between adjacent energy levels is negligible, the energy eigenstates cannot describe a single classical state (which depends on both initial position and momentum), only the time-averaged phase space density of the state; whereas, since $\alpha$ is complex (2 real parameters), we have a one-to-one correspondence between classical states and coherent states, as we saw above.
This argument involving $\Gamma(\omega)$ is made since in a naive approach, in order to obtain the Hawking radiation, one generally drops the effective potential. Let me make this statement clearer:
In a curved spacetime, the Lagrangian of a free massless scalar field is:\begin{equation}\mathcal{L}=\frac{1}{2}g^{\mu\nu}\partial_\mu\phi\,\partial_\nu \phi\end{equation}where the metric $g^{\mu\nu}$ is the solution of the Einstein equations:$$G_{\mu\nu}=8\pi G\,T_{\mu\nu}$$in a spacetime with a source $T_{\mu\nu}$.
Now, by making use of the Euler-Lagrange equations, you can write down the equation of motion, which reads:$$\hat{\square}\phi=0$$where $\hat{\square}$ is the Laplace-Beltrami operator:$$\frac{1}{\sqrt{g}}\partial_\mu (\sqrt{g}\,g^{\mu\nu}\partial_\nu)$$with $g=\det(g_{\mu\nu})$.
Now consider the Schwarzschild metric, given by $$ds^2=(1-\frac{2m}{r})dt^2-\frac{dr^2}{1-\frac{2m}{r}}-r^2d\Omega^2$$ Since we are in a spherically symmetric spacetime, we can expand the field in spherical harmonics:$$\phi=\sum_{l,m}\frac{F(r,t)}{r}Y_l^m(\theta,\phi)$$ Putting this expansion into the equation of motion and considering the radial part, you will obtain (after some algebraic steps):$$\left(\frac{\partial^2}{\partial t^2}-\frac{\partial^2}{\partial {r^*}^2}+V_l\right)F_l(r,t)=0$$with the effective potential $V_l=(1-\frac{2m}{r})(\frac{2m}{r^3}+\frac{l(l+1)}{r^2})$.
In a first approximation, you can drop the $V_l$ term, by saying that your solution will be valid in the $r\to\infty$ regions (the asymptotic regions). What you will find is a mean particle number given by the Planckian distribution at late times:$$\langle in|N^R_\omega|in\rangle=\frac{1}{e^{8\pi m\omega}-1}$$(where $N^R_\omega={a^R_\omega}^\dagger a^R_\omega$, with these creation and annihilation operators referring to the modes $u^R_\omega\sim e^{-i\omega u}$, and with $u=t-r^*$, one of the null coordinates in the double-null extension of the Schwarzschild metric).
This value is clearly divergent for $\omega\to 0$, and this is clear since we have dropped the effective potential, which would shield the modes allowed to go to infinity. If you consider the effective potential, the equation of motion will obviously change, and you can say that there will be some transmitted modes (T-modes) and some reflected modes (R-modes) by the potential. Since we are considering modes $\sim e^{-i\omega u}$, which are directed through the future null infinity $\mathcal{I^+}$ (in a Penrose diagram), only the T-modes will arrive at $\mathcal{I}^+$, since the R-modes will be reflected by the potential into the black hole. Thus, by an asymptotic analysis, you can calculate the transmission and reflection probabilities, and you will find that:$$|T|^2\simeq 16m^2\omega^2\sim A_H\omega^2\;\;\;\; \text{with }A_H\text{ the horizon surface area}$$(and $|R|^2=1-|T|^2$).
Your Planckian distribution will then become:$$\langle in|N^R_\omega|in\rangle=\frac{|T|^2}{e^{8\pi m\omega}-1}=\frac{\Gamma(\omega)}{e^{8\pi m\omega}-1}$$and this is correct since the low modes are shielded by this $\Gamma$.
However, in your reference, Parker is considering the modes $\sim e^{-i\omega v}$, with $v=t+r^*$ the other null coordinate of the double-null extension of the Schwarzschild metric; then the modes which will go to infinity are those that are reflected by the potential, and by doing the same analysis I have done, you will arrive at the same conclusion.
The formula in complex analysis is
$$\int f(\gamma(t))\cdot\gamma'(t)\,dt$$
and the formula in the real variable setting, for a gradient field, is:
$$\int F\cdot dr$$ $$=\int f_x\,dx + f_y\,dy + f_z\,dz,$$
where the integrand is said to be an "exact differential" (or total differential.)
Are the formulas essentially the same thing, when we regard the complex function as a "vector field" mapping $C^2 \to C^2$?
Also, can one compute line integrals of scalar-valued functions in the real-variable setting -- or would this not make any physical sense?
Thanks,
Using Kjetil's answer, with process91's comment, we arrive at the following procedure.
Derivation
We are given two unit column vectors, $A$ and $B$ ($\|A\|=1$ and $\|B\|=1$). The $\|\circ\|$ denotes the L-2 norm of $\circ$.
First, note that the rotation from $A$ to $B$ is just a 2D rotation on a plane with the normal $A \times B$. A 2D rotation by an angle $\theta$ is given by the following augmented matrix: $$G=\begin{pmatrix}\cos\theta & -\sin\theta & 0 \\\sin\theta & \cos\theta & 0 \\0 & 0 & 1\end{pmatrix}.$$
Of course we don't want to actually
compute any trig functions. Given our unit vectors, we note that $\cos\theta=A\cdot B$, and $\sin\theta=||A\times B||$. Thus $$G=\begin{pmatrix}A\cdot B & -\|A\times B\| & 0 \\\|A\times B\| & A\cdot B & 0 \\0 & 0 & 1\end{pmatrix}.$$
This matrix represents the rotation from $A$ to $B$ in the base consisting of the following column vectors:
normalized vector projection of $B$ onto $A$: $$u={(A\cdot B)A \over \|(A\cdot B)A\|}=A$$
normalized vector rejection of $B$ onto $A$: $$v={B-(A\cdot B)A \over \|B- (A\cdot B)A\|}$$
the cross product of $B$ and $A$: $$w=B \times A$$
Those vectors are all orthogonal, and form an orthogonal basis. This is the detail that Kjetil had missed in his answer. You could also normalize $w$ and get an orthonormal basis, if you needed one, but it doesn't seem necessary.
The basis change matrix for this basis is:$$F=\begin{pmatrix}u & v & w \end{pmatrix}^{-1}=\begin{pmatrix} A & {B-(A\cdot B)A \over \|B- (A\cdot B)A\|} & B \times A\end{pmatrix}^{-1}$$
Thus, in the original base, the rotation from $A$ to $B$ can be expressed as right-multiplication of a vector by the following matrix: $$U=F^{-1}G F.$$
One can easily show that $U A = B$, and that $\|U\|_2=1$. Also, $U$ is the same as the $R$ matrix from Rik's answer.
2D Case
For the 2D case, given $A=\left(x_1,y_1,0\right)$ and $B=\left(x_2,y_2,0\right)$, the matrix $G$ is the forward transformation matrix itself, and we can simplify it further. We note$$\begin{aligned} \cos\theta &= A\cdot B = x_1x_2+y_1y_2 \\\sin\theta &= \| A\times B\| = x_1y_2-x_2y_1\end{aligned}$$
Finally,$$U\equiv G=\begin{pmatrix}x_1x_2+y_1y_2 & -(x_1y_2-x_2y_1) \\x_1y_2-x_2y_1 & x_1x_2+y_1y_2\end{pmatrix}$$and$$U^{-1}\equiv G^{-1}=\begin{pmatrix}x_1x_2+y_1y_2 & x_1y_2-x_2y_1 \\-(x_1y_2-x_2y_1) & x_1x_2+y_1y_2\end{pmatrix}$$
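The 2D formula can be checked directly in Python (a small sketch, not part of the original answer):

```python
import math

def rot2d(a, b):
    # U from the 2D case above: rotates unit vector a onto unit vector b
    (x1, y1), (x2, y2) = a, b
    c, s = x1 * x2 + y1 * y2, x1 * y2 - x2 * y1   # cos(theta), sin(theta)
    return [[c, -s], [s, c]]

a, b = (1.0, 0.0), (0.6, 0.8)
U = rot2d(a, b)
Ua = (U[0][0] * a[0] + U[0][1] * a[1], U[1][0] * a[0] + U[1][1] * a[1])
print(Ua)  # (0.6, 0.8)
```

The determinant $c^2+s^2$ is 1 for unit inputs, confirming $U$ is a pure rotation.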
Octave/Matlab Implementation
The basic implementation is very simple. You could improve it by factoring out the common subexpressions dot(A,B) and cross(B,A). Also note that $\|A\times B\|=\|B\times A\|$.
GG = @(A,B) [ dot(A,B) -norm(cross(A,B)) 0;\
norm(cross(A,B)) dot(A,B) 0;\
0 0 1];
FFi = @(A,B) [ A (B-dot(A,B)*A)/norm(B-dot(A,B)*A) cross(B,A) ];
UU = @(Fi,G) Fi*G*inv(Fi);
Testing:
> a=[1 0 0]'; b=[0 1 0]';
> U = UU(FFi(a,b), GG(a,b));
> norm(U) % is it length-preserving?
ans = 1
> norm(b-U*a) % does it rotate a onto b?
ans = 0
> U
U =
0 -1 0
1 0 0
0 0 1
Now with random vectors:
> vu = @(v) v/norm(v);
> ru = @() vu(rand(3,1));
> a = ru()
a =
0.043477
0.036412
0.998391
> b = ru()
b =
0.60958
0.73540
0.29597
> U = UU(FFi(a,b), GG(a,b));
> norm(U)
ans = 1
> norm(b-U*a)
ans = 2.2888e-16
> U
U =
0.73680 -0.32931 0.59049
-0.30976 0.61190 0.72776
-0.60098 -0.71912 0.34884
Implementation of Rik's Answer
It is computationally a bit more efficient to use Rik's answer. This is also an Octave/MatLab implementation.
ssc = @(v) [0 -v(3) v(2); v(3) 0 -v(1); -v(2) v(1) 0]
RU = @(A,B) eye(3) + ssc(cross(A,B)) + \
ssc(cross(A,B))^2*(1-dot(A,B))/(norm(cross(A,B))^2)
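For reference, here is a hedged pure-Python translation of the same formula (my addition; like the Octave code, it assumes $A$ and $B$ are unit vectors and not parallel or anti-parallel, since $\|A\times B\|=0$ would break the division):

```python
# Pure-Python sketch of Rik's rotation formula:
#   R = I + [v]x + [v]x^2 * (1 - c) / s^2,
# where v = A x B, c = A . B, s = ||v||.

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ssc(v):
    # skew-symmetric cross-product matrix [v]x
    return [[0, -v[2], v[1]],
            [v[2], 0, -v[0]],
            [-v[1], v[0], 0]]

def matmul(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(a, b):
    v = cross(a, b)
    c = dot(a, b)
    s2 = dot(v, v)            # ||A x B||^2; assumed nonzero
    k = (1 - c) / s2
    vx = ssc(v)
    vx2 = matmul(vx, vx)
    return [[(1 if i == j else 0) + vx[i][j] + k * vx2[i][j]
             for j in range(3)] for i in range(3)]

R = rotation([1, 0, 0], [0, 1, 0])
# R should rotate (1,0,0) onto (0,1,0)
```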
The results produced are the same as above, with slightly smaller numerical errors since fewer operations are performed. |
Dear community,
In light of the recent work of DeBacker/Reeder on the depth zero local Langlands correspondence, I was wondering if there is an attempt to "geometrize" the depth zero local Langlands correspondence.
In particular, in Teruyoshi Yoshida's thesis, one can see a glimpse of this for $GL(n,F)$, where $F$ is a $p$-adic field. Namely : Suppose $k$ is the residue field of $F$. Let $w$ be the cyclic permutation $(1 \ 2 \ 3 \ ... \ n)$ in the Weyl group $S_n$ of $GL(n,F)$. Let $\widetilde{Y_w}$ be the Deligne-Lusztig variety associated to $w$, and denote by $H^*(\widetilde{Y_w})$ the alternating sum of the cohomologies $H_c^i(\widetilde{Y_w}, \overline{\mathbb{Q}_{\ell}})$. Let $T_w(k) = k_n^*$ be the elliptic torus in $GL(n,k)$, where $k_n$ is the degree $n$ extension of $k$.
Since $T_w(k)$ and $GL(n,k)$ act on cohomology, $H^*(\widetilde{Y_w})$ is an element of the Grothendieck group of $GL(n,k) \times T_w(k)$-modules. There is a canonical surjection $I_F \rightarrow k_n^* = T_w(k)$, where $I_F$ is the inertia subgroup of the Weil group of $F$. Therefore, we may pull back the $GL(n,k) \times T_w(k)$ action on $H^*(\widetilde{Y_w})$ to an action of $GL(n,k) \times I_F$.
By Deligne-Lusztig Theory, as $GL(n,k) \times T_w(k)$-representations,
$$H_c^{n-1}(\widetilde{Y_w}, \overline{\mathbb{Q}_{\ell}})^{cusp} = \displaystyle\sum_{\theta \in C} \pi_{\theta} \otimes \theta$$
where $C$ denotes the set of all characters of $k_n^*$ that don't factor through the norm map $k_n^* \rightarrow k_m^*$ for any integer $m$ such that $m \neq n$ and $m$ divides $n$, and where cusp denotes "cuspidal part". Here, $\pi_{\theta}$ is the irreducible cuspidal representation of $GL(n,k)$ associated to the torus $T_w(k)$ and the character $\theta$ of $T_w(k)$.
One of Yoshida's main theorems is that in this decomposition $$\displaystyle\sum_{\theta \in C} \pi_{\theta} \otimes \theta,$$ the correspondence $\theta \leftrightarrow \pi_{\theta}$ is indeed the depth zero local Langlands correspondence for $GL(n,F)$, ''up to twisting'' (this twist is unimportant for my question), by comparing with Harris-Taylor.
So my question is : Has anyone tried to generalize this to more general groups, but still working only in depth zero local Langlands? One could try to do this, and then compare to the recent work of DeBacker/Reeder (they write down a fairly general depth zero local Langlands correspondence). In other words, has anyone tried to realize depth zero local Langlands in the cohomology of Deligne-Lusztig varieties outside of the case $GL(n,F)$, which Yoshida did?
A priori the above idea for $GL(n)$ won't work on the nose for other reductive groups since the tori that arise in other reductive groups vary considerably, but something similar might. One would possibly want to try to pull back the action of $T_w(k)$ on cohomology to the inertia group $I_F$ in a more general setting, where now $T_w(k)$ is a more general torus in a more general reductive group. Then, one could compare to DeBacker/Reeder.
I took a look at the case of unramified $U(3)$, and it seems that things will work quite nicely.
My other question is : It might turn out that what I'm proposing is an easy check if one understands DeBacker/Reeder and Deligne-Lusztig enough to write this down in general. If so, then is my original question even interesting? It would basically say that Deligne-Lusztig theory is very naturally compatible with local Langlands correspondence, but the hard work is really in DeBacker/Reeder and Deligne-Lusztig, and putting everything together might not be difficult. Is the original question interesting regardless of whether or not it is difficult to answer?
Sincerely,
Moshe Adrian |
How do I integrate this?
$$\int_0^{2\pi}\frac{dx}{2+\cos{x}}, x\in\mathbb{R}$$
I know the substitution method from real analysis, $t=\tan{\frac{x}{2}}$, but since this problem is in a set of problems about complex integration, I thought there must be another (easier?) way.
I tried computing the poles in the complex plane and got $$\text{Re}(z_0)=\pi+2\pi k, k\in\mathbb{Z}; \text{Im}(z_0)=-\log (2\pm\sqrt{3})$$ but what contour of integration should I choose? |
Use cylindrical coordinates to evaluate the triple integral
$$\iiint_E \sqrt{x^2+y^2}dV, $$ where $E$ is the solid bounded by the circular paraboloid $z=16−4(x^2+y^2)$ and the $xy$-plane. Please Help I am confused.
First draw a picture:
You then just need to set up the integral according to the picture as follows:
$$\int_0^{16} dz \: \int_0^{\sqrt{4-z/4}} dr \: r^2 \: \int_0^{2 \pi} d\theta$$
What is going on here? The volume element $dV = r\,dr\,d\theta\,dz$. The volume is rotationally symmetric as you can see, so there's no dependence on $\theta$. Note also that I choose to integrate disks parallel to the $xy$ plane through $z$; this involves solving for $r$ as a function of $z$. Note the extra factor of $r$ comes from your specification of the integral of $r \, dV$. Finally, we integrate over $z$ from $z=0$, i.e., the $xy$ plane, through to the top of the solid at $z=16$.
We then need to evaluate the integral. I'll reduce it to a single integral for you to evaluate:
$$\frac{2 \pi}{3} \int_0^{16}\: dz (4-z/4)^{3/2}$$
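As an aside (my addition, not the answerer's): a quick numerical check of this last integral with Simpson's rule. Carrying the substitution $u = 4 - z/4$ through by hand gives $\int_0^{16}(4-z/4)^{3/2}\,dz = 4\int_0^4 u^{3/2}\,du = \frac{256}{5}$, so the triple integral should come out to $\frac{2\pi}{3}\cdot\frac{256}{5} = \frac{512\pi}{15}$:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

f = lambda z: (4 - z / 4) ** 1.5
inner = simpson(f, 0.0, 16.0)        # numerically close to 256/5 = 51.2
approx = (2 * math.pi / 3) * inner   # numerically close to 512*pi/15
```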
You should be able to do this one out. |
How are these processes different from simple first-order IIR filtering, or from FIR filters, in terms of amplitude and phase characteristics?
Yes, integration and differentiation can be implemented as linear filters. You can start from the Laplace transform properties, which say:
$$ \int_{0}^{t} x(\tau)\,d\tau \longrightarrow \frac{X(s)}{s}, \qquad \frac{d}{dt}x(t) \longrightarrow sX(s). $$
So you can find the transfer functions of integration and differentiation:
$$ H_{INT}(s) = \frac{1}{s}, \qquad H_{DIFF}(s)=s. $$
You can convert these transfer functions into digital IIR filters, for example by the bilinear transform or other discretization techniques. However, you should notice that $H_{DIFF}(s)$ cannot be realized exactly (an ideal differentiator is not causal), so you must add a pole to the transfer function, far from the useful signal frequencies, and it becomes
$$ H_{DIFF_{causal}}(s)= \frac{s}{\alpha s + 1}, $$
where $\alpha$, the time constant of the derivative filter, is a small real number $>0$.
Using the bilinear transform, $H_{INT}(z)$ becomes a trapezoidal integrator; using the Euler transform, $H_{INT}(z)$ becomes a rectangular integrator. You can see this difference, and the digitization itself, on the MATLAB PID page.
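To make the trapezoidal-vs-rectangular distinction concrete, here is a small Python sketch (my addition; the sample period $T$ and the ramp input are illustrative):

```python
# Discrete-time integrators derived from H_INT(s) = 1/s.
# Bilinear (Tustin) transform gives the trapezoidal rule:
#   y[n] = y[n-1] + (T/2) * (x[n] + x[n-1])
# while the forward Euler transform gives the rectangular rule:
#   y[n] = y[n-1] + T * x[n-1]

def trapezoidal_integrator(x, T):
    y = 0.0
    for n in range(1, len(x)):
        y += (T / 2.0) * (x[n] + x[n - 1])
    return y

def rectangular_integrator(x, T):
    y = 0.0
    for n in range(1, len(x)):
        y += T * x[n - 1]
    return y

# Integrate the ramp x(t) = t over [0, 1]; the exact answer is 0.5.
T = 0.01
x = [n * T for n in range(101)]
trap = trapezoidal_integrator(x, T)   # exact for a linear input (about 0.5)
rect = rectangular_integrator(x, T)   # rectangular rule undershoots (about 0.495)
```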
I wrote a simple PID program in C that computes these operations using the Euler transform; a PID in a closed loop doesn't need to be accurate, so Euler works well.
In the literature there are many ways to implement derivative filters (IIR and FIR) that work around the causality problem in elegant ways; however, in many situations you can simply digitize the analog transfer functions to make IIRs and FIRs. |
formula
See also: fórmula

English. Noun:
1. (mathematics) Any mathematical rule expressed symbolically. $x = \frac {-b \pm \sqrt{b^2 - 4ac}}{2a}$ is the formula for finding the roots of the quadratic equation $y = ax^2 + bx + c$.
2. (chemistry) A symbolic expression of the structure of a compound. H$_2$O is the formula for water (dihydrogen monoxide).
3. A plan of action intended to solve a problem.
4. A formulation; a prescription; a mixture or solution made in a prescribed manner; the identity and quantities of ingredients of such a mixture. The formula of the rocket fuel has not been revealed.
5. (logic) A syntactic expression of a proposition, built up from quantifiers, logical connectives, variables, relation and operation symbols, and, depending on the type of logic, possibly other operators such as modal, temporal, deontic or epistemic ones.

Synonyms: (in mathematics) mathematical formula; (in chemistry) chemical formula.
Bosnian: formula f. (pl. formule), "formula, rule". Crimean Tatar: formula (see Useinov & Mireev Dictionary, Simferopol, Dolya, 2002). Italian: formula f. (plural formule), "formula" (mathematics, chemistry). Serbian: formula f. (pl. formule), "formula, rule".
|
Are there any differences between the study of calculus done by Newton and by Leibniz? If so, please mention them point by point.
Newton's notation, Leibniz's notation and Lagrange's notation are all in use today to some extent; they are, respectively:
$$\dot{f} = \frac{df}{dt}=f'(t)$$ $$\ddot{f} = \frac{d^2f}{dt^2}=f''(t)$$
You can find more notation examples on Wikipedia.
The standard integral notation ($\displaystyle\int f \, dt$) was developed by Leibniz as well. Newton did not have a standard notation for integration.
I have read the following in "The Information" by James Gleick: according to Babbage, who eventually took the Lucasian Professorship at Cambridge which Newton had held, Newton's notation crippled mathematical development. As an undergraduate, Babbage worked to institute Leibniz's notation at Cambridge, as it is used there today, despite the distaste the university still had for it because of the Newton/Leibniz conflict. This notation is a lot more useful than Newton's in most cases. It does, however, suggest that a derivative can be treated as a simple fraction, which is incorrect.
You should definitely take a look at the second chapter of Arnold's
Huygens & Barrow, Newton & Hooke. The late Prof. Arnold summarized therein the difference between Newton's approach to mathematical analysis and Leibniz's as follows:
Newton's analysis was the application of power series to the study of motion... For Leibniz, ... analysis was a more formal algebraic study of differential rings.
Arnold's overview of Leibniz's contributions to the theme is spiced up with a non-negligible number of thought-provoking remarks:
In the work of other geometers--e.g., Huygens and Barrow--many objects connected with a given curve also appeared [for example: abscissa, ordinate, tangent, the slope of the tangent, the area of a curvilinear figure, the subtangent, the normal, the subnormal, and so on]... Leibniz, with his individual tendency to universality [he considered necessary to discover the so-called characteristic, something universal, that unites everything in science and contains all answers to all questions], decided that all these quantities should be considered in the same way. For this he introduced a single term for any of the quantities connected with a given curve and fulfilling some function in relation to the given curve--the term
function...
Thus, according to Leibniz many functions were associated with a curve. Newton had another term--fluent--which denoted a flowing quantity, a variable quantity, and hence associated with motion. On the basis of Pascal's studies and his own arguments Leibniz quite rapidly developed formal analysis in the form in which we now know it. That is, in a form specially suitable to teach analysis by people who do not understand it to people who will never understand it... Leibniz quite rapidly established the formal rules for operating with infinitesimals, whose meaning is obscure.
Leibniz's method was as follows. He assumed that the whole of mathematics, like the whole of science, is found inside us, and by means of philosophy alone we can hit upon everything if we attentively take heed of processes that occur inside our mind. By this method he discovered various laws and sometimes very successfully. For example, he discovered that $d(x+y) = dx+dy$, and this remarkable discovery immediately forced him to think about what the differential of a product is. In accordance with the universality of his thoughts he rapidly came to the conclusion that differentiation [had to be] a ring homomorphism, that is, that the formula $d(xy) = dx dy$ must hold. But after some time he verified that this leads to some unpleasant consequences, and found the correct formula $d(xy) = xdy + y dx$, which is now called Leibniz's rule. None of the inductively thinking mathematicians--neither Barrow nor Newton, who as a consequence was called an empirical ass in the Marxist literature--could [have ever gotten] Leibniz's original hypothesis into his head, since to such a person it was quite obvious what the differential of a product is, from a simple drawing...
Beyond the issue of notation, Newton experimented with a number of foundational approaches. One of the earliest ones involved infinitesimals, whereas later he shied away from them because of philosophical resistance of his contemporaries, often stemming from sensitive religious considerations closely related to inter-denominational quarrels. Leibniz also was aware of the quarrels, but he used infinitesimals and differentials systematically in developing the calculus, and for this reason was more successful in attracting followers and stimulating research--or what he called the
Ars Inveniendi.
From a practical point of view, the notation was vastly different.
A particular sore point for me is that the Leibniz notation lets you incorrectly work with derivatives as though they were mathematical fractions. Unfortunately this 'works out' a lot of the time, so it's still used, even in college courses, today.
I don't think there is anything wrong with shortcuts, up to the point where they interfere with understanding. In this case, I do believe it creates a misunderstanding of the subject matter. This alone, I think, puts Newton's notation above Leibniz's.
From Loemker's translation,
"Leibniz's reasoning, though it strives for a broader application of the law of inverse squares than to gravity alone, is less general than Newton's (Principia, Book I, Propositions I, 2, 14), since it presupposes harmonic motion."
Leibniz, Gottfried Wilhelm
Philosophical Papers and Letters : A Selection / Translated and Edited, with an Introduction by Leroy E. Loemker. 2d ed. Dordrecht : D. Reidel, 1970. p.362 |
Consider a smooth convex/compact domain $D\subset \mathbb{R}^n$ and a smooth, concave function $F:D\to \mathbb{R}$. Then we can define the function that simply takes the volume of the upper contour sets determined by the argument:
$$G(t) = \int_{\{x\in D \; : \; F(x) \ge t\}} d\lambda$$
where $\lambda$ denotes the Lebesgue measure. I'm trying to figure out an expression for $\frac{d}{dt}G(t)$.
This seems like nothing more than a special case of a higher-dimensional Leibniz Integral Rule, but Wikipedia gives me a substantially more general formula than I suspect I need for this case (for definitions of the terms, see the link):
$$\frac{d}{dt} \int_{\Omega(t)} \omega = \int_{\Omega(t)} i_{\vec{v}}(d_x \omega) + \int_{\partial \Omega(t)} i_{\vec{v}}\omega + \int_{\Omega(t)} \dot{\omega}.$$
I have almost no background in differential forms, but immediately I know, for starters, the volume form I'm integrating is time invariant so the last term drops out here. Moreover, given I'm just concerned with a uniform density, I'd imagine the first term should be zero too? (This corresponding to the intuition that all that really matters here is how much 'volume bleeds out of the bag $\Omega(t)$' as I cinch it shut by increasing $t$, and hence I need only be concerned with the incremental flow of volume across the boundary.) But that may be wildly incorrect.
Ideally if someone could help guide me (ideally both intuitively and analytically) to be able to understand and describe this derivative I'd be very grateful! In particular an expression for what the Leibniz rule reduces to in this case would be most welcome. |
Related rates
In this section, most functions will be functions of a parameter $t$which we will think of as
time. There is a convention comingfrom physics to write the derivative of any function $y$of $t$ as $\dot{y}=dy/dt$, that is, with just a dot over thefunctions, rather than a prime.
The issues here are variants and continuations of the previous section's idea about implicit differentiation. Traditionally, there are other (non-calculus!) issues introduced at this point, involving both story-problem material and the requirement to deal with similar triangles and the Pythagorean Theorem, and to recall formulas for volumes of cones and such.
Continuing with the idea of describing a function by a relation, we could have two unknown functions $x$ and $y$ of $t$, related by some formula such as $$x^2+y^2=25.$$ A typical question of this genre is: 'What is $\dot{y}$ when $x=4$ and $\dot{x}=6$?'
The fundamental rule of thumb in this kind of situation is: differentiate the relation with respect to $t$. So we differentiate the relation $x^2+y^2=25$ with respect to $t$, even though we don't know any details about those two functions $x$ and $y$ of $t$: $$2x\dot{x}+2y\dot{y}=0,$$ using the chain rule. We can solve this for $\dot{y}$: $$\dot{y}=-{x\dot{x} \over y}.$$ So at any particular moment, if we knew the values of $x,\dot{x},y$, then we could find $\dot{y}$ at that moment.
Here it's easy to solve the original relation to find $y$ when $x=4$: we get $y=\pm 3$. Substituting, we get $$\dot{y}=-{4\cdot 6\over \pm 3}=\mp 8$$ (the $\mp$ sign means that $\dot{y}=-8$ if we take $y=+3$, and $\dot{y}=+8$ if we take $y=-3$).
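A quick numeric check of this computation (my addition, not part of the notes):

```python
# On x^2 + y^2 = 25 with x = 4, xdot = 6: ydot = -x*xdot/y on each branch of y.
x, xdot = 4.0, 6.0
ydot_pos = -x * xdot / 3.0     # taking y = +3, so ydot = -8
ydot_neg = -x * xdot / (-3.0)  # taking y = -3, so ydot = +8

# Sanity: the differentiated relation 2*x*xdot + 2*y*ydot = 0 holds on each branch.
check_pos = 2 * x * xdot + 2 * 3.0 * ydot_pos      # 0
check_neg = 2 * x * xdot + 2 * (-3.0) * ydot_neg   # 0
```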
Exercises:
1. Suppose that $x,y$ are both functions of $t$, and that $x^2+y^2=25$. Express ${ dx \over dt }$ in terms of $x,y,$ and ${ dy \over dt }$. When $x=3$ and $y=4$ and ${ dy \over dt }=6$, what is ${ dx \over dt }$?
2. A 2-foot tall dog is walking away from a streetlight which is on a 10-foot pole. At a certain moment, the tip of the dog's shadow is moving away from the streetlight at 5 feet per second. How fast is the dog walking at that moment?
3. A ladder $13$ feet long leans against a house, but is sliding down. How fast is the top of the ladder moving at a moment when the base of the ladder is $12$ feet from the house and moving outward at $10$ feet per second? |
Assume you have a fixed ($d=O(1)$ for that matter) degree matrix polynomial $$P(X)=A_0+A_1\cdot X+A_2\cdot X^2+\ldots+A_dX^d$$
Where $A_0,A_1,\ldots A_d\in\mathbb N^{n\times n}$ are given as input. Also given is some constant $\epsilon$.
Can we find a matrix $X_0\in \mathbb R^{n\times n}$ such that $\|P(X_0)\|<\epsilon$, or assert that no such matrix exists?
What is the complexity of such an algorithm?
There has been a lot of work on numerically finding (approximate) roots of polynomials over the reals; is there an equivalent for matrix polynomials?
For example, if $P(X)=-I+X^2$, then a solution could be $$X= \left( \begin{array}{ccc} 1 & 0 \\ 2015 & -1 \\ \end{array} \right) $$ |
I would like to prove that the number of simple jump discontinuities of any function is countable.
Can someone point me some material where the proof is or explain the proof here?
Thanks.
Let $f:(a,b)\to \mathbb{R}$ and $$A=\left\{x\in (a,b):f\text{ has a jump discontinuity at $x$}\right\}$$ Now $$A=A^{+}\cup A^{-}$$ where $$A^{+}=\left\{x\in (a,b):\lim_{y\to x^+}f(y)>\lim_{y\to x^-}f(y)\right\}$$ and $$A^{-}=\left\{x\in (a,b):\lim_{y\to x^+}f(y)<\lim_{y\to x^-}f(y)\right\}$$ I will show $A^{+}$ is countable and leave the rest to you. Fix $x\in A^{+}$ and then $\exists q\in \mathbb{Q}$ so that $$\lim_{y\to x^+}f(y)>q>\lim_{y\to x^-}f(y)$$ (why???). This means that $\exists \delta>0$ so that $$x-\delta<y<x<z<x+\delta\implies f(z)>q>f(y)$$ and so (why?) $\exists n\in \mathbb{N}$ so that $$x-\frac1n<y<x<z<x+\frac1n\implies f(z)>q>f(y)$$ If we let $$A_{q,n}=\left\{x\in (a,b):x-\frac1n<y<x<z<x+\frac1n\implies f(z)>q>f(y)\right\}$$ ($q\in \mathbb{Q}$,$n\in \mathbb{N}$) then by our previous discussion $$A^{+}\subseteq\bigcup_{q\in \mathbb{Q}}\bigcup_{n\in \mathbb{N}}A_{q,n}$$ Therefore the problem moves to proving that $A_{q,n}$ is countable. This follows from the fact $A_{q,n}$ is isolated (show this!).
The argument below is essentially the one outlined in Robert Israel's post here, but I tweak it a bit to show that there are only countably many removable discontinuities as well.
Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a function. The key idea is that we can control the amount of fluctuation in $f$ (and hence the size of jumps) on the left (resp., right) side of a point $x$ where the left limit (resp., right limit) exists by taking points sufficiently close to $x$. We cannot guarantee that there are no jumps in a neighborhood of a jump discontinuity; for example, the function $g:[-1,1]\rightarrow\mathbb{R}$ given by
$$g(x) = \begin{cases} \phantom{-}1 & \text{if}\ x\leq0 \\ 1/n & \text{if}\ n \text{ is a positive integer and } 1/(n+1)<x\leq 1/n \end{cases}$$
has a jump discontinuity at and in every neighborhood of $0$ (a more pathological example is given in iballa's comment on Koushik's post; see also Brian Scott's post here for details). However, it is true that we can make jumps around a jump discontinuity as small as desired by taking a sufficiently small neighborhood (but we actually only use a slightly weaker result -- see below). To that end, we note that the definition of the left limit and the triangle inequality give the
Lemma. If $f(x-)=\lim_{t\rightarrow x^-} f(t)$ exists then for any $\varepsilon > 0$ we have some $\delta>0$ such that$$\mathrm{diam} f(x-\delta,x) < \varepsilon. \Box$$
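As a concrete illustration (my addition), here is a small Python sketch of the step function $g$ defined above. Its jump at $x = 1/n$ (for $n \ge 2$) has size $\frac{1}{n-1}-\frac{1}{n}$, so jump discontinuities accumulate at $0$, even though, consistently with the theorem, there are only countably many of them:

```python
import math

def g(x):
    """g(x) = 1 for x <= 0, and g(x) = 1/n on (1/(n+1), 1/n], for 0 < x <= 1."""
    if x <= 0:
        return 1.0
    # x in (1/(n+1), 1/n] gives floor(1/x) = n
    return 1.0 / math.floor(1.0 / x)

eps = 1e-9
for n in [2, 3, 10, 100]:
    x = 1.0 / n
    jump = g(x + eps) - g(x - eps)  # approximates g(x+) - g(x-) = 1/(n-1) - 1/n
    print(n, jump)
```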
Now for any $x\in\mathbb{R}$ where $f(x-), f(x+)$ exist, put
$$M(x)=\max\{|f(x)-f(x-)|,|f(x)-f(x+)|\},$$
and for any $\varepsilon>0$, let
$$\mathcal{J}(\varepsilon)=\{ x\in\mathbb{R} : f(x-),f(x+) \text{ exist and } M(x)>\varepsilon \}.$$
Since any point $x$ at which a jump or removable discontinuity occurs lies in $\bigcup_n \mathcal{J}(1/n),$ it suffices to show that each $\mathcal{J}(\varepsilon)$ is countable. Fix $x\in\mathcal{J}(\varepsilon)$ and take $\delta>0$ such that $\mathrm{diam} f(x-\delta,x) < \varepsilon.$ If $t_0$ is an element of $(x-\delta, x)$ such that $f(t_0-), f(t_0+)$ exist then the sequences $f(t_0-1/n), f(t_0+1/n)$ eventually lie in
$$f(x-\delta,x) \subset [f(t_0)-\varepsilon, f(t_0)+\varepsilon],$$
so that
$$f(t_0 -)=\lim_{n\rightarrow\infty} f(t_0-1/n) \in [f(t_0)-\varepsilon, f(t_0)+\varepsilon]$$
and
$$f(t_0 +)=\lim_{n\rightarrow\infty} f(t_0+1/n) \in [f(t_0)-\varepsilon, f(t_0)+\varepsilon].$$
Consequently, we have $M(t_0)\leq\varepsilon$, and we deduce that $(x-\delta, x)$ and $\mathcal{J}(\varepsilon)$ are disjoint. Letting $q_x$ be any rational number in $(x-\delta, x),$ the map $x\mapsto q_x$ yields an injection $\mathcal{J}(\varepsilon)\rightarrow\mathbb{Q},$ completing the proof.
Any jump discontinuity has a neighborhood containing no other jump discontinuity. Associate to each such neighborhood a rational number inside it; this gives a bijection between the set of jump discontinuities and a subset of the rationals.
Since $\mathbb{R}$ is a countable union of open intervals, to prove the result on $\mathbb{R}$ it is enough to prove it on an arbitrary open interval $(a,b)$.
Claim: the set of jump discontinuities is countable. It is enough to associate each discontinuity with a distinct element of a countable set; here we use triples of rationals. Let $f:(a,b) \to \mathbb{R}$. By a jump discontinuity we mean a point $x$ where $f(x-)$ and $f(x+)$ exist but are not both equal to $f(x)$. There are three cases: 1) $f(x-) < f(x+)$; 2) $f(x-) > f(x+)$; 3) $f(x-) = f(x+) \neq f(x)$. It is enough to treat cases 1 and 3. Consider rational triples $(p,q,r)$. Case 1) Suppose $f(x-) < f(x+)$. By the density of the rationals there is a rational $p$ with $f(x-) < p < f(x+)$. Since $f(x-)=\lim_{t \to x^-}f(t) < p$, by the definition of the one-sided limit there is a rational $q$ with $a < q < x$ such that $q < t < x \implies f(t) < p$. Similarly there is a rational $r$ with $x < r < b$ such that $x < t < r \implies f(t) > p$. Assign to $x$ the triple $(p,q,r)$.
To show the assignment is injective, suppose $x \neq y$ were both assigned the triple $(p,q,r)$; without loss of generality $x<y$. Then any $t$ with $x < t < \min(r,y)$ satisfies both $f(t) > p$ (from the triple at $x$) and $f(t) < p$ (from the triple at $y$, since $q < x < t < y$), a contradiction.
Case 3) Here $f(x-)=f(x+)=z \neq f(x)$. We can associate to $x$ a rational pair $(q,r)$ such that $a<q<x$ with $q<t<x \implies |f(t)-z|<|f(x)-z|$, and $x<r<b$ with $x<t<r \implies |f(t)-z|<|f(x)-z|$.
Similarly to the above, one can show that this assignment is injective. |
Written by Carlo Luschi & Dominic Masters. Posted Apr 24, 2018.
The team at Graphcore Research has recently been considering mini-batch stochastic gradient optimization of modern deep network architectures, comparing the test performance for different batch sizes. Our experiments show that small batch sizes produce the best results.
We have found that increasing the batch size progressively reduces the range of learning rates that provide stable convergence and acceptable test performance. Smaller batch sizes also provide more up-to-date gradient calculations, which give more stable and reliable training. The best performance has been consistently obtained for mini-batch sizes between 2 and 32. This contrasts with recent work, which is motivated by trying to induce more data parallelism to reduce training time on today's hardware; these approaches often use mini-batch sizes in the thousands.
The training of modern deep neural networks is based on mini-batch Stochastic Gradient Descent (SGD) optimization, where each weight update relies on a small subset of training examples. The recent drive to employ progressively larger batch sizes is motivated by the desire to improve the parallelism of SGD, both to increase the efficiency on today's processors and to allow distributed implementation across a larger number of physical processors. On the other hand, the use of small batch sizes has been shown to improve generalization performance and optimization convergence (LeCun et al., 2012; Keskar et al., 2016) and requires a significantly smaller memory footprint, but needs a different type of processor to sustain full speed training.
We have investigated the training dynamics and generalization performance of small batch training for different scenarios. The main contributions of our work are the following:
We have produced an extensive set of experimental results which highlight that using small batch sizes significantly improves training stability. This results in a wider range of learning rates that provide stable convergence, while using larger batch sizes often reduces the usable range to the point that the optimal learning rate could not be used. The results confirm that using small batch sizes achieves the best generalization performance, for a given computation cost. In all cases, the best results have been obtained with batch sizes of 32 or smaller. Often mini-batch sizes as small as 2 or 4 deliver optimal results.
Our results show that a new type of processor which is able to efficiently work on small mini-batch sizes will yield better neural network models, and faster.
Stochastic Gradient Optimization
SGD optimization updates the network parameters $\boldsymbol{\theta}$ by computing the gradient of the loss $L(\boldsymbol{\theta})$ for a mini-batch $\mathcal{B}$ of $m$ training examples, resulting in the weight update rule
$$\boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k - \eta \; \frac{1}{m} \sum_{i=1}^{m} \nabla_{\boldsymbol{\theta}} L_i(\boldsymbol{\theta}_k) \, ,$$
where $\eta \;$ denotes the learning rate.
For a given batch size $m$ the expected value of the weight update per training example (i.e., per gradient calculation $\nabla_{\boldsymbol{\theta}} L_i(\boldsymbol{\theta})$) is proportional to $\eta/m$. This implies that a linear increase of the learning rate $\eta$ with the batch size $m$ is required to keep the mean weight update per training example constant.
This is achieved by the
linear scaling rule, which has been recently widely adopted (e.g., Goyal et al., 2017). Here we suggest that, as discussed by Wilson & Martinez (2003), it is clearer to define the SGD parameter update rule in terms of a fixed base learning rate $\tilde{\eta} = \eta / m$, which corresponds to using the sum instead of the average of the local gradients
$$\boldsymbol{\theta}_{k+1} = \boldsymbol{\theta}_k - \tilde{\eta} \; \sum_{i=1}^{m} \nabla_{\boldsymbol{\theta}} L_i(\boldsymbol{\theta}_k) \, .$$
In this case, if the batch size $m$ is increased, the mean SGD weight update per training example is kept constant by simply maintaining a constant learning rate $\tilde{\eta}$.
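As a toy illustration (my own sketch; the quadratic per-example loss is illustrative, not from the post), the two formulations give identical updates when $\eta = m\tilde{\eta}$:

```python
# Equivalence of the two SGD update rules: averaging gradients with
# eta = m * eta_base is the same as summing gradients with the fixed base
# learning rate eta_base. Per-example loss L_i(theta) = 0.5*(theta - x_i)^2,
# so grad_i = theta - x_i.

def step_average(theta, batch, eta):
    g = sum(theta - x for x in batch) / len(batch)  # mean gradient
    return theta - eta * g

def step_sum(theta, batch, eta_base):
    return theta - eta_base * sum(theta - x for x in batch)

batch = [1.0, 2.0, 3.0, 4.0]
eta_base = 0.05
theta_avg = step_average(0.0, batch, eta_base * len(batch))  # linear scaling rule
theta_sum = step_sum(0.0, batch, eta_base)                   # fixed base learning rate
# both produce the same updated parameter
```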
At the same time, the variance of the parameter update scales linearly with the quantity $\eta^2/m = \tilde{\eta} ^2 \cdot m \, $ (Hoffer et al., 2017). Therefore, keeping the base learning rate $\tilde{\eta}$ constant implies a linear increase of the variance with the batch size $m$.
Benefits of Small Batch Training
When comparing the SGD update for a batch size $m$ with the update for a larger batch size $n \cdot m$, the crucial difference is that with the larger batch size all the $n \cdot m$ gradient calculations are performed with respect to the original point $\boldsymbol{\theta}_k$ in the parameter space. As shown in the figure below, for a small batch size $m$, for the same computation cost, the gradients for $n$ consecutive update steps are instead calculated with respect to new points $\boldsymbol{\theta}_{k+j}$, for $j = 1, ..., n - 1$.
Therefore, under the assumption of constant base learning rate $\tilde{\eta}$, large batch training can be considered to be an approximation of small batch methods that trades increased parallelism for stale gradients (Wilson & Martinez, 2003).
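The stale-gradient point can be made concrete with a toy example (again my own sketch, with an illustrative quadratic per-example loss): one large-batch step evaluates all gradients at the same point $\boldsymbol{\theta}_k$, while $n$ small-batch steps re-evaluate the gradient at each new point, so the two trajectories differ:

```python
# Compare one step over all examples against sequential small-batch steps,
# both with the same base learning rate (sum-of-gradients update rule).
# Per-example loss L_i(theta) = 0.5*(theta - x_i)^2, so grad_i = theta - x_i.

def large_batch_step(theta, data, eta_base):
    # all gradients evaluated at the same starting point theta (stale gradients)
    return theta - eta_base * sum(theta - x for x in data)

def small_batch_steps(theta, data, eta_base, m=1):
    # gradients evaluated at progressively updated points (fresh gradients)
    for i in range(0, len(data), m):
        theta -= eta_base * sum(theta - x for x in data[i:i + m])
    return theta

data = [1.0, 2.0, 3.0, 4.0]
eta_base = 0.1
big = large_batch_step(0.0, data, eta_base)     # about 1.0
small = small_batch_steps(0.0, data, eta_base)  # about 0.9049; trajectories differ
```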
Small Batch sizes provide a better optimisation path
The CIFAR-10 test performance obtained for a reduced AlexNet model over a fixed number of epochs shows that using smaller batches gives a clear performance advantage. For the same base learning rate $\tilde{\eta}$, reducing the batch size delivers improved test accuracy. Also, using smaller batches corresponds to the largest range of learning rates that provide stable convergence.
Modern deep networks commonly employ Batch Normalization (Ioffe & Szegedy, 2015), which has been shown to significantly improve training performance. With Batch Normalization, each layer is normalized based on estimates of the mean and variance of each feature's activation over a batch of examples. The performance of Batch Normalization for very small batch sizes is typically affected by the reduced sample size available for estimating the batch mean and variance. However, the collected data shows best performance with batch sizes smaller than previously reported.
The following figure shows the CIFAR-100 performance for ResNet-32, with Batch Normalization, for different values of the batch size $m$ and base learning rate $\tilde{\eta}$. The results again show a significant performance degradation for increasing values of the batch size, with the best results obtained for batch sizes $m = 4$ or $m = 8$. The results also indicate a clear optimum value of the base learning rate, which is only achievable for batch sizes $m = 8$ or smaller.
As summarized in the following figure, increasing the batch size progressively reduces the range of learning rates that provide stable convergence. This demonstrates how the increased variance in the weight update associated with the use of larger batch sizes can affect the robustness and stability of training. The results clearly indicate that small batches are required to achieve both the best test performance, and allow easier and more robust optimization.
Different Batch Sizes for Weight Update and Batch Normalization
In the following figure, we consider the effect of using small sub-batches for Batch Normalization, and larger batches for SGD. This is common practice for the case of data-parallel distributed processing, where Batch Normalization is often implemented independently on each individual processor, while the gradients for the SGD weight updates are aggregated across all workers.
The CIFAR-100 results show a general performance improvement by reducing the overall batch size for the SGD weight updates. We note that the best test accuracy for a given overall SGD batch size is consistently obtained when even smaller batches are used for Batch Normalization. This evidence suggests that to achieve the best performance both a modest overall batch size for SGD and a small batch size for Batch Normalization are required.
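The sub-batch scheme can be sketched in NumPy (illustrative only; learnable scale/shift parameters and running statistics are omitted, and the function name is ours):

```python
import numpy as np

def sub_batch_normalize(x, sub_batch, eps=1e-5):
    """Normalize activations x of shape (batch, features) using mean/variance
    computed independently per sub-batch of size `sub_batch`, as when Batch
    Normalization runs separately on each worker of a data-parallel setup."""
    out = np.empty_like(x, dtype=float)
    for start in range(0, len(x), sub_batch):
        chunk = x[start:start + sub_batch]
        mu = chunk.mean(axis=0)                  # per-feature mean of the sub-batch
        var = chunk.var(axis=0)                  # per-feature variance of the sub-batch
        out[start:start + sub_batch] = (chunk - mu) / np.sqrt(var + eps)
    return out

x = np.random.default_rng(1).normal(2.0, 3.0, size=(32, 4))
y = sub_batch_normalize(x, sub_batch=8)          # overall batch 32, BN over sub-batches of 8
```

The gradients for the weight update would still be aggregated over the full batch of 32; only the normalization statistics come from the smaller sub-batches.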
Why it matters
Using small batch sizes has been seen to achieve the best training stability and generalization performance, for a given computational cost, across a wide range of experiments. The results also highlight the optimization difficulties associated with large batch sizes. Overall, the experimental results support the broad conclusion that small batch training benefits both the range of learning rates that give stable convergence and the test performance for a given number of epochs.
While we are not the first to conclude that smaller mini-batch sizes give better generalization performance, current practice is geared towards ever larger batch sizes, because today's hardware forces a trade-off: accuracy is sacrificed to synthesize enough parallelism to fill the wide vector data-paths of modern processors and to hide their long latencies to model data stored off-chip in DRAM.
With the arrival of new hardware specifically designed for machine intelligence, like Graphcore’s Intelligence Processing Unit (IPU), it’s time to rethink conventional wisdom on optimal batch size. With the IPU you will be able to run training efficiently even with small batches, and hence achieve both increased accuracy and faster training. In addition, because the IPU holds the entire model inside the processor, you gain an additional speed up by virtue of not having to access external memory continuously. Our benchmark performance results highlight the faster training times that can be achieved.
You can read the full paper here https://arxiv.org/abs/1804.07612.
Written by
Carlo Luschi & Dominic Masters
Posted
Apr 24, 2018 |
I have earlier posted the same question on math stackexchange, but without any answer. As the question concerns tensors, I guess that I have come to the right place, i.e. to physicists.
Say I have the following equation of motion in the Cartesian coordinate system for a typical mass-spring-damper system:
$$M \; \ddot{x} + C \; \dot{x} + K \; x = 0$$
where a dot denotes differentiation with respect to time.
Now I would like to convert this equation to polar coordinates. So I introduce
$$x=r \; \cos{\theta}$$ to obtain
$$\dot{x}=\dot{r} \; \cos{\theta} - r \; \dot{\theta} \sin{\theta}$$
and $$\ddot{x}=\ddot{r} \; \cos{\theta}-2 \; \dot{r} \; \dot{\theta} \; \sin{\theta}-r \; \dot{\theta}^2 \; \cos{\theta}- r \; \ddot{\theta} \; \sin{\theta}$$
I can insert $x, \; \dot{x} \; \text{and} \; \ddot{x}$ in my original equation in the Cartesian coordinate system to yield
$$M \; (\ddot{r} \; \cos{\theta}-2 \; \dot{r} \; \dot{\theta} \; \sin{\theta}-r \; \dot{\theta}^2 \; \cos{\theta}- r \; \ddot{\theta} \; \sin{\theta}) + C \; (\dot{r} \; \cos{\theta} - r \; \dot{\theta} \sin{\theta}) + K \; (r \; \cos{\theta}) = 0$$
Note: I am just showing the equation and derivatives in the x-direction. But the full system has both $x$ and $y$ components.
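For what it's worth, the substitution above can be sanity-checked symbolically (a SymPy sketch; here $r$ and $\theta$ are declared as functions of $t$ so that differentiation applies the chain rule automatically):

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)
theta = sp.Function('theta')(t)

x = r * sp.cos(theta)
xdot = sp.diff(x, t)        # first time derivative
xddot = sp.diff(x, t, 2)    # second time derivative

# the expressions derived by hand in the question
expected_xdot = sp.diff(r, t) * sp.cos(theta) - r * sp.diff(theta, t) * sp.sin(theta)
expected_xddot = (sp.diff(r, t, 2) * sp.cos(theta)
                  - 2 * sp.diff(r, t) * sp.diff(theta, t) * sp.sin(theta)
                  - r * sp.diff(theta, t) ** 2 * sp.cos(theta)
                  - r * sp.diff(theta, t, 2) * sp.sin(theta))

print(sp.simplify(xdot - expected_xdot))    # 0
print(sp.simplify(xddot - expected_xddot))  # 0
```

Both differences simplify to zero, so the hand-derived $\dot{x}$ and $\ddot{x}$ are correct.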
I wonder if the above way of thinking is right. I am very new to tensors, and after reading about covariant derivatives I am now thinking that one should also consider the basis vectors of the polar coordinate system (a non-Cartesian coordinate system), since unlike the basis vectors of the Cartesian coordinate system, which do not change direction in the 2D space, the polar coordinate basis vectors change direction depending on the angle $\theta$.
I am thinking about covariant derivatives because the conversion process includes differentiation with respect to the bases. For example, if $x=r \; \cos{\theta}$, then
$$\dot{x}=\frac{dx}{dt}=\frac{\partial{x}}{\partial{r}} \cdot \frac{dr}{dt} + \frac{\partial{x}}{\partial{\theta}} \cdot \frac{d\theta}{dt}$$
So we have terms like $\frac{\partial{x}}{\partial{r}}$ and $\frac{\partial{x}}{\partial{\theta}}$ that concern basis vectors both in the Cartesian and in the polar coordinate systems.
Hope that someone can shed some light on this. |
Double integrals where one integration order is easier
Suppose you need to calculate the double integral $\iint_\dlr f(x,y)\,dA$ for some function $f(x,y)$ and the region $\dlr$ shown below.
To calculate the double integral, you can write it as an iterated integral. For example, let's say that in the region $\dlr$, the lowest value of $y$ is $a$ and the highest value of $y$ is $b$. In other words, the range of $y$ in the region $\dlr$ is $a \le y \le b$.
For a given value of $y$, the range of $x$ in $\dlr$ depends on the value of $y$. However, the region is nice enough so that the range of $x$ for any $y$ is just a simple interval. We could define two functions $h_1(y)$ and $h_2(y)$ so that this interval in $x$ is $[h_1(y),h_2(y)]$ for each value of $y$. This description of the region $\dlr$ is shown in the following picture.
Since the region $\dlr$ is defined by \begin{gather*} a \le y \le b,\\ h_1(y) \le x \le h_2(y), \end{gather*} we can represent the double integral of $f(x,y)$ over $\dlr$ as the following iterated integral, \begin{align*} \iint_\dlr f(x,y) dA = \int_a^b \left( \int_{h_1(y)}^{h_2(y)} f(x,y) dx \right) dy. \end{align*}
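As a concrete sanity check (with a hypothetical region of this type, not the one in the figure), take $f(x,y)=x$ on $0 \le y \le 1$ with $h_1(y)=y^2 \le x \le h_2(y)=y$; a SymPy sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x

# region: 0 <= y <= 1, with h1(y) = y**2 <= x <= h2(y) = y (illustrative choice)
inner = sp.integrate(f, (x, y**2, y))     # integrate with respect to x first
result = sp.integrate(inner, (y, 0, 1))   # then with respect to y
print(result)                             # 1/15
```

The inner integral gives $(y^2 - y^4)/2$, and integrating that over $[0,1]$ yields $1/15$.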
However, we would run into trouble if we tried to change the order of integration so that we integrated with respect to $y$ first. The difficulty is that the range of $y$ for some values of $x$ is not a simple interval. For example, for the value of $x$ given by the vertical dashed line below, the range of $y$ is two different intervals.
Since we cannot always write the range of $y$ as a single interval $[g_1(x),g_2(x)]$, we cannot write the integral $\iint_\dlr f(x,y)\,dA$ as a single iterated integral of the form \begin{align*} \iint_\dlr f(x,y) dA = \int_c^d \left( \int_{g_1(x)}^{g_2(x)} f(x,y) dy \right) dx. \end{align*} If we really wanted to integrate with respect to $y$ first, we'd have to break the region $\dlr$ into pieces and compute separate integrals for each piece. We'll leave that procedure to your imagination. |
Let $G$ be a semisimple group over $\mathbb C$, and $X=G/H$ be a spherical homogeneous space of $G$. Let $T\subset B\subset G$ be a maximal torus and a Borel subgroup. Let $S=S(G,T,B)$ denote the corresponding set of simple roots.
Let ${\mathcal{P}}(S)$ denote the set of subsets of $S$. Let $M$ denote the weight lattice of $X$, and set $N:={\rm Hom}(M,\mathbb Z)$. Let $\mathcal D$ denote the set of colors of $X$.
We have maps $\rho\colon \mathcal D\to N$ and ${\varsigma}\colon\mathcal D\to{\mathcal{P}}(S)$. Here ${\varsigma}(D)$ for $D\in\mathcal D$ is the set of simple roots $\alpha\in S$ such that the corresponding minimal parabolic subgroup $P_\alpha\supset B$ moves the color $D$. Thus we obtain a map $$ {\varsigma}\times\rho\colon\ \mathcal D\ \longrightarrow\ {\mathcal{P}}(S)\times N.$$ This map need not be injective, but by Proposition 3.2.3 of Losev's paper "Uniqueness property for spherical homogeneous spaces" each of its fibers has $\le 2$ elements.
Now consider the group ${{\rm Aut}}_G(X)=\mathcal N_G(H)/H$; this group acts on $\mathcal D$. One can easily see that ${{\rm Aut}}_G(X)$ acts on the fibers of ${\varsigma}\times\rho$.
Question 1. Is it true that ${{\rm Aut}}_G(X)$ acts transitively on each fiber of ${\varsigma}\times\rho$?
Question 2. In particular, is it true that if $\mathcal N_G(H)=H$, then the map ${\varsigma}\times\rho$ is injective? |
SPPU Electronics and Telecom Engineering (Semester 4)
Control Systems December 2015
Control Systems
December 2015
Total marks: --
Total time: --
INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
Solve any one question from Q1 and Q2
1 (a) Consider the R-L-C network shown in Fig. 1:
i) Obtain the transfer function if \(V_i\) and \(V_o\) are the input and output voltages respectively.
ii) Find the location of the poles in terms of R, L and C.
iii) If R = 1 MΩ, C = 1 μF, L = 1 mH, are the poles of the transfer function obtained in (i) real? If yes, find their location.
6 M
1 (b) If \( G(s)= \dfrac {K}{s(s+64)} \) with H(s)=1, determine the value of K so that the damping factor is 0.5. For this value of K determine:
i) Rise time, and
ii) Settling time.
Assume unit step input.
6 M
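A quick numeric check of Q1(b) using the standard second-order formulas (an illustrative sketch; the underdamped rise-time formula and the 2% settling-time criterion are assumed):

```python
import math

zeta = 0.5
# closed-loop characteristic equation: s**2 + 64*s + K = 0
# matching s**2 + 2*zeta*wn*s + wn**2 gives 2*zeta*wn = 64 and K = wn**2
wn = 64 / (2 * zeta)                         # natural frequency, rad/s
K = wn ** 2                                  # required gain
wd = wn * math.sqrt(1 - zeta ** 2)           # damped natural frequency
tr = (math.pi - math.acos(zeta)) / wd        # rise time (underdamped formula)
ts = 4 / (zeta * wn)                         # settling time, 2% criterion
print(K, tr, ts)                             # 4096.0, ~0.0378 s, 0.125 s
```

With ζ = 0.5 the pole pair sits at 60° from the negative real axis, which is where the cos⁻¹(ζ) term in the rise-time formula comes from.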
2 (a) Find \( \dfrac {C(s)}{R(s)} \) for the system shown in Fig. 2 using Block diagram rules.
6 M
2 (b) The open loop transfer function of unity feedback system is:
\( G(s) = \dfrac {K}{s(\tau s+1)}, \quad K, \tau > 0. \) With a given value of K, the peak overshoot was found to be 80%. Suppose the peak overshoot is decreased to 20% by decreasing the gain K. Find the new value of K (say \(K_2\)) in terms of the old value.
6 M
Solve any one question from Q3 and Q4
3 (a) Comment on the stability of a system using Routh's criterion, if the characteristic equation is \( D(s)=s^4+5s^3+s^2+10s+1 \). How many poles lie in the right half of the s-plane?
4 M
3 (b) Construct the Bode plot and calculate GM, PM, \(W_{gc}\) and \(W_{pc}\) if \( G(s) = \dfrac {200(s+20)}{s(2s+1)(s+40)} \) and H(s)=1.
8 M
4 (a) Open loop transfer function of unity feedback system is \( G(s) = \dfrac {K}{s(s+2)(s+10)}. \) Sketch the complete root locus and comment on stability of system.
8 M
4 (b) For unity feedback system with \( G(s) = \dfrac {100}{s(s+5)} \).
Determine:
i) Resonance peak
ii) Resonance frequency.
4 M
Solve any one question from Q5 and Q6
5 (a) Enlist any two advantages of state space approach over transfer function. Obtain a state space representation in controllable and observable canonical form for the system \( G(s) = \dfrac {s+3}{s^2 + 3s +2} \)
6 M
5 (b) Obtain the state space representation of the system whose differential equation is: \[ \dfrac {d^3 y}{dt^3}+ 2 \dfrac {d^2 y}{dt^2}+ 3 \dfrac {dy}{dt}+ 6y = \dfrac {d^2u}{dt^2} - \dfrac {du}{dt}+ 2u. \] Also find the controllability and observability of the system. Assume zero initial conditions.
7 M
6 (a) Obtain state transition matrix if: \[ i) \ \dfrac {dx}{dt} = \begin{bmatrix} 0 &1 \\-1 &0 \end{bmatrix}x \\ ii) \ \dfrac {dx}{dt} = \begin{bmatrix} 0 &1 \\0 &0 \end{bmatrix} x \] using Laplace transformation.
6 M
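Q6(a) asks for the state transition matrix \(\Phi(t) = \mathcal{L}^{-1}[(sI-A)^{-1}]\); the same matrix can be obtained with SymPy's matrix exponential, which the sketch below uses as a check (illustrative, not a model answer):

```python
import sympy as sp

t = sp.symbols('t')
A1 = sp.Matrix([[0, 1], [-1, 0]])
A2 = sp.Matrix([[0, 1], [0, 0]])

# state transition matrix Phi(t) = exp(A*t), equal to L^{-1}[(sI - A)^{-1}]
phi1 = sp.simplify((A1 * t).exp())  # expected: [[cos(t), sin(t)], [-sin(t), cos(t)]]
phi2 = sp.simplify((A2 * t).exp())  # expected: [[1, t], [0, 1]]
print(phi1)
print(phi2)
```

Both satisfy the defining properties Φ(0) = I and dΦ/dt = AΦ, which is a quick way to verify a hand computation.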
6 (b) Write a short note on 'state transition matrix and its properties'.
4 M
Solve any one question from Q7 and Q8
7 (a) Advantage of digital control system over analog control systems.
4 M
7 (b) Application of PLC (Programmable Logic Controller) in Elevator/Lift.
4 M
7 (c) PID controllers and its operational characteristics.
5 M
8 (a) Obtain pulse transfer function of the system shown in Fig. 3 with a=1.
6 M
8 (b) Obtain pulse transfer function of system shown in Fig. 4
7 M
More question papers from Control Systems |
Review Questions: Creating Expressions and Equations
Q10.01 Create the symbolic math variables a, b, c and x. Use these variables to define the symbolic math expressions:
Q10.02 Create the symbolic math variables a, b, c and x. Use these variables to define the symbolic math equations:
Q10.03 Create the symbolic math variables a, b, c, x, and y. Use these variables to define the symbolic math expression:
Substitute the variable y in for the variable c.
Substitute the value 5 in for the variable y.
Q10.04 Create the symbolic math variables E, A, d, P, L, and F. Use these variables to define the symbolic math equation:
Substitute the value 29 \times 10^6 for E
Substitute F/2 for the variable P
Q10.05 Create the symbolic math variables t, T, c, and J. Use these variables to define the symbolic math equation:
Substitute J = \frac{\pi}{2}c^4 into the equation
Substitute T=9.0 and c=4.5. Print out the resulting value of t.
Q10.06 Mohr's circle is used in mechanical engineering to calculate the shear and normal stress. Given the height of Mohr's circle \tau_{max} is equal to the expression below:
Use SymPy expressions or equations to calculate \tau_{max} if \sigma_x = 90, \sigma_y = 60 and \tau_{xy} = 20
Solving Equations
Q10.20 Use SymPy to solve for x if x - 4 = 2
Q10.21 Use SymPy to solve for the roots of the quadratic equation 2x^2 - 4x + 1.5 = 0
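One way Q10.20 and Q10.21 can be answered (a sketch; writing 1.5 as Rational(3, 2) keeps the arithmetic exact, which is our choice rather than a requirement):

```python
import sympy as sp

x = sp.symbols('x')

# Q10.20: solve x - 4 = 2
print(sp.solve(sp.Eq(x - 4, 2), x))                      # [6]

# Q10.21: roots of 2x^2 - 4x + 1.5 = 0
roots = sp.solve(2 * x ** 2 - 4 * x + sp.Rational(3, 2), x)
print(roots)                                             # [1/2, 3/2]
```

`sp.solve` accepts either an `Eq(...)` object or an expression implicitly set equal to zero, as the two calls illustrate.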
Q10.22 Create the symbolic math variable b and define the equation below:
Find the numeric value of b to three decimal places
Q10.30 Use SymPy to solve the system of linear equations below for the variables x and y:
Q10.31 Use SymPy to solve the system of linear equations below for the variables x, y, and z:
Q10.32 A set of five equations is below:
Use symbolic math variables and equations to solve for x_1, x_2, x_3, x_4 and x_5.
Q10.33 An equation in terms of the variables L and x is defined below.
Solve the equation for x in terms of the variable L. Note there will be more than one solution.
Q10.50 Use SymPy to solve the system of non-linear equations below for the variables x and y: |