Interested in the following function:$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$where $\pi(n)$ is the prime counting function.When $s=2$ the sum becomes the following:$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
Consider a random binary string where each bit can be set to 1 with probability $p$.Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ and the $x$-th bit is set to 1. Moreover, $y$ bits are set 1 including the $x$-th bit and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why, in the definition of algebraic closure, do we require that $\overline F$ be algebraic over $F$? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition of algebraic closure, do we get a different notion?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$).Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme in the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition, or does it follow from the definition? (I don't see how it could be the latter.)
Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals?
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to the above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book?
suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$; then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left(\left|\dfrac{z}{z_0}\right|\right)^n < \dfrac12 \left(\left|\dfrac{z}{z_0}\right|\right)^n$, so $a_n z^n$ is absolutely summable, so $\sum a_n z^n$ converges
Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$ for all $n \ge 1$. Show that $\lim_{n\to\infty} n!\, g_n(t) = 0$ for all $t \in [0,\frac{1}{2}]$.
Can you give some hint?
My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$.
If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then $a_n$ converges to zero.
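A quick numerical sanity check of the claim, for one arbitrary choice of $g$ (here $g(s) = \cos 5s$), iterating the integral on a grid:
Code:
import math
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Check n! * g_n(1/2) -> 0 for the made-up choice g(s) = cos(5 s):
# iterate g_{n+1}(t) = integral_0^t g_n(s) ds on a grid.
t = np.linspace(0.0, 0.5, 20001)
g_n = np.cos(5 * t)                                   # g_1 = g
for n in range(2, 18):
    g_n = cumulative_trapezoid(g_n, t, initial=0.0)   # g_n from g_{n-1}
    print(n, math.factorial(n) * g_n[-1])             # n! * g_n(1/2), tends to 0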
I have a bilinear functional that is bounded from below.
I try to approximate the minimum by an ansatz function that is a linear combination
of some independent functions from the proper function space.
I now obtain an expression that is bilinear in the coefficients.
Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients = 0)
I get a set of $n$ equations, with $n$ the number of coefficients:
a set of $n$ linear homogeneous equations in the $n$ coefficients.
Now, instead of directly attempting to solve the equations for the coefficients, I rather look at the secular determinant, which should be zero, since otherwise no nontrivial solution exists.
This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz,
avoiding the necessity to solve for the coefficients.
I have trouble formulating the question precisely. But it strikes me that a direct solution of the equations can be circumvented, and instead the values of the functional are obtained directly by using the condition that the determinant is zero.
I wonder if there is something deeper in the background, or, so to say, a more general principle.
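For what it is worth, the situation described above can be stated more sharply: the stationarity conditions form a homogeneous linear system, and the vanishing of the secular determinant is exactly the characteristic equation of a (generalized) eigenvalue problem, so the permissible values of the functional are eigenvalues; that is the general principle behind skipping the coefficients. A minimal numerical sketch, under assumptions of my own (a one-dimensional Rayleigh quotient for $-u''$ with polynomial ansatz functions, not the actual functional from the question):
Code:
import numpy as np
from scipy.linalg import eigh

# Hypothetical example (not the question's functional): approximate the lowest
# eigenvalues of -u'' on [0, 1] with u(0) = u(1) = 0, using the polynomial
# ansatz functions phi_k(x) = x^(k+1) (1 - x).  Stationarity of the Rayleigh
# quotient gives (H - E S) c = 0; a nontrivial c requires det(H - E S) = 0,
# i.e. a generalized eigenvalue problem for the permissible values E.
n = 6
x = np.linspace(0.0, 1.0, 4001)

def phi(k):                     # ansatz function
    return x ** (k + 1) * (1.0 - x)

def dphi(k):                    # its derivative
    return (k + 1) * x ** k * (1.0 - x) - x ** (k + 1)

H = np.zeros((n, n))            # <phi_i', phi_j'>  ("stiffness")
S = np.zeros((n, n))            # <phi_i , phi_j >  ("overlap")
for i in range(n):
    for j in range(n):
        H[i, j] = np.trapz(dphi(i) * dphi(j), x)
        S[i, j] = np.trapz(phi(i) * phi(j), x)

E = eigh(H, S, eigvals_only=True)             # roots of det(H - E S) = 0
print(E[:3])                                  # approximate stationary values
print([(k * np.pi) ** 2 for k in (1, 2, 3)])  # exact values, for comparison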
If $x$ is a prime number and there exists a number $y$ which is the digit reverse of $x$ and is also prime, then the integer $z$ midway between $x$ and $y$ must be a palindrome with $\operatorname{digitsum}(z)=\operatorname{digitsum}(x)$.
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)
It's discussed very carefully (but no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D
OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc.. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a...
@NeuroFuzzy awesome what have you done with it? how long have you been using it?
it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game
As far as I recall, being a long-term powder gamer myself, Powder Game does not really have a diffusion-like algorithm written into it. The liquids in Powder Game are sort of dots that move back and forth and are subject to gravity
@Secret I mean more along the lines of the fluid dynamics in that kind of game
@Secret Like how in the dan-ball one air pressure looks continuous (I assume)
@Secret You really just need a timer for particle extinction, and something that affects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A.
I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
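Here is a minimal grid-automaton sketch of exactly that rule (type A ages into type B, type B flips back next to an A); the grid size, seeding density, and timer are all made-up choices:
Code:
import numpy as np

# Toy cellular automaton for the rule sketched above (parameters made up):
# state 0 = empty, 1 = type A, 2 = type B.  A cells carry an age counter and
# turn into B after 10 steps; B cells turn back into A if a 4-neighbour is A.
rng = np.random.default_rng(0)
N, STEPS, AGE_LIMIT = 64, 200, 10

state = (rng.random((N, N)) < 0.05).astype(int)   # sparse seed of A cells
age = np.zeros((N, N), dtype=int)

def next_to(mask):
    """True wherever at least one 4-neighbour is True (periodic edges)."""
    return (np.roll(mask, 1, 0) | np.roll(mask, -1, 0) |
            np.roll(mask, 1, 1) | np.roll(mask, -1, 1))

for _ in range(STEPS):
    a, b = state == 1, state == 2
    age[a] += 1
    to_b = a & (age >= AGE_LIMIT)      # A -> B after AGE_LIMIT steps
    to_a = b & next_to(a)              # B -> A when adjacent to an A cell
    state[to_b], state[to_a] = 2, 1
    age[to_b | to_a] = 0

print(np.bincount(state.ravel(), minlength=3))    # counts of empty / A / B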
(Those that don't understand cricket, please ignore this context, I will get to the physics...)England are playing Pakistan at Lords and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4 ) It's always bothered me slightly that there seems to be a ...
Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like?As some/many/most people are aware, we are in the midst of a...
Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex.I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl...
@ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-)
What is the maximum distance for eavesdropping on pure sound waves? And what kind of device do I need to use for eavesdropping? A microphone with a parabolic reflector or laser listening devices are available on the market, but are there any other devices on the planet which would allow ...
and endless whiteboards get doodled with boxes, grids circled in red marker, and some scribbles
The documentary then showed a bird's-eye view of the farmlands
(pardon my sketchy drawing skills...)
Most of the farmland is tiled into grids
Here there are two distinct columns and rows of tiled farmland to the left and top of the main grid. They are the index arrays, and they notate the range of indices of the tensor array
In some tiles there is a swirl of a dirt mound; these represent components with nonzero curl
and in others grass grew
Two blue steel bars were visible lying across the grid, holding up a triangular pool of water
Next, in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e.
Occasionally, mishaps can happen, such as too much force being applied so that the sign snaps in the middle. The boys are then forced to take the broken sign to the nearest roadworks workshop to mend it
At the end of the documentary, near a university lodge area,
I walked towards the boys and expressed interest in joining their project. They then said that I would be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also asked about my recent trip to London and Belgium. Dream ends
Reality check: I have been to London, but not Belgium
Idea extraction: The tensor array mentioned in the dream is a multi-index object where each component can be a tensor of a different order
Presumably one can formulate it (using an example of a 4th order tensor) as follows:
$$A^{\alpha}{}_{\beta\gamma\delta\epsilon}$$
and then allow the indices $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array,
while the indices $\gamma,\delta,\epsilon$ are taken from a subset of the values the $\alpha,\beta$ indices range over. For example, to encode a patch of a nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$
However, even if the indices are restricted to certain values only, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers
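As a rough prototype of this idea extraction (all names, ranges, and the example subsets below are invented for illustration), a restricted-index array could be sketched like this:
Code:
from itertools import product

# Dictionary-backed object A[alpha, beta, gamma, delta, epsilon]: alpha, beta run
# over the full range, while gamma, delta, epsilon may only take values from
# chosen subsets.  Everything here is hypothetical.
FULL = range(10)
SUBSETS = {"gamma": {4, 9}, "delta": {2, 3}, "epsilon": {0, 1}}

class RaggedTensor:
    def __init__(self):
        self.data = {}

    def __setitem__(self, idx, value):
        a, b, g, d, e = idx
        if g not in SUBSETS["gamma"] or d not in SUBSETS["delta"] or e not in SUBSETS["epsilon"]:
            raise IndexError("inner index outside its allowed subset")
        self.data[idx] = value

    def __getitem__(self, idx):
        return self.data.get(idx, 0.0)     # unset components default to zero

A = RaggedTensor()
for a, b in product(FULL, FULL):
    A[a, b, 4, 2, 0] = a - b               # encode an antisymmetric patch

print(A[3, 7, 4, 2, 0], A[3, 7, 9, 3, 1])  # -> -4 0.0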
@DavidZ in the recent meta post about the homework policy there is the following statement:
> We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems.
This is an interesting statement.
I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully whether there is another more specific reason for closure, e.g. "unclear what you're asking".
I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea.
I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments).
@DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic.
@peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive.
@DanielSank No, the site mods could have caged him only in the PSE, and only for a year. That he got. After that his cage was extended to a 10 year long network-wide one, it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds.
@EmilioPisanty Yes, but I would have liked to talk to him here.
@DanielSank I am only curious what he did. Maybe he attacked the whole network? Or he took a site-level conflict into the real world? As far as I know, network-wide bans happen for such things.
@peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck.
Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful.
@EmilioPisanty Although it is already not about Ron Maimon: I can't see the meaning of "campaign" defined well enough here. And yes, it is a bit of a source of fear for me that maybe my behavior could also be read as if "I were campaigning for my own caging".
Now let's look at a mathematical approach to resource theories. As I've mentioned, resource theories let us tackle questions like these:
Our first approach will only tackle question 1. Given \(y\), we will only ask: is it possible to get \(x\)? This is a yes-or-no question, unlike questions 2-4, which are more complicated. If the answer is yes we will write \(x \le y\).
So, for now our resources will form a "preorder", as defined in Lecture 3.
Definition. A preorder is a set \(X\) equipped with a relation \(\le\) obeying: reflexivity: \(x \le x\) for all \(x \in X\); transitivity: \(x \le y\) and \(y \le z\) imply \(x \le z\) for all \(x,y,z \in X\).
All this makes sense. Given \(x\) you can get \(x\). And if you can get \(x\) from \(y\) and get \(y\) from \(z\) then you can get \(x\) from \(z\).
What's new is that we can also combine resources. In chemistry we denote this with a plus sign: if we have a molecule of \(\text{H}_2\text{O}\) and a molecule of \(\text{CO}_2\) we say we have \(\text{H}_2\text{O} + \text{CO}_2\). We can use almost any symbol we want; Fong and Spivak use \(\otimes\) so I'll often use that. We pronounce this symbol "tensor". Don't worry about why: it's a long story, but you can live a long and happy life without knowing it.
It turns out that when you have a way to combine things, you also want a special thing that acts like "nothing". When you combine \(x\) with nothing, you get \(x\). We'll call this special thing \(I\).
Definition. A monoid is a set \(X\) equipped with: an operation \(\otimes : X \times X \to X\), and an element \(I \in X\),
such that these laws hold:
the associative law: \( (x \otimes y) \otimes z = x \otimes (y \otimes z) \) for all \(x,y,z \in X\)
the left and right unit laws: \(I \otimes x = x = x \otimes I\) for all \(x \in X\).
You know lots of monoids. In mathematics, monoids rule the world! I could talk about them endlessly, but today we need to combine the monoids and preorders:
Definition. A monoidal preorder is a set \(X\) with a relation \(\le\) making it into a preorder, an operation \(\otimes : X \times X \to X\) and element \(I \in X\) making it into a monoid, and obeying:
$$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg and a slice of bread into a fried egg and a piece of toast!
You know lots of monoidal preorders, too! Many of your favorite number systems are monoidal preorders:
The set \(\mathbb{R}\) of real numbers with the usual \(\le\), the binary operation \(+: \mathbb{R} \times \mathbb{R} \to \mathbb{R} \) and the element \(0 \in \mathbb{R}\) is a monoidal preorder.
Same for the set \(\mathbb{Q}\) of rational numbers.
Same for the set \(\mathbb{Z}\) of integers.
Same for the set \(\mathbb{N}\) of natural numbers.
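As a quick sanity check of the definition above, one can spot-check the preorder laws, the monoid laws, and the compatibility condition on a finite sample of the first example, \( (\mathbb{N}, \le, +, 0) \). This is only a finite check, not a proof, and the packaging into (leq, tensor, unit) is my own; swapping in other triples lets you probe the puzzles below.
Code:
from itertools import product

# A monoidal preorder packaged as (leq, tensor, unit); here (N, <=, +, 0).
leq    = lambda x, y: x <= y
tensor = lambda x, y: x + y
unit   = 0

sample = range(6)
checks = {
    "reflexivity":   all(leq(x, x) for x in sample),
    "transitivity":  all(leq(x, z) for x, y, z in product(sample, repeat=3)
                         if leq(x, y) and leq(y, z)),
    "associativity": all(tensor(tensor(x, y), z) == tensor(x, tensor(y, z))
                         for x, y, z in product(sample, repeat=3)),
    "unit laws":     all(tensor(unit, x) == x == tensor(x, unit) for x in sample),
    "monotonicity":  all(leq(tensor(x, y), tensor(xp, yp))
                         for x, xp, y, yp in product(sample, repeat=4)
                         if leq(x, xp) and leq(y, yp)),
}
print(checks)   # every law holds on the sample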
Money is an important resource: outside of mathematics, money rules the world. We combine money by addition, and we often use these different number systems to keep track of money. In fact it was bankers who invented negative numbers, to keep track of debts! The idea of a "negative resource" was very radical: it took mathematicians over a century to get used to it.
But sometimes we combine numbers by multiplication. Can we get monoidal preorders this way?
Puzzle 60. Is the set \(\mathbb{N}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\) and the element \(1 \in \mathbb{N}\) a monoidal preorder?
Puzzle 61. Is the set \(\mathbb{R}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}\) and the element \(1 \in \mathbb{R}\) a monoidal preorder?
Puzzle 62. One of the questions above has the answer "no". What's the least destructive way to "fix" this example and get a monoidal preorder?
Puzzle 63. Find more examples of monoidal preorders.
Puzzle 64. Are there monoids that cannot be given a relation \(\le\) making them into monoidal preorders?
Puzzle 65. A monoidal poset is a monoidal preorder that is also a poset, meaning
$$ x \le y \textrm{ and } y \le x \textrm{ imply } x = y $$ for all \(x ,y \in X\). Are there monoids that cannot be given any relation \(\le\) making them into monoidal posets?
Puzzle 66. Are there posets that cannot be given any operation \(\otimes\) and element \(I\) making them into monoidal posets?
When viewing cars that are driving along side of us, sometimes their wheels appear to be turning backwards even though they are traveling in the same direction as our car. Why do they look that way?
The issue appears to be rather complex, so I do not aim at providing an exhaustive answer.
At a toy model level it is reasonable to model the eye as a "camera". Specifically, let us assume that a human eye "samples" at a maximum frequency of $\nu$, so that we may make use of the Nyquist-Shannon sampling theorem. Basically, given an instantaneous angular velocity of $\omega$, if the wheel has $n$ spacings, then the "highest frequency" component is $n\omega\over{2\pi}$ (i.e., in a full rotation, there are $n$ wheel bars passing at a given angle). Therefore, writing $\omega = {v\over r}$ with $v$ being the car speed and $r$ the wheel radius (here I am assuming pure rolling of the wheel), when $$ v > {{\pi\nu r}\over{n} } $$ we may assume that some kind of aliasing took place, i.e., I guess you would be unable to reconstruct correctly the wheel motion.
So, assuming that a typical wheel has 10 bars and a radius of about 0.3 meters and your eye samples at ~30 Hz (the typical frame rate of most first-person-shooter video games, so it may be used as an upper limit since there one has a complete illusion of movement), a rule-of-thumb calculation yields about 3 meters/second as a reasonable threshold speed for aliasing phenomena.
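For concreteness, the threshold from the inequality above can be evaluated directly (same assumed numbers):
Code:
import math

# Threshold speed for wheel-bar aliasing, v > pi * nu * r / n, with the
# values assumed above: n = 10 bars, r = 0.3 m, eye "sample rate" nu = 30 Hz.
n, r, nu = 10, 0.3, 30.0
v_threshold = math.pi * nu * r / n
print(f"aliasing possible above ~{v_threshold:.1f} m/s "
      f"(~{v_threshold * 3.6:.0f} km/h)")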
A somewhat different Newton solver (HP35s)
08-07-2016, 08:47 PM (This post was last modified: 08-07-2016 08:48 PM by Dieter.)
Post: #1
A somewhat different Newton solver (HP35s)
I think most of us will know the usual Newton method for finding the roots of a function. This requires the evaluation of the function f(x) as well as its derivative f'(x), which usually means two function calls per iteration step.
For a project where the derivative cannot be calculated fast and easily I now tried the following method that is based on a method already discussed earlier, e.g. here and there. The idea here is the simultaneous calculation of f(x) and f'(x) via complex math. On the 35s this can be done quite easily – at least within the supported function set.
Here is a sample program to illustrate the idea.
Code:
S001 LBL S
Please note: this is just to show the idea, not a fully working program. ;-)
Steps S009...S014 calculate the quotient of the real part (approx. f(x)) and the imaginary part (approx. f'(x)). The sine and cosine should not be replaced by 1/tangent since f(x)=0 will yield a real part of ±90° where the tangent is not defined (a cotangent function would be nice here...).
Example:
Code your function f(x) at LBL F, assuming the argument x in the X-register. The sample function is x³–x²–x+0,5 = 0. After XEQ S enter an initial guess for x.
Code:
[XEQ] S [ENTER] X?
So this seems to work.
What about limitations of this method? As far as I get it both f(x) as well as f'(x) are only approximate. I wonder how accurate especially f(x) is evaluated using this method. Maybe the math experts here can say more about this. ;-)
Dieter
09-23-2018, 05:28 PM (This post was last modified: 09-24-2018 01:59 AM by Thomas Klemm.)
Post: #2
RE: A somewhat different Newton solver (HP35s)
(08-07-2016 08:47 PM)Dieter Wrote: What about limitations of this method? As far as I get it both f(x) as well as f'(x) are only approximate. I wonder how accurate especially f(x) is evaluated using this method. Maybe the math experts here can say more about this. ;-)
Using the Taylor series we can separate the real and imaginary part of \(f(x+ih)\):
\(\begin{align*}
\Re[f(x+ih)]&=f(x)-\frac{f''(x)}{2!}h^2+O(h^4)\\
\Im[f(x+ih)]&=h(f'(x)-\frac{f'''(x)}{3!}h^2+O(h^4))
\end{align*}\)
Thus by choosing a small \(h\) we can make the difference as small as we want.
You can verify this by using \(h=10^{-50}\) and the \(\sin\) function:
sin(2 + 1e-50 i) = 9.09297426826e-1 - 4.16146836547e-51 i
sin(2) = 9.09297426826e-1
cos(2) = -4.16146836547e-1
Both values are exact. But for a calculator with 12 significant digits something smaller like \(10^{-10}\) might be enough. It really depends on the value of the 2nd and 3rd derivative at the root.
We could go further and try to use \(h=10^{-99}\). But then we may end up with 0.
Thus we have to choose \(h\) small enough to avoid errors but not too small.
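To make the idea concrete outside the 35s/41Z, here is a short Python sketch of the same complex-step Newton iteration: one evaluation of f(x + ih) per step gives f(x) from the real part and f'(x) from the imaginary part divided by h. The sample function is Dieter's polynomial; the function and parameter names are mine.
Code:
# Complex-step Newton iteration (sketch): one call to f per step.
def f(z):
    return z**3 - z**2 - z + 0.5          # Dieter's sample function

def newton_complex_step(f, x0, h=1e-50, tol=1e-12, max_iter=50):
    x = float(x0)
    for _ in range(max_iter):
        w = f(complex(x, h))
        fx, dfx = w.real, w.imag / h       # f(x) and f'(x) from one evaluation
        dx = fx / dfx
        x -= dx
        if abs(dx) < tol:
            break
    return x

for guess in (2.0, 0.0, -2.0):
    print(newton_complex_step(f, guess))
# -> roots near 1.451605963, 0.403031717, -0.854637680

With double precision, h = 1e-50 works fine (no underflow until about 1e-308); the trade-off discussed above only bites when the exponent range is tighter.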
Kind regards
Thomas
09-25-2018, 11:46 AM (This post was last modified: 09-25-2018 01:54 PM by Ángel Martin.)
Post: #3
RE: A somewhat different Newton solver (HP35s)
Nicely done! It piqued my curiosity, so I went ahead and jotted down the following 41Z version of the same concept. About 30 steps in this proof of concept, which expects the function name in ALPHA, h in Y and the x0 guess (a real value) in X.
Code:
01 LBL "ZNWT"
The execution shows the successive values of the root if user flag 10 is set (a la PPC-ROM).
The convergence criterion always uses 9 decimal places - irrespective of the display FIX settings.
The function is to be programmed using 41Z functions (definitely a super-set of those in the 35S), as a complex equation.
To use Dieter's same example:
Code:
01 LBL "Z1"
Results:
ALPHA, "Z1", ALPHA
0.1, ENTER^, 2, XEQ "ZNWT" -> 1.451605963
0.1, ENTER^, 0, XEQ "ZNWT" -> 0.403031717
0.1, ENTER^, -2, XEQ "ZNWT"-> -0.854637680
Supposedly more accurate results would be obtained with smaller values of h...
Cheers,
ÁM
09-26-2018, 07:56 AM
Post: #4
RE: A somewhat different Newton solver (HP35s)
You don't have to execute the function twice, since both value and derivative are returned:
Code:
01 LBL "ZNWT" ; h x
Example:
2
ENTER
1E-9
XEQ "ZNWT"
2.000000000
1.642857143
1.487473705
1.453261463
1.451609751
1.451605963
(09-25-2018 11:46 AM)Ángel Martin Wrote: The execution shows the successive values of the root if user flag 10 is set (a la PPC-ROM).
Feel free to change that to your needs: Add the check of user flag 10 and remove RND.
Quote:Supposedly more accurate results would be obtained with smaller values of h...
Usually 1e-9 will be small enough. But you might also use 1e-50.
Kind regards
Thomas
09-26-2018, 09:05 AM (This post was last modified: 09-26-2018 11:21 AM by Ángel Martin.)
Post: #5
RE: A somewhat different Newton solver (HP35s)
(09-26-2018 07:56 AM)Thomas Klemm Wrote: You don't have to execute the function twice since both, value and derivative are returned:
Brilliant Thomas, many thanks for snapping me out of my stupor to see what was before my nose!
And so glad to see there's at least one more 41Z-literate person in the world ;-)
BTW nice move choosing R02 for the function name instead of R00 - this allows using the 41Z functions on ZR00, which uses "0" as default parameter so it's not required in the program.
Really impressive - although you must be using an older revision of the module, because with the latest one (the "Deluxe" edition) the real part is stored in the lower register of the pair, i.e. ZSTO 00 stores Re(z) in R00 and Im(z) in R01.
So re-writing the code for the 41Z-Deluxe conventions (transposing R00 and R01), and starting with h in Y, x guess in X:
Code:
01 LBL "ZNWT" ; x h
17 steps, nothing can beat that ;-)
Thanks much again.
Best,
ÁM
PS. As it turns out, removing RND is not a good idea, because sometimes the oscillation in the 10th decimal place causes the X≠0? test never to come out false, so the loop does not terminate. This is a well-known behavior of the Newton method, so nothing new here...
09-26-2018, 12:49 PM
Post: #6
RE: A somewhat different Newton solver (HP35s)
(09-26-2018 09:05 AM)Ángel Martin Wrote: And so glad to see there's at least one more 41Z-literate person in the world ;-)
You made me read your excellent documentation.
And who in this forum doesn't love to read manuals?
Quote:BTW nice move choosing R02 for the function name instead of R00 - this allows using the 41Z functions on ZR00, which uses "0" as default parameter so it's not required in the program.
I've tried that but somehow it didn't work. But I must admit I didn't invest too much into it.
Quote:Really impressive - although you must be using an older revision of the module, because with the latest one (the 'Deluxe" edition) the real part is stored in the lower register of the pair, i.e. ZSTO 00 stores Re(z) in R00 and Im(z) in R01
I was using a somewhat buggy/unstable emulator my41CX on my iPhone since I didn't have access to an emulator on a computer.
When the calculator is turned off it displays for a moment: 41Z-REV: 9Z
Cheers
Thomas
09-27-2018, 12:06 PM
Post: #7
RE: A somewhat different Newton solver (HP35s)
(09-26-2018 09:05 AM)Ángel Martin Wrote:
I don't think it's a good idea to round the calculated ∆x to display precision and then adjust x by this lower-precision value. So you should first adjust x (ST– 00) and then RND the delta and do the X≠0 test. Simply swap line 12 and 13.
Dieter
09-27-2018, 12:38 PM (This post was last modified: 09-27-2018 12:40 PM by Albert Chan.)
Post: #8
RE: A somewhat different Newton solver (HP35s)
(08-07-2016 08:47 PM)Dieter Wrote: I think most of us will know the usual Newton method for finding the roots of a function. This requires the evaluation of the function f(x) as well as its derivative f'(x), which usually means two function calls per iteration step.
Does the project reach its goal ?
I know complex math can get better derivative, but is it really fast ?
I would guess complex math for f(x) and f(x + h*I) is much slower than real f(x) and f(x+h).
Just look at the effort needed to multiply 2 complex numbers ...
09-27-2018, 12:39 PM (This post was last modified: 09-27-2018 12:44 PM by Ángel Martin.)
Post: #9
RE: A somewhat different Newton solver (HP35s)
(09-27-2018 12:06 PM)Dieter Wrote: I don't think it's a good idea to round the calculated ∆x to display precision and then adjust x by this lower-precision value. So you should first adjust x (ST– 00) and then RND the delta and do the X≠0 test. Simply swap line 12 and 13.
Very true, thanks.
(09-26-2018 12:49 PM)Thomas Klemm Wrote: I was using a somewhat buggy/unstable emulator my41CX on my iPhone since I didn't have access to an emulator on a computer.
Yes, that's a very old revision, probably not even using the Library#4...
09-27-2018, 12:41 PM (This post was last modified: 09-27-2018 12:45 PM by Ángel Martin.)
Post: #10
RE: A somewhat different Newton solver (HP35s)
(09-27-2018 12:38 PM)Albert Chan Wrote: Does the project reach its goal ?
I guess it all depends on the platform. On the 41Z it definitely meets the goal - with a single execution of the function to calculate both the function *and* its derivative.
I'd think that's also the case for the 42S and even the 35S since all complex functions are also MCODE.
09-27-2018, 01:07 PM (This post was last modified: 09-27-2018 01:08 PM by Dieter.)
Post: #11
RE: A somewhat different Newton solver (HP35s)
(09-27-2018 12:38 PM)Albert Chan Wrote: Does the project reach its goal ?
A complex multiply is just four real multiplications and two additions/subtractions. That's "nothing".
While I can't say much about this method in high level languages with complex number support by the compiler, I bet that on almost any calculator this method will be faster than the classic (approximative) approach with f(x) and f(x+h):
First, there is just one single function call per iteration step. The complex result holds both f(x) and f'(x). The classic method requires two calls per iteration. This alone speeds up the calculation, even if the complex functions should be somewhat slower.
More important, I don't know any programmable pocket calculator where complex functions are returned significantly slower to the user than their real counterparts. Sure, the internal calculations are more elaborate, but all this is "low level" code. The execution time for such functions is completely negligible compared to the running speed of the calculator program, which does merely a dozen or maybe a few hundred user instructions per second. The CPU that handles the complex math "under the hood" may do thousands of low level instructions per second to calculate a function that is returned almost immediately.
So all in all the complex evaluation is not significantly slower than real math, and at the same time only half of the function calls are required. Yes, that's faster. ;-)
Dieter
09-27-2018, 10:04 PM
Post: #12
RE: A somewhat different Newton solver (HP35s)
.
Hi, Dieter:
(09-27-2018 01:07 PM)Dieter Wrote: A complex multiply is just four real multiplications, and two additions/subtractions.
It can be done with just three real multiplications (instead of 4) and five additions/subtractions (instead of two), i.e. trading one multiplication for three add/subs.
In user code (say RPN) this would probably fail to be advantageous because the time to execute a "*" is about the same as the time to execute a "+" or "-", which in both cases is dominated by overhead not related to the actual math operation.
However, if doing the multiplications, additions and subtractions as part of a microcoded keyword or function (such as the ones Ángel implements all the time) then the individual overheads are much less and it might be the case that calling (or implementing) a microcode (or assembly language) multiplication might be more expensive than doing three extra subs/adds, and thus the 3m+5as way would be faster than the 4m+2as one.
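For reference, one common arrangement of that 3-multiplication product, checked against the ordinary 4-multiplication form (just an illustration in high-level code, not microcode):
Code:
# The "3 multiplications + 5 additions/subtractions" complex product mentioned
# above (one common arrangement), checked against the ordinary 4M + 2A form.
def mul_4m(a, b, c, d):                 # (a + bi)(c + di), textbook version
    return a*c - b*d, a*d + b*c

def mul_3m(a, b, c, d):                 # same product with 3 multiplications
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2

print(mul_4m(1.5, -2.0, 0.25, 3.0))
print(mul_3m(1.5, -2.0, 0.25, 3.0))     # identical result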
Ángel probably has tested this possibility and if so he knows what's better (or if not he might try it) and will perhaps share his experience with us.
Regards.
V.
.
Find All My HP-related Materials here: Valentin Albillo's HP Collection
This is a personal interest project that has come to a dead-end. I'm looking for comments and suggestions.
Looking for interesting ways to calculate pi, I seem to have taken an approach similar to the one for Viète's formula. I've come up with this extensible expression:
$$\pi \approx {2}^{3}\cdot\sqrt{2-\sqrt{2+\sqrt{2}}}$$ $$\pi \approx {2}^{4}\cdot\sqrt{2-\sqrt{2+\sqrt{2+\sqrt{2}}}}$$ $$\pi \approx {2}^{5}\cdot\sqrt{2-\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2}}}}}$$ $$\pi \approx {2}^{6}\cdot\sqrt{2-\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2}}}}}}$$
As a programmer, I find that this form has the very interesting factor ${2}^{n}$, which may point to a possible efficient bitwise implementation, but the recursive roots are obviously not good. I would like to convert this form into a series, but this is where my mathematical knowledge has hit its limits.
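For what it's worth, the expression is easy to probe numerically. The sketch below uses my own framing of the recursion: with $s_1=\sqrt2$ and $s_{m+1}=\sqrt{2+s_m}$, the approximation is $2^{m+1}\sqrt{2-s_m}$ (m = 2 reproduces the first line above). The error shrinks by roughly a factor of 4 per extra radical, until floating-point cancellation in $2-s_m$ takes over.
Code:
import math

s, m = math.sqrt(2.0), 1
for _ in range(20):
    approx = 2 ** (m + 1) * math.sqrt(2.0 - s)   # 2^(m+1) * sqrt(2 - s_m)
    print(m + 1, approx, approx - math.pi)
    s, m = math.sqrt(2.0 + s), m + 1             # s_{m+1} = sqrt(2 + s_m)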
So, some questions:
Does this form exist already? Is it worth exploring more? If so, can you suggest methods I can research?
I see a similar form appear in this question, but it doesn't help me move forward, I believe.
The answer to this question should be obvious, but I can't seem to figure it out. Suppose we have a surface $F$, and a representation $\rho : \pi_1(F)\to SU(n)$. We can define the homology with local coefficients $H_*(F,\rho)$ straightforwardly as the homology of the twisted complex $$C_*(F,\rho):=C_*(\widetilde{F};\mathbf{Z})\otimes_{\mathbf{Z}[\pi_1(F)]} \mathbf{C}^n$$ where $\widetilde{F}$ is the universal cover, and $\mathbf{Z}[\pi_1(F)]$ acts on each side in the obvious way.
Now, this complex is actually very easy to compute explicitly: just lift a nice basis of cells in $F$ to $\widetilde{F}$, and write down the boundary maps explicitly. For example, if $F$ is a torus and we take $n=2$, say, we can choose a natural meridian-longitude basis $(x,y)$ for $H_1(F)$, and the twisted boundary map $\partial_1:C_1(F,\rho)=\mathbf{C}^4\to C_0(F,\rho)=\mathbf{C}^2$ is $$ \begin{pmatrix} \rho(x)-\mathrm{Id} \\ \rho(y)-\mathrm{Id}\end{pmatrix}$$
So, here's my question. Since $\rho$ is a unitary representation, we should get a twisted intersection form on $H_1(F)$, simply by combining the untwisted intersection form with the standard hermitian product on $\mathbf{C}^2$, right? And I would imagine this is also really easy to compute, in a similar basis, say? I can't seem to figure out how it would go. Could anyone help me, even show me how it works for the same torus example?
Or, if I've said anything wrong, tell me where?
Definition:Perfect Number
Contents
Definition
A perfect number $n$ is a (strictly) positive integer such that:
$\sigma \left({n}\right) = 2 n$
where $\sigma: \Z_{>0} \to \Z_{>0}$ is the sigma function.
Let $A \left({n}\right)$ denote the abundance of $n$.
$n$ is perfect if and only if $A \left({n}\right) = 0$.
A perfect number $n$ is a (strictly) positive integer such that:
$\dfrac {\sigma \left({n}\right)} n = 2$
where $\sigma: \Z_{>0} \to \Z_{>0}$ is the sigma function.
$6 = 2^{2 - 1} \times \left({2^2 - 1}\right)$
$28 = 2^{3 - 1} \times \left({2^3 - 1}\right)$
$496 = 2^{5 - 1} \times \left({2^5 - 1}\right)$
$8128 = 2^{7 - 1} \times \left({2^7 - 1}\right)$
$33 \, 550 \, 336 = 2^{13 - 1} \times \left({2^{13} - 1}\right)$
$8 \, 589 \, 869 \, 056 = 2^{17 - 1} \times \left({2^{17} - 1}\right)$
$6$ is a perfect number:
$1 + 2 + 3 = 6$
$28$ is a perfect number:
$1 + 2 + 4 + 7 + 14 = 28$
$496$ is a perfect number:
$1 + 2 + 4 + 8 + 16 + 31 + 62 + 124 + 248 = 496$
$8128$ is a perfect number:
$1 + 2 + 4 + 8 + 16 + 32 + 64 + 127 + 254 + 508 + 1016 + 2032 + 4064 = 8128$
Euclid's Definition
In the words of Euclid:
Also known as
Also see
Theorem of Even Perfect Numbers: An even perfect number is of the form $2^{n - 1} \paren {2^n - 1}$, where $2^n - 1$ is prime.
Results about perfect numbers can be found here.
The first $4$ perfect numbers:
$6, 28, 496, 8128$
were known to the ancient Greeks.
Nicomachus made the following conjectures:
One Perfect Number for Each Number of Digits
Last Digit of Perfect Numbers Alternates between $6$ and $8$
both of which are seen to be incorrect from the next few instances in the sequence:
$6, 28, 496, 8128, 33 \, 550 \, 336, 8 \, 589 \, 869 \, 056, \ldots$
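A quick numerical check of the definition $\sigma \left({n}\right) = 2 n$ and of the $2^{n - 1} \left({2^n - 1}\right)$ form quoted above (sketch only):
Code:
# n is perfect iff sigma(n) = 2n, where sigma sums all positive divisors of n.
def sigma(n):
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d + (n // d if d * d != n else 0)
        d += 1
    return total

print([n for n in range(2, 10000) if sigma(n) == 2 * n])   # [6, 28, 496, 8128]

# Even perfect numbers have the form 2^(p-1) * (2^p - 1) with 2^p - 1 prime:
for p in (2, 3, 5, 7, 13, 17):
    print(p, 2 ** (p - 1) * (2 ** p - 1))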
Definition:Naturally Ordered Semigroup/Axioms
Definition
A naturally ordered semigroup is a (totally) ordered commutative semigroup $\left({S, \circ, \preceq}\right)$ satisfying:
$(NO 1)$: $S$ is well-ordered by $\preceq$: $\forall T \subseteq S: T = \varnothing \lor \exists m \in T: \forall n \in T: m \preceq n$
$(NO 2)$: $\circ$ is cancellable in $S$: $\forall m, n, p \in S: m \circ p = n \circ p \implies m = n$ and $p \circ m = p \circ n \implies m = n$
$(NO 3)$: Existence of product: $\forall m, n \in S: m \preceq n \implies \exists p \in S: m \circ p = n$
$(NO 4)$: $S$ has at least two distinct elements: $\exists m, n \in S: m \ne n$
Equivalence of Definitions of Normal Subset/3 and 4 imply 2
Let $\left({G,\circ}\right)$ be a group.
Let $S \subseteq G$ be a normal subset of $G$ by Definition 3 and by Definition 4.
Then $S$ is a normal subset of $G$ by Definition 2.
Proof
By Equivalence of Definitions of Normal Subset: 3 iff 4, $S$ being a normal subset of $G$ by Definition 3 and Definition 4 implies that the following hold:
$(1)\quad \forall g \in G: g \circ S \circ g^{-1} \subseteq S$
$(2)\quad \forall g \in G: g^{-1} \circ S \circ g \subseteq S$
$(3)\quad \forall g \in G: S \subseteq g \circ S \circ g^{-1}$
$(4)\quad \forall g \in G: S \subseteq g^{-1} \circ S \circ g$
By $(1)$ and $(3)$ and definition of set equality:
$\forall g \in G: g \circ S \circ g^{-1} = S$
By $(2)$ and $(4)$ and definition of set equality:
$\forall g \in G: g^{-1} \circ S \circ g = S$
$\blacksquare$
In the Quantum Operations section in Nielsen and Chuang, (page 358 in the 2002 edition), they have the following equation: $$\varepsilon(\rho) = tr_{env} [U(\rho \otimes \rho_{env})U^\dagger]$$
They show an example where
$$\rho_{env} = |0\rangle \langle0|$$ $$U = CNOT$$
And they claim the final solution is $P_0\rho P_0 + P_1\rho P_1$, where $P_0$ is $|0\rangle \langle0|$ and $P_1$ is $|1\rangle \langle 1|$.
These are my steps so far to get this, but I don't know how to trace out environment after this:
Let $\rho$ be $|\psi \rangle \langle \psi |$
So, $\rho \otimes \rho_{env} = |\psi 0\rangle \langle \psi 0|$
After applying the unitary,
$ |00 \rangle \langle 00| \psi 0 \rangle \langle \psi 0 | 00 \rangle \langle 00 |$
$ + |00 \rangle \langle 00| \psi 0 \rangle \langle \psi 0 | 10 \rangle \langle 11 | $
$+ |11 \rangle \langle 10| \psi 0 \rangle \langle \psi 0 | 00 \rangle \langle 00 |$
$ + |11 \rangle \langle 10| \psi 0 \rangle \langle \psi 0 | 10 \rangle \langle 11 |$
I don't know how to trace out environment for the above state.
Also, I realize that I have considered only a pure state; if anyone can show it for a general state, that would be great.
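Here is a numerical sanity check (not a derivation) for a general mixed $\rho$: build $U(\rho\otimes|0\rangle\langle0|)U^\dagger$ with the system qubit as the CNOT control, trace out the second tensor factor, and compare with $P_0\rho P_0+P_1\rho P_1$. Variable names and the random test state are mine.
Code:
import numpy as np

# Check eps(rho) = tr_env[ CNOT (rho (x) |0><0|) CNOT^dagger ] = P0 rho P0 + P1 rho P1
# for a random mixed single-qubit rho (system qubit = control, environment = target).
rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = M @ M.conj().T
rho /= np.trace(rho)                             # random density matrix

P0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
P1 = np.array([[0, 0], [0, 1]], dtype=complex)   # |1><1|
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

big = CNOT @ np.kron(rho, P0) @ CNOT.conj().T    # U (rho (x) |0><0|) U^dagger

# Partial trace over the environment (the second tensor factor):
eps_rho = big.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.allclose(eps_rho, P0 @ rho @ P0 + P1 @ rho @ P1))   # True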
Differential and Integral Equations, Volume 16, Number 6 (2003), 757-768.
Positive solutions for classes of $p$-Laplacian equations
Abstract
We study positive $C^1(\bar{\Omega})$ solutions to classes of boundary value problems of the form \begin{eqnarray*} -\Delta_{p} u & = & g(\lambda,u)\mbox{ in } \Omega \\ u & = & 0 \mbox{ on } \partial \Omega, \end{eqnarray*} where $ \Delta_{p} $ denotes the p-Laplacian operator defined by $$ \Delta_{p} z:= \mbox{div}(|\nabla z|^{p-2}\nabla z);\, p > 1, $$ where $ \lambda > 0$ is a parameter and $ \Omega $ is a bounded domain in $ R^{N} $; $ N \geq 2 $ with $\partial \Omega$ of class $ C^{2}$ and connected. (If $N=1$, we assume that $\Omega$ is a bounded open interval.) In particular, we establish existence and multiplicity results for classes of nondecreasing, p-sublinear functions $g(\lambda,\cdot)$ belonging to $C^1([0,\infty))$. Our results also extend to classes of p-Laplacian systems. Our proofs are based on comparison methods.
Article information
Source: Differential Integral Equations, Volume 16, Number 6 (2003), 757-768.
Dates: First available in Project Euclid: 21 December 2012
Permanent link to this document: https://projecteuclid.org/euclid.die/1356060611
Mathematical Reviews number (MathSciNet): MR1973279
Zentralblatt MATH identifier: 1030.35054
Citation
Chhetri, Maya; Oruganti, Shobha; Shivaji, R. Positive solutions for classes of $p$-Laplacian equations. Differential Integral Equations 16 (2003), no. 6, 757--768. https://projecteuclid.org/euclid.die/1356060611
I think you need to understand at what point the magnetic field needs to begin collapsing in order to prevent the projectile from decelerating and hence losing built-up momentum. I have done a simulation of a 44 mm long, 10 mm radius air-cored solenoid to enable the concepts to be seen more clearly. I have made it this shape just for my convenience in answering, and I have no idea if this shape is useful in the real thing. Below is a picture of the flux density inside and outside the solenoid along with the gradient of said flux density: -
Regard the blue trace - flux density (for a given current) rises as you approach either side of the solenoid. At 22 mm either side of the centre line is the aperture of the solenoid. As you enter the aperture it can be seen that the flux density rises to an almost constant value - by this I mean that if the solenoid were really quite long then maximum flux density would be constant for most of the central length of the solenoid. This is important to realize because when the gradient of flux density is zero there can be no mechanical force exerted (such as in the centre).
So, it's important to look at the gradient (red line). If the projectile enters from the right, the force that accelerates it is maximum as it enters the solenoid (for a typically small projectile). After it has entered it will still accelerate but to a lesser degree until it reaches the solenoid centre. At this point it will start to decelerate (not wanted).
The upshot here is that you need to start turning the mag field off somewhere between the projectile entering and reaching dead centre. This is impossible to do instantaneously because of the solenoid's inductance but given that you can measure the inductance at least you can begin to calculate when to start the process (noting that any magnetic field remaining as the projectile passes dead centre is going to slow it down).
Here are a few extra formula and observations that might help.
The force exerted by a solenoid on a ferromagnetic object is: -
Force = \$\dfrac{(N.I)^2\mu_0.A}{2g^2}\$
Where N is number of turns, A is cross sectional area of solenoid, I is the applied currents, g is the gap from the end of the solenoid to the object and \$\mu_0\$ is \$4\pi\times 10^{-7}\$.
Also, for a solenoid, inductance can be calculated thus: -
Inductance = \$\dfrac{\mu_0.N^2.A}{l}\$
Where \$\mu_0\$, N and A are as before and \$l\$ is length of solenoid.
So, making the inductance bigger makes the force bigger and, to keep turns as low as possible, you want the length of the solenoid to be as short as possible.
However, with a bigger inductance it takes longer for the current to ramp up to maximum, and this will reduce the effectiveness of a coil-gun due to not being able to accelerate the bullet to sufficient speed.
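To get a feel for the two formulas, here is a small calculation with illustrative, made-up numbers (only the coil dimensions are borrowed from the simulated solenoid above):
Code:
import math

# Plugging made-up example values into the force and inductance formulas above.
mu0 = 4 * math.pi * 1e-7       # H/m
N = 200                        # turns (assumed)
I = 10.0                       # applied current, A (assumed)
r = 0.01                       # solenoid radius, m (10 mm, as above)
A = math.pi * r ** 2           # cross-sectional area, m^2
length = 0.044                 # solenoid length, m (44 mm, as above)
g = 0.005                      # gap to the projectile, m (assumed)

force = (N * I) ** 2 * mu0 * A / (2 * g ** 2)
inductance = mu0 * N ** 2 * A / length

print(f"pull force ~ {force:.1f} N")
print(f"inductance ~ {inductance * 1e3:.2f} mH")
print(f"L/R time constant ~ {inductance / 0.5 * 1e3:.2f} ms (assuming a 0.5 ohm coil)")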
What does it mean to take the gradient of a vector field? $\nabla \vec{v}(x,y,z)$? I only understand what it means to take the grad of a scalar field.
The gradient of a vector is a tensor which tells us how the vector field changes in any direction. We can represent the gradient of a vector by a matrix of its components with respect to a basis. The $(\nabla V)_{ij}$ component tells us the change of the $V_j$ component in the $\pmb{e}_i$ direction (maybe I have that backwards). You can check out the Wikipedia article for the details of calculating the components.
To get a physical picture of its meaning we can decompose it into 1) the trace (the divergence) 2) an anti-symmetric tensor (the curl) 3) a traceless symmetric tensor (the shear)
If the vector field represents the flow of material, then we can examine a small cube of material about a point. The divergence describes how the cube changes volume. The curl describes the shape and volume preserving rotation of the fluid. The shear describes the volume-preserving deformation.
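A small numerical illustration of this decomposition, for a made-up field $\vec v(x,y,z) = (yz,\ x^2,\ x+z)$, with the convention that row $i$ of the matrix holds the derivatives in the $\pmb{e}_i$ direction:
Code:
import numpy as np

# Decomposition of grad(v) for the made-up field v(x, y, z) = (y*z, x**2, x + z).
def v(p):
    x, y, z = p
    return np.array([y * z, x ** 2, x + z])

def grad(v, p, h=1e-6):
    """J[i, j] = d v_j / d x_i  (row i = derivatives in the e_i direction)."""
    J = np.zeros((3, 3))
    for i in range(3):
        dp = np.zeros(3)
        dp[i] = h
        J[i] = (v(p + dp) - v(p - dp)) / (2 * h)
    return J

p = np.array([1.0, 2.0, 3.0])
J = grad(v, p)

div = np.trace(J)                               # the divergence
shear = 0.5 * (J + J.T) - div / 3 * np.eye(3)   # traceless symmetric part
curl = np.array([J[1, 2] - J[2, 1],             # antisymmetric part, repackaged
                 J[2, 0] - J[0, 2],             # as the familiar curl vector
                 J[0, 1] - J[1, 0]])

print(np.round(J, 6))
print("div  =", round(div, 6))                  # 1.0 for this field at p
print("curl =", np.round(curl, 6))              # (0, 1, -1) for this field at p
print("shear =\n", np.round(shear, 6))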
The gradient of a vector field (or of a multi-valued function $f: \mathbb{R}^m\to \mathbb{R}^n$) is the Jacobian of the multi-valued function $f$, where each row $r_i$ of $\operatorname{Jacobian}(f)$ represents the gradient of $f_i$ (remember, each component $f_i$ of the multi-valued function $f$ is a scalar).
It depends on how you define the gradient operator. In geometric calculus, we have the identity $\nabla A = \nabla \cdot A + \nabla \wedge A$, where $A$ is a multivector field. A vector field is a specific type of multivector field, so this same formula works for $\vec v(x,y,z)$ as well.
So we get $\nabla\vec v = \nabla \cdot \vec v + \nabla \wedge \vec v$. The first term should be familiar to you -- it's just the regular old divergence. However the second term is a different type of object entirely (actually, it's a generalization of the familiar $3$D curl $\nabla \times \vec u$ that works in any dimension).
In the same way that a vector field can be thought of as associating with every point in your domain an oriented line segment (a vector), $\nabla \wedge \vec v$ associates with every point in your domain an oriented plane segment (which we call a bivector). So $\nabla \wedge \vec v$ is called a bivector field.
So to answer your question, the gradient of a vector field is the sum of a scalar field and a bivector field.
Assume the vector field $\vec{\bf F} = (F_1, F_2, F_3)$ exists in a 3D space with basis $x_1, x_2, x_3$; then its gradient is the $3 \times 3$ matrix with entries $\partial_i F_j$:
$\nabla\vec{\bf F}=\left( {\begin{array}{ccc} \frac{\partial F_1}{\partial x_1}&\frac{\partial F_2}{\partial x_1}&\frac{\partial F_3}{\partial x_1}\\ \frac{\partial F_1}{\partial x_2}&\frac{\partial F_2}{\partial x_2}&\frac{\partial F_3}{\partial x_2}\\ \frac{\partial F_1}{\partial x_3}&\frac{\partial F_2}{\partial x_3}&\frac{\partial F_3}{\partial x_3}\\ \end{array}} \right).$
That is, each column is a "usual" gradient of the corresponding scalar component function.
The divergence part of the gradient of a vector field is intuitively the flux per unit volume leaving the differential volume $dV$. Visualise it in 2D first. Suppose you have a vector field $E$ in 2D. Now if you plot the field lines of $E$ and take a particular area (a small area), the divergence of $E$ is the net field lines, that is, field lines coming out of the area minus field lines going into the area. Similarly in 3D, divergence is a measure of (field lines going out - field lines coming in). If you implement this mathematically you see you get 3 partial-derivative terms added, which essentially add up the total net field lines.
For a scalar field (say $F(x,y,z)$) it represents the rate of change of $F$ along the 3 perpendicular (also called orthonormal) vectors you defined your system with (say $x, y, z$).
How can I show that the universal cover of $SO(n)$, for $n\ge 3$, is a double cover? And how does that reflect the fact that the fundamental group of $SO(n)$ has two elements? What is the relation between the fundamental group of a topological space and its universal cover? How come that $SU(2)$ is simply connected but $SO(3)$ is not? Thank you!
Adapted from this answer.
The fact that the sphere $SU(2)$ is a twofold cover of $SO(3)$ can be seen by viewing $SU(2)$ as the group of unit quaternions, which acts by conjugation on the real $3$-dimensional space of purely imaginary quaternions as explained here; the action can be seen to be by elements of $SO(3)$, and two unit quaternions that have the same action differ by a factor $-1$ (call these antipodes of each other).
This fact, together with the fact that $SU(2)$ is connected, shows that $SO(3)$ is not simply connected. Indeed, one can take a path from a unit quaternion to its antipode, and map this path to $SO(3)$ (take the rotation action defined by each unit quaternion on the path), where it becomes a loop (in $SO(3)$ its starting and ending point are identified). This loop cannot be contracted in $SO(3)$: if it could, we could perform the corresponding deformation to the path in $SU(2)$ as well, contracting it to a point while keeping the endpoints antipodes of each other all the time, which is absurd.
And $SU(2)$ is simply connected because the set of unit quaternions is homeomorphic to the $3$-sphere $\{\,(a,b,c,d)\in\mathbf R^4\mid a^2+b^2+c^2+d^2=1\,\}$ (the $n$-sphere is simply connected for all $n>1$). Therefore, forming a new loop in $SO(3)$ by going around the one indicated above twice, so that the result lifts to a loop in $SU(2)$, the new loop can be contracted in $SO(3)$ (just contract the loop "covering" it in $SU(2)$ to a point, and project that deformation back to $SO(3)$). One can conclude from this that the fundamental group of $SO(3)$ has two elements.
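A quick numerical illustration of the two-to-one covering described above: a unit quaternion $q = a + bi + cj + dk$ and its antipode $-q$ give the same rotation matrix (the conversion formula below is the standard one for the conjugation action on imaginary quaternions):
Code:
import numpy as np

# q = (a, b, c, d) acts on imaginary quaternions by conjugation; q and -q give
# the same rotation matrix, which is how the 2-to-1 map SU(2) -> SO(3) arises.
def rotation_matrix(q):
    a, b, c, d = q
    return np.array([
        [1 - 2*(c*c + d*d), 2*(b*c - a*d),     2*(b*d + a*c)],
        [2*(b*c + a*d),     1 - 2*(b*b + d*d), 2*(c*d - a*b)],
        [2*(b*d - a*c),     2*(c*d + a*b),     1 - 2*(b*b + c*c)],
    ])

rng = np.random.default_rng(0)
q = rng.normal(size=4)
q /= np.linalg.norm(q)                                  # a random unit quaternion

R = rotation_matrix(q)
print(np.allclose(R, rotation_matrix(-q)))              # True: q and -q coincide
print(np.allclose(R @ R.T, np.eye(3)), np.linalg.det(R))  # orthogonal, det = +1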
For $SO(n)$ things are more complicated to describe explicitly.
There are advanced techniques that I am not familiar with (e.g. using fiber bundles, differential geometry) to show that $\pi_1(SO(n)) = \pi_1(SO(3))$ for all $n > 3$. Now $SO(3)$ is homeomorphic to $\Bbb{R}P^3$, which by the van Kampen theorem has $\pi_1$ isomorphic to that of $\Bbb{R}P^2$. This of course is known to have fundamental group isomorphic to $\Bbb{Z}/2\Bbb{Z}$.
What are the usual assumptions for linear regression?
Do they include:
a linear relationship between the independent and dependent variable independent errors normal distribution of errors homoscedasticity
Are there any others?
The answer depends heavily on how you define complete and usual. Suppose we write the linear regression model in the following way:$ \newcommand{\x}{\mathbf{x}} \newcommand{\bet}{\boldsymbol\beta} \DeclareMathOperator{\E}{\mathbb{E}} \DeclareMathOperator{\Var}{Var} \DeclareMathOperator{\Cov}{Cov} \DeclareMathOperator{\Tr}{Tr} $
$$y_i = \x_i'\bet + u_i$$
where $\mathbf{x}_i$ is the vector of predictor variables, $\bet$ is the parameter of interest, $y_i$ is the response variable, and $u_i$ are the disturbances. One of the possible estimates of $\bet$ is the least squares estimate: $$ \hat\bet = \textrm{argmin}_{\bet}\sum(y_i-\x_i'\bet)^2 = \left(\sum \x_i \x_i'\right)^{-1} \sum \x_i y_i .$$
Now practically all of the textbooks deal with the assumptions when this estimate $\hat\bet$ has desirable properties, such as unbiasedness, consistency, efficiency, some distributional properties, etc.
Each of these properties requires certain assumptions, which are not the same. So the better question would be to ask which assumptions are needed for wanted properties of the LS estimate.
The properties I mention above require some probability model for regression. And here we have the situation where different models are used in different applied fields.
The simple case is to treat $y_i$ as independent random variables, with $\x_i$ being non-random. I do not like the word usual, but we can say that this is the usual case in most applied fields (as far as I know).
Here is the list of some of the desirable properties of statistical estimates:
Existence
Existence property might seem weird, but it is very important. In the definition of $\hat\beta$ we invert the matrix $\sum \x_i \x_i'.$
It is not guaranteed that the inverse of this matrix exists for all possible variants of $\x_i$. So we immediately get our first assumption:
Matrix $\sum \x_i \x_i'$ should be of full rank, i.e. invertible.
Unbiasedness
We have $$ \E\hat\bet = \left(\sum \x_i \x_i' \right)^{-1}\left(\sum \x_i \E y_i \right) = \bet, $$ if $$\E y_i = \x_i' \bet.$$
We may count this as the second assumption, but we could have stated it outright, since this is one of the natural ways to define a linear relationship.
Note that to get unbiasedness we only require that $\E y_i = \x_i' \bet$ for all $i$, and that the $\x_i$ are constants. Independence is not required.
Consistency
To get the assumptions for consistency we need to state more clearly what we mean by $\to$. For sequences of random variables we have different modes of convergence: in probability, almost surely, in distribution and in the $p$-th moment sense. Suppose we want to get convergence in probability. We can use either the law of large numbers, or directly the multivariate Chebyshev inequality (employing the fact that $\E \hat\bet = \bet$):
$$\Pr(\lVert \hat\bet - \bet \rVert >\varepsilon)\le \frac{\Tr(\Var(\hat\bet))}{\varepsilon^2}.$$
(This variant of the inequality comes directly from applying Markov's inequality to $\lVert \hat\bet - \bet\rVert^2$, noting that $\E \lVert \hat\bet - \bet\rVert^2 = \Tr \Var(\hat\bet)$.)
Since convergence in probability means that the left hand term must vanish for any $\varepsilon>0$ as $n\to\infty$, we need that $\Var(\hat\bet)\to 0$ as $n\to\infty$. This is perfectly reasonable since with more data the precision with which we estimate $\bet$ should increase.
We have that $$ \Var(\hat\bet) =\left( \sum \x_i \x_i' \right)^{-1} \left( \sum_i \sum_j \x_i \x_j' \Cov(y_i, y_j) \right) \left(\sum \mathbf{x}_i\mathbf{x}_i'\right)^{-1}.$$
Independence ensures that $\Cov(y_i, y_j) = 0$, hence the expression simplifies to $$ \Var(\hat\bet) = \left( \sum \x_i \x_i' \right)^{-1} \left( \sum_i \x_i \x_i' \Var(y_i) \right) \left( \sum \x_i \x_i' \right)^{-1} .$$
Now assume $\Var(y_i) = \text{const}$, then $$ \Var(\hat\beta) = \left(\sum \x_i \x_i' \right)^{-1} \Var(y_i) .$$
Now if we additionally require that the smallest eigenvalue of $\frac{1}{n} \sum \x_i \x_i'$ stays bounded away from zero, we immediately get $$\Var(\hat\bet) \to 0 \text{ as } n \to \infty.$$
So to get the consistency we assumed that there is no autocorrelation ($\Cov(y_i, y_j) = 0$), the variance $\Var(y_i)$ is constant, and the $\x_i$ do not grow too much. The first assumption is satisfied if $y_i$ comes from independent samples.
Efficiency
The classic result is the Gauss-Markov theorem. The conditions for it are exactly the first two conditions for consistency and the condition for unbiasedness.
Distributional properties
If $y_i$ are normal we immediately get that $\hat\bet$ is normal, since it is a linear combination of normal random variables. If we assume previous assumptions of independence, uncorrelatedness and constant variance we get that $$ \hat\bet \sim \mathcal{N}\left(\bet, \sigma^2\left(\sum \x_i \x_i' \right)^{-1} \right)$$ where $\Var(y_i)=\sigma^2$.
If the $y_i$ are not normal, but independent, we can get an approximate distribution of $\hat\bet$ thanks to the central limit theorem. For this we need to assume that $$\frac{1}{n} \sum \x_i \x_i' \to A$$ for some matrix $A$ as $n \to \infty$. Constant variance is not required for asymptotic normality if we assume that $$\frac{1}{n} \sum \x_i \x_i' \Var(y_i) \to B.$$
Note that with constant variance of $y$, we have that $B = \sigma^2 A$. The central limit theorem then gives us the following result:
$$\sqrt{n}(\hat\bet - \bet) \to \mathcal{N}\left(0, A^{-1} B A^{-1} \right).$$
So from this we see that independence and constant variance for $y_i$ and certain assumptions for $\mathbf{x}_i$ gives us a lot of useful properties for LS estimate $\hat\bet$.
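To illustrate how the limit matrices $A$ and $B$ are used in practice, here is a small sketch (my own, not part of the answer) that plugs sample analogues into the sandwich formula $A^{-1} B A^{-1}$, using squared residuals as a stand-in for $\Var(y_i)$ (the usual heteroskedasticity-robust, HC0-style plug-in):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 2
X = rng.normal(size=(n, p))
beta = np.array([2.0, -1.0])
y = X @ beta + rng.normal(size=n) * (0.5 + np.abs(X[:, 0]))   # heteroskedastic errors

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta_hat

A_hat = (X.T @ X) / n                           # sample analogue of A
B_hat = (X.T @ (X * resid[:, None]**2)) / n     # sample analogue of B, residuals^2 in place of Var(y_i)
A_inv = np.linalg.inv(A_hat)
sandwich = A_inv @ B_hat @ A_inv                # asymptotic covariance of sqrt(n)(beta_hat - beta)

se = np.sqrt(np.diag(sandwich) / n)             # robust standard errors for beta_hat
print(beta_hat, se)
```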
The thing is that these assumptions can be relaxed. For example, we required that the $\x_i$ are not random variables. This assumption is not realistic in econometric applications. If we let $\x_i$ be random, we can get similar results if we use conditional expectations and take into account the randomness of $\x_i$. The independence assumption can also be relaxed. We already demonstrated that sometimes only uncorrelatedness is needed. Even this can be further relaxed and it is still possible to show that the LS estimate will be consistent and asymptotically normal. See for example White's book for more details.
There are a number of good answers here. It occurs to me that there is one assumption that has not been stated however (at least not explicitly). Specifically, a regression model assumes that $\mathbf X$ (the values of your explanatory / predictor variables) is
fixed and known, and that all of the uncertainty in the situation exists within the $Y$ variable. In addition, this uncertainty is assumed to be sampling error only.
Here are two ways to think about this: If you are building an explanatory model (modeling experimental results), you know exactly what the levels of the independent variables are, because
you manipulated / administered them. Moreover, you decided what those levels would be before you ever started gathering data. So you are conceptualizing all of the uncertainty in the relationship as existing within the response. On the other hand, if you are building a predictive model, it is true that the situation differs, but you still treat the predictors as though they were fixed and known, because, in the future, when you use the model to make a prediction about the likely value of $y$, you will have a vector, $\mathbf x$, and the model is designed to treat those values as though they are correct. That is, you will be conceiving of the uncertainty as being the unknown value of $y$.
These assumptions can be seen in the equation for a prototypical regression model: $$ y_i = \beta_0 + \beta_1x_i + \varepsilon_i $$ A model with uncertainty (perhaps due to measurement error) in $x$ as well might have the same data generating process, but the model that's estimated would look like this: $$ y_i = \hat\beta_0 + \hat\beta_1(x_i + \eta_i) + \hat\varepsilon_i, $$ where $\eta$ represents random measurement error. (Situations like the latter have led to work on errors in variables models; a basic result is that if there is measurement error in $x$, the naive $\hat\beta_1$ would be attenuated (closer to 0 than its true value), and that if there is measurement error in $y$, statistical tests of the $\hat\beta$'s would be underpowered, but otherwise unbiased.)
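A small simulation (my own sketch, not part of the answer) makes the attenuation effect concrete: measurement error in $x$ shrinks the estimated slope toward 0, while measurement error in $y$ leaves the slope roughly unbiased but noisier:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta1 = 5000, 2.0
x = rng.normal(size=n)
y = 1.0 + beta1 * x + rng.normal(size=n)

def slope(xv, yv):
    # simple-regression slope: cov(x, y) / var(x)
    return np.cov(xv, yv)[0, 1] / np.var(xv, ddof=1)

x_noisy = x + rng.normal(size=n)   # measurement error in x (variance 1)
y_noisy = y + rng.normal(size=n)   # measurement error in y (variance 1)

print(slope(x, y))         # close to 2.0
print(slope(x_noisy, y))   # attenuated: roughly 2.0 * var(x) / (var(x) + 1) = 1.0
print(slope(x, y_noisy))   # still close to 2.0, just estimated less precisely
```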
One practical consequence of the asymmetry intrinsic in the typical assumption is that regressing $y$ on $x$ is different from regressing $x$ on $y$. (See my answer here: What is the difference between doing linear regression on y with x versus x with y? for a more detailed discussion of this fact.)
The assumptions of the classical linear regression model include:
Although the answers here already provide a good overview of the classical OLS assumptions, you can find a more comprehensive description of the assumptions of the classical linear regression model here:
In addition, the article describes the consequences in case one violates certain assumptions.
What gives?!
An answer is that somewhat
different sets of assumptions can be used to justify the use of ordinary least squares (OLS) estimation. OLS is a tool like a hammer: you can use a hammer on nails but you can also use it on pegs, to break apart ice, etc...
Two broad categories of assumptions are those that apply to small samples and those that rely on large samples so that the central limit theorem can be applied.
Small sample assumptions as discussed in Hayashi (2000) are:
Under (1)-(4), the Gauss-Markov theorem applies, and the ordinary least squares estimator is the best linear unbiased estimator.
Further assuming normal error terms allows hypothesis testing. If the error terms are conditionally normal, the distribution of the OLS estimator is also conditionally normal.
Another noteworthy point is that with normality, the OLS estimator is also the maximum likelihood estimator.
These assumptions can be modified/relaxed if we have a large enough sample so that we can lean on the law of large numbers (for consistency of the OLS estimator) and the central limit theorem (so that the sampling distribution of the OLS estimator converges to the normal distribution and we can do hypothesis testing, talk about p-values etc...).
Hayashi is a macroeconomics guy and his large sample assumptions are formulated with the time series context in mind:
You may encounter stronger versions of these assumptions, for example, that error terms are independent.
Proper large sample assumptions get you to a sampling distribution of the OLS estimator that is asymptotically normal.
Hayashi, Fumio, 2000, Econometrics, Princeton University Press.
It's all about what you want to do with your model. Imagine if your errors were positively skewed/non-normal. If you wanted to make a prediction interval, you could do better than using the t-distribution. If your variance is smaller at smaller predicted values, again, you'd be making a prediction interval that's too big.
It's better to understand why the assumptions are there.
The following diagrams show which assumptions are required to get which implications in the finite and asymptotic scenarios.
I think it's important to think about not only what the assumptions are, but what the implications of those assumptions are. For example, if you only care about having unbiased coefficients, then you don't need homoskedasticity.
The following are the assumptions of Linear Regression analysis.
- Correct specification: the linear functional form is correctly specified.
- Strict exogeneity: the errors in the regression have conditional mean zero.
- No multicollinearity: the regressors in $X$ must all be linearly independent.
- Homoscedasticity: the error term has the same variance in each observation.
- No autocorrelation: the errors are uncorrelated between observations.
- Normality: it is sometimes additionally assumed that the errors have a normal distribution conditional on the regressors.
- I.i.d. observations: $(x_i, y_i)$ is independent from, and has the same distribution as, $(x_j, y_j)$ for all $i\neq j$.
For more information visit this page.
There is no such thing as a single list of assumptions; there will be at least two: one for a fixed and one for a random design matrix. Plus you may want to look at the assumptions for time series regressions (see p. 13).
The case when the design matrix $X$ is
fixed could be the most common one, and its assumptions are often expressed as a Gauss-Markov theorem. The fixed design means that you truly control the regressors. For instance, you conduct an experiment and can set the parameters such as temperature, pressure etc. See also p.13 here.
Unfortunately, in social sciences such as economics you can rarely control the parameters of the experiment. Usually, you
observe what happens in the economy, record the environment metrics, then regress on them. It turns out that it's a very different and more difficult situation, called a random design. In this case the Gauss-Markov theorem is modified; see also p. 12 here. You can see how the conditions are now expressed in terms of conditional probabilities, which is not an innocuous change.
In econometrics the assumptions have names:
Notice that I never mentioned normality. It's not a standard assumption. It's often used in intro regression courses because it makes some derivations easier, but it's not required for regression to work and have nice properties.
The assumption of linearity is that the model is linear in the parameters. It is fine to have a regression model with quadratic or higher order effects as long as the power function of the independent variable is part of a linear additive model. If the model does not contain higher order terms when it should, then the lack of fit will be evident in the plot of the residuals. However, standard regression models do not incorporate models in which the independent variable is raised to the power of a parameter (although there are other approaches that can be used to evaluate such models). Such models contain non-linear parameters.
The least squares regression coefficient provides a way to summarize the first order trend in any kind of data. @mpiktas's answer is a thorough treatment of the conditions under which least squares is increasingly optimal. I'd like to go the other way and show the most general case when least squares works. Let's look at the most general formulation of the least-squares equation:
$$E[Y|X] = \alpha + \beta X$$
It's just a linear model for the conditional mean of the response.
Note I've left out the error term. If you'd like to summarize the uncertainty of $\beta$, then you must appeal to the central limit theorem. The most general class of least squares estimators converges to normal when the Lindeberg condition is met: boiled down, the Lindeberg condition for least squares requires that the ratio of the largest squared residual to the sum of squared residuals must go to 0 as $n \rightarrow \infty$. If your design will keep sampling larger and larger residuals, then the experiment is "dead in the water".
When the Lindeberg condition is met, the regression parameter $\beta$ is well defined, and the estimator $\hat{\beta}$ is an unbiased estimator that has a known approximating distribution. More efficient estimators may exist. In other cases of heteroscedasticity, or correlated data, usually a weighted estimator is
more efficient. That's why I would never advocate using the naïve methods when better ones are available. But they often are not! |
I have that $\mathbf{x}_{i}=(x_{i1},\ldots,x_{ip})' \sim N_{p}(0,V)$ and I'm interested in the variance of:
$$ S = \sum_{i=1}^{n} \mathbf{x}_{i}\mathbf{x}_{i}' $$
for the case when the vectors are correlated. In my case: $\operatorname{Cov}(\mathbf{x}_{i},\mathbf{x}_{j})= - \frac{V}{n-1}$. I know that:
$$ \operatorname{Var}(\mbox{vec} \ \mathbf{x}_{i}\mathbf{x}_{i}' )=\operatorname{ Var}(\mathbf{x}_{i} \otimes \mathbf{x}_{i}) = (I-K_{p})(V \otimes V)$$
and
$$ \operatorname{Cov}(\mbox{vec} \ \mathbf{x}_{i}\mathbf{x}_{i}', \mbox{vec} \ \mathbf{x}_{j}\mathbf{x}_{j}') = \operatorname{Var}(\mathbf{x}_{i} \otimes \mathbf{x}_{j}' ) = V \otimes V + K_{p}(\operatorname{Cov}(\mathbf{x}_{i},\mathbf{x}_{j}) \otimes \operatorname{Cov}(\mathbf{x}_{i},\mathbf{x}_{j}))$$
where $K_{p}$ is the commutation matrix. Both results are by Magnus and Neudecker (1979). I know that in the case of independence, $\operatorname{Cov}(\mathbf{x}_{i},\mathbf{x}_{j}) = 0$ (well-known result):
$$ \operatorname{Var}(S) = n(I-K_{p})(V \otimes V) $$
Moreover, $S$ is Wishart distributed. But the extension for non-independent vectors doesn't seem to be right to me:
$$ \operatorname{Var}(S) = \sum_{i=1}^{n} \operatorname{Var}(\mathbf{x}_{i} \otimes \mathbf{x}_{i}) + \sum_{i=1}^{n} \sum_{j \neq i} \operatorname{Var}(\mathbf{x}_{i} \otimes \mathbf{x}_{j})$$
Can you help me with the last expression? (it's maybe wrong). |
A morphism $h$ in a category is an epimorphism if it is right-cancellative, i.e. for all morphisms $f$, $g$ in the category, $f\circ h=g\circ h$ implies $f=g$.
A function $h:A\to B$ is onto (or surjective) if $B=h[A]=\{h(a): a\in A\}$, i.e., for all $b\in B$ there exists $a\in A$ such that $h(a)=b$.
Epimorphisms are surjective in a (concrete) category of structures if the underlying function of every epimorphism is surjective. |
In the book Conformal Field Theory (authors: Philippe Di Francesco, Pierre Mathieu, David Senechal), a field $f(z)$ is primary if it transforms as $$f(z) \rightarrow g(\omega)=\left( \frac{d\omega}{dz}\right)^{-h}f(z)$$ under an infinitesimal conformal transformation $z \rightarrow \omega(z)$. The physical meaning of this definition is unclear to me. For instance, what's the difference between a primary field and a secondary field? And what's the significance of the fact that the energy-momentum tensor is not a primary field?
In fact, primary fields (operators) are sometimes also called tensor fields (operators). The name is justified because they transform in the same way that a tensor transforms under a coordinate transformation.
To see that, we look at the transformation rule of primary fields, which, by their definition, is $$\mathcal{O}'(z',\bar{z}')=(\partial_z z')^{-h}(\partial_{\bar{z}}\bar{z}')^{-\bar{h}}\mathcal{O}(z,\bar{z}).$$ Compare that to the transformation rule of a tensor with $n$ lower indices. $$T_{u_1u_2\dots u_n}'=\frac{\partial x^{v_1}}{\partial x'^{u_1} }\frac{\partial x^{v_2}}{\partial x'^{u_2}}\dots\frac{\partial x^{v_n}}{\partial x'^{u_n} }T_{v_1v_2\dots v_n}.$$ Note that in the case of a 2d conformal transformation, in terms of $z,\bar{z}$ coordinates, $z'$ is only a function of $z$. Thus $\frac{\partial x^{v_i}}{\partial x'^{u_i}}$ is nonzero only when $v_i=u_i$. In other words, the various $\frac{\partial x^{v_i}}{\partial x'^{u_i}}$ are either $\frac{\partial z}{\partial z'}$ or $\frac{\partial \bar{z}}{\partial \bar{z}'}$. Therefore we can write $\frac{\partial x^{v_1}}{\partial x'^{u_1} }\frac{\partial x^{v_2}}{\partial x'^{u_2}}\dots\frac{\partial x^{v_n}}{\partial x'^{u_n} }$ as $(\frac{\partial z}{\partial z'})^h(\frac{\partial \bar{z}}{\partial \bar{z}'})^{\bar{h}}$, or equivalently as $(\frac{\partial z'}{\partial z})^{-h}(\frac{\partial \bar{z}'}{\partial \bar{z}})^{-\bar{h}}$, with $h+\bar{h}=n$.
Thus we have just demonstrated that $\mathcal{O}(z,\bar{z})$ obeys the same transformation rule as a (component of a) tensor, and thus it is called a tensor field.
Finally, the direct consequence of $T_{zz}$ not being a primary field is that it does not transform as a tensor. Moreover, because it does not transform as a tensor, its OPE with itself can have a $z^{-4}$ term, which is related to the Casimir energy for a unitary CFT. |
Essentially similar question to here Different boolean degrees polynomially related? (change being error condition $\epsilon\in(0,1)$).
Let $p$ be a minimum-degree real polynomial (of degree $d_f$) that represents the boolean function $f$, in the sense that $f(x)=p(x)$ for all $x$.
Let $p_{0,\epsilon}$ be the minimum degree (of degree $d_{0,f,\epsilon}$) real polynomial that represents boolean function $f$ such that $$f(x)=0\implies p_{0,\epsilon}(x)=0$$$$f(x)=1\implies|p_{0,\epsilon}(x)-f(x)|\leq\epsilon.$$
Let $p_{1,\epsilon}$ be the minimum degree (of degree $d_{1,f,\epsilon}$) real polynomial that represents boolean function $f$ such that $$f(x)=1\implies p_{1,\epsilon}(x)=1$$$$f(x)=0\implies|p_{1,\epsilon}(x)-f(x)|\leq\epsilon.$$
Is $d_{f}\leq d_{0,f,\epsilon}^{c_0}$ and $d_{f}\leq d_{1,f,\epsilon}^{c_1}$ for some $c_0$ and $c_1$?
Above holds if $\epsilon\in(0,\frac{1}{2})$ as mentioned here in link Different boolean degrees polynomially related?.
However, what happens if $\epsilon\in(0,1)$ instead of $(0,\frac{1}{2})$ (does the polynomial relation still hold)?
That is, we consider $0<\epsilon<\frac{1}{2}\leq\delta<1$.
Note that defining $p_\delta$ makes little sense if $\delta\in[\frac{1}{2},1)$.
I am most interested in $\delta=1-\frac{1}{h(n)}$ with some function of $n$ (logarithmic/polynomial/exponential). |
On the DNA Computer Binary Code
In any finite set we can define a binary operation or a partial order in different ways. But here, a partial order is defined on the set of four DNA bases in such a manner that a Boolean lattice structure is obtained. A Boolean lattice is an algebraic structure that captures essential properties of both set operations and logic operations. This partial order is defined based on the physico-chemical properties of the DNA bases: the hydrogen bond number and the chemical type, purine {A, G} or pyrimidine {U, C}. This physico-mathematical description permits the study of the genetic information carried by the DNA molecules as a computer binary code of zeros (0) and ones (1).

1. Boolean lattice of the four DNA bases
In any four-element Boolean lattice every element is comparable to every other, except two of them that are, nevertheless, complementary. Consequently, to build a four-base Boolean lattice it is necessary for the bases with the same number of hydrogen bonds in the DNA molecule and in different chemical types to be complementary elements in the lattice. In other words, the complementary bases in the DNA molecule (G≡C and A=T, or A=U during the translation of mRNA) should be complementary elements in the Boolean lattice. Thus, there are four possible lattices, each one with a different base as the maximum element.

2. Boolean (logic) operations in the set of DNA bases
The Boolean algebra on the set of elements X will be denoted by $(B(X), \vee, \wedge)$. Here the operators $\vee$ and $\wedge$ represent the classical “OR” and “AND” logical operations term-by-term. From the Boolean algebra definition it follows that this structure is (among other things) a partially ordered set in which any two elements $\alpha$ and $\beta$ have upper and lower bounds. In particular, the greatest lower bound of the elements $\alpha$ and $\beta$ is the element $\alpha\wedge\beta$ and the least upper bound is the element $\alpha\vee\beta$. This equivalent partially ordered set is called a Boolean lattice. In every Boolean algebra $(B(X), \vee, \wedge)$, for any two elements $\alpha,\beta \in X$ we have $\alpha \le \beta$ if and only if $\neg\alpha\vee\beta=1$, where the symbol “$\neg$” stands for logical negation. If the last equality holds, then it is said that $\beta$ is deduced from $\alpha$. Furthermore, if $\alpha \le \beta$ or $\alpha \ge \beta$, the elements $\alpha$ and $\beta$ are said to be comparable. Otherwise, they are said not to be comparable.
In the set of four DNA bases, we can build twenty four isomorphic Boolean lattices [1]. Herein, we focus our attention on the one described in reference [2], where the DNA bases G and C are taken as the minimum and maximum elements, respectively, in the Boolean lattice. The logic operations in this DNA computer code are given in the following tables:
| $\vee$ (OR) | G | A | U | C |
|---|---|---|---|---|
| G | G | A | U | C |
| A | A | A | C | C |
| U | U | C | U | C |
| C | C | C | C | C |

| $\wedge$ (AND) | G | A | U | C |
|---|---|---|---|---|
| G | G | G | G | G |
| A | G | A | G | A |
| U | G | G | U | U |
| C | G | A | U | C |
It is well known that all Boolean algebras with the same number of elements are isomorphic. Therefore, our algebra $(B(X), \vee, \wedge)$ is isomorphic to the Boolean algebra $(\mathbb{Z}_2^2(X), \vee, \wedge)$, where $\mathbb{Z}_2 = \{0,1\}$. Then, we can represent this DNA Boolean algebra by means of the correspondence: $G \leftrightarrow 00$; $A \leftrightarrow 01$; $U \leftrightarrow 10$; $C \leftrightarrow 11$. So, in accordance with the operation table:
$A \vee U = C \leftrightarrow 01 \vee 10 = 11$
$U \wedge G = G \leftrightarrow 10 \wedge 00 = 00$
$G \vee C = C \leftrightarrow 00 \vee 11 = 11$

A Boolean lattice has in correspondence a directed graph called a Hasse diagram, where two nodes (elements) $\alpha$ and $\beta$ are connected with a directed edge from $\alpha$ to $\beta$ (or with a directed edge from $\beta$ to $\alpha$) if, and only if, $\alpha \le \beta$ ($\alpha \ge \beta$) and there is no other element between $\alpha$ and $\beta$.

3. The Genetic code Boolean Algebras
Boolean algebras of codons are, explicitly, derived as the direct product $C(X) = B(X) \times B(X) \times B(X)$. These algebras are isomorphic to the dual Boolean algebras $(\mathbb{Z}_2^6, \vee, \wedge)$ and $(\mathbb{Z}_2^6, \wedge, \vee)$ induced by the isomorphism $B(X) \cong \mathbb{Z}_2^2$, where $X$ runs over the twenty four possible ordered sets of four DNA bases [1]. For example:
CAG $\vee$ AUC = CCC $\leftrightarrow$ 110100 $\vee$ 011011 = 111111
ACG $\wedge$ UGA = GGG $\leftrightarrow$ 011100 $\wedge$ 100001 = 000000
$\neg$ (CAU) = GUA $\leftrightarrow$ $\neg$ (110110) = 001001
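The correspondence above is easy to check mechanically. Here is a minimal Python sketch (my own illustration, not from the references) that encodes each base as two bits and reproduces the base and codon examples:

```python
# Two-bit encoding of the DNA bases: G = 00 (minimum), A = 01, U = 10, C = 11 (maximum).
ENC = {'G': 0b00, 'A': 0b01, 'U': 0b10, 'C': 0b11}
DEC = {v: k for k, v in ENC.items()}

def apply_op(seq1, seq2, f):
    """Apply a bitwise operation base-by-base to two equal-length sequences of DNA bases."""
    return ''.join(DEC[f(ENC[a], ENC[b])] for a, b in zip(seq1, seq2))

def neg(seq):
    """Logical negation: flip both bits of every base."""
    return ''.join(DEC[ENC[a] ^ 0b11] for a in seq)

print(apply_op('A', 'U', lambda a, b: a | b))      # C    (A OR U = C)
print(apply_op('CAG', 'AUC', lambda a, b: a | b))  # CCC
print(apply_op('ACG', 'UGA', lambda a, b: a & b))  # GGG
print(neg('CAU'))                                  # GUA
```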
The Hasse diagram for the corresponding Boolean algebra derived from the direct product of the Boolean algebra of four DNA bases given in the above operation table is:
In the Hasse diagram, chains and anti-chains can be identified. A Boolean lattice subset is called a chain if any two of its elements are comparable but, on the contrary, if any two of its elements are not comparable, the subset is called an anti-chain. In the Hasse diagram of codons shown in the figure, all chains with maximal length have the same minimum element GGG and the maximum element CCC. It is evident that two codons are in the same chain with maximal length if and only if they are comparable, for example the chain: GGG $\leftrightarrow$ GAG $\leftrightarrow$ AAG $\leftrightarrow$ AAA $\leftrightarrow$ AAC $\leftrightarrow$ CAC $\leftrightarrow$ CCC
The Hasse diagram symmetry reflects the role of hydrophobicity in the distribution of codons assigned to each amino acid. In general, codons that code to amino acids with extreme hydrophobic differences are in different chains with maximal length. In particular, codons with
U as a second base will appear in chains of maximal length whereas codons with A as a second base will not. For that reason, it will be impossible to obtain hydrophobic amino acids with codons having U in the second position through deductions from hydrophilic amino acids with codons having A in the second position.
There are twenty four Hasse diagrams of codons, corresponding to the twenty four genetic-code Boolean algebras. These algebras integrate a symmetric group isomorphic to the symmetric group of degree four $S_4$ [1]. In summary, the DNA binary code is not arbitrary, but subject to logic operations with subjacent biophysical meaning.
References
1. Sánchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527–60.
2. Sánchez R, Morgado E, Grau R. A genetic code Boolean structure. I. The meaning of Boolean deductions. Bull Math Biol, 2005, 67:1–14. |
Understanding the Paraxial Gaussian Beam Formula
The Gaussian beam is recognized as one of the most useful light sources. To describe the Gaussian beam, there is a mathematical formula called the paraxial Gaussian beam formula. Today, we’ll learn about this formula, including its limitations, by using the
Electromagnetic Waves, Frequency Domain interface in the COMSOL Multiphysics® software. We’ll also provide further detail into a potential cause of error when utilizing this formula. In a later blog post, we’ll provide solutions to the limitations discussed here.
Gaussian Beam: The Most Useful Light Source and Its Formula
Because they can be focused to the smallest spot size of all electromagnetic beams, Gaussian beams can deliver the highest resolution for imaging, as well as the highest power density for a fixed incident power, which can be important in fields such as material processing. These qualities are why lasers are such attractive light sources. To obtain the tightest possible focus, most commercial lasers are designed to operate in the lowest transverse mode, called the Gaussian beam.
As such, it would be reasonable to want to simulate a Gaussian beam with the smallest spot size. There is a formula that predicts real Gaussian beams in experiments very well and is convenient to apply in simulation studies. However, there is a limitation attributed to using this formula. The limitation appears when you are trying to describe a Gaussian beam with a spot size near its wavelength. In other words, the formula becomes less accurate when trying to observe the most beneficial feature of the Gaussian beam in simulation. In a future blog post, we will discuss ways to simulate Gaussian beams more accurately; for the remainder of this post, we will focus exclusively on the paraxial Gaussian beam.
A schematic illustrating the converging, focusing, and diverging of a Gaussian beam.
Note: The term “Gaussian beam” can sometimes be used to describe a beam with a “Gaussian profile” or “Gaussian distribution”. When we use the term “Gaussian beam” here, it always means a “focusing” or “propagating” Gaussian beam, which includes the amplitude and the phase.

Deriving the Paraxial Gaussian Beam Formula
The paraxial Gaussian beam formula is an approximation to the Helmholtz equation derived from Maxwell’s equations. This is the first important element to note, while the other portions of our discussion will focus on how the formula is derived and what types of assumptions are made from it.
Because the laser beam is an electromagnetic beam, it satisfies the Maxwell equations. The time-harmonic assumption (the wave oscillates at a single frequency in time) changes the Maxwell equations to the frequency domain from the time domain, resulting in the monochromatic (single wavelength) Helmholtz equation. Assuming a certain polarization, it further reduces to a scalar Helmholtz equation, which is written in 2D for the out-of-plane electric field for simplicity:
$$\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + k^2 \right) E_z(x,y) = 0,$$

where $k=2 \pi/\lambda$ for wavelength $\lambda$ in vacuum.
The original idea of the paraxial Gaussian beam starts with approximating the scalar Helmholtz equation by factoring out the propagating factor and leaving the slowly varying function, i.e., $E_z(x,y) = A(x,y)e^{-ikx}$, where the propagation axis is in $x$ and $A(x,y)$ is the slowly varying function. This will yield the identity

$$\frac{\partial^2 A}{\partial x^2} - 2ik\frac{\partial A}{\partial x} + \frac{\partial^2 A}{\partial y^2} = 0.$$
This factorization is reasonable for a wave in a laser cavity propagating along the optical axis. The next assumption is that $|\partial^2 A/ \partial x^2| \ll |2k \partial A/\partial x|$, which means that the envelope of the propagating wave is slow along the optical axis, and $|\partial^2 A/ \partial x^2| \ll |\partial^2 A/ \partial y^2|$, which means that the variation of the wave in the optical axis is slower than that in the transverse axis. These assumptions yield an approximation to the Helmholtz equation, which is called the paraxial Helmholtz equation, i.e.,

$$\frac{\partial^2 A}{\partial y^2} - 2ik\frac{\partial A}{\partial x} = 0.$$
The special solution to this paraxial Helmholtz equation gives the paraxial Gaussian beam formula. For a given waist radius $w_0$ at the focus point, the slowly varying function is given by

$$A(x,y) = \sqrt{\frac{w_0}{w(x)}}\, \exp\!\left(-\frac{y^2}{w(x)^2}\right) \exp\!\left(-\frac{iky^2}{2R(x)} + i\eta(x)\right),$$
where $w(x)$, $R(x)$, and $\eta(x)$ are the beam radius as a function of $x$, the radius of curvature of the wavefront, and the Gouy phase, respectively. The following definitions apply: $w(x) = w_0\sqrt{1+\left ( \frac{x}{x_R} \right )^2 }$, $R(x) = x +\frac{x_R^2}{x}$, $\eta(x) = \frac 12 {\rm atan} \left ( \frac{x}{x_R} \right )$, and $x_R = \frac{\pi w_0^2}{\lambda}$.
Here, $x_R$ is referred to as the Rayleigh range. Outside of the Rayleigh range, the Gaussian beam size becomes proportional to the distance from the focal point and the $1/e^2$ intensity position diverges at an approximate divergence angle of $\theta = \lambda/(\pi w_0)$.
Definition of the paraxial Gaussian beam.
Note: It is important to be clear about which quantities are given and which ones are being calculated. To specify a paraxial Gaussian beam, either the waist radius w_0 or the far-field divergence angle \theta must be given. These two quantities are dependent on each other through the approximate divergence angle equation. All other quantities and functions are derived from and defined by these quantities.
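Before turning to the COMSOL implementation, here is a short Python sketch (my own, with assumed values for the wavelength and waist radius) that evaluates $w(x)$, $R(x)$, $\eta(x)$, and the full background field $E_z = A(x,y)e^{-ikx}$ exactly as defined above:

```python
import numpy as np

wavelength = 1.0                              # assumed wavelength (arbitrary units)
w0 = 2.0 * wavelength                         # assumed waist radius
k = 2 * np.pi / wavelength
xR = np.pi * w0**2 / wavelength               # Rayleigh range

def beam_field(x, y):
    """Paraxial Gaussian beam E_z(x, y) = A(x, y) * exp(-i k x), waist at x = 0."""
    w = w0 * np.sqrt(1 + (x / xR)**2)         # beam radius w(x)
    inv_R = x / (x**2 + xR**2)                # 1 / R(x); zero (flat wavefront) at the waist
    eta = 0.5 * np.arctan(x / xR)             # Gouy phase
    A = (np.sqrt(w0 / w)
         * np.exp(-y**2 / w**2)
         * np.exp(-1j * k * y**2 * inv_R / 2 + 1j * eta))
    return A * np.exp(-1j * k * x)

x, y = np.meshgrid(np.linspace(-5, 5, 201), np.linspace(-5, 5, 201))
print(np.abs(beam_field(x, y)).max())         # |E| peaks at the waist (x = 0, y = 0)
```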
Simulating Paraxial Gaussian Beams in COMSOL Multiphysics®
In COMSOL Multiphysics, the paraxial Gaussian beam formula is included as a built-in background field in the
Electromagnetic Waves, Frequency Domain interface in the RF and Wave Optics modules. The interface features a formulation option for solving electromagnetic scattering problems, which are the Full field and the Scattered field formulations.
The paraxial Gaussian beam option will be available if the scattered field formulation is chosen. By using this feature, you can use the paraxial Gaussian beam formula in COMSOL Multiphysics without having to type out the relatively complicated formula. Instead, you simply need to specify the waist radius, focus position, polarization, and the wave number.

Plots showing the electric field norm of paraxial Gaussian beams with different waist radii. Note that the variable name for the background field is ewfd.Ebz.

Looking into the Limitation of the Paraxial Gaussian Beam Formula
In the scattered field formulation, the total field $E_{\rm total}$ is linearly decomposed into the background field $E_{\rm bg}$ and the scattered field $E_{\rm sc}$ as $E_{\rm total} = E_{\rm bg} + E_{\rm sc}$. Since the total field must satisfy the Helmholtz equation, it follows that $(\nabla^2 + k^2 )E_{\rm total} = 0$, where $\nabla^2$ is the Laplace operator. This is the full field formulation, where COMSOL Multiphysics solves for the total field. On the other hand, this formulation can be rewritten in the form of an inhomogeneous Helmholtz equation as

$$(\nabla^2 + k^2 )E_{\rm sc} = -(\nabla^2 + k^2 )E_{\rm bg}.$$
The above equation is the scattered field formulation, where COMSOL Multiphysics solves for the scattered field. This formulation can be viewed as a scattering problem with a scattering potential, which appears in the right-hand side. It is easy to understand that the scattered field will be zero if the background field satisfies the Helmholtz equation (under an approximate Sommerfeld radiation condition, such as an absorbing boundary condition) because the right-hand side is zero, aside from the numerical errors. If the background field doesn’t satisfy the Helmholtz equation, the right-hand side may leave some nonzero value, in which case the scattered field may be nonzero. This field can be regarded as an error of the background field. In other words, under certain conditions, you can qualify and quantify exactly how and by how much your background field satisfies the Helmholtz equation. Let’s now take a look at the scattered field for the example shown in the previous simulations.
Plots showing the electric field norm of the scattered field. Note that the variable name for the scattered field is ewfd.relEz. Also note that the numerical error is contained in this error field as well as the formula’s error.
The results shown above clearly indicate that the paraxial Gaussian beam formula starts failing to be consistent with the Helmholtz equation as it’s focused more tightly. Quantitatively, the plot below may illustrate the trend more clearly. Here, the relative L2 error is defined by \left ( \int_\Omega |E_{\rm sc}|^2dxdy / \int_\Omega |E_{\rm bg}|^2dxdy \right )^{0.5}, where \Omega stands for the computational domain, which is compared to the mesh size. As this plot suggests, we can’t expect that the paraxial Gaussian beam formula for spot sizes near or smaller than the wavelength is representative of what really happens in experiments or the behavior of real electromagnetic Gaussian beams. In the settings of the paraxial Gaussian beam formula in COMSOL Multiphysics, the default waist radius is ten times the wavelength, which is safe enough to be consistent with the Helmholtz equation. It is, however, not a “cut-off” number, as the approximation assumption is continuous. It’s up to you to decide when you need to be cautious in your use of this approximate formula.
Semi-log plot comparing the relative L2 error of the scattered field with the waist size in units of the wavelength.

Checking the Validity of the Paraxial Approximation
In the above plot, we saw the relationship between the waist size and the accuracy of the paraxial approximation. Now we can check the assumptions that were discussed earlier. One of the assumptions used to derive the paraxial Helmholtz equation is that the envelope function varies relatively slowly along the propagation axis, i.e., $|\partial^2 A/ \partial x^2| \ll |2k \partial A/\partial x|$. Let's check this condition on the $x$-axis. To that end, we can calculate a quantity representing the paraxiality. As the paraxial Helmholtz equation is a complex equation, let's take a look at the real part of this quantity, ${\rm abs} \left ( {\rm real} \left ( (\partial^2 A/ \partial x^2) / (2ik \partial A/\partial x) \right ) \right )$.
The following plot is the result of the calculation as a function of $x$ normalized by the wavelength. (You can type it in the plot settings by using the derivative operator, like d(d(A,x),x) and d(A,x), and so on.) We can see that the paraxiality condition breaks down as the waist size gets close to the wavelength. This plot indicates that the beam envelope is no longer slowly varying around the focus as the beam becomes fast. A different approach for seeing the same trend is shown in our Suggested Reading section.
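For readers without COMSOL at hand, the same on-axis check can be approximated numerically: at $y=0$ the envelope reduces to $A(x,0)=\sqrt{w_0/w(x)}\,e^{i\eta(x)}$, and the paraxiality measure above can be formed with finite differences (a rough sketch of mine, with assumed parameters):

```python
import numpy as np

wavelength = 1.0
k = 2 * np.pi / wavelength

def paraxiality(w0, x):
    """abs(real((d2A/dx2) / (2 i k dA/dx))) evaluated on the x-axis (y = 0)."""
    xR = np.pi * w0**2 / wavelength
    w = w0 * np.sqrt(1 + (x / xR)**2)
    A = np.sqrt(w0 / w) * np.exp(0.5j * np.arctan(x / xR))   # envelope at y = 0
    dA = np.gradient(A, x)
    d2A = np.gradient(dA, x)
    return np.abs(np.real(d2A / (2j * k * dA)))

x = np.linspace(0.0, 10.0, 2000) * wavelength
for w0 in (0.5 * wavelength, 2 * wavelength, 10 * wavelength):
    # the measure grows as the waist shrinks toward the wavelength
    print(w0, paraxiality(w0, x).max())
```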
Real part of the paraxiality along the $x$-axis for paraxial Gaussian beams with different waist sizes.

Concluding Remarks on the Paraxial Gaussian Beam Formula
Today’s blog post has covered the fundamentals related to the paraxial Gaussian beam formula. Understanding how to effectively utilize this useful formulation requires knowledge of its limitation as well as how to determine its accuracy, both of which are elements that we have highlighted here.
There are additional approaches available for simulating the Gaussian beam in a more rigorous manner, allowing you to push through the limit of the smallest spot size. We will discuss this topic in a future blog post. Stay tuned!
Editor’s note, 7/2/18: The follow-up blog post, “The Nonparaxial Gaussian Beam Formula for Simulating Wave Optics”, is now live.

Suggested Reading
P. Vaveliuk, “Limits of the paraxial approximation in laser beams”, Optics Letters, Vol. 32, No. 8 (2007) |
I have seen that if a set $K$ in a Hilbert space $H$ is convex and strongly sequentially closed, it is weakly closed. The teacher said that if you take a convex and strongly lower semicontinuous functional $F$, then using the fact that the sets $F^{-1}((-\infty, \lambda])$ are convex and that closedness implies weak closedness, it is easy to conclude that convexity and strong lower semicontinuity imply weak lower semicontinuity. I do not see how to do that though. I would like to see a proof not involving weak topologies etc. The way he said it meant it was supposed to be done using only the definitions, or little more.
Let us start with two facts and a remark.
Fact 1. Let $(X,\mathcal{T})$ be a topological space and let $f \colon (X,\mathcal{T}) \to \left[{-}\infty,{+}\infty\right]$. Then $f$ is lower semicontinuous if and only if, for every $\xi \in \mathbb{R}$, the lower level set $f^{-1}(\left[{-}\infty,\xi\right])$ is closed. Here, by lower semicontinuity, I mean: for every $x \in X$ and for every $\xi \in \left]-\infty,f(x)\right[$, there exists a neighborhood $V$ of $x$ such that $(\forall y \in V)\; f(y) > \xi$.

Remark. Lower semicontinuity goes with the topology on the domain of $f$. In particular, in your question, lower semicontinuous means "$f$ is lower semicontinuous with respect to the strong topology" whereas "weakly lower semicontinuous" means "$f$ is lower semicontinuous with respect to the weak topology on $H$." So I guess there is no way to avoid the weak topology in the proof, as it directly relates to the topologies on the domain.

Fact 2. Let $C$ be a convex subset of $H$ (in your question). Then $C$ is closed in the topology induced by the hilbertian norm of $H$ if and only if $C$ is closed in the weak topology.
Returning to your question, assume that $f$ is lower semicontinuous w.r.t. the strong topology (induced by the norm of $H$) and that $f$ is convex. We must show that $f$ is weakly lower semicontinuous, i.e., $f$ is lower semicontinuous when $H$ is equipped with the weak topology. Let us use Fact 1 to do this, i.e., take $\xi \in \mathbb{R}$ and show that $f^{-1}(\left[{-}\infty,\xi\right])$ is weakly closed. Since $f$ is convex, the set $f^{-1}(\left[{-}\infty,\xi\right])$ is convex. On the other hand, since $f$ is lsc w.r.t. the strong topology, the set $f^{-1}(\left[{-}\infty,\xi\right])$ is closed in the strong topology by Fact 1. Altogether, Fact 2 implies that it is indeed weakly closed.
So, we have shown that, for every $\xi \in \mathbb{R}$, the set $f^{-1}(\left[{-}\infty,\xi\right])$ is closed in the weak topology. In view of Fact 1, we conclude that $f$ is weakly lsc, i.e., lower semicontinuous when $H$ is equipped with the weak topology. |
Consider the predicates
$M(x,y):$ "x has sent an email to y",
$T(x,y):$ "x has called y".
The predicate variable x, y take values in the domain D = {students in the class}. I need to express these statements using symbolic logic:
"There are at least 2 students in the class such that one student has sent the other an email, and the second student has called the first student." (I don't know how to translate this using symbolic logic.)
"There are some students in the class who have emailed everyone": $\exists x\in D, \forall y\in D M(x, y)\quad$? |
Why is there no monopole radiation in the electromagnetic field? I read somewhere that it is impossible because it violates charge conservation. I don't understand how. How does charge conservation get violated here?
In a multipole expansion of the electric potential, outside of some charge distribution $\rho(\mathbf r,t)$, the monopole term is simply
$$V_{mp}(\mathbf r) = \frac{Q_{total}}{4\pi \epsilon_0 r}$$
The associated electric field is then
$$\vec E_{mp} = \frac{Q_{total}}{4\pi \epsilon_0 r^2}\hat r$$
For this term to be time varying at some fixed $r$, the
total charge must change with time, i.e., charge must be created or destroyed which is inconsistent with the conservation of electric charge.
So,
if there were monopole EM radiation, charge would not be conserved and, further, such radiation would be longitudinal.
Comment to the question (v1):
It seems relevant to mention that a spherically symmetric solution of Maxwell equations (for a system with a spherically symmetric charge and current distributions) is necessarily
static in a (not necessarily thin) vacuum shell (i.e. a region with no charges/matter). This is a consequence of the electromagnetic version of Birkhoff's theorem.
From quantum physics we know that electrons have angular momentum $l$, which must be conserved (conservation of angular momentum) by the emitted photons, which have spin 1. There are no photons without spin (no spin would correspond to monopole radiation), therefore the leading radiation is dipole radiation.
Hypothetical gravitons have a Spin of 2 (2x2 matrix). So gravitational waves are supposed to have a quadrupole moment |
Consider the following recursion: $\begin{cases} T(n) = 2T(\frac{n}{2}) + \frac{n}{\log n} &n > 1 \\ O(1) &n = 1 \end{cases}$.
The master theorem doesn't work, as the exponent of $\log n$ is negative. So I tried unfolding the relation and finally got the equation: $T(n) = n[1 + \frac{1}{\log(\frac{n}{2})} + \frac{1}{\log(\frac{n}{4})} + ... + \frac{1}{\log(2)}]$.
I do not know how to simplify (which inequalities to use?) from here. A trivial method would be to note that all of the reciprocal-log terms are $\le \frac{1}{\log 2}$, and since there are about $\log n$ terms, the sum of all the reciprocal-log terms is $\le \frac{\log n }{\log 2} = \log_2 n$, which gives $T(n) = O(n \log n)$. However, this is a very poor approximation, as by the master theorem we can check that the time complexity for the recursive relation $T(n) = 2T(\frac{n}{2}) + n$ is $O(n \log n)$. Can someone find a tighter correct upper bound? |
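One way to see the right order of growth: with logs taken base 2, $\log(\frac{n}{2^k}) = \log n - k$, so the bracketed sum is the harmonic number $H_{\log n - 1} = \Theta(\log\log n)$, suggesting $T(n) = \Theta(n\log\log n)$. A quick numerical check of this guess (my own sketch; base case taken as $T(1)=1$ and $n$ a power of two):

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2 T(n/2) + n / log2(n), with T(1) = 1
    if n == 1:
        return 1.0
    return 2 * T(n // 2) + n / math.log2(n)

for e in (10, 15, 20, 25):
    n = 2 ** e
    # T(n)/n - ln(log2(n)) approaches a constant, consistent with T(n) = Theta(n log log n)
    print(e, T(n) / n - math.log(math.log2(n)))
```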
My teacher gave us this problem where we need to find:
$$\lim_{(x,y) \to (0,0)} \frac{1-\cos(x+y)}{x+y}$$
My first gut instinct would have been to try sandwich theorem (after attempting a few paths and getting all $0$s). However, he gave us a solution using a Taylor approximation:
$$\lim_{(x,y) \to (0,0)} \frac{\frac1 2 (x^2+2xy+y^2)+R_2(x,y)}{x+y} = \lim_{(x,y) \to (0,0)} \frac{\frac1 2 (x+y)^2}{x+y}+ \frac{R_2(x,y)}{x+y}=0$$
It seems he used a second degree Taylor polynomial with its remainder term (which tends to $0$ when the function and the Taylor polynomial are close to the same point).
However, three questions arise:
1) Can I always use this Taylor approach to avoid taking a limit by using paths and the sandwich theorem? If not, when exactly can and can't I use Taylor for limits?
2) Why a second-degree polynomial? How should I know which degree to use in this technique?
3) I don't get where the $2xy$ came from in the polynomial (inside the parentheses) and why some terms are positive instead of negative. Since $\cos(x+y)$ equals $1$ when evaluated at $(0;0)$, the first derivatives evaluated at $(0;0)$ are equal to $0$ and the second derivatives are all equal to $-1$ then, to me, the polynomial should look like: $$ 1+\frac 1 2 (-x^2-xy-y^2)+R_2(x,y) $$ |
There are
many properties that are equivalent to uniqueness of factorization in $\,\Bbb Z.\:$ Below is a sample off the top of my head (by no means complete). Each provides a slightly different perspective on why uniqueness holds - perspectives that becomes clearer when one sees how these equivalent properties bifurcate in more general integral domains. Below we use the notation $\rm\:(a,b)=1\:$ to mean that $\rm\:a,b\:$ are coprime, i.e. $\rm\:c\mid a,b\:\Rightarrow\:c\mid 1.$
$\rm(1)\ \ \ gcd(a,b)\:$ exists for all $\rm\:a,b\ne 0\ \ $ [GCD domain]
$\rm(2)\ \ \ a\mid BC\:\Rightarrow a=bc,\ b\mid B,\ c\mid C\ \ \, $ [Schreier refinement, Euler's four number theorem]
$\rm(3)\ \ \ a\,\Bbb Z + b\, \Bbb Z\, =\, c\,\Bbb Z,\:$ for some $\rm\,c\quad\ $ [Bezout domain]
$\rm(4)\ \ \ (a,b)=1,\ a\mid bc\:\Rightarrow\: a\mid c\qquad\ \ $ [Euclid's Lemma]
$\rm(5)\ \ \ (a,b)=1,\ \dfrac{a}{b} = \dfrac{c}{d}\:\Rightarrow\: b\mid d\quad\ \ $ [Unique Fractionization]
$\rm(6)\ \ \ (a,b)=1,\ a,b\mid c\:\Rightarrow\: ab\mid c$
$\rm(7)\ \ \ (a,b)=1\:\Rightarrow\: a\,\Bbb Z\cap b\,\Bbb Z\, =\, ab\,\Bbb Z $
$\rm(8)\ \ \ gcd(a,b)\ \ exists\:\Rightarrow\: lcm(a,b)\ \ exists$
$\rm(9)\ \ \ (a,b)=1=(a,c)\:\Rightarrow\: (a,bc)= 1$
$\rm(10)\ $ atoms $\rm\, p\,$ are prime: $\rm\ p\mid ab\:\Rightarrow\: p\mid a\ \ or\ \ p\mid b$
Which of these properties sheds the most intuitive light on why uniqueness of factorization holds? If I had to choose one, I would choose $(2),$ Schreier refinement. If you extend this by induction it implies that any two factorizations of an integer have a
common refinement. For example if we have two factorizations $\rm\: a_1 a_2 = n = b_1 b_2 b_3\:$ then Schreier refinement implies that we can build the following refinement matrix, where the column labels are the product of the elements in the column, and the row labels are the products of the elements in the row
$$\begin{array}{c|ccc} &\rm b_1 &\rm b_2 &\rm b_3 \\\hline\rm a_1 &\rm c_{1 1} &\rm c_{1 2} &\rm c_{1 3}\\\rm a_2 &\rm c_{2 1} &\rm c_{2 2} &\rm c_{2 3}\\\end{array}$$
This implies the following common refinement of the two factorizations
$$\rm a_1 a_2 = (c_{1 1} c_{1 2} c_{1 3}) (c_{2 1} c_{2 2} c_{2 3}) = (c_{1 1} c_{2 1}) (c_{1 2} c_{2 2}) (c_{1 3} c_{2 3}) = b_1 b_2 b_3.$$
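For a concrete instance (an illustration of mine, not in the original): take $\rm\,n = 60\,$ with the factorizations $\rm\,6\cdot 10 = 2\cdot 5\cdot 6.\,$ One possible refinement matrix is

$$\begin{array}{c|ccc} &\rm 2 &\rm 5 &\rm 6 \\\hline\rm 6 &\rm 1 &\rm 1 &\rm 6\\\rm 10 &\rm 2 &\rm 5 &\rm 1\\\end{array}$$

giving the common refinement $\rm\, 6\cdot 10 = (1\cdot 1\cdot 6)(2\cdot 5\cdot 1) = (1\cdot 2)(1\cdot 5)(6\cdot 1) = 2\cdot 5\cdot 6.$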
This immediately yields the
uniqueness of factorizations into primes (atoms). It also works more generally for factorizations into coprime elements, and for factorizations of certain types of algebraic structures (abelian groups, etc). |
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions
(Nature Publishing Group, 2017)
At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV
(American Physical Society, 2017-06)
The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ... |
I am just a relatively new user, so please correct me if I'm wrong.
If you look at the questions with the most votes on MSE, you can see that all of the top $35$ questions were asked at least 4 years ago. Some of them were even asked about 8 years ago.
Does this mean new questions can't receive enough attention?
One of the reasons for this phenomenon is, I think, the 'hot topics':
Classic results like Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$ (Basel problem) Some explanation to some important concept in mathematics, such as Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio?; What are imaginary numbers? Or some 'interesting' results like Is this Batman equation for real?.
And interestingly, some of the problems are really off-topic, such as My sister absolutely refuses to learn math.
Does this mean that, often, receiving many votes and much attention really doesn't mean it is a good question? Are our users really focusing on the wrong things? We should not just pay attention to familiar topics, or 'hot topics'. Does it mean we should revise our MSE system?
Maybe it is because we have many more questions these days, and so the average attention per problem is decreasing. But I think the number of good-quality posts won't decrease.
Maybe we can let moderators or expert users select good questions (for the month, the year, etc.?). Something like votes shouldn't be the most important factor.
Thank you for paying attention to my question. |
I want to find out whether the series $\sum_{n=1}^\infty \frac{(-1)^n(2n)!!}{(2n+1)!!}$ is convergent, and I know the alternating series test. However, I don't know whether the absolute value of the terms converges to 0 or not. I have already shown that the series is not absolutely convergent. Thanks.
One may observe that, as $n \to \infty$, by the use of the Stirling formula,$$\frac{(2n)!!}{(2n+1)!!}=\frac{2^n\cdot n!}{\frac{(2n+1)!}{2^n n!}}=\frac{2^{2n} (n!)^2}{(2n+1)!}\sim \frac{\sqrt{\pi}}2\frac1{\sqrt{n}}$$ and the initial series is
not absolutely convergent.
In fact, as $n \to \infty$, by using the asymptotic expansion, $$ n! = \sqrt{2 \pi}n^{n+1/2} e^{-n} \left( 1 + O\left(\frac1n\right)\right) $$ we get
$$ (-1)^n\frac{(2n)!!}{(2n+1)!!}=\frac{\sqrt{\pi}}2\frac{(-1)^n}{\sqrt{n}}+ O \left( \frac{1}{n^{3/2}} \right) $$
so the initial series is convergent, being the sum of two convergent series.
One may prove that
$$ \sum_{n\geq1}(-1)^n\frac{(2n)!!}{(2n+1)!!}=\frac{\sqrt{2}}2\log \left(1+\sqrt{2}\right)-1. $$ |
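A quick numerical sanity check (my own sketch) of both the asymptotics and the closed form above:

```python
import math

# terms a_n = (2n)!!/(2n+1)!!, generated by the recurrence a_n = a_{n-1} * 2n/(2n+1)
a, s = 1.0, 0.0
for n in range(1, 200001):
    a *= 2 * n / (2 * n + 1)
    s += (-1)**n * a

closed_form = math.sqrt(2) / 2 * math.log(1 + math.sqrt(2)) - 1
print(s, closed_form)       # both roughly -0.3768 (the alternating tail is O(1/sqrt(n)), so convergence is slow)
print(a * 2 * math.sqrt(200000) / math.sqrt(math.pi))   # close to 1, matching the asymptotic sqrt(pi)/(2 sqrt(n))
```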
Definition: Barycenter
Let $p_1,\ldots,p_n \in \mathcal E$ be points.
Let $\lambda_1,\ldots,\lambda_n \in k$ such that $\displaystyle \sum_{i \mathop = 1}^n \lambda_i = 1$.
The barycenter of $p_1,\ldots,p_n$ with weights $\lambda_1,\ldots,\lambda_n$ is the unique point $q$ of $\mathcal E$ such that, for every point $r \in \mathcal E$:
$\displaystyle q = r + \sum_{i \mathop = 1}^n\lambda_i \vec{r p_i}$

Notation
It is conventional to write:
$q = \lambda_1 p_1 + \cdots + \lambda_np_n$
despite the fact that linear combinations are not defined in affine spaces.
Also known as
UK orthography encodes this as barycentre. |
We study the distribution of singularities (poles and zeros) of rational solutions of the Painlevé IV equation by means of the isomonodromic deformation method.
Seminars
An important conjecture in knot theory relates the large-$N$, double scaling limit of the colored Jones polynomial $J_{K,N}(q)$ of a knot $K$ to the hyperbolic volume of the knot complement, Vol($K$). A less studied question is whether Vol($K$) can be recovered directly from the original Jones polynomial ($N=1$). In this report we use a deep neural network to approximate Vol($K$) from the Jones polynomial.
Juan Antonio Valiente Kroon, 2019/02/13, 11h, Construction of anti de Sitter-like spacetimes using the metric conformal field equations
In this talk I will describe how to make use of the metric version of the conformal Einstein field equations to construct anti-de Sitter-like spacetimes by means of a suitably posed initial-boundary value problem. The evolution system associated to this initial-boundary value problem consists of a set of conformal wave equations for a number of conformal fields and the conformal metric. This formulation makes use of generalised wave coordinates and allows the free specification of the Ricci scalar of the conformal metric via a conformal gauge source function.
Bruno Oliveira, 2019/02/26, 15h, Big jet-bundles on resolution of orbifold surfaces of general type.
The presence of symmetric and more generally $k$-jet differentials on surfaces $X$ of general type play an important role in constraining the presence of entire curves (nonconstant holomorphic maps from $\mathbb{C}$ to $X$). Green-Griffiths-Lang conjecture and Kobayashi conjecture are the pillars of the theory of constraints on the existence of entire curves on varieties of general type.
When the surface has a low ratio $c_1^2/c_2$, a simple application of Riemann-Roch is unable to guarantee abundance of symmetric or $k$-jet differentials.
By solving a singular initial value problem, we prove the existence of solutions of the wave equation $\Box_g\phi=0$ which are bounded at the Big Bang in the Friedmann-Lemaitre-Robertson-Walker cosmological models. More precisely, we show that given any function $A \in H^3(\Sigma)$ (where $\Sigma=\mathbb{R}^n$, $\mathbb{S}^n$ or $\mathbb{H}^n$ models the spatial hypersurfaces) there exists a unique solution $\phi$ of the wave equation converging to $A$ in $H^1(\Sigma)$ at the Big Bang, and whose time derivative is suitably controlled in $L^2(\Sigma)$.
Jarrod Williams, 2019/03/21, 14h 30m, The Friedrich-Butscher method for the construction of initial data in General Relativity
The construction of initial data for the Cauchy problem in General Relativity is an interesting problem from both the mathematical and physical points of view. As such, there have been numerous methods studied in the literature, the "Conformal Method" of Lichnerowicz-Choquet-Bruhat-York and the "gluing" method of Corvino-Schoen being perhaps the best-explored. In this talk I will describe an alternative, perturbative, approach proposed by A. Butscher and H. Friedrich.
Anne Franzen, 2019/01/30, 11h, Flat FLRW and Kasner Big Bang singularities analyzed on the level of scalar waves
We consider the wave equation, $\square_g\psi=0$, in fixed flat Friedmann-Lemaitre-Robertson-Walker and Kasner spacetimes with topology $\mathbb{R}_+\times\mathbb{T}^3$. We obtain generic blow up results for solutions to the wave equation towards the Big Bang singularity in both backgrounds. In particular, we characterize open sets of initial data prescribed at a spacelike hypersurface close to the singularity, which give rise to solutions that blow up in an open set of the Big Bang hypersurface $\{t=0\}$.
I plan to cover the following topics: Euler equations; Burgers' equation; the p-system; symmetric hyperbolic PDEs; shock formation; the Lax method of solving Riemann problems; Glimm's method for solving Cauchy problems; entropy solutions; artificial viscosity.
We prove the following theorem: axisymmetric, stationary solutions of the Einstein field equations formed from classical gravitational collapse of matter obeying the null energy condition, that are everywhere smooth and ultracompact (i.e., they have a light ring) must have at least two light rings, and one of them is stable. It has been argued that stable light rings generally lead to nonlinear spacetime instabilities. |
I can think of one extreme case, numbers much larger than you are using. Given a real number $\delta > 0,$ the exponent of some prime $p$ in the superior highly composite number associated with $\delta$ is$$ k = \left\lfloor \frac{1}{p^\delta - 1} \right\rfloor. $$For small primes these exponents are roughly proportional to $1 / \log p,$ so they decrease fairly quickly at first, in any case they never increase, and no prime is skipped over. These numbers, and all highly composite numbers, are the product of primorials.
For example, taking $\delta = 0.085$ gives us the S.H.C. number$$ N_{0.085} = 2^{16} \cdot 3^{10} \cdot 5^6 \cdot \mbox{more}. $$ The number of divisors is$$ d(N_{0.085}) = 17 \cdot 11 \cdot 7 \cdot \mbox{more}. $$This number is divisible by $17$ but not by $13,$ hence it cannot be a highly composite number.
Taking $\delta = 0.05$ gives us the S.H.C. number$$ N_{0.05} = 2^{28} \cdot 3^{17} \cdot 5^{11} \cdot \mbox{more}. $$ The number of divisors is$$ d(N_{0.05}) = 29 \cdot 18 \cdot 12 \cdot \mbox{more}. $$This number is divisible by $29$ but not by any of $23,19,17,13,$ hence it cannot be a highly composite number.
We can invert things: given a prime $p$ and a target exponent $k,$ the
largest $\delta$ that will assign the exponent $k$ to the prime $p$ is$$ \delta = \frac{\log \left( \frac{k+1}{k} \right)}{\log p} $$
So, you see, it is possible to generate S.H.C. numbers on your own, and prescribe one of the exponents as you see fit. Note that the exact value, which will cause numerical problems, is not necessary; my preference is to take $\delta$ a rational number with$$ \frac{\log \left( \frac{k+1}{k} \right)}{\log p} > \delta > \frac{\log \left( \frac{k+2}{k+1} \right)}{\log p}, $$ which still gives the correct $k.$ I wanted the exponent of $2$ to be a prime minus $1,$ but the exponent of $3$ to be smaller than the previous prime minus $1;$ you can see how I did that now.
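For anyone who wants to experiment, here is a minimal Python sketch of the recipe just described (the function names and the truncated prime list are mine, not part of the answer): it assigns each prime the exponent $\lfloor 1/(p^\delta-1)\rfloor$ and reports the resulting divisor count.

```python
# Minimal sketch: build the superior highly composite number N_delta by giving
# each prime p the exponent k = floor(1 / (p**delta - 1)); exponents never
# increase as p grows, so we stop early if an exponent hits 0.
from math import floor

def shc_exponents(delta, primes=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31)):
    exps = []
    for p in primes:
        k = floor(1 / (p ** delta - 1))
        if k == 0:
            break
        exps.append((p, k))
    return exps

def divisor_count(exps):
    d = 1
    for _, k in exps:
        d *= k + 1
    return d

if __name__ == "__main__":
    for delta in (0.085, 0.05):
        exps = shc_exponents(delta)
        print(delta, exps, "d(N) =", divisor_count(exps))
```

With $\delta=0.085$ this reproduces the exponents $2^{16}\cdot 3^{10}\cdot 5^{6}\cdots$ quoted above, and with $\delta=0.05$ the exponents $2^{28}\cdot 3^{17}\cdot 5^{11}\cdots$.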
Edit: factoring. The ratio of consecutive S.H.C. numbers is a single, very small, prime. If you have factored one of these, you can factor the next one by taking the ratio and correcting the appropriate exponent by $1.$ Often the exponent increased is that of a new prime, bumped from $0$ to $1.$ Anyway, you can keep doing this to factor all of them that anyone has listed, such as http://oeis.org/A002201 and the text list here http://oeis.org/A002201/b002201.txt Oh, and by the way, the increased exponent found by the ratio tells you what $\delta$ can be for the larger S.H.C. number. |
Abbreviation:
Grpd
A groupoid is a category $\mathbf{C}=\langle C,\circ,\text{dom},\text{cod}\rangle$ such that
every morphism is an isomorphism: $\forall x\exists y\ x\circ y=\text{dom}(x)\text{ and }y\circ x=\text{cod}(x)$
Let $\mathbf{C}$ and $\mathbf{D}$ be groupoids. A morphism from $\mathbf{C}$ to $\mathbf{D}$ is a function $h:C\rightarrow D$ that is a functor: $h(x\circ y)=h(x)\circ h(y)$, $h(\text{dom}(x))=\text{dom}(h(x))$ and $h(\text{cod}(x))=\text{cod}(h(x))$.
Remark: These categories are also called Brandt groupoids.
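As a concrete illustration (my own example, not taken from this page), the following Python sketch builds the pair groupoid on a three-element set and checks the axiom that every morphism is an isomorphism; the composition convention is chosen so that the inverse of an arrow $a\to b$ is $b\to a$.

```python
# The "pair groupoid" on a finite set S: morphisms are pairs (a, b), read as an
# arrow a -> b; dom((a,b)) = (a,a), cod((a,b)) = (b,b), and (b,c) composed
# after (a,b) gives (a,c).  compose(g, f) below means "g after f".
from itertools import product

S = [0, 1, 2]
morphisms = list(product(S, S))          # all arrows a -> b

def dom(f):  return (f[0], f[0])
def cod(f):  return (f[1], f[1])

def compose(g, f):
    # only defined when cod(f) == dom(g)
    assert f[1] == g[0]
    return (f[0], g[1])

# every morphism f has an inverse g with compose(g, f) = dom(f)
# and compose(f, g) = cod(f)
for f in morphisms:
    g = (f[1], f[0])                     # the reversed pair
    assert compose(g, f) == dom(f)
    assert compose(f, g) == cod(f)
print("pair groupoid on", S, "satisfies the groupoid axiom")
```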
$\begin{array}{lr} f(1)= &1\\ f(2)= &2\\ f(3)= &3\\ f(4)= &7\\ f(5)= &9\\ f(6)= &16\\ f(7)= &22\\ f(8)= &42\\ f(9)= &57\\ f(10)= &90\\ \end{array}$ |
What's the meaning of random variables $X_i^2(A)$
For example:
Suppose we are performing Bernoulli trials, $\Omega =\{A, \text{not } A\}$ with $P(A)=p$ and $P(\text{not } A)=1-p=q$. Given $n$ independent random variables $X_1,X_2,\dots,X_n$, each taking
$$\begin{align*}X_i(A)=1,X_i(\text{not} A)=0,\end{align*}$$
set
$$\begin{align*}S_n=\sum _{i=1}^n X_i\end{align*}$$
I can understand this:
$$\begin{align*}E\left(X_k\right)=X_k(A)P(A)+X_k(\text{not} A)P(\text{not} A)=p\end{align*}$$
but I have difficulty understanding the $X_k^2$ in the variance $V\left(X_k\right)$:
$$\begin{align*}V\left(X_k\right)&=E\left(X_k^2\right)-\left[E\left(X_k\right)\right]^2\\&=\color{blue}{X_k^2(A)P(A)}\color{red}{+}\color{blue}{X_k^2(\text{not } A)P(\text{not } A)}-p^2\end{align*}$$
I can derive the blue part, but how should one understand the (real, physical, worldly, historical...) meaning of $X_k^2(A)$, and further of $X_k^3,\dots$?
$X_i$ may represent an event, or an event's payoff, such as earning 10 dollars in one round of a gambling game. |
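One way to see what $X_k^2$ means in practice: it is simply the random variable $\omega\mapsto X_k(\omega)^2$, and for an indicator it coincides with $X_k$ itself, so $E(X_k^2)=p$ and $V(X_k)=p-p^2=pq$. A small simulation sketch (mine, with an arbitrary $p$) illustrates this:

```python
# X_k is the indicator of the event A, so X_k^2 is the random variable
# omega -> X_k(omega)**2.  For an indicator X_k^2 == X_k, hence
# E[X_k^2] = p and Var(X_k) = p - p^2 = p*q.
import random

p = 0.3
samples = [1 if random.random() < p else 0 for _ in range(100_000)]

mean_x  = sum(samples) / len(samples)                 # estimates E[X]   = p
mean_x2 = sum(x * x for x in samples) / len(samples)  # estimates E[X^2] = p
var_x   = mean_x2 - mean_x ** 2                       # estimates p*q

print(mean_x, mean_x2, var_x, p * (1 - p))
```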
Abbreviation:
CloA
A closure algebra is a modal algebra $\mathbf{A}=\langle A,\vee,0,\wedge,1,\neg,\diamond\rangle$ such that
$\diamond$ is a closure operator: $x\le \diamond x$, $\diamond\diamond x=\diamond x$
Remark: Closure algebras provide algebraic models for the modal logic S4. The operator $\diamond$ is the possibility operator, and the necessity operator $\Box$ is defined as $\Box x=\neg\diamond\neg x$.
Let $\mathbf{A}$ and $\mathbf{B}$ be closure algebras. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\to B$ that is a Boolean homomorphism and preserves $\diamond$:
$h(\diamond x)=\diamond h(x)$
Example 1: $\langle P(X),\cup,\emptyset,\cap,X,-,cl\rangle$, where $X$ is any topological space and $cl$ is the closure operator associated with $X$.
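To make Example 1 concrete, here is a small Python sketch (my own toy example, not part of the page) that takes a three-point topological space and verifies the closure-operator identities, together with additivity, on every subset:

```python
# The closure algebra of a three-point topological space: check x <= diamond(x),
# diamond(diamond(x)) = diamond(x), and additivity on every subset.
from itertools import chain, combinations

X = frozenset({0, 1, 2})
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), X]   # a topology on X
closed = [X - u for u in opens]

def closure(a):
    # smallest closed set containing a
    return frozenset.intersection(*[c for c in closed if a <= c])

def subsets(s):
    return (frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

for a in subsets(X):
    assert a <= closure(a)                       # x <= diamond x
    assert closure(closure(a)) == closure(a)     # diamond diamond x = diamond x
    for b in subsets(X):
        assert closure(a | b) == closure(a) | closure(b)   # additivity
print("closure operator axioms hold on the power set of", set(X))
```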
Classtype: variety
Equational theory: decidable
Quasiequational theory: decidable
First-order theory: undecidable
Locally finite: no
Residual size: unbounded
Congruence distributive: yes
Congruence modular: yes
Congruence n-permutable: yes, $n=2$
Congruence regular: yes
Congruence uniform: yes
Congruence extension property: yes
Definable principal congruences: yes
Equationally def. pr. cong.: yes
Discriminator variety: no
Amalgamation property: yes
Strong amalgamation property: yes
Epimorphisms are surjective: yes
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ f(6)= &\\ \end{array}$ |
Let's start trying to understand enriched profunctors. We'll start with a very nice special case: 'feasibility relations'.
Definition. Suppose \( (X, \le_X) \) and \( (Y, \le_Y) \) are preorders. Then a feasibility relation from \(X\) to \(Y\) is a monotone function
$$ \Phi : X^{\text{op}} \times Y \to \mathbf{Bool} .$$If \(\Phi\) is a feasibility relation from \(X\) to \(Y\) we write \( \Phi : X\nrightarrow Y \). If \( \Phi(x,y) = \text{true}\), we say
\(x\) can be obtained given \(y\).
The idea is that we use elements of \( X\) to describe 'requirements' - things you want - and elements of \(Y\) to describe 'resources' - things you have. A feasibility relation \( \Phi : X \nrightarrow Y \) says when what you want can be obtained from what you have! And the fact that it's
monotone makes a lot of sense.
In fact:
Theorem. A function \(\Phi : X^{\text{op}} \times Y \to \mathbf{Bool}\) is a feasibility relation if and only if:
If \(\Phi(x,y) = \text{true}\) and \(x' \le_X x\) then \(\Phi(x',y) = \text{true}\).
If \(\Phi(x,y) = \text{true}\) and \(y \le_Y y'\) then \(\Phi(x,y') = \text{true}\).
Translating this into English, we see it makes perfect sense:
If what you want can be obtained from the resources you have, and then you change your mind and want
less, you can still obtain what you want.
If what you want can be obtained from the resources you have, and then you acquire
more resources, you can still obtain what you want.
But let's prove the theorem. This is mainly a nice review of various concepts.
Proof. First, remember that \(\textbf{Bool}\) is the preorder with two elements \(\text{true}\) and \(\text{false}\), with
$$ \text{false} \le \text{true} . $$ We can read \(\le\) here as 'implies'.
Second, remember that \(X^{\text{op}}\) is the
opposite of the preorder \(X\), with the definition of \(\le\) turned around:
$$ x \le_{X^{\text{op}}} x' \text{ if and only if } x' \le_X x $$Third, remember that \( X^{\text{op}} \times Y \) is the
product of the preorders \(X^{\text{op}}\) and \(Y\). So, its elements are pairs \( (x,y) \) with \(x \in X\) and \(y \in Y\), and we define a concept of \(\le\) on these pairs by
$$ (x,y) \le (x',y') \text{ if and only if } x' \le_X x \text{ and } y \le_Y y' .$$ Note how I said \(x' \le x\). This looks backwards, but it's not one of my usual typos! It works this way because of the \(\text{op}\) in \( X^{\text{op}} \times Y \).
Now we're ready to see exactly what a feasibility relation, that is a monotone function \(\Phi : X^{\text{op}} \times Y \to \mathbf{Bool}\), really is. It's a function that obeys
$$ (x,y) \le (x',y') \text{ implies } \Phi(x,y) \le \Phi(x',y') $$ or in other words
$$ \text{ if } x' \le_X x \text{ and } y \le_Y y', \text{ then } \Phi(x,y) \text{ implies } \Phi(x',y') $$ Translating this into English to see what this means, it says:
If you can get what you want from the resources you have, and then you change your mind and want
less, and also go out and get more resources, then you can still get what you want.
Here "less" really means "less than or equal to", and "more" really means "greater than or equal to" - English is not very good at saying these things quickly! So, the process of wanting less and getting more resources can always be broken into two steps:
wanting less but keeping the resources the same, and then
getting more resources but wanting the same thing.
So, this condition
$$ \text{ if } x' \le_X x \text{ and } y \le_Y y', \text{ then } \Phi(x,y) \text{ implies } \Phi(x',y') $$is equivalent to
the combination of both these conditions:
If \(x' \le_X x \), then \( \Phi(x,y) \) implies \(\Phi(x',y)\).
If \(y \le y'\), then \( \Phi(x,y) \) implies \( \Phi(x,y')\).
But here's another equivalent way to say these two things:
If \(\Phi(x,y) = \text{true}\) and \(x' \le_X x\) then \(\Phi(x',y) = \text{true}\).
If \(\Phi(x,y) = \text{true}\) and \(y \le_Y y'\) then \(\Phi(x,y') = \text{true}\).
Logic! Ain't it great? \( \qquad \blacksquare \)
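If you like to check such arguments mechanically, here is a small Python sketch (entirely mine; the toy preorders and the particular \(\Phi\) are arbitrary choices) that compares the monotonicity condition with the two conditions of the theorem by brute force:

```python
# Two toy preorders X and Y (the usual order on {1,2,3}) and a candidate
# feasibility relation Phi(x, y) = "x is obtainable from y".
X = [1, 2, 3]
Y = [1, 2, 3]
le_X = lambda a, b: a <= b
le_Y = lambda a, b: a <= b
Phi  = lambda x, y: x <= y

def monotone(Phi):
    # Phi : X^op x Y -> Bool is monotone iff
    # x' <=_X x and y <=_Y y'  imply  Phi(x, y) <= Phi(x', y')  (false <= true)
    return all(
        (not Phi(x, y)) or Phi(xp, yp)
        for x in X for xp in X for y in Y for yp in Y
        if le_X(xp, x) and le_Y(y, yp)
    )

def two_conditions(Phi):
    cond1 = all((not Phi(x, y)) or Phi(xp, y)
                for x in X for xp in X for y in Y if le_X(xp, x))
    cond2 = all((not Phi(x, y)) or Phi(x, yp)
                for x in X for y in Y for yp in Y if le_Y(y, yp))
    return cond1 and cond2

assert monotone(Phi) == two_conditions(Phi) == True
print("Phi is a feasibility relation")
```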
Next time I'll show you how to draw pictures of feasibility relations, and look at some examples. We've already drawn pictures of preorders, or at least posets: they're called
Hasse diagrams, and they look like a bunch of dots, one for each element of our poset \(X\), and a bunch of arrows, enough so that \(x \le y\) whenever there's a path of arrows leading from \(x \) to \( y \). So, to draw a feasibility relation \(\Phi : X \nrightarrow Y\), we'll draw two Hasse diagrams and some extra arrows to say when \( \Phi(x,y) = \text{true}\).
Finally, two puzzles:
Puzzle 169. I gave a verbal argument for how we can break up any inequality \( (x,y) \le (x',y') \) in \(X^{\text{op}} \times Y\) into two other inequalities. Can you write this out in a purely mathematical way? Puzzle 170. What if we have a morphism in a product of categories \( \mathcal{C} \times \mathcal{D}\). Can we always write it as a composite of two morphisms, copying the procedure in my verbal argument and Puzzle 169? How does this work? |
oscillations (8 points)2. Series 33. Year - 5. wheel with a spring
We have a perfectly rigid homogeneous disc with radius $R$ and mass $m$, to which a rubber band is connected. The band is fixed by one end at a distance $2R$ from the edge of the disc and by the other end to the edge of the disc. The rubber band behaves as an ideal, thin spring with stiffness $k$, rest length $2R$ and negligible mass. The disc is pivoted at its centre, so it can rotate about an axis through this point but cannot move or change the rotation axis. Find the relation between the magnitude of the moment of force with which the rubber band speeds up or slows down the rotation of the disc and the angle $\phi$. Also write down the equation of motion.
Bonus: Determine the period of the system's small oscillations. (9 points)6. Series 32. Year - 5. elastic cord swing
Matěj was bored by the common swings found at playgrounds, because you can only swing forwards and backwards on them. Therefore, he has invented his own amusement ride, which will move vertically. It will consist of an elastic cord of length $l$ attached to two points separated by distance $l$ at the same height. If he sits in the middle of the attached cord, it will stretch so that the middle is displaced by a vertical distance $h$. Then he pushes himself up and starts to swing. Find the frequency of small oscillations.
Matěj wonders how to hurt little children at playgrounds.
(12 points)5. Series 32. Year - E. thirty centimeters tone
Everyone has, out of boredom, tried to strum a long ruler sticking out over the edge of a school desk. Choose the right model for the frequency as a function of the overhanging length of the ruler and verify it experimentally. Also, describe other properties of the ruler.
Note: Allow only the overhanging part of the ruler to vibrate by fixing the rest of it to the table.
Michal K. found a ruler
(12 points)4. Series 31. Year - E. heft of a string
Measure the linear density of the catgut string which you received together with the tasks. You are forbidden to weigh the catgut.
Hint: You can try to vibrate the string.
Mišo wondered about catguts on ITF.
(8 points)5. Series 30. Year - 4. on a string
Two masses of negligible dimensions and mass $m=100\;\mathrm{g}$ are connected by a massless string with rest length $l_{0}=1\;\mathrm{m}$ and spring constant $k=50\;\mathrm{kg}\cdot \mathrm{s}^{-2}$. One of the masses is held fixed and the other rotates around it with frequency $f=2\;\mathrm{Hz}$. The first mass can rotate freely around its axis. At one point the fixed mass is released. Find the minimal separation of the two masses during the resulting motion. Do not consider the effects of gravity and assume the validity of Hooke's law.
(3 points)3. Series 29. Year - 3. will it jump?
Consider a massless spring with spring constant $k$. Weights of masses $m$ and $M$, respectively, are attached to its two ends. This system is placed on a horizontal surface so that the weight of mass $M$ lies on the surface and the spring with the second weight points up. The system is in equilibrium (i.e. the top weight does not oscillate) and the length of the spring in this state is $l$. How much do we have to compress the spring so that the weight of mass $M$ jumps up when it is released? Consider only vertical motion.
(8 points)1. Series 29. Year - E. small g
Measure the local gravitational acceleration with at least two different methods. Then compare these two methods in detail.
Viktor heard the complaint of the participants that they don't want to constantly be knee deep in water.
(6 points)1. Series 28. Year - S. Unsure
Write down the equations for a throw in a homogeneous gravitational field (you don't need to prove them but you need to know how to use them). Design a machine that will throw an item and determine the angle of launch and the velocity. You can throw the item with a spring: determine its spring constant and the mass of the object, and calculate the kinetic energy and thus the velocity of the item. What do you think is the precision of your values of the velocity and angle? Put the boundaries determined by this error into the equations and show within what boundaries we can expect the distance of the landing from the origin to be. Throw the item with your device at least five times, determine the distance of the landing, and state the boundaries within which you are certain of your distance. Show whether your results fit your predictions. (For a link to a video of a throw you get a bonus point!) Take a pendulum with an amplitude of $x$ which effectively oscillates harmonically, but the frequency of its oscillations depends on the maximum displacement $x_{0}$
$$x(t) = x_0 \cos\left[\omega(x_0) t\right]\,, \quad \omega(x_0) = 2\pi \left(1 - \frac{x_0^2}{l_0^2}\right)\,,$$
where $l_{0}$ is some length scale. We think that we are letting go of the pendulum from $x_{0}=l_{0}/2$, but actually it is from $x_{0}=l_{0}(1+\varepsilon)/2$. By how much does the argument of the cosine differ from $2\pi$ after one predicted period? How many periods will it take for the pendulum to be displaced to the opposite side from the one we expect?
Tip: The argument of the cosine will at that moment differ from the expected one by more than $\pi/2$. Take a pen in your hand and let it stand on its tip on the table. Why does it fall? And what determines whether it will fall to the right or to the left? Why can't you predict a die throw even though the laws of physics should predict it? When you play billiards, is the inability to finish the game only due to being incapable of doing all the necessary calculations? Write down your answers and try to list physical phenomena that occur in daily life which are unpredictable even if we know the situation well. (3 points)4. Series 26. Year - 3. A rubber duck
A passenger on a ferry forgot to set the parking brake. Assume that the axis of the car is aligned with the axis of the ferry, and that because of waves the ferry is undergoing a harmonic motion,
i.e. $\varphi(t)=\Phi\sin(\omega t)$. How far from the edge of the ferry can the passenger park the car without worrying about it falling into the sea? Assume that the maximal amplitude of oscillations is slowly increasing from zero to $\Phi$.
Lukáš and Jáchym were brainstorming about the physics of everyday hygiene. |
Bernoulli, Volume 23, Number 1 (2017), 249-287. Concentration inequalities in the infinite urn scheme for occupancy counts and the missing mass, with applications. Abstract
An infinite urn scheme is defined by a probability mass function $(p_{j})_{j\geq1}$ over positive integers. A random allocation consists of a sample of $N$ independent drawings according to this probability distribution where $N$ may be deterministic or Poisson-distributed. This paper is concerned with occupancy counts, that is with the number of symbols with $r$ or at least $r$ occurrences in the sample, and with the missing mass that is the total probability of all symbols that do not occur in the sample. Without any further assumption on the sampling distribution, these random quantities are shown to satisfy Bernstein-type concentration inequalities. The variance factors in these concentration inequalities are shown to be tight if the sampling distribution satisfies a regular variation property. This regular variation property reads as follows. Let the number of symbols with probability larger than $x$ be $\vec{\nu}(x)=|\{j\colon p_{j}\geq x\}|$. In a regularly varying urn scheme, $\vec{\nu}$ satisfies $\lim_{\tau\rightarrow0}\vec{\nu}(\tau x)/\vec{\nu}(\tau)=x^{-\alpha}$ for $\alpha\in[0,1]$ and the variance of the number of distinct symbols in a sample tends to infinity as the sample size tends to infinity. Among other applications, these concentration inequalities allow us to derive tight confidence intervals for the Good–Turing estimator of the missing mass.
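As a rough illustration of the objects in this abstract (my own toy simulation, not taken from the paper), one can sample from a geometric mass function, compute the true missing mass, and compare it with the Good–Turing estimate, i.e. the number of symbols seen exactly once divided by the sample size:

```python
# Infinite urn scheme with p_j = (1 - q) * q**(j - 1), j = 1, 2, ...:
# compare the true missing mass with the Good-Turing estimator.
import random
from collections import Counter

q, N = 0.95, 2000

def draw():
    # inverse-transform sampling from the geometric distribution on {1, 2, ...}
    u, j, cdf = random.random(), 1, 1 - q
    while u > cdf:
        j += 1
        cdf += (1 - q) * q ** (j - 1)
    return j

sample = [draw() for _ in range(N)]
counts = Counter(sample)

# true missing mass (tail truncated at j = 10000, which is negligible here)
missing_mass = sum((1 - q) * q ** (j - 1) for j in range(1, 10_000) if j not in counts)
good_turing  = sum(1 for c in counts.values() if c == 1) / N

print("true missing mass ~", round(missing_mass, 4), " Good-Turing ~", round(good_turing, 4))
```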
Article information
Source: Bernoulli, Volume 23, Number 1 (2017), 249-287.
Dates: Received: December 2014. Revised: May 2015. First available in Project Euclid: 27 September 2016.
Permanent link to this document: https://projecteuclid.org/euclid.bj/1475001355
Digital Object Identifier: doi:10.3150/15-BEJ743
Mathematical Reviews number (MathSciNet): MR3556773
Zentralblatt MATH identifier: 1366.60016
Citation
Ben-Hamou, Anna; Boucheron, Stéphane; Ohannessian, Mesrob I. Concentration inequalities in the infinite urn scheme for occupancy counts and the missing mass, with applications. Bernoulli 23 (2017), no. 1, 249--287. doi:10.3150/15-BEJ743. https://projecteuclid.org/euclid.bj/1475001355 |
Now let's look at a mathematical approach to resource theories. As I've mentioned, resource theories let us tackle questions like these:
Our first approach will only tackle question 1. Given \(y\), we will only ask
is it possible to get \(x\). This is a yes-or-no question, unlike questions 2-4, which are more complicated. If the answer is yes we will write \(x \le y\).
So, for now our resources will form a "preorder", as defined in Lecture 3.
Definition. A preorder is a set \(X\) equipped with a relation \(\le\) obeying:
reflexivity: \(x \le x\) for all \(x \in X\).
transitivity: \(x \le y\) and \(y \le z\) imply \(x \le z\) for all \(x,y,z \in X\).
All this makes sense. Given \(x\) you can get \(x\). And if you can get \(x\) from \(y\) and get \(y\) from \(z\) then you can get \(x\) from \(z\).
What's new is that we can also
combine resources. In chemistry we denote this with a plus sign: if we have a molecule of \(\text{H}_2\text{O}\) and a molecule of \(\text{CO}_2\) we say we have \(\text{H}_2\text{O} + \text{CO}_2\). We can use almost any symbol we want; Fong and Spivak use \(\otimes\) so I'll often use that. We pronounce this symbol "tensor". Don't worry about why: it's a long story, but you can live a long and happy life without knowing it.
It turns out that when you have a way to combine things, you also want a special thing that acts like "nothing". When you combine \(x\) with nothing, you get \(x\). We'll call this special thing \(I\).
Definition. A monoid is a set \(X\) equipped with:
a binary operation \(\otimes : X \times X \to X\), and
an element \(I \in X\),
such that these laws hold:
the
associative law: \( (x \otimes y) \otimes z = x \otimes (y \otimes z) \) for all \(x,y,z \in X\)
the
left and right unit laws: \(I \otimes x = x = x \otimes I\) for all \(x \in X\).
You know lots of monoids. In mathematics, monoids rule the world! I could talk about them endlessly, but today we need to combine the monoids and preorders:
Definition. A monoidal preorder is a set \(X\) with a relation \(\le\) making it into a preorder, an operation \(\otimes : X \times X \to X\) and element \(I \in X\) making it into a monoid, and obeying:
$$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg
and a slice of bread into a fried egg and a piece of toast!
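Here is a tiny Python spot-check of these axioms (my own sketch, with \( (\mathbb{N}, \le, +, 0) \) as the example); it brute-forces the monoid laws and the compatibility condition over a finite range:

```python
# Spot-check the monoidal-preorder axioms for (N, <=, +, 0) on a finite range.
N = range(0, 20)
otimes, unit = (lambda a, b: a + b), 0

# monoid laws
assert all(otimes(otimes(a, b), c) == otimes(a, otimes(b, c)) for a in N for b in N for c in N)
assert all(otimes(unit, a) == a == otimes(a, unit) for a in N)

# compatibility of <= with the monoid operation
assert all(otimes(a, b) <= otimes(ap, bp)
           for a in N for ap in N for b in N for bp in N
           if a <= ap and b <= bp)
print("(N, <=, +, 0) passes the spot checks")
```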
You know lots of monoidal preorders, too! Many of your favorite number systems are monoidal preorders:
The set \(\mathbb{R}\) of real numbers with the usual \(\le\), the binary operation \(+: \mathbb{R} \times \mathbb{R} \to \mathbb{R} \) and the element \(0 \in \mathbb{R}\) is a monoidal preorder.
Same for the set \(\mathbb{Q}\) of rational numbers.
Same for the set \(\mathbb{Z}\) of integers.
Same for the set \(\mathbb{N}\) of natural numbers.
Money is an important resource: outside of mathematics, money rules the world. We combine money by addition, and we often use these different number systems to keep track of money. In fact it was bankers who invented negative numbers, to keep track of debts! The idea of a "negative resource" was very radical: it took mathematicians over a century to get used to it.
But sometimes we combine numbers by multiplication. Can we get monoidal preorders this way?
Puzzle 60. Is the set \(\mathbb{N}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\) and the element \(1 \in \mathbb{N}\) a monoidal preorder? Puzzle 61. Is the set \(\mathbb{R}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}\) and the element \(1 \in \mathbb{R}\) a monoidal preorder? Puzzle 62. One of the questions above has the answer "no". What's the least destructive way to "fix" this example and get a monoidal preorder? Puzzle 63. Find more examples of monoidal preorders. Puzzle 64. Are there monoids that cannot be given a relation \(\le\) making them into monoidal preorders? Puzzle 65. A monoidal poset is a monoidal preorder that is also a poset, meaning
$$ x \le y \textrm{ and } y \le x \textrm{ imply } x = y $$ for all \(x ,y \in X\). Are there monoids that cannot be given any relation \(\le\) making them into monoidal posets?
Puzzle 66. Are there posets that cannot be given any operation \(\otimes\) and element \(I\) making them into monoidal posets? |
I’m a student with a pure math background starting to work through Arnold’s “Mathematical Methods...” and I’m struggling right off the bat with Section 1.2 on Galilean Structure. (pg 4 - 6)
So we have this affine space $A^4$ accompanied by a space of displacements $\mathbb{R}^4$. Fine.
On page 5, Arnold defined Time as a linear mapping $t:\mathbb{R}^4 \to \mathbb{R}$, and says two events $a,b\in A^4$ are simultaneous if $t(b-a) = 0$. Fine.
Then Arnold says the set of events simultaneous with a given event is a three dimensional subspace $A^3$, to which I say "Not necessarily". The mapping $t(a)= 0$ for all $a\in A^4$ satisfies Arnold's definition of a time mapping, yet clearly has a four-dimensional kernel. Is a three-dimensional kernel a requirement for a Time mapping $t$? If so, Arnold is certainly not clear about that.
But let's say I accept that for now, meaning I believe we have some Time mapping $t$ with a three dimensional kernel. The text then says that we can define the distance between two simultaneous events $a,b\in A^3$ as $\rho(a,b)=\sqrt{\langle a-b, a-b \rangle}$ where $\langle, \rangle$ is the dot product in $\mathbb{R}^3$.
But vector $a-b$ still has the same representation as it did in $\mathbb{R}^4$, (something like $(x_1, x_2, x_3, x_4)$, perhaps) so it does not make sense to directly apply the three dimensional dot product. I feel we would need to choose a basis for $\text{Ker}(t)$ and then we could use the coordinate representation of $b-a$.
I hope my gripes make sense. What I could really use is an extremely rigorous definition of a Galilean structure. |
Nuclear Experiment: New submissions
New submissions for Tue, 15 Oct 19
[1] arXiv:1910.06086 [pdf, other] Title: Time-based Reconstruction of Hyperons at PANDA at FAIR. Comments: 5 pages, 4 figures, Conference Proceedings for Connecting the Dots and Workshop on Intelligent Trackers (CTD/WIT) 2019. Subjects: Nuclear Experiment (nucl-ex); Instrumentation and Detectors (physics.ins-det)
The upcoming PANDA (anti-Proton ANnihilation at DArmstadt) experiment at FAIR (Facility for Anti-proton and Ion Research) offers unique possibilities for performing hyperon physics such as extraction of spin observables. Due to their relatively long-lived nature, the displaced decay vertices of hyperons impose a particular challenge on the track reconstruction and event building. The foreseen high luminosity and beam momenta at PANDA requires new advanced tracking algorithms for successfully identifying the hyperon events. The purely software based event selection of PANDA puts high demands on the online reconstruction. A fast, versatile, modular and dynamic approach to track reconstruction and event building is required. This text addresses the reconstruction algorithms used in the scheme such as the Cellular Automaton. A method for obtaining the z-component of the particle momentum is described as well as methods for merging the information from different track reconstruction detectors.
Cross-lists for Tue, 15 Oct 19
[2] arXiv:1910.05349 (cross-list from nucl-th) [pdf, other] Title: Relativistic energy-density functional approach to magnetic-dipole excitation and its sum rule. Comments: 6 pages, 2 figures, 1 table, contribution to the conference proceedings for INPC 2019 in Glasgow, UK (29th July - 2nd August, 2019). Subjects: Nuclear Theory (nucl-th); High Energy Astrophysical Phenomena (astro-ph.HE); Nuclear Experiment (nucl-ex)
Magnetic-dipole (M1) excitations of $^{18}$O and $^{42}$Ca nuclei are investigated within a relativistic nuclear energy density functional framework. In our last work \cite{2019OP}, these nuclei are found to have unique M1 excitation and its sum rule, because of their characteristic structure: the system consists of the shell-closure core plus two neutrons. For a more systematic investigation of the M1 mode, we have implemented a framework based on the relativistic nuclear energy density functional (RNEDF). For benchmark, we have performed the RNEDF calculations combined with the random-phase approximation (RPA). We evaluate the M1 excitation of $^{18}$O and $^{42}$Ca, whose sum-rule value (SRV) of the M1 transitions can be useful to test the computational implementation \cite{2019OP}. We also apply this RNEDF method to $^{208}$Pb, whose M1 property has been precisely measured \cite{1979Holt,1987Koehler,1988Laszewski,2016Birkhan}. Up to the level of the M1 sum rule, our result is in agreement with the experiments, except the discrepancy related with the quenching factors for $g$ coefficients.
[3] arXiv:1910.05481 (cross-list from nucl-th) [pdf, other] Title: The JETSCAPE framework: p+p results. Authors: A. Kumar, Y. Tachibana, D. Pablos, C. Sirimanna, R. J. Fries, A. Angerami, S. A. Bass, S. Cao, J. Coleman, L. Cunqueiro, T. Dai, L. Du, H. Elfner, D. Everett, W. Fan, C. Gale, Y. He, U. Heinz, B. V. Jacak, P. M. Jacobs, S. Jeon, K. Kauder, W. Ke, E. Khalaj, M. Kordell II, T. Luo, A. Majumder, M. McNelis, J. Mulligan, C. Nattrass, D. Oliinychenko, L.-G. Pang, C. Park, J.-F. Paquet, J. H. Putschke, G. Roland, B. Schenke, L. Schwiebert, C. Shen, R. A. Soltz, G. Vujanovic, X.-N. Wang, R. L. Wolpert, Y. Xu, Z. Yang (The JETSCAPE Collaboration). Comments: 23 pages, 23 figures. Subjects: Nuclear Theory (nucl-th); High Energy Physics - Experiment (hep-ex); Nuclear Experiment (nucl-ex)
The JETSCAPE framework is a modular and versatile Monte Carlo software package for the simulation of high energy nuclear collisions. In this work we present a new tune of JETSCAPE, called PP19, and validate it by comparison to jet-based measurements in $p+p$ collisions, including inclusive single jet cross sections, jet shape observables, fragmentation functions, charged hadron cross sections, and dijet mass cross sections. These observables in $p+p$ collisions provide the baseline for their counterparts in nuclear collisions. Quantifying the level of agreement of JETSCAPE results with $p+p$ data is thus necessary for meaningful applications of JETSCAPE to A+A collisions. The calculations use the JETSCAPE PP19 tune, defined in this paper, based on version 1.0 of the JETSCAPE framework. For the observables discussed in this work calculations using JETSCAPE PP19 agree with data over a wide range of collision energies at a level comparable to standard Monte Carlo codes. These results demonstrate the physics capabilities of the JETSCAPE framework and provide benchmarks for JETSCAPE users.
[4] arXiv:1910.05995 (cross-list from hep-ex) [pdf, other] Title: Precision Measurements of Fundamental Interactions with (Anti)Neutrinos. Authors: R. Petti. Comments: Proceedings of DIS 2019, 6 pages, 3 figures. Subjects: High Energy Physics - Experiment (hep-ex); High Energy Physics - Phenomenology (hep-ph); Nuclear Experiment (nucl-ex); Nuclear Theory (nucl-th)
We discuss the main limitations of past neutrino scattering experiments and possible ways to address them in a next-generation program of precision measurements of fundamental interactions with (anti)neutrinos. A reduction of the longstanding precision gap with respect to electron scattering experiments could provide interesting synergies with the existing efforts in the fixed-target, collider, and nuclear physics communities.
[5] arXiv:1910.06170 (cross-list from nucl-th) [pdf, ps, other] Title: Probing the neutron skin with ultrarelativistic isobaric collisions. Comments: 5 pages, 4 figures. Subjects: Nuclear Theory (nucl-th); Nuclear Experiment (nucl-ex)
Particle production in ultrarelativistic heavy ion collisions depends on the details of the nucleon density distributions in the colliding nuclei. We demonstrate that the charged hadron multiplicity distributions in isobaric collisions at ultrarelativistic energies provide a novel approach to determine the poorly known neutron density distributions and thus the neutron skin thickness in finite nuclei, which can in turn put stringent constraints on the nuclear symmetry energy.
[6] arXiv:1910.06177 (cross-list from hep-ph) [pdf, ps, other] Title: Two-Photon-Exchange effects in $ep\rightarrow en\pi^+$ and their corrections to separated cross sections $\sigma_{\textrm{L,T,LT,TT}}$ at small $-t$. Comments: 6 figures. Subjects: High Energy Physics - Phenomenology (hep-ph); Nuclear Experiment (nucl-ex); Nuclear Theory (nucl-th)
In this work, the two-photon-exchange (TPE) effects in $ep\rightarrow en\pi^+$ at small $-t$ are discussed within a hadronic model. The TPE contributions to the amplitude and the unpolarized differential cross section are both estimated and we find that the TPE corrections to the unpolarized differential cross section are about $-4\%\sim-15\%$ at $Q^2=1$GeV$^2\sim1.6$GeV$^2$. After considering the TPE corrections to the experimental data sets of unpolarized differential cross section, we analyse the TPE corrections to the separated cross sections $\sigma_{\textrm{L,T,LT,TT}}$. We find that the TPE corrections (at $Q^2=1$GeV$^2\sim1.6$GeV$^2$) to $\sigma_{\textrm{L}}$ are about $-10\%\sim -20\%$, to $\sigma_{\textrm{T}}$ are about $20\%$ and to $\sigma_{\textrm{LT,TT}}$ are much larger. From these analyses, we conclude that the TPE contributions in $ep\rightarrow en\pi^+$ at small $-t$ are important for extracting the separated cross sections $\sigma_{\textrm{L,T,LT,TT}}$ and the electromagnetic form factor of $\pi^+$ in the experimental analysis.
[7] arXiv:1910.06292 (cross-list from hep-ph) [pdf, ps, other] Title: Different space-time freeze-out picture -- an explanation of different $\Lambda$ and $\bar\Lambda$ polarization? Comments: LATEX, 7 pages, 8 figures. Subjects: High Energy Physics - Phenomenology (hep-ph); Nuclear Experiment (nucl-ex); Nuclear Theory (nucl-th)
Thermal vorticity in non-central Au+Au collisions at energies $7.7 \leq \sqrt{s} \leq 62.4$ GeV is calculated within the UrQMD transport model. Tracing the $\Lambda$ and $\bar{\Lambda}$ hyperons back to their last interaction point we were able to obtain the temperature and the chemical potentials at the time of emission by fitting the extracted bulk characteristics of hot and dense medium to statistical model of ideal hadron gas. Then the polarization of both hyperons was calculated. The polarization of $\Lambda$ and $\bar{\Lambda}$ increases with decreasing energy of nuclear collisions. The stronger polarization of $\bar{\Lambda}$ is explained by the different space-time distributions of $\Lambda$ and $\bar{\Lambda}$ and by different freeze-out conditions of both hyperons.
[8] arXiv:1910.06293 (cross-list from nucl-th) [pdf, other] Title: Shear viscosity in microscopic calculations of A+A collisions at energies of the Nuclotron-based Ion Collider fAcility (NICA). Comments: LATEX, 8 pages, 10 figures. Subjects: Nuclear Theory (nucl-th); High Energy Physics - Phenomenology (hep-ph); Nuclear Experiment (nucl-ex)
Time evolution of shear viscosity $\eta$, entropy density $s$, and their ratio $\eta / s$ in the central area of central gold-gold collisions at NICA energy range is studied within the UrQMD transport model. The extracted values of energy density, net baryon density and net strangeness density are used as input to (i) statistical model of ideal hadron gas to define temperature, baryo-chemical potential and strangeness chemical potential, and to (ii) UrQMD box with periodic boundary conditions to study the relaxation process of highly excited matter. During the relaxation stage, the shear viscosity is determined in the framework of Green-Kubo approach. The procedure is performed for each of 20 time slices, corresponding to conditions in the central area of the fireball at times from 1~fm/$c$ to 20~fm/$c$. For all tested energies the ratio $\eta / s$ reaches minimum, $\left( \eta/s \right)_{min} \approx 0.3$ at $t \approx 5$~fm/$c$. Then it increases up to the late stages of the system evolution. This rise is accompanied by the drop of both, temperature and strangeness chemical potential, and increase of baryo-chemical potential.
Replacements for Tue, 15 Oct 19
[9] arXiv:1809.02980 (replaced) [pdf, ps, other] Title: Experimentally well-constrained masses of $^{27}$P and $^{27}$S: Implications for studies of explosive binary systems. Authors: L. J. Sun, X. X. Xu, S. Q. Hou, C. J. Lin, J. José, J. Lee, J. J. He, Z. H. Li, J. S. Wang, C. X. Yuan, F. Herwig, J. Keegans, T. Budner, D. X. Wang, H. Y. Wu, P. F. Liang, Y. Y. Yang, Y. H. Lam, P. Ma, F. F. Duan, Z. H. Gao, Q. Hu, Z. Bai, J. B. Ma, J. G. Wang, F. P. Zhong, C. G. Wu, D. W. Luo, Y. Jiang, Y. Liu, D. S. Hou, R. Li, N. R. Ma, W. H. Ma, G. Z. Shi, G. M. Yu, D. Patel, S. Y. Jin, Y. F. Wang, Y. C. Yu, Q. W. Zhou, P. Wang, L. Y. Hu, X. Wang, H. L. Zang, P. J. Li, Q. Q. Zhao, L. Yang, P. W. Wen, F. Yang, H. M. Jia, G. L. Zhang, M. Pan, X. Y. Wang, H. H. Sun, Z. G. Hu, R. F. Chen, M. L. Liu, W. Q. Yang, Y. M. Zhao, H. Q. Zhang. Subjects: Nuclear Experiment (nucl-ex) |
Abbreviation:
MSet
An $\mathbf M$-set is a structure $\mathbf{A}=\langle A,f_m\ (m\in M)\rangle$, where $\mathbf M=\langle M,\cdot,1\rangle$ is a monoid, such that
$f_1$ is the identity map: $1x=x$ and
the monoid action associates: $(m\cdot n)x=m(nx)$
Remark: $f_m(x)=mx$ is a unary operation called the monoid action by $m$.
Let $\mathbf{A}$ and $\mathbf{B}$ be $\mathbf M$-sets. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(f_m^{\mathbf A}(x))=f_m^{\mathbf B}(h(x))$.
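A minimal Python sketch of an $\mathbf M$-set (my own example, not from this page: the additive monoid of natural numbers acting on $\mathbb Z/5$ by shifts) that spot-checks the two axioms:

```python
# The monoid M = (N, +, 0) acting on A = Z/5 by f_m(x) = (x + m) mod 5.
A = range(5)
M = range(10)                       # a finite sample of the monoid N = {0, 1, 2, ...}

def act(m, x):
    return (x + m) % 5

# identity axiom: f_1(x) = x, where "1" is the monoid identity (here 0)
assert all(act(0, x) == x for x in A)

# associativity of the action: (m * n) x = m (n x), with the operation being + here
assert all(act(m + n, x) == act(m, act(n, x)) for m in M for n in M for x in A)
print("(Z/5, shift mod 5) is an M-set for M = (N, +, 0)")
```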
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
[[G-sets]] [[R-modules]] [[Unary algebras]] |
quantum physics (6 points)6. Series 29. Year - S. A closing one
Find, in the literature or online, the change of enthalpy and of Gibbs free energy in the following reaction
$$2\,\mathrm{H}_2\mathrm{O}_2\longrightarrow 2\,\mathrm{H}_2\mathrm{O} + \mathrm{O}_2\,,$$
where both the reactants and the product are gases at standard conditions. Find the change of entropy in this reaction. Give results per mole.
Power flux in a photon gas is given by
$j=\frac{3}{4}\frac{k_{\mathrm{B}}^4\pi^2}{45\hbar^3c^3}cT^4$.
Substitute the values of the constants and compare the result with the Stefan-Boltzmann law.
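A quick numerical check of that comparison (my own sketch, using standard SI values for the constants) shows that the prefactor indeed reproduces the Stefan-Boltzmann constant:

```python
# Substitute SI constants into j = (3/4) * (k_B^4 pi^2 / (45 hbar^3 c^3)) * c * T^4
# and compare the prefactor with the Stefan-Boltzmann constant sigma ~ 5.67e-8.
import math

k_B  = 1.380649e-23      # J/K
hbar = 1.054572e-34      # J*s
c    = 2.997925e8        # m/s

prefactor = 0.75 * (k_B**4 * math.pi**2) / (45 * hbar**3 * c**3) * c
print(prefactor)          # ~5.67e-8 W m^-2 K^-4
```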
Calculate the internal energy and the Gibbs free energy of a photon gas. Use the internal energy to write the temperature of a photon gas as a function of its volume for an adiabatic expansion (a process with $\delta Q=0$). Hint: The law for an adiabatic process with an ideal gas was derived in the second part of this series (Czech only). Considering a photon gas, show that if $\delta Q/T$ is given by
$$\delta Q / T = f_{,T}\,\mathrm{d} T + f_{,V}\,\mathrm{d} V\,,$$
then functions $f_{,T}$ and $f_{,V}$ obey the necessary condition for the existence of entropy, that is
$$\frac{\partial f_{,T}(T, V)}{\partial V} = \frac{\partial f_{,V}(T, V)}{\partial T} $$
(2 points)5. Series 28. Year - 1. stiffness of Mr. Planck
Maybe you have heard about the so-called Planck units, i.e. units expressed in terms of fundamental physical constants – the speed of light $c\approx 3.00\cdot 10^{8}\;\mathrm{m}\cdot \mathrm{s}^{-1}$, the gravitational constant $G=6.67\cdot 10^{-11}\;\mathrm{m}^3\cdot \mathrm{kg}^{-1}\cdot \mathrm{s}^{-2}$ and the reduced Planck constant $\hbar=1.05\cdot 10^{-34}\;\mathrm{kg}\cdot \mathrm{m}^2\cdot \mathrm{s}^{-1}$. This way the Planck time, the Planck length and the Planck mass are often mentioned. What if we were interested in a „Planck spring constant“? Using dimensional analysis with $c$, $G$ and $\hbar$, find the combination whose unit is that of a spring constant, $[k]=\mathrm{kg}\cdot \mathrm{s}^{-2}$. To determine it, assume that the dimensionless constant which cannot be fixed by dimensional analysis is equal to 1.
Karel was learning about quantum dots
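One way to carry out the dimensional analysis is to solve a small linear system for the exponents of $c$, $G$ and $\hbar$; the following sketch (mine, assuming NumPy) does exactly that and recovers $k_{\mathrm P}=c^{11/2}\,G^{-3/2}\,\hbar^{-1/2}$:

```python
# Find exponents a, b, d so that c^a * G^b * hbar^d has the dimensions of a
# spring constant, kg * s^-2.  Rows are the (length, mass, time) exponents.
import numpy as np

#                 L   M   T
dims = np.array([[ 1,  0, -1],    # c    = m s^-1
                 [ 3, -1, -2],    # G    = m^3 kg^-1 s^-2
                 [ 2,  1, -1]])   # hbar = kg m^2 s^-1
target = np.array([0, 1, -2])     # k    = kg s^-2

a, b, d = np.linalg.solve(dims.T, target)
print(a, b, d)   # 5.5, -1.5, -0.5  ->  k_Planck = c^(11/2) G^(-3/2) hbar^(-1/2)
```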
(6 points)6. Series 27. Year - S. series
What will the spectrum of an open string at mass level $M=2/\alpha'$ look like? How many possible states does the string have at this level? If we consider the interaction of tachyons with other strings, we would find that we can describe the tachyon approximately as a particle moving in some potential. We consider a model of a string that is fastened to an unstable D-brane. The relevant potential of the tachyon is defined by
$$V(\phi)=\frac{1}{3\alpha'}\frac{1}{2\phi _0}(\phi-\phi _0)^2\left(\phi + \frac{1}{2}\phi _0\right)\,,$$
where $\alpha'$
The theory of superstrings enables the description of fermions. For their description one needs anticommuting variables. For these one uses an anticommutator instead of a commutator, with the relation
$$\{A,B\}=AB + BA\,.$$
Find two such $2\times 2$
…
(6 points)5. Series 27. Year - S. string
We consider only open strings and we shall limit ourselves to three dimensions. Draw how the following things look: a string moving freely through spacetime, a string fixed with both ends to a D2-brane, and a string between a D2-brane and a D1-brane.
Where can the strings end in the case of three parallel D2-branes?
Choose one of the functions $\mathcal{P}_{\mu}^{\tau}$ or $\mathcal{P}_{\mu}^{\sigma}$ that were defined in the first part of the series and find its explicit form (in other words, a direct dependence on $\dot{X}^{\mu}$ and $X'^{\mu}$). Show that the conditions $\vec{X}'\cdot \dot{\vec{X}}=0$ and $|\dot{\vec{X}}|^2=-|\vec{X}'|^2$
Find the spectrum of energies of a harmonic oscillator. The energy of the oscillator is given by the Hamiltonian
$$\hat{H}=\frac{\hat{p}^2}{2m} + \frac{1}{2}m\omega^2\hat{x}^2\,.$$
The second term is clearly the potential energy, while the first, after substituting $\hat{p}=m\hat{v}$, gives the kinetic energy. We define the linear combination
$$\hat{\alpha}=a\hat{x} + \mathrm{i}\, b\hat{p}\,.$$
Find the real constants $a$ and $b$ such that the Hamiltonian takes the form
$$\hat{H}=\hbar \omega \left(\hat{\alpha}^{\dagger}\hat{\alpha}+\frac{1}{2}\right)\,,$$
where $\hat{\alpha}^{\dagger}$ is the Hermitian conjugate of $\hat{\alpha}$.
Show from your knowledge of the canonical commutation relations for $\hat{x}$ and $\hat{p}$ that the following holds:
$$\left[\hat{\alpha},\hat{\alpha}\right]=0\,,\quad\left[\hat{\alpha}^{\dagger},\hat{\alpha}^{\dagger}\right]=0\,,\quad\left[\hat{\alpha},\hat{\alpha}^{\dagger}\right]=1\,.$$
In the spectrum of the oscillator there will surely be a state with the lowest possible energy, which corresponds to the smallest possible amount of oscillating. Let us call it $|0\rangle$. This state must fulfil $\hat{\alpha}|0\rangle =0$. Show that its energy is equal to $\hbar\omega/2$, i.e. $\hat{H}|0\rangle=(\hbar\omega/2)\,|0\rangle$. Furthermore, prove that if $\hat{\alpha}|0\rangle \neq 0$, then we obtain a contradiction with the fact that $|0\rangle$ has the lowest energy, because there would be a state with $E<\hbar\omega/2$. All the eigenstates of the Hamiltonian can be written as $\left(\hat{\alpha}^{\dagger}\right)^n|0\rangle$ for $n=0,1,2,\dots$ Find the energy of these states, in other words find the numbers $E_n$ such that $\hat{H}\left(\hat{\alpha}^{\dagger}\right)^n|0\rangle=E_n\left(\hat{\alpha}^{\dagger}\right)^n|0\rangle$.
Tip: Use the commutation relations for $\hat{\alpha}^{\dagger}$ and $\hat{\alpha}$.
(4 points)4. Series 27. Year - 4. discharged pudding
There are a lot of models of the hydrogen atom and many of them have been superseded, but we like pudding, so we shall return to the pudding model of hydrogen. The atom is made of a sphere with a radius $R$ with an equally distributed positive charge (the „pudding“), in which we can find an electron (the „raisin“). Obviously the electron prefers being in the place with the lowest possible energy, so it sits in the middle of the pudding. Overall the system is electrically neutral. What is the energy that we must give the electron to get it to infinity? What would the radius have to be so that this energy would be equal to the Rydberg energy (the energy of excitation of an electron in an atom of hydrogen)? Express the radius in multiples of the Bohr radius.
Jakub was making pudding.
(6 points)4. Series 27. Year - S. quantum
Look into the text to see how the operators of position $\hat{X}$ and momentum $\hat{P}$ act on the components of the state vector in the $x$-representation (the wave function), and calculate their commutator, in other words
$$(\hat{X})_x \left((\hat{P})_x\,\psi(x)\right) - (\hat{P})_x \left((\hat{X})_x\,\psi(x)\right)\,.$$
Tip: Find out what happens when you take the derivative of a product of two functions.
The problem of the energy levels of a free quantum particle, in other words for $V(x)=0$, has the following form:
$$-\frac{\hbar^2}{2m}\,\frac{\partial^2\psi(x)}{\partial x^2}=E\,\psi(x)\,.$$
Try inputting $\psi(x)=\mathrm{e}^{\alpha x}$ as the solution and find out for which $\alpha$ (a general complex number) $E$ is positive (only use such $\alpha$ from now on). Is this solution periodic? If yes, with what spatial period (wavelength)?
Is the obtained wave function an eigenvector of the momentum operator (in the $x$-representation)? If yes, find the relation between the wavelength and the momentum (in other words, the respective eigenvalue) of the state.
Try to formally calculate the probability density of the presence of the particle in space for our wave function, according to the formula given in the text. The probability that the particle is found somewhere in the whole space should, for a physical probability density, be equal to 1, i.e.
$$\int_\mathbb{R}\rho(x)\,\mathrm{d}x=1\,.$$
Show that our wave function cannot be normalized (in other words, multiplied by some constant) so that its formal probability density according to the formula from the text becomes a real physical probability density.
Bonus: What do you think the limit of the uncertainty of the position of a particle is if its wave function is close to ours (in other words, it approaches it in all properties, but it always has a normalized probability density and thus is a physical state)? Can we, using Heisenberg's uncertainty relation, determine the lowest possible imprecision in finding the momentum?
Tip: Take care when dealing with complex numbers. For example, the square of a complex number is different from the square of its magnitude.
In the second part of the series we derived the energy levels of an electron in hydrogen using the reduced action. By a fortunate coincidence, solving for the spectrum of the Hamiltonian in the Coulomb potential of a proton leads to exactly the same energies, in other words
$$E_n = -\mathrm{Ry}\,\frac{1}{n^2}\,,$$
where $\mathrm{Ry}=13.6\;\mathrm{eV}$ is an energy constant known as the Rydberg constant. An electron which falls from some energy level to $n=2$ emits energy in the form of a photon, and the magnitude of this energy is equal to the difference of the energies of the two states. From which states can an electron fall so that the emitted light is in the visible spectrum? What will the colours of the spectral lines be?
Tip: Remember the photoelectric effect and the relation between the frequency of light and its wavelength.
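For a quick numerical feel for the last question (my own sketch, not part of the assignment), one can tabulate the wavelengths of the $n\to 2$ transitions with $\mathrm{Ry}=13.6\,\mathrm{eV}$ and $hc\approx 1240\,\mathrm{eV\,nm}$ and see which fall in the visible range:

```python
# Wavelengths of photons emitted when the electron falls from level n to n = 2,
# using E_n = -Ry / n^2 with Ry = 13.6 eV and hc ~ 1240 eV*nm.
Ry, hc = 13.6, 1239.84          # eV, eV*nm

for n in range(3, 10):
    E_photon = Ry * (1 / 4 - 1 / n**2)      # eV
    lam = hc / E_photon                     # nm
    visible = 380 <= lam <= 750
    print(f"n = {n}: lambda = {lam:.0f} nm, visible: {visible}")
```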
(5 points)1. Series 27. Year - P. speed of light
What would the world be like if the speed of light were only $c=1000\;\mathrm{km}\cdot \mathrm{h}^{-1}$ while all the other fundamental constants stayed unchanged? What would be the impact on life on Earth? Would it even be possible for people to exist in such a world?
Karel came up with an unsolvable problem.
5. Series 23. Year - 1. photon fountain
Honza is not satisfied with the current bed standard. Thus, he started to test laser levitation. He bought a ball with a perfectly polished mirror surface, of mass $m$ and radius $r$, and put it on the ground. The ground was immediately lit by a laser with wavelength $\lambda$ and surface power $P$. What is the height of the ball at equilibrium? To get extra points, you may try to solve the problem for a ball made of glass. We suppose, in both cases, that the laser will not melt the ball and the experiment takes place in a homogeneous gravitational field.
brought by Honza Humplík
4. Series 23. Year - 2. Fever
Returning home from an observatory, watching the sunrise, Janap discovered an easy way to calculate the temperature of the Sun. We will give away that the Earth is a perfect black body with a temperature of 0 °C.
solved by Janap in one of her lectures on theoretical physics |
Let $(X, \mathcal{A})$ be a measure space. If $\mu, \nu$ are finite signed measures, then $$|\nu + \mu|(A) \leq |\mu|(A) + |\nu|(A)$$ where $|\mu| := \mu^+ +\mu^-$ is the total variation of $\mu$, and $\mu^+, \mu^-$ are the Jordan decomposition of $\mu.$
I think that it is quite clear that $\mu + \nu$ is a signed measure. Let $E, F$ be the Hahn Decomposition of $X$ with respect to $\mu + \nu$ where $E$ is a negative set and $F$ is a positive set.
Then $$(\mu+\nu)^+(A) = (\mu+\nu)(A \cap F) = (\mu^+ + \nu^+ - \mu^- -\nu^-)(A \cap F) \leq (\mu^+ + \nu^+)(A \cap F) \leq \mu^+(A) + \nu^+(A).$$
Similarly, $$(\mu + \nu)^-(A) = - (\mu + \nu)(A \cap E) \leq \mu^-(A) + \nu^-(A).$$ Then I have the result, but I do not use the assumption that $\mu, \nu$ are finite, so I suspect that my proof is wrong somewhere. Can anyone point out why we need the finiteness of the signed measures, and whether my proof is correct? |
If I know how long one side of a regular hexagon is, what's the formula to calculate the radius of a circle inscribed inside it?
Illustration:
Label the center of the circle. Draw six lines from the center of the circle to the vertices of the hexagon. (These lines will be longer than the radius.) This will divide the hexagon into six triangles.
Question for you: Tell me every thing you can about these triangles. In particular, what are the lengths of the lines from the center?
Now draw six radii of the circle to the six edges of the hexagon. Along with the six "spokes" before you have divided the hexagon into twelve triangles.
Question for you: tell me every thing you can about these triangles. In particular:
are they congruent to each other?
what are the angles of these triangles?
What are the lengths of the sides of these triangles?
And from there I will ask you these two questions: What is the radius of the circle? and, what is the formula for the area of the circle.
The radius equals the height of the equilateral triangles of side $s$.
By Pythagoras,
$$h^2+\left(\frac s2\right)^2=s^2$$ so that
$$h=\frac{\sqrt 3}2s.$$
Draw the six isosceles triangles.
Divide each of these triangles into two right angled triangles.
Then you have
$s = 2x = 2\,(r \tan \theta)$
where $r$ is the radius of the inscribed circle (the distance from the centre to the midpoint of a side), $\theta$ is the top angle in the right-angled triangles, and there are in total $12$ of these triangles, so it is easy to figure out $\theta$. $x$ is the short side in these right-angled triangles and $s$ is of course the outer side in the isosceles triangles, i.e. the side length you say you know.
Hence the formula for the radius is
$$r = \frac{s}{2 \tan \theta}$$
A regular Hexagon can be split into $6$ equilateral triangles. Since the inscribed circle is tangent to the side lengths of the Hexagon, we can draw a height from the center of the circle to the side length of the Hexagon.
Using the $30-60-90$ rule, the height is $\frac {x\sqrt{3}}{2}$ with a Hexagon with a side length of $x$ units.
So the radius of the circle is $\frac {x\sqrt{3}}{2}$ with $x$ as a side length of the Hexagon.
*NOTE: This is only true when the Hexagon is a regular Hexagon!
And for the area of the circle, just use the formula for the area of a circle ($A=\pi r^2$) where $r$ is the radius. |
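A one-line numerical version of the formula derived above (a trivial sketch of mine):

```python
# Inradius (apothem) of a regular hexagon with side length s.
import math

def hexagon_inradius(s: float) -> float:
    return math.sqrt(3) / 2 * s          # equivalently s / (2 * tan(pi / 6))

print(hexagon_inradius(1.0))             # ~0.866
```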
I have mentioned this elsewhere, but it bears repeating because it is such an important concept:
Sufficiency pertains to
data reduction, not parameter estimation per se. Sufficiency only requires that one does not "lose information" about the parameter(s) that was present in the original sample.
Students of mathematical statistics have a tendency to conflate sufficient statistics with estimators, because "good" estimators in general need to be sufficient statistics: after all, if an estimator discards information about the parameter(s) it estimates, it should not perform as well as an estimator that does not do so. So the concept of sufficiency is one way in which we characterize estimators, but that clearly does not mean that sufficiency is about estimation. It is vitally important to understand and remember this.
That said, the Factorization theorem is easily applied to solve (a); e.g., for a sample $\boldsymbol x = (x_1, \ldots, x_n)$, the joint density is $$f(\boldsymbol x \mid \theta) = \prod_{i=1}^n \frac{\theta}{x_i^2} \mathbb 1 (x_i \ge \theta) \mathbb 1 (\theta > 0) = \mathbb 1 (x_{(1)} \ge \theta > 0) \, \theta^n \prod_{i=1}^n x_i^{-2},$$ where $x_{(1)} = \min_i x_i$ is the minimum order statistic. This is because the product of the indicator functions $\mathbb 1 (x_i \ge \theta)$ is $1$ if and only if all of the $x_i$ are at least as large as $\theta$, which occurs if and only if the smallest observation in the sample, $x_{(1)}$, is at least $\theta$. We see that we cannot separate $x_{(1)}$ from $\theta$, so this factor must be part of $g(\boldsymbol T(\boldsymbol x) \mid \theta)$, where $\boldsymbol T(\boldsymbol x) = T(\boldsymbol x) = x_{(1)}$. Note that in this case, our sufficient statistic is a function of the sample that reduces a vector of dimension $n$ to a scalar $x_{(1)}$, so we may write $T$ instead of $\boldsymbol T$. The rest is easy: $$f(\boldsymbol x \mid \theta) = h(\boldsymbol x) g(T(\boldsymbol x) \mid \theta),$$ where $$h(\boldsymbol x) = \prod_{i=1}^n x_i^{-2}, \quad g(T \mid \theta) = \mathbb 1 (T \ge \theta > 0) \theta^n,$$ and $T$, defined as above, is our sufficient statistic.
You may think that $T$ estimates $\theta$--and in this case, it happens to--but just because we found a sufficient statistic via the Factorization theorem, this doesn't mean it estimates anything. This is because any one-to-one function of a sufficient statistic is also sufficient (you can simply invert the mapping). $T^2 = x_{(1)}^2$ is also sufficient (note while $m : \mathbb R \to \mathbb R$, $m(x) = x^2$ is not one-to-one in general, in this case it is because the support of $X$ is $X \ge \theta > 0$).
Regarding (b), MLE estimation, we express the joint likelihood as
proportional to $$\mathcal L(\theta \mid \boldsymbol x) \propto \theta^n \mathbb 1(0 < \theta \le x_{(1)}).$$ We simply discard any factors of the joint density that are constant with respect to $\theta$. Since this likelihood is nonzero if and only if $\theta$ is positive but not exceeding the smallest observation in the sample, we seek to maximize $\theta^n$ subject to this constraint. Since $n > 0$, $\theta^n$ is a monotonically increasing function on $\theta > 0$, hence $\mathcal L$ is greatest when $\theta = x_{(1)}$; i.e., $$\hat \theta = x_{(1)}$$ is the MLE. It is trivially biased because the random variable $X_{(1)}$ is never smaller than $\theta$ and is almost surely strictly greater than $\theta$; hence its expectation is strictly greater than $\theta$.
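As an aside, a small simulation sketch (NumPy; the values $\theta = 2$ and $n = 5$ are arbitrary, and sampling uses the inverse-CDF relation $X = \theta/U$ with $U \sim \mathrm{Uniform}(0,1)$, which follows from the density above) that illustrates the bias:

```python
import numpy as np

rng = np.random.default_rng(42)
theta, n, reps = 2.0, 5, 100_000

u = rng.uniform(size=(reps, n))
samples = theta / u          # each row: one sample of size n from f(x|theta) = theta/x^2, x >= theta
mle = samples.min(axis=1)    # theta_hat = X_(1) for each replication

print(mle.mean())            # noticeably larger than theta = 2.0, so the MLE is biased upward
```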
Finally, we can explicitly compute the density of the order statistic as requested in (c): $$\Pr[X_{(1)} > x] = \prod_{i=1}^n \Pr[X_i > x],$$ because the least observation is greater than $x$ if and only if all of the observations are greater than $x$, and the observations are IID. Then $$1 - F_{X_{(1)}}(x) = \left(1 - F_X(x)\right)^n,$$ and the rest of the computation is left to you as a straightforward exercise. We can then take this and compute the expectation $\operatorname{E}[X_{(1)}]$ to ascertain the precise amount of bias of the MLE, which is necessary to answer whether there is a scalar value $c$ (which may depend on the sample size $n$ but not on $\theta$ or the sample $\boldsymbol x$) such that $c\hat \theta$ is unbiased. |
How to Find Unknown Variables by Cramer's Rule?
The concept of the matrix determinant appeared in Germany and Japan at almost the same time. Seki wrote about it first in 1683 in his Method of Solving the Dissimulated Problems. Seki developed the pattern for determinants of $2 \times 2$, $3 \times 3$, $4 \times 4$, and $5 \times 5$ matrices and used them to solve equations. In the same year, G. Leibniz wrote about a method for solving a system of equations. This method is well known as Cramer's Rule. The determinant of a square matrix $A$ is a unique real number which is an attribute of the matrix $A$. The determinant of the matrix $A$ is denoted by $\det(A)$ or $|A|$. Cramer's rule is a formula for the solution of a system of linear equations. It derives the solution in terms of the determinant of the matrix and of the determinants of matrices obtained from it by replacing one column by the column vector of the right-hand sides of the equations. It is named after Gabriel Cramer (1704–1752), and the rule for an arbitrary number of unknowns was published in the paper [Cramer, G. (1750), "Introduction à l'Analyse des lignes Courbes algébriques" (in French). Geneva: Europeana. pp. 656–659]. To solve a system of linear equations using Cramer's rule we follow these steps:
Calculate the determinant of the square (main) matrix, $D$.
Replace the column of the main matrix corresponding to the variable $x$ by the vector of right-hand sides of the equations and calculate its determinant, $D_x$.
To find the solution for $x$, divide the determinant $D_x$ by the main determinant $D$.
Repeat the previous steps for each variable.
If the main determinant is zero the system of linear equations is either inconsistent or has infinitely many solutions.
Cramer's Rule in Two Variables
: Let us consider the system of equations:
$$\begin{align} &a_1x+b_1y=\color{blue}{c_1}\\ &a_2x+b_2y=\color{blue}{c_2}\end{align} $$The main determinant is $$D=\left| \begin{array}{cc} a_1 & b_1 \\ a_2 &b_2 \\ \end{array}\right|$$and other two determinants are$$D_x=\left| \begin{array}{cc} \color{blue}{c_1} & b_1 \\ \color{blue}{c_2} &b_2 \\ \end{array}\right|\quad\mbox{and}\quad D_y=\left| \begin{array}{cc} a_1 & \color{blue}{c_1} \\ a_2 &\color{blue}{c_2} \\ \end{array}\right|$$With help of determinants, $x$ and $y$ can be found with Cramer's rule as
$$x=\frac{D_x}{D}=\frac{\left| \begin{array}{cc} \color{blue}{c_1} & b_1 \\ \color{blue}{c_2} &b_2 \\ \end{array}\right|}{\left| \begin{array}{cc} a_1 & b_1 \\ a_2 &b_2 \\ \end{array}\right|}\quad\mbox{and}\quad y=\frac{D_y}{D}=\frac{\left| \begin{array}{cc} a_1 & \color{blue}{c_1} \\ a_2 &\color{blue}{c_2} \\ \end{array}\right|}{\left| \begin{array}{cc} a_1 & b_1 \\ a_2 &b_2 \\ \end{array}\right|}$$If every determinant is zero, the system is consistent and equations are dependent. The system has infinitely many solutions. If $D=0$ and $D_x$ or $D_y$ is not zero, the system is inconsistent and does not have a solution.
Cramer's Rule in Three Variables
: Let us consider the system of equations:$$\begin{align} &a_1x+b_1y+c_1z=\color{blue}{d_1}\\ &a_2x+b_2y+c_2z=\color{blue}{d_2}\\ &a_3x+b_3y+c_3z=\color{blue}{d_3}\\ \end{align} $$ The main determinant is $$D=\left| \begin{array}{ccc} a_1 & b_1 &c_1\\ a_2 &b_2 &c_2\\ a_3 &b_3 &c_3\\ \end{array} \right|$$ and the other three determinants are $$D_x=\left| \begin{array}{ccc} \color{blue}{d_1} & b_1 &c_1\\ \color{blue}{d_2} &b_2 &c_2\\ \color{blue}{d_3} &b_3 &c_3\\ \end{array} \right|\quad D_y=\left| \begin{array}{ccc} a_1 & \color{blue}{d_1} &c_1\\ a_2 &\color{blue}{d_2} &c_2\\ a_3 &\color{blue}{d_3} &c_3\\ \end{array} \right|\quad\mbox{and}\quad D_z=\left| \begin{array}{ccc} a_1 & b_1 &\color{blue}{d_1}\\ a_2 &b_2 &\color{blue}{d_2}\\ a_3 &b_3 &\color{blue}{d_3}\\ \end{array} \right|$$ The solution of the system of three equations is $$x=\frac{D_x}{D},\quad y=\frac{D_y}{D},\quad \mbox{and}\quad z=\frac{D_z}{D}$$ For example, let us solve the system of linear equations: $$\begin{align} &3x+4y+5z=10\\ &5x+6y+7z=12\\ &4x+5y+0z=15\\ \end{align} $$ Firstly, we calculate the main determinant (expanding it with the rule of Sarrus): $$\begin{align} D&=\left| \begin{array}{ccc} 3 & 4 &5\\ 5 &6 &7\\ 4 &5 &0\\ \end{array} \right|=\left|\begin{array}{ccc|cc} 3 & 4 & 5&3 & 4 \\ 5& 6 & 7&5& 6 \\ 4 &5 & 0&4&5 \\ \end{array} \right.\\ &=3\cdot6\cdot0+4\cdot7\cdot4+5\cdot5\cdot5-5\cdot6\cdot4-3\cdot7\cdot5-4\cdot5\cdot0=12\end{align}$$ Similarly, $$ D_x=\left| \begin{array}{ccc} \color{blue}{10} & 4 &5\\ \color{blue}{12} &6 &7\\ \color{blue}{15} &5 &0\\ \end{array} \right|=-80,\quad D_y=\left| \begin{array}{ccc} 3 & \color{blue}{10} &5\\ 5 &\color{blue}{12} &7\\ 4 &\color{blue}{15} &0\\ \end{array} \right|=100,\quad D_z=\left| \begin{array}{ccc} 3 & 4 &\color{blue}{10}\\ 5 &6 &\color{blue}{12}\\ 4 &5 &\color{blue}{15}\\ \end{array} \right|=-8$$ |
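As a reference, here is a small Python sketch of those steps (NumPy is used for the determinants; the helper name `cramer` is just illustrative), reproducing the worked example above:

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule (valid only when det(A) != 0)."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    D = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b              # replace the i-th column by the right-hand side
        x[i] = np.linalg.det(Ai) / D
    return x

# The example above: D = 12, D_x = -80, D_y = 100, D_z = -8
A = [[3, 4, 5], [5, 6, 7], [4, 5, 0]]
b = [10, 12, 15]
print(cramer(A, b))   # approximately [-6.67, 8.33, -0.67], i.e. -80/12, 100/12, -8/12
```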
I was thinking about how would capillary action change in a tube (classic example) and in a tube fitted inside another tube (considering water as the liquid involved).
Height of liquid column (Jurin's law): $$h=\frac{2\gamma \cos\theta}{\rho g r}$$ where:
$\gamma$ = liquid-air surface tension
$\theta$ = contact angle
$\rho$ = density of liquid
$g$ = gravity acceleration
$r$ = radius
I tried my best to draw the examples I'm interested in order to help my explanation.
I didn't consider the capillarity inside the smaller tube in both example #2 and #3 because I'd like to assume that "a/2" in example #1 is close to "c" in example #2 and #3 (drawings not to scale).
Since from what I understand the column height is given, among other things (most of which can't be changed, like liquid-air surface tension, contact angle, density of liquid and gravity acceleration), by the tube radius, I'd like to know if "c" in example #2 can be considered as "a/2" in example #1 to calculate column height using above formula.
Also I'd like to know how having beads of slightly smaller diameter than "c" between the two tubes (example #3) would affect the column height. If said beads were less dense than water, could they still improve column height or would they just form a floating mat on top of 1 unit thickness? What'd be the column height of example #2 and #3 assuming "c" as 1mm ?
I'm quite sure that, given the same reached height "h" in examples #2 and #3, "c" of #2 has to be smaller than "c" in #3, though apparently it is the opposite (the beads lower the water column), as another user pointed out with a ScienceDirect article.
Edit: I've been told that If $c\ll R$ the radius of your outer tube, the total curvature is approximately $\cos \theta/c$, so you will get $$h = \frac{\gamma}{\rho g} \frac{\cos \theta}{c}.$$
Beads will usually lower the apparent surface tension, so you'll get a lower column, although the amount of that depends on their wetting properties and of their arrangement (packing)
However by $c \ll R$ which orders of magnitude/fractions are we talking about? I guess $c$ still has to be smaller than water's capillary length (about 2.7mm), right? If $c$ is slightly smaller than the outer tube radius (like 1/6), how'd total curvature and hence Jurin's law (above formula) be affected?
Thank you very much |
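As a rough numeric sketch of the two formulas quoted above (assuming room-temperature water, $\gamma \approx 0.073\ \mathrm{N/m}$, $\theta \approx 0$, $\rho = 1000\ \mathrm{kg/m^3}$, $g = 9.81\ \mathrm{m/s^2}$; these property values are assumptions, not from the question):

```python
import math

gamma = 0.073      # N/m, liquid-air surface tension (assumed)
theta = 0.0        # rad, contact angle (assumed fully wetting)
rho = 1000.0       # kg/m^3
g = 9.81           # m/s^2

r = 0.5e-3         # m, tube radius "a/2" in example #1
c = 1.0e-3         # m, gap width "c" in examples #2 and #3

h_tube = 2 * gamma * math.cos(theta) / (rho * g * r)   # classic Jurin's law
h_gap = gamma * math.cos(theta) / (rho * g * c)        # annular-gap estimate quoted above

print(h_tube * 100, "cm")   # about 3 cm for a 0.5 mm tube radius
print(h_gap * 100, "cm")    # about 0.7 cm for a 1 mm gap
```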
Can anybody explain LCM and HCF to me? I am not talking about the method of computing them; what is their meaning, with a complete description?
Note by Hoor Ulain 4 years, 2 months ago
Okay, so what are the applications of LCM and HCF?
Check out the wikis of Lowest Common Multiple and Greatest Common Divisor.
Please explain the reason for prime factorization: why do we take only prime factors for the LCM?
Because that is the easiest way to calculate the LCM for 2 "random" numbers. Assuming you know the prime factorization of each term (which could be hard)
Do you know the proof of the following theorem?
@Calvin Lin – OK, you are saying that it is the easiest way, but what about 4? We can also use it in a factorization. I know it is not a prime number, but why can't we use it? I still haven't got an answer to my question.
The "HCF" is the "Highest Common Factor" of a given list of integers. The highest common factor is the integer which can divide each and every integer in the given list.
The "LCM" is the "Lowest Common Multiple" of a given list of integers. The lowest common multiple is the smallest integer which is divisible by each and every integer in the given list.
What about its applications?
Please create a complete page for HCF and LCM involving all their concepts and properties.
|
I am having trouble with the following problem. I keep on getting a long unmanageable result, so any suggestion as to where I've gone wrong or how to do this would be a lifesaver. Please?
Consider a Vector Field in $\mathbb R^3 $ Given By F($\bar{x}$)=$\bar{\varepsilon} \times \bar{x}$
Where $\bar{\varepsilon}$ is a fixed non-zero vector and $\bar{x}$ is some variable vector
Compute this vector field in spherical coordinates. I have assumed that they want me to express this field using the spherical coordinate basis vectors $\bar{e}_{p}$, $\bar{e}_{\phi}$, $\bar{e}_{\theta}$:
$\bar{e}_{p} = \cos \theta \sin \phi \,\bar{i} + \sin \theta \sin \phi \,\bar{j} + \cos \phi\,\bar{k}$
$\bar{e}_{\phi} = \cos \theta \cos \phi \,\bar{i} + \cos \phi \sin \theta \,\bar{j} - \sin \phi \,\bar{k}$
$\bar{e}_{\theta} = \sin \theta \,\bar{i} + \cos \theta \,\bar{j}$
Let $\bar{x}$ = x$\bar{i}$+y$\bar{j}$ + z$\bar{k}$
In spherical coordinates $x = p \cos \theta \sin \phi$, $y = p \sin \theta \sin \phi$, $z = p \cos \phi$, so
$\bar{x} = p \cos \theta \sin \phi \,\bar{i} + p \sin \theta \sin \phi \,\bar{j} + p \cos \phi\,\bar{k}$
Then $\bar{x}$= p$\bar{e }_{p}$
I then calculated the Fixed Vector in relation to the spherical coordinate base
$\bar{\varepsilon}$ = a$\bar{i}$+b$\bar{j}$ + c$\bar{k}$ (Vector in relation to Cartesian Base) where a,b,c, are constants
$\bar{\varepsilon} = a(\cos \theta \sin \phi \,\bar{e}_{p} + \cos \phi \cos\theta \,\bar{e}_{\phi} - \sin \theta \,\bar{e}_{\theta}) + b(\sin \theta \sin \phi \,\bar{e}_{p} + \cos \phi \sin \theta \,\bar{e}_{\phi} + \sin \theta \,\bar{e}_{\theta}) + c(\cos\phi \,\bar{e}_{p} - \sin \phi \,\bar{e}_{\phi})$ (used the inverse relation between the bases)
$\bar{\varepsilon} = (a\cos \theta \sin \phi + b\sin \theta \sin \phi + c\cos\phi )\,\bar{e}_{p} + (a\cos \phi \cos\theta + b\cos \phi \sin \theta - c\sin \phi )\,\bar{e}_{\phi} + (b\sin \theta - a\sin \theta )\,\bar{e}_{\theta}$
I then took the cross product of these two vectors. (I know how to do the cross product, so that's not an issue; however, I am not sure that what I have done here is correct and I am extremely uncomfortable with the result.) Where have I gone wrong? Any help, suggestion or comment would be very gratefully received. |
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ... |
Abbreviation: BoolLat
A Boolean lattice is a bounded distributive lattice $\mathbf{L}=\langle L,\vee ,0,\wedge ,1\rangle $ such that
every element has a complement: $\exists y(x\vee y=1\mbox{ and }x\wedge y=0)$
Let $\mathbf{L}$ and $\mathbf{M}$ be bounded distributive lattices. A morphism from $\mathbf{L}$ to $\mathbf{M}$ is a function $h:L\to M$ that is a bounded lattice homomorphism:
$h(x\vee y)=h(x)\vee h(y)$, $h(x\wedge y)=h(x)\wedge h(y)$, $h(0)=0 $, $h(1)=1$
Example 1: $\langle \mathcal P(S), \cup, \emptyset, \cap, S\rangle$, the collection of subsets of a set $S$, with union, empty set, intersection, and the whole set $S$.
Classtype: first-order
Equational theory: decidable
Quasiequational theory: decidable
First-order theory: decidable
Congruence distributive: yes
Congruence modular: yes
Congruence n-permutable: yes
Congruence regular: yes
Congruence uniform: yes
Congruence extension property: yes
Definable principal congruences: yes
Equationally def. pr. cong.:
Amalgamation property:
Strong amalgamation property:
Epimorphisms are surjective:
Locally finite: yes
Residual size:
Any finite member is a power of the 2-element Boolean lattice. |
Rest Mass
The rest mass of a particle is the mass the particle has when its speed is zero, and is labelled \(m_0\). The mass of a particle is not fixed. As the speed increases, so does the mass, according to the equation
\[m=\frac{m_0}{\sqrt{1-v^2/c^2}}.\]
Because
\[\gamma = \frac{1}{\sqrt{1-v^2/c^2}} \ge 1,\]
we have \(m_0 \le m\).
The rest mass is the most fundamental measure of mass for a particle, but it must be noted that the speed of a particle can never be exactly zero because of the Heisenberg Uncertainty Principle,
\[\Delta p \,\Delta x \ge \hbar.\]
This means that the mass of a particle can never be equal to its rest mass. Nevertheless, the rest mass is an important idea and represents the smallest possible mass of a particle. |
In class this week we've been learning about the CFLs and their closure properties. I've seen proofs for union, intersection and complement, but for reversal my lecturer just said it's closed. I wanted to see the proof, so I've been searching for the past few days, but all I've found is that most people just say that reversing the productions is enough to prove it. Those that do go a little more formal just state there is an easy inductive proof you can give. Can anyone provide me with some more information/hints about the inductive proof? Try as I might I can't come up with it.
Your sources are right, and I am afraid there is only little to add, except formalism. I denote the reverse (mirror) of string $w$ by $w^R$.
If $G$ is a grammar, let $H$ be its reversed, so for production $A\to w$ in $G$ we have $A\to w^R$ in $H$.
Then by induction we show that $A\Rightarrow_G^*w$ iff $A\Rightarrow_H^*w^R$.
( basis) In zero steps we have $A\Rightarrow_G^0 A$ iff $A\Rightarrow_H^0 A$. ( induction) Assuming $A\Rightarrow_G^*w_1Bw_2$ iff $A\Rightarrow_H^*w_2^RBw_1^R$ we can apply any production $B\to u$ in $G$ (and in $H$ in reverse) and obtain $A\Rightarrow_G^*w_1uw_2$ and $A\Rightarrow_H^*w_2^Ru^Rw_1^R$ respectively, where indeed $w_2^Ru^Rw_1^R$ is the reverse of $w_1uw_2$.
This is a very condensed proof, but contains all necessary ingredients. Again, a derivation of the reverse grammar is the reverse of the original one. This is especially clear when looking at the two derivation trees.
There is another way to look at this problem.
Consider a language $L$ that is a CFL. This means that there is a grammar $G=(N,\Sigma,P,S)$ that generates $L$. We can assume that it is in Chomsky Normal Form.
If $\epsilon$ is part of the language, then trivially $\epsilon^R$ is also part of the language. Now replace every production of the form $P_1 \longrightarrow AB$ with $P_1 \longrightarrow BA$, and leave the productions of the form $P_1 \longrightarrow a$, where $a \in \Sigma$, unchanged.
From the parse tree of the derived string, it is easy to see that the language derived will be exactly the reverse of the initial language as the construction mirrors the original parse tree.
First off: CFLs are not closed under intersection or complement (or difference, for that matter). They are closed under union, concatenation, Kleene star closure, substitution, homomorphism, inverse homomorphism, and reversal. NOTE: the two homomorphisms are usually not covered in an intro computer theory course.
To prove reversal, let $L$ be a CFL with grammar $G=(V,T,P,S)$. Let $L^R$ be the reverse of $L$, with grammar $G^R = (V,T,P^R,S)$; that is, reverse every production.
Ex. $P \to AB$ would become $P \to BA$
Since $G^R$ is a CFG, $L(G^R)$ is a CFL. |
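As a small illustration of "reverse every production", here is a Python sketch (the dictionary encoding of a grammar is an assumption made for this example, not something from the answers above):

```python
def reverse_grammar(productions):
    """Reverse every right-hand side; each RHS is a list of terminals/nonterminals."""
    return {head: [list(reversed(rhs)) for rhs in bodies]
            for head, bodies in productions.items()}

# Example: S -> a S b | c generates a^n c b^n; the reversed grammar
# generates b^n c a^n, i.e. exactly the reversed strings.
G = {"S": [["a", "S", "b"], ["c"]]}
print(reverse_grammar(G))   # {'S': [['b', 'S', 'a'], ['c']]}
```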
Other answers address the question of gravity, so I'll just expand on the atmosphere topic.
The Problem
The biggest physics issue with your proposed world is the atmosphere.
A star is formed when enough gas is present that the gravity from all of the gas is enough to collapse it down into a dense hot sphere. It would thus collapse any atmosphere around it into itself. First you may think, well maybe the star only collapses the heavy elements and the light ones could still form an atmosphere. However, most of a star is hydrogen, the lightest element. So this is not true.
The density and pressure of a gas grow exponentially towards the source of gravity, so it is impossible to have an extremely thick atmosphere without reaching pressures where things form plasmas. The exponential growth is because each layer of gas must have a pressure that can support all of the weight above it.
Even if the star initially didn't have enough gravity by itself to suck down a giant atmosphere, even a low density atmosphere would have more mass and thus gravity than a typical star.
The Solution
If you want to have a giant atmosphere, consider having it on the outside of a shell. So then you'd have a star, a large vacuum, a (possibly transparent (maybe even diamond)) inner shell, an atmosphere, and an optional outer shell.
With this arrangement your gravity in the atmosphere would be:
$$g=\frac{G\,(M_{star}+M_{shell}+ M_{atm})}{r^2}$$
Where $r$ is the distance to the center of the star, $g$ is your gravitational acceleration, $G$ is the universal constant of gravitation, and the $M_{star}$, $M_{shell}$, and $M_{atm}$ are the masses of the star, inner shell, and portion of the atmosphere closer to the star than $r$ respectively.
So now let's take a look at the equations for the atmosphere to see if we can come up with some numbers that will fit your criteria:
First the specific ideal gas law relating temperature $T$, density $\rho$, and pressure $P$:
$$\rho=\frac{P}{RT}$$
Where $R$ is the specific gas constant (for air = $286.9\frac{J}{kg\,K}$)
Let's say the temperature is constant to simplify our analysis, and keep our creatures comfortable.
The change in mass of atmosphere closer than $r$ as $r$ increases will just be the surface area of the sphere of size $r$ times the density at that $r$:
$$\frac{d\,M_{atm}}{dr}=4\,\pi\,r^2\,\rho=\frac{4\,\pi}{RT}\,r^2\,P$$
Then since the pressure of the atmosphere must support the weight of the gas above it, the rate of change is also related to the density:
$$\frac{dP}{dr}=-\rho\,g=-\frac{P}{RT}\frac{G\,(M_{star}+M_{shell}+ M_{atm})}{r^2}$$
To simplify things a little let's change our variables:
$$M=M_{star}+M_{shell}+ M_{atm}$$
$$\frac{dM}{dr}=\frac{dM_{atm}}{dr}$$
Now we almost have enough information to do a numerical integration; we just need our initial values, and our constants. So let's try:
$$T=25^\circ C$$
$$M_0=M_{sol}= 10^{30} kg$$
$$P_0=1 atm = 10^5 Pa $$
$$r_0 = 1 AU = 1.5\times 10^{11} m$$
Integrating numerically we can get a plot of pressure vs altitude:
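A rough sketch of that numerical integration (assuming SciPy; the constants and initial values are the ones listed above, with the sign convention that pressure decreases outward):

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
R_air = 286.9        # specific gas constant of air, J kg^-1 K^-1
T = 298.15           # constant temperature (25 C), K

M0 = 1e30            # star + shell mass, kg
P0 = 1e5             # 1 atm at the inner shell, Pa
r0 = 1.5e11          # 1 AU, m

def rhs(r, y):
    """y = [P, M]: hydrostatic balance plus the accumulated atmosphere mass."""
    P, M = y
    rho = P / (R_air * T)            # ideal gas law
    dP = -rho * G * M / r**2         # pressure decreases outward
    dM = 4.0 * np.pi * r**2 * rho    # gas mass enclosed below radius r
    return [dP, dM]

sol = solve_ivp(rhs, (r0, r0 + 1e7), [P0, M0], dense_output=True, rtol=1e-8)

for alt_km in (0, 1000, 3000, 5000):
    P = sol.sol(r0 + alt_km * 1e3)[0]
    print(alt_km, P / P0)            # pressure fraction vs altitude; compare with the drop described below
```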
As you can see, the pressure drops off to less than three quarters of the initial pressure (enough to cause altitude sickness) by about 5000 km. Certainly a thicker breathable atmosphere than the measly 2.4 km that earth has, but let's see if we can do better.
By increasing our starting radius and decreasing the mass of our star and shell we can decrease the rate of pressure drop off, so let's look at a start with near the minimum mass to still be a red dwarf, about a tenth of our sun, and let's start out 100 times as far away:
For this system it looks like the breathable atmosphere would extend out to 7000 km: not much of an improvement for the extremes it took to get there. At 100 AU out from the star, you'd probably need an outer shell to insulate and keep your atmosphere warm.
Another concern is how you'd get an inner shell with a radius of curvature of 1 AU to withstand an atmosphere's worth of pressure: even if it were 100 miles thick, it would need to withstand about 100 GPa of stress (diamond breaks somewhere in the 70-300 GPa range). Of course, a 100-mile-thick shell of diamond would have a mass of about 160 solar masses, and with that much mass it would also need to support its own weight in addition to the atmosphere. It turns out we just can't do this, so maybe just hand-wave it away? Weightless force field? |
We have $k$ independent random variables with exponential distributions, $T_1, T_2, \ldots , T_k$, whose parameters are $\lambda,\frac{\lambda}{2},\frac{\lambda}{3},\ldots,\frac{\lambda}{k}$. What is the distribution of the new variable $T = T_1 + T_2 + \cdots + T_k$?
you can use the main result here applied to the sequence $\lambda, \lambda/2,...,\lambda/k$, which for the general case states that
$$ h_{X_1,\ldots,X_n}(x) = \left[ \prod_{i=1}^n \lambda_{i} \right] \sum_{j=1}^n \frac{e^{-\lambda_j x}}{\prod_{k \neq j}(\lambda_k-\lambda_j)} $$
Hint: You could use convolution to calculate the distribution of two independent variables:
Assume $X$ has density $f_X$ and $Y$ has density $f_Y$; then $Z=X+Y$ has density
$$f_Z(z)=\int _{-\infty}^{+\infty}f_X(z-y)f_Y(y) \, dy.$$
Following this logic, you just perform the convolutions one after another, and you get the result. |
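For a concrete check, here is a small Python sketch of that closed form (the helper name `hypoexp_pdf` is just illustrative) for the rates $\lambda, \lambda/2, \ldots, \lambda/k$, compared against simulation:

```python
import numpy as np

def hypoexp_pdf(x, rates):
    """Density of a sum of independent exponentials with pairwise-distinct rates."""
    x = np.asarray(x, dtype=float)
    rates = np.asarray(rates, dtype=float)
    out = np.zeros_like(x)
    for j, lj in enumerate(rates):
        denom = np.prod([lk - lj for k, lk in enumerate(rates) if k != j])
        out += np.exp(-lj * x) / denom
    return np.prod(rates) * out

lam, k = 1.0, 4
rates = [lam / i for i in range(1, k + 1)]       # lambda, lambda/2, ..., lambda/k

xs = np.linspace(0, 100, 200_001)
print(np.trapz(hypoexp_pdf(xs, rates), xs))      # ~1.0, the density integrates to one

rng = np.random.default_rng(0)
T = sum(rng.exponential(scale=1.0 / r, size=200_000) for r in rates)
print(T.mean(), sum(1.0 / r for r in rates))     # simulated vs exact mean (both ~10 here)
```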
Show that if $d$ is a metric for $X$, then $d'(x,y) =\frac{d(x,y)}{1+d(x,y)}$ is a bounded metric that gives the topology of $X$.
In dbfin.com the solution reads as follows :
Now, we show that d′ induces the same topology as d . Since $f$ and $f^{−1}(y)=\frac{y}{1−y}:[0,1)→R^+$ are continuous, $d′=f∘d$ and $d=f^{−1}∘d′$ , $d′$ is continuous in the $d$ -topology, and $d$ is continuous in the $d′$ -topology, implying that the topologies are the same .
Which topologies are they talking about? I think they mean the coarsest topologies on $X \times X$ such that $d : X \times X \to \mathbb R$ and $d' : X \times X \to \mathbb R$ are continuous.
Can anyone please correct me if I have gone wrong anywhere? |
I apologize in advance for the length.
The equation $\sin x = (\log x)^{-1}$ has exactly one solution $x_n$ in the interval $(2\pi n,2\pi n + \pi/2)$ for $n \geq 1$, and the exercise (de Bruijn, Asymptotic Methods in Analysis, ch. 2) asks me to show that
$$ x_n = 2\pi n + (\log 2\pi n)^{-1} + O((\log 2 \pi n)^{-3}). $$
To start, we have $0 < x_n - 2 \pi n < 1$ and
$$ \sin x_n = \sin (x_n - 2 \pi n) = (\log x_n)^{-1} \to 0, $$
so $x_n - 2 \pi n \to 0$. It would make sense then to make the substitution $x_n = z + t$, where $t = 2 \pi n$, so that we're now concerned with finding the asymptotic behavior of $z$ in terms of $t$ as $t \to \infty$ in the equation
$$ \sin z = (\log (z + t))^{-1}. $$
That is, we want to show that
$$ z = (\log t)^{-1} + O((\log t)^{-3}). $$
So far I've only been able to show that $z = O((\log t)^{-1})$ through arguments which are probably not sound.
I've tried to apply the Lagrange Inversion Formula but I can't seem to get it into the right form. If we let $w = \log t$, then $w = \frac{z}{f(z)}$, where
$$ f(z) = \frac{z}{\log(e^{1/\sin z} - z)}. $$
But $f(0) = 0$ (and, probably more importantly, $f$ isn't analytic at $0$), so I can't apply Lagrange. Of course there may be a "correct" way to rearrange the equation to put it into Lagrange form.
I've also considered applying Newton's method, but I don't know if that's valid. Applying the method to $(\log (z + t))^{-1} - \sin z$ with $z_0 = 0$ I get
$$ z_1 = - (\log t)^{-1} ( 1 + O((t \log t)^{-2})), $$
which at least has the right asymptotic behavior in the first term. Trying to iterate using, for example, $x_0 = (\log t)^{-1}$ in the hopes of getting more stable terms leads me to a wall of computation, and I doubt that's the goal of the problem. More importantly, even if I did get a stable asymptotic series as I continued to iterate, I don't know whether I'm actually converging to the actual root of the equation.
Lastly I should mention that I've also tried letting $z = x_n(2 \pi n)^{-1} - 1$, but this didn't seem to lead anywhere helpful.
Any tips? |
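As a numerical sanity check of the target expansion (a sketch assuming SciPy's `brentq`; it only locates $x_n$ on each interval and compares it with $2\pi n + (\log 2\pi n)^{-1}$):

```python
import numpy as np
from scipy.optimize import brentq

def x_n(n):
    f = lambda x: np.sin(x) - 1.0 / np.log(x)
    return brentq(f, 2 * np.pi * n, 2 * np.pi * n + np.pi / 2)

for n in (1, 10, 100, 1000):
    x = x_n(n)
    err = x - (2 * np.pi * n + 1.0 / np.log(2 * np.pi * n))
    # the last column should stay bounded if the error really is O((log 2 pi n)^-3)
    print(n, x, err, err * np.log(2 * np.pi * n) ** 3)
```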
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions $f$ and $g$, the cross-correlation is defined as $(f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau$, where $f^*$ denotes the complex conjugate of $f$.
That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever?
And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time
Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"?Alternatively, where could I go in order to have such a question answered?
@tpg2114 For reducing data point for calculating time correlation, you can run two exactly the simulation in parallel separated by the time lag dt. Then there is no need to store all snapshot and spatial points.
@DavidZ I wasn't trying to justify it's existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series
Each one is equally spaced, 1e-9 seconds apart
The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are
The solid blue line is the abs(shear strain) and is valued on the right axis
The dashed blue line is the result from scipy.signal.correlate
And is valued on the left axis
So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how
Because I don't know how the result is indexed in time
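A minimal sketch of how to read the output (assuming SciPy ≥ 1.6 for `correlation_lags`; the synthetic series below just stand in for the strain and dT/dt data):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
n, shift = 200, 30
x = rng.standard_normal(n)
y = np.roll(x, shift)            # y is x delayed by 30 samples

# Demean before correlating so constant offsets don't dominate the result
corr = signal.correlate(y - y.mean(), x - x.mean(), mode="full")
lags = signal.correlation_lags(len(y), len(x), mode="full")

print(len(corr))               # 2*200 - 1 = 399: one value per possible lag,
                               # which is where the "~400 time steps" comes from
print(lags[np.argmax(corr)])   # 30, the shift used to build y; a synthetic test
                               # like this also pins down the sign convention
```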
Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th...
So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag
I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating. Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question
It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy
For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \...
Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics
Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay
@jinawee oh, that I don't think will happen.
In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have.
So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is
Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level'
Others would argue it's not on topic because it's not conceptual
How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss...
I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed.
And what about selfies in the mirror? (I didn't try yet.)
@KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean.
Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods.
Or maybe that can be a second step.
If we can reduce visibility of HW, then the tag becomes less of a bone of contention
@jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework
@Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter.
@Dilaton also, have a look at the topvoted answers on both.
Afternoon folks. I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because its really a math question, but I figured I'd ask anyway)
@DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on.
hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least.
Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes.
MO is for research-level mathematics, not "how do I compute X"
user54412
@KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube
@ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper) |
In the course of solving a certain problem, I've had to evaluate integrals of the form:
$$\int_0^\infty \frac{x^k}{1+\cosh(x)} \mathrm{d}x $$
for several values of $k$. I've noticed that, for $k$ a positive integer other than 1, the result is seemingly always a dyadic rational multiple of $\zeta(k)$, which is not particularly surprising given some of the identities for $\zeta$ ($k=7$ is the first value for which that multiple is not an integer).
However, I've been unable to find a nice way to evaluate this integral. I'm reasonably sure there's a way to change this expression into $\int \frac{x^{k-1}}{e^x+1} \mathrm{d}x$, but all the things I tried didn't work. Integration by parts also got too messy quickly, and Mathematica couldn't solve it (though it could calculate for a particular value of k very easily).
So I'm looking for a simple way to evaluate the above integral. |
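A quick numerical check of that observation (a sketch assuming SciPy; it just evaluates the integral and divides by $\zeta(k)$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

def I(k):
    # 1/(1+cosh x) rewritten as 2 e^{-x}/(1+e^{-x})^2 to avoid overflow for large x
    integrand = lambda x: x**k * 2 * np.exp(-x) / (1 + np.exp(-x))**2
    val, _ = quad(integrand, 0, np.inf)
    return val

for k in range(2, 8):
    # the ratios come out as dyadic rationals; k = 7 gives the first non-integer
    print(k, I(k) / zeta(k))
```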
Key Idea $\ $ Composite polynomials take composite values (except for finitely many values)
Indeed, suppose that $\ f(x)\color{#c00}{\ne 0}\ $ is a composite polynomial: $\, f(x) = g(x)h(x)\,$ with $\ g,\,h\color{#c00}{\ne \pm1}.\,$ Then $\, f(n) = g(n)h(n)\, $ is a composite integer if $\,g(n),\,h(n)\,\neq\, 0,\,\pm1.\,$ The possible exceptions to this are
finite in number: when $\,n\,$ is a root of $\ g,\, h,\, g\pm1,\,$ or $\, h\pm1, \, $ all of which are $\color{#c00}{nonzero}$ polynomials, hence have finite sets of roots. $\ $ QED
Remark $\ $ For a specific composite polynomial $\,f = gh\,$ this yields a simple algorithm to enumerate its finitely many prime values: test if $\,f(n)\,$ is prime as $\,n\,$ ranges over the roots of $\,g\pm1\,$ or $\,h\pm1.\,$ Applying this to $\, f = x^3-1 = (x-1)(x^2\!+x+1)\,$ quickly yields the sought result.
Hence the method used in the other answers is a special case of a method that works generally. Furthermore, this is an instance of a
general philosophy relating the factorizations of polynomials to the factorizations of their values (see said answer for much more on this viewpoint). |
The OP asks:
What am I doing wrong with this method?
When should I not use polar coordinates to find limits of multivariable functions?
The answer to the second question is somewhat unsatisfactory: If you find a limit, then you can. If you don't find a limit, then you can't.
So now, let's just leave that behind us and focus on the first question: "What am I doing wrong with this method?"
For this I will just consider the case where we have cartesian coordinates. The analogy with polar coordinates should be evident.
The mistake you made actually has nothing to do with "polar coordinates" per se, but with "limits". To this end, I'll first repeat the definition of the limit of a two-variable function here:
Suppose we have a function
\begin{align}f:\mathbb R\times \mathbb R\supset U&\to \mathbb R\\
(x,y)&\mapsto f(x,y)\end{align}
For a point $(a,b)\in\mathbb R^2$ we say that $\lim\limits_{(x,y)\to (a,b)}f(x,y)=L$, if and only if,
$$\forall \varepsilon>0\,\exists \delta >0: \big(\Vert (x,y)-(a,b)\Vert<\delta \implies \vert f(x,y)-L\vert<\varepsilon\big).\tag{*}$$
In words $(*)$ says that $f(x,y)$ will be close to $L$, whenever the point $(x,y)$ is sufficiently close to
the point $(a,b)$.
Now comes your mistake: We have not defined what $\lim\limits_{x\to a}f(x,y)$ should mean. To say something about the limt of $f(x,y)$ we need to manipulate points in $\mathbb R^2$. But $x\to a$ means we are considering points in $\mathbb R$ which lie close to $a$ (which is definitely not a point in $\mathbb R^2$).
So this is a problem. If we would want to evaluate $\lim\limits_{x\to a}f(x,y)$, we would first have to define what this means. So let's do that:
Define $\ell_a\subset \mathbb R^2$ as the line $x=a$, i.e. $\ell_a =\left\{(x,y)\in\mathbb R^2\mid x=a\right\}$. Also introduce the notation: $d\big(\ell_a,(x,y)\big)=\text{distance between $(x,y)$ and $\ell_a$}$.
Now we say that $\lim\limits_{x\to a}f(x,y)=L(y)$, if and only if,
$$\forall\varepsilon>0\,\exists\delta>0: \Big(d\big(\ell_a,(x,y)\big)\tag{**}<\delta \implies\vert f(x,y)-L(y)\vert<\varepsilon\Big).$$
In words $(**)$ says that $f(x,y)$ will be close to $L$, whenever the point $(x,y)$ is sufficiently close to
the line $x=a$.
Notice the difference between the definitions in $(*)$ and $(**)$. The first tells us what happens if we are close to some point, the second tells us what happens if we are close to some line. Also, $(**)$ only says that we can get close to $L(y)$, which is some function of $y$. In general, proving that $\lim\limits_{x\to a}f(x,y)=L(y)$ is not at all easy and quite often not useful.
To sum up: The problem is that $\lim\limits_{r\to 0}\frac{r\cos^2\theta\sin\theta}{r^2\cos^4\theta+\sin^2\theta}$ is actually a rather strange and not very useful thing. If used at all, it needs to be used with caution. In particular, this means that it cannot be evaluated by simply substituting $0$ for $r$.
As an extra I would like to leave you with a function $f(r,\theta)$ for which $\lim\limits_{r\to 0}f(r,\theta)$ is of more use:$$\lim_{r\to 0}\frac {r\cos^2\theta \sin\theta}{r^2\cos^4\theta +1}=0\text{, because}$$
$$0<r<\delta\implies \left\vert\frac {r\cos^2\theta \sin\theta}{r^2\cos^4\theta +1}\right\vert<\left\vert\frac {r\cos^2\theta \sin\theta}{1}\right\vert=r\left\vert \cos^2\theta\sin\theta\right\vert<\delta\underbrace{\left\vert \cos^2\theta\sin\theta\right\vert}_{\text{bounded}}.\\\text{The singular cases where $\cos^2\theta \sin\theta=0$ are easily seen to be compatible.}$$
This rhymes with its graph: |
And why? Please choose dispassionately.
Note by Patrick Engelmann 5 years ago
Please be more specific. Undergrad? Graduate? What do you mean by best? Accessibility of professors? Course design? Rigor? Research opportunities?
In order to get good responses, I believe it is important to specify what you are looking for in a mathematics department.
In any case, I'd really like to hear responses from @Calvin Lin and @Scott Kominers.
Indeed. the answer would vary for different people. For you to figure out which is the best university for YOU to study mathematics, you need to ask yourself the following questions
0) What level of mathematics are you talking about? Given your age, I will assume that you are looking for undergrad classes.
1) What area of mathematics are you interested in? At the very least, do you like pure or applied math?
2) Do you prefer a large department where there are many professors to talk to, or a small close-knit group? Each has its pros and cons.
3) Can you study overseas? E.g. do you have financial constraints (not all universities are need-blind)? Do you want to study overseas or would you prefer something local?
4) Can you speak English, French or Russian? Each of these will open up different opportunities for you.
5) What kind of university culture are you looking for? (This is independent of the math department, and you might not really care.)
MIT of course
I don't know much about math-focused universities, but Cambridge and Oxford seem prestigious, because of their alumni and achievements related to math.
You may view www.feeltiptop.com insight search engine for comparison of all the universities in USA in every aspect. Open this link, authorize your twitter access to verify you and then click on the icon on university / education.
http://nturanking.lis.ntu.edu.tw/DataPage/TOP300.aspx?query=Mathematics&y=2014
Massachusetts Institute of Technology is world number 1. Harvard is world number 2 or 3. Stanford is world number 2 or 3.
The answer is simple: it depends on your needs and how much you could pay for this. But I would recommend reading about traveling tips https://fupping.com/natty/2019/05/04/traveling-tips-for-university-students/ because the study is an important thing but the rest is important too. At least you need that whatever university you choose.
cool
UCLA is also good..
|
I need some help. This may be a dumb question, as I am neither an electrical engineer nor an electronics engineer, but I need to solve the problem given below. I tried to solve it, but I am not sure my result is correct because SPICE simulations give a different result.
Where is my mistake?
Problem
Choose Emitter Resistance \$R_E\$ for the given problem for the max Emitter Current \$I_E\$ that is possible. Take \$\beta=100\$.
My solution approach
\$I_C=\frac{V_+-V_{out}}{R_c}\Rightarrow V_c=R_cI_c+V_0\\ I_E=I_B+I_C=(\beta+1)I_B\\ \\ \frac{V_{out}-V_{CE}+V_+}{R_E}=I_E\\ \\ \frac{R_CI_C+V_+-V_{CE}+V_+}{R_E}=I_E\\ \\ I_C=\left( \frac{\beta}{1+\beta}\right)I_E\\ \\ \frac{R_C\left( \frac{\beta}{1+\beta}\right)I_E+V_+-V_{CE}+V_+}{R_E}=I_E\\ \$
which gives
\$ I_E=\cfrac{2V_+ -V_{CE}}{R_E-R_C \cfrac{\beta}{\beta+1}} \$
for the maximum current \${R_E-R_C \cfrac{\beta}{\beta+1}}\$ should be zero, thus,
\${R_E=R_C \cfrac{\beta}{\beta+1}}\$ |
Serial of year 18
You can find the serial also in the yearbook.
We are sorry, this serial has not been translated.
Tasks
1. Series 18. Year - S. kinematics of point mass
The position of a point mass in time in Cartesian coordinates is described by the position vector $$\mathbf{r}(t) =\left(R \cos(\omega t),\, R \sin(\omega t),\, d\right).$$ Calculate the time dependence of the vectors $\mathbf{v}(t)$ and $\mathbf{a}(t)$. Calculate the tangential, normal and binormal components of the acceleration.
A wheel of radius $R$ rolls without slipping along a straight track at velocity $v$. A point is attached to the wheel at distance $r$ from the centre of the wheel. Calculate its motion and its velocity as functions of time in a coordinate system fixed to the Earth. Can the speed be zero at some moment?
Authors of the serial.
2. Series 18. Year - S. Newton's kinematics equations
Write down and solve the kinematic equations for a mass point in the gravitational field of the Earth. Orient the coordinate system so that $x$ and $y$ are horizontal and $z$ is vertical, pointing upwards. The starting position is $\textbf{r}_{0} = (0,0,h)$, the starting velocity is $\textbf{v}_{0} =(v_{0}\cos\alpha,0,v_{0}\sin\alpha)$. A man with a gun sits in a chair rotating about a vertical axis at frequency $f=1\;\mathrm{Hz}$. The target rotates together with the chair (it is fixed to the chair). The man then shoots a bullet at a speed of $v=300\;\mathrm{km} \cdot \mathrm{h}^{-1}$ from the rotation axis directly at the middle of the target. At what place will the bullet pass through the target? Solve it in the non-inertial system and in the inertial system. The distance of the middle of the target from the centre of rotation is $l=3\;\mathrm{m}$; air friction is negligible. State the dependence of the speed of the mass point on its position in the gravitational field of the Sun.
Submitted by Honza Prachař.
3. Series 18. Year - S. Langrange's equations first type
Let us have a mass point suspended on a massless string. Introduce Cartesian coordinates and write down the equation of motion for the mass point. Write Lagrange's equations of the first kind for the mass point from part a). Show that they are equivalent to the equation of the mathematical pendulum,
$$\frac{\mathrm{d}^{2}\varphi}{\mathrm{d}t^{2}} + \frac{g}{l} \sin \varphi = 0,$$
where $\varphi$ is the angular displacement from equilibrium.
A small body is at rest at the top of a hemisphere and starts to slide down. Using Lagrange's equations of the first kind, calculate the height at which the body takes off from the hemisphere. (Hint: the body takes off when $\lambda = 0$.)
Authors of the serial.
4. Series 18. Year - S. Lagrange equation of the 2nd type
A small bead of mass $m$ slides without friction on a circular wire loop of radius $R$; the loop rotates with constant angular speed $\Omega$ around the horizontal axis (see image).
Select an appropriate generalised coordinate and construct the Lagrange function of the problem. Construct the Lagrange equation of the second kind which describes the motion of the bead. Decide when the equilibrium position at the lowest point of the loop is stable and when it is unstable, depending on $\Omega$. For $\Omega$ such that this position is stable, calculate the period of the oscillations of the bead around it. For a bonus point, find further equilibrium positions and discuss whether they are stable or unstable. For the stable equilibrium positions, calculate the frequencies of oscillations.
Proposed by the serial authors Jarda Trnka and Honza Prachař.
5. Series 18. Year - S. Mercury, the pit and the pendulum
The following questions will test the knowledge from all presented chapters about mechanics – Newton's formalism, D'Alembert's principle and Lagrange's formalism.
Imagine the planet Mercury orbiting the Sun. It is known that its elliptical trajectory rotates, i.e. the position of the perihelion moves, which cannot be explained by the gravitational force
$$\mathbf{F}=\kappa\,\frac{mM\,\mathbf{r}}{r^{3}}.$$
Prove that by adding an additional central force
$$\mathbf{F}=C\,\frac{\mathbf{r}}{r^{4}},$$
where $C$ is a suitable constant, the whole trajectory (ellipse) will rotate at a constant angular speed. In other words, show that there exists a frame rotating at constant speed in which the trajectory is an ellipse. Knowing this angular speed $\Omega$, calculate the constant $C$. Is such a correction to gravitation enough?
Calculate the equilibrium position of a homogeneous rod of length $l$ supported by the inner walls of a V-shaped pit (see figure 12) as a function of the opening angle $\alpha$ of the V. Using Lagrange's equations, calculate the period of small oscillations of the double-reversed pendulum in image 13. The weights are at the ends of a weightless rod of length $l$ and have masses $m_{1}$ and $m_{2}$; the distance of the joint from the weight $m_{1}$ is $l_{0}$.
a) Matouš came across this problem in a nice Russian book. b), c) Submitted by Honza Prachař and Jarda Trnka.
6. Series 18. Year - S. Hamilton formalism
Lagrangian of a particle in electromagnetic field is
$$L=\frac{1}{2}mv^2-qφ+q\,\textbf{v}\cdot \textbf{A}=\frac{1}{2}m \sum_{i=1}^{3}v_{i}^2-qφ+q \sum_{i=1}^{3}v_{i}A_{i},$$
where $φ$ is electrical potential and $\textbf{A}$ is magnetic vector potential.
Calculate the generalized momentum $p_{i}$ of the particle conjugate to the velocity component $v_{i}$. Write down the Hamiltonian function (in the variables $x_{i}$, $p_{i}$!). Solve Hamilton's equations for the case $\textbf{A}=\mathbf{0}$ and $φ=-Ex_{1}$.
Submitted by Honza Prachař. |
Now let's look at a mathematical approach to resource theories. As I've mentioned, resource theories let us tackle questions like these:
Our first approach will only tackle question 1. Given \(y\), we will only ask
is it possible to get \(x\). This is a yes-or-no question, unlike questions 2-4, which are more complicated. If the answer is yes we will write \(x \le y\).
So, for now our resources will form a "preorder", as defined in Lecture 3.
Definition. A preorder is a set \(X\) equipped with a relation \(\le\) obeying: reflexivity: \(x \le x\) for all \(x \in X\). transitivity \(x \le y\) and \(y \le z\) imply \(x \le z\) for all \(x,y,z \in X\).
All this makes sense. Given \(x\) you can get \(x\). And if you can get \(x\) from \(y\) and get \(y\) from \(z\) then you can get \(x\) from \(z\).
What's new is that we can also
combine resources. In chemistry we denote this with a plus sign: if we have a molecule of \(\text{H}_2\text{O}\) and a molecule of \(\text{CO}_2\) we say we have \(\text{H}_2\text{O} + \text{CO}_2\). We can use almost any symbol we want; Fong and Spivak use \(\otimes\) so I'll often use that. We pronounce this symbol "tensor". Don't worry about why: it's a long story, but you can live a long and happy life without knowing it.
It turns out that when you have a way to combine things, you also want a special thing that acts like "nothing". When you combine \(x\) with nothing, you get \(x\). We'll call this special thing \(I\).
Definition. A monoid is a set \(X\) equipped with an operation \(\otimes : X \times X \to X\) and an element \(I \in X\)
such that these laws hold:
the
associative law: \( (x \otimes y) \otimes z = x \otimes (y \otimes z) \) for all \(x,y,z \in X\)
the
left and right unit laws: \(I \otimes x = x = x \otimes I\) for all \(x \in X\).
You know lots of monoids. In mathematics, monoids rule the world! I could talk about them endlessly, but today we need to combine the monoids and preorders:
Definition. A monoidal preorder is a set \(X\) with a relation \(\le\) making it into a preorder, an operation \(\otimes : X \times X \to X\) and element \(I \in X\) making it into a monoid, and obeying:
$$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg
and a slice of bread into a fried egg and a piece of toast!
You know lots of monoidal preorders, too! Many of your favorite number systems are monoidal preorders:
The set \(\mathbb{R}\) of real numbers with the usual \(\le\), the binary operation \(+: \mathbb{R} \times \mathbb{R} \to \mathbb{R} \) and the element \(0 \in \mathbb{R}\) is a monoidal preorder.
Same for the set \(\mathbb{Q}\) of rational numbers.
Same for the set \(\mathbb{Z}\) of integers.
Same for the set \(\mathbb{N}\) of natural numbers.
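Here is a minimal sketch (Python; the names `leq`, `tensor`, and `unit` are just illustrative) that checks the monoid laws and the compatibility condition for the example \( (\mathbb{N}, \le, +, 0) \) on a small sample of numbers:

```python
from itertools import product

# The monoidal preorder (N, <=, +, 0): usual order, addition, unit 0
leq = lambda x, y: x <= y
tensor = lambda x, y: x + y
unit = 0

sample = range(6)

# Monoid laws: associativity and the left/right unit laws
assert all(tensor(tensor(x, y), z) == tensor(x, tensor(y, z))
           for x, y, z in product(sample, repeat=3))
assert all(tensor(unit, x) == x == tensor(x, unit) for x in sample)

# Compatibility: x <= x' and y <= y' imply x (tensor) y <= x' (tensor) y'
assert all(leq(tensor(x, y), tensor(xp, yp))
           for x, xp, y, yp in product(sample, repeat=4)
           if leq(x, xp) and leq(y, yp))

print("all monoidal preorder laws hold on the sample")
```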
Money is an important resource: outside of mathematics, money rules the world. We combine money by addition, and we often use these different number systems to keep track of money. In fact it was bankers who invented negative numbers, to keep track of debts! The idea of a "negative resource" was very radical: it took mathematicians over a century to get used to it.
But sometimes we combine numbers by multiplication. Can we get monoidal preorders this way?
Puzzle 60. Is the set \(\mathbb{N}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\) and the element \(1 \in \mathbb{N}\) a monoidal preorder? Puzzle 61. Is the set \(\mathbb{R}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}\) and the element \(1 \in \mathbb{R}\) a monoidal preorder? Puzzle 62. One of the questions above has the answer "no". What's the least destructive way to "fix" this example and get a monoidal preorder? Puzzle 63. Find more examples of monoidal preorders. Puzzle 64. Are there monoids that cannot be given a relation \(\le\) making them into monoidal preorders? Puzzle 65. A monoidal poset is a monoidal preorder that is also a poset, meaning
$$ x \le y \textrm{ and } y \le x \textrm{ imply } x = y $$ for all \(x ,y \in X\). Are there monoids that cannot be given any relation \(\le\) making them into monoidal posets?
Puzzle 66. Are there posets that cannot be given any operation \(\otimes\) and element \(I\) making them into monoidal posets? |
For a presentation, I am learning about the Cantor Set and how it is homeomorphic to the p-adic numbers. I was reading section two of this paper. In it it states that the Cantor Set has a vanishing Lebesgue measure.
Wikipedia says: Given a subset ${\displaystyle E\subseteq \mathbb {R} } $, with the length of interval ${\displaystyle I=[a,b]({\text{or }}I=(a,b))} $ given by ${\displaystyle \ell (I)=b-a} $, the Lebesgue outer measure ${\displaystyle \lambda ^{*}(E)} $ is defined as
$${\displaystyle \lambda ^{*}(E)=\operatorname {inf} \left\{\sum _{k=1}^{\infty }\ell (I_{k}):{(I_{k})_{k\in \mathbb {N} }}{\text{ is a sequence of open intervals with }}E\subseteq \bigcup _{k=1}^{\infty }I_{k}\right\}} $$.
Why would this be zero for the Cantor Set? |
Answer
a) $ \omega_f=2.5 \ rads/s$ b) $W=3.4 \times 10^{-3}J$
Work Step by Step
a) We know that angular momentum is conserved. Since the old angular speed was 2.3 radians per second, we use the new moment of inertia to find the new angular speed: $L_0=L_f \\ I\omega_f = I\omega_0 + m\omega_0r^2 \\ .0154\omega_f=.0154(2.3)+(.0195)(2.3)(.25)^2 \\ \omega_f=2.5 \ rads/s$ b) Thus, we can find the work done by finding the change in kinetic energy: $W = \frac{1}{2}I\omega_f^2 -\left(\frac{1}{2}I\omega_0^2+\frac{1}{2}mr^2\omega_0^2\right)$ Plugging in the known values, we find: $W=3.4 \times 10^{-3}J$ |
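A quick numeric check of both steps (plain Python, using the values quoted in the solution):

```python
I, m, r, w0 = 0.0154, 0.0195, 0.25, 2.3      # kg m^2, kg, m, rad/s

# (a) angular momentum conservation: I*w_f = I*w0 + m*r^2*w0
w_f = (I * w0 + m * r**2 * w0) / I
print(round(w_f, 1))                         # 2.5 rad/s

# (b) work = change in kinetic energy
W = 0.5 * I * w_f**2 - (0.5 * I * w0**2 + 0.5 * m * r**2 * w0**2)
print(f"{W:.1e}")                            # about 3.5e-03 J, matching the quoted 3.4e-3 J up to rounding
```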
This question is prompted by (comments at) another one. There, I was surprised to find that despite traditional claims to the contrary, Boltzmann himself
did once write his formula $S=k\log W$:
That’s in his book (1898, §61, p. 172), with a pointer to (1896, §8, p. 60) where he says the same thing in words, and emphasizes that $RM$ is
the same constant for all gases. In fact, if we note that his $R$ is the specific gas constant (equal to $P\ /\ \rho T$ by the gas law, p. 53) and his $M$ the molecular mass $\rho V\ /\ N$, we see on multiplying that his $RM$ is indeed our $PV\ /\ NT=k$. So all seems well.
Although Boltzmann published his famous definition of entropy in 1877, the constant of proportionality in Boltzmann's definition was not identified as Boltzmann's constant until 1900 when Planck published his analysis of blackbody radiation (1900a, 1900b), where he identified the constant as $k$ and
named it after Boltzmann.
And that, again, seems not true. Of course Planck does introduce $k$ — in (1900b, p. 241), after $h$, as “a second constant of nature” such that an entropy is $k\log\mathfrak R_0$. But he does not name it after Boltzmann there — nor, unless I missed it, in any of the obvious or oft-quoted places. Not in the follow-up articles (1901a, 1901b). Not in the Boltzmann
Festschrift (1904: no mention of $k$). Not in his books (1906, 1910, 1913, 1930). Not in his Nobel lecture (1920):
This constant is frequently termed Boltzmann’s constant, although to the best of my knowledge
Boltzmann himself never introduced it
I (...) was not even taken seriously, in some places. But I did not let such doubts deter me from trusting
my constant $k$.
The literature also seems to have settled slower than legend has it — see chronology in the CW answer below. (A name Planck
did propose in (1900b, p. 245) is “Boltzmann-Drude constant” for $\alpha=3k/2$, but few besides Abraham (1905, pp. 284, 362) seem to have adopted it — e.g. Perrin (1909) calls $\alpha$ “la constante d’énergie moleculaire”.)
So: If not in 1900, when did $k$ get its name? Was there ever a debate (e.g. after the unveiling of Boltzmann’s famous tombstone)? Was there a concerted decision? Were the above-quoted pages of Boltzmann’s book ever invoked? Or was it no-one’s doing, just resolution by attrition? Finally, if Planck did not
name the constant after Boltzmann, how did we end up with the tale that he did? |
The states in quantum mechanics belong to some Hilbert space while the states in quantum field theory belong to a Fock space. For simplicity, let me stick to the Fock space emerging after the quantization of a real scalar field.
A Fock space is defined as a direct sum, $$\mathcal{F}=\oplus_n\mathcal{H}_n$$ of Hilbert spaces $\mathcal{H}_n$, of physical $n$-particle states.
For a real scalar field, whose quantization leads to only one type of particle, the states in $\mathcal{H}_n$ are, in general, linear combinations of $n$-particle states $\{|p_1,p_2,...,p_n\rangle\}$ of all possible momenta satisfying $p_{i\,\mu}p_i^{\mu}=m^2$ and $p_i^0>0$.
Questions
What is the physical interpretation of the Fock space being a
direct sum of $\mathcal{H}_n$?
It looks like the Fock space has invariant subspaces labelled by $n$, where $n\in \mathbb{Z}_{\geq 0}$. Does it mean that under a Poincare transformation, the $n$-particle states, for a given $n$, form an irreducible representation of the Poincare group, i.e., under a Poincare transformation the states within $\mathcal{H}_n$, for a given $n$, mix among themselves?
If the above interpretation is correct, is it also true that the states in different irreducible representations, for $n\neq m$, are labelled by different values of masses?
Does it also mean that the superposition of states belonging to two different irreducible representations (for example, superposition of a one-particle state with a two-particle state) is forbidden in nature? |
Canonical correlation analysis (CCA) is a technique related to principal component analysis (PCA). While it is easy to teach PCA or linear regression using a scatter plot (see a few thousand examples on google image search), I have not seen a similar intuitive two-dimensional example for CCA. How to explain visually what linear CCA does?
Well, I think it is really difficult to present a visual explanation of
Canonical correlation analysis (CCA) vis-a-vis Principal components analysis (PCA) or Linear regression. The latter two are often explained and compared by means of a 2D or 3D data scatterplots, but I doubt if that is possible with CCA. Below I've drawn pictures which might explain the essence and the differences in the three procedures, but even with these pictures - which are vector representations in the "subject space" - there are problems with capturing CCA adequately. (For algebra/algorithm of canonical correlation analysis look in here.)
Drawing individuals as points in a space where the axes are variables, a usual scatterplot, is a
variable space. If you draw the opposite way - variables as points and individuals as axes - that will be a subject space. Drawing the many axes is actually needless because the space has the number of non-redundant dimensions equal to the number of non-collinear variables. Variable points are connected with the origin and form vectors, arrows, spanning the subject space; so here we are (see also). In a subject space, if variables have been centered, the cosine of the angle between their vectors is Pearson correlation between them, and the vectors' lengths squared are their variances. On the pictures below the variables displayed are centered (no need for a constant arises). Principal Components
Variables $X_1$ and $X_2$ positively correlate: they have acute angle between them. Principal components $P_1$ and $P_2$ lie in the same space "plane X" spanned by the two variables. The components are variables too, only mutually orthogonal (uncorrelated). The direction of $P_1$ is such as to maximize the sum of the two squared loadings of this component; and $P_2$, the remaining component, goes orthogonally to $P_1$ in plane X. The squared lengths of all the four vectors are their variances (the variance of a component is the aforementioned sum of its squared loadings). Component loadings are the coordinates of variables onto the components - $a$'s shown on the left pic. Each variable is the error-free linear combination of the two components, with the corresponding loadings being the regression coefficients. And
vice versa, each component is the error-free linear combination of the two variables; the regression coefficients in this combination are given by the skew coordinates of the components onto the variables - $b$'s shown on the right pic. The actual regression coefficient magnitude will be $b$ divided by the product of lengths (standard deviations) of the predicted component and the predictor variable, e.g. $b_{12}/(|P_1|*|X_2|)$. [Footnote: The components' values appearing in the mentioned above two linear combinations are standardized values, st. dev. = 1. This because the information about their variances is captured by the loadings. To speak in terms of unstandardized component values, $a$'s on the pic above should be eigenvectors' values, the rest of the reasoning being the same.] Multiple Regression
Whereas in PCA everything lies in plane X, in multiple regression there appears a dependent variable $Y$ which usually doesn't belong to plane X, the space of the predictors $X_1$, $X_2$. But $Y$ is perpendicularly projected onto plane X, and the projection $Y'$, the $Y$'s shade, is the prediction by or linear combination of the two $X$'s. On the picture, the squared length of $e$ is the error variance. The cosine between $Y$ and $Y'$ is the multiple correlation coefficient. Like it was with PCA, the regression coefficients are given by the skew coordinates of the prediction ($Y'$) onto the variables - $b$'s. The actual regression coefficient magnitude will be $b$ divided by the length (standard deviation) of the predictor variable, e.g. $b_{2}/|X_2|$.
Canonical Correlation
In PCA, a set of variables predict themselves: they model principal components which in turn model back the variables, you don't leave the space of the predictors and (if you use all the components) the prediction is error-free. In multiple regression, a set of variables predict one extraneous variable and so there is some prediction error. In CCA, the situation is similar to that in regression, but (1) the extraneous variables are multiple, forming a set of their own; (2) the two sets predict each other simultaneously (hence correlation rather than regression); (3) what they predict in each other is rather an extract, a latent variable, than the observed predictand of a regression (see also).
Let's involve the second set of variables $Y_1$ and $Y_2$ to correlate canonically with our $X$'s set. We have spaces - here, planes - X and Y. Note that for the situation to be nontrivial - as above with regression, where $Y$ stands outside plane X - planes X and Y must intersect in only one point, the origin. Unfortunately it is impossible to draw on paper because a 4D presentation is necessary. Anyway, the grey arrow indicates that the two origins are one point and the only one shared by the two planes. Granting that, the rest of the picture resembles the regression case. $V_x$ and $V_y$ are the pair of canonical variates. Each canonical variate is a linear combination of the respective variables, like $Y'$ was. $Y'$ was the orthogonal projection of $Y$ onto plane X. Here $V_x$ is a projection of $V_y$ on plane X and simultaneously $V_y$ is a projection of $V_x$ on plane Y, but they are
not orthogonal projections. Instead, they are found (extracted) so as to minimize the angle $\phi$ between them. Cosine of that angle is the canonical correlation. Since projections need not be orthogonal, lengths (hence variances) of the canonical variates are not automatically determined by the fitting algorithm and are subject to conventions/constraints which may differ in different implementations. The number of pairs of canonical variates (and hence the number of canonical correlations) is min(number of $X$s, number of $Y$s). And here comes the time when CCA resembles PCA. In PCA, you skim mutually orthogonal principal components (as if) recursively until all the multivariate variability is exhausted. Similarly, in CCA mutually orthogonal pairs of maximally correlated variates are extracted until all the multivariate variability that can be predicted in the lesser space (lesser set) is up. In our example with $X_1$ $X_2$ vs $Y_1$ $Y_2$ there remains the second and weaker correlated canonical pair $V_{x(2)}$ (orthogonal to $V_x$) and $V_{y(2)}$ (orthogonal to $V_y$).
For the difference between CCA and PCA+regression see also Doing CCA vs. building a dependent variable with PCA and then doing regression.
For me it was very helpful to read in S. Mulaik's book "The Foundations of Factor Analysis" (1972) that there is a method purely of rotations of a matrix of factor loadings to arrive at a canonical correlation, so I could locate it in that ensemble of concepts which I had already understood from principal components analysis and factor analysis.
Perhaps you're interested in this example (which I've rebuilt from a first implementation/discussion of about 1998 just a couple of days ago to crosscheck and re-verify the method against the computation by SPSS). See here . I'm using my small matrix/pca-tools
Inside-[R] and
Matmate for this, but I think it can be reconstructed in
R without too much effort.
This answer doesn't provide a visual aid for understanding CCA, however a good geometric interpretation of CCA is presented in
Chapter 12 of Anderson-1958 [1]. The gist of it is as follows:
Consider $N$ data points $x_1, x_2, ..., x_N$, all of dimension $p$. Let $X$ be the $p\times N$ matrix containing $x_i$. One way of looking at the data is to interpret $X$ as a collection of $p$ data points in the $(N-1)$-dimensional subspace$^*$. In that case, if we separate the first $p_1$ data points from the remaining $p_2$ data points, CCA tries to find a linear combination of $x_1,...,x_{p_1}$ vectors that is parallel (as parallel as possible) with the linear combination of the remaining $p_2$ vectors $x_{p_1+1}, ..., x_p$.
I find this perspective interesting for these reasons:
It provides an interesting geometric interpretation of the entries of the CCA canonical variables. The correlation coefficient is linked to the angle between the two CCA projections. The ratios $\frac{p_1}{N}$ and $\frac{p_2}{N}$ can be directly related to the ability of CCA to find maximally correlated data points. Therefore, the relationship between overfitting and CCA solutions is clear. $\rightarrow$ Hint: the data points are able to span the $(N-1)$-dimensional space when $N$ is too small (the sample-poor case).
Here I've added an example with some code where you can change $p_1$ and $p_2$ and see when they are too high, CCA projections fall on top of each other.
* Note that the sub-space is $(N-1)$-dimensional and not $N$-dimensional, because of the centering constraint (i.e., $\text{mean}(x_i) = 0$).
[1] Anderson, T. W. An introduction to multivariate statistical analysis. Vol. 2. New York: Wiley, 1958.
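A minimal version of the kind of demo mentioned above (my own sketch with scikit-learn, not the linked code): even on pure noise, the leading canonical correlation climbs toward 1 as $p_1$, $p_2$ grow toward $N$.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
N = 50            # sample size
p1, p2 = 3, 4     # sizes of the two variable sets; push them toward N to see overfitting

X = rng.standard_normal((N, p1))
Y = rng.standard_normal((N, p2))    # independent of X by construction

Vx, Vy = CCA(n_components=1).fit_transform(X, Y)
print(np.corrcoef(Vx[:, 0], Vy[:, 0])[0, 1])   # first canonical correlation
```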
The best way to teach statistics is with data. Multivariate statistical techniques are often made very complicated with matrices which are not intuitive. I would explain CCA using Excel. Create two samples, add new variates (columns basically) and show the calculation. And as far as the matrix construction of CCA is concerned, best way is to teach with a bivariate case first and then expand it. |
Last time we learned most of the tricks needed to assemble a co-design diagram from the smaller boxes inside:
Remember, we're really building a feasibility relation out of other feasibility relations. Each smaller box is a feasibility relation. If we don't inquire into exactly what feasibility relations these particular boxes stand for, just how they're assembled, there's just one question left to discuss. What happens when a wire
bends back?
Intuitively, this describes 'feedback'. For example, I may take some bread, toast it, sell it, make money, and use that to buy more bread. Here I'm stringing together a number of processes, each one producing the resources needed for the next... until finally the last one produces some resources needed for the first!
It's not obvious how to describe feedback using just
composition, which takes two feasibility relations \(\Phi \colon X \nrightarrow Y\) and \(\Psi \colon Y \nrightarrow Z\) and puts one after the other:
and
tensoring, which takes two feasibility relations \(\Phi \colon X \nrightarrow Y\) and \(\Psi \colon X' \nrightarrow Y'\) and puts them side by side:
Using these we can build co-design diagrams like this:
But we cannot get wires that bend back! For that we need two extra feasibility relations. The first is called the
cup:
It looks like a wire coming in labelled \(X\), bending around and going back out to the left. In co-design diagrams, labels of wires stand for preorders. The wire coming in stands for a preorder \(X\) as usual. But what about the wire going back out? This stands for \(X^{\text{op}}\): the same set with the opposite concept of \(\le\).
This is a rule we haven't discussed yet. A wire going from left to right, labelled by some preorder, stands for that preorder. But a wire going right to left, labelled by some preorder, stands for the
opposite of that preorder.
But how do we tell if a wire is going from left to right or the other way? One way is to draw arrows on them, as I've done just now. Fong and Spivak often use a different notation: a wire going from left to right is labelled \(\le\), while one going the other way is labelled \(\ge\). This will turn out to make a lot of sense.
Anyway, it looks like our cup stands for a feasibility relation from \(X^{\text{op}} \times X \) to...
nothing! But what's 'nothing'?
That's another rule I haven't told you. 'Nothing' - the invisible wire - is our symbol for the preorder \(\textbf{1}\). This is the set \( \{0\} \) made into a preorder in the only way possible, with \(0 \le 0\).
So, our cup stands for some particular feasibility relation
$$ \cup_X \colon X^{\text{op}} \times X \nrightarrow \textbf{1} . $$ Which one? We can guess if we remember that such a feasibility relation is really a monotone function
$$ \cup_X \colon (X^{\text{op}} \times X)^\text{op} \times \textbf{1} \to \mathbf{Bool} .$$ We can simplify this a bit:
Puzzle 213. Show that for any preorder \(A\), the preorder \(A \times \textbf{1}\) is isomorphic to \(A\): in other words, there is a monotone function from \(A \times \textbf{1}\) to \(A\) with a monotone inverse. For short we write \(A \times \textbf{1} \cong A\). (This is one way in which \(\textbf{1}\) acts like 'nothing'.)
Puzzle 214. Show that for any preorders \(A\) and \(B\) we have \((A \times B)^{\text{op}} \cong A^{\text{op}} \times B^{\text{op}}\).
Puzzle 215. Show that for any preorder \(A\) we have \((A^{\text{op}})^{\text{op}} \cong A\).
So, we get
$$ (X^{\text{op}} \times X)^\text{op} \times \textbf{1} \cong (X^{\text{op}} \times X)^\text{op} \cong (X^{\text{op}})^\text{op} \times X^\text{op} \cong X \times X^\text{op} $$ Replacing the fancy expression at left by the simpler but isomorphic expression at right, we can reinterpret the cup more simply as a monotone function
$$ \cup_X \colon X \times X^{\text{op}} \to \textbf{Bool} . $$ What could this be? It should remind you of our old friend the hom-functor
$$ \text{hom} \colon X^{\text{op}} \times X \to \textbf{Bool} .$$ And that's what it is - just twisted around a bit! In other words, we define
$$ \cup_X (x,x') = \text{hom}(x',x) . $$ If you forget your old friend the hom-functor, this just means that
$$ \cup_X (x,x') = \begin{cases} \texttt{true} & \mbox{if } x' \le x \\ \texttt{false} & \mbox{otherwise.} \end{cases} $$ Whew! It's simple in the end: it just says that when we send a resource round a bend, what comes out must be less than or equal to what came in. The little inequality symbols on this picture are designed to make that easier to remember:
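Here is a tiny sketch of the cup in code (my own illustration, taking \(X\) to be the numbers \(0,1,2,3\) with the usual \(\le\)):

```python
def cup(x, x_prime):
    # cup_X(x, x') = true exactly when x' <= x
    return x_prime <= x

# what comes out of the bend must be less than or equal to what went in
assert cup(3, 2)        # feasible: 2 <= 3
assert not cup(1, 2)    # infeasible: 2 <= 1 fails
```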
One more thing: to describe feedback we also need a feasibility relation called the
cap, which looks like this:
This is some feasibility relation from 'nothing' to \( X \times X^{\text{op}}\), or more precisely
$$ \cap_X \colon \textbf{1} \nrightarrow X \times X^{\text{op}} .$$
Puzzle 216. Rewrite this feasibility relation as a monotone function and simplify it just as we did for the cup. Then, guess what it is!
The cap winds up being just as simple as the cup. We can discuss this and work it out... and next time I'll show you how to use the cap and cup to describe feedback.
By the way, this stuff we're doing was first invented in particle physics. Feynman invented diagrams that describe the interaction between particles... and he realized that an antiparticle could be understood as a particle going 'backwards in time', so he drew little arrows to indicate whether the edges in his diagrams were going forwards or backwards in time. The cup describes the annihilation of a particle-antiparticle pair, and the cap describes the creation of such a pair.
Only much later did people realize that the preorder \(X^{\text{op}}\) is mathematically like the 'antiparticle' of \(X\). |
Conic Section Formulas:
We have studied simple geometrical figures in earlier classes, so we already know about the importance of geometry in mathematics. Here we will learn conic section formulas. Circles, ellipses, parabolas and hyperbolas are, in fact, known as conic sections or, more commonly, conics, as they can be obtained as intersections of a plane with a double-napped right circular cone. These curves have a very wide range of applications in fields such as planetary motion, design of telescopes and antennas, reflectors in flashlights and automobile headlights, etc.
Circle:
To know more about circle visit Circle Formula.
Standard Equation of Circle:
$x^2 + y^2 = r^2$, where $r$ is the radius of the circle.
Parabola:
It's interesting to know that 'para' means 'for' and 'bola' means 'throwing', i.e., the shape described when you throw a ball in the air. Conic section formulas for the parabola are listed below.
Equation of Parabola:
1. $y^2 = 4ax$: focus $(a, 0)$, directrix $x = -a$, parametric equations $x = at^2$, $y = 2at$.
2. $y^2 = -4ax$: focus $(-a, 0)$, directrix $x = a$.
3. $x^2 = 4ay$: focus $(0, a)$, directrix $y = -a$.
4. $x^2 = -4ay$: focus $(0, -a)$, directrix $y = a$.
Latus rectum of parabola: $4a$.
Ellipse:
An ellipse is the set of all points in a plane the sum of whose distances from two fixed points in the plane is a constant. Conic section formulas for the ellipse are listed below.
Equation of Ellipse:
1. \(\frac{x^{2}}{a^{2}} + \frac{y^{2}}{b^{2}} = 1\): parametric equations \(x = a\cos t\), \(y = b\sin t\); foci \(F_1\left(-\sqrt{a^2-b^2},0\right)\), \(F_2\left(\sqrt{a^2-b^2},0\right)\) if \(a \geq b\), and \(F_1\left(0, -\sqrt{b^2-a^2}\right)\), \(F_2\left(0, \sqrt{b^2-a^2}\right)\) if \(a < b\).
2. \(\frac{x^{2}}{b^{2}} + \frac{y^{2}}{a^{2}} = 1\).
Eccentricity of ellipse: \(e = \frac{c}{a} = \frac{\sqrt{a^2-b^2}}{a}\). Latus rectum of ellipse: \(l = \frac{b^{2}}{a}\). Area of ellipse: \(\pi \cdot a \cdot b\).
Hyperbola:
A hyperbola is the set of all points in a plane the difference of whose distances from two fixed points in the plane is a constant. Conic section formulas for the hyperbola are listed below.
Equation of Hyperbola:
1. \(\frac{x^{2}}{a^{2}} - \frac{y^{2}}{b^{2}} = 1\): parametric equations \(x = \frac{a}{\cos t}\), \(y = \frac{b\sin t}{\cos t}\) (i.e., \(x = a\sec t\), \(y = b\tan t\)); foci \(F_1\left(-\sqrt{a^2+b^2},0\right)\), \(F_2\left(\sqrt{a^2+b^2},0\right)\).
2. \(\frac{y^{2}}{a^{2}} - \frac{x^{2}}{b^{2}} = 1\): foci \(F_1\left(0, -\sqrt{a^2+b^2}\right)\), \(F_2\left(0, \sqrt{a^2+b^2}\right)\).
Eccentricity of hyperbola: \(e = \frac{c}{a}\); since \(c \geq a\), the eccentricity is never less than one. Distance of focus from centre: \(ae\). Equilateral hyperbola: a hyperbola in which \(a = b\). Latus rectum of hyperbola: \(\frac{2b^{2}}{a}\).
Conic section formula examples:
Example 1. Find an equation of the circle with centre at (0, 0) and radius r. Solution: here \(h = k = 0\), so the equation of the circle is \(x^2 + y^2 = r^2\).
Example 2. Find the coordinates of the focus, the axis, the equation of the directrix and the latus rectum of the parabola \(y^2 = 16x\). Solution: since \(y^2\) appears and the coefficient of \(x\) is positive, the parabola opens to the right. Comparing with \(y^2 = 4ax\), we find that \(a = 4\). Thus the focus of the parabola is (4, 0), the equation of the directrix is \(x = -4\), and the length of the latus rectum is \(4a = 4 \times 4 = 16\).
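A quick check of Example 2 in code (my addition, same numbers as above):

```python
a = 16 / 4            # compare y**2 = 16*x with y**2 = 4*a*x
focus = (a, 0.0)      # (4, 0)
directrix_x = -a      # x = -4
latus_rectum = 4 * a  # 16
print(focus, directrix_x, latus_rectum)
```
|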
I read the following solution for Showing that NP is closed under union, and they used the same $c$ for both verifiers $V_1$ and $V_2$.
Why is it correct?
Let $L_1$ and $L_2$ be languages in $NP$. Also, for $i = 1, 2$ let $V_i(x, c)$ be an algorithm that, for a string $x$ and a possible certificate $c$, verifies whether $c$ is actually a certificate for $x \in L_i$. Thus, $V_i(x,c) = 1$ if certificate c verifies $x \in L_i$, and $V_i(x, c) = 0$ otherwise. Since both $L_1$ and $L_2$ are both in $NP$, we know that $V_i(x, c)$ terminates in polynomial time $O(|x|^d)$ for some constant $d$. To show that $L_3 = L_1 \cup L_2$ is also in $NP$, we will construct a polynomial-time verifier $V3$ for $L_3$. Since a certificate $c$ for $L_3$ will have the property that either $V_1(x, c) = 1$ or $V_2(x, c) = 1$, we can easily construct a verifier $V_3(x, c) = V_1(x, c) \lor V_2(x, c)$. Clearly then $x \in L_3$ if and only if there is a certificate $c$ such that $V_3(x, c) = 1$. Notice also that the new verifier $V_3$ will run in time $O(2(|x|^d))$, which is polynomial. Therefore, the union $L_3$ of two languages in $NP$ is also in $NP$, so $NP$ is closed under union.
taken from here.
Two TMs $M_1, M_2$ which accept $w$ can accept $w$ for different reasons, so we can't claim that $c_1=c_2$.
Questions:
Is it legal to use the same certificate for both $V_1,V_2$ in the answer? Why?
Is a verifier, by definition, a deterministic TM?
In the above answer, does $V_3$ run $x,c$ on both $V_1$ and $V_2$ in the worst case? |
I've tried using KVL and KCL but I always end up with two or more variables, and I've got another question, how can I know if this transistor is in the saturation region?
Call the collector node \$C\$ and call the voltage there \$V_C\$. Call the base node \$B\$ and call the voltage there \$V_B\$. We know that \$V_B=700\:\textrm{mV}\$, by definition. We also know that \$\beta=315\$ and therefore that \$I_C=\beta \:I_B\$, by definition.
Direct Route:
You can immediately compute \$I_8=\frac{V_B}{R_8}\$. That current, plus the base current must flow through \$R_7\$. So \$I_7=I_B+\frac{V_B}{R_8}\$. That current, plus the collector current must flow through \$R_6\$. As \$I_C=\beta \:I_B\$, so \$I_6=\frac{V_B}{R_8}+\left(\beta+1\right)\:I_B\$. The sum of the voltage drops across the three resistors must be your voltage source, \$V_{CC}=9\:\textrm{V}\$. So it must be the case that \$I_6\:R_6+I_7\:R_7+I_8\: R_8=9\:\textrm{V}=V_{CC}\$. From this information we have:
$$\begin{align*} V_{CC}&=\left(\frac{V_B}{R_8}+\left[\beta+1\right]\:I_B\right)\:R_6+\left(I_B+\frac{V_B}{R_8}\right)\:R_7+\frac{V_B}{R_8}\: R_8\\\\ \end{align*}$$
Solving for \$I_B\$ you should get:
$$I_B =\frac{V_{CC}-V_B\left(1+\frac{R_6+R_7}{R_8}\right)}{R_6\left(\beta+1\right)+R_7}$$
That method is pretty straight-forward.
Using KCL:
This is using nodal analysis. It will get you to the same place, but through a slightly more complex route.
Then, by KCL at each node I get:
$$\begin{align*} \frac{V_C}{R_6}+\frac{V_C}{R_7}+I_C&=\frac{V_{CC}}{R_6}+\frac{V_B}{R_7}\\\\\frac{V_B}{R_7}+\frac{V_B}{R_8}+I_B&=\frac{V_C}{R_7} \end{align*}$$
The first equation is just putting all the currents "spilling away" from node \$C\$ on the left and all the currents "spilling into" node \$C\$ on the right. The two must equal each other, of course.
The second equation is just putting all the currents "spilling away" from node \$B\$ on the left and all the currents "spilling into" node \$B\$ on the right. The two must equal each other, of course, again.
The above solves out easily as:
$$\begin{align*} V_C&= \frac{V_{CC}\:R_7+V_B\:R_6\left(1+\beta\left[\frac{R_7}{R_8}+1\right]\right)}{R_6\left(\beta+1\right)+R_7}\\\\ I_B &=\frac{V_{CC}-V_B\left(1+\frac{R_6+R_7}{R_8}\right)}{R_6\left(\beta+1\right)+R_7} \end{align*}$$
As you can see, the equation for \$I_B\$ works out the same way. It's just that this also gets you \$V_C\$ along the way.
Since you already know that \$V_B=700\:\textrm{mV}\$ and all the resistor values are known as is \$V_{CC}=9\:\textrm{V}\$, I think you should be able to make the calculations here. You should also be able to come up with those equations.
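For instance, plugging numbers into those two formulas looks like this (my own sketch; the resistor values below are placeholders, since the actual schematic values aren't reproduced in this excerpt):

```python
Vcc, Vb, beta = 9.0, 0.7, 315.0
R6, R7, R8 = 10e3, 100e3, 10e3   # hypothetical values, in ohms

Ib = (Vcc - Vb * (1 + (R6 + R7) / R8)) / (R6 * (beta + 1) + R7)
Vc = (Vcc * R7 + Vb * R6 * (1 + beta * (R7 / R8 + 1))) / (R6 * (beta + 1) + R7)

print(f"Ib = {Ib * 1e6:.2f} uA, Vc = {Vc:.2f} V")
```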
In the end, I think you will find that \$V_C\$ is large enough that the transistor cannot be saturated. |
I’m trying to derive the infinitesimal volume element in spherical coordinates. Obviously there are several ways to do this. The way I was attempting it was to start with the cartesian volume element, dxdydz, and transform it using
$$dxdydz = \left (\frac{\partial x}{\partial r}dr + \frac{\partial x}{\partial \theta }d\theta + \frac{\partial x}{\partial \phi }d\phi \right )\left ( \frac{\partial y}{\partial r}dr + \frac{\partial y}{\partial \theta }d\theta + \frac{\partial y }{\partial \phi}d\phi \right )\left ( \frac{\partial z}{\partial r}dr + \frac{\partial z}{\partial \theta }d\theta + \frac{\partial z}{\partial \phi}d\phi \right )$$ Unfortunately, I can’t see how I will arrive at the correct expression, ##r^{2}sin\theta drd\theta d\phi ##. For one reason, when completely expanded, I get terms with repeated differentials like ##dr^{3} ## that don’t cancel. Why is my method of derivation invalid?
|
I'm trying to understand the reasons why an electric current should occur in the following circumstances:
Given a magnetic field in space, and a neutral (not charged) body moving at some speed, in a direction so the magnetic force is not zero upon its charged particles:
Will the electric current be caused by:
1.Electromagnetic induction (since the flux of the magnetic field via the body is changing).
2.The magnetic force upon the particles, will cause them to move from each other, thus charging the body which will create an electric field, and a current.
And what about a constant magnetic field in space?
As the body moves along a constant field, the magnetic flux will not change.
But the magnetic field will cause particles from different signs to move away from each other, so an electric field will appear, and a current also.
But this is in contradiction with Faraday's law, since $\epsilon = -\frac{\partial\Phi}{\partial t} = 0$ and therefore no current should be created.
What is my misunderstanding?
Thank you. |
We know that if we take two atoms/molecules, their interaction energy shows a short-range repulsive part and a medium-range attractive part.
One popular way to schematize this interaction is to employ the Lennard-Jones potential:
$$U(r) = 4 \epsilon \left[ \left( \frac{\sigma} r \right)^{12} - \left( \frac{\sigma} r \right)^{6}\right]$$
The exponent $6$ comes from the dipole-dipole van der Waals interaction, while the exponent $12$ has no theoretical justification whatsoever and only represents a very strong repulsion at short distances.
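A quick numerical look at the two terms (my own sketch, in reduced units $\epsilon = \sigma = 1$):

```python
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4 * eps * (sr6 ** 2 - sr6)   # repulsive r^-12 term minus attractive r^-6 term

r = np.array([0.9, 1.0, 2 ** (1 / 6), 1.5, 2.5])
print(lj(r))   # steeply positive below sigma, minimum -eps at r = 2**(1/6) * sigma
```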
It is very common to find the statement that this repulsive term comes from the Pauli exclusion principle. For example, from Wikipedia:
The $r^{-12}$ term, which is the repulsive term, describes
Pauli repulsion at short ranges due to overlapping electron orbitals...
However, in the book
Statistical Mechanics by K. Huang (par. 2.3) we find the following statement about molecular potentials:
The attractive part of the potential energy originates from the mutual electric polarization of the two molecules and the repulsive part from the
Coulomb repulsion of the overlapping electronic clouds of the molecules.
I also found this post on Quora which basically asks the same thing. In the first answer we can find the following statement:
In fact the repulsive part of the "electrostatic force" is really caused by the Pauli exclusion principle: when the atoms are pushed together too much, the distortion of the orbitals caused by the Pauli exclusion principle will result in the electrons being "squished" out of the region between the atoms - this will then allow the strong electrostatic repulsion between the two positively charged nuclei to push the atoms apart.
So the repulsive term would be explained by a combination of Pauli exclusion and Coulombic interaction.
What is the real origin of the repulsive term? Pauli exclusion principle, Coulombic repulsion or a combination of both?
PS: There are many questions on the site similar to this one and many of them are labeled as duplicates of this one. However, the answers to this question are (apart from maybe one) incredibly vague, and tend to address more the general problem of the impenetrability of solids (ex. "you don't fall towards the center of the Earth because the ground is solid and not liquid") than the more specific problem of the origin of the repulsive term in interatomic/intermolecular interaction. This is why I think that this post is not a duplicate of the aforementioned one. |
Abbreviation: CRLSgrp
A commutative residuated lattice-ordered semigroup is a residuated lattice-ordered semigroup $\mathbf{A}=\langle A, \vee, \wedge, \cdot, \to\rangle$ such that
$\cdot$ is commutative: $xy=yx$
Let $\mathbf{A}$ and $\mathbf{B}$ be commutative residuated lattice-ordered semigroups. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x \vee y)=h(x) \vee h(y)$, $h(x \wedge y)=h(x) \wedge h(y)$, $h(x \cdot y)=h(x) \cdot h(y)$, and $h(x \to y)=h(x) \to h(y)$.
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ \end{array}$ $\begin{array}{lr} f(6)= &\\ f(7)= &\\ f(8)= &\\ f(9)= &\\ f(10)= &\\ \end{array}$
Subvariety: [[Commutative distributive residuated lattice-ordered semigroups]]
Expansion: [[Commutative residuated lattices]]
Supervariety: [[Residuated lattice-ordered semigroups]]
Subreduct: [[Commutative lattice-ordered semigroups]] |
The $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z_5}, +, .)$
1. Background
This is a formal introduction to the genetic code $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$ over the field $(\mathbb{Z_5}, +, .)$. This mathematical model is defined based on the physicochemical properties of the DNA bases (see the previous post). This introduction can be complemented with a Wolfram Computable Document Format (CDF) named IntroductionToZ5GeneticCodeVectorSpace.cdf, available on GitHub. This is a graphic user interface with an interactive didactic introduction to the mathematical biology background explained here. To interact with a CDF, users need the Wolfram CDF Player or Mathematica. The Wolfram CDF Player is freely available (easy installation on Windows and on Linux).
2. Biological mathematical model
If the Watson-Crick base pairings are symbolically expressed by means of the sum “+” operation, in such a way that G + C = C + G = D and U + A = A + U = D hold, then this requirement leads us to define an additive group ($\mathfrak{B}$, +) on the set of five DNA bases $\mathfrak{B}$. Explicitly, it was required that the bases with the same number of hydrogen bonds in the DNA molecule and different chemical types were algebraically inverse in the additive group defined on the set of DNA bases $\mathfrak{B}$. In fact eight sum tables (like the one shown below), which satisfy the last constraints, can be defined on eight ordered sets: {D, A, C, G, U}, {D, U, C, G, A}, {D, A, G, C, U}, {D, U, G, C, A}, {G, A, U, C}, {G, U, A, C}, {C, A, U, G} and {C, U, A, G} [1,2]. The sets originated by these base orders are called the strong-weak ordered sets of bases [1,2] since, for each one of them, the algebraic-complementary bases are DNA complementary bases as well, pairing with three hydrogen bonds (strong, G:::C) and two hydrogen bonds (weak, A::U). We shall denote this set
SW.
The set of extended base triplets is defined as $\mathfrak{B}^3$ = {XYZ | X, Y, Z $\in\mathfrak{B}$}, where, to keep the usual biological notation for codons, the triplet of letters $XYZ\in\mathfrak{B}^3$ denotes the vector $(X,Y,Z)\in\mathfrak{B}^3$ and $\mathfrak{B} =$ {D, A, C, G, U}. An Abelian group on the set of extended triplets can be defined as the direct third power of the group:
$(\mathfrak{B}^3,+) = (\mathfrak{B},+)×(\mathfrak{B},+)×(\mathfrak{B},+)$
where
X, Y, Z $\in\mathfrak{B}$, and the operation “+” is as shown in the table [2]. Next, for all elements $\alpha\in\mathbb{Z}_{(+)}$ (the set of positive integers) and for all codons $XYZ\in(\mathfrak{B}^3,+)$, the element $\alpha \bullet XYZ = \overbrace{XYZ+XYZ+…+XYZ}^{\hbox{$\alpha$ times}}\in(\mathfrak{B}^3,+)$ is well defined. In particular, $0 \bullet XYZ =$ DDD for all $XYZ\in(\mathfrak{B}^3,+)$. As a result, $(\mathfrak{B}^3,+)$ is a three-dimensional (3D) $\mathbb{Z_5}$-vector space over the field $(\mathbb{Z_5}, +, .)$ of the integer numbers modulo 5, which is isomorphic to the Galois field GF(5). Notice that the Abelian groups $(\mathbb{Z}_5, +)$ and $(\mathfrak{B},+)$ are isomorphic. For the sake of brevity, the same notation $\mathfrak{B}^3$ will be used to denote the group $(\mathfrak{B}^3,+)$ and the vector space defined on it.
$$\begin{array}{c|ccccc} + & D & A & C & G & U \\ \hline D & D & A & C & G & U \\ A & A & C & G & U & D \\ C & C & G & U & D & A \\ G & G & U & D & A & C \\ U & U & D & A & C & G \end{array}$$
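As an aside (my own sketch, not from the post), this table is just addition modulo 5 under the identification D→0, A→1, C→2, G→3, U→4:

```python
order = "DACGU"
idx = {b: i for i, b in enumerate(order)}

def add(x, y):
    return order[(idx[x] + idx[y]) % 5]

assert add("G", "C") == "D" and add("A", "U") == "D"   # Watson-Crick pairs sum to D
assert add("C", "G") == add("G", "C")                  # the operation is commutative
```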
This operation is only one of the eight sum operations that can be defined on each one of the ordered sets of bases from
SW.
3. The canonical base of the $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$
Next, in the vector space $\mathfrak{B}^3$, the vectors (extended codons) $e_1 =$ ADD, $e_2 =$ DAD and $e_3 =$ DDA are linearly independent, i.e., $\sum\limits_{i=1}^3 c_i e_i =$ DDD implies $c_1=c_2=c_3=0$ for $c_1, c_2, c_3 \in\mathbb{Z_5}$. Moreover, the representation of every extended triplet $XYZ\in\mathfrak{B}^3$ over the field $\mathbb{Z_5}$ as $XYZ=xe_1+ye_2+ze_3$ is unique, and the generating set ($e_1, e_2, e_3$) is a canonical base for the $\mathbb{Z_5}$-vector space $\mathfrak{B}^3$. It is said that the elements $x, y, z \in\mathbb{Z_5}$ are the coordinates of the extended triplet $XYZ\in\mathfrak{B}^3$ in the canonical base ($e_1, e_2, e_3$) [3].
References
1. José M V, Morgado ER, Sánchez R, Govezensky T. The 24 Possible Algebraic Representations of the Standard Genetic Code in Six or in Three Dimensions. Adv Stud Biol, 2012, 4:119–52.
2. Sánchez R. Symmetric Group of the Genetic-Code Cubes. Effect of the Genetic-Code Architecture on the Evolutionary Process. MATCH Commun Math Comput Chem, 2018, 79:527–60.
3. Sánchez R, Grau R. An algebraic hypothesis about the primeval genetic code architecture. Math Biosci, 2009, 221:60–76. |
I am trying to prove this limit to be true: $$\lim_{x\to a}(x^2)=(a^2)$$ using the Epsilon Delta Limit Definition.
So far I can understand how it works but I got stumped on this inequality $$|x+a|<|2a|+1$$ if $|x-a|<1$
I saw this on this following link: https://www.ma.utexas.edu/users/nrauh/teaching/m408d/limits.pdf
I would really appreciate it if someone can explain how is the inequality derived.
Last but not least, a quick question: What does $$\delta=\min\left\{\frac{\epsilon}{|2a|+1},1\right\}$$ mean? Is it the range of values that $\delta$ can accept in an open interval with $\frac{\epsilon}{|2a|+1}$ being the minimum value and 1 being the maximum value?
Thank you, appreciate it lots! |
Your count of the total number of inversions is right. There are $\binom n2$ pairs of elements that can be inverted, each of them is inverted in half of all permutations, and there are $n!$ permutations, for a total of $\frac14n!n(n-1)$.
Given that $p(\sigma)=\frac{\def\inv{\operatorname{inv}}\inv\sigma}{\sum_\sigma\inv\sigma}$, we have
$$E[\inv\sigma]=\frac{\sum_\sigma(\inv\sigma)^2}{\sum_\sigma\inv\sigma}\;,$$
so we need the sum of the squares of the inversion counts, which is the total number of ordered pairs of inversions in all permutations. There are three types of ordered pairs, with $0$, $1$ or $2$ elements coinciding.
For $2$ elements to coincide, the inversions must be the same, so this contribution is simply $\sum_\sigma\inv\sigma$.
If $1$ element coincides, there are three elements, and two pairs must be inverted. If the shared element is in the middle, $1$ of $6$ orders inverts both pairs (the inverted one), whereas if the shared element is the least or the greatest, then $2$ of $6$ orders invert both pairs (the ones where the shared element is the greatest or the least, respectively). There are $\binom n3$ ways to pick three elements, $2$ ordered pairs with the middle element shared and $4$ ordered pairs with one of the other elements shared, for a contribution of
$$n!\binom n3\left(\frac16\cdot2+\frac26\cdot4\right)=\frac5{18}n!n(n-1)(n-2)\;.$$
If no elements coincide, there are $\binom42\binom n4$ ways to pick the elements, and the inversions are independent so both pairs are inverted in $1$ of $4$ orders, for a contribution of
$$\frac{n!}4\binom42\binom n4=\frac1{16}n!n(n-1)(n-2)(n-3)\;.$$
Thus in total we have
\begin{align}\sum_\sigma(\inv\sigma)^2&=n!n(n-1)\left(\frac14+\frac5{18}(n-2)+\frac1{16}(n-2)(n-3)\right)\\&=\frac1{144}n!n(n-1)(9n^2-5n+10)\;,\end{align}
and dividing by $\sum_\sigma\inv\sigma$ yields
$$E[\inv\sigma]=\frac{9n^2-5n+10}{36}\;.$$
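A brute-force check of this formula for small $n$ (my own sketch):

```python
from itertools import combinations, permutations
from fractions import Fraction

def inv(p):
    return sum(p[i] > p[j] for i, j in combinations(range(len(p)), 2))

n = 4
counts = [inv(p) for p in permutations(range(n))]
total = sum(counts)
expected = sum(Fraction(c, total) * c for c in counts)   # E[inv] when p(sigma) is proportional to inv(sigma)
print(expected, Fraction(9 * n * n - 5 * n + 10, 36))    # both 67/18
```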
For $n=4$, this is
$$\frac{9\cdot4^2-5\cdot4+10}{36}=\frac{67}{18}\;.$$ |
I don't completely understand a proof of the dominated convergence theorem stated on page 104 of
Analysis III of Amann and Escher. I will transcribe here the proof and comment about my thoughts.
In what follows: $(X,\mathcal A,\mu)$ is a $\sigma$-finite measure space, $E$ is a Banach space and $\mathcal L_1(X,\mu, E)$ is the space of Bochner $\mu$-integrable functions from $X$ to $E$, which is complete under the seminorm defined by $\|f\|:=\int_X|f|\,d\mu$.
We denote by $|{\cdot}|$ the norm on $E$, and $\mathcal L_0(X,\mu,E)$ is the space of $\mu$-measurable functions in $E^X$.
Let $(f_j)$ a sequence in $\mathcal L_1(X,\mu, E)$ and suppose that there exists $g\in\mathcal L_1(X,\mu,\Bbb R)$ such that
a) $|f_j|\le g$ $\mu$-almost everywhere for all $j\in\Bbb N$
Suppose also that for some $f\in E^X$
b) $f_j\to f$ $\mu$-almost everywhere
Then $f$ is $\mu$-integrable, $f_j\to f$ in $\mathcal L_1(X,\mu,E)$ and $\int_Xf_j\,d\mu\to\int_X f\,d\mu$ in $E$.
Proof:define $g_j:=\sup_{k,\ell\ge j}|f_k-f_\ell|$, then $(g_j)\to 0$ $\mu$-a.e. in $\mathcal L_0(X,\mu,\overline{\Bbb R}^+)$, and $|f_k-f_\ell|\le 2g$ $\mu$-a.e. for all $k,\ell\in\Bbb N$. Hence $|g_j|\le 2g$ $\mu$-a.e.
From a corollary of Fatou's lemma it follow that $$0\le\varlimsup_j\int_X g_j\,d\mu\le\int_X\varlimsup_j g_j\,d\mu=0$$ Therefore $(\int_Xg_j\,d\mu)$ is a decreasing null sequence. Then for every $\epsilon>0$ there is a $N\in\Bbb N$ such that $$\int_X|f_k-f_\ell|\,d\mu\le\int_X g_j\,d\mu<\epsilon$$ for $k,\ell\ge j\ge N$. Hence $(f_j)$ is a Cauchy sequence in $\mathcal L_1(X,\mu,E)$.
Up to here everything is fine to me, but now comes the punch line, which I don't follow clearly:
and the claim follow from the completeness of $\mathcal L_1(X,\mu,E)$ and Theorem 2.18.
Theorem 2.18 what say is that if $(f_j)\to f$ in the seminorm then
a) There is a subsequence $(f_{j_k})\to f$ $\mu$-a.e., and for each $\epsilon>0$ there is some $A\subset X$ with $\mu(A)<\epsilon$ such that $(f_{j_k})\to f$ uniformly in $A^\complement$.
b) The integral $\int_X f_j\,d\mu$ converges to $\int_X f\,d\mu$.
What I don't follow is how it follows from the completeness of the space. I mean: we know that $(f_j)$ is Cauchy in the seminorm, so it converges to something, but how do we know that it converges to $f$ in the seminorm?
I guess that it is because $|f-f_j|\le\sup_{k,\ell\ge j}|f_k-f_\ell|$, but I'm not sure. However, this is not related to the completeness of the space of integrable functions.
Can someone explain the punch line of the proof more clearly? Thank you. |
I don't know how to get the second line from the first line in the following:
In the above case, $Y=(y_1, \dots , y_n)^T$ is a random sample from $N(\mu,\sigma^2)$.
My trouble is in simplifying $ E\left(\left\{\sum\limits_{i=1}^n (Y_i-\mu ) \right\}^2\right)$. What I've tried:
$$ \begin{align} E\left(\left\{\sum\limits_{i=1}^n (Y_i-\mu ) \right\}^2\right) & = E\left(\left\{\sum\limits_{i=1}^n (Y_i-\mu ) \right\}\right) E\left(\left\{\sum\limits_{i=1}^n (Y_i-\mu ) \right\}\right) , \text{ by independence} \\ & = \left(\sum\limits_{i=1}^n E \left( Y_i - \mu \right) \right)^2 \text{ by linearity of expectations} \end{align}$$
I don't see how I can get the variance from here onwards. Did I do something wrong? |
I am trying to simplify Leibniz Rule to the (first) Fundamental Theorem of Calculus (FTC) but believe I am doing so incorrectly. Leibniz rule can be written as:
$$\frac{d}{dt} \int_{f(t)}^{g(t)} A(t,\sigma) d\sigma = A(t,g(t))\dot g(t) - A(t,f(t))\dot f(t) + \int_{f(t)}^{g(t)} \frac{\partial}{\partial t} A(t,\sigma) d\sigma \qquad (1)$$
If I set $f(t)=c=const$ and $g(t)=t$ this simplifies to
$$\frac{d}{dt} \int_{c}^{t} A(t,\sigma) d\sigma = A(t) + \int_{c}^{t} \frac{\partial}{\partial t} A(t,\sigma) d\sigma \qquad (2)$$
Now if I assume $A$ does not depend on $t$ s.t. $A=A(\sigma)$ then
$$\frac{d}{dt} \int_{c}^{t} A(\sigma) d\sigma = A(t) + \int_{c}^{t} \frac{\partial}{\partial t} A(\sigma) d\sigma \qquad (3)$$
which simplifies to
$$\frac{d}{dt} \int_{c}^{t} A(\sigma) d\sigma = A(t) \qquad (4)$$
which is the (first) FTC. But what happens if instead of assuming $A$ does not depend on $t$, we assumed $\sigma=t$? We get
$$\frac{d}{dt} \int_{c}^{t} A(t) dt= A(t) + \int_{c}^{t} \frac{\partial}{\partial t} A(t) dt \qquad (5)$$
which can be proven incorrect by setting $A(t) = t^2$ yielding
$$t^2=t^2+t^2-c^2=2t^2-c^2 \qquad (6)$$
I don't understand where my error in logic is. Can anyone please help? I'm trying to understand how the (first) FTC applies to functions of time like velocity; i.e. the following should be true
$$\frac{d}{dt} \int_{c}^{t} v(t) dt= v(t) \qquad (7)$$
Please let me know if I need to be more specific or clarify anything. Many thanks. |
Let $X_1,\dots,X_n$ be a sample of iid exponential random variables with mean $\beta$, and let $X_{(1)},\dots,X_{(n)}$ be the order statistics from this sample. Let $\bar X = \frac{1}{n}\sum_{i=1}^n X_i$.
Define spacings $$W_i=X_{(i+1)}-X_{(i)}\ \forall\ 1 \leq i \leq n-1\,.$$ It can be shown that each $W_i$ is also exponential, with mean $\beta_i=\frac{\beta}{n-i}$.
Question: How would I go about finding $\mathbb{P}\left( \frac{W_i}{\bar X} > t \right)$, where $t$ is known and non-negative? Attempt: I know that this is equal to $1 - F_{W_i}\left(t \bar X\right)$. So I used the law of total probability like so:$$\mathbb{P}\left( W_i > t \bar X \right) = 1 - F_{W_i}\left( t \bar X \right) = 1 - \int_0^\infty F_{W_i}(ts)f_{\bar X}(s) \mathrm{d}s \,,$$
which turns into a messy but I think tractable integral.
Am I on the right track here? Is this a valid use of the Law of Total Probability?
Another approach might be to look at the difference distribution: $$ \mathbb{P}\left( W_i - t \bar X > 0\right) $$
Or even break apart the sums: $$ \mathbb{P}\left( W_i - t \bar X > 0 \right) = \mathbb{P} \left( \left(X_{(i+1)} - X_{(i)}\right) - \frac{t}{n}\left(X_{(1)} + \dots + X_{(n)} \right) > 0 \right) $$
A solution to the exponential case would be great, but even better would be some kind of general constraints on the distribution. Or at the very least, its moments, which would be enough to give me Chebyshev and Markov inequalities.
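As a cross-check (my addition), a quick Monte Carlo simulation of the exponential case can validate whatever closed form comes out; the parameter values below are arbitrary examples.

```python
import numpy as np

rng = np.random.default_rng(1)
n, i, t, beta = 10, 3, 0.5, 2.0

x = rng.exponential(beta, size=(200_000, n))
xs = np.sort(x, axis=1)
w_i = xs[:, i] - xs[:, i - 1]      # W_i = X_(i+1) - X_(i) with 1-based i
xbar = x.mean(axis=1)
print((w_i / xbar > t).mean())     # estimate of P(W_i / Xbar > t)
```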
Update: here's the integral from the first method:$$\begin{align}1 - \int_0^\infty \left( 1 - \exp \left( -\frac{ts}{\beta_i} \right) \right) \left( \frac{1}{\Gamma(n)\beta^n} s^{n-1} \exp \left( -\beta s \right) \right) \mathrm{d}s \\1 - \int_0^\infty \left( 1 - \exp \left( -\frac{(n-i)ts}{\beta} \right) \right) \left( \frac{1}{\Gamma(n)\beta^n} s^{n-1} \exp \left( -\beta s \right) \right) \mathrm{d}s\end{align}$$
I've been playing around with it for a little while and I'm not sure where to go with it. |
Let $\ell^\infty$ be the Banach space of bounded sequences with the usual norm and let $c,c_0$ be the subspaces of sequences that are convergent, resp. convergent to zero. Show that:
The linear functional $\ell_0\colon c\rightarrow \mathbb{C}$ defined for $x = (x_n) \in c$ by $$ \ell_0(x) = \lim_{n\rightarrow \infty} x_n$$ extends to a continuous functional on $\ell^\infty$ if $L$ denotes the set of all continuous extensions of the functional $\ell_0$ from (1), then a sequence $x = (x_n) \in \ell^\infty$ belongs to $c_0$ iff $$\ell(x) = 0 \;\; \forall \ell \in L$$ Describe $c$ in a similar way My try:
(1): This follows by Banach limits.
(2): $(\Rightarrow)$ follows by extension
$(\Leftarrow)$ Here I'm a bit unsure: assume $x \not \in c_0$; if $x \in c$ we get a contradiction. But if $x\not \in c$, what happens then? Can we use a subsequence, since we have bounded functionals? Can we use $\ell x = \lim_{k \rightarrow \infty} x_{n_k}$ or something like that? Would $\ell \in L$?
(3): same as two I suppose, can we use subseqeunces?
Please correct what I've missed. |
The
dividend discount model (DDM) is a method of valuing a company's stock price based on the theory that its stock is worth the sum of all of its future dividend payments, discounted back to their present value. [1] In other words, it is used to value stocks based on the net present value of the future dividends. The equation most widely used is called the Gordon growth model. It is named after Myron J. Gordon of the University of Toronto, who originally published it along with Eli Shapiro in 1956 and made reference to it in 1959. [2] [3] Their work borrowed heavily from the theoretical and mathematical ideas found in John Burr Williams's 1938 book "The Theory of Investment Value."
The variables are: $P$ is the current stock price; $g$ is the constant growth rate in perpetuity expected for the dividends; $r$ is the constant cost of equity capital for that company; $D_1$ is the value of next year's dividends.
$$P = \frac{D_1}{r-g}$$
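As a quick numerical illustration (the figures here are mine, not from the article):

```python
def gordon_price(d1, r, g):
    # d1: next year's dividend, r: cost of equity, g: perpetual growth rate (r > g)
    return d1 / (r - g)

print(gordon_price(d1=2.00, r=0.08, g=0.03))   # 40.0
```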
Derivation of equation
The model uses the fact that the current value of the dividend payment $D_0 (1+g)^t$ at (discrete) time $t$ is $\frac{D_0 (1+g)^t}{(1+r)^t}$, so the current stock price is
$$P=\sum_{t=1}^{\infty}\frac{D_0 (1+g)^t}{(1+r)^t}.$$
This summation can be rewritten as
$$P={D_0} r' (1+r'+{r'}^2+{r'}^3+\dots)$$
where
$$r'=\frac{1+g}{1+r}.$$
Clearly, the series in parentheses is the geometric series with common ratio $r'$, so it sums to $\frac{1}{1-r'}$ if $r'<1$. Thus,
$$P = \frac{D_0 r'}{1-r'}.$$
Substituting the value for $r'$ leads to
$$P = \frac{D_0\,\frac{1+g}{1+r}}{1-\frac{1+g}{1+r}},$$
which is simplified by multiplying by $\frac{1+r}{1+r}$, so that
$$P = \frac{D_0(1+g)}{r-g}.$$
Income plus capital gains equals total return
The equation can also be understood to compute the value of a stock such that the sum of its dividend yield (income) plus its growth (capital gains) equals the investor's required total return. Consider the dividend growth rate as a proxy for the growth of earnings and by extension the stock price and capital gains. Consider the company's cost of equity capital as a proxy for the investor's required total return.
[4]
$$\text{Income} + \text{Capital Gain} = \text{Total Return}$$
$$\text{Dividend Yield} + \text{Growth} = \text{Cost Of Equity}$$
$$\frac{D}{P} + g = r$$
$$\frac{D}{P} = r - g$$
$$\frac{D}{r -g} = P$$
Growth cannot exceed cost of equity
From the first equation, one might notice that $r-g$ cannot be negative. When growth is expected to exceed the cost of equity in the short run, then usually a two-stage DDM is used:
$$P = \sum_{t=1}^N \frac{D_0 \left( 1+g \right)^t}{\left( 1+r\right)^t} + \frac{P_N}{\left( 1 +r\right)^N}$$
Therefore,
$$P = \frac{D_0 \left( 1 + g \right)}{r-g} \left[ 1- \frac{\left( 1+g \right)^N}{\left( 1 + r \right)^N} \right] + \frac{D_0 \left( 1 + g \right)^N \left( 1 + g_\infty \right)}{\left( 1 + r \right)^N \left( r - g_\infty \right)},$$
where $g$ denotes the short-run expected growth rate, $g_\infty$ denotes the long-run growth rate, and $N$ is the period (number of years) over which the short-run growth rate is applied.
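A sketch of this two-stage calculation (the input numbers are illustrative assumptions, not from the article):

```python
def two_stage_price(d0, r, g, g_inf, n):
    # present value of the first n high-growth dividends
    pv_dividends = sum(d0 * (1 + g) ** t / (1 + r) ** t for t in range(1, n + 1))
    # Gordon terminal value at year n, discounted back to today
    terminal = d0 * (1 + g) ** n * (1 + g_inf) / (r - g_inf) / (1 + r) ** n
    return pv_dividends + terminal

print(round(two_stage_price(d0=2.00, r=0.08, g=0.06, g_inf=0.03, n=5), 2))
```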
Even when $g$ is very close to $r$, $P$ approaches infinity, so the model becomes meaningless.
Some properties of the model
a) When the growth $g$ is zero, the dividend is capitalized: $$P_0 = \frac{D_1}{r}.$$
b) This equation is also used to estimate the cost of capital by solving for $r$: $$r = \frac{D_1}{P_0} + g.$$
Problems with the model
a) The presumption of a steady and perpetual growth rate less than the cost of capital may not be reasonable.
b) If the stock does not currently pay a dividend, like many growth stocks, more general versions of the discounted dividend model must be used to value the stock. One common technique is to assume that the Modigliani-Miller hypothesis of dividend irrelevance is true, and therefore replace the stock's dividend $D$ with earnings per share $E$. However, this requires the use of earnings growth rather than dividend growth, which might be different. This approach is especially useful for computing a residual value of future periods.
c) The stock price resulting from the Gordon model is hyper-sensitive to the growth rate $g$ chosen.
Related methods
The dividend discount model is closely related to both discounted earnings and discounted cashflow models. In either of the latter two, the value of a company is based on how much money is made by the company. For example, if a company consistently paid out 50% of earnings as dividends, then the discounted dividends would be worth 50% of the discounted earnings. Also, in the dividend discount model, a company that does not pay dividends is worth nothing.
References
^ Investopedia – Digging Into The Dividend Discount Model
^ Gordon, M.J. and Eli Shapiro (1956) "Capital Equipment Analysis: The Required Rate of Profit," Management Science, 3,(1) (October 1956) 102-110. Reprinted in Management of Corporate Capital, Glencoe, Ill.: Free Press of, 1959.
^
^ Spreadsheet for variable inputs to Gordon Model
Further reading
External links
Alternative derivations of the Gordon Model and its place in the context of other DCF-based shortcuts
|
A field in the $(A,B)$ representation of the Lorentz Group has a propagator that scales as $|p|^{2(s-1)}$ for $|p|\to\infty$, where $s=A+B$ is the "spin" of the field (Ref.1 §12.1) . Therefore, the propagator is a decaying (or constant) function of $p$ if and only if $s=0,\,1/2,\,1$. Otherwise, the propagator grows in the UV and the theory is non-renormalisable (unless we have SUSY
1, which in principle may allow you to go up to $s=3/2,2$). If fields of that type are actually realised in Nature, their effect is invisible in the IR (essentially, by dimensional analysis; more formally, by the standard classification of irrelevant interactions, cf. Ref.1 §12.3). That is why they have not been detected so far.
This leaves as the only options $(0,0),\,(\frac12,0),\,(0,\frac12),\,(1,0),\,(\frac12,\frac12),\,(0,1)$. All of these are used in the Standard Model but for $(1,0),(0,1)$ (these are the self-dual and anti-self dual anti-symmetric second rank representations, respectively). There is nothing intrinsically wrong about these representations; they just happen to be irrelevant for the Standard Model: no known particle is described by such a field. They are indeed sometimes used in toy models. Let me in fact quote a paragraph from Ref.1 §5.9:
Although there is no ordinary four-vector for massless particles of helicity $h=\pm1$, there is no problem in constructing an antisymmetric tensor $F_{\mu\nu}$ for such particles. [...] Why should we want to use a [gauge dependent vector field $A^\mu$] in constructing theories of massless particles of spin one, rather than being content with fields like $F_{\mu\nu}$ [which is gauge-independent]? The presence of derivatives in eq. 5.9.34 means that an interaction density constructed solely from $F_{\mu\nu}$ and its derivatives will have matrix elements that vanish more rapidly for small massless particle energy and momentum than one that uses the vector field $A^\mu$. Interactions in such a theory will have a correspondingly rapid fall-off at large distances, faster than the usual inverse-square law. This is perfectly possible, but gauge-invariant theories that use vector fields for massless spin one aprticles represent a more general class of theories, including those that are actually realized in nature.
Parallel remarks apply to gravitons, massless particles of helicity $\pm2$. [...] in order to incorporate the usual inverse-square gravitational interactions we need to introduce a field $h_{\mu\nu}$ that transforms as a symmetric tensor, up to gauge transformations of the sort associated in general relativity with general coordinate transformations. Thus, in order to construct a theory of massless particles of helicity $\pm2$ that incorporates long-range interactions, it is necessary for it to have a symmetry something like general covariance. As in the case of electromagnetic gauge invariance, this is achieved by coupling the field to a conserved "current" $\theta^{\mu\nu}$, now with two spacetime indices, satisfying $\partial_\mu\theta^{\mu\nu}=0$. The only such conserved tensor is the energy-momentum tensor, aside from possible total derivative terms that do not affect the long-range behaviour of the force produced. The fields of massless particles of spin $j\ge3$ would have to couple to conserved tensors with three or more spacetime indices, but aside from total derivatives there are none, so high-spin massless particles cannot produce long-range forces.
In short: most "non-standard" representations are fine but phenomenologically useless. The only non-trivial cases are $(1,0),\,(0,1)$, but they don't seem to be realised in Nature. A possible reason is that they mediate short-ranged interactions (but non-confining: it can be proven that confinement arises only if you have non-abelian gauge interactions) and are therefore not seen in actual experiments. If any such particle exists, detecting it would require much larger accelerators.
References. Weinberg's QFT, Vol.1.
1: Fields of higher spin are always of the gauge type, because of the usual mismatch between field components and particle degrees of freedom. If you consider a bosonic field of arbitrary spin, you can always fix the gauge in such a way that its propagator is $\mathcal O(k^{-2})$ in the UV; similarly, a fermionic field can be gauge-fixed so that its propagator scales as $\mathcal O(k^{-1})$. Thus, it appears that any field is power-counting renormalisable. The catch is that the theory is gauge invariant if and only if you have a Ward-Takahashi-Slavnov-Taylor identity to control the unphysical degrees of freedom. You therefore need a conserved current which, as per Coleman-Mandula-Haag–Łopuszański–Sohnius-etc., is at most of the vector type if bosonic, or has spin $3/2$ if fermionic. In other words, you may only introduce $s=1$ fields if the symmetries form a regular algebra, or $s=2$ if you allow superalgebras. You cannot introduce any higher spin, simply because of the lack of a conserved current (cf. Weinberg's quote above).
Alternatively, if you don't want to fix the gauge (say, by working with massive particles and introducing Proca-like auxiliary conditions), the propagators always grow like $|p|^{2(s-1)}$, and the problematic large-$k$ behaviour is only cancelled if you have conserved currents at the vertices (so that the terms proportional to $k^\mu$ vanish, cf. this PSE post). But, as in the previous paragraph, such currents can only be, at most, of the supersymmetric type, so no particle of spin higher than $s=2$ is allowed. See also Weinberg–Witten theorem. |
The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.
Yeah it does seem unreasonable to expect a finite presentation
Let (V, b) be an n-dimensional, non-degenerate symmetric bilinear space over a field with characteristic not equal to 2. Then, every element of the orthogonal group O(V, b) is a composition of at most n reflections.
How does the Evolute of an Involute of a curve $\Gamma$ is $\Gamma$ itself?Definition from wiki:-The evolute of a curve is the locus of all its centres of curvature. That is to say that when the centre of curvature of each point on a curve is drawn, the resultant shape will be the evolute of th...
Player $A$ places $6$ bishops wherever he/she wants on the chessboard with infinite number of rows and columns. Player $B$ places one knight wherever he/she wants.Then $A$ makes a move, then $B$, and so on...The goal of $A$ is to checkmate $B$, that is, to attack knight of $B$ with bishop in ...
Player $A$ chooses two queens and an arbitrary finite number of bishops on $\infty \times \infty$ chessboard and places them wherever he/she wants. Then player $B$ chooses one knight and places him wherever he/she wants (but of course, knight cannot be placed on the fields which are under attack ...
The invariant formula for the exterior product, why would someone come up with something like that. I mean it looks really similar to the formula of the covariant derivative along a vector field for a tensor but otherwise I don't see why would it be something natural to come up with. The only places I have used it is deriving the poisson bracket of two one forms
This means starting at a point $p$, flowing along $X$ for time $\sqrt{t}$, then along $Y$ for time $\sqrt{t}$, then backwards along $X$ for the same time, backwards along $Y$ for the same time, leads you to a place different from $p$. And up to second order, flowing along $[X, Y]$ for time $t$ from $p$ will lead you to that place.
Think of evaluating $\omega$ on the edges of the truncated square and doing a signed sum of the values. You'll get the value of $\omega$ on the two $X$ edges, whose difference (after taking a limit) is $Y\omega(X)$; the value of $\omega$ on the two $Y$ edges, whose difference (again after taking a limit) is $X \omega(Y)$; and on the truncation edge it's $\omega([X, Y])$
Carefully taking care of the signs, the total value is $X\omega(Y) - Y\omega(X) - \omega([X, Y])$
So the value of $d\omega$ on the Lie square spanned by $X$ and $Y$ = signed sum of values of $\omega$ on the boundary of the Lie square spanned by $X$ and $Y$
Infinitesimal version of $\int_M d\omega = \int_{\partial M} \omega$
But I believe you can actually write down a proof like this, by doing $\int_{I^2} d\omega = \int_{\partial I^2} \omega$ where $I$ is the little truncated square I described and taking $\text{vol}(I) \to 0$
For the general case $d\omega(X_1, \cdots, X_{n+1}) = \sum_i (-1)^{i+1} X_i \omega(X_1, \cdots, \hat{X_i}, \cdots, X_{n+1}) + \sum_{i < j} (-1)^{i+j} \omega([X_i, X_j], X_1, \cdots, \hat{X_i}, \cdots, \hat{X_j}, \cdots, X_{n+1})$ says the same thing, but on a big truncated Lie cube
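(Not part of the chat above: for the $k=1$ case this formula can be checked symbolically. The sketch below, with coordinate and function names of my own choosing, verifies on $\Bbb R^2$ that $d\omega(X,Y) = X\omega(Y) - Y\omega(X) - \omega([X,Y])$ matches the coordinate formula for $d\omega$, using SymPy.)

```python
import sympy as sp

x, y = sp.symbols('x y')

# A 1-form omega = P dx + Q dy and two vector fields X, Y on R^2,
# all with arbitrary (undetermined) smooth component functions.
P, Q = sp.Function('P')(x, y), sp.Function('Q')(x, y)
X1, X2 = sp.Function('X1')(x, y), sp.Function('X2')(x, y)
Y1, Y2 = sp.Function('Y1')(x, y), sp.Function('Y2')(x, y)

def apply_vf(V1, V2, f):
    # the vector field V = V1 d/dx + V2 d/dy acting on a function f
    return V1 * sp.diff(f, x) + V2 * sp.diff(f, y)

def omega(V1, V2):
    # omega evaluated on the vector field with components (V1, V2)
    return P * V1 + Q * V2

# Lie bracket [X, Y], component by component: [X, Y]^i = X(Y^i) - Y(X^i)
B1 = apply_vf(X1, X2, Y1) - apply_vf(Y1, Y2, X1)
B2 = apply_vf(X1, X2, Y2) - apply_vf(Y1, Y2, X2)

# Invariant formula (k = 1): d(omega)(X, Y) = X(omega(Y)) - Y(omega(X)) - omega([X, Y])
rhs = apply_vf(X1, X2, omega(Y1, Y2)) - apply_vf(Y1, Y2, omega(X1, X2)) - omega(B1, B2)

# Coordinate formula: d(omega) = (dQ/dx - dP/dy) dx ^ dy, evaluated on (X, Y)
lhs = (sp.diff(Q, x) - sp.diff(P, y)) * (X1 * Y2 - X2 * Y1)

print(sp.simplify(sp.expand(lhs - rhs)))   # 0
```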
Let's do bullshit generality. Let $E$ be a vector bundle on $M$ and $\nabla$ be a connection on $E$. Remember that this means it's an $\Bbb R$-bilinear operator $\nabla : \Gamma(TM) \times \Gamma(E) \to \Gamma(E)$ denoted as $(X, s) \mapsto \nabla_X s$ which is (a) $C^\infty(M)$-linear in the first factor (b) $C^\infty(M)$-Leibniz in the second factor.
Explicitly, (b) is $\nabla_X (fs) = X(f)s + f\nabla_X s$
You can verify that this in particular means it's pointwise defined in the first factor. This means to evaluate $\nabla_X s(p)$ you only need $X(p) \in T_p M$, not the full vector field. That makes sense, right? You can take the directional derivative of a function at a point in the direction of a single vector at that point
Suppose that $G$ is a group acting freely on a tree $T$ via graph automorphisms; let $T'$ be the associated spanning tree. Call an edge $e = \{u,v\}$ in $T$ essential if $e$ doesn't belong to $T'$. Note: it is easy to prove that if $u \in T'$, then $v \notin T'$ (this follows from uniqueness of paths between vertices).
Now, let $e = \{u,v\}$ be an essential edge with $u \in T'$. I am reading through a proof and the author claims that there is a $g \in G$ such that $g \cdot v \in T'$. My thought was to try to show that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$ and then use the fact that the spanning tree contains exactly one vertex from each orbit. But I can't seem to prove that $\mathrm{orb}(u) \neq \mathrm{orb}(v)$...
@Albas Right, more or less. So it defines an operator $d^\nabla : \Gamma(E) \to \Gamma(E \otimes T^*M)$, which takes a section $s$ of $E$ and spits out $d^\nabla(s)$ which is a section of $E \otimes T^*M$, which is the same as a bundle homomorphism $TM \to E$ ($V \otimes W^* \cong \text{Hom}(W, V)$ for vector spaces). So what is this homomorphism $d^\nabla(s) : TM \to E$? Just $d^\nabla(s)(X) = \nabla_X s$.
This might be complicated to grok at first, but basically think of it as currying. Making a bilinear map a linear one, like in linear algebra.
You can replace $E \otimes T^*M$ just by the Hom-bundle $\text{Hom}(TM, E)$ in your head if you want. Nothing is lost.
I'll use the latter notation consistently if that's what you're comfortable with
(Technical point: Note how contracting $X$ in $\nabla_X s$ made a bundle homomorphism $TM \to E$ but contracting $s$ in $\nabla s$ only gave us a map $\Gamma(E) \to \Gamma(\text{Hom}(TM, E))$ at the level of spaces of sections, not a bundle homomorphism $E \to \text{Hom}(TM, E)$. This is because $\nabla_X s$ is pointwise defined in $X$ and not in $s$)
@Albas So this fella is called the exterior covariant derivative. Denote $\Omega^0(M; E) = \Gamma(E)$ ($0$-forms with values on $E$ aka functions on $M$ with values on $E$ aka sections of $E \to M$), denote $\Omega^1(M; E) = \Gamma(\text{Hom}(TM, E))$ ($1$-forms with values on $E$ aka bundle-homs $TM \to E$)
Then this is the level-0 exterior derivative $d : \Omega^0(M; E) \to \Omega^1(M; E)$. That's what a connection is: a level-0 exterior derivative in a bundle-valued theory of differential forms
So what's $\Omega^k(M; E)$ for higher $k$? Define it as $\Omega^k(M; E) = \Gamma(\text{Hom}(TM^{\wedge k}, E))$. Just space of alternating multilinear bundle homomorphisms $TM \times \cdots \times TM \to E$.
Note how if $E$ is the trivial bundle $M \times \Bbb R$ of rank 1, then $\Omega^k(M; E) = \Omega^k(M)$, the usual space of differential forms.
That's what taking the derivative of a section of $E$ with respect to a vector field on $M$ means: applying the connection
Alright so to verify that $d^2 \neq 0$ indeed, let's just do the computation: $(d^2s)(X, Y) = d(ds)(X, Y) = \nabla_X ds(Y) - \nabla_Y ds(X) - ds([X, Y]) = \nabla_X \nabla_Y s - \nabla_Y \nabla_X s - \nabla_{[X, Y]} s$
Voila, Riemann curvature tensor
Well, that's what it is called when $E = TM$ so that $s = Z$ is some vector field on $M$. This is the bundle curvature
Here's a point. What is $d\omega$ for $\omega \in \Omega^k(M; E)$ "really"? What would, for example, having $d\omega = 0$ mean?
Well, the point is, $d : \Omega^k(M; E) \to \Omega^{k+1}(M; E)$ is a connection operator on $E$-valued $k$-forms on $M$. So $d\omega = 0$ would mean that the form $\omega$ is parallel with respect to the connection $\nabla$.
Let $V$ be a finite dimensional real vector space, $q$ a quadratic form on $V$ and $Cl(V,q)$ the associated Clifford algebra, with the $\Bbb Z/2\Bbb Z$-grading $Cl(V,q)=Cl(V,q)^0\oplus Cl(V,q)^1$. We define $P(V,q)$ as the group of elements of $Cl(V,q)$ with $q(v)\neq 0$ (under the identification $V\hookrightarrow Cl(V,q)$) and $\mathrm{Pin}(V)$ as the subgroup of $P(V,q)$ with $q(v)=\pm 1$. We define $\mathrm{Spin}(V)$ as $\mathrm{Pin}(V)\cap Cl(V,q)^0$.
Is $\mathrm{Spin}(V)$ the set of elements with $q(v)=1$?
Torsion only makes sense on the tangent bundle, so take $E = TM$ from the start. Consider the identity bundle homomorphism $TM \to TM$... you can think of this as an element of $\Omega^1(M; TM)$. This is called the "soldering form", comes tautologically when you work with the tangent bundle.
You'll also see this thing appearing in symplectic geometry. I think they call it the tautological 1-form
(The cotangent bundle is naturally a symplectic manifold)
Yeah
So let's give this guy a name, $\theta \in \Omega^1(M; TM)$. Its exterior covariant derivative $d\theta$ is a $TM$-valued $2$-form on $M$, explicitly $d\theta(X, Y) = \nabla_X \theta(Y) - \nabla_Y \theta(X) - \theta([X, Y])$.
But $\theta$ is the identity operator, so this is $\nabla_X Y - \nabla_Y X - [X, Y]$. Torsion tensor!!
So I was reading this thing called the Poisson bracket. With the poisson bracket you can give the space of all smooth functions on a symplectic manifold a Lie algebra structure. And then you can show that a symplectomorphism must also preserve the Poisson structure. I would like to calculate the Poisson Lie algebra for something like $S^2$. Something cool might pop up
If someone has the time to quickly check my result, I would appreciate it. Let $X_{1},\ldots,X_{n} \sim \Gamma(2,\,\frac{2}{\lambda})$. Is $\mathbb{E}\left[\frac{1}{2}\left(\frac{X_{1}+\cdots+X_{n}}{n}\right)^2\right] = \frac{1}{n^2\lambda^2}+\frac{2}{\lambda^2}$ ?
Uh, apparently there are metrizable Baire spaces $X$ such that $X^2$ not only is not Baire, but has a countable family $D_\alpha$ of dense open sets such that $\bigcap_{\alpha<\omega}D_\alpha$ is empty
@Ultradark I don't know what you mean, but you seem down in the dumps champ. Remember, girls are not as significant as you might think, design an attachment for a cordless drill and a flesh light that oscillates perpendicular to the drill's rotation and your done. Even better than the natural method
I am trying to show that if $d$ divides $24$, then $S_4$ has a subgroup of order $d$. The only proof I could come up with is a brute force proof. It actually wasn't too bad. E.g., orders $2$, $3$, and $4$ are easy (just take the subgroup generated by a 2-cycle, 3-cycle, and 4-cycle, respectively); $d=8$ is Sylow's theorem; for $d=12$, take $A_4$; for $d=24$, take $S_4$. The only case that presented a semblance of trouble was $d=6$. But the group generated by $(1,2)$ and $(1,2,3)$ does the job.
My only quibble with this solution is that it doesn't seem very elegant. Is there a better way?
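(Not part of the original question: the claim is also easy to script. The Python sketch below, with names of my own choosing and permutations in 0-indexed one-line notation, confirms that $\langle(1\,2),(1\,2\,3)\rangle$ has order $6$ and that every divisor of $24$ occurs as a subgroup order.)

```python
from itertools import permutations

N = 4
IDENTITY = tuple(range(N))

def compose(p, q):
    """(p . q)(i) = p(q(i)) for permutations in one-line notation."""
    return tuple(p[q[i]] for i in range(N))

def generated(gens):
    """Closure of gens under composition; in a finite group this is the subgroup <gens>."""
    group = {IDENTITY} | set(gens)
    while True:
        new = {compose(g, h) for g in group for h in group} - group
        if not new:
            return group
        group |= new

# (1 2) and (1 2 3), written 0-indexed: they act on positions 0, 1, 2 and fix 3.
transposition = (1, 0, 2, 3)
three_cycle = (1, 2, 0, 3)
print(len(generated([transposition, three_cycle])))   # 6

# Every subgroup of S_4 is generated by at most two elements, so two-element
# generating sets already exhibit every possible subgroup order.
s4 = list(permutations(range(N)))
orders = {len(generated([g, h])) for g in s4 for h in s4}
print(sorted(orders))   # [1, 2, 3, 4, 6, 8, 12, 24] -- all divisors of 24
```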
In fact, the action of $S_4$ on these three 2-Sylows by conjugation gives a surjective homomorphism $S_4 \to S_3$ whose kernel is a $V_4$. This $V_4$ can be thought as the sub-symmetries of the cube which act on the three pairs of faces {{top, bottom}, {right, left}, {front, back}}.
Clearly these are 180 degree rotations along the $x$, $y$ and the $z$-axis. But composing the 180 rotation along the $x$ with a 180 rotation along the $y$ gives you a 180 rotation along the $z$, indicative of the $ab = c$ relation in Klein's 4-group
Everything about $S_4$ is encoded in the cube, in a way
The same can be said of $A_5$ and the dodecahedron, say |
The first question we have to ask is: what is a one particle state in an interacting theory? It is reasonable to require that they are states that are both momentum eigenstates and energy eigenstates. (In fact, as the Hamiltonian and the momentum operator commute, these are not two different conditions.) Weinberg, in his famous textbook, says that particle states are those which transform under an irreducible representation of the Poincare group, but we need not fuss around with the Poincare group here.
All we will say is that, in the interacting theory, there are
some single particle states, labelled by
$$|\lambda k \rangle$$
where $k$ is the four-momentum, and $\lambda$ is whatever other labels we need for our particles. (In this answer I will be working with just a real scalar field, but even in the spin-0 case there can still be extra data that distinguishes our particles in an interacting theory.)
We now know we have a set of momentum and energy eigenstates $|\lambda k\rangle$ that represent the stable particles of our theory. We can now "smudge out" these definite momentum states into wave packets, using a Gaussian window function $f_W$ that has some momentum uncertainty $\kappa$. We will denote these smudged out, approximate energy and momentum eigenstates with a subscript $W$ for "window."
$$|\lambda k\rangle_W \equiv \int d^3{\mathbf{k}'} f_W(\mathbf{k} - \mathbf{k}') |\lambda k\rangle$$
We will come back to these.
Now, the free vacuum $|0\rangle$ of $\hat H_0$ and the true vacuum $|\Omega\rangle$ of $\hat H = \hat H_0 + \hat H_{\rm int}$ are very different states. Particles in the interacting theory must indeed be defined to be formed from the action of the "creation operator" on the true vacuum, as long as we properly define what we mean by the "creation operator" in the interacting theory.
To create and annihilate particles, we will use the Klein Gordon inner product. (We suppress $\hbar$ and $c$.)
$$(\psi_1, \psi_2)_{KG} \equiv i \int d^3 x (\psi^*_1 \partial_t \psi_2 - \partial_t \psi^*_1 \psi_2)$$
The motivation for defining this is that in the FREE theory, the Klein Gordon inner product gives us an inner product between single particle states. If we have two single particle states (in the free theory) $|\Psi_1\rangle$ and $|\Psi_2 \rangle$, we have
$$\langle \Psi_1 | \Psi_2 \rangle = ( \psi_1, \psi_2 )_{KG}$$
where we used the "single particle wave functions" of the states defined by
$$\psi_i(x) \equiv \langle 0| \hat \phi(x) |\Psi_i\rangle$$
The niceties of free field theory come from the simple algebra of the creation and annihilation operators, combined with the fact that the annihilation operator annihilates the vacuum. We will try to recreate those relationships using the Klein Gordon inner product. However, to do this, we will need to use widely separated wave packets.
From here on out, everything will be in the interacting theory.
For a given function $\psi$, we define the creation and annihilation operators that "create" the state corresponding to that wave function as follows.$$ \hat a^\dagger_i (t) \equiv -\big( \psi^*_i(t, \cdot), \hat \phi(t, \cdot) \big)_{KG} $$$$\hat a_i(t) = \big( \psi_i(t, \cdot), \hat \phi(t, \cdot) \big)_{KG}$$
(In the free theory, this creation operator literally would create the single particle state with the single particle wave function $\psi_1$.)
(Something I must mention about these operators is their time evolution. It is a point of notational confusion that $\hat a^\dagger_{1}(t)$ depends explicitly on a time $t$, given that we usually have defined time dependence such that $e^{i \hat H t} \hat{O}(t') e^{-i \hat H t} = \hat {\mathcal{O}}(t'+t)$. This is not the case here.)
Now, sadly, in the interacting theory, the annihilation operator defined above will not annihilate the vacuum. However, we can recover something close:
$$\langle \Omega| \hat a_1(t) |\Omega\rangle = i \int d^3{x}\langle \Omega| \big( \psi_1^*(t, \vec x) \partial_t \hat \phi (t, \vec x) - \partial_t \psi_1^*(t, \vec x) \hat \phi(t, \vec x) \big) |\Omega\rangle$$$$= i \int d^3{x} \big( \psi_1^*(t, \vec x) \partial_t \langle \Omega| \hat \phi(t, \vec x) |\Omega\rangle - \partial_t \psi^*_1(t, \vec x) \langle \Omega| \hat \phi(t, \vec x) |\Omega\rangle \big)$$$$= i \langle \Omega| \hat \phi(t, \vec x) |\Omega\rangle \int d^3{x} (- \partial_t \psi_1^*(t, \vec x))$$
The fact that $\partial_t \langle \Omega| \hat \phi(t, \vec x) |\Omega\rangle = 0$ follows directly from the fact that the vacuum state has zero energy, so $e^{- i \hat H t} |\Omega\rangle = |\Omega\rangle$. Now as we want $\langle \Omega| \hat a_1(t) |\Omega\rangle = 0$ for any $\psi_1$, we can see that this is achieved if and only if $\langle \Omega| \hat \phi(x) |\Omega\rangle = \langle \Omega| \hat \phi(0) |\Omega\rangle = 0$. We will assume this is the case.
In the free theory, $\langle 0| \hat a_1(t) \hat a_2^\dagger(t) |0\rangle = \langle \Psi_1 | \Psi_2\rangle = (\psi_1, \psi_2)_{KG}$. In an interacting theory, for any $\hat a_1$ and state $|\Psi_2\rangle$ (not just a single particle state) we have
$$\langle \Omega| \hat a_1(t) |\Psi_2\rangle = \langle \Omega| \big( \psi_1(t, \cdot) , \hat \phi(t, \cdot) \big)_{KG}|\Psi_2\rangle$$$$= \big( \psi_1(t, \cdot), \langle \Omega| \hat \phi(t, \cdot) |\Psi_2\rangle\big)_{KG} $$$$= \big( \psi_1(t, \cdot), \psi_2(t, \cdot) \big)_{KG}$$
$$\langle\Psi_2| \hat a_1(t) |\Omega\rangle = \big( \psi_1(t, \cdot) , \psi_2^*(t, \cdot) \big)_{KG}$$
Remember our single particle states? We're now going to consider the "single particle wave function" of those states. Namely, they have to be plane waves.
$$\langle \Omega| \hat \phi(x) |\lambda k\rangle = C_\lambda e^{-ikx}$$
where $C_\lambda$ is a constant that depends on $\lambda$.
We now want to see what our states $\hat a^\dagger_1|\Omega\rangle$ have to do with these true particle wave packets $| \lambda k \rangle_W$. To do this, we will see what the inner product of these two states is. Just from our simple algebra above, for an annihilation operator $\hat a_{\lambda_1 k_1} = (\psi_{k_1}, \hat \phi)_{KG}$ where $k_1^2 = m_{\lambda_1}^2$, we have
\begin{equation*}\begin{split}\langle \Omega | \hat a_{\lambda_1 k_1} (t) |\lambda_2 k_2\rangle_W = \big( \psi_{ k_1}(t, \cdot), \langle \Omega |\hat \phi(t, \cdot)|\lambda_2 k_2\rangle_W \big)_{KG} = C_{\lambda_2}\big(\psi_{ k_1}(t, \cdot), \psi_{ k_2 }(t, \cdot) \big)_{KG}\\{}_W \langle \lambda_2 k_2 | \hat a_{\lambda_1 k_1}(t) |\Omega\rangle = \big( \psi_{ k_1}(t, \cdot), {}_W \langle \lambda_2 k_2 |\hat \phi(t, \cdot) |\Omega\rangle \big)_{KG} = C_{\lambda_2}\big(\psi_{ k_1}(t, \cdot), \psi^*_{ k_2 }(t, \cdot) \big)_{KG}.\end{split}\end{equation*}We desire for the top expression to be $\propto \delta_{\lambda_1 \lambda_2} \delta^3(\mathbf{k}_1 - \mathbf{k}_2)$ and for the bottom expression to be 0. If this were the case, then the only single particle state $\hat a^\dagger_{ k_1}(t) |\Omega\rangle$ would overlap with would be $|\lambda_1 k_1\rangle$, and $\hat a_{k_1}(t)$ could still functionally "annihilate" the vacuum, even though we need to keep ${}_W \langle \lambda k |$ on the left. Defining $\omega_{\lambda k} \equiv (m^2_{\lambda} + \mathbf{k}^2)^\frac{1}{2}$, we have
\begin{equation*}\begin{split}\big(\psi_{ k_1}(t, \cdot), \psi_{ k_2 }(t, \cdot) \big)_{KG} = (2 \pi)^3 \int d^3{\mathbf{k}} f_W(\mathbf{k}_1 - \mathbf{k}) f_W(\mathbf{k}_2 - \mathbf{k}) (\omega_{\lambda_1 k} + \omega_{\lambda_2 k})e^{it(\omega_{\lambda_1 k} - \omega_{\lambda_2 k})} \\\big(\psi_{ k_1}(t, \cdot), \psi_{ k_2 }^*(t, \cdot) \big)_{KG} = (2 \pi)^3 \int d^3{\mathbf{k}} f_W(\mathbf{k}_1 - \mathbf{k}) f_W(\mathbf{k}_2 + \mathbf{k}) (\omega_{\lambda_1 k} - \omega_{\lambda_2 k})e^{it(\omega_{\lambda_1 k} + \omega_{\lambda_2 k})}. \\\end{split}\end{equation*}
The top expression is not $\propto \delta_{\lambda_1 \lambda_2} \delta^3_W(\mathbf{k}_1 - \mathbf{k}_2)$ and the bottom expression is not $0$. However, if we take $\kappa \ll |\mathbf{k}_1 - \mathbf{k}_2|$ and also take $t \to \pm \infty$, they are! This hinges on our assumption that $m_{\lambda_1} \neq m_{\lambda_2}$ if $\lambda_1 \neq \lambda_2$. The $e^{it (\ldots)}$ term will oscillate wildly in both integrals if $\lambda_1 \neq \lambda_2$, causing them to be 0. In the top integral, this oscillation does not occur when $\lambda_1 = \lambda_2$. Furthermore, the top integral will be negligible unless $\mathbf{k}_1 = \mathbf{k}_2$. Taking the $f_W(\mathbf{k}) \to \delta^3(\mathbf{k})$ and $t \to \pm \infty$ limit, we can now write
\begin{equation*}\begin{split}\langle \lambda_2 k_2 | \hat a_{\lambda_1 k_1}^\dagger (\pm \infty) |\Omega\rangle = C_{\lambda_2} (2 \pi)^3 2 \omega_{\lambda_2 k_2} \delta_{\lambda_1 \lambda_2} \delta^3(\mathbf{k}_1 - \mathbf{k}_2) \\\langle \lambda_2 k_2 | \hat a_{\lambda_1 k_1} (\pm \infty) |\Omega\rangle = 0. \end{split}\end{equation*}These properties are even more important than I let on. This is because the states $| \lambda k \rangle$ are so generally defined: they are just momentum eigenstates with all the extra necessary data stuffed into $\lambda$. As they diagonalize the momentum operator, they form a basis of our entire state space! Therefore, we can immediately see from the first equation that
$$\hat a^\dagger_{\lambda k}(\pm \infty) |\Omega\rangle = -C_\lambda \big( e^{ikx}, \hat \phi( x)\big)_{KG} |\Omega\rangle \vert_{t = \pm \infty} = | \lambda k \rangle$$where we have chosen the normalization $\langle\lambda k | \lambda' k' \rangle = C_{\lambda}^* C_{\lambda'} (2 \pi)^3 (2 \omega_{\lambda k}) \delta_{\lambda \lambda'} \delta^3(\mathbf{k} - \mathbf{k}')$. From the second equation, we can immediately see that
$$\langle\Psi | \hat a_{\lambda k}(\pm \infty) |\Omega\rangle = 0 \hspace{0.15 cm} \text{ for all } \langle\Psi | \hspace{0.5 cm} \Longrightarrow \hspace{0.5 cm} \hat a_{\lambda k}(\pm \infty) |\Omega\rangle = 0.$$Apparently our asymptotic creation and annihilation operators behave almost exactly like our good old creation and annihilation operators from the free theory!
There's another important property I must mention, which is that two creation/annihilation operators that have different $\lambda k$ data will commute. This is a direct consequence of the fact that our creation/annihilation operators are spatial integrals weighted by wave packets that are spatially separated at large times. (For operators with the same $k$ but different $\lambda$, the wave packets will propagate at different speeds, because $m_\lambda$ differs, and will still separate.) Note that spatial separation is a property of wave packets but not of plane waves. This is another place where it is necessary to view plane waves as a limit of wave packets in order to properly understand your theory. In fact, the operators will not commute unless they are defined with this limiting procedure.
We are finally ready to define our incoming and outgoing multi-particle states. As our asymptotic creation operators only change the ground state in localized spatial regions and each spatial excitation is justifiably called a "particle state" we can say that acting with a few of them on the ground state will create a perfectly good multi-particle state. We will now define our incoming (created at $t = -\infty$) and outgoing (created at $t = + \infty$) multi-particle asymptotic states.
$$|\lambda_1 k_1, \ldots, \lambda_n k_n\rangle_{\rm in} \equiv \hat a^\dagger_{\lambda_1 k_1}(-\infty) \ldots \hat a^\dagger_{\lambda_n k_n}(-\infty) |\Omega\rangle \\|\lambda_1 k_1, \ldots, \lambda_n k_n\rangle_{\rm out} \equiv \hat a^\dagger_{\lambda_1 k_1}(+\infty) \ldots \hat a^\dagger_{\lambda_n k_n}(+\infty) |\Omega\rangle$$
The four-momenta $k_i$ will have masses $k_i^2 = m_{\lambda_i}^2$ and no $|\lambda_i k_i\rangle$ is allowed to equal another. Some people prefer to rescale $\hat \phi$ in order to hide those $C_\lambda$ prefactors but I will not. The nature of these prefactors will be explored much later. It is important to note that the total momenta of these states are approximately the sum of all $\mathbf{k}_i$, and the energy is approximately the sum of all $\omega_{\lambda_i k_i}$. This lends more credence to the notion that these are "multi-particle" states.
Now that we have successfully defined our incoming and outgoing asymptotic multi particle states and derived some important properties of our newly constructed asymptotic creation and annihilation operators, we have completed the framework necessary to derive the LSZ reduction formula. Using the properties defined here, you should be able to justifiably go through the steps as outlined in Srednicki.
To answer your doubt 2: In order to get our states to have the right properties, we needed them to be wave packets that are widely separated in the distant past and future. Therefore, these states are only approximately momentum and energy eigenstates (although you can get as close as you want). As they're not perfect energy eigenstates, some time evolution will occur. Your particles will start far apart, come together, interact, then (different) particles will leave.
TLDR: If you define creation and annihilation operators properly, using the Klein Gordon inner product with widely separated wave packets in the far past/future, you will get your actual particle states when acting with these operators on the true vacuum $|\Omega\rangle$. |
How can I show that the following language is not context-free using the pumping lemma?
$L=\left\{abc^{i_1}bc^{i_2}...bc^{i_{2m}}def^{j_1}ef^{j_2}..ef^{j_{2n}}ghq^{k_1}hq^{k_2}...hq^{k_o}\right\}$
Such that:
$m,n,o \geq 1;$
$m>n>o>0;$
$i_1,i_2,...,i_{2m} \geq 0;$
$j_1,j_2,...,j_{2n} \geq 0;$
$k_1,k_2,...,k_o \geq 0$
And how can I conclude from that $L=\left\{0^i1^j2^k|1\le \:i<j<k\right\}$ is not a context free language?
I have been struggling with it for many hours, would really appreciate an explanation I can follow and learn from. The examples given in class are simpler and not on that level, and I don't know which z to take and how to break it in order to deduct in a proof that L is not context free.
Could you please give a slow explanation so I could learn fast?
My attempt for the first part:
Proving by contradiction that L is not a context-free language: Assuming L is a context-free language, there should exist a pumping length P such that any string S with $|S| \geq P$ can be divided into 5 pieces (uvxyz) while obeying the pumping lemma rules. Because of the information in the question, I'll focus on the first part of the lemma, i.e.: $\forall i: uv^ixy^iz \in L$. The structure of a typical word from L will be: $S=abc^{p_1}bc^{p_2}...bc^{2p_i+2}def^{p_1}ef^{p_2}...ef^{2p_i}ghq^{p_1}...ghq^{2p_i-1}$. $vxy$ cannot contain c's, f's, and q's all at once. We'll divide it into the following cases based on vxy. I don't know how to divide it or how to continue; I would really appreciate your assistance with it. Very important to me
My attempt for the second part (I don't understand the material well enough to solve the first part; I will ask for your help with it):
Proving by contradiction that L is not a context-free language: Assuming L is a context-free language, there should exist a pumping length P such that any string S with $|S| \geq P$ can be divided into 5 pieces (uvxyz) while obeying the pumping lemma rules. Because of the information in the question, I'll focus on the first part of the lemma, i.e.: $\forall i: uv^ixy^iz \in L$. The structure of a typical word from L will be: $S=0^p1^p2^p$. $vxy$ cannot contain 0's, 1's, and 2's all at once. We'll divide it into the following cases based on vxy:
Doesn't contain 0's: pump down to obtain $uv^0xy^0z=uxz$; in this case, there are fewer 1's or 2's, so it is not in L. Contains 0's but not 2's: pump up to obtain $uv^2xy^2z$, meaning more 0's than 2's, so it is not in L. Contains no 2's: pump up to obtain $uv^2xy^2z$, meaning more 1's or 0's than 2's, so it is not in L.
Since each case was checked and each one led to a contradiction, we can conclude that $L=\left\{0^i1^j2^k|1\le \:i<j<k\right\}$ is not a context-free language, since it does not satisfy the pumping lemma.
Thank you very much |
I spent a lot of time in this course explaining two big ideas:
Adjoint functors. We've focused a lot on the simplest of categories: preorders. Pairs of adjoint functors between these are also called Galois connections, and we first met them in Lecture 4. In Lecture 6 we saw that a left adjoint is a 'best approximation from above' to the possibly nonexistent inverse of a monotone function between preorders, while a right adjoint is a 'best approximation from below'. Much later, starting in Lecture 47, we looked at adjoint functors between categories in general. We saw that the pattern persists: left adjoints are 'liberal' while right adjoints are 'conservative'.
Compact closed categories. In Lecture 67, in our study of feasibility relations, we began looking at caps and cups. We saw these allow us to describe feedback, or, more generally, the process of 'bending back' an input to some process and turning it into an output - or vice versa. In Lecture 71 we saw that caps and cups exist - and obey the all-important snake equations - in any category of enriched profunctors. And in Lecture 74, we saw this works in any 'compact closed' category. Morphisms in a compact closed category can be drawn as string diagrams, which we can manipulate just like boxes with wires coming in and out! In particular, we can 'bend back' the wires.
These are both great ideas... but amazingly, they are two aspects of the same idea!
To see this, start with a pair of adjoint functors:
$$ F \colon \mathcal{A} \to \mathcal{B}, \quad G \colon \mathcal{B} \to \mathcal{A} $$ By definition, there's a bijection between these sets:
$$ \alpha_{a,b} \colon \mathcal{B}(F(a),b) \to \mathcal{A}(a,G(b)) $$ for any objects \(a\) in \(\mathcal{A}\) and \(b\) in \(\mathcal{B}\). Moreover this is a natural isomorphism.
What can we do with this? Not much until we know some elements of these sets! So let's take \(b = F(a)\):
$$ \alpha_{a,F(a)} : \mathcal{B}(F(a),F(a)) \to \mathcal{A}(a,G(F(a))) $$ There's an obvious element of \(\mathcal{B}(F(a),F(a))\), namely the identity \(1_{F(a)}\). Our bijection maps this to some morphism from \(a\) to \(G(F(a))\), which we write as
$$ \eta_a \colon a \to G(F(a)) .$$ You get such a morphism for any \(a\). And using the fact that \(\alpha\) is natural, you can prove these morphisms define a natural transformation
$$ \eta \colon 1_{\mathcal{A}} \to G F $$
This is called the unit. (I'm sorry; that word is used for too many different things in mathematics.)
Amazingly, the unit is a lot like a cap. Why? Remember that when we have an object \(x\) in a compact closed category, the cap is a morphism
$$ \cap_x \colon I \to x \otimes x^\ast.$$ This resembles the unit, with \(x\) playing the role of \(G\), and \(x^\ast\) playing the role of \(F\). The surprise is that this resemblance is significant, not just superficial!
What about the cup? Well, we can take our bijection
$$ \alpha_{a,b} : \mathcal{B}(F(a),b) \to \mathcal{A}(a,G(b)) $$ and let \(a = G(b)\), getting
$$ \alpha_{G(b),b} : \mathcal{B}(F(G(b)),b) \to \mathcal{A}(G(b),G(b)) .$$ There's an obvious element of \( \mathcal{A}(G(b),G(b))\), namely the identity \(1_{G(b)}\). It must come from some morphism from \(F(G(b))\) to \(b\), which we write as
$$ \epsilon_b \colon F(G(b)) \to b, $$ and you can prove such morphisms define a natural transformation
$$ \epsilon \colon F G \Rightarrow 1_{\mathcal{B}} $$ called the counit. This should remind you of how any object \(x\) in a compact closed category has a cup:
$$ \cup_x \colon x^\ast \otimes x \to I .$$ So far our evidence for an analogy between the unit and counit and the cap and cup is pretty thin. The real test is the snake equation. If we can prove the unit and counit obey that, something real must be going on!
We can do it. Of course, first we need to state the snake equation for the unit and counit. I don't have room to do this here, so watch these short videos by my friends Eugenia Cheng and Simon Willerton, where they call the snake equations the 'triangle equations' - you'll see why. They start by defining an 'adjunction' to be a pair of functors \( F \colon \mathcal{A} \to \mathcal{B}\), \( G \colon \mathcal{B} \to \mathcal{A} \) equipped with a unit and counit \(\eta \colon 1_{\mathcal{A}} \to GF \), \( \epsilon \colon FG \to 1_{\mathcal{B}}\) obeying the triangle equations. Then they show this definition is equivalent to the definition of adjoint functors we've been using!
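(An aside not in the lecture: the unit, counit and triangle equations can be poked at very concretely. The Python sketch below uses the standard currying adjunction \(F(a) = a \times B \dashv G(b) = b^B\) and spot-checks the two triangle equations pointwise on a small sample set; all names are my own.)

```python
B = [0, 1, 2]   # a small stand-in for the object B

def unit(x):
    """eta_a : a -> G(F(a)),   x |-> (lambda y: (x, y))"""
    return lambda y: (x, y)

def counit(pair):
    """epsilon_b : F(G(b)) = (B -> b) x B -> b,   (g, y) |-> g(y)"""
    g, y = pair
    return g(y)

def F_on_morphism(f):
    """F sends f : a -> a' to the map (x, y) |-> (f(x), y)."""
    return lambda pair: (f(pair[0]), pair[1])

def G_on_morphism(f):
    """G sends f : b -> b' to the map g |-> (lambda y: f(g(y)))."""
    return lambda g: (lambda y: f(g(y)))

# First triangle ("snake") equation: epsilon_{F(a)} . F(eta_a) = identity on F(a) = a x B
for x in ["egg", "toast"]:
    for y in B:
        assert counit(F_on_morphism(unit)((x, y))) == (x, y)

# Second triangle equation: G(epsilon_b) . eta_{G(b)} = identity on G(b) = (B -> b),
# checked pointwise on B for a sample function g.
g = lambda y: y * y
g_back = G_on_morphism(counit)(unit(g))
assert all(g_back(y) == g(y) for y in B)

print("both triangle equations hold on the samples")
```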
The success of this analogy suggests that maybe we could use string diagrams to work with categories, functors and natural transformations. It's true! To learn how, watch these:
After setting up string diagrams for category theory, Simon describes adjunctions using string diagrams in part 3. You'll see exactly why the unit is like a cap and the counit like a cup - and you'll see the snake equations pop out at the end! In parts 4 and 5 he uses string diagrams to get monads from adjunctions. Monads are very popular in programming languages like Haskell, but this will give a completely different outlook on them.
I should warn you: all this is a different idea than using string diagrams to study enriched categories and enriched profunctors, as we'd been doing in Chapter 4. So don't get them mixed up. But everything fits together in the end - as you've probably seen, category theory keeps generalizing everything in order to unify it and eventually simplify it.
There's much more to say; you can see my own take on it by reading this:
You'll see how adjunctions and monads and compact closed categories all fit nicely into the framework of 2-categories. Just as you need categories to work efficiently with set-based mathematics, you need 2-categories to work efficiently with category-based mathematics. These days my students and I have been using 2-categories (and related gadgets like double categories) to study Markov processes, Petri nets and other kinds of networks.
I'm tempted to go on, but this course was meant to give you just a tiny taste of the grand meal of category theory and its many applications, so I will restrain myself and stop here. I've been getting very abstract, but next time I'll give you some suggestions to read more about applications. |
There's a good function going the other way, \\(f^{\ast}: PY \rightarrow PX\\), the **preimage** function, defined by
$$f^{\ast}(S) = \\{x \in X: f(x) \in S\\} \quad \textrm{for each } S \in PY .$$
Claim: this is right adjoint to the image function \\(f_{\ast}: PX \rightarrow PY\\).
Proof: \\(f_{\ast}(S) \subseteq T\\) means that \\(S\\) maps into \\(T\\), which means that \\(S\\) is included in the preimage of \\(T\\), i.e., \\(S \subseteq f^{\ast}(T)\\). |
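(A small illustration, not part of the original post: the adjunction \\(f_{\ast}(S) \subseteq T \iff S \subseteq f^{\ast}(T)\\) can be checked exhaustively on a toy example. The Python sketch below uses a made-up three-element set and function.)

```python
from itertools import chain, combinations

X = {1, 2, 3}
Y = {'a', 'b'}
f = {1: 'a', 2: 'a', 3: 'b'}          # a function X -> Y, written as a dict

def image(S):                          # f_* : PX -> PY
    return {f[x] for x in S}

def preimage(T):                       # f^* : PY -> PX
    return {x for x in X if f[x] in T}

def powerset(A):
    A = list(A)
    return [set(c) for c in
            chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))]

# Check the adjunction condition for every S in PX and every T in PY.
assert all((image(S) <= T) == (S <= preimage(T))
           for S in powerset(X) for T in powerset(Y))
print("f_* is left adjoint to f^* on this example")
```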
Equations for Estimating Creatinine Clearance or GFR
Considerations and Variations of Creatinine Clearance
Cockcroft-Gault 1976 1
Particularly for renal dosing of medications, the Cockcroft-Gault equation has been the long-standing gold standard for the estimation of creatinine clearance for decades. The original study was based on data from 249 male patients with stable renal function. The study used actual body weight, but mentioned that a correction factor of some kind should be used in patients with marked obesity or ascites.
$$
\\ CrCl =\frac{(140-Age)*(WeightInKg)}{72*SCr} *0.85\;(if\;female)
$$
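A direct transcription of the formula above into code might look like the following sketch (variable names are mine; it assumes age in years, weight in kg, and SCr in mg/dL):

```python
def cockcroft_gault_crcl(age_years, weight_kg, scr_mg_dl, is_female):
    """Cockcroft-Gault creatinine clearance (mL/min), per the 1976 equation above."""
    crcl = (140 - age_years) * weight_kg / (72 * scr_mg_dl)
    if is_female:
        crcl *= 0.85
    return crcl

# Example: a 60-year-old, 70 kg woman with SCr 1.0 mg/dL -> about 66 mL/min.
print(round(cockcroft_gault_crcl(60, 70, 1.0, is_female=True), 1))
```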
CKD-EPI 2
The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equation was developed as a follow-up to the MDRD equation in an attempt to be as accurate in describing renal function at lower GFR (less than 60 mL/min/1.73 m²), but more accurate at a higher GFR. The CKD-EPI equation was developed and validated retrospectively in 8,254 patients from 10 studies. The study included all patients age > 20 years old who were not pregnant and did not have renal failure (defined as an eGFR < 15 mL/min/1.73 m²). The data set included 45% women and 87% non-black patients.
$$
\\ GFR\;(mL/min/1.73 m^2) =\\ Black\;females,\;SCr\;\le\;0.7\;mg/dL:\\ 166 * (SCr/0.7)^{-0.329} * (0.993)^{Age}\\ Black\;females,\;SCr\;>\;0.7\;mg/dL:\\ 166 * (SCr/0.7)^{-1.209} * (0.993)^{Age}\\ Black\;males,\;SCr\;\le\;0.9\;mg/dL:\\ 163 * (SCr/0.9)^{-0.411} * (0.993)^{Age}\\ Black\;males,\;SCr\;>\;0.9\;mg/dL:\\ 163 * (SCr/0.9)^{-1.209} * (0.993)^{Age}\\ Non-black\;females,\;SCr\;\le\;0.7\;mg/dL:\\ 144 * (SCr/0.7)^{-0.329} * (0.993)^{Age}\\ Non-black\;females,\;SCr\;>\;0.7\;mg/dL:\\ 144 * (SCr/0.7)^{-1.209} * (0.993)^{Age}\\ Non-black\;males,\;SCr\;\le\;0.9\;mg/dL:\\ 141 * (SCr/0.9)^{-0.411} * (0.993)^{Age}\\ Non-black\;males,\;SCr\;>\;0.9\;mg/dL:\\ 141 * (SCr/0.9)^{-1.209} * (0.993)^{Age}
$$
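The piecewise structure above is easier to see in code. The following sketch (names mine, same unit assumptions as before) collapses the eight branches into a base coefficient, an SCr cutoff, and a low-SCr exponent:

```python
def ckd_epi_gfr(scr_mg_dl, age_years, is_female, is_black):
    """CKD-EPI eGFR in mL/min/1.73 m^2, transcribing the piecewise formula above."""
    if is_female:
        cutoff, low_exp = 0.7, -0.329
        base = 166 if is_black else 144
    else:
        cutoff, low_exp = 0.9, -0.411
        base = 163 if is_black else 141
    exponent = low_exp if scr_mg_dl <= cutoff else -1.209
    return base * (scr_mg_dl / cutoff) ** exponent * 0.993 ** age_years

# Example: non-black male, SCr 1.2 mg/dL, age 50 -> roughly 70 mL/min/1.73 m^2.
print(round(ckd_epi_gfr(1.2, 50, is_female=False, is_black=False), 1))
```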
Jelliffe 1973 (stable renal function) 3
DEPRECATED
Published as a "Letter to the Editor", the Jelliffe equation does not require a patient's height or weight because it describes renal function normalized to a body surface area of 1.73 m². While this was a landmark equation for its era, its use has become deprecated in favor of newer equations.
$$
\\ CrCl\;(mL/min*1.73\;m^2) = \frac{98 - 16*(\frac{Age-20}{20})}{SCr}\\ (CrCl\;is\;multiplied\;by\;0.9\;for\;female\;patients)
$$
Salazar-Corcoran 1988 4
DEPRECATED
This equation was specifically designed to measure creatinine clearance in obese patients (defined as a BMI ≥ 30 kg/m²). The equation is derived from a "fat free mass" equation and was shown to be superior to the Cockcroft-Gault and Jelliffe methods when using total body weight. Although interesting for historical reasons, the Salazar-Corcoran method has largely become deprecated in favor of the Cockcroft-Gault method with a body weight adjustment, such as the 40% adjustment factor equation.
$$
\\ CrCl\;(for\;men)\;=\\ \frac{(137-Age)*((0.285*WeightInKg) + (12.1*HeightInMeters^2) )}{51*SCr}\\ CrCl\;(for\;women)\;=\\ \frac{(146-Age)*((0.287*WeightInKg) + (9.74*HeightInMeters^2) )}{60*SCr}
$$
MDRD (four-variable) 5 , 6
DEPRECATED
The MDRD equation was originally developed in 1999 7 as a six-variable equation, but has since been updated to a simpler, four-variable equation in two variations (to reflect the conventional and IDMS laboratory methods). The MDRD equation is more accurate than the Cockcroft-Gault method (particularly when using total body weight), but it is rarely used for drug dosing because most medications are validated using the Cockcroft-Gault method.
The MDRD equation was only studied in patients with renal dysfunction (GFR < 60 mL/min/1.73 m²), and therefore it should not be used in patients with normal renal function. For this reason, the MDRD equation has become deprecated in favor of the CKD-EPI equation, which was developed similarly to the MDRD equation, but is able to accurately describe GFR in patients without renal dysfunction.
$$
\\ For\;IDMS-calibrated\;assays:\\ GFR\;(mL/min/1.73\;m^2) = 175 * (SCr)^{-1.154} * (Age)^{-0.203} \\ * (0.742\;if\;female) \\ * (1.210\;if\;African-American)\\ For\;non-IDMS\;assays:\\ GFR\;(mL/min/1.73\;m^2) = 186 * (SCr)^{-1.154} * (Age)^{-0.203} \\ * (0.742\;if\;female) \\ * (1.210\;if\;African-American)
$$
Equations for estimating unstable renal function
Most conventional, commonly used equations to estimate renal function require that patients have a stable renal function. Usually, this is defined as having two consecutive serum creatinine values, drawn at least 24 hours apart, within 20% of each other. Unfortunately, many hospitalized patients do not have stable renal function. For this reason, other equations have been developed to aid clinicians in estimating renal function for the purposes of drug dosing.
The two most common equations for estimating unstable renal function are the Jelliffe 1972 8 and Chiou 1975 9 methods. 10 Both of these equations lack the robust evidence of the equations for stable renal function and are poorly validated in large groups of patients. Generally speaking, these equations are developed using a one-compartment pharmacokinetic estimation model, and are less accurate when renal function is improving (rather than worsening). 11 Although the data are not compelling, these are the best equations available for this patient population.
$$
\\ (Men)\;E^{SS} = IdealBW * (29.3 - (0.203*Age))\\ (Women)\;E^{SS} = IdealBW * (25.1 - (0.175*Age))\\ SCr_{avg} = (SCr1+SCr2)/2\\ E^{SS}_{corr} = E^{SS} * (1.035 - (0.0337*SCr_{avg}))\\ E = E^{SS}_{corr} - \frac{4*IdealBW*(SCr_2 - SCr_1)}{\Delta Time\;(days)}\\ CrCl (mL/min/1.73 m^2) = \frac{E}{14.4*SCr_{avg}}
$$
$$
\\ CrCl\;(for\;men) =\\ \frac{2*IdealBW*(28-(0.2*Age))}{SCr_1 + SCr_2} \\ + \frac{2*(0.6*IdealBW)*(SCr_1-SCr_2)}{(SCr_1 + SCr_2)*\Delta Time\;(hrs)}\\ - (0.0286*0.6*IdealBW)\\ CrCl\;(for\;women) =\\ \frac{2*IdealBW*(22.4-(0.16*Age))}{SCr_1 + SCr_2} \\ + \frac{2*(0.6*IdealBW)*(SCr_1-SCr_2)}{(SCr_1 + SCr_2)*\Delta Time\;(hrs)} \\ - (0.0286*0.6*IdealBW)
$$
Creatinine Clearance (CrCl) versus Glomerular Filtration Rate (GFR)
Creatinine clearance (CrCl) is an estimate of Glomerular Filtration Rate (GFR); however, CrCl is slightly higher than true GFR because creatinine is secreted by the proximal tubule (in addition to being filtered by the glomerulus). The additional proximal tubule secretion falsely elevates the CrCl estimate of GFR. 12
Equations that express GFR (such as MDRD and CKD-EPI) express GFR in mL/min/1.73 m². For the purposes of drug dosing or estimating GFR in patients with body size that is very different than average, GFR can be non-normalized using the following equation: 12
$$
\\ BSA = \sqrt{\frac{(HeightInCm * WeightInKg)}{3600}}
\\ GFR\;(mL/min) = GFR\;(mL/min/\;1.73 m^2) * BSA / 1.73
$$
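As a sketch (names mine), the de-normalization step can be written as:

```python
import math

def bsa_mosteller(height_cm, weight_kg):
    """Body surface area (m^2) from the square-root formula above."""
    return math.sqrt(height_cm * weight_kg / 3600)

def denormalize_gfr(gfr_per_173, height_cm, weight_kg):
    """Convert a GFR in mL/min/1.73 m^2 to an absolute GFR in mL/min."""
    return gfr_per_173 * bsa_mosteller(height_cm, weight_kg) / 1.73

# Example: eGFR of 60 mL/min/1.73 m^2 in a 180 cm, 100 kg patient.
print(round(denormalize_gfr(60, 180, 100), 1))
```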
Adjustment Factors for Female Gender
Many equations have an adjustment factor to account for the fact that female patients have less muscle mass, and therefore produce less creatinine. Historically, the Cockcroft-Gault and Jelliffe equations used an arbitrary value of 0.85 or 0.9 as a correction factor, but this value was largely based on empiric estimates with limited data. Fortunately, newer data have shown that this correction factor is actually relatively accurate, with an "optimal" correction factor between 0.84 and 0.88 being the most appropriate for female patients. 13
Adjustment for Obesity 14
Obesity has been a long-standing problem in the estimation of renal function. Serum creatinine production is approximated based on lean body weight because muscle tissue (not fat) is responsible for creatine production. Furthermore, a change in total body mass does not increase the size of the kidney (or GFR) proportionally. Equations that do not correct or adjust for obesity risk overestimation of true renal function.
While there is still significant debate regarding the optimal method of controlling for obesity, it appears that using the Cockcroft-Gault equation with a 40% adjustment is the most appropriate method. In one of the largest studies on the topic to date, of nearly 3,000 overweight and obese patients, 14 the following conclusions can be drawn:
Actual body weight will significantly overestimate renal function
Ideal body weight will significantly underestimate renal function
The LBW2005 equation, while initially very promising, 15, 16 significantly underestimates renal function
For all classes of obesity (overweight, obese, and morbid obesity), the Cockcroft-Gault equation with a 40% adjustment proved to consistently offer the most accurate estimate of creatinine clearance (often within about 5 mL/min of accuracy)
There are equations that report GFR as a value normalized to body surface area (mL/min/1.73 m²). While these may appear to circumvent the issue of obesity, these values need to be converted to a non-normalized GFR (mL/min) for the purposes of drug dosing. In the process of conversion, however, the non-normalized value will also overestimate GFR in obese patients.
Cockcroft-Gault 40% Obesity Adjustment
The most accurate equation for creatinine clearance in obese patients is the Cockcroft-Gault equation with a 40% adjustment factor. 14 This equation is most appropriate for patients who are more than 20-30% above their ideal body weight. 17 In essence, this correction accounts for 40% of body mass above a patient's "ideal" body weight:
$$
\\ Adjusted\;weight = IdealBW + 0.4*(ActualBW-IdealBW)
$$
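Combining this adjusted weight with the Cockcroft-Gault equation from above gives, as a sketch (ideal body weight is taken as an input rather than computed, since the Devine formula itself is discussed but not written out below):

```python
def adjusted_body_weight(actual_bw_kg, ideal_bw_kg):
    """Cockcroft-Gault 40% obesity adjustment: IBW + 0.4 * (ABW - IBW)."""
    return ideal_bw_kg + 0.4 * (actual_bw_kg - ideal_bw_kg)

def cockcroft_gault_obese(age_years, actual_bw_kg, ideal_bw_kg, scr_mg_dl, is_female):
    """Cockcroft-Gault CrCl (mL/min) using the 40%-adjusted weight, as recommended above."""
    weight = adjusted_body_weight(actual_bw_kg, ideal_bw_kg)
    crcl = (140 - age_years) * weight / (72 * scr_mg_dl)
    return crcl * 0.85 if is_female else crcl

# Example: 55-year-old man, actual 120 kg, ideal 75 kg, SCr 1.1 mg/dL.
print(round(cockcroft_gault_obese(55, 120, 75, 1.1, is_female=False), 1))
```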
Ideal and Lean Body Weight (Devine 1974 and LBW2005)
Historically, the Devine 1974 equation 18 has been used to estimate fat-free, ideal, or lean body weight (all terms generally meaning the same thing). This equation was not scientifically derived or validated, 15 but is extensively used in medicine. A newer equation, called LBW2005, 19 may be a more promising estimation of lean body weight and has been derived and validated with actual patient data.
Rounding Creatinine in the Elderly
Some practitioners routinely round the serum creatinine of elderly patients (eg, > 60 years) to a value of 1 mg/dL in an effort to control for a reduced muscle mass. Intuitively, this practice does not make sense because rounding a serum creatinine of 0.3 mg/dL (230% increase) is much different than rounding a value of 0.8 mg/dL (25% increase). This practice becomes even more inconsistent when an elderly patient's serum creatinine is already above 1 mg/dL. The literature does not support this practice as it often results in an underestimation of true renal function. 20, 21 If any correction factor is used, it is likely that a percent adjustment, similar to underweight patients, would be the most appropriate; however, such a correction factor has not been studied in elderly patients.
Rounding Creatinine in Underweight Patients
In underweight patients, a low serum creatinine may be more reflective of a decrease in production rather than an increased rate of renal elimination. Similarly to elderly patients, clinicians may be tempted to round creatinine in underweight patients to account for less muscle mass; however, this practice is not supported by the literature. 22 The most accurate method to control for underweight patients is to multiply the patient's Cockcroft-Gault value by an adjustment factor of 0.69 (regardless of whether the patient's serum creatinine is above or below 1 mg/dL). This correction factor was shown to be more precise and less biased than rounding or making no adjustment.
Medications that Modify Serum Creatinine
Because serum creatinine undergoes tubular secretion, any medications that interfere with this process will falsely elevate the patient's serum creatinine; however, this will not impact the patient's true GFR. The following medications have been shown to falsely elevate serum creatinine: 12, 20, 23
Cefoxitin
Cimetidine
Cisplatin
Flucytosine
Trimethoprim
Populations who are Difficult to Estimate
Certain patient groups have dramatically different serum creatinine production or elimination compared to the normal patient population. The following groups are notoriously difficult to estimate true renal function:
Amputation - Falsely low serum creatinine due to less production from muscle mass
Burn injury - Increased GFR
Cirrhosis - Falsely low serum creatinine due to less muscle mass and reduced hepatic conversion of creatine to creatinine
Cystic fibrosis - Increased GFR
Muscle disorders - Muscular dystrophy and other muscle disorders that can cause cachexia
Pregnancy - Difficult to estimate lean body mass, increased GFR
Unstable renal function - Equations used to estimate unstable renal function are very old and not validated in a large patient population
Impact of IDMS
There are primarily two laboratory methods for measuring serum creatinine: a number of conventional (older) methods (eg, alkaline picrate), and the newer IDMS method. The conventional methods have a positive bias (falsely elevated by up to 20%) because they detect non-creatinine chromogens. 24 The conventional assay method is most susceptible to bias when serum creatinine is within the normal range. The NKDEP guidelines 25 recommend that all laboratories convert their systems to use the newer, more accurate IDMS method. According to the NKDEP, almost all laboratories are expected to convert to the IDMS method by the end of 2010.
Note that this calculator automatically converts to and from IDMS as indicated based on the CrCl/GFR equation. All equations before the MDRD equation use non-IDMS creatinine values, the MDRD equation has two equations for either assay, and the CKD-EPI equation is only standardized for IDMS. The following equations are used to convert between IDMS and non-IDMS: 26
$$
\\ Conventional\;SCr\;(mg/dL) = (IDMS\;SCr)*1.065 + 0.067
\\ IDMS\;SCr\;(mg/dL) = ((Conventional\;SCr)-0.067)/1.065
$$
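These two conversions are exact inverses of each other, as a quick sketch (names mine) confirms:

```python
def idms_to_conventional(scr_idms):
    """Conventional (older assay) SCr from an IDMS-calibrated value, per the formula above."""
    return scr_idms * 1.065 + 0.067

def conventional_to_idms(scr_conventional):
    """IDMS-calibrated SCr from a conventional value (inverse of the above)."""
    return (scr_conventional - 0.067) / 1.065

# Round-trip check on a sample value.
scr = 1.0
assert abs(conventional_to_idms(idms_to_conventional(scr)) - scr) < 1e-12
print(idms_to_conventional(scr))   # 1.132
```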
You may specify whether you are entering serum creatinine as an IDMS or 'conventional' assay by clicking the "Config" icon in the top right-hand corner of the page heading. |
Now let's look at a mathematical approach to resource theories. As I've mentioned, resource theories let us tackle questions like these:
Our first approach will only tackle question 1. Given \(y\), we will only ask: is it possible to get \(x\)? This is a yes-or-no question, unlike questions 2-4, which are more complicated. If the answer is yes we will write \(x \le y\).
So, for now our resources will form a "preorder", as defined in Lecture 3.
Definition. A preorder is a set \(X\) equipped with a relation \(\le\) obeying:
reflexivity: \(x \le x\) for all \(x \in X\);
transitivity: \(x \le y\) and \(y \le z\) imply \(x \le z\) for all \(x,y,z \in X\).
All this makes sense. Given \(x\) you can get \(x\). And if you can get \(x\) from \(y\) and get \(y\) from \(z\) then you can get \(x\) from \(z\).
What's new is that we can also combine resources. In chemistry we denote this with a plus sign: if we have a molecule of \(\text{H}_2\text{O}\) and a molecule of \(\text{CO}_2\) we say we have \(\text{H}_2\text{O} + \text{CO}_2\). We can use almost any symbol we want; Fong and Spivak use \(\otimes\) so I'll often use that. We pronounce this symbol "tensor". Don't worry about why: it's a long story, but you can live a long and happy life without knowing it.
It turns out that when you have a way to combine things, you also want a special thing that acts like "nothing". When you combine \(x\) with nothing, you get \(x\). We'll call this special thing \(I\).
Definition. A monoid is a set \(X\) equipped with:
an operation \(\otimes \colon X \times X \to X\),
an element \(I \in X\),
such that these laws hold:
the associative law: \( (x \otimes y) \otimes z = x \otimes (y \otimes z) \) for all \(x,y,z \in X\),
the left and right unit laws: \(I \otimes x = x = x \otimes I\) for all \(x \in X\).
You know lots of monoids. In mathematics, monoids rule the world! I could talk about them endlessly, but today we need to combine the monoids and preorders:
Definition. A monoidal preorder is a set \(X\) with a relation \(\le\) making it into a preorder, an operation \(\otimes : X \times X \to X\) and element \(I \in X\) making it into a monoid, and obeying:
$$ x \le x' \textrm{ and } y \le y' \textrm{ imply } x \otimes y \le x' \otimes y' .$$This last condition should make sense: if you can turn an egg into a fried egg and turn a slice of bread into a piece of toast, you can turn an egg and a slice of bread into a fried egg and a piece of toast!
You know lots of monoidal preorders, too! Many of your favorite number systems are monoidal preorders:
The set \(\mathbb{R}\) of real numbers with the usual \(\le\), the binary operation \(+: \mathbb{R} \times \mathbb{R} \to \mathbb{R} \) and the element \(0 \in \mathbb{R}\) is a monoidal preorder.
Same for the set \(\mathbb{Q}\) of rational numbers.
Same for the set \(\mathbb{Z}\) of integers.
Same for the set \(\mathbb{N}\) of natural numbers.
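(Not part of the lecture: if you like, you can spot-check the compatibility law for the first of these examples, \( (\mathbb{N}, \le, +, 0) \), by brute force over a small range. A Python sketch, with names of my own choosing:)

```python
# Check, over a small finite sample of the natural numbers, that
#   x <= x' and y <= y'  imply  x + y <= x' + y'.
from itertools import product

NUMS = range(6)

ok = all(x + y <= xp + yp
         for x, xp, y, yp in product(NUMS, repeat=4)
         if x <= xp and y <= yp)
print(ok)   # True
```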
Money is an important resource: outside of mathematics, money rules the world. We combine money by addition, and we often use these different number systems to keep track of money. In fact it was bankers who invented negative numbers, to keep track of debts! The idea of a "negative resource" was very radical: it took mathematicians over a century to get used to it.
But sometimes we combine numbers by multiplication. Can we get monoidal preorders this way?
Puzzle 60. Is the set \(\mathbb{N}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{N} \times \mathbb{N} \to \mathbb{N}\) and the element \(1 \in \mathbb{N}\) a monoidal preorder?
Puzzle 61. Is the set \(\mathbb{R}\) with the usual \(\le\), the binary operation \(\cdot : \mathbb{R} \times \mathbb{R} \to \mathbb{R}\) and the element \(1 \in \mathbb{R}\) a monoidal preorder?
Puzzle 62. One of the questions above has the answer "no". What's the least destructive way to "fix" this example and get a monoidal preorder?
Puzzle 63. Find more examples of monoidal preorders.
Puzzle 64. Are there monoids that cannot be given a relation \(\le\) making them into monoidal preorders?
Puzzle 65. A monoidal poset is a monoidal preorder that is also a poset, meaning
$$ x \le y \textrm{ and } y \le x \textrm{ imply } x = y $$
for all \(x ,y \in X\). Are there monoids that cannot be given any relation \(\le\) making them into monoidal posets?
Puzzle 66. Are there posets that cannot be given any operation \(\otimes\) and element \(I\) making them into monoidal posets? |
X is an arbitrary, non-empty set, B(X) the set of bounded functions $f:X\rightarrow \mathbb{R}$ and $||f||_\infty = \sup_{x\in X }|f(x)|$.
Is $(B(X),||.||_\infty )$ a Banach Algebra?
My attempt at showing that this is true:
Definition of a Banach Algebra: A normed space E with elements f, g, ... is called a normed algebra if it is an algebra and the multiplication satisfies, together with the norm: $$||fg||\le ||f||\cdot||g||$$
A normed algebra is a Banach algebra , if it is complete as a space (if it is a Banach space).
Definition of an Algebra: If K is a field and A a vector space equipped with a multiplication operation $A \times A \rightarrow A$, then A is an algebra if for $x,y,z \in A $ and scalars $a,b \in K$ it holds that: $$1. (x+y)\cdot z = xz+yz \\2: x\cdot(y+z)=xy+xz \\ 3: (ax)\cdot (by)=(ab)(x\cdot y)$$
In this case A is B(X) and x,y,z are bounded functions, $a,b\in \mathbb{R}$ and it fulfills (1-3) of the Algebra definition.
Now for the step from Algebra to normed Algebra one has to check the submultiplicativity : $$\sup_{x\in X}|f(x)g(x)| \le \sup _{x \in X} |f(x)|\sup_{x\in X}|g(x)|$$
How to show this ??? |
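(A sketch of the standard pointwise argument, not part of the original question.) For each fixed $x\in X$,
$$|f(x)g(x)| = |f(x)|\,|g(x)| \le \Big(\sup_{y\in X}|f(y)|\Big)\Big(\sup_{y\in X}|g(y)|\Big) = ||f||_\infty\cdot||g||_\infty ,$$
and since this bound holds for every $x$, taking the supremum over $x\in X$ on the left-hand side gives $||fg||_\infty \le ||f||_\infty\cdot||g||_\infty$.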